The following automaton almost recognizes $a^nb^n$: it accepts only words of the form $a^nb^m$ and returns $(n, m)$. One still has to check that $n = m$.
zmin2 = vcsn.context('lal<char(ab)>, lat<zmin, zmin>')
zmin2
ab = zmin2.expression('(<1,0>a)*(<0,0>b)* & (<0,0>a)*(<0,1>b)*')
ab
a = ab.automaton()
a
print(a.shortest(len = 4).format('list'))
doc/notebooks/Contexts.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
The interpretation of the following monster is left to the reader as an exercise:
vcsn.context('''nullableset< lat< lal<char(ba)>,
                                  lat< lan<char(vu)>,
                                       law<char(x-z)> > > > ,
               lat<expressionset<nullableset<lat<lan<char(fe)>, lan<char(hg)>>>,
                                 lat<r, q>>,
                   lat<b, z> > ''')
We need a UDF that returns a MapType of string:string, which explode can then expand.
import json
from pyspark.sql.functions import col, explode, udf
from pyspark.sql.types import MapType, StringType

def json_to_map(s):
    """Convert a string containing JSON into a dictionary; skip flattening for now."""
    try:
        return json.loads(s)
    except (TypeError, ValueError):  # malformed or missing JSON
        return {}

json_to_map_udf = udf(json_to_map, MapType(StringType(), StringType()))

print(json_to_map('{ "solr_long_lat": "-5.87403,30.49728", '
                  '"related_record_types": "PreservedSpecimen|PreservedSpecimen", '
                  '"related_record_links": "YPM-IP-530950|YPM-IP-530951" }'))

idb_map = idb_dyn.withColumn("props_map", json_to_map_udf(col("props_str")))
idb_map.select(col("props_map")).show(10, truncate=False)

idb_triples = (idb_map
               .select(col("uuid"),
                       col("recordset"),
                       col("institutioncode"),
                       explode(col("props_map")).alias("key", "value")))
idb_triples.cache()
idb_triples.count()
idb_triples.show(20, truncate=False)

(idb_triples
 .groupBy(col("key"))
 .count()
 .sort(col("count"), ascending=False)
 .limit(1000)
).toPandas()

(idb_triples
 .groupBy(col("institutioncode"))
 .count()
 .sort(col("count"), ascending=False)
 .limit(1000)
).toPandas()

(idb_triples
 .filter(col("key") == "NSF_TCN")
 .count()
)

(idb_triples
 .filter(col("key") == "NSF_TCN")
 .groupBy(col("institutioncode"), col("value"))
 .count()
 .sort(col("count"), ascending=False)
 .limit(1000)
).toPandas()
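The parsing fallback can be sanity-checked without a Spark session; this minimal sketch repeats just the plain-Python part of the UDF, showing that valid JSON becomes a dict while anything else (non-JSON text, a missing value) becomes an empty dict:

```python
import json

def json_to_map(s):
    """Parse a JSON string into a dict; fall back to {} on bad input."""
    try:
        return json.loads(s)
    except (TypeError, ValueError):
        return {}

good = json_to_map('{"solr_long_lat": "-5.87403,30.49728"}')
bad = json_to_map("related_record_types=PreservedSpecimen")  # not JSON
empty = json_to_map(None)  # missing value
```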
04_Parsing_Dynamic_Properties_into_Traits.ipynb
bio-guoda/guoda-examples
mit
Now let's write this out, then go back and join to the main DF for some summaries.
#(idb_triples
# .write
# .parquet("/tmp/idigbio-20171014T023306-json-triples.parquet")
#)
How much more information might we be able to find in records that are not JSON-parsable?
from pyspark.sql.functions import avg, length

(idb_triples
 .select(length(col("key")).alias("len_key"))
 .agg(avg(col("len_key")))  # a DataFrame has no .avg(); aggregate instead
 .show()
)

#joined = idb_dyn.join(idb_triples, idb_dyn["uuid"] == idb_triples["uuid"], "inner")
#joined.show(3, truncate=False)
#joined.count()
Import the os module (operating system). It has the method listdir(), which returns a list of the files within a directory. We apply os.listdir() to get a list of all files and immediately filter it to keep only the files that contain '.xls' and don't start with '~'.
import os

os.listdir('.')

xls_files = [f for f in os.listdir() if ".xls" in f and not f.startswith('~')]
print(xls_files)
exercises/Feb28/worldPopulation.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
Next we'll read the data from the Excel workbook
import openpyxl  # Excel file functionality

wb = openpyxl.load_workbook(xls_files[0])
type(wb)
wb.get_sheet_names()  # show the sheet names inside the Excel workbook
wb
We can now treat the workbook as a dictionary in which each key is the name of a worksheet
wsPop = wb['Population']
type(wsPop)
wsPop is an object. It has a number of methods and properties, which can be accessed by typing the name, followed by a dot, then pressing Tab. To get more information, follow it with a question mark. On the Mac, wsPop.rows is a tuple of tuples. On some other systems it is a generator, which yields the tuples one after the other in a loop like
for r in wsPop.rows:
    pass  # do something
but with a generator, one cannot call len(wsPop.rows) or index it like wsPop.rows[3]. However, it's easy to first generate a list of tuples with a comprehension:
data = [r for r in wsPop.rows]
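To see the generator limitation concretely, here is a self-contained sketch with a stand-in generator (no Excel needed): indexing a generator fails with a TypeError, while the materialized list supports both len() and indexing.

```python
def fake_rows():
    """Hypothetical stand-in for wsPop.rows when it is a generator."""
    for i in range(5):
        yield (i, i * 10)

gen = fake_rows()
try:
    gen[3]  # a generator cannot be indexed...
    indexable = True
except TypeError:
    indexable = False

data = [r for r in fake_rows()]  # ...but a list built from it can
```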
and then continue to work with data instead of wsPop.rows. So if you have trouble indexing wsPop.rows, first generate a list as shown above and use it wherever you see wsPop.rows below.
cel = wsPop.rows[0][3]  # may not work if wsPop.rows is a generator on your system rather than a tuple of tuples
cel = data[0][3]        # data is the list of tuples generated above
cel.value
We can get the contents of the sheet as a series of rows using the attribute rows, and then show an arbitrary value:
print(type(wsPop.rows))  # shows that wsPop.rows is a tuple of tuples (on this system)
print(len(wsPop.rows))   # shows the number of rows
print(wsPop.rows[3][1])  # shows some cell
wsPop.rows[3][1].value   # shows the value of some cell

for r in wsPop.rows:
    print(r[1].value)
Turn the rows property into a list of rows, each row holding the values of its cells. We can do that with a list comprehension nested inside another list comprehension: for each row we build a list of the values of its cells. The result is a list of lists, produced in one line:
data = [[c.value for c in r] for r in wsPop.rows]
data[15][1]

# Show the first 5 lines
for i in range(5):
    print(data[i])
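The nested comprehension can be illustrated without a workbook; Cell below is a hypothetical stand-in for openpyxl's cell objects, which expose their content through .value:

```python
class Cell:
    """Hypothetical stand-in for an openpyxl cell."""
    def __init__(self, value):
        self.value = value

rows = [[Cell("Country"), Cell("Population")],
        [Cell("Italy"), Cell(60)],
        [Cell("Spain"), Cell(46)]]

# Inner comprehension: the values of one row; outer: over all rows
data = [[c.value for c in r] for r in rows]
```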
To index the data directly, we don't want a list of lists but a dictionary with the country name as key, such that each country's data is kept in a dictionary whose keys are obtained from the headers in the first two rows of the Excel file, as shown above.
hdrs = data[0]
dims = data[1]
print()
print(hdrs)
print()
print(dims)
print()
Now glue together the hdrs and the dims, and filter out the None texts. All hdrs are strings, but not all dims are: -2017, for instance, was turned into a number. So when gluing h and d together into a new string below, we have to use h + str(d). The combined headers are obtained with a list comprehension that also removes the text 'None' from the dims wherever it turns up:
from pprint import pprint

hdrs = [(h + str(d)).replace('None', '') for h, d in zip(hdrs, dims)]
pprint(hdrs)

h = data[13][:]
print(h)
name = h.pop(1)
print(name)
print(h)
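The zip-and-replace idiom can be checked in isolation; the header values below are illustrative, not the exact spreadsheet contents:

```python
hdrs = ['Country', 'Population-', 'Fert.Rate']
dims = [None, 2017, None]

# str(None) gives 'None', which the replace() then strips out again
merged = [(h + str(d)).replace('None', '') for h, d in zip(hdrs, dims)]
```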
We could generate a dict with the country names as keys, where each cntry[key] is a list of items. We cannot use a dict comprehension here because we need to pop the country value from each row to get the key for that row. Hence a for loop is used:
cntry = dict()  # empty dict
for i in range(2, len(wsPop.rows)):
    row = [c.value for c in wsPop.rows[i]]
    cname = row.pop(1)
    cntry[cname] = row  # enter the key cname and the value row into the dict
cntry
Now we can get the data of any cntry like so:
cntry['Italy']
However, this is not convenient enough: we can't see what the figures in the list mean. Therefore, we should use a dictionary for the contents of each cntry, with the fields as keys. These fields are obtained by popping the second item from the hdrs list:
print(hdrs)
hdrs.pop(1)
print()
print(hdrs)
Now generate the dict again, with the contents of each cntry itself a dict of its fields:
cntry = dict()
for i in range(2, len(wsPop.rows)):  # skip the first two lines, which are headers
    row = [c.value for c in wsPop.rows[i]]  # turn the Excel row into a list
    cname = row.pop(1)  # pop off the country name, which becomes the key
    cntry[cname] = {fld: v for fld, v in zip(hdrs, row)}  # use a dict comprehension
Now the contents of an arbitrary country look like this: a dict with fields and values.
cntry['Netherlands']['Population-2017']
Let's now compute the total population of the world by summing over the countries.
totPop2017 = 0.  # start out with zero
for c in cntry.keys():
    totPop2017 += cntry[c]['Population-2017']  # get the value directly by indexing the field

print('Total population in the world is {:.2f} billion'.format(totPop2017 / 1e9))

# Now compute the total population in 30 years, using the current growth rates
popWorld = [0 for i in range(1, 31)]
for k in cntry.keys():
    pc = cntry[k]['Population-2017']  # country population 2017
    try:
        frc = cntry[k]['Fert.Rate'] / 100.  # growth rate fraction
    except TypeError:  # needed in case the rate is None
        frc = 0  # simply use 0 in those cases
    # for the country
    popCntry = [pc * (1 + frc) ** i for i in range(1, 31)]
    # for the entire world
    popWorld = [pc + pw for pc, pw in zip(popCntry, popWorld)]

for i, pw in enumerate(popWorld):
    yr = 2017 + i + 1
    print('popWorld [{}] = {:5.2f} billion'.format(yr, pw / 1e9))
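The projection applied per country above is plain compound growth; a minimal sketch (with made-up numbers) shows the formula in isolation:

```python
def project(pop, rate_pct, years):
    """pop * (1 + rate)**i for each of the coming `years` years."""
    frac = rate_pct / 100.0
    return [pop * (1 + frac) ** i for i in range(1, years + 1)]

proj = project(1000.0, 10.0, 3)  # 10% yearly growth for 3 years
```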
Population per continent. We don't have a Continent field associated with the countries, but we do have a list of countries and their continents in a second worksheet:
wscont = wb['CountriesByContinent']  # read it
print(wscont)  # shows it's a worksheet object
print(wscont.rows[0])  # shows it's a tuple of two cell objects
wscont.rows[0][0].value, wscont.rows[0][1].value  # turned into a tuple of two strings
We can immediately, in a single line, turn this worksheet into a dictionary named cont that has the country as key and the continent (a string) as value.
cont = {v.value: k.value for k, v in wb['CountriesByContinent']}
pprint(cont)  # notice the key: value pairs separated by commas
Show for a few countries in which continent they are:
for c in ['Bahamas', 'Spain', 'Morocco', 'Honduras', 'Cambodia']:
    print('{:20} lies in {}'.format(c, cont[c]))
Adding the field Continent to the cntry dict. We'd like to add the field Continent to our cntry dict using the cont dict. This would be easy if the country names in both dicts were exactly the same. Let's see if this is the case. We can check it using set logic: convert the keys of the cntry dict to a set and do the same with those of the cont dict. First step: which countries are in the cont dict but not in the cntry dict?
df_cont_cntry = set(cont.keys()) - set(cntry.keys())
pprint(df_cont_cntry)
print("\n{} countries are in dict `cont` that are not in dict `cntry`".format(len(df_cont_cntry)))
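Set differences are asymmetric, which is why both directions are checked here; a tiny sketch with hypothetical names makes the point:

```python
cont_keys = {'Italy', 'Espana', 'France'}   # one spelling
cntry_keys = {'Italy', 'Spain', 'France'}   # another spelling

in_cont_only = cont_keys - cntry_keys   # names appearing only in cont
in_cntry_only = cntry_keys - cont_keys  # names appearing only in cntry
```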
And likewise: which are in the cntry dict that are not in the cont dict?
df_cntry_cont = set(cntry.keys()) - set(cont.keys())
pprint(df_cntry_cont)
print("\n{} countries are in dict `cntry` that are not in dict `cont`".format(len(df_cntry_cont)))
The set of countries that are in cont but not in cntry obviously have differently spelled names. Probably the easiest fix is to take the names from cntry that are not in cont, look up the continent in which each of these countries lies, and use that to supplement the missing countries. We then don't have to worry about the misspelled names. Using a list of missing names with their continent to complete the cntry dict: this list of missing countries with their continent is in the sheet missing of our workbook.
# notice that the columns in this Excel sheet are in columns 2 and 3, not in 1 and 2!
# We construct the dict in one line, using a single dict comprehension
missing = {rw[2].value: rw[1].value for rw in wb['missing']}  # don't need .rows
pprint(missing)
Now we can complete our cntry dict with a Continent attribute for every country in it by using the dict cont and the dict missing. Let's just join them, like so:
for k in missing.keys():
    cont[k] = missing[k]
And then add the continent to the cntry dict
for k in cntry.keys():
    cntry[k]['Continent'] = cont[k]
Now we can print the country name with its continent next to it:
for k in cntry.keys():
    print("{:30} lies in {}".format(k, cntry[k]['Continent']))
Population growth per continent. It is now possible to compute and show the population and its growth per continent. The first step is to extract the continents from the dict using set logic, i.e. a set comprehension. The result is a set with the unique values of the Continent field.
continents = {cntry[k]['Continent'] for k in cntry.keys()}  # set comprehension
print(continents)
It's then possible to compute the future population by selecting the countries of a continent to do the computation on:
continent = 'Europe'

# population and fertility rate for the countries of this continent:
popCont = [(cntry[k]['Population-2017'], cntry[k]['Fert.Rate'])
           for k in cntry.keys() if cntry[k]['Continent'] == continent]

print("The number of countries in {} is {}".format(continent, len(popCont)))
The last-but-one step is to compute the population for the coming years in each continent. We make a dictionary pTot with the continent as key and the list of future population values as value:
pTot = dict()  # empty dict
Nyr = 50  # number of years to predict

print("The predicted population over the next {} years is:".format(Nyr))
for c in continents:
    pTot[c] = [0 for i in range(Nyr)]  # start with an empty total for each continent
    # generate a list of (Pop, fertility rate) for each country in this continent
    popCont = [(cntry[k]['Population-2017'], cntry[k]['Fert.Rate'])
               for k in cntry.keys() if cntry[k]['Continent'] == c]
    # compute each country's future population and add it to the continent total
    for p, fr in popCont:
        try:
            p = float(p)
            fr = float(fr)
            # population of the country in the coming years
            pcntry = [p * (1 + fr / 100.) ** i for i in range(1, Nyr + 1)]
            # add to the continent total
            pTot[c] = [pt + p for pt, p in zip(pTot[c], pcntry)]
        except TypeError:  # it crashes when the fertility rate is None; we ignore these few countries
            pass
    print("{:10s}".format(c), end="")  # print continent
    for p in pTot[c]:
        print("{:6.2f}".format(p / 1.0e9), end="")  # print population values
    print()
A better overview of the results would be columns, one per continent, with the numbers running vertically and the year as first column. Although the numbers are not naturally ordered this way, it can be done with little effort:
print("The predicted population in billions:")

continents = pTot.keys()
styear = 2018  # starting year

print("{:4s}".format("Year"), end="")
for c in continents:
    print("{:>12}".format(c), end="")
print()

for i in range(Nyr):
    print("{:4d}".format(styear + i), end="")
    for c in continents:
        print("{:12.2f}".format(pTot[c][i] / 1e9), end="")
    print()
Finally, make a plot of the growth curves:
import matplotlib.pyplot as plt

years = [2018 + i for i in range(Nyr)]
for c in continents:
    plt.plot(years, pTot[c], label=c)
plt.xlabel('year')
plt.ylabel('Population [billions]')
plt.title('Population development')
plt.legend(loc='best', fontsize='x-small')
plt.show()

#plt.pie?
We can also put the yearly results in a dict that has the year as key and, as value, a dict keyed by continent.
contPop = dict()
for yr in range(Nyr):
    contPop[2018 + yr] = {c: v for c, v in zip(continents, [pTot[c][yr] for c in continents])}

# show it
pprint(contPop[2025])
Then, finally, we could make pie charts of the population at, say, 4 points in time.
import numpy as np

fig, axs = plt.subplots(2, 2, sharex=True, sharey=True)

for ax, yr in zip(axs.ravel(), [2020, 2030, 2040, 2050]):
    ax.set_title(str(yr))
    x = np.array(list(contPop[yr].values())) / 1.0e9
    y = list(contPop[yr].keys())
    r = np.sqrt(np.sum(x)) / 4  # scale the radius with the total population
    ax.pie(x, labels=y, radius=r)
plt.show()
Information about the component. Passing the class name to the help function returns descriptions of the various methods and parameters:
help(CarbonateProducer)
notebooks/tutorials/carbonates/carbonate_producer.ipynb
landlab/landlab
mit
Examples. Example 1: carbonate growth on a rising continental margin under sinusoidal sea-level variation. In this example, we consider a sloping ramp that rises tectonically while sea level oscillates.
# Parameters and settings
nrows = 3  # number of grid rows
ncols = 101  # number of grid columns
dx = 1000.0  # node spacing, m
sl_range = 120.0  # range of sea-level variation, m
sl_period = 40000.0  # period of sea-level variation, y
run_duration = 200000.0  # duration of run, y
dt = 100.0  # time-step size, y
initial_shoreline_pos = 25000.0  # starting position of the shoreline, m
topo_slope = 0.01  # initial slope of the topography/bathymetry, m/m
uplift_rate = 0.001  # rate of tectonic uplift, m/y
plot_ylims = [-1000.0, 1000.0]

# Derived parameters
num_steps = int(run_duration / dt)
sin_fac = 2.0 * np.pi / sl_period  # factor for sine calculation of sea level
middle_row = np.arange(ncols, 2 * ncols, dtype=int)

# Grid and fields
#
# Create a grid object
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)

# Create sea level field (a scalar, a.k.a. a "grid field")
sea_level = grid.add_field("sea_level__elevation", 0.0, at="grid")

# Create elevation field and initialize as a sloping ramp
bedrock_elevation = topo_slope * (initial_shoreline_pos - grid.x_of_node)
elev = grid.add_field("topographic__elevation", bedrock_elevation.copy(), at="node")
# elev[:] = topo_slope * (initial_shoreline_pos - grid.x_of_node)

# Remember IDs of middle row of nodes, for plotting
middle_row = np.arange(ncols, 2 * ncols, dtype=int)

plot_layers(
    bedrock_elevation[middle_row],
    x=grid.x_of_node[middle_row],
    sea_level=grid.at_grid["sea_level__elevation"],
    x_label="Distance (km)",
    y_label="Elevation (m)",
    title="Starting condition",
    legend_location="upper right",
)

# Instantiate component
cp = CarbonateProducer(grid)

# RUN
for i in range(num_steps):
    cp.sea_level = sl_range * np.sin(sin_fac * i * dt)
    cp.produce_carbonate(dt)
    elev[:] += uplift_rate * dt

plot_layers(
    [
        elev[middle_row] - grid.at_node["carbonate_thickness"][middle_row],
        elev[middle_row],
    ],
    x=grid.x_of_node[middle_row],
    sea_level=grid.at_grid["sea_level__elevation"],
    color_layer="Blues",
    x_label="Distance (km)",
    y_label="Elevation (m)",
    title="Carbonate production",
    legend_location="upper right",
)
Example 2: tracking stratigraphy. Here we repeat the same example, except this time we use Landlab's MaterialLayers class to track stratigraphy.
# Track carbonate strata in time bundles of the below duration:
layer_time_interval = 20000.0

# Derived parameters and miscellaneous
next_layer_time = layer_time_interval
time_period_index = 0
time_period = "0 to " + str(int(layer_time_interval) // 1000) + " ky"

# Grid and fields
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
sea_level = grid.add_field("sea_level__elevation", 0.0, at="grid")
base_elev = grid.add_zeros("basement__elevation", at="node")
base_elev[:] = topo_slope * (initial_shoreline_pos - grid.x_of_node)
elev = grid.add_zeros("topographic__elevation", at="node")
middle_row = np.arange(ncols, 2 * ncols, dtype=int)
middle_row_cells = np.arange(0, ncols - 2, dtype=int)
carbo_thickness = grid.add_zeros("carbonate_thickness", at="node")
prior_carbo_thickness = np.zeros(grid.number_of_nodes)

# Instantiate component
cp = CarbonateProducer(grid)

# RUN
for i in range(num_steps):
    cp.sea_level = sl_range * np.sin(sin_fac * i * dt)
    cp.produce_carbonate(dt)
    base_elev[:] += uplift_rate * dt
    elev[:] = base_elev + carbo_thickness
    if (i + 1) * dt >= next_layer_time:
        time_period_index += 1
        next_layer_time += layer_time_interval
        added_thickness = np.maximum(
            carbo_thickness - prior_carbo_thickness, 0.00001
        )  # force a tiny bit of deposition to keep layers consistent
        prior_carbo_thickness[:] = carbo_thickness
        # grid.material_layers.add(added_thickness[grid.node_at_cell], age=time_period_index)
        grid.event_layers.add(added_thickness[grid.node_at_cell], age=time_period_index)
First get the layers we want to plot. In this case, plot the bottom and top layers as well as layers that correspond to sea level high stands. For the sinusoidal sea level curve we used, high stands occur every 400 time steps, with the first one being at time step 100.
layers = (
    np.vstack(
        [
            grid.event_layers.z[0],
            grid.event_layers.z[100::400],
            grid.event_layers.z[-1],
        ]
    )
    + grid.at_node["basement__elevation"][grid.node_at_cell]
)

plot_layers(
    layers,
    x=grid.x_of_node[grid.node_at_cell],
    sea_level=grid.at_grid["sea_level__elevation"],
    color_layer="Oranges_r",
    legend_location="upper right",
    x_label="Distance (km)",
    y_label="Elevation (m)",
    title="Carbonates colored by age of deposition (darkest = oldest)",
)
This simple function returns the text 'Hola' followed by the name we pass in; but since it has no control over the type of data the variable nombre may receive, the following calls are both equally valid:
print(saludo('Raul'))
print(saludo(1))
content/notebooks/MyPy-Python-Tipado-estatico.ipynb
relopezbriega/mi-python-blog
gpl-2.0
If, on the other hand, we enforced the type of the variable nombre so that it must always be a string, then the second call would no longer be valid, and we could detect the error easily before our program even runs. Obviously, to catch that second error and make our saludo function accept only a string argument, we could rewrite the function, adding a type check as follows:
def saludo(nombre):
    if type(nombre) != str:
        return "Error: el argumento debe ser del tipo String(str)"
    return 'Hola {}'.format(nombre)

print(saludo('Raul'))
print(saludo(1))
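An equivalent runtime guard, shown here as a sketch, uses isinstance, which is generally preferred over comparing type() directly because it also accepts subclasses of str:

```python
def saludo(nombre):
    """Greet, rejecting non-string arguments up front."""
    if not isinstance(nombre, str):
        return "Error: el argumento debe ser del tipo String(str)"
    return 'Hola {}'.format(nombre)

ok = saludo('Raul')
err = saludo(1)
```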
But a simpler solution than writing conditions to check the types of our variables and functions is to use MyPy.

MyPy

MyPy is a project that seeks to combine the benefits of dynamic typing with those of static typing. Its goal is to have the power and expressiveness of Python combined with the benefits of checking data types at compile time. Some of the benefits of using MyPy are:

Compile-time type checking: a static type system makes it easier to detect errors, with less debugging effort.
Easier maintenance: explicit type declarations act as documentation, making our code easier to understand and to modify without introducing new errors.
Growing a program from dynamic to static typing: we can start developing our programs with dynamic typing and, as they mature, migrate them to static typing very easily. This way we benefit not only from the convenience of dynamic typing during initial development, but also from the advantages of static types once the code grows in size and complexity.

Data types

These are some of the most common data types we find in Python:

int: integer of arbitrary size
float: floating-point number
bool: boolean value (True or False)
str: unicode string
bytes: 8-bit string
object: base class from which all Python objects derive
List[str]: list of string objects
Dict[str, int]: dictionary from strings to integers
Iterable[int]: iterable object containing only integers
Sequence[bool]: sequence of boolean values
Any: admits any value (dynamic typing)

The type Any and the constructors List, Dict, Iterable and Sequence are defined in the typing module that ships with MyPy.
Examples. For example, returning to the example at the beginning, we could rewrite the saludo function using MyPy so that the data types are explicit and can be checked at compile time.
%%writefile typeTest.py
import typing

def saludo(nombre: str) -> str:
    return 'Hola {}'.format(nombre)

print(saludo('Raul'))
print(saludo(1))
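The annotation syntax itself is plain Python 3: the interpreter records the hints on the function object but does not enforce them, which is exactly why mypy is needed for the actual check. A small sketch:

```python
def saludo(nombre: str) -> str:
    return 'Hola {}'.format(nombre)

# Annotations are stored, not enforced at runtime
greeting = saludo('Raul')
hints = saludo.__annotations__
```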
In this example we create a small script and save it in a file named 'typeTest.py'. The first line of the script imports the typing library that comes with MyPy, which adds the type-checking functionality. Then we simply run this script with the MyPy interpreter and see that it detects the type error in the second call to the saludo function.
!mypy typeTest.py
If we ran this same script with the Python interpreter, we would get the same results as at the beginning of this notebook; in other words, the syntax we used when rewriting our saludo function is perfectly valid Python code!
!python3 typeTest.py
Explicit typing for variables and collections. In the previous example we saw the syntax for assigning a type to a function, which uses Python 3's annotation syntax. If we wanted to assign a type to a variable, we could use the Undefined function that comes with MyPy.
%%writefile typeTest.py
from typing import Undefined, List, Dict

# Declare the variables' types
texto = Undefined(str)
entero = Undefined(int)
lista_enteros = List[int]()
dic_str_int = Dict[str, int]()

# Assign values to the variables
texto = 'Raul'
entero = 13
lista_enteros = [1, 2, 3, 4]
dic_str_int = {'raul': 1, 'ezequiel': 2}

# Try to assign values of other types
texto = 1
entero = 'raul'
lista_enteros = ['raul', 1, '2']
dic_str_int = {1: 'raul'}

!mypy typeTest.py
Another alternative MyPy offers for assigning a type to variables is to use comments; the previous example could then be rewritten as follows, with the same result:
%%writefile typeTest.py
from typing import List, Dict

# Declare the variables' types
texto = ''  # type: str
entero = 0  # type: int
lista_enteros = []  # type: List[int]
dic_str_int = {}  # type: Dict[str, int]

# Assign values to the variables
texto = 'Raul'
entero = 13
lista_enteros = [1, 2, 3, 4]
dic_str_int = {'raul': 1, 'ezequiel': 2}

# Try to assign values of other types
texto = 1
entero = 'raul'
lista_enteros = ['raul', 1, '2']
dic_str_int = {1: 'raul'}

!mypy typeTest.py
Get the pipeline graph data of the table. This will generate a pipeline graph file, in HTML format, under the pipeline_graph directory. It may take some time for this to run and generate.
def visualise_table_pipelines(table):
    pipeline_analysis.display_pipelines_of_table(table)

visualise_table_pipelines("data-analytics-pocs.public.bigquery_audit_log")
examples/bigquery-table-access-pattern-analysis/pipeline-output_only.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Deploy trained model to Cloud AI Platform
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {EXPORT_PATH}

%%bash
MODEL_NAME="babyweight"
VERSION_NAME="dnn"
MODEL_LOCATION=$EXPORT_PATH
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
    echo "The model named $MODEL_NAME already exists."
else
    # create model
    echo "Creating $MODEL_NAME model now."
    gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
    echo "Deleting the already existing model $MODEL_NAME:$VERSION_NAME ... "
    gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
    echo "Please run this cell again if you don't see a Creating message ... "
    sleep 2
fi
# TODO create model on Cloud AI Platform. Use python-version 3.5 and runtime-version 1.14
# https://cloud.google.com/sdk/gcloud/reference/ai-platform/versions/create
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create  # TODO complete the statement
quests/endtoendml/labs/3_keras_dnn.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Attribution: these examples are taken from the SciPy Tutorial. The scipy.optimize package provides several commonly used optimization algorithms. A detailed listing can be found by:
import scipy.optimize

help(scipy.optimize)
_posts/_code/RBC-DeepLearning-Notebooks/04 - Optimization.ipynb
hsaghir/hsaghir.github.io
mit
For Machine Learning, we are mainly interested in unconstrained minimization of multivariate scalar functions (typically where gradient information is available). In addition to several algorithms for unconstrained minimization of multivariate scalar functions (e.g. BFGS, Nelder-Mead simplex, Newton Conjugate Gradient, etc.) the module also contains: - Global (brute-force) optimization routines - Least-squares minimization (which we saw before in the Linear Algebra Notebook) - Scalar univariate function minimizers and root finders; and - Multivariate equation system solvers using a variety of algorithms Unconstrained minimization of multivariate scalar functions (minimize) The minimize function provides a common interface to unconstrained and constrained minimization algorithms for multivariate scalar functions. To demonstrate the minimization function, let's consider the problem of minimizing the Rosenbrock function of $N$ variables: $$ f\left(\mathbf{x}\right)=\sum_{i=1}^{N-1}100\left(x_{i}-x_{i-1}^{2}\right)^{2}+\left(1-x_{i-1}\right)^{2}.$$ The minimum value of this function is 0 which is achieved when $x_i=1$. Note that the Rosenbrock function and its derivatives are included in scipy.optimize. The implementations in the following provide examples of how to define an objective function as well as its Jacobian and Hessian functions. Nelder-Mead Simplex algorithm (method='Nelder-Mead') In the example below, the minimize routine is used with the Nelder-Mead simplex algorithm (selected through the method parameter):
import numpy as np from scipy.optimize import minimize def rosen(x): """The Rosenbrock function""" return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0) x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2]) res = minimize(rosen, x0, method='nelder-mead', options={'xtol': 1e-8, 'disp': True}) print(res.x)
_posts/_code/RBC-DeepLearning-Notebooks/04 - Optimization.ipynb
hsaghir/hsaghir.github.io
mit
The simplex method is a simple way to minimize a fairly well-behaved function. It only requires function evaluations and is a good choice for simple minimization problems. However, because it does not use any gradient evaluations, it may take longer to find the minimum. Broyden-Fletcher-Goldfarb-Shanno algorithm (method='BFGS') In order to converge more quickly to the solution, this routine uses the gradient of the objective function. If the gradient is not given by the user, then it is estimated using first-differences. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method typically requires fewer calls than the simplex algorithm even when the gradient must be estimated. To demonstrate this algorithm, the Rosenbrock function is used again. The gradient of the Rosenbrock function is the vector: $$ \begin{eqnarray} \frac{\partial f}{\partial x_{j}} & = & \sum_{i=1}^{N-1}200\left(x_{i}-x_{i-1}^{2}\right)\left(\delta_{i,j}-2x_{i-1}\delta_{i-1,j}\right)-2\left(1-x_{i-1}\right)\delta_{i-1,j} \\ & = & 200\left(x_{j}-x_{j-1}^{2}\right)-400x_{j}\left(x_{j+1}-x_{j}^{2}\right)-2\left(1-x_{j}\right). \end{eqnarray}$$ This expression is valid for the interior derivatives. Special cases are: $$ \begin{eqnarray} \frac{\partial f}{\partial x_{0}} & = & -400x_{0}\left(x_{1}-x_{0}^{2}\right)-2\left(1-x_{0}\right), \\ \frac{\partial f}{\partial x_{N-1}} & = & 200\left(x_{N-1}-x_{N-2}^{2}\right). \end{eqnarray} $$ A function which computes this gradient is:
# note the special handling of the exterior derivatives def rosen_der(x): xm = x[1:-1] xm_m1 = x[:-2] xm_p1 = x[2:] der = np.zeros_like(x) der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm) der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0]) der[-1] = 200*(x[-1]-x[-2]**2) return der
_posts/_code/RBC-DeepLearning-Notebooks/04 - Optimization.ipynb
hsaghir/hsaghir.github.io
mit
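Hand-coded gradients are easy to get wrong, so a quick sanity check compares `rosen_der` against central finite differences. Both functions are restated below so the snippet is self-contained:

```python
import numpy as np

def rosen(x):
    """The Rosenbrock function"""
    return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)

def rosen_der(x):
    xm = x[1:-1]
    xm_m1 = x[:-2]
    xm_p1 = x[2:]
    der = np.zeros_like(x)
    der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
    der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
    der[-1] = 200*(x[-1]-x[-2]**2)
    return der

def num_grad(f, x, h=1e-6):
    """Central finite-difference gradient, one coordinate at a time."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
err = np.max(np.abs(rosen_der(x0) - num_grad(rosen, x0)))
```

`err` should be tiny (well below 1e-3); a large value would indicate a bug in the analytic gradient.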
This gradient information is specified in the minimize function through the jac parameter:
res = minimize(rosen, x0, method='BFGS', jac=rosen_der, options={'disp': True}) print(res.x)
_posts/_code/RBC-DeepLearning-Notebooks/04 - Optimization.ipynb
hsaghir/hsaghir.github.io
mit
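As the introduction notes, the Rosenbrock function and its derivatives are included in scipy.optimize (`rosen`, `rosen_der`, `rosen_hess`). That also makes it easy to try the Newton Conjugate Gradient method mentioned earlier, which additionally uses the analytic Hessian:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
# Newton-CG consumes second-derivative information through the hess argument
res = minimize(rosen, x0, method='Newton-CG', jac=rosen_der, hess=rosen_hess,
               options={'xtol': 1e-8})
```

As with BFGS, `res.x` converges to the known minimizer, a vector of ones.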
Machine learning libraries (e.g. Tensorflow, Theano, Torch etc.) will provide a similar interface. When they provide auto-differentiation capabilities, you will not need to worry about writing the derivative function yourself. You will need to provide the "forward" computational graph and an objective. Black-box function optimization with skopt Scikit-Optimize, or skopt, is a simple and efficient library to minimize (very) expensive and noisy black-box functions. It implements several methods for sequential model-based optimization. Alternative libraries include Spearmint, PyBO, and Hyperopt. Black-box algorithms do not need any knowledge of the gradient. These libraries provide algorithms that are more powerful and scale better than the Nelder-Mead simplex algorithm above. Modern black-box (or sequential model-based) optimization algorithms are increasingly popular for optimizing the hyperparameters (user-tuned "knobs") of machine learning models. We'll talk more about this later. For now, just a brief example, which is taken from the skopt Bayesian Optimization tutorial:
import numpy as np import matplotlib.pyplot as plt from skopt import gp_minimize
_posts/_code/RBC-DeepLearning-Notebooks/04 - Optimization.ipynb
hsaghir/hsaghir.github.io
mit
Let's assume the following noisy function $f$:
noise_level = 0.1 def f(x, noise_level=noise_level): return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level
_posts/_code/RBC-DeepLearning-Notebooks/04 - Optimization.ipynb
hsaghir/hsaghir.github.io
mit
In skopt, functions $f$ are assumed to take as input a 1D vector $x$ represented as an array-like and to return a scalar $f(x)$:
# Plot f(x) + contours x = np.linspace(-2, 2, 400).reshape(-1, 1) fx = [f(x_i, noise_level=0.0) for x_i in x] plt.plot(x, fx, "r--", label="True (unknown)") plt.fill(np.concatenate([x, x[::-1]]), np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx], [fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])), alpha=.2, fc="r", ec="None") plt.legend() plt.grid()
_posts/_code/RBC-DeepLearning-Notebooks/04 - Optimization.ipynb
hsaghir/hsaghir.github.io
mit
Bayesian Optimization based on Gaussian Process regression is implemented in skopt.gp_minimize and can be carried out as follows:
res = gp_minimize(f, # the function to minimize [(-2.0, 2.0)], # the bounds on each dimension of x acq_func="EI", # the acquisition function n_calls=15, # the number of evaluations of f n_random_starts=5, # the number of random initialization points noise=0.1**2, # the noise level (optional) random_state=123) # the random seed
_posts/_code/RBC-DeepLearning-Notebooks/04 - Optimization.ipynb
hsaghir/hsaghir.github.io
mit
Accordingly, the approximated minimum is found to be:
print("x^*=%.4f, f(x^*)=%.4f" % (res.x[0], res.fun))
_posts/_code/RBC-DeepLearning-Notebooks/04 - Optimization.ipynb
hsaghir/hsaghir.github.io
mit
Did it work? If not, run the collapsed cell marked RUN ME and try again! Accelerators Colaboratory offers free GPU and TPU (Tensor Processing Unit) accelerators. You can choose your accelerator in Runtime > Change runtime type. The cell below is the standard boilerplate code that enables distributed training on GPUs or TPUs in Keras.
# Detect hardware try: tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection except ValueError: tpu = None gpus = tf.config.experimental.list_logical_devices("GPU") # Select appropriate distribution strategy for hardware if tpu: tf.config.experimental_connect_to_cluster(tpu) tf.tpu.experimental.initialize_tpu_system(tpu) strategy = tf.distribute.experimental.TPUStrategy(tpu) print('Running on TPU ', tpu.master()) elif len(gpus) > 0: strategy = tf.distribute.MirroredStrategy(gpus) # this works for 1 to multiple GPUs print('Running on ', len(gpus), ' GPU(s) ') else: strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU print('Running on CPU') # How many accelerators do we have? print("Number of accelerators: ", strategy.num_replicas_in_sync) # To use the selected distribution strategy: # with strategy.scope(): # # --- define your (Keras) model here --- # # For distributed computing, the batch size and learning rate need to be adjusted: # global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync # num replicas is 8 on a single TPU or N when running on N GPUs. # learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync
courses/fast-and-lean-data-science/colab_intro.ipynb
turbomanage/training-data-analyst
apache-2.0
Read the CSV file into a pandas DataFrame
df = pd.read_csv('top10mountain-lions.csv')
ChartHomework/ChartsHomework.ipynb
lexieheinle/jour407homework
mit
Chart 1 features a whitegrid style to help assign a number to each bar. Despine removes Tufte's chart junk.
sns.set_style("whitegrid") sns.barplot(x="count", y="COUNTY", data=df, color="#2ecc71") sns.despine(bottom=True, left=True)
ChartHomework/ChartsHomework.ipynb
lexieheinle/jour407homework
mit
Chart 2 features a white chart style. The y label is removed because the county label is redundant, and the x label is made more informative. Despine removes Tufte's chart junk. I think this chart is the most effective, although I would decrease the scale to every 10. Another weakness is that there is probably a locational aspect to the sightings, and that relationship isn't shown in a bar chart.
sns.set_style("white") sns.barplot(x="count", y="COUNTY", data=df, color="#f6a14e") plt.ylabel('') plt.xlabel('Mountain Lion Sightings') sns.despine(bottom=True, left=True)
ChartHomework/ChartsHomework.ipynb
lexieheinle/jour407homework
mit
Chart 3 features a dark grid style. Although the grid helps assign values to the bars, the color is distracting and tends toward chart junk.
sns.set_style("darkgrid") sns.barplot(x="count", y="COUNTY", data=df, color="#7fe5ba") plt.ylabel('') plt.xlabel('Mountain Lion Sightings') sns.despine(bottom=True, left=True)
ChartHomework/ChartsHomework.ipynb
lexieheinle/jour407homework
mit
Decode DICOM files for medical imaging <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/io/tutorials/dicom"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/dicom.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/io/blob/master/docs/tutorials/dicom.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/dicom.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview This tutorial shows how to use tfio.image.decode_dicom_image in TensorFlow IO to decode DICOM files with TensorFlow. Setup and Usage Download DICOM image The DICOM image used in this tutorial is from the NIH Chest X-ray dataset. The NIH Chest X-ray dataset consists of 100,000 de-identified images of chest x-rays in PNG format, provided by NIH Clinical Center and could be downloaded through this link. Google Cloud also provides a DICOM version of the images, available in Cloud Storage. In this tutorial, you will download a sample file of the dataset from the GitHub repo Note: For more information about the dataset, please find the following reference: Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, Ronald Summers, ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases, IEEE CVPR, pp. 3462-3471, 2017
!curl -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/dicom/dicom_00000001_000.dcm !ls -l dicom_00000001_000.dcm
site/en-snapshot/io/tutorials/dicom.ipynb
tensorflow/docs-l10n
apache-2.0
Install required Packages, and restart runtime
try: # Use the Colab's preinstalled TensorFlow 2.x %tensorflow_version 2.x except: pass !pip install tensorflow-io
site/en-snapshot/io/tutorials/dicom.ipynb
tensorflow/docs-l10n
apache-2.0
Decode DICOM image
import matplotlib.pyplot as plt import numpy as np import tensorflow as tf import tensorflow_io as tfio image_bytes = tf.io.read_file('dicom_00000001_000.dcm') image = tfio.image.decode_dicom_image(image_bytes, dtype=tf.uint16) skipped = tfio.image.decode_dicom_image(image_bytes, on_error='skip', dtype=tf.uint8) lossy_image = tfio.image.decode_dicom_image(image_bytes, scale='auto', on_error='lossy', dtype=tf.uint8) fig, axes = plt.subplots(1,2, figsize=(10,10)) axes[0].imshow(np.squeeze(image.numpy()), cmap='gray') axes[0].set_title('image') axes[1].imshow(np.squeeze(lossy_image.numpy()), cmap='gray') axes[1].set_title('lossy image');
site/en-snapshot/io/tutorials/dicom.ipynb
tensorflow/docs-l10n
apache-2.0
Decode DICOM Metadata and working with Tags decode_dicom_data decodes tag information. dicom_tags contains useful information such as the patient's age and sex, so you can use DICOM tags such as dicom_tags.PatientsAge and dicom_tags.PatientsSex. tensorflow_io borrows the same tag notation as the pydicom package.
tag_id = tfio.image.dicom_tags.PatientsAge tag_value = tfio.image.decode_dicom_data(image_bytes,tag_id) print(tag_value) print(f"PatientsAge : {tag_value.numpy().decode('UTF-8')}") tag_id = tfio.image.dicom_tags.PatientsSex tag_value = tfio.image.decode_dicom_data(image_bytes,tag_id) print(f"PatientsSex : {tag_value.numpy().decode('UTF-8')}")
site/en-snapshot/io/tutorials/dicom.ipynb
tensorflow/docs-l10n
apache-2.0
(a) Construct the mass matrix $M$ using p-type modal expansion with polynomial order $P=8$ and Gauss-Lobatto-Legendre quadrature $Q=10$.
e1 = elem.CommonJacobiElem([-1, 1], 9) M = numpy.where(e1.M != 0, 1, 0) pyplot.matshow(M, cmap=colors.ListedColormap(['white', 'black']));
solutions/chapter02/exercise03.ipynb
piyueh/SEM-Toolbox
mit
(b) Construct the mass matrix $M$ using p-type Lagrange nodal expansion and using Gauss-Lobatto-Legendre quadrature points as nodes. Check if the mass matrix is diagonal if using Gauss-Lobatto-Legendre quadrature to numerically evaluate the mass matrix.
e2 = elem.GaussLobattoJacobiElem([-1, 1], 9) M = numpy.where(e2.M != 0, 1, 0) pyplot.matshow(M, cmap=colors.ListedColormap(['white', 'black']));
solutions/chapter02/exercise03.ipynb
piyueh/SEM-Toolbox
mit
(c) Consider the projection problem, $u(x) = \sum_{i=0}^{P}u_i\phi_i(x) = f(x)$, where $f(x)=x^7$ and $-1 \le x \le 1$. The weighted residual equation will be: $$ \int_{-1}^{1} \phi_i(x)\left[\sum_{j=0}^{P}u_j\phi_j(x)\right]dx = \int_{-1}^{1} \phi_i(x)f(x)dx \text{, and }i=0\ to\ P $$ Using the mass matrices we built in (a) and (b), we can generate a system of linear equations: $$ \mathbf{M}\mathbf{u} = \mathbf{f} $$ Solve the unknowns $\mathbf{u}$ and compare the error of $u(x)=\sum_{i=0}^{P}u_i\phi_i(x)$ against $f(x)=x^7$. We first define a function to represent the behavior of $u(x)=\sum_{i=0}^{P}u_i\phi_i(x)$.
def u(x, expn, Ui): """return the result of approximations""" ans = numpy.array([ui * expn[i](x) for i, ui in enumerate(Ui)]) return ans.sum(axis=0)
solutions/chapter02/exercise03.ipynb
piyueh/SEM-Toolbox
mit
And then solve the $\mathbf{u}$ using the two different expansion in part (a) and (b).
qd = quad.GaussLobattoJacobi(10) f = poly.Polynomial(roots=[0, 0, 0, 0, 0, 0, 0]) e1 = elem.CommonJacobiElem([-1, 1], 9) e2 = elem.GaussLobattoJacobiElem([-1, 1], 9) fi1 = numpy.array([qd(e1.expn[i] * f) for i in range(9)]) ui1 = numpy.linalg.solve(e1.M, fi1) fi2 = numpy.array([qd(e2.expn[i] * f) for i in range(9)]) ui2 = numpy.linalg.solve(e2.M, fi2) e1.set_ui(ui1) e2.set_ui(ui2)
solutions/chapter02/exercise03.ipynb
piyueh/SEM-Toolbox
mit
Calculate the error between the interval $x \in [-1, 1]$.
x = numpy.linspace(-1, 1, 100) err1 = numpy.abs(e1(x) - f(x)) err2 = numpy.abs(e2(x) - f(x)) l2norm1 = numpy.linalg.norm(err1, 2) l2norm2 = numpy.linalg.norm(err2, 2) print(l2norm1) print(l2norm2)
solutions/chapter02/exercise03.ipynb
piyueh/SEM-Toolbox
mit
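The `elem` and `quad` modules are project-specific, but the projection idea can be checked with plain NumPy using the monomial basis {1, x, ..., x^8}, whose mass matrix has the closed form M_ij = 2/(i+j+1) when i+j is even (and 0 otherwise). Since f(x) = x^7 lies in the span of the basis, solving Mu = f should recover the unit coefficient vector e_7:

```python
import numpy as np

P = 8
n = P + 1   # 9 monomial basis functions: 1, x, ..., x^8

# Mass matrix M_ij = \int_{-1}^{1} x^i x^j dx = 2/(i+j+1) when i+j is even
M = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if (i + j) % 2 == 0:
            M[i, j] = 2.0 / (i + j + 1)

# Load vector f_i = \int_{-1}^{1} x^i x^7 dx, nonzero only for odd i
f = np.array([2.0 / (i + 8) if (i + 7) % 2 == 0 else 0.0 for i in range(n)])

u = np.linalg.solve(M, f)   # should be close to the unit vector e_7
```

The monomial mass matrix is Hilbert-like and poorly conditioned for large P, which is one reason the modal and nodal bases above are preferred in practice.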
(d) Now we consider only the the lifted problem. That is, we decouple the boundary mode and interior mode: $$u(x) = u^D(x) + u^{H}(x)$$ where $$u^{D}(x) = u(-1)\phi_0(x) + u(1)\phi_P(x) = u_0\phi_0(x) + u_P\phi_P(x)$$ and $$u^{H}(x) = \sum_{i=1}^{P-1} u_i\phi_i(x)$$ The weighted residual equation becomes $$ \int_{-1}^{1} \phi_i(x)\left[\sum_{j=1}^{P-1}u_j\phi_j(x)\right]dx = \int_{-1}^{1} \phi_i(x)f(x)dx - \int_{-1}^{1} \phi_i(x)\left[u_0\phi_0(x) + u_P\phi_P(x)\right]dx \text{, for } 1 \le i \le P-1 $$ or in the form of mass matrix: $$ \mathbf{M}{ij}\mathbf{u}_j = \mathbf{f}_i - u_0\mathbf{M}{i0} - u_{P}\mathbf{M}_{iP} \text{, for } 1 \le i,\ j \le P-1 $$
qd = quad.GaussLobattoJacobi(10) f = poly.Polynomial(roots=[0, 0, 0, 0, 0, 0, 0]) e1 = elem.CommonJacobiElem([-1, 1], 9) e2 = elem.GaussLobattoJacobiElem([-1, 1], 9) ui1 = numpy.zeros(9, dtype=numpy.float64) ui2 = numpy.zeros(9, dtype=numpy.float64) ui1[0] = ui2[0] = f(-1) ui1[-1] = ui2[-1] = f(1) fi1 = numpy.array([e1.expn[i](qd.nodes) * f(qd.nodes) * qd.weights for i in range(1, 8)]).sum(axis=1) - \ numpy.array(e1.M[1:-1, 0] * ui1[0] + e1.M[1:-1, -1] * ui1[-1]).flatten() ui1[1:-1] = numpy.linalg.solve(e1.M[1:-1, 1:-1], fi1) fi2 = numpy.array([e2.expn[i](qd.nodes) * f(qd.nodes) * qd.weights for i in range(1, 8)]).sum(axis=1) - \ numpy.array(e2.M[1:-1, 0] * ui2[0] + e2.M[1:-1, -1] * ui2[-1]).flatten() ui2[1:-1] = numpy.linalg.solve(e2.M[1:-1, 1:-1], fi2) e1.set_ui(ui1) e2.set_ui(ui2) x = numpy.linspace(-1, 1, 100) err1 = numpy.abs(e1(x) - f(x)) err2 = numpy.abs(e2(x) - f(x)) l2norm1 = numpy.linalg.norm(err1, 2) l2norm2 = numpy.linalg.norm(err2, 2) print(l2norm1) print(l2norm2)
solutions/chapter02/exercise03.ipynb
piyueh/SEM-Toolbox
mit
(e) The same problem as in the part (c) except that now the function $f(x)$ is defined on interval $[2, 5]$. Use chain rule to handle this situation.
xmin = 2. xMax = 5. qd = quad.GaussLobattoJacobi(10) f = poly.Polynomial(roots=[0, 0, 0, 0, 0, 0, 0]) e1 = elem.CommonJacobiElem([xmin, xMax], 9) e2 = elem.GaussLobattoJacobiElem([xmin, xMax], 9) ui1 = numpy.zeros(9, dtype=numpy.float64) ui2 = numpy.zeros(9, dtype=numpy.float64) ui1[0] = ui2[0] = f(xmin) ui1[-1] = ui2[-1] = f(xMax) fi1 = numpy.array([e1.expn[i](qd.nodes) * f(e1.xi_to_x(qd.nodes)) * qd.weights for i in range(1, 8)]).sum(axis=1) - \ numpy.array(e1.M[1:-1, 0] * ui1[0] + e1.M[1:-1, -1] * ui1[-1]).flatten() ui1[1:-1] = numpy.linalg.solve(e1.M[1:-1, 1:-1], fi1) fi2 = numpy.array([e2.expn[i](qd.nodes) * f(e2.xi_to_x(qd.nodes)) * qd.weights for i in range(1, 8)]).sum(axis=1) - \ numpy.array(e2.M[1:-1, 0] * ui2[0] + e2.M[1:-1, -1] * ui2[-1]).flatten() ui2[1:-1] = numpy.linalg.solve(e2.M[1:-1, 1:-1], fi2) e1.set_ui(ui1) e2.set_ui(ui2) x = numpy.linspace(xmin, xMax, 100) err1 = numpy.abs(e1(x) - f(x)) err2 = numpy.abs(e2(x) - f(x)) l2norm1 = numpy.linalg.norm(err1, 2) l2norm2 = numpy.linalg.norm(err2, 2) print(l2norm1) print(l2norm2)
solutions/chapter02/exercise03.ipynb
piyueh/SEM-Toolbox
mit
Background A driver can be in one of 4 possible statuses: * free -- available for a new order * enroute -- driving to an order * ontrip -- carrying out an order * busy -- unavailable for a new order The allowed transitions from one status to another are defined as: * free -> [free, enroute, busy] * enroute -> [free, ontrip] * ontrip -> [free] * busy -> [free] Why the transitions are defined this way: 1. From free a driver can go to * free -- if the driver went offline and came back online, there will be two consecutive records with status free * enroute -- if the driver accepted an order, he switches to enroute and drives to the client * busy -- if the driver pressed the "Busy" button in the taximeter (went to lunch, etc.) 2. From enroute a driver can go to * free -- if the client or the driver cancelled the order * ontrip -- if the driver reached the client and started the trip 3. From ontrip a driver can only go to free (after completing the order) 4. From busy a driver can only go to free Trip efficiency is the time spent with the client in the car (ontrip) divided by the sum of the durations of all statuses related to the trip (sum(free) + enroute + ontrip), where sum(free) is the idle time. Idle time is the sum of all free statuses preceding the trip. All consecutive free statuses are summed, including those interrupted by short busy or enroute statuses (a short status == shorter than some TIMEOUT). Given a dataset of driver statuses, plot the dependence of trip duration on efficiency. * driver_id -- driver id * status -- one of the statuses * dttm -- status start time Notes: * A trip counts only if there is an ontrip status * The tests are written for Python 2 1. Write a generator function that yields neighbouring elements in a loop. The function will be needed to iterate over a driver's records and check neighbouring statuses against the conditions. Don't forget to check that the tests pass without errors (see test_neighbors).
def neighbors(iterable): # Write generator function which yields # previous, current and next values in iterable list. # ... type your code here ... # Check if test passes def test_neighbors(): test_neighbors = neighbors( range(2) ) assert test_neighbors.next() == (None, 0, 1) test_neighbors()
test_task_python-Copy1.ipynb
alekz112/Test
mit
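For reference, one possible implementation of the generator (written in Python 3 syntax; the notebook's own test calls `.next()`, which corresponds to `next(gen)` in Python 3, so this is a sketch rather than a drop-in solution):

```python
def neighbors(iterable):
    """Yield (previous, current, next) triples; None at the edges."""
    items = list(iterable)
    for i, current in enumerate(items):
        prev = items[i - 1] if i > 0 else None
        nxt = items[i + 1] if i < len(items) - 1 else None
        yield prev, current, nxt
```

The first triple for `neighbors(range(2))` is `(None, 0, 1)`, matching the notebook's test.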
2. Group the data at the driver level so that a single row holds all of that driver's status records, with statuses and status start times as a list: Source table format: <table> <tr><td>driver_id</td><td>status</td><td>dttm</td></tr> <tr><td>9f8f9bf3ee8f4874873288c246bd2d05</td><td>free</td><td>2018-02-04 00:19</td></tr> <tr><td>9f8f9bf3ee8f4874873288c246bd2d05</td><td>busy</td><td>2018-02-04 01:03</td></tr> <tr><td>8f174ffd446c456eaf3cca0915d0368d</td><td>free</td><td>2018-02-03 15:43</td></tr> <tr><td>8f174ffd446c456eaf3cca0915d0368d</td><td>enroute</td><td>2018-02-03 17:02</td></tr> <tr><td>...</td><td>...</td><td>...</td></tr> </table> Grouped table format: <table> <tr><td>driver_id</td><td>driver_info</td></tr> <tr><td>9f8f9bf3ee8f4874873288c246bd2d05</td><td>[("free", 2018-02-04 00:19), ("busy", 2018-02-04 01:03)]</td></tr> <tr><td>8f174ffd446c456eaf3cca0915d0368d</td><td>[("free", 2018-02-03 15:43), ("enroute", 2018-02-03 17:02) ...]</td></tr> </table>
df = pd.read_csv(".../dataset.csv", parse_dates=["dttm"]) # ... type your code here ...
test_task_python-Copy1.ipynb
alekz112/Test
mit
3. Using the neighbors function, write a function that computes the duration of each record in the driver_info list.
def calc_status_duration(driver_info): driver_info_updated = [] for i, j, k in neighbors(driver_info): # ... type your code here ... return driver_info_updated # Check if test passes def test_calc_status_duration(): sample_driver_info = [("free", datetime(2018, 4, 2, 0, 19)), ("busy", datetime(2018, 4, 2, 1, 3)),] sample_driver_info_updated = [('free', datetime(2018, 4, 2, 0, 19), 2640.0), ('busy', datetime(2018, 4, 2, 1, 3), None),] assert calc_status_duration(sample_driver_info) == sample_driver_info_updated test_calc_status_duration() df["driver_info"] = df.driver_info.apply(calc_status_duration)
test_task_python-Copy1.ipynb
alekz112/Test
mit
4. Using the neighbors function, write a function that builds, from the driver_info list, a list of trips, each with its trip duration and efficiency (duration_ontrip, efficiency).
TIMEOUT = 1600 def collapse_statuses(driver_info): # Here define conditions under which the "free" state # should be attributed to the trip. # ... type your code here ... # Check if test passes def test_collapse_statuses(): sample_driver_info = [("free", datetime(2018, 4, 2, 0, 19), 2640.0), ("busy", datetime(2018, 4, 2, 1, 3), 1660.0), ("free", datetime(2018, 4, 2, 1, 30, 40), 2050.0), ("enroute", datetime(2018, 4, 2, 2, 4, 50), 70.0), ("free", datetime(2018, 4, 2, 2, 6), 500.0), ("enroute", datetime(2018, 4, 2, 2, 14, 20), 520.0), ("ontrip", datetime(2018, 4, 2, 2, 23), 3060.0), ("free", datetime(2018, 4, 2, 3, 14), None) ] sample_driver_info_updated = [(3060.0, 3060.0 / (3060.0 + 520.0 + 500.0 + 2050.0))] assert collapse_statuses(sample_driver_info) == sample_driver_info_updated test_collapse_statuses() df["driver_info"] = df.driver_info.apply(collapse_statuses)
test_task_python-Copy1.ipynb
alekz112/Test
mit
Select all calibrators that have been observed in at least 3 bands [>60 s in B3, B6, B7], already queried and converted to SQL; exclude Cycle 0, use the 12m array
report, resume = q.select_object_from_sqldb("calibrators_brighterthan_0.1Jy_20180419.db", \ maxFreqRes=999999999, array='12m', \ excludeCycle0=True, \ selectPol=False, \ minTimeBand={3:0., 6:0., 7:0.}, \ silent=True)
notebooks/selecting_source/alma_database_selection6.ipynb
bosscha/alma-calibrator
gpl-2.0
Select objects which have a redshift; collect the flux, band, frequency, and observation date; plot based on the band
def collect_z_and_flux(Band): z = [] flux = [] for idata in resume: if idata[6] is not None: # select object which has redshift information fluxnya = idata[11][0] bandnya = idata[11][1] freqnya = idata[11][2] datenya = idata[11][3] for i, band in enumerate(bandnya): if band == str(Band): # take only first data flux.append(fluxnya[i]) z.append(idata[6]) break return z, flux z3, f3 = collect_z_and_flux(Band=3) print("Number of seleted source in B3: ", len(z3)) z6, f6 = collect_z_and_flux(6) print("Number of seleted source in B6: ", len(z6)) z7, f7 = collect_z_and_flux(7) print("Number of seleted source in B7: ", len(z7))
notebooks/selecting_source/alma_database_selection6.ipynb
bosscha/alma-calibrator
gpl-2.0
Plot -- the same object will be located at the same z; some objects will not have flux in all 3 bands.
plt.figure(figsize=(15,10)) plt.subplot(221) plt.plot(z3, f3, 'ro') plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B3") plt.subplot(222) plt.plot(z6, f6, 'go') plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B6") plt.subplot(223) plt.plot(z7, f7, 'bo') plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B7") plt.subplot(224) plt.plot(z3, f3, 'ro', z6, f6, 'go', z7, f7, 'bo', alpha=0.3) plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B3, B6, B7")
notebooks/selecting_source/alma_database_selection6.ipynb
bosscha/alma-calibrator
gpl-2.0
Plot $\log_{10}(L)$ vs $z$
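The `calc_power` helper used below is not defined anywhere in this excerpt; a rough stand-in might convert flux density to spectral luminosity with the low-z Hubble-law distance. The H0 value and the neglect of K-corrections here are assumptions:

```python
import numpy as np

H0 = 70.0               # Hubble constant [km/s/Mpc] -- assumed value
C_KM_S = 2.998e5        # speed of light [km/s]
MPC_IN_M = 3.086e22     # metres per megaparsec

def calc_power(z, flux_jy):
    """Approximate spectral luminosity L_nu [W/Hz] from flux density [Jy].

    Uses the low-z Hubble-law distance d_L ~ c*z/H0 and ignores
    K-corrections, so it is only a rough stand-in for the notebook's
    undefined helper.
    """
    z = np.asarray(z, dtype=float)
    f_si = np.asarray(flux_jy, dtype=float) * 1e-26   # Jy -> W m^-2 Hz^-1
    d_l = (C_KM_S * z / H0) * MPC_IN_M                # luminosity distance [m]
    return z, 4.0 * np.pi * d_l**2 * f_si
```

A proper treatment would use a cosmology package (e.g. astropy's luminosity_distance) rather than the linear Hubble law.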
z3, l3 = calc_power(z3, f3) z6, l6 = calc_power(z6, f6) z7, l7 = calc_power(z7, f7) plt.figure(figsize=(15,10)) plt.subplot(221) plt.plot(z3, np.log10(l3), 'r*', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3") plt.subplot(222) plt.plot(z6, np.log10(l6), 'g*', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B6") plt.subplot(223) plt.plot(z7, np.log10(l7), 'b*', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B7") plt.subplot(224) plt.plot(z3, np.log10(l3), 'r*', z6, np.log10(l6), 'g*', z7, np.log10(l7), 'b*', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3, B6, B7")
notebooks/selecting_source/alma_database_selection6.ipynb
bosscha/alma-calibrator
gpl-2.0
Without log10
z3, l3 = calc_power(z3, f3) z6, l6 = calc_power(z6, f6) z7, l7 = calc_power(z7, f7) plt.figure(figsize=(15,10)) plt.subplot(221) plt.plot(z3, l3, 'r*', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$L_{\nu_e}$"); plt.title("B3") plt.subplot(222) plt.plot(z6, l6, 'g*', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$L_{\nu_e}$"); plt.title("B6") plt.subplot(223) plt.plot(z7, l7, 'b*', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$L_{\nu_e}$"); plt.title("B7") plt.subplot(224) plt.plot(z3, l3, 'r*', z6, l6, 'g*', z7, l7, 'b*', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$L_{\nu_e}$"); plt.title("B3, B6, B7")
notebooks/selecting_source/alma_database_selection6.ipynb
bosscha/alma-calibrator
gpl-2.0
Test Main
# %run main.py
draft_notebooks/main_draft.ipynb
jvbalen/cover_id
mit
Testing Individual Fingerprinting Methods
cliques_by_name, cliques_by_uri = SHS_data.read_cliques() # ratio = (1, 10, 90) ratio = (5, 15, 80) # ratio = (10, 25, 65) train_cliques, test_cliques, val_cliques = util.split_train_test_validation(cliques_by_name, ratio=ratio) reload(main) reload(fp) fp_function = fp.cov results = main.run_leave_one_out_experiment(train_cliques, fp_function, print_every=50) print('ratio:', ratio) print('fp_function:', fp_function.__name__) print('results:', results)
draft_notebooks/main_draft.ipynb
jvbalen/cover_id
mit
Let's go over the columns: - vvix: volatility of VIX - asof_date: the timeframe to which this data applies - timestamp: this is our timestamp on when we registered the data. We've done much of the data processing for you. Fields like timestamp are standardized across all our Store Datasets, so the datasets are easy to combine. We can select columns and rows with ease. Below, we'll do a simple plot of VVIX since 2007.
# Plotting this DataFrame since 2007 df = odo(dataset, pd.DataFrame) df.head(5) # So we can plot it, we'll set the index as the `asof_date` df['asof_date'] = pd.to_datetime(df['asof_date']) df = df.set_index(['asof_date']) df.head(5) # Plotting the VVIX import matplotlib.pyplot as plt df.vvix.plot(label=str(dataset)) plt.ylabel(str(dataset)) plt.legend() plt.title("Graphing %s since %s" % (str(dataset), min(df.index)))
notebooks/data/quandl.cboe_vvix/notebook.ipynb
quantopian/research_public
apache-2.0
<a id='pipeline'></a> Pipeline Overview Accessing the data in your algorithms & research The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows: Import the data set here from quantopian.pipeline.data.quandl import cboe_vvix Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline: pipe.add(cboe_vvix.vvix.latest, 'vvix') Pipeline usage is very similar between the backtester and Research so let's go over how to import this data through pipeline and view its outputs.
# Import necessary Pipeline modules from quantopian.pipeline import Pipeline from quantopian.research import run_pipeline from quantopian.pipeline.factors import AverageDollarVolume # Import the datasets available from quantopian.pipeline.data.quandl import cboe_vvix
notebooks/data/quandl.cboe_vvix/notebook.ipynb
quantopian/research_public
apache-2.0
Now that we've imported the data, let's take a look at which fields are available for each dataset. You'll find the dataset, the available fields, and the datatypes for each of those fields.
print "Here are the list of available fields per dataset:" print "---------------------------------------------------\n" def _print_fields(dataset): print "Dataset: %s\n" % dataset.__name__ print "Fields:" for field in list(dataset.columns): print "%s - %s" % (field.name, field.dtype) print "\n" _print_fields(cboe_vvix) print "---------------------------------------------------\n"
notebooks/data/quandl.cboe_vvix/notebook.ipynb
quantopian/research_public
apache-2.0
Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline. This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread: https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline() pipe.add(cboe_vvix.vvix.latest, 'vvix') # Setting some basic liquidity strings (just for good habit) dollar_volume = AverageDollarVolume(window_length=20) top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000 pipe.set_screen(top_1000_most_liquid & cboe_vvix.vvix.latest.notnan()) # The show_graph() method of pipeline objects produces a graph to show how it is being calculated. pipe.show_graph(format='png') # run_pipeline will show the output of your pipeline pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25') pipe_output
notebooks/data/quandl.cboe_vvix/notebook.ipynb
quantopian/research_public
apache-2.0
Here, you'll notice that each security is mapped to VVIX. So you could grab any security to obtain the value of VVIX. Taking what we've seen from above, let's see how we'd move that into the backtester.
# This section is only importable in the backtester from quantopian.algorithm import attach_pipeline, pipeline_output # General pipeline imports from quantopian.pipeline import Pipeline from quantopian.pipeline.factors import AverageDollarVolume # For use in your algorithms via the pipeline API from quantopian.pipeline.data.quandl import cboe_vvix def make_pipeline(): # Create our pipeline pipe = Pipeline() # Screen out penny stocks and low liquidity securities. dollar_volume = AverageDollarVolume(window_length=20) is_liquid = dollar_volume.rank(ascending=False) < 1000 # Create the mask that we will use for our percentile methods. base_universe = (is_liquid) # Add the datasets available pipe.add(cboe_vvix.vvix.latest, 'vvix') # Set our pipeline screens pipe.set_screen(is_liquid) return pipe def initialize(context): attach_pipeline(make_pipeline(), "pipeline") def before_trading_start(context, data): results = pipeline_output('pipeline')
notebooks/data/quandl.cboe_vvix/notebook.ipynb
quantopian/research_public
apache-2.0
By convention we give classes a name that starts with a capital letter. Note how x is now the reference to our new instance of the Sample class. In other words, we instantiate the Sample class. Inside of the class we currently just have pass. But we can define class attributes and methods. An attribute is a characteristic of an object. A method is an operation we can perform with the object. For example, we can create a class called Dog. An attribute of a dog may be its breed or its name, while a method of a dog may be defined by a .bark() method which returns a sound. Let's get a better understanding of attributes through an example. Attributes The syntax for creating an attribute is: self.attribute = something There is a special method called: __init__() This method is used to initialize the attributes of an object. For example:
class Dog(object):
    def __init__(self, breed):
        self.breed = breed

sam = Dog(breed='Lab')
frank = Dog(breed='Huskie')
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Object Oriented Programming-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
Methods Methods are functions defined inside the body of a class. They are used to perform operations on the attributes of our objects. Methods are essential to the encapsulation concept of the OOP paradigm, which is key to dividing responsibilities in programming, especially in large applications. You can basically think of methods as functions acting on an Object that take the Object itself into account through its self argument. Let's go through an example of creating a Circle class:
class Circle(object):
    pi = 3.14

    # Circle gets instantiated with a radius (default is 1)
    def __init__(self, radius=1):
        self.radius = radius

    # Area method calculates the area. Note the use of self.
    def area(self):
        return self.radius * self.radius * Circle.pi

    # Method for resetting radius
    def setRadius(self, radius):
        self.radius = radius

    # Method for getting radius (same as just calling .radius)
    def getRadius(self):
        return self.radius

c = Circle()
c.setRadius(2)
print 'Radius is: ', c.getRadius()
print 'Area is: ', c.area()
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Object Oriented Programming-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
Great! Notice how we used self. notation to reference attributes of the class within the method calls. Review how the code above works and try creating your own method. Inheritance Inheritance is a way to form new classes using classes that have already been defined. The newly formed classes are called derived classes; the classes that we derive from are called base classes. Important benefits of inheritance are code reuse and reduction of complexity of a program. The derived classes (descendants) override or extend the functionality of the base classes (ancestors). Let's see an example by incorporating our previous work on the Dog class:
class Animal(object):
    def __init__(self):
        print "Animal created"

    def whoAmI(self):
        print "Animal"

    def eat(self):
        print "Eating"

class Dog(Animal):
    def __init__(self):
        Animal.__init__(self)
        print "Dog created"

    def whoAmI(self):
        print "Dog"

    def bark(self):
        print "Woof!"

d = Dog()
d.whoAmI()
d.eat()
d.bark()
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Object Oriented Programming-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
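As a side note not covered in the lesson above, the base-class initializer can also be invoked with super(), which avoids naming the base class explicitly. A minimal sketch (shown with Python 3 print() syntax, unlike the Python 2 code above):

```python
class Animal(object):
    def __init__(self):
        print("Animal created")

class Dog(Animal):
    def __init__(self):
        # Equivalent to Animal.__init__(self), but resolved via the method resolution order
        super(Dog, self).__init__()
        print("Dog created")

d = Dog()  # prints "Animal created" then "Dog created"
```

This becomes especially convenient if the base class is later renamed, since only the class statement needs updating.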
In this example, we have two classes: Animal and Dog. Animal is the base class and Dog is the derived class. The derived class inherits the functionality of the base class, as shown by the eat() method. The derived class modifies existing behaviour of the base class, as shown by the whoAmI() method. Finally, the derived class extends the functionality of the base class by defining a new bark() method. Special Methods Finally, let's go over special methods. Classes in Python can implement certain operations with special method names. These methods are not actually called directly but by Python-specific language syntax. For example, let's create a Book class:
class Book(object):
    def __init__(self, title, author, pages):
        print "A book is created"
        self.title = title
        self.author = author
        self.pages = pages

    def __str__(self):
        return "Title:%s , author:%s, pages:%s " % (self.title, self.author, self.pages)

    def __len__(self):
        return self.pages

    def __del__(self):
        print "A book is destroyed"

book = Book("Python Rocks!", "Jose Portilla", 159)

# Special Methods
print book
print len(book)
del book
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Object Oriented Programming-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
Vectorized Operations
u = torch.randint(10, (5, 1))
v = torch.randint(10, (5, 1))
print(u)
print(v)

u.size()
u * v
u.pow(v)
GradientDescend.ipynb
DBWangGroupUNSW/COMP6714
mit
Ground Truth Model
n = 5
a = 2.0
b = 3.0
epsilon = 0.02

# Specify our ground truth model
X = torch.randint(10, (n, 1))
ps = torch.ones(n) / 2.0
exponents = torch.bernoulli(ps) * epsilon + (torch.ones(n) - epsilon/2)
e = exponents.reshape(5, 1)
T = a * (X.pow(e)) + b

print(X)
print(e)
print(T)

# Verify that our implementation is correct.
print(a * X[0][0]**e[0][0] + b)
GradientDescend.ipynb
DBWangGroupUNSW/COMP6714
mit
The above output shows that we can do vectorized operations on tensors easily. You should be able to write down the ground truth model mathematically. Fitting a Linear Model $$\hat{\mathbf{y}} = \mathbf{x}\mathbf{w} + \mathbf{b}$$
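Before implementing the training loop, it helps to write out the gradients of the SSE loss that the gradient functions compute. With residual $\mathbf{r} = X\mathbf{w} + \mathbf{b} - \mathbf{t}$:

```latex
L(\mathbf{w}, \mathbf{b}) = \tfrac{1}{2}\,\mathbf{r}^\top \mathbf{r},
\qquad
\frac{\partial L}{\partial \mathbf{w}} = X^\top \mathbf{r},
\qquad
\frac{\partial L}{\partial b_i} = r_i
```

Note that gradient_w in the code computes the transpose $\mathbf{r}^\top X$, which coincides with $X^\top \mathbf{r}$ here since $\mathbf{w}$ is a single scalar weight, and gradient_b sums the residuals so that every $b_i$ receives the same update.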
# Note that all operations here are vectorized.
def forward(x):
    # Dangerous: assuming w and b are global variables.
    return x.mm(w) + b

# Loss function
def SSEloss(y_pred, y):
    return (y_pred - y) * (y_pred - y) / 2.0

def loss(x, t):
    y_pred = forward(x)
    return (y_pred - t).pow(2).sum() / 2.0

def gradient_w(x, t):
    return torch.t(forward(x) - t).mm(x)

def gradient_b(x, t):
    # Need to add sum(); otherwise, each b_i will be different due to different gradients it receives.
    return (forward(x) - t).sum()

# Training loop
MAX_EPOCHES = 500
learning_rate = 0.01  # It is tricky to find a good learning rate.

w = torch.randn(1, 1)
b = torch.ones(n, 1)

for epoch in range(MAX_EPOCHES):
    grad_w = gradient_w(X, T)
    w = w - learning_rate * grad_w
    grad_b = gradient_b(X, T)
    b = b - learning_rate * grad_b
    print(".. grad: ", w.item(), b[0].item())

    l = loss(X, T)
    print("epoch = ", epoch, ", loss=", l.item(), "learning rate = ", learning_rate)

    # Adaptive learning rate on:  500 epoches => loss = 0.04446
    #                       off:  500 epoches => loss = 0.04179
    # learning_rate = 1.0 / (100 + epoch)

# After training
print(forward(X))
print(T)
print(w)
print(b)
GradientDescend.ipynb
DBWangGroupUNSW/COMP6714
mit
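A quick way to gain confidence in hand-derived gradients like gradient_w above is a finite-difference check. This is a sketch in plain Python (no PyTorch needed) on a tiny 1-D version of the same linear model; the data values below are made up for illustration:

```python
# Tiny 1-D linear model: y_hat = x * w + b, SSE loss = 0.5 * sum((y_hat - t)^2)
xs = [1.0, 2.0, 3.0]   # hypothetical inputs
ts = [5.0, 7.0, 9.0]   # hypothetical targets (from t = 2x + 3)

def loss(w, b):
    return 0.5 * sum((x * w + b - t) ** 2 for x, t in zip(xs, ts))

def grad_w(w, b):
    # Analytic gradient: sum((x*w + b - t) * x), the scalar form of gradient_w above
    return sum((x * w + b - t) * x for x, t in zip(xs, ts))

# Central finite-difference approximation of dL/dw at an arbitrary point
w0, b0, h = 1.5, 0.5, 1e-6
numeric = (loss(w0 + h, b0) - loss(w0 - h, b0)) / (2 * h)
analytic = grad_w(w0, b0)
print(abs(numeric - analytic))  # should be very small
```

The same trick applies to the bias gradient, and it generalizes to checking any hand-written backward pass.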
Filter talks
sessions_talks = OrderedDict()

# Remove the IDs from the talks
for name, sess in talk_sessions.items():
    sessions_talks[name] = [talk for tid, talk in sess.items()]

talks = sessions_talks['talk']
notebooks/session_instructions_toPDF.ipynb
EuroPython/ep-tools
mit
Create start time and session fields in each talk
# Add 'start' time for each talk
for idx, talk in enumerate(talks):
    tr = talk['timerange']
    if not tr:
        talk['start'] = dt.datetime.now()
    else:
        talk['start'] = dt.datetime.strptime(tr.split(',')[0].strip(), "%Y-%m-%d %H:%M:%S")
    talks[idx] = talk

# Add 'session_code' for each talk
conference_start = dt.date(2016, 7, 17)

first_coffee_start = dt.time(10, 0)
lunch_start = dt.time(12, 45)
secnd_coffee_start = dt.time(15, 30)
close_start = dt.time(18, 0)
journee_start_times = [first_coffee_start, lunch_start, secnd_coffee_start, close_start]

def get_journee_number(talk_start_time, journee_start_times):
    for idx, start in enumerate(journee_start_times):
        if talk_start_time < start:
            return idx
    return -1

tracks = ['A1', 'A2', 'Barria 1', 'Barria 2', 'PyCharm Room']

for idx, talk in enumerate(talks):
    talk_start = talk['start'].time()
    talk_room = talk['track_title'].split('[')[0].strip().replace(' ', '_')
    day_num = (talk['start'].date() - conference_start).days
    journee_num = get_journee_number(talk['start'].time(), journee_start_times)
    talk['session'] = str(talk_room) + '_' + str(int(day_num)) + '.' + str(journee_num)
    talks[idx] = talk
notebooks/session_instructions_toPDF.ipynb
EuroPython/ep-tools
mit
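To see what the session-code construction above produces, here is a small standalone check using the same logic; the talk start time and room below are hypothetical:

```python
import datetime as dt

conference_start = dt.date(2016, 7, 17)
journee_start_times = [dt.time(10, 0), dt.time(12, 45), dt.time(15, 30), dt.time(18, 0)]

def get_journee_number(talk_start_time, journee_start_times):
    # Index of the first daily boundary the talk starts before, else -1
    for idx, start in enumerate(journee_start_times):
        if talk_start_time < start:
            return idx
    return -1

# A hypothetical talk in 'Barria 1' on the second conference day at 11:15
start = dt.datetime(2016, 7, 18, 11, 15)
room = 'Barria 1'.replace(' ', '_')
day_num = (start.date() - conference_start).days
journee_num = get_journee_number(start.time(), journee_start_times)
session = room + '_' + str(day_num) + '.' + str(journee_num)
print(session)  # Barria_1_1.1
```

So a session code encodes room, day offset from the conference start, and the slot of the day between the coffee/lunch boundaries.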