The CSDMS Standard Name for this variable is "land_surface_water_sediment~bedload__mass_flow_rate". You can get an idea of the units from the quantity part of the name: "mass_flow_rate" indicates mass per time. You can double-check this with the BMI method get_var_units.
```python
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
cem.time_step, cem.time_units, cem.time
```
docs/demos/cem.ipynb
csdms/coupling
mit
Set the bedload flux and run the model.
```python
for time in range(3000):
    cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
    cem.update_until(time)
    cem.get_value('sea_water__depth', out=z)
cem.time
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
val = np.empty((5, ), dtype=float)
cem.get_value("basin_outlet~coastal_ce...
```
docs/demos/cem.ipynb
csdms/coupling
mit
Let's add another sediment source with a different flux and update the model.
```python
qs[0, 150] = 1500
for time in range(3750):
    cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
    cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
```
docs/demos/cem.ipynb
csdms/coupling
mit
Here we shut off the sediment supply completely.
```python
qs.fill(0.)
for time in range(4000):
    cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
    cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
```
docs/demos/cem.ipynb
csdms/coupling
mit
Data generation
```python
num_samples = 300
input_dim = 50
side_dim = 50

# generate some random data with 300 samples
# and 50 dimensions
X = np.random.randn(num_samples, input_dim)
# select the third dimension as the relevant one
# for our classification task
S = X[:, 2:3]
# The labels are simply the sign of S
# (note the downcast to int32 -...
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
Now let's define some side information: we simulate an additional sensor which contains S, but embedded into a different space.
```python
Z = np.random.randn(num_samples, side_dim)
# set second dimension of Z to correspond to S
Z[:, 1] = S[:, 0]
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
Let's make it harder to find S in X and Z by applying a random rotation to each data set.
```python
# random rotation 1
R = np.linalg.qr(np.random.randn(input_dim, input_dim))[0]
X = X.dot(R)
# random rotation 2
Q = np.linalg.qr(np.random.randn(side_dim, side_dim))[0]
Z = Z.dot(Q)
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
Finally, split our data into training, validation, and test sets.
```python
split = num_samples // 3  # integer division, so it can be used as a slice index
X_train = X[:split]
X_val = X[split:2*split]
X_test = X[2*split:]
y_train = y[:split]
y_val = y[split:2*split]
y_test = y[2*split:]
Z_train = Z[:split]
Z_val = Z[split:2*split]
Z_test = Z[2*split:]
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
Purely supervised learning Let's check how hard the problem is for supervised learning alone.
```python
if sklm is not None:
    # let's try different regularizations
    for c in [1e-5, 1e-1, 1, 10, 100, 1e5]:
        lr = sklm.LogisticRegression(C=c)
        lr.fit(X_train, y_train)
        print("Logistic Regression (C=%f)\n accuracy = %.3f %%"
              % (c, 100 * lr.score(X_test, y_test)))
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
Learning with side information: building the pattern
```python
# Let's first define the theano variables which will represent our data
input_var = T.matrix('inputs')     # for X
target_var = T.ivector('targets')  # for Y
side_var = T.matrix('sideinfo')    # for Z
# Size of the intermediate representation phi(X);
# since S is 1-dim, phi(X) can also map to a
# 1-dim vector representa...
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
Now define the functions; we choose linear functions. concarne internally relies on lasagne, which encodes functions as (sets of) layers. Additionally, concarne supports nolearn-style initialization of lasagne layers, as follows:
```python
phi = [(lasagne.layers.DenseLayer,
        {'num_units': concarne.patterns.Pattern.PHI_OUTPUT_SHAPE,
         'nonlinearity': None,
         'b': None})]
psi = [(lasagne.layers.DenseLayer,
        {'num_units': concarne.patterns.Pattern.PSI_OUTPUT_SHAPE,
         'nonlinearity': lasagne.nonlinearities.softmax,
         'b': None})]
...
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
For the variable of your layer that denotes the output of the network, use the markers PHI_OUTPUT_SHAPE, PSI_OUTPUT_SHAPE, and BETA_OUTPUT_SHAPE, so that the pattern can automatically infer the correct shape.
```python
pattern = concarne.patterns.MultiViewPattern(
    phi=phi, psi=psi, beta=beta,
    # the following parameters are required to
    # build the functions and the losses
    input_var=input_var,
    target_var=target_var,
    side_var=side_var,
    input_shape=input_dim,
    target_shape=num_classes,
    side_s...
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
Training. To train a pattern, you can use the PatternTrainer, which trains the pattern via stochastic gradient descent. It also supports different training procedures.
```python
trainer = concarne.training.PatternTrainer(
    pattern,
    procedure='simultaneous',
    num_epochs=500,
    batch_size=10,
    update=lasagne.updates.nesterov_momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,
)
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
<b>Let's train!</b>
```python
trainer.fit_XYZ(X_train, y_train, [Z_train],
                X_val=X_val, y_val=y_val,
                side_val=[X_val, Z_val],
                verbose=True)
pass
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
Some statistics: Test score.
```python
trainer.score(X_test, y_test, verbose=True)
pass
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
We can also compute a test score for the side loss:
```python
trainer.score_side([X_test, Z_test], verbose=True)
pass
```
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
You can then also query the prediction output, similar to the scikit-learn API:
trainer.predict(X_test)
example/concarne_multiview_demo.ipynb
tu-rbo/concarne
mit
II. Complex vectors
```python
# Complex numbers in python have a j term:
a = 1 + 2j
v1 = array([1+2j, 3+2j, 5+1j, 4+0j])
```
Lab 1 - Vectors and Matrices Solutions.ipynb
amcdawes/QMlabs
mit
III. Matrices
```python
# a two-dimensional array
m1 = array([[2, 1], [2, 1]])
m1
# can find transpose with the T method:
m1.T
# find the eigenvalues and eigenvectors of a matrix:
eig(m1)
```
Lab 1 - Vectors and Matrices Solutions.ipynb
amcdawes/QMlabs
mit
We can also use the matrix type, which is like array but restricted to 2D. matrix also adds the .H and .I methods for the Hermitian conjugate and inverse, respectively. For more information, see Stack Overflow question #4151128.
```python
m2 = matrix([[2, 1], [2, 1]])
m2.H
eig(m2)
# use a question mark to get help on a command
eig?
```
Lab 1 - Vectors and Matrices Solutions.ipynb
amcdawes/QMlabs
mit
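The .I method mentioned above returns the matrix inverse; a small sketch (note that m1 = [[2, 1], [2, 1]] from this lab is singular, so the invertible matrix m3 below is our own illustrative example, not from the notebook):

```python
import numpy as np

# .I on numpy.matrix gives the inverse; multiplying a matrix by its
# inverse recovers the identity (up to floating-point error).
m3 = np.matrix([[2, 1], [1, 1]])
m3.I           # the inverse, [[1, -1], [-1, 2]]
m3 * m3.I      # the 2x2 identity
```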
Interpret this result: the two eigenvalues are 1 and 2. The eigenvectors are strange decimals, but we can check them against the stated solution:
```python
1/sqrt(2)  # this is the value for both entries in the first eigenvector
1/sqrt(5)  # this is the first value in the second eigenvector
2/sqrt(5)  # this is the second value in the second eigenvector
eigvals(M14)
```
Lab 1 - Vectors and Matrices Solutions.ipynb
amcdawes/QMlabs
mit
Signs are opposite compared to the book, but it turns out that a (-) sign doesn't matter in the interpretation of eigenvectors: only the "direction" matters (the relative size of the entries). Example: Problem 1.16 using IPython functions.
```python
M16 = array([[0, -1j], [1j, 0]])
evals, evecs = eig(M16)
evecs
evecs[:, 0]
evecs[:, 1]
dot(evecs[:, 0].conjugate(), evecs[:, 1])
```
Lab 1 - Vectors and Matrices Solutions.ipynb
amcdawes/QMlabs
mit
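The sign-invariance claim is easy to verify numerically: if v is an eigenvector of a matrix with eigenvalue w, then so is -v. A quick check with the same M16:

```python
import numpy as np

# If M v = w v, then M (-v) = w (-v): flipping the sign of an
# eigenvector leaves the eigenvalue equation satisfied.
M16 = np.array([[0, -1j], [1j, 0]])
w, vecs = np.linalg.eig(M16)
v = vecs[:, 0]
print(np.allclose(M16 @ v, w[0] * v))        # True
print(np.allclose(M16 @ (-v), w[0] * (-v)))  # True
```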
Practice: Problem 1.2 using the hist() function.
```python
# Solution
n, bins, patches = hist([10, 13, 14, 14, 6, 8, 7, 9, 12, 14, 13, 11, 10, 7, 7],
                        bins=5, range=(5, 14))
# Solution
n
# Solution
pvals = n/n.sum()
```
Lab 1 - Vectors and Matrices Solutions.ipynb
amcdawes/QMlabs
mit
Problem 1.8 Hint: using sympy, we can calculate the relevant integral. The conds='none' asks the solver to ignore any strange conditions on the variables in the integral. This is fine for most of our integrals. Usually the variables are real and well-behaved numbers.
```python
# Solution
from sympy import *
c, a, x = symbols("c a x")
Q.positive((c, a))
first = integrate(c*exp(-a*x), (x, 0, oo), conds='none')
print("first = ", first)
second = integrate(a*exp(-a*x), (x, 0, oo), conds='none')
print("second = ", second)
```
Lab 1 - Vectors and Matrices Solutions.ipynb
amcdawes/QMlabs
mit
Load the data file and convert it into a training set and a test set (reading from two distinct files).
```python
def read_file(filename):
    with open(filename) as f:
        content = f.readlines()
    y = [line[0] for line in content]
    X = [line[2:].strip() for line in content]
    return X, y

X_train, y_train = read_file('Names_data_train.txt')
X_test, y_test = read_file('Names_data_test.txt')
```
mloc/ch2_Machine_Learning/Learn_Names_Gradient_Boosting.ipynb
kit-cel/wt
gpl-2.0
A simple class that converts the string into numbers and then trains a simple classifier using the gradient boosting technique. The resulting gradient boosting classifier is essentially a rule-based system, where the results are derived from the inputs to the classifier. The main take-away message is that rule-based sy...
```python
class Gradient_Boosting_Estimator():
    '''
    Class for training a gradient boosting, rule-based estimator on the letters
    Parameter is the number of letters of the word to consider
    '''
    def __init__(self, letters):
        self.letters = letters
        self.gbes = GradientBoostingClassifier()
        ...
```
mloc/ch2_Machine_Learning/Learn_Names_Gradient_Boosting.ipynb
kit-cel/wt
gpl-2.0
Let's try to see if Pandas can read the .csv files coming from Weather Underground.
```python
CSV_URL = 'https://www.wunderground.com/weatherstation/WXDailyHistory.asp?\
ID=KCABERKE22&day=24&month=06&year=2018&graphspan=day&format=1'
df = pd.read_csv(CSV_URL, index_col=False)
df
# remove every other row from the data because they contain `<br>` only
dg = df.drop([2*i + 1 for i in range(236)])
dg
def get_clean...
```
weather_station_data.ipynb
bearing/dosenet-analysis
mit
Now that we have a way of getting weather station data, let's do some time-based binning! But first, we will have to do a few things to make this data compatible with our sensor data. We will:
* convert the times into timestamps,
* convert temperatures to degrees Celsius.
```python
def process_data(data_df):
    def deg_f_to_c(deg_f):
        return (5 / 9) * (deg_f - 32)
    def inhg_to_mbar(inhg):
        return 33.863753 * inhg
    for idx, time, tempf, dewf, pressure, *_ in data_df.itertuples():
        data_df.loc[idx, 'Time'] = datetime.strptime(time, '%Y-%m-%d %H:%M:%S').timestamp()
        ...
```
weather_station_data.ipynb
bearing/dosenet-analysis
mit
From the previous cell, we see that temperature data (and, for that matter, all other sensor data) begins on 17 November 2017. So we only need to get weather station data from that date on.
```python
start_time = date.fromtimestamp(int(temperature_data.loc[26230, 'unix_time']))
end_time = date.fromtimestamp(
    int(temperature_data.loc[temperature_data.shape[0] - 1, 'unix_time']))
current_time = start_time
data_df = pd.DataFrame([])
while current_time < end_time:
    # store the result of the query in dataframe `data_...
```
weather_station_data.ipynb
bearing/dosenet-analysis
mit
Comparing Weather Stations to our Weather Data Now let's look at the differences between the average temperatures measured by the weather station versus our measurements. We chose a weather station close to Etcheverry Hall, so the measurements should be about the same. If the difference is relatively constant but nonze...
```python
def weather_station_diff_and_corr(interval):
    ws_temp = pd.read_csv(f'binned_data/ws_data_Temperature_{interval}.csv',
                          header=0, names=['utime', 'temp'], usecols=[1])
    ws_pressure = pd.read_csv(f'binned_data/ws_data_Pressure_{interval}.csv',
                              header=0, name...
```
weather_station_data.ipynb
bearing/dosenet-analysis
mit
Uggh! There are a few influential points that should not exist. Let's get rid of them too in weather_station_diff_and_corr.
```python
def remove_influential_pts(df: pd.DataFrame, z_star: float):
    if df.shape[1] != 2:
        raise ValueError('DataFrame must have shape `Nx2`')
    for idx, elem1, elem2 in df.itertuples():
        if (abs((elem1 - df.iloc[:, 0].mean()) / df.iloc[:, 0].std()) > z_star or
                abs((elem2 - df.iloc[:, 1].me...
```
weather_station_data.ipynb
bearing/dosenet-analysis
mit
Note that although we have address information, we don't have coordinates for the events, which makes any kind of analysis difficult. To obtain coordinates, we will geocode the addresses. But first, let's merge all the address information into a single column called endereco.
```python
data['endereco'] = data['logradouro'] + ', ' + data['localNumero'].apply(str)
data.head()
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Now let's turn the addresses into coordinates using geocode() with the Nominatim search tool, which queries OpenStreetMap. First you will need to install the geopy library with pip, using the command: pip install geopy
```python
# Import the geocoding tool
from geopandas.tools import geocode

# Geocode addresses with the Nominatim backend
geo = geocode(data['endereco'], provider='nominatim', user_agent='carlos')
geo
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
As a result, we get a GeoDataFrame containing our address and a 'geometry' column holding Point objects, which we can use, for example, to export the addresses to a Shapefile. Since the indices of the two tables are the same, we can join them easily.
```python
data['geometry'] = geo['geometry']
data.head()
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Notes on the Nominatim tool. Nominatim works reasonably well if you have well-defined, well-known addresses such as the ones we used in this tutorial. However, in some cases you may not have well-defined addresses and may only have, for example, the name of a shopping mall or a diner. In those ca...
```python
from shapely.geometry import Point, Polygon

# Create Point objects
p1 = Point(24.952242, 60.1696017)
p2 = Point(24.976567, 60.1612500)

# Create a Polygon
coords = [(24.950899, 60.169158), (24.953492, 60.169158),
          (24.953510, 60.170104), (24.950958, 60.169990)]
poly = Polygon(coords)

# Let's check what we have
print(p...
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Let's check whether these points are inside the polygon.
```python
# Check if p1 is within the polygon using the within function
print(p1.within(poly))

# Check if p2 is within the polygon
print(p2.within(poly))
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
So we can see that the first point appears to be inside the polygon and the second does not. In fact, the first point is close to the center of the polygon, as we can see if we compare the point's location with the polygon's centroid:
```python
# Our point
print(p1)

# The centroid
print(poly.centroid)
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
2.2 Intersection. Another typical geospatial operation is to check whether one geometry intersects or touches another. The difference between the two is: if the objects intersect, the boundary and interior of one object must intersect those of the other; if one object touches the other, it only needs to have (at least) one ...
```python
from shapely.geometry import LineString, MultiLineString

# Create two lines
line_a = LineString([(0, 0), (1, 1)])
line_b = LineString([(1, 1), (0, 2)])
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Let's see whether they intersect.
line_a.intersects(line_b)
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Do they also touch each other?
line_a.touches(line_b)
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Yes, both operations are true, and we can see this by plotting the two objects together.
```python
# Create a MultiLineString from line_a and line_b
multi_line = MultiLineString([line_a, line_b])
multi_line
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
2.3 Point in polygon using geopandas. One of the strategies adopted by the Secretaria da Segurança Pública e Defesa Social (SSPDS) to improve police, forensic, and fire-department work across the state of Ceará is the division of the state into Integrated Security Areas (AIS). The city of Fortalez...
```python
ais_filep = 'data/ais.shp'
ais_gdf = gpd.read_file(ais_filep)
ais_gdf.crs
ais_gdf.head()

import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(15, 8))
ais_gdf.plot(ax=ax)
plt.show()
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Now let's show only the AIS boundaries and our crime events. But first, let's turn our robbery data into a GeoDataFrame.
```python
data_gdf = gpd.GeoDataFrame(data)
data_gdf.crs = ais_gdf.crs
data_gdf.head()
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Now let's show the boundaries of each AIS together with the robbery events.
```python
fig, ax = plt.subplots(1, 1, figsize=(15, 8))
for idx, ais in ais_gdf.iterrows():
    ax.plot(*ais['geometry'].exterior.xy, color='black')
data_gdf.plot(ax=ax, color='red')
plt.show()
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Recalling the addresses in our data, two robberies happened on Avenida Bezerra de Menezes near the North Shopping mall. Knowing that the AIS containing the mall is number 6, let's select only the robbery events inside AIS 6. First we will extract just the geometry of AIS 6. Before that, let's look at the ...
ais_gdf
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
There are two columns that can help us filter the desired AIS: the AIS column and the NM_AIS column. We will use the first one, since it only requires the number.
```python
ais6 = ais_gdf[ais_gdf['AIS'] == 6]
ais6.plot()
plt.show()
ais6_geometry = ais6.iloc[0].geometry
ais6_geometry
type(ais6)
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Now we can use the within() function to select only the events that happened inside AIS 6.
```python
mask = data_gdf.within(ais6.geometry[0])
mask
data_gdf_ais6 = data_gdf[mask]
data_gdf_ais6
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Let's view our data on a map using the Folium module: conda install -c conda-forge folium
```python
import folium

map_fortal = folium.Map(location=[data_gdf_ais6.loc[0, 'geometry'].y,
                                    data_gdf_ais6.loc[0, 'geometry'].x],
                        zoom_start=14)
folium.Marker([data_gdf_ais6.loc[0, 'geometry'].y,
               data_gdf_ais6.loc[0, 'geometry'].x]).add_to...
```
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
InsightLab/data-science-cookbook
mit
Using SQL for Queries. Note that SQL is case-insensitive, but it is traditional to use ALL CAPS for SQL keywords. It is also standard to end SQL statements with a semicolon.
pdsql("SELECT * FROM tips LIMIT 5;") pdsql("SELECT * FROM tips WHERE sex='Female' LIMIT 5;") pdsql("SELECT tip, sex, size FROM tips WHERE total_bill< 10 LIMIT 5;")
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Ordering
query = """ SELECT * FROM tips WHERE sex='Female' and smoker='Yes' ORDER BY total_bill ASC LIMIT 5; """ pdsql(query)
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Aggregate queries
query = """ SELECT count(*) AS count, max(tip) AS max, min(tip) AS min FROM tips WHERE size > 1 GROUP BY sex, day HAVING max < 6 ORDER BY count DESC LIMIT 5; """ pdsql(query)
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Joins A join occurs when you combine information from two or more database tables, based on information in a column that is common among the tables. As usual, it is easier to understand the concept with examples.
```python
student = pd.read_csv('data/student.txt')
student
cls = pd.read_csv('data/class.txt')
cls
major = pd.read_csv('data/major.txt')
major
student_cls = pd.read_csv('data/student_class.txt')
student_cls
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Matching students and majors Inner join
query = """ SELECT s.first, s.last, m.name FROM student s INNER JOIN major m ON s.major_id = m.major_id; """ pdsql(query)
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Left outer join SQL also has RIGHT OUTER JOIN and FULL OUTER JOIN but these are not currently supported by SQLite3 (the database engine used by pdsql).
query = """ SELECT s.first, s.last, m.name FROM student s LEFT OUTER JOIN major m ON s.major_id = m.major_id; """ pdsql(query)
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Emulating a full outer join with UNION ALL. This is only necessary if the database does not provide FULL OUTER JOIN.
query = """ SELECT s.first, s.last, m.name FROM student s LEFT JOIN major m ON s.major_id = m.major_id UNION All SELECT s.first, s.last, m.name FROM major m LEFT JOIN student s ON s.major_id = m.major_id WHERE s.major_id IS NULL; """ pdsql(query)
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Using linker tables to match students to classes (a MANY TO MANY join)
query = """ SELECT s.first, s.last, c.code, c.name, c.credits FROM student s INNER JOIN student_cls sc ON s.student_id = sc.student_id INNER JOIN cls c ON c.class_id = sc.class_id; """ pdsql(query)
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Same thing but including students with no majors
query = """ SELECT s.first, s.last, c.code, c.name, c.credits FROM student s LEFT OUTER JOIN student_cls sc ON s.student_id = sc.student_id LEFT OUTER JOIN cls c ON c.class_id = sc.class_id; """ pdsql(query)
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Using SQLite3 SQLite3 is part of the standard library. However, the mechanics of using essentially any database in Python is similar, because of the Python DB-API.
```python
import sqlite3

c = sqlite3.connect('data/Chinook_Sqlite.sqlite')
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
SQLite specific commands to get metadata Unlike SQL syntax for queries, how you get metadata from a relational database is vendor-specific. You'll have to read the docs to find out what is needed for your SQL flavor. What tables are there in the database?
list(c.execute("SELECT name FROM sqlite_master WHERE type='table';"))
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
What are the columns of the table "Album"?
list(c.execute("PRAGMA table_info(Album);"))
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Standard SQL statements with parameter substitution. Note: using Python string substitution for Python-defined parameters is dangerous because of the risk of SQL injection attacks. Use parameter substitution with ? instead. Do this:
```python
t = ['%rock%']
list(c.execute("SELECT * FROM Album WHERE Title like ? LIMIT 5;", t))
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Not this
t = ["'%rock%'"] list(c.execute("SELECT * FROM Album WHERE Title like %s LIMIT 5;" % t[0]))
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
User defined functions. Sometimes it is useful to have custom functions that run on the database server rather than on the client. These are called User Defined Functions (UDFs). How to do this varies with the database used, but it is fairly simple with Python and SQLite. A standard UDF:
```python
def encode(text, offset):
    """Caesar cipher of text with given offset."""
    from string import ascii_lowercase, ascii_uppercase
    tbl = dict(zip(map(ord, ascii_lowercase + ascii_uppercase),
                   ascii_lowercase[offset:] + ascii_lowercase[:offset] +
                   ascii_uppercase[offset:] + ascii_uppe...
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
An aggregate UDF. We can also add aggregate UDFs, similar to SQL MIN, SUM, COUNT etc. Aggregate UDFs require you to write a class with __init__, step, and finalize methods.
```python
class CV:
    """Aggregate UDF for coefficient of variation in %."""
    def __init__(self):
        self.s = []
    def step(self, value):
        self.s.append(value)
    def finalize(self):
        if len(self.s) < 2:
            return 0
        else:
            return 100.0 * np.std(self.s) / np.mean(self.s)

c.cre...
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
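The registration call in the cell above is truncated. A self-contained sketch of the full pattern, using an in-memory database and the standard library's statistics module in place of numpy (table and column names here are illustrative, not from the notebook):

```python
import sqlite3
import statistics

class CV:
    """Aggregate UDF: coefficient of variation in %."""
    def __init__(self):
        self.values = []
    def step(self, value):
        self.values.append(value)
    def finalize(self):
        if len(self.values) < 2:
            return 0.0
        # population std over mean, matching numpy's default np.std
        return 100.0 * statistics.pstdev(self.values) / statistics.fmean(self.values)

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x REAL)')
conn.executemany('INSERT INTO t VALUES (?)', [(1.0,), (2.0,), (3.0,)])
conn.create_aggregate('cv', 1, CV)  # name, number of arguments, class
cv_pct = conn.execute('SELECT cv(x) FROM t').fetchone()[0]
print(cv_pct)
```

SQLite calls step once per row and finalize once at the end of the aggregation, just like its built-in aggregates.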
Using SQL magic functions We will use the ipython-sql notebook extension for convenience. This will only work in notebooks and IPython scripts with the .ipy extension.
```python
import warnings

with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    %load_ext sql
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Configuring the SqlMagic extension
```python
%config SqlMagic
%config SqlMagic.displaylimit=10
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Connect to SQLite3 database
%sql sqlite:///data/Chinook_Sqlite.sqlite
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Other databases See SQLAlchemy connection strings for how to connect to other databases such as Oracle, MySQL or PostgreSQL. Line magic
```python
%sql SELECT * from Album LIMIT 5;
%sql SELECT * from Artist LIMIT 5;
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Cell magic
```sql
%%sql
SELECT Artist.Name, Album.Title
FROM Album
INNER JOIN Artist on Album.ArtistId = Artist.ArtistId
ORDER BY Artist.Name ASC
LIMIT 5;
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
You can assign results of queries to Python names
```python
result = %sql SELECT * from Album;
type(result)
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Results behave like lists.
result[2:4]
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
You can use Python variables in your queries. Use :varname where you want to use a Python variable in your query.
```python
artist_id = 10
%sql select * from Artist where ArtistId < :artist_id;

word = '%rock%'
%sql select * from Album WHERE Title LIKE :word;
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Convert to pandas dataframe
```python
df = result.DataFrame()
df.head(5)
```
notebook/08_SQL.ipynb
cliburn/sta-663-2017
mit
Step 3: Visualization. IMPORTANT: DO NOT RUN UNLESS YOU HAVE A SUPERCOMPUTER - UNREALISTIC TO RUN AT THIS TIME. Now that the data has been transformed, we can visualize the new data.
c = cl.Clarity("Fear187") c.loadImg().imgToPoints(threshold=0.02,sample=0.3).showHistogram(bins=256)
code/Advanced Texture Based Clarity Visualization.ipynb
Upward-Spiral-Science/claritycontrol
apache-2.0
Let's compare the results to the pre-equalized histogram image data.
```python
import clarity as cl
import clarity.resources as rs

c = cl.Clarity("Fear187")
c.loadImg().imgToPoints(threshold=0.02, sample=0.3).showHistogram(bins=256)
c.loadImg().imgToPoints(threshold=0.04, sample=0.5).savePoints()
c.loadPoints().show()
```
code/Advanced Texture Based Clarity Visualization.ipynb
Upward-Spiral-Science/claritycontrol
apache-2.0
enumerate() becomes particularly useful when you have a case where you need to have some sort of tracker. For example:
```python
for count, item in enumerate(lst):
    if count >= 2:
        break
    else:
        print(item)
```
PythonBootCamp/Complete-Python-Bootcamp-master/Enumerate.ipynb
yashdeeph709/Algorithms
apache-2.0
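enumerate also accepts a start offset, which is handy when the tracker should be 1-based rather than 0-based; a quick sketch:

```python
# enumerate(iterable, start=1) yields (1, item), (2, item), ...
lst = ['a', 'b', 'c']
for count, item in enumerate(lst, start=1):
    print(count, item)
# prints:
# 1 a
# 2 b
# 3 c
```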
1.-- How many records does the file have?
```python
x = pd.read_csv('AportesDiario_2015.csv', sep=';', decimal=',',
                thousands='.', skiprows=2)
len(x)
x.head()
```
ETVL-IPy-09-Taller-Energia.ipynb
ikvergarab/DiplomadoOLADE
mit
2.-- How many different hydrological regions are there?
len(set(x['Region Hidrologica']))
ETVL-IPy-09-Taller-Energia.ipynb
ikvergarab/DiplomadoOLADE
mit
3.-- How many rivers are there?
len(set(x['Nombre Rio']))
ETVL-IPy-09-Taller-Energia.ipynb
ikvergarab/DiplomadoOLADE
mit
4.-- How many records are there per hydrological region?
```python
y = x.groupby('Region Hidrologica')
y.size()
```
ETVL-IPy-09-Taller-Energia.ipynb
ikvergarab/DiplomadoOLADE
mit
5.-- What is the average energy contribution (kWh) per region?
x.groupby('Region Hidrologica').mean()['Aportes %']
ETVL-IPy-09-Taller-Energia.ipynb
ikvergarab/DiplomadoOLADE
mit
6.-- Which records have missing data?
```python
Caudal = len(x[x['Aportes Caudal m3/s'].isnull()])
Aportes = len(x[x['Aportes Energia kWh'].isnull()])
Aport = len(x[x['Aportes %'].isnull()])
print(Caudal)
print(Aportes)
print(Aport)
# x.dropna() drops the records containing NA values
len(x) - len(x.dropna())
```
ETVL-IPy-09-Taller-Energia.ipynb
ikvergarab/DiplomadoOLADE
mit
7.-- Plot (bar chart) the average production per hydrological region.
```python
import matplotlib
%matplotlib inline
x.groupby('Region Hidrologica').mean()['Aportes Energia kWh'].plot(kind='bar')
```
ETVL-IPy-09-Taller-Energia.ipynb
ikvergarab/DiplomadoOLADE
mit
Download Isochrones We use a log-space age grid for ages less than a billion years, and a linear grid of every 0.5 Gyr thereafter.
```python
from astropy.coordinates import Distance
import astropy.units as u
from padova import AgeGridRequest, IsochroneRequest
from starfisher import LibraryBuilder

z_grid = [0.015, 0.019, 0.024]
delta_gyr = 0.5
late_ages = np.log10(np.arange(1e9 + delta_gyr, 13e9, delta_gyr * 1e9))
if not os.path.exists(os.path.join(STARFIS...
```
notebooks/Brick_23_IR_V3.ipynb
jonathansick/androcmd
mit
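The hybrid grid described above can be sketched with numpy alone. The log-spaced edges below 1 Gyr are an assumption for illustration (the notebook's own early-age grid is not shown in this excerpt); the linear 0.5 Gyr part mirrors the cell above:

```python
import numpy as np

# log10(age/yr) grid: log-spaced below 1 Gyr, then every 0.5 Gyr up to 13 Gyr
early_ages = np.arange(6.6, 9.0, 0.1)          # assumed edges, < 1 Gyr
delta_gyr = 0.5
late_ages = np.log10(np.arange(1.5e9, 13e9, delta_gyr * 1e9))
age_grid = np.concatenate([early_ages, late_ages])
```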
Build the Isochrone Library and Synthesize CMD planes
```python
from collections import namedtuple
from starfisher import Lockfile
from starfisher import Synth
from starfisher import ExtinctionDistribution
from starfisher import ExtantCrowdingTable
from starfisher import ColorPlane
from m31hst.phatast import PhatAstTable

if not os.path.exists(os.path.join(STARFISH, synth_dir)):
    ...
```
notebooks/Brick_23_IR_V3.ipynb
jonathansick/androcmd
mit
Here we visualize the isochrone bins in $\log(\mathrm{age})$ space.
```python
from starfisher.plots import plot_lock_polygons, plot_isochrone_logage_logzsol

fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
plot_isochrone_logage_logzsol(ax, builder, c='k', s=8)
plot_lock_polygons(ax, lockfile, facecolor='None', edgecolor='r')
ax.set_xlim(6, 10.2)
ax.set_ylim(-0.2, 0.2)
ax.set_xlabel(r"...
```
notebooks/Brick_23_IR_V3.ipynb
jonathansick/androcmd
mit
Export the dataset for StarFISH
```python
from astropy.table import Table
from m31hst import phat_v2_phot_path

if not os.path.exists(os.path.join(STARFISH, fit_dir)):
    os.makedirs(os.path.join(STARFISH, fit_dir))
data_root = os.path.join(fit_dir, "b23ir.")
full_data_path = os.path.join(STARFISH, '{0}f110f160'.format(data_root))
brick_table = Table.read(ph...
```
notebooks/Brick_23_IR_V3.ipynb
jonathansick/androcmd
mit
Run StarFISH SFH
```python
from starfisher import SFH, Mask

mask = Mask(colour_planes)
sfh = SFH(data_root, synth, mask, fit_dir)
if not os.path.exists(sfh.full_outfile_path):
    sfh.run_sfh()
sfh_table = sfh.solution_table()
```
notebooks/Brick_23_IR_V3.ipynb
jonathansick/androcmd
mit
Visualization of the SFH
```python
from starfisher.sfhplot import LinearSFHCirclePlot, SFHCirclePlot

fig = plt.figure(figsize=(9, 5))
ax_log = fig.add_subplot(121)
ax_lin = fig.add_subplot(122)
cp = SFHCirclePlot(sfh_table)
cp.plot_in_ax(ax_log, max_area=800)
for logage in np.log10(np.arange(1, 13, 1) * 1e9):
    ax_log.axvline(logage, c='0.8', zorder...
```
notebooks/Brick_23_IR_V3.ipynb
jonathansick/androcmd
mit
Comparison of Observed and Modelled CMDs
```python
import cubehelix

cmapper = lambda: cubehelix.cmap(startHue=240, endHue=-300, minSat=1,
                                 maxSat=2.5, minLight=.3, maxLight=.8, gamma=.9)

from starfisher.sfhplot import ChiTriptykPlot

fig = plt.figure(figsize=(10, 6))
ctp = ChiTriptykPlot(sfh.full_chi_path, 1, ir_cmd.x_span, ir_cmd.y_span, ...
```
notebooks/Brick_23_IR_V3.ipynb
jonathansick/androcmd
mit
Linear Regression with L2 regularization (Ridge Regression)
```python
def getModel(alpha):
    return Ridge(alpha=alpha, fit_intercept=True, normalize=False,
                 copy_X=True, random_state=random_state)

model = getModel(alpha=0.01)
cvs = cross_val_score(estimator=model, X=XX, y=yy, cv=10)
cvs
cv_score = np.mean(cvs)
cv_score

def gpOptimization(n_jobs=n_jobs, cv=10, verbose=True):
    ...
```
02_preprocessing/exploration01-linear_regression.ipynb
pligor/predicting-future-product-prices
agpl-3.0
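As background for the cell above (this sketch is not from the notebook itself): Ridge regression minimizes $\|y - Xw\|^2 + \alpha \|w\|^2$, which has the closed-form solution $w = (X^\top X + \alpha I)^{-1} X^\top y$. A minimal check with synthetic data, matching `fit_intercept=False` so the closed form applies exactly (all variable names are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.randn(100)

alpha = 0.01
# Closed-form ridge solution: w = (X^T X + alpha*I)^{-1} X^T y
w_closed = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)

# sklearn's Ridge (fit_intercept=False so it matches the closed form exactly)
model = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)

print(np.allclose(w_closed, model.coef_, atol=1e-6))  # True
```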
The numbers show a good correlation coefficient between the true values and the predicted ones
def fit_scatter(y_true, y_pred, x_label='Measured', y_label='Predicted'):
    assert y_true.shape == y_pred.shape
    fig, ax = plt.subplots()
    ax.scatter(y_true, y_pred)
    ax.plot([y_true.min(), y_true.max()], [y_true.min(), y_true.max()], 'k--', lw=4)
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)
    ...
02_preprocessing/exploration01-linear_regression.ipynb
pligor/predicting-future-product-prices
agpl-3.0
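The "correlation coefficient" mentioned above is presumably the Pearson correlation between measured and predicted prices; a hedged sketch of how one might compute it (the values below are invented for illustration):

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative stand-ins for the notebook's y_true / y_pred
y_true = np.array([10., 20., 30., 40., 50.])
y_pred = np.array([12., 18., 33., 39., 52.])

# Pearson r close to 1 means the scatter hugs the y = x diagonal
r, p_value = pearsonr(y_true, y_pred)
print(round(r, 3))  # 0.991
```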
The error is roughly Gaussian-distributed, although the test dataset is rather small. Which coefficient is stronger?
coefdic = dict(zip(XX.columns, np.absolute(model.coef_)))

from collections import OrderedDict

#weights_sorted = sorted(coefdic, key=coefdic.get)[::-1]
weights_sorted = OrderedDict(sorted(coefdic.items(), key=lambda x: x[1]))
weights_sorted.keys()[::-1]
plt.rc('ytick', labelsize=20)

def weight_plot(weights_dic, st...
02_preprocessing/exploration01-linear_regression.ipynb
pligor/predicting-future-product-prices
agpl-3.0
The conclusion is that even though CPU power was independently correlated with price, the overall effect on price seems to be driven by the manufacturer and by cool features such as an iris scanner or a Tango sensor, which allows augmented reality to work better. Good Deal or not We are determining if the mobile phone is a good...
df.shape
df_all = pd.concat((df, df_test), axis=0)
XX_all = df_all.drop(labels=SkroutzMobile.PRICE_COLS, axis=1)
yy_all = df_all[SkroutzMobile.TARGET_COL]
XX_all.shape, yy_all.shape
preds_all = model.predict(XX_all)
len(preds_all)
deal = preds_all - yy_all
deal.sample(5, random_state=random_state)
sorted_deal = d...
02_preprocessing/exploration01-linear_regression.ipynb
pligor/predicting-future-product-prices
agpl-3.0
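The `deal` score in the cell above is the predicted price minus the listed price, so a positive value suggests the phone sells for less than the model expects. A minimal sketch of that logic (the phone names and prices are invented for illustration):

```python
import numpy as np
import pandas as pd

# Invented listed prices and model predictions for illustration
prices = pd.Series([300., 450., 200.], index=["phone_a", "phone_b", "phone_c"])
predicted = np.array([350., 420., 260.])

# Positive deal = model thinks the phone is worth more than it costs
deal = pd.Series(predicted, index=prices.index) - prices
good_deals = deal.sort_values(ascending=False)
print(good_deals.index[0])  # phone_c (the most underpriced phone)
```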
Import
import pandas
import numpy
from folding_group import FoldingGroupClassifier
from rep.data import LabeledDataStorage
from rep.report import ClassificationReport
from rep.report.metrics import RocAuc
from sklearn.metrics import roc_curve, roc_auc_score
from decisiontrain import DecisionTrainClassifier
from rep.estimators...
experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb
tata-antares/tagging_LHCb
apache-2.0
Reading initial data
import root_numpy

MC = pandas.DataFrame(root_numpy.root2array('../datasets/MC/csv/WG/Bu_JPsiK/2012/Tracks.root', stop=5000000))
data = pandas.DataFrame(root_numpy.root2array('../datasets/data/csv/WG/Bu_JPsiK/2012/Tracks.root', stop=5000000))
data.head()
MC.head()
experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb
tata-antares/tagging_LHCb
apache-2.0
Data preprocessing:

Add necessary features:
- define label = signB * signTrack
  * if > 0 (same sign) - label **1**
  * if < 0 (different sign) - label **0**
- diff pt, min/max PID

Apply selections:
- remove ghost tracks
- loose selection on PID
from utils import data_tracks_preprocessing

data = data_tracks_preprocessing(data, N_sig_sw=True)
MC = data_tracks_preprocessing(MC)
', '.join(data.columns)
print sum(data.signB == 1), sum(data.signB == -1)
print sum(MC.signB == 1), sum(MC.signB == -1)
experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb
tata-antares/tagging_LHCb
apache-2.0
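The labelling rule described above (label **1** when signB * signTrack > 0, label **0** otherwise) can be sketched as follows; the column names mirror the notebook, but the rows are made up:

```python
import pandas as pd

tracks = pd.DataFrame({
    "signB":     [1,  1, -1, -1],
    "signTrack": [1, -1,  1, -1],
})

# same-sign (product > 0) -> label 1, opposite-sign -> label 0
tracks["label"] = (tracks.signB * tracks.signTrack > 0).astype(int)
print(tracks.label.tolist())  # [1, 0, 0, 1]
```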
Define mask for non-B events
mask_sw_positive = (data.N_sig_sw.values > 1) * 1
data.head()
data['group_column'] = numpy.unique(data.event_id, return_inverse=True)[1]
MC['group_column'] = numpy.unique(MC.event_id, return_inverse=True)[1]
data.index = numpy.arange(len(data))
MC.index = numpy.arange(len(MC))
experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb
tata-antares/tagging_LHCb
apache-2.0
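The `group_column` construction above relies on `numpy.unique(..., return_inverse=True)`, which maps each event id to a dense integer index; a small demonstration with invented event ids:

```python
import numpy as np

event_id = np.array([105, 42, 105, 7, 42, 105])

# return_inverse gives, for each element, the index of its value in the
# sorted unique array -> a dense group label shared by tracks of one event
group_column = np.unique(event_id, return_inverse=True)[1]
print(group_column.tolist())  # [2, 1, 2, 0, 1, 2]
```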
Define features
# features = ['cos_diff_phi', 'diff_pt', 'partPt', 'partP', 'nnkrec', 'diff_eta', 'EOverP',
#             'ptB', 'sum_PID_mu_k', 'proj', 'PIDNNe', 'sum_PID_k_e', 'PIDNNk', 'sum_PID_mu_e', 'PIDNNm',
#             'phi', 'IP', 'IPerr', 'IPs', 'veloch', 'max_PID_k_e', 'ghostProb',
#             'IPPU', 'eta', 'max_PID_m...
experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb
tata-antares/tagging_LHCb
apache-2.0
Test that B-events are similar in MC and data
b_ids_data = numpy.unique(data.group_column.values, return_index=True)[1]
b_ids_MC = numpy.unique(MC.group_column.values, return_index=True)[1]
Bdata = data.iloc[b_ids_data].copy()
BMC = MC.iloc[b_ids_MC].copy()
Bdata['Beta'] = Bdata.diff_eta + Bdata.eta
BMC['Beta'] = BMC.diff_eta + BMC.eta
Bdata['Bphi'] = Bdata.dif...
experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb
tata-antares/tagging_LHCb
apache-2.0
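One common way to check that a B-event feature (e.g. `Beta`) is distributed similarly in MC and data is a two-sample Kolmogorov-Smirnov test; this is a generic sketch with synthetic samples, not the notebook's own check:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.RandomState(42)
# Stand-ins for one B-event feature in data and in MC
feature_data = rng.normal(loc=0.0, scale=1.0, size=2000)
feature_mc = rng.normal(loc=0.0, scale=1.0, size=2000)

# A large p-value means no evidence that the two samples differ
statistic, p_value = ks_2samp(feature_data, feature_mc)
print(statistic, p_value)
```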