We want to try a range of sampling area sizes. We analyse how the potential is affected. We also want low variance (plateau conditions). Ideally we should have as large an area as possible, with low (< 1e-5) variance.
dim = [1, 10, 20, 40, 60, 80, 100]
print("Dimension Potential Variance")
print("--------------------------------")
for d in dim:
    cube = [d, d, d]
    cube_potential, cube_var = md.volume_average(cube_origin, cube, grid_pot, NGX, NGY, NGZ, travelled=travelled)
    print(" %3i %10.4f %10.6f" % (d, cube_potential, cu...
tutorials/Porous/Porous.ipynb
WMD-group/MacroDensity
mit
From the OUTCAR the VBM is at -2.4396 V
print("IP: %3.4f eV" % (2.3068 - (-2.4396)))
tutorials/Porous/Porous.ipynb
WMD-group/MacroDensity
mit
Verify tables exist Run the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0

%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lab Task #1: Model 1: Apply the ML.FEATURE_CROSS clause to categorical features BigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross with syntax ML.FEATURE_CROSS(STRUCT(features), degree) where features are comma-separated categorical columns and degree is highest degree of all c...
%%bigquery
CREATE OR REPLACE MODEL
    babyweight.model_1
OPTIONS (
    MODEL_TYPE="LINEAR_REG",
    INPUT_LABEL_COLS=["weight_pounds"],
    L2_REG=0.1,
    DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
    # TODO: Add base features and label
    ML.FEATURE_CROSS(
        # TODO: Cross categorical features
    ) AS gender_...
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
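To build intuition for what a crossed column contains, here is a hypothetical Python sketch. The column names is_male and plurality come from this lab, but the joined-string value format is an assumption for illustration, not BigQuery's documented output:

```python
from itertools import product

# Illustrative degree-2 feature cross: every pairing of values from two
# categorical features becomes one value of a new synthetic categorical
# feature. The "a_b" string format here is an assumption.
is_male = ['true', 'false']
plurality = ['Single(1)', 'Twins(2)']
crossed = ['{}_{}'.format(m, p) for m, p in product(is_male, plurality)]
print(crossed)
```

With two binary features this yields four crossed categories, which is why high-degree crosses of high-cardinality columns can blow up model size.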
Create two SQL statements to evaluate the model.
%%bigquery
SELECT
    *
FROM
    ML.EVALUATE(MODEL babyweight.model_1, (
    SELECT
        # TODO: Add same features and label as training
    FROM
        babyweight.babyweight_data_eval
    ))

%%bigquery
SELECT
    # TODO: Select just the calculated RMSE
FROM
    ML.EVALUATE(MODEL babyweight.model_1, (
    ...
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lab Task #2: Model 2: Apply the BUCKETIZE function. BUCKETIZE is a pre-processing function that creates "buckets" (i.e. bins): it bucketizes a continuous numerical feature into a string feature whose values are bucket names, with syntax ML.BUCKETIZE(feature, split_points), where split_points is an array of numeri...
%%bigquery
CREATE OR REPLACE MODEL
    babyweight.model_2
OPTIONS (
    MODEL_TYPE="LINEAR_REG",
    INPUT_LABEL_COLS=["weight_pounds"],
    L2_REG=0.1,
    DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    ML.FEATURE_CROSS(
        STRUCT(...
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
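As a rough Python illustration of the bucketizing behavior described above. The split points are made up, and the bucket-name format (bin_1, bin_2, ...) is an assumption about BigQuery's output, used only to show the mapping from a continuous value to a named bucket:

```python
# Sketch of the idea behind ML.BUCKETIZE: map a continuous value to a
# string bucket name using an ordered array of split points.
# Bucket names like 'bin_1' are an assumption, not BigQuery's exact output.
def bucketize(value, split_points):
    for i, split in enumerate(split_points):
        if value < split:
            return 'bin_%d' % (i + 1)
    return 'bin_%d' % (len(split_points) + 1)

# hypothetical split points for a mother_age-like feature
print(bucketize(20, [15, 17, 32]))
```

n split points produce n+1 buckets: everything below the first split, the intervals between splits, and everything at or above the last split.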
Create three SQL statements to EVALUATE the model. Let's now retrieve the training statistics and evaluate the model.
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2)
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We now evaluate our model on our eval dataset:
%%bigquery
SELECT
    *
FROM
    ML.EVALUATE(MODEL babyweight.model_2, (
    SELECT
        # TODO: Add same features and label as training
    FROM
        babyweight.babyweight_data_eval))
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's select the mean_squared_error from the evaluation table we just computed and take its square root to obtain the rmse.
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM
    ML.EVALUATE(MODEL babyweight.model_2, (
    SELECT
        # TODO: Add same features and label as training
    FROM
        babyweight.babyweight_data_eval))
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
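The relationship between the evaluation metric and the reported RMSE, in plain Python (the MSE value here is made up for illustration):

```python
import math

# RMSE is simply the square root of the mean squared error that
# ML.EVALUATE returns; 1.1664 is a hypothetical value.
mean_squared_error = 1.1664
rmse = math.sqrt(mean_squared_error)
print(rmse)
```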
Lab Task #3: Model 3: Apply the TRANSFORM clause Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. This way we can have the same transformations applied for training and prediction without modifying the queries. Let's apply the TRANSFORM clause to the model_3 and run...
%%bigquery
CREATE OR REPLACE MODEL
    babyweight.model_3
TRANSFORM(
    # TODO: Add base features and label as you would in select
    # TODO: Add transformed features as you would in select
)
OPTIONS (
    MODEL_TYPE="LINEAR_REG",
    INPUT_LABEL_COLS=["weight_pounds"],
    L2_REG=0.1,
    DATA_SPLIT_METHOD="NO_SPL...
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's retrieve the training statistics:
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3)
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We now evaluate our model on our eval dataset:
%%bigquery
SELECT
    *
FROM
    ML.EVALUATE(MODEL babyweight.model_3, (
    SELECT
        *
    FROM
        babyweight.babyweight_data_eval
    ))
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's select the mean_squared_error from the evaluation table we just computed and take its square root to obtain the rmse.
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM
    ML.EVALUATE(MODEL babyweight.model_3, (
    SELECT
        *
    FROM
        babyweight.babyweight_data_eval
    ))
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Next we create a csv reader. You give it a file handle and optionally the dialect, the separator (usually commas or tabs), and the quote character.

with open(filename, 'r') as fh:
    reader = csv.reader(fh, delimiter='\t', quotechar='"')
import csv

with open('walks.csv', 'r') as fh:
    reader = csv.reader(fh, delimiter=',')
Lesson12_TabularData/Tabular Data.ipynb
WomensCodingCircle/CodingCirclePython
mit
The reader doesn't do anything yet. It is a generator that allows you to loop through the data (it is very similar to a file handle). To loop through the data you just write a simple for loop:

for row in reader:
    # process row

Each row will be a list with each element corresponding to a single column.
with open('walks.csv', 'r') as fh:
    reader = csv.reader(fh, delimiter=',')
    for row in reader:
        print(row)
Lesson12_TabularData/Tabular Data.ipynb
WomensCodingCircle/CodingCirclePython
mit
TRY IT Open up the file workout.txt (tab delimited, tab='\t') with the csv reader and print out each row. Doesn't that look nice? Well, there are a few problems that I can see. First, the header: how do we deal with that? Headers The easiest way I have found is to use the next method (that is available with any generator...
with open('walks.csv', 'r') as fh:
    reader = csv.reader(fh, delimiter=',')
    header = next(reader)
    for row in reader:
        print(row)

print("Header", header)
Lesson12_TabularData/Tabular Data.ipynb
WomensCodingCircle/CodingCirclePython
mit
Values are Strings Notice that each item is a string. You'll need to remember that and convert things that actually should be numbers using the float() or int() functions.
with open('walks.csv', 'r') as fh:
    reader = csv.reader(fh, delimiter=',')
    header = next(reader)
    for row in reader:
        float_row = [float(row[0]), float(row[1])]
        print(float_row)
Lesson12_TabularData/Tabular Data.ipynb
WomensCodingCircle/CodingCirclePython
mit
TRY IT Open workouts with a csv reader. Save the header line to a variable called header. Convert each value in the data rows to ints and print them out. Analyzing our data You can use just about everything we have learned up until this point to analyze your data: if statements, regexes, math, data structures. Let's loo...
# Let's find the average distance for all walks.
with open('walks.csv', 'r') as fh:
    reader = csv.reader(fh, delimiter=',')
    header = next(reader)
    # Empty list for storing all distances
    walks = []
    for row in reader:
        # distance is in the first column
        dist = row[0]
        # Convert to ...
Lesson12_TabularData/Tabular Data.ipynb
WomensCodingCircle/CodingCirclePython
mit
Here is something I do all the time. It is a little more complicated than the above examples, so take your time trying to understand it. What I like to do is to read the csv data and transform it to a dictionary of lists. This allows me to use it in many different ways later in the code. It is most useful with larger d...
# Let's see our pace for each walk
with open('walks.csv', 'r') as fh:
    reader = csv.reader(fh, delimiter=',')
    header = next(reader)
    # This is the dictionary we will put our data from the csv into
    # The keys are the column headers and the value is a list of
    # all the data in that column (transformed ...
Lesson12_TabularData/Tabular Data.ipynb
WomensCodingCircle/CodingCirclePython
mit
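The dict-of-lists pattern described above can be sketched end to end. This version uses an in-memory CSV so it is self-contained; the column names distance and time are assumptions standing in for the real walks.csv headers:

```python
import csv
import io

# Read CSV rows into a dictionary of lists keyed by the column headers.
# io.StringIO stands in for open('walks.csv'); the columns are hypothetical.
data = io.StringIO("distance,time\n1.5,30\n2.0,45\n")
reader = csv.reader(data, delimiter=',')
header = next(reader)

# One empty list per column, keyed by header name
columns = {name: [] for name in header}
for row in reader:
    for name, value in zip(header, row):
        columns[name].append(float(value))

print(columns)
```

Once the data is in this shape, any column can be pulled out by name for later analysis, e.g. sum(columns['distance']).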
TRY IT Find the average number of squats done from the workouts.txt file. Feel free to copy the code for opening from the previous TRY IT. Writing CSVs The csv module also contains code for writing csvs. To write, you create a writer using the writer method and give it a filehandle and optionally delimiter and quotecha...
import random

with open('sleep.csv', 'w') as fh:
    writer = csv.writer(fh, delimiter='\t', quotechar='"')
    header = ['day', 'sleep (hr)']
    writer.writerow(header)
    for i in range(1, 11):
        hr_sleep = random.randint(4, 10)
        writer.writerow([i, hr_sleep])

# open the file to prove you wrote i...
Lesson12_TabularData/Tabular Data.ipynb
WomensCodingCircle/CodingCirclePython
mit
Start by defining the parent catalog URL from NCI's THREDDS Data Server. Note: switch the '.html' ending on the URL to '.xml'.
url = 'http://dapds00.nci.org.au/thredds/catalog/u39/public/data/modis/fractionalcover-clw/v2.2/netcdf/catalog.xml'
training/py3/Python3_Siphon_II.ipynb
adamsteer/nci-notebooks
apache-2.0
<a id='part1'></a> Using Siphon Siphon is a collection of Python utilities for downloading data from Unidata data technologies. More information on installing and using Unidata's Siphon can be found at: https://github.com/Unidata/siphon Once a parent dataset directory has been selected, Siphon can be used to search and use the ...
tds = catalog.TDSCatalog(url)
datasets = list(tds.datasets)
endpts = list(tds.datasets.values())
list(tds.datasets.keys())
training/py3/Python3_Siphon_II.ipynb
adamsteer/nci-notebooks
apache-2.0
The possible data service endpoints through NCI's THREDDS include: OPeNDAP, NetCDF Subset Service (NCSS), HTTP download, Web Map Service (WMS), Web Coverage Service (WCS), NetCDF Markup Language (NcML), and a few metadata services (ISO, UDDC).
for key, value in endpts[0].access_urls.items():
    print('{}, {}'.format(key, value))
training/py3/Python3_Siphon_II.ipynb
adamsteer/nci-notebooks
apache-2.0
We can create a small function that uses Siphon's NetCDF Subset Service (NCSS) to extract a spatial request (defined by a lat/lon box).
def get_data(dataset, bbox):
    nc = ncss.NCSS(dataset.access_urls['NetcdfSubset'])
    query = nc.query()
    query.lonlat_box(north=bbox[3], south=bbox[2], east=bbox[1], west=bbox[0])
    query.variables('bs')
    data = nc.get_data(query)
    lon = data['longitude'][:]
    lat = data['latitude'][:]
    b...
training/py3/Python3_Siphon_II.ipynb
adamsteer/nci-notebooks
apache-2.0
Query a single file and view result
bbox = (135, 140, -31, -27)
lon, lat, bs, t = get_data(endpts[0], bbox)

plt.figure(figsize=(10, 10))
plt.imshow(bs, extent=bbox, cmap='gist_earth', origin='upper')
plt.xlabel('longitude (degrees)', fontsize=14)
plt.ylabel('latitude (degrees)', fontsize=14)
print("Date: {}".format(t))
training/py3/Python3_Siphon_II.ipynb
adamsteer/nci-notebooks
apache-2.0
Loop and query over the collection
bbox = (135, 140, -31, -27)
plt.figure(figsize=(10, 10))
for endpt in endpts[:15]:
    try:
        lon, lat, bs, t = get_data(endpt, bbox)
        plt.imshow(bs, extent=bbox, cmap='gist_earth', origin='upper')
        plt.clim(vmin=-2, vmax=100)
        plt.tick_params(labelsize=14)
        plt.xlabel('longitude (de...
training/py3/Python3_Siphon_II.ipynb
adamsteer/nci-notebooks
apache-2.0
We can make an animation of the temporal evolution (this example converts the series of *.png files above into a GIF). <img src="./images/animated.gif"> We can also use Siphon to extract a single point.
def get_point(dataset, lat, lon):
    nc = ncss.NCSS(dataset.access_urls['NetcdfSubset'])
    query = nc.query()
    query.lonlat_point(lon, lat)
    query.variables('bs')
    data = nc.get_data(query)
    bs = data['bs'][0]
    date = data['date'][0]
    return bs, date

bs, date = get_point(endpts[4], -27.7...
training/py3/Python3_Siphon_II.ipynb
adamsteer/nci-notebooks
apache-2.0
Time series example
data = []
for endpt in endpts[::20]:
    bs, date = get_point(endpt, -27.75, 137)
    data.append([date, bs])

import numpy as np
BS = np.array(data)[:, 1]
Date = np.array(data)[:, 0]

plt.figure(figsize=(12, 6))
plt.plot(Date, BS, '-o', linewidth=2, markersize=8)
plt.tick_params(labelsize=14)
plt.xlabel('date', fontsiz...
training/py3/Python3_Siphon_II.ipynb
adamsteer/nci-notebooks
apache-2.0
Set start date, end date and data source ('Yahoo Finance', 'Google Finance', etc.). Download S&P 500 index data from Yahoo Finance.
start_date = datetime.date(1976, 1, 1)
end_date = datetime.date(2017, 1, 1)

# Download S&P 500 index data
try:
    SnP500_Ddata = web.DataReader('^GSPC', 'yahoo', start_date, end_date)
except:
    SnP500_Ddata = pd.read_csv("http://analytics.romanko.ca/data/SP500_hist.csv")
    SnP500_Ddata.index = pd.to_datetime(SnP500_Ddata...
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Transform daily data into annual data:
# Create a time-series of annual data points from daily data
SnP500_Adata = SnP500_Ddata.resample('A').last()
SnP500_Adata[['Volume', 'Adj Close']].tail()
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Compute annual return of S&P 500 index:
SnP500_Adata[['Adj Close']] = SnP500_Adata[['Adj Close']].apply(pd.to_numeric, errors='ignore')
SnP500_Adata['returns'] = SnP500_Adata['Adj Close'] / SnP500_Adata['Adj Close'].shift(1) - 1
SnP500_Adata = SnP500_Adata.dropna()
print(SnP500_Adata['returns'])
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Compute average annual return and standard deviation of return for S&P 500 index:
SnP500_mean_ret = float(SnP500_Adata[['returns']].mean())
SnP500_std_ret = float(SnP500_Adata[['returns']].std())
print("S&P 500 average return = %g%%, st. dev = %g%%" % (100*SnP500_mean_ret, 100*SnP500_std_ret))
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Simulation Example 1 We want to invest \$1000 in the US stock market for 1 year: $v_0 = 1000$
v0 = 1000 # Initial capital
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
In our example we assume that the return of the market over the next year follows a Normal distribution. Between 1977 and 2014, the S&P 500 returned 9.38% per year on average with a standard deviation of 16.15%. Generate 100 scenarios for the market return over the next year (draw 100 random numbers from a Normal distribution...
Ns = 100  # Number of scenarios
r01 = random.normal(SnP500_mean_ret, SnP500_std_ret, Ns)
r01
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Value of investment at the end of year 1: $v_1 = v_0 + r_{0,1}\cdot v_0 = (1 + r_{0,1})\cdot v_0$
# Distribution of value at the end of year 1
v1 = (r01 + 1) * v0
v1
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Mean:
mean(v1)
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Standard deviation:
std(v1)
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Minimum, maximum:
min(v1), max(v1)
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Percentiles: 5th percentile, median, 95th percentile:
percentile(v1, [5, 50, 95])
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
An alternative way to compute percentiles (5th percentile, median, 95th percentile):
sortedScen = sorted(v1)  # Sort scenarios
# With Ns = 100 scenarios, the k-th percentile is approximately the k-th smallest value
sortedScen[5-1], sortedScen[50-1], sortedScen[95-1]
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Plot a histogram of the distribution of outcomes for v1:
hist, bins = histogram(v1)
positions = (bins[:-1] + bins[1:]) / 2
plt.bar(positions, hist, width=60)
plt.xlabel('portfolio value after 1 year')
plt.ylabel('frequency')
plt.show()
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Simulated paths over time:
# Plot simulated paths over time
for res in v1:
    plt.plot((0, 1), (v0, res))
plt.xlabel('time step')
plt.ylabel('portfolio value')
plt.show()
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Simulation Example 2 You are planning for retirement and decide to invest in the market for the next 30 years (instead of only the next year as in example 1). Assume that every year your investment returns from investing into the S&P 500 will follow a Normal distribution with the mean and standard deviation as i...
v0 = 1000 # Initial capital
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Between 1977 and 2014, the S&P 500 returned 9.38% per year on average with a standard deviation of 16.15%. Simulate 30 columns of 100 observations each of single-period returns:
r_speriod30 = random.normal(SnP500_mean_ret, SnP500_std_ret, (Ns, 30))
r_speriod30
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Compute and plot $v_{30}$
v30 = prod(1 + r_speriod30, 1) * v0

hist, bins = histogram(v30)
positions = (bins[:-1] + bins[1:]) / 2
width = (bins[1] - bins[0]) * 0.8
plt.bar(positions, hist, width=width)
plt.xlabel('portfolio value after 30 years')
plt.ylabel('frequency')
plt.show()
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Simulated paths over time:
for scenario in r_speriod30:
    y = [prod(1 + scenario[0:i]) * v0 for i in range(0, 31)]
    plt.plot(range(0, 31), y)
plt.xlabel('time step')
plt.ylabel('portfolio value')
plt.show()
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Simulation Example 3 Download US Treasury bill data from Federal Reserve:
# Download 3-month T-bill rates from Federal Reserve
start_date_b = datetime.date(1977, 1, 1)
end_date_b = datetime.date(2017, 1, 1)
TBill_Ddata = web.DataReader('DTB3', 'fred', start_date_b, end_date_b)
TBill_Ddata.head()
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Transform daily data into annual data:
# Create a time-series of annual data points from daily data
TBill_Adata = TBill_Ddata.resample('A').last()
TBill_Adata[['DTB3']].tail()
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Compute annual return for bonds:
TBill_Adata['returns'] = TBill_Adata['DTB3'] / 100
TBill_Adata = TBill_Adata.dropna()
print(TBill_Adata['returns'])
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Compute average annual return and standard deviation of return for bonds:
TBill_mean_ret = float(TBill_Adata[['returns']].mean())
TBill_std_ret = float(TBill_Adata[['returns']].std())
print("T-bill average return = %g%%, st. dev = %g%%" % (100*TBill_mean_ret, 100*TBill_std_ret))
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Compute covariance matrix:
covMat = cov(array(SnP500_Adata[['returns']]), array(TBill_Adata[['returns']]), rowvar=0)
covMat
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Simulate portfolio:
v0 = 1000   # Initial capital
Ns = 5000   # Number of scenarios
mu = [SnP500_mean_ret, TBill_mean_ret]  # Expected return mu

stockRet = ones(Ns)
bondsRet = ones(Ns)
scenarios = random.multivariate_normal(mu, covMat, Ns)
for year in range(1, 31):
    scenarios = random.multivariate_normal(mu, covMat, Ns)
    stockRet *= (...
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Simulation Example 4 Compare two portfolios
# Compute portfolios by iterating through different combinations of weights
v30comp = []
for w in arange(0.2, 1.01, 0.2):
    v30comp += [w * v0 * stockRet + (1 - w) * v0 * bondsRet]

# Plot a histogram of the distribution of
# differences in outcomes for v30
# (Strategy 4 - Strategy 2)
v30d = v30comp[3] - v30comp[1]
...
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
iRipVanWinkle/ml
mit
Terminology and notation A scalar $\alpha \in \mathbb{R}$ is denoted by a lowercase Greek letter. A vector $a \in \mathbb{R}^n$ is denoted by a lowercase Roman letter. A matrix $A \in \mathbb{R}^{n \times m}$ is denoted by an uppercase Roman letter. Analytic geometry Consider two vectors, $a = (\alpha_...
from random import randint, uniform
from math import pi, cos, sin

NUM_PAIRS = 10
NUM_FRAMES = 10

# returns a random positive unit vector
def random_pair():
    angle = uniform(0, pi / 2)
    return np.array([cos(angle), sin(angle)])

# returns a random color
def random_color():
    r = randint(0, 255)
    g = r...
encontro04/encontro04.ipynb
gabicfa/RedesSociais
gpl-3.0
Note that the multiplications by $A$ make the magnitude of the vectors grow without bound, but their direction converges. To make this clearer, let's normalize after each multiplication.
# normalizes a vector
def normalize(a):
    return a / np.linalg.norm(a)

# random positive unit vectors and random colors
pairs = []
colors = []
for i in range(NUM_PAIRS):
    pairs.append(random_pair())
    colors.append(random_color())

frames = []
for i in range(NUM_FRAMES):
    frames.append(ep.frame_vectors(pairs, colors...
encontro04/encontro04.ipynb
gabicfa/RedesSociais
gpl-3.0
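The normalize-after-multiply idea above is exactly power iteration, and it can be shown without the plotting helpers. The matrix and starting vector below are illustrative, not from the notebook:

```python
import numpy as np

# Minimal power iteration: repeatedly multiply by A and normalize.
# The direction converges to the dominant eigenvector (when one exists);
# this symmetric matrix has eigenvector [1, 1]/sqrt(2) for eigenvalue 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 0.3])
for _ in range(50):
    v = A @ v
    v = v / np.linalg.norm(v)
print(v)
```

After enough iterations the two components become equal, i.e. the direction that multiplication by A cannot change.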
Therefore, the algorithm converges to a direction that multiplication by $A$ cannot change. This corresponds to the definition of an eigenvector given above! It is worth emphasizing, however, that not every matrix guarantees convergence. Adjacency matrix Consider a graph $(N, E)$ and a matrix $A \in \{0, 1\}^{|N| \times |N|}$. Denoting by ...
sn.graph_width = 320
sn.graph_height = 180

g = sn.load_graph('encontro02/3-bellman.gml', has_pos=True)
for n in g.nodes():
    g.node[n]['label'] = str(n)
sn.show_graph(g, nlab=True)

matrix = sn.build_matrix(g)
print(matrix)
encontro04/encontro04.ipynb
gabicfa/RedesSociais
gpl-3.0
Table 1 - New Very Low Mass Members of NGC 1333. (Internet is still not working.)
tbl1 = pd.read_clipboard(#"http://iopscience.iop.org/0004-637X/756/1/24/suppdata/apj437811t1_ascii.txt",
                         sep='\t', skiprows=[0,1,2,4], skipfooter=3,
                         engine='python', usecols=range(10))
tbl1

! mkdir ../data/Scholz2012
tbl1.to_csv("../data/Scholz2012/tbl1.csv", index=False)
notebooks/Scholz2012.ipynb
BrownDwarf/ApJdataFrames
mit
MNIST Data (cf. http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz or https://www-labs.iro.umontreal.ca/~lisa/deep/data/mnist/?C=M;O=A)
import gzip
import six.moves.cPickle as pickle

# find where `mnist.pkl.gz` is on your own computer
f = gzip.open("../Data/mnist.pkl.gz", 'rb')
try:
    train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
except:
    train_set, valid_set, test_set = pickle.load(f)
f.close()

train_set_x, train_set_y = train_set
v...
CNN/CNN_tf.ipynb
ernestyalumni/MLgrabbag
mit
Turn this into a so-called "one-hot vector representation." Recall that whereas the original labels (in the variable y) were 0,1, ..., 9 for 10 different (single) digits, for the purpose of training a neural network, we need to recode these labels as vectors containing only values 0 or 1.
K = 10
m_train = train_set_y.shape[0]
m_valid = valid_set_y.shape[0]
m_test = test_set_y.shape[0]

y_train = [np.zeros(K) for row in train_set_y]  # list of m_train numpy arrays of size dims. (10,)
y_valid = [np.zeros(K) for row in valid_set_y]  # list of m_valid numpy arrays of size dims. (10,)
y_test = [np.zeros(K) for...
CNN/CNN_tf.ipynb
ernestyalumni/MLgrabbag
mit
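A vectorized NumPy sketch of the recoding described above; filling a list of np.zeros(K) arrays in a loop, as the cell above starts to do, produces the same result:

```python
import numpy as np

# One-hot encode digit labels 0..9: row i gets a 1 in column labels[i],
# zeros everywhere else.
def one_hot(labels, K=10):
    out = np.zeros((len(labels), K))
    out[np.arange(len(labels)), labels] = 1.0
    return out

Y = one_hot(np.array([3, 0, 9]))
print(Y.shape)  # one row per label, K columns
```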
Convolution operator $*$ and related filter (stencil) $c$ example
X = tf.placeholder(tf.float32, shape=[None, None, None, 3], name="X")

filter_size = (9, 9, 3, 2)  # (W_1, W_2, C_lm1, C_l)
c_bound = np.sqrt(3 * 9 * 9)
c = tf.Variable(
    tf.random_uniform(filter_size, minval=-1.0/c_bound, maxval=1.0/c_bound))
b = tf.Variable(tf...
CNN/CNN_tf.ipynb
ernestyalumni/MLgrabbag
mit
Let's have a little bit of fun with this:
import pylab
from PIL import Image

# open example image of dimensions 639x516, HxW
img = Image.open(open('../Data/3wolfmoon.jpg'))
print(img.size)  # WxH
print(np.asarray(img).max())
print(np.asarray(img).min())

# dimensions are (height, width, channel)
img_np = np.asarray(img, dtype=np.float32) / 256.
print(img_np.shape)...
CNN/CNN_tf.ipynb
ernestyalumni/MLgrabbag
mit
Empirically, we see that the image "shrank" in size. We can infer that this convolution operation doesn't assume anything about the boundary conditions, and so the filter (stencil), requiring in this case a 9x9 "block" of values, will, near the boundaries, only output values for the "inside" cells/gr...
input = tf.placeholder(tf.float32, shape=[None, None, None, None], name="input")

maxpool_shape = (2, 2)
window_size = (1,) + maxpool_shape + (1,)
pool_out = tf.nn.max_pool(input, ksize=window_size, strides=window_size, padding="VALID")

tf.reset_default_graph()
sess =...
CNN/CNN_tf.ipynb
ernestyalumni/MLgrabbag
mit
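The boundary behavior described above can be checked with a small pure-NumPy "valid" convolution; this is a sketch of the concept, not the TensorFlow implementation:

```python
import numpy as np

# 'Valid' 2D convolution: a k x k filter is applied only where it fits
# entirely inside the image, so an H x W input shrinks to
# (H - k + 1) x (W - k + 1) -- no assumptions about the boundary.
def valid_conv2d(img, kernel):
    H, W = img.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+k, j:j+k] * kernel)
    return out

out = valid_conv2d(np.ones((12, 12)), np.ones((9, 9)))
print(out.shape)
```

For the 516x639 wolf-moon image and a 9x9 filter, the same arithmetic gives a 508x631 output.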
Convolution axon test
sys.path.append("../ML/")
import CNN_tf
from CNN_tf import Axon_CNN

tf.reset_default_graph()

filter_size = (9, 9, 3, 2)  # (W_1, W_2, C_lm1, C_l)
c_bound = np.sqrt(3 * 9 * 9)
c = tf.Variable(
    tf.random_uniform(filter_size, minval=-1.0/c_bound, maxval=...
CNN/CNN_tf.ipynb
ernestyalumni/MLgrabbag
mit
CNN Feedforward test for Convolution Neural Networks
# sanity check
CNNFF_test = CNN_tf.Feedforward(2, ('C', 'C'),
                                [{"C_ls": (1, 20),  "Wl": (5, 5), "Pl": (2, 2), "Ll": (28, 28)},
                                 {"C_ls": (20, 50), "Wl": (5, 5), "Pl": (2, 2), "Ll": (12, 12)}],
                                psi_L=tf.tanh)
s_l = (50*4*4, 500)
tf.reshape(CNNFF_test.Axons[...
CNN/CNN_tf.ipynb
ernestyalumni/MLgrabbag
mit
CNN class test as a Deep Neural Network (i.e. an artificial neural network with no convolutions)
L = 4
CorD = ('D', 'D', 'D', 'D')
dims_data_test = [(784, 392), (392, 196), (196, 98), (98, 10)]
CNNFF_test = CNN_tf.Feedforward(L, CorD, dims_data_test, psi_L=tf.nn.softmax)
CNN_test = CNN_tf.CNN(CNNFF_test)
CNN_test.connect_through()
CNN_test.build_J_L2norm_w_reg(lambda_val=0.01)
CNN_test.build_optimizer()
CNN_test.train_model(max_i...
CNN/CNN_tf.ipynb
ernestyalumni/MLgrabbag
mit
CNN class test for Convolution Neural Networks
sys.path.append("../ML/")
import CNN_tf
from CNN_tf import Axon_CNN

L = 4
CorD = ('C', 'C', 'D', 'D')
dims_data_test = [{"C_ls": (1, 20),  "Wl": (5, 5), "Pl": (2, 2), "Ll": (28, 28)},
                  {"C_ls": (20, 50), "Wl": (5, 5), "Pl": (2, 2), "Ll": (12, 12)},
                  (50*4*4, 500), (500, 10)]
CNNFF_test = CNN_tf.Feedforward(L, CorD, dims_d...
CNN/CNN_tf.ipynb
ernestyalumni/MLgrabbag
mit
As discussed in the wikipedia article, a polydivisible number with n-1 digits can be extended to a polydivisible number with n digits in 10/n different ways. So we can estimate the number of n-digit polydivisible numbers $$ F(n) \approx \frac{9\times 10^{n-1}}{n!} $$ The value tends to zero as $n\to \infty$ and we can ...
S_f = (9 * math.e**10) / 10
int(S_f)
sum(nvnums)
vitale.ipynb
ianabc/vitale
gpl-2.0
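As a quick check on the closed form used above: summing $F(n) = 9 \times 10^{n-1}/n!$ over $n \ge 1$ gives $(9/10)(e^{10} - 1)$, and the $-1$ is negligible next to $e^{10} \approx 22026$, which is why the cell above uses $9e^{10}/10$:

```python
import math

# Sum the estimate F(n) = 9 * 10**(n-1) / n! over n >= 1.
# Factoring out 9/10 leaves sum(10**n / n!) = e**10 - 1;
# the series converges fast, so truncating at n = 60 is plenty.
series = sum(9 * 10**(n - 1) / math.factorial(n) for n in range(1, 60))
closed = 9 * (math.e**10 - 1) / 10
print(series, closed)
```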
So the estimate is off by only
(sum(nvnums) - S_f) / sum(nvnums)
vitale.ipynb
ianabc/vitale
gpl-2.0
http://www.ssa.gov/oact/babynames/limits.html Load file into a DataFrame
import pandas as pd

names2010 = pd.read_csv('/resources/yob2010.txt', names=['name', 'sex', 'births'])
names2010
Data Science UA - September 2017/Lecture 01 - Introduction/US_Baby_Names-2010.ipynb
iRipVanWinkle/ml
mit
Total number of births in year 2010 by sex
names2010.groupby('sex').births.sum()
Data Science UA - September 2017/Lecture 01 - Introduction/US_Baby_Names-2010.ipynb
iRipVanWinkle/ml
mit
Insert prop column for each group
def add_prop(group):
    # Integer division floors
    births = group.births.astype(float)
    group['prop'] = births / births.sum()
    return group

names2010 = names2010.groupby(['sex']).apply(add_prop)
names2010
Data Science UA - September 2017/Lecture 01 - Introduction/US_Baby_Names-2010.ipynb
iRipVanWinkle/ml
mit
Verify that the prop column sums to 1 within all the groups
import numpy as np

np.allclose(names2010.groupby(['sex']).prop.sum(), 1)
Data Science UA - September 2017/Lecture 01 - Introduction/US_Baby_Names-2010.ipynb
iRipVanWinkle/ml
mit
Extract a subset of the data with the top 10 names for each sex
def get_top10(group):
    return group.sort_values(by='births', ascending=False)[:10]

grouped = names2010.groupby(['sex'])
top10 = grouped.apply(get_top10)
top10.index = np.arange(len(top10))
top10
Data Science UA - September 2017/Lecture 01 - Introduction/US_Baby_Names-2010.ipynb
iRipVanWinkle/ml
mit
Aggregate all births by the first letter of the name column
# extract first letter from name column
get_first_letter = lambda x: x[0]
first_letters = names2010.name.map(get_first_letter)
first_letters.name = 'first_letter'

table = names2010.pivot_table('births', index=first_letters, columns=['sex'], aggfunc=sum)
table.head()
Data Science UA - September 2017/Lecture 01 - Introduction/US_Baby_Names-2010.ipynb
iRipVanWinkle/ml
mit
Normalize the table
table.sum()
letter_prop = table / table.sum().astype(float)
Data Science UA - September 2017/Lecture 01 - Introduction/US_Baby_Names-2010.ipynb
iRipVanWinkle/ml
mit
Plot the proportion of boys' and girls' names starting with each letter
%matplotlib inline
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1, figsize=(10, 8))
letter_prop['M'].plot(kind='bar', rot=0, ax=axes[0], title='Male')
letter_prop['F'].plot(kind='bar', rot=0, ax=axes[1], title='Female', legend=False)
Data Science UA - September 2017/Lecture 01 - Introduction/US_Baby_Names-2010.ipynb
iRipVanWinkle/ml
mit
With this second approach we have managed to stabilize the data. We will try to lower that percentage. As a second approach, we will modify the increments in which the diameter lies between $1.80mm$ and $1.70 mm$, in both directions (cases 3 to 6). Comparison of Diametro X versus Diametro Y for...
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
medidas/12082015/.ipynb_checkpoints/Análisis de datos Ensayo 2-checkpoint.ipynb
darkomen/TFG
cc0-1.0
Each row of the table corresponds to a different bike trip, and we can use an analytic function to calculate the cumulative number of trips for each date in 2015.
# Query to count the (cumulative) number of trips per day num_trips_query = """ WITH trips_by_day AS ( SELECT DATE(start_date) AS trip_date, COUNT(*) as num_trips FROM `bigquery-public-data.san_francisco.bikeshare_trips` ...
notebooks/sql_advanced/raw/tut2.ipynb
Kaggle/learntools
apache-2.0
The query uses a common table expression (CTE) to first calculate the daily number of trips. Then, we use SUM() as an analytic function over a window. - Since there is no PARTITION BY clause, the entire table is treated as a single partition. - The ORDER BY clause orders the rows by date, where earlier dates appear first. - By se...
# Query to track beginning and ending stations on October 25, 2015, for each bike start_end_query = """ SELECT bike_number, TIME(start_date) AS trip_time, FIRST_VALUE(start_station_id) OVER ( PARTITION...
notebooks/sql_advanced/raw/tut2.ipynb
Kaggle/learntools
apache-2.0
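For readers more comfortable in pandas, the partitioned FIRST_VALUE/LAST_VALUE logic can be sketched on a hypothetical miniature trips table (column names assumed to mirror the query above):

```python
import pandas as pd

# Toy stand-in for the bikeshare trips table (hypothetical data).
trips = pd.DataFrame({
    'bike_number': [1, 1, 2, 2, 2],
    'trip_time': ['09:00', '12:00', '08:00', '10:00', '11:00'],
    'start_station_id': [5, 7, 3, 4, 6],
    'end_station_id': [7, 9, 4, 6, 8],
})
# PARTITION BY bike_number ORDER BY trip_time, then take the first
# start station and last end station within each partition.
trips = trips.sort_values(['bike_number', 'trip_time'])
grouped = trips.groupby('bike_number')
trips['first_station_id'] = grouped['start_station_id'].transform('first')
trips['last_station_id'] = grouped['end_station_id'].transform('last')
print(trips)
```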
2.3 search vs. match: match only matches from the start of the string, but a pattern is far more likely to appear somewhere in the middle of a string than at its beginning
import re m = re.match('foo', 'seafood') if m is not None: m.group()
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
The search function scans the string and returns a match for the first occurrence of the pattern
re.search('foo', 'seafood').group()
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
2.4 Matching more than one string (alternation with |)
bt = 'bat|bet|bit' re.match(bt,'bat').group() re.match(bt, 'blt').group() re.match(bt, 'He bit me!').group() re.search(bt, 'He bit me!').group()
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
2.5 Matching any single character (.) — the dot cannot match a newline or an empty string
anyend='.end' re.match(anyend, 'bend').group() re.match(anyend, 'end').group() re.search(anyend, '\nend').group()
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
2.6 Creating character classes ([ ])
pattern = '[cr][23][dp][o2]' re.match(pattern, 'c3po').group() re.match(pattern, 'c3do').group() re.match('r2d2|c3po', 'c2do').group() re.match('r2d2|c3po', 'r2d2').group()
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
2.7 Grouping 2.7.1 Matching email addresses
patt = r'\w+@(\w+\.)?\w+\.com' re.match(patt, 'nobady@xxx.com').group() re.match(patt, 'nobody@www.xxx.com').group() # match multiple subdomains patt = r'\w+@(\w+\.)*\w+\.com' re.match(patt, 'nobody@www.xxx.yyy.zzz.com').group()
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
2.7.2 Accessing groups
patt = r'(\w\w\w)-(\d\d\d)' m = re.match(patt, 'abc-123') m.group() m.group(1) m.group(2) m.groups() m = re.match('ab', 'ab') m.group() m.groups() m = re.match('(ab)','ab') m.groups() m.group(1) m = re.match('(a(b))', 'ab') m.group() m.group(1) m.group(2) m.groups()
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
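Named groups, written `(?P<name>...)`, make grouped patterns easier to read; a short sketch that reuses a hypothetical email-like pattern similar to the one above:

```python
import re

# (?P<name>...) lets us refer to groups by name instead of position.
patt = r'(?P<user>\w+)@(?P<domain>(?:\w+\.)*\w+\.com)'
m = re.match(patt, 'nobody@www.xxx.com')
print(m.group('user'))    # nobody
print(m.group('domain'))  # www.xxx.com
print(m.groupdict())      # all named groups as a dict
```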
2.8 Anchors: start of string and word boundaries 2.8.1 Start or end of string
re.match('^The', 'The end.').group() re.match('^The', 'end. The').group() # raises AttributeError: the pattern is not at the start
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
2.8.2 Word boundaries
re.search(r'\bthe', 'bite the dog').group() re.search(r'\bthe', 'bitethe dog').group() re.search(r'\Bthe', 'bitthe dog').group()
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
2.9 The findall() function
re.findall('car', 'car') re.findall('car', 'scary') re.findall('car', 'carry, the barcardi to the car')
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
2.10 The sub() and subn() functions
print(re.sub('X', 'Mr. Smith', 'attn: X\n\nDear X, \n')) print(re.subn('X', 'Mr. Smith', 'attn: X\n\nDear X, \n')) re.sub('[ae]', 'X', 'abcdedf')
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
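The replacement string of sub() can also reference captured groups with `\1`, `\2`, ...; a small sketch with a made-up date-reformatting pattern:

```python
import re

# Backreferences in the replacement: swap MM/DD/YYYY into YYYY-MM-DD.
swapped = re.sub(r'(\d{2})/(\d{2})/(\d{4})', r'\3-\1-\2', 'due 02/20/2007')
print(swapped)  # due 2007-02-20
# subn() additionally returns how many substitutions were made.
result, count = re.subn('X', 'Y', 'X marks X')
print(result, count)  # Y marks Y 2
```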
2.11 Splitting with split()
re.split(':', 'str1:str2:str3') from os import popen from re import split f = popen('who', 'r') for eachLine in f.readlines(): print(split(r'\s\s+|\t', eachLine.strip())) f.close()
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
3 Searching versus matching, and "greedy" matching
string = 'Thu Feb 15 17:46:04 2007::gaufung@cumt.edu.cn::1171590364-6-8' patt = r'.+\d+-\d+-\d+' re.match(patt, string).group() patt = r'.+(\d+-\d+-\d+)' re.match(patt, string).group(1)
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Because the wildcard "." is greedy by default, '.+' matches as many characters as possible: it consumes 'Thu Feb 15 17:46:04 2007::gaufung@cumt.edu.cn::117159036', so the group captures only '4-6-8'. Non-greedy matching solves this with '?'
patt = '.+?(\d+-\d+-\d+)' re.match(patt, string).group(1)
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
gaufung/Data_Analytics_Learning_Note
mit
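The same greedy/non-greedy contrast shows up with findall; a sketch on made-up HTML-like text:

```python
import re

text = '<b>one</b> and <b>two</b>'
# Greedy .+ runs to the last </b>, producing one match spanning everything.
greedy = re.findall('<b>.+</b>', text)
# Non-greedy .+? stops at the first </b>, producing two separate matches.
lazy = re.findall('<b>.+?</b>', text)
print(greedy)  # ['<b>one</b> and <b>two</b>']
print(lazy)    # ['<b>one</b>', '<b>two</b>']
```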
TIMESERIES WEB API insert_ts(self, pk, ts): """ Insert a timeseries into the database by sending a request to the server. Parameters ---------- primary_key: int a unique identifier for the timeseries ts: a TimeSeries object the timeseries object intended to be inserted to datab...
# server = subprocess.Popen(['python', '../go_persistent_server.py']) # time.sleep(3) # web = subprocess.Popen(['python', '../go_web.py']) # time.sleep(3) web_interface = WebInterface() results = web_interface.add_trigger( 'junk', 'insert_ts', None, 'db:one:ts') assert results[0] == 200 print(results) r...
docs/Web_service_demo.ipynb
cs207-project/TimeSeries
mit
Having set up the triggers, now insert the time series and upsert the metadata. When these keys are inserted into TSDB_server for the first time, insert_ts succeeds and returns TSDBStatus.OK
for k in tsdict: results = web_interface.insert_ts(k, tsdict[k]) assert results[0] == 200 # upsert meta results = web_interface.upsert_meta(k, metadict[k]) assert results[0] == 200 results = web_interface.add_trigger( 'junk', 'insert_ts', None, 'db:one:ts') results # =================...
docs/Web_service_demo.ipynb
cs207-project/TimeSeries
mit
<h3>How many substantiated complaints occurred by the time Marian Ewins moved to Washington Gardens?</h3> Marian Ewins moved into Washington Gardens in May 2015.
move_in_date = '2015-05-01'
notebooks/analysis/washington-gardens.ipynb
TheOregonian/long-term-care-db
mit
The facility_id for Washington Gardens is 50R382.
df[(df['facility_id']=='50R382') & (df['incident_date']<move_in_date)].count()[0]
notebooks/analysis/washington-gardens.ipynb
TheOregonian/long-term-care-db
mit
Introduction to the logistic regression algorithm and a Python implementation 0. Experimental data
names = [("x", k) for k in range(8)] + [("y", 8)] df = pd.read_csv("./res/dataset/pima-indians-diabetes.data", names=names) df.head(3)
machine_learning/logistic_regression/demo.ipynb
facaiy/book_notes
cc0-1.0
1. Binary classification 1.0 Fundamentals ref: http://www.robots.ox.ac.uk/~az/lectures/ml/2011/lect4.pdf Logistic regression applies a sigmoid transform $\sigma(x) = \frac1{1 + e^{-x}}$ to the output of a linear classifier $f(x) = w^T x + b$. The sigmoid makes the output better suited for classification, as the figure below shows.
x = np.linspace(-1.5, 1.5, 1000) y1 = 0.5 * x + 0.5 y2 = sp.special.expit(5 * x) pd.DataFrame({'linear': y1, 'logistic regression': y2}).plot()
machine_learning/logistic_regression/demo.ipynb
facaiy/book_notes
cc0-1.0
So logistic regression is defined as \begin{equation} g(x) = \sigma(f(x)) = \frac1{1 + e^{-(w^T x + b)}} \end{equation} The question, then, is how to find the parameters $w$. 1.0.0 Loss function Before asking how to find them, we should first define what we are looking for: what makes a set of parameters good? Observe that for binary classification with $y \in \{-1, 1\}$, \begin{align} g(x) & \to 1 \implies y = 1 \ 1 - g(x) & \to 1 \implies y = -1 \end{align} We can view $g(x)$ as the probability that $y = 1$ and $1 - g(x)$ as the probability that $y = -1...
def logit_loss_and_grad(w, X, y): w = w[:, None] if len(w.shape) == 1 else w z = np.exp(np.multiply(-y, np.dot(X, w))) loss = np.sum(np.log1p(z)) grad = np.dot((np.multiply(-y, (1 - 1 / (1 + z)))).T, X) return loss, grad.flatten() # feature matrix X = df["x"].as_matrix() # labels y = df["y"].as_matrix(...
machine_learning/logistic_regression/demo.ipynb
facaiy/book_notes
cc0-1.0
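One way to gain confidence in the analytic gradient above is a finite-difference check; a sketch that restates the same loss and compares it against a numerical gradient on small synthetic data (not the Pima dataset):

```python
import numpy as np

def logit_loss_and_grad(w, X, y):
    # Same loss as above: sum(log(1 + exp(-y * Xw))) for labels y in {-1, +1}.
    w = w[:, None] if len(w.shape) == 1 else w
    z = np.exp(np.multiply(-y, np.dot(X, w)))
    loss = np.sum(np.log1p(z))
    grad = np.dot((np.multiply(-y, (1 - 1 / (1 + z)))).T, X)
    return loss, grad.flatten()

# Small synthetic problem for the check.
rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = rng.choice([-1.0, 1.0], size=(20, 1))
w = rng.randn(3)
_, grad = logit_loss_and_grad(w, X, y)

# Central finite differences along each coordinate direction.
eps = 1e-6
num_grad = np.array([
    (logit_loss_and_grad(w + eps * e, X, y)[0] -
     logit_loss_and_grad(w - eps * e, X, y)[0]) / (2 * eps)
    for e in np.eye(3)])
print(np.max(np.abs(grad - num_grad)))  # should be tiny
```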