Lists could collect objects of any type, including other lists. Whitespace in Python: Python uses indentation and whitespace to group statements together. To write a short loop in C, you might use: for (i = 0; i < 5; i++){ printf("Stress and strain\n"); } Python does not use curly braces like C, so the same program...
for i in range(3):
    print("Stress and strain")
00_Python_Crash_Course.ipynb
ondrolexa/sg2
mit
If you have nested for-loops, there is a further indent for the inner loop.
for i in range(3):
    for j in range(3):
        print('i:{} j:{}'.format(i, j))
    print("This statement is within the outer i-loop, but not the inner j-loop")
Scientific Python Environment Python is a high-level open-source language. But the Scientific Python Environment is inhabited by many packages or libraries that provide useful things like array operations, plotting functions, and much more. We can import libraries of functions to expand the capabilities of Python in o...
%pylab inline
So what just happened? We just imported most of NumPy and Matplotlib into the current workspace, so their functions are now available to use. If we want to use the NumPy function linspace, for instance, we can call it by writing:
linspace(-10, 10, 11)
To learn new functions available to you, visit the NumPy Reference page. If you are a proficient MATLAB user, there is a wiki page that should prove helpful to you: NumPy for Matlab Users Slicing Arrays In NumPy, you can look at portions of arrays in the same way as in Matlab, with a few extra tricks thrown in. Let's...
vals = array([1, 2, 3, 4, 5])
vals
Python uses a zero-based index, so let's look at the first and last elements in the array vals:
vals[0], vals[4]
There are 5 elements in the array vals, but if we try to look at vals[5], Python will be unhappy, as vals[5] actually refers to the non-existent 6th element of that array.
vals[5]
Arrays can also be 'sliced', grabbing a range of values. Let's look at the first three elements
vals[0:3]
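The half-open slicing rule can be summarized in a small self-contained sketch (using an explicit np. prefix instead of the %pylab namespace):

```python
import numpy as np

vals = np.array([1, 2, 3, 4, 5])

# The start index is included, the stop index is excluded: elements 0, 1, 2.
assert np.array_equal(vals[0:3], [1, 2, 3])
# Omitted bounds default to the ends of the array.
assert np.array_equal(vals[:2], [1, 2])
assert np.array_equal(vals[3:], [4, 5])
# Negative indices count from the end.
assert vals[-1] == 5
```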
Note here, the slice is inclusive on the front end and exclusive on the back, so the above command gives us the values of vals[0], vals[1] and vals[2], but not vals[3]. Assigning Array Variables One of the strange little quirks/features in Python that often confuses people comes up when assigning and comparing arrays o...
a = linspace(1, 5, 5)
a
OK, so we have an array a, with the values 1 through 5. I want to make a copy of that array, called b, so I'll try the following:
b = a
b
Great. So a has the values 1 through 5 and now so does b. Now that I have a backup of a, I can change its values without worrying about losing data (or so I may think!).
a[2] = 17
a
Here, the 3rd element of a has been changed to 17. Now let's check on b.
b
And that's how things go wrong! When you use a statement like b = a, rather than copying all the values of a into a new array called b, Python just creates an alias (or a pointer) called b that routes us to a. So if we change a value in a, then b will reflect that change (technically, this is called assignme...
c = a.copy()
Now, we can try again to change a value in a and see if the changes are also seen in c.
a[2] = 3
a
b
c
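The alias-versus-copy behavior above can be condensed into a few assertions (a sketch using explicit np. prefixes rather than the %pylab namespace):

```python
import numpy as np

a = np.linspace(1, 5, 5)
b = a          # alias: b is the same object as a
c = a.copy()   # independent copy

a[2] = 17
assert b[2] == 17             # the alias sees the change...
assert c[2] == 3              # ...the copy does not
assert b is a and c is not a  # b and a are literally the same object
```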
Plotting For scientific plotting we will use Matplotlib, most commonly its plot function.
x = linspace(-pi, pi, 150)
plot(x, sin(x))
Learn More There are a lot of resources online to learn more about using NumPy and other libraries. A well-done one is Lectures on Scientific Computing with Python.
from IPython.core.display import HTML

def css_styling():
    styles = open("./css/sg2.css", "r").read()
    return HTML(styles)

css_styling()
The argument after the IPython magic is called the backend for plotting. Several are available, some of which create their own zoomable windows. But we can also zoom within the notebook, see below.
import numpy as np
import matplotlib.pyplot as pl  # import this for plotting routines
lecture_11_arrays_plotting.ipynb
CUBoulder-ASTR2600/lectures
isc
Refresher -- acceleration with no initial velocity or displacement
a = 9.8      # Acceleration, m s^{-2}
count = 101  # Number of samples
timeArray = np.linspace(0, 10, count)  # Create an array of 101 times between 0 and 10 (inclusive)
distArray = 0.5 * a * timeArray**2     # Create an array of distances calculated from the times
Q. What do these arrays (distArray and timeArray) contain?
print(timeArray)
print()
print(distArray)
To plot distArray vs. timeArray with a scatter plot:
pl.scatter(timeArray, distArray, color = 'k')
To plot just a section to see the discrete nature (and add labels):
pl.scatter(timeArray, distArray, color='k')
pl.xlim(4, 6)
pl.ylim(50, 200)
pl.xlabel('time (s)')
pl.ylabel('distance (m)')
Now with the notebook backend:
%matplotlib notebook
pl.scatter(timeArray, distArray, color='k')
pl.xlim(4, 6)
pl.ylim(50, 200)
pl.xlabel('time (s)')
pl.ylabel('distance (m)')
%matplotlib inline
To plot distArray vs. timeArray with a blue solid line:
pl.plot(timeArray, distArray, color='b', ls='-')
pl.xlabel('time (s)')      # xlabel labels the abscissa
pl.ylabel('distance (m)')  # ylabel labels the ordinate
To save the figure, use savefig('filename') and the .pdf, or .eps, or .png, or ... extension (which Python interprets for you!):
pl.xlabel('time1 (s)')
pl.plot(timeArray, distArray, color='b', ls='-')
pl.ylabel('distance (m)')
pl.title('Position vs. Time')
pl.savefig('position_v_time.pdf')  # In the same cell as pl.plot
pl.savefig('position_v_time.eps')
pl.savefig('position_v_time.png')
Q. Where will these files be saved on our computer?
ls position_v_time*
More array methods Three topics today: Array slicing vs. copying "Allocating" or "initializing" arrays Boolean logic on arrays Making copies of arrays
yArray = np.linspace(0, 5, 6)  # note how the endpoints and number of points determine the interval!
zArray = yArray[1:4]
print(yArray, zArray)
# Q. What will yArray and zArray contain?
yArray[3] = 10
Q. What does the next command yield?
print(yArray, zArray)
zArray is not a copy of yArray; it is a slice of yArray! All arrays generated by basic slicing are views of the original array. In other words, the variable zArray is a reference to three elements within yArray: elements 1, 2, and 3. If this is not the desired behavior, copy arrays:
yArray = np.linspace(0, 5, 6)
zArray = yArray.copy()
print(yArray, zArray)
zArray = yArray.copy()[1:4]  # only the slice is kept in the new variable; the rest of the copy is NOT
print(yArray, zArray)
yArray[3] = 10
print(yArray, zArray)
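np.shares_memory makes the view-versus-copy distinction explicit; a small sketch:

```python
import numpy as np

yArray = np.linspace(0, 5, 6)
zView = yArray[1:4]          # basic slice: a view sharing yArray's memory
zCopy = yArray[1:4].copy()   # an independent copy

assert np.shares_memory(yArray, zView)
assert not np.shares_memory(yArray, zCopy)

yArray[3] = 10
assert zView[2] == 10   # the view reflects the change (zView[2] is yArray[3])
assert zCopy[2] == 3    # the copy keeps the old value
```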
"copy" is a method of every numpy array, as are "min" and "max"; "shape", "size", and "dtype" are attributes. Allocating Arrays If we want an array with the same "shape" as another array, we've seen that we can copy an array with:
xArray = np.array([1, 2, 3])
aArray = xArray.copy()
aArray
then fill the array with the appropriate values. However, we could also use numpy.zeros with the attributes xArray.shape and xArray.dtype:
print(xArray.shape)  # this is a 1D vector
print(xArray.ndim)
xArray

xArray.shape = (3, 1)
print(xArray.shape)  # Now it's a 3x1 2D matrix!
print(xArray.ndim)
xArray

xArray.shape = (3,)
aArray = np.zeros(xArray.shape, xArray.dtype)
print(aArray.shape)
aArray
Which gives aArray the same "shape" and data type as xArray. Q. What do I mean by the "shape" of the array?
np.zeros((2,3,4))
Q. And what is the data type (dtype)? Alternatively we could do:
aArray = np.zeros_like(xArray)
np.zeros??  # view the documentation and source
bArray = np.ones_like(xArray)
print(aArray, bArray)
Generalized Indexing Subarrays can be sliced too, with or without range:
# remember, we already imported numpy (as np)!
xArray = np.linspace(1, 10, 10)
xArray
Q. What will xArray contain?
# Note the double brackets indicating a list of indices
xArray[[1, 5, 6]] = -1
xArray

# Using range instead:
xArray = np.linspace(1, 10, 10)
xArray[range(3, 10, 3)] = -1
xArray
Q. What will xArray contain?
# Compare:
xArray = np.linspace(1, 10, 10)
xArray[[3, 6, 9]] = -1
xArray
Boolean Logic When do I use it? * missing or invalid data * investigating a subset of a dataset * masking/filtering, etc. Complementary methods for dealing with missing or invalid data: numpy masked arrays http://docs.scipy.org/doc/numpy/reference/maskedarray.html (masked arrays are a bit harder to use, but offer more ...
xArray
myArray = xArray < 0
myArray
xArray[xArray < 0]
This replaces the elements of xArray that are less than zero with the maximum of xArray:
xArray = np.arange(-5, 5)
xArray
xArray[xArray < 0] = xArray.max()
xArray
Compound Conditionals & Arrays numpy has routines for doing boolean logic:
xArray = np.arange(-5, 5)
xArray
"and"
np.logical_and(xArray > 0, xArray % 2 == 1)
# % is the modulus operator: x % 2 == 1 means the remainder of x/2 is 1
# Q. So, what should running this cell give us?
"or"
np.logical_or(xArray == xArray.min(), xArray == xArray.max())
np.logical_not(xArray == xArray.min())
print(np.any(xArray > 10))
print(np.any(xArray < -2))
print(np.all(xArray > -10))
print(np.all(xArray > -2))
Let's investigate the data that we just loaded. The dataset contains the original data (digits.images, an array of 2-dimensional images) and some metadata about the dataset.
%matplotlib inline
from matplotlib import pyplot

# Show first 10 images
for i in range(10):
    pyplot.figure(i + 1)
    ax = pyplot.gca()  # gca = get current axis
    ax.imshow(digits.images[i], cmap=pyplot.cm.binary)
sk-learn-intro.ipynb
jorisroovers/machinelearning-playground
apache-2.0
The original data is, however, always flattened to a one-dimensional array per image. This leads to the digits.data array having shape len(digits.images) (the number of images) by the number of pixels per image (the one-dimensional image data).
digits.data
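The flattening can be sketched with plain NumPy (a hypothetical batch of 8×8 arrays standing in for digits.images):

```python
import numpy as np

# Hypothetical stand-in for digits.images: ten 8x8 "images".
images = np.arange(10 * 8 * 8).reshape(10, 8, 8)

# Flatten each 8x8 image into a 64-element row, as digits.data does.
data = images.reshape(len(images), -1)

assert data.shape == (10, 64)
assert np.array_equal(data[3], images[3].ravel())
```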
Our goal is now to train a machine learning algorithm with the given digits dataset, so that it can use what it has learned to later predict or classify new digits. In this case, we'll have 9 target classes (numbers 0-9). For the digits training set, we already provide these target classes so that they can be used to t...
digits.target
For example, for digits.images[3], we have digits.target[3] == 3, as the digits.images[3] contains the number 3.
%matplotlib inline
print("Class for digits.images[3] =", digits.target[3])
pyplot.imshow(digits.images[3], cmap=pyplot.cm.binary)
The algorithm that we will use to do the classification is a so-called estimator. Well-known mathematical estimators include: absolute error, mean squared error, variance, ... In scikit-learn, an estimator for classification is a Python object that implements the methods fit(X, y) and predict(T). An example of an estima...
from sklearn import svm clf = svm.SVC(gamma=0.001, C=100)
We now train the classifier with all but the last item in the dataset (using python's [:-1] syntax) by calling the fit method.
clf.fit(digits.data[:-1], digits.target[:-1])
Now you can predict new values. In particular, we can ask the classifier what the digit of the last image in the digits dataset is, an image we have not used to train the classifier:
result = clf.predict(digits.data[-1:])
%matplotlib inline
print("Class for digits.images[-1] =", result[0])
pyplot.imshow(digits.images[-1], cmap=pyplot.cm.binary)
Define methods for generating an atmospheric model of Titan and performing RT calculation Note the parameters that are specified in the methods below. For example, the k-coefficients file, the gas compositions, and the surface reflectivity.
def create_atmosphere_model(**kw):
    """
    Set up a layered atmospheric model using a default physical/thermal
    structure determined by HASI, and composition from the GCMS, with
    aerosol scattering properties from DISR.

    The titan dictionary contains the values used to determine the opacity and scat...
notebooks/Example_VIMS_spectrum.ipynb
adamkovics/atmosphere
gpl-2.0
Parallel (multi-core) execution Requires ipyparallel and starting clusters in the IPython Clusters tab of the notebook to make use of multiple cores.
from ipyparallel import parallel, Client

%%px
import os
os.environ['RTDATAPATH'] = '/Users/mate/g/rt/data/refdata/'
pwd

titan = setup_VIMS_calc(wav_range=(0.8850, 0.8890))
pydisort.ipcluster_spectrum_calculation(titan)
titan['rt']
Compare calculations with observed VIMS spectrum
VIMS_test = setup_VIMS_calc(
    rsurf=0.25,
    wav_range=(1.5, 3.5),
    view={'umu0': np.cos(73.13*(np.pi/180)),
          'umue': np.cos(51.93*(np.pi/180)),
          'phi0': 272.4,
          'phie': 360-111.4},
    verbose=True,
    ...
Save configuration
import os

try:
    import cPickle as pickle
except ImportError:
    import pickle

import iris
import cf_units
from datetime import datetime
from utilities import CF_names, fetch_range, start_log

# 1-week start of data.
kw = dict(start=datetime(2014, 7, 1, 12), days=6)
start, stop = fetch_range(**kw)

# SECOORA region...
notebooks/timeSeries/sst/00-fetch_data.ipynb
ocefpaf/secoora
mit
Add SECOORA models and observations
from utilities import titles, fix_url

for secoora_model in secoora_models:
    if titles[secoora_model] not in dap_urls:
        log.warning('{} not in the NGDC csw'.format(secoora_model))
        dap_urls.append(titles[secoora_model])

# NOTE: USEAST is not archived at the moment!
# https://github.com/ioos/secoora/is...
Clean the DataFrame
from utilities import get_coops_metadata, to_html

columns = {'sensor_id': 'sensor',
           'station_id': 'station',
           'latitude (degree)': 'lat',
           'longitude (degree)': 'lon',
           'sea_water_temperature (C)': sos_name}
observations.rename(columns=columns, inplace=True)
observations['sen...
Uniform 6-min time base for model/data comparison
from owslib.ows import ExceptionReport
from utilities import pyoos2df, save_timeseries

iris.FUTURE.netcdf_promote = True

log.info(fmt(' Observations '))
outfile = '{:%Y-%m-%d}-OBS_DATA.nc'.format(stop)
outfile = os.path.join(run_name, outfile)
log.info(fmt(' Downloading to file {} '.format(outfile)))
data, bad_stati...
Split good and bad stations
pattern = '|'.join(bad_station)
if pattern:
    all_obs['bad_station'] = all_obs.station.str.contains(pattern)
    observations = observations[~observations.station.str.contains(pattern)]
else:
    all_obs['bad_station'] = ~all_obs.station.str.contains(pattern)

# Save updated `all_obs.csv`.
fname = '{}-all_obs.csv'.fo...
These buoys need some QA/QC before saving
from utilities.qaqc import filter_spikes, threshold_series

if buoys:
    secoora_obs_data.apply(threshold_series, args=(-5, 40))
    secoora_obs_data.apply(filter_spikes)

    # Interpolate to the same index as SOS.
    index = obs_data.index
    kw = dict(method='time', limit=30)
    secoora_obs_data = secoora_obs_da...
Flip
# define function flip()
# open 'eye.png', convert to grayscale, flip, and display
projects/images-starterkit.ipynb
parrt/msan501
mit
Blur
# define getpixel, region3x3, avg, and blur functions
img = Image.open('pcb.png')
img = img.convert("L")  # make greyscale if not already (luminance)
img
img = blur(img)
img
Denoise
# define median and denoise functions
img = Image.open('Veggies_noise.jpg')
img = img.convert("L")  # make greyscale if not already (luminance)
# denoise 3 times and display
# show 'guesswho.png'
# denoise 3 times then display
Generic filter
# define filter and open functions
Blur refactored
# Display 'pcb.png'
img
# use filter to blur the image
Denoise refactored
img = Image.open('guesswho.png')
img
# using the filter function, denoise the image
Edges
# define laplace function
# Open 'obama.png' and show the edges
# Show the edges for 'phobos2.jpg'
Sharpen
# define minus function
# display 'bonkers.png'
# sharpen that image and display it
Check Permutation Check Permutation: Given two strings, write a method to decide if one is a permutation of the other.
import random

def str_shuffle(s):  # avoid shadowing the builtin str
    str_list = list(s)
    random.shuffle(str_list)
    return "".join(str_list)

str0 = gen_randstr()
str1 = gen_randstr()
str2 = gen_randstr()
str3 = str_shuffle(str2[:])
print(str2)
print(str3)

def check_permutation(str0, str1):
    str0_ = "".join(sorted(str0))
    str1_ = "".join(sorted...
Issues/algorithms/Arrays and Strings.ipynb
stereoboy/Study
mit
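The sorting approach above runs in O(n log n); the same check can be done in O(n) with a character histogram. A sketch using collections.Counter:

```python
from collections import Counter

def check_permutation(s0, s1):
    """Two strings are permutations of each other iff they have
    identical character counts."""
    if len(s0) != len(s1):
        return False
    return Counter(s0) == Counter(s1)
```

Comparing the two Counter objects compares every character's count in one pass over each string.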
Palindrome Permutation: Given a string, write a function to check if it is a permutation of a palindrome. A palindrome is a word or phrase that is the same forwards and backwards. A permutation is a rearrangement of letters. The palindrome does not need to be limited to just dictionary words.
def check_palindrome(str0):
    size = len(str0)
    for i in range(size):
        if str0[i] != str0[size - 1 - i]:
            return False
    return True

str0 = "AAAABBBBAAAA"
str1 = "AAAAAAAAABBB"
print("Test#1: %s" % ("Pass" if True == check_palindrome(str0) else "Fail"))
print("Test#2: %s" % ("Pass" if False == ch...
Palindrome Permutation Given a string, write a function to check if it is a permutation of a palindrome. A palindrome is a word or phrase that is the same forwards and backwards. A permutation is a rearrangement of letters. The palindrome does not need to be limited to just dictionary words.
def check_palindrome_permutation(str0):
    str0 = str0.lower()
    histogram = {}
    for ch in str0:
        if ch != ' ':
            histogram[ch] = histogram.get(ch, 0) + 1
    # check for odd entries
    found_odd = False
    for ch, value in histogram.items():
        if value % 2 == 1:
            ...
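The histogram idea above can be condensed: a string can be rearranged into a palindrome iff at most one character has an odd count. A complete sketch:

```python
def check_palindrome_permutation(s):
    """True if s (ignoring spaces, case-insensitive) can be
    rearranged into a palindrome: at most one odd character count."""
    counts = {}
    for ch in s.lower():
        if ch != ' ':
            counts[ch] = counts.get(ch, 0) + 1
    odd = sum(1 for v in counts.values() if v % 2 == 1)
    return odd <= 1
```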
One Away: There are three types of edits that can be performed on strings: insert a character,remove a character, or replace a character. Given two strings, write a function to check if they are one edit (or zero edits) away.
def check_same(str0, str1):
    if len(str0) != len(str1):
        return False
    for i in range(len(str0)):
        if str0[i] != str1[i]:
            return False
    return True

def check_oneaway(str0, str1):
    for i in range(len(str0)):
        if (i < len(str1) and str0[i] != str1[i]) or i > (len(str...
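The truncated solution above can be completed with a two-pointer walk that spends at most one edit; a sketch (the function name check_one_away is my own):

```python
def check_one_away(s0, s1):
    """True if s0 and s1 are zero or one edit (insert/remove/replace) apart."""
    if abs(len(s0) - len(s1)) > 1:
        return False
    a, b = (s0, s1) if len(s0) <= len(s1) else (s1, s0)  # a is the shorter string
    i = j = 0
    used_edit = False
    while i < len(a) and j < len(b):
        if a[i] != b[j]:
            if used_edit:
                return False
            used_edit = True
            if len(a) == len(b):
                i += 1  # replace: advance both pointers
            # insert/remove: advance only the longer string's pointer
        else:
            i += 1
        j += 1
    return True
```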
String Compression: Implement a method to perform basic string compression using the counts of repeated characters. For example, the string aabcccccaaa would become a2b1c5a3. If the "compressed" string would not become smaller than the original string, your method should return the original string. You can assume the s...
def _string_compression(str0):
    dest = ""
    cur_ch = str0[0]
    count = 0
    for i in range(len(str0)):
        if cur_ch == str0[i]:
            count += 1
        else:
            dest += cur_ch
            dest += str(count)
            cur_ch = str0[i]
            count = 1
    dest += cur_ch
    dest += st...
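A complete version of the compression routine might look like this (hypothetical name string_compression; it collects pieces in a list to avoid quadratic string concatenation, then falls back to the original when compression does not help):

```python
def string_compression(s):
    """'aabcccccaaa' -> 'a2b1c5a3'; returns s unchanged if the
    compressed form would not be shorter. Assumes a non-empty string."""
    pieces = []
    cur, count = s[0], 0
    for ch in s:
        if ch == cur:
            count += 1
        else:
            pieces.append(cur + str(count))  # flush the finished run
            cur, count = ch, 1
    pieces.append(cur + str(count))      # flush the final run
    compressed = ''.join(pieces)
    return compressed if len(compressed) < len(s) else s
```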
Rotate Matrix: Given an image represented by an NxN matrix, where each pixel in the image is 4 bytes, write a method to rotate the image by 90 degrees. Can you do this in place? Zero Matrix: Write an algorithm such that if an element in an MxN matrix is 0, its entire row and column are set to 0.
import numpy as np

matrix = np.random.randint(0, 20, (10, 10))
print(matrix)

def set_matrix_zero(mat):
    zero_rows = []
    zero_cols = []
    h, w = mat.shape
    for i in range(h):
        for j in range(w):
            if mat[i][j] == 0:
                zero_rows.append(i)
                zero_cols.append(j)...
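The loop-based zero-matrix routine can also be written with NumPy boolean indexing; a sketch (my own variant, named set_matrix_zero_np to distinguish it from the loop version):

```python
import numpy as np

def set_matrix_zero_np(mat):
    """Zero out every row and column that contains a 0 (in place)."""
    zero_rows, zero_cols = np.where(mat == 0)  # locate zeros before mutating
    mat[zero_rows, :] = 0
    mat[:, zero_cols] = 0
    return mat

demo = np.array([[1, 2, 3],
                 [4, 0, 6],
                 [7, 8, 9]])
set_matrix_zero_np(demo)
```

Note that np.where runs before any assignment, so the newly written zeros cannot cascade into further rows and columns.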
Getting the Data
url = r'https://archive.ics.uci.edu/ml/machine-learning-databases/00233/CNAE-9.data'
count_df = pd.read_csv(url, header=None)
count_df.info()
nlp/text_classification_uci.ipynb
dipanjank/ml
gpl-3.0
The result is a 1080x857 dense matrix. The first column is the label; we extract it as the label vector and convert the rest to a sparse matrix.
from scipy.sparse import csr_matrix

labels, count_features = count_df.loc[:, 0], count_df.loc[:, 1:]
count_data = csr_matrix(count_features.values)
count_data
Check for Class Imbalance
label_counts = pd.Series(labels).value_counts()
label_counts.plot(kind='bar', rot=0)
Model Construction and Cross-Validation In this section, we construct a classification model in two steps: perform singular value decomposition of the sparse matrix and keep the top 100 components, then use a maximum entropy classifier on the scaled SVD components.
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import StandardScaler

pipeline = P...
When the rainfall is transformed it is expressed as an intensity; to express it as an amount it must be divided by 12.
# Convert reflectivity to rainfall
rad.DBZ2Rain()
# Total rainfall over the whole radar field
rad.ppt = rad.ppt / 12.0
print(rad.ppt.sum())
# Plot the radar matrix
rad.plot.plot_radar_elegant(rad.ppt)
Examples/Ejemplo_Uso_radar.ipynb
nicolas998/Radar
gpl-3.0
First, the passenger list of the Titanic
titanic = sns.load_dataset("titanic")
titanic.head(n=10)
examples/dirichlet-discrete.ipynb
datamicroscopes/release
bsd-3-clause
One of the categorical variables in this dataset is embark_town. Let's plot the number of passengers departing from each town:
ax = titanic.groupby(['embark_town'])['age'].count().plot(kind='bar')
plt.xticks(rotation=0)
plt.xlabel('Departure Town')
plt.ylabel('Passengers')
plt.title('Number of Passengers by Town of Departure')
Let's look at another example: the cars93 dataset
cars = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/MASS/Cars93.csv', index_col=0)
cars.head()
cars.loc[1]
This dataset has multiple categorical variables. Based on the description of the Cars93 dataset, we'll consider Manufacturer and DriveTrain to be categorical variables. Let's plot Manufacturer and DriveTrain:
cars.groupby('Manufacturer')['Model'].count().plot(kind='bar')
plt.ylabel('Cars')
plt.title('Number of Cars by Manufacturer')

cars.groupby('DriveTrain')['Model'].count().plot(kind='bar')
plt.ylabel('Cars')
plt.title('Number of Cars by Drive Train')
If our categorical data has labels, we need to convert them to integer id's
def col_2_ids(df, col):
    ids = df[col].drop_duplicates().sort_values().reset_index(drop=True)
    ids.index.name = '%s_ids' % col
    ids = ids.reset_index()
    df = pd.merge(df, ids, how='left')
    del df[col]
    return df

cat_columns = ['Manufacturer', 'DriveTrain']
for c in cat_columns:
    print(c)
    ...
Just as we model binary data with the beta Bernoulli distribution, we can model categorical data with the Dirichlet discrete distribution The beta Bernoulli distribution allows us to learn the underlying probability, $\theta$, of the binary random variable, $x$ $$P(x=1) =\theta$$ $$P(x=0) = 1-\theta$$ The Dirichlet dis...
from microscopes.models import dd as dirichlet_discrete
Then, given the specific model we want, we'd import from microscopes.model_name.definition import model_definition. NOTE: You must specify the number of categories in your Dirichlet discrete distribution. For 5 categories, for example, you must specify the likelihood as:
dd5 = dirichlet_discrete(5)
You can then use the model definition as appropriate for your desired model:
from microscopes.irm.definition import model_definition as irm_definition
from microscopes.mixture.definition import model_definition as mm_definition
from microscopes.lda.definition import model_definition as hdp_definition
Change the following variables according to your definitions.
# Project definitions
PROJECT_ID = '<YOUR PROJECT ID>'       # Change to your project id.
REGION = '<LOCATION OF RESOURCES>'     # Change to your region.

# Bucket definitions
STAGING_BUCKET = '<YOUR BUCKET NAME>'  # Change to your bucket.
WORKFLOW_MODEL_PATH = "gs://..."       # Change to GCS path of the nvt workflow
HUGECTR_MODEL_PA...
03-model-inference-hugectr.ipynb
GoogleCloudPlatform/nvidia-merlin-on-vertex-ai
apache-2.0
Change the following variables ONLY if necessary. You can leave the default variables.
MODEL_ARTIFACTS_REPOSITORY = f'gs://{STAGING_BUCKET}/recsys-models'
MODEL_NAME = 'deepfm'
MODEL_VERSION = 'v01'
MODEL_DISPLAY_NAME = f'criteo-hugectr-{MODEL_NAME}-{MODEL_VERSION}'
MODEL_DESCRIPTION = 'HugeCTR DeepFM model'
ENDPOINT_DISPLAY_NAME = f'hugectr-{MODEL_NAME}-{MODEL_VERSION}'
LOCAL_WORKSPACE = '/home/jupyte...
Initialize Vertex AI SDK
vertex_ai.init(
    project=PROJECT_ID,
    location=REGION,
    staging_bucket=STAGING_BUCKET
)
1. Exporting Triton ensemble model A Triton ensemble model represents a pipeline of one or more models and the connection of input and output tensors between these models. Ensemble models are intended to encapsulate inference pipelines that involves multiple steps, each performed by a different model. For example, a co...
if os.path.isdir(LOCAL_WORKSPACE):
    shutil.rmtree(LOCAL_WORKSPACE)
os.makedirs(LOCAL_WORKSPACE)

!gsutil -m cp -r {WORKFLOW_MODEL_PATH} {LOCAL_WORKSPACE}
!gsutil -m cp -r {HUGECTR_MODEL_PATH} {LOCAL_WORKSPACE}
Export the ensemble model The src.export.export_ensemble utility function takes a number of arguments that are required to set up a proper flow of tensors between inputs and outputs of the NVTabular workflow and the HugeCTR model. model_name - The model name that will be used as a prefix for the generated ensemble art...
NUM_SLOTS = 26
MAX_NNZ = 2
EMBEDDING_VECTOR_SIZE = 11
MAX_BATCH_SIZE = 64
NUM_OUTPUTS = 1

continuous_columns = ["I" + str(x) for x in range(1, 14)]
categorical_columns = ["C" + str(x) for x in range(1, 27)]
label_columns = ["label"]

local_workflow_path = str(Path(LOCAL_WORKSPACE) / Path(WORKFLOW_MODEL_PATH).parts[-1]...
The previous cell created the following local folder structure
! ls -la {local_ensemble_path}
The deepfm folder contains artifacts and configurations for the HugeCTR model. The deepfm_ens folder contains a configuration for the ensemble model. And the deepfm_nvt contains artifacts and configurations for the NVTabular preprocessing workflow. The ps.json file contains information required by the Triton's HugeCTR ...
! cat {local_ensemble_path}/ps.json
Upload the ensemble to GCS In the later steps you will register the exported ensemble model as a Vertex AI Prediction model resource. Before doing that we need to move the ensemble to GCS.
gcs_ensemble_path = '{}/{}'.format(MODEL_ARTIFACTS_REPOSITORY, Path(local_ensemble_path).parts[-1])
!gsutil -m cp -r {local_ensemble_path}/* {gcs_ensemble_path}/
2. Building a custom serving container The custom serving container is derived from the NVIDIA NGC Merlin inference container. It adds Google Cloud SDK and an entrypoint script that executes the tasks described in detail in the overview.
! cat src/Dockerfile.triton
As described in detail in the overview, the entry point script copies the ensemble artifacts to the serving container's local file system and starts Triton.
! cat src/serving/entrypoint.sh
You use Cloud Build to build the serving container and push it to your project's Container Registry.
FILE_LOCATION = './src'

! gcloud builds submit \
    --config src/cloudbuild.yaml \
    --substitutions _DOCKERNAME=$DOCKERNAME,_IMAGE_URI=$IMAGE_URI,_FILE_LOCATION=$FILE_LOCATION \
    --timeout=2h \
    --machine-type=e2-highcpu-8
3. Uploading the model and its metadata to Vertex Models. In the following cell you will register (upload) the ensemble model as a Vertex AI Prediction Model resource. Refer to Use a custom container for prediction guide for detailed information about creating Vertex AI Prediction Model resources. Notice that the valu...
serving_container_args = [model_repository_path]

model = vertex_ai.Model.upload(
    display_name=MODEL_DISPLAY_NAME,
    description=MODEL_DESCRIPTION,
    serving_container_image_uri=IMAGE_URI,
    artifact_uri=gcs_ensemble_path,
    serving_container_args=serving_container_args,
    sync=True
)

model.resource_name
4. Deploying the model to Vertex AI Prediction. Deploying a Vertex AI Prediction Model is a two step process. First you create an endpoint that will expose an external interface to clients consuming the model. After the endpoint is ready you can deploy multiple versions of a model to the endpoint. Refer to Deploy a mod...
endpoint = vertex_ai.Endpoint.create(
    display_name=ENDPOINT_DISPLAY_NAME
)
Deploy the model to Vertex Prediction endpoint After the endpoint is ready, you can deploy your ensemble model to the endpoint. You will run the ensemble on a GPU node equipped with the NVIDIA Tesla T4 GPUs. Refer to Deploy a model using the Vertex AI API guide for more information.
traffic_percentage = 100
machine_type = "n1-standard-8"
accelerator_type = "NVIDIA_TESLA_T4"
accelerator_count = 1
min_replica_count = 1
max_replica_count = 2

model.deploy(
    endpoint=endpoint,
    deployed_model_display_name=MODEL_DISPLAY_NAME,
    machine_type=machine_type,
    min_replica_count=min_replica_count,
    ...
5. Invoking the model To invoke the ensemble through Vertex AI Prediction endpoint you need to format your request using a standard Inference Request JSON Object or a Inference Request JSON Object with a binary extension and submit a request to Vertex AI Prediction REST rawPredict endpoint. You need to use the rawPredi...
payload = {
    'id': '1',
    'inputs': [
        {'name': 'I1', 'shape': [3, 1], 'datatype': 'INT32', 'data': [5, 32, 0]},
        {'name': 'I2', 'shape': [3, 1], 'datatype': 'INT32', 'data': [110, 3, 233]},
        {'name': 'I3', 'shape': [3, 1], 'datatype': 'INT32', 'data': [0, 5, 1]},
        {'name': 'I4', 'shape'...
You can invoke the Vertex AI Prediction rawPredict endpoint using any HTTP tool or library, including curl.
uri = f'https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{endpoint.name}:rawPredict'

! curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    {uri} \
    -d @criteo_payload.json
Function for determining the impulse response of an RC filter
########################
# find impulse response of an RC filter
########################

def get_rc_ir(K, n_up, t_symbol, beta):
    '''
    Determines coefficients of an RC filter

    Formula out of: K.-D. Kammeyer, Nachrichtenübertragung
    At the poles, l'Hospital's rule was used

    NOTE: Length of the IR ...
nt1/vorlesung/3_mod_demod/pulse_shaping.ipynb
kit-cel/wt
gpl-2.0
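The docstring above is truncated; the underlying raised-cosine formula can be sketched independently. This is my own vectorized variant (name rc_impulse_response): the actual get_rc_ir additionally truncates the response to K symbols and oversamples by n_up, while this sketch only evaluates h(t) = sinc(t/T)·cos(πβt/T)/(1−(2βt/T)²) and fills the 0/0 points at t = ±T/(2β) with their l'Hospital limit:

```python
import numpy as np

def rc_impulse_response(t, t_symbol, beta, eps=1e-12):
    """Raised-cosine impulse response; the 0/0 points at t = +-T/(2*beta)
    are replaced by their analytic limit (pi/4)*sinc(1/(2*beta))."""
    t = np.asarray(t, dtype=float)
    x = t / t_symbol
    den = 1.0 - (2.0 * beta * x) ** 2
    with np.errstate(divide='ignore', invalid='ignore'):
        h = np.sinc(x) * np.cos(np.pi * beta * x) / den  # np.sinc is normalized: sin(pi x)/(pi x)
    if beta > 0:
        singular = np.isclose(den, 0.0, atol=eps)        # the pole locations
        h = np.where(singular, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta)), h)
    return h
```

With this normalization h(0) = 1 and h(nT) = 0 for every other symbol instant, which is the Nyquist zero-ISI property the pulse-shaping lecture relies on.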