Implementation: Selecting Samples To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent...
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [100, 200, 300]

# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns=data.keys()).reset_index(drop=True)
print("Chosen samples of wholesale customers dataset:")
display(samples)
Customer Segments/customer_segments.ipynb
simmy88/UdacityMLND
mit
Question 1 Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers. What kind of establishment (customer) could each of the three samples you've chosen represent? Hint: Examples of establishments include places like markets, cafes, and ret...
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop(['Milk'], axis=1)

# TODO: Split the data into training and testing s...
Question 2 Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits? Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data. A...
# Produce a scatter matrix for each pair of features in the data
pd.plotting.scatter_matrix(data, alpha=0.3, figsize=(14, 8), diagonal='kde');
Question 3 Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed? Hint: Is the data normally distributed? Where do most of the data points lie? Answe...
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)

# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)

# Produce a scatter matrix for each pair of newly-transformed features
pd.plotting.scatter_matrix(log_data, alpha=0.3, figsize=(14, 8), diagonal='kde'...
Observation After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before...
# Display the log-transformed sample data
display(log_samples)
Implementation: Outlier Detection Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we ...
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
    # TODO: Calculate Q1 (25th percentile of the data) for the given feature
    Q1 = np.percentile(log_data[feature], 25)
    # TODO: Calculate Q3 (75th percentile of the data) for the given feature
    Q3...
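The Tukey rule the loop above implements can be sketched as a standalone helper (plain NumPy on a toy array; the function name `tukey_outliers` and the sample values are illustrative, not from the notebook):

```python
import numpy as np

def tukey_outliers(values, k=1.5):
    """Indices of points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return np.where((values < lo) | (values > hi))[0]

sample = np.array([10.0, 11.0, 12.0, 11.5, 10.5, 50.0])  # 50.0 is far outside the IQR fence
print(tukey_outliers(sample))  # → [5]
```

With k = 1.5 this matches the "1.5 × IQR" rule of thumb; widening k makes the fence more permissive.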
Question 4 Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. Answer: There were three points which were considered outliers for more ...
from sklearn.decomposition import PCA

# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=6).fit(good_data)

# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)

# Generate PCA results plot
pca_results = vs.pca_resul...
Question 5 How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending. Hint: A positive increase in a specific...
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns=pca_results.index.values))
Implementation: Dimensionality Reduction When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being...
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)

# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)

# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)

# Create a DataFr...
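The cost mentioned above (fewer dimensions means less variance retained) can be measured with `explained_variance_ratio_`; a sketch on random stand-in data, since `good_data` itself isn't available here:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(200, 6)  # stand-in for the six-feature good_data

pca_full = PCA(n_components=6).fit(X)
pca_2d = PCA(n_components=2).fit(X)

# A 2-component fit keeps exactly the first two variance ratios of the full fit
assert np.allclose(pca_2d.explained_variance_ratio_,
                   pca_full.explained_variance_ratio_[:2])

retained = pca_2d.explained_variance_ratio_.sum()
print('variance retained by 2 components: {:.3f}'.format(retained))
```

Summing the first two ratios of the six-component fit from the notebook gives the same answer to the question of how much variance the 2-D projection keeps.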
Observation Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns=['Dimension 1', 'Dimension 2']))
Visualizing a Biplot A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can hel...
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
Observation Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but no...
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

# TODO: Apply your clustering algorithm of choice to the reduced data
GM = GaussianMixture(n_components=2)
clusterer = GM.fit(reduced_data)

# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)...
Question 7 Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score? Answer: 2 clusters have the best silhouette score of 0.422, which is slightly better than 3 clusters, which had a score of 0.403. 4 clusters had a significantly smaller score of...
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
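The loop behind the answer above can be sketched on synthetic data; `make_blobs` with two true clusters stands in for `reduced_data` here, so the numbers differ from the notebook's:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

# Two well-separated blobs as a stand-in for the reduced customer data
X, _ = make_blobs(n_samples=300, centers=2, random_state=0)

scores = {}
for k in [2, 3, 4]:
    preds = GaussianMixture(n_components=k, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, preds)
print(scores)
```

Because the data genuinely has two clusters, k = 2 should score highest, mirroring the pattern reported in the answer.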
Implementation: Data Recovery Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's cent...
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)

# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)

# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0, len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns=data.keys()...
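The recovery chain above (PCA space → log space → spending units) can be checked with a round trip on random stand-in data. Note the notebook keeps only two components, so there the inverse is an approximation; here all six are kept, which makes the round trip exact:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(1)
spending = rng.uniform(10, 1000, size=(50, 6))  # stand-in for raw spending data
log_data = np.log(spending)

pca = PCA(n_components=6).fit(log_data)
centers_pca = pca.transform(log_data[:2])      # pretend these are cluster centers

# Undo the PCA, then undo the log — back to spending units
log_centers = pca.inverse_transform(centers_pca)
true_centers = np.exp(log_centers)
assert np.allclose(true_centers, spending[:2])
```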
Question 8 Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent? Hint: A customer who is assigned to 'Cluster...
# Display the predictions
for i, pred in enumerate(sample_preds):
    print("Sample point", i, "predicted to be in Cluster", pred)
Answer: The points are all predicted to be in Cluster 0. Analysing the scatter plot, one of them is quite close to the boundary with Cluster 1; this is likely to be Sample point 2, which has a smaller Grocery value and a significantly higher Fresh value than the other two points. When compared to the cluster centers, th...
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
Accelerate BERT encoder with TF-TRT Introduction NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorFlow™ integration with TensorRT™ (TF-TRT) optimizes TensorRT-compatible parts of your computation graph, allowing TensorFlow to execute the ...
!pip install -q tf-models-official

import tensorflow as tf
import tensorflow_hub as hub

tfhub_handle_encoder = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3'
bert_saved_model_path = 'bert_base'

bert_model = hub.load(tfhub_handle_encoder)
tf.saved_model.save(bert_model, bert_saved_model_path)
tftrt/examples/presentations/GTC-April2021-Dynamic-shape-BERT.ipynb
tensorflow/tensorrt
apache-2.0
2. Inference In this section we will convert the model using TF-TRT and run inference.
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.compiler.tensorrt import trt_convert as trt
from timeit import default_timer as timer

tf.get_logger().setLevel('ERROR')
2.1 Helper functions
def get_func_from_saved_model(saved_model_dir):
    saved_model_loaded = tf.saved_model.load(
        saved_model_dir, tags=[tag_constants.SERVING])
    graph_func = saved_model_loaded.signatures[
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    return graph_func, saved_model_loaded

def predict_and_b...
2.2 Convert the model with TF-TRT
bert_trt_path = bert_saved_model_path + '_trt'
input_shapes = [[(1, 128), (1, 128), (1, 128)]]
trt_convert(bert_saved_model_path, bert_trt_path, input_shapes, True,
            np.int32, precision='FP16')
2.3 Run inference with converted model
trt_func, _ = get_func_from_saved_model(bert_trt_path)
input_dict = random_input(1, 128)
result_key = 'bert_encoder_1'  # 'classifier'
res = predict_and_benchmark_throughput(input_dict, trt_func, result_key=result_key)
Compare to the original function
func, model = get_func_from_saved_model(bert_saved_model_path)
res = predict_and_benchmark_throughput(input_dict, func, result_key=result_key)
3. Dynamic sequence length The sequence length for the encoder is dynamic, so we can use different input sequence lengths. Here we call the original model with two sequences.
seq1 = random_input(1, 128)
res1 = func(**seq1)

seq2 = random_input(1, 180)
res2 = func(**seq2)
The converted model is optimized for a sequence length of 128 (and batch size 8). If we infer the converted model using a different sequence length, then two things can happen: 1. If TrtConversionParams.allow_build_at_runtime == False: the native TF model is inferred. 2. If TrtConversionParams.allow_build_at_runtime == True ...
bert_trt_path = bert_saved_model_path + '_trt2'
input_shapes = [[(1, 128), (1, 128), (1, 128)],
                [(1, 180), (1, 180), (1, 180)]]
trt_convert(bert_saved_model_path, bert_trt_path, input_shapes, True,
            np.int32, precision='FP16', prof_strategy='Range')
trt_func_dynamic, _ = get_func_from_saved_model(bert_trt_...
This structure can also be initialized with lists and numpy arrays
d = np.array([3, 6, 12]) * u.parsec
print(d)
d.value  # value is one of the attributes of this class
d.unit   # the unit is another attribute
Teaching Materials/Programming/Python/PythonISYA2018/04.Astropy/01_constants_units.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
Now we can quickly change the units of this quantity using the method to()
d.to(u.km)
The real power of the units submodule shows when computing quantities with mixed units
x = 4.0 * u.parsec  # 4 parsec
t = 6.0 * u.year    # 6 years
v = x / t
print(v)
Let's change the units to $km/s$
v.to(u.km/u.s)
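Under the hood, `to()` multiplies by a conversion factor. A plain-Python sanity check of the result, using the standard approximate values 1 pc ≈ 3.0857e13 km and 1 yr ≈ 3.1557e7 s (constants typed in here, not taken from astropy):

```python
PC_IN_KM = 3.0857e13  # kilometres per parsec (approximate)
YR_IN_S = 3.1557e7    # seconds per Julian year (approximate)

v_pc_per_yr = 4.0 / 6.0                        # the quantity v = 4 pc / 6 yr from above
v_km_per_s = v_pc_per_yr * PC_IN_KM / YR_IN_S  # same conversion .to(u.km/u.s) performs
print('{:.0f} km/s'.format(v_km_per_s))
```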
Physical constants are also available
from astropy import constants as c

c.G  # The gravitational constant
c.c  # The speed of light
I am still looking for a way to set tick labels on the colorbar. Now do the same for the SNP's in the PAR population.
pval, MAF, numSNP = [], [], []
with open("MAF_by_pval_par") as f:
    f.readline()  # read the first line, but discard (header)
    for line in f:
        one, two, three = line.strip().split("\t")
        pval.append(float(one))
        MAF.append(float(two))
        numSNP.append(int(three))
numSNP = np.arra...
Data_analysis/SNP-indel-calling/ANGSD/SnpStat/MAF_by_pval.ipynb
claudiuskerth/PhDthesis
mit
I have also determined the MAF for SNPs with a negative F value for different p-value cutoffs.
pval, MAF, numSNP = [], [], []
with open("MAF_by_pval_negFis_par") as f:
    f.readline()  # read the first line, but discard (header)
    for line in f:
        one, two, three = line.strip().split("\t")
        pval.append(float(one))
        MAF.append(float(two))
        numSNP.append(int(three))
numSNP = ...
Setting the EEG reference This tutorial describes how to set or change the EEG reference in MNE-Python. As usual we'll start by importing the modules we need, loading some example data, and cropping it to save memory. Since this tutorial deals specifically with EEG, we'll also restrict the datase...
import os
import mne

sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
raw.pick(['EEG 0{:...
0.24/_downloads/d2352ab4b72ce7d1dc05c76bda6ef71d/55_setting_eeg_reference.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Background EEG measures a voltage (difference in electric potential) between each electrode and a reference electrode. This means that whatever signal is present at the reference electrode is effectively subtracted from all the measurement electrodes. Therefore, an ideal reference signal is one that captures none of th...
# code lines below are commented out because the sample data doesn't have
# earlobe or mastoid channels, so this is just for demonstration purposes:

# use a single channel reference (left earlobe)
# raw.set_eeg_reference(ref_channels=['A1'])

# use average of mastoid channels as reference
# raw.set_eeg_reference(ref_c...
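Numerically, re-referencing is just a per-sample subtraction; a NumPy illustration on hypothetical 4-channel data (not the MNE API):

```python
import numpy as np

rng = np.random.RandomState(0)
eeg = rng.randn(4, 1000)  # hypothetical 4-channel recording, 1000 samples

# Re-reference to channel 0: subtract its signal from every channel
ref_single = eeg - eeg[0]
assert np.allclose(ref_single[0], 0)  # the reference channel itself becomes flat

# Average reference: subtract the mean across channels at each time point
ref_avg = eeg - eeg.mean(axis=0)
assert np.allclose(ref_avg.mean(axis=0), 0)  # channel mean is zero at every sample
```

This is why, later in the tutorial, the new reference channel appears flat after re-referencing.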
If a scalp electrode was used as reference but was not saved alongside the raw data (reference channels often aren't), you may wish to add it back to the dataset before re-referencing. For example, if your EEG system recorded with channel Fp1 as the reference but did not include Fp1 in the data file, using :meth:~mne.i...
raw.plot()
By default, :func:~mne.add_reference_channels returns a copy, so we can go back to our original raw object later. If you wanted to alter the existing :class:~mne.io.Raw object in-place you could specify copy=False.
# add new reference channel (all zero)
raw_new_ref = mne.add_reference_channels(raw, ref_channels=['EEG 999'])
raw_new_ref.plot()
# set reference to `EEG 050`
raw_new_ref.set_eeg_reference(ref_channels=['EEG 050'])
raw_new_ref.plot()
Notice that the new reference (EEG 050) is now flat, while the original reference channel that we added back to the data (EEG 999) has a non-zero signal. Notice also that EEG 053 (which is marked as "bad" in raw.info['bads']) is not affected by the re-referencing. Setting average reference To set a "virtual reference" ...
# use the average of all channels as reference
raw_avg_ref = raw.copy().set_eeg_reference(ref_channels='average')
raw_avg_ref.plot()
Creating the average reference as a projector If using an average reference, it is possible to create the reference as a :term:projector rather than subtracting the reference from the data immediately by specifying projection=True:
raw.set_eeg_reference('average', projection=True)
print(raw.info['projs'])
Creating the average reference as a projector has a few advantages: It is possible to turn projectors on or off when plotting, so it is easy to visualize the effect that the average reference has on the data. If additional channels are marked as "bad" or if a subset of channels are later selected, the project...
for title, proj in zip(['Original', 'Average'], [False, True]):
    fig = raw.plot(proj=proj, n_channels=len(raw))
    # make room for title
    fig.subplots_adjust(top=0.9)
    fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')
Using an infinite reference (REST) To use the "point at infinity" reference technique described in :footcite:Yao2001 requires a forward model, which we can create in a few steps. Here we use a fairly large spacing of vertices (pos = 15 mm) to reduce computation time; a 5 mm spacing is more typical for real data analysi...
raw.del_proj()  # remove our average reference projector first
sphere = mne.make_sphere_model('auto', 'auto', raw.info)
src = mne.setup_volume_source_space(sphere=sphere, exclude=30., pos=15.)
forward = mne.make_forward_solution(raw.info, trans=None, src=src, bem=sphere)
raw_rest = raw.copy().set_eeg_reference('REST', ...
Using a bipolar reference To create a bipolar reference, you can use :meth:~mne.set_bipolar_reference along with the respective channel names for anode and cathode which creates a new virtual channel that takes the difference between two specified channels (anode and cathode) and drops the original channels by default....
raw_bip_ref = mne.set_bipolar_reference(raw, anode=['EEG 054'],
                                        cathode=['EEG 055'])
raw_bip_ref.plot()
zeros, ones and eye np.zeros Return a new array of given shape and type, filled with zeros.
np.zeros(2, dtype=float)
np.zeros((2, 3))
appendix-02-Numpy_Pandas.ipynb
msadegh97/machine-learning-course
gpl-3.0
ones Return a new array of given shape and type, filled with ones.
np.ones(3)
eye Return a 2-D array with ones on the diagonal and zeros elsewhere.
np.eye(3)
linspace Returns num evenly spaced samples, calculated over the interval [start, stop].
np.linspace(1, 11, 3)
Random number and matrix rand Random values in a given shape.
np.random.rand(2)
np.random.rand(2, 3, 4)
randn Return a sample (or samples) from the "standard normal" distribution. np.random.standard_normal is similar, but takes a tuple as its argument.
np.random.randn(2,3)
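The two calling conventions (separate integers for `randn`, one tuple for `standard_normal`) yield arrays of the same shape:

```python
import numpy as np

a = np.random.randn(2, 3)              # dimensions as separate arguments
b = np.random.standard_normal((2, 3))  # dimensions as a single tuple
print(a.shape, b.shape)
```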
random Return random floats in the half-open interval [0.0, 1.0).
np.random.random()
randint Return n random integers (by default one integer) from low (inclusive) to high (exclusive).
np.random.randint(1, 50, 10)
np.random.randint(1, 40)
Shape and Reshape shape returns the shape of the array, and reshape returns an array containing the same data with a new shape
zero = np.zeros([3, 4])
print(zero, '\n', 'shape of zero:', zero.shape)

zero = zero.reshape([2, 6])
print()
print(zero)
Basic Operation Element wise product and matrix product
number = np.array([[1, 2], [3, 4]])
number2 = np.array([[1, 3], [2, 1]])

print('element wise product :\n', number * number2)
print('matrix product :\n', number.dot(number2))  # can also use: np.dot(number, number2)
min max argmin argmax mean
numbers = np.random.randint(1, 100, 10)
print(numbers)
print('max is :', numbers.max())
print('index of max :', numbers.argmax())
print('min is :', numbers.min())
print('index of min :', numbers.argmin())
print('mean :', numbers.mean())
Universal functions NumPy also has functions for mathematical operations such as exp, log, sqrt and abs. To find more functions, click here
number = np.arange(1, 10).reshape(3, 3)
print(number)
print()
print('exp:\n', np.exp(number))
print()
print('sqrt:\n', np.sqrt(number))
dtype
numbers.dtype
No copy, shallow copy & deep copy. No copy: simple assignments make no copy of array objects or of their data.
number = np.arange(0, 20)
number2 = number
print(number is number2, id(number), id(number2))
print(number)

number2.shape = (4, 5)
print(number)
Shallow copy Different array objects can share the same data. The view method creates a new array object that looks at the same data.
number = np.arange(0, 20)
number2 = number.view()
print(number is number2, id(number), id(number2))

number2.shape = (5, 4)
print('number2 shape:', number2.shape, '\nnumber shape:', number.shape)

print('before:', number)
number2[0][0] = 2222
print()
print('after:', number)
Deep copy The copy method makes a complete copy of the array and its data.
number = np.arange(0, 20)
number2 = number.copy()
print(number is number2, id(number), id(number2))

print('before:', number)
number2[0] = 10
print()
print('after:', number)
print()
print('number2:', number2)
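np.shares_memory makes the no-copy/view/copy distinction of the three cells above explicit:

```python
import numpy as np

number = np.arange(6)
view = number.view()  # shallow copy: new array object, shared data buffer
deep = number.copy()  # deep copy: independent data buffer

assert np.shares_memory(number, view)
assert not np.shares_memory(number, deep)

view[0] = 99             # writes through to the original
assert number[0] == 99
assert deep[0] == 0      # the deep copy is unaffected
```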
Broadcasting One of the important concepts for understanding NumPy is broadcasting. It is very useful for performing mathematical operations between arrays of different shapes.
number = np.arange(1, 11)
num = 2
print(' number =', number)
print('\n number .* num =', number * num)

number = np.arange(1, 10).reshape(3, 3)
number2 = np.arange(1, 4).reshape(1, 3)
number * number2

number = np.array([1, 2, 3])
print('number =', number)
print('\nnumber =', number + 100)

number = np.arange(1, 10).reshape(3,...
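The broadcasting rule can be stated concretely: shapes are compared from the right, and any size-1 dimension is stretched to match the other operand. A few shape checks:

```python
import numpy as np

a = np.arange(1, 10).reshape(3, 3)  # shape (3, 3)
row = np.array([[10, 20, 30]])     # shape (1, 3)
col = np.array([[1], [2], [3]])   # shape (3, 1)

# A (1, 3) row is stretched down the rows, a (3, 1) column across the columns
assert (a + row).shape == (3, 3)
assert (a + col).shape == (3, 3)
assert (row + col).shape == (3, 3)  # (1, 3) and (3, 1) broadcast together to (3, 3)
```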
If you still doubt why we use Python and NumPy, see this. 😉
from time import time

a = np.random.rand(8000000, 1)

c = 0
tic = time()
for i in range(len(a)):
    c += (a[i][0] * a[i][0])
print('output1:', c)
tak = time()
print('multiply 2 matrix with loop: ', tak - tic)

tic = time()
print('output2:', np.dot(a.T, a))
tak = time()
print('multiply 2 matrix with numpy ...
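The timing cell compares a Python loop against a single `np.dot`; a smaller check (10,000 elements instead of 8 million) that both compute the same number:

```python
import numpy as np

a = np.random.rand(10000, 1)

# Sum of squares with an explicit Python loop
loop_sum = 0.0
for i in range(len(a)):
    loop_sum += a[i][0] * a[i][0]

# The same quantity in one vectorized call: a.T @ a is a (1, 1) array
vec_sum = (a.T @ a).item()
assert abs(loop_sum - vec_sum) < 1e-6
```

The speedup comes from doing the multiply-accumulate in compiled code instead of the Python interpreter, not from computing anything different.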
I have tried to cover the essential things in NumPy so that you can start to code and enjoy it, but there are many functions not covered in this book; if you need more information, click here. Pandas pandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python p...
import pandas as pd
Series
labels = ['a', 'b', 'c']
my_list = [10, 20, 30]
arr = np.array([10, 20, 30])
d = {'a': 10, 'b': 20, 'c': 30}

pd.Series(data=my_list)
pd.Series(data=my_list, index=labels)
pd.Series(d)
Dataframe Two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure
dataframe = pd.DataFrame(np.random.randn(5, 4), columns=['A', 'B', 'C', 'D'])
dataframe.head()
Selection
dataframe['A']
dataframe[['A', 'D']]
creating new column
dataframe['E'] = dataframe['A'] + dataframe['B']
dataframe
removing a column
dataframe.drop('E', axis=1)  # returns a new DataFrame; dataframe itself is unchanged
dataframe
dataframe.drop('E', axis=1, inplace=True)  # drops the column in place
dataframe
Selecting rows
dataframe.loc[0]
dataframe.iloc[0]
dataframe.loc[0, 'A']
dataframe.loc[[0, 2], ['A', 'C']]
Conditional Selection
dataframe > 0.3
dataframe[dataframe > 0.3]
dataframe[dataframe['A'] > 0.3]
dataframe[dataframe['A'] > 0.3]['B']
dataframe[(dataframe['A'] > 0.5) & (dataframe['C'] > 0)]
Multi-Index and Index Hierarchy
layer1 = ['g1', 'g1', 'g1', 'g2', 'g2', 'g2']
layer2 = [1, 2, 3, 1, 2, 3]
hier_index = list(zip(layer1, layer2))
hier_index = pd.MultiIndex.from_tuples(hier_index)
hier_index

dataframe2 = pd.DataFrame(np.random.randn(6, 2), index=hier_index, columns=['A', 'B'])
dataframe2
dataframe2.loc['g1']
dataframe2.loc['g1'].loc[1]
Input and output
titanic = pd.read_csv('Datasets/titanic.csv')
titanic.head()

titanic.drop('Name', axis=1, inplace=True)
titanic.head()

titanic.to_csv('Datasets/titanic_drop_names.csv')
CSV is one of the most important formats, but pandas is compatible with many other formats such as HTML tables, SQL, and JSON. Missing data (NaN)
titanic.head()
titanic.dropna()
titanic.dropna(axis=1)
titanic.fillna('Fill NaN').head()
Concatenating, merging and ...
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                    'B': ['B0', 'B1', 'B2', 'B3'],
                    'C': ['C0', 'C1', 'C2', 'C3'],
                    'D': ['D0', 'D1', 'D2', 'D3']},
                   index=[0, 1, 2, 3])

df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
                    ...
Concatenation
frames = [df1, df2, df3]
pd.concat(frames)
# pd.concat(frames, ignore_index=True)
pd.concat(frames, axis=1)
df1.append(df2)  # note: DataFrame.append is deprecated in newer pandas; prefer pd.concat
Merging
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
                     'A': ['A0', 'A1', 'A2', 'A3'],
                     'B': ['B0', 'B1', 'B2', 'B3']})

right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
                      'C': ['C0', 'C1', 'C2', 'C3'],
                      'D': ['D0', 'D1', '...
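A minimal `pd.merge` sketch with smaller frames than the ones above, showing how the `how` parameter decides which keys survive (the frame contents here are illustrative):

```python
import pandas as pd

left = pd.DataFrame({'key': ['K0', 'K1'], 'A': ['A0', 'A1']})
right = pd.DataFrame({'key': ['K0', 'K2'], 'C': ['C0', 'C2']})

inner = pd.merge(left, right, on='key')               # only keys present in both frames
outer = pd.merge(left, right, on='key', how='outer')  # union of keys, NaN where missing

assert list(inner['key']) == ['K0']
assert len(outer) == 3  # K0, K1, K2
```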
Joining
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
                     'B': ['B0', 'B1', 'B2']},
                    index=['K0', 'K1', 'K2'])

right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
                      'D': ['D0', 'D2', 'D3']},
                     index=['K0', 'K2', 'K3'])

left
right
left.join(right)
Prepare MNIST training data
train_X, train_y = mnist_training()
test_X, test_y = mnist_testing()
HW2/notebooks/Q-1-1-3_Multiclass_Ridge.ipynb
JanetMatsen/Machine_Learning_CSE_546
mit
Explore hyperparameters before training model on all of the training data.
hyper_explorer = HyperparameterExplorer(X=train_X, y=train_y,
                                        model=RidgeMulti,
                                        validation_split=0.1,
                                        score_name='training RMSE',
                                        use_prev_best_weights=False,
                                        ...
Importing cleaned data See ../deliver/coal_data_cleanup.ipynb for how the raw data was cleaned.
from IPython.display import FileLink
FileLink("../deliver/coal_data_cleanup.ipynb")

dframe = pd.read_csv("../data/coal_prod_cleaned.csv")
develop/2015-07-16-jw-example-notebook-setup.ipynb
jbwhit/OSCON-2015
mit
[Dead end] Does year predict production?
plt.scatter(dframe['Year'], dframe['Production_short_tons'])
Does Hours worked correlate with output?
df2 = dframe.groupby('Mine_State').sum()
sns.jointplot('Labor_Hours', 'Production_short_tons', data=df2, kind="reg")
plt.xlabel("Labor Hours Worked")
plt.ylabel("Total Amount Produced")
plt.tight_layout()
# plt.savefig(fig_prefix + "production-vs-hours-worked.png", dpi=350)

%load_ext autoreload
%autoreload 2
i...
develop/2015-07-16-jw-example-notebook-setup.ipynb
jbwhit/OSCON-2015
mit
Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
def derivs(y, t, a, b, omega0): """Compute the derivatives of the damped, driven pendulum. Parameters ---------- y : ndarray The solution vector at the current time t[i]: [theta[i],omega[i]]. t : float The current time t[i]. a, b, omega0: float The parameters in the ...
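The truncated cell can be completed along these lines. The equation of motion assumed here, $\dot\omega = -\sin\theta - a\omega + b\sin(\omega_0 t)$ in units with $g = \ell = 1$, matches the parameter list but is an assumption about the exact form used in the assignment:

```python
import numpy as np

def derivs(y, t, a, b, omega0):
    """Derivatives of the damped, driven pendulum (assumed form:
    dtheta/dt = omega, domega/dt = -sin(theta) - a*omega + b*sin(omega0*t))."""
    theta, omega = y
    dtheta = omega
    domega = -np.sin(theta) - a*omega + b*np.sin(omega0*t)
    return [dtheta, domega]
```

With `a = b = 0` this reduces to the undamped pendulum, which is the case integrated in the next cell.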
assignments/assignment10/ODEsEx03.ipynb
ajhenrikson/phys202-2015-work
mit
Simple pendulum Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy. Integrate the equations of motion. Plot $E/m$ versus time. Plot $\theta(t)$ and $\omega(t)$ versus time. Tune the atol ...
thetai=np.pi omegai=0 ic=np.array([thetai,omegai]) y=odeint(derivs,ic,t,args=(0.0,0.0,0.0),atol=1e-6,rtol=1e-5) plt.plot(t,energy(y)) plt.xlabel('$t$') plt.ylabel('$E/m$') plt.title('Energy/Mass v. Time'); plt.plot(t, y[:,0], label='$\\theta(t)$') plt.plot(t, y[:,1], label='$\omega(t)$') plt.xlabel('$t$') plt.ylabel(...
assignments/assignment10/ODEsEx03.ipynb
ajhenrikson/phys202-2015-work
mit
Damped pendulum Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$. Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$. Decrease your atol and rtol even further and make sure your solutions have converged. Ma...
def plot_pendulum(a=0.0, b=0.0, omega0=0.0): """Integrate the damped, driven pendulum and make a phase plot of the solution.""" theta1=-np.pi+0.1 omega1=0.0 ic = np.array([theta1,omega1]) y=odeint(derivs,ic,t,args=(a,b,omega0),atol=1e-10,rtol=1e-9) plt.plot(y[:,0],y[:,1])
assignments/assignment10/ODEsEx03.ipynb
ajhenrikson/phys202-2015-work
mit
Dataset and Task We test and validate our system over a common fairness dataset and task: Adult Census Income dataset. This data was extracted from the 1994 Census bureau database by Ronny Kohavi and Barry Becker. Our analysis aims at learning a model that does not bias predictions towards men over 50K through soft con...
# ======================================================================== # Constants # ======================================================================== _TRAIN_PATH = '' _TEST_PATH = '' _COLUMNS = ["age", "workclass", "fnlwgt", "education", "education_num", "marital_status", "occupation", "relation...
experimental/language_structure/psl/colabs/gradient_based_constraint_learning_demo.ipynb
google/uncertainty-baselines
apache-2.0
Feature Columns The following code was taken from intro_to_fairness. In short, TensorFlow requires a mapping of the data, so every column is specified explicitly.
#@title Prepare Dataset # ======================================================================== # Categorical Feature Columns # ======================================================================== # Unknown length occupation = tf.feature_column.categorical_column_with_hash_bucket( "occupation", hash_bucket_s...
experimental/language_structure/psl/colabs/gradient_based_constraint_learning_demo.ipynb
google/uncertainty-baselines
apache-2.0
Create and Run Non-Constrained Neural Model Defining our neural model that will be used as a comparison. Note: this model was purposefully designed to be simplistic, as it is trying to highlight the benefit of learning with soft constraints.
def build_model(feature_columns, features): feature_layer = tf.keras.layers.DenseFeatures(feature_columns) hidden_layer_1 = tf.keras.layers.Dense(1024, activation='relu')(feature_layer(features)) hidden_layer_2 = tf.keras.layers.Dense(512, activation='relu')(hidden_layer_1) output = tf.keras.layers.Dense(1, act...
experimental/language_structure/psl/colabs/gradient_based_constraint_learning_demo.ipynb
google/uncertainty-baselines
apache-2.0
Analyze Non-Constrained Results For this example we look at the fairness constraint that the protected group (gender) should show no predictive difference between classes. Here, that means the ratio of positive predictions should be the same for males and females. Note: this is by no means the only f...
print_analysis(train_df, train_predictions, test_df, test_predictions)
experimental/language_structure/psl/colabs/gradient_based_constraint_learning_demo.ipynb
google/uncertainty-baselines
apache-2.0
Define Constraints This requires a constrained loss function and a custom train step within the Keras model class.
def constrained_loss(data, logits, threshold=0.5, weight=3): """Linear constrained loss for equal ratio prediction for the protected group. The constraint: (#Female >50k / #Total Female) - (#Male >50k / #Total Male) This constraint penalizes predictions between the protected group (gender), such that the ratio...
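The loss cell is truncated; the ratio-parity penalty it describes can be illustrated with a framework-free sketch. The function name, the boolean group mask, and the synthetic data below are all illustrative assumptions, not the cell's actual signature:

```python
import numpy as np

def parity_penalty(is_female, probs, threshold=0.5, weight=3.0):
    """Penalize the gap between the groups' positive-prediction rates:
    |(#Female >50K / #Female) - (#Male >50K / #Male)|, scaled by weight."""
    preds = probs >= threshold
    female_rate = preds[is_female].mean()
    male_rate = preds[~is_female].mean()
    return weight * abs(female_rate - male_rate)

# Synthetic example: women predicted positive 25% of the time, men 75%.
is_female = np.array([True] * 4 + [False] * 4)
probs = np.array([0.9, 0.1, 0.2, 0.3, 0.8, 0.9, 0.7, 0.2])
penalty = parity_penalty(is_female, probs)
```

Note a real trainable version must stay differentiable, so it would use the soft probabilities (or logits) rather than the hard thresholded predictions used here for clarity.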
experimental/language_structure/psl/colabs/gradient_based_constraint_learning_demo.ipynb
google/uncertainty-baselines
apache-2.0
Build and Run Constrained Neural Model
def build_constrained_model(feature_columns, features): feature_layer = tf.keras.layers.DenseFeatures(feature_columns) hidden_layer_1 = tf.keras.layers.Dense(1024, activation='relu')(feature_layer(features)) hidden_layer_2 = tf.keras.layers.Dense(512, activation='relu')(hidden_layer_1) output = tf.keras.layers....
experimental/language_structure/psl/colabs/gradient_based_constraint_learning_demo.ipynb
google/uncertainty-baselines
apache-2.0
Analyze Constrained Results Ideally this constraint should correct the ratio imbalance between the protected groups (gender). This means our parity should be very close to zero. Note: This constraint does not mean the neural classifier is guaranteed to generalize and make better predictions. It is more likely to attemp...
print_analysis(train_df, train_predictions, test_df, test_predictions)
experimental/language_structure/psl/colabs/gradient_based_constraint_learning_demo.ipynb
google/uncertainty-baselines
apache-2.0
Data Preprocessing
#Load Data data = pd.read_csv('../facies_vectors.csv') # Parameters feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS'] facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS'] facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A5...
LA_Team/Facies_classification_LA_TEAM_05.ipynb
esa-as/2016-ml-contest
apache-2.0
We proceed to run Paolo Bestagini's routine to include a small window of values to account for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
# Feature windows concatenation function def augment_features_window(X, N_neig): # Parameters N_row = X.shape[0] N_feat = X.shape[1] # Zero padding X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat))))) # Loop over windows X_aug = np.zeros((N_row, N_feat*(2*N_nei...
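The cell is cut off mid-loop; a self-contained version of the windowing idea it describes (the function name, padding, and output shape follow the visible part of the cell, while the loop body is a reconstruction):

```python
import numpy as np

def augment_features_window(X, N_neig):
    """Concatenate each row with its N_neig neighbours above and below,
    zero-padded at the edges, so row r carries a window of 2*N_neig+1 rows."""
    N_row, N_feat = X.shape
    # Zero padding at both ends
    X_pad = np.vstack((np.zeros((N_neig, N_feat)), X, np.zeros((N_neig, N_feat))))
    # Each output row is the flattened window centred on that row
    X_aug = np.zeros((N_row, N_feat * (2 * N_neig + 1)))
    for r in range(N_row):
        X_aug[r] = X_pad[r:r + 2 * N_neig + 1].ravel()
    return X_aug

X = np.arange(10, dtype=float).reshape(5, 2)
X_aug = augment_features_window(X, 1)
```

With 2 features and one neighbour on each side, each sample becomes a 6-dimensional vector.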
LA_Team/Facies_classification_LA_TEAM_05.ipynb
esa-as/2016-ml-contest
apache-2.0
Data Analysis In this section we will run a cross-validation routine.
from tpot import TPOTClassifier from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = preprocess() tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,max_eval_time_mins=20, max_time_mins=100,scoring='f1_micro', ...
LA_Team/Facies_classification_LA_TEAM_05.ipynb
esa-as/2016-ml-contest
apache-2.0
Prediction
#Load testing data test_data = pd.read_csv('../validation_data_nofacies.csv') # Prepare training data X_tr = X y_tr = y # Augment features X_tr, padded_rows = augment_features(X_tr, well, depth) # Removed padded rows X_tr = np.delete(X_tr, padded_rows, axis=0) y_tr = np.delete(y_tr, padded_rows, axis=0) # Prepare ...
LA_Team/Facies_classification_LA_TEAM_05.ipynb
esa-as/2016-ml-contest
apache-2.0
Task 1: What are you standing on, Steve? We first need to detect if Steve is falling or sinking. How do we do that? As you know, we use blocks for building things in Minecraft. Most of you have built houses and crafted weapons. The secret is that in Minecraft every square that is visible is a block. The ground is built ...
# Task 1 pos = mc.player.getTilePos() # Steve's current position # b = mc.getBlock(?,?,?)
notebooks/Adventure3.ipynb
esumitra/minecraft-programming
mit
Hmm ... The program is printing numbers. It turns out that the block identifier is a number. Often numbers are used as identifiers. In order to print a useful message try the following ```python if b == block.AIR.id: print "I am on air" if b == block.WATER_FLOWING.id: print "I am on flowing water" if b == block...
# Task 2 code:
notebooks/Adventure3.ipynb
esumitra/minecraft-programming
mit
Now that you have defined your own cool function, let's call the function with different arguments like ```python myCoolFunction(1) myCoolFunction(3) myCoolFunction(5) ``` Try calling your function below. That was something fun with functions!
# call your cool function
notebooks/Adventure3.ipynb
esumitra/minecraft-programming
mit
Task 3: Is Steve Safe? For this task we will write a function named isSafe that will take a parameter position and return a value False if the input parameter position is above air or water and will return the value True otherwise. We will use the statements from Task 1 to write this function. Use return to return a va...
# Task 3
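One way to sketch isSafe, with mc.getBlock and the mcpi block ids stubbed out so the logic runs (and can be checked) outside Minecraft; the real version would call mc.getBlock on Steve's position and use block.AIR.id and block.WATER_FLOWING.id:

```python
# Stubbed block ids (block.AIR.id and block.WATER_FLOWING.id in mcpi)
AIR_ID = 0
WATER_FLOWING_ID = 8

# A tiny fake world: grass under (1, 0, 0), air under (0, 0, 0)
world = {(0, -1, 0): AIR_ID, (1, -1, 0): 2}  # 2 = grass

def getBlock(x, y, z):
    """Stand-in for mc.getBlock(x, y, z)."""
    return world.get((x, y, z), AIR_ID)

def isSafe(pos):
    """Return False if the block just under pos is air or flowing water."""
    b = getBlock(pos[0], pos[1] - 1, pos[2])
    return b not in (AIR_ID, WATER_FLOWING_ID)
```

In the notebook you would pass mc.player.getTilePos() (which has .x, .y, .z fields) instead of the plain tuple used here.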
notebooks/Adventure3.ipynb
esumitra/minecraft-programming
mit
Task 4: Steve's Safety Status Let's test the function isSafe that you wrote in Task 3 by posting a message every time Steve is not safe, i.e., let's post the message "You are not safe" every time Steve is over air or water. Type and modify the code below to call your function to show the message. python while True: pos = mc...
# Task 4
notebooks/Adventure3.ipynb
esumitra/minecraft-programming
mit
Task 5: Set a Block Now for the fun part of giving Steve superpowers. All we need to do to make Steve build a bridge every time he is not on a safe block is to set the block under him to a GLASS block or any other block we want to use for the bridge. To set a block at a position in Minecraft use the setBlock function. E...
# Task 5
notebooks/Adventure3.ipynb
esumitra/minecraft-programming
mit
Practice Problem 8.5 In Figure 8.13, let R = 2 Ω, L = 0.4 H, C = 25 mF, v(0) = 0, and i(0) = 50 mA. Find v(t) for t > 0.
print("Problema Prático 8.5") R = 2 L = 0.4 C = 25*m v0 = 0 i0 = 50*m A1 = symbols('A1') A2 = symbols('A2') alpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C) #C*dv(0)/dt + i(0) + v(0)/R = 0 #C*(-10A1 + A2) + i0 + v(0)/2 = 0 #v(0) = 0 = A1 #C*A2 = -i0 A2 = -i0/C A1 = 0 print("Constante A1:",A1) print("C...
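The computation the cell performs can be written out, assuming the standard source-free parallel RLC relations (KCL at t = 0 gives C v'(0) + i(0) + v(0)/R = 0, matching the comments in the code):

```latex
\begin{aligned}
\alpha &= \frac{1}{2RC} = \frac{1}{2(2)(0.025)} = 10,
\qquad \omega_0 = \frac{1}{\sqrt{LC}} = \frac{1}{\sqrt{(0.4)(0.025)}} = 10 \\
\alpha &= \omega_0 \;\Rightarrow\; \text{critically damped: } v(t) = (A_1 + A_2 t)\,e^{-10t} \\
v(0) &= 0 \;\Rightarrow\; A_1 = 0 \\
C\,\frac{dv}{dt}(0) &= -\Bigl(i(0) + \frac{v(0)}{R}\Bigr)
\;\Rightarrow\; A_2 = -\frac{i_0}{C} = -\frac{0.05}{0.025} = -2 \\
v(t) &= -2t\,e^{-10t}\ \text{V}, \qquad t > 0
\end{aligned}
```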
Aula 14 - Circuito RLC paralelo.ipynb
GSimas/EEL7045
mit