Step 4: Replace the missing data
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer.fit(a[:, :])
a[:, :] = imputer.transform(a[:, :])
a
b
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
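The mean strategy used above can be sketched by hand in plain NumPy; this is an illustration of what `SimpleImputer(strategy='mean')` computes, not sklearn's implementation:

```python
import numpy as np

def impute_mean(a):
    """Replace NaNs in each column with that column's mean
    (a NumPy sketch of SimpleImputer's 'mean' strategy)."""
    a = a.astype(float).copy()
    col_means = np.nanmean(a, axis=0)        # per-column mean, ignoring NaNs
    nan_rows, nan_cols = np.where(np.isnan(a))
    a[nan_rows, nan_cols] = col_means[nan_cols]
    return a

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan]])
print(impute_mean(X))  # [[1. 2.] [2. 4.] [3. 3.]]
```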
Step 5: Encoding (not required). Step 6: Splitting the dataset into training and testing sets
from sklearn.model_selection import train_test_split
atrain, atest, btrain, btest = train_test_split(a, b, test_size=0.2, random_state=1)
atrain
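Under the hood, a random train/test split is just a shuffled index cut. A minimal NumPy sketch of the idea (not sklearn's implementation):

```python
import numpy as np

def split(a, b, test_size=0.2, seed=1):
    """Shuffle row indices, then cut off the last `test_size`
    fraction as the test set -- the essence of train_test_split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(a))
    n_test = int(round(len(a) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return a[train_idx], a[test_idx], b[train_idx], b[test_idx]

a = np.arange(20).reshape(10, 2)
b = np.arange(10)
atrain, atest, btrain, btest = split(a, b)
print(atrain.shape, atest.shape)  # (8, 2) (2, 2)
```

Every row lands in exactly one of the two sets, which is the property the split must guarantee.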
Step 7: Feature scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
atrain = sc.fit_transform(atrain)
atest = sc.transform(atest)  # reuse the training-set statistics; do not refit on the test set
atrain
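Standardization uses the training set's mean and standard deviation only; the test set is transformed with those same statistics. A small NumPy sketch of this fit-on-train, transform-on-test discipline:

```python
import numpy as np

atrain = np.array([[1.0], [2.0], [3.0], [4.0]])
atest = np.array([[2.0], [5.0]])

# "fit": statistics come from the training set only
mu, sigma = atrain.mean(axis=0), atrain.std(axis=0)

# "transform": both sets are shifted/scaled by the *train* statistics
atrain_s = (atrain - mu) / sigma
atest_s = (atest - mu) / sigma

print(atrain_s.mean())  # ~0 by construction
print(atest_s.mean())   # generally not 0 -- test stats were never used
```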
Part B: Build my first linear model. Step 1: Training the classification model
from sklearn.linear_model import LogisticRegression
LoR = LogisticRegression(random_state=0)
LoR.fit(atrain, btrain)
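What `LogisticRegression.fit` optimizes can be sketched from scratch with plain gradient descent on the sigmoid log-loss; this toy version (no regularization, hypothetical learning rate) shows the mechanics, not sklearn's solver:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, n_iter=2000):
    """Tiny gradient-descent logistic regression on toy data."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)            # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)  # gradient of the log-loss
        b -= lr * (p - y).mean()
    return w, b

X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])
w, b = fit_logistic(X, y)
preds = (sigmoid(X @ w + b) >= 0.5).astype(int)
print(preds)  # [0 0 1 1]
```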
Step 2: Testing the linear model
bestimated = LoR.predict(atest)
print(np.concatenate((bestimated.reshape(len(bestimated), 1), btest.reshape(len(btest), 1)), 1))
[[0 0] [0 0] [0 1] [1 1] [0 0] [0 0] [0 0] [1 1] [0 0] [1 0] [0 0] [0 0] [0 0] [1 1] [1 1] [1 1] [1 1] [0 0] [0 0] [1 1] [0 0] [1 1] [1 1] [0 0] [0 1] [0 0] [1 1] [1 0] [1 1] [1 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 0] [0 0] [0 0] [0 1] [0 0] [1 1] [1 1] [0 0] [0 0] [1 1] [0 1...
Step 3: Performance metrics
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score
from sklearn.neighbors import KNeighborsClassifier  # needed for the loop below

cm = confusion_matrix(btest, bestimated)
print(cm)
print(accuracy_score(btest, bestimated))
print(precision_score(btest, bestimated))

np.mean((True, True, False))

error_rate = []
for i in range(1, 30):
    KC = KNeighborsClassifier(n_neighbors=i)
    KC.fi...
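The three metrics printed above come straight from the confusion-matrix counts. A hand-rolled check, laid out the way sklearn does (rows = true class, columns = predicted class):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Confusion matrix, accuracy and precision computed by hand."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    cm = np.array([[tn, fp], [fn, tp]])
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    return cm, accuracy, precision

y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])
cm, acc, prec = binary_metrics(y_true, y_pred)
print(cm)         # [[2 1] [1 2]]
print(acc, prec)  # 0.666..., 0.666...
```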
b) Using the KNN algorithm. Part A: Data preprocessing. Step 1: Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
Step 2: Importing the dataset
dataset = pd.read_csv('Logistic Data.csv')
dataset
Step 3: Creating the feature matrix and dependent variable vector
a = dataset.iloc[:, :-1].values
b = dataset.iloc[:, -1].values
a
b
Step 4: Replace the missing data
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer.fit(a[:, :])
a[:, :] = imputer.transform(a[:, :])
a
Step 5: Encoding (not required). Step 6: Splitting the dataset into training and testing sets
from sklearn.model_selection import train_test_split
atrain, atest, btrain, btest = train_test_split(a, b, test_size=0.2, random_state=1)
atrain
Step 7: Feature scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
atrain = sc.fit_transform(atrain)
atest = sc.transform(atest)  # reuse the training-set statistics; do not refit on the test set
atrain
Part B: Build my KNN classification model. Step 1: Training the classification model
from sklearn.neighbors import KNeighborsClassifier
KC = KNeighborsClassifier(n_neighbors=7, weights='uniform', p=2)
KC.fit(atrain, btrain)
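KNN itself is simple enough to sketch from scratch: Euclidean distance (that is what `p=2` selects) and a majority vote among the k nearest training points. A toy illustration, not sklearn's implementation:

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=3):
    """From-scratch k-nearest-neighbors classification."""
    preds = []
    for x in Xte:
        d = np.linalg.norm(Xtr - x, axis=1)     # Euclidean distance (p=2)
        nearest = ytr[np.argsort(d)[:k]]        # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])   # majority vote
    return np.array(preds)

Xtr = np.array([[0.0], [0.5], [1.0], [5.0], [5.5], [6.0]])
ytr = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(Xtr, ytr, np.array([[0.2], [5.8]])))  # [0 1]
```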
Step 2: Testing the KNN model
bestimated = KC.predict(atest)
Step 3: Performance metrics
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score

cm = confusion_matrix(btest, bestimated)
print(cm)
print(accuracy_score(btest, bestimated))
print(precision_score(btest, bestimated))

np.mean((True, True, False))

error_rate = []
for i in range(1, 30):
    KC = KNeighborsClassifier(n_neighbors=i)
    KC.fi...
CNN Basic

In this post, we will dig into the basic operations of a Convolutional Neural Network and explain what each layer looks like. We will also implement the basic CNN architecture with TensorFlow.

Author: Chanseok Kang. Categories: [Python, Deep_Learning, Tenso...
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

plt.rcParams['figure.figsize'] = (16, 10)
plt.rc('font', size=15)
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
Convolutional Neural Network. A [Convolutional Neural Network](https://en.wikipedia.org/wiki/Convolutional_neural_network) (CNN for short) is the architecture most widely used for image classification. Previously, we handled an image classification problem (Fashion-MNIST) with a Multi Layer Perceptron, and we found that it works. At that tim...
image = tf.constant([[[[1], [2], [3]],
                      [[4], [5], [6]],
                      [[7], [8], [9]]]], dtype=np.float32)
fig, ax = plt.subplots()
ax.imshow(image.numpy().reshape(3, 3), cmap='gray')
for (j, i), label in np.ndenumerate(image.numpy().reshape(3, 3)):
    if label < image.numpy().mean():
        ...
(1, 3, 3, 1)
We made a simple image of size 3x3. Remember that the order of the data should be `(batch, height, width, channel)`. In this case, the batch size is 1, and we generate a grayscale image, so the channel should be 1. Then we need to define the filter, kernel_size, and padding method. We will use one filter wi...
# Weight Initialization
weight = np.array([[[[1.]], [[1.]]],
                   [[[1.]], [[1.]]]])
weight_init = tf.constant_initializer(weight)
print("weight.shape: {}".format(weight.shape))

# Convolution layer
layer = tf.keras.layers.Conv2D(filters=1, kernel_size=(2, 2), padding='VALID',
                               kernel_initializer=weight_in...
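The `VALID` convolution above can be checked by hand: the output slides a 2x2 window over the 3x3 image, so the output shrinks to 2x2. A NumPy sketch of the single-channel computation:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """2-D cross-correlation with 'VALID' padding, stride 1 --
    what the Conv2D layer computes for one channel."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

img = np.arange(1, 10, dtype=float).reshape(3, 3)  # the toy 3x3 image
kernel = np.ones((2, 2))                           # the all-ones 2x2 filter
print(conv2d_valid(img, kernel))  # [[12. 16.] [24. 28.]]
```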
See? This is the output of the convolution layer with the toy image. This time, change the padding argument from `VALID` to `SAME` and see the result. In this case, zero padding is added ('half' padding), so the output shape also changes.
# Convolution layer with half padding
layer = tf.keras.layers.Conv2D(filters=1, kernel_size=(2, 2), padding='SAME',
                               kernel_initializer=weight_init)
output2 = layer(image)

# Check the result
fig, ax = plt.subplots()
ax.imshow(output2.numpy().reshape(3, 3), cmap='gray')
for (j, i), label in np.ndenumerate(output2.numpy()...
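With `SAME` padding and stride 1 the output keeps the input size. For an even 2x2 kernel, TensorFlow pads one row and one column of zeros on the bottom/right; a NumPy sketch of that behavior:

```python
import numpy as np

def conv2d_same(img, kernel):
    """'SAME'-padded 2-D cross-correlation, stride 1, for an even
    kernel: zero padding goes on the bottom/right (TF's convention)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((0, kh - 1), (0, kw - 1)))  # zero padding
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kernel)
    return out

img = np.arange(1, 10, dtype=float).reshape(3, 3)
print(conv2d_same(img, np.ones((2, 2))))
# [[12. 16.  9.] [24. 28. 15.] [15. 17.  9.]]
```

The last row and column are smaller because their windows overlap the zero padding.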
And what if we apply 3 filters here?
# Weight initialization
weight = np.array([[[[1., 10., -1.]], [[1., 10., -1.]]],
                   [[[1., 10., -1.]], [[1., 10., -1.]]]])
weight_init = tf.constant_initializer(weight)
print("Weight shape: {}".format(weight.shape))

# Convolution layer
layer = tf.keras.layers.Conv2D(filters=3, kernel_size=(2, 2), paddi...
Weight shape: (2, 2, 1, 3)
Pooling Layer. After passing through the activation function, the output may be changed. We can summarize the output with some rules, for example, finding the maximum pixel value in a specific window that is assumed to represent that field. ![max_pool](image/maxpool.png) *Fig 7, Max-Pooling*. In the figure, we use a 2x2 filter for pixel handling....
# Sample image
image = tf.constant([[[[4.], [3.]],
                      [[2.], [1.]]]], dtype=np.float32)

# Max Pooling layer
layer = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=1, padding='VALID')
output = layer(image)

# Check the output
print(output.numpy())
[[[[4.]]]]
After that, we found out that the output of this image is just 4, the maximum value. How about the case with `SAME` padding?
layer = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=1, padding='SAME')
output = layer(image)
print(output.numpy())
[[[[4.] [3.]] [[2.] [1.]]]]
You may see that the output is different from the previous one. That's because, while the max pooling operation runs, the zero padding is also considered as a pixel. So four max-pooling operations occur. ![SAME padding](image/maxpooling_same_padding.png) Convolution/MaxPooling in MNIST. In this case, we apply ...
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalization
X_train = X_train.astype(np.float32) / 255.
X_test = X_test.astype(np.float32) / 255.

image = X_train[0]
plt.imshow(image, cmap='gray')
plt.show()
To handle this image in TensorFlow, we need to convert it from a 2-D NumPy array to a 4-D tensor. There are several ways to do this; one approach in TensorFlow is to add `tf.newaxis`, like this:
print("Dimension: {}".format(image.shape))
image = image[tf.newaxis, ..., tf.newaxis]
print("Dimension: {}".format(image.shape))

# Convert it to tensor
image = tf.convert_to_tensor(image)
Dimension: (28, 28)
Dimension: (1, 28, 28, 1)
Same as before, we initialize the filter weight and apply it in a convolution layer. In this case, we use 5 filters, a (3, 3) filter size, a stride of (2, 2), and `SAME` padding.
weight_init = tf.keras.initializers.RandomNormal(stddev=0.01)
layer_conv = tf.keras.layers.Conv2D(filters=5, kernel_size=(3, 3), strides=(2, 2), padding='SAME',
                                    kernel_initializer=weight_init)
output = layer_conv(image)
print(output.shape)

feature_maps = np.swapaxes(output, 0, 3)
fig, ax...
After that, we feed this output into a max-pooling layer as its input.
layer_pool = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='SAME')
output2 = layer_pool(output)
print(output2.shape)

feature_maps = np.swapaxes(output2, 0, 3)
fig, ax = plt.subplots(1, 5)
for i, feature_map in enumerate(feature_maps):
    ax[i].imshow(feature_map.reshape(7, 7), cmap='gray')
plt.t...
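The spatial sizes seen above follow the `SAME`-padding rule, output = ceil(input / stride): 28 -> 14 after the stride-2 convolution, 14 -> 7 after the stride-2 pooling. As arithmetic:

```python
import math

def same_out(size, stride):
    """Output size for 'SAME' padding: ceil(input / stride)."""
    return math.ceil(size / stride)

# 28x28 MNIST image -> Conv2D stride (2, 2) -> MaxPool2D stride (2, 2)
after_conv = same_out(28, 2)
after_pool = same_out(after_conv, 2)
print(after_conv, after_pool)  # 14 7
```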
0.0. IMPORTS
import math
import numpy as np
import pandas as pd
import inflection
import seaborn as sns

from matplotlib import pyplot as plt
from IPython.core.display import HTML
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
0.1. Helper Functions
def jupyter_settings():
    %matplotlib inline
    %pylab inline
    plt.style.use( 'bmh' )
    plt.rcParams['figure.figsize'] = [25, 12]
    plt.rcParams['font.size'] = 24
    display( HTML( '<style>.container { width:100% !important; }</style>') )
    pd.options.display.max_columns = None
    pd.options.dis...
Populating the interactive namespace from numpy and matplotlib
0.2. Loading data
df_sales_raw = pd.read_csv( 'data/train.csv', low_memory=False )
df_store_raw = pd.read_csv( 'data/store.csv', low_memory=False )

# merge
df_raw = pd.merge( df_sales_raw, df_store_raw, how='left', on='Store' )
1.0. DATA DESCRIPTION
df1 = df_raw.copy()
1.1. Rename Columns
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo',
            'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment',
            'CompetitionDistance', 'CompetitionOpenSinceMonth',
            'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek',
            'Promo2SinceYear', 'PromoInterval']

snakecas...
1.2. Data Dimensions
print( 'Number of Rows: {}'.format( df1.shape[0] ) )
print( 'Number of Cols: {}'.format( df1.shape[1] ) )
Number of Rows: 1017209
Number of Cols: 18
1.3. Data Types
df1['date'] = pd.to_datetime( df1['date'] )
df1.dtypes
1.4. Check NA
df1.isna().sum()
1.5. Fill out NA
df1.sample()

# competition_distance
df1['competition_distance'] = df1['competition_distance'].apply( lambda x: 200000.0 if math.isnan( x ) else x )

# competition_open_since_month
df1['competition_open_since_month'] = df1.apply( lambda x: x['date'].month if math.isnan( x['competition_open_since_month'] ) else x['...
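The `competition_distance` rule above reads a missing distance as "no competitor nearby" and replaces it with a large constant. The same fill can be sketched vectorized with NumPy:

```python
import numpy as np

# Toy stand-in for the competition_distance column
competition_distance = np.array([1270.0, np.nan, 570.0, np.nan])

# NaN -> 200000.0 (a distance far larger than any real competitor)
filled = np.where(np.isnan(competition_distance), 200000.0,
                  competition_distance)
print(filled)  # [  1270. 200000.    570. 200000.]
```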
1.6. Change Data Types
# competition
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype( int )
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype( int )

# promo2
df1['promo2_since_week'] = df1['promo2_since_week'].astype( int )
df1['promo2_since_year'] = df1['promo2_since_year'].a...
1.7. Descriptive Statistics
num_attributes = df1.select_dtypes( include=['int64', 'float64'] )
cat_attributes = df1.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'] )
1.7.1. Numerical Attributes
# Central Tendency - mean, median
ct1 = pd.DataFrame( num_attributes.apply( np.mean ) ).T
ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T

# Dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame( num_attributes.apply( np.std ) ).T
d2 = pd.DataFrame( num_attributes.apply( min ) ).T
d3 = pd.Da...
1.7.2. Categorical Attributes
cat_attributes.apply( lambda x: x.unique().shape[0] )

aux = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)]

plt.subplot( 1, 3, 1 )
sns.boxplot( x='state_holiday', y='sales', data=aux )

plt.subplot( 1, 3, 2 )
sns.boxplot( x='store_type', y='sales', data=aux )

plt.subplot( 1, 3, 3 )
sns.boxplot( x='assortment'...
Chapter 8: Modeling Continuous Variables
import swat

conn = swat.CAS('server-name.mycomany.com', 5570, 'username', 'password')
cars = conn.upload_file('https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/cars.csv',
                        casout=dict(name='cars', replace=True))
cars.tableinfo()
cars.columninfo()
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Linear Regressions
conn.loadactionset('regression')
conn.help(actionset='regression')
NOTE: Added action set 'regression'.
NOTE: Information for action set 'regression':
NOTE:    regression
NOTE:       glm - Fits linear regression models using the method of least squares
NOTE:       genmod - Fits generalized linear regression models
NOTE:       logistic - Fits logistic regression models
Simple linear regression
cars.glm(
    target='MSRP',
    inputs=['MPG_City']
)
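As the action-set help noted, `glm` fits by the method of least squares. The same one-predictor fit can be sketched with NumPy on toy data standing in for MSRP vs. MPG_City (the numbers here are invented for illustration):

```python
import numpy as np

# Exact linear toy relation: MSRP = 60000 - 1200 * MPG_City
mpg = np.array([15.0, 20.0, 25.0, 30.0, 35.0])
msrp = 60000.0 - 1200.0 * mpg

# Least-squares fit of intercept and slope
X = np.column_stack([np.ones_like(mpg), mpg])
(intercept, slope), *_ = np.linalg.lstsq(X, msrp, rcond=None)
print(intercept, slope)  # ~60000.0 ~-1200.0
```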
Another way to define a model
linear1 = cars.Glm()
linear1.target = 'MSRP'
linear1.inputs = ['MPG_City']
linear1()

linear1.display.names = ['ParameterEstimates']
linear1()
Scoring
del linear1.display.names

result1 = conn.CASTable('MSRPPrediction')
result1.replace = True
linear1.output.casout = result1
linear1.output.copyVars = 'ALL'
linear1()
result1[['pred']].summary()
Output more information in the score table
result2 = conn.CASTable('MSRPPrediction3')
result2.replace = True
linear1.output.casout = result2
linear1.output.pred = 'Predicted_MSRP'
linear1.output.resid = 'Residual_MSRP'
linear1.output.lcl = 'LCL_MSRP'
linear1.output.ucl = 'UCL_MSRP'
linear1()
Use scatter plot of predicted values and residuals to check the model fitting
from bokeh.charts import Scatter, output_file, output_notebook, show

out1 = result2.to_frame()
p = Scatter(out1, x='Predicted_MSRP', y='Residual_MSRP', color='Origin', marker='Origin')
output_notebook()
# output_file('scatter.html')
show(p)
Investigate which observations have negative predicted MSRP values
result2[['Predicted_MSRP', 'MSRP', 'MPG_City', 'Make', 'Model']].query('Predicted_MSRP < 0').to_frame()

p = Scatter(out1, x='MPG_City', y='MSRP', color='Origin', marker='Origin')
output_notebook()
# output_file('scatter.html')
show(p)
Remove outliers
cars.where = 'MSRP < 100000 and MPG_City < 40'

result2 = conn.CASTable('cas.MSRPPrediction2')
result2.replace = True

linear2 = cars.Glm()
linear2 = cars.query('MSRP < 100000 and MPG_City < 40').glm
linear2.target = 'MSRP'
linear2.inputs = ['MPG_City']
linear2.output.casout = result2
linear2.output.copyVars = 'ALL'
l...
Check the model fitting again
out2 = result2.to_frame()
p = Scatter(out2, x='Predicted_MSRP', y='Residual_MSRP', color='Origin', marker='Origin')
output_notebook()
# output_file('scatter.html')
show(p)
Adding more predictors
nomList = ['Origin', 'Type', 'DriveTrain']
contList = ['MPG_City', 'Weight', 'Length']

linear3 = conn.CASTable('cars').Glm()
linear3.target = 'MSRP'
linear3.inputs = nomList + contList
linear3.nominals = nomList
linear3.display.names = ['FitStatistics', 'ParameterEstimates']
linear3()
Groupby regression
cars = conn.CASTable('cars')
out = cars.groupby('Origin')[['MSRP']].summary().concat_bygroups()
out['Summary'][['Column', 'Mean', 'Var', 'Std']]

cars = conn.CASTable('cars')
cars.groupby = ['Origin']
cars.where = 'MSRP < 100000 and MPG_City < 40'
nomList = ['Type', 'DriveTrain']
contList = ['MPG_City', 'Weight', 'Length']
grou...
Extensions of Ordinary Linear Regression: Generalized Linear Models. Gamma Regression
cars = conn.CASTable('cars')
genmodModel1 = cars.Genmod()
genmodModel1.model.depvars = 'MSRP'
genmodModel1.model.effects = ['MPG_City']
genmodModel1.model.dist = 'gamma'
genmodModel1.model.link = 'log'
genmodModel1()
NOTE: Convergence criterion (GCONV=1E-8) satisfied.
Multinomial Regression
genmodModel1.model.depvars = 'Cylinders'
genmodModel1.model.dist = 'multinomial'
genmodModel1.model.link = 'logit'
genmodModel1.model.effects = ['MPG_City']
genmodModel1.display.names = ['ModelInfo', 'ParameterEstimates']
genmodModel1()
NOTE: Convergence criterion (GCONV=1E-8) satisfied.
Score the input table
genmodResult = conn.CASTable('CylinderPredicted', replace=True)
genmodModel1.output.casout = genmodResult
genmodModel1.output.copyVars = 'ALL'
genmodModel1.output.pred = 'Prob_Cylinders'
genmodModel1()
genmodResult[['Prob_Cylinders', '_level_', 'Cylinders', 'MPG_City']].head(24)
NOTE: Convergence criterion (GCONV=1E-8) satisfied.
Regression Trees
conn.loadactionset('decisiontree')
conn.help(actionset='decisiontree')

cars = conn.CASTable('cars')
output1 = conn.CASTable('treeModel1')
output1.replace = True

tree1 = cars.dtreetrain
tree1.target = 'MSRP'
tree1.inputs = ['MPG_City']
tree1.casout = output1
tree1.maxlevel = 2
tree1()

output1[['_NodeID_', '_Parent_',...
Huggingface Sagemaker-sdk - Distributed Training Demo for `TensorFlow`. Distributed Data Parallelism with `transformers` and `tensorflow`
1. [Introduction](Introduction)
2. [Development Environment and Permissions](Development-Environment-and-Permissions)
   1. [Installation](Installation)
   2. [Development environ...
!pip install "sagemaker>=2.48.0" --upgrade
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Development environment. **Upgrade ipywidgets for the `datasets` library and restart the kernel; this is only needed when preprocessing is done in the notebook.**
%%capture
import IPython
!conda install -c conda-forge ipywidgets -y
IPython.Application.instance().kernel.do_shutdown(True)  # has to restart kernel so changes are used
import sagemaker.huggingface
Permissions. _If you are going to use Sagemaker in a local environment, you need access to an IAM Role with the required permissions for Sagemaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._
import sagemaker

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is n...
Preprocessing. In this example the preprocessing will be done in `train.py` when the script executes. You could also move the `preprocessing` outside of the script, upload the data to S3, and pass it in. Fine-tuning & starting a Sagemaker Training Job. In order to create a Sagemaker training job we need an `Hugg...
!pygmentize ./scripts/train.py
import argparse
import logging
import os
import sys

import tensorflow as tf
from...
Creating an Estimator and start a training job
from sagemaker.huggingface import HuggingFace

# hyperparameters, which are passed into the training job
hyperparameters = {
    'epochs': 1,
    'train_batch_size': 16,
    'model_name': 'distilbert-base-uncased',
}

# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed': {'dat...
Deploying the endpointTo deploy our endpoint, we call `deploy()` on our HuggingFace estimator object, passing in our desired number of instances and instance type.
predictor = huggingface_estimator.deploy(1,"ml.g4dn.xlarge")
Then, we use the returned predictor object to call the endpoint.
sentiment_input= {"inputs":"I love using the new Inference DLC."} predictor.predict(sentiment_input)
Finally, we delete the endpoint again.
predictor.delete_endpoint()
Extras Estimator Parameters
# container image used for training job
print(f"container image used for training job: \n{huggingface_estimator.image_uri}\n")

# s3 uri where the trained model is located
print(f"s3 uri where the trained model is located: \n{huggingface_estimator.model_data}\n")

# latest training job name for this estimator
print(f"l...
Attach an old training job to an estimator. In Sagemaker you can attach an old training job to an estimator to continue training, get results, etc.
from sagemaker.estimator import Estimator

# job which is going to be attached to the estimator
old_training_job_name = ''

# attach old training job
huggingface_estimator_loaded = Estimator.attach(old_training_job_name)

# get model output s3 from training job
huggingface_estimator_loaded.model_data
https://github.com/facebook/fb.resnet.torch/issues/180
https://github.com/bamos/densenet.pytorch/blob/master/compute-cifar10-mean.py
print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
print(f'Number of testing examples: {len(test_data)}')

BATCH_SIZE = 64

train_iterator = torch.utils.data.DataLoader(train_data, shuffle=True, batch_size=BATCH_SIZE)
valid_iterator = torch.utils.data.Data...
MIT
misc/6 - ResNet - Dogs vs Cats.ipynb
oney/pytorch-image-classification
https://discuss.pytorch.org/t/why-does-the-resnet-model-given-by-pytorch-omit-biases-from-the-convolutional-layer/10990/4
https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py
device = torch.device('cuda')

import torchvision.models as models
model = models.resnet18(pretrained=True).to(device)
print(model)

for param in model.parameters():
    param.requires_grad = False

print(model.fc)
model.fc = nn.Linear(in_features=512, out_features=2).to(device)
optimizer = optim.Adam(model.parameters())...
| Test Loss: 0.052 | Test Acc: 97.93% |
Inference code for running on kaggle server
!pip install ../input/pretrainedmodels/pretrainedmodels-0.7.4/pretrainedmodels-0.7.4/ > /dev/null  # no output

import gc
import os
import random
import sys
import six
import math
from pathlib import Path

from tqdm import tqdm_notebook as tqdm
from IPython.core.display import display, HTML
from typing import List

import...
MIT
Bengali.Ai classification challenge/pytorch-predict.ipynb
yoviny/Kaggle-Competitions
Dataset
""" Referenced `chainer.dataset.DatasetMixin` to work with pytorch Dataset. """ class DatasetMixin(Dataset): def __init__(self, transform=None): self.transform = transform def __getitem__(self, index): """Returns an example or a sequence of examples.""" if torch.is_tensor(index): ...
Data augmentation/processing
""" From https://www.kaggle.com/corochann/deep-learning-cnn-with-chainer-lb-0-99700 """ def affine_image(img): """ Args: img: (h, w) or (1, h, w) Returns: img: (h, w) """ # ch, h, w = img.shape # img = img / 255. if img.ndim == 3: img = img[0] # --- scale --- ...
Classifier
def accuracy(y, t):
    pred_label = torch.argmax(y, dim=1)
    count = pred_label.shape[0]
    correct = (pred_label == t).sum().type(torch.float32)
    acc = correct / count
    return acc


class BengaliClassifier(nn.Module):
    def __init__(self, predictor, n_grapheme=168, n_vowel=11, n_consonant=7):
        super...
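The `accuracy` helper above is framework-agnostic in spirit: argmax over the class scores, then the fraction of predictions matching the targets. A NumPy sketch of the same computation:

```python
import numpy as np

def accuracy_np(logits, targets):
    """NumPy equivalent of the torch accuracy helper: argmax over
    class scores, then the mean of exact matches."""
    pred = np.argmax(logits, axis=1)
    return (pred == targets).mean()

logits = np.array([[2.0, 0.1],   # predicts class 0
                   [0.3, 1.5],   # predicts class 1
                   [0.9, 0.2],   # predicts class 0
                   [0.1, 0.4]])  # predicts class 1
targets = np.array([0, 1, 1, 1])
print(accuracy_np(logits, targets))  # 0.75
```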
Check prediction
train = pd.read_csv(datadir/'train.csv')

pred_df = pd.DataFrame({
    'grapheme_root': p0,
    'vowel_diacritic': p1,
    'consonant_diacritic': p2
})

fig, axes = plt.subplots(2, 3, figsize=(22, 6))
plt.title('Label Count')
sns.countplot(x="grapheme_root", data=train, ax=axes[0, 0])
sns.countplot(x="vowel_diacritic", dat...
Tutorial 1: Neural Rate Models
**Week 2, Day 4: Dynamic Networks**
**By Neuromatch Academy**
__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva
__Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom
**Our 2021 Sp...
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt  # root-finding algorithm

# @title Figure Settings
import ipywidgets as widgets  # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/cou...
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
--- Section 1: Neuronal network dynamics
# @title Video 1: Dynamic networks
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
    from IPython.display import IFrame
    class BiliVideo(IFrame):
        def __init__(self, id, page=1, width=400, height=300, **kwargs):
            self.id = id
            src = 'https://player.bilibili.com/player.html?bvid=...
Section 1.1: Dynamics of a single excitatory populationIndividual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and netwo...
# @markdown *Execute this cell to set default parameters for a single excitatory population model*

def default_pars_single(**kwargs):
    pars = {}

    # Excitatory parameters
    pars['tau'] = 1.     # Timescale of the E population [ms]
    pars['a'] = 1.2      # Gain of the E population
    pars['theta'] = 2.8  # Threshold ...
You can now use:- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step- To update an existing parameter dictionary, use `pars['New_para'] = value`Because ...
def F(x, a, theta):
    """
    Population activation function.

    Args:
        x (float): the population input
        a (float): the gain of the function
        theta (float): the threshold of the function

    Returns:
        float: the population activation response F(x) for input x
    """
    ###########################################...
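One common choice for this exercise (stated here as an assumption, since the solution cell is linked rather than shown) is a sigmoid shifted so that zero input gives zero activity:

```python
import numpy as np

def F(x, a, theta):
    """Shifted sigmoid F-I curve: gain a, threshold theta, with the
    constant offset chosen so that F(0) = 0 (an assumed form)."""
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

print(F(0.0, a=1.2, theta=2.8))        # 0.0 by construction
print(F(10.0, a=1.2, theta=2.8))       # saturates below 1 for large input
```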
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_45ddc05f.py)*Example output:* Interactive Demo: Parameter exploration of F-I curveHere's an interactive demo that shows how the F-I curve changes for different values...
# @title # @markdown Make sure you execute this cell to enable the widget! def interactive_plot_FI(a, theta): """ Population activation function. Expects: a : the gain of the function theta : the threshold of the function Returns: plot the F-I curve with given parameters """ # set the ...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_1c0165d7.py) Section 1.3: Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ can not be determined via anal...
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`* def simulate_single(pars): """ Simulate an excitatory population of neurons Args: pars : Parameter dictionary Returns: rE : Activity of excitatory population (array) Example: pars = d...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
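The forward-Euler scheme described above can be sketched in a few lines. This is a self-contained approximation of what `simulate_single` does; for brevity, a flat parameter list replaces the `pars` dictionary, and the shifted-sigmoid F from earlier is assumed.

```python
import numpy as np

def F(x, a, theta):
    """Sigmoidal F-I curve shifted so F(0) = 0, as in the tutorial."""
    return (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1

def simulate_single_sketch(T=20., dt=0.1, tau=1., a=1.2, theta=2.8,
                           w=0., I_ext=3., r_init=0.2):
    """Forward-Euler integration of tau dr/dt = -r + F(w r + I_ext)."""
    t = np.arange(0, T, dt)
    r = np.zeros_like(t)
    r[0] = r_init
    for k in range(len(t) - 1):
        drdt = (-r[k] + F(w * r[k] + I_ext, a, theta)) / tau
        r[k + 1] = r[k] + dt * drdt
    return t, r

t, r = simulate_single_sketch()
# With w = 0 the activity relaxes toward the steady state r* = F(I_ext).
print(r[-1], F(3., 1.2, 2.8))
```

Because the update only depends on the previous step, the scheme is cheap, but the time step `dt` must stay well below `tau` for accuracy.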
Interactive Demo: Parameter Exploration of single population dynamicsNote that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo.How doe...
# @title # @markdown Make sure you execute this cell to enable the widget! # get default parameters pars = default_pars_single(T=20.) def Myplot_E_diffI_difftau(I_ext, tau): # set external input and time constant pars['I_ext'] = I_ext pars['tau'] = tau # simulation r = simulate_single(pars) # Analytic...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_65dee3e7.py) Think!Above, we have numerically solved a system driven by a positive input. Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.- Why ...
# @title Video 2: Fixed point from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&p...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
As you varied the two parameters in the last Interactive Demo, you noticed that, while the system output changes quickly at first, over time it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**....
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars): """Given parameters, compute dr/dt as a function of r. Args: r (1D array) : Average firing rate of the excitatory population I_ext, w, a, theta, tau (numbers): Simulation parameters to use other_pars : Other simulation parameters are unused by...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
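One possible completion of this exercise, bundled with the transfer function so it runs standalone; the parameter values follow the w = 5.0, I_ext = 0.5 setting used later in the tutorial.

```python
import numpy as np

def F(x, a, theta):
    """Population activation function, shifted so F(0) = 0."""
    return (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1

def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
    """Vectorized dr/dt = (-r + F(w r + I_ext)) / tau over an array of rates."""
    return (-r + F(w * r + I_ext, a, theta)) / tau

r = np.linspace(0, 1, 1000)
drdt = compute_drdt(r, I_ext=0.5, w=5.0, a=1.2, theta=2.8, tau=1.0)
# Sign changes of drdt along r mark the fixed points; with these
# parameters there are three of them.
print(np.sum(np.diff(np.sign(drdt)) != 0))
```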
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_c5280901.py)*Example output:* Exercise 3: Fixed point calculationWe will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{...
# @markdown *Execute this cell to enable the fixed point functions* def my_fp_single(r_guess, a, theta, w, I_ext, **other_pars): """ Calculate the fixed point through drE/dt=0 Args: r_guess : Initial value used for scipy.optimize function a, theta, w, I_ext : simulation parameters Returns: x_fp ...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
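A sketch of how such a fixed-point finder can be built on `scipy.optimize.root`, solving -r + F(w r + I_ext) = 0 from a given initial guess. The helper name and flat signature here are illustrative, not the tutorial's exact API.

```python
import numpy as np
from scipy.optimize import root

def F(x, a, theta):
    """Population activation function, shifted so F(0) = 0."""
    return (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1

def my_fp_single_sketch(r_guess, a, theta, w, I_ext):
    """Find a root of dr/dt, i.e. solve -r + F(w r + I_ext) = 0."""
    def drdt(r):
        return -r + F(w * r + I_ext, a, theta)
    return root(drdt, r_guess).x.item()

# With w = 5.0 and I_ext = 0.5, different initial guesses land on
# different fixed points (compare the sample output: ~0.042, ~0.447, ~0.900).
for r_guess in (0.0, 0.4, 0.9):
    print(my_fp_single_sketch(r_guess, a=1.2, theta=2.8, w=5.0, I_ext=0.5))
```

Root finders only return one solution per call, so sweeping several initial guesses, as the tutorial's `my_fp_finder` does with `r_init_vector`, is how all three fixed points are recovered.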
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_0637b6bf.py)*Example output:* Interactive Demo: fixed points as a function of recurrent and external inputs.You can now explore how the previous plot changes when the...
# @title # @markdown Make sure you execute this cell to enable the widget! def plot_intersection_single(w, I_ext): # set your parameters pars = default_pars_single(w=w, I_ext=I_ext) # find fixed points r_init_vector = [0, .4, .9] x_fps = my_fp_finder(pars, r_init_vector) # plot r = np.linspace(0, 1.,...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_20486792.py) --- SummaryIn this tutorial, we have investigated the dynamics of a rate-based single population of neurons.We learned about:- The effect of the input par...
# @title Video 3: Stability of fixed points from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.h...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
Initial values and trajectoriesHere, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
# @markdown Execute this cell to see the trajectories! pars = default_pars_single() pars['w'] = 5.0 pars['I_ext'] = 0.5 plt.figure(figsize=(8, 5)) for ie in range(10): pars['r_init'] = 0.1 * ie # set the initial value r = simulate_single(pars) # run the simulation # plot the activity with given initial plt...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
Interactive Demo: dynamics as a function of the initial valueLet's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
# @title # @markdown Make sure you execute this cell to enable the widget! pars = default_pars_single(w=5.0, I_ext=0.5) def plot_single_diffEinit(r_init): pars['r_init'] = r_init r = simulate_single(pars) plt.figure() plt.plot(pars['range_t'], r, 'b', zorder=1) plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_4d2de6a0.py) Stability analysis via linearization of the dynamicsJust like Equation $1$ in the case ($w=0$) discussed above, a generic linear system $$\frac{dx}{dt} =...
def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : the derivative dF/dx evaluated at input x """ #####################################################...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
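One way to complete the derivative exercise: the additive constant in F vanishes under differentiation, so dF/dx is the standard sigmoid derivative scaled by the gain a. The finite-difference comparison at the end is just a sanity check.

```python
import numpy as np

def dF(x, a, theta):
    """Derivative of the shifted sigmoid F with respect to its input:
    dF/dx = a e^{-a(x-theta)} (1 + e^{-a(x-theta)})^{-2}
    """
    return a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2

def F(x, a, theta):
    """Population activation function, shifted so F(0) = 0."""
    return (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1

# Compare the analytic derivative to a central finite difference.
a, theta, x, h = 1.2, 2.8, 1.0, 1e-6
print(dF(x, a, theta))
print((F(x + h, a, theta) - F(x - h, a, theta)) / (2 * h))
```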
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_ce2e3bc5.py)*Example output:* Exercise 5: Compute eigenvaluesAs discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fix...
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars): """ Args: fp : fixed point r_fp tau, a, theta, w, I_ext : Simulation parameters Returns: eig : eigenvalue of the linearized system """ ##################################################################### ## TODO for students: compute ...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
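A possible completion: linearizing Equation (1) around a fixed point r_fp gives the one-dimensional eigenvalue λ = (-1 + w F'(w r_fp + I_ext)) / τ. The sketch below bundles the derivative so it runs standalone; the fixed-point values are taken from the sample output.

```python
import numpy as np

def dF(x, a, theta):
    """Derivative of the shifted sigmoid F-I curve."""
    return a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2

def eig_single_sketch(fp, tau, a, theta, w, I_ext):
    """Eigenvalue of the linearization around fixed point r_fp:
    lambda = (-1 + w dF(w r_fp + I_ext)) / tau
    """
    return (-1. + w * dF(w * fp + I_ext, a, theta)) / tau

# For w = 5.0, I_ext = 0.5: the outer fixed points give negative
# eigenvalues (stable), the middle one positive (unstable).
for fp in (0.042, 0.447, 0.900):
    print(fp, eig_single_sketch(fp, tau=1., a=1.2, theta=2.8, w=5.0, I_ext=0.5))
```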
**SAMPLE OUTPUT**
```
Fixed point1 at 0.042 with Eigenvalue=-0.583
Fixed point2 at 0.447 with Eigenvalue=0.498
Fixed point3 at 0.900 with Eigenvalue=-0.626
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_e285f60d.py)...
# @title OU process `my_OU(pars, sig, myseed=False)` # @markdown Make sure you execute this cell to visualize the noise! def my_OU(pars, sig, myseed=False): """ A function that generates an Ornstein-Uhlenbeck process Args: pars : parameter dictionary sig : noise amplitude myseed : r...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
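For reference, an Ornstein-Uhlenbeck process can be discretized with an Euler-Maruyama step as sketched below; the exact update in the hidden `my_OU` cell may differ in details such as seeding and noise scaling.

```python
import numpy as np

def my_OU_sketch(T=1000., dt=0.1, tau_ou=1., sig=0.7, mu=0., myseed=2020):
    """Euler-Maruyama discretization of an Ornstein-Uhlenbeck process:
    I[k+1] = I[k] + dt/tau_ou * (mu - I[k]) + sig * sqrt(2 dt / tau_ou) * N(0, 1)
    """
    rng = np.random.default_rng(myseed)
    t = np.arange(0, T, dt)
    noise = rng.standard_normal(len(t))
    I_ou = np.zeros(len(t))
    I_ou[0] = sig * noise[0]
    for it in range(len(t) - 1):
        I_ou[it + 1] = (I_ou[it]
                        + dt / tau_ou * (mu - I_ou[it])
                        + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
    return I_ou

I_ou = my_OU_sketch()
# The process fluctuates around mu with standard deviation close to sig.
print(I_ou.mean(), I_ou.std())
```

The drift term pulls the signal back toward its mean on the timescale `tau_ou`, which is what makes OU noise temporally correlated rather than white.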
Example: Up-Down transitionIn the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms by applying OU inputs.
# @title Simulation of an E population with OU inputs # @markdown Make sure you execute this cell to spot the Up-Down states! pars = default_pars_single(T=1000) pars['w'] = 5.0 sig_ou = 0.7 pars['tau_ou'] = 1. # [ms] pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020) r = simulate_single(pars) plt.figure(f...
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
Refs: https://github.com/deep-learning-with-pytorch/dlwpt-code
import numpy as np import torch
_____no_output_____
MIT
Tutorial/.ipynb_checkpoints/c5_optimizers-checkpoint.ipynb
danhtaihoang/pytorch-deeplearning
Optimizers
x = [35.7, 55.9, 58.2, 81.9, 56.3, 48.9, 33.9, 21.8, 48.4, 60.4, 68.4] y = [0.5, 14.0, 15.0, 28.0, 11.0, 8.0, 3.0, -4.0, 6.0, 13.0, 21.0] x = torch.tensor(x) y = torch.tensor(y) #x = 0.1*x # normalize x_norm = 0.1*x def model(x, w, b): return w * x + b def loss_fn(y_p, y): squared_diffs = (y_p - y)**2 ...
Epoch 500, Loss 7.860115 Epoch 1000, Loss 3.828538 Epoch 1500, Loss 3.092191 Epoch 2000, Loss 2.957698 Epoch 2500, Loss 2.933134 Epoch 3000, Loss 2.928648 Epoch 3500, Loss 2.927830 Epoch 4000, Loss 2.927679 Epoch 4500, Loss 2.927652 Epoch 5000, Loss 2.927647
MIT
Tutorial/.ipynb_checkpoints/c5_optimizers-checkpoint.ipynb
danhtaihoang/pytorch-deeplearning
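The manual parameter updates above can be delegated to `torch.optim`. Here is a minimal sketch using `torch.optim.SGD` with the same data, normalization, initialization, and learning rate, so it should settle near the same loss plateau:

```python
import torch

x = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3, 48.9,
                  33.9, 21.8, 48.4, 60.4, 68.4])
y = torch.tensor([0.5, 14.0, 15.0, 28.0, 11.0, 8.0,
                  3.0, -4.0, 6.0, 13.0, 21.0])
x_norm = 0.1 * x  # same normalization as above

params = torch.tensor([1.0, 0.0], requires_grad=True)
optimizer = torch.optim.SGD([params], lr=1e-2)

for epoch in range(5000):
    w, b = params
    y_p = w * x_norm + b            # linear model
    loss = ((y_p - y)**2).mean()    # mean squared error
    optimizer.zero_grad()           # clear gradients from the last step
    loss.backward()                 # compute d(loss)/d(params)
    optimizer.step()                # update params in place

print(loss.item())  # should approach the ~2.93 plateau seen above
```

Swapping `SGD` for another optimizer such as `torch.optim.Adam` only requires changing the constructor line; the training loop stays identical.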
The basic nbpy_top_tweeters app

First, let's get connected with the Twitter API:
import os import tweepy auth = tweepy.AppAuthHandler( os.environ['TWITTER_API_TOKEN'], os.environ['TWITTER_API_SECRET'] ) api = tweepy.API(auth) api # import requests_cache # requests_cache.install_cache()
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
At this point, we use the `search()` method to get a list of tweets matching the search term:
nbpy_tweets = api.search('#nbpy', count=100) len(nbpy_tweets)
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
From the iterable of tweets we get the number of tweets per user by using a `collections.Counter` object:
from collections import Counter tweet_count_by_username = Counter(tweet.user.screen_name for tweet in nbpy_tweets) tweet_count_by_username
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
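A tiny standalone illustration of the same pattern, with made-up screen names standing in for live `tweet.user.screen_name` values:

```python
from collections import Counter

# Hypothetical screen names in place of real tweet data
screen_names = ['alice', 'bob', 'alice', 'carol', 'alice', 'bob']
tweet_count_by_username = Counter(screen_names)

print(tweet_count_by_username)
# → Counter({'alice': 3, 'bob': 2, 'carol': 1})
print(tweet_count_by_username.most_common(2))
# → [('alice', 3), ('bob', 2)]
```

`Counter` accepts any iterable, including a generator expression like the one above, so no intermediate list of screen names needs to be built.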
At this point, we can calculate the top $n$ tweeters:
top_tweeters = tweet_count_by_username.most_common(20) top_tweeters
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
And show a scoreboard with the winners:
for username, tweet_count in top_tweeters: print(f'@{username:20}{tweet_count:2d}')
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
- We can see that, already with the "vanilla" notebook, we have some degree of interactivity simply by editing and running the code cell-by-cell rather than in one go --- From `repr()` output to rich output with `IPython.display`
import random tweet = random.choice(nbpy_tweets) tweet
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
- The reprs of these objects are rich in information, but not very easy to explore
tweet.user
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
The `IPython.display` module contains several classes that render rich output from objects in a cell's output.
from IPython.display import *
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters