Create model
def create_model(input_shape: Tuple[int], output_shape: int, activation, loss,
                 meta_shape: Optional[int] = None, task: str = "B",
                 learning_rate: float = 0.001, pretrain: bool = False) -> models.Model:
    """
    The function for creating the model.

    Parameters
    ---...
MIT
notebooks/training_model.ipynb
nft-appraiser/nft-appraiser-ml
Training model
def train(path_list: np.ndarray, target: np.ndarray, loss,
          meta_data: Optional[np.ndarray] = None, task: str = "B"):
    """
    The function for training the model.

    Parameters
    ----------
    path_list : np.ndarray
        The path list of all image data.
    target : np.ndarray
        The array of targ...
Training models: Task A
meta_features =\
    asset_df_A['collection.name'].unique().tolist() + ['num_sales']
path_list = asset_df_A['full_path'].values
meta_data = asset_df_A[meta_features].values
target = asset_df_A['target'].values
model_A = train(path_list, target, losses.mean_squared_error, meta_data, task="A")
# save_mo...
2021-11-14 08:31:09.150668: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-11-14 08:31:09.155139: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful N...
Task A (images only)
path_list = asset_df_A['full_path'].values
target = asset_df_A['target'].values
model_A = train(path_list, target, losses.mean_squared_error, task="B")
starting training *------------------------------* Epoch 1/100 1224/1224 [==============================] - 217s 175ms/step - loss: 2.7349 - mae: 1.0369 - mse: 2.7349 - val_loss: 2.2998 - val_mae: 0.9054 - val_mse: 2.2998 Epoch 2/100 1224/1224 [==============================] - 211s 172ms/step - loss: 1.7788 - mae: 0.8...
TaskB
path_list = asset_df_B['full_path'].values
target = asset_df_B['target'].values
model_B = train(path_list, target, losses.mean_squared_error)
# save_model(model_B, "../models/baselineB.pkl")
starting training *------------------------------* Epoch 1/100 293/293 [==============================] - 57s 181ms/step - loss: 0.5716 - mae: 0.5127 - mse: 0.5716 - val_loss: 0.3407 - val_mae: 0.3353 - val_mse: 0.3407 Epoch 2/100 293/293 [==============================] - 52s 176ms/step - loss: 0.4381 - mae: 0.4247 - ...
Evaluate model Task A
file_name = "../models/baselineA.pkl"
model = load_model(file_name)
meta_features =\
    asset_df_A['collection.name'].unique().tolist() + ['num_sales']
path_list = np.vstack(
    (asset_df_A['full_path'].values.reshape(-1, 1),
     asset_df_B['full_path'].values.reshape(-1, 1))
).reshape(-1)
meta_data = np.vstack(
    ...
Task B
file_name = "../models/baselineB.pkl"
model = load_model(file_name)
path_list = asset_df_B['full_path'].values
meta_data = asset_df_B[meta_features].values
target = asset_df_B['target'].values
train_path, val_path, train_meta, val_meta, train_y, val_y =\
    train_test_split(path_list, meta_data, target, test_size=0....
- Being asked to leave others or groups
- Being restricted from contact with others
- Distancing self from relationships
- Isolation
- Lack of meaningful social group
- Loneliness
- Not being understood
- Physiological barriers
pos_txt_files[0:5]

pos_files1 = []
for pos_file in pos_files:
    pos_file = pos_file.split('\\')
    pos_file = pos_file[1].split('.knowtator')
    pos_file = pos_file[0]
    pos_files1.append(pos_file)
pos_files1

neg_files1 = []
for neg_file in neg_files:
    neg_file = neg_file.split('\\')
    neg_file = neg_file[1]...
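The path-splitting above can be sketched more robustly with `os.path`; the file names here are made up for illustration, and the backslash normalization stands in for Windows-style paths:

```python
import os

def stem_of(path, marker=".knowtator"):
    """Return the file name with everything from `marker` onward removed."""
    base = os.path.basename(path)   # strip directory components
    return base.split(marker)[0]    # drop the .knowtator(.xml) suffix

# hypothetical annotation file names, for illustration only
files = ["corpus\\note_01.knowtator.xml", "corpus/note_02.knowtator.xml"]
stems = [stem_of(f.replace("\\", "/")) for f in files]   # ["note_01", "note_02"]
```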
Apache-2.0
eHostXML_ext.ipynb
phzpan/6950_nlp
_*Using Qiskit Aqua algorithms: a how-to guide*_ This notebook demonstrates how to use the `Qiskit Aqua` library to invoke an algorithm and process the result. Further information on the algorithms may be found in the online [Aqua documentation](https://qiskit.org/documentation/aqua/algorithms.html). Algorithms in Aqua ...
from qiskit.aqua import Operator
Apache-2.0
aqua/algorithm_introduction_with_vqe.ipynb
renier/qiskit-tutorials-community
As input for an energy problem we need a Hamiltonian, so we first create a suitable `Operator` instance. In this case we have a Pauli list, shown below, from a previously computed Hamiltonian that we saved, so that this notebook can focus on using the algorithms. We simply load these Paulis to create the original Ope...
pauli_dict = {
    'paulis': [{"coeff": {"imag": 0.0, "real": -1.052373245772859}, "label": "II"},
               {"coeff": {"imag": 0.0, "real": 0.39793742484318045}, "label": "ZI"},
               {"coeff": {"imag": 0.0, "real": -0.39793742484318045}, "label": "IZ"},
               {"coeff": {"imag": 0.0, "real": -0.011...
Let's start with a classical algorithm. We can now use the Operator without regard to how it was created. We chose to start this tutorial with a classical algorithm, as it involves a little less setting up than the `VQE` quantum algorithm we will use later. Here we will use `ExactEigensolver` to compute the minimum eigen...
from qiskit.aqua.algorithms import ExactEigensolver

ee = ExactEigensolver(qubit_op)
result = ee.run()
print(result['energy'])
-1.857275030202378
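The same minimum eigenvalue can be checked with plain NumPy by materializing the Hamiltonian from its Pauli terms. Note the `ZZ` and `XX` coefficients below are assumptions (the dictionary above is truncated); they are the values used in the standard two-qubit H2 example this kind of tutorial is built on:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

# coefficients from the pauli_dict above; ZZ and XX are assumed (truncated in the text)
terms = [
    (-1.052373245772859,   np.kron(I, I)),  # II
    ( 0.39793742484318045, np.kron(Z, I)),  # ZI
    (-0.39793742484318045, np.kron(I, Z)),  # IZ
    (-0.01128010425623538, np.kron(Z, Z)),  # ZZ (assumed)
    ( 0.18093119978423156, np.kron(X, X)),  # XX (assumed)
]
H = sum(c * m for c, m in terms)
min_energy = np.linalg.eigvalsh(H).min()   # matches the ExactEigensolver result
```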
Now let's show the `declarative` approach. Here we need to prepare a configuration dictionary of parameters to define the algorithm. Again we will use the ExactEigensolver, and we need to create an `algorithm` entry where it is named by `name`. The name comes from a `CONFIGURATION` dictionary in the algorithm, and this name ...
from qiskit.aqua import run_algorithm
from qiskit.aqua.input import EnergyInput

aqua_cfg_dict = {
    'algorithm': {
        'name': 'ExactEigensolver'
    }
}
algo_input = EnergyInput(qubit_op)
result = run_algorithm(aqua_cfg_dict, algo_input)
print(result['energy'])
-1.8572750302023808
Let's switch now to using a quantum algorithm. We will use the Variational Quantum Eigensolver (VQE) to solve the same problem as above. As its name implies, it uses a variational approach: an ansatz (a variational form) is supplied, and using a quantum/classical hybrid technique the energy resulting from evaluating the ...
aqua_cfg_dict = {
    'algorithm': {
        'name': 'VQE',
        'operator_mode': 'matrix'
    },
    'variational_form': {
        'name': 'RYRZ',
        'depth': 3,
        'entanglement': 'linear'
    },
    'optimizer': {
        'name': 'L_BFGS_B',
        'maxfun': 1000
    },
    'backend': {
        'name':...
-1.8572750302012253
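The hybrid loop VQE performs can be sketched without any quantum machinery. Below is a toy illustration, not Aqua code: a one-parameter RY ansatz on a single qubit with Hamiltonian H = Z, whose energy is E(θ) = ⟨ψ(θ)|Z|ψ(θ)⟩ = cos θ, minimized by a simple parameter scan standing in for the classical optimizer:

```python
import math

def energy(theta):
    # |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>, H = Z  =>  <H> = cos(theta)
    return math.cos(theta)

# crude classical optimizer: scan the parameter, keep the lowest energy
thetas = [i * 2 * math.pi / 1000 for i in range(1000)]
best_theta = min(thetas, key=energy)
best_energy = energy(best_theta)   # approaches -1.0, the ground energy of Z
```

A real VQE replaces `energy` with a circuit evaluation on a backend and the scan with an optimizer such as L_BFGS_B.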
And now the `programmatic` approach. Here we create the variational form and optimizer and then pass them to VQE along with the Operator. The backend is created and passed to the algorithm so it can be run there.
from qiskit import BasicAer
from qiskit.aqua.algorithms import VQE
from qiskit.aqua.components.variational_forms import RYRZ
from qiskit.aqua.components.optimizers import L_BFGS_B

var_form = RYRZ(qubit_op.num_qubits, depth=3, entanglement='linear')
optimizer = L_BFGS_B(maxfun=1000)
vqe = VQE(qubit_op, var_form, optimi...
-1.8572750301886618
While a backend can be passed directly to the quantum algorithm's run(), internally it will be detected as such and wrapped as a QuantumInstance. By doing this explicitly yourself, however, as below, various parameters governing the execution can be set, including, in more advanced cases, the ability to set noise models, coupli...
from qiskit.aqua import QuantumInstance
from qiskit.transpiler import PassManager

var_form = RYRZ(qubit_op.num_qubits, depth=3, entanglement='linear')
optimizer = L_BFGS_B(maxfun=1000)
vqe = VQE(qubit_op, var_form, optimizer)
backend = BasicAer.get_backend('statevector_simulator')
qi = QuantumInstance(backend=backend,...
-1.8572750302012366
Notebook 2: Requesting information. After getting the access token and refreshing it, we started requesting information for our analysis. As a reminder, our four goals are to find the top twenty friends that like our posts the most, demographics for places we have been tagged, reactions f...
import requests
import importlib
import json
import pandas as pd

import keys_project
importlib.reload(keys_project)
keychain = keys_project.keychain
d = {}
d['access_token'] = keychain['facebook']['access_token']  # Getting the long-lived access token
MIT
Notebook_2/Notebook_2.ipynb
nguyenst1/facebook-api-analysis
Below are all of the helper functions that we have used. The return type of a response from the Graph API is not easy to parse, and hence we convert all responses to JSON. The other functions supplement our data requests and modifications as described in the program-level docs.
def response_to_json(response):
    '''
    This function converts the response into json format
    Parameter:
        response: the request response to convert to json
    Return:
        the response in json
    '''
    string_response = response.content.decode('utf-8')  # decoding the response to string
    re...
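The decode-then-parse step described above can be sketched as follows; the stub response object is hypothetical, standing in for a real `requests` response:

```python
import json

class FakeResponse:
    """Stand-in for requests.Response: only the .content bytes are needed here."""
    def __init__(self, content):
        self.content = content

def response_to_json(response):
    """Decode the raw bytes of a response and parse them as JSON."""
    string_response = response.content.decode('utf-8')
    return json.loads(string_response)

resp = FakeResponse(b'{"data": [{"id": "1", "name": "A"}]}')
parsed = response_to_json(resp)   # {'data': [{'id': '1', 'name': 'A'}]}
```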
Last but not least, we exported the dictionary to a CSV file for later analysis in Notebook 3. This question took us quite a long time; however, the later questions were fairly straightforward and similar to this one. Question: Getting the number of Facebook reactions of each reaction type for a particular uploa...
def reaction_statistics(id_, limit, fb_upload_type):
    '''
    This function gets the total reactions of each feed
    Parameter:
        id_: a string id to a facebook object such as a page or person
        limit: the limit to the number of posts obtained from the request in string
        fb_upload_type: a valid t...
Hence, for each cell we can see the upload_type ID that identifies the post or photo, and the number of reactions for each upload. QUESTION: Obtaining feed data to analyze the kinds, times, and popularity of a user or page's feed. In this question, we get feed information for the artist Bob Dylan (though our function us a...
def feed_data(object_id, limit):
    '''
    This function generates a list of dictionaries for each feed of information
    Parameters:
        object_id: the id of the object posting events in string
        limit: the number of most recent events in string
    Return:
        a list of dictionaries where each data i...
Question: Get the top twenty friends who like our posts most frequently. The cell below holds our code for the first question: the top friends who like our posts the most. First, we created a function to convert the response into JSON format, since we would be making a lot of requests and creating dictionaries from ...
def friend_likes(id_, limit, fb_upload_type):
    '''
    This function gets a dictionary for each kind of reactions for each post
    Parameter:
        id_: a string id to a facebook object such as a page or person
        limit: the limit to the number of posts obtained from the request in string
        fb_upload_ty...
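The "top twenty friends" tally reduces to counting names across the like lists of all posts. A minimal sketch with `collections.Counter`, using made-up data in the shape the text describes (not real Graph API output):

```python
from collections import Counter

# hypothetical likes-per-post data, shaped like the parsed JSON described above
posts = [
    {"likes": ["Ana", "Ben", "Ana"]},
    {"likes": ["Ben", "Cruz"]},
    {"likes": ["Ana"]},
]

counts = Counter(name for post in posts for name in post["likes"])
top_friends = counts.most_common(20)   # [('Ana', 3), ('Ben', 2), ('Cruz', 1)]
```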
Question: Demographic analysis for places where we have been tagged. In this question, we want to explore the places where we have travelled and been tagged on Facebook. We want to create a demographic plot that shows where we have been, based on latitudes and longitudes. Since we already know how to perform a GET reque...
def tagged_data(object_id):
    '''
    This function generates a dictionary which includes the longitudes, latitudes, and names for places.
    Parameter:
        object_id: a string id to a facebook object such as a page or person
    Return:
        a list of dictionaries of latitude, longitude, country and name of tagged...
We build a dataframe that contains the latitude, longitude, and name data, then write it out as a CSV file.
df_tagged_places = pd.DataFrame(tagged_data('me'))
to_csv('df_tagged_places.csv', df_tagged_places)
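A dependency-free version of the same export, using the standard-library `csv` module (the rows are made up for illustration; `to_csv` above is the notebook's own helper):

```python
import csv, os, tempfile

rows = [  # hypothetical tagged places
    {"latitude": 40.7, "longitude": -74.0, "name": "New York"},
    {"latitude": 48.9, "longitude": 2.35, "name": "Paris"},
]

path = os.path.join(tempfile.gettempdir(), "df_tagged_places_demo.csv")
with open(path, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["latitude", "longitude", "name"])
    writer.writeheader()           # header row: latitude,longitude,name
    writer.writerows(rows)

with open(path) as f:
    lines = f.read().splitlines()  # first line is the header
```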
We then show the first ten rows of this dataframe.
df_tagged_places.head(10)
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/Training/binary_text_classification/NLU_training_sentiment_classifier_demo_IMDB.ipynb) Tra...
!wget https://setup.johnsnowlabs.com/nlu/colab.sh -O - | bash
import nlu
--2021-05-05 05:38:30-- https://raw.githubusercontent.com/JohnSnowLabs/nlu/master/scripts/colab_setup.sh Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... ...
Apache-2.0
nlu/colab/Training/binary_text_classification/NLU_training_sentiment_classifier_demo_IMDB.ipynb
fcivardi/spark-nlp-workshop
2. Download the IMDB dataset: https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews. The IMDB dataset has 50K movie reviews for natural language processing or text analytics. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a...
! wget http://ckl-it.de/wp-content/uploads/2021/01/IMDB-Dataset.csv
import pandas as pd

train_path = '/content/IMDB-Dataset.csv'
train_df = pd.read_csv(train_path)
# the text data to use for classification should be in a column named 'text'
# the label column must be named 'y' and be of type str
columns = ['text', 'y'...
3. Train a Deep Learning Classifier using nlu.load('train.sentiment'). Your dataset's label column should be named 'y' and the feature column with text data should be named 'text'.
import nlu
from sklearn.metrics import classification_report

# load a trainable pipeline by specifying the train. prefix and fit it on a dataset with label and text columns
# by default the Universal Sentence Encoder (USE) sentence embeddings are used for generation
trainable_pipe = nlu.load('train.sentiment')
fitted...
tfhub_use download started this may take some time. Approximate size to download 923.7 MB [OK!] sentence_detector_dl download started this may take some time. Approximate size to download 354.6 KB [OK!] precision recall f1-score support negative 0.82 0.88 0.85 26 neu...
4. Test the fitted pipe on a new example
fitted_pipe.predict('It was one of the best films i have ever watched in my entire life !!')
5. Configure pipe training parameters
trainable_pipe.print_info()
The following parameters are configurable for this NLU pipeline (You can copy paste the examples) : >>> pipe['sentiment_dl'] has settable params: pipe['sentiment_dl'].setMaxEpochs(1) | Info: Maximum number of epochs to train | Currently set to : 1 pipe['sentiment_dl'].setLr(0.005) | I...
6. Retrain with new parameters
# Train longer!
trainable_pipe['sentiment_dl'].setMaxEpochs(5)
fitted_pipe = trainable_pipe.fit(train_df.iloc[:50])
# predict with the trainable pipeline on the dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:50], output_level='document')
# the sentence detector that is part of the pipe generates some N...
precision recall f1-score support negative 0.92 0.92 0.92 26 neutral 0.00 0.00 0.00 0 positive 1.00 0.75 0.86 24 accuracy 0.84 50 macro avg 0.64 0.56 0.59 ...
7. Try training with different Embeddings
# We can use nlu.print_components(action='embed_sentence') to see every possible sentence embedding we could use. Let's use BERT!
nlu.print_components(action='embed_sentence')

trainable_pipe = nlu.load('en.embed_sentence.small_bert_L12_768 train.sentiment')
# We need to train longer and use a smaller LR for non-USE base...
sent_small_bert_L12_768 download started this may take some time. Approximate size to download 392.9 MB [OK!] sentence_detector_dl download started this may take some time. Approximate size to download 354.6 KB [OK!] precision recall f1-score support negative 0.87 0.77 0.82 ...
7.1 Evaluate on test data
preds = fitted_pipe.predict(test_df, output_level='document')
# the sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['trained_sentiment']))
precision recall f1-score support negative 0.85 0.75 0.80 246 neutral 0.00 0.00 0.00 0 positive 0.84 0.81 0.83 254 accuracy 0.78 500 macro avg 0.56 0.52 0.54 ...
8. Let's save the model
stored_model_path = './models/classifier_dl_trained'
fitted_pipe.save(stored_model_path)
Stored model in ./models/classifier_dl_trained
9. Let's load the model from disk. This makes offline NLU usage possible! You need to call nlu.load(path=path_to_the_pipe) to load a model/pipeline from disk.
hdd_pipe = nlu.load(path=stored_model_path)
preds = hdd_pipe.predict('It was one of the best films i have ever watched in my entire life !!')
preds
hdd_pipe.print_info()
Image Filtering: Convolution
import numpy as np
import cv2
import matplotlib.pyplot as plt

# loading an orange image
imageBGR = cv2.imread('orange.jpg', -1)
# convert the image from BGR color space to RGB
imageRGB = cv2.cvtColor(imageBGR, cv2.COLOR_BGR2RGB)
plt.imshow(imageRGB)
imageRGB.shape
MIT
OpenCV_Image Filtering.ipynb
deepnetworks555/openCV-jupyter
Averaging
kernel = np.ones((10, 10), np.float32) / 100
result = cv2.filter2D(imageRGB, -1, kernel)

plt.subplot(121), plt.imshow(imageRGB), plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(result), plt.title('Averaging')
plt.xticks([]), plt.yticks([])
plt.show()
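What `cv2.filter2D` does with the averaging kernel can be sketched by hand for a single channel. This is an illustration under the assumption of no border handling (only positions where the kernel fully fits), which is enough to see the mechanics:

```python
import numpy as np

def average_filter_valid(img, k):
    """Mean filter with a k x k kernel, computed only where the kernel fits."""
    h, w = img.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + k, j:j + k].mean()  # kernel of ones / (k*k)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
smoothed = average_filter_valid(img, 3)   # 2x2 result of 3x3 means
```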
Gaussian Blur
blured_image = cv2.GaussianBlur(imageRGB, (21, 21), 10)

plt.figure(figsize=(10, 10))
plt.subplot(121), plt.imshow(imageRGB), plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(blured_image), plt.title('blured_image')
plt.xticks([]), plt.yticks([])
plt.show()
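The kernel that `GaussianBlur` builds internally can be sketched in NumPy: a 1-D Gaussian normalized to sum to 1, with the 2-D kernel as its outer product. Size and sigma mirror the call above, but the construction is an illustration, not OpenCV's exact code:

```python
import numpy as np

def gaussian_kernel_1d(size, sigma):
    """Discrete Gaussian weights, normalized so they sum to 1."""
    x = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

k = gaussian_kernel_1d(21, 10)
# the 2-D kernel is the outer product of the 1-D kernel with itself
k2d = np.outer(k, k)
```

Because the Gaussian is separable, filtering with `k2d` equals filtering rows then columns with `k`, which is why OpenCV can blur in two cheap 1-D passes.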
Assignment 7, Chapter 6. Student ID: *Double click here to fill the Student ID* Name: *Double click here to fill the name* 1. We perform best subset, forward stepwise, and backward stepwise selection on a single data set. For each approach, we obtain $p + 1$ models, containing $0, 1, 2, \ldots, p$ predictors. Explain you...
# coding your answer here.
MIT
static_files/assignments/Assignment7.ipynb
phonchi/nsysu-math524
> Ans: *double click here to answer the question.* (b) Consider $(6.13)$ with $p=1$. For some choice of $y_1$ and $\lambda>0$, plot $(6.13)$ as a function of $\beta_1$. Your plot should confirm that $(6.13)$ is solved by $(6.15)$. > Ans: *double click here to answer the question.*
# coding your answer here.
9. In this exercise, we will predict the number of applications received using the other variables in the `College` data set. (a) Split the data set into a training set and a test set. Use the `train_test_split()` function.
# coding your answer here.
(b) Fit a **linear** model using least squares on the training set, and report the test error obtained.
# coding your answer here.
> Ans: *double click here to answer the question.* (c) Fit a **ridge** regression model on the training set, with $\lambda$ chosen by cross-validation. Report the test error obtained.
# coding your answer here.
> Ans: *double click here to answer the question.* (d) Fit a **lasso** model on the training set, with $\lambda$ chosen by cross-validation. Report the test error obtained, along with the number of non-zero coefficient estimates.
# coding your answer here.
> Ans: *double click here to answer the question.* (e) Fit a **PCR** model on the training set, with *M* chosen by cross-validation. Report the test error obtained, along with the value of *M* selected by cross-validation.
# coding your answer here.
> Ans: *double click here to answer the question.* (f) Fit a **PLS** model on the training set, with *M* chosen by cross-validation. Report the test error obtained, along with the value of *M* selected by cross-validation.
# coding your answer here.
> Ans: *double click here to answer the question.* (g) Comment on the results obtained. How accurately can we predict the number of college applications received? Is there much difference among the test errors resulting from these five approaches?
# coding your answer here.
> Ans: *double click here to answer the question.* 11. We will now try to predict the per capita crime rate in the `Boston` data set. (a) Try out some of the regression methods explored in this chapter, such as **best subset** selection, the **lasso**, **ridge** regression, and **PCR**. Present and discuss results for the a...
# coding your answer here.
Integers. Python represents integers (positive and negative whole numbers) using the immutable `int` type. For immutable objects, there is no difference between a variable and an object reference.
(58).bit_length()
s = '11'
d = int(s)
d
b = int(s, 2)
b
divmod(23, 5)
round(100.89, 2)
round(100.89, -2)
round(100.8936, 3)
(4.50).as_integer_ratio()
MIT
basic/Numbers.ipynb
sanikamal/awesome-python-examples
The `fractions` Module. Python has the `fractions` module to deal with fractions.
import fractions
dir(fractions)
help(fractions.Fraction)

from fractions import Fraction

def rounding_float(number, place):
    return round(number, place)

rounding_float(120.6765545362663, 5)

def float_to_fractions(number):
    return Fraction(*number.as_integer_ratio())

float_to_fractions(12.5)

def get_denominator(num1, n...
The `decimal` Module. When we need exact decimal floating-point numbers, Python has an additional immutable float type, `decimal.Decimal`.
import decimal
# dir(decimal)
help(decimal.Decimal)

sum(0.1 for i in range(10)) == 1.0

from decimal import Decimal
sum(Decimal('0.1') for i in range(10)) == 1.0
While the `math` and `cmath` modules are not suitable for the `decimal` module, its built-in methods, such as `decimal.Decimal.exp(x)`, are enough for most problems. Other Representations
bin(120)
hex(123)
oct(345)
Functions to Convert Between Different Bases. Convert a number in any base smaller than 10 to the decimal base:
def convert_to_decimal(number, base):
    multiplier, result = 1, 0
    while number > 0:
        result += number % 10 * multiplier
        multiplier *= base
        number = number // 10
    return result

def test_convert_to_decimal():
    number, base = 1001, 2
    assert(convert_to_decimal(number, base) == 9)
    print(...
Tests passed!
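The routine can be cross-checked against Python's built-in `int(string, base)` parser, which performs the same conversion; a minimal sketch:

```python
def convert_to_decimal(number, base):
    # same algorithm as above: peel decimal digits, weight them by powers of `base`
    multiplier, result = 1, 0
    while number > 0:
        result += number % 10 * multiplier
        multiplier *= base
        number = number // 10
    return result

checks = [(1001, 2), (45, 8), (1234, 5)]
results = [(convert_to_decimal(n, b), int(str(n), b)) for n, b in checks]
# each pair agrees, e.g. (9, 9) for 1001 in base 2
```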
Convert a number from a decimal base to any other base (up to 20):
def convert_from_decimal_larger_bases(number, base):
    strings = "0123456789ABCDEFGHIJ"
    result = ""
    while number > 0:
        digit = number % base
        result = strings[digit] + result
        number = number // base
    return result

def test_convert_from_decimal_larger_bases():
    number, base = 31, 16
    ...
Tests in this module have passed!
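Since Python's `int(string, base)` can parse digits up to base 36, it gives a round-trip check for this conversion; a small sketch:

```python
def convert_from_decimal_larger_bases(number, base):
    # same routine as above: repeatedly take number % base, prepend its digit
    strings = "0123456789ABCDEFGHIJ"
    result = ""
    while number > 0:
        digit = number % base
        result = strings[digit] + result
        number = number // base
    return result

encoded = convert_from_decimal_larger_bases(31, 16)   # "1F"
round_trips = all(
    int(convert_from_decimal_larger_bases(n, b), b) == n
    for n in range(1, 200) for b in range(2, 21)
)
```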
Greatest Common Divisor. The greatest common divisor (gcd) between two given integers:
def finding_gcd(a, b):
    ''' implements the greatest common divisor algorithm '''
    while b != 0:
        result = b
        a, b = b, a % b
    return result

finding_gcd(2, 5)
finding_gcd(3, 6)
The `random` Module
import random
help(random)

my_list = [2, 5, 6, 7, 8, 9]
random.choice(my_list)
random.sample(my_list, 2)
random.shuffle(my_list)
my_list
random.randint(1, 10)
Fibonacci Sequences. To find the nth number in a Fibonacci sequence in three ways: (a) with a recursive O(2^n) runtime; (b) with an iterative O(n) runtime; and (c) using a formula that gives an O(1) runtime but is not precise after around the 70th element:
def find_fibonacci_seq_rec(n):
    if n < 2:
        return n
    return find_fibonacci_seq_rec(n-1) + find_fibonacci_seq_rec(n-2)

find_fibonacci_seq_rec(8)

def find_fibonacci_seq_iter(n):
    if n < 2:
        return n
    a, b = 0, 1
    for i in range(n):
        a, b = b, a + b
    return a

find_fibonacci_seq_iter...
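The closed-form variant (c) is cut off in the cell above; it can be sketched with Binet's formula. The function name is my own, and the rounding step is exactly where precision is lost past roughly the 70th element:

```python
import math

def find_fibonacci_formula(n):
    # Binet's formula: F(n) = round(phi**n / sqrt(5)); exact only while
    # floating-point error stays below 0.5
    sq5 = math.sqrt(5)
    phi = (1 + sq5) / 2
    return int(round(phi ** n / sq5))

small = [find_fibonacci_formula(n) for n in range(9)]   # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```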
Primes. The following program finds whether a number is prime in three ways: (a) brute force; (b) rejecting all candidates up to the square root of the number; and (c) using Fermat's little theorem with probabilistic tests:
import math
import random

def finding_prime(number):
    num = abs(number)
    if num < 4:
        return True
    for x in range(2, num):
        if num % x == 0:
            return False
    return True

finding_prime(5)
finding_prime(4)

def finding_prime_sqrt(number):
    num = abs(number)
    if num < 4:
    ...
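Variants (b) and (c) are cut off above; they can be completed along the lines the text describes. The Fermat witness count (10) is an arbitrary choice, and the small-number shortcut is tightened here so 0 and 1 are not reported prime:

```python
import random

def finding_prime_sqrt(number):
    # (b) only divisors up to sqrt(num) need to be checked
    num = abs(number)
    if num < 4:
        return num > 1          # tightened: 0 and 1 are not prime
    return all(num % x != 0 for x in range(2, int(num ** 0.5) + 1))

def finding_prime_fermat(number, k=10):
    # (c) probabilistic: a prime p satisfies a**(p-1) % p == 1 for all a
    if number <= 3:
        return number > 1
    for _ in range(k):
        a = random.randint(2, number - 2)
        if pow(a, number - 1, number) != 1:
            return False
    return True                  # "probably prime"

primes_under_30 = [n for n in range(2, 30) if finding_prime_sqrt(n)]
```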
The `math` module
import math
help(math)
dir(math)
math.__spec__
Number-theoretic and representation functions
for i in dir(math):
    if i[0] != '_':
        print(i, end="\t")
print(len(dir(math)))

num1 = 6
num2 = -56
num3 = 45.9086
num4 = -45.898
math.ceil(num3)
math.ceil(num4)
math.floor(num3)
math.floor(num4)
math.copysign(num1, num2)
math.fabs(num2)
math.factorial(5)

num = 9
math.isnan(num)
The `NumPy` Module. The NumPy module provides array sequences that can store numbers or characters in a space-efficient way. Arrays in NumPy can have any arbitrary dimension. They can be generated from a list or a tuple with the array method, which transforms sequences of sequences into two-dimensional arrays:
import numpy as np

x = np.array(((11, 12, 13), (21, 22, 23), (31, 32, 33)))
x
x.ndim
Rapidly launching an AI application on a machine-learning database. Most of us take taxis now and then. The travel time from a pickup point to a destination depends on many factors, such as the weather or whether it is a Friday; producing an accurate time estimate is a hard problem for a person but a simple one for a machine. Today's task is to develop a real-time intelligent application that predicts taxi trip duration with a machine-learning model. The whole application is developed in a [notebook](http://ipython.org/notebook.html). ![Taxi trip duration prediction](https://th.bing.com/th/id/Rcf52e9678006c3e99a98cf88a216e38d?rik=oQN4iVqyXXjYNg&riu=http%3a%2f%2fi1.hexun.com%2f2020-0...
!cd demo && sh init.sh
Apache-2.0
demo/predict-taxi-trip-duration-nb/develop_ml_application_tour.ipynb
heiyan1shengdun/OpenMLDB
Import the historical trip data into fedb. Time-series feature computation with fedb requires historical data, so we import the historical trip data into fedb so that real-time inference can compute features from it. See the import code at https://github.com/4paradigm/DemoApps/blob/main/predict-taxi-trip-duration-nb/demo/import.py
!cd demo && python3 import.py
Train the model on the trip data. The model is trained on labeled data; the code used for this task is: training script https://github.com/4paradigm/DemoApps/blob/main/predict-taxi-trip-duration-nb/demo/train_sql.py and training data https://github.com/4paradigm/DemoApps/blob/main/predict-taxi-trip-duration-nb/demo/data/taxi_tour_table_train_simple.snappy.parquet. The task produces a model.txt at the end.
!cd demo && python3 train.py ./fe.sql /tmp/model.txt
Build a real-time inference HTTP service that connects to fedb, using the trained model. Based on the model generated in the previous step and the historical data in fedb, we set up a real-time inference service; see the service code at https://github.com/4paradigm/DemoApps/blob/main/predict-taxi-trip-duration-nb/demo/predict_server.py
!cd demo && sh start_predict_server.sh ./fe.sql 8887 /tmp/model.txt
Send an inference request over HTTP. The whole request is simple; the code is as follows:
```python
url = "http://127.0.0.1:8887/predict"
req = {"id": "id0376262",
       "vendor_id": 1,
       "pickup_datetime": 1467302350000,
       "dropoff_datetime": 1467304896000,
       "passenger_count": 2,
       "pickup_longitude": -73.873093,
       "pickup_latitude": 40.774097,
       "dropoff_longitude": -73.926704,
       "dropoff_latitude": 40.85...
```
!cd demo && python3 predict.py
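The request body can be assembled with the standard JSON pattern; only the payload construction is shown here (the field values come from the snippet above, the trailing fields are truncated in the text and omitted, and actually sending it with e.g. `requests.post(url, json=req)` requires the predict server from the previous step to be running):

```python
import json

url = "http://127.0.0.1:8887/predict"
req = {
    "id": "id0376262",
    "vendor_id": 1,
    "pickup_datetime": 1467302350000,
    "dropoff_datetime": 1467304896000,
    "passenger_count": 2,
    "pickup_longitude": -73.873093,
    "pickup_latitude": 40.774097,
    "dropoff_longitude": -73.926704,
    # remaining fields truncated in the text
}
body = json.dumps(req)      # what an HTTP POST would carry
decoded = json.loads(body)  # round-trips back to the same dict
```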
[Module 2.1] Feature engineering. This notebook creates new features through the following feature-engineering steps: create date features (month, day, weekday); combine existing features into new ones (feature1 + feature2 = new feature); create a new feature by target encoding on Product_ID; create a new feature by target encoding with smoothing on Product_ID; label-encode the category features; save the data locally — store the final label-encoded data set (for XGBoost and CatBoost)...
import pandas as pd
pd.options.display.max_rows = 5
import numpy as np

%store -r full_data_file_name
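The "target encoding with smoothing" step listed above can be sketched without pandas. For a category c with n_c rows and mean target ȳ_c, the smoothed encoding is (n_c·ȳ_c + m·ȳ)/(n_c + m), where ȳ is the global mean and m the smoothing weight; the data and m = 2 below are made up for illustration:

```python
from collections import defaultdict

# hypothetical (product_id, delivery_days) rows
rows = [("A", 10.0), ("A", 14.0), ("B", 2.0), ("B", 4.0), ("B", 6.0), ("C", 8.0)]
m = 2.0   # smoothing strength (assumed); larger m pulls rare categories to the global mean

global_mean = sum(y for _, y in rows) / len(rows)
groups = defaultdict(list)
for cat, y in rows:
    groups[cat].append(y)

encoding = {
    cat: (len(ys) * (sum(ys) / len(ys)) + m * global_mean) / (len(ys) + m)
    for cat, ys in groups.items()
}
```

Category "C" has only one row, so its encoding sits closest to the global mean — the point of smoothing.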
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Data loading and shuffling
df = pd.read_csv(full_data_file_name)
df = df.sample(frac=1.0, random_state=1000)
df
df.columns
Create date features: month, day, and day of week
def create_date_feature(raw_df): df = raw_df.copy() df['order_date'] = pd.to_datetime(df['order_approved_at']) df['order_weekday'] = df['order_date'].dt.weekday df['order_day'] = df['order_date'].dt.day df['order_month'] = df['order_date'].dt.month return df f_df = create_date_f...
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Creating new features by combining existing ones (column1 + column2 = new feature)
def change_var_type(f_df): df = f_df.copy() df['customer_zip_code_prefix'] = df['customer_zip_code_prefix'].astype(str) df['seller_zip_code_prefix'] = df['seller_zip_code_prefix'].astype(str) return df def comnbine_columns(f_df,src_col1, src_col2,new_col): df = f_df.copy() df[new_col] = df[...
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
customer_state + seller_state
f_df = comnbine_columns(f_df,src_col1='customer_state', src_col2='seller_state',new_col='customer_seller_state')
df shape: (67176, 22)
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
customer_city + seller_city
f_df = comnbine_columns(f_df,src_col1='customer_city', src_col2='seller_city',new_col='customer_seller_city')
df shape: (67176, 23)
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
customer_zip + seller_zip
f_df = comnbine_columns(f_df,src_col1='customer_zip_code_prefix', src_col2='seller_zip_code_prefix',new_col='customer_seller_zip_code_prefix') f_df
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Creating the product volume column (length * width * height)
def add_product_volume(raw_df): df = raw_df.copy() df['product_volume'] = df.product_length_cm * df.product_width_cm * df.product_height_cm return df f_df = add_product_volume(f_df) f_df.columns
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Splitting into train and test data sets
def split_data_2(raw_df, sort_col='order_approved_at',val_ratio=0.3): ''' split into train and test sets ''' df = raw_df.copy() val_ratio = 1 - val_ratio # 1 - 0.3 = 0.7 df = df.sort_values(by= sort_col) # sort in time order # One-Hot-Encoding data1,data2, = np.split(df, [int(va...
data1, data2 shape: (53740, 25),(13436, 25)
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Creating target-encoding features- Mean and count of classes per Product_ID (te_pdid_mean, te_pdid_count)- Target error (classes - te_pdid_mean) Target Encoding with Smoothing The video and code below were used as references- Feature Engineering - RecSys 2020 Tutorial: Feature Engineering for Recommender Systems - https://www.youtube.com/watch?v=uROvhp7cj6Q ...
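The smoothing trick referenced above can be sketched as follows; `smoothed_target_encode` and the weight `w` are illustrative names, not the notebook's actual `target_encode` helper. Each category's target mean is pulled toward the global mean, more strongly for rare categories.

```python
import pandas as pd

def smoothed_target_encode(df, cat, target, w=100.0):
    """Shrink each category's target mean toward the global mean.
    w controls how strongly small categories are pulled back
    (playing the role of min_samples_leaf/smoothing in the tutorial code)."""
    global_mean = df[target].mean()
    agg = df.groupby(cat)[target].agg(["mean", "count"])
    smooth = (agg["count"] * agg["mean"] + w * global_mean) / (agg["count"] + w)
    return df[cat].map(smooth)

# Tiny illustration: product "a" has two rows, "b" only one
toy = pd.DataFrame({"product_id": ["a", "a", "b"], "classes": [1.0, 0.0, 1.0]})
enc = smoothed_target_encode(toy, "product_id", "classes", w=1.0)
```

With a larger `w`, both encodings converge to the global mean, which is exactly the regularizing effect the smoothing is for.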
def create_target_encoding(cat, raw_df): ''' create the te_mean, te_count features ''' df = raw_df.copy() te = df.groupby(cat).classes.agg(['mean','count']).reset_index() te_mean_col = 'te_' + cat + '_mean' te_count_col = 'te_' + cat + '_count' cat = [cat] te.columns = cat + [te_mean_col,te_...
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Running the target encoding
def add_new_te(raw_train, raw_test): train_df = raw_train.copy() test_df = raw_test.copy() cat = 'product_id' trn, sub = target_encode(train_df[cat], test_df[cat], target=train_df.classes, min_samples_leaf=100...
(53740, 33) (13436, 33)
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Label encoding of category features
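The notebook wraps sklearn's `LabelEncoder` in a class (`LabelEncoderExt`, below) so that values unseen during fitting do not raise at transform time. The same idea in a dependency-free sketch — `SafeLabelEncoder` is an illustrative name, not the notebook's class:

```python
class SafeLabelEncoder:
    """Map known categories to integer codes; anything unseen at
    transform time gets a dedicated 'unknown' code instead of raising."""

    def fit(self, values):
        self.mapping = {v: i for i, v in enumerate(sorted(set(values)))}
        self.unknown = len(self.mapping)  # one extra code for unseen values
        return self

    def transform(self, values):
        return [self.mapping.get(v, self.unknown) for v in values]

enc = SafeLabelEncoder().fit(["SP", "MG", "RJ"])
codes = enc.transform(["SP", "XX"])  # "XX" was never seen during fit
```

This is why the test set cannot break the encoder: rare cities that appear only after the time-based split still get a valid code.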
# from sklearn import preprocessing from sklearn.preprocessing import LabelEncoder class LabelEncoderExt(object): ''' Source: # https://stackoverflow.com/questions/21057621/sklearn-labelencoder-with-never-seen-before-values ''' def __init__(self): """ It differs from LabelEncoder by ...
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Running label encoding on the category variables
label_cols = ['customer_city','customer_state','customer_zip_code_prefix'] train2_lb, test2_lb = make_test_label_encoding(train2_df, test2_df, label_cols) pd.options.display.max_rows = 10 show_rows = 5 print(train2_lb.customer_state.value_counts()[0:show_rows]) # print(train2_lb[train2_lb.lb_customer_city == 185]) prin...
SP 28232 MG 6763 RJ 6034 PR 2912 RS 2385 Name: customer_state, dtype: int64 SP 6642 MG 1541 RJ 1491 PR 715 RS 663 Name: customer_state, dtype: int64
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Used directly without label encoding (for AutoGluon)
# no_encoding_cate = tes_df
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Selecting the final columns for the XGBoost and CatBoost algorithms
def filter_df(raw_df, cols): df = raw_df.copy() df = df[cols] return df cols = ['classes', 'lb_customer_city', 'lb_customer_state', 'lb_customer_zip_code_prefix', 'price', 'freight_value', 'product_weight_g', 'product_volume', 'order_w...
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Feature-transformed version for AutoGluon
cols = ['classes', 'customer_city', 'customer_state', 'customer_zip_code_prefix', 'product_category_name_english', 'price', 'freight_value', 'product_weight_g', 'product_volume', 'order_weekday', 'order_day', 'ord...
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Version without feature transformation for AutoGluon
train_df.columns cols = ['classes', 'customer_zip_code_prefix', 'customer_city', 'customer_state', 'price', 'freight_value', 'product_weight_g', 'product_category_name_english', 'seller_zip_code_prefix', 'seller_city', 'seller_state', 'order_date', 'order_weekday', 'order_day', 'ord...
_____no_output_____
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
Saving the data locally
import os def save_local(train_data, test_data, preproc_folder): train_df = pd.concat([train_data['classes'], train_data.drop(['classes'], axis=1)], axis=1) train_file_name = os.path.join(preproc_folder, 'train.csv') train_df.to_csv(train_file_name, index=False) print(f'{train_file_name} is saved') ...
Stored 'pre_train_file' (str) Stored 'pre_test_file' (str) Stored 'auto_train_file' (str) Stored 'auto_test_file' (str) Stored 'no_auto_train_file' (str) Stored 'no_auto_test_file' (str)
MIT
brazil_ecommerce/working/ref-te-method01-Feature_Engineer.ipynb
gonsoomoon-ml/predict-delivery-time
1. Write a function contracting(l) that takes as input a list of integers l and returns True if the absolute difference between each adjacent pair of elements strictly decreases. Here are some examples of how your function should work. >>> contracting([9,2,7,3,1]) True >>> contracting([-2,3,7,2,-1]) False >>> contr...
def contracting(l): n=len(l) b=abs(l[1]-l[0]) for i in range(2,n): d=abs(l[i]-l[i-1]) if (d<b): b=d else: return False return True contracting([-2,3,7,2,-1])
_____no_output_____
Apache-2.0
week3_assignment.ipynb
GunaSekhargithub/npteldatastructureswithpython
2.In a list of integers l, the neighbours of l[i] are l[i-1] and l[i+1]. l[i] is a hill if it is strictly greater than its neighbours and a valley if it is strictly less than its neighbours.Write a function counthv(l) that takes as input a list of integers l and returns a list [hc,vc] where hc is the number of hills in...
def counthv(l): hc=0 vc=0 for i in range(1,len(l)-1): if (l[i]>l[i-1] and l[i]>l[i+1]): hc+=1 elif (l[i]<l[i-1] and l[i]<l[i+1]): vc+=1 return [hc,vc] counthv([3,1,2,3])
_____no_output_____
Apache-2.0
week3_assignment.ipynb
GunaSekhargithub/npteldatastructureswithpython
3.A square n×n matrix of integers can be written in Python as a list with n elements, where each element is in turn a list of n integers, representing a row of the matrix. For instance, the matrix 1 2 3 4 5 6 7 8 9would be represented as [[1,2,3], [4,5,6], [7,8,9]].Write a function leftrotate(m) that takes a l...
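For comparison, the same rotation can be sketched with `zip`: transposing the matrix and then reversing the row order yields the rotation the examples describe. `leftrotate_zip` is an illustrative name, not part of the assignment:

```python
def leftrotate_zip(m):
    """Left-rotate a square matrix: zip(*m) transposes it, and
    reversing the transposed rows turns columns (right to left)
    into the rows of the rotated matrix."""
    return [list(row) for row in zip(*m)][::-1]

rotated = leftrotate_zip([[1, 2], [3, 4]])
```

This produces the same result as the column-extraction approach, since row i of the rotated matrix is column n-1-i of the original.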
def col(l,n): m=[] for i in range(len(l)): m.append(l[i][n]) return m def leftrotate(l): m=[] for i in range(len(l)-1,-1,-1): m.append(col(l,i)) return m leftrotate([[1,2],[3,4]])
_____no_output_____
Apache-2.0
week3_assignment.ipynb
GunaSekhargithub/npteldatastructureswithpython
EDA Car Data Set**We will explore the Car Data set and perform the exploratory data analysis on the dataset. The major topics to be covered are below:**- **Removing duplicates**- **Missing value treatment**- **Outlier Treatment**- **Normalization and Scaling( Numerical Variables)**- **Encoding Categorical variables( D...
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
Loading the data set**We will be loading the EDA cars excel file using pandas. For this we will be using read_excel file.**
df=pd.read_excel('EDA Cars.xlsx')
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
Basic Data Exploration **In this step, we will perform the below operations to check what the data set comprises of. We will check the below things:**- **head of the dataset**- **shape of the dataset**- **info of the dataset**- **summary of the dataset** **head function will tell you the top records in the data set. B...
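The exploration calls described above can be sketched on a toy frame; the three-row DataFrame below merely stands in for the real cars data, which in the notebook comes from `pd.read_excel('EDA Cars.xlsx')`.

```python
import pandas as pd

# Toy stand-in for the cars data set
df = pd.DataFrame({"INCOME": [52000, 61000, 48000], "SEX": ["M", "F", "M"]})

head = df.head(2)        # top n records (default 5)
shape = df.shape         # (number of rows, number of columns)
df.info()                # column dtypes and non-null counts, printed to stdout
summary = df.describe()  # min, mean, percentiles and max of the numeric columns
```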
# Converting Postal Code into Category
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
**info() is used to check the information about the data and the datatype of each attribute.** **The describe method helps to see how the data is spread for the numerical values. We can clearly see the minimum value, mean values, different percentile values and maximum values.** Check for Duplicate ...
# Check for duplicate data dups = df.duplicated() print('Number of duplicate rows = %d' % (dups.sum())) df[dups]
Number of duplicate rows = 14
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
**Since we have 14 duplicate records in the data, we will remove them from the data set so that we keep only distinct records.** **After removing the duplicates, we will check whether they have actually been removed from the data set.**
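The removal step itself is typically a single `drop_duplicates` call; a minimal sketch on a toy frame (the notebook operates on the full cars DataFrame):

```python
import pandas as pd

# Small frame with one duplicated row, mirroring the 14 duplicates found above
df = pd.DataFrame({"INCOME": [52000, 52000, 61000], "SEX": ["M", "M", "F"]})

before = df.shape[0]
df = df.drop_duplicates()          # keeps the first occurrence of each row
after = df.shape[0]
remaining = df.duplicated().sum()  # 0 once the duplicates are gone
```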
# Check for duplicate data dups = df.duplicated() print('Number of duplicate rows = %d' % (dups.sum())) df[dups]
Number of duplicate rows = 0
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
**Now, we can clearly see that there are no duplicate records in the data set. We can also quickly confirm the number of records by using the shape attribute, since those 14 records have been removed from the original data. Initially it had 303 records; now it should have 289.**
df.shape
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
Outlier Treatment**To check for outliers, we will be plotting the box plots.**
df.boxplot(column=['INCOME']) plt.show() df.boxplot(column=['TRAVEL TIME']) plt.show() df.boxplot(column=['CAR AGE']) plt.show() df.boxplot(column=['MILES CLOCKED']) plt.show()
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
**Looking at the box plots, it seems that the three variables INCOME, MILES CLOCKED and TRAVEL TIME have outliers present.****These outlier values need to be treated, and there are several ways of treating them:** - **Drop the outlier value**- **Replace the outlier value using the IQR** **Created a use...
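The IQR-based replacement can be sketched as a small helper; `cap_outliers_iqr` is an illustrative name, and the 1.5×IQR whisker rule is the usual convention rather than necessarily the exact function the notebook defines.

```python
import pandas as pd

def cap_outliers_iqr(s: pd.Series) -> pd.Series:
    """Clip values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] to the whisker bounds,
    i.e. the 'replace the outlier value using the IQR' option above."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return s.clip(lower=q1 - 1.5 * iqr, upper=q3 + 1.5 * iqr)

capped = cap_outliers_iqr(pd.Series([10, 12, 11, 13, 100]))
```

Capping (rather than dropping) keeps the row count intact, which matters when other columns of the same row are still useful.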
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
Make Boxplots after Outlier Treatment
df.boxplot(column=['TRAVEL TIME']) plt.show() df.boxplot(column=['MILES CLOCKED']) plt.show()
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
**If you look at the box plots above, after treating the outliers there are no longer any outliers in these columns.** Check for missing value
# Check for missing value in any column df.isnull().sum()
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
**We can see that we have various missing values in the respective columns. There are various ways of treating missing values in a data set, and which technique to use depends on the type of data you are dealing with.**- **Drop the missing values : In this case we drop the missing values from thos...
df[df.isnull().sum()[df.isnull().sum()>0].index].dtypes
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
**Replacing NULL values in Numerical Columns using Median**
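A minimal sketch of median imputation on a toy numeric column; the notebook applies the same idea to its numeric columns. The median is preferred over the mean here because it is robust to the outliers treated earlier.

```python
import pandas as pd
import numpy as np

s = pd.Series([10.0, np.nan, 30.0])  # numeric column with a missing value
s = s.fillna(s.median())             # median of the observed values fills the gap
```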
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
**Replacing NULL values in Categorical Columns using Mode**
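A minimal sketch of mode imputation on a toy categorical column; `mode()` returns a Series (there can be ties), so `mode()[0]` picks the first most frequent level.

```python
import pandas as pd
import numpy as np

c = pd.Series(["M", np.nan, "M", "F"])  # categorical column with a missing value
c = c.fillna(c.mode()[0])               # fill with the most frequent category
```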
# Check for missing value in any column df.isnull().sum()
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
Univariate Analysis
# histogram of income df['INCOME'].hist() plt.show()
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
From the above figure, we can say that the INCOME variable is right-skewed
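Right skew can be confirmed numerically as well as visually: in a right-skewed column the skewness is positive and the mean sits above the median. The series below is illustrative, not the real INCOME data.

```python
import pandas as pd

income = pd.Series([20, 25, 30, 35, 200])  # a long right tail, like INCOME above
skew = income.skew()                       # > 0 indicates right (positive) skew
```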
sns.countplot(df["EDUCATION"],hue=df["SEX"]) #countplot for Education wrt SEX
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba