Write output
human_outdir = 'Eukaryota/human-visual-transduction-fastas/'
! mkdir $human_outdir

with open(f'{human_outdir}/human_visual_transduction_proteins.fasta', 'w') as f:
    for record in tf_records:
        f.write(">{name}\n{sequence}\n".format(**record))
mkdir: cannot create directory ‘Eukaryota/human-visual-transduction-fastas/’: File exists
MIT
notebooks/521_subset_human_retina_genes.ipynb
czbiohub/kh-analysis
Building your Recurrent Neural Network - Step by Step

Welcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy. Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They c...
import numpy as np
from rnn_utils import *
_____no_output_____
MIT
Course5 - Sequence Models/week1 Recurrent Neural Networks/Building a recurrent neural network/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
bheil123/DeepLearning
1 - Forward propagation for the basic Recurrent Neural Network

Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.

**Figure 1**: Basic RNN model

Here's how you can implement an RNN:

**Steps**:
1. Implement the calculations...
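The steps above reduce to a single tanh update plus an output projection. Here is a minimal numpy sketch; the softmax helper and the parameter names (`Waa`, `Wax`, `Wya`, `ba`, `by`) follow the assignment's convention, but treat this as an illustrative sketch, not the graded solution:

```python
import numpy as np

def softmax(z):
    # Column-wise softmax for the output projection
    e = np.exp(z - np.max(z, axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def rnn_cell_forward(xt, a_prev, parameters):
    # a_next = tanh(Waa @ a_prev + Wax @ xt + ba); yt = softmax(Wya @ a_next + by)
    Waa, Wax, Wya = parameters["Waa"], parameters["Wax"], parameters["Wya"]
    ba, by = parameters["ba"], parameters["by"]
    a_next = np.tanh(Waa @ a_prev + Wax @ xt + ba)
    yt_pred = softmax(Wya @ a_next + by)
    return a_next, yt_pred, (a_next, a_prev, xt, parameters)

# Toy shapes: n_x=3 input features, n_a=5 hidden units, n_y=2 outputs, m=10 examples
np.random.seed(1)
n_x, n_a, n_y, m = 3, 5, 2, 10
params = {"Waa": np.random.randn(n_a, n_a), "Wax": np.random.randn(n_a, n_x),
          "Wya": np.random.randn(n_y, n_a), "ba": np.random.randn(n_a, 1),
          "by": np.random.randn(n_y, 1)}
a_next, yt_pred, cache = rnn_cell_forward(np.random.randn(n_x, m), np.random.randn(n_a, m), params)
print(a_next.shape, yt_pred.shape)  # (5, 10) (2, 10)
```

The full forward pass then just loops this cell over the $T_x$ time steps, feeding each `a_next` back in as `a_prev`.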
# GRADED FUNCTION: rnn_cell_forward

def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array o...
a_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037]
a_next.shape = (5, 10)
yt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526]
yt_pred.shape = (2, 10)...
MIT
Course5 - Sequence Models/week1 Recurrent Neural Networks/Building a recurrent neural network/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
bheil123/DeepLearning
**Expected Output**:

**a_next[4]**: [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037]
**a_next.shape**: (5, 10)
...
# GRADED FUNCTION: rnn_forward

def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network described in Figure (3).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- ...
a[4][1] = [-0.99999375 0.77911235 -0.99861469 -0.99833267]
a.shape = (5, 10, 4)
y_pred[1][3] = [ 0.79560373 0.86224861 0.11118257 0.81515947]
y_pred.shape = (2, 10, 4)
caches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]
len(caches) = 2
MIT
Course5 - Sequence Models/week1 Recurrent Neural Networks/Building a recurrent neural network/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
bheil123/DeepLearning
**Expected Output**:

**a[4][1]**: [-0.99999375 0.77911235 -0.99861469 -0.99833267]
**a.shape**: (5, 10, 4)
**y[1][3]**: [ 0.79560373 0.8622...
# GRADED FUNCTION: lstm_cell_forward

def lstm_cell_forward(xt, a_prev, c_prev, parameters):
    """
    Implement a single forward step of the LSTM-cell as described in Figure (4)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", num...
a_next[4] = [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275]
a_next.shape = (5, 10)
c_next[2] = [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932]
c_next.shape = (5, 10)
...
MIT
Course5 - Sequence Models/week1 Recurrent Neural Networks/Building a recurrent neural network/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
bheil123/DeepLearning
**Expected Output**:

**a_next[4]**: [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275]
**a_next.shape**: (5, 10)
...
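The gate equations behind `lstm_cell_forward` above can be sketched in plain numpy. The parameter packing (`Wf`, `Wi`, `Wc`, `Wo` acting on the concatenation of `a_prev` and `xt`) follows the assignment's convention, but this is an illustrative sketch, not the graded solution:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_forward(xt, a_prev, c_prev, p):
    # Concatenate hidden state and input, then apply the four gates
    concat = np.vstack([a_prev, xt])
    ft = sigmoid(p["Wf"] @ concat + p["bf"])      # forget gate
    it = sigmoid(p["Wi"] @ concat + p["bi"])      # update (input) gate
    cct = np.tanh(p["Wc"] @ concat + p["bc"])     # candidate cell state
    c_next = ft * c_prev + it * cct               # new cell state
    ot = sigmoid(p["Wo"] @ concat + p["bo"])      # output gate
    a_next = ot * np.tanh(c_next)                 # new hidden state
    return a_next, c_next

np.random.seed(1)
n_x, n_a, m = 3, 5, 10
p = {k: np.random.randn(n_a, n_a + n_x) for k in ("Wf", "Wi", "Wc", "Wo")}
p.update({k: np.random.randn(n_a, 1) for k in ("bf", "bi", "bc", "bo")})
a_next, c_next = lstm_cell_forward(np.random.randn(n_x, m), np.random.randn(n_a, m),
                                   np.random.randn(n_a, m), p)
print(a_next.shape, c_next.shape)  # (5, 10) (5, 10)
```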
# GRADED FUNCTION: lstm_forward

def lstm_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (3).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, ...
a[4][3][6] = 0.172117767533
a.shape = (5, 10, 7)
y[1][4][3] = 0.95087346185
y.shape = (2, 10, 7)
caches[1][1][1] = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165]
c[1][2][1] = -0.855544916718
len(caches) = 2
MIT
Course5 - Sequence Models/week1 Recurrent Neural Networks/Building a recurrent neural network/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
bheil123/DeepLearning
**Expected Output**:

**a[4][3][6]** = 0.172117767533
**a.shape** = (5, 10, 7)
**y[1][4][3]** = 0.95087346185
...
def rnn_cell_backward(da_next, cache):
    """
    Implements the backward pass for the RNN-cell (single time-step).

    Arguments:
    da_next -- Gradient of loss with respect to next hidden state
    cache -- python dictionary containing useful values (output of rnn_cell_forward())

    Returns:
    gradients -- pyt...
gradients["dxt"][1][2] = -0.460564103059
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = 0.0842968653807
gradients["da_prev"].shape = (5, 10)
gradients["dWax"][3][1] = 0.393081873922
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = -0.28483955787
gradients["dWaa"].shape = (5, 5)
gradients["dba"]...
MIT
Course5 - Sequence Models/week1 Recurrent Neural Networks/Building a recurrent neural network/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
bheil123/DeepLearning
**Expected Output**:

**gradients["dxt"][1][2]** = -0.460564103059
**gradients["dxt"].shape** = (3, 10)
**gradients["da_prev"][2][3]** = 0.084...
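The backward step for the tanh cell above follows from `dtanh = (1 - a_next**2) * da_next`; each weight gradient is then one matrix product. A minimal numpy sketch (illustrative, not the graded solution):

```python
import numpy as np

def rnn_cell_backward_sketch(da_next, cache):
    # cache holds what the forward pass saved: a_next, a_prev, xt, parameters
    a_next, a_prev, xt, p = cache
    dtanh = (1 - a_next ** 2) * da_next           # backprop through tanh
    gradients = {
        "dxt": p["Wax"].T @ dtanh,                # gradient w.r.t. input
        "da_prev": p["Waa"].T @ dtanh,            # gradient w.r.t. previous hidden state
        "dWax": dtanh @ xt.T,
        "dWaa": dtanh @ a_prev.T,
        "dba": dtanh.sum(axis=1, keepdims=True),
    }
    return gradients

np.random.seed(1)
n_x, n_a, m = 3, 5, 10
p = {"Wax": np.random.randn(n_a, n_x), "Waa": np.random.randn(n_a, n_a),
     "ba": np.random.randn(n_a, 1)}
xt, a_prev = np.random.randn(n_x, m), np.random.randn(n_a, m)
a_next = np.tanh(p["Waa"] @ a_prev + p["Wax"] @ xt + p["ba"])
grads = rnn_cell_backward_sketch(np.random.randn(n_a, m), (a_next, a_prev, xt, p))
print(grads["dxt"].shape, grads["dWaa"].shape)  # (3, 10) (5, 5)
```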
def rnn_backward(da, caches):
    """
    Implement the backward pass for a RNN over an entire sequence of input data.

    Arguments:
    da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
    caches -- tuple containing information from the forward pass (rnn_forward)

    Returns:
    gradients ...
gradients["dx"][1][2] = [-2.07101689 -0.59255627 0.02466855 0.01483317]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.314942375127
gradients["da0"].shape = (5, 10)
gradients["dWax"][3][1] = 11.2641044965
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = 2.30333312658
gradients["dWaa"].shape ...
MIT
Course5 - Sequence Models/week1 Recurrent Neural Networks/Building a recurrent neural network/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
bheil123/DeepLearning
**Expected Output**:

**gradients["dx"][1][2]** = [-2.07101689 -0.59255627 0.02466855 0.01483317]
**gradients["dx"].shape** = (3, 10, 4)
**gradients["da0"][2][3]** = ...
def lstm_cell_backward(da_next, dc_next, cache):
    """
    Implement the backward pass for the LSTM-cell (single time-step).

    Arguments:
    da_next -- Gradients of next hidden state, of shape (n_a, m)
    dc_next -- Gradients of next cell state, of shape (n_a, m)
    cache -- cache storing information from the f...
gradients["dxt"][1][2] = 3.23055911511
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = -0.0639621419711
gradients["da_prev"].shape = (5, 10)
gradients["dc_prev"][2][3] = 0.797522038797
gradients["dc_prev"].shape = (5, 10)
gradients["dWf"][3][1] = -0.147954838164
gradients["dWf"].shape = (5, 8)
gradients["...
MIT
Course5 - Sequence Models/week1 Recurrent Neural Networks/Building a recurrent neural network/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
bheil123/DeepLearning
**Expected Output**:

**gradients["dxt"][1][2]** = 3.23055911511
**gradients["dxt"].shape** = (3, 10)
**gradients["da_prev"][2][3]** = -0.0639...
def lstm_backward(da, caches):
    """
    Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).

    Arguments:
    da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
    dc -- Gradients w.r.t the memory states, numpy-array of shape (n_a, m, T_x)
    caches -- ca...
gradients["dx"][1][2] = [-0.00173313 0.08287442 -0.30545663 -0.43281115]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.095911501954
gradients["da0"].shape = (5, 10)
gradients["dWf"][3][1] = -0.0698198561274
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 0.102371820249
gradients["dWi"].shape ...
MIT
Course5 - Sequence Models/week1 Recurrent Neural Networks/Building a recurrent neural network/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
bheil123/DeepLearning
Computer Vision for Medical Imaging: Part 1. Train Model with Hyperparameter Tuning Job

This notebook is part 1 of a 4-part series on techniques and services offered by SageMaker to build a model which predicts if an image of cells contains cancer. This notebook shows how to build a model using hyperparameter tuning. Da...
import pip

def import_or_install(package):
    try:
        __import__(package)
    except ImportError:
        pip.main(["install", package])

required_packages = ["sagemaker", "boto3", "mxnet", "h5py", "tqdm", "matplotlib"]

for package in required_packages:
    import_or_install(package)

%store -r
%store
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Import Libraries
import io
import os
import h5py
import zipfile
import boto3
import sagemaker
import mxnet as mx
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
import cv2
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Configure Boto3 Clients and Sessions
region = "us-west-2"  # Change region as needed
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)

s3_client = boto3.client("s3", region_name=region)
sagemaker_boto_client = boto_session.client("sagemaker")
sagemaker_session = sagemaker.session.Session(
    boto_session=b...
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Load Dataset
# check if directory exists
if not os.path.isdir("data"):
    os.mkdir("data")

# download zip file from public s3 bucket
!wget -P data https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/pcam/medical_images.zip

with zipfile.ZipFile("data/medical_images.zip") as zf:
    zf.extractall()

with open("data/camely...
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
View Sample Images from Dataset
def preview_images(X, y, n, cols):
    sample_images = X[:n]
    sample_labels = y[:n]
    rows = int(np.ceil(n / cols))
    fig, axs = plt.subplots(rows, cols, figsize=(11.5, 7))
    for i, ax in enumerate(axs.flatten()):
        image = sample_images[i]
        label = sample_labels[i]
        ax.imshow(image)
        ...
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Shuffle and Split Dataset
from sklearn.model_selection import train_test_split

X_numpy = X[:]
y_numpy = y[:]

X_train, X_test, y_train, y_test = train_test_split(
    X_numpy, y_numpy, test_size=1000, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=2000, random_state=1)

print(X_train.shape)
print...
(11000, 96, 96, 3)
(2000, 96, 96, 3)
(1000, 96, 96, 3)
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Convert Splits to RecordIO Format
def write_to_recordio(X: np.ndarray, y: np.ndarray, prefix: str):
    record = mx.recordio.MXIndexedRecordIO(idx_path=f"{prefix}.idx", uri=f"{prefix}.rec", flag="w")
    for idx, arr in enumerate(tqdm(X)):
        header = mx.recordio.IRHeader(0, y[idx], idx, 0)
        s = mx.recordio.pack_img(
            header, ...
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Upload Data Splits to S3
prefix = "cv-metastasis"

try:
    s3_client.create_bucket(
        Bucket=bucket, ACL="private", CreateBucketConfiguration={"LocationConstraint": region}
    )
    print(f"Created S3 bucket: {bucket}")
except Exception as e:
    if e.response["Error"]["Code"] == "BucketAlreadyOwnedByYou":
        print(f"Using existi...
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Configure the Estimator
training_image = sagemaker.image_uris.retrieve("image-classification", region)
num_training_samples = X_train.shape[0]
num_classes = len(np.unique(y_train))

hyperparameters = {
    "num_layers": 18,
    "use_pretrained_model": 1,
    "augmentation_type": "crop_color_transform",
    "image_shape": "3,96,96",
    "num_c...
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Configure the Hyperparameter Tuner

Although we would prefer to tune for recall, the current HyperparameterTuner implementation for Image Classification only supports validation accuracy.
hyperparameter_ranges = {
    "mini_batch_size": sagemaker.parameter.CategoricalParameter([16, 32, 64]),
    "learning_rate": sagemaker.parameter.CategoricalParameter([0.001, 0.01]),
}

hyperparameter_tuner = sagemaker.tuner.HyperparameterTuner(
    estimator=image_classifier,
    objective_metric_name="validation:accu...
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Define the Data Channels
train_input = sagemaker.inputs.TrainingInput(
    s3_data=f"s3://{bucket}/{prefix}/data/train",
    content_type="application/x-recordio",
    s3_data_type="S3Prefix",
    input_mode="Pipe",
)

val_input = sagemaker.inputs.TrainingInput(
    s3_data=f"s3://{bucket}/{prefix}/data/val",
    content_type="application/x-re...
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Run Hyperparameter Tuning Jobs
if 'tuning_job_name' not in locals():
    hyperparameter_tuner.fit(inputs=data_channels)
    tuning_job_name = hyperparameter_tuner.describe().get('HyperParameterTuningJobName')
    %store tuning_job_name
else:
    print(f'Using previous tuning job: {tuning_job_name}')
    %store tuning_job_name
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
Examine Results

NOTE: If your kernel has restarted after running the hyperparameter tuning job, everything you need has been persisted to SageMaker. You can continue on without having to run the tuning job again.
results = sagemaker.analytics.HyperparameterTuningJobAnalytics(tuning_job_name)
results_df = results.dataframe()
results_df

best_training_job_summary = results.description()["BestTrainingJob"]
best_training_job_name = best_training_job_summary["TrainingJobName"]

%store best_training_job_name
_____no_output_____
Apache-2.0
use-cases/computer_vision/1-metastases-detection-train-model.ipynb
sureindia-in/sagemaker-examples-good
K-Means Clustering
# Import required packages
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from pandas import DataFrame

Data = {'x': [25,34,22,27,33,33,31,22,35,34,67,54,57,43,50,57,59,52,65,47,49,48,35,33,44,45,38,43,51,46],
        'y': [79,51,53,...
_____no_output_____
Apache-2.0
K-MEANS CLUSTERING.ipynb
PrasannaDataBus/CLUSTERING
Next you'll see how to use sklearn to find the centroids for 3 clusters, and then for 4 clusters.

K-Means Clustering in Python – 3 clusters

Once you have created the DataFrame based on the above data, you'll need to import 2 additional Python modules:
- matplotlib – for creating charts in Python
- sklearn – for applying the...
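Finding the centroids for 3 clusters with sklearn looks like the following. This is a minimal sketch on a small two-column DataFrame; the x/y values below are invented stand-ins for the data built above:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical x/y data standing in for the DataFrame built above
df = pd.DataFrame({'x': [25, 34, 22, 27, 33, 67, 54, 57, 43, 50, 35, 33, 44, 45, 38],
                   'y': [79, 51, 53, 78, 59, 51, 32, 40, 47, 53, 12, 14, 19, 7, 24]})

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(df)
centroids = kmeans.cluster_centers_   # one (x, y) centroid per cluster
print(centroids.shape)                # (3, 2)
```

Swapping `n_clusters=3` for `n_clusters=4` repeats the exercise with four centroids.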
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

mms = MinMaxScaler()
mms.fit(df)
data_transformed = mms.transform(df)

Sum_of_squared_distances = []
K = range(1, 15)
for k in K:
    km = KMeans(n_clusters=k)
    km = km.fit(data_transformed)
    Sum_of_squared_distances.append(km.inertia_)

plt.plot(K, ...
_____no_output_____
Apache-2.0
K-MEANS CLUSTERING.ipynb
PrasannaDataBus/CLUSTERING
Aerospike Connect for Spark Tutorial for Python

Tested with Spark connector 3.2.0, ASDB EE 5.7.0.7, Java 8, Apache Spark 3.0.2, Python 3.7, Scala 2.12.11, and [Spylon](https://pypi.org/project/spylon-kernel/).

Setup

Ensure Database Is Running

This notebook requires that the Aerospike database is running.
!asd >& /dev/null
!pgrep -x asd >/dev/null && echo "Aerospike database is running!" || echo "**Aerospike database is not running!**"
Aerospike database is running!
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Set Aerospike, Spark, and Spark Connector Paths and Parameters
# Directory where Spark-related components are installed
SPARK_NB_DIR = '/opt/spark-nb'
SPARK_HOME = SPARK_NB_DIR + '/spark-3.0.3-bin-hadoop3.2'

# IP Address or DNS name for one host in your Aerospike cluster
AS_HOST = "localhost"

# Name of one of your namespaces. Type 'show namespaces' at the aql prompt if you are not...
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Alternative Setup for Running Notebook in Different Environment

Please follow the instructions below **instead of the setup above** if you are running this notebook in a different environment from the one provided by the Aerospike Intro-Notebooks container.

```
IP Address or DNS name for one host in your Aerospike clust...
# Next we locate the Spark installation - this will be found using the SPARK_HOME environment variable that you will have set
import findspark
findspark.init(SPARK_HOME)

import pyspark
from pyspark.sql.types import *
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Configure Aerospike properties in the Spark Session object. Please visit [Configuring Aerospike Connect for Spark](https://docs.aerospike.com/docs/connect/processing/spark/configuration.html) for more information about the properties used on this page.
from pyspark.sql import SparkSession
from pyspark import SparkContext

sc = SparkContext.getOrCreate()
conf = sc._conf.setAll([("aerospike.namespace", AS_NAMESPACE),
                        ("aerospike.seedhost", AS_CONNECTION_STRING),
                        ("aerospike.log.level", "info")])
sc.stop()
sc = pyspark.SparkContext(conf=conf)
spark = SparkSession(sc)
# sqlCont...
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Schema in the Spark Connector

- Aerospike is schemaless; Spark, however, adheres to a schema. After the schema is decided upon (either through inference or given), data within the bins must honor the types.
- To infer the schema, the connector samples a set of records (configurable through `aerospike.schema.scan`) to decide the...
import random

num_records = 200

schema = StructType(
    [
        StructField("id", IntegerType(), True),
        StructField("name", StringType(), True)
    ]
)

inputBuf = []
for i in range(1, num_records):
    name = "name" + str(i)
    id_ = i
    inputBuf.append((id_, name))

inputRDD = spa...
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
aerospike.schema.flexible = true (default)

If none of the column types in the user-specified schema match the bin types of a record in Aerospike, a record with NULLs is returned in the result set. Use `filter()` in Spark to filter out NULL records, e.g. `df.filter("gender is not null").show()`, where ...
schemaIncorrect = StructType(
    [
        StructField("id", IntegerType(), True),
        StructField("name", IntegerType(), True)  ## Note incorrect type of name bin
    ]
)

flexSchemaInference = spark \
    .read \
    .format("aerospike") \
    .schema(schemaIncorrect) \
    .option("aerospike.set", "py_input_data").load()

flexSc...
+---+----+
| id|name|
+---+----+
| 10|null|
| 50|null|
|185|null|
|117|null|
| 88|null|
+---+----+
only showing top 5 rows
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
aerospike.schema.flexible = false

If a mismatch between the user-specified schema and the schema of a record in Aerospike is detected at the bin/column level, your query will error out.
# When strict matching is set, we will get an exception due to type mismatch with schema provided.
try:
    errorDFStrictSchemaInference = spark \
        .read \
        .format("aerospike") \
        .schema(schemaIncorrect) \
        .option("aerospike.schema.flexible", "false") \
        .option("aerospike.set", "py_input_data").load()
    ...
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Create sample data
# We create age vs salary data, using three different Gaussian distributions
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math

# Make sure we get the same results every time this workbook is run
# Otherwise we are occasionally exposed to results not working out as expected
np.random.se...
Data created
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
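The truncated cell above seeds numpy and draws three Gaussian age/salary groups. A sketch of the sampling step; the helper name and the means, standard deviations, and correlation below are invented for illustration:

```python
import numpy as np

np.random.seed(12345)  # reproducible draws, as the cell intends

def sample_group(age_mean, age_sd, salary_mean, salary_sd, correlation, n):
    # Draw correlated (age, salary) pairs from a 2-D Gaussian
    cov = correlation * age_sd * salary_sd
    mean = [age_mean, salary_mean]
    cov_matrix = [[age_sd ** 2, cov], [cov, salary_sd ** 2]]
    samples = np.random.multivariate_normal(mean, cov_matrix, n)
    return samples[:, 0], samples[:, 1]

group_1_ages, group_1_salaries = sample_group(30, 4, 60000, 9000, 0.3, 100)
print(group_1_ages.shape, group_1_salaries.shape)  # (100,) (100,)
```

Repeating the call with different parameters yields the other two groups.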
Display simulated age/salary data
# Plot the sample data
group_1_colour, group_2_colour, group_3_colour = 'red', 'blue', 'pink'
plt.xlabel('Age', fontsize=10)
plt.ylabel("Salary", fontsize=10)

plt.scatter(group_1_ages, group_1_salaries, c=group_1_colour, label="Group 1")
plt.scatter(group_2_ages, group_2_salaries, c=group_2_colour, label="Group 2")
plt.scatter...
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Save data to Aerospike
# Turn the above records into a Data Frame
# First of all, create an array of arrays
inputBuf = []

for i in range(0, len(ages)):
    id = i + 1  # Avoid counting from zero
    name = "Individual: {:03d}".format(id)
    # Note we need to make sure values are typed correctly
    # salary will have type numpy.float6...
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Using Spark SQL syntax to insert data
# Aerospike DB needs a Primary key for record insertion. Hence, you must identify the primary key column
# using for example .option("aerospike.updateByKey", "id"), where "id" is the name of the column that you'd
# like to be the Primary key, while loading data from the DB.
insertDFWithSchema = spark \
    .read \
    .format(...
+---+---------------+------------------+------+
| id| name| age|salary|
+---+---------------+------------------+------+
|239|Individual: 239|34.652141285212814| 61747|
|101|Individual: 101| 46.53337694047585| 89019|
|194|Individual: 194| 45.57430980213645| 94548|
| 31|Individual: 031| 25.2492042...
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Load data into a DataFrame without specifying any Schema (uses schema inference)
# Create a Spark DataFrame by using the Connector Schema inference mechanism
# The fields preceded with __ are metadata fields - key/digest/expiry/generation/ttl
# By default you just get everything, with no column ordering, which is why it looks untidy
# Note we don't get anything in the 'key' field as we have not cho...
+-----+--------------------+---------+------------+-------+------------------+---------------+------+---+
|__key| __digest| __expiry|__generation| __ttl| age| name|salary| id|
+-----+--------------------+---------+------------+-------+------------------+---------------+------+---+
| ...
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Load data into a DataFrame using user specified schema
# If we explicitly set the schema, using the previously created schema object
# we effectively type the rows in the Data Frame
loadedDFWithSchema = spark \
    .read \
    .format("aerospike") \
    .schema(schema) \
    .option("aerospike.set", "salary_data").load()

loadedDFWithSchema.show(5)
+---+---------------+------------------+------+
| id| name| age|salary|
+---+---------------+------------------+------+
|239|Individual: 239|34.652141285212814| 61747|
|101|Individual: 101| 46.53337694047585| 89019|
|194|Individual: 194| 45.57430980213645| 94548|
| 31|Individual: 031| 25.2492042...
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Sampling from Aerospike DB

- Sample a specified number of records from Aerospike to considerably reduce data movement between Aerospike and the Spark clusters. Depending on the aerospike.partition.factor setting, you may get more records than desired. Please use this property in conjunction with the Spark `limit()` function ...
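The over-sampling behavior follows from the formula in the cell below: records returned = ceil(sample_size / num_partitions) * num_partitions, where num_partitions = 2^partition_factor. A quick numeric check (the partition factor of 3 is an assumption for illustration):

```python
import math

sample_size = 101
partition_factor = 3                 # hypothetical aerospike.partition.factor
num_sp = 2 ** partition_factor       # number of Spark partitions
records_returned = math.ceil(sample_size / num_sp) * num_sp
print(records_returned)              # 104 - more than the requested 101
```

A lower partition factor brings the returned count closer to the requested sample size, which is why the cell's comment recommends it for more accurate sampling.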
# number_of_spark_partitions (num_sp) = 2^{aerospike.partition.factor}
# total number of records = Math.ceil((float)aerospike.sample.size/num_sp) * (num_sp)
# use lower partition factor for more accurate sampling
setname = "py_input_data"
sample_size = 101

df3 = spark.read.format("aerospike") \
    .option("aerospike.partition.fact...
count3= 104
count4= 113
limitCount= 101
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Working with Collection Data Types (CDT) in Aerospike

Save JSON into Aerospike using a schema
# Schema specification
aliases_type = StructType([
    StructField("first_name", StringType(), False),
    StructField("last_name", StringType(), False)
])

id_type = StructType([
    StructField("first_name", StringType(), False),
    StructField("last_name", StringType(), False),
    StructField("aliases", ArrayType(aliases...
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Retrieve CDT from Aerospike into a DataFrame using schema
loadedComplexDFWithSchema = spark \
    .read \
    .format("aerospike") \
    .option("aerospike.set", "complex_input_data") \
    .schema(person_type) \
    .load()

loadedComplexDFWithSchema.show(5)
+--------------------+-----------+--------------------+--------------------+
| name| SSN| home_address| work_history|
+--------------------+-----------+--------------------+--------------------+
|[Carrie, Collier,...|611-70-8032|[[14908, [Frankli...|[[Russell Group, ...|
|[Ashley, Da...
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Data Exploration with Aerospike
import pandas
import matplotlib
import matplotlib.pyplot as plt

# convert Spark df to pandas df
pdf = loadedDFWithSchema.toPandas()

# Describe the data
pdf.describe()

# Histogram - Age
age_min, age_max = int(np.amin(pdf['age'])), math.ceil(np.amax(pdf['age']))
age_bucket_size = 5
print(age_min, age_max)
pdf[['age']].pl...
22 57
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Querying Aerospike Data using SparkSQL

Note:
1. Queries that involve the Primary Key or Digest in the predicate trigger [aerospike_batch_get()](https://www.aerospike.com/docs/client/c/usage/kvs/batch.html) and run extremely fast, e.g. a query containing `__key` or `__digest` with no `OR` between two bins.
2. ...
# Basic PKey query
batchGet1 = spark \
    .read \
    .format("aerospike") \
    .option("aerospike.set", "salary_data") \
    .option("aerospike.keyType", "int") \
    .load().where("__key = 100")

batchGet1.show()

# Note ASDB only supports equality test with PKs in primary key query.
# So, a where clause with "__key > 10" would result ...
+-----+--------------------+---------+------------+-------+------------------+---------------+------+---+
|__key| __digest| __expiry|__generation| __ttl| age| name|salary| id|
+-----+--------------------+---------+------------+-------+------------------+---------------+------+---+
| ...
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
batchget query using `__digest`

- `__digest` can have only two types: `BinaryType` (the default type) or `StringType`.
- If the schema is not provided and `__digest` is `StringType`, then set `aerospike.digestType` to `string`.
- Records retrieved with a `__digest` batchget call will have a null primary key (i.e. `__key` is `n...
# convert digests to a list of byte[]
digest_list = batchGet2.select("__digest").rdd.flatMap(lambda x: x).collect()

# convert digest to hex string for querying. Only digests of type hex string and byte[] array are allowed.
string_digest = [''.join(format(x, '02x') for x in m) for m in digest_list]

# option("aerospike.diges...
+-----+--------------------+---------+------------+-------+------------------+---------------+------+---+
|__key| __digest| __expiry|__generation| __ttl| age| name|salary| id|
+-----+--------------------+---------+------------+-------+------------------+---------------+------+---+
| ...
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Queries including non-primary key conditions
# This query will run as a scan, which will be slower
somePrimaryKeys = list(range(1, 10))

scanQuery1 = spark \
    .read \
    .format("aerospike") \
    .option("aerospike.set", "salary_data") \
    .option("aerospike.keyType", "int") \
    .load().where((col("__key").isin(somePrimaryKeys)) | (col("age") > 50))

scanQuery1.show()
+-----+--------------------+---------+------------+-------+------------------+---------------+------+---+
|__key| __digest| __expiry|__generation| __ttl| age| name|salary| id|
+-----+--------------------+---------+------------+-------+------------------+---------------+------+---+
| ...
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Pushdown [Aerospike Expressions](https://docs.aerospike.com/docs/guide/expressions/) from within a Spark API.

- Make sure that you do not use the WHERE clause or Spark filters while querying.
- See [Aerospike Expressions](https://docs.aerospike.com/docs/guide/expressions/) for more information on how to construct ex...
scala_predexp = sc._jvm.com.aerospike.spark.utility.AerospikePushdownExpressions

# id % 5 == 0 => get rows where mod(col("id"), 5) == 0
# Equivalent Java Exp: Exp.eq(Exp.mod(Exp.intBin("a"), Exp.`val`(5)), Exp.`val`(0))
expIntBin = scala_predexp.intBin("id")  # id is the name of the column
expMODIntBinEqualToZero = scala_predexp.eq(s...
+---+------+----+------+
| id| name| age|salary|
+---+------+----+------+
| 10|name10|null| null|
| 50|name50|null| null|
+---+------+----+------+
only showing top 2 rows
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Parameters for tuning Aerospike / Spark performance

- aerospike.partition.factor: number of logical aerospike partitions [0-15]
- aerospike.maxthreadcount: maximum number of threads to use for writing data into Aerospike
- aerospike.compression: compression of java client-server communication
- aerospike.batchMa...
from sklearn.mixture import GaussianMixture

# We take the data we previously loaded
ages = pdf['age']
salaries = pdf['salary']

# age_salary_matrix = np.matrix([ages, salaries]).T
age_salary_matrix = np.asarray([ages, salaries]).T

# Find the optimal number of clusters
optimal_cluster_count = 1
best_bic_score = GaussianMixture(1).fit(a...
Optimal cluster count found to be 4
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Estimate cluster distribution parameters

Next we fit our clustering model using the optimal cluster count, and print out the discovered means and covariance matrix.
gm = GaussianMixture(optimal_cluster_count)
gm.fit(age_salary_matrix)

estimates = []
# Index
for index in range(0, optimal_cluster_count):
    estimated_mean_age = round(gm.means_[index][0], 2)
    estimated_mean_salary = round(gm.means_[index][1], 0)
    estimated_age_std_dev = round(math.sqrt(gm.covariances_[index][0][...
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Original Distribution Parameters
distribution_data_as_rows = []

for distribution in distribution_data:
    row = [distribution['age_mean'], distribution['salary_mean'], distribution['age_std_dev'],
           distribution['salary_std_dev'], distribution['age_salary_correlation']]
    distribution_data_as_rows.append(row)

pd.DataFrame(d...
_____no_output_____
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
You can see that the algorithm provides good estimates of the original parameters.

Prediction

We generate new age/salary pairs for each of the distributions and look at how accurate the prediction is.
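The accuracy check boils down to comparing the cluster each sample was drawn from with the cluster the model assigns it to (assuming the cluster indices are already aligned). A tiny sketch with made-up labels:

```python
import numpy as np

# Hypothetical source-cluster indices and model assignments
true_clusters = np.array([0, 0, 1, 1, 2, 2])
predicted     = np.array([0, 0, 1, 2, 2, 2])

# Fraction of samples assigned back to the cluster they came from
accuracy = float(np.mean(true_clusters == predicted))
```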
def prediction_accuracy(model,age_salary_distribution,sample_size): # Generate new values new_ages,new_salaries = age_salary_sample(age_salary_distribution,sample_size) #new_age_salary_matrix=np.matrix([new_ages,new_salaries]).T new_age_salary_matrix=np.asarray([new_ages,new_salaries]).T # Find whic...
Accuracies for each distribution : 100.00% ,60.20% ,98.00% Overall accuracy : 86.07%
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
aerolookup

aerolookup allows you to look up records corresponding to a set of keys stored in a Spark DF, streaming or otherwise. It supports: - [Aerospike CDT](https://docs.aerospike.com/docs/guide/cdt.htmlarbitrary) - Quota and retry (these configurations are extracted from sparkconf) - [Flexible schema](https:/...
alias = StructType([StructField("first_name", StringType(), False), StructField("last_name", StringType(), False)]) name = StructType([StructField("first_name", StringType(), False), StructField("aliases", ArrayType(alias), False)]) street_adress = StructType([StructField("street_...
+--------------------+-----------+--------------------+ | name| SSN| home_address| +--------------------+-----------+--------------------+ |[Gary, [[Cameron,...|825-55-3247|[[66428, [Kim Mil...| |[Megan, [[Robert,...|289-18-1554|[[81551, [Archer ...| |[Melanie, [[Justi...|756-46-4088|[[6132...
MIT
notebooks/spark/AerospikeSparkPython.ipynb
dotyjim-work/aerospike-interactive-notebooks
Moon Data Classification

In this notebook, you'll be tasked with building and deploying a **custom model** in SageMaker. Specifically, you'll define and train a custom PyTorch neural network to create a binary classifier for data that is separated into two classes; the data looks like two moon shapes when it is displa...
# data import pandas as pd import numpy as np from sklearn.datasets import make_moons from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt %matplotlib inline
_____no_output_____
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
Generating Moon Data

Below, I have written code to generate some moon data, using sklearn's [make_moons](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html) and [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). I'm specify...
# set data params np.random.seed(0) num_pts = 1000 noise_val = 0.25 # generate data # X = 2D points, Y = class labels (0 or 1) X, Y = make_moons(num_pts, noise=noise_val) # Split into test and training data X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=1) # plot # points a...
_____no_output_____
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
SageMaker Resources

The cell below stores the SageMaker session and role (for creating estimators and models), and creates a default S3 bucket. After creating this bucket, you can upload any locally stored data to S3.
# sagemaker import boto3 import sagemaker from sagemaker import get_execution_role # SageMaker session and role sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() # default S3 bucket bucket = sagemaker_session.default_bucket()
_____no_output_____
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
EXERCISE: Create csv files

Define a function that takes in x (features) and y (labels) and saves them to one `.csv` file at the path `data_dir/filename`. SageMaker expects `.csv` files to be in a certain format, according to the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html):> Amazon...
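One way the described function could look (a sketch, not necessarily the notebook's exact solution): labels go in the first column, and neither a header row nor an index column is written.

```python
import os

import numpy as np
import pandas as pd

def make_csv(x, y, filename, data_dir):
    """Merge labels and features into one CSV with labels in the
    first column, no header row, and no index column."""
    os.makedirs(data_dir, exist_ok=True)
    df = pd.concat([pd.DataFrame(y), pd.DataFrame(x)], axis=1)
    df.to_csv(os.path.join(data_dir, filename), header=False, index=False)
```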
import os def make_csv(x, y, filename, data_dir): '''Merges features and labels and converts them into one csv file with labels in the first column. :param x: Data features :param y: Data labels :param file_name: Name of csv file, ex. 'train.csv' :param data_dir: The directory where fil...
_____no_output_____
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
The next cell runs the above function to create a `train.csv` file in a specified directory.
data_dir = 'data_moon' # the folder we will use for storing data name = 'train.csv' # create 'train.csv' make_csv(X_train, Y_train, name, data_dir)
Path created: data_moon/train.csv
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
Upload Data to S3

Upload the locally-stored `train.csv` file to S3 by using `sagemaker_session.upload_data`. This function needs to know: where the data is saved locally, and where to upload in S3 (a bucket and prefix).
# specify where to upload in S3 prefix = 'sagemaker/moon-data' # upload to S3 input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix) print(input_data)
s3://sagemaker-us-east-1-633655289115/sagemaker/moon-data
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
Check that you've uploaded the data, by printing the contents of the default bucket.
# iterate through S3 objects and print contents for obj in boto3.resource('s3').Bucket(bucket).objects.all(): print(obj.key)
fraud_detection/linear-learner-2020-10-24-06-45-32-777/output/model.tar.gz fraud_detection/linear-learner-2020-10-24-07-25-35-056/output/model.tar.gz fraud_detection/linear-learner-2020-10-24-07-48-22-409/output/model.tar.gz fraud_detection/linear-learner-2020-10-24-08-12-30-994/output/model.tar.gz fraud_detection/line...
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
--- Modeling

Now that you've uploaded your training data, it's time to define and train a model! In this notebook, you'll define and train a **custom PyTorch model**: a neural network that performs binary classification.

EXERCISE: Define a model in `model.py`

To implement a custom classifier, the first thing you'll do i...
!pygmentize source/model.py
import torch import torch.nn as nn import torch.nn.functi...
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
Training Script

To implement a custom classifier, you'll also need to complete a `train.py` script. You can find this in the `source` directory.

A typical training script:
* Loads training data from a specified directory
* Parses any training & model hyperparameters (ex. nodes in a neural network, training epochs, etc.)
* ...
!pygmentize source/train.py
from __future__ import print_function # future proof import argparse import sys import os import json ...
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
EXERCISE: Create a PyTorch Estimator

You've had some practice instantiating built-in models in SageMaker. All estimators require some constructor arguments to be passed in. When a custom model is constructed in SageMaker, an **entry point** must be specified. The entry_point is the training script that will be executed...
# import a PyTorch wrapper from sagemaker.pytorch import PyTorch # specify an output path output_path = 's3://{}/{}'.format(bucket, prefix) # instantiate a pytorch estimator estimator = PyTorch(entry_point='train.py', source_dir='source', framework_version='1.0', ...
_____no_output_____
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
Train the Estimator

After instantiating your estimator, train it with a call to `.fit()`. The `train.py` file explicitly loads in `.csv` data, so you do not need to convert the input data to any other format.
%%time # train the estimator on S3 training data estimator.fit({'train': input_data})
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2. 's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2. 'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
Create a Trained Model

PyTorch models do not automatically come with `.predict()` functions attached (as many Scikit-learn models do, for example), and you may have noticed that you've been given a `predict.py` file. This file is responsible for loading a trained model and applying it to passed-in numpy data. When you c...
%%time # importing PyTorchModel from sagemaker.pytorch import PyTorchModel # Create a model from the trained estimator data # And point to the prediction script model = PyTorchModel(model_data=estimator.model_data, role=role, framework_version='1.0', entry_po...
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
EXERCISE: Deploy the trained model

Deploy your model to create a predictor. We'll use this to make predictions on our test data and evaluate the model.
%%time # deploy and create a predictor predictor = model.deploy(initial_instance_count=1, instance_type='ml.t2.medium')
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
--- Evaluating Your Model

Once your model is deployed, you can see how it performs when applied to the test data.

The provided function below takes in a deployed predictor, some test features and labels, and returns a dictionary of metrics; calculating false negatives and positives as well as recall, precision, and accu...
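The metrics reduce to simple ratios over the confusion-matrix counts. A sketch using the counts from the test results shown further below (tp=117, fp=11, fn=15, tn=107):

```python
# Confusion-matrix counts: true/false positives and negatives
tp, fp, fn, tn = 117, 11, 15, 107

recall = tp / (tp + fn)                     # of actual positives, how many were found
precision = tp / (tp + fp)                  # of predicted positives, how many were correct
accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall fraction correct
```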
# code to evaluate the endpoint on test data # returns a variety of model metrics def evaluate(predictor, test_features, test_labels, verbose=True): """ Evaluate a model on a test set given the prediction endpoint. Return binary classification metrics. :param predictor: A prediction endpoint :para...
_____no_output_____
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
Test Results

The cell below runs the `evaluate` function. The code assumes that you have a defined `predictor` and `X_test` and `Y_test` from previously-run cells.
# get metrics for custom predictor metrics = evaluate(predictor, X_test, Y_test, True)
predictions 0.0 1.0 actuals 0 107 11 1 15 117 Recall: 0.886 Precision: 0.914 Accuracy: 0.896
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
Delete the Endpoint

Finally, I've added a convenience function to delete prediction endpoints after we're done with them. And if you're done evaluating the model, you should delete your model endpoint!
# Accepts a predictor endpoint as input # And deletes the endpoint by name def delete_endpoint(predictor): try: boto3.client('sagemaker').delete_endpoint(EndpointName=predictor.endpoint) print('Deleted {}'.format(predictor.endpoint)) except: print('Already deleted: {}...
Deleted sagemaker-pytorch-2020-10-24-19-46-20-459
MIT
Moon_classification_Exercise/Moon_Classification_Exercise.ipynb
NwekeChidi/Udacity_ML_with_SageMaker
Loading Graphs in NetworkX
import networkx as nx import numpy as np import pandas as pd %matplotlib notebook # Instantiate the graph G1 = nx.Graph() # add node/edge pairs G1.add_edges_from([(0, 1), (0, 2), (0, 3), (0, 5), (1, 3), (1, 6), ...
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Adjacency List

`G_adjlist.txt` is the adjacency list representation of G1. It can be read as follows:
* `0 1 2 3 5` $\rightarrow$ node `0` is adjacent to nodes `1, 2, 3, 5`
* `1 3 6` $\rightarrow$ node `1` is (also) adjacent to nodes `3, 6`
* `2` $\rightarrow$ node `2` is (also) adjacent to no new nodes
* `3 4` $\rightarrow...
!cat G_adjlist.txt
0 1 2 3 5 1 3 6 2 3 4 4 5 7 5 8 6 7 8 9 9
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
If we read in the adjacency list using `nx.read_adjlist`, we can see that it matches `G1`.
G2 = nx.read_adjlist('G_adjlist.txt', nodetype=int) G2.edges()
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Adjacency Matrix

The elements in an adjacency matrix indicate whether pairs of vertices are adjacent or not in the graph. Each node has a corresponding row and column. For example, row `0`, column `1` corresponds to the edge between node `0` and node `1`. Reading across row `0`, there is a '`1`' in columns `1`, `2`, `...
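Using the adjacency list above, the matrix can be built by hand with numpy; for an undirected graph each edge sets two symmetric entries (a sketch):

```python
import numpy as np

# Edges of G1, read off the adjacency list
edges = [(0, 1), (0, 2), (0, 3), (0, 5), (1, 3), (1, 6),
         (3, 4), (4, 5), (4, 7), (5, 8), (8, 9)]

A = np.zeros((10, 10), dtype=int)
for u, v in edges:
    A[u, v] = 1
    A[v, u] = 1  # undirected: the matrix is symmetric
```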
G_mat = np.array([[0, 1, 1, 1, 0, 1, 0, 0, 0, 0], [1, 0, 0, 1, 0, 0, 1, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 1, 0, 1, 0, 0], [1, 0, 0, 0, 1, 0, 0, 0, 1, 0], [0...
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
If we convert the adjacency matrix to a networkx graph using `nx.Graph`, we can see that it matches G1.
G3 = nx.Graph(G_mat) G3.edges()
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Edgelist

The edge list format represents edge pairings in the first two columns. Additional edge attributes can be added in subsequent columns. Looking at `G_edgelist.txt`, this is the same as the original graph `G1`, but now each edge has a weight. For example, from the first row, we can see the edge between nodes `0`...
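As a quick sketch of the format, each line splits into two node ids plus the attribute columns — here a single weight, stored as an attribute dictionary the way an `('Weight', int)` spec would name it (parsing by hand, with the ids cast to int for simplicity):

```python
# First rows of G_edgelist.txt: "node1 node2 weight"
lines = ["0 1 4", "0 2 3", "0 3 2"]

edges = []
for line in lines:
    u, v, w = line.split()
    edges.append((int(u), int(v), {'Weight': int(w)}))
```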
!cat G_edgelist.txt
0 1 4 0 2 3 0 3 2 0 5 6 1 3 2 1 6 5 3 4 3 4 5 1 4 7 2 5 8 6 8 9 1
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Using `read_edgelist` and passing in a list of tuples with the name and type of each edge attribute will create a graph with our desired edge attributes.
G4 = nx.read_edgelist('G_edgelist.txt', data=[('Weight', int)]) G4.edges(data=True)
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Pandas DataFrame Graphs can also be created from pandas dataframes if they are in edge list format.
G_df = pd.read_csv('G_edgelist.txt', delim_whitespace=True, header=None, names=['n1', 'n2', 'weight']) G_df G5 = nx.from_pandas_dataframe(G_df, 'n1', 'n2', edge_attr='weight') G5.edges(data=True)
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Chess Example Now let's load in a more complex graph and perform some basic analysis on it.We will be looking at chess_graph.txt, which is a directed graph of chess games in edge list format.
!head -5 chess_graph.txt
1 2 0 885635999.999997 1 3 0 885635999.999997 1 4 0 885635999.999997 1 5 1 885635999.999997 1 6 0 885635999.999997
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Each node is a chess player, and each edge represents a game. The first column, with an outgoing edge, corresponds to the white player; the second column, with an incoming edge, corresponds to the black player.

The third column, the weight of the edge, corresponds to the outcome of the game. A weight of 1 indicates white wo...
chess = nx.read_edgelist('chess_graph.txt', data=[('outcome', int), ('timestamp', float)], create_using=nx.MultiDiGraph()) chess.is_directed(), chess.is_multigraph() chess chess.edges(data=True)
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Looking at the degree of each node, we can see how many games each person played. A dictionary is returned where each key is the player, and each value is the number of games played.
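Conceptually, a node's degree in this multigraph is just how many games the player appears in, on either side. A networkx-free sketch with made-up games:

```python
from collections import Counter

# Toy games as (white, black) pairs; repeats are allowed in a multigraph
games = [(1, 2), (1, 3), (2, 3), (3, 1)]

games_played = Counter()
for white, black in games:
    games_played[white] += 1  # outgoing edge
    games_played[black] += 1  # incoming edge
```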
games_played = chess.degree() games_played
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Using list comprehension, we can find which player played the most games.
max_value = max(games_played.values()) max_key, = [i for i in games_played.keys() if games_played[i] == max_value] print('player {}\n{} games'.format(max_key, max_value))
player 461 280 games
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Let's use pandas to find out which players won the most games. First let's convert our graph to a DataFrame.
df = pd.DataFrame(chess.edges(data=True), columns=['white', 'black', 'outcome']) df.head()
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Next we can use a lambda to pull out the outcome from the attributes dictionary.
df['outcome'] = df['outcome'].map(lambda x: x['outcome']) df.head()
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
To count the number of times a player won as white, we find the rows where the outcome was '1', group by the white player, and sum.

To count the number of times a player won as black, we find the rows where the outcome was '-1', group by the black player, sum, and multiply by -1.

Then we can add these together with a fill ...
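A miniature version of the same computation on three made-up games (the player ids here are hypothetical):

```python
import pandas as pd

# Toy games: outcome 1 = white won, -1 = black won
df = pd.DataFrame({'white':   [330, 461, 330],
                   'black':   [461, 330, 461],
                   'outcome': [1, 1, -1]})

# Wins as white: sum the +1 outcomes per white player
won_as_white = df[df['outcome'] == 1].groupby('white')['outcome'].sum()
# Wins as black: sum the -1 outcomes per black player and flip the sign
won_as_black = -df[df['outcome'] == -1].groupby('black')['outcome'].sum()
# Combine, treating missing players as 0 wins on that side
win_count = won_as_white.add(won_as_black, fill_value=0)
```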
won_as_white = df[df['outcome']==1].groupby('white').sum() won_as_black = -df[df['outcome']==-1].groupby('black').sum() win_count = won_as_white.add(won_as_black, fill_value=0) win_count.head()
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Using `nlargest` we find that player 330 won the most games at 109.
win_count.nlargest(5, 'outcome')
_____no_output_____
MIT
C5-Applied Social Network Analysis with Python/Notebooks/Week1/Loading+Graphs+in+NetworkX.ipynb
nishamathi/coursera-data-science-python
Counting Easter eggs

Our experiment compares the classification approach and the regression approach. The selection is done with the `class_mode` option in Keras' ImageDataGenerator flow_from_directory. `categorical` is used for the one-hot encoding and `sparse` for integers as classes.

Careful: While this is convention...
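The difference between the two label encodings can be sketched directly (assuming 8 classes for 0..7 eggs):

```python
import numpy as np

num_classes = 8
label = 6  # e.g. an image with six eggs

# class_mode="sparse": the label stays a single integer
sparse_label = label

# class_mode="categorical": the label becomes a one-hot vector
categorical_label = np.zeros(num_classes)
categorical_label[label] = 1.0
```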
class_mode = "categorical"
_____no_output_____
MIT
jupyter_notebooks/classification.ipynb
lightning485/osterai
Imports and version numbers
import tensorflow as tf from tensorflow.keras.preprocessing import image_dataset_from_directory from tensorflow.keras.preprocessing.image import ImageDataGenerator from matplotlib import pyplot as plt import os import re import numpy as np from tensorflow.keras.preprocessing.image import load_img # Python version: 3.8 ...
nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2019 NVIDIA Corporation Built on Sun_Jul_28_19:12:52_Pacific_Daylight_Time_2019 Cuda compilation tools, release 10.1, V10.1.243
MIT
jupyter_notebooks/classification.ipynb
lightning485/osterai
Prepare data for the training

If you redo this notebook on your own, you'll need the images with 0..7 (without 5) eggs in the folders `./images/0` ... `./images/7` (`./images/5` must exist for the classification training, but be empty).
data_directory = "./images" input_shape = [64,64,3] # 256 batch_size = 16 seed = 123 # for val split train_datagen = ImageDataGenerator( validation_split=0.2, rescale=1.0/255.0 ) train_generator = train_datagen.flow_from_directory( data_directory, seed=seed, target_size=(input_shape[0],input...
Found 11200 images belonging to 8 classes. Found 2800 images belonging to 8 classes.
MIT
jupyter_notebooks/classification.ipynb
lightning485/osterai
Prepare the model
num_classes = 8 # because 0..7 eggs if class_mode == "categorical": num_output_dimensions = num_classes if class_mode == "sparse": num_output_dimensions = 1 model = tf.keras.Sequential() model.add( tf.keras.layers.Conv2D( filters = 4, kernel_size = 5, strides = 1, padding = 'same', activa...
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 64, 64, 4) 304 ____________________________________...
MIT
jupyter_notebooks/classification.ipynb
lightning485/osterai
Train the model (on 0, 1, 2, 3, 4, 6, 7, but not 5 eggs)
epochs = 5 model.fit( train_generator, epochs=epochs, validation_data=val_generator, ) plt.figure() if class_mode == "categorical": plt.plot(model.history.history['accuracy']) plt.plot(model.history.history['val_accuracy']) plt.title('History') plt.ylabel('Value') plt.xlabel('Epoc...
_____no_output_____
MIT
jupyter_notebooks/classification.ipynb
lightning485/osterai
Illustrate performance on unknown and completely unknown input

(5 eggs are completely unknown; all other numbers were trained, but at least the test image with 4 eggs was not used during training.)

If you are running this notebook on your own, you might have to adjust the filepaths, and you'll have to put the images with 5 eggs ...
filepath_known = './images_known_unknown/4/0.png' filepath_unknown = './images_unknown/5/0.png' # helper function to make the notebook more tidy def make_prediction(filepath): img = load_img( filepath, target_size=(input_shape[0],input_shape[1]) ) img = np.array(img) img = img / 255...
Found 2000 images belonging to 8 classes. 2000/2000 [==============================] - 5s 3ms/step
MIT
jupyter_notebooks/classification.ipynb
lightning485/osterai
Illustrate performance on completely known data

For the sake of completeness, we repeat the last plots for the known data used in the training phase.
data_test_directory = "./images" test_datagen = ImageDataGenerator( rescale=1.0/255.0 ) test_generator = test_datagen.flow_from_directory( data_test_directory, target_size=(input_shape[0],input_shape[1]), color_mode="rgb", class_mode=class_mode, batch_size=1, subset=None, shuffle=Fa...
Found 14000 images belonging to 8 classes. 14000/14000 [==============================] - 37s 3ms/step
MIT
jupyter_notebooks/classification.ipynb
lightning485/osterai
110. Balanced Binary Tree

Given a binary tree, determine if it is height-balanced. For this problem, a height-balanced binary tree is defined as: a binary tree in which the left and right subtrees of every node differ in height by no more than 1.

Example 1: Input: root = [3,9,20,null,null,15,7] Output: true

Ex...
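A self-contained sketch of the recursive approach used below, where a `visitSubTree` helper returns (balanced, height) for each subtree:

```python
from typing import Tuple

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

class Solution:
    def isBalanced(self, root: TreeNode) -> bool:
        balanced, _ = self.visitSubTree(root)
        return balanced

    def visitSubTree(self, node: TreeNode) -> Tuple[bool, int]:
        """Return (is_balanced, height) for the subtree rooted at node."""
        if node is None:
            return True, 0
        left_balanced, left_height = self.visitSubTree(node.left)
        right_balanced, right_height = self.visitSubTree(node.right)
        balanced = (left_balanced and right_balanced
                    and abs(left_height - right_height) <= 1)
        return balanced, 1 + max(left_height, right_height)
```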
from typing import Tuple # Definition for a binary tree node. class TreeNode: def __init__(self, val=0, left=None, right=None): self.val = val self.left = left self.right = right class Solution: def isBalanced(self, root: TreeNode) -> bool: balanced, _ = self.visitSubTree(root)...
_____no_output_____
MIT
101-150/110.balanced-binary-tree.ipynb
Sorosliu1029/LeetCode
EmoDJ Music Player

EmoDJ is a brand-new AI-powered offline music player desktop application that focuses on improving listeners' emotional wellness.

This application is designed based on psychology theories. It is powered by machine learning to automatically identify the music emotion of your songs.

To start EmoDJ at first t...
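The 500 ms windowing used for feature extraction below can be sketched numerically (assuming librosa's default sampling rate of 22050 Hz):

```python
sr = 22050                   # samples per second (librosa's default, assumed)
hop = int(sr * 500 / 1000)   # samples per 500 ms window

# A three-second signal yields six window start offsets
n_samples = 3 * sr
window_starts = list(range(0, n_samples, hop))
```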
import os import librosa import sklearn def preprocess_feature(file_name): n_mfcc = 12 mfcc_all = [] #MFCC per time period (500ms) x, sr = librosa.load(MUSIC_FOLDER + file_name) for i in range(0, len(x), int(sr*500/1000)): x_cont = x[i:i+int(sr*500/1000)] mfccs = librosa.fea...
_____no_output_____
MIT
EmoDJ Music Player/EmoDJ_Music_Player_Code.ipynb
peggypytang/EmoDJ
Music Emotion Retrieval Panel

Displays music by valence and arousal value. The colour of a marker represents the colour association of the average music emotion of that music piece.

Listeners can see the annotation (song name, valence value, arousal value) of a particular music piece by hovering over its marker. Listeners can retrieve and p...
HAPPY_COLOUR = 'lawngreen' SAD_COLOUR = 'darkblue' TENSE_COLOUR = 'red' CALM_COLOUR = 'darkcyan' BASE_COLOUR = 'darkgrey' PLAYING_COLOUR = 'gold' FILE_FORMAT = '.wav' #Start playing music when user pick the marker on scatter plot def pick_music(event): ind = event.ind song_id = emotion_df.iloc[ind , :][ID_FIEL...
_____no_output_____
MIT
EmoDJ Music Player/EmoDJ_Music_Player_Code.ipynb
peggypytang/EmoDJ
Music Visualisation Engine

While playing the music, a Fast Fourier Transform is performed on each 1024-frame chunk to show amplitude (converted to dB) and frequency. The colour of the line represents the colour association of the time-varying music emotion.
#Initialise visualition def init_vis(): line.set_ydata([0] * len(vis_x)) return line, #Update the visualisation #Line plot value based on real FFT (converted to dB) #Line colour based on emotion of arousal valence value at that time period def animate_vis(i): global num_CHUNK #Show visualisation whe...
_____no_output_____
MIT
EmoDJ Music Player/EmoDJ_Music_Player_Code.ipynb
peggypytang/EmoDJ
Music Recommendation and Player Engine

In addition to standard functions (such as next, pause, resume), it provides a recommended playlist based on similarity of music emotion with the music selection. It plays the next music piece in the playlist automatically, starting from the most similar one, until it reaches the en...
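The playlist construction described above — ordering the remaining songs by Euclidean distance in (valence, arousal) space, most similar first — can be sketched as follows (song ids and emotion values are made up):

```python
import math

# Hypothetical (valence, arousal) coordinates per song id
songs = {1: (0.8, 0.6), 2: (0.1, 0.2), 3: (0.7, 0.5)}

def playlist_for(selected_id):
    """Other songs ordered by emotional similarity, most similar first."""
    v0, a0 = songs[selected_id]
    others = [sid for sid in songs if sid != selected_id]
    return sorted(others,
                  key=lambda sid: math.hypot(songs[sid][0] - v0,
                                             songs[sid][1] - a0))
```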
from tkinter import messagebox import pygame def get_song_name(song_id): return processed_music[processed_music[ID_FIELD]==song_id][NAME_FIELD].values[0] def get_song_file_path(song_id): return MUSIC_FOLDER + get_song_name(song_id) #Construct playlist based on similarity with song selected #Euclidean distan...
pygame 1.9.6 Hello from the pygame community. https://www.pygame.org/contribute.html
MIT
EmoDJ Music Player/EmoDJ_Music_Player_Code.ipynb
peggypytang/EmoDJ
Create Index

Create the search index files below:
- Average emotion of musics
- Time-varying valence values of musics
- Time-varying arousal values of musics
- Processed music (music pieces with music emotion recognised)
import os import pickle import time, sys from IPython.display import clear_output def update_progress(processed, total): bar_length = 20 progress = processed/total if isinstance(progress, int): progress = float(progress) if not isinstance(progress, float): progress = 0 if progr...
_____no_output_____
MIT
EmoDJ Music Player/EmoDJ_Music_Player_Code.ipynb
peggypytang/EmoDJ
Load Index

Load indexes if any. Otherwise, create the index folder and empty indexes.

Folder structure:
- musics/ (Music files)
- index/ (Index files)
- model/ (Music emotion recognition model)
import pandas as pd import numpy as np MUSIC_FOLDER = 'musics/' INDEX_FOLDER = 'index/' MODEL_FOLDER = 'model/' VAL_FIELD = 'valence' ARO_FIELD = 'arousal' NAME_FIELD = 'song_name' ID_FIELD = 'song_id' def load_index(): #For first time using this program #Create initial index if not os.path.exists(INDEX_FO...
_____no_output_____
MIT
EmoDJ Music Player/EmoDJ_Music_Player_Code.ipynb
peggypytang/EmoDJ
GUI Engine

Graphical user interface to interact with the listener.

Before launching the GUI, it checks whether there is unprocessed music. If so, it processes that music to get its music emotion values and re-creates the index.
#Due to system specification difference #Parameter to ensure synchronisation of visualisation and sound SYNC = 3.5 import tkinter as tk import wave from pygame import mixer import matplotlib.animation as animation from matplotlib import style style.use('ggplot') import matplotlib.pyplot as plt import numpy as np im...
Enjoy music! See you next time.
MIT
EmoDJ Music Player/EmoDJ_Music_Player_Code.ipynb
peggypytang/EmoDJ
XGBoost-Ray with Modin

This notebook includes an example workflow using [XGBoost-Ray](https://docs.ray.io/en/latest/xgboost-ray.html) and [Modin](https://modin.readthedocs.io/en/latest/) for distributed model training and prediction.

Cluster Setup

First, we'll set up our Ray Cluster. The provided ``modin_xgboost.yaml`` clus...
import argparse import time import modin.pandas as pd from modin.experimental.sklearn.model_selection import train_test_split from xgboost_ray import RayDMatrix, RayParams, train, predict import ray
_____no_output_____
Apache-2.0
doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb
richardsliu/ray
Next, let's parse some arguments. This will be used for executing the ``.py`` file, but not for the ``.ipynb``. If you are using the interactive notebook, you can directly override the arguments manually.
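To override the arguments manually in a notebook, pass an explicit argv list to `parse_args` instead of letting it read `sys.argv`. A sketch mirroring the flags defined below (the `--num-actors` default here is an assumption):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--address", type=str, default="auto")
parser.add_argument("--smoke-test", action="store_true")
parser.add_argument("--num-actors", type=int, default=4)  # default value assumed

args = parser.parse_args([])                  # empty argv -> all defaults
smoke = parser.parse_args(["--smoke-test"])   # manual override in a notebook
```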
parser = argparse.ArgumentParser() parser.add_argument( "--address", type=str, default="auto", help="The address to use for Ray." ) parser.add_argument( "--smoke-test", action="store_true", help="Read a smaller dataset for quick testing purposes.", ) parser.add_argument( "--num-actors", type=int, de...
_____no_output_____
Apache-2.0
doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb
richardsliu/ray