Showing the most popular songs in the dataset
import graphlab
graphlab.canvas.set_target('ipynb')
song_data['song'].show()
len(song_data)
songrecommender/.ipynb_checkpoints/Song recommender-checkpoint.ipynb
anilcs13m/MachineLearning_Mastering
gpl-2.0
Count number of unique users in the dataset
users = song_data['user_id'].unique()
len(users)
Create a song recommender
train_data,test_data = song_data.random_split(.8,seed=0)
Simple popularity-based recommender
popularity_model = graphlab.popularity_recommender.create(train_data, user_id='user_id', item_id='song')
Use the popularity model to make some predictions. A popularity model makes the same prediction for all users, so it provides no personalization.
popularity_model.recommend(users=[users[0]])
popularity_model.recommend(users=[users[1]])
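The point that a popularity model gives every user the same list can be sketched without GraphLab; the interaction log below is toy data, purely illustrative:

```python
from collections import Counter

# Toy interaction log of (user_id, song) pairs -- illustrative only,
# not drawn from the tutorial's dataset.
plays = [('u1', 'A'), ('u2', 'A'), ('u3', 'A'),
         ('u1', 'B'), ('u2', 'B'), ('u3', 'C')]

# A popularity recommender just ranks songs by total play count...
counts = Counter(song for _, song in plays)

def recommend(user, n=2):
    # ...and ignores the user argument entirely, so every user
    # receives the same ranked list.
    return [song for song, _ in counts.most_common(n)]

print(recommend('u1'))  # same result for every user
print(recommend('u2'))
```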
Build a song recommender with personalization We now create a model that allows us to make personalized recommendations to each user.
personalized_model = graphlab.item_similarity_recommender.create(train_data, user_id='user_id', item_id='song')
Applying the personalized model to make song recommendations As you can see, different users get different recommendations now.
personalized_model.recommend(users=[users[0]])
personalized_model.recommend(users=[users[1]])
We can also apply the model to find similar songs to any song in the dataset
personalized_model.get_similar_items(['With Or Without You - U2'])
personalized_model.get_similar_items(['Chan Chan (Live) - Buena Vista Social Club'])
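GraphLab's item_similarity_recommender scores item pairs by (among other options) Jaccard similarity of their listener sets; the idea can be sketched directly with toy listener sets (illustrative data only):

```python
# Which users played each song (toy data, illustrative only)
listeners = {'A': {'u1', 'u2', 'u3'},
             'B': {'u1', 'u2'},
             'C': {'u3'}}

def jaccard(song1, song2):
    # Jaccard similarity: overlap of listener sets over their union
    a, b = listeners[song1], listeners[song2]
    return len(a & b) / len(a | b)

def most_similar(song):
    # Rank every other song by its similarity to `song`
    others = [s for s in listeners if s != song]
    return sorted(others, key=lambda s: jaccard(song, s), reverse=True)

print(most_similar('A'))
```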
Quantitative comparison between the models We now formally compare the popularity and the personalized models using precision-recall curves.
if graphlab.version[:3] >= "1.6":
    model_performance = graphlab.compare(test_data, [popularity_model, personalized_model], user_sample=0.05)
    graphlab.show_comparison(model_performance, [popularity_model, personalized_model])
else:
    %matplotlib inline
    model_performance = graphlab.recommender.util.compare_models(test_data, [popularity_model, personalized_model], user_sample=.05)
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth. Ignore the error message related to tensorflow-serving-api.
import os
import sys
import warnings

warnings.filterwarnings('ignore')
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

# If you are running this notebook in Colab, follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
    from google.colab import auth as google_auth
    google_auth.authenticate_user()
    !pip install witwidget --quiet
    !pip install tensorflow==1.15.2 --quiet
    !gcloud config set project $PROJECT_ID
elif "DL_PATH" in os.environ:
    !sudo pip install tabulate --quiet
blogs/explainable_ai/AI_Explanations_on_CAIP.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. AI Platform runs the code from this package. In this tutorial, AI Platform also saves the trained model that results from your job in the same bucket. You can then create an AI Platform model version based on this output in order to serve online predictions. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Cloud AI Platform services are available. Note that you may not use a Multi-Regional Storage bucket for training with AI Platform.
BUCKET_NAME = "michaelabel-gcp-training-ml"
REGION = "us-central1"
os.environ['BUCKET_NAME'] = BUCKET_NAME
os.environ['REGION'] = REGION
Run the following cell to create your Cloud Storage bucket if it does not already exist.
%%bash
exists=$(gsutil ls -d | grep -w gs://${BUCKET_NAME}/)
if [ -n "$exists" ]; then
    echo -e "Bucket gs://${BUCKET_NAME} already exists."
else
    echo "Creating a new GCS bucket."
    gsutil mb -l ${REGION} gs://${BUCKET_NAME}
    echo -e "\nHere are your current buckets:"
    gsutil ls
fi
Import libraries for creating model Import the libraries we'll be using in this tutorial. This tutorial has been tested with TensorFlow 1.15.2.
%tensorflow_version 1.x
import tensorflow as tf
import tensorflow.feature_column as fc
import pandas as pd
import numpy as np
import json
import time

# Should be 1.15.2
print(tf.__version__)
Downloading and preprocessing data In this section you'll download the data to train and evaluate your model from a public GCS bucket. The original data has been preprocessed from the public BigQuery dataset linked above.
%%bash
# Copy the data to your notebook instance
mkdir taxi_preproc
gsutil cp -r gs://cloud-training/bootcamps/serverlessml/taxi_preproc/*_xai.csv ./taxi_preproc
ls -l taxi_preproc
Read the data with Pandas We'll use Pandas to read the training and validation data into a DataFrame. We will only use the first 7 columns of the csv files for our models.
CSV_COLUMNS = ['fare_amount', 'dayofweek', 'hourofday', 'pickuplon',
               'pickuplat', 'dropofflon', 'dropofflat']
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
DTYPES = ['float32', 'str', 'int32', 'float32', 'float32', 'float32', 'float32']

def prepare_data(file_path):
    df = pd.read_csv(file_path, usecols=range(7), names=CSV_COLUMNS,
                     dtype=dict(zip(CSV_COLUMNS, DTYPES)), skiprows=1)
    labels = df['fare_amount']
    df = df.drop(columns=['fare_amount'])
    df['dayofweek'] = df['dayofweek'].map(dict(zip(DAYS, range(7)))).astype('float32')
    return df, labels

train_data, train_labels = prepare_data('./taxi_preproc/train_xai.csv')
valid_data, valid_labels = prepare_data('./taxi_preproc/valid_xai.csv')

# Preview the first 5 rows of training data
train_data.head()
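The day-of-week encoding inside prepare_data can be seen in isolation: map replaces each string label with that day's index, and any unmapped label would become NaN. A minimal sketch with made-up values:

```python
import pandas as pd

# Same mapping as in prepare_data: day labels -> numeric index
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']

s = pd.Series(['Tue', 'Sun', 'Sat'])  # illustrative values
encoded = s.map(dict(zip(DAYS, range(7)))).astype('float32')
print(encoded.tolist())
```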
Build, train, and evaluate our model with Keras. We'll use tf.keras to build our ML model that takes our features as input and predicts the fare amount. But first, we will do some feature engineering. We will use tf.feature_column and tf.keras.layers.Lambda to implement our feature engineering in the model graph, which simplifies our serving_input_fn later.
# Create functions to compute engineered features in later Lambda layers
def euclidean(params):
    lat1, lon1, lat2, lon2 = params
    londiff = lon2 - lon1
    latdiff = lat2 - lat1
    return tf.sqrt(londiff*londiff + latdiff*latdiff)

NUMERIC_COLS = ['pickuplon', 'pickuplat', 'dropofflon', 'dropofflat',
                'hourofday', 'dayofweek']

def transform(inputs):
    transformed = inputs.copy()
    transformed['euclidean'] = tf.keras.layers.Lambda(euclidean, name='euclidean')([
        inputs['pickuplat'], inputs['pickuplon'],
        inputs['dropofflat'], inputs['dropofflon']])
    feat_cols = {colname: fc.numeric_column(colname) for colname in NUMERIC_COLS}
    feat_cols['euclidean'] = fc.numeric_column('euclidean')
    print("BEFORE TRANSFORMATION")
    print("INPUTS:", inputs.keys())
    print("AFTER TRANSFORMATION")
    print("TRANSFORMED:", transformed.keys())
    print("FEATURES", feat_cols.keys())
    return transformed, feat_cols

def build_model():
    raw_inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
        for colname in NUMERIC_COLS
    }
    transformed, feat_cols = transform(raw_inputs)
    dense_inputs = tf.keras.layers.DenseFeatures(feat_cols.values(),
                                                 name='dense_input')(transformed)
    h1 = tf.keras.layers.Dense(64, activation='relu', name='h1')(dense_inputs)
    h2 = tf.keras.layers.Dense(32, activation='relu', name='h2')(h1)
    output = tf.keras.layers.Dense(1, activation='linear', name='output')(h2)
    model = tf.keras.models.Model(raw_inputs, output)
    return model

model = build_model()
model.summary()

# Compile the model and see a summary
optimizer = tf.keras.optimizers.Adam(0.001)
model.compile(loss='mean_squared_error', optimizer=optimizer,
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
tf.keras.utils.plot_model(model, to_file='model_plot.png', show_shapes=True,
                          show_layer_names=True, rankdir="TB")
Create an input data pipeline with tf.data. Per best practices, we will use tf.data to create our input data pipeline. Our data is all in an in-memory DataFrame, so we will use tf.data.Dataset.from_tensor_slices to create our pipeline.
def load_dataset(features, labels, mode):
    dataset = tf.data.Dataset.from_tensor_slices((
        {"dayofweek": features["dayofweek"],
         "hourofday": features["hourofday"],
         "pickuplat": features["pickuplat"],
         "pickuplon": features["pickuplon"],
         "dropofflat": features["dropofflat"],
         "dropofflon": features["dropofflon"]},
        labels))
    if mode == tf.estimator.ModeKeys.TRAIN:
        dataset = dataset.repeat().batch(256).shuffle(256*10)
    else:
        dataset = dataset.batch(256)
    return dataset.prefetch(1)

train_dataset = load_dataset(train_data, train_labels, tf.estimator.ModeKeys.TRAIN)
valid_dataset = load_dataset(valid_data, valid_labels, tf.estimator.ModeKeys.EVAL)
Train the model. Now we train the model. We will specify the number of epochs to train for and tell the model how many steps to expect per epoch.
tf.keras.backend.get_session().run(tf.tables_initializer(name='init_all_tables'))

steps_per_epoch = 426433 // 256

model.fit(train_dataset, steps_per_epoch=steps_per_epoch,
          validation_data=valid_dataset, epochs=10)

# Send test instances to model for prediction
predict = model.predict(valid_dataset, steps=1)
predict[:5]
Export the model as a TF 1 SavedModel In order to deploy our model in a format compatible with AI Explanations, we'll follow the steps below to convert our Keras model to a TF Estimator, and then use the export_saved_model method to generate the SavedModel and save it in GCS.
# Convert our Keras model to an estimator
keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='export')
print(model.input)

# We need this serving input function to export our model in the next cell
serving_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(model.input)

export_path = keras_estimator.export_saved_model(
    'gs://' + BUCKET_NAME + '/explanations',
    serving_input_receiver_fn=serving_fn
).decode('utf-8')
Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. We'll use this information when we deploy our model to AI Explanations in the next section.
!saved_model_cli show --dir $export_path --all
Deploy the model to AI Explanations In order to deploy the model to Explanations, we need to generate an explanations_metadata.json file and upload this to the Cloud Storage bucket with our SavedModel. Then we'll deploy the model using gcloud. Prepare explanation metadata We need to tell AI Explanations the names of the input and output tensors our model is expecting, which we print below. The value for input_baselines tells the explanations service what the baseline input should be for our model. Here we're using the median for all of our input features. That means the baseline prediction for this model will be the fare our model predicts for the median of each feature in our dataset.
# Print the names of our tensors
print('Model input tensors: ', model.input)
print('Model output tensor: ', model.output.name)

baselines_med = train_data.median().values.tolist()
baselines_mode = train_data.mode().values.tolist()
print(baselines_med)
print(baselines_mode)

explanation_metadata = {
    "inputs": {
        "dayofweek": {
            "input_tensor_name": "dayofweek:0",
            "input_baselines": [baselines_mode[0][0]]  # Thursday
        },
        "hourofday": {
            "input_tensor_name": "hourofday:0",
            "input_baselines": [baselines_mode[0][1]]  # 8pm
        },
        "dropofflon": {
            "input_tensor_name": "dropofflon:0",
            "input_baselines": [baselines_med[4]]
        },
        "dropofflat": {
            "input_tensor_name": "dropofflat:0",
            "input_baselines": [baselines_med[5]]
        },
        "pickuplon": {
            "input_tensor_name": "pickuplon:0",
            "input_baselines": [baselines_med[2]]
        },
        "pickuplat": {
            "input_tensor_name": "pickuplat:0",
            "input_baselines": [baselines_med[3]]
        },
    },
    "outputs": {
        "dense": {
            "output_tensor_name": "output/BiasAdd:0"
        }
    },
    "framework": "tensorflow"
}

print(explanation_metadata)
Since this is a regression model (predicting a numerical value), the baseline prediction will be the same for every example we send to the model. If this were instead a classification model, each class would have a different baseline prediction.
# Write the json to a local file
with open('explanation_metadata.json', 'w') as output_file:
    json.dump(explanation_metadata, output_file)

!gsutil cp explanation_metadata.json $export_path
Create the model. Now we will create our model on Cloud AI Platform if it does not already exist.
MODEL = 'taxifare_explain'
os.environ["MODEL"] = MODEL

# The bash check below runs in its own cell (%%bash must start a cell):
%%bash
exists=$(gcloud ai-platform models list | grep ${MODEL})
if [ -n "$exists" ]; then
    echo -e "Model ${MODEL} already exists."
else
    echo "Creating a new model."
    gcloud ai-platform models create ${MODEL}
fi
Create the model version Creating the version will take ~5-10 minutes. Note that your first deploy may take longer.
# Each time you create a version the name should be unique
import datetime
now = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
VERSION_IG = 'v_IG_{}'.format(now)
VERSION_SHAP = 'v_SHAP_{}'.format(now)

# Create the version with gcloud
!gcloud beta ai-platform versions create $VERSION_IG \
  --model $MODEL \
  --origin $export_path \
  --runtime-version 1.15 \
  --framework TENSORFLOW \
  --python-version 3.7 \
  --machine-type n1-standard-4 \
  --explanation-method 'integrated-gradients' \
  --num-integral-steps 25

!gcloud beta ai-platform versions create $VERSION_SHAP \
  --model $MODEL \
  --origin $export_path \
  --runtime-version 1.15 \
  --framework TENSORFLOW \
  --python-version 3.7 \
  --machine-type n1-standard-4 \
  --explanation-method 'sampled-shapley' \
  --num-paths 50

# Make sure the model deployed correctly. State should be `READY` in the following log
!gcloud ai-platform versions describe $VERSION_IG --model $MODEL
!echo "---"
!gcloud ai-platform versions describe $VERSION_SHAP --model $MODEL
Getting predictions and explanations on the deployed model. Now that your model is deployed, you can use the AI Platform Prediction API to get feature attributions. We'll pass it a single test example here and see which features were most important in the model's prediction. Here we'll use gcloud to call our deployed model. Format our request for gcloud: to use gcloud to make our AI Explanations request, we need to write the JSON to a file. Our example here is for a ride from the Google office in downtown Manhattan to LaGuardia Airport at 5pm on a Tuesday afternoon. Note that we had to write our day of the week as "3" instead of "Tue" since we encoded the days of the week outside of our model and serving input function.
# Format data for prediction to our model
!rm taxi-data.txt
!touch taxi-data.txt

prediction_json = {"dayofweek": "3", "hourofday": "17", "pickuplon": "-74.0026",
                   "pickuplat": "40.7410", "dropofflat": "40.7790",
                   "dropofflon": "-73.8772"}

with open('taxi-data.txt', 'a') as outfile:
    json.dump(prediction_json, outfile)

# Preview the contents of the data file
!cat taxi-data.txt
Making the explain request. Now we make the explanation requests, for both integrated gradients and SHAP, using the prediction JSON from above.
resp_obj = !gcloud beta ai-platform explain --model $MODEL --version $VERSION_IG --json-instances='taxi-data.txt'
response_IG = json.loads(resp_obj.s)
resp_obj

resp_obj = !gcloud beta ai-platform explain --model $MODEL --version $VERSION_SHAP --json-instances='taxi-data.txt'
response_SHAP = json.loads(resp_obj.s)
resp_obj
Understanding the explanations response. First, let's look at the difference between the baseline prediction and the predicted taxi fare for our example.
explanations_IG = response_IG['explanations'][0]['attributions_by_label'][0]
explanations_SHAP = response_SHAP['explanations'][0]['attributions_by_label'][0]

predicted = round(explanations_SHAP['example_score'], 2)
baseline = round(explanations_SHAP['baseline_score'], 2)

print('Baseline taxi fare: ' + str(baseline) + ' dollars')
print('Predicted taxi fare: ' + str(predicted) + ' dollars')
Next, let's look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed the model's prediction up by that amount, and vice versa for negative attribution values. Which features seem to be the most important? It looks like the location features matter most.
from tabulate import tabulate

feature_names = valid_data.columns.tolist()
attributions_IG = explanations_IG['attributions']
attributions_SHAP = explanations_SHAP['attributions']

rows = []
for feat in feature_names:
    rows.append([feat, prediction_json[feat],
                 attributions_IG[feat], attributions_SHAP[feat]])

print(tabulate(rows, headers=['Feature name', 'Feature value',
                              'Attribution value (IG)', 'Attribution value (SHAP)']))
Where the META MODELs are as follows: DNN - a deep neural network classifier [16, 32, 32, 16, 2] from the Keras toolkit, with a cross-entropy loss function. NBC - a Naive Bayes classifier. SVMC - a support vector machine classifier. LOGC - a logistic classifier. Modelling: the meta models run on Linux and Windows under Python 2.7 and 3.5 (Mac untested). Download (following the readme setup) the entire directory tree from the repository here: https://github.com/kellerberrin/OSM-QSAR. Detailed instructions will also be posted so that the withheld molecules can be tested against the optimal model with minimum hassle. The pre-trained DRAGON classifier "ION_DRAGON_625.krs" must be in the model directories. In addition, for the "osm" model, the pre-trained meta model "ION_META_40.krs" should be in the "osm" model directory. The software should give sensible error messages if they are missing. Make sure you have set up and activated the Python Anaconda environment as described in "readme.md". For the optimal OSM meta model (--help for flag descriptions) the following command was used (the clean flag is optional; it removes previous results from the model directory): $python OSM_QSAR.py --classify osm --load ION_META --epoch 40 --train 0 [--clean]. For the svmc SKLearn meta model the following command was used: $python OSM_QSAR.py --classify osm_sk [--clean]. We visualize the training set probability maps by normalizing them to the unit interval [0, 1] and sorting them in descending order.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from pylab import *
from sklearn.preprocessing import minmax_scale

def sort_map(column):
    array = minmax_scale(train_results[column])
    return array[np.argsort(-array)]

scale = 1.0
fig = plt.figure(num=None, figsize=(8 * scale, 6 * scale), dpi=80,
                 facecolor='w', edgecolor='k')
for map in all_active:
    plt.plot(sort_map(map), label=map)
xlabel("molecules")
ylabel("normalized probability")
title(" Training Set [ACTIVE] Probability Maps")
legend(loc=1)  # upper right corner

def mol_label_list(data_frame):
    # Function to produce rdkit mols and associated molecular labels
    id = data_frame["ID"].tolist()
    klass = data_frame["ACTUAL_500"].tolist()
    potency = data_frame["EC50"].tolist()
    ion_activity = data_frame["ION_ACTIVITY"].tolist()
    map_prob = data_frame["M5_500_250"].tolist()
    labels = []
    for idx in range(len(id)):
        labels.append("{} {} {} {} {:5.0f} ({:5.4f})".format(
            idx+1, id[idx], klass[idx][0], ion_activity[idx][0],
            potency[idx]*1000, map_prob[idx]))
    smiles = data_frame["SMILE"].tolist()
    mols = [Chem.MolFromSmiles(smile) for smile in smiles]
    return mols, labels

from rdkit import Chem
from rdkit.Chem import Draw
from rdkit.Chem.Draw import IPythonConsole
from rdkit import rdBase
IPythonConsole.ipython_useSVG = True
Notebooks/OSM_Results/OSM Prelim Results.ipynb
kellerberrin/OSM-QSAR
mit
ION_ACTIVITY [ACTIVE] in EC50_200
ion_active = ec50_200_active.loc[train_results["ION_ACTIVITY"] == "ACTIVE"].sort_values("EC50")
mols, labels = mol_label_list(ion_active)
Draw.MolsToGridImage(mols, legends=labels, molsPerRow=4)
ION_ACTIVITY [ACTIVE] exemplar molecules that were added to the training set when moving from EC50_200 to EC50_500. Commentary: these molecules have the same Triazole arm we noticed in the previous notebook when trying to classify the molecular ION_ACTIVITY using D840_ACTIVE (DRAGON). This structure is also well represented in the test molecules. ION_ACTIVITY [ACTIVE] exemplar molecules that were added to the training set when moving from EC50_500 to EC50_1000: the results of the EC50_500 classification of the test molecules.
sorted_results = test_results.sort_values("M5_500_250", ascending=False)  # avoid shadowing the builtin `sorted`
mols, labels = mol_label_list(sorted_results)
Draw.MolsToGridImage(mols, legends=labels, molsPerRow=4)
Exercise 2: Write a function that, given a character and a number of rows, prints the corresponding right triangle, isosceles triangle, and other triangle shapes.
def printzj(fuhao, k):
    # Right triangle
    for i in range(1, k+1):
        print(fuhao*i)

def printdy(fuhao, k):
    # Isosceles triangle (the original's even/odd branches were identical, so they are merged)
    for i in range(1, k+1):
        if i == 1:
            print(' '*(k-1) + fuhao)
        else:
            print(' '*(k-i) + fuhao*(i-1)*2 + fuhao)

def printdyzj(fuhao, k):
    # Isosceles right triangle
    for i in range(1, k+1):
        print(fuhao*i)

k = int(input('Enter the number of rows: '))
fuhao = '.'
printzj(fuhao, k)
print('-'*10 + 'divider' + '-'*10)
printdy(fuhao, k)
print('-'*10 + 'divider' + '-'*10)
printdyzj(fuhao, k)
chapter2/homework/computer/4-19/201611680283.ipynb
zhouqifanbdh/liupengyuan.github.io
mit
Exercise 3: Reimplement the function from Task 4 that converts singular English nouns to plural, covering as many cases as possible.
def n_change(a):
    k = len(a)
    if a[k-1:k] in ['o', 's', 'x']:
        print(a, 'es', sep='')
    elif a[k-2:k] in ['ch', 'sh']:
        print(a, 'es', sep='')
    elif a[k-1:k] == 'y' and a[k-2:k-1] in ['a', 'e', 'i', 'o', 'u']:
        print(a, 's', sep='')
    elif a[k-1:k] == 'y':
        print(a[:-1], 'ies', sep='')
    else:
        print(a, 's', sep='')

a = input('Enter a word: ')
n_change(a)
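A variant that returns the plural instead of printing it is easier to test; the rules mirror the function above (this helper name is illustrative, not from the original):

```python
def pluralize(word):
    # Same rules as n_change above, but returning the result
    vowels = 'aeiou'
    if word.endswith(('o', 's', 'x', 'ch', 'sh')):
        return word + 'es'
    if word.endswith('y') and word[-2:-1] not in vowels:
        # consonant + y -> ies
        return word[:-1] + 'ies'
    return word + 's'

print(pluralize('box'))   # boxes
print(pluralize('city'))  # cities
print(pluralize('boy'))   # boys
```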
The data that will form the basis of the DataFrame is divided by source into internal and external forms. Python internal sources — List: row-based. The first example enters values into the DataFrame row by row, as a list. Readers familiar with SQL may be reminded of that language's INSERT statement; the same operation could be expressed in a similar form in SQL. First we create a list of tuples and assign it to a variable named email_list_lst.
email_list_lst = [('Omar', 'Bayramov', 'omarbayramov@hotmail.com', 1),
                  ('Ali', 'Aliyev', 'alialiyev@example.com', 0),
                  ('Dmitry', 'Vladimirov', 'v.dmitry@koala.kl', 1),
                  ('Donald', 'Trump', 'grabthat@pussycatdolls.com', 1),
                  ('Rashid', 'Maniyev', 'rashid.maniyev@exponential.az', 1)]
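The SQL analogy mentioned above can be made concrete with Python's built-in sqlite3; the emails table and its columns here are illustrative, mirroring the first tuple of the list:

```python
import sqlite3

demo_con = sqlite3.connect(':memory:')
demo_con.execute("CREATE TABLE emails "
                 "(f_name TEXT, l_name TEXT, email_adrs TEXT, a_status INTEGER)")
# The SQL counterpart of appending one row of the tuple list above
demo_con.execute("INSERT INTO emails VALUES "
                 "('Omar', 'Bayramov', 'omarbayramov@hotmail.com', 1)")
rows = demo_con.execute("SELECT * FROM emails").fetchall()
print(rows)
demo_con.close()
```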
DataScience_Tutorials/AZ/Pandas_DF_Creation.ipynb
limpapud/data_science_tutorials_projects
mit
Next, we assign a list of column names to the variable email_list_lst_cln.
email_list_lst_cln=['f_name','l_name','email_adrs','a_status',]
Finally, we pass the email_list_lst and email_list_lst_cln values to the DataFrame "from_records" function to build a table from the email_list_lst values with the email_list_lst_cln columns, and then display it.
import pandas as pd

df = pd.DataFrame.from_records(email_list_lst, columns=email_list_lst_cln)
df
List: column-based. Unlike the previous example, this time we use an approach that accepts the data column by column and feeds it to the table. For this we use a list of tuples, where each tuple consists of a column name and the list of values in that column.
email_list_lst = [('f_name', ['Omar', 'Ali', 'Dmitry', 'Donald', 'Rashid']),
                  ('l_name', ['Bayramov', 'Aliyev', 'Vladimirov', 'Trump', 'Maniyev']),
                  ('email_adrs', ['omarbayramov@hotmail.com', 'alialiyev@example.com',
                                  'v.dmitry@koala.kl', 'grabthat@pussycatdolls.com',
                                  'rashid.maniyev@exponential.az']),
                  ('a_status', [1, 0, 1, 1, 1])]

# Note: DataFrame.from_items was deprecated in pandas 0.23 and removed in 1.0;
# on newer pandas use pd.DataFrame.from_dict(dict(email_list_lst)) instead.
df = pd.DataFrame.from_items(email_list_lst)
df
Dictionary: record-based. Next we come to the method I prefer most (of course, the data often doesn't arrive in the form we want, and it doesn't care what we prefer). The reason I prefer this approach is very simple: with the methods described before and after it, data lost during acquisition or cleaning can silently shift a column or row instead of producing a "NaN" value, which either makes the analysis harder or, if the data gets mixed up, renders it meaningless. (For an example of this problem, see my data-acquisition and analysis work from 2017/08.) Here, however, each value explicitly states which column it belongs to, and where no value is given it is automatically recorded as "NaN". As a result, no extra routine cleaning is needed, and missing values can either be dropped without further investigation or filled in by other methods. Let's take a closer look at the example:
email_list = [{'f_name': 'Omar', 'l_name': 'Bayramov',
               'email_adrs': 'omarbayramov@hotmail.com', 'a_status': 1},
              {'f_name': 'Ali', 'l_name': 'Aliyev',
               'email_adrs': 'alialiyev@example.com', 'a_status': 0},
              {'f_name': 'Dmitry', 'l_name': 'Vladimirov',
               'email_adrs': 'v.dmitry@koala.kl', 'a_status': 1},
              {'f_name': 'Donald', 'l_name': 'Trump',
               'email_adrs': 'grabthat@pussycatdolls.com', 'a_status': 1},
              {'f_name': 'Rashid', 'l_name': 'Maniyev',
               'email_adrs': 'rashid.maniyev@exponential.az', 'a_status': 1}]

df = pd.DataFrame(email_list)
df
As you can see here, although the data made it into the DataFrame, the columns are ordered alphabetically rather than in the order we wanted. To fix this, we must either specify the column names and their order up front via the columns parameter when creating the DataFrame, as above, or reorder the columns afterwards with the command below.
df = df[['f_name', 'l_name', 'email_adrs', 'a_status']]
df
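The up-front alternative mentioned above passes the desired order via the columns parameter when the DataFrame is created; a sketch with a single record for brevity (df_ordered is an illustrative name):

```python
import pandas as pd

email_list = [{'f_name': 'Omar', 'l_name': 'Bayramov',
               'email_adrs': 'omarbayramov@hotmail.com', 'a_status': 1}]

# columns= both selects and orders the columns at construction time
df_ordered = pd.DataFrame(email_list,
                          columns=['f_name', 'l_name', 'email_adrs', 'a_status'])
print(df_ordered.columns.tolist())
```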
Dictionary: column-based. This example closely resembles the "List: column-based" approach covered above. The difference is that this time the values are given as lists keyed by dictionary keys.
email_list_dct = {'f_name': ['Omar', 'Ali', 'Dmitry', 'Donald', 'Rashid'],
                  'l_name': ['Bayramov', 'Aliyev', 'Vladimirov', 'Trump', 'Maniyev'],
                  'email_adrs': ['omarbayramov@hotmail.com', 'alialiyev@example.com',
                                 'v.dmitry@koala.kl', 'grabthat@pussycatdolls.com',
                                 'rashid.maniyev@exponential.az'],
                  'a_status': [1, 0, 1, 1, 1]}
Let's create the table and reorder the columns:
df = pd.DataFrame.from_dict(email_list_dct)
df = df[['f_name', 'l_name', 'email_adrs', 'a_status']]
df
Sources external to Python: besides the standard, built-in Python data structures, Pandas can build tables from data in the file system, databases, and other sources. Excel file: to create the table, it is enough to pass pandas' read_excel function a file-system path pointing to the Excel file, or a URL when the file is on the internet. If the file has several sheets, or you specifically need the data on a particular sheet, pass the sheet name to the sheet_name parameter to turn that data into a table.
df = pd.read_excel('https://raw.githubusercontent.com/limpapud/datasets/master/Tutorial_datasets/excel_to_dataframe.xlsx',
                   sheet_name='data_for_ttrl')
df
CSV: as with the function above, first pass the path to the .csv file, then pass the character that separates values within a row to the delimiter parameter. If omitted, a comma is assumed by default.
df = pd.read_csv('https://raw.githubusercontent.com/limpapud/datasets/master/Tutorial_datasets/csv_to_dataframe.csv',
                 delimiter=',')
df
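The delimiter parameter matters when the file is not comma-separated; a quick check with an in-memory semicolon-delimited string (the data is illustrative, and df_semi is a throwaway name):

```python
import io
import pandas as pd

# A small semicolon-delimited "file" held in memory
csv_text = "f_name;l_name\nOmar;Bayramov\nAli;Aliyev\n"
df_semi = pd.read_csv(io.StringIO(csv_text), delimiter=';')
print(df_semi.shape)
```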
JSON: to read data from a JSON file, a URL or a file-system path to the file is required. An example JSON file is referenced below. If you look closely, you'll see that the JSON file is no different from the value definition we used in the "Dictionary: record-based" DataFrame creation method.
df = pd.read_json('https://raw.githubusercontent.com/limpapud/datasets/master/Tutorial_datasets/json_to_dataframe.json')
df = df[['f_name', 'l_name', 'email_adrs', 'a_status']]
df
SQL: Finally, let's query data from an SQLite file database and load it into a DataFrame. First, let's import the modules required for this task.
import sqlalchemy
from sqlalchemy import create_engine
import sqlite3
DataScience_Tutorials/AZ/Pandas_DF_Creation.ipynb
limpapud/data_science_tutorials_projects
mit
Let's create an engine for querying and point it at the database file.
engine = create_engine('sqlite:///C:/Users/omarbayramov/Documents/GitHub/datasets/Tutorial_datasets/sql_to_dataframe.db')
DataScience_Tutorials/AZ/Pandas_DF_Creation.ipynb
limpapud/data_science_tutorials_projects
mit
Let's open a connection and query all rows from the emails table in the database.
con = engine.connect()
a = con.execute('SELECT * FROM emails')
DataScience_Tutorials/AZ/Pandas_DF_Creation.ipynb
limpapud/data_science_tutorials_projects
mit
After the query runs, let's "read" the rows with the fetchall function, assign them to the data variable, and finally close the database connection.
data = a.fetchall()
a.close()
data
DataScience_Tutorials/AZ/Pandas_DF_Creation.ipynb
limpapud/data_science_tutorials_projects
mit
Does the structure of the returned data look familiar? If you look closely, you will recognize the row-based list structure we met at the beginning. All that remains is to build the DataFrame using the procedure we already know:
df = pd.DataFrame(data, columns=['f_name', 'l_name', 'email_adrs', 'a_status'])
df
DataScience_Tutorials/AZ/Pandas_DF_Creation.ipynb
limpapud/data_science_tutorials_projects
mit
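pandas can also collapse the connect/execute/fetchall/DataFrame steps into a single call with pd.read_sql. A minimal self-contained sketch using an in-memory SQLite database (the table mirrors the tutorial's emails table; the sample row is made up):

```python
import sqlite3
import pandas as pd

# Build a tiny in-memory database standing in for sql_to_dataframe.db
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emails (f_name TEXT, l_name TEXT, email_adrs TEXT, a_status TEXT)")
con.execute("INSERT INTO emails VALUES ('Ada', 'Lovelace', '[email protected]', 'active')")
con.commit()

# read_sql runs the query, fetches the rows and builds the DataFrame in one step
df_sql = pd.read_sql("SELECT * FROM emails", con)
con.close()
print(df_sql.columns.tolist())
```

read_sql accepts any DBAPI2 connection (or an SQLAlchemy engine), so the same call works against the engine created above.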
Next, let's add some particles. We'll work in units in which $G=1$ (see below on how to set $G$ to another value). The first particle we add is the central object. We place it at rest at the origin and use the convention of setting the mass of the central object $M_*$ to 1:
rebound.add(m=1.)
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
Let's look at the particle we just added:
print(rebound.particles[0])
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
The output tells us that the mass of the particle is 1 and all coordinates are zero. The next particle we're adding is a planet. We'll use Cartesian coordinates to initialize it. Any coordinate that we do not specify in the rebound.add() command is assumed to be 0. We place our planet on a circular orbit at $a=1$ and give it a mass of $10^{-3}$ times that of the central star.
rebound.add(m=1e-3, x=1., vy=1.)
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
Instead of initializing the particle with Cartesian coordinates, we can also use orbital elements. By default, REBOUND will use Jacobi coordinates, i.e. REBOUND assumes the orbital elements describe the particle's orbit around the centre of mass of all particles added previously. Our second planet will have a mass of $10^{-3}$, a semi-major axis of $a=2$ and an eccentricity of $e=0.1$:
rebound.add(m=1e-3, a=2., e=0.1)
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
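To see what $a=2$, $e=0.1$ implies geometrically, here is an illustrative sketch of the orbit's key quantities. It takes the gravitational parameter $\mu \approx GM$ with $G=1$, $M=1$ (planet masses neglected; the exact mass factor REBOUND uses depends on the Jacobi convention):

```python
import numpy as np

mu = 1.0                  # G*M with G = 1, M = 1 (approximation: planet masses ignored)
a_orb, e = 2.0, 0.1

r_peri = a_orb * (1.0 - e)                               # closest approach
r_apo = a_orb * (1.0 + e)                                # farthest point
v_peri = np.sqrt(mu * (1.0 + e) / (a_orb * (1.0 - e)))   # speed at pericentre
T = 2.0 * np.pi * np.sqrt(a_orb**3 / mu)                 # period from Kepler's third law

# vis-viva consistency check: v^2 = mu * (2/r - 1/a)
print(np.isclose(v_peri**2, mu * (2.0 / r_peri - 1.0 / a_orb)))
```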
Now that we have added two more particles, let's have a quick look at what's "in REBOUND" by using
rebound.status()
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
Next, let's tell REBOUND which integrator (WHFast, of course!) and timestep we want to use. In our system of units, an orbit at $a=1$ has the orbital period $T_{\rm orb} = 2\pi \sqrt{\frac{a^3}{GM}} = 2\pi$. So a reasonable timestep to start with would be $dt=10^{-3}$.
rebound.integrator = "whfast"
rebound.dt = 1e-3
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
whfast refers to the 2nd order symplectic integrator WHFast described by Rein & Tamayo (2015). By default, 11th order symplectic correctors are used. We are now ready to start the integration. Let's integrate for one orbit, i.e. until $t=2\pi$. Because we use a fixed timestep, rebound would have to change it to integrate exactly up to $2\pi$. Changing the timestep in a symplectic integrator is a bad idea, so we'll tell rebound not to worry about the exact_finish_time.
rebound.integrate(6.28318530717959, exact_finish_time=0) # 6.28318530717959 is 2*pi
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
Once again, let's look at what REBOUND's status is
rebound.status()
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
As you can see the time has advanced to $t=2\pi$ and the positions and velocities of all particles have changed. If you want to post-process the particle data, you can access it in the following way:
particles = rebound.particles
for p in particles:
    print(p.x, p.y, p.vx, p.vy)
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
The particles object is an array of pointers to the particles. This means you can call particles = rebound.particles before the integration and the contents of particles will be updated after the integration. If you add or remove particles, you'll need to call rebound.particles again. Visualization with matplotlib Instead of just printing boring numbers at the end of the simulation, let's visualize the orbit using matplotlib (you'll need to install numpy and matplotlib to run this example, see above). We'll use the same particles as above. As the particles are already in memory, we don't need to add them again. Let us plot the position of the inner planet at 100 steps during its orbit. First, we'll import numpy and create an array of times for which we want to have an output (here, from $T_{\rm orb}$ to $2 T_{\rm orb}$; we have already advanced the simulation time to $t=2\pi$).
import numpy as np
torb = 2.*np.pi
Noutputs = 100
times = np.linspace(torb, 2.*torb, Noutputs)
x = np.zeros(Noutputs)
y = np.zeros(Noutputs)
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
Next, we'll step through the simulation. Rebound will integrate up to each time. Depending on the timestep, it might overshoot slightly. If you want to have the outputs at exactly the time you specify, you can set the exact_finish_time=1 flag in the integrate function. However, note that changing the timestep in a symplectic integrator could have negative impacts on its properties.
for i, time in enumerate(times):
    rebound.integrate(time, exact_finish_time=0)
    x[i] = particles[1].x
    y[i] = particles[1].y
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
Let's plot the orbit using matplotlib.
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-2,2])
ax.set_ylim([-2,2])
plt.plot(x, y);
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
Hurray! It worked. The orbit looks like it should, it's an almost perfect circle. There are small perturbations though, induced by the outer planet. Let's integrate a bit longer to see them.
Noutputs = 1000
times = np.linspace(2.*torb, 20.*torb, Noutputs)
x = np.zeros(Noutputs)
y = np.zeros(Noutputs)
for i, time in enumerate(times):
    rebound.integrate(time, exact_finish_time=0)
    x[i] = particles[1].x
    y[i] = particles[1].y
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-2,2])
ax.set_ylim([-2,2])
plt.plot(x, y);
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
Oops! This doesn't look like what we expected to see (small perturbations to an almost circular orbit). What you see here is the barycenter slowly drifting. Some integration packages require that the simulation be carried out in a particular frame, but WHFast provides extra flexibility by working in any inertial frame. If you recall how we added the particles, the Sun was at the origin and at rest, and then we added the planets. This means that the center of mass, or barycenter, has a small velocity, which results in the observed drift. There are multiple ways to get the plot we want: 1. We can calculate only relative positions. 2. We can add the particles in the barycentric frame. 3. We can let REBOUND transform the particle coordinates to the barycentric frame for us. Let's use the third option (next time you run a simulation, you probably want to do that at the beginning).
rebound.move_to_com()
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
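Option 1 from the list above (relative positions) can be sketched without touching the simulation state: since the star and the planet share the same barycentric drift, subtracting the star's position from the planet's removes it. An illustrative standalone example with a made-up linear drift:

```python
import numpy as np

t = np.linspace(0.0, 4.0 * np.pi, 200)
drift_x, drift_y = 0.01 * t, 0.005 * t      # made-up drift of the barycentre

star_x, star_y = drift_x, drift_y           # both bodies carry the drift
planet_x = np.cos(t) + drift_x
planet_y = np.sin(t) + drift_y

rel_x = planet_x - star_x                   # relative coordinates: drift cancels
rel_y = planet_y - star_y
radius = np.hypot(rel_x, rel_y)
print(radius.min(), radius.max())           # a clean circle of radius 1
```

In a real run you would use particles[1].x - particles[0].x at each output instead of the synthetic arrays here.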
So let's try this again. Let's integrate for a bit longer this time.
times = np.linspace(20.*torb, 1000.*torb, Noutputs)
for i, time in enumerate(times):
    rebound.integrate(time, exact_finish_time=0)
    x[i] = particles[1].x
    y[i] = particles[1].y
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-1.5,1.5])
ax.set_ylim([-1.5,1.5])
plt.scatter(x, y, marker='.', color='k', s=1.2);
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
That looks much more like it. Let us finally plot the orbital elements as a function of time.
times = np.linspace(1000.*torb, 9000.*torb, Noutputs)
a = np.zeros(Noutputs)
e = np.zeros(Noutputs)
for i, time in enumerate(times):
    rebound.integrate(time, exact_finish_time=0)
    orbits = rebound.calculate_orbits()
    a[i] = orbits[1].a
    e[i] = orbits[1].e
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(121)
ax.set_xlabel("time")
ax.set_ylabel("semi-major axis")
plt.plot(times, a);
ax = plt.subplot(122)
ax.set_xlabel("time")
ax.set_ylabel("eccentricity")
plt.plot(times, e);
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
The semi-major axis stays almost constant, whereas the eccentricity undergoes an oscillation. Thus, one might conclude the planets interact only secularly, i.e. there are no large resonant terms. Advanced settings of WHFast You can set various attributes to change the default behaviour of WHFast depending on the problem you're interested in. Symplectic correctors You can change the order of the symplectic correctors in WHFast. The default is 11. If you simply want to turn off symplectic correctors altogether, you can just choose the whfast-nocor integrator:
rebound.integrator = "whfast-nocor"
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
You can also set the order of the symplectic corrector explicitly:
rebound.integrator = "whfast"
rebound.integrator_whfast_corrector = 7
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
You can choose between 0 (no correctors), 3, 5, 7 and 11 (default). Keeping particle data synchronized By default, REBOUND only synchronizes particle data at the end of the integration, i.e. if you call rebound.integrate(100.), it will assume you don't need to access the particle data between now and $t=100$. There are a few instances where you might want to change that. One example is MEGNO. Whenever you calculate MEGNO or the Lyapunov exponent, REBOUND needs to have the velocities and positions synchronized at the end of the timestep (to calculate the dot product between them). Thus, if you initialize MEGNO with
rebound.init_megno(1e-16)
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
you implicitly force REBOUND to keep the particle coordinates synchronized. This will slow it down and might reduce its accuracy. You can also manually force REBOUND to keep the particles synchronized at the end of every timestep by integrating with the synchronize_each_timestep flag set to 1:
rebound.integrate(10., synchronize_each_timestep=1)
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
In either case, you can change particle data between subsequent calls to integrate:
rebound.integrate(20.)
rebound.particles[0].m = 1.1  # Sudden increase of particle's mass
rebound.integrate(30.)
python_tutorials/WHFast.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
The model can be initialized using an iterable of relations, where a relation is simply a pair of nodes -
model = PoincareModel(train_data=[('node.1', 'node.2'), ('node.2', 'node.3')])
docs/notebooks/Poincare Tutorial.ipynb
mattilyra/gensim
lgpl-2.1
The model can also be initialized from a csv-like file containing one relation per line. The module provides a convenience class PoincareRelations to do so.
relations = PoincareRelations(file_path=wordnet_mammal_file, delimiter='\t')
model = PoincareModel(train_data=relations)
docs/notebooks/Poincare Tutorial.ipynb
mattilyra/gensim
lgpl-2.1
Note that the above only initializes the model and does not begin training. To train the model -
model = PoincareModel(train_data=relations, size=2, burn_in=0)
model.train(epochs=1, print_every=500)
docs/notebooks/Poincare Tutorial.ipynb
mattilyra/gensim
lgpl-2.1
The same model can be trained further on more epochs in case the user decides that the model hasn't converged yet.
model.train(epochs=1, print_every=500)
docs/notebooks/Poincare Tutorial.ipynb
mattilyra/gensim
lgpl-2.1
The model can be saved and loaded using two different methods -
# Saves the entire PoincareModel instance; the loaded model can be trained further
model.save('/tmp/test_model')
PoincareModel.load('/tmp/test_model')

# Saves only the vectors from the PoincareModel instance, in the commonly used word2vec format
model.kv.save_word2vec_format('/tmp/test_vectors')
PoincareKeyedVectors.load_word2vec_format('/tmp/test_vectors')
docs/notebooks/Poincare Tutorial.ipynb
mattilyra/gensim
lgpl-2.1
3. What the embedding can be used for
# Load an example model
models_directory = os.path.join(poincare_directory, 'models')
test_model_path = os.path.join(models_directory, 'gensim_model_batch_size_10_burn_in_0_epochs_50_neg_20_dim_50')
model = PoincareModel.load(test_model_path)
docs/notebooks/Poincare Tutorial.ipynb
mattilyra/gensim
lgpl-2.1
The learnt representations can be used to perform various kinds of useful operations. This section is split into two - some simple operations that are directly mentioned in the paper, as well as some experimental operations that are hinted at, and might require more work to refine. The models that are used in this section have been trained on the transitive closure of the WordNet hypernym graph. The transitive closure is the list of all the direct and indirect hypernyms in the WordNet graph. An example of a direct hypernym is (seat.n.03, furniture.n.01) while an example of an indirect hypernym is (seat.n.03, physical_entity.n.01). 3.1 Simple operations All the following operations are based simply on the notion of distance between two nodes in hyperbolic space.
# Distance between any two nodes
model.kv.distance('plant.n.02', 'tree.n.01')
model.kv.distance('plant.n.02', 'animal.n.01')

# Nodes most similar to a given input node
model.kv.most_similar('electricity.n.01')
model.kv.most_similar('man.n.01')

# Nodes closer to node 1 than node 2 is from node 1
model.kv.nodes_closer_than('dog.n.01', 'carnivore.n.01')

# Rank of distance of node 2 from node 1 in relation to distances of all nodes from node 1
model.kv.rank('dog.n.01', 'carnivore.n.01')

# Finding Poincare distance between input vectors
vector_1 = np.random.uniform(size=(100,))
vector_2 = np.random.uniform(size=(100,))
vectors_multiple = np.random.uniform(size=(5, 100))

# Distance between vector_1 and vector_2
print(PoincareKeyedVectors.vector_distance(vector_1, vector_2))

# Distance between vector_1 and each vector in vectors_multiple
print(PoincareKeyedVectors.vector_distance_batch(vector_1, vectors_multiple))
docs/notebooks/Poincare Tutorial.ipynb
mattilyra/gensim
lgpl-2.1
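Under the hood, these operations use the Poincare ball metric. A self-contained sketch of the distance formula, $d(u,v) = \mathrm{arccosh}\left(1 + 2\frac{\|u-v\|^2}{(1-\|u\|^2)(1-\|v\|^2)}\right)$, with arbitrary example vectors:

```python
import numpy as np

def poincare_distance(u, v):
    """Hyperbolic distance between two points strictly inside the unit ball,
    the metric underlying the distance-based operations above."""
    diff = u - v
    num = 2.0 * np.dot(diff, diff)
    denom = (1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v))
    return np.arccosh(1.0 + num / denom)

a = np.array([0.1, 0.2])
b = np.array([0.5, -0.3])
print(poincare_distance(a, b))
print(poincare_distance(a, a))  # 0.0: a point is at distance zero from itself
```

Note how the denominator shrinks near the boundary of the ball, so distances grow rapidly there; this is what gives leaf nodes room to spread out.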
3.2 Experimental operations These operations are based on the notion that the norm of a vector represents its hierarchical position. Leaf nodes typically tend to have the highest norms, and as we move up the hierarchy, the norm decreases, with the root node being close to the center (or origin).
# Closest child node
model.kv.closest_child('person.n.01')

# Closest parent node
model.kv.closest_parent('person.n.01')

# Position in hierarchy - lower values represent that the node is higher in the hierarchy
print(model.kv.norm('person.n.01'))
print(model.kv.norm('teacher.n.01'))

# Difference in hierarchy between the first node and the second node
# Positive values indicate the first node is higher in the hierarchy
print(model.kv.difference_in_hierarchy('person.n.01', 'teacher.n.01'))

# One possible descendant chain
model.kv.descendants('mammal.n.01')

# One possible ancestor chain
model.kv.ancestors('dog.n.01')
docs/notebooks/Poincare Tutorial.ipynb
mattilyra/gensim
lgpl-2.1
Layouts AnchorLayout, BoxLayout, FloatLayout, RelativeLayout, GridLayout, PageLayout, ScatterLayout, StackLayout
%%bash
cat boxlayout.kv

%%bash
python boxlayout.py > /dev/null 2>&1
Kivy/Kivy.ipynb
CLEpy/CLEpy-MotM
mit
Widgets
%%bash
cat widgets.kv

%%bash
python widgets.py > /dev/null 2>&1
Kivy/Kivy.ipynb
CLEpy/CLEpy-MotM
mit
Data binding
%%bash
cat databinding.kv

%%bash
python databinding.py > /dev/null 2>&1
Kivy/Kivy.ipynb
CLEpy/CLEpy-MotM
mit
Build a custom estimator linear regressor In this exercise, we'll be trying to predict median_house_value. It will be our label. We'll use the remaining columns as our input features. To train our model, we'll use the Estimator API and create a custom estimator for linear regression. Note that we don't actually need a custom estimator for linear regression since there is a canned estimator for it; however, we're keeping it simple so you can practice creating a custom estimator function.
# Define feature columns
feature_columns = {
    colname: tf.feature_column.numeric_column(colname)
    for colname in ['housing_median_age', 'median_income', 'num_rooms', 'num_bedrooms', 'persons_per_house']
}
# Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons
feature_columns['longitude'] = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('longitude'), np.linspace(-124.3, -114.3, 5).tolist())
feature_columns['latitude'] = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('latitude'), np.linspace(32.5, 42, 10).tolist())

# Split into train and eval and create input functions
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]

SCALE = 100000
BATCH_SIZE = 128
train_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[list(feature_columns.keys())],
                                                     y = traindf["median_house_value"] / SCALE,
                                                     num_epochs = None,
                                                     batch_size = BATCH_SIZE,
                                                     shuffle = True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[list(feature_columns.keys())],
                                                    y = evaldf["median_house_value"] / SCALE,  # note the scaling
                                                    num_epochs = 1,
                                                    batch_size = len(evaldf),
                                                    shuffle = False)

# Create the custom estimator
def custom_estimator(features, labels, mode, params):
    # 0. Extract data from feature columns
    input_layer = tf.feature_column.input_layer(features, params['feature_columns'])

    # 1. Define Model Architecture
    predictions = tf.layers.dense(input_layer, 1, activation=None)

    # 2. Loss function, training/eval ops
    if mode == tf.estimator.ModeKeys.TRAIN or mode == tf.estimator.ModeKeys.EVAL:
        labels = tf.expand_dims(tf.cast(labels, dtype=tf.float32), -1)
        loss = tf.losses.mean_squared_error(labels, predictions)
        optimizer = tf.train.FtrlOptimizer(learning_rate=0.2)
        train_op = optimizer.minimize(
            loss = loss,
            global_step = tf.train.get_global_step())
        eval_metric_ops = {
            "rmse": tf.metrics.root_mean_squared_error(labels*SCALE, predictions*SCALE)
        }
    else:
        loss = None
        train_op = None
        eval_metric_ops = None

    # 3. Create predictions
    predictions_dict = #TODO: create predictions dictionary

    # 4. Create export outputs
    export_outputs = #TODO: create export_outputs dictionary

    # 5. Return EstimatorSpec
    return tf.estimator.EstimatorSpec(
        mode = mode,
        predictions = predictions_dict,
        loss = loss,
        train_op = train_op,
        eval_metric_ops = eval_metric_ops,
        export_outputs = export_outputs)

# Create serving input function
def serving_input_fn():
    feature_placeholders = {
        colname: tf.placeholder(tf.float32, [None])
        for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',')
    }
    feature_placeholders['longitude'] = tf.placeholder(tf.float32, [None])
    feature_placeholders['latitude'] = tf.placeholder(tf.float32, [None])

    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in feature_placeholders.items()
    }
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)

# Create custom estimator's train and evaluate function
def train_and_evaluate(output_dir):
    estimator = # TODO: Add estimator, make sure to add params={'feature_columns': list(feature_columns.values())} as an argument
    train_spec = tf.estimator.TrainSpec(
        input_fn = train_input_fn,
        max_steps = 1000)
    exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
    eval_spec = tf.estimator.EvalSpec(
        input_fn = eval_input_fn,
        steps = None,
        exporters = exporter)
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

# Run Training
OUTDIR = 'custom_estimator_trained_model'
shutil.rmtree(OUTDIR, ignore_errors = True)  # start fresh each time
train_and_evaluate(OUTDIR)
courses/machine_learning/deepdive/05_artandscience/labs/d_customestimator_linear.ipynb
turbomanage/training-data-analyst
apache-2.0
Question 2
# Carregando o Dataset data = pd.read_csv("car.data", header=None) # Transformando as variáveis categóricas em valores discretos contáveis # Apesar de diminuir a interpretação dos atributos, essa medida facilita bastante a vida dos métodos de contagem for i in range(0, data.shape[1]): data.iloc[:,i] = LabelEncoder().fit_transform(data.iloc[:,i]) # Separação do Conjunto de Treino e Conjunto de Teste (80%/20%) X_train, X_test, y_train, y_test = train_test_split(data.iloc[:,:-1], data.iloc[:,-1], test_size=0.2) # Implementação do Naive Bayes Multinomial do SKLearn # O Naive Bayes Multinomial é a implementação do SKLearn que funciona para variáveis discretas, # ao invés de utilizar o modelo da função gaussiana (a classe GaussianNB faz dessa forma) clf = MultinomialNB() clf.fit(X_train.values, y_train.values) y_pred = clf.predict(X_test.values) # Impressão dos Resultados print("Multinomial Naive Bayes (SKlearn Version)") print("Total Accuracy: {}%".format(accuracy_score(y_true=y_test, y_pred=y_pred))) print("\nClassification Report:") print(classification_report(y_true=y_test, y_pred=y_pred, target_names=["unacc", "acc", "good", "vgood"]))
2017/05-naive-bayes/resp_naive_bayes_moesio.ipynb
IsacLira/data-science-cookbook
mit
Question 3
# Run my own Naive Bayes implementation on the same dataset
nBayes = NaiveBayes()
nBayes.fit(X_train.values, y_train.values)
y_pred = nBayes.predict(X_test.values)

# Print the results
print("Naive Bayes (My Version :D)")
print("Total Accuracy: {}%".format(accuracy_score(y_true=y_test, y_pred=y_pred)))
print("\nClassification Report:")
print(classification_report(y_true=y_test, y_pred=y_pred, target_names=["unacc", "acc", "good", "vgood"]))
2017/05-naive-bayes/resp_naive_bayes_moesio.ipynb
IsacLira/data-science-cookbook
mit
Set your Processor Variables
# TODO(developer): Fill these variables with your values before running the sample
PROJECT_ID = "YOUR_PROJECT_ID_HERE"
LOCATION = "us"  # Format is 'us' or 'eu'
PROCESSOR_ID = "PROCESSOR_ID"  # Create processor in Cloud Console
GCS_INPUT_BUCKET = 'cloud-samples-data'
GCS_INPUT_PREFIX = 'documentai/async_forms/'
GCS_OUTPUT_URI = 'YOUR-OUTPUT-BUCKET'
GCS_OUTPUT_URI_PREFIX = 'TEST'
TIMEOUT = 300
general/async_form_parser.ipynb
GoogleCloudPlatform/documentai-notebooks
apache-2.0
The following code calls the asynchronous batch API and parses the form fields and values.
def process_document_sample():
    # Instantiates a client
    client_options = {"api_endpoint": "{}-documentai.googleapis.com".format(LOCATION)}
    client = documentai.DocumentProcessorServiceClient(client_options=client_options)

    storage_client = storage.Client()
    blobs = storage_client.list_blobs(GCS_INPUT_BUCKET, prefix=GCS_INPUT_PREFIX)

    document_configs = []
    print("Input Files:")
    for blob in blobs:
        if ".pdf" in blob.name:
            source = "gs://{bucket}/{name}".format(bucket=GCS_INPUT_BUCKET, name=blob.name)
            print(source)
            document_config = {"gcs_uri": source, "mime_type": "application/pdf"}
            document_configs.append(document_config)

    gcs_documents = documentai.GcsDocuments(documents=document_configs)
    input_config = documentai.BatchDocumentsInputConfig(gcs_documents=gcs_documents)

    destination_uri = f"{GCS_OUTPUT_URI}/{GCS_OUTPUT_URI_PREFIX}/"  # Where to write results
    output_config = documentai.DocumentOutputConfig(
        gcs_output_config={"gcs_uri": destination_uri}
    )

    # The full resource name of the processor, e.g.:
    # projects/project-id/locations/location/processor/processor-id
    # You must create new processors in the Cloud Console first.
    name = f"projects/{PROJECT_ID}/locations/{LOCATION}/processors/{PROCESSOR_ID}"

    request = documentai.types.document_processor_service.BatchProcessRequest(
        name=name,
        input_documents=input_config,
        document_output_config=output_config,
    )
    operation = client.batch_process_documents(request)

    # Wait for the operation to finish
    operation.result(timeout=TIMEOUT)

    # Results are written to GCS. Use a regex to find
    # output files
    match = re.match(r"gs://([^/]+)/(.+)", destination_uri)
    output_bucket = match.group(1)
    prefix = match.group(2)

    bucket = storage_client.get_bucket(output_bucket)
    blob_list = list(bucket.list_blobs(prefix=prefix))

    for i, blob in enumerate(blob_list):
        # If JSON file, download the contents of this blob as a bytes object.
        if ".json" in blob.name:
            blob_as_bytes = blob.download_as_string()
            print("downloaded")
            document = documentai.types.Document.from_json(blob_as_bytes)
            print(f"Fetched file {i + 1}")

            # For a full list of Document object attributes, please reference this page:
            # https://cloud.google.com/document-ai/docs/reference/rpc/google.cloud.documentai.v1beta3#document
            document_pages = document.pages

            keys = []
            keysConf = []
            values = []
            valuesConf = []

            # Grab each key/value pair and their corresponding confidence scores.
            for page in document_pages:
                for form_field in page.form_fields:
                    fieldName = get_text(form_field.field_name, document)
                    keys.append(fieldName.replace(':', ''))
                    nameConfidence = round(form_field.field_name.confidence, 4)
                    keysConf.append(nameConfidence)
                    fieldValue = get_text(form_field.field_value, document)
                    values.append(fieldValue.replace(':', ''))
                    valueConfidence = round(form_field.field_value.confidence, 4)
                    valuesConf.append(valueConfidence)

            # Create a Pandas Dataframe to print the values in tabular format.
            df = pd.DataFrame({'Key': keys, 'Key Conf': keysConf, 'Value': values, 'Value Conf': valuesConf})
            display(df)
        else:
            print(f"Skipping non-supported file type {blob.name}")

# Extract shards from the text field
def get_text(doc_element: dict, document: dict):
    """
    Document AI identifies form fields by their offsets in document text.
    This function converts offsets to text snippets.
    """
    response = ""
    # If a text segment spans several lines, it will
    # be stored in different text segments.
    for segment in doc_element.text_anchor.text_segments:
        start_index = (
            int(segment.start_index)
            if segment in doc_element.text_anchor.text_segments
            else 0
        )
        end_index = int(segment.end_index)
        response += document.text[start_index:end_index]
    return response

doc = process_document_sample()
general/async_form_parser.ipynb
GoogleCloudPlatform/documentai-notebooks
apache-2.0
Antimony Similar to the SBML functions above, you can also use the functions getCurrentAntimony and exportToAntimony to get or export the current Antimony representation.
import tellurium as te
import tempfile

# load model
r = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')

# file for export
f_antimony = tempfile.NamedTemporaryFile(suffix=".txt")

# export current model state
r.exportToAntimony(f_antimony.name)

# to export the initial state when the model was loaded
# set the current argument to False
r.exportToAntimony(f_antimony.name, current=False)

# The string representations of the current model are available via
str_antimony = r.getCurrentAntimony()
# and of the initial state when the model was loaded via
str_antimony = r.getAntimony()
print(str_antimony)
examples/notebooks/core/tellurium_export.ipynb
kirichoi/tellurium
apache-2.0
CellML Tellurium also has functions for exporting the current model state to CellML. These functionalities rely on using Antimony to perform the conversion.
import tellurium as te
import tempfile

# load model
r = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')

# file for export
f_cellml = tempfile.NamedTemporaryFile(suffix=".cellml")

# export current model state
r.exportToCellML(f_cellml.name)

# to export the initial state when the model was loaded
# set the current argument to False
r.exportToCellML(f_cellml.name, current=False)

# The string representations of the current model are available via
str_cellml = r.getCurrentCellML()
# and of the initial state when the model was loaded via
str_cellml = r.getCellML()
print(str_cellml)
examples/notebooks/core/tellurium_export.ipynb
kirichoi/tellurium
apache-2.0
Matlab To export the current model state to MATLAB, use getCurrentMatlab.
import tellurium as te
import tempfile

# load model
r = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')

# file for export
f_matlab = tempfile.NamedTemporaryFile(suffix=".m")

# export current model state
r.exportToMatlab(f_matlab.name)

# to export the initial state when the model was loaded
# set the current argument to False
r.exportToMatlab(f_matlab.name, current=False)

# The string representations of the current model are available via
str_matlab = r.getCurrentMatlab()
# and of the initial state when the model was loaded via
str_matlab = r.getMatlab()
print(str_matlab)
examples/notebooks/core/tellurium_export.ipynb
kirichoi/tellurium
apache-2.0
Using Antimony Directly The above examples rely on Antimony as an intermediary between formats. You can use this functionality directly using e.g. antimony.getCellMLString. A comprehensive set of functions can be found in the Antimony API documentation.
import antimony
antimony.loadAntimonyString('''S1 -> S2; k1*S1; k1 = 0.1; S1 = 10''')
ant_str = antimony.getCellMLString(antimony.getMainModuleName())
print(ant_str)
examples/notebooks/core/tellurium_export.ipynb
kirichoi/tellurium
apache-2.0
Performing scripts with python-fmrest This is a short example on how to perform scripts with python-fmrest. Import the module
import fmrest
examples/performing_scripts.ipynb
davidhamann/python-fmrest
mit
Create the server instance
fms = fmrest.Server('https://10.211.55.15',
                    user='admin',
                    password='admin',
                    database='Contacts',
                    layout='Demo',
                    verify_ssl=False)
examples/performing_scripts.ipynb
davidhamann/python-fmrest
mit
Login The login method obtains the access token.
fms.login()
examples/performing_scripts.ipynb
davidhamann/python-fmrest
mit
Setup scripts You can set up scripts to run prerequest, presort, and after the request action and sorting are executed. The script setup is passed to a python-fmrest method as a dictionary keyed by the type of script execution, each value being a list containing the script name and parameter.
scripts = {
    'prerequest': ['name_of_script_to_run_prerequest', 'script_parameter'],
    'presort': ['name_of_script_to_run_presort', None],  # parameter can also be None
    'after': ['name_of_script_to_run_after_actions', '1234'],  # FMSDAPI expects all parameters to be strings
}
examples/performing_scripts.ipynb
davidhamann/python-fmrest
mit
You only need to specify the scripts you actually want to execute. So if you only have an after action, just build a scripts object with only the 'after' key. Call a standard method Scripts are always executed as part of a standard request to the server. These requests are the usual find(), create_record(), delete_record(), edit_record(), get_record() methods the Server class exposes to you. Let's make a find and then execute a script. The script being called contains an error on purpose, so that we can later read out the error number.
fms.find(
    query=[{'name': 'David'}],
    scripts={
        'after': ['testScriptWithError', None],
    }
)
examples/performing_scripts.ipynb
davidhamann/python-fmrest
mit
Get the last script error and result Via the last_script_result property, you can access both last error and script result for all scripts that were called.
fms.last_script_result
We see that the last error was 3, and our script result was '1'. The FMS Data API only returns strings, but error numbers are automatically converted to integers for convenience. The script result, however, will always be a string or None, even if you exit your script in FileMaker with a number or boolean.

Another example

Let's do another call, this time with a script that takes a parameter and does not produce any errors. It will exit with Exit Script[ Get(ScriptParameter) ], so it essentially gives us back what we feed in.
fms.find(
    query=[{'name': 'David'}],
    scripts={
        'prerequest': ['demoScript (id)', 'abc-1234'],
    }
)
... and here is the result (error 0 means no error):
fms.last_script_result
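The conversion behavior described above (error numbers as integers, results as strings or None) makes it easy to write a small checker. This is a hypothetical sketch, not part of python-fmrest itself, assuming last_script_result maps each script phase to an [error, result] pair as in the outputs above:

```python
def check_script_result(last_script_result, phase='after'):
    """Return (ok, value) for the given script phase.

    Assumes last_script_result maps phase names ('prerequest',
    'presort', 'after') to [error_number, result] pairs.
    Error 0 means the script ran without a FileMaker error.
    """
    error, value = last_script_result.get(phase, (None, None))
    return error == 0, value

# Hypothetical sample mirroring the shapes of the outputs above
sample = {'after': [3, '1'], 'prerequest': [0, 'abc-1234']}
print(check_script_result(sample, 'after'))       # (False, '1')
print(check_script_result(sample, 'prerequest'))  # (True, 'abc-1234')
```

Keeping such a helper next to your calls avoids accidentally comparing the string result against a number.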
Data Split

Ideally, we'd perform stratified 5x4-fold cross-validation; however, given the timeframe, we'll stick with a single split. We'll use an old chunk of the data for training, a more recent one for validation, and the most recent data as the test set. Don't worry, we'll use k-fold cross-validation in the next notebook.

Since the data we want to predict is in the future, we'll use the first 60% for training, the following 20% for validation, and the last 20% for test.
l1 = int(df.shape[0] * 0.6)
l2 = int(df.shape[0] * 0.8)

df_tra = df.loc[range(0, l1)]
df_val = df.loc[range(l1, l2)]
df_tst = df.loc[range(l2, df.shape[0])]

df_tra.shape, df_val.shape, df_tst.shape, df.shape
notebooks/2016-11-01-dvro-feature-selection.ipynb
dvro/sf-open-data-analysis
mit
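The same 60/20/20 temporal split can be expressed as a small reusable helper. This is a sketch (temporal_split is a hypothetical name, not from the notebook); it uses iloc so it works regardless of the DataFrame's index, whereas the loc/range version above assumes a 0..n-1 RangeIndex:

```python
import pandas as pd

def temporal_split(df, train_frac=0.6, val_frac=0.2):
    """Split a time-ordered DataFrame into train/validation/test.

    Assumes rows are sorted oldest-first, so the earliest rows go to
    training and the most recent rows to test.
    """
    n = len(df)
    l1 = int(n * train_frac)
    l2 = int(n * (train_frac + val_frac))
    return df.iloc[:l1], df.iloc[l1:l2], df.iloc[l2:]

# Toy example: 10 time-ordered rows
toy = pd.DataFrame({'x': range(10)})
tra, val, tst = temporal_split(toy)
print(len(tra), len(val), len(tst))  # 6 2 2
```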
Checking the data distribution, we see that this is a good split (the proportion of targets is similar across the three sets).
fig, axs = plt.subplots(1, 3, sharex=True, sharey=True, figsize=(12, 3))
axs[0].hist(df_tra.target, bins=2)
axs[1].hist(df_val.target, bins=2)
axs[2].hist(df_tst.target, bins=2)
axs[0].set_title('Training')
axs[1].set_title('Validation')
axs[2].set_title('Test')

X_tra = df_tra.drop(labels=['target'], axis=1, inplace=False).values
y_tra = df_tra.target.values
X_val = df_val.drop(labels=['target'], axis=1, inplace=False).values
y_val = df_val.target.values
X_tst = df_tst.drop(labels=['target'], axis=1, inplace=False).values
y_tst = df_tst.target.values
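Beyond eyeballing the histograms, the target proportions can be compared numerically. A sketch with a hypothetical helper, assuming a binary 0/1 target column as in the split above:

```python
import pandas as pd

def target_proportions(*frames, target='target'):
    """Return the positive-class fraction of each frame's target column."""
    return [frame[target].mean() for frame in frames]

# Toy frames with a binary target
a = pd.DataFrame({'target': [0, 1, 0, 1]})
b = pd.DataFrame({'target': [0, 1]})
print(target_proportions(a, b))  # [0.5, 0.5]
```

If the fractions diverge badly across train/validation/test, the temporal split may need stratification after all.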