The figure of the sum of residual errors illustrates that the objective function has very shallow gradients near the minimum, which makes it difficult to converge to a global minimum.
_dat['predictions']['KKF_zero'] = np.nan
_dat['predictions'].loc[25, 'KKF_zero'] = t_298 / (365*24*3600)
notebooks/Impurity Prediction Example 1.ipynb
brentjm/Impurity-Predictions
bsd-2-clause
First order (method of King, Kung, Fung) Define the objective function for a first-order reaction.
def error(p):
    t_298 = p[0]
    E = p[1]
    k = np.exp(E/R * (1/298. - 1/T))
    k = np.log(1-Sp) / t_298 * k
    err = Co * np.exp(k*t) - C
    return np.sum(err**2)
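Because of these shallow gradients, the choice of optimizer and starting point matters. Below is a minimal, self-contained sketch of minimizing a KKF-style first-order objective with a gradient-free method; the stress temperatures, durations, specification limit, and starting guesses are made-up assumptions, not values from this notebook.

```python
import numpy as np
from scipy.optimize import minimize

R = 8.314                                   # gas constant, J/(mol K)
T = np.array([313., 323., 333.])            # stress temperatures, K (assumed)
t = np.array([30., 30., 30.]) * 24 * 3600   # stress durations, s (assumed)
Sp = 0.05                                   # degradation at shelf life (assumed)
Co = 1.0                                    # normalized initial concentration

# Synthesize "observed" concentrations from known parameters
true_t298 = 2 * 365 * 24 * 3600             # 2-year shelf life at 298 K (assumed)
true_E = 80e3                               # activation energy, J/mol (assumed)
k_true = np.exp(true_E / R * (1/298. - 1/T)) * np.log(1 - Sp) / true_t298
C = Co * np.exp(k_true * t)

def error(p):
    t_298, E = p
    k = np.exp(E / R * (1/298. - 1/T))
    k = np.log(1 - Sp) / t_298 * k
    err = Co * np.exp(k * t) - C
    return np.sum(err**2)

# Nelder-Mead is simplex-based, so it does not rely on the near-flat gradients
res = minimize(error, x0=[1e7, 50e3], method='Nelder-Mead')
```

Even so, with real (noisy) data the flat error surface means several starting points should be tried before trusting any single fit.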
The figure of the sum of residual errors illustrates that the objective function has very shallow gradients near the minimum, which makes it difficult to converge to a global minimum.
_dat['predictions']['KKF_first'] = np.nan
_dat['predictions'].loc[25, 'KKF_first'] = t_298 / (365*24*3600)
Second order (method of King, Kung, Fung) Define the objective function for a second-order reaction.
def error(p):
    t_298 = p[0]
    E = p[1]
    k = np.exp(E/R * (1/298. - 1/T))
    k = 1 / t_298 * (Sp/(1-Sp)) * k
    err = Co / (1 + k*t) - C
    return np.sum(err**2)
The figure of the sum of residual errors illustrates that the objective function has very shallow gradients near the minimum, which makes it difficult to converge to a global minimum.
_dat['predictions']['KKF_second'] = np.nan
_dat['predictions'].loc[25, 'KKF_second'] = t_298 / (365*24*3600)
_dat['predictions']
This notebook is a revised version of a notebook by Amy Wu and Shen Zhimo. E2E ML on GCP: MLOps stage 6: serving: get started with Vertex AI Matching Engine and the Two-Tower built-in algorithm <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/note...
import os

# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
    "/opt/deeplearning/metadata/env_version"
)

# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ...
notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. ...
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:...
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. Before you submit a training job for the two-tower model, you need to upload your training data and schema to Cloud Storage. Vertex AI trains the model using this input data. In this tutorial, the Two-Tower built-in...
BUCKET_NAME = "[your-bucket-name]"  # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"

if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
    BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
    BUCKET_URI = "gs://" + BUCKET_NAME
Import libraries and define constants
import os

from google.cloud import aiplatform

%load_ext tensorboard
Introduction to Two-Tower algorithm Two-tower models learn to represent two items of various types (such as user profiles, search queries, web documents, answer passages, or images) in the same vector space, so that similar or related items are close to each other. These two items are referred to as the query and candi...
DATASET_NAME = "movielens_100k"  # Change to your dataset name.

# Change to your data and schema paths. These are paths to the movielens_100k
# sample data.
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/training_data/*"
INPUT_SCHEMA_PATH = f"gs://cloud-samples-data/v...
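The geometry the introduction describes can be sketched without any training: once a query tower and a candidate tower map items into the same vector space, retrieval is just ranking candidates by similarity to the query embedding. The sketch below uses random vectors in place of trained encoder outputs; the embedding size and candidate count are made-up assumptions.

```python
# Conceptual sketch only (not the Two-Tower trainer): query and candidate
# encoders map items into one shared vector space, and retrieval ranks
# candidates by dot product with the query embedding.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8                                      # embedding size (assumption)
query_emb = rng.normal(size=EMB_DIM)             # stand-in for the query tower output
candidate_embs = rng.normal(size=(5, EMB_DIM))   # stand-ins for 5 candidate embeddings

scores = candidate_embs @ query_emb              # one similarity score per candidate
best = int(np.argmax(scores))                    # index of the closest candidate
```

A nearest-neighbor service such as Matching Engine does exactly this lookup, but approximately and at scale.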
Train on Vertex AI Training with CPU Submit the Two-Tower training job to Vertex AI Training. The following command uses a single CPU machine for training. When using single node training, training_steps_per_epoch and eval_steps_per_epoch do not need to be set. Prepare your machine specification Now define the machine ...
TRAIN_COMPUTE = "n1-standard-8"
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
Define the worker pool specification Next, you define the worker pool specification for your custom hyperparameter tuning job. The worker pool specification will consist of the following: replica_count: The number of instances to provision of this machine type. machine_spec: The hardware specification. disk_spec : (op...
JOB_NAME = "twotowers_cpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)

CMDARGS = [
    f"--training_data_path={TRAINING_DATA_PATH}",
    f"--input_schema_path={INPUT_SCHEMA_PATH}",
    f"--job-dir={OUTPUT_DIR}",
    f"--train_batch_size={TRAIN_BATCH_SIZE}",
    f"--num_epochs={NUM_EPOCHS}",
]

worker...
Create a custom job Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters: display_name: A human readable name for the custom job. worker_pool_specs: The specification for the corresponding VM instances.
job = aiplatform.CustomJob(
    display_name="twotower_cpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
Execute the custom job Next, execute your custom job using the method run().
job.run()
View output After the job finishes successfully, you can view the output directory.
! gsutil ls {OUTPUT_DIR}

! gsutil rm -rf {OUTPUT_DIR}/*
Train on Vertex AI Training with GPU Next, train the Two Tower model using a GPU.
JOB_NAME = "twotowers_gpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)

TRAIN_COMPUTE = "n1-highmem-4"
TRAIN_GPU = "NVIDIA_TESLA_K80"
machine_spec = {
    "machine_type": TRAIN_COMPUTE,
    "accelerator_type": TRAIN_GPU,
    "accelerator_count": 1,
}

CMDARGS = [
    f"--training_data_path={TRAINING_D...
Create and execute the custom job Next, create and execute the custom job.
job = aiplatform.CustomJob(
    display_name="twotower_gpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
job.run()
Train on Vertex AI Training with TFRecords Next, train the Two-Tower model using TFRecords.
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/tfrecord/*"

JOB_NAME = "twotowers_tfrec_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)

TRAIN_COMPUTE = "n1-standard-8"
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}

CMDARGS = [ ...
View output After the job finishes successfully, you can view the output directory.
! gsutil ls {OUTPUT_DIR}

! gsutil rm -rf {OUTPUT_DIR}
Tensorboard When the training starts, you can view the logs in TensorBoard. Colab users can use the TensorBoard widget below: For Workbench AI Notebooks users, the TensorBoard widget above won't work. We recommend launching TensorBoard through the Cloud Shell. In your Cloud Shell, launch Tensorboard on port 8080:...
try:
    TENSORBOARD_DIR = os.path.join(OUTPUT_DIR, "tensorboard")
    %tensorboard --logdir {TENSORBOARD_DIR}
except Exception as e:
    print(e)
Hyperparameter tuning You may want to optimize the hyperparameters used during training to improve your model's accuracy and performance. For this example, the following command runs a Vertex AI hyperparameter tuning job with 8 trials that attempts to maximize the validation AUC metric. The hyperparameters it optimize...
from google.cloud.aiplatform import hyperparameter_tuning as hpt

hpt_job = aiplatform.HyperparameterTuningJob(
    display_name="twotowers_" + TIMESTAMP,
    custom_job=job,
    metric_spec={
        "val_auc": "maximize",
    },
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=0.0001, max=0.1...
View output After the job finishes successfully, you can view the output directory.
BEST_MODEL = OUTPUT_DIR + "/trial_" + best[0]

! gsutil ls {BEST_MODEL}
Upload the model to Vertex AI Model resource Your training job will export two TF SavedModels under gs://<job_dir>/query_model and gs://<job_dir>/candidate_model. These exported models can be used for online or batch prediction in Vertex Prediction. First, import the query (or candidate) model using the up...
# The following imports the query (user) encoder model.
MODEL_TYPE = "query"
# Use the following instead to import the candidate (movie) encoder model.
# MODEL_TYPE = 'candidate'

DISPLAY_NAME = f"{DATASET_NAME}_{MODEL_TYPE}"  # The display name of the model.
MODEL_NAME = f"{MODEL_TYPE}_model"  # Used by the deployment...
Deploy the model to Vertex AI Endpoint Deploying the Vertex AI Model resource to a Vertex AI Endpoint for online predictions: Create an Endpoint resource exposing an external interface to users consuming the model. After the Endpoint is ready, deploy one or more instances of a model to the Endpoint. The deployed model...
endpoint = aiplatform.Endpoint.create(display_name=DATASET_NAME)
Creating embeddings Now that you have deployed the query/candidate encoder model on Vertex AI Prediction, you can call the model to generate embeddings for new data. Make an online prediction with SDK Online prediction is used to synchronously query a model on a small batch of instances with minimal latency. The follo...
# Input items for the query model:
input_items = [
    {"data": '{"user_id": ["1"]}', "key": "key1"},
    {"data": '{"user_id": ["2"]}', "key": "key2"},
]

# Input items for the candidate model:
# input_items = [{
#     'data' : '{"movie_id": ["1"], "movie_title": ["fake title"]}',
#     'key': 'key1'
# }]

encodings =...
Make an online prediction with gcloud You can also do online prediction using the gcloud CLI.
import json

request = json.dumps({"instances": input_items})
with open("request.json", "w") as writer:
    writer.write(f"{request}\n")

ENDPOINT_ID = endpoint.resource_name

! gcloud ai endpoints predict {ENDPOINT_ID} \
    --region={REGION} \
    --json-request=request.json
Make a batch prediction Batch prediction is used to asynchronously make predictions on a batch of input data. This is recommended if you have a large input size and do not need an immediate response, such as getting embeddings for candidate objects in order to create an index for a nearest neighbor search service such...
QUERY_EMBEDDING_PATH = f"{BUCKET_URI}/embeddings/train.jsonl"

import tensorflow as tf

with tf.io.gfile.GFile(QUERY_EMBEDDING_PATH, "w") as f:
    for i in range(0, 1000):
        query = {"data": '{"user_id": ["' + str(i) + '"]}', "key": f"key{i}"}
        f.write(json.dumps(query) + "\n")

print("\nNumber of embeddi...
Send the prediction request To make a batch prediction request, call the model object's batch_predict method with the following parameters: - instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list" - prediction_format: The format of th...
MIN_NODES = 1
MAX_NODES = 4

batch_predict_job = model.batch_predict(
    job_display_name=f"batch_predict_{DISPLAY_NAME}",
    gcs_source=[QUERY_EMBEDDING_PATH],
    gcs_destination_prefix=f"{BUCKET_URI}/embeddings/output",
    machine_type=DEPLOY_COMPUTE,
    starting_replica_count=MIN_NODES,
    max_replica_count=MA...
Save the embeddings in JSONL format Next, you store the predicted embeddings as a JSONL formatted file. Each embedding is stored as: { 'id': .., 'embedding': [ ... ] } The format of the embeddings for the index can be in either CSV, JSON, or Avro format. Learn more about Embedding Formats for Indexing
embeddings = []
for result_file in result_files:
    with tf.io.gfile.GFile(result_file, "r") as f:
        instances = list(f)
    for instance in instances:
        instance = instance.replace('\\"', "'")
        result = json.loads(instance)
        prediction = result["prediction"]
        key = prediction["key"]...
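The target layout described above can be sketched end-to-end on made-up data: one `{"id": ..., "embedding": [...]}` JSON object per line. The ids and vectors below are illustrative, not real model output.

```python
# Minimal sketch of the {'id': ..., 'embedding': [...]} JSONL layout,
# using made-up ids and vectors in place of real encoder output.
import json

embeddings = [
    {"id": "key0", "embedding": [0.1, 0.2, 0.3]},
    {"id": "key1", "embedding": [0.4, 0.5, 0.6]},
]
with open("embeddings.json", "w") as f:
    for e in embeddings:
        f.write(json.dumps(e) + "\n")   # one JSON object per line

# Reading the file back recovers the same records
with open("embeddings.json") as f:
    parsed = [json.loads(line) for line in f]
```

Because each line is an independent JSON object, the file can be streamed and sharded, which is what makes JSONL convenient for large embedding dumps.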
Store the JSONL formatted embeddings in Cloud Storage Next, you upload the JSONL formatted embeddings to your Cloud Storage bucket.
EMBEDDINGS_URI = f"{BUCKET_URI}/embeddings/twotower/"

! gsutil cp embeddings.json {EMBEDDINGS_URI}
Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:
# Delete endpoint resource
endpoint.delete(force=True)

# Delete model resource
model.delete()

# Force undeployment of indexes and delete endpoint
try:
    index_endpoint.delete(force=True)
except Exception as e:
    print(e)

# Delete indexes
try:
    tree_ah_index.delete()
    brute_force_index.delete()
except Excep...
Split the datetime field into two parts: date and time.
# Process the datetime field
temp = pd.DatetimeIndex(data['datetime'])
data['date'] = temp.date
data['time'] = temp.time
data.head()
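The same split can be shown on a self-contained example; the tiny two-row frame below is made up and stands in for the bike-sharing data.

```python
# Self-contained illustration of splitting a datetime column into
# separate date and time parts, on a made-up two-row frame.
import pandas as pd

df = pd.DataFrame({"datetime": ["2011-01-01 05:00:00", "2011-01-01 13:30:00"]})
idx = pd.DatetimeIndex(df["datetime"])
df["date"] = idx.date     # datetime.date objects
df["time"] = idx.time     # datetime.time objects
```

Each new column holds plain Python `date`/`time` objects, which is convenient for grouping but loses datetime64 vectorized operations.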
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
For the time part, the finest granularity appears to be the hour, so we simply extract the hour as a more concise feature.
# Create the hour field
data['hour'] = pd.to_datetime(data.time, format="%H:%M:%S")
data['hour'] = pd.Index(data['hour']).hour
data
On reflection, the data only tells us the date. By common sense, the number of people going out should differ between weekends and weekdays. We add a new field dayofweek for the day of the week, and another field dateDays for how long it has been since the first day of rentals (one suspects that in Western countries this green, environmentally friendly way of getting around spread quickly).
# From the datetime features, derive a categorical day-of-week variable
data['dayofweek'] = pd.DatetimeIndex(data.date).dayofweek

# Also derive an elapsed-time-in-days variable
data['dateDays'] = (data.date - data.date[0]).astype('timedelta64[D]')
data
So far we have only been guessing; we don't actually know the real date-related distribution. So let's do a small tally of the actual data: count bike rentals for each day of the week, split into registered and unregistered (casual) users.
byday = data.groupby('dayofweek')

# Rentals by unregistered (casual) users
byday['casual'].sum().reset_index()

# Rentals by registered users
byday['registered'].sum().reset_index()
Since weekends do differ, we give Saturday its own column and Sunday its own column.
data['Saturday'] = 0
data.loc[data.dayofweek == 5, 'Saturday'] = 1
data['Sunday'] = 0
data.loc[data.dayofweek == 6, 'Sunday'] = 1
data
Drop the original time-related fields from the data.
# Remove old data features
dataRel = data.drop(['datetime', 'count', 'date', 'time', 'dayofweek'], axis=1)
dataRel.head()
Feature vectorization. We plan to build the model with scikit-learn, and a pandas DataFrame can be converted directly into a Python dict. We also distinguish categorical from continuous features here, so that we can process the two kinds separately later.
from sklearn.feature_extraction import DictVectorizer

# Put the continuous-valued attributes into a dict
featureConCols = ['temp', 'atemp', 'humidity', 'windspeed', 'dateDays', 'hour']
dataFeatureCon = dataRel[featureConCols]
dataFeatureCon = dataFeatureCon.fillna('NA')  # in case I missed any
X_dictCon = dataFeatureCon.T.to_dict().values()

# Put the categorical attributes into...
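A small, self-contained illustration of what DictVectorizer does with string-valued dicts: each distinct (key, value) pair becomes its own binary column. The rows below are made up and are not from the bike-sharing data.

```python
# Toy illustration of the DictVectorizer step, on made-up rows:
# string values get one-of-K expanded into binary columns.
from sklearn.feature_extraction import DictVectorizer

rows = [
    {"season": "spring", "weather": "clear"},
    {"season": "summer", "weather": "rain"},
]
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(rows)   # one binary column per (key, value) pair
```

With two keys and two values each, the result is a 2x4 matrix with exactly two ones per row; numeric values would instead pass through as-is.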
Standardizing continuous features. The continuous attributes need some processing; the most basic is standardization, so that after processing they have zero mean and unit variance. Data in this form helps both the convergence of model training and the accuracy of the model.
from sklearn import preprocessing

# Standardize the continuous-valued data
scaler = preprocessing.StandardScaler().fit(X_vec_con)
X_vec_con = scaler.transform(X_vec_con)
X_vec_con
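The zero-mean, unit-variance claim is easy to verify on a small synthetic matrix (the numbers below are made up):

```python
# Quick check of what StandardScaler does: after transform,
# each column has mean 0 and standard deviation 1.
import numpy as np
from sklearn import preprocessing

X = np.array([[1., 10.], [2., 20.], [3., 30.]])
scaler = preprocessing.StandardScaler().fit(X)
X_std = scaler.transform(X)
```

Note the scaler stores the training-set mean and scale, so the same `scaler.transform` must be reused on any test data.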
Encoding categorical features. The most common approach is one-hot encoding: for example, the colors red, blue, and yellow would be encoded as [1, 0, 0], [0, 1, 0], and [0, 0, 1].
from sklearn import preprocessing

# One-hot encoding
enc = preprocessing.OneHotEncoder()
enc.fit(X_vec_cat)
X_vec_cat = enc.transform(X_vec_cat).toarray()
X_vec_cat
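The red/blue/yellow example from the text can be run directly. One caveat: scikit-learn orders the one-hot columns by sorted category name (blue, red, yellow), so the exact bit patterns differ from the order in which the colors were listed.

```python
# The red/blue/yellow example as a runnable sketch. Columns are
# ordered by sorted category name, so 'red' encodes as [0, 1, 0].
from sklearn import preprocessing

colors = [['red'], ['blue'], ['yellow']]
enc = preprocessing.OneHotEncoder()
onehot = enc.fit_transform(colors).toarray()
```

Each row has exactly one 1, which is what keeps the encoded categories equidistant from one another instead of imposing an artificial ordering.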
Putting the features together. Combine the categorical and continuous features.
import numpy as np

# Combine categorical & continuous features
X_vec = np.concatenate((X_vec_con, X_vec_cat), axis=1)
X_vec
In the final feature matrix, the first 6 columns are the standardized continuous features and the remaining columns are the encoded categorical features. We also process the target values, taking them as floats.
# Vectorize the target values
Y_vec_reg = dataRel['registered'].values.astype(float)
Y_vec_cas = dataRel['casual'].values.astype(float)
Y_vec_reg
Y_vec_cas
<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/qcvv/xeb_theory"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/...
try:
    import cirq
except ImportError:
    print("installing cirq...")
    !pip install --quiet cirq
    import cirq
    print("installed cirq.")
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
Cross Entropy Benchmarking Theory Cross entropy benchmarking uses the properties of random quantum programs to determine the fidelity of a wide variety of circuits. When applied to circuits with many qubits, XEB can characterize the performance of a large device. When applied to deep, two-qubit circuits it can be used ...
# Standard imports
import numpy as np

from cirq.contrib.svg import SVGCircuit
The action of random circuits with noise An XEB experiment collects data from the execution of random circuits subject to noise. The effect of applying a random circuit with unitary $U$ is modeled as $U$ followed by a depolarizing channel. The result is that the initial state $|𝜓⟩$ is mapped to a density matrix $ρ_U$ ...
exponents = np.linspace(0, 7/4, 8)
exponents

import itertools

SINGLE_QUBIT_GATES = [
    cirq.PhasedXZGate(x_exponent=0.5, z_exponent=z, axis_phase_exponent=a)
    for a, z in itertools.product(exponents, repeat=2)
]
SINGLE_QUBIT_GATES[:10], '...'
Random circuit We use cirq.experiments.random_quantum_circuit_generation.random_rotations_between_two_qubit_circuit to generate a random two-qubit circuit. Note that we provide the possible single-qubit rotations from above and declare that our two-qubit operation is the $\sqrt{i\mathrm{SWAP}}$ gate.
import cirq_google as cg
from cirq.experiments import random_quantum_circuit_generation as rqcg

q0, q1 = cirq.LineQubit.range(2)
circuit = rqcg.random_rotations_between_two_qubit_circuit(
    q0, q1,
    depth=4,
    two_qubit_op_factory=lambda a, b, _: cirq.SQRT_ISWAP(a, b),
    single_qubit_gates=SINGLE_QUBIT_GAT...
Estimating fidelity Let $O_U$ be an observable that is diagonal in the computational basis. Then the expectation value of $O_U$ on $ρ_U$ is given by $$ Tr(ρ_U O_U) = f ⟨𝜓_U|O_U|𝜓_U⟩ + (1 - f) Tr(O_U / D). $$ This equation shows how $f$ can be estimated, since $Tr(ρ_U O_U)$ can be estimated from experimental data,...
# Make long circuits (which we will truncate)
MAX_DEPTH = 100
N_CIRCUITS = 10
circuits = [
    rqcg.random_rotations_between_two_qubit_circuit(
        q0, q1,
        depth=MAX_DEPTH,
        two_qubit_op_factory=lambda a, b, _: cirq.SQRT_ISWAP(a, b),
        single_qubit_gates=SINGLE_QUBIT_GATES)
    for _ in rang...
Execute circuits Cross entropy benchmarking requires sampled bitstrings from the device being benchmarked as well as the true probabilities from a noiseless simulation. We find these quantities for all (cycle_depth, circuit) permutations.
pure_sim = cirq.Simulator()

# Pauli Error. If there is an error, it is either X, Y, or Z
# with probability E_PAULI / 3
E_PAULI = 5e-3
noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(E_PAULI))

# These two qubit circuits have 2^2 = 4 probabilities
DIM = 4

records = []
for cycle_depth in cycle_depths: ...
What's the observable What is $O_U$? Let's define it to be the observable that gives the sum of all probabilities, i.e. $$ O_U |x \rangle = p(x) |x \rangle $$ for any bitstring $x$. We can use this to derive expressions for our quantities of interest. $$ e_U = \langle \psi_U | O_U | \psi_U \rangle \ = \sum_x a_x...
for record in records:
    e_u = np.sum(record['pure_probs']**2)
    u_u = np.sum(record['pure_probs']) / DIM
    m_u = np.sum(record['pure_probs'] * record['sampled_probs'])
    record.update(
        e_u=e_u,
        u_u=u_u,
        m_u=m_u,
    )
Remember: $$ m_U - u_U = f (e_U - u_U) $$ We estimate f by performing least squares minimization of the sum of squared residuals $$ \sum_U \left(f (e_U - u_U) - (m_U - u_U)\right)^2 $$ over different random circuits. The solution to the least squares problem is given by $$ f = (∑_U (m_U - u_U) * (e_U - u_U)...
import pandas as pd

df = pd.DataFrame(records)
df['y'] = df['m_u'] - df['u_u']
df['x'] = df['e_u'] - df['u_u']
df['numerator'] = df['x'] * df['y']
df['denominator'] = df['x'] ** 2
df.head()
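The closed-form estimator $f = \sum_U x_U y_U / \sum_U x_U^2$ can be sanity-checked on synthetic data where the true fidelity is known. The values below (fidelity, ranges, noise scale) are made-up assumptions, not simulation results from this notebook.

```python
# Numeric check of the closed-form least-squares estimator
# f = sum(x*y) / sum(x^2), with x = e_u - u_u and y = m_u - u_u,
# on synthetic pairs generated at a known fidelity.
import numpy as np

rng = np.random.default_rng(1)
f_true = 0.9
x = rng.uniform(0.01, 0.1, size=50)               # e_u - u_u per circuit (assumed range)
y = f_true * x + rng.normal(scale=1e-4, size=50)  # m_u - u_u, with small noise
f_est = np.sum(x * y) / np.sum(x**2)
```

Since the model has no intercept, this is exactly the one-parameter least-squares slope, and with many random circuits the noise averages out.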
Fit We'll plot the linear relationship and least-squares fit while we transform the raw DataFrame into one containing fidelities.
%matplotlib inline
from matplotlib import pyplot as plt

# Color by cycle depth
import seaborn as sns

colors = sns.cubehelix_palette(n_colors=len(cycle_depths))
colors = {k: colors[i] for i, k in enumerate(cycle_depths)}

_lines = []

def per_cycle_depth(df):
    fid_lsq = df['numerator'].sum() / df['denominator...
Fidelities
plt.plot(
    fids['cycle_depth'],
    fids['fidelity'],
    marker='o',
    label='Least Squares')

xx = np.linspace(0, fids['cycle_depth'].max())

# In XEB, we extract the depolarizing fidelity, which is
# related to (but not equal to) the Pauli error.
# For the latter, an error involves doing X, Y, or Z with E_PAUL...
Set up data augmentation The first thing we'll do is set up some data augmentation transformations to use during training, as well as some basic normalization to use during both training and testing. We'll use random crops and flips to train the model, and do basic normalization at both training time and test time. To ...
normalize = transforms.Normalize(mean=[0.5071, 0.4867, 0.4408], std=[0.2675, 0.2565, 0.2761])
aug_trans = [transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip()]
common_trans = [transforms.ToTensor(), normalize]

train_compose = transforms.Compose(aug_trans + common_trans)
test_compose = transforms.Com...
examples/06_PyTorch_NN_Integration_DKL/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb
jrg365/gpytorch
mit
Create DataLoaders Next, we create dataloaders for the selected dataset using the built in torchvision datasets. The cell below will download either the cifar10 or cifar100 dataset, depending on which choice is made. The default here is cifar10, however training is just as fast on either dataset. After downloading the ...
dataset = "cifar10"

if ('CI' in os.environ):  # this is for running the notebook in our testing framework
    train_set = torch.utils.data.TensorDataset(torch.randn(8, 3, 32, 32), torch.rand(8).round().long())
    test_set = torch.utils.data.TensorDataset(torch.randn(4, 3, 32, 32), torch.rand(4).round().long())
    t...
Creating the DenseNet Model With the data loaded, we can move on to defining our DKL model. A DKL model consists of three components: the neural network, the Gaussian process layer used after the neural network, and the Softmax likelihood. The first step is defining the neural network architecture. To do this, we use a...
from densenet import DenseNet

class DenseNetFeatureExtractor(DenseNet):
    def forward(self, x):
        features = self.features(x)
        out = F.relu(features, inplace=True)
        out = F.avg_pool2d(out, kernel_size=self.avgpool_size).view(features.size(0), -1)
        return out

feature_extractor = DenseNetFe...
Creating the GP Layer In the next cell, we create the layer of Gaussian process models that are called after the neural network. In this case, we'll be using one GP per feature, as in the SV-DKL paper. The outputs of these Gaussian processes will then be mixed in the softmax likelihood.
class GaussianProcessLayer(gpytorch.models.ApproximateGP):
    def __init__(self, num_dim, grid_bounds=(-10., 10.), grid_size=64):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            num_inducing_points=grid_size, batch_shape=torch.Size([num_dim])
        )
        ...
Creating the full SVDKL Model With both the DenseNet feature extractor and GP layer defined, we can put them together in a single module that simply calls one and then the other, much like building any Sequential neural network in PyTorch. This completes defining our DKL model.
class DKLModel(gpytorch.Module):
    def __init__(self, feature_extractor, num_dim, grid_bounds=(-10., 10.)):
        super(DKLModel, self).__init__()
        self.feature_extractor = feature_extractor
        self.gp_layer = GaussianProcessLayer(num_dim=num_dim, grid_bounds=grid_bounds)
        self.grid_bounds = grid...
Defining Training and Testing Code Next, we define the basic optimization loop and testing code. This code is entirely analogous to the standard PyTorch training loop. We create a torch.optim.SGD optimizer with the parameters of the neural network on which we apply the standard amount of weight decay suggested from the...
n_epochs = 1
lr = 0.1
optimizer = SGD([
    {'params': model.feature_extractor.parameters(), 'weight_decay': 1e-4},
    {'params': model.gp_layer.hyperparameters(), 'lr': lr * 0.01},
    {'params': model.gp_layer.variational_parameters()},
    {'params': likelihood.parameters()},
], lr=lr, momentum=0.9, nesterov=True, ...
We are now ready to train the model. At the end of each epoch we report the current test loss and accuracy, and we save a checkpoint model out to a file.
for epoch in range(1, n_epochs + 1):
    with gpytorch.settings.use_toeplitz(False):
        train(epoch)
        test()
    scheduler.step()
    state_dict = model.state_dict()
    likelihood_state_dict = likelihood.state_dict()
    torch.save({'model': state_dict, 'likelihood': likelihood_state_dict}, 'dkl_cifar_chec...
.. _tut_viz_raw: Visualize Raw data
import os.path as op

import mne

data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif'))
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
0.12/_downloads/plot_visualize_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The visualization module (:mod:`mne.viz`) contains all the plotting functions that work in combination with MNE data structures. Usually the easiest way to use them is to call a method of the data container. All of the plotting method names start with ``plot``. If you're using the IPython console, you can just write ``raw.plot`` and...
raw.plot(block=True, events=events)
The channels are color coded by channel type. Generally MEG channels are colored in different shades of blue, whereas EEG channels are black. The channels are also sorted by channel type by default. If you want to use a custom order for the channels, you can use the ``order`` parameter of :func:`raw.plot`. The scrollbar on right...
raw.plot_sensors(kind='3d', ch_type='mag')
Now let's add some SSP projectors to the raw data. Here we read them from a file and plot them.
projs = mne.read_proj(op.join(data_path, 'sample_audvis_eog-proj.fif'))
raw.add_proj(projs)
raw.plot_projs_topomap()
Plotting channel wise power spectra is just as easy. The layout is inferred from the data by default when plotting topo plots. This works for most data, but it is also possible to define the layouts by hand. Here we select a layout with only magnetometer channels and plot it. Then we plot the channel wise spectra of fi...
layout = mne.channels.read_layout('Vectorview-mag') layout.plot() raw.plot_psd_topo(tmax=30., fmin=5., fmax=60., n_fft=1024, layout=layout)
0.12/_downloads/plot_visualize_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
As the next step, we set up the basic factory functions for Inception.
def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), name=None, suffix=''): conv = mx.symbol.Convolution(data=data, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad, name='conv_%s%s' %(name, suffix)) bn = mx.symbol.BatchNorm(data=conv, name='bn_%s%s' %(name, suffix)) act = mx.symb...
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Build the network using the factories
def inception(nhidden, grad_scale): # data data = mx.symbol.Variable(name="data") # stage 2 in3a = InceptionFactoryA(data, 64, 64, 64, 64, 96, "avg", 32, '3a') in3b = InceptionFactoryA(in3a, 64, 64, 96, 64, 96, "avg", 64, '3b') in3c = InceptionFactoryB(in3b, 128, 160, 64, 96, '3c') # stage 3...
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Make the data iterator. Note that we convert the original CIFAR-100 dataset into image format and then pack it into RecordIO so that we can use the built-in image augmentation. For details, please refer to the RecordIO documentation.
batch_size = 64 train_dataiter = mx.io.ImageRecordIter( shuffle=True, path_imgrec="./data/train.rec", mean_img="./data/mean.bin", rand_crop=True, rand_mirror=True, data_shape=(3, 28, 28), batch_size=batch_size, prefetch_buffer=4, preprocess_threads=2) test_dataiter = mx.io.ImageRec...
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Make model
num_epoch = 38 model_prefix = "model/cifar_100" softmax = inception(100, 1.0) model = mx.model.FeedForward(ctx=mx.gpu(), symbol=softmax, num_epoch=num_epoch, learning_rate=0.05, momentum=0.9, wd=0.0001)
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Fit first stage
model.fit(X=train_dataiter, eval_data=test_dataiter, eval_metric="accuracy", batch_end_callback=mx.callback.Speedometer(batch_size, 200), epoch_end_callback=mx.callback.do_checkpoint(model_prefix))
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Even without reducing the learning rate, this model is able to achieve a state-of-the-art result. Let's reduce the learning rate and train for a few more rounds.
# load params from saved model num_epoch = 38 model_prefix = "model/cifar_100" tmp_model = mx.model.FeedForward.load(model_prefix, num_epoch) # create new model with params num_epoch = 6 model_prefix = "model/cifar_100_stage2" model = mx.model.FeedForward(ctx=mx.gpu(), symbol=softmax, num_epoch=num_epoch, ...
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Now going the other way, converting DataFrames back into TicDat objects. Since pan_dat just has a collection of DataFrames attached to it, this is easy to show.
dat2 = input_schema.TicDat(foods = pan_dat.foods, categories = pan_dat.categories, nutrition_quantities = pan_dat.nutrition_quantities) dat2.nutrition_quantities input_schema._same_data(dat, dat2)
examples/expert_section/notebooks/pandas_and_ticdat_2.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
That said, if you drop the indices then things don't work, so be sure to set DataFrame indices correctly.
df = pan_dat.nutrition_quantities.reset_index(drop=False) df
examples/expert_section/notebooks/pandas_and_ticdat_2.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
So, while this might appear to be OK, the current ticdat implementation does need the index.
input_schema.TicDat(foods = pan_dat.foods, categories = pan_dat.categories, nutrition_quantities = df)
examples/expert_section/notebooks/pandas_and_ticdat_2.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
<H2> Create normally distributed data</H2>
# fake some data data = norm.rvs(loc=0.0, scale=1.0, size =150) plt.hist(data, rwidth=0.85, facecolor='black'); plt.ylabel('Number of events'); plt.xlabel('Value');
Stochastic_systems/Fit_real_histogram.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
<H2> Obtain the fitting to a normal distribution</H2> <P> This is simply the mean and the standard deviation of the sample data<P>
mean, stdev = norm.fit(data) print('Mean =%f, Stdev=%f'%(mean,stdev))
Stochastic_systems/Fit_real_histogram.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
To scale the normalized PDF of the normal distribution, we simply multiply every value by the area of the histogram. <H2> Get the histogram data from NumPy</H2>
histdata = plt.hist(data, bins=10, color='black', rwidth=.85) # we set 10 bins counts, binedge = np.histogram(data, bins=10); print(binedge) # Get bin centers from bin edges bincenter = [0.5 * (binedge[i] + binedge[i+1]) for i in xrange(len(binedge)-1)] bincenter binwidth = (max(bincenter) - min(bincenter)) / len(bi...
Stochastic_systems/Fit_real_histogram.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
<H2> Scale the normal PDF to the area of the histogram</H2>
x = np.linspace( start = -4 , stop = 4, num = 100) mynorm = norm(loc = mean, scale = stdev) # Scale Norm PDF to the area (binwidth)*number of samples of the histogram myfit = mynorm.pdf(x)*binwidth*len(data) # Plot everything together plt.hist(data, bins=10, facecolor='white', histtype='stepfilled'); plt.fill(x, myfit...
Stochastic_systems/Fit_real_histogram.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
Loading up the data of the LCS SPRING 2016 tournament Features: MPB/MPP = Match Points Blue/Purple MR = Match Result (B = Blue won, P = Purple won) TRB/TRP = Team Rank Blue/Purple FRR = First Round Result (B/P) RB/RP = Rating of team Blue/Purple W10B/W10P = Wins in the last 10 months team Blue/Purple L10B/L10P = Losses...
dta = pd.read_csv('data/lol-2016/lcs-s-16.csv', parse_dates=[1]) dta
Odds to win.ipynb
Scoppio/League-of-Odds
mit
Separating the data into training and test sets
train_idx = np.array(dta.data < '2016-02-27') #year month day test_idx = np.array(dta.data >= '2016-02-27') #year month day results_train = np.array(dta.MR[train_idx]) results_test = np.array(dta.MR[test_idx]) print("Training set:", len(results_train), "Test set:", len(results_test)) dta[test_idx]
Odds to win.ipynb
Scoppio/League-of-Odds
mit
Now that the feature columns are set, we can use them to train the machine-learning algorithm
feature_columns = ['TRB', 'TRP', 'RB', 'RP', 'W10B', 'L10B', 'W10P', 'L10P', 'HWB', 'HWP', 'GBB', 'GBP']
Odds to win.ipynb
Scoppio/League-of-Odds
mit
I don't really know if I needed this many manipulations, probably not. I just needed to concatenate train_arrays and test_arrays in a single place. If you know of a faster way, please point it out to me.
#Column numbers for odds for the two outcomes cidx_home = [i for i, col in enumerate(dta.columns) if col[-1] in 'B' and col in feature_columns] cidx_away = [i for i, col in enumerate(dta.columns) if col[-1] in 'P' and col in feature_columns] #The two feature matrices for training feature_train_home = dta.ix[train_idx...
Odds to win.ipynb
Scoppio/League-of-Odds
mit
Now we are finally ready to use the data to train the algorithm. First an AdaBoostClassifier object is created, and here we need to supply a set of arguments for it to work properly. The first argument is the classification algorithm to use, which is the DecisionTreeClassifier algorithm. I have chosen to supply this al...
from sklearn.ensemble import AdaBoostClassifier from sklearn.tree import DecisionTreeClassifier adb = AdaBoostClassifier( DecisionTreeClassifier(max_depth=3), n_estimators=1000, learning_rate=0.4, random_state=42) adb = adb.fit(feature_train, results_train)
Odds to win.ipynb
Scoppio/League-of-Odds
mit
We can now see how well the trained algorithm fits the training data.
import sklearn.metrics as skm from sklearn.cross_validation import train_test_split from sklearn.metrics import confusion_matrix training_pred = adb.predict(feature_train) print (skm.confusion_matrix(list(training_pred), list(results_train)))
Odds to win.ipynb
Scoppio/League-of-Odds
mit
We see that no matches in the training data are misclassified. Usually this is not very good, because it may imply that we have an overfitted model with poor predictive power on new data. This problem may be solved by tweaking features and adding more data. Yeah, it may sound strange, but something that predicts 100% of...
import matplotlib.pyplot as plt test_pred = adb.predict(feature_test) labels = ['Blue Team', 'Purple Team'] y_test = results_test y_pred = test_pred def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues): plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorba...
Odds to win.ipynb
Scoppio/League-of-Odds
mit
Scoring our model and visualizing the data Ok, first thing we are going to do is run a score that will evaluate our model. There are many ways to evaluate it, but the simple "score" method shown here is good enough for the model that we are dealing with. (Others would be F1 and accuracy.) Also important is to be able to s...
adb.score(feature_test, results_test)
Odds to win.ipynb
Scoppio/League-of-Odds
mit
0.633... as a score is not that bad, a little bit better than a blind guess. To know if your prediction model is good or not, all you have to do is compare it with blind guesses. In this case we have only two possible outcomes, either a victory for team Purple or a victory for team Blue, 50-50! Now we have a model that predicts ...
Blue_Team_Win = 20 Purple_Team_Win = 10 total_games = Purple_Team_Win + Blue_Team_Win All_Blue_Bets = (100 / total_games) * Blue_Team_Win All_Purple_Bets = (100 / total_games) * Purple_Team_Win print ("Team Blue won:", All_Blue_Bets, "%") print ("Team Purple won:", All_Purple_Bets, "%")
Odds to win.ipynb
Scoppio/League-of-Odds
mit
63.3% does not seem great anymore So as we can see, the highest win rate is higher than our model accuracy. But we have to keep in mind that the per-class accuracy shows this model correctly predicts 70% of the Purple Team wins and 60% of the Blue Team wins. This unfortunately amounts to only 63.3% of correct prediction...
test_data = dta[test_idx] test_data
Odds to win.ipynb
Scoppio/League-of-Odds
mit
The PR column Now we add the Prediction column to the data (and drop the URL column because it takes too much view space) The Prediction column will be declared as PR and must show the predicted outcome for a given match.
pd.options.mode.chained_assignment = None test_data['PR'] = np.array(adb.predict(feature_test)) try: test_data = test_data.drop('url', 1) except: pass test_data.sort('data', ascending=True)
Odds to win.ipynb
Scoppio/League-of-Odds
mit
3. Affine decomposition In order to obtain an affine decomposition, we recast the problem on a fixed, parameter independent, reference domain $\Omega$. We choose one characterized by $\mu_0=\mu_1=\mu_2=\mu_3=\mu_4=1$ and $\mu_5=0$, which we generate through the generate_mesh notebook provided in the data folder.
@PullBackFormsToReferenceDomain() @AffineShapeParametrization("data/t_bypass_vertices_mapping.vmp") class Stokes(StokesProblem): # Default initialization of members def __init__(self, V, **kwargs): # Call the standard initialization StokesProblem.__init__(self, V, **kwargs) # ... and al...
tutorials/12_stokes/tutorial_stokes_1_rb.ipynb
mathLab/RBniCS
lgpl-3.0
1. Definitions and basic concepts. By an algorithm we mean a series of steps that pursue a specific objective. Intuitively, we can relate it to a cooking recipe: a series of well-defined steps (leaving no room for user confusion) that must be carried out in a specific order to...
N = int(raw_input("Ingrese el numero que desea estudiar ")) if N<=1: print("Numero N tiene que ser mayor o igual a 2") elif 2<=N<=3: print("{0} es primo".format(N)) else: es_primo = True for i in range(2, N): if N%i==0: es_primo = False if es_primo: print("{0} es primo".f...
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
2.2 Second Program When using large numbers ($N=10^7$, for example), we notice that the previous algorithm takes a long time to run and iterates over all the numbers. However, once a divisor is found we already know the number is not prime, and the algorithm can stop immediately. This is achieved...
N = int(raw_input("Ingrese el numero que desea estudiar ")) if N<=1: print("Numero N tiene que ser mayor o igual a 2") elif 2<=N<=3: print("{0} es primo".format(N)) else: es_primo = True for i in range(2, N): if N%i==0: es_primo = False break if es_primo: prin...
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
For large composite numbers, execution now stops at the first divisor found. However, for large prime numbers it still takes quite a while. 2.3 Third Program One last trick to check more quickly whether a number is prime is to examine only part of the range of...
N = int(raw_input("Ingrese el numero que desea estudiar ")) if N<=1: print("Numero N tiene que ser mayor o igual a 2") elif 2<=N<=3: print("{0} es primo".format(N)) else: es_primo = True for i in range(2, int(N**.5)+1): if N%i==0: es_primo = False break if es_primo: ...
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
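The third program's trial division can be packaged as a reusable function; note that the upper bound must be int(N**.5) + 1, otherwise perfect squares such as 4 and 25 are wrongly reported as prime. A sketch in Python 3:

```python
def es_primo(n):
    """Trial division up to sqrt(n), as in the third program above."""
    if n < 2:
        return False
    if n <= 3:
        return True
    # The +1 makes the range include int(sqrt(n)) itself,
    # so perfect squares are correctly detected as composite.
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True

print([n for n in range(2, 20) if es_primo(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```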
3. Measuring complexity As we said earlier, once an algorithm works, one of the most important questions is to review it with an emphasis on measuring the time it needs to solve the problem. So the first question is: how can we measure the time an algor...
def sin_inputs_ni_outputs(): print "Hola mundo" def sin_inputs(): return "42" def sin_outputs(a,b): print a print b def con_input_y_output(a,b): return a+b def con_tuti(a,b,c=2): return a+b*c
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
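Answering the question above — measuring how long a call takes — can be sketched with time.time(); the helper name medir_tiempo is hypothetical, and for serious benchmarks the standard timeit module is the better tool:

```python
import time

def medir_tiempo(f, *args):
    """Return (result, elapsed seconds) for a single call to f."""
    t0 = time.time()
    resultado = f(*args)
    return resultado, time.time() - t0

def suma(a, b):
    return a + b

resultado, segundos = medir_tiempo(suma, 1, 2)
print(resultado)       # 3
print(segundos >= 0)   # True
```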
The function sin_inputs_ni_outputs runs without receiving input data and without producing output data (and it is not very useful).
sin_inputs_ni_outputs()
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
The function sin_inputs runs without receiving input data, but it does produce output data.
x = sin_inputs() print("El sentido de la vida, el universo y todo lo demás es: "+x)
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
The function con_input_y_output runs with input data and produces output data. Note that since Python does not declare data types, the same function can be applied to different types of data, as long as the logic inside the function makes sense for them (and does not raise errors).
print con_input_y_output("uno","dos") print con_input_y_output(1,2) print con_input_y_output(1.0, 2) print con_input_y_output(1.0, 2.0)
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
The function con_tuti runs with input data and default values, and produces output data.
print con_tuti(1,2) print con_tuti("uno","dos") print con_tuti(1,2,c=3) print con_tuti(1,2,3) print con_tuti("uno","dos",3)
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
The network represents the friendship relations between users.
friendships = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (5, 7), (6, 8), (7, 8), (8, 9)]
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
We also added the list of friends to each user's dict.
# give each user a friends list for user in users: user["friends"] = [] # and populate it for i, j in friendships: # this works because users[i] is the user whose id is i users[i]["friends"].append(users[j]) # add i as a friend of j users[j]["friends"].append(users[i]) # add j as a friend of i
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
When we looked at degree centrality in Chapter 1, it was a little disappointing that the people we intuitively considered key connectors were not selected. An alternative metric is betweenness centrality, which gives high values to people who frequently appear on the shortest paths between pairs of other people. Concretely, the betweenness centrality of node $i$ is computed as the fraction of shortest paths, over all pairs of other nodes $j,k$, that pass through $i$. Given any two people, we need to find the shortest paths between them. This book uses a less efficient but much...
# # Betweenness Centrality # def shortest_paths_from(from_user): # dict containing every shortest path from this user to each other user shortest_paths_to = { from_user["id"] : [[]] } # queue of (previous user, next user) pairs to check # start with every (from_user, friend of from_user) pair frontier = deque((from_user, friend) for friend i...
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
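The queue-based search described above boils down to breadth-first search; the sketch below (Python 3) computes shortest-path lengths on the same friendship graph, using user ids directly instead of the book's user dicts:

```python
from collections import deque

# The friendship graph from the chapter, as id pairs
friendships = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4),
               (4, 5), (5, 6), (5, 7), (6, 8), (7, 8), (8, 9)]

# Build an undirected adjacency list
adj = {}
for i, j in friendships:
    adj.setdefault(i, []).append(j)
    adj.setdefault(j, []).append(i)

def distances_from(start):
    """Breadth-first search: minimum number of hops from start to each user."""
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit is the shortest path
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

print(distances_from(0)[9])  # 7
```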