Predictions are made on a dataframe with a column `ds` containing the dates for which predictions are to be made. The `make_future_dataframe` function takes the model object and a number of periods to forecast, and produces a suitable dataframe. By default it will also include the historical dates, so we can evaluate the in-sample fit as well.
%%R
future <- make_future_dataframe(m, periods = 365)
tail(future)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
As with most modeling procedures in R, we use the generic `predict` function to get our forecast. The `forecast` object is a dataframe with a column `yhat` containing the forecast. It has additional columns for uncertainty intervals and seasonal components.
%%R
forecast <- predict(m, future)
tail(forecast[c('ds', 'yhat', 'yhat_lower', 'yhat_upper')])
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
You can use the generic `plot` function to plot the forecast, by passing in the model and the forecast dataframe.
%%R -w 10 -h 6 -u in
plot(m, forecast)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
You can use the `prophet_plot_components` function to see the forecast broken down into trend, weekly seasonality, and yearly seasonality.
%%R -w 9 -h 9 -u in
prophet_plot_components(m, forecast)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
BONUS: Elbow plot - finding the elbow to decide on the number of clusters
# find error for 1-10 clusters
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

sse = []
k_rng = range(1, 10)
for k in k_rng:
    km = KMeans(n_clusters=k)
    km.fit(df[['Age', 'Income($)']])
    sse.append(km.inertia_)

# plot errors and find the "elbow"
plt.xlabel('K')
plt.ylabel('Sum of squared error')
plt.plot(k_rng, sse)
_____no_output_____
MIT
k-mean.ipynb
pawel-krawczyk/machine_learning_basic
Reading Data
import pandas as pd

df = pd.read_csv('data.csv')
df.head()
df.tail()
df.info()
df['fever'].value_counts()
df['diffBreath'].value_counts()
df.describe()
_____no_output_____
MIT
.ipynb_checkpoints/Corona-checkpoint.ipynb
ayushman17/COVID-19-Detector
Train Test Splitting
import numpy as np

def data_split(data, ratio):
    np.random.seed(42)
    shuffled = np.random.permutation(len(data))
    test_set_size = int(len(data) * ratio)
    test_indices = shuffled[:test_set_size]
    train_indices = shuffled[test_set_size:]
    return data.iloc[train_indices], data.iloc[test_indices]

np.rando...
_____no_output_____
MIT
.ipynb_checkpoints/Corona-checkpoint.ipynb
ayushman17/COVID-19-Detector
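A quick usage sketch of the `data_split` function defined above (a minimal sketch; the toy dataframe and the 0.2 ratio are illustrative, not from the notebook):

```python
import pandas as pd

# Toy stand-in for the notebook's df = pd.read_csv('data.csv')
df = pd.DataFrame({'fever': range(100), 'diffBreath': range(100)})

train, test = data_split(df, ratio=0.2)  # 20% of rows go to the test set
print(len(train), len(test))             # -> 80 20
```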
TensorFlow 2 Complete Project Workflow in Amazon SageMaker

Data Preprocessing -> Code Prototyping -> Automatic Model Tuning -> Deployment

1. [Introduction](Introduction)
2. [SageMaker Processing for dataset transformation](SageMakerProcessing)
3. [Local Mode training](LocalModeTraining)
4. [Local Mode endpoint](LocalM...
import os
import sagemaker
import tensorflow as tf

sess = sagemaker.Session()
bucket = sess.default_bucket()

data_dir = os.path.join(os.getcwd(), 'data')
os.makedirs(data_dir, exist_ok=True)

train_dir = os.path.join(os.getcwd(), 'data/train')
os.makedirs(train_dir, exist_ok=True)

test_dir = os.path.join(os.getcwd(...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
SageMaker Processing for dataset transformation Next, we'll import the dataset and transform it with SageMaker Processing, which can be used to process terabytes of data in a SageMaker-managed cluster separate from the instance running your notebook server. In a typical SageMaker workflow, notebooks are only used for ...
import numpy as np
from tensorflow.python.keras.datasets import boston_housing
from sklearn.preprocessing import StandardScaler

(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

np.save(os.path.join(raw_dir, 'x_train.npy'), x_train)
np.save(os.path.join(raw_dir, 'x_test.npy'), x_test)
np.save(os.path....
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
To use SageMaker Processing, simply supply a Python data preprocessing script as shown below. For this example, we're using a SageMaker prebuilt Scikit-learn container, which includes many common functions for processing data. There are few limitations on what kinds of code and operations you can run, and only a mini...
%%writefile preprocessing.py
import glob
import numpy as np
import os
from sklearn.preprocessing import StandardScaler

if __name__ == '__main__':
    input_files = glob.glob('{}/*.npy'.format('/opt/ml/processing/input'))
    print('\nINPUT FILE LIST: \n{}\n'.format(input_files))
    scaler = StandardScaler()
    f...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
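The `%%writefile` cell above is truncated after `scaler = StandardScaler()`. A hedged sketch of how such a preprocessing script could continue, assuming the SageMaker Processing convention of reading from `/opt/ml/processing/input` and writing to an output directory such as `/opt/ml/processing/output` (the exact paths and scaling logic are assumptions, not the author's script):

```python
# Hypothetical continuation of preprocessing.py
import glob
import os
import numpy as np
from sklearn.preprocessing import StandardScaler

if __name__ == '__main__':
    input_files = glob.glob('{}/*.npy'.format('/opt/ml/processing/input'))
    scaler = StandardScaler()
    for file in input_files:
        raw = np.load(file)
        # Scale feature arrays (x_*); pass target arrays (y_*) through unchanged.
        transformed = scaler.fit_transform(raw) if 'x_' in os.path.basename(file) else raw
        # Assumed output directory following SageMaker Processing conventions.
        output_path = os.path.join('/opt/ml/processing/output', os.path.basename(file))
        np.save(output_path, transformed)
        print('SAVED TRANSFORMED FILE: {}'.format(output_path))
```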
Before starting the SageMaker Processing job, we instantiate a `SKLearnProcessor` object. This object allows you to specify the instance type to use in the job, as well as how many instances. Although the Boston Housing dataset is quite small, we'll use two instances to showcase how easy it is to spin up a cluster fo...
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor

sklearn_processor = SKLearnProcessor(framework_version='0.20.0',
                                     role=get_execution_role(),
                                     instance_type='ml.m5.xlarge',
                                     ...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
We're now ready to run the Processing job. To enable distributing the data files equally among the instances, we specify the `ShardedByS3Key` distribution type in the `ProcessingInput` object. This ensures that if we have `n` instances, each instance will receive `1/n` files from the specified S3 bucket. It may take...
from sagemaker.processing import ProcessingInput, ProcessingOutput
from time import gmtime, strftime

processing_job_name = "tf-2-workflow-{}".format(strftime("%d-%H-%M-%S", gmtime()))
output_destination = 's3://{}/{}/data'.format(bucket, s3_prefix)

sklearn_processor.run(code='preprocessing.py',
                      ...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
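Both the processor cell and the `run` cell above are cut off. A hedged sketch of how the `run` call could look with the `ShardedByS3Key` distribution described below; `raw_s3` (an S3 prefix holding the raw `.npy` files) and the container paths are assumptions based on SageMaker Processing conventions:

```python
from sagemaker.processing import ProcessingInput, ProcessingOutput

sklearn_processor.run(
    code='preprocessing.py',
    job_name=processing_job_name,
    inputs=[ProcessingInput(
        source=raw_s3,                                 # assumed S3 prefix of the raw data
        destination='/opt/ml/processing/input',
        s3_data_distribution_type='ShardedByS3Key')],  # each instance gets 1/n of the files
    outputs=[ProcessingOutput(
        output_name='transformed',
        destination=output_destination,
        source='/opt/ml/processing/output')])
```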
In the log output of the SageMaker Processing job above, you should be able to see logs in two different colors for the two different instances, and that each instance received different files. Without the `ShardedByS3Key` distribution type, each instance would have received a copy of **all** files. By spreading the ...
train_in_s3 = '{}/train/x_train.npy'.format(output_destination)
test_in_s3 = '{}/test/x_test.npy'.format(output_destination)

!aws s3 cp {train_in_s3} ./data/train/x_train.npy
!aws s3 cp {test_in_s3} ./data/test/x_test.npy
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Local Mode training Local Mode in Amazon SageMaker is a convenient way to make sure your code is working locally as expected before moving on to full scale, hosted training in a separate, more powerful SageMaker-managed cluster. To train in Local Mode, it is necessary to have docker-compose or nvidia-docker-compose (...
!wget -q https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-script-mode/master/local_mode_setup.sh
!wget -q https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-script-mode/master/daemon.json
!/bin/bash ./local_mode_setup.sh
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Next, we'll set up a TensorFlow Estimator for Local Mode training. Key parameters for the Estimator include:

- `train_instance_type`: the kind of hardware on which training will run. In the case of Local Mode, we simply set this parameter to `local` to invoke Local Mode training on the CPU, or to `local_gpu` if the inst...
from sagemaker.tensorflow import TensorFlow

git_config = {'repo': 'https://github.com/aws-samples/amazon-sagemaker-script-mode',
              'branch': 'master'}

model_dir = '/opt/ml/model'
train_instance_type = 'local'
hyperparameters = {'epochs': 5, 'batch_size': 128, 'learning_rate': 0.01}

local_estimator = Tens...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
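The Estimator cell above is truncated at `local_estimator = Tens...`. A hedged completion sketch using the SDK parameter names referenced in the text (`train_instance_type`); the `source_dir`, `entry_point`, and `framework_version` values are assumptions about the repository layout, not confirmed by the notebook:

```python
from sagemaker import get_execution_role
from sagemaker.tensorflow import TensorFlow

local_estimator = TensorFlow(git_config=git_config,
                             source_dir='tf-2-workflow/train_model',  # assumed path in the repo
                             entry_point='train.py',                  # assumed script name
                             model_dir=model_dir,
                             train_instance_type=train_instance_type, # 'local' for Local Mode
                             train_instance_count=1,
                             hyperparameters=hyperparameters,
                             role=get_execution_role(),
                             framework_version='2.1',                 # assumed TF version
                             py_version='py3')
```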
The `fit` method call below starts the Local Mode training job. Metrics for training will be logged below the code, inside the notebook cell. You should observe the validation loss decrease substantially over the five epochs, with no training errors, which is a good indication that our training code is working as exp...
inputs = {'train': f'file://{train_dir}',
          'test': f'file://{test_dir}'}

local_estimator.fit(inputs)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Local Mode endpoint While Amazon SageMaker's Local Mode training is very useful for making sure your training code is working before moving on to full-scale training, it would also be useful to have a convenient way to test your model locally before incurring the time and expense of deploying it to production. One possib...
!docker container stop $(docker container ls -aq) >/dev/null
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
The following single line of code deploys the model locally in the SageMaker TensorFlow Serving container:
local_predictor = local_estimator.deploy(initial_instance_count=1, instance_type='local')
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
To get predictions from the Local Mode endpoint, simply invoke the Predictor's predict method.
local_results = local_predictor.predict(x_test[:10])['predictions']
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
As a sanity check, the predictions can be compared against the actual target values.
local_preds_flat_list = [float('%.1f'%(item)) for sublist in local_results for item in sublist]
print('predictions: \t{}'.format(np.array(local_preds_flat_list)))
print('target values: \t{}'.format(y_test[:10].round(decimals=1)))
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
We only trained the model for a few epochs and there is much room for improvement, but the predictions so far should at least appear reasonably within the ballpark. To avoid having the SageMaker TensorFlow Serving container indefinitely running locally, simply gracefully shut it down by calling the `delete_endpoint` m...
local_predictor.delete_endpoint()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
SageMaker hosted training Now that we've confirmed our code is working locally, we can move on to use SageMaker's hosted training functionality. Hosted training is preferred for doing actual training, especially large-scale, distributed training. Unlike Local Mode training, for hosted training the actual training it...
s3_prefix = 'tf-2-workflow'

traindata_s3_prefix = '{}/data/train'.format(s3_prefix)
testdata_s3_prefix = '{}/data/test'.format(s3_prefix)

train_s3 = sess.upload_data(path='./data/train/', key_prefix=traindata_s3_prefix)
test_s3 = sess.upload_data(path='./data/test/', key_prefix=testdata_s3_prefix)

inputs = {'train':t...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
We're now ready to set up an Estimator object for hosted training. It is similar to the Local Mode Estimator, except the `train_instance_type` has been set to a SageMaker ML instance type instead of `local` for Local Mode. Also, since we know our code is working now, we'll train for a larger number of epochs with the e...
train_instance_type = 'ml.c5.xlarge'
hyperparameters = {'epochs': 30, 'batch_size': 128, 'learning_rate': 0.01}

git_config = {'repo': 'https://github.com/aws-samples/amazon-sagemaker-script-mode',
              'branch': 'master'}

estimator = TensorFlow(git_config=git_config,
                       source_dir='tf-2-...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
After starting the hosted training job with the `fit` method call below, you should observe the training converge over the longer number of epochs to a validation loss that is considerably lower than that which was achieved in the shorter Local Mode training job. Can we do better? We'll look into a way to do so in the...
estimator.fit(inputs)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
As with the Local Mode training, hosted training produces a model saved in S3 that we can retrieve. This is an example of the modularity of SageMaker: having trained the model in SageMaker, you can now take the model out of SageMaker and run it anywhere else. Alternatively, you can deploy the model into a production-...
!aws s3 cp {estimator.model_data} ./model/model.tar.gz
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
The unzipped archive should include the assets required by TensorFlow Serving to load the model and serve it, including a .pb file:
!tar -xvzf ./model/model.tar.gz -C ./model
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Automatic Model Tuning So far we have simply run one Local Mode training job and one Hosted Training job without any real attempt to tune hyperparameters to produce a better model, other than increasing the number of epochs. Selecting the right hyperparameter values to train your model can be difficult, and typically...
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner

hyperparameter_ranges = {
    'learning_rate': ContinuousParameter(0.001, 0.2, scaling_type="Logarithmic"),
    'epochs': IntegerParameter(10, 50),
    'batch_size': IntegerParameter(64, 256),
}

metric_definitions =...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Next we specify a HyperparameterTuner object that takes the above definitions as parameters. Each tuning job must be given a budget: a maximum number of training jobs. A tuning job will complete after that many training jobs have been executed. We can also specify how much parallelism to employ, in this case five j...
tuner = HyperparameterTuner(estimator,
                            objective_metric_name,
                            hyperparameter_ranges,
                            metric_definitions,
                            max_jobs=15,
                            max_parallel_jobs=5,
                            objective_typ...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
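Both tuning cells above are truncated. A hedged sketch of how the metric definition and tuner launch could be completed; the log regex and the `Minimize` objective are assumptions about the training script, and the job-name pattern mirrors the Processing cell earlier:

```python
from time import gmtime, strftime
from sagemaker.tuner import HyperparameterTuner

# Assumed: the training script logs lines matching 'loss: <value>'.
objective_metric_name = 'loss'
metric_definitions = [{'Name': 'loss', 'Regex': 'loss: ([0-9\\.]+)'}]

tuner = HyperparameterTuner(estimator,
                            objective_metric_name,
                            hyperparameter_ranges,
                            metric_definitions,
                            max_jobs=15,
                            max_parallel_jobs=5,
                            objective_type='Minimize')  # assumed: lower loss is better

tuning_job_name = 'tf-2-workflow-{}'.format(strftime('%d-%H-%M-%S', gmtime()))
tuner.fit(inputs, job_name=tuning_job_name)
```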
After the tuning job is finished, we can use the `HyperparameterTuningJobAnalytics` object from the SageMaker Python SDK to list the top 5 tuning jobs with the best performance. Although the results vary from tuning job to tuning job, the best validation loss from the tuning job (under the FinalObjectiveValue column) l...
tuner_metrics = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name)
tuner_metrics.dataframe().sort_values(['FinalObjectiveValue'], ascending=True).head(5)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
The total training time and training job status can be checked with the following lines of code. Because automatic early stopping is off by default, all the training jobs should have completed normally. For an example of a more in-depth analysis of a tuning job, see the SageMaker official sample [HPO_Analyze_TuningJob_...
total_time = tuner_metrics.dataframe()['TrainingElapsedTimeSeconds'].sum() / 3600
print("The total training time is {:.2f} hours".format(total_time))
tuner_metrics.dataframe()['TrainingJobStatus'].value_counts()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
SageMaker hosted endpoint Assuming the best model from the tuning job is better than the model produced by the individual Hosted Training job above, we could now easily deploy that model to production. A convenient option is to use a SageMaker hosted endpoint, which serves real time predictions from the trained mode...
tuning_predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
We can compare the predictions generated by this endpoint with those generated locally by the Local Mode endpoint:
results = tuning_predictor.predict(x_test[:10])['predictions']

flat_list = [float('%.1f'%(item)) for sublist in results for item in sublist]
print('predictions: \t{}'.format(np.array(flat_list)))
print('target values: \t{}'.format(y_test[:10].round(decimals=1)))
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
To avoid billing charges from stray resources, you can delete the prediction endpoint to release its associated instance(s).
sess.delete_endpoint(tuning_predictor.endpoint_name)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Workflow Automation with the AWS Step Functions Data Science SDK In the previous parts of this notebook, we prototyped various steps of a TensorFlow project within the notebook itself. Notebooks are great for prototyping, but generally are not used in production-ready machine learning pipelines. For example, a simp...
import sys

!{sys.executable} -m pip install --quiet --upgrade stepfunctions
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Add an IAM policy to your SageMaker role

**If you are running this notebook on an Amazon SageMaker notebook instance**, the IAM role assumed by your notebook instance needs permission to create and run workflows in AWS Step Functions. To provide this permission to the role, do the following.

1. Open the Amazon [SageMak...
import stepfunctions
from stepfunctions.template.pipeline import TrainingPipeline

# paste the StepFunctionsWorkflowExecutionRole ARN from above
workflow_execution_role = "<execution-role-arn>"

pipeline = TrainingPipeline(
    estimator=estimator,
    role=workflow_execution_role,
    inputs=inputs,
    s3_bucket=buc...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Visualizing the workflow You can now view the workflow definition, and visualize it as a graph. This workflow and graph represent your training pipeline from starting a training job to deploying the model.
print(pipeline.workflow.definition.to_json(pretty=True))
pipeline.render_graph()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Creating and executing the pipeline Before the workflow can be run for the first time, the pipeline must be created using the `create` method:
pipeline.create()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Now the workflow can be started by invoking the pipeline's `execute` method:
execution = pipeline.execute()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Use the `list_executions` method to list all executions for the workflow you created, including the one we just started. After a pipeline is created, it can be executed as many times as needed, for example on a schedule for retraining on new data. (For purposes of this notebook just execute the workflow one time to s...
pipeline.workflow.list_executions(html=True)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
While the workflow is running, you can check workflow progress inside this notebook with the `render_progress` method. This generates a snapshot of the current state of your workflow as it executes. This is a static image. Run the cell again to check progress while the workflow is running.
execution.render_progress()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
BEFORE proceeding with the rest of the notebook: wait until the workflow completes with status **Succeeded**, which will take a few minutes. You can check the status with `render_progress` above, or open the **Inspect in AWS Step Functions** link in the cell output in a new browser tab. To view the details of the complet...
execution.list_events(reverse_order=True, html=False)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
From this list of events, we can extract the name of the endpoint that was set up by the workflow.
import re

endpoint_name_suffix = re.search('endpoint\Wtraining\Wpipeline\W([a-zA-Z0-9\W]+?)"', str(execution.list_events())).group(1)
print(endpoint_name_suffix)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Once we have the endpoint name, we can use it to instantiate a TensorFlowPredictor object that wraps the endpoint. This TensorFlowPredictor can be used to make predictions, as shown in the following code cell.

BEFORE running the following code cell: Go to the [SageMaker console](https://console.aws.amazon.com/sagemak...
from sagemaker.tensorflow import TensorFlowPredictor

workflow_predictor = TensorFlowPredictor('training-pipeline-' + endpoint_name_suffix)

results = workflow_predictor.predict(x_test[:10])['predictions']
flat_list = [float('%.1f'%(item)) for sublist in results for item in sublist]
print('predictions: \t{}'.format(np...
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Example: compound interest

$A = P (1 + \frac{r}{n})^{nt}$

+ A - amount
+ P - principal
+ r - interest rate
+ n - number of times interest is compounded per unit 't'
+ t - time
import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import ipywidgets as widgets
from ipywidgets import interactive

def compound_interest_with_saving_rate(start_value, saving_per_month, interest_rate, duration_years):
    months = np.array(np.linspace(0, (12*duration_years)...
_____no_output_____
MIT
plotly_widgets_compound_interest.ipynb
summiee/jupyter_demos
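The widget cell above is cut off; as a self-contained illustration of the formula itself (the function and variable names are ours, not the notebook's):

```python
def compound_amount(P, r, n, t):
    """Compound interest: A = P * (1 + r/n)**(n*t)."""
    return P * (1 + r / n) ** (n * t)

# Example: 1000 at 5% annual interest, compounded monthly (n=12) for 10 years.
print(round(compound_amount(1000, 0.05, 12, 10), 2))  # -> 1647.01
```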
Parallelizing neural network training with TensorFlow

In this section we will leave the mathematical rudiments behind and focus on using TensorFlow, one of the most popular deep learning libraries, which implements neural networks more efficiently than an...
# Creating tensors
# =============================================
import tensorflow as tf
import numpy as np

np.set_printoptions(precision=3)

a = np.array([1, 2, 3], dtype=np.int32)
b = [4, 5, 6]

t_a = tf.convert_to_tensor(a)
t_b = tf.convert_to_tensor(b)

print(t_a)
print(t_b)

# Getting the dimensions of a ten...
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Manipulating the data types and shape of a tensor
# Changing the tensor's data type
# ==============================================
print(matriz_tf.dtype)
matriz_tf_n = tf.cast(matriz_tf, tf.int64)
print(matriz_tf_n.dtype)

# Transposing a tensor
# =================================================
t = tf.random.uniform(shape=(3, 5))
print(t, end = '\n'*2)
...
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Mathematical operations on tensors
# Initializing two tensors with random numbers
# =============================================================
tf.random.set_seed(1)
t1 = tf.random.uniform(shape=(5, 2), minval=-1.0, maxval=1.0)
t2 = tf.random.normal(shape=(5, 2), mean=0.0, stddev=1.0)
print(t1, '\n'*2, t2)

# Element-wise product: element ...
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Splitting, stacking, and concatenating tensors
# Data to work with
# =======================================
tf.random.set_seed(1)
t = tf.random.uniform((6,))
print(t.numpy())

# Splitting the tensor into a given number of pieces
# ======================================================
t_splits = tf.split(t, num_or_size_splits = 3)
[item.numpy() for item in t_spl...
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
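The cell above is cut off before the stacking and concatenation examples the heading promises; a small sketch of those two operations (illustrative tensors, not from the notebook):

```python
import tensorflow as tf

A = tf.ones((3,))
B = tf.zeros((3,))

# Concatenate along an existing axis -> shape (6,)
C = tf.concat([A, B], axis=0)

# Stack along a new axis -> shape (2, 3)
S = tf.stack([A, B], axis=0)

print(C.numpy())  # [1. 1. 1. 0. 0. 0.]
print(S.numpy())  # [[1. 1. 1.] [0. 0. 0.]]
```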
More functions and tools at: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf.

EXERCISES

1. Create two tensors of dimensions (4, 6) with random numbers drawn from a standard normal distribution with mean 0.0 and std 1.0. Print them.
2. Multiply the previous tensors in the two ways seen...
import tensorflow as tf

# Example with lists
# ======================================================
a = [1.2, 3.4, 7.5, 4.1, 5.0, 1.0]
ds = tf.data.Dataset.from_tensor_slices(a)
print(ds)

for item in ds:
    print(item)

for i in ds:
    print(i.numpy())
tf.Tensor(1.2, shape=(), dtype=float32)
tf.Tensor(3.4, shape=(), dtype=float32)
tf.Tensor(7.5, shape=(), dtype=float32)
tf.Tensor(4.1, shape=(), dtype=float32)
tf.Tensor(5.0, shape=(), dtype=float32)
tf.Tensor(1.0, shape=(), dtype=float32)
1.2
3.4
7.5
4.1
5.0
1.0
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
If we want to create batches from this dataset with a desired batch size of 3, we can do it as follows:
# Creating batches of 3 elements each
# ===================================================
ds_batch = ds.batch(3)

for i, elem in enumerate(ds_batch, 1):
    print(f'batch {i}:', elem)
batch 1: tf.Tensor([1.2 3.4 7.5], shape=(3,), dtype=float32)
batch 2: tf.Tensor([4.1 5. 1. ], shape=(3,), dtype=float32)
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
This will create two batches from this dataset, where the first three elements go into batch 1 and the remaining elements into batch 2. The `.batch()` method has an optional argument, `drop_remainder`, which is useful for cases where the number of elements in the tensor is not divisible by the batch size...
# Example data
# ============================================
tf.random.set_seed(1)
t_x = tf.random.uniform([4, 3], dtype=tf.float32)
t_y = tf.range(4)
print(t_x)
print(t_y)

# Joining the two tensors into a Dataset
# ============================================
ds_x = tf.data.Dataset.from_tensor_slices(t_x)
ds_y = ...
x: [-0.6697383   0.80296254  0.26194835]   y: 0
x: [-0.13090777 -0.41612196  0.28500414]   y: 1
x: [ 0.951571   -0.12980103  0.32020378]   y: 2
x: [0.20979166 0.27326298 0.22889757]   y: 3
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
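A small sketch of the `drop_remainder` argument described above (illustrative, reusing the six-element dataset from earlier):

```python
import tensorflow as tf

a = [1.2, 3.4, 7.5, 4.1, 5.0, 1.0]
ds = tf.data.Dataset.from_tensor_slices(a)

# Six elements with batch size 4 leave a remainder of 2;
# drop_remainder=True discards the incomplete final batch.
for elem in ds.batch(4, drop_remainder=True):
    print(elem.numpy())  # only one full batch of 4 is produced
```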
Shuffle, batch, and repeat

To train an NN model using stochastic gradient descent optimization, it is important to feed the training data as randomly shuffled batches. We have already seen above how to create batches by calling the `.batch()` method of a dataset object. Now, in addition...
# Shuffling the elements of a tensor
# ===================================================
tf.random.set_seed(1)
ds = ds_joint.shuffle(buffer_size = len(t_x))

for example in ds:
    print(' x:', example[0].numpy(), ' y:', example[1].numpy())
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
where the rows are shuffled without losing the one-to-one correspondence between the entries in x and y. The `.shuffle()` method requires an argument called `buffer_size`, which determines how many elements of the dataset are grouped before shuffling. The elements of the buffer are retrieved at random, and their place in the buffe...
ds = ds_joint.batch(batch_size = 3, drop_remainder = False)
print(ds)

batch_x, batch_y = next(iter(ds))
print('Batch-x:\n', batch_x.numpy())
print('Batch-y: ', batch_y.numpy())
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Furthermore, when training a model for multiple epochs, we need to shuffle and iterate over the dataset for the desired number of epochs. So, let's repeat the batched dataset twice:
ds = ds_joint.batch(3).repeat(count = 2)

for i, (batch_x, batch_y) in enumerate(ds):
    print(i, batch_x.numpy(), batch_y.numpy(), end = '\n'*2)
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
This results in two copies of each batch. If we change the order of these two operations, that is, batch first and then repeat, the results will be different:
ds = ds_joint.repeat(count=2).batch(3)

for i, (batch_x, batch_y) in enumerate(ds):
    print(i, batch_x.numpy(), batch_y.numpy(), end = '\n'*2)
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Finally, to better understand how these three operations (batch, shuffle, and repeat) behave, let's experiment with them in different orders. First, we'll combine the operations in the following order: (1) shuffle, (2) batch, and (3) repeat:
# Order 1: shuffle -> batch -> repeat
tf.random.set_seed(1)
ds = ds_joint.shuffle(4).batch(2).repeat(3)

for i, (batch_x, batch_y) in enumerate(ds):
    print(i, batch_x, batch_y.numpy(), end = '\n'*2)

# Order 2: batch -> shuffle -> repeat
tf.random.set_seed(1)
ds = ds_joint.batch(2).shuffle(4).repeat(3)

for i, (batch_x, ...
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Getting available datasets from the tensorflow_datasets library

The tensorflow_datasets library provides a nice collection of freely available datasets for training or evaluating deep learning models. The datasets are well formatted and come with descriptions...
# pip install tensorflow-datasets
import tensorflow_datasets as tfds

print(len(tfds.list_builders()))
print(tfds.list_builders()[:5])

# Working with the mnist dataset
# ===============================================
mnist, mnist_info = tfds.load('mnist', with_info=True, shuffle_files=False)
print(mnist_info)
print(mn...
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Building an NN model in TensorFlow

The TensorFlow Keras API (tf.keras)

Keras is a high-level NN API and was originally developed to run on top of other libraries such as TensorFlow and Theano. Keras provides a modular, easy-to-use programming interface that enables prototyping and...
X_train = np.arange(10).reshape((10, 1))
y_train = np.array([1.0, 1.3, 3.1, 2.0, 5.0, 6.3, 6.6, 7.4, 8.0, 9.0])
X_train, y_train

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(X_train, y_train, 'o', markersize=10)
ax.set_xlabel('x')
ax.set_ylabel('y')

import tensorflow as tf

X_train_norm = (X_train ...
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Now we can define our linear regression model as $z = wx + b$. Here, we are going to use the Keras API. `tf.keras` provides predefined layers for building complex NN models, but to start, we'll use a model built from scratch:
class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.w = tf.Variable(0.0, name='weight')
        self.b = tf.Variable(0.0, name='bias')

    def call(self, x):
        return self.w * x + self.b

model = MyModel()
model.build(input_shape=(None, 1))
model.summary()
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
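A hedged sketch of how the from-scratch `MyModel` above could be trained with a manual `tf.GradientTape` loop; the MSE loss, SGD optimizer, learning rate, and epoch count are illustrative choices, not the notebook's:

```python
import numpy as np
import tensorflow as tf

X = tf.constant(np.arange(10, dtype=np.float32).reshape(10, 1))
y = tf.constant(np.array([1.0, 1.3, 3.1, 2.0, 5.0, 6.3, 6.6, 7.4, 8.0, 9.0],
                         dtype=np.float32).reshape(10, 1))

model = MyModel()  # the class defined in the cell above
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)

for epoch in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(X) - y))  # mean squared error
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

print(model.w.numpy(), model.b.numpy())
```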
--- now we have an app and can start doing stuff --- creating a mutable Object
myMutable = myApp.mData()
_____no_output_____
MIT
crappyChat_setup.ipynb
rid-dim/pySafe
define Entries and drop them onto Safe
import datetime

now = datetime.datetime.utcnow().strftime('%Y-%m-%d - %H:%M:%S')
myName = 'Welcome to the SAFE Network'
text = 'free speech and free knowledge to the world!'
timeUser = f'{now} {myName}'

entries = {timeUser: text}
_____no_output_____
MIT
crappyChat_setup.ipynb
rid-dim/pySafe
entries = {'firstkey': 'this is awesome',
           'secondKey': 'and soon it should be',
           'thirdKey': 'even easier to use safe with python',
           'i love safe': 'and this is just the start',
           'thisWasUploaded at': datetime.datetime.utcnow().strftime('%Y-%m-%d - %H:%M:%S UTC'),
           'additionalEntry': input('...
infoData = myMutable.new_random_public(777, signKey, entries)
print(safenet.safe_utils.getXorAddresOfMutable(infoData, myMutable.ffi_app))

additionalEntries = {'this wasnt here': 'before'}
additionalEntries = {'baduff': 'another entry'}
myMutable.insertEntries(infoData, additionalEntries)

with open('testfile', 'wb') as f:
    f.w...
_____no_output_____
MIT
crappyChat_setup.ipynb
rid-dim/pySafe
lastState = {}
additionalEntries, lastState = getNewEntries(lastState, myMutable.getCurrentState(infoData))
additionalEntries
import queue
import time
from threading import Thread
import datetime
import sys
from PyQt5.QtWidgets import (QWidget, QPushButton, QTextBrowser, QLineEdit,
                             QHBoxLayout, QVBoxLayout, QApplication)

class Example(QWidget):
    def __init__(self):
        super().__init__()
        self.lineedit1 = QLin...
_____no_output_____
MIT
crappyChat_setup.ipynb
rid-dim/pySafe
Tutorial 13: Skyrmion in a disk

> Interactive online tutorial:
> [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ubermag/oommfc/master?filepath=docs%2Fipynb%2Findex.ipynb)

In this tutorial, we compute and relax a skyrmion in an interfacial-DMI material in a confined, disk-like geometry.
import oommfc as oc import discretisedfield as df import micromagneticmodel as mm
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
We define the mesh on a cuboid through corner points `p1` and `p2`, and the discretisation cell size `cell`.
region = df.Region(p1=(-50e-9, -50e-9, 0), p2=(50e-9, 50e-9, 10e-9))
mesh = df.Mesh(region=region, cell=(5e-9, 5e-9, 5e-9))
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
The mesh we defined is:
%matplotlib inline
mesh.k3d()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
Now, we can define the system object by first setting up the Hamiltonian:
system = mm.System(name='skyrmion')

system.energy = (mm.Exchange(A=1.6e-11)
                 + mm.DMI(D=4e-3, crystalclass='Cnv')
                 + mm.UniaxialAnisotropy(K=0.51e6, u=(0, 0, 1))
                 + mm.Demag()
                 + mm.Zeeman(H=(0, 0, 2e5)))
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
Disk geometry is set up by defining the saturation magnetisation (norm of the magnetisation field). For that, we define a function:
Ms = 1.1e6

def Ms_fun(pos):
    """Function to set magnitude of magnetisation:
    zero outside cylindric shape, Ms inside cylinder.
    Cylinder radius is 50nm.
    """
    x, y, z = pos
    if (x**2 + y**2)**0.5 < 50e-9:
        return Ms
    else:
        return 0
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
The second function we need is the one that defines the initial magnetisation, which is going to relax to a skyrmion.
def m_init(pos):
    """Function to set initial magnetisation direction:
    -z inside cylinder (r=10nm), +z outside cylinder.
    y-component to break symmetry.
    """
    x, y, z = pos
    if (x**2 + y**2)**0.5 < 10e-9:
        return (0, 0, -1)
    else:
        return (0, 0, 1)

# create system with...
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
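The cell above breaks off at the system-creation comment. A hedged one-liner for how the magnetisation field is typically assembled from `m_init` and `Ms_fun` in this API (assuming `discretisedfield`'s `Field(mesh, dim, value, norm)` signature):

```python
# Sketch: combine initial direction (value) and saturation magnetisation (norm).
system.m = df.Field(mesh, dim=3, value=m_init, norm=Ms_fun)
```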
The geometry is now:
system.m.norm.k3d_nonzero()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
and the initial magnetisation is:
system.m.plane('z').mpl()
/Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:715: RuntimeWarning: divide by zero encountered in double_scalars length = a * (widthu_per_lenu / (self.scale * self.width)) /Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:715...
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
Finally we can minimise the energy and plot the magnetisation.
# minimize the energy
md = oc.MinDriver()
md.drive(system)

# Plot relaxed configuration: vectors in z-plane
system.m.plane('z').mpl()

# Plot z-component only:
system.m.z.plane('z').mpl()

# 3d-plot of z-component
system.m.z.k3d_scalar(filter_field=system.m.norm)
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
Finally we can sample and plot the magnetisation along the line:
system.m.z.line(p1=(-49e-9, 0, 0), p2=(49e-9, 0, 0), n=20).mpl()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
Reader - Deployment

This component uses a QA model pre-trained in Portuguese on the SQuAD v1.1 dataset; it is a public-domain model available on [Hugging Face](https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese).

Its goal is to find the answer to one or more questions according...
%%writefile Model.py
import joblib
import numpy as np
import pandas as pd
from reader import Reader

class Model:
    def __init__(self):
        self.loaded = False

    def load(self):
        # Load artifacts
        artifacts = joblib.load("/tmp/data/reader.joblib")
        self.model_parameters = artifac...
Overwriting Model.py
MIT
tasks/reader/Deployment.ipynb
platiagro/tasks
Estimator validation

This notebook contains code to generate Figure 2 of the paper. It also serves to compare the estimates of the re-implemented scmemo with the sceb package from Vasilis.
import pandas as pd
import matplotlib.pyplot as plt
import scanpy as sc
import scipy as sp
import itertools
import numpy as np
import scipy.stats as stats
from scipy.integrate import dblquad
import seaborn as sns
from statsmodels.stats.multitest import fdrcorrection
import imp

pd.options.display.max_rows = 999
pd.set_o...
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Check 1D estimates of `sceb` against `scmemo`

Using the Poisson model. The outputs should be identical; this is for checking the implementation.
data = sp.sparse.csr_matrix(simulate.simulate_transcriptomes(100, 20))
adata = sc.AnnData(data)
size_factors = scdd.dd_size_factor(adata)

Nr = data.sum(axis=1).mean()
_, M_dd = scdd.dd_1d_moment(adata, size_factor=scdd.dd_size_factor(adata), Nr=Nr)
var_scdd = scdd.M_to_var(M_dd)
print(var_scdd)

imp.reload(estimator)
me...
[0.5217290008068085, 0.9860336223993191]
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Check 2D estimates of `sceb` against `scmemo`

Using the Poisson model. The outputs should be identical; this is for checking the implementation.
data = sp.sparse.csr_matrix(simulate.simulate_transcriptomes(1000, 4))
adata = sc.AnnData(data)
size_factors = scdd.dd_size_factor(adata)

mean_scdd, cov_scdd, corr_scdd = scdd.dd_covariance(adata, size_factors)
print(cov_scdd)

imp.reload(estimator)
cov_scmemo = estimator._poisson_cov(data, data.shape[0], size_factors, ...
-1.4590297282462616
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Extract parameters from interferon dataset
adata = sc.read(data_path + 'interferon_filtered.h5ad')
adata = adata[adata.obs.cell_type == 'CD4 T cells - ctrl']
data = adata.X.copy()
relative_data = data.toarray()/data.sum(axis=1)

q = 0.07
x_param, z_param, Nc, good_idx = schypo.simulate.extract_parameters(adata.X, q=q, min_mean=q)

imp.reload(simulate)
transcript...
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Compare datasets generated by Poisson and hypergeometric processes
_, poi_captured = simulate.capture_sampling(transcriptome, q=q, process='poisson')
_, hyper_captured = simulate.capture_sampling(transcriptome, q=q, process='hyper')

q_list = [0.05, 0.1, 0.2, 0.3, 0.5]

plt.figure(figsize=(8, 2))
plt.subplots_adjust(wspace=0.3)

for idx, q in enumerate(q_list):
    _, poi_captured =...
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Compare Poisson vs HG estimators
def compare_esimators(q, plot=False, true_data=None, var_q=1e-10):
    q_sq = var_q + q**2
    true_data = schypo.simulate.simulate_transcriptomes(1000, 1000, correlated=True) if true_data is None else true_data
    true_relative_data = true_data / true_data.sum(axis=1).reshape(-1, 1)

    qs, captured_data =...
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
imp.reload(simulate)

q = 0.4
plt.figure(figsize=(4, 4))
plt.subplots_adjust(wspace=0.5, hspace=0.5)
true_data = simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1], corr=x_param[2], Nc=Nc)
compare_esimators(q, plot=True, true_data=true_data)
plt.savefig(fig_path + 'poi_vs_hyper_scatte...
true_data = schypo.simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1], Nc=Nc)

q = 0.025
plt.figure(figsize=(4, 4))
plt.subplots_adjust(wspace=0.5, hspace=0.5)
compare_esimators(q, plot=True, true_data=true_data)
plt.savefig(fig_path + 'poi_vs_hyper_scatter_rv_2.5.png', bbox_inches='t...
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
TRTR and TSTR Results Comparison
# import libraries
import warnings
warnings.filterwarnings("ignore")

import numpy as np
import pandas as pd
from matplotlib import pyplot as plt

pd.set_option('precision', 4)
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
1. Create an empty dataset to save the metric differences
DATA_TYPES = ['Real', 'GM', 'SDV', 'CTGAN', 'WGANGP']
SYNTHESIZERS = ['GM', 'SDV', 'CTGAN', 'WGANGP']
ml_models = ['RF', 'KNN', 'DT', 'SVM', 'MLP']
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
2. Read the results obtained with TRTR and TSTR
FILEPATHS = {'Real': 'RESULTS/models_results_real.csv',
             'GM': 'RESULTS/models_results_gm.csv',
             'SDV': 'RESULTS/models_results_sdv.csv',
             'CTGAN': 'RESULTS/models_results_ctgan.csv',
             'WGANGP': 'RESULTS/models_results_wgangp.csv'}

# iterate over all datasets filepaths an...
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
3. Calculate the differences between the models' metrics
metrics_diffs_all = dict()
real_metrics = results_all['Real']

columns = ['data', 'accuracy_diff', 'precision_diff', 'recall_diff', 'f1_diff']
metrics = ['accuracy', 'precision', 'recall', 'f1']

for name in SYNTHESIZERS:
    syn_metrics = results_all[name]
    metrics_diffs_all[name] = pd.DataFrame(columns = columns)
    for...
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
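The loop above is truncated. A hedged sketch of computing the absolute TRTR-vs-TSTR differences per ML model; it assumes each results dataframe has a `'model'` column alongside the metric columns, which is not confirmed by the visible cells:

```python
import pandas as pd

for name in SYNTHESIZERS:
    syn_metrics = results_all[name]
    rows = []
    for model in ml_models:
        row = {'data': model}
        for metric in metrics:
            # Assumed 'model' column identifying the ML model in each results file.
            real_val = real_metrics.loc[real_metrics['model'] == model, metric].mean()
            syn_val = syn_metrics.loc[syn_metrics['model'] == model, metric].mean()
            row[metric + '_diff'] = abs(real_val - syn_val)  # absolute TRTR-TSTR gap
        rows.append(row)
    metrics_diffs_all[name] = pd.DataFrame(rows, columns=columns)
```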
4. Compare absolute differences

4.1. Barplots for each metric
metrics = ['accuracy', 'precision', 'recall', 'f1']
metrics_diff = ['accuracy_diff', 'precision_diff', 'recall_diff', 'f1_diff']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple']
barwidth = 0.15

fig, axs = plt.subplots(nrows=1, ncols=4, figsize=(15, 2.5))
axs_idxs = range(4)
idx = dict(zip(met...
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
Generating Simpson's Paradox

We have been manually setting it up, but now we should also be able to generate it more programmatically. This notebook will describe how we develop some functions that will be included in the `sp_data_util` package.
# %load code/env
# standard imports we use throughout the project
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import matplotlib.pyplot as plt

import wiggum as wg
import sp_data_util as spdata
from sp_data_util import sp_plot
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We have been thinking of SP through Gaussian mixture data, so we'll first work with that. To cause SP we need the clusters to have a trend opposite to the per-cluster covariance.
# setup
r_clusters = -.6       # correlation coefficient of clusters
cluster_spread = .8    # pearson correlation of means
p_sp_clusters = .5     # portion of clusters with SP
k = 5                  # number of clusters
cluster_size = [2, 3]
domain_range = [0, 20, 0, 20]
N = 200                # number of points
p_clusters = [1.0/k]*k

# keep all means in the mid...
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
However, independent sampling isn't really very uniform, and we'd like to ensure the clusters are more spread out, so we can use some post-processing to thin out close ones.
mu_thin = [mu[0]]   # keep the first one
p_dist = [1]

# we'll use a gaussian kernel around each to filter and only the closest point matters
dist = lambda mu_c, x: stats.norm.pdf(min(np.sum(np.square(mu_c - x), axis=1)))

for m in mu:
    p_keep = 1 - dist(mu_thin, m)
    if p_keep > .99:
        mu_thin.append(m)
        p_...
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
Now we can sample points on top of that; also, we'll only use the first k.
sns.regplot(mu_thin[:k, 0], mu_thin[:k, 1])
plt.axis(domain_range)
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
Keeping only a few, we can end up with ones in the center, but if we sort them by distance to the ones previously selected, we get them spread out a little more.
# sort by distance
mu_sort, p_sort = zip(*sorted(zip(mu_thin, p_dist),
                            key = lambda x: x[1], reverse = True))
mu_sort = np.asarray(mu_sort)

sns.regplot(mu_sort[:k, 0], mu_sort[:k, 1])
plt.axis(domain_range)

# cluster covariance
cluster_corr = np.asarray([[1, r_clusters], [r_clusters, 1]])
cluster_std = np.diag(n...
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We might not want all of the clusters to have the reversal though, so we can also sample the covariances.
# cluster covariance
cluster_std = np.diag(np.sqrt(cluster_size))

cluster_corr_sp = np.asarray([[1, r_clusters], [r_clusters, 1]])       # correlation with sp
cluster_cov_sp = np.dot(cluster_std, cluster_corr_sp).dot(cluster_std)  # cov with sp

cluster_corr = np.asarray([[1, -r_clusters], [-r_clusters, 1]])  # correlation without sp
cl...
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We'll call this construction of SP `geometric_2d_gmm_sp` and it's included in the `sp_data_utils` module now, so it can be called as follows. We'll change the portion of clusters with SP to 1, to ensure that all are SP.
type(r_clusters)
type(cluster_size)
type(cluster_spread)
type(p_sp_clusters)
type(domain_range)
type(p_clusters)

p_sp_clusters = .9

sp_df2 = spdata.geometric_2d_gmm_sp(r_clusters, cluster_size, cluster_spread,
                                   p_sp_clusters, domain_range, k, N, p_clusters)

sp_plot(sp_df2, 'x1', 'x2', 'color'...
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
With this, we can start to see how the parameters give a little control.
# setup
r_clusters = -.4       # correlation coefficient of clusters
cluster_spread = .8    # pearson correlation of means
p_sp_clusters = .6     # portion of clusters with SP
k = 5                  # number of clusters
cluster_size = [4, 4]
domain_range = [0, 20, 0, 20]
N = 200                # number of points
p_clusters = [.5, .2, .1, .1, .1]

sp_df3 = spdata...
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We might want to add multiple views, so we added a function that takes the same parameters, or lists, to allow each view to have different parameters. We'll look first at just two views with the same parameters, both as one another and as above.
many_sp_df = spdata.geometric_indep_views_gmm_sp(2, r_clusters, cluster_size, cluster_spread,
                                                 p_sp_clusters, domain_range, k, N, p_clusters)
sp_plot(many_sp_df, 'x1', 'x2', 'A')
sp_plot(many_sp_df, 'x3', 'x4', 'B')
many_sp_df.head()
200 4
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We can also look at the pairs of variables that we did not design SP into and see that they have very different structure.
# f, ax_grid = plt.subplots(2, 2)   # , fig_size=(10,10)
sp_plot(many_sp_df, 'x1', 'x4', 'A')
sp_plot(many_sp_df, 'x2', 'x4', 'B')
sp_plot(many_sp_df, 'x2', 'x3', 'B')
sp_plot(many_sp_df, 'x1', 'x3', 'B')
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
And we can set up the views to be different from one another by design
# setup
r_clusters = [.8, -.2]      # correlation coefficient of clusters
cluster_spread = [.8, .2]   # pearson correlation of means
p_sp_clusters = [.6, 1]     # portion of clusters with SP
k = [5, 3]                  # number of clusters
cluster_size = [4, 4]
domain_range = [0, 20, 0, 20]
N = 200                     # number of points
p_clusters = [[.5, .2, .1, .1...
200 4
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
And we can run our detection algorithm on this as well.
many_sp_df_diff_result = wg.detect_simpsons_paradox(many_sp_df_diff)
many_sp_df_diff_result
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We designed SP to occur between attributes `x1` and `x2` with respect to `A`, and between `x3` and `x4` grouped by `B`, for portions of the subgroups. We detect other occurrences. It can be interesting to examine trends between the designed and spontaneous occurrences of SP, so
designed_SP = [('x1', 'x2', 'A'), ('x3', 'x4', 'B')]

des = []
for i, r in enumerate(many_sp_df_diff_result[['attr1', 'attr2', 'groupbyAttr']].values):
    if tuple(r) in designed_SP:
        des.append(i)

many_sp_df_diff_result['designed'] = 'no'
many_sp_df_diff_result.loc[des, 'designed'] = 'yes'
many_sp_df_diff_result.head()
...
200 6
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
Here I'm going to test the details of the function that we are going to write for real video testing.
y = read_video.read_video('/Users/mojtaba/Desktop/OrNet Project/DAT VIDEOS/LLO/DsRed2-HeLa_2_21_LLO_Cell0.mov')

y2 = np.array(y[1:])
y2.shape

y3 = y2[0, :, :, :]
y3.shape

def manual_scan(self, video):
    """
    Manual, loop-based implementation of raster scanning.
    (reference implementation)
    """
    ...
_____no_output_____
MIT
notebooks/real_video_test.ipynb
quinngroup/ornet-reu-2018
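The cell above cuts off inside `manual_scan`; a hedged, vectorized sketch of the raster-scanning idea it references, flattening each frame of a (frames, height, width) video into a column of a pixels-by-frames matrix (shapes are assumptions based on `y2.shape` above):

```python
import numpy as np

def raster_scan(video):
    """Flatten each (H, W) frame of a (T, H, W) video into a column,
    giving an (H*W, T) matrix of raster-scanned pixels."""
    frames, height, width = video.shape
    return video.reshape(frames, height * width).T

# Illustrative usage on random data standing in for the grayscale video.
video = np.random.rand(10, 4, 4)
print(raster_scan(video).shape)  # -> (16, 10)
```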