| column | type | lengths |
| :-- | :-- | :-- |
| markdown | string | 0–1.02M |
| code | string | 0–832k |
| output | string | 0–1.02M |
| license | string | 3–36 |
| path | string | 6–265 |
| repo_name | string | 6–127 |
Prepare for training

This next section of code performs the following tasks:

* Specify the GS bucket and create an output directory for model checkpoints and eval results.
* Specify the task and download the training data.
* Specify the ALBERT pretrained model.
#Download GLUE data
!git clone https://github.com/nyu-mll/GLUE-baselines download_glue
GLUE_DIR='glue_data'
!python download_glue/download_glue_data.py --data_dir $GLUE_DIR --tasks all
# Please find the full list of tasks and their fine-tuning hyperparameters
# here https://github.com/google-research/albert/blob/master/...
***** Model output directory: gs://luanps/albert-tfhub/models/RTE *****
Apache-2.0
albert_finetune_optimization.ipynb
luanps/albert
Now let's run the fine-tuning scripts. If you use the default MRPC task, this should finish in around 10 minutes and you will get an accuracy of around 86.5. Choose hyperparameters using [Optuna](https://optuna.readthedocs.io/en/stable/index.html)
#Install Optuna optimization lib
!pip install optuna
import optuna
import uuid

def get_last_acc_from_file(result_file):
    # Return the last eval_accuracy reported in the results file
    with open(result_file, 'r') as f:
        for line in f:
            if 'eval_accuracy' in line:
                k, v = line.split(' = ')
                return float(v)

def object...
_____no_output_____
Apache-2.0
albert_finetune_optimization.ipynb
luanps/albert
Analyzing Optimization Results
#Download pkl file from GCP import joblib study_file = f'{TASK}_study.pkl' !gsutil cp $OUTPUT_DIR/$study_file . study = joblib.load(study_file) study.trials_dataframe() import optuna from optuna.visualization import plot_contour from optuna.visualization import plot_edf from optuna.visualization import plot_intermed...
_____no_output_____
Apache-2.0
albert_finetune_optimization.ipynb
luanps/albert
Bernoulli Bandit

We are going to implement several exploration strategies for the simplest problem: the Bernoulli bandit. The bandit has $K$ actions. Action $k$ produces a reward of 1.0 with probability $0 \le \theta_k \le 1$, which is unknown to the agent but fixed over time. The agent's objective is to minimize regret over a fixed number $T...
class BernoulliBandit:
    def __init__(self, n_actions=5):
        self._probs = np.random.random(n_actions)

    @property
    def action_count(self):
        return len(self._probs)

    def pull(self, action):
        # Bernoulli reward: 1.0 with probability self._probs[action]
        if np.random.random() > self._probs[action]:
            return 0.0
        return 1.0
    ...
_____no_output_____
MIT
Informatics/Reinforcement Learning/Practical RL - HSE/week6_exploration/bandits.ipynb
MarcosSalib/Cocktail_MOOC
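The regret mentioned above can be made concrete with a short sketch: after $T$ steps, regret is $\sum_t (\max_k \theta_k - \theta_{a_t})$. A minimal, self-contained illustration (assuming numpy; the uniformly random policy is a deliberately bad baseline):

```python
# Cumulative regret of a uniformly random policy on a Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(0)
probs = rng.random(5)          # hidden success probabilities theta_k
best = probs.max()

regret = 0.0
for t in range(1000):
    action = rng.integers(5)   # random policy: pick any arm uniformly
    regret += best - probs[action]

# Random play accrues regret linearly in T; a good agent grows sublinearly.
print(regret)
```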
Epsilon-greedy agent

**for** $t = 1,2,...$ **do**
&nbsp;&nbsp; **for** $k = 1,...,K$ **do**
&nbsp;&nbsp;&nbsp;&nbsp; $\hat\theta_k \leftarrow \alpha_k / (\alpha_k + \beta_k)$
&nbsp;&nbsp; **end for**
&nbsp;&nbsp; $x_t \leftarrow \mathrm{argmax}_{k}\hat\theta_k$ with probability $1 - \epsilon$, or a random action with probab...
class EpsilonGreedyAgent(AbstractAgent):
    def __init__(self, epsilon=0.01):
        self._epsilon = epsilon

    def get_action(self):
        alpha = self._successes
        beta = self._failures
        theta_ = alpha / (alpha + beta)
        if np.random.random() < self._epsilon:
            ...
_____no_output_____
MIT
Informatics/Reinforcement Learning/Practical RL - HSE/week6_exploration/bandits.ipynb
MarcosSalib/Cocktail_MOOC
UCB Agent

The epsilon-greedy strategy has no preference among actions. It would be better to select among actions that are uncertain or have the potential to be optimal. One can come up with the idea of an index for each action that represents optimality and uncertainty at the same time. One efficient way to do this is the UCB1 algori...
class UCBAgent(AbstractAgent):
    def __init__(self, gamma=0.01):
        self._gamma = gamma

    def get_action(self):
        alpha = self._successes
        beta = self._failures
        t = self._total_pulls
        omega_ = alpha / (alpha + beta) + self._gamma * np.sqrt(2*np.lo...
_____no_output_____
MIT
Informatics/Reinforcement Learning/Practical RL - HSE/week6_exploration/bandits.ipynb
MarcosSalib/Cocktail_MOOC
Thompson sampling

The UCB1 algorithm does not take into account the actual distribution of rewards. If we know the distribution, we can do much better by using Thompson sampling:

**for** $t = 1,2,...$ **do**
&nbsp;&nbsp; **for** $k = 1,...,K$ **do**
&nbsp;&nbsp;&nbsp;&nbsp; Sample $\hat\theta_k \sim beta(\alpha_k, \b...
class ThompsonSamplingAgent(AbstractAgent):
    def get_action(self):
        alpha = self._successes
        beta = self._failures
        # Sample from the Beta posterior (with a Beta(1, 1) prior)
        theta_ = np.random.beta(alpha + 1, beta + 1)
        return np.argmax(theta_)

from collections import OrderedDict

def get_regret(env, agents, n_steps=5...
/home/x/anaconda3/envs/rl/lib/python3.7/site-packages/ipykernel_launcher.py:10: RuntimeWarning: invalid value encountered in true_divide # Remove the CWD from sys.path while we load stuff. /home/x/anaconda3/envs/rl/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in true...
MIT
Informatics/Reinforcement Learning/Practical RL - HSE/week6_exploration/bandits.ipynb
MarcosSalib/Cocktail_MOOC
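The `get_regret` helper is truncated in the cell above. A self-contained sketch of the comparison loop it implements — run each agent on the bandit and accumulate per-step regret — using minimal stand-in classes (the `Bandit`/`RandomAgent` names here are illustrative, not the notebook's):

```python
import numpy as np

class Bandit:
    def __init__(self, n=5, seed=0):
        self._probs = np.random.default_rng(seed).random(n)
        self.n = n
    def optimal_reward(self):
        return self._probs.max()
    def pull(self, a):
        return float(np.random.random() < self._probs[a])

class RandomAgent:
    name = "random"
    def init_actions(self, n):
        self.n = n
    def get_action(self):
        return np.random.randint(self.n)
    def update(self, action, reward):
        pass

def get_regret(env, agents, n_steps=1000):
    regret = {agent.name: np.zeros(n_steps) for agent in agents}
    for agent in agents:
        agent.init_actions(env.n)
        for t in range(n_steps):
            a = agent.get_action()
            agent.update(a, env.pull(a))
            # expected per-step regret of the chosen arm
            regret[agent.name][t] = env.optimal_reward() - env._probs[a]
    return {name: np.cumsum(r) for name, r in regret.items()}

curves = get_regret(Bandit(), [RandomAgent()], n_steps=500)
```

Plotting each agent's curve on one figure reproduces the comparison the notebook submits to Coursera.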
Submit to coursera
from submit import submit_bandits submit_bandits(agents, regret, '', '')
Submitted to Coursera platform. See results on assignment page!
MIT
Informatics/Reinforcement Learning/Practical RL - HSE/week6_exploration/bandits.ipynb
MarcosSalib/Cocktail_MOOC
Tasks:

1. Create a list of integers in Python and sum it with recursion; the base case is when the list is empty.
2. Write a countdown with recursion.
3. Get the value at the middle position from a stack ADT.
class Pila:
    def __init__(self):
        self.items = []

    def imprimirCompleto(self):
        for x in range(0, len(self.items), 1):
            print(self.items[x], end=",")

    def estaVacia(self):
        return self.items == []

    def incluir(self, item):
        self.items.app...
1. Create a list of integers in Python and sum it with recursion; the base case is when the list is empty. [1, 2, 3, 4, 5, 6, 7, 8, 9] 9 17 24 30 35 39 42 44 45 2. Write a countdown with recursion. COUNTDOWN STARTS 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 ...
MIT
Tarea_7_Recursividad.ipynb
Sahp59/daa_2021_1
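The first two tasks above can be sketched in a few lines; the third depends on the `Pila` class, so a plain-list stand-in is shown (function names here are illustrative):

```python
# Minimal sketches of the three tasks.
def recursive_sum(xs):
    # Base case: an empty list sums to 0.
    if not xs:
        return 0
    return xs[0] + recursive_sum(xs[1:])

def countdown(n):
    # Recursive countdown: print n, then recurse until below zero.
    if n < 0:
        return
    print(n)
    countdown(n - 1)

def middle_of_stack(items):
    # For a stack backed by a Python list, the element at the middle index.
    return items[len(items) // 2]

print(recursive_sum([1, 2, 3, 4, 5]))   # 15
print(middle_of_stack([1, 2, 3, 4, 5])) # 3
```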
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Azure Machine Learni...
import os
import azureml.core
from azureml.core import Workspace, Experiment, Datastore
from azureml.widgets import RunDetails

# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Pipeline-specific SDK importsHere, we import key pipeline modules, whose use will be illustrated in the examples below.
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

print("Pipeline SDK-specific imports completed")
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Initialize WorkspaceInitialize a [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class%29) object from persisted configuration.
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')

# Default datastore
def_blob_store = ws.get_default_datastore()

# The following call GETS the Azure Blob Store associated with your workspace.
# Note that workspaceblobstore is **the name of this store and CANN...
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Required data and script files for the tutorial

Sample files required to finish this tutorial are already copied to the corresponding source_directory locations. Even though the .py files provided in the samples do not contain much "ML work", as a data scientist you will work on such scripts extensively in practice. To c...
# get_default_datastore() gets the default Azure Blob Store associated with your workspace.
# Here we are reusing the def_blob_store object we obtained earlier
def_blob_store.upload_files(["./20news.pkl"], target_path="20newsgroups", overwrite=True)
print("Upload call completed")
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
(Optional) See your files using Azure PortalOnce you successfully uploaded the files, you can browse to them (or upload more files) using [Azure Portal](https://portal.azure.com). At the portal, make sure you have selected your subscription (click *Resource Groups* and then select the subscription). Then look for your...
cts = ws.compute_targets
for ct in cts:
    print(ct)
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Retrieve or create an Azure Machine Learning compute

Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Azure Machine Learning Compute in the current workspace if it doesn't already exist. We will then r...
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

aml_compute_target = "cpu-cluster"
try:
    aml_compute = AmlCompute(ws, aml_compute_target)
    print("found existing compute target.")
except ComputeTargetException:
    print("creating new compu...
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
**Wait for this call to finish before proceeding (you will see the asterisk turning to a number).**Now that you have created the compute target, let's see what the workspace's compute_targets() function returns. You should now see one entry named 'amlcompute' of type AmlCompute. **Now that we have completed learning th...
# Uses default values for PythonScriptStep construct.

source_directory = './train'
print('Source directory for the step is {}.'.format(os.path.realpath(source_directory)))

# Syntax
# PythonScriptStep(
#     script_name,
#     name=None,
#     arguments=None,
#     compute_target=None,
#     runconfig=None,
#     ...
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
**Note:** In the above call to PythonScriptStep(), the flag *allow_reuse* determines whether the step should reuse previous results when run with the same settings/inputs. This flag defaults to *True* because, when inputs and parameters have not changed, we typically do not want to ...
# For this step, we use a different source_directory
source_directory = './compare'
print('Source directory for the step is {}.'.format(os.path.realpath(source_directory)))

# All steps use the same Azure Machine Learning compute target as well
step2 = PythonScriptStep(name="compare_step",
                         scri...
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Build the pipeline

Once we have the steps (or steps collection), we can build the [pipeline](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py). By default, all these steps will run in **parallel** once we submit the pipeline for run.

A pipeline is...
# Syntax
# Pipeline(workspace,
#          steps,
#          description=None,
#          default_datastore_name=None,
#          default_source_directory=None,
#          resolve_closure=True,
#          _workflow_provider=None,
#          _service_endpoint=None)

pipeline1 = Pipeline(workspace=ws, steps=steps)
...
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Validate the pipelineYou have the option to [validate](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-pyvalidate--) the pipeline prior to submitting for run. The platform runs validation steps such as checking for circular dependencies and parame...
pipeline1.validate()
print("Pipeline validation complete")
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Submit the pipeline[Submitting](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-pysubmit) the pipeline involves creating an [Experiment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment?view=azure-ml-py) object and ...
# Submit syntax
# submit(experiment_name,
#        pipeline_parameters=None,
#        continue_on_step_failure=False,
#        regenerate_outputs=False)

pipeline_run1 = Experiment(ws, 'Hello_World1').submit(pipeline1, regenerate_outputs=False)
print("Pipeline is submitted for execution")
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
**Note:** If regenerate_outputs is set to True, a new submit will always force generation of all step outputs, and disallow data reuse for any step of this run. Once this run is complete, however, subsequent runs may reuse the results of this run.

Examine the pipeline run

Use RunDetails Widget

We are going to use the R...
RunDetails(pipeline_run1).show()
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Use Pipeline SDK objectsYou can cycle through the node_run objects and examine job logs, stdout, and stderr of each of the steps.
step_runs = pipeline_run1.get_children()
for step_run in step_runs:
    status = step_run.get_status()
    print('Script:', step_run.name, 'status:', status)
    # Change this if you want to see details even if the Step has succeeded.
    if status == "Failed":
        joblog = step_run.get_job_log()
        print...
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Get additional run details

If you wait until the pipeline_run is finished, you may be able to get additional details on the run. **Since this is a blocking call, the following code is commented out.**
#pipeline_run1.wait_for_completion()
#for step_run in pipeline_run1.get_children():
#    print("{}: {}".format(step_run.name, step_run.get_metrics()))
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
Running a few steps in sequence

Now let's see how we run a few steps in sequence. We already have three steps defined earlier; let's *reuse* those steps for this part. We will reuse step1, step2, and step3, but build the pipeline so that step3 is chained after step2 and step2 after step1. Note that there is no expl...
step2.run_after(step1)
step3.run_after(step2)

# Try a loop
#step2.run_after(step3)

# Now, construct the pipeline using the steps.
# We can specify the "final step" in the chain;
# Pipeline will take care of "transitive closure" and
# figure out the implicit or explicit dependencies
# https://www.geeksforgeeks.org/...
_____no_output_____
MIT
how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb
alla15747/MachineLearningNotebooks
DCGAN Tutorial

Author: Nathan Inkawhich

Introduction

This tutorial will give an introduction to DCGANs through an example. We will train a generative adversarial network (GAN) to generate new celebrities after showing it pictures of many real celebrities. Most of the code here is from the ...
from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as tran...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Inputs

Let's define some inputs for the run:

- **dataroot** - the path to the root of the dataset folder. We will talk more about the dataset in the next section
- **workers** - the number of worker threads for loading the data with the DataLoader
- **batch_size** - the batch size used in training. The DCGAN p...
# Root directory for dataset
dataroot = "data/celeba"

# Number of workers for dataloader
workers = 2

# Batch size during training
batch_size = 128

# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64

# Number of channels in the training images. For colo...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Data

In this tutorial we will use the Celeb-A Faces dataset, which can be downloaded at the linked site or from Google Drive. The dataset will download as a file named *img_align_celeba.zip*. Once downloaded, create a directory named *celeba* and extract the zip file into that directory. Then, set the *dataroot* i...
# We can use an image folder dataset the way we have it setup.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               ...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Implementation

With our input parameters set and the dataset prepared, we can now get into the implementation. We will start with the weight initialization strategy, then talk about the generator, discriminator, loss functions, and training loop in detail.

Weight Initialization

From the DCG...
# custom weights initialization called on netG and netD
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Generator

The generator, $G$, is designed to map the latent space vector ($z$) to data-space. Since our data are images, converting $z$ to data-space means ultimately creating an RGB image with the same size as the training images (i.e. 3x64x64). In practice, this is accomplished through a series of strided two dime...
# Generator Code

class Generator(nn.Module):
    def __init__(self, ngpu):
        super(Generator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Now, we can instantiate the generator and apply the ``weights_init`` function. Check out the printed model to see how the generator object is structured.
# Create the generator
netG = Generator(ngpu).to(device)

# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netG = nn.DataParallel(netG, list(range(ngpu)))

# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netG.apply(weights_init)

# Print the mode...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Discriminator

As mentioned, the discriminator, $D$, is a binary classification network that takes an image as input and outputs a scalar probability that the input image is real (as opposed to fake). Here, $D$ takes a 3x64x64 input image, processes it through a series of Conv2d, BatchNorm2d, and LeakyReLU layer...
class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # st...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Now, as with the generator, we can create the discriminator, apply the ``weights_init`` function, and print the model's structure.
# Create the Discriminator
netD = Discriminator(ngpu).to(device)

# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netD = nn.DataParallel(netD, list(range(ngpu)))

# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netD.apply(weights_init)

# Pr...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Loss Functions and Optimizers

With $D$ and $G$ set up, we can specify how they learn through the loss functions and optimizers. We will use the Binary Cross Entropy loss (`BCELoss`) function, which is defined in PyTorch as:

\begin{align}\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \lef...
# Initialize BCELoss function
criterion = nn.BCELoss()

# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)

# Establish convention for real and fake labels during training
real_label = 1.
fake_label = 0.

# Setup Adam...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Training

Finally, now that we have all of the parts of the GAN framework defined, we can train it. Be mindful that training GANs is somewhat of an art form, as incorrect hyperparameter settings lead to mode collapse with little explanation of what went wrong. Here, we will closely follow Algorithm 1 from Goodfellow'...
# Training Loop

# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0

print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):
        ############################
        ...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
Results

Finally, let's check out how we did. Here, we will look at three different results. First, we will see how D and G's losses changed during training. Second, we will visualize G's output on the fixed_noise batch for every epoch. And third, we will look at a batch of real data next to a batch of fake data from G...
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses, label="G")
plt.plot(D_losses, label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
**Visualization of G's progression**

Remember how we saved the generator's output on the fixed_noise batch after every epoch of training. Now, we can visualize the training progression of G with an animation. Press the play button to start the animation.
#%%capture
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i, (1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)

HTML(ani.to_jshtml())
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
**Real Images vs. Fake Images**

Finally, let's take a look at some real images and fake images side by side.
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))

# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(), (1,2,0)))

# Plot...
_____no_output_____
MIT
PyTorch/Visual-Audio/Torchscript/dcgan_faces_tutorial.ipynb
MitchellTesla/Quantm
MSDS
def msds(N, arr):
    w_e = 0
    e_e = 0
    n_e = 0
    s_e = 0
    nw_e = 0
    ne_e = 0
    sw_e = 0
    se_e = 0
    for row in range(arr.shape[0] // N):
        for col in range(arr.shape[1] // N):
            f_block = arr[row * N : (row + 1) * N, col * N : (col + 1) * N]
            # w
            if col == 0:
                ...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Forward transform: vertical DCT
for row in range(IMG.img.shape[0] // N):
    for col in range(IMG.img.shape[1]):
        eight_points = IMG.img[N * row : N * (row + 1), col]
        c = scipy.fftpack.dct(eight_points, norm="ortho")
        Fk[N * row : N * (row + 1), col] = c
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
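The loop above applies an 8-point DCT to each vertical block one column at a time. Since `scipy.fftpack.dct` accepts an `axis` argument, the same transform can be written without the inner loops; a sketch on a random test image (shapes and `N=8` assumed, matching the notebook):

```python
# Vectorizing the per-column block DCT with the axis argument.
import numpy as np
import scipy.fftpack

N = 8
img = np.random.default_rng(0).random((32, 16))

# Loop version, as in the notebook cell.
Fk_loop = np.empty_like(img)
for row in range(img.shape[0] // N):
    for col in range(img.shape[1]):
        Fk_loop[N*row:N*(row+1), col] = scipy.fftpack.dct(
            img[N*row:N*(row+1), col], norm="ortho")

# Vectorized: reshape into (blocks, N, cols) and transform along axis 1.
blocks = img.reshape(-1, N, img.shape[1])
Fk_vec = scipy.fftpack.dct(blocks, norm="ortho", axis=1).reshape(img.shape)

assert np.allclose(Fk_loop, Fk_vec)
```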
Residual
dmlct = DMLCT(n_bar, N)
for row in range(IMG.img.shape[0] // N):
    for col in range(IMG.img.shape[1]):
        # If this is a view, we can modify it in place
        F = Fk[N * row : N * (row + 1), col]
        F_L = get_F_L_k_vertical(Fk, N, row, col)
        F_R = get_F_R_k_vertical(Fk, N, row, col)
        U_k_n_bar = np.zeros(N)
        for k...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Horizontal DCT
for row in range(Fk.shape[0]):
    for col in range(Fk.shape[1] // N):
        eight_points = Fk[row, N * col : N * (col + 1)]
        c = scipy.fftpack.dct(eight_points, norm="ortho")
        Fk[row, N * col : N * (col + 1)] = c
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Residual
dmlct = DMLCT(n_bar, N)
for row in range(IMG.img.shape[0]):
    for col in range(IMG.img.shape[1] // N):
        F = Fk[row, N * col : N * (col + 1)]
        F_L = get_F_L_k_horizontal(Fk, N, row, col)
        F_R = get_F_R_k_horizontal(Fk, N, row, col)
        U_k_n_bar = np.zeros(N)
        for kh in range(n_bar - 2...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Keep a copy of the coefficients
Fk_Ori = np.copy(Fk)
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Inverse transform
recover = np.zeros(IMG.img.shape)
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Horizontal residual
for k in range(1, n_bar - 2 + 1):
    dmlct = DMLCT(k+1, N)
    for row in range(IMG.img.shape[0]):
        for col in range(IMG.img.shape[1] // N):
            F = Fk[row, N * col : N * (col + 1)]
            F_L = get_F_L_k_horizontal(Fk, N, row, col)
            F_R = get_F_R_k_horizontal(Fk, N, row, col)
            ...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
IDCT
for row in range(Fk.shape[0]):
    for col in range(Fk.shape[1] // N):
        F = Fk[row, N * col : N * col + N]
        data = scipy.fftpack.idct(F, norm="ortho")
        # Store back into Fk, then process the vertical direction
        Fk[row, N * col : N * col + N] = data
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Vertical residual
for k in range(1, n_bar - 2 + 1):
    dmlct = DMLCT(k+1, N)
    for row in range(IMG.img.shape[0] // N):
        for col in range(IMG.img.shape[1]):
            # If this is a view, we can modify it in place
            F = Fk[N * row : N * (row + 1), col]
            F_L = get_F_L_k_vertical(Fk, N, row, col)
            F_R = get_F_R_k_vertical(Fk,...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
IDCT
for row in range(Fk.shape[0] // N):
    for col in range(Fk.shape[1]):
        F = Fk[N * row : N * (row + 1), col]
        data = scipy.fftpack.idct(F, norm="ortho")
        # Recovered image
        recover[N * row : N * (row + 1), col] = data

plt.imshow(recover.astype("u8"), cmap="gray")
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Recovered! Quantization table
Q50_Luminance = np.array(
    [
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 12...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Quantization
Fk = np.copy(Fk_Ori)
Q_Fk = np.zeros(Fk.shape)
for row in range(IMG.img.shape[0] // N):
    for col in range(IMG.img.shape[1] // N):
        block = Fk[row * N : (row + 1) * N, col * N : (col + 1) * N]
        # Quantize
        block = np.round(block / Q_Luminance)
        # Dequantize
        block = block * Q_Luminance
        ...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Horizontal residual
for k in range(1, n_bar - 2 + 1):
    dmlct = DMLCT(k+1, N)
    for row in range(IMG.img.shape[0]):
        for col in range(IMG.img.shape[1] // N):
            F = Fk[row, N * col : N * (col + 1)]
            F_L = get_F_L_k_horizontal(Fk, N, row, col)
            F_R = get_F_R_k_horizontal(Fk, N, row, col)
            ...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
IDCT
for row in range(Fk.shape[0]):
    for col in range(Fk.shape[1] // N):
        F = Fk[row, N * col : N * col + N]
        data = scipy.fftpack.idct(F, norm="ortho")
        # Store back into Fk, then process the vertical direction
        Fk[row, N * col : N * col + N] = data
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Vertical residual
for k in range(1, n_bar - 2 + 1):
    dmlct = DMLCT(k+1, N)
    for row in range(IMG.img.shape[0] // N):
        for col in range(IMG.img.shape[1]):
            # If this is a view, we can modify it in place
            F = Fk[N * row : N * (row + 1), col]
            F_L = get_F_L_k_vertical(Fk, N, row, col)
            F_R = get_F_R_k_vertical(Fk,...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
IDCT
for row in range(Fk.shape[0] // N):
    for col in range(Fk.shape[1]):
        F = Fk[N * row : N * (row + 1), col]
        data = scipy.fftpack.idct(F, norm="ortho")
        # Recovered image
        Q_recover[N * row : N * (row + 1), col] = data

Q_recover = np.round(Q_recover)
plt.imshow(Q_recover, cmap="gray")
plt.imsave("DMLC...
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Entropy (information content)
qfk = pd.Series(Q_Fk.flatten())
pro = qfk.value_counts() / qfk.value_counts().sum()
pro.head()

# Shannon entropy in bits
S = 0
for pi in pro:
    S -= pi * np.log2(pi)
S
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
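The cell above computes the Shannon entropy of the quantized coefficients' empirical distribution. A quick sanity check of the same computation on a known case — four equally likely symbols carry exactly 2 bits:

```python
# Shannon entropy check: uniform distribution over 4 symbols = 2 bits.
import numpy as np
import pandas as pd

vals = pd.Series([0, 1, 2, 3] * 25)        # 100 samples, 4 equal classes
pro = vals.value_counts() / len(vals)     # each probability is 0.25
S = -(pro * np.log2(pro)).sum()
print(S)  # 2.0
```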
PSNR
MSE = np.sum(np.sum(np.power((IMG.img - Q_recover), 2))) / (Q_recover.shape[0] * Q_recover.shape[1])
PSNR = 10 * np.log10(255 * 255 / MSE)
PSNR
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
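A quick numpy sanity check of the PSNR formula used above, assuming 8-bit images (peak value 255): a uniform error of one gray level gives MSE = 1 and PSNR = 20·log10(255) ≈ 48.13 dB.

```python
# PSNR formula check on a known case.
import numpy as np

a = np.zeros((8, 8))
b = a + 1.0                       # every pixel off by exactly 1
mse = np.mean((a - b) ** 2)       # MSE = 1
psnr = 10 * np.log10(255 * 255 / mse)
print(round(psnr, 2))  # 48.13
```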
MSSIM
MSSIM = ssim(IMG.img, Q_recover.astype(IMG.img.dtype), gaussian_weights=True, sigma=1.5, K1=0.01, K2=0.03)
MSSIM

dmlct = DMLCT(n_bar, N)
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
MSDS
MSDSt, MSDS1, MSDS2 = msds(N, Q_recover)
MSDS1
MSDS2
_____no_output_____
MIT
DMLCT/8x8/DMLCT.ipynb
Hiroya-W/Python_DCT
Line Charts

Tutorial
# mounting drive
from google.colab import drive
drive.mount("/content/drive")

# Importing libraries
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns

# Loading Data
spotify_data = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/...
_____no_output_____
MIT
02 Line charts.ipynb
Bluelord/DataCamp_courses
Plot the data
# Size of plot
plt.figure(figsize=(15,8))

# Line chart showing daily global streams of each song
sns.lineplot(data=spotify_data)
_____no_output_____
MIT
02 Line charts.ipynb
Bluelord/DataCamp_courses
As you can see above, the line of code is relatively short and has two main components:

- `sns.lineplot` tells the notebook that we want to create a line chart.
- _Every command that you learn about in this course will start with `sns`, which indicates that the command comes from the [seaborn](https://seaborn.pydata.or...
# Set the width and height of the figure
plt.figure(figsize=(14,6))

# Add title
plt.title("Daily Global Streams of Popular Songs in 2017-2018")

# Line chart showing daily global streams of each song
sns.lineplot(data=spotify_data)
_____no_output_____
MIT
02 Line charts.ipynb
Bluelord/DataCamp_courses
The first line of code sets the size of the figure to `14` inches (in width) by `6` inches (in height). To set the size of _any figure_, you need only copy the same line of code as it appears. Then, if you'd like to use a custom size, change the provided values of `14` and `6` to the desired width and height.The seco...
list(spotify_data.columns)

# Set the width and height of the figure
plt.figure(figsize=(14,6))

# Add title
plt.title("Daily Global Streams of Popular Songs in 2017-2018")

# Line chart showing daily global streams of 'Shape of You'
sns.lineplot(data=spotify_data['Shape of You'], label="Shape of You")

# Line chart showi...
_____no_output_____
MIT
02 Line charts.ipynb
Bluelord/DataCamp_courses
Exercise ---
data = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Kaggle_Courses/03 Data Visualization/museum_visitors.csv", index_col="Date", parse_dates=True) # Print the last five rows of the data data.tail(5) # How many visitors did the Chinese American Museum # receive in July 2018? ca_museum_jul18 = ...
_____no_output_____
MIT
02 Line charts.ipynb
Bluelord/DataCamp_courses
Numpy template* Cells before the ` [[nbplot]] template` are ignored.* Cells starting with ` [[nbplot]] ignore` are also ignored.* Some variables are substituted in every cell: * `${root_path}`: the working directory when `nbplot` was called. Input files will be relative to this.* Some variables are substituted in th...
# [[nbplot]] template # Note: don't change that first line, it tells nbplot that the notebook below is a template # This cell will be executed and the metadata dictionary loaded, but not included in the output. template_metadata = { 'name': 'numpy', 'format_version': '0.1' } import io, math, os, sys from base6...
_____no_output_____
MIT
nbplot/templates/nbplot-numpy.ipynb
nburrus/nbplot
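The `${root_path}` substitution described above can be illustrated with Python's `string.Template`. This is a hypothetical sketch of the mechanism only, not nbplot's actual implementation; the cell source and variable values are made up for the example.

```python
# Sketch of template-variable substitution as nbplot describes it:
# every ${...} placeholder in a cell's source gets replaced before
# the notebook is generated. (Illustrative only; nbplot's real code
# may differ.)
from string import Template

cell_source = 'root_path = Path("${root_path}")'
rendered = Template(cell_source).substitute(root_path="/home/user/project")
print(rendered)
```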
Cheatsheet gnuplot matplotlib

|Gnuplot | Matplotlib|
| :-- | :-- |
| `with lines` | `default` or `ax.plot(..., '-')` |
| `with linespoints` | `ax.plot(..., '.-')` |
| `with points` | `ax.plot(..., '.')` |
| `smooth csplines` | `ax.plot(*csplines(x,y))` |
| `using 1:2` | `ax.plot(data[:,0], data[:,1])` |
| `using 0:1` | `ax...
# interactive mode by default %matplotlib notebook #%matplotlib inline plt.ioff() # show the figure only at the end to avoid postponing potential loading errors fig,ax = plt.subplots(figsize=(8,6), num='MyWindow') #fig.suptitle('MyPlot') #ax.set_title('My Title') #ax.set_xlabel('x') #ax.set_ylabel('y') root_path = Pa...
_____no_output_____
MIT
nbplot/templates/nbplot-numpy.ipynb
nburrus/nbplot
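The `csplines` helper referenced in the `smooth csplines` row of the cheatsheet could be sketched as below. The name and signature are assumptions taken from the table; the template's actual helper may differ.

```python
# Hypothetical sketch of a `csplines` helper matching gnuplot's
# `smooth csplines`: fit a cubic spline through (x, y) and return a
# densely sampled curve suitable for ax.plot(*csplines(x, y)).
import numpy as np
from scipy.interpolate import CubicSpline

def csplines(x, y, num=200):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    order = np.argsort(x)              # CubicSpline requires increasing x
    cs = CubicSpline(x[order], y[order])
    xs = np.linspace(x.min(), x.max(), num)
    return xs, cs(xs)
```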
Missing Category Imputation ==> Feature-Engine What is Feature-Engine Feature-Engine is an open source Python package that I created at the back of this course. - Feature-Engine includes all the feature engineering techniques described in the course- Feature-Engine works like Scikit-learn, so it is easy to learn- Fe...
import pandas as pd import numpy as np import matplotlib.pyplot as plt # to split the datasets from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline # from feature-engine from feature_engine.imputation import CategoricalImputer # let's load the dataset with a selected group of va...
_____no_output_____
BSD-3-Clause
04.20-Missing-Category-Imputation-Feature-Engine.ipynb
sri-spirited/feature-engineering-for-ml
Feature-Engine captures the categorical variables automatically
# we call the imputer from feature-engine # we don't need to specify anything imputer = CategoricalImputer() # we fit the imputer imputer.fit(X_train) # we see that the imputer found the categorical variables to # impute with the frequent category imputer.variables
_____no_output_____
BSD-3-Clause
04.20-Missing-Category-Imputation-Feature-Engine.ipynb
sri-spirited/feature-engineering-for-ml
**This imputer will replace missing data in categorical variables with 'Missing'**
# feature-engine returns a dataframe tmp = imputer.transform(X_train) tmp.head() # let's check that the categorical variables don't # contain NA any more tmp[imputer.variables].isnull().mean()
_____no_output_____
BSD-3-Clause
04.20-Missing-Category-Imputation-Feature-Engine.ipynb
sri-spirited/feature-engineering-for-ml
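For comparison, a minimal pandas-only sketch of the same default behaviour — filling NA in categorical columns with the label 'Missing' — on a tiny hypothetical frame. This is an illustration of the effect, not Feature-Engine's implementation.

```python
# Minimal pandas equivalent of CategoricalImputer's default behaviour:
# missing values in categorical columns become the label 'Missing'.
# The toy frame below is made up for illustration.
import pandas as pd

df = pd.DataFrame({"BsmtQual": ["Gd", None, "TA"],
                   "FireplaceQu": [None, "Gd", None]})
filled = df.fillna("Missing")
print(filled)
```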
Feature-engine allows you to specify variable groups easily
# let's do it imputation but this time # and let's do it over 1 of the 2 categorical variables imputer = CategoricalImputer(variables=['BsmtQual']) imputer.fit(X_train) # now the imputer uses only the variables we indicated imputer.variables # transform data set tmp = imputer.transform(X_train) tmp.head() tmp[imput...
_____no_output_____
BSD-3-Clause
04.20-Missing-Category-Imputation-Feature-Engine.ipynb
sri-spirited/feature-engineering-for-ml
Feature-engine can be used with the Scikit-learn pipeline
# let's check the percentage of NA in each categorical variable X_train.isnull().mean()
_____no_output_____
BSD-3-Clause
04.20-Missing-Category-Imputation-Feature-Engine.ipynb
sri-spirited/feature-engineering-for-ml
- BsmtQual: 0.023 ==> frequent category imputation
- FireplaceQu: 0.46 ==> missing category imputation
pipe = Pipeline([ ('imputer_mode', CategoricalImputer(imputation_method='frequent', variables=['BsmtQual'])), ('imputer_missing', CategoricalImputer(variables=['FireplaceQu'])), ]) pipe.fit(X_train) pipe.named_steps['imputer_mode'].variables pipe.named_steps['imputer_missing'].variables # let's transform the da...
_____no_output_____
BSD-3-Clause
04.20-Missing-Category-Imputation-Feature-Engine.ipynb
sri-spirited/feature-engineering-for-ml
Now You Code 1: Number In this now you code we will learn to re-factor a program into a function. This is the most common way to write a function when you are a beginner. *Re-factoring* is the act of re-writing code without changing its functionality. We commonly do re-factoring to improve performance or readability of...
## STEP 2 : Write the program text = input("Enter a number: ") try: number = float(text) except ValueError: number = "NaN" print(number)
Enter a number: 5 5.0
MIT
content/lessons/06/Now-You-Code/NYC1-Number.ipynb
cspelz-su/Final-Project
Next we refactor it into a functionComplete this function. It should be similar to the program above, but it should not have any `input()` or `print()` functions as those are reserved for the main program. Functions should take variables as input and return a variable as output. When the function executes, the variabl...
# Step 3: write the function ## Function: Number ## Argument (input): text value ## Returns (output): float of text value or "NaN" def number(text): # TODO Write code here
_____no_output_____
MIT
content/lessons/06/Now-You-Code/NYC1-Number.ipynb
cspelz-su/Final-Project
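One possible completion of the Step 3 function, mirroring the try/except logic of the Step 2 program. This is a sketch; other solutions are equally valid.

```python
# Possible solution for Step 3:
# convert `text` to a float, returning the string "NaN" when the
# conversion fails (same logic as the Step 2 program, minus I/O).
def number(text):
    try:
        return float(text)
    except ValueError:
        return "NaN"
```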
Rewrite the program to use the functionFinally re-write the original program to use the new function. The program now works the same as STEP1 but it now uses a function!
## Step 4: write the program from step 2 again, but this time use the function
_____no_output_____
MIT
content/lessons/06/Now-You-Code/NYC1-Number.ipynb
cspelz-su/Final-Project
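A possible Step 4 sketch: the Step 3 function is redefined here so the example is self-contained, and a small main program handles the `input()` and `print()` calls.

```python
# Possible solution for Step 4: the main program only does I/O and
# delegates the conversion to the function.
def number(text):
    try:
        return float(text)
    except ValueError:
        return "NaN"

def main():
    text = input("Enter a number: ")
    print(number(text))
```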
SelfBuildingModel [www.vexpower.com](https://www.vexpower.com)
# Set the right folder import sys import os if not os.path.isdir("mmm"): module_path = os.path.abspath(os.path.join('..')) if module_path not in sys.path: sys.path.append(module_path) import mmm import pandas as pd pd.set_option('display.float_format', lambda x: '%.3f' % x) # suppress scientific nota...
_____no_output_____
MIT
notebooks/SelfBuildingModel.ipynb
hammer-mt/VEX-MMM
pytorch-yolo2 ref: https://github.com/longcw/yolo2-pytorch get model
from darknet import Darknet cfgfile = './cfg/yolo-pose.cfg' weightfile = './backup/cargo/model_backup.weights' weightfile2 = './backup/cargo/model.weights' m = Darknet(cfgfile) m2 = Darknet(cfgfile) m.load_weights(weightfile) m2.load_weights(weightfile2) print('Loading weights from %s... Done!' % (weightfile)) prin...
_____no_output_____
MIT
1.yolo2_pytorch_onnx_save_model_v2.ipynb
chenyu36/singleshot6Dpose
save detection information
import pickle op_dict = { 'num_classes':m.num_classes, 'anchors':m.anchors, 'num_anchors':m.num_anchors } pickle.dump(op_dict, open('detection_information.pkl','wb'))
_____no_output_____
MIT
1.yolo2_pytorch_onnx_save_model_v2.ipynb
chenyu36/singleshot6Dpose
Use ONNX to convert the model ref: https://github.com/onnx/tutorials/blob/master/tutorials/PytorchOnnxExport.ipynb
import torch.onnx from torch.autograd import Variable # Standard ImageNet input - 3 channels, 224x224, # values don't matter as we care about network structure. # But they can also be real inputs. dummy_input = Variable(torch.randn(1, 3, 416, 416)) # Obtain your model, it can be also constructed in your script explici...
_____no_output_____
MIT
1.yolo2_pytorch_onnx_save_model_v2.ipynb
chenyu36/singleshot6Dpose
Build TensorRT engine and serialize it
import cv2 import numpy as np from numpy import array import pycuda.driver as cuda import pycuda.autoinit import tensorrt as trt import sys, os sys.path.insert(1, os.path.join(sys.path[0], "..")) import common # You can set the logger severity higher to suppress messages (or lower to display more messages). TRT_LOG...
trt outputs shape (1, 20, 13, 13)
MIT
1.yolo2_pytorch_onnx_save_model_v2.ipynb
chenyu36/singleshot6Dpose
Automatic Feature Selection Often we collect many features that might be related to a supervised prediction task, but we don't know which of them are actually predictive. To improve interpretability, and sometimes also generalization performance, we can use automatic feature selection to select a subset of the origin...
from sklearn.datasets import load_breast_cancer, load_digits from sklearn.model_selection import train_test_split cancer = load_breast_cancer() # get deterministic random numbers rng = np.random.RandomState(42) noise = rng.normal(size=(len(cancer.data), 50)) # add noise features to the data # the first 30 features ar...
_____no_output_____
CC0-1.0
notebooks/20 Feature Selection.ipynb
jlgorman/scipy-2016-sklearn
We have to define a threshold on the p-value of the statistical test to decide how many features to keep. There are several strategies implemented in scikit-learn, a straight-forward one being ``SelectPercentile``, which selects a percentile of the original features (we select 50% below):
from sklearn.feature_selection import SelectPercentile # use f_classif (the default) and SelectPercentile to select 50% of features: select = SelectPercentile(percentile=50) select.fit(X_train, y_train) # transform training set: X_train_selected = select.transform(X_train) print(X_train.shape) print(X_train_selected....
_____no_output_____
CC0-1.0
notebooks/20 Feature Selection.ipynb
jlgorman/scipy-2016-sklearn
We can also use the test statistic directly to see how relevant each feature is. As the breast cancer dataset is a classification task, we use f_classif, the F-test for classification. Below we plot the p-values associated with each of the 80 features (30 original features + 50 noise features). Low p-values indicate in...
from sklearn.feature_selection import f_classif, f_regression, chi2 F, p = f_classif(X_train, y_train) plt.figure() plt.plot(p, 'o')
_____no_output_____
CC0-1.0
notebooks/20 Feature Selection.ipynb
jlgorman/scipy-2016-sklearn
Clearly most of the first 30 features have very small p-values.Going back to the SelectPercentile transformer, we can obtain the features that are selected using the ``get_support`` method:
mask = select.get_support() print(mask) # visualize the mask. black is True, white is False plt.matshow(mask.reshape(1, -1), cmap='gray_r')
_____no_output_____
CC0-1.0
notebooks/20 Feature Selection.ipynb
jlgorman/scipy-2016-sklearn
Nearly all of the original 30 features were recovered. We can also analyze the utility of the feature selection by training a supervised model on the data. It's important to learn the feature selection only on the training set!
from sklearn.linear_model import LogisticRegression # transform test data: X_test_selected = select.transform(X_test) lr = LogisticRegression() lr.fit(X_train, y_train) print("Score with all features: %f" % lr.score(X_test, y_test)) lr.fit(X_train_selected, y_train) print("Score with only selected features: %f" % lr....
_____no_output_____
CC0-1.0
notebooks/20 Feature Selection.ipynb
jlgorman/scipy-2016-sklearn
Model-based Feature Selection A somewhat more sophisticated method for feature selection is using a supervised machine learning model, and selecting features based on how important they were deemed by the model. This requires the model to provide some way to rank the features by importance. This can be done for all tre...
from sklearn.feature_selection import SelectFromModel from sklearn.ensemble import RandomForestClassifier select = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=42), threshold="median") select.fit(X_train, y_train) X_train_rf = select.transform(X_train) print(X_train.shape) print(X_train_rf.shap...
_____no_output_____
CC0-1.0
notebooks/20 Feature Selection.ipynb
jlgorman/scipy-2016-sklearn
This method builds a single model (in this case a random forest) and uses the feature importances from this model. We can do a somewhat more elaborate search by training multiple models on subsets of the data. One particular strategy is recursive feature elimination: Recursive Feature Elimination Recursive feature elimi...
from sklearn.feature_selection import RFE select = RFE(RandomForestClassifier(n_estimators=100, random_state=42), n_features_to_select=40) select.fit(X_train, y_train) # visualize the selected features: mask = select.get_support() plt.matshow(mask.reshape(1, -1), cmap='gray_r') X_train_rfe = select.transform(X_train) ...
_____no_output_____
CC0-1.0
notebooks/20 Feature Selection.ipynb
jlgorman/scipy-2016-sklearn
Exercises Plot the "XOR" dataset which is created like this:
xx, yy = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50)) rng = np.random.RandomState(0) X = rng.randn(200, 2) Y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0)
_____no_output_____
CC0-1.0
notebooks/20 Feature Selection.ipynb
jlgorman/scipy-2016-sklearn
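A possible solution sketch for the plotting exercise: scatter the 200 points and colour them by the XOR label `Y`. The colormap and styling choices are mine, not prescribed by the exercise; the `Agg` backend is set only so the sketch runs headlessly.

```python
# Possible solution: plot the XOR dataset, colouring points by class.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# data created exactly as in the exercise
xx, yy = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
Y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0)

# scatter the points; the xx/yy grid is typically used later for
# decision surfaces, so it is left unused here
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap="bwr", edgecolor="k")
plt.xlabel("feature 0")
plt.ylabel("feature 1")
```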
Demo: Wrapped single_run_process The basic steps to set up an OpenCLSim simulation are:* Import libraries* Initialise simpy environment* Define object classes* Create objects * Create sites * Create vessels * Create activities* Register processes and run simpy----This notebook provides an example of a single_run_pro...
import datetime, time import simpy import shapely.geometry import pandas as pd import openclsim.core as core import openclsim.model as model import openclsim.plot as plot
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
1. Initialise simpy environment
# setup environment simulation_start = 0 my_env = simpy.Environment(initial_time=simulation_start)
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
2. Define object classes
# create a Site object based on desired mixin classes Site = type( "Site", ( core.Identifiable, core.Log, core.Locatable, core.HasContainer, core.HasResource, ), {}, ) # create a TransportProcessingResource object based on desired mixin classes TransportProcessin...
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
3. Create objects 3.1. Create site object(s)
# prepare input data for from_site location_from_site = shapely.geometry.Point(4.18055556, 52.18664444) data_from_site = {"env": my_env, "name": "from_site", "geometry": location_from_site, "capacity": 10_000, "level": 10_000 } # i...
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
3.2. Create vessel object(s)
# prepare input data for vessel_01 data_vessel01 = {"env": my_env, "name": "vessel01", "geometry": location_from_site, "loading_rate": 1, "unloading_rate": 5, "capacity": 1_000, "compute_v": lambda x: 10 + 2 * x ...
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
3.3 Create activity/activities
# initialise registry registry = {} # create a 'while activity' that contains a pre-packed set of 'sub_processes' single_run, while_activity = model.single_run_process( name="single_run", registry=registry, env=my_env, origin=from_site, destination=to_site, mover=vessel01, loader=vessel01, un...
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
4. Register processes and run simpy
# initiate the simpy processes defined in the 'while activity' and run simpy model.register_processes([while_activity]) my_env.run()
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
5. Inspect results 5.1 Inspect logs
display(plot.get_log_dataframe(vessel01, [*single_run, while_activity])) display(plot.get_log_dataframe(from_site, [*single_run])) display(plot.get_log_dataframe(to_site, [*single_run]))
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
5.2 Visualise gantt charts
plot.get_gantt_chart([while_activity, vessel01, *single_run])
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
5.3 Visualise step charts
fig = plot.get_step_chart([from_site, to_site, vessel01])
_____no_output_____
MIT
notebooks/09_wrapped_single_run_process.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
a) By using the Logistic Regression Algorithm Part A: Data Preprocessing Step 1: importing the libraries
import numpy as np import matplotlib.pyplot as plt import pandas as pd
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 2: import the dataset
dataset=pd.read_csv('Logistic Data.csv') dataset
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 3: create the feature matrix and dependent variable vector
a=dataset.iloc[:,:-1].values b=dataset.iloc[:,-1].values a b
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
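The steps that typically follow this preprocessing — splitting the data and fitting the classifier — can be sketched as below. Synthetic data is used because 'Logistic Data.csv' is not available in this excerpt; the variable names `a` and `b` follow the convention above.

```python
# Sketch of the usual next steps: train/test split and fitting a
# LogisticRegression. Data is synthetic (the real CSV is not shown here).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
a = rng.randn(100, 2)                      # feature matrix
b = (a[:, 0] + a[:, 1] > 0).astype(int)    # dependent variable vector

a_train, a_test, b_train, b_test = train_test_split(
    a, b, test_size=0.25, random_state=0)

clf = LogisticRegression()
clf.fit(a_train, b_train)
accuracy = clf.score(a_test, b_test)
print(accuracy)
```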