Notice the large amplitude excursions in the data at the start and end of our data segment. This is **spectral leakage**, caused by the filters we applied at the boundaries ringing off the discontinuities where the data suddenly starts and ends (for a time up to the length of the filter). To avoid this we should trim the ends of the data in all steps of our filtering. Let's try cropping a couple of seconds off of either side.
# Remove 2 seconds of data from both the beginning and end
conditioned = strain.crop(2, 2)
pylab.plot(conditioned.sample_times, conditioned)
pylab.xlabel('Time (s)')
MIT
Session9/Day4/Matched_filter_tutorial.ipynb
jlmciver/LSSTC-DSFP-Sessions
That's better.

Calculating the spectral density of the data

Optimal matched filtering requires *whitening*: weighting the frequency components of the potential signal and data by the estimated noise amplitude. Let's compute the power spectral density (PSD) of our conditioned data.
from pycbc.psd import interpolate, inverse_spectrum_truncation

# Estimate the power spectral density
# We use 4-second samples of our time series in the Welch method.
psd = conditioned.psd(4)

# Now that we have the psd we need to interpolate it to match our data
# and then limit the filter length of 1 / PSD. After this, we can
# directly use this PSD to filter the data in a controlled manner
psd = interpolate(psd, conditioned.delta_f)

# 1/PSD will now act as a filter with an effective length of 4 seconds
# Since the data has been highpassed above 15 Hz, and will have low values
# below this, we need to inform the function to not include frequencies
# below this frequency.
psd = inverse_spectrum_truncation(psd, 4 * conditioned.sample_rate,
                                  low_frequency_cutoff=15)
---- Define a signal model

Recall that matched filtering is essentially integrating the inner product between your data and your signal model in frequency or time (after weighting frequencies correctly) as you slide your signal model over your data in time. If there is a signal in the data that matches your 'template', we will see a large value of this inner product (the SNR, or 'signal-to-noise ratio') at that time. In a full search, we would grid over the parameters and calculate the SNR time series for each template in our template bank.

Here we'll define just one template. Let's assume equal masses (which is within the posterior probability of GW150914). Because we want to match our signal model with each time sample in our data, let's also rescale our signal model vector to match the same number of time samples as our data vector (**this is very important!**). Let's also plot the output to see what it looks like.
m = 36  # Solar masses
hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
                         mass1=m, mass2=m,
                         delta_t=conditioned.delta_t,
                         f_lower=20)

# We should resize the vector of our template to match our data
hp.resize(len(conditioned))
pylab.plot(hp)
pylab.xlabel('Time samples')
Note that the waveform template currently begins at the start of the vector. However, we want our SNR time series (the inner product between our data and our template) to track with the approximate merger time. To do this, we need to shift our template so that the merger is approximately at the first bin of the data. For this reason, waveforms returned from `get_td_waveform` have their merger stamped with time zero, so we can easily shift the merger into the right position to compute our SNR time series. Let's try shifting our template time and plot the output.
template = hp.cyclic_time_shift(hp.start_time)
pylab.plot(template)
pylab.xlabel('Time samples')
--- Calculate an SNR time series

Now that we've pre-conditioned our data and defined a signal model, we can compute the output of our matched filter search.
from pycbc.filter import matched_filter
import numpy

snr = matched_filter(template, conditioned,
                     psd=psd, low_frequency_cutoff=20)

pylab.figure(figsize=[10, 4])
pylab.plot(snr.sample_times, abs(snr))
pylab.xlabel('Time (s)')
pylab.ylabel('SNR')
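At heart, the matched filter above is a noise-weighted sliding inner product between the data and the template. The following toy numpy sketch (synthetic data and a made-up sinusoidal "template"; no PSD weighting, unlike PyCBC's `matched_filter`) shows how the peak of the correlation recovers the injection time:

```python
import numpy as np

# Toy matched filter: slide a known template over noisy data and record
# the inner product at each offset. Conceptual sketch only -- the real
# search also whitens by the PSD and works in the frequency domain.
rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * np.arange(64) / 8.0)   # toy "signal model"
data = rng.normal(0, 1, 1024)                        # Gaussian noise
injection_at = 500
data[injection_at:injection_at + 64] += 3 * template # hide a signal

# np.correlate in 'valid' mode computes the inner product at every lag
snr_like = np.correlate(data, template, mode='valid') / np.linalg.norm(template)
peak = int(np.argmax(np.abs(snr_like)))
print(peak)  # lands at (or within a sample or two of) the injection offset
```

The division by the template norm is a crude normalization so the peak value is in noise-standard-deviation units, loosely analogous to SNR.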
Note that, as we expect, the start and end of our SNR time series are corrupted by the template filter and the PSD filter. To account for this, we can smoothly zero out 4 seconds (the length of the PSD filter) at the beginning and end for the PSD filtering. We should remove 4 additional seconds at the beginning to account for the template length, although this is somewhat generous for so short a template. A longer signal, such as one from a binary neutron star merger, would require much more padding at the beginning of the vector.
snr = snr.crop(4 + 4, 4)

pylab.figure(figsize=[10, 4])
pylab.plot(snr.sample_times, abs(snr))
pylab.ylabel('Signal-to-noise')
pylab.xlabel('Time (s)')
pylab.show()
Finally, now that the output is properly cropped, we can find the peak of our SNR time series and estimate the merger time and associated SNR of any event candidate within the data.
peak = abs(snr).numpy().argmax()
snrp = snr[peak]
time = snr.sample_times[peak]
print("We found a signal at {}s with SNR {}".format(time, abs(snrp)))
We found a signal at 1126259462.42s with SNR 19.6770890131
You found the first gravitational wave detection in LIGO Hanford data! Nice work.

--- Exercise 3

How does the SNR change if you re-compute the matched filter result using a signal model with component masses that are closer to the current estimates for GW150914, say m1 = 36 Msol and m2 = 31 Msol?
# complete
Exercise 4

**Network SNR** is the quadrature sum of the single-detector SNR from each contributing detector. GW150914 was detected by H1 and L1. Try calculating the network SNR (you'll need to estimate the SNR in L1 first), and compare your answer to the network PyCBC SNR as reported in the [GWTC-1 catalog](https://arxiv.org/abs/1811.12907).
# complete
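As a starting point, the quadrature combination itself is one line. In the sketch below, the H1 value is the peak found above in this notebook, while the L1 value is a placeholder you should replace with your own matched-filter result:

```python
import math

# Network SNR is the quadrature sum of single-detector SNRs.
snr_h1 = 19.68   # H1 peak SNR found earlier in this notebook
snr_l1 = 13.0    # hypothetical placeholder -- compute this yourself for L1
network_snr = math.sqrt(snr_h1**2 + snr_l1**2)
print(round(network_snr, 2))  # → 23.59 with these example numbers
```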
--- Estimate the single-detector significance of an event candidate

Great, we found a large spike in SNR! What are the chances this is a real astrophysical signal? How often would detector noise produce this by chance? Let's plot a histogram of SNR values output by our matched filtering analysis for this time and see how much this trigger stands out.
# import what we need
from scipy.stats import norm

# make a histogram of SNR values
background = abs(snr)

# plot the histogram to check out any other outliers
pylab.hist(background, bins=50)
pylab.xlabel('SNR')
pylab.semilogy()

# use norm.fit to fit a normal (Gaussian) distribution
(mu, sigma) = norm.fit(background)

# print out the mean and standard deviation of the fit
print('The fit mean = %f and the fit std dev = %f' % (mu, sigma))
The fit mean = 1.295883 and the fit std dev = 0.739471
Exercise 5

At what single-detector SNR is the significance of a trigger > 5 sigma? Remember that sigma is constant for a normal distribution (read: this should be simple multiplication now that we have estimated what 1 sigma is).
# complete
--- Challenge

Our matched filter analysis assumes the noise is *stationary* and *Gaussian*, which is not a good assumption, and this short data set isn't representative of all the various things that can go bump in the detector (remember the phone?). **The simple significance estimate above won't work as soon as we encounter a glitch!** We need a better noise background estimate, and we can leverage our detector network to help make our signals stand out from our background. Observing a gravitational wave signal between detectors is an important cross-check to minimize the impact of transient detector noise. Our strategy:

* We look for loud triggers within a time window to identify foreground events that occur within the gravitational wave travel time (v = c) between detectors, but could come from any sky position.
* We use time slides to estimate the noise background for a network of detectors.

If you still have time, try coding up an algorithm that checks for time coincidence between triggers in different detectors. Remember that the maximum gravitational wave travel time between the LIGO detectors is ~10 ms. Check your code with the GPS times for the H1 and L1 triggers you identified for GW150914.
# complete if time
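One minimal sketch of the coincidence check described above, assuming each detector's triggers are given as plain lists of GPS times (floats); the trigger times below are hypothetical illustrations, not real search output:

```python
# Triggers from two detectors are "coincident" if their times differ by
# no more than the light travel time window (~10 ms between LIGO sites).

def find_coincidences(times_a, times_b, window=0.010):
    """Return (t_a, t_b) pairs whose GPS times differ by <= window seconds."""
    pairs = []
    for ta in times_a:
        for tb in times_b:
            if abs(ta - tb) <= window:
                pairs.append((ta, tb))
    return pairs

# Hypothetical trigger lists for illustration; only the first pair,
# separated by ~7 ms, should survive the 10 ms window.
h1_triggers = [1126259462.42, 1126259500.00]
l1_triggers = [1126259462.413, 1126259600.00]
print(find_coincidences(h1_triggers, l1_triggers))
```

For long trigger lists you would sort both lists and sweep through them once instead of using the quadratic double loop, but the windowing logic is the same.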
![](data/dst/numpy_image_alpha_blend.jpg)
dst = src1 * 0.5 + src2 * 0.2 + (96, 128, 160)
print(dst.max())

dst = dst.clip(0, 255)
print(dst.max())

Image.fromarray(dst.astype(np.uint8)).save('data/dst/numpy_image_alpha_blend_gamma.jpg')
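A small self-contained demonstration of why the `clip` step matters: blending plus a constant offset can push values past the uint8 range, and casting without clipping would wrap around. The tiny arrays below are toy stand-ins for the `src1`/`src2` images loaded elsewhere in the notebook:

```python
import numpy as np

# Toy 1x2 RGB "images" with constant values
src1 = np.full((1, 2, 3), 200.0)
src2 = np.full((1, 2, 3), 180.0)

# Same blend as above: channel values become (232, 264, 296)
dst = src1 * 0.5 + src2 * 0.2 + (96, 128, 160)
pre_max = dst.max()
print(pre_max)    # 296.0 -- out of range for uint8

dst = dst.clip(0, 255)
print(dst.max())  # 255.0 after clipping, safe to cast to uint8
```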
MIT
notebook/numpy_image_alpha_blend.ipynb
puyopop/python-snippets
RESTRICTED BOLTZMANN MACHINES

Introduction

Restricted Boltzmann Machines (RBMs) are shallow neural nets that learn to reconstruct data by themselves in an unsupervised fashion.

Why are RBMs important? An RBM can automatically extract meaningful features from a given input.

How does it work? An RBM is a 2-layer neural network. Simply put, the RBM takes the inputs and translates them into a set of binary values that represents them in the hidden layer. These numbers can then be translated back to reconstruct the inputs. Through several forward and backward passes, the RBM is trained, and a trained RBM can reveal which features are the most important ones when detecting patterns.

What are the applications of RBMs? RBMs are useful for collaborative filtering, dimensionality reduction, classification, regression, feature learning, topic modeling, and even Deep Belief Networks.

Is an RBM a generative or a discriminative model? An RBM is a generative model. To explain, let's first see what is different between discriminative and generative models:

Discriminative: Consider a classification problem in which we want to learn to distinguish between Sedan cars (y = 1) and SUV cars (y = 0), based on some features of cars. Given a training set, an algorithm like logistic regression tries to find a straight line (that is, a decision boundary) that separates the SUVs and Sedans.

Generative: Looking at Sedans, we can build a model of what Sedan cars look like. Then, looking at SUVs, we can build a separate model of what SUV cars look like. Finally, to classify a new car, we can match the new car against the Sedan model and against the SUV model, to see whether the new car looks more like an SUV or a Sedan.

Generative models specify a probability distribution over a dataset of input vectors. We can do both supervised and unsupervised tasks with generative models:

- In an unsupervised task, we try to form a model for P(x), where P is the probability of observing the input vector x.
- In a supervised task, we first form a model for P(x|y), where P is the probability of x given y (the label for x). For example, if y = 0 indicates that a car is an SUV and y = 1 indicates that it is a Sedan, then p(x|y = 0) models the distribution of SUVs' features, and p(x|y = 1) models the distribution of Sedans' features.

If we manage to find P(x|y) and P(y), then we can use Bayes' rule to estimate P(y|x):

$$p(y|x) = \frac{p(x|y)p(y)}{p(x)}$$

Now the question is: can we build a generative model, and then use it to create synthetic data by directly sampling from the modeled probability distributions? Let's see.

Table of Contents: Initialization, RBM layers, What can an RBM do after training?, How to train the model?, Learned features

Initialization

First we have to load the utility file, which contains different utility functions that are not connected in any way to the networks presented in the tutorial, but rather help in processing the outputs into a more understandable way.
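Before moving on, the Bayes-rule estimate introduced earlier can be sanity-checked with made-up numbers: suppose 40% of cars are Sedans (y = 1), and some feature x is seen with p(x|y=1) = 0.7 and p(x|y=0) = 0.2 (all three numbers are illustrative assumptions):

```python
# Numeric illustration of Bayes' rule: p(y|x) = p(x|y) p(y) / p(x)
p_y1 = 0.4            # prior: fraction of Sedans (made-up)
p_x_given_y1 = 0.7    # likelihood of feature x for Sedans (made-up)
p_x_given_y0 = 0.2    # likelihood of feature x for SUVs (made-up)

# p(x) by the law of total probability
p_x = p_x_given_y1 * p_y1 + p_x_given_y0 * (1 - p_y1)

p_y1_given_x = p_x_given_y1 * p_y1 / p_x
print(round(p_y1_given_x, 3))  # → 0.7
```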
import urllib.request

with urllib.request.urlopen("http://deeplearning.net/tutorial/code/utils.py") as url:
    response = url.read()

target = open('utils.py', 'w')
target.write(response.decode('utf-8'))
target.close()
MIT
IBM_AI/5_TensorFlow/ML0120EN-4.1-Review-RBMMNIST.ipynb
merula89/cousera_notebooks
Now, we load in all the packages that we use to create the net including the TensorFlow package:
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
#!pip install pillow
from PIL import Image
from utils import tile_raster_images
import matplotlib.pyplot as plt
%matplotlib inline
RBM layers

An RBM has two layers. The first layer of the RBM is called the visible (or input) layer. Imagine that our toy example has only vectors with 7 values, so the visible layer must have j = 7 input nodes. The second layer is the hidden layer, which possesses i neurons in our case. Each hidden node can take the value 0 or 1 (i.e., si = 1 or si = 0) with a probability that is a logistic function of the inputs it receives from the j visible units, written for example p(si = 1). For our toy sample, we'll use 2 nodes in the hidden layer, so i = 2.

Each node in the first layer also has a bias. We will denote the bias as "v_bias" for the visible units; v_bias is shared among all visible units. We define the bias of the second layer as well, denoted "h_bias" for the hidden units; h_bias is shared among all hidden units.
v_bias = tf.placeholder("float", [7])
h_bias = tf.placeholder("float", [2])
We have to define the weights between the input layer and hidden layer nodes. In the weight matrix, the number of rows is equal to the number of input nodes, and the number of columns is equal to the number of output nodes. Let W be the Tensor of 7x2 (7 - number of visible neurons, 2 - number of hidden neurons) that represents weights between the neurons.
W = tf.constant(np.random.normal(loc=0.0, scale=1.0, size=(7, 2)).astype(np.float32))
What can an RBM do after training?

Think of an RBM as a model that has been trained on images of a dataset of many SUV and Sedan cars. Imagine that the RBM network has only two hidden nodes, one for the weight and one for the size of cars; in a sense, their different configurations represent different kinds of cars, one representing SUVs and one Sedans. In training, through many forward and backward passes, the RBM adjusts its weights to send a stronger signal to either the SUV node (0, 1) or the Sedan node (1, 0) in the hidden layer, given the pixels of the images. Now, given an SUV in the hidden layer, which distribution of pixels should we expect? The RBM can give you two things. First, it encodes your images in the hidden layer. Second, it gives you the probability of observing a case, given some hidden values.

How do we do inference?

An RBM has two phases: the forward pass, and the backward pass (reconstruction).

Phase 1) Forward pass: Input one training sample (one image) X through all visible nodes, and pass it to all hidden nodes. Processing happens in each node of the hidden layer. This computation begins by making stochastic decisions about whether to transmit that input or not (i.e. to determine the state of each hidden unit). At the hidden layer's nodes, X is multiplied by $W_{ij}$ and added to h_bias. The result of those two operations is fed into the sigmoid function, which produces the node's output $p({h_j})$, where j is the unit number:

$p({h_j})= \sigma(\sum_i w_{ij} x_i)$, where $\sigma()$ is the logistic function.

Now let's see what $p({h_j})$ represents. In fact, it is the probability of the hidden unit being on, and all the values together form a probability distribution. That is, the RBM uses the inputs x to make predictions about hidden node activations. For example, imagine that the values of $h_p$ for the first training item are [0.51 0.84]. This tells you the conditional probability for each hidden neuron to be on in Phase 1:

p($h_{1}$ = 1|V) = 0.51
p($h_{2}$ = 1|V) = 0.84

As a result, for each row in the training set, a vector/tensor is generated, which in our case is of size [1x2], giving n vectors in total ($p({h})$ = [nx2]). We then turn unit $h_j$ on with probability $p(h_{j}|V)$, and off with probability $1 - p(h_{j}|V)$. Therefore, the conditional probability of a configuration of h given v (for a training sample) is:

$$p(\mathbf{h} \mid \mathbf{v}) = \prod_{j=0}^H p(h_j \mid \mathbf{v})$$

Now, sample a hidden activation vector h from this probability distribution $p({h_j})$. That is, we sample the activation vector from the probability distribution of hidden layer values.

Before we go further, let's look at a toy example for one case out of all the inputs. Assume that we have a trained RBM, and a very simple input vector such as [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]; let's see what the output of the forward pass would be:
sess = tf.Session()

X = tf.constant([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]])
v_state = X
print("Input: ", sess.run(v_state))

h_bias = tf.constant([0.1, 0.1])
print("hb: ", sess.run(h_bias))
print("w: ", sess.run(W))

# Calculate the probabilities of turning the hidden units on:
h_prob = tf.nn.sigmoid(tf.matmul(v_state, W) + h_bias)  # probabilities of the hidden units
print("p(h|v): ", sess.run(h_prob))

# Draw samples from the distribution:
h_state = tf.nn.relu(tf.sign(h_prob - tf.random_uniform(tf.shape(h_prob))))  # states
print("h0 states:", sess.run(h_state))
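The `relu(sign(h_prob - uniform))` trick above is just Bernoulli sampling: each hidden unit turns on with probability `h_prob`. A small numpy equivalent, using the toy probabilities [0.51, 0.84] from the text, also shows that averaging many samples recovers the probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
h_prob = np.array([0.51, 0.84])

# One sample of the hidden state: 1 with probability p, else 0
h_state = (h_prob > rng.uniform(size=h_prob.shape)).astype(np.float32)
print(h_state)

# Empirical check: the mean over many samples approaches h_prob
freq = (h_prob > rng.uniform(size=(100000, 2))).mean(axis=0)
print(np.round(freq, 2))
```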
Phase 2) Backward pass (reconstruction): The RBM reconstructs data by making several forward and backward passes between the visible and hidden layers. So, in the second phase (i.e. the reconstruction phase), the samples from the hidden layer (i.e. h) play the role of input. That is, h becomes the input in the backward pass. The same weight matrix and the visible layer biases are used to go through the sigmoid function. The produced output is a reconstruction, which is an approximation of the original input.
vb = tf.constant([0.1, 0.2, 0.1, 0.1, 0.1, 0.2, 0.1])
print("b: ", sess.run(vb))

v_prob = sess.run(tf.nn.sigmoid(tf.matmul(h_state, tf.transpose(W)) + vb))
print("p(vi|h): ", v_prob)

v_state = tf.nn.relu(tf.sign(v_prob - tf.random_uniform(tf.shape(v_prob))))
print("v probability states: ", sess.run(v_state))
An RBM learns a probability distribution over the input, and after being trained it can generate new samples from the learned distribution. As you know, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment. The (conditional) probability distribution over the visible units v is given by

$p(\mathbf{v} \mid \mathbf{h}) = \prod_{i=0}^V p(v_i \mid \mathbf{h}),$

where

$p(v_i \mid \mathbf{h}) = \sigma\left( a_i + \sum_{j=0}^H w_{ji} h_j \right)$

So, given the current state of the hidden units and weights, what is the probability of generating [1. 0. 0. 1. 0. 0. 0.] in the reconstruction phase, based on the above probability distribution function?
inp = sess.run(X)
print(inp)
print(v_prob[0])

v_probability = 1
for elm, p in zip(inp[0], v_prob[0]):
    if elm == 1:
        v_probability *= p
    else:
        v_probability *= (1 - p)
v_probability
How similar are the X and V vectors? Of course, the reconstructed values most likely will not look anything like the input vector, because our network has not been trained yet. Our objective is to train the model in such a way that the input vector and the reconstructed vector are the same. Therefore, based on how different the input values look from the ones we just reconstructed, the weights are adjusted.

MNIST

We will be using the MNIST dataset to practice the usage of RBMs. The following cell loads the MNIST dataset.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
Let's look at the dimension of the images.
trX[1].shape
MNIST images have 784 pixels, so the visible layer must have 784 input nodes. For our case, we'll use 50 nodes in the hidden layer, so i = 50.
vb = tf.placeholder("float", [784])
hb = tf.placeholder("float", [50])
Let W be the Tensor of 784x50 (784 - number of visible neurons, 50 - number of hidden neurons) that represents weights between the neurons.
W = tf.placeholder("float", [784, 50])
Let's define the visible layer:
v0_state = tf.placeholder("float", [None, 784])
Now, we can define the hidden layer:
h0_prob = tf.nn.sigmoid(tf.matmul(v0_state, W) + hb)  # probabilities of the hidden units
h0_state = tf.nn.relu(tf.sign(h0_prob - tf.random_uniform(tf.shape(h0_prob))))  # sample_h_given_X
Now, we define the reconstruction part:
v1_prob = tf.nn.sigmoid(tf.matmul(h0_state, tf.transpose(W)) + vb)
v1_state = tf.nn.relu(tf.sign(v1_prob - tf.random_uniform(tf.shape(v1_prob))))  # sample_v_given_h
What is the objective function?

Goal: maximize the likelihood of our data being drawn from the learned distribution.

Calculate error: in each epoch, we compute the "error" as the mean of the squared differences between step 1 and step n; i.e. the error shows the difference between the data and its reconstruction.

Note: tf.reduce_mean computes the mean of elements across dimensions of a tensor.
err = tf.reduce_mean(tf.square(v0_state - v1_state))
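The reconstruction error above is just a mean squared difference; a small self-contained numpy check on toy binary vectors (not the MNIST tensors) makes the arithmetic concrete:

```python
import numpy as np

v0 = np.array([1.0, 0.0, 0.0, 1.0])  # "data"
v1 = np.array([1.0, 1.0, 0.0, 0.0])  # imperfect reconstruction
err = np.mean(np.square(v0 - v1))
print(err)  # → 0.5, since 2 of the 4 entries differ by 1
```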
How to train the model?

Warning! The following part discusses how the model is trained, which requires some algebra background. You can skip this part and still run the next cells.

As mentioned, we want to assign high probability to the input data we train on. So, to train an RBM, we have to maximize the product of the probabilities assigned to all rows v (images) in the training set V (a matrix where each row is treated as a visible vector v), which is equivalent to maximizing the expected log probability of V. We therefore have to update the weights wij to increase p(v) for all v in our training data, which means we have to calculate the derivative:

$$\frac{\partial \log p(\mathbf v)}{\partial w_{ij}}$$

This cannot easily be done by typical gradient descent (SGD), so we use another approach, which has 2 steps: Gibbs Sampling and Contrastive Divergence.

Gibbs Sampling

First, given an input vector v, we use p(h|v) to predict the hidden values h:

$p(h|v) = sigmoid(X \otimes W + hb)$
$h0 = sampleProb(h0)$

Then, knowing the hidden values, we use p(v|h) to reconstruct new input values v:

$p(v|h) = sigmoid(h0 \otimes transpose(W) + vb)$
$v1 = sampleProb(v1)$ (sample v given h)

This process is repeated k times. After k iterations we obtain another input vector vk, recreated from the original input values v0 (or X). Reconstruction steps: get one data point from the data set, like x, and pass it through the net:

- Pass 0: (x) $\Rightarrow$ (h0) $\Rightarrow$ (v1) (v1 is the reconstruction of the first pass)
- Pass 1: (v1) $\Rightarrow$ (h1) $\Rightarrow$ (v2) (v2 is the reconstruction of the second pass)
- Pass 2: (v2) $\Rightarrow$ (h2) $\Rightarrow$ (v3) (v3 is the reconstruction of the third pass)
- Pass k: (vk) $\Rightarrow$ (hk) $\Rightarrow$ (vk+1) (vk+1 is the reconstruction of the kth pass)

What is sampling here (sampleProb)?

In the forward pass: we randomly set the value of each hi to 1 with probability $sigmoid(v \otimes W + hb)$. To sample h given v means to sample from the conditional probability distribution P(h|v): you are asking what the probabilities are of getting a specific set of values for the hidden neurons, given the values v for the visible neurons, and then sampling from this probability distribution.

In reconstruction: we randomly set the value of each vi to 1 with probability $sigmoid(h \otimes transpose(W) + vb)$.

Contrastive Divergence (CD-k)

The update of the weight matrix is done during the Contrastive Divergence step. Vectors v0 and vk are used to calculate the activation probabilities for the hidden values h0 and hk. The difference between the outer products of those probabilities with the input vectors v0 and vk gives the update matrix:

$\Delta W = v0 \otimes h0 - vk \otimes hk$

Contrastive Divergence thus produces a matrix of values that is used to adjust the W matrix; changing W incrementally trains the W values. On each step (epoch), W is updated to a new value W' through the equation below:

$W' = W + alpha * \Delta W$

What is alpha? Here, alpha is a small step size, also known as the "learning rate".

OK, let's assume that k = 1, that is, we take just one more step:
h1_prob = tf.nn.sigmoid(tf.matmul(v1_state, W) + hb)
h1_state = tf.nn.relu(tf.sign(h1_prob - tf.random_uniform(tf.shape(h1_prob))))  # sample_h_given_X

alpha = 0.01
W_Delta = tf.matmul(tf.transpose(v0_state), h0_prob) - tf.matmul(tf.transpose(v1_state), h1_prob)
update_w = W + alpha * W_Delta
update_vb = vb + alpha * tf.reduce_mean(v0_state - v1_state, 0)
update_hb = hb + alpha * tf.reduce_mean(h0_state - h1_state, 0)
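The CD-1 update above can also be written as a short self-contained numpy sketch; the tiny dimensions (7 visible, 2 hidden) match the toy example earlier, and all variables here are standalone, not the TF placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden, alpha = 7, 2, 0.01
W = rng.normal(0, 0.1, (n_visible, n_hidden))
vb = np.zeros(n_visible)
hb = np.zeros(n_hidden)

v0 = np.array([[1., 0., 0., 1., 0., 0., 0.]])      # one training sample
h0_prob = sigmoid(v0 @ W + hb)                     # forward pass
h0 = (h0_prob > rng.uniform(size=h0_prob.shape)).astype(float)
v1_prob = sigmoid(h0 @ W.T + vb)                   # reconstruction
v1 = (v1_prob > rng.uniform(size=v1_prob.shape)).astype(float)
h1_prob = sigmoid(v1 @ W + hb)                     # one more forward pass

# Outer-product difference, then the gradient step W' = W + alpha * dW
delta_W = v0.T @ h0_prob - v1.T @ h1_prob
W = W + alpha * delta_W
print(W.shape)  # weights keep their (7, 2) shape after the update
```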
Let's start a session and initialize the variables:
cur_w = np.zeros([784, 50], np.float32)
cur_vb = np.zeros([784], np.float32)
cur_hb = np.zeros([50], np.float32)
prv_w = np.zeros([784, 50], np.float32)
prv_vb = np.zeros([784], np.float32)
prv_hb = np.zeros([50], np.float32)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
Let's look at the error of the first run, then train:
sess.run(err, feed_dict={v0_state: trX, W: prv_w, vb: prv_vb, hb: prv_hb})

# Parameters
epochs = 5
batchsize = 100
weights = []
errors = []

for epoch in range(epochs):
    for start, end in zip(range(0, len(trX), batchsize), range(batchsize, len(trX), batchsize)):
        batch = trX[start:end]
        cur_w = sess.run(update_w, feed_dict={v0_state: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
        cur_vb = sess.run(update_vb, feed_dict={v0_state: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
        cur_hb = sess.run(update_hb, feed_dict={v0_state: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
        prv_w = cur_w
        prv_vb = cur_vb
        prv_hb = cur_hb
        if start % 10000 == 0:
            errors.append(sess.run(err, feed_dict={v0_state: trX, W: cur_w, vb: cur_vb, hb: cur_hb}))
            weights.append(cur_w)
    print('Epoch: %d' % epoch, 'reconstruction error: %f' % errors[-1])

plt.plot(errors)
plt.xlabel("Batch Number")
plt.ylabel("Error")
plt.show()
What is the final weight after training?
uw = weights[-1].T
print(uw)  # a weight matrix of shape (50, 784)
Learned features

We can take each hidden unit and visualize the connections between that hidden unit and each element in the input vector. In our case, we have 50 hidden units; let's visualize those by plotting the current weights. tile_raster_images helps in generating an easy-to-grasp image from a set of samples or weights: it transforms uw (with one flattened image per row of size 784) into an array in which the images are reshaped and laid out like tiles on a floor.
tile_raster_images(X=cur_w.T, img_shape=(28, 28), tile_shape=(5, 10), tile_spacing=(1, 1))

import matplotlib.pyplot as plt
from PIL import Image
%matplotlib inline

image = Image.fromarray(tile_raster_images(X=cur_w.T, img_shape=(28, 28),
                                           tile_shape=(5, 10), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (18.0, 18.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
Each tile in the above visualization corresponds to a vector of connections between a hidden unit and the visible layer's units. Let's look at one of the learned weights corresponding to one of the hidden units. In this particular square, gray represents weight = 0; the whiter a pixel is, the more positive the weight (closer to 1), and conversely, the darker a pixel is, the more negative the weight. Positive pixels increase the probability of activation of the hidden unit (after multiplying by the input/visible pixels), and negative pixels decrease the probability of the hidden unit being 1 (activated). So why is this important? It shows that this specific square (hidden unit) can detect whether a feature (e.g. a "/" shape) exists in the input.
from PIL import Image

image = Image.fromarray(tile_raster_images(X=cur_w.T[10:11], img_shape=(28, 28),
                                           tile_shape=(1, 1), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (4.0, 4.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
Let's look at the reconstruction of an image now. Imagine that we have a corrupted image of a 3. Let's see if our trained network can fix it. First we plot the image:
!wget -O destructed3.jpg https://ibm.box.com/shared/static/vvm1b63uvuxq88vbw9znpwu5ol380mco.jpg
img = Image.open('destructed3.jpg')
img
Now let's pass this image through the net:
# convert the image to a 1d numpy array
sample_case = np.array(img.convert('I').resize((28, 28))).ravel().reshape((1, -1)) / 255.0
Feed the sample case into the network and reconstruct the output:
hh0_p = tf.nn.sigmoid(tf.matmul(v0_state, W) + hb)
#hh0_s = tf.nn.relu(tf.sign(hh0_p - tf.random_uniform(tf.shape(hh0_p))))
hh0_s = tf.round(hh0_p)

hh0_p_val, hh0_s_val = sess.run((hh0_p, hh0_s), feed_dict={v0_state: sample_case, W: prv_w, hb: prv_hb})
print("Probability nodes in hidden layer:", hh0_p_val)
print("activated nodes in hidden layer:", hh0_s_val)

# reconstruct
vv1_p = tf.nn.sigmoid(tf.matmul(hh0_s_val, tf.transpose(W)) + vb)
rec_prob = sess.run(vv1_p, feed_dict={hh0_s: hh0_s_val, W: prv_w, vb: prv_vb})
_____no_output_____
MIT
IBM_AI/5_TensorFlow/ML0120EN-4.1-Review-RBMMNIST.ipynb
merula89/cousera_notebooks
Here we plot the reconstructed image:
img = Image.fromarray(tile_raster_images(X=rec_prob, img_shape=(28, 28),tile_shape=(1, 1), tile_spacing=(1, 1))) plt.rcParams['figure.figsize'] = (4.0, 4.0) imgplot = plt.imshow(img) imgplot.set_cmap('gray')
_____no_output_____
MIT
IBM_AI/5_TensorFlow/ML0120EN-4.1-Review-RBMMNIST.ipynb
merula89/cousera_notebooks
#Confusion matrix is at the bottom!! ************** import pandas as pd import os from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(r'C:\Users\Lesley\Downloads\train_features.csv'), pd.read_csv(r'C:\Users\Lesley\Downloads\train_labels.csv')) test = pd.read_csv(r'C:\Users\Lesley\Downloads\test_features.csv') train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train['status_group'], random_state=42) train.shape, val.shape, test.shape import numpy as np def wrangle(X): """Wrangle train, validate, and test sets in the same way""" # Prevent SettingWithCopyWarning X = X.copy() # About 3% of the time, latitude has small values near zero, # outside Tanzania, so we'll treat these values like zero. X['latitude'] = X['latitude'].replace(-2e-08, 0) # When columns have zeros and shouldn't, they are like null values. # So we will replace the zeros with nulls, and impute missing values later. # Also create a "missing indicator" column, because the fact that # values are missing may be a predictive signal. 
cols_with_zeros = ['longitude', 'latitude', 'construction_year', 'gps_height', 'population'] for col in cols_with_zeros: X[col] = X[col].replace(0, np.nan) X[col+'_MISSING'] = X[col].isnull() # Drop duplicate columns duplicates = ['quantity_group', 'payment_type'] X = X.drop(columns=duplicates) # Drop recorded_by (never varies) and id (always varies, random) unusable_variance = ['recorded_by'] X = X.drop(columns=unusable_variance) # Convert date_recorded to datetime X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True) # Extract components from date_recorded, then drop the original column X['year_recorded'] = X['date_recorded'].dt.year X['month_recorded'] = X['date_recorded'].dt.month X['day_recorded'] = X['date_recorded'].dt.day X = X.drop(columns='date_recorded') # Engineer feature: how many years from construction_year to date_recorded X['years'] = X['year_recorded'] - X['construction_year'] X['years_MISSING'] = X['years'].isnull() # return the wrangled dataframe return X train = wrangle(train) val = wrangle(val) test = wrangle(test) target = 'status_group' train_features = train.drop(columns=[target, 'id']) numeric_features = train_features.select_dtypes(include='number').columns.tolist() cardinality = train_features.select_dtypes(exclude='number').nunique() categorical_features = cardinality[cardinality <= 50].index.tolist() features = numeric_features + categorical_features X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.model_selection import cross_val_score from sklearn.pipeline import make_pipeline from sklearn.feature_selection import f_regression, SelectKBest from sklearn.ensemble import RandomForestClassifier pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(), RandomForestClassifier() ) k = 3 score = cross_val_score(pipeline, X_train, y_train, cv=k, 
scoring='accuracy') print(f'Accuracy for {k} folds', score) from scipy.stats import randint, uniform from sklearn.model_selection import GridSearchCV, RandomizedSearchCV pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(), RandomForestClassifier() ) param_distributions = { 'simpleimputer__strategy': ['mean', 'median'], 'randomforestclassifier__n_estimators': [23, 24, 25, 26, 27, 28, 29, 30], 'randomforestclassifier__max_depth': [5, 10, 15, 20, 25, None], 'randomforestclassifier__max_features': uniform(0, 1), 'randomforestclassifier__min_samples_leaf': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'randomforestclassifier__min_samples_split': [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] } search = RandomizedSearchCV( pipeline, param_distributions=param_distributions, n_iter=10, cv=3, scoring='accuracy', verbose=10, return_train_score=True, n_jobs=-1) search.fit(X_train, y_train); pipeline.named_steps['randomforestclassifier'] pipeline = search.best_estimator_ pipeline print('Best hyperparameters', search.best_params_) !pip install --user --upgrade scikit-learn import sklearn sklearn.__version__ y_pred = pipeline.predict(X_test) path = r'C:\Users\Lesley\Desktop\Lambda\Lesley_Rich' submission = test[['id']].copy() submission['status_group'] = y_pred submission.to_csv(os.path.join(path, 'DecisionTreeWaterPumpSub3.csv'), index=False) from sklearn.metrics import plot_confusion_matrix import matplotlib.pyplot as plt plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical') plt.show()
_____no_output_____
MIT
module4-classification-metrics/Lesley_Rich_224_assignment.ipynb
terrainthesky-hub/DS-Unit-2-Kaggle-Challenge
Question 1 Create a function that takes a number as an argument and returns True or False depending on whether the number is symmetrical or not. A number is symmetrical when it is the same as its reverse. Examples is_symmetrical(7227) ➞ True is_symmetrical(12567) ➞ False is_symmetrical(44444444) ➞ True is_symmetrical(9939) ➞ False is_symmetrical(1112111) ➞ True
def is_symmetrical(n): rev = str(n)[::-1] return rev == str(n) is_symmetrical(7227) is_symmetrical(12567) is_symmetrical(44444444) is_symmetrical(9939) is_symmetrical(1112111)
_____no_output_____
MIT
Python Basic Programming/Programming Assignment_23.ipynb
Sayan97/Python
Question 2 Given a string of numbers separated by a comma and space, return the product of the numbers. Examples multiply_nums("2, 3") ➞ 6 multiply_nums("1, 2, 3, 4") ➞ 24 multiply_nums("54, 75, 453, 0") ➞ 0 multiply_nums("10, -2") ➞ -20
def multiply_nums(s): nums = s.replace(' ', '').split(',') # use 'product' rather than shadowing the built-in sum(); this computes a product product = 1 for n in nums: product = product * int(n) return product multiply_nums("2, 3") multiply_nums("1, 2, 3, 4") multiply_nums("54, 75, 453, 0") multiply_nums("10, -2")
_____no_output_____
MIT
Python Basic Programming/Programming Assignment_23.ipynb
Sayan97/Python
Question 3 Create a function that squares every digit of a number. Examples square_digits(9119) ➞ 811181 square_digits(2483) ➞ 416649 square_digits(3212) ➞ 9414 Notes The function receives an integer and must return an integer.
def square_digits(n): sq = ''.join(str(int(i)**2) for i in str(n)) return int(sq) square_digits(9119) square_digits(2483) square_digits(3212)
_____no_output_____
MIT
Python Basic Programming/Programming Assignment_23.ipynb
Sayan97/Python
Question 4 Create a function that sorts a list and removes all duplicate items from it. Examples setify([1, 3, 3, 5, 5]) ➞ [1, 3, 5] setify([4, 4, 4, 4]) ➞ [4] setify([5, 7, 8, 9, 10, 15]) ➞ [5, 7, 8, 9, 10, 15] setify([3, 3, 3, 2, 1]) ➞ [1, 2, 3]
def setify(l): # set() removes duplicates; sorted() guarantees the result is ordered # (iterating over a set after l.sort() does not preserve the sorted order) return sorted(set(l)) setify([1, 3, 3, 5, 5]) setify([4, 4, 4, 4]) setify([5, 7, 8, 9, 10, 15]) setify([3, 3, 3, 2, 1])
_____no_output_____
MIT
Python Basic Programming/Programming Assignment_23.ipynb
Sayan97/Python
Question 5 Create a function that returns the mean of all digits. Examples mean(42) ➞ 3 mean(12345) ➞ 3 mean(666) ➞ 6 Notes  The mean of all digits is the sum of the digits divided by how many digits there are (e.g. the mean of the digits of 512 is (5+1+2)/3 = 8/3, which rounds down to 2).  The mean will always be an integer.
def mean(n): digits = str(n) total = sum(int(d) for d in digits) return total // len(digits) mean(42) mean(12345) mean(666)
_____no_output_____
MIT
Python Basic Programming/Programming Assignment_23.ipynb
Sayan97/Python
Differential Imaging**Warning:** This notebook will likely cause Google Colab to crash. It is advised to run the notebook locally, either by downloading and running through Jupyter or by connecting to a local runtime.**Disclaimer:** Satellite images are not publicly available in the GitHub repository in order to avoid potential legal issues. The images used are available internally to other researchers at the University of Portsmouth [here](https://drive.google.com/drive/folders/1GGK6HksIM7jISqC71g0KpzSJnPjFkWO2?usp=sharing). Access is restricted to external persons and all external access requests will be denied. Should the user wish to acquire the images themselves, the corresponding shapefiles are publicly available in the repository. Import Files
import rasterio as rio import rioxarray as riox import numpy as np import xarray as xr import matplotlib.pyplot as plt from glob import glob
_____no_output_____
MIT
Differential_Imaging.ipynb
Max-FM/IAA-Social-Distancing
Define Filepaths
fdir = '/home/foxleym/Downloads' filepaths = glob(f'{fdir}/Southsea2020_PSScene4Band_Explorer/files/*_SR_clip.tif')
_____no_output_____
MIT
Differential_Imaging.ipynb
Max-FM/IAA-Social-Distancing
Create 4-Band Median Raster
blueList = [] greenList = [] redList = [] nirList = [] for file in filepaths: # open each raster once instead of four times bands = riox.open_rasterio(file) blueList.append(bands[0, :, :]) greenList.append(bands[1, :, :]) redList.append(bands[2, :, :]) nirList.append(bands[3, :, :]) blue_median = xr.concat(blueList, "t").median(dim="t") green_median = xr.concat(greenList, "t").median(dim="t") red_median = xr.concat(redList, "t").median(dim="t") nir_median = xr.concat(nirList, "t").median(dim="t") median_raster = xr.concat([blue_median, green_median, red_median, nir_median], dim='band') del(blueList, greenList, redList, nirList, blue_median, green_median, red_median, nir_median) median_raster.rio.to_raster(f'{fdir}/Southsea2020_PSScene4Band_Explorer/Southsea2020Median.tif')
_____no_output_____
MIT
Differential_Imaging.ipynb
Max-FM/IAA-Social-Distancing
Obtain Median RGB Raster and Plot
def normalize(array): """Normalizes numpy arrays into scale 0.0 - 1.0""" array_min, array_max = array.min(), array.max() return ((array - array_min)/(array_max - array_min)) def make_composite(band_1, band_2, band_3): """Converts three raster bands into a composite image""" return normalize(np.dstack((band_1, band_2, band_3))) b, g, r, nir = median_raster rgb = make_composite(r, g, b) plt.figure(figsize=(15,15)) plt.imshow(rgb) plt.xticks([]) plt.yticks([])
_____no_output_____
MIT
Differential_Imaging.ipynb
Max-FM/IAA-Social-Distancing
Perform Image Subtractions
subtractions = [] for f in filepaths: fname = f.split('/')[-1].split('.')[0] raster = riox.open_rasterio(f) subtraction = raster - median_raster subtractions.append(subtraction) subtraction.rio.to_raster(f'{fdir}/Southsea2020_PSScene4Band_Explorer/files/{fname}_MEDDIFF.tif')
_____no_output_____
MIT
Differential_Imaging.ipynb
Max-FM/IAA-Social-Distancing
Convert to RGB and Plot
b_0, g_0, r_0, nir_0 = raster b_med, g_med, r_med, nir_med = median_raster b_sub, g_sub, r_sub, nir_sub = subtractions[0] rgb_0 = make_composite(r_0, g_0, b_0) rgb_med = make_composite(r_med, g_med, b_med) rgb_sub = make_composite(r_sub, g_sub, b_sub) rgb_list = [rgb_0, rgb_med, rgb_sub] fig, ax = plt.subplots(nrows = 3, figsize=(15,15)) for i, rgb in enumerate(rgb_list): ax[i].imshow(rgb) ax[i].set_xticks([]) ax[i].set_yticks([]) plt.tight_layout()
_____no_output_____
MIT
Differential_Imaging.ipynb
Max-FM/IAA-Social-Distancing
oneDPL - Gamma Correction example Sections- [Gamma Correction](#Gamma-Correction)- [Why use buffer iterators?](#Why-use-buffer-iterators?)- _Lab Exercise:_ [Gamma Correction](#Lab-Exercise:-Gamma-Correction)- [Image outputs](#Image-outputs) Learning Objectives* Build a sample __DPC++ application__ to perform image processing (gamma correction) using oneDPL. Gamma CorrectionGamma correction is an image processing algorithm that enhances image brightness and contrast levels to give a better view of the image.The example below creates a bitmap image and applies gamma correction to it using the DPC++ Library, offloading the computation to a device. Once we run the program, we can view the original image and the gamma-corrected image in the corresponding cells below. In the program below we write a data parallel algorithm using the DPC++ Library to leverage the computational power of __heterogeneous computers__. The DPC++ platform model includes a host computer and a device. The host offloads computation to the device, which could be a __GPU, FPGA, or a multi-core CPU__. We create a buffer, which is responsible for moving data around and counting dependencies. The DPC++ Library provides the `oneapi::dpl::begin()` and `oneapi::dpl::end()` interfaces for getting buffer iterators, which we use below. Why use buffer iterators?Using buffer iterators ensures that memory is not copied back and forth between each algorithm execution on the device. The code example below shows how the example above is implemented using buffer iterators, which make sure the memory stays on the device until the buffer is destructed. Pass the policy object to the `std::for_each` Parallel STL algorithm, which is defined in the `oneapi::dpl::execution` namespace, and pass the __'begin'__ and __'end'__ buffer iterators as the second and third arguments. The `oneapi::dpl::execution::dpcpp_default` object is a predefined object of the `device_policy` class, created with a default kernel name and a default queue. 
Use it to create customized policy objects, or pass it directly when invoking an algorithm.The Parallel STL API handles the data transfer and compute. Lab Exercise: Gamma Correction* In this example the student will learn how to use the oneDPL library to perform gamma correction.* Follow __Steps 1 to 3__ in the code below to create a SYCL buffer, create buffer iterators, and then call the `std::for_each` function with DPC++ support. 1. Select the code cell below, __follow STEPS 1 to 3__ in the code comments, then click run ▶ to save the code to file.2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
%%writefile gamma-correction/src/main.cpp //============================================================== // Copyright © 2019 Intel Corporation // // SPDX-License-Identifier: MIT // ============================================================= #include <oneapi/dpl/algorithm> #include <oneapi/dpl/execution> #include <oneapi/dpl/iterator> #include <iomanip> #include <iostream> #include <CL/sycl.hpp> #include "utils.hpp" using namespace sycl; using namespace std; int main() { // Image size is width x height int width = 1440; int height = 960; Img<ImgFormat::BMP> image{width, height}; ImgFractal fractal{width, height}; // Lambda to process image with gamma = 2 auto gamma_f = [](ImgPixel &pixel) { auto v = (0.3f * pixel.r + 0.59f * pixel.g + 0.11f * pixel.b) / 255.0f; auto gamma_pixel = static_cast<uint8_t>(255 * v * v); if (gamma_pixel > 255) gamma_pixel = 255; pixel.set(gamma_pixel, gamma_pixel, gamma_pixel, gamma_pixel); }; // fill image with created fractal int index = 0; image.fill([&index, width, &fractal](ImgPixel &pixel) { int x = index % width; int y = index / width; auto fractal_pixel = fractal(x, y); if (fractal_pixel < 0) fractal_pixel = 0; if (fractal_pixel > 255) fractal_pixel = 255; pixel.set(fractal_pixel, fractal_pixel, fractal_pixel, fractal_pixel); ++index; }); string original_image = "fractal_original.png"; string processed_image = "fractal_gamma.png"; Img<ImgFormat::BMP> image2 = image; image.write(original_image); // call standard serial function for correctness check image.fill(gamma_f); // use default policy for algorithms execution auto policy = oneapi::dpl::execution::dpcpp_default; // We need to have the scope to have data in image2 after buffer's destruction { // ****Step 1: Uncomment the below line to create a buffer, being responsible for moving data around and counting dependencies //buffer<ImgPixel> b(image2.data(), image2.width() * image2.height()); // create iterator to pass buffer to the algorithm // **********Step 2: Uncomment the 
below lines to create buffer iterators. These are passed to the algorithm //auto b_begin = oneapi::dpl::begin(b); //auto b_end = oneapi::dpl::end(b); //*****Step 3: Uncomment the below line to call std::for_each with DPC++ support //std::for_each(policy, b_begin, b_end, gamma_f); } image2.write(processed_image); // check correctness if (check(image.begin(), image.end(), image2.begin())) { cout << "success\n"; } else { cout << "fail\n"; return 1; } cout << "Run on " << policy.queue().get_device().template get_info<info::device::name>() << "\n"; cout << "Original image is in " << original_image << "\n"; cout << "Image after applying gamma correction on the device is in " << processed_image << "\n"; return 0; }
Overwriting gamma-correction/src/main.cpp
MIT
DirectProgramming/DPC++/Jupyter/oneapi-essentials-training/07_DPCPP_Library/oneDPL_gamma_correction.ipynb
praveenkk123/oneAPI-samples
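The per-pixel transform inside the `gamma_f` lambda above can be sanity-checked with a short plain-Python sketch. The function name `gamma_correct` is ours, not part of the sample; the luminance coefficients and the gamma = 2 squaring mirror the C++ code:

```python
def gamma_correct(r, g, b):
    """Grayscale the pixel, then apply gamma = 2 (same math as the gamma_f lambda)."""
    v = (0.3 * r + 0.59 * g + 0.11 * b) / 255.0  # luminance, normalized to [0, 1]
    out = int(255 * v * v)                       # gamma = 2: square the normalized value
    return min(out, 255)                         # clamp, as the C++ code does

print(gamma_correct(0, 0, 0))        # 0: black stays black
print(gamma_correct(128, 128, 128))  # 64: gamma > 1 darkens midtones
```

Squaring a value in [0, 1] pushes midtones toward 0 while leaving the extremes roughly in place, which is why the processed fractal looks darker and higher-contrast than the original.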
Build and RunSelect the cell below and click run ▶ to compile and execute the code:
! chmod 755 q; chmod 755 run_gamma_correction.sh; if [ -x "$(command -v qsub)" ]; then ./q run_gamma_correction.sh; else ./run_gamma_correction.sh; fi
_____no_output_____
MIT
DirectProgramming/DPC++/Jupyter/oneapi-essentials-training/07_DPCPP_Library/oneDPL_gamma_correction.ipynb
praveenkk123/oneAPI-samples
_If the Jupyter cells are not responsive or if they error out when you compile the code samples, please restart the Jupyter Kernel: "Kernel->Restart Kernel and Clear All Outputs" and compile the code samples again_ Image outputsOnce you run the program successfully, it creates the gamma-corrected image and the original image. You can see the difference by running the two cells below and comparing them visually. View the gamma corrected ImageSelect the cell below and click run ▶ to view the generated image after gamma correction:
from IPython.display import display, Image display(Image(filename='gamma-correction/build/src/fractal_gamma.png'))
_____no_output_____
MIT
DirectProgramming/DPC++/Jupyter/oneapi-essentials-training/07_DPCPP_Library/oneDPL_gamma_correction.ipynb
praveenkk123/oneAPI-samples
View the original ImageSelect the cell below and click run ▶ to view the original image, before gamma correction:
from IPython.display import display, Image display(Image(filename='gamma-correction/build/src/fractal_original.png'))
_____no_output_____
MIT
DirectProgramming/DPC++/Jupyter/oneapi-essentials-training/07_DPCPP_Library/oneDPL_gamma_correction.ipynb
praveenkk123/oneAPI-samples
SummaryIn this module you learned how to apply gamma correction to images using the Data Parallel C++ Library. Reset Notebook Should you experience any issues with your notebook or just want to start fresh, run the cell below.
from IPython.display import display, Markdown, clear_output import ipywidgets as widgets button = widgets.Button( description='Reset Notebook', disabled=False, button_style='', # 'success', 'info', 'warning', 'danger' or '' tooltip='This will update this notebook, overwriting any changes.', icon='check' # (FontAwesome names without the `fa-` prefix) ) out = widgets.Output() def on_button_clicked(_): # "linking function with output" with out: # what happens when we press the button clear_output() !rsync -a --size-only /data/oneapi_workshop/oneAPI_Essentials/07_DPCPP_Library/ ~/oneAPI_Essentials/07_DPCPP_Library print('Notebook reset -- now click reload on browser.') # linking button and function together using a button's method button.on_click(on_button_clicked) # displaying button and its output together widgets.VBox([button,out])
_____no_output_____
MIT
DirectProgramming/DPC++/Jupyter/oneapi-essentials-training/07_DPCPP_Library/oneDPL_gamma_correction.ipynb
praveenkk123/oneAPI-samples
DECOMON tutorial 3 Local Robustness to Adversarial Attacks for classification tasks IntroductionAfter training a model, we want to make sure that the model will give the same output for any image "close" to the initial one, showing some robustness to perturbation. In this notebook, we start from a classifier built on the MNIST dataset that, given a hand-written digit as input, will predict the digit. This will be the first part of the notebook.In the second part of the notebook, we will investigate the robustness of this model to unstructured modifications of the input space: adversarial attacks. For this kind of attack, **we vary the magnitude of the perturbation of the initial image** and want to assess that, despite this noise, the classifier's prediction remains unchanged.What we will show is the use of the decomon module to assess the robustness of the prediction to noise. The notebook imports
import os import tensorflow.keras as keras import matplotlib.pyplot as plt import matplotlib.patches as patches %matplotlib inline import numpy as np import tensorflow.keras.backend as K from tensorflow.keras.models import Sequential, Model, load_model from tensorflow.keras.layers import Dense from tensorflow.keras.datasets import mnist from ipywidgets import interact, interactive, fixed, interact_manual from ipykernel.pylab.backend_inline import flush_figures import ipywidgets as widgets import time import sys sys.path.append('..') import os.path import os import pickle as pkl from contextlib import closing import time import tensorflow as tf import decomon
_____no_output_____
MIT
tutorials/tutorial3_adversarial_attack_Gradient.ipynb
airbus/decomon
load imagesWe load MNIST data from keras datasets.
img_rows, img_cols = 28, 28 (x_train, y_train_), (x_test, y_test_) = mnist.load_data() x_train = x_train.reshape((-1, 784)) x_test = x_test.reshape((-1, 784)) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 y_train = keras.utils.to_categorical(y_train_) y_test = keras.utils.to_categorical(y_test_)
_____no_output_____
MIT
tutorials/tutorial3_adversarial_attack_Gradient.ipynb
airbus/decomon
learn the model (classifier for MNIST images)For the model, we use a small fully connected network. It is made of 6 layers with 100 units each and ReLU activation functions. **Decomon** is compatible with a large set of Keras layers, so do not hesitate to modify the architecture.
model = Sequential() model.add(Dense(100, activation='relu', input_dim=784)) model.add(Dense(100, activation='relu')) model.add(Dense(10, activation='softmax')) model.compile('adam', 'categorical_crossentropy', metrics='acc') model.fit(x_train, y_train, batch_size=32, shuffle=True, validation_split=0.2, epochs=5) model.evaluate(x_test, y_test, batch_size=32)
313/313 [==============================] - 0s 920us/step - loss: 0.1193 - acc: 0.9666
MIT
tutorials/tutorial3_adversarial_attack_Gradient.ipynb
airbus/decomon
After training, we see that the assessment of the model's performance on data that was not seen during training shows pretty good results: around 0.97 (the maximum value is 1). It means that out of 100 images, the model was able to guess the correct digit for 97 images. But how can we guarantee that we will get this performance for images different from the ones in the test dataset? - If we slightly perturb an image that was well predicted, will the model stay correct? - Up to which perturbation? - Can we guarantee that the model will output the same digit for a given perturbation? This is where decomon comes in. Applying Decomon for Local Robustness to misclassificationIn this section, we detail how to prove local robustness to misclassification. Misclassification can be studied via the global optimisation of a function f:$$ f(x; \Omega) = \max_{z\in \Omega} \max_{j\not= i} \text{NN}_{j}(z) - \text{NN}_i(z)\;\; \text{s.t.}\;\; i = \text{argmax}\;\text{NN}(x)$$If the maximum of f is **negative**, this means that whatever the input sample from the domain, the value output by the neural network NN for class i will always be greater than the value output for any other class. Hence, no misclassification is possible. This is **adversarial robustness**.To that end, we will use the [decomon](https://gheprivate.intra.corp/CRT-DataScience/decomon/tree/master/decomon) library. Decomon combines several optimization tricks, including linear relaxation, to get state-of-the-art outer approximations.To use **decomon** for **adversarial robustness** we first need the following imports:+ *from decomon.models import convert*: to convert our current Keras model into another neural network nn_model. nn_model will output the same prediction as our model and adds extra information that will be used to derive our formal bounds. 
For the sake of clarity, how such bounds are obtained is hidden from the user+ *from decomon import get_adv_box*: a generic method to get an upper bound of the function f described previously. If the returned value is negative, then we have formally assessed robustness to misclassification.+ *from decomon import check_adv_box*: a generic method that computes the maximum of a lower bound of f. If this value is positive, it demonstrates that the function f takes positive values. It follows that a positive value formally proves the existence of a misclassification.
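To make the function f concrete, here is a small NumPy sketch of the misclassification margin at a single point z. decomon bounds the maximum of this quantity over the whole input domain; the helper name `misclassification_margin` is ours, for illustration only:

```python
import numpy as np

def misclassification_margin(logits, i):
    """max_{j != i} NN_j(z) - NN_i(z): positive means z is misclassified."""
    others = np.delete(logits, i)      # drop the source class i
    return float(others.max() - logits[i])

logits = np.array([0.1, 2.5, 0.7])     # hypothetical network outputs at some z
i = int(logits.argmax())               # class predicted for the original input x
print(misclassification_margin(logits, i))  # -1.8: negative, so no misclassification at this z
```

If an upper bound of this margin over the whole box Ω is negative (which is what `get_adv_box` computes), no point in Ω can be misclassified.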
import decomon from decomon.models import convert from decomon import get_adv_box, get_upper_box, get_lower_box, check_adv_box, get_upper_box
_____no_output_____
MIT
tutorials/tutorial3_adversarial_attack_Gradient.ipynb
airbus/decomon
For computational efficiency, we convert the model into its decomon version once and for all.Note that the decomon method will work on the non-converted model. To obtain more refined guarantees, we activate an option denoted **forward**. You can speed up the method by removing this option in the convert method.
decomon_model = convert(model) from decomon import build_formal_adv_model adv_model = build_formal_adv_model(decomon_model) x_ = x_train[:1] eps = 1e-2 z = np.concatenate([x_[:, None]-eps, x_[:, None]+eps], 1) get_adv_box(decomon_model, x_, x_, source_labels=y_train[0].argmax()) adv_model.predict([x_, z, y_train[:1]]) # compute gradient import tensorflow as tf x_tensor = tf.convert_to_tensor(x_, dtype=tf.float32) from tensorflow.keras.layers import Concatenate with tf.GradientTape() as t: t.watch(x_tensor) z_tensor = Concatenate(1)([x_tensor[:, None]-eps,\ x_tensor[:, None]+eps]) output = adv_model([x_, z_tensor, y_train[:1]]) gradients = t.gradient(output, x_tensor) mask = gradients.numpy() # scale between 0 and 1 mask = (mask - mask.min()) / (mask.max() - mask.min()) plt.imshow(gradients.numpy().reshape((28,28))) img_mask = np.zeros((784,)) img_mask[np.argsort(mask[0])[::-1][:100]] = 1 plt.imshow(img_mask.reshape((28,28))) plt.imshow(mask.reshape((28,28))) plt.imshow(x_.reshape((28,28)))
_____no_output_____
MIT
tutorials/tutorial3_adversarial_attack_Gradient.ipynb
airbus/decomon
We offer an interactive visualisation of the basic adversarial robustness method from decomon, **get_adv_box**. We randomly choose 10 test images and use **get_adv_box** to assess their robustness to misclassification under pixel perturbations. The magnitude of the noise on each pixel is independent and bounded by the value of the variable epsilon. The user can reset the examples and vary the noise amplitude.Note one of the main advantages of decomon: **we can assess robustness on batches of data!**Circled in green are examples that are formally assessed to be robust, in orange examples that could be robust, and in red examples that are formally non-robust
def frame(epsilon, reset=0, filename='./data/.hidden_index.pkl'): n_cols = 5 n_rows = 2 n_samples = n_cols*n_rows if reset: index = np.random.permutation(len(x_test))[:n_samples] with closing(open(filename, 'wb')) as f: pkl.dump(index, f) # save data else: # check that file exists if os.path.isfile(filename): with closing(open(filename, 'rb')) as f: index = pkl.load(f) else: index = np.arange(n_samples) with closing(open(filename, 'wb')) as f: pkl.dump(index, f) #x = np.concatenate([x_test[0:1]]*10, 0) x = x_test[index] x_min = np.maximum(x - epsilon, 0) x_max = np.minimum(x + epsilon, 1) n_cols = 5 n_rows = 2 fig, axs = plt.subplots(n_rows, n_cols) fig.set_figheight(n_rows*fig.get_figheight()) fig.set_figwidth(n_cols*fig.get_figwidth()) plt.subplots_adjust(hspace=0.2) # increase vertical separation axs_seq = axs.ravel() source_label = np.argmax(model.predict(x), 1) start_time = time.process_time() upper = get_adv_box(decomon_model, x_min, x_max, source_labels=source_label) lower = check_adv_box(decomon_model, x_min, x_max, source_labels=source_label) end_time = time.process_time() count = 0 time.sleep(1) r_time = "{:.2f}".format(end_time - start_time) fig.suptitle('Formal Robustness to Adversarial Examples with eps={} running in {} seconds'.format(epsilon, r_time), fontsize=16) for i in range(n_cols): for j in range(n_rows): ax= axs[j, i] ax.imshow(x[count].reshape((28,28)), cmap='Greys') robust='ROBUST' if lower[count]>=0: color='red' robust='NON ROBUST' elif upper[count]<0: color='green' else: color='orange' robust='MAYBE ROBUST' ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # Create a Rectangle patch rect = patches.Rectangle((0,0),27,27,linewidth=3,edgecolor=color,facecolor='none') ax.add_patch(rect) ax.set_title(robust) count+=1 interact(frame, epsilon = widgets.FloatSlider(value=0., min=0., max=5./255., step=0.0001, continuous_update=False, readout_format='.4f',), reset = widgets.IntSlider(value=0., min=0, max=1, step=1, 
continuous_update=False), fast = widgets.IntSlider(value=1., min=0, max=1, step=1, continuous_update=False) )
_____no_output_____
MIT
tutorials/tutorial3_adversarial_attack_Gradient.ipynb
airbus/decomon
Largest prime factor Problem 3 The prime factors of 13195 are 5, 7, 13 and 29.What is the largest prime factor of the number 600851475143 ?
from math import sqrt def factorize(n, start=2): p = start while p <= sqrt(n): q, r = divmod(n, p) if r == 0: return [p] + factorize(q, p) p += 1 return [n] assert factorize(13195) == [5, 7, 13, 29] def solve(): return max(factorize(600851475143))
_____no_output_____
MIT
003-largest-prime-factor.ipynb
arkeros/projecteuler
Solution
solve()
_____no_output_____
MIT
003-largest-prime-factor.ipynb
arkeros/projecteuler
High-School Maths Exercise Getting to Know Jupyter Notebook. Python Libraries and Best Practices. Basic Workflow Problem 1. MarkdownJupyter Notebook is a very light, beautiful and convenient way to organize your research and display your results. Let's play with it for a while.First, you can double-click each cell and edit its content. If you want to run a cell (that is, execute the code inside it), use Cell > Run Cells in the top menu or press Ctrl + Enter.Second, each cell has a type. There are two main types: Markdown (which is for any kind of free text, explanations, formulas, results... you get the idea), and code (which is, well... for code :D).Let me give you a... Quick Introduction to Markdown Text and ParagraphsThere are several things that you can do. As you already saw, you can write paragraph text just by typing it. In order to create a new paragraph, just leave a blank line. See how this works below:```This is some text.This text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).This text is displayed in a new paragraph.And this is yet another paragraph.```**Result:**This is some text.This text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).This text is displayed in a new paragraph.And this is yet another paragraph. HeadingsThere are six levels of headings. Level one is the highest (largest and most important), and level 6 is the smallest. You can create headings of several types by prefixing the header line with one to six "#" symbols (this is called a pound sign if you are ancient, or a sharp sign if you're a musician... or a hashtag if you're too young :D). 
Have a look:``` Heading 1 Heading 2 Heading 3 Heading 4 Heading 5 Heading 6```**Result:** Heading 1 Heading 2 Heading 3 Heading 4 Heading 5 Heading 6It is recommended that you have **only one** H1 heading - this should be the header of your notebook (or scientific paper). Below that, you can add your name or just jump to the explanations directly. EmphasisYou can create emphasized (stronger) text by using a **bold** or _italic_ font. You can do this in several ways (using asterisks (\*) or underscores (\_)). In order to "escape" a symbol, prefix it with a backslash (\). You can also strike through your text in order to signify a correction.```**bold** __bold__*italic* _italic_This is \*\*not\*\* bold.I ~~didn't make~~ a mistake.```**Result:****bold** __bold__*italic* _italic_This is \*\*not\*\* bold.I ~~didn't make~~ a mistake. ListsYou can add two types of lists: ordered and unordered. Lists can also be nested inside one another. To do this, press Tab once (it will be converted to 4 spaces).To create an ordered list, just type the numbers. Don't worry if your numbers are wrong - Jupyter Notebook will create them properly for you. Well, it's better to have them properly numbered anyway...```1. This is2. A list10. With many9. Items 1. Some of which 2. Can 3. Be nested42. You can also * Mix * list * types```**Result:**1. This is2. A list10. With many9. Items 1. Some of which 2. Can 3. Be nested42. You can also * Mix * list * types To create an unordered list, type an asterisk, plus or minus at the beginning:```* This is* An + Unordered - list```**Result:*** This is* An + Unordered - list LinksThere are many ways to create links but we mostly use one of them: we present links with some explanatory text. See how it works:```This is [a link](http://google.com) to Google.```**Result:**This is [a link](http://google.com) to Google. ImagesThey are very similar to links. Just prefix the image with an exclamation mark. 
The alt(ernative) text will be displayed if the image is not available. Have a look (hover over the image to see the title text):```![Alt text](http://i.imgur.com/dkY1gph.jpg) Do you know that "taco cat" is a palindrome? Thanks to The Oatmeal :)```**Result:**![Alt text](http://i.imgur.com/dkY1gph.jpg) Do you know that "taco cat" is a palindrome? Thanks to The Oatmeal :)If you want to resize images or do some more advanced stuff, just use HTML. Did I mention these cells support HTML, CSS and JavaScript? Now I did. TablesThese are a pain because they need to be formatted (somewhat) properly. Here's a good [table generator](http://www.tablesgenerator.com/markdown_tables). Just select File > Paste table data... and provide a tab-separated list of values. It will generate a good-looking ASCII-art table for you.```| Cell1 | Cell2 | Cell3 ||-------|-------|-------|| 1.1 | 1.2 | 1.3 || 2.1 | 2.2 | 2.3 || 3.1 | 3.2 | 3.3 |```**Result:**| Cell1 | Cell2 | Cell3 ||-------|-------|-------|| 1.1 | 1.2 | 1.3 || 2.1 | 2.2 | 2.3 || 3.1 | 3.2 | 3.3 | CodeJust use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks.```pythondef square(x): return x ** 2```This is `inline` code. No syntax highlighting here.**Result:**```pythondef square(x): return x ** 2```This is `inline` code. No syntax highlighting here. **Now it's your turn to have some Markdown fun.** In the next cell, try out some of the commands. You can just throw in some things, or do something more structured (like a small notebook). Write some Markdown here. This is my highlight with an _italic_ wordby LuGea few lines of Python code```pythondef multiply_by(x, y): return x * y```The previous Python method will `multiply` any two numbers; in other words: $result = x * y$
def multiply_by(x, y): return x * y res = multiply_by(4, 7.21324) print(res)
28.85296
MIT
MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb
LuGeorgiev/Python-SoftUni
Problem 2. Formulas and LaTeXWriting math formulas has always been hard. But scientists don't like difficulties and prefer standards. So, thanks to Donald Knuth (a very popular computer scientist, who also invented a lot of algorithms), we have a nice typesetting system, called LaTeX (pronounced _lah_-tek). We'll be using it mostly for math formulas, but it has a lot of other things to offer.There are two main ways to write formulas. You could enclose them in single `$` signs like this: `$ ax + b $`, which will create an **inline formula**: $ ax + b $. You can also enclose them in double `$` signs `$$ ax + b $$` to produce $$ ax + b $$.Most commands start with a backslash and accept parameters either in square brackets `[]` or in curly braces `{}`. For example, to make a fraction, you typically would write `$$ \frac{a}{b} $$`: $$ \frac{a}{b} $$.[Here's a resource](http://www.stat.pitt.edu/stoffer/freetex/latex%20basics.pdf) where you can look up the basics of the math syntax. You can also search StackOverflow - there are all sorts of solutions there.You're on your own now. Research and recreate all formulas shown in the next cell. Try to make your cell look exactly the same as mine. It's an image, so don't try to cheat by copy/pasting :D.Note that you **do not** need to understand the formulas, what's written there or what it means. 
We'll have fun with these later in the course.![Math formulas and equations](math.jpg) Write your formulas here.Equation of a line: $$y = ax+b$$Roots of the quadratic equation $ax^2 + bx + c = 0$ $$x_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$Taylor series expansion: $$f(x)\arrowvert_{x=a}=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2+\dots+\frac{f^{(n)}(a)}{n!}(x-a)^n+\dots$$Binomial theorem: $$ (x+y)^n=\left(\begin{array}{cc}n \\0 \end{array}\right)x^ny^0+\left(\begin{array}{cc}n \\1 \end{array}\right)x^{n-1}y^1+\dots+ \left(\begin{array}{cc}n \\n \end{array}\right)x^0y^n=\sum^n_{k=0}\left(\begin{array}{cc}n \\k \end{array}\right)x^{n-k}y^k$$An integral (this one is a lot of fun to solve :D): $$\int_{-\infty}^{+\infty}e^{-x^{2}}dx=\sqrt{\pi}$$A short matrix: $$\left(\begin{array}{cc}2&1&3 \\2&6&8\\6&8&18 \end{array}\right)$$A long matrix: $$A=\left(\begin{array}{cc}a_{11}&a_{12}&\dots&a_{1n} \\a_{21}&a_{22}&\dots&a_{2n}\\\vdots&\vdots&\ddots&\vdots \\a_{m1}&a_{m2}&\dots&a_{mn}\end{array}\right)$$ Problem 3. Solving with PythonLet's first do some symbolic computation. We need to import `sympy` first. **Should your imports be in a single cell at the top or should they appear as they are used?** There's not a single valid best practice. Most people seem to prefer imports at the top of the file though. **Note: If you write new code in a cell, you have to re-execute it!**Let's use `sympy` to give us a quick symbolic solution to our equation. First import `sympy` (you can use the second cell in this notebook): ```python import sympy ```Next, create symbols for all variables and parameters. You may prefer to do this in one pass or separately:```python x = sympy.symbols('x')a, b, c = sympy.symbols('a b c')```Now solve:```python sympy.solve(a * x**2 + b * x + c)```Hmmmm... we didn't expect that :(. We got an expression for $a$ because the library tried to solve for the first symbol it saw. This is an equation and we have to solve for $x$. 
We can provide it as a second parameter:```python sympy.solve(a * x**2 + b * x + c, x)```Finally, if we use `sympy.init_printing()`, we'll get a LaTeX-formatted result instead of a typed one. This is very useful because it produces better-looking formulas.
import sympy as sp sp.init_printing() x = sp.symbols('x') a, b, c = sp.symbols('a b c') sp.solve(a*x**2 + b*x + c, x)
_____no_output_____
MIT
MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb
LuGeorgiev/Python-SoftUni
How about a function that takes $a, b, c$ (assume they are real numbers, you don't need to do additional checks on them) and returns the **real** roots of the quadratic equation?Remember that in order to calculate the roots, we first need to see whether the expression under the square root sign is non-negative.If $b^2 - 4ac > 0$, the equation has two real roots: $x_1, x_2$If $b^2 - 4ac = 0$, the equation has one real root: $x_1 = x_2$If $b^2 - 4ac < 0$, the equation has zero real rootsWrite a function which returns the roots. In the first case, return a list of 2 numbers: `[2, 3]`. In the second case, return a list of only one number: `[2]`. In the third case, return an empty list: `[]`.
import math def solve_quadratic_equation(a, b, c): d = b**2 - 4*a*c if a == 0 and b != 0: return [-c/b] elif a == 0: return [] elif d < 0: return [] elif d == 0: return [-b/(2*a)] else: d = math.sqrt(d) return [(-b - d)/(2*a), (-b + d)/(2*a)] # Testing: Execute this cell. The outputs should match the expected outputs. Feel free to write more tests print(solve_quadratic_equation(1, -1, -2)) # [-1.0, 2.0] print(solve_quadratic_equation(1, -8, 16)) # [4.0] print(solve_quadratic_equation(1, 1, 1)) # [] print(solve_quadratic_equation(0, 1, 1)) # [-1.0] print(solve_quadratic_equation(0, 0, 1)) # []
[-1.0, 2.0] [4.0] [] [-1.0] []
MIT
MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb
LuGeorgiev/Python-SoftUni
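As a sanity check on the hand-rolled solver above, the same real roots can be cross-checked with `numpy.roots`, which returns all (possibly complex) roots of a polynomial given its coefficient list — a small sketch, not part of the original notebook:

```python
import numpy as np

def real_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c, via numpy, for cross-checking."""
    roots = np.roots([a, b, c])  # all roots, possibly complex
    return sorted(r.real for r in roots if abs(r.imag) < 1e-12)

print(real_roots(1, -1, -2))  # ~ [-1.0, 2.0]
print(real_roots(1, 1, 1))    # [] (both roots are complex)
```

This agrees with `solve_quadratic_equation` for the non-degenerate (`a != 0`) test cases above.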
**Bonus:** Last time we saw how to solve a linear equation. Remember that linear equations are just like quadratic equations with $a = 0$. In this case, however, division by 0 will throw an error. Extend your function above to support solving linear equations (in the same way we did it last time). Problem 4. Equation of a LineLet's go back to our linear equations and systems. There are many ways to define what "linear" means, but they all boil down to the same thing.The equation $ax + b = 0$ is called *linear* because the function $f(x) = ax+b$ is a linear function. We know that there are several ways to know what one particular function means. One of them is to just write the expression for it, as we did above. Another way is to **plot** it. This is one of the most exciting parts of maths and science - when we have to fiddle around with beautiful plots (although not so beautiful in this case).The function produces a straight line and we can see it.How do we plot functions in general? We know that functions take many (possibly infinitely many) inputs. We can't draw all of them. We could, however, evaluate the function at some points and connect them with tiny straight lines. If the points are too many, we won't notice - the plot will look smooth.Now, let's take a function, e.g. $y = 2x + 3$ and plot it. For this, we're going to use `numpy` arrays. This is a special type of array which has two characteristics:* All elements in it must be of the same type* All operations are **broadcast**: if `x = [1, 2, 3, 10]` and we write `2 * x`, we'll get `[2, 4, 6, 20]`. That is, all operations are performed at all indices. This is very powerful, easy to use and saves us A LOT of looping.There's one more thing: it's blazingly fast because all computations are done in C, instead of Python.First let's import `numpy`. 
Since the name is a bit long, a common convention is to give it an **alias**:```pythonimport numpy as np```Import that at the top cell and don't forget to re-run it.Next, let's create a range of values, e.g. $[-3, 5]$. There are two ways to do this. `np.arange(start, stop, step)` will give us evenly spaced numbers with a given step, while `np.linspace(start, stop, num)` will give us `num` samples. You see, one uses a fixed step, the other uses a number of points to return. When plotting functions, we usually use the latter. Let's generate, say, 1000 points (we know a straight line only needs two but we're generalizing the concept of plotting here :)).```pythonx = np.linspace(-3, 5, 1000)```Now, let's generate our function variable:```pythony = 2 * x + 3```We can print the values if we like but we're more interested in plotting them. To do this, first let's import a plotting library. `matplotlib` is the most commonly used one and we usually give it an alias as well.```pythonimport matplotlib.pyplot as plt```Now, let's plot the values. To do this, we just call the `plot()` function. Notice that the top-most part of this notebook contains a "magic string": `%matplotlib inline`. This hints Jupyter to display all plots inside the notebook. However, it's a good practice to call `show()` after our plot is ready.```pythonplt.plot(x, y)plt.show()```
k = np.arange(1, 7, 1) print(k) x = np.linspace(-3, 5, 1000) ##y = 2 * x + 3 y = [2 * current + 3 for current in x] plt.plot(x,y) ax = plt.gca() ax.spines["bottom"].set_position("zero") ax.spines["left"].set_position("zero") ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) xticks = ax.xaxis.get_major_ticks() xticks[4].label1.set_visible(False) yticks = ax.yaxis.get_major_ticks() yticks[2].label1.set_visible(False) ax.text(-0.3,-1, '0', fontsize = 12) plt.show()
[1 2 3 4 5 6]
MIT
MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb
LuGeorgiev/Python-SoftUni
It doesn't look too bad but we can do much better. See how the axes don't look like they should? Let's move them to zero. This can be done using the "spines" of the plot (i.e. the borders).All `matplotlib` figures can have many plots (subfigures) inside them. That's why when performing an operation, we have to specify a target figure. There is a default one and we can get it by using `plt.gca()`. We usually call it `ax` for "axis".Let's save it in a variable (in order to prevent multiple calculations and to make code prettier). Let's now move the bottom and left spines to the origin $(0, 0)$ and hide the top and right ones.```pythonax = plt.gca()ax.spines["bottom"].set_position("zero")ax.spines["left"].set_position("zero")ax.spines["top"].set_visible(False)ax.spines["right"].set_visible(False)```**Note:** All plot manipulations HAVE TO be done before calling `show()`. It's up to you whether they should be before or after the function you're plotting.This should look better now. We can, of course, do much better (e.g. remove the double 0 at the origin and replace it with a single one), but this is left as an exercise for the reader :). * Problem 5. Linearizing FunctionsWhy is the line equation so useful? The main reason is that it's so easy to work with. Scientists actually try their best to linearize functions, that is, to make linear functions from non-linear ones. There are several ways of doing this. One of them involves derivatives and we'll talk about it later in the course. A commonly used method for linearizing functions is through algebraic transformations. Try to linearize $$ y = ae^{bx} $$Hint: The inverse operation of $e^{x}$ is $\ln(x)$. Start by taking $\ln$ of both sides and see what you can do. Your goal is to transform the function into another, linear function. You can look up more hints on the Internet :).
x = np.linspace(-5, 5, 5000) y = 0.5 * np.exp(0.5 * x) plt.title('exponent') plt.plot(x, y) plt.show()
_____no_output_____
MIT
MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb
LuGeorgiev/Python-SoftUni
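One way to finish Problem 5: taking $\ln$ of both sides of $y = ae^{bx}$ gives $\ln y = \ln a + bx$ — a linear function of $x$ with slope $b$ and intercept $\ln a$. A sketch that recovers the parameters numerically (using the same $a = b = 0.5$ as the cell above; the fitting step is my addition, not the notebook's):

```python
import numpy as np

a_true, b_true = 0.5, 0.5
x = np.linspace(-5, 5, 5000)
y = a_true * np.exp(b_true * x)

# ln(y) = ln(a) + b*x, so a degree-1 fit recovers b (slope) and ln(a) (intercept)
b_est, ln_a_est = np.polyfit(x, np.log(y), 1)
print(b_est, np.exp(ln_a_est))  # both ~ 0.5
```

Plotting `x` against `np.log(y)` instead of `y` would show a straight line, which is exactly what "linearizing" means here.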
* Problem 6. Generalizing the Plotting FunctionLet's now use the power of Python to generalize the code we created to plot. In Python, you can pass functions as parameters to other functions. We'll utilize this to pass the math function that we're going to plot.Note: We can also pass *lambda expressions* (anonymous functions) like this: ```pythonlambda x: x + 2```This is a shorter way to write```pythondef some_anonymous_function(x): return x + 2```We'll also need a range of x values. We may also provide other optional parameters which will help set up our plot. These may include titles, legends, colors, fonts, etc. Let's stick to the basics now.Write a Python function which takes another function, x range and number of points, and plots the function graph by evaluating it at every point.**BIG hint:** If you want to use not only `numpy` functions for `f` but any one function, a very useful (and easy) thing to do, is to vectorize the function `f` (e.g. to allow it to be used with `numpy` broadcasting):```pythonf_vectorized = np.vectorize(f)y = f_vectorized(x)```
def plot_math_function(f, min_x, max_x, num_points): x = np.linspace(min_x, max_x, num_points) f_vectorized = np.vectorize(f) y = f_vectorized(x) plt.plot(x,y) plt.show() plot_math_function(lambda x: 2 * x + 3, -3, 5, 1000) plot_math_function(lambda x: -x + 8, -1, 10, 1000) plot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000) plot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000) plot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000)
_____no_output_____
MIT
MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb
LuGeorgiev/Python-SoftUni
* Problem 7. Solving Equations GraphicallyNow that we have a general plotting function, we can use it for more interesting things. Sometimes we don't need to know what the exact solution is, just to see where it lies. We can do this by plotting the two functions around the "=" sign and seeing where they intersect. Take, for example, the equation $2x + 3 = 0$. The two functions are $f(x) = 2x + 3$ and $g(x) = 0$. Since they should be equal, the point of their intersection is the solution of the given equation. We don't need to bother marking the point of intersection right now, just showing the functions.To do this, we'll need to improve our plotting function once more. This time we'll need to take multiple functions and plot them all on the same graph. Note that we still need to provide the $[x_{min}; x_{max}]$ range and it's going to be the same for all functions.```pythonvectorized_fs = [np.vectorize(f) for f in functions]ys = [vectorized_f(x) for vectorized_f in vectorized_fs]```
def plot_math_functions(functions, min_x, max_x, num_points): x = np.linspace(min_x, max_x, num_points) vectorized_fs = [np.vectorize(f) for f in functions] ys = [vectorized_f(x) for vectorized_f in vectorized_fs] for y in ys: plt.plot(x, y) plt.show() plot_math_functions([lambda x: 2 * x + 3, lambda x: 0], -3, 5, 1000) plot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000)
_____no_output_____
MIT
MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb
LuGeorgiev/Python-SoftUni
This is also a way to plot the solutions of systems of equations, like the one we solved last time. Let's actually try it.
plot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000)
_____no_output_____
MIT
MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb
LuGeorgiev/Python-SoftUni
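The intersection visible in the plot above can be confirmed algebraically. The three lambdas correspond to the system $4x + 3y = 7$, $3x + 5y = 8$, $x - 2y = -1$; a quick sketch (my addition) solves the first two equations with `numpy.linalg.solve` and checks that the third line passes through the same point:

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [3.0, 5.0]])
b = np.array([7.0, 8.0])
solution = np.linalg.solve(A, b)  # intersection of the first two lines
print(solution)                   # ~ [1. 1.]

x0, y0 = solution
print(np.isclose(x0 - 2 * y0, -1.0))  # True: the third line agrees
```

So all three lines meet at $(1, 1)$, which is what the graphical solution shows.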
Problem 8. Trigonometric FunctionsWe already saw the graph of the function $y = \sin(x)$. But, how do we define the trigonometric functions once again? Let's quickly review that.The two basic trigonometric functions are defined as the ratio of two sides:$$ \sin(x) = \frac{\text{opposite}}{\text{hypotenuse}} $$$$ \cos(x) = \frac{\text{adjacent}}{\text{hypotenuse}} $$And also:$$ \tan(x) = \frac{\text{opposite}}{\text{adjacent}} = \frac{\sin(x)}{\cos(x)} $$$$ \cot(x) = \frac{\text{adjacent}}{\text{opposite}} = \frac{\cos(x)}{\sin(x)} $$This is fine, but using this, "right-triangle" definition, we're able to calculate the trigonometric functions of angles up to $90^\circ$. But we can do better. Let's now imagine a circle centered at the origin of the coordinate system, with radius $r = 1$. This is called a "unit circle".We can now see exactly the same picture. The $x$-coordinate of the point in the circle corresponds to $\cos(\alpha)$ and the $y$-coordinate - to $\sin(\alpha)$. What did we get? We're now able to define the trigonometric functions for all degrees up to $360^\circ$. After that, the same values repeat: these functions are **periodic**: $$ \sin(k.360^\circ + \alpha) = \sin(\alpha), k = 0, 1, 2, \dots $$$$ \cos(k.360^\circ + \alpha) = \cos(\alpha), k = 0, 1, 2, \dots $$We can, of course, use this picture to derive other identities, such as:$$ \sin(90^\circ + \alpha) = \cos(\alpha) $$A very important property of the sine and cosine is that they accept values in the range $(-\infty; \infty)$ and produce values in the range $[-1; 1]$. The two other functions take values in the range $(-\infty; \infty)$ **except when their denominators are zero** and produce values in the same range. RadiansA degree is a geometric object, $1/360$th of a full circle. This is quite inconvenient when we work with angles. There is another, natural and intrinsic measure of angles. 
It's called the **radian** and can be written as $\text{rad}$ or without any designation, so $\sin(2)$ means "sine of two radians".![Radian definition](radian.gif)It's defined as *the central angle of an arc with length equal to the circle's radius* and $1\text{rad} \approx 57.296^\circ$.We know that the circle circumference is $C = 2\pi r$, therefore we can fit exactly $2\pi$ arcs with length $r$ in $C$. The angle corresponding to this is $360^\circ$ or $2\pi\ \text{rad}$. Also, $\pi\ \text{rad} = 180^\circ$.(Some people prefer using $\tau = 2\pi$ to avoid confusion with always multiplying by 2 or 0.5 but we'll use the standard notation here.)**NOTE:** All trigonometric functions in `math` and `numpy` accept radians as arguments. In order to convert between radians and degrees, you can use the relations $\text{[deg]} = 180/\pi.\text{[rad]}, \text{[rad]} = \pi/180.\text{[deg]}$. This can be done using `np.deg2rad()` and `np.rad2deg()` respectively. Inverse trigonometric functionsAll trigonometric functions have their inverses. If you plug in, say $\pi/4$ in the $\sin(x)$ function, you get $\sqrt{2}/2$. The inverse functions (also called arc-functions) take arguments in the interval $[-1; 1]$ and return the angle that they correspond to. Take arcsine for example:$$ \arcsin(x) = y: \sin(y) = x $$$$ \arcsin\left(\frac{\sqrt{2}}{2}\right) = \frac{\pi}{4} $$Please note that this is NOT entirely correct. From the relations we found:$$\sin(x) = \sin(2k\pi + x), k = 0, 1, 2, \dots $$it follows that $\arcsin(x)$ has infinitely many values, separated by $2k\pi$ radians each:$$ \arcsin\left(\frac{\sqrt{2}}{2}\right) = \frac{\pi}{4} + 2k\pi, k = 0, 1, 2, \dots $$In most cases, however, we're interested in the first value (when $k = 0$). It's called the **principal value**.Note 1: There are inverse functions for all four basic trigonometric functions: $\arcsin$, $\arccos$, $\arctan$, $\text{arccot}$. These are sometimes written as $\sin^{-1}(x)$, $\cos^{-1}(x)$, etc. 
These definitions are completely equivalent. Just notice the difference between $\sin^{-1}(x) := \arcsin(x)$ and $\sin(x^{-1}) = \sin(1/x)$. ExerciseUse the plotting function you wrote above to plot the inverse trigonometric functions. Use `numpy` (look up how to use inverse trigonometric functions).
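The degree/radian conversions and the principal-value behaviour described above can be checked directly with `numpy`:

```python
import numpy as np

deg = np.rad2deg(np.pi)                # radians -> degrees, ~ 180
rad = np.deg2rad(180.0)                # degrees -> radians, ~ pi
principal = np.arcsin(np.sqrt(2) / 2)  # principal value, ~ pi/4

print(deg, rad, principal)
```

Note that `np.arcsin` returns only the principal value in $[-\pi/2; \pi/2]$; the other solutions $\pi/4 + 2k\pi$ have to be generated by hand if you need them.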
x = np.linspace(-10,10) plt.plot(x, np.arctan(x)) plt.plot(x, np.sin(x)) plt.plot(x, np.cos(x)) plt.show() x = np.linspace(-10, 10) plt.plot(x, np.arccosh(x)) plt.show()
C:\Users\lgeorgiev\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in arccosh
MIT
MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb
LuGeorgiev/Python-SoftUni
Initial implementation on 1/14 by 소연 (Soyeon). When modifying or testing, please use a copy rather than this original file.
import os, sys from google.colab import drive drive.mount('/content/drive') %cd /content/drive/Shareddrives/KPMG_Ideation import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd from pprint import pprint from krwordrank.word import KRWordRank from copy import deepcopy import kss import itertools import unicodedata import requests from functools import reduce from bs4 import BeautifulSoup import string import torch from textrankr import TextRank from lexrankr import LexRank from nltk.corpus import stopwords from nltk.tokenize import word_tokenize, sent_tokenize from pydub import AudioSegment from konlpy.tag import Okt import re import nltk # nltk.download('punkt') # import pre-trained model -- frameBERT (PyTorch GPU environment required) %cd /content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT !pip install transformers import frame_parser path="/content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT" parser = frame_parser.FrameParser(model_path=path, language='ko') ##### below are permanently installed packages ##### # nb_path = '/content/notebooks' # os.symlink('/content/drive/Shareddrives/KPMG_Ideation', nb_path) # sys.path.insert(0, nb_path) # !pip install --target=$nb_path pydub # !pip install --target=$nb_path kss # %cd /content/drive/Shareddrives/KPMG_Ideation/hanspell # !python setup.py install # !pip install --target=$nb_path transformers # !apt-get update # !apt-get g++ openjdk-8-jdk # !pip3 install --target=$nb_path konlpy # !pip install --target=$nb_path soykeyword # !pip install --target=$nb_path krwordrank # !pip install --target=$nb_path bert # !pip install --target=$nb_path textrankr # !pip install --target=$nb_path lexrankr # Due to google api credentials, SpeechRecognition needs to be installed every time !pip install SpeechRecognition import speech_recognition as sr # !pip install --upgrade google-cloud-speech def to_wav(audio_file_name): if audio_file_name.split('.')[1] == 'mp3': 
sound = AudioSegment.from_mp3(audio_file_name) audio_file_name = audio_file_name.split('.')[0] + '.wav' sound.export(audio_file_name, format="wav") if audio_file_name.split('.')[1] == 'm4a': sound = AudioSegment.from_file(file_name,'m4a') audio_file_name = audio_file_name.replace('m4a','wav') sound.export(audio_file_name, format="wav") #!/usr/bin/env python3 files_path = '' file_name = '' startMin = 0 startSec = 0 endMin = 4 endSec = 30 # Time to miliseconds startTime = startMin*60*1000+startSec*1000 endTime = endMin*60*1000+endSec*1000 %cd /content/drive/Shareddrives/KPMG_Ideation/data file_name='audio_only_1.m4a' track = AudioSegment.from_file(file_name,'m4a') wav_filename = file_name.replace('m4a', 'wav') file_handle = track.export(wav_filename, format='wav') song = AudioSegment.from_wav('audio_only_1.wav') extract = song[startTime:endTime] # Saving as wav extract.export('result.wav', format="wav") AUDIO_FILE = os.path.join(os.path.dirname(os.path.abspath('data')), "result.wav") # use the audio file as the audio source r = sr.Recognizer() with sr.AudioFile(AUDIO_FILE) as source: audio = r.record(source) # read the entire audio file # recognize speech using Google Speech Recognition try: # for testing purposes, we're just using the default API key # to use another API key, use `r.recognize_google(audio, key="GOOGLE_SPEECH_RECOGNITION_API_KEY")` # instead of `r.recognize_google(audio)` txt = r.recognize_google(audio, language='ko') print("Google Speech Recognition:" + txt) except sr.UnknownValueError: print("Google Speech Recognition could not understand audio") except sr.RequestError as e: print("Could not request results from Google Speech Recognition service; {0}".format(e)) %cd /content/drive/Shareddrives/KPMG_Ideation/hanspell from hanspell import spell_checker chked="" line = kss.split_sentences(txt) for i in range(len(line)): line[i] = spell_checker.check(line[i])[2] print("Checked spelling ",line[i]) chked += "".join(line[i]) chked += ". 
" chked okt = Okt() class Text(): def __init__(self, text): text = re.sub("'", ' ', text) paragraphs = text.split('\n') self.text = text self.paragraphs = [i for i in paragraphs if i] self.counts = len(self.paragraphs) self.docs = [kss.split_sentences(paragraph) for paragraph in paragraphs if kss.split_sentences(paragraph)] self.newtext = deepcopy(self.text) print("TEXT") def findall(self, p, s): i = s.find(p) while i != -1: yield i i = s.find(p, i + 1) def countMatcher(self, sentences, paragraph_no): paragraph = self.docs[paragraph_no] total_no = len(paragraph) vec = [0] * total_no for idx, candidate in enumerate(paragraph): for sentence in sentences: if sentence[:4] in candidate: vec[idx] += 1 return vec class Highlight(Text): def __init__(self, text): super().__init__(text) print("Highlight") wordrank_extractor = KRWordRank(min_count=3, max_length=10) self.keywords, rank, graph = wordrank_extractor.extract(self.paragraphs) self.path = "/content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT" p = [] kw = [] for k, v in self.keywords.items(): p.append(okt.pos(k)) kw.append(k) words = self.text.split(' ') s = set() keylist = [word for i in kw for word in words if i in word] keylist = [i for i in keylist if len(i)>2] for i in keylist: if len(i)>2: s.add(i) # print("KEYLIST: ",keylist) p = [okt.pos(word) for word in s] self.s = set() for idx in range(len(p)): ls = p[idx] for tags in ls: word,tag = tags if tag == "Noun": if len(word)>=2: self.s.add(word) self.keys = [] for temp in self.s: self.keys.append(" " + str(temp)) print("KEYWORDS: ", self.keys) def add_tags_conj(self, txt): conj = '그리고, 그런데, 그러나, 그래도, 그래서, 또는, 및, 즉, 게다가, 따라서, 때문에, 아니면, 왜냐하면, 단, 오히려, 비록, 예를 들어, 반면에, 하지만, 그렇다면, 바로, 이에 대해' conj = conj.replace("'", "") self.candidates = conj.split(",") self.newtext = deepcopy(txt) self.idx = [(i, i + len(candidate)) for candidate in self.candidates for i in self.findall(candidate, txt)] for i in range(len(self.idx)): try: self.idx = [(start, 
start + len(candidate)) for candidate in self.candidates for start in self.findall(candidate, self.newtext)] word = self.newtext[self.idx[i][0]:self.idx[i][1]] self.newtext = word.join([self.newtext[:self.idx[i][0]], self.newtext[self.idx[i][1]:]]) except: pass return self.newtext class Summarize(Highlight): def __init__(self, text, paragraph_no): super().__init__(text) print("length of paragraphs ",len(self.paragraphs)) self.txt = self.paragraphs[paragraph_no] self.paragraph_no = paragraph_no def summarize(self): url = "https://api.smrzr.io/v1/summarize?num_sentences=5&algorithm=kmeans" headers = { 'content-type': 'raw/text', 'origin': 'https://smrzr.io', 'referer': 'https://smrzr.io/', 'sec-fetch-dest': 'empty', 'sec-fetch-mode': 'cors', 'sec-fetch-site': 'same-site', "user-agent": "Mozilla/5.0" } resp = requests.post(url, headers=headers, data= self.txt.encode('utf-8')) assert resp.status_code == 200 summary = resp.json()['summary'] temp = summary.split('\n') print("BERT: ", temp) return temp def summarizeTextRank(self): tr = TextRank(sent_tokenize) summary = tr.summarize(self.txt, num_sentences=5).split('\n') print("Textrank: ",summary) return summary def summarizeLexRank(self): lr = LexRank() lr.summarize(self.txt) summaries = lr.probe() print("Lexrank: ",summaries) return summaries def ensembleSummarize(self): a = np.array(self.countMatcher(self.summarize(), self.paragraph_no)) try: b = np.array(self.countMatcher(self.summarizeLexRank(), self.paragraph_no)) except: b = np.zeros_like(a) c = np.array(self.countMatcher(self.summarizeTextRank(),self.paragraph_no)) result= a+b+c i, = np.where(result == max(result)) txt, index = self.docs[self.paragraph_no][i[0]], i[0] return txt, index result = chked high = Highlight(result) summarizer = Summarize(chked, 0) sum, id = summarizer.ensembleSummarize() print("summarized ",sum) sum
_____no_output_____
MIT
Python_Modules/STT.ipynb
Soyeon-ErinLee/KPMG_Ideation
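Stripped of the web APIs and models, the ensemble step in `ensembleSummarize` above is simple voting: each summarizer's output is matched back to the paragraph's sentences via `countMatcher` (a paragraph sentence gets a vote when it contains the first four characters of a summary sentence), the vote vectors are summed, and the sentence with the most votes wins. A minimal pure-Python sketch of that voting logic — the toy English sentences are made up for illustration:

```python
def count_matcher(summary_sentences, paragraph_sentences):
    """One vote per paragraph sentence containing the 4-char prefix of a summary sentence."""
    votes = [0] * len(paragraph_sentences)
    for i, candidate in enumerate(paragraph_sentences):
        for sentence in summary_sentences:
            if sentence[:4] in candidate:
                votes[i] += 1
    return votes

paragraph = ["the cat sat on the mat", "dogs bark loudly at night", "birds sing at dawn"]
summaries = [["dogs bark", "birds sing"],   # output of summarizer A
             ["dogs bark loudly"]]          # output of summarizer B
totals = [sum(v) for v in zip(*(count_matcher(s, paragraph) for s in summaries))]
best = totals.index(max(totals))
print(totals, paragraph[best])  # [0, 2, 1] dogs bark loudly at night
```

The sentence most summarizers agree on wins the vote, which is the ensemble's tie-breaking idea in the original class.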
Networks and Simulation Packages
%%writefile magic_functions.py from tqdm import tqdm from multiprocess import Pool import scipy import networkx as nx import random import pandas as pd import numpy as np import rpy2.robjects as robjects from rpy2.robjects import pandas2ri from sklearn.metrics.pairwise import cosine_similarity from tqdm.notebook import tqdm import warnings warnings.filterwarnings("ignore") import pickle from scipy import stats ### read percentage of organizations in each region and market cap range p_reg = pd.read_excel('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/prob_mats.xlsx', 'reg',index_col=0) p_med = pd.read_excel('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/prob_mats.xlsx', 'med',index_col=0)
_____no_output_____
MIT
build/html/python_scripts1.ipynb
krishna-harsha/Tolerance-to-modern-slavery
Generating network with desired characteristics
import random

import networkx as nx
import numpy as np
import pandas as pd
from scipy import stats


def create_network(N, nr, er, asa, bs_n, m_size):
    ### Graph generation
    ## Region-specific organization counts (46% North America, 16% Europe, 38% Asia)
    n_regions_list = [int(0.46*N), int(0.16*N), int(0.38*N)]
    region_labels = n_regions_list[0]*["NrA"] + n_regions_list[1]*["Eur"] + n_regions_list[2]*["Asia"]
    ## fold any int() rounding remainder into the first region so the counts sum to N
    if len(region_labels) != N:
        if (len(region_labels) - N) > 0:
            n_regions_list[0] = n_regions_list[0] + len(region_labels) - N
        else:
            n_regions_list[0] = n_regions_list[0] - len(region_labels) + N
    g = nx.random_partition_graph(n_regions_list, p_in=0.60, p_out=0.15, seed=123, directed=True)
    edge_list_df = pd.DataFrame(list(g.edges(data=True)))
    edge_list_df.columns = ['source', 'target', 'weight']
    # calculate n of buyers (b), buyer-suppliers (bs) and suppliers (s) per region
    nr_n = [int(nr[0]*n_regions_list[0]), int(nr[1]*n_regions_list[0]), int(nr[2]*n_regions_list[0])]
    er_n = [int(er[0]*n_regions_list[1]), int(er[1]*n_regions_list[1]), int(er[2]*n_regions_list[1])]
    asa_n = [int(asa[0]*n_regions_list[2]), int(asa[1]*n_regions_list[2]), int(asa[2]*n_regions_list[2])]
    if np.sum(nr_n) < n_regions_list[0]:
        nr_n[0] = nr_n[0] + (n_regions_list[0] - np.sum(nr_n))
    if np.sum(er_n) < n_regions_list[1]:
        er_n[0] = er_n[0] + (n_regions_list[1] - np.sum(er_n))
    if np.sum(asa_n) < n_regions_list[2]:
        asa_n[0] = asa_n[0] + (n_regions_list[2] - np.sum(asa_n))
    ## if the buyer-supplier count is controlled, shift nodes from suppliers to buyers
    k_diff = nr_n[2] - int((nr_n[0]+nr_n[2]) / ((nr_n[0]/nr_n[2]) + 1 + bs_n))
    nr_n[2] = nr_n[2] - k_diff
    nr_n[0] = nr_n[0] + k_diff
    k_diff = er_n[2] - int((er_n[0]+er_n[2]) / ((er_n[0]/er_n[2]) + 1 + bs_n))
    er_n[2] = er_n[2] - k_diff
    er_n[0] = er_n[0] + k_diff
    k_diff = asa_n[2] - int((asa_n[0]+asa_n[2]) / ((asa_n[0]/asa_n[2]) + 1 + bs_n))
    asa_n[2] = asa_n[2] - k_diff
    asa_n[0] = asa_n[0] + k_diff
    # choose buyers, buyer-suppliers and suppliers
    # North America
    list1 = range(0, n_regions_list[0])
    random.seed(10)
    list1_0 = random.sample(list1, nr_n[0])
    random.seed(10)
    list1_1 = random.sample(pd.DataFrame(set(list1) - set(list1_0)).iloc[:, 0].tolist(), nr_n[1])
    random.seed(10)
    list1_2 = random.sample(pd.DataFrame(set(list1) - (set(list1_1).union(set(list1_0)))).iloc[:, 0].tolist(), nr_n[2])
    # Europe
    list2 = range(0 + n_regions_list[0], n_regions_list[1] + n_regions_list[0])
    random.seed(10)
    list2_0 = random.sample(list2, er_n[0])
    random.seed(10)
    list2_1 = random.sample(pd.DataFrame(set(list2) - set(list2_0)).iloc[:, 0].tolist(), er_n[1])
    random.seed(10)
    list2_2 = random.sample(pd.DataFrame(set(list2) - (set(list2_1).union(set(list2_0)))).iloc[:, 0].tolist(), er_n[2])
    # Asia
    list3 = range(0 + n_regions_list[0] + n_regions_list[1], n_regions_list[2] + n_regions_list[0] + n_regions_list[1])
    random.seed(10)
    list3_0 = random.sample(list3, asa_n[0])
    random.seed(10)
    list3_1 = random.sample(pd.DataFrame(set(list3) - set(list3_0)).iloc[:, 0].tolist(), asa_n[1])
    random.seed(10)
    list3_2 = random.sample(pd.DataFrame(set(list3) - (set(list3_1).union(set(list3_0)))).iloc[:, 0].tolist(), asa_n[2])
    nodes_frame = pd.DataFrame(range(N), columns=['nodes'])
    nodes_frame['partition'] = n_regions_list[0]*["NrA"] + n_regions_list[1]*["Eur"] + n_regions_list[2]*["Asia"]
    nodes_frame['category'] = ""
    nodes_frame['category'][list1_0] = "buyer"
    nodes_frame['category'][list2_0] = "buyer"
    nodes_frame['category'][list3_0] = "buyer"
    nodes_frame['category'][list1_1] = "both"
    nodes_frame['category'][list2_1] = "both"
    nodes_frame['category'][list3_1] = "both"
    nodes_frame['category'][list1_2] = "sup"
    nodes_frame['category'][list2_2] = "sup"
    nodes_frame['category'][list3_2] = "sup"
    params_sn = pd.read_csv('skew_norm_params_reg_tier_mark_size.csv', index_col=0)
    nodes_frame['ms'] = ""
    ########### draw a market size based on region and tier
    for i in nodes_frame['nodes']:
        ps = params_sn.loc[(params_sn['tier'] == nodes_frame['category'][i]) & (params_sn['reg'] == nodes_frame['partition'][i])]
        np.random.seed(seed=123)
        nodes_frame['ms'][i] = stats.skewnorm(ps['ae'], ps['loce'], ps['scalee']).rvs(1)[0]
    nqn1 = np.quantile(nodes_frame['ms'], 0.05)
    nqn3 = np.quantile(nodes_frame['ms'], 0.5)
    nodes_frame['ms'] = nodes_frame['ms'] + m_size*nodes_frame['ms']
    # bucket market size into low / med / high against the 5% and 50% quantiles
    dummy = pd.DataFrame(columns=['ms'])
    dummy['ms'] = range(0, N)
    for i in range(0, N):
        if nodes_frame.iloc[i, 3] <= nqn1:
            dummy['ms'][i] = "low"
        elif nodes_frame.iloc[i, 3] <= nqn3:
            dummy['ms'][i] = "med"
        else:
            dummy['ms'][i] = "high"
    nodes_frame['ms2'] = dummy['ms']
    # drop edges that leave a pure supplier or enter a pure buyer
    buy_list = list1_0 + list2_0 + list3_0
    sup_list = list1_2 + list2_2 + list3_2
    edge_list_df_new = edge_list_df.drop([i for i, e in enumerate(list(edge_list_df['source'])) if e in set(sup_list)], axis=0)
    edge_list_df_new.index = range(edge_list_df_new.shape[0])
    edge_list_df_new = edge_list_df_new.drop([i for i, e in enumerate(list(edge_list_df_new['target'])) if e in set(buy_list)], axis=0)
    edge_list_df_new.index = range(edge_list_df_new.shape[0])
    g = nx.DiGraph()
    # Add edges and edge attributes
    for i, elrow in edge_list_df_new.iterrows():
        g.add_edge(elrow[0], elrow[1], attr_dict=elrow[2])
    return [edge_list_df_new, nodes_frame, g]
_____no_output_____
MIT
build/html/python_scripts1.ipynb
krishna-harsha/Tolerance-to-modern-slavery
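The regional split inside `create_network` (46% North America, 16% Europe, 38% Asia) truncates each count with `int()`, so the three counts can fall short of `N`; the function folds the remainder back into the first region. A minimal standalone sketch of that correction (`split_regions` is a hypothetical helper name, not part of the codebase):

```python
def split_regions(N, props=(0.46, 0.16, 0.38)):
    """Split N organizations into per-region counts, folding the
    int() rounding remainder into the first region so the counts
    always sum to N (mirrors the fix-up in create_network)."""
    sizes = [int(p * N) for p in props]
    sizes[0] += N - sum(sizes)  # absorb the truncation remainder
    return sizes
```

For example, with N=97 the raw truncated counts are 44+15+36=95, so the first region absorbs the 2-organization deficit.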
Generate initial attributes Python wrapper
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri


def sample_lab_attr_all_init(N):
    # Define the R script and load the instance in Python
    r = robjects.r
    r['source']('sampling_for_attributes_normal.R')
    # Load the function defined in R
    sampling_for_attributes_r2 = robjects.globalenv['sampling_for_attributes_normal']
    # Invoke the R function and get the result
    df_result_r = sampling_for_attributes_r2(N)
    # Convert it back to a pandas DataFrame
    df_result = pandas2ri.rpy2py(df_result_r)
    return df_result
R function for beta-distributed tolerance
library(bnlearn)
library(stats)

sampling_for_attributes_normal <- function(N){
  #' Sample organization attributes whose mean (the tolerance score)
  #' tracks a Beta(1.1, 0.5)-shaped target profile scaled to [0, 100].
  my_model <- readRDS("C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit.rds")
  x_s <- seq(0, 1, length.out = N)
  y1 <- dbeta(x_s, 1.1, 0.5) * 100
  x <- y1
  for (i in 1:length(x)){
    ## S3 method for class 'bn.fit'
    sampled_data <- rbn(my_model, n = 500)
    sampled_data[, c(1:7)] <- lapply(sampled_data[, c(1:7)], as.numeric)
    sampled_data[sampled_data <= 0] <- NA
    sampled_data[sampled_data >= 100] <- NA
    r_ind <- rowMeans(sampled_data, na.rm = FALSE)
    sampled_data <- sampled_data[!is.na(r_ind), ]
    sampled_data$score <- as.numeric(rowMeans(sampled_data))
    # keep the sampled row(s) whose score is closest to the i-th target
    sc_diffs <- abs(x[i] - sampled_data$score)
    if (i == 1){
      sampled_data_f <- sampled_data[sc_diffs == min(sc_diffs), ]
    } else {
      sampled_data_f <- rbind(sampled_data_f, sampled_data[sc_diffs == min(sc_diffs), ])
    }
  }
  return(sampled_data_f)
}
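The R function above anchors the sampled organizations to a target tolerance profile: the Beta(1.1, 0.5) density evaluated on an N-point grid over [0, 1] and scaled by 100. A Python sketch of just that target curve, assuming `scipy.stats` is available (`beta_target_scores` is a hypothetical name; note the density diverges at x = 1 because the second shape parameter is below 1, which the R code's ≥100 → NA filtering effectively guards against):

```python
import numpy as np
from scipy import stats

def beta_target_scores(N, a=1.1, b=0.5):
    """Target tolerance scores: the Beta(a, b) pdf on a length-N grid
    over [0, 1], scaled by 100 (mirrors dbeta(x_s, 1.1, 0.5)*100)."""
    x = np.linspace(0.0, 1.0, N)
    return stats.beta.pdf(x, a, b) * 100.0
```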
Generate new attributes Python wrapper
import numpy as np
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri


def sample_lab_attr_new_B(N, reg, s_av1, s_av2):
    # Define the R script and load the instance in Python
    r = robjects.r
    r['source']('sampling_for_attributes.R')
    # Load the function defined in R
    sampling_for_attributes_r = robjects.globalenv['sampling_for_attributes']
    # Invoke the R function and get the result
    df_result_r = sampling_for_attributes_r(N, reg)
    # Convert it back to a pandas DataFrame
    df_result = pandas2ri.rpy2py(df_result_r)
    # disabled window-widening logic:
    """if (s_av2-s_av1)<2:
        s_av2=np.min([s_av2+2,64.28571])
        if (s_av2-s_av1)>0:
            s_av1=np.max([s_av1-2,0])
        else:
            s_av1=np.max([s_av2-2,0])
    if s_av2>100:
        s_av2=100"""
    # keep rows whose score falls inside the requested window
    sampled_data = df_result.loc[(df_result['score'] >= s_av1) & (df_result['score'] <= s_av2)]
    if sampled_data.shape[0] == 0:
        # fall back to the row(s) closest to the window midpoint
        s_av = np.mean([s_av1, s_av2])
        s_th = s_av * 0.05
        sampled_data = df_result.loc[(df_result['score'] >= (s_av - s_th)) | (df_result['score'] <= (s_av + s_th))]
        tmp_vector = np.abs(sampled_data['score'] - s_av)
        sampled_data = sampled_data.loc[tmp_vector == np.min(tmp_vector)]
    return sampled_data.sample()
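`sample_lab_attr_new_B` keeps the R-sampled rows whose `score` falls in the requested window and, when none do, falls back to the row nearest the window midpoint (the `|` in the original fallback filter matches every row, so the nearest-score step is what actually narrows the choice). A self-contained sketch of that selection logic on a toy frame (`pick_by_score` is a hypothetical name):

```python
import numpy as np
import pandas as pd

def pick_by_score(df, lo, hi):
    """Keep rows with score in [lo, hi]; if the window is empty,
    fall back to the row(s) closest to the window midpoint, then
    draw one row (mirrors sample_lab_attr_new_B)."""
    picked = df.loc[(df['score'] >= lo) & (df['score'] <= hi)]
    if picked.shape[0] == 0:
        mid = np.mean([lo, hi])
        dist = np.abs(df['score'] - mid)
        picked = df.loc[dist == dist.min()]
    return picked.sample(random_state=0)

df = pd.DataFrame({'score': [10.0, 42.0, 55.0, 90.0]})
row = pick_by_score(df, 70, 80)  # empty window -> nearest to 75 is 90.0
```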
R function to sample new attributes
sampling_for_attributes <- function(N, reg){
  library(bnlearn)
  # load the regional Bayesian-network fit
  if (reg == 1){
    my_model <- readRDS("C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit_1.rds")
  } else if (reg == 2){
    my_model <- readRDS("C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit_2.rds")
  } else {
    my_model <- readRDS("C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit_3.rds")
  }
  # oversample, drop out-of-range rows, then subsample back to N
  N_reg <- N
  N <- N + 500
  ## S3 method for class 'bn.fit'
  sampled_data <- rbn(my_model, n = N)
  sampled_data[, c(1:7)] <- lapply(sampled_data[, c(1:7)], as.numeric)
  sampled_data[sampled_data <= 0] <- NA
  sampled_data[sampled_data >= 100] <- NA
  r_ind <- rowMeans(sampled_data, na.rm = FALSE)
  sampled_data <- sampled_data[!is.na(r_ind), ]
  sampled_data <- sampled_data[sample(nrow(sampled_data), N_reg), ]
  rownames(sampled_data) <- seq(length = nrow(sampled_data))
  # the tolerance score is the row mean of the seven attributes
  # (weighted alternative, disabled):
  # sampled_data$score <- (0.38752934*sampled_data[,1]+0.37163856*sampled_data[,2]+0.32716766*sampled_data[,3]+0.39613783*sampled_data[,4]+0.38654069*sampled_data[,5]+0.38654069*sampled_data[,6]+0.38589444*sampled_data[,7])/(0.38752934+0.37163856+0.32716766+0.39613783+0.38654069+0.38654069+0.38589444)
  sampled_data$score <- as.numeric(rowMeans(sampled_data))
  return(sampled_data)
}
Bayesian Network fit to attributes of an organization
library(bnlearn)

data <- read.csv('C:/Users/ADMIN/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
head(data)
# regional subsets
data1 <- data[data$Region == 'North America', ]
data2 <- data[data$Region == 'Europe', ]
data3 <- data[data$Region == 'Asia', ]
# keep the seven attribute columns
data  <- data[, c(2:8)]
data1 <- data1[, c(2:8)]
data2 <- data2[, c(2:8)]
data3 <- data3[, c(2:8)]
dim(data)
summary(data)
data[, c(1:7)]  <- lapply(data[, c(1:7)], as.numeric)
data1[, c(1:7)] <- lapply(data1[, c(1:7)], as.numeric)
data2[, c(1:7)] <- lapply(data2[, c(1:7)], as.numeric)
data3[, c(1:7)] <- lapply(data3[, c(1:7)], as.numeric)
# hill-climbing structure learning, overall and per region
bn.scores  <- hc(data)
bn.scores1 <- hc(data1)
bn.scores2 <- hc(data2)
bn.scores3 <- hc(data3)
plot(bn.scores)
plot(bn.scores1)
plot(bn.scores2)
plot(bn.scores3)
bn.scores
# fit the conditional (Gaussian) parameters
fit  <- bn.fit(bn.scores, data)
fit1 <- bn.fit(bn.scores1, data1)
fit2 <- bn.fit(bn.scores2, data2)
fit3 <- bn.fit(bn.scores3, data3)
fit
bn.fit.qqplot(fit)
bn.fit.xyplot(fit)
bn.fit.histogram(fit)
bn.fit.histogram(fit1)
bn.fit.histogram(fit2)
bn.fit.histogram(fit3)
# save each regional fit under its own file name
saveRDS(fit1, file = "model_fit_1.rds")
saveRDS(fit2, file = "model_fit_2.rds")
saveRDS(fit3, file = "model_fit_3.rds")
## rbn() signature for class 'bn': rbn(x, n = 1000, data, fit = "mle", ..., debug = FALSE)
## S3 method for class 'bn.fit'
sampled_data <- rbn(fit, n = 1000)
head(sampled_data)
write.csv(sampled_data, file = 'C:/Users/ADMIN/OneDrive/Documents/IIM_R1/proj2/sampled_data.csv')
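`rbn()` draws from a fitted Gaussian Bayesian network by ancestral sampling: each node is its fitted intercept plus coefficient-weighted parent values plus Gaussian noise. A minimal numpy sketch of that mechanism for a single parent → child edge (the intercept, coefficient, and noise scale below are illustrative, not the fitted values):

```python
import numpy as np

def sample_child(parent, intercept, coef, sd, rng):
    """Ancestral sampling for one linear-Gaussian node:
    child = intercept + coef * parent + Normal(0, sd)."""
    return intercept + coef * parent + rng.normal(0.0, sd, size=parent.shape)

rng = np.random.default_rng(123)
parent = rng.normal(50.0, 10.0, size=5000)        # root node draws
child = sample_child(parent, 5.0, 0.8, 2.0, rng)  # E[child] = 5 + 0.8*50 = 45
```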
R scripts for cumulative and probability density of a tolerance score
prob_cdf <- function(cur_sc, reg){
  data <- read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
  reg_lt <- unique(data$Region)
  data <- data[data$Region == reg_lt[reg], ]
  # exceedance probability: 1 - empirical CDF at the current score
  ecdff <- ecdf(data$Tolerance)
  p <- 1 - ecdff(cur_sc)
  return(p)
}

prob_cdf_m <- function(cur_sc, msh){
  data <- read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
  m_lt <- c(unique(data$market_cap)[1], unique(data$market_cap)[3], unique(data$market_cap)[2])
  data <- data[data$market_cap == m_lt[msh], ]
  ecdff <- ecdf(data$Tolerance)
  p <- 1 - ecdff(cur_sc)
  return(p)
}

prob_pdf <- function(cur_sc, reg){
  data <- read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
  reg_lt <- unique(data$Region)
  data <- data[data$Region == reg_lt[reg], ]
  # kernel-density value at the grid point nearest the current score
  kd <- density(data$Tolerance)
  p <- kd$y[which(abs(kd$x - cur_sc) == min(abs(kd$x - cur_sc)))]
  return(p)
}

prob_pdf_m <- function(cur_sc, msh){
  data <- read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')
  m_lt <- c(unique(data$market_cap)[1], unique(data$market_cap)[3], unique(data$market_cap)[2])
  data <- data[data$market_cap == m_lt[msh], ]
  kd <- density(data$Tolerance)
  p <- kd$y[which(abs(kd$x - cur_sc) == min(abs(kd$x - cur_sc)))]
  return(p)
}
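The two R helpers reduce to (a) an empirical exceedance probability, 1 − ECDF(score), and (b) a kernel-density value at the score. Python analogues using numpy and `scipy.stats.gaussian_kde` (hypothetical names; the R versions first subset `cosine_input.csv` by region or market cap):

```python
import numpy as np
from scipy.stats import gaussian_kde

def prob_exceed(scores, cur_sc):
    """1 - ECDF(cur_sc): share of observed scores strictly above cur_sc,
    matching R's 1 - ecdf(data$Tolerance)(cur_sc)."""
    return float(np.mean(np.asarray(scores, dtype=float) > cur_sc))

def density_at(scores, cur_sc):
    """Kernel-density estimate evaluated at cur_sc; the R code instead
    picks the density() grid point nearest cur_sc."""
    return float(gaussian_kde(scores)(cur_sc)[0])

tol = [10.0, 20.0, 30.0, 40.0, 50.0]
```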
Python scripts for running R scripts
import rpy2.robjects as robjects


def prob_cdf(cur_sc, reg):
    # Define the R script and load the instance in Python
    r = robjects.r
    r['source']('prob_cdf.R')
    # Load the function defined in R
    prob_cdf_r = robjects.globalenv['prob_cdf']
    # Invoke the R function and return the result
    return prob_cdf_r(cur_sc, reg)


def prob_cdf_m(cur_sc, msh):
    r = robjects.r
    r['source']('prob_cdf.R')
    prob_cdf_m_r = robjects.globalenv['prob_cdf_m']
    return prob_cdf_m_r(cur_sc, msh)


def prob_pdf(cur_sc, reg):
    r = robjects.r
    r['source']('prob_cdf.R')
    prob_pdf_r = robjects.globalenv['prob_pdf']
    return prob_pdf_r(cur_sc, reg)


def prob_pdf_m(cur_sc, msh):
    r = robjects.r
    r['source']('prob_cdf.R')
    prob_pdf_m_r = robjects.globalenv['prob_pdf_m']
    return prob_pdf_m_r(cur_sc, msh)
Simulation
import numpy as np
import pandas as pd
from tqdm import tqdm
from sklearn.metrics.pairwise import cosine_similarity


def simulation_continous(node_attr, edge_list_df, num_sim, W, bs1, bs2, N, r_on, m_on,
                         p_reg, p_med, probs_mat, probs_mat2, run_iter,
                         alpha1, alpha2, alpha3, Tmp, rgn, mcp):
    blanck_data_tot = np.empty([N, 32, num_sim], dtype='object')
    for i in tqdm(range(num_sim), desc="Running i ..."):
        blanck_data = np.empty([N, 32], dtype='object')
        # node attributes to edge attributes: pairwise cosine similarity, diagonal masked
        df_3 = cosine_similarity(node_attr.iloc[:, :8])
        df_4 = pd.DataFrame(df_3)
        df_4.values[[np.arange(len(df_4))]*2] = np.nan
        edge_list_2 = df_4.stack().reset_index()
        edge_list_2.columns = ['source', 'target', 'weight']
        edge_list_f = pd.merge(edge_list_df, edge_list_2, how='left',
                               left_on=['source', 'target'], right_on=['source', 'target'])
        edge_list_f.drop('weight_x', axis=1, inplace=True)
        edge_list_f.columns = ['source', 'target', 'weight']
        st = ["high", "low"]
        for j in tqdm(range(0, N), desc="Running j..."):
            if len(list(np.where(edge_list_f.iloc[:, 1] == j)[0])) >= 1:
                #################################################################
                ########################## MIMETIC ##############################
                #################################################################
                st = ["high", "low"]
                st = pd.DataFrame(st)
                st.columns = ['state']
                # indices of nodes in the same tier as node j
                p_tier_ind = [i for i, e in enumerate(list(node_attr['tier'])) if e in set([node_attr.iloc[j, 10]])]
                t_node_attr = node_attr.iloc[p_tier_ind, :]
                t_node_attr_score = t_node_attr['score'].copy()
                t_node_attr_score = t_node_attr_score.reset_index().iloc[:, 1:]
                for tnr in range(0, t_node_attr.shape[0]):
                    if node_attr.iloc[j, :]['score'] < t_node_attr_score['score'][tnr]:
                        t_node_attr['state'][t_node_attr.index[tnr]] = 'high'
                    else:
                        t_node_attr['state'][t_node_attr.index[tnr]] = 'low'
                tier_p = pd.DataFrame(t_node_attr['state'].value_counts() / np.sum(t_node_attr['state'].value_counts()))
                tier_p = tier_p.reset_index()
                tier_p.columns = ['state', 't_p']
                t_tier_p = pd.merge(st, tier_p, how="left", left_on=['state'], right_on='state')
                t_tier_p = t_tier_p.fillna(0.01)
                tier_p = t_tier_p
                # states and distances
                d_tier = pd.concat([t_node_attr.iloc[:, -2-2-1],
                                    df_4.iloc[list(node_attr.iloc[p_tier_ind, :].index), j]], axis=1)
                d_tier = d_tier.fillna(1)
                # average distance per state
                d_tier_avg = d_tier.groupby(['state']).mean(str(j))
                s_tier_avg = pd.DataFrame(t_node_attr.groupby(['state']).mean()['score'])
                s_tier_avg = pd.merge(st, s_tier_avg, how='left', left_on=['state'],
                                      right_on=['state']).fillna(node_attr.iloc[j, :]['score'])
                ## state probability, average distance and average score per state
                mimetic_p = pd.merge(tier_p, d_tier_avg, how='left', left_on=['state'], right_on=['state'])
                mimetic_p = pd.merge(mimetic_p, s_tier_avg, how='left', left_on=['state'], right_on=['state'])
                mimetic_p.columns = ['state', 'tier_p', 'cur_node', 'score_m']
                mimetic_p['tier_p'] = mimetic_p['tier_p'] / np.sum(mimetic_p['tier_p'])
                ################################################
                region_ind = [i for i, e in enumerate(list(p_reg.columns)) if e in set([node_attr.iloc[j, 9]])]
                ms_ind = [i for i, e in enumerate(list(p_med.columns)) if e in set([node_attr.iloc[j, 12]])]
                h_reg = prob_pdf(round(round(mimetic_p['score_m'][0])), region_ind[0]+1)[0] / prob_pdf(round(node_attr.iloc[j, :]['score']), region_ind[0]+1)[0]
                l_reg = prob_pdf(round(round(mimetic_p['score_m'][1])), region_ind[0]+1)[0] / prob_pdf(round(node_attr.iloc[j, :]['score']), region_ind[0]+1)[0]
                pbreg = pd.DataFrame([h_reg/(h_reg+l_reg), l_reg/(h_reg+l_reg)], columns=['pbreg'])
                pbreg.index = ['high', 'low']
                pbreg = pbreg.reset_index()
                pbreg.columns = ['state', 'pbreg']
                h_reg = prob_pdf_m(round(round(mimetic_p['score_m'][0])), ms_ind[0]+1)[0] / prob_pdf_m(round(node_attr.iloc[j, :]['score']), ms_ind[0]+1)[0]
                l_reg = prob_pdf_m(round(round(mimetic_p['score_m'][1])), ms_ind[0]+1)[0] / prob_pdf_m(round(node_attr.iloc[j, :]['score']), ms_ind[0]+1)[0]
                pbm = pd.DataFrame([h_reg/(h_reg+l_reg), l_reg/(h_reg+l_reg)], columns=['pbm'])
                pbm.index = ['high', 'low']
                pbm = pbm.reset_index()
                pbm.columns = ['state', 'pbm']
                pbreg.index = mimetic_p.index
                pbm.index = mimetic_p.index
                mimetic_p['pbreg_m'] = pbreg['pbreg']
                mimetic_p['pbm_m'] = pbm['pbm']
                #################################################################
                ############ Local & Global / informal-regulative & normative ###
                #################################################################
                # indices of nodes with an edge into j (parents) and out of j (children)
                prnt_ind = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:, 1] == j].iloc[:, 0])]
                prnt_ind2 = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:, 0] == j].iloc[:, 1])]
                l_node_attr = node_attr.iloc[prnt_ind, :]
                l_node_attr_score = l_node_attr['score'].copy()
                l_node_attr_score = l_node_attr_score.reset_index().iloc[:, 1:]
                for tnr in range(0, l_node_attr.shape[0]):
                    if node_attr.iloc[j, :]['score'] < l_node_attr_score['score'][tnr]:
                        l_node_attr['state'][l_node_attr.index[tnr]] = 'high'
                    else:
                        l_node_attr['state'][l_node_attr.index[tnr]] = 'low'
                l2_node_attr = node_attr.iloc[prnt_ind2, :]
                l2_node_attr_score = l2_node_attr['score'].copy()
                l2_node_attr_score = l2_node_attr_score.reset_index().iloc[:, 1:]
                for tnr in range(0, l2_node_attr.shape[0]):
                    if node_attr.iloc[j, :]['score'] < l2_node_attr_score['score'][tnr]:
                        l2_node_attr['state'][l2_node_attr.index[tnr]] = 'high'
                    else:
                        l2_node_attr['state'][l2_node_attr.index[tnr]] = 'low'
                if len(prnt_ind2) > 0:
                    # state probabilities of parent and child nodes (could also use d*count probabilities)
                    Lp1 = pd.DataFrame(l_node_attr.iloc[:, -2-2-1].value_counts() / np.sum(l_node_attr.iloc[:, -2-2-1].value_counts()))
                    Lp1 = Lp1.reset_index()
                    Lp2 = pd.DataFrame(l2_node_attr.iloc[:, -2-2-1].value_counts() / np.sum(l2_node_attr.iloc[:, -2-2-1].value_counts()))
                    Lp2 = Lp2.reset_index()
                    Lp1 = pd.merge(st, Lp1, how="left", left_on=['state'], right_on='index').fillna(0.01)
                    Lp2 = pd.merge(st, Lp2, how="left", left_on=['state'], right_on='index').fillna(0.01)
                    Lp = pd.merge(Lp1, Lp2, how="left", left_on=['state_x'], right_on='state_x')
                    Lp['state'] = bs1*Lp['state_y_x'] + bs2*Lp['state_y_y']
                    Lp = Lp.iloc[:, [0, 5]]
                    Lp.columns = ['index', 'state']
                else:
                    Lp = pd.DataFrame(l_node_attr.iloc[:, -2-2-1].value_counts() / np.sum(l_node_attr.iloc[:, -2-2-1].value_counts()))
                    Lp = Lp.reset_index()
                    Lp = pd.merge(st, Lp, how="left", left_on=['state'], right_on='index').fillna(0.01)
                    Lp = Lp.iloc[:, [0, 2]]
                    Lp.columns = ['index', 'state']
                if len(prnt_ind2) > 0:
                    # states and distances, weighted across parents (bs1) and children (bs2)
                    Ld1 = pd.concat([l_node_attr.iloc[:, -2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind, :].index), j]], axis=1)
                    Lad1 = Ld1.groupby(['state']).mean()
                    Ld2 = pd.concat([l2_node_attr.iloc[:, -2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind2, :].index), j]], axis=1)
                    Lad2 = Ld2.groupby(['state']).mean()
                    Lad1 = pd.merge(st, Lad1, how="left", left_on=['state'], right_on='state').fillna(0.01)
                    Lad2 = pd.merge(st, Lad2, how="left", left_on=['state'], right_on='state').fillna(0.01)
                    Lad = pd.merge(Lad1, Lad2, how="left", left_on=['state'], right_on='state').fillna(0.01)
                    Lad['state_n'] = bs1*Lad[str(j)+'_x'] + bs2*Lad[str(j)+'_y']
                    Lad = Lad.iloc[:, [0, 3]]
                    Lad.columns = ['state', str(j)]
                    Lad.index = Lad['state']
                    Lad = Lad.iloc[:, 1]
                    s_l1_avg = pd.DataFrame(l_node_attr.groupby(['state']).mean()['score'])
                    s_l1_avg = pd.merge(st, s_l1_avg, how='left', left_on=['state'], right_on=['state']).fillna(node_attr.iloc[j, :]['score'])
                    s_l2_avg = pd.DataFrame(l2_node_attr.groupby(['state']).mean()['score'])
                    s_l2_avg = pd.merge(st, s_l2_avg, how='left', left_on=['state'], right_on=['state']).fillna(node_attr.iloc[j, :]['score'])
                    s_l_avg = pd.merge(s_l1_avg, s_l2_avg, how="left", left_on=['state'], right_on='state')
                    s_l_avg['score_n'] = bs1*s_l_avg['score'+'_x'] + bs2*s_l_avg['score'+'_y']
                    s_l_avg = s_l_avg.iloc[:, [0, 3]]
                    s_l_avg.columns = ['state', 'score']
                else:
                    Ld = pd.concat([l_node_attr.iloc[:, -2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind, :].index), j]], axis=1)
                    Lad = Ld.groupby(['state']).mean()
                    Lad = pd.merge(st, Lad, how="left", left_on=['state'], right_on='state').fillna(0.01)
                    Lad = Lad.reset_index()
                    s_l_avg = pd.DataFrame(l_node_attr.groupby(['state']).mean()['score'])
                    s_l_avg = pd.merge(st, s_l_avg, how='left', left_on=['state'], right_on=['state']).fillna(node_attr.iloc[j, :]['score'])
                    s_l_avg = s_l_avg.reset_index()
                if len(prnt_ind2) > 0:
                    ## state local probability and average distance
                    dist_local = pd.merge(Lp, Lad, how='left', left_on=['index'], right_on=['state'])
                    dist_local.columns = ['state', 'local_prob', 'cur_node_l']
                    dist_local = pd.merge(dist_local, s_l_avg, how='left', left_on=['state'], right_on=['state'])
                else:
                    dist_local = pd.merge(Lp, Lad, how='left', left_on=['index'], right_on=['state'])
                    dist_local = dist_local.iloc[:, [0, 1, 4]]
                    dist_local.columns = ['state', 'local_prob', 'cur_node_l']
                    dist_local = pd.merge(dist_local, s_l_avg, how='left', left_on=['state'], right_on=['state'])
                    dist_local = dist_local.iloc[:, [0, 1, 2, 4]]
                dist_local.columns = ['state', 'local_prob', 'cur_node_l', 'score_l']
                dist_local['local_prob'] = dist_local['local_prob'] / np.sum(dist_local['local_prob'])
                h_reg = prob_pdf(round(round(dist_local['score_l'][0])), region_ind[0]+1)[0] / prob_pdf(round(node_attr.iloc[j, :]['score']), region_ind[0]+1)[0]
                l_reg = prob_pdf(round(round(dist_local['score_l'][1])), region_ind[0]+1)[0] / prob_pdf(round(node_attr.iloc[j, :]['score']), region_ind[0]+1)[0]
                pbreg = pd.DataFrame([h_reg/(h_reg+l_reg), l_reg/(h_reg+l_reg)], columns=['pbreg'])
                pbreg.index = ['high', 'low']
                pbreg = pbreg.reset_index()
                pbreg.columns = ['state', 'pbreg']
                h_reg = prob_pdf_m(round(round(dist_local['score_l'][0])), ms_ind[0]+1)[0] / prob_pdf_m(round(node_attr.iloc[j, :]['score']), ms_ind[0]+1)[0]
                l_reg = prob_pdf_m(round(round(dist_local['score_l'][1])), ms_ind[0]+1)[0] / prob_pdf_m(round(node_attr.iloc[j, :]['score']), ms_ind[0]+1)[0]
                pbm = pd.DataFrame([h_reg/(h_reg+l_reg), l_reg/(h_reg+l_reg)], columns=['pbm'])
                pbm.index = ['high', 'low']
                pbm = pbm.reset_index()
                pbm.columns = ['state', 'pbm']
                pbreg.index = mimetic_p.index
                pbm.index = mimetic_p.index
                dist_local['pbreg_l'] = pbreg['pbreg']
                dist_local['pbm_l'] = pbm['pbm']
                ## global (regional) probability
                st = ["high", "low"]
                st = pd.DataFrame(st)
                st.columns = ['state']
                p_region_ind = [i for i, e in enumerate(list(node_attr['partition'])) if e in set([node_attr.iloc[j, 9]])]
                r_node_attr = node_attr.iloc[p_region_ind, :]
                r_node_attr_score = r_node_attr['score'].copy()
                r_node_attr_score = r_node_attr_score.reset_index().iloc[:, 1:]
                for tnr in range(0, r_node_attr.shape[0]):
                    if node_attr.iloc[j, :]['score'] < r_node_attr_score['score'][tnr]:
                        r_node_attr['state'][r_node_attr.index[tnr]] = 'high'
                    else:
                        r_node_attr['state'][r_node_attr.index[tnr]] = 'low'
                glb_p = pd.DataFrame(r_node_attr['state'].value_counts() / np.sum(r_node_attr['state'].value_counts()))
                glb_p = glb_p.reset_index()
                glb_p.columns = ['state', 'g_p']
                t_glb_p = pd.merge(st, glb_p, how="left", left_on=['state'], right_on='state')
                t_glb_p = t_glb_p.fillna(0.01)
                glb_p = t_glb_p
                # states and distances
                gd = pd.concat([r_node_attr.iloc[:, -2-2-1], df_4.iloc[list(node_attr.iloc[p_region_ind, :].index), j]], axis=1)
                # average distance per state
                gad = gd.groupby(['state']).mean(str(j))
                gad = pd.merge(st, gad, how="left", left_on=['state'], right_on='state')
                s_g_avg = pd.DataFrame(r_node_attr.groupby(['state']).mean()['score'])
                s_g_avg = pd.merge(st, s_g_avg, how='left', left_on=['state'], right_on=['state']).fillna(node_attr.iloc[j, :]['score'])
                ## state global probability and average distance
                dist_global = pd.merge(glb_p, gad, how='left', left_on=['state'], right_on=['state'])
                dist_global = pd.merge(dist_global, s_g_avg, how='left', left_on=['state'], right_on=['state'])
                dist_global.columns = ['state', 'glob_prob', 'cur_node_g', 'score_g']
                dist_global['glob_prob'] = dist_global['glob_prob'] / np.sum(dist_global['glob_prob'])
                h_reg = prob_pdf(round(round(dist_global['score_g'][0])), region_ind[0]+1)[0] / prob_pdf(round(node_attr.iloc[j, :]['score']), region_ind[0]+1)[0]
                l_reg = prob_pdf(round(round(dist_global['score_g'][1])), region_ind[0]+1)[0] / prob_pdf(round(node_attr.iloc[j, :]['score']), region_ind[0]+1)[0]
                pbreg = pd.DataFrame([h_reg/(h_reg+l_reg), l_reg/(h_reg+l_reg)], columns=['pbreg'])
                pbreg.index = ['high', 'low']
                pbreg = pbreg.reset_index()
                pbreg.columns = ['state', 'pbreg']
                h_reg = prob_pdf_m(round(round(dist_global['score_g'][0])), ms_ind[0]+1)[0] / prob_pdf_m(round(node_attr.iloc[j, :]['score']), ms_ind[0]+1)[0]
                l_reg = prob_pdf_m(round(round(dist_global['score_g'][1])), ms_ind[0]+1)[0] / prob_pdf_m(round(node_attr.iloc[j, :]['score']), ms_ind[0]+1)[0]
                pbm = pd.DataFrame([h_reg/(h_reg+l_reg), l_reg/(h_reg+l_reg)], columns=['pbm'])
                pbm.index = ['high', 'low']
                pbm = pbm.reset_index()
                pbm.columns = ['state', 'pbm']
                pbreg.index = mimetic_p.index
                pbm.index = mimetic_p.index
                dist_global['pbreg_g'] = pbreg['pbreg']
                dist_global['pbm_g'] = pbm['pbm']
                if (((i+1)*(j+1)) % 5000) == 0:
                    print(dist_global)
                ## all mimetic
                dist_local_global = pd.merge(dist_global, dist_local, how='left', left_on=['state'], right_on=['state'])
                dist_local_global = dist_local_global.fillna(0.01)
                #################################################################
                ######################## All pressures ##########################
                #################################################################
                all_p = pd.merge(mimetic_p, dist_local_global, how='left', left_on=['state'], right_on=['state'])
                all_p = all_p.fillna(0.01)
                all_p_tpd = all_p.copy()
                all_p_new = pd.DataFrame(all_p_tpd['state'])
                all_p_new['tier_p'] = (all_p_tpd['tier_p']*all_p_tpd['cur_node']) / np.sum(all_p_tpd['tier_p']*all_p_tpd['cur_node'])
                all_p_new['glob_prob'] = (all_p_tpd['glob_prob']*all_p_tpd['cur_node_g']) / np.sum(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g'])
                all_p_new['local_prob'] = (all_p_tpd['local_prob']*all_p_tpd['cur_node_l']) / np.sum(all_p_tpd['local_prob']*all_p_tpd['cur_node_l'])
                all_p_new['pbreg'] = (all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l']) / np.sum(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l'])
                all_p_new['pbm'] = (all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l']) / np.sum(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l'])
                all_p_new['score_m'] = all_p_tpd['score_m']
                all_p_new['score_l'] = all_p_tpd['score_l']
                all_p_new['score_g'] = all_p_tpd['score_g']
                all_p = all_p_new
                # disabled earlier formulation combining probabilities, distances and scores:
                """
                if r_on==1:
                    rpbr =[all_p['pbreg_m'][0],all_p['pbreg_l'][0],all_p['pbreg_g'][0]]
                else:
                    rpbr =[1,1,1]
                if m_on==1:
                    mpbm =[all_p['pbm_m'][0],all_p['pbm_l'][0],all_p['pbm_g'][0]]
                else:
                    mpbm =[1,1,1]
                ptotalh=np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4)/(1+np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4))
                ptotall=1-ptotalh
                ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal'])
                ptot.index=all_p.index
                all_p['ptotal']=ptot['ptotal']
                """
                if r_on == 1:
                    rpbr = all_p['pbreg'][0]
                else:
                    rpbr = 0
                if m_on == 1:
                    mpbm = all_p['pbm'][0]
                else:
                    mpbm = 0
                rpmp = (all_p['pbreg']*all_p['pbm']) / np.sum(all_p['pbreg']*all_p['pbm'])
                if r_on == 0:
                    rpmp[0] = 1
                    rpmp[1] = 1
                ###### multivariate-normal noise on the three pressure weights
                nsd2 = list()
                for repeat in range(0, 100):
                    nsd = list()
                    for mni in range(0, 3):
                        nsd.append(np.random.normal(0, 1))
                    nsd2.append(np.random.multivariate_normal([0]*3, ([nsd]*3)))
                nsd2 = list(np.round_(pd.DataFrame(nsd2).mean(axis=0), 2))
                ## a second draw, fixed per simulation run (only redrawn at j == 0)
                if j == 0:
                    nsd3 = list()
                    for repeat in range(0, 100):
                        nsd = list()
                        for mni in range(0, 3):
                            nsd.append(np.random.normal(0, 1))
                        nsd3.append(np.random.multivariate_normal([0]*3, ([nsd]*3)))
                    nsd3 = list(np.round_(pd.DataFrame(nsd3).mean(axis=0), 2))
                #### normal error term
                epsilon_l = list()
                for repeat in range(0, 100):
                    epsilon_l.append(np.random.normal(0, 1))
                epsilon = np.mean(epsilon_l)
                # disabled region/market-cap specific weight selection:
                """
                if ((node_attr.iloc[j,9]==rgn)&(node_attr.iloc[j,12]==mcp)):
                    w1=W[0]
                    w2=W[1]
                    w3=W[2]
                else:
                    w1=W[3]
                    w2=W[4]
                    w3=W[5]
                """
                ####
                if ((node_attr.iloc[j, 9] == "NrA") & (node_attr.iloc[j, 12] == "high")):
                    w1 = W[0]
                    w2 = W[1]
                    w3 = W[2]
                elif ((node_attr.iloc[j, 9] == "NrA") & (node_attr.iloc[j, 12] == "high")):
                    w1 = W[3]
                    w2 = W[4]
                    w3 = W[5]
                elif ((node_attr.iloc[j, 9] == "NrA") & (node_attr.iloc[j, 12] == "high")):
                    w1 = W[6]
                    w2 = W[7]
                    w3 = W[8]
                elif ((node_attr.iloc[j, 9] == "NrA") & (node_attr.iloc[j, 12] == "high")):
                    w1 = W[9]
                    w2 = W[10]
                    w3 = W[11]
                elif ((node_attr.iloc[j, 9] == "NrA") & (node_attr.iloc[j, 12] == "high")):
                    w1 = W[12]
                    w2 = W[13]
                    w3 = W[14]
                elif ((node_attr.iloc[j, 9] == "NrA") & (node_attr.iloc[j, 12] == "high")):
                    w1 = W[15]
                    w2 = W[16]
                    w3 = W[17]
                elif ((node_attr.iloc[j, 9] == "NrA") & (node_attr.iloc[j, 12] == "high")):
                    w1 = W[18]
                    w2 = W[19]
                    w3 = W[20]
                elif ((node_attr.iloc[j, 9] == "NrA") & (node_attr.iloc[j, 12] == "high")):
                    w1 = W[21]
                    w2 = W[22]
                    w3 = W[23]
                elif ((node_attr.iloc[j, 9] == "NrA") & (node_attr.iloc[j, 12] == "high")):
                    w1 = W[24]
                    w2 = W[25]
                    w3 = W[26]
                else:
                    w1 = 0.333
                    w2 = 0.333
                    w3 = 0.333
                #ptotalh=np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm))/(1+np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm)))
                #ptotalh=((np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)/(1+np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)))*(rpmp[0]))
                # tempered logistic combining the three pressures plus weight noise and an error term
                ptotalh = ((np.exp(((w1+alpha1)*(all_p['tier_p'][0]) + (w2+alpha2)*(all_p['local_prob'][0]) + (w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0]) + (nsd2[1]+nsd3[1])*(all_p['local_prob'][0]) + (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0]) + epsilon)/Tmp)
                            / (1 + np.exp(((w1+alpha1)*(all_p['tier_p'][0]) + (w2+alpha2)*(all_p['local_prob'][0]) + (w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0]) + (nsd2[1]+nsd3[1])*(all_p['local_prob'][0]) + (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0]) + epsilon)/Tmp))) * (rpmp[0]))
                ptotall = (1 / (1 + np.exp(((w1+alpha1)*(all_p['tier_p'][0]) + (w2+alpha2)*(all_p['local_prob'][0]) + (w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0]) + (nsd2[1]+nsd3[1])*(all_p['local_prob'][0]) + (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0]) + epsilon)/Tmp))) * rpmp[1]
                ptot = pd.DataFrame([ptotalh, ptotall], columns=['ptotal'])
                ptot.index = all_p.index
                all_p['ptotal'] = ptot['ptotal']
                all_p['ptotal'] = all_p['ptotal'] / np.sum(all_p['ptotal'])
                # disabled argmax decision:
                """
                d_s_ind=np.where(all_p['ptotal']==np.max(all_p['ptotal']))[0][0]
                """
                # threshold decision; 0.6224593312018546 is the logistic value at z = 0.5
                if np.count_nonzero([w1, w2, w3]) != 0:
                    if all_p['ptotal'][0] > 0.6224593312018546:
                        d_s_ind = 0
                    elif all_p['ptotal'][0] < 0.6224593312018546:
                        d_s_ind = 1
                    else:
                        d_s_ind = 1 if np.random.random() < 0.5 else 0
                else:
                    if all_p['ptotal'][0] > 0.5:
                        d_s_ind = 0
                    elif all_p['ptotal'][0] < 0.5:
                        d_s_ind = 1
                    else:
                        d_s_ind = 1 if np.random.random() < 0.5 else 0
                # disabled stochastic decision:
                """u = np.random.uniform()
                if all_p['ptotal'][0]>u:
                    d_s_ind=0
                else:
                    d_s_ind=1"""
                # disabled list-valued rpbr/mpbm selection:
                """
                if r_on==1:
                    rpbr =[all_p['pbreg_m'][d_s_ind],all_p['pbreg_l'][d_s_ind],all_p['pbreg_g'][d_s_ind]]
                else:
                    rpbr =[1,1,1]
                if m_on==1:
                    mpbm =[all_p['pbm_m'][d_s_ind],all_p['pbm_l'][d_s_ind],all_p['pbm_g'][d_s_ind]]
                else:
                    mpbm =[1,1,1]
                """
                if r_on == 1:
                    rpbr = all_p['pbreg'][d_s_ind]
                else:
                    rpbr = 0
                if m_on == 1:
                    mpbm = all_p['pbm'][d_s_ind]
                else:
                    mpbm = 0
                # disabled weighted target-score formulas:
                """s_av=(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]*all_p['score_m'][d_s_ind]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]*all_p['score_l'][d_s_ind]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2]*all_p['score_g'][d_s_ind])/(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2])"""
                """s_av=(w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind])/(w1+w2+w3)"""
                # s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])
                # s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])
                # target-score window from the pressure sources with non-zero weight
                if w1 == 0:
                    s_av1 = np.min([all_p['score_l'][d_s_ind], all_p['score_g'][d_s_ind]])
                    s_av2 = np.max([all_p['score_l'][d_s_ind], all_p['score_g'][d_s_ind]])
                elif w2 == 0:
                    s_av1 = np.min([all_p['score_m'][d_s_ind], all_p['score_g'][d_s_ind]])
                    s_av2 = np.max([all_p['score_m'][d_s_ind], all_p['score_g'][d_s_ind]])
                elif w3 == 0:
                    s_av1 = np.min([all_p['score_m'][d_s_ind], all_p['score_l'][d_s_ind]])
                    s_av2 = np.max([all_p['score_m'][d_s_ind], all_p['score_l'][d_s_ind]])
                elif np.count_nonzero([w1, w2, w3]) == 1:
                    s_av1 = [w1*all_p['score_m'][d_s_ind] + w2*all_p['score_l'][d_s_ind] + w3*all_p['score_g'][d_s_ind]]
                    s_av2 = [w1*all_p['score_m'][d_s_ind] + w2*all_p['score_l'][d_s_ind] + w3*all_p['score_g'][d_s_ind]]
                elif np.count_nonzero([w1, w2, w3]) == 0:
                    s_av1 = node_attr['score'][j]
                    s_av2 = node_attr['score'][j]
                else:
                    s_av1 = np.min([all_p['score_m'][d_s_ind], all_p['score_l'][d_s_ind], all_p['score_g'][d_s_ind]])
s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) #s_av #region_ind #print(all_p) probs_mat[i,j]=np.max(all_p['ptotal']) if i==0: probs_mat2[i,j]=np.max(all_p['ptotal']) else: probs_mat2[i+j,:]=probs_mat[i,j] ## hihest prob label #desired_state = random.choices(list(all_p['state']),list(all_p['all']))[0] #desired_state = all_p['state'][d_s_ind] #desired_state #desired_state = list(all_p.loc[all_p['all']==np.max(all_p['all'])]['state'])[0] ##### draw attributes with given label """sample_df_1=sample_lab_attr_new(np.float(N),region_ind[0],s_av,0.05*s_av)""" """if s_av1==s_av2: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,s_av2+0.12)""" if s_av1==s_av2: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12) else: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],(s_av1+s_av2)/2,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,(s_av1+s_av2)/2) #################################################################################################### ########################################## Update attributes ###################################### #################################################################################################### ## update node attributes for k,replc in enumerate(sample_df_1.values[0]): node_attr.iloc[j,k]=replc ## update edge attributes # node attr to edge attr df_3=cosine_similarity(node_attr.iloc[:,:8]) df_4=pd.DataFrame(df_3) df_4.values[[np.arange(len(df_4))]*2] = np.nan #mat_data.head() edge_list_2=df_4.stack().reset_index() edge_list_2.columns=['source','target','weight'] #edge_list_2.head() edge_list_f=pd.merge(edge_list_df, 
edge_list_2, how='left', left_on=['source','target'], right_on = ['source','target']) #edge_list_f.head() edge_list_f.drop('weight_x',axis=1,inplace=True) edge_list_f.columns=['source','target','weight'] for k,replc in enumerate(node_attr.iloc[j,:].values): blanck_data[j,k]=replc for k,replc in enumerate(all_p.iloc[0,1:].values): blanck_data[j,k+13]=replc blanck_data[j,29]=j blanck_data[j,30]=i blanck_data[j,31]=all_p['state'][d_s_ind] #blanck_data2[:,:2,i]=np.array(edge_list_f) else: ####2 #################################################################################################### ########################################## MIMETIC################################################## #################################################################################################### st=["high","low"] st=pd.DataFrame(st) st.columns=['state'] #Index in node attributes df['partitions'] == jth row partition column p_tier_ind = [i for i, e in enumerate(list(node_attr['tier'])) if e in set([node_attr.iloc[j,10]])] t_node_attr = node_attr.iloc[p_tier_ind,:] #t_node_attr=t_node_attr.reset_index().iloc[:,1:] #t_node_attr.head() t_node_attr_score=t_node_attr['score'].copy() t_node_attr_score=t_node_attr_score.reset_index().iloc[:,1:] #t_node_attr_score #t_node_attr.index[tnr] for tnr in range(0,t_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<t_node_attr_score['score'][tnr]: t_node_attr['state'][t_node_attr.index[tnr]]='high' else: t_node_attr['state'][t_node_attr.index[tnr]]='low' tier_p=pd.DataFrame(t_node_attr['state'].value_counts()/np.sum(t_node_attr['state'].value_counts())) tier_p=tier_p.reset_index() tier_p.columns=['state','t_p'] #tier_p t_tier_p=pd.merge(st,tier_p,how="left",left_on=['state'],right_on='state') t_tier_p=t_tier_p.fillna(0.01) tier_p=t_tier_p #tier_p #d_tier.index #pd.DataFrame(node_attr.iloc[p_tier_ind,-2-2-1]) #df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j].reset_index().iloc[:,-1] #states and distances 
#d_tier=pd.concat([node_attr.iloc[p_tier_ind,-2-2-1], # df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1) d_tier=pd.concat([t_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1) #print(Ld) #d_tier=d_tier.drop([j]) #d_tier=d_tier.reset_index() d_tier=d_tier.fillna(1) #and average disances per state d_tier_avg=d_tier.groupby(['state']).mean(str(j)) #d_tier_avg s_tier_avg=pd.DataFrame(t_node_attr.groupby(['state']).mean()['score']) s_tier_avg=pd.merge(st,s_tier_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) #s_tier_avg ## state local prob and avg distance mimetic_p=pd.merge(tier_p,d_tier_avg, how='left', left_on=['state'], right_on = ['state']) mimetic_p=pd.merge(mimetic_p,s_tier_avg, how='left', left_on=['state'], right_on = ['state']) #mimetic_p mimetic_p.columns=['state','tier_p','cur_node','score_m'] mimetic_p['tier_p'] = mimetic_p['tier_p']/np.sum(mimetic_p['tier_p']) #mimetic_p #round(mimetic_p['score_m'][0]) ################################################ regulatary mem region_ind = [i for i, e in enumerate(list(p_reg.columns)) if e in set([node_attr.iloc[j,9]])] ms_ind = [i for i, e in enumerate(list(p_med.columns)) if e in set([node_attr.iloc[j,12]])] h_reg=prob_pdf(round(round(mimetic_p['score_m'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] l_reg=prob_pdf(round(round(mimetic_p['score_m'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg']) pbreg.index=['high','low'] pbreg=pbreg.reset_index() pbreg.columns=['state','pbreg'] #pbreg h_reg=prob_pdf_m(round(round(mimetic_p['score_m'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] l_reg=prob_pdf_m(round(round(mimetic_p['score_m'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] 
pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm']) pbm.index=['high','low'] pbm=pbm.reset_index() pbm.columns=['state','pbm'] #pbm pbreg.index=mimetic_p.index pbm.index=mimetic_p.index mimetic_p['pbreg_m']=pbreg['pbreg'] mimetic_p['pbm_m']=pbm['pbm'] #mimetic_p #################################################################################################### ########################################## Local & Global / inform reg & normative ################# #################################################################################################### #Index in node attributes df for rows with target column == j """prnt_ind = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:,1]==j].iloc[:,0])]""" #Index in node attributes df for rows with target column == j prnt_ind2 = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:,0]==j].iloc[:,1])] """l_node_attr = node_attr.iloc[prnt_ind,:] l_node_attr_score=l_node_attr['score'].copy() l_node_attr_score=l_node_attr_score.reset_index().iloc[:,1:] #len(l_node_attr.iloc[:,-2-2-1]) #l_node_attr.loc[j] for tnr in range(0,l_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<l_node_attr_score['score'][tnr]: l_node_attr['state'][l_node_attr.index[tnr]]='high' else: l_node_attr['state'][l_node_attr.index[tnr]]='low'""" l2_node_attr = node_attr.iloc[prnt_ind2,:] l2_node_attr_score=l2_node_attr['score'].copy() l2_node_attr_score=l2_node_attr_score.reset_index().iloc[:,1:] for tnr in range(0,l2_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<l2_node_attr_score['score'][tnr]: l2_node_attr['state'][l2_node_attr.index[tnr]]='high' else: l2_node_attr['state'][l2_node_attr.index[tnr]]='low' #Lp1 """if len(prnt_ind2)>0: #states prob of parent nodes(can also clculate d*count probabilities) Lp1 = pd.DataFrame(l_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l_node_attr.iloc[:,-2-2-1].value_counts())) Lp1 = 
Lp1.reset_index() #states prob of parent nodes(can also clculate d*count probabilities) Lp2 = pd.DataFrame(l2_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l2_node_attr.iloc[:,-2-2-1].value_counts())) Lp2 = Lp2.reset_index() Lp1=pd.merge(st,Lp1,how="left",left_on=['state'],right_on='index').fillna(0.01) Lp2=pd.merge(st,Lp2,how="left",left_on=['state'],right_on='index').fillna(0.01) Lp=pd.merge(Lp1,Lp2,how="left",left_on=['state_x'],right_on='state_x') #print(Lp.head()) Lp['state']=bs1*Lp['state_y_x']+bs2*Lp['state_y_y'] Lp=Lp.iloc[:,[0,5]] Lp.columns=['index','state'] #print(Lp1.head()) #print(Lp2.head()) else:""" #states prob of parent nodes(can also clculate d*count probabilities) Lp = pd.DataFrame(l2_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l2_node_attr.iloc[:,-2-2-1].value_counts())) Lp = Lp.reset_index() #print(Lp) Lp=pd.merge(st,Lp,how="left",left_on=['state'],right_on='index').fillna(0.01) Lp=Lp.iloc[:,[0,2]] #print(Lp) Lp.columns=['index','state'] #print(Lp) #Lp.head() """if len(prnt_ind2)>0: #states and distances Ld1=pd.concat([l_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind,:].index),j] ],axis=1) Lad1=Ld1.groupby(['state']).mean() #states and distances Ld2=pd.concat([l2_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind2,:].index),j] ],axis=1) #Lp2.head() Lad2=Ld2.groupby(['state']).mean() Lad1=pd.merge(st,Lad1,how="left",left_on=['state'],right_on='state').fillna(0.01) Lad2=pd.merge(st,Lad2,how="left",left_on=['state'],right_on='state').fillna(0.01) Lad=pd.merge(Lad1,Lad2,how="left",left_on=['state'],right_on='state').fillna(0.01) #print(Lad) Lad['state_n']=bs1*Lad[str(j)+'_x']+bs2*Lad[str(j)+'_y'] Lad=Lad.iloc[:,[0,3]] Lad.columns=['state',str(j)] #print(Lad.head()) s_l1_avg=pd.DataFrame(l_node_attr.groupby(['state']).mean()['score']) s_l1_avg=pd.merge(st,s_l1_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) 
s_l2_avg=pd.DataFrame(l2_node_attr.groupby(['state']).mean()['score']) s_l2_avg=pd.merge(st,s_l2_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) s_l_avg=pd.merge(s_l1_avg,s_l2_avg,how="left",left_on=['state'],right_on='state') #print(s_l_avg) s_l_avg['score_n']=bs1*s_l_avg['score'+'_x']+bs2*s_l_avg['score'+'_y'] s_l_avg=s_l_avg.iloc[:,[0,3]] s_l_avg.columns=['state','score'] else:""" #states and distances Ld=pd.concat([l2_node_attr.iloc[:,-2-2-1], df_4.iloc[list(node_attr.iloc[prnt_ind2,:].index),j] ],axis=1) #print(Ld) #and average disances per state Lad=Ld.groupby(['state']).mean()#str(j) #print(Lad) Lad=pd.merge(st,Lad,how="left",left_on=['state'],right_on='state').fillna(0.01) #print(Lad) Lad=Lad.reset_index() #print(Lad) s_l_avg=pd.DataFrame(l2_node_attr.groupby(['state']).mean()['score']) s_l_avg=pd.merge(st,s_l_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) s_l_avg=s_l_avg.reset_index() #Lad.head() #print(Lad) #Lad.index=Lad['state'] #Lad=Lad.iloc[:,1:] #print(Lad) #Lad #s_l_avg #bs1*s_l_avg['score'+'_x']+s_l_avg*Lad['score'+'_y'] ## state local prob and avg distance dist_local=pd.merge(Lp,Lad, how='left', left_on=['index'], right_on = ['state']) dist_local=dist_local.iloc[:,[0,1,4]] dist_local.columns=['state','local_prob','cur_node_l'] #dist_local #print(s_l_avg) dist_local=pd.merge(dist_local,s_l_avg, how='left', left_on=['state'], right_on = ['state']) dist_local=dist_local.iloc[:,[0,1,2,4]] #print(dist_local) #dist_local=dist_local.drop(['index']) #dist_local dist_local.columns=['state','local_prob','cur_node_l','score_l'] dist_local['local_prob']=dist_local['local_prob']/np.sum(dist_local['local_prob']) #print(dist_local) h_reg=prob_pdf(round(round(dist_local['score_l'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] 
l_reg=prob_pdf(round(round(dist_local['score_l'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg']) pbreg.index=['high','low'] pbreg=pbreg.reset_index() pbreg.columns=['state','pbreg'] #pbreg h_reg=prob_pdf_m(round(round(dist_local['score_l'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] l_reg=prob_pdf_m(round(round(dist_local['score_l'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm']) pbm.index=['high','low'] pbm=pbm.reset_index() pbm.columns=['state','pbm'] #pbm pbreg.index=mimetic_p.index pbm.index=mimetic_p.index dist_local['pbreg_l']=pbreg['pbreg'] dist_local['pbm_l']=pbm['pbm'] #dist_local ## global prob #glb_p=pd.DataFrame(node_attr['state'].value_counts()/np.sum(node_attr['state'].value_counts())) #glb_p=glb_p.reset_index() #glb_p.columns=['state','g_p'] st=["high","low"] st=pd.DataFrame(st) st.columns=['state'] #Index in node attributes df['partitions'] == jth row partition column p_region_ind = [i for i, e in enumerate(list(node_attr['partition'])) if e in set([node_attr.iloc[j,9]])] r_node_attr = node_attr.iloc[p_region_ind,:] r_node_attr_score=r_node_attr['score'].copy() r_node_attr_score=r_node_attr_score.reset_index().iloc[:,1:] for tnr in range(0,r_node_attr.shape[0]): if node_attr.iloc[j,:]['score']<r_node_attr_score['score'][tnr]: r_node_attr['state'][r_node_attr.index[tnr]]='high' else: r_node_attr['state'][r_node_attr.index[tnr]]='low' glb_p=pd.DataFrame(r_node_attr['state'].value_counts()/np.sum(r_node_attr['state'].value_counts())) glb_p=glb_p.reset_index() glb_p.columns=['state','g_p'] t_glb_p=pd.merge(st,glb_p,how="left",left_on=['state'],right_on='state') t_glb_p=t_glb_p.fillna(0.01) glb_p=t_glb_p #print(glb_p) #states and distances gd=pd.concat([r_node_attr.iloc[:,-2-2-1], 
df_4.iloc[list(node_attr.iloc[p_region_ind,:].index),j] ],axis=1) #print(gd) #and average disances per state gad=gd.groupby(['state']).mean(str(j)) gad=pd.merge(st,gad,how="left",left_on=['state'],right_on='state') #gad.reset_index(inplace=True) #print(gad) s_g_avg=pd.DataFrame(r_node_attr.groupby(['state']).mean()['score']) s_g_avg=pd.merge(st,s_g_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score']) #s_g_avg ## state local prob and avg distance dist_global=pd.merge(glb_p,gad, how='left', left_on=['state'], right_on = ['state']) dist_global=pd.merge(dist_global,s_g_avg, how='left', left_on=['state'], right_on = ['state']) #dist_local dist_global.columns=['state','glob_prob','cur_node_g','score_g'] dist_global['glob_prob'] =dist_global['glob_prob']/np.sum(dist_global['glob_prob']) #print(dist_global) h_reg=prob_pdf(round(round(dist_global['score_g'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] l_reg=prob_pdf(round(round(dist_global['score_g'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0] pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg']) pbreg.index=['high','low'] pbreg=pbreg.reset_index() pbreg.columns=['state','pbreg'] #pbreg h_reg=prob_pdf_m(round(round(dist_global['score_g'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] l_reg=prob_pdf_m(round(round(dist_global['score_g'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0] pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm']) pbm.index=['high','low'] pbm=pbm.reset_index() pbm.columns=['state','pbm'] #pbm pbreg.index=mimetic_p.index pbm.index=mimetic_p.index dist_global['pbreg_g']=pbreg['pbreg'] dist_global['pbm_g']=pbm['pbm'] #dist_global #print('glb_p') #if (((i+1)*(j+1)) % 5000) ==0: print(dist_global) ## all memetic 
dist_local_global=pd.merge(dist_global,dist_local, how='left', left_on=['state'], right_on = ['state']) dist_local_global=dist_local_global.fillna(0.01) #dist_local_global['m_p']=dist_local_global.product(axis=1)/np.sum(dist_local_global.product(axis=1)) #print(dist_local_global) # #################################################################################################### ########################################## All_ Pressures ########################################## #################################################################################################### # # All presures all_p = pd.merge(mimetic_p,dist_local_global,how='left', left_on=['state'], right_on = ['state']) all_p=all_p.fillna(0.01) #all_p = pd.merge(all_p,mimetic_p,how='left', left_on=['state'], right_on = ['state']) #all_p #= all_p.iloc[:,[0,4,5,6]] #0.25*all_p.iloc[:,3:5].product(axis=1) #all_p.iloc[:,3:5] #w1=w2=w3=w4=0.25 #all_p #w1*all_p['tier_p'][0]*all_p['cur_node'][0]*all_p['score_m'][0] #w1=w2=w3=w4=0.25 all_p_tpd=all_p.copy() #all_p_tpd all_p_new=pd.DataFrame(all_p_tpd['state']) #print(all_p_new) all_p_new['tier_p']=(all_p_tpd['tier_p']*all_p_tpd['cur_node'])/np.sum(all_p_tpd['tier_p']*all_p_tpd['cur_node']) all_p_new['glob_prob']=(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g'])/np.sum(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g']) all_p_new['local_prob']=(all_p_tpd['local_prob']*all_p_tpd['cur_node_l'])/np.sum(all_p_tpd['local_prob']*all_p_tpd['cur_node_l']) all_p_new['pbreg']=(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l'])/np.sum(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l']) all_p_new['pbm']=(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l'])/np.sum(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l']) all_p_new['score_m']=all_p_tpd['score_m'] all_p_new['score_l']=all_p_tpd['score_l'] all_p_new['score_g']=all_p_tpd['score_g'] #pd.DataFrame(all_p_new) all_p=all_p_new """ if r_on==1: rpbr 
=[all_p['pbreg_m'][0],all_p['pbreg_l'][0],all_p['pbreg_g'][0]] else: rpbr =[1,1,1] if m_on==1: mpbm =[all_p['pbm_m'][0],all_p['pbm_l'][0],all_p['pbm_g'][0]] else: mpbm =[1,1,1] ptotalh=np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4)/(1+np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4)) ptotall=1-ptotalh ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal']) ptot.index=all_p.index all_p['ptotal']=ptot['ptotal'] #all_p """ if r_on==1: rpbr =all_p['pbreg'][0] else: rpbr =0 if m_on==1: mpbm =all_p['pbm'][0] else: mpbm =0 rpmp=(all_p['pbreg']*all_p['pbm'])/np.sum(all_p['pbreg']*all_p['pbm']) if r_on==0: rpmp[0]=1 rpmp[1]=1 ###### multivariate normal nsd2=list() for repeat in range(0,100): nsd=list() for mni in range(0,3): nsd.append(np.random.normal(0,1)) nsd2.append(np.random.multivariate_normal([0]*3,([nsd]*3))) #for ni,nsd2i in enumerate(nsd2): # nsd2[ni]=np.round_(nsd2i,2) nsd2=list(np.round_(pd.DataFrame(nsd2).mean(axis=0),2)) ## 2 if (j==0): nsd3=list() for repeat in range(0,100): nsd=list() for mni in range(0,3): nsd.append(np.random.normal(0,1)) nsd3.append(np.random.multivariate_normal([0]*3,([nsd]*3))) #for ni,nsd2i in enumerate(nsd2): # nsd2[ni]=np.round_(nsd2i,2) nsd3=list(np.round_(pd.DataFrame(nsd3).mean(axis=0),2)) #### normal epsilon_l=list() for repeat in range(0,100): epsilon_l.append(np.random.normal(0,1)) epsilon=np.mean(epsilon_l) """ if ((node_attr.iloc[j,9]==rgn)&(node_attr.iloc[j,12]==mcp)): w1=W[0] w2=W[1] w3=W[2] else: w1=W[3] w2=W[4] w3=W[5] """ #### if ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[0] 
w2=W[1] w3=W[2] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[3] w2=W[4] w3=W[5] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[6] w2=W[7] w3=W[8] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[9] w2=W[10] w3=W[11] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[12] w2=W[13] w3=W[14] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[15] w2=W[16] w3=W[17] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[18] w2=W[19] w3=W[20] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[21] w2=W[22] w3=W[23] elif ((node_attr.iloc[j,9]=="NrA")&(node_attr.iloc[j,12]=="high")): w1=W[24] w2=W[25] w3=W[26] else : w1=0.333 w2=0.333 w3=0.333 #ptotalh=np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm))/(1+np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm))) #ptotalh=((np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)/(1+np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)))*(rpmp[0])) ptotalh=((np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+ (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) + (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+ (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[0])) #ptotalh=ptotalh/np.sum(ptotalh) #ptotall=1-ptotalh ptotall=(1/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) + 
(nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+ (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[1]) ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal']) ptot.index=all_p.index all_p['ptotal']=ptot['ptotal'] all_p['ptotal']=all_p['ptotal']/np.sum(all_p['ptotal']) #all_p #print(all_p) #0.6224593312018546 """ d_s_ind=np.where(all_p['ptotal']==np.max(all_p['ptotal']))[0][0] """ if np.count_nonzero([w1,w2,w3])!=0: if all_p['ptotal'][0]>0.6224593312018546: #0.6224593312018546: d_s_ind=0 elif all_p['ptotal'][0]<0.6224593312018546: #0.6224593312018546: d_s_ind=1 else: d_s_ind = 1 if np.random.random()<0.5 else 0 else: if all_p['ptotal'][0]>0.5: d_s_ind=0 elif all_p['ptotal'][0]<0.5: d_s_ind=1 else: d_s_ind = 1 if np.random.random()<0.5 else 0 """u = np.random.uniform() if all_p['ptotal'][0]>u: d_s_ind=0 else: d_s_ind=1""" #print(d_s_ind) """ if r_on==1: rpbr =[all_p['pbreg_m'][d_s_ind],all_p['pbreg_l'][d_s_ind],all_p['pbreg_g'][d_s_ind]] else: rpbr =[1,1,1] if m_on==1: mpbm =[all_p['pbm_m'][d_s_ind],all_p['pbm_l'][d_s_ind],all_p['pbm_g'][d_s_ind]] else: mpbm =[1,1,1] """ if r_on==1: rpbr =all_p['pbreg'][d_s_ind] else: rpbr =0 if m_on==1: mpbm =all_p['pbm'][d_s_ind] else: mpbm =0 """s_av=(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]*all_p['score_m'][d_s_ind]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]*all_p['score_l'][d_s_ind]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2]*all_p['score_g'][d_s_ind])/(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2])""" """s_av=(w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind])/(w1+w2+w3)""" # s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) # 
s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) if w1==0: s_av1=np.min([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) s_av2=np.max([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) elif w2==0: s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]]) s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]]) elif w3==0: s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]]) s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]]) elif np.count_nonzero([w1,w2,w3])==1: s_av1=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]] s_av2=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]] elif np.count_nonzero([w1,w2,w3])==0: s_av1=node_attr['score'][j] s_av2=node_attr['score'][j] else: s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]]) #s_av #region_ind #print(all_p) probs_mat[i,j]=np.max(all_p['ptotal']) if i==0: probs_mat2[i,j]=np.max(all_p['ptotal']) else: probs_mat2[i+j,:]=probs_mat[i,j] ## hihest prob label #desired_state = random.choices(list(all_p['state']),list(all_p['all']))[0] #desired_state = all_p['state'][d_s_ind] #desired_state #desired_state = list(all_p.loc[all_p['all']==np.max(all_p['all'])]['state'])[0] ##### draw attributes with given label """sample_df_1=sample_lab_attr_new(np.float(N),region_ind[0],s_av,0.05*s_av)""" """if s_av1==s_av2: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,s_av2+0.12)""" if s_av1==s_av2: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100) else: 
sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12) else: if d_s_ind==0: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],(s_av1+s_av2)/2,100) else: sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,(s_av1+s_av2)/2) #################################################################################################### ########################################## Update attributes ###################################### #################################################################################################### ## update node attributes for k,replc in enumerate(sample_df_1.values[0]): node_attr.iloc[j,k]=replc ## update edge attributes # node attr to edge attr df_3=cosine_similarity(node_attr.iloc[:,:8]) df_4=pd.DataFrame(df_3) df_4.values[[np.arange(len(df_4))]*2] = np.nan #mat_data.head() edge_list_2=df_4.stack().reset_index() edge_list_2.columns=['source','target','weight'] #edge_list_2.head() edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', left_on=['source','target'], right_on = ['source','target']) #edge_list_f.head() edge_list_f.drop('weight_x',axis=1,inplace=True) edge_list_f.columns=['source','target','weight'] for k,replc in enumerate(node_attr.iloc[j,:].values): blanck_data[j,k]=replc for k,replc in enumerate(all_p.iloc[0,1:].values): blanck_data[j,k+13]=replc blanck_data[j,29]=j blanck_data[j,30]=i blanck_data[j,31]=all_p['state'][d_s_ind] #blanck_data2[:,:2,i]=np.array(edge_list_f) blanck_data_tot[:,:,i]=pd.DataFrame(blanck_data) #if i>= 2: #if i%5==0: #probs_mat_pr.append(np.prod(np.log(probs_mat[i,:]),axis=1)) edge_list_f.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+str(i)+"_edge_attr.csv") reshaped_bd = np.vstack(blanck_data_tot[:,:,i] for i in range(num_sim)) reshaped_bd_df=pd.DataFrame(reshaped_bd) reshaped_bd_df.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+ "other_node_attr.csv") print('Complete')
_____no_output_____
MIT
build/html/python_scripts1.ipynb
krishna-harsha/Tolerance-to-modern-slavery
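The cell above turns the weighted sum of mimetic, local, and global pressures into a choice probability through a temperature-scaled logistic (`ptotalh`), then thresholds it to pick a state. A stripped-down sketch of that decision rule, using standalone toy names rather than the notebook's variables and omitting the noise and regulatory terms:

```python
import numpy as np

def p_high(tier_p, local_p, glob_p, w=(0.333, 0.333, 0.333), Tmp=1.0):
    # logistic (sigmoid) of the temperature-scaled weighted pressure sum
    z = (w[0] * tier_p + w[1] * local_p + w[2] * glob_p) / Tmp
    return float(np.exp(z) / (1.0 + np.exp(z)))

p = p_high(0.6, 0.3, 0.1)
# 'high' when the probability clears the decision threshold, else 'low'
state = 'high' if p > 0.5 else 'low'
```

Lowering `Tmp` sharpens the logistic toward a deterministic choice, while a larger `Tmp` flattens it; the full cell additionally scales the argument by sampled multivariate-normal noise and multiplies by the regulatory/market factor before thresholding.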
Parallelization of simulation
def process_func(run_iter):
    #print('@@@@@ run iter @@@@@ --' + str(run_iter))
    stc=scn_params.iloc[run_iter,20]
    Tmp=scn_params.iloc[run_iter,21]
    if stc==1:
        nr=[0.22,0.35,0.43]
        er=[0.38,0.13,0.50]
        asa=[0.22,0.06,0.72]
    else:
        nr=[0.33,0.335,0.335]
        er=[0.33,0.335,0.335]
        asa=[0.33,0.335,0.335]
    #W=[scn_params.iloc[run_iter,5],scn_params.iloc[run_iter,6],scn_params.iloc[run_iter,7],scn_params.iloc[run_iter,8],scn_params.iloc[run_iter,9],scn_params.iloc[run_iter,10]]
    W=[scn_params.iloc[run_iter,23],scn_params.iloc[run_iter,24],scn_params.iloc[run_iter,25],scn_params.iloc[run_iter,26],
       scn_params.iloc[run_iter,27],scn_params.iloc[run_iter,28],scn_params.iloc[run_iter,29],scn_params.iloc[run_iter,30],
       scn_params.iloc[run_iter,31],scn_params.iloc[run_iter,32],scn_params.iloc[run_iter,33],scn_params.iloc[run_iter,34],
       scn_params.iloc[run_iter,35],scn_params.iloc[run_iter,36],scn_params.iloc[run_iter,37],scn_params.iloc[run_iter,38],
       scn_params.iloc[run_iter,39],scn_params.iloc[run_iter,40],scn_params.iloc[run_iter,41],scn_params.iloc[run_iter,42],
       scn_params.iloc[run_iter,43],scn_params.iloc[run_iter,44],scn_params.iloc[run_iter,45],scn_params.iloc[run_iter,46],
       scn_params.iloc[run_iter,47],scn_params.iloc[run_iter,48],scn_params.iloc[run_iter,49]]
    N=scn_params.iloc[run_iter,0]
    bs_n=scn_params.iloc[run_iter,3]
    m_size=scn_params.iloc[run_iter,4]
    bs1=scn_params.iloc[run_iter,1]
    bs2=scn_params.iloc[run_iter,2]
    rgn=scn_params.iloc[run_iter,13]
    mcp=scn_params.iloc[run_iter,14]
    ################################################### create network
    network_created=create_network(N,nr,er,asa,bs_n,m_size)
    #graph
    g=network_created[2]
    #centrality
    deg_cent = nx.degree_centrality(g)
    in_deg_cent = nx.in_degree_centrality(g)
    out_deg_cent = nx.out_degree_centrality(g)
    eigen_cent = nx.eigenvector_centrality(g)
    #katz_cent = nx.katz_centrality(g)
    closeness_cent = nx.closeness_centrality(g)
    #betw_cent = nx.betweenness_centrality(g)
    #vote_cent = nx.voterank(g)
    deg=pd.DataFrame(list(deg_cent.values()),columns=['deg'])
    indeg=pd.DataFrame(list(in_deg_cent.values()),columns=['indeg'])
    outdeg=pd.DataFrame(list(out_deg_cent.values()),columns=['outdeg'])
    eigencent=pd.DataFrame(list(eigen_cent.values()),columns=['eigdeg'])
    closeness=pd.DataFrame(list(closeness_cent.values()),columns=['closedeg'])
    all_net_p=pd.concat([deg,indeg,outdeg,closeness,eigencent],axis=1)
    #tier and ms
    nodes_frame=network_created[1]
    #edge list
    edge_list_df_new=network_created[0]
    edge_list_df=edge_list_df_new.copy()
    n_regions_list=[int(0.46*N),int( 0.16*N),int( 0.38*N)]
    if (len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])!=N):
        if (len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])-N)>0:
            n_regions_list[0] = n_regions_list[0]+len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])-N
        else:
            n_regions_list[0] = n_regions_list[0]-len(n_regions_list[0]*["NrA"]+n_regions_list[1]*["Eur"]+n_regions_list[2]*['Asia'])+N
    #print(n_regions_list)
    #till partition
    ################################################### initial attributes at random one at a time
    #### method 1
    """init_samples = sample_lab_attr_all_init(np.float(N),30,10)"""
    init_samples = sample_lab_attr_all_init(np.float(N))
    #### method 2
    """
    init_samples1 = initial_random_attr(n_regions_list[0],np.array([0.2,0.3,0.5]))
    init_samples1=init_samples1.reset_index().iloc[:,1:]
    init_samples2 = initial_random_attr(n_regions_list[1],np.array([0.3,0.3,0.4]))
    init_samples2=init_samples2.reset_index().iloc[:,1:]
    init_samples3 = initial_random_attr(n_regions_list[2],np.array([0.5,0.3,0.2]))
    init_samples3=init_samples3.reset_index().iloc[:,1:]
    init_samples=pd.concat([init_samples1,init_samples2,init_samples3],axis=0)
    """
    ### method 3
    """
    init_samples1 = sample_lab_attr_all(n_regions_list[0],1)
    init_samples1=init_samples1.reset_index().iloc[:,1:]
    init_samples2 = sample_lab_attr_all(n_regions_list[1],2)
    init_samples2=init_samples2.reset_index().iloc[:,1:]
    init_samples3 = sample_lab_attr_all(n_regions_list[2],3)
    init_samples3=init_samples3.reset_index().iloc[:,1:]
    init_samples=pd.concat([init_samples1,init_samples2,init_samples3],axis=0)
    """
    ############
    #init_samples.head()
    init_samples=init_samples.reset_index()
    #init_samples.head()
    init_samples=init_samples.iloc[:,1:]
    #init_samples.index
    node_attr=init_samples
    node_attr['state']="high"
    node_attr['partition']=""
    for i in range(0,node_attr.shape[0]):
        if i<n_regions_list[0]:
            node_attr['partition'][i]='NrA'
        elif i< (n_regions_list[0]+n_regions_list[1]):
            node_attr['partition'][i]='Eur'
        else:
            node_attr['partition'][i]='Asia'
    #tier and MS merge with attributes
    node_attr = pd.concat([node_attr,nodes_frame.iloc[:,2:]],axis=1)
    #node_attr.columns
    node_attr.columns=['X1..Commitment...Governance', 'X2..Traceability.and.Risk.Assessment', 'X3..Purchasing.Practices',
                       'X4..Recruitment', 'X5..Worker.Voice', 'X6..Monitoring', 'X7..Remedy',
                       'score', 'state', 'partition','tier','ms','ms2']
    #
    #node_attr.info()
    # region wise reg assumption and market size assumption
    #p_reg_org=p_reg.copy()
    #p_med_org=p_med.copy()
    #
    init_node_attrs_df=node_attr.copy()
    init_edge_attrs_df=edge_list_df.copy()
    import os
    os.mkdir(folder_location+"sc_"+str(run_iter+1))
    node_attr.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+str(0)+ "_node_attr.csv")
    df_3=cosine_similarity(node_attr.iloc[:,:8])
    df_4=pd.DataFrame(df_3)
    df_4.values[[np.arange(len(df_4))]*2] = np.nan
    #mat_data.head()
    edge_list_2=df_4.stack().reset_index()
    edge_list_2.columns=['source','target','weight']
    #edge_list_2.head()
    edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', left_on=['source','target'], right_on = ['source','target'])
    #edge_list_f.head()
    edge_list_f.drop('weight_x',axis=1,inplace=True)
    edge_list_f.columns=['source','target','weight']
    edge_list_f.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+"initial_edge_attr.csv")
    ##### run simulation for all
    print("@@@@@@@@@@@@@@@
-- "+str(run_iter)) """ w1=scn_params.iloc[run_iter,5] w2=scn_params.iloc[run_iter,6] w3=scn_params.iloc[run_iter,7] w4=scn_params.iloc[run_iter,8] w5=scn_params.iloc[run_iter,9] """ r_on=scn_params.iloc[run_iter,15] m_on=scn_params.iloc[run_iter,16] alpha1=scn_params.iloc[run_iter,17] alpha2=scn_params.iloc[run_iter,18] alpha3=scn_params.iloc[run_iter,19] if N==500: num_sim=20 else: num_sim=20 probs_mat=np.zeros((num_sim,N)) probs_mat2=np.zeros((((num_sim-1)*N)+1,N)) ## Initial node and edge attributes node_attr=init_node_attrs_df.copy() edge_list_df=init_edge_attrs_df.copy() ################################################## simulation simulation_continous(node_attr=node_attr,edge_list_df=edge_list_df,num_sim=num_sim,W=W,bs1=bs1,bs2=bs2,N=N,r_on=r_on,m_on=m_on,p_reg=p_reg,p_med=p_med,probs_mat=probs_mat,probs_mat2=probs_mat2,run_iter=run_iter,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3,Tmp=Tmp,rgn=rgn,mcp=mcp) lik_probs_mat=pd.DataFrame(probs_mat) lik_probs_mat2=pd.DataFrame(probs_mat2) lik_probs_mat.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+"lik_probs_mat.csv") lik_probs_mat2.to_csv(folder_location+"sc_"+str(run_iter+1)+"/"+"lik_probs_mat2.csv") del lik_probs_mat
Overwriting magic_functions.py
MIT
build/html/python_scripts1.ipynb
krishna-harsha/Tolerance-to-modern-slavery
Running simulation
# Scenarios
scn_params = pd.read_csv('simulation_log/testscenarios_uniform_parallel_W_alpha_new_sense_simple_new_missing_reg/testscenarios_uniform_parallel_W_alpha_new_sense_simple.csv')

# Organizational data
data_orgs = pd.read_csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')

folder_location = 'simulation_log/testscenarios_uniform_parallel_W_alpha_new_sense_simple_new_missing_reg/'

from multiprocessing import Pool
from tqdm import tqdm
from magic_functions import process_func

frames_list = range(0, 31)  # previously range(0, 8)

# max_pool (the number of worker processes) is assumed to be defined
# earlier in the notebook
with Pool(max_pool) as p:
    pool_outputs = list(
        tqdm(
            p.imap(process_func, frames_list),
            total=len(frames_list)
        )
    )
_____no_output_____
MIT
build/html/python_scripts1.ipynb
krishna-harsha/Tolerance-to-modern-slavery
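The cell above fans a per-scenario function out over a worker pool with `Pool.imap`, wrapped in `tqdm` for a progress bar. A minimal sketch of that pattern, using a thread pool and a stand-in `process_scenario` function (both assumptions here — the notebook uses a process `Pool` and its own `process_func`):

```python
from multiprocessing.pool import ThreadPool

def process_scenario(run_iter):
    # stand-in for the notebook's process_func: one unit of work per scenario id
    return run_iter * run_iter

scenario_ids = range(8)
with ThreadPool(4) as pool:
    # imap yields results lazily but in input order, which is what lets
    # tqdm wrap the iterator as a live progress bar in the original cell
    results = list(pool.imap(process_scenario, scenario_ids))

print(results)  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

With a process pool, the worker function must be importable by the child processes — which is why the original cell imports `process_func` from `magic_functions` instead of defining it inline.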
I started this competition investigating neural networks with this kernel: https://www.kaggle.com/mulargui/keras-nn

I am now switching to ensembles in this new kernel. As of today, V6 is the most performant version.

You can find all my notes and versions at https://github.com/mulargui/kaggle-Classify-forest-types
# This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) #load data dftrain=pd.read_csv('/kaggle/input/learn-together/train.csv') dftest=pd.read_csv('/kaggle/input/learn-together/test.csv') ####### DATA PREPARATION ##### #split train data in features and labels y = dftrain.Cover_Type x = dftrain.drop(['Id','Cover_Type'], axis=1) # split test data in features and Ids Ids = dftest.Id x_predict = dftest.drop('Id', axis=1) # one data set with all features X = pd.concat([x,x_predict],keys=[0,1]) ###### FEATURE ENGINEERING ##### #https://www.kaggle.com/mancy7/simple-eda #Soil_Type7, Soil_Type15 are non-existent in the training set, nothing to learn #I have problems with np.where if I do this, postponed #X.drop(["Soil_Type7", "Soil_Type15"], axis = 1, inplace=True) #https://www.kaggle.com/evimarp/top-6-roosevelt-national-forest-competition from itertools import combinations from bisect import bisect X['Euclidean_distance_to_hydro'] = (X.Vertical_Distance_To_Hydrology**2 + X.Horizontal_Distance_To_Hydrology**2)**.5 cols = [ 'Horizontal_Distance_To_Roadways', 'Horizontal_Distance_To_Fire_Points', 'Horizontal_Distance_To_Hydrology', ] X['distance_mean'] = X[cols].mean(axis=1) X['distance_sum'] = X[cols].sum(axis=1) X['distance_road_fire'] = X[cols[:2]].mean(axis=1) X['distance_hydro_fire'] = X[cols[1:]].mean(axis=1) X['distance_road_hydro'] = X[[cols[0], cols[2]]].mean(axis=1) X['distance_sum_road_fire'] = X[cols[:2]].sum(axis=1) X['distance_sum_hydro_fire'] = X[cols[1:]].sum(axis=1) X['distance_sum_road_hydro'] = X[[cols[0], cols[2]]].sum(axis=1) X['distance_dif_road_fire'] = X[cols[0]] - X[cols[1]] X['distance_dif_hydro_road'] = X[cols[2]] - X[cols[0]] X['distance_dif_hydro_fire'] = X[cols[2]] - 
X[cols[1]] # Vertical distances measures colv = ['Elevation', 'Vertical_Distance_To_Hydrology'] X['Vertical_dif'] = X[colv[0]] - X[colv[1]] X['Vertical_sum'] = X[colv].sum(axis=1) SHADES = ['Hillshade_9am', 'Hillshade_Noon', 'Hillshade_3pm'] X['shade_noon_diff'] = X['Hillshade_9am'] - X['Hillshade_Noon'] X['shade_3pm_diff'] = X['Hillshade_Noon'] - X['Hillshade_3pm'] X['shade_all_diff'] = X['Hillshade_9am'] - X['Hillshade_3pm'] X['shade_sum'] = X[SHADES].sum(axis=1) X['shade_mean'] = X[SHADES].mean(axis=1) X['ElevationHydro'] = X['Elevation'] - 0.25 * X['Euclidean_distance_to_hydro'] X['ElevationV'] = X['Elevation'] - X['Vertical_Distance_To_Hydrology'] X['ElevationH'] = X['Elevation'] - 0.19 * X['Horizontal_Distance_To_Hydrology'] X['Elevation2'] = X['Elevation']**2 X['ElevationLog'] = np.log1p(X['Elevation']) X['Aspect_cos'] = np.cos(np.radians(X.Aspect)) X['Aspect_sin'] = np.sin(np.radians(X.Aspect)) #df['Slope_sin'] = np.sin(np.radians(df.Slope)) X['Aspectcos_Slope'] = X.Slope * X.Aspect_cos #df['Aspectsin_Slope'] = df.Slope * df.Aspect_sin cardinals = [i for i in range(45, 361, 90)] points = ['N', 'E', 'S', 'W'] X['Cardinal'] = X.Aspect.apply(lambda x: points[bisect(cardinals, x) % 4]) d = {'N': 0, 'E': 1, 'S': 0, 'W':-1} X['Cardinal'] = X.Cardinal.apply(lambda x: d[x]) #https://www.kaggle.com/jakelj/basic-ensemble-model X['Avg_shade'] = ((X['Hillshade_9am'] + X['Hillshade_Noon'] + X['Hillshade_3pm']) / 3) X['Morn_noon_int'] = ((X['Hillshade_9am'] + X['Hillshade_Noon']) / 2) X['noon_eve_int'] = ((X['Hillshade_3pm'] + X['Hillshade_Noon']) / 2) #adding features based on https://douglas-fraser.com/forest_cover_management.pdf pages 21,22 #note: not all climatic and geologic codes have a soil type columns=['Soil_Type1', 'Soil_Type2', 'Soil_Type3', 'Soil_Type4', 'Soil_Type5', 'Soil_Type6'] X['Climatic2'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type7', 'Soil_Type8'] X['Climatic3'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type9', 
'Soil_Type10', 'Soil_Type11', 'Soil_Type12', 'Soil_Type13'] X['Climatic4'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type14', 'Soil_Type15'] X['Climatic5'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type16', 'Soil_Type17', 'Soil_Type18'] X['Climatic6'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type19', 'Soil_Type20', 'Soil_Type21', 'Soil_Type22', 'Soil_Type23', 'Soil_Type24', 'Soil_Type25', 'Soil_Type26', 'Soil_Type27', 'Soil_Type28', 'Soil_Type29', 'Soil_Type30', 'Soil_Type31', 'Soil_Type32', 'Soil_Type33', 'Soil_Type34'] X['Climatic7'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type35', 'Soil_Type36', 'Soil_Type37', 'Soil_Type38', 'Soil_Type39', 'Soil_Type40'] X['Climatic8'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type14', 'Soil_Type15', 'Soil_Type16', 'Soil_Type17', 'Soil_Type19', 'Soil_Type20', 'Soil_Type21'] X['Geologic1'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type9', 'Soil_Type22', 'Soil_Type23'] X['Geologic2'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type7', 'Soil_Type8'] X['Geologic5'] = np.select([X[columns].sum(1).gt(0)], [1]) columns=['Soil_Type1', 'Soil_Type2', 'Soil_Type3', 'Soil_Type4', 'Soil_Type5', 'Soil_Type6', 'Soil_Type10', 'Soil_Type11', 'Soil_Type12', 'Soil_Type13', 'Soil_Type18', 'Soil_Type24', 'Soil_Type25', 'Soil_Type26', 'Soil_Type27', 'Soil_Type28', 'Soil_Type29', 'Soil_Type30', 'Soil_Type31', 'Soil_Type32', 'Soil_Type33', 'Soil_Type34', 'Soil_Type35', 'Soil_Type36', 'Soil_Type37', 'Soil_Type38', 'Soil_Type39', 'Soil_Type40'] X['Geologic7'] = np.select([X[columns].sum(1).gt(0)], [1]) #Reversing One-Hot-Encoding to Categorical attributes, several articles recommend it for decision tree algorithms #Doing it for Soil_Type, Wilderness_Area, Geologic and Climatic X['Soil_Type']=np.where(X.loc[:, 'Soil_Type1':'Soil_Type40'])[1] +1 X.drop(X.loc[:,'Soil_Type1':'Soil_Type40'].columns, axis=1, inplace=True) 
X['Wilderness_Area']=np.where(X.loc[:, 'Wilderness_Area1':'Wilderness_Area4'])[1] +1 X.drop(X.loc[:,'Wilderness_Area1':'Wilderness_Area4'].columns, axis=1, inplace=True) X['Climatic']=np.where(X.loc[:, 'Climatic2':'Climatic8'])[1] +1 X.drop(X.loc[:,'Climatic2':'Climatic8'].columns, axis=1, inplace=True) X['Geologic']=np.where(X.loc[:, 'Geologic1':'Geologic7'])[1] +1 X.drop(X.loc[:,'Geologic1':'Geologic7'].columns, axis=1, inplace=True) from sklearn.preprocessing import StandardScaler StandardScaler(copy=False).fit_transform(X) # Adding Gaussian Mixture features to perform some unsupervised learning hints from the full data #https://www.kaggle.com/arateris/2-layer-k-fold-learning-forest-cover #https://www.kaggle.com/stevegreenau/stacking-multiple-classifiers-clustering from sklearn.mixture import GaussianMixture X['GM'] = GaussianMixture(n_components=15).fit_predict(X) #https://www.kaggle.com/arateris/2-layer-k-fold-learning-forest-cover # Add PCA features from sklearn.decomposition import PCA pca = PCA(n_components=0.99).fit(X) trans = pca.transform(X) for i in range(trans.shape[1]): col_name= 'pca'+str(i+1) X[col_name] = trans[:,i] #https://www.kaggle.com/kwabenantim/forest-cover-stacking-multiple-classifiers # Scale and bin features from sklearn.preprocessing import MinMaxScaler MinMaxScaler((0, 100),copy=False).fit_transform(X) #X = np.floor(X).astype('int8') print("Completed feature engineering!") #break it down again in train and test x,x_predict = X.xs(0),X.xs(1) ###### THIS IS THE ENSEMBLE MODEL SECTION ###### #https://www.kaggle.com/kwabenantim/forest-cover-stacking-multiple-classifiers import random randomstate = 1 random.seed(randomstate) np.random.seed(randomstate) from sklearn.ensemble import AdaBoostClassifier from sklearn.tree import DecisionTreeClassifier ab_clf = AdaBoostClassifier(n_estimators=200, base_estimator=DecisionTreeClassifier( min_samples_leaf=2, random_state=randomstate), random_state=randomstate) #max_features = min(30, x.columns.size) 
max_features = 30 from sklearn.ensemble import ExtraTreesClassifier et_clf = ExtraTreesClassifier(n_estimators=300, min_samples_leaf=2, min_samples_split=2, max_depth=50, max_features=max_features, random_state=randomstate, n_jobs=1) from lightgbm import LGBMClassifier lg_clf = LGBMClassifier(n_estimators=300, num_leaves=128, verbose=-1, random_state=randomstate, n_jobs=1) from sklearn.ensemble import RandomForestClassifier rf_clf = RandomForestClassifier(n_estimators=300, random_state=randomstate, n_jobs=1) #Added a KNN classifier to the ensemble #https://www.kaggle.com/edumunozsala/feature-eng-and-a-simple-stacked-model from sklearn.neighbors import KNeighborsClassifier knn_clf = KNeighborsClassifier(n_neighbors=y.nunique(), n_jobs=1) #added several more classifiers at once #https://www.kaggle.com/edumunozsala/feature-eng-and-a-simple-stacked-model from sklearn.ensemble import BaggingClassifier from sklearn.tree import DecisionTreeClassifier bag_clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(criterion = 'entropy', max_depth=None, min_samples_split=2, min_samples_leaf=1,max_leaf_nodes=None, max_features='auto', random_state = randomstate), n_estimators=500,max_features=0.75, max_samples=1.0, random_state=randomstate,n_jobs=1,verbose=0) from sklearn.linear_model import LogisticRegression lr_clf = LogisticRegression(max_iter=1000, n_jobs=1, solver= 'lbfgs', multi_class = 'multinomial', random_state=randomstate, verbose=0) #https://www.kaggle.com/bustam/6-models-for-forest-classification from catboost import CatBoostClassifier cat_clf = CatBoostClassifier(n_estimators =300, eval_metric='Accuracy', metric_period=200, max_depth = None, random_state=randomstate, verbose=0) #https://www.kaggle.com/jakelj/basic-ensemble-model from sklearn.experimental import enable_hist_gradient_boosting from sklearn.ensemble import HistGradientBoostingClassifier hbc_clf = HistGradientBoostingClassifier(max_iter = 500, max_depth =25, random_state = randomstate) ensemble = 
[('AdaBoostClassifier', ab_clf), ('ExtraTreesClassifier', et_clf), ('LGBMClassifier', lg_clf), #('KNNClassifier', knn_clf), ('BaggingClassifier', bag_clf), #('LogRegressionClassifier', lr_clf), #('CatBoostClassifier', cat_clf), #('HBCClassifier', hbc_clf), ('RandomForestClassifier', rf_clf) ] #Cross-validating classifiers from sklearn.model_selection import cross_val_score for label, clf in ensemble: score = cross_val_score(clf, x, y, cv=10, scoring='accuracy', verbose=0, n_jobs=-1) print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (score.mean(), score.std(), label)) # Fitting stack from mlxtend.classifier import StackingCVClassifier stack = StackingCVClassifier(classifiers=[ab_clf, et_clf, lg_clf, bag_clf, rf_clf], meta_classifier=rf_clf, cv=10, stratify=True, shuffle=True, use_probas=True, use_features_in_secondary=True, verbose=0, random_state=randomstate) stack = stack.fit(x, y) print("Completed modeling!") #make predictions y_predict = stack.predict(x_predict) y_predict = pd.Series(y_predict, index=x_predict.index, dtype=y.dtype) print("Completed predictions!") # Save predictions to a file for submission output = pd.DataFrame({'Id': Ids, 'Cover_Type': y_predict}) output.to_csv('submission.csv', index=False) #create a link to download the file from IPython.display import FileLink FileLink(r'submission.csv')
_____no_output_____
MIT
src/Ensemble.ipynb
mulargui/kaggle-Classify-forest-types
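The heart of the pipeline above is stacking: out-of-fold predictions from the base classifiers become features for a meta-classifier. A compact sketch of the same idea using scikit-learn's built-in `StackingClassifier` on a toy dataset (the original uses mlxtend's `StackingCVClassifier` with `use_probas` and `use_features_in_secondary`; iris stands in for the forest-cover data):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

stack = StackingClassifier(
    estimators=[('rf', RandomForestClassifier(n_estimators=50, random_state=1)),
                ('lr', LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5)  # out-of-fold base-model predictions feed the meta-classifier
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Passing `use_features_in_secondary=True` in the original corresponds to `passthrough=True` here: the meta-classifier then sees the raw features alongside the base-model predictions.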
DATASET GENERATION
import numpy as np
import os
# Note: scipy.misc.imread / imresize were removed in recent SciPy releases;
# this cell needs an old SciPy (or imageio / Pillow as replacements).
from scipy.misc import imread, imresize
import matplotlib.pyplot as plt
%matplotlib inline

cwd = os.getcwd()
print("PACKAGES LOADED")
print("CURRENT FOLDER IS [%s]" % (cwd))
PACKAGES LOADED CURRENT FOLDER IS [/home/sj/notebooks/github/deep-learning-cpslab/code]
MIT
code/basic_gendataset.ipynb
cpslab-snu/deep_learning
CONFIGURATION
# FOLDER LOCATIONS
paths = ["../../img_dataset/celebs/Arnold_Schwarzenegger",
         "../../img_dataset/celebs/Junichiro_Koizumi",
         "../../img_dataset/celebs/Vladimir_Putin",
         "../../img_dataset/celebs/George_W_Bush"]
categories = ['Terminator', 'Koizumi', 'Putin', 'Bush']

# CONFIGURATIONS
imgsize = [64, 64]
use_gray = 1
data_name = "custom_data"

print("YOUR IMAGES SHOULD BE AT")
for i, path in enumerate(paths):
    print(" [%d/%d] %s" % (i, len(paths), path))
print("DATA WILL BE SAVED TO \n [%s]" % (cwd + '/data/' + data_name + '.npz'))
YOUR IMAGES SHOULD BE AT [0/4] ../../img_dataset/celebs/Arnold_Schwarzenegger [1/4] ../../img_dataset/celebs/Junichiro_Koizumi [2/4] ../../img_dataset/celebs/Vladimir_Putin [3/4] ../../img_dataset/celebs/George_W_Bush DATA WILL BE SAVED TO [/home/sj/notebooks/github/deep-learning-cpslab/code/data/custom_data.npz]
MIT
code/basic_gendataset.ipynb
cpslab-snu/deep_learning
RGB2GRAY
def rgb2gray(rgb):
    if len(rgb.shape) == 3:  # colour image: apply luma weights per pixel
        return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
    else:
        return rgb  # already single-channel
_____no_output_____
MIT
code/basic_gendataset.ipynb
cpslab-snu/deep_learning
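A quick sanity check on `rgb2gray`: the weights (0.299, 0.587, 0.114) are the standard ITU-R BT.601 luma coefficients, so pure red, green and blue pixels map to exactly those values (the function is restated here so the snippet is self-contained):

```python
import numpy as np

def rgb2gray(rgb):
    if len(rgb.shape) == 3:                     # colour image
        return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
    return rgb                                  # already single-channel

pixels = np.array([[[1.0, 0.0, 0.0],            # red
                    [0.0, 1.0, 0.0],            # green
                    [0.0, 0.0, 1.0]]])          # blue
print(rgb2gray(pixels))  # → [[0.299 0.587 0.114]]
```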
LOAD IMAGES
nclass = len(paths)
valid_exts = [".jpg", ".gif", ".png", ".tga", ".jpeg"]

imgcnt = 0
for i, relpath in zip(range(nclass), paths):
    path = cwd + "/" + relpath
    flist = os.listdir(path)
    for f in flist:
        if os.path.splitext(f)[1].lower() not in valid_exts:
            continue
        fullpath = os.path.join(path, f)
        currimg = imread(fullpath)
        # CONVERT TO GRAY (IF REQUIRED)
        if use_gray:
            grayimg = rgb2gray(currimg)
        else:
            grayimg = currimg
        # RESIZE
        graysmall = imresize(grayimg, [imgsize[0], imgsize[1]]) / 255.
        grayvec = np.reshape(graysmall, (1, -1))
        # SAVE
        curr_label = np.eye(nclass, nclass)[i:i+1, :]
        if imgcnt == 0:
            totalimg = grayvec
            totallabel = curr_label
        else:
            totalimg = np.concatenate((totalimg, grayvec), axis=0)
            totallabel = np.concatenate((totallabel, curr_label), axis=0)
        imgcnt = imgcnt + 1
print("TOTAL %d IMAGES" % (imgcnt))
TOTAL 681 IMAGES
MIT
code/basic_gendataset.ipynb
cpslab-snu/deep_learning
DIVIDE INTO TRAINING AND TEST
def print_shape(string, x):
    print("SHAPE OF [%s] IS [%s]" % (string, x.shape,))

# Use a permutation, not randint: randint samples with replacement, so it
# can put the same image in both sets and skip others entirely.
randidx = np.random.permutation(imgcnt)
trainidx = randidx[0:int(4*imgcnt/5)]
testidx = randidx[int(4*imgcnt/5):imgcnt]
trainimg = totalimg[trainidx, :]
trainlabel = totallabel[trainidx, :]
testimg = totalimg[testidx, :]
testlabel = totallabel[testidx, :]
print_shape("totalimg", totalimg)
print_shape("totallabel", totallabel)
print_shape("trainimg", trainimg)
print_shape("trainlabel", trainlabel)
print_shape("testimg", testimg)
print_shape("testlabel", testlabel)
SHAPE OF [totalimg] IS [(681, 4096)] SHAPE OF [totallabel] IS [(681, 4)] SHAPE OF [trainimg] IS [(544, 4096)] SHAPE OF [trainlabel] IS [(544, 4)] SHAPE OF [testimg] IS [(137, 4096)] SHAPE OF [testlabel] IS [(137, 4)]
MIT
code/basic_gendataset.ipynb
cpslab-snu/deep_learning
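A permutation-based 80/20 split has two properties worth checking: the train and test index sets are disjoint, and together they cover every image exactly once. A small self-contained check (the image count 681 is taken from the output above):

```python
import numpy as np

n = 681                                  # image count from the cell above
rng = np.random.default_rng(0)           # seeded for reproducibility
idx = rng.permutation(n)
cut = int(4 * n / 5)
train_idx, test_idx = idx[:cut], idx[cut:]

assert set(train_idx).isdisjoint(test_idx)        # no train/test leakage
assert len(train_idx) + len(test_idx) == n        # every image used once
```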
SAVE TO NPZ
savepath = cwd + "/data/" + data_name + ".npz"
np.savez(savepath,
         trainimg=trainimg, trainlabel=trainlabel,
         testimg=testimg, testlabel=testlabel,
         imgsize=imgsize, use_gray=use_gray, categories=categories)
print("SAVED TO [%s]" % (savepath))
SAVED TO [/home/sj/notebooks/github/deep-learning-cpslab/code/data/custom_data.npz]
MIT
code/basic_gendataset.ipynb
cpslab-snu/deep_learning
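`np.savez` stores each keyword argument as a named array inside one `.npz` archive, and `np.load` exposes them dict-style. A minimal round trip with stand-in arrays:

```python
import numpy as np
import os
import tempfile

train = np.arange(12).reshape(4, 3)      # stand-in for trainimg
labels = np.eye(4)                       # stand-in for trainlabel

path = os.path.join(tempfile.mkdtemp(), "custom_data.npz")
np.savez(path, trainimg=train, trainlabel=labels)

loaded = np.load(path)
print(sorted(loaded.files))  # → ['trainimg', 'trainlabel']
assert np.array_equal(loaded["trainimg"], train)
```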
LOAD NPZ
# LOAD
cwd = os.getcwd()
loadpath = cwd + "/data/" + data_name + ".npz"
l = np.load(loadpath)
print(l.files)

# Parse data
trainimg_loaded = l['trainimg']
trainlabel_loaded = l['trainlabel']
testimg_loaded = l['testimg']
testlabel_loaded = l['testlabel']
categories_loaded = l['categories']
print("[%d] TRAINING IMAGES" % (trainimg_loaded.shape[0]))
print("[%d] TEST IMAGES" % (testimg_loaded.shape[0]))
print("LOADED FROM [%s]" % (loadpath))
['trainlabel', 'imgsize', 'trainimg', 'testimg', 'testlabel', 'use_gray', 'categories'] [544] TRAINING IMAGES [137] TEST IMAGES LOADED FROM [/home/sj/notebooks/github/deep-learning-cpslab/code/data/custom_data.npz]
MIT
code/basic_gendataset.ipynb
cpslab-snu/deep_learning
PLOT LOADED DATA
ntrain_loaded = trainimg_loaded.shape[0]
batch_size = 5
randidx = np.random.randint(ntrain_loaded, size=batch_size)
for i in randidx:
    currlabel_onehot = trainlabel_loaded[i, :]
    currlabel = np.argmax(currlabel_onehot)
    # Plot from the *loaded* arrays, not the in-memory originals
    if use_gray:
        currimg = np.reshape(trainimg_loaded[i, :], (imgsize[0], -1))
        plt.matshow(currimg, cmap=plt.get_cmap('gray'))
        plt.colorbar()
    else:
        currimg = np.reshape(trainimg_loaded[i, :], (imgsize[0], imgsize[1], 3))
        plt.imshow(currimg)
    title_string = ("[%d] CLASS-%d (%s)" % (i, currlabel, categories_loaded[currlabel]))
    plt.title(title_string)
    plt.show()
_____no_output_____
MIT
code/basic_gendataset.ipynb
cpslab-snu/deep_learning
Classifying Fashion-MNIST

Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks, where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.

In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks as you work through this.

First off, let's load the dataset through torchvision.
import torch
from torchvision import datasets, transforms
import helper

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])

# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to C:\Users\yuxinshi/.pytorch/F_MNIST_data/FashionMNIST\raw\train-images-idx3-ubyte.gz
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
YuxinShi0423/deep-learning-v2-pytorch
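`transforms.Normalize((0.5,), (0.5,))` maps each pixel x (in [0, 1] after `ToTensor`) to (x - mean) / std, i.e. into [-1, 1]. The same arithmetic in plain numpy:

```python
import numpy as np

pixels = np.array([0.0, 0.25, 0.5, 1.0])   # ToTensor output range
normalized = (pixels - 0.5) / 0.5          # what Normalize((0.5,), (0.5,)) computes
print(normalized.min(), normalized.max())  # → -1.0 1.0
```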
Here we can see one of the images.
image, label = next(iter(trainloader))
helper.imshow(image[0, :]);
_____no_output_____
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
YuxinShi0423/deep-learning-v2-pytorch
Building the network

Here you should define your network. As with MNIST, each image is 28x28, for a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.
# TODO: Define your network architecture here
from torch import nn, optim
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

    def forward(self, x):
        x = x.view(x.shape[0], -1)   # flatten each image to a 784-vector
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.log_softmax(self.fc4(x), dim=1)
        return x
_____no_output_____
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
YuxinShi0423/deep-learning-v2-pytorch
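Because the forward pass ends in `log_softmax`, the network outputs log-probabilities; exponentiating them recovers class probabilities that sum to 1, which is why the test code later uses `torch.exp(model(img))`. The relationship in plain numpy:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])
log_probs = logits - np.log(np.exp(logits).sum())   # log_softmax in one line
probs = np.exp(log_probs)

print(round(probs.sum(), 6))  # → 1.0
print(probs.argmax())         # → 0 (the largest logit wins)
```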
Train the network

Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).

Then write the training code. Remember the training pass is a fairly straightforward process:

* Make a forward pass through the network to get the logits
* Use the logits to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights

By adjusting the hyperparameters (hidden units, learning rate, etc.), you should be able to get the training loss below 0.4.
# TODO: Create the network, define the criterion and optimizer
model = Classifier()
print(list(model.parameters()))
criterion = nn.NLLLoss()   # pairs with the log_softmax output above
optimizer = optim.SGD(model.parameters(), lr=0.003)

# TODO: Train the network here
epochs = 5
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()
        output = model(images)   # calling the model runs forward()
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        print(f"Training loss: {running_loss/len(trainloader)}")

%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper

# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)

# TODO: Calculate the class probabilities (softmax) for img
ps = torch.exp(model(img))

# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
_____no_output_____
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
YuxinShi0423/deep-learning-v2-pytorch
The Adonis architecture

Qubit connectivity:

```
      QB1
       |
QB4 - QB3 - QB2
       |
      QB5
```

Construct an `IQMDevice` instance representing the Adonis architecture.
adonis = Adonis()
print(adonis.NATIVE_GATES)
print(adonis.NATIVE_GATE_INSTANCES)
print(adonis.qubits)
_____no_output_____
Apache-2.0
examples/usage.ipynb
nikosavola/cirq-on-iqm
Creating a quantum circuit

Create a quantum circuit and insert native gates.
a, b, c = adonis.qubits[:3]
circuit = cirq.Circuit(device=adonis)
circuit.append(cirq.X(a))
circuit.append(cirq.PhasedXPowGate(phase_exponent=0.3, exponent=0.5)(c))
circuit.append(cirq.CZ(a, c))
circuit.append(cirq.YPowGate(exponent=1.1)(c))
print(circuit)
_____no_output_____
Apache-2.0
examples/usage.ipynb
nikosavola/cirq-on-iqm