| markdown | code | path | repo_name | license |
|---|---|---|---|---|
3. ROC Curve (what is ROC/AUC?)
An Area Under the Curve (AUC) above 0.7 is generally considered useful classification performance
(Streiner and Cairney, 2007, "What's under the ROC? An introduction to receiver operating characteristic curves").
The AUC obtained by grid search (0.68) is greater than that obtained by the greedy method (0.66).
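As a quick sanity check of how these AUC numbers are computed, here is a minimal sketch on toy labels and scores (not the review data from this notebook); `roc_curve` plus `auc` and the `roc_auc_score` shortcut agree:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc, roc_auc_score

# Toy ground truth and predicted scores for the positive class.
y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = roc_curve(y_true, scores)
print(auc(fpr, tpr))                   # area under the piecewise-linear ROC curve: 0.75
print(roc_auc_score(y_true, scores))   # shortcut giving the same number: 0.75
```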
1) Greedy Method
|
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

clf_RF = RandomForestClassifier(n_estimators=18, max_features=14, min_samples_leaf=9, oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)
# Column 0 of predict_proba holds the probability of class 0, so pos_label=0 for both curves.
fpr_rf, tpr_rf, thresholds_rf = roc_curve(y_test, clf_RF.predict_proba(X_test)[:, 0], pos_label=0)
fpr_base, tpr_base, thresholds_base = roc_curve(y_test, clf_Dummy.predict_proba(X_test)[:, 0], pos_label=0)
plt.figure(figsize=(5, 5))
plt.plot(fpr_rf, tpr_rf, color='#E45A84', linewidth=3, linestyle='-',
         label='random forest: %(performance)0.2f' % {'performance': auc(fpr_rf, tpr_rf)})
plt.plot(fpr_base, tpr_base, color='#FFACAC', linewidth=2, linestyle='--',
         label='baseline: %(performance)0.2f' % {'performance': auc(fpr_base, tpr_base)})
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('ROC (Receiver operating characteristic)', fontdict={'fontsize': 12})
plt.legend(loc="lower right");
|
3_model selection_evalutation.ipynb
|
higee/amazon-helpful-review
|
mit
|
2) Grid Search
|
clf_RF = RandomForestClassifier(n_estimators=10, max_features=19, min_samples_leaf=27, criterion='entropy', oob_score=True).fit(X_train, y_train)
clf_RF_predicted = clf_RF.predict(X_test)
# As before, column 0 of predict_proba is the probability of class 0, so pos_label=0.
fpr_rf, tpr_rf, thresholds_rf = roc_curve(y_test, clf_RF.predict_proba(X_test)[:, 0], pos_label=0)
fpr_base, tpr_base, thresholds_base = roc_curve(y_test, clf_Dummy.predict_proba(X_test)[:, 0], pos_label=0)
plt.figure(figsize=(5, 5))
plt.plot(fpr_rf, tpr_rf, color='#E45A84', linewidth=3, linestyle='-',
         label='random forest: %(performance)0.2f' % {'performance': auc(fpr_rf, tpr_rf)})
plt.plot(fpr_base, tpr_base, color='#FFACAC', linewidth=2, linestyle='--',
         label='baseline: %(performance)0.2f' % {'performance': auc(fpr_base, tpr_base)})
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('ROC (Receiver operating characteristic)', fontdict={'fontsize': 12})
plt.legend(loc="lower right");
|
3_model selection_evalutation.ipynb
|
higee/amazon-helpful-review
|
mit
|
Define hyperparameters
|
AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 128
EPOCHS = 5
CROP_TO = 32
SEED = 26
PROJECT_DIM = 2048
LATENT_DIM = 512
WEIGHT_DECAY = 0.0005
|
examples/vision/ipynb/simsiam.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Load the CIFAR-10 dataset
|
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
print(f"Total training examples: {len(x_train)}")
print(f"Total test examples: {len(x_test)}")
|
examples/vision/ipynb/simsiam.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Defining our data augmentation pipeline
As studied in SimCLR, having the right data
augmentation pipeline is critical for SSL systems to work effectively in computer vision.
Two augmentation transforms that seem to matter the most are: 1) random
resized crops and 2) color distortions. Most of the other SSL systems for computer
vision (such as BYOL, MoCoV2, SwAV, etc.) include these in their training pipelines.
|
def flip_random_crop(image):
    # With random crops we also apply horizontal flipping.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_crop(image, (CROP_TO, CROP_TO, 3))
    return image


def color_jitter(x, strength=[0.4, 0.4, 0.4, 0.1]):
    x = tf.image.random_brightness(x, max_delta=0.8 * strength[0])
    x = tf.image.random_contrast(
        x, lower=1 - 0.8 * strength[1], upper=1 + 0.8 * strength[1]
    )
    x = tf.image.random_saturation(
        x, lower=1 - 0.8 * strength[2], upper=1 + 0.8 * strength[2]
    )
    x = tf.image.random_hue(x, max_delta=0.2 * strength[3])
    # Affine transformations can disturb the natural range of
    # RGB images, hence this is needed.
    x = tf.clip_by_value(x, 0, 255)
    return x


def color_drop(x):
    x = tf.image.rgb_to_grayscale(x)
    x = tf.tile(x, [1, 1, 3])
    return x


def random_apply(func, x, p):
    if tf.random.uniform([], minval=0, maxval=1) < p:
        return func(x)
    else:
        return x


def custom_augment(image):
    # As discussed in the SimCLR paper, the series of augmentation
    # transformations (except for random crops) need to be applied
    # randomly to impose translational invariance.
    image = flip_random_crop(image)
    image = random_apply(color_jitter, image, p=0.8)
    image = random_apply(color_drop, image, p=0.2)
    return image
|
examples/vision/ipynb/simsiam.ipynb
|
keras-team/keras-io
|
apache-2.0
|
It should be noted that an augmentation pipeline is generally dependent on various
properties of the dataset we are dealing with. For example, if images in the dataset are
heavily object-centric then taking random crops with a very high probability may hurt the
training performance.
Let's now apply our augmentation pipeline to our dataset and visualize a few outputs.
Convert the data into TensorFlow Dataset objects
Here we create two different versions of our dataset without any ground-truth labels.
|
ssl_ds_one = tf.data.Dataset.from_tensor_slices(x_train)
ssl_ds_one = (
    ssl_ds_one.shuffle(1024, seed=SEED)
    .map(custom_augment, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)

ssl_ds_two = tf.data.Dataset.from_tensor_slices(x_train)
ssl_ds_two = (
    ssl_ds_two.shuffle(1024, seed=SEED)
    .map(custom_augment, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)

# We then zip both of these datasets.
ssl_ds = tf.data.Dataset.zip((ssl_ds_one, ssl_ds_two))

# Visualize a few augmented images.
sample_images_one = next(iter(ssl_ds_one))
plt.figure(figsize=(10, 10))
for n in range(25):
    ax = plt.subplot(5, 5, n + 1)
    plt.imshow(sample_images_one[n].numpy().astype("int"))
    plt.axis("off")
plt.show()

# Ensure that the different versions of the dataset actually contain
# identical images.
sample_images_two = next(iter(ssl_ds_two))
plt.figure(figsize=(10, 10))
for n in range(25):
    ax = plt.subplot(5, 5, n + 1)
    plt.imshow(sample_images_two[n].numpy().astype("int"))
    plt.axis("off")
plt.show()
|
examples/vision/ipynb/simsiam.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Notice that the images in sample_images_one and sample_images_two are essentially
the same but are augmented differently.
Defining the encoder and the predictor
We use an implementation of ResNet20 that is specifically configured for the CIFAR10
dataset. The code is taken from the
keras-idiomatic-programmer repository. The hyperparameters of
these architectures are taken from Section 3 and Appendix A of the original
paper.
|
!wget -q https://git.io/JYx2x -O resnet_cifar10_v2.py

import resnet_cifar10_v2

N = 2
DEPTH = N * 9 + 2
NUM_BLOCKS = ((DEPTH - 2) // 9) - 1


def get_encoder():
    # Input and backbone.
    inputs = layers.Input((CROP_TO, CROP_TO, 3))
    x = layers.Rescaling(scale=1.0 / 127.5, offset=-1)(inputs)
    x = resnet_cifar10_v2.stem(x)
    x = resnet_cifar10_v2.learner(x, NUM_BLOCKS)
    x = layers.GlobalAveragePooling2D(name="backbone_pool")(x)

    # Projection head.
    x = layers.Dense(
        PROJECT_DIM, use_bias=False, kernel_regularizer=regularizers.l2(WEIGHT_DECAY)
    )(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Dense(
        PROJECT_DIM, use_bias=False, kernel_regularizer=regularizers.l2(WEIGHT_DECAY)
    )(x)
    outputs = layers.BatchNormalization()(x)
    return tf.keras.Model(inputs, outputs, name="encoder")


def get_predictor():
    model = tf.keras.Sequential(
        [
            # Note the AutoEncoder-like structure.
            layers.Input((PROJECT_DIM,)),
            layers.Dense(
                LATENT_DIM,
                use_bias=False,
                kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
            ),
            layers.ReLU(),
            layers.BatchNormalization(),
            layers.Dense(PROJECT_DIM),
        ],
        name="predictor",
    )
    return model
|
examples/vision/ipynb/simsiam.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Defining the (pre-)training loop
One of the main reasons behind training networks with these kinds of approaches is to
utilize the learned representations for downstream tasks like classification. This is why
this particular training phase is also referred to as pre-training.
We start by defining the loss function.
|
def compute_loss(p, z):
    # The authors of SimSiam emphasize the impact of
    # the `stop_gradient` operator in the paper as it
    # has an important role in the overall optimization.
    z = tf.stop_gradient(z)
    p = tf.math.l2_normalize(p, axis=1)
    z = tf.math.l2_normalize(z, axis=1)
    # Negative cosine similarity (minimizing this is
    # equivalent to maximizing the similarity).
    return -tf.reduce_mean(tf.reduce_sum((p * z), axis=1))
|
examples/vision/ipynb/simsiam.ipynb
|
keras-team/keras-io
|
apache-2.0
|
We then define our training loop by overriding the train_step() function of the
tf.keras.Model class.
|
class SimSiam(tf.keras.Model):
    def __init__(self, encoder, predictor):
        super(SimSiam, self).__init__()
        self.encoder = encoder
        self.predictor = predictor
        self.loss_tracker = tf.keras.metrics.Mean(name="loss")

    @property
    def metrics(self):
        return [self.loss_tracker]

    def train_step(self, data):
        # Unpack the data.
        ds_one, ds_two = data

        # Forward pass through the encoder and predictor.
        with tf.GradientTape() as tape:
            z1, z2 = self.encoder(ds_one), self.encoder(ds_two)
            p1, p2 = self.predictor(z1), self.predictor(z2)
            # Note that here we are enforcing the network to match
            # the representations of two differently augmented batches
            # of data.
            loss = compute_loss(p1, z2) / 2 + compute_loss(p2, z1) / 2

        # Compute gradients and update the parameters.
        learnable_params = (
            self.encoder.trainable_variables + self.predictor.trainable_variables
        )
        gradients = tape.gradient(loss, learnable_params)
        self.optimizer.apply_gradients(zip(gradients, learnable_params))

        # Monitor loss.
        self.loss_tracker.update_state(loss)
        return {"loss": self.loss_tracker.result()}
|
examples/vision/ipynb/simsiam.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Pre-training our networks
In the interest of keeping this example short, we will train the model for only 5 epochs.
In practice, this should be at least 100 epochs.
|
# Create a cosine decay learning rate scheduler.
num_training_samples = len(x_train)
steps = EPOCHS * (num_training_samples // BATCH_SIZE)
lr_decayed_fn = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.03, decay_steps=steps
)

# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="loss", patience=5, restore_best_weights=True
)

# Compile model and start training.
simsiam = SimSiam(get_encoder(), get_predictor())
simsiam.compile(optimizer=tf.keras.optimizers.SGD(lr_decayed_fn, momentum=0.6))
history = simsiam.fit(ssl_ds, epochs=EPOCHS, callbacks=[early_stopping])

# Visualize the training progress of the model.
plt.plot(history.history["loss"])
plt.grid()
plt.title("Negative Cosine Similarity")
plt.show()
|
examples/vision/ipynb/simsiam.ipynb
|
keras-team/keras-io
|
apache-2.0
|
If your loss gets very close to -1 (the minimum value of our loss) very quickly with a
different dataset and a different backbone architecture, that is likely because of
representation collapse: a phenomenon where the encoder yields similar outputs for
all the images. In that case, additional hyperparameter tuning is required, especially in
the following areas:
* Strength of the color distortions and their probabilities.
* Learning rate and its schedule.
* Architecture of both the backbone and the projection head.
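One way to spot the collapse described above (in the spirit of the output-std diagnostic discussed in the SimSiam paper) is to monitor the per-dimension standard deviation of the l2-normalized representations: it sits near 1/sqrt(d) for healthy embeddings and near 0 after collapse. A minimal NumPy sketch on synthetic embeddings; `embedding_std` is a hypothetical helper, not part of this example's code:

```python
import numpy as np

def embedding_std(z):
    """Mean per-dimension std of l2-normalized embeddings.

    Close to 1/sqrt(d) for well-spread representations,
    close to 0 when the encoder has collapsed.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return z.std(axis=0).mean()

rng = np.random.default_rng(0)
d = 128
healthy = rng.normal(size=(1024, d))                       # spread-out embeddings
collapsed = np.tile(rng.normal(size=(1, d)), (1024, 1))    # identical outputs

print(embedding_std(healthy))    # close to 1/sqrt(128) ~ 0.088
print(embedding_std(collapsed))  # ~ 0.0
```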
Evaluating our SSL method
The most popular way to evaluate an SSL method in computer vision (or any other
pre-training method, for that matter) is to learn a linear classifier on the frozen features of
the trained backbone model (in this case, ResNet20) and evaluate the classifier on
unseen images. Other methods include
fine-tuning on the source dataset, or even on a
target dataset with only 5% or 10% of the labels present. In practice, we can use the backbone model
for any downstream task, such as semantic segmentation, object detection, and so on, just
as with backbones pre-trained via pure supervised learning.
|
# We first create labeled `Dataset` objects.
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))

# Then we shuffle, batch, and prefetch this dataset for performance. We
# also apply random resized crops as an augmentation but only to the
# training set.
train_ds = (
    train_ds.shuffle(1024)
    .map(lambda x, y: (flip_random_crop(x), y), num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)
test_ds = test_ds.batch(BATCH_SIZE).prefetch(AUTO)

# Extract the backbone ResNet20.
backbone = tf.keras.Model(
    simsiam.encoder.input, simsiam.encoder.get_layer("backbone_pool").output
)

# We then create our linear classifier and train it.
backbone.trainable = False
inputs = layers.Input((CROP_TO, CROP_TO, 3))
x = backbone(inputs, training=False)
outputs = layers.Dense(10, activation="softmax")(x)
linear_model = tf.keras.Model(inputs, outputs, name="linear_model")

# Compile model and start training.
linear_model.compile(
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
    optimizer=tf.keras.optimizers.SGD(lr_decayed_fn, momentum=0.9),
)
history = linear_model.fit(
    train_ds, validation_data=test_ds, epochs=EPOCHS, callbacks=[early_stopping]
)

_, test_acc = linear_model.evaluate(test_ds)
print("Test accuracy: {:.2f}%".format(test_acc * 100))
|
examples/vision/ipynb/simsiam.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Dissecting the program above
The first line of the program above imports the read_raw_bdf function from the mne.io module.
The second line of the program is the most complicated. A lot of stuff is going on there:
<img src="images/function_call_explanation.png">
The read_raw_bdf function is called with two parameters. The first parameter is a piece of text (a "string") containing the name of the BDF file to load. Literal text (strings) must always be enclosed in ' quotes. The second parameter is a "named" parameter, which is something I added since last level. We will use named parameters a lot during this session (see below). This parameter is set to the special value True. Python has three special values: True, False, and None, which are often used to indicate "yes", "no", and "I don't know/care" respectively. Finally, the result is stored in a variable called raw.
The last line of the program calls the print function, which is used to display things. Here, it is called with the raw variable as parameter, so it displays the data contained in this variable, namely the data we loaded with read_raw_bdf.
Named parameters
Many functions of MNE-Python take dozens of parameters that fine-tune exactly how to perform some operation. If you had to specify them all every time you want to call a function, you'd spend ages worrying about little details and get nothing done. Luckily, Python allows us to specify default values for parameters, which means these parameters may be omitted when calling a function, and the default will be used. In MNE-Python, most parameters have a default value, so while a function may have 20 parameters, you only have to specify one or two. The rest of the parameters are like little knobs and buttons you can use to fine tune things, or just leave alone. This allows MNE-Python to keep simple things simple, while making complicated things possible.
Parameters with default values are called "named" parameters, and you specify them with name=value. The preload parameter that you saw in the program above is such a named parameter. It controls whether to load all of the data in memory, or only read the "metadata" of the file, i.e., when it was recorded, how long it is, how many sensors the MEG machine had, etc. By default, preload=False, meaning only the metadata is read. In the example above, we set it to True, indicating we wish to really load all the data in memory.
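The idea of named parameters with defaults can be illustrated with a tiny, made-up Python function (`load_data` below is hypothetical, not an MNE-Python function; it merely mirrors how read_raw_bdf's preload parameter behaves):

```python
# A toy function with one required parameter and one named parameter
# that has a default value.
def load_data(filename, preload=False):
    if preload:
        return f"{filename}: metadata + full data in memory"
    return f"{filename}: metadata only"

print(load_data('recording.bdf'))                # default used: preload=False
print(load_data('recording.bdf', preload=True))  # override the default by name
```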
Visualizing the data
As we have seen in the last level, raw data can be visualized (or "plotted") by the plot_raw function that is kept inside the mne.viz module.
It needs one parameter: the variable containing the data you wish to plot. (It also has a lot of named parameters, but you can leave them alone for now.)
As a quick refresher, I'm going to let you write the visualization code.
But first, there is a little housekeeping that we need to do.
We need to tell the visualization engine to send the results to your browser and not attempt to open a window on the server where this code is running. Please run the cell below:
|
%matplotlib notebook
print('From now on, all graphics will be sent to your browser.')
|
eeg-erp/adept.ipynb
|
wmvanvliet/neuroscience_tutorials
|
bsd-2-clause
|
Now, it's your turn! Write the Python code that will visualize the raw EEG data we just loaded.
Keep the following things in mind:
1. The function is called plot_raw and is kept inside the mne.viz module. Remember to import the function first!
2. Call the function with one parameter, namely the raw variable we created above that contains the EEG data.
3. Assign the result of the plot_raw function to a variable (pick any name you want), otherwise the figure will show twice.
Use the cell below to write your code:
If you wrote the code correctly, you should be looking at a little interface that shows the data collected on all the EEG sensors. Click inside the scrollbars or use the arrow keys to explore the data.
Events, and how to read the documentation
Browsing through the sensors, you will notice there are two types:
8 EEG sensors, named Fz-P2
1 STIM sensor, which is not really a sensor, so we'll call it a "channel". Its name is "Status"
Take a close look at the STIM channel.
On this channel, the computer that is presenting the stimuli was sending timing information to the EEG equipment.
Whenever a stimulus (one of the 9 playing cards) was presented, the signal on this channel jumps briefly from 0 to a value of 1-9, indicating which playing card was being shown.
We can use this channel to create an "events" matrix: a table listing all the stimuli that were presented, along with the time and type of each one.
The function to do this is called find_events, and is kept inside the mne module.
In this document, all the function names are links to their documentation. Click on find_events to pull up its documentation. It will open a new browser tab. It should look like this:
<img src="images/doc_with_explanation.png" alt="Documentation for find_events"/>
Looking at the function "signature" reveals that many of the parameters have default values associated with them. This means these are named parameters and we can ignore them if we want. There is only a single required parameter, named raw. Looking at the parameter list, it seems we need to set it to the raw data we just loaded with the read_raw_bdf function. If we called the function correctly, it should provide us with an "array" (don't worry about what an array is for now) with all the events.
Now, call the function and find some events! Keep the following things in mind:
The function is called find_events and is kept inside the mne module. Remember to import the function first!
Call the function. Use the documentation to find out what parameters it needs.
Assign the result to a variable called events.
If you called the function correctly, running the cell below should display the found events on top of the raw data. It should show as cyan lines, with a number on top indicating the type of event.
These numbers are referred to as "event codes".
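Conceptually, what find_events does can be sketched in plain NumPy: scan the STIM channel for samples where the value jumps from 0 to a nonzero code (a toy channel for illustration, not the real recording; the actual function handles many more edge cases):

```python
import numpy as np

# Toy STIM channel: zeros with brief jumps to the stimulus code (1-9).
stim = np.array([0, 0, 3, 3, 0, 0, 7, 7, 7, 0, 1, 0])

# An event starts wherever the previous sample was 0 and the current one isn't.
onsets = np.where((stim[1:] != 0) & (stim[:-1] == 0))[0] + 1
events = np.column_stack([onsets, stim[onsets]])
print(events)  # rows of (sample index, event code): [[2 3] [6 7] [10 1]]
```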
|
fig = plot_raw(raw, events=events)
|
eeg-erp/adept.ipynb
|
wmvanvliet/neuroscience_tutorials
|
bsd-2-clause
|
Frequency filtering, or, working with objects
Throughout this exercise, we have created many variables, such as raw and events.
Up to now, we've treated these as simple boxes that hold some data, or rather "objects" as they are called in Python.
However, the box/object metaphor is not really a good one.
Variables are more like little machines of their own.
They can do things!
The raw variable is a very powerful object.
If you want, you can look at the documentation for Raw to see the long list of things it can do.
One useful thing is that a raw object knows how to visualize (i.e. "plot") itself.
You already know that modules hold functions, but objects can hold functions too.
Functions that are kept inside objects are called "methods", to distinguish them from "functions" that are kept inside modules.
Instead of using the plot_raw function, we can use the plotting method of the object itself, like this:
python
fig = raw.plot()
Notice how the .plot() method call doesn't need any parameters: it already knows which object it needs to visualize, namely the object it belongs to.
In MNE-Python, many objects have such a plot method.
Another method of the raw object is called filter, which applies a frequency filter to the data.
A frequency filter gets rid of some of the waves in the data that are either too slow or too fast to be of any interest to us.
The raw.filter() method takes two parameters: a lower bound and upper bound, expressed in Hz.
From this, we can deduce it applies a "bandpass" filter: keeping only waves within a certain frequency range.
Here is an example of a bandpass filter that keeps only waves between 5 to 50 Hz:
python
raw.filter(5, 50)
Notice how the result of the method is not assigned to any variable.
In this case, the raw.filter method operated on the raw variable directly, overwriting the data contained in it.
In this experiment, we're hunting for the P300 "oddball" effect, which is a relatively slow wave, but not extremely slow.
A good choice for us would be to get rid of all waves slower than 0.5 Hz and all waves faster than 20 Hz.
In the cell below, write the code to apply the desired bandpass filter to the data.
Note that the example given above used a different frequency range (5-50 Hz) than the one we actually want (0.5-20 Hz), so you will have to make some changes.
After you have applied the frequency filter, plot the data using the raw.plot() method so you can see the result of your hard work.
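To get a feel for what a 0.5-20 Hz bandpass does, here is a sketch on a synthetic signal using SciPy (this is purely illustrative; MNE-Python's raw.filter uses its own filter design, and the sampling rate below is an assumption):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0                              # hypothetical sampling rate, in Hz
t = np.arange(0, 4, 1 / fs)
# A 10 Hz wave (inside the 0.5-20 Hz band) plus a 40 Hz wave (outside it).
signal = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 40 * t)

# Design a 0.5-20 Hz Butterworth bandpass and apply it forward-backward.
b, a = butter(4, [0.5, 20], btype='bandpass', fs=fs)
filtered = filtfilt(b, a, signal)

# Compare the surviving amplitude at 10 Hz vs. 40 Hz.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
ratio = spectrum[np.argmin(np.abs(freqs - 10))] / spectrum[np.argmin(np.abs(freqs - 40))]
print(ratio)   # the in-band wave survives; the 40 Hz wave is strongly attenuated
```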
Epochs, or, how to create a dictionary
<img src="images/dictionary.jpg" width="200" style="float: right; margin-left: 10px">
Now that we have the information on what stimulus was presented at one time, we can extract "epochs". Epochs are little snippets of signal surrounding an event. These epochs can then be averaged to produce the "evoked" signal.
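The cut-and-average idea behind epochs can be sketched in plain NumPy (toy data and a hypothetical sampling rate; the real Epochs class does much more, e.g. baseline correction and artifact rejection):

```python
import numpy as np

fs = 100                                     # hypothetical sampling rate, in Hz
rng = np.random.default_rng(0)
data = rng.normal(scale=0.5, size=3000)      # one noisy channel
onsets = np.array([500, 1200, 1900, 2500])   # event onsets, in samples

# Bury the same small response after every event.
response = np.hanning(30)
for s in onsets:
    data[s:s + 30] += response

tmin, tmax = -0.2, 0.5                       # epoch window in seconds
lo, hi = int(tmin * fs), int(tmax * fs)
epochs = np.stack([data[s + lo:s + hi] for s in onsets])
evoked = epochs.mean(axis=0)                 # averaging suppresses the noise
print(epochs.shape)                          # (4, 70): 4 events, 70 samples each
```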
In order to create epochs, we need a way to translate the event codes (1, 2, 3, ...) into something more descriptive.
This can be done using a new type of variable called a "dictionary".
A Python dictionary allows us (or rather the computer) to "look things up".
The following piece of code creates a dictionary called event_id. Take a look and run it:
|
event_id = { 'Ace of spades': 1, 'Jack of clubs': 2, 'Queen of hearts': 3, 'King of diamonds': 4, '10 of spades': 5, '3 of clubs': 6, '10 of hearts': 7, '3 of diamonds': 8, 'King of spades': 9 }
|
eeg-erp/adept.ipynb
|
wmvanvliet/neuroscience_tutorials
|
bsd-2-clause
|
A dictionary is created by using curly braces { } and colons :. The way you create a dictionary is to say {this: means that}, and you use commas if you want to put more than one thing in the dictionary.
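Once created, looking things up works with square brackets. A small sketch, reusing a few of the entries above:

```python
event_id = {'Ace of spades': 1, 'Jack of clubs': 2, 'Queen of hearts': 3}

# Look a value up by its key with square brackets...
print(event_id['Queen of hearts'])                 # 3
# ...and go the other way by searching the items.
code = 2
name = [k for k, v in event_id.items() if v == code][0]
print(name)                                        # Jack of clubs
```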
Finally, you should know that Python allows you to spread out lists across multiple lines, so we can write our dictionary like this, which is much nicer to read:
|
event_id = {
'Ace of spades': 1,
'Jack of clubs': 2,
'Queen of hearts': 3,
'King of diamonds': 4,
'10 of spades': 5,
'3 of clubs': 6,
'10 of hearts': 7,
'3 of diamonds': 8,
'King of spades': 9,
}
|
eeg-erp/adept.ipynb
|
wmvanvliet/neuroscience_tutorials
|
bsd-2-clause
|
Armed with our event_id dictionary, we can move on to creating epochs.
For each event, let's cut a snippet of signal from 0.2 seconds before the moment the stimulus was presented, up until 0.5 seconds after it was presented. If we take the moment the stimulus was presented as time 0, we will cut epochs from -0.2 until 0.5 seconds.
The function to do this is called Epochs (with a capital E).
Click on the function name to open its documentation and look at the parameters it needs.
<div style="margin-left: 50px; margin-top: 1ex"><img src="images/OMG.png" width="20" style="display: inline"> That's a lot of parameters!</div>
<div style="margin-left: 50px"><img src="images/thinking.png" width="20" style="display: inline"> Ok, how many are optional?</div>
<div style="margin-left: 50px"><img src="images/phew.png" width="20" style="display: inline"> Almost all of them, phew!</div>
Here are the points of the documentation that are most relevant to us right now:
* There are two required arguments, the raw data and the events array we created earlier.
* The next optional parameter is the event_id dictionary we just created.
* The next two optional parameters, tmin and tmax, specify the time range to cut epochs for. They are set to -0.2 to 0.5 seconds by default. We want a little more than that. Set them to cut epochs from -0.2 to 0.8 seconds.
* We can also leave the rest of the parameters alone.
Go ahead and import the Epochs function from the mne module.
Then call it with the correct parameters and store the result in a variable called epochs (small e):
Most MNE-Python objects have a .plot() method to visualize themselves. The cell below will create an interactive visualization of your epochs. Click inside the scrollbars or use the arrow keys to scroll through the data.
|
epochs.plot()
|
eeg-erp/adept.ipynb
|
wmvanvliet/neuroscience_tutorials
|
bsd-2-clause
|
Load in the permittivity distribution from an FDFD simulation
|
def reshape_arr(arr, Nx, Ny):
    return arr.reshape((Nx, Ny, 1))
eps_r = np.load('data/eps_r_splitter4.npy')
eps_wg = np.load('data/eps_waveguide.npy')
plt.imshow(eps_r.T, cmap='gist_earth_r')
plt.show()
Nx, Ny = eps_r.shape
J_in = np.load('data/J_in.npy')
J_outs = np.load('data/J_list.npy')
J_wg = np.flipud(J_in.copy())
eps_wg = reshape_arr(eps_wg, Nx, Ny)
eps_r = reshape_arr(eps_r, Nx, Ny)
J_in = reshape_arr(J_in, Nx, Ny)
J_outs = [reshape_arr(J, Nx, Ny) for J in J_outs]
J_wg = reshape_arr(J_wg, Nx, Ny)
Nz = 1
|
examples/simulate_splitter_fdtd.ipynb
|
fancompute/ceviche
|
mit
|
Initial setting up of parameters
|
nx, ny, nz = Nx//2, Ny//2, Nz//2
dL = 5e-8
pml = [20, 20, 0]
F = fdtd(eps_r, dL=dL, npml=pml)
F_wg = fdtd(eps_wg, dL=dL, npml=pml)
# source parameters
steps = 10000
t0 = 2000
sigma = 100
source_amp = 5
omega = 2 * np.pi * C_0 / 2e-6 # units of 1/sec
omega_sim = omega * F.dt # unitless time
gaussian = lambda t: np.exp(-(t - t0)**2 / 2 / sigma**2) * np.cos(omega_sim * t)
source = lambda t: J_in * source_amp * gaussian(t)
plt.plot(1e15 * F.dt * np.arange(steps), gaussian(np.arange(steps)))
plt.xlabel('time (femtoseconds)')
plt.ylabel('source amplitude')
plt.show()
|
examples/simulate_splitter_fdtd.ipynb
|
fancompute/ceviche
|
mit
|
Compute the transmission (as a function of time) for a straight waveguide (to normalize later)
|
measured_wg = measure_fields(F_wg, source, steps, J_wg)
plt.plot(1e15 * F.dt*np.arange(steps), gaussian(np.arange(steps))/gaussian(np.arange(steps)).max(), label='source (Jz)')
plt.plot(1e15 * F.dt*np.arange(steps), measured_wg/measured_wg.max(), label='measured (Ez)')
plt.xlabel('time (femtoseconds)')
plt.ylabel('amplitude')
plt.legend()
plt.show()
|
examples/simulate_splitter_fdtd.ipynb
|
fancompute/ceviche
|
mit
|
Show the field plots at 10 time steps
|
aniplot(F_wg, source, steps, num_panels=10)
|
examples/simulate_splitter_fdtd.ipynb
|
fancompute/ceviche
|
mit
|
Compute the power spectrum transmitted
|
plot_spectral_power(gaussian(np.arange(steps)), F.dt, f_top=3e16)
gaussian(np.arange(steps)).shape
|
examples/simulate_splitter_fdtd.ipynb
|
fancompute/ceviche
|
mit
|
Now for the measured fields
|
plot_spectral_power(measured_wg[:,0], F.dt, f_top=2e14)
|
examples/simulate_splitter_fdtd.ipynb
|
fancompute/ceviche
|
mit
|
Now measure the transmission for the inverse designed splitter (at each of the four ports)
|
measured = measure_fields(F, source, steps, J_outs)
ham = np.hamming(steps).reshape((steps,1))
# plt.plot(measured*ham**5)
plot_spectral_power(measured*ham**100, dt=F.dt, f_top=4e16)
plt.plot(1e15 * F.dt*np.arange(steps), gaussian(np.arange(steps))/gaussian(np.arange(steps)).max(), label='source (Jz)')
plt.plot(1e15 * F.dt*np.arange(steps), measured/measured.max(), label='measured (Ez)')
plt.xlabel('time (femtoseconds)')
plt.ylabel('amplitude')
plt.legend()
plt.show()
aniplot(F_wg, source, steps, num_panels=10)
series_in = gaussian(np.arange(steps))
series_wg = None
plot_spectral_power(measured_wg, F.dt, f_top=15e14)
series_in = gaussian(np.arange(steps))
measured_wg
freq1, spect1 = get_spectrum(series_in, dt=F.dt)
freq2, spect2 = get_spectrum(measured_wg, dt=F.dt)
plt.plot(series_in/series_in.max())
plt.plot(measured_wg/measured_wg.max())
plt.show()
spect1 = np.fft.fft(series_in/series_in.max())
spect2 = np.fft.fft(measured_wg/measured_wg.max())
plt.plot(np.fft.fftshift(np.abs(spect1)))
plt.plot(np.fft.fftshift(np.abs(spect2)))
plt.show()
plt.plot(freq1, np.abs(spect1/spect1.max()))
plt.plot(freq2, np.abs(spect2/spect2.max()))
plt.show()
freq, spect = get_spectrum(measured_wg, dt=F.dt)
plt.plot(freq, np.abs(spect))
plt.xlim([0, 4e14])
ex = 1e-50
plt.ylim([-0.1*ex, ex])
plt.show()
plot_spectral_power(measured, dt=F.dt, f_top=1e16)
plt.plot(series_in)
series_out = measured_wg/measured_wg.max()
plt.plot(series_out)
plt.plot(np.abs(np.fft.fft(series_in*np.hanning(steps))))
plt.plot(np.abs(np.fft.fft(series_out*np.hanning(steps))))
(measured_wg*np.hamming(steps)).shape
plt.plot(np.hamming(steps))
F.dt
from ceviche.utils import get_spectrum_lr
get_spectrum_lr(measured_wg[:,0], dt=F.dt)
get_spectrum_lr(gaussian(np.arange(steps)), dt=F.dt)
|
examples/simulate_splitter_fdtd.ipynb
|
fancompute/ceviche
|
mit
|
Just check that the analytical solution coincides with the solution of the ODE for the variance
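As a quick independent check, the closed-form steady state Vc used below is a fixed point of the constant-gamma ODE dV/dt = -(gamma - alpha*beta)V - 2 alpha^2 V^2 + gamma/2. A minimal NumPy sketch, with the same parameter values as the cell below:

```python
import numpy as np

th = 0.1                       # interaction parameter, as in the notebook
alpha, beta, gamma = np.cos(th), np.sin(th), 1.0

# Closed-form steady state of dV/dt = -(gamma - alpha*beta)*V - 2*alpha^2*V^2 + gamma/2.
Vc = (alpha * beta - gamma
      + np.sqrt((gamma - alpha * beta) ** 2 + 4 * gamma * alpha ** 2)) / (4 * alpha ** 2)

# Plug Vc back into the right-hand side: the residual should vanish.
residual = -(gamma - alpha * beta) * Vc - 2 * alpha ** 2 * Vc ** 2 + 0.5 * gamma
print(residual)   # ~ 0: Vc is indeed a fixed point of the ODE
```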
|
def arccoth(x):
    return 0.5*np.log((1. + x)/(x - 1.))

############ parameters #############
th = 0.1     # Interaction parameter
alpha = np.cos(th)
beta = np.sin(th)
gamma = 1.

def gammaf(t):
    return 0.25 + t/12 + t*t/6

def f_gamma(t, *args):
    return (0.25 + t/12 + t*t/6)**(0.5)

################# Solution of the differential equation for the variance Vc ####################
T = 6.
N_store = 200
tlist = np.linspace(0, T, N_store)
y0 = 0.5

def func(y, t):
    return -(gammaf(t) - alpha*beta)*y - 2*alpha*alpha*y*y + 0.5*gammaf(t)

y_td = odeint(func, y0, tlist)

def func(y, t):
    return -(gamma - alpha*beta)*y - 2*alpha*alpha*y*y + 0.5*gamma

y = odeint(func, y0, tlist)

############ Exact steady state solution for Vc #########################
Vc = (alpha*beta - gamma + np.sqrt((gamma - alpha*beta)**2 + 4*gamma*alpha**2))/(4*alpha**2)

#### Analytic solution
A = (gamma**2 + alpha**2 * (beta**2 + 4*gamma) - 2*alpha*beta*gamma)**0.5
B = arccoth((-4*alpha**2*y0 + alpha*beta - gamma)/A)
y_an = (alpha*beta - gamma + A / np.tanh(0.5*A*tlist - B))/(4*alpha**2)
f, (ax, ax2) = plt.subplots(2, 1, sharex=True)
ax.set_title('Variance as a function of time')
ax.plot(tlist,y)
ax.plot(tlist,Vc*np.ones_like(tlist))
ax.plot(tlist,y_an)
ax.set_ylim(0,0.5)
ax2.set_title('Deviation of odeint from analytic solution')
ax2.set_xlabel('t')
ax2.set_ylabel(r'$\epsilon$')
ax2.plot(tlist,y_an - y.T[0]);
|
development/development-ssesolver-new-methods.ipynb
|
qutip/qutip-notebooks
|
lgpl-3.0
|
Test of different SME solvers
|
####################### Model ###########################
N = 30 # number of Fock states
Id = qeye(N)
a = destroy(N)
s = 0.5*((alpha+beta)*a + (alpha-beta)*a.dag())
x = (a + a.dag())/np.sqrt(2)
H = Id
c_op = [np.sqrt(gamma)*a]
c_op_td = [[a,f_gamma]]
sc_op = [s]
e_op = [x, x*x]
rho0 = fock_dm(N,0) # initial vacuum state
#sc_len=1 # one stochastic operator
############## time steps and trajectories ###################
ntraj = 1 #100 # number of trajectories
T = 6. # final time
N_store = 200 # number of time steps for which we save the expectation values/density matrix
tlist = np.linspace(0,T,N_store)
ddt = (tlist[1]-tlist[0])
Nsubs = list((13*np.logspace(0,1,10)).astype(int))
stepsizes = [ddt/j for j in Nsubs] # step sizes decrease logarithmically across the sweep
Nt = len(Nsubs) # number of step sizes that we compare
Nsubmax = Nsubs[-1] # Number of intervals for the smallest step size;
dtmin = (tlist[1]-tlist[0])/(Nsubmax)
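The convergence-order estimate used later in this notebook (fitting the slope of log(error) against log(step size) with `np.polyfit`) can be sketched in isolation on synthetic data; the `0.02` prefactor and the 1.5 exponent below are arbitrary illustrations, not values from the solvers:

```python
import numpy as np

# Synthetic errors that scale as dt**1.5; the slope of log(err) vs log(dt)
# recovers the convergence order of the method.
dts = np.logspace(-3, -1, 10)
errs = 0.02 * dts ** 1.5
order = np.polyfit(np.log(dts), np.log(errs), 1)[0]
print(order)  # close to 1.5
```

On noisy trajectory data the fitted slope is of course only approximate, which is why the plots below also show reference power-law lines.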
|
development/development-ssesolver-new-methods.ipynb
|
qutip/qutip-notebooks
|
lgpl-3.0
|
Plotting the figure - Constant case
|
# Analytical solution not available:
# Compute the evolution with the best solver and very small step size and use it as the reference
sol = ssesolve(H, fock(N), tlist, [sc_op[0]+c_op[0]], e_op, nsubsteps=2000, method="homodyne",solver="taylor2.0")
y_sse = sol.expect[1]-sol.expect[0]*sol.expect[0].conj()
ntraj = 1
def run_sse(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for jj in range(0,Nt):
for j in range(0,ntraj):
Nsub = Nsubs[jj]#int(Nsubmax/(2**jj))
sol = ssesolve(H, fock(N), tlist, [sc_op[0]+c_op[0]], e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_sse - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats(**kw):
start = time.time()
y = run_sse(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit,time.time()-start
stats_cte = []
stats_cte.append(get_stats(solver='euler-maruyama'))
stats_cte.append(get_stats(solver='platen'))
stats_cte.append(get_stats(solver='pred-corr'))
stats_cte.append(get_stats(solver='milstein'))
stats_cte.append(get_stats(solver='milstein-imp', tol=1e-9))
stats_cte.append(get_stats(solver='pred-corr-2'))
stats_cte.append(get_stats(solver='explicit1.5'))
stats_cte.append(get_stats(solver="taylor1.5"))
stats_cte.append(get_stats(solver="taylor1.5-imp", tol=1e-9))
stats_cte.append(get_stats(solver="taylor2.0"))
stats_cte.append(get_stats(solver="taylor2.0", noiseDepth=500))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stats_cte):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.003*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.01*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.001*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.01*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
ax.loglog(stepsizes, 0.05*np.array(stepsizes)**2.0, label="$\propto\Delta t^{2}$")
ax.set_xlabel(r'$\Delta t$ $\left[\gamma^{-1}\right]$')
ax.set_ylabel('deviation')
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
|
development/development-ssesolver-new-methods.ipynb
|
qutip/qutip-notebooks
|
lgpl-3.0
|
Deterministic part time dependent
|
def H_f(t,args):
return 0.125+t/12+t*t/72
sol = ssesolve([H,[c_op[0].dag()*c_op[0]/2,H_f]], fock(N), tlist, sc_op, e_op,
nsubsteps=2500, method="homodyne",solver="taylor2.0")
y_sse_td = sol.expect[1]-sol.expect[0]*sol.expect[0].conj()
plt.plot(y_sse_td)
ntraj = 1
def run_sse_td(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for jj in range(0,Nt):
for j in range(0,ntraj):
Nsub = Nsubs[jj]#int(Nsubmax/(2**jj))
sol = ssesolve([H,[c_op[0].dag()*c_op[0]/2,H_f]], fock(N), tlist, sc_op, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_sse_td - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats(**kw):
y = run_sse_td(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit
stats_td = []
stats_td.append(get_stats(solver='euler-maruyama'))
stats_td.append(get_stats(solver='platen'))
stats_td.append(get_stats(solver='pred-corr'))
stats_td.append(get_stats(solver='milstein'))
stats_td.append(get_stats(solver='milstein-imp'))
stats_td.append(get_stats(solver='pred-corr-2'))
stats_td.append(get_stats(solver='explicit1.5'))
stats_td.append(get_stats(solver="taylor1.5"))
stats_td.append(get_stats(solver="taylor1.5-imp", tol=1e-9))
stats_td.append(get_stats(solver="taylor2.0"))
stats_td.append(get_stats(solver="taylor2.0", noiseDepth=500))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stats_td):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.1*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
ax.loglog(stepsizes, 0.5*np.array(stepsizes)**2.0, label="$\propto\Delta t^{2}$")
ax.set_xlabel(r'$\Delta t$ $\left[\gamma^{-1}\right]$')
ax.set_ylabel('deviation')
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
|
development/development-ssesolver-new-methods.ipynb
|
qutip/qutip-notebooks
|
lgpl-3.0
|
Both d1 and d2 time-dependent
|
def H_f(t,args):
return 0.125+t/12+t*t/72
def H_bf(t,args):
return 0.125+t/10+t*t/108
sc_op_td = [[sc_op[0],H_bf]]
sol = ssesolve([H,[c_op[0].dag()*c_op[0]/2,H_f]], fock(N), tlist, sc_op_td, e_op,
nsubsteps=2000, method="homodyne", solver="taylor1.5")
y_sse_btd = sol.expect[1]-sol.expect[0]*sol.expect[0].conj()
plt.plot(y_sse_btd)
ntraj = 1
def run_sse_td(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for jj in range(0,Nt):
for j in range(0,ntraj):
Nsub = Nsubs[jj]#int(Nsubmax/(2**jj))
sol = ssesolve([H,[c_op[0].dag()*c_op[0]/2,H_f]], fock(N), tlist, sc_op_td, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_sse_btd - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_b(**kw):
y = run_sse_td(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return y,tag,fit
stats_d2_td = []
stats_d2_td.append(get_stats_b(solver='euler-maruyama'))
stats_d2_td.append(get_stats_b(solver='platen'))
stats_d2_td.append(get_stats_b(solver='pred-corr'))
stats_d2_td.append(get_stats_b(solver='milstein'))
stats_d2_td.append(get_stats_b(solver='milstein-imp'))
stats_d2_td.append(get_stats_b(solver='pred-corr-2'))
stats_d2_td.append(get_stats_b(solver='explicit1.5'))
stats_d2_td.append(get_stats_b(solver="taylor1.5"))
stats_d2_td.append(get_stats_b(solver="taylor1.5-imp", tol=1e-9))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>1hdD"
for i,run in enumerate(stats_d2_td):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.03*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.03*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.03*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
ax.set_xlabel(r'$\Delta t$ $\left[\gamma^{-1}\right]$')
ax.set_ylabel('deviation')
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
|
development/development-ssesolver-new-methods.ipynb
|
qutip/qutip-notebooks
|
lgpl-3.0
|
Multiple sc_ops, time-dependent
|
def H_f(t,args):
return 0.125+t/12+t*t/36
def H_bf(t,args):
return 0.125+t/10+t*t/108
sc_op_td = [[sc_op[0]],[sc_op[0],H_bf],[sc_op[0],H_f]]
sol = ssesolve([H,[c_op[0].dag()*c_op[0]/2,H_f]], fock(N), tlist/3, sc_op_td, e_op,
nsubsteps=2000, method="homodyne", solver="taylor1.5")
y_sse_multi = sol.expect[1]-sol.expect[0]*sol.expect[0].conj()
plt.plot(y_sse_multi)
ntraj = 1
def run_sss_multi(**kwargs):
epsilon = np.zeros(Nt)
std = np.zeros(Nt)
print(kwargs)
for jj in range(0,Nt):
for j in range(0,ntraj):
Nsub = Nsubs[jj]#int(Nsubmax/(2**jj))
sol = ssesolve([H,[c_op[0].dag()*c_op[0]/2,H_f]], fock(N), tlist/3, sc_op_td, e_op, nsubsteps=Nsub, **kwargs)
epsilon_j = 1/T * np.sum(np.abs(y_sse_multi - (sol.expect[1]-sol.expect[0]*sol.expect[0].conj())))*ddt
epsilon[jj] += epsilon_j
std[jj] += epsilon_j
epsilon/= ntraj
std = np.sqrt(1/ntraj * (1/ntraj * std - epsilon**2))
return epsilon
def get_stats_multi(**kw):
y = run_sss_multi(**kw)
tag = str(kw["solver"])
x = np.log(stepsizes)
ly = np.log(y)
fit = np.polyfit(x, ly, 1)[0]
return (y,tag,fit)
stats_multi = []
stats_multi.append(get_stats_multi(solver='euler-maruyama'))
stats_multi.append(get_stats_multi(solver="platen"))
stats_multi.append(get_stats_multi(solver='pred-corr'))
stats_multi.append(get_stats_multi(solver='milstein'))
stats_multi.append(get_stats_multi(solver='milstein-imp'))
stats_multi.append(get_stats_multi(solver='pred-corr-2'))
stats_multi.append(get_stats_multi(solver='explicit1.5'))
stats_multi.append(get_stats_multi(solver="taylor1.5"))
stats_multi.append(get_stats_multi(solver="taylor1.5-imp", tol=1e-9))
fig = plt.figure()
ax = plt.subplot(111)
mark = "o*vspx+^<>Dd"
for i,run in enumerate(stats_multi):
ax.loglog(stepsizes, run[0], mark[i], label=run[1]+": " + str(run[2]))
ax.loglog(stepsizes, 0.05*np.array(stepsizes)**0.5, label="$\propto\Delta t^{1/2}$")
ax.loglog(stepsizes, 0.05*np.array(stepsizes)**1, label="$\propto\Delta t$")
ax.loglog(stepsizes, 0.05*np.array(stepsizes)**1.5, label="$\propto\Delta t^{3/2}$")
ax.set_xlabel(r'$\Delta t$ $\left[\gamma^{-1}\right]$')
ax.set_ylabel('deviation')
lgd=ax.legend(loc='center left', bbox_to_anchor=(1, 0.64), prop={'size':12})
|
development/development-ssesolver-new-methods.ipynb
|
qutip/qutip-notebooks
|
lgpl-3.0
|
In the above example, x and y point to the same memory location; they are two names bound to the same object, so a change made through one name is visible through the other, as long as neither name is rebound to a different object.
With that in mind, let's explore a new example
|
x = [10, 20, 30]
y = x
print(id(x), id(y))
x = 100
print(x, y)
print(id(x), id(y))
x = ['a','b','c','d']
lst = [x, x]
print(x, lst)
x[2] = 'dd'
print(x, lst)
|
Section 1 - Core Python/Chapter 16 - Standard library/Reference, Shallow and deep copy.ipynb
|
mayankjohri/LetsExplorePython
|
gpl-3.0
|
Copy with the Slice Operator
Copying using a slice can help us up to a certain level, as shown in the example below
|
x = ['a','b','c','d']
y = x[:2]
print(x, y)
# Good :)
print(id(x), id(y))
# but :(
print(id(x[1]), id(y[1]))
# thus
x[1] = "33"
print(x, y)
# Good :)
print(id(x), id(y))
# but :(
print(id(x[1]), id(y[1]))
|
Section 1 - Core Python/Chapter 16 - Standard library/Reference, Shallow and deep copy.ipynb
|
mayankjohri/LetsExplorePython
|
gpl-3.0
|
but it will fail when the list contains nested mutable objects, because slicing copies only the references to them, as shown in the examples below
|
x = [["a1", "b1", "c1"],'a','b','c','d']
y = x[:]
print(x, y)
# Good :)
print(id(x), id(y))
# but :(
print(id(x[0]), id(y[0]))
# thus
x[0][1] = "33"
print(x, y)
# Good :)
print(id(x), id(y))
# but :(
print(id(x[1]), id(y[1]))
|
Section 1 - Core Python/Chapter 16 - Standard library/Reference, Shallow and deep copy.ipynb
|
mayankjohri/LetsExplorePython
|
gpl-3.0
|
Now we know that in Python, assignment statements do not copy objects; they create bindings between a target and an object, so they cannot be used to create an independent copy of an existing object.
A few more examples for better understanding
|
magic_list = [1,2,3]
print (magic_list)
for i in magic_list:
i = i+1
print (i)
print (magic_list)
magic_list = ["1","2","3"]
print (magic_list)
for i in magic_list:
i = i+str(1)
print (i)
print (magic_list)
lst1 = ['a','b',['ab','ba']]
print(id(lst1))
lst2 = lst1
print(id(lst2))
print(lst1)
print(lst2)
lst1[2][1] = "new"
print(lst1)
print(lst2)
lst1[0] = "new1"
print(lst1)
print(lst2)
lst1 = "as"
print(lst1)
print(lst2)
print(id(lst1))
print(id(lst2))
x = 10
y = x
print(x)
print(y)
print(id(x))
print(id(y))
x = 11
print(id(x))
print(id(y))
|
Section 1 - Core Python/Chapter 16 - Standard library/Reference, Shallow and deep copy.ipynb
|
mayankjohri/LetsExplorePython
|
gpl-3.0
|
<img src="files/shallowcopy1.png" width=300>
<img src="files/shallowcopy2.png" width=300>
<img src="files/shallowcopy3.png" width=470>
For collections which are mutable or contain mutable items, a copy is sometimes needed so one can change one copy without changing the other.
Fortunately, Python's standard library provides a module called copy that offers generic shallow and deep copy operations.
It provides two functions, one for shallow copy and another for deep copy.
Shallow Copy - copy.copy(x)
A shallow copy duplicates as little as possible.
Shallow copy of a collection is a copy of the collection structure, but not the elements. After a shallow copy, both the original and the copied collection share the individual elements.
Let's examine the example below for details
|
import copy
class MyClass:
def __init__(self, name): # constructor
self.name = name
def __eq__(self, other): # __cmp__/cmp no longer exist in Python 3
return self.name == other.name
a = MyClass('a')
l = [ a ]
dup = copy.copy(l)
print ('l :', l)
print ('dup:', dup)
print ('dup is l:', (dup is l))
print ('dup == l:', (dup == l))
print ('dup[0] is l[0]:', (dup[0] is l[0]))
print ('dup[0] == l[0]:', (dup[0] == l[0]))
|
Section 1 - Core Python/Chapter 16 - Standard library/Reference, Shallow and deep copy.ipynb
|
mayankjohri/LetsExplorePython
|
gpl-3.0
|
Deep Copy - copy.deepcopy(x)
Return a deep copy of x.
|
dup = copy.deepcopy(l)
print ('l :', l)
print ('dup:', dup)
print ('dup is l:', (dup is l))
print ('dup == l:', (dup == l))
print ('dup[0] is l[0]:', (dup[0] is l[0]))
print ('dup[0] == l[0]:', (dup[0] == l[0]))
# shallow copy
import copy
lst1 = ['a','b',['ab','ba']]
lst2 = copy.copy(lst1)
# print(id(lst1))
# print(id(lst2))
# print(lst1)
# print(lst2)
# lst1[2][1] = "new"
# print(lst1)
# print(lst2)
# lst1[0] = "new1"
# print(lst1)
# print(lst2)
# lst1 = "as"
# print(lst1)
# print(lst2)
print(id(lst1))
print(id(lst2))
print(id(lst1[1]))
print(id(lst2[1]))
print(lst1[1])
# shallow copy
import copy
lst1 = ['a','b',['ab','ba']]
lst2 = copy.deepcopy(lst1)
print(id(lst1))
print(id(lst2))
print(lst1)
print(lst2)
lst1[2][1] = "new"
print(lst1)
print(lst2)
lst1[0] = "new1"
print(lst1)
print(lst2)
lst1 = "as"
print(lst1)
print(lst2)
# # print(id(lst3))
# print(id(lst4))
# print(id(lst3[1]))
# print(id(lst4[1]))
# print(lst3[1])
from copy import deepcopy
class person:
def __init__ (this, name, background, age):
this.name=name
this.background=background
this.age=age
def setage (this,age):
this.age = age
def __str__(this):
retst = this.name + "\nTrained as "
retst += this.background + "\nAged "
retst += str(this.age) + "\n"
return retst
def tlist(source,demo):
tl = demo + "\n"
for pers in source:
tl += str(pers)
return tl
team = []
team.append(person("Lisa","Graphic Designer",21))
team.append(person("Graham","Support Manager",51))
team.append(person("Charlie","Unknown",9))
# firstyear is a clone of all levels of team - a full copy.
# Changes to team will NOT affect firstyear
firstyear = deepcopy(team)
# secondyear is a copy of all the team member references but
# not of the individual data for each team member. Changes to
# team will NOT affect secondyear, but changes to attributes
# of members within the team will.
secondyear = team[:]
# thirdyear is an alternative name for team, so any changes
# to team will also be changes to thirdyear.
thirdyear = team
print (tlist(team,"Original team"))
team[2] = person("Charlotte","Cat's home entertainer",10)
team[1].setage(53)
print (tlist(team,"Team after changes"))
print (tlist(firstyear,"Deep copy - no changes from original"))
print (tlist(secondyear,"Shallow copy - some changes"))
print (tlist(thirdyear,"Normal copy (alias) - all changes shown"))
|
Section 1 - Core Python/Chapter 16 - Standard library/Reference, Shallow and deep copy.ipynb
|
mayankjohri/LetsExplorePython
|
gpl-3.0
|
Difference between shallow and deep copying
The difference between shallow and deep copying is only relevant for compound objects (objects that contain other objects, like lists or class instances):
A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original.
A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original.
Two problems often exist with deep copy operations that don't exist with shallow copy operations:
Recursive objects (compound objects that, directly or indirectly, contain a reference to themselves) may cause a recursive loop.
Because deep copy copies everything it may copy too much, such as data which is intended to be shared between copies.
The deepcopy() function avoids these problems by:
keeping a 'memo' dictionary of objects already copied during the current copying pass; and
letting user-defined classes override the copying operation or the set of components copied.
Controlling Copy Behavior
It is possible to control how copies are made using the copy and deepcopy hooks.
__copy__() is called without any arguments and should return a shallow copy of the object.
__deepcopy__() is called with a memo dictionary, and should return a deep copy of the object. Any member attributes that need to be deep-copied should be passed to copy.deepcopy(), along with the memo dictionary, to control for recursion (see below).
This example illustrates how the methods are called:
|
import copy
class MyClass:
def __init__(self, name):
self.name = name
def __eq__(self, other): # __cmp__/cmp no longer exist in Python 3
return self.name == other.name
def __copy__(self):
print ('__copy__()')
return MyClass(self.name)
def __deepcopy__(self, memo):
print ('__deepcopy__(%s)' % str(memo))
return MyClass(copy.deepcopy(self.name, memo))
a = MyClass('a')
sc = copy.copy(a)
dc = copy.deepcopy(a)
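The recursive-object problem mentioned above, and how deepcopy's memo dictionary avoids it, can be demonstrated with a self-referencing list (a minimal sketch):

```python
import copy

a = [1, 2]
a.append(a)            # a now contains a reference to itself

b = copy.deepcopy(a)   # the memo dict stops the recursion from looping forever
print(b is a)          # False: a genuinely new object
print(b[2] is b)       # True:  the self-reference is reproduced in the copy
```

A naive recursive copy without the memo dictionary would descend into `a[2]` forever; `deepcopy` records each object it has already copied and reuses that copy when the same object is met again.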
|
Section 1 - Core Python/Chapter 16 - Standard library/Reference, Shallow and deep copy.ipynb
|
mayankjohri/LetsExplorePython
|
gpl-3.0
|
Simple regression tree model
Here we define a simple regression tree and then load it into SHAP as a custom model.
|
X,y = shap.datasets.boston()
orig_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
orig_model.fit(X, y)
dot_data = sklearn.tree.export_graphviz(orig_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
# extract the arrays that define the tree
children_left = orig_model.tree_.children_left
children_right = orig_model.tree_.children_right
children_default = children_right.copy() # because sklearn does not use missing values
features = orig_model.tree_.feature
thresholds = orig_model.tree_.threshold
values = orig_model.tree_.value.reshape(orig_model.tree_.value.shape[0], 1)
node_sample_weight = orig_model.tree_.weighted_n_node_samples
print(" children_left", children_left) # note that negative children values mean this is a leaf node
print(" children_right", children_right)
print(" children_default", children_default)
print(" features", features)
print(" thresholds", thresholds.round(3))
print(" values", values.round(3))
print("node_sample_weight", node_sample_weight)
# define a custom tree model
tree_dict = {
"children_left": children_left,
"children_right": children_right,
"children_default": children_default,
"features": features,
"thresholds": thresholds,
"values": values,
"node_sample_weight": node_sample_weight
}
model = {
"trees": [tree_dict]
}
explainer = shap.TreeExplainer(model)
# Make sure that the ingested SHAP model (a TreeEnsemble object) makes the
# same predictions as the original model
assert np.abs(explainer.model.predict(X) - orig_model.predict(X)).max() < 1e-4
# make sure the SHAP values sum up to the model output (this is the local accuracy property)
assert np.abs(explainer.expected_value + explainer.shap_values(X).sum(1) - orig_model.predict(X)).max() < 1e-4
|
notebooks/tabular_examples/tree_based_models/Example of loading a custom tree model into SHAP.ipynb
|
slundberg/shap
|
mit
|
Simple GBM classification model (with 2 trees)
Here we define a simple regression tree and then load it into SHAP as a custom model.
|
X2,y2 = shap.datasets.adult()
orig_model2 = sklearn.ensemble.GradientBoostingClassifier(n_estimators=2)
orig_model2.fit(X2, y2)
|
notebooks/tabular_examples/tree_based_models/Example of loading a custom tree model into SHAP.ipynb
|
slundberg/shap
|
mit
|
Pull the info of the first tree
|
tree_tmp = orig_model2.estimators_[0][0].tree_
# extract the arrays that define the tree
children_left1 = tree_tmp.children_left
children_right1 = tree_tmp.children_right
children_default1 = children_right1.copy() # because sklearn does not use missing values
features1 = tree_tmp.feature
thresholds1 = tree_tmp.threshold
values1 = tree_tmp.value.reshape(tree_tmp.value.shape[0], 1)
node_sample_weight1 = tree_tmp.weighted_n_node_samples
print(" children_left1", children_left1) # note that negative children values mean this is a leaf node
print(" children_right1", children_right1)
print(" children_default1", children_default1)
print(" features1", features1)
print(" thresholds1", thresholds1.round(3))
print(" values1", values1.round(3))
print("node_sample_weight1", node_sample_weight1)
|
notebooks/tabular_examples/tree_based_models/Example of loading a custom tree model into SHAP.ipynb
|
slundberg/shap
|
mit
|
Pull the info of the second tree
|
tree_tmp = orig_model2.estimators_[1][0].tree_
# extract the arrays that define the tree
children_left2 = tree_tmp.children_left
children_right2 = tree_tmp.children_right
children_default2 = children_right2.copy() # because sklearn does not use missing values
features2 = tree_tmp.feature
thresholds2 = tree_tmp.threshold
values2 = tree_tmp.value.reshape(tree_tmp.value.shape[0], 1)
node_sample_weight2 = tree_tmp.weighted_n_node_samples
print(" children_left2", children_left2) # note that negative children values mean this is a leaf node
print(" children_right2", children_right2)
print(" children_default2", children_default2)
print(" features2", features2)
print(" thresholds2", thresholds2.round(3))
print(" values2", values2.round(3))
print("node_sample_weight2", node_sample_weight2)
|
notebooks/tabular_examples/tree_based_models/Example of loading a custom tree model into SHAP.ipynb
|
slundberg/shap
|
mit
|
Create a list of SHAP Trees
|
# define a custom tree model
tree_dicts = [
{
"children_left": children_left1,
"children_right": children_right1,
"children_default": children_default1,
"features": features1,
"thresholds": thresholds1,
"values": values1 * orig_model2.learning_rate,
"node_sample_weight": node_sample_weight1
},
{
"children_left": children_left2,
"children_right": children_right2,
"children_default": children_default2,
"features": features2,
"thresholds": thresholds2,
"values": values2 * orig_model2.learning_rate,
"node_sample_weight": node_sample_weight2
},
]
model2 = {
"trees": tree_dicts,
"base_offset": scipy.special.logit(orig_model2.init_.class_prior_[1]),
"tree_output": "log_odds",
"objective": "binary_crossentropy",
"input_dtype": np.float32, # this is what type the model uses the input feature data
"internal_dtype": np.float64 # this is what type the model uses for values and thresholds
}
|
notebooks/tabular_examples/tree_based_models/Example of loading a custom tree model into SHAP.ipynb
|
slundberg/shap
|
mit
|
Explain the custom model
|
# build a background dataset for us to use based on people near a 0.95 cutoff
vs = np.abs(orig_model2.predict_proba(X2)[:,1] - 0.95)
inds = np.argsort(vs)
inds = inds[:200]
# build an explainer that explains the probability output of the model
explainer2 = shap.TreeExplainer(model2, X2.iloc[inds,:], feature_dependence="independent", model_output="probability")
# Make sure that the ingested SHAP model (a TreeEnsemble object) makes the
# same predictions as the original model
assert np.abs(explainer2.model.predict(X2, output="probability") - orig_model2.predict_proba(X2)[:,1]).max() < 1e-4
# make sure the sum of the SHAP values equals the model output
shap_sum = explainer2.expected_value + explainer2.shap_values(X2.iloc[:,:]).sum(1)
assert np.abs(shap_sum - orig_model2.predict_proba(X2)[:,1]).max() < 1e-4
|
notebooks/tabular_examples/tree_based_models/Example of loading a custom tree model into SHAP.ipynb
|
slundberg/shap
|
mit
|
Think of data structures as tools. You'll use them to store and retrieve values, and to implement more complicated procedures, or algorithms. Data structures hold data.
As a preview of how we might get that data in the first place, let's skip ahead and start using the Python library.
Actually, let's start outside the Standard Library, using a 3rd party tool, pandas. Here's a link to a well-known book: Python for Data Analysis by Wes McKinney, who started pandas. You can also watch him on Youtube.
Python ships with a Standard Library if it's a full blown Python, consisting of "namespaces" you may take for granted are just one import statement away.
|
import math
hypotenuse = math.hypot(3, 4) # try your own numbers here!
hypotenuse
import binascii
in_hex = binascii.hexlify(b"i like eating python as much as i like coding in python")
in_hex # show ascii bytes in terms of the underlying hex codes
|
Welcome to Python.ipynb
|
4dsolutions/Python5
|
mit
|
However a lot of what makes Python so great are the modules and packages we might get from the Python repository (aka "Cheese shop"), more formally known as PyPI, the Python Package Index.
What you don't find in the Standard Library, you may, in most cases, add to your Python path using pip3 install.
If you've used Linux, then you might want to think of pip3 install as the apt-get of the Python universe.
However that's not the end of the story. Python distributions such as Anaconda provide their own way of updating and upgrading. You might also find yourself using Git.
Let's skip ahead to where you've already downloaded pandas, enhancing your Python ecosystem with this powerful free tool.
|
import pandas as pd
url = "https://raw.githubusercontent.com/dariusk/corpora/master/data/animals/dinosaurs.json"
df = pd.read_json(url)
df.head() # just show the first five lines of a much taller table
|
Welcome to Python.ipynb
|
4dsolutions/Python5
|
mit
|
What happened just there? Pandas is all about Dataframe objects, which McKinney is hoping will be the basis of a more generalized object that works across computer languages, such as R, Python, and those using the Java Virtual Machine (JVM).
We just created a Dataframe object by reading in data over the web, from a public stash of dinosaur names out there in the cloud, on Github. The 'description' column isn't really adding any value though. We know it's a list of dinosaurs, no need to say that over and over. A first step after harvesting raw data is usually cleaning and/or massaging it into the shape we need. Let's drop that "description" column...
|
df.drop('description', axis=1, inplace=True) # you won't be able to run this twice, why?
df.head()
|
Welcome to Python.ipynb
|
4dsolutions/Python5
|
mit
|
Now that our dataframe is this simple, might we convert it to a native list, the data structure we started out with, with the square brackets? Sure we might.
|
dinos = df["dinosaurs"].tolist() # yep, it's that easy
dinos[10:20] # this is called "slicing", getting items 10 to 19
len(dinos) # how many dinosaurs are we talking about actually?
|
Welcome to Python.ipynb
|
4dsolutions/Python5
|
mit
|
Wow.
Yes, you could start using this list to harvest pictures, for example, maybe starting with some Dinosaur Database.
|
dinos.sort() # I notice these are not alphabetized. We might sort in place.
dinos[-10:] # now let's look from the 10th item from the end, to the end
|
Welcome to Python.ipynb
|
4dsolutions/Python5
|
mit
|
Slice notation is important because it's used with numpy arrays as well, in addition to pandas DataFrames and ordinary Python lists. Numpy arrays are like Python lists on steroids, meaning they have enhanced capabilities and multiple dimensions.
Let's go back to an ordinary list and test this feature more.
|
zoo = ['monkey', 'zebra', 'tiger'] # illustrative list; `zoo` was defined earlier in the notebook
zoo[1:] # all but 0th element (addressing begins with 0)
zoo[-1] # last item in the list
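As a taste of the "multiple dimensions" point above, the very same slice syntax extends to rows and columns of a numpy array (a minimal sketch):

```python
import numpy as np

m = np.arange(12).reshape(3, 4)  # a 3x4 matrix holding 0..11
print(m[1:, :2])  # all rows but the 0th, first two columns
print(m[-1])      # last row, just like zoo[-1] gave the last list item
```

The comma separates a slice per axis, so `m[1:, :2]` reads as "rows 1 to the end, columns 0 and 1".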
|
Welcome to Python.ipynb
|
4dsolutions/Python5
|
mit
|
Let's end this section with a quick look at a numpy array. Python lists may be "heterogeneous", meaning their elements may be of many different types. Numpy needs its arrays to have all elements of the same type, whatever type that may be (floats, ints, and complex numbers are all typical).
|
import numpy as np # notice how we rename the module as we import it
test_data = np.random.randint(1, 100, size=(5, 5)) # all integers, in a 5x5 matrix
test_data
test_data ** 2 # we can raise all these numbers to a 2nd power in one line!
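The homogeneity requirement can be seen directly in the dtype numpy picks: mixing one int into a float list silently upcasts everything (a small illustrative check):

```python
import numpy as np

ints = np.array([1, 2, 3])
mixed = np.array([1, 2.5, 3])   # the int 1 is upcast to 1.0
print(ints.dtype)               # an integer dtype, e.g. int64
print(mixed.dtype)              # float64: every element shares one type
```

This single shared type is what lets numpy store elements compactly and apply operations like `** 2` to the whole array at once.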
|
Welcome to Python.ipynb
|
4dsolutions/Python5
|
mit
|
Making an interpolation from TRIGDAT
Getting the data
We can use GetGBMData to download the data
|
data = GetGBMData("080916009")
data.set_destination("") # You can enter a folder here. If you want the CWD, you do not have to set it
data.get_trigdat()
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
By default, GetGBMData will grab data from all detectors. However, one can select a subset and retrieve different data types
|
data.select_detectors('n1','n2','b0')
data.get_rsp_cspec()
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Interpolating
First let's create an interpolating object for a given TRIGDAT file (POSHIST files are also readable)
|
interp = PositionInterpolator(trigdat="glg_trigdat_all_bn080916009_v02.fit")
# In trigger times
print("Quaternions")
print(interp.quaternion(0))
print(interp.quaternion(10))
print()
print("SC XYZ")
print(interp.sc_pos(0))
print(interp.sc_pos(10))
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Single detector
One can look at a single detector, which knows about its orientation in the Fermi SC coordinates
|
na = NaIA(interp.quaternion(0))
print(na.get_center())
print(na.get_center().icrs) # J2000
print(na.get_center().galactic) # Galactic
print()
print("Changing in time")
na.set_quaternion(interp.quaternion(100))
print(na.get_center())
print(na.get_center().icrs) # J2000
print(na.get_center().galactic) # Galactic
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
We can also go back into the GBMFrame
|
center_j2000 = na.get_center().icrs
center_j2000
center_j2000.transform_to(GBMFrame(quaternion=interp.quaternion(100.)))
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Earth Centered Coordinates
The sc_pos values are Earth-centered coordinates (in km for trigdat and in m for poshist) and can also be passed. It is a good idea to specify the units!
|
na = NaIA(interp.quaternion(0),interp.sc_pos(0)*u.km)
na.get_center()
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Working with the GBM class
Ideally, we want to know about many detectors. The GBM class performs operations on all detectors for ease of use. It also has plotting capabilities
|
myGBM = GBM(interp.quaternion(0),sc_pos=interp.sc_pos(0)*u.km)
myGBM.get_centers()
[x.icrs for x in myGBM.get_centers()]
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Plotting
We can look at the NaI view on the sky for a given FOV
|
myGBM.detector_plot(radius=60)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=40)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=0,lat_0=40,fignum=2)
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
In Fermi GBM coordinates
We can also plot in Fermi GBM spacecraft coordinates
|
myGBM.detector_plot(radius=60,fermi_frame=True)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=20,fermi_frame=True)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=200,fermi_frame=True,fignum=3)
myGBM.detector_plot(radius=10,projection='ortho',lon_0=0,lat_0=40,fignum=2,fermi_frame=True)
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Capturing points on the sky
We can even see which detectors' FOVs contain a point on the sky. We first create a mock GRB SkyCoord.
|
grb = SkyCoord(ra=130.,dec=-45 ,frame='icrs', unit='deg')
myGBM.detector_plot(radius=60,
projection='moll',
good=True, # only plot NaIs that see the GRB
point=grb,
lon_0=110,lat_0=-0)
myGBM.detector_plot(radius=60,
projection='ortho',
good=True, # only plot NaIs that see the GRB
point=grb,
lon_0=180,lat_0=-40,fignum=2)
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Looking at Earth occulted points on the sky
We can plot the points occulted by the Earth (assuming points within 68.5 degrees of the Earth anti-zenith are hidden)
|
myGBM.detector_plot(radius=10,show_earth=True,lon_0=90)
myGBM.detector_plot(radius=10,lon_0=100,show_earth=True,projection='ortho')
myGBM.detector_plot(radius=10,show_earth=True,lon_0=120,lat_0=-30,fermi_frame=True,projection='ortho')
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Source/Detector Separation
We can even look at the separation angles for the detectors and the source
|
seps = myGBM.get_separation(grb)
seps.sort("Separation")
seps
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Examining Legal Detector Pairs
To see which detectors are valid, we can look at the legal pairs map
|
get_legal_pairs()
|
examples/demo.ipynb
|
drJfunk/gbmgeometry
|
mit
|
Introduction
Linear regression, like any regression method, consists of two parts:
* a regression model = a structure with parameters
* a regression method to estimate the parameters that minimize a residual between the points and the model
When least squares methods are used, the residual is the sum of the squares of the distances between each data point and the corresponding model value.
The linear regression model
Simple linear regression
Let us consider a problem with only one independent (explanatory) variable $x$ and one dependent variable $y$.
With $n \in \mathbb{N}^*$, let $\{(x_i,y_i),\ i=1,\dots,n\}$ be a set of points.
|
import numpy as np
import matplotlib.pyplot as plt

n = 100
x = np.random.normal(1, 0.5, n)
noise = np.random.normal(0, 0.25, n)
y = 0.75*x + 1 + noise
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.scatter(x, y)
ax.set_xlim([0,2])
ax.set_ylim([0,3.1])
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
Simple linear regression considers the model function
\begin{equation}
y = \alpha + \beta x
\end{equation}
We can write the relation between $y_i$ and $x_i$ as:
\begin{equation}
\forall i, \quad y_i = \alpha + \beta x_i + \varepsilon_i
\end{equation}
where the $\varepsilon_i$, called residuals, are what's missing between the model and the data.
(For the sake of demonstration, the points above were generated from 100 normally distributed $x$ values, and the $y$'s were computed as $y = \frac{3}{4}x+1+\varepsilon$, where $\varepsilon \sim \mathcal{N}(0,0.25^2)$.)
Now comes the fun part: we are looking for the $(\alpha,\beta)$ pair that provides the best fit between model and data. Best, but in which sense?
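Before answering, it helps to make the objective concrete. The sketch below (using hypothetical synthetic data generated as in the cell above) evaluates the sum of squared residuals for two candidate lines; the generating line should score lower:

```python
import numpy as np

# Illustrative sketch: evaluate the sum of squared residuals Q(alpha, beta)
# for candidate lines against synthetic data like that generated above.
rng = np.random.default_rng(0)
x = rng.normal(1, 0.5, 100)
y = 0.75 * x + 1 + rng.normal(0, 0.25, 100)

def Q(alpha, beta):
    """Sum of squared residuals for the line y = alpha + beta * x."""
    return np.sum((y - alpha - beta * x) ** 2)

# The line used to generate the data fits better than an arbitrary one.
print(Q(1.0, 0.75) < Q(0.0, 2.0))  # True for this synthetic sample
```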
General form of linear regression
In the general case, we can write for each dependent variable $y_i, \, i = 1, \dots, n$ a set of $p$-vectors $\mathbf{x}_i$ called regressors.
The regression model above then takes the form
\begin{equation}
y_i = \beta_0 1 + \beta_1 x_{i1} + \dots + \beta_p x_{ip} + \varepsilon_i,\quad i = 1, \dots, n
\end{equation}
which can be more concisely written as:
\begin{equation}
y_i = \mathbf{x}_i^T \mathbf{\beta} + \varepsilon_i,\quad i = 1, \dots, n
\end{equation}
and in vector form:
\begin{equation}
\mathbf{y} = X \mathbf{\beta} + \mathbf{\varepsilon}
\end{equation}
You may have noticed that there is no more $\alpha$ in this form: it has been replaced by $\beta_0$, which is the multiplication factor for the constant value $1$. This makes $X$ an $n \times (p+1)$ matrix and $\mathbf{\beta}$ a $(p+1)$ vector. This is more understandable with an explicit display of $X$, $\mathbf{y}$, $\mathbf{\beta}$ and $\mathbf{\epsilon}$:
$$\mathbf{y} = \left(
\begin{array}{c}
y_1\\
y_2\\
\vdots\\
y_n
\end{array}\right),
\quad
X = \left(
\begin{array}{cccc}
1&x_{11}&\cdots&x_{1p}\\
1&x_{21}&\cdots&x_{2p}\\
\vdots&\vdots&\ddots&\vdots\\
1&x_{n1}&\cdots&x_{np}
\end{array}\right),
\quad
\mathbf{\beta} = \left(
\begin{array}{c}
\beta_0\\
\beta_1\\
\beta_2\\
\vdots\\
\beta_p
\end{array}\right),
\quad
\mathbf{\epsilon} = \left(
\begin{array}{c}
\varepsilon_1\\
\varepsilon_2\\
\vdots\\
\varepsilon_n
\end{array}\right)
$$
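The bookkeeping above can be sketched in NumPy: to absorb the intercept into $\beta_0$, prepend a column of ones to the regressors (the variable names here are illustrative, not from the notebook):

```python
import numpy as np

# Sketch: assemble the n x (p+1) design matrix X by prepending a column
# of ones for the intercept term beta_0.
n, p = 5, 2
rng = np.random.default_rng(0)
regressors = rng.normal(size=(n, p))           # the x_{ij}
X = np.column_stack([np.ones(n), regressors])  # first column is the constant 1
print(X.shape)  # (5, 3), i.e. n x (p + 1)
```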
Fitting the model
The best-known family of regression methods relies on this definition of the 'best fit': the best fit is the one that minimizes the sum of the squares of the residuals $\varepsilon_i$.
In the case of the simple linear regression, that is:
\begin{equation}
\textrm{Find} \min_{\alpha,\beta} Q(\alpha,\beta)\quad Q(\alpha,\beta) = \sum_{i=1}^n \varepsilon_i^2 = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2
\end{equation}
Ordinary / Linear least squares (OLS)
In the general case we are looking for $\hat{\beta}$ that minimizes:
\begin{equation}
S(b) = (y - Xb)^T(y-Xb)
\end{equation}
We are then looking for $\hat{\beta}$ such that
\begin{equation}
0 = \frac{dS}{db}(\hat{\beta}) = \frac{d}{db}\left.\left(y^T y - b^TX^Ty-y^TXb + b^TX^TXb\right)\right|_{b=\hat{\beta}}
\end{equation}
By matrix calculus:
$$
\begin{array}{rcl}
\frac{d}{db}(-b^TX^Ty) &=& -\frac{d}{db}(b^TX^Ty) = -X^Ty\\
\frac{d}{db}(-y^TXb) &=& -\frac{d}{db}(y^TXb) = -(y^TX)^T = -X^Ty\\
\frac{d}{db}(b^TX^TXb) &=& 2X^TXb
\end{array}
$$
So that we get
$$-2X^Ty + 2X^TX\hat{\beta} = 0$$
<div style="background-color:rgba(255, 0, 0, 0.1); vertical-align: middle; padding:10px 10px;">
<b>Assumption: $X$ has full column rank</b><br>
That is, features should not be linearly dependent.
Under this assumption $X^TX$ is invertible, and we can write the least squares estimator for $\beta$:
\begin{equation}
\hat{\beta} = (X^TX)^{-1}X^Ty
\end{equation}
</div>
For the simple linear regression, we get:
$$\hat{\beta} = \frac{\textrm{Cov}(x,y)}{\textrm{Var}(x)},\quad \hat{\alpha} = \bar{y} - \hat{\beta} \bar{x}$$
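The closed-form estimator can be checked numerically. The sketch below (synthetic data, assumed setup) computes $\hat{\beta} = (X^TX)^{-1}X^Ty$ directly and compares it with NumPy's least-squares solver:

```python
import numpy as np

# Sketch: closed-form OLS via the normal equations, checked against
# numpy's lstsq on synthetic data with known coefficients.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(1, 0.5, n)])
beta_true = np.array([1.0, 0.75])
y = X @ beta_true + rng.normal(0, 0.25, n)

beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y       # normal equations
beta_ref, *_ = np.linalg.lstsq(X, y, rcond=None)  # library solution
print(np.allclose(beta_hat, beta_ref))  # True
```

(In practice `lstsq` or a QR/SVD-based solver is preferred over explicitly inverting $X^TX$, for numerical stability.)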
|
def fit(x, y, with_constant=True):
    # Use ddof=1 throughout so the covariance and variance estimators match.
    beta = np.cov(x, y)[0][1] / np.var(x, ddof=1)
    if with_constant:
        alpha = np.mean(y) - beta * np.mean(x)
    else:
        alpha = 0
    r = np.cov(x, y)[0][1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
    mse = np.sum((y - alpha - beta * x) ** 2) / len(x)
    return [beta, alpha, r, mse]
beta,alpha,r,mse = fit(x,y)
print('alpha: {:.2f}, beta: {:.2f}'.format(alpha,beta))
print('r squared: {:.2f}'.format(r*r))
print('MSE: {:.2f}'.format(mse))
def fit_plot(x, y, noise, beta, alpha):
    fig, ax = plt.subplots(1, 3, figsize=(18, 4))
    ax[0].scatter(x, y)
    x_ = np.linspace(0, 2, 5)
    ax[0].plot(x_, alpha + beta * x_, color='orange', linewidth=2)
    ax[0].set_xlim([0, 2])
    ax[0].set_ylim([0, 3.1])
    mse = np.sum((y - alpha - beta * x) ** 2) / len(x)
    ax[0].set_title('MSE: {:.2f}'.format(mse), fontsize=14)
    ax[1].hist(noise, alpha=0.5, bins=np.arange(-0.7, 0.7, 0.1));
    ax[1].hist(y - alpha - beta * x, alpha=0.5, bins=np.arange(-0.7, 0.7, 0.1))
    ax[1].legend(['original noise', 'residual']);
    stats.probplot(y - alpha - beta * x, dist="norm", plot=ax[2]);

fit_plot(x, y, noise, beta, alpha)
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
Expected value
$$
\begin{array}{rcl}
\mathbf{E}[\hat{\beta}] &=& \mathbf{E}\left[(X^TX)^{-1}X^T(X\beta + \varepsilon)\right]\\
&=& \beta + \mathbf{E}\left[(X^TX)^{-1}X^T \varepsilon\right]\\
&=& \beta + \mathbf{E}\left[\mathbf{E}\left[(X^TX)^{-1}X^T \varepsilon|X\right]\right] \quad \textit{(law of total expectation)}\\
&=& \beta + \mathbf{E}\left[(X^TX)^{-1}X^T \mathbf{E}\left[\varepsilon|X\right]\right]
\end{array}
$$
<div style="background-color:rgba(255, 0, 0, 0.1); vertical-align: middle; padding:10px 10px;">
<b>Assumption: strict exogeneity: $\mathbf{E}\left[\varepsilon|X\right] = 0$</b><br>
Under this assumption, $\mathbf{E}[\hat{\beta}] = \beta$, the ordinary least squares estimator is unbiased.
</div>
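Unbiasedness can be illustrated empirically. The following sketch (a Monte Carlo check under the stated assumptions, with a fixed design and exogenous noise) averages $\hat{\beta}$ over many synthetic datasets:

```python
import numpy as np

# Sketch: average beta_hat over many synthetic datasets drawn with
# strictly exogenous noise (E[eps|X] = 0); the mean approaches beta.
rng = np.random.default_rng(0)
n, reps = 50, 2000
X = np.column_stack([np.ones(n), rng.normal(1, 0.5, n)])  # fixed design
beta = np.array([1.0, 0.75])
P = np.linalg.inv(X.T @ X) @ X.T                          # OLS projection
estimates = np.array([P @ (X @ beta + rng.normal(0, 0.25, n))
                      for _ in range(reps)])
print(np.round(estimates.mean(axis=0), 2))  # close to [1.  0.75]
```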
Variance
$$
\begin{array}{rcl}
\textrm{Var}(\hat{\beta}) &=& \mathbf{E}\left[\left(\hat{\beta} - \mathbf{E}[\hat{\beta}]\right)\left(\hat{\beta} - \mathbf{E}[\hat{\beta}]\right)^T\right]\\
&=& \mathbf{E}\left[(\hat{\beta} - \beta)(\hat{\beta} - \beta)^T\right]\\
&=& \mathbf{E}\left[((X^TX)^{-1}X^T\varepsilon)((X^TX)^{-1}X^T\varepsilon)^T\right]\\
&=& \mathbf{E}\left[(X^TX)^{-1}X^T\varepsilon\varepsilon^TX(X^TX)^{-1}\right]\\
&=& (X^TX)^{-1}X^T\mathbf{E}\left[\varepsilon\varepsilon^T\right]X(X^TX)^{-1}
\end{array}
$$
<div style="background-color:rgba(255, 0, 0, 0.1); vertical-align: middle; padding:10px 10px;">
<b>Assumption: homoskedasticity: $\mathbf{E}\left[\varepsilon_i^2|X\right] = \sigma^2$</b><br>
</div>
Then we have:
$$
\begin{array}{rcl}
\textrm{Var}(\hat{\beta}) &=& \sigma^2 (X^TX)^{-1}X^TX(X^TX)^{-1}\\
&=& \sigma^2 (X^TX)^{-1}
\end{array}
$$
For simple linear regression, this gives ${\displaystyle\textrm{Var}(\hat{\beta}) = \frac{\sigma^2}{\sum_{i=1}^n(x_i - \overline{x})^2}}$
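The variance formula can also be checked by simulation. This sketch (synthetic, homoskedastic noise on a fixed design) compares the empirical covariance of $\hat{\beta}$ across repeated draws with $\sigma^2(X^TX)^{-1}$:

```python
import numpy as np

# Sketch: empirical covariance of beta_hat over repeated draws vs the
# theoretical sigma^2 (X^T X)^{-1} under homoskedastic noise.
rng = np.random.default_rng(0)
n, reps, sigma = 50, 5000, 0.25
X = np.column_stack([np.ones(n), rng.normal(1, 0.5, n)])
beta = np.array([1.0, 0.75])
P = np.linalg.inv(X.T @ X) @ X.T
estimates = np.array([P @ (X @ beta + rng.normal(0, sigma, n))
                      for _ in range(reps)])
theory = sigma ** 2 * np.linalg.inv(X.T @ X)
# Agreement up to Monte Carlo sampling error:
print(np.max(np.abs(np.cov(estimates.T) - theory)) < 1e-3)  # True
```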
Confidence intervals for coefficients
Here we will go through the case of the simple linear regression. There are two situations: either we can make the assumption that the residuals are normally distributed, or that the number of points in the dataset is "large enough", so that the law of large numbers and the central limit theorem become applicable.
Under normality assumption
Under a normality assumption for $\varepsilon$, $\hat{\beta}$ is also normally distributed.
Then,
$$Z = \frac{\hat{\beta}-\beta}{\sqrt{\frac{\sigma^2}{\sum_{i=1}^n (x_i - \overline{x})^2}}} \sim \mathcal{N}(0,1)$$
Let $S = \frac{1}{\sigma^2} \sum_{i=1}^n \hat{\varepsilon_i}^2$. $S$ has a chi-squared distribution with $n-2$ degrees of freedom.
Then $T = \frac{Z}{\sqrt{\frac{S}{n-2}}}$ has a Student's t-distribution with $n-2$ degrees of freedom.
$T$ will be easier to use if we write it as:
$$T = \frac{\hat{\beta} - \beta}{s_{\hat{\beta}}}$$
where
$$s_{\hat{\beta}} = \sqrt{\frac{\frac{1}{n-2} \sum_{i=1}^n \hat{\varepsilon_i}^2}{\sum_{i=1}^n (x_i - \overline{x})^2}}$$
With this setting, the confidence interval for $\beta$ with confidence level $\alpha$ is
$$\left[\hat{\beta} - t_{1-\alpha/2}^{n-2} s_{\hat{\beta}}, \hat{\beta} + t_{1-\alpha/2}^{n-2} s_{\hat{\beta}}\right]$$
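This interval can be computed directly. The sketch below (synthetic data; `scipy.stats`, already used for the Q-Q plot above, supplies the t quantile) builds the 95% confidence interval for the slope:

```python
import numpy as np
from scipy import stats

# Sketch (simple linear regression): a two-sided 95% t confidence
# interval for the slope, using s_beta as defined above.
rng = np.random.default_rng(0)
n = 100
x = rng.normal(1, 0.5, n)
y = 0.75 * x + 1 + rng.normal(0, 0.25, n)

Sxx = np.sum((x - x.mean()) ** 2)
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - alpha_hat - beta_hat * x
s_beta = np.sqrt(np.sum(resid ** 2) / (n - 2) / Sxx)

t_crit = stats.t.ppf(0.975, df=n - 2)   # 97.5% quantile, n-2 dof
ci = (beta_hat - t_crit * s_beta, beta_hat + t_crit * s_beta)
print(ci)  # should contain the true slope 0.75 for most samples
```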
<span style='color: red;'>TODO: confidence interval for $\alpha$</span>
Under asymptotic assumption
<span style='color: red;'>TODO</span>
Using regression as a machine learning predictor
|
n = 200
x = np.random.normal(5, 2, n)
noise = np.random.normal(0, 0.25, n)
y = 0.75*x + 1 + noise
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.scatter(x, y)
ax.set_xlim([0,10])
ax.set_ylim([0,10])
x_train = x[-40:]
x_test = x[:-40]
y_train = y[-40:]
y_test = y[:-40]
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
We can fit a linear regression model on the training data:
|
beta, alpha, r, mse = fit(x_train, y_train)
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
And use it to predict $y$ on test data:
|
y_pred = alpha + beta*x_test
def pred_plot(y_pred, y_test):
    fig, ax = plt.subplots(1, 2, figsize=(12, 4))
    ax[0].scatter(y_test, y_pred)
    ax[0].plot([0, 10], [0, 10], color='g')
    ax[0].set_xlim([0, 10])
    mse_pred = np.sum((y_test - y_pred) ** 2) / len(y_test)
    ax[0].set_title('MSE: {:.2f}'.format(mse_pred), fontsize=14)
    ax[1].hist(y_test - y_pred);

pred_plot(y_pred, y_test)
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
What could possibly go wrong? Assumptions for OLS
$X$ has full column rank
If we break this assumption, we can't even compute the OLS estimator.
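A minimal sketch of what goes wrong (illustrative, not from the notebook): duplicate a column up to scaling, and the Gram matrix $X^TX$ becomes singular, so the normal equations have no unique solution.

```python
import numpy as np

# Sketch: a linearly dependent column makes X rank-deficient, so the
# Gram matrix X^T X is singular and its inverse does not exist.
rng = np.random.default_rng(0)
x1 = rng.normal(size=20)
X = np.column_stack([np.ones(20), x1, 2 * x1])  # third column = 2 * second
print(np.linalg.matrix_rank(X))        # 2, not 3
print(np.linalg.matrix_rank(X.T @ X))  # the Gram matrix is singular too
```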
strict exogeneity: $\mathbf{E}\left[\varepsilon|X\right] = 0$
If we break this one, then the OLS estimator gets biased.
|
n = 100
x = np.random.normal(1, 0.5, n)
noise = np.random.normal(0.25, 0.25, n)
y = 0.75*x + noise
beta,alpha,r,mse = fit(x,y, with_constant=False)
fit_plot(x,y,noise,beta,alpha)
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
What would it mean for a prediction model?
|
n = 200
x = np.random.normal(5, 2, n)
noise = np.random.normal(1, 0.25, n)
y = 0.75*x + noise
x_train = x[-40:]
x_test = x[:-40]
y_train = y[-40:]
y_test = y[:-40]
beta, alpha, r, mse = fit(x_train, y_train, with_constant=False)
y_pred = beta*x_test
pred_plot(y_pred,y_test)
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
But if we have a constant in our model, it is unbiased:
|
n = 100
x = np.random.normal(1, 0.5, n)
noise = np.random.normal(0.25, 0.25, n)
y = 0.75*x + noise
beta,alpha,r,mse = fit(x,y, with_constant=True)
fit_plot(x,y,noise,beta,alpha)
n = 200
x = np.random.normal(5, 2, n)
noise = np.random.normal(1, 0.25, n)
y = 0.75*x + noise
x_train = x[-40:]
x_test = x[:-40]
y_train = y[-40:]
y_test = y[:-40]
beta, alpha, r, mse = fit(x_train, y_train, with_constant=True)
y_pred = alpha+beta*x_test
pred_plot(y_pred, y_test)
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
<span style='color: red;'>What happened?</span>
Homoscedasticity: $\mathbf{E}\left[\varepsilon_i^2|X\right] = \sigma^2$
What happens if we break this assumption?
|
n = 100
x = np.linspace(0, 2, n)
sigma2 = 0.1*x**2
noise = np.random.normal(0, np.sqrt(sigma2), n)
y = 0.75*x + 1 + noise
beta,alpha,r,mse = fit(x,y)
fit_plot(x,y,noise,beta,alpha)
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
When this assumption is violated, we will turn to a weighted least squares estimator.
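A minimal sketch of that estimator (assuming, for illustration, that the per-point variances are known): $\hat{\beta}_{WLS} = (X^TWX)^{-1}X^TWy$ with $W = \operatorname{diag}(1/\sigma_i^2)$.

```python
import numpy as np

# Sketch of weighted least squares with known heteroskedastic variances.
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.1, 2, n)
sigma2 = 0.1 * x ** 2                        # known per-point variances
y = 0.75 * x + 1 + rng.normal(0, np.sqrt(sigma2))
X = np.column_stack([np.ones(n), x])
W = np.diag(1.0 / sigma2)                    # down-weight noisy points
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(np.round(beta_wls, 2))  # close to the true [1, 0.75]
```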
No autocorrelation: $\mathbf{E}\left[\varepsilon_i \varepsilon_j|X\right] = 0 \textrm{ if } i \neq j$
What happens if we break this assumption?
|
n = 100
x = np.linspace(0, 2, n)
# AR(1) noise: each residual is correlated with the previous one.
rho = 0.9
innovations = np.random.normal(0, 0.25, n)
noise = np.zeros(n)
for i in range(n):
    noise[i] = (rho * noise[i - 1] if i else 0) + innovations[i]
y = 0.75*x + 1 + noise
beta,alpha,r,mse = fit(x,y)
fit_plot(x,y,noise,beta,alpha)
|
machinelearning-theoryinpractice/01_Regression/01_LinearRegression.ipynb
|
gansanay/datascience-theoryinpractice
|
mit
|
Noiseless mixture of 2 Gaussians in 1D
This is the original problem considered by Pearson, and we will solve it as an optimization problem.
K. Pearson. Contributions to the mathematical theory of evolution. Philosophical Transactions of the Royal Society of London. A, 185:71–110, 1894.
We have two Gaussians with means $\xi_1, \xi_2$ and variances $c_1, c_2$, mixing coefficients $\pi_1, \pi_2$. So that $x \sim \pi_1 p(x; \xi_1, c_1) + \pi_2 p(x; \xi_2, c_2)$ where $p$ is the density function of the normal distribution.
More details can be found in our paper: Estimating mixture models via mixtures of polynomials.
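As a sanity check on the moment expressions (a sketch, using the same true parameters as the cell below), the mixture's $k$-th moment is the mixture of component moments, e.g. $\mathbf{E}[x^2] = \sum_k \pi_k(\xi_k^2 + c_k)$, which Monte Carlo sampling confirms:

```python
import numpy as np

# Sketch: verify E[x^2] = sum_k pi_k (xi_k^2 + c_k) by Monte Carlo.
xi0, c0, pi0 = [1, -0.9], [0.4, 0.6], [0.4, 0.6]
rng = np.random.default_rng(0)
n = 200_000
z = rng.random(n) < pi0[0]                       # component assignments
samples = np.where(z, rng.normal(xi0[0], np.sqrt(c0[0]), n),
                      rng.normal(xi0[1], np.sqrt(c0[1]), n))
m2_theory = sum(p * (m ** 2 + c) for p, m, c in zip(pi0, xi0, c0))
print(round(m2_theory, 3))                       # 1.406
print(abs(np.mean(samples ** 2) - m2_theory) < 0.02)  # True
```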
|
xi,c = sp.symbols('xi,c')
K = 2 # number of clusters
xi0 = [1, -0.9] # true parameters
c0 = [0.4, 0.6]
pi0 = [0.4, 0.6]
moment_exprs = [xi, xi**2 + c, xi**3 + 3*xi*c, xi**4 + 6*xi**2 * c + 3*c**2,\
xi**5 + 10*xi**3*c + 15*xi*c**2,\
xi**6 + 15*xi**4*c**1 + 45*xi**2*c**2 + 15*c**3 ,\
xi**7 + 21*xi**5*c**1 + 105*xi**3*c**2 + 105*xi*c**3]
moment_exprs = moment_exprs[0:6]
#print 'Gaussian moments are '
display(moment_exprs)
# construct the true constraints
hs = []
for expr in moment_exprs:
    val = 0
    for k in range(K):
        val += pi0[k] * expr.subs({xi: xi0[k], c: c0[k]})
    hs += [expr - val]
hs_true = hs
# we will minimize some kind of a trace..
f = 1 + xi**2 + c + c**2 + xi**4 + c*xi**2
gs = [c>=0]
print_problem(f, gs, hs)
sol = mp.solvers.solve_GMP(f, gs, hs, rounds = 2, slack=1e-3)
display(mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 2, tol = 1e-5, maxdeg=2))
print('the truth: ' + str({c: c0, xi: xi0}))
sol['MM'].numeric_instance(sol['x'],1)
|
extra_examples.ipynb
|
sidaw/mompy
|
mit
|
Noisy mixture of Gaussian
Now we draw a bunch of samples, and repeat the above experiment.
|
# draw some samples
numsample = int(1e5)
np.random.seed(1)
z = (np.random.rand(numsample) < pi0[0]).astype('int8')
means = xi0[0]*z + xi0[1]*(1-z)
stds = np.sqrt(c0[0]*z + c0[1]*(1-z))
Xs = means + stds * np.random.randn(numsample)
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(Xs, 50);
# construct the empirical constraints
hs = []
for d, expr in enumerate(moment_exprs):
    val = np.mean(np.power(Xs, d + 1))
    hs += [expr - val]
# we will minimize some kind of a trace..
f = 1 + xi**2 + c + c**2 + xi**4 + c*xi**2
gs = [c>=0.1]
print_problem(f, gs, hs)
sol = mp.solvers.solve_GMP(f, gs, hs, rounds = 4, slack = 1e-5)
display(mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 2, tol = 1e-5, maxdeg=2))
print('the truth: ' + str({c: c0, xi: xi0}))
|
extra_examples.ipynb
|
sidaw/mompy
|
mit
|
PSD max-cut
This is the problem $\text{minimize } -x^T W x$ subject to the constraints $x_i \in \{-1,1\}$ for a positive definite random matrix $W$.
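For instances this small, the same problem can be solved by brute force over all $2^n$ sign vectors, which is a handy check on what a relaxation returns (a sketch with its own random matrix, not the one in the cell below):

```python
import numpy as np
from itertools import product

# Sketch: brute-force the quadratic objective x^T W x over x in {-1,1}^n.
size = 5
rng = np.random.default_rng(1)
Wh = rng.standard_normal((size, size))
W = -Wh @ Wh.T                       # negative of a PSD matrix, as below
vals = [np.array(s) @ W @ np.array(s)
        for s in product([-1, 1], repeat=size)]
print(len(vals))      # 32 = 2^5 candidate sign vectors
print(min(vals) < 0)  # True: W is negative definite, so x^T W x < 0
```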
|
size = 5
np.random.seed(1)
xs = sp.symbols('x1:'+str(size+1))
Wh = np.random.randn(size,size)
W = -Wh*Wh.T;
gs = [x**2 >=1 for x in xs] + [x**2 <=1 for x in xs]
fs = [ w * xs[ij[0]] * xs[ij[1]] for ij,w in np.ndenumerate(W) ]
f = sum(fs)
print_problem(f, gs)
sol = mp.solvers.solve_GMP(f, gs, rounds = 3)
mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 2, maxdeg = 2)
|
extra_examples.ipynb
|
sidaw/mompy
|
mit
|
<table class="ee-notebook-buttons" align="left"><td>
<a target="_blank" href="http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td></table>
Introduction
This is an Earth Engine <> TensorFlow demonstration notebook. This demonstrates a per-pixel neural network implemented in a way that allows the trained model to be hosted on Google AI Platform and used in Earth Engine for interactive prediction from an ee.Model.fromAIPlatformPredictor. See this example notebook for background on the dense model.
Running this demo may incur charges to your Google Cloud Account!
Setup software libraries
Import software libraries and/or authenticate as necessary.
Authenticate to Colab and Cloud
To read/write from a Google Cloud Storage bucket to which you have access, it's necessary to authenticate (as yourself). This should be the same account you use to login to Earth Engine. When you run the code below, it will display a link in the output to an authentication page in your browser. Follow the link to a page that will let you grant permission to the Cloud SDK to access your resources. Copy the code from the permissions page back into this notebook and press return to complete the process.
(You may need to run this again if you get a credentials error later.)
|
from google.colab import auth
auth.authenticate_user()
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Upgrade Earth Engine and Authenticate
Update Earth Engine to ensure you have the latest version. Authenticate to Earth Engine the same way you did to the Colab notebook. Specifically, run the code to display a link to a permissions page. This gives you access to your Earth Engine account. This should be the same account you used to login to Cloud previously. Copy the code from the Earth Engine permissions page back into the notebook and press return to complete the process.
|
!pip install -U earthengine-api --no-deps
import ee
ee.Authenticate()
ee.Initialize()
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Test the TensorFlow installation
Import TensorFlow and check the version.
|
import tensorflow as tf
print(tf.__version__)
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Define variables
The training data are land cover labels with a single vector of Landsat 8 pixel values (BANDS) as predictors. See this example notebook for details on how to generate these training data.
|
# REPLACE WITH YOUR CLOUD PROJECT!
PROJECT = 'your-project'
# Cloud Storage bucket with training and testing datasets.
DATA_BUCKET = 'ee-docs-demos'
# Output bucket for trained models. You must be able to write into this bucket.
OUTPUT_BUCKET = 'your-bucket'
# This is a good region for hosting AI models.
REGION = 'us-central1'
# Training and testing dataset file names in the Cloud Storage bucket.
TRAIN_FILE_PREFIX = 'Training_demo'
TEST_FILE_PREFIX = 'Testing_demo'
file_extension = '.tfrecord.gz'
TRAIN_FILE_PATH = 'gs://' + DATA_BUCKET + '/' + TRAIN_FILE_PREFIX + file_extension
TEST_FILE_PATH = 'gs://' + DATA_BUCKET + '/' + TEST_FILE_PREFIX + file_extension
# The labels, consecutive integer indices starting from zero, are stored in
# this property, set on each point.
LABEL = 'landcover'
# Number of label values, i.e. number of classes in the classification.
N_CLASSES = 3
# Use Landsat 8 surface reflectance data for predictors.
L8SR = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
# Use these bands for prediction.
BANDS = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']
# These names are used to specify properties in the export of
# training/testing data and to define the mapping between names and data
# when reading into TensorFlow datasets.
FEATURE_NAMES = list(BANDS)
FEATURE_NAMES.append(LABEL)
# List of fixed-length features, all of which are float32.
columns = [
tf.io.FixedLenFeature(shape=[1], dtype=tf.float32) for k in FEATURE_NAMES
]
# Dictionary with feature names as keys, fixed-length features as values.
FEATURES_DICT = dict(zip(FEATURE_NAMES, columns))
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Read data
Check existence of the data files
Check that you have permission to read the files in the output Cloud Storage bucket.
|
print('Found training file.' if tf.io.gfile.exists(TRAIN_FILE_PATH)
else 'No training file found.')
print('Found testing file.' if tf.io.gfile.exists(TEST_FILE_PATH)
else 'No testing file found.')
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Read into a tf.data.Dataset
Here we are going to read a file in Cloud Storage into a tf.data.Dataset (these TensorFlow docs explain more about reading data into a tf.data.Dataset). Check that you can read examples from the file. The purpose here is to ensure that we can read from the file without an error. The actual content is not necessarily human readable. Note that we will use all data for training.
|
# Create a dataset from the TFRecord file in Cloud Storage.
train_dataset = tf.data.TFRecordDataset([TRAIN_FILE_PATH, TEST_FILE_PATH],
compression_type='GZIP')
# Print the first record to check.
print(iter(train_dataset).next())
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Parse the dataset
Now we need to make a parsing function for the data in the TFRecord files. The data comes in flattened 2D arrays per record and we want to use the first part of the array for input to the model and the last element of the array as the class label. The parsing function reads data from a serialized Example proto (i.e. example.proto) into a dictionary in which the keys are the feature names and the values are the tensors storing the value of the features for that example. (Learn more about parsing Example protocol buffer messages).
|
def parse_tfrecord(example_proto):
    """The parsing function.

    Read a serialized example into the structure defined by FEATURES_DICT.

    Args:
      example_proto: a serialized Example.

    Returns:
      A tuple of the predictors dictionary and the LABEL, cast to an `int32`.
    """
    parsed_features = tf.io.parse_single_example(example_proto, FEATURES_DICT)
    labels = parsed_features.pop(LABEL)
    return parsed_features, tf.cast(labels, tf.int32)
# Map the function over the dataset.
parsed_dataset = train_dataset.map(parse_tfrecord, num_parallel_calls=4)
from pprint import pprint
# Print the first parsed record to check.
pprint(iter(parsed_dataset).next())
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Note that each record of the parsed dataset contains a tuple. The first element of the tuple is a dictionary with bands names for keys and tensors storing the pixel data for values. The second element of the tuple is tensor storing the class label.
Adjust dimension and shape
Turn the dictionary of {name: tensor, ...} into a 1x1xP array of values, where P is the number of predictors. Turn the label into a 1x1xN_CLASSES array of indicators (i.e. a one-hot vector), in order to use a categorical cross-entropy loss function. Return a tuple of (predictors, indicators), where each is a three-dimensional array; the first two dimensions are spatial x, y (i.e. a 1x1 kernel).
|
# Inputs as a tuple. Make predictors 1x1xP and labels 1x1xN_CLASSES.
def to_tuple(inputs, label):
    return (tf.expand_dims(tf.transpose(list(inputs.values())), 1),
            tf.expand_dims(tf.one_hot(indices=label, depth=N_CLASSES), 1))
input_dataset = parsed_dataset.map(to_tuple)
# Check the first one.
pprint(iter(input_dataset).next())
input_dataset = input_dataset.shuffle(128).batch(8)
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Model setup
Make a densely-connected convolutional model, where the convolution occurs in a 1x1 kernel. This is exactly analogous to the model generated in this example notebook, but operates in a convolutional manner in a 1x1 kernel. This allows Earth Engine to apply the model spatially, as demonstrated below.
Note that the model used here is purely for demonstration purposes and hasn't gone through any performance tuning.
Create the Keras model
Before we create the model, there's still a wee bit of pre-processing to get the data into the right input shape and a format that can be used with cross-entropy loss. Specifically, Keras expects a list of inputs and a one-hot vector for the class. (See the Keras loss function docs, the TensorFlow categorical identity docs and the tf.one_hot docs for details).
Here we will use a simple neural network model with a 64 node hidden layer. Once the dataset has been prepared, define the model, compile it, fit it to the training data. See the Keras Sequential model guide for more details.
|
from tensorflow import keras
# Define the layers in the model. Note the 1x1 kernels.
model = tf.keras.models.Sequential([
tf.keras.layers.Input((None, None, len(BANDS),)),
tf.keras.layers.Conv2D(64, (1,1), activation=tf.nn.relu),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Conv2D(N_CLASSES, (1,1), activation=tf.nn.softmax)
])
# Compile the model with the specified loss and optimizer functions.
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Fit the model to the training data. Lucky number 7.
model.fit(x=input_dataset, epochs=7)
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Save the trained model
Export the trained model to TensorFlow SavedModel format in your cloud storage bucket. The Cloud Platform storage browser is useful for checking on these saved models.
|
MODEL_DIR = 'gs://' + OUTPUT_BUCKET + '/demo_pixel_model'
model.save(MODEL_DIR, save_format='tf')
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
EEification
EEIfication prepares the model for hosting on Google AI Platform. Learn more about EEification from this doc. First, get (and SET) input and output names of the nodes. CHANGE THE OUTPUT NAME TO SOMETHING THAT MAKES SENSE FOR YOUR MODEL! Keep the input name of 'array', which is how you'll pass data into the model (as an array image).
|
from tensorflow.python.tools import saved_model_utils
meta_graph_def = saved_model_utils.get_meta_graph_def(MODEL_DIR, 'serve')
inputs = meta_graph_def.signature_def['serving_default'].inputs
outputs = meta_graph_def.signature_def['serving_default'].outputs
# Just get the first thing(s) from the serving signature def. i.e. this
# model only has a single input and a single output.
input_name = None
for k, v in inputs.items():
    input_name = v.name
    break
output_name = None
for k, v in outputs.items():
    output_name = v.name
    break
# Make a dictionary that maps Earth Engine outputs and inputs to
# AI Platform inputs and outputs, respectively.
import json
input_dict = "'" + json.dumps({input_name: "array"}) + "'"
output_dict = "'" + json.dumps({output_name: "output"}) + "'"
print(input_dict)
print(output_dict)
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Run the EEifier
The actual EEification is handled by the earthengine model prepare command. Note that you will need to set your Cloud Project prior to running the command.
|
# Put the EEified model next to the trained model directory.
EEIFIED_DIR = 'gs://' + OUTPUT_BUCKET + '/eeified_pixel_model'
# You need to set the project before using the model prepare command.
!earthengine set_project {PROJECT}
!earthengine model prepare --source_dir {MODEL_DIR} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict}
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Deploy and host the EEified model on AI Platform
Now there is another TensorFlow SavedModel stored in EEIFIED_DIR ready for hosting by AI Platform. Do that from the gcloud command line tool, installed in the Colab runtime by default. Be sure to specify a regional model with the REGION parameter. Note that the MODEL_NAME must be unique. If you already have a model by that name, either name a new model or a new version of the old model. The Cloud Console AI Platform models page is useful for monitoring your models.
If you change anything about the trained model, you'll need to re-EEify it and create a new version!
|
MODEL_NAME = 'pixel_demo_model'
VERSION_NAME = 'v0'
!gcloud ai-platform models create {MODEL_NAME} \
--project {PROJECT} \
--region {REGION}
!gcloud ai-platform versions create {VERSION_NAME} \
--project {PROJECT} \
--region {REGION} \
--model {MODEL_NAME} \
--origin {EEIFIED_DIR} \
--framework "TENSORFLOW" \
--runtime-version=2.3 \
--python-version=3.7
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Connect to the hosted model from Earth Engine
Generate the input imagery. This should be done in exactly the same way as the training data were generated. See this example notebook for details.
Connect to the hosted model.
Use the model to make predictions.
Display the results.
Note that it takes the model a couple minutes to spin up and make predictions.
|
# Cloud masking function.
def maskL8sr(image):
    cloudShadowBitMask = ee.Number(2).pow(3).int()
    cloudsBitMask = ee.Number(2).pow(5).int()
    qa = image.select('pixel_qa')
    mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(
        qa.bitwiseAnd(cloudsBitMask).eq(0))
    return image.updateMask(mask).select(BANDS).divide(10000)
# The image input data is a 2018 cloud-masked median composite.
image = L8SR.filterDate('2018-01-01', '2018-12-31').map(maskL8sr).median()
# Get a map ID for display in folium.
rgb_vis = {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3, 'format': 'png'}
mapid = image.getMapId(rgb_vis)
# Turn into an array image for input to the model.
array_image = image.float().toArray()
# Point to the model hosted on AI Platform. If you specified a region other
# than the default (us-central1) at model creation, specify it here.
model = ee.Model.fromAiPlatformPredictor(
    projectName=PROJECT,
    modelName=MODEL_NAME,
    version=VERSION_NAME,
    # Can be anything, but don't make it too big.
    inputTileSize=[8, 8],
    # Keep this the same as your training data.
    proj=ee.Projection('EPSG:4326').atScale(30),
    fixInputProj=True,
    # Note the names here need to match what you specified in the
    # output dictionary you passed to the EEifier.
    outputBands={'output': {
        'type': ee.PixelType.float(),
        'dimensions': 1
      }
    },
)
# model.predictImage outputs a one dimensional array image that
# packs the output nodes of your model into an array. These
# are class probabilities that you need to unpack into a
# multiband image with arrayFlatten(). If you want class
# labels, use arrayArgmax() as follows.
predictions = model.predictImage(array_image)
probabilities = predictions.arrayFlatten([['bare', 'veg', 'water']])
label = predictions.arrayArgmax().arrayGet([0]).rename('label')
# Get map IDs for display in folium.
probability_vis = {
  'bands': ['bare', 'veg', 'water'], 'max': 0.5, 'format': 'png'
}
label_vis = {
  'palette': ['red', 'green', 'blue'], 'min': 0, 'max': 2, 'format': 'png'
}
probability_mapid = probabilities.getMapId(probability_vis)
label_mapid = label.getMapId(label_vis)

# Visualize the input imagery and the predictions.
map = folium.Map(location=[37.6413, -122.2582], zoom_start=11)
folium.TileLayer(
  tiles=mapid['tile_fetcher'].url_format,
  attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
  overlay=True,
  name='median composite',
).add_to(map)
folium.TileLayer(
  tiles=label_mapid['tile_fetcher'].url_format,
  attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
  overlay=True,
  name='predicted label',
).add_to(map)
folium.TileLayer(
  tiles=probability_mapid['tile_fetcher'].url_format,
  attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
  overlay=True,
  name='probability',
).add_to(map)
map.add_child(folium.LayerControl())
map
|
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
|
google/earthengine-api
|
apache-2.0
|
Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
|
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]

X_train_folds = []
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and   #
# y_train_folds should each be lists of length num_folds, where               #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].    #
# Hint: Look up the numpy array_split function.                                #
################################################################################
num_example_each_fold = X_train.shape[0] // num_folds
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
print("num_example_each_fold is {}".format(num_example_each_fold))
print("X_train_folds:", [data.shape for data in X_train_folds])
print("y_train_folds:", [data.shape for data in y_train_folds])
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}

################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each       #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,  #
# where in each case you use all but one of the folds as training data and    #
# the last fold as a validation set. Store the accuracies for all folds and   #
# all values of k in the k_to_accuracies dictionary.                           #
################################################################################
def run_knn(X_train, y_train, X_test, y_test, k):
    classifier = KNearestNeighbor()
    classifier.train(X_train, y_train)
    dists = classifier.compute_distances_no_loops(X_test)
    y_test_pred = classifier.predict_labels(dists, k=k)
    num_correct = np.sum(y_test_pred == y_test)
    num_test = X_test.shape[0]
    accuracy = float(num_correct) / num_test
    return accuracy

def run_knn_on_nfolds_with_k(X_train_folds, y_train_folds, num_folds, k):
    acc_list = []
    for ind_fold in range(num_folds):
        # hold out the ind_fold-th fold for validation, train on the rest
        X_test = X_train_folds[ind_fold]
        y_test = y_train_folds[ind_fold]
        X_train = np.vstack(X_train_folds[0:ind_fold] + X_train_folds[ind_fold+1:])
        y_train = np.hstack(y_train_folds[0:ind_fold] + y_train_folds[ind_fold+1:])
        acc = run_knn(X_train, y_train, X_test, y_test, k)
        acc_list.append(acc)
    return acc_list

for k in k_choices:
    print("run knn on {}-fold data with k = {}".format(num_folds, k))
    k_to_accuracies[k] = run_knn_on_nfolds_with_k(X_train_folds, y_train_folds, num_folds, k)
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))

# plot the raw observations
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)

# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k, v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k, v in sorted(k_to_accuracies.items())])
# np.argmax returns an index into the list of sorted k values, so map it back to k
print("the best mean accuracy is at k = {}".format(sorted(k_choices)[np.argmax(accuracies_mean)]))
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()

# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 4

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)

# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
|
assign/assignment1/knn.ipynb
|
zhmz90/CS231N
|
mit
|
Natural Language Processing (NLP) offers a set of methods that allow (among other things):
- automatically extracting the desired information from raw text data (for example, proper nouns)
- indexing documents and enabling keyword search (cf. search engines, or, less ambitiously, auto-completion engines)
- automatically summarizing documents,
- comparing the similarity between several documents,
- machine translation,
- automatic text generation,
- sentiment analysis,
- conversational agents (cf. ELIZA in 1966). If you want to know what happens when two conversational agents talk to each other, it is here.
NLP took its first steps in the context of the Cold War, when machine translation had become a geopolitical issue. In 1950, in his article "Computing machinery and intelligence", Alan Turing defined what was later called the Turing test. A program is said to pass the Turing test if it manages to impersonate a human in a real-time written conversation convincingly enough that the human interlocutor cannot tell with certainty — based on the content of the conversation alone — whether they are interacting with a program or with another human.
Progress in NLP has been much slower than initially expected. However, some consider that in 2014, for the first time, thanks to advances in machine learning, a machine passed the test by posing as a 13-year-old child.
The goal of this lab is to present the essentials of NLP, through three approaches:
The bag-of-words approach: the order of the words is ignored, as is the context in which they appear (or it is taken into account only very partially, for example by looking at the following word). The idea is to study the frequency of the words in a document and the over-representation of words relative to a reference collection (called a corpus). This approach is somewhat simplistic but very effective: one can compute scores that allow, for example, automatic classification of documents by topic, or comparison of the similarity of two documents. It is often used as a first analysis, and it remains the reference for the analysis of poorly structured texts (tweets, chat dialogues, etc.). Keywords: tf-idf, cosine similarity
The contextual approach: we look not only at the words and their frequency, but also at the words that follow them. This approach is essential for disambiguating homonyms, and it also makes it possible to refine bag-of-words models. Computing n-grams (bigrams for pairwise word co-occurrences, trigrams for co-occurrences three at a time, etc.) is the simplest way to take context into account.
The structural approach: we look at the structure of sentences and words (stemming, lemmatization), at syntactic rules, and at the meaning of sentences. The idea is to introduce structure into the analysis of language, based on known, modeled rules (regular expressions, formalized syntactic rules), enriched manually by contributors or learned by machine-learning methods. Keywords: sentence and word tokenization, part-of-speech tagging, entity extraction, etc. This approach is much more costly and time-consuming to set up, but it is the only one capable of meeting more ambitious NLP needs such as machine translation or conversational agents, and it improves the performance of models for document classification, sentiment prediction, etc.
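To make the contextual approach concrete, here is a minimal n-gram sketch in pure Python (the example sentence is arbitrary, not taken from the notebook's data):

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams: tuples of n consecutive tokens."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the green plant in the green study".split()
bigrams = ngrams(tokens, 2)
# ('the', 'green') occurs twice; every other bigram occurs once
print(Counter(bigrams).most_common(1))  # [(('the', 'green'), 2)]
```

NLTK offers a similar helper (`nltk.ngrams`), but the one-liner above shows that no library is required.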
The "bag of words" approach
Retrieving text data with the Google+ API
Installation
|
import httplib2             # pip install httplib2
import json                 # standard library, no installation needed
import apiclient.discovery  # pip install google-api-python-client
import bs4                  # usually already installed; otherwise: pip install bs4
import nltk                 # pip install nltk --> on Windows, see http://www.lfd.uci.edu/~gohlke/pythonlibs/
nltk.__version__
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Getting a Google+ API key
Google shut down Google+, and the API was disabled around March 1, 2019; the following code no longer works, but the data that was retrieved is still available: googleplus sample.
To obtain an API key (Google+ or other), you need to:
have a gmail account (if you don't have one, it is quick to create)
go to the Google developers console
sign in to your gmail account (top right)
on the right, select "Library", then select "Google+"
select "ENABLE" (in blue at the top)
on the right, "Go to credentials"
choose "API key"
then "None" and click the "Create" button
copy your key below
|
# replace with YOUR key
import os
try:
    from pyquickhelper.loghelper import get_password
    API_KEY = get_password("gapi", "ensae_teaching_cs,key")
except Exception as e:
    print(e)
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Connecting to the API and querying a person with a Google+ account (with public activities)
|
if False:  # to be replaced with another API
    # Enter the name of a person with a public Google+ account
    Q = "Tim O'Reilly"
    # Connect to the API (OAuth2 method)
    service = apiclient.discovery.build('plus', 'v1', http=httplib2.Http(),
                                        developerKey=API_KEY)
    # Retrieve the feeds
    people_feed = service.people().search(query=Q).execute()
    # Print the retrieved JSON
    res = json.dumps(people_feed['items'], indent=1)
    print(res if len(res) < 1000 else res[:1000] + "...")
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
[
{
"kind": "plus#person",
"etag": "\"Sh4n9u6EtD24TM0RmWv7jTXojqc/tjedXFyeIkzudZzRey5EJb8iZIk\"",
"objectType": "person",
"id": "107033731246200681024",
"displayName": "Tim O'Reilly",
"url": "https://plus.google.com/107033731246200681024",
"image": {
"url": "https://lh4.googleusercontent.com/-J8nmMwIhpiA/AAAAAAAAAAI/AAAAAAADdg4/68r2hyFUgzI/photo.jpg?sz=50"
}
},
{
"kind": "plus#person",
"etag": "\"Sh4n9u6EtD24TM0RmWv7jTXojqc/ofg-30rIv-rKw7XTBBnDA1i3I_Y\"",
"objectType": "person",
"id": "110160587587635791009",
"displayName": "TIM O'REILLY",
"url": "https://plus.google.com/110160587587635791009",
"image": {
"url": "https://lh4.googleusercontent.com/-gWq9vr_JEnc/AAAAAAAAAAI/AAAAAAAAADI/zwCXKP4QeiU/photo.jpg?sz=50"
}
},
{
"kind": "plus#person",
"etag": "\"Sh4n9u6EtD24TM0RmWv7jTXojqc/DVTuV3GDJ0h4UlM5bybS_d26Fdo\"",
"objectType": "person",
"id": "106492472890341598734",
"displayName": "Tim O'Reilly",
"url": "https://plus.google.com/10649...
|
if False:  # to be replaced with another API
    # Since we are working in a notebook, it is easy to display the
    # corresponding images: the unique Google+ avatar identifier and the name
    from IPython.core.display import HTML
    html = []
    for p in people_feed['items']:
        html += ['<p><img src="{}" /> {}: {}</p>'.format(p['image']['url'], p['id'], p['displayName'])]
    HTML(''.join(html[:5]))
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Querying the selected person's activity
|
if False:  # to be replaced with another API
    USER_ID = '107033731246200681024'
    activity_feed = service.activities().list(
        userId=USER_ID,
        collection='public',
        maxResults='100'  # Max allowed per API
    ).execute()
    res = json.dumps(activity_feed, indent=1)
    print(res if len(res) < 1000 else res[:1000] + "...")
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
{
"kind": "plus#activityFeed",
"etag": "\"Sh4n9u6EtD24TM0RmWv7jTXojqc/UVhLnzZeFbRMD00k0VRD5tkC6es\"",
"nextPageToken": "ADSJ_i32R0IpxThTClWgVQ71un8FkJDHG8Pl4hLCvWIbyb6T65r6coxSlWk1svDgsrzxTQ3JHFV1CGnbjFCSaY14sttcvnb1QgiHBgXRtn3A8GjJjin7",
"title": "Google+ List of Activities for Collection PUBLIC",
"updated": "2017-09-13T15:59:45.234Z",
"items": [
{
"kind": "plus#activity",
"etag": "\"Sh4n9u6EtD24TM0RmWv7jTXojqc/Dlr_44FOo97cNjKbX7ZHrVWgen4\"",
"title": "It looks like #@CTRLLabsCo has made a real breakthrough. This is one of the advances that will take ...",
"published": "2017-09-13T15:59:31.577Z",
"updated": "2017-09-13T15:59:45.234Z",
"id": "z123e5zb4zbmcxf5004chl3pvxfbszirt5o",
"url": "https://plus.google.com/+TimOReilly/posts/TpYYyGh7pr1",
"actor": {
"id": "107033731246200681024",
"displayName": "Tim O'Reilly",
"url": "https://plus.google.com/107033731246200681024",
"image": {
"url": "https://lh4.googleusercontent.com/-J8nmMwIhpiA...
Retrieving the sample
|
import json
with open("ressources_googleplus/107033731246200681024.json", "r", encoding="utf-8") as f:
    activity_feed = json.load(f)
res = json.dumps(activity_feed, indent=1)
print(res if len(res) < 1000 else res[:1000] + "...")
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Cleaning the text data with BS4
|
from bs4 import BeautifulSoup

def cleanHtml(html):
    if html == "":
        return ""
    return BeautifulSoup(html, 'html.parser').get_text()

try:
    print(activity_feed[0]['object']['content'])
    print("\n")
    print(cleanHtml(activity_feed[0]['object']['content']))
except Exception as e:
    print(e)
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Retrieving and storing the data
Create a "ressources_googleplus" folder in your current working directory (use %pwd to find it)
|
if False:  # to be replaced with another API
    import json
    import apiclient.discovery

    MAX_RESULTS = 200  # the API caps each request at 100 results => loop to collect 200

    activity_feed = service.activities().list(
        userId=USER_ID,
        collection='public',
        maxResults='100'
    )

    activity_results = []

    while activity_feed is not None and len(activity_results) < MAX_RESULTS:
        activities = activity_feed.execute()
        if 'items' in activities:
            for activity in activities['items']:
                if activity['object']['objectType'] == 'note' and activity['object']['content'] != '':
                    activity['title'] = cleanHtml(activity['title'])
                    activity['object']['content'] = cleanHtml(activity['object']['content'])
                    activity_results += [activity]
        # list_next moves on to the next request
        activity_feed = service.activities().list_next(activity_feed, activities)

    # write the result to a JSON file
    import os
    if not os.path.exists("ressources_googleplus"):
        os.mkdir("ressources_googleplus")
    f = open('./ressources_googleplus/' + USER_ID + '.json', 'w')
    f.write(json.dumps(activity_results, indent=1))
    f.close()
    print(len(activity_results), "activities written to", f.name)
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Analyzing the text data: TF-IDF, cosine similarity, and n-grams
The tf-idf (term frequency–inverse document frequency) score measures how close a search term is to a document (this is what search engines do). The tf part is an increasing function of the frequency of the search term in the document under study; the idf part is inversely related to the frequency of the term across the whole set of documents (the corpus). The total score, obtained by multiplying the two components, is therefore higher the more over-represented the term is in a document (relative to the rest of the corpus). Several variants exist, which penalize long documents to varying degrees, or which are more or less smoothed.
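As a complement, the cosine similarity mentioned in the section title can be sketched in a few lines on raw term-count vectors. This is a minimal illustration with arbitrary example sentences, not the notebook's own implementation:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(doc_a, doc_b):
    """Cosine of the angle between the two documents' term-count vectors."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(round(cosine_similarity("green plant in his study",
                              "professor plum has a green plant"), 3))  # 0.365
```

In practice one would compute the cosine on tf-idf-weighted vectors rather than raw counts, so that frequent, uninformative words do not dominate the score.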
|
import json
with open("ressources_googleplus/107033731246200681024.json", "r", encoding="utf-8") as f:
    activity_results = json.load(f)
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Example on a small corpus of 3 documents
|
corpus = {
    'a': "Mr. Green killed Colonel Mustard in the study with the candlestick. \
Mr. Green is not a very nice fellow.",
    'b': "Professor Plum has a green plant in his study.",
    'c': "Miss Scarlett watered Professor Plum's green plant while he was away \
from his office last week."
}

terms = {
    'a': [i.lower() for i in corpus['a'].split()],
    'b': [i.lower() for i in corpus['b'].split()],
    'c': [i.lower() for i in corpus['c'].split()]
}

from math import log

QUERY_TERMS = ['mr.', 'green']

def tf(term, doc, normalize=True):
    doc = doc.lower().split()
    if normalize:
        return doc.count(term.lower()) / float(len(doc))
    else:
        return doc.count(term.lower()) / 1.0

def idf(term, corpus):
    num_texts_with_term = len([True for text in corpus
                               if term.lower() in text.lower().split()])
    try:
        return 1.0 + log(float(len(corpus)) / num_texts_with_term)
    except ZeroDivisionError:
        return 1.0

def tf_idf(term, doc, corpus):
    return tf(term, doc) * idf(term, corpus)

for (k, v) in sorted(corpus.items()):
    print(k, ':', v)
print('\n')

query_scores = {'a': 0, 'b': 0, 'c': 0}
for term in [t.lower() for t in QUERY_TERMS]:
    for doc in sorted(corpus):
        print('TF({}): {}'.format(doc, term), tf(term, corpus[doc]))
    print('IDF: {}'.format(term), idf(term, corpus.values()))
    print('\n')
    for doc in sorted(corpus):
        score = tf_idf(term, corpus[doc], corpus.values())
        print('TF-IDF({}): {}'.format(doc, term), score)
        query_scores[doc] += score
    print('\n')

print("Total TF-IDF score for the term '{}'".format(' '.join(QUERY_TERMS)))
for (doc, score) in sorted(query_scores.items()):
    print(doc, score)
|
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|