A first simple example

Let's start from a simple example:

- We create a new class that subclasses keras.Model.
- We just override the method train_step(self, data).
- We return a dictionary mapping metric names (including the loss) to their current value.

The input argument data is what gets passed to fit as training data:

- If you pass Numpy arrays, by calling fit(x, y, ...), then data will be the tuple (x, y).
- If you pass a tf.data.Dataset, by calling fit(dataset, ...), then data will be what gets yielded by dataset at each batch.

In the body of the train_step method, we implement a regular training update, similar to what you are already familiar with. Importantly, we compute the loss via self.compiled_loss, which wraps the loss function(s) that were passed to compile(). Similarly, we call self.compiled_metrics.update_state(y, y_pred) to update the state of the metrics that were passed to compile(), and we query results from self.metrics at the end to retrieve their current value.
class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
site/en-snapshot/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
Let's try this out:
import numpy as np

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Just use `fit` as usual
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=3)
Going lower-level

Naturally, you could just skip passing a loss function in compile(), and instead do everything manually in train_step. Likewise for metrics.

Here's a lower-level example that only uses compile() to configure the optimizer:

- We start by creating Metric instances to track our loss and an MAE score.
- We implement a custom train_step() that updates the state of these metrics (by calling update_state() on them), then queries them (via result()) to return their current average value, to be displayed by the progress bar and to be passed to any callback.

Note that we would need to call reset_states() on our metrics between each epoch! Otherwise calling result() would return an average since the start of training, whereas we usually work with per-epoch averages. Thankfully, the framework can do that for us: just list any metric you want to reset in the metrics property of the model. The model will call reset_states() on any object listed here at the beginning of each fit() epoch or at the beginning of a call to evaluate().
loss_tracker = keras.metrics.Mean(name="loss")
mae_metric = keras.metrics.MeanAbsoluteError(name="mae")


class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute our own loss
            loss = keras.losses.mean_squared_error(y, y_pred)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Compute our own metrics
        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}

    @property
    def metrics(self):
        # We list our `Metric` objects here so that `reset_states()` can be
        # called automatically at the start of each epoch
        # or at the start of `evaluate()`.
        # If you don't implement this property, you have to call
        # `reset_states()` yourself at the time of your choosing.
        return [loss_tracker, mae_metric]


# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)

# We don't pass a loss or metrics here.
model.compile(optimizer="adam")

# Just use `fit` as usual -- you can use callbacks, etc.
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=5)
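The per-epoch reset behavior described above can be illustrated with a minimal stand-alone mean tracker. This is plain Python mimicking the interface of keras.metrics.Mean, not the Keras class itself:

```python
class MeanTracker:
    """Minimal stand-in for keras.metrics.Mean: running average with reset."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update_state(self, value):
        self.total += value
        self.count += 1

    def result(self):
        return self.total / self.count if self.count else 0.0

    def reset_states(self):
        self.total = 0.0
        self.count = 0


tracker = MeanTracker()
for batch_loss in [2.0, 4.0]:  # "epoch 1"
    tracker.update_state(batch_loss)
epoch1 = tracker.result()  # average over epoch 1 only

# Without the reset, epoch 2's result would still include epoch 1's batches.
tracker.reset_states()
tracker.update_state(10.0)  # "epoch 2"
epoch2 = tracker.result()
```

This is exactly why the metrics property matters: listing the trackers there lets fit() perform the reset_states() call between epochs for you.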
Supporting sample_weight & class_weight

You may have noticed that our first basic example didn't make any mention of sample weighting. If you want to support the fit() arguments sample_weight and class_weight, you'd simply do the following:

- Unpack sample_weight from the data argument.
- Pass it to compiled_loss & compiled_metrics (of course, you could also just apply it manually if you don't rely on compile() for losses & metrics).

That's it. That's the list.
class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        if len(data) == 3:
            x, y, sample_weight = data
        else:
            sample_weight = None
            x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value.
            # The loss function is configured in `compile()`.
            loss = self.compiled_loss(
                y,
                y_pred,
                sample_weight=sample_weight,
                regularization_losses=self.losses,
            )

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Update the metrics.
        # Metrics are configured in `compile()`.
        self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight)

        # Return a dict mapping metric names to current value.
        # Note that it will include the loss (tracked in self.metrics).
        return {m.name: m.result() for m in self.metrics}


# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# You can now use sample_weight argument
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
sw = np.random.random((1000, 1))
model.fit(x, y, sample_weight=sw, epochs=3)
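The unpacking logic at the top of train_step can be exercised on its own, with plain tuples standing in for the tensors fit() would deliver:

```python
def unpack_data(data):
    """Mirror the unpacking at the top of train_step: fit() may deliver
    (x, y) or (x, y, sample_weight)."""
    if len(data) == 3:
        x, y, sample_weight = data
    else:
        sample_weight = None
        x, y = data
    return x, y, sample_weight


# Two-element tuple: no sample weights supplied.
res2 = unpack_data(([1], [2]))
# Three-element tuple: sample weights are present.
res3 = unpack_data(([1], [2], [0.5]))
```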
Providing your own evaluation step What if you want to do the same for calls to model.evaluate()? Then you would override test_step in exactly the same way. Here's what it looks like:
class CustomModel(keras.Model):
    def test_step(self, data):
        # Unpack the data
        x, y = data
        # Compute predictions
        y_pred = self(x, training=False)
        # Updates the metrics tracking the loss
        self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Update the metrics.
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value.
        # Note that it will include the loss (tracked in self.metrics).
        return {m.name: m.result() for m in self.metrics}


# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(loss="mse", metrics=["mae"])

# Evaluate with our custom test_step
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.evaluate(x, y)
Wrapping up: an end-to-end GAN example

Let's walk through an end-to-end example that leverages everything you just learned.

Let's consider:

- A generator network meant to generate 28x28x1 images.
- A discriminator network meant to classify 28x28x1 images into two classes ("fake" and "real").
- One optimizer for each.
- A loss function to train the discriminator.
from tensorflow.keras import layers

# Create the discriminator
discriminator = keras.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.GlobalMaxPooling2D(),
        layers.Dense(1),
    ],
    name="discriminator",
)

# Create the generator
latent_dim = 128
generator = keras.Sequential(
    [
        keras.Input(shape=(latent_dim,)),
        # We want to generate 128 coefficients to reshape into a 7x7x128 map
        layers.Dense(7 * 7 * 128),
        layers.LeakyReLU(alpha=0.2),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
    ],
    name="generator",
)
Here's a feature-complete GAN class, overriding compile() to use its own signature, and implementing the entire GAN algorithm in 17 lines in train_step:
class GAN(keras.Model):
    def __init__(self, discriminator, generator, latent_dim):
        super(GAN, self).__init__()
        self.discriminator = discriminator
        self.generator = generator
        self.latent_dim = latent_dim

    def compile(self, d_optimizer, g_optimizer, loss_fn):
        super(GAN, self).compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.loss_fn = loss_fn

    def train_step(self, real_images):
        if isinstance(real_images, tuple):
            real_images = real_images[0]
        # Sample random points in the latent space
        batch_size = tf.shape(real_images)[0]
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))

        # Decode them to fake images
        generated_images = self.generator(random_latent_vectors)

        # Combine them with real images
        combined_images = tf.concat([generated_images, real_images], axis=0)

        # Assemble labels discriminating real from fake images
        labels = tf.concat(
            [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0
        )
        # Add random noise to the labels - important trick!
        labels += 0.05 * tf.random.uniform(tf.shape(labels))

        # Train the discriminator
        with tf.GradientTape() as tape:
            predictions = self.discriminator(combined_images)
            d_loss = self.loss_fn(labels, predictions)
        grads = tape.gradient(d_loss, self.discriminator.trainable_weights)
        self.d_optimizer.apply_gradients(
            zip(grads, self.discriminator.trainable_weights)
        )

        # Sample random points in the latent space
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))

        # Assemble labels that say "all real images"
        misleading_labels = tf.zeros((batch_size, 1))

        # Train the generator (note that we should *not* update the weights
        # of the discriminator)!
        with tf.GradientTape() as tape:
            predictions = self.discriminator(self.generator(random_latent_vectors))
            g_loss = self.loss_fn(misleading_labels, predictions)
        grads = tape.gradient(g_loss, self.generator.trainable_weights)
        self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))
        return {"d_loss": d_loss, "g_loss": g_loss}
Let's test-drive it:
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)

gan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim)
gan.compile(
    d_optimizer=keras.optimizers.Adam(learning_rate=0.0003),
    g_optimizer=keras.optimizers.Adam(learning_rate=0.0003),
    loss_fn=keras.losses.BinaryCrossentropy(from_logits=True),
)

# To limit the execution time, we only train on 100 batches. You can train on
# the entire dataset. You will need about 20 epochs to get nice results.
gan.fit(dataset.take(100), epochs=1)
Usage examples
interval_point(0, 1, 0.5)
interval_point(3, 2, 0.2)
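The definition of interval_point is not included in this excerpt. Judging from the calls above, it appears to perform linear interpolation between a and b; the reconstruction below is therefore an assumption, not the original exam code:

```python
def interval_point(a, b, x):
    # Hypothetical reconstruction (the original definition is not shown in
    # this excerpt): the point a fraction x of the way from a to b.
    return a + (b - a) * x


p1 = interval_point(0, 1, 0.5)
p2 = interval_point(3, 2, 0.2)
```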
ref_materials/exams/2015/midterm.ipynb
liganega/Gongsu-DataSci
gpl-3.0
ํ•ด์•ผ ํ•  ์ผ 1 (5์ ) ์œ„ ์ฝ”๋“œ๋ฅผ if ์กฐ๊ฑด๋ฌธ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋„๋ก ์ˆ˜์ •ํ•˜๊ณ ์ž ํ•œ๋‹ค. ์•„๋ž˜ ์ฝ”๋“œ์˜ ๋นˆ์นธ (A)๋ฅผ ์ฑ„์›Œ๋ผ. def interval_point_no_if(a, b, x): return (A) ๋ฌธ์ œ 2 ์•„๋ž˜ ์ฝ”๋“œ๋Š” ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ๊ฒฝ์šฐ๋ฅผ ๋Œ€๋น„ํ•˜์—ฌ ์˜ˆ์™ธ์ฒ˜๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•œ ์ฝ”๋“œ์ด๋‹ค.
while True:
    try:
        x = float(raw_input("Please type a new number: "))
        inverse = 1.0 / x
        print("The inverse of {} is {}.".format(x, inverse))
        break
    except ValueError:
        print("You should have given either an int or a float")
    except ZeroDivisionError:
        print("The input number is {} which cannot be inversed.".format(int(x)))
ํ•ด์•ผ ํ•  ์ผ 2 (10์ ) ์•„๋ž˜ ์ฝ”๋“œ๊ฐ€ ํ•˜๋Š” ์ผ์„ ์„ค๋ช…ํ•˜๊ณ  ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์˜ˆ์™ธ๋“ค์„ ๋‚˜์—ดํ•˜๋ฉฐ, ์˜ˆ์™ธ์ฒ˜๋ฆฌ๋ฅผ ์–ด๋–ป๊ฒŒ ํ•˜๋Š”์ง€ ์„ค๋ช…ํ•˜๋ผ. ๋ฌธ์ œ 3 ์ฝค๋งˆ(',')๋กœ ๊ตฌ๋ถ„๋œ ๋ฌธ์ž๋“ค์ด ์ €์žฅ๋˜์–ด ์žˆ๋Š” ํŒŒ์ผ์„ csv(comma separated value) ํŒŒ์ผ์ด๋ผ ๋ถ€๋ฅธ๋‹ค. ์ˆซ์ž๋“ค๋กœ๋งŒ ๊ตฌ์„ฑ๋œ csv ํŒŒ์ผ์„ ์ธ์ž๋กœ ๋ฐ›์•„์„œ ๊ฐ ์ค„๋ณ„๋กœ ํฌํ•จ๋œ ์ˆซ์ž๋“ค๊ณผ ์ˆซ์ž๋“ค์˜ ํ•ฉ์„ ๊ณ„์‚ฐํ•˜์—ฌ ๋ณด์—ฌ์ฃผ๋Š” ํ•จ์ˆ˜ print_line_sum_of_file๊ณผ ๊ด€๋ จ๋œ ๋ฌธ์ œ์ด๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด test.txt ํŒŒ์ผ์— ์•„๋ž˜ ๋‚ด์šฉ์ด ๋“ค์–ด ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋ฉด ์•„๋ž˜์˜ ๊ฒฐ๊ณผ๊ฐ€ ๋‚˜์™€์•ผ ํ•œ๋‹ค. 1,3,5,8 0,4,7 1,18 In [1]: print_line_sum_of_file("test.txt") out[1]: 1 + 3 + 5 + 8 = 17 0 + 4 + 7 = 11 1 + 18 = 19 text.txt ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.
f = open("test.txt", 'w')
f.write("1,3,5,8\n0,4,7\n1,18")
f.close()
Also, print_line_sum_of_file can be written, for example, as follows.
def print_line_sum_of_file(filename):
    g = open(filename, 'r')   # use the argument, not a hard-coded file name
    h = g.readlines()
    g.close()
    for line in h:
        sum = 0
        k = line.strip().split(',')
        for i in range(len(k)):
            if i < len(k) - 1:
                print(k[i] + " +"),
            else:
                print(k[i] + " ="),
            sum = sum + int(k[i])
        print(sum)
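The arithmetic core of the function reduces to a one-line reduction per csv line, which can be checked in isolation:

```python
def line_sum(line):
    # One CSV line of numbers -> the integer sum printed after the "=".
    return sum(int(token) for token in line.strip().split(','))


s1 = line_sum("1,3,5,8")  # first line of the example file
s2 = line_sum("0,4,7")    # second line
s3 = line_sum("1,18")     # third line
```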
์œ„ ํ•จ์ˆ˜๋ฅผ ์ด์ „์— ์ž‘์„ฑํ•œ ์˜ˆ์ œ ํŒŒ์ผ์— ์ ์šฉํ•˜๋ฉด ์˜ˆ์ƒ๋œ ๊ฒฐ๊ณผ๊ณผ ๋‚˜์˜จ๋‹ค.
print_line_sum_of_file("test.txt")
ํ•ด์•ผ ํ•  ์ผ 3 (5์ ) ๊ทธ๋Ÿฐ๋ฐ ์œ„์™€ ๊ฐ™์ด ์ •์˜ํ•˜๋ฉด ์ˆซ์ž๊ฐ€ ์•„๋‹Œ ๋ฌธ์ž์—ด์ด ํฌํ•จ๋˜์–ด ์žˆ์„ ๊ฒฝ์šฐ ValueError๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค. ValuError๊ฐ€ ์–ด๋””์—์„œ ๋ฐœ์ƒํ•˜๋Š”์ง€ ๋‹ตํ•˜๋ผ. ํ•ด์•ผ ํ•  ์ผ 4 (10์ ) ์ด์ œ ๋ฐ์ดํ„ฐ ํŒŒ์ผ์— ์ˆซ์ž๊ฐ€ ์•„๋‹Œ ๋ฌธ์ž์—ด์ด ํฌํ•จ๋˜์–ด ์žˆ์„ ๊ฒฝ์šฐ๋„ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ๋„๋ก print_line_sum_of_file๋ฅผ ์ˆ˜์ •ํ•ด์•ผ ํ•œ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์ˆซ์ž๊ฐ€ ์•„๋‹Œ ๋ฌธ์ž๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ๋Š” ๋‹จ์–ด๊ฐ€ ์žˆ์„ ๊ฒฝ์šฐ ๋‹จ์–ด์˜ ๊ธธ์ด๋ฅผ ๋ง์…ˆ์— ์ถ”๊ฐ€ํ•˜๋„๋ก ํ•ด๋ณด์ž. ์˜ˆ์ œ: test.txt ํŒŒ์ผ์— ์•„๋ž˜ ๋‚ด์šฉ์ด ๋“ค์–ด ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋ฉด ์•„๋ž˜ ๊ฒฐ๊ณผ๊ฐ€ ๋‚˜์™€์•ผ ํ•œ๋‹ค. 1,3,5,8 1,cat4,7 co2ffee In [1]: print_line_sum_of_file("test.txt") out[1]: 1 + 3 + 5 + 8 = 17 1 + cat4 + 7 = 12 co2ffee = 7 ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๋‹ค. ๋นˆ ์นธ (A)์™€ (B)๋ฅผ ์ฑ„์›Œ๋ผ. f = open("test.txt", 'w') f.write("1,3,5,8\n1,cat4,7\nco2ffee") f.close() def print_line_sum_of_file(filename): g = open("test.txt", 'r') h = g.readlines() g.close() for line in h: sum = 0 k = line.strip().split(',') for i in range(len(k)): if i &lt; len(k) - 1: print(k[i] + " +"), else: print(k[i] + " ="), try: (A) except ValueError: (B) print(sum) ๋ฌธ์ œ 4 ํ•จ์ˆ˜๋ฅผ ๋ฆฌํ„ด๊ฐ’์œผ๋กœ ๊ฐ–๋Š” ๊ณ ๊ณ„ํ•จ์ˆ˜(higer-order function)๋ฅผ ๋‹ค๋ฃจ๋Š” ๋ฌธ์ œ์ด๋‹ค. ๋จผ์ € ๋‹ค์Œ์˜ ํ•จ์ˆ˜๋“ค์„ ์‚ดํŽด๋ณด์ž.
def linear_1(a, b):
    return a + b

def linear_2(a, b):
    return a * 2 + b
๋™์ผํ•œ ๋ฐฉ์‹์„ ๋ฐ˜๋ณตํ•˜๋ฉด ์ž„์˜์˜ ์ž์—ฐ์ˆ˜ n์— ๋Œ€ํ•ด linear_n ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ฆ‰, linear_n(a, b) = a * n + b ์ด ๋งŒ์กฑ๋˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋ฌดํ•œํžˆ ๋งŽ์ด ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค. ๊ทธ๋Ÿฐ๋ฐ ๊ทธ๋Ÿฐ ํ•จ์ˆ˜๋“ค์„ ์œ„์™€๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ •์˜ํ•˜๋Š” ๊ฒƒ์€ ๋งค์šฐ ๋น„ํšจ์œจ์ ์ด๋‹ค. ํ•œ ๊ฐ€์ง€ ๋Œ€์•ˆ์€ ๋ณ€์ˆ˜๋ฅผ ํ•˜๋‚˜ ๋” ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด๋‹ค.
def linear_gen(n, a, b):
    return a * n + b
์œ„์™€ ๊ฐ™์ด linear_gen ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•œ ๋‹ค์Œ์— ํŠน์ • n์— ๋Œ€ํ•ด linear_n ์ด ํ•„์š”ํ•˜๋‹ค๋ฉด ์•„๋ž˜์™€ ๊ฐ™์ด ๊ฐ„๋‹จํ•˜๊ฒŒ ์ •์˜ํ•ด์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด n = 10์ธ ๊ฒฝ์šฐ์ด๋‹ค.
def linear_10(a, b):
    return linear_gen(10, a, b)
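The same specialization can also be produced without a fresh def each time, by a function that returns a function (a closure). This sketch anticipates the higher-order-function tasks that follow, so treat it as one possible answer rather than the exam's reference solution:

```python
def linear_high(n):
    # Closure: linear_n captures n from the enclosing call.
    def linear_n(a, b):
        return a * n + b
    return linear_n


v1 = linear_high(10)(3, 5)   # same as linear_10(3, 5)
v2 = linear_high(15)(2, 7)   # same as a hypothetical linear_15(2, 7)
```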
ํ•ด์•ผ ํ•  ์ผ 5 (10์ ) ๊ทธ๋Ÿฐ๋ฐ ์ด ๋ฐฉ์‹์€ ํŠน์ • linear_n์„ ์‚ฌ์šฉํ•˜๊ณ ์ž ํ•  ๋•Œ๋งˆ๋‹ค def ํ‚ค์›Œ๋“œ๋ฅผ ์ด์šฉํ•˜์—ฌ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ด์•ผ ํ•˜๋Š” ๋‹จ์ ์ด ์žˆ๋‹ค. ๊ทธ๋Ÿฐ๋ฐ ๊ณ ๊ณ„ํ•จ์ˆ˜๋ฅผ ํ™œ์šฉํ•˜๋ฉด def ํ‚ค์›Œ๋“œ๋ฅผ ํ•œ ๋ฒˆ๋งŒ ์‚ฌ์šฉํ•ด๋„ ๋ชจ๋“  ์ˆ˜ n์— ๋Œ€ํ•ด linear_n ํ•จ์ˆ˜๋ฅผ ํ•„์š”ํ•  ๋•Œ๋งˆ๋‹ค ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์•„๋ž˜ ๋“ฑ์‹์ด ๋งŒ์กฑ์‹œํ‚ค๋Š” ๊ณ ๊ณ„ํ•จ์ˆ˜ linear_high๋ฅผ ์ •์˜ํ•  ์ˆ˜ ์žˆ๋‹ค. linear_10(3, 5) = linear_high(10)(3, 5) linear_15(2, 7) = linear_high(15)(2, 7) ์•„๋ž˜ ์ฝ”๋“œ๊ฐ€ ์œ„ ๋“ฑ์‹์„ ๋งŒ์กฑ์‹œํ‚ค๋„๋ก ๋นˆ์ž๋ฆฌ (A)์™€ (B)๋ฅผ ์ฑ„์›Œ๋ผ. def linear_high(n): def linear_n(a, b): (A) return (B) ๋ฌธ์ œ 5 ์ด๋ฆ„์ด ์—†๋Š” ๋ฌด๋ช…ํ•จ์ˆ˜๋ฅผ ๋‹ค๋ฃจ๋Š” ๋ฌธ์ œ์ด๋‹ค. ๋ฌด๋ช…ํ•จ์ˆ˜๋Š” ๊ฐ„๋‹จํ•˜๊ฒŒ ์ •์˜ํ•  ์ˆ˜ ์žˆ๋Š” ํ•จ์ˆ˜๋ฅผ ํ•œ ๋ฒˆ๋งŒ ์‚ฌ์šฉํ•˜๊ณ ์ž ํ•  ๊ฒฝ์šฐ์— ๊ตณ์ด ํ•จ์ˆ˜ ์ด๋ฆ„์ด ํ•„์š”์—†๋‹ค๊ณ  ํŒ๋‹จ๋˜๋ฉด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์•ž์„œ linear_high ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•  ๋•Œ ์‚ฌ์šฉ๋œ linear_n ํ•จ์ˆ˜์˜ ๊ฒฝ์šฐ๊ฐ€ ๊ทธ๋ ‡๋‹ค. linear_n ํ•จ์ˆ˜๋Š” linear_high ํ•จ์ˆ˜๊ฐ€ ํ˜ธ์ถœ๋  ๋•Œ๋งŒ ์˜๋ฏธ๋ฅผ ๊ฐ–๋Š” ํ•จ์ˆ˜์ด๋ฉฐ ๊ทธ ์ด์™ธ์—๋Š” ์กด์žฌํ•˜์ง€ ์•Š๋Š” ํ•จ์ˆ˜๊ฐ€ ๋œ๋‹ค. ๋”ฐ๋ผ์„œ ๊ทธ๋ƒฅ linear_n ํ•จ์ˆ˜๋ฅผ ๋ฌผ์–ด๋ณด๋ฉด ํŒŒ์ด์ฌ ํ•ด์„๊ธฐ๊ฐ€ ์ „ํ˜€ ์•Œ์ง€ ๋ชปํ•˜๋ฉฐ NameError๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค. ํ•ด์•ผ ํ•  ์ผ 6 (5์ ) linear_n ํ•จ์ˆ˜์˜ ์ •์˜๊ฐ€ ๋งค์šฐ ๋‹จ์ˆœํ•˜๋‹ค. ๋”ฐ๋ผ์„œ ๊ตณ์ด ์ด๋ฆ„์„ ์ค„ ํ•„์š”๊ฐ€ ์—†์ด ๋žŒ๋‹ค(lambda) ๊ธฐํ˜ธ๋ฅผ ์ด์šฉํ•˜์—ฌ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•˜๋ฉด ํŽธ๋ฆฌํ•˜๋‹ค. __๋ฌธ์ œ 4__์—์„œ linear_high ํ•จ์ˆ˜์™€ ๋™์ผํ•œ ๊ธฐ๋Šฅ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ํ•จ์ˆ˜ linear_high_lambda ํ•จ์ˆ˜๋ฅผ ์•„๋ž˜์™€ ๊ฐ™์ด ์ •์˜ํ•˜๊ณ ์ž ํ•œ๋‹ค. ๋นˆ ์นธ (A)๋ฅผ ์ฑ„์›Œ๋ผ. def linear_high_lambda(n): return (A) ๋ฌธ์ œ 6 ๋ฌธ์ž์—ด๋กœ ๊ตฌ์„ฑ๋œ ๋ฆฌ์ŠคํŠธ names๊ฐ€ ์žˆ๋‹ค.
names = ["Koh", "Kang", "Park", "Kwon", "Lee", "Yi", "Kim", "Jin"]
A list consisting only of the names that start with K can be built using the Python built-in function filter.
def StartsWithK(s):
    return s[0] == 'K'

K_names = filter(StartsWithK, names)
K_names
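An equivalent filter-free selection can be written with a list comprehension. Since this anticipates the exercises below, take it as one possible answer, together with the reverse alphabetical sort asked for later:

```python
names = ["Koh", "Kang", "Park", "Kwon", "Lee", "Yi", "Kim", "Jin"]

# Same selection as filter(StartsWithK, names), without filter().
K_names = [name for name in names if name.startswith('K')]

# Reverse alphabetical order.
K_names_desc = sorted(K_names, reverse=True)
```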
ํ•ด์•ผ ํ•  ์ผ 7 (15์ ) filter ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š์œผ๋ฉด์„œ ๋™์ผํ•œ ๊ธฐ๋Šฅ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๊ณ ์ž ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ˜ ์žˆ๋‹ค. ๋นˆ์นธ (A), (B), (C)๋ฅผ ์ฑ„์›Œ๋ผ. K_names = [] for name in names: if (A) : (B) else: (C) ํ•ด์•ผ ํ•  ์ผ 8 (5์ ) K๋กœ ์‹œ์ž‘ํ•˜๋Š” ์ด๋ฆ„๋งŒ์œผ๋กœ ๊ตฌ์„ฑ๋œ ๋ฆฌ์ŠคํŠธ๋ฅผ ๊ธ€์ž ์ˆœ์„œ์˜ __์—ญ์ˆœ__์œผ๋กœ ์ •๋ ฌํ•˜๊ณ ์ž ํ•œ๋‹ค. ์•„๋ž˜ ์ฝ”๋“œ๊ฐ€ ๋ฆฌ์ŠคํŠธ ๊ด€๋ จ ํŠน์ • ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ๋นˆ ์ž๋ฆฌ (D)๋ฅผ ์ฑ„์›Œ๋ผ. K_names.(D) ๋ฌธ์ œ 7 ํŒŒ์ด์ฌ ๋‚ด์žฅํ•จ์ˆ˜ map์˜ ๊ธฐ๋Šฅ์€ ์•„๋ž˜ ์˜ˆ์ œ์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค.
map(lambda x : x ** 2, range(5))
map ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ๋ฐฉ์‹์€ ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.
def list_square(num):
    L = []
    for i in range(num):
        L.append(i ** 2)
    return L

list_square(5)
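The loop above collapses to a single list comprehension. Again, since the exam asks the reader to produce this themselves, treat it as one possible answer:

```python
def list_square_comp(num):
    # List-comprehension version of list_square.
    return [i ** 2 for i in range(num)]


squares = list_square_comp(5)
```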
ํ•ด์•ผ ํ•  ์ผ 9 (10์ ) list_square ํ•จ์ˆ˜์™€ ๋™์ผํ•œ ๊ธฐ๋Šฅ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ํ•จ์ˆ˜ list_square_comp ํ•จ์ˆ˜๋ฅผ ๋ฆฌ์ŠคํŠธ ์กฐ๊ฑด์ œ์‹œ๋ฒ•์„ ํ™œ์šฉํ•˜์—ฌ ์•„๋ž˜์ฒ˜๋Ÿผ ๊ตฌํ˜„ํ•˜๊ณ ์ž ํ•œ๋‹ค. ๋นˆ์ž๋ฆฌ (A)๋ฅผ ์ฑ„์›Œ๋ผ. def list_square_comp(num): return (A) ๋ฌธ์ œ 8 ๋‹ค์„ฏ ๊ฐœ์˜ ๋„์‹œ๋ช…๊ณผ ๊ฐ ๋„์‹œ์˜ ์ธ๊ตฌ์ˆ˜๋กœ ์ด๋ฃจ์–ด์ง„ ๋‘ ๊ฐœ์˜ ๋ฆฌ์ŠคํŠธ๊ฐ€ ์•„๋ž˜์ฒ˜๋Ÿผ ์žˆ๋‹ค.
cities = ['A', 'B', 'C', 'D', 'E'] populations = [20, 30, 140, 80, 25]
๋„์‹œ์ด๋ฆ„๊ณผ ์ธ๊ตฌ์ˆ˜๋ฅผ ์Œ์œผ๋กœ ๊ฐ–๋Š” ๋ฆฌ์ŠคํŠธ๋ฅผ ๊ตฌํ˜„ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ์•„๋ž˜์™€ ๊ฐ™๋‹ค.
city_pop = []
for i in range(len(cities)):
    city_pop.append((cities[i], populations[i]))
city_pop
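The same pairing can be written idiomatically with the built-in zip, which walks both lists in lockstep:

```python
cities = ['A', 'B', 'C', 'D', 'E']
populations = [20, 30, 140, 80, 25]

# Equivalent to the index-based loop above.
city_pop = list(zip(cities, populations))
```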
Using city_pop, the population of city C, for example, can be checked as follows.
city_pop[2][1]
Once this is done, the cell magic should be available for use:
%%tutor --lang python3
a = 1
b = 2

def add(x, y):
    return x + y

c = add(a, b)
tutormagic.ipynb
Pybonacci/notebooks
bsd-2-clause
Now an example with JavaScript:
%%tutor --lang javascript
var a = 1;
var b = 1;
console.log(a + b);
Macro Basics

A macro command begins with a percent sign (%, similar to the %sql magic command) and can be found anywhere within a %sql line or %%sql block. Macros must be separated from other text in the SQL with a space. To define a macro, the %%sql macro <name> command is used. The body of the macro is found in the cell below the definition of the macro. This simple macro called EMPTABLE will substitute a SELECT statement into a SQL block.
%%sql macro emptable
select * from employee
Db2 Jupyter Macros.ipynb
DB2-Samples/db2jupyter
apache-2.0
The name of the macro follows the %%sql macro command and is case sensitive. To use the macro, we can place it anywhere in the %sql block. This first example uses it by itself.
%sql %emptable
The actual SQL that is generated is not shown by default. If you do want to see the SQL that gets generated, you can use the -e (echo) option to display the final SQL statement. The following example will display the generated SQL. Note that the echo setting is only used to display results for the current cell that is executing.
%%sql -e
%emptable
Since we can use the %emptable anywhere in our SQL, we can add additional commands around it. In this example we add some logic to the select statement.
%%sql
%emptable
where empno = '000010'
Macros can also have parameters supplied to them. The parameters are included after the name of the macro. Here is a simple macro which will use the first parameter as the name of the column we want returned from the EMPLOYEE table.
%%sql macro emptable
SELECT {1} FROM EMPLOYEE
This example illustrates two concepts. The MACRO command will replace any existing macro with the same name. Since we already have an emptable macro, the macro body will be replaced with this code. In addition, macros only exist for the duration of your notebook. If you create another Jupyter notebook, it will not contain any macros that you may have created. If there are macros that you want to share across notebooks, you should create a separate notebook and place all of the macro definitions in there. Then you can include these macros by executing the %run command using the name of the notebook that contains the macros. The following SQL shows the use of the macro with parameters.
%%sql %emptable(lastname)
The remainder of this notebook will explore the advanced features of macros.

Macro Parameters

Macros can have up to 9 parameters supplied to them. The parameters are numbered from 1 to 9, left to right in the argument list for the macro. For instance, the following macro call has 5 parameters:

%emptable(lastname,firstnme,salary,bonus,'000010')

Parameters are separated by commas, and can contain strings as shown using single or double quotes. When the parameters are used within a macro, the quotes are not included as part of the string. If you do want to pass the quotes as part of the parameter, use square brackets [] around the string. For instance, the following parameter will not have quotes passed to the macro:

%sql %abc('no quotes')

To send the string with quotes, you could surround the parameter with other quotes "'hello'" or use the following technique if you use multiple quotes in your string:

%sql %abc(['quotes'])

To use a parameter within your macro, you enclose the parameter number with braces {}. The next command will illustrate the use of the five parameters.
%%sql macro emptable
display on
SELECT {1},{2},{3},{4} FROM EMPLOYEE WHERE EMPNO = '{5}'
Note that the EMPNO field is a character field in the EMPLOYEE table. Even though the employee number was supplied as a string, the quotes are not included in the parameter. The macro places quotes around the parameter {5} so that it is properly used in the SQL statement. The other feature of this macro is that the display (on) command is part of the macro body so the generated SQL will always be displayed.
%sql %emptable(lastname,firstnme,salary,bonus,'000010')
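Conceptually, positional substitution resembles simple brace replacement. The sketch below only illustrates the idea; it is not the actual implementation of the Db2 %sql macro processor:

```python
def substitute(macro_body, *args):
    """Illustrative sketch of positional {1}..{9} substitution.
    Hypothetical helper, not part of the db2 magic itself."""
    for i, value in enumerate(args, start=1):
        macro_body = macro_body.replace('{' + str(i) + '}', str(value))
    return macro_body


sql = substitute(
    "SELECT {1},{2} FROM EMPLOYEE WHERE EMPNO = '{5}'",
    "lastname", "firstnme", "salary", "bonus", "000010",
)
```

Note how the macro body supplies the quotes around {5}, mirroring the point made above about the employee number.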
We can modify the macro to assume that the parameters will include the quotes in the string.
%%sql macro emptable
SELECT {1},{2},{3},{4} FROM EMPLOYEE WHERE EMPNO = {5}
We just have to make sure that the quotes are part of the parameter now.
%sql -e %emptable(lastname,firstnme,salary,bonus,"'000010'")
We could use the square brackets as an alternative way of passing the parameter.
%sql -e %emptable(lastname,firstnme,salary,bonus,['000010'])
Parameters can also be named in a macro. To name an input value, the macro needs to use the format:

field=value

For instance, the following macro call will have 2 numbered parameters and one named parameter:

%showemp(firstnme,lastname,logic="WHERE EMPNO='000010'")

From within the macro the parameter count would be 2, the value for parameter 1 is firstnme, and the value for parameter 2 is lastname. Since we have a named parameter, it is not included in the list of numbered parameters. In fact, the following statement is equivalent since unnamed parameters are numbered in the order that they are found in the macro, ignoring any named parameters that are found:

%showemp(firstnme,logic="WHERE EMPNO='000010'",lastname)

The following macro illustrates this feature.
%%sql macro showemp
SELECT {1},{2} FROM EMPLOYEE
{logic}

%sql %showemp(firstnme,lastname,logic="WHERE EMPNO='000010'")

%sql %showemp(firstnme,logic="WHERE EMPNO='000010'",lastname)
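The way arguments appear to be classified (name=value pairs become named parameters, everything else is numbered in order) can be sketched in plain Python. This is illustrative only, not the macro processor's actual code:

```python
def classify_args(raw_args):
    """Hypothetical sketch: split macro arguments into numbered and named.
    Named arguments (name=value) are pulled out; the rest are numbered
    in the order they appear."""
    numbered, named = [], {}
    for arg in raw_args:
        if '=' in arg:
            key, _, value = arg.partition('=')
            named[key] = value.strip('"')
        else:
            numbered.append(arg)
    return numbered, named


# The two equivalent calls from the text both classify the same way.
numbered, named = classify_args(
    ["firstnme", "logic=\"WHERE EMPNO='000010'\"", "lastname"])
```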
Named parameters are useful when there are many options within the macro and you don't want to keep track of which position each one is in. In addition, if you have a variable number of parameters, you should use named parameters for the fixed (required) parameters and numbered parameters for the optional ones.

Macro Coding Overview

Macros can contain any type of text, including SQL commands. In addition to the text, macros can also contain the following keywords:

- echo - Display a message
- exit - Exit the macro immediately
- if/else/endif - Conditional logic
- var - Set a variable
- display - Turn the display of the final text on

The only restriction with macros is that macros cannot be nested. This means I can't call a macro from within a macro. The sections below explain the use of each of these statement types.

Echo Option

The -e option will result in the final SQL being displayed after the macro substitution is done.

%%sql -e
%showemp(...)
%%sql macro showdisplay
SELECT * FROM EMPLOYEE FETCH FIRST ROW ONLY
Using the -e flag will display the final SQL that is run.
%sql -e %showdisplay
If we remove the -e option, the final SQL will not be shown.
%sql %showdisplay
Exit Command

The exit command will terminate the processing within a macro and not run the generated SQL. You would use this when a condition is not met within the macro (like a missing parameter).
%%sql macro showexit
echo This message gets shown
SELECT * FROM EMPLOYEE FETCH FIRST ROW ONLY
exit
echo This message does not get shown
The macro that was defined will not show the second statement, nor will it execute the SQL that was defined in the macro body.
%sql %showexit
Echo Command

As you already noticed in the previous example, the echo command will display information on the screen. Any text following the command will have variables substituted and then displayed with a green box surrounding it. The following code illustrates the use of the command.
%%sql macro showecho
echo Here is a message
echo Two lines are shown
The echo command will show each line as a separate box.
%sql %showecho
If you want to have a message go across multiple lines, use the <br> tag to start a new line.
%%sql macro showecho
echo Here is a paragraph. <br> And a final paragraph.

%sql %showecho
Var Command

The var (variable) command sets a macro variable to a value. A variable is referred to in the macro script using curly braces {name}. By default the arguments that are used in the macro call are assigned the variable names {1} to {9}. If you use a named argument (option="value") in the macro call, a variable called {option} will contain the value within the macro. To set a variable within a macro you would use the var command:

var name value

The variable name can be any name as long as it only includes letters, numbers, underscore _ and $. Variable names are case sensitive, so {a} and {A} are different. When the macro finishes executing, the contents of the variables will be lost. If you do want to keep a variable between macros, you should start the name of the variable with a $ sign:

var $name value

This variable will persist between macro calls.
%%sql macro initialize
var $hello Hello There
var hello You won't see this

%%sql macro runit
echo The value of hello is *{hello}*
echo {$hello}
Calling runit will display the variable that was set in the first macro.
%sql %initialize
%sql %runit
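The scoping rule demonstrated here ($-prefixed variables persist across macro calls, plain ones are discarded) can be modeled with a small sketch. This only illustrates the rule; it is not how the db2 magic is actually implemented:

```python
class MacroVars:
    """Hypothetical model of macro variable scoping: names starting with $
    persist across calls, others live only within one call."""

    def __init__(self):
        self.persistent = {}

    def run(self, assignments):
        # Each call starts from the persistent variables only.
        local = dict(self.persistent)
        for name, value in assignments:
            local[name] = value
            if name.startswith('$'):
                self.persistent[name] = value
        return local


session = MacroVars()
# First "macro" sets one persistent and one transient variable.
session.run([('$hello', 'Hello There'), ('hello', "You won't see this")])
# Second "macro" sees only the persistent one.
second = session.run([])
```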
A variable can be converted to uppercase by placing the ^ beside the variable name or number.
%%sql macro runit
echo The first parameter is {^1}

%sql %runit(Hello There)
The string following the variable name can include quotes and these will not be removed. Only quotes that are supplied in a parameter to a macro will have the quotes removed.
%%sql macro runit
var hello This is a long string without quotes
var hello2 'This is a long string with quotes'
echo {hello} <br> {hello2}

%sql %runit
When passing parameters to a macro, the program will automatically create variables based on whether they are positional parameters (1, 2, ..., n) or named parameters. The following macro will be used to show how parameters are passed to the routine.
%%sql macro showvar
echo parm1={1} <br>parm2={2} <br>message={message}
Calling the macro will show how the variable names get assigned and used.
%sql %showvar(parameter 1, another parameter,message="Hello World")
If you pass an empty value (or if a variable does not exist), a "null" value will be shown.
%sql %showvar(1,,message="Hello World")
An empty string also returns a null value.
%sql %showvar(1,2,message="")
Finally, any string supplied to the macro will not include its surrounding quotes in the variable. The Hello World string will not have quotes when it is displayed:
%sql %showvar(1,2,message="Hello World")
You need to supply the quotes in the script or macro when using variables since quotes are stripped from any strings that are supplied.
%%sql macro showvar
echo parm1={1} <br>parm2={2} <br>message='{message}'

%sql %showvar(1,2,message="Hello World")
The count of the total number of parameters passed is found in the {argc} variable. You can use this variable to decide whether or not the user has supplied the proper number of arguments or change which code should be executed.
%%sql macro showvar
echo The number of unnamed parameters is {argc}. The where clause is *{where}*.
Unnamed parameters are included in the count of arguments while named parameters are ignored.
%sql %showvar(1,2,option=nothing,3,4,where=)
If/Else/Endif Command

If you need to add conditional logic to your macro, use the if/else/endif commands. The format of the if statement is:

    if variable condition value
       statements
    else
       statements
    endif

The else portion is optional, but the block must be closed with the endif command. If statements can be nested up to 9 levels deep:

    if condition 1
       if condition 2
          statements
       else
          if condition 3
             statements
          endif
       endif
    endif

If the condition in the if clause is true, then anything following the if statement will be executed and included in the final SQL statement. For instance, the following code will create a SQL statement based on the value of parameter 1:

    if {1} = null
       SELECT * FROM EMPLOYEE
    else
       SELECT {1} FROM EMPLOYEE
    endif

Conditions

The if statement requires a condition to determine whether or not the block should be executed. The condition uses the following format:

    if {variable} condition {variable} | constant | null

Variable can be a number from 1 to 9, which refers to the corresponding argument in the macro list, so {1} refers to the first argument. The variable can also be the name of a named parameter or global variable. The condition is one of the following comparison operators:

- =, ==: equal to
- <: less than
- >: greater than
- <=, =<: less than or equal to
- >=, =>: greater than or equal to
- !=, <>: not equal to

The variable or constant will have quotes stripped away before the comparison is done. If you are testing for the existence of a variable, or checking whether a variable is empty, use the keyword null.
%%sql macro showif
if {argc} = 0
   echo No parameters supplied
   if {option} <> null
      echo The optional parameter option was set: {option}
   endif
else
   if {argc} = "1"
      echo One parameter was supplied
   else
      echo More than one parameter was supplied: {argc}
   endif
endif
Running the previous macro with no parameters will check to see if the option keyword was used.
%sql %showif
Now include the optional parameter.
%sql %showif(option="Yes there is an option")
Finally, issue the macro with multiple parameters.
%sql %showif(Here,are,a,number,of,parameters)
One additional option is available for variable substitution. If the first character of the variable name or parameter number is the ^ symbol, it will uppercase the entire string.
%%sql macro showif
if {option} <> null
   echo The optional parameter option was set: {^option}
endif

%sql %showif(option="Yes there is an option")
When describing any algorithm, one always mentions its cost, often a formula of this kind:

$$O(n^u(\log_2 n)^v)$$

$u$ and $v$ are integers, and $v$ is usually either 0 or 1. But where does this logarithm come from? The first algorithm that comes to mind whose cost corresponds to the case $u=0$, $v=1$ is binary search (recherche dichotomique). It consists of looking for an element in a sorted list. The logarithm comes from the fact that the search space is halved at each iteration, so the element is found very quickly. In most algorithms, the logarithm appears because the size of the problem is divided by an integer at each iteration, here 2.

Binary search is quite simple: we start from a sorted list T and look for the element v (which we assume is in the list). We proceed as follows:

1. Compare v to the middle element of the list.
2. If it is equal to v, we are done.
3. Otherwise, if v is smaller, search the first half of the list and go back to step 1 with the reduced list.
4. If v is larger, do the same with the second half of the list.

This is illustrated in the following figure, where a marks the beginning of the list, b its end, and m its middle. At each iteration, these three positions are moved.
from pyquickhelper.helpgen import NbImage
NbImage("images/dicho.png")
_doc/notebooks/1a/recherche_dichotomique.ipynb
sdpython/ensae_teaching_cs
mit
Iterative version
def recherche_dichotomique(element, liste_triee):
    a = 0
    b = len(liste_triee) - 1
    m = (a + b) // 2
    while a < b:
        if liste_triee[m] == element:
            return m
        elif liste_triee[m] > element:
            b = m - 1
        else:
            a = m + 1
        m = (a + b) // 2
    return a

li = [0, 4, 5, 19, 100, 200, 450, 999]
recherche_dichotomique(5, li)
Recursive version
def recherche_dichotomique_recursive(element, liste_triee, a=0, b=-1):
    if a == b:
        return a
    if b == -1:
        b = len(liste_triee) - 1
    m = (a + b) // 2
    if liste_triee[m] == element:
        return m
    elif liste_triee[m] > element:
        return recherche_dichotomique_recursive(element, liste_triee, a, m - 1)
    else:
        return recherche_dichotomique_recursive(element, liste_triee, m + 1, b)

recherche_dichotomique_recursive(5, li)
Recursive version 2

Adding the parameters a and b may feel a bit heavy. Here is a third implementation in Python (still recursive):
def recherche_dichotomique_recursive2(element, liste_triee):
    if len(liste_triee) == 1:
        return 0
    m = len(liste_triee) // 2
    if liste_triee[m] == element:
        return m
    elif liste_triee[m] > element:
        return recherche_dichotomique_recursive2(element, liste_triee[:m])
    else:
        return m + recherche_dichotomique_recursive2(element, liste_triee[m:])

recherche_dichotomique_recursive2(5, li)
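To see the logarithm concretely, here is a small sketch (the helper name `binary_search_count` is hypothetical, not from the text above) of the iterative search instrumented to count its iterations; on a sorted list of 1024 elements it never needs more than log2(1024) + 1 = 11 steps:

```python
from math import log2

def binary_search_count(element, sorted_list):
    """Binary search that also counts how many iterations it needs."""
    a, b = 0, len(sorted_list) - 1
    iterations = 0
    while a <= b:
        iterations += 1
        m = (a + b) // 2
        if sorted_list[m] == element:
            return m, iterations
        elif sorted_list[m] > element:
            b = m - 1
        else:
            a = m + 1
    return -1, iterations  # element not found

li = list(range(1024))
pos, n_iter = binary_search_count(777, li)
bound = int(log2(len(li))) + 1  # the worst case is about log2(n) iterations
```

Halving the interval at each step is exactly what produces the $(\log_2 n)$ factor in the cost formula.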
Read Expression data using pandas. Notice that pandas can read URLs (!), not just files on your computer!
csv = "https://media.githubusercontent.com/media/olgabot/macosko2015/" \
      "master/data/05_make_rentina_subsets_for_teaching/big_clusters_expression.csv"
expression = pd.read_csv(csv, index_col=0)
print(expression.shape)
expression.head()
notebooks/1.4_make_clustered_heatmap.ipynb
olgabot/cshl-singlecell-2017
mit
Exercise 1

Now use pd.read_csv to read the csv file of the cell metadata.
csv = "https://media.githubusercontent.com/media/olgabot/macosko2015/" \
      "master/data/05_make_rentina_subsets_for_teaching/big_clusters_cell_metadata.csv"

# YOUR CODE HERE
csv = "https://media.githubusercontent.com/media/olgabot/macosko2015/" \
      "master/data/05_make_rentina_subsets_for_teaching/big_clusters_cell_metadata.csv"
cell_metadata = pd.read_csv(csv, index_col=0)
print(cell_metadata.shape)
cell_metadata.head()
To correlate columns of dataframes in pandas, you use the method .corr. Let's look at the documentation of .corr:

- Is the default method Pearson or Spearman correlation?
- Can you correlate between rows, or only between columns?
expression.corr?
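As a quick sanity check of what `.corr` does, a tiny made-up DataFrame (not the expression data above) makes the behavior concrete: each column is treated as one variable, and the result is a column-by-column correlation matrix, with Pearson as the default method:

```python
import pandas as pd

# Tiny made-up data: "b" is a linear function of "a", "c" decreases as "a" grows
df = pd.DataFrame({"a": [1, 2, 3, 4],
                   "b": [2, 4, 6, 8],
                   "c": [4, 3, 2, 1]})

pearson = df.corr()                    # method='pearson' is the default
spearman = df.corr(method="spearman")  # rank-based alternative
```

Here `pearson.loc["a", "b"]` is 1.0 and `pearson.loc["a", "c"]` is -1.0, confirming that the correlations are computed between columns, never between rows.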
Since .corr only correlates between columns, we need to transpose our dataframe. Here's a little animation of matrix transposition from Wikipedia:

Exercise 2

Transpose the expression matrix so the cells are the columns, which makes it easy to calculate correlations. How do you transpose a dataframe in pandas? (hint: google knows everything)
# YOUR CODE HERE
expression_t = expression.T
print(expression_t.shape)
expression_t.head()
Exercise 3

Use .corr to calculate the Spearman correlation of the transposed expression dataframe. Make sure to print the shape, and show the head of the resulting dataframe.
# YOUR CODE HERE
expression_corr = expression_t.corr(method='spearman')
print(expression_corr.shape)
expression_corr.head()
Pro tip: if your matrix is really big, here's a trick to make Spearman correlations faster.

Remember that Spearman correlation is equal to performing Pearson correlation on the ranks? Well, that's exactly what's happening inside the .corr(method='spearman') function! Every time it calculates Spearman, it converts each column to ranks, which means each column gets re-ranked over and over since it has to do it for each pair. Let's cut the work down by converting to ranks FIRST. Let's take a look at the options for .rank:
expression_t.rank?
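A minimal sketch (on small random made-up data, not the expression matrix) confirming the trick: ranking each column once and then computing Pearson gives the same matrix as Spearman:

```python
import numpy as np
import pandas as pd

# Random made-up data: 20 observations of 4 variables
rng = np.random.RandomState(0)
df = pd.DataFrame(rng.rand(20, 4), columns=list("abcd"))

spearman = df.corr(method="spearman")
# Rank down each column once, then correlate the ranks with Pearson
pearson_on_ranks = df.rank(axis=0).corr(method="pearson")

# The two matrices agree up to floating-point noise
max_diff = (spearman - pearson_on_ranks).abs().max().max()
```

The saving comes from ranking each column exactly once instead of once per pair of columns.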
Notice we can specify axis=1 or axis=0, but what does that really mean? Was this ascending along rows, or ascending along columns? To figure this out, let's use a small, simple dataframe:
df = pd.DataFrame([[5, 6, 7], [5, 6, 7], [5, 6, 7]])
df
Exercise 4

Try axis=0 when using rank on this df:
# YOUR CODE HERE
df.rank(axis=0)
Did that make ranks ascending along columns or along rows?

Exercise 5

Now try axis=1 when using rank on this df:
# YOUR CODE HERE
df.rank(axis=1)
Did that make ranks ascending along columns or along rows?

Exercise 6

To get the gene (row) ranks for each cell (column), do we want axis=1 or axis=0? Perform .rank on the transposed expression matrix (expression_t), print the shape of the resulting ranks, and show the head() of it.
# YOUR CODE HERE
ranks = expression_t.rank(axis=0)
print(ranks.shape)
ranks.head()
Exercise 6

Now that you're armed with all this information, we'll calculate the ranks. While you're at it, let's compare the time it takes to run ("runtime") of .corr(method="pearson") on the ranks matrix vs .corr(method="spearman") on the expression matrix.

1. Perform Pearson correlation on the ranks.
2. Check that it is equal to the expression Spearman correlation.
3. Use the %timeit magic to check the runtimes of .corr on the ranks and expression matrices. (Feel free to calculate the expression correlation again, below.)

Note that when you use %timeit, you cannot assign any variables -- using an equals sign doesn't work here. How much time did it take, in comparison? What's the order of magnitude difference? Use as many cells as you need.
# YOUR CODE HERE

# YOUR CODE HERE

# YOUR CODE HERE
%timeit expression_t.corr(method='spearman')
%timeit ranks.corr(method='pearson')

ranks_corr = ranks.corr(method='pearson')
print(ranks_corr.shape)
ranks_corr.head()
Use inequality to see if any points are not the same. If this is equal to zero, then we know that they are ALL the same.
(ranks_corr != expression_corr).sum().sum()
This is a flip of checking for equality, which is a little trickier because then you have to know exactly how many items are in the matrix. Since we have a 300x300 matrix, that multiplication is a little easier to do in your head and know that you got the right answer.
(ranks_corr == expression_corr).sum().sum()
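One caveat, shown on a tiny made-up frame (an assumption for illustration, not the correlation matrices above): element-wise equality treats NaN as unequal to itself, so this inequality count only works on matrices with no missing values. With NaNs you need an explicit NaN-aware check:

```python
import numpy as np
import pandas as pd

a = pd.DataFrame([[1.0, np.nan], [3.0, 4.0]])
b = a.copy()

# NaN != NaN evaluates to True, so the naive count flags the NaN cell
naive_unequal = (a != b).sum().sum()

# Treat two NaNs in the same position as equal
nan_aware_unequal = (~((a == b) | (a.isna() & b.isna()))).sum().sum()
```

Here `naive_unequal` is 1 even though the two frames are copies of each other, while the NaN-aware count is 0. The correlation matrices in this notebook have no missing values, so the simple check above is safe.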
Make a heatmap!!

Now we are ready to make a clustered heatmap! We'll use seaborn's sns.clustermap. Let's read the documentation for sns.clustermap. What is the default distance metric and linkage method?
sns.clustermap?
Exercise 7 Now run sns.clustermap on either the ranks or expression correlation matrices, since they are equal :)
# YOUR CODE HERE
sns.clustermap(expression_corr)
How can we add the colors labeling the rows and columns? Check the documentation for sns.clustermap again:

Exercise 8
# YOUR CODE HERE
sns.clustermap?
Since I am not a color design expert, I defer to color design experts in choosing my color palettes. One such expert is Cynthia Brewer, who made a ColorBrewer (hah!) list of color maps for both increasing quantity (shades) and for categories (differing colors). As a reference, I like using this demo of every ColorBrewer scale. Hover over the palette to see its name. Thankfully, seaborn has the ColorBrewer color maps built in.

Let's see what this output is. Remember -- we never make a variable without looking at it first!!
palette = sns.color_palette('Accent', n_colors=3)
palette
Huh that's a bunch of weird numbers. What do they mean? Turns out it's a value from 0 to 1 representing the red, green, and blue (RGB) color channels that computers understand. But I'm not a computer .... what am I supposed to do?? Turns out, seaborn also has a very convenient function called palplot to plot the entire palette. This lets us look at the variable without having to convert from RGB
sns.palplot(palette)
Exercise 9 Get the color palette for the "Set2" colormap and specify that you want 6 colors (read the documentation of sns.color_palette) Plot the color palette
# YOUR CODE HERE

# YOUR CODE HERE
set2 = sns.color_palette('Set2', n_colors=6)
sns.palplot(set2)
If you are more advanced and want access to more colormaps, I recommend checking out palettable.

Assign colors to clusters

To set a specific color to each cluster, we'll need to see the unique clusters here. For an individual column (called a "Series" in pandas-speak), how can we get only the unique items?

Exercise 10

Get the unique values from the column "cluster_celltype_with_id". Remember, always look at the variable you created!
# YOUR CODE HERE
cluster_ids_unique = cell_metadata['cluster_celltype_with_id'].unique()
cluster_ids_unique
Detour: zip and dict

To map colors to each cluster name, we need to talk about some built-in functions in Python, called zip and dict. For this next part, we'll use the built-in function zip, which is very useful. It acts like a zipper (like for clothes) to glue together the pairs of items in two lists:
english = ["hello", "goodbye", "no", "yes", "please", "thank you"]
spanish = ["hola", "adios", "no", "si"]
zip(english, spanish)
To be memory efficient, this doesn't show us what's inside right away. To look inside a zip object, we can use list:
list(zip(english, spanish))
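Those pairs are exactly what the built-in dict accepts, which is the usual way to turn two parallel lists into a lookup table; the same pattern is what maps cluster names to colors later on. A small self-contained sketch (the lists here are trimmed to equal length for illustration):

```python
english = ["hello", "goodbye", "no", "yes"]
spanish = ["hola", "adios", "no", "si"]

# dict() consumes the (english, spanish) pairs produced by zip()
translations = dict(zip(english, spanish))
```

Looking up `translations["hello"]` gives `"hola"`; each first item of a pair becomes a key, and each second item becomes its value.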
Exercise 11

What happened to "please" and "thank you" from english? Make another list, called spanish2, that contains the Spanish words for "please" and "thank you" (again, google knows everything), then call zip on english and spanish2. Don't forget to use list on them!
# YOUR CODE HERE