Two random points After we probe two points at random, we can fit a Gaussian process and start the Bayesian optimization procedure. Two points should give us an uneventful posterior, with the uncertainty growing as we move further from the observations.
plot_gp(bo, x, y)
examples/visualization.ipynb
ysasaki6023/NeuralNetworkStudy
mit
After one step of GP (and two random points)
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
examples/visualization.ipynb
ysasaki6023/NeuralNetworkStudy
mit
After two steps of GP (and two random points)
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
examples/visualization.ipynb
ysasaki6023/NeuralNetworkStudy
mit
After three steps of GP (and two random points)
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
examples/visualization.ipynb
ysasaki6023/NeuralNetworkStudy
mit
After four steps of GP (and two random points)
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
examples/visualization.ipynb
ysasaki6023/NeuralNetworkStudy
mit
After five steps of GP (and two random points)
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
examples/visualization.ipynb
ysasaki6023/NeuralNetworkStudy
mit
After six steps of GP (and two random points)
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
examples/visualization.ipynb
ysasaki6023/NeuralNetworkStudy
mit
After seven steps of GP (and two random points)
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
examples/visualization.ipynb
ysasaki6023/NeuralNetworkStudy
mit
Load and process review dataset
products = pd.read_csv('../../data/amazon_baby_subset.csv')
products['sentiment']
products['sentiment'].size
products.head(10).name
print('# of positive reviews =', len(products[products['sentiment']==1]))
print('# of negative reviews =', len(products[products['sentiment']==-1]))

# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('../../data/important_words.json', 'r') as f:  # Reads the list of most frequent words
    important_words = json.load(f)
important_words = [str(s) for s in important_words]

def remove_punctuation(text):
    import string
    translator = str.maketrans('', '', string.punctuation)
    return str(text).translate(translator)

# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)

# Split out the words into individual columns
for word in important_words:
    products[word] = products['review_clean'].apply(lambda s: s.split().count(word))
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Train-Validation split We split the data into a train-validation split, with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result. Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining the performance of various candidate models (i.e. models with different parameters) should be done on a validation set, while evaluation of the selected model should always be done on a test set.
with open('../../data/module-4-assignment-train-idx.json', 'r') as f:
    train_idx = json.load(f)
train_data = products.iloc[train_idx]  # .ix is deprecated; the JSON files hold integer positions

with open('../../data/module-4-assignment-validation-idx.json', 'r') as f:
    v_idx = json.load(f)
validation_data = products.iloc[v_idx]
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Convert frame to NumPy array Just like in the second assignment of the previous module, we provide you with a function that extracts columns from a data frame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note: The feature matrix includes an additional column 'intercept' filled with 1's to account for the intercept term.
import numpy as np

def get_numpy_data(data_frame, features, label):
    data_frame['intercept'] = 1
    features = ['intercept'] + features
    features_frame = data_frame[features]
    feature_matrix = features_frame.to_numpy()  # as_matrix() is deprecated
    label_array = data_frame[label].to_numpy()
    return (feature_matrix, label_array)

feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Building on the logistic regression with no L2 penalty assignment Let us now build on the Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$ where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$. We will use the same code as in that past assignment to make probability predictions, since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
def prediction(score):
    return 1 / (1 + np.exp(-score))

'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
    # Take dot product of feature_matrix and coefficients
    scores = np.dot(feature_matrix, coefficients)
    # Compute P(y_i = +1 | x_i, w) using the link function;
    # np.exp is vectorized, so the sigmoid can be applied directly
    predictions = prediction(scores)
    # return predictions
    return predictions
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Adding an L2 penalty Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail. Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$ Adding the L2 penalty to the derivative It takes only a small modification to add an L2 penalty. All terms indicated in red refer to terms that were added due to the L2 penalty. Recall from the lecture that the link function is still the sigmoid: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$ The per-coefficient derivative of the log likelihood with the L2 penalty term added is: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j } $$ and for the intercept term, we have $$ \frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$ Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting, because the intercept is not associated with any particular feature. Write a function that computes the derivative of the log likelihood with respect to a single coefficient $w_j$.
Unlike its counterpart in the last assignment, the function accepts five arguments: * errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$ * feature vector containing $h_j(\mathbf{x}_i)$ for all $i$ * coefficient containing the current value of coefficient $w_j$. * l2_penalty representing the L2 penalty constant $\lambda$ * feature_is_constant telling whether the $j$-th feature is constant or not.
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
    # Compute the dot product of errors and feature
    derivative = np.dot(feature, errors)
    # Add the L2 penalty term for any feature that isn't the intercept
    if not feature_is_constant:
        derivative = derivative - 2 * l2_penalty * coefficient
    return derivative
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Quiz Question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$? The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
from math import sqrt

def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
    coefficients = np.array(initial_coefficients)  # make sure it's a numpy array
    for itr in range(max_iter):
        # Predict P(y_i = +1|x_i,w) using your predict_probability() function
        # YOUR CODE HERE
        predictions = predict_probability(feature_matrix, coefficients)

        # Compute indicator value for (y_i = +1)
        indicator = (sentiment == +1)

        # Compute the errors as indicator - predictions
        errors = indicator - predictions
        for j in range(len(coefficients)):  # loop over each coefficient
            # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
            # Compute the derivative for coefficients[j]. Save it in a variable called derivative
            # YOUR CODE HERE
            derivative = feature_derivative_with_L2(errors, feature_matrix[:, j],
                                                    coefficients[j], l2_penalty, j == 0)
            # Add the step size times the derivative to the current coefficient
            coefficients[j] += (step_size * derivative)

        # Checking whether log likelihood is increasing
        if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
           or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
            lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
            print('iteration %*d: log likelihood of observed labels = %.8f' %
                  (int(np.ceil(np.log10(max_iter))), itr, lp))
    return coefficients
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Compare coefficients We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values. Below is a simple helper function that will help us create this table.
important_words.insert(0, 'intercept')
data = np.array(important_words)
table = pd.DataFrame(columns=['words'], data=data)

def add_coefficients_to_table(coefficients, column_name):
    table[column_name] = coefficients
    return table

important_words.remove('intercept')
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words. Quiz Question. Which of the following is not listed in either positive_words or negative_words?
def make_tuple(column_name):
    word_coefficient_tuples = [(word, coefficient)
                               for word, coefficient in zip(table['words'], table[column_name])]
    return word_coefficient_tuples

positive_words = [w for w, c in sorted(make_tuple('coefficients [L2=0]'),
                                       key=lambda x: x[1], reverse=True)[:5]]
negative_words = [w for w, c in sorted(make_tuple('coefficients [L2=0]'),
                                       key=lambda x: x[1])[:5]]
positive_words
negative_words
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6

def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
    cmap_positive = plt.get_cmap('Reds')
    cmap_negative = plt.get_cmap('Blues')

    xx = l2_penalty_list
    plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')

    table_positive_words = table[table['words'].isin(positive_words)]
    table_negative_words = table[table['words'].isin(negative_words)]
    del table_positive_words['words']
    del table_negative_words['words']

    for i in range(len(positive_words)):
        color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
        plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),  # as_matrix() is deprecated
                 '-', label=positive_words[i], linewidth=4.0, color=color)
    for i in range(len(negative_words)):
        color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
        plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
                 '-', label=negative_words[i], linewidth=4.0, color=color)

    plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
    plt.axis([1, 1e5, -1, 2])
    plt.title('Coefficient path')
    plt.xlabel('L2 penalty ($\lambda$)')
    plt.ylabel('Coefficient value')
    plt.xscale('log')
    plt.rcParams.update({'font.size': 18})
    plt.tight_layout()

make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
train_accuracy = {}
train_accuracy[0]   = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4]   = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10]  = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)

validation_accuracy = {}
validation_accuracy[0]   = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4]   = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10]  = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)

# Build a simple report
for key in sorted(validation_accuracy.keys()):
    print("L2 penalty = %g" % key)
    print("train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key]))
    print("-" * 80)

# Optional. Plot accuracy on training and validation sets over choice of L2 penalty.
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6

sorted_list = sorted(train_accuracy.items(), key=lambda x: x[0])
plt.plot([p[0] for p in sorted_list], [p[1] for p in sorted_list],
         'bo-', linewidth=4, label='Training accuracy')
sorted_list = sorted(validation_accuracy.items(), key=lambda x: x[0])
plt.plot([p[0] for p in sorted_list], [p[1] for p in sorted_list],
         'ro-', linewidth=4, label='Validation accuracy')

plt.xscale('symlog')
plt.axis([0, 1e3, 0.78, 0.786])
plt.legend(loc='lower left')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()  # was missing the call parentheses
notebooks/classification/module-4-linear-classifier-regularization-pandas.ipynb
rthadani/coursera-ml
epl-1.0
Viewing Sentences Line by Line Unlike the displaCy dependency parse, the NER viewer has to take in a Doc object with an ents attribute. For this reason, we can't just pass a list of spans to .render(); we have to create a new Doc from each span.text:
for sent in doc.sents:
    displacy.render(nlp(sent.text), style='ent', jupyter=True)
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/03-Visualizing-NER.ipynb
rishuatgithub/MLPy
apache-2.0
<div class="alert alert-info"><font color=black>**NOTE**: If a span does not contain any entities, displaCy will issue a harmless warning:</font></div>
doc2 = nlp(u'Over the last quarter Apple sold nearly 20 thousand iPods for a profit of $6 million. '
           u'By contrast, my kids sold a lot of lemonade.')

for sent in doc2.sents:
    displacy.render(nlp(sent.text), style='ent', jupyter=True)
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/03-Visualizing-NER.ipynb
rishuatgithub/MLPy
apache-2.0
<div class="alert alert-info"><font color=black>**WORKAROUND:** We can avert this with an additional bit of code:</font></div>
for sent in doc2.sents:
    docx = nlp(sent.text)
    if docx.ents:
        displacy.render(docx, style='ent', jupyter=True)
    else:
        print(docx.text)
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/03-Visualizing-NER.ipynb
rishuatgithub/MLPy
apache-2.0
Viewing Specific Entities You can pass a list of entity types to restrict the visualization:
options = {'ents': ['ORG', 'PRODUCT']}
displacy.render(doc, style='ent', jupyter=True, options=options)
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/03-Visualizing-NER.ipynb
rishuatgithub/MLPy
apache-2.0
Customizing Colors and Effects You can also pass background color and gradient options:
colors = {'ORG': 'linear-gradient(90deg, #aa9cfc, #fc9ce7)',
          'PRODUCT': 'radial-gradient(yellow, green)'}
options = {'ents': ['ORG', 'PRODUCT'], 'colors': colors}
displacy.render(doc, style='ent', jupyter=True, options=options)
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/03-Visualizing-NER.ipynb
rishuatgithub/MLPy
apache-2.0
For more on applying CSS background colors and gradients, visit https://www.w3schools.com/css/css3_gradients.asp Creating Visualizations Outside of Jupyter If you're using another Python IDE or writing a script, you can choose to have spaCy serve up HTML separately. Instead of displacy.render(), use displacy.serve():
displacy.serve(doc, style='ent', options=options)
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/03-Visualizing-NER.ipynb
rishuatgithub/MLPy
apache-2.0
Real Numbers (?)
y = 1.0
print(y)
type(x), type(y)
type(x+y)
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
Type conversion or type cast These are used to convert one type into another, dynamically. To convert an object to a given type, just use the type name as if it were a function. Examples:
a = "3.0"
type(a)
b = int(a)     # raises ValueError: "3.0" is not a valid integer literal
b = float(a)
type(b)
x = float(1)/3
y = 1/3
z = 1.0//3.0
print(x+y+z)
2.4//2.5
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
WARNING: Division between integers discards the remainder
x = 1.0/3
y = 1.0/3
z = 1/3
print(x+y+z)
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
NOTE: In Python 2.7 this expression evaluates to 0.666666, since z is computed with an integer division
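As a minimal illustrative sketch (not part of the original notebook), Python 3 distinguishes true division `/` from floor division `//`:

```python
# In Python 3, `/` always returns a float (true division),
# while `//` floors the result (and yields an int for int operands).
print(1 / 3)     # 0.3333333333333333
print(1 // 3)    # 0
print(1.0 // 3)  # 0.0  (floor division also works on floats)
```

In Python 2, `1/3` between ints would also have yielded 0, which is exactly what the note above refers to.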
1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10 == 1.0
1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10 + 1/10
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
WARNING: Remember how numbers are represented in the computer using the floating-point representation. Complex Numbers
x = 1+3j
y = 4+2j
z = 2+1j
print(type(x))
print(x+y*z)
print(z.real, z.imag, z.conjugate)
z.conjugate()
z.real = 3  # raises AttributeError: complex attributes are read-only
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
Useful workspace commands To check the variables in memory (in the workspace):
who
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
To remove a variable from the workspace:
del x
who
print(x+y+z)  # raises NameError: x has just been deleted
help(complex)
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
Functions Suppose we want to write a function that computes twice a given number, that is, the function $f : \mathbb{C} \rightarrow \mathbb{C}$ with $f(x) = 2x$. In Python we write (MIND THE TAB SPACING):
# Note the indentation in the following function
def f(x):
    """Function that returns twice x."""
    return x*2

print(type(f))
f(27)
f(1.2)
f(1+3j)
f("Ciao")
2*"domanda"
help(f)
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
We now want to write the function $$f : \mathbb{C} \times \mathbb{C} \rightarrow \mathbb{C}$$ with $$f(x,y) = x^y$$ In Python we write:
def Potenza(base, esponente):
    """Compute the power (base)**(esponente)."""
    return base**esponente

Potenza(2, 3)

def Potenza2(base, esponente=2):
    """Compute the power (base)**(esponente).
    If the second parameter is not passed, compute the square,
    i.e. esponente=2."""
    return base**esponente

print(Potenza2(14))
print(Potenza2(14, 3))
Potenza2(esponente=4, base=10)
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
Python supports passing arguments by the parameter names used in the function definition (so-called keyword arguments). This differs from the purely positional notation of many programming languages, such as C, and makes the code more readable while avoiding ambiguity when passing arguments to a function. Note also that in the definition of Potenza2 a default parameter value was given for the exponent. Lists
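To make the difference concrete, here is a small sketch (the function name `power` is hypothetical, mirroring `Potenza2` above) showing a positional call, a keyword call, and the default parameter value:

```python
def power(base, esponente=2):
    """Raise base to esponente; esponente defaults to 2."""
    return base ** esponente

print(power(3, 3))                 # positional call -> 27
print(power(esponente=3, base=2))  # keyword arguments: order does not matter -> 8
print(power(5))                    # default parameter value used -> 25
```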
As = [5, 3, 2, 8, 7, 13]
print(As)
print(type(As))
# help(list)
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
IMPORTANT: Get used to pressing the Tab key for autocompletion
# Add an element to a list
As.append(27)
print(As)
As.remove(7)
print(As)
As.insert(4, 12)
print(As)
# Reverse the order of a list
As.reverse()
print(As)
# Sorting as a function
Bs = sorted(As)
print(As, Bs)
# "IN PLACE" sorting
As.sort()
print(As)
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
COMMENT: Remember that some functions operate IN PLACE
print(As)
for x in As:
    print(x)
# Equivalent in ANSI C:
# for (int i=0; i<n; i++)
#     printf("%f\n", As[i]);

# Iterate over the elements of a list (example of a comment in Python)
for n in As:
    print("x=", str(n), " \t -> f(x^3) =", str(Potenza(n, 3)))

list(map(lambda x: Potenza(x, 3), As))

# Iterate over a list using a variable
# for the index of the element in the list
for i, n in enumerate(As):
    print("index:", i, " -> As["+str(i)+"] = " + str(n))

# Iterate over a list in reverse order, again with an index variable
for i, n in enumerate(reversed(As)):
    print("index:", i, " -> As["+str(i)+"] = " + str(n))
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
Slicing operations To obtain a sublist, you can use a slicing expression. For example, the expression Lista[start:end] returns the sublist that starts at position "start" and ends just before position "end" (the end index is excluded). Example:
As
As[3:5]
As[-1]
head, tail = As[0], As[1:]
print(head, tail)
Introduzione a Python - Prima parte.ipynb
mathcoding/Programmazione2
mit
Series Series are one-dimensional structures, like a 1-D NumPy array
a = pd.Series([20, 50, 190, 11, 76])
a
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
The data in a Series can have an index, allowing optimized access to the data
dados = [20, 50, 190, 11, 76]
rotulos = ['a', 'b', 'c', 'd', 'e']
b = pd.Series(dados, index=rotulos)
b
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
Moreover, the index can be used to give semantics to the data in a Series, also allowing an element to be accessed by its assigned index
print(a[2])
print(b[2])
print(b['c'])
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
Series also have a transformation method, as presented in the previous class. This method is called apply: it receives a function that will be applied to every element of the Series, returning a Series with the results
a.apply(lambda x: 2*x)
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
DataFrame A DataFrame is a table where each column is a Series. Like a Series, a DataFrame has an index; however, the index refers to an entire row, i.e., to the element at that position in all of its columns
matriz = np.array([[1, 2, 3], [4, 5, 6]])
nomes_linhas = ['L1', 'L2']
nomes_cols = ['C1', 'C2', 'C3']
df = pd.DataFrame(matriz, index=nomes_linhas, columns=nomes_cols)
df
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
For export purposes, a DataFrame can be represented in several formats
print(df.to_latex())           # LaTeX
print(df.to_csv(index=False))  # CSV
print(df.to_json())            # JSON
print(df.to_html())            # HTML
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
While in a Series we use brackets ([]) to access the element at a given index, in a DataFrame the operator refers to a column (a Series), allowing us to access it, overwrite it, or add a new one
df['C3']
df['C4'] = [1, 0]
df
df['C4'] = [4, 7]
df
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
A DataFrame can also be transposed: its column labels become indices and its indices become the new columns
df.transpose()
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
We can also sort the rows of a DataFrame by one of its columns
df.sort_values(by='C4', ascending=False)
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
Importing a real dataset Kaggle platform - data science competitions Titanic: Machine Learning from Disaster It provides various pieces of information about the passengers, such as age, sex, cabin, ticket fare, among others. Pandas has predefined functions for reading several file formats.
df = pd.read_csv('titanic.csv')
df.head()      # first 5 rows
df.tail()      # last 5 rows
df.columns     # columns of the DataFrame
df.describe()  # statistics for each numeric column of the DataFrame
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
The pandas bracket operator ([]) can also be used for filtering: given a condition (or predicate), it returns only the rows of the DataFrame that satisfy the predicate. NOTE: writing a predicate with this operator is not always as simple as a plain Python predicate
df[df.Sex == "female"]
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
A DataFrame also lets us count the values present in its Series, allowing us to analyze the occurrence of certain categorical values
df['Sex'].value_counts()
df['Survived'].apply(lambda s: "Yes" if s == 1 else "No").value_counts()
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
Last but never least, we can group the rows of a DataFrame by a column and operate on the resulting groups
df.groupby('Sex')['Survived'].value_counts()
2020/02-python-bibliotecas-manipulacao-dados/Pandas.ipynb
InsightLab/data-science-cookbook
mit
map_element In some situations, you may not want to decorate a function. In these cases you can use map_element map_element(func, in_stream, out_stream, state=None, name=None, kwargs) where func is the function applied to each element of the input stream. Next, we implement the previous example without using decorators. Note that you have to declare x AND y as streams, and specify the relation between x and y by calling map_element before extending stream x.
def example():
    def f(v):
        return v + 10
    x, y = Stream(), Stream()
    map_element(func=f, in_stream=x, out_stream=y)
    x.extend([1, 2, 3, 4])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Mapping element to element: function with keyword arguments The function f(v, addend, multiplier) = v * multiplier + addend maps v to the return value. The first parameter v is an element of the input stream, and the arguments addend and multiplier are keyword arguments of the function. Decorating the function with @fmap_e gives a function that maps a stream to a stream.
@fmap_e
def f(v, addend, multiplier):
    return v * multiplier + addend

# f is a function that maps a stream to a stream
def example():
    x = Stream()
    # Specify the keyword arguments: addend and multiplier
    y = f(x, addend=20, multiplier=2)
    x.extend([1, 2, 3, 4])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Mapping element to element: function with state Strictly speaking a function cannot have state; however, we bend the definition here to allow functions with state that operate on streams. Look at this example: the input and output streams of a function are x and y, respectively, and we want: y[n] = (x[0] + x[1] + ... + x[n]) * multiplier where multiplier is an argument. We can implement a function whose state before the n-th application is x[0] + x[1] + ... + x[n-1], and whose state after the n-th application is x[0] + x[1] + ... + x[n-1] + x[n]. The state is updated by adding x[n]. We capture the state of a function by specifying a special keyword argument, state, and specifying the initial state in the call to the function. The function must return two values: the next output and the next state. In this example, the function has three parameters: the next element of the stream, state, and multiplier. The state and multiplier are keyword arguments.
@fmap_e
def f(v, state, multiplier):
    output = (v + state) * multiplier
    next_state = v + state
    return output, next_state

# f is a function that maps a stream to a stream
def example():
    x = Stream()
    # Specify the initial state and the keyword argument: multiplier
    y = f(x, state=0, multiplier=2)
    x.extend([1, 2, 3, 4])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
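The state-threading pattern above can also be sketched in plain Python, without IoTPy, by folding a list while carrying the state explicitly. This is a minimal sketch; `map_with_state` is a made-up helper name, and `f` is the same function as in the cell above:

```python
def f(v, state, multiplier):
    output = (v + state) * multiplier
    next_state = v + state
    return output, next_state

def map_with_state(func, items, state, **kwargs):
    # Apply func to each element, threading the state through the calls.
    outputs = []
    for v in items:
        output, state = func(v, state, **kwargs)
        outputs.append(output)
    return outputs

print(map_with_state(f, [1, 2, 3, 4], state=0, multiplier=2))  # [2, 6, 12, 20]
```

Each output is the running sum times the multiplier, which is exactly what the stream version computes step by step.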
Same example using map_element instead of a decorator
def f(v, state, multiplier):
    output = (v + state) * multiplier
    next_state = v + state
    return output, next_state

def example():
    x, y = Stream(), Stream()
    map_element(func=f, in_stream=x, out_stream=y, state=0, multiplier=2)
    x.extend([1, 2, 3, 4])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Saving state in an object You can save the state of a stream in an object such as a dict as shown in the following example of the Fibonacci sequence.
def example():
    x = Stream('x')
    # Object in which state is saved
    s = {'a': 0, 'b': 1}

    @fmap_e
    def fib(v, fib):
        # Update state
        fib['a'], fib['b'] = fib['a'] + fib['b'], fib['a']
        return fib['a'] + v

    # Declare stream y
    y = fib(x, fib=s)
    x.extend([0, 0, 0, 0, 0, 0, 0])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Filtering elements in a stream We are given a function f that returns a Boolean. We apply the decorator @filter_e to get a function that takes an input stream and returns an output stream consisting of those elements of the input stream for which f returns True. In the following example, positive(v, threshold=0) returns True exactly when v is positive. After we apply the decorator, positive becomes a function that reads an input stream and returns a stream consisting of the input stream's positive values.
from IoTPy.agent_types.basics import filter_e

@filter_e
def positive(v, threshold):
    return v > threshold

# positive is a function that maps a stream to a stream
def example():
    x = Stream()
    y = positive(x, threshold=0)
    x.extend([-1, 2, -3, 4])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
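For comparison, filtering a plain list (rather than a stream) can be done with Python's built-in filter, using the same predicate. This is a minimal sketch that is independent of IoTPy:

```python
def positive(v, threshold):
    return v > threshold

values = [-1, 2, -3, 4]
# filter keeps exactly the elements for which the predicate returns True
kept = list(filter(lambda v: positive(v, threshold=0), values))
print(kept)  # [2, 4]
```

The stream version does the same thing, except that new elements can keep arriving after the agent is created.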
Using filter_element instead of a decorator Just as you may prefer to use map_element instead of the decorator fmap_e in some situations, you may also prefer to use filter_element instead of the decorator filter_e. The previous example, implemented without decorators, is given next.
from IoTPy.agent_types.op import filter_element

def example():
    def positive(v, threshold):
        return v > threshold

    x, y = Stream(), Stream()
    filter_element(func=positive, in_stream=x, out_stream=y, threshold=0)
    x.extend([-1, 2, -3, 4])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Function that maps list to list In some cases, using functions that map lists to lists is more convenient than functions that map element to element. When such a function is decorated with @fmap_l, the function becomes one that maps streams to streams. Example: Decorate the function increment_odd_numbers
from IoTPy.agent_types.basics import fmap_l

@fmap_l
def increment_odd_numbers(the_list):
    return [v + 1 if v % 2 else v for v in the_list]

def example():
    x = Stream()
    y = increment_odd_numbers(x)
    x.extend([0, 1, 2, 3, 4, 5, 6])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Example: incremental computations from list to list Given a list x we can generate list y where y[j] = x[0] + .. + x[j] by: y = list(accumulate(x)). For example, if x = [1, 2, 3] then y = [1, 3, 6]. Now suppose we extend x with the list [4, 5, 6, 7]; we can get the desired y = [1, 3, 6, 10, 15, 21, 28] by calling accumulate again on the whole list. We can also compute the new values of y incrementally, by adding the last output from the previous computation (i.e., 6) to the accumulation of the extension alone, as shown next.
from itertools import accumulate

def incremental_accumulate(the_list, state):
    output_list = [v + state for v in accumulate(the_list)]
    next_state = output_list[-1]
    return output_list, next_state

def example():
    x = [1, 2, 3]
    y, state = incremental_accumulate(x, state=0)
    print('y is ', y)
    # Accumulate only the extension, starting from the saved state.
    extension = [4, 5, 6, 7]
    y_extension, state = incremental_accumulate(extension, state=state)
    print('y extension is ', y_extension)

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Incremental computations from stream to stream We can decorate the incremental computation from list to list to obtain a computation from stream to stream. This is illustrated in the next example.
from itertools import accumulate

@fmap_l
def incremental_accumulate(the_list, state):
    output_list = [v + state for v in accumulate(the_list)]
    next_state = output_list[-1]
    return output_list, next_state

def example():
    x = Stream()
    y = incremental_accumulate(x, state=0)
    x.extend([10, 20, -30, 50, -40])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))
    x.extend([10, 20, -30])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Example with state and keyword argument We want to output the elements of the accumulated stream that exceed a threshold. For example, if a stream x is [10, 20, -30, 50, -40] then the accumulation stream is [10, 30, 0, 50, 10] and the elements of the accumulated stream that exceed a threshold of 25 are [30, 50].
from itertools import accumulate

@fmap_l
def total_exceeds_threshold(the_list, state, threshold):
    output_list = [v + state for v in accumulate(the_list)
                   if v + state > threshold]
    state += sum(the_list)
    return output_list, state

def example():
    x = Stream()
    y = total_exceeds_threshold(x, state=0, threshold=25)
    x.extend([10, 20, -30, 50, -40])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))
    x.extend([10, 20, -30])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Example: function composition We can also solve the previous problem by composing the functions positive and incremental_accumulate.
def example():
    x = Stream()
    y = positive(incremental_accumulate(x, state=0), threshold=25)
    x.extend([10, 20, -30, 50, -40])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))
    x.extend([10, 20, -30])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Using map_list instead of the decorator fmap_l The next example illustrates how map_list can be used with state and keyword arguments. It is the same as the previous example, except that it doesn't use decorators.
from IoTPy.agent_types.basics import map_list

def total_exceeds_threshold(the_list, state, threshold):
    output_list = [v + state for v in accumulate(the_list)
                   if v + state > threshold]
    state += sum(the_list)
    return output_list, state

def example():
    x, y = Stream(), Stream()
    map_list(func=total_exceeds_threshold, in_stream=x, out_stream=y,
             state=0, threshold=25)
    x.extend([10, 20, -30, 50, -40])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))
    x.extend([10, 20, -30])
    # Execute a step
    run()
    print('recent values of stream y are')
    print(recent_values(y))

example()
examples/FunctionsStreamToStream.ipynb
AssembleSoftware/IoTPy
bsd-3-clause
Time stepping
# Init Seismograms
Seismogramm = np.zeros((3, nt))  # Three seismograms

# Calculation of some coefficients
i_dx = 1.0 / dx
i_dx3 = 1.0 / (dx**3)
c9 = dt**3 / 24.0
kx = np.arange(5, nx - 4)

print("Starting time stepping...")

## Time stepping
for n in range(2, nt):
    # Inject source wavelet
    p[xscr] = p[xscr] + q[n]

    # Calculating spatial derivative
    p_x = i_dx * 9.0/8.0 * (p[kx+1] - p[kx]) - i_dx * 1.0/24.0 * (p[kx+2] - p[kx-1])
    p_xxx = i_dx3 * (-3.0) * (p[kx+1] - p[kx]) + i_dx3 * (1.0) * (p[kx+2] - p[kx-1])

    # Update velocity
    vx[kx] = vx[kx] - dt/rho[kx]*p_x - l[kx]*c9*1/(rho[kx]**2)*(p_xxx)

    # Calculating spatial derivative
    vx_x = i_dx * 9.0/8.0 * (vx[kx] - vx[kx-1]) - i_dx * 1.0/24.0 * (vx[kx+1] - vx[kx-2])
    vx_xxx = i_dx3 * (-3.0) * (vx[kx] - vx[kx-1]) + i_dx3 * (1.0) * (vx[kx+1] - vx[kx-2])

    # Update pressure
    p[kx] = p[kx] - l[kx]*dt*(vx_x) - l[kx]**2.0*c9*1.0/(rho[kx])*(vx_xxx)

    # Save seismograms
    Seismogramm[0, n] = p[xrec1]
    Seismogramm[1, n] = p[xrec2]
    Seismogramm[2, n] = p[xrec3]

print("Finished time stepping!")
JupyterNotebook/1D/FD_1D_DX4_DT4_LW_fast.ipynb
florianwittkamp/FD_ACOUSTIC
gpl-3.0
Save seismograms
## Save seismograms np.save("Seismograms/FD_1D_DX4_DT4_LW_fast",Seismogramm) ## Plot seismograms fig, (ax1, ax2, ax3) = plt.subplots(3, 1) fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 ) ax1.plot(t,Seismogramm[0,:]) ax1.set_title('Seismogram 1') ax1.set_ylabel('Amplitude') ax1.set_xlabel('Time in s') ax1.set_xlim(0, T) ax2.plot(t,Seismogramm[1,:]) ax2.set_title('Seismogram 2') ax2.set_ylabel('Amplitude') ax2.set_xlabel('Time in s') ax2.set_xlim(0, T) ax3.plot(t,Seismogramm[2,:]) ax3.set_title('Seismogram 3') ax3.set_ylabel('Amplitude') ax3.set_xlabel('Time in s') ax3.set_xlim(0, T);
JupyterNotebook/1D/FD_1D_DX4_DT4_LW_fast.ipynb
florianwittkamp/FD_ACOUSTIC
gpl-3.0
Now we'll create a function that will save an animation and embed it in an html string. Note that this will require ffmpeg or mencoder to be installed on your system. For reasons entirely beyond my limited understanding of video encoding details, this also requires using the libx264 encoding for the resulting mp4 to be properly embedded into HTML5.
from tempfile import NamedTemporaryFile
from base64 import b64encode

VIDEO_TAG = """<video controls>
 <source src="data:video/x-m4v;base64,{0}" type="video/mp4">
 Your browser does not support the video tag.
</video>"""

def anim_to_html(anim):
    if not hasattr(anim, '_encoded_video'):
        with NamedTemporaryFile(suffix='.mp4') as f:
            anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264'])
            video = open(f.name, "rb").read()
        # base64-encode the raw bytes so they can be embedded in the HTML tag
        anim._encoded_video = b64encode(video).decode('ascii')
    return VIDEO_TAG.format(anim._encoded_video)
AnimationEmbedding.ipynb
cbcoutinho/gravBody2D
gpl-3.0
With this HTML function in place, we can use IPython's HTML display tools to create a function which will show the video inline:
from IPython.display import HTML

def display_animation(anim):
    plt.close(anim._fig)
    return HTML(anim_to_html(anim))
AnimationEmbedding.ipynb
cbcoutinho/gravBody2D
gpl-3.0
Example of Embedding an Animation The result looks something like this -- we'll use a basic animation example taken from my earlier Matplotlib Animation Tutorial post:
from matplotlib import animation

# First set up the figure, the axis, and the plot element we want to animate
fig = plt.figure()
ax = plt.axes(xlim=(0, 2), ylim=(-2, 2))
line, = ax.plot([], [], lw=2)

# initialization function: plot the background of each frame
def init():
    line.set_data([], [])
    return line,

# animation function. This is called sequentially
def animate(i):
    x = np.linspace(0, 2, 1000)
    y = np.sin(2 * np.pi * (x - 0.01 * i))
    line.set_data(x, y)
    return line,

# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=100, interval=20, blit=True)

# call our new function to display the animation
display_animation(anim)
AnimationEmbedding.ipynb
cbcoutinho/gravBody2D
gpl-3.0
Making the Embedding Automatic We can go a step further and use IPython's display hooks to automatically represent animation objects with the correct HTML. We'll simply set the _repr_html_ member of the animation base class to our HTML converter function:
animation.Animation._repr_html_ = anim_to_html
AnimationEmbedding.ipynb
cbcoutinho/gravBody2D
gpl-3.0
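The hook above works because of IPython's rich-display protocol: any object whose class defines a `_repr_html_` method returning an HTML string is rendered as HTML in the notebook. A minimal sketch of the protocol (the `GreenBox` class name is made up for illustration):

```python
class GreenBox:
    """Toy object that the notebook would render as HTML via _repr_html_."""

    def __init__(self, text):
        self.text = text

    def _repr_html_(self):
        # IPython calls this method to obtain the HTML representation
        return '<div style="background: lightgreen">{0}</div>'.format(self.text)

box = GreenBox('hello')
print(box._repr_html_())
```

Assigning `anim_to_html` to `animation.Animation._repr_html_` retrofits exactly this method onto every animation object.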
Now simply creating an animation will lead to it being automatically embedded in the notebook, without any further function calls:
animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=20, blit=True)
AnimationEmbedding.ipynb
cbcoutinho/gravBody2D
gpl-3.0
โฃ๏ธNotes: * It seems that in ADF, when test statistic is lower than critical value, it's stationary; but in KPSS when test statistic is higher than critical value, it's stationary. * Also covariance and mean are always overlap here. * Let's analysis above results: * The visualization is showing that, standard deviation maintains the same but mean is still changing with the time, so it's not strict stationary. * The absolute value of ADF test statstic is lower than all the absolute critical values, so it's not differencing stationary, not strict stationary either. * KPSS test statistic is higher than 10% critical value, so it has 90% confidence that the series is trend stationary. * Theoretically, if we try to remove the trend, it should become closer to strict stationary. Because it's trend stationary.
# Change 1 - Differencing
## I still want to try differencing.
ts_log_diff = ts_log - ts_log.shift(3)  # I tried 1 and 7 steps too
plt.figure(figsize=(9, 7))
plt.plot(ts_log_diff)
plt.show()

ts_log_diff.dropna(inplace=True)
test_stationarity(ts_log_diff)
sequencial_analysis/time_series_stationary_measures.ipynb
hanhanwu/Hanhan_Data_Science_Practice
mit
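The k-step differencing used above (ts_log - ts_log.shift(3)) just subtracts the value k positions earlier. A minimal plain-Python sketch, without pandas, of the same operation:

```python
def difference(series, k):
    # y[n] = x[n] - x[n-k]; the first k values have no predecessor and are dropped
    return [series[i] - series[i - k] for i in range(k, len(series))]

x = [1, 2, 4, 7, 11, 16]
print(difference(x, 1))  # [1, 2, 3, 4, 5]
print(difference(x, 3))  # [6, 9, 12]
```

Changing k changes which lagged value is subtracted, which is why different shifting steps can give different stationarity test results.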
โฃ๏ธNotes: * I tried step=1,7 and 3 here. 1,7 all failed in ADF. * Let's analysis above results with step=3: * The visualization is showing that, mean and standard devitation are showing less correlation to the time. * ADF test statstic is showing 95% confidence of differencing stationary, since the absolute value of the test statistic is higher than the ansolute 5% critical value but lower than the absolute 1% critical value. * KPSS test statistic is higher than 10% critical value, so it has 90% confidence that the series is (trend) stationary. * This may indicate that, when a time series is not differencing stationary but trend stationary, it is still possible to make it stationary using differencing method, but shifting step can make a difference.
# Change 2 - Remove trend with moving average
## As we found above, the log series seems to be trend stationary
moving_avg = ts_log.rolling(window=12, center=False).mean()  # average of the last 12 values
plt.figure(figsize=(9, 7))
plt.plot(ts_log)
plt.plot(moving_avg, color='orange')
plt.show()

ts_log_moving_avg_diff = ts_log - moving_avg
ts_log_moving_avg_diff.head(12)

ts_log_moving_avg_diff.dropna(inplace=True)
test_stationarity(ts_log_moving_avg_diff)
sequencial_analysis/time_series_stationary_measures.ipynb
hanhanwu/Hanhan_Data_Science_Practice
mit
โฃ๏ธNotes: As we noted above, ts_log looks like trend stationary, so if we remove the trend, the series should be strict stationary. In change 2, with moving average we can remove the trend. And now both ADF and KPSS are showing 90% confidence of stationary, so ts_log_moving_avg_diff should be strict stationary.
# Change 3 - Remove trend with weighted moving average
expwighted_avg = ts_log.ewm(alpha=0.9, ignore_na=False, min_periods=0, adjust=True).mean()
plt.figure(figsize=(9, 7))
plt.plot(ts_log)
plt.plot(expwighted_avg, color='red')
plt.show()

ts_log_ewma_diff = ts_log - expwighted_avg
test_stationarity(ts_log_ewma_diff)
sequencial_analysis/time_series_stationary_measures.ipynb
hanhanwu/Hanhan_Data_Science_Practice
mit
Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution", which is what you'll find in the TensorFlow API, with tf.nn.conv2d_transpose. However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal.
In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer's output will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
learning_rate = 0.001

inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = 

### Encoder
conv1 = # Now 28x28x16
maxpool1 = # Now 14x14x16
conv2 = # Now 14x14x8
maxpool2 = # Now 7x7x8
conv3 = # Now 7x7x8
encoded = # Now 4x4x8

### Decoder
upsample1 = # Now 7x7x8
conv4 = # Now 7x7x8
upsample2 = # Now 14x14x8
conv5 = # Now 14x14x8
upsample3 = # Now 28x28x8
conv6 = # Now 28x28x16

logits = # Now 28x28x1

# Pass logits through sigmoid to get reconstructed image
decoded = 

# Pass logits through sigmoid and calculate the cross-entropy loss
loss = 

# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
autoencoder/Convolutional_Autoencoder.ipynb
tkurfurst/deep-learning
mit
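Nearest-neighbor upsampling, which the decoder uses before each convolution, simply repeats every pixel along both axes. A minimal pure-Python sketch (no TensorFlow) of a factor-2 resize on a tiny grid:

```python
def upsample_nearest(image, factor=2):
    # Repeat each value `factor` times along the row, then repeat each row.
    out = []
    for row in image:
        wide_row = []
        for v in row:
            wide_row.extend([v] * factor)
        out.extend([wide_row[:] for _ in range(factor)])
    return out

image = [[1, 2],
         [3, 4]]
for row in upsample_nearest(image):
    print(row)
```

Because every output pixel is an exact copy of an input pixel, no kernel overlap is involved, which is why resize-then-convolve avoids the checkerboard artifacts of transpose convolutions.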
What we want is $C^1$ continuity, i.e., first-derivatives are continuous, without needing values of f_left and f_right outside where they are defined. To accomplish this, we'll join the second derivatives of both functions by a line, around a region $[x_0, x_1]$ that contains the threshold:
d_dx_f_left_ = sympy.diff(f_left_, x_, 1)
d_dx_f_right_ = sympy.diff(f_right_, x_, 1)
d_dx_f_ = Piecewise(
    (d_dx_f_left_, x_ < x_threshold),
    (d_dx_f_right_, True)
)
d_dx_f = sympy.lambdify(x_, d_dx_f_)
d_dx_f_left = sympy.lambdify(x_, d_dx_f_left_)
d_dx_f_right = sympy.lambdify(x_, d_dx_f_right_)

plt.plot(x, d_dx_f(x))
plt.xlim((x_threshold * 0.8, x_threshold * 1.2))
plt.ylim((-0.4, 1.8))
plt.show()
smooth_transition_between_analytic_functions.ipynb
ESSS/notebooks
mit
The derivative is indeed discontinuous! Let's fix this. Note that x_0 and x_1 do not need to be symmetric around the threshold.
x_0 = x_threshold * 0.8
x_1 = x_threshold * 1.2

d_dx_at_x_0 = d_dx_f_left_.evalf(subs={x_: x_0})
d_dx_at_x_1 = d_dx_f_right_.evalf(subs={x_: x_1})

d_dx_f_center_ = d_dx_at_x_0 + ((x_ - x_0) / (x_1 - x_0)) * (d_dx_at_x_1 - d_dx_at_x_0)
d_dx_f_smooth_ = Piecewise(
    (d_dx_f_left_, x_ < x_0),
    (d_dx_f_center_, x_ < x_1),
    (d_dx_f_right_, True)
)
d_dx_f_smooth = sympy.lambdify(x_, d_dx_f_smooth_)

plt.plot(x, d_dx_f_smooth(x))
plt.xlim((x_threshold * 0.6, x_threshold * 1.4))
plt.ylim((-0.4, 1.8))
plt.show()
smooth_transition_between_analytic_functions.ipynb
ESSS/notebooks
mit
So now we just have to integrate d_dx_f_center and adjust its integration constant to create a better piecewise f_smooth:
f_center_ = sympy.integrate(d_dx_f_center_, x_)

f_left = sympy.lambdify(x_, f_left_)
f_center = sympy.lambdify(x_, f_center_)

# Adjust the integration constant so that f_center(x_0) == f_left(x_0)
f_center_ = f_center_ + (f_left(x_0) - f_center(x_0))

f_smooth_ = Piecewise(
    (f_left_, x_ < x_0),
    (f_center_, x_ < x_1),
    (f_right_, True)
)
f_smooth = sympy.lambdify(x_, f_smooth_)

plt.plot(x, f_smooth(x))
plt.show()
smooth_transition_between_analytic_functions.ipynb
ESSS/notebooks
mit
Much better! Now let's generalize a way to create f_center:
sympy.init_printing()

x_0_, x_1_, f_0_ = sympy.symbols('x_0,x_1,f_{left}(x_0)', real=True)
# df_0 is f_left'(x_0)
# df_1 is f_right'(x_1)
df_0_, df_1_ = sympy.symbols('df_0,df_1', real=True)

a = (x_ - x_0_) / (x_1_ - x_0_)
# d_dx_f_center_ = d_dx_at_x_0 + a * (d_dx_at_x_1 - d_dx_at_x_0)
d_dx_f_center_ = (1 - a) * df_0_ + a * df_1_
f_center_ = sympy.integrate(d_dx_f_center_, x_)
# f_center_ = f_center_ + f_0_ - f_center_.subs(x_, x_0_)
f_center_

# Simplifying f_center to be an equation like x * (a*x + b) + c:
def get_f_center_coefficients(x0, x1, f_left, df_left, df_right):
    dx = x1 - x0
    df0 = df_left(x0)
    df1 = df_right(x1)
    a = 0.5 * (df1 - df0) / dx
    b = (df0*x1 - df1*x0) / dx
    c = f_left(x0) - (x0 * (a*x0 + b))
    return a, b, c

x_0 = x_threshold * 0.8
x_1 = x_threshold * 1.2

# So, for example:
a, b, c = get_f_center_coefficients(
    x_0,
    x_1,
    f_left=lambda x: x**1.2,
    df_left=lambda x: 1.2 * x**0.2,
    df_right=lambda x: -2.0 * x**-1.2,
)
print(a, b, c)

def f_smooth(x):
    return np.piecewise(
        x,
        [x < x_0, (x_0 <= x) & (x <= x_1), x > x_1],
        [
            lambda x: x**1.2,
            lambda x: x * (-0.4387286573389664 * x + 5.230431342173345) - 8.63388046349694,
            lambda x: 10.0 / x**0.2,
        ])

plt.plot(x, f_smooth(x))
plt.show()
smooth_transition_between_analytic_functions.ipynb
ESSS/notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Sometimes we will want to change the order of the arguments that we send to a function.<br> We do this in the call to the function, by passing the name of the argument and then the value we want to pass to it: </p>
my_range(start=0, end=5)
week05/2_Functions_Part_2.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> In this line we reversed the order of the arguments.<br> Since in the call we wrote the names of the parameters matching the function header, the values were sent to the right place.<br> This technique is called <dfn>keyword arguments</dfn>, and with it we pass our arguments according to the names of the parameters in the function header.<br> We use this technique even when we don't want to change the order of the arguments, but only to bring a bit of order to the code.<br> Consider, for example, the case of the function <code>random.randrange</code>: it is more pleasant to read a call to the function with the parameter names: </p>
import random

random.randrange(100, 200)  # Less clear
random.randrange(start=100, stop=200)  # Clearer
week05/2_Functions_Part_2.ipynb
PythonFreeCourse/Notebooks
mit
<div class="align-center" style="display: flex; text-align: right; direction: rtl;"> <div style="display: flex; width: 10%; float: right; "> <img src="images/warning.png" style="height: 50px !important;" alt="Warning!"> </div> <div style="width: 90%"> <p style="text-align: right; direction: rtl;"> Despite the use of the <code>=</code> sign, this is not an assignment in the classic sense of the word.<br> It is a special notation used in calls to functions, whose purpose is to say "pass to the parameter named so-and-so the value so-and-so". </p> </div> </div> <span style="text-align: right; direction: rtl; float: right; clear: both;">Parameters with default values</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Recall the <code>get</code> function of a dictionary, which allows us to retrieve a value from it by a given key.<br> If the key we are looking for does not exist in the dictionary, the function returns <samp>None</samp>: </p>
ghibli_release_dates = {
    'Castle in the Sky': '1986-08-02',
    'My Neighbor Totoro': '1988-04-16',
    'Spirited Away': '2001-07-20',
    'Ponyo': '2008-07-19',
}

ponyo_release_date = ghibli_release_dates.get('Ponyo')
men_in_black_release_date = ghibli_release_dates.get('Men in Black')
print(f"Ponyo release date: {ponyo_release_date}")
print(f"Men in Black release date: {men_in_black_release_date}")
week05/2_Functions_Part_2.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's implement the <code>get</code> function ourselves. For convenience, its usage will look slightly different:<br> </p>
def get(dictionary, key):
    if key in dictionary:
        return dictionary[key]
    return None

ponyo_release_date = get(ghibli_release_dates, 'Ponyo')
men_in_black_release_date = get(ghibli_release_dates, 'Men in Black')
print(f"Ponyo release date: {ponyo_release_date}")
print(f"Men in Black release date: {men_in_black_release_date}")
week05/2_Functions_Part_2.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Our implementation is not perfect. The original operation, <code>get</code> on a dictionary, works in a more sophisticated way.<br> We can pass it an additional parameter, which determines what is returned if the key we passed as the first parameter is not found in the dictionary: </p>
ponyo_release_date = ghibli_release_dates.get('Ponyo', '???')
men_in_black_release_date = ghibli_release_dates.get('Men in Black', '???')
print(f"Ponyo release date: {ponyo_release_date}")
print(f"Men in Black release date: {men_in_black_release_date}")
week05/2_Functions_Part_2.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Note the special behavior of the <code>get</code> operation!<br> If the key we passed as the first argument does not exist in the dictionary, it returns the value given in the second argument.<br> We can pass it one argument, or we can pass it two arguments. It functions properly in both cases.<br> This is not the first time we see functions like these. In fact, last week we learned about many builtin operations that behave this way:<br> <code>range</code>, <code>enumerate</code> and <code>round</code> all know how to receive a variable number of arguments. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's set the <code>get</code> operation aside for now. Don't worry, we will come back to it soon.<br> While we take a break from dictionary operations, Valentine's Day is approaching, and the nearby flower shop wants to raise the prices of all its products by one shekel.<br> We have been asked to build a function for them that receives a list of prices, and returns a list in which every element is greater by 1 than in the original price list.<br> Let's get to work: </p>
def get_new_prices(l):
    l2 = []
    for item in l:
        l2.append(item + 1)
    return l2

prices = [42, 73, 300]
print(get_new_prices(prices))
week05/2_Functions_Part_2.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Within a short time the function we built becomes a historic hit in the flower shops.<br> The manager of the international rose cartel, Giuseppe Vardi, contacts us and asks us to extend the program so that he can raise product prices as he wishes.<br> To meet the requirement, we will build a function that receives a list, and in addition to it the amount that will be added to every element in that list.<br> This way, if the caller of the function passes the value 2 as the second argument, every element in the list will grow by 2.<br> Let's implement it with ease: </p>
def get_new_prices(l, increment_by):
    l2 = []
    for item in l:
        l2.append(item + increment_by)
    return l2

prices = [42, 73, 300]
print(get_new_prices(prices, 1))
print(get_new_prices(prices, 2))
week05/2_Functions_Part_2.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Vardi bursts into song out of sheer happiness, and asks for one last enhancement to the function, if possible.<br> If the caller of the function passed it only the list of prices, raise all the prices by one shekel, as a default.<br> If the second argument was indeed passed, raise the prices according to the value given in that argument.<br> </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> This time we deliberate a bit more, scratch our heads, read a few Python guides and finally arrive at the following answer: </p>
def get_new_prices(l, increment_by=1):
    l2 = []
    for item in l:
        l2.append(item + increment_by)
    return l2

prices = [42, 73, 300]
print(prices)
print(get_new_prices(prices))
print(get_new_prices(prices, 5))
week05/2_Functions_Part_2.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> When we want to define a parameter with a default value, we can set its default value in the function header.<br> If such an argument is passed to the function, Python will use the value that was passed.<br> If not, the default value defined in the function header will be used. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> In our case we defined the parameter <code>increment_by</code> with the default value 1.<br> A call to the function with only one argument (the price list) will increase all the prices by 1, since that is the default value.<br> A call to the function with two arguments (the price list, the amount of the increase) will increase all the prices by the amount that was passed. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> It is important to understand that a call to the function with values in place of the default values will not change the default value in subsequent calls: </p>
print(get_new_prices(prices, 5))
print(get_new_prices(prices))
week05/2_Functions_Part_2.ipynb
PythonFreeCourse/Notebooks
mit
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;"> <div style="display: flex; width: 10%; float: right; clear: both;"> <img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise"> </div> <div style="width: 70%"> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Implement the full <code>get</code> function. The function receives a dictionary, a key, and an "emergency value".<br> Return the value that belongs to the key you received. Otherwise, return the emergency value that was passed to the function.<br> If no emergency value was passed and the key is not in the dictionary, return <samp>None</samp>. </p> </div> <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;"> <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;"> <strong>Important!</strong><br> Solve before you continue! </p> </div> </div> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's demonstrate the same principle with several default values.<br> If the requirement had been, for example, to also add to the function the option of a discount on flower prices, we could have implemented it like this: </p>
def get_new_prices(l, increment_by=1, discount=0):
    l2 = []
    for item in l:
        new_price = item + increment_by - discount
        l2.append(new_price)
    return l2


prices = [42, 73, 300]
print(prices)
print(get_new_prices(prices, 10, 1))  # An increase of 10, a discount of 1
print(get_new_prices(prices, 5))  # An increase of 5
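One possible solution sketch for the `get` exercise above; the exercise does not fix a signature, so the parameter names here are our own choice:

```python
def get(dictionary, key, default=None):
    """Return dictionary[key] if the key exists, otherwise the default."""
    if key in dictionary:
        return dictionary[key]
    return default


heroes = {'Frodo': 'Hobbit', 'Legolas': 'Elf'}
print(get(heroes, 'Frodo', 'Unknown'))  # Hobbit
print(get(heroes, 'Gimli', 'Unknown'))  # Unknown
print(get(heroes, 'Gimli'))             # None
```

Because the default for the third parameter is `None`, omitting the emergency value automatically satisfies the last requirement of the exercise.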
<p style="text-align: right; direction: rtl; float: right; clear: both;"> But what happens when we want to give only a discount?<br> In such a case, when we want to "skip" one of the default values, we have to pass the parameter names in the call to the function.<br> In the following example we increase the price by 1 (since that is the default), and decrease it by 5: </p>
prices = [42, 73, 300]
print(prices)
print(get_new_prices(prices, discount=5))
<p style="text-align: right; direction: rtl; float: right; clear: both;"> It is admittedly a matter of style, but there is a certain elegance and order in naming the parameters even when it is not required: </p>
print(get_new_prices(prices, increment_by=10, discount=1))
<span style="text-align: right; direction: rtl; float: right; clear: both;">A Variable Number of Arguments</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> The built-in Python function <code>max</code>, for example, behaves in an unusual way.<br> It can receive any number of arguments and decide which of them is the largest.<br> See for yourselves! </p>
max(13, 256, 278, 887, 989, 457, 6510, 18, 865, 901, 401, 704, 640)
<p style="text-align: right; direction: rtl; float: right; clear: both;"> We, too, can implement a function that receives a variable number of parameters quite easily.<br> We will start by implementing a rather silly function that receives a variable number of parameters and prints them: </p>
def silly_function(*parameters):
    print(parameters)
    print(type(parameters))
    print('-' * 20)


silly_function('Shmulik', 'Shlomo')
silly_function('Shmulik', 1, 1, 2, 3, 5, 8, 13)
silly_function()
<p style="text-align: right; direction: rtl; float: right; clear: both;"> What actually happened in the last example?<br> When a parameter is defined in the function header with an asterisk, an unlimited number of arguments can be sent to that parameter.<br> The value that goes into the parameter will be of type <code>tuple</code>, and its items are all the items that were passed as arguments. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> For the sake of demonstration, let's build a function that receives parameters and prints them one after the other: </p>
def silly_function2(*parameters):
    print(f"Printing all the items in {parameters}:")
    for parameter in parameters:
        print(parameter)
    print("-" * 20)


silly_function2('Shmulik', 'Shlomo')
silly_function2('Shmulik', 1, 1, 2, 3, 5, 8, 13)
silly_function2()
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;"> <div style="display: flex; width: 10%; float: right; clear: both;"> <img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise"> </div> <div style="width: 70%"> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Play with the function <code>silly_function2</code> and make sure you understand what happens in it.<br> When you are done, try to implement the <code>max</code> function yourselves. </p> </div> <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;"> <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;"> <strong>Important!</strong><br> Solve before you continue! </p> </div> </div> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's implement <code>max</code>: </p>
def my_max(*numbers):
    if not numbers:  # If no arguments were supplied, there is no maximum
        return None
    maximum = numbers[0]
    for number in numbers:
        if number > maximum:
            maximum = number
    return maximum


my_max(13, 256, 278, 887, 989, 457, 6510, 18, 865, 901, 401, 704, 640)
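A related fact that is useful here, although the notebook itself does not show it: the asterisk also works in the opposite direction, at the call site. If the numbers are already collected in a list, `*` unpacks them into separate arguments:

```python
prices = [42, 73, 300]
# Unpacking: max(*prices) is equivalent to max(42, 73, 300).
print(max(*prices))  # 300
```

The same call form works for `my_max` defined above, or for any function that accepts a variable number of arguments.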
<p style="text-align: right; direction: rtl; float: right; clear: both;"> The function header can include additional parameters before the asterisk.<br> As an example, let's look at a function that receives a discount amount and the prices of all the products we bought, and returns the final amount we have to pay: </p>
def get_final_price(discount, *prices):
    return sum(prices) - discount


get_final_price(10000, 3.141, 90053)
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Although at first glance the function <code>get_final_price</code> may look cool, it is best to be careful about overusing this Python feature.<br> This example does demonstrate Python's exceptional flexibility, but overall it is a very bad example of using the asterisk. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Notice how much easier it is to understand the following implementation of <code>get_final_price</code>, and how much easier it is to understand the call to this function: </p>
def get_final_price(prices, discount):
    return sum(prices) - discount


get_final_price(prices=(3.141, 90053), discount=10000)
<span style="text-align: right; direction: rtl; float: right; clear: both;">Intermediate Exercise: Trailblazer</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Write a function named <code>create_path</code> that can receive an unlimited number of arguments.<br> The first parameter will be the letter of the drive on which the files are stored (usually "C"), and the parameters after it will be names of folders and files.<br> Join them with the character <code>\</code> to create a string that represents a path on the computer. After the drive letter, place a colon.<br> Assume that the input the user entered is valid. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Here are a few examples of calls to the function and their return values: </p> <ul style="text-align: right; direction: rtl; float: right; clear: both;"> <li>The call <code dir="ltr">create_path("C", "Users", "Yam")</code> will return <samp dir="ltr">"C:\Users\Yam"</samp></li> <li>The call <code dir="ltr">create_path("C", "Users", "Yam", "HaimonLimon.mp4")</code> will return <samp dir="ltr">"C:\Users\Yam\HaimonLimon.mp4"</samp></li> <li>The call <code dir="ltr">create_path("D", "1337.png")</code> will return <samp dir="ltr">"D:\1337.png"</samp></li> <li>The call <code dir="ltr">create_path("C")</code> will return <samp dir="ltr">"C:"</samp></li> <li>The call <code dir="ltr">create_path()</code> will cause an error</li> </ul> <span style="text-align: right; direction: rtl; float: right; clear: both;">A Variable Number of Named Arguments</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> At the beginning of this notebook we learned how to pass arguments to functions by name: </p>
def print_introduction(name, age):
    return f"My name is {name} and I am {age} years old."


print_introduction(age=2019, name="Gandalf")
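Returning to the `create_path` exercise above, here is one possible solution sketch (your own solution may look different):

```python
def create_path(drive, *parts):
    """Join a drive letter and any number of folder/file names into a path."""
    path = drive + ":"
    for part in parts:
        path = path + "\\" + part
    return path


print(create_path("C", "Users", "Yam"))  # C:\Users\Yam
print(create_path("C"))                  # C:
```

Because `drive` has no default value, calling `create_path()` with no arguments raises a `TypeError`, exactly as the exercise requires.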
<p style="text-align: right; direction: rtl; float: right; clear: both;"> But what if we want to pass our function an unlimited number of arguments by name?<br> Let's take as an example the <code>format</code> method of strings.<br> <code>format</code> is a flexible function when it comes to the number and the names of the arguments passed to it by name.<br> Let's see two examples of its use, a use that at first glance may look magical: </p>
message = "My name is {name} and I am {age} years old"
formatted_message = message.format(name="Gandalf", age=2019)
print(formatted_message)

song = "I'll {action} a story of a {animal}.\nA {animal} who's {key} is {value}."
formatted_song = song.format(action="sing", animal="duck", key="name", value="Alfred Kwak")
print(formatted_song)
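A small addition of ours that connects `format` to what comes next: if the named values already sit in a dictionary, two asterisks at the call site unpack it into named arguments:

```python
message = "My name is {name} and I am {age} years old"
details = {"name": "Gandalf", "age": 2019}
# **details is equivalent to writing name="Gandalf", age=2019:
print(message.format(**details))
```

This is the mirror image of the `**` syntax in a function header, which is introduced below.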
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's write our own function that can receive an unlimited number of arguments by name.<br> First, we will enlist our old friend, <code>silly_function</code>, to see how the magic happens: </p>
def silly_function(**kwargs):
    print(kwargs)
    print(type(kwargs))


silly_function(a=5, b=6, address="221B Baker St, London, England.")
<p style="text-align: right; direction: rtl; float: right; clear: both;"> This behavior happens because we used two asterisks before the variable name.<br> Using two asterisks allows us to pass an unlimited number of named arguments, in a way somewhat reminiscent of the single-asterisk usage we saw earlier.<br> The variable in which the data is stored is a dictionary, whose keys are the names of the arguments that were passed, and whose values are the values passed for those names. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Now that we understand how this works, let's try to create a more interesting function.<br> The function we will write receives as arguments how many grams of each ingredient are needed to make sushi, and prints a recipe for us: </p>
def print_sushi_recipe(**ingredients_and_amounts):
    for ingredient, amount in ingredients_and_amounts.items():
        print(f"{amount} grams of {ingredient}")


print_sushi_recipe(rice=300, water=300, vinegar=15, sugar=10, salt=3, fish=600)