One hot encoding. Reference: https://hackernoon.com/what-is-one-hot-encoding-why-and-when-do-you-have-to-use-it-e3c6186d008f
```python
# one hot encoding
kut_onehot = pd.get_dummies(kut_venues[['Venue Category']], prefix="", prefix_sep="")

# add neighborhood column back to dataframe
kut_onehot['Neighborhood'] = kut_venues['Neighborhood']

# move neighborhood column to the first column
fixed_columns = [kut_onehot.columns[-1]] + list(kut_onehot.columns[:-1])
kut_onehot = kut_onehot[fixed_columns]

kut_onehot.head()
```
License: MIT. Notebook: `Capstone Project - The Battle of the Neighborhoods - London Neighborhood Clustering.ipynb`. Repository: `ZRQ-rikkie/coursera-python`.
Group rows by neighborhood and take the mean of the frequency of occurrence of each venue category.
```python
kut_grouped = kut_onehot.groupby('Neighborhood').mean().reset_index()
kut_grouped
kut_grouped.shape

num_top_venues = 5

for hood in kut_grouped['Neighborhood']:
    print("----" + hood + "----")
    temp = kut_grouped[kut_grouped['Neighborhood'] == hood].T.reset_index()
    temp.columns = ['venue', 'freq']
    temp = temp.iloc[1:]
    temp['freq'] = temp['freq'].astype(float)
    temp = temp.round({'freq': 2})
    print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
    print('\n')
```
```
----Berrylands----
   venue  freq
0  Bus Stop  0.25
1  Gym / Fitness Center  0.25
2  Park  0.25
3  Café  0.25
4  Pub  0.00

----Canbury----
   venue  freq
0  Pub  0.31
1  Park  0.08
2  Hotel  0.08
3  Indian Restaurant  0.08
4  Fish & Chips Shop  0.08

----Chessington----
   venue  freq
0  Fast Food Restaurant  1.0
1  Asian Restaurant  0.0
2  Portuguese Restaurant  0.0
3  Hardware Store  0.0
4  Hotel  0.0

----Hook----
   venue  freq
0  Bakery  0.25
1  Indian Restaurant  0.25
2  Fish & Chips Shop  0.25
3  Convenience Store  0.25
4  Asian Restaurant  0.00

----Kingston Vale----
   venue  freq
0  Grocery Store  0.25
1  Soccer Field  0.25
2  Bar  0.25
3  Italian Restaurant  0.25
4  Platform  0.00

----Kingston upon Thames----
   venue  freq
0  Café  0.13
1  Coffee Shop  0.13
2  Sushi Restaurant  0.07
3  Burger Joint  0.07
4  Pub  0.07

----Malden Rushett----
   venue  freq
0  Pub  0.25
1  Restaurant  0.25
2  Convenience Store  0.25
3  Garden Center  0.25
4  Park  0.00

----Motspur Park----
   venue  freq
0  Bus Stop  0.2
1  Gym  0.2
2  Restaurant  0.2
3  Park  0.2
4  Soccer Field  0.2

----New Malden----
   venue  freq
0  Gym  0.17
1  Indian Restaurant  0.17
2  Gastropub  0.17
3  Sushi Restaurant  0.17
4  Supermarket  0.17

----Norbiton----
   venue  freq
0  Indian Restaurant  0.11
1  Italian Restaurant  0.07
2  Platform  0.07
3  Food  0.07
4  Pub  0.07

----Old Malden----
   venue  freq
0  Pub  0.33
1  Train Station  0.33
2  Food  0.33
3  Market  0.00
4  Platform  0.00

----Seething Wells----
   venue  freq
0  Indian Restaurant  0.17
1  Coffee Shop  0.13
2  Italian Restaurant  0.09
3  Pub  0.09
4  Café  0.09

----Surbiton----
   venue  freq
0  Coffee Shop  0.17
1  Pub  0.13
2  Supermarket  0.07
3  Breakfast Spot  0.07
4  Grocery Store  0.03

----Tolworth----
   venue  freq
0  Grocery Store  0.20
1  Pharmacy  0.13
2  Bus Stop  0.07
3  Furniture / Home Store  0.07
4  Pizza Place  0.07
```
Create a dataframe of the venues, and define a function to sort the venues in descending order.
```python
def return_most_common_venues(row, num_top_venues):
    row_categories = row.iloc[1:]
    row_categories_sorted = row_categories.sort_values(ascending=False)
    return row_categories_sorted.index.values[0:num_top_venues]
```
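As a quick sanity check, the sorting helper can be exercised on a toy grouped row. The neighborhood name and frequency values below are hypothetical, chosen only for illustration:

```python
import pandas as pd

def return_most_common_venues(row, num_top_venues):
    # skip the first entry (the neighborhood name), sort the rest descending
    row_categories = row.iloc[1:]
    row_categories_sorted = row_categories.sort_values(ascending=False)
    return row_categories_sorted.index.values[0:num_top_venues]

# hypothetical row in the style of kut_grouped: name first, then mean frequencies
row = pd.Series({'Neighborhood': 'Canbury', 'Pub': 0.31, 'Park': 0.10, 'Hotel': 0.05})
top2 = return_most_common_venues(row, 2)
print(top2)  # ['Pub' 'Park']
```

Because the row starts with the neighborhood name, the helper slices it off with `iloc[1:]` before sorting, which is why the real code passes entire rows of `kut_grouped`.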
Create the new dataframe and display the top 10 venues for each neighborhood
```python
num_top_venues = 10

indicators = ['st', 'nd', 'rd']

# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
    try:
        columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
    except:
        columns.append('{}th Most Common Venue'.format(ind+1))

# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = kut_grouped['Neighborhood']

for ind in np.arange(kut_grouped.shape[0]):
    neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(kut_grouped.iloc[ind, :], num_top_venues)

neighborhoods_venues_sorted.head()
```
Clustering similar neighborhoods together using k-means clustering
```python
# import k-means from clustering stage
from sklearn.cluster import KMeans

# set number of clusters
kclusters = 5

kut_grouped_clustering = kut_grouped.drop('Neighborhood', axis=1)

# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(kut_grouped_clustering)

# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]

# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)

kut_merged = kut_neig

# merge neighborhoods_venues_sorted with kut_neig to add latitude/longitude for each neighborhood
kut_merged = kut_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')

kut_merged.head()  # check the last columns!
kut_merged.info()

# drop the row with the NaN value
kut_merged.dropna(inplace=True)
kut_merged.shape

kut_merged['Cluster Labels'] = kut_merged['Cluster Labels'].astype(int)
kut_merged.info()
```
```
<class 'pandas.core.frame.DataFrame'>
Int64Index: 14 entries, 0 to 14
Data columns (total 15 columns):
Neighborhood              14 non-null object
Borough                   14 non-null object
Latitude                  14 non-null float64
Longitude                 14 non-null float64
Cluster Labels            14 non-null int64
1st Most Common Venue     14 non-null object
2nd Most Common Venue     14 non-null object
3rd Most Common Venue     14 non-null object
4th Most Common Venue     14 non-null object
5th Most Common Venue     14 non-null object
6th Most Common Venue     14 non-null object
7th Most Common Venue     14 non-null object
8th Most Common Venue     14 non-null object
9th Most Common Venue     14 non-null object
10th Most Common Venue    14 non-null object
dtypes: float64(2), int64(1), object(12)
memory usage: 1.8+ KB
```
Visualize the clusters
```python
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11.5)

# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]

# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(kut_merged['Latitude'], kut_merged['Longitude'],
                                  kut_merged['Neighborhood'], kut_merged['Cluster Labels']):
    label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
    folium.CircleMarker(
        [lat, lon],
        radius=8,
        popup=label,
        color=rainbow[cluster-1],
        fill=True,
        fill_color=rainbow[cluster-1],
        fill_opacity=0.5).add_to(map_clusters)

map_clusters
```
Each cluster is color-coded for ease of presentation. We can see that the majority of the neighborhoods fall in the red cluster, which is the first cluster. Three neighborhoods have a cluster of their own (blue, purple, and yellow); these are clusters two, three, and five. The green cluster, the fourth, consists of two neighborhoods.

Analysis: analyse each of the clusters to identify the characteristics of each cluster and the neighborhoods in them.

Examine the first cluster
```python
kut_merged[kut_merged['Cluster Labels'] == 0]
```
Cluster one is the biggest cluster, with 9 of the 15 neighborhoods in the borough of Kingston upon Thames. Upon closely examining these neighborhoods, we can see that their most common venues are restaurants, pubs, cafés, supermarkets, and stores.

Examine the second cluster
```python
kut_merged[kut_merged['Cluster Labels'] == 1]
```
The second cluster has one neighborhood, whose venues include restaurants, golf courses, and wine shops.

Examine the third cluster
```python
kut_merged[kut_merged['Cluster Labels'] == 2]
```
The third cluster has one neighborhood, whose venues include train stations, restaurants, and furniture shops.

Examine the fourth cluster
```python
kut_merged[kut_merged['Cluster Labels'] == 3]
```
The fourth cluster has two neighborhoods; their common venues include parks, gym/fitness centers, bus stops, restaurants, electronics stores, and soccer fields.

Examine the fifth cluster
```python
kut_merged[kut_merged['Cluster Labels'] == 4]
```
Creating your own dataset from Google Images

*By Francisco Ingham and Jeremy Howard. Inspired by [Adrian Rosebrock](https://www.pyimagesearch.com/2017/12/04/how-to-create-a-deep-learning-dataset-using-google-images/).*

In this tutorial we will see how to easily create an image dataset through Google Images. **Note**: you will have to repeat these steps for any new category you want to Google (e.g. once for dogs and once for cats).
```python
from fastai.vision import *
```
License: Apache-2.0. Notebook: `nbs/dl1/lesson2-download.ipynb`. Repository: `technophile21/course-v3`.
Get a list of URLs

Search and scroll

Go to [Google Images](http://images.google.com) and search for the images you are interested in. The more specific you are in your Google search, the better the results and the less manual pruning you will have to do.

Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button and continue scrolling. The maximum number of images Google Images shows is 700.

It is a good idea to put things you want to exclude into the search query. For instance, if you are searching for the Eurasian wolf, "canis lupus lupus", it might be a good idea to exclude other variants:

    "canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis

You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.

Download into file

Now you must run some JavaScript code in your browser which will save the URLs of all the images you want for your dataset.

Press Ctrl+Shift+J on Windows/Linux or Cmd+Opt+J on Mac, and a small window, the JavaScript 'Console', will appear. That is where you will paste the JavaScript commands.

You will need to get the URLs of each of the images. You can do this by running the following commands:

```javascript
urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));
```

Create directory and upload urls file into your server

Choose an appropriate name for your labeled images. You can run these steps multiple times to create different labels.
```python
folder = 'black'
file = 'urls_black.txt'

folder = 'teddys'
file = 'urls_teddys.txt'

folder = 'grizzly'
file = 'urls_grizzly.txt'
```
You will need to run this cell once per category.
```python
path = Path('data/bears')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
path.ls()
```
Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files. ![uploaded file](images/download_images/upload.png)

Download images

Now you will need to download your images from their respective URLs.

fast.ai has a function that allows you to do just that. You just have to specify the urls filename as well as the destination folder, and this function will download and save all images that can be opened. Images that cannot be opened are not saved.

Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the URLs.

You will need to run this line once for every category.
```python
classes = ['teddys', 'grizzly', 'black']

download_images(path/file, dest, max_pics=200)

# If you have problems downloading, try with `max_workers=0` to see exceptions:
download_images(path/file, dest, max_pics=20, max_workers=0)
```
Then we can remove any images that can't be opened:
```python
for c in classes:
    print(c)
    verify_images(path/c, delete=True, max_size=500)
```
teddys
View data
```python
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224,
                                  num_workers=4).normalize(imagenet_stats)

# If you already cleaned your data, run this cell instead of the one before
# np.random.seed(42)
# data = ImageDataBunch.from_csv(".", folder=".", valid_pct=0.2, csv_labels='cleaned.csv',
#                                ds_tfms=get_transforms(), size=224,
#                                num_workers=4).normalize(imagenet_stats)
```
Good! Let's take a look at some of our pictures then.
```python
data.classes

data.show_batch(rows=3, figsize=(7,8))

data.classes, data.c, len(data.train_ds), len(data.valid_ds)
```
Train model
```python
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-1')

learn.unfreeze()
learn.lr_find()
learn.recorder.plot()

learn.fit_one_cycle(2, max_lr=slice(3e-5, 3e-4))
learn.save('stage-2')
```
Interpretation
```python
learn.load('stage-2');

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
```
Cleaning up

Some of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be there. Using the `ImageCleaner` widget from `fastai.widgets`, we can prune our top losses, removing photos that don't belong.
```python
from fastai.widgets import *
```
First we need to get the file paths from our top losses. We can do this with `.from_toplosses`. We then feed the top-loss indexes and the corresponding dataset to `ImageCleaner`.

Notice that the widget will not delete images directly from disk; it will create a new CSV file, `cleaned.csv`, from which you can create a new `ImageDataBunch` with the corrected labels to continue training your model.
```python
ds, idxs = DatasetFormatter().from_toplosses(learn, ds_type=DatasetType.Valid)
ImageCleaner(ds, idxs, path)
```
Flag photos for deletion by clicking 'Delete'. Then click 'Next Batch' to delete the flagged photos and keep the rest in that row. `ImageCleaner` will show you a new row of images until there are no more to show; in this case, the widget shows images until none are left from `top_losses`.

You can also find duplicates in your dataset and delete them! To do this, you need to run `.from_similars` to get the potential duplicates' ids and then run `ImageCleaner` with `duplicates=True`. The API works in a similar way as with misclassified images: just choose the ones you want to delete and click 'Next Batch' until there are no more images left.
```python
ds, idxs = DatasetFormatter().from_similars(learn, ds_type=DatasetType.Valid)
ImageCleaner(ds, idxs, path, duplicates=True)
```
Remember to recreate your `ImageDataBunch` from your `cleaned.csv` to include the changes you made to your data!

Putting your model in production

First things first, let's export the content of our `Learner` object for production:
```python
learn.export()
```
This will create a file named 'export.pkl' in the directory where we were working. It contains everything we need to deploy our model (the model, the weights, and also some metadata like the classes and the transforms/normalization used).

You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real time). If you don't have a GPU, that happens automatically. You can test your model on CPU like so:
```python
defaults.device = torch.device('cpu')

img = open_image(path/'black'/'00000021.jpg')
img
```
We create our `Learner` in the production environment like this; just make sure that `path` contains the file 'export.pkl' from before.
```python
learn = load_learner(path)

pred_class, pred_idx, outputs = learn.predict(img)
pred_class
```
So you might create a route something like this ([thanks](https://github.com/simonw/cougar-or-not) to Simon Willison for the structure of this code):

```python
@app.route("/classify-url", methods=["GET"])
async def classify_url(request):
    bytes = await get_bytes(request.query_params["url"])
    img = open_image(BytesIO(bytes))
    _, _, losses = learner.predict(img)
    return JSONResponse({
        "predictions": sorted(
            zip(learner.data.classes, map(float, losses)),
            key=lambda p: p[1],
            reverse=True
        )
    })
```

(This example is for the [Starlette](https://www.starlette.io/) web app toolkit.)

Things that can go wrong

- Most of the time things will train fine with the defaults
- There's not much you really need to tune (despite what you've heard!)
- The most likely culprits are:
  - Learning rate
  - Number of epochs

Learning rate (LR) too high
```python
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(1, max_lr=0.5)
```
```
Total time: 00:13
epoch  train_loss  valid_loss          error_rate
1      12.220007   1144188288.000000   0.765957  (00:13)
```
Learning rate (LR) too low
```python
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
```
Previously we had this result:

```
Total time: 00:57
epoch  train_loss  valid_loss  error_rate
1      1.030236    0.179226    0.028369  (00:14)
2      0.561508    0.055464    0.014184  (00:13)
3      0.396103    0.053801    0.014184  (00:13)
4      0.316883    0.050197    0.021277  (00:15)
```
```python
learn.fit_one_cycle(5, max_lr=1e-5)
learn.recorder.plot_losses()
```
As well as taking a really long time, it's getting too many looks at each image, so it may overfit.

Too few epochs
```python
learn = cnn_learner(data, models.resnet34, metrics=error_rate, pretrained=False)
learn.fit_one_cycle(1)
```
```
Total time: 00:14
epoch  train_loss  valid_loss  error_rate
1      0.602823    0.119616    0.049645  (00:14)
```
Too many epochs
```python
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.9, bs=32,
                                  ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1,
                                                         max_lighting=0, max_warp=0),
                                  size=224, num_workers=4).normalize(imagenet_stats)

learn = cnn_learner(data, models.resnet50, metrics=error_rate, ps=0, wd=0)
learn.unfreeze()

learn.fit_one_cycle(40, slice(1e-6, 1e-4))
```
```
Total time: 06:39
epoch  train_loss  valid_loss  error_rate
1      1.513021    1.041628    0.507326  (00:13)
2      1.290093    0.994758    0.443223  (00:09)
3      1.185764    0.936145    0.410256  (00:09)
4      1.117229    0.838402    0.322344  (00:09)
5      1.022635    0.734872    0.252747  (00:09)
6      0.951374    0.627288    0.192308  (00:10)
7      0.916111    0.558621    0.184982  (00:09)
8      0.839068    0.503755    0.177656  (00:09)
9      0.749610    0.433475    0.144689  (00:09)
10     0.678583    0.367560    0.124542  (00:09)
11     0.615280    0.327029    0.100733  (00:10)
12     0.558776    0.298989    0.095238  (00:09)
13     0.518109    0.266998    0.084249  (00:09)
14     0.476290    0.257858    0.084249  (00:09)
15     0.436865    0.227299    0.067766  (00:09)
16     0.457189    0.236593    0.078755  (00:10)
17     0.420905    0.240185    0.080586  (00:10)
18     0.395686    0.255465    0.082418  (00:09)
19     0.373232    0.263469    0.080586  (00:09)
20     0.348988    0.258300    0.080586  (00:10)
21     0.324616    0.261346    0.080586  (00:09)
22     0.311310    0.236431    0.071429  (00:09)
23     0.328342    0.245841    0.069597  (00:10)
24     0.306411    0.235111    0.064103  (00:10)
25     0.289134    0.227465    0.069597  (00:09)
26     0.284814    0.226022    0.064103  (00:09)
27     0.268398    0.222791    0.067766  (00:09)
28     0.255431    0.227751    0.073260  (00:10)
29     0.240742    0.235949    0.071429  (00:09)
30     0.227140    0.225221    0.075092  (00:09)
31     0.213877    0.214789    0.069597  (00:09)
32     0.201631    0.209382    0.062271  (00:10)
33     0.189988    0.210684    0.065934  (00:09)
34     0.181293    0.214666    0.073260  (00:09)
35     0.184095    0.222575    0.073260  (00:09)
36     0.194615    0.229198    0.076923  (00:10)
37     0.186165    0.218206    0.075092  (00:09)
38     0.176623    0.207198    0.062271  (00:10)
39     0.166854    0.207256    0.065934  (00:10)
40     0.162692    0.206044    0.062271  (00:09)
```
Tutorial: LIF Neuron - Part I

**Week 0, Day 1: Python Workshop 1**

**By Neuromatch Academy**

__Content creators:__ Marco Brigham and the [CCNSS](https://www.ccnss.org/) team

__Content reviewers:__ Michael Waskom, Karolina Stosio, Spiros Chavlis

---

Tutorial objectives

NMA students, you are going to use Python skills to advance your understanding of neuroscience. Just like two legs that support and strengthen each other. One has "Python" written on it, and the other has "Neuro". And step by step they go.

In this notebook, we'll practice basic operations with Python variables, control flow, plotting, and a sneak peek at `np.array`, the workhorse of scientific computation in Python.

Each new concept in Python will unlock a different aspect of our implementation of a **Leaky Integrate-and-Fire (LIF)** neuron. And as if it couldn't get any better, we'll visualize the evolution of its membrane potential in time, and extract its statistical properties!

Well then, let's start our walk today!

---

Imports and helper functions

Please execute the cell(s) below to initialize the notebook environment.
```python
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import YouTubeVideo

# @title Figure settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
```
License: CC-BY-4.0. Notebook: `tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb`. Repository: `bgalbraith/course-content`.
---

Neuron model

A *membrane equation* and a *reset condition* define our *leaky-integrate-and-fire (LIF)* neuron:

\begin{align*}
&\tau_m\,\frac{d}{dt}\,V(t) = E_{L} - V(t) + R\,I(t) &\text{if }\quad V(t) \leq V_{th}\\
&V(t) = V_{reset} &\text{otherwise}
\end{align*}

where $V(t)$ is the membrane potential, $\tau_m$ is the membrane time constant, $E_{L}$ is the leak potential, $R$ is the membrane resistance, $I(t)$ is the synaptic input current, $V_{th}$ is the firing threshold, and $V_{reset}$ is the reset voltage. We can also write $V_m$ for membrane potential, which is very convenient for plot labels.

The membrane equation is an *ordinary differential equation (ODE)* that describes the time evolution of membrane potential $V(t)$ in response to synaptic input and leaking of charge across the cell membrane.

**Note that, in this tutorial, the neuron model will not implement a spiking mechanism.**
```python
# @title Video: Synaptic input
video = YouTubeVideo(id='UP8rD2AwceM', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Exercise 1

We start by defining and initializing the main simulation variables.

**Suggestions**
* Modify the code below to print the simulation parameters
```python
# t_max = 150e-3   # second
# dt = 1e-3        # second
# tau = 20e-3      # second
# el = -60e-3      # millivolt
# vr = -70e-3      # millivolt
# vth = -50e-3     # millivolt
# r = 100e6        # ohm
# i_mean = 25e-11  # ampere

# print(t_max, dt, tau, el, vr, vth, r, i_mean)
```
**SAMPLE OUTPUT**

```
0.15 0.001 0.02 -0.06 -0.07 -0.05 100000000.0 2.5e-10
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_4adeccd3.py)

Exercise 2

![synaptic input](https://github.com/mpbrigham/colaboratory-figures/raw/master/nma/python-for-nma/synaptic_input.png)

We start with a sinusoidal model to simulate the synaptic input $I(t)$ given by:

\begin{align*}
I(t)=I_{mean}\left(1+\sin\left(\frac{2 \pi}{0.01}\,t\right)\right)
\end{align*}

Compute the values of synaptic input $I(t)$ between $t=0$ and $t=0.009$ with step $\Delta t=0.001$.

**Suggestions**
* Loop variable `step` for 10 steps (`step` takes values from `0` to `9`)
* At each time step
  * Compute the value of `t` with variables `step` and `dt`
  * Compute the value of `i`
  * Print `i`
* Use `np.pi` and `np.sin` for evaluating $\pi$ and $\sin(\cdot)$, respectively
```python
# initialize t
t = 0

# loop for 10 steps, variable 'step' takes values from 0 to 9
for step in range(10):
    t = step * dt
    i = ...
    print(i)
```
**SAMPLE OUTPUT**

```
2.5e-10
3.969463130731183e-10
4.877641290737885e-10
4.877641290737885e-10
3.9694631307311837e-10
2.5000000000000007e-10
1.0305368692688176e-10
1.2235870926211617e-11
1.223587092621159e-11
1.0305368692688186e-10
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_943bc60a.py)

Exercise 3

Print formatting is handy for displaying simulation parameters in a clean and organized form. Python 3.6 introduced the new string formatting [f-strings](https://www.python.org/dev/peps/pep-0498). Since we are dealing with type `float` variables, we use `f'{x:.3f}'` for formatting `x` to three decimal points, and `f'{x:.4e}'` for four decimal points in exponential notation.

```
x = 3.14159265e-1

print(f'{x:.3f}')
--> 0.314

print(f'{x:.4e}')
--> 3.1416e-01
```

Repeat the loop from the previous exercise and print the `t` values with three decimal points, and synaptic input $I(t)$ with four decimal points in exponential notation.

For additional formatting options with f-strings see [here](http://zetcode.com/python/fstring/).

**Suggestions**
* Print `t` and `i` with help of *f-strings* formatting
```python
# initialize step_end
step_end = 10

# loop for step_end steps
for step in range(step_end):
    t = step * dt
    i = ...
    print(...)
```
**SAMPLE OUTPUT**

```
0.000 2.5000e-10
0.001 3.9695e-10
0.002 4.8776e-10
0.003 4.8776e-10
0.004 3.9695e-10
0.005 2.5000e-10
0.006 1.0305e-10
0.007 1.2236e-11
0.008 1.2236e-11
0.009 1.0305e-10
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_cae53962.py)

ODE integration without spikes

In the next exercises, we simulate the evolution of the membrane equation in discrete time steps, with a sufficiently small $\Delta t$.

We start by writing the time derivative $d/dt\,V(t)$ in the membrane equation without taking the limit $\Delta t \to 0$:

\begin{align*}
\tau_m\,\frac{V\left(t+\Delta t\right)-V\left(t\right)}{\Delta t} = E_{L} - V(t) + R\,I(t) \qquad\qquad (1)
\end{align*}

The value of membrane potential $V\left(t+\Delta t\right)$ can be expressed in terms of its previous value $V(t)$ by simple algebraic manipulation. For *small enough* values of $\Delta t$, this provides a good approximation of the continuous-time integration.

This operation is an integration since we obtain a sequence $\{V(t), V(t+\Delta t), V(t+2\Delta t),...\}$ starting from the ODE. Notice how the ODE describes the evolution of $\frac{d}{dt}\,V(t)$, the derivative of $V(t)$, but not directly the evolution of $V(t)$. For the evolution of $V(t)$ we need to integrate the ODE, and in this tutorial we will do a discrete-time integration using the Euler method. See [Numerical methods for ordinary differential equations](https://en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations) for additional details.
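Rearranging Eq. (1) gives the update rule $V(t+\Delta t) = V(t) + \frac{\Delta t}{\tau_m}\left(E_L - V(t) + R\,I(t)\right)$. As a minimal sketch (not the exercise solution), a single Euler step from $V(0)=E_L$, using the parameter values from Exercise 1 and the sinusoidal input from Exercise 2, looks like this:

```python
import numpy as np

# parameter values from Exercise 1
dt = 1e-3        # time step (s)
tau = 20e-3      # membrane time constant (s)
el = -60e-3      # leak potential (V)
r = 100e6        # membrane resistance (ohm)
i_mean = 25e-11  # mean input current (A)

v = el                                           # V(0) = E_L
i = i_mean * (1 + np.sin(2 * np.pi * 0 / 0.01))  # sinusoidal input at t = 0
v = v + (dt / tau) * (el - v + r * i)            # one Euler step of Eq. (1)
print(f'{v:.4e}')  # -5.8750e-02
```

The printed value matches the sample output of Exercise 4 at $t=0.001$; repeating the step inside a loop produces the full sequence $\{V(t), V(t+\Delta t), ...\}$.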
```python
# @title Video: Discrete time integration
video = YouTubeVideo(id='kyCbeR28AYQ', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Exercise 4

Compute the values of $V(t)$ between $t=0$ and $t=0.01$ with step $\Delta t=0.001$ and $V(0)=E_L$.

We will write a `for` loop from scratch in this exercise. The following three formulations are all equivalent and loop for three steps:

```
for step in [0, 1, 2]:
  print(step)

for step in range(3):
  print(step)

start = 0
end = 3
stepsize = 1

for step in range(start, end, stepsize):
  print(step)
```

**Suggestions**
* Reorganize Eq. (1) to isolate $V\left(t+\Delta t\right)$ on the left side, and express it as a function of $V(t)$ and the other terms
* Initialize the membrane potential variable `v` to leak potential `el`
* Loop variable `step` for `10` steps
* At each time step
  * Compute the current value of `t`, `i`
  * Print the current value of `t` and `v`
  * Update the value of `v`
```python
# initialize step_end and v
step_end = 10
v = el

# loop for step_end steps
for step in range(step_end):
    t = step * dt
    i = ...

    print(...)

    v = ...
```
**SAMPLE OUTPUT**

```
0.000 -6.0000e-02
0.001 -5.8750e-02
0.002 -5.6828e-02
0.003 -5.4548e-02
0.004 -5.2381e-02
0.005 -5.0778e-02
0.006 -4.9989e-02
0.007 -4.9974e-02
0.008 -5.0414e-02
0.009 -5.0832e-02
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_95c91766.py)
```python
# @title Video: Plotting
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='BOh8CsuTFkY', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Exercise 5

![synaptic input discrete](https://github.com/mpbrigham/colaboratory-figures/raw/master/nma/python-for-nma/synaptic_input_discrete.png)

Plot the values of $I(t)$ between $t=0$ and $t=0.024$.

**Suggestions**
* Increase `step_end`
* Initialize the figure with `plt.figure`; set the title and the x and y labels with `plt.title`, `plt.xlabel` and `plt.ylabel`, respectively
* Replace the printing command `print` with the plotting command `plt.plot` with argument `'ko'` (short version for `color='k'` and `marker='o'`) for small black dots
* Use `plt.show()` at the end to display the plot
# initialize step_end step_end = 25 # initialize the figure plt.figure() # Complete these lines and uncomment # plt.title(...) # plt.xlabel(...) # plt.ylabel(...) # loop for step_end steps for step in range(step_end): t = step * dt i = ... # Complete this line and uncomment # plt.plot(...) # plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_23446a7e.py)

*Example output:*

Exercise 6

Plot the values of $V(t)$ between $t=0$ and $t=t_{max}$.

**Suggestions**
* Compute the required number of steps with `int(t_max/dt)`
* Use the plotting command with argument `'k.'` for small(er) black dots
# initialize step_end and v step_end = int(t_max / dt) v = el # initialize the figure plt.figure() plt.title('$V_m$ with sinusoidal I(t)') plt.xlabel('time (s)') plt.ylabel('$V_m$ (V)'); # loop for step_end steps for step in range(step_end): t = step * dt i = ... # Complete this line and uncomment # plt.plot(...) v = ... t = t + dt # Complete this line and uncomment # plt.plot(...) plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_1046fd94.py)

*Example output:*

---

Random synaptic input

From the perspective of neurons, synaptic input is random (or stochastic). We'll improve the synaptic input model by introducing a random input current with statistical properties similar to the previous exercise:

\begin{align*}
\\
I(t)=I_{mean}\left(1+0.1\sqrt{\frac{t_{max}}{\Delta t}}\,\xi(t)\right)\qquad\text{with }\xi(t)\sim U(-1,1)
\\
\end{align*}

where $U(-1,1)$ is the [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)) with support $x\in[-1,1]$.

Random synaptic input $I(t)$ results in a random time course for $V(t)$.

Exercise 7

Plot the values of $V(t)$ between $t=0$ and $t=t_{max}-\Delta t$ with random input $I(t)$.

Initialize the (pseudo) random number generator (RNG) to a fixed value to obtain the same random input each time. The function `np.random.seed()` initializes the RNG, and `np.random.random()` generates samples from the uniform distribution between `0` and `1`.

**Suggestions**
* Use `np.random.seed()` to initialize the RNG to `0`
* Use `np.random.random()` to generate random input in the range `[0,1]` at each timestep
* Multiply the random input by an appropriate factor to expand the range to `[-1,1]`
* Verify that $V(t)$ has a random time course by changing the initial RNG value
* Alternatively, comment out the RNG initialization by typing `CTRL` + `\` in the relevant line
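The suggested rescaling from the `[0, 1)` range of `np.random.random()` to the $[-1,1]$ support of $\xi(t)$ is a one-liner; a small sketch (the seed and sample count are arbitrary choices):

```python
import numpy as np

# np.random.random() samples U(0, 1); rescale to U(-1, 1) with 2*u - 1
np.random.seed(0)            # fix the RNG for reproducibility
u = np.random.random(1000)   # samples in [0, 1)
xi = 2 * u - 1               # samples in [-1, 1)

print(xi.min(), xi.max())
```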
# set random number generator np.random.seed(2020) # initialize step_end and v step_end = int(t_max / dt) v = el # initialize the figure plt.figure() plt.title('$V_m$ with random I(t)') plt.xlabel('time (s)') plt.ylabel('$V_m$ (V)') # loop for step_end steps for step in range(step_end): t = step * dt # Complete this line and uncomment # plt.plot(...) i = ... v = ... plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_41355f96.py)

*Example output:*

Ensemble statistics

Multiple runs of the previous exercise may give the impression of periodic regularity in the evolution of $V(t)$. We'll collect the sample mean over $N=50$ realizations of $V(t)$ with random input to test such a hypothesis. The sample mean, sample variance and sample autocovariance at times $\left\{t, s\right\}\in[0,t_{max}]$, and for $N$ realizations $V_n(t)$, are given by:

\begin{align*}
\left\langle V(t)\right\rangle &= \frac{1}{N}\sum_{n=1}^N V_n(t) & & \text{sample mean}\\
\left\langle (V(t)-\left\langle V(t)\right\rangle)^2\right\rangle &= \frac{1}{N-1} \sum_{n=1}^N \left(V_n(t)-\left\langle V(t)\right\rangle\right)^2 & & \text{sample variance} \\
\left\langle \left(V(t)-\left\langle V(t)\right\rangle\right)\left(V(s)-\left\langle V(s)\right\rangle\right)\right\rangle &= \frac{1}{N-1} \sum_{n=1}^N \left(V_n(t)-\left\langle V(t)\right\rangle\right)\left(V_n(s)-\left\langle V(s)\right\rangle\right) & & \text{sample autocovariance}
\end{align*}
# @title Video: Ensemble statistics video = YouTubeVideo(id='4nIAS2oPEFI', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
Exercise 8

Plot multiple realizations ($N=50$) of $V(t)$ by storing in a list the voltage of each neuron at time $t$.

Keep in mind that the plotting command `plt.plot(x, y)` requires `x` to have the same number of elements as `y`.

Mathematical symbols such as $\alpha$ and $\beta$ are specified as `$\alpha$` and `$\beta$` in [TeX markup](https://en.wikipedia.org/wiki/TeX). See additional details in [Writing mathematical expressions](https://matplotlib.org/3.2.2/tutorials/text/mathtext.html) in Matplotlib.

**Suggestions**
* Initialize a list `v_n` with `50` values of membrane leak potential `el`
* At each time step:
  * Plot `v_n` with argument `'k.'` and parameter `alpha=0.05` to adjust the transparency (by default, `alpha=1`)
  * In the plot command, replace `t` from the previous exercises with a list of size `n` with values `t`
  * Loop over `50` realizations of random input
  * Update `v_n` with the values of $V(t)$
* Why is there a black dot at $t=0$?
# set random number generator np.random.seed(2020) # initialize step_end, n and v_n step_end = int(t_max / dt) n = 50 # Complete this line and uncomment # v_n = ... # initialize the figure plt.figure() plt.title('Multiple realizations of $V_m$') plt.xlabel('time (s)') plt.ylabel('$V_m$ (V)') # loop for step_end steps for step in range(step_end): t = step * dt # Complete this line and uncomment # plt.plot(...) # loop for n steps for j in range(0, n): i = ... # Complete this line and uncomment # v_n[j] = ... plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_8b55f5dd.py)

*Example output:*

Exercise 9

Add the sample mean $\left\langle V(t)\right\rangle=\frac{1}{N}\sum_{n=1}^N V_n(t)$ to the plot.

**Suggestions**
* At each timestep:
  * Compute and store in `v_mean` the sample mean $\left\langle V(t)\right\rangle$ by summing the values of list `v_n` with `sum` and dividing by `n`
  * Plot $\left\langle V(t)\right\rangle$ with `alpha=0.8` and argument `'C0.'` for blue (you can read more about [specifying colors](https://matplotlib.org/tutorials/colors/colors.html#sphx-glr-tutorials-colors-colors-py))
  * Loop over `50` realizations of random input
  * Update `v_n` with the values of $V(t)$
# set random number generator np.random.seed(2020) # initialize step_end, n and v_n step_end = int(t_max / dt) n = 50 v_n = [el] * n # initialize the figure plt.figure() plt.title('Multiple realizations of $V_m$') plt.xlabel('time (s)') plt.ylabel('$V_m$ (V)') # loop for step_end steps for step in range(step_end): t = step * dt v_mean = ... # Complete these lines and uncomment # plt.plot(...) # plt.plot(...) for j in range(0, n): i = ... v_n[j] = ... plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_98017570.py)

*Example output:*

Exercise 10

Add the sample standard deviation $\sigma(t)\equiv\sqrt{\text{Var}\left(t\right)}$ to the plot, with sample variance $\text{Var}(t) = \frac{1}{N-1} \sum_{n=1}^N \left(V_n(t)-\left\langle V(t)\right\rangle\right)^2$.

Use a list comprehension to collect the sample variance `v_var`. Here's an example to initialize a list with squares of `0` to `9`:

```
squares = [x**2 for x in range(10)]
print(squares)
--> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Why are we plotting $\sigma(t)$ rather than $\text{Var}(t)$? What are the units of each, and the units of $\left\langle V(t)\right\rangle$?

**Suggestions**
* At each timestep:
  * Compute and store in `v_mean` the sample mean $\left\langle V(t)\right\rangle$
  * Initialize a list `v_var_n` with the contribution of each $V_n(t)$ to $\text{Var}\left(t\right)$ with a list comprehension over the values of `v_n`
  * Compute the sample variance `v_var` by summing the values of `v_var_n` with `sum` and dividing by `n-1`
  * (alternative: loop over the values of `v_n`, add each contribution $V_n(t)$ to `v_var`, and divide by `n-1` outside the loop)
  * Compute the standard deviation `v_std` with the function `np.sqrt`
  * Plot $\left\langle V(t)\right\rangle\pm\sigma(t)$ with `alpha=0.8` and argument `'C7.'`
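The `v_var_n` / `v_var` steps suggested above can be sketched on toy values, and checked against NumPy's own sample variance (the numbers below are made up stand-ins for voltages):

```python
import numpy as np

# Sample variance of n realizations via a list comprehension
v_n = [0.2, 0.5, 0.1, 0.4]                 # toy "voltages" (illustrative values)
n = len(v_n)

v_mean = sum(v_n) / n
v_var_n = [(v - v_mean)**2 for v in v_n]   # each realization's contribution
v_var = sum(v_var_n) / (n - 1)             # unbiased sample variance
v_std = np.sqrt(v_var)

# matches NumPy's ddof=1 (sample) variance
print(v_var, np.var(v_n, ddof=1))
```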
# set random number generator np.random.seed(2020) # initialize step_end, n and v_n step_end = int(t_max / dt) n = 50 v_n = [el] * n # initialize the figure plt.figure() plt.title('Multiple realizations of $V_m$') plt.xlabel('time (s)') plt.ylabel('$V_m$ (V)') # loop for step_end steps for step in range(step_end): t = step * dt v_mean = ... v_var_n = ... v_var = ... v_std = ... # Complete these lines and uncomment # plt.plot(...) # plt.plot(...) # plt.plot(...) # plt.plot(...) for j in range(0, n): i = ... v_n[j] = ... plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_9e048e4b.py)

*Example output:*

---

Using NumPy

The next set of exercises introduces `np.array`, the workhorse from the scientific computation package [NumPy](https://numpy.org). NumPy arrays are the default container for numerical data storage and computation, and they will let us separate the computing steps from the plotting.

![NumPy package](https://github.com/mpbrigham/colaboratory-figures/raw/master/nma/python-for-nma/numpy_logo_small.png)

In the previous exercises we updated plots inside the main loop and stored intermediate results in lists for plotting. The purpose was to keep the earlier exercises as simple as possible. However, there are very few scenarios where this technique is necessary, and you should avoid it in the future. Using NumPy arrays will significantly simplify our code: we compute inside the main loop and plot afterward.

Lists are much more natural for storing data for purposes other than computation. For example, lists are handy for storing numerical indexes and text.
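A small illustration of the shift from list-based loops to precomputed NumPy arrays (a toy computation, not part of the tutorial itself):

```python
import numpy as np

# List style: build values one at a time inside the loop
squares_list = []
for k in range(5):
    squares_list.append(k**2)

# NumPy style: precompute the whole array at once, then use it
k = np.arange(5)
squares_arr = k**2          # vectorized, no explicit loop

print(squares_list, squares_arr)
```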
# @title Video: Using NumPy video = YouTubeVideo(id='ewyHKKa2_OU', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
Exercise 11

Rewrite the single neuron plot with random input from _Exercise 7_ with NumPy arrays. The time range, voltage values, and synaptic current are initialized or pre-computed as NumPy arrays before numerical integration.

**Suggestions**
* Use `np.linspace` to initialize a NumPy array `t_range` with `num=step_end=150` values from `0` to `t_max`
* Use `np.ones` to initialize a NumPy array `v` with `step_end + 1` leak potential values `el`
* Pre-compute `step_end` synaptic current values in a NumPy array `syn` with `np.random.random(step_end)` for `step_end` random numbers
* Iterate for numerical integration of `v`
* Since `v[0]=el`, we should iterate for `step_end` steps, for example by skipping `step=0`. Why?
# set random number generator np.random.seed(2020) # initialize step_end, t_range, v and syn step_end = int(t_max / dt) - 1 # skip the endpoint to match Exercise 7 plot t_range = np.linspace(0, t_max, num=step_end, endpoint=False) v = el * np.ones(step_end) syn = ... # loop for step_end - 1 steps # Complete these lines and uncomment # for step in range(1, step_end): # v[step] = ... plt.figure() plt.title('$V_m$ with random I(t)') plt.xlabel('time (s)') plt.ylabel('$V_m$ (V)') plt.plot(t_range, v, 'k.') plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_4427a815.py)

*Example output:*

Exercise 12

Let's practice using `enumerate` to iterate over the indexes and values of the synaptic current array `syn`.

**Suggestions**
* Iterate over the indexes and values of `syn` with `enumerate` in the `for` loop
* Plot `v` with argument `'k'` to display a line instead of dots
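As a quick illustration of the pattern, `enumerate` yields `(index, value)` pairs, so the loop can skip the first entry while keeping indexes aligned (toy values, not the real synaptic currents):

```python
# enumerate gives (index, value) pairs; continue skips the first iteration
syn = [0.3, 0.5, 0.2, 0.4]   # toy synaptic current values

updated = []
for step, i in enumerate(syn):
    if step == 0:
        continue             # skip first iteration, as in the exercise
    updated.append((step, i))

print(updated)
```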
# set random number generator np.random.seed(2020) # initialize step_end, t_range, v and syn step_end = int(t_max / dt) t_range = np.linspace(0, t_max, num=step_end) v = el * np.ones(step_end) syn = i_mean * (1 + 0.1 * (t_max / dt)**(0.5) * (2 * np.random.random(step_end) - 1)) # loop for step_end values of syn for step, i in enumerate(syn): # skip first iteration if step==0: continue # Complete this line and uncomment # v[step] = ... plt.figure() plt.title('$V_m$ with random I(t)') plt.xlabel('time (s)') plt.ylabel('$V_m$ (V)') plt.plot(t_range, v, 'k') plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_4139f63a.py)

*Example output:*
# @title Video: Aggregation video = YouTubeVideo(id='1ME-0rJXLFg', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
Exercise 13

Plot multiple realizations ($N=50$) of $V(t)$ by storing the voltage of each neuron at time $t$ in a NumPy array.

**Suggestions**
* Initialize a NumPy array `v_n` of shape `(n, step_end)` with membrane leak potential values `el`
* Pre-compute synaptic current values in a NumPy array `syn` of shape `(n, step_end)`
* Iterate `step_end` steps with a `for` loop for numerical integration
* Plot the results with a single plot command, by providing `v_n.T` to the plot function. `v_n.T` is the transposed version of `v_n` (with rows and columns swapped).
# set random number generator np.random.seed(2020) # initialize step_end, n, t_range, v and syn step_end = int(t_max / dt) n = 50 t_range = np.linspace(0, t_max, num=step_end) v_n = el * np.ones([n, step_end]) syn = ... # loop for step_end - 1 steps # Complete these lines and uncomment # for step in range(1, step_end): # v_n[:, step] = ... # initialize the figure plt.figure() plt.title('Multiple realizations of $V_m$') plt.xlabel('time (s)') plt.ylabel('$V_m$ (V)') # Complete this line and uncomment # plt.plot(...) plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D1_PythonWorkshop1/solutions/W0D1_Tutorial1_Solution_e8466b6b.py)

*Example output:*

Exercise 14

Add the sample mean $\left\langle V(t)\right\rangle$ and standard deviation $\sigma(t)\equiv\sqrt{\text{Var}\left(t\right)}$ to the plot.

With `v_n` of shape `(n, step_end)`:
`np.mean(v_n, axis=0)` averages over rows (axis `0`, the neurons), i.e. the ensemble mean at each time step.
`np.mean(v_n, axis=1)` averages over columns (axis `1`, the time steps), i.e. the time average for each neuron.

**Suggestions**
* Use `np.mean` and `np.std` with `axis=0` to aggregate over neurons
* Use the `label` argument in `plt.plot` to specify labels for each trace. Label only the last voltage trace to avoid labeling all `N` of them.
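The `axis` semantics are easy to check on a tiny made-up ensemble (3 "neurons" over 4 time steps):

```python
import numpy as np

# Toy ensemble: 3 "neurons" (rows) recorded over 4 time steps (columns)
v_n = np.array([[1., 2., 3., 4.],
                [1., 2., 3., 4.],
                [4., 2., 3., 4.]])

mean_over_neurons = np.mean(v_n, axis=0)   # shape (4,): one value per time step
mean_over_time = np.mean(v_n, axis=1)      # shape (3,): one value per neuron

print(mean_over_neurons.shape, mean_over_time.shape)
```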
# set random number generator np.random.seed(2020) # initialize step_end, n, t_range, v and syn step_end = int(t_max / dt) n = 50 t_range = np.linspace(0, t_max, num=step_end) v_n = el * np.ones([n, step_end]) syn = i_mean * (1 + 0.1 * (t_max / dt)**(0.5) * (2 * np.random.random([n, step_end]) - 1)) # loop for step_end - 1 steps for step in range(1, step_end): v_n[:,step] = v_n[:,step - 1] + (dt / tau) * (el - v_n[:, step - 1] + r * syn[:, step]) v_mean = ... v_std = ... # initialize the figure plt.figure() plt.title('Multiple realizations of $V_m$') plt.xlabel('time (s)') plt.ylabel('$V_m$ (V)') plt.plot(t_range, v_n[:-1].T, 'k', alpha=0.3) # Complete these lines and uncomment # plt.plot(t_range, v_n[-1], 'k', alpha=0.3, label=...) # plt.plot(t_range, ..., 'C0', alpha=0.8, label='mean') # plt.plot(t_range, ..., 'C7', alpha=0.8) # plt.plot(t_range, ..., 'C7', alpha=0.8, label=...) #plt.legend() plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D1_PythonWorkshop1/student/W0D1_Tutorial1.ipynb
bgalbraith/course-content
How does a population grow?

Before starting: fill out the following survey.
- https://forms.office.com/Pages/ResponsePage.aspx?id=8kgDb5jkyUWE9MbYHc_9_oplb4UZe4dMnU4bxi5xU55UQjlEQ1pLWElPOE9ON082RktFQVdRWEtPSS4u

> The simplest model of population growth of organisms is $\frac{dx}{dt}=rx$, where $x(t)$ is the population at time $t$ and $r>0$ is the growth rate.

> This model predicts exponential growth $x(t)=x_0e^{rt}$ (the solution of the differential equation), where $x_0=x(0)$ is the initial population. Is this valid?

- Recall that $\lim_{t\to\infty}x(t)=x_0\lim_{t\to\infty}e^{rt}=\infty$.
- This model therefore accounts for neither overpopulation nor limited resources.

> In reality the growth rate is not a constant, but depends on the population: $\frac{dx}{dt}=\mu(x)x$. When $x$ is small, $\mu(x)\approx r$ as before, but when $x>1$ (normalized population) $\mu(x)<0$: the death rate exceeds the birth rate. A mathematically convenient way to model this is with a growth rate $\mu(x)$ that decreases linearly with $x$.

Reference:
- Strogatz, Steven. *NONLINEAR DYNAMICS AND CHAOS*, ISBN: 9780813349107 (eBook available in the library).

The Logistic Equation

First, let's see what $\mu(x)$ looks like with linear decrease in the population $x$. Since we want $\mu(0)=r$ and $\mu(1)=0$, the straight line connecting these points is... (plot it)
# Import the necessary libraries

# Define the function mu(x)

# Plot
_____no_output_____
MIT
Modulo2/.ipynb_checkpoints/Clase11_MapaLogistico-checkpoint.ipynb
ariadnagalindom/SimMat2018-2
___

With this choice of $\mu(x)=r(1-x)$, we obtain the so-called **logistic equation**, published by Pierre Verhulst in 1838:

$$\frac{dx}{dt} = r\; x\; (1- x)$$

**Solution of the differential equation**

The differential equation above has an *analytic solution*,

$$ x(t) = \frac{1}{1+ (\frac{1}{x_{0}}- 1) e^{-rt}}.$$

See the derivation on the board...

We plot several curves of the analytic solution for $r = \left[-1, 1\right]$.
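The analytic solution above is easy to turn into a function and sanity-check: $x(0)$ recovers the initial condition, and for $r>0$ the population saturates at the carrying capacity $1$. A minimal sketch (the values of `r` and `x0` are illustrative choices):

```python
import numpy as np

# Analytic solution of the logistic equation dx/dt = r*x*(1-x)
def x_analytic(t, x0, r):
    return 1 / (1 + (1 / x0 - 1) * np.exp(-r * t))

t = np.linspace(0, 50, 500)
x = x_analytic(t, x0=0.05, r=1.0)

# x(0) = x0, and for r > 0 the solution saturates at 1
print(x[0], x[-1])
```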
# Define the analytic solution x(t, x0)

# Time vector

# Initial condition

# Plot for different r between -1 and 1
_____no_output_____
MIT
Modulo2/.ipynb_checkpoints/Clase11_MapaLogistico-checkpoint.ipynb
ariadnagalindom/SimMat2018-2
As we can see, the solution of this equation in continuous time leads either to extinction or to saturation at the carrying capacity, depending on the value assigned to $r$. *Numerically*, how would we solve this equation?
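One common way to integrate the equation numerically in Python is `scipy.integrate.odeint`; a minimal sketch (the values of `r` and `x0` are illustrative choices, and the result is checked against the analytic solution from above):

```python
import numpy as np
from scipy.integrate import odeint

# Right-hand side (field) of the logistic equation dx/dt = r*x*(1-x)
def f(x, t, r):
    return r * x * (1 - x)

r = 1.0          # illustrative growth rate
x0 = 0.05        # illustrative initial condition
t = np.linspace(0, 10, 200)

# Numerical solution
x_num = odeint(f, x0, t, args=(r,)).flatten()

# Analytic solution for comparison
x_exact = 1 / (1 + (1 / x0 - 1) * np.exp(-r * t))
print(np.max(np.abs(x_num - x_exact)))
```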
# Import a function to numerically integrate differential equations

# Define the field (right-hand side) of the differential equation

# Parameter r

# Initial condition

# Time vector

# Solution

# Plot of the solution
_____no_output_____
MIT
Modulo2/.ipynb_checkpoints/Clase11_MapaLogistico-checkpoint.ipynb
ariadnagalindom/SimMat2018-2
How good is the approximation of the numerical solution?

There are nonlinear ordinary differential equations for which it is impossible to obtain the exact solution. In those cases, an approximate solution is evaluated numerically.

For the case above it was possible to obtain the exact solution, which lets us compare both solutions and evaluate how good the approximation given by the numerical solution is.

First let's look at this graphically.
# Numerical solution

# Exact solution

# Comparison plot
_____no_output_____
MIT
Modulo2/.ipynb_checkpoints/Clase11_MapaLogistico-checkpoint.ipynb
ariadnagalindom/SimMat2018-2
Graphically we see that the numerical solution is close to (coincides with) the exact solution. However, with this plot we cannot visualize just how close the two solutions are. What if we evaluate the error?
# Approximation error

# Error plot
_____no_output_____
MIT
Modulo2/.ipynb_checkpoints/Clase11_MapaLogistico-checkpoint.ipynb
ariadnagalindom/SimMat2018-2
So, **qualitatively**, we have already seen that the numerical solution is *good enough*. Still, it is always good to quantify *how good* the approximation is. Several ways:

- Norm of the error: we have the approximation error at certain points (specified by the time vector). This error is then a vector, and we can take its 2-norm
$$||e||_2=\sqrt{e[0]^2+\dots+e[n-1]^2}$$
np.linalg.norm(error)
_____no_output_____
MIT
Modulo2/.ipynb_checkpoints/Clase11_MapaLogistico-checkpoint.ipynb
ariadnagalindom/SimMat2018-2
- Mean squared error: another way to quantify is with the mean squared error
$$e_{ms}=\frac{e[0]^2+\dots+e[n-1]^2}{n}$$
np.mean(error**2)
_____no_output_____
MIT
Modulo2/.ipynb_checkpoints/Clase11_MapaLogistico-checkpoint.ipynb
ariadnagalindom/SimMat2018-2
- Integral of the squared error: evaluates the accumulation of squared error. It can be computed with the following rectangular approximation of the integral
$$e_{is}=\int_{0}^{t_f}e(t)^2\text{d}t\approx \left(e[0]^2+\dots+e[n-1]^2\right)h$$
where $h$ is the step size of the time vector.
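On a uniform time grid the three measures are closely related: $\|e\|_2^2 = n\,e_{ms} = e_{is}/h$. A toy numerical check (the error vector below is made up):

```python
import numpy as np

# Toy error vector on a uniform time grid
t = np.linspace(0, 1, 5)                        # 5 points, step h = 0.25
error = np.array([0.0, 0.1, -0.2, 0.05, 0.0])
h = t[1] - t[0]

norm2 = np.linalg.norm(error)     # 2-norm of the error
e_ms = np.mean(error**2)          # mean squared error
e_is = np.sum(error**2) * h       # rectangular integral of squared error

n = len(error)
print(norm2**2, n * e_ms, e_is / h)   # all three coincide
```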
h = t[1] - t[0]
np.sum(error**2) * h
_____no_output_____
MIT
Modulo2/.ipynb_checkpoints/Clase11_MapaLogistico-checkpoint.ipynb
ariadnagalindom/SimMat2018-2
Comments on the logistic model

The model should not be taken literally. Rather, it should be interpreted metaphorically: the population has a tendency to grow up to its cap, or else to disappear.

The logistic equation was tested in laboratory experiments on bacterial colonies under constant climate, food supply, and absence of predators. The experiments showed that the equation predicted the real behavior very well.

On the other hand, the prediction was not as good for fruit flies, beetles, and other organisms with complex life cycles. In those cases, huge fluctuations (oscillations) of the population were observed.

___

The logistic map

> The logistic equation (logistic growth curve) is a model of growth that is continuous in time. A modification of the continuous equation into a discrete recurrence equation, known as the **logistic map**, is widely used.

References:
- https://es.wikipedia.org/wiki/Aplicación_log%C3%ADstica
- https://en.wikipedia.org/wiki/Logistic_map

> We replace the logistic equation by the difference equation:
> $$x_{n+1} = r\; x_{n}(1- x_{n}),$$
> where $r$ is the maximum growth rate of the population and $x_{n}$ is the n-th iterate. What we have to program is then the recurrence relation
> $$x_{n+1}^{(r)} = f_r(x_n^{(r)}) = rx_n^{(r)}(1-x_n^{(r)})$$

The following `gif` shows the first 63 iterations of the above equation for different values of $r$ between 2 and 4. Taken from https://upload.wikimedia.org/wikipedia/commons/1/1f/Logistic_map_animation.gif.

Note that:
- For $2<r<3$ the solutions stabilize at an equilibrium value.
- For $3<r<1+\sqrt{6}\approx 3.44949$ the solutions oscillate between two values.
- For $3.44949<r<3.54409$ the solutions oscillate between four values.
- For $r>3.54409$ the solutions exhibit **chaotic** behavior.
Chaos: aperiodic deterministic behavior highly sensitive to initial conditions. That is, small variations in those initial conditions can imply large differences in future behavior.

**How can we capture this behavior in a single plot?**
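The regimes listed above are easy to check by iterating the map directly; a small sketch (the helper function and the `r` values chosen for each regime are mine, not part of the original notebook). For $r=2.5$ the iterates converge to the fixed point $x^*=1-1/r=0.6$; for $r=3.2$ they settle into a period-2 cycle.

```python
# Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n) and inspect the tail
def iterate(r, x0=0.2, n_iter=1000, keep=4):
    x = x0
    hist = []
    for _ in range(n_iter):
        x = r * x * (1 - x)
        hist.append(x)
    return hist[-keep:]    # last few iterates show the long-term behavior

# r = 2.5: converges to the fixed point x* = 1 - 1/r = 0.6
print(iterate(2.5))

# r = 3.2: oscillates between two values (period-2 cycle)
print(iterate(3.2))
```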
# Definition of the logistic map function
def mapa_logistico(r, x):
    return r * x * (1 - x)

# One thousand values of r between 2.0 and 4.0
n = 1000
r = np.linspace(2.0, 4.0, n)

# Run 1000 iterations and keep the last 100 (capturing the long-term behavior)
iteraciones = 1000
ultimos = 100

# The same initial condition for every case.
x = 1e-5 * np.ones(n)

# Plot
plt.figure(figsize=(7, 5))
for i in np.arange(iteraciones):
    x = mapa_logistico(r, x)
    if i >= (iteraciones - ultimos):
        plt.plot(r, x, ',k', alpha=.2)
plt.xlim(np.min(r), np.max(r))
plt.ylim(-.1, 1.1)
plt.title("Bifurcation diagram", fontsize=20)
plt.xlabel('$r$', fontsize=18)
plt.ylabel('$x$', fontsize=18)
plt.show()

fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', sharey='row', figsize=(13, 4.5))
r = np.linspace(.5, 4.0, n)
for i in np.arange(iteraciones):
    x = mapa_logistico(r, x)
    if i >= (iteraciones - ultimos):
        ax1.plot(r, x, '.k', alpha=1, ms=.1)
r = np.linspace(2.5, 4.0, n)
for i in np.arange(iteraciones):
    x = mapa_logistico(r, x)
    if i >= (iteraciones - ultimos):
        ax2.plot(r, x, '.k', alpha=1, ms=.1)
ax1.set_xlim(.4, 4)
ax1.set_ylim(-.1, 1.1)
ax2.set_xlim(2.5, 4)
ax2.set_ylim(-.1, 1.1)
ax1.set_ylabel('$x$', fontsize=20)
ax1.set_xlabel('$r$', fontsize=20)
ax2.set_xlabel('$r$', fontsize=20)
plt.show()

fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', sharey='row', figsize=(13, 4.5))
r = np.linspace(.5, 4.0, n)
for i in np.arange(iteraciones):
    x = mapa_logistico(r, x)
    if i >= (iteraciones - ultimos):
        ax1.scatter(r, x, s=.1, cmap='inferno', c=x, lw=0)
r = np.linspace(2.5, 4.0, n)
for i in np.arange(iteraciones):
    x = mapa_logistico(r, x)
    if i >= (iteraciones - ultimos):
        ax2.scatter(r, x, s=.1, cmap='inferno', c=x, lw=0)
ax1.set_xlim(.4, 4)
ax1.set_ylim(-.1, 1.1)
ax2.set_xlim(2.5, 4)
ax2.set_ylim(-.1, 1.1)
ax1.set_ylabel('$x$', fontsize=20)
ax1.set_xlabel('$r$', fontsize=20)
ax2.set_xlabel('$r$', fontsize=20)
plt.show()
_____no_output_____
MIT
Modulo2/.ipynb_checkpoints/Clase11_MapaLogistico-checkpoint.ipynb
ariadnagalindom/SimMat2018-2
Problem Definition

The goal of the project is to predict the winner of battles between two Pokemons.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import csv

from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
import joblib  # formerly sklearn.externals.joblib, removed in scikit-learn 0.23

import warnings
warnings.filterwarnings('always')
warnings.filterwarnings('ignore')

import matplotlib as mpl
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
_____no_output_____
Apache-2.0
Pokemon/Pokemon.ipynb
ISSOH/Machine-Learning
Feature description

NUMERO: number
NOM: Pokemon name
TYPE_1: primary type
TYPE_2: secondary type
POINTS_DE_VIE: hit points
POINTS_ATTAQUE: attack level
POINTS_DEFFENCE: defense level
POINTS_ATTAQUE_SPECIALE: special attack level
POINT_DEFENSE_SPECIALE: special defense level
POINTS_VITESSE: speed
NOMBRE_GENERATIONS: generation number
LEGENDAIRE: is the Pokemon legendary?

Data acquisition
# Retrieve the files needed by the model.
import os
fileList = os.listdir("./datas")
for file in fileList:
    print(file)

pokemons = pd.read_csv("./datas/pokedex.csv", encoding = "ISO-8859-1")
pokemons.head(10)
_____no_output_____
Apache-2.0
Pokemon/Pokemon.ipynb
ISSOH/Machine-Learning
Data preparation and cleaning
pokemons.shape

pokemons.info()

pokemons[pokemons['NOM'].isnull()]

# Fill in the missing name (row 62) without chained assignment
pokemons.loc[62, 'NOM'] = "Colossinge"
_____no_output_____
Apache-2.0
Pokemon/Pokemon.ipynb
ISSOH/Machine-Learning
Identification of categorical features
cat_features = pokemons.select_dtypes(include=['object'])
cat_features.head()
_____no_output_____
Apache-2.0
Pokemon/Pokemon.ipynb
ISSOH/Machine-Learning
We will focus on these features, with the exception of **NOM**.
# Number of Pokemons per primary type
#sns.catplot(x='TYPE_1', data=pokemons, kind='count', height=3, aspect=1.5)
pokemons.TYPE_1.value_counts().plot.bar()

# Number of Pokemons per secondary type
pokemons.TYPE_2.value_counts().plot.bar()

# LEGENDAIRE
pokemons.LEGENDAIRE.value_counts().plot.bar()

# Convert the categorical feature LEGENDAIRE into a numeric one
pokemons['LEGENDAIRE'] = (pokemons['LEGENDAIRE']=="VRAI").astype(int)
_____no_output_____
Apache-2.0
Pokemon/Pokemon.ipynb
ISSOH/Machine-Learning
Acquisition of the battle data
combats = pd.read_csv("./datas/combats.csv", encoding = "ISO-8859-1")
combats.head()

combats.columns

combats.shape

combats.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 50000 entries, 0 to 49999 Data columns (total 3 columns): Premier_Pokemon 50000 non-null int64 Second_Pokemon 50000 non-null int64 Pokemon_Gagnant 50000 non-null int64 dtypes: int64(3) memory usage: 1.1 MB
Apache-2.0
Pokemon/Pokemon.ipynb
ISSOH/Machine-Learning
Feature engineering

We will determine the number of battles per Pokemon. To do so, we must count the number of appearances in first position and the number of times in second position.
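The per-position counting can be illustrated on a tiny, made-up battle table (the column names mirror the real `combats.csv`, but the rows below are invented):

```python
import pandas as pd

# Made-up battles: columns mirror the real combats.csv
combats = pd.DataFrame({
    'Premier_Pokemon': [1, 2, 1, 3],
    'Second_Pokemon':  [2, 3, 3, 1],
    'Pokemon_Gagnant': [1, 3, 1, 1],
})

first = combats.groupby('Premier_Pokemon').count()['Pokemon_Gagnant']
second = combats.groupby('Second_Pokemon').count()['Pokemon_Gagnant']

# Total battles per Pokemon = appearances in either position
total = first.add(second, fill_value=0)
wins = combats.groupby('Pokemon_Gagnant').count()['Premier_Pokemon']

print(total.to_dict())   # battles per Pokemon
print(wins.to_dict())    # wins per Pokemon
```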
nbreCombatsPremierePosition = combats.groupby('Premier_Pokemon').count()
nbreCombatsPremierePosition.head(5)

nbreCombatsSecondePosition = combats.groupby('Second_Pokemon').count()
nbreCombatsSecondePosition.head(5)

nbreTotalCombatsParPokemon = nbreCombatsPremierePosition + nbreCombatsSecondePosition
nbreTotalCombatsParPokemon.head(8)

# Number of battles won
nbreCombatsGagnes = combats.groupby('Pokemon_Gagnant').count()
nbreCombatsGagnes.head(5)

nbreCombatsGagnes.info()

listePokemons = combats.groupby('Pokemon_Gagnant').count()
listePokemons.sort_index()
listePokemons['NBRE_COMBATS'] = nbreTotalCombatsParPokemon.Pokemon_Gagnant
listePokemons['NBRE_VICTOIRES'] = nbreCombatsGagnes.Premier_Pokemon
listePokemons['POURCENTAGE_VICTOIRES'] = nbreCombatsGagnes.Premier_Pokemon / nbreTotalCombatsParPokemon.Pokemon_Gagnant
listePokemons.head(5)

# Aggregate both dataframes to get a global view of the data
nouveauPokedex = pokemons.merge(listePokemons, left_on='NUMERO', right_index=True, how='left')
nouveauPokedex.head(5)

# Learning phase
# Split observations into a training set and a test set
nouveauPokedex.info()

# Which Pokemon types should a trainer have?
# For TYPE_1
axe_X = sns.countplot(x='TYPE_1', hue='LEGENDAIRE', data=nouveauPokedex)
plt.xticks(rotation=90)
plt.xlabel('TYPE_1')
plt.ylabel('Total')
plt.title('Pokemons by TYPE_1')
plt.show()

# For TYPE_2
axe_X = sns.countplot(x='TYPE_2', hue='LEGENDAIRE', data=nouveauPokedex)
plt.xticks(rotation=90)
plt.xlabel('TYPE_2')
plt.ylabel('Total')
plt.title('Pokemons by TYPE_2')
plt.show()

nouveauPokedex.describe()

# Which Pokemon type has the highest winning percentage?
nouveauPokedex.groupby('TYPE_1').agg({'POURCENTAGE_VICTOIRES': 'mean'}).sort_values(by='POURCENTAGE_VICTOIRES')

# Correlation between features
corr = nouveauPokedex.loc[:, ['TYPE_1', 'POINTS_DE_VIE', 'POINTS_ATTAQUE', 'POINTS_DEFFENCE', 'POINTS_ATTAQUE_SPECIALE',
                              'POINT_DEFENSE_SPECIALE', 'POINTS_VITESSE', 'LEGENDAIRE', 'POURCENTAGE_VICTOIRES']].corr()
sns.heatmap(corr, annot=True, cmap='Greens')
plt.title('Feature correlation')
plt.show()

# Save the new Pokedex dataset
dataset = nouveauPokedex
dataset.to_csv("./datas/dataset.csv", encoding = "ISO-8859-1", sep='\t')

dataset = pd.read_csv("./datas/dataset.csv", encoding = "ISO-8859-1", delimiter='\t')
dataset.info()

dataset.shape

dataset.head(5)

# Drop all rows containing missing values
dataset = dataset.dropna(axis=0, how='any')

# Extract the explanatory features
X = dataset.iloc[:, 5:12].values

# Extract the target value
Y = dataset.iloc[:, 17].values

# Build the training set and the test set
X_APPRENTISSAGE, X_VALIDATION, Y_APPRENTISSAGE, Y_VALIDATION = train_test_split(X, Y, test_size=0.2, random_state=0)

Y_VALIDATION.shape
_____no_output_____
Apache-2.0
Pokemon/Pokemon.ipynb
ISSOH/Machine-Learning
### Training phase

This is a regression problem. We will use the following algorithms:

* Linear regression
* Decision tree
* Random forest
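The models are compared below using scikit-learn's `r2_score`. As a reminder of what that statistic measures, here is the same formula computed by hand — a minimal illustrative sketch, not the notebook's own code:

```python
def r2_score_manual(y_true, y_pred):
    # R^2 = 1 - (residual sum of squares / total sum of squares)
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# A perfect prediction scores 1.0; a good fit is close to 1.0
print(r2_score_manual([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))  # 0.98
```

The model with the highest R² on the validation set is the one saved to disk below.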
# Training models
# Linear regression algorithm
algorithme = LinearRegression()
# Train the algorithm on the training data
algorithme.fit(X_APPRENTISSAGE, Y_APPRENTISSAGE)
# Make predictions on the test set
predictions = algorithme.predict(X_VALIDATION)
# Compute the accuracy (R² score) of the algorithm
precision = r2_score(Y_VALIDATION, predictions)
precision
# Decision tree algorithm
algorithme = DecisionTreeRegressor()
# Train the algorithm on the training data
algorithme.fit(X_APPRENTISSAGE, Y_APPRENTISSAGE)
# Make predictions on the test set
predictions = algorithme.predict(X_VALIDATION)
# Compute the accuracy (R² score) of the algorithm
precision = r2_score(Y_VALIDATION, predictions)
precision
# Random forest algorithm
algorithme = RandomForestRegressor()
# Train the algorithm on the training data
algorithme.fit(X_APPRENTISSAGE, Y_APPRENTISSAGE)
# Make predictions on the test set
predictions = algorithme.predict(X_VALIDATION)
# Compute the accuracy (R² score) of the algorithm
precision = r2_score(Y_VALIDATION, predictions)
# Save this trained model to a file, as it gives the highest accuracy.
file = './modele/modele_pokemon.mod'
joblib.dump(algorithme, file)
precision

# Look up a Pokemon's stats in the Pokedex by its number
def rechercheInformationPokemon(numeroPokemon, pokedex):
    infosPokemon = []
    for pokemon in pokedex:
        if numeroPokemon == int(pokemon[0]):
            infosPokemon = (pokemon[1], pokemon[4], pokemon[5], pokemon[6],
                            pokemon[7], pokemon[8], pokemon[9], pokemon[10])
            break
    return infosPokemon

# Prediction function
def prediction(numeroPokemon1, numeroPokemon2, Pokedex):
    pokemon_1 = rechercheInformationPokemon(numeroPokemon1, Pokedex)
    pokemon_2 = rechercheInformationPokemon(numeroPokemon2, Pokedex)
    # Load the trained model (use the relative path the model was saved to)
    modele_prediction = joblib.load('./modele/modele_pokemon.mod')
    prediction_pokemon_1 = modele_prediction.predict([[pokemon_1[1], pokemon_1[2], pokemon_1[3], pokemon_1[4],
                                                       pokemon_1[5], pokemon_1[6], pokemon_1[7]]])
    prediction_pokemon_2 = modele_prediction.predict([[pokemon_2[1], pokemon_2[2], pokemon_2[3], pokemon_2[4],
                                                       pokemon_2[5], pokemon_2[6], pokemon_2[7]]])
    print('COMBAT OPPOSANT ' + str(pokemon_1[0]) + ' A ' + str(pokemon_2[0]))
    print('----------Prediction des Pokemons--------')
    print(str(pokemon_1[0]) + " " + str(prediction_pokemon_1))
    print(str(pokemon_2[0]) + " " + str(prediction_pokemon_2))
    if prediction_pokemon_1 > prediction_pokemon_2:
        print(str(pokemon_1[0]) + ' est vainqueur')
    else:
        print(str(pokemon_2[0]) + ' est vainqueur')

with open("./datas/pokedex.csv", newline='') as csvfile:
    pokedex = csv.reader(csvfile)
    next(pokedex)
    prediction(368, 598, pokedex)
COMBAT OPPOSANT Mangriff A Crapustule ----------Prediction des Pokemons-------- Mangriff [0.70453906] Crapustule [0.56317528] Mangriff est vainqueur
# Visualise Trove newspaper searches over time

You know the feeling. You enter a query into [Trove's digitised newspapers](https://trove.nla.gov.au/newspaper/) search box and...

![Trove search results screen capture](images/trove-newspaper-results.png)

Hmmm, **3 million results**, how do you make sense of that..?

Trove tries to be as helpful as possible by ordering your results by relevance. This is great if your aim is to find a few interesting articles. But how can you get a sense of the complete results set? How can you *see* everything? Trove's web interface only shows you the first 2,000 articles matching your search. But by getting data directly from the [Trove API](https://help.nla.gov.au/trove/building-with-trove/api) we can go bigger. This notebook helps you zoom out and explore how the number of newspaper articles in your results varies over time by using the `decade` and `year` facets. We'll then combine this approach with other search facets to see how we can slice a set of results up in different ways to investigate historical changes.

1. [Setting things up](1.-Setting-things-up)
2. [Find the number of articles per year using facets](2.-Find-the-number-of-articles-per-year-using-facets)
3. [How many articles in total were published each year?](3.-How-many-articles-in-total-were-published-each-year?)
4. [Charting our search results as a proportion of total articles](4.-Charting-our-search-results-as-a-proportion-of-total-articles)
5. [Comparing multiple search terms over time](5.-Comparing-multiple-search-terms-over-time)
6. [Comparing a search term across different states](6.-Comparing-a-search-term-across-different-states)
7. [Comparing a search term across different newspapers](7.-Comparing-a-search-term-across-different-newspapers)
8. [Chart changes in illustration types over time](8.-Chart-changes-in-illustration-types-over-time)
9. [But what are we searching?](9.-But-what-are-we-searching?)
10. [Next steps](10.-Next-steps)
11. [Related resources](11.-Related-resources)
12. [Further reading](12.-Further-reading)

If you're interested in exploring the possibilities examined in this notebook, but are feeling a bit intimidated by the code, skip to the [Related resources](11.-Related-resources) section for some alternative starting points. But once you've got a bit of confidence, please come back here to learn more about how it all works!

If you haven't used one of these notebooks before, they're basically web pages in which you can write, edit, and run live code. They're meant to encourage experimentation, so don't feel nervous. Just try running a few cells and see what happens!

Some tips:

* Code cells have boxes around them. To run a code cell click on the cell and then hit Shift+Enter. The Shift+Enter combo will also move you to the next cell, so it's a quick way to work through the notebook.
* While a cell is running a * appears in the square brackets next to the cell. Once the cell has finished running the asterisk will be replaced with a number.
* In most cases you'll want to start from the top of notebook and work your way down running each cell in turn. Later cells might depend on the results of earlier ones.
* To edit a code cell, just click on it and type stuff. Remember to run the cell once you've finished editing.

Is this thing on? If you can't edit or run any of the code cells, you might be viewing a static (read only) version of this notebook. Click here to load a live version running on Binder.

## 1. Setting things up

### Import what we need
import requests
import os
import ipywidgets as widgets
from operator import itemgetter  # used for sorting
import pandas as pd  # makes manipulating the data easier
import altair as alt
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from tqdm.auto import tqdm
from IPython.display import display, HTML, FileLink, clear_output
import math
from collections import OrderedDict
import time

# Make sure data directory exists
os.makedirs('data', exist_ok=True)

# Create a session that will automatically retry on server errors
s = requests.Session()
retries = Retry(total=5, backoff_factor=1, status_forcelist=[502, 503, 504])
s.mount('http://', HTTPAdapter(max_retries=retries))
s.mount('https://', HTTPAdapter(max_retries=retries))
_____no_output_____
MIT
visualise-searches-over-time.ipynb
GLAM-Workbench/trove-newspapers
### Enter a Trove API key

We're going to get our data from the Trove API. You'll need to get your own [Trove API key](http://help.nla.gov.au/trove/building-with-trove/api) and enter it below.
api_key = 'YOUR API KEY'
print('Your API key is: {}'.format(api_key))
## 2. Find the number of articles per year using facets

When you search for newspaper articles using Trove's web interface, the results appear alongside a column headed 'Refine your results'. This column displays summary data extracted from your search, such as the states in which articles were published and the newspapers that published them. In the web interface, you can use this data to filter your results, but using the API we can retrieve the raw data and use it to visualise the complete result set.

Here you can see the decade facet, showing the number of newspaper articles published each decade. If you click on a decade, the interface displays the number of results per year. So sitting underneath the web interface is data that breaks down our search results by year. Let's use this data to visualise a search over time.

To get results by year from the Trove API, you need to set the `facet` parameter to `year`. However, this only works if you've also selected a specific decade using the `l-decade` parameter. In other words, you can only get one decade's worth of results at a time. To assemble the complete dataset, you need to loop through all the decades, requesting the `year` data for each decade in turn.

Let's start with some basic parameters for our search.
# Basic parameters for Trove API
params = {
    'facet': 'year',  # Get the data aggregated by year.
    'zone': 'newspaper',
    'key': api_key,
    'encoding': 'json',
    'n': 0  # We don't need any records, just the facets!
}
But what are we searching for? We need to supply a `q` parameter that includes our search terms. We can use pretty much anything that works in the Trove simple search box. This includes boolean operators, phrase searches, and proximity modifiers. But let's start with something simple. Feel free to modify the `q` value in the cell below.
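To give a flavour of what the `q` parameter accepts, here are a few illustrative query strings — these examples are assumptions for demonstration, not taken from the notebook:

```python
# Illustrative values for the 'q' parameter (examples, not exhaustive)
sample_queries = [
    'radio',                # simple keyword
    '"wireless set"',       # phrase search
    'radio AND broadcast',  # boolean operator
    '"radio station"~5',    # proximity modifier
]
for q in sample_queries:
    print(q)
```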
# CHANGE THIS TO SEARCH FOR SOMETHING ELSE!
params['q'] = 'radio'
Let's define a couple of handy functions for getting facet data from the Trove API.
def get_results(params):
    '''
    Get JSON response data from the Trove API.
    Parameters:
        params
    Returns:
        JSON formatted response data from Trove API
    '''
    response = s.get('https://api.trove.nla.gov.au/v2/result', params=params, timeout=30)
    response.raise_for_status()
    # print(response.url)  # This shows us the url that's sent to the API
    data = response.json()
    return data


def get_facets(data):
    '''
    Loop through facets in Trove API response, saving terms and counts.
    Parameters:
        data - JSON formatted response data from Trove API
    Returns:
        A list of dictionaries containing: 'year', 'total_results'
    '''
    facets = []
    try:
        # The facets are buried a fair way down in the results.
        # Note that if you ask for more than one facet, you'll have to use the facet['name'] param to find the one you want.
        # In this case there's only one facet, so we can just grab the list of terms (which are in fact the results by year).
        for term in data['response']['zone'][0]['facets']['facet']['term']:
            # Get the year and the number of results, and convert them to integers, before adding to our results
            facets.append({'year': int(term['search']), 'total_results': int(term['count'])})
        # Sort facets by year
        facets.sort(key=itemgetter('year'))
    except TypeError:
        pass
    return facets
Now we'll define a function to loop through the decades, processing each in turn. To loop through the decades we need to define start and end points. Trove includes newspapers from 1803 right through until the current decade. Note that Trove expects decades to be specified using the first three digits of a year – so the decade value for the 1800s is just `180`. So let's set our range by giving `180` and `201` to the function as our default `start_decade` and `end_decade` values. Also note that I'm defining them as numbers, not strings (no quotes around them!). This is so that we can use them to build a range.This function returns a list of dictionaries with values for `year` and `total_results`.
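Since a Trove decade value is just a year with its final digit dropped, converting between the two is simple integer arithmetic — a quick illustration:

```python
def year_to_decade(year):
    # Trove's decade value is the first three digits of a year, e.g. 1915 -> 191
    return year // 10

print(year_to_decade(1915))  # 191

# The full range of decades used below: 180 (1800s) through 201 (2010s)
decades = list(range(180, 202))
print(decades[0], decades[-1])  # 180 201
```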
def get_facet_data(params, start_decade=180, end_decade=201):
    '''
    Loop through the decades from 'start_decade' to 'end_decade',
    getting the number of search results for each year from the year facet.
    Combine all the results into a single list.
    Parameters:
        params - parameters to send to the API
        start_decade
        end_decade
    Returns:
        A list of dictionaries containing 'year', 'total_results' for the
        complete period between the start and end decades.
    '''
    # Create a list to hold the facets data
    facet_data = []
    # Loop through the decades
    for decade in tqdm(range(start_decade, end_decade + 1)):
        # Avoid confusion by copying the params before we change anything.
        search_params = params.copy()
        # Add decade value to params
        search_params['l-decade'] = decade
        # Get the data from the API
        data = get_results(search_params)
        # Get the facets from the data and add to facets_data
        facet_data += get_facets(data)
        # Try not to go over API rate limit - increase if you get 403 errors
        time.sleep(0.2)
    # Remove the progress bar (you can also set leave=False in tqdm, but that still leaves white space in Jupyter Lab)
    clear_output()
    return facet_data

# Call the function and save the results to a variable called facet_data
facet_data = get_facet_data(params)
For easy exploration, we'll convert the facet data into a [Pandas](https://pandas.pydata.org/) DataFrame.
# Convert our data to a dataframe called df
df = pd.DataFrame(facet_data)
# Let's have a look at the first few rows of data
df.head()
Which year had the most results? We can use `idxmax()` to find out.
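The `.idxmax()` / `.loc` pattern is easy to see on a toy dataframe (made-up numbers, just for illustration):

```python
import pandas as pd

toy = pd.DataFrame({'year': [1913, 1914, 1915], 'total_results': [5, 9, 7]})
# idxmax gives the index label of the largest value; loc retrieves that row
peak = toy.loc[toy['total_results'].idxmax()]
print(int(peak['year']))  # 1914
```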
# Show the row that has the highest value in the 'total_results' column.
# Use .idxmax to find the row with the highest value, then use .loc to get it
df.loc[df['total_results'].idxmax()]
Now let's display the data as a chart using [Altair](https://altair-viz.github.io/index.html).
alt.Chart(df).mark_line(point=True).encode(
    # Years on the X axis
    x=alt.X('year:Q', axis=alt.Axis(format='c', title='Year')),
    # Number of articles on the Y axis
    y=alt.Y('total_results:Q', axis=alt.Axis(format=',d', title='Number of articles')),
    # Display details when you hover over a point
    tooltip=[alt.Tooltip('year:Q', title='Year'), alt.Tooltip('total_results:Q', title='Articles', format=',')]
).properties(width=700, height=400)
No surprise to see a sudden increase in the use of the word 'radio' in the early decades of the 20th century, but why do the results drop away after 1954? To find out we have to dig a bit deeper into Trove.

## 3. How many articles in total were published each year?

Ok, we've visualised a search in Trove's digitised newspapers. Our chart shows a clear change in the number of articles over time, but are we really observing a historical shift relating to the topic, or is this just because more newspapers were published at particular times? To explore this further, let's create another chart, but this time we'll search for *everything*. The way we do this is by setting the `q` parameter to ' ' – a single space.

First let's get the data.
# Reset the 'q' parameter
# Use an empty search (a single space) to get ALL THE ARTICLES
params['q'] = ' '
# Get facet data for all articles
all_facet_data = get_facet_data(params)
Now let's create the chart.
# Convert the results to a dataframe
df_total = pd.DataFrame(all_facet_data)
# Make a chart
alt.Chart(df_total).mark_line(point=True).encode(
    # Display the years along the X axis
    x=alt.X('year:Q', axis=alt.Axis(format='c', title='Year')),
    # Display the number of results on the Y axis (formatted using thousands separator)
    y=alt.Y('total_results:Q', axis=alt.Axis(format=',d', title='Number of articles')),
    # Create a tooltip when you hover over a point to show the data for that year
    tooltip=[alt.Tooltip('year:Q', title='Year'), alt.Tooltip('total_results:Q', title='Articles', format=',')]
).properties(width=700, height=400)
This chart shows us the total number of newspaper articles in Trove for each year from 1803 to 2013. As you might expect, there's a steady increase in the number of articles published across the 19th century. But why is there such a notable peak in 1915, and why do the numbers drop away so suddenly in 1955? The answers are explored more fully in [this notebook](visualise-total-newspaper-articles-by-state-year.ipynb), but in short they're a reflection of digitisation priorities and copyright restrictions – they're artefacts of the environment in which Trove's newspapers are digitised.

The important point is that our original chart showing search results over time is distorted by these underlying features. Radios didn't suddenly go out of fashion in 1955!

## 4. Charting our search results as a proportion of total articles

One way of lessening the impact of these distortions is to show the number of search results as a proportion of the total number of articles available on Trove from that year. We've just harvested the total number of articles, so to get the proportion all we have to do is divide the original number of search results for each year by the total number of articles. Again, Pandas makes this sort of manipulation easy.

Below we'll define a function that takes two dataframes – the search results, and the total results – merges them, and then calculates what proportion of the total that the search results represent.
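The proportion itself is just a division. For example, with made-up numbers (150 hits out of a hypothetical 30,000 articles in one year):

```python
results_1915 = 150    # hypothetical search results for one year
total_1915 = 30000    # hypothetical total articles for that year
proportion = results_1915 / total_1915
print(f'{proportion:.2%}')  # 0.50%
```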
def merge_df_with_total(df, df_total):
    '''
    Merge dataframes containing search results with the total number of articles by year.
    This is a left join on the year column. The total number of articles will be added
    as a column to the existing results.
    Once merged, do some reorganisation and calculate the proportion of search results.
    Parameters:
        df - the search results in a dataframe
        df_total - total number of articles per year in a dataframe
    Returns:
        A dataframe with the following columns - 'year', 'total_results', 'total_articles',
        'proportion' (plus any other columns that are in the search results dataframe).
    '''
    # Merge the two dataframes on year
    # Note that we're joining the two dataframes on the year column
    df_merged = pd.merge(df, df_total, how='left', on='year')
    # Rename the columns for convenience
    df_merged.rename({'total_results_y': 'total_articles'}, inplace=True, axis='columns')
    df_merged.rename({'total_results_x': 'total_results'}, inplace=True, axis='columns')
    # Set blank values to zero to avoid problems
    df_merged['total_results'] = df_merged['total_results'].fillna(0).astype(int)
    # Calculate proportion by dividing the search results by the total articles
    df_merged['proportion'] = df_merged['total_results'] / df_merged['total_articles']
    return df_merged
Let's merge!
# Merge the search results with the total articles
df_merged = merge_df_with_total(df, df_total)
df_merged.head()
Now we have a new dataframe `df_merged` that includes both the raw number of search results for each year, and the proportion the results represent of the total number of articles on Trove. Let's create charts for both and look at the differences.
# This is the chart showing raw results -- it's the same as the one we created above (but a bit smaller)
chart1 = alt.Chart(df).mark_line(point=True).encode(
    x=alt.X('year:Q', axis=alt.Axis(format='c', title='Year')),
    y=alt.Y('total_results:Q', axis=alt.Axis(format=',d', title='Number of articles')),
    tooltip=[alt.Tooltip('year:Q', title='Year'), alt.Tooltip('total_results:Q', title='Articles', format=',')]
).properties(width=700, height=250)

# This is the new view, note that it's using the 'proportion' column for the Y axis
chart2 = alt.Chart(df_merged).mark_line(point=True, color='red').encode(
    x=alt.X('year:Q', axis=alt.Axis(format='c', title='Year')),
    # This time we're showing the proportion (formatted as a percentage) on the Y axis
    y=alt.Y('proportion:Q', axis=alt.Axis(format='%', title='Proportion of articles')),
    tooltip=[alt.Tooltip('year:Q', title='Year'), alt.Tooltip('proportion:Q', title='Proportion', format='%')],
    # Make the charts different colors
    color=alt.value('orange')
).properties(width=700, height=250)

# This is a shorthand way of stacking the charts on top of each other
chart1 & chart2
The overall shape of the two charts is similar, but there are some significant differences. Both show a dramatic increase after 1920, but the initial peaks are in different positions. The sudden drop-off after 1954 has gone, and we even have a new peak in 1963. Why 1963? The value of these sorts of visualisations is in the questions they prompt, rather than any claim to 'accuracy'. How meaningful are the post-1954 results? If we [break down the numbers by state](visualise-total-newspaper-articles-by-state-year.ipynb), we see that the post-1954 results are mostly from the ACT. It is a small, narrowly-focused sample. Reading these two charts in combination reminds us that the structure and content of a large corpus like Trove is not natural. While viewing the number of results over time can alert us to historical shifts, we have to be prepared to ask questions about how those results are generated, and what they represent.

## 5. Comparing multiple search terms over time

Another way of working around inconsistencies in the newspaper corpus is to *compare* search queries. While the total numbers could be misleading, the comparative numbers might still show us interesting shifts in usage or meaning. Once again, this is not something we can do through the web interface, but all we need to achieve this using the API is a few minor adjustments to our code.

Instead of a single search query, this time we'll define a list of search queries. You can include as many queries as you want and, once again, the queries can be anything you'd type in the Trove search box.
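The way result sets for several queries can be stacked into one dataframe — a `query` column to tell them apart, then `pd.concat` — can be seen on toy data (values made up for illustration):

```python
import pandas as pd

# Toy data standing in for two harvested result sets
df_radio = pd.DataFrame({'year': [1920, 1921], 'total_results': [10, 12]})
df_radio['query'] = 'radio'
df_wireless = pd.DataFrame({'year': [1920, 1921], 'total_results': [30, 28]})
df_wireless['query'] = 'wireless'

# Stack them; the 'query' column keeps the two result sets distinguishable
combined = pd.concat([df_radio, df_wireless])
print(combined.shape)  # (4, 3)
```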
# Create a list of queries
queries = [
    'telegraph',
    'radio',
    'wireless'
]
Now we'll define a new function that loops through each of the search terms, retrieving the facet data for each, and combining it all into a single dataframe.
def get_search_facets(params, queries):
    '''
    Process a list of search queries, gathering the facet data for each
    and combining the results into a single dataframe.
    Parameters:
        params - basic parameters to send to the API
        queries - a list of search queries
    Returns:
        A dataframe
    '''
    # This is where we'll store the individual dataframes
    dfs = []
    # Make a copy of the basic parameters
    these_params = params.copy()
    # Loop through the list of queries
    for q in queries:
        # Set the 'q' parameter to the current search query
        these_params['q'] = q
        # Get all the facet data for this search
        facet_data = get_facet_data(these_params)
        # Convert the facet data into a dataframe
        df = pd.DataFrame(facet_data)
        # Add a column with the search query -- this will enable us to
        # distinguish between the results in the combined dataframe.
        df['query'] = q
        # Add this df to our list
        dfs.append(df)
    # Combine the dfs into one df using concat and return the result
    return pd.concat(dfs)
Now we're ready to harvest some data!
df_queries = get_search_facets(params, queries)
Once again, it would be useful to have the number of search results as a proportion of the total articles, so let's use our merge function again to add the proportions.
df_queries_merged = merge_df_with_total(df_queries, df_total)
As we're repeating the same sorts of charts with different data, we might as well save ourselves some effort by creating a couple of reusable charting functions. One shows the raw numbers, and the other shows the proportions.
def make_chart_totals(df, category, category_title):
    '''
    Make a chart showing the raw number of search results over time.
    Creates different coloured lines for each query or category.
    Parameters:
        df - a dataframe
        category - the column containing the value that distinguishes multiple results sets (eg 'query' or 'state')
        category_title - a nicely formatted title for the category to appear above the legend
    '''
    chart = alt.Chart(df).mark_line(point=True).encode(
        # Show the year on the X axis
        x=alt.X('year:Q', axis=alt.Axis(format='c', title='Year')),
        # Show the total number of articles on the Y axis (with thousands separator)
        y=alt.Y('total_results:Q', axis=alt.Axis(format=',d', title='Number of articles')),
        # Display query/category, year, and number of results on hover
        tooltip=[alt.Tooltip('{}:N'.format(category), title=category_title), alt.Tooltip('year:Q', title='Year'), alt.Tooltip('total_results:Q', title='Articles', format=',')],
        # In these charts we're comparing results, so we're using color to distinguish between queries/categories
        color=alt.Color('{}:N'.format(category), legend=alt.Legend(title=category_title))
    ).properties(width=700, height=250)
    return chart


def make_chart_proportions(df, category, category_title):
    '''
    Make a chart showing the proportion of search results over time.
    Creates different coloured lines for each query or category.
    Parameters:
        df - a dataframe
        category - the column containing the value that distinguishes multiple results sets (eg 'query' or 'state')
        category_title - a nicely formatted title for the category to appear above the legend
    '''
    chart = alt.Chart(df).mark_line(point=True).encode(
        # Show the year on the X axis
        x=alt.X('year:Q', axis=alt.Axis(format='c', title='Year')),
        # Show the proportion of articles on the Y axis (formatted as percentage)
        y=alt.Y('proportion:Q', axis=alt.Axis(format='%', title='Proportion of articles'), stack=None),
        # Display query/category, year, and proportion of results on hover
        tooltip=[alt.Tooltip('{}:N'.format(category), title=category_title), alt.Tooltip('year:Q', title='Year'), alt.Tooltip('proportion:Q', title='Proportion', format='%')],
        # In these charts we're comparing results, so we're using color to distinguish between queries/categories
        color=alt.Color('{}:N'.format(category), legend=alt.Legend(title=category_title))
    ).properties(width=700, height=250)
    return chart
Let's use the new functions to create charts for our queries.
# Chart total results
chart3 = make_chart_totals(df_queries_merged, 'query', 'Search query')
# Chart proportions
chart4 = make_chart_proportions(df_queries_merged, 'query', 'Search query')
# Shorthand way of concatenating the two charts (note there's only one legend)
chart3 & chart4
Once again, it's interesting to compare the total results with the proportions. In this case, both point to something interesting happening around 1930. To explore this further we could use the [Trove Newspaper Harvester](https://glam-workbench.github.io/trove-harvester/) to assemble a dataset of articles from 1920 to 1940 for detailed analysis. You might also notice a little peak for 'wireless' around 2011 – new uses for old words!

## 6. Comparing a search term across different states

Another way of building comparisons over time is to use some of the other facets available in Trove to slice up our search results. For example, the `state` facet tells us the number of results per state. We might be able to use this to track differences in language, or regional interest in particular events.

Because we're combining three facets, `state` and `decade`/`year`, we need to think a bit about how we assemble the data. In this case we're only using one search query, but we're repeating this query across a number of different states. We're then getting the data for decade and year for each of the states.

The possible values for the `state` facet are:

* ACT
* New South Wales
* Northern Territory
* Queensland
* South Australia
* Tasmania
* Victoria
* Western Australia
* National
* International

There's some other ways of exploring and visualising the `state` facet in [Visualise the total number of newspaper articles in Trove by year and state](visualise-total-newspaper-articles-by-state-year.ipynb).

Let's start by defining a list of states we want to compare...
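Putting the pieces together, the parameters for a single state-limited request might look something like this — a sketch only, with the API key omitted and the decade value chosen arbitrarily:

```python
# Sketch of parameters for one state/decade facet request (illustrative values)
params_nsw = {
    'q': 'Chinese',
    'zone': 'newspaper',
    'encoding': 'json',
    'n': 0,                        # no records needed, just facets
    'facet': 'year',
    'l-decade': 185,               # the year facet only works with a decade set
    'l-state': 'New South Wales',  # one of the facet values listed above
}
print(params_nsw['l-state'])
```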
# A list of state values that we'll supply to the state facet
states = [
    'New South Wales',
    'Victoria'
]
...and our search query.
# Remember this time we're comparing a single search query across multiple states
query = 'Chinese'
As before, we'll display both the raw number of results, and the proportion this represents of the total number of articles. But what is the total number of articles in this case? While we could generate a proportion using the totals for each year across all of Trove's newspapers, it seems more useful to use the total number of articles for each state. Otherwise, states with more newspapers will dominate. This means we'll have to make some additional calls to the API to get the state totals as well as the search results.Let's create a couple of new functions. The main function `get_state_facets()` loops through the states in our list, gathering the year by year results. It's similar to the way we handled multiple queries, but this time there's an additional step. Once we have the search results, we use `get_state_totals()` to get the total number of articles published in that state for each year. Then we merge the search results and total articles as we did before.
def get_state_totals(state):
    '''
    Get the total number of articles for each year for the specified state.
    Parameters:
        state
    Returns:
        A list of dictionaries containing 'year', 'total_results'.
    '''
    these_params = params.copy()
    # Set the q parameter to a single space to get everything
    these_params['q'] = ' '
    # Set the state facet to the given state value
    these_params['l-state'] = state
    # Get the year by year data
    facet_data = get_facet_data(these_params)
    return facet_data


def get_state_facets(params, states, query):
    '''
    Loop through the supplied list of states searching for the specified query
    and getting the year by year results.
    Merges the search results with the total number of articles for that state.
    Parameters:
        params - basic parameters to send to the API
        states - a list of states to apply using the state facet
        query - the search query to use
    Returns:
        A dataframe
    '''
    dfs = []
    these_params = params.copy()
    # Set the q parameter to the supplied query
    these_params['q'] = query
    # Loop through the supplied list of states
    for state in states:
        # Set the state facet to the current state value
        these_params['l-state'] = state
        # Get year facets for this state & query
        facet_data = get_facet_data(these_params)
        # Convert the results to a dataframe
        df = pd.DataFrame(facet_data)
        # Get the total number of articles per year for this state
        total_data = get_state_totals(state)
        # Convert the totals to a dataframe
        df_total = pd.DataFrame(total_data)
        # Merge the two dataframes
        df_merged = merge_df_with_total(df, df_total)
        # Add a state column to the dataframe and set its value to the current state
        df_merged['state'] = state
        # Add this df to the list of dfs
        dfs.append(df_merged)
    # Concatenate all the dataframes and return the result
    return pd.concat(dfs)
_____no_output_____
MIT
visualise-searches-over-time.ipynb
GLAM-Workbench/trove-newspapers
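The `merge_df_with_total()` helper used above was defined earlier in the notebook. For reference, a minimal sketch of what such a merge might look like, assuming both dataframes are keyed by `year` and share a `total_results` column (the function name and suffix handling here are illustrative, not the notebook's actual implementation):

```python
import pandas as pd

def merge_df_with_total_sketch(df, df_total):
    # Hypothetical stand-in for the merge_df_with_total() helper defined
    # earlier in the notebook: join search counts to yearly totals on 'year'.
    # Both frames have a 'total_results' column, so we suffix the totals side.
    merged = pd.merge(df, df_total, how='left', on='year', suffixes=('', '_total'))
    # Proportion of all articles that year that match the search
    merged['proportion'] = merged['total_results'] / merged['total_results_total']
    return merged

# Example with made-up numbers
results = pd.DataFrame([{'year': 1900, 'total_results': 50}])
totals = pd.DataFrame([{'year': 1900, 'total_results': 1000}])
print(merge_df_with_total_sketch(results, totals))
```

A `how='left'` merge keeps every year that appears in the search results, even if a matching total is somehow missing (the proportion is then `NaN` rather than the row being dropped).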
Let's get the data!
df_states = get_state_facets(params, states, query)
And now chart the results, specifying `state` as the column to use for our category.
# Chart totals
chart5 = make_chart_totals(df_states, 'state', 'State')
# Chart proportions
chart6 = make_chart_proportions(df_states, 'state', 'State')
# Shorthand way of concatenating the two charts (note there's only one legend)
chart5 & chart6
Showing the results as a proportion of the total articles for each state does seem to show up some interesting differences. Did 10% of newspaper articles published in Victoria in 1857 really mention 'Chinese'? That seems like something to investigate in more detail.

Another way of visualising the number of results per state is by using a map! See [Map newspaper results by state](Map-newspaper-results-by-state.ipynb) for a demonstration.

## 7. Comparing a search term across different newspapers

For a more fine-grained analysis, we might want to compare the contents of different newspapers – how did their coverage or language vary over time? To do this we can use Trove's `title` facet which, despite the name, limits your results to a particular newspaper.

The `title` facet expects a numeric newspaper identifier. The easiest way of finding this id number is to go to the [list of newspapers](https://trove.nla.gov.au/newspaper/about) and click on the one you're interested in. The id number will be in the url of the newspaper details page. For example, the url of the *Canberra Times* page is:

`https://trove.nla.gov.au/newspaper/title/11`

So the id number is '11'.

As with previous examples, we'll create a list of the newspapers we want to use with the `title` facet. However, the id number on its own isn't going to be very useful in the legend of our chart, so we'll include the name of the newspaper as well.
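If you'd rather not copy the id by hand, you can also pull it out of the url programmatically. A small sketch (the helper name is ours, and this is just one way to parse it):

```python
from urllib.parse import urlparse

def newspaper_id_from_url(url):
    # Hypothetical helper: the id is the last path segment of a newspaper
    # details page url, e.g. https://trove.nla.gov.au/newspaper/title/11 -> '11'
    return urlparse(url).path.rstrip('/').split('/')[-1]

print(newspaper_id_from_url('https://trove.nla.gov.au/newspaper/title/11'))
# -> '11'
```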
# Create a list of dictionaries, each with the 'id' and 'name' of a newspaper
newspapers = [
    {'id': 1180, 'name': 'Sydney Sun'},
    {'id': 35, 'name': 'Sydney Morning Herald'},
    {'id': 1002, 'name': 'Tribune'}
]

# Our search query we want to compare across newspapers
query = 'worker'
In this case the total number of articles we want to use in calculating the proportion of results is probably the total number of articles published in each particular newspaper. This should allow a more meaningful comparison between, for example, a weekly and a daily newspaper. As in the example above, we'll define a function to loop through the newspapers, and another to get the total number of articles for a given newspaper.
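Note that both functions copy the shared `params` dictionary before modifying it, so each request starts from the same baseline. A tiny illustration of why the `.copy()` matters (the baseline dictionary here is a made-up stand-in for the one defined earlier in the notebook):

```python
# Stand-in for the baseline params defined earlier in the notebook
params = {'q': 'worker', 'facet': 'year'}

# Copy before modifying, so the shared baseline stays untouched
these_params = params.copy()
these_params['q'] = ' '          # a single space matches everything
these_params['l-title'] = 35     # limit results to one newspaper

print(params['q'])        # still 'worker'
print(these_params['q'])  # ' '
```

Without the copy, setting `these_params['q'] = ' '` inside one function would silently change the query used by every later request.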
def get_newspaper_totals(newspaper_id):
    '''
    Get the total number of articles for each year for the specified newspaper.

    Parameters:
        newspaper_id - numeric Trove newspaper identifier
    Returns:
        A list of dictionaries containing 'year', 'total_results'.
    '''
    these_params = params.copy()
    # Set q to a single space for everything
    these_params['q'] = ' '
    # Set the title facet to the newspaper_id
    these_params['l-title'] = newspaper_id
    # Get all the year by year data
    facet_data = get_facet_data(these_params)
    return facet_data

def get_newspaper_facets(params, newspapers, query):
    '''
    Loop through the supplied list of newspapers searching for the specified query
    and getting the year by year results.
    Merges the search results with the total number of articles for that newspaper.

    Parameters:
        params - basic parameters to send to the API
        newspapers - a list of dictionaries with the id and name of a newspaper
        query - the search query to use
    Returns:
        A dataframe
    '''
    dfs = []
    these_params = params.copy()
    # Set the query
    these_params['q'] = query
    # Loop through the list of newspapers
    for newspaper in newspapers:
        # Set the title facet to the id of the current newspaper
        these_params['l-title'] = newspaper['id']
        # Get the year by year results for this newspaper
        facet_data = get_facet_data(these_params)
        # Convert to a dataframe
        df = pd.DataFrame(facet_data)
        # Get the total number of articles published in this newspaper per year
        total_data = get_newspaper_totals(newspaper['id'])
        # Convert to a dataframe
        df_total = pd.DataFrame(total_data)
        # Merge the two dataframes
        df_merged = merge_df_with_total(df, df_total)
        # Create a newspaper column and set its value to the name of the newspaper
        df_merged['newspaper'] = newspaper['name']
        # Add the current dataframe to the list
        dfs.append(df_merged)
    # Concatenate the dataframes and return the result
    return pd.concat(dfs)
Let's get the data!
df_newspapers = get_newspaper_facets(params, newspapers, query)
And make some charts!
# Chart totals
chart7 = make_chart_totals(df_newspapers, 'newspaper', 'Newspaper')
# Chart proportions
chart8 = make_chart_proportions(df_newspapers, 'newspaper', 'Newspaper')
# Shorthand way of concatenating the two charts (note there's only one legend)
chart7 & chart8
## 8. Chart changes in illustration types over time

Let's try something a bit different and explore the *format* of articles rather than their text content. Trove includes a couple of facets that enable you to filter your search by type of illustration. First of all you have to set the `illustrated` facet to `true`, then you can specify a type of illustration using the `illtype` facet. Possible values include:

* Photo
* Cartoon
* Illustration
* Map
* Graph

First we'll create a list with all the illustration types we're interested in.
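Because the two facets only work together, it can be handy to build them in one place. A small sketch (the helper name is ours, and the allowed values are just the types listed above):

```python
# The illustration types listed above (the Trove facet may accept others)
VALID_ILL_TYPES = {'Photo', 'Cartoon', 'Illustration', 'Map', 'Graph'}

def build_ill_params(base_params, ill_type):
    # Hypothetical helper: combine the two facets needed to filter by
    # illustration type ('l-illustrated' must be 'true' before 'l-illtype'
    # has any effect)
    if ill_type not in VALID_ILL_TYPES:
        raise ValueError(f'Unknown illustration type: {ill_type}')
    p = base_params.copy()
    p['q'] = ' '                  # no query: a single space matches everything
    p['l-illustrated'] = 'true'
    p['l-illtype'] = ill_type
    return p

print(build_ill_params({'facet': 'year'}, 'Cartoon'))
```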
ill_types = [
    'Photo',
    'Cartoon',
    'Illustration',
    'Map',
    'Graph'
]
Then we'll define a function to loop through the illustration types getting the year by year results of each.
def get_ill_facets(params, ill_types):
    '''
    Loop through the supplied list of illustration types getting the year by year results.

    Parameters:
        params - basic parameters to send to the API
        ill_types - a list of illustration types to use with the ill_type facet
    Returns:
        A dataframe
    '''
    dfs = []
    ill_params = params.copy()
    # No query! Set q to a single space for everything
    ill_params['q'] = ' '
    # Set the illustrated facet to true - necessary before setting ill_type
    ill_params['l-illustrated'] = 'true'
    # Loop through the illustration types
    for ill_type in ill_types:
        # Set the ill_type facet to the current illustration type
        ill_params['l-illtype'] = ill_type
        # Get the year by year data
        facet_data = get_facet_data(ill_params)
        # Convert to a dataframe
        df = pd.DataFrame(facet_data)
        # Create an ill_type column and set its value to the illustration type
        df['ill_type'] = ill_type
        # Add current df to the list of dfs
        dfs.append(df)
    # Concatenate all the dfs and return the result
    return pd.concat(dfs)