Split the data into training and testing
# Create training and validation sets using 80-20 split
img_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector,
                                                                    cap_vector,
                                                                    test_size=0.2,
                                                                    ...
tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
apark263/tensorflow
apache-2.0
Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model.
# feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
vocab_size = len(tokenizer.word_index)

# shape of the vector extracted from InceptionV3 is (64, 2048)
# these two variables represent that
features_shape = 2048
attention_f...
Model
Fun fact: the decoder below is identical to the one in the example for Neural Machine Translation with Attention. The model architecture is inspired by the Show, Attend and Tell paper. In this example, we extract the features from the lower convolutional layer of InceptionV3, giving us a vector of shape (8, 8, 20...
def gru(units):
    # If you have a GPU, we recommend using the CuDNNGRU layer (it provides a
    # significant speedup).
    if tf.test.is_gpu_available():
        return tf.keras.layers.CuDNNGRU(units,
                                         return_sequences=True,
                                         return_state=True,
                                         ...
Training
We extract the features stored in the respective .npy files and then pass those features through the encoder. The encoder output, the hidden state (initialized to 0), and the decoder input (which is the start token) are passed to the decoder. The decoder returns the predictions and the decoder hidden state. The deco...
# adding this in a separate cell because if you run the training cell
# many times, the loss_plot array will be reset
loss_plot = []

EPOCHS = 20

for epoch in range(EPOCHS):
    start = time.time()
    total_loss = 0

    for (batch, (img_tensor, target)) in enumerate(dataset):
        loss = 0
        #...
Caption!
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous prediction, along with the hidden state and the encoder output. We stop predicting when the model predicts the end token, and store the attention weights for...
def evaluate(image):
    attention_plot = np.zeros((max_length, attention_features_shape))
    hidden = decoder.reset_state(batch_size=1)

    temp_input = tf.expand_dims(load_image(image)[0], 0)
    img_tensor_val = image_features_extract_model(temp_input)
    img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_v...
Try it on your own images
For fun, below we've provided a method you can use to caption your own images with the model we've just trained. Keep in mind that it was trained on a relatively small amount of data, and your images may differ from the training data (so be prepared for weird results!).
image_url = 'https://tensorflow.org/images/surf.jpg'
image_extension = image_url[-4:]
image_path = tf.keras.utils.get_file('image'+image_extension, origin=image_url)

result, attention_plot = evaluate(image_path)
print('Prediction Caption:', ' '.join(result))
plot_attention(image_...
We can now subset the 0.1x0.1 degree regular grid with the shapefiles from http://biogeo.ucdavis.edu/data/gadm2.8/gadm28_levels.shp.zip which were downloaded/extracted to ~/Downloads/gadm
austria = shapefile.get_gad_grid_points(
    testgrid,
    os.path.join('/home', os.environ['USER'], 'Downloads', 'gadm',
                 'gadm28_levels.shp.zip'),
    0, name='Austria')
docs/examples/subsetting_grid_objects_with_shape_files.ipynb
TUW-GEO/pygeogrids
mit
We can then plot the resulting grid using a simple scatter plot.
import matplotlib.pyplot as plt
%matplotlib inline

plt.scatter(austria.arrlon, austria.arrlat)
Behind the scenes this functionality uses the get_shp_grid_points function of the grid object. We can also call it directly with any ogr.Geometry object.
ring = ogr.Geometry(ogr.wkbLinearRing)
ring.AddPoint(14, 47)
ring.AddPoint(14, 48)
ring.AddPoint(16, 48)
ring.AddPoint(16, 47)
ring.AddPoint(14, 47)

poly = ogr.Geometry(ogr.wkbPolygon)
poly.AddGeometry(ring)

subgrid = austria.get_shp_grid_points(poly)
plt.scatter(austria.arrlon, austria.arrlat)
plt.scatter(subgrid.arr...
Optically pumped magnetometer (OPM) data
In this dataset, electrical median nerve stimulation was delivered to the left wrist of the subject. Somatosensory evoked fields were measured using nine QuSpin SERF OPMs placed over the right-hand side somatomotor area. Here we demonstrate how to localize these custom OPM data ...
# sphinx_gallery_thumbnail_number = 4

import os.path as op
import numpy as np
import mne
from mayavi import mlab

data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')
bem_fname = op.join(subjects_d...
0.18/_downloads/dc0d85321d22190ec4d4c4394d0057f4/plot_opm_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Prepare data for localization First we filter and epoch the data:
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(None, 90, h_trans_bandwidth=10.)
raw.notch_filter(50., notch_widths=1)

# Set epoch rejection threshold a bit larger than for SQUIDs
reject = dict(mag=2e-10)
tmin, tmax = -0.5, 1

# Find Median nerve stimulator trigger
event_id = dict(Median=257)
events = m...
Examine our coordinate alignment for source localization and compute a forward operator: <div class="alert alert-info"><h4>Note</h4><p>The Head<->MRI transform is an identity matrix, as the co-registration method used equates the two coordinate systems. This mis-defines the head coordinate system ...
bem = mne.read_bem_solution(bem_fname)
trans = None

# To compute the forward solution, we must
# provide our temporary/custom coil definitions, which can be done as::
#
# with mne.use_coil_def(coil_def_fname):
#     fwd = mne.make_forward_solution(
#         raw.info, trans, src, bem, eeg=False, mindist=5.0,
#         ...
Perform dipole fitting
# Fit dipoles on a subset of time points
with mne.use_coil_def(coil_def_fname):
    dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.015, 0.080),
                                cov, bem, trans, verbose=True)
idx = np.argmax(dip_opm.gof)
print('Best dipole at t=%0.1f ms with %0.1f%% GOF'
      % (1000 * dip_opm.times[i...
Perform minimum-norm localization
Due to the small number of sensors, there will be some leakage of activity to areas with low or no sensitivity. Constraining the source space to the areas we are sensitive to might be a good idea.
inverse_operator = mne.minimum_norm.make_inverse_operator(
    evoked.info, fwd, cov)

method = "MNE"
snr = 3.
lambda2 = 1. / snr ** 2
stc = mne.minimum_norm.apply_inverse(
    evoked, inverse_operator, lambda2, method=method, pick_ori=None,
    verbose=True)

# Plot source estimate at time of best dipole fit
brain = s...
Fetching data from Infodengue
We can download the data for a full state. Let's pick Goiás.
go = get_alerta_table(state='GO', doenca='dengue')
go
municipios = geobr.read_municipality(code_muni='GO')
municipios
municipios['code_muni'] = municipios.code_muni.astype('int')
municipios.plot(figsize=(10, 10));
goias = pd.merge(go.reset_index(), municipios, how='left',
                 left_on='municipio_geocodigo', right_on='code...
Notebooks/Spatial Exploration.ipynb
AlertaDengue/InfoDenguePredict
gpl-3.0
Building the dashboard
from functools import lru_cache
from IPython.display import display, Markdown
import pandas_bokeh

pandas_bokeh.output_notebook()
pd.options.plotting.backend = "pandas_bokeh"

@lru_cache(maxsize=27)
def get_dados(sigla='PR', doenca='dengue'):
    df = get_alerta_table(state=sigla, doenca=doenca)
    municipios = geobr.r...
Building Animated films
data_path = Path('./data/')
map_path = Path("./maps/")
os.makedirs(data_path, exist_ok=True)
os.makedirs(map_path, exist_ok=True)
Downloading data
We will start by downloading the full alerta table for all diseases.
from infodenguepredict.data.infodengue import get_full_alerta_table

diseases = ['dengue', 'chik', 'zika']
for dis in diseases:
    os.makedirs(data_path/dis, exist_ok=True)
for dis in diseases:
    get_full_alerta_table(dis, output_dir=data_path/dis, chunksize=50000,
                          start_SE=202140)
brmunis = geobr.read_municipality(...
Loading data from disk
We can load all chunks at once into a single dataframe, since they are Parquet files.
dengue = pd.read_parquet(data_path/'dengue')
dengue.sort_values('SE', inplace=True)
chik = pd.read_parquet(data_path/'chik')
chik.sort_values('SE', inplace=True)
zika = pd.read_parquet(data_path/'zika')
zika.sort_values('SE', inplace=True)
dengue
dmdf = merge(brmunis, dengue)
cmdf = merge(brmunis, chik)
zmdf = merge(brmun...
Import/export circuits
try:
    import cirq
except ImportError:
    print("installing cirq...")
    !pip install --quiet cirq
    print("installed cirq.")
docs/interop.ipynb
quantumlib/Cirq
apache-2.0
Cirq has several features that allow the user to import/export from other quantum languages.
Exporting and importing to JSON
For storing circuits or for transferring them between collaborators, JSON can be a good choice. Many objects in cirq can be serialized as JSON and then stored as a text file for transfer, storage...
import cirq

# Example circuit
circuit = cirq.Circuit(cirq.Z(cirq.GridQubit(1, 1)))

# Serialize to a JSON string
json_string = cirq.to_json(circuit)
print('JSON string:')
print(json_string)
print()

# Now, read back the string into a cirq object
# cirq.read_json can also read from a file
new_circuit = cirq.read_json(js...
Advanced: Adding JSON compatibility for custom objects in cirq
Most cirq objects come with serialization functionality added already. If you are adding a custom object (such as a custom gate), you can still serialize the object, but you will need to add the magic functions _json_dict_ and _from_json_dict_ to your obje...
!pip install --quiet cirq
!pip install --quiet ply==3.4
The following call will create a circuit defined by the input QASM string:
from cirq.contrib.qasm_import import circuit_from_qasm

circuit = circuit_from_qasm("""
OPENQASM 2.0;
include "qelib1.inc";
qreg q[3];
creg meas[3];
h q;
measure q -> meas;
""")
print(circuit)
Supported control statements

| Statement | Cirq support | Description | Example |
|----|--------|--------|--------|
| OPENQASM 2.0; | supported | Denotes a file in Open QASM format | OPENQASM 2.0; |
| qreg name[size]; | supported (see mapping qubits) | Declare a named register of qubits | qreg q[5]; |
| creg name[size]; | supported (see...
quirk_url = "https://algassert.com/quirk#circuit=%7B%22cols%22:[[%22H%22,%22H%22],[%22%E2%80%A2%22,%22X%22],[%22H%22,%22H%22]]}"
c = cirq.quirk_url_to_circuit(quirk_url)
print(c)
You can also convert the JSON from the "export" button on the top bar of Quirk. Note that you must parse the JSON string into a dictionary before passing it to the function:
import json

quirk_str = """{
  "cols": [
    ["H", "H"],
    ["•", "X"],
    ["H", "H"]
  ]
}"""
quirk_json = json.loads(quirk_str)
c = cirq.quirk_json_to_circuit(quirk_json)
print(c)
Step 1: Data acquisition
Below is a function that takes two inputs, the API endpoint (either 'pagecounts' or 'pageviews') and the access parameter. For pagecounts the access parameter can be 'all-sites', 'desktop-site', or 'mobile-site'. For pageviews the access parameter can be 'desktop', 'mobile-app', or 'mobile-web'...
# since we will be performing api calls at least five times, we will functionalize it def data_acquisition(api_endpoint, access): ''' call the wikimedia api and return a json format data set :param api_endpoint: legacy (pagecounts) current (pageviews) :param access: legacy (all...
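Since the function body above is truncated, here is a minimal sketch of such an acquisition helper. The endpoint templates follow the public Wikimedia REST API, but the project, agent, date range, and User-Agent values are illustrative assumptions, not values taken from the original notebook.

```python
# Hypothetical reconstruction: endpoint templates for the legacy (pagecounts)
# and current (pageviews) Wikimedia APIs.
ENDPOINTS = {
    'pagecounts': ('https://wikimedia.org/api/rest_v1/metrics/legacy/'
                   'pagecounts/aggregate/{project}/{access}/'
                   '{granularity}/{start}/{end}'),
    'pageviews': ('https://wikimedia.org/api/rest_v1/metrics/pageviews/'
                  'aggregate/{project}/{access}/all-agents/'
                  '{granularity}/{start}/{end}'),
}

def build_url(api_endpoint, access, project='en.wikipedia.org',
              granularity='monthly', start='2008010100', end='2017100100'):
    """Build the request URL for one endpoint/access combination."""
    return ENDPOINTS[api_endpoint].format(
        project=project, access=access, granularity=granularity,
        start=start, end=end)

def data_acquisition(api_endpoint, access):
    """Call the Wikimedia API and return the parsed JSON response."""
    import requests  # imported here so URL building stays dependency-free
    headers = {'User-Agent': 'data-curation-example'}
    return requests.get(build_url(api_endpoint, access), headers=headers).json()
```

Keeping the URL construction in its own function makes the endpoint logic testable without touching the network.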
src/hcds-a1-data-curation.ipynb
drjordy66/data-512-a1
mit
Run the above function to call the API and assign the responses to variables
response_pageview_desktop = data_acquisition('pageviews', 'desktop')
response_pageview_mobileweb = data_acquisition('pageviews', 'mobile-web')
response_pageview_mobileapp = data_acquisition('pageviews', 'mobile-app')
response_pagecount_desktop = data_acquisition('pagecounts', 'desktop-site')
response_pagecount_mobile =...
Export the API raw data files. This section has been commented out in order to not continuously overwrite the raw data files. The raw data files have already been created and will be imported in the next step.
# json.dump(response_pageview_desktop, open('../data_raw/pageviews_desktop_' + response_pageview_desktop['items'][0]['timestamp'][:-4] + '-' + response_pageview_desktop['items'][-1]['timestamp'][:-4] + '.json', 'w'), indent=4) # json.dump(response_pageview_mobileweb, open('../data_raw/pageviews_mobile-web_' + response_...
Step 2: Data processing
Import the raw .json files to process and create a new file for analysis.
response_pagecount_desktop = json.load(open('../data_raw/pagecounts_desktop-site_200801-201608.json'))
response_pagecount_mobile = json.load(open('../data_raw/pagecounts_mobile-site_201410-201608.json'))
response_pageview_desktop = json.load(open('../data_raw/pageviews_desktop_201507-201709.json'))
response_pageview_mo...
Functions for processing
get_views and get_count take the raw .json files as inputs, strip the timestamps and views/counts, and return arrays with two columns (timestamp, views/counts) and one row per month of data. lookup_val takes the arrays created by the prior functions as one input and a date as a s...
def get_views(api_response):
    '''
    strip all views from an api response
    '''
    temp_list = []
    for i in api_response['items']:
        temp_list.append([i['timestamp'], i['views']])
    return np.array(temp_list)

def get_count(api_response):
    '''
    strip all views from an api...
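The lookup_val helper is not shown in full above, so here is a hedged sketch of what it might look like, assuming it returns the stored value for a matching timestamp and 0 for missing months (the original description is truncated, so the fallback behavior is a guess).

```python
import numpy as np

def lookup_val(arr, date):
    """Return the views/counts entry whose timestamp matches `date` (0 if absent)."""
    match = arr[arr[:, 0] == date]
    return int(match[0, 1]) if len(match) else 0

# Invented sample data in the same two-column (timestamp, value) layout
monthly = np.array([['2016010100', '120'],
                    ['2016020100', '95']])
print(lookup_val(monthly, '2016020100'))  # 95
print(lookup_val(monthly, '2016030100'))  # 0
```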
Run the above functions to get all of the views/counts for both the legacy and current API
# strip all dates and views/count from api responses
pageview_desktop_views = get_views(response_pageview_desktop)
pageview_mobileweb_views = get_views(response_pageview_mobileweb)
pageview_mobileapp_views = get_views(response_pageview_mobileapp)
pagecount_desktop_views = get_count(response_pagecount_desktop)
pagecount...
Processing
First, all of the formatted arrays from the API responses are concatenated, and the first column (timestamp) is passed through set() to remove any duplicate timestamps. From here we can easily parse the timestamps into a list of just the years and a list of just the months. This gives us our first two columns of ...
# combine all data into one array
all_dates_views = np.concatenate((pageview_desktop_views,
                                  pageview_mobileweb_views,
                                  pageview_mobileapp_views,
                                  pagecount_desktop_views,
                                  pagecount_mobil...
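The deduplicate-and-parse step just described can be shown in isolation; the 'YYYYMMDDHH' timestamp format matches the Wikimedia responses, but the sample rows below are invented.

```python
import numpy as np

all_dates_views = np.array([['2016010100', '120'],
                            ['2016020100', '95'],
                            ['2016010100', '87']])  # duplicate timestamp

# set() removes duplicate timestamps; sort to restore chronological order
all_dates = sorted(set(all_dates_views[:, 0]))

# slice each timestamp into the year and month columns of the csv
year_col = [d[:4] for d in all_dates]
month_col = [d[4:6] for d in all_dates]
print(year_col, month_col)
```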
Second, we initialize five (one for each API response) lists where we will obtain just the counts/views from the two column arrays. We will then loop through all of the dates (no duplicates) that we found from the previous step and use the lookup_val function to find the corresponding counts/views for each API response...
# initialize lists for columns of csv file
pageview_desktop_views_col = []
pageview_mobileweb_views_col = []
pageview_mobileapp_views_col = []
pagecount_desktop_views_col = []
pagecount_mobile_views_col = []

# loop through all of the dates and lookup respective values from each api response
for i in all_dates:
    pag...
Third, we need to aggregate the two mobile sets of data from pageviews to get the total mobile data. For both pagecounts and pageviews we aggregate the desktop counts/views and mobile counts/views to get the total views for each.
# aggregate the mobile views from pageviews and the "all views" from pageviews and pagecounts
pageview_mobile_views_col = [sum(i) for i in zip(pageview_mobileweb_views_col,
                                               pageview_mobileapp_views_col)]
pageview_all_views_col = [sum(i) for i in zip(pageview_desktop_views_col,
                                            pageview_mobile_views_col)]
pagecount_all_...
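The elementwise aggregation pattern used above can be illustrated on its own; the per-month numbers here are invented.

```python
pageview_mobileweb_views_col = [10, 20, 30]
pageview_mobileapp_views_col = [1, 2, 3]

# zip pairs the months up; sum adds each pair elementwise
pageview_mobile_views_col = [sum(i) for i in zip(pageview_mobileweb_views_col,
                                                 pageview_mobileapp_views_col)]
print(pageview_mobile_views_col)  # [11, 22, 33]
```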
Convert to pandas DataFrame for easy export.
# assign column data to a pandas dataframe
df = pd.DataFrame({'year': year_col,
                   'month': month_col,
                   'pagecount_all_views': pagecount_all_views_col,
                   'pagecount_desktop_views': pagecount_desktop_views_col,
                   'pagecount_mobile_views': pagecount_mobi...
Export data in single csv. This section has been commented out in order to not continuously overwrite the cleaned data file. The cleaned data file has already been created and will be imported in the next step.
# write the column data to csv
# df.to_csv('../data_clean/en-wikipedia_traffic_200801-201709.csv', index=False)
Step 3: Analysis
Import the cleaned data file to use for analysis.
df = pd.read_csv('../data_clean/en-wikipedia_traffic_200801-201709.csv', dtype={'year': str, 'month': str})
Plot the data
The dates from the csv are converted to a datetime format so they can be plotted neatly. The points from the data are plotted, filtering out zero values in the y-axis data. The figure is then saved as a .png file.
# convert dates to a datetime format for plotting
dates = np.array([datetime.strptime(list(df['year'])[i] + list(df['month'])[i], '%Y%m')
                  for i in range(len(df))])

# set plot size
plt.figure(figsize=(16, 8))

# plot the points, filtering on non-zero values in the column data
plt.plot(dates[np.array(df['pageview_deskto...
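The date conversion at the start of that cell can be sketched on its own; the year/month values below are invented stand-ins for the csv columns.

```python
from datetime import datetime

year_col = ['2016', '2016']
month_col = ['01', '02']

# 'YYYY' + 'MM' concatenates to the '%Y%m' format strptime expects
dates = [datetime.strptime(y + m, '%Y%m') for y, m in zip(year_col, month_col)]
print(dates[0])  # 2016-01-01 00:00:00
```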
Pretty nice: they are all indexed with the city ID, or "Code Officiel Géographique" so we can merge those three datasets. While doing that, let's make sure we restrict to current cities only:
all_cities = pd.merge(
    cities[cities.current & ~cities.arrondissement],
    city_stats,
    right_index=True, left_index=True, how='outer')
all_cities = pd.merge(
    all_cities,
    urban_entities,
    right_index=True, left_index=True, how='outer')
all_cities.head()
data_analysis/notebooks/datasets/french_urban_entities.ipynb
bayesimpact/bob-emploi
gpl-3.0
OK, we're all set to start looking at the data.
Coverage
official_cities = all_cities[all_cities.name.notnull()]
official_cities.urban.notnull().value_counts()
Pretty neat! We have urban data for all the cities. Now let's try to get a better understanding of this data.
Data
The two fields we are going to dig into are urban and UU2010. Supposedly urban gives a score where 0 means rural, and from 1 to 8 it relates to bigger and bigger urban entities. UU2010 gives the ID of the ...
official_cities.sort_values('population', ascending=False)[['name', 'urban', 'UU2010']].head()
That sounds good: the biggest cities are inside the biggest urban entities. Let's check one of them:
official_cities[official_cities.UU2010 == '00758']\
    .sort_values('population', ascending=False)[['name', 'urban', 'UU2010']].head()
Cool, those are indeed cities that are part of the Lyon urban entities. Let's check the other side of the spectrum:
official_cities[official_cities.urban == 0][['name', 'urban', 'UU2010', 'population']].head()
Indeed, those seem like small villages (the population count is low); however, they all appear to share a common UU2010 field. Apparently that field is not valid for rural cities:
official_cities[official_cities.urban == 0]\
    .groupby(['UU2010', 'departement_id_x'])\
    .urban.count().to_frame().head()
Alright, there seems to be a unique UU2010 per département assigned to all rural cities in this département. We will make sure to ignore it. Now let's see global stats for each level of urban entities:
def _stats_per_urban_group(cities):
    if cities.urban.iloc[0]:
        entities_population = cities.groupby('UU2010').population.sum()
    else:
        # Not grouping as UU2010 has no meaning for rural areas.
        entities_population = cities.population
    return pd.Series({
        'total_population': entities_...
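The grouping logic in that (truncated) helper can be sketched on invented data; the column names follow the notebook, but the cities, codes, and populations below are made up, and the loop-based structure is a stand-in for the original groupby/apply call.

```python
import pandas as pd

cities = pd.DataFrame({
    'urban': [0, 0, 3, 3, 3],
    'UU2010': ['99', '99', '00758', '00758', '00111'],
    'population': [100, 200, 500000, 20000, 80000],
})

stats = {}
for level, group in cities.groupby('urban'):
    if level:
        # Urban: one population figure per urban entity (UU2010 code)
        entities_population = group.groupby('UU2010').population.sum()
    else:
        # Rural: UU2010 is meaningless, so keep cities separate
        entities_population = group.population
    stats[level] = {'total_population': entities_population.sum(),
                    'num_entities': len(entities_population)}
urban_stats = pd.DataFrame(stats).T
print(urban_stats)
```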
OK, there are many interesting things in those stats. First, the size of the entities seems to be globally consistent with the documentation: entity levels are defined by their sizes. For the small numbers, though, there seem to be some slight inconsistencies, but we'll say that the population data is not very precise. Let's check the dis...
COLOR_MAP = [
    '#e0f2f1', '#c8e6c9', '#c5e1a5', '#dce775', '#ffee58',
    '#ffc107', '#ff9800', '#ff5722', '#795548',
]
urban_stats.num_entities.plot(kind='pie', figsize=(5, 5), colors=COLOR_MAP);
The huge majority of cities are rural, and only very few of them are part of the largest urban entities. Let's look at it from another angle, and check the population distribution:
urban_stats.total_population.plot(kind='pie', figsize=(5, 5), colors=COLOR_MAP);
OK, this is a whole other picture: rural areas account for less than a quarter of the population, and half of the population actually lives in urban entities of level 6 or above (each entity larger than 100k inhabitants). Finally, let's plot the urban entities for metropolitan France:
is_in_metropol = (official_cities.longitude > -5) & (official_cities.latitude > 25)
official_cities[is_in_metropol & official_cities.urban.notnull()]\
    .sort_values('urban')\
    .plot(kind='scatter', x='longitude', y='latitude', s=5, c='urban',
          figsize=(12, 10));
Now we are defining the parameters which we will use to create the microstructures. n_samples will determine how many microstructures of a particular volume fraction we want to create. size determines the number of pixels we want to be included in the microstructure. We will define the material properties to be used...
sample_size = 100
n_samples = 4 * [sample_size]
size = (101, 101)
elastic_modulus = (1.3, 75)
poissons_ratio = (0.42, .22)
macro_strain = 0.001
n_phases = 2
grain_size = [(40, 2), (10, 2), (2, 40), (2, 10)]
v_frac = [(0.7, 0.3), (0.6, 0.4), (0.3, 0.7), (0.4, 0.6)]
per_ch = 0.1
notebooks/homogenization_fiber_2D.ipynb
davidbrough1/pymks
mit
Now we will create the microstructures and generate their responses using the make_elastic_stress_random function from pyMKS. Four datasets are created to create the four different volume fractions that we are simulating. Then the datasets are combined into one variable. The volume fractions are listed in the varia...
from pymks.datasets import make_elastic_stress_random

dataset, stresses = make_elastic_stress_random(n_samples=n_samples, size=size,
                                               grain_size=grain_size,
                                               elastic_modulus=elastic_modulus,
                                               poissons_ratio=poissons_ratio,
                                               ...
Now we are going to print out a few microstructures to look at how the fiber length, orientation, and volume fraction are varied.
from pymks.tools import draw_microstructures

examples = dataset[::sample_size]
draw_microstructures(examples)
Creating the Model
Next we are going to instantiate the model. The MKSHomogenizationModel takes in microstructures and runs two-point statistics on them to get a statistical representation of the microstructures. An explanation of the use of two-point statistics can be found in the Checkerboard Microstructure Example. T...
from pymks import MKSHomogenizationModel
from pymks import PrimitiveBasis

p_basis = PrimitiveBasis(n_states=2, domain=[0, 1])
model = MKSHomogenizationModel(basis=p_basis, correlations=[(0, 0)],
                              periodic_axes=[0, 1])
Now we are going to split our data into testing and training segments so we can test and see if our model can accurately predict the effective stress.
from sklearn.cross_validation import train_test_split

flat_shape = (dataset.shape[0],) + (dataset[0].size,)
data_train, data_test, stress_train, stress_test = train_test_split(
    dataset.reshape(flat_shape), stresses, test_size=0.2, random_state=3)
We will use sklearn's GridSearchCV to optimize the n_components and degree for our model. Let's search over the range of 1st order to 3rd order polynomial for degree and 2 to 7 principal components for n_components.
from sklearn.grid_search import GridSearchCV

params_to_tune = {'degree': np.arange(1, 4), 'n_components': np.arange(2, 8)}
fit_params = {'size': dataset[0].shape}
gs = GridSearchCV(model, params_to_tune, fit_params=fit_params).fit(data_train,
                                                                   stress_train)
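The same degree/n_components search can be sketched with the modern sklearn.model_selection API; the pipeline below is a generic stand-in for MKSHomogenizationModel (synthetic data, illustrative estimator choices), not the pymks model itself.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in data: 40 samples with 10 features each
rng = np.random.RandomState(0)
X = rng.rand(40, 10)
y = X[:, 0] * 2 + rng.rand(40) * 0.1

# PCA -> polynomial regression mimics the reduce-then-link structure
pipe = Pipeline([
    ('pca', PCA()),
    ('poly', PolynomialFeatures()),
    ('reg', LinearRegression()),
])
params = {'pca__n_components': np.arange(2, 8),
          'poly__degree': np.arange(1, 4)}
gs = GridSearchCV(pipe, params, cv=3).fit(X, y)
print(gs.best_params_)
```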
Let's take a look at the results.
from pymks.tools import draw_gridscores_matrix

print('Order of Polynomial:', gs.best_estimator_.degree)
print('Number of Components:', gs.best_estimator_.n_components)
print('R-squared Value:', gs.score(data_test, stress_test))

draw_gridscores_matrix(gs, ['n_components', 'degree'], score_label='R-Squared',
                       ...
Our best model was found to have degree equal to 3 and n_components equal to 5. Let's go ahead and use it.
model = gs.best_estimator_
Structures in PCA space
Now we want to draw how the samples are spread out in PCA space and look at how the testing and training data line up.
from pymks.tools import draw_components_scatter

stress_predict = model.predict(data_test)
draw_components_scatter([model.reduced_fit_data[:, :3],
                         model.reduced_predict_data[:, :3]],
                        ['Training Data', 'Testing Data'],
                        legend_outside=True)
It looks like there is pretty good agreement between the testing and the training data. We can also see that the four different fiber sizes are separated in the PC space.
Draw Goodness of fit
Now we are going to look at how well our model predicts the properties of the structures. The calculated properties will be ...
from pymks.tools import draw_goodness_of_fit

fit_data = np.array([stresses, model.predict(dataset)])
pred_data = np.array([stress_test, stress_predict])
draw_goodness_of_fit(fit_data, pred_data, ['Training Data', 'Testing Data'])
Lesson 1: Demonstration
This first example follows Chapter 4, section 3 of Richard McElreath's book Statistical Rethinking. The task is to understand height in a population, in this case using data about the !Kung San people. Anthropologist Nancy Howell conducted interviews with the !Kung San and collected the data used ...
# Read some data into a frame
# A frame is like a table in a spreadsheet.
# It contains columns (which usually have names) and rows (which can be indexed by number,
# but may also have names)
df = pd.read_csv('https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/Howell1.csv', sep=";")
df.head()
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
R equivalent: Installing ggplot2 is optional if you already have it.
```R
install.packages("ggplot2")
library(ggplot2)
ggplot(data=df, mapping=aes(x=age, y=height)) + geom_point()
```
# Graph the data -- let's look at height vs. age
df.plot.scatter(x='age', y='height')
R equivalent:
```R
install.packages('dplyr')
library(dplyr)
adults_df <- filter(df, age>=18)
ggplot(data=adults_df, mapping=aes(x=age, y=height)) + geom_point()
```
# Filter to adults, since height and age are correlated in children
adults_df = df[df['age'] >= 18]

# Look at height vs. age again
adults_df.plot.scatter(x='age', y='height')
R equivalent:
```R
nrow(adults_df); nrow(df)
```
# Print out how many rows are in each frame
len(df), len(adults_df)
R equivalent:
```R
ggplot(data=adults_df, mapping=aes(x=height)) + geom_histogram()
```
# Let's look at how the data are distributed
adults_df['height'].plot.hist()
R equivalent: ```R ```
# Split data into male and female
# -- first add in a sex column to make it less confusing
df['sex'] = df.apply(lambda row: 'Male' if row['male'] == 1 else 'Female', axis=1)

# -- re-apply the filter, since we modified the data
adults_df = df[df['age'] >= 18]
adults_df.head()

# Let's summarize the data
adults_df[['age...
Lesson 2: Details
What just happened!? Let's take a deeper look at what was done above.
Reading in data
df = pd.read_csv('https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/Howell1.csv', sep=";")
df.head()
One central abstraction in pandas is the DataFrame, which is similar to a data frame in R: basically a spreadsheet. It is made up of columns, which usually have names, and rows, which may be named or just accessed by index. Pandas is designed to be fast and efficient, so the table isn't necessarily store...
df['height'].head()
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
In many cases, columns of a frame can be accessed like an array. The result is a pandas Series object, which, as you see, has a name, index, and type. Aside — why all the calls to 'head()'? Series and frames can be very large. The methods head() and tail() can be used to get a few of the first and last rows, respectively....
df.loc[0]
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
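A small sketch of the points above, on an illustrative frame (the lesson's df is loaded from the Howell1 CSV): a Series carries its name and dtype, and both head() and tail() accept an optional row count.

```python
import pandas as pd

# illustrative data standing in for the Howell1 frame
df = pd.DataFrame({'height': [151.7, 139.7, 136.5, 156.8],
                   'age': [63.0, 63.0, 65.0, 41.0]})

s = df['height']
print(s.name)      # the column label: 'height'
print(s.dtype)     # the element type: float64
print(s.head(2))   # first 2 rows of the Series
print(df.tail(1))  # last row of the frame
```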
Rows are accessed using the method loc or iloc. The method 'loc' takes the index label, while 'iloc' takes the integer row position. In the above case, these are the same, but that is not always true. For example...
summary_df = df.describe() summary_df.loc['mean']
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
To access an individual cell, specify a row and column using loc.
summary_df.loc['mean', 'age']
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
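For looking up a single scalar, pandas also provides at (and iat for positional access), which can be faster than loc; a small sketch with illustrative data:

```python
import pandas as pd

# illustrative data standing in for the lesson's frame
df = pd.DataFrame({'age': [63.0, 63.0], 'height': [151.7, 139.7]})
summary_df = df.describe()

# loc and at return the same scalar here
print(summary_df.loc['mean', 'age'])
print(summary_df.at['mean', 'age'])
```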
Another aside -- loc vs. iloc As you saw above, the method loc takes the "label" of the index. The method iloc takes integer positions as arguments, in the form [row-index, col-index].
# select row index 0, and all the columns in that row df.iloc[0,:] # select all the rows in column 0 by index df.iloc[:,0]
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
Basic frame manipulations — data subsetting
df[['age', 'height', 'weight']].head()
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
Specifying an array of column names returns a frame containing just those columns.
df.iloc[0:5]
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
It's also possible to access a subset of the rows by index. More commonly, though, you will want to subset the data by some property.
df[df['age'] >= 18].head()
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
This is intuitive to understand, but may seem a little magical at first. It is worth understanding what is going on under the covers. The expression df['age'] &gt;= 18 returns a Series of bool indicating whether the expression is true or false for each row (identified by index).
(df['age'] >= 18).head() (df['male'] == 0).head()
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
When such a series is the argument to the indexing operator, [], pandas returns a frame containing the rows where the value is True. These kinds of expressions can be combined as well, using the bitwise operators & and | (not the Python keywords and/or).
((df['age'] >= 18) & (df['male'] == 0)).head() df[(df['age'] >= 18) & (df['male'] == 0)].head()
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
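Alongside &, the other bitwise operators behave the same way: | for "or" and ~ for "not". A small sketch with made-up data:

```python
import pandas as pd

# illustrative data standing in for the lesson's frame
df = pd.DataFrame({'age': [10, 25, 40], 'male': [1, 0, 1]})

# rows that are either under 18 OR female
print(df[(df['age'] < 18) | (df['male'] == 0)])

# rows that are NOT adults
print(df[~(df['age'] >= 18)])
```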
This way, code for subsetting is intuitive to understand. It is also possible to subset rows and columns simultaneously.
df.loc[(df['age'] >= 18) & (df['male'] == 0), ['height', 'weight', 'age']].head()
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
Basic frame manipulations — renaming columns Renaming columns: assign a list of new column names to df.columns
df.columns = ['new_height', 'new_weight', 'new_age', 'coded_gender']
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
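Assigning to df.columns replaces every name at once. To rename only some columns, pandas also provides rename; a sketch with illustrative data:

```python
import pandas as pd

df = pd.DataFrame({'height': [151.7], 'weight': [47.8]})

# rename a subset of columns; the others keep their names
df2 = df.rename(columns={'height': 'new_height'})
print(df2.columns.tolist())  # ['new_height', 'weight']
```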
Creating columns based on other columns If I wanted to create a new column based on adding up the weight and age, I could do this:
df['new_id'] = df['new_weight'] + df['new_age'] df.head(2)
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
If I wanted to create a calculated column using a dictionary replacement, I could use the map function
gender_text = {1: 'Male', 0: 'Female'} df['text_gender'] = df['coded_gender'].map(gender_text) df.head(2)
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
What about using a lambda function to create a new column?
df['double_age'] = df['new_age'].apply(lambda x: x*2) df.head(2)
lessons/pandas-for-r-users.ipynb
ciyer/pandas-intro
bsd-2-clause
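For simple arithmetic like this, the lambda is not actually needed: pandas Series support vectorized operations directly, which is usually faster than apply. A sketch with illustrative values:

```python
import pandas as pd

# illustrative ages standing in for the lesson's column
df = pd.DataFrame({'new_age': [63.0, 41.0]})

# vectorized: the multiplication runs over the whole column at once
df['double_age'] = df['new_age'] * 2
print(df['double_age'].tolist())  # [126.0, 82.0]
```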
Here, biolog_base_composition describes the media components that are present in every biolog condition (Note: if you are using these data for your own purposes, keep in mind that we added Heme and H2S2O3 due to common issues encountered in models. These are not actually in the biolog medium). The biolog_base_dict is a...
biolog_base_dict
docs/creating_ensemble.ipynb
gregmedlock/Medusa
mit
The actual growth/no growth data is in biolog_thresholded, which is a pandas DataFrame with organism species/genus as rows, and biolog media conditions as columns represented by the ModelSEED metabolite ID for the single carbon/nitrogen source present. The original source of these data is [4]; there, you can find the n...
# Just inspect the first 5 species biolog_thresholded.head(5)
docs/creating_ensemble.ipynb
gregmedlock/Medusa
mit
Now we'll extract the positive growth conditions for the species we're interested in (Staphylococcus aureus)
test_mod_pheno = biolog_thresholded.loc['Staphylococcus aureus'] test_mod_pheno = list(test_mod_pheno[test_mod_pheno == True].index) test_mod_pheno
docs/creating_ensemble.ipynb
gregmedlock/Medusa
mit
In order to gapfill this model, we have to make sure that the biolog media components are in the model, and that there are exchange reactions for each of these metabolites. To make this process more convenient, we'll load the universal reaction database now, which we will also use later in the process. The universal mo...
# load the universal reaction database from medusa.test import load_universal_modelseed from cobra.core import Reaction universal = load_universal_modelseed() # check for biolog base components in the model and record # the metabolites/exchanges that need to be added add_mets = [] add_exchanges = [] for met in list(bi...
docs/creating_ensemble.ipynb
gregmedlock/Medusa
mit
Next, we need to do the same for the single carbon/nitrogen sources in the biolog data. When performing this workflow on your own GENRE, you may want to check that all of the media components that enable growth have suitable transporters in the universal model (or already in the draft model).
# Find metabolites from the biolog data that are missing in the test model # and add them from the universal missing_mets = [] missing_exchanges = [] media_dicts = {} for met_id in test_mod_pheno: try: model.metabolites.get_by_id(met_id) except: print(met_id + " was not in model, adding met and ...
docs/creating_ensemble.ipynb
gregmedlock/Medusa
mit
Now, let's fill some gaps using the iterative_gapfill_from_binary_phenotypes function. For simplicity, we'll just take the first 5 conditions and perform gapfilling for 10 cycles, which should yield an ensemble with 10 members. We set lower_bound = 0.05, which requires that the model produces 0.05 units of flux through...
from medusa.reconstruct.expand import iterative_gapfill_from_binary_phenotypes # select a subset of the biolog conditions to perform gapfilling with sources = list(media_dicts.keys()) sub_dict = {sources[0]:media_dicts[sources[0]], sources[1]:media_dicts[sources[1]], sources[2]:media_dicts[sources...
docs/creating_ensemble.ipynb
gregmedlock/Medusa
mit
The NLMO part of the DIPOLE MOMENT ANALYSIS: section is now shown in <span style="color:orange"><b>Table 1</b></span>. The corresponding *_dip.csv file was saved to the path shown in the previous cell.
# Print html formatted table from the loaded css file HTML(df.to_html(classes = 'grid', escape=False))
ReadNboDip.ipynb
marpat/blog
gpl-3.0
Plots Entropy vs langton blue = all, red = our random
# Plot Entropy of all rules against the langton parameter ax1 = plt.gca() d_five.plot("langton", "Entropy", ax=ax1, kind="scatter", marker='o', alpha=.5, s=40) d_five_p10_90.plot("langton", "Entropy", ax=ax1, kind="scatter", color="r", marker='o', alpha=.5, s=40) plt.show() ax1 = plt.gca() d_five.plot("langton", "Entr...
results-two-colors.ipynb
sunsistemo/mozzacella-automato-salad
gpl-3.0
Chi-square vs langton blue = all, red = our random
# Plot Chi-Square of all rules against the langton parameter ax2 = plt.gca() d_five.plot("langton", "Chi-square", ax=ax2, logy=True, kind="scatter", marker='o', alpha=.5, s=40) d_five_p10_90.plot("langton", "Chi-square", ax=ax2, logy=True, kind="scatter", color="r", marker='o', alpha=.5, s=40) plt.show() ax2 = plt.gca...
results-two-colors.ipynb
sunsistemo/mozzacella-automato-salad
gpl-3.0
Mean vs langton blue = all, red = our random
# Plot Mean of all rules against the langton parameter ax3 = plt.gca() d_five.plot("langton", "Mean", ax=ax3, kind="scatter", marker='o', alpha=.5, s=40) d_five_p10_90.plot("langton", "Mean", ax=ax3, kind="scatter", color="r", marker='o', alpha=.5, s=40) plt.show() ax3 = plt.gca() d_five.plot("langton", "Mean", ax=ax3...
results-two-colors.ipynb
sunsistemo/mozzacella-automato-salad
gpl-3.0
Monte-Carlo-Pi vs langton blue = all, red = our random
# Plot Monte Carlo of all rules against the langton parameter ax4 = plt.gca() d_five.plot("langton", "Monte-Carlo-Pi", ax=ax4, kind="scatter", marker='o', alpha=.5, s=40) d_five_p10_90.plot("langton", "Monte-Carlo-Pi", ax=ax4, kind="scatter", color="r", marker='o', alpha=.5, s=40) plt.show() ax4 = plt.gca() d_five.plo...
results-two-colors.ipynb
sunsistemo/mozzacella-automato-salad
gpl-3.0
Serial-Correlation vs langton blue = all, red = our random
# Plot Serial Correlation of all rules against the langton parameter ax5 = plt.gca() d_five.plot("langton", "Serial-Correlation", ax=ax5, kind="scatter", marker='o', alpha=.5, s=40) d_five_p10_90.plot("langton", "Serial-Correlation", ax=ax5, kind="scatter", color="r", marker='o', alpha=.5, s=40) plt.show() ax5 = plt.g...
results-two-colors.ipynb
sunsistemo/mozzacella-automato-salad
gpl-3.0
p-value vs langton blue = all, red = our random
# Plot p-value of all rules against the langton parameter ax6 = plt.gca() d_five.plot("langton", "p-value", ax=ax6, kind="scatter", marker='o', alpha=.5, s=40) d_five_p10_90.plot("langton", "p-value", ax=ax6, kind="scatter", color="r", marker='o', alpha=.5, s=40) plt.show() ax6 = plt.gca() d_five.plot("langton", "p-va...
results-two-colors.ipynb
sunsistemo/mozzacella-automato-salad
gpl-3.0
Python's and Linux's RNGs Python's random.randint and Linux's /dev/urandom
def read_results(filename): results = (File_bytes, Monte_Carlo_Pi, Rule, Serial_Correlation, Entropy, Chi_square, Mean) = [[] for _ in range(7)] with open(filename) as f: data = json.load(f) variables = {"File-bytes": File_bytes, "Monte-Carlo-Pi": Monte_Carlo_Pi, "Rule": Rule, "Serial-Correlation": ...
results-two-colors.ipynb
sunsistemo/mozzacella-automato-salad
gpl-3.0
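As a minimal illustration of the two generators being compared here: Python's random module is a seedable Mersenne Twister PRNG, while /dev/urandom is the kernel's entropy source, readable via os.urandom:

```python
import os
import random

# PRNG: reproducible given the same seed
random.seed(42)
print(random.randint(0, 255))

# OS entropy source: 4 raw bytes from /dev/urandom (on Linux)
raw = os.urandom(4)
print(len(raw), all(0 <= b <= 255 for b in raw))
```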
Answer Implement your sequential backward selection class here, either as a separate class or by extending the SBS class so that it can handle both forward and backward selection (via an input parameter indicating the direction).
from sklearn.base import clone from itertools import combinations import numpy as np from sklearn.cross_validation import train_test_split from sklearn.metrics import accuracy_score class SequentialSelection(): def __init__(self, estimator, k_features, scoring=accuracy_score, backward = True, test...
assignments/solutions/ex04_sample_solution.ipynb
xdnian/pyml
mit
Apply your sequential forward/backward selection code to the KNN classifier with the wine data set, and plot the accuracy versus number-of-features curves for both. Describe the similarities and differences you can find, e.g. * do the two methods agree on the optimal number of features? * do the two methods have simil...
%matplotlib inline import matplotlib.pyplot as plt from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors=2) # selecting features for backward in [True, False]: if backward: k_features = 1 else: k_features = X_train_std.shape[1] ss = SequentialS...
assignments/solutions/ex04_sample_solution.ipynb
xdnian/pyml
mit
PCA versus LDA (50 points) We have learned two different methods for feature extraction, PCA (unsupervised) and LDA (supervised). Under what circumstances would PCA and LDA produce very different results? Provide one example dataset in 2D, analyze it via PCA and LDA, and plot it with the PCA and LDA components. You ca...
%matplotlib inline
assignments/solutions/ex04_sample_solution.ipynb
xdnian/pyml
mit
Write code to produce your own dataset in 2D. You are free to design characteristics like the number of classes and the number of samples per class, as long as your dataset can be analyzed via PCA and LDA.
# visualize the data set import numpy as np import matplotlib.pyplot as plt def plot_dataset(X, y, xlabel="", ylabel=""): num_class = np.unique(y).size colors = ['red', 'blue', 'green', 'black'] markers = ['^', 'o', 's', 'd'] if num_class <= 1: plt.scatter(X[:, 0], X[:, 1]) pa...
assignments/solutions/ex04_sample_solution.ipynb
xdnian/pyml
mit
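One way to construct such a dataset (the names and parameters below are illustrative, not the sample solution's): two elongated classes whose direction of maximum variance is orthogonal to the direction that separates them, so that PCA (which follows total variance) and LDA (which follows class separation) pick different components.

```python
import numpy as np

rng = np.random.RandomState(0)
n = 100

# large variance along x, small along y; the classes differ only in y
X0 = np.column_stack([rng.normal(0, 5, n), rng.normal(0, 0.3, n)])
X1 = np.column_stack([rng.normal(0, 5, n), rng.normal(2, 0.3, n)])

X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)
```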
Plot your data set, with different classes in different marker colors and/or shapes. You can write your own plot code or use existing library plot code.
plot_dataset(X, y)
assignments/solutions/ex04_sample_solution.ipynb
xdnian/pyml
mit
Apply your dataset through PCA and LDA, and plot the projected data using the same plot code. Explain the differences you notice, and how you manage to construct your dataset to achieve such differences. You can use the PCA and LDA code from the scikit-learn library.
from sklearn.decomposition import PCA pca = PCA() X_pca = pca.fit_transform(X) plot_dataset(X_pca, y, xlabel='PC 1', ylabel='PC 2') from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA lda = LDA(n_components=dimension) X_lda = lda.fit_transform(X, y) plot_dataset(X_lda, y, xlabel = "LD 1", y...
assignments/solutions/ex04_sample_solution.ipynb
xdnian/pyml
mit