Cython gains most of its tremendous speed from static typing. Let's try again with the isprime example from before:
%%cython # import some C functions to avoid calls to the Python interpreter from libc.math cimport sqrt, ceil # define type n as integer def isprime_cython(int n): """Returns True if n is prime and False otherwise""" # we can skip the instance check since we have strong typing now if n < 2: return...
parallel.ipynb
superbock/parallel2015
mit
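For comparison, a pure-Python version of the same primality test (a sketch reconstructing the "from before" baseline; the notebook's exact implementation may differ slightly) looks like this. Note the explicit instance check, which the typed Cython version can skip:

```python
from math import ceil, sqrt

def isprime(n):
    """Returns True if n is prime and False otherwise (pure Python)."""
    if not isinstance(n, int):
        raise TypeError('n must be an integer')
    if n < 2:
        return False
    if n == 2:
        return True
    # trial division up to ceil(sqrt(n))
    for i in range(2, int(ceil(sqrt(n))) + 1):
        if n % i == 0:
            return False
    return True
```

Every call here goes through the Python interpreter, which is exactly the overhead the `cimport`ed C `sqrt`/`ceil` and the typed `int n` remove.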
Let's see if (and how much) it helps:
%timeit sum_primes(10000) %timeit sum_primes_cython(10000)
speed-up ~24x (686 µs vs. 16.5 ms)
%timeit -n 1 [sum_primes(n) for n in xrange(10000)] %timeit -n 1 [sum_primes_cython(n) for n in xrange(10000)]
speed-up ~25x (3.1 s vs. 76 s) What if we use this faster version of the function with multiple processes?
%timeit mp.Pool(2).map(sum_primes_cython, (n for n in xrange(10000))) %timeit mp.Pool(4).map(sum_primes_cython, (n for n in xrange(10000)))
Starting more processes than there are physical CPU cores yields some additional performance, but does not scale as well because of hyper-threading. The total speed-up compared to the single-process pure Python variant is ~49x (1.56 s vs. 76 s). Still, isprime_cython is a Python function, which adds some overhead. Since we call this funct...
%%cython from libc.math cimport sqrt, ceil # make this a C function cdef int isprime_cython_nogil(int n) nogil: """Returns True if n is prime and False otherwise""" if n < 2: return 0 if n == 2: return 1 cdef int max max = int(ceil(sqrt(n))) cdef int i = 2 while i <= max: ...
Again, a bit faster; the total speed-up compared to the single-process pure Python variant is ~64x (1.18 s vs. 76 s). These are the Cython basics. Let's apply them to the other example, the backward comb filter, which we were not able to vectorise:
%%cython import numpy as np # we also want to load the C-bindings of numpy with cimport cimport numpy as np # statically type the obvious variables (tau, alpha, n) def feed_backward_comb_filter(signal, unsigned int tau, float alpha): """ Filter the signal with a feed backward comb filter. :param signal: ...
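To see why this filter resists vectorisation, a plain-NumPy reference is helpful (a sketch of the recurrence y[n] = x[n] + alpha * y[n - tau]; the notebook's exact signature may differ): every output sample depends on a *previous output* sample, so the loop is inherently sequential.

```python
import numpy as np

def feed_backward_comb_filter_py(signal, tau, alpha):
    """Feed backward comb filter: y[n] = x[n] + alpha * y[n - tau].

    The recurrence reads already-filtered samples, so it cannot be
    expressed as a single vectorised NumPy operation.
    """
    y = np.copy(signal).astype(float)
    for n in range(tau, len(y)):
        y[n] += alpha * y[n - tau]
    return y
```

A unit impulse makes the feedback visible: the impulse echoes every `tau` samples, scaled by `alpha` each time.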
A bit better (roughly half the time), but still far from the feed forward variant. Let's see what kills performance and fix it. Cython has a nice -a switch that highlights calls into Python in yellow.
%%cython -a import numpy as np cimport cython cimport numpy as np def feed_backward_comb_filter(signal, unsigned int tau, float alpha): """ Filter the signal with a feed backward comb filter. :param signal: signal :param tau: delay length :param alpha: scaling factor :return: comb f...
In line 25, we still have calls to Python within the loop (e.g. PyNumber_Multiply and PyNumber_Add). We can get rid of these by statically typing the signal as well. Unfortunately, we lose the ability to call the filter function with a signal of arbitrary dimensions, since Cython needs to know the dimensions of the sig...
%%cython import numpy as np cimport cython cimport numpy as np def feed_backward_comb_filter_1d(np.ndarray[np.float_t, ndim=1] signal, unsigned int tau, float alpha): """ Filter the signal with a feed backward comb filter. :param signal: s...
Much better, let's check again.
%%cython -a import numpy as np cimport cython cimport numpy as np def feed_backward_comb_filter_1d(np.ndarray[np.float_t, ndim=1] signal, unsigned int tau, float alpha): """ Filter the signal with a feed backward comb filter. :param signal...
For the sake of completeness, let's get rid of the Pyx_RaiseBufferIndexError calls in line 27 as well. We tell Cython that it does not need to check bounds by adding a @cython.boundscheck(False) decorator.
%%cython import numpy as np cimport cython cimport numpy as np @cython.boundscheck(False) def feed_backward_comb_filter_1d(np.ndarray[np.float_t, ndim=1] signal, unsigned int tau, float alpha): """ Filter the signal with a feed backward comb fi...
Ok, we are now on par with the feed forward Numpy variant -- or even a bit better :) To get back the flexibility of the Python/Numpy solution, which can handle signals of arbitrary dimension, we need to define a wrapper function (in pure Python):
%%cython import numpy as np cimport cython cimport numpy as np def feed_backward_comb_filter(signal, tau, alpha): """ Filter the signal with a feed backward comb filter. :param signal: signal :param tau: delay length :param alpha: scaling factor :return: comb filtered signal "...
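The wrapper idea can be sketched in pure Python: inspect `ndim` and dispatch to the typed 1d kernel, column by column for 2d input. Here `_filter_1d` is a hypothetical stand-in for the compiled Cython function:

```python
import numpy as np

def _filter_1d(signal, tau, alpha):
    # Stand-in for the typed Cython 1d kernel.
    y = signal.astype(float).copy()
    for n in range(tau, len(y)):
        y[n] += alpha * y[n - tau]
    return y

def feed_backward_comb_filter(signal, tau, alpha):
    """Dispatch to the 1d kernel for 1d or 2d signals."""
    if signal.ndim == 1:
        return _filter_1d(signal, tau, alpha)
    elif signal.ndim == 2:
        # filter each column (channel) independently
        return np.column_stack([_filter_1d(signal[:, i], tau, alpha)
                                for i in range(signal.shape[1])])
    raise ValueError('signal must be 1d or 2d')
```

The dispatch itself is pure Python and cheap; all the heavy lifting stays in the statically typed kernel.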
Working with Epoch metadata This tutorial shows how to add metadata to ~mne.Epochs objects, and how to use Pandas query strings &lt;pandas:indexing.query&gt; to select and plot epochs based on metadata properties. For this tutorial we'll use a different dataset than usual: the kiloword-dataset, which contains EEG data ...
import os import numpy as np import pandas as pd import mne kiloword_data_folder = mne.datasets.kiloword.data_path() kiloword_data_file = os.path.join(kiloword_data_folder, 'kword_metadata-epo.fif') epochs = mne.read_epochs(kiloword_data_file)
0.23/_downloads/2455121b46e43615a45b660a36d0ad93/30_epochs_metadata.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Viewing Epochs metadata .. sidebar:: Restrictions on metadata DataFrames Metadata dataframes are less flexible than typical :class:Pandas DataFrames &lt;pandas.DataFrame&gt;. For example, the allowed data types are restricted to strings, floats, integers, or booleans; and the row labels are always integers cor...
epochs.metadata
Viewing the metadata values for a given epoch and metadata variable is done using any of the Pandas indexing &lt;pandas:/reference/indexing.rst&gt; methods such as :obj:~pandas.DataFrame.loc, :obj:~pandas.DataFrame.iloc, :obj:~pandas.DataFrame.at, and :obj:~pandas.DataFrame.iat. Because the index of the dataframe is th...
print('Name-based selection with .loc') print(epochs.metadata.loc[2:4]) print('\nIndex-based selection with .iloc') print(epochs.metadata.iloc[2:4])
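The label-based vs. position-based distinction is easy to see on a toy DataFrame (independent of the MNE data): `.loc` slices are inclusive of the end label, while `.iloc` follows Python's half-open slicing convention.

```python
import pandas as pd

df = pd.DataFrame({'x': [10, 20, 30, 40, 50]})  # integer row labels 0..4
print(df.loc[2:4])   # rows labelled 2, 3 AND 4 (end label included)
print(df.iloc[2:4])  # rows at positions 2 and 3 only
```

With a default integer index the two look deceptively similar, which is exactly why the epochs example above prints both.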
Modifying the metadata Like any :class:pandas.DataFrame, you can modify the data or add columns as needed. Here we convert the NumberOfLetters column from :class:float to :class:integer &lt;int&gt; data type, and add a :class:boolean &lt;bool&gt; column that arbitrarily divides the variable VisualComplexity into high a...
epochs.metadata['NumberOfLetters'] = \ epochs.metadata['NumberOfLetters'].map(int) epochs.metadata['HighComplexity'] = epochs.metadata['VisualComplexity'] > 65 epochs.metadata.head()
Selecting epochs using metadata queries All ~mne.Epochs objects can be subselected by event name, index, or :term:slice (see tut-section-subselect-epochs). But ~mne.Epochs objects with metadata can also be queried using Pandas query strings &lt;pandas:indexing.query&gt; by passing the query string just as you would nor...
print(epochs['WORD.str.startswith("dis")'])
This capability uses the :meth:pandas.DataFrame.query method under the hood, so you can check out the documentation of that method to learn how to format query strings. Here's another example:
print(epochs['Concreteness > 6 and WordFrequency < 1'])
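The same query-string syntax works on any DataFrame via `pandas.DataFrame.query` (toy data here, not the kiloword metadata):

```python
import pandas as pd

meta = pd.DataFrame({'Concreteness': [5.0, 6.5, 7.0],
                     'WordFrequency': [2.0, 0.5, 1.5]})
# column names are referenced directly inside the query string
subset = meta.query('Concreteness > 6 and WordFrequency < 1')
print(subset)
```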
Note also that traditional epochs subselection by condition name still works; MNE-Python will try the traditional method first before falling back on rich metadata querying.
epochs['solenoid'].plot_psd()
One use of the Pandas query string approach is to select specific words for plotting:
words = ['typhoon', 'bungalow', 'colossus', 'drudgery', 'linguist', 'solenoid'] epochs['WORD in {}'.format(words)].plot(n_channels=29)
Notice that in this dataset, each "condition" (i.e., each word) occurs only once, whereas with the sample dataset each condition (e.g., "auditory/left", "visual/right", etc.) occurred dozens of times. This makes the Pandas querying methods especially useful when you want to aggregate epochs that have different...
evokeds = dict() query = 'NumberOfLetters == {}' for n_letters in epochs.metadata['NumberOfLetters'].unique(): evokeds[str(n_letters)] = epochs[query.format(n_letters)].average() mne.viz.plot_compare_evokeds(evokeds, cmap=('word length', 'viridis'), picks='Pz')
Metadata can also be useful for sorting the epochs in an image plot. For example, here we order the epochs based on word frequency to see if there's a pattern to the latency or intensity of the response:
sort_order = np.argsort(epochs.metadata['WordFrequency']) epochs.plot_image(order=sort_order, picks='Pz')
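`np.argsort` returns the permutation of indices that would sort the values ascending, which is the form the `order` argument expects. A minimal illustration:

```python
import numpy as np

freqs = np.array([3.0, 1.0, 2.0])
order = np.argsort(freqs)  # indices that would sort freqs ascending
print(order)               # positions of the smallest, middle, largest value
print(freqs[order])        # the values in sorted order
```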
Although there's no obvious relationship in this case, such analyses may be useful for metadata variables that more directly index the time course of stimulus processing (such as reaction time). Adding metadata to an Epochs object You can add a metadata :class:~pandas.DataFrame to any ~mne.Epochs object (or replace exi...
new_metadata = pd.DataFrame(data=['foo'] * len(epochs), columns=['bar'], index=range(len(epochs))) epochs.metadata = new_metadata epochs.metadata.head()
You can remove metadata from an ~mne.Epochs object by setting its metadata to None:
epochs.metadata = None
Compare evoked responses for different conditions In this example, an Epochs object for visual and auditory responses is created. Both conditions are then accessed by their respective names to create a sensor layout plot of the related evoked responses.
# Authors: Denis Engemann <denis.engemann@gmail.com> # Alexandre Gramfort <alexandre.gramfort@inria.fr> # License: BSD-3-Clause import matplotlib.pyplot as plt import mne from mne.viz import plot_evoked_topo from mne.datasets import sample print(__doc__) data_path = sample.data_path()
0.24/_downloads/da444a4db06576d438b46fdb32d045cd/topo_compare_conditions.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin = -0.2 tmax = 0.5 # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # Set up amplitude-peak rejection val...
Show topography for two different conditions
colors = 'blue', 'red' title = 'MNE sample data\nleft vs right (A/V combined)' plot_evoked_topo(evokeds, color=colors, title=title, background_color='w') plt.show()
Now we set up and run a sampling routine using Monomial-Gamma HMC MCMC
# Choose starting points for 3 mcmc chains xs = [ [2, 1], [3, 3], [5, 4], ] # Create mcmc routine sigma = [1, 1] mcmc = pints.MCMCController(log_pdf, 3, xs, method=pints.MonomialGammaHamiltonianMCMC, sigma0=sigma) # Add stopping criterion mcmc.set_max_iterations(1000) # Set up modest logging mcmc.set_log...
examples/sampling/monomial-gamma-hmc.ipynb
martinjrobins/hobo
bsd-3-clause
Monomial-Gamma HMC on a time-series problem We now try the same method on a time-series problem
import pints import pints.toy as toy import pints.plot import numpy as np import matplotlib.pyplot as plt # Load a forward model model = toy.LogisticModel() # Create some toy data times = np.linspace(0, 1000, 50) real_parameters = np.array([0.015, 500]) org_values = model.simulate(real_parameters, times) # Add noise...
The chains do not take long to reach equilibrium with this method.
# Check convergence and other properties of chains results = pints.MCMCSummary(chains=chains[:, 200:], time=mcmc.time(), parameter_names=['growth rate', 'capacity']) print(results) # Show traces and histograms pints.plot.trace(chains) plt.show()
Chains have converged! Extract any divergent iterations -- looks fine as there were none.
div_iterations = [] for sampler in mcmc.samplers(): div_iterations.append(sampler.divergent_iterations()) print("There were " + str(np.sum(div_iterations)) + " divergent iterations.")
Exercise 1 a. Series Given an array of data, please create a pandas Series s with a datetime index starting 2016-01-01. The index should be daily frequency and should be the same length as the data.
l = np.random.randint(1,100, size=1000) s = pd.Series(l) new_index = pd.date_range("2016-01-01", periods=len(s), freq="D") s.index = new_index print s
notebooks/lectures/Introduction_to_Pandas/answers/notebook.ipynb
quantopian/research_public
apache-2.0
b. Accessing Series Elements. Print every other element of the first 50 elements of series s. Find the value associated with the index 2017-02-20.
# Print every other element of the first 50 elements s.iloc[:50:2]; # Values associated with the index 2017-02-20 s.loc['2017-02-20']
c. Boolean Indexing. In the series s, print all the values between 1 and 3.
# Print s between 1 and 3 s.loc[(s>1) & (s<3)]
Exercise 2 : Indexing and time series. a. Display Print the first and last 5 elements of the series s.
# First 5 elements s.head(5) # Last 5 elements s.tail(5)
b. Resampling Using the resample method, downsample the daily data to monthly frequency. Use the median method so that each monthly value is the median price of all the days in that month. Then take the daily data and fill in every day, including weekends and holidays, using forward-fills.
symbol = "CMG" start = "2012-01-01" end = "2016-01-01" prices = get_pricing(symbol, start_date=start, end_date=end, fields="price") # Resample daily prices to get monthly prices using median. monthly_prices = prices.resample('M').median() monthly_prices.head(24) # Data for every day, (including weekends and holidays...
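The two steps can be sketched on synthetic data (avoiding the platform-specific `get_pricing`): downsample business days to a monthly median, then forward-fill onto a full calendar-day index.

```python
import numpy as np
import pandas as pd

days = pd.bdate_range('2016-01-01', '2016-03-31')  # business days only
prices = pd.Series(np.arange(len(days), dtype=float), index=days)

monthly = prices.resample('M').median()        # one median value per month
calendar = prices.asfreq('D', method='ffill')  # every calendar day, forward-filled
```

`asfreq('D', method='ffill')` inserts the missing weekend/holiday rows and carries the last known price forward into them.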
Exercise 3 : Missing Data Replace all instances of NaN using the forward fill method. Instead of filling, remove all instances of NaN from the data.
# Fill missing data using the forward fill method ffilled_prices = calendar_prices.fillna(method='ffill') ffilled_prices.head(10) # Drop instances of NaN in the data dropped_prices = calendar_prices.dropna() dropped_prices.head(10)
Exercise 4 : Time Series Analysis with pandas a. General Information Print the count, mean, standard deviation, minimum, 25th, 50th, and 75th percentiles, and the maximum of the prices series.
print "Summary Statistics" print prices.describe()
b. Series Operations Get the additive and multiplicative returns of this series. Calculate the rolling mean with a 60 day window. Calculate the standard deviation with a 60 day window.
data = get_pricing('GE', fields='open_price', start_date='2016-01-01', end_date='2017-01-01') mult_returns = data.pct_change()[1:] #Multiplicative returns add_returns = data.diff()[1:] #Additive returns # Rolling mean rolling_mean = data.rolling(window=60).mean() rolling_mean.name = "60-day rolling mean" # Rolling...
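The same return and rolling-window calculations work on any Series; a minimal sketch with synthetic data in place of `get_pricing`:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(1, 101, dtype=float))  # a simple rising "price" series

mult_returns = s.pct_change()[1:]  # multiplicative (percent) returns
add_returns = s.diff()[1:]         # additive (absolute) returns

# rolling statistics: the first 59 values are NaN until the window fills
rolling_mean = s.rolling(window=60).mean()
rolling_std = s.rolling(window=60).std()
```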
Exercise 5 : DataFrames a. Indexing Form a DataFrame out of dict_data with l as its index.
l = ['First','Second', 'Third', 'Fourth', 'Fifth'] dict_data = {'a' : [1, 2, 3, 4, 5], 'b' : ['L', 'K', 'J', 'M', 'Z'], 'c' : np.random.normal(0, 1, 5) } # Adding l as an index to dict_data frame_data = pd.DataFrame(dict_data, index=l) print frame_data
b. DataFrames Manipulation Concatenate the following two series to form a dataframe. Rename the columns to Good Numbers and Bad Numbers. Change the index to be a datetime index starting on 2016-01-01.
s1 = pd.Series([2, 3, 5, 7, 11, 13], name='prime') s2 = pd.Series([1, 4, 6, 8, 9, 10], name='other') numbers = pd.concat([s1, s2], axis=1) # Concatenate the two series numbers.columns = ['Good Numbers', 'Bad Numbers'] # Rename the two columns numbers.index = pd.date_range("2016-01-01", periods=len(numbers)) #...
Exercise 6 : Accessing DataFrame elements. a. Columns Check the data type of one of the DataFrame's columns. Print the values associated with time range 2013-01-01 to 2013-01-10.
symbol = ["XOM", "BP", "COP", "TOT"] start = "2012-01-01" end = "2016-01-01" prices = get_pricing(symbol, start_date=start, end_date=end, fields="price") if isinstance(symbol, list): prices.columns = map(lambda x: x.symbol, prices.columns) else: prices.name = symbol # Check Type of Data for these two. pric...
Exercise 7 : Boolean Indexing a. Filtering. Filter pricing data from the last question (stored in prices) to only print values where: BP > 30 XOM < 100 The intersection of both above conditions (BP > 30 and XOM < 100) The union of the previous composite condition along with TOT having no nan values ((BP > 30 and XOM <...
# Filter data # BP > 30 print prices.loc[prices.BP > 30].head() # XOM < 100 print prices.loc[prices.XOM < 100].head() # BP > 30 AND XOM < 100 print prices.loc[(prices.BP > 30) & (prices.XOM < 100)].head() # The union of (BP > 30 AND XOM < 100) with TOT being non-nan print prices.loc[((prices.BP > 30) & (prices.XOM < 1...
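The boolean masks combine with `&` (intersection) and `|` (union); each comparison must be parenthesised because `&` and `|` bind tighter than `>` and `<`. A toy sketch of the same filters:

```python
import numpy as np
import pandas as pd

prices = pd.DataFrame({'BP': [28.0, 32.0, 35.0],
                       'XOM': [95.0, 105.0, 90.0],
                       'TOT': [50.0, np.nan, 55.0]})

# intersection: BP > 30 AND XOM < 100
both = prices.loc[(prices.BP > 30) & (prices.XOM < 100)]

# union of the composite condition with TOT being non-nan
either = prices.loc[((prices.BP > 30) & (prices.XOM < 100))
                    | prices.TOT.notnull()]
```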
b. DataFrame Manipulation (again) Concatenate these DataFrames. Fill the missing data with 0s
df_1 = get_pricing(['SPY', 'VXX'], start_date=start, end_date=end, fields='price') df_2 = get_pricing(['MSFT', 'AAPL', 'GOOG'], start_date=start, end_date=end, fields='price') # Concatenate the dataframes df_3 = pd.concat([df_1, df_2], axis=1) df_3.head() # Fill missing data with 0s filled0_df_3 = df_3.fillna(0)...
Exercise 8 : Time Series Analysis a. Summary Print out a summary of the prices DataFrame from above. Take the log returns and print the first 10 values. Print the multiplicative returns of each company. Normalize and plot the returns from 2014 to 2015. Plot a 60 day window rolling mean of the prices. Plot a 60 day win...
# Summary prices.describe() # Natural Log of the returns and print out the first 10 values np.log(prices).head(10) # Multiplicative returns mult_returns = prices.pct_change()[1:] mult_returns.head() # Normalizing the returns and plotting one year of data norm_returns = (mult_returns - mult_returns.mean(axis=0))/mult...
Contents Overview Set-up Sparse feature representations Feature representations Model wrapper for hyperparameter search Assessment Hypothesis-only baselines Sentence-encoding models Dense representations Sentence-encoding RNNs Other sentence-encoding model ideas Chained models Simple RNN Separate premise and hypot...
from collections import Counter from itertools import product from nltk.tokenize.treebank import TreebankWordTokenizer import numpy as np import os import pandas as pd from datasets import load_dataset import warnings from sklearn.exceptions import ConvergenceWarning from sklearn.linear_model import LogisticRegression...
nli_02_models.ipynb
cgpotts/cs224u
apache-2.0
Sparse feature representations We begin by looking at models based in sparse, hand-built feature representations. As in earlier units of the course, we will see that these models are competitive: easy to design, fast to optimize, and highly effective. Feature representations The guiding idea for NLI sparse features is ...
tokenizer = TreebankWordTokenizer() def word_overlap_phi(ex): """ Basis for features for the words in both the premise and hypothesis. Downcases all words. Parameters ---------- ex: NLIExample instance Returns ------- defaultdict Maps each word in both `t1` and `t2` to 1. ...
With word_cross_product_phi, we count all the pairs $(w_{1}, w_{2})$ where $w_{1}$ is a word from the premise and $w_{2}$ is a word from the hypothesis. This creates a very large feature space. These models are very strong right out of the box, and they can be supplemented with more fine-grained features.
def word_cross_product_phi(ex): """ Basis for cross-product features. Downcases all words. Parameters ---------- ex: NLIExample instance Returns ------- defaultdict Maps each (w1, w2) in the cross-product of `t1.leaves()` and `t2.leaves()` (both downcased) to its count....
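The cross-product feature idea is simple to sketch without the course infrastructure (a hypothetical standalone function; the notebook's version operates on `NLIExample` objects):

```python
from collections import Counter
from itertools import product

def word_cross_product(premise_words, hypothesis_words):
    """Count every (premise word, hypothesis word) pair, downcased."""
    return Counter(product([w.lower() for w in premise_words],
                           [w.lower() for w in hypothesis_words]))

feats = word_cross_product(['A', 'dog'], ['The', 'dog', 'runs'])
```

With a premise of length m and hypothesis of length n, this yields m * n features per example, which is why the feature space is so large.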
Model wrapper for hyperparameter search Our experiment framework is basically the same as the one we used for the Stanford Sentiment Treebank. For a full evaluation, we would like to search for the best hyperparameters. However, SNLI is very large, so each evaluation is very expensive. To try to keep this under contro...
def fit_softmax_with_hyperparameter_search(X, y): """ A MaxEnt model of dataset with hyperparameter cross-validation. Parameters ---------- X : 2d np.array The matrix of features, one example per row. y : list The list of labels for rows in `X`. Returns ------- skl...
Assessment
%%time word_cross_product_experiment_xval = nli.experiment( train_reader=nli.NLIReader(snli['train']), phi=word_cross_product_phi, train_func=fit_softmax_with_hyperparameter_search, assess_reader=None, verbose=False) optimized_word_cross_product_model = word_cross_product_experiment_xval['model'] ...
As expected, word_cross_product_phi is reasonably strong. This model is similar to (a simplified version of) the baseline "Lexicalized Classifier" in the original SNLI paper by Bowman et al. Hypothesis-only baselines In an outstanding project for this course in 2016, Leonid Keselman observed that one can do much better...
def hypothesis_only_unigrams_phi(ex): return Counter(tokenizer.tokenize(ex.hypothesis)) def fit_softmax(X, y): mod = LogisticRegression( fit_intercept=True, solver='liblinear', multi_class='ovr') mod.fit(X, y) return mod %%time _ = nli.experiment( train_reader=nli.NLIReader...
Chance performance on SNLI is 0.33 accuracy/F1. The above makes it clear that using chance as a baseline will overstate how much traction a model has actually gotten on the SNLI problem. The hypothesis-only baseline is better for this kind of calibration. Ideally, for each model one explores, one would fit a minimally...
glove_lookup = utils.glove2dict( os.path.join(GLOVE_HOME, 'glove.6B.300d.txt')) def glove_leaves_phi(ex, np_func=np.mean): """ Represent `ex` as a combination of the vector of their words, and concatenate these two combinator vectors. Parameters ---------- ex : NLIExample np_func : fu...
The hypothesis-only counterpart of this model is very clear: we would just encode ex.hypothesis with GloVe, leaving ex.premise out entirely. As an elaboration of this approach, it is worth considering the VecAvg model we studied in sst_03_neural_networks.ipynb, which updates the initial vector representations during l...
class TorchRNNSentenceEncoderDataset(torch.utils.data.Dataset): def __init__(self, prem_seqs, hyp_seqs, prem_lengths, hyp_lengths, y=None): self.prem_seqs = prem_seqs self.hyp_seqs = hyp_seqs self.prem_lengths = prem_lengths self.hyp_lengths = hyp_lengths self.y = y a...
A sentence-encoding model With TorchRNNSentenceEncoderClassifierModel, we create a new nn.Module that functions just like the existing torch_rnn_classifier.TorchRNNClassifierModel, except that it takes two RNN instances as arguments and combines their final output states to create the classifier input:
class TorchRNNSentenceEncoderClassifierModel(nn.Module): def __init__(self, prem_rnn, hyp_rnn, output_dim): super().__init__() self.prem_rnn = prem_rnn self.hyp_rnn = hyp_rnn self.output_dim = output_dim self.bidirectional = self.prem_rnn.bidirectional # Doubled becau...
A sentence-encoding model interface Finally, we subclass TorchRNNClassifier. Here we just need to redefine two methods, build_dataset and build_graph, to make use of the new components above:
class TorchRNNSentenceEncoderClassifier(TorchRNNClassifier): def build_dataset(self, X, y=None): X_prem, X_hyp = zip(*X) X_prem, prem_lengths = self._prepare_sequences(X_prem) X_hyp, hyp_lengths = self._prepare_sequences(X_hyp) if y is None: return TorchRNNSentenceEncode...
Simple example This toy problem illustrates how this works in detail:
def simple_example(): vocab = ['a', 'b', '$UNK'] # Reversals are good, and other pairs are bad: train = [ [(list('ab'), list('ba')), 'good'], [(list('aab'), list('baa')), 'good'], [(list('abb'), list('bba')), 'good'], [(list('aabb'), list('bbaa')), 'good'], [(list('b...
Example SNLI run
def sentence_encoding_rnn_phi(ex): """Map `ex.premise` and `ex.hypothesis` to a pair of lists of leaf nodes.""" p = tuple(tokenizer.tokenize(ex.premise)) h = tuple(tokenizer.tokenize(ex.hypothesis)) return (p, h) def get_sentence_encoding_vocab(X, n_words=None, mincount=1): wc = Counter([w ...
This is above our general hypothesis-only baseline ($\approx$0.65), but it is below the simpler word cross-product model ($\approx$0.75). A natural hypothesis-only baseline for this model would be a simple TorchRNNClassifier that processed only the hypothesis. Other sentence-encoding model ideas Given that we already explore...
def simple_chained_rep_rnn_phi(ex): """Map `ex.premise` and `ex.hypothesis` to a single list of leaf nodes. A slight variant might insert a designated boundary symbol between the premise leaves and the hypothesis leaves. Be sure to add it to the vocab in that case, else it will be $UNK. """ p =...
This model is close to the word cross-product baseline ($\approx$0.75), but it's not better. Perhaps using a GloVe embedding would suffice to push it into the lead. The hypothesis-only baseline for this model is very simple: we just use the same model, but we process only the hypothesis. Separate premise and hypothesis...
mnli = load_dataset("multi_nli") rnn_multinli_experiment = nli.experiment( train_reader=nli.NLIReader(mnli['train']), phi=simple_chained_rep_rnn_phi, train_func=fit_optimized_simple_chained_rnn, assess_reader=None, random_state=42, vectorize=False)
The return value of nli.experiment contains the information we need to make predictions on new examples. Next, we load in the 'matched' condition annotations ('mismatched' would work as well):
matched_ann_filename = os.path.join( ANNOTATIONS_HOME, "multinli_1.0_matched_annotations.txt") matched_ann = nli.read_annotated_subset( matched_ann_filename, mnli['validation_matched'])
The following function uses rnn_multinli_experiment to make predictions on annotated examples, and harvests some other information that is useful for error analysis:
def predict_annotated_example(ann, experiment_results): model = experiment_results['model'] phi = experiment_results['phi'] ex = ann['example'] feats = phi(ex) pred = model.predict([feats])[0] data = {cat: True for cat in ann['annotations']} data.update({'gold': ex.label, 'prediction': pred,...
Finally, this function applies predict_annotated_example to a collection of annotated examples and puts the results in a pd.DataFrame for flexible analysis:
def get_predictions_for_annotated_data(anns, experiment_results): data = [] for ex_id, ann in anns.items(): results = predict_annotated_example(ann, experiment_results) data.append(results) return pd.DataFrame(data) ann_analysis_df = get_predictions_for_annotated_data( matched_ann, rnn_...
With ann_analysis_df, we can see how the model does on individual annotation categories:
pd.crosstab(ann_analysis_df['correct'], ann_analysis_df['#MODAL'])
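`pd.crosstab` tabulates counts of one categorical variable against another; a toy version of the same correct-vs-annotation analysis:

```python
import pandas as pd

df = pd.DataFrame({'correct': [True, True, False, True],
                   '#MODAL': [True, False, True, False]})
# rows: prediction correctness; columns: whether the example is annotated #MODAL
table = pd.crosstab(df['correct'], df['#MODAL'])
print(table)
```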
Find Natural Neighbors Verification Finding natural neighbors in a triangulation A triangle is a natural neighbor of a point if that point lies within the triangle's circumscribed circle ("circumcircle").
import matplotlib.pyplot as plt import numpy as np from scipy.spatial import Delaunay from metpy.interpolate.geometry import circumcircle_radius, find_natural_neighbors # Create test observations, test points, and plot the triangulation and points. gx, gy = np.meshgrid(np.arange(0, 20, 4), np.arange(0, 20, 4)) pts = ...
v1.1/_downloads/4211928bfede6cdca0afdb2d06bea2d1/Find_Natural_Neighbors_Verification.ipynb
metpy/MetPy
bsd-3-clause
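The circumcircle test itself is plain geometry: a point is a natural neighbor of a triangle when its distance to the circumcenter is less than the circumradius. A self-contained sketch (not MetPy's implementation):

```python
import math

def circumcircle(a, b, c):
    """Return (center, radius) of the circle through triangle vertices a, b, c."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    radius = math.hypot(ax - ux, ay - uy)
    return (ux, uy), radius

def is_natural_neighbor(point, triangle):
    """True if `point` lies inside the triangle's circumcircle."""
    (ux, uy), radius = circumcircle(*triangle)
    return math.hypot(point[0] - ux, point[1] - uy) < radius
```

For the right triangle (0,0), (4,0), (0,3), the circumcenter is the hypotenuse midpoint (2, 1.5) and the circumradius is 2.5, which makes the formula easy to sanity-check by hand.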
Since finding natural neighbors already calculates circumcenters, return that information for later use. The key of the neighbors dictionary refers to the test point index, and the list of integers are the triangles that are natural neighbors of that particular test point. Since point 4 is far away from the triangulati...
neighbors, circumcenters = find_natural_neighbors(tri, test_points) print(neighbors)
We can plot all of the triangles as well as the circles representing the circumcircles
fig, ax = plt.subplots(figsize=(15, 10)) for i, inds in enumerate(tri.simplices): pts = tri.points[inds] x, y = np.vstack((pts, pts[0])).T ax.plot(x, y) ax.annotate(i, xy=(np.mean(x), np.mean(y))) # Using circumcenters and calculated circumradii, plot the circumcircles for idx, cc in enumerate(circumce...
v1.1/_downloads/4211928bfede6cdca0afdb2d06bea2d1/Find_Natural_Neighbors_Verification.ipynb
metpy/MetPy
bsd-3-clause
target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the matching line in source_sentences and contains the characters of that line in sorted order.
target_sentences[:50].split('\n')
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Preprocess To do anything useful with it, we'll need to turn each string into a list of characters: <img src="images/source_and_target_arrays.png"/> Then convert the characters to their int values as declared in our vocabulary:
def extract_character_vocab(data): special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>'] set_words = set([character for line in data.split('\n') for character in line]) int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))} vocab_to_int = {word: word_i for word_i, w...
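A minimal standalone sketch of the vocabulary mapping above; the character set is sorted here for determinism (the notebook relies on Python's set iteration order instead, so the non-special ids may differ run to run):

```python
# The four special tokens always occupy ids 0-3
special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']

data = "ab\nba"
# Collect every distinct character across all lines
set_words = set(ch for line in data.split('\n') for ch in line)

int_to_vocab = {i: w for i, w in enumerate(special_words + sorted(set_words))}
vocab_to_int = {w: i for i, w in int_to_vocab.items()}
print(vocab_to_int)
```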
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
This is the final shape we need them to be in. We can now proceed to building the model. Model Check the Version of TensorFlow This will check to make sure you have the correct version of TensorFlow
from distutils.version import LooseVersion import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__))
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Hyperparameters
# Number of Epochs epochs = 60 # Batch Size batch_size = 128 # RNN Size rnn_size = 50 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 15 decoding_embedding_size = 15 # Learning Rate learning_rate = 0.001
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Input
def get_model_inputs(): input_data = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') lr = tf.placeholder(tf.float32, name='learning_rate') target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length') ...
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Sequence to Sequence Model We can now start defining the functions that will build the seq2seq model. We are building it from the bottom up with the following components: 2.1 Encoder - Embedding - Encoder cell 2.2 Decoder 1- Process decoder inputs 2- Set up the decoder - Embedding - Deco...
def encoding_layer(input_data, rnn_size, num_layers, source_sequence_length, source_vocab_size, encoding_embedding_size): # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size) # RNN cell de...
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
2.2 Decoder The decoder is probably the most involved part of this model. The following steps are needed to create it: 1- Process decoder inputs 2- Set up the decoder components - Embedding - Decoder cell - Dense output layer - Training decoder - Inference decoder Process Decoder Input In the train...
# Process the input we'll feed to the decoder def process_decoder_input(target_data, vocab_to_int, batch_size): '''Remove the last word id from each batch and concat the <GO> to the beginning of each batch''' ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.f...
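The effect of process_decoder_input can be checked with a plain NumPy sketch (the ids here are made up; 2 stands in for `<GO>`):

```python
import numpy as np

GO = 2  # hypothetical id for <GO>
target_batch = np.array([[10, 11, 12, 3],
                         [20, 21, 22, 3]])

# Drop the last id of each sequence (what tf.strided_slice does)
ending = target_batch[:, :-1]

# Prepend <GO> to every sequence (what tf.concat with tf.fill does)
dec_input = np.concatenate(
    [np.full((ending.shape[0], 1), GO), ending], axis=1)
print(dec_input)
```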
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Set up the decoder components - Embedding - Decoder cell - Dense output layer - Training decoder - Inference decoder 1- Embedding Now that we have prepared the inputs to the training decoder, we need to embed them so they can be ready to be passed to the decoder. We'll create an embedding matrix l...
def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size, target_sequence_length, max_target_sequence_length, enc_state, dec_input): # 1. Decoder Embedding target_vocab_size = len(target_letter_to_int) dec_embeddings = tf.Variable(tf.random_uniform([target_vo...
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
2.3 Seq2seq model Let's now go a step above, and hook up the encoder and decoder using the methods we just declared
def seq2seq_model(input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers): # Pass the input...
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this: <img src="images/logits.png"/> The logits we get from the training tensor we'll pass to tf.contrib.seq2seq.sequence_loss() to calculate the loss and ultimately the gradient.
# Build the graph train_graph = tf.Graph() # Set the graph to default to ensure that it is ready for training with train_graph.as_default(): # Load the model inputs input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs() # Create...
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Get Batches There's little processing involved when we retrieve the batches. This is a simple example assuming batch_size = 2 Source sequences (they're actually in int form; we're showing the characters for clarity): <img src="images/source_batch.png" /> Target sequences (also in int, but showing letters for clarity): <im...
def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batch...
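A quick standalone check of the padding step (the function is re-declared here for illustration; 0 is assumed to be the `<PAD>` id, matching the vocabulary order above):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so every sentence in a batch has the same length"""
    max_sentence = max(len(s) for s in sentence_batch)
    return [s + [pad_int] * (max_sentence - len(s)) for s in sentence_batch]

batch = [[5, 6], [7, 8, 9, 10], [11]]
padded = pad_sentence_batch(batch, 0)
print(padded)  # [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]
```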
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Train We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.
# Split data to training and validation sets train_source = source_letter_ids[batch_size:] train_target = target_letter_ids[batch_size:] valid_source = source_letter_ids[:batch_size] valid_target = target_letter_ids[:batch_size] (valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = ...
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Prediction
def source_to_seq(text): '''Prepare the text for the model''' sequence_length = 7 return [source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text)) input_sentence = 'hello' text = source_to_seq(input_sentence) checkpoint...
seq2seq/sequence_to_sequence_implementation.ipynb
Lstyle1/Deep_learning_projects
mit
Extract Images Included in these workshop materials is a compressed file ("data.tar.gz") containing the images that we'll be classifying today. Once you extract this file, you should have a directory called "data" which contains the following directories: Directory | Contents :-------------------------:|:----
rect_image = cv2.imread('data/I/27.png', cv2.IMREAD_GRAYSCALE) circle_image = cv2.imread('data/O/11527.png', cv2.IMREAD_GRAYSCALE) queen_image = cv2.imread('data/Q/18027.png', cv2.IMREAD_GRAYSCALE) plt.figure(figsize = (10, 7)) plt.title('Rectangle Tag') plt.axis('off') plt.imshow(rect_image, cmap = cm.Greys_r)...
images_features.ipynb
jackbrucesimpson/Machine-Learning-Workshop
mit
Image Properties One of the really useful things about using OpenCV to manipulate images in Python is that all images are treated as NumPy matrices. This means we can use NumPy's functions to manipulate and understand the data we're working with. To demonstrate this, we'll use NumPy's "shape" and "dtype" commands t...
print (rect_image.shape) print (rect_image.dtype)
images_features.ipynb
jackbrucesimpson/Machine-Learning-Workshop
mit
This tells us that this image is 24x24 pixels in size, and that the datatype of the values it stores are unsigned 8 bit integers. While the explanation of this datatype isn't especially relevant to the lesson, the main point is that it is extremely important to double check the size and structure of your data. Let's do...
print (circle_image.shape) print (circle_image.dtype)
images_features.ipynb
jackbrucesimpson/Machine-Learning-Workshop
mit
This holds the same values, which is good. When you're working with your own datasets in the future, it would be highly beneficial to write your own little program to check the values and structure of your data to ensure that subtle bugs don't creep into your analysis. Cropping One of the things you've probably notice...
cropped_rect_image = rect_image[4:20,4:20] cropped_circle_image = circle_image[4:20,4:20] cropped_queen_image = queen_image[4:20,4:20] plt.figure(figsize = (10, 7)) plt.title('Rectangle Tag ' + str(cropped_rect_image.shape)) plt.axis('off') plt.imshow(cropped_rect_image, cmap = cm.Greys_r) plt.figure(figsize = (10, ...
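Because OpenCV images are plain NumPy arrays, cropping is just 2D slicing. A tiny self-contained version of the same operation on a toy array:

```python
import numpy as np

# A 6x6 toy "image"; slicing rows 1:5 and columns 1:5 trims the border,
# the same operation applied to the 24x24 tag images above
img = np.arange(36).reshape(6, 6)
cropped = img[1:5, 1:5]
print(cropped.shape)  # (4, 4)
```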
images_features.ipynb
jackbrucesimpson/Machine-Learning-Workshop
mit
Feature Engineering When people think of machine learning, the first thing that comes to mind tends to be the fancy algorithms that will train the computer to solve your problem. Of course this is important, but the reality of the matter is that the way you process the data you'll eventually feed into the machine learn...
plt.figure(figsize = (10, 7)) plt.title('Rectangle Tag') plt.axis('off') plt.imshow(rect_image, cmap = cm.Greys_r)
images_features.ipynb
jackbrucesimpson/Machine-Learning-Workshop
mit
In fact, this is not actually the case. For this dataset, the features are the pixel values that make up the images - those are the values we'll be training the machine learning algorithm with:
print(rect_image)
images_features.ipynb
jackbrucesimpson/Machine-Learning-Workshop
mit
So what can we do to manipulate the features in our dataset? We'll explore three methods to achieve this: Image smoothing Modifying brightness Modifying contrast Techniques like image smoothing can be useful when improving the features you train the machine learning algorithm on as you can eliminate some of the poten...
mean_smoothed = cv2.blur(rect_image, (5, 5)) median_smoothed = cv2.medianBlur(rect_image, 5) gaussian_smoothed = cv2.GaussianBlur(rect_image, (5, 5), 0)
images_features.ipynb
jackbrucesimpson/Machine-Learning-Workshop
mit
Feel free to have a play with the different parameters for these smoothing operations. We'll now write some code to place the original images next to their smoothed counterparts in order to compare them:
mean_compare = np.hstack((rect_image, mean_smoothed)) median_compare = np.hstack((rect_image, median_smoothed)) gaussian_compare = np.hstack((rect_image, gaussian_smoothed)) plt.figure(figsize = (15, 12)) plt.title('Mean') plt.axis('off') plt.imshow(mean_compare, cmap = cm.Greys_r) plt.figure(figsize = (15, 12)) plt...
images_features.ipynb
jackbrucesimpson/Machine-Learning-Workshop
mit
Brightness and Contrast Modifying the brightness and contrast of our images is a surprisingly simple task, but can have a big impact on the appearance of the image. Here is how you can increase and decrease these characteristics in an image: Characteristic | Increase/Decrease | Action :-------------------...
increase_brightness = rect_image + 30 decrease_brightness = rect_image - 30 increase_contrast = rect_image * 1.5 decrease_contrast = rect_image * 0.5 brightness_compare = np.hstack((increase_brightness, decrease_brightness)) constrast_compare = np.hstack((increase_contrast, decrease_contrast)) plt.figure(figsize = (1...
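One caveat worth noting (an addition, not from the workshop text): because these images are unsigned 8-bit arrays, naive addition wraps around rather than saturating, so very bright pixels can suddenly become dark. A sketch of the problem and one way to avoid it:

```python
import numpy as np

img = np.array([[250, 100], [10, 200]], dtype=np.uint8)

# Same-dtype addition wraps: 250 + 30 = 280 overflows to 280 - 256 = 24
wrapped = img + np.uint8(30)

# Widening to a larger dtype and clipping back keeps bright pixels at 255
brighter = np.clip(img.astype(np.int16) + 30, 0, 255).astype(np.uint8)
print(wrapped[0, 0], brighter[0, 0])  # 24 255
```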
images_features.ipynb
jackbrucesimpson/Machine-Learning-Workshop
mit
For now, we are only interested in relatively small systems, so we will try lattice sizes between $2\times 2$ and $5\times 5$. With this, we set the parameters for DMRG and ED:
lattice_range = [2, 3, 4, 5] parms = [{ 'LATTICE' : "open square lattice", # Set up the lattice 'MODEL' : "spinless fermions", # Select the model 'L' : L, # Lattice dimension 't' : -1 , # This and the following 'mu' ...
Comparing_DMRG_ED_and_SDP.ipynb
peterwittek/ipython-notebooks
gpl-3.0
We will need a helper function to extract the ground state energy from the solutions:
def extract_ground_state_energies(data): E0 = [] for Lsets in data: allE = [] for q in pyalps.flatten(Lsets): allE.append(q.y[0]) E0.append(allE[0]) return sorted(E0, reverse=True)
Comparing_DMRG_ED_and_SDP.ipynb
peterwittek/ipython-notebooks
gpl-3.0
We invoke the solvers and extract the ground state energies from the solutions. First we use exact diagonalization, which, unfortunately, does not scale beyond a lattice size of $4\times 4$.
prefix_sparse = 'comparison_sparse' input_file_sparse = pyalps.writeInputFiles(prefix_sparse, parms[:-1]) res = pyalps.runApplication('sparsediag', input_file_sparse) sparsediag_data = pyalps.loadEigenstateMeasurements( pyalps.getResultFiles(prefix=prefix_sparse)) sparsediag_ground_state_energy ...
Comparing_DMRG_ED_and_SDP.ipynb
peterwittek/ipython-notebooks
gpl-3.0
DMRG scales to all the lattice sizes we want:
prefix_dmrg = 'comparison_dmrg' input_file_dmrg = pyalps.writeInputFiles(prefix_dmrg, parms) res = pyalps.runApplication('dmrg',input_file_dmrg) dmrg_data = pyalps.loadEigenstateMeasurements( pyalps.getResultFiles(prefix=prefix_dmrg)) dmrg_ground_state_energy = extract_ground_state_energies(dmrg_data...
Comparing_DMRG_ED_and_SDP.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Calculating the ground state energy with SDP The ground state energy problem can be rephrased as a polynomial optimization problem of noncommuting variables. We use Ncpol2sdpa to translate this optimization problem to a sparse SDP relaxation [4]. The relaxation is solved with SDPA, a high-performance SDP solver that d...
from sympy.physics.quantum.dagger import Dagger from ncpol2sdpa import SdpRelaxation, generate_operators, \ fermionic_constraints, get_neighbors
Comparing_DMRG_ED_and_SDP.ipynb
peterwittek/ipython-notebooks
gpl-3.0
We set the additional parameters for this formulation, including the order of the relaxation:
level = 1 gam, lam = 0, 1
Comparing_DMRG_ED_and_SDP.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Then we iterate over the lattice range, defining a new Hamiltonian and new constraints in each step:
sdp_ground_state_energy = [] for lattice_dimension in lattice_range: n_vars = lattice_dimension * lattice_dimension C = generate_operators('C%s' % (lattice_dimension), n_vars) hamiltonian = 0 for r in range(n_vars): hamiltonian -= 2*lam*Dagger(C[r])*C[r] for s in get_neighbors(r, la...
Comparing_DMRG_ED_and_SDP.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Comparison The level-one relaxation matches the ground state energy given by DMRG and ED.
data = [dmrg_ground_state_energy,\ sparsediag_ground_state_energy,\ sdp_ground_state_energy] labels = ["DMRG", "ED", "SDP"] print("{:>4} {:>9} {:>10} {:>10} {:>10}".format("", *lattice_range)) for label, row in zip(labels, data): print("{:>4} {:>7.6f} {:>7.6f} {:>7.6f} {:>7.6f}".format(label, *ro...
Comparing_DMRG_ED_and_SDP.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
data = np.array(np.loadtxt('yearssn.dat')) year = data[:,0] ssc = data[:,1] #raise NotImplementedError() assert len(year)==315 assert year.dtype==np.dtype(float) assert len(ssc)==315 assert ssc.dtype==np.dtype(float)
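An equivalent way to get the two columns in a single call is np.loadtxt's unpack option, shown here on a toy in-memory stand-in for yearssn.dat:

```python
import io
import numpy as np

# Toy stand-in for yearssn.dat: year and sunspot-count columns
sample = io.StringIO("1700 5.0\n1701 11.0\n1702 16.0\n")

# unpack=True returns one array per column, replacing the manual slicing
year, ssc = np.loadtxt(sample, unpack=True)
print(year, ssc)
```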
assignments/assignment04/MatplotlibEx01.ipynb
hglanz/phys202-2015-work
mit
Make a line plot showing the sunspot count as a function of year. Customize your plot to follow Tufte's principles of visualizations. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data.
ticks = np.arange(year.min(), year.max(), 10) f = plt.figure(figsize=(30,1)) plt.plot(year, ssc, 'b.-'); plt.xlim(year.min() - 5, year.max() + 5); plt.xticks(ticks, [str(int(x)) for x in ticks]); plt.xlabel("Year") plt.ylabel("Mean Total Sunspots") plt.title("Mean Total Sunspots vs. Year") #raise NotImplementedError() ...
assignments/assignment04/MatplotlibEx01.ipynb
hglanz/phys202-2015-work
mit
Describe the choices you have made in building this visualization and how they make it effective. I haven't altered much. The aspect ratio to effect a maximum slope of 1 was hard to achieve. I could have taken the borders of the box off the top and right. I liked blue and decided to use points with lines so that it was...
cents = np.array([int(x / 100) for x in year]) ucents = np.unique(cents) f, ax = plt.subplots(2, 2, sharey = True, figsize = (12, 1)) for i in range(2): for j in range(2): subyr = np.array([year[x] for x in range(len(year)) if cents[x] == ucents[2*i + j]]) subspots = [ssc[x] for x in range(len(yea...
assignments/assignment04/MatplotlibEx01.ipynb
hglanz/phys202-2015-work
mit
Read the potential
if os.path.isfile('LOCPOT'): print('LOCPOT already exists') else: os.system('bunzip2 LOCPOT.bz2') input_file = 'LOCPOT' #=== No need to edit below vasp_pot, NGX, NGY, NGZ, Lattice = md.read_vasp_density(input_file) vector_a,vector_b,vector_c,av,bv,cv = md.matrix_2_abc(Lattice) resolution_x = vector_a/NGX resolu...
tutorials/Porous/Porous.ipynb
WMD-group/MacroDensity
mit
Look for pore centre points For this we will use VESTA. Open the LOCPOT in VESTA. Expand to 2x2x2 cell, by choosing the Boundary option on the left hand side. Look for a pore centre - I think that [1,1,1] is at a pore centre here. Now draw a lattice plane through that point. Choose Edit > Lattice Planes. Click New. ...
cube_origin = [1,1,1] travelled = [0,0,0] int(cube_origin[0]*NGX)
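The int(cube_origin[0]*NGX) expression converts a fractional coordinate into a grid index. A small sketch of that conversion, including the periodic wrap for coordinates at or beyond 1 (the NGX/NGY/NGZ values here are made up; in the notebook they come from read_vasp_density):

```python
NGX, NGY, NGZ = 120, 120, 96   # hypothetical grid dimensions
cube_origin = [1.0, 1.0, 1.0]  # fractional coordinates of the chosen pore centre

# Scale each fractional coordinate by the grid size and wrap into the cell
grid_point = [int(f * n) % n for f, n in zip(cube_origin, (NGX, NGY, NGZ))]
print(grid_point)  # [0, 0, 0]: [1, 1, 1] maps back to the origin by periodicity
```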
tutorials/Porous/Porous.ipynb
WMD-group/MacroDensity
mit