Dataset columns: markdown (string, 0 to 37k chars), code (string, 1 to 33.3k chars), path (string, 8 to 215 chars), repo_name (string, 6 to 77 chars), license (15 classes).
Variable helper functions Q11-15. Complete this code.
tf.reset_default_graph() w1 = tf.Variable(1.0, name="weight1") w2 = tf.Variable(2.0, name="weight2", trainable=False) w3 = tf.Variable(3.0, name="weight3") with tf.Session() as sess: # Q11. Initialize the variables w1 and w2. sess.run(tf.variables_initializer([w1, w2])) # Q12. Print the name of all g...
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Saving and Restoring Variables Q14-15. This is a simple example code to find the coefficient of a linear function. (Here y = 2x). Complete the code.
tf.reset_default_graph() w = tf.Variable(0.2, name='weight') # <- This is what we want to find. The true value is 2. (Note: tf.Variable's second positional argument is `trainable`, so the name must be passed as a keyword.) x = tf.random_uniform([1]) y = 2. * x # Let's pretend we don't know the coefficient 2 here. y_hat = w * x loss = tf.squared_difference(y, y_hat) train_op = tf.train.GradientDescentOptimizer(0.001).minimize(lo...
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Sharing Variables Q16. Complete this code.
g = tf.Graph() with g.as_default(): with tf.variable_scope("foo"): v = tf.get_variable("vv", [1,]) # v.name == "foo/vv:0" #Q. Get the existing variable `v` with tf.variable_scope("foo", reuse=True): v1 = tf.get_variable("vv") # The same as v above. assert v1 == v
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q18. Complete this code.
value = [0, 1, 2, 3, 4, 5, 6, 7] # Q. Create an initializer with `value`. init = tf.constant_initializer(value) tf.reset_default_graph() x = tf.get_variable('x', shape=[2, 4], initializer=init) with tf.Session() as sess: sess.run(x.initializer) print("x =\n", sess.run(x))
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
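The constant initializer above fills the [2, 4] variable row-major from the flat value list. A plain NumPy sketch of the same fill (an analogue for illustration, not the TF API):

```python
import numpy as np

# NumPy analogue of tf.constant_initializer on a [2, 4] variable:
# the flat value list fills the shape row-major.
value = [0, 1, 2, 3, 4, 5, 6, 7]
x = np.array(value, dtype=np.float32).reshape(2, 4)
print("x =\n", x)
```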
Q19. Complete this code.
# Q. Create an initializer with a normal distribution of mean equals 0 and standard deviation equals 2. init = tf.random_normal_initializer(mean=0, stddev=2) tf.reset_default_graph() x = tf.get_variable('x', shape=[10, 1000], initializer=init) with tf.Session(): x.initializer.run() _x = x.eval() print("Ma...
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q20. Complete this code.
# Q. Create an initializer with a truncated normal distribution of mean equals 0 and standard deviation equals 2. init = tf.truncated_normal_initializer(mean=0, stddev=2) tf.reset_default_graph() x = tf.get_variable('x', shape=[1000,], initializer=init) with tf.Session(): x.initializer.run() _x = x.eval() ...
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q21. Complete this code.
# Q. Create an initializer with a random normal distribution of minimum 0 and maximum 1. init = tf.random_uniform_initializer(0, 1) tf.reset_default_graph() x = tf.get_variable('x', shape=[5000,], initializer=init) with tf.Session(): x.initializer.run() _x = x.eval() count, bins, ignored = plt.hist(_x, 20...
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Exporting and Importing Meta Graphs Q22. Complete the code. Make sure you've done questions 14-15.
tf.reset_default_graph() print("Of course, there're no variables since we reset the graph. See", tf.global_variables()) with tf.Session() as sess: # Q. Import the saved graph of `model/my-model-10000`. new_saver = tf.train.import_meta_graph('model/my-model-10000.meta') new_saver.restore(sess, 'model/my...
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Pseudocode:
- Count the number of "C"s in the above sequence
- Count the number of "G"s in the above sequence
- Add the "C" and "G" counts together
- Count the total number of nucleotides in the sequence
- Divide the total number of "C" and "G" nucleotides by the total number of nucleotides
- Print the percentage

NOTE: Ple...
from __future__ import division # Write your code here (if you wish) flu_ns1_seq_upper = flu_ns1_seq.upper() # Count the number of "C"s in the above sequence c_count = flu_ns1_seq_upper.count('C') # Count the number of "G"s in the above sequence g_count = flu_ns1_seq_upper.count('G') # Add "C" and "G" counts tog...
Week_03/Week03 - 02 - Week 2 Homework Review.ipynb
biof-309-python/BIOF309-2016-Fall
mit
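The pseudocode above can be sketched in plain Python 3 (using the NS1 sequence from the notebook; the original homework used Python 2 with `from __future__ import division`):

```python
# Sketch of the homework pseudocode in plain Python (Python 3 division).
flu_ns1_seq = 'GTGACAAAGACATAATGGATCCAAACACTGTGTCAAGCTTTCAGGTAGATTGCTTTCTTTGGCATGTCCGCAAACGAGTTGCAGACCAAGAACTAGGTGA'
seq = flu_ns1_seq.upper()
c_count = seq.count('C')             # number of "C"s
g_count = seq.count('G')             # number of "G"s
gc_count = c_count + g_count         # C and G counts together
total = len(seq)                     # total number of nucleotides
gc_percent = 100 * gc_count / total  # fraction -> percentage
print("GC content: {:.1f}%".format(gc_percent))
```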
If you would like to create a file with your source doe paste it in the cell below and run. Please remember to add your name to the file.
%%writefile GC_calculator.py from __future__ import division flu_ns1_seq = 'GTGACAAAGACATAATGGATCCAAACACTGTGTCAAGCTTTCAGGTAGATTGCTTTCTTTGGCATGTCCGCAAACGAGTTGCAGACCAAGAACTAGGTGA' # Write your code here (if you wish) flu_ns1_seq_upper = flu_ns1_seq.upper() # Count the number of "C"s in the above sequence c_count = ...
Week_03/Week03 - 02 - Week 2 Homework Review.ipynb
biof-309-python/BIOF309-2016-Fall
mit
Gaussian Process Regression In Gaussian process regression, the prior $f$ is a multivariate normal with mean zero and covariance matrix $K$, and the likelihood is a factored normal (or, equivalently, a multivariate normal with diagonal covariance) with mean $f$ and variance $\sigma^2_n$: \begin{equation} f \sim N(\bold...
np.random.seed(1) # Number of training points n = 30 X0 = np.sort(3 * np.random.rand(n))[:, None] # Number of points at which to interpolate m = 100 X = np.linspace(0, 3, m)[:, None] # Covariance kernel parameters noise = 0.1 lengthscale = 0.3 f_scale = 1 cov = f_scale * pm.gp.cov.ExpQuad(1, lengthscale) K = cov(X0...
notebooks/GP-slice-sampling.ipynb
dolittle007/dolittle007.github.io
gpl-3.0
Examine actual posterior distribution The posterior is analytically tractable so we can compute the posterior mean explicitly. Rather than computing the inverse of the covariance matrix K, we use the numerically stable calculation described in Algorithm 2.1 of the book "Gaussian Processes for Machine Learning" (2006) by R...
fig, ax = plt.subplots(figsize=(14, 6)); ax.scatter(X0, f, s=40, color='b', label='True points'); # Analytically compute posterior mean L = np.linalg.cholesky(K_noise.eval()) alpha = np.linalg.solve(L.T, np.linalg.solve(L, f)) post_mean = np.dot(K_s.T.eval(), alpha) ax.plot(X, post_mean, color='g', alpha=0.8, label='...
notebooks/GP-slice-sampling.ipynb
dolittle007/dolittle007.github.io
gpl-3.0
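The Cholesky-based posterior mean (GPML Algorithm 2.1) can be sketched with plain NumPy on toy data; the kernel, training points, and noise level below are illustrative stand-ins, not the notebook's:

```python
import numpy as np

# Toy sketch of GPML Algorithm 2.1; data and hyperparameters are invented.
def rbf(a, b, lengthscale=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

X0 = np.array([0.0, 1.0, 2.0])    # training inputs
f = np.array([0.5, -0.2, 0.3])    # noisy training targets
Xs = np.array([0.5, 1.5])         # points at which to predict
noise = 0.1

K_noise = rbf(X0, X0) + noise**2 * np.eye(len(X0))
K_s = rbf(X0, Xs)                 # train/test cross-covariance

# Algorithm 2.1: two triangular solves via the Cholesky factor,
# instead of forming K^{-1} explicitly.
L = np.linalg.cholesky(K_noise)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
post_mean = K_s.T @ alpha
print(post_mean)
```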
Sample from posterior distribution
with pm.Model() as model: # The actual distribution of f_sample doesn't matter as long as the shape is right since it's only used # as a dummy variable for slice sampling with the given prior f_sample = pm.Flat('f_sample', shape=(n, )) # Likelihood y = pm.MvNormal('y', observed=f, mu=f_sample, ...
notebooks/GP-slice-sampling.ipynb
dolittle007/dolittle007.github.io
gpl-3.0
Gaussian Process Classification In Gaussian process classification, the likelihood is not normal and thus the posterior is not analytically tractable. The prior is again a multivariate normal with covariance matrix $K$, and the likelihood is the standard likelihood for logistic regression: \begin{equation} L(y | f) = \...
np.random.seed(5) f = np.random.multivariate_normal(mean=np.zeros(n), cov=K_stable.eval()) # Separate data into positive and negative classes f[f > 0] = 1 f[f <= 0] = 0 fig, ax = plt.subplots(figsize=(14, 6)); ax.scatter(X0, np.ma.masked_where(f == 0, f), color='b', label='Positive Observations'); ax.scatter(X0, np.m...
notebooks/GP-slice-sampling.ipynb
dolittle007/dolittle007.github.io
gpl-3.0
Model definition
def split_asym_mig_2epoch(params, ns, pts): """ params = (nu1_1,nu2_1,T1,nu1_2,nu2_2,T2,m1,m2) ns = (n1,n2) Split into two populations of specified size, with potentially asymmetric migration. The split coincides with a stepwise size change in the daughter populations. Then, have a second stepw...
Data_analysis/SNP-indel-calling/ANGSD/BOOTSTRAP_CONTIGS/minInd9_overlapping/DADI/adj_error.ipynb
claudiuskerth/PhDthesis
mit
LRT get optimal parameter values
ar_split_asym_mig_2epoch = [] for filename in glob("OUT_2D_models/split_asym_mig_2epoch_[0-9]*dill"): ar_split_asym_mig_2epoch.append(dill.load(open(filename))) l = 2*8+1 returned = [flatten(out)[:l] for out in ar_split_asym_mig_2epoch] df = pd.DataFrame(data=returned, \ columns=['ery_1_0','par...
Data_analysis/SNP-indel-calling/ANGSD/BOOTSTRAP_CONTIGS/minInd9_overlapping/DADI/adj_error.ipynb
claudiuskerth/PhDthesis
mit
This two-epoch model can be reduced to a one-epoch model either by setting $Nery_2 = Nery_1$ and $Npar_2 = Npar_1$ or by setting $T_2 = 0$.
# optimal parameter values for simple model (1 epoch) # note: Nery_2=Nery_1, Npar_2=Npar_1 and T2=0 popt_s = [1.24966921, 3.19164623, 1.42043464, 1.24966921, 3.19164623, 0.0, 0.08489757, 0.39827944]
Data_analysis/SNP-indel-calling/ANGSD/BOOTSTRAP_CONTIGS/minInd9_overlapping/DADI/adj_error.ipynb
claudiuskerth/PhDthesis
mit
get bootstrap replicates
# load bootstrapped 2D SFS all_boot = [dadi.Spectrum.from_file("../SFS/bootstrap/2DSFS/{0:03d}.unfolded.2dsfs.dadi".format(i)).fold() for i in range(200)]
Data_analysis/SNP-indel-calling/ANGSD/BOOTSTRAP_CONTIGS/minInd9_overlapping/DADI/adj_error.ipynb
claudiuskerth/PhDthesis
mit
calculate adjustment for D
# calculate adjustment for D evaluating at the *simple* model parameterisation # specifying only T2 as fixed adj_s = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_s, sfs2d, nested_indices=[5], multinom=True) adj_s # calculate adjustment for D evaluating at the *complex* model parameterisation # specifying on...
Data_analysis/SNP-indel-calling/ANGSD/BOOTSTRAP_CONTIGS/minInd9_overlapping/DADI/adj_error.ipynb
claudiuskerth/PhDthesis
mit
From Coffman2016, suppl. mat.: The two-epoch model can be marginalized down to the SNM model for an LRT by either setting η = 1 or T = 0. We found that the LRT adjustment performed well when treating both parameters as nested, so μ(θ) was evaluated with T = 0 and η = 1.
# calculate adjustment for D evaluating at the *simple* model parameterisation # treating Nery_2, Npar_2 and T2 as nested adj_s = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_s, sfs2d, nested_indices=[3,4,5], multinom=True) adj_s # calculate adjustment for D evaluating at the *complex* model parameterisatio...
Data_analysis/SNP-indel-calling/ANGSD/BOOTSTRAP_CONTIGS/minInd9_overlapping/DADI/adj_error.ipynb
claudiuskerth/PhDthesis
mit
Note: To download data from the Mexican stock exchange (BMV), the ticker must carry the MX extension. For example: MEXCHEM.MX, LABB.MX, GFINBURO.MX and GFNORTEO.MX.
aapl = web.Options('AAPL', 'yahoo') appl_opt = aapl.get_all_data().reset_index() appl_opt appl_opt['Expiry'] appl_opt['Type'] appl_opt.loc[1080] call01 = appl_opt[(appl_opt.Expiry=='2018-01-19') & (appl_opt.Type=='call')] call01
02. Parte 2/15. Clase 15/.ipynb_checkpoints/03Class NB-checkpoint.ipynb
jdsanch1/SimRC
mit
2. Implied volatility
ax = call01.set_index('Strike')[['IV']].plot(figsize=(8,6)) ax.axvline(call01.Underlying_Price.iloc[0], color='g'); put01 = appl_opt[(appl_opt.Expiry=='2018-01-19') & (appl_opt.Type=='put')] put01 ax = put01.set_index('Strike')[['IV']].plot(figsize=(8,6)) ax.axvline(put01.Underlying_Price.iloc[0], color='g');
02. Parte 2/15. Clase 15/.ipynb_checkpoints/03Class NB-checkpoint.ipynb
jdsanch1/SimRC
mit
3. Payoff plots
def call_payoff(ST, K): return max(0, ST-K) call_payoff(25, 30) def call_payoffs(STmin, STmax, K, step=1): maturities = np.arange(STmin, STmax+step, step) payoffs = np.vectorize(call_payoff)(maturities, K) df = pd.DataFrame({'Strike': K, 'Payoff': payoffs}, index=maturities) df.index.name = 'Preci...
02. Parte 2/15. Clase 15/.ipynb_checkpoints/03Class NB-checkpoint.ipynb
jdsanch1/SimRC
mit
Here we obtained some basic count statistics. We can see that only around a third of all nodes have full information, i.e. at least one similar song, at least one tag, and release year information. Release year distribution
import matplotlib.pyplot as plt # Aggregate by year to find distribution r = %cypher MATCH (s:Song) WHERE s.year IS NOT NULL RETURN DISTINCT s.year as year, COUNT(s) as count ORDER BY year df = r.get_dataframe() years = df["year"].values.tolist() counts = df["count"].values.tolist() years.append(years[-1]+1) counts.app...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
Here we show the distribution of release years of the dataset. It provides important information in that it characterizes what kind of music we can expect in the database. Most of the songs are fairly recent. In fact 56% are from after 2000 and over 80% are from after 1990. Most frequent artists and tags Let's first f...
r = %cypher MATCH (s:Song) WHERE s.artist IS NOT NULL RETURN DISTINCT s.artist as artist, COUNT(s) as count ORDER BY count DESC LIMIT 10 print r.get_dataframe()
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
... and the Tags with most songs.
query = """ MATCH (t:Tag)<-[r:TAGGED]-() RETURN DISTINCT t.name as tag, COUNT(r) as count ORDER BY count DESC LIMIT 10 """ r = %cypher {query} print r.get_dataframe()
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
The results are not particularly surprising. Obviously rock and pop are the most popular tags indicating two very popular genres. While there are a lot of tags not carrying genre information ("favourites", "best song ever", "female vocalists") in the dataset, the most common tags contain a higher proportion of genre ta...
# get all similarity relationships query =""" MATCH (s:Song)-[r:SIMILAR]->(s2:Song) RETURN s.id AS from, s2.id AS to, r.score AS score """ r = %cypher {query} df = r.get_dataframe() # build similarity graph G = nx.DiGraph() for row in df.iterrows(): r = row[1] G.add_edge(r["from"],r["to"],{"score":r["score"]})...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
As we can see, out of the roughly 100000 nodes, many do not have any similar songs. If we consider only the rest, almost 95% of those belong to the giant connected component. Song-Tag graph Another graph is the bipartite graph containing songs and tags. As there are about one million song-tag rel...
query = """ MATCH (s:Song)-[r:SIMILAR]-() WHERE s.year IS NOT NULL RETURN distinct s.id, count(r) """ r = %cypher {query} sim_degrees = r.get_dataframe().values[:,1] query = """ MATCH (s:Song)-[r:TAGGED]->(t:Tag) WHERE s.year IS NOT NULL RETURN distinct t.name, count(r) """ r = %cypher {query} tag_degrees = r.get_dat...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
We notice that the number of similar songs a song can have seems to have an upper bound. If we look at tags on the other hand, their degree seems to follow a clear power law. We can thus expect the maximum tag degree to grow unbounded as the network size increases. It is a scale-free network. The song similarity ...
query =""" MATCH (s:Song)-[r:TAGGED]->(t:Tag) RETURN s.id AS song, t.name as tag """ r = %cypher {query} df = r.get_dataframe() tags = {} for row in df.iterrows(): r = row[1] if r["tag"] not in tags: tags[r["tag"]] = {r["song"]} else: tags[r["tag"]].add(r["song"])
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
Now we define a similarity measure as the Jaccard similarity between two sets of songs $$ J = \frac{\lvert S1 \bigcap S2\rvert}{\lvert S1 \bigcup S2\rvert} $$ and use it to construct a graph with tags that are connected by a significant similarity (> 0.1 rather than 0, to decrease the number of edges). We then run the ...
import json from networkx.readwrite import json_graph import community def sim(s1,s2): isec = len(s1 & s2) return isec / float(len(s1) + len(s2) - isec) select_tags = [t for t,s in tags.iteritems() if len(s) > 180 and not t.isdigit()] N = len(select_tags) print "%d tags selected" % N simG = nx.Graph() for i...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
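The Jaccard similarity from the formula above, as a minimal Python 3 sketch (the notebook itself is Python 2 and uses `iteritems()`); the two tag sets here are invented:

```python
# Jaccard similarity J = |S1 & S2| / |S1 | S2| between two sets of songs.
def sim(s1, s2):
    isec = len(s1 & s2)
    return isec / float(len(s1) + len(s2) - isec)

rock = {"s1", "s2", "s3", "s4"}
metal = {"s3", "s4", "s5"}
print(sim(rock, metal))  # 2 shared songs / 5 total = 0.4
```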
As we can see for this particular community (and for others, see webpage) we obtain groups of tags that refer to similar kinds of music. Although certain tags within a group definitely share songs with other genres, the high modularity within this group makes them end up in one community. Natural Language Processing Ch...
query = """ MATCH (s:Song)-[:CONTAINS]->(:Word) WITH COUNT(DISTINCT s) as songCount MATCH (w:Word)<-[:CONTAINS]-(s:Song) RETURN DISTINCT coalesce(w.unstemmed, w.word) as word, log10(songCount/toFloat(COUNT(DISTINCT s))) as IDF """ r = %cypher {query} idf_df = r.get_dataframe() idf_df.set_index("word",inplace=True) pri...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
Here we can see that very common words get a low IDF score, which means they carry little information when we observe them. Indeed, these words tell us nothing about the particular topic of a text. We will now compute the TF-IDF scores for the 100 most frequent tags using results from all songs tagged by one of those tags.
%%capture from nltk.corpus import stopwords query = """ MATCH (t:Tag)<-[r:TAGGED]-() RETURN distinct t.name as tag,COUNT(r) as c ORDER BY c DESC LIMIT 100 """ r = %cypher {query} df = r.get_dataframe() tags = df["tag"].values.tolist() tags.append("Gangsta Rap") tags.append("death metal") tags.append("political") tags....
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
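The IDF formula from the Cypher query above, log10(songCount / number of songs containing the word), sketched on a toy corpus (the songs and words are made up; the real data lives in the Neo4j database):

```python
import math

# Document frequency per word, then IDF = log10(songCount / df).
songs = {
    "songA": {"love", "night", "dance"},
    "songB": {"love", "rain"},
    "songC": {"love", "death", "night"},
}
song_count = len(songs)
doc_freq = {}
for words in songs.values():
    for w in words:
        doc_freq[w] = doc_freq.get(w, 0) + 1
idf = {w: math.log10(song_count / df) for w, df in doc_freq.items()}
print(idf["love"], idf["rain"])  # "love" appears everywhere -> IDF 0
```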
Here we see the top 10 words by TF-IDF score for some selected tags. The words used clearly hint at the possible tags. This could be used to find the most probable tags given the lyrics of a song. Naturally, it was a lot less clear for popular tags like "rock" or "pop", which did not show any significant keywords...
from scipy import misc import matplotlib.pyplot as plt %matplotlib inline from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator def draw_wordcloud(tag, mask, ax): img_mask = misc.imread(mask,mode="RGB") keywords = " ".join([ word for k,v in tf_idf[tag].iteritems() for word in ["%s " % k] * int(10*v) ...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
Sentiment analysis Sentiment analysis is another approach of characterizing the data. During the research of [Dodds], the team obtained a set of 10222 (labMT 1.0) word and happiness score pairs indicating how much happiness the individual word conveys. We will use this list to define an average sentiment score for a se...
import io import json labmt_df = pd.read_csv("./songdata/labmt.txt",sep="\t", usecols=[0,2]) sentiment_map = {v[0]:v[1] for v in labmt_df.values} def calculate_sentiment(wordcounts): count = 0 sentiment = 0 for k,v in wordcounts.iteritems(): if k in sentiment_map: sentiment += v*sentime...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
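The average-sentiment computation can be sketched as follows; the three word scores below are invented stand-ins for the 10222 labMT word/happiness pairs:

```python
# Average happiness of a word-count dict, weighted by occurrence counts.
# The scores here are made up for illustration.
sentiment_map = {"happy": 8.3, "death": 1.5, "rain": 5.0}

def calculate_sentiment(wordcounts):
    count = 0
    sentiment = 0.0
    for word, n in wordcounts.items():
        if word in sentiment_map:   # words without a labMT score are skipped
            sentiment += n * sentiment_map[word]
            count += n
    return sentiment / count if count else None

print(calculate_sentiment({"happy": 2, "death": 1, "unknown": 5}))
```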
Oldies apparently have the happiest lyrics, while metal in general seems to be a very unhappy genre. This is probably because its lyrics dwell on dark topics like death, which have very low happiness scores. The entire scale gives an insight into how positive the lyrics of certain genres are in comparison to other g...
from IPython.display import clear_output #getting list of years dbQueryGetY = """ MATCH (s:Song) WHERE s.year IS NOT NULL RETURN distinct s.year as year ORDER BY year ASC """ #creating list of years r = %cypher {dbQueryGetY} dataFrame = r.get_dataframe() years = dataFrame["year"].values.tolist() #reading average happi...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
Here we can see the calculated average sentiment values over the years included in our dataset. As we have previously seen, the dataset contains only a few very old songs and more and more songs in recent years. This is why the graph fluctuates a lot in the early years, as we have a high degree of uncertainty. The mo...
%reload_ext cypher import io import json import numpy as np import pandas as pd with io.open("./songdata/lyrics.txt","r",encoding="UTF-8") as lyrics_file: # create words for line in lyrics_file: if line.startswith("%"): stemming_df = pd.read_csv("./songdata/reverse_stemming.txt",sep="<SEP>"...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
Now we iterate through the lyrics file and for each song obtain metadata from the LastFM data and add the song to the database, including its similar songs (i.e. they are created as "empty" nodes if they don't exist yet). This process can be stopped and run again to populate the database in several runs (it takes a lon...
# find all songs in the database that have lyric information from IPython.display import clear_output print "Getting database songs..." limit = 10000 skip = 0 added = 0 all_saved_ids = set() while True: query = """ MATCH (s:Song) WHERE EXISTS((s)-[:CONTAINS]->(:Word)) RETURN s.id AS track_id...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
Finally we add year information for all songs where it is available
with io.open("./songdata/tracks_per_year.txt","r",encoding="UTF-8") as year_file: year_data = {} for line in year_file: parts = line.split("<SEP>") year_data[parts[1]] = int(parts[0]) limit = 1000 skip = 0 while True: # get a batch of nodes that have no year query = """ ...
Explainer_notebook.ipynb
thangout/thangout.github.io
gpl-3.0
- get email of author
- compare to list of known persons of interest
- return boolean if author is person of interest
- aggregate count over all emails to person
from __future__ import division data_point = data_dict['METTS MARK'] frac = data_point["from_poi_to_this_person"] / data_point["to_messages"] print frac def computeFraction( poi_messages, all_messages ): """ given a number messages to/from POI (numerator) and number of all messages to/from a person (deno...
Feature Selection.ipynb
omoju/udacityUd120Lessons
gpl-3.0
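A sketch of `computeFraction` in Python 3, assuming the course's convention that a missing count appears as the string "NaN" and should map to a fraction of 0:

```python
# Fraction of a person's messages that involve a POI; "NaN" marks a
# missing count in the Enron data dict (an assumed convention here).
def computeFraction(poi_messages, all_messages):
    if poi_messages == "NaN" or all_messages == "NaN":
        return 0.
    return poi_messages / float(all_messages)

print(computeFraction(1, 4))      # 0.25
print(computeFraction("NaN", 4))  # 0.0
```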
Beware of BUGS!!! When Katie was working on the Enron POI identifier, she engineered a feature that identified when a given person was on the same email as a POI. So for example, if Ken Lay and Katie Malone are both recipients of the same email message, then Katie Malone should have her "shared receipt" feature increme...
sys.path.append(dataPath+'text_learning/') words_file = "your_word_data.pkl" authors_file = "your_email_authors.pkl" word_data = pickle.load( open(words_file, "r")) authors = pickle.load( open(authors_file, "r") ) ### test_size is the percentage of events assigned to the test set (the ### remainder go into training)...
Feature Selection.ipynb
omoju/udacityUd120Lessons
gpl-3.0
This is an iterative process:
- start off with a pared-down version of the dataset
- run a decision tree on it
- get the accuracy, which should be rather high
- get the important features, defined as those with importance scores over 0.2
- remove those features
- run again until very few features have an importance value over 0.2
from sklearn import tree clf = tree.DecisionTreeClassifier() clf.fit(features_train, labels_train) print"{}{:.2f}".format("Classifier accurancy: ", clf.score(features_test, labels_test)) import operator featuresImportance = clf.feature_importances_ featuresSortedByScore = [] for feature in range(len(featuresImpor...
Feature Selection.ipynb
omoju/udacityUd120Lessons
gpl-3.0
A. Numbers and Calculations Note: To insert comments to yourself (this is always a great idea), use the # symbol.
## You can use Python as a calculator: 5*7 #This is a comment and does not affect your code. #You can have as many as you want. #No worries. 5+7 5-7 5/7
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
These simple operations on numbers in Python 3 work exactly as you'd expect, but that's not true across all programming languages. For example, in Python 2, an older version of Python that is still used often in scientific programming: $5/7 \neq 5./7$ The two calculations below would be equal on most calculators, ...
a = 10 b = 7 print(a) print(b) print(a*b, a+b, a/b) a = 5. b = 7 print(a*b, a+b, a/b)
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
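In Python 3 the distinction disappears: `/` is always true division, and floor division is spelled `//`. A quick check:

```python
# Python 3: / is true division regardless of operand types;
# // is floor division (what 5/7 evaluated to in Python 2).
print(5 / 7)    # 0.7142857142857143
print(5. / 7)   # same value
print(5 // 7)   # 0
```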
Next, let's create a list of numbers and do math to that list.
c = [0,1,2,3,4,5,6,7,8,9,10,11] print(c)
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
This should not have worked. Why? The short answer is that a list is very useful, but it is not an array. However, you can convert your lists to arrays (and back again if you feel you need to). In order to do this conversion (and just about anything else), we need something extra. Python is a fantastic language becaus...
import sys sys.path
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
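Why the list "should not have worked": `*` on a list repeats it rather than doing arithmetic, while a NumPy array applies the operation elementwise. A minimal demonstration:

```python
import numpy as np

# Lists "multiply" by repetition; arrays do elementwise math.
c = [0, 1, 2, 3]
print(c * 2)        # [0, 1, 2, 3, 0, 1, 2, 3]
arr = np.array(c)
print(arr * 2)      # [0 2 4 6]
print(arr ** 2)     # [0 1 4 9]
```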
To convert our list $c = [0,1,2,\ldots,11]$ to an array we use numpy.array(),
c = np.array(c) d = c**2 print(d)
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
Next make an array with endpoints 0 and 1 (include 0 and 1), that has 50 values in it. You can use either (both?) np.arange or np.linspace. Which is easier to you? How many numbers do you get? Are these numbers integers or floats (decimal place)?
np.linspace(0,1,50) np.arange(0,1.02,.02)
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
Next make an array with endpoints 0 and 2.5 (include 0 and 2.5), that has values spaced out in increments of 0.05. For example: 0, 0.05, 0.1, 0.15... You can use either np.arange or np.linspace. Which is easier to you? How many numbers do you get? Are these numbers integers or floats (decimal place)?
np.arange(0,2.55,0.05)
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
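A quick check of what the two calls from these exercises actually return (how many values, and which endpoints make it in):

```python
import numpy as np

# linspace includes both endpoints and gives exactly the requested count;
# arange excludes the stop value, so the stop must be nudged past 2.5.
a = np.linspace(0, 1, 50)       # exactly 50 floats, 0 and 1 included
b = np.arange(0, 2.55, 0.05)    # 0, 0.05, ..., 2.5 (51 values)
print(len(a), a[0], a[-1])
print(len(b), round(b[-1], 2))
```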
Next, let's plot these two arrays. Call them $a$ and $b$, or $x$ and $y$ (whichever you prefer--this is your code!), for example: a = np.linspace(). Fill in the missing bits in the code below.
import numpy as np %matplotlib inline import matplotlib import matplotlib.pyplot as plt ema = np.linspace(0,1,50) bob = np.arange(0,2.5,0.05) # Clear the plotting field. plt.clf() # No need to add anything inside these parentheses. plt.plot(ema,bob,'b*') # The 'b*' says you want to use blue * plotting symbols.
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
For all the possible plotting symbols, see: http://matplotlib.org/api/markers_api.html. Next, let's plot the positive half of a circle. Let's also add labels in using plt.title(), plt.xlabel(), and plt.ylabel().
import numpy as np %matplotlib inline import matplotlib import matplotlib.pyplot as plt x = np.linspace(-1,1,10) y = np.sqrt(1. - x**2) # Upper half of the unit circle. print(x) print(y) # Clear the plotting field. plt.clf() # No need to add anything inside these parentheses. plt.xlim([-1.1,1.1]) plt.plot(x,y,'ro') # The 'ro' says you want to use Red o pl...
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
Plotting Multiple Curves:
import numpy as np import matplotlib import matplotlib.pyplot as plt # Constants in MKS (meters, kilograms, & seconds) h = 6.626e-34 # c = 2.998e8 # m/s k = 1.381e-23 # J/K # Let's try to recreate the plot above. # Pick temperatures: T1 = 7000 K , T2= 5800 K, and T3 = 4000 K. # Let's have the domain (x val...
notebooks/Lectures2018/Lecture1/GradMap_L1.ipynb
astroumd/GradMap
gpl-3.0
We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
reload(sys) sys.setdefaultencoding('utf8') start=50; end=100 X = fac0[start:end] Y = fac2[start:end] plt.figure(figsize=(15,15)) plt.scatter(X, Y) for i, x, y in zip(topMovies[start:end], X, Y): plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14) plt.show()
deeplearning1/nbs/lesson4.ipynb
VadimMalykh/courses
apache-2.0
1. Create and run the synthetic example of NST First, we need to create an implementation of the Landlab NetworkModelGrid to plot. This example creates a synthetic grid, defining the location of each node and link.
y_of_node = (0, 100, 200, 200, 300, 400, 400, 125) x_of_node = (0, 0, 100, -50, -100, 50, -150, -100) nodes_at_link = ((1, 0), (2, 1), (1, 7), (3, 1), (3, 4), (4, 5), (4, 6)) grid1 = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link) grid1.at_node["bedrock__elevation"] = [0.0, 0.05, 0.2, 0.1, 0.25, 0.4, 0.8, 0.8...
notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb
landlab/landlab
mit
2. Create and run an example of NST using a shapefile to define the network First, we need to create an implementation of the Landlab NetworkModelGrid to plot. This example creates a grid based on a polyline shapefile.
datadir = ExampleData("io/shapefile", case="methow").base shp_file = datadir / "MethowSubBasin.shp" points_shapefile = datadir / "MethowSubBasin_Nodes_4.shp" grid2 = read_shapefile( shp_file, points_shapefile=points_shapefile, node_fields=["usarea_km2", "Elev_m"], link_fields=["usarea_km2", "Length_m"...
notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb
landlab/landlab
mit
3. Options for link color and link line widths The dictionary below (link_color_options) outlines 4 examples of link color and line width choices: 1. The default output of plot_network_and_parcels 2. Some simple modifications: the whole network is red, with a line width of 7, and no parcels. 3. Coloring links by an ex...
network_norm = Normalize(-1, 6) # see matplotlib.colors.Normalize link_color_options = [ {}, # empty dictionary = defaults { "network_color": "r", # specify some simple modifications. "network_linewidth": 7, "parcel_alpha": 0, # make parcels transparent (not visible) }, { ...
notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb
landlab/landlab
mit
In addition to coloring links by an existing link attribute, we can pass any array with one value per link. In this example, we color the links using an array of random values.
random_link = np.random.randn(grid2.size("link")) l_opts = { "link_attribute": random_link, # use an array of size link "network_cmap": "jet", # change colormap "network_norm": network_norm, # and normalize "link_attribute_title": "A random number", "parcel_alpha": 0, "network_linewidth": 3,...
notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb
landlab/landlab
mit
4. Options for parcel color The dictionary below (parcel_color_options) outlines 4 examples of parcel color choices: 1. The default output of plot_network_and_parcels 2. Some simple modifications: all parcels are red, with a parcel size of 10 3. Color parcels by an existing parcel attribute, in this case ...
parcel_color_norm = Normalize(0, 1) # Linear normalization parcel_color_norm2 = colors.LogNorm(vmin=0.01, vmax=1) parcel_color_options = [ {}, # empty dictionary = defaults {"parcel_color": "r", "parcel_size": 10}, # specify some simple modifications. { "parcel_color_attribute": "D", # existing...
notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb
landlab/landlab
mit
5. Options for parcel size The dictionary below (parcel_size_options) outlines 4 examples of parcel size choices: 1. The default output of plot_network_and_parcels 2. Set a uniform parcel size and color 3. Size parcels by an existing parcel attribute, in this case the sediment diameter (parcels1.dataset[...
parcel_size_norm = Normalize(0, 1) parcel_size_norm2 = colors.LogNorm(vmin=0.01, vmax=1) parcel_size_options = [ {}, # empty dictionary = defaults {"parcel_color": "b", "parcel_size": 10}, # specify some simple modifications. { "parcel_size_attribute": "D", # use a parcel attribute. "par...
notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb
landlab/landlab
mit
6. Plotting a subset of the parcels In some cases, we might want to plot only a subset of the parcels on the network. Below, we plot every 50th parcel in the DataRecord.
parcel_filter = np.zeros((parcels2.dataset.dims["item_id"]), dtype=bool) parcel_filter[::50] = True pc_opts = { "parcel_color_attribute": "D", # a more complex normalization and a parcel filter. "parcel_color_norm": parcel_color_norm2, "parcel_color_attribute_title": "Diameter [m]", "parcel_alpha": 1.0...
notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb
landlab/landlab
mit
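The every-50th selection above is plain NumPy boolean masking; a minimal sketch, with 200 standing in for the real parcel count (`parcels2.dataset.dims["item_id"]`):

```python
import numpy as np

# Stand-in for the number of parcels; the real value comes from
# parcels2.dataset.dims["item_id"].
n_parcels = 200

# Boolean mask that is True for every 50th parcel.
parcel_filter = np.zeros(n_parcels, dtype=bool)
parcel_filter[::50] = True

print(np.flatnonzero(parcel_filter))  # [  0  50 100 150]
```

Passing such a mask as `parcel_filter` restricts plotting to the selected parcels only.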
7. Combining network and parcel plotting options Nothing will stop us from making all of the choices at once.
parcel_color_norm = colors.LogNorm(vmin=0.01, vmax=1) parcel_filter = np.zeros((parcels2.dataset.dims["item_id"]), dtype=bool) parcel_filter[::30] = True fig = plot_network_and_parcels( grid2, parcels2, parcel_time_index=0, parcel_filter=parcel_filter, link_attribute="sediment_total_volume", n...
notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb
landlab/landlab
mit
Explore the Data Play around with view_sentence_range to view different parts of the data.
view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentenc...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. Th...
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionar...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
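Stripped of the notebook's scaffolding, the conversion is a nested lookup with an `<EOS>` id appended to each target sentence. A framework-free sketch with toy vocabularies (stand-ins for the real lookup tables built during preprocessing):

```python
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """Convert source/target text to id lists, appending <EOS> to each target sentence."""
    source_ids = [[source_vocab_to_int[w] for w in line.split()]
                  for line in source_text.split('\n')]
    target_ids = [[target_vocab_to_int[w] for w in line.split()] + [target_vocab_to_int['<EOS>']]
                  for line in target_text.split('\n')]
    return source_ids, target_ids

# Toy vocabularies -- hypothetical stand-ins for the real dictionaries.
src_vocab = {'new': 0, 'jersey': 1}
tgt_vocab = {'<EOS>': 0, 'new': 1, 'jersey': 2}
src_ids, tgt_ids = text_to_ids('new jersey', 'new jersey', src_vocab, tgt_vocab)
print(src_ids, tgt_ids)  # [[0, 1]] [[1, 2, 0]]
```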
Build the Neural Network You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() fu...
def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ input_ = tf.placeholder(t...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
Encoding Implement encoding_layer() to create an Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn()
from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size ...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell ...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
Decoding - Inference Create an inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder ...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
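What GreedyEmbeddingHelper does, stripped of the TensorFlow machinery: at each step pick the argmax token and feed it back as the next input, stopping at `<EOS>` or a length limit. A NumPy sketch with a hypothetical step function standing in for the decoder cell:

```python
import numpy as np

def greedy_decode(step_fn, start_id, end_id, max_len):
    """Greedy decoding: at each step pick the highest-scoring token and
    feed it back as the next input, stopping at <EOS> or max_len."""
    token, out = start_id, []
    for _ in range(max_len):
        logits = step_fn(token)          # scores over the vocabulary
        token = int(np.argmax(logits))   # greedy choice
        if token == end_id:
            break
        out.append(token)
    return out

# Hypothetical step function: always prefers token (input + 1); vocab size 5.
def toy_step(token):
    logits = np.zeros(5)
    logits[(token + 1) % 5] = 1.0
    return logits

print(greedy_decode(toy_step, start_id=0, end_id=4, max_len=10))  # [1, 2, 3]
```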
Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use your decoding_layer_train(e...
def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer ...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
Build the Neural Network Apply the functions you implemented above to: Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size). Process target data using your process_decoder_input(target_data, target_vocab_to_int, bat...
def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, ...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_siz...
# Number of Epochs epochs = 10 # Batch Size batch_size = 1024 # RNN Size rnn_size = 512 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 300 decoding_embedding_size = 300 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.5
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary to the &lt;UNK&gt; word id.
def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ sentence_lower = sentence.lower() word_ids = [] for word in sentence_...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
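The three bullet points above fit in one comprehension with a `dict.get` fallback. A minimal sketch, assuming a toy vocabulary in place of the real source vocab:

```python
def sentence_to_seq(sentence, vocab_to_int):
    """Lowercase the sentence and map each word to its id,
    falling back to the <UNK> id for out-of-vocabulary words."""
    unk = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unk) for word in sentence.lower().split()]

# Toy vocabulary -- a hypothetical stand-in for the real source vocab.
vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
print(sentence_to_seq('He saw a YELLOW truck', vocab))  # [1, 2, 3, 0, 4]
```

Note that 'YELLOW' is out of vocabulary here, so it maps to the `<UNK>` id 0.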
Translate This will translate translate_sentence from English to French.
translate_sentence = 'he saw a old yellow truck .' # translate_sentence = "New Jersey is usually chilly during july , and it is usually freezing in november" """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Sess...
language-translation/dlnd_language_translation.ipynb
lukechen526/deep-learning
mit
Define the variables
# Initialization the_vars = {} # We need Lx from boututils.options import BOUTOptions myOpts = BOUTOptions(folder) Lx = eval(myOpts.geom['Lx']) Ly = eval(myOpts.geom['Ly']) # Gaussian with sinus and parabola # The skew sinus # In Cartesian coordinates we would like a sinus with a wave-vector in the direction # ...
MES/boundaries/3-cauchyBC/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Series from a dictionary:
dictionary = {'Favorite Food': 'mexican', 'Favorite city': 'Portland', 'Hometown': 'Mexico City'} favorite = pd.Series(dictionary) favorite
pandas-intro/README.ipynb
caromedellin/Python-notes
mit
Accessing an item from a Series:
favorite['Favorite Food']
pandas-intro/README.ipynb
caromedellin/Python-notes
mit
Boolean indexing for selection
favorite[favorite=='mexican']
pandas-intro/README.ipynb
caromedellin/Python-notes
mit
Not null function
favorite.notnull() favorite[favorite.notnull()]
pandas-intro/README.ipynb
caromedellin/Python-notes
mit
Data Frame To create a DataFrame we can pass a dictionary of lists into the DataFrame constructor.
data = {'year': [2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012], 'team': ['Bears', 'Bears', 'Bears', 'Packers', 'Packers', 'Lions', 'Lions', 'Lions'], 'wins': [11, 8, 10, 15, 11, 6, 10, 4], 'losses': [5, 8, 6, 1, 5, 10, 6, 12]} football = pd.DataFrame(data, columns=['year', 'team', 'wins', 'los...
pandas-intro/README.ipynb
caromedellin/Python-notes
mit
First we load some datasets of interest
#the seed information #df_seeds = pd.read_csv('../input/NCAATourneySeeds.csv') #print(df_seeds.shape) #print(df_seeds.head()) #print(df_seeds.Season.value_counts()) #the seed information df_seeds = pd.read_csv('../input/NCAATourneySeeds_SampleTourney2018.csv') print(df_seeds.shape) print(df_seeds.head()) #print(df_see...
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Now we separate the winners from the losers and organize our dataset
df_seeds['seed_int'] = df_seeds['Seed'].apply( lambda x : int(x[1:3]) ) df_winseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'WTeamID', 'seed_int':'WSeed'}) df_lossseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'LTeamID', 'seed_int':'LSeed'}) df_d...
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
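The `int(x[1:3])` trick above relies on the shape of NCAA seed strings: a region letter, a two-digit seed, and an optional play-in suffix, so characters 1–2 always hold the numeric seed. A minimal sketch:

```python
# NCAA seed strings: region letter + two-digit seed + optional
# play-in suffix (e.g. 'W01', 'X16', 'Y11a').
seeds = ['W01', 'X16', 'Y11a']

# Characters 1-2 are the numeric seed.
seed_int = [int(s[1:3]) for s in seeds]
print(seed_int)  # [1, 16, 11]
```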
Now we match the detailed results to the merged dataset above
df_concat['DiffSeed'] = df_concat[['LSeed', 'WSeed']].apply(lambda x : 0 if x[0] == x[1] else 1, axis = 1) print(df_concat.shape) print(df_concat.head()) print(df_concat.Season.value_counts())
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Here we get our submission info
df_sample_sub1 = pd.read_csv('../input/SampleSubmissionStage1.csv') #prepares sample submission df_sample_sub2 = pd.read_csv('../input/SampleSubmissionStage2.csv') df_sample_sub=pd.concat([df_sample_sub1, df_sample_sub2]) print(df_sample_sub.shape) print(df_sample_sub.head()) df_sample_sub['Season'] = df_sample_su...
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Training Data Creation
winners = df_concat.rename( columns = { 'WTeamID' : 'TeamID1', 'LTeamID' : 'TeamID2', 'WScore' : 'Team1_Score', 'LScore' : 'Team2_Score'}).drop(['WSeed', 'L...
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
We will only consider years relevant to our test submission
years = [2014, 2015, 2016, 2017, 2018]
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Now let's look at just TeamID2, i.e. the second team's info.
train_test_inner = pd.merge( train.loc[ train['Season'].isin(years), : ].reset_index(drop = True), df_sample_sub.drop(['ID', 'Pred'], axis = 1), on = ['Season', 'TeamID1', 'TeamID2'], how = 'inner' ) train_test_inner.head() train_test_inner.shape
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
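The inner join above keeps only (Season, TeamID1, TeamID2) triples present in both frames. A small pandas sketch with made-up rows:

```python
import pandas as pd

# Made-up training rows and a sample-submission key frame.
train = pd.DataFrame({'Season': [2014, 2015], 'TeamID1': [1101, 1102],
                      'TeamID2': [1103, 1104], 'Score_Ratio': [1.2, 0.9]})
sub = pd.DataFrame({'Season': [2014], 'TeamID1': [1101], 'TeamID2': [1103]})

# Inner join: only key combinations present in both frames survive.
inner = pd.merge(train, sub, on=['Season', 'TeamID1', 'TeamID2'], how='inner')
print(inner.shape)  # (1, 4)
```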
Here we look at the comparable statistics. For the TeamID2 column, we would consider the inverse of the ratio, and 1 minus the score attempt percentage.
def geo_mean( x ): return np.exp( np.mean(np.log(x)) ) def harm_mean( x ): return np.mean( x ** -1.0 ) ** -1.0 team1d_score_spread = train_test_inner.groupby(['Season', 'TeamID1'])[['Score_Ratio', 'Score_Pct']]\ .agg({ 'Score_Ratio': geo_mean, 'Score_Pct' : harm_mean}).reset_index()\ .set_index('Season').rena...
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
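The two aggregators in the cell above are the geometric and harmonic means — the natural averages for ratios and for rates/percentages respectively. A self-contained NumPy version:

```python
import numpy as np

def geo_mean(x):
    """Geometric mean: exp of the mean of logs."""
    return np.exp(np.mean(np.log(x)))

def harm_mean(x):
    """Harmonic mean: reciprocal of the mean of reciprocals."""
    return np.mean(np.asarray(x, dtype=float) ** -1.0) ** -1.0

print(geo_mean([1.0, 4.0]))   # 2.0
print(harm_mean([1.0, 3.0]))  # 1.5
```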
Now let's create a model based solely on the inner group and predict those probabilities. We will get the teams with missing results.
X_train = train_test_inner.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']] train_labels = train_test_inner['Result'] train_test_outer = pd.merge( train.loc[ train['Season'].isin(years), : ].reset_index(drop = True), df_sample_sub.drop(['ID', 'Pred'], axis = 1), on = ['Season', 'TeamID1', 'T...
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
We scale our data for our logistic regression, and make sure our categorical variables are properly processed.
X_test = train_test_missing.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']] n = X_train.shape[0] train_test_merge = pd.concat( [X_train, X_test], axis = 0 ).reset_index(drop = True) train_test_merge = pd.concat( [pd.get_dummies( train_test_merge['Season'].astype(object) ), train_test_merge.drop(...
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
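Scaling matters for logistic regression because regularization penalizes all coefficients on the same scale. A minimal standardization sketch (the notebook's actual scaler is not shown in the excerpt, so this is just the z-score idea):

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# Standardize each column to zero mean and unit variance.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))
```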
Here we store our probabilities
train_test_inner['Pred1'] = log_clf.predict_proba(X_train)[:,1] train_test_missing['Pred1'] = log_clf.predict_proba(X_test)[:,1]
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
We merge our predictions
sub = pd.merge(df_sample_sub, pd.concat( [train_test_missing.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']], train_test_inner.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']] ], axis = 0).reset_index(drop = True), ...
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Any missing value for the prediction will be imputed with the product of the probabilities calculated above. We assume these are independent events.
sub['Pred'] = sub[['TeamID1', 'TeamID2','Pred1']]\ .apply(lambda x : team1_probs.get(x[0]) * ( 1 - team2_probs.get(x[1]) ) if np.isnan(x[2]) else x[2], axis = 1) print(sub.shape) print(sub.head()) sub.ID.value_counts() sub=sub.groupby('ID', as_index=False).agg({"Pred": "mean"}) sub.ID.value_counts() sub2018=...
MNIST_2017/dump_/men_2018_0ld_logistic_script.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
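The imputation rule treats "team 1 wins" and "team 2 loses" as independent, so the missing probability is P(team1 wins) · (1 − P(team2 wins)). A sketch with hypothetical per-team probabilities standing in for `team1_probs`/`team2_probs`:

```python
import numpy as np

# Hypothetical per-team win probabilities (stand-ins for the real lookups).
team1_probs = {1101: 0.8}
team2_probs = {1103: 0.3}

def impute(team1, team2, pred):
    """Keep the existing prediction if present; otherwise assume independence
    and take P(team1 wins) * (1 - P(team2 wins))."""
    if np.isnan(pred):
        return team1_probs[team1] * (1 - team2_probs[team2])
    return pred

print(impute(1101, 1103, np.nan))  # ~0.56 (0.8 * 0.7)
print(impute(1101, 1103, 0.9))     # 0.9
```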
To compute most measures, the data (i.e. the objectives) is normalized. Normalization scales the data between 0 and 1. Why normalize? To make it easier to compare candidate objectives across multiple units of measurement.
def normalize(problem, points): """ Normalize all the objectives in each point and return them """ meta = problem.objectives all_objs = [] for point in points: objs = [] for i, o in enumerate(problem.evaluate(point)): low, high = meta[i].low, meta[i].high # TODO 3: Normalize 'o' betwee...
code/7/workshop/magoff2_performance.ipynb
gbtimmon/ase16GBT
unlicense
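The TODO inside the loop is plain min-max scaling: shift by the objective's low bound and divide by its range. A standalone sketch of just that step:

```python
def min_max_normalize(o, low, high):
    """Scale a raw objective value into [0, 1] given its known bounds."""
    return (o - low) / (high - low)

print(min_max_normalize(75.0, low=50.0, high=100.0))   # 0.5
print(min_max_normalize(50.0, low=50.0, high=100.0))   # 0.0
```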
Why is word2vec so popular? Creates a word "cloud", organized by semantic meaning. Converts text into a numerical form that machine learning algorithms and Deep Learning Neural Nets can then use as input. <img src="images/firth.png" style="width: 300px;"/> “You shall know a word by the company it keeps” - J. R....
bigrams = ["insurgents killed", "killed in", "in ongoing", "ongoing fighting"] skip_2_bigrams = ["insurgents killed", "insurgents in", "insurgents ongoing", "killed in", "killed ongoing", "killed fighting", "in ongoing", "in fighting", "ongoing fighting"]
word2vec_slides.ipynb
brianspiering/word2vec-talk
apache-2.0
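The skip-2 bigrams listed above are just all ordered word pairs at most three positions apart. A short sketch that reproduces both lists:

```python
def k_skip_bigrams(tokens, k):
    """All ordered pairs (tokens[i], tokens[j]) with at most k words
    skipped between them; k=0 gives ordinary bigrams."""
    return [f'{tokens[i]} {tokens[j]}'
            for i in range(len(tokens))
            for j in range(i + 1, min(i + k + 2, len(tokens)))]

tokens = 'insurgents killed in ongoing fighting'.split()
print(k_skip_bigrams(tokens, 0))  # the 4 plain bigrams
print(k_skip_bigrams(tokens, 2))  # the 9 skip-2 bigrams
```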
Geometric Example The idea of SVD is based on geometric properties. If you don't remember much from linear algebra let's try and refresh your memory with simple 2-dimensional data. What SVD does is find principal components that are orthogonal basis functions. This means that in this 2d example we are going to...
# Sampling 400 points from a 2-dimensional Gaussian distribution with an elliptic, slanted shape. ps = np.random.multivariate_normal([0, 0], [[3, 2.5], [2.5, 3.2]], size=400) f, ax = plt.subplots(1, 1, figsize=(9, 9)) ax.scatter(ps[:, 0], ps[:, 1], s=120, alpha=.75) ax.set_xlim(-6, 6) ax.set_ylim(-6, 6) plt.show()
week9/svd.ipynb
sameersingh/ml-discussions
apache-2.0
Here in this example we have 400 points in the plane. The idea of representing the points in an embedded space (also known as latent-space) is not new to you. This is the same thing that K-Means clustering is doing. In K-Means, each centroid (or cluster) is a dimension in that space and the points are now represent...
# Step 1 mu = np.mean(ps, axis=0) X0 = ps - mu # Step 2 U, s, V = svd(X0, full_matrices=False) # Step 3 W = np.dot(U, np.diag(s)) print('Shape of W = (%d, %d)' % W.shape, 'Shape of V = (%d, %d)' % V.shape)
week9/svd.ipynb
sameersingh/ml-discussions
apache-2.0
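The three steps — center, factor, combine — can be checked end-to-end with NumPy alone: at full rank, W @ V plus the mean reconstructs the data exactly (up to float error).

```python
import numpy as np
from numpy.linalg import svd

rng = np.random.default_rng(0)
ps = rng.multivariate_normal([0, 0], [[3, 2.5], [2.5, 3.2]], size=400)

# Step 1: center the data.
mu = ps.mean(axis=0)
X0 = ps - mu

# Step 2: thin SVD of the centered data (V holds the directions as rows).
U, s, V = svd(X0, full_matrices=False)

# Step 3: coordinates of each point in the latent space.
W = U @ np.diag(s)

# Full-rank reconstruction recovers the original points.
print(np.allclose(W @ V + mu, ps))  # True
```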
Interpretation of V The learned V matrix is the directions that the SVD finds (or the principal components). It represents for us the new latent-space in which each row is a unique direction. In the HW assignment you are required to plot the directions for the Faces data sets, and there's code that shows you how to d...
f, ax = plt.subplots(1, 1, figsize=(9, 9)) ax.scatter(ps[:, 0], ps[:, 1], s=80, alpha=.55) for j in range(2): a = 2 * np.median(np.abs(W[:, j])) # The scalar # Computing the "direction" as a function of the mean. p1 = mu - a * V[j] p2 = mu + a * V[j] dx, dy = p2 - p1 ax.arrow(p1[0], p1[1...
week9/svd.ipynb
sameersingh/ml-discussions
apache-2.0
We can see that running SVD gave us two orthogonal lines (basis functions), each one of them is a "direction". Understanding W Each row in V is a basis function, and each point can be represented as a linear combination of those basis functions. The scalars used in that linear combination are in W. If we take the first ...
f, ax = plt.subplots(1, 1, figsize=(9, 9)) ax.scatter(ps[:, 0], ps[:, 1], s=80, alpha=.55) # Reconstruction of the data from the latent-space representation Xhat = np.dot(W, V) + mu # DO NOT FORGET TO ADD BACK THE MU # Plotting the reconstructed points on top of the blue dots. ax.scatter(Xhat[:, 0], Xhat[:, 1], s=35...
week9/svd.ipynb
sameersingh/ml-discussions
apache-2.0
The plot shows the reconstructed data (red dots) overlaid on top of the original data (blue dots), showing that each point can be represented as a linear combination of the basis functions. That'll be pretty much it for the linear algebra cover :) Understanding the Embedded Space The basis functions represent the new e...
ps1 = np.random.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1.2]], size=100) ps2 = np.random.multivariate_normal([8, 8], [[1.2, 0.5], [0.5, 1]], size=100) ps = np.vstack([ps1, ps2]) f, ax = plt.subplots(1, 1, figsize=(9, 9)) ax.scatter(ps[:, 0], ps[:, 1], s=120, alpha=.75) ax.set_xlim(-6, 15) ax.set_ylim(-6, 15) p...
week9/svd.ipynb
sameersingh/ml-discussions
apache-2.0
Let's run the SVD algorithm again (using the SAME process).
# Step 1 mu = np.mean(ps, axis=0) X0 = ps - mu # Step 2 U, s, V = svd(X0, full_matrices=False) # Step 3 W = np.dot(U, np.diag(s)) print('Shape of W = (%d, %d)' % W.shape, 'Shape of V = (%d, %d)' % V.shape)
week9/svd.ipynb
sameersingh/ml-discussions
apache-2.0
Question: What would the directions of the clusters look like? Make sure you understand why it looks exactly the same (the way the arrows point is not that important).
f, ax = plt.subplots(1, 1, figsize=(9, 9)) ax.scatter(ps[:, 0], ps[:, 1], s=80, alpha=.55) for j in range(2): a = 2 * np.median(np.abs(W[:, j])) # The scalar # Computing the "direction" as a function of the mean. p1 = mu - a * V[j] p2 = mu + a * V[j] dx, dy = p2 - p1 ax.arrow(p1[0], p1[1...
week9/svd.ipynb
sameersingh/ml-discussions
apache-2.0
Now, instead of using the entire embedded space, let's just use one of the basis functions and plot the reconstructed data.
f, ax = plt.subplots(1, 1, figsize=(9, 9)) ax.scatter(ps[:, 0], ps[:, 1], s=80, alpha=.55) # Reconstruction of the data from the latent-space representation # With both dimensions - if you want to see that one again. # Xhat = np.dot(W, V) + mu # DON'T FORGET TO ADD THE MU # Only one vector Xhat = np.dot(W[:, 0].re...
week9/svd.ipynb
sameersingh/ml-discussions
apache-2.0
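Keeping only the first basis function projects every point onto a single line: the reconstruction W[:, :1] @ V[:1] has rank 1. A NumPy sketch of that truncation:

```python
import numpy as np
from numpy.linalg import svd, matrix_rank

rng = np.random.default_rng(1)
ps = rng.multivariate_normal([0, 0], [[3, 2.5], [2.5, 3.2]], size=400)

# Center, factor, and take latent coordinates as before.
mu = ps.mean(axis=0)
U, s, V = svd(ps - mu, full_matrices=False)
W = U @ np.diag(s)

# Rank-1 reconstruction: only the first direction is kept,
# so all reconstructed points lie on one line through mu.
Xhat = W[:, :1] @ V[:1] + mu
print(matrix_rank(Xhat - mu))  # 1
```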