Friend Recommendation: Open Triangles

Now that we have some code that identifies closed triangles, we might want to make friend recommendations by looking for open triangles. Open triangles are the pattern we described earlier: A knows B and B knows C, but C's relationship with A isn't captured...
# Fill in your code here.
def get_open_triangles(G, node):
    """
    There are many ways to represent this. One may choose to represent
    only the nodes involved in an open triangle; this is not the approach
    taken here. Rather, we have code that explicitly enumerates every
    open triangle present.
    """
    ...
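One possible sketch of such a function. Hedged: this is not the notebook's intended solution, and it assumes the graph is given as a plain adjacency dict of sets rather than a NetworkX graph object:

```python
from itertools import combinations

def open_triangles(adj, node):
    """Return (a, node, b) triples where a and b are both neighbours
    of `node` but are not connected to each other."""
    found = []
    for a, b in combinations(sorted(adj[node]), 2):
        if b not in adj[a]:          # a-b edge missing: open triangle
            found.append((a, node, b))
    return found

# Hypothetical toy graph: edges 1-2 and 1-3 only, so (2, 1, 3) is open
adj = {1: {2, 3}, 2: {1}, 3: {1}}
```

With a NetworkX graph, `adj` can be obtained from `G.adj` or `G[node]`, but the combinations-over-neighbours idea is the same.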
4. Cliques, Triangles and Graph Structures (Student).ipynb
SubhankarGhosh/NetworkX
mit
Exercise This should allow us to find all n-sized maximal cliques. Try writing a function maximal_cliques_of_size(size, G) that implements this.
def maximal_cliques_of_size(size, G):
    return ______________________

maximal_cliques_of_size(2, G)
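A possible way to fill in the blank, sketched here without NetworkX (brute force over vertex subsets, so only suitable for small graphs; `adj`, an adjacency dict of sets, is an assumption of this sketch):

```python
from itertools import combinations

def maximal_cliques_of_size(size, adj):
    """Brute force: every `size`-vertex clique such that no outside
    vertex is adjacent to all of its members (i.e. the clique is maximal)."""
    result = []
    for nodes in combinations(sorted(adj), size):
        is_clique = all(v in adj[u] for u, v in combinations(nodes, 2))
        if not is_clique:
            continue
        extendable = any(all(u in adj[w] for u in nodes)
                         for w in adj if w not in nodes)
        if not extendable:
            result.append(set(nodes))
    return result

# Hypothetical triangle graph: every edge extends to the full triangle,
# so the only maximal clique has size 3
tri = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
pairs = maximal_cliques_of_size(2, tri)
triples = maximal_cliques_of_size(3, tri)
```

With NetworkX available, the same size filter can instead be applied to `nx.find_cliques(G)`, which yields the maximal cliques directly.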
Connected Components From Wikipedia: In graph theory, a connected component (or just component) of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph. NetworkX also implements a function that identifies c...
ccsubgraphs = list(nx.connected_component_subgraphs(G))
len(ccsubgraphs)
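Under the hood, finding connected components is just repeated breadth-first search; a minimal stdlib sketch over an assumed adjacency dict of sets (not NetworkX's implementation):

```python
from collections import deque

def connected_components(adj):
    """Return a list of vertex sets, one per connected component."""
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])   # BFS from each unseen vertex
        while queue:
            u = queue.popleft()
            if u in comp:
                continue
            comp.add(u)
            queue.extend(v for v in adj[u] if v not in comp)
        seen |= comp
        components.append(comp)
    return components

# Hypothetical graph with two components: {1, 2} and the isolated vertex {3}
adj = {1: {2}, 2: {1}, 3: set()}
comps = connected_components(adj)
```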
Exercise Play a bit with the Circos API. Can you colour the nodes by their subgraph identifier?
# Start by labelling each node in the master graph G by some number # that represents the subgraph that contains the node. for i, g in enumerate(_____________): # Fill in code below. # Then, pass in a list of nodecolors that correspond to the node order. # Feel free to change the colours around! node_cmap ...
Discussion

From the above graphs it is clear that the quality of SOD depends on the choice of $\text{snn}$. Interestingly, the optimal setting for $\text{snn}$ appears to be approximately the same as the number of anomalies in the dataset (or slightly larger). Since in practice we won't know the number of anomalies to...
# SOD running time (s) vs snn df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=19, usecols=[2,5]) fig = plt.figure(figsize=(7,3)) ax = fig.add_axes([0.1, 0.15, 0.63, 0.7]) ax.plot(df[2].values[0:5], df[5].values[0:5], label="5000 pts") ax.plot(df[2].values[7:12], df[5].values[7:1...
_notebooks/SOD vs One-class SVM.ipynb
ActivisionGameScience/blog
apache-2.0
Likewise, we need to know how running time is affected by increasing the size of the dataset. Below we plot several curves with fixed $\text{snn}$ and varying data size. It turns out that the running time grows quadratically $\text{O}(n^2)$ in each case.
# SOD running time (s) vs # datapoints df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=19, usecols=[2,5]) fig = plt.figure(figsize=(7.5,3)) ax = fig.add_axes([0.1, 0.15, 0.69, 0.7]) ax.plot([5000,10000,20000], df[5].values[0::7], label="snn=25") ax.plot([5000,10000,20000], df[5]...
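One way to sanity-check the claimed quadratic growth empirically (not part of the original notebook) is to fit the slope of log(time) versus log(n); a slope near 2 indicates $O(n^2)$. The timings below are made-up placeholders:

```python
import math

def loglog_slope(ns, times):
    """Least-squares slope of log(time) against log(n)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(t) for t in times]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Perfectly quadratic (placeholder) timings give a slope of exactly 2
slope = loglog_slope([5000, 10000, 20000], [1.0, 4.0, 16.0])
```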
Discussion

Putting these together we see that SOD running time grows like $\text{O}(n^2\cdot\text{snn})$. However, we already saw that we should scale $\text{snn}\gtrapprox(\text{no. of anomalies})$ to optimize quality. Since $(\text{no. of anomalies})\propto(\text{size of dataset }n)$, we conclude that we should scal...
import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.metrics import auc %matplotlib inline # process SVM PR curves datasets = ['ionosphere', 'shuttle', 'breast_cancer_wisconsin_diagnostic', 'satellite', 'mouse', 'kddcup99_5000', 'kddcup99_10000'] for name in datasets: if name == 'kd...
Discussion

From the graphs it is clear that robust SVM (with or without automated gamma tuning) performs much worse on most datasets. Since it seems to be unreliable, we drop it from further discussion. Now consider the effect of automated gamma tuning. Although it helps for the "ionosphere" and "mouse" datasets, re...
import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.metrics import auc %matplotlib inline # process SVM PR curves vs SOD (optimal) PR curves datasets = ['ionosphere', 'shuttle', 'breast_cancer_wisconsin_diagnostic', 'satellite', 'kddcup99_5000', 'kddcup99_10000'] for name in datasets: ...
Head-to-head time comparison: Optimal SOD vs One-Class SVM

Finally, we compare the running time of one-class SVM (automated gamma tuning off) versus SOD (where $\text{snn}$ was tuned optimally for each dataset). We already argued above that Optimal SOD running time grows as $\text{O}(n^3)$. Our experiments verify thi...
# SOD optimal running time (s) compared to one-class SVM running time df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=19, usecols=[5,12]) fig = plt.figure(figsize=(7.5,3)) ax = fig.add_axes([0.1, 0.15, 0.69, 0.7]) ax.plot([5000,10000,20000], [df[5].values[2], df[5].values[10], d...
Since deques are a type of sequence container, they support some of the same operations as list, such as examining the contents with __getitem__(), determining length, and removing elements from the middle of the queue by matching identity.

Populating

A deque can be populated from either end, termed "left" and "right" ...
import collections

# Add to the right
d1 = collections.deque()
d1.extend('abcdefg')
print('extend    :', d1)
d1.append('h')
print('append    :', d1)

# Add to the left
d2 = collections.deque()
d2.extendleft(range(6))
print('extendleft:', d2)
d2.appendleft(6)
print('appendleft:', d2)
data_structure/deque — Double-Ended Queue.ipynb
scotthuang1989/Python-3-Module-of-the-Week
apache-2.0
The extendleft() function iterates over its input and performs the equivalent of an appendleft() for each item. The end result is that the deque contains the input sequence in reverse order.

Consuming

Similarly, the elements of the deque can be consumed from both ends or either end, depending on the algorithm being app...
import collections

print('From the right:')
d = collections.deque('abcdefg')
while True:
    try:
        print(d.pop(), end='')
    except IndexError:
        break
print()

print('\nFrom the left:')
d = collections.deque(range(6))
while True:
    try:
        print(d.popleft(), end='')
    except IndexError:
        break
print()
Use pop() to remove an item from the “right” end of the deque and popleft() to take an item from the “left” end. Since deques are thread-safe, the contents can even be consumed from both ends at the same time from separate threads.
import collections import threading import time candle = collections.deque(range(5)) def burn(direction, nextSource): while True: try: next = nextSource() except IndexError: break else: print('{:>8}: {}'.format(direction, next)) time.sleep(0...
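A self-contained, non-printing variant of the same idea (a simplified sketch of the notebook's burn() example, without the output and sleeps): two threads consume one deque from opposite ends, relying on the atomicity of pop() and popleft():

```python
import collections
import threading

candle = collections.deque(range(5))
left, right = [], []

def burn(source, sink):
    # deque.pop()/popleft() are atomic, so two threads can safely
    # consume from opposite ends of the same deque without a lock
    while True:
        try:
            sink.append(source())
        except IndexError:   # deque exhausted
            break

t1 = threading.Thread(target=burn, args=(candle.pop, right))
t2 = threading.Thread(target=burn, args=(candle.popleft, left))
t1.start(); t2.start()
t1.join(); t2.join()
```

Every item is consumed exactly once; which thread gets which items depends on scheduling.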
Rotating (think of it as a belt)

Another useful aspect of the deque is the ability to rotate it in either direction, so as to skip over some items.
import collections

d = collections.deque(range(10))
print('Normal        :', d)

d = collections.deque(range(10))
d.rotate(2)
print('Right rotation:', d)

d = collections.deque(range(10))
d.rotate(-2)
print('Left rotation :', d)
Constraining the Queue Size A deque instance can be configured with a maximum length so that it never grows beyond that size. When the queue reaches the specified length, existing items are discarded as new items are added. This behavior is useful for finding the last n items in a stream of undetermined length.
import collections import random # Set the random seed so we see the same output each time # the script is run. random.seed(1) d1 = collections.deque(maxlen=3) d2 = collections.deque(maxlen=3) for i in range(5): n = random.randint(0, 100) print('n =', n) d1.append(n) d2.appendleft(n) print('D1:',...
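The "last n items of a stream" use case can be sketched as a running mean over the three most recent readings (a hypothetical example, not from the notebook):

```python
import collections

window = collections.deque(maxlen=3)   # keeps only the 3 newest readings
means = []
for reading in [10, 20, 30, 40, 50]:
    window.append(reading)             # oldest item is dropped automatically
    means.append(sum(window) / len(window))
```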
Thus, the resonance frequency is approximately $f_p = 6.9~\text{kHz}$ and does not depend on the resistance. This disagrees with the expected value; most likely, our coil's inductance is higher or lower than 100 mH. The quality factor at $R = 1~\Omega$ is approximately $Q \approx \frac{4.667}{7.6-6.3} \approx 5.31$, and at $R = 500~\Omega$ it is...
nII = pandas.read_excel('lab-3-3.xlsx', 'tab-2', header=None) nII.head() import numpy f = nII.values x = f[0, 1:] y = f[2, 1:] l = numpy.mean(x * y) / numpy.mean(x ** 2) dl = ((numpy.mean(x ** 2) * numpy.mean(y ** 2) - (numpy.mean(x * y) ** 2)) / (len(x) * (numpy.mean(x ** 2) ** 2))) ** 0.5 fff = numpy.linspace(0, 10,...
labs/term-5/lab-3-3.ipynb
eshlykov/mipt-day-after-day
unlicense
In this equation: $\epsilon$ is the single particle energy. $\mu$ is the chemical potential, which is related to the total number of particles. $k$ is the Boltzmann constant. $T$ is the temperature in Kelvin. In the cell below, typeset this equation using LaTeX: $\large F(\epsilon) = {\Large \frac{1}{e^{(\epsilon-\mu...
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    F = 1/(np.exp((energy-mu)/kT)+1)
    return F

assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0), np.array([ 0.52497919, 0.5222076 , 0...
assignments/midterm/InteractEx06.ipynb
rsterbentz/phys202-2015-work
mit
Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT. Use energies over the range $[0,10.0]$ and a suitable number of points. Choose an appropriate x and y limit for your visualization. Label your x and y axis and...
def plot_fermidist(mu, kT):
    energy = np.linspace(0,10.0,21)
    plt.plot(energy, fermidist(energy, mu, kT))
    plt.tick_params(direction='out')
    plt.xlabel('$Energy$')
    plt.ylabel('$F(Energy)$')
    plt.title('Fermi Distribution')

plot_fermidist(4.0, 1.0)

assert True # leave this for grading the plot_fermi...
Probability Distribution

Let us begin by specifying discrete probability distributions. The class ProbDist defines a discrete probability distribution. We name our random variable and then assign probabilities to its different values. Assigning probabilities to the values works similarly to that of...
%psource ProbDist

p = ProbDist('Flip')
p['H'], p['T'] = 0.25, 0.75
p['T']
probability.ipynb
SnShine/aima-python
mit
The distribution by default is not normalized if values are added incrementally. We can still force normalization by invoking the normalize method.
p = ProbDist('Y')
p['Cat'] = 50
p['Dog'] = 114
p['Mice'] = 64
(p['Cat'], p['Dog'], p['Mice'])

p.normalize()
(p['Cat'], p['Dog'], p['Mice'])
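The real class lives in the repo's probability.py (shown by %psource above); a minimal stand-in illustrating the same normalize behaviour might look like this (MiniProbDist is a hypothetical simplification, not the library class):

```python
class MiniProbDist:
    """Toy stand-in for aima-python's ProbDist (assumed simplification)."""
    def __init__(self, varname='?'):
        self.varname = varname
        self.prob = {}

    def __getitem__(self, val):
        return self.prob[val]

    def __setitem__(self, val, p):
        self.prob[val] = p

    def normalize(self):
        # Divide every entry by the total so the values sum to 1
        total = sum(self.prob.values())
        for val in self.prob:
            self.prob[val] /= total
        return self

p = MiniProbDist('Y')
p['Cat'], p['Dog'], p['Mice'] = 50, 114, 64
p.normalize()
```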
A probability model is completely determined by the joint distribution for all of the random variables. (Section 13.3) The probability module implements these as the class JointProbDist which inherits from the ProbDist class. This class specifies a discrete probability distribution over a set of variables.
%psource JointProbDist
Inference Using Full Joint Distributions

In this section we use Full Joint Distributions to calculate the posterior distribution given some evidence. We represent evidence by using a python dictionary with variables as dict keys and dict values representing the values. This is illustrated in Section 13.3 of the book. T...
full_joint = JointProbDist(['Cavity', 'Toothache', 'Catch'])
full_joint[dict(Cavity=True, Toothache=True, Catch=True)] = 0.108
full_joint[dict(Cavity=True, Toothache=True, Catch=False)] = 0.012
full_joint[dict(Cavity=True, Toothache=False, Catch=True)] = 0.016
full_joint[dict(Cavity=True, Toothache=False, Catch=False)]...
Let us now look at the enumerate_joint function, which returns the sum of those entries in P consistent with e, provided variables is P's remaining variables (the ones not in e). Here, P refers to the full joint distribution. The function uses a recursive call in its implementation. The first parameter variables refers to rema...
%psource enumerate_joint
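A dictionary-based sketch of the same computation (not the library's enumerate_joint, which works on JointProbDist objects; here the joint is a plain dict from (Cavity, Toothache, Catch) value tuples to probabilities, filled with the standard dentist-domain numbers):

```python
from itertools import product

def enumerate_joint_dict(joint, variables, e):
    """Sum the joint entries consistent with evidence e,
    summing out the remaining `variables` (all boolean here)."""
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        row = dict(e)
        row.update(zip(variables, values))
        key = (row['Cavity'], row['Toothache'], row['Catch'])
        total += joint.get(key, 0.0)
    return total

# (Cavity, Toothache, Catch) -> probability; entries sum to 1
joint = {
    (True, True, True): 0.108,   (True, True, False): 0.012,
    (True, False, True): 0.072,  (True, False, False): 0.008,
    (False, True, True): 0.016,  (False, True, False): 0.064,
    (False, False, True): 0.144, (False, False, False): 0.576,
}

# P(Toothache=True): sum out Cavity and Catch
p_toothache = enumerate_joint_dict(joint, ['Cavity', 'Catch'], {'Toothache': True})
```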
We might be interested in the probability distribution of a particular variable conditioned on some evidence. This can involve doing calculations like above for each possible value of the variable. This has been implemented slightly differently using normalization in the function enumerate_joint_ask which returns a pr...
%psource enumerate_joint_ask
You can verify that the first value is the same as we obtained earlier by manual calculation.

Bayesian Networks

A Bayesian network is a representation of the joint probability distribution encoding a collection of conditional independence statements. A Bayes Network is implemented as the class BayesNet. It consists of...
%psource BayesNode
It is possible to avoid using a tuple when there is only a single parent. So an alternative format for the cpt is
john_node = BayesNode('JohnCalls', ['Alarm'], {True: 0.90, False: 0.05})
# Using a string for parents; equivalent to the john_node definition.
mary_node = BayesNode('MaryCalls', 'Alarm', {(True, ): 0.70, (False, ): 0.01})
With all the information about nodes present it is possible to construct a Bayes Network using BayesNet. The BayesNet class does not take in nodes as input but instead takes a list of node_specs. An entry in node_specs is a tuple of the parameters we use to construct a BayesNode namely (X, parents, cpt). node_specs mus...
%psource BayesNet
Exact Inference in Bayesian Networks

A Bayes Network is a more compact representation of the full joint distribution and, like full joint distributions, allows us to do inference, i.e. answer questions about probability distributions of random variables given some evidence. Exact algorithms don't scale well for larger net...
%psource enumerate_all
enumerate_all recursively evaluates a general form of Equation 14.4 in the book. $$\textbf{P}(X | \textbf{e}) = α \textbf{P}(X, \textbf{e}) = α \sum_{y} \textbf{P}(X, \textbf{e}, \textbf{y})$$ such that P(X, e, y) is written in the form of a product of conditional probabilities P(variable | parents(variable)) from ...
%psource enumeration_ask
Variable Elimination

The enumeration algorithm can be improved substantially by eliminating repeated calculations. In enumeration we form the joint over all hidden variables, which is exponential in the number of hidden variables. Variable elimination instead interleaves joining and marginalization. Before we look...
%psource make_factor
make_factor is used to create the cpt and variables that will be passed to the constructor of Factor. We use make_factor for each variable. It takes in the arguments var (the particular variable), e (the evidence we want to do inference on) and bn (the Bayes network). Here, variables for each node refers to a list consisting of ...
%psource all_events
Here the cpt is for P(MaryCalls | Alarm = True). Therefore the probabilities for True and False sum up to one. Note the difference between the two cases. Again, the only rows included are those consistent with the evidence.

Operations on Factors

We are interested in two kinds of operations on factors. Pointwise Product...
%psource Factor.pointwise_product
Factor.pointwise_product implements a method of creating a joint via combining two factors. We take the union of the variables of both factors and then generate the cpt for the new factor using the all_events function. Note that we have already eliminated rows that are not consistent with the evidence. Pointwise product ...
%psource pointwise_product
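As a sketch of the same operation outside the library (a hypothetical dict-based factor representation with boolean variables only; not aima-python's Factor class):

```python
from itertools import product

def pointwise(f_vars, f, g_vars, g):
    """Multiply factors f and g over the union of their variables."""
    all_vars = list(dict.fromkeys(f_vars + g_vars))   # ordered union
    table = {}
    for values in product([True, False], repeat=len(all_vars)):
        row = dict(zip(all_vars, values))
        fk = tuple(row[v] for v in f_vars)   # project row onto f's variables
        gk = tuple(row[v] for v in g_vars)   # project row onto g's variables
        table[values] = f[fk] * g[gk]
    return all_vars, table

# Hypothetical tables: f(A) times g(A, B) gives h(A, B)
f = {(True,): 0.3, (False,): 0.7}
g = {(True, True): 0.9, (True, False): 0.1,
     (False, True): 0.2, (False, False): 0.8}
h_vars, h = pointwise(['A'], f, ['A', 'B'], g)
```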
pointwise_product extends this operation to more than two operands where it is done sequentially in pairs of two.
%psource Factor.sum_out
Factor.sum_out makes a factor eliminating a variable by summing over its values. Again, all_events is used to generate combinations for the rest of the variables.
%psource sum_out
sum_out uses both Factor.sum_out and pointwise_product to finally eliminate a particular variable from all factors by summing over its values.

Elimination Ask

The algorithm described in Figure 14.11 of the book is implemented by the function elimination_ask. We use this for inference. The key idea is that we eliminate ...
%psource elimination_ask

elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
Approximate Inference in Bayesian Networks

Exact inference fails to scale for very large and complex Bayesian Networks. This section covers implementation of randomized sampling algorithms, also called Monte Carlo algorithms.
%psource BayesNode.sample
Before we consider the different algorithms in this section, let us look at the BayesNode.sample method. It samples from the distribution for this variable conditioned on the event's values for parent_variables. That is, it returns True/False at random according to the conditional probability given the parents. The probabi...
%psource prior_sample
The function prior_sample implements the algorithm described in Figure 14.13 of the book. Nodes are sampled in the topological order. The old value of the event is passed as evidence for parent values. We will use the Bayesian Network in Figure 14.12 to try out the prior_sample <img src="files/images/sprinklernet.jpg" ...
N = 1000
all_observations = [prior_sample(sprinkler) for x in range(N)]
Rejection Sampling

Rejection Sampling is based on an idea similar to what we did just now. First, it generates samples from the prior distribution specified by the network. Then, it rejects all those that do not match the evidence. The function rejection_sampling implements the algorithm described by Figure 14.14.
%psource rejection_sampling
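A toy version of the scheme on a hypothetical two-variable network (the CPT numbers are made up; the real rejection_sampling works on a BayesNet object):

```python
import random

random.seed(0)

def sample_rain_wet():
    # Made-up CPTs: P(Rain)=0.3; P(Wet|Rain)=0.9, P(Wet|~Rain)=0.1
    rain = random.random() < 0.3
    wet = random.random() < (0.9 if rain else 0.1)
    return rain, wet

def rejection_sample_p_rain_given_wet(n):
    # Keep only samples matching the evidence Wet=True, then count Rain=True
    kept = [rain for rain, wet in (sample_rain_wet() for _ in range(n)) if wet]
    return sum(kept) / len(kept)

estimate = rejection_sample_p_rain_given_wet(20000)
```

By Bayes' rule the exact posterior is 0.27 / 0.34 ≈ 0.794, so the estimate should land close to that.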
The function keeps counts of each of the possible values of the Query variable and increases the count when we see an observation consistent with the evidence. It takes in input parameters X - The Query Variable, e - evidence, bn - Bayes net and N - number of prior samples to generate. consistent_with is used to check ...
%psource consistent_with
Likelihood Weighting

Rejection sampling tends to reject a lot of samples if our evidence consists of a large number of variables. Likelihood Weighting solves this by fixing the evidence (i.e. not sampling it) and then using weights to make sure that our overall sampling is still consistent. The pseudocode in Figure 14....
%psource weighted_sample
weighted_sample samples an event from Bayesian Network that's consistent with the evidence e and returns the event and its weight, the likelihood that the event accords to the evidence. It takes in two parameters bn the Bayesian Network and e the evidence. The weight is obtained by multiplying P(x<sub>i</sub> | parents...
weighted_sample(sprinkler, dict(Rain=True))

%psource likelihood_weighting
Gibbs Sampling

In likelihood weighting, it is possible to obtain low weights in cases where the evidence variables reside at the bottom of the Bayesian Network. This can happen because influence only propagates downwards in likelihood weighting. Gibbs Sampling solves this. The implementation of Figure 14.16 is provided i...
%psource gibbs_ask
Create Table
%%sql
-- Create a table of criminals_1
CREATE TABLE criminals_1 (pid, name, age, sex, city, minor);
INSERT INTO criminals_1 VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals_1 VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (632, 'Stacy Miller', 23, 'F', ...
sql/copy_data_between_tables.ipynb
tpin3694/tpin3694.github.io
mit
View Table
%%sql
-- Select all
SELECT *
-- From the table 'criminals_1'
FROM criminals_1
Create New Empty Table
%%sql
-- Create a table called criminals_2
CREATE TABLE criminals_2 (pid, name, age, sex, city, minor);
Copy Contents Of First Table Into Empty Table
%%sql
-- Insert into the empty table
INSERT INTO criminals_2
-- Everything
SELECT *
-- From the first table
FROM criminals_1;
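The same INSERT INTO ... SELECT pattern can be reproduced outside the %%sql magic with Python's stdlib sqlite3, as a sketch (table and column names mirror the cells above; only two of the rows are repeated here):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE criminals_1 (pid, name, age, sex, city, minor)")
con.execute("INSERT INTO criminals_1 VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1)")
con.execute("INSERT INTO criminals_1 VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0)")

# Empty copy of the schema, then bulk-copy every row across in one statement
con.execute("CREATE TABLE criminals_2 (pid, name, age, sex, city, minor)")
con.execute("INSERT INTO criminals_2 SELECT * FROM criminals_1")

rows = con.execute("SELECT * FROM criminals_2").fetchall()
```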
View Previously Empty Table
%%sql
-- Select everything
SELECT *
-- From the previously empty table
FROM criminals_2
Mental note: Inkscape. Mental note 2: Heatmap. Represent one quantitative variable as color, and two qualitative (or binned quantitative) variables on the sides.
data = pd.read_csv("data/random.csv",sep="\t",index_col=0)*100 data.head() import seaborn as sns sns.heatmap? ax = sns.heatmap(data,cbar_kws={"label":"Body temperature"},cmap="YlOrRd") ax.invert_yaxis() plt.ylabel("Pizzas eaten") plt.xlabel("Outside temperature") plt.show() sns.heatmap(data,cbar_kws={"label":"Body t...
class4/class4b_inclass.ipynb
jgarciab/wwd2017
gpl-3.0
Conclusion: pizzas make you nice and warm. Lesson of the day: eat more pizza.

1. In-class exercises

1.1 Read the data from the world bank (inside the class3 folder, then folder data, subfolder world_bank), and save it with the name df
#Read data and print the head to see how it looks like df = pd.read_csv("../class3/data/world_bank/data.csv",na_values="..") df.head() #We could fix the column names with: df.columns = ["Country Name","Country Code","Series Name","Series Code",1967,1968,1969,...] ## 4.1b Fix the year of the column (make it numbers) d...
4.2 Fix the format and save it with the name df_fixed. Remember, this was the code that we used to fix the file:

### Fix step 1: Melt
variables_already_presents = ['METRO_ID', 'Metropolitan areas','VAR']
columns_combine = cols
df = pd.melt(df, id_vars=variables_already_presents, ...
### Fix step 1: Melt
cols = list(df.columns)
variables_already_presents = cols[:4]
columns_combine = cols[4:]
df_1 = pd.melt(df, id_vars=variables_already_presents,
               value_vars=columns_combine,
               var_name="Year",
               value_name="Value")
df_1.head()

### Fix step 2: Pivot
column_wit...
4.3 Create two dataframes with names df_NL and df_CO: the first with the data for the Netherlands, the second with the data for Colombia.
#code df_NL = df_CO =
4.4 Concatenate/Merge (whichever is appropriate) the two dataframes. 4.5 Create two dataframes with names df_pri and df_pu. The first with the data for all rows and the columns "country", "year" and indicator "SH.XPD.PRIV.ZS" (expenditure in health care as %GDP); the second with the data for all rows and columns "country", "yea...
df_pri = df_pu =
4.6 Concatenate/Merge (whichever is appropriate) the two dataframes (how = "outer"). 4.7 Group the last dataframe (step 4.6) by country code and describe; if you don't remember, check class3c_groupby.ipynb. 4.8 Group the last dataframe (step 4.6) by country code and find the skewness. A skewness value > 0 means that there is...
import scipy.stats #you need to import scipy.stats
Periodic boundary conditions
def periodic(i, limit, add):
    """
    Choose correct matrix index with periodic boundary conditions

    Input:
    - i: Base index
    - limit: Highest "legal" index
    - add: Number to add or subtract from i
    """
    return (i + limit + add) % limit
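As a quick illustration of the wrap-around behaviour (the function is repeated so the snippet is self-contained):

```python
def periodic(i, limit, add):
    # Same modular-arithmetic trick as above: adding `limit` before
    # taking the modulus keeps the result non-negative for add = -1
    return (i + limit + add) % limit

left_of_first = periodic(0, 5, -1)   # wraps to the far edge of a 5-wide lattice
right_of_last = periodic(4, 5, 1)    # wraps back around to index 0
```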
Cpp/Ising/ising.ipynb
ernestyalumni/CompPhys
apache-2.0
Set up spin matrix, initialize to ground state
size = 256   # L_x
temp = 10.   # temperature T
spin_matrix = np.zeros( (size,size), np.int8) + 1
spin_matrix
Create and initialize variables
E = M = 0
E_av = E2_av = M_av = M2_av = Mabs_av = 0
Setup array for possible energy changes
w = np.zeros(17, np.float64)
for de in xrange(-8,9,4):
    print de
    w[de+8] = math.exp(-de/temp)
print w
Calculate initial magnetization
M = spin_matrix.sum()
print M
Calculate initial energy
# for i in xrange(16): print i
# range creates a list, so range(1, 10000000) creates a list in memory with 9999999 elements.
# xrange is a sequence object that evaluates lazily.
for j in xrange(size):
    for i in xrange(size):
        E -= spin_matrix.item(i,j) * (spin_matrix.item(periodic(i,size,-1)...
Metropolis MonteCarlo computation, 1 single step or iteration, done explicitly:
x = int(np.random.random()*size)
print(x)
y = int(np.random.random()*size)
print(y)
# Energy change from flipping the randomly chosen spin (x, y)
deltaE = 2*spin_matrix.item(x,y) * \
    (spin_matrix.item(periodic(x,size,-1),y) + spin_matrix.item(periodic(x,size,1),y) + \
     spin_matrix.item(x,periodic(y,size,-1)) + spin_matrix.item(x,periodic(y,size,1)))
print(deltaE)
Accept (if True)!
print( spin_matrix[x,y] )
print( spin_matrix.item(x,y) )
spin_matrix[x,y] *= -1
M += 2*spin_matrix[x,y]
E += deltaE
print(spin_matrix.item(x,y))
print(M)
print(E)

import pygame
Initialize (all spins up), explicitly shown
Lx=256; Ly=256
spin_matrix = np.zeros((Lx,Ly),np.int8)
print(spin_matrix.shape)
spin_matrix.fill(1)
spin_matrix

def initialize_allup( spin_matrix, J=1.0 ):
    Lx,Ly = spin_matrix.shape
    spin_matrix.fill(1)
    M = spin_matrix.sum()
    # Calculate initial energy
    E = 0
    for j in xrange(Ly):
        for...
Setup array for possible energy changes
temp = 1.0
w = np.zeros(17,np.float32)
for de in xrange(-8,9,4):   # include +8
    w[de+8] = math.exp(-de/temp)
print(w)
Importing from the script ising2dim.py
import os
import sys

print(os.getcwd())
print(os.listdir( os.getcwd() ))
sys.path.append('./')
Reading out data from ./IsingGPU/FileIO/output.h Data is generated by the parallel Metropolis algorithm in CUDA C++ in the subdirectory ./IsingGPU/data/, which is done by the function process_avgs in ./IsingGPU/FileIO/output.h. The values are saved as a character array, which then can be read in as a NumPy array of fl...
avgsresults_GPU = np.fromfile("./IsingGPU/data/IsingMetroGPU.bin",dtype=np.float32) print(avgsresults_GPU.shape) print(avgsresults_GPU.size) avgsresults_GPU = avgsresults_GPU.reshape(201,7) # 7 different averages print(avgsresults_GPU.shape) print(avgsresults_GPU.size) avgsresults_GPU import matplotlib.pyplot as pl...
For a 2^10 x 2^10 (1024 x 1024) grid: 50000 trials; temperature T = 1.0, 1.005, ..., 3.0 (temperature step of 0.005), i.e. 400 different temperatures; 32 x 32 thread blocks.
avgsresults_GPU = np.fromfile("./IsingGPU/data/IsingMetroGPU.bin",dtype=np.float32) print(avgsresults_GPU.shape) print(avgsresults_GPU.size) avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size/7 ,7) # 7 different averages print(avgsresults_GPU.shape) print(avgsresults_GPU.size) fig = plt.figure() ax = fi...
From drafts
avgsresults_GPU = np.fromfile("./IsingGPU/drafts/data/IsingMetroGPU.bin",dtype=np.float32) print(avgsresults_GPU.shape) print(avgsresults_GPU.size) avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size/7 ,7) # 7 different averages print(avgsresults_GPU.shape) print(avgsresults_GPU.size) fig = plt.figure() ...
From CLaigit
avgsresults_GPU = [] for temp in range(10,31,2): avgsresults_GPU.append( np.fromfile("./data/ising2d_CLaigit" + str(temp) + ".bin",dtype=np.float64) ) avgsresults_GPU = np.array( avgsresults_GPU) print( avgsresults_GPU.shape, avgsresults_GPU.size) fig = plt.figure() ax = fig.add_subplot(1,1,1) T = avgsresults_...
Creating cells

To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created. To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart bu...
class_name = "Nishanth Koganti"
message = class_name + " is awesome!"
message
startup.ipynb
buntyke/DataAnalysis
mit
Here, we follow the reasoning presented by Webster (1904) for analyzing the ellipsoidal coordinate $\lambda$ describing an oblate ellipsoid. Let's consider an ellipsoid with semi-axes $a$, $b$, $c$ oriented along the $x$-, $y$-, and $z$-axis, respectively, where $0 < a < b = c$. This ellipsoid is defined by the followi...
a = 11.
b = 20.
x = 21.
y = 23.
z = 30.
code/lambda_oblate_ellipsoids.ipynb
pinga-lab/magnetic-ellipsoid
bsd-3-clause
Next, we define a set of values for the variable $\rho$ in an interval $\left[ \rho_{min} \, , \rho_{max} \right]$ and evaluate the quadratic equation $f(\rho)$ (equation 4).
rho_min = -b**2 - 500.
rho_max = -a**2 + 2500.
rho = np.linspace(rho_min, rho_max, 100)
f = p2*(rho**2) + p1*rho + p0
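Given the coefficients p2, p1, p0 of $f(\rho)$, the ellipsoidal coordinate $\lambda$ is conventionally taken as the largest real root. A sketch via the quadratic formula; the coefficients below are placeholders, not the notebook's values:

```python
import math

def largest_root(p2, p1, p0):
    """Largest real root of p2*rho**2 + p1*rho + p0 = 0 (assumes p2 > 0
    and a non-negative discriminant, as for the oblate-ellipsoid case)."""
    disc = p1 * p1 - 4.0 * p2 * p0
    return (-p1 + math.sqrt(disc)) / (2.0 * p2)

# Placeholder coefficients with roots -1 and 3: (rho + 1)(rho - 3)
lam = largest_root(1.0, -2.0, -3.0)
```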
Finally, the cell below shows the quadratic equation $f(\rho)$ (equation 4) evaluated in the range $\left[ \rho_{min} \, , \rho_{max} \right]$ defined above.
ymin = np.min(f) - 0.1*(np.max(f) - np.min(f)) ymax = np.max(f) + 0.1*(np.max(f) - np.min(f)) plt.close('all') plt.figure(figsize=(10,4)) plt.subplot(1,2,1) plt.plot([rho_min, rho_max], [0., 0.], 'k-') plt.plot([-a**2, -a**2], [ymin, ymax], 'r--', label = '$-a^{2}$') plt.plot([-b**2, -b**2], [ymin, ymax], 'g--', labe...
Plotting according to Andrew Hansen's thesis, equations (2.15) and (2.16): $TEC = (P_2-P_1)/(f_1^2/f_2^2 - 1)$ and $TEC = -(L_2-L_1)/(f_1^2/f_2^2 - 1)$; theoretically they should be the same.
# satellites in the file: data.items
# parameters in the file:
# https://igscb.jpl.nasa.gov/igscb/data/format/rinex211.txt
# section 10.1.1 says what the letters mean
data.major_axis

f1 = 1575.42  # MHz
f2 = 1227.6   # MHz
f5 = 1176.45  # MHz
sv_of_interest = 27

fig = plt.figure(figsize=(10,10))
ax1 = plt.subplot(212)
fmt = ...
Examples/.ipynb_checkpoints/ReadRinex Demo-checkpoint.ipynb
gregstarr/PyGPS
agpl-3.0
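The two thesis expressions quoted above can be sketched as plain functions. Note this shows only the relative form from equations (2.15) and (2.16); the absolute scale factors (and units) needed for a physical TEC value are omitted, and the function names are my own.

```python
# Hedged sketch of the two TEC expressions (Hansen eqs. 2.15 and 2.16),
# up to an overall scale factor.
f1 = 1575.42  # MHz, GPS L1
f2 = 1227.60  # MHz, GPS L2

def tec_from_pseudorange(P1, P2):
    # TEC proportional to (P2 - P1) / (f1**2/f2**2 - 1)
    return (P2 - P1) / (f1**2 / f2**2 - 1.0)

def tec_from_phase(L1, L2):
    # TEC proportional to -(L2 - L1) / (f1**2/f2**2 - 1)
    return -(L2 - L1) / (f1**2 / f2**2 - 1.0)
```

Because both share the denominator $f_1^2/f_2^2 - 1$, a pseudorange increase of $+\Delta$ and a phase decrease of $-\Delta$ give identical TEC, which is the consistency the text expects.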
So the TEC is off by a large factor on the pseudorange graph. I'm not sure where that's coming from right now; I followed the equation from the thesis. The difference in pseudorange between the frequencies is very small. Is that how it's supposed to be, and does it need to be multiplied by a constant? Or is the data off? The...
fig2 = plt.figure(figsize=(10,10))
ax = plt.subplot()
ax.xaxis.set_major_formatter(fmt)
ax.autoscale_view()
plt.xlabel('time')
plt.ylabel('pseudorange (m)')
plt.title('comparison of C2 and P2')
plt.plot(data[:,sv_of_interest,'P2','data'])
plt.plot(data[:,sv_of_interest,'C2','data'])
plt.show()
Examples/.ipynb_checkpoints/ReadRinex Demo-checkpoint.ipynb
gregstarr/PyGPS
agpl-3.0
Create a linear stream of 10 million points between -50 and 50.
x = np.arange(-50, 50, 0.00001)
x.shape
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
Create random noise of the same dimension
bias = np.random.standard_normal(x.shape)
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
Define the function
y2 = np.cos(x)**3 * (x**2/max(x)) + bias*5
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
Train test split
x_train, x_test, y_train, y_test = train_test_split(x, y2, test_size=0.3)
x_train.shape
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
Plotting libraries cannot handle millions of points, so we downsample just for plotting
stepper = int(x_train.shape[0]/1000)
stepper
fig, ax = plt.subplots(1, 1, figsize=(13,8))
ax.scatter(x[::stepper], y2[::stepper], marker='d')
ax.set_title('Distribution of training points')
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
Curve fitting Let us define a function that will try to fit against the training data. It starts with a low order and sequentially increases the complexity of the model. The hope is that somewhere in this range lies the sweet spot of low bias and low variance; we will find it empirically.
def greedy_fitter(x_train, y_train, x_test, y_test, max_order=25):
    """Fitter will try to find the best order of polynomial curve fit
    for the given synthetic data"""
    import time
    train_predictions = []
    train_rmse = []
    test_predictions = []
    test_rmse = []
    for order in range(1, max_ord...
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
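Since the `greedy_fitter` listing above is truncated, here is a minimal self-contained sketch of the same loop, assuming the fitter uses `numpy.polyfit`; the simplified name and return values are my own, not the notebook's.

```python
import numpy as np

# Hedged sketch of the order-sweeping loop: fit polynomials of increasing
# order and record train/test RMSE for each.
def simple_greedy_fitter(x_train, y_train, x_test, y_test, max_order=5):
    train_rmse, test_rmse = [], []
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(x_train, y_train, order)
        train_pred = np.polyval(coeffs, x_train)
        test_pred = np.polyval(coeffs, x_test)
        train_rmse.append(np.sqrt(np.mean((train_pred - y_train) ** 2)))
        test_rmse.append(np.sqrt(np.mean((test_pred - y_test) ** 2)))
    return train_rmse, test_rmse
```

On data with quadratic structure, the test RMSE should drop sharply from order 1 to order 2 and then flatten, which is the bias/variance signature the section is after.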
Run the model. Change the max_order to higher or lower if you wish
%%time
complexity = 50
train_predictions, train_rmse, test_predictions, test_rmse = greedy_fitter(
    x_train, y_train, x_test, y_test, max_order=complexity)
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
Plot results How well did the models fit against training data? Training results
%%time
fig, axes = plt.subplots(1, 1, figsize=(15,15))
axes.scatter(x_train[::stepper], y_train[::stepper], label='Original data', color='gray', marker='x')
order = 1
for p, r in zip(train_predictions, train_rmse):
    axes.scatter(x_train[:stepper], p[:stepper], label='O: ' + str(order) + ...
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
Test results
%%time
fig, axes = plt.subplots(1, 1, figsize=(15,15))
axes.scatter(x_test[::stepper], y_test[::stepper], label='Test data', color='gray', marker='x')
order = 1
for p, r in zip(test_predictions, test_rmse):
    axes.scatter(x_test[:stepper], p[:stepper], label='O: ' + str(order) + " RMSE: "...
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
Bias vs Variance
ax = plt.plot(np.arange(1, complexity+1), test_rmse)
plt.title('Bias vs Complexity')
plt.xlabel('Order of polynomial')
plt.ylabel('Test RMSE')
ax[0].axes.get_yaxis().get_major_formatter().set_useOffset(False)
plt.savefig('Model efficiency.png')
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
AtmaMani/pyChakras
mit
Playing with the Convenience Functions First, we're going to see how we can access ARF and RMF from the convenience functions. Let's set up a data set:
import sherpa.astro.ui
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
Load in the data with the convenience function:
sherpa.astro.ui.load_data("../data/Chandra/js_spec_HI1_IC10X1_5asB1_jsgrp.pi")
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
If there is a grouping, get rid of it, because we don't like groupings (except for Mike Nowak).
sherpa.astro.ui.ungroup()
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
This method gets the data and stores it in an object:
d = sherpa.astro.ui.get_data()
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
In case we need them for something, this is how we get ARF and RMF objects:
arf = d.get_arf()
rmf = d.get_rmf()
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
Next, we'd like to play around with a model. Let's set this up based on the XSPEC model I got from Jack:
sherpa.astro.ui.set_xsabund("angr")
sherpa.astro.ui.set_xsxsect("bcmc")
sherpa.astro.ui.set_xscosmo(70, 0, 0.73)
sherpa.astro.ui.set_xsxset("delta", "0.01")
sherpa.astro.ui.set_model("xstbabs.a1*xsdiskbb.a2")
print(sherpa.astro.ui.get_model())
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
We can get the fully specified model and store it in an object like this:
m = sherpa.astro.ui.get_model()
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
Here's how you can set parameters. Note that this changes the state of the object (boo!)
sherpa.astro.ui.set_par(a1.nH,0.01)
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
Actually, we'd like to change the state of the object directly rather than using the convenience function, which works like this:
m._set_thawed_pars([0.01, 2, 0.01])
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
Now we're ready to evaluate the model and apply RMF/ARF to it. This is actually a method on the data object, not the model object. It returns an array:
model_counts = d.eval_model(m)
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
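Conceptually, what `eval_model` does can be illustrated with a toy numpy calculation: multiply the source spectrum by the effective area (ARF), then redistribute the result through the response matrix (RMF). The 3-bin arrays below are made up purely for demonstration; the real ARF/RMF have hundreds of channels and are applied by Sherpa internally.

```python
import numpy as np

# Toy illustration (not Sherpa's actual implementation) of folding a model
# spectrum through an ARF and RMF to get predicted detector counts.
source_flux = np.array([1.0, 2.0, 0.5])       # model photons per energy bin
arf = np.array([100.0, 120.0, 90.0])          # effective area per bin (cm^2)
rmf = np.array([[0.8, 0.2, 0.0],              # row i: true bin i -> channels
                [0.1, 0.8, 0.1],
                [0.0, 0.2, 0.8]])

# predicted counts per detector channel
model_counts = (source_flux * arf) @ rmf
```

Because each RMF row sums to 1, the total predicted counts equal the total effective-area-weighted flux; only the distribution across channels changes.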
Let's plot the results:
plt.figure()
plt.plot(rmf.e_min, d.counts)
plt.plot(rmf.e_min, model_counts)
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
Let's set the model parameters to the fit results from XSPEC:
m._set_thawed_pars([0.313999, 1.14635, 0.0780871])
model_counts = d.eval_model(m)
plt.figure()
plt.plot(rmf.e_min, d.counts)
plt.plot(rmf.e_min, model_counts, lw=3)
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
MCMC by hand Just for fun, we're going to use emcee directly to sample from this model. Let's first define a posterior object:
from scipy.special import gamma as scipy_gamma
from scipy.special import gammaln as scipy_gammaln

logmin = -100000000.0

class PoissonPosterior(object):
    def __init__(self, d, m):
        self.data = d
        self.model = m
        return

    def loglikelihood(self, pars, neg=False):
        sel...
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
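The notebook samples this posterior with emcee; as a self-contained illustration of the "MCMC by hand" idea, here is a toy Metropolis loop on a standalone Gaussian log-posterior (not the Sherpa model above). All names here are illustrative.

```python
import numpy as np

# Toy Metropolis sampler: propose a Gaussian step, accept with probability
# min(1, exp(lp_prop - lp)). Targets a 2D standard normal for demonstration.
def log_post(theta):
    return -0.5 * np.sum(theta ** 2)  # standard normal, up to a constant

def metropolis(log_post, theta0, nsteps=5000, step=0.5, seed=42):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((nsteps, theta.size))
    for i in range(nsteps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(log_post, [3.0, -3.0])
```

After discarding burn-in, the chain's sample mean and standard deviation should approach 0 and 1, the moments of the target distribution; emcee automates exactly this kind of loop with an ensemble of walkers.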
Now we can define a posterior object with the data and model objects:
lpost = PoissonPosterior(d, m)
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
Can we compute the posterior probability of some parameters?
print(lpost([0.1, 0.1, 0.1]))
print(lpost([0.313999, 1.14635, 0.0780871]))
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0