Friend Recommendation: Open Triangles
Now that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles.
Open triangles are the ones we described earlier on - A knows B and B knows C, but an edge between C and A isn't captured in the graph.
What are the two general scenarios for finding open triangles that a given node is involved in?
The given node is the centre node.
The given node is one of the termini nodes.
Exercise
Can you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one?
Note: For this exercise, only consider the case when the node of interest is the centre node.
Possible Implementation: Check every pair of my neighbors, and if they are not connected to one another, then we are in an open triangle relationship.
|
# Fill in your code here.
import itertools

def get_open_triangles(G, node):
    """
    There are many ways to represent this. One may choose to represent only the nodes
    involved in an open triangle; that is not the approach taken here.
    Rather, this code explicitly enumerates every open triangle present.
    """
    open_triangle_nodes = []
    neighbors = set(G.neighbors(node))
    for nbr1, nbr2 in itertools.combinations(neighbors, 2):
        # If there is no edge between the two neighbors, then `node` is the
        # centre of an open triangle.
        if not G.has_edge(nbr1, nbr2):
            open_triangle_nodes.append([nbr1, node, nbr2])
    return open_triangle_nodes

# Draw out each of the open triplets.
nodes = get_open_triangles(G, 2)
for i, triplet in enumerate(nodes):
    fig = plt.figure(i)
    nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 3))
len(get_open_triangles(G, 3))
|
4. Cliques, Triangles and Graph Structures (Student).ipynb | SubhankarGhosh/NetworkX | mit
|
Exercise
This should allow us to find all n-sized maximal cliques. Try writing a function maximal_cliques_of_size(size, G) that implements this.
|
def maximal_cliques_of_size(size, G):
    return ______________________

maximal_cliques_of_size(2, G)
|
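One way to implement it can be sketched on a plain adjacency-dict graph with the classic Bron-Kerbosch algorithm. This is an illustrative sketch only: the notebook's G is a NetworkX graph, for which `nx.find_cliques` (which also yields maximal cliques) would do the job; the dict representation and helper names below are assumptions, not part of the notebook.

```python
def maximal_cliques(adj):
    """Bron-Kerbosch: yield every maximal clique of an undirected graph
    given as an adjacency dict {node: set of neighbours}."""
    def bk(r, p, x):
        # r: clique so far; p: candidates to extend it; x: already-processed nodes.
        if not p and not x:
            yield r
        for v in list(p):
            yield from bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    yield from bk(set(), set(adj), set())

def maximal_cliques_of_size(size, adj):
    """Keep only the maximal cliques with exactly `size` nodes."""
    return [c for c in maximal_cliques(adj) if len(c) == size]

# A triangle 1-2-3 with a pendant edge 3-4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(maximal_cliques_of_size(2, adj))  # [{3, 4}]
```

Note that a maximal clique of size 2 is just an edge whose endpoints share no common neighbour, which is why {3, 4} qualifies but no edge of the triangle does.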
Connected Components
From Wikipedia:
In graph theory, a connected component (or just component) of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph.
NetworkX also implements a function that identifies connected component subgraphs.
Remember how based on the Circos plot above, we had this hypothesis that the physician trust network may be divided into subgraphs. Let's check that, and see if we can redraw the Circos visualization.
|
# Note: connected_component_subgraphs was deprecated in NetworkX 2.1 and
# removed in 2.4; on newer versions use:
#     ccsubgraphs = [G.subgraph(c).copy() for c in nx.connected_components(G)]
ccsubgraphs = list(nx.connected_component_subgraphs(G))
len(ccsubgraphs)
|
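What that function computes can be sketched in plain Python with a breadth-first search over an adjacency dict. This is an illustrative sketch of the idea, not NetworkX's implementation, and the dict representation is an assumption for the example.

```python
from collections import deque

def connected_components(adj):
    """Yield the connected components of an undirected graph given as an
    adjacency dict {node: set of neighbours}."""
    seen = set()
    for start in adj:
        if start in seen:
            continue
        # Breadth-first search from an unvisited node collects one component.
        component, frontier = {start}, deque([start])
        while frontier:
            node = frontier.popleft()
            for neighbour in adj[node]:
                if neighbour not in component:
                    component.add(neighbour)
                    frontier.append(neighbour)
        seen |= component
        yield component

adj = {1: {2}, 2: {1}, 3: set(), 4: {5}, 5: {4}}
components = list(connected_components(adj))
print(len(components))  # 3
```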
Exercise
Play a bit with the Circos API. Can you colour the nodes by their subgraph identifier?
|
# Start by labelling each node in the master graph G by some number
# that represents the subgraph that contains the node.
for i, g in enumerate(_____________):
    # Fill in code below.

# Then, pass in a list of nodecolors that correspond to the node order.
# Feel free to change the colours around!
node_cmap = {0: 'red', 1: 'blue', 2: 'green', 3: 'yellow'}
nodecolor = [__________________________________________]
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/physicians.png', dpi=300)
|
Discussion
From the above graphs it is clear that the quality of SOD depends on the choice of $\text{snn}$. Interestingly, the optimal setting for $\text{snn}$ appears to be approximately the same as the number of anomalies in the dataset (or slightly larger).
Since in practice we won't know the number of anomalies to expect, we won't be able to tune $\text{snn}$ effectively. It is important to notice that the quality drops especially quickly if $\text{snn}$ is chosen too small.
SOD running time
Since quality depends on $\text{snn}$, we need to know how running time depends on $\text{snn}$. The plot below shows this relationship for three separate datasets: kddcup99 5000, 10000, and 20000. Running time appears to grow linearly $\text{O}(\text{snn})$ in each case.
|
# SOD running time (s) vs snn
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=19, usecols=[2,5])
fig = plt.figure(figsize=(7,3))
ax = fig.add_axes([0.1, 0.15, 0.63, 0.7])
ax.plot(df[2].values[0:5], df[5].values[0:5], label="5000 pts")
ax.plot(df[2].values[7:12], df[5].values[7:12], label="10000 pts")
ax.plot(df[2].values[9:], df[5].values[9:], label = "20000 pts")
ax.set_xlabel('snn', fontsize=14)
ax.set_ylabel('Time (sec)', fontsize=14)
ax.set_ylim([-100, 2700])
ax.set_xlim([24, 401])
ax.set_title('SOD running time (s) vs SNN (kddcup99)', fontsize=16)
ax.legend(bbox_to_anchor=(1.44, 0.75), prop={'family': 'monospace'})
plt.show()
|
_notebooks/SOD vs One-class SVM.ipynb | ActivisionGameScience/blog | apache-2.0
|
Likewise, we need to know how running time is affected by increasing the size of the dataset. Below we plot several curves with fixed $\text{snn}$ and varying data size. It turns out that the running time grows quadratically $\text{O}(n^2)$ in each case.
|
# SOD running time (s) vs # datapoints
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=19, usecols=[2,5])
fig = plt.figure(figsize=(7.5,3))
ax = fig.add_axes([0.1, 0.15, 0.69, 0.7])
ax.plot([5000,10000,20000], df[5].values[0::7], label="snn=25")
ax.plot([5000,10000,20000], df[5].values[1::7], label="snn=50")
ax.plot([5000,10000,20000], df[5].values[2::7], label="snn=100")
ax.plot([5000,10000,20000], df[5].values[3::7], label="snn=200")
ax.plot([5000,10000,20000], df[5].values[4::7], label="snn=400")
ax.set_xlabel('# datapoints', fontsize=14)
ax.set_ylabel('Time (sec)', fontsize=14)
ax.set_ylim([-100, 2700])
ax.set_xlim([4900, 21000])
ax.set_title('SOD running time (s) vs #pts (kddcup99)', fontsize=16)
ax.legend(bbox_to_anchor=(1.32, 0.85), prop={'family': 'monospace'})
plt.show()
|
Discussion
Putting these together we see that SOD running time grows like $\text{O}(n^2\cdot\text{snn})$.
However, we already saw that we should scale $\text{snn}\gtrapprox(\text{no. of anomalies})$ to optimize quality. Since $(\text{no. of anomalies})\propto(\text{size of dataset }n)$, we conclude that we should scale $\text{snn}\propto n$.
So optimal SOD has time complexity $\text{O}(n^3)$. Below we will see that SOD is far more expensive than SVM when we compare them head-to-head.
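As a back-of-the-envelope check, cubic scaling means doubling the dataset should multiply the optimal-SOD running time by about eight. The tiny helper below is hypothetical, added only to make the extrapolation concrete; it is not part of the experiments.

```python
def sod_time_estimate(t_ref, n_ref, n):
    """Extrapolate optimal-SOD running time assuming O(n^3) scaling:
    snn grows proportionally to n, and each run costs O(n^2 * snn)."""
    return t_ref * (n / n_ref) ** 3

# Doubling the data size predicts an ~8x longer run.
print(sod_time_estimate(10.0, 5000, 10000))  # 80.0
```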
One-Class SVM quality
We already mentioned that we are comparing three variants of one-class SVM: ordinary one-class SVM, eta SVM, and robust SVM. Although we could study other kernels, in our experiments we limited ourselves to RBF. Two settings for the parameter $\gamma$ were investigated:
"automated gamma tuning" as was proposed in evangelista2007
the simple heuristic $\gamma=\frac{1}{\text{no. of datapoints}}$
The regularization is bundled into a separate parameter $\nu$. The default value $\nu=0.5$ seemed to work well, but we did not systematically study this aspect.
In the graphs below we compare all six Precision-Recall curves for each dataset:
eta SVM with automated gamma tuning
eta SVM without automated gamma tuning
robust SVM with automated gamma tuning
robust SVM without automated gamma tuning
ordinary one-class SVM with automated gamma tuning
ordinary one-class SVM without automated gamma tuning
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import auc
%matplotlib inline
# process SVM PR curves
datasets = ['ionosphere', 'shuttle', 'breast_cancer_wisconsin_diagnostic', 'satellite', 'mouse', 'kddcup99_5000', 'kddcup99_10000']
for name in datasets:
    if name == 'kddcup99_5000':
        dirname = 'kddcup99'
        subdirbase = "outputsvm_5000"
    elif name == 'kddcup99_10000':
        dirname = 'kddcup99'
        subdirbase = "outputsvm_10000"
    else:
        dirname = name
        subdirbase = "outputsvm"

    eta = pd.read_csv('%s/%s_eta/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
    eta_no_gamma_tuning = pd.read_csv('%s/%s_eta_no_gamma_tuning/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
    robust = pd.read_csv('%s/%s_robust/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
    robust_no_gamma_tuning = pd.read_csv('%s/%s_robust_no_gamma_tuning/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
    one_class = pd.read_csv('%s/%s_one_class/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)
    one_class_no_gamma_tuning = pd.read_csv('%s/%s_one_class_no_gamma_tuning/pr.txt' % (dirname, subdirbase), header=None, index_col=False, skiprows=1)

    eta_auc = auc(eta[0], eta[1])
    eta_no_gamma_tuning_auc = auc(eta_no_gamma_tuning[0], eta_no_gamma_tuning[1])
    robust_auc = auc(robust[0], robust[1])
    robust_no_gamma_tuning_auc = auc(robust_no_gamma_tuning[0], robust_no_gamma_tuning[1])
    one_class_auc = auc(one_class[0], one_class[1])
    one_class_no_gamma_tuning_auc = auc(one_class_no_gamma_tuning[0], one_class_no_gamma_tuning[1])

    fig = plt.figure(figsize=(12,5))
    ax = fig.add_axes([0.045, 0.1, 0.6, 0.8])
    ax.plot(eta[0].values, eta[1].values, label='eta AUC=%f' % eta_auc, lw=2)
    ax.plot(eta_no_gamma_tuning[0].values, eta_no_gamma_tuning[1].values, label='eta_noauto AUC=%f' % eta_no_gamma_tuning_auc, lw=2)
    ax.plot(robust[0].values, robust[1].values, label='robust AUC=%f' % robust_auc, lw=2)
    ax.plot(robust_no_gamma_tuning[0].values, robust_no_gamma_tuning[1].values, label='robust_noauto AUC=%f' % robust_no_gamma_tuning_auc, lw=2)
    ax.plot(one_class[0].values, one_class[1].values, label='one_class AUC=%f' % one_class_auc, lw=2)
    ax.plot(one_class_no_gamma_tuning[0].values, one_class_no_gamma_tuning[1].values, label='one_class_noauto AUC=%f' % one_class_no_gamma_tuning_auc, lw=2)
    ax.set_xlabel('Recall', fontsize=14)
    ax.set_ylabel('Precision', fontsize=14)
    ax.set_ylim([0.0, 1.05])
    ax.set_xlim([0.0, 1.0])
    ax.set_title('SVM Precision-Recall (%s)' % name, fontsize=16)
    ax.legend(bbox_to_anchor=(1.6, 0.7), prop={'family': 'monospace'})
    plt.show()
|
Discussion
From the graphs it is clear that robust SVM (with or without automated gamma tuning) performs much worse on most datasets. Since it seems to be unreliable, we drop it from further discussion.
Now consider the effect of automated gamma tuning. Although it helps for the "ionosphere" and "mouse" datasets, results are mixed for the "shuttle" and "satellite" datasets. It actually hurts the "breast_cancer" results, and it really hurts the "kddcup99" results. Since it seems to be unreliable, we cannot recommend it.
So we are left comparing eta SVM versus ordinary one-class SVM (both with automated gamma tuning off). We see that the results are mostly comparable. Since one-class SVM is standard in libsvm, we see no reason to use the eta variant.
Head-to-head quality comparison: Optimal SOD vs One-Class SVM
In the graphs below we compare ordinary one-class SVM (automated gamma tuning off) against SOD (where $\text{snn}$ was tuned optimally for each dataset). We see that both algorithms are competitive; however, one-class SVM seems to produce slightly better quality overall.
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import auc
%matplotlib inline
# process SVM PR curves vs SOD (optimal) PR curves
datasets = ['ionosphere', 'shuttle', 'breast_cancer_wisconsin_diagnostic', 'satellite', 'kddcup99_5000', 'kddcup99_10000']
for name in datasets:
    if name == 'ionosphere':
        dirname = name
        svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
        sodsubdirbase = "outputsod_10snn"
    elif name == 'shuttle':
        dirname = name
        svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
        sodsubdirbase = "outputsod_1000snn"
    elif name == 'breast_cancer_wisconsin_diagnostic':
        dirname = name
        svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
        sodsubdirbase = "outputsod_25snn"
    elif name == 'satellite':
        dirname = name
        svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
        sodsubdirbase = "outputsod_250snn"
    elif name == 'mouse':
        dirname = name
        svmsubdirbase = "outputsvm_one_class_no_gamma_tuning"
        sodsubdirbase = "outputsod_10snn"
    elif name == 'kddcup99_5000':
        dirname = 'kddcup99'
        svmsubdirbase = "outputsvm_5000_one_class_no_gamma_tuning"
        sodsubdirbase = "outputsod_5000_100snn"
    elif name == 'kddcup99_10000':
        dirname = 'kddcup99'
        svmsubdirbase = "outputsvm_10000_one_class_no_gamma_tuning"
        sodsubdirbase = "outputsod_5000_200snn"

    one_class_no_gamma_tuning = pd.read_csv('%s/%s/pr.txt' % (dirname, svmsubdirbase), header=None, index_col=False, skiprows=1)
    sod = pd.read_csv('%s/%s/pr-curve.txt' % (dirname, sodsubdirbase), header=None, index_col=False, skiprows=2, sep=' ')

    one_class_no_gamma_tuning_auc = auc(one_class_no_gamma_tuning[0], one_class_no_gamma_tuning[1])
    sod_auc = auc(sod[0], sod[1])

    fig = plt.figure(figsize=(12,5))
    ax = fig.add_axes([0.045, 0.1, 0.6, 0.8])
    ax.plot(sod[0].values, sod[1].values, label='SOD (optimal) AUC=%f' % sod_auc, lw=2)
    ax.plot(one_class_no_gamma_tuning[0].values, one_class_no_gamma_tuning[1].values, label='one_class_noauto AUC=%f' % one_class_no_gamma_tuning_auc, lw=2)
    ax.set_xlabel('Recall', fontsize=14)
    ax.set_ylabel('Precision', fontsize=14)
    ax.set_ylim([0.0, 1.05])
    ax.set_xlim([0.0, 1.0])
    ax.set_title('Optimal SOD and SVM Precision-Recall (%s)' % name, fontsize=16)
    ax.legend(bbox_to_anchor=(1.6, 0.7), prop={'family': 'monospace'})
    plt.show()
|
Head-to-head time comparison: Optimal SOD vs One-Class SVM
Finally, we compare the running time of one-class SVM (automated gamma tuning off) versus SOD (where $\text{snn}$ was tuned optimally for each dataset).
We already argued above that Optimal SOD running time grows as $\text{O}(n^3)$. Our experiments verify this.
On the other hand, our experiments show that one-class SVM only grows as $\text{O}(n^2)$.
Clearly, Optimal SOD is much more expensive than one-class SVM.
|
# SOD optimal running time (s) compared to one-class SVM running time
df = pd.read_csv('output_summary.csv', header=None, index_col=False, skiprows=39, nrows=19, usecols=[5,12])
fig = plt.figure(figsize=(7.5,3))
ax = fig.add_axes([0.1, 0.15, 0.69, 0.7])
ax.plot([5000,10000,20000], [df[5].values[2], df[5].values[10], df[5].values[18]], label="optimal SOD")
ax.plot([5000,10000,20000], [df[12].values[5], df[12].values[12], df[12].values[14]], label="one-class SVM")
ax.set_xlabel('# datapoints', fontsize=14)
ax.set_ylabel('Time (sec)', fontsize=14)
ax.set_ylim([-100, 2700])
ax.set_xlim([4900, 21000])
ax.set_title('Comparison of SOD and SVM running time (kddcup99)', fontsize=16)
ax.legend(bbox_to_anchor=(1.5, 0.65), prop={'family': 'monospace'})
plt.show()
|
Since deques are a type of sequence container, they support some of the same operations as list, such as examining the contents with __getitem__(), determining length with len(), and removing elements from the middle of the queue by matching identity with remove().
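For instance, a quick sketch of those list-like operations:

```python
import collections

d = collections.deque('abcdefg')
print('Deque    :', d)
print('Length   :', len(d))   # determining length
print('Left end :', d[0])     # __getitem__ at the left end
print('Right end:', d[-1])    # __getitem__ at the right end

d.remove('c')                 # remove an element from the middle
print('remove(c):', d)
```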
Populating
A deque can be populated from either end, termed “left” and “right” in the Python implementation.
|
import collections
# Add to the right
d1 = collections.deque()
d1.extend('abcdefg')
print('extend :', d1)
d1.append('h')
print('append :', d1)
# Add to the left
d2 = collections.deque()
d2.extendleft(range(6))
print('extendleft:', d2)
d2.appendleft(6)
print('appendleft:', d2)
|
data_structure/deque — Double-Ended Queue.ipynb | scotthuang1989/Python-3-Module-of-the-Week | apache-2.0
|
The extendleft() function iterates over its input and performs the equivalent of an appendleft() for each item. The end result is that the deque contains the input sequence in reverse order.
Consuming
Similarly, the elements of the deque can be consumed from both ends or either end, depending on the algorithm being applied.
|
import collections

print('From the right:')
d = collections.deque('abcdefg')
while True:
    try:
        print(d.pop(), end='')
    except IndexError:
        break
print()

print('\nFrom the left:')
d = collections.deque(range(6))
while True:
    try:
        print(d.popleft(), end='')
    except IndexError:
        break
print()
|
Use pop() to remove an item from the “right” end of the deque and popleft() to take an item from the “left” end.
Since deques are thread-safe, the contents can even be consumed from both ends at the same time from separate threads.
|
import collections
import threading
import time

candle = collections.deque(range(5))

def burn(direction, nextSource):
    while True:
        try:
            next = nextSource()
        except IndexError:
            break
        else:
            print('{:>8}: {}'.format(direction, next))
            time.sleep(0.1)
    print('{:>8} done'.format(direction))
    return

left = threading.Thread(target=burn, args=('Left', candle.popleft))
right = threading.Thread(target=burn, args=('Right', candle.pop))

left.start()
right.start()

left.join()
right.join()
|
Rotating (think of it as a belt)
Another useful aspect of the deque is the ability to rotate it in either direction, so as to skip over some items.
|
import collections
d = collections.deque(range(10))
print('Normal :', d)
d = collections.deque(range(10))
d.rotate(2)
print('Right rotation:', d)
d = collections.deque(range(10))
d.rotate(-2)
print('Left rotation :', d)
|
Constraining the Queue Size
A deque instance can be configured with a maximum length so that it never grows beyond that size. When the queue reaches the specified length, existing items are discarded as new items are added. This behavior is useful for finding the last n items in a stream of undetermined length.
|
import collections
import random

# Set the random seed so we see the same output each time
# the script is run.
random.seed(1)

d1 = collections.deque(maxlen=3)
d2 = collections.deque(maxlen=3)

for i in range(5):
    n = random.randint(0, 100)
    print('n =', n)
    d1.append(n)
    d2.appendleft(n)

print('D1:', d1)
print('D2:', d2)
|
Thus, the resonant frequency is approximately $f_p = 6.9~\text{kHz}$ and does not depend on the resistance. This disagrees with the expected value; most likely, our coil's inductance is larger or smaller than 100 mH.
The quality factor at $R = 1~\Omega$ is approximately $Q \approx \frac{6.9}{7.6-6.3} \approx 5.31$, while at $R = 500~\Omega$ it is $Q \approx \frac{6.9}{8.2-6.4} \approx 3.83$.
Formula: $F = (2 \pi f_p)^{-2}~[\text{s}^2]$
Uncertainty: $\varepsilon C = 5\%$, $\Delta F = F \sqrt{2} \frac{0.05~\text{Hz}}{f}$.
|
import numpy
import pandas
import matplotlib.pyplot

nII = pandas.read_excel('lab-3-3.xlsx', 'tab-2', header=None)
nII.head()

f = nII.values
x = f[0, 1:]
y = f[2, 1:]
l = numpy.mean(x * y) / numpy.mean(x ** 2)
dl = ((numpy.mean(x ** 2) * numpy.mean(y ** 2) - (numpy.mean(x * y) ** 2)) / (len(x) * (numpy.mean(x ** 2) ** 2))) ** 0.5

fff = numpy.linspace(0, 10, 100)
matplotlib.pyplot.figure(figsize=(18, 9))
matplotlib.pyplot.grid(linestyle='--')
matplotlib.pyplot.title('Resonant frequency vs capacitance', fontweight='bold')
matplotlib.pyplot.xlabel('$F$, ms$^2$')
matplotlib.pyplot.ylabel('$C$, nF')
matplotlib.pyplot.errorbar(f[0, 1:], f[2, 1:], xerr=f[0, 1:] * 0.05, yerr=f[3, 1:], fmt='o', c='black', lw=3)
matplotlib.pyplot.plot(fff, l * fff, '--', c='black', lw=2)
matplotlib.pyplot.show()

l * 1000, dl * 1000, 1 / (2 * numpy.pi * (3 * l * 10 ** (-9)) ** 0.5)
|
labs/term-5/lab-3-3.ipynb | eshlykov/mipt-day-after-day | unlicense
|
In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
$\large F(\epsilon) = {\Large \frac{1}{e^{(\epsilon-\mu)/kT}+1}}$
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
|
import numpy as np

def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    F = 1/(np.exp((energy-mu)/kT)+1)
    return F

assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0, 1.0, 10), 1.0, 10.0),
    np.array([ 0.52497919,  0.5222076 ,  0.51943465,  0.5166605 ,  0.51388532,
               0.51110928,  0.50833256,  0.50555533,  0.50277775,  0.5       ]))
|
assignments/midterm/InteractEx06.ipynb | rsterbentz/phys202-2015-work | mit
|
Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
|
import numpy as np
import matplotlib.pyplot as plt

def plot_fermidist(mu, kT):
    energy = np.linspace(0, 10.0, 21)
    plt.plot(energy, fermidist(energy, mu, kT))
    plt.tick_params(direction='out')
    plt.xlabel('$Energy$')
    plt.ylabel('$F(Energy)$')
    plt.title('Fermi Distribution')

plot_fermidist(4.0, 1.0)

assert True  # leave this for grading the plot_fermidist function
|
Probability Distribution
Let us begin by specifying discrete probability distributions. The class ProbDist defines a discrete probability distribution. We name our random variable and then assign probabilities to its values. Assigning probabilities works much like a dictionary, with the value as the key and the probability as the entry. This is possible because of the magic methods __getitem__ and __setitem__, which store the probabilities in the prob dict of the object. You can keep the source window open alongside while playing with the rest of the code to get a better understanding.
|
%psource ProbDist
p = ProbDist('Flip')
p['H'], p['T'] = 0.25, 0.75
p['T']
|
probability.ipynb | SnShine/aima-python | mit
|
The distribution by default is not normalized if values are added incrementally. We can still force normalization by invoking the normalize method.
|
p = ProbDist('Y')
p['Cat'] = 50
p['Dog'] = 114
p['Mice'] = 64
(p['Cat'], p['Dog'], p['Mice'])
p.normalize()
(p['Cat'], p['Dog'], p['Mice'])
|
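The essence of the class can be sketched in a few lines. This is an illustrative reimplementation (here called SimpleProbDist to avoid clashing with the real class), not aima-python's actual code:

```python
class SimpleProbDist:
    """Discrete distribution: map values to probabilities via item access."""
    def __init__(self, varname='?'):
        self.varname = varname
        self.prob = {}

    def __getitem__(self, val):
        return self.prob[val]

    def __setitem__(self, val, p):
        self.prob[val] = p

    def normalize(self):
        """Rescale the stored numbers so the probabilities sum to 1."""
        total = sum(self.prob.values())
        for val in self.prob:
            self.prob[val] /= total
        return self

p = SimpleProbDist('Y')
p['Cat'], p['Dog'], p['Mice'] = 50, 114, 64
p.normalize()
print(p['Cat'], p['Dog'], p['Mice'])
```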
A probability model is completely determined by the joint distribution for all of the random variables. (Section 13.3) The probability module implements these as the class JointProbDist which inherits from the ProbDist class. This class specifies a discrete probability distribution over a set of variables.
|
%psource JointProbDist
|
Inference Using Full Joint Distributions
In this section we use Full Joint Distributions to calculate the posterior distribution given some evidence. We represent evidence by using a python dictionary with variables as dict keys and dict values representing the values.
This is illustrated in Section 13.3 of the book. The functions enumerate_joint and enumerate_joint_ask implement this functionality. Under the hood they implement Equation 13.9 from the book.
$$\textbf{P}(X | \textbf{e}) = α \textbf{P}(X, \textbf{e}) = α \sum_{y} \textbf{P}(X, \textbf{e}, \textbf{y})$$
Here α is the normalizing factor, X is our query variable, and e is the evidence. According to the equation, we enumerate over the remaining variables y (those in neither the evidence nor the query), i.e. all possible combinations of y.
We will be using the same example as the book. Let us create the full joint distribution from Figure 13.3.
|
full_joint = JointProbDist(['Cavity', 'Toothache', 'Catch'])
full_joint[dict(Cavity=True, Toothache=True, Catch=True)] = 0.108
full_joint[dict(Cavity=True, Toothache=True, Catch=False)] = 0.012
full_joint[dict(Cavity=True, Toothache=False, Catch=True)] = 0.016
full_joint[dict(Cavity=True, Toothache=False, Catch=False)] = 0.064
full_joint[dict(Cavity=False, Toothache=True, Catch=True)] = 0.072
full_joint[dict(Cavity=False, Toothache=False, Catch=True)] = 0.144
full_joint[dict(Cavity=False, Toothache=True, Catch=False)] = 0.008
full_joint[dict(Cavity=False, Toothache=False, Catch=False)] = 0.576
|
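The same computation that the following sections implement can be sketched with plain dicts, summing the table entries consistent with a partial assignment. This is an illustrative sketch of the idea, not the module's code; the tuple keys are ordered (Cavity, Toothache, Catch).

```python
# Full joint from Figure 13.3 as a plain dict keyed by
# (Cavity, Toothache, Catch) value tuples.
joint = {
    (True, True, True): 0.108,   (True, True, False): 0.012,
    (True, False, True): 0.016,  (True, False, False): 0.064,
    (False, True, True): 0.072,  (False, True, False): 0.008,
    (False, False, True): 0.144, (False, False, False): 0.576,
}
VARS = ('Cavity', 'Toothache', 'Catch')

def prob(evidence):
    """Sum every entry consistent with the partial assignment `evidence`."""
    return sum(p for values, p in joint.items()
               if all(dict(zip(VARS, values))[var] == val
                      for var, val in evidence.items()))

# P(Cavity=True | Toothache=True) = P(Cavity, Toothache) / P(Toothache)
posterior = prob({'Cavity': True, 'Toothache': True}) / prob({'Toothache': True})
print(posterior)  # ~0.6, as in Section 13.3
```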
Let us now look at the enumerate_joint function. It returns the sum of those entries in P consistent with e, provided variables is P's remaining variables (the ones not in e). Here, P refers to the full joint distribution. The function is implemented with a recursive call; the first parameter, variables, refers to the remaining variables, and each recursive call keeps one variable constant while varying the others.
|
%psource enumerate_joint
|
We might be interested in the probability distribution of a particular variable conditioned on some evidence. This can involve doing calculations like above for each possible value of the variable. This has been implemented slightly differently using normalization in the function enumerate_joint_ask which returns a probability distribution over the values of the variable X, given the {var:val} observations e, in the JointProbDist P. The implementation of this function calls enumerate_joint for each value of the query variable and passes extended evidence with the new evidence having X = x<sub>i</sub>. This is followed by normalization of the obtained distribution.
|
%psource enumerate_joint_ask
|
You can verify that the first value is the same as we obtained earlier by manual calculation.
Bayesian Networks
A Bayesian network is a representation of the joint probability distribution encoding a collection of conditional independence statements.
A Bayes Network is implemented as the class BayesNet. It consists of a collection of nodes implemented by the class BayesNode. The implementation in the above mentioned classes focuses only on boolean variables. Each node is associated with a variable and it contains a conditional probability table (cpt). The cpt represents the probability distribution of the variable conditioned on its parents P(X | parents).
Let us dive into the BayesNode implementation.
|
%psource BayesNode
|
It is possible to avoid using a tuple when there is only a single parent. So an alternative format for the cpt is
|
john_node = BayesNode('JohnCalls', ['Alarm'], {True: 0.90, False: 0.05})
mary_node = BayesNode('MaryCalls', 'Alarm', {(True, ): 0.70, (False, ): 0.01})  # Using a string for parents.
# The parent/cpt formats are equivalent to the john_node definition.
|
With all the information about nodes present it is possible to construct a Bayes Network using BayesNet. The BayesNet class does not take in nodes as input but instead takes a list of node_specs. An entry in node_specs is a tuple of the parameters we use to construct a BayesNode namely (X, parents, cpt). node_specs must be ordered with parents before children.
|
%psource BayesNet
|
Exact Inference in Bayesian Networks
A Bayes Network is a more compact representation of the full joint distribution and like full joint distributions allows us to do inference i.e. answer questions about probability distributions of random variables given some evidence.
Exact algorithms don't scale well for larger networks. Approximate algorithms are explained in the next section.
Inference by Enumeration
We apply techniques similar to those used for enumerate_joint_ask and enumerate_joint to draw inference from Bayesian Networks. enumeration_ask and enumerate_all implement the algorithm described in Figure 14.9 of the book.
|
%psource enumerate_all
|
enumerate_all recursively evaluates a general form of Equation 14.4 in the book.
$$\textbf{P}(X | \textbf{e}) = α \textbf{P}(X, \textbf{e}) = α \sum_{y} \textbf{P}(X, \textbf{e}, \textbf{y})$$
such that P(X, e, y) is written in the form of product of conditional probabilities P(variable | parents(variable)) from the Bayesian Network.
enumeration_ask calls enumerate_all on each value of query variable X and finally normalizes them.
|
%psource enumeration_ask
|
Variable Elimination
The enumeration algorithm can be improved substantially by eliminating repeated calculations. In enumeration we effectively build the joint over all the hidden variables, which is exponential in the number of hidden variables. Variable elimination employs interleaving join and marginalization.
Before we look into the implementation of Variable Elimination we must first familiarize ourselves with Factors.
In general we call a multidimensional array of type P(Y1 ... Yn | X1 ... Xm) a factor, where some of the Xs and Ys may be assigned values. Factors are implemented in the probability module as the class Factor. They take as input variables and cpt.
Helper Functions
There are certain helper functions that help creating the cpt for the Factor given the evidence. Let us explore them one by one.
|
%psource make_factor
|
make_factor is used to create the cpt and variables that will be passed to the constructor of Factor. We use make_factor for each variable. It takes in the arguments var, the particular variable; e, the evidence we want to do inference on; and bn, the Bayes network.
Here variables for each node refers to a list consisting of the variable itself and its parents, minus any variables that are part of the evidence. This is created by finding node.parents and filtering out those that are part of the evidence.
The cpt created is the one similar to the original cpt of the node with only rows that agree with the evidence.
|
%psource all_events
|
Here the cpt is for P(MaryCalls | Alarm = True). Therefore the probabilities for True and False sum up to one. Note the difference between both the cases. Again the only rows included are those consistent with the evidence.
Operations on Factors
We are interested in two kinds of operations on factors: Pointwise Product, which is used to create joint distributions, and Summing Out, which is used for marginalization.
|
%psource Factor.pointwise_product
|
Factor.pointwise_product implements a method of creating a joint via combining two factors. We take the union of variables of both the factors and then generate the cpt for the new factor using the all_events function. Note that we have already eliminated rows that are not consistent with the evidence. The pointwise product assigns new probabilities by multiplying rows, similar to a database join.
|
%psource pointwise_product
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
pointwise_product extends this operation to more than two operands, applying it sequentially in pairs.
|
%psource Factor.sum_out
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
Factor.sum_out makes a factor eliminating a variable by summing over its values. Again, all_events is used to generate combinations for the rest of the variables.
|
%psource sum_out
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
sum_out uses both Factor.sum_out and pointwise_product to finally eliminate a particular variable from all factors by summing over its values.
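As a toy sketch of the marginalization step (hypothetical names, not the module's code), representing a factor as a variable list plus a table, summing out a variable collapses rows that differ only in that variable's value:

```python
def sum_out_toy(var, factor):
    """Eliminate `var` from a toy factor by summing over its values."""
    vars_, table = factor
    idx = vars_.index(var)
    out_vars = [v for v in vars_ if v != var]
    out = {}
    for values, p in table.items():
        # Drop var's position from the key; rows that now collide are summed.
        key = values[:idx] + values[idx + 1:]
        out[key] = out.get(key, 0.0) + p
    return (out_vars, out)

# Joint P(A, B) over Boolean variables; summing out A leaves P(B).
joint = (['A', 'B'], {(True, True): 0.27, (True, False): 0.03,
                      (False, True): 0.14, (False, False): 0.56})
pB_vars, pB = sum_out_toy('A', joint)
```

Here P(B=True) = 0.27 + 0.14 = 0.41 and P(B=False) = 0.03 + 0.56 = 0.59.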
Elimination Ask
The algorithm described in Figure 14.11 of the book is implemented by the function elimination_ask. We use this for inference. The key idea is that we eliminate the hidden variables by interleaving joining and marginalization. It takes 3 arguments: X, the query variable; e, the evidence; and bn, the Bayes network.
The algorithm creates factors out of Bayes nodes in reverse order and eliminates hidden variables using sum_out. Finally, it takes a pointwise product of all factors and normalizes. Let us now solve the problem of inferring
P(Burglary=True | JohnCalls=True, MaryCalls=True) using variable elimination.
|
%psource elimination_ask
elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
Approximate Inference in Bayesian Networks
Exact inference fails to scale for very large and complex Bayesian Networks. This section covers implementation of randomized sampling algorithms, also called Monte Carlo algorithms.
|
%psource BayesNode.sample
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
Before we consider the different algorithms in this section, let us look at the BayesNode.sample method. It samples from the distribution for this variable conditioned on the event's values for parent_variables. That is, it returns True/False at random according to the conditional probability given the parents. The probability function is a simple helper from the utils module which returns True with the probability passed to it.
Prior Sampling
The idea of Prior Sampling is to sample from the Bayesian Network in topological order. We start at the top of the network and sample as per P(X<sub>i</sub> | parents(X<sub>i</sub>)), i.e. the probability distribution from which a value is sampled is conditioned on the values already assigned to the variable's parents. This can be thought of as a simulation.
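A minimal sketch of the idea on a hypothetical two-node network (Rain → WetGrass, with made-up probabilities), sampling parents before children:

```python
import random

# Hypothetical two-node network: Rain, then WetGrass conditioned on Rain.
P_rain = 0.2
P_wet_given_rain = {True: 0.9, False: 0.1}

def prior_sample_toy(rng=random):
    event = {}
    # Sample in topological order: the parent first, then the child
    # conditioned on the value already assigned to the parent.
    event['Rain'] = rng.random() < P_rain
    event['WetGrass'] = rng.random() < P_wet_given_rain[event['Rain']]
    return event

random.seed(0)
samples = [prior_sample_toy() for _ in range(10000)]
p_rain_est = sum(s['Rain'] for s in samples) / len(samples)
```

With enough samples, the empirical frequency of Rain=True approaches the prior 0.2.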
|
%psource prior_sample
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
The function prior_sample implements the algorithm described in Figure 14.13 of the book. Nodes are sampled in topological order. The old value of the event is passed as evidence for parent values. We will use the Bayesian Network in Figure 14.12 to try out prior_sample.
<img src="files/images/sprinklernet.jpg" height="500" width="500">
We store the samples as observations. Let us find P(Rain=True).
|
N = 1000
all_observations = [prior_sample(sprinkler) for x in range(N)]
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
Rejection Sampling
Rejection Sampling is based on an idea similar to what we did just now. First, it generates samples from the prior distribution specified by the network. Then, it rejects all those that do not match the evidence. The function rejection_sampling implements the algorithm described in Figure 14.14.
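A minimal sketch of rejection sampling on a hypothetical two-node network (made-up probabilities): generate prior samples and keep only those matching the evidence:

```python
import random

# Hypothetical network: estimate P(Rain | WetGrass=True) by discarding
# prior samples that are inconsistent with the evidence.
P_rain = 0.2
P_wet_given_rain = {True: 0.9, False: 0.1}

def prior_sample_toy(rng=random):
    rain = rng.random() < P_rain
    wet = rng.random() < P_wet_given_rain[rain]
    return {'Rain': rain, 'WetGrass': wet}

def rejection_sampling_toy(n, rng=random):
    # Reject every sample that contradicts the evidence WetGrass=True.
    kept = [s for s in (prior_sample_toy(rng) for _ in range(n))
            if s['WetGrass']]
    return sum(s['Rain'] for s in kept) / len(kept)

random.seed(1)
estimate = rejection_sampling_toy(20000)
```

Here the exact posterior is 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.18/0.26, roughly 0.692, and the estimate converges to it as n grows; note how many samples are thrown away, which is exactly the weakness likelihood weighting addresses below.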
|
%psource rejection_sampling
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
The function keeps counts of each of the possible values of the query variable and increases the count when we see an observation consistent with the evidence. It takes the input parameters X, the query variable; e, the evidence; bn, the Bayes net; and N, the number of prior samples to generate.
consistent_with is used to check consistency.
|
%psource consistent_with
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
Likelihood Weighting
Rejection sampling tends to reject a lot of samples if our evidence consists of a large number of variables. Likelihood Weighting solves this by fixing the evidence (i.e. not sampling it) and then using weights to make sure that our overall sampling is still consistent.
The pseudocode in Figure 14.15 is implemented as likelihood_weighting and weighted_sample.
|
%psource weighted_sample
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
weighted_sample samples an event from the Bayesian Network that's consistent with the evidence e and returns the event along with its weight, the likelihood that the event accords with the evidence. It takes two parameters: bn, the Bayesian Network, and e, the evidence.
The weight is obtained by multiplying P(x<sub>i</sub> | parents(x<sub>i</sub>)) for each node in evidence. We set the values of event = evidence at the start of the function.
|
weighted_sample(sprinkler, dict(Rain=True))
%psource likelihood_weighting
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
Gibbs Sampling
In likelihood weighting, it is possible to obtain low weights when the evidence variables sit at the bottom of the Bayesian Network, because influence only propagates downwards during sampling.
Gibbs Sampling solves this. The implementation of Figure 14.16 is provided in the function gibbs_ask.
|
%psource gibbs_ask
|
probability.ipynb
|
SnShine/aima-python
|
mit
|
Create Table
|
%%sql
-- Create a table of criminals_1
CREATE TABLE criminals_1 (pid, name, age, sex, city, minor);
INSERT INTO criminals_1 VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals_1 VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals_1 VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals_1 VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
|
sql/copy_data_between_tables.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
View Table
|
%%sql
-- Select all
SELECT *
-- From the table 'criminals_1'
FROM criminals_1
|
sql/copy_data_between_tables.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
Create New Empty Table
|
%%sql
-- Create a table called criminals_2
CREATE TABLE criminals_2 (pid, name, age, sex, city, minor);
|
sql/copy_data_between_tables.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
Copy Contents Of First Table Into Empty Table
|
%%sql
-- Insert into the empty table
INSERT INTO criminals_2
-- Everything
SELECT *
-- From the first table
FROM criminals_1;
|
sql/copy_data_between_tables.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
View Previously Empty Table
|
%%sql
-- Select everything
SELECT *
-- From the previously empty table
FROM criminals_2
|
sql/copy_data_between_tables.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
0. Mental note: Inkscape
0. Mental note 2: Heatmap
Represent one quantitative variable as color and two qualitative (or binned quantitative) variables on the axes.
|
data = pd.read_csv("data/random.csv",sep="\t",index_col=0)*100
data.head()
import seaborn as sns
sns.heatmap?
ax = sns.heatmap(data,cbar_kws={"label":"Body temperature"},cmap="YlOrRd")
ax.invert_yaxis()
plt.ylabel("Pizzas eaten")
plt.xlabel("Outside temperature")
plt.show()
sns.heatmap(data,cbar_kws={"label":"Body temperature"},cmap="YlOrRd")
plt.ylabel("Pizzas eaten")
plt.xlabel("Outside temperature")
plt.xticks(0.5+np.arange(10),["10-20","20-30","30-40","40-50","50-60","60-70","70-80","80-90","90-100","100-110"],rotation=90)
plt.show()
|
class4/class4b_inclass.ipynb
|
jgarciab/wwd2017
|
gpl-3.0
|
Conclusion: Pizzas make you lekker warm
Lesson of the day: Eat more pizza
1. In-class exercises
1.1 Read the data from the world bank (inside class3 folder, then folder data, subfolder world_bank), and save it with name df
|
#Read data and print the head to see how it looks like
df = pd.read_csv("../class3/data/world_bank/data.csv",na_values="..")
df.head()
#We could fix the column names with: df.columns = ["Country Name","Country Code","Series Name","Series Code",1967,1968,1969,...]
## 4.1b Fix the year of the column (make it numbers)
df = pd.read_csv("../class3/data/world_bank/data.csv",na_values="..")
old_columns = list(df.columns)
new_columns = []
for index,column_name in enumerate(old_columns):
if index < 4:
new_columns.append(column_name)
else:
year_column = int(column_name[:4])
new_columns.append(year_column)
df.columns = new_columns
#We could save our data with: df.to_csv("data/new_columns.csv",sep="\t")
df.head()
|
class4/class4b_inclass.ipynb
|
jgarciab/wwd2017
|
gpl-3.0
|
4.2 Fix the format and save it with name df_fixed
Remember, this was the code that we used to fix the file:
`
### Fix step 1: Melt
variables_already_presents = ['METRO_ID', 'Metropolitan areas','VAR']
columns_combine = cols
df = pd.melt(df,
id_vars=variables_already_presents,
value_vars=columns_combine,
var_name="Year",
value_name="Value")
df.head()
### Fix step 2: Pivot
column_with_values = "Value"
column_to_split = ["VAR"]
variables_already_present = ["METRO_ID","Metropolitan areas","Year"]
df.pivot_table(column_with_values,
variables_already_present,
column_to_split).reset_index().head()
`
|
### Fix step 1: Melt
cols = list(df.columns)
variables_already_presents = cols[:4]
columns_combine = cols[4:]
df_1 = pd.melt(df,
id_vars=variables_already_presents,
value_vars=columns_combine,
var_name="Year",
value_name="Value")
df_1.head()
### Fix step 2: Pivot
column_with_values = "Value"
column_to_split = ["Series Name"]
variables_already_present = ["Country Name","Country Code","Year"]
df_1.pivot_table(column_with_values,
variables_already_present,
column_to_split).reset_index().head()
|
class4/class4b_inclass.ipynb
|
jgarciab/wwd2017
|
gpl-3.0
|
4.3 Create two dataframes with names df_NL and df_CO.
The first with the data for the Netherlands
The second with the data for Colombia
|
#code
df_NL =
df_CO =
|
class4/class4b_inclass.ipynb
|
jgarciab/wwd2017
|
gpl-3.0
|
4.4 Concatenate/Merge (the appropriate one) the two dataframes
4.5 Create two dataframes with names df_pri and df_pu.
The first with the data for all rows and columns "country", "year" and indicator "SH.XPD.PRIV.ZS" (expenditure in health care as %GDP)
The second with the data for all rows and columns "country", "year" and indicator "SH.XPD.PUBL.ZS"
|
df_pri =
df_pu =
|
class4/class4b_inclass.ipynb
|
jgarciab/wwd2017
|
gpl-3.0
|
4.6 Concatenate/Merge (the appropriate one) the two dataframes (how = "outer")
4.7 Groupby the last dataframe (step 4.6) by country code and describe
If you don't remember check class3c_groupby.ipynb
4.8 Groupby the last dataframe (step 4.6) by country code and find skewness
A skewness value > 0 means that there is more weight in the right tail of the distribution.
If you don't remember check class3c_groupby.ipynb
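A sketch of what this might look like, using a made-up miniature frame in place of the merged data (column names assumed to match the world-bank frame):

```python
import pandas as pd
import scipy.stats

# Hypothetical stand-in for the merged health-expenditure data.
df = pd.DataFrame({
    "Country Code": ["NLD", "NLD", "NLD", "COL", "COL", "COL"],
    "Value": [1.0, 2.0, 10.0, 2.0, 3.0, 4.0],
})
# Group by country and compute the skewness of each group's values.
skew_by_country = df.groupby("Country Code")["Value"].apply(scipy.stats.skew)
```

The NLD values [1, 2, 10] have a long right tail, so their skewness is positive; the symmetric COL values give a skewness of zero.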
|
import scipy.stats #you need to import scipy.stats
|
class4/class4b_inclass.ipynb
|
jgarciab/wwd2017
|
gpl-3.0
|
Periodic boundary conditions
|
def periodic(i,limit,add):
"""
Choose correct matrix index with periodic boundary conditions
Input:
- i: Base index
- limit: Highest \"legal\" index
- add: Number to add or subtract from i
"""
return (i + limit + add) % limit
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
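A quick sanity check of the wrap-around indexing (the helper is repeated here so the snippet stands alone):

```python
def periodic(i, limit, add):
    """Index with periodic (wrap-around) boundary conditions."""
    return (i + limit + add) % limit

# The left neighbour of the first site wraps to the last site, and the
# right neighbour of the last site wraps back to the first.
left_of_first = periodic(0, 10, -1)
right_of_last = periodic(9, 10, +1)
```

Adding `limit` before taking the modulus keeps the result non-negative even when `i + add` is negative.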
Set up spin matrix, initialize to ground state
|
size = 256 # L_x
temp = 10. # temperature T
spin_matrix = np.zeros( (size,size), np.int8) + 1
spin_matrix
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Create and initialize variables
|
E = M = 0
E_av = E2_av = M_av = M2_av = Mabs_av = 0
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Setup array for possible energy changes
|
w = np.zeros(17, np.float64)
for de in xrange(-8,9,4):
print de
w[de+8] = math.exp(-de/temp)
print w
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Calculate initial magnetization
|
M = spin_matrix.sum()
print(M)
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Calculate initial energy
|
# In Python 3, range is a lazy sequence object, so iterating over
# range(size) does not materialize a full list in memory.
for j in range(size):
    for i in range(size):
        E -= spin_matrix.item(i,j) * (spin_matrix.item(periodic(i,size,-1),j) + spin_matrix.item(i,periodic(j,size,1)))
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Metropolis MonteCarlo computation, 1 single step or iteration, done explicitly:
|
x = int(np.random.random()*size)
print(x)
y = int(np.random.random()*size)
print(y)
deltaE = 2*spin_matrix.item(x,y) * \
    (spin_matrix.item(periodic(x,size,-1),y) + spin_matrix.item(periodic(x,size,1),y) + \
     spin_matrix.item(x,periodic(y,size,-1)) + spin_matrix.item(x,periodic(y,size,1)))
print(deltaE)
print( w[deltaE + 8] )
np.random.random()
print( np.random.random() <= w[deltaE+8])
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Accept (if True)!
|
print( spin_matrix[x,y] )
print( spin_matrix.item(x,y) )
spin_matrix[x,y] *= -1
M += 2*spin_matrix[x,y]
E += deltaE
print(spin_matrix.item(x,y))
print(M)
print(E)
import pygame
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Initialize (all spins up), explicitly shown
|
Lx=256; Ly=256
spin_matrix = np.zeros((Lx,Ly),np.int8)
print(spin_matrix.shape)
spin_matrix.fill(1)
spin_matrix
def initialize_allup( spin_matrix, J=1.0 ):
Lx,Ly = spin_matrix.shape
spin_matrix.fill(1)
M = spin_matrix.sum()
# Calculate initial energy
E=0
    for j in range(Ly):
        for i in range(Lx):
            E += (-J)*spin_matrix.item(i,j) * \
                (spin_matrix.item(periodic(i,Lx,+1),j) + spin_matrix.item(i,periodic(j,Ly,1)) )
    print("M: ", M, " E: ", E)
return E,M
E,M = initialize_allup( spin_matrix)
def initialize_allup1( spin_matrix, J=1.0 ):
Lx,Ly = spin_matrix.shape
spin_matrix.fill(1)
M = spin_matrix.sum()
# Calculate initial energy
E=0
    for j in range(Ly):
        for i in range(Lx):
            E -= J*spin_matrix.item(i,j) * \
                (spin_matrix.item(periodic(i,Lx,-1),j) + spin_matrix.item(i,periodic(j,Ly,1)) )
    print("M: ", M, " E: ", E)
return E,M
E,M = initialize_allup( spin_matrix)
Lx=512; Ly=512
spin_matrix = np.zeros((Lx,Ly),np.int8)
E,M = initialize_allup1( spin_matrix)
E,M = initialize_allup( spin_matrix)
Lx=1024; Ly=1024
print(Lx*Ly)
spin_matrix = np.zeros((Lx,Ly),np.int8)
E,M = initialize_allup1( spin_matrix)
E,M = initialize_allup( spin_matrix)
math.pow(2,31)
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Setup array for possible energy changes
|
temp = 1.0
w = np.zeros(17,np.float32)
for de in range(-8, 9, 4): # include +8
    w[de+8] = math.exp(-de/temp)
print(w)
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Importing from the script ising2dim.py
|
import os
print(os.getcwd())
print(os.listdir( os.getcwd() ))
sys.path.append('./')
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Reading out data from ./IsingGPU/FileIO/output.h
Data is generated by the parallel Metropolis algorithm in CUDA C++ in the subdirectory ./IsingGPU/data/, which is done by the function process_avgs in ./IsingGPU/FileIO/output.h. The values are saved as a character array, which can then be read in as a NumPy array of float32 values. Be sure to declare the dtype as float32.
|
avgsresults_GPU = np.fromfile("./IsingGPU/data/IsingMetroGPU.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape(201,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
ax.scatter( T, E_avg)
plt.show()
Evar_avg = avgsresults_GPU[:,2]
plt.scatter( T, Evar_avg)
plt.show()
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
#fig = plt.figure()
#ax = fig.add_subplot(4,1,1)
plt.scatter( T, M_avg)
#fig.add_subplot(4,1,2)
#plt.scatter(T,Mvar_avg)
#fig.add_subplot(4,1,3)
#plt.scatter(T,absM_avg)
#fig.add_subplot(4,1,4)
#plt.scatter(T,M4_avg)
plt.show()
plt.scatter(T,Mvar_avg)
plt.show()
plt.scatter(T,absM_avg)
plt.show()
plt.scatter(T,M4_avg)
plt.show()
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
For a
2^10 x 2^10 (1024 x 1024) grid; 50000 trials; temperatures T = 1.0, 1.005, ..., 3.0 (step of 0.005), i.e. 400 different temperatures; and a 32 x 32 thread block,
|
avgsresults_GPU = np.fromfile("./IsingGPU/data/IsingMetroGPU.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size//7 ,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
Evar_avg = avgsresults_GPU[:,2]
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
ax.scatter( T, E_avg)
plt.show()
plt.scatter( T, Evar_avg)
plt.show()
plt.scatter( T, M_avg)
plt.show()
plt.scatter(T,Mvar_avg)
plt.show()
plt.scatter(T,absM_avg)
plt.show()
plt.scatter(T,M4_avg)
plt.show()
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
From drafts
|
avgsresults_GPU = np.fromfile("./IsingGPU/drafts/data/IsingMetroGPU.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size//7 ,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
Evar_avg = avgsresults_GPU[:,2]
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
ax.scatter( T, E_avg)
plt.show()
avgsresults_GPU = np.fromfile("./IsingGPU/drafts/IsingGPU/data/IsingMetroGPU_runs10.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size//7 ,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
Evar_avg = avgsresults_GPU[:,2]
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
ax.scatter( T, E_avg)
plt.show()
plt.scatter( T, Evar_avg)
plt.show()
plt.scatter( T, M_avg)
plt.show()
plt.scatter(T,Mvar_avg)
plt.show()
plt.scatter(T,absM_avg)
plt.show()
plt.scatter(T,M4_avg)
plt.show()
avgsresults_GPU = np.fromfile("./IsingGPU/drafts/IsingGPU/data/IsingMetroGPU.bin",dtype=np.float32)
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
avgsresults_GPU = avgsresults_GPU.reshape( avgsresults_GPU.size//7 ,7) # 7 different averages
print(avgsresults_GPU.shape)
print(avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
Evar_avg = avgsresults_GPU[:,2]
M_avg = avgsresults_GPU[:,3]
Mvar_avg = avgsresults_GPU[:,4]
absM_avg = avgsresults_GPU[:,5]
M4_avg = avgsresults_GPU[:,6]
ax.scatter( T, E_avg)
plt.show()
plt.scatter( T, Evar_avg)
plt.show()
plt.scatter( T, M_avg)
plt.show()
plt.scatter(T,Mvar_avg)
plt.show()
plt.scatter(T,absM_avg)
plt.show()
plt.scatter(T,M4_avg)
plt.show()
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
From CLaigit
|
avgsresults_GPU = []
for temp in range(10,31,2):
avgsresults_GPU.append( np.fromfile("./data/ising2d_CLaigit" + str(temp) + ".bin",dtype=np.float64) )
avgsresults_GPU = np.array( avgsresults_GPU)
print( avgsresults_GPU.shape, avgsresults_GPU.size)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
T = avgsresults_GPU[:,0]
E_avg = avgsresults_GPU[:,1]
M_avg = avgsresults_GPU[:,2]
heat_cap_avg = avgsresults_GPU[:,3]
mag_sus_avg = avgsresults_GPU[:,4]
ax.scatter( T, E_avg)
plt.show()
plt.scatter( T, M_avg)
plt.show()
plt.scatter( T, heat_cap_avg)
plt.show()
plt.scatter( T, mag_sus_avg)
plt.show()
|
Cpp/Ising/ising.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
|
class_name = "Nishanth Koganti"
message = class_name + " is awesome!"
message
|
startup.ipynb
|
buntyke/DataAnalysis
|
mit
|
Here, we follow the reasoning presented by Webster (1904) for analyzing the ellipsoidal coordinate $\lambda$ describing an oblate ellipsoid. Let's consider an ellipsoid with semi-axes $a$, $b$, $c$ oriented along the $x$-, $y$-, and $z$-axis, respectively, where $0 < a < b = c$. This ellipsoid is defined by the following equation:
<a id='eq1'></a>
$$
\frac{x^{2}}{a^{2}} + \frac{y^{2} + z^{2}}{b^{2}} = 1 \: . \tag{1}
$$
A quadric surface which is confocal with the ellipsoid defined in equation 1 can be described as follows:
<a id='eq2'></a>
$$
\frac{x^{2}}{a^{2} + \rho} + \frac{y^{2} + z^{2}}{b^{2} + \rho}= 1 \: , \tag{2}
$$
where $\rho$ is a real number. We know that equation 2 represents an ellipsoid for $\rho$ satisfying the condition
<a id='eq3'></a>
$$
\rho + a^{2} > 0 \: . \tag{3}
$$
Given $a$, $b$, and a $\rho$ satisfying equation 3, we may use equation 2 to determine a set of points $(x, y, z)$ lying on the surface of an ellipsoid confocal with the one defined in equation 1. Now, consider the problem of determining the ellipsoid which is confocal with the one defined in equation 1 and passes through a particular point $(x, y, z)$. This problem consists of determining the real number $\rho$ that, given $a$, $b$, $x$, $y$, and $z$, satisfies equation 2.
By rearranging equation 2, we obtain the following quadratic equation for $\rho$:
$$
f(\rho) = (a^{2} + \rho)(b^{2} + \rho) - (b^{2} + \rho) \, x^{2}
- (a^{2} + \rho) \, (y^{2} + z^{2}) \: .
$$
This equation shows that:
$$
f(\rho) \begin{cases}
> 0 \: &, \quad \rho \to \infty \\
< 0 \: &, \quad \rho = -a^{2} \\
> 0 \: &, \quad \rho = -b^{2}
\end{cases} \: .
$$
By expanding this expression, we obtain a simpler form given by:
<a id='eq4'></a>
$$
f(\rho) = p_{2} \, \rho^{2} + p_{1} \, \rho + p_{0} \: , \tag{4}
$$
where
<a id='eq5'></a>
$$
p_{2} = 1 \: , \tag{5}
$$
<a id='eq6'></a>
$$
p_{1} = a^{2} + b^{2} - x^{2} - y^{2} - z^{2} \tag{6}
$$
and
<a id='eq7'></a>
$$
p_{0} = a^{2} \, b^{2} - b^{2} \, x^{2} - a^{2} \, y^{2} - a^{2} \, z^{2} \: . \tag{7}
$$
Note that a particular $\rho$ satisfying equation 2 results in $f(\rho) = 0$ (equation 4).
In order to illustrate the parameter $\rho$, consider the constants $a$, $b$, $x$, $y$, and $z$ given in the cell below:
|
a = 11.
b = 20.
x = 21.
y = 23.
z = 30.
|
code/lambda_oblate_ellipsoids.ipynb
|
pinga-lab/magnetic-ellipsoid
|
bsd-3-clause
|
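The next cell evaluates $f(\rho)$ using the coefficients p2, p1 and p0, which are not defined in the snippets shown here; following equations 5, 6 and 7, they can be computed from the constants above (repeated so the snippet stands alone):

```python
a = 11.
b = 20.
x = 21.
y = 23.
z = 30.

# Coefficients of the quadratic f(rho), equations 5-7.
p2 = 1.
p1 = a**2 + b**2 - x**2 - y**2 - z**2
p0 = a**2 * b**2 - b**2 * x**2 - a**2 * y**2 - a**2 * z**2
```

For these particular constants, p1 = -1349 and p0 = -300909.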
Next, we define a set of values for the variable $\rho$ in an interval $\left[ \rho_{min} \, , \rho_{max} \right]$ and evaluate the quadratic equation $f(\rho)$ (equation 4).
|
rho_min = -b**2 - 500.
rho_max = -a**2 + 2500.
rho = np.linspace(rho_min, rho_max, 100)
f = p2*(rho**2) + p1*rho + p0
|
code/lambda_oblate_ellipsoids.ipynb
|
pinga-lab/magnetic-ellipsoid
|
bsd-3-clause
|
Finally, the cell below shows the quadratic equation $f(\rho)$ (equation 4) evaluated in the range $\left[ \rho_{min} \, , \rho_{max} \right]$ defined above.
|
ymin = np.min(f) - 0.1*(np.max(f) - np.min(f))
ymax = np.max(f) + 0.1*(np.max(f) - np.min(f))
plt.close('all')
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot([rho_min, rho_max], [0., 0.], 'k-')
plt.plot([-a**2, -a**2], [ymin, ymax], 'r--', label = '$-a^{2}$')
plt.plot([-b**2, -b**2], [ymin, ymax], 'g--', label = '$-b^{2}$')
plt.plot(rho, f, 'k-', linewidth=2.)
plt.xlim(rho_min, rho_max)
plt.ylim(ymin, ymax)
plt.legend(loc = 'best')
plt.xlabel('$\\rho$', fontsize = 20)
plt.ylabel('$f(\\rho)$', fontsize = 20)
plt.subplot(1,2,2)
plt.plot([rho_min, rho_max], [0., 0.], 'k-')
plt.plot([-a**2, -a**2], [ymin, ymax], 'r--', label = '$-a^{2}$')
plt.plot([-b**2, -b**2], [ymin, ymax], 'g--', label = '$-b^{2}$')
plt.plot(rho, f, 'k-', linewidth=2.)
plt.xlim(-600., 100.)
plt.ylim(-0.3*10**6, 10**6)
plt.legend(loc = 'best')
plt.xlabel('$\\rho$', fontsize = 20)
#plt.ylabel('$f(\\rho)$', fontsize = 20)
plt.tight_layout()
plt.show()
|
code/lambda_oblate_ellipsoids.ipynb
|
pinga-lab/magnetic-ellipsoid
|
bsd-3-clause
|
Plotting
according to equations (2.15) and (2.16) of Andrew Hansen's thesis:
$TEC = (P2-P1)/(f1^2/f2^2 - 1)$
and
$TEC = -(L2-L1)/(f1^2/f2^2 - 1)$
theoretically they should be the same
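A small sketch of the two estimators (function names are hypothetical; the frequencies are the standard GPS L1/L2 carriers in MHz). With ideal, noise-free inputs the two formulas yield the same differential delay:

```python
f1 = 1575.42  # L1 carrier, MHz
f2 = 1227.60  # L2 carrier, MHz

def tec_from_pseudorange(p1, p2):
    """Differential-delay TEC (in metres at L1) from pseudoranges."""
    return (p2 - p1) / (f1**2 / f2**2 - 1.0)

def tec_from_phase(l1, l2):
    """Same quantity from carrier-phase ranges; note the sign flip,
    since phase is advanced while the pseudorange is delayed."""
    return -(l2 - l1) / (f1**2 / f2**2 - 1.0)

# Made-up, noise-free measurements: a +4 m group delay on L2 relative
# to L1, and the corresponding -4 m phase advance.
tec_pr = tec_from_pseudorange(20000000.0, 20000004.0)
tec_ph = tec_from_phase(20000000.0, 19999996.0)
```

Real measurements differ because of noise, multipath, and (for phase) the unknown integer ambiguities, which is what the plots below explore.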
|
# satellites in the file
data.items
# parameters in the file
# https://igscb.jpl.nasa.gov/igscb/data/format/rinex211.txt
# section 10.1.1 says what the letters mean
data.major_axis
f1 = 1575.42 #MHz
f2 = 1227.6 #MHz
f5 = 1176.45 #MHz
sv_of_interest = 27
fig = plt.figure(figsize=(10,10))
ax1 = plt.subplot(212)
fmt = DateFormatter('%H:%M:%S')
ax1.xaxis.set_major_formatter(fmt)
ax1.autoscale_view()
plt.xlabel('Time')
plt.ylabel('TEC (in meters at L1?)')
plt.title('Pseudorange Calculated TEC')
tec_from_pr1 = (data[:,sv_of_interest,'C2','data']-data[:,sv_of_interest,'C1','data'])/(f1**2/f2**2 - 1)
tec_from_pr2 = (data[:,sv_of_interest,'C5','data']-data[:,sv_of_interest,'C1','data'])/(f1**2/f5**2 - 1)
plt.plot(tec_from_pr1)
plt.plot(tec_from_pr2)
ax2 = plt.subplot(211, sharex=ax1)
plt.ylabel('TEC (in meters at L1?)')
plt.title('Phase Advance Calculated TEC')
tec_from_ph1 = -1*(data[:,sv_of_interest,'L2','data']-data[:,sv_of_interest,'L1','data'])/(f1**2/f2**2 - 1)
tec_from_ph2 = -1*(data[:,sv_of_interest,'L5','data']-data[:,sv_of_interest,'L1','data'])/(f1**2/f5**2 - 1)
plt.plot(tec_from_ph1)
plt.plot(tec_from_ph2)
plt.show()
|
Examples/.ipynb_checkpoints/ReadRinex Demo-checkpoint.ipynb
|
gregstarr/PyGPS
|
agpl-3.0
|
So the TEC is off by a large factor on the pseudorange graph; I'm not sure where that's coming from, since I followed the equation from the thesis. The difference in pseudorange between the frequencies is very small. Is that how it's supposed to be, and does it need to be multiplied by a constant, or is the data off? The file I used doesn't have P1, only P2, but I compared P2 to C2 and they are the same, so C1, C2 and C5 should work the same way.
|
fig2 = plt.figure(figsize = (10,10))
ax = plt.subplot()
ax.xaxis.set_major_formatter(fmt)
ax.autoscale_view()
plt.xlabel('time')
plt.ylabel('pseudorange (m)')
plt.title('comparison of C2 and P2')
plt.plot(data[:,sv_of_interest,'P2','data'])
plt.plot(data[:,sv_of_interest,'C2','data'])
plt.show()
|
Examples/.ipynb_checkpoints/ReadRinex Demo-checkpoint.ipynb
|
gregstarr/PyGPS
|
agpl-3.0
|
Create a linear stream of 10 million points between -50 and 50.
|
x = np.arange(-50,50,0.00001)
x.shape
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Create random noise of same dimension
|
bias = np.random.standard_normal(x.shape)
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Define the function
|
y2 = np.cos(x)**3 * (x**2/max(x)) + bias*5
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Train test split
|
x_train, x_test, y_train, y_test = train_test_split(x,y2, test_size=0.3)
x_train.shape
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Plotting algorithms cannot work with millions of points, so you downsample just for plotting
|
stepper = int(x_train.shape[0]/1000)
stepper
fig, ax = plt.subplots(1,1, figsize=(13,8))
ax.scatter(x[::stepper],y2[::stepper], marker='d')
ax.set_title('Distribution of training points')
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Curve fitting
Let us define a function that tries to fit the training data. It starts with a low polynomial order and sequentially increases the complexity of the model. The hope is that somewhere in this range lies the sweet spot of low bias and low variance; we will find it empirically.
|
def greedy_fitter(x_train, y_train, x_test, y_test, max_order=25):
    """Fitter will try to find the best order of
    polynomial curve fit for the given synthetic data"""
    import time
    train_predictions = []
    train_rmse = []
    test_predictions = []
    test_rmse = []
    for order in range(1, max_order+1):
        t1 = time.time()
        coeff = np.polyfit(x_train, y_train, deg=order)
        # np.polyval evaluates the polynomial; np.polyfit returns the
        # highest-order coefficient first, which is the order polyval expects.
        y_predict = np.polyval(coeff, x_train)
        train_predictions.append(y_predict)
        # find training errors
        current_train_rmse = np.sqrt(mean_squared_error(y_train, y_predict))
        train_rmse.append(current_train_rmse)
        # predict and find test errors
        y_predict_test = np.polyval(coeff, x_test)
        test_predictions.append(y_predict_test)
        current_test_rmse = np.sqrt(mean_squared_error(y_test, y_predict_test))
        test_rmse.append(current_test_rmse)
        t2 = time.time()
        elapsed = round(t2 - t1, 3)
        print("Elapsed: " + str(elapsed) +
              "s Order: " + str(order) +
              " Train RMSE: " + str(round(current_train_rmse, 4)) +
              " Test RMSE: " + str(round(current_test_rmse, 4)))
    return (train_predictions, train_rmse, test_predictions, test_rmse)
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Run the model. Change the max_order to higher or lower if you wish
|
%%time
complexity=50
train_predictions, train_rmse, test_predictions, test_rmse = greedy_fitter(
x_train, y_train, x_test, y_test, max_order=complexity)
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Plot results
How well did the models fit against training data?
Training results
|
%%time
fig, axes = plt.subplots(1,1, figsize=(15,15))
axes.scatter(x_train[::stepper], y_train[::stepper],
label='Original data', color='gray', marker='x')
order=1
for p, r in zip(train_predictions, train_rmse):
axes.scatter(x_train[::stepper], p[::stepper],
label='O: ' + str(order) + " RMSE: " + str(round(r,2)),
marker='.')
order+=1
axes.legend(loc=0)
axes.set_title('Performance against training data')
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Test results
|
%%time
fig, axes = plt.subplots(1,1, figsize=(15,15))
axes.scatter(x_test[::stepper], y_test[::stepper],
label='Test data', color='gray', marker='x')
order=1
for p, r in zip(test_predictions, test_rmse):
axes.scatter(x_test[::stepper], p[::stepper],
label='O: ' + str(order) + " RMSE: " + str(round(r,2)),
marker='.')
order+=1
axes.legend(loc=0)
axes.set_title('Performance against test data')
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Bias vs Variance
|
ax = plt.plot(np.arange(1,complexity+1),test_rmse)
plt.title('Bias vs Complexity'); plt.xlabel('Order of polynomial'); plt.ylabel('Test RMSE')
ax[0].axes.get_yaxis().get_major_formatter().set_useOffset(False)
plt.savefig('Model efficiency.png')
|
ml/curve_fitting_model-complexity-vs-accuracy.ipynb
|
AtmaMani/pyChakras
|
mit
|
Playing with the Convenience Functions
First, we're going to see how we can access ARF and RMF from the convenience functions.
Let's set up a data set:
|
import sherpa.astro.ui
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
Load in the data with the convenience function:
|
sherpa.astro.ui.load_data("../data/Chandra/js_spec_HI1_IC10X1_5asB1_jsgrp.pi")
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
If there is a grouping, get rid of it, because we don't like groupings (except for Mike Nowak).
|
sherpa.astro.ui.ungroup()
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
This method gets the data and stores it in an object:
|
d = sherpa.astro.ui.get_data()
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
In case we need them for something, this is how we get ARF and RMF objects:
|
arf = d.get_arf()
rmf = d.get_rmf()
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
Next, we'd like to play around with a model.
Let's set this up based on the XSPEC model I got from Jack:
|
sherpa.astro.ui.set_xsabund("angr")
sherpa.astro.ui.set_xsxsect("bcmc")
sherpa.astro.ui.set_xscosmo(70,0,0.73)
sherpa.astro.ui.set_xsxset("delta", "0.01")
sherpa.astro.ui.set_model("xstbabs.a1*xsdiskbb.a2")
print(sherpa.astro.ui.get_model())
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
We can get the fully specified model and store it in an object like this:
|
m = sherpa.astro.ui.get_model()
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
Here's how you can set parameters. Note that this changes the state of the object (boo!)
|
sherpa.astro.ui.set_par(a1.nH,0.01)
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
Actually, we'd like to change the state of the object directly rather than using the convenience function, which works like this:
|
m._set_thawed_pars([0.01, 2, 0.01])
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
Now we're ready to evaluate the model and apply RMF/ARF to it. This is actually a method on the data object, not the model object. It returns an array:
|
model_counts = d.eval_model(m)
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
Let's plot the results:
|
plt.figure()
plt.plot(rmf.e_min, d.counts)
plt.plot(rmf.e_min, model_counts)
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
Let's set the model parameters to the fit results from XSPEC:
|
m._set_thawed_pars([0.313999, 1.14635, 0.0780871])
model_counts = d.eval_model(m)
plt.figure()
plt.plot(rmf.e_min, d.counts)
plt.plot(rmf.e_min, model_counts, lw=3)
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
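Conceptually, `eval_model` forward-folds the model through the responses: the source flux is multiplied by the effective area (ARF) and exposure, then redistributed across detector channels by the response matrix (RMF). A toy NumPy version with made-up array sizes; this sketches the idea, not Sherpa's actual internals:

```python
import numpy as np

def fold_model(flux, arf, rmf_matrix, exposure):
    """Forward-fold a model flux through toy ARF/RMF responses.

    flux       : model photon flux per energy bin  (n_energy,)
    arf        : effective area per energy bin     (n_energy,)
    rmf_matrix : redistribution matrix             (n_channel, n_energy)
    exposure   : exposure time in seconds
    """
    expected = flux * arf * exposure   # photons detected per energy bin
    return rmf_matrix @ expected       # smear into detector channels
```

With an identity RMF the folded counts reduce to `flux * arf * exposure`, which is a handy sanity check.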
MCMC by hand
Just for fun, we're going to use emcee directly to sample from this model.
Let's first define a posterior object:
|
from scipy.special import gammaln as scipy_gammaln

logmin = -100000000.0

class PoissonPosterior(object):
    def __init__(self, d, m):
        self.data = d
        self.model = m
        return

    def loglikelihood(self, pars, neg=False):
        self.model._set_thawed_pars(pars)
        mean_model = self.data.eval_model(self.model)
        # Add a tiny floor so that log(mean_model) never hits -infinity.
        mean_model += np.exp(-20.)
        res = np.nansum(-mean_model + self.data.counts * np.log(mean_model) \
                        - scipy_gammaln(self.data.counts + 1.))
        if not np.isfinite(res):
            res = logmin
        if neg:
            return -res
        else:
            return res

    def logprior(self, pars):
        nh = pars[0]
        p_nh = ((nh > 0.0) & (nh < 10.0))
        tin = pars[1]
        p_tin = ((tin > 0.0) & (tin < 5.0))
        lognorm = np.log(pars[2])
        p_norm = ((lognorm > -10.0) & (lognorm < 10.0))
        logp = np.log(p_nh * p_tin * p_norm)
        if not np.isfinite(logp):
            return logmin
        else:
            return logp

    def logposterior(self, pars, neg=False):
        lpost = self.loglikelihood(pars) + self.logprior(pars)
        if neg is True:
            return -lpost
        else:
            return lpost

    def __call__(self, pars, neg=False):
        return self.logposterior(pars, neg)
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
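The Poisson log-likelihood in `loglikelihood` is just the sum of Poisson log-PMFs, since `gammaln(k + 1)` is `log(k!)`. A quick, standalone sanity check of the formula against `scipy.stats.poisson`, with invented counts and rates:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

counts = np.array([0, 1, 3, 7, 12])
mean_model = np.array([0.5, 1.2, 2.8, 6.9, 11.4])

# Same expression as in PoissonPosterior.loglikelihood:
loglike = np.sum(-mean_model + counts * np.log(mean_model)
                 - gammaln(counts + 1.0))

# Reference value from scipy's Poisson distribution:
reference = poisson.logpmf(counts, mean_model).sum()
```

The two values agree to floating-point precision, which confirms the hand-written expression.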
Now we can define a posterior object with the data and model objects:
|
lpost = PoissonPosterior(d, m)
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
Can we compute the posterior probability of some parameters?
|
print(lpost([0.1, 0.1, 0.1]))
print(lpost([0.313999, 1.14635, 0.0780871]))
|
notebooks/SherpaResponses.ipynb
|
eblur/clarsach
|
gpl-3.0
|
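The notebook stops before the actual `emcee` call. As a hedged sketch of the sampling step, here is a tiny random-walk Metropolis sampler, a plain substitute for emcee, demonstrated on a toy Gaussian log-posterior rather than `lpost` (which needs the live Sherpa session):

```python
import numpy as np

def metropolis(logpost, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampling from an unnormalized log-posterior."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = logpost(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + rng.normal(0.0, step, size=x.size)   # symmetric proposal
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject step
            x, lp = prop, lp_prop
        chain[i] = x                                    # record current state
    return chain
```

With the real posterior, the call would look like `metropolis(lpost, [0.3, 1.1, 0.08])`; emcee's ensemble sampler plays the same role with many interacting walkers.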