[STATEMENT]
lemma rquot_D: "x \<preceq>\<^sub>R y \<Longrightarrow> z = rquot y x \<Longrightarrow> D x z"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>x \<preceq>\<^sub>R y; z = rquot y x\<rbrakk> \<Longrightarrow> D x z
[PROOF STEP]
using gR_rel_def rquot_prop
[PROOF STATE]
proof (prove)
using this:
(?x \<preceq>\<^sub>R ?y) = (\<exists>z. D ?x z \<and> ?x \<oplus> z = ?y)
D ?x ?z \<and> ?y = ?x \<oplus> ?z \<Longrightarrow> ?z = rquot ?y ?x
goal (1 subgoal):
1. \<lbrakk>x \<preceq>\<^sub>R y; z = rquot y x\<rbrakk> \<Longrightarrow> D x z
[PROOF STEP]
by force
|
{"llama_tokens": 256, "file": "PSemigroupsConvolution_Partial_Semigroups", "length": 2}
|
\documentclass[../main.tex]{subfiles}
\begin{document}
\subsubsection{Comparing different radii}
The first objective we set was to determine the optimal beacon range.
To this end we ran all datasets, ceteris paribus, with beacon radii from 0.25 to 3 in increments of 0.25, plus an additional radius of 0.1.
Figure~\ref{fig:rad} shows the results of these tests, averaged across all datasets.
The figure clearly indicates a minimum at a beacon radius of 0.75.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{radiusaveragecost.png}
\caption{The average cost taken over all datasets for each tested beacon radius.}
\label{fig:rad}
\end{figure}
\subsubsection{Comparing Exchange}
One of the objectives is to find out whether exchanging parcels between agents is beneficial with respect to the objective function.
In figure \ref{fig:exch} the results of running all data sets, ceteris paribus, with and without exchanges enabled are displayed.
There is no clear difference between the two choices: only 8 of the 15 datasets yield better results with exchanges enabled.
Nor do the averages in Table~\ref{tab:avgstrat} show a clear difference between the two cases.
Note however that this could be due to the way we implemented the exchange.
The implementation is quite naive and attempts to exchange very often.
An improvement would be to consider how much distance the parcels held by both agents would require, and to exchange only if the new clustering improves this number.
Of course, the distance driven to perform the exchange itself would also have to be taken into account.
This would likely eliminate a large number of unnecessary exchanges and reduce the cost.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{exchangecomp.png}
\caption{Visual representation of comparing experiments with and without exchanges between agents.}
\label{fig:exch}
\end{figure}
\subsubsection{Comparing Delivery Strategies}
The third and last objective was determining how a delivery strategy improves the solution.
In Figures~\ref{fig:strat_ex} and~\ref{fig:strat_noex} we can clearly see that NearestOnTimeDeliveryStrategy is, outliers excluded, consistently better.
This is also easy to see in Table~\ref{tab:avgstrat}, where it has the lowest average in both the exchange and the no-exchange case.
NearestDeliveryStrategy comes in second place, although its margin over last place is less clear than the margin of first place. Unfortunately, the differences in Table~\ref{tab:avgstrat} are negligible and therefore not statistically significant.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{exchangestrategy.png}
\caption{Average cost per delivery strategy, taken over all datasets, with exchanges enabled.}
\label{fig:strat_ex}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{noexchangestrategy.png}
\caption{Average cost per delivery strategy, taken over all datasets, with exchanges disabled.}
\label{fig:strat_noex}
\end{figure}
\begin{table}
\begin{tabular}{lcc}
\toprule
& Without Exchange & With Exchange \\
\midrule
EarliestDeadlineStrategy & 5366 & 5666.8 \\
NearestDeliveryStrategy & 5427.867 & 5558.357 \\
NearestOnTimeDeliveryStrategy & 5323.929 & 5189.615 \\
\bottomrule
\end{tabular}
\caption{Average cost of the different delivery strategies across all data sets.}
\label{tab:avgstrat}
\end{table}
\end{document}
|
{"hexsha": "4ee20fc5ec4558210183e259a04a95d3c44f0b09", "size": 3469, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/subfiles/results.tex", "max_stars_repo_name": "JDevlieghere/MAS", "max_stars_repo_head_hexsha": "b00d2e5fa487b74a31f4092086d94ed01e348822", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-04-27T13:48:02.000Z", "max_stars_repo_stars_event_max_datetime": "2016-04-27T13:48:02.000Z", "max_issues_repo_path": "report/subfiles/results.tex", "max_issues_repo_name": "JDevlieghere/MAS", "max_issues_repo_head_hexsha": "b00d2e5fa487b74a31f4092086d94ed01e348822", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/subfiles/results.tex", "max_forks_repo_name": "JDevlieghere/MAS", "max_forks_repo_head_hexsha": "b00d2e5fa487b74a31f4092086d94ed01e348822", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.5606060606, "max_line_length": 258, "alphanum_fraction": 0.7993658115, "num_tokens": 840}
|
import scipy.signal
import scipy.fftpack as fftpack
import numpy as np
def sin(f, fs, time):
    # Generate `time` seconds of a sine wave of frequency `f` sampled at `fs`.
    x = np.linspace(0, 2*np.pi*f*time, fs*time)
    return np.sin(x)
def downsample(signal, fs1=0, fs2=0, alpha=0, mod='just_down'):
    # Downsample by an integer factor `alpha` (or derive it from fs1/fs2),
    # either by plain decimation or by block averaging.
    if alpha == 0:
        alpha = int(fs1/fs2)
    if mod == 'just_down':
        return signal[::alpha]
    elif mod == 'avg':
        result = np.zeros(int(len(signal)/alpha))
        for i in range(int(len(signal)/alpha)):
            result[i] = np.mean(signal[i*alpha:(i+1)*alpha])
        return result
def medfilt(signal, x):
    # Median filter with kernel size `x`.
    return scipy.signal.medfilt(signal, x)

def cleanoffset(signal):
    # Remove the DC offset.
    return signal - np.mean(signal)

def bpf_fir(signal, fs, fc1, fc2, numtaps=101):
    # FIR band-pass filter with pass band [fc1, fc2] Hz.
    b = scipy.signal.firwin(numtaps, [fc1, fc2], pass_zero=False, fs=fs)
    result = scipy.signal.lfilter(b, 1, signal)
    return result
def fft_filter(signal, fs, fc=[], type='bandpass'):
    '''
    signal: Signal
    fs: Sampling frequency
    fc: [fc1, fc2, ...] Cut-off frequencies (pairs of band edges)
    type: bandpass | bandstop
    '''
    k = []
    N = len(signal)
    # Convert the cut-off frequencies to FFT bin indices.
    for i in range(len(fc)):
        k.append(int(fc[i]*N/fs))
    # FFT
    signal_fft = scipy.fftpack.fft(signal)
    # Build a binary mask over the two-sided spectrum (frequency truncation).
    if type == 'bandpass':
        a = np.zeros(N)
        for i in range(int(len(fc)/2)):
            a[k[2*i]:k[2*i+1]] = 1
            a[N-k[2*i+1]:N-k[2*i]] = 1
    elif type == 'bandstop':
        a = np.ones(N)
        for i in range(int(len(fc)/2)):
            a[k[2*i]:k[2*i+1]] = 0
            a[N-k[2*i+1]:N-k[2*i]] = 0
    signal_fft = a*signal_fft
    signal_ifft = scipy.fftpack.ifft(signal_fft)
    result = signal_ifft.real
    return result
def rms(signal):
    # Root-mean-square value of the signal.
    signal = signal.astype('float64')
    return np.mean(signal*signal)**0.5

def energy(signal, kernel_size, stride, padding=0):
    # Sliding-window RMS energy with the given window size and stride.
    _signal = np.zeros(len(signal)+padding)
    _signal[0:len(signal)] = signal
    signal = _signal
    out_len = int((len(signal)+1-kernel_size)/stride)
    energy = np.zeros(out_len)
    for i in range(out_len):
        energy[i] = rms(signal[i*stride:i*stride+kernel_size])
    return energy
def signal2spectrum(data, window_size, stride, n_downsample=1, log=True, log_alpha=0.1):
    # STFT magnitude spectrogram; window: ('tukey', 0.5) or 'hann'.
    if n_downsample != 1:
        data = downsample(data, alpha=n_downsample)
    zxx = scipy.signal.stft(data, window='hann', nperseg=window_size, noverlap=window_size-stride)[2]
    spectrum = np.abs(zxx)
    if log:
        spectrum = np.log1p(spectrum)
        # Re-map the frequency axis onto a logarithmic scale.
        h = window_size//2 + 1
        x = np.linspace(h*log_alpha, h-1, num=h+1, dtype=np.int64)
        index = (np.log1p(x)-np.log1p(h*log_alpha))/(np.log1p(h)-np.log1p(h*log_alpha))*h
        spectrum_new = np.zeros_like(spectrum)
        for i in range(h):
            spectrum_new[int(index[i]):int(index[i+1])] = spectrum[i]
        spectrum = spectrum_new
        # Rough normalization (empirical constants).
        spectrum = (spectrum-0.05)/0.25
    else:
        spectrum = (spectrum-0.02)/0.05
    return spectrum
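
# ---------------------------------------------------------------------------
# Hedged usage sketch (added for illustration; not part of the original
# module). The sampling rate, tone frequencies and band edges below are
# arbitrary assumptions chosen only to exercise the functions above.
if __name__ == '__main__':
    fs = 1000                                   # assumed sampling rate (Hz)
    sig = sin(50, fs, 1) + 0.5*sin(120, fs, 1)  # 1 s of a 50 Hz + 120 Hz mix
    sig = cleanoffset(sig)
    # Keep only the 40-60 Hz band, which should isolate the 50 Hz tone.
    filtered = fft_filter(sig, fs, fc=[40, 60], type='bandpass')
    print('RMS before: %.3f, after: %.3f' % (rms(sig), rms(filtered)))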
|
{"hexsha": "c525c02c602fdd4e1753775dcc277abfb909af62", "size": 2969, "ext": "py", "lang": "Python", "max_stars_repo_path": "util/dsp.py", "max_stars_repo_name": "HiYKY/candock", "max_stars_repo_head_hexsha": "fdbfced6f91f1d9a264bd6cdf9f957c03ec5d5d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "util/dsp.py", "max_issues_repo_name": "HiYKY/candock", "max_issues_repo_head_hexsha": "fdbfced6f91f1d9a264bd6cdf9f957c03ec5d5d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "util/dsp.py", "max_forks_repo_name": "HiYKY/candock", "max_forks_repo_head_hexsha": "fdbfced6f91f1d9a264bd6cdf9f957c03ec5d5d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-21T03:25:24.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-21T03:25:24.000Z", "avg_line_length": 29.69, "max_line_length": 100, "alphanum_fraction": 0.6055911081, "include": true, "reason": "import numpy,import scipy", "num_tokens": 897}
|
[STATEMENT]
lemma converged_cd_diverge_cs: assumes \<open>is_path \<pi>\<close> and \<open>is_path \<pi>'\<close> and \<open>cs\<^bsup>\<pi>\<^esup> j = cs\<^bsup>\<pi>'\<^esup> j'\<close> and \<open>j<l\<close> and \<open>\<not> (\<exists>l'. cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> l')\<close> and \<open>l < m\<close> and \<open>cs\<^bsup>\<pi>\<^esup> m = cs\<^bsup>\<pi>'\<^esup> m'\<close>
obtains k k' where \<open>j\<le>k\<close> \<open>cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'\<close> and \<open>l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k\<close> and \<open>\<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
have \<open>is_path (\<pi>\<guillemotleft>j)\<close> \<open>is_path (\<pi>'\<guillemotleft>j')\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. is_path (\<pi> \<guillemotleft> j) &&& is_path (\<pi>' \<guillemotleft> j')
[PROOF STEP]
using assms(1,2) path_path_shift
[PROOF STATE]
proof (prove)
using this:
is_path \<pi>
is_path \<pi>'
is_path ?\<pi> \<Longrightarrow> is_path (?\<pi> \<guillemotleft> ?m)
goal (1 subgoal):
1. is_path (\<pi> \<guillemotleft> j) &&& is_path (\<pi>' \<guillemotleft> j')
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
is_path (\<pi> \<guillemotleft> j)
is_path (\<pi>' \<guillemotleft> j')
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
is_path (\<pi> \<guillemotleft> j)
is_path (\<pi>' \<guillemotleft> j')
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
have \<open>(\<pi>\<guillemotleft>j) 0 = (\<pi>'\<guillemotleft>j') 0\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<pi> \<guillemotleft> j) 0 = (\<pi>' \<guillemotleft> j') 0
[PROOF STEP]
using assms(3) last_cs
[PROOF STATE]
proof (prove)
using this:
cs\<^bsup>\<pi>\<^esup> j = cs\<^bsup>\<pi>'\<^esup> j'
last (cs\<^bsup>?\<pi>\<^esup> ?i) = ?\<pi> ?i
goal (1 subgoal):
1. (\<pi> \<guillemotleft> j) 0 = (\<pi>' \<guillemotleft> j') 0
[PROOF STEP]
by (metis path_shift_def add.right_neutral)
[PROOF STATE]
proof (state)
this:
(\<pi> \<guillemotleft> j) 0 = (\<pi>' \<guillemotleft> j') 0
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
(\<pi> \<guillemotleft> j) 0 = (\<pi>' \<guillemotleft> j') 0
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
have \<open>\<not>(\<exists>l'. cs\<^bsup>\<pi>\<guillemotleft>j\<^esup> (l-j) = cs\<^bsup>\<pi>'\<guillemotleft>j'\<^esup> l')\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<nexists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
[PROOF STEP]
proof
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<exists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l' \<Longrightarrow> False
[PROOF STEP]
assume \<open>\<exists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'\<close>
[PROOF STATE]
proof (state)
this:
\<exists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
goal (1 subgoal):
1. \<exists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l' \<Longrightarrow> False
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
\<exists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
[PROOF STEP]
obtain l' where csl: \<open>cs\<^bsup>\<pi>\<guillemotleft>j\<^esup> (l - j) = cs\<^bsup>\<pi>'\<guillemotleft>j'\<^esup> l'\<close>
[PROOF STATE]
proof (prove)
using this:
\<exists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
goal (1 subgoal):
1. (\<And>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l' \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
goal (1 subgoal):
1. \<exists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l' \<Longrightarrow> False
[PROOF STEP]
have \<open>cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> (j' + l')\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> (j' + l')
[PROOF STEP]
using shifted_cs_eq_is_eq[OF assms(1,2,3) csl] assms(4)
[PROOF STATE]
proof (prove)
using this:
cs\<^bsup>\<pi>\<^esup> (j + (l - j)) = cs\<^bsup>\<pi>'\<^esup> (j' + l')
j < l
goal (1 subgoal):
1. cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> (j' + l')
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> (j' + l')
goal (1 subgoal):
1. \<exists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l' \<Longrightarrow> False
[PROOF STEP]
thus \<open>False\<close>
[PROOF STATE]
proof (prove)
using this:
cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> (j' + l')
goal (1 subgoal):
1. False
[PROOF STEP]
using assms(5)
[PROOF STATE]
proof (prove)
using this:
cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> (j' + l')
\<nexists>l'. cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> l'
goal (1 subgoal):
1. False
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
False
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
\<nexists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
\<nexists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
have \<open>l-j < m-j\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. l - j < m - j
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
is_path \<pi>
is_path \<pi>'
cs\<^bsup>\<pi>\<^esup> j = cs\<^bsup>\<pi>'\<^esup> j'
j < l
\<nexists>l'. cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> l'
l < m
cs\<^bsup>\<pi>\<^esup> m = cs\<^bsup>\<pi>'\<^esup> m'
goal (1 subgoal):
1. l - j < m - j
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
l - j < m - j
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
l - j < m - j
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
have \<open>\<pi> j \<noteq> return\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<pi> j \<noteq> local.return
[PROOF STEP]
using cs_return assms(1-5) term_path_stable
[PROOF STATE]
proof (prove)
using this:
?\<pi> ?n = local.return \<Longrightarrow> cs\<^bsup>?\<pi>\<^esup> ?n = [?\<pi> ?n]
is_path \<pi>
is_path \<pi>'
cs\<^bsup>\<pi>\<^esup> j = cs\<^bsup>\<pi>'\<^esup> j'
j < l
\<nexists>l'. cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> l'
\<lbrakk>is_path ?\<pi>; ?\<pi> ?i = local.return; ?i \<le> ?j\<rbrakk> \<Longrightarrow> ?\<pi> ?j = local.return
goal (1 subgoal):
1. \<pi> j \<noteq> local.return
[PROOF STEP]
by (metis nat_less_le)
[PROOF STATE]
proof (state)
this:
\<pi> j \<noteq> local.return
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
hence \<open>j'<m'\<close>
[PROOF STATE]
proof (prove)
using this:
\<pi> j \<noteq> local.return
goal (1 subgoal):
1. j' < m'
[PROOF STEP]
using cs_order[OF assms(1,2,3,7)] assms
[PROOF STATE]
proof (prove)
using this:
\<pi> j \<noteq> local.return
\<lbrakk>\<pi> j \<noteq> local.return; j < m\<rbrakk> \<Longrightarrow> j' < m'
is_path \<pi>
is_path \<pi>'
cs\<^bsup>\<pi>\<^esup> j = cs\<^bsup>\<pi>'\<^esup> j'
j < l
\<nexists>l'. cs\<^bsup>\<pi>\<^esup> l = cs\<^bsup>\<pi>'\<^esup> l'
l < m
cs\<^bsup>\<pi>\<^esup> m = cs\<^bsup>\<pi>'\<^esup> m'
goal (1 subgoal):
1. j' < m'
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
j' < m'
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
hence \<open>cs\<^bsup>\<pi>\<guillemotleft>j\<^esup> (m-j) = cs\<^bsup>\<pi>'\<guillemotleft>j'\<^esup> (m'-j')\<close>
[PROOF STATE]
proof (prove)
using this:
j' < m'
goal (1 subgoal):
1. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (m - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> (m' - j')
[PROOF STEP]
using cs_eq_is_eq_shifted[OF assms(1,2,3),of \<open>m-j\<close> \<open>m'-j'\<close>] assms(4,6,7)
[PROOF STATE]
proof (prove)
using this:
j' < m'
cs\<^bsup>\<pi>\<^esup> (j + (m - j)) = cs\<^bsup>\<pi>'\<^esup> (j' + (m' - j')) \<Longrightarrow> cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (m - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> (m' - j')
j < l
l < m
cs\<^bsup>\<pi>\<^esup> m = cs\<^bsup>\<pi>'\<^esup> m'
goal (1 subgoal):
1. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (m - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> (m' - j')
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (m - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> (m' - j')
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
is_path (\<pi> \<guillemotleft> j)
is_path (\<pi>' \<guillemotleft> j')
(\<pi> \<guillemotleft> j) 0 = (\<pi>' \<guillemotleft> j') 0
\<nexists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
l - j < m - j
cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (m - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> (m' - j')
[PROOF STEP]
obtain k k' where csk: \<open>cs\<^bsup>\<pi>\<guillemotleft>j\<^esup> k = cs\<^bsup>\<pi>'\<guillemotleft>j'\<^esup> k'\<close> and lcdk: \<open>l-j cd\<^bsup>\<pi>\<guillemotleft>j\<^esup>\<rightarrow> k\<close> and suc:\<open>(\<pi>\<guillemotleft>j) (Suc k) \<noteq> (\<pi>'\<guillemotleft>j') (Suc k')\<close>
[PROOF STATE]
proof (prove)
using this:
is_path (\<pi> \<guillemotleft> j)
is_path (\<pi>' \<guillemotleft> j')
(\<pi> \<guillemotleft> j) 0 = (\<pi>' \<guillemotleft> j') 0
\<nexists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
l - j < m - j
cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (m - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> (m' - j')
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> k = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> k'; l - j cd\<^bsup>\<pi> \<guillemotleft> j\<^esup>\<rightarrow> k; (\<pi> \<guillemotleft> j) (Suc k) \<noteq> (\<pi>' \<guillemotleft> j') (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
using converged_cd_diverge
[PROOF STATE]
proof (prove)
using this:
is_path (\<pi> \<guillemotleft> j)
is_path (\<pi>' \<guillemotleft> j')
(\<pi> \<guillemotleft> j) 0 = (\<pi>' \<guillemotleft> j') 0
\<nexists>l'. cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (l - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> l'
l - j < m - j
cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> (m - j) = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> (m' - j')
\<lbrakk>is_path ?\<pi>; is_path ?\<pi>'; ?\<pi> 0 = ?\<pi>' 0; \<nexists>l'. cs\<^bsup>?\<pi>\<^esup> ?l = cs\<^bsup>?\<pi>'\<^esup> l'; ?l < ?m; cs\<^bsup>?\<pi>\<^esup> ?m = cs\<^bsup>?\<pi>'\<^esup> ?m'; \<And>k k'. \<lbrakk>cs\<^bsup>?\<pi>\<^esup> k = cs\<^bsup>?\<pi>'\<^esup> k'; ?l cd\<^bsup>?\<pi>\<^esup>\<rightarrow> k; ?\<pi> (Suc k) \<noteq> ?\<pi>' (Suc k')\<rbrakk> \<Longrightarrow> ?thesis\<rbrakk> \<Longrightarrow> ?thesis
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> k = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> k'; l - j cd\<^bsup>\<pi> \<guillemotleft> j\<^esup>\<rightarrow> k; (\<pi> \<guillemotleft> j) (Suc k) \<noteq> (\<pi>' \<guillemotleft> j') (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
cs\<^bsup>\<pi> \<guillemotleft> j\<^esup> k = cs\<^bsup>\<pi>' \<guillemotleft> j'\<^esup> k'
l - j cd\<^bsup>\<pi> \<guillemotleft> j\<^esup>\<rightarrow> k
(\<pi> \<guillemotleft> j) (Suc k) \<noteq> (\<pi>' \<guillemotleft> j') (Suc k')
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
have \<open>cs\<^bsup>\<pi>\<^esup> (j+k) = cs\<^bsup>\<pi>'\<^esup> (j'+k')\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. cs\<^bsup>\<pi>\<^esup> (j + k) = cs\<^bsup>\<pi>'\<^esup> (j' + k')
[PROOF STEP]
using shifted_cs_eq_is_eq[OF assms(1-3) csk]
[PROOF STATE]
proof (prove)
using this:
cs\<^bsup>\<pi>\<^esup> (j + k) = cs\<^bsup>\<pi>'\<^esup> (j' + k')
goal (1 subgoal):
1. cs\<^bsup>\<pi>\<^esup> (j + k) = cs\<^bsup>\<pi>'\<^esup> (j' + k')
[PROOF STEP]
.
[PROOF STATE]
proof (state)
this:
cs\<^bsup>\<pi>\<^esup> (j + k) = cs\<^bsup>\<pi>'\<^esup> (j' + k')
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
cs\<^bsup>\<pi>\<^esup> (j + k) = cs\<^bsup>\<pi>'\<^esup> (j' + k')
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
have \<open>l cd\<^bsup>\<pi>\<^esup>\<rightarrow> j+k\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. l cd\<^bsup>\<pi>\<^esup>\<rightarrow> j + k
[PROOF STEP]
using lcdk assms(1,2,4)
[PROOF STATE]
proof (prove)
using this:
l - j cd\<^bsup>\<pi> \<guillemotleft> j\<^esup>\<rightarrow> k
is_path \<pi>
is_path \<pi>'
j < l
goal (1 subgoal):
1. l cd\<^bsup>\<pi>\<^esup>\<rightarrow> j + k
[PROOF STEP]
by (metis add.commute add_diff_cancel_right' cd_path_shift le_add1)
[PROOF STATE]
proof (state)
this:
l cd\<^bsup>\<pi>\<^esup>\<rightarrow> j + k
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
l cd\<^bsup>\<pi>\<^esup>\<rightarrow> j + k
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
have \<open>\<pi> (Suc (j+k)) \<noteq> \<pi>' (Suc (j'+ k'))\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<pi> (Suc (j + k)) \<noteq> \<pi>' (Suc (j' + k'))
[PROOF STEP]
using suc
[PROOF STATE]
proof (prove)
using this:
(\<pi> \<guillemotleft> j) (Suc k) \<noteq> (\<pi>' \<guillemotleft> j') (Suc k')
goal (1 subgoal):
1. \<pi> (Suc (j + k)) \<noteq> \<pi>' (Suc (j' + k'))
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
\<pi> (Suc (j + k)) \<noteq> \<pi>' (Suc (j' + k'))
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
\<pi> (Suc (j + k)) \<noteq> \<pi>' (Suc (j' + k'))
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
have \<open>j \<le> j+k\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. j \<le> j + k
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
j \<le> j + k
goal (1 subgoal):
1. (\<And>k k'. \<lbrakk>j \<le> k; cs\<^bsup>\<pi>\<^esup> k = cs\<^bsup>\<pi>'\<^esup> k'; l cd\<^bsup>\<pi>\<^esup>\<rightarrow> k; \<pi> (Suc k) \<noteq> \<pi>' (Suc k')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
cs\<^bsup>\<pi>\<^esup> (j + k) = cs\<^bsup>\<pi>'\<^esup> (j' + k')
l cd\<^bsup>\<pi>\<^esup>\<rightarrow> j + k
\<pi> (Suc (j + k)) \<noteq> \<pi>' (Suc (j' + k'))
j \<le> j + k
[PROOF STEP]
show \<open>thesis\<close>
[PROOF STATE]
proof (prove)
using this:
cs\<^bsup>\<pi>\<^esup> (j + k) = cs\<^bsup>\<pi>'\<^esup> (j' + k')
l cd\<^bsup>\<pi>\<^esup>\<rightarrow> j + k
\<pi> (Suc (j + k)) \<noteq> \<pi>' (Suc (j' + k'))
j \<le> j + k
goal (1 subgoal):
1. thesis
[PROOF STEP]
using that[of \<open>j+k\<close> \<open>j'+k'\<close>]
[PROOF STATE]
proof (prove)
using this:
cs\<^bsup>\<pi>\<^esup> (j + k) = cs\<^bsup>\<pi>'\<^esup> (j' + k')
l cd\<^bsup>\<pi>\<^esup>\<rightarrow> j + k
\<pi> (Suc (j + k)) \<noteq> \<pi>' (Suc (j' + k'))
j \<le> j + k
\<lbrakk>j \<le> j + k; cs\<^bsup>\<pi>\<^esup> (j + k) = cs\<^bsup>\<pi>'\<^esup> (j' + k'); l cd\<^bsup>\<pi>\<^esup>\<rightarrow> j + k; \<pi> (Suc (j + k)) \<noteq> \<pi>' (Suc (j' + k'))\<rbrakk> \<Longrightarrow> thesis
goal (1 subgoal):
1. thesis
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
thesis
goal:
No subgoals!
[PROOF STEP]
qed
|
{"llama_tokens": 9129, "file": "IFC_Tracking_IFC", "length": 59}
|
"""
Asif Khan
"""
import numpy as np
import cv2
from mtcnn.mtcnn import MTCNN
detector = MTCNN()
def one_face(frame, bbs, pointss):
    # Keep only the face whose bounding box is closest to the frame centre.
    offsets = [(bbs[:,0]+bbs[:,2])/2-frame.shape[1]/2,
               (bbs[:,1]+bbs[:,3])/2-frame.shape[0]/2]
    offset_dist = np.sum(np.abs(offsets), 0)
    index = np.argmin(offset_dist)
    bb = bbs[index]
    points = pointss[:,index]
    return bb, points
def draw_landmarks(frame, bb, points):
    # Draw the bounding box and the five facial landmarks on the frame.
    cv2.rectangle(frame, (int(bb[0]),int(bb[1])), (int(bb[2]),int(bb[3])), orange, 2)
    cv2.circle(frame, (points[0], points[5]), 2, (255,0,0), 2)  # left eye
    cv2.circle(frame, (points[1], points[6]), 2, (255,0,0), 2)  # right eye
    cv2.circle(frame, (points[2], points[7]), 2, (255,0,0), 2)  # nose
    cv2.circle(frame, (points[3], points[8]), 2, (255,0,0), 2)  # left mouth corner
    cv2.circle(frame, (points[4], points[9]), 2, (255,0,0), 2)  # right mouth corner
    w = int(bb[2]) - int(bb[0])  # box width
    h = int(bb[3]) - int(bb[1])  # box height
    w2h_ratio = w/h              # width-to-height ratio
    eye2box_ratio = (points[0]-bb[0]) / (bb[2]-points[1])
    cv2.putText(frame, "Width (pixels): {}".format(w), (10,30), font, font_size, red, 1)
    cv2.putText(frame, "Height (pixels): {}".format(h), (10,40), font, font_size, red, 1)
    if w2h_ratio < 0.7 or w2h_ratio > 0.9:
        cv2.putText(frame, "Narrow Face", (10,60), font, font_size, red, 1)
    if eye2box_ratio > 1.5 or eye2box_ratio < 0.88:
        cv2.putText(frame, "Acentric Face", (10,70), font, font_size, red, 1)
def find_smile(pts):
    # Ratio of mouth width to inter-pupil distance.
    dx_eyes = pts[1] - pts[0]  # distance between pupils
    dx_mout = pts[4] - pts[3]  # distance between mouth corners
    smile_ratio = dx_mout/dx_eyes
    return smile_ratio

def find_roll(pts):
    # Roll: vertical difference between the two eyes.
    return pts[6] - pts[5]

def find_yaw(pts):
    # Yaw: difference between the nose-to-eye horizontal distances.
    le2n = pts[2] - pts[0]  # left eye to nose
    re2n = pts[1] - pts[2]  # nose to right eye
    return le2n - re2n

def find_pitch(pts):
    # Pitch: ratio of eye-to-nose and nose-to-mouth vertical distances.
    eye_y = (pts[5] + pts[6]) / 2
    mou_y = (pts[8] + pts[9]) / 2
    e2n = eye_y - pts[7]
    n2m = pts[7] - mou_y
    return e2n/n2m
def find_pose(points):
    X = points[0:5]
    Y = points[5:10]
    angle = np.arctan((Y[1]-Y[0])/(X[1]-X[0]))/np.pi*180
    alpha = np.cos(np.deg2rad(angle))
    beta = np.sin(np.deg2rad(angle))
    # Compensate for roll: rotate the landmarks so that both eyes are
    # aligned horizontally.
    Xr = np.zeros((5))
    Yr = np.zeros((5))
    for i in range(5):
        Xr[i] = alpha*X[i] + beta*Y[i] + (1-alpha)*X[2] - beta*Y[2]
        Yr[i] = -beta*X[i] + alpha*Y[i] + beta*X[2] + (1-alpha)*Y[2]
    # Average distance between the eyes and the mouth.
    dXtot = (Xr[1]-Xr[0]+Xr[4]-Xr[3])/2
    dYtot = (Yr[3]-Yr[0]+Yr[4]-Yr[1])/2
    # Average distance between the nose and the eyes/mouth.
    dXnose = (Xr[1]-Xr[2]+Xr[4]-Xr[2])/2
    dYnose = (Yr[3]-Yr[2]+Yr[4]-Yr[2])/2
    # Relative rotation: 0% is frontal, 100% is profile.
    Xfrontal = np.abs(np.clip(-90+90/0.5*dXnose/dXtot, -90, 90))
    Yfrontal = np.abs(np.clip(-90+90/0.5*dYnose/dYtot, -90, 90))
    return Xfrontal, Yfrontal
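
# ---------------------------------------------------------------------------
# Illustrative note (added; not part of the original script): `points` is
# assumed to hold 5 x-coordinates followed by 5 y-coordinates, i.e.
# [lefteye_x, righteye_x, nose_x, leftmouth_x, rightmouth_x,
#  lefteye_y, righteye_y, nose_y, leftmouth_y, rightmouth_y].
# For a perfectly frontal, level face such as
#   pts = np.array([80., 120., 100., 85., 115., 80., 80., 100., 120., 120.])
# the estimators above give find_roll(pts) == 0.0, find_yaw(pts) == 0.0,
# find_pitch(pts) == 1.0 and find_pose(pts) == (0.0, 0.0), i.e. no roll,
# yaw or out-of-plane rotation.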
logo_size = 150
show_size = 150  # size at which detected faces are shown
#pixel_in_max=1000
#show_space=150
font = cv2.FONT_HERSHEY_COMPLEX # Text in video
font_size=0.4
blue=(225,0,0)
green=(0,128,0)
red=(0,0,255)
orange=(0,140,255)
total_size = np.array([750, 1400], dtype=int) # demo resolution
res_try = np.array([1080, 1920], dtype=int) # video resolution
res_max = np.zeros((2), dtype=int)
res_resize = np.zeros((2), dtype=int)
PERC_CROP_HEIGHT=0
PERC_CROP_WIDTH=0
print('initializing variables...')
minsize = 20 # minimum size of face
threshold = [0.6, 0.7, 0.7]  # thresholds for the three detection stages
factor = 0.709 # scale factor
# Recordings on/off
image_save=False
video_save = True
fps=10.
video_format=cv2.VideoWriter_fourcc('M','J','P','G')
video_max_frame=60
video_outs=[]
# video capture initialization
camera = 0  # 0: internal, 1: external
cap = cv2.VideoCapture(camera)
res_actual = np.zeros((1,2), dtype=int)  # actual capture resolution
res_actual[0,0] = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
res_actual[0,1] = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
print("camera resolution: {}".format(res_actual))
if video_save:
    video_file = 'video_out.avi'
    video_out = cv2.VideoWriter(video_file, video_format, fps, (640,480))

while True:
    rets, frames = cap.read()
    if not rets:
        break
    frame = np.array(frames)
    frame = cv2.flip(frame, 1)
    frame_show = np.ones((total_size[0],total_size[1],3), dtype='uint8')*255
    res_crop = np.asarray(frame.shape)[0:2]
    bbs_all, pointss_all = detector.detect_faces(frame)  # face detection
    bbs = bbs_all.copy()
    pointss = pointss_all.copy()
    if len(bbs_all) > 0:  # if at least one face is detected
        # process only one face (the most central one)
        bb, points = one_face(frame, bbs, pointss)
        draw_landmarks(frame, bb, points)  # draw landmarks on the face
        cv2.putText(frame, "Roll: {0:.2f} (-50 to +50)".format(find_roll(points)), (10,90), font, font_size, red, 1)
        cv2.putText(frame, "Yaw: {0:.2f} (-100 to +100)".format(find_yaw(points)), (10,100), font, font_size, red, 1)
        cv2.putText(frame, "Pitch: {0:.2f} (0 to 4)".format(find_pitch(points)), (10,110), font, font_size, red, 1)
        Xfrontal, Yfrontal = find_pose(points)
        cv2.putText(frame, "Xfrontal: {0:.2f}".format(Xfrontal), (10,130), font, font_size, red, 1)
        cv2.putText(frame, "Yfrontal: {0:.2f}".format(Yfrontal), (10,140), font, font_size, red, 1)
        smile_ratio = find_smile(points)
        if smile_ratio > 0.9:
            cv2.putText(frame, "Expression: Smile", (10,160), font, font_size, green, 1)
        else:
            cv2.putText(frame, "Expression: Neutral", (10,160), font, font_size, green, 1)
    else:
        cv2.putText(frame_show, 'no face', (10,logo_size+200), font, font_size, blue, 2)
    # Resize the frame to fit the demo canvas while preserving aspect ratio.
    res_max[0] = total_size[0]
    res_max[1] = total_size[1] - 2*logo_size
    res_resize[1] = res_max[1]
    res_resize[0] = res_max[1]/res_crop[1]*res_crop[0]
    if res_resize[0] > res_max[0]:
        res_resize[0] = res_max[0]
        res_resize[1] = int(res_max[0]/res_crop[0]*res_crop[1]/2)*2
    frame_resize = cv2.resize(frame, (res_resize[1],res_resize[0]), interpolation=cv2.INTER_LINEAR)
    space_vert = (total_size[1]-res_resize[1]) // 2
    frame_show[:frame_resize.shape[0], space_vert:-space_vert, :] = frame_resize
    cv2.putText(frame_show, 'q: quit', (10,50), font, font_size, blue, 2)
    cv2.imshow('Pose Detection - MTCNN', frame_show)
    if video_save:
        video_out.write(frame)
    key_pressed = cv2.waitKey(1) & 0xFF
    if key_pressed == ord('q'):
        break

cap.release()
if video_save:
    video_out.release()
cv2.destroyAllWindows()
|
{"hexsha": "e63b10d62f081cd601194af7988dbf3483501c3a", "size": 7403, "ext": "py", "lang": "Python", "max_stars_repo_path": "pose_detection_mtcnn.py", "max_stars_repo_name": "fisakhan/Face_Pose", "max_stars_repo_head_hexsha": "b98e7338d1b64357e5842b507fb9e845dc48d126", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2020-07-29T01:18:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T08:50:11.000Z", "max_issues_repo_path": "pose_detection_mtcnn.py", "max_issues_repo_name": "fisakhan/Face_Pose", "max_issues_repo_head_hexsha": "b98e7338d1b64357e5842b507fb9e845dc48d126", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-12-31T12:36:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-01T09:05:16.000Z", "max_forks_repo_path": "pose_detection_mtcnn.py", "max_forks_repo_name": "fisakhan/Face_Pose", "max_forks_repo_head_hexsha": "b98e7338d1b64357e5842b507fb9e845dc48d126", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-05-11T01:37:26.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-31T12:37:11.000Z", "avg_line_length": 34.4325581395, "max_line_length": 147, "alphanum_fraction": 0.6027286235, "include": true, "reason": "import numpy", "num_tokens": 2461}
|
# Practical 4a: Recurrent Neural Networks (RNNs)
© Deep Learning Indaba. Apache License 2.0.
## Introduction
Feedforward models (e.g. deep MLPs and ConvNets) map fixed-size input data (vectors of a fixed dimensionality) to their output labels. They're very powerful and have been successfully used for many tasks. However, a lot of data is not in the form of fixed-size vectors, but exists in the form of **sequences**. Language is one good example, where sentences are sequences of words. In some sense, almost any data type can be considered a sequence (for instance, an image consists of a sequence of pixels, speech a sequence of phonemes, and so forth).
Recurrent neural networks (**RNNs**) were designed to be able to handle sequential data, and in this practical we will take a closer look at RNNs and then build a model that can generate English sentences in the style of Shakespeare!
## Learning Objectives
* Understand how RNNs model sequential data.
* Understand how the vanilla RNN is a generalization of feedforward models to incorporate sequential dependencies.
* Understand the issues involved when training RNNs.
* Know how to implement an RNN for time-series estimation (**regression**) and an RNN language model (character-level **classification**) in Tensorflow using Keras.
**IMPORTANT: Please fill out the exit ticket form before you leave the practical: https://forms.gle/vkLk6hHCwNrNsb8u6**
## Imports
```python
#@title Imports (RUN ME!) { display-mode: "form" }
#!pip install tensorflow-gpu==2.0.0-beta0 > /dev/null 2>&1
#!pip -q install pydot_ng > /dev/null 2>&1
#!pip -q install graphviz > /dev/null 2>&1
#!apt install graphviz > /dev/null 2>&1
import numpy as np
import tensorflow as tf
import math
import random
import ssl
import sys
import urllib
from IPython import display
import matplotlib.pyplot as plt
%matplotlib inline
```
## From Feedforward to Recurrent Models
### Intuition
RNNs generalize feedforward networks (FFNs) to be able to work with sequential data. Recall that FFNs take an input (e.g. an image) and immediately produce an output (e.g. a digit class), operating on all elements of the input simultaneously.
Here's an example of a function that computes the output of an FFN on an input `x` (the `W` and `b` arguments represent the weight and bias arrays):
```python
def ffn_forward(x, W_xh, W_ho, b_hid, b_out):
    # Compute activations on the hidden layer.
    hidden_layer = np.tanh(np.dot(W_xh, x) + b_hid)
    # Compute the (linear) output layer activations.
    output = np.dot(W_ho, hidden_layer) + b_out
    return output
```
> *NOTE*: The above cell is to show the structure of an FFN, so you don't need to run it.
RNNs, on the other hand, consider the data sequentially, and can remember what they have seen earlier in the sequence to help interpret or contextualize elements from later in the sequence when making predictions.
Here is a function that illustrates the idea. It takes the input sequence and an initial state for the RNN, then loops over the elements of the sequence, computing a new state for each one.
```python
def rnn_forward(data_sequence, initial_state):
    state = initial_state  # the variable state is updated at every time-step
    all_states = [state]   # save the states, so we can return them
    all_ys = []            # used to save all predictions
    for x in data_sequence:
        # rnn_cell is a function (not shown) that takes the current input x_t
        # and the previous state h, and produces an output y_t and a new state h.
        y, new_state = rnn_cell(x, state)
        all_states.append(new_state)
        all_ys.append(y)
        # Update state for the next time-step.
        state = new_state
    return all_states, all_ys
```
> *NOTE*: The above cell is to show the structure of an RNN, so you don't need to run it. The definition of `rnn_cell` is not included, it will be explained below.
To understand the distinction between FFNs and RNNs, imagine we want to label words with the part-of-speech categories they belong to: e.g. for the input sentences "I want a duck" and "He had to duck", we want our model to predict that duck is a `Noun` in the first sentence and a `Verb` in the second. To do this successfully, the model needs to be aware of the surrounding context. However, if we feed an FFN model only one word at a time, how could it know the difference? And if we want to feed it all the words at once, how do we deal with the fact that sentences have different lengths?
RNNs solve this issue by processing the sentence word-by-word, and maintaining an internal **state** summarizing what it has seen so far. This applies not only to words, but also to phonemes in speech, or even, as we will see, elements of a time-series.
### Unrolling the network
Imagine we are trying to classify a sequence $X = (x_1, x_2, \ldots, x_T)$ into labels $y$ (for now, let's keep it abstract). After running the `rnn_forward()` function of our RNN defined above on $X$, we would have a list of internal states and outputs of the model at each sequence position. This process is called **unrolling or unfolding in time**, because you can think of it as unrolling the *computations* defined by the RNN loop over the inputs at each position of the sequence. RNNs are often used to model **time series data** (which we will do in this practical), and therefore these positions are referred to as **time-steps**, and hence we call this process "unrolling over time". See the visualization below:
*(Figure: an RNN unrolled over the time-steps of an input sequence.)*
We can therefore think of an RNN as a composition of identical feedforward neural networks (with replicated/tied weights), one for each moment or step in time.
These feedforward functions that make up the RNN (i.e. our `rnn_cell` function above) are typically referred to as **cells**, and the only requirement for this function is that the cell function needs to be a differentiable function that can map an input and a state vector to an output and a new state vector. What we have shown above is called the **vanilla RNN**, but there are many more possibilities. One of the most popular variants is called the **Long Short-Term Memory (LSTM)** cell, which we'll use later to create our Shakespeare language model.
### Putting this together
In the feedforward models we've seen before, the input $x$ is mapped to an intermediate hidden layer $h$ as follows:
\begin{equation}
h = \sigma(\underbrace{W_{xh}x}_\text{current input (per-example)} + b)
\end{equation}
where $\sigma$ is some non-linear activation function like ReLU or tanh. We can then make a prediction $\hat{y} = \sigma(W_{hy}h + b)$ based on $h$, or we can add another layer, etc. **NOTE**: We use the weight subscript $W_{xz}$ to indicate a mapping from layer $x$ to layer $z$.
RNNs generalize this feedforward idea to a sequence of inputs $X = {x_1, x_2, ...}$ by maintaining a sequence of state vectors $h_t$, one for every time-step $t$. The simplest form of such an RNN looks very similar to the equation for a single feedforward layer:
\begin{equation}
h_t = \sigma(\underbrace{W_{hh}h_{t-1}}_\text{previous state} + \underbrace{W_{xh}x_t}_\text{current input (per time-step)} + b)
\end{equation}
These states $h_t$ can then be used for various purposes, as we describe below.
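To make this equation concrete, here is a minimal sketch of the `rnn_cell` function that `rnn_forward` used earlier. This is just one possible vanilla-RNN cell: the tanh non-linearity, the linear read-out, and passing the weights as arguments (rather than storing them on a cell object) are illustrative assumptions:
```python
def rnn_cell(x, state, W_xh, W_hh, W_hy, b_hid, b_out):
    # New state from the previous state and the current input (vanilla RNN).
    new_state = np.tanh(np.dot(W_hh, state) + np.dot(W_xh, x) + b_hid)
    # Per-time-step output: a simple linear read-out of the new state.
    y = np.dot(W_hy, new_state) + b_out
    return y, new_state
```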
Feedforward models map one input to one output. On the other hand, RNNs can give **many-to-many** (many inputs, many labels), **many-to-one**, and **one-to-many** mappings. Examples of these tasks include machine translation, sentiment analysis, and image captioning, respectively. For example, we could use each $h_t$ to predict the part of speech for that time-step ($y_t= \sigma(W_{hy}h_t+b)$) or we could use the last state $h_T$ to predict the topic of the whole document. Which task the RNN performs is based on the training data (of course)! We can visualize these as follows (from this [excellent blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) by Andrej Karpathy):
*(Figure: one-to-one, one-to-many, many-to-one and many-to-many RNN input/output configurations, from Karpathy's blog post.)*
**QUESTIONS**
* How are FFNs and RNNs **similar**?
* How are they **different**?
* Why do we call RNNs "recurrent"?
* Can you think of a one-to-many task?
## Modeling General Time-Series
We will train an RNN to model a time-series as a first step. A **time-series** is a series of data-points ordered over discrete time-steps. Examples include the hourly temperature of Stellenbosch over a month or a year, the market price of some asset (like a company's stock) over time, and so forth. We will generate a **sinusoidal time-series** (with or without noise) as a toy example, and then train a tiny RNN model with only 5 parameters on this data.
### Create some artificial data
```python
#@title Create sinusoidal data {run: "auto"}
steps_per_cycle = 20 #@param { type: "slider", min:1, max:100, step:1 }
number_of_cycles = 176 #@param { type: "slider", min:1, max:1000, step:1 }
noise_factor = 0.1 #@param { type: "slider", min:0, max:1, step:0.1 }
plot_num_cycles = 23 #@param { type: "slider", min:1, max:50, step:1 }
seq_len = steps_per_cycle * number_of_cycles
t = np.arange(seq_len)
sin_t_noisy = np.sin(2 * np.pi / steps_per_cycle * t + noise_factor * np.random.uniform(-1.0, +1.0, seq_len))
sin_t_clean = np.sin(2 * np.pi / steps_per_cycle * t)
upto = plot_num_cycles * steps_per_cycle
fig = plt.figure(figsize=(15,3))
plt.plot(t[:upto], sin_t_noisy[:upto])
plt.title("Showing first {} cycles.".format(plot_num_cycles))
plt.show()
```
**TASK**: Adjust the parameters above to generate data with different properties.
Now we pack the data into train and test batches. Note that while RNNs can in theory learn the dependencies across all inputs received so far (using an algorithm called **backpropagation through time**, or BPTT; see the Aside box below), in practice they are trained using an algorithm called **truncated BPTT** where we truncate the inputs to only the last $T$ symbols (this is the `truncated_seq_len` variable below).
**QUESTIONS**:
* What issues do you think may arise from truncating the training data in this way?
* Despite these issues, why do you think it might be necessary to do this?
```python
#@title Pack truncated sequence data {run: "auto"}
def pack_truncated_data(data, num_prev=100):
    X, Y = [], []
    for i in range(len(data) - num_prev):
        X.append(data[i : i + num_prev])
        Y.append(data[i + num_prev])
    # NOTE: Keras expects input data in the shape (batch_size, truncated_seq_len, input_dim).
    # We have only one real-valued number per time-step, so we expand the last
    # dimension from (batch_size, truncated_seq_len) to (batch_size, truncated_seq_len, 1).
    X, Y = np.array(X)[:,:,np.newaxis], np.array(Y)[:,np.newaxis]
    return X, Y

# We only consider this many previous data points
truncated_seq_len = 2 #@param { type: "slider", min:1, max:10, step:1 }
test_split = 0.25  # Fraction of total data to keep out as test data

# We use only the sin(t) values, and discard the time values
data = sin_t_noisy
data_len = data.shape[0]
num_train = int(data_len * (1 - test_split))
train_data = data[:num_train]
test_data = data[num_train:]

X_train, y_train = pack_truncated_data(train_data, num_prev=truncated_seq_len)
X_test, y_test = pack_truncated_data(test_data, num_prev=truncated_seq_len)
print("Generated training/test data with shapes\nX_train: {}, y_train: {}\nX_test: {}, y_test: {}.".format(
    X_train.shape, y_train.shape, X_test.shape, y_test.shape))
```
**NOTE**: We reshape the training data into (batch_size, truncated_seq_len, 1) and (batch_size, 1) arrays.
### Intermediate optional extra reading: (Truncated) Backpropagation-through-Time and Vanishing and Exploding Gradients
RNNs model sequential data, and are designed to capture how ***outputs*** at the current time step are influenced by the ***inputs*** that came before them. This is referred to as **long-range dependencies**. At a high level, this allows the model to remember what it has seen so far in order to better contextualize what it is seeing at the moment (think about how knowing the context of the sentence or conversation can sometimes help one to better figure out the intended meaning of a misheard word or ambiguous statement). It is what makes these models so powerful, but it is also what makes them so hard to train!
The most well-known algorithm for training RNNs is called **back-propagation through time (BPTT**; there are other algorithms). BPTT conceptually amounts to unrolling the computations of the RNN over time, computing the errors, and backpropagating the gradients through the unrolled graph structure. Ideally we want to unroll the graph up to the maximum sequence length, however in practice, since sequence lengths vary and memory is limited, we only end up unrolling sequences up to some length $T$. This is called **truncated BPTT**, and is the most used variant of BPTT.
At a high level, there are two main issues when using (truncated) BPTT to train RNNs:
* Having shared ("tied") recurrent weights ($W_{hh}$) means that **the gradient on these weights at some time-step $t$ depends on all time-steps up to time-step $T$**, the length of the full (truncated) sequence. This also leads to the **vanishing/exploding gradients** problem, which we'll explain below.
* **Memory usage grows linearly with the total number of steps $T$ that we unroll for**, because we need to save/cache the activations at each time-step (look at the Python code above to convince yourself of this). This matters computationally, since memory is a limited resource. It also matters statistically, because it puts a limit on the types of dependency that the model can successfully learn, by preventing it from correcting errors that stem from an input more than $T$ steps ago.
**NOTE**: Think about that last statement and make sure you understand those two points.
BPTT is very similar to the standard back-propagation algorithm. Key to understanding BPTT is to realize that the gradients on the non-recurrent weights (for example, the weights of a per-time-step classifier that predicts the part-of-speech tag for each word) and on the recurrent weights (which transform $h_{t-1}$ into $h_t$) are computed differently:
* The gradients of **non-recurrent weights** ($W_{hy}$) depend only on the error at that time-step, $E_t$.
* The gradients of **recurrent weights** ($W_{hh}$) depend on all previous time-steps up to maximum length $T$.
The first point is fairly intuitive: the prediction at time-step $t$ is related only to the loss of that particular prediction.
The second point will be explained in more detail in the lectures (see also [this great blog post](http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/)), but briefly, this can be summarized in these equations:
1. The **current** state is a function of the **previous** state and the current input: $h_t = \sigma(W_{hh}h_{t-1} + W_{xh}x_t)$
2. The gradient of the loss $E_t$ at time $t$ on $W_{hh}$ is a function of the current hidden state and model predictions $\hat{y}_t$ at time t:
$\frac{\partial E_t}{\partial W_{hh}} = \frac{\partial E_t}{\partial \hat{y}_t}\frac{\partial\hat{y}_t}{\partial h_t}\frac{\partial h_t}{\partial W_{hh}}$
3. Substituting (1) into (2) results in a **sum over all previous time-steps**:
$\frac{\partial E_t}{\partial W_{hh}} = \sum\limits_{k=0}^{t} \underbrace{\frac{\partial E_t}{\partial \hat{y}_t}\frac{\partial\hat{y}_t}{\partial h_t}\frac{\partial h_t}{\partial h_k}\frac{\partial h_k}{\partial W_{hh}}}_\text{product of gradient terms}$
The problem is that $\frac{\partial h_t}{\partial h_k} = \Pi_j \frac{\partial h_j}{\partial h_{j-1}}$ for $j$ from $k + 1$ to $t$. Because of this **repeated multiplicative interaction**, as the sequence length $t$ grows, the gradients themselves can become diminishingly small (**vanish**) or grow too large and cause numeric overflow (**explode**). This has been shown to be related to the norms of the recurrent weight matrices being smaller or larger than 1. Intuitively, it works very similarly to how multiplying a small number $v<1.0$ with itself repeatedly quickly goes to zero, or conversely, how a large number $v>1.0$ quickly grows to infinity; only here it happens with matrices.
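To see this numerically, here is a tiny illustrative sketch (scalar "weights" instead of matrices; the values 0.5 and 1.5 are arbitrary) of how a repeated product vanishes or explodes as the number of time-steps grows:
```python
for w in [0.5, 1.5]:
    prods = np.cumprod([w] * 30)
    print("w={}: 10 steps -> {:.1e}, 30 steps -> {:.1e}".format(w, prods[9], prods[29]))
# w=0.5: 10 steps -> 9.8e-04, 30 steps -> 9.3e-10  (vanishing)
# w=1.5: 10 steps -> 5.8e+01, 30 steps -> 1.9e+05  (exploding)
```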
### Build a tiny RNN in Keras
Building an RNN in Keras is quite simple. We simply chain the layers together as follows:
```python
def define_model(truncated_seq_len):
    input_dimension = 1
    hidden_dimension = 1
    output_dimension = 1
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.SimpleRNN(
        # We need to specify the input_shape *without* leading batch_size (it is inferred).
        input_shape=(truncated_seq_len, input_dimension),
        units=hidden_dimension,
        return_sequences=False,
        name='hidden_layer'))
    model.add(tf.keras.layers.Dense(
        output_dimension,
        name='output_layer'))
    model.compile(loss="mean_squared_error",
                  optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3))
    return model
```
**NOTE**: We're building an RNN for **regression**. We therefore use a linear layer (which outputs real-valued numbers) at the output with the "*mean_squared_error*" loss function.
```python
model = define_model(truncated_seq_len = X_train.shape[1])
model.summary()
```
**NOTE**: You need to re-run the above cell every time after training to reset the model weights!
### Train the tiny RNN
Now let's train the model. This may take a few minutes (it takes much longer if you increase `truncated_seq_len`). Set `verbose=1` **before** you run the cell to see the intermediate output as the model is training. Set it to 0 if you don't want any output.
```python
train_history = model.fit(X_train, y_train, batch_size=600, epochs=1000,
verbose=1, validation_split=0.05)
```
Let's visualize the training and validation losses.
```python
plt.figure(figsize=(15,5))
for label in ["loss", "val_loss"]:
    plt.plot(train_history.history[label], label=label)
plt.ylabel("loss")
plt.xlabel("epoch")
plt.title("The final validation loss: {}".format(train_history.history["val_loss"][-1]))
plt.legend()
plt.show()
```
Finally, let's look at the parameters for the trained model.
```python
for layer in model.layers:
    print("{}, {}".format(layer.name, layer.get_weights()))
```
**QUESTION**:
* Relate the above weights to the terms in the equation for the vanilla RNN we saw earlier, namely:
  * input-to-hidden weights $W_{xh}$,
  * hidden-to-hidden weights $W_{hh}$,
  * hidden-to-output weights $W_{hy}$, and
  * the recurrent and output biases.
### Make predictions using the trained model
```python
y_pred = model.predict(X_test[:100])
plt.figure(figsize=(19,3))
plt.plot(y_test[:100], label="true")
plt.plot(y_pred, label="predicted")
plt.legend()
plt.show()
```
**YOUR TASKS**:
* [**ALL**] Change the learning rate up to 5 orders of magnitude larger and smaller and retrain. What happens when it is too large? What happens when it is too small?
* [**ALL**] Change the `SimpleRNN` to `GRU`.
* You can read more about LSTMs in [this blog post](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) by Chris Olah.
* What is the effect on the number of parameters? Can you explain why? Now do the same for `LSTM`.
* [**INTERMEDIATE**] Note that the loss does not decrease much after around epoch 400. Add "Early Stopping with patience" to the `model.fit()` function to stop it from training beyond this point. **Hint**: Look at tf.keras.callbacks.
* *Early stopping* is a technique where we stop training the model once its performance on validation data stops improving; it is a very common approach to prevent overfitting. Early stopping *with patience* means that as soon as the model starts doing worse on validation, we wait for at least `patience` more evaluations before stopping training, and if it improves within that time, we reset the counter (see the sketch below). The patience parameter is a way to ensure we don't stop by accident, as the validation loss can fluctuate randomly from epoch to epoch.
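Here is a minimal sketch of that counter logic (illustrative only; the made-up `val_losses` list stands in for per-epoch validation losses, and in practice you should simply use the Keras `EarlyStopping` callback):
```python
val_losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.60, 0.63, 0.64, 0.65, 0.66]  # made-up
patience = 5
best_val_loss = float("inf")
epochs_without_improvement = 0
for epoch, val_loss in enumerate(val_losses):
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_without_improvement = 0  # improved: reset the counter
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print("Stopping early at epoch", epoch)  # prints epoch 7 here
            break
```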
##Generating Shakespeare
Now let's build an RNN language model to generate Shakespearean English! A language model is trained to assign high probabilities to sequences of words or sentences that are well formed, and low probabilities to sequences which are not realistic. Once the model is trained, one can use it to *generate* data that is similar to the training data.
Our data is now sequences of discrete symbols (characters). But neural networks operate in continuous spaces, and so we need to take the discrete language data, and **embed** it in a continuous space. To do this, we'll simply break up the data into sequences of characters, and represent each character using a learned vector. This is a standard trick for processing text using neural networks.
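As a toy sketch of the idea (ours; the real embedding matrix in the model below is learned, not random):
```python
import numpy as np

vocab = sorted(set("hello"))                       # ['e', 'h', 'l', 'o']
char2id = {c: i for i, c in enumerate(vocab)}
embedding_matrix = np.random.randn(len(vocab), 3)  # one 3-d vector per character

ids = [char2id[c] for c in "hello"]                # [1, 0, 2, 2, 3]
print(embedding_matrix[ids].shape)                 # (5, 3): one vector per character
```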
### Download and Preprocess the Data
We first download the data and examine what it looks like:
```python
context = ssl._create_unverified_context()
shakespeare_url = 'https://cs.stanford.edu/people/karpathy/char-rnn/shakespeare_input.txt'
data = urllib.request.urlopen(shakespeare_url, context=context)
all_text = data.read().lower().decode('utf8')
print("Downloaded Shakespeare data with {} characters.".format(len(all_text)))
print("FIRST 1000 CHARACTERS: ")
print(all_text[:1000])
```
```python
training_text = all_text[:1000000] # Keep only the first 1 million characters
```
We now preprocess the text data as follows:
1. Extract the vocabulary of all `vocab_size` unique characters appearing in the data.
2. Assign each character a unique integer id in `0 <= id < vocab_size`. This is so we can map the characters to unique embedding vectors. This is a common way to map discrete inputs to continuous vectors that neural networks can work with. See e.g. this [blog post](https://www.tensorflow.org/tutorials/representation/word2vec), or [this one](http://ruder.io/word-embeddings-1/) for more information.
3. Split the data into sequences ("windows") of `max_len` characters (the input to the model) followed by the next character as target. E.g. using `max_len=5`, the sentence "i saw a cat" (11 characters) yields the pairs ("i saw", " "), (" saw ", "a"), ("saw a", " "), and so on. To add some variation, we skip `step` characters between each sequence (i.e. we use a "sliding window" of `max_len` with stride `step`). A tiny sketch after this list makes this concrete.
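Here is a tiny sketch of that windowing on the toy sentence (ours, for illustration):
```python
text = "i saw a cat"
max_len, step = 5, 3
for i in range(0, len(text) - max_len, step):
    print(repr(text[i: i + max_len]), "->", repr(text[i + max_len]))
# 'i saw' -> ' '
# 'aw a ' -> 'c'   (step=3 skips the two windows in between)
```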
```python
max_len = 30 # We only consider this many previous data points (characters)
step = 3 # We start a new training sequence every `step` characters
sentences = [] # This holds our extracted sequences
next_chars = [] # This holds the targets (the follow-up characters)
chars = sorted(list(set(training_text))) # List of unique characters in the corpus
vocab_size = len(chars)
print('Number of unique characters: ', vocab_size)
print(chars)
# Construct dictionaries mapping unique characters to their index in `chars` and reverse
char2index = dict((c, chars.index(c)) for c in chars)
index2char = dict((chars.index(c), c) for c in chars)
```
Now we encode the training data by mapping each character to its unique integer id.
```python
for i in range(0, len(training_text) - max_len, step):
sentences.append([char2index[s] for s in training_text[i: i + max_len]])
next_chars.append([char2index[training_text[i + max_len]]])
print('Number of extracted sequences:', len(sentences))
```
This yields the following numpy arrays:
```python
X, Y = np.array(sentences, dtype=np.int64), np.array(next_chars, dtype=np.int64)
X.shape, Y.shape
```
Let's take a look at the first example.
```python
print("X[0].shape = {}, Y[0].shape = {}".format(X[0].shape, Y[0].shape))
print("X[0]: ", X[0])
print("Y[0]: ", Y[0])
```
###Build an RNN language model
A **language model** estimates a probability distribution over sequences $\mathbb{x}_{1:N} = (x_1, x_2, ..., x_N)$ by breaking up the full joint probability into a sequence of conditional probabilities using the **[chain-rule of probability](https://en.wikipedia.org/wiki/Chain_rule_(probability))**:
\begin{align}
p(\mathbb{x}_{1:N}) &= p(x_1) \cdot p(x_2 | x_1) \cdot p(x_3 | x_2, x_1) \ldots \\
&= \prod_{i=1}^{N} p(x_i | \mathbb{x}_{1:i-1})
\end{align}
In other words, to model the probability of the phrase "*i saw a cat*" at the character level, the model learns to estimate the probabilities p(i), p(/space/ | i), p(s | i, /space/), and so forth, and multiplies them together.
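As a tiny numeric illustration (with made-up probabilities, not model outputs), the sequence probability is the product of the per-step conditionals, and its logarithm is the sum of per-step log-probabilities, which is what the cross-entropy loss we use later operates on:
```python
import numpy as np

step_probs = [0.1, 0.4, 0.05, 0.3]  # hypothetical p(i), p(' '|i), p(s|i,' '), ...
print(np.prod(step_probs))          # sequence probability: 0.0006
print(np.sum(np.log(step_probs)))   # log-probability: about -7.42
```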
There are many different ways in which to estimate these individual probabilities. But one particularly effective way is to use an RNN! To do this, we'll therefore be modeling the $p(x_i | \mathbb{x}_{1:i-1})$ terms using an RNN conditioned on $\mathbb{x}_{1:i-1}$.
* We model these probabilities at the character level, so we'll use an `Embedding` layer as the first layer of our model to map the discrete character ids to real-valued embedding vectors.
* Next, the RNN-core will map these sequences of character embeddings to a probability distribution over all characters $p(x_i | \mathbb{x}_{1:i-1}) \in \mathbb{R}^\textrm{vocab_size}$ at every step of the sequence. To do this, the RNN will map the embeddings to a sequence of *hidden states*. We will then use a `Dense` layer to map from the RNN hidden state to an output distribution over the total number of characters using a [`softmax`](https://en.wikipedia.org/wiki/Softmax_function) activation.
We can do this with a few lines of code:
```python
embedding_dim = 32 # Map each character to a unique vector of this dimension
vocab_size = len(chars)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Embedding(
vocab_size, embedding_dim,
input_length=max_len,
    embeddings_initializer=tf.keras.initializers.TruncatedNormal()))  # pass an instance, not the class
model.add(tf.keras.layers.LSTM(
128,
input_shape=(max_len, embedding_dim), # NB: Ensure this matches the embedding_dim!
dropout=0.1, # input-to-hidden drop-probability
recurrent_dropout=0.2)) # hidden-to-hidden drop-probability
model.add(tf.keras.layers.Dense(vocab_size, activation='softmax'))
model.summary()
```
Once we have a model that can map a sequence of characters to a probability distribution over the next character in the sequence, we can train it using **maximum likelihood** on the training set to find the model parameters which maximize the probability of the training data. Again, this is very simple to do by choosing an optimizer and selecting the `sparse_categorical_crossentropy` loss function:
```python
optimizer = tf.keras.optimizers.Adam()
loss='sparse_categorical_crossentropy'
model.compile(loss=loss, optimizer=optimizer)
```
### Optional extra reading: RNN vs GRU vs LSTM
Our language model above is using an RNN variant called the LSTM (long short-term memory). The LSTM mitigates a big problem with the vanilla RNN: vanishing and exploding gradients. It does this by allowing gradients to flow backwards through time largely unimpeded. As a result, the LSTM is very popular for sequence tasks. The key to the success of the LSTM is that it maintains an internal **cell state** ($\mathbf{c}_t$) which can be thought of as a 'memory' for the LSTM. The cell state provides the path along which the gradients can easily flow. The LSTM controls the contents of the cell state using three **gates**:
* The **forget** gate allows the LSTM to remove contents from the cell state.
* The **input** gate controls what new information (the candidate 'memory' values computed from the input data) is added to the cell state.
Finally, an **output** gate controls which part of the cell state we output at each time step.
A somewhat simplified version of the LSTM is the GRU (gated recurrent unit), which has also been used successfully in many applications and has only two gates.
Looking at the diagrams below, which show the vanilla RNN, LSTM, and GRU, respectively, work with your partner to try to understand the description above and then answer the following **questions:**
1. What does each of the gates in the GRU do?
2. How does the GRU address the vanishing/exploding gradients problem? In other words, how does it allow the gradient to pass through unimpeded?
If you get stuck or want to know more, you can read more about the LSTM, GRU, and other variants [here](http://colah.github.io/posts/2015-08-Understanding-LSTMs/).
<center></center>
<center></center>
<center></center>
###Helper functions
```python
def sample_with_temp(preds, temperature=1.0):
    # Rescale the model's predicted distribution by `temperature`, renormalize,
    # and draw a single character index from the result.
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature  # low T sharpens, high T flattens
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)  # one draw from the multinomial
    return np.argmax(probas)  # index of the sampled character
```
```python
def shift_and_append(test_arr, next_item):
'''Returns a copy of test_arr with items shifted one position to the left and
next_item appended.
'''
tmp = np.empty_like(test_arr)
tmp[:,:-1] = test_arr[:,1:]
tmp[:,-1] = next_item
return tmp
## TEST the above function:
test_arr = np.array([[1,2,3,4]])
print("test_arr = {}".format(test_arr))
test_arr = shift_and_append(test_arr, 5)
print("roll_arr(test_arr, 5) = {}".format(test_arr))
```
```python
def sample_from_model(model,
num_generate=400,
prev_text=None, # the text used to condition the model
temperatures=[0.2, 0.5, 1.0, 1.2]):
if not prev_text:
# Select a text seed at random
start_index = random.randint(0, len(training_text) - max_len - 1)
        while ((start_index < (len(training_text) - max_len - 1)) and (
                training_text[start_index - 1] != ' ')):  # `!=` (value), not `is not` (identity)
start_index += 1 # Advance to beginning of new word
prev_text = training_text[start_index: start_index + max_len]
if len(prev_text) != max_len:
print("`prev_text` must be of length `max_len`.")
return
print('GENERATING TEXT WITH SEED: \n"' + prev_text + '"')
prev_text_arr = np.array(
[[char2index[c] for c in prev_text]], dtype=np.int64)
for temp in temperatures:
print('==TEMPERATURE:', temp)
sys.stdout.write(prev_text)
# Start with the same sampled text for all temperatures
generated_text = prev_text
generated_text_arr = prev_text_arr
# Now generate this many characters
for i in range(num_generate):
# Get the output softmax given the conditioning text
#prev_text = generated_text_enc[np.newaxis,:]
preds = model.predict(generated_text_arr, verbose=0)[0]
next_index = sample_with_temp(preds, temp)
next_char = index2char[next_index]
generated_text += next_char
generated_text = generated_text[1:]
# Left-shift and add into encoded array
generated_text_arr = shift_and_append(generated_text_arr, next_index)
sys.stdout.write(next_char)
sys.stdout.flush()
print()
```
###Train the model
Let's train the model! The code below will train the model on a subset of the available data, and then generate from the model every `sample_every` batches.
To generate from the model, we use `model.predict()` on a sequence of `max_len` conditioning characters to produce an output distribution over `vocab_size` characters. We then sample one character from this distribution, shift the conditioning window along by one, and append the new character. By repeating this, we can generate text from the (partially-trained) model.
**NOTE**:
* It takes a while to train a model that starts generating anything resembling the Shakespeare text! In general it should start getting the rough structure in place around the 100K training-example mark (examples, not batches). But generating any meaningful words will need several hundred thousand examples.
* We sample with *temperature*. This is a way to sharpen or flatten the probabilities produced by the model. By lowering the temperature, we emphasize the modes of the predicted distribution, and by increasing the temperature, we flatten the modes (tending towards a uniform distribution). Higher temperatures therefore encourage the model to be more 'creative', instead of always choosing the most likely next character. The short numeric sketch below illustrates this.
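Here is a small numeric sketch (ours) of what the temperature rescaling in `sample_with_temp` above does to a toy distribution:
```python
import numpy as np

preds = np.array([0.6, 0.3, 0.1])
for temperature in [0.2, 1.0, 1.2]:
    logits = np.log(preds) / temperature
    rescaled = np.exp(logits) / np.exp(logits).sum()
    print(temperature, np.round(rescaled, 3))
# 0.2 [0.97  0.03  0.   ]   <- sharpened: almost always picks the mode
# 1.0 [0.6   0.3   0.1  ]   <- unchanged
# 1.2 [0.56  0.314 0.126]   <- flattened towards uniform
```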
```python
batch_size = 128
total_num_batches = X.shape[0] // batch_size
sample_every = 256 # Train on this many batches, then generate something
print("Training on {} batches in total.".format(total_num_batches))
for cur_batch in range(0, total_num_batches, sample_every):
print('TRAINING ON BATCH {} to {} (example {} to {})'.format(
cur_batch, cur_batch + sample_every,
cur_batch * batch_size, (cur_batch + sample_every) * batch_size)
)
X_batch = X[batch_size * cur_batch : batch_size * (cur_batch + sample_every), :]
Y_batch = Y[batch_size * cur_batch : batch_size * (cur_batch + sample_every), :]
'''
# Show the first 5 examples to make sure we're not training on garbage
print("X_batch.shape = {}".format(X_batch.shape))
print("Y_batch.shape = {}".format(Y_batch.shape))
print("FIRST 5 EXAMPLES:")
for num in range(5):
in_seq = [index2char[int(indx)] for indx in np.nditer(X_batch[num, :])]
next_char = index2char[Y_batch[num, 0]]
print(str(num) + '. ' + ''.join(in_seq) + '-->' + next_char)
'''
model.fit(X_batch, Y_batch,
batch_size=batch_size,
epochs=1,
verbose=1)
print("GENERATING SOME RANDOM TEXT FROM THE MODEL")
sample_from_model(model)
```
**NOTE**: Even after training has stopped you can still generate from the (partially trained) model as follows:
```python
my_text = " the meaning of life is:" # Needs to be max_len characters
print(len(my_text))
sample_from_model(model, prev_text=my_text)
```
###IMPORTANT NOTES
* Even if you stop training, the model weights are persistent. If you resume training, it will start where you left off.
* To reset the weights, you need to re-create the model (re-run the cell that defines it); recompiling alone does not reset them.
* Sampling is **stochastic** (random), so you'll get new outputs every time you rerun the sampling code.
### YOUR TASKS:
* [**ALL**] Read the generations from your model in a funny voice to your neighbour.
* [**ALL**] Increase `max_len` and regenerate the data and retrain the model.
* What's the effect on training speed as you double `max_len`? Can you explain why?
* Do you notice any effect on the quality of the model? Can you explain why?
* [**ALL**] Set `max_len` to be roughly the length of half a word, one word, two words... What kind of samples do these models generate? Explain how they go wrong.
* [**ALL**] Change `embedding_dim` and the hidden size of the LSTM and observe the effect on training speed and quality.
* [**INTERMEDIATE**] Change the dropout rates & retrain the model.
* What types of dropout do we get for recurrent models?
* What's the effect on the text quality?
* [**ADVANCED**] Implement the "teacher forcing" training methodology, where the net must predict the entire output sequence shifted forward by one character, instead of just the next character. Compare the output of a model trained with teacher forcing versus the per-character model, given a similar training time. Is it fair to say that teacher forcing is a more efficient training methodology?
**IMPORTANT: Please fill out the exit ticket form before you leave the practical: https://forms.gle/vkLk6hHCwNrNsb8u6**
##Further reading
* https://distill.pub/2016/augmented-rnns/
* https://distill.pub/2017/ctc/
* https://algotravelling.com/en/machine-learning-fun-part-5/
* http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
|
{"hexsha": "bb7d0594ad046de47fa01f44f35c24be45f3557b", "size": 131300, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "4a_recurrent_nets.ipynb", "max_stars_repo_name": "amrrs/indaba-pracs-2019", "max_stars_repo_head_hexsha": "33f6121a8aec5856936254524c385ea847635e5a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 84, "max_stars_repo_stars_event_min_datetime": "2019-07-09T14:45:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-03T15:07:42.000Z", "max_issues_repo_path": "4a_recurrent_nets.ipynb", "max_issues_repo_name": "amrrs/indaba-pracs-2019", "max_issues_repo_head_hexsha": "33f6121a8aec5856936254524c385ea847635e5a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-08-08T09:43:44.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-03T12:40:25.000Z", "max_forks_repo_path": "4a_recurrent_nets.ipynb", "max_forks_repo_name": "amrrs/indaba-pracs-2019", "max_forks_repo_head_hexsha": "33f6121a8aec5856936254524c385ea847635e5a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 46, "max_forks_repo_forks_event_min_datetime": "2019-05-11T16:14:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-03T07:11:53.000Z", "avg_line_length": 100.4590665647, "max_line_length": 25636, "alphanum_fraction": 0.7951332826, "converted": true, "num_tokens": 8963}
|
#include <boost/program_options.hpp>
#include <boost/format.hpp>
#include <iostream>
#include "../include/OutputFormatter.h"
#include "../include/Genre.h"
namespace RomViewer {
/* Init Functions */
OutputFormatter::OutputFormatter() { }
OutputFormatter::~OutputFormatter() { }
/* Primary Functions */
void OutputFormatter::PrintFullGenre(Genre * g) {
PrintGenre(g);
for(auto sg : g->GetSubGenres())
PrintGenre(sg.second);
}
void OutputFormatter::PrintGenre(Genre * g) {
std::string out = "[" + g->GetName() + "]\n";
for(auto r : g->GetLeafRoms())
out += "\t\t> " + r.second->GetLocalPath() + "\n";
std::cout << out;
}
/********************************** Settings **********************************/
void OutputFormatter::PrintHelp(
boost::program_options::options_description &desc) {
std::cout << desc << std::endl;
}
void OutputFormatter::PrintVersion() {
std::cout << "lsrom v. "
<< VERSION_MAJOR << "." << VERSION_MINOR
<< std::endl;
}
}//namespace RomViewer
|
{"hexsha": "991d9c674aa2ba63ca0a2ad654941a5ab3363078", "size": 1057, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "src/OutputFormatter.cpp", "max_stars_repo_name": "vyth/lsrom", "max_stars_repo_head_hexsha": "b926a1a2774f3c25d4e7d491d6de9296763557d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/OutputFormatter.cpp", "max_issues_repo_name": "vyth/lsrom", "max_issues_repo_head_hexsha": "b926a1a2774f3c25d4e7d491d6de9296763557d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/OutputFormatter.cpp", "max_forks_repo_name": "vyth/lsrom", "max_forks_repo_head_hexsha": "b926a1a2774f3c25d4e7d491d6de9296763557d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.4888888889, "max_line_length": 80, "alphanum_fraction": 0.582781457, "num_tokens": 248}
|
import numpy as np
from torch.utils.data import DataLoader
from typing import Tuple
class TorchDataset:
"""`TorchDataset` class.
Represents a dataset class for PyTorch.
"""
DATA_ROOT = "./data"
@classmethod
def numpy(
cls,
one_hot_encode: bool = True,
transformers: str = "default",
take: int = 100,
) -> Tuple[np.ndarray, ...]:
"""Loads train and test dataset for given model type as a numpy arrays.
:param one_hot_encode: If data should be one-hot-encoded.
:param transformers: Transformers for the dataset; either `default` or `training`.
:param take: Percentage of the data set to use.
:return: Train and Test data and labels as numpy arrays.
"""
n_train = round(cls.DATASET_SIZE["train"] / 100 * take)
n_test = round(cls.DATASET_SIZE["test"] / 100 * take)
train_loader, test_loader = cls.data_loader(
train_batch_size=n_train,
test_batch_size=n_test,
one_hot_encode=one_hot_encode,
transformers=transformers,
shuffle=False,
)
x_train, y_train = next(iter(train_loader))
x_train, y_train = x_train.numpy(), y_train.numpy()
x_test, y_test = next(iter(test_loader))
x_test, y_test = x_test.numpy(), y_test.numpy()
cls.validate(x_train, y_train, n_train)
cls.validate(x_test, y_test, n_test)
return x_train, y_train, x_test, y_test
@classmethod
def data_loader(
cls,
train_batch_size: int = 128,
test_batch_size: int = 128,
one_hot_encode: bool = True,
transformers: str = "default",
shuffle: bool = True,
) -> Tuple[DataLoader, DataLoader]:
"""Loads the dataset as train and test DataLoader.
:param train_batch_size: Batch size of the train data loader.
:param test_batch_size: Batch size of the test data loader.
:param one_hot_encode: If data should be one-hot-encoded.
:param transformers: Transformers for the dataset; either `default` or `training`.
:param shuffle: If training data should be shuffled.
:return: Train and test data loaders.
"""
train_set = cls.TORCH_MODULE(
root=cls.DATA_ROOT,
train=True,
download=True,
transform=cls.TRANSFORMERS[transformers]["train"],
)
test_set = cls.TORCH_MODULE(
root=cls.DATA_ROOT,
train=False,
download=True,
transform=cls.TRANSFORMERS[transformers]["test"],
)
if one_hot_encode:
train_set.targets = cls.one_hot_encode(np.array(train_set.targets))
test_set.targets = cls.one_hot_encode(np.array(test_set.targets))
return (
DataLoader(
train_set, batch_size=train_batch_size, shuffle=shuffle, num_workers=4
),
DataLoader(
test_set, batch_size=test_batch_size, shuffle=False, num_workers=4
),
)
@classmethod
def one_hot_encode(cls, y: np.ndarray) -> np.ndarray:
"""On-hot-encode labels.
:param y: Labels to be one-hot-encoded.
:return: One-hot-encoded labels.
"""
y_one_hot_encoded = np.zeros((y.shape[0], cls.N_CLASSES))
y_one_hot_encoded[np.arange(y.shape[0]), y] = 1
return y_one_hot_encoded
@classmethod
def validate(
cls,
x: np.ndarray,
y: np.ndarray,
n: int,
one_hot_encoded: bool = True,
):
"""Validates the data.
:param x: Data to be validated.
:param y: Labels for `x` to be validated.
:param n: Expected number of data points.
:param one_hot_encoded: If data is one-hot-encoded or not.
"""
assert x.shape == (n, *cls.INPUT_SHAPE)
if one_hot_encoded:
assert y.shape == (n, cls.N_CLASSES)
else:
            assert y.shape == (n,)  # note the comma: (n) without it is just the int n
|
{"hexsha": "9d3a54205ca0770ae4f59b65f102cd9889b9d352", "size": 4040, "ext": "py", "lang": "Python", "max_stars_repo_path": "privacy_evaluator/datasets/torch/torch.py", "max_stars_repo_name": "chen-yuxuan/privacy-evaluator", "max_stars_repo_head_hexsha": "ed4852408108c3e6a01216af4183261945fd7e67", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2021-04-10T15:01:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T14:45:21.000Z", "max_issues_repo_path": "privacy_evaluator/datasets/torch/torch.py", "max_issues_repo_name": "chen-yuxuan/privacy-evaluator", "max_issues_repo_head_hexsha": "ed4852408108c3e6a01216af4183261945fd7e67", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 175, "max_issues_repo_issues_event_min_datetime": "2021-04-13T08:32:27.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-30T09:44:51.000Z", "max_forks_repo_path": "privacy_evaluator/datasets/torch/torch.py", "max_forks_repo_name": "chen-yuxuan/privacy-evaluator", "max_forks_repo_head_hexsha": "ed4852408108c3e6a01216af4183261945fd7e67", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2021-04-13T08:03:36.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-05T15:35:01.000Z", "avg_line_length": 32.32, "max_line_length": 90, "alphanum_fraction": 0.5962871287, "include": true, "reason": "import numpy", "num_tokens": 917}
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# BCDI: tools for pre(post)-processing Bragg coherent X-ray diffraction imaging data
# (c) 07/2017-06/2019 : CNRS UMR 7344 IM2NP
# (c) 07/2019-present : DESY PHOTON SCIENCE
# authors:
# Jerome Carnis, carnis_jerome@yahoo.fr
import numpy as np
from matplotlib import pyplot as plt
import tkinter as tk
from tkinter import filedialog
import bcdi.postprocessing.postprocessing_utils as pu
import bcdi.graph.graph_utils as gu
helptext = """
Load a 3D BCDI reconstruction (.npz file) containing the fields 'amp' and 'strain'.
Calculate the mean and variance of the strain, for all voxels in the support.
"""
scan = 1301 # spec scan number
root_folder = "D:/data/SIXS_2019_Ni/"
sample_name = "S" # "S"
datadir = root_folder + sample_name + str(scan) + "/pynxraw/"
strain_range = 0.001 # for plots
support_threshold = 0.6 # threshold for support determination
use_bulk = False # True to use the bulk array as support,
# if False it will use support_threshold on the modulus to define the support
debug = True # True to see data plots
##########################
# end of user parameters #
##########################
###################
# define colormap #
###################
bad_color = "1.0" # white background
colormap = gu.Colormap(bad_color=bad_color)
my_cmap = colormap.cmap
#############
# load data #
#############
plt.ion()
root = tk.Tk()
root.withdraw()
file_path = filedialog.askopenfilename(initialdir=datadir, filetypes=[("NPZ", "*.npz")])
print("Opening ", file_path)
npzfile = np.load(file_path)
######################
# define the support #
######################
if use_bulk:
try:
bulk = npzfile["bulk"]
except KeyError:
print("Bulk is not of key of the npz file")
print("Using the modulus and support_threshold to define the bulk")
amp = npzfile["amp"]
bulk = pu.find_bulk(
amp=amp, support_threshold=support_threshold, method="threshold"
)
if debug:
gu.multislices_plot(
amp,
sum_frames=False,
title="Amplitude",
plot_colorbar=True,
cmap=my_cmap,
is_orthogonal=True,
reciprocal_space=False,
)
del amp
nz, ny, nx = bulk.shape
support = bulk
else: # use amplitude
print("Using the modulus and support_threshold to define the support")
amp = npzfile["amp"]
nz, ny, nx = amp.shape
support = np.ones((nz, ny, nx))
support[abs(amp) < support_threshold * abs(amp).max()] = 0
if debug:
gu.multislices_plot(
amp,
sum_frames=False,
title="Amplitude",
plot_colorbar=True,
cmap=my_cmap,
is_orthogonal=True,
reciprocal_space=False,
)
del amp
strain = npzfile["strain"]
strain[support == 0] = 0
print("Data size: ({:d},{:d},{:d})".format(nz, ny, nx))
if debug:
gu.multislices_plot(
strain,
sum_frames=False,
title="Strain",
plot_colorbar=True,
vmin=-strain_range,
vmax=strain_range,
cmap=my_cmap,
is_orthogonal=True,
reciprocal_space=False,
)
gu.multislices_plot(
support,
sum_frames=False,
title="Support",
plot_colorbar=True,
vmin=0,
vmax=1,
cmap=my_cmap,
is_orthogonal=True,
reciprocal_space=False,
)
#####################################################################
# calculate the mean, variance and RMS of the strain on the support #
#####################################################################
mean_strain = strain[np.nonzero(support)].mean()
var_strain = strain[np.nonzero(support)].var()
rms_strain = np.sqrt(np.mean(np.ndarray.flatten(strain[np.nonzero(support)]) ** 2))
print(
"Mean strain = ",
str("{:.4e}".format(mean_strain)).replace(".", ","),
"\nVariance strain = ",
str("{:.4e}".format(var_strain)).replace(".", ","),
"\nRMS strain = ",
str("{:.4e}".format(rms_strain)).replace(".", ","),
)
plt.ioff()
plt.show()
|
{"hexsha": "a27c5a5c9602b07cb78a0344ea522ae7eb85c0e2", "size": 4165, "ext": "py", "lang": "Python", "max_stars_repo_path": "scripts/postprocessing/bcdi_strain_mean_var_rms.py", "max_stars_repo_name": "sjleake/bcdi", "max_stars_repo_head_hexsha": "bf071ad085a11622158e1e651857a8a172c51cf1", "max_stars_repo_licenses": ["CECILL-B"], "max_stars_count": 18, "max_stars_repo_stars_event_min_datetime": "2020-04-30T08:48:39.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T14:42:01.000Z", "max_issues_repo_path": "scripts/postprocessing/bcdi_strain_mean_var_rms.py", "max_issues_repo_name": "sjleake/bcdi", "max_issues_repo_head_hexsha": "bf071ad085a11622158e1e651857a8a172c51cf1", "max_issues_repo_licenses": ["CECILL-B"], "max_issues_count": 78, "max_issues_repo_issues_event_min_datetime": "2019-06-30T03:45:58.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-23T15:04:44.000Z", "max_forks_repo_path": "scripts/postprocessing/bcdi_strain_mean_var_rms.py", "max_forks_repo_name": "sjleake/bcdi", "max_forks_repo_head_hexsha": "bf071ad085a11622158e1e651857a8a172c51cf1", "max_forks_repo_licenses": ["CECILL-B"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2019-07-03T17:18:53.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-12T15:54:56.000Z", "avg_line_length": 29.3309859155, "max_line_length": 88, "alphanum_fraction": 0.5776710684, "include": true, "reason": "import numpy", "num_tokens": 1032}
|
"""Python library for backtesting and analyzing trading strategies at scale.
While there are many great backtesting packages for Python, vectorbt was designed specifically for data mining:
it excels at processing performance and offers interactive tools to explore complex phenomena in trading.
With it you can traverse a huge number of strategy configurations, time periods and instruments in seconds,
to explore where your strategy performs best and to uncover hidden patterns in data. Accessing and analyzing
this information for yourself could give you an information advantage in your own trading.
## How it works
vectorbt was implemented to address common performance shortcomings of backtesting libraries.
It builds upon the idea that each instance of a trading strategy can be represented in a vectorized form,
so multiple strategy instances can be packed into a single multi-dimensional array, processed in a highly
efficient manner, and compared easily. It overhauls the traditional OOP approach that represents strategies
as classes or other data structures, which are far easier to write and extend, but harder to analyze
compared to vectors and require additional effort to do it fast.
Thanks to the time series nature of trading data, most of the aspects related to backtesting can be translated
to vectors. Instead of performing operations on one element at a time, vectorization allows us to avoid naive
looping and perform the same operation on all the elements at the same time. The path-dependency problem
related to vectorization is solved by using Numba - it allows both writing iterative code and compiling slow
Python loops to be run at native machine code speed.
## Performance
While it might seem tempting to perform all sorts of computations with pandas alone, the NumPy+Numba combo
outperforms pandas significantly, especially for basic operations:
```python-repl
>>> import numpy as np
>>> import pandas as pd
>>> import vectorbt as vbt
>>> big_ts = pd.DataFrame(np.random.uniform(size=(1000, 1000)))
>>> %timeit big_ts.pct_change()
280 ms ± 12.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> %timeit big_ts.vbt.pct_change()
5.95 ms ± 380 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
Even pandas functions already compiled with Cython/Numba are slower:
```python-repl
>>> %timeit big_ts.expanding().max()
48.4 ms ± 557 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit big_ts.vbt.expanding_max()
8.82 ms ± 121 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
Moreover, pandas functions cannot be accessed within user-defined Numba code, since Numba cannot do any
compilation on pandas objects. Take for example generating trailing stop orders: to calculate expanding
maximum for each order, you cannot simply do `df.expanding().max()` from within Numba, but you must write
and compile your own expanding max function wrapped with `@njit`. That's why vectorbt provides an arsenal
of Numba-compiled functions for any sort of task.
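For illustration, here is a minimal sketch of that pattern (our example, not vectorbt's actual implementation): an expanding maximum written as a plain loop and compiled with `@njit`:
```python-repl
>>> import numpy as np
>>> from numba import njit

>>> @njit
... def expanding_max_nb(a):
...     out = np.empty_like(a)
...     cur = -np.inf
...     for i in range(a.shape[0]):
...         cur = max(cur, a[i])  # plain Python loop, compiled to machine code
...         out[i] = cur
...     return out

>>> expanding_max_nb(np.array([1., 3., 2., 5., 4.]))
array([1., 3., 3., 5., 5.])
```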
## Usability
From the user's point of view, working with NumPy and Numba alone is difficult, since important information
in the form of index and columns, and all indexing checks, must be handled explicitly by the user,
making analysis prone to errors. That's why vectorbt introduces a namespace (accessor) to pandas objects
(see [extending pandas](https://pandas.pydata.org/pandas-docs/stable/development/extending.html)).
This way, user can easily switch between native pandas functionality such as indexing, and highly-efficient
vectorbt methods. Moreover, each vectorbt method is flexible and can work on both Series and DataFrames.
Another argument against using exclusively NumPy is iterative code: sometimes vectorized implementation is hard
to read or cannot be properly defined at all, and one must rely on an iterative approach instead,
which processes data in an element-by-element fashion. That's where Numba comes into play.
The [previous versions](https://github.com/polakowo/vectorbt/tree/9f270820dd3e5dc4ff5468dbcc14a29c4f45f557)
of vectorbt were written in pure NumPy which led to more performance but less usability.
### Indexing
vectorbt makes use of [hierarchical indexing](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html)
to store valuable information on each backtest. Take for example a simple crossover strategy:
it depends on the size of the fast and slow windows, and other hyper-parameters such as whether
it is SMA or EMA. Each of these hyper-parameters becomes an additional dimension for manipulating data
and gets stored as a separate column level. Below is an example of a column hierarchy for MACD:
```python-repl
>>> macd = vbt.MACD.run(
... pd.Series([1, 2, 3, 4, 3, 2, 1]),
... fast_window=(2, 3),
... slow_window=(3, 4),
... signal_window=(2, 3),
... macd_ewm=(True, False),
... signal_ewm=(False, True)
... )
>>> macd.signal
macd_fast_window 2 3
macd_slow_window 3 4
macd_signal_window 2 3
macd_macd_ewm True False
macd_signal_ewm False True
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 0.235073 NaN
4 0.168060 NaN
5 -0.054956 0.166667
6 -0.235246 -0.188889
```
Columns here capture different strategy configurations that can now be easily analyzed and compared.
You might, for example, consider grouping your performance by `macd_fast_window` to see how the size of
the fast window impacts profitability of the strategy.
The other advantage of vectorbt is that it ensures that the column hierarchy is preserved across
the whole backtesting pipeline, from signal generation, to performance modeling.
### Broadcasting
Moreover, vectorbt borrows broadcasting rules from NumPy.
For example, consider the following objects:
```python-repl
>>> sr = pd.Series([1, 2, 3], index=['x', 'y', 'z'])
>>> sr
x 1
y 2
z 3
dtype: int64
>>> df = pd.DataFrame([[4, 5, 6]], index=['x', 'y', 'z'], columns=['a', 'b', 'c'])
>>> df
a b c
x 4 5 6
y 4 5 6
z 4 5 6
```
Despite both having the same index, pandas can't figure out how to add them correctly:
```python-repl
>>> sr + df
a b c x y z
x NaN NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN NaN
```
And here is the expected result using vectorbt:
```python-repl
>>> sr.vbt + df
a b c
x 5 6 7
y 6 7 8
z 7 8 9
```
If the index or columns of the two objects differ, they are simply stacked upon each other
(by default):
```python-repl
>>> df2 = pd.DataFrame([[4, 5, 6]], index=['x', 'y', 'z'], columns=['a2', 'b2', 'c2'])
>>> df + df2
a a2 b b2 c c2
x NaN NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN NaN
>>> df.vbt + df2
a b c
a2 b2 c2
x 8 10 12
y 8 10 12
z 8 10 12
```
This way, you can perform operations on objects of arbitrary (but compatible) shapes, and still
preserve their index information. This is handy for combining complex DataFrames, such as signals
from different indicators.
## Example
To better understand how these concepts fit together in vectorbt, consider the following example.
You have a complex strategy that has lots of parameters. While brute-forcing all parameter combinations
seems to be a rather unrealistic attempt, vectorbt makes exactly this possible. It doesn't care whether
you have one strategy instance or millions. As soon as their vectors can be concatenated into a matrix,
you can analyze them in one go.
Let's start with fetching the daily price of Bitcoin:
```python-repl
>>> import numpy as np
>>> import pandas as pd
>>> import yfinance as yf
>>> from datetime import datetime
>>> import vectorbt as vbt
>>> # Prepare data
>>> start = datetime(2019, 1, 1)
>>> end = datetime(2020, 1, 1)
>>> btc_price = yf.Ticker("BTC-USD").history(start=start, end=end)['Open']
>>> btc_price
Date
2018-12-31 3866.84
2019-01-01 3746.71
2019-01-02 3849.22
...
2019-12-29 7317.65
2019-12-30 7420.27
2019-12-31 7294.44
Name: Open, Length: 366, dtype: float64
```
We will test a simple Dual Moving Average Crossover (DMAC) strategy. For this, we will be using the
`vectorbt.indicators.basic.MA` class for calculating moving averages and generating signals.
Our first test will be rather simple: buy when the 10-day moving average crosses above the 20-day moving
average, and sell when the opposite happens.
```python-repl
>>> # (10, 20) - 10 day moving average crosses 20 day moving average
>>> fast_ma = vbt.MA.run(btc_price, 10, short_name='fast')
>>> slow_ma = vbt.MA.run(btc_price, 20, short_name='slow')
>>> entries = fast_ma.ma_above(slow_ma, crossed=True)
>>> entries
Date
2018-12-31 False
2019-01-01 False
2019-01-02 False
...
2019-12-29 False
2019-12-30 False
2019-12-31 False
Name: (10, 20, Open), Length: 366, dtype: bool
>>> exits = fast_ma.ma_below(slow_ma, crossed=True)
>>> exits
Date
2018-12-31 False
2019-01-01 False
2019-01-02 False
...
2019-12-29 False
2019-12-30 False
2019-12-31 False
Name: (10, 20, Open), Length: 366, dtype: bool
>>> portfolio = vbt.Portfolio.from_signals(btc_price, entries, exits)
>>> portfolio.total_return
0.6633185970977524
```
One strategy instance of DMAC produced one column in signals and one performance value.
Adding one more strategy instance is as simple as adding a new column. Here we are passing an array of
window sizes instead of a single value. For each window size in this array, it will compute a moving
average over the entire price series and store it as a distinct column.
```python-repl
>>> # Multiple strategy instances: (10, 30) and (20, 30)
>>> fast_ma = vbt.MA.run(btc_price, [10, 20], short_name='fast')
>>> slow_ma = vbt.MA.run(btc_price, [30, 30], short_name='slow')
>>> entries = fast_ma.ma_above(slow_ma, crossed=True)
>>> entries
fast_window 10 20
slow_window 30 30
Date
2018-12-31 False False
2019-01-01 False False
2019-01-02 False False
... ... ...
2019-12-29 False False
2019-12-30 True False
2019-12-31 False False
[366 rows x 2 columns]
>>> exits = fast_ma.ma_below(slow_ma, crossed=True)
>>> exits
fast_window 10 20
slow_window 30 30
Date
2018-12-31 False False
2019-01-01 False False
2019-01-02 False False
... ... ...
2019-12-29 False False
2019-12-30 False False
2019-12-31 False False
[366 rows x 2 columns]
>>> portfolio = vbt.Portfolio.from_signals(btc_price, entries, exits)
>>> portfolio.total_return
fast_window slow_window
10 30 0.865956
20 30 0.547047
dtype: float64
```
For the sake of convenience, vectorbt has created column levels `fast_window` and `slow_window` for you
to easily identify which window size corresponds to which column.
Notice how the signal generation part remains the same for each example - most functions in vectorbt work on
time series of any shape. This allows creation of analysis pipelines that are universal to input data.
The representation of different features as columns offers endless possibilities for backtesting.
You could, for example, go a step further and conduct the same tests for Ethereum. To compare both instruments,
combine price series for Bitcoin and Ethereum into one DataFrame and run the same backtesting pipeline on it.
```python-repl
>>> # Multiple strategy instances and instruments
>>> eth_price = yf.Ticker("ETH-USD").history(start=start, end=end)['Open']
>>> comb_price = btc_price.vbt.concat(eth_price,
... keys=pd.Index(['BTC', 'ETH'], name='asset'))
>>> comb_price
asset BTC ETH
Date
2018-12-31 3866.84 140.03
2019-01-01 3746.71 133.42
2019-01-02 3849.22 141.52
... ... ...
2019-12-29 7317.65 128.27
2019-12-30 7420.27 134.80
2019-12-31 7294.44 132.61
[366 rows x 2 columns]
>>> fast_ma = vbt.MA.run(comb_price, [10, 20], short_name='fast')
>>> slow_ma = vbt.MA.run(comb_price, [30, 30], short_name='slow')
>>> entries = fast_ma.ma_above(slow_ma, crossed=True)
>>> entries
fast_window 10 20
slow_window 30 30
asset BTC ETH BTC ETH
Date
2018-12-31 False False False False
2019-01-01 False False False False
2019-01-02 False False False False
... ... ... ... ...
2019-12-29 False False False False
2019-12-30 True False False False
2019-12-31 False False False False
[366 rows x 4 columns]
>>> exits = fast_ma.ma_below(slow_ma, crossed=True)
>>> exits
fast_window 10 20
slow_window 30 30
asset BTC ETH BTC ETH
Date
2018-12-31 False False False False
2019-01-01 False False False False
2019-01-02 False False False False
... ... ... ... ...
2019-12-29 False False False False
2019-12-30 False False False False
2019-12-31 False False False False
[366 rows x 4 columns]
>>> # Notice that we need to align the price to the shape of signals
>>> portfolio = vbt.Portfolio.from_signals(
... comb_price.vbt.tile(2), entries, exits)
>>> portfolio.total_return
fast_window slow_window asset
10 30 BTC 0.865956
ETH 0.249013
20 30 BTC 0.547047
ETH -0.319945
dtype: float64
>>> mean_return = portfolio.total_return.groupby('asset').mean()
>>> mean_return.vbt.bar(
... xaxis_title='Asset',
... yaxis_title='Mean total return').show_png()
```

Not only strategies and instruments can act as separate features, but also time! If you want to find out
when your strategy performs best, it's reasonable to test it over multiple time periods. vectorbt allows
you to split one time period into many (given they have the same length and frequency) and represent
them as distinct columns. For example, let's split `[2019-1-1, 2020-1-1]` into two equal time periods -
`[2018-12-31, 2019-07-01]` and `[2019-07-02, 2019-12-31]`, and backtest them all at once.
```python-repl
>>> # Multiple strategy instances, instruments and time periods
>>> mult_comb_price = comb_price.vbt.split_into_ranges(n=2)
>>> mult_comb_price
asset BTC ETH
range_start 2018-12-31 2019-07-02 2018-12-31 2019-07-02
range_end 2019-07-01 2019-12-31 2019-07-01 2019-12-31
0 3866.84 10588.68 140.03 293.54
1 3746.71 10818.16 133.42 291.76
2 3849.22 11972.72 141.52 303.03
3 3931.05 11203.10 155.20 284.38
4 3832.04 10982.54 148.91 287.89
.. ... ... ... ...
178 13017.12 7238.14 336.96 126.37
179 11162.17 7289.03 294.14 127.21
180 12400.76 7317.65 311.28 128.27
181 11931.99 7420.27 319.58 134.80
182 10796.93 7294.44 290.27 132.61
[183 rows x 4 columns]
>>> fast_ma = vbt.MA.run(mult_comb_price, [10, 20], short_name='fast')
>>> slow_ma = vbt.MA.run(mult_comb_price, [30, 30], short_name='slow')
>>> entries = fast_ma.ma_above(slow_ma, crossed=True)
>>> exits = fast_ma.ma_below(slow_ma, crossed=True)
>>> portfolio = vbt.Portfolio.from_signals(
... mult_comb_price.vbt.tile(2), entries, exits, freq='1D')
>>> portfolio.total_return
fast_window slow_window asset range_start range_end
10 30 BTC 2018-12-31 2019-07-01 1.631617
2019-07-02 2019-12-31 -0.281432
ETH 2018-12-31 2019-07-01 0.941945
2019-07-02 2019-12-31 -0.306689
20 30 BTC 2018-12-31 2019-07-01 1.725547
2019-07-02 2019-12-31 -0.417770
ETH 2018-12-31 2019-07-01 0.336136
2019-07-02 2019-12-31 -0.257854
dtype: float64
```
Notice how the index is no longer datetime-like, since it captures multiple time periods.
That's why it's required here to pass the frequency `freq` to the `vectorbt.portfolio.base.Portfolio`
class methods in order to be able to compute performance metrics such as Sharpe ratio.
The index hierarchy of the final performance series can be then used to group performance
by any feature, such as window pair, asset, and time period.
```python-repl
>>> mean_return = portfolio.total_return.groupby(['range_end', 'asset']).mean()
>>> mean_return = mean_return.unstack(level=-1).vbt.bar(
... xaxis_title='End date',
... yaxis_title='Mean total return',
... legend_title_text='Asset').show_png()
```

There is much more to backtesting than simply stacking columns. vectorbt offers functions for
most parts of a common backtesting pipeline, from building indicators and generating signals, to
modeling portfolio performance and visualizing results.
## Notebooks
- [Who beats Bitcoin: Dual moving average crossover, trading randomly or holding?](https://nbviewer.jupyter.org/github/polakowo/vectorbt/blob/master/examples/Bitcoin-DMAC.ipynb)
- [How stop-loss and trailing stop orders perform on crypto?](https://nbviewer.jupyter.org/github/polakowo/vectorbt/blob/master/examples/StopLoss-vs-TrailingStop.ipynb)
- [How to backtest per trading session](https://nbviewer.jupyter.org/github/polakowo/vectorbt/blob/master/examples/Trading-Sessions.ipynb)
There is also [a range of notebooks](https://github.com/polakowo/vectorbt/tree/master/tests/notebooks) for testing purposes.
"""
# Load all accessors
import vectorbt.root_accessors
import vectorbt.base.accessors
import vectorbt.generic.accessors
import vectorbt.signals.accessors
import vectorbt.returns.accessors
import vectorbt.ohlcv.accessors
# Most important classes
from vectorbt.generic import nb, plotting
from vectorbt.records import (
MappedArray,
Records,
Orders,
Events,
Trades,
Positions,
Drawdowns
)
from vectorbt.portfolio import Portfolio
from vectorbt.indicators import (
IndicatorFactory,
MA,
MSTD,
BollingerBands,
RSI,
Stochastic,
MACD,
OBV,
ATR
)
# silence NumbaExperimentalFeatureWarning
import warnings
from numba.core.errors import NumbaExperimentalFeatureWarning
warnings.filterwarnings("ignore", category=NumbaExperimentalFeatureWarning)
|
{"hexsha": "01d79e96f5179598fd5b16fe9b900a963bb1457a", "size": 18721, "ext": "py", "lang": "Python", "max_stars_repo_path": ".venv/lib/python3.8/site-packages/vectorbt/__init__.py", "max_stars_repo_name": "eo1989/VectorBTanalysis", "max_stars_repo_head_hexsha": "bea3deaf2ee3fc114b308146f2af3e4f35f70197", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": ".venv/lib/python3.8/site-packages/vectorbt/__init__.py", "max_issues_repo_name": "eo1989/VectorBTanalysis", "max_issues_repo_head_hexsha": "bea3deaf2ee3fc114b308146f2af3e4f35f70197", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": ".venv/lib/python3.8/site-packages/vectorbt/__init__.py", "max_forks_repo_name": "eo1989/VectorBTanalysis", "max_forks_repo_head_hexsha": "bea3deaf2ee3fc114b308146f2af3e4f35f70197", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.2928286853, "max_line_length": 177, "alphanum_fraction": 0.6913626409, "include": true, "reason": "import numpy,from numba", "num_tokens": 5170}
|
# coding=utf-8
# Copyright 2021 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for task."""
import numpy as np
import tensorflow as tf
from multiple_user_representations.models import task
class TaskTest(tf.test.TestCase):
def test_retrieval_task(self):
user_embeddings = tf.convert_to_tensor(
np.arange(12).reshape(2, 3, 2), dtype=tf.float32)
item_embeddings = tf.convert_to_tensor(
np.array([[0.0, 0.1], [0.2, 0.0]]), dtype=tf.float32)
loss = task.MultiShotRetrievalTask()(user_embeddings, item_embeddings)
self.assertAlmostEqual(loss.numpy(), 1.1955092)
def test_multi_query_factorized_top_k_with_multi_streaming(self):
num_candidates, num_queries, num_heads, embedding_dim = (100, 10, 3, 4)
rng = np.random.RandomState(42)
# Create testdata
candidates = rng.normal(size=(num_candidates,
embedding_dim)).astype(np.float32)
query = rng.normal(size=(num_queries, num_heads,
embedding_dim)).astype(np.float32)
true_candidates = rng.normal(size=(num_queries,
embedding_dim)).astype(np.float32)
# Compute positive and candidate scores
positive_scores = (query * np.expand_dims(true_candidates, 1)).sum(
axis=-1, keepdims=True).max(axis=1)
candidate_scores = (query @ candidates.T).max(axis=1)
# Concatenate all scores (Positive first)
all_scores = np.concatenate([positive_scores, candidate_scores], axis=1)
ks = [1, 5, 10, 50]
# Prepare item candidate dataset
candidates = tf.data.Dataset.from_tensor_slices(candidates).batch(32)
candidates = task.MultiQueryStreaming().index_from_dataset(candidates)
# Initialize and update the metric state.
metric = task.MultiQueryFactorizedTopK(
candidates=candidates,
metrics=[
tf.keras.metrics.TopKCategoricalAccuracy(k=x, name=f'HR@{x}')
for x in ks
],
k=max(ks),
)
metric.update_state(
query_embeddings=query, true_candidate_embeddings=true_candidates)
for k, metric_value in zip(ks, metric.result()):
in_top_k = tf.math.in_top_k(
targets=np.zeros(num_queries).astype(np.int32),
predictions=all_scores,
k=k)
self.assertAllClose(metric_value, in_top_k.numpy().mean())
if __name__ == '__main__':
tf.test.main()
|
{"hexsha": "8350c65ed54ba75222e685754d26b950fc50dbea", "size": 2941, "ext": "py", "lang": "Python", "max_stars_repo_path": "multiple_user_representations/models/task_test.py", "max_stars_repo_name": "xxdreck/google-research", "max_stars_repo_head_hexsha": "dac724bc2b9362d65c26747a8754504fe4c615f8", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 23901, "max_stars_repo_stars_event_min_datetime": "2018-10-04T19:48:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T21:27:42.000Z", "max_issues_repo_path": "multiple_user_representations/models/task_test.py", "max_issues_repo_name": "xxdreck/google-research", "max_issues_repo_head_hexsha": "dac724bc2b9362d65c26747a8754504fe4c615f8", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 891, "max_issues_repo_issues_event_min_datetime": "2018-11-10T06:16:13.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T10:42:34.000Z", "max_forks_repo_path": "multiple_user_representations/models/task_test.py", "max_forks_repo_name": "admariner/google-research", "max_forks_repo_head_hexsha": "7cee4b22b925581d912e8d993625c180da2a5a4f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6047, "max_forks_repo_forks_event_min_datetime": "2018-10-12T06:31:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T13:59:28.000Z", "avg_line_length": 33.8045977011, "max_line_length": 76, "alphanum_fraction": 0.6851411085, "include": true, "reason": "import numpy", "num_tokens": 679}
|
import logging
import networkx as nx
import numpy as np
from joblib import Parallel, delayed
from tqdm import tqdm
from pygkernels.data.utils import np2nx
class GraphGenerator:
@classmethod
def params_from_adj_matrix(cls, A, partition, name=None):
return cls.params_from_graph(np2nx(A, partition), name=name)
@classmethod
def params_from_graph(cls, G, name=None) -> 'GraphGenerator':
raise NotImplementedError()
def generate_graph(self, seed=None) -> (np.ndarray, np.ndarray):
raise NotImplementedError()
def generate_info(self, n_graphs) -> dict:
raise NotImplementedError()
def generate_connected_graph(self, seed=None):
if seed is not None:
np.random.seed(seed)
A, partition = self.generate_graph()
G = nx.from_numpy_matrix(A)
while not nx.is_connected(G):
components = list(nx.connected_components(G))
print(f'not connected! {len(components)} components')
            G.add_edge(np.random.choice(list(components[0])), np.random.choice(list(components[1])))
        A = nx.to_numpy_array(G)  # sync A with the edges added above, so the returned matrix is connected too
        return A, partition
def generate_graphs(self, n_graphs, is_connected=True, verbose=False, n_jobs=1):
logging.info(f'{self.__class__.__name__}: count={n_graphs}')
graphs_range = range(n_graphs)
if verbose:
graphs_range = tqdm(graphs_range, desc=f'{self.__class__.__name__}')
generate_method = self.generate_connected_graph if is_connected else self.generate_graph
graphs = Parallel(n_jobs=n_jobs)(delayed(generate_method)(idx) for idx in graphs_range)
info = self.generate_info(n_graphs)
return graphs, info
|
{"hexsha": "6185783b6bc1942c051bac6124dc2406bf6b4cd4", "size": 1686, "ext": "py", "lang": "Python", "max_stars_repo_path": "pygkernels/data/graph_generator.py", "max_stars_repo_name": "vlivashkin/pygraphs", "max_stars_repo_head_hexsha": "ec0ec0d064c0fc7f5a3620e94152bc3fe2f9feaf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2020-04-19T01:56:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-08T04:50:41.000Z", "max_issues_repo_path": "pygkernels/data/graph_generator.py", "max_issues_repo_name": "illusionww/pygraphs", "max_issues_repo_head_hexsha": "ec0ec0d064c0fc7f5a3620e94152bc3fe2f9feaf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pygkernels/data/graph_generator.py", "max_forks_repo_name": "illusionww/pygraphs", "max_forks_repo_head_hexsha": "ec0ec0d064c0fc7f5a3620e94152bc3fe2f9feaf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-04-01T15:50:06.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-02T15:13:28.000Z", "avg_line_length": 35.8723404255, "max_line_length": 100, "alphanum_fraction": 0.68742586, "include": true, "reason": "import numpy,import networkx", "num_tokens": 366}
|
import os
from qtpy.QtWidgets import QFileDialog
from qtpy import QtGui
import numpy as np
from collections import OrderedDict
import glob
from NeuNorm.normalization import Normalization
from __code.file_handler import make_or_reset_folder
from __code.panoramic_stitching_for_tof.image_handler import HORIZONTAL_MARGIN, VERTICAL_MARGIN
from __code.file_handler import copy_and_rename_files_to_folder
FILE_PREFIX = "image_"
class Export:
def __init__(self, parent=None):
self.parent = parent
def run(self):
output_folder = QFileDialog.getExistingDirectory(self.parent,
directory=self.parent.working_dir,
caption="Select where the folder containing the "
"panoramic images will be created!",
options=QFileDialog.ShowDirsOnly)
if output_folder:
self.parent.ui.setEnabled(False)
QtGui.QGuiApplication.processEvents()
self.create_panoramic_images()
self.export_images(output_folder=output_folder)
self.parent.ui.setEnabled(True)
def create_panoramic_images(self, output_folder=None):
data_dictionary = self.parent.data_dictionary
offset_dictionary = self.parent.offset_dictionary
list_folder = list(data_dictionary.keys())
nbr_files = len(data_dictionary[list_folder[0]].keys())
self.parent.eventProgress.setMaximum(nbr_files)
self.parent.eventProgress.setValue(0)
self.parent.eventProgress.setVisible(True)
# width, height because we did a transpose to display current_live_image correctly
panoramic_width, panoramic_height = np.shape(self.parent.current_live_image)
image_height = self.parent.image_height
image_width = self.parent.image_width
panoramic_images_dict = OrderedDict()
self.parent.ui.statusbar.showMessage("Create the panoramic images ...")
QtGui.QGuiApplication.processEvents()
for _file_index in np.arange(nbr_files):
panoramic_image = np.zeros((panoramic_height, panoramic_width))
for _folder_index, _folder in enumerate(data_dictionary.keys()):
xoffset = offset_dictionary[_folder]['xoffset']
yoffset = offset_dictionary[_folder]['yoffset']
list_files = list(data_dictionary[_folder].keys())
image = data_dictionary[_folder][list_files[_file_index]].data
if _folder_index == 0:
panoramic_image[yoffset+VERTICAL_MARGIN: yoffset+image_height+VERTICAL_MARGIN,
xoffset+HORIZONTAL_MARGIN: xoffset+image_width+HORIZONTAL_MARGIN] = image
continue
temp_big_image = np.zeros((panoramic_height, panoramic_width))
temp_big_image[yoffset+VERTICAL_MARGIN: yoffset+image_height+VERTICAL_MARGIN,
xoffset+HORIZONTAL_MARGIN: xoffset+image_width+HORIZONTAL_MARGIN] = image
where_temp_big_image_has_value_only = np.where((temp_big_image != 0) & (panoramic_image == 0))
where_both_images_overlap = np.where((panoramic_image != 0) & (temp_big_image != 0))
panoramic_image[where_temp_big_image_has_value_only] = \
temp_big_image[where_temp_big_image_has_value_only]
panoramic_image[where_both_images_overlap] = (panoramic_image[where_both_images_overlap] +
temp_big_image[where_both_images_overlap]) / 2
file_name = FILE_PREFIX + "{:04d}.tiff".format(_file_index)
panoramic_images_dict[file_name] = panoramic_image
self.parent.eventProgress.setValue(_file_index + 1)
QtGui.QGuiApplication.processEvents()
self.parent.panoramic_images_to_export = panoramic_images_dict
self.parent.eventProgress.setVisible(False)
def export_images(self, output_folder=None):
new_folder_name = os.path.basename(self.parent.working_dir) + "_panoramic"
self.parent.ui.statusbar.showMessage("Exporting images in folder {}".format(new_folder_name))
QtGui.QGuiApplication.processEvents()
new_output_folder_name = os.path.join(output_folder, new_folder_name)
make_or_reset_folder(new_output_folder_name)
panoramic_images_dict = self.parent.panoramic_images_to_export
list_data = []
list_filename = []
for _file_name in panoramic_images_dict.keys():
list_data.append(panoramic_images_dict[_file_name])
list_filename.append(_file_name)
o_norm = Normalization()
o_norm.load(data=list_data)
o_norm.data['sample']['filename'] = list_filename
o_norm.export(new_output_folder_name, data_type='sample')
self.copy_txt_files_to_output_folder(output_folder=new_output_folder_name)
self.parent.ui.statusbar.showMessage("{} has been created!".format(new_output_folder_name), 10000) # 10s
QtGui.QGuiApplication.processEvents()
def copy_txt_files_to_output_folder(self, output_folder=None):
list_input_folders = self.parent.list_folders
first_folder = list_input_folders[0]
list_txt_files = glob.glob(first_folder + "/*.txt")
list_new_file_names = []
for txt_file in list_txt_files:
split_name = os.path.basename(txt_file).split("_")
new_name = split_name[-1]
list_new_file_names.append(new_name)
copy_and_rename_files_to_folder(list_files=list_txt_files,
new_list_files_names=list_new_file_names,
output_folder=output_folder)
|
{"hexsha": "c0fd8056178ae75d9d3a3225502f90ae80e89fcd", "size": 5916, "ext": "py", "lang": "Python", "max_stars_repo_path": "notebooks/__code/panoramic_stitching_for_tof/export.py", "max_stars_repo_name": "mabrahamdevops/python_notebooks", "max_stars_repo_head_hexsha": "6d5e7383b60cc7fd476f6e85ab93e239c9c32330", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/__code/panoramic_stitching_for_tof/export.py", "max_issues_repo_name": "mabrahamdevops/python_notebooks", "max_issues_repo_head_hexsha": "6d5e7383b60cc7fd476f6e85ab93e239c9c32330", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/__code/panoramic_stitching_for_tof/export.py", "max_forks_repo_name": "mabrahamdevops/python_notebooks", "max_forks_repo_head_hexsha": "6d5e7383b60cc7fd476f6e85ab93e239c9c32330", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.8222222222, "max_line_length": 113, "alphanum_fraction": 0.6643002028, "include": true, "reason": "import numpy", "num_tokens": 1127}
|
'''
An implementation of a subset of the following symbolic nomenclature:
http://www.ncbi.nlm.nih.gov/books/NBK310273/table/symbolnomenclature.T.monosaccharide_symb/?report=objectonly
'''
import logging
from collections import Counter
from functools import partial
import numpy as np
from matplotlib.path import Path
from matplotlib.textpath import TextPath
import matplotlib.patches as patches
import matplotlib
from matplotlib.colors import rgb2hex
from matplotlib.transforms import Affine2D
import glypy
from glypy.structure import Modification, Stem
from glypy.utils.enum import Enum
from glypy.io.nomenclature import identity
from .common import MonosaccharidePatch
from .symbolic_nomenclature import SymbolicNomenclatureBase
logger = logging.getLogger(__name__)
default_line_color = 'black'
default_scale_factor = 1.0
zorder = 2
line_weight = 0.5
NeuAc = glypy.monosaccharides.NeuAc
NeuGc = glypy.monosaccharides.NeuGc
class CFGNomenclature(SymbolicNomenclatureBase):
class ResidueColor(Enum):
gal = rgb2hex((255 / 255., 255 / 255., 0 / 255.)) # yellow
lyx = rgb2hex((255 / 255., 255 / 255., 0 / 255.)) # yellow
glc = rgb2hex((0 / 255., 0 / 255., 255 / 255.)) # blue
man = rgb2hex((0, 200 / 255., 50 / 255.)) # green
fuc = rgb2hex((255 / 255., 0 / 255., 0 / 255.)) # red
xyl = rgb2hex((250 / 255., 234 / 255., 213 / 255.)) # orange
neuac = rgb2hex((200 / 255., 0 / 255., 200 / 255.)) # purple
neugc = rgb2hex((233 / 255., 255 / 255., 255 / 255.)) # light blue
kdn = rgb2hex((0, 200 / 255., 50 / 255.)) # green
glca = rgb2hex((0 / 255., 0 / 255., 255 / 255.)) # blue
idoa = rgb2hex((150 / 255., 100 / 255., 50 / 255.)) # tan
gala = rgb2hex((255 / 255., 255 / 255., 0 / 255.)) # yellow
mana = rgb2hex((0, 200 / 255., 50 / 255.)) # green
generic = 'white'
class ResidueShape(Enum):
circle = 1
square = 2
bisected_square = 3
triangle = 4
star = 5
diamond = 6
top_bisected_diamond = 7
left_bisected_diamond = 8
right_bisected_diamond = 9
bottom_bisected_diamond = 10
generic = 11
def residue_color(self, monosaccharide):
'''
Determine which color to use to represent `monosaccharide` under the CFG
symbol nomenclature.
Parameters
----------
monosaccharide: |Monosaccharide|
The residue to be rendered
Returns
-------
ResidueColor.EnumValue
'''
if any(mod == Modification.a for p, mod in monosaccharide.modifications.items()):
return self.resolve_acid_color(monosaccharide)
if "hex" in [monosaccharide.superclass]:
if any(mod == Modification.d for p, mod in monosaccharide.modifications.items()) and\
monosaccharide.stem == (Stem.gal,):
return self.ResidueColor.fuc
try:
return self.ResidueColor[monosaccharide.stem[0]]
except KeyError:
return self.ResidueColor.generic
def resolve_acid_color(self, monosaccharide):
'''
Resolve the special case in :func:`residue_color` for acidic residues
'''
if ('gro' in monosaccharide.stem) and ('gal' in monosaccharide.stem):
if any(sub.name == 'n_acetyl' for p, sub in monosaccharide.substituents()):
return self.ResidueColor.neuac
elif any(sub.name == 'n_glycolyl' for p, sub in monosaccharide.substituents()):
return self.ResidueColor.neugc
else:
return self.ResidueColor.kdn
elif 'glc' in monosaccharide.stem:
return self.ResidueColor.glca
elif 'gal' in monosaccharide.stem:
return self.ResidueColor.gala
elif 'man' in monosaccharide.stem:
return self.ResidueColor.mana
elif 'ido' in monosaccharide.stem:
return self.ResidueColor.idoa
def residue_shape(self, monosaccharide):
'''
Determine which shape to use to represent `monosaccharide` under the CFG
symbol nomenclature.
Parameters
----------
monosaccharide: |Monosaccharide|
The residue to be rendered
Returns
-------
ResidueShape.EnumValue
'''
if any(mod == Modification.a for p, mod in monosaccharide.modifications.items()):
return self.resolve_acid_shape(monosaccharide)
if "hex" in [monosaccharide.superclass]:
if any(sub.name == 'n_acetyl' for p, sub in monosaccharide.substituents()):
return self.ResidueShape.square
elif any(sub.name == 'amino' or sub.name.startswith("n_") for p, sub in monosaccharide.substituents()):
return self.ResidueShape.bisected_square
elif any(mod == Modification.d for p, mod in monosaccharide.modifications.items()):
return self.ResidueShape.triangle
return self.ResidueShape.circle
elif "pen" in [monosaccharide.superclass]:
if 'xyl' in monosaccharide.stem or 'lyx' in monosaccharide.stem:
return self.ResidueShape.star
return self.ResidueShape.generic
def resolve_acid_shape(self, monosaccharide):
'''
Resolve the special case in :func:`residue_shape` for acidic residues
'''
if ('gro' in monosaccharide.stem) and ('gal' in monosaccharide.stem):
if any(sub.name == 'n_acetyl' for p, sub in monosaccharide.substituents()):
return self.ResidueShape.diamond
elif any(sub.name == 'n_glycolyl' for p, sub in monosaccharide.substituents()):
return self.ResidueShape.diamond
else:
return self.ResidueShape.diamond
elif 'glc' in monosaccharide.stem:
return self.ResidueShape.top_bisected_diamond
elif 'gal' in monosaccharide.stem:
return self.ResidueShape.left_bisected_diamond
elif 'man' in monosaccharide.stem:
return self.ResidueShape.right_bisected_diamond
elif 'ido' in monosaccharide.stem:
return self.ResidueShape.bottom_bisected_diamond
else:
return self.ResidueShape.bottom_bisected_diamond
def draw_circle(self, ax, x, y, color, scale=0.1):
path = Path(Path.unit_circle().vertices * scale, Path.unit_circle().codes)
trans = Affine2D().translate(x, y)
t_path = path.transformed(trans)
patch = patches.PathPatch(
t_path, facecolor=color.value, lw=line_weight, zorder=2)
a = ax.add_patch(patch)
ma = MonosaccharidePatch(saccharide_shape=(a,))
return ma
def draw_square(self, ax, x, y, color, scale=0.1):
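        # Vertices are doubled so the square's half-width matches the unit
        # circle's radius (1.0) before the shared `scale` factor is applied.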
square_verts = np.array([
(0.5, 0.5),
(0.5, -0.5),
(-0.5, -0.5),
(-0.5, 0.5),
(0.5, 0.5),
(0., 0.),
]) * 2
square_codes = [
Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
path = Path(square_verts * scale, square_codes)
trans = Affine2D().translate(x, y)
t_path = path.transformed(trans)
patch = patches.PathPatch(
t_path, facecolor=color.value, lw=line_weight, zorder=2)
a = ax.add_patch(patch)
ma = MonosaccharidePatch(saccharide_shape=(a,))
return ma
def draw_triangle(self, ax, x, y, color, scale=0.1):
path = Path(Path.unit_regular_polygon(3).vertices * scale, Path.unit_regular_polygon(3).codes)
trans = Affine2D().translate(
x, y).rotate_deg_around(x, y, 0)
t_path = path.transformed(trans)
patch = patches.PathPatch(
t_path, facecolor=color.value, lw=line_weight, zorder=2)
a = ax.add_patch(patch)
ma = MonosaccharidePatch(saccharide_shape=(a,))
return ma
def draw_bisected_square(self, ax, x, y, color, scale=0.1):
lower_verts = (np.array([
(0., 0.),
(1.0, 0),
(0, 1.0),
(0, 0),
(0., 0.),
]) - 0.5) / 5
upper_verts = (np.array([
(1., 1.),
(1.0, 0),
(0, 1.0),
(1, 1),
(0., 0.),
]) - 0.5) / 5
codes = [Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
lower_path = Path(lower_verts, codes).transformed(
Affine2D().translate(x, y))
upper_path = Path(upper_verts, codes).transformed(
Affine2D().translate(x, y))
patch = patches.PathPatch(
lower_path, facecolor=color.value, lw=line_weight, zorder=2)
a = ax.add_patch(patch)
patch = patches.PathPatch(
upper_path, facecolor="white", lw=line_weight, zorder=2)
b = ax.add_patch(patch)
ma = MonosaccharidePatch(saccharide_shape=(a, b))
return ma
def draw_diamond(self, ax, x, y, color, scale=0.1):
path = Path(Path.unit_regular_polygon(4).vertices * scale, Path.unit_regular_polygon(4).codes)
trans = Affine2D().translate(x, y)
t_path = path.transformed(trans)
patch = patches.PathPatch(
t_path, facecolor=color.value, lw=line_weight, zorder=2)
        a = ax.add_patch(patch)
ma = MonosaccharidePatch(saccharide_shape=(a,))
return ma
def draw_right_bisected_diamond(self, ax, x, y, color, scale=0.1):
return self.draw_horizontal_bisected_diamond(ax, x, y, color, scale, 'left')
def draw_left_bisected_diamond(self, ax, x, y, color, scale=0.1):
return self.draw_horizontal_bisected_diamond(ax, x, y, color, scale, 'right')
def draw_top_bisected_diamond(self, ax, x, y, color, scale=0.1):
return self.draw_vertical_bisected_diamond(ax, x, y, color, scale, 'top')
def draw_bottom_bisected_diamond(self, ax, x, y, color, scale=0.1):
return self.draw_vertical_bisected_diamond(ax, x, y, color, scale, 'bottom')
def draw_vertical_bisected_diamond(self, ax, x, y, color, scale=0.1, side=None):
lower_verts = (np.array([
(0., 0.),
(1.0, 0),
(0, 1.0),
(0, 0),
(0., 0.),
]) - 0.5) / 5
upper_verts = (np.array([
(1., 1.),
(1.0, 0),
(0, 1.0),
(1, 1),
(0., 0.),
]) - 0.5) / 5
codes = [
Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
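        # The two right triangles share the square's anti-diagonal; rotating
        # 45 degrees about (x, y) turns the square into a diamond whose two
        # halves lie above and below the split.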
lower_path = Path(lower_verts, codes).transformed(
Affine2D().translate(x, y).rotate_deg_around(x, y, 45))
upper_path = Path(upper_verts, codes).transformed(
Affine2D().translate(x, y).rotate_deg_around(x, y, 45))
try:
color = color.value
except AttributeError:
color = 'white'
if side == 'top':
top_color = color
bottom_color = 'white'
elif side == 'bottom':
top_color = 'white'
bottom_color = color
patch = patches.PathPatch(
lower_path, facecolor=bottom_color, lw=line_weight, zorder=2)
a = ax.add_patch(patch)
patch = patches.PathPatch(
upper_path, facecolor=top_color, lw=line_weight, zorder=2)
b = ax.add_patch(patch)
ma = MonosaccharidePatch(saccharide_shape=(a, b))
return ma
# return a, b
def draw_horizontal_bisected_diamond(self, ax, x, y, color, scale=0.1, side=None):
left_verts = (np.array([
(0., 0.),
(1.0, 0),
(0, 1.0),
(0, 0),
(0., 0.),
]) - 0.5) / 5
right_verts = (np.array([
(1., 1.),
(1.0, 0),
(0, 1.0),
(1, 1),
(0., 0.),
]) - 0.5) / 5
codes = [
Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
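        # Same two-triangle construction, rotated -45 degrees so the split
        # runs vertically and the halves lie left and right of center.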
try:
color = color.value
except AttributeError:
color = 'white'
if side == 'left':
left_color = color
right_color = 'white'
elif side == 'right':
left_color = 'white'
right_color = color
left_path = Path(left_verts, codes).transformed(
Affine2D().translate(x, y).rotate_deg_around(x, y, -45))
right_path = Path(right_verts, codes).transformed(
Affine2D().translate(x, y).rotate_deg_around(x, y, -45))
patch = patches.PathPatch(
left_path, facecolor=left_color, lw=line_weight, zorder=2)
a = ax.add_patch(patch)
patch = patches.PathPatch(
right_path, facecolor=right_color, lw=line_weight, zorder=2)
b = ax.add_patch(patch)
ma = MonosaccharidePatch(
saccharide_shape=(a, b), saccharide_label=None)
return ma
# return a, b
def draw_star(self, ax, x, y, color, scale=0.1):
path = Path(Path.unit_regular_star(5, 0.45).vertices * scale, Path.unit_regular_star(5, 0.45).codes)
trans = Affine2D().translate(x, y)
t_path = path.transformed(trans)
patch = patches.PathPatch(
t_path, facecolor=color.value, lw=line_weight, zorder=2)
a = ax.add_patch(patch)
ma = MonosaccharidePatch(saccharide_shape=(a,))
return ma
def draw_generic(self, ax, x, y, name, n_points=6, scale=0.1):
unit_polygon = Path.unit_regular_polygon(n_points)
path = Path(unit_polygon.vertices * scale, unit_polygon.codes)
trans = Affine2D().translate(x, y)
t_path = path.transformed(trans)
name = TextPath((x - (0.35 * scale), y), s=name, size=2 * scale * .25)
patch = patches.PathPatch(
t_path, facecolor="white", lw=line_weight, zorder=2)
a = ax.add_patch(patch)
patch = patches.PathPatch(name, lw=line_weight, zorder=2)
s = ax.add_patch(patch)
ma = MonosaccharidePatch(saccharide_shape=(a,), saccharide_label=(s,))
return ma
def get_drawer(self, shape):
shape_name = shape.name
drawer = getattr(self, 'draw_%s' % (shape_name,), None)
return drawer
def resolve_generic_name(self, monosaccharide):
try:
abbrev = identity.identify(monosaccharide)
except identity.IdentifyException:
if monosaccharide.stem[0] == Stem.x:
abbrev = monosaccharide.superclass.name.lower().capitalize()
else:
abbrev = ','.join(s.name.lower().capitalize()
for s in monosaccharide.stem)
return abbrev
def get_relevant_substituents(self, residue, shape=None):
'''
Given the shape for a residue, determine which of its substituents must
be explicitly drawn. Calls :func:`residue_shape` if `shape` is not
provided.
Parameters
----------
residue: |Monosaccharide|
The monosaccharide residue being rendered
shape: ResidueShape or |None|
The shape enum being used to represent `residue`. Defaults to None.
If `shape` is |None|, it is calculated by :func:`residue_shape`.
Returns
-------
|list| of |Substituent|s
'''
shape = self.residue_shape(residue) if shape is None else shape
if shape != self.ResidueShape.generic:
substituents = list(sub.name for p, sub in residue.substituents() if not sub._derivatize)
            if shape == self.ResidueShape.square:
                substituents.pop(substituents.index("n_acetyl"))
                return self._pruned_substituents(residue, substituents)
elif shape == self.ResidueShape.bisected_square:
try:
index_amino = substituents.index("amino")
substituents.pop(index_amino)
except ValueError:
pass
try:
pass
# index_n_sulfate = substituents.index('n_sulfate')
# TODO:
# Find a way to forward different substituents
#
# substituents.pop(index_n_sulfate)
# substituents.append("sulfate")
except ValueError:
pass
return self._pruned_substituents(residue, substituents)
elif shape == self.ResidueShape.diamond:
color = self.residue_color(residue)
if color == self.residue_color(NeuAc):
substituents.pop(substituents.index("n_acetyl"))
elif color == self.residue_color(NeuGc):
substituents.pop(substituents.index("n_glycolyl"))
return self._pruned_substituents(residue, substituents)
return list((p, sub.name) for p, sub in residue.substituents() if not sub._derivatize)
def _pruned_substituents(self, residue, substituents):
relevant_substituents = Counter(substituents)
results = []
for p, sub in residue.substituents()[::-1]:
if relevant_substituents[sub.name] > 0:
results.append((p, sub.name))
relevant_substituents[sub.name] -= 1
return results
def get_symbol(self, monosaccharide):
'''
Convenience function to retrieve the shape and color of `monosaccharide`
'''
shp = self.residue_shape(monosaccharide)
col = self.residue_color(monosaccharide)
return shp, col
def draw(self, monosaccharide, x, y, ax, tree_node=None, scale=0.1, annotation_transform=None, **kwargs):
'''
Renders `monosaccharide` at the given `(x, y)` coordinates on the `matplotlib.Axis`
`ax` provided. Determines the shape to use by :func:`residue_shape` and color by
:func:`residue_color`. The shape value is used to select the specialized draw_* function
'''
abbrev = None
shape, color = self.get_symbol(monosaccharide)
if shape == self.ResidueShape.generic:
abbrev = self.resolve_generic_name(monosaccharide)
drawer = self.get_drawer(shape)
if drawer is None:
raise Exception("Don't know how to draw {}".format(
(shape, monosaccharide)))
res = None
if shape == self.ResidueShape.generic:
res = drawer(
ax, x, y, abbrev, n_points=monosaccharide.superclass.value or 1, scale=scale)
else:
res = drawer(ax, x, y, color, scale=scale)
substituents = self.get_relevant_substituents(monosaccharide)
# Render substituents along the bottom of the monosaccharide
# These layouts should be moved to be defined by the DrawTreeNode
if annotation_transform is None:
annotation_transform = Affine2D()
node_x, node_y = res.centroid()
subs = self.draw_substituents(ax, substituents, node_x, node_y, annotation_transform, **kwargs)
res.add_substituents(subs)
return res
def __call__(self, *args, **kwargs):
return self.draw(*args, **kwargs)
|
{"hexsha": "80d543dc857b5c1b0a32dd4b33f51ef947a82599", "size": 19547, "ext": "py", "lang": "Python", "max_stars_repo_path": "glypy/plot/cfg_symbols.py", "max_stars_repo_name": "dcambie/glypy", "max_stars_repo_head_hexsha": "ecbf849b9686dc617a2e65ea171bcc33881a8db7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2016-02-18T10:35:28.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-15T23:50:26.000Z", "max_issues_repo_path": "glypy/plot/cfg_symbols.py", "max_issues_repo_name": "dcambie/glypy", "max_issues_repo_head_hexsha": "ecbf849b9686dc617a2e65ea171bcc33881a8db7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2015-05-30T16:58:55.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-27T16:37:31.000Z", "max_forks_repo_path": "glypy/plot/cfg_symbols.py", "max_forks_repo_name": "dcambie/glypy", "max_forks_repo_head_hexsha": "ecbf849b9686dc617a2e65ea171bcc33881a8db7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2015-12-07T20:51:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-25T14:23:02.000Z", "avg_line_length": 37.7355212355, "max_line_length": 115, "alphanum_fraction": 0.5800890162, "include": true, "reason": "import numpy", "num_tokens": 5169}
|
ah1_file = path*"/SampleFiles/AH/ah1.f"
ah1_fstr = path*"/SampleFiles/AH/ah1.*"
ahc_file = path*"/SampleFiles/AH/lhz.ah"
ah_resp = path*"/SampleFiles/AH/BRV.TSG.DS.lE21.resp"
ah2_file = path*"/SampleFiles/AH/ah2.f"
ah2_fstr = path*"/SampleFiles/AH/ah2.*"
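# Read the AH-1 and AH-2 sample files (single file, wildcard, and response
# file) and spot-check station metadata, responses, times, and data samples.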
printstyled(" AH (Ad Hoc)\n", color=:light_green)
printstyled(" v1\n", color=:light_green)
redirect_stdout(out) do
S = read_data("ah1", ah1_file, v=3)
S = read_data("ah1", ah1_file, full=true)
S = read_data("ah1", ah1_fstr, full=true, v=3)
@test S.n == 4
@test S.fs[1] == 4.0
@test isapprox(S.gain[1], 64200.121094)
@test isapprox(S.loc[1].lat, 35.599899)
@test isapprox(S.loc[1].lon, -85.568802)
@test isapprox(S.loc[1].el, 481.0)
@test any([occursin("gdsn_tape",s) for s in S.notes[1]])
@test any([occursin("demeaned",s) for s in S.notes[1]])
@test length(S.resp[1].p) == 24
@test length(S.resp[1].z) == 7
@test u2d(S.t[1][1,2]*1.0e-6) == DateTime("1984-04-20T06:42:00.12")
@test length(S.x[1]) == 720
@test eltype(S.x[1]) == Float32
@test isapprox(S.x[1][1:4], [-731.41247559, -724.41247559, -622.41247559, -470.4125061])
C = read_data("ah1", ahc_file, v=3, full=true)[1]
# Station
@test isapprox(C.loc.lat, 36.5416984)
@test isapprox(C.loc.lon, 138.2088928)
@test isapprox(C.loc.el, 422.0)
@test isapprox(C.gain, 1.178e8)
@test length(C.resp.p) == 10
@test isapprox(C.resp.p[1], -0.0123f0 + 0.0123f0im)
@test isapprox(C.resp.p[2], -0.0123f0 - 0.0123f0im)
@test length(C.resp.z) == 3
# Data
@test length(C.x) == 309
@test eltype(C.x) == Float32
@test C.fs == 1.0
@test u2d(C.t[1,2]*1.0e-6) == DateTime("1990-05-12T04:49:54.49")
# Event
@test isapprox(C.misc["ev_lat"], 49.037)
@test isapprox(C.misc["ev_lon"], 141.847)
@test isapprox(C.misc["ev_dep"], 606.0)
@test string(u2d(C.misc["ot"]*1.0e-6)) == "1990-05-12T04:50:08.7"
@test startswith(C.misc["data_comment"], "Streckeisen STS-1V/VBB Seismometer")
@test startswith(C.misc["event_comment"], "null")
C = read_data("ah1", ah_resp, full=true)[1]
@test isapprox(C.loc.lat, 53.058060)
@test isapprox(C.loc.lon, 70.282799)
@test isapprox(C.loc.el, 300.0)
@test isapprox(C.gain, 0.05)
@test isapprox(C.resp.a0, 40.009960)
@test length(C.resp.p) == 7
@test isapprox(C.resp.p[1], -0.1342653f0 + 0.1168836f0im)
@test length(C.resp.z) == 4
@test startswith(C.misc["data_comment"], "DS response in counts/nm")
@test startswith(C.misc["event_comment"], "Calibration_for_hg_TSG")
@test any([occursin("brv2ah: ahtedit",s) for s in C.notes])
@test any([occursin("demeaned",s) for s in C.notes])
@test any([occursin("modhead",s) for s in C.notes])
@test any([occursin("ahtedit",s) for s in C.notes])
printstyled(" v2\n", color=:light_green)
S = read_data("ah2", ah2_file, v=3)
S = read_data("ah2", ah2_file, v=3, full=true)
S = read_data("ah2", ah2_fstr, v=3, full=true)
@test S.n == 4
@test S.fs[1] == 4.0
@test isapprox(S.gain[1], 64200.121094)
@test isapprox(S.loc[1].lat, 35.599899)
@test isapprox(S.loc[1].lon, -85.568802)
@test isapprox(S.loc[1].el, 481.0)
@test any([occursin("gdsn_tape",s) for s in S.notes[1]])
@test any([occursin("demeaned",s) for s in S.notes[1]])
@test length(S.resp[1].p) == 24
@test length(S.resp[1].z) == 7
@test u2d(S.t[1][1,2]*1.0e-6) == DateTime("1984-04-20T06:42:00.12")
@test length(S.x[1]) == 720
@test eltype(S.x[1]) == Float32
@test isapprox(S.x[1][1:4], [-731.41247559, -724.41247559, -622.41247559, -470.4125061])
end
|
{"hexsha": "cab826290ca78b8bb7e2769053cfdbe7418fd56d", "size": 3523, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/DataFormats/test_ah.jl", "max_stars_repo_name": "UnofficialJuliaMirror/SeisIO.jl-b372bb87-02dd-52bb-bcf6-c30dd83fd342", "max_stars_repo_head_hexsha": "ae4ddd969c4c42281f36e218d5d3039af6c3146a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "test/DataFormats/test_ah.jl", "max_issues_repo_name": "UnofficialJuliaMirror/SeisIO.jl-b372bb87-02dd-52bb-bcf6-c30dd83fd342", "max_issues_repo_head_hexsha": "ae4ddd969c4c42281f36e218d5d3039af6c3146a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "test/DataFormats/test_ah.jl", "max_forks_repo_name": "UnofficialJuliaMirror/SeisIO.jl-b372bb87-02dd-52bb-bcf6-c30dd83fd342", "max_forks_repo_head_hexsha": "ae4ddd969c4c42281f36e218d5d3039af6c3146a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2934782609, "max_line_length": 90, "alphanum_fraction": 0.6403633267, "num_tokens": 1480}
|
###############################################################################
# Important : Please make sure your files are saved to the 'results' folder
# in your jobs
# This code comes from the model in the notebook cells; for offline jobs,
# only train the model and skip model evaluation and similar steps.
###############################################################################
import math
import numpy as np
import os
import cv2
import random
import shutil
import time
from matplotlib import pyplot as plt
from easydict import EasyDict
from PIL import Image
import mindspore as ms
from mindspore import context
from mindspore import nn
from mindspore import Tensor
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, save_checkpoint, export
from mindspore.train.callback import Callback, LossMonitor, ModelCheckpoint, CheckpointConfig
from src_mindspore.dataset import create_dataset # 数据处理脚本
from src_mindspore.mobilenetv2 import MobileNetV2Backbone, MobileNetV2Head, mobilenet_v2 # 模型定义脚本
os.environ['GLOG_v'] = '2' # Log Level = Error
has_gpu = (os.system('command -v nvidia-smi') == 0)
print('Executing with', 'GPU' if has_gpu else 'CPU', '.')
context.set_context(mode=context.GRAPH_MODE, device_target='GPU' if has_gpu else 'CPU')
# Garbage-classification dataset labels, and the dictionaries used for label mapping.
index = {'00_00': 0, '00_01': 1, '00_02': 2, '00_03': 3, '00_04': 4, '00_05': 5, '00_06': 6, '00_07': 7,
'00_08': 8, '00_09': 9, '01_00': 10, '01_01': 11, '01_02': 12, '01_03': 13, '01_04': 14,
'01_05': 15, '01_06': 16, '01_07': 17, '02_00': 18, '02_01': 19, '02_02': 20, '02_03': 21,
'03_00': 22, '03_01': 23, '03_02': 24, '03_03': 25}
inverted = {0: 'Plastic Bottle', 1: 'Hats', 2: 'Newspaper', 3: 'Cans', 4: 'Glassware', 5: 'Glass Bottle', 6: 'Cardboard', 7: 'Basketball',
8: 'Paper', 9: 'Metalware', 10: 'Disposable Chopsticks', 11: 'Lighter', 12: 'Broom', 13: 'Old Mirror', 14: 'Toothbrush',
15: 'Dirty Cloth', 16: 'Seashell', 17: 'Ceramic Bowl', 18: 'Paint bucket', 19: 'Battery', 20: 'Fluorescent lamp', 21: 'Tablet capsules',
22: 'Orange Peel', 23: 'Vegetable Leaf', 24: 'Eggshell', 25: 'Banana Peel'}
# Training hyperparameters
config = EasyDict({
    "num_classes": 26,  # number of classes, i.e. the output-layer dimension
    "reduction": 'mean',  # mean or max: pooling mode used by the head
    "image_height": 224,
    "image_width": 224,
    "batch_size": 24,  # too large a value may stall training on the CPU containers
    "eval_batch_size": 10,
    "epochs": 50,  # try tuning to improve accuracy
    "lr_max": 0.07,  # try tuning to improve accuracy
    "decay_type": 'cosine',  # try tuning to improve accuracy
    "momentum": 0.9,  # try tuning to improve accuracy
    "weight_decay": 0.001,
    "dataset_path": "./datasets/5fbdf571c06d3433df85ac65-momodel/garbage_26x100",
    "features_path": "./results/garbage_26x100_features",  # temporary cache of frozen-layer feature maps; safe to delete
    "class_index": index,
    "save_ckpt_epochs": 1,
    "save_ckpt_path": './results/ckpt_mobilenetv2',
    "pretrained_ckpt": './src_mindspore/mobilenetv2-200_1067_cpu_gpu.ckpt',
    "export_path": './results/mobilenetv2.mindir'
})
def build_lr(total_steps, lr_init=0.0, lr_end=0.0, lr_max=0.1, warmup_steps=0, decay_type='cosine'):
"""
Applies cosine decay to generate learning rate array.
Args:
total_steps(int): all steps in training.
lr_init(float): init learning rate.
lr_end(float): end learning rate
lr_max(float): max learning rate.
warmup_steps(int): all steps in warmup epochs.
Returns:
list, learning rate array.
"""
lr_init, lr_end, lr_max = float(lr_init), float(lr_end), float(lr_max)
decay_steps = total_steps - warmup_steps
lr_all_steps = []
inc_per_step = (lr_max - lr_init) / warmup_steps if warmup_steps else 0
for i in range(total_steps):
if i < warmup_steps:
lr = lr_init + inc_per_step * (i + 1)
else:
if decay_type == 'cosine':
cosine_decay = 0.5 * (1 + math.cos(math.pi * (i - warmup_steps) / decay_steps))
lr = (lr_max - lr_end) * cosine_decay + lr_end
elif decay_type == 'square':
frac = 1.0 - float(i - warmup_steps) / (total_steps - warmup_steps)
lr = (lr_max - lr_end) * (frac * frac) + lr_end
else:
lr = lr_max
lr_all_steps.append(lr)
return lr_all_steps
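# Example (illustrative numbers): build_lr(100, lr_max=0.07, warmup_steps=10)
# ramps linearly up to 0.07 over the first 10 steps, then follows a
# half-cosine from 0.07 down to lr_end over the remaining 90 steps.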
def extract_features(net, dataset_path, config):
if not os.path.exists(config.features_path):
os.makedirs(config.features_path)
dataset = create_dataset(config=config)
step_size = dataset.get_dataset_size()
if step_size == 0:
raise ValueError("The step_size of dataset is zero. Check if the images count of train dataset is more \
than batch_size in config.py")
data_iter = dataset.create_dict_iterator()
for i, data in enumerate(data_iter):
features_path = os.path.join(config.features_path, f"feature_{i}.npy")
label_path = os.path.join(config.features_path, f"label_{i}.npy")
if not os.path.exists(features_path) or not os.path.exists(label_path):
image = data["image"]
label = data["label"]
features = net(image)
np.save(features_path, features.asnumpy())
np.save(label_path, label.asnumpy())
print(f"Complete the batch {i+1}/{step_size}")
return
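# Run the frozen backbone once over the whole dataset and cache its feature
# maps to disk; head training below then reads these .npy files instead of
# recomputing the backbone forward pass every epoch.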
backbone = MobileNetV2Backbone()
load_checkpoint(config.pretrained_ckpt, net=backbone)
extract_features(backbone, config.dataset_path, config)
class GlobalPooling(nn.Cell):
"""
Global avg pooling definition.
Args:
reduction: mean or max, which means AvgPooling or MaxpPooling.
Returns:
Tensor, output tensor.
Examples:
>>> GlobalAvgPooling()
"""
def __init__(self, reduction='mean'):
super(GlobalPooling, self).__init__()
if reduction == 'max':
self.mean = ms.ops.ReduceMax(keep_dims=False)
else:
self.mean = ms.ops.ReduceMean(keep_dims=False)
def construct(self, x):
x = self.mean(x, (2, 3))
return x
class MobileNetV2Head(nn.Cell):
"""
MobileNetV2Head architecture.
Args:
input_channel (int): Number of channels of input.
hw (int): Height and width of input, 7 for MobileNetV2Backbone with image(224, 224).
num_classes (int): Number of classes. Default is 1000.
        reduction: mean or max, which means AvgPooling or MaxPooling.
activation: Activation function for output logits.
Returns:
Tensor, output tensor.
Examples:
>>> MobileNetV2Head(num_classes=1000)
"""
def __init__(self, input_channel=1280, hw=7, num_classes=1000, reduction='mean', activation="None"):
super(MobileNetV2Head, self).__init__()
if reduction:
self.flatten = GlobalPooling(reduction)
else:
self.flatten = nn.Flatten()
input_channel = input_channel * hw * hw
self.dense = nn.Dense(input_channel, num_classes, weight_init='ones', has_bias=False)
        if activation == "Sigmoid":
            self.activation = nn.Sigmoid()
            self.need_activation = True
        elif activation == "Softmax":
            self.activation = nn.Softmax()
            self.need_activation = True
        else:
            self.need_activation = False
def construct(self, x):
x = self.flatten(x)
x = self.dense(x)
if self.need_activation:
x = self.activation(x)
return x
def train_head():
train_dataset = create_dataset(config=config)
eval_dataset = create_dataset(config=config)
step_size = train_dataset.get_dataset_size()
backbone = MobileNetV2Backbone()
# Freeze parameters of backbone. You can comment these two lines.
for param in backbone.get_parameters():
param.requires_grad = False
load_checkpoint(config.pretrained_ckpt, net=backbone)
head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes, reduction=config.reduction)
network = mobilenet_v2(backbone, head)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
lrs = build_lr(config.epochs * step_size, lr_max=config.lr_max, warmup_steps=0, decay_type=config.decay_type)
opt = nn.Momentum(head.trainable_params(), lrs, config.momentum, config.weight_decay)
net = nn.WithLossCell(head, loss)
train_step = nn.TrainOneStepCell(net, opt)
train_step.set_train()
# train
history = list()
features_path = config.features_path
idx_list = list(range(step_size))
for epoch in range(config.epochs):
random.shuffle(idx_list)
epoch_start = time.time()
losses = []
for j in idx_list:
feature = Tensor(np.load(os.path.join(features_path, f"feature_{j}.npy")))
label = Tensor(np.load(os.path.join(features_path, f"label_{j}.npy")))
losses.append(train_step(feature, label).asnumpy())
epoch_seconds = (time.time() - epoch_start)
epoch_loss = np.mean(np.array(losses))
history.append(epoch_loss)
print("epoch: {}, time cost: {}, avg loss: {}".format(epoch + 1, epoch_seconds, epoch_loss))
if (epoch + 1) % config.save_ckpt_epochs == 0:
save_checkpoint(network, os.path.join(config.save_ckpt_path, f"mobilenetv2-{epoch+1}.ckpt"))
# evaluate
print('validating the model...')
eval_model = Model(network, loss, metrics={'acc', 'loss'})
acc = eval_model.eval(eval_dataset, dataset_sink_mode=False)
print(acc)
return history
if os.path.exists(config.save_ckpt_path):
shutil.rmtree(config.save_ckpt_path)
os.makedirs(config.save_ckpt_path)
history = train_head()
CKPT = f'mobilenetv2-{config.epochs}.ckpt'
print("Chosen checkpoint is", CKPT)
print('Training finished OK')
|
{"hexsha": "08639803373506e53e1796d3ce90c8750f0f5bdd", "size": 9860, "ext": "py", "lang": "Python", "max_stars_repo_path": "train_main.py", "max_stars_repo_name": "Stella925/Programming-Projects", "max_stars_repo_head_hexsha": "5d6357390e7494a10b675f02bd47cee75424c95f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "train_main.py", "max_issues_repo_name": "Stella925/Programming-Projects", "max_issues_repo_head_hexsha": "5d6357390e7494a10b675f02bd47cee75424c95f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "train_main.py", "max_forks_repo_name": "Stella925/Programming-Projects", "max_forks_repo_head_hexsha": "5d6357390e7494a10b675f02bd47cee75424c95f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-19T14:13:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-19T14:13:20.000Z", "avg_line_length": 39.126984127, "max_line_length": 148, "alphanum_fraction": 0.6493914807, "include": true, "reason": "import numpy", "num_tokens": 2805}
|
import GPy
import numpy as np
from emukit.bayesian_optimization.acquisitions import ExpectedImprovement
from emukit.bayesian_optimization.loops import UnknownConstraintBayesianOptimizationLoop
from emukit.core import ContinuousParameter, ParameterSpace
from emukit.core.loop import FixedIterationsStoppingCondition, UserFunctionWrapper
from emukit.model_wrappers import GPyModelWrapper
def f(x):
return x ** 2, x - 0.5
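# f returns an (objective, constraint) pair: x ** 2 is the objective and
# x - 0.5 is collected as the "Y_constraint" observations used to fit the
# constraint model.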
def test_loop():
n_iterations = 5
x_init = np.random.rand(5, 1)
y_init, y_constraint_init = f(x_init)
# Make GPy objective model
gpy_model = GPy.models.GPRegression(x_init, y_init)
model = GPyModelWrapper(gpy_model)
# Make GPy constraint model
    gpy_constraint_model = GPy.models.GPRegression(x_init, y_constraint_init)
constraint_model = GPyModelWrapper(gpy_constraint_model)
space = ParameterSpace([ContinuousParameter("x", 0, 1)])
acquisition = ExpectedImprovement(model)
# Make loop and collect points
bo = UnknownConstraintBayesianOptimizationLoop(
model_objective=model, space=space, acquisition=acquisition, model_constraint=constraint_model
)
bo.run_loop(
UserFunctionWrapper(f, extra_output_names=["Y_constraint"]), FixedIterationsStoppingCondition(n_iterations)
)
# Check we got the correct number of points
assert bo.loop_state.X.shape[0] == n_iterations + 5
|
{"hexsha": "3dad65944e11c75f72b4ae98d3c66fc2a3261171", "size": 1379, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/emukit/bayesian_optimization/test_constrained_loop.py", "max_stars_repo_name": "lfabris-mhpc/emukit", "max_stars_repo_head_hexsha": "ccb07f6bed0e9ae41dbeefdb3ad2ab247d3991e2", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tests/emukit/bayesian_optimization/test_constrained_loop.py", "max_issues_repo_name": "lfabris-mhpc/emukit", "max_issues_repo_head_hexsha": "ccb07f6bed0e9ae41dbeefdb3ad2ab247d3991e2", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tests/emukit/bayesian_optimization/test_constrained_loop.py", "max_forks_repo_name": "lfabris-mhpc/emukit", "max_forks_repo_head_hexsha": "ccb07f6bed0e9ae41dbeefdb3ad2ab247d3991e2", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.8333333333, "max_line_length": 115, "alphanum_fraction": 0.7686729514, "include": true, "reason": "import numpy", "num_tokens": 337}
|
# Created by msinghal at 09/04/22
import os
import numpy as np
import networkx as nx
import pandas as pd
from sklearn import preprocessing
from stellargraph import StellarGraph
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
from stellargraph.datasets import DatasetLoader
class MovieLens(
DatasetLoader,
name="MovieLens",
directory_name="ml-100k",
url="http://files.grouplens.org/datasets/movielens/ml-100k.zip",
url_archive_format="zip",
expected_files=["u.data", "u.user", "u.item", "u.genre", "u.occupation"],
description="The MovieLens 100K dataset contains 100,000 ratings from 943 users on 1682 movies.",
source="https://grouplens.org/datasets/movielens/100k/",
):
def load(self):
"""
Load this dataset into an undirected heterogeneous graph, downloading it if required.
The graph has two types of nodes (``user`` and ``movie``) and one type of edge (``rating``).
The dataset includes some node features on both users and movies: on users, they consist of
categorical features (``gender`` and ``job``) which are one-hot encoded into binary
features, and an ``age`` feature that is scaled to have mean = 0 and standard deviation = 1.
Returns:
A tuple where the first element is a :class:`.StellarGraph` instance containing the graph
data and features, and the second element is a pandas DataFrame of edges, with columns
``user_id``, ``movie_id`` and ``rating`` (a label from 1 to 5).
"""
self.download()
ratings, users, movies, *_ = [
self._resolve_path(path) for path in self.expected_files
]
edges = pd.read_csv(
ratings,
sep="\t",
header=None,
names=["user_id", "movie_id", "rating", "timestamp"],
usecols=["user_id", "movie_id", "rating"],
)
users = pd.read_csv(
users,
sep="|",
header=None,
names=["user_id", "age", "gender", "job", "zipcode"],
usecols=["user_id", "age", "gender", "job"],
)
movie_columns = [
"movie_id",
"title",
"release_date",
"video_release_date",
"imdb_url",
# features from here:
"unknown",
"action",
"adventure",
"animation",
"childrens",
"comedy",
"crime",
"documentary",
"drama",
"fantasy",
"film_noir",
"horror",
"musical",
"mystery",
"romance",
"sci_fi",
"thriller",
"war",
"western",
]
movies = pd.read_csv(
movies,
sep="|",
header=None,
names=movie_columns,
usecols=["movie_id"] + movie_columns[5:],
)
# manage the IDs
def u(users):
return "u_" + users.astype(str)
def m(movies):
return "m_" + movies.astype(str)
users_ids = u(users["user_id"])
movies["movie_id"] = m(movies["movie_id"])
movies.set_index("movie_id", inplace=True)
edges["user_id"] = u(edges["user_id"])
edges["movie_id"] = m(edges["movie_id"])
# convert categorical user features to numeric, and normalize age
feature_encoding = preprocessing.OneHotEncoder(sparse=False)
onehot = feature_encoding.fit_transform(users[["gender", "job"]])
scaled_age = preprocessing.scale(users["age"])
encoded_users = pd.DataFrame(onehot, index=users_ids).assign(
scaled_age=scaled_age
)
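        # Assemble the heterogeneous graph: one-hot/scaled user features and
        # binary genre columns as movie features, linked by "rating" edges.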
g = StellarGraph(
{"user": encoded_users, "movie": movies},
{"rating": edges[["user_id", "movie_id"]]},
source_column="user_id",
target_column="movie_id",
)
return g, edges
class Amz_Dataset():
def load(self, base_path="/Users/msinghal/SourceCodes/Personal/graphranko/Data/amazon-electronics-v4"):
"""
Load this dataset into an undirected heterogeneous graph, downloading it if required.
The graph has two types of nodes (``user`` and ``movie``) and one type of edge (``rating``).
The dataset includes some node features on both users and movies: on users, they consist of
categorical features (``gender`` and ``job``) which are one-hot encoded into binary
features, and an ``age`` feature that is scaled to have mean = 0 and standard deviation = 1.
Returns:
A tuple where the first element is a :class:`.StellarGraph` instance containing the graph
data and features, and the second element is a pandas DataFrame of edges, with columns
``user_id``, ``movie_id`` and ``rating`` (a label from 1 to 5).
"""
rating_file = base_path + "/user-item-rating.csv"
user_file = base_path + "/user_list.txt"
item_file = base_path + "/item_meta_list.txt"
category_file = base_path + "/categories.txt"
edges = pd.read_csv(rating_file)
users = pd.read_csv(
user_file,
sep=" ",
header=None,
names=["user", "user_id"]
)
categories = pd.read_csv(
category_file,
sep="|",
header=None,
names=["category_id", "category"]
)
items = pd.read_csv(
item_file,
sep=" ",
header=None,
names=["item", "item_id"] + list(categories["category"])
)
# manage the IDs
def u(users):
return "u_" + users.astype(str)
def m(movies):
return "i_" + movies.astype(str)
print(edges.columns)
print(users.columns)
edges = edges.merge(users, on="user").merge(items, on="item")
users["user_id"] = u(users["user_id"])
users.set_index("user_id", inplace=True)
#
items["item_id"] = m(items["item_id"])
items.set_index("item_id", inplace=True)
items.drop("item", inplace=True, axis=1)
#
edges["user_id"] = u(edges["user_id"])
edges["item_id"] = m(edges["item_id"])
        # Feature encoding is skipped here; one-hot identity features are
        # built later in create_graph_for_training.
        # feature_encoding = preprocessing.OneHotEncoder(sparse=False)
        # onehot = feature_encoding.fit_transform(users[["user"]])
        # scaled_age = preprocessing.scale(users["age"])
return users, items, edges
def create_graph_for_training(self, users, items, edges, metadata=False):
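        # Without precomputed features, give each node an identity (one-hot)
        # feature vector: row i of an NxN identity matrix uniquely tags node i.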
values = np.zeros((len(users.index), len(users.index)), dtype=np.uint8)
np.fill_diagonal(values, 1)
encoded_users = pd.DataFrame(values, index=users.index, columns=users.index, dtype=np.uint8)
if not metadata:
values = np.zeros((len(items.index), len(items.index)), dtype=np.uint8)
np.fill_diagonal(values, 1)
encoded_items = pd.DataFrame(values, index=items.index, columns=items.index, dtype=np.uint8)
return self.generate_nx_graph_with_features(edges, encoded_users, encoded_items)
return self.generate_nx_graph_with_features(edges, encoded_users, items)
def generate_nx_graph_with_features(self, edges, users, items):
G = nx.Graph()
# Add nodes
for user_id in users.index:
G.add_node(user_id, feature=users.loc[user_id, :], label="user")
for item_id in items.index:
G.add_node(item_id, feature=items.loc[item_id, :], label="item")
for index, row in edges.iterrows():
src_node_id = row['user_id']
dst_node_id = row['item_id']
G.add_edge(src_node_id, dst_node_id, rating=row['rating'])
return StellarGraph.from_networkx(G, node_features="feature")
|
{"hexsha": "b284b830f7c9254aeb9f24e0e643bf684b625b7a", "size": 8003, "ext": "py", "lang": "Python", "max_stars_repo_path": "GraphicoRango/steller_graph_implementation/datasets.py", "max_stars_repo_name": "madansinghal/graphranko", "max_stars_repo_head_hexsha": "c975aa42104216237f4651c5b300a09ac8db02e8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "GraphicoRango/steller_graph_implementation/datasets.py", "max_issues_repo_name": "madansinghal/graphranko", "max_issues_repo_head_hexsha": "c975aa42104216237f4651c5b300a09ac8db02e8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "GraphicoRango/steller_graph_implementation/datasets.py", "max_forks_repo_name": "madansinghal/graphranko", "max_forks_repo_head_hexsha": "c975aa42104216237f4651c5b300a09ac8db02e8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.100877193, "max_line_length": 107, "alphanum_fraction": 0.5804073472, "include": true, "reason": "import numpy,import networkx", "num_tokens": 1807}
|
import tensorflow as tf
import os
import time
import json
import pandas as pd
import numpy as np
"""
Script to train a sequential NN.
NN trains on training data, all results output to disk.
Use this script over `train_and_predict.py`
"""
#GPU configuration - don't use too much memory
os.environ["TF_GPU_ALLOCATOR"]="cuda_malloc_async"
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only allocate 4GB of memory on the first GPU
try:
tf.config.set_logical_device_configuration(
gpus[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=4096)])
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
#input_file = '/network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/MODIS_ERA_joined_data_single.pkl'
input_file = '/network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/ML_data_ERA_MODIS_joined.pkl'
train_condition = pd.to_datetime("2019-01-01 00:00:00")
test_condition = pd.to_datetime("2020-01-01 00:00:00")
epochs = 10
batch_size = 10000
output_path = '/network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/'
n_permutations = 5
def load_and_process_data(input_file,train_condition,test_condition):
print('Loading the data')
    # Load the raw data
df = pd.read_pickle(input_file)
# Create a "results_df" copy that will be used later for clean IO
results_df = df[['latitude_ERA', 'longitude_ERA','time','skt','t2m','MODIS_LST']].copy()
# Split into features/targets
feature_names = [ 'sp', 'msl', 'u10', 'v10','t2m',
'aluvp', 'aluvd', 'alnip', 'alnid', 'cl',
'cvl', 'cvh', 'slt', 'sdfor', 'z', 'sd', 'sdor', 'isor', 'anor', 'slor',
'd2m', 'lsm', 'fal','skt']
features = df[feature_names]
target = df.pop('MODIS_LST')
#Normalise
target_normed = (target-target.mean())/target.std()
features_normed = (features-features.mean())/features.std()
#Separate into train/test based on time
idx_train = df.time < train_condition
idx_test = df.time > test_condition
x_train = features_normed[idx_train]
y_train = target_normed[idx_train]
x_test = features_normed[idx_test]
y_test = target_normed[idx_test]
results_df = results_df[idx_test] #For IO we will just output predictions
return x_train,y_train,x_test,y_test,target.mean(),target.std(),results_df
def train_NN(x,y,epochs,batch_size):
print('Training model')
nfeatures = x.shape[-1]
#Create a basic NN model
model = tf.keras.Sequential([
tf.keras.layers.Dense(int(nfeatures/2), activation='relu',input_shape=(nfeatures,),name='layer1'),
tf.keras.layers.Dense(1, name='output')
])
#Compile it
model.compile(optimizer='adam',
loss='mse',
metrics=['accuracy'])
#Train it
history = model.fit(x, y, epochs=epochs, batch_size=batch_size,verbose=1)
return history,model
#Load and normalise data
print('Getting data')
x_train,y_train,x_test,y_test,T_normalize_mean,T_normalize_std,results_df = load_and_process_data(input_file,train_condition,test_condition)
#Train model
print ('Training model')
history,model = train_NN(x_train,y_train,epochs,batch_size)
#Evaluate
print('Evaluating model')
score0 = model.evaluate(x_test, y_test, batch_size)[0]  # keep the loss only
#Permute
print('Feature importance')
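# Permutation importance: shuffle one feature column at a time, re-evaluate,
# and average over n_permutations; the increase over the unpermuted score
# (stored as 'H0') indicates how much the model relies on that feature.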
dfs = []
for i in x_test.columns:
running_score = 0
for j in range(n_permutations):
#Create a copy that will be permuted
x_test_permuted = x_test.copy()
#Shuffle target column
shuffle = np.random.permutation(x_test_permuted[i])
#Permute
x_test_permuted[i] = shuffle
        #Now evaluate on the permuted data; keep the loss so it can be summed
        score = model.evaluate(x_test_permuted, y_test, batch_size)[0]
        running_score += score
running_score = running_score/n_permutations
#Make as a pandas df
d= {'feature': [i], 'score': [running_score]}
dftmp = pd.DataFrame(data=d)
dfs.append(dftmp)
#Add the unpermuted score
d= {'feature': ['H0'], 'score': [score0]}
dftmp = pd.DataFrame(data=d)
dfs.append(dftmp)
print('IO')
#Bring together and save to disk
df = pd.concat(dfs).reset_index()
df.to_pickle('feature_importance_n5.pkl')
print ("All completed OK")
|
{"hexsha": "4930cefa30c222d5a64e2e77b25b4e583e03ceb2", "size": 4744, "ext": "py", "lang": "Python", "max_stars_repo_path": "legacy/legacy_scripts/.ipynb_checkpoints/feature_importance-checkpoint.py", "max_stars_repo_name": "tomkimpson/ML4L", "max_stars_repo_head_hexsha": "ffa8360cb80df25bd6af4fa5cc39b42bd6f405cd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-23T12:31:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-23T12:31:56.000Z", "max_issues_repo_path": "legacy/legacy_scripts/feature_importance.py", "max_issues_repo_name": "tomkimpson/ML4L", "max_issues_repo_head_hexsha": "ffa8360cb80df25bd6af4fa5cc39b42bd6f405cd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "legacy/legacy_scripts/feature_importance.py", "max_forks_repo_name": "tomkimpson/ML4L", "max_forks_repo_head_hexsha": "ffa8360cb80df25bd6af4fa5cc39b42bd6f405cd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.6432432432, "max_line_length": 140, "alphanum_fraction": 0.6585160202, "include": true, "reason": "import numpy", "num_tokens": 1223}
|
import os.path as osp
import json, pickle
import sys
from math import sqrt
from itertools import product
from numpy import random
import jittor as jt
import numpy as np
max_image_size = 550
augment_idx = 0
dump_file = 'weights/bboxes_aug.pkl'
box_file = 'weights/bboxes.pkl'
def augment_boxes(bboxes):
bboxes_rel = []
for box in bboxes:
bboxes_rel.append(prep_box(box))
bboxes_rel = np.concatenate(bboxes_rel, axis=0)
with open(dump_file, 'wb') as f:
pickle.dump(bboxes_rel, f)
def prep_box(box_list):
global augment_idx
boxes = np.array([box_list[2:]], dtype=np.float32)
# Image width and height
width, height = box_list[:2]
# To point form
boxes[:, 2:] += boxes[:, :2]
# Expand
ratio = random.uniform(1, 4)
left = random.uniform(0, width*ratio - width)
top = random.uniform(0, height*ratio - height)
height *= ratio
width *= ratio
boxes[:, :2] += (int(left), int(top))
boxes[:, 2:] += (int(left), int(top))
# RandomSampleCrop
height, width, boxes = random_sample_crop(height, width, boxes)
# RandomMirror
if random.randint(0, 2):
boxes[:, 0::2] = width - boxes[:, 2::-2]
# Resize
boxes[:, [0, 2]] *= (max_image_size / width)
boxes[:, [1, 3]] *= (max_image_size / height)
width = height = max_image_size
# ToPercentCoords
boxes[:, [0, 2]] /= width
boxes[:, [1, 3]] /= height
if augment_idx % 50000 == 0:
print('Current idx: %d' % augment_idx)
augment_idx += 1
return boxes
sample_options = (
# using entire original input image
None,
# sample a patch s.t. MIN jaccard w/ obj in .1,.3,.4,.7,.9
(0.1, None),
(0.3, None),
(0.7, None),
(0.9, None),
# randomly sample a patch
(None, None),
)
def intersect(box_a, box_b):
max_xy = np.minimum(box_a[:, 2:], box_b[2:])
min_xy = np.maximum(box_a[:, :2], box_b[:2])
inter = np.clip((max_xy - min_xy), a_min=0, a_max=np.inf)
return inter[:, 0] * inter[:, 1]
def jaccard_numpy(box_a, box_b):
"""Compute the jaccard overlap of two sets of boxes. The jaccard overlap
is simply the intersection over union of two boxes.
E.g.:
A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)
Args:
box_a: Multiple bounding boxes, Shape: [num_boxes,4]
box_b: Single bounding box, Shape: [4]
Return:
        jaccard overlap: Shape: [box_a.shape[0]]
"""
inter = intersect(box_a, box_b)
area_a = ((box_a[:, 2]-box_a[:, 0]) *
(box_a[:, 3]-box_a[:, 1])) # [A,B]
area_b = ((box_b[2]-box_b[0]) *
(box_b[3]-box_b[1])) # [A,B]
union = area_a + area_b - inter
return inter / union # [A,B]
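# Sample crop windows until one meets the chosen IoU constraint and contains
# at least one box center; surviving boxes are clipped to the window and
# shifted into its coordinate frame.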
def random_sample_crop(height, width, boxes=None):
global sample_options
while True:
# randomly choose a mode
mode = random.choice(sample_options)
if mode is None:
return height, width, boxes
min_iou, max_iou = mode
if min_iou is None:
min_iou = float('-inf')
if max_iou is None:
max_iou = float('inf')
for _ in range(50):
w = random.uniform(0.3 * width, width)
h = random.uniform(0.3 * height, height)
if h / w < 0.5 or h / w > 2:
continue
left = random.uniform(0, width - w)
top = random.uniform(0, height - h)
rect = np.array([int(left), int(top), int(left+w), int(top+h)])
overlap = jaccard_numpy(boxes, rect)
if overlap.min() < min_iou and max_iou < overlap.max():
continue
centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0
m1 = (rect[0] < centers[:, 0]) * (rect[1] < centers[:, 1])
m2 = (rect[2] > centers[:, 0]) * (rect[3] > centers[:, 1])
mask = m1 * m2
if not mask.any():
continue
current_boxes = boxes[mask, :].copy()
current_boxes[:, :2] = np.maximum(current_boxes[:, :2], rect[:2])
current_boxes[:, :2] -= rect[:2]
current_boxes[:, 2:] = np.minimum(current_boxes[:, 2:], rect[2:])
current_boxes[:, 2:] -= rect[:2]
return h, w, current_boxes
if __name__ == '__main__':
with open(box_file, 'rb') as f:
bboxes = pickle.load(f)
augment_boxes(bboxes)
|
{"hexsha": "71f236b773b43118ab1403c46fa3d02b5edb3b80", "size": 3985, "ext": "py", "lang": "Python", "max_stars_repo_path": "scripts/augment_bbox.py", "max_stars_repo_name": "li-xl/Yolact.jittor", "max_stars_repo_head_hexsha": "10d93335b7e8c7a10cb9e62dc64b4ba9f9409ea5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-05-03T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-09T00:50:06.000Z", "max_issues_repo_path": "scripts/augment_bbox.py", "max_issues_repo_name": "li-xl/Yolact.jittor", "max_issues_repo_head_hexsha": "10d93335b7e8c7a10cb9e62dc64b4ba9f9409ea5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scripts/augment_bbox.py", "max_forks_repo_name": "li-xl/Yolact.jittor", "max_forks_repo_head_hexsha": "10d93335b7e8c7a10cb9e62dc64b4ba9f9409ea5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-16T02:03:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-16T02:03:19.000Z", "avg_line_length": 23.1686046512, "max_line_length": 77, "alphanum_fraction": 0.6160602258, "include": true, "reason": "import numpy,from numpy", "num_tokens": 1286}
|
# Lint as: python3
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
"""Tests for the pybind11 bindings of format_converter."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import absltest
import numpy as np
from tensorflow.lite.tools.optimize.sparsity import format_converter_wrapper_pybind11 as format_converter
class FormatConverterTest(absltest.TestCase):
def test_bcsr_fp32(self):
"""Same as FormatConverterTest::BlockTestD0S1 but via pybind11."""
# pyformat: disable
dense_matrix = [1.0, 0.0, 2.0, 3.0,
0.0, 4.0, 0.0, 0.0,
0.0, 0.0, 5.0, 0.0,
0.0, 0.0, 0.0, 6.0]
# pyformat: enable
dense_shape = [4, 4]
traversal_order = [0, 1, 2, 3]
dim_types = [
format_converter.TfLiteDimensionType.TF_LITE_DIM_DENSE,
format_converter.TfLiteDimensionType.TF_LITE_DIM_SPARSE_CSR
]
block_size = [2, 2]
block_map = [0, 1]
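    # With 2x2 blocks the 4x4 dense matrix tiles into a 2x2 grid of blocks;
    # blocks (0,0), (0,1) and (1,1) are nonzero, so block-CSR keeps segments
    # [0, 2, 3] and block-column indices [0, 1, 1], and each kept block is
    # flattened row-major into the data array checked below.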
converter = format_converter.FormatConverterFp32(dense_shape,
traversal_order, dim_types,
block_size, block_map)
converter.DenseToSparse(np.asarray(dense_matrix, dtype=np.float32).data)
dim_metadata = converter.GetDimMetadata()
self.assertEqual([2], dim_metadata[0])
self.assertEmpty(dim_metadata[1]) # rows are dense.
self.assertEqual([0, 2, 3], dim_metadata[2]) # array segments.
self.assertEqual([0, 1, 1], dim_metadata[3]) # array indices.
self.assertEqual([2], dim_metadata[4])
self.assertEmpty(dim_metadata[5]) # sub block rows are dense.
self.assertEqual([2], dim_metadata[6])
self.assertEmpty(dim_metadata[7]) # sub block columns are dense.
expected_data = [1.0, 0.0, 0.0, 4.0, 2.0, 3.0, 0.0, 0.0, 5.0, 0.0, 0.0, 6.0]
sparse_data = converter.GetData()
self.assertTrue(np.allclose(expected_data, sparse_data))
converter.SparseToDense(np.asarray(sparse_data, dtype=np.float32).data)
self.assertTrue(np.allclose(dense_matrix, converter.GetData()))
if __name__ == '__main__':
absltest.main()
|
{"hexsha": "f08c80ad602fc8619425ed56ca3be8d2b150ef8b", "size": 2837, "ext": "py", "lang": "Python", "max_stars_repo_path": "tensorflow/lite/tools/optimize/sparsity/format_converter_wrapper_pybind11_test.py", "max_stars_repo_name": "neochristou/tensorflow", "max_stars_repo_head_hexsha": "50b55bfc5c9132c3bd82505181380bffbb47a5ff", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2021-06-30T10:53:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-19T16:52:00.000Z", "max_issues_repo_path": "tensorflow/lite/tools/optimize/sparsity/format_converter_wrapper_pybind11_test.py", "max_issues_repo_name": "donny-stacks/tensorflow", "max_issues_repo_head_hexsha": "1fb338b1c42930c0eef4d0b4d8d5fdf24a678654", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-04-09T16:22:20.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-15T13:57:36.000Z", "max_forks_repo_path": "tensorflow/lite/tools/optimize/sparsity/format_converter_wrapper_pybind11_test.py", "max_forks_repo_name": "donny-stacks/tensorflow", "max_forks_repo_head_hexsha": "1fb338b1c42930c0eef4d0b4d8d5fdf24a678654", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2022-01-13T11:23:44.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-02T11:11:42.000Z", "avg_line_length": 37.8266666667, "max_line_length": 105, "alphanum_fraction": 0.6630243215, "include": true, "reason": "import numpy", "num_tokens": 745}
|
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import cv2
import numpy as np
from progress.bar import Bar
import time
import torch
from models.model import create_model, load_model
from utils.image import get_affine_transform
from utils.debugger import Debugger
def py_cpu_nms(dets, thresh):
    """Pure Python NMS baseline."""
    # unpack x1, y1, x2, y2 and the confidence scores
    x1 = dets[:, 0]
    y1 = dets[:, 1]
    x2 = dets[:, 2]
    y2 = dets[:, 3]
    scores = dets[:, 4]
    # area of every detection box
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    # sort box indices by confidence score, descending
    order = scores.argsort()[::-1]
    keep = [] # indices of the boxes that survive suppression
    while order.size > 0:
        i = order[0]
        keep.append(i) # keep the highest-scoring remaining box
        # intersection rectangle: top-left and bottom-right corners
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        # intersection area; zero when the boxes do not overlap
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        # IoU = intersection / (area1 + area2 - intersection)
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        # keep only the boxes whose IoU with the current box is below the threshold
        inds = np.where(ovr <= thresh)[0]
        order = order[inds + 1] # +1 because ovr was computed against order[1:], so indices are shifted by one
    return keep
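# A minimal usage sketch (illustrative, not part of the original file): two
# heavily overlapping boxes and one disjoint box; at thresh=0.5 the
# lower-scoring overlapping box is suppressed.
#   dets = np.array([[0., 0., 10., 10., 0.9],
#                    [1., 1., 11., 11., 0.8],
#                    [20., 20., 30., 30., 0.7]])
#   py_cpu_nms(dets, thresh=0.5)  # -> [0, 2]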
class BaseDetector(object):
def __init__(self, opt):
if opt.gpus[0] >= 0:
opt.device = torch.device('cuda')
else:
opt.device = torch.device('cpu')
print('Creating model...')
self.model = create_model(opt.arch, opt.heads, opt.head_conv)
self.model = load_model(self.model, opt.load_model)
self.model = self.model.to(opt.device)
self.model.eval()
self.mean = np.array(opt.mean, dtype=np.float32).reshape(1, 1, 3)
self.std = np.array(opt.std, dtype=np.float32).reshape(1, 1, 3)
self.max_per_image = 100
self.num_classes = opt.num_classes
self.scales = opt.test_scales
self.opt = opt
self.pause = True
def pre_process(self, image, scale, meta=None):
height, width = image.shape[0:2]
new_height = int(height * scale)
new_width = int(width * scale)
if self.opt.fix_res:
inp_height, inp_width = self.opt.input_h, self.opt.input_w
c = np.array([new_width / 2., new_height / 2.], dtype=np.float32)
s = max(height, width) * 1.0
else:
inp_height = (new_height | self.opt.pad) + 1
inp_width = (new_width | self.opt.pad) + 1
c = np.array([new_width // 2, new_height // 2], dtype=np.float32)
s = np.array([inp_width, inp_height], dtype=np.float32)
trans_input = get_affine_transform(c, s, 0, [inp_width, inp_height])
resized_image = cv2.resize(image, (new_width, new_height))
inp_image = cv2.warpAffine(
resized_image, trans_input, (inp_width, inp_height),
flags=cv2.INTER_LINEAR)
inp_image = ((inp_image / 255. - self.mean) / self.std).astype(np.float32)
images = inp_image.transpose(2, 0, 1).reshape(1, 3, inp_height, inp_width)
if self.opt.flip_test:
images = np.concatenate((images, images[:, :, :, ::-1]), axis=0)
images = torch.from_numpy(images)
meta = {'c': c, 's': s,
'out_height': inp_height // self.opt.down_ratio,
'out_width': inp_width // self.opt.down_ratio}
return images, meta
def process(self, images, return_time=False):
raise NotImplementedError
def post_process(self, dets, meta, scale=1):
raise NotImplementedError
def merge_outputs(self, detections):
raise NotImplementedError
def debug(self, debugger, images, dets, output, scale=1):
raise NotImplementedError
def show_results(self, debugger, image, results):
raise NotImplementedError
def run(self, image_or_path_or_tensor, meta=None):
load_time, pre_time, net_time, dec_time, post_time = 0, 0, 0, 0, 0
merge_time, tot_time = 0, 0
debugger = Debugger(dataset=self.opt.dataset, ipynb=(self.opt.debug==3),
theme=self.opt.debugger_theme)
start_time = time.time()
pre_processed = False
if isinstance(image_or_path_or_tensor, np.ndarray):
image = image_or_path_or_tensor
    elif isinstance(image_or_path_or_tensor, str):
image = cv2.imread(image_or_path_or_tensor)
else:
image = image_or_path_or_tensor['image'][0].numpy()
pre_processed_images = image_or_path_or_tensor
pre_processed = True
loaded_time = time.time()
load_time += (loaded_time - start_time)
detections = []
for scale in self.scales:
scale_start_time = time.time()
if not pre_processed:
images, meta = self.pre_process(image, scale, meta)
else:
# import pdb; pdb.set_trace()
images = pre_processed_images['images'][scale][0]
meta = pre_processed_images['meta'][scale]
meta = {k: v.numpy()[0] for k, v in meta.items()}
images = images.to(self.opt.device)
torch.cuda.synchronize()
pre_process_time = time.time()
pre_time += pre_process_time - scale_start_time
output, dets, forward_time = self.process(images, return_time=True)
torch.cuda.synchronize()
net_time += forward_time - pre_process_time
decode_time = time.time()
dec_time += decode_time - forward_time
if self.opt.debug >= 2:
self.debug(debugger, images, dets, output, scale)
dets = self.post_process(dets, meta, scale)
torch.cuda.synchronize()
post_process_time = time.time()
post_time += post_process_time - decode_time
detections.append(dets)
results = self.merge_outputs(detections)
torch.cuda.synchronize()
end_time = time.time()
merge_time += end_time - post_process_time
tot_time += end_time - start_time
    # defaults so the return statement below is valid even when debug is off
    img_ = None
    faces_boxes = []
    person_boxes = []
    if self.opt.debug >= 1:
# print('--->>> base_detector run show_results')
# img_ = self.show_results(debugger, image, results)
debugger.add_img(image, img_id='multi_pose')
#---------------------------------------------------------------- NMS
nms_dets_ = []
for bbox in results[1]:
if bbox[4] > self.opt.vis_thresh:
nms_dets_.append((bbox[0], bbox[1],bbox[2], bbox[3], bbox[4]))
if len(nms_dets_)>0:
keep_ = py_cpu_nms(np.array(nms_dets_),thresh=0.35)
# print('keep_ : ',nms_dets_,keep_)
#----------------------------------------------------------------
faces_boxes = []
person_boxes = []
idx = 0
for bbox in results[1]:
if bbox[4] > self.opt.vis_thresh:
idx += 1
if (idx-1) not in keep_:
continue
          # draw the detected object
# print('------------------>>>add_coco_bbox')
debugger.add_coco_bbox(bbox[:4], 0, bbox[4], img_id='multi_pose')
face_pts = debugger.add_coco_hp(bbox[5:39], img_id='multi_pose')
# print('--------------------------------->>>>>>>>>>oou')
if len(face_pts)==5:
# print('change box')
person_boxes.append([int(bbox[0]),int(bbox[1]),int(bbox[2]),int(bbox[3]),bbox[4]])
x_min = min([face_pts[i][0] for i in range(len(face_pts))])
y_min = min([face_pts[i][1] for i in range(len(face_pts))])
x_max = max([face_pts[i][0] for i in range(len(face_pts))])
y_max = max([face_pts[i][1] for i in range(len(face_pts))])
edge = abs(x_max-x_min)
#
bbox_x1 = int(max(0,(x_min-edge*0.05)))
bbox_x2 = int(min(image.shape[1]-1,(x_max+edge*0.05)))
bbox_y1 = int(max(0,(y_min-edge*0.32)))
bbox_y2 = int(min(image.shape[0]-1,(y_max+edge*0.55)))
# print('ppppp',face_pts,x1)
# if ((bbox_x2-bbox_x1)*(bbox_y2-bbox_y1))>100:
faces_boxes.append([bbox_x1,bbox_y1,bbox_x2,bbox_y2,1.])
# cv2.rectangle(image,(bbox_x1,bbox_y1),(bbox_x2,bbox_y2),(0,255,255),2)
# print('-------->>> show_results debugger')
img_ = debugger.show_all_imgs(pause=self.pause)
return img_,{'results': results, 'tot': tot_time, 'load': load_time,
'pre': pre_time, 'net': net_time, 'dec': dec_time,
'post': post_time, 'merge': merge_time},faces_boxes,person_boxes
|
{"hexsha": "4975e5cd1c91381bef6854f2f96249966e2ee258", "size": 8266, "ext": "py", "lang": "Python", "max_stars_repo_path": "chapter_08/lib/detectors/base_detector.py", "max_stars_repo_name": "XiangLiK/cv_course", "max_stars_repo_head_hexsha": "da7c2318fd4128bbdab96db26ddbb2524f37d0a0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-04-11T12:09:39.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-13T06:43:45.000Z", "max_issues_repo_path": "chapter_08/lib/detectors/base_detector.py", "max_issues_repo_name": "XLEric/cv_course", "max_issues_repo_head_hexsha": "da7c2318fd4128bbdab96db26ddbb2524f37d0a0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter_08/lib/detectors/base_detector.py", "max_forks_repo_name": "XLEric/cv_course", "max_forks_repo_head_hexsha": "da7c2318fd4128bbdab96db26ddbb2524f37d0a0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-16T20:42:42.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-16T20:42:42.000Z", "avg_line_length": 36.7377777778, "max_line_length": 96, "alphanum_fraction": 0.6035567384, "include": true, "reason": "import numpy", "num_tokens": 2382}
|
#!/usr/bin/env python3
# coding: utf-8
import logging
import os
import sys
import warnings
from argparse import ArgumentParser
from typing import List, Tuple, Dict
import numpy as np
import pandas
from pandas import DataFrame
try:
import matchbox
except ImportError:
sys.path.append(os.path.join(os.path.dirname(sys.argv[0]), '..'))
from matchbox.uintersect import UIntersectFinder
from matchbox.util.loaders import load_datasets
logger = logging.getLogger('ds-summary')
def max_ind_by_name(d1: DataFrame, d2: DataFrame) -> int:
"""
Return how many matching names there are between two datasets
"""
return np.in1d(d1.columns, d2.columns).sum()
def uind_histogram(dataframes: List[Tuple[str, DataFrame]], alphas: List[float]) -> Dict[
float, Tuple[np.ndarray, np.ndarray]]:
"""
    Compute a histogram of how many matching columns each column of the first
    dataframe has in the second one.
Returns
-------
A dictionary where the key matches the significance levels passed as parameter,
and the value is a pair with the binning and the counts
"""
# UINDs have as an attribute the significance level, so
# just compute the minimum, and then filter for the rest
logger.info('Computing UIND')
uind_finder = UIntersectFinder(method='ks')
for df_name, df in dataframes:
uind_finder.add(df_name, df)
uinds = uind_finder(alpha=min(alphas), no_symmetric=True)
logger.info('Computing histograms')
histograms = {}
for a in alphas:
uind_subset = [u for u in uinds if u.confidence >= a]
matches = {c: 0 for c in dataframes[0][1].columns}
for u in uind_subset:
c = u.lhs.attr_names[0]
matches[c] += 1
match_count = list(matches.values())
bins = np.arange(0, max(match_count) + 1)
edges = np.concatenate([[-0.5], bins + 0.5])
hist, edges = np.histogram(match_count, bins=edges)
histograms[a] = DataFrame(data=hist.reshape(1, -1), columns=bins)
return histograms
def main():
"""
Entry point
"""
# Basic setup
warnings.filterwarnings("ignore", category=RuntimeWarning)
# Arguments
parser = ArgumentParser(description='Display some summary statistics about a dataset')
parser.add_argument('--debug', action='store_true', help='Enable debug output')
parser.add_argument('--seed', type=int, default=0, help='Initial random seed')
parser.add_argument('--sample-size', type=int, default=200, help='Sample size')
parser.add_argument('--uind-alphas', type=float, nargs='+', default=[0.05, 0.1],
help='Significance level for the unary IND tests')
parser.add_argument('data1', metavar='DATA1', help='Dataset 1')
parser.add_argument('data2', metavar='DATA2', help='Dataset 2')
args = parser.parse_args()
# Logging
logging.basicConfig(format='%(asctime)s %(name)15.15s %(levelname)s\t%(message)s',
level=logging.DEBUG if args.debug else logging.INFO, stream=sys.stderr)
# Initialize the random state
random_generator = np.random.MT19937(args.seed)
# Load dataset
logging.info('Loading datasets')
datasets = load_datasets([args.data1, args.data2])
# Display general summary
print('Rows:', ' + '.join([str(len(df)) for _, df in datasets]))
print('Columns:', ' + '.join([str(len(df.columns)) for _, df in datasets]))
print('Max. IND:', max_ind_by_name(datasets[0][1], datasets[1][1]))
# Select samples
logger.info('Using %d samples', args.sample_size)
with pandas.option_context('mode.use_inf_as_na', True):
samples = [
(name, df.sample(args.sample_size, replace=True, random_state=random_generator))
for name, df in datasets
]
# Get the histogram for matching UINDs
uind_hist = uind_histogram(samples, alphas=args.uind_alphas)
for a in args.uind_alphas:
print('Matching counts for', a)
print('\t' + uind_hist[a].to_string().replace('\n', '\n\t'))
if __name__ == '__main__':
main()
|
{"hexsha": "9b922d598759cfc0bbcf6337cac53741db954a2c", "size": 4102, "ext": "py", "lang": "Python", "max_stars_repo_path": "bin/ds-summary.py", "max_stars_repo_name": "ayllon/MatchBox", "max_stars_repo_head_hexsha": "367b69c51f1ef4b574ce2a534d3e5441b2b2933b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bin/ds-summary.py", "max_issues_repo_name": "ayllon/MatchBox", "max_issues_repo_head_hexsha": "367b69c51f1ef4b574ce2a534d3e5441b2b2933b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bin/ds-summary.py", "max_forks_repo_name": "ayllon/MatchBox", "max_forks_repo_head_hexsha": "367b69c51f1ef4b574ce2a534d3e5441b2b2933b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.0598290598, "max_line_length": 96, "alphanum_fraction": 0.6657727938, "include": true, "reason": "import numpy", "num_tokens": 1002}
|
from pathlib import Path
from typing import Dict, List, Set, Tuple
import networkx as nx
import numpy as np
import pandas as pd
def sepset_dict_to_ndarray(
sepsets: Dict[Tuple[int, int], Set[int]], variable_count: int, max_level: int
) -> np.ndarray:
separation_sets = np.full((variable_count, variable_count, max_level), -1, np.int32)
for (v_i, v_j), value in sepsets.items():
sepset_list = list(value)
sepset_list.extend([-1] * (max_level - len(sepset_list)))
separation_sets[v_i][v_j] = sepset_list
return separation_sets
# experimentally determined for the current datasets, will throw if exceeded
MAX_LEVEL_SEPSETS_PCALG = 20
def read_sepsets_pcalg(file_path: str, variable_count: int) -> Tuple[np.ndarray, int]:
separation_sets = np.full(
(variable_count, variable_count, MAX_LEVEL_SEPSETS_PCALG), -1, np.int32
)
with open(file_path) as f:
lines = f.readlines()
for line in lines:
if line.strip() == "":
continue
v_1, v_2, sepset_length, *sepset_list = line.split()
assert int(sepset_length) == len(sepset_list)
# pcalg sepsets use the label of the nodes. Transform them to use zero-indexed ints.
v_1 = int(v_1) - 1
v_2 = int(v_2) - 1
sepset_list = [int(v) - 1 for v in list(sepset_list)]
sepset_list_dedup = list(set(sepset_list))
sepset_list_dedup.extend(
[-1] * (MAX_LEVEL_SEPSETS_PCALG - len(sepset_list_dedup))
)
separation_sets[v_1][v_2] = sepset_list_dedup
separation_sets[v_2][v_1] = sepset_list_dedup
return (separation_sets, MAX_LEVEL_SEPSETS_PCALG)
# Convention: for all graphs use id as label, discard all other attributes
# so that equality checks are easier
def read_gml(
file_path: str, label_attr: str, attributes_to_remove: List[str]
) -> nx.DiGraph:
graph = nx.read_gml(path=file_path, label=label_attr)
for _, attr in graph.nodes(data=True):
for attribute_name in attributes_to_remove:
del attr[attribute_name]
return graph
def read_pcalg_gml(file_path):
return read_gml(file_path, "id", ["name"])
def read_ground_truth_gml(file_path):
return read_gml(file_path, "id", ["label"])
def read_gpucsl_gml(file_path: str):
return read_gml(file_path, "id", ["label"])
def read_pmax(file_path: str) -> np.ndarray:
return pd.read_csv(file_path, sep=",", header=None).to_numpy()
|
{"hexsha": "60dcbd3134ca87dadb257051e16aff424956d7af", "size": 2515, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/fixtures/file_readers.py", "max_stars_repo_name": "hpi-epic/gpucsl", "max_stars_repo_head_hexsha": "f461c47ce17105f7cf25aa65d39cb671021f07e4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2022-03-22T14:56:05.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T18:41:58.000Z", "max_issues_repo_path": "tests/fixtures/file_readers.py", "max_issues_repo_name": "hpi-epic/gpucsl", "max_issues_repo_head_hexsha": "f461c47ce17105f7cf25aa65d39cb671021f07e4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2022-03-26T10:39:04.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T20:43:54.000Z", "max_forks_repo_path": "tests/fixtures/file_readers.py", "max_forks_repo_name": "hpi-epic/gpucsl", "max_forks_repo_head_hexsha": "f461c47ce17105f7cf25aa65d39cb671021f07e4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.6707317073, "max_line_length": 96, "alphanum_fraction": 0.6664015905, "include": true, "reason": "import numpy,import networkx", "num_tokens": 676}
|
__precompile__()
module Utils
# export @define, @reexport
#
# include("utils/macro_utils.jl")
export istriustrict, istrilstrict
export simd_scale!, simd_copy!, simd_copy_scale!,
simd_copy_xy_first!, simd_copy_yx_first!,
simd_copy_yx_first_last!,
simd_xpy!, simd_axpy!, simd_wxpy!, simd_waxpy!, simd_mult!
include("utils/matrix_utils.jl")
end
|
{"hexsha": "11766ffc7a03ba27343cad20fb584aedbfead27e", "size": 404, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/Utils.jl", "max_stars_repo_name": "ChrisRackauckas/GeomDAE.jl", "max_stars_repo_head_hexsha": "dbaf5d909dc1686d211312f4143ad7355ffbc325", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/Utils.jl", "max_issues_repo_name": "ChrisRackauckas/GeomDAE.jl", "max_issues_repo_head_hexsha": "dbaf5d909dc1686d211312f4143ad7355ffbc325", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Utils.jl", "max_forks_repo_name": "ChrisRackauckas/GeomDAE.jl", "max_forks_repo_head_hexsha": "dbaf5d909dc1686d211312f4143ad7355ffbc325", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.4444444444, "max_line_length": 69, "alphanum_fraction": 0.6757425743, "num_tokens": 108}
|
# -*- coding: utf-8 -*-
"""
Created on Sat Oct 12 14:51:16 2019
@author: Devendra Mishra
"""
#%%
"""
santa is matrix produced from the previous code synthetic
outputs as out and input as inp
took learning rate as lr=0.01
7 layers with tanh activation function and 1 layer of sigmoid activation function is used to train the model
wi where i=1,2,3,... are weights of ith layer
bi where i=1,2,3,... are biases of ith layer
took 50 iterations to train the model as the synthetic data is easier to train.
"""
import numpy as np
out=santa[-1:]
inp=santa[:-1]
lr = 0.01
"""
random initialising weights and biases
"""
w1 = np.random.rand(69,69)*0.1
b1 = np.zeros((69,1))
w2 = np.random.rand(69,69)*0.1
b2 = np.zeros((69,1))  # independent zero bias (avoids aliasing b1)
w3 = np.random.rand(69,69)*0.1
b3 = np.zeros((69,1))  # independent zero bias (avoids aliasing b1)
w4 = np.random.rand(30,69)*0.1
b4 = np.zeros((30,1))
w5 = np.random.rand(30,30)*0.1
b5 = np.zeros((30,1))
w6 = np.random.rand(20,30)*0.1
b6 = np.zeros((20,1))
w7 = np.random.rand(10,20)*0.1
b7 = np.zeros((10,1))
w8 = np.random.rand(1,10)*0.1
b8=np.zeros((1,1))
for iterations in range(0,50):
"""
forward propagation of the Neural Network Model
"""
x1 = np.add(np.dot(w1,inp),b1)
z1 = np.tanh(x1)
x2 = np.add(np.dot(w2,z1),b2)
z2 = np.tanh(x2)
x3 = np.add(np.dot(w3,z2),b3)
z3 = np.tanh(x3)
x4 = np.add(np.dot(w4,z3),b4)
z4 = np.tanh(x4)
x5 = np.add(np.dot(w5,z4),b5)
z5 = np.tanh(x5)
x6 = np.add(np.dot(w6,z5),b6)
z6 = np.tanh(x6)
x7 = np.add(np.dot(w7,z6),b7)
z7 = np.tanh(x7)
x8 = np.add(np.dot(w8,z7),b8)
z8 = np.divide(1,np.add(1,np.exp(-x8)))
"""
back Propagation of the Neural Network Model
"""
dx8 = np.subtract(z8,out)
dw8 = np.dot(dx8,np.transpose(z7))
db8 = np.asmatrix(np.sum(dx8,axis=1))
dz7 = np.dot(np.transpose(w8),dx8)
dx7 = np.multiply(dz7,np.square(np.divide(1,np.cosh(x7))))
dw7 = np.dot(dx7,np.transpose(z6))
db7 = np.reshape(np.asmatrix(np.sum(dx7,axis=1)),(10,1))
dz6 = np.dot(np.transpose(w7),dx7)
dx6 = np.multiply(dz6,np.square(np.divide(1,np.cosh(x6))))
dw6 = np.dot(dx6,np.transpose(z5))
db6 = np.reshape(np.asmatrix(np.sum(dx6,axis=1)),(20,1))
dz5 = np.dot(np.transpose(w6),dx6)
dx5 = np.multiply(dz5,np.square(np.divide(1,np.cosh(x5))))
dw5 = np.dot(dx5,np.transpose(z4))
db5 = np.reshape(np.asmatrix(np.sum(dx5,axis=1)),(30,1))
dz4 = np.dot(np.transpose(w5),dx5)
dx4 = np.multiply(dz4,np.square(np.divide(1,np.cosh(x4))))
dw4 = np.dot(dx4,np.transpose(z3))
db4 = np.reshape(np.asmatrix(np.sum(dx4,axis=1)),(30,1))
dz3 = np.dot(np.transpose(w4),dx4)
dx3 = np.multiply(dz3,np.square(np.divide(1,np.cosh(x3))))
dw3 = np.dot(dx3,np.transpose(z2))
db3 = np.reshape(np.asmatrix(np.sum(dx3,axis=1)),(69,1))
dz2 = np.dot(np.transpose(w3),dx3)
dx2 = np.multiply(dz2,np.square(np.divide(1,np.cosh(x2))))
dw2 = np.dot(dx2,np.transpose(z1))
db2 = np.reshape(np.asmatrix(np.sum(dx2,axis=1)),(69,1))
dz1 = np.dot(np.transpose(w2),dx2)
dx1 = np.multiply(dz1,np.square(np.divide(1,np.cosh(x1))))
dw1 = np.dot(dx1,np.transpose(inp))
db1 = np.reshape(np.asmatrix(np.sum(dx1,axis=1)),(69,1))
""""
Updating weights and biases to get the better Neural Network at each iteration
"""
w8 = np.subtract(w8,np.multiply(lr,dw8))
b8 = np.subtract(b8,np.multiply(lr,db8))
w7 = np.subtract(w7,np.multiply(lr,dw7))
b7 = np.subtract(b7,np.multiply(lr,db7))
w6 = np.subtract(w6,np.multiply(lr,dw6))
b6 = np.subtract(b6,np.multiply(lr,db6))
w5 = np.subtract(w5,np.multiply(lr,dw5))
b5 = np.subtract(b5,np.multiply(lr,db5))
w4 = np.subtract(w4,np.multiply(lr,dw4))
b4 = np.subtract(b4,np.multiply(lr,db4))
w3 = np.subtract(w3,np.multiply(lr,dw3))
b3 = np.subtract(b3,np.multiply(lr,db3))
w2 = np.subtract(w2,np.multiply(lr,dw2))
b2 = np.subtract(b2,np.multiply(lr,db2))
w1 = np.subtract(w1,np.multiply(lr,dw1))
b1 = np.subtract(b1,np.multiply(lr,db1))
"""
Once the ANN has been trained (for a few hours),
feed a data sample to the code below to determine whether the body is an Inclined Sheet or a Horizontal Cylinder,
where 'data' is a matrix of 69 rows and 1 column (data of an Inclined Sheet or a Horizontal Cylinder over the principal profile).
x1 = np.add(np.dot(w1,data),b1)
z1 = np.tanh(x1)
x2 = np.add(np.dot(w2,z1),b2)
z2 = np.tanh(x2)
x3 = np.add(np.dot(w3,z2),b3)
z3 = np.tanh(x3)
x4 = np.add(np.dot(w4,z3),b4)
z4 = np.tanh(x4)
x5 = np.add(np.dot(w5,z4),b5)
z5 = np.tanh(x5)
x6 = np.add(np.dot(w6,z5),b6)
z6 = np.tanh(x6)
x7 = np.add(np.dot(w7,z6),b7)
z7 = np.tanh(x7)
x8 = np.add(np.dot(w8,z7),b8)
z8 = np.divide(1,np.add(1,np.exp(-x8)))
"""
|
{"hexsha": "e71af5997549b56dbd8a2c5ca023a494ca43afa6", "size": 4663, "ext": "py", "lang": "Python", "max_stars_repo_path": "Ann.py", "max_stars_repo_name": "Devendra1225mishra/Optimisations", "max_stars_repo_head_hexsha": "dc59819693728807740cc7b510a5b643b3717fc7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Ann.py", "max_issues_repo_name": "Devendra1225mishra/Optimisations", "max_issues_repo_head_hexsha": "dc59819693728807740cc7b510a5b643b3717fc7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Ann.py", "max_forks_repo_name": "Devendra1225mishra/Optimisations", "max_forks_repo_head_hexsha": "dc59819693728807740cc7b510a5b643b3717fc7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.7985074627, "max_line_length": 124, "alphanum_fraction": 0.6356422904, "include": true, "reason": "import numpy", "num_tokens": 1666}
|
#!/usr/bin/env python
# coding: utf-8
import numpy as np
import pandas as pd
import glob
filnames = glob.glob('data_transport/u_*')
filnames.sort()
dates = [filname.split('.')[-2].split('_')[-1] for filname in filnames]
gammas = [filname.split('.')[-2].split('_')[-2] for filname in filnames]
opt_types = [filname.split('.')[-2].split('_')[-3] for filname in filnames]
df_severe_beds = pd.read_csv('data_Koro/severe_beds.csv',index_col=0)
df_u = pd.DataFrame(df_severe_beds['japan_prefecture_code'])
df_u['都道府県名'] = df_severe_beds['都道府県名']
df_i = df_u.copy()
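# Assumption from context: each u_* file holds a flattened T x (N*N) transport
# plan, where U[t, i, j] is the number of patients moved from prefecture i to
# prefecture j on day t. Summing the first 14 days over axis 0 and then over
# the second axis gives exports per prefecture; over the first axis, imports.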
for i in range(len(filnames)):
uv = np.load(filnames[i])
T = uv.shape[0]
N = int(np.sqrt(uv.shape[1]))
U = uv.reshape(T,N,N)
df_u[opt_types[i] + '_' + gammas[i] + '_' + dates[i]] = U[:14,:,:].sum(axis = 0).sum(1)
df_i[opt_types[i] + '_' + gammas[i] + '_' + dates[i]] = U[:14,:,:].sum(axis = 0).sum(0)
df_u.to_csv('resultC_transport_strategy/summary/export_num.csv')
df_i.to_csv('resultC_transport_strategy/summary/import_num.csv')
dirC = 'resultC_transport_strategy/'
dirnames = (df_severe_beds['japan_prefecture_code'] + df_severe_beds['都道府県名']).values
for i in range(len(filnames)):
uv = np.load(filnames[i])
T = uv.shape[0]
N = int(np.sqrt(uv.shape[1]))
U = uv.reshape(T,N,N)
png_links = [dirC + dirnames[j] + '/transport_{0}_{1}_{2}.png\n'.format(opt_types[i],gammas[i],dates[i]) for j in range(df_u.shape[0]) if U[:14,:,:].sum(axis = 0).sum(1)[j]>0]
path = 'slide_png_list/transport_{0}_{1}_{2}.txt'.format(opt_types[i],gammas[i],dates[i])
with open(path, mode='w') as f:
f.writelines(png_links)
if dates[i] == max(dates) and opt_types[i] == 'mean':
path = 'slide_png_list/transport_{0}.txt'.format(gammas[i])
with open(path, mode='w') as f:
f.writelines(png_links)
# ################################### Hospital ######################################
filnames = glob.glob('data_hospital_transport/u_*')
filnames.sort()
dates = [filname.split('.')[-2].split('_')[-1] for filname in filnames]
gammas = [filname.split('.')[-2].split('_')[-2] for filname in filnames]
opt_types = [filname.split('.')[-2].split('_')[-3] for filname in filnames]
df_severe_beds = pd.read_csv('data_Koro/hospital_beds.csv',index_col=0)
df_u = pd.DataFrame(df_severe_beds['japan_prefecture_code'])
df_u['都道府県名'] = df_severe_beds['都道府県名']
df_i = df_u.copy()
for i in range(len(filnames)):
uv = np.load(filnames[i])
T = uv.shape[0]
N = int(np.sqrt(uv.shape[1]))
U = uv.reshape(T,N,N)
df_u[opt_types[i] + '_' + gammas[i] + '_' + dates[i]] = U[:14,:,:].sum(axis = 0).sum(1)
df_i[opt_types[i] + '_' + gammas[i] + '_' + dates[i]] = U[:14,:,:].sum(axis = 0).sum(0)
df_u.to_csv('resultD_transport_strategy_hospital/summary/export_num.csv')
df_i.to_csv('resultD_transport_strategy_hospital/summary/import_num.csv')
dirD = 'resultD_transport_strategy_hospital/'
dirnames = (df_severe_beds['japan_prefecture_code'] + df_severe_beds['都道府県名']).values
for i in range(len(filnames)):
uv = np.load(filnames[i])
T = uv.shape[0]
N = int(np.sqrt(uv.shape[1]))
U = uv.reshape(T,N,N)
png_links = [dirD + dirnames[j] + '/transport_{0}_{1}_{2}.png\n'.format(opt_types[i],gammas[i],dates[i]) for j in range(df_u.shape[0]) if U[:14,:,:].sum(axis = 0).sum(1)[j]>0]
path = 'slide_png_hospital/transport_{0}_{1}_{2}.txt'.format(opt_types[i],gammas[i],dates[i])
with open(path, mode='w') as f:
f.writelines(png_links)
if dates[i] == max(dates) and opt_types[i] == 'mean':
path = 'slide_png_hospital/transport_{0}.txt'.format(gammas[i])
with open(path, mode='w') as f:
f.writelines(png_links)
# ################################### over_beds_date ######################################
filenames = glob.glob('data_severe/x_*')
filenames.sort()
forecast_dates = [filename.split('_')[-1].split('.')[0] for filename in filenames]
df_severe = pd.read_csv("data_severe/x_" +max(forecast_dates) + ".csv",index_col=0)
df_hospital = pd.read_csv("data_hospital/x_" +max(forecast_dates) + ".csv",index_col=0)
df_hospital_beds = pd.read_csv('data_Koro/hospital_beds.csv',index_col=0)
df_severe_beds = pd.read_csv('data_Koro/severe_beds.csv',index_col=0)
new_time_google = max(forecast_dates)
new_time_Koro = max(df_hospital_beds.columns[2:])
N = df_severe.shape[1]
severe_beds = df_severe_beds[new_time_Koro].values
hospital_beds = df_hospital_beds[new_time_Koro].values
gamma = 1
over_severe_beds100 = []
over_hospital_beds100 = []
for i in range(N):
if sum(df_severe[df_severe.columns[i]] > gamma * severe_beds[i]) !=0:
over_severe_beds100.append( min(df_severe.index[df_severe[df_severe.columns[i]] > gamma * severe_beds[i]]))
else:
over_severe_beds100.append(None)
if sum(df_hospital[df_hospital.columns[i]] > gamma * hospital_beds[i]) !=0:
over_hospital_beds100.append( min(df_hospital.index[df_hospital[df_hospital.columns[i]] > gamma * hospital_beds[i]]))
else:
over_hospital_beds100.append(None)
gamma = 0.8
over_severe_beds080 = []
over_hospital_beds080 = []
for i in range(N):
if sum(df_severe[df_severe.columns[i]] > gamma * severe_beds[i]) !=0:
over_severe_beds080.append( min(df_severe.index[df_severe[df_severe.columns[i]] > gamma * severe_beds[i]]))
else:
over_severe_beds080.append(None)
if sum(df_hospital[df_hospital.columns[i]] > gamma * hospital_beds[i]) !=0:
over_hospital_beds080.append( min(df_hospital.index[df_hospital[df_hospital.columns[i]] > gamma * hospital_beds[i]]))
else:
over_hospital_beds080.append(None)
df = df_hospital_beds[['japan_prefecture_code','都道府県名']].copy()
df['over_severe_beds080'] = over_severe_beds080
df['over_severe_beds100'] = over_severe_beds100
df['over_hospital_beds080'] = over_hospital_beds080
df['over_hospital_beds100'] = over_hospital_beds100
df.to_csv('slide_over_beds/over_list_'+new_time_google +'.csv')
|
{"hexsha": "6989ad6fdc7f385acd78d9b3e5ef8638216531a9", "size": 6004, "ext": "py", "lang": "Python", "max_stars_repo_path": "make_check_list.py", "max_stars_repo_name": "clinfo/2021_Patients_Transport", "max_stars_repo_head_hexsha": "4f14cd0b1350eca98dfbe9d4ae530fda34759811", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "make_check_list.py", "max_issues_repo_name": "clinfo/2021_Patients_Transport", "max_issues_repo_head_hexsha": "4f14cd0b1350eca98dfbe9d4ae530fda34759811", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "make_check_list.py", "max_forks_repo_name": "clinfo/2021_Patients_Transport", "max_forks_repo_head_hexsha": "4f14cd0b1350eca98dfbe9d4ae530fda34759811", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.9520958084, "max_line_length": 180, "alphanum_fraction": 0.6640572951, "include": true, "reason": "import numpy", "num_tokens": 1844}
|
"""
Baseline
Bi-GRU encoder -> MLP -> MST/ILP
"""
import tensorflow as tf
import numpy as np
import os, json, random, time, re, math
from Sentence_Encoder import Sentence_Encoder
from utils import load_data, build_vocab
from os import path as fp
from ilp import load_scip_output, mk_zimpl_input, dump_scores_to_dat_files
if not os.environ.has_key("CUDA_VISIBLE_DEVICES"):
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
FLAGS = tf.flags.FLAGS
tf.flags.DEFINE_boolean("train", False, "is training")
tf.flags.DEFINE_string("scip_path", "~/SCIPOptSuite-6.0.0-Linux/bin/scip", "path to SCIP")
tf.flags.DEFINE_boolean("load_checkpoint", True, "load checkpoint")
tf.flags.DEFINE_string("word_vector", "../glove/glove.6B.100d.txt", "word vector")
tf.flags.DEFINE_string("model_dir", "model", "model directory")
tf.flags.DEFINE_string("log_dir", "log", "log directory")
tf.flags.DEFINE_string("method", "mst", "method mst/ilp/greedy")
tf.flags.DEFINE_integer("max_edu_dist", 20, "maximum distance between two related edus")
tf.flags.DEFINE_integer("dim_embed_word", 100, "dimension of word embedding")
tf.flags.DEFINE_integer("dim_traditional", 3, "dimension of traditional features")
tf.flags.DEFINE_integer('num_epochs', 50, 'number of epochs')
tf.flags.DEFINE_integer("num_units", 256, "number of units in RNN encoder")
tf.flags.DEFINE_integer("num_layers", 1, "number of RNN layers")
tf.flags.DEFINE_integer("num_relations", 16, "number of relations")
tf.flags.DEFINE_integer("batch_size", 4, "batch size")
tf.flags.DEFINE_integer("vocab_size", 1000, "vocabulary size")
tf.flags.DEFINE_float("positive_lb", 0.5, "lower bound of positive samples")
tf.flags.DEFINE_float("keep_prob", 0.5, "probability to keep units in dropout")
tf.flags.DEFINE_float("learning_rate", 0.1, "learning rate")
tf.flags.DEFINE_float("learning_rate_decay", 0.98, "learning rate decay factor")
def padding(sent, l):
return sent + ["EOS"] + ["PAD"] * (l - len(sent) - 1)
def get_batched_data_test(dialog, edges_candidate):
data = get_batches([dialog], 1, is_test=True)[0]
x, y, traditional_features = [], [], []
for sample in edges_candidate:
x.append(sample["i"])
y.append(sample["j"])
traditional_features.append(sample["traditional_features"])
return {
"text_string": data["text_string"],
"len_word": data["len_word"],
"len_doc": data["len_doc"],
"len_sent": data["len_sent"],
"x_bi": np.array(x),
"y_bi": np.array(y),
"traditional_features_bi": np.array(traditional_features),
"x_multi": np.array(x),
"y_multi": np.array(y),
"traditional_features_multi": np.array(traditional_features),
}
def get_batches(data, batch_size, positive_lb=0, is_test=False):
batches = []
random.shuffle(data)
text_string, len_word, len_sent, len_doc = [], [], [], []
x_bi, y_bi, relation_bi, traditional_features_bi = [], [], [], []
x_multi, y_multi, relation_multi, traditional_features_multi, idx = [], [], [], [], []
for i in range(len(data) / batch_size + bool(len(data) % batch_size)):
max_num_sent, max_num_word = 0, 0
for dialog in data[i * batch_size : (i + 1) * batch_size]:
max_num_sent = max(max_num_sent, len(dialog["edus"]))
for edu in dialog["edus"]:
max_num_word = max(max_num_word, len(edu["tokens"]))
max_num_word += 1
sample_pos, sample_neg = [], []
for dialog in data[i * batch_size : (i + 1) * batch_size]:
text_string.append([])
len_word.append([])
for edu in dialog["edus"]:
text_string[-1].append(padding(edu["tokens"], max_num_word))
len_word[-1].append(len(edu["tokens"]) + 1)
length = len(dialog["edus"])
for i in range(max_num_sent - length):
text_string[-1].append(["POS"] * max_num_word)
len_word[-1].append(0)
len_doc.append(length)
std = np.zeros((length, length), dtype=np.int32)
for relation in dialog["relations"]:
std[relation["x"]][relation["y"]] = relation["type"] + 1
if FLAGS.method == "ilp":
for i in range(length):
for j in range(length):
if j - i <= FLAGS.max_edu_dist:
sample = {
"x": (len(text_string) - 1) * max_num_sent + i,
"y": (len(text_string) - 1) * max_num_sent + j,
"relation": std[i][j],
"traditional_features": [
bool(dialog["edus"][i]["speaker"] == dialog["edus"][j]["speaker"]),
bool(dialog["edus"][i]["turn"] == dialog["edus"][j]["turn"]),
j - i
]
}
if std[i][j] > 0:
sample_pos.append(sample)
else:
sample_neg.append(sample)
else:
for j in range(length):
for i in range(j):
if j - i <= FLAGS.max_edu_dist:
sample = {
"x": (len(text_string) - 1) * max_num_sent + i,
"y": (len(text_string) - 1) * max_num_sent + j,
"relation": std[i][j],
"traditional_features": [
bool(dialog["edus"][i]["speaker"] == dialog["edus"][j]["speaker"]),
bool(dialog["edus"][i]["turn"] == dialog["edus"][j]["turn"]),
j - i
]
}
if std[i][j] > 0:
sample_pos.append(sample)
else:
sample_neg.append(sample)
if positive_lb == 0:
max_negative = len(sample_neg)
else:
max_negative = int(len(sample_pos) / positive_lb - len(sample_pos))
random.shuffle(sample_neg)
sample_neg = sample_neg[:max_negative]
samples = sample_pos + sample_neg
random.shuffle(samples)
if not is_test:
for sample in samples:
x_bi.append(sample["x"])
y_bi.append(sample["y"])
relation_bi.append(bool(sample["relation"] > 0))
traditional_features_bi.append(sample["traditional_features"])
if sample["relation"] > 0:
x_multi.append(sample["x"])
y_multi.append(sample["y"])
relation_multi.append(sample["relation"] - 1)
traditional_features_multi.append(sample["traditional_features"])
idx.append(len(x_bi) - 1)
if len(x_bi) == 0 or len(x_multi) == 0:
text_string, len_word, len_sent, len_doc = [], [], [], []
x_bi, y_bi, relation_bi, traditional_features_bi = [], [], [], []
x_multi, y_multi, relation_multi, traditional_features_multi, idx = [], [], [], [], []
continue
batches.append({
"text_string": np.array(text_string),
"len_word": np.array(len_word),
"len_sent": np.array(len_sent),
"len_doc": np.array(len_doc),
"x_bi": np.array(x_bi),
"y_bi": np.array(y_bi),
"relation_bi": np.array(relation_bi),
"traditional_features_bi": np.array(traditional_features_bi),
"x_multi": np.array(x_multi),
"y_multi": np.array(y_multi),
"relation_multi": np.array(relation_multi),
"traditional_features_multi": np.array(traditional_features_multi),
"idx": np.array(idx),
})
text_string, len_word, len_sent, len_doc = [], [], [], []
x_bi, y_bi, relation_bi, traditional_features_bi = [], [], [], []
x_multi, y_multi, relation_multi, traditional_features_multi, idx = [], [], [], [], []
return batches
def test_mst():
cnt_golden, cnt_pred, cnt_cor_bi, cnt_cor_multi = 0, 0, 0, 0
_cnt_golden = np.zeros(200)
_cnt_pred = np.zeros(200)
_cnt_cor_bi = np.zeros(200)
_cnt_cor_multi = np.zeros(200)
f1_bi = np.zeros(200)
f1_multi = np.zeros(200)
for i, dialog in enumerate(data_test):
if len(dialog["edus"]) == 1: continue
edges_candidate = []
for j in range(len(dialog["edus"])):
if FLAGS.method == "greedy" or j == 0 or j > 0 and dialog["edus"][j]["turn"] != dialog["edus"][j - 1]["turn"]: # inter-turn
for i in range(len(dialog["edus"])):
if i < j and j - i <= FLAGS.max_edu_dist:
edges_candidate.append({
"traditional_features": [
bool(dialog["edus"][i]["speaker"] == dialog["edus"][j]["speaker"]),
bool(dialog["edus"][i]["turn"] == dialog["edus"][j]["turn"]),
j - i
],
"i": i,
"j": j
})
else: # intra-turn
i = j - 1
edges_candidate.append({
"traditional_features": [
bool(dialog["edus"][i]["speaker"] == dialog["edus"][j]["speaker"]),
bool(dialog["edus"][i]["turn"] == dialog["edus"][j]["turn"]),
1
],
"i": i,
"j": j
})
batch = get_batched_data_test(dialog, edges_candidate)
r, weight = sentence_encoder.infer(batch)
best_weight = [-1] * len(dialog["edus"])
edge = [None] * len(dialog["edus"])
for i, e in enumerate(edges_candidate):
e["type"] = r[i]
e["weight"] = weight[i]
if e["weight"] > best_weight[e["j"]]:
best_weight[e["j"]] = e["weight"]
edge[e["j"]] = i
pred = []
cnt_in = [0] * len(dialog["edus"])
for i in range(len(dialog["edus"])):
if edge[i] is not None:
idx = edge[i]
cnt_in[edges_candidate[idx]["j"]] += 1
pred.append((edges_candidate[idx]["i"], edges_candidate[idx]["j"], edges_candidate[idx]["type"]))
root_pred = 0
while (cnt_in[root_pred] > 0): root_pred += 1
cnt_in = [0] * len(dialog["edus"])
for relation in dialog["relations"]:
cnt_in[relation["y"]] += 1
cnt_pred += 1
for i in range(len(dialog["edus"])):
if cnt_in[i] == 0:
cnt_golden += 1
_cnt_golden[i + 1] += 1
if cnt_in[root_pred] == 0:
cnt_cor_bi += 1
cnt_cor_multi += 1
relation_types = np.zeros((len(dialog["edus"]), len(dialog["edus"])), dtype=np.int32)
for relation in dialog["relations"]:
relation_types[relation["x"]][relation["y"]] = relation["type"] + 1
if relation["y"] > relation["x"]:
_cnt_golden[relation["y"] - relation["x"]] += 1
last = -1
for rr in dialog["relations"]:
if rr["y"] == relation["x"] and rr["x"] < relation["x"] and rr["x"] > last:
last = rr["x"]
if last == -1:
last = relation["x"]
cnt_golden += 1
_cnt_pred[1] += 1
if cnt_in[0] == 0:
_cnt_cor_bi[1] += 1
_cnt_cor_multi[1] += 1
for r in pred:
cnt_pred += 1
if relation_types[r[0]][r[1]] > 0:
cnt_cor_bi += 1
if r[2] == relation_types[r[0]][r[1]] - 1:
cnt_cor_multi += 1
last = -1
for rr in dialog["relations"]:
if rr["y"] == r[0] and rr["x"] < r[0] and rr["x"] > last:
last = rr["x"]
if last == -1:
last = r[0]
_cnt_pred[r[1] - r[0]] += 1
if relation_types[r[0]][r[1]] > 0:
_cnt_cor_bi[r[1] - r[0]] += 1
if relation_types[r[0]][r[1]] == r[2] + 1:
_cnt_cor_multi[r[1] - r[0]] += 1
for i in range(20):
prec = _cnt_cor_bi[i] * 1. / _cnt_pred[i]
recall = _cnt_cor_bi[i] * 1. / _cnt_golden[i]
f1_bi[i] = 2. * prec * recall / (prec + recall)
prec = _cnt_cor_multi[i] * 1. / _cnt_pred[i]
recall = _cnt_cor_multi[i] * 1. / _cnt_golden[i]
f1_multi[i] = 2. * prec * recall / (prec + recall)
prec_bi = cnt_cor_bi * 1. / cnt_pred
recall_bi = cnt_cor_bi * 1. / cnt_golden
f1_bi = 2 * prec_bi * recall_bi / (prec_bi + recall_bi)
prec_multi = cnt_cor_multi * 1. / cnt_pred
recall_multi = cnt_cor_multi * 1. / cnt_golden
f1_multi = 2 * prec_multi * recall_multi / (prec_multi + recall_multi)
return [
prec_bi,
recall_bi,
f1_bi,
prec_multi,
recall_multi,
f1_multi
]
def test_ilp():
cnt_golden, cnt_pred, cnt_cor_bi, cnt_cor_multi = 0, 0, 0, 0
for i, dialog in enumerate(data_test):
if len(dialog["edus"]) == 1: continue
edges_candidate = []
n = len(dialog["edus"])
att_mat = np.zeros((n, n))
lab_tsr = np.zeros((n, n, FLAGS.num_relations))
for i in range(len(dialog["edus"])):
for j in range(len(dialog["edus"])):
edges_candidate.append({
"traditional_features": [
bool(dialog["edus"][i]["speaker"] == dialog["edus"][j]["speaker"]),
bool(dialog["edus"][i]["turn"] == dialog["edus"][j]["turn"]),
j - i
],
"i": i,
"j": j
})
batch = get_batched_data_test(dialog, edges_candidate)
r, weight_bi, weight_multi = sentence_encoder.infer(batch)
cur = 0
for i in range(len(dialog["edus"])):
for j in range(len(dialog["edus"])):
att_mat[i][j] = weight_bi[cur]
for k in range(FLAGS.num_relations):
lab_tsr[i][j][k] = weight_multi[cur][k]
cur += 1
# Prepare ZIMPL template and data
dump_scores_to_dat_files(dialog, att_mat, lab_tsr, "raw")
input_path = mk_zimpl_input(dialog)
# Run SCIP
param_path = fp.join('scip.parameters')
output_path = fp.join("./tmp", 'output.scip')
os.system(FLAGS.scip_path + " " \
+ " -f " + input_path \
+ " -s " + param_path \
+ " > " + output_path)
relation_types = np.zeros((len(dialog["edus"]), len(dialog["edus"])), dtype=np.int32)
for relation in dialog["relations"]:
relation_types[relation["x"]][relation["y"]] = relation["type"] + 1
cnt_golden += 1
        pred = load_scip_output(dialog, output_path)
for r in pred:
cnt_pred += 1
if relation_types[r[0]][r[1]] > 0:
cnt_cor_bi += 1
if r[2] == relation_types[r[0]][r[1]] - 1:
cnt_cor_multi += 1
prec_bi = cnt_cor_bi * 1. / cnt_pred
recall_bi = cnt_cor_bi * 1. / cnt_golden
f1_bi = 2 * prec_bi * recall_bi / (prec_bi + recall_bi)
prec_multi = cnt_cor_multi * 1. / cnt_pred
recall_multi = cnt_cor_multi * 1. / cnt_golden
f1_multi = 2 * prec_multi * recall_multi / (prec_multi + recall_multi)
return [
prec_bi,
recall_bi,
f1_bi,
prec_multi,
recall_multi,
f1_multi
]
map_relations = {}
data_train = load_data('../data/STAC/train.json', map_relations)
data_test = load_data('../data/STAC/test.json', map_relations)
vocab, embed = build_vocab(data_train)
print "Dataset sizes: %d/%d" % (len(data_train), len(data_test))
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
with sess.as_default():
sentence_encoder = Sentence_Encoder(sess, FLAGS, embed)
global_step = tf.Variable(0, name="global_step", trainable=False)
global_step_inc_op = global_step.assign(global_step + 1)
epoch = tf.Variable(0, name='epoch', trainable=False)
epoch_inc_op = epoch.assign(epoch + 1)
saver = tf.train.Saver(
write_version=tf.train.SaverDef.V2,
max_to_keep=None,
pad_step_number=True,
keep_checkpoint_every_n_hours=1.0
)
summary_list = [
"loss", "loss_bi", "loss_multi",
"prec_bi", "recall_bi", "f1_bi",
"prec_multi", "recall_multi", "f1_multi",
]
summary_num = len(summary_list)
len_output_feed = 7
if FLAGS.train:
if FLAGS.load_checkpoint and tf.train.get_checkpoint_state(FLAGS.model_dir):
print "Reading model parameters from %s" % FLAGS.model_dir
saver.restore(sess, tf.train.latest_checkpoint(FLAGS.model_dir))
else:
print "Created model with fresh parameters"
sentence_encoder.initialize(vocab=vocab)
sess.run(tf.global_variables_initializer())
print "Trainable variables:"
for var in tf.trainable_variables():
print var
train_writer = tf.summary.FileWriter(os.path.join(FLAGS.log_dir, "train"))
test_writer = tf.summary.FileWriter(os.path.join(FLAGS.log_dir, "test"))
summary_placeholders = [tf.placeholder(tf.float32) for i in range(summary_num)]
summary_op = [tf.summary.scalar(summary_list[i], summary_placeholders[i]) for i in range(summary_num)]
test_batches = get_batches(data_test, FLAGS.batch_size)
best_test = [0, 0]
while epoch.eval() < FLAGS.num_epochs:
epoch_inc_op.eval()
summary_steps = 0
train_batches = get_batches(data_train, FLAGS.batch_size, FLAGS.positive_lb)
start_time = time.time()
s = np.zeros(len_output_feed)
for batch in train_batches:
ops = sentence_encoder.step(batch, train=True)
for i in range(len_output_feed):
s[i] += ops[i]
summary_steps += 1
global_step_inc_op.eval()
global_step_val = global_step.eval()
summary_sum = [
s[0] / summary_steps,
s[1] / summary_steps,
s[2] / summary_steps,
s[5] * 1. / s[3], # prec_bi
s[5] * 1. / s[4], # recall_bi
2. * s[5] * 1. / s[3] * s[5] * 1. / s[4] / (s[5] * 1. / s[3] + s[5] * 1. / s[4]), # f1_bi
s[6] * 1. / s[3], # prec_multi
s[6] * 1. / s[4], # recall_multi
2. * s[6] * 1. / s[3] * s[6] * 1. / s[4] / (s[6] * 1. / s[3] + s[6] * 1. / s[4]), # f1_multi,
]
print "Epoch %s" % epoch.eval()
for k in range(3, summary_num):
print " Train %s: %.5lf" % (
summary_list[k],
summary_sum[k]
)
summaries = sess.run(summary_op, feed_dict=dict(zip(summary_placeholders, summary_sum)))
for s in summaries:
train_writer.add_summary(summary=s, global_step=global_step_val)
s = np.zeros(len_output_feed)
summary_steps = 0
start_time = time.time()
saver.save(sess, "%s/checkpoint" % FLAGS.model_dir, global_step=global_step_val)
if FLAGS.method == "ilp":
res_test = test_ilp()
else:
res_test = test_mst()
summary_sum = [0] * 3 + res_test
summaries = sess.run(summary_op, feed_dict=dict(zip(summary_placeholders, summary_sum)))
for s in summaries:
test_writer.add_summary(summary=s, global_step=global_step_val)
for k in range(3, 9):
print " Test %s: %.5lf" % (summary_list[k], summary_sum[k])
if summary_sum[8] > best_test[1]:
best_test[0] = summary_sum[5]
best_test[1] = summary_sum[8]
sentence_encoder.learning_rate_decay_op.eval()
print " Best test: %.3f %.3f" % (best_test[0], best_test[1])
else:
print "Reading model parameters from %s" % FLAGS.model_dir
saver.restore(sess, tf.train.latest_checkpoint(FLAGS.model_dir))
if FLAGS.method == "ilp":
res_test = test_ilp()
else:
res_test = test_mst()
|
{"hexsha": "94cbe4e57b6a09632f9a33d1d0bcb25acf65bd89", "size": 21805, "ext": "py", "lang": "Python", "max_stars_repo_path": "baseline/main.py", "max_stars_repo_name": "amillert/DialogueDiscourseParsing", "max_stars_repo_head_hexsha": "9baf9f957f054b0ed0e0f2668eabb26c74e38759", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 54, "max_stars_repo_stars_event_min_datetime": "2019-01-15T14:44:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-24T07:39:04.000Z", "max_issues_repo_path": "baseline/main.py", "max_issues_repo_name": "amillert/DialogueDiscourseParsing", "max_issues_repo_head_hexsha": "9baf9f957f054b0ed0e0f2668eabb26c74e38759", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2019-04-13T19:04:35.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-20T17:21:29.000Z", "max_forks_repo_path": "baseline/main.py", "max_forks_repo_name": "amillert/DialogueDiscourseParsing", "max_forks_repo_head_hexsha": "9baf9f957f054b0ed0e0f2668eabb26c74e38759", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-10-07T13:15:49.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-21T05:45:29.000Z", "avg_line_length": 42.0945945946, "max_line_length": 135, "alphanum_fraction": 0.5042421463, "include": true, "reason": "import numpy", "num_tokens": 5176}
|
"""
@author: LXA
Date: February 20, 2021
"""
import os
import sys
import tensorflow as tf
import numpy as np
import matplotlib
import platform
import shutil
import DNN_data
import time
import DNN_base
import DNN_tools
import plotData
import saveData
# log some of the settings stored in the dictionary R
def dictionary_out2file(R_dic, log_fileout):
DNN_tools.log_string('PDE type for problem: %s\n' % (R_dic['PDE_type']), log_fileout)
DNN_tools.log_string('Equation name for problem: %s\n' % (R_dic['eqs_name']), log_fileout)
if R_dic['activate_stop'] != 0:
DNN_tools.log_string('activate the stop_step and given_step= %s\n' % str(R_dic['max_epoch']), log_fileout)
else:
DNN_tools.log_string('no activate the stop_step and given_step = default: %s\n' % str(R_dic['max_epoch']), log_fileout)
DNN_tools.log_string('Network model of solving problem: %s\n' % str(R_dic['model']), log_fileout)
DNN_tools.log_string('Activate function for network: %s\n' % str(R_dic['activate_func']), log_fileout)
if R_dic['model'] != 'DNN':
DNN_tools.log_string('The frequency to neural network: %s\n' % (R_dic['freqs']), log_fileout)
if (R_dic['optimizer_name']).title() == 'Adam':
DNN_tools.log_string('optimizer:%s\n' % str(R_dic['optimizer_name']), log_fileout)
else:
DNN_tools.log_string('optimizer:%s with momentum=%f\n' % (R_dic['optimizer_name'], R_dic['momentum']), log_fileout)
if R_dic['train_group'] == 0:
DNN_tools.log_string('Training total loss \n', log_fileout)
elif R_dic['train_group'] == 1:
DNN_tools.log_string('Training total loss + parts loss \n', log_fileout)
elif R_dic['train_group'] == 2:
DNN_tools.log_string('Training parts loss \n', log_fileout)
DNN_tools.log_string('Init learning rate: %s\n' % str(R_dic['learning_rate']), log_fileout)
DNN_tools.log_string('Decay to learning rate: %s\n' % str(R_dic['learning_rate_decay']), log_fileout)
DNN_tools.log_string('hidden layer:%s\n' % str(R_dic['hidden_layers']), log_fileout)
DNN_tools.log_string('Batch-size 2 integral: %s\n' % str(R_dic['batch_size2integral']), log_fileout)
DNN_tools.log_string('Batch-size 2 auxiliary: %s\n' % str(R_dic['batch_size2auxiliary']), log_fileout)
def Bernoulli(Z=None):
BRR = [Z > 0.5]
BR = tf.cast(BRR, dtype=tf.float32)
return BR
def pi_star(t=None):
pi = tf.exp(1 - t) / (1 + tf.exp(1 - t))
return pi
def my_normal(t=None):
z = (1/np.sqrt(2*np.pi))*tf.exp(-0.5*t*t)
return z
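# Note: my_normal(t) is the standard normal density
# phi(t) = exp(-t^2/2) / sqrt(2*pi), used here as the model conditional
# density fY|X, and pi_star(t) = exp(1-t)/(1+exp(1-t)) is a logistic function
# used as the selection probability pi(t) in the integrals below
# (interpretation inferred from the surrounding code).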
def solve_Integral_Equa(R):
    log_out_path = R['FolderName']        # extract the output path from the dictionary R
    if not os.path.exists(log_out_path):  # check whether the path already exists
        os.mkdir(log_out_path)            # if log_out_path does not exist, create it
outFile2para_name = '%s_%s.txt' % ('para2', 'beta')
logfile_name = '%s%s.txt' % ('log2', R['activate_func'])
    log_fileout = open(os.path.join(log_out_path, logfile_name), 'w')        # create and open a writable log_train.txt file under this path
    para_outFile = open(os.path.join(log_out_path, outFile2para_name), 'w')  # create and open a writable para_outFile.txt file under this path
dictionary_out2file(R, log_fileout)
    # settings required for the integral-equation problem
batchsize = R['batch_size2integral']
batchsize2aux = R['batch_size2auxiliary']
wb_regular = R['regular_weight_biases']
lr_decay = R['learning_rate_decay']
learning_rate = R['learning_rate']
hidden_layers = R['hidden_layers']
act_func = R['activate_func']
# ------- set the problem ---------
data_indim = R['input_dim']
data_outdim = R['output_dim']
para_dim = R['estimate_para_dim']
    # mode used to initialize the weights and biases
flag2bnn = 'bnn'
input_dim = 1
if R['model'] == 'PDE_DNN_Cos_C_Sin_Base':
W2b, B2b = DNN_base.initialize_NN_random_normal2_CS(input_dim, para_dim, hidden_layers, flag2bnn)
else:
W2b, B2b = DNN_base.initialize_NN_random_normal2(input_dim, para_dim, hidden_layers, flag2bnn)
global_steps = tf.Variable(0, trainable=False)
with tf.device('/gpu:%s' % (R['gpuNo'])):
with tf.variable_scope('vscope', reuse=tf.AUTO_REUSE):
X = tf.placeholder(tf.float32, name='X_recv', shape=[batchsize, data_indim])
Y = tf.placeholder(tf.float32, name='Y_recv', shape=[batchsize, data_indim])
R2XY = tf.placeholder(tf.float32, name='R2XY_recv', shape=[batchsize, data_indim])
y_aux = tf.placeholder(tf.float32, name='y_auxiliary', shape=[batchsize2aux, 1])
beta = tf.Variable(tf.random.uniform([1, para_dim]), dtype='float32', name='beta')
# beta = tf.Variable([[0.25, -0.5]], dtype='float32', name='beta')
# beta = tf.constant([[0.25, -0.5]], dtype=tf.float32, name='beta')
tfOne = tf.placeholder(tf.float32, shape=[1, 1], name='tfOne')
inline_lr = tf.placeholder_with_default(input=1e-5, shape=[], name='inline_learning_rate')
            # available network models to choose from
if R['model'] == str('DNN'):
b_NN2y = DNN_base.PDE_DNN(y_aux, W2b, B2b, hidden_layers, activate_name=act_func)
# b_NN2Y = DNN_base.PDE_DNN(Y, W2b, B2b, hidden_layers, activate_name=act_func)
elif R['model'] == 'DNN_scale':
freq = R['freqs']
b_NN2y = DNN_base.PDE_DNN_scale(y_aux, W2b, B2b, hidden_layers, freq, activate_name=act_func)
# b_NN2Y = DNN_base.PDE_DNN_scale(Y, W2b, B2b, hidden_layers, freq, activate_name=act_func)
elif R['model'] == 'DNN_adapt_scale':
freq = R['freqs']
b_NN2y = DNN_base.PDE_DNN_adapt_scale(y_aux, W2b, B2b, hidden_layers, freq, activate_name=act_func)
# b_NN2Y = DNN_base.PDE_DNN_adapt_scale(Y, W2b, B2b, hidden_layers, freq, activate_name=act_func)
elif R['model'] == 'DNN_FourierBase':
freq = R['freqs']
b_NN2y = DNN_base.PDE_DNN_FourierBase(y_aux, W2b, B2b, hidden_layers, freq, activate_name=act_func)
# b_NN2Y = DNN_base.PDE_DNN_FourierBase(Y, W2b, B2b, hidden_layers, freq, activate_name=act_func)
elif R['model'] == 'DNN_Cos_C_Sin_Base':
freq = R['freqs']
b_NN2y = DNN_base.PDE_DNN_Cos_C_Sin_Base(y_aux, W2b, B2b, hidden_layers, freq, activate_name=act_func)
# b_NN2Y = DNN_base.PDE_DNN_Cos_C_Sin_Base(Y, W2b, B2b, hidden_layers, freq, activate_name=act_func)
            # TODO: can the loop below be turned into vectorized operations?
sum2bleft = tf.zeros(shape=[1, 1], dtype=tf.float32, name='01')
sum2bright = tf.zeros(shape=[1, 1], dtype=tf.float32, name='02')
            # use a loop to take out each Xi, plug it into the equation to compute b(·; beta), accumulate over i, and finally take the mean
for i in range(batchsize):
                Xtemp = tf.reshape(X[i], shape=[1, 1])  # take out Xi
                OneX = tf.concat([tfOne, Xtemp], axis=-1)  # 1 row, (1+dim) columns
                XiTrans = tf.transpose(OneX, [1, 0])  # (1, Xi)
                fYX_y = my_normal(t=y_aux-tf.matmul(beta, XiTrans))  # value of fY|X at y, i.e. fY|X(y)
                dfYX_beta = tf.matmul(my_normal(t=y_aux-tf.matmul(beta, XiTrans))*(y_aux-tf.matmul(beta, XiTrans)), OneX)  # derivative of fY|X(y) w.r.t. beta
                # beta has 1 row and para_dim columns
                fyx_1minus_phi_integral = tf.reduce_mean(fYX_y * (1 - pi_star(t=y_aux)), axis=0)  # integral of fY|X(t)*(1-pi(t))
                dfyx_phi_integral = tf.reduce_mean(dfYX_beta * pi_star(t=y_aux), axis=0)  # integral of diff_fY|X(y)*pi(t)
                ceof_vec2left = dfyx_phi_integral/fyx_1minus_phi_integral
                sum2bleft = sum2bleft + dfYX_beta + ceof_vec2left*fYX_y
                b_fyx_phi_integral = tf.reduce_mean(b_NN2y*fYX_y*pi_star(t=y_aux), axis=0)  # integral of b(t, beta)*fY|X(t)*pi(t)
ceof_vec2right = b_fyx_phi_integral / fyx_1minus_phi_integral
sum2bright = sum2bright + b_NN2y * fYX_y + ceof_vec2right * fYX_y
bleft = sum2bleft / batchsize # (1/N)sum{i=1:i=N(·)}
bright = sum2bright / batchsize # (1/N)sum{i=1:i=N(·)}
loss2b = tf.reduce_mean(tf.reduce_mean(tf.square(bleft - bright), axis=0))
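# A rough vectorization sketch for the loop above (an untested idea; it assumes
# my_normal broadcasts elementwise). Stacking all OneX rows at once lets
# broadcasting replace the per-sample Python loop:
#   OneX_all = tf.concat([tf.ones([batchsize, 1]), X], axis=-1)           # [N, 1+dim]
#   linear = tf.matmul(OneX_all, tf.transpose(beta))                      # [N, 1]
#   U = tf.transpose(y_aux) - linear                                      # [N, Naux] by broadcasting
#   fYX_all = my_normal(t=U)                                              # [N, Naux]
#   dfYX_all = (my_normal(t=U) * U)[:, :, None] * OneX_all[:, None, :]    # [N, Naux, 1+dim]
# The integrals then become tf.reduce_mean over the Naux axis, and the outer
# (1/N) * sum over i becomes a reduce_mean over the first axis.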
sum2Seff = tf.zeros(shape=[1, 1], dtype=tf.float32, name='03')
for i in range(batchsize):
Xtemp = tf.reshape(X[i], shape=[1, 1])
OneX = tf.concat([tfOne, Xtemp], axis=-1)  # 1 row, (1+dim) columns
XiTrans = tf.transpose(OneX, [1, 0])
fYX2y = my_normal(t=y_aux - tf.matmul(beta, XiTrans))  # value of fY|X at y, i.e. fY|X(y)
Yi = tf.reshape(Y[i], shape=[1, 1])  # Yi
fYX2Y = my_normal(t=Yi-tf.matmul(beta, XiTrans))  # value of fY|X at Yi, i.e. fY|X(Yi)
dfYX_beta2Y = tf.matmul(my_normal(t=Yi-tf.matmul(beta, XiTrans))*(Yi-tf.matmul(beta, XiTrans)), OneX)  # diff_fY|X(Yi)
dfYX_beta2y = tf.matmul(my_normal(t=y_aux - tf.matmul(beta, XiTrans)) * (y_aux - tf.matmul(beta, XiTrans)), OneX)  # diff_fY|X(t)
fyx_1minus_phi_integral = tf.reduce_mean(fYX2y * (1 - pi_star(t=y_aux)), axis=0)  # integral of fY|X(t)*(1-pi(t))
dfyx_phi_integral = tf.reduce_mean(dfYX_beta2y * pi_star(t=y_aux), axis=0)  # integral of diff_fY|X(y)*pi(t)
fyx_b_phi_integral = tf.reduce_mean(fYX2y * b_NN2y * pi_star(t=y_aux), axis=0)  # integral of fY|X(t)*b(t, beta)*pi(t)
R2XY_i = tf.reshape(R2XY[i], shape=[1, -1]) # Ri
Seff1 = (R2XY_i/fYX2Y) * dfYX_beta2Y - ((1-R2XY_i)/fyx_1minus_phi_integral) * dfyx_phi_integral # S^*_beta
if R['model'] == 'DNN':
b_NN2Yi = DNN_base.PDE_DNN(Yi, W2b, B2b, hidden_layers, activate_name=act_func)
elif R['model'] == 'DNN_scale':
freq = R['freqs']
b_NN2Yi = DNN_base.PDE_DNN_scale(Yi, W2b, B2b, hidden_layers, freq, activate_name=act_func)
elif R['model'] == 'DNN_adapt_scale':
freq = R['freqs']
b_NN2Yi = DNN_base.PDE_DNN_adapt_scale(Yi, W2b, B2b, hidden_layers, freq, activate_name=act_func)
elif R['model'] == 'DNN_FourierBase':
freq = R['freqs']
b_NN2Yi = DNN_base.PDE_DNN_FourierBase(Yi, W2b, B2b, hidden_layers, freq, activate_name=act_func)
elif R['model'] == 'DNN_Cos_C_Sin_Base':
freq = R['freqs']
b_NN2Yi = DNN_base.PDE_DNN_Cos_C_Sin_Base(Yi, W2b, B2b, hidden_layers, freq, activate_name=act_func)
Seff2 = R2XY_i * tf.reshape(b_NN2Yi, shape=[1, -1])
Seff3 = (1-R2XY_i) * (fyx_b_phi_integral/fyx_1minus_phi_integral)
Seff = Seff1 - Seff2 + Seff3
sum2Seff = sum2Seff + Seff
loss2s_temp = sum2Seff/batchsize
loss2Seff = tf.reduce_mean(tf.square(loss2s_temp))
if R['regular_weight_model'] == 'L1':
regular_WB2b = DNN_base.regular_weights_biases_L1(W2b, B2b)
elif R['regular_weight_model'] == 'L2':
regular_WB2b = DNN_base.regular_weights_biases_L2(W2b, B2b)
else:
regular_WB2b = tf.constant(0.0)
penalty_WB = wb_regular * regular_WB2b
loss = loss2b + loss2Seff + penalty_WB  # the loss function to optimize
my_optimizer = tf.train.AdamOptimizer(inline_lr)
if R['train_group'] == 0:
train_my_loss = my_optimizer.minimize(loss, global_step=global_steps)
if R['train_group'] == 1:
train_op1 = my_optimizer.minimize(loss2b, global_step=global_steps)
train_op2 = my_optimizer.minimize(loss2Seff, global_step=global_steps)
train_op3 = my_optimizer.minimize(loss, global_step=global_steps)
train_my_loss = tf.group(train_op1, train_op2, train_op3)
elif R['train_group'] == 2:
train_op1 = my_optimizer.minimize(loss2b, global_step=global_steps)
train_op2 = my_optimizer.minimize(loss2Seff, global_step=global_steps)
train_my_loss = tf.group(train_op1, train_op2)
t0 = time.time()
loss_b_all, loss_seff_all, loss_all = [], [], []  # empty lists; elements are added with append() during training
# with allow_soft_placement=True in ConfigProto, the GPU can be used
config = tf.ConfigProto(allow_soft_placement=True)  # configure the session parameters at creation time
config.gpu_options.allow_growth = True  # let TensorFlow allocate GPU memory on demand instead of reserving too much up front
config.allow_soft_placement = True  # if the requested device does not exist, fall back to an available one (e.g. from GPU to CPU)
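# ------------------- synthetic training data -------------------
# X is drawn from a normal with mean 0.5 and sigma 0.5, and the response follows
# Y = 0.25 - 0.5*X + eps with eps ~ N(0, 1), so the true parameter vector is
# beta = (0.25, -0.5). relate2XY is a Bernoulli(0.5) missingness indicator:
# Y is kept only where relate2XY = 1.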
x_batch = DNN_data.randnormal_mu_sigma(size=batchsize, mu=0.5, sigma=0.5)
y_init = 0.25 - 0.5 * x_batch + np.reshape(np.random.randn(batchsize, 1), (-1, 1))
y_aux_batch = DNN_data.rand_it(batch_size=batchsize2aux, variable_dim=data_indim, region_a=-2, region_b=2)
# y_aux_batch = DNN_data.randnormal_mu_sigma(size=batchsize2aux, mu=0.0, sigma=1.5)
# x_aux_batch = DNN_data.randnormal_mu_sigma(size=batchsize2aux, mu=0.5, sigma=0.5)
# y_aux_batch = 0.25 - 0.5 * x_aux_batch + np.reshape(np.random.randn(batchsize2aux, 1), (-1, 1))
relate2XY = np.reshape(np.random.randint(0, 2, batchsize), (-1, 1))
y_batch = np.multiply(y_init, relate2XY)
one2train = np.ones((1, 1))
with tf.Session(config=config) as sess:
sess.run(tf.global_variables_initializer())
tmp_lr = learning_rate
for i_epoch in range(R['max_epoch'] + 1):
tmp_lr = tmp_lr * (1 - lr_decay)
_, loss2b_tmp, loss2seff_tmp, loss_tmp, p_WB, beta_temp = sess.run(
[train_my_loss, loss2b, loss2Seff, loss, penalty_WB, beta],
feed_dict={X: x_batch, Y: y_batch, R2XY: relate2XY, y_aux: y_aux_batch, tfOne: one2train,
inline_lr: tmp_lr})
loss_b_all.append(loss2b_tmp)
loss_seff_all.append(loss2seff_tmp)
loss_all.append(loss_tmp)
if (i_epoch % 10) == 0:
DNN_tools.log_string('*************** epoch: %d*10 ****************' % int(i_epoch / 10), log_fileout)
DNN_tools.log_string('lossb for training: %.10f\n' % loss2b_tmp, log_fileout)
DNN_tools.log_string('lossS for training: %.10f\n' % loss2seff_tmp, log_fileout)
DNN_tools.log_string('loss for training: %.10f\n' % loss_tmp, log_fileout)
if (i_epoch % 100) == 0:
print('**************** epoch: %d*100 *******************'% int(i_epoch/100))
print('beta:[%f %f]' % (beta_temp[0, 0], beta_temp[0, 1]))
print('\n')
DNN_tools.log_string('*************** epoch: %d*100 *****************' % int(i_epoch/100), para_outFile)
DNN_tools.log_string('beta:[%f, %f]' % (beta_temp[0, 0], beta_temp[0, 1]), para_outFile)
DNN_tools.log_string('\n', para_outFile)
saveData.save_trainLoss2mat_1actFunc(loss_b_all, loss_seff_all, loss_all, actName=act_func,
outPath=R['FolderName'])
plotData.plotTrain_loss_1act_func(loss_b_all, lossType='loss_b', seedNo=R['seed'], outPath=R['FolderName'],
yaxis_scale=True)
plotData.plotTrain_loss_1act_func(loss_seff_all, lossType='loss_s', seedNo=R['seed'], outPath=R['FolderName'],
yaxis_scale=True)
plotData.plotTrain_loss_1act_func(loss_all, lossType='loss_all', seedNo=R['seed'], outPath=R['FolderName'],
yaxis_scale=True)
if __name__ == "__main__":
R={}
# -------------------------------------- choose CPU or GPU -----------------------------------------------
R['gpuNo'] = 2
# GPU is used by default, so do not set this flag to -1; set it to 0,1,2,3,...,n (n is the number of GPUs in the machine)
# os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # -1 means CPU mode
# os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # restrict the current process to GPU 0 only; the device name is '/gpu:0'
if platform.system() == 'Windows':
os.environ["CDUA_VISIBLE_DEVICES"] = "%s" % (R['gpuNo'])
else:
print('-------------------------------------- linux -----------------------------------------------')
# A Linux terminal has no GUI, so the following line is required; it must come before importing matplotlib.pyplot, otherwise it has no effect.
matplotlib.use('Agg')
if tf.test.is_gpu_available():
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3" # 设置当前使用的GPU设备仅为第 0,1,2,3 块GPU, 设备名称为'/gpu:0'
else:
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
# ------------------------------------------- output path setup ----------------------------------------
store_file = 'missdata'
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(BASE_DIR)
OUT_DIR = os.path.join(BASE_DIR, store_file)
if not os.path.exists(OUT_DIR):
print('---------------------- OUT_DIR ---------------------:', OUT_DIR)
os.mkdir(OUT_DIR)
R['seed'] = np.random.randint(1e5)
seed_str = str(R['seed'])  # convert the int seed to a string
FolderName = os.path.join(OUT_DIR, seed_str)  # join the paths
R['FolderName'] = FolderName
if not os.path.exists(FolderName):
print('--------------------- FolderName -----------------:', FolderName)
os.mkdir(FolderName)
# ---------------------------------------- copy and save the current file -----------------------------------------
if platform.system() == 'Windows':
tf.compat.v1.reset_default_graph()
shutil.copy(__file__, '%s/%s' % (FolderName, os.path.basename(__file__)))
else:
shutil.copy(__file__, '%s/%s' % (FolderName, os.path.basename(__file__)))
# ---------------------------- Setup of the integral equation problem ------------------------------
# if the value of step_stop_flag is not 0, it will activate stop condition of step to kill program
step_stop_flag = input('please input an integer number to activate step-stop----0:no---!0:yes--:')
R['activate_stop'] = int(step_stop_flag)
R['max_epoch'] = 200000
if 0 != R['activate_stop']:
epoch_stop = input('please input a stop epoch:')
R['max_epoch'] = int(epoch_stop)
R['PDE_type'] = 'Integral_Eq'
R['eqs_name'] = 'missdata'
R['input_dim'] = 1  # input dimension, i.e. the dimension of the problem
R['output_dim'] = 1  # output dimension
R['estimate_para_dim'] = 2
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Setup of DNN %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# settings for the training data
R['batch_size2integral'] = 200  # batch size for the interior training data
# R['batch_size2integral'] = 100  # batch size for the interior training data
# R['batch_size2integral'] = 5  # batch size for the interior training data
# R['batch_size2auxiliary'] = 100
R['batch_size2auxiliary'] = 50
R['optimizer_name'] = 'Adam'  # optimizer
R['learning_rate'] = 1e-2  # learning rate
R['learning_rate_decay'] = 5e-3  # learning-rate decay
# R['train_group'] = 0
# R['train_group'] = 1
R['train_group'] = 2
R['regular_weight_model'] = 'L0'
# R['regular_weight_model'] = 'L1'
# R['regular_weight_model'] = 'L2'
R['regular_weight_biases'] = 0.000 # Regularization parameter for weights
# R['regular_weight_biases'] = 0.001 # Regularization parameter for weights
# R['regular_weight_biases'] = 0.0025 # Regularization parameter for weights
# &&&&&&&&&&&&&&&&&&& 使用的网络模型 &&&&&&&&&&&&&&&&&&&&&&&&&&&
# R['model'] = 'DNN'
# R['model'] = 'DNN_BN'
# R['model'] = 'DNN_scale'
# R['model'] = 'DNN_adapt_scale'
R['model'] = 'DNN_FourierBase'
# R['model'] = 'DNN_Cos_C_Sin_Base'
# R['model'] = 'DNN_WaveletBase'
if R['model'] != 'DNN':
# frequency range of the network
R['freqs'] = np.arange(1, 101)
# R['freqs'] = np.concatenate(([1], np.arange(1, 100 - 1)), axis=0)
# &&&&&&&&&&&&&&&&&&&&&& 隐藏层的层数和每层神经元数目 &&&&&&&&&&&&&&&&&&&&&&&&&&&&
if R['model'] == 'DNN_Cos_C_Sin_Base':
# R['hidden_layers'] = (30, 20, 10, 10, 5)
R['hidden_layers'] = (100, 50, 30, 30, 20)
# R['hidden_layers'] = (100, 100, 80, 80, 60)  # 1*200+200*100+100*80+80*80+80*60+60*1 = 39460 parameters
# R['hidden_layers'] = (125, 100, 80, 80, 60)  # 1*250+250*100+100*80+80*80+80*60+60*1 = 44510 parameters
# R['hidden_layers'] = (100, 120, 80, 80, 80)  # 1*200+200*120+120*80+80*80+80*80+80*1 = 46680 parameters
# R['hidden_layers'] = (125, 120, 80, 80, 80)  # 1*250+250*120+120*80+80*80+80*80+80*1 = 52730 parameters
# R['hidden_layers'] = (100, 150, 100, 100, 80)  # 1*200+200*150+150*100+100*100+100*80+80*1 = 63280 parameters
# R['hidden_layers'] = (125, 150, 100, 100, 80)  # 1*250+250*150+150*100+100*100+100*80+80*1 = 70830 parameters
else:
# R['hidden_layers'] = (30, 20, 10, 10, 5)
R['hidden_layers'] = (100, 50, 30, 30, 20)
# R['hidden_layers'] = (400, 300, 300, 200, 100, 100, 50)
# R['hidden_layers'] = (500, 400, 300, 300, 200, 100)
# R['hidden_layers'] = (500, 400, 300, 200, 200, 100)
# &&&&&&&&&&&&&&&&&&& 激活函数的选择 &&&&&&&&&&&&&&&&&&&&&&&&&&&&&
# R['activate_func'] = 'relu'
R['activate_func'] = 'tanh'
# R['activate_func'] = 'sintanh'
# R['activate_func'] = 'leaky_relu'
# R['activate_func'] = 'srelu'
# R['activate_func'] = 's2relu'
# R['activate_func'] = 'elu'
# R['activate_func'] = 'selu'
# R['activate_func'] = 'phi'
solve_Integral_Equa(R)
"""
Cantera Simulator Adapter module
Used to run mechanism analysis with Cantera as an ideal gas in a batch reactor at constant V-U (adiabatic)
"""
import cantera as ct
import numpy as np
from typing import List, Optional, Type
from rmgpy.tools.canteramodel import generate_cantera_conditions
from rmgpy.tools.data import GenericData
from t3.logger import Logger
from t3.simulate.adapter import SimulateAdapter
from t3.simulate.factory import register_simulate_adapter
class CanteraConstantUV(SimulateAdapter):
"""
CanteraConstantUV is an adapter for the abstract class SimulateAdapter that simulates ideal gases
in a batch reactor adiabatically at constant volume.
Args:
t3 (dict): The T3.t3 attribute, which is a dictionary containing the t3 block from the input yaml or API.
rmg (dict): The T3.rmg attribute, which is a dictionary containing the rmg block from the input yaml or API.
paths (dict): The T3.paths attribute, which is a dictionary containing relevant paths.
logger (Logger): Instance of T3's Logger class.
atol (float): The absolute tolerance used when integrating.
rtol (float): The relative tolerance used when integrating.
observable_list (Optional[list]): Species used for SA. Entries are species labels as strings. Example: ['OH']
sa_atol (float, optional): The absolute tolerance used when performing sensitivity analysis.
sa_rtol (float, optional): The relative tolerance used when performing sensitivity analysis.
global_observables (Optional[List[str]]): List of global observables ['IgD', 'ESR', 'SL'] used by Cantera adapters.
Attributes:
all_data (list): List containing the following RMG GenericData objects grouped as a tuple:
time, data_list, reaction_sensitivity_data, thermodynamic_sensitivity_data
atol (float): The absolute tolerance used when integrating.
cantera_reactor_type (str): String specifying the type of Cantera reactor to use.
cantera_simulation (ct.ReactorNet): Cantera reactor net object.
conditions (list): List whose entries are reaction conditions for simulation.
global_observables (List[str]): List of global observables ['IgD', 'ESR', 'SL'] used by Cantera adapters.
inert_list (list): List of possible inert species in the model
inert_index_list (list): List of indices corresponding to the inert species present in the model.
initialconds (dict): Key is the Cantera species. Value is the initial mol fraction.
logger (Logger): Instance of T3's Logger class.
model (ct.Solution): Cantera solution object for the mechanism.
num_ct_reactions (int): Number of reactions in the model.
num_ct_species (int): Number of species in the model.
observable_list (list): Species used for SA. Entries are species labels as strings. Example: ['OH']
paths (dict): The T3.paths attribute, which is a dictionary containing relevant paths.
rmg (dict): The T3.rmg attribute, which is a dictionary containing the rmg block from the input yaml or API.
rtol (float): The relative tolerance used when integrating.
rxn_identifier_lookup (dict): Keys are reactions (str). Values are index in the model.
sa_atol (float): The absolute tolerance used when performing sensitivity analysis.
sa_rtol (float): The relative tolerance used when performing sensitivity analysis.
sensitive_species (list): List of sensitive species. Entries are strings that include the RMG index.
spc_identifier_lookup (dict): Keys are species (str). Values are index in the model.
t3 (dict): The T3.t3 attribute, which is a dictionary containing the t3 block from the input yaml or API.
"""
def __init__(self,
t3: dict,
rmg: dict,
paths: dict,
logger: Logger,
atol: float = 1e-16,
rtol: float = 1e-8,
observable_list: Optional[list] = None,
sa_atol: float = 1e-6,
sa_rtol: float = 1e-4,
global_observables: Optional[List[str]] = None
):
self.t3 = t3
self.rmg = rmg
self.paths = paths
self.logger = logger
self.atol = atol
self.rtol = rtol
self.observable_list = observable_list or list()
self.sa_atol = sa_atol
self.sa_rtol = sa_rtol
self.global_observables = global_observables
# initialize other attributes
self.sensitive_species = list()
self.initialconds = dict()
self.all_data = list()
# this adapter is for constant V batch simulations
self.cantera_reactor_type = 'IdealGasReactor'
self.model = None
self.cantera_simulation = None
self.conditions = list()
self.inert_list = ['He', 'Ne', 'Ar', 'Kr', 'Xe', 'N2']
self.inert_index_list = list()
self.num_ct_reactions = None
self.num_ct_species = None
self.set_up()
# note: set_up() above already builds inert_index_list and the species/reaction
# lookup tables, so they are not rebuilt here (rebuilding would append the
# inert indices a second time)
def set_up(self):
"""
Read in the Cantera input file and set up the empty attributes initialized in the init method.
"""
# read in the cantera .cti file
self.model = ct.Solution(infile=self.paths['cantera annotated'])
self.num_ct_reactions = len(self.model.reactions())
self.num_ct_species = len(self.model.species())
# create list of indices of the inerts
for i, species in enumerate(self.model.species()):
if species.name in self.inert_list:
self.inert_index_list.append(i)
self.spc_identifier_lookup = {}
for i, spc in enumerate(self.model.species()):
self.spc_identifier_lookup[spc.name] = i
self.rxn_identifier_lookup = {}
for i, rxn in enumerate(self.model.reactions()):
self.rxn_identifier_lookup[rxn.equation] = i
self.species_names_without_indices = [self.model.species()[i].name.split('(')[0] for i in
range(self.num_ct_species)]
# set initial conditions and find any species for SA
for input_species in self.rmg['species']:
# find index of this species in the list of Cantera species
idx = self.species_names_without_indices.index(input_species['label'])
self.initialconds.update({self.model.species(idx).name: input_species['concentration']})
if self.species_names_without_indices[idx] in self.observable_list:
self.sensitive_species.append(self.model.species(idx).name)
reactor_type_list = [self.cantera_reactor_type]
mol_frac_list = [self.initialconds]
Tlist = ([self.rmg['reactors'][0]['T']], 'K')
Plist = ([self.rmg['reactors'][0]['P']], 'bar')
rxn_time, units = self.rmg['reactors'][0]['termination_time']
reaction_time_list = ([rxn_time], units) # tuple giving the ([list of reaction times], units)
self.generate_conditions(reactor_type_list, reaction_time_list, mol_frac_list, Tlist, Plist)
def generate_conditions(self,
reactor_type_list: List[str],
reaction_time_list: tuple,
mol_frac_list: List[dict],
T0_list: Optional[tuple] = None,
P0_list: Optional[tuple] = None,
V0_list: Optional[tuple] = None,
):
"""
Saves all the reaction conditions.
Args:
reactor_type_list (list): A list of strings specifying the type of supported Cantera reactor:
IdealGasReactor: A constant volume, zero-dimensional reactor for ideal gas mixtures
IdealGasConstPressureReactor: A homogeneous, constant pressure, zero-dimensional reactor for ideal gas mixtures
IdealGasConstPressureTemperatureReactor: A homogeneous, constant pressure and constant temperature, zero-dimensional reactor
for ideal gas mixtures (the same as RMG's SimpleReactor)
reaction_time_list (tuple): A tuple object giving the ([list of reaction times], units)
mol_frac_list (list): A list of molfrac dictionaries with species object keys
and mole fraction values
To specify the system for an ideal gas, 2 of the following 3 parameters must be defined:
T0_list (tuple): A tuple giving the ([list of initial temperatures], units)
P0_list (tuple): A tuple giving the ([list of initial pressures], units)
V0_list (tuple): A tuple giving the ([list of initial specific volumes], units)
"""
self.conditions = generate_cantera_conditions(reactor_type_list,
reaction_time_list,
mol_frac_list,
T0_list,
P0_list,
V0_list,
)
def reinitialize_simulation(self,
T0=None,
P0=None,
X0=None,
V0=None,
):
"""
Re-initializes the cantera solution object (self.model) and cantera reactor object (self.cantera_reactor).
This method is called at the start of other methods in this class.
Args:
T0 (float): Initial temperature in Kelvin.
P0 (float): Initial pressure in Pascals.
X0 (dict): Initial mole fractions.
V0 (float): Initial specific volume in m^3/kg.
"""
# assign initial conditions
if V0 is None:
self.model.TPX = T0, P0, X0
elif P0 is None:
self.model.TDX = T0, 1 / V0, X0
self.cantera_reactor = ct.IdealGasReactor(self.model)
# Run this individual condition as a simulation
self.cantera_simulation = ct.ReactorNet([self.cantera_reactor])
# set simulation tolerance
self.cantera_simulation.atol = self.atol
self.cantera_simulation.rtol = self.rtol
if self.sensitive_species:
if ct.__version__ == '2.2.1':
self.logger.warning('Cantera version 2.2.1 may not support sensitivity analysis unless SUNDIALS '
'was used during compilation.')
self.logger.warning('Upgrade to a newer version of Cantera in anaconda using the command '
'"conda update -c rmg cantera"')
# Add all the reactions as part of SA
for i in range(self.num_ct_reactions):
self.cantera_reactor.add_sensitivity_reaction(i)
# add all species enthalpies as part of SA
for i in range(self.num_ct_species):
self.cantera_reactor.add_sensitivity_species_enthalpy(i)
# Set the tolerances for the sensitivity coefficients
self.cantera_simulation.rtol_sensitivity = self.sa_rtol
self.cantera_simulation.atol_sensitivity = self.sa_atol
def simulate(self):
"""
Simulate the mechanism and store all results to the all_data attribute.
"""
if self.sensitive_species:
self.logger.info('Running a simulation with SA using CanteraConstantUV...')
else:
self.logger.info('Running a simulation using CanteraConstantUV...')
species_names_list = [species.name for species in self.model.species()]
self.all_data = list()
for condition in self.conditions:
# Set Cantera simulation conditions
T0 = condition.T0.value_si
try:
V0 = condition.V0.value_si
P0 = None
except AttributeError:
P0 = condition.P0.value_si
V0 = None
self.reinitialize_simulation(T0=T0,
P0=P0,
X0=condition.mol_frac,
V0=V0,
)
# Initialize the variables to be saved
times = []
temperature = []
pressure = []
species_data = []
kinetic_sensitivity_data = []
thermo_sensitivity_data = []
# Begin integration
while self.cantera_simulation.time < condition.reaction_time.value_si:
# Take a single internal integrator time step and record the resulting state of the reactor network.
self.cantera_simulation.step()
times.append(self.cantera_simulation.time)
temperature.append(self.cantera_reactor.T)
pressure.append(self.cantera_reactor.thermo.P)
species_data.append(self.cantera_reactor.thermo[species_names_list].X)
if self.sensitive_species:
# Cantera returns mass-based sensitivities rather than molar concentration or mole fraction based sensitivities.
# The equation for converting between them is:
#
# d ln xi = d ln wi - sum_(species i) (dln wi) (xi)
#
# where xi is the mole fraction of species i and wi is the mass fraction of species i
mass_frac_sensitivity_array = self.cantera_simulation.sensitivities()
if condition.reactor_type == 'IdealGasReactor':
# Row 0: mass, Row 1: volume, Row 2: internal energy or temperature, Row 3+: mass fractions of species
mass_frac_sensitivity_array = mass_frac_sensitivity_array[3:, :]
elif condition.reactor_type == 'IdealGasConstPressureReactor' or condition.reactor_type == 'IdealGasConstPressureTemperatureReactor':
# Row 0: mass, Row 1: enthalpy or temperature, Row 2+: mass fractions of the species
mass_frac_sensitivity_array = mass_frac_sensitivity_array[2:, :]
else:
raise Exception('Other types of reactor conditions are currently not supported')
for i in range(len(mass_frac_sensitivity_array)):
mass_frac_sensitivity_array[i] *= species_data[-1][i]
# extract kinetics SA
kinetics_mass_frac_sa = mass_frac_sensitivity_array[:, 0:self.num_ct_reactions]
sensitivity_array = np.zeros(len(self.sensitive_species) * len(self.model.reactions()))
for index, species in enumerate(self.sensitive_species):
for j in range(self.num_ct_reactions):
sensitivity_array[self.num_ct_reactions * index + j] = self.cantera_simulation.sensitivity(
species, j)
for i in range(len(kinetics_mass_frac_sa)):
if i not in self.inert_index_list:
# massFracSensitivity for inerts are returned as 0.0 in Cantera, so we do not include them here
sensitivity_array[self.num_ct_reactions * index + j] -= kinetics_mass_frac_sa[i][j]
kinetic_sensitivity_data.append(sensitivity_array)
# extract thermo SA
thermo_mass_frac_sa = mass_frac_sensitivity_array[:, self.num_ct_reactions:]
sensitivity_array = np.zeros(len(self.sensitive_species) * self.num_ct_species)
for index, species in enumerate(self.sensitive_species):
for j in range(self.num_ct_species):
sensitivity_array[self.num_ct_species * index + j] = self.cantera_simulation.sensitivity(
species, j + self.num_ct_reactions)
for i in range(len(mass_frac_sensitivity_array)):
if i not in self.inert_index_list:
# massFracSensitivity for inerts are returned as 0.0 in Cantera, so we must not include them here
sensitivity_array[self.num_ct_species * index + j] -= thermo_mass_frac_sa[i][j]
thermo_sensitivity_data.append(sensitivity_array)
# Convert species_data and sensitivity data to numpy arrays
species_data = np.array(species_data)
kinetic_sensitivity_data = np.array(kinetic_sensitivity_data)
thermo_sensitivity_data = np.array(thermo_sensitivity_data)
# Resave data into generic data objects
time = GenericData(label='Time',
data=times,
units='s')
temperature = GenericData(label='Temperature',
data=temperature,
units='K')
pressure = GenericData(label='Pressure',
data=pressure,
units='Pa')
condition_data = []
condition_data.append(temperature)
condition_data.append(pressure)
for index, species in enumerate(self.model.species()):
# Create a generic data object that stores the Cantera species object, to allow easier manipulation later.
species_generic_data = GenericData(label=species.name,
species=species,
data=species_data[:, index],
index=index
)
condition_data.append(species_generic_data)
# save kinetic data as generic data object
reaction_sensitivity_data = []
for index, species in enumerate(self.sensitive_species):
for j in range(self.num_ct_reactions):
reaction_sensitivity_generic_data = GenericData(
label='dln[{0}]/dln[k{1}]: {2}'.format(species, j + 1, self.model.reactions()[j]),
species=species,
reaction=self.model.reactions()[j],
data=kinetic_sensitivity_data[:, self.num_ct_reactions * index + j],
index=j + 1,
)
reaction_sensitivity_data.append(reaction_sensitivity_generic_data)
# save thermo data as generic data object
thermodynamic_sensitivity_data = []
for index, species in enumerate(self.sensitive_species):
for j in range(self.num_ct_species):
thermo_sensitivity_generic_data = GenericData(
label='dln[{0}]/dH[{1}]'.format(species, self.model.species()[j].name),
species=species,
data=thermo_sensitivity_data[:, self.num_ct_species * index + j],
index=j + 1,
)
thermodynamic_sensitivity_data.append(thermo_sensitivity_generic_data)
self.all_data.append((time, condition_data, reaction_sensitivity_data, thermodynamic_sensitivity_data))
def get_sa_coefficients(self):
"""
Obtain the SA coefficients.
Returns:
sa_dict (dict): a SA dictionary, whose structure is given in the docstring for T3/t3/main.py
"""
sa_dict = {'kinetics': dict(), 'thermo': dict(), 'time': list()}
for condition_data in self.all_data:
time, data_list, reaction_sensitivity_data, thermodynamic_sensitivity_data = condition_data
sa_dict['time'] = time.data
# extract kinetic SA
for rxn in reaction_sensitivity_data:
# for kinetics, get `ethane(1)` from `dln[ethane(1)]/dln[k8]: H(6)+ethane(1)=H2(12)+C2H5(5)`
observable_label = rxn.label.split('[')[1].split(']')[0]
if observable_label not in sa_dict['kinetics']:
sa_dict['kinetics'][observable_label] = dict()
# for kinetics, get k8 from `dln[ethane(1)]/dln[k8]: H(6)+ethane(1)=H2(12)+C2H5(5)` then only extract 8
parameter = rxn.label.split('[')[2].split(']')[0]
parameter = int(parameter[1:])
sa_dict['kinetics'][observable_label][parameter] = rxn.data
# extract thermo SA
for spc in thermodynamic_sensitivity_data:
# for thermo get 'C2H4(8)' from `dln[ethane(1)]/dH[C2H4(8)]`
observable_label = spc.label.split('[')[1].split(']')[0]
if observable_label not in sa_dict['thermo']:
sa_dict['thermo'][observable_label] = dict()
# for thermo get 'C2H4(8)' from `dln[ethane(1)]/dH[C2H4(8)]`
parameter = spc.label.split('[')[2].split(']')[0]
sa_dict['thermo'][observable_label][parameter] = spc.data
return sa_dict
def get_idt_by_T(self):
"""
Finds the ignition point by approximating dT/dt as a first order forward difference
and then finds the point of maximum slope.
Returns:
idt_dict (dict): Dictionary whose keys are 'idt' and 'idt_index' and whose values are lists of
the ignition delay time in seconds and index at which idt occurs respectively.
"""
idt_dict = {'idt': list(),
'idt_index': list(),
}
for i, condition_data in enumerate(self.all_data):
time, data_list, reaction_sensitivity_data, thermodynamic_sensitivity_data = condition_data
T_data = data_list[0]
dTdt = np.diff(T_data.data) / np.diff(time.data)
idt_dict['idt_index'].append(int(np.argmax(dTdt)))
idt_dict['idt'].append(time.data[idt_dict['idt_index'][i]])
return idt_dict
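# Illustrative hand-checked example of the IDT heuristic (numbers are made up):
# for time = [0, 1, 2, 3] s and T = [300, 310, 900, 905] K the forward
# differences dT/dt are [10, 590, 5] K/s, so idt_index = 1 and idt = 1 s.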
def find_equilibrium(self,
constrained_state_vars: str,
):
"""
Args:
constrained_state_vars (str): One of the following options supported by Cantera:
'TP','TV','HP','SP','SV','UV'
Returns:
equilibrium_dictionaries (list[dict]): List whose entries are the mole fraction dictionaries of the
equilibrated model.
"""
equilibrium_dictionaries = list()
for condition in self.conditions:
# Set Cantera simulation conditions
T0 = condition.T0.value_si
try:
V0 = condition.V0.value_si
P0 = None
except AttributeError:
P0 = condition.P0.value_si
V0 = None
self.reinitialize_simulation(T0=T0,
P0=P0,
X0=condition.mol_frac,
V0=V0,
)
self.model.equilibrate(constrained_state_vars)
equilibrium_dictionaries.append(self.model.mole_fraction_dict())
return equilibrium_dictionaries
def get_t50(self,
species: str,
criteria: Optional[str] = 'mass_frac',
):
"""
Finds the half-life in seconds of the given species on either a mole fraction or mass fraction basis. Uses the
initial conditions and reactor type when the class was initialized.
Args:
species (str): Cantera species name
criteria (str): 'mol_frac' (50% reduction in X) or 'mass_frac' (50% reduction in Y)
Returns:
t50_list (list): List whose entries are the time [sec] of 50% conversion from initial amount.
"""
t50_list = list()
spc_index = self.spc_identifier_lookup[species]
for condition in self.conditions:
T0 = condition.T0.value_si
try:
V0 = condition.V0.value_si
P0 = None
except AttributeError:
P0 = condition.P0.value_si
V0 = None
self.reinitialize_simulation(T0=T0,
P0=P0,
X0=condition.mol_frac,
V0=V0,
)
if criteria == 'mol_frac':
x0 = self.model.X[spc_index]
elif criteria == 'mass_frac':
x0 = self.model.mass_fraction_dict()[species]
else:
raise ValueError(f'Invalid criteria: {criteria}')
while True:
self.cantera_simulation.step()
if criteria == 'mol_frac':
x1 = self.model.X[spc_index]
else:
x1 = self.model.mass_fraction_dict()[species]
if x1 < x0 * 0.5:
break
t50_list.append(self.cantera_simulation.time)
return t50_list
register_simulate_adapter("CanteraConstantUV", CanteraConstantUV)
# library imports
library(tidyverse)
library(scales)
library(limma)
library(edgeR)
library(psych)
# get the default plot width and height
width <- options()$repr.plot.width
height <- options()$repr.plot.height
# load the IRS-normalized data and check the table
data_import <- read_tsv("labeled_grouped_protein_summary_TMT_9_AVE_IRS_normalized.txt", guess_max = 10326)
# the "Filter" column flags contams and decoys
# the "Missing" column flags proteins without reporter ion intensities (full sets missing)
# the prepped table from pandas is sorted so these are the upper rows
data_all <- filter(data_import, is.na(Filter), is.na(Missing))
# save gene names for edgeR so we can double check that results line up
accessions <- data_all$Accession
# see how many rows in the table
nrow(data_all)
# we want the SL-normalized columns, subsetted by condition
sl_all <- data_all %>%
select(starts_with("SLNorm"))
sl_HeH <- sl_all %>% select(contains("_HeH_"))
sl_ETV6 <- sl_all %>% select(contains("_ETV6-RUNX1_"))
# and the IRS normed columns by condition
irs_all <- data_all %>%
select(starts_with("IRSNorm"))
irs_HeH <- irs_all %>% select(contains("_HeH_"))
irs_ETV6 <- irs_all %>% select(contains("_ETV6-RUNX1_"))
# and collect the pooled channels before and after IRS
sl_pool <- sl_all %>% select(contains("QC"))
irs_pool <- irs_all %>% select(contains("QC"))
# multi-panel scatter plot grids from the psych package
pairs.panels(log2(sl_pool), lm = TRUE, main = "Pooled Std before IRS")
pairs.panels(log2(irs_pool), lm = TRUE, main = "Pooled Std after IRS")
# multi-panel scatter plot grids
heh_sample <- sample(1:18, 5)
pairs.panels(log2(sl_HeH[heh_sample]), lm = TRUE, main = "HeH before IRS (random 5)")
pairs.panels(log2(irs_HeH[heh_sample]), lm = TRUE, main = "HeH after IRS (same 5)")
# multi-panel scatter plot grids
etv6_sample <- sample(1:9, 5)
pairs.panels(log2(sl_ETV6[etv6_sample]), lm = TRUE, main = "ETV6-RUNX1 before IRS (random 5)")
pairs.panels(log2(irs_ETV6[etv6_sample]), lm = TRUE, main = "ETV6-RUNX1 after IRS (same 5)")
# get the biological sample data into a DGEList object
group = c(rep('HeH', 18), rep('ETV6', 9))
y_sl <- DGEList(counts = cbind(sl_HeH, sl_ETV6), group = group, genes = accessions)
y_irs <- DGEList(counts = cbind(irs_HeH, irs_ETV6), group = group, genes = accessions)
# run TMM normalization (also includes a library size factor)
y_sl <- calcNormFactors(y_sl)
y_irs <- calcNormFactors(y_irs)
# set some colors by condition
colors = c(rep('red', 18), rep('blue', 9))
# check the clustering
plotMDS(y_sl, col = colors, main = "SL: all samples")
plotMDS(y_irs, col = colors, main = "IRS: all samples")
# we do not want the technical replicates in the mix for dispersion estimates
irs <- cbind(irs_HeH, irs_ETV6)
# load a new DGEList object (need to update the groups)
y <- DGEList(counts = irs, group = group, genes = accessions) # group was set above
y <- calcNormFactors(y)
# see what the normalization factors look like
y$samples
# Compute the normalized intensities (start with the IRS data)
# sample loading adjusts each channel to the same average total
lib_facs <- mean(colSums(irs)) / colSums(irs)
# print("Sample loading normalization factors")
print("Library size factors")
round(lib_facs, 4)
# the TMM factors are library adjustment factors (so divide by them)
norm_facs <- lib_facs / y$samples$norm.factors
# print these final correction factors
print("Combined (lib size and TMM) normalization factors")
round(norm_facs, 4)
# compute the normalized data as a new data frame
irs_tmm <- sweep(irs, 2, norm_facs, FUN = "*")
colnames(irs_tmm) <- str_c(colnames(irs), "_TMMnorm") # add suffix to col names
# head(results) # check that the column headers are okay
long_results <- gather(irs_tmm, key = "sample", value = "intensity") %>%
mutate(log_int = log10(intensity)) %>%
extract(sample, into = 'group', ".*?_(.*?)_", remove = FALSE)
head(long_results)
ggplot(long_results, aes(x = sample, y = log_int, fill = group)) +
geom_boxplot(notch = TRUE) +
coord_flip() +
ggtitle("edgeR normalized data")
# look at normalized intensity distributions for each sample
boxplot(log10(irs_tmm), col = colors,
xlab = 'TMT samples', ylab = 'log10 Intensity',
main = 'edgeR normalized data', notch = TRUE)
ggplot(long_results, aes(x = log_int, color = sample)) +
geom_density() +
guides(color = FALSE) +
ggtitle("edgeR normalized data (with legend is too busy)")
# we can compare CVs before and after IRS
sl <- cbind(sl_HeH, sl_ETV6)
# save column indexes for different conditions (indexes to data_raw frame)
# these make things easier (and reduce the chance for errors)
HeH <- 1:18
ETV6 <- (1:9) + 18
# create a CV computing function
CV <- function(df) {
ave <- rowMeans(df)
sd <- apply(df, 1, sd)
cv <- 100 * sd / ave
}
# put CVs in data frames to simplify plots and summaries
cv_frame <- data.frame(HeH_sl = CV(sl[HeH]), HeH_final = CV(irs_tmm[HeH]),
ETV6_sl = CV(sl[ETV6]), ETV6_final = CV(irs_tmm[ETV6]))
# see what the median CV values are
medians <- apply(cv_frame, 2, FUN = median)
print("Median CVs by condition, before/after IRS (%)")
round(medians, 1)
# see what the CV distributions look like
# need long form for ggplot
long_cv <- gather(cv_frame, key = "condition", value = "cv") %>%
extract(condition, into = 'group', "(.*?)_+", remove = FALSE)
# traditional boxplots
cv_plot <- ggplot(long_cv, aes(x = condition, y = cv, fill = group)) +
geom_boxplot(notch = TRUE) +
ggtitle("CV distributions")
# vertical orientation
cv_plot
# horizontal orientation
cv_plot + coord_flip()
# density plots
ggplot(long_cv, aes(x = cv, color = condition)) +
geom_density() +
coord_cartesian(xlim = c(0, 150)) +
ggtitle("CV distributions")
# compute dispersions and plot BCV
y <- estimateDisp(y)
plotBCV(y, main = "BCV plot of IRS normed, TMM normed, all 27")
# the exact test object has columns like fold-change, CPM, and p-values
et <- exactTest(y, pair = c("HeH", "ETV6"))
# this counts up, down, and unchanged genes (proteins) at 10% FDR
summary(decideTestsDGE(et, p.value = 0.10))
# the topTags function adds the BH FDR values to an exactTest data frame
# make sure we do not change the row order (the sort.by parameter)!
topTags(et, n = 25)
tt <- topTags(et, n = Inf, sort.by = "none")
tt <- tt$table # tt is a list. We just need the "table" data frame
# make an MD plot (like MA plot)
plotMD(et, p.value = 0.10)
abline(h = c(-1, 1), col = "black")
# check the p-value distribution
ggplot(tt, aes(PValue)) +
geom_histogram(bins = 100, fill = "white", color = "black") +
geom_hline(yintercept = mean(hist(et$table$PValue, breaks = 100,
plot = FALSE)$counts[26:100])) +
ggtitle("HeH vs ETV6 PValue distribution")
# get the averages within each condition
# results already has the normalized data in its left columns
tt$ave_HeH <- rowMeans(irs_tmm[HeH])
tt$ave_ETV6 <- rowMeans(irs_tmm[ETV6])
# add the candidate status column
tt <- tt %>%
mutate(candidate = cut(FDR, breaks = c(-Inf, 0.01, 0.05, 0.10, 1.0),
labels = c("high", "med", "low", "no")))
tt %>% count(candidate) # count candidates
ggplot(tt, aes(x = logFC, fill = candidate)) +
geom_histogram(binwidth=0.1, color = "black") +
facet_wrap(~candidate) +
coord_cartesian(xlim = c(-4, 4)) +
ggtitle("HeH vs ETV6-RUNX1 logFC distributions by candidate")
# ================= reformat edgeR test results ================================
collect_results <- function(df, tt, x, xlab, y, ylab) {
# Computes new columns and extracts some columns to make results frame
# df - data in data.frame
# tt - top tags table from edgeR test
# x - columns for first condition
# xlab - label for x
# y - columns for second condition
# ylab - label for y
# returns a new dataframe
# condition average vectors
ave_x <- rowMeans(df[x])
ave_y <- rowMeans(df[y])
# FC, direction, candidates
fc <- ifelse(ave_y > ave_x, (ave_y / ave_x), (-1 * ave_x / ave_y))
direction <- ifelse(ave_y > ave_x, "up", "down")
candidate <- cut(tt$FDR, breaks = c(-Inf, 0.01, 0.05, 0.10, 1.0),
labels = c("high", "med", "low", "no"))
# make data frame
temp <- cbind(df[c(x, y)], data.frame(logFC = tt$logFC, FC = fc,
PValue = tt$PValue, FDR = tt$FDR,
ave_x = ave_x, ave_y = ave_y,
direction = direction, candidate = candidate,
Acc = tt$genes))
# fix column headers for averages
names(temp)[names(temp) %in% c("ave_x", "ave_y")] <- str_c("ave_", c(xlab, ylab))
temp # return the data frame
}
# get the results
results <- collect_results(irs, tt, HeH, "HeH", ETV6, "ETV6")
transform <- function(results, x, y) {
# Make data frame with some transformed columns
# results - results data frame
# x - columns for x condition
# y - columns for y condition
# return new data frame
df <- data.frame(log10((results[x] + results[y])/2),
log2(results[y] / results[x]),
results$candidate,
-log10(results$FDR))
colnames(df) <- c("A", "M", "candidate", "P")
df # return the data frame
}
MA_plots <- function(results, x, y, title) {
# makes MA-plot DE candidate ggplots
# results - data frame with edgeR results and some condition average columns
# x - string for x-axis column
# y - string for y-axis column
# title - title string to use in plots
# returns a list of plots
# uses transformed data
temp <- transform(results, x, y)
# 2-fold change lines
ma_lines <- list(geom_hline(yintercept = 0.0, color = "black"),
geom_hline(yintercept = 1.0, color = "black", linetype = "dotted"),
geom_hline(yintercept = -1.0, color = "black", linetype = "dotted"))
# make main MA plot
ma <- ggplot(temp, aes(x = A, y = M)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_continuous(paste0("logFC (", y, "/", x, ")")) +
scale_x_continuous("Ave_intensity") +
ggtitle(title) +
ma_lines
# make separate MA plots
ma_facet <- ggplot(temp, aes(x = A, y = M)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_continuous(paste0("log2 FC (", y, "/", x, ")")) +
scale_x_continuous("log10 Ave_intensity") +
ma_lines +
facet_wrap(~ candidate) +
ggtitle(str_c(title, " (separated)"))
# make the plots visible
print(ma)
print(ma_facet)
}
scatter_plots <- function(results, x, y, title) {
# makes scatter-plot DE candidate ggplots
# results - data frame with edgeR results and some condition average columns
# x - string for x-axis column
# y - string for y-axis column
# title - title string to use in plots
# returns a list of plots
# 2-fold change lines
scatter_lines <- list(geom_abline(intercept = 0.0, slope = 1.0, color = "black"),
geom_abline(intercept = 0.301, slope = 1.0, color = "black", linetype = "dotted"),
geom_abline(intercept = -0.301, slope = 1.0, color = "black", linetype = "dotted"),
scale_y_log10(),
scale_x_log10())
# make main scatter plot
scatter <- ggplot(results, aes_string(x, y)) +
geom_point(aes(color = candidate, shape = candidate)) +
ggtitle(title) +
scatter_lines
# make separate scatter plots
scatter_facet <- ggplot(results, aes_string(x, y)) +
geom_point(aes(color = candidate, shape = candidate)) +
scatter_lines +
facet_wrap(~ candidate) +
ggtitle(str_c(title, " (separated)"))
# make the plots visible
print(scatter)
print(scatter_facet)
}
volcano_plot <- function(results, x, y, title) {
# makes a volcano plot
# results - a data frame with edgeR results
# x - string for the x-axis column
# y - string for y-axis column
# title - plot title string
# uses transformed data
temp <- transform(results, x, y)
# build the plot
ggplot(temp, aes(x = M, y = P)) +
geom_point(aes(color = candidate, shape = candidate)) +
xlab("log2 FC") +
ylab("-log10 FDR") +
ggtitle(str_c(title, " Volcano Plot"))
}
# make the DE plots
MA_plots(results, "ave_HeH", "ave_ETV6", "HeH vs ETV6/RUNX1")
scatter_plots(results, "ave_HeH", "ave_ETV6", "HeH vs ETV6/RUNX1")
volcano_plot(results, "ave_HeH", "ave_ETV6", "HeH vs ETV6/RUNX1")
# ============== individual protein expression plots ===========================
# function to extract the identifier part of the accession
get_identifier <- function(accession) {
identifier <- str_split(accession, "\\|", simplify = TRUE)
identifier[,3]
}
set_plot_dimensions <- function(width_choice, height_choice) {
options(repr.plot.width=width_choice, repr.plot.height=height_choice)
}
plot_top_tags <- function(results, nleft, nright, top_tags) {
# results should have data first, then test results (two condition summary table)
# nleft, nright are number of data points in each condition
# top_tags is number of up and number of down top DE candidates to plot
# get top upregulated
up <- results %>%
filter(logFC >= 0) %>%
arrange(FDR)
up <- up[1:top_tags, ]
# get top down regulated
down <- results %>%
filter(logFC < 0) %>%
arrange(FDR)
down <- down[1:top_tags, ]
# pack them into one data frame
proteins <- rbind(up, down)
color = c(rep("red", nleft), rep("blue", nright))
for (row_num in 1:nrow(proteins)) {
row <- proteins[row_num, ]
vec <- as.vector(unlist(row[1:(nleft + nright)]))
names(vec) <- colnames(row[1:(nleft + nright)])
title <- str_c(get_identifier(row$Acc), ", int: ", scientific(mean(vec), 2),
", p-val: ", scientific(row$FDR, digits = 3),
", FC: ", round(row$FC, digits = 1))
barplot(vec, col = color, main = title)
}
}
# set plot size, make plots, reset plot size
set_plot_dimensions(6, 4)
plot_top_tags(results, length(HeH), length(ETV6), 25)
set_plot_dimensions(width, height)
write.table(results, "IRS_R_averages_results.txt", sep = "\t",
row.names = FALSE, na = " ")
sessionInfo()
# *****************************************************************************
# © Copyright IBM Corp. 2018-2020. All Rights Reserved.
#
# This program and the accompanying materials
# are made available under the terms of the Apache V2.0
# which accompanies this distribution, and is available at
# http://www.apache.org/licenses/LICENSE-2.0
#
# *****************************************************************************
"""
The Built In Functions module contains preinstalled functions
"""
import datetime as dt
import logging
# for gradient boosting
import lightgbm
import numpy as np
import pandas as pd
import scipy as sp
from pyod.models.cblof import CBLOF
# for Spectral Analysis
from scipy import signal, fftpack
# for KMeans
# import skimage as ski
from skimage import util as skiutil # for nifty windowing
from sklearn import ensemble
from sklearn import linear_model
from sklearn import metrics
from sklearn.covariance import MinCovDet
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler, minmax_scale
from sklearn.utils import check_array
from .base import (BaseTransformer, BaseRegressor, BaseEstimatorFunction, BaseSimpleAggregator)
from .bif import (AlertHighValue)
from .ui import (UISingle, UIMultiItem, UIFunctionOutSingle, UISingleItem, UIFunctionOutMulti)
logger = logging.getLogger(__name__)
PACKAGE_URL = 'git+https://github.com/ibm-watson-iot/functions.git@'
_IS_PREINSTALLED = True
Error_SmallWindowsize = 0.0001
Error_Generic = 0.0002
FrequencySplit = 0.3
DefaultWindowSize = 12
SmallEnergy = 1e-20
KMeans_normalizer = 1
Spectral_normalizer = 100 / 2.8
FFT_normalizer = 1
Saliency_normalizer = 1
Generalized_normalizer = 1 / 300
def custom_resampler(array_like):
# count consecutive empty resampling bins; keep the counter as a function
# attribute so that it persists across calls
if not hasattr(custom_resampler, 'gap'):
custom_resampler.gap = 0
if array_like.values.size > 0:
custom_resampler.gap = 0
return 0
else:
custom_resampler.gap += 1
return custom_resampler.gap
def min_delta(df):
# minimal time delta for merging
if len(df.index.names) > 1:
df2 = df.copy()
df2.index = df2.index.droplevel(list(range(1, df.index.nlevels)))
else:
df2 = df
try:
mindelta = df2.index.to_series().diff().min()
except Exception as e:
logger.debug('Min Delta error: ' + str(e))
mindelta = pd.Timedelta('5 seconds')
if mindelta == dt.timedelta(seconds=0) or pd.isnull(mindelta):
mindelta = pd.Timedelta('5 seconds')
return mindelta, df2
def set_window_size_and_overlap(windowsize, trim_value=2 * DefaultWindowSize):
# make sure it exists
if windowsize is None:
windowsize = DefaultWindowSize
# make sure it is positive and not too large
trimmed_ws = np.minimum(np.maximum(windowsize, 1), trim_value)
# overlap
if trimmed_ws == 1:
ws_overlap = 0
else:
# larger overlap - half the window
ws_overlap = trimmed_ws // 2
return trimmed_ws, ws_overlap
def dampen_anomaly_score(array, dampening):
if dampening is None:
dampening = 0.9 # gradient dampening
if dampening >= 1:
return array
if dampening < 0.01:
return array
if array.size <= 1:
return array
gradient = np.gradient(array)
# dampened
grad_damp = np.float_power(abs(gradient), dampening) * np.sign(gradient)
# reconstruct (dampened) anomaly score by discrete integration
integral = []
x = array[0]
for x_el in np.nditer(grad_damp):
x = x + x_el
integral.append(x)
# shift array slightly to the right to position anomaly score
array_damp = np.roll(np.asarray(integral), 1)
array_damp[0] = array_damp[1]
# normalize
return array_damp / dampening / 2
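# Small hand-checked illustration (made-up numbers): a step in the score is
# flattened, because |gradient|**dampening < |gradient| for gradients > 1, e.g.
#   scores = np.array([0., 0., 10., 10., 10.])
#   damped = dampen_anomaly_score(scores, 0.5)   # -> approx [0, 0, 2.24, 4.47, 4.47]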
# Saliency helper functions
# copied from https://github.com/y-bar/ml-based-anomaly-detection
# remove the boring part from an image resp. time series
def series_filter(values, kernel_size=3):
"""
Filter a time series. In practice, this computes the running mean value inside the kernel.
As math formula, see https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html.
:param values:
:param kernel_size:
:return: The list of filtered average
"""
filter_values = np.cumsum(values, dtype=float)
filter_values[kernel_size:] = filter_values[kernel_size:] - filter_values[:-kernel_size]
filter_values[kernel_size:] = filter_values[kernel_size:] / kernel_size
for i in range(1, kernel_size):
filter_values[i] /= i + 1
return filter_values
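# Hand-checked example:
#   series_filter([1, 2, 3, 4, 5], kernel_size=3) -> [1.0, 1.5, 2.0, 3.0, 4.0]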
# Saliency class
# see https://www.inf.uni-hamburg.de/en/inst/ab/cv/research/research1-visual-attention.html
class Saliency(object):
def __init__(self, amp_window_size, series_window_size, score_window_size):
self.amp_window_size = amp_window_size
self.series_window_size = series_window_size
self.score_window_size = score_window_size
def transform_saliency_map(self, values):
"""
Transform a time series into its spectral residual, a technique borrowed from computer vision.
For example, see https://docs.opencv.org/master/d8/d65/group__saliency.html
:param values: a list or numpy array of float values.
:return: saliency map
"""
freq = np.fft.fft(values)
mag = np.sqrt(freq.real ** 2 + freq.imag ** 2)
# remove the boring part of a timeseries
spectral_residual = np.exp(np.log(mag) - series_filter(np.log(mag), self.amp_window_size))
freq.real = freq.real * spectral_residual / mag
freq.imag = freq.imag * spectral_residual / mag
# and apply inverse fourier transform
saliency_map = np.fft.ifft(freq)
return saliency_map
def transform_spectral_residual(self, values):
saliency_map = self.transform_saliency_map(values)
spectral_residual = np.sqrt(saliency_map.real ** 2 + saliency_map.imag ** 2)
return spectral_residual
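# Minimal usage sketch (the window sizes here are illustrative, not tuned
# defaults from this module):
#   import numpy as np
#   sal = Saliency(amp_window_size=24, series_window_size=24, score_window_size=100)
#   values = np.random.randn(512) + 10.0                  # strictly positive toy series
#   residual = sal.transform_spectral_residual(values)    # large values flag salient points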
def merge_score(dfEntity, dfEntityOrig, column_name, score, mindelta):
"""
Fit interpolated score to original entity slice of the full dataframe
"""
# equip score with time values, make sure it's positive
score[score < 0] = 0
dfEntity[column_name] = score
# merge
dfEntityOrig = pd.merge_asof(dfEntityOrig, dfEntity[column_name], left_index=True, right_index=True,
direction='nearest', tolerance=mindelta)
if column_name + '_y' in dfEntityOrig:
merged_score = dfEntityOrig[column_name + '_y'].to_numpy()
else:
merged_score = dfEntityOrig[column_name].to_numpy()
return merged_score
#####
# experimental function to interpolate over larger gaps
####
class Interpolator(BaseTransformer):
"""
Interpolates NaN and data to be interpreted as NaN (for example 0 as invalid sensor reading)
The window size is typically set large enough to allow for "bridging" gaps
Missing indicates sensor readings to be interpreted as invalid.
"""
def __init__(self, input_item, windowsize, missing, output_item):
super().__init__()
logger.debug(input_item)
self.input_item = input_item
# use 12 by default
self.windowsize, self.windowoverlap = set_window_size_and_overlap(windowsize)
self.missing = missing
self.output_item = output_item
self.inv_zscore = None
self.whoami = 'Interpolator'
def prepare_data(self, dfEntity):
logger.debug(self.whoami + ': prepare Data')
# operate on simple timestamp index
if len(dfEntity.index.names) > 1:
index_names = dfEntity.index.names
dfe = dfEntity.reset_index().set_index(index_names[0])
else:
index_names = None
dfe = dfEntity
# remove Nan
dfe = dfe[dfe[self.input_item].notna()]
# remove self.missing
dfe = dfe[dfe[self.input_item] != self.missing]
# interpolate gaps - data imputation
try:
dfe = dfe.interpolate(method="time")
except Exception as e:
logger.error('Prepare data error: ' + str(e))
        # one dimensional time series - named temperature for catchiness
        # replace any remaining NaN with 0
temperature = dfe[[self.input_item]].fillna(0).to_numpy(dtype=np.float64).reshape(-1, )
return dfe, temperature
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df.index.levels[0])
logger.debug(str(entities))
df_copy[self.output_item] = 0
# check data type
if df_copy[self.input_item].dtype != np.float64:
return (df_copy)
for entity in entities:
# per entity - copy for later inplace operations
dfe = df_copy.loc[[entity]].dropna(how='all')
dfe_orig = df_copy.loc[[entity]].copy()
# get rid of entityid part of the index
# do it inplace as we copied the data before
dfe.reset_index(level=[0], inplace=True)
dfe.sort_index(inplace=True)
dfe_orig.reset_index(level=[0], inplace=True)
dfe_orig.sort_index(inplace=True)
# minimal time delta for merging
mindelta, dfe_orig = min_delta(dfe_orig)
logger.debug('Timedelta:' + str(mindelta) + ' Index: ' + str(dfe_orig.index))
# interpolate gaps - data imputation by default
# for missing data detection we look at the timestamp gradient instead
dfe, temperature = self.prepare_data(dfe)
logger.debug('Module Interpolator, Entity: ' + str(entity) + ', Input: ' + str(
self.input_item) + ', Windowsize: ' + str(self.windowsize) + ', Output: ' + str(
self.output_item) + ', Inputsize: ' + str(temperature.size) + ', Fullsize: ' + str(
dfe_orig[self.input_item].values.shape))
            # initialize before the branch so the assignment below never hits an undefined name
            temperatureII = None
            if temperature.size <= self.windowsize:
                logger.debug(str(temperature.size) + ' <= ' + str(self.windowsize))
                dfe[self.output_item] = Error_SmallWindowsize
            else:
                logger.debug(str(temperature.size) + ', ' + str(self.windowsize))
try:
# length of time_series_temperature, signal_energy and ets_zscore is smaller than half the original
# extend it to cover the full original length
temperatureII = merge_score(dfe, dfe_orig, self.output_item, temperature, mindelta)
except Exception as e:
                    logger.error('Interpolation failed with ' + str(e))
idx = pd.IndexSlice
df_copy.loc[idx[entity, :], self.output_item] = temperatureII
msg = 'Interpolator'
self.trace_append(msg)
return (df_copy)
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name='input_item', datatype=float, description='Data item to interpolate'))
inputs.append(
UISingle(name='windowsize', datatype=int, description='Minimal size of the window for interpolating data.'))
inputs.append(UISingle(name='missing', datatype=int, description='Data to be interpreted as not-a-number.'))
# define arguments that behave as function outputs
outputs = []
outputs.append(UIFunctionOutSingle(name='output_item', datatype=float, description='Interpolated data'))
return (inputs, outputs)
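# Hedged usage sketch for Interpolator on the two-level (entity, timestamp)
# index its execute method expects; entity id, column name and window size are
# illustrative, and a real deployment runs inside the full pipeline.
def _demo_interpolator():
    ts = pd.date_range('2020-01-01', periods=10, freq='min')
    values = [3.0, 0.0, 3.1, np.nan, 3.2, 0.0, 3.3, 3.4, np.nan, 3.5]
    df = pd.DataFrame({'speed': values},
                      index=pd.MultiIndex.from_product([['pump_1'], ts],
                                                       names=['id', 'timestamp']))
    interp = Interpolator('speed', windowsize=3, missing=0, output_item='speed_interp')
    return interp.execute(df)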
#######################################################################################
# Scalers
#######################################################################################
class Standard_Scaler(BaseEstimatorFunction):
"""
Learns and applies standard scaling
"""
eval_metric = staticmethod(metrics.r2_score)
# class variables
train_if_no_model = True
def set_estimators(self):
self.estimators['standard_scaler'] = (StandardScaler, self.params)
logger.info('Standard Scaler initialized')
def __init__(self, features=None, targets=None, predictions=None):
super().__init__(features=features, targets=targets, predictions=predictions, keep_current_models=True)
# do not run score and call transform instead of predict
self.is_scaler = True
self.experiments_per_execution = 1
self.normalize = True # support for optional scaling in subclasses
self.prediction = self.predictions[0] # support for subclasses with univariate focus
self.params = {}
# used by all the anomaly scorers based on it
def prepare_data(self, dfEntity):
logger.debug(self.whoami + ': prepare Data for ' + self.prediction + ' column')
# operate on simple timestamp index
if len(dfEntity.index.names) > 1:
index_names = dfEntity.index.names
dfe = dfEntity.reset_index().set_index(index_names[0])
else:
index_names = None
dfe = dfEntity
# interpolate gaps - data imputation
try:
dfe = dfe.interpolate(method="time")
except Exception as e:
logger.error('Prepare data error: ' + str(e))
        # one dimensional time series - named temperature for catchiness
temperature = dfe[[self.prediction]].fillna(0).to_numpy(dtype=np.float64).reshape(-1, )
return dfe, temperature
# dummy function for scaler, can be replaced with anomaly functions
def kexecute(self, entity, df_copy):
return df_copy
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df_copy.index.levels[0])
logger.debug(str(entities))
missing_cols = [x for x in self.predictions if x not in df_copy.columns]
for m in missing_cols:
df_copy[m] = None
for entity in entities:
try:
check_array(df_copy.loc[[entity]][self.features].values, allow_nd=True)
except Exception as e:
logger.error(
'Found Nan or infinite value in feature columns for entity ' + str(entity) + ' error: ' + str(e))
continue
# support for optional scaling in subclasses
if self.normalize:
dfe = super()._execute(df_copy.loc[[entity]], entity)
df_copy.loc[entity, self.predictions] = dfe[self.predictions]
df_copy = self.kexecute(entity, df_copy)
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UIMultiItem(name='features', datatype=float, required=True))
inputs.append(UIMultiItem(name='targets', datatype=float, required=True, output_item='predictions',
is_output_datatype_derived=True))
# define arguments that behave as function outputs
outputs = []
return (inputs, outputs)
class Robust_Scaler(BaseEstimatorFunction):
"""
Learns and applies robust scaling, scaling after outlier removal
"""
eval_metric = staticmethod(metrics.r2_score)
# class variables
train_if_no_model = True
def set_estimators(self):
self.estimators['robust_scaler'] = (RobustScaler, self.params)
logger.info('Robust Scaler initialized')
def __init__(self, features=None, targets=None, predictions=None):
super().__init__(features=features, targets=targets, predictions=predictions, keep_current_models=True)
# do not run score and call transform instead of predict
self.is_scaler = True
self.experiments_per_execution = 1
self.params = {}
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df_copy.index.levels[0])
logger.debug(str(entities))
missing_cols = [x for x in self.predictions if x not in df_copy.columns]
for m in missing_cols:
df_copy[m] = None
for entity in entities:
# per entity - copy for later inplace operations
try:
check_array(df_copy.loc[[entity]][self.features].values, allow_nd=True)
except Exception as e:
logger.error(
'Found Nan or infinite value in feature columns for entity ' + str(entity) + ' error: ' + str(e))
continue
dfe = super()._execute(df_copy.loc[[entity]], entity)
df_copy.loc[entity, self.predictions] = dfe[self.predictions]
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UIMultiItem(name='features', datatype=float, required=True))
inputs.append(UIMultiItem(name='targets', datatype=float, required=True, output_item='predictions',
is_output_datatype_derived=True))
# define arguments that behave as function outputs
outputs = []
return (inputs, outputs)
class MinMax_Scaler(BaseEstimatorFunction):
"""
Learns and applies minmax scaling
"""
eval_metric = staticmethod(metrics.r2_score)
# class variables
train_if_no_model = True
def set_estimators(self):
self.estimators['minmax_scaler'] = (MinMaxScaler, self.params)
logger.info('MinMax Scaler initialized')
def __init__(self, features=None, targets=None, predictions=None):
super().__init__(features=features, targets=targets, predictions=predictions, keep_current_models=True)
# do not run score and call transform instead of predict
self.is_scaler = True
self.experiments_per_execution = 1
self.params = {}
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df_copy.index.levels[0])
logger.debug(str(entities))
missing_cols = [x for x in self.predictions if x not in df_copy.columns]
for m in missing_cols:
df_copy[m] = None
for entity in entities:
try:
check_array(df_copy.loc[[entity]][self.features].values, allow_nd=True)
except Exception as e:
logger.error(
'Found Nan or infinite value in feature columns for entity ' + str(entity) + ' error: ' + str(e))
continue
dfe = super()._execute(df_copy.loc[[entity]], entity)
df_copy.loc[entity, self.predictions] = dfe[self.predictions]
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UIMultiItem(name='features', datatype=float, required=True))
inputs.append(UIMultiItem(name='targets', datatype=float, required=True, output_item='predictions',
is_output_datatype_derived=True))
# define arguments that behave as function outputs
outputs = []
return (inputs, outputs)
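# The three scalers share one calling convention; a hedged sketch with made-up
# entity and column names (predictions is passed explicitly here, although the
# base class can also derive it from targets).
def _demo_scaler():
    ts = pd.date_range('2020-01-01', periods=6, freq='min')
    df = pd.DataFrame({'temp': [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]},
                      index=pd.MultiIndex.from_product([['dev_1'], ts],
                                                       names=['id', 'timestamp']))
    scaler = MinMax_Scaler(features=['temp'], targets=['temp'], predictions=['temp_scaled'])
    return scaler.execute(df)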
#######################################################################################
# Anomaly Scorers
#######################################################################################
class SpectralAnomalyScore(BaseTransformer):
"""
An unsupervised anomaly detection function.
    Applies a spectral analysis technique to extract features from time series data and to create z-scores.
    Moves a sliding window across the data signal and applies the anomaly model to each window.
The window size is typically set to 12 data points.
Try several anomaly detectors on your data and use the one that fits your data best.
"""
def __init__(self, input_item, windowsize, output_item):
super().__init__()
logger.debug(input_item)
self.input_item = input_item
# use 12 by default
self.windowsize, self.windowoverlap = set_window_size_and_overlap(windowsize)
# assume 1 per sec for now
self.frame_rate = 1
self.output_item = output_item
self.inv_zscore = None
self.whoami = 'Spectral'
def prepare_data(self, dfEntity):
logger.debug(self.whoami + ': prepare Data')
# operate on simple timestamp index
if len(dfEntity.index.names) > 1:
index_names = dfEntity.index.names
dfe = dfEntity.reset_index().set_index(index_names[0])
else:
index_names = None
dfe = dfEntity
# interpolate gaps - data imputation
try:
dfe = dfe.interpolate(method="time")
except Exception as e:
logger.error('Prepare data error: ' + str(e))
        # one dimensional time series - named temperature for catchiness
temperature = dfe[[self.input_item]].fillna(0).to_numpy(dtype=np.float64).reshape(-1, )
return dfe, temperature
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df.index.levels[0])
logger.debug(str(entities))
df_copy[self.output_item] = 0
# check data type
if df_copy[self.input_item].dtype != np.float64:
return (df_copy)
for entity in entities:
# per entity - copy for later inplace operations
dfe = df_copy.loc[[entity]].dropna(how='all')
dfe_orig = df_copy.loc[[entity]].copy()
# get rid of entityid part of the index
# do it inplace as we copied the data before
dfe.reset_index(level=[0], inplace=True)
dfe.sort_index(inplace=True)
dfe_orig.reset_index(level=[0], inplace=True)
dfe_orig.sort_index(inplace=True)
# minimal time delta for merging
mindelta, dfe_orig = min_delta(dfe_orig)
logger.debug('Timedelta:' + str(mindelta) + ' Index: ' + str(dfe_orig.index))
# interpolate gaps - data imputation by default
# for missing data detection we look at the timestamp gradient instead
dfe, temperature = self.prepare_data(dfe)
logger.debug(
'Module Spectral, Entity: ' + str(entity) + ', Input: ' + str(self.input_item) + ', Windowsize: ' + str(
self.windowsize) + ', Output: ' + str(self.output_item) + ', Overlap: ' + str(
self.windowoverlap) + ', Inputsize: ' + str(temperature.size))
            # initialize before the branch so the assignments below never hit an undefined name
            zScoreII = None
            inv_zScoreII = None
            if temperature.size <= self.windowsize:
                logger.debug(str(temperature.size) + ' <= ' + str(self.windowsize))
                dfe[self.output_item] = Error_SmallWindowsize
            else:
                logger.debug(str(temperature.size) + ', ' + str(self.windowsize))
                dfe[self.output_item] = Error_Generic
                if self.inv_zscore is not None:
                    dfe[self.inv_zscore] = Error_Generic
try:
# Fourier transform:
# frequency, time, spectral density
frequency_temperature, time_series_temperature, spectral_density_temperature = signal.spectrogram(
temperature, fs=self.frame_rate, window='hanning', nperseg=self.windowsize,
noverlap=self.windowoverlap, detrend='l', scaling='spectrum')
                    # cut off frequencies too low to fit into the window
frequency_temperatureb = (frequency_temperature > 2 / self.windowsize).astype(int)
frequency_temperature = frequency_temperature * frequency_temperatureb
frequency_temperature[frequency_temperature == 0] = 1 / self.windowsize
signal_energy = np.dot(spectral_density_temperature.T, frequency_temperature)
signal_energy[signal_energy < SmallEnergy] = SmallEnergy
inv_signal_energy = np.divide(np.ones(signal_energy.size), signal_energy)
dfe[self.output_item] = 0.0005
ets_zscore = abs(sp.stats.zscore(signal_energy)) * Spectral_normalizer
inv_zscore = abs(sp.stats.zscore(inv_signal_energy))
logger.debug(
'Spectral z-score max: ' + str(ets_zscore.max()) + ', Spectral inv z-score max: ' + str(
inv_zscore.max()))
# length of time_series_temperature, signal_energy and ets_zscore is smaller than half the original
# extend it to cover the full original length
dfe[self.output_item] = 0.0006
linear_interpolate = sp.interpolate.interp1d(time_series_temperature, ets_zscore, kind='linear',
fill_value='extrapolate')
zScoreII = merge_score(dfe, dfe_orig, self.output_item,
abs(linear_interpolate(np.arange(0, temperature.size, 1))), mindelta)
if self.inv_zscore is not None:
linear_interpol_inv_zscore = sp.interpolate.interp1d(time_series_temperature, inv_zscore,
kind='linear', fill_value='extrapolate')
inv_zScoreII = merge_score(dfe, dfe_orig, self.inv_zscore,
abs(linear_interpol_inv_zscore(np.arange(0, temperature.size, 1))),
mindelta)
except Exception as e:
logger.error('Spectral failed with ' + str(e))
idx = pd.IndexSlice
df_copy.loc[idx[entity, :], self.output_item] = zScoreII
if self.inv_zscore is not None:
df_copy.loc[idx[entity, :], self.inv_zscore] = inv_zScoreII
if self.inv_zscore is not None:
msg = 'SpectralAnomalyScoreExt'
else:
msg = 'SpectralAnomalyScore'
self.trace_append(msg)
return (df_copy)
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name='input_item', datatype=float, description='Data item to analyze'))
inputs.append(UISingle(name='windowsize', datatype=int,
description='Size of each sliding window in data points. Typically set to 12.'))
# define arguments that behave as function outputs
outputs = []
outputs.append(
UIFunctionOutSingle(name='output_item', datatype=float, description='Spectral anomaly score (z-score)'))
return (inputs, outputs)
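# Hedged usage sketch (names are made up): the spectral scorer, like the other
# scorers below, expects a two-level (entity, timestamp) index and a float64
# input column; outside the full pipeline some base-class context may be missing.
def _demo_spectral_score():
    ts = pd.date_range('2020-01-01', periods=100, freq='min')
    df = pd.DataFrame({'vibration': np.random.randn(100)},
                      index=pd.MultiIndex.from_product([['pump_1'], ts],
                                                       names=['id', 'timestamp']))
    scorer = SpectralAnomalyScore('vibration', windowsize=12, output_item='vibration_score')
    return scorer.execute(df)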
class SpectralAnomalyScoreExt(SpectralAnomalyScore):
"""
An unsupervised anomaly detection function.
    Applies a spectral analysis technique to extract features from time series data and to create z-scores.
    Moves a sliding window across the data signal and applies the anomaly model to each window.
The window size is typically set to 12 data points.
Try several anomaly detectors on your data and use the one that fits your data best.
"""
def __init__(self, input_item, windowsize, output_item, inv_zscore):
super().__init__(input_item, windowsize, output_item)
logger.debug(input_item)
self.inv_zscore = inv_zscore
def execute(self, df):
return super().execute(df)
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name='input_item', datatype=float, description='Data item to analyze'))
inputs.append(UISingle(name='windowsize', datatype=int,
description='Size of each sliding window in data points. Typically set to 12.'))
# define arguments that behave as function outputs
outputs = []
outputs.append(
UIFunctionOutSingle(name='output_item', datatype=float, description='Spectral anomaly score (z-score)'))
outputs.append(UIFunctionOutSingle(name='inv_zscore', datatype=float,
description='z-score of inverted signal energy - detects unusually low activity'))
return (inputs, outputs)
class KMeansAnomalyScore(BaseTransformer):
"""
An unsupervised anomaly detection function.
Applies a k-means analysis clustering technique to time series data.
Moves a sliding window across the data signal and applies the anomaly model to each window.
The window size is typically set to 12 data points.
    Try several anomaly models on your data and use the one that fits your data best.
"""
def __init__(self, input_item, windowsize, output_item, expr=None):
super().__init__()
logger.debug(input_item)
self.input_item = input_item
# use 12 by default
self.windowsize, windowoverlap = set_window_size_and_overlap(windowsize)
# step
self.step = self.windowsize - windowoverlap
# assume 1 per sec for now
self.frame_rate = 1
self.output_item = output_item
self.whoami = 'KMeans'
def prepare_data(self, dfEntity):
logger.debug(self.whoami + ': prepare Data')
# operate on simple timestamp index
if len(dfEntity.index.names) > 1:
index_names = dfEntity.index.names
dfe = dfEntity.reset_index().set_index(index_names[0])
else:
index_names = None
dfe = dfEntity
# interpolate gaps - data imputation
try:
dfe = dfe.interpolate(method="time")
except Exception as e:
logger.error('Prepare data error: ' + str(e))
        # one dimensional time series - named temperature for catchiness
temperature = dfe[[self.input_item]].fillna(0).to_numpy(dtype=np.float64).reshape(-1, )
return dfe, temperature
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df_copy.index.levels[0])
logger.debug(str(entities))
df_copy[self.output_item] = 0
# check data type
if df_copy[self.input_item].dtype != np.float64:
return (df_copy)
for entity in entities:
# per entity - copy for later inplace operations
dfe = df_copy.loc[[entity]].dropna(how='all')
dfe_orig = df_copy.loc[[entity]].copy()
# get rid of entityid part of the index
# do it inplace as we copied the data before
dfe.reset_index(level=[0], inplace=True)
dfe.sort_index(inplace=True)
dfe_orig.reset_index(level=[0], inplace=True)
dfe_orig.sort_index(inplace=True)
# minimal time delta for merging
mindelta, dfe_orig = min_delta(dfe_orig)
logger.debug('Timedelta:' + str(mindelta))
# interpolate gaps - data imputation by default
# for missing data detection we look at the timestamp gradient instead
dfe, temperature = self.prepare_data(dfe)
logger.debug(
'Module KMeans, Entity: ' + str(entity) + ', Input: ' + str(self.input_item) + ', Windowsize: ' + str(
self.windowsize) + ', Output: ' + str(self.output_item) + ', Overlap: ' + str(
self.step) + ', Inputsize: ' + str(temperature.size))
if temperature.size > self.windowsize:
logger.debug(str(temperature.size) + ',' + str(self.windowsize))
# Chop into overlapping windows
slices = skiutil.view_as_windows(temperature, window_shape=(self.windowsize,), step=self.step)
if self.windowsize > 1:
n_cluster = 40
else:
n_cluster = 20
n_cluster = np.minimum(n_cluster, slices.shape[0] // 2)
logger.debug('KMeans params, Clusters: ' + str(n_cluster) + ', Slices: ' + str(slices.shape))
cblofwin = CBLOF(n_clusters=n_cluster, n_jobs=-1)
try:
cblofwin.fit(slices)
except Exception as e:
logger.info('KMeans failed with ' + str(e))
                    self.trace_append('KMeans failed with ' + str(e))
continue
pred_score = cblofwin.decision_scores_.copy() * KMeans_normalizer
# length of time_series_temperature, signal_energy and ets_zscore is smaller than half the original
# extend it to cover the full original length
diff = temperature.size - pred_score.size
time_series_temperature = np.linspace(self.windowsize // 2, temperature.size - self.windowsize // 2 + 1,
temperature.size - diff)
linear_interpolate_k = sp.interpolate.interp1d(time_series_temperature, pred_score, kind='linear',
fill_value='extrapolate')
zScoreII = merge_score(dfe, dfe_orig, self.output_item,
linear_interpolate_k(np.arange(0, temperature.size, 1)), mindelta)
idx = pd.IndexSlice
df_copy.loc[idx[entity, :], self.output_item] = zScoreII
msg = 'KMeansAnomalyScore'
self.trace_append(msg)
return (df_copy)
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name='input_item', datatype=float, description='Data item to analyze'))
inputs.append(UISingle(name='windowsize', datatype=int,
description='Size of each sliding window in data points. Typically set to 12.'))
# define arguments that behave as function outputs
outputs = []
outputs.append(UIFunctionOutSingle(name='output_item', datatype=float, description='Anomaly score (kmeans)'))
return (inputs, outputs)
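# The core of the k-means scorer in isolation (hedged; entity bookkeeping and
# score merging are skipped): chop a series into overlapping windows and let
# CBLOF assign an outlier score to each window.
def _demo_windowed_cblof():
    series = np.random.randn(300)
    series[150:160] += 6.0  # anomalous stretch
    windows = skiutil.view_as_windows(series, window_shape=(12,), step=1)
    detector = CBLOF(n_clusters=8, n_jobs=-1)
    detector.fit(windows)
    return detector.decision_scores_  # one score per window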
class GeneralizedAnomalyScore(BaseTransformer):
"""
An unsupervised anomaly detection function.
Applies the Minimum Covariance Determinant (FastMCD) technique to detect outliers.
Moves a sliding window across the data signal and applies the anomaly model to each window.
The window size is typically set to 12 data points.
Try several anomaly detectors on your data and use the one that fits your data best.
"""
def __init__(self, input_item, windowsize, output_item):
super().__init__()
logger.debug(input_item)
self.whoami = 'GAM'
self.input_item = input_item
# use 12 by default
self.windowsize, windowoverlap = set_window_size_and_overlap(windowsize)
# step
self.step = self.windowsize - windowoverlap
# assume 1 per sec for now
self.frame_rate = 1
self.dampening = 1 # dampening - dampen anomaly score
self.output_item = output_item
self.normalizer = Generalized_normalizer
def prepare_data(self, dfEntity):
logger.debug(self.whoami + ': prepare Data')
# operate on simple timestamp index
if len(dfEntity.index.names) > 1:
index_names = dfEntity.index.names
dfe = dfEntity.reset_index().set_index(index_names[0])
else:
index_names = None
dfe = dfEntity
# interpolate gaps - data imputation
try:
dfe = dfe.interpolate(method="time")
except Exception as e:
logger.error('Prepare data error: ' + str(e))
        # one dimensional time series - named temperature for catchiness
temperature = dfe[[self.input_item]].fillna(0).to_numpy(dtype=np.float64).reshape(-1, )
return dfe, temperature
def feature_extract(self, temperature):
logger.debug(self.whoami + ': feature extract')
slices = skiutil.view_as_windows(temperature, window_shape=(self.windowsize,), step=self.step)
return slices
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df_copy.index.levels[0])
logger.debug(str(entities))
df_copy[self.output_item] = 0
# check data type
if df_copy[self.input_item].dtype != np.float64:
return (df_copy)
for entity in entities:
# per entity - copy for later inplace operations
dfe = df_copy.loc[[entity]].dropna(how='all')
dfe_orig = df_copy.loc[[entity]].copy()
# get rid of entityid part of the index
# do it inplace as we copied the data before
dfe.reset_index(level=[0], inplace=True)
dfe.sort_index(inplace=True)
dfe_orig.reset_index(level=[0], inplace=True)
dfe_orig.sort_index(inplace=True)
# minimal time delta for merging
mindelta, dfe_orig = min_delta(dfe_orig)
# interpolate gaps - data imputation by default
# for missing data detection we look at the timestamp gradient instead
dfe, temperature = self.prepare_data(dfe)
logger.debug('Module GeneralizedAnomaly, Entity: ' + str(entity) + ', Input: ' + str(
self.input_item) + ', Windowsize: ' + str(self.windowsize) + ', Output: ' + str(
self.output_item) + ', Overlap: ' + str(self.step) + ', Inputsize: ' + str(temperature.size))
if temperature.size > self.windowsize:
logger.debug(str(temperature.size) + "," + str(self.windowsize))
temperature -= np.mean(temperature, axis=0)
mcd = MinCovDet()
# Chop into overlapping windows (default) or run through FFT first
slices = self.feature_extract(temperature)
pred_score = None
try:
mcd.fit(slices)
pred_score = mcd.mahalanobis(slices).copy() * self.normalizer
except ValueError as ve:
logger.info(self.whoami + " GeneralizedAnomalyScore: Entity: " + str(entity) + ", Input: " + str(
self.input_item) + ", WindowSize: " + str(self.windowsize) + ", Output: " + str(
self.output_item) + ", Step: " + str(self.step) + ", InputSize: " + str(
slices.shape) + " failed in the fitting step with \"" + str(ve) + "\" - scoring zero")
dfe[self.output_item] = 0
# this fails in the interpolation step
continue
except Exception as e:
dfe[self.output_item] = 0
logger.error(self.whoami + " GeneralizedAnomalyScore: Entity: " + str(entity) + ", Input: " + str(
self.input_item) + ", WindowSize: " + str(self.windowsize) + ", Output: " + str(
self.output_item) + ", Step: " + str(self.step) + ", InputSize: " + str(
slices.shape) + " failed in the fitting step with " + str(e))
continue
                # pred_score is set here; the except branches above continue the loop
                # the per-window scores cover less than the full series,
                # so interpolate them back out to the original length
diff = temperature.size - pred_score.size
time_series_temperature = np.linspace(self.windowsize // 2, temperature.size - self.windowsize // 2 + 1,
temperature.size - diff)
logger.debug(self.whoami + ' Entity: ' + str(entity) + ', result shape: ' + str(
time_series_temperature.shape) + ' score shape: ' + str(pred_score.shape))
linear_interpolate_k = sp.interpolate.interp1d(time_series_temperature, pred_score, kind="linear",
fill_value="extrapolate")
gam_scoreI = linear_interpolate_k(np.arange(0, temperature.size, 1))
dampen_anomaly_score(gam_scoreI, self.dampening)
zScoreII = merge_score(dfe, dfe_orig, self.output_item, gam_scoreI, mindelta)
idx = pd.IndexSlice
df_copy.loc[idx[entity, :], self.output_item] = zScoreII
msg = "GeneralizedAnomalyScore"
self.trace_append(msg)
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name="input_item", datatype=float, description="Data item to analyze", ))
inputs.append(UISingle(name="windowsize", datatype=int,
description="Size of each sliding window in data points. Typically set to 12."))
# define arguments that behave as function outputs
outputs = []
outputs.append(
UIFunctionOutSingle(name="output_item", datatype=float, description="Anomaly score (GeneralizedAnomaly)", ))
return (inputs, outputs)
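# The FastMCD core in isolation (hedged sketch with synthetic data): fit a
# Minimum Covariance Determinant estimator on overlapping windows and use the
# Mahalanobis distance of each window as its anomaly score.
def _demo_mcd_scoring():
    series = np.random.randn(200)
    windows = skiutil.view_as_windows(series, window_shape=(12,), step=6)
    mcd = MinCovDet()
    mcd.fit(windows)
    return mcd.mahalanobis(windows)  # one distance per window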
class NoDataAnomalyScore(GeneralizedAnomalyScore):
"""
An unsupervised anomaly detection function.
Uses FastMCD to find gaps in data.
The function moves a sliding window across the data signal and applies the anomaly model to each window.
The window size is typically set to 12 data points.
"""
def __init__(self, input_item, windowsize, output_item):
super().__init__(input_item, windowsize, output_item)
self.whoami = 'NoData'
self.normalizer = 1
logger.debug('NoData')
def prepare_data(self, dfEntity):
logger.debug(self.whoami + ': prepare Data')
# operate on simple timestamp index
if len(dfEntity.index.names) > 1:
index_names = dfEntity.index.names
dfe = dfEntity.reset_index().set_index(index_names[0])
else:
index_names = None
dfe = dfEntity
# count the timedelta in seconds between two events
        timeSeq = (dfe.index.values - dfe.index[0].to_datetime64()) / np.timedelta64(1, 's')
        dfe = dfe.copy()
# one dimensional time series - named temperature for catchyness
# we look at the gradient of the time series timestamps for anomaly detection
# might throw an exception - we catch it in the super class !!
try:
temperature = np.gradient(timeSeq)
dfe[[self.input_item]] = temperature
except Exception as pe:
logger.info("NoData Gradient failed with " + str(pe))
dfe[[self.input_item]] = 0
        temperature = dfe[[self.input_item]].values.reshape(-1, )
        temperature[0] = 10 ** 10
return dfe, temperature
def execute(self, df):
df_copy = super().execute(df)
msg = "NoDataAnomalyScore"
self.trace_append(msg)
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name='input_item', datatype=float, description='Data item to analyze'))
inputs.append(UISingle(name='windowsize', datatype=int,
description='Size of each sliding window in data points. Typically set to 12.'))
# define arguments that behave as function outputs
outputs = []
outputs.append(UIFunctionOutSingle(name='output_item', datatype=float, description='No data anomaly score'))
return (inputs, outputs)
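# The no-data trick in isolation (hedged sketch): the gradient of the timestamp
# sequence is roughly the sampling interval, so data gaps show up as spikes.
def _demo_timestamp_gradient():
    idx = pd.date_range('2020-01-01', periods=10, freq='min').delete([4, 5, 6])
    seconds = (idx.values - idx[0].to_datetime64()) / np.timedelta64(1, 's')
    return np.gradient(seconds)  # jumps where the three samples are missing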
class FFTbasedGeneralizedAnomalyScore(GeneralizedAnomalyScore):
"""
An unsupervised and robust anomaly detection function.
Extracts temporal features from time series data using Fast Fourier Transforms.
Applies the GeneralizedAnomalyScore to the features to detect outliers.
Moves a sliding window across the data signal and applies the anomaly models to each window.
The window size is typically set to 12 data points.
Try several anomaly detectors on your data and use the one that best fits your data.
"""
def __init__(self, input_item, windowsize, output_item):
super().__init__(input_item, windowsize, output_item)
self.whoami = 'FFT'
self.normalizer = FFT_normalizer
logger.debug('FFT')
def feature_extract(self, temperature):
logger.debug(self.whoami + ': feature extract')
slices_ = skiutil.view_as_windows(temperature, window_shape=(self.windowsize,), step=self.step)
slicelist = []
for slice in slices_:
slicelist.append(fftpack.rfft(slice))
return np.stack(slicelist, axis=0)
def execute(self, df):
df_copy = super().execute(df)
msg = "FFTbasedGeneralizedAnomalyScore"
self.trace_append(msg)
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name="input_item", datatype=float, description="Data item to analyze", ))
inputs.append(UISingle(name="windowsize", datatype=int,
description="Size of each sliding window in data points. Typically set to 12."))
# define arguments that behave as function outputs
outputs = []
outputs.append(UIFunctionOutSingle(name="output_item", datatype=float,
description="Anomaly score (FFTbasedGeneralizedAnomalyScore)", ))
return (inputs, outputs)
#####
# experimental function with dampening factor
####
class FFTbasedGeneralizedAnomalyScore2(GeneralizedAnomalyScore):
"""
An unsupervised and robust anomaly detection function.
Extracts temporal features from time series data using Fast Fourier Transforms.
Applies the GeneralizedAnomalyScore to the features to detect outliers.
Moves a sliding window across the data signal and applies the anomaly models to each window.
The window size is typically set to 12 data points.
Try several anomaly detectors on your data and use the one that best fits your data.
"""
def __init__(self, input_item, windowsize, dampening, output_item):
super().__init__(input_item, windowsize, output_item)
self.whoami = 'FFT dampen'
self.dampening = dampening
self.normalizer = FFT_normalizer / dampening
logger.debug('FFT')
def feature_extract(self, temperature):
logger.debug(self.whoami + ': feature extract')
slices_ = skiutil.view_as_windows(temperature, window_shape=(self.windowsize,), step=self.step)
slicelist = []
for slice in slices_:
slicelist.append(fftpack.rfft(slice))
return np.stack(slicelist, axis=0)
def execute(self, df):
df_copy = super().execute(df)
msg = "FFTbasedGeneralizedAnomalyScore"
self.trace_append(msg)
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name="input_item", datatype=float, description="Data item to analyze", ))
inputs.append(UISingle(name="windowsize", datatype=int,
description="Size of each sliding window in data points. Typically set to 12."))
inputs.append(UISingle(name="dampening", datatype=float,
description="Moderate the anomaly score. Use a value <=1. Typically set to 1."))
# define arguments that behave as function outputs
outputs = []
outputs.append(UIFunctionOutSingle(name="output_item", datatype=float,
description="Anomaly score (FFTbasedGeneralizedAnomalyScore)", ))
return (inputs, outputs)
class SaliencybasedGeneralizedAnomalyScore(GeneralizedAnomalyScore):
"""
An unsupervised anomaly detection function.
Based on salient region detection models,
    it uses a fast Fourier transform to reconstruct a signal from the salient features of the signal.
It applies GeneralizedAnomalyScore to the reconstructed signal.
The function moves a sliding window across the data signal and applies its analysis to each window.
The window size is typically set to 12 data points.
Try several anomaly detectors on your data and use the one that fits your data.
"""
def __init__(self, input_item, windowsize, output_item):
super().__init__(input_item, windowsize, output_item)
self.whoami = 'Saliency'
self.saliency = Saliency(windowsize, 0, 0)
self.normalizer = Saliency_normalizer
logger.debug('Saliency')
def feature_extract(self, temperature):
logger.debug(self.whoami + ': feature extract')
temperature_saliency = self.saliency.transform_spectral_residual(temperature)
slices = skiutil.view_as_windows(temperature_saliency, window_shape=(self.windowsize,), step=self.step)
return slices
def execute(self, df):
df_copy = super().execute(df)
msg = "SaliencybasedGeneralizedAnomalyScore"
self.trace_append(msg)
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name="input_item", datatype=float, description="Data item to analyze"))
inputs.append(UISingle(name="windowsize", datatype=int,
description="Size of each sliding window in data points. Typically set to 12.", ))
# define arguments that behave as function outputs
outputs = []
outputs.append(UIFunctionOutSingle(name="output_item", datatype=float,
description="Anomaly score (SaliencybasedGeneralizedAnomalyScore)", ))
return (inputs, outputs)
#######################################################################################
# Anomaly detectors with scaling
#######################################################################################
class KMeansAnomalyScoreV2(Standard_Scaler):
"""
An unsupervised anomaly detection function.
Applies a k-means analysis clustering technique to time series data.
Moves a sliding window across the data signal and applies the anomaly model to each window.
The window size is typically set to 12 data points.
    The normalize switch lets the function learn and apply a standard scaler before computing the anomaly score.
    Try several anomaly models on your data and use the one that fits your data best.
"""
eval_metric = staticmethod(metrics.r2_score)
# class variables
train_if_no_model = True
def __init__(self, input_item, windowsize, normalize, output_item, expr=None):
super().__init__(features=[input_item], targets=[output_item], predictions=None)
logger.debug(input_item)
# do not run score and call transform instead of predict
self.input_item = input_item
# use 12 by default
self.windowsize, windowoverlap = set_window_size_and_overlap(windowsize)
# step
self.step = self.windowsize - windowoverlap
self.normalize = normalize
# assume 1 per sec for now
self.frame_rate = 1
self.output_item = output_item
self.whoami = 'KMeansV2'
def kexecute(self, entity, df_copy):
# per entity - copy for later inplace operations
dfe = df_copy.loc[[entity]].dropna(how='all')
dfe_orig = df_copy.loc[[entity]].copy()
# get rid of entityid part of the index
# do it inplace as we copied the data before
dfe.reset_index(level=[0], inplace=True)
dfe.sort_index(inplace=True)
dfe_orig.reset_index(level=[0], inplace=True)
dfe_orig.sort_index(inplace=True)
# minimal time delta for merging
mindelta, dfe_orig = min_delta(dfe_orig)
logger.debug('Timedelta:' + str(mindelta))
# interpolate gaps - data imputation by default
# for missing data detection we look at the timestamp gradient instead
dfe, temperature = self.prepare_data(dfe)
logger.debug('Module ' + self.whoami + ', Entity: ' + str(entity) + ', Input: ' + str(
self.input_item) + ', Windowsize: ' + str(self.windowsize) + ', Output: ' + str(
self.output_item) + ', Overlap: ' + str(self.step) + ', Inputsize: ' + str(temperature.size))
if temperature.size > self.windowsize:
logger.debug(str(temperature.size) + ',' + str(self.windowsize))
# Chop into overlapping windows
slices = skiutil.view_as_windows(temperature, window_shape=(self.windowsize,), step=self.step)
if self.windowsize > 1:
n_cluster = 40
else:
n_cluster = 20
n_cluster = np.minimum(n_cluster, slices.shape[0] // 2)
            logger.debug('KMeans params, Clusters: ' + str(n_cluster) + ', Slices: ' + str(slices.shape))
cblofwin = CBLOF(n_clusters=n_cluster, n_jobs=-1)
try:
cblofwin.fit(slices)
except Exception as e:
logger.info('KMeans failed with ' + str(e))
                self.trace_append('KMeans failed with ' + str(e))
return df_copy
pred_score = cblofwin.decision_scores_.copy() * KMeans_normalizer
# length of time_series_temperature, signal_energy and ets_zscore is smaller than half the original
# extend it to cover the full original length
diff = temperature.size - pred_score.size
time_series_temperature = np.linspace(self.windowsize // 2, temperature.size - self.windowsize // 2 + 1,
temperature.size - diff)
linear_interpolate_k = sp.interpolate.interp1d(time_series_temperature, pred_score, kind='linear',
fill_value='extrapolate')
z_score_ii = merge_score(dfe, dfe_orig, self.output_item,
linear_interpolate_k(np.arange(0, temperature.size, 1)), mindelta)
idx = pd.IndexSlice
df_copy.loc[idx[entity, :], self.output_item] = z_score_ii
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name='input_item', datatype=float, description='Data item to analyze'))
inputs.append(UISingle(name='windowsize', datatype=int,
description='Size of each sliding window in data points. Typically set to 12.'))
inputs.append(UISingle(name='normalize', datatype=bool, description='Flag for normalizing data.'))
# define arguments that behave as function outputs
outputs = []
outputs.append(UIFunctionOutSingle(name='output_item', datatype=float, description='Anomaly score (kmeans)'))
return (inputs, outputs)
class GeneralizedAnomalyScoreV2(Standard_Scaler):
"""
An unsupervised anomaly detection function.
Applies the Minimum Covariance Determinant (FastMCD) technique to detect outliers.
Moves a sliding window across the data signal and applies the anomaly model to each window.
The window size is typically set to 12 data points.
    The normalize switch lets the function learn and apply a standard scaler before computing the anomaly score.
Try several anomaly detectors on your data and use the one that fits your data best.
"""
# class variables
eval_metric = staticmethod(metrics.r2_score)
train_if_no_model = True
def __init__(self, input_item, windowsize, normalize, output_item, expr=None):
super().__init__(features=[input_item], targets=[output_item], predictions=None)
logger.debug(input_item)
# do not run score and call transform instead of predict
self.input_item = input_item
# use 12 by default
self.windowsize, windowoverlap = set_window_size_and_overlap(windowsize)
# step
self.step = self.windowsize - windowoverlap
self.normalize = normalize
# assume 1 per sec for now
self.frame_rate = 1
self.dampening = 1 # dampening - dampen anomaly score
self.output_item = output_item
self.normalizer = Generalized_normalizer
self.whoami = 'GAMV2'
def feature_extract(self, temperature):
logger.debug(self.whoami + ': feature extract')
slices = skiutil.view_as_windows(temperature, window_shape=(self.windowsize,), step=self.step)
return slices
def kexecute(self, entity, df_copy):
# per entity - copy for later inplace operations
dfe = df_copy.loc[[entity]].dropna(how='all')
dfe_orig = df_copy.loc[[entity]].copy()
# get rid of entityid part of the index
# do it inplace as we copied the data before
dfe.reset_index(level=[0], inplace=True)
dfe.sort_index(inplace=True)
dfe_orig.reset_index(level=[0], inplace=True)
dfe_orig.sort_index(inplace=True)
# minimal time delta for merging
mindelta, dfe_orig = min_delta(dfe_orig)
logger.debug('Timedelta:' + str(mindelta))
# interpolate gaps - data imputation by default
# for missing data detection we look at the timestamp gradient instead
dfe, temperature = self.prepare_data(dfe)
logger.debug('Module ' + self.whoami + ', Entity: ' + str(entity) + ', Input: ' + str(
self.input_item) + ', Windowsize: ' + str(self.windowsize) + ', Output: ' + str(
self.output_item) + ', Overlap: ' + str(self.step) + ', Inputsize: ' + str(temperature.size))
if temperature.size > self.windowsize:
logger.debug(str(temperature.size) + "," + str(self.windowsize))
temperature -= np.mean(temperature, axis=0)
mcd = MinCovDet()
# Chop into overlapping windows (default) or run through FFT first
slices = self.feature_extract(temperature)
pred_score = None
try:
mcd.fit(slices)
pred_score = mcd.mahalanobis(slices).copy() * self.normalizer
except ValueError as ve:
logger.info(self.whoami + " GeneralizedAnomalyScore: Entity: " + str(entity) + ", Input: " + str(
self.input_item) + ", WindowSize: " + str(self.windowsize) + ", Output: " + str(
self.output_item) + ", Step: " + str(self.step) + ", InputSize: " + str(
slices.shape) + " failed in the fitting step with \"" + str(ve) + "\" - scoring zero")
dfe[self.output_item] = 0
return df_copy
except Exception as e:
dfe[self.output_item] = 0
logger.error(self.whoami + " GeneralizedAnomalyScore: Entity: " + str(entity) + ", Input: " + str(
self.input_item) + ", WindowSize: " + str(self.windowsize) + ", Output: " + str(
self.output_item) + ", Step: " + str(self.step) + ", InputSize: " + str(
slices.shape) + " failed in the fitting step with " + str(e))
return df_copy
            # pred_score is set here; the except branches above return early
            # the per-window scores cover less than the full series,
            # so interpolate them back out to the original length
diff = temperature.size - pred_score.size
time_series_temperature = np.linspace(self.windowsize // 2, temperature.size - self.windowsize // 2 + 1,
temperature.size - diff)
logger.debug(self.whoami + ' Entity: ' + str(entity) + ', result shape: ' + str(
time_series_temperature.shape) + ' score shape: ' + str(pred_score.shape))
linear_interpolate_k = sp.interpolate.interp1d(time_series_temperature, pred_score, kind="linear",
fill_value="extrapolate")
gam_scoreI = linear_interpolate_k(np.arange(0, temperature.size, 1))
dampen_anomaly_score(gam_scoreI, self.dampening)
zScoreII = merge_score(dfe, dfe_orig, self.output_item, gam_scoreI, mindelta)
idx = pd.IndexSlice
df_copy.loc[idx[entity, :], self.output_item] = zScoreII
msg = "GeneralizedAnomalyScore"
self.trace_append(msg)
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name="input_item", datatype=float, description="Data item to analyze", ))
inputs.append(UISingle(name="windowsize", datatype=int,
description="Size of each sliding window in data points. Typically set to 12."))
inputs.append(UISingle(name='normalize', datatype=bool, description='Flag for normalizing data.'))
# define arguments that behave as function outputs
outputs = []
outputs.append(
UIFunctionOutSingle(name="output_item", datatype=float, description="Anomaly score (GeneralizedAnomaly)", ))
return (inputs, outputs)
class FFTbasedGeneralizedAnomalyScoreV2(GeneralizedAnomalyScoreV2):
"""
An unsupervised and robust anomaly detection function.
Extracts temporal features from time series data using Fast Fourier Transforms.
Applies the GeneralizedAnomalyScore to the features to detect outliers.
Moves a sliding window across the data signal and applies the anomaly models to each window.
The window size is typically set to 12 data points.
    The normalize switch lets the function learn and apply a standard scaler before computing the anomaly score.
Try several anomaly detectors on your data and use the one that best fits your data.
"""
def __init__(self, input_item, windowsize, normalize, output_item):
super().__init__(input_item, windowsize, normalize, output_item)
self.whoami = 'FFTV2'
self.normalizer = FFT_normalizer
logger.debug('FFT')
def feature_extract(self, temperature):
logger.debug(self.whoami + ': feature extract')
slices_ = skiutil.view_as_windows(temperature, window_shape=(self.windowsize,), step=self.step)
slicelist = []
for slice in slices_:
slicelist.append(fftpack.rfft(slice))
return np.stack(slicelist, axis=0)
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name="input_item", datatype=float, description="Data item to analyze", ))
inputs.append(UISingle(name="windowsize", datatype=int,
description="Size of each sliding window in data points. Typically set to 12."))
inputs.append(UISingle(name='normalize', datatype=bool, description='Flag for normalizing data.'))
# define arguments that behave as function outputs
outputs = []
outputs.append(UIFunctionOutSingle(name="output_item", datatype=float,
description="Anomaly score (FFTbasedGeneralizedAnomalyScore)", ))
return (inputs, outputs)
class SaliencybasedGeneralizedAnomalyScoreV2(GeneralizedAnomalyScoreV2):
"""
An unsupervised anomaly detection function.
Based on salient region detection models,
    it uses a fast Fourier transform to reconstruct a signal from the salient features of the signal.
It applies GeneralizedAnomalyScore to the reconstructed signal.
The function moves a sliding window across the data signal and applies its analysis to each window.
The window size is typically set to 12 data points.
    The normalize switch lets the function learn and apply a standard scaler before computing the anomaly score.
Try several anomaly detectors on your data and use the one that fits your data.
"""
def __init__(self, input_item, windowsize, normalize, output_item):
super().__init__(input_item, windowsize, normalize, output_item)
self.whoami = 'SaliencyV2'
self.saliency = Saliency(windowsize, 0, 0)
self.normalizer = Saliency_normalizer
logger.debug('Saliency')
def feature_extract(self, temperature):
logger.debug(self.whoami + ': feature extract')
temperature_saliency = self.saliency.transform_spectral_residual(temperature)
slices = skiutil.view_as_windows(temperature_saliency, window_shape=(self.windowsize,), step=self.step)
return slices
def execute(self, df):
df_copy = super().execute(df)
msg = "SaliencybasedGeneralizedAnomalyScore"
self.trace_append(msg)
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UISingleItem(name="input_item", datatype=float, description="Data item to analyze"))
inputs.append(UISingle(name="windowsize", datatype=int,
description="Size of each sliding window in data points. Typically set to 12.", ))
inputs.append(UISingle(name='normalize', datatype=bool, description='Flag for normalizing data.'))
# define arguments that behave as function outputs
outputs = []
outputs.append(UIFunctionOutSingle(name="output_item", datatype=float,
description="Anomaly score (SaliencybasedGeneralizedAnomalyScore)", ))
return (inputs, outputs)
#######################################################################################
# Regressors
#######################################################################################
class BayesRidgeRegressor(BaseEstimatorFunction):
"""
Linear regressor based on a probabilistic model as provided by sklearn
"""
eval_metric = staticmethod(metrics.r2_score)
# class variables
train_if_no_model = True
num_rounds_per_estimator = 3
def BRidgePipeline(self):
steps = [('scaler', StandardScaler()), ('bridge', linear_model.BayesianRidge(compute_score=True))]
return Pipeline(steps)
def set_estimators(self):
params = {}
self.estimators['bayesianridge'] = (self.BRidgePipeline, params)
logger.info('Bayesian Ridge Regressor start searching for best model')
def __init__(self, features, targets, predictions=None):
super().__init__(features=features, targets=targets, predictions=predictions, stddev=True)
self.experiments_per_execution = 1
self.auto_train = True
self.correlation_threshold = 0
self.stop_auto_improve_at = -2
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df_copy.index.levels[0])
logger.debug(str(entities))
missing_cols = [x for x in self.predictions + self.pred_stddev if x not in df_copy.columns]
for m in missing_cols:
df_copy[m] = None
for entity in entities:
try:
check_array(df_copy.loc[[entity]][self.features].values)
dfe = super()._execute(df_copy.loc[[entity]], entity)
                logger.debug(df_copy.columns)
df_copy.loc[entity, self.predictions] = dfe[self.predictions]
df_copy.loc[entity, self.pred_stddev] = dfe[self.pred_stddev]
                logger.debug(df_copy.columns)
except Exception as e:
logger.info('Bayesian Ridge regressor for entity ' + str(entity) + ' failed with: ' + str(e))
df_copy.loc[entity, self.predictions] = 0
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UIMultiItem(name='features', datatype=float, required=True))
inputs.append(UIMultiItem(name='targets', datatype=float, required=True, output_item='predictions',
is_output_datatype_derived=True))
# define arguments that behave as function outputs
outputs = []
return (inputs, outputs)
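# What the pipeline estimator boils down to (hedged sketch with synthetic
# data): BayesianRidge can return a per-sample standard deviation next to the
# prediction, which is what feeds pred_stddev above.
def _demo_bayes_ridge():
    X = np.random.randn(100, 2)
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * np.random.randn(100)
    model = Pipeline([('scaler', StandardScaler()),
                      ('bridge', linear_model.BayesianRidge(compute_score=True))])
    model.fit(X, y)
    pred, std = model.predict(X, return_std=True)
    return pred, std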
class GBMRegressor(BaseEstimatorFunction):
"""
Regressor based on gradient boosting method as provided by lightGBM
"""
eval_metric = staticmethod(metrics.r2_score)
# class variables
train_if_no_model = True
def GBMPipeline(self):
steps = [('scaler', StandardScaler()), ('gbm', lightgbm.LGBMRegressor())]
return Pipeline(steps=steps)
def set_estimators(self):
# gradient_boosted
self.estimators['light_gradient_boosted_regressor'] = (self.GBMPipeline, self.params)
logger.info('GBMRegressor start searching for best model')
def __init__(self, features, targets, predictions=None, n_estimators=None, num_leaves=None, learning_rate=None,
max_depth=None):
super().__init__(features=features, targets=targets, predictions=predictions, keep_current_models=True)
self.experiments_per_execution = 1
self.correlation_threshold = 0
self.auto_train = True
self.num_rounds_per_estimator = 1
self.parameter_tuning_iterations = 1
self.cv = 1
if n_estimators is not None or num_leaves is not None or learning_rate is not None:
self.params = {'gbm__n_estimators': [n_estimators], 'gbm__num_leaves': [num_leaves],
'gbm__learning_rate': [learning_rate], 'gbm__max_depth': [max_depth], 'gbm__verbosity': [2]}
else:
self.params = {'gbm__n_estimators': [500], 'gbm__num_leaves': [50], 'gbm__learning_rate': [0.001],
'gbm__verbosity': [2]}
self.stop_auto_improve_at = -2
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df_copy.index.levels[0])
logger.debug(str(entities))
missing_cols = [x for x in self.predictions if x not in df_copy.columns]
for m in missing_cols:
df_copy[m] = None
for entity in entities:
# per entity - copy for later inplace operations
try:
check_array(df_copy.loc[[entity]][self.features].values, allow_nd=True)
except Exception as e:
logger.error(
'Found Nan or infinite value in feature columns for entity ' + str(entity) + ' error: ' + str(e))
continue
dfe = super()._execute(df_copy.loc[[entity]], entity)
df_copy.loc[entity, self.predictions] = dfe[self.predictions]
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UIMultiItem(name='features', datatype=float, required=True))
inputs.append(UIMultiItem(name='targets', datatype=float, required=True, output_item='predictions',
is_output_datatype_derived=True))
inputs.append(
UISingle(name='n_estimators', datatype=int, required=False, description=('Max rounds of boosting')))
inputs.append(
UISingle(name='num_leaves', datatype=int, required=False, description=('Max leaves in a boosting tree')))
inputs.append(UISingle(name='learning_rate', datatype=float, required=False, description=('Learning rate')))
inputs.append(
UISingle(name='max_depth', datatype=int, required=False, description=('Cut tree to prevent overfitting')))
# define arguments that behave as function outputs
outputs = []
return (inputs, outputs)
class SimpleRegressor(BaseEstimatorFunction):
"""
Regressor based on stochastic gradient descent and gradient boosting method as provided by sklearn
"""
eval_metric = staticmethod(metrics.r2_score)
# class variables
train_if_no_model = True
num_rounds_per_estimator = 3
    def GBRPipeline(self):
        # Pipeline steps need estimator instances, not classes
        steps = [('scaler', StandardScaler()), ('gbr', ensemble.GradientBoostingRegressor())]
        return Pipeline(steps)
    def SGDPipeline(self):
        steps = [('scaler', StandardScaler()), ('sgd', linear_model.SGDRegressor())]
        return Pipeline(steps)
def set_estimators(self):
# gradient_boosted
params = {'n_estimators': [100, 250, 500, 1000], 'max_depth': [2, 4, 10], 'min_samples_split': [2, 5, 9],
'learning_rate': [0.01, 0.02, 0.05], 'loss': ['ls']}
self.estimators['gradient_boosted_regressor'] = (ensemble.GradientBoostingRegressor, params)
logger.info('SimpleRegressor start searching for best model')
def __init__(self, features, targets, predictions=None, n_estimators=None, num_leaves=None, learning_rate=None,
max_depth=None):
super().__init__(features=features, targets=targets, predictions=predictions)
self.experiments_per_execution = 1
self.auto_train = True
self.correlation_threshold = 0
def execute(self, df):
df_copy = df.copy()
entities = np.unique(df_copy.index.levels[0])
logger.debug(str(entities))
missing_cols = [x for x in self.predictions if x not in df_copy.columns]
for m in missing_cols:
df_copy[m] = None
for entity in entities:
try:
check_array(df_copy.loc[[entity]][self.features].values)
dfe = super()._execute(df_copy.loc[[entity]], entity)
df_copy.loc[entity, self.predictions] = dfe[self.predictions]
except Exception as e:
                logger.info('SimpleRegressor for entity ' + str(entity) + ' failed with: ' + str(e))
df_copy.loc[entity, self.predictions] = 0
return df_copy
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UIMultiItem(name='features', datatype=float, required=True))
inputs.append(UIMultiItem(name='targets', datatype=float, required=True, output_item='predictions',
is_output_datatype_derived=True))
# define arguments that behave as function outputs
outputs = []
return (inputs, outputs)
class SimpleAnomaly(BaseRegressor):
"""
A supervised anomaly detection function.
Uses a regression model to predict the value of target data items based on dependent data items or features.
    Then, it compares the actual value to the predicted value and generates an alert when the difference falls outside of a threshold.
"""
# class variables
train_if_no_model = True
num_rounds_per_estimator = 3
def __init__(self, features, targets, threshold, predictions=None, alerts=None):
super().__init__(features=features, targets=targets, predictions=predictions)
if alerts is None:
alerts = ['%s_alert' % x for x in self.targets]
self.alerts = alerts
self.threshold = threshold
self.correlation_threshold = 0
def execute(self, df):
try:
df_new = super().execute(df)
df = df_new
for i, t in enumerate(self.targets):
prediction = self.predictions[i]
df['_diff_'] = (df[t] - df[prediction]).abs()
alert = AlertHighValue(input_item='_diff_', upper_threshold=self.threshold, alert_name=self.alerts[i])
alert.set_entity_type(self.get_entity_type())
df = alert.execute(df)
except Exception as e:
logger.info('Simple Anomaly failed with: ' + str(e))
return df
@classmethod
def build_ui(cls):
# define arguments that behave as function inputs
inputs = []
inputs.append(UIMultiItem(name='features', datatype=float, required=True))
inputs.append(UIMultiItem(name='targets', datatype=float, required=True, output_item='predictions',
is_output_datatype_derived=True))
inputs.append(UISingle(name='threshold', datatype=float,
description=('Threshold for firing an alert. Expressed as absolute value not percent.')))
# define arguments that behave as function outputs
outputs = []
outputs.append(
UIFunctionOutMulti(name='alerts', datatype=bool, cardinality_from='targets', is_datatype_derived=False, ))
return (inputs, outputs)
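# The alerting comparison in isolation (hedged sketch): flag samples where the
# absolute difference between actual and predicted values exceeds the threshold.
def _demo_threshold_alert():
    actual = np.array([1.0, 1.1, 5.0, 0.9])
    predicted = np.array([1.0, 1.0, 1.0, 1.0])
    return np.abs(actual - predicted) > 0.5  # [False, False, True, False]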
#######################################################################################
# Crude change point detection
#######################################################################################
def make_histogram(t, bins):
rv = ''
if t is None:
logger.warning('make_histogram encountered None')
return rv
logger.info('make_histogram ' + str(type(t)) + ' ' + str(t.shape))
if np.isnan(t).any():
logger.warning('make_histogram encountered NaN')
return rv
try:
tv = minmax_scale(t.values)
hist = np.histogram(tv, bins=bins, density=True)
logger.info('make_histogram returns ' + str(hist))
rv = str(hist[0])
except Exception as e:
logger.warning('make_histogram np.hist failed with ' + str(e))
return rv
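# Hedged sketch of make_histogram on synthetic data: the series is min-max
# scaled and the bin densities come back encoded as a string.
def _demo_make_histogram():
    s = pd.Series(np.random.randn(500))
    return make_histogram(s, bins=15)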
class HistogramAggregator(BaseSimpleAggregator):
"""
    Aggregates a data item into a histogram, encoded as a string. (This docstring shows as the function description in the UI.)
"""
def __init__(self, source=None, bins=None):
self.input_item = source
if bins is None:
self.bins = 15
else:
self.bins = int(bins)
def execute(self, group):
#
# group is a series
# when calling agg(<aggregator functions>) for each element of the group dictionary
# df_input.groupby([pd.Grouper(freq='1H', level='timestamp'), pd.Grouper(level='deviceid')])
#
return make_histogram(group, self.bins)
@classmethod
def build_ui(cls):
inputs = []
inputs.append(UISingleItem(name='source', datatype=float,
description='Choose the data items that you would like to aggregate'))
# output_item='name', is_output_datatype_derived=True))
inputs.append(UISingle(name='bins', datatype=int, description='Histogram bins - 15 by default'))
outputs = []
outputs.append(UIFunctionOutSingle(name='name', datatype=str, description='Histogram encoded as string'))
return (inputs, outputs)
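# A minimal, hypothetical sketch of how HistogramAggregator is applied with
# pandas (the grouper levels mirror the comment in execute above):
#   agg = HistogramAggregator(source='temperature', bins=10)
#   grouped = df_input.groupby([pd.Grouper(freq='1H', level='timestamp'),
#                               pd.Grouper(level='deviceid')])
#   hists = grouped['temperature'].agg(agg.execute)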
|
{"hexsha": "942249580b5a13fbb06468e74ca6e977de3b3718", "size": 79597, "ext": "py", "lang": "Python", "max_stars_repo_path": "iotfunctions/anomaly.py", "max_stars_repo_name": "TheTheseus/functions", "max_stars_repo_head_hexsha": "526bb01598c80c5db8995a979a613a7aa6a1e126", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "iotfunctions/anomaly.py", "max_issues_repo_name": "TheTheseus/functions", "max_issues_repo_head_hexsha": "526bb01598c80c5db8995a979a613a7aa6a1e126", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "iotfunctions/anomaly.py", "max_forks_repo_name": "TheTheseus/functions", "max_forks_repo_head_hexsha": "526bb01598c80c5db8995a979a613a7aa6a1e126", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.9417808219, "max_line_length": 134, "alphanum_fraction": 0.6215435255, "include": true, "reason": "import numpy,import scipy,from scipy", "num_tokens": 16999}
|
\documentclass[11pt,letterpaper,roman]{moderncv}
\usepackage{luapackageloader}
\directlua{resume = require("resume")}
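% The `resume` table is supplied by a companion Lua module (resume.lua) found
% via luapackageloader. A minimal, hypothetical sketch of that module;
% field contents are illustrative, not from the source:
%   local resume = {}
%   function resume.first_name() return "First" end
%   function resume.last_name()  return "Last" end
%   function resume.email()      return "me@example.com" end
%   function resume.education()  return [[\cventry{...}{...}{...}{...}{...}{}]] end
%   return resume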
% Modern CV type
\moderncvstyle{classic}
\moderncvcolor{grey}
% Packages
\usepackage{verbatim}
\usepackage[margin=0.5in]{geometry}
\usepackage{import}
% Custom command definitions
\definecolor{cvblue}{rgb}{0.22,0.45,0.70}
\newcommand{\code}{\texttt}
\newcommand{\bfcolor}[2]{\textbf{\textcolor{#1}{#2}}}
\newcommand{\bbold}[1]{\bfcolor{darkgray}{#1}}
\newcommand{\iitem}[1]{\item{#1}.}
\newcommand{\titem}[2]{\iitem{#1} \hfill \code{[#2]}}
% Personal data
\name{\directlua{tex.print(resume.first_name())}}{\directlua{tex.print(resume.last_name())}}
\phone[mobile]{\directlua{tex.print(resume.phone())}}
\email{\directlua{tex.print(resume.email())}}
\homepage{\directlua{tex.print(resume.website())}}
%% Content %%
\begin{document}
\makecvtitle
\sethintscolumntowidth{mmm yyyy -- mmm y}
%% \small{Insert objective here}
\section{Education}
\directlua{tex.print(resume.education())}
\section{Technical Skills}
\cvitem{Prog. Languages}{\directlua{tex.print(resume.prog_langs())}}
\cvitem{Web Development}{\directlua{tex.print(resume.web_dev())}}
\cvitem{Technologies}{\directlua{tex.print(resume.tech())}}
\section{Work Experience}
\directlua{tex.print(resume.work())}
\section{Leadership Experience}
\directlua{tex.print(resume.leadership())}
\end{document}
|
{"hexsha": "14c506c7c442fea7bebea49643e3083c264cdb31", "size": 1379, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "EricLondresResume.tex", "max_stars_repo_name": "slondr/Resume", "max_stars_repo_head_hexsha": "4791a8e78ed32c8013e4f72fadd77e09639136b0", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-29T03:18:14.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-29T03:18:14.000Z", "max_issues_repo_path": "EricLondresResume.tex", "max_issues_repo_name": "slondr/Resume", "max_issues_repo_head_hexsha": "4791a8e78ed32c8013e4f72fadd77e09639136b0", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EricLondresResume.tex", "max_forks_repo_name": "slondr/Resume", "max_forks_repo_head_hexsha": "4791a8e78ed32c8013e4f72fadd77e09639136b0", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.6444444444, "max_line_length": 92, "alphanum_fraction": 0.7396664249, "num_tokens": 438}
|
# -*- coding: utf-8 -*-
"""
Colour Blindness Plotting
=========================
Defines the colour blindness plotting objects:
- :func:`plot_cvd_simulation_Machado2009`
"""
from __future__ import division
from colour.blindness import cvd_matrix_Machado2009
from colour.plotting import CONSTANTS_COLOUR_STYLE, plot_image, override_style
from colour.utilities import dot_vector
__author__ = 'Colour Developers'
__copyright__ = 'Copyright (C) 2013-2020 - Colour Developers'
__license__ = 'New BSD License - https://opensource.org/licenses/BSD-3-Clause'
__maintainer__ = 'Colour Developers'
__email__ = 'colour-developers@colour-science.org'
__status__ = 'Production'
__all__ = ['plot_cvd_simulation_Machado2009']
@override_style()
def plot_cvd_simulation_Machado2009(RGB,
deficiency='Protanomaly',
severity=0.5,
M_a=None,
**kwargs):
"""
Performs colour vision deficiency simulation on given *RGB* colourspace
array using *Machado et al. (2009)* model.
Parameters
----------
RGB : array_like
*RGB* colourspace array.
deficiency : unicode, optional
{'Protanomaly', 'Deuteranomaly', 'Tritanomaly'}
Colour blindness / vision deficiency type.
severity : numeric, optional
Severity of the colour vision deficiency in domain [0, 1].
M_a : array_like, optional
Anomalous trichromacy matrix to use instead of Machado (2010)
pre-computed matrix.
Other Parameters
----------------
\\**kwargs : dict, optional
{:func:`colour.plotting.artist`, :func:`colour.plotting.plot_image`,
:func:`colour.plotting.render`},
Please refer to the documentation of the previously listed definitions.
Notes
-----
- Input *RGB* array is expected to be linearly encoded.
Returns
-------
tuple
Current figure and axes.
Examples
--------
>>> import numpy as np
>>> RGB = np.random.rand(32, 32, 3)
>>> plot_cvd_simulation_Machado2009(RGB) # doctest: +ELLIPSIS
(<Figure size ... with 1 Axes>, <...AxesSubplot...>)
.. image:: ../_static/Plotting_Plot_CVD_Simulation_Machado2009.png
:align: center
:alt: plot_cvd_simulation_Machado2009
"""
    if M_a is None:
        M_a = cvd_matrix_Machado2009(deficiency, severity)
        text = 'Deficiency: {0} - Severity: {1}'.format(deficiency, severity)
    else:
        # A custom anomalous trichromacy matrix was supplied, so the
        # deficiency/severity label does not apply.
        text = None
    settings = {'text_kwargs': {'text': text}}
settings.update(kwargs)
return plot_image(
CONSTANTS_COLOUR_STYLE.colour.colourspace.cctf_encoding(
dot_vector(M_a, RGB)), **settings)
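# A minimal, self-contained sketch (not part of the colour API) of what
# dot_vector(M_a, RGB) computes: each RGB triplet is multiplied by the
# 3x3 simulation matrix.
if __name__ == '__main__':
    import numpy as np

    M_a = np.eye(3)  # identity matrix: simulates no deficiency
    RGB = np.random.rand(4, 4, 3)
    out = np.einsum('ij,...j->...i', M_a, RGB)  # per-pixel matrix product
    assert np.allclose(out, RGB)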
|
{"hexsha": "43610d30deeec1ca2da7f47c005de25460cee557", "size": 2754, "ext": "py", "lang": "Python", "max_stars_repo_path": "colour/plotting/blindness.py", "max_stars_repo_name": "njwardhan/colour", "max_stars_repo_head_hexsha": "fedf769764b46cd0b4484cde7e4f59a09b37515c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "colour/plotting/blindness.py", "max_issues_repo_name": "njwardhan/colour", "max_issues_repo_head_hexsha": "fedf769764b46cd0b4484cde7e4f59a09b37515c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "colour/plotting/blindness.py", "max_forks_repo_name": "njwardhan/colour", "max_forks_repo_head_hexsha": "fedf769764b46cd0b4484cde7e4f59a09b37515c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.9438202247, "max_line_length": 79, "alphanum_fraction": 0.6394335512, "include": true, "reason": "import numpy", "num_tokens": 665}
|
import argparse
import os
import random
import sys
import time
import struct
from collections import Counter
from collections import deque
from operator import itemgetter
from tempfile import NamedTemporaryFile as NTF
import SharedArray as sa
import numpy as np
from numba import jit
from text_embedding.documents import *
FLOAT = np.float32
INT = np.uint32
CHUNK = 1000000
STORE = 10*CHUNK
FMT = 'iif'
NBYTES = 12
def vocab_count(corpusfile, vocabfile=None, min_count=1, verbose=True, comm=None):
'''counts word occurrences to determine vocabulary
Args:
corpusfile: corpus .txt file
vocabfile: output .txt file
min_count: minimum word count
verbose: display progress
comm: MPI Communicator
Returns:
[(word, count)] list if vocabfile is None ; else None
'''
rank, size = ranksize(comm)
if verbose:
write('Counting Words with Minimum Count '+str(min_count)+'\n', comm)
t = time.time()
with open(corpusfile, 'r') as f:
documents = (line for i, line in enumerate(f) if i%size == rank)
counts = Counter(w for doc in documents for w in doc.split())
if size > 1:
counts = comm.reduce(counts, root=0)
if not rank:
vocab = sorted((item for item in counts.items() if item[1] >= min_count), key=itemgetter(1), reverse=True)
if verbose:
write('Counted '+str(len(vocab))+' Words, Time='+str(round(time.time()-t))+' sec\n')
if vocabfile is None:
checkpoint(comm)
return vocab
with open(vocabfile, 'w') as f:
for word, count in vocab:
f.write(word+' '+str(count)+'\n')
checkpoint(comm)
@jit
def doc2cooc(indices, weights, window_size, V):
row, col, val = [], [], []
start = 0
for i, index in enumerate(indices):
if index != V:
for w, other in zip(weights[start-i:], indices[start:i]):
if other != V:
if index < other:
row.append(index)
col.append(other)
else:
row.append(other)
col.append(index)
val.append(w)
start += i >= window_size
return row, col, val
@jit
def doc2cooc_unweighted(indices, window_size, V):
row, col = [], []
start = 0
for i, index in enumerate(indices):
if index != V:
for other in indices[start:i]:
if other != V:
if index < other:
row.append(index)
col.append(other)
else:
row.append(other)
col.append(index)
start += i >= window_size
return row, col
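# A worked toy example (hypothetical, not from the source) of what doc2cooc
# emits. For indices [0, 1, 2] with window_size=2 and V=3, the weights array
# is [1/2, 1] (distance 2, then distance 1), and the upper-triangular
# cooccurrences produced are:
#   (0, 1) -> 1.0   # neighbours at distance 1
#   (0, 2) -> 0.5   # neighbours at distance 2
#   (1, 2) -> 1.0   # neighbours at distance 1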
def counts2bin(counts, f):
for (i, j), v in counts.items():
f.write(struct.pack(FMT, i, j, v))
def bin2counts(f, counts, subset):
position = f.tell()
ncooc = int((f.seek(0, 2)-position)/NBYTES)
f.seek(position)
for cooc in range(ncooc):
i, j, v = struct.unpack(FMT, f.read(NBYTES))
if i in subset:
counts[(i, j)] += v
# NOTE: Result is highly non-random and contains only upper triangular entries
def cooc_count(corpusfile, vocabfile, coocfile, window_size=10, unweighted=False, verbose=True, comm=None):
'''counts word cooccurrence in a corpus
Args:
corpusfile: corpus .txt file
vocabfile: vocab .txt file
coocfile: cooccurrence .bin file
window_size: length of cooccurrence window
unweighted: do not weight cooccurrence by distance
verbose: display progress
comm: MPI Communicator
Returns:
None
'''
rank, size = ranksize(comm)
with open(vocabfile, 'r') as f:
word2index = {line.split()[0]: INT(i) for i, line in enumerate(f)}
if unweighted:
one = FLOAT(1)
else:
weights = np.fromiter((1.0/d for d in reversed(range(1, window_size+1))), FLOAT, window_size)
V = INT(len(word2index))
counts = Counter()
if verbose:
write('\rCounting Cooccurrences with Window Size '+str(window_size)+'\n', comm)
lines = 0
t = time.time()
if size > 1:
random.seed(0)
idx = list(range(V))
random.shuffle(idx)
start, stop = int(rank/size*V), int((rank+1)/size*V)
subset = set(idx[start:stop])
positions = [0]*size
with open(corpusfile, 'r') as f:
n = 0
while True:
v = None
with NTF() as tmp:
dump = Counter()
files = comm.allgather(tmp.name)
for k, line in enumerate(f):
if k%size == rank:
doc = line.split()
if unweighted:
for i, j in zip(*doc2cooc_unweighted(np.fromiter((word2index.get(word, V) for word in doc), INT, len(doc)), window_size, V)):
if i in subset:
counts[(i, j)] += one
else:
dump[(i, j)] += one
else:
for i, j, v in zip(*doc2cooc(np.fromiter((word2index.get(word, V) for word in doc), INT, len(doc)), weights, window_size, V)):
if i in subset:
counts[(i, j)] += v
else:
dump[(i, j)] += v
if not (k+1)%CHUNK:
counts2bin(dump, tmp)
dump = Counter()
if verbose:
write('\rProcessed '+str(n+k+1)+' Lines, Time='+str(round(time.time()-t))+' sec', comm)
if not (k+1)%STORE:
n += k+1
break
counts2bin(dump, tmp)
tmp.flush()
for k in range(2):
for i, name in enumerate(files):
if i != rank:
with open(name, 'rb') as g:
g.seek(positions[i])
bin2counts(g, counts, subset)
positions[i] = g.tell() * (k == 0)
checkpoint(comm)
if verbose:
write('\rProcessed '+str(n)+' Lines, Time='+str(round(time.time()-t))+' sec', comm)
                if not comm.allreduce(int(v is not None)):
break
if verbose:
write('\rCounted '+str(comm.allreduce(len(counts.items())))+' Cooccurrences, Time='+str(round(time.time()-t))+' sec\n', comm)
for k in range(size):
if k == rank:
mode = 'ab' if rank else 'wb'
with open(coocfile, mode) as f:
counts2bin(counts, f)
checkpoint(comm)
else:
with open(corpusfile, 'r') as f:
for k, line in enumerate(f):
doc = line.split()
if unweighted:
for i, j in zip(*doc2cooc_unweighted(np.fromiter((word2index.get(word, V) for word in doc), INT, len(doc)), window_size, V)):
counts[(i, j)] += one
else:
for i, j, v in zip(*doc2cooc(np.fromiter((word2index.get(word, V) for word in doc), INT, len(doc)), weights, window_size, V)):
counts[(i, j)] += v
if verbose and not (k+1)%CHUNK:
write('\rProcessed '+str(k+1)+' Lines, Time='+str(round(time.time()-t))+' sec')
if verbose:
write('\rCounted '+str(len(counts.items()))+' Cooccurrences, Time='+str(round(time.time()-t))+' sec\n')
with open(coocfile, 'wb') as f:
counts2bin(counts, f)
def reformat_coocfile(inputfile, outputfile):
    '''converts a full-matrix cooccurrence file to an upper-triangular cooccurrence file
Args:
inputfile: full-matrix binary cooccurrence file with index starting at 1 in format "int,int,double" (as created by original GloVe code)
        outputfile: output binary file
Returns:
None
'''
with open(inputfile, 'rb') as f:
with open(outputfile, 'wb') as g:
while True:
try:
i, j, d = struct.unpack('iid', f.read(16))
except struct.error:
break
if i <= j:
g.write(struct.pack(FMT, INT(i-1), INT(j-1), FLOAT(d)))
# NOTE: Open using 'with ... as' to prevent too many open POSIX files
class SharedArrayManager:
_shared = []
def __init__(self, comm=None):
self._comm = comm
self._rank, self._size = ranksize(comm)
def __enter__(self):
return self
def __exit__(self, *args):
for array in self._shared:
try:
sa.delete(array)
except FileNotFoundError:
pass
def create(self, array=None, dtype=None):
comm, rank = self._comm, self._rank
if rank:
shared = sa.attach(comm.bcast(None, root=0))
else:
dtype = array.dtype if dtype is None else dtype
if self._size == 1:
return array.astype(dtype)
filename = str(time.time())
shared = sa.create(filename, array.shape, dtype=dtype)
shared += array.astype(dtype)
self._shared.append(comm.bcast(filename, root=0))
checkpoint(comm)
return shared
def splitcooc(f, ncooc=None):
row = deque()
col = deque()
if ncooc is None:
position = f.tell()
ncooc = int((f.seek(0, 2)-position)/NBYTES)
f.seek(position)
for cooc in range(ncooc):
i, j, xij = struct.unpack(FMT, f.read(NBYTES))
row.append(INT(i))
col.append(INT(j))
yield FLOAT(xij)
for idx in [row, col]:
for cooc in range(ncooc):
yield idx.popleft()
def symcooc(coocfile, comm=None):
rank, size = ranksize(comm)
with open(coocfile, 'rb') as f:
flength = f.seek(0, 2)
offset = int(flength*rank/size / NBYTES)
ncooc = int(flength*(rank+1)/size / NBYTES) - offset
f.seek(NBYTES*offset)
coocs = splitcooc(f, ncooc)
val = np.fromiter(coocs, FLOAT, ncooc)
row = np.fromiter(coocs, INT, ncooc)
col = np.fromiter(coocs, INT, ncooc)
sym = row < col
symcooc = ncooc + sum(sym)
values, rowdata, coldata = [np.empty(symcooc, dtype=dtype) for dtype in [FLOAT, INT, INT]]
values[:ncooc], rowdata[:ncooc], coldata[:ncooc] = val, row, col
values[ncooc:], rowdata[ncooc:], coldata[ncooc:] = val[sym], col[sym], row[sym]
return values, rowdata, coldata
# NOTE: Open using 'with ... as' to prevent too many open POSIX files
class GloVe(SharedArrayManager):
def _load_cooc_data(self, coocfile, alpha, xmax):
data, self.row, self.col = symcooc(coocfile, self._comm)
self.logcooc = np.log(data)
data /= FLOAT(xmax)
mask = data<1.0
data[mask] **= FLOAT(alpha)
data[~mask] = FLOAT(1.0)
self.weights = data
self.ncooc = data.shape[0]
self._cooc_data = [self.row, self.col, self.weights, self.logcooc]
def _shuffle_cooc_data(self, seed):
for data in self._cooc_data:
np.random.seed(seed)
np.random.shuffle(data)
@staticmethod
def _shapes(V, d):
return [(V, d)]*2 + [(V,)]*2
def _init_vecs(self, shapes, d, seed, init):
create = self.create
if self._rank:
self._params = [create() for shape in shapes]
elif init is None:
np.random.seed(seed)
self._params = [create((np.random.rand(*shape)-0.5)/d, dtype=FLOAT) for shape in shapes]
else:
self._params = [create(param, dtype=FLOAT) for param in init]
def __init__(self, coocfile, V=None, d=None, seed=None, init=None, alpha=0.75, xmax=100.0, comm=None):
'''
Args:
coocfile: binary cooccurrence file (assumed to have only upper triangular entries)
V: vocab size
d: vector dimension
seed: random seed for initializing vectors
init: tuple of numpy arrays to initialize parameters
alpha: GloVe weighting parameter
xmax: GloVe max cooccurrence parameter
comm: MPI Communicator
'''
super().__init__(comm=comm)
self._load_cooc_data(coocfile, alpha, xmax)
assert not (init is None and (V is None or d is None)), "'V' and 'd' must be defined if 'init' not given"
self._init_vecs(self._shapes(V, d), d, seed, init)
def embeddings(self):
'''returns GloVe embeddings using current parameters
Returns:
numpy array of size V x d
'''
return sum(self._params[:2]) / FLOAT(2.0)
def dump(self, fid):
'''dumps GloVe embeddings to binary file
Args:
fid: open file object or filename string
Returns:
None
'''
if not self._rank:
self.embeddings().tofile(fid)
_pnames = ['wv', 'cv', 'wb', 'cb']
_numpar = 4
def save(self, fid):
'''saves parameters to HDF5 file
Args:
fid: filename string
Returns:
None
'''
import h5py
if not self._rank:
            f = h5py.File(fid, 'a')  # explicit mode; recent h5py versions no longer default to append
for name, param in zip(self._pnames, self._params[:self._numpar]):
f.create_dataset(name, data=param)
f.close()
@staticmethod
@jit
def predict(i, j, wv, cv, wb, cb):
return np.dot(wv[i].T, cv[j])+wb[i]+cb[j]
def loss(self):
row, col = self.row, self.col
ncooc = self.ncooc
checkpoint(self._comm)
params = self._params[:self._numpar]
predict = self.predict
errors = np.fromiter((predict(i, j, *params) for i, j in zip(row, col)), FLOAT, ncooc) - self.logcooc
loss = np.inner(self.weights*errors, errors)
if self._size > 1:
ncooc = self._comm.allreduce(ncooc)
return self._comm.allreduce(loss/ncooc)
return loss/ncooc
@staticmethod
@jit
def sgd_epoch(row, col, weights, logcoocs, wv, cv, wb, cb, ncooc, eta):
etax2 = FLOAT(2.0*eta)
loss = FLOAT(0.0)
for i, j, weight, logcooc in zip(row, col, weights, logcoocs):
wvi, cvj = wv[i], cv[j]
error = np.dot(wvi.T, cvj) + wb[i] + cb[j] - logcooc
werror = weight*error
coef = werror*etax2
upd = coef*cvj
cvj -= coef*wvi
wvi -= upd
wb[i] -= coef
cb[j] -= coef
loss += werror*error
return loss / ncooc
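    # Per-cooccurrence gradient sketch for the GloVe objective
    #   L_ij = f(x_ij) * (w_i . c_j + b_i + b_j - log x_ij)^2
    # with error e = w_i . c_j + b_i + b_j - log x_ij and weight f = f(x_ij):
    #   dL/dw_i = 2*f*e*c_j,  dL/dc_j = 2*f*e*w_i,  dL/db_i = dL/db_j = 2*f*e,
    # which is exactly the 'coef'-scaled update applied in sgd_epoch above.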
def sgd(self, epochs=25, eta=0.01, seed=None, verbose=True, cumulative=True):
'''runs SGD on GloVe objective
Args:
epochs: number of epochs
eta: learning rate
seed: random seed for cooccurrence shuffling
verbose: write loss and time information
cumulative: compute cumulative loss instead of true loss; ignored if not verbose
Returns:
None
'''
comm, rank, size = self._comm, self._rank, self._size
random.seed(seed)
if verbose:
write('\rRunning '+str(epochs)+' Epochs of SGD with Learning Rate '+str(eta)+'\n', comm)
if not cumulative:
write('\rInitial Loss='+str(self.loss())+'\n', comm)
ncooc = comm.allreduce(self.ncooc)
t = time.time()
for ep in range(epochs):
if verbose:
write('Epoch '+str(ep+1), comm)
self._shuffle_cooc_data(random.randint(0, 2**32-1))
loss = self.sgd_epoch(*self._cooc_data, *self._params, ncooc, eta)
if verbose:
loss = comm.allreduce(loss) if cumulative else self.loss()
checkpoint(comm)
if verbose:
write(': Loss='+str(loss)+', Time='+str(round(time.time()-t))+' sec\n', comm)
t = time.time()
@staticmethod
@jit
def adagrad_epoch(row, col, weights, logcoocs, wv, cv, wb, cb, ssg_wv, ssg_cv, ssg_wb, ssg_cb, ncooc, eta):
eta = FLOAT(eta)
two = FLOAT(2.0)
loss = FLOAT(0.0)
for i, j, weight, logcooc in zip(row, col, weights, logcoocs):
wvi, cvj = wv[i], cv[j]
ssg_wvi, ssg_cvj = ssg_wv[i], ssg_cv[j]
error = np.dot(wvi.T, cvj) + wb[i] + cb[j] - logcooc
werror = weight*error
coef = two*werror
updi = coef*cvj
updj = coef*wvi
reg_wvi = np.sqrt(ssg_wvi)
reg_cvj = np.sqrt(ssg_cvj)
ssg_wvi += updi ** 2
ssg_cvj += updj ** 2
wvi -= eta * updi / reg_wvi
cvj -= eta * updj / reg_cvj
reg_wbi = np.sqrt(ssg_wb[i])
reg_cbj = np.sqrt(ssg_cb[j])
coefsq = coef ** 2
ssg_wb[i] += coefsq
ssg_cb[j] += coefsq
coef *= eta
wb[i] -= coef / reg_wbi
cb[j] -= coef / reg_cbj
loss += werror*error
return loss / ncooc
def adagrad(self, epochs=25, eta=0.05, seed=None, verbose=True, cumulative=True):
'''runs AdaGrad on GloVe objective
Args:
epochs: number of epochs
eta: learning rate
seed: random seed for cooccurrence shuffling
verbose: write loss and time information
cumulative: compute cumulative loss instead of true loss; ignored if not verbose
Returns:
None
'''
comm, rank, size = self._comm, self._rank, self._size
random.seed(seed)
if not hasattr(self, '_ssg'):
self._ssg = [self.create(np.ones(param.shape, dtype=FLOAT)) for param in self._params[:self._numpar]]
if verbose:
write('\rRunning '+str(epochs)+' Epochs of AdaGrad with Learning Rate '+str(eta)+'\n', comm)
if not cumulative:
write('\rInitial Loss='+str(self.loss())+'\n', comm)
ncooc = comm.allreduce(self.ncooc)
t = time.time()
for ep in range(epochs):
if verbose:
write('Epoch '+str(ep+1), comm)
self._shuffle_cooc_data(random.randint(0, 2**32-1))
loss = self.adagrad_epoch(*self._cooc_data, *self._params, *self._ssg, ncooc, eta)
if verbose:
loss = comm.allreduce(loss) if cumulative else self.loss()
checkpoint(comm)
if verbose:
write(': Loss='+str(loss)+', Time='+str(round(time.time()-t))+' sec\n', comm)
t = time.time()
# NOTE: Open using 'with ... as' to prevent too many open POSIX files
class SN(GloVe):
@staticmethod
def _shapes(V, d):
return [(V, d), (1,)]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def embeddings(self):
return self._params[0]
_pnames = ['wv', 'b']
_numpar = 2
@staticmethod
@jit
def predict(i, j, wv, b):
sumij = wv[i] + wv[j]
return np.dot(sumij.T, sumij) + b[0]
@staticmethod
@jit
def sgd_epoch(row, col, weights, logcoocs, wv, b, ncooc, eta):
etax2 = FLOAT(2.0*eta)
two = FLOAT(2.0)
loss = FLOAT(0.0)
for i, j, weight, logcooc in zip(row, col, weights, logcoocs):
wvi, wvj = wv[i], wv[j]
sumij = wvi + wvj
error = np.dot(sumij.T, sumij) + b[0] - logcooc
werror = weight*error
coef = werror*etax2
b -= coef
upd = (two*coef)*sumij
wvi -= upd
wvj -= upd
loss += werror * error
return loss / ncooc
@staticmethod
@jit
def adagrad_epoch(row, col, weights, logcoocs, wv, b, ssg_wv, ssg_b, ncooc, eta):
eta = FLOAT(eta)
two = FLOAT(2.0)
loss = FLOAT(0.0)
for i, j, weight, logcooc in zip(row, col, weights, logcoocs):
wvi, wvj = wv[i], wv[j]
ssg_wvi, ssg_wvj = ssg_wv[i], ssg_wv[j]
sumij = wvi + wvj
error = np.dot(sumij.T, sumij) + b[0] - logcooc
werror = weight*error
coef = two*werror
reg_b = np.sqrt(ssg_b)
ssg_b += coef ** 2
            b -= eta*coef / reg_b
upd = (two*coef)*sumij
updsq = upd ** 2
reg_wvi = np.sqrt(ssg_wvi)
ssg_wvi += updsq
reg_wvj = np.sqrt(ssg_wvj)
ssg_wvj += updsq
upd *= eta
wvi -= upd / reg_wvi
wvj -= upd / reg_wvj
loss += werror * error
return loss / ncooc
# NOTE: Open using 'with ... as' to prevent too many open POSIX files
class RegularizedGloVe(GloVe):
def _word_cooc_counts(self, V):
counts = Counter(self.row)+Counter(self.col)
array = np.fromiter((counts[i] for i in range(V)), INT, V)
if self._size > 1:
output = None if self._rank else np.empty(V, dtype=INT)
self._comm.Reduce(array, output, root=0)
return output
return array
def __init__(self, src, *args, reg=1.0, **kwargs):
super().__init__(*args, **kwargs)
create = self.create
params = self._params
params.append(create(src, dtype=FLOAT))
params.append(FLOAT(reg))
params.append(create(self._word_cooc_counts(src.shape[0]), dtype=FLOAT))
oloss = self.loss
if self._rank:
self.loss = lambda: oloss() + self._comm.bcast(None, root=0)
else:
rloss = lambda: reg/src.shape[0]*norm(self.embeddings()-src)**2
if self._size > 1:
self.loss = lambda: oloss() + self._comm.bcast(rloss(), root=0)
else:
self.loss = lambda: oloss() + rloss()
@staticmethod
@jit
def sgd_epoch(row, col, weights, logcoocs, wv, cv, wb, cb, src, reg, wcc, ncooc, eta):
etax2 = FLOAT(2.0*eta)
two = FLOAT(2.0)
regoV = FLOAT(reg / wcc.shape[0])
regcoef = FLOAT(eta * ncooc * regoV)
oloss = FLOAT(0.0)
rloss = FLOAT(0.0)
for i, j, weight, logcooc in zip(row, col, weights, logcoocs):
wvi, cvj, wcci, wccj = wv[i], cv[j], wcc[i], wcc[j]
error = np.dot(wvi.T, cvj) + wb[i] + cb[j] - logcooc
werror = weight*error
coef = werror*etax2
diffi = (wvi+cv[i])/two - src[i]
diffj = (wv[j]+cvj)/two - src[j]
upd = coef*cvj + (regcoef/wcci)*diffi
cvj -= coef*wvi + (regcoef/wccj)*diffj
wvi -= upd
wb[i] -= coef
cb[j] -= coef
oloss += werror*error
rloss += np.dot(diffi.T, diffi)/wcci + np.dot(diffj.T, diffj)/wccj
return (oloss + regoV*rloss) / ncooc
@staticmethod
@jit
def adagrad_epoch(row, col, weights, logcoocs, wv, cv, wb, cb, src, reg, wcc, ssg_wv, ssg_cv, ssg_wb, ssg_cb, ncooc, eta):
eta = FLOAT(eta)
two = FLOAT(2.0)
regoV = FLOAT(reg / wcc.shape[0])
regcoef = FLOAT(ncooc * regoV)
oloss = FLOAT(0.0)
rloss = FLOAT(0.0)
for i, j, weight, logcooc in zip(row, col, weights, logcoocs):
wvi, cvj, wcci, wccj = wv[i], cv[j], wcc[i], wcc[j]
ssg_wvi, ssg_cvj = ssg_wv[i], ssg_cv[j]
error = np.dot(wvi.T, cvj) + wb[i] + cb[j] - logcooc
werror = weight*error
coef = two*werror
diffi = (wvi+cv[i])/two - src[i]
diffj = (wv[j]+cvj)/two - src[j]
updi = coef*cvj + (regcoef/wcci)*diffi
updj = coef*wvi + (regcoef/wccj)*diffj
reg_wvi = np.sqrt(ssg_wvi)
reg_cvj = np.sqrt(ssg_cvj)
ssg_wvi += updi ** 2
ssg_cvj += updj ** 2
wvi -= eta * updi / reg_wvi
cvj -= eta * updj / reg_cvj
reg_wbi = np.sqrt(ssg_wb[i])
reg_cbj = np.sqrt(ssg_cb[j])
coefsq = coef ** 2
ssg_wb[i] += coefsq
ssg_cb[j] += coefsq
coef *= eta
wb[i] -= coef / reg_wbi
cb[j] -= coef / reg_cbj
oloss += werror*error
rloss += np.dot(diffi.T, diffi)/wcci + np.dot(diffj.T, diffj)/wccj
return (oloss + regoV*rloss) / ncooc
# NOTE: Open using 'with ... as' to prevent too many open POSIX files
class RegularizedSN(SN, RegularizedGloVe):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@staticmethod
@jit
def sgd_epoch(row, col, weights, logcoocs, wv, b, src, reg, wcc, ncooc, eta):
etax2 = FLOAT(2.0*eta)
two = FLOAT(2.0)
regoV = FLOAT(reg / wcc.shape[0])
regcoef = FLOAT(etax2 * ncooc * regoV)
oloss = FLOAT(0.0)
rloss = FLOAT(0.0)
for i, j, weight, logcooc in zip(row, col, weights, logcoocs):
wvi, wvj, wcci, wccj = wv[i], wv[j], wcc[i], wcc[j]
sumij = wvi + wvj
error = np.dot(sumij.T, sumij) + b[0] - logcooc
werror = weight*error
coef = werror*etax2
b -= coef
diffi = wvi - src[i]
diffj = wvj - src[j]
upd = (two*coef)*sumij
wvi -= upd + (regcoef/wcci)*diffi
wvj -= upd + (regcoef/wccj)*diffj
oloss += werror*error
rloss += np.dot(diffi.T, diffi)/wcci + np.dot(diffj.T, diffj)/wccj
return (oloss + regoV*rloss) / ncooc
@staticmethod
@jit
def adagrad_epoch(row, col, weights, logcoocs, wv, b, src, reg, wcc, ssg_wv, ssg_b, ncooc, eta):
eta = FLOAT(eta)
two = FLOAT(2.0)
regoV = FLOAT(reg / wcc.shape[0])
regcoef = FLOAT(ncooc * regoV)
oloss = FLOAT(0.0)
rloss = FLOAT(0.0)
for i, j, weight, logcooc in zip(row, col, weights, logcoocs):
wvi, wvj, wcci, wccj = wv[i], wv[j], wcc[i], wcc[j]
ssg_wvi, ssg_wvj = ssg_wv[i], ssg_wv[j]
sumij = wvi + wvj
error = np.dot(sumij.T, sumij) + b[0] - logcooc
werror = weight*error
coef = two*werror
reg_b = np.sqrt(ssg_b)
ssg_b += coef ** 2
            b -= eta*coef / reg_b
diffi = wvi - src[i]
diffj = wvj - src[j]
upd = (two*coef)*sumij
updi = upd + (regcoef/wcci)*diffi
updj = upd + (regcoef/wccj)*diffj
regi = np.sqrt(ssg_wvi)
regj = np.sqrt(ssg_wvj)
ssg_wvi += updi ** 2
ssg_wvj += updj ** 2
            wvi -= eta * updi / regi
            wvj -= eta * updj / regj
oloss += werror*error
rloss += np.dot(diffi.T, diffi)/wcci + np.dot(diffj.T, diffj)/wccj
return (oloss + regoV*rloss) / ncooc
def align_params(params, srcvocab, tgtvocab, mean_fill=True):
output = []
for param in params:
if len(param.shape) == 1:
if param.shape[0] == 1:
output.append(param)
continue
shape = (len(tgtvocab),)
default = np.mean(param)
else:
shape = (len(tgtvocab), param.shape[1])
default = np.mean(param, axis=0)
array = np.empty(shape, dtype=FLOAT)
if not mean_fill:
default *= FLOAT(0.0)
w2e = dict(zip(srcvocab, param))
for i, w in enumerate(tgtvocab):
array[i] = w2e.get(w, default)
output.append(array)
return output
def induce_embeddings(srcvocab, srccooc, srcvecs, tgtvocab, tgtcooc, comm=None):
from scipy import sparse as sp
from sklearn.linear_model import LinearRegression as LR
rank, size = ranksize(comm)
Vsrc, d = srcvecs.shape
Vtgt = len(tgtvocab)
with SharedArrayManager(comm=comm) as sam:
write('Loading Source Cooccurrences\n', comm)
data, row, col = symcooc(srccooc, comm)
srcvecs = sam.create(srcvecs, dtype=FLOAT)
X = sp.csr_matrix((data, (row, col)), shape=(Vsrc, Vsrc), dtype=FLOAT)
write('Computing Source Counts\n', comm)
if size > 1:
C = None if rank else np.empty(Vsrc, dtype=FLOAT)
comm.Reduce(np.array(X.sum(1))[:,0], C, root=0)
C = sam.create(C)
else:
C = np.array(X.sum(1))[:,0]
write('Building Source Context Vectors\n', comm)
if size > 1:
U = None if rank else np.empty((Vsrc, d), dtype=FLOAT)
comm.Reduce(X.dot(srcvecs), U, root=0)
U = sam.create(U)
else:
U = X.dot(srcvecs)
U = U[C>0]
C = C[C>0]
start, stop = int(rank/size*Vsrc), int((rank+1)/size*Vsrc)
U[start:stop] /= C[start:stop, None]
checkpoint(comm)
write('Learning Induction Matrix\n', comm)
M = sam.create(np.zeros((d, d), dtype=FLOAT))
start, stop = int(rank/size*d), int((rank+1)/size*d)
M[:,start:stop] = LR(fit_intercept=False).fit(X[:,start:stop], srcvecs).coef_
checkpoint(comm)
write('Loading Target Cooccurrences\n', comm)
data, row, col = symcooc(tgtcooc, comm)
tgt2idx = {w: i for i, w in enumerate(tgtvocab)}
tgt2src = {tgt2idx.get(w): i for i, w in enumerate(srcvocab)}
zero = FLOAT(0.0)
for i, j in enumerate(col):
try:
col[i] = tgt2src[j]
except KeyError:
data[i] = zero
X = sp.csr_matrix((data, (row, col)), shape=(Vtgt, Vsrc), dtype=FLOAT)
X.eliminate_zeros()
write('Computing Target Counts\n', comm)
if size > 1:
C = None if rank else np.empty(Vtgt, dtype=FLOAT)
comm.Reduce(np.array(X.sum(1))[:,0], C, root=0)
C = sam.create(C)
else:
C = np.array(X.sum(1))[:,0]
write('Building Target Context Vectors\n', comm)
rank, size = ranksize(comm)
if size > 1:
U = None if rank else np.empty((Vtgt, d), dtype=FLOAT)
comm.Reduce(X.dot(srcvecs), U, root=0)
U = sam.create(U)
else:
U = X.dot(srcvecs)
nz = sum(C>0)
start, stop = int(rank/size*nz), int((rank+1)/size*nz)
        nz_idx = np.where(C > 0)[0]
        # chained fancy indexing (U[C>0][start:stop]) would divide a temporary
        # copy in place, leaving U unchanged, so index U directly
        U[nz_idx[start:stop]] /= C[nz_idx[start:stop]][:, None]
write('Computing Induced Embeddings\n', comm)
tgtvecs = sam.create(np.zeros((Vtgt, d), dtype=FLOAT))
tgtvecs[start:stop] = U[start:stop].dot(M.T)
checkpoint(comm)
if not rank:
return tgtvecs
def main(args, comm=None):
    if args.mode == 'vocab' or args.mode.startswith('thru'):
        vocab_count(args.input, args.vocab, args.min_count, args.verbose, comm)
    if args.mode == 'cooc' or args.mode.startswith('thru'):
        cooc_count(args.input, args.vocab, args.cooc, args.window_size, args.unweighted, args.verbose, comm)
    Embedding = GloVe if args.mode[-5:] == 'glove' else SN if args.mode[-2:] == 'sn' else None
    if Embedding is None:
        if args.mode not in {'vocab', 'cooc', 'thru-cooc'}:
            raise NotImplementedError
        return
with open(args.vocab, 'r') as f:
V = len(f.readlines())
with Embedding(args.cooc, V, args.dimension, alpha=args.alpha, xmax=args.xmax, comm=comm) as E:
if args.sgd:
E.sgd(args.niter, args.eta, verbose=args.verbose)
else:
E.adagrad(args.niter, args.eta, verbose=args.verbose)
E.dump(args.output)
def parse():
parser = argparse.ArgumentParser(prog='python text_embeddings/solvers.py')
parser.add_argument('mode', help="'vocab', 'cooc', 'glove', 'sn', 'thru-cooc', 'thru-glove', or 'thru-sn'")
parser.add_argument('vocab', help='vocabulary .txt file')
parser.add_argument('-i', '--input', help='corpus .txt file')
parser.add_argument('-c', '--cooc', help='cooccurrence .bin file')
parser.add_argument('-o', '--output', help='embedding .bin file')
parser.add_argument('-m', '--min_count', default=1, help='minimum word count in corpus', type=int)
parser.add_argument('-w', '--window_size', default=10, help='size of cooccurrence window', type=int)
parser.add_argument('-u', '--unweighted', action='store_true', help='no distance weighting')
parser.add_argument('-d', '--dimension', default=300, help='embedding dimension', type=int)
parser.add_argument('-x', '--xmax', default=100.0, help='maximum cooccurrence', type=float)
parser.add_argument('-a', '--alpha', default=0.75, help='weighting exponent', type=float)
parser.add_argument('-s', '--sgd', action='store_true', help='use SGD')
parser.add_argument('-n', '--niter', default=25, help='number of training epochs', type=int)
parser.add_argument('-e', '--eta', default=0.05, help='learning rate', type=float)
parser.add_argument('-v', '--verbose', action='store_true', help='display output')
return parser.parse_args()
if __name__ == '__main__':
try:
from mpi4py import MPI
comm = MPI.COMM_WORLD
except ImportError:
comm = None
main(parse(), comm=comm)
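# Example invocations (file names are illustrative):
#   python text_embeddings/solvers.py thru-glove vocab.txt -i corpus.txt \
#       -c cooc.bin -o vectors.bin -d 300 -n 25 -v
# and, to distribute the work across MPI ranks:
#   mpirun -n 4 python text_embeddings/solvers.py glove vocab.txt \
#       -c cooc.bin -o vectors.bin -v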
|
{"hexsha": "1a585e927d9b23d55ccfb3d92f3eaecfabdbc3f1", "size": 33728, "ext": "py", "lang": "Python", "max_stars_repo_path": "solvers.py", "max_stars_repo_name": "NLPrinceton/text_embedding", "max_stars_repo_head_hexsha": "ee269727863982669ffb95c984c4220c1fba2834", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 56, "max_stars_repo_stars_event_min_datetime": "2018-06-26T13:48:42.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-02T10:59:32.000Z", "max_issues_repo_path": "solvers.py", "max_issues_repo_name": "NLPrinceton/text_embedding", "max_issues_repo_head_hexsha": "ee269727863982669ffb95c984c4220c1fba2834", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-05-08T18:59:40.000Z", "max_issues_repo_issues_event_max_datetime": "2018-05-08T19:14:17.000Z", "max_forks_repo_path": "solvers.py", "max_forks_repo_name": "NLPrinceton/text_embedding", "max_forks_repo_head_hexsha": "ee269727863982669ffb95c984c4220c1fba2834", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2018-06-30T02:10:41.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-24T20:26:49.000Z", "avg_line_length": 35.0238836968, "max_line_length": 158, "alphanum_fraction": 0.5306866698, "include": true, "reason": "import numpy,from scipy,from numba", "num_tokens": 9184}
|
import torch
import random
import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm
import atexit
from os import path
import gym
from torch.utils.tensorboard import SummaryWriter
from models import StaticReconstructor, DiscriminatorConv
from utils import WarpFrame, NoopResetEnv, MaxAndSkipEnv
BATCH_SIZE = 1
SEQ_LEN = 500
NUM_STEPS = 10000 if torch.cuda.is_available() else 1000
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
PATH = 'weights/endecode.pt'
GLR = 3e-4
DLR = 3e-4
L1_SCALER = 0.5
K = 6
WEIGHT_DECAY = 0.0
WRITER = SummaryWriter(log_dir="logs/endecode-L1scaler"+str(L1_SCALER)+"-dLR"+str(DLR)+"-K"+str(K))
DATA = torch.zeros(NUM_STEPS, 1, 84, 84)
G = StaticReconstructor(lr=GLR, weight_decay=WEIGHT_DECAY, device=DEVICE)
G.to(DEVICE)
if path.exists(PATH):
print("Loading model from", PATH)
G.load_state_dict(torch.load(PATH, map_location=DEVICE))
D = DiscriminatorConv(lr=DLR, weight_decay=WEIGHT_DECAY, device=DEVICE)
D.to(DEVICE)
def exit_handler():
print("Saving model as", PATH)
torch.save(G.state_dict(), PATH)
print("Saving Images")
    x = z = DATA[13] / 255.0  # normalize like the training batches
out = G(x.unsqueeze(0))
tgt = z
plt.figure(1)
plt.imshow(tgt.squeeze().cpu().detach().numpy())
plt.savefig('tgt-atari')
plt.figure(2)
plt.imshow(out.squeeze().cpu().detach().numpy())
plt.savefig('out-atari')
atexit.register(exit_handler)
env = gym.make("PongNoFrameskip-v4")
env = WarpFrame(env, width=84, height=84)
env = NoopResetEnv(env)
env = MaxAndSkipEnv(env)
env.reset()
print("Generating Data")
for step in tqdm(range(NUM_STEPS)):
# Roll out env
action = random.randint(0, 5)
obs, rew, done, _ = env.step(action)
DATA[step] = torch.tensor(obs.reshape(1, 84, 84))
if done:
env.reset()
DATA = DATA.to(DEVICE)
print("Training")
for e in tqdm(range(10000)):
for k in range(K):
# Generate batch of images for discriminator
x = DATA[torch.randperm(NUM_STEPS)[:SEQ_LEN]] / 255.0
z = DATA[torch.randperm(NUM_STEPS)[:SEQ_LEN]] / 255.0
# Compute discriminator loss
D.optim.zero_grad()
D_loss_real = -torch.mean(torch.log(D(x)))
D_loss_fake = -torch.mean(torch.log(1 - D(G(z))))
(D_loss_real + D_loss_fake).backward()
D.optim.step()
# Generate batch of images for generator
z = DATA[torch.randperm(NUM_STEPS)[:SEQ_LEN]] / 255.0
# Compute generator loss
G.optim.zero_grad()
gout = G(z)
G_gan_loss = torch.mean(torch.log(1 - D(gout)))
G_l1_loss = L1_SCALER * torch.mean(torch.abs(gout - z))
(G_gan_loss + G_l1_loss).backward()
G.optim.step()
# Log results
WRITER.add_scalar('Accuracy/D Accuracy', np.mean(D.accuracy(x, G(z))), e)
WRITER.add_scalar('Discriminator Loss/Loss Real', np.mean(D_loss_real.item()), e)
WRITER.add_scalar('Discriminator Loss/Loss Fake', np.mean(D_loss_fake.item()), e)
WRITER.add_scalar('Generator Loss/GAN Loss', np.mean(G_gan_loss.item()), e)
WRITER.add_scalar('Generator Loss/L1 Loss', np.mean(G_l1_loss.item()), e)
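# A minimal, hypothetical sketch (not used by this script) of the same losses
# expressed with binary cross-entropy, which avoids taking log of raw
# discriminator outputs and is numerically safer:
#   bce = torch.nn.BCELoss()
#   d_real, d_fake = D(x), D(G(z))
#   D_loss = bce(d_real, torch.ones_like(d_real)) \
#            + bce(d_fake, torch.zeros_like(d_fake))
#   G_loss = bce(D(G(z)), torch.ones_like(d_fake))  # non-saturating form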
|
{"hexsha": "ba7188e28f2107d0841394459a6bf3a0a70742a2", "size": 3083, "ext": "py", "lang": "Python", "max_stars_repo_path": "endecode.py", "max_stars_repo_name": "klemenkotar/optimus", "max_stars_repo_head_hexsha": "c454d48934a422a5033da2c5e8f4073dcbf60500", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "endecode.py", "max_issues_repo_name": "klemenkotar/optimus", "max_issues_repo_head_hexsha": "c454d48934a422a5033da2c5e8f4073dcbf60500", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "endecode.py", "max_forks_repo_name": "klemenkotar/optimus", "max_forks_repo_head_hexsha": "c454d48934a422a5033da2c5e8f4073dcbf60500", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.83, "max_line_length": 99, "alphanum_fraction": 0.6843983133, "include": true, "reason": "import numpy", "num_tokens": 891}
|
#pragma once
#include "kissfft.hh"
#include <algorithm>
#include <cmath>
#include <complex>
#include <limits>
#include <vector>
#include <boost/math/constants/constants.hpp>
#include <boost/optional.hpp>
#include <cstddef>
namespace vv
{
template <class T, class U>
auto lerp(T x0, T x1, U ratio)
{
return x0 + (x1 - x0) * ratio;
}
template <class T>
auto invlerp(T x0, T x1, T x)
{
return (x - x0) / (x1 - x0);
}
template <class T>
auto squared(T x)
{
return x * x;
}
class processor
{
public:
static const std::size_t buffer_size = 4096;
static const std::size_t nsdf_size = buffer_size / 2;
explicit processor(double sampleRate)
: sampleRate_(sampleRate)
, v1_(buffer_size)
, v2_(buffer_size + nsdf_size)
, v3_(buffer_size + nsdf_size)
, v4_(buffer_size + nsdf_size)
, v5_(buffer_size + nsdf_size)
, v6_(buffer_size)
, v7_(buffer_size / 2)
{
}
void operator ()(const float* input, float* output, double pitch_shift, double formant_shift)
{
for (std::size_t i = 0; i < buffer_size; ++i)
v1_[i] = std::complex<float>(input[i], 0.0f);
for (std::size_t i = 0; i < buffer_size; ++i)
{
auto r = static_cast<double>(i) / static_cast<double>(buffer_size);
auto w = 0.5 - 0.5 * std::cos(boost::math::constants::two_pi<double>() * r);
v2_[i] = v1_[i] * static_cast<float>(w);
}
fft_.transform(v2_.data(), v3_.data());
auto cutoff_hz = 800.0;
auto cutoff_index = static_cast<std::size_t>(std::round(cutoff_hz * static_cast<double>(buffer_size) / sampleRate_));
for (std::size_t i = 0; i < cutoff_index; ++i)
{
v4_[i + 1] = std::norm(v3_[i + 1]);
v4_[buffer_size - i - 1] = std::norm(v3_[buffer_size - i - 1]);
}
ifft_.transform(v4_.data(), v5_.data());
for (std::size_t i = 0; i < buffer_size + nsdf_size; ++i)
v5_[i] /= static_cast<float>(buffer_size + nsdf_size);
for (std::size_t i = 1; i < buffer_size; ++i)
{
auto j = buffer_size - i - 1;
v6_[j] = v6_[j + 1] + squared(v2_[i].real()) + squared(v2_[j].real());
}
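		// Normalized square difference function (NSDF, as in McLeod & Wyvill's
		// pitch detection):  n(tau) = 2 * r(tau) / m(tau)
		// where r(tau) is the autocorrelation obtained above via the inverse
		// FFT of the band-limited power spectrum (v5_) and m(tau) is the
		// accumulated energy term in v6_. Values lie in [-1, 1]; pitch periods
		// show up as peaks near 1, which the peak-picking below exploits.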
for (std::size_t i = 0; i < buffer_size / 2; ++i)
{
if (v6_[i] < std::numeric_limits<double>::min())
v7_[i] = 0.0f;
else
v7_[i] = 2.0f * v5_[i].real() / v6_[i];
}
auto minimum_hz = 50.0;
auto maximum_hz = 300.0;
auto minimum_index = static_cast<std::size_t>(std::round(sampleRate_ / maximum_hz));
auto maximum_index = static_cast<std::size_t>(std::round(sampleRate_ / minimum_hz));
minimum_index = std::max<std::size_t>(minimum_index, 1);
minimum_index = std::min<std::size_t>(minimum_index, buffer_size / 2 - 2);
maximum_index = std::max<std::size_t>(maximum_index, 1);
maximum_index = std::min<std::size_t>(maximum_index, buffer_size / 2 - 2);
double maximum_value = 0.0;
for (std::size_t i = minimum_index; i < maximum_index; ++i)
{
auto p1 = v7_[i - 1];
auto p2 = v7_[i];
auto p3 = v7_[i + 1];
if (p1 < p2 && p2 > p3 && p2 > maximum_value)
maximum_value = p2;
}
boost::optional<std::size_t> peak_index;
double peak_value = 0.0;
for (std::size_t i = minimum_index; i < maximum_index; ++i)
{
auto p1 = v7_[i - 1];
auto p2 = v7_[i];
auto p3 = v7_[i + 1];
if (p1 < p2 && p2 > p3 && p2 > maximum_value * 0.9)
{
peak_index = i;
peak_value = p2;
break;
}
}
if (!peak_index && last_peak_index_)
peak_index = last_peak_index_;
last_peak_index_ = peak_index;
bool enable = false;
if (peak_index)
{
std::size_t last_dst = 0;
std::size_t last_src1 = 0;
std::size_t last_src2 = 0;
double last_src_ratio = 0.0;
auto easing = [&](double x)
{
return (1.0 - std::cos(boost::math::constants::pi<double>() * x)) / 2.0;
};
auto interpolate = [&](double x1, double x2, double ratio)
{
return lerp(x1, x2, easing(ratio));
};
auto get_value = [&](double indexf) -> double
{
if (indexf < 0.0)
return input[0];
auto index = static_cast<std::size_t>(std::floor(indexf));
if (index >= buffer_size - 1)
return input[buffer_size - 1];
auto ratio = indexf - std::floor(indexf);
return interpolate(input[index], input[index + 1], ratio);
};
auto overlap = [&](std::size_t dst, std::size_t src1, std::size_t src2, double src_ratio)
{
for (std::size_t i = last_dst; i < dst; ++i)
{
auto ratio = static_cast<double>(i - last_dst) / static_cast<double>(dst - last_dst);
auto p1_1 = get_value(static_cast<double>(last_src1) + static_cast<double>(i - last_dst) * formant_shift);
auto p1_2 = get_value(static_cast<double>(last_src2) + static_cast<double>(i - last_dst) * formant_shift);
auto p1 = interpolate(p1_1, p1_2, last_src_ratio);
auto p2_1 = get_value(static_cast<double>(src1) - static_cast<double>(dst - i) * formant_shift);
auto p2_2 = get_value(static_cast<double>(src2) - static_cast<double>(dst - i) * formant_shift);
auto p2 = interpolate(p2_1, p2_2, src_ratio);
output[i] = static_cast<float>(interpolate(p1, p2, ratio));
}
last_dst = dst;
last_src1 = src1;
last_src2 = src2;
last_src_ratio = src_ratio;
};
auto q = buffer_size / *peak_index;
auto r = buffer_size % *peak_index;
auto nf = (static_cast<double>(buffer_size) * pitch_shift - static_cast<double>(r)) / static_cast<double>(*peak_index);
auto n = static_cast<std::size_t>(std::max(0.0, std::round(nf)));
if (q != 0 && n != 0)
{
auto actual_pitch_shift = static_cast<double>(n * *peak_index + r) / static_cast<double>(buffer_size);
for (std::size_t i = 1; i <= n; ++i)
{
double frame_indexf = 1.0;
if (n != 1)
frame_indexf = static_cast<double>((i - 1) * (q - 1)) / static_cast<double>(n - 1) + 1;
auto frame_index = static_cast<std::size_t>(std::floor(frame_indexf));
auto dst = static_cast<std::size_t>(std::floor(static_cast<double>(i * *peak_index) / actual_pitch_shift));
auto src = frame_index * *peak_index;
if (frame_index == q)
{
overlap(dst, src, src, 0.0);
}
else
{
auto src_ratio = frame_indexf - std::floor(frame_indexf);
overlap(dst, src, src + *peak_index, src_ratio);
}
}
overlap(buffer_size, buffer_size, buffer_size, 0.0);
enable = true;
}
}
if (!enable)
std::copy(input, input + buffer_size, output);
}
private:
double sampleRate_;
kissfft<float> fft_{ buffer_size + nsdf_size, false };
kissfft<float> ifft_{ buffer_size + nsdf_size, true };
std::vector<std::complex<float>> v1_;
std::vector<std::complex<float>> v2_;
std::vector<std::complex<float>> v3_;
std::vector<std::complex<float>> v4_;
std::vector<std::complex<float>> v5_;
std::vector<float> v6_;
std::vector<float> v7_;
boost::optional<std::size_t> last_peak_index_;
};
}
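// A minimal, hypothetical usage sketch (names invented for illustration):
//   vv::processor p(44100.0);
//   std::vector<float> in(vv::processor::buffer_size), out(in.size());
//   // fill 'in' with one frame of mono samples, then:
//   p(in.data(), out.data(), /*pitch_shift=*/1.5, /*formant_shift=*/1.0);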
|
{"hexsha": "c3db6cc05ad1805b7094c9412c731bfde37ff443", "size": 6847, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "vv/src/processor.hpp", "max_stars_repo_name": "planaria/vv", "max_stars_repo_head_hexsha": "08aebfbe37338fe1735fd3431f1178237941dde1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 13.0, "max_stars_repo_stars_event_min_datetime": "2018-10-16T17:06:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T13:40:28.000Z", "max_issues_repo_path": "vv/src/processor.hpp", "max_issues_repo_name": "planaria/vv", "max_issues_repo_head_hexsha": "08aebfbe37338fe1735fd3431f1178237941dde1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1.0, "max_issues_repo_issues_event_min_datetime": "2018-12-17T02:57:04.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-19T05:50:57.000Z", "max_forks_repo_path": "vv/src/processor.hpp", "max_forks_repo_name": "planaria/vv", "max_forks_repo_head_hexsha": "08aebfbe37338fe1735fd3431f1178237941dde1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2.0, "max_forks_repo_forks_event_min_datetime": "2020-03-19T18:08:18.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-15T09:37:45.000Z", "avg_line_length": 26.9566929134, "max_line_length": 123, "alphanum_fraction": 0.6120928874, "num_tokens": 2245}
|
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import datetime as dt
import os
import io
import requests
from dateutil.relativedelta import relativedelta
from Modules.Utils import Listador, FindOutlier, FindOutlierMAD, Cycles
from Modules.Graphs import GraphSerieOutliers, GraphSerieOutliersMAD
def SSTregions():
"""
Read SST weekly anomalies and put them in a DataFrame
    OUTPUTS:
SST : DataFrame with the anomalies of SST in El Niño regions
"""
SSTweek = 'https://www.cpc.ncep.noaa.gov/data/indices/wksst8110.for'
s = requests.get(SSTweek).content
date = []
N12 = []
N12_A = []
N3 = []
N3_A = []
N34 = []
N34_A = []
N4 = []
N4_A = []
with io.StringIO(s.decode('utf-8')) as f:
data = f.readlines()
for d in data[4:]:
d = d.strip()
d = d.split(' ')
date.append(dt.datetime.strptime(d[0], '%d%b%Y'))
N12 .append(float(d[1][:4]))
N12_A.append(float(d[1][4:]))
N3 .append(float(d[2][:4]))
N3_A .append(float(d[2][4:]))
N34 .append(float(d[3][:4]))
N34_A.append(float(d[3][4:]))
N4 .append(float(d[4][:4]))
N4_A .append(float(d[4][4:]))
SST = pd.DataFrame(np.array([N12_A,N3_A,N34_A,N4_A]).T, index=date, \
columns=[u'Niño1+2',u'Niño3',u'Niño34',u'Niño4'])
return SST
def ONIdata():
"""
    Read ONI data and put it in a DataFrame
    OUTPUTS:
ONI : DataFrame with the ONI data
"""
linkONI = 'https://www.cpc.ncep.noaa.gov/data/indices/oni.ascii.txt'
s = requests.get(linkONI).content
Season = []
year = []
Total = []
Anom = []
date = []
with io.StringIO(s.decode('utf-8')) as f:
data = f.readlines()
m = 0
for d in data[1:]:
d = d.strip()
d = d.split()
Season.append(d[0])
year .append(int(d[1]))
Total .append(float(d[2]))
Anom .append(float(d[3]))
date .append(dt.datetime(1950,2,1)+relativedelta(months=m))
m+=1
ONI = pd.DataFrame(np.array([Anom, Total, Season]).T, index=date, \
columns=[u'Anomalie', u'Total',u'Season'])
return ONI
def SOIdata():
"""
    Read SOI data and put it in a DataFrame
    OUTPUTS:
    SOI : DataFrame with the SOI data
"""
# linkSOI = 'http://www.bom.gov.au/climate/enso/soi.txt'
# linkSOI = 'https://www.ncdc.noaa.gov/teleconnections/enso/indicators/soi/data.csv'
linkSOI = 'https://www.cpc.ncep.noaa.gov/data/indices/soi'
s = requests.get(linkSOI).content
date = []
soi = []
with io.StringIO(s.decode('utf-8')) as f:
data = f.readlines()
# Old version with csv data
# m = 0
# for i in range(len(data)):
# if i >=2:
# row = data[i].strip()
# val = row.split(',')
# date.append(dt.datetime.strptime(val[0], '%Y%m'))
# soi.append(float(val[1]))
# SOI = pd.DataFrame(np.array(soi).T, index=date, columns=[u'SOI'])
    standardized_flag = False
    for i in range(len(data)):
        if 'STANDARDIZED' in data[i]:
            standardized_flag = True
        if not standardized_flag:
            continue
row = data[i].strip()
if len(row) != 76:
continue
if row.startswith('Y') == True:
continue
year = int(row[:4])
for m in range(12):
a = float(row[6*(m)+4:6*(m+1)+4])
date.append(dt.datetime(year, m+1,1))
if a == -999.9:
a = np.nan
soi.append(a)
SOI = pd.DataFrame(np.array(soi).T, index=date, columns=[u'SOI'])
SOI = SOI.dropna()
return SOI
def MEIdata():
"""
    Read MEI data and put it in a DataFrame
    OUTPUTS:
    MEI : DataFrame with the MEI data
"""
linkMEI = 'https://psl.noaa.gov/enso/mei/data/meiv2.data'
s = requests.get(linkMEI).content
date = []
mei = []
with io.StringIO(s.decode('utf-8')) as f:
data = f.readlines()
lims = np.array(data[0].strip().split(' ')).astype(int)
for i in range(len(data)):
if i >=1:
row = data[i].strip()
val = row.split(' ')
for m in range(12):
date.append(dt.datetime(int(val[0]),m+1,1))
mei.append(np.array(val[1:]).astype(float))
if int(val[0])== lims[1]-1:
break
mei = np.array(mei).reshape(len(mei)*12)
MEI = pd.DataFrame(np.array(mei).astype(float), index=date, columns=[u'MEI'])
return MEI
def OuliersENSOjust(Serie, ENSO, method='IQR', lim_inf=0,
write=True, name=None,
graph=True, label='', title='', pdf=False, png=True, Path_Out=''):
"""
    Remove outliers found with the outlier-detection functions, keeping
    (justifying) only the outlier values that occur within ENSO periods
    INPUTS
    Serie   : Pandas DataFrame or pandas Series with index as datetime
    ENSO    : Pandas DataFrame with the index of dates of ENSO periods
    method  : str to indicate the method to find outliers, ['IQR','MAD']
    lim_inf : limit at the bottom for the outliers
    write   : boolean to write the outliers to a .csv file
    name    : string of station name to save the outliers
    label   : string of the label
    title   : Figure title
    pdf     : Boolean to save figure in pdf format
    png     : Boolean to save figure in png format
    Path_Out: Directory to save figures
OUTPUTS
S : DataFrame without outliers outside ENSO periods
"""
if method == 'IQR':
idx = FindOutlier(Serie, clean=False, index=True, lims=False, restrict_inf=lim_inf)
elif method == 'MAD':
idx = FindOutlierMAD(Serie.dropna().values,clean=False, index=True)
    else:
        raise ValueError(f'{method} is not a valid method, please check the spelling')
injust = []
for ii in idx:
month = dt.datetime(Serie.index[ii].year,Serie.index[ii].month, 1)
if month not in ENSO.index:
injust.append(ii)
if len(injust) == 0:
S = Serie
else:
S = Serie.drop(Serie.index[injust])
if write == True:
outliers = Serie.iloc[injust]
outliers.to_csv(os.path.join(Path_Out, f'Outliers_{name}_{method}.csv'))
if graph == True:
outliers = Serie.iloc[injust]
GraphSerieOutliersMAD(Serie, outliers,
name=f'Outliers_{name}_{method}',
label=label,title=title,pdf=pdf, png=png,
PathFigs=Path_Out)
return S
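# A minimal, hypothetical usage sketch (station name and paths are
# illustrative, not from the source):
#   ONI = ONIdata()
#   Nino = ONI[ONI['Anomalie'].astype(float) >= 0.5]   # warm-phase months
#   clean = OuliersENSOjust(serie, Nino, method='MAD', name='Station',
#                           label='Caudal', title='Station', Path_Out='Results/')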
|
{"hexsha": "1e0e356cf22f78253be57bc1b26be4f731e1d446", "size": 6600, "ext": "py", "lang": "Python", "max_stars_repo_path": "Modules/ENSO.py", "max_stars_repo_name": "cmcuervol/HydroBalbo", "max_stars_repo_head_hexsha": "0c70536305d12f6fb9fb8fe7ce7cdb08d88472af", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Modules/ENSO.py", "max_issues_repo_name": "cmcuervol/HydroBalbo", "max_issues_repo_head_hexsha": "0c70536305d12f6fb9fb8fe7ce7cdb08d88472af", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Modules/ENSO.py", "max_forks_repo_name": "cmcuervol/HydroBalbo", "max_forks_repo_head_hexsha": "0c70536305d12f6fb9fb8fe7ce7cdb08d88472af", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.5555555556, "max_line_length": 91, "alphanum_fraction": 0.5618181818, "include": true, "reason": "import numpy", "num_tokens": 1880}
|
#include <cassert>
#include <cmath>
#include <functional>
#include <numeric>
#include <sstream>
#include <boost/multiprecision/gmp.hpp>
#include <cstdint>
#include <QMap>
#include <QtDebug>
using s64 = int64_t;
using namespace std;
using boost::multiprecision::mpq_rational;
QDebug operator<<(QDebug d, const mpq_rational& r) {
d.nospace();
d.noquote();
stringstream s;
s << r;
d << QString::fromStdString(s.str());
return d.resetFormat();
}
QMap<int /*roll*/, s64 /*count*/> rolls(int sides, int repetitions) {
std::function<void(int , int, QMap<int, s64>*)> traverse = [&traverse, sides, repetitions](int sum, int rep, QMap<int, s64>* ret) {
if (rep == repetitions) {
if (ret->contains(sum) == false) {
(*ret)[sum] = 0;
}
(*ret)[sum] += 1;
return;
}
for (int i = 1; i <= sides; i += 1) {
traverse(sum + i, rep + 1, ret);
}
};
QMap<int, s64> ret;
traverse(0, 0, &ret);
return ret;
}
mpq_rational winProbability(int sides1, int reps1, int sides2, int reps2) {
auto rolls1 = rolls(sides1, reps1);
auto rolls2 = rolls(sides2, reps2);
auto values1 = rolls1.values();
auto values2 = rolls2.values();
    s64 sum1 = accumulate(begin(values1), end(values1), s64{0});
    s64 sum2 = accumulate(begin(values2), end(values2), s64{0});
mpq_rational ret(0);
QMap<int, s64>::const_iterator i = rolls2.constBegin();
while (i != rolls2.constEnd()) {
        mpq_rational pickProbability(i.value(), sum2);  // P(second roll totals i.key())
s64 winNumerator = 0;
QMap<int, s64>::const_iterator j = rolls1.constBegin();
while (j != rolls1.constEnd()) {
if (j.key() < i.key()) {
winNumerator += j.value();
//qDebug() << " +" << j.key() << j.value() << winNumerator;
} else {
//qDebug() << " -" << j.key();
}
++j;
}
        ret += pickProbability*mpq_rational(winNumerator, sum1);  // times P(first roll is strictly lower)
//qDebug() << "sum" << i.key() << ", pick" << pickProbability << ", win this" << mpq_rational(winNumerator, sum2) << ", total" << ret;
++i;
}
return ret;
}
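// winProbability(sides1, reps1, sides2, reps2) is the exact probability that
// the total of reps2 dice with sides2 faces strictly exceeds the total of
// reps1 dice with sides1 faces. For Project Euler 205, test1() below computes
// the chance that Peter's nine 4-sided dice beat Colin's six 6-sided dice.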
void testRolls();
void test1();
int main(int argc, char **args) {
Q_UNUSED(argc); Q_UNUSED(args);
testRolls();
test1();
return 0;
}
void testRolls() {
auto a31 = rolls(3, 1);
assert(a31.size() == 3);
assert(a31[1] == 1);
assert(a31[2] == 1);
assert(a31[3] == 1);
auto a22 = rolls(2, 2);
assert(a22.size() == 3);
assert(a22[2] == 1);
assert(a22[3] == 2);
assert(a22[4] == 1);
auto a49 = rolls(4, 9).values(); assert(accumulate(begin(a49), end(a49), 0) == 262144);
auto a66 = rolls(6, 6).values(); assert(accumulate(begin(a66), end(a66), 0) == 46656);
}
void test1() {
qDebug() << winProbability(6, 6, 4, 9);
}
(* Author: Simon Wimmer *)
theory TA_Graphs
imports
More_List Stream_More
"HOL-Library.Rewrite"
begin
chapter \<open>Graphs\<close>
section \<open>Basic Definitions and Theorems\<close>
locale Graph_Defs =
fixes E :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
begin
inductive steps where
Single: "steps [x]" |
Cons: "steps (x # y # xs)" if "E x y" "steps (y # xs)"
lemmas [intro] = steps.intros
lemma steps_append:
"steps (xs @ tl ys)" if "steps xs" "steps ys" "last xs = hd ys"
using that by induction (auto 4 4 elim: steps.cases)
lemma steps_append':
"steps xs" if "steps as" "steps bs" "last as = hd bs" "as @ tl bs = xs"
using steps_append that by blast
coinductive run where
"run (x ## y ## xs)" if "E x y" "run (y ## xs)"
lemmas [intro] = run.intros
lemma steps_appendD1:
"steps xs" if "steps (xs @ ys)" "xs \<noteq> []"
using that proof (induction xs)
case Nil
then show ?case by auto
next
case (Cons a xs)
then show ?case
by - (cases xs; auto elim: steps.cases)
qed
lemma steps_appendD2:
"steps ys" if "steps (xs @ ys)" "ys \<noteq> []"
using that by (induction xs) (auto elim: steps.cases)
lemma steps_appendD3:
"steps (xs @ [x]) \<and> E x y" if "steps (xs @ [x, y])"
using that proof (induction xs)
case Nil
then show ?case by (auto elim!: steps.cases)
next
case prems: (Cons a xs)
then show ?case by (cases xs) (auto elim: steps.cases)
qed
lemma steps_ConsD:
"steps xs" if "steps (x # xs)" "xs \<noteq> []"
using that by (auto elim: steps.cases)
lemmas stepsD = steps_ConsD steps_appendD1 steps_appendD2
lemma steps_alt_induct[consumes 1, case_names Single Snoc]:
assumes
"steps x" "(\<And>x. P [x])"
"\<And>y x xs. E y x \<Longrightarrow> steps (xs @ [y]) \<Longrightarrow> P (xs @ [y]) \<Longrightarrow> P (xs @ [y,x])"
shows "P x"
using assms(1)
proof (induction rule: rev_induct)
case Nil
then show ?case by (auto elim: steps.cases)
next
case prems: (snoc x xs)
then show ?case by (cases xs rule: rev_cases) (auto intro: assms(2,3) dest!: steps_appendD3)
qed
lemma steps_appendI:
"steps (xs @ [x, y])" if "steps (xs @ [x])" "E x y"
using that
proof (induction xs)
case Nil
then show ?case by auto
next
case (Cons a xs)
then show ?case by (cases xs; auto elim: steps.cases)
qed
lemma steps_append_single:
assumes
"steps xs" "E (last xs) x" "xs \<noteq> []"
shows "steps (xs @ [x])"
using assms(3,1,2) by (induction xs rule: list_nonempty_induct) (auto 4 4 elim: steps.cases)
lemma extend_run:
assumes
"steps xs" "E (last xs) x" "run (x ## ys)" "xs \<noteq> []"
shows "run (xs @- x ## ys)"
using assms(4,1-3) by (induction xs rule: list_nonempty_induct) (auto 4 3 elim: steps.cases)
lemma run_cycle:
assumes "steps xs" "E (last xs) (hd xs)" "xs \<noteq> []"
shows "run (cycle xs)"
using assms proof (coinduction arbitrary: xs)
case run
then show ?case
apply (rewrite at \<open>cycle xs\<close> stream.collapse[symmetric])
apply (rewrite at \<open>stl (cycle xs)\<close> stream.collapse[symmetric])
apply clarsimp
apply (erule steps.cases)
subgoal for x
apply (rule conjI)
apply (simp; fail)
apply (rule disjI1)
apply (inst_existentials xs)
apply (simp, metis cycle_Cons[of x "[]", simplified])
by auto
subgoal for x y xs'
apply (rule conjI)
apply (simp; fail)
apply (rule disjI1)
apply (inst_existentials "y # xs' @ [x]")
using steps_append_single[of "y # xs'" x]
apply (auto elim: steps.cases split: if_split_asm)
apply (subst (2) cycle_Cons, simp) (* XXX Automate forward reasoning *)
apply (subst cycle_Cons, simp)
done
done
qed
lemma run_stl:
"run (stl xs)" if "run xs"
using that by (auto elim: run.cases)
lemma run_sdrop:
"run (sdrop n xs)" if "run xs"
using that by (induction n arbitrary: xs) (auto intro: run_stl)
lemma run_reachable':
assumes "run (x ## xs)" "E\<^sup>*\<^sup>* x\<^sub>0 x"
shows "pred_stream (\<lambda> x. E\<^sup>*\<^sup>* x\<^sub>0 x) xs"
using assms by (coinduction arbitrary: x xs) (auto 4 3 elim: run.cases)
lemma run_reachable:
assumes "run (x\<^sub>0 ## xs)"
shows "pred_stream (\<lambda> x. E\<^sup>*\<^sup>* x\<^sub>0 x) xs"
by (rule run_reachable'[OF assms]) blast
lemma run_decomp:
assumes "run (xs @- ys)" "xs \<noteq> []"
shows "steps xs \<and> run ys \<and> E (last xs) (shd ys)"
using assms(2,1) proof (induction xs rule: list_nonempty_induct)
case (single x)
then show ?case by (auto elim: run.cases)
next
case (cons x xs)
then show ?case by (cases xs; auto 4 4 elim: run.cases)
qed
lemma steps_decomp:
assumes "steps (xs @ ys)" "xs \<noteq> []" "ys \<noteq> []"
shows "steps xs \<and> steps ys \<and> E (last xs) (hd ys)"
using assms(2,1,3) proof (induction xs rule: list_nonempty_induct)
case (single x)
then show ?case by (auto elim: steps.cases)
next
case (cons x xs)
then show ?case by (cases xs; auto 4 4 elim: steps.cases)
qed
lemma steps_rotate:
assumes "steps (x # xs @ y # ys @ [x])"
shows "steps (y # ys @ x # xs @ [y])"
proof -
from steps_decomp[of "x # xs" "y # ys @ [x]"] assms have
"steps (x # xs)" "steps (y # ys @ [x])" "E (last (x # xs)) y"
by auto
then have "steps ((x # xs) @ [y])" by (blast intro: steps_append_single)
from steps_append[OF \<open>steps (y # ys @ [x])\<close> this] show ?thesis by auto
qed
lemma run_shift_coinduct[case_names run_shift, consumes 1]:
assumes "R w"
and "\<And> w. R w \<Longrightarrow> \<exists> u v x y. w = u @- x ## y ## v \<and> steps (u @ [x]) \<and> E x y \<and> R (y ## v)"
shows "run w"
using assms(2)[OF \<open>R w\<close>] proof (coinduction arbitrary: w)
case (run w)
then obtain u v x y where "w = u @- x ## y ## v" "steps (u @ [x])" "E x y" "R (y ## v)"
by auto
then show ?case
apply -
apply (drule assms(2))
apply (cases u)
apply force
subgoal for z zs
apply (cases zs)
subgoal
apply simp
apply safe
apply (force elim: steps.cases)
subgoal for u' v' x' y'
by (inst_existentials "x # u'") (cases u'; auto)
done
subgoal for a as
apply simp
apply safe
apply (force elim: steps.cases)
subgoal for u' v' x' y'
apply (inst_existentials "a # as @ x # u'")
using steps_append[of "a # as @ [x, y]" "u' @ [x']"]
apply simp
apply (drule steps_appendI[of "a # as" x, rotated])
by (cases u'; force elim: steps.cases)+
done
done
done
qed
lemma run_flat_coinduct[case_names run_shift, consumes 1]:
assumes "R xss"
and
"\<And> xs ys xss.
R (xs ## ys ## xss) \<Longrightarrow> xs \<noteq> [] \<and> steps xs \<and> E (last xs) (hd ys) \<and> R (ys ## xss)"
shows "run (flat xss)"
proof -
obtain xs ys xss' where "xss = xs ## ys ## xss'" by (metis stream.collapse)
with assms(2)[OF assms(1)[unfolded this]] show ?thesis
proof (coinduction arbitrary: xs ys xss' xss rule: run_shift_coinduct)
case (run_shift xs ys xss' xss)
from run_shift show ?case
apply (cases xss')
apply clarify
apply (drule assms(2))
apply (inst_existentials "butlast xs" "tl ys @- flat xss'" "last xs" "hd ys")
apply (cases ys)
apply (simp; fail)
subgoal premises prems for x1 x2 z zs
proof (cases "xs = []")
case True
with prems show ?thesis
by auto
next
case False
then have "xs = butlast xs @ [last xs]" by auto
then have "butlast xs @- last xs ## tail = xs @- tail" for tail
by (metis shift.simps(1,2) shift_append)
with prems show ?thesis by simp
qed
apply (simp; fail)
apply assumption
subgoal for ws wss
by (inst_existentials ys ws wss) (cases ys, auto)
done
qed
qed
lemma steps_non_empty[simp]:
"\<not> steps []"
by (auto elim: steps.cases)
lemma steps_non_empty'[simp]:
"xs \<noteq> []" if "steps xs"
using that by auto
(* XXX Generalize *)
lemma steps_replicate:
"steps (hd xs # concat (replicate n (tl xs)))" if "last xs = hd xs" "steps xs" "n > 0"
using that
proof (induction n)
case 0
then show ?case by simp
next
case (Suc n)
show ?case
proof (cases n)
case 0
with Suc.prems show ?thesis by (cases xs; auto)
next
case prems: (Suc nat)
from Suc.prems have [simp]: "hd xs # tl xs @ ys = xs @ ys" for ys
by (cases xs; auto)
from Suc.prems have **: "tl xs @ ys = tl (xs @ ys)" for ys
by (cases xs; auto)
from prems Suc show ?thesis
by (fastforce intro: steps_append')
qed
qed
notation E ("_ \<rightarrow> _" [100, 100] 40)
abbreviation reaches ("_ \<rightarrow>* _" [100, 100] 40) where "reaches x y \<equiv> E\<^sup>*\<^sup>* x y"
abbreviation reaches1 ("_ \<rightarrow>\<^sup>+ _" [100, 100] 40) where "reaches1 x y \<equiv> E\<^sup>+\<^sup>+ x y"
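(* \<rightarrow> is a single edge, \<rightarrow>* the reflexive-transitive closure,
   and \<rightarrow>\<^sup>+ the transitive closure of E *)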
lemma steps_reaches:
"hd xs \<rightarrow>* last xs" if "steps xs"
using that by (induction xs) auto
lemma steps_reaches':
"x \<rightarrow>* y" if "steps xs" "hd xs = x" "last xs = y"
using that steps_reaches by auto
lemma reaches_steps:
"\<exists> xs. hd xs = x \<and> last xs = y \<and> steps xs" if "x \<rightarrow>* y"
using that
apply (induction)
apply force
apply clarsimp
subgoal for z xs
by (inst_existentials "xs @ [z]", (cases xs; simp), auto intro: steps_append_single)
done
lemma reaches_steps_iff:
"x \<rightarrow>* y \<longleftrightarrow> (\<exists> xs. hd xs = x \<and> last xs = y \<and> steps xs)"
using steps_reaches reaches_steps by fast
lemma steps_reaches1:
"x \<rightarrow>\<^sup>+ y" if "steps (x # xs @ [y])"
by (metis list.sel(1,3) rtranclp_into_tranclp2 snoc_eq_iff_butlast steps.cases steps_reaches that)
lemma stepsI:
"steps (x # xs)" if "x \<rightarrow> hd xs" "steps xs"
using that by (cases xs) auto
lemma reaches1_steps:
"\<exists> xs. steps (x # xs @ [y])" if "x \<rightarrow>\<^sup>+ y"
proof -
from that obtain z where "x \<rightarrow> z" "z \<rightarrow>* y"
by atomize_elim (simp add: tranclpD)
from reaches_steps[OF this(2)] obtain xs where *: "hd xs = z" "last xs = y" "steps xs"
by auto
then obtain xs' where [simp]: "xs = xs' @ [y]"
by atomize_elim (auto 4 3 intro: append_butlast_last_id[symmetric])
with \<open>x \<rightarrow> z\<close> * show ?thesis
by (auto intro: stepsI)
qed
lemma reaches1_steps_iff:
"x \<rightarrow>\<^sup>+ y \<longleftrightarrow> (\<exists> xs. steps (x # xs @ [y]))"
using steps_reaches1 reaches1_steps by fast
lemma reaches_steps_iff2:
"x \<rightarrow>* y \<longleftrightarrow> (x = y \<or> (\<exists>vs. steps (x # vs @ [y])))"
by (simp add: Nitpick.rtranclp_unfold reaches1_steps_iff)
lemma reaches1_reaches_iff1:
"x \<rightarrow>\<^sup>+ y \<longleftrightarrow> (\<exists> z. x \<rightarrow> z \<and> z \<rightarrow>* y)"
by (auto dest: tranclpD)
lemma
"x \<rightarrow>\<^sup>+ z" if "x \<rightarrow>* y" "y \<rightarrow>\<^sup>+ z"
using that by auto
lemma
"x \<rightarrow>\<^sup>+ z" if "x \<rightarrow>\<^sup>+ y" "y \<rightarrow>* z"
using that by auto
lemma steps_append2:
"steps (xs @ x # ys)" if "steps (xs @ [x])" "steps (x # ys)"
using that by (auto dest: steps_append)
lemma reaches1_steps_append:
assumes "a \<rightarrow>\<^sup>+ b" "steps xs" "hd xs = b"
shows "\<exists> ys. steps (a # ys @ xs)"
using assms by (fastforce intro: steps_append' dest: reaches1_steps)
lemma steps_last_step:
"\<exists> a. a \<rightarrow> last xs" if "steps xs" "length xs > 1"
using that by induction auto
lemma steps_remove_cycleE:
assumes "steps (a # xs @ [b])"
obtains ys where "steps (a # ys @ [b])" "distinct ys" "a \<notin> set ys" "b \<notin> set ys" "set ys \<subseteq> set xs"
using assms
proof (induction "length xs" arbitrary: xs rule: less_induct)
case less
note prems = less.prems(2) and intro = less.prems(1) and IH = less.hyps
consider
"distinct xs" "a \<notin> set xs" "b \<notin> set xs" | "a \<in> set xs" | "b \<in> set xs" | "\<not> distinct xs"
by auto
then consider (goal) ?case
| (a) as bs where "xs = as @ a # bs" | (b) as bs where "xs = as @ b # bs"
| (between) x as bs cs where "xs = as @ x # bs @ x # cs"
using prems by (cases; fastforce dest: not_distinct_decomp simp: split_list intro: intro)
then show ?case
proof cases
case a
with prems show ?thesis
by - (rule IH[where xs = bs], auto 4 3 intro: intro dest: stepsD)
next
case b
with prems have "steps (a # as @ b # [] @ (bs @ [b]))"
by simp
then have "steps (a # as @ [b])"
by (metis Cons_eq_appendI Graph_Defs.steps_appendD1 append_eq_appendI neq_Nil_conv)
with b show ?thesis
by - (rule IH[where xs = as], auto 4 3 dest: stepsD intro: intro)
next
case between
with prems have "steps (a # as @ x # cs @ [b])"
by simp (metis
stepsI append_Cons list.distinct(1) list.sel(1) list.sel(3) steps_append steps_decomp)
with between show ?thesis
by - (rule IH[where xs = "as @ x # cs"], auto 4 3 intro: intro dest: stepsD)
qed
qed
lemma reaches1_stepsE:
assumes "a \<rightarrow>\<^sup>+ b"
obtains xs where "steps (a # xs @ [b])" "distinct xs" "a \<notin> set xs" "b \<notin> set xs"
proof -
from assms obtain xs where "steps (a # xs @ [b])"
by (auto dest: reaches1_steps)
then show ?thesis
by - (erule steps_remove_cycleE, rule that)
qed
lemma reaches_stepsE:
assumes "a \<rightarrow>* b"
obtains "a = b" | xs where "steps (a # xs @ [b])" "distinct xs" "a \<notin> set xs" "b \<notin> set xs"
proof -
from assms consider "a = b" | xs where "a \<rightarrow>\<^sup>+ b"
by (meson rtranclpD)
then show ?thesis
by cases ((erule reaches1_stepsE)?; rule that; assumption)+
qed
definition sink where
"sink a \<equiv> \<nexists>b. a \<rightarrow> b"
lemma sink_or_cycle:
assumes "finite {b. reaches a b}"
obtains b where "reaches a b" "sink b" | b where "reaches a b" "reaches1 b b"
proof -
let ?S = "{b. reaches1 a b}"
have "?S \<subseteq> {b. reaches a b}"
by auto
then have "finite ?S"
using assms by (rule finite_subset)
then show ?thesis
using that
proof (induction ?S arbitrary: a rule: finite_psubset_induct)
case psubset
consider (empty) "Collect (reaches1 a) = {}" | b where "reaches1 a b"
by auto
then show ?case
proof cases
case empty
then have "sink a"
unfolding sink_def by auto
with psubset.prems show ?thesis
by auto
next
case 2
show ?thesis
proof (cases "reaches b a")
case True
with \<open>reaches1 a b\<close> have "reaches1 a a"
by auto
with psubset.prems show ?thesis
by auto
next
case False
show ?thesis
proof (cases "reaches1 b b")
case True
with \<open>reaches1 a b\<close> psubset.prems show ?thesis
by (auto intro: tranclp_into_rtranclp)
next
case False
with \<open>\<not> reaches b a\<close> \<open>reaches1 a b\<close> have "Collect (reaches1 b) \<subset> Collect (reaches1 a)"
by (intro psubsetI) auto
then show ?thesis
using \<open>reaches1 a b\<close> psubset.prems
by - (erule psubset.hyps; meson tranclp_into_rtranclp tranclp_rtranclp_tranclp)
qed
qed
qed
qed
qed
text \<open>
A directed graph in which every node has at least one incoming edge contains a directed cycle.
\<close>
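(* For instance, with S = {a, b} and edges a \<rightarrow> b and b \<rightarrow> a, every node has
   an incoming edge, and a \<rightarrow> b \<rightarrow> a is such a cycle. *)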
lemma directed_graph_indegree_ge_1_cycle':
assumes "finite S" "S \<noteq> {}" "\<forall> y \<in> S. \<exists> x \<in> S. E x y"
shows "\<exists> x \<in> S. \<exists> y. E x y \<and> E\<^sup>*\<^sup>* y x"
using assms
proof (induction arbitrary: E rule: finite_ne_induct)
case (singleton x)
then show ?case by auto
next
case (insert x S E)
from insert.prems obtain y where "y \<in> insert x S" "E y x"
by auto
show ?case
proof (cases "y = x")
case True
with \<open>E y x\<close> show ?thesis by auto
next
case False
with \<open>y \<in> _\<close> have "y \<in> S" by auto
define E' where "E' a b \<equiv> E a b \<or> (a = y \<and> E x b)" for a b
have E'_E: "\<exists> c. E a c \<and> E\<^sup>*\<^sup>* c b" if "E' a b" for a b
using that \<open>E y x\<close> unfolding E'_def by auto
have [intro]: "E\<^sup>*\<^sup>* a b" if "E' a b" for a b
using that \<open>E y x\<close> unfolding E'_def by auto
have [intro]: "E\<^sup>*\<^sup>* a b" if "E'\<^sup>*\<^sup>* a b" for a b
using that by (induction; blast intro: rtranclp_trans)
have "\<forall>y\<in>S. \<exists>x\<in>S. E' x y"
proof (rule ballI)
fix b assume "b \<in> S"
with insert.prems obtain a where "a \<in> insert x S" "E a b"
by auto
show "\<exists>a\<in>S. E' a b"
proof (cases "a = x")
case True
with \<open>E a b\<close> have "E' y b" unfolding E'_def by simp
with \<open>y \<in> S\<close> show ?thesis ..
next
case False
with \<open>a \<in> _\<close> \<open>E a b\<close> show ?thesis unfolding E'_def by auto
qed
qed
from insert.IH[OF this] guess x y by safe
then show ?thesis by (blast intro: rtranclp_trans dest: E'_E)
qed
qed
lemma directed_graph_indegree_ge_1_cycle:
assumes "finite S" "S \<noteq> {}" "\<forall> y \<in> S. \<exists> x \<in> S. E x y"
shows "\<exists> x \<in> S. \<exists> y. x \<rightarrow>\<^sup>+ x"
using directed_graph_indegree_ge_1_cycle'[OF assms] reaches1_reaches_iff1 by blast
text \<open>Vertices of a graph\<close>
definition "vertices = {x. \<exists>y. E x y \<or> E y x}"
lemma reaches1_verts:
assumes "x \<rightarrow>\<^sup>+ y"
shows "x \<in> vertices" and "y \<in> vertices"
using assms reaches1_reaches_iff2 reaches1_reaches_iff1 vertices_def by blast+
lemmas graphI =
steps.intros
steps_append_single
steps_reaches'
stepsI
end (* Graph Defs *)
section \<open>Graphs with a Start Node\<close>
locale Graph_Start_Defs = Graph_Defs +
fixes s\<^sub>0 :: 'a
begin
definition reachable where
"reachable = E\<^sup>*\<^sup>* s\<^sub>0"
lemma start_reachable[intro!, simp]:
"reachable s\<^sub>0"
unfolding reachable_def by auto
lemma reachable_step:
"reachable b" if "reachable a" "E a b"
using that unfolding reachable_def by auto
lemma reachable_reaches:
"reachable b" if "reachable a" "a \<rightarrow>* b"
using that(2,1) by induction (auto intro: reachable_step)
lemma reachable_steps_append:
assumes "reachable a" "steps xs" "hd xs = a" "last xs = b"
shows "reachable b"
using assms by (auto intro: graphI reachable_reaches)
lemmas steps_reachable = reachable_steps_append[of s\<^sub>0, simplified]
lemma reachable_steps_elem:
"reachable y" if "reachable x" "steps xs" "y \<in> set xs" "hd xs = x"
proof -
from \<open>y \<in> set xs\<close> obtain as bs where [simp]: "xs = as @ y # bs"
by (auto simp: in_set_conv_decomp)
show ?thesis
proof (cases "as = []")
case True
with that show ?thesis
by simp
next
case False
(* XXX *)
from \<open>steps xs\<close> have "steps (as @ [y])"
by (auto intro: stepsD)
with \<open>as \<noteq> []\<close> \<open>hd xs = x\<close> \<open>reachable x\<close> show ?thesis
by (auto 4 3 intro: reachable_reaches graphI)
qed
qed
lemma reachable_steps:
"\<exists> xs. steps xs \<and> hd xs = s\<^sub>0 \<and> last xs = x" if "reachable x"
using that unfolding reachable_def
proof induction
case base
then show ?case by (inst_existentials "[s\<^sub>0]"; force)
next
case (step y z)
from step.IH guess xs by clarify
with step.hyps show ?case
apply (inst_existentials "xs @ [z]")
apply (force intro: graphI)
by (cases xs; auto)+
qed
lemma reachable_cycle_iff:
"reachable x \<and> x \<rightarrow>\<^sup>+ x \<longleftrightarrow> (\<exists> ws xs. steps (s\<^sub>0 # ws @ [x] @ xs @ [x]))"
proof (safe, goal_cases)
case (2 ws)
then show ?case
by (auto intro: steps_reachable stepsD)
next
case (3 ws xs)
then show ?case
by (auto intro: stepsD steps_reaches1)
next
case prems: 1
from \<open>reachable x\<close> prems(2) have "s\<^sub>0 \<rightarrow>\<^sup>+ x"
unfolding reachable_def by auto
with \<open>x \<rightarrow>\<^sup>+ x\<close> show ?case
by (fastforce intro: steps_append' dest: reaches1_steps)
qed
lemma reachable_induct[consumes 1, case_names start step, induct pred: reachable]:
assumes "reachable x"
and "P s\<^sub>0"
and "\<And> a b. reachable a \<Longrightarrow> P a \<Longrightarrow> a \<rightarrow> b \<Longrightarrow> P b"
shows "P x"
using assms(1) unfolding reachable_def
by induction (auto intro: assms(2-)[unfolded reachable_def])
lemmas graphI_aggressive =
tranclp_into_rtranclp
rtranclp.rtrancl_into_rtrancl
tranclp.trancl_into_trancl
rtranclp_into_tranclp2
lemmas graphI_aggressive1 =
graphI_aggressive
steps_append'
lemmas graphI_aggressive2 =
graphI_aggressive
stepsD
steps_reaches1
steps_reachable
lemmas graphD =
reaches1_steps
lemmas graphD_aggressive =
tranclpD
lemmas graph_startI =
reachable_reaches
start_reachable
end (* Graph Start Defs *)
section \<open>Subgraphs\<close>
subsection \<open>Edge-induced Subgraphs\<close>
locale Subgraph_Defs = G: Graph_Defs +
fixes E' :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
begin
sublocale G': Graph_Defs E' .
end (* Subgraph Defs *)
locale Subgraph_Start_Defs = G: Graph_Start_Defs +
fixes E' :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
begin
sublocale G': Graph_Start_Defs E' s\<^sub>0 .
end (* Subgraph Start Defs *)
locale Subgraph = Subgraph_Defs +
assumes subgraph[intro]: "E' a b \<Longrightarrow> E a b"
begin
lemma non_subgraph_cycle_decomp:
"\<exists> c d. G.reaches a c \<and> E c d \<and> \<not> E' c d \<and> G.reaches d b" if
"G.reaches1 a b" "\<not> G'.reaches1 a b" for a b
using that
proof induction
case (base y)
then show ?case
by auto
next
case (step y z)
show ?case
proof (cases "E' y z")
case True
with step have "\<not> G'.reaches1 a y"
by (auto intro: tranclp.trancl_into_trancl) (* XXX *)
with step obtain c d where
"G.reaches a c" "E c d" "\<not> E' c d" "G.reaches d y"
by auto
with \<open>E' y z\<close> show ?thesis
by (blast intro: rtranclp.rtrancl_into_rtrancl) (* XXX *)
next
case False
with step show ?thesis
by (intro exI conjI) auto
qed
qed
lemma reaches:
"G.reaches a b" if "G'.reaches a b"
using that by induction (auto intro: rtranclp.intros(2))
lemma reaches1:
"G.reaches1 a b" if "G'.reaches1 a b"
using that by induction (auto intro: tranclp.intros(2))
end (* Subgraph *)
locale Subgraph_Start = Subgraph_Start_Defs + Subgraph
begin
lemma reachable_subgraph[intro]: "G.reachable b" if \<open>G.reachable a\<close> \<open>G'.reaches a b\<close> for a b
using that by (auto intro: G.graph_startI mono_rtranclp[rule_format, of E'])
lemma reachable:
"G.reachable x" if "G'.reachable x"
using that by (fastforce simp: G.reachable_def G'.reachable_def)
end (* Subgraph Start *)
subsection \<open>Node-induced Subgraphs\<close>
locale Subgraph_Node_Defs = Graph_Defs +
fixes V :: "'a \<Rightarrow> bool"
begin
definition E' where "E' x y \<equiv> E x y \<and> V x \<and> V y"
sublocale Subgraph E E' by standard (auto simp: E'_def)
lemma subgraph':
"E' x y" if "E x y" "V x" "V y"
using that unfolding E'_def by auto
lemma E'_V1:
"V x" if "E' x y"
using that unfolding E'_def by auto
lemma E'_V2:
"V y" if "E' x y"
using that unfolding E'_def by auto
lemma G'_reaches_V:
"V y" if "G'.reaches x y" "V x"
using that by (cases) (auto intro: E'_V2)
lemma G'_steps_V_all:
"list_all V xs" if "G'.steps xs" "V (hd xs)"
using that by induction (auto intro: E'_V2)
lemma G'_steps_V_last:
"V (last xs)" if "G'.steps xs" "V (hd xs)"
using that by induction (auto dest: E'_V2)
lemmas subgraphI = E'_V1 E'_V2 G'_reaches_V
lemmas subgraphD = E'_V1 E'_V2 G'_reaches_V
end (* Subgraph Node *)
locale Subgraph_Node_Defs_Notation = Subgraph_Node_Defs
begin
no_notation E ("_ \<rightarrow> _" [100, 100] 40)
notation E' ("_ \<rightarrow> _" [100, 100] 40)
no_notation reaches ("_ \<rightarrow>* _" [100, 100] 40)
notation G'.reaches ("_ \<rightarrow>* _" [100, 100] 40)
no_notation reaches1 ("_ \<rightarrow>\<^sup>+ _" [100, 100] 40)
notation G'.reaches1 ("_ \<rightarrow>\<^sup>+ _" [100, 100] 40)
end (* Subgraph_Node_Defs_Notation *)
subsection \<open>The Reachable Subgraph\<close>
context Graph_Start_Defs
begin
interpretation Subgraph_Node_Defs_Notation E reachable .
sublocale reachable_subgraph: Subgraph_Node_Defs E reachable .
lemma reachable_supgraph:
"x \<rightarrow> y" if "E x y" "reachable x"
using that unfolding E'_def by (auto intro: graph_startI)
lemma reachable_reaches_equiv: "reaches x y \<longleftrightarrow> x \<rightarrow>* y" if "reachable x" for x y
apply standard
subgoal premises prems
using prems \<open>reachable x\<close>
by induction (auto dest: reachable_supgraph intro: graph_startI graphI_aggressive)
subgoal premises prems
using prems \<open>reachable x\<close>
by induction (auto dest: subgraph)
done
lemma reachable_reaches1_equiv: "reaches1 x y \<longleftrightarrow> x \<rightarrow>\<^sup>+ y" if "reachable x" for x y
apply standard
subgoal premises prems
using prems \<open>reachable x\<close>
by induction (auto dest: reachable_supgraph intro: graph_startI graphI_aggressive)
subgoal premises prems
using prems \<open>reachable x\<close>
by induction (auto dest: subgraph)
done
lemma reachable_steps_equiv:
"steps (x # xs) \<longleftrightarrow> G'.steps (x # xs)" if "reachable x"
apply standard
subgoal premises prems
using prems \<open>reachable x\<close>
by (induction "x # xs" arbitrary: x xs) (auto dest: reachable_supgraph intro: graph_startI)
subgoal premises prems
using prems by induction auto
done
end (* Graph Start Defs *)
section \<open>Bundles\<close>
bundle graph_automation
begin
lemmas [intro] = Graph_Defs.graphI Graph_Start_Defs.graph_startI
lemmas [dest] = Graph_Start_Defs.graphD
end (* Bundle *)
bundle reaches_steps_iff =
Graph_Defs.reaches1_steps_iff [iff]
Graph_Defs.reaches_steps_iff [iff]
bundle graph_automation_aggressive
begin
unbundle graph_automation
lemmas [intro] = Graph_Start_Defs.graphI_aggressive
lemmas [dest] = Graph_Start_Defs.graphD_aggressive
end (* Bundle *)
bundle subgraph_automation
begin
unbundle graph_automation
lemmas [intro] = Subgraph_Node_Defs.subgraphI
lemmas [dest] = Subgraph_Node_Defs.subgraphD
end (* Bundle *)
section \<open>Directed Acyclic Graphs\<close>
locale DAG = Graph_Defs +
assumes acyclic: "\<not> E\<^sup>+\<^sup>+ x x"
begin
lemma topological_numbering:
fixes S assumes "finite S"
shows "\<exists>f :: _ \<Rightarrow> nat. inj_on f S \<and> (\<forall>x \<in> S. \<forall>y \<in> S. E x y \<longrightarrow> f x < f y)"
using assms
proof (induction rule: finite_psubset_induct)
case (psubset A)
show ?case
proof (cases "A = {}")
case True
then show ?thesis
by simp
next
case False
then obtain x where x: "x \<in> A" "\<forall>y \<in> A. \<not>E y x"
using directed_graph_indegree_ge_1_cycle[OF \<open>finite A\<close>] acyclic by auto
let ?A = "A - {x}"
from \<open>x \<in> A\<close> have "?A \<subset> A"
by auto
from psubset.IH(1)[OF this] obtain f :: "_ \<Rightarrow> nat" where f:
"inj_on f ?A" "\<forall>x\<in>?A. \<forall>y\<in>?A. x \<rightarrow> y \<longrightarrow> f x < f y"
by blast
let ?f = "\<lambda>y. if x \<noteq> y then f y + 1 else 0"
from \<open>x \<in> A\<close> have "A = insert x ?A"
by auto
from \<open>inj_on f ?A\<close> have "inj_on ?f A"
by (auto simp: inj_on_def)
moreover from f(2) x(2) have "\<forall>x\<in>A. \<forall>y\<in>A. x \<rightarrow> y \<longrightarrow> ?f x < ?f y"
by auto
ultimately show ?thesis
by blast
qed
qed
end
section \<open>Finite Graphs\<close>
locale Finite_Graph = Graph_Defs +
assumes finite_graph: "finite vertices"
locale Finite_DAG = Finite_Graph + DAG
begin
lemma finite_reachable:
"finite {y. x \<rightarrow>* y}" (is "finite ?S")
proof -
have "?S \<subseteq> insert x vertices"
by (metis insertCI mem_Collect_eq reaches1_verts(2) rtranclpD subsetI)
also from finite_graph have "finite \<dots>" ..
finally show ?thesis .
qed
end
section \<open>Graph Invariants\<close>
locale Graph_Invariant = Graph_Defs +
fixes P :: "'a \<Rightarrow> bool"
assumes invariant: "P a \<Longrightarrow> a \<rightarrow> b \<Longrightarrow> P b"
begin
lemma invariant_steps:
"list_all P as" if "steps (a # as)" "P a"
using that by (induction "a # as" arbitrary: as a) (auto intro: invariant)
lemma invariant_reaches:
"P b" if "a \<rightarrow>* b" "P a"
using that by (induction; blast intro: invariant)
lemma invariant_run:
assumes run: "run (x ## xs)" and P: "P x"
shows "pred_stream P (x ## xs)"
using run P by (coinduction arbitrary: x xs) (auto 4 3 elim: invariant run.cases)
text \<open>Every graph invariant induces a subgraph.\<close>
sublocale Subgraph_Node_Defs where E = E and V = P .
lemma subgraph':
assumes "x \<rightarrow> y" "P x"
shows "E' x y"
using assms by (intro subgraph') (auto intro: invariant)
lemma invariant_steps_iff:
"G'.steps (v # vs) \<longleftrightarrow> steps (v # vs)" if "P v"
apply (rule iffI)
subgoal
using G'.steps_alt_induct steps_appendI by blast
subgoal premises prems
using prems \<open>P v\<close> by (induction "v # vs" arbitrary: v vs) (auto intro: subgraph' invariant)
done
lemma invariant_reaches_iff:
"G'.reaches u v \<longleftrightarrow> reaches u v" if "P u"
using that by (simp add: reaches_steps_iff2 G'.reaches_steps_iff2 invariant_steps_iff)
lemma invariant_reaches1_iff:
"G'.reaches1 u v \<longleftrightarrow> reaches1 u v" if "P u"
using that by (simp add: reaches1_steps_iff G'.reaches1_steps_iff invariant_steps_iff)
end (* Graph Invariant *)
locale Graph_Invariants = Graph_Defs +
fixes P Q :: "'a \<Rightarrow> bool"
assumes invariant: "P a \<Longrightarrow> a \<rightarrow> b \<Longrightarrow> Q b" and Q_P: "Q a \<Longrightarrow> P a"
begin
sublocale Pre: Graph_Invariant E P
by standard (blast intro: invariant Q_P)
sublocale Post: Graph_Invariant E Q
by standard (blast intro: invariant Q_P)
lemma invariant_steps:
"list_all Q as" if "steps (a # as)" "P a"
using that by (induction "a # as" arbitrary: as a) (auto intro: invariant Q_P)
lemma invariant_run:
assumes run: "run (x ## xs)" and P: "P x"
shows "pred_stream Q xs"
using run P by (coinduction arbitrary: x xs) (auto 4 4 elim: invariant run.cases intro: Q_P)
lemma invariant_reaches1:
"Q b" if "a \<rightarrow>\<^sup>+ b" "P a"
using that by (induction; blast intro: invariant Q_P)
end (* Graph Invariants *)
locale Graph_Invariant_Start = Graph_Start_Defs + Graph_Invariant +
assumes P_s\<^sub>0: "P s\<^sub>0"
begin
lemma invariant_steps:
"list_all P as" if "steps (s\<^sub>0 # as)"
using that P_s\<^sub>0 by (rule invariant_steps)
lemma invariant_reaches:
"P b" if "s\<^sub>0 \<rightarrow>* b"
using invariant_reaches[OF that P_s\<^sub>0] .
lemmas invariant_run = invariant_run[OF _ P_s\<^sub>0]
end (* Graph Invariant Start *)
locale Graph_Invariant_Strong = Graph_Defs +
fixes P :: "'a \<Rightarrow> bool"
assumes invariant: "a \<rightarrow> b \<Longrightarrow> P b"
begin
sublocale inv: Graph_Invariant by standard (rule invariant)
lemma P_invariant_steps:
"list_all P as" if "steps (a # as)"
using that by (induction "a # as" arbitrary: as a) (auto intro: invariant)
lemma steps_last_invariant:
"P (last xs)" if "steps (x # xs)" "xs \<noteq> []"
using steps_last_step[of "x # xs"] that by (auto intro: invariant)
lemmas invariant_reaches = inv.invariant_reaches
lemma invariant_reaches1:
"P b" if "a \<rightarrow>\<^sup>+ b"
using that by (induction; blast intro: invariant)
end (* Graph Invariant *)
section \<open>Simulations and Bisimulations\<close>
locale Simulation_Defs =
fixes A :: "'a \<Rightarrow> 'a \<Rightarrow> bool" and B :: "'b \<Rightarrow> 'b \<Rightarrow> bool"
and sim :: "'a \<Rightarrow> 'b \<Rightarrow> bool" (infixr "\<sim>" 60)
begin
sublocale A: Graph_Defs A .
sublocale B: Graph_Defs B .
end (* Simulation Defs *)
locale Simulation = Simulation_Defs +
assumes A_B_step: "\<And> a b a'. A a b \<Longrightarrow> a \<sim> a' \<Longrightarrow> (\<exists> b'. B a' b' \<and> b \<sim> b')"
begin
lemma simulation_reaches:
"\<exists> b'. B\<^sup>*\<^sup>* b b' \<and> a' \<sim> b'" if "A\<^sup>*\<^sup>* a a'" "a \<sim> b"
using that by (induction rule: rtranclp_induct) (auto intro: rtranclp.intros(2) dest: A_B_step)
lemma simulation_reaches1:
"\<exists> b'. B\<^sup>+\<^sup>+ b b' \<and> a' \<sim> b'" if "A\<^sup>+\<^sup>+ a a'" "a \<sim> b"
using that by (induction rule: tranclp_induct) (auto 4 3 intro: tranclp.intros(2) dest: A_B_step)
lemma simulation_steps:
"\<exists> bs. B.steps (b # bs) \<and> list_all2 (\<lambda> a b. a \<sim> b) as bs" if "A.steps (a # as)" "a \<sim> b"
using that
apply (induction "a # as" arbitrary: a b as)
apply force
apply (frule A_B_step, auto)
done
lemma simulation_run:
"\<exists> ys. B.run (y ## ys) \<and> stream_all2 (\<sim>) xs ys" if "A.run (x ## xs)" "x \<sim> y"
proof -
let ?ys = "sscan (\<lambda> a' b. SOME b'. B b b' \<and> a' \<sim> b') xs y"
have "B.run (y ## ?ys)"
using that by (coinduction arbitrary: x y xs) (force dest!: someI_ex A_B_step elim: A.run.cases)
moreover have "stream_all2 (\<sim>) xs ?ys"
using that by (coinduction arbitrary: x y xs) (force dest!: someI_ex A_B_step elim: A.run.cases)
ultimately show ?thesis by blast
qed
end (* Simulation *)
lemma (in Subgraph) Subgraph_Simulation:
"Simulation E' E (=)"
by standard auto
locale Simulation_Invariant = Simulation_Defs +
fixes PA :: "'a \<Rightarrow> bool" and PB :: "'b \<Rightarrow> bool"
assumes A_B_step: "\<And> a b a'. A a b \<Longrightarrow> PA a \<Longrightarrow> PB a' \<Longrightarrow> a \<sim> a' \<Longrightarrow> (\<exists> b'. B a' b' \<and> b \<sim> b')"
assumes A_invariant[intro]: "\<And> a b. PA a \<Longrightarrow> A a b \<Longrightarrow> PA b"
assumes B_invariant[intro]: "\<And> a b. PB a \<Longrightarrow> B a b \<Longrightarrow> PB b"
begin
definition "equiv' \<equiv> \<lambda> a b. a \<sim> b \<and> PA a \<and> PB b"
sublocale Simulation A B equiv' by standard (auto dest: A_B_step simp: equiv'_def)
sublocale PA_invariant: Graph_Invariant A PA by standard blast
sublocale PB_invariant: Graph_Invariant B PB by standard blast
lemma simulation_reaches:
"\<exists> b'. B\<^sup>*\<^sup>* b b' \<and> a' \<sim> b' \<and> PA a' \<and> PB b'" if "A\<^sup>*\<^sup>* a a'" "a \<sim> b" "PA a" "PB b"
using simulation_reaches[of a a' b] that unfolding equiv'_def by simp
lemma simulation_steps:
"\<exists> bs. B.steps (b # bs) \<and> list_all2 (\<lambda> a b. a \<sim> b \<and> PA a \<and> PB b) as bs"
if "A.steps (a # as)" "a \<sim> b" "PA a" "PB b"
using simulation_steps[of a as b] that unfolding equiv'_def by simp
lemma simulation_steps':
"\<exists> bs. B.steps (b # bs) \<and> list_all2 (\<lambda> a b. a \<sim> b) as bs \<and> list_all PA as \<and> list_all PB bs"
if "A.steps (a # as)" "a \<sim> b" "PA a" "PB b"
using simulation_steps[OF that]
by (force dest: list_all2_set1 list_all2_set2 simp: list_all_iff elim: list_all2_mono)
context
fixes f
assumes eq: "a \<sim> b \<Longrightarrow> b = f a"
begin
lemma simulation_steps'_map:
"\<exists> bs.
B.steps (b # bs) \<and> bs = map f as
\<and> list_all2 (\<lambda> a b. a \<sim> b) as bs
\<and> list_all PA as \<and> list_all PB bs"
if "A.steps (a # as)" "a \<sim> b" "PA a" "PB b"
proof -
from simulation_steps'[OF that] guess bs by clarify
note guessed = this
from this(2) have "bs = map f as"
by (induction; simp add: eq)
with guessed show ?thesis
by auto
qed
end (* Context for Equality Relation *)
end (* Simulation Invariant *)
locale Simulation_Invariants = Simulation_Defs +
fixes PA QA :: "'a \<Rightarrow> bool" and PB QB :: "'b \<Rightarrow> bool"
assumes A_B_step: "\<And> a b a'. A a b \<Longrightarrow> PA a \<Longrightarrow> PB a' \<Longrightarrow> a \<sim> a' \<Longrightarrow> (\<exists> b'. B a' b' \<and> b \<sim> b')"
assumes A_invariant[intro]: "\<And> a b. PA a \<Longrightarrow> A a b \<Longrightarrow> QA b"
assumes B_invariant[intro]: "\<And> a b. PB a \<Longrightarrow> B a b \<Longrightarrow> QB b"
assumes PA_QA[intro]: "\<And> a. QA a \<Longrightarrow> PA a" and PB_QB[intro]: "\<And> a. QB a \<Longrightarrow> PB a"
begin
sublocale Pre: Simulation_Invariant A B "(\<sim>)" PA PB
by standard (auto intro: A_B_step)
sublocale Post: Simulation_Invariant A B "(\<sim>)" QA QB
by standard (auto intro: A_B_step)
sublocale A_invs: Graph_Invariants A PA QA
by standard auto
sublocale B_invs: Graph_Invariants B PB QB
by standard auto
lemma simulation_reaches1:
"\<exists> b2. B.reaches1 b1 b2 \<and> a2 \<sim> b2 \<and> QB b2" if "A.reaches1 a1 a2" "a1 \<sim> b1" "PA a1" "PB b1"
using that
by - (drule Pre.simulation_reaches1, auto intro: B_invs.invariant_reaches1 simp: Pre.equiv'_def)
lemma reaches1_unique:
assumes unique: "\<And> b2. a \<sim> b2 \<Longrightarrow> QB b2 \<Longrightarrow> b2 = b"
and that: "A.reaches1 a a" "a \<sim> b" "PA a" "PB b"
shows "B.reaches1 b b"
using that by (auto dest: unique simulation_reaches1)
end (* Simulation Invariants *)
locale Bisimulation = Simulation_Defs +
assumes A_B_step: "\<And> a b a'. A a b \<Longrightarrow> a \<sim> a' \<Longrightarrow> (\<exists> b'. B a' b' \<and> b \<sim> b')"
assumes B_A_step: "\<And> a a' b'. B a' b' \<Longrightarrow> a \<sim> a' \<Longrightarrow> (\<exists> b. A a b \<and> b \<sim> b')"
begin
sublocale A_B: Simulation A B "(\<sim>)" by standard (rule A_B_step)
sublocale B_A: Simulation B A "\<lambda> x y. y \<sim> x" by standard (rule B_A_step)
lemma A_B_reaches:
"\<exists> b'. B\<^sup>*\<^sup>* b b' \<and> a' \<sim> b'" if "A\<^sup>*\<^sup>* a a'" "a \<sim> b"
using A_B.simulation_reaches[OF that] .
lemma B_A_reaches:
"\<exists> b'. A\<^sup>*\<^sup>* b b' \<and> b' \<sim> a'" if "B\<^sup>*\<^sup>* a a'" "b \<sim> a"
using B_A.simulation_reaches[OF that] .
end (* Bisim *)
locale Bisimulation_Invariant = Simulation_Defs +
fixes PA :: "'a \<Rightarrow> bool" and PB :: "'b \<Rightarrow> bool"
assumes A_B_step: "\<And> a b a'. A a b \<Longrightarrow> a \<sim> a' \<Longrightarrow> PA a \<Longrightarrow> PB a' \<Longrightarrow> (\<exists> b'. B a' b' \<and> b \<sim> b')"
assumes B_A_step: "\<And> a a' b'. B a' b' \<Longrightarrow> a \<sim> a' \<Longrightarrow> PA a \<Longrightarrow> PB a' \<Longrightarrow> (\<exists> b. A a b \<and> b \<sim> b')"
assumes A_invariant[intro]: "\<And> a b. PA a \<Longrightarrow> A a b \<Longrightarrow> PA b"
assumes B_invariant[intro]: "\<And> a b. PB a \<Longrightarrow> B a b \<Longrightarrow> PB b"
begin
sublocale PA_invariant: Graph_Invariant A PA by standard blast
sublocale PB_invariant: Graph_Invariant B PB by standard blast
lemmas B_steps_invariant[intro] = PB_invariant.invariant_reaches
definition "equiv' \<equiv> \<lambda> a b. a \<sim> b \<and> PA a \<and> PB b"
sublocale bisim: Bisimulation A B equiv'
by standard (clarsimp simp add: equiv'_def, frule A_B_step B_A_step, assumption; auto)+
sublocale A_B: Simulation_Invariant A B "(\<sim>)" PA PB
by (standard; blast intro: A_B_step B_A_step)
sublocale B_A: Simulation_Invariant B A "\<lambda> x y. y \<sim> x" PB PA
by (standard; blast intro: A_B_step B_A_step)
context
fixes f
assumes eq: "a \<sim> b \<longleftrightarrow> b = f a"
and inj: "\<forall> a b. PB (f a) \<and> PA b \<and> f a = f b \<longrightarrow> a = b"
begin
lemma list_all2_inj_map_eq:
"as = bs" if "list_all2 (\<lambda>a b. a = f b) (map f as) bs" "list_all PB (map f as)" "list_all PA bs"
using that inj
by (induction "map f as" bs arbitrary: as rule: list_all2_induct) (auto simp: inj_on_def)
lemma steps_map_equiv:
"A.steps (a # as) \<longleftrightarrow> B.steps (b # map f as)" if "a \<sim> b" "PA a" "PB b"
using A_B.simulation_steps'_map[of f a as b] B_A.simulation_steps'[of b "map f as" a] that eq
by (auto dest: list_all2_inj_map_eq)
lemma steps_map:
"\<exists> as. bs = map f as" if "B.steps (f a # bs)" "PA a" "PB (f a)"
proof -
have "a \<sim> f a" unfolding eq ..
from B_A.simulation_steps'[OF that(1) this \<open>PB _\<close> \<open>PA _\<close>] guess as by clarify
from this(2) show ?thesis
unfolding eq by (inst_existentials as, induction rule: list_all2_induct, auto)
qed
lemma reaches_equiv:
"A.reaches a a' \<longleftrightarrow> B.reaches (f a) (f a')" if "PA a" "PB (f a)"
apply safe
apply (drule A_B.simulation_reaches[of a a' "f a"]; simp add: eq that)
apply (drule B_A.simulation_reaches)
defer
apply (rule that | clarsimp simp: eq | metis inj)+
done
end (* Context for Equality Relation *)
lemma equiv'_D:
"a \<sim> b" if "A_B.equiv' a b"
using that unfolding A_B.equiv'_def by auto
lemma equiv'_rotate_1:
"B_A.equiv' b a" if "A_B.equiv' a b"
using that by (auto simp: B_A.equiv'_def A_B.equiv'_def)
lemma equiv'_rotate_2:
"A_B.equiv' a b" if "B_A.equiv' b a"
using that by (auto simp: B_A.equiv'_def A_B.equiv'_def)
lemma stream_all2_equiv'_D:
"stream_all2 (\<sim>) xs ys" if "stream_all2 A_B.equiv' xs ys"
using stream_all2_weaken[OF that equiv'_D] by fast
lemma stream_all2_equiv'_D2:
"stream_all2 B_A.equiv' ys xs \<Longrightarrow> stream_all2 ((\<sim>)\<inverse>\<inverse>) ys xs"
by (coinduction arbitrary: xs ys) (auto simp: B_A.equiv'_def)
lemma stream_all2_rotate_1:
"stream_all2 B_A.equiv' ys xs \<Longrightarrow> stream_all2 A_B.equiv' xs ys"
by (coinduction arbitrary: xs ys) (auto simp: B_A.equiv'_def A_B.equiv'_def)
lemma stream_all2_rotate_2:
"stream_all2 A_B.equiv' xs ys \<Longrightarrow> stream_all2 B_A.equiv' ys xs"
by (coinduction arbitrary: xs ys) (auto simp: B_A.equiv'_def A_B.equiv'_def)
end (* Bisim Invariant *)
locale Bisimulation_Invariants = Simulation_Defs +
fixes PA QA :: "'a \<Rightarrow> bool" and PB QB :: "'b \<Rightarrow> bool"
assumes A_B_step: "\<And> a b a'. A a b \<Longrightarrow> a \<sim> a' \<Longrightarrow> PA a \<Longrightarrow> PB a' \<Longrightarrow> (\<exists> b'. B a' b' \<and> b \<sim> b')"
assumes B_A_step: "\<And> a a' b'. B a' b' \<Longrightarrow> a \<sim> a' \<Longrightarrow> PA a \<Longrightarrow> PB a' \<Longrightarrow> (\<exists> b. A a b \<and> b \<sim> b')"
assumes A_invariant[intro]: "\<And> a b. PA a \<Longrightarrow> A a b \<Longrightarrow> QA b"
assumes B_invariant[intro]: "\<And> a b. PB a \<Longrightarrow> B a b \<Longrightarrow> QB b"
assumes PA_QA[intro]: "\<And> a. QA a \<Longrightarrow> PA a" and PB_QB[intro]: "\<And> a. QB a \<Longrightarrow> PB a"
begin
sublocale PA_invariant: Graph_Invariant A PA by standard blast
sublocale PB_invariant: Graph_Invariant B PB by standard blast
sublocale QA_invariant: Graph_Invariant A QA by standard blast
sublocale QB_invariant: Graph_Invariant B QB by standard blast
sublocale Pre_Bisim: Bisimulation_Invariant A B "(\<sim>)" PA PB
by standard (auto intro: A_B_step B_A_step)
sublocale Post_Bisim: Bisimulation_Invariant A B "(\<sim>)" QA QB
by standard (auto intro: A_B_step B_A_step)
sublocale A_B: Simulation_Invariants A B "(\<sim>)" PA QA PB QB
by standard (blast intro: A_B_step)+
sublocale B_A: Simulation_Invariants B A "\<lambda> x y. y \<sim> x" PB QB PA QA
by standard (blast intro: B_A_step)+
context
fixes f
assumes eq[simp]: "a \<sim> b \<longleftrightarrow> b = f a"
and inj: "\<forall> a b. QB (f a) \<and> QA b \<and> f a = f b \<longrightarrow> a = b"
begin
lemmas list_all2_inj_map_eq = Post_Bisim.list_all2_inj_map_eq[OF eq inj]
lemmas steps_map_equiv' = Post_Bisim.steps_map_equiv[OF eq inj]
lemma steps_map_equiv:
"A.steps (a # as) \<longleftrightarrow> B.steps (b # map f as)" if "a \<sim> b" "PA a" "PB b"
proof
assume "A.steps (a # as)"
then show "B.steps (b # map f as)"
proof cases
case Single
then show ?thesis by auto
next
case prems: (Cons a' xs)
from A_B_step[OF \<open>A a a'\<close> \<open>a \<sim> b\<close> \<open>PA a\<close> \<open>PB b\<close>] obtain b' where "B b b'" "a' \<sim> b'"
by auto
with steps_map_equiv'[OF \<open>a' \<sim> b'\<close>, of xs] prems that show ?thesis
by auto
qed
next
assume "B.steps (b # map f as)"
then show "A.steps (a # as)"
proof cases
case Single
then show ?thesis by auto
next
case prems: (Cons b' xs)
from B_A_step[OF \<open>B b b'\<close> \<open>a \<sim> b\<close> \<open>PA a\<close> \<open>PB b\<close>] obtain a' where "A a a'" "a' \<sim> b'"
by auto
with that prems have "QA a'" "QB b'"
by auto
with \<open>A a a'\<close> \<open>a' \<sim> b'\<close> steps_map_equiv'[OF \<open>a' \<sim> b'\<close>, of "tl as"] prems that show ?thesis
apply clarsimp
subgoal for z zs
using inj[rule_format, of z a'] by auto
done
qed
qed
lemma steps_map:
"\<exists> as. bs = map f as" if "B.steps (f a # bs)" "PA a" "PB (f a)"
using that proof cases
case Single
then show ?thesis by simp
next
case prems: (Cons b' xs)
from B_A_step[OF \<open>B _ b'\<close> _ \<open>PA a\<close> \<open>PB (f a)\<close>] obtain a' where "A a a'" "a' \<sim> b'"
by auto
with that prems have "QA a'" "QB b'"
by auto
with Post_Bisim.steps_map[OF eq inj, of a' xs] prems \<open>a' \<sim> b'\<close> obtain ys where "xs = map f ys"
by auto
with \<open>bs = _\<close> \<open>a' \<sim> b'\<close> show ?thesis
by (inst_existentials "a' # ys") auto
qed
text \<open>
@{thm Post_Bisim.reaches_equiv} cannot be lifted directly:
injectivity cannot be applied for the reflexive case.
\<close>
lemma reaches1_equiv:
"A.reaches1 a a' \<longleftrightarrow> B.reaches1 (f a) (f a')" if "PA a" "PB (f a)"
proof safe
assume "A.reaches1 a a'"
then obtain a'' where prems: "A a a''" "A.reaches a'' a'"
including graph_automation_aggressive by blast
from A_B_step[OF \<open>A a _\<close> _ that] obtain b where "B (f a) b" "a'' \<sim> b"
by auto
with that prems have "QA a''" "QB b"
by auto
with Post_Bisim.reaches_equiv[OF eq inj, of a'' a'] prems \<open>B (f a) b\<close> \<open>a'' \<sim> b\<close>
show "B.reaches1 (f a) (f a')"
by auto
next
assume "B.reaches1 (f a) (f a')"
then obtain b where prems: "B (f a) b" "B.reaches b (f a')"
including graph_automation_aggressive by blast
from B_A_step[OF \<open>B _ b\<close> _ \<open>PA a\<close> \<open>PB (f a)\<close>] obtain a'' where "A a a''" "a'' \<sim> b"
by auto
with that prems have "QA a''" "QB b"
by auto
with Post_Bisim.reaches_equiv[OF eq inj, of a'' a'] prems \<open>A a a''\<close> \<open>a'' \<sim> b\<close>
show "A.reaches1 a a'"
by auto
qed
end (* Context for Equality Relation *)
end (* Bisim Invariant *)
lemma Bisimulation_Invariant_composition:
assumes
"Bisimulation_Invariant A B sim1 PA PB"
"Bisimulation_Invariant B C sim2 PB PC"
shows
"Bisimulation_Invariant A C (\<lambda> a c. \<exists> b. PB b \<and> sim1 a b \<and> sim2 b c) PA PC"
proof -
interpret A: Bisimulation_Invariant A B sim1 PA PB
by (rule assms(1))
interpret B: Bisimulation_Invariant B C sim2 PB PC
by (rule assms(2))
show ?thesis
by (standard; (blast dest: A.A_B_step B.A_B_step | blast dest: A.B_A_step B.B_A_step))
qed
lemma Bisimulation_Invariant_filter:
assumes
"Bisimulation_Invariant A B sim PA PB"
"\<And> a b. sim a b \<Longrightarrow> PA a \<Longrightarrow> PB b \<Longrightarrow> FA a \<longleftrightarrow> FB b"
"\<And> a b. A a b \<and> FA b \<longleftrightarrow> A' a b"
"\<And> a b. B a b \<and> FB b \<longleftrightarrow> B' a b"
shows
"Bisimulation_Invariant A' B' sim PA PB"
proof -
interpret Bisimulation_Invariant A B sim PA PB
by (rule assms(1))
have unfold:
"A' = (\<lambda> a b. A a b \<and> FA b)" "B' = (\<lambda> a b. B a b \<and> FB b)"
using assms(3,4) by auto
show ?thesis
unfolding unfold
apply standard
using assms(2) apply (blast dest: A_B_step)
using assms(2) apply (blast dest: B_A_step)
by blast+
qed
lemma Bisimulation_Invariants_filter:
assumes
"Bisimulation_Invariants A B sim PA QA PB QB"
"\<And> a b. QA a \<Longrightarrow> QB b \<Longrightarrow> FA a \<longleftrightarrow> FB b"
"\<And> a b. A a b \<and> FA b \<longleftrightarrow> A' a b"
"\<And> a b. B a b \<and> FB b \<longleftrightarrow> B' a b"
shows
"Bisimulation_Invariants A' B' sim PA QA PB QB"
proof -
interpret Bisimulation_Invariants A B sim PA QA PB QB
by (rule assms(1))
have unfold:
"A' = (\<lambda> a b. A a b \<and> FA b)" "B' = (\<lambda> a b. B a b \<and> FB b)"
using assms(3,4) by auto
show ?thesis
unfolding unfold
apply standard
using assms(2) apply (blast dest: A_B_step)
using assms(2) apply (blast dest: B_A_step)
by blast+
qed
lemma Bisimulation_Invariants_composition:
assumes
"Bisimulation_Invariants A B sim1 PA QA PB QB"
"Bisimulation_Invariants B C sim2 PB QB PC QC"
shows
"Bisimulation_Invariants A C (\<lambda> a c. \<exists> b. PB b \<and> sim1 a b \<and> sim2 b c) PA QA PC QC"
proof -
interpret A: Bisimulation_Invariants A B sim1 PA QA PB QB
by (rule assms(1))
interpret B: Bisimulation_Invariants B C sim2 PB QB PC QC
by (rule assms(2))
show ?thesis
by (standard, blast dest: A.A_B_step B.A_B_step) (blast dest: A.B_A_step B.B_A_step)+
qed
lemma Bisimulation_Invariant_Invariants_composition:
assumes
"Bisimulation_Invariant A B sim1 PA PB"
"Bisimulation_Invariants B C sim2 PB QB PC QC"
shows
"Bisimulation_Invariants A C (\<lambda> a c. \<exists> b. PB b \<and> sim1 a b \<and> sim2 b c) PA PA PC QC"
proof -
interpret Bisimulation_Invariant A B sim1 PA PB
by (rule assms(1))
interpret B: Bisimulation_Invariants B C sim2 PB QB PC QC
by (rule assms(2))
interpret A: Bisimulation_Invariants A B sim1 PA PA PB QB
by (standard; blast intro: A_B_step B_A_step)+
show ?thesis
by (standard; (blast dest: A.A_B_step B.A_B_step | blast dest: A.B_A_step B.B_A_step))
qed
lemma Bisimulation_Invariant_Bisimulation_Invariants:
assumes "Bisimulation_Invariant A B sim PA PB"
shows "Bisimulation_Invariants A B sim PA PA PB PB"
proof -
interpret Bisimulation_Invariant A B sim PA PB
by (rule assms)
show ?thesis
by (standard; blast intro: A_B_step B_A_step)
qed
lemma Bisimulation_Invariant_strengthen_post:
assumes
"Bisimulation_Invariant A B sim PA PB"
"\<And> a b. PA' a \<Longrightarrow> PA b \<Longrightarrow> A a b \<Longrightarrow> PA' b"
"\<And> a. PA' a \<Longrightarrow> PA a"
shows "Bisimulation_Invariant A B sim PA' PB"
proof -
interpret Bisimulation_Invariant A B sim PA PB
by (rule assms)
show ?thesis
by (standard; blast intro: A_B_step B_A_step assms)
qed
lemma Bisimulation_Invariant_strengthen_post':
assumes
"Bisimulation_Invariant A B sim PA PB"
"\<And> a b. PB' a \<Longrightarrow> PB b \<Longrightarrow> B a b \<Longrightarrow> PB' b"
"\<And> a. PB' a \<Longrightarrow> PB a"
shows "Bisimulation_Invariant A B sim PA PB'"
proof -
interpret Bisimulation_Invariant A B sim PA PB
by (rule assms)
show ?thesis
by (standard; blast intro: A_B_step B_A_step assms)
qed
lemma Simulation_Invariant_strengthen_post:
assumes
"Simulation_Invariant A B sim PA PB"
"\<And> a b. PA a \<Longrightarrow> PA b \<Longrightarrow> A a b \<Longrightarrow> PA' b"
"\<And> a. PA' a \<Longrightarrow> PA a"
shows "Simulation_Invariant A B sim PA' PB"
proof -
interpret Simulation_Invariant A B sim PA PB
by (rule assms)
show ?thesis
by (standard; blast intro: A_B_step assms)
qed
lemma Simulation_Invariant_strengthen_post':
assumes
"Simulation_Invariant A B sim PA PB"
"\<And> a b. PB a \<Longrightarrow> PB b \<Longrightarrow> B a b \<Longrightarrow> PB' b"
"\<And> a. PB' a \<Longrightarrow> PB a"
shows "Simulation_Invariant A B sim PA PB'"
proof -
interpret Simulation_Invariant A B sim PA PB
by (rule assms)
show ?thesis
by (standard; blast intro: A_B_step assms)
qed
lemma Simulation_Invariants_strengthen_post:
assumes
"Simulation_Invariants A B sim PA QA PB QB"
"\<And> a b. PA a \<Longrightarrow> QA b \<Longrightarrow> A a b \<Longrightarrow> QA' b"
"\<And> a. QA' a \<Longrightarrow> QA a"
shows "Simulation_Invariants A B sim PA QA' PB QB"
proof -
interpret Simulation_Invariants A B sim PA QA PB QB
by (rule assms)
show ?thesis
by (standard; blast intro: A_B_step assms)
qed
lemma Bisimulation_Invariant_sim_replace:
assumes "Bisimulation_Invariant A B sim PA PB"
and "\<And> a b. PA a \<Longrightarrow> PB b \<Longrightarrow> sim a b \<longleftrightarrow> sim' a b"
shows "Bisimulation_Invariant A B sim' PA PB"
proof -
interpret Bisimulation_Invariant A B sim PA PB
by (rule assms(1))
show ?thesis
apply standard
using assms(2) apply (blast dest: A_B_step)
using assms(2) apply (blast dest: B_A_step)
by blast+
qed
end (* Theory *)
#!/usr/bin/env python
import argparse
import os
import numpy as np
import json
from voc import parse_voc_annotation
from yolo import create_yolov3_model, create_yolov3_tiny_model, dummy_loss
from generator import BatchGenerator
from utils.utils import normalize, evaluate, makedirs
from keras.callbacks import EarlyStopping, ReduceLROnPlateau
from keras.optimizers import Adam
from callbacks import CustomModelCheckpoint, CustomTensorBoard
from utils.multi_gpu_model import multi_gpu_model
import tensorflow as tf
import keras
from keras.models import load_model
def create_training_instances(
train_annot_folder,
train_image_folder,
train_cache,
valid_annot_folder,
valid_image_folder,
valid_cache,
labels,
split_factor=0.8
):
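# Returns (training instances, validation instances, sorted label list,
# max boxes per image), or four Nones when a configured label is missing.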
# parse annotations of the training set
train_ints, train_labels = parse_voc_annotation(train_annot_folder, train_image_folder, train_cache, labels)
# parse annotations of the validation set, if any, otherwise split the training set
if os.path.exists(valid_annot_folder):
valid_ints, valid_labels = parse_voc_annotation(valid_annot_folder, valid_image_folder, valid_cache, labels)
else:
print("valid_annot_folder not exists. Spliting the trainining set. ","%.0f" %(split_factor*100)," / ","%.0f" %((1-split_factor)*100))
train_valid_split = int(split_factor * len(train_ints))
np.random.seed(0)
np.random.shuffle(train_ints)
np.random.seed()
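# restore nondeterministic seeding so only the split above is reproducible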
valid_ints = train_ints[train_valid_split:]
train_ints = train_ints[:train_valid_split]
# compare the seen labels with the given labels in config.json
if len(labels) > 0:
overlap_labels = set(labels).intersection(set(train_labels.keys()))
print('Seen labels: \t' + str(train_labels) + '\n')
print('Given labels: \t' + str(labels))
# return four Nones (matching the caller's unpacking) if some given label is not in the dataset
if len(overlap_labels) < len(labels):
print('Some labels have no annotations! Please revise the list of labels in the config.json.')
return None, None, None, None
else:
print('No labels are provided. Train on all seen labels.')
print(train_labels)
labels = train_labels.keys()
max_box_per_image = max([len(inst['object']) for inst in (train_ints + valid_ints)])
return train_ints, valid_ints, sorted(labels), max_box_per_image
def create_callbacks(saved_weights_name, tensorboard_logs, model_to_save):
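# Builds the four training callbacks: early stopping on the training loss,
# checkpointing of model_to_save on every loss improvement, learning-rate
# reduction on plateau, and TensorBoard logging under tensorboard_logs.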
makedirs(tensorboard_logs)
early_stop = EarlyStopping(
monitor='loss',
min_delta=0.01,
patience=5,
mode='min',
verbose=1
)
checkpoint = CustomModelCheckpoint(
model_to_save=model_to_save,
filepath=saved_weights_name,
monitor='loss',
verbose=1,
save_best_only=True,
mode='min',
period=1
)
reduce_on_plateau = ReduceLROnPlateau(
monitor='loss',
factor=0.1,
patience=2,
verbose=1,
mode='min',
epsilon=0.01,
cooldown=0,
min_lr=0
)
tensorboard = CustomTensorBoard(
log_dir=tensorboard_logs,
write_graph=True,
write_images=True,
)
return [early_stop, checkpoint, reduce_on_plateau, tensorboard]
def create_model(
nb_class,
anchors,
max_box_per_image,
max_grid, batch_size,
warmup_batches,
ignore_thresh,
multi_gpu,
saved_weights_name,
lr,
grid_scales,
obj_scale,
noobj_scale,
xywh_scale,
class_scale,
tiny=False
):
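# Builds the training and inference models. With multi_gpu > 1 the template
# model is kept on the CPU and replicated across GPUs for training; weights
# in saved_weights_name are loaded when the file exists.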
if multi_gpu > 1:
if tiny:
with tf.device('/cpu:0'):
template_model, infer_model = create_yolov3_tiny_model(
nb_class=nb_class,
anchors=anchors,
max_box_per_image=max_box_per_image,
max_grid=max_grid,
batch_size=batch_size // multi_gpu,
warmup_batches=warmup_batches,
ignore_thresh=ignore_thresh,
grid_scales=grid_scales,
obj_scale=obj_scale,
noobj_scale=noobj_scale,
xywh_scale=xywh_scale,
class_scale=class_scale
)
else:
with tf.device('/cpu:0'):
template_model, infer_model = create_yolov3_model(
nb_class=nb_class,
anchors=anchors,
max_box_per_image=max_box_per_image,
max_grid=max_grid,
batch_size=batch_size // multi_gpu,
warmup_batches=warmup_batches,
ignore_thresh=ignore_thresh,
grid_scales=grid_scales,
obj_scale=obj_scale,
noobj_scale=noobj_scale,
xywh_scale=xywh_scale,
class_scale=class_scale
)
else:
if tiny:
template_model, infer_model = create_yolov3_tiny_model(
nb_class=nb_class,
anchors=anchors,
max_box_per_image=max_box_per_image,
max_grid=max_grid,
batch_size=batch_size,
warmup_batches=warmup_batches,
ignore_thresh=ignore_thresh,
grid_scales=grid_scales,
obj_scale=obj_scale,
noobj_scale=noobj_scale,
xywh_scale=xywh_scale,
class_scale=class_scale
)
else:
template_model, infer_model = create_yolov3_model(
nb_class=nb_class,
anchors=anchors,
max_box_per_image=max_box_per_image,
max_grid=max_grid,
batch_size=batch_size,
warmup_batches=warmup_batches,
ignore_thresh=ignore_thresh,
grid_scales=grid_scales,
obj_scale=obj_scale,
noobj_scale=noobj_scale,
xywh_scale=xywh_scale,
class_scale=class_scale
)
# load the pretrained weight if exists, otherwise start from scratch
if os.path.exists(saved_weights_name):
print("\nLoading pretrained weights.\n")
try:
template_model.load_weights(saved_weights_name)
except Exception:
print("Weights file failed to load, starting from scratch")
else:
print("No pre-trained weights available, starting from scratch")
if multi_gpu > 1:
train_model = multi_gpu_model(template_model, gpus=multi_gpu)
else:
train_model = template_model
optimizer = Adam(lr=lr, clipnorm=0.001)
train_model.compile(loss=dummy_loss, optimizer=optimizer)
return train_model, infer_model
def _main_(args):
config_path = args.conf
with open(config_path) as config_buffer:
config = json.loads(config_buffer.read())
retrain = config['retrain']['enabled']
tiny = config['model']['tiny']
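# A minimal config.json sketch covering the keys read below (all values are
# illustrative placeholders, not the project's defaults):
# {
#   "model":   {"tiny": false, "anchors": [...], "labels": ["..."],
#               "min_input_size": 288, "max_input_size": 448},
#   "train":   {"train_annot_folder": "...", "train_image_folder": "...",
#               "cache_name": "...", "batch_size": 16, "learning_rate": 1e-4,
#               "train_times": 1, "nb_epochs": 100, "warmup_epochs": 3,
#               "ignore_thresh": 0.5, "gpus": "0", "grid_scales": [1, 1, 1],
#               "obj_scale": 5, "noobj_scale": 1, "xywh_scale": 1,
#               "class_scale": 1, "saved_weights_name": "...",
#               "debug": false, "tensorboard_dir": "logs"},
#   "valid":   {"valid_annot_folder": "...", "valid_image_folder": "...",
#               "cache_name": "...", "train_validate_split": 0.8},
#   "retrain": {"enabled": false, "warmup_epochs": 0, "train_times": 1,
#               "nb_epochs": 50, "ignore_thresh": 0.5, "learning_rate": 1e-5}
# }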
###############################
# Parse the annotations
###############################
train_ints, valid_ints, labels, max_box_per_image = create_training_instances(
config['train']['train_annot_folder'],
config['train']['train_image_folder'],
config['train']['cache_name'],
config['valid']['valid_annot_folder'],
config['valid']['valid_image_folder'],
config['valid']['cache_name'],
config['model']['labels'],
config['valid']['train_validate_split']
)
print('\nTraining on: \t' + str(labels) + '\n')
###############################
# Create the generators
###############################
train_generator = BatchGenerator(
instances=train_ints,
anchors=config['model']['anchors'],
labels=labels,
downsample=32, # ratio between network input's size and network output's size, 32 for YOLOv3
max_box_per_image=max_box_per_image,
batch_size=config['train']['batch_size'],
min_net_size=config['model']['min_input_size'],
max_net_size=config['model']['max_input_size'],
shuffle=True,
jitter=0.3,
norm=normalize
)
valid_generator = BatchGenerator(
instances=valid_ints,
anchors=config['model']['anchors'],
labels=labels,
downsample=32, # ratio between network input's size and network output's size, 32 for YOLOv3
max_box_per_image=max_box_per_image,
batch_size=config['train']['batch_size'],
min_net_size=config['model']['min_input_size'],
max_net_size=config['model']['max_input_size'],
shuffle=True,
jitter=0.0,
norm=normalize
)
###############################
# Create the model
###############################
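    # Warm-up is only needed for a fresh start: if saved weights already exist,
    # the warmup epoch count is forced to zero before computing warmup_batches.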
if retrain:
if os.path.exists(config['train']['saved_weights_name']):
config['retrain']['warmup_epochs'] = 0
warmup_batches = config['retrain']['warmup_epochs'] * (config['retrain']['train_times'] * len(train_generator))
else:
if os.path.exists(config['train']['saved_weights_name']):
config['train']['warmup_epochs'] = 0
warmup_batches = config['train']['warmup_epochs'] * (config['train']['train_times'] * len(train_generator))
os.environ['CUDA_VISIBLE_DEVICES'] = config['train']['gpus']
multi_gpu = len(config['train']['gpus'].split(','))
if retrain:
train_model, infer_model = create_model(
nb_class=len(labels),
anchors=config['model']['anchors'],
max_box_per_image=max_box_per_image,
max_grid=[config['model']['max_input_size'], config['model']['max_input_size']],
batch_size=config['train']['batch_size'],
warmup_batches=warmup_batches,
ignore_thresh=config['retrain']['ignore_thresh'],
multi_gpu=multi_gpu,
saved_weights_name=config['train']['saved_weights_name'],
lr=config['retrain']['learning_rate'],
grid_scales=config['train']['grid_scales'],
obj_scale=config['train']['obj_scale'],
noobj_scale=config['train']['noobj_scale'],
xywh_scale=config['train']['xywh_scale'],
class_scale=config['train']['class_scale'],
tiny=tiny
)
else:
train_model, infer_model = create_model(
nb_class=len(labels),
anchors=config['model']['anchors'],
max_box_per_image=max_box_per_image,
max_grid=[config['model']['max_input_size'], config['model']['max_input_size']],
batch_size=config['train']['batch_size'],
warmup_batches=warmup_batches,
ignore_thresh=config['train']['ignore_thresh'],
multi_gpu=multi_gpu,
saved_weights_name=config['train']['saved_weights_name'],
lr=config['train']['learning_rate'],
grid_scales=config['train']['grid_scales'],
obj_scale=config['train']['obj_scale'],
noobj_scale=config['train']['noobj_scale'],
xywh_scale=config['train']['xywh_scale'],
class_scale=config['train']['class_scale'],
tiny=tiny
)
###############################
# Kick off the training
###############################
callbacks = create_callbacks(config['train']['saved_weights_name'], config['train']['tensorboard_dir'], infer_model)
if retrain:
train_model.fit_generator(
generator=train_generator,
steps_per_epoch=len(train_generator) * config['retrain']['train_times'],
epochs=config['retrain']['nb_epochs'] + config['retrain']['warmup_epochs'],
verbose=2 if config['train']['debug'] else 1,
callbacks=callbacks,
workers=4,
max_queue_size=8
)
else:
train_model.fit_generator(
generator=train_generator,
steps_per_epoch=len(train_generator) * config['train']['train_times'],
epochs=config['train']['nb_epochs'] + config['train']['warmup_epochs'],
verbose=2 if config['train']['debug'] else 1,
callbacks=callbacks,
workers=4,
max_queue_size=8
)
# make a GPU version of infer_model for evaluation
if multi_gpu > 1:
infer_model = load_model(config['train']['saved_weights_name'])
###############################
# Run the evaluation
###############################
# compute mAP for all the classes
average_precisions = evaluate(infer_model, valid_generator)
# print the score
for label, average_precision in average_precisions.items():
print(labels[label] + ': {:.4f}'.format(average_precision))
print('mAP: {:.4f}'.format(sum(average_precisions.values()) / len(average_precisions)))
if __name__ == '__main__':
argparser = argparse.ArgumentParser(description='train and evaluate YOLO_v3 model on any dataset')
argparser.add_argument('-c', '--conf', help='path to configuration file')
args = argparser.parse_args()
_main_(args)
|
{"hexsha": "7ad3cefe6c8bfe8a5319dfe5522e111ef59b2782", "size": 13713, "ext": "py", "lang": "Python", "max_stars_repo_path": "train.py", "max_stars_repo_name": "LachlanMares/keras-yolo3-generic", "max_stars_repo_head_hexsha": "abc0ad0488e727b3d997a924b57356ace2caf444", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "train.py", "max_issues_repo_name": "LachlanMares/keras-yolo3-generic", "max_issues_repo_head_hexsha": "abc0ad0488e727b3d997a924b57356ace2caf444", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "train.py", "max_forks_repo_name": "LachlanMares/keras-yolo3-generic", "max_forks_repo_head_hexsha": "abc0ad0488e727b3d997a924b57356ace2caf444", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.8629032258, "max_line_length": 142, "alphanum_fraction": 0.5826587909, "include": true, "reason": "import numpy", "num_tokens": 2840}
|
import os
import sys
import glob
import time
import copy
import logging
import argparse
import random
import numpy as np
import torch
import torch.nn as nn
import torch.utils
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import utils
from controller import NAO
from nasbench import api
def build_parser():
parser = argparse.ArgumentParser()
# Basic model parameters.
parser.add_argument('--data', type=str, default='data')
parser.add_argument('--output_dir', type=str, default='models')
parser.add_argument('--seed', type=int, default=1)
parser.add_argument('--seed_arch', type=int, default=100)
parser.add_argument('--random_arch', type=int, default=10000)
parser.add_argument('--nodes', type=int, default=7)
parser.add_argument('--new_arch', type=int, default=100)
parser.add_argument('--k', type=int, default=100)
parser.add_argument('--encoder_layers', type=int, default=1)
parser.add_argument('--hidden_size', type=int, default=16)
parser.add_argument('--mlp_layers', type=int, default=2)
parser.add_argument('--mlp_hidden_size', type=int, default=64)
parser.add_argument('--decoder_layers', type=int, default=1)
parser.add_argument('--source_length', type=int, default=27)
parser.add_argument('--encoder_length', type=int, default=27)
parser.add_argument('--decoder_length', type=int, default=27)
parser.add_argument('--dropout', type=float, default=0.1)
parser.add_argument('--l2_reg', type=float, default=1e-4)
parser.add_argument('--vocab_size', type=int, default=7)
parser.add_argument('--max_step_size', type=int, default=100)
parser.add_argument('--trade_off', type=float, default=0.8)
parser.add_argument('--pretrain_epochs', type=int, default=10000)
parser.add_argument('--epochs', type=int, default=1000)
parser.add_argument('--up_sample_ratio', type=int, default=100)
parser.add_argument('--batch_size', type=int, default=100)
parser.add_argument('--lr', type=float, default=0.001)
parser.add_argument('--optimizer', type=str, default='adam')
parser.add_argument('--grad_bound', type=float, default=5.0)
parser.add_argument('--iteration', type=int, default=2)
return parser
def controller_train(args, train_queue, model, optimizer):
objs = utils.AvgrageMeter()
mse = utils.AvgrageMeter()
nll = utils.AvgrageMeter()
model.train()
for step, sample in enumerate(train_queue):
encoder_input = utils.move_to_cuda(sample['encoder_input'])
encoder_target = utils.move_to_cuda(sample['encoder_target'])
decoder_input = utils.move_to_cuda(sample['decoder_input'])
decoder_target = utils.move_to_cuda(sample['decoder_target'])
optimizer.zero_grad()
predict_value, log_prob, arch = model(encoder_input, decoder_input)
loss_1 = F.mse_loss(predict_value.squeeze(), encoder_target.squeeze())
loss_2 = F.nll_loss(log_prob.contiguous().view(-1, log_prob.size(-1)), decoder_target.view(-1))
loss = args.trade_off * loss_1 + (1 - args.trade_off) * loss_2
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), args.grad_bound)
optimizer.step()
n = encoder_input.size(0)
objs.update(loss.data, n)
mse.update(loss_1.data, n)
nll.update(loss_2.data, n)
return objs.avg, mse.avg, nll.avg
def controller_infer(queue, model, step, direction='+'):
new_arch_list = []
new_predict_values = []
model.eval()
for i, sample in enumerate(queue):
encoder_input = utils.move_to_cuda(sample['encoder_input'])
model.zero_grad()
new_arch, new_predict_value = model.generate_new_arch(encoder_input, step, direction=direction)
new_arch_list.extend(new_arch.data.squeeze().tolist())
new_predict_values.extend(new_predict_value.data.squeeze().tolist())
return new_arch_list, new_predict_values
def train_controller(args, model, train_input, train_target, epochs):
logging.info('Train data: {}'.format(len(train_input)))
controller_train_dataset = utils.ControllerDataset(train_input, train_target, True)
controller_train_queue = torch.utils.data.DataLoader(
controller_train_dataset, batch_size=args.batch_size, shuffle=True, pin_memory=True)
optimizer = torch.optim.Adam(model.parameters(), lr=args.lr, weight_decay=args.l2_reg)
for epoch in range(1, epochs + 1):
loss, mse, ce = controller_train(args, controller_train_queue, model, optimizer)
logging.info("epoch %04d train loss %.6f mse %.6f ce %.6f", epoch, loss, mse, ce)
def generate_synthetic_controller_data(nasbench, model, base_arch=None, random_arch=0, direction='+'):
random_synthetic_input = []
random_synthetic_target = []
if random_arch > 0:
while len(random_synthetic_input) < random_arch:
seq = utils.generate_arch(1, nasbench)[1][0]
if seq not in random_synthetic_input and seq not in base_arch:
random_synthetic_input.append(seq)
controller_synthetic_dataset = utils.ControllerDataset(random_synthetic_input, None, False)
controller_synthetic_queue = torch.utils.data.DataLoader(
controller_synthetic_dataset, batch_size=len(controller_synthetic_dataset),
shuffle=False, pin_memory=True)
with torch.no_grad():
model.eval()
for sample in controller_synthetic_queue:
encoder_input = sample['encoder_input'].cuda()
_, _, _, predict_value = model.encoder(encoder_input)
random_synthetic_target += predict_value.data.squeeze().tolist()
assert len(random_synthetic_input) == len(random_synthetic_target)
synthetic_input = random_synthetic_input
synthetic_target = random_synthetic_target
assert len(synthetic_input) == len(synthetic_target)
return synthetic_input, synthetic_target
def main(args, myargs):
if not torch.cuda.is_available():
logging.info('No GPU found!')
sys.exit(1)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
cudnn.enabled = True
cudnn.benchmark = True
logging.info("Args = %s", args)
args.source_length = args.encoder_length = args.decoder_length = (args.nodes + 2) * (args.nodes - 1) // 2
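    # With the default of 7 nodes this evaluates to (7 + 2) * (7 - 1) // 2 = 27,
    # which matches the parser defaults for the source/encoder/decoder lengths.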
nasbench = api.NASBench(args.nasbench_file, num_samples=args.num_samples)
controller = NAO(
args.encoder_layers,
args.decoder_layers,
args.mlp_layers,
args.mlp_hidden_size,
args.hidden_size,
args.vocab_size,
args.dropout,
args.source_length,
args.encoder_length,
args.decoder_length,
)
logging.info("param size = %d", utils.count_parameters(controller))
controller = controller.cuda()
child_arch_pool, child_seq_pool, child_arch_pool_valid_acc = utils.generate_arch(
args.seed_arch, nasbench, need_perf=True)
arch_pool = []
seq_pool = []
arch_pool_valid_acc = []
for i in range(args.iteration+1):
logging.info('Iteration {}'.format(i+1))
if not child_arch_pool_valid_acc:
for arch in child_arch_pool:
data = nasbench.query(arch)
child_arch_pool_valid_acc.append(data['validation_accuracy'])
arch_pool += child_arch_pool
arch_pool_valid_acc += child_arch_pool_valid_acc
seq_pool += child_seq_pool
arch_pool_valid_acc_sorted_indices = np.argsort(arch_pool_valid_acc)[::-1]
arch_pool = [arch_pool[i] for i in arch_pool_valid_acc_sorted_indices]
seq_pool = [seq_pool[i] for i in arch_pool_valid_acc_sorted_indices]
arch_pool_valid_acc = [arch_pool_valid_acc[i] for i in arch_pool_valid_acc_sorted_indices]
with open(os.path.join(args.output_dir, 'arch_pool.{}'.format(i)), 'w') as fa:
for arch, seq, valid_acc in zip(arch_pool, seq_pool, arch_pool_valid_acc):
fa.write('{}\t{}\t{}\t{}\n'.format(arch.matrix, arch.ops, seq, valid_acc))
        print('Top 10 architectures:')
        for arch_index in range(10):
            print('Architecture connection:{}'.format(arch_pool[arch_index].matrix))
            print('Architecture operations:{}'.format(arch_pool[arch_index].ops))
            print('Valid accuracy:{}'.format(arch_pool_valid_acc[arch_index]))
if i == args.iteration:
print('Final architectures:')
for arch_index in range(10):
                print('Architecture connection:{}'.format(arch_pool[arch_index].matrix))
print('Architecture operations:{}'.format(arch_pool[arch_index].ops))
print('Valid accuracy:{}'.format(arch_pool_valid_acc[arch_index]))
fs, cs = nasbench.get_metrics_from_spec(arch_pool[arch_index])
test_acc = np.mean([cs[108][j]['final_test_accuracy'] for j in range(3)])
print('Mean test accuracy:{}'.format(test_acc))
break
train_encoder_input = seq_pool
min_val = min(arch_pool_valid_acc)
max_val = max(arch_pool_valid_acc)
train_encoder_target = [(i - min_val) / (max_val - min_val) for i in arch_pool_valid_acc]
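        # Min-max normalize the validation accuracies into [0, 1] so the
        # performance predictor regresses a bounded target.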
# Pre-train
logging.info('Pre-train EPD')
train_controller(args, controller, train_encoder_input, train_encoder_target, args.pretrain_epochs)
logging.info('Finish pre-training EPD')
# Generate synthetic data
logging.info('Generate synthetic data for EPD')
synthetic_encoder_input, synthetic_encoder_target = generate_synthetic_controller_data(
nasbench, controller, train_encoder_input, args.random_arch)
if args.up_sample_ratio is None:
            up_sample_ratio = np.ceil(args.random_arch / len(train_encoder_input)).astype(int)
else:
up_sample_ratio = args.up_sample_ratio
all_encoder_input = train_encoder_input * up_sample_ratio + synthetic_encoder_input
all_encoder_target = train_encoder_target * up_sample_ratio + synthetic_encoder_target
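        # Note: these are Python lists, so `* up_sample_ratio` repeats the labeled
        # examples to balance them against the synthetic pool, and `+` concatenates
        # the two sets.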
# Train
logging.info('Train EPD')
train_controller(args, controller, all_encoder_input, all_encoder_target, args.epochs)
logging.info('Finish training EPD')
new_archs = []
new_seqs = []
predict_step_size = 0
unique_input = train_encoder_input + synthetic_encoder_input
unique_target = train_encoder_target + synthetic_encoder_target
unique_indices = np.argsort(unique_target)[::-1]
unique_input = [unique_input[i] for i in unique_indices]
topk_archs = unique_input[:args.k]
controller_infer_dataset = utils.ControllerDataset(topk_archs, None, False)
controller_infer_queue = torch.utils.data.DataLoader(
controller_infer_dataset, batch_size=len(controller_infer_dataset), shuffle=False, pin_memory=True)
while len(new_archs) < args.new_arch:
predict_step_size += 1
logging.info('Generate new architectures with step size %d', predict_step_size)
new_seq, new_perfs = controller_infer(
controller_infer_queue, controller, predict_step_size, direction='+')
for seq in new_seq:
matrix, ops = utils.convert_seq_to_arch(seq)
arch = api.ModelSpec(matrix=matrix, ops=ops)
if nasbench.is_valid(arch) and len(arch.ops) == 7 \
and seq not in train_encoder_input and seq not in new_seqs:
new_archs.append(arch)
new_seqs.append(seq)
if len(new_seqs) >= args.new_arch:
break
logging.info('%d new archs generated now', len(new_archs))
if predict_step_size > args.max_step_size:
break
child_arch_pool = new_archs
child_seq_pool = new_seqs
child_arch_pool_valid_acc = []
child_arch_pool_test_acc = []
logging.info("Generate %d new archs", len(child_arch_pool))
print(nasbench.get_budget_counters())
def run(argv_str=None):
from template_lib.utils.config import parse_args_and_setup_myargs, config2args
from template_lib.utils.modelarts_utils import prepare_dataset
run_script = os.path.relpath(__file__, os.getcwd())
args1, myargs, _ = parse_args_and_setup_myargs(argv_str, run_script=run_script, start_tb=False)
myargs.args = args1
myargs.config = getattr(myargs.config, args1.command)
parser = build_parser()
args = parser.parse_args([])
if hasattr(myargs.config, 'datasets'):
prepare_dataset(myargs.config.datasets, cfg=myargs.config)
args = config2args(myargs.config.args, args)
args.output_dir = args1.outdir
log_format = '%(asctime)s %(message)s'
logging.basicConfig(stream=sys.stdout, level=logging.INFO,
format=log_format, datefmt='%m/%d %I:%M:%S %p')
# root_logger = logging.getLogger()
# root_logger.addHandler(myargs.logger.handlers[0])
logging.info('test logging.')
main(args, myargs)
if __name__ == '__main__':
run()
|
{"hexsha": "e83003b219003c665e787cb06dad344461680f20", "size": 13410, "ext": "py", "lang": "Python", "max_stars_repo_path": "nasbench/train_seminas.py", "max_stars_repo_name": "PeterouZh/SemiNAS", "max_stars_repo_head_hexsha": "39731663271b994571160d43d796b2bb93386b3b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "nasbench/train_seminas.py", "max_issues_repo_name": "PeterouZh/SemiNAS", "max_issues_repo_head_hexsha": "39731663271b994571160d43d796b2bb93386b3b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nasbench/train_seminas.py", "max_forks_repo_name": "PeterouZh/SemiNAS", "max_forks_repo_head_hexsha": "39731663271b994571160d43d796b2bb93386b3b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.8235294118, "max_line_length": 111, "alphanum_fraction": 0.6821774795, "include": true, "reason": "import numpy", "num_tokens": 3001}
|
!COMPILER-GENERATED INTERFACE MODULE: Fri Mar 6 15:34:29 2020
! This source file is for reference only and may not completely
! represent the generated interface used by the compiler.
MODULE DIAGOSC_EMBM__genmod
INTERFACE
SUBROUTINE DIAGOSC_EMBM(ISTEP,IOUT,EXT,FX0FLUX,FWFLUX, &
&WATERATM)
INTEGER(KIND=4) :: ISTEP
INTEGER(KIND=4) :: IOUT
CHARACTER(LEN=3) :: EXT
REAL(KIND=8) :: FX0FLUX(4,36,36)
REAL(KIND=8) :: FWFLUX(2,36,36)
REAL(KIND=8) :: WATERATM
END SUBROUTINE DIAGOSC_EMBM
END INTERFACE
END MODULE DIAGOSC_EMBM__genmod
|
{"hexsha": "14d659bf0dd4c68daf0a774c8ed172d535db112e", "size": 708, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "genie-embm/src/fortran/diagosc_embm__genmod.f90", "max_stars_repo_name": "crem33/EcoGENIE_LA", "max_stars_repo_head_hexsha": "89032945316dc2827478100ed6bf60143e31f34d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "genie-embm/src/fortran/diagosc_embm__genmod.f90", "max_issues_repo_name": "crem33/EcoGENIE_LA", "max_issues_repo_head_hexsha": "89032945316dc2827478100ed6bf60143e31f34d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "genie-embm/src/fortran/diagosc_embm__genmod.f90", "max_forks_repo_name": "crem33/EcoGENIE_LA", "max_forks_repo_head_hexsha": "89032945316dc2827478100ed6bf60143e31f34d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.6470588235, "max_line_length": 73, "alphanum_fraction": 0.5790960452, "num_tokens": 207}
|
#!/usr/bin/env python
""" Generate events and perform jet finding.
Can be invoked via ``python -m jet_hadron.event_gen.jet_analysis -c ...``.
.. codeauthor:: Raymond Ehlers <raymond.ehlers@cern.ch>, Yale University
"""
import abc
import enlighten
import logging
import numpy as np
import os
import pyjet
from scipy.spatial import KDTree
from typing import Any, Dict, List, Sequence, Tuple
# NOTE: This is out of the expected order, but it must be here to prevent ROOT from stealing the command
# line options
from jet_hadron.base.typing_helpers import Hist # noqa: F401
from pachyderm import histogram
from jet_hadron.base import analysis_manager
from jet_hadron.base import analysis_objects
from jet_hadron.base import labels
from jet_hadron.base import params
from jet_hadron.plot import base as plot_base
from jet_hadron.event_gen import generator
from jet_hadron.event_gen import pythia6 as gen_pythia6
# Type helpers
PseudoJet: pyjet._libpyjet.PseudoJet = pyjet._libpyjet.PseudoJet
DTYPE_PT = np.dtype([("pT", np.float64), ("eta", np.float64), ("phi", np.float64), ("m", np.float64)])
# Stores particle and detector level jets.
DTYPE_JETS = np.dtype(
[(f"{label}_{name}", dtype) for label in ["part", "det"] for name, dtype in DTYPE_PT.descr]
)
DTYPE_EVENT_PROPERTIES = np.dtype([("cross_section", np.float64), ("pt_hard", np.float64)])
logger = logging.getLogger(__name__)
def view_fields(arr: np.ndarray, fields: Sequence[str]) -> np.ndarray:
""" Provides a view of a selection of fields of a structured array.
Code from: https://stackoverflow.com/a/21819324
Args:
arr: Array to select some fields from.
fields: Fields which should be selected from the array.
Returns:
Array view which selects only those fields.
"""
new_dtype = np.dtype({name: arr.dtype.fields[name] for name in fields})
return np.ndarray(arr.shape, new_dtype, arr, 0, arr.strides)
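# Minimal usage sketch (hypothetical data): select the kinematic fields of a jet
# record without copying the underlying buffer.
#   arr = np.zeros(3, dtype = DTYPE_PT)
#   view_fields(arr, ["pT", "eta"]).dtype.names  # -> ('pT', 'eta')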
def save_tree(output_info: analysis_objects.PlottingOutputWrapper,
arr: np.ndarray, output_name: str) -> str:
""" Write the tree stored in a numpy array to a file.
Args:
        output_info: Output information.
arr: Tree stored in an array to write out.
output_name: Filename under which the tree should be saved, but without the file extension.
Returns:
The filename under which the tree was written.
"""
# Determine filename
if not output_name.endswith(".npy"):
output_name += ".npy"
full_path = os.path.join(output_info.output_prefix, output_name)
# Write
with open(full_path, "wb") as f:
np.save(f, arr)
return full_path
def find_jets(input_array: np.ndarray, algorithm: str = "antikt", R: float = 0.4, min_jet_pt: float = 2) -> np.ndarray:
    """ Perform jet finding using FastJet.
    Assumes that the input is of the form (E, p_x, p_y, p_z, ...)
    Args:
        input_array: Array containing the input particles.
        algorithm: Name of the jet-algorithm to use. Default: "antikt".
        R: Jet finding resolution parameter. Default: 0.4.
        min_jet_pt: Minimum jet pt. Default: 2.
    Returns:
        Array of PseudoJets found by FastJet.
"""
jet_def = pyjet.JetDefinition(algo = algorithm, R = R)
cluster_sequence = pyjet.ClusterSequenceArea(
inputs = input_array, jetdef = jet_def,
areatype = "active_explicit_ghosts",
)
    jets: np.ndarray = np.array(cluster_sequence.inclusive_jets(ptmin = min_jet_pt))
return jets
def max_constituent_pt(jet: PseudoJet) -> float:
""" Find max constituent pt.
Args:
jet: The jet of interest.
Returns:
Maximum constituent pt.
"""
max_pt = 0
for c in jet.constituents():
if c.pt > max_pt:
max_pt = c.pt
return max_pt
def match_jets(particle_level_jets: np.ndarray, detector_level_jets: np.ndarray, matching_distance: float) -> List[Tuple[int, int]]:
""" Match particle and detector level jets geometrically.
Matching is performed via KDTrees. The particle level jet is required to match the
detector level jet and vice-versa.
Args:
particle_level_jets: Particle level jets.
detector_level_jets: Detector level jets.
matching_distance: Maximum matching distance between jets. Default guidance is
to use 0.6 * R.
Returns:
List of pairs of (particle level index, detector level index).
"""
    # Extract the jet locations from the PseudoJets.
part_level_positions = np.array([(j.eta, j.phi) for j in particle_level_jets])
det_level_positions = np.array([(j.eta, j.phi) for j in detector_level_jets])
    # Construct the KDTrees. They default to using the L^2 norm (i.e. our expected distance measure).
part_level_tree = KDTree(part_level_positions)
det_level_tree = KDTree(det_level_positions)
# Perform the actual matching.
part_level_matches = part_level_tree.query_ball_tree(det_level_tree, r = matching_distance)
det_level_matches = det_level_tree.query_ball_tree(part_level_tree, r = matching_distance)
# Only keep the closest match where the particle level jet points to the detector level
    # jet and vice versa.
indices = []
for i, part_match in enumerate(part_level_matches):
min_distance = 1000
min_distance_index = -1
for det_match in det_level_matches:
for m in det_match:
if m in part_match:
# Calculate the distance
dist = np.sqrt(
(part_level_positions[i][0] - det_level_positions[m][0]) ** 2
+ (part_level_positions[i][1] - det_level_positions[m][1]) ** 2
)
#logger.debug(f"part_level_index: {i}, Potential match: {m}, distance: {dist}")
if dist < min_distance:
#logger.debug(f"Found match! Previous min_distance: {min_distance}")
min_distance = dist
min_distance_index = m
if min_distance_index != -1:
indices.append((i, min_distance_index))
#logger.debug(f"part_level_matches: {part_level_matches}, det_level_matches: {det_level_matches}")
#logger.debug(f"indices: {indices}")
return indices
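def _match_jets_example() -> None:
    """ Minimal sketch of the mutual matching performed by ``match_jets``.
    The jets below are hypothetical stand-ins: any object exposing ``eta`` and
    ``phi`` attributes works (plain lists are fine here because the inputs are
    only iterated over).
    """
    from collections import namedtuple
    ToyJet = namedtuple("ToyJet", ["eta", "phi"])
    particle_level = [ToyJet(eta = 0.10, phi = 1.00), ToyJet(eta = -0.50, phi = 2.00)]
    detector_level = [ToyJet(eta = 0.12, phi = 1.05)]
    # With R = 0.4, the suggested matching distance is 0.6 * R = 0.24. Only the
    # first particle level jet is close enough to the detector level jet, so a
    # single mutual pair is returned.
    matches = match_jets(particle_level, detector_level, matching_distance = 0.6 * 0.4)
    assert matches == [(0, 0)]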
class JetAnalysis:
""" Direct the generation and processing of events from a generator through jet finding to final processing.
Args:
generator: Generator object.
identifier: String by which the analysis should be identified.
jet_radius: Jet finding parameter.
output_prefix: Prefix where data will be stored.
Attributes:
generator: Generator object.
identifier: String by which the analysis should be identified.
jet_radius: Jet finding parameter.
jets: Array of jets found from particles from the generator.
_progress_manager: Used to display processing progress.
"""
def __init__(self, generator: generator.Generator, identifier: str, jet_radius: float = 0.4, output_prefix: str = "."):
self.generator = generator
self.identifier = identifier
self.jet_radius = jet_radius
# Output variables
# They start out as lists, but will be converted to numpy arrays when finalized.
self.jets: np.ndarray = []
self.events: np.ndarray = []
# Output info
self.output_info = analysis_objects.PlottingOutputWrapper(
output_prefix = output_prefix,
printing_extensions = ["pdf"],
)
def setup(self) -> bool:
""" Setup the generator and the outputs.
        If use of TTrees is desired, a separate tree for the event level properties and a flat tree
for all of the jets is probably recommended. Precisely what will be done depends on the user requirements.
"""
...
return True
def _process_event(self, event: generator.Event) -> bool:
""" Process each event.
Args:
event: Event level information and the input particles.
Returns:
True if the event was successfully processed.
"""
# Setup
event_properties, event_particles = event
# Jet finding
particle_level_jets = find_jets(event_particles)
# Store the jet properties
for jet in particle_level_jets:
self.jets.append(
np.array((jet.pt, jet.eta, jet.phi, jet.mass), dtype = DTYPE_PT)
)
# Store the event properties
self.events.append(
np.array((event_properties.cross_section, event_properties.pt_hard), dtype = DTYPE_EVENT_PROPERTIES)
)
return True
def event_loop(self, n_events: int, progress_manager: enlighten._manager.Manager) -> int:
""" Loop over the generator to generate events.
Args:
            n_events: Number of events to generate.
            progress_manager: Manager used to display progress while looping over events.
Returns:
Number of accepted events.
"""
n_accepted = 0
with progress_manager.counter(total = n_events,
desc = "Generating", unit = "events") as progress:
for event in self.generator(n_events = n_events):
# Process event
result = self._process_event(event)
# Keep track of the number of events which actually passed all of the conditions.
if result:
n_accepted += 1
# Update the progress bar
progress.update()
logger.info(f"n_accepted: {n_accepted}")
return n_accepted
def finalize(self, n_events_accepted: int) -> None:
""" Finalize the analysis. """
# Finally, convert to a proper numpy array. It's only converted here because it's not efficient to expand
# existing numpy arrays.
self.jets = np.array(self.jets, dtype = DTYPE_JETS)
self.events = np.array(self.events, dtype = DTYPE_EVENT_PROPERTIES)
# And save out the tree so we don't have to calculate it again later.
self.save_tree(arr = self.jets, output_name = "jets")
def save_tree(self, *args: Any, **kwargs: Any) -> str:
""" Helper for saving a tree to file.
Args:
arr: Tree stored in an array to write out.
output_name: Filename under which the tree should be saved, but without the file extension.
Returns:
The filename under which the tree was written.
"""
return save_tree(self.output_info, *args, **kwargs)
class STARJetAnalysis(JetAnalysis):
""" Find and analyze jets using STAR Au--Au data taking conditions.
    This allows us to simulate the STAR detector response (momentum resolution and tracking efficiency).
Args:
event_activity: Centrality selection for determining the momentum resolution.
"""
def __init__(self, event_activity: params.EventActivity, *args: Any, **kwargs: Any):
# Setup base class
super().__init__(*args, **kwargs)
# Efficiency hists
self.event_activity = event_activity
self.efficiency_hists: List[np.ndarray] = []
self.efficiency_sampling: List[int] = []
# Output
# Start with the output_prefix from the base class
output_prefix = self.output_info.output_prefix.format(collision_energy_in_GeV = self.generator.sqrt_s)
self.output_info = analysis_objects.PlottingOutputWrapper(
output_prefix = os.path.join(output_prefix, str(self.identifier)),
printing_extensions = ["pdf"],
)
if not os.path.exists(self.output_info.output_prefix):
os.makedirs(self.output_info.output_prefix)
def setup(self) -> bool:
""" Setup the analysis and the PYTHIA 6 generator.
The main task is to setup the efficiency histograms.
Usually, we'd like the generator to set itself up, but ``TPythia{6,8}`` is a singleton,
        so we need to perform the setup immediately before we run the particular analysis.
"""
# Setup the efficiency histograms
# Distribution to sample for determining which efficiency hist to use.
# Based on year 14 ZDC rates (plot from Dan).
# 0 = 0-33 kHz -> 1/9
# 1 = 33-66 kHz -> 5/9
# 2 = 66-100 kHz -> 3/9
self.efficiency_sampling = [
0,
1, 1, 1, 1, 1,
2, 2, 2
]
# Retrieve the efficiency histograms
hists = histogram.get_histograms_in_file(filename = "inputData/AuAu/200/y14_efficiency_dca1.root")
centrality_index_map = {
# 0-10% in most analyses.
# 0 = 0-5%, 1 = 5-10%
params.EventActivity.central: list(range(0, 2)),
# 20-50% in Joel's STAR analysis.
# 4 = 20-25%, 5 = 25-30%, 6 = 30-35%, 7 = 35-40%, 8 = 40-45%, 9 = 45-50%
params.EventActivity.semi_central: list(range(4, 10)),
}
        for interaction_rate_index in range(3):
            centrality_hists = []
            for centrality_index in centrality_index_map[self.event_activity]:
                h_root = hists[f"efficiency_lumi_{interaction_rate_index}_cent_{centrality_index}"]
                _, _, h_temp = histogram.get_array_from_hist2D(h_root, set_zero_to_NaN = False)
                centrality_hists.append(h_temp)
            # Average the efficiency over the centrality bins.
            final_efficiency_hist = sum(centrality_hists) / len(centrality_hists)
            self.efficiency_hists.append(final_efficiency_hist)
            if interaction_rate_index == 0:
                # h_root is still set from the last centrality iteration, so we take
                # advantage of it to extract the common bin edges once.
                self.efficiency_pt_bin_edges = histogram.get_bin_edges_from_axis(h_root.GetXaxis())
                self.efficiency_eta_bin_edges = histogram.get_bin_edges_from_axis(h_root.GetYaxis())
# Complete setup of PYTHIA6 if necessary.
if self.generator.initialized is False:
self.generator.setup()
return True
def _apply_STAR_detector_effects(self, particles: np.ndarray) -> np.ndarray:
""" Apply the STAR detector effects.
In particular, applies the momentum resolution and efficiency.
Args:
particles: Input particles.
Returns:
"Detector level" particles.
"""
# Make a copy so that we don't propagate the changes to the particle level particles.
detector_level = np.copy(particles)
# STAR momentum resolution
# Modeled by a gaussian width of parameters provided by Hanseul (from Nick originally).
# Functionally, it goes as sigma_pt / pt = (0.005 + 0.0025 * pt)
detector_level["pT"] = np.random.normal(
detector_level["pT"],
detector_level["pT"] * (0.005 + 0.0025 * detector_level["pT"]),
)
# STAR tracking efficiency.
# Determine the expected efficiency based on the particle pt and eta, and then drop the particle
# if the tracking efficiency is less than a flat random distribution.
random_values = np.random.rand(len(particles))
# Need to decide which efficiency histogram to use. See the definition of the efficiency_sampling
# for further information on how and why these values are used.
efficiency_hist = self.efficiency_hists[np.random.choice(self.efficiency_sampling)]
# Determine the efficiency histogram indices.
# This means that if we have the bin edges [0, 1, 2], and we pass value 1.5, it will return
# index 2, but we want to return bin 1, so we subtract one from the result. For more, see
# ``histogram.Histogram1D.find_bin(...)``.
efficiency_pt_index = np.searchsorted(self.efficiency_pt_bin_edges, detector_level["pT"], side = "right") - 1
efficiency_eta_index = np.searchsorted(self.efficiency_eta_bin_edges, detector_level["eta"], side = "right") - 1
# Deal with particles outside of the efficiency histograms
# We could have particles over 5 GeV, so we assume that the efficiency is flat above 5 GeV and
# assign any of the particles above 5 GeV to the last efficiency bin.
        # - 1 because the efficiency hist indices are 0 indexed.
efficiency_pt_index[efficiency_pt_index >= efficiency_hist.shape[0]] = efficiency_hist.shape[0] - 1
# Since we have an eta cut at 1, we don't need to check for particles outside of this range.
# Keep any particles where the efficiency is higher than the random value.
keep_particles_mask = efficiency_hist[efficiency_pt_index, efficiency_eta_index] > random_values
detector_level = detector_level[keep_particles_mask]
return detector_level
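    # As a concrete number for the smearing above (hypothetical track): a 20 GeV
    # particle gets sigma_pT = 20 * (0.005 + 0.0025 * 20) = 1.1 GeV, i.e. a 5.5%
    # resolution, so the relative degradation grows linearly with pT.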
def _process_event(self, event: generator.Event) -> bool:
""" Process the generated event.
Args:
event: Event level information and the input particles.
Returns:
True if the event was successfully processed.
"""
# Setup
event_properties, event_particles = event
# Acceptance cuts
particle_level_particles = event_particles[np.abs(event_particles["eta"]) < 1]
        # Apply the STAR detector effects (momentum smearing and tracking efficiency).
detector_level_particles = self._apply_STAR_detector_effects(particle_level_particles)
# Apply particle selections
        # Only keep particles above 2 GeV. This is performed at the detector level to match the STAR
        # analysis, and we also apply it at the particle level because we accept that fragmentation
        # bias in the ALICE analysis against which we are comparing.
particle_level_particles = particle_level_particles[particle_level_particles["pT"] > 2]
detector_level_particles = detector_level_particles[detector_level_particles["pT"] > 2]
# Jet finding
particle_level_jets = find_jets(particle_level_particles)
detector_level_jets = find_jets(detector_level_particles)
# Apply jet cuts
# Only keep detector level jets with an additional 4 GeV particle bias.
max_const_pt = np.array([max_constituent_pt(j) for j in detector_level_jets])
detector_level_jets = detector_level_jets[max_const_pt > 4.]
# Avoid computing the matches if we don't have anything to match
if len(particle_level_jets) == 0 or len(detector_level_jets) == 0:
return False
# Match jets
matches = match_jets(particle_level_jets, detector_level_jets, matching_distance = 0.6 * self.jet_radius)
# Require a match to continue. We include this condition explicitly because we don't want to record
# the event properties in this case.
if not matches:
return False
# Store data
for (part_index, det_index) in matches:
self.jets.append(
np.array(
(
particle_level_jets[part_index].pt,
particle_level_jets[part_index].eta,
particle_level_jets[part_index].phi,
particle_level_jets[part_index].mass,
detector_level_jets[det_index].pt,
detector_level_jets[det_index].eta,
detector_level_jets[det_index].phi,
detector_level_jets[det_index].mass,
),
dtype = DTYPE_JETS,
)
)
# Store event properties
self.events.append(
np.array((event_properties.cross_section, event_properties.pt_hard), dtype = DTYPE_EVENT_PROPERTIES)
)
return True
def finalize(self, n_events_accepted: int) -> None:
""" Finalize the analysis. """
# Sanity check
assert len(self.events) == n_events_accepted
logger.info(f"number of accepted events: {len(self.events)}, jets in tree: {len(self.jets)}")
# Finally, convert to a proper numpy array. It's only converted here because it's not efficient to expand
# existing numpy arrays.
self.jets = np.array(self.jets, dtype = DTYPE_JETS)
self.events = np.array(self.events, dtype = DTYPE_EVENT_PROPERTIES)
# And save out the tree so we don't have to calculate it again later.
self.save_tree(arr = self.jets, output_name = "jets")
self.save_tree(arr = self.events, output_name = "event_properties")
# Create the response matrix and plot it as a cross check.
# Create histogram
h, x_edges, y_edges = np.histogram2d(
self.jets["det_pT"], self.jets["part_pT"],
bins = (60, 60), range = ((0, 60), (0, 60))
)
# Plot
import matplotlib
import matplotlib.pyplot as plt
        # Mask empty bins with NaN so they render as blank rather than zero.
        h[h == 0] = np.nan
fig, ax = plt.subplots(figsize = (8, 6))
resp = ax.imshow(
h, extent = (x_edges[0], x_edges[-1], y_edges[0], y_edges[-1]),
interpolation = "nearest",
aspect = "auto",
origin = "lower",
norm = matplotlib.colors.Normalize(vmin = np.nanmin(h), vmax = np.nanmax(h)),
)
fig.colorbar(resp)
# Final labeling and presentation
ax.set_xlabel(labels.make_valid_latex_string(labels.jet_pt_display_label("det")))
ax.set_ylabel(labels.make_valid_latex_string(labels.jet_pt_display_label("part")))
fig.tight_layout()
fig.subplots_adjust(hspace = 0, wspace = 0, right = 0.99)
plot_base.save_plot(
self.output_info, fig,
f"response"
)
plt.close(fig)
class JetAnalysisManager(analysis_manager.Manager, abc.ABC):
""" Manage running ``JetAnalysis`` objects.
Args:
config_filename: Path to the configuration filename.
selected_analysis_options: Selected analysis options.
manager_task_name: Name of the analysis manager task name in the config.
"""
def __init__(self, config_filename: str, selected_analysis_options: params.SelectedAnalysisOptions,
manager_task_name: str = "JetAnalysisManager", **kwargs: str):
# Initialize the base class
super().__init__(
config_filename = config_filename, selected_analysis_options = selected_analysis_options,
manager_task_name = manager_task_name, **kwargs,
)
# Basic configuration
self.pt_hard_bins = self.task_config["pt_hard_bins"]
self.n_events_to_generate = self.task_config["n_events_to_generate"]
self.random_seed = self.task_config.get("random_seed", None)
self.jet_radius = self.task_config["jet_radius"]
# Create the jet analyses. Note that we are just constructing the objects
# directly instead of using KeyIndex objects. We don't need the additional
# complexity.
self.analyses: Dict[analysis_objects.PtHardBin, JetAnalysis] = {}
self._create_analysis_tasks()
@abc.abstractmethod
def _create_analysis_tasks(self) -> None:
""" Create the actual analysis tasks."""
...
def run(self) -> bool:
""" Setup and run the jet analyses. """
# Everything in this function could be wrapped into a function for use with dask. Better would be to
# include the object creation there. Unfortunately, regardless of these efforts, dask won't work here
# because of ROOT...
# See: https://root-forum.cern.ch/t/file-closing-running-python-multiprocess/16213
# Note that moving the initialization of ROOT until later can help, but
# it's not enough - loading libraries into the tasks seems to be a problem.
# Steps to the analysis
# 1. Setup
# 2. Generate and finalize.
#
# Usually, we would setup and run the analysis objects separately, but that isn't compatible with
# TPythia{6,8}, which are singleton classes. So instead we configure and then run them immediately.
with self._progress_manager.counter(total = len(self.analyses),
desc = "Setting up and running",
unit = "pt hard binned analysis") as progress:
for label, analysis in self.analyses.items():
logger.info(f"Setting up {label.name} {label}")
res = analysis.setup()
if not res:
raise RuntimeError("Setup failed!")
logger.info(f"Analyzing {label.name} {label}")
n_events_accepted = analysis.event_loop(
n_events = self.n_events_to_generate,
progress_manager = self._progress_manager
)
analysis.finalize(n_events_accepted)
progress.update()
return True
class STARJetAnalysisManager(JetAnalysisManager):
""" Analysis manager for the jet finding analysis with STAR detector conditions.
Args:
config_filename: Path to the configuration filename.
selected_analysis_options: Selected analysis options.
manager_task_name: Name of the analysis manager task name in the config.
"""
def __init__(self, config_filename: str, selected_analysis_options: params.SelectedAnalysisOptions, **kwargs: str):
# Initialize the base class
super().__init__(
config_filename = config_filename, selected_analysis_options = selected_analysis_options,
manager_task_name = "STARJetAnalysisManager", **kwargs,
)
def _create_analysis_tasks(self) -> None:
""" Create the actual analysis tasks. """
for pt_hard_bin in self.pt_hard_bins:
analysis = STARJetAnalysis(
event_activity = self.selected_analysis_options.event_activity,
generator = gen_pythia6.Pythia6(
# Energy is specified in TeV in ``params.CollisionEnergy``, but PYTHIA
# expects it to be in GeV.
sqrt_s = self.selected_analysis_options.collision_energy.value * 1000,
random_seed = self.random_seed,
pt_hard = (pt_hard_bin.min, pt_hard_bin.max),
),
identifier = pt_hard_bin.bin,
jet_radius = self.jet_radius,
output_prefix = self.output_info.output_prefix,
)
self.analyses[pt_hard_bin] = analysis
def run_STAR_jet_analysis_from_terminal() -> STARJetAnalysisManager:
""" Driver function for running the STAR jet analysis. """
# Basic setup
# Quiet down some pachyderm modules
logging.getLogger("pachyderm.generic_config").setLevel(logging.INFO)
logging.getLogger("pachyderm.histogram").setLevel(logging.INFO)
# Setup and run the analysis
manager: STARJetAnalysisManager = analysis_manager.run_helper(
manager_class = STARJetAnalysisManager, task_name = "STARJetAnalysisManager",
description = "STAR Jet Analysis Manager",
)
# Return it for convenience.
return manager
if __name__ == "__main__":
run_STAR_jet_analysis_from_terminal()
|
{"hexsha": "b5dc1fb47e42de9e3d59c43c14ba2a63b4a33a68", "size": 27353, "ext": "py", "lang": "Python", "max_stars_repo_path": "jet_hadron/event_gen/jet_analysis.py", "max_stars_repo_name": "raymondEhlers/alice-jet-hadron", "max_stars_repo_head_hexsha": "8526567935c0339cebb9ef224b09a551a0b96932", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-12-29T20:00:06.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-29T20:00:06.000Z", "max_issues_repo_path": "jet_hadron/event_gen/jet_analysis.py", "max_issues_repo_name": "raymondEhlers/alice-jet-hadron", "max_issues_repo_head_hexsha": "8526567935c0339cebb9ef224b09a551a0b96932", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2019-10-22T22:17:05.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-26T00:24:08.000Z", "max_forks_repo_path": "jet_hadron/event_gen/jet_analysis.py", "max_forks_repo_name": "raymondEhlers/alice-jet-hadron", "max_forks_repo_head_hexsha": "8526567935c0339cebb9ef224b09a551a0b96932", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-07-02T19:33:54.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-04T15:14:00.000Z", "avg_line_length": 42.0168970814, "max_line_length": 132, "alphanum_fraction": 0.6447556027, "include": true, "reason": "import numpy,from scipy", "num_tokens": 6003}
|
import numpy as np
import scipy.stats as sp
import matplotlib.pyplot as plt
import h5py
def manifold(gridSize, binary, epoch):
f = h5py.File('params/ff_epoch_' + str(epoch) + '.hdf5','r')
wsig = np.matrix(f["wsig"])
bsig = np.matrix(f["bsig"]).T
if binary:
shape = (28,28)
activation = lambda z, wb : activation_binary(z,wb)
wtanh = np.matrix(f["wtanh"])
btanh = np.matrix(f["btanh"]).T
wb = (wtanh,btanh,wsig,bsig)
else:
shape = (28,20)
activation = lambda z, wb: activation_continuous(z,wb)
wrelu = np.matrix(f["wrelu"])
brelu = np.matrix(f["brelu"]).T
wb = (wrelu,brelu,wsig,bsig)
gridValues = np.linspace(0.05,0.95,gridSize)
z = lambda gridpoint: np.matrix(sp.norm.ppf(gridpoint)).T
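    # sp.norm.ppf maps the uniform grid in (0, 1) through the inverse Gaussian CDF,
    # so the latent grid covers the prior N(0, I) with equal probability mass per
    # step (the standard trick for plotting a 2-D VAE manifold).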
image = np.vstack([np.hstack([activation(z((i,j)),wb).reshape(shape) for j in gridValues]) for i in gridValues])
plt.imshow(image, cmap='Greys')
plt.axis('off')
plt.show()
def activation_binary(z, wb):
wtanh, btanh, wsig, bsig = wb
h = np.tanh(wtanh.dot(z) + btanh)
y = 1 / (1 + np.exp(-(wsig.dot(h) + bsig)))
return y
def activation_continuous(z, wb):
wrelu, brelu, wsig, bsig = wb
h = np.log(1 + np.exp(np.dot(wrelu,z) + brelu))
y = 1 / (1 + np.exp(-(np.dot(wsig,h) + bsig)))
return y
#Gridsize, Binary (True/False), Epoch number for params
manifold(10,False,740)
|
{"hexsha": "fc54241a36be6b027a4628567af9f78211205098", "size": 1427, "ext": "py", "lang": "Python", "max_stars_repo_path": "plot.py", "max_stars_repo_name": "TobiasMR/AEVB", "max_stars_repo_head_hexsha": "78b780f69d5d5136b49bcaff7da96eabb34de31a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "plot.py", "max_issues_repo_name": "TobiasMR/AEVB", "max_issues_repo_head_hexsha": "78b780f69d5d5136b49bcaff7da96eabb34de31a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "plot.py", "max_forks_repo_name": "TobiasMR/AEVB", "max_forks_repo_head_hexsha": "78b780f69d5d5136b49bcaff7da96eabb34de31a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.0350877193, "max_line_length": 116, "alphanum_fraction": 0.6005606167, "include": true, "reason": "import numpy,import scipy", "num_tokens": 452}
|
"""
collection of useful miscellaneous functions
"""
def get_dim_exp(exp):
"""
outputs hard-coded data dimensions (lat-lon-lev-time)
for a given simulation
"""
if exp == "QSC5.TRACMIP.NH01.L.pos.Q0.300.lon0.150.lond.45.lat0.0.latd.30":
from ds21grl import dim_aqua_short as dim
else:
from ds21grl import dim_aqua as dim
return dim
def tic():
# Homemade version of matlab tic function
import time
global startTime_for_tictoc
startTime_for_tictoc = time.time()
def toc():
    # Homemade version of matlab toc function
import time
if 'startTime_for_tictoc' in globals():
print("Elapsed time is " + str(time.time() - startTime_for_tictoc) + " seconds.")
else:
print("Toc: start time not set")
def qflux_const(exp):
"""
predefined constants to generate qflux patterns
"""
if exp == 'QSC5.TRACMIP.NH01.L.pos.Q0.150.lon0.150.lond.90.lat0.0.latd.30':
Q0 = 150
lon0 = 150
lond = 90
lat0 = 0
latd = 30
wavenum_flag = 0
elif exp == 'QSC5.TRACMIP.NH01.L.neg.Q0.150.lon0.150.lond.90.lat0.0.latd.30':
Q0 = -150
lon0 = 150
lond = 90
lat0 = 0
latd = 30
wavenum_flag = 0
elif exp == 'QSC5.TRACMIP.NH01.U.pos.Q0.150.lon0.150.lond.90.lat0.0.latd.30':
Q0 = 150
lon0 = 150
lond = 90
lat0 = 0
latd = 30
wavenum_flag = 0
elif exp == 'QSC5.TRACMIP.NH01.U.neg.Q0.150.lon0.150.lond.90.lat0.0.latd.30':
Q0 = -150
lon0 = 150
lond = 90
lat0 = 0
latd = 30
wavenum_flag = 0
elif exp == 'QSC5.TRACMIP.NH01.L.pos.Q0.300.lon0.150.lond.90.lat0.0.latd.30':
Q0 = 300
lon0 = 150
lond = 90
lat0 = 0
latd = 30
wavenum_flag = 0
elif exp == 'QSC5.TRACMIP.NH01.Lk1.Q0.75.lon0.150.lat0.0.latd.30':
Q0 = 75
lon0 = 150
lond = 0
lat0 = 0
latd = 30
wavenum_flag = 1
elif exp == 'QSC5.TRACMIP.NH01.U.pos.Q0.300.lon0.150.lond.90.lat0.0.latd.30':
Q0 = 300
lon0 = 150
lond = 90
lat0 = 0
latd = 30
wavenum_flag = 0
elif exp == 'QSC5.TRACMIP.NH01.L.pos.Q0.300.lon0.150.lond.45.lat0.0.latd.30':
Q0 = 300
lon0 = 150
lond = 45
lat0 = 0
latd = 30
wavenum_flag = 0
return Q0,lon0,lond,lat0,latd,wavenum_flag
def daysinmonths(month):
"""
outputs number of days in a given month without leap
year days
"""
import numpy as np
temp = np.array([31,28,31,30,31,30,31,31,30,31,30,31])
return temp[month-1]
def month_to_dayofyear(month):
"""
    maps months (0-11) to the cumulative number of days from the start of the
    year through the last day of that month (a negative month returns 0)
"""
import numpy as np
daysinmonths = np.array([31,28,31,30,31,30,31,31,30,31,30,31])
if month < 0:
dayofyear = 0
else:
dayofyear = daysinmonths[:month+1].sum(axis=0)
return dayofyear
def dayofyear_to_month(dayofyear):
"""
    maps day of year (0-364) onto a month (1-12)
"""
import numpy as np
daysinmonths = np.array([31,28,31,30,31,30,31,31,30,31,30,31])
dayofyear = dayofyear + 1 # convert to dayofyear (1-365) from (0-364)
if dayofyear > 0 and dayofyear <= 31:
month = 1
else:
for i in range(1,12):
if (dayofyear > np.sum(daysinmonths[0:i])) and (dayofyear <= np.sum(daysinmonths[0:i+1])) :
month = i + 1
return month
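# Round-trip sketch: month_to_dayofyear(0) -> 31 (last day of January), while
# dayofyear_to_month(31) -> 2, since day-of-year index 31 is the 32nd day of
# the year, i.e. February 1st.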
def leap_year_test(year):
"""
Flag if year is a leap year
"""
flag = 0
    if (year % 4 == 0) and ((year % 100 != 0) or (year % 400 == 0)):
        flag = 1
return flag
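# Quick check of the Gregorian rule (century years must also be divisible by 400):
# leap_year_test(2024) -> 1, leap_year_test(1900) -> 0, leap_year_test(2000) -> 1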
def get_aqua_timestamp(iyear,ichunk,branch_flag):
"""
outputs a timestamp string for model runs with a
    predefined year-month-day timestamp split into
5 x 73 day chunks for a given year
"""
import numpy as np
if branch_flag == 0:
if ichunk == 0:
timestamp = format(iyear,"04") + '-01-01-00000'
elif ichunk == 1:
timestamp = format(iyear,"04") + '-03-15-00000'
elif ichunk == 2:
timestamp = format(iyear,"04") + '-05-27-00000'
elif ichunk == 3:
timestamp = format(iyear,"04") + '-08-08-00000'
elif ichunk == 4:
timestamp = format(iyear,"04") + '-10-20-00000'
else: # branch run chunk start days shifted by 1 day
if ichunk == 0:
timestamp = format(iyear,"04") + '-01-02-00000'
elif ichunk == 1:
timestamp = format(iyear,"04") + '-03-16-00000'
elif ichunk == 2:
timestamp = format(iyear,"04") + '-05-28-00000'
elif ichunk == 3:
timestamp = format(iyear,"04") + '-08-09-00000'
elif ichunk == 4:
timestamp = format(iyear,"04") + '-10-21-00000'
return timestamp
def AxRoll(x,ax,invert=False):
"""
Re-arrange array x so that axis 'ax' is first dimension.
Undo this if invert=True
"""
import numpy as np
if ax < 0:
n = len(x.shape) + ax
else:
n = ax
if invert is False:
y = np.rollaxis(x,n,0)
else:
y = np.rollaxis(x,0,n+1)
return y
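# Shape sketch (hypothetical array): AxRoll(np.zeros((5, 365, 64)), 1) has shape
# (365, 5, 64), and calling AxRoll(..., 1, invert=True) on the result restores
# the original shape (5, 365, 64).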
def get_season_daily(data,season,ax):
"""
    Extracts days in a given season from a
    day-of-year axis (365 days, no leap days)
"""
import numpy as np
data = AxRoll(data,ax,invert=False)
dayofyear = np.arange(0,365,1)
# hardcoded mask for days in season
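    # NDJFM: rolling by 61 (= 30 + 31 days in Nov + Dec) moves Nov 1 to the front,
    # and the season spans 30 + 31 + 31 + 28 + 31 = 151 days.
    # MJJAS: Jan-Apr cover 31 + 28 + 31 + 30 = 120 days, so May 1 is index 120 and
    # Sep 30 is index 272, giving the slice [120:273].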
if season == 'NDJFM':
dayofyear = np.roll(dayofyear,61,axis=0)
index = dayofyear[0:151]
elif season == 'MJJAS':
index = dayofyear[120:273]
elif season == 'ANNUAL':
index = dayofyear
# extract days in season
data = data[index,:]
data = AxRoll(data,ax,invert=True)
return data
def get_season_monthly(data,season,ax):
"""
    Extracts months corresponding to a given season from monthly
data
NOTE: ax = month dimension
"""
import numpy as np
data = AxRoll(data,ax,invert=False)
# hardcoded mask for days in season
if season == 'NDJFM':
index = np.array([1,2,3,11,12]) -1
elif season == 'MJJAS':
index = np.arange(5,10,1) -1
elif season == 'ANNUAL':
index = np.arange(1,13,1) -1
# extract months in season
data = data[index,:]
data = AxRoll(data,ax,invert=True)
return data
def get_anomaly_daily_seasonal_cycle(data,dim):
"""
Removes the climatological daily seasonal cycle.
NOTE: Data must have years and daysofyear as 1st and 2nd dims
respectively
"""
import numpy as np
# define daily mean seasonal cycle
scycle = data.mean(axis=0)
# remove seasonal cycle
for t in range(0,dim.years.size):
data[t,:] = data[t,:] - scycle[:]
return data
def get_eddy(data,ax):
"""
Extracts deviation from zonal-mean
ax = longitude axis
"""
import numpy as np
if data.ndim == 1:
zmean = data.mean(axis=0)
data = data - zmean
else:
data = AxRoll(data,ax,invert=False) # shift longitude to 1st dim
zmean = data.mean(axis=0)
for i in range(0,data.shape[0]):
data[i,:] = data[i,:] - zmean[:]
data = AxRoll(data,ax,invert=True)
return data
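# Sanity check: a field that is constant in longitude has zero eddy component,
# e.g. get_eddy(np.ones(128), ax = 0) returns an array of zeros.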
|
{"hexsha": "134f15917aa2fbb9ad54862e24e8149153f10b8a", "size": 8804, "ext": "py", "lang": "Python", "max_stars_repo_path": "ds21grl/misc.py", "max_stars_repo_name": "edunnsigouin/ds21grl", "max_stars_repo_head_hexsha": "b6544cbc97529943da86e48a437ce68dc00e0f82", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-08T18:13:47.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-08T18:13:47.000Z", "max_issues_repo_path": "ds21grl/misc.py", "max_issues_repo_name": "edunnsigouin/ds21grl", "max_issues_repo_head_hexsha": "b6544cbc97529943da86e48a437ce68dc00e0f82", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ds21grl/misc.py", "max_forks_repo_name": "edunnsigouin/ds21grl", "max_forks_repo_head_hexsha": "b6544cbc97529943da86e48a437ce68dc00e0f82", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-10T14:48:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-10T14:48:04.000Z", "avg_line_length": 28.7712418301, "max_line_length": 138, "alphanum_fraction": 0.478191731, "include": true, "reason": "import numpy", "num_tokens": 2476}
|
#pragma once
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <boost/shared_ptr.hpp>
#include "macros.h"
namespace cloudproc {
class CloudGrabberImpl;
struct RGBD {
typedef boost::shared_ptr<RGBD> Ptr;
std::vector<unsigned char> rgb;
std::vector<unsigned short> depth;
RGBD() : rgb(480*640*3), depth(480*640) {}
};
/**
Simple wrapper around pcl's openni interface, allowing you to query for a point cloud (rather than using a callback)
*/
class TRAJOPT_API CloudGrabber {
public:
CloudGrabber();
void startXYZ();
void startXYZRGB();
void startRGBD();
void stop();
pcl::PointCloud<pcl::PointXYZ>::Ptr getXYZ();
pcl::PointCloud<pcl::PointXYZRGB>::Ptr getXYZRGB();
RGBD::Ptr getRGBD();
private:
boost::shared_ptr<CloudGrabberImpl> m_impl;
};
}
|
{"hexsha": "fe31f59a1987a06f40e43e688d707c07011b2366", "size": 795, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "src/cloudproc/cloudgrabber.hpp", "max_stars_repo_name": "HARPLab/trajopt", "max_stars_repo_head_hexsha": "40e2260d8f1e4d0a6a7a8997927bd65e5f36c3a4", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 250.0, "max_stars_repo_stars_event_min_datetime": "2015-01-13T04:38:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T15:52:54.000Z", "max_issues_repo_path": "src/cloudproc/cloudgrabber.hpp", "max_issues_repo_name": "HARPLab/trajopt", "max_issues_repo_head_hexsha": "40e2260d8f1e4d0a6a7a8997927bd65e5f36c3a4", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 31.0, "max_issues_repo_issues_event_min_datetime": "2015-08-19T13:14:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-22T08:08:26.000Z", "max_forks_repo_path": "src/cloudproc/cloudgrabber.hpp", "max_forks_repo_name": "HARPLab/trajopt", "max_forks_repo_head_hexsha": "40e2260d8f1e4d0a6a7a8997927bd65e5f36c3a4", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 118.0, "max_forks_repo_forks_event_min_datetime": "2015-01-08T16:06:50.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-19T11:44:00.000Z", "avg_line_length": 21.4864864865, "max_line_length": 116, "alphanum_fraction": 0.7106918239, "num_tokens": 215}
|
"""
SE-ResNet for CUB-200-2011, implemented in Gluon.
Original paper: 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
"""
__all__ = ['seresnet10_cub', 'seresnet12_cub', 'seresnet14_cub', 'seresnetbc14b_cub', 'seresnet16_cub',
'seresnet18_cub', 'seresnet26_cub', 'seresnetbc26b_cub', 'seresnet34_cub', 'seresnetbc38b_cub',
'seresnet50_cub', 'seresnet50b_cub', 'seresnet101_cub', 'seresnet101b_cub', 'seresnet152_cub',
'seresnet152b_cub', 'seresnet200_cub', 'seresnet200b_cub']
from .seresnet import get_seresnet
def seresnet10_cub(classes=200, **kwargs):
"""
SE-ResNet-10 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
It's an experimental model.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=10, model_name="seresnet10_cub", **kwargs)
def seresnet12_cub(classes=200, **kwargs):
"""
SE-ResNet-12 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
It's an experimental model.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=12, model_name="seresnet12_cub", **kwargs)
def seresnet14_cub(classes=200, **kwargs):
"""
SE-ResNet-14 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
It's an experimental model.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=14, model_name="seresnet14_cub", **kwargs)
def seresnetbc14b_cub(classes=200, **kwargs):
"""
SE-ResNet-BC-14b model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
It's an experimental model (bottleneck compressed).
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=14, bottleneck=True, conv1_stride=False, model_name="seresnetbc14b_cub",
**kwargs)
def seresnet16_cub(classes=200, **kwargs):
"""
SE-ResNet-16 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
It's an experimental model.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=16, model_name="seresnet16_cub", **kwargs)
def seresnet18_cub(classes=200, **kwargs):
"""
SE-ResNet-18 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=18, model_name="seresnet18_cub", **kwargs)
def seresnet26_cub(classes=200, **kwargs):
"""
SE-ResNet-26 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
It's an experimental model.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=26, bottleneck=False, model_name="seresnet26_cub", **kwargs)
def seresnetbc26b_cub(classes=200, **kwargs):
"""
SE-ResNet-BC-26b model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
It's an experimental model (bottleneck compressed).
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=26, bottleneck=True, conv1_stride=False, model_name="seresnetbc26b_cub",
**kwargs)
def seresnet34_cub(classes=200, **kwargs):
"""
SE-ResNet-34 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=34, model_name="seresnet34_cub", **kwargs)
def seresnetbc38b_cub(classes=200, **kwargs):
"""
SE-ResNet-BC-38b model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
It's an experimental model (bottleneck compressed).
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=38, bottleneck=True, conv1_stride=False, model_name="seresnetbc38b_cub",
**kwargs)
def seresnet50_cub(classes=200, **kwargs):
"""
SE-ResNet-50 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=50, model_name="seresnet50_cub", **kwargs)
def seresnet50b_cub(classes=200, **kwargs):
"""
SE-ResNet-50 model with stride at the second convolution in bottleneck block from 'Squeeze-and-Excitation Networks,'
https://arxiv.org/abs/1709.01507.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=50, conv1_stride=False, model_name="seresnet50b_cub", **kwargs)
def seresnet101_cub(classes=200, **kwargs):
"""
SE-ResNet-101 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=101, model_name="seresnet101_cub", **kwargs)
def seresnet101b_cub(classes=200, **kwargs):
"""
SE-ResNet-101 model with stride at the second convolution in bottleneck block from 'Squeeze-and-Excitation
Networks,' https://arxiv.org/abs/1709.01507.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=101, conv1_stride=False, model_name="seresnet101b_cub", **kwargs)
def seresnet152_cub(classes=200, **kwargs):
"""
SE-ResNet-152 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=152, model_name="seresnet152_cub", **kwargs)
def seresnet152b_cub(classes=200, **kwargs):
"""
SE-ResNet-152 model with stride at the second convolution in bottleneck block from 'Squeeze-and-Excitation
Networks,' https://arxiv.org/abs/1709.01507.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=152, conv1_stride=False, model_name="seresnet152b_cub", **kwargs)
def seresnet200_cub(classes=200, **kwargs):
"""
SE-ResNet-200 model for CUB-200-2011 from 'Squeeze-and-Excitation Networks,' https://arxiv.org/abs/1709.01507.
It's an experimental model.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=200, model_name="seresnet200_cub", **kwargs)
def seresnet200b_cub(classes=200, **kwargs):
"""
SE-ResNet-200 model with stride at the second convolution in bottleneck block from 'Squeeze-and-Excitation
Networks,' https://arxiv.org/abs/1709.01507. It's an experimental model.
Parameters:
----------
classes : int, default 200
Number of classification classes.
pretrained : bool, default False
Whether to load the pretrained weights for model.
ctx : Context, default CPU
The context in which to load the pretrained weights.
root : str, default '~/.mxnet/models'
Location for keeping the model parameters.
"""
return get_seresnet(classes=classes, blocks=200, conv1_stride=False, model_name="seresnet200b_cub", **kwargs)
def _test():
import numpy as np
import mxnet as mx
pretrained = False
models = [
seresnet10_cub,
seresnet12_cub,
seresnet14_cub,
seresnetbc14b_cub,
seresnet16_cub,
seresnet18_cub,
seresnet26_cub,
seresnetbc26b_cub,
seresnet34_cub,
seresnetbc38b_cub,
seresnet50_cub,
seresnet50b_cub,
seresnet101_cub,
seresnet101b_cub,
seresnet152_cub,
seresnet152b_cub,
seresnet200_cub,
seresnet200b_cub,
]
for model in models:
net = model(pretrained=pretrained)
ctx = mx.cpu()
if not pretrained:
net.initialize(ctx=ctx)
# net.hybridize()
net_params = net.collect_params()
weight_count = 0
for param in net_params.values():
if (param.shape is None) or (not param._differentiable):
continue
weight_count += np.prod(param.shape)
print("m={}, {}".format(model.__name__, weight_count))
assert (model != seresnet10_cub or weight_count == 5052932)
assert (model != seresnet12_cub or weight_count == 5127496)
assert (model != seresnet14_cub or weight_count == 5425104)
assert (model != seresnetbc14b_cub or weight_count == 9126136)
assert (model != seresnet16_cub or weight_count == 6614240)
assert (model != seresnet18_cub or weight_count == 11368192)
assert (model != seresnet26_cub or weight_count == 17683452)
assert (model != seresnetbc26b_cub or weight_count == 15756776)
assert (model != seresnet34_cub or weight_count == 21548468)
assert (model != seresnetbc38b_cub or weight_count == 22387416)
assert (model != seresnet50_cub or weight_count == 26448824)
assert (model != seresnet50b_cub or weight_count == 26448824)
assert (model != seresnet101_cub or weight_count == 47687672)
assert (model != seresnet101b_cub or weight_count == 47687672)
assert (model != seresnet152_cub or weight_count == 65182648)
assert (model != seresnet152b_cub or weight_count == 65182648)
assert (model != seresnet200_cub or weight_count == 70196664)
assert (model != seresnet200b_cub or weight_count == 70196664)
x = mx.nd.zeros((1, 3, 224, 224), ctx=ctx)
y = net(x)
assert (y.shape == (1, 200))
if __name__ == "__main__":
_test()
|
{"hexsha": "e0575014302c61a6366d08182911d69a0fffacd2", "size": 15666, "ext": "py", "lang": "Python", "max_stars_repo_path": "gluon/gluoncv2/models/seresnet_cub.py", "max_stars_repo_name": "naviocean/imgclsmob", "max_stars_repo_head_hexsha": "f2993d3ce73a2f7ddba05da3891defb08547d504", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2649, "max_stars_repo_stars_event_min_datetime": "2018-08-03T14:18:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T08:08:17.000Z", "max_issues_repo_path": "gluon/gluoncv2/models/seresnet_cub.py", "max_issues_repo_name": "naviocean/imgclsmob", "max_issues_repo_head_hexsha": "f2993d3ce73a2f7ddba05da3891defb08547d504", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 95, "max_issues_repo_issues_event_min_datetime": "2018-08-13T01:46:03.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-13T08:38:14.000Z", "max_forks_repo_path": "gluon/gluoncv2/models/seresnet_cub.py", "max_forks_repo_name": "naviocean/imgclsmob", "max_forks_repo_head_hexsha": "f2993d3ce73a2f7ddba05da3891defb08547d504", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 549, "max_forks_repo_forks_event_min_datetime": "2018-08-06T08:09:22.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T08:08:21.000Z", "avg_line_length": 37.0354609929, "max_line_length": 120, "alphanum_fraction": 0.6698582918, "include": true, "reason": "import numpy", "num_tokens": 4022}
|
from housing_df.utils import build_housing_df_registry_for_all_regions, save_all_dfs_in_registry
from housing_df.specific import RegionDF, MetroDF
from housing_df.workflow.report import Report
import pandas as pd
import numpy as np
class MetroAreasRanked(Report):
    def __init__(self, housing_type):
        # Record the requested housing type; the registry covers all regions,
        # so this is kept for reference rather than silently discarded.
        self.housing_type = housing_type
        self.reg = build_housing_df_registry_for_all_regions(from_yearly_data=True)
def show_top_apartments(self):
temp_df = None
for (df, label) in self.reg.df_list:
region_df = RegionDF(df, label)
top_10_for_region = region_df.most_apartments(count=10)
            if temp_df is None:
                temp_df = top_10_for_region
            else:
                # Series.append was removed from pandas; concat is equivalent
                temp_df = pd.concat([temp_df, top_10_for_region])
top_10 = temp_df.sort_values(ascending=False).iloc[0:10]
for (i, metro) in enumerate(top_10.index):
metro_name = MetroDF(self.reg.get_df_for_cbsa(metro), metro).top_apartment_city()
print("{0}:{1} built {2} apartments".format(str(i), metro_name, str(top_10[metro])))
def save_data_set(self):
save_all_dfs_in_registry()
if __name__ == "__main__":
my_report = MetroAreasRanked('5+ units Units')
my_report.show_top_apartments()
my_report.save_data_set()
|
{"hexsha": "ad38bb400fd00a13706d31a58af5e73222883758", "size": 1278, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/housing_df/workflow/metro_areas_ranked.py", "max_stars_repo_name": "ojarrett/housing-data-utils", "max_stars_repo_head_hexsha": "f041e789ff8de7f08550a1d39641dfd1b683f324", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/housing_df/workflow/metro_areas_ranked.py", "max_issues_repo_name": "ojarrett/housing-data-utils", "max_issues_repo_head_hexsha": "f041e789ff8de7f08550a1d39641dfd1b683f324", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/housing_df/workflow/metro_areas_ranked.py", "max_forks_repo_name": "ojarrett/housing-data-utils", "max_forks_repo_head_hexsha": "f041e789ff8de7f08550a1d39641dfd1b683f324", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.5142857143, "max_line_length": 96, "alphanum_fraction": 0.6956181534, "include": true, "reason": "import numpy", "num_tokens": 313}
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Author: Jie Yang, Wei wu, Xiaoy LI
# Last update: 2019.03.12
# First create: 2017.06.15
# Concate:
#
import os
import sys
root_path = "/".join(os.path.realpath(__file__).split("/")[:-3])
if root_path not in sys.path:
sys.path.insert(0, root_path)
import torch
import torch.optim as optim
import torch.autograd as autograd
import gc
import random  # required by random.seed(...) and random.shuffle(...) below
import time
import logging
import argparse
import datetime
import numpy as np
from glyce.model.latticeLSTM.model.bilstmcrf import BiLSTMCRF as SeqModel
from glyce.model.latticeLSTM.utils.data import Data
from glyce.model.latticeLSTM.utils.metric import get_ner_fmeasure
parser = argparse.ArgumentParser(description='Tuning with bi-directional LSTM-CRF')
parser.add_argument('--status', choices=['train', 'test', 'decode'], help='update algorithm', default='train')
parser.add_argument('--name', type=str, default='CTB9POS')
parser.add_argument('--mode', type=str, default='char')
parser.add_argument('--data_dir', type=str, default='/data/nfsdata/nlp/datasets/sequence_labeling/CN_NER/')
parser.add_argument('--raw', type=str)
parser.add_argument('--loadmodel', type=str)
parser.add_argument('--gpu_id', type=int, default=0)
parser.add_argument('--gaz_dropout', type=float, default=0.5)
parser.add_argument('--HP_lr', type=float, default=0.01)
parser.add_argument('--HP_dropout', type=float, default=0.5)
parser.add_argument('--HP_use_glyph', action='store_true')
parser.add_argument('--HP_glyph_ratio', type=float, default=0.1)
parser.add_argument('--HP_font_channels', type=int, default=2)
parser.add_argument('--HP_glyph_highway', action='store_true')
parser.add_argument('--HP_glyph_layernorm', action='store_true')
parser.add_argument('--HP_glyph_batchnorm', action='store_true')
parser.add_argument('--HP_glyph_embsize', type=int, default=64)
parser.add_argument('--HP_glyph_output_size', type=int, default=64)
parser.add_argument('--HP_glyph_dropout', type=float, default=0.7)
parser.add_argument('--HP_glyph_cnn_dropout', type=float, default=0.5)
parser.add_argument('--setting_str', type=str, default='')
parser.add_argument('--src_folder', type=str, default='/data/nfsdata/nlp/projects/wuwei')
args = parser.parse_args()
os.environ["CUDA_VISIBLE_DEVICES"] = str(args.gpu_id)
save_dir = F'{args.src_folder}/{args.setting_str}.'
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
logger = logging.getLogger() # pylint: disable=invalid-name
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler(os.path.join(save_dir, 'run.log'))
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
logger.addHandler(fh)
logger.addHandler(ch)
logger.info(args)
seed_num = 42
random.seed(seed_num)
torch.manual_seed(seed_num)
np.random.seed(seed_num)
def data_initialization(data, gaz_file, train_file, dev_file, test_file):
data.build_alphabet(train_file)
data.build_alphabet(dev_file)
data.build_alphabet(test_file)
data.build_gaz_file(gaz_file)
data.build_gaz_alphabet(train_file)
data.build_gaz_alphabet(dev_file)
data.build_gaz_alphabet(test_file)
data.fix_alphabet()
return data
def predict_check(pred_variable, gold_variable, mask_variable):
"""
input:
pred_variable (batch_size, sent_len): pred tag result, in numpy format
gold_variable (batch_size, sent_len): gold result variable
mask_variable (batch_size, sent_len): mask variable
"""
pred = pred_variable.cpu().data.numpy()
gold = gold_variable.cpu().data.numpy()
mask = mask_variable.cpu().data.numpy()
overlaped = (pred == gold)
right_token = np.sum(overlaped * mask)
total_token = mask.sum()
return right_token, total_token
def recover_label(pred_variable, gold_variable, mask_variable, label_alphabet, word_recover):
"""
input:
pred_variable (batch_size, sent_len): pred tag result
gold_variable (batch_size, sent_len): gold result variable
mask_variable (batch_size, sent_len): mask variable
"""
pred_variable = pred_variable[word_recover]
gold_variable = gold_variable[word_recover]
mask_variable = mask_variable[word_recover]
seq_len = gold_variable.size(1)
mask = mask_variable.cpu().data.numpy()
pred_tag = pred_variable.cpu().data.numpy()
gold_tag = gold_variable.cpu().data.numpy()
batch_size = mask.shape[0]
pred_label = []
gold_label = []
for idx in range(batch_size):
pred = [label_alphabet.get_instance(pred_tag[idx][idy]) for idy in range(seq_len) if mask[idx][idy] != 0]
gold = [label_alphabet.get_instance(gold_tag[idx][idy]) for idy in range(seq_len) if mask[idx][idy] != 0]
# logger.info "p:",pred, pred_tag.tolist()
# logger.info "g:", gold, gold_tag.tolist()
assert(len(pred)==len(gold))
pred_label.append(pred)
gold_label.append(gold)
return pred_label, gold_label
def load_data_setting(save_dir):
with open(save_dir, 'rb') as fp:
data = torch.load(fp)
logger.info("Data setting loaded from file: " + save_dir)
data.show_data_summary()
return data
def lr_decay(optimizer, epoch, decay_rate, init_lr):
lr = init_lr * ((1-decay_rate)**epoch)
logger.info(F" Learning rate is setted as: {lr}")
for param_group in optimizer.param_groups:
param_group['lr'] = lr
return optimizer
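# Worked example of the schedule above (hypothetical values): with
# init_lr = 0.015 and decay_rate = 0.05, epoch 0 keeps 0.015, epoch 1
# gives 0.015 * 0.95 = 0.01425, and epoch 2 gives 0.015 * 0.95**2 ≈ 0.0135.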
def evaluate(data, model, name):
if name == "train":
instances = data.train_Ids
elif name == "dev":
instances = data.dev_Ids
elif name == 'test':
instances = data.test_Ids
elif name == 'raw':
instances = data.raw_Ids
    else:
        raise ValueError("wrong evaluate name: " + name)
pred_results = []
gold_results = []
model.eval()
start_time = time.time()
train_num = len(instances)
total_batch = train_num//data.HP_batch_size+1
for batch_id in range(total_batch):
start = batch_id*data.HP_batch_size
end = (batch_id+1)*data.HP_batch_size
if end > train_num:
end = train_num
instance = instances[start:end]
if not instance:
continue
gaz_list, batch_word, batch_biword, batch_wordlen, batch_wordrecover, batch_char, batch_charlen, batch_charrecover, batch_label, mask = batchify_with_label(instance, data.HP_gpu, True)
tag_seq = model(gaz_list,batch_word, batch_biword, batch_wordlen, batch_char, batch_charlen, batch_charrecover, mask)
# logger.info("tag_seq", tag_seq)
pred_label, gold_label = recover_label(tag_seq, batch_label, mask, data.label_alphabet, batch_wordrecover)
pred_results += pred_label
gold_results += gold_label
decode_time = time.time() - start_time
speed = len(instances)/decode_time
acc, p, r, f = get_ner_fmeasure(gold_results, pred_results, data.tagScheme)
return speed, acc, p, r, f, pred_results
def batchify_with_label(input_batch_list, gpu, volatile_flag=False):
"""
input: list of words, chars and labels, various length. [[words,biwords,chars,gaz, labels],[words,biwords,chars,labels],...]
words: word ids for one sentence. (batch_size, sent_len)
chars: char ids for on sentences, various length. (batch_size, sent_len, each_word_length)
output:
zero padding for word and char, with their batch length
word_seq_tensor: (batch_size, max_sent_len) Variable
word_seq_lengths: (batch_size,1) Tensor
char_seq_tensor: (batch_size*max_sent_len, max_word_len) Variable
char_seq_lengths: (batch_size*max_sent_len,1) Tensor
char_seq_recover: (batch_size*max_sent_len,1) recover char sequence order
label_seq_tensor: (batch_size, max_sent_len)
mask: (batch_size, max_sent_len)
"""
batch_size = len(input_batch_list)
words = [sent[0] for sent in input_batch_list]
biwords = [sent[1] for sent in input_batch_list]
chars = [sent[2] for sent in input_batch_list]
gazs = [sent[3] for sent in input_batch_list]
labels = [sent[4] for sent in input_batch_list]
word_seq_lengths = torch.LongTensor(list(map(len, words)))
max_seq_len = word_seq_lengths.max()
word_seq_tensor = autograd.Variable(torch.zeros((batch_size, max_seq_len)), volatile = volatile_flag).long()
biword_seq_tensor = autograd.Variable(torch.zeros((batch_size, max_seq_len)), volatile = volatile_flag).long()
label_seq_tensor = autograd.Variable(torch.zeros((batch_size, max_seq_len)),volatile = volatile_flag).long()
mask = autograd.Variable(torch.zeros((batch_size, max_seq_len)),volatile = volatile_flag).byte()
for idx, (seq, biseq, label, seqlen) in enumerate(zip(words, biwords, labels, word_seq_lengths)):
word_seq_tensor[idx, :seqlen] = torch.LongTensor(seq)
biword_seq_tensor[idx, :seqlen] = torch.LongTensor(biseq)
label_seq_tensor[idx, :seqlen] = torch.LongTensor(label)
mask[idx, :seqlen] = torch.Tensor([1]*seqlen)
word_seq_lengths, word_perm_idx = word_seq_lengths.sort(0, descending=True)
word_seq_tensor = word_seq_tensor[word_perm_idx]
biword_seq_tensor = biword_seq_tensor[word_perm_idx]
## not reorder label
label_seq_tensor = label_seq_tensor[word_perm_idx]
mask = mask[word_perm_idx]
### deal with char
# pad_chars (batch_size, max_seq_len)
pad_chars = [chars[idx] + [[0]] * (max_seq_len-len(chars[idx])) for idx in range(len(chars))]
length_list = [list(map(len, pad_char)) for pad_char in pad_chars]
max_word_len = max(list(map(max, length_list)))
char_seq_tensor = autograd.Variable(torch.zeros((batch_size, max_seq_len, max_word_len)), volatile = volatile_flag).long()
char_seq_lengths = torch.LongTensor(length_list)
for idx, (seq, seqlen) in enumerate(zip(pad_chars, char_seq_lengths)):
for idy, (word, wordlen) in enumerate(zip(seq, seqlen)):
# logger.info len(word), wordlen
char_seq_tensor[idx, idy, :wordlen] = torch.LongTensor(word)
char_seq_tensor = char_seq_tensor[word_perm_idx].view(batch_size*max_seq_len,-1)
char_seq_lengths = char_seq_lengths[word_perm_idx].view(batch_size*max_seq_len,)
char_seq_lengths, char_perm_idx = char_seq_lengths.sort(0, descending=True)
char_seq_tensor = char_seq_tensor[char_perm_idx]
_, char_seq_recover = char_perm_idx.sort(0, descending=False)
_, word_seq_recover = word_perm_idx.sort(0, descending=False)
    # keep the gaz_list in original order
gaz_list = [ gazs[i] for i in word_perm_idx]
gaz_list.append(volatile_flag)
if gpu:
word_seq_tensor = word_seq_tensor.cuda()
biword_seq_tensor = biword_seq_tensor.cuda()
word_seq_lengths = word_seq_lengths.cuda()
word_seq_recover = word_seq_recover.cuda()
label_seq_tensor = label_seq_tensor.cuda()
char_seq_tensor = char_seq_tensor.cuda()
char_seq_recover = char_seq_recover.cuda()
mask = mask.cuda()
return gaz_list, word_seq_tensor, biword_seq_tensor, word_seq_lengths, word_seq_recover, char_seq_tensor, char_seq_lengths, char_seq_recover, label_seq_tensor, mask
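# Shape sketch (hypothetical batch of 2 sentences with lengths 3 and 5):
# word_seq_tensor and mask come out as (2, 5), with the shorter sentence
# zero-padded past position 3; char_seq_tensor is built as (2, 5, max_word_len)
# and then flattened to (2 * 5, max_word_len) before sorting by char length.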
def train(data, save_model_dir, seg=True):
logger.info("Training model...")
data.show_data_summary()
model = SeqModel(data)
logger.info("finished built model.")
parameters = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD(parameters, lr=data.HP_lr, momentum=data.HP_momentum)
best_dev = -1
data.HP_iteration = 100
# start training
for idx in range(data.HP_iteration):
epoch_start = time.time()
temp_start = epoch_start
logger.info(("Epoch: %s/%s" %(idx,data.HP_iteration)))
optimizer = lr_decay(optimizer, idx, data.HP_lr_decay, data.HP_lr)
instance_count = 0
sample_loss = 0
batch_loss = 0
total_loss = 0
right_token = 0
whole_token = 0
random.shuffle(data.train_Ids)
model.train()
model.zero_grad()
train_num = len(data.train_Ids)
total_batch = train_num//data.HP_batch_size+1
for batch_id in range(total_batch):
start = batch_id*data.HP_batch_size
end = min((batch_id+1)*data.HP_batch_size, train_num)
instance = data.train_Ids[start:end]
if not instance:
continue
gaz_list, batch_word, batch_biword, batch_wordlen, batch_wordrecover, batch_char, batch_charlen, batch_charrecover, batch_label, mask = batchify_with_label(instance, data.HP_gpu)
instance_count += 1
loss, tag_seq = model.neg_log_likelihood_loss(gaz_list, batch_word, batch_biword, batch_wordlen, batch_char, batch_charlen, batch_charrecover, batch_label, mask)
right, whole = predict_check(tag_seq, batch_label, mask)
right_token += right
whole_token += whole
sample_loss += loss.data[0]
total_loss += loss.data[0]
batch_loss += loss
if end % 500 == 0:
temp_time = time.time()
temp_cost = temp_time - temp_start
temp_start = temp_time
logger.info((" Instance: %s; Time: %.2fs; loss: %.4f; acc: %s/%s=%.4f" % (end, temp_cost, sample_loss, right_token, whole_token,(right_token+0.)/whole_token)))
sys.stdout.flush()
sample_loss = 0
if end % data.HP_batch_size == 0:
batch_loss.backward()
optimizer.step()
model.zero_grad()
batch_loss = 0
temp_time = time.time()
temp_cost = temp_time - temp_start
logger.info((" Instance: %s; Time: %.2fs; loss: %.4f; acc: %s/%s=%.4f" % (end, temp_cost, sample_loss, right_token, whole_token,(right_token+0.)/whole_token)))
epoch_finish = time.time()
epoch_cost = epoch_finish - epoch_start
logger.info(("Epoch: %s training finished. Time: %.2fs, speed: %.2fst/s, total loss: %s" % (idx, epoch_cost, train_num/epoch_cost, total_loss)))
speed, acc, p, r, f, _ = evaluate(data, model, "dev")
dev_finish = time.time()
dev_cost = dev_finish - epoch_finish
if seg:
current_score = f
logger.info(("Dev: time: %.2fs, speed: %.2fst/s; acc: %.4f, p: %.4f, r: %.4f, f: %.4f" % (dev_cost, speed, acc, p, r, f)))
else:
current_score = acc
logger.info(("Dev: time: %.2fs speed: %.2fst/s; acc: %.4f"%(dev_cost, speed, acc)))
if current_score > best_dev:
if seg:
logger.info(F"Exceed previous best f score: {best_dev}")
else:
logger.info(F"Exceed previous best acc score: {best_dev}")
model_name = os.path.join(save_model_dir, 'saved.model')
torch.save(model.state_dict(), model_name)
best_dev = current_score
# ## decode test
# speed, acc, p, r, f, _ = evaluate(data, model, "test")
# test_finish = time.time()
# test_cost = test_finish - dev_finish
# if seg:
# logger.info(("Test: time: %.2fs, speed: %.2fst/s; acc: %.4f, p: %.4f, r: %.4f, f: %.4f"%(test_cost, speed, acc, p, r, f)))
# else:
# logger.info(("Test: time: %.2fs, speed: %.2fst/s; acc: %.4f"%(test_cost, speed, acc)))
gc.collect()
def load_model_decode(save_dir, data):
logger.info("Load Model from file: " + save_dir)
model = SeqModel(data)
model.load_state_dict(torch.load(save_dir))
logger.info(F"Decode dev data ...")
start_time = time.time()
speed, acc, p, r, f, pred_results = evaluate(data, model, 'dev')
end_time = time.time()
time_cost = end_time - start_time
logger.info(("%s: time:%.2fs, speed:%.2fst/s; acc: %.4f, p: %.4f, r: %.4f, f: %.4f"%('dev', time_cost, speed, acc, p, r, f)))
logger.info(F"Decode test data ...")
start_time = time.time()
speed, acc, p, r, f, pred_results = evaluate(data, model, 'test')
end_time = time.time()
time_cost = end_time - start_time
logger.info(("%s: time:%.2fs, speed:%.2fst/s; acc: %.4f, p: %.4f, r: %.4f, f: %.4f"%('test', time_cost, speed, acc, p, r, f)))
if __name__ == '__main__':
char_emb = '/data/nfsdata/nlp/embeddings/chinese/gigaword/gigaword_chn.all.a2b.uni.ite50.vec'
bichar_emb = ''
# bichar_emb = '/data/nfsdata/nlp/embeddings/chinese/gigaword/gigaword_chn.all.a2b.bi.ite50.vec'
ctb_gaz = '/data/nfsdata/nlp/embeddings/chinese/ctb/ctb.50d.vec' # NER
wiki_gaz = '/data/nfsdata/nlp/embeddings/chinese/wiki/zh.wiki.bpe.vs200000.d50.w2v.txt'
gaz_file = ctb_gaz if 'NER' in args.name else wiki_gaz
train_file = F'{args.data_dir}/{args.name}/train.{args.mode}.bmes'
dev_file = F'{args.data_dir}/{args.name}/dev.{args.mode}.bmes'
test_file = F'{args.data_dir}/{args.name}/test.{args.mode}.bmes'
logger.info("Train file:" + train_file)
logger.info("Dev file:" + dev_file)
logger.info("Test file:" + test_file)
logger.info("Char emb:" + char_emb)
logger.info("Bichar emb:" + bichar_emb)
logger.info("Gaz file:" + gaz_file)
logger.info("Save dir:" + save_dir)
sys.stdout.flush()
if args.status == 'train':
data = Data()
data.HP_use_char = False
data.use_bigram = False if 'NER' in args.name else True # ner: False, cws: True
data.gaz_dropout = args.gaz_dropout
data.HP_lr = 0.015 if 'NER' in args.name else 0.01
data.HP_dropout = args.HP_dropout
data.HP_use_glyph = args.HP_use_glyph
data.HP_glyph_ratio = args.HP_glyph_ratio
data.HP_font_channels = args.HP_font_channels
data.HP_glyph_highway = args.HP_glyph_highway
data.HP_glyph_embsize = args.HP_glyph_embsize
data.HP_glyph_output_size = args.HP_glyph_output_size
data.HP_glyph_dropout = args.HP_glyph_dropout
data.HP_glyph_cnn_dropout = args.HP_glyph_cnn_dropout
data.HP_glyph_batchnorm = args.HP_glyph_batchnorm
data.HP_glyph_layernorm = args.HP_glyph_layernorm
data.norm_gaz_emb = False if 'NER' in args.name else True # ner: False, cws: True
data.HP_fix_gaz_emb = False
data_initialization(data, gaz_file, train_file, dev_file, test_file)
data.generate_instance_with_gaz(train_file, 'train')
data.generate_instance_with_gaz(dev_file, 'dev')
data.generate_instance_with_gaz(test_file, 'test')
data.build_word_pretrain_emb(char_emb)
data.build_biword_pretrain_emb(bichar_emb)
data.build_gaz_pretrain_emb(gaz_file)
torch.save(data, save_dir + '/data.set')
data = torch.load(save_dir + '/data.set')
train(data, save_dir)
elif args.status == 'test':
data = load_data_setting(args.loadmodel + '/data.set')
load_model_decode(args.loadmodel + '/saved.model', data)
# load_model_decode(args.loadmodel + '/saved.model', data, 'test')
# elif args.status == 'decode':
# data = load_data_setting(args.loadmodel + '/data.set')
# data.generate_instance_with_gaz(args.raw, 'raw')
# decode_results = load_model_decode(args.loadmodel + '/saved.model', data, 'raw')
# data.write_decoded_results(args.loadmodel + '/decoded.output', decode_results, 'raw')
else:
logger.info("Invalid argument! Please use valid arguments! (train/test/decode)")
|
{"hexsha": "4072c072a9617f750953160b1b3c4e2eead567f7", "size": 19712, "ext": "py", "lang": "Python", "max_stars_repo_path": "glyce/bin/run_lattice_lstm.py", "max_stars_repo_name": "TimSYQQX/glyce", "max_stars_repo_head_hexsha": "1542ed30ce104c25aa5c69ffcc9cc5ef2fcda975", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 396, "max_stars_repo_stars_event_min_datetime": "2019-05-11T09:26:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T11:08:23.000Z", "max_issues_repo_path": "glyce/bin/run_lattice_lstm.py", "max_issues_repo_name": "TimSYQQX/glyce", "max_issues_repo_head_hexsha": "1542ed30ce104c25aa5c69ffcc9cc5ef2fcda975", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 46, "max_issues_repo_issues_event_min_datetime": "2019-06-03T07:41:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-16T07:11:04.000Z", "max_forks_repo_path": "glyce/bin/run_lattice_lstm.py", "max_forks_repo_name": "TimSYQQX/glyce", "max_forks_repo_head_hexsha": "1542ed30ce104c25aa5c69ffcc9cc5ef2fcda975", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 75, "max_forks_repo_forks_event_min_datetime": "2019-06-27T08:35:54.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T01:23:19.000Z", "avg_line_length": 44.2966292135, "max_line_length": 193, "alphanum_fraction": 0.6733969156, "include": true, "reason": "import numpy", "num_tokens": 5085}
|
# Copyright 2020 MIT Probabilistic Computing Project.
# See LICENSE.txt
from math import log
import pytest
from numpy import linspace
from sppl.distributions import bernoulli
from sppl.distributions import beta
from sppl.distributions import randint
from sppl.compilers.ast_to_spe import IfElse
from sppl.compilers.ast_to_spe import Sample
from sppl.compilers.ast_to_spe import Sequence
from sppl.compilers.ast_to_spe import Switch
from sppl.compilers.ast_to_spe import Id
from sppl.math_util import allclose
from sppl.math_util import logsumexp
from sppl.sym_util import binspace
Y = Id('Y')
X = Id('X')
def test_simple_model_eq():
command_switch = Sequence(
Sample(Y, randint(low=0, high=4)),
Switch(Y, range(0, 4), lambda i:
Sample(X, bernoulli(p=1/(i+1)))))
model_switch = command_switch.interpret()
command_ifelse = Sequence(
Sample(Y, randint(low=0, high=4)),
IfElse(
Y << {0}, Sample(X, bernoulli(p=1/(0+1))),
Y << {1}, Sample(X, bernoulli(p=1/(1+1))),
Y << {2}, Sample(X, bernoulli(p=1/(2+1))),
Y << {3}, Sample(X, bernoulli(p=1/(3+1))),
))
model_ifelse = command_ifelse.interpret()
for model in [model_switch, model_ifelse]:
symbols = model.get_symbols()
assert symbols == {X, Y}
assert allclose(
model.logprob(X << {1}),
logsumexp([-log(4) - log(i+1) for i in range(4)]))
def test_simple_model_lte():
command_switch = Sequence(
Sample(Y, beta(a=2, b=3)),
Switch(Y, binspace(0, 1, 5), lambda i:
Sample(X, bernoulli(p=i.right))))
model_switch = command_switch.interpret()
command_ifelse = Sequence(
Sample(Y, beta(a=2, b=3)),
IfElse(
Y <= 0, Sample(X, bernoulli(p=0)),
Y <= 0.25, Sample(X, bernoulli(p=.25)),
Y <= 0.50, Sample(X, bernoulli(p=.50)),
Y <= 0.75, Sample(X, bernoulli(p=.75)),
Y <= 1, Sample(X, bernoulli(p=1)),
))
model_ifelse = command_ifelse.interpret()
grid = [float(x) for x in linspace(0, 1, 5)]
for model in [model_switch, model_ifelse]:
symbols = model.get_symbols()
assert symbols == {X, Y}
assert allclose(
model.logprob(X << {1}),
logsumexp([
model.logprob((il < Y) <= ih) + log(ih)
for il, ih in zip(grid[:-1], grid[1:])
]))
def test_simple_model_enumerate():
command_switch = Sequence(
Sample(Y, randint(low=0, high=4)),
Switch(Y, enumerate(range(0, 4)), lambda i,j:
Sample(X, bernoulli(p=1/(i+j+1)))))
model = command_switch.interpret()
assert allclose(model.prob(Y<<{0} & (X << {1})), .25 * 1/(0+0+1))
assert allclose(model.prob(Y<<{1} & (X << {1})), .25 * 1/(1+1+1))
assert allclose(model.prob(Y<<{2} & (X << {1})), .25 * 1/(2+2+1))
assert allclose(model.prob(Y<<{3} & (X << {1})), .25 * 1/(3+3+1))
def test_error_range():
with pytest.raises(AssertionError):
# Switch cases do not sum to one.
command = Sequence(
Sample(Y, randint(low=0, high=4)),
Switch(Y, range(0, 3), lambda i:
Sample(X, bernoulli(p=1/(i+1)))))
command.interpret()
def test_error_linspace():
with pytest.raises(AssertionError):
# Switch cases do not sum to one.
command = Sequence(
Sample(Y, beta(a=2, b=3)),
Switch(Y, linspace(0, .5, 5), lambda i:
Sample(X, bernoulli(p=i))))
command.interpret()
def test_error_binspace():
with pytest.raises(AssertionError):
# Switch cases do not sum to one.
command = Sequence(
Sample(Y, beta(a=2, b=3)),
Switch(Y, binspace(0, .5, 5), lambda i:
Sample(X, bernoulli(p=i.right))))
command.interpret()
|
{"hexsha": "0706de0a01ab04120ddced102df2d39cb131bbc6", "size": 3919, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/test_ast_switch.py", "max_stars_repo_name": "SEICS/sppl", "max_stars_repo_head_hexsha": "902c8c0ec0144fbabc8c0f33b15850af238e8d38", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 42, "max_stars_repo_stars_event_min_datetime": "2020-10-20T16:03:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T13:35:23.000Z", "max_issues_repo_path": "tests/test_ast_switch.py", "max_issues_repo_name": "SEICS/sppl", "max_issues_repo_head_hexsha": "902c8c0ec0144fbabc8c0f33b15850af238e8d38", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 21, "max_issues_repo_issues_event_min_datetime": "2020-10-13T21:48:53.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-25T20:19:23.000Z", "max_forks_repo_path": "tests/test_ast_switch.py", "max_forks_repo_name": "SEICS/sppl", "max_forks_repo_head_hexsha": "902c8c0ec0144fbabc8c0f33b15850af238e8d38", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2021-08-16T18:37:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-27T23:03:08.000Z", "avg_line_length": 34.0782608696, "max_line_length": 69, "alphanum_fraction": 0.5751467211, "include": true, "reason": "from numpy", "num_tokens": 1118}
|
# Morphological operation functions for HDI data preparation
# Developer: Joshua M. Hess, BSc
# Developed at the Vaccine & Immunotherapy Center, Mass. General Hospital
# Import external modules
import numpy as np
import skimage.filters
import skimage.morphology
import skimage.color
import skimage.util  # provides apply_parallel, used below
import scipy.sparse
import scipy.ndimage  # provides binary_fill_holes, used in MorphFill
# Define function
def MedFilter(image, filter_size, parallel=False):
"""Median filtering of images to remove salt and pepper noise.
A circular disk is used for the filtering. Images are automatically converted to
single channel grayscale images if they arent already in single channel format
filter_size: size of disk to use for filter.
parallel: number of proceses to use for calculations
"""
# Ensure that the image is grayscale
    if image.ndim > 2:  # check dimensions, not row count
# Not yet converted to grayscale - use grayscale image
image = skimage.color.rgb2gray(image)
# Check to see if parallel computation
if parallel:
# Filter the image to remove salt and pepper noise (use original image)
filtered_im = skimage.util.apply_parallel(
skimage.filters.median,
image,
extra_keywords={
# Set size of circular filter
"selem": skimage.morphology.disk(filter_size)
},
)
# Use single processor
else:
        # Otherwise use only a single processor
filtered_im = skimage.filters.median(
image, selem=skimage.morphology.disk(filter_size)
)
# Return the filtered image
return filtered_im
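# A minimal usage sketch (assumed inputs), left as a comment so importing
# this module stays side-effect free:
#
#   noisy = np.random.default_rng(0).random((64, 64))
#   smoothed = MedFilter(noisy, filter_size=3)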
def Thresholding(image, type, thresh_value=None, correction=1.0):
"""Otsu (manual) thresholding of grayscale images. Returns a sparse boolean
mask.
image: numpy array that represents image
type: Type of thresholding to use. Options are 'manual' or "otsu"
thresh_value: If manual masking, insert a threshold value
correction: Correction factor after thresholding. Values >1 make the threshold more
stringent. By default, value with be 1 (identity)
"""
# Ensure that the image is grayscale
    if image.ndim > 2:  # check dimensions, not row count
# Not yet converted to grayscale - use grayscale image
image = skimage.color.rgb2gray(image)
# Check is the threshold type is otsu
if type == "otsu":
# Otsu threshold
thresh_value = skimage.filters.threshold_otsu(image) * correction
# Check if manual thresholding
elif type == "manual":
# manual threshold
thresh_value = thresh_value * correction
# Otherwise raise an exception!
else:
# Raise exception
raise (Exception("Threshold type not supported!"))
# Create a mask from the threshold value
thresh_img = image < thresh_value
# Convert the mask to a boolean sparse matrix
return scipy.sparse.coo_matrix(thresh_img, dtype=np.bool)
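# A minimal usage sketch (assumed inputs): Otsu-threshold a previously
# filtered image (hypothetical `smoothed`) and recover the dense mask.
#
#   sparse_mask = Thresholding(smoothed, type="otsu")
#   dense_mask = sparse_mask.toarray()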
def Opening(mask, disk_size, parallel=False):
"""Morphological opening on boolean array (mask). A circular disk is used for the filtering.
disk_size: size of disk to use for filter.
    parallel: whether to compute the operation in parallel chunks
"""
# Ensure that the image is boolean
if not mask.dtype is np.dtype(np.bool):
# Raise an exception
raise (Exception("Mask must be a boolean array!"))
# Proceed to process the mask as an array
if isinstance(mask, scipy.sparse.coo_matrix):
# Convert to array
mask = mask.toarray()
# Check to see if parallel computation
if parallel:
        # Apply morphological opening in parallel chunks
mask = skimage.util.apply_parallel(
skimage.morphology.opening,
mask,
extra_keywords={
# Set size of circular filter
"selem": skimage.morphology.disk(disk_size)
},
)
# Use single processor
else:
        # Otherwise use only a single processor
mask = skimage.morphology.opening(
mask, selem=skimage.morphology.disk(disk_size)
)
# Convert the mask back to scipy sparse matrix for storage
return scipy.sparse.coo_matrix(mask, dtype=np.bool)
def Closing(mask, disk_size, parallel=False):
"""Morphological closing on boolean array (mask). A circular disk is used for the filtering.
disk_size: size of disk to use for filter.
    parallel: whether to compute the operation in parallel chunks
"""
# Ensure that the image is boolean
if not mask.dtype is np.dtype(np.bool):
# Raise an exception
raise (Exception("Mask must be a boolean array!"))
# Proceed to process the mask as an array
if isinstance(mask, scipy.sparse.coo_matrix):
# Convert to array
mask = mask.toarray()
# Check to see if parallel computation
if parallel:
        # Apply morphological closing in parallel chunks
mask = skimage.util.apply_parallel(
skimage.morphology.closing,
mask,
extra_keywords={
# Set size of circular filter
"selem": skimage.morphology.disk(disk_size)
},
)
# Use single processor
else:
        # Otherwise use only a single processor
mask = skimage.morphology.closing(
mask, selem=skimage.morphology.disk(disk_size)
)
# Convert the mask back to scipy sparse matrix for storage
return scipy.sparse.coo_matrix(mask, dtype=np.bool)
def MorphFill(mask):
"""Morphological filling on a binary mask. Fills holes"""
# Ensure that the image is boolean
if not mask.dtype is np.dtype(np.bool):
# Raise an exception
raise (Exception("Mask must be a boolean array!"))
# Proceed to process the mask as an array
if isinstance(mask, scipy.sparse.coo_matrix):
# Convert to array
mask = mask.toarray()
# Filling in the mask
mask = scipy.ndimage.binary_fill_holes(mask)
# Return the mask
return scipy.sparse.coo_matrix(mask, dtype=np.bool)
def NonzeroSlice(mask, original):
"""Slice original image and mask to be a certain size based on bounding
region around mask"""
# Ensure that the image is boolean
if not mask.dtype is np.dtype(np.bool):
# Raise an exception
raise (Exception("Mask must be a boolean array!"))
# Proceed to process the mask as an array
if isinstance(mask, scipy.sparse.coo_matrix):
# Convert to array
mask = mask.toarray()
# Get nonzero indices from your mask so we can apply to original image
nonzero = np.nonzero(mask)
# Get bounding box
minx = min(nonzero[0])
maxx = max(nonzero[0])
miny = min(nonzero[1])
maxy = max(nonzero[1])
# Extract sliced, nonzero regions from your original image
original = original[minx:maxx, miny:maxy]
# Extract sliced, nonzero regions from your mask image
mask = mask[minx:maxx, miny:maxy]
# Return the original image and then the mask as a sparse matrix
return original, scipy.sparse.coo_matrix(mask, dtype=np.bool)
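# End-to-end sketch of how these helpers compose (hypothetical inputs):
# median-filter, threshold, clean up the mask morphologically, fill holes,
# then crop both image and mask to the mask's bounding box.
#
#   gray = skimage.color.rgb2gray(rgb_image)
#   filt = MedFilter(gray, filter_size=5)
#   mask = Thresholding(filt, type="otsu")
#   mask = Closing(Opening(mask, disk_size=3), disk_size=3)
#   mask = MorphFill(mask)
#   cropped_image, cropped_mask = NonzeroSlice(mask, gray)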
|
{"hexsha": "fa7f4c00fbefe7b385405e62c94afe2faece0159", "size": 7067, "ext": "py", "lang": "Python", "max_stars_repo_path": "HDIprep/morphology.py", "max_stars_repo_name": "JoshuaHess12/hdi-prep", "max_stars_repo_head_hexsha": "224994b17b229abb30c29e9e70579ad8fdfeff8a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-27T01:04:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-27T01:04:59.000Z", "max_issues_repo_path": "HDIprep/morphology.py", "max_issues_repo_name": "JoshuaHess12/hdi-prep", "max_issues_repo_head_hexsha": "224994b17b229abb30c29e9e70579ad8fdfeff8a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-05-04T23:29:55.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-14T20:15:37.000Z", "max_forks_repo_path": "HDIprep/morphology.py", "max_forks_repo_name": "JoshuaHess12/hdi-prep", "max_forks_repo_head_hexsha": "224994b17b229abb30c29e9e70579ad8fdfeff8a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.3349056604, "max_line_length": 96, "alphanum_fraction": 0.6640724494, "include": true, "reason": "import numpy,import scipy", "num_tokens": 1565}
|
import torch
import numpy as np
# From https://github.com/soumith/dcgan.torch/issues/14
def np_slerp(val, low, high):
omega = np.arccos(np.clip(np.dot(low/np.linalg.norm(low), high/np.linalg.norm(high)), -1, 1))
so = np.sin(omega)
if so == 0:
return (1.0-val) * low + val * high # L'Hopital's rule/LERP
return np.sin((1.0-val)*omega) / so * low + np.sin(val*omega) / so * high
def slerp(val, low, high):
omega = torch.acos(torch.clamp(torch.dot(low/torch.norm(low), high/torch.norm(high)), -1, 1))
so = torch.sin(omega)
if so == 0:
return (1.0-val) * low + val * high # L'Hopital's rule/LERP
return torch.sin((1.0-val)*omega) / so*low + torch.sin(val*omega)/so*high
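# A minimal usage sketch (assumed context): interpolate between two random
# latent vectors, as is common when exploring a GAN's latent space.
#
#   low, high = torch.randn(128), torch.randn(128)
#   midpoint = slerp(0.5, low, high)
#   # or, with numpy arrays:
#   mid_np = np_slerp(0.5, np.random.randn(128), np.random.randn(128))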
|
{"hexsha": "a03f484ad7f67a0aa74ca93641d9b84dcdef954b", "size": 717, "ext": "py", "lang": "Python", "max_stars_repo_path": "utils/slerp.py", "max_stars_repo_name": "li012589/NeuralWavelet", "max_stars_repo_head_hexsha": "6e593ded5cb4ae80579cbf56eb9c346d808669cb", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2021-01-27T00:41:40.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T10:11:51.000Z", "max_issues_repo_path": "utils/slerp.py", "max_issues_repo_name": "li012589/NeuralWavelet", "max_issues_repo_head_hexsha": "6e593ded5cb4ae80579cbf56eb9c346d808669cb", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-03-08T12:11:35.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-09T08:58:30.000Z", "max_forks_repo_path": "utils/slerp.py", "max_forks_repo_name": "li012589/NeuralWavelet", "max_forks_repo_head_hexsha": "6e593ded5cb4ae80579cbf56eb9c346d808669cb", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2021-02-03T01:42:08.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-03T17:47:19.000Z", "avg_line_length": 37.7368421053, "max_line_length": 97, "alphanum_fraction": 0.6234309623, "include": true, "reason": "import numpy", "num_tokens": 239}
|
import numpy as np
import pandas as pd
import logging
from math import pi
from datetime import datetime
BASE_FEATURES = ['ip', 'app', 'device', 'os', 'channel']
def feature_creation(data_directory):
""" Reads in the raw data stored in the data_directory
(either sample, train, or test) and builds the required
features.
:param data_directory: Directory of CSV to parse
:return: Pandas data frame with features
"""
logger = logging.getLogger(__name__)
logger.info("Feature Creation: Reading in data from %s" % data_directory)
df = pd.read_csv(data_directory)
assert np.isin(BASE_FEATURES, df.columns).all()
logger.info("Feature Creation: Generating temporal features")
assert 'click_time' in df.columns
# df['click_time'] = pd.to_datetime(df['click_time'])
    # The below feature did not increase the score on the private or public LB
# df['day_of_week'] = df.click_time.apply(lambda x: x.dayofweek)
# Drop temporal columns that are not features
drop_cols = [c for c in df.columns if "_time" in c]
df = df.drop(columns=drop_cols)
return df
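# A minimal usage sketch (hypothetical path):
#
#   df = feature_creation("data/raw/train_sample.csv")
#   assert set(BASE_FEATURES).issubset(df.columns)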
def convert_time(dt):
""" Converts the provided datetime to a tuple that represents
the coordinates on a clock.
:param dt: Datetime object to be converted
:return: tuple of clock coordinates
"""
if not isinstance(dt, datetime):
raise ValueError("Expected datetime object. Gor class [{}]".format(
dt.__class__))
rad = (360 + ((3 - dt.hour) / 12) * 360) * (pi / 180)
return (np.cos(rad), np.sin(rad))
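# Worked examples of the mapping above: 12 o'clock lands at the top of the
# unit circle, 3 at the right, 6 at the bottom. Minutes are ignored, and
# hours wrap every 12 since the angle is periodic:
#
#   convert_time(datetime(2018, 1, 1, 12))  # -> (~0.0, 1.0)
#   convert_time(datetime(2018, 1, 1, 3))   # -> (1.0, ~0.0)
#   convert_time(datetime(2018, 1, 1, 6))   # -> (~0.0, -1.0)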
|
{"hexsha": "adbf425532707ae32389878b15dd9cfacbbd18c0", "size": 1631, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/features/build_features.py", "max_stars_repo_name": "KennedyMurphy/AdTracking", "max_stars_repo_head_hexsha": "dfe69b89ada8fc3a3b5aae59de018e49267d7201", "max_stars_repo_licenses": ["FTL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/features/build_features.py", "max_issues_repo_name": "KennedyMurphy/AdTracking", "max_issues_repo_head_hexsha": "dfe69b89ada8fc3a3b5aae59de018e49267d7201", "max_issues_repo_licenses": ["FTL"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-03-24T16:28:40.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-01T22:59:35.000Z", "max_forks_repo_path": "src/features/build_features.py", "max_forks_repo_name": "KennedyMurphy/AdTracking", "max_forks_repo_head_hexsha": "dfe69b89ada8fc3a3b5aae59de018e49267d7201", "max_forks_repo_licenses": ["FTL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.125, "max_line_length": 79, "alphanum_fraction": 0.6603310852, "include": true, "reason": "import numpy", "num_tokens": 389}
|
(**
* Copyright (C) 2022 BedRock Systems, Inc.
* All rights reserved.
*
* SPDX-License-Identifier: LGPL-2.1 WITH BedRock Exception for use over network, see repository root for details.
*)
Require Import iris.algebra.agree.
Require Import iris.proofmode.proofmode.
Require Import bedrock.lang.bi.spec.frac_splittable.
Require Import bedrock.lang.cpp.logic.
Require Import bedrock.lang.algebra.frac_auth.
Require Import bedrock.lang.cpp.logic.own_instances.
Require Import bedrock.lang.proofmode.own_obs.
Set Printing Coercions.
(**
Well-typed ghost resources:
- [afrac.auth (g : afrac.gname A) (q : Qp) (x : A) : mpred]
represents fractional ownership of ghost cell [g] currently containing
the authoritative state [x]
- [afrac.frag (g : afrac.gname A) (q : Qp) (x : A) : mpred]
represents fractional ownership of ghost cell [g] currently containing
the fragmentative state [x]
*)
Module Type AUTH_FRAC.
(** CMRA *)
Parameter G : ∀ (A : Type) (Σ : gFunctors), Set.
Existing Class G.
Parameter Σ : ∀ (A : Type), gFunctors.
#[global] Declare Instance subG {A Σ} : subG (AUTH_FRAC.Σ A) Σ -> G A Σ.
(** Ghosts *)
Parameter gname : ∀ (A : Type), Set.
#[global] Declare Instance gname_inhabited A : Inhabited (gname A).
#[global] Declare Instance gname_eq_dec A : EqDecision (gname A).
#[global] Declare Instance gname_countable A : Countable (gname A).
(** Predicates *)
Parameter auth : ∀ {A} `{Σ : cpp_logic, !AUTH_FRAC.G A Σ}
(g : gname A) (q : Qp) (x : A), mpred.
Parameter frag : ∀ {A} `{Σ : cpp_logic, !AUTH_FRAC.G A Σ}
(g : gname A) (q : Qp) (x : A), mpred.
Section properties.
Context {A} `{Σ : cpp_logic, !AUTH_FRAC.G A Σ}.
(** Structure *)
#[global] Declare Instance auth_objective : Objective3 auth.
#[global] Declare Instance auth_frac g : FracSplittable_1 (auth g).
#[global] Declare Instance auth_agree g : AgreeF1 (auth g).
#[global] Declare Instance frag_objective : Objective3 frag.
#[global] Declare Instance frag_frac g : FracSplittable_1 (frag g).
#[global] Declare Instance frag_agree g : AgreeF1 (frag g).
#[global] Declare Instance auth_frag_agree g q1 q2 x1 x2 :
Observe2 [| x1 = x2 |] (auth g q1 x1) (frag g q2 x2).
(** Allocation *)
#[local] Notation OWN g x := (auth g 1 x ** frag g 1 x) (only parsing).
(* (* Stronger allocation rules may not be needed for now. *)
Axiom alloc_strong_dep : ∀ (f : gname A -> A) (P : gname A -> Prop),
pred_infinite P ->
|-- |==> Exists g, [| P g |] ** OWN g (f g).
Axiom alloc_cofinite_dep : ∀ (f : gname A -> A) (G : gset (gname A)),
|-- |==> Exists g, [| g ∉ G |] ** OWN g (f g).
Axiom alloc_dep : ∀ (f : gname A -> A),
|-- |==> Exists g, OWN g (f g).
Axiom alloc_strong : ∀ (P : gname A -> Prop) x,
pred_infinite P ->
|-- |==> Exists g, [| P g |] ** OWN g x.
Axiom alloc_cofinite : ∀ (G : gset (gname A)) x,
|-- |==> Exists g, [| g ∉ G |] ** OWN g x.
*)
Axiom alloc : ∀ x, |-- |==> Exists g, OWN g x.
(** Updates *)
Axiom update : ∀ g x y, |-- auth g 1 x -* frag g 1 x -* |==> OWN g y.
(** TODO: Automation (generically derivable) *)
End properties.
End AUTH_FRAC.
(**
TODO: unify with [bedrock.algebra.frac_auth_agree].
*)
Module afrac : AUTH_FRAC.
(** CMRA *)
#[local] Notation RA A := (frac_authR (agreeR (leibnizO A))).
Class G (A : Type) (Σ : gFunctors) : Set := G_inG :> inG Σ (RA A).
Definition Σ (A : Type) : gFunctors := #[ GFunctor (RA A) ].
Lemma subG {A Σ} : subG (afrac.Σ A) Σ -> G A Σ.
Proof. solve_inG. Qed.
(** Ghosts *)
Definition gname (A : Type) : Set := iprop.gname.
#[local] Instance gname_inhabited A : Inhabited (gname A) := _.
#[local] Instance gname_eq_dec A : EqDecision (gname A) := _.
#[local] Instance gname_countable A : Countable (gname A) := _.
(** Predicates *)
Section defs.
Context {A} `{Σ : cpp_logic, !afrac.G A Σ}.
#[local] Notation to_agree := (to_agree (A:=leibnizO A)).
Definition auth (g : gname A) (q : Qp) (x : A) : mpred :=
own g (●F{q} (to_agree x)).
Definition frag (g : gname A) (q : Qp) (x : A) : mpred :=
own g (◯F{q} (to_agree x)).
#[local] Instance auth_objective : Objective3 auth := _.
#[local] Instance auth_frac g : FracSplittable_1 (auth g).
Proof. solve_frac. Qed.
#[local] Instance auth_agree g : AgreeF1 (auth g).
Proof.
(**
TODO (PDS): Shouldn't need to expose [to_agree].
*)
intros. rewrite -(inj_iff to_agree). apply _.
Qed.
#[local] Instance frag_objective : Objective3 frag := _.
#[local] Instance frag_frac g : FracSplittable_1 (frag g).
Proof. solve_frac. Qed.
#[local] Instance frag_agree g : AgreeF1 (frag g).
Proof.
(**
TODO (PDS): own_frac_auth_frag_frac_agree_L missing in
bedrock.lang.proofmode.own_obs
*)
intros. iIntros "F1 F2".
iDestruct (own_valid_2 with "F1 F2") as %Hv. iModIntro. iPureIntro.
move: Hv. rewrite -frac_auth_frag_op frac_auth_frag_valid=>-[] _.
by rewrite to_agree_op_valid_L.
Qed.
#[local] Instance auth_frag_agree g q1 q2 x1 x2 :
Observe2 [| x1 = x2 |] (auth g q1 x1) (frag g q2 x2).
Proof.
(**
TODO (PDS): Problem with [own_frac_auth_agree_L]
*)
intros. iIntros "A F".
iDestruct (observe_2 [| _ ≼ _ |] with "A F") as %Hinc.
iModIntro. iPureIntro. move: Hinc.
move/to_agree_included. by fold_leibniz.
Qed.
#[local] Notation OWN g x := (auth g 1 x ** frag g 1 x) (only parsing).
Lemma alloc x : |-- |==> Exists g, OWN g x.
Proof.
iMod (own_alloc (●F{1} (to_agree x) ⋅ ◯F{1} (to_agree x))) as (g) "[A F]".
{ by apply frac_auth_valid. }
iExists g. by iFrame "A F".
Qed.
Lemma update g x y :
|-- auth g 1 x -* frag g 1 x -* |==> OWN g y.
Proof.
iIntros "A F". iMod (own_update_2 with "A F") as "[$$]"; last done.
by apply frac_auth_update_1.
Qed.
End defs.
End afrac.
|
{"author": "bedrocksystems", "repo": "BRiCk", "sha": "23d7e64cc53706de608dbff0be75d1c4b8c3a7ec", "save_path": "github-repos/coq/bedrocksystems-BRiCk", "path": "github-repos/coq/bedrocksystems-BRiCk/BRiCk-23d7e64cc53706de608dbff0be75d1c4b8c3a7ec/theories/lang/cpp/logic/lib/auth_frac.v"}
|
# -*- coding: utf-8 -*-
"""
Created on Mon Jun 8 17:03:43 2020
@author: Shoba Banik
"""
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
train = pd.read_csv('C:/Users/Shoba Banik/Downloads/Loan_SVM/train.csv')
test = pd.read_csv('C:/Users/Shoba Banik/Downloads/Loan_SVM/test.csv')
train.columns
train.shape
train['Loan_Status'].value_counts()
sns.set_style("whitegrid")
sns.FacetGrid(train, hue="Loan_Status", height=4)\
.map(plt.scatter,"Credit_History","ApplicantIncome")\
.add_legend();
plt.show()
train['LoanAmount'].median()
train['Loan_Amount_Term'].value_counts()
train.apply(lambda x: len(x.unique()))
train['Dependents'].replace('3+', 3,inplace=True)
test['Dependents'].replace('3+', 3,inplace=True)
train.isnull().sum()
train['Gender'].fillna(train['Gender'].mode()[0], inplace=True)
train['Married'].fillna(train['Married'].mode()[0], inplace=True)
train['Dependents'].fillna(train['Dependents'].mode()[0], inplace=True)
train['Self_Employed'].fillna(train['Self_Employed'].mode()[0], inplace=True)
train['Credit_History'].fillna(train['Credit_History'].mode()[0], inplace=True)
train['Loan_Amount_Term'].fillna(train['Loan_Amount_Term'].mode()[0], inplace=True)
train['LoanAmount'].fillna(train['LoanAmount'].median(), inplace=True)
train.isnull().sum()
test.isnull().sum()
test['Gender'].fillna(test['Gender'].mode()[0], inplace=True)
test['Married'].fillna(test['Married'].mode()[0], inplace=True)
test['Dependents'].fillna(test['Dependents'].mode()[0], inplace=True)
test['Self_Employed'].fillna(test['Self_Employed'].mode()[0], inplace=True)
test['Credit_History'].fillna(test['Credit_History'].mode()[0], inplace=True)
test['Loan_Amount_Term'].fillna(test['Loan_Amount_Term'].mode()[0], inplace=True)
test['LoanAmount'].fillna(test['LoanAmount'].median(), inplace=True)
test.isnull().sum()
train['LoanAmount'] = np.log(train['LoanAmount'])
test['LoanAmount'] = np.log(test['LoanAmount'])
X = train.iloc[:, 1:-1].values
y = train.iloc[:, -1].values
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
labelencoder_X = LabelEncoder()
X[:,0] = labelencoder_X.fit_transform(X[:,0])
X[:,1] = labelencoder_X.fit_transform(X[:,1])
X[:,3] = labelencoder_X.fit_transform(X[:,3])
X[:,4] = labelencoder_X.fit_transform(X[:,4])
X[:,-1] = labelencoder_X.fit_transform(X[:,-1])
# note: categorical_features was removed in scikit-learn >= 0.24; on newer
# versions use ColumnTransformer([("ohe", OneHotEncoder(), [0])], remainder="passthrough")
onehotencoder = OneHotEncoder(categorical_features = [0])
X = onehotencoder.fit_transform(X).toarray()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
cm
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred)
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
accuracies.mean()
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
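# Hedged sketch (not in the original script): scoring the held-out `test`
# frame with the fitted pipeline. Column indices are assumed to line up with
# the train frame after dropping Loan_ID; the per-column re-fit of
# LabelEncoder mirrors the script's own pattern above, though fitting the
# encoders on train and reusing them would be the safer practice.
X_final = test.iloc[:, 1:].values
for col in (0, 1, 3, 4, -1):
    X_final[:, col] = LabelEncoder().fit_transform(X_final[:, col])
X_final = onehotencoder.transform(X_final).toarray()
X_final = sc.transform(X_final)
test_predictions = classifier.predict(X_final)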
|
{"hexsha": "9b29c9c5f7f714207377b0a2d2134c7018b066d4", "size": 3529, "ext": "py", "lang": "Python", "max_stars_repo_path": "Loan_SVM/Loan_svm.py", "max_stars_repo_name": "ShobaBanik/SVM_shoba", "max_stars_repo_head_hexsha": "3a428156fb085504ff0c5560b37f438f276d59d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Loan_SVM/Loan_svm.py", "max_issues_repo_name": "ShobaBanik/SVM_shoba", "max_issues_repo_head_hexsha": "3a428156fb085504ff0c5560b37f438f276d59d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Loan_SVM/Loan_svm.py", "max_forks_repo_name": "ShobaBanik/SVM_shoba", "max_forks_repo_head_hexsha": "3a428156fb085504ff0c5560b37f438f276d59d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.1623931624, "max_line_length": 94, "alphanum_fraction": 0.7072825163, "include": true, "reason": "import numpy", "num_tokens": 891}
|
"""High-level API for cubic splines"""
import numpy
import numpy as np
from ..cartesian import mlinspace
class CubicSpline:
"""Class representing a cubic spline interpolator on a regular cartesian grid.."""
__grid__ = None
__values__ = None
__coeffs__ = None
def __init__(self, a, b, orders, values=None):
"""Creates a cubic spline interpolator on a regular cartesian grid.
Parameters:
-----------
a : array of size d (float)
Lower bounds of the cartesian grid.
b : array of size d (float)
Upper bounds of the cartesian grid.
orders : array of size d (int)
Number of nodes along each dimension (=(n1,...,nd) )
values : (optional, (n1 x ... x nd) array)
Values on the nodes of the function to interpolate.
Returns
-------
spline : CubicSpline
Cubic spline interpolator. Can be evaluated at point(s) `y` with
`spline(y)`
"""
self.d = len(a)
assert len(b) == self.d
assert len(orders) == self.d
self.a = np.array(a, dtype=float)
self.b = np.array(b, dtype=float)
self.orders = np.array(orders, dtype=int)
self.dtype = self.a.dtype
self.__coeffs__ = None
if values is not None:
self.set_values(values)
def set_values(self, values):
"""Set values on the nodes for the function to interpolate."""
values = np.array(values, dtype=float)
from .filter_cubic import filter_coeffs
if not np.all(np.isfinite(values)):
raise Exception("Trying to interpolate non-finite values")
sh = self.orders.tolist()
sh2 = [e + 2 for e in self.orders]
values = values.reshape(sh)
self.__values__ = values
# this should be done without temporary memory allocation
self.__coeffs__ = filter_coeffs(self.a, self.b, self.orders, values)
def interpolate(self, points, values=None, with_derivatives=False):
"""Interpolate spline at a list of points.
Parameters
----------
        points : (array-like) list of points at which the spline is evaluated.
values : (optional) container for inplace computation
Returns
-------
        values : (array-like) interpolated values at the given points.
"""
from .eval_splines import eval_cubic
grid = tuple((self.a[i], self.b[i], self.orders[i]) for i in range(len(self.a)))
if not np.all(np.isfinite(points)):
raise Exception("Spline interpolator evaluated at non-finite points.")
if not with_derivatives:
if points.ndim == 1:
# evaluate only on one point
return eval_cubic(grid, self.__coeffs__, points)
else:
N, d = points.shape
assert d == self.d
if values is None:
values = np.empty(N, dtype=self.dtype)
eval_cubic(grid, self.__coeffs__, points, values)
return values
else:
raise Exception("Not implemented.")
@property
def grid(self):
"""Cartesian enumeration of all nodes."""
if self.__grid__ is None:
self.__grid__ = mlinspace(self.a, self.b, self.orders)
return self.__grid__
def __call__(self, s):
"""Interpolate the spline at one or many points"""
if s.ndim == 1:
res = self.__call__(numpy.atleast_2d(s))
return res[0]
return self.interpolate(s)
class CubicSplines:
__grid__ = None
__values__ = None
__coeffs__ = None
__n_splines__ = None
def __init__(self, a, b, orders, values=None):
"""Creates a cubic multi-spline interpolator for many functions on a regular cartesian grid."""
self.d = len(a)
assert len(b) == self.d
assert len(orders) == self.d
self.a = np.array(a, dtype=float)
self.b = np.array(b, dtype=float)
self.orders = np.array(orders, dtype=int)
self.__mcoeffs__ = None
if values is not None:
self.set_values(values)
def set_values(self, mvalues):
"""Change values on the nodes of the functions to approximate."""
mvalues = np.array(mvalues, dtype=float)
n_sp = mvalues.shape[-1]
mvalues = mvalues.reshape(list(self.orders) + [n_sp])
if not np.all(np.isfinite(mvalues)):
raise Exception("Trying to interpolate non-finite values")
from .filter_cubic import filter_mcoeffs
# number of splines
self.__mcoeffs__ = filter_mcoeffs(self.a, self.b, self.orders, mvalues)
self.__mvalues__ = mvalues
def interpolate(self, points, diff=False):
"""Interpolate splines at manu points."""
import time
if points.ndim == 1:
raise Exception("Expected 2d array. Received {}d array".format(points.ndim))
if points.shape[1] != self.d:
            raise Exception(
                "Second dimension should be {}. Received: {}.".format(
                    self.d, points.shape[1]
                )
            )
if not np.all(np.isfinite(points)):
raise Exception("Spline interpolator evaluated at non-finite points.")
n_sp = self.__mcoeffs__.shape[-1]
N = points.shape[0]
d = points.shape[1]
from .eval_splines import eval_cubic
if not diff:
grid = tuple(
(self.a[i], self.b[i], self.orders[i]) for i in range(len(self.a))
)
values = np.empty((N, n_sp), dtype=float)
eval_cubic(grid, self.__mcoeffs__, points, values)
return values
else:
from .eval_cubic import vec_eval_cubic_splines_G
values = np.empty((N, n_sp), dtype=float)
dvalues = np.empty((N, d, n_sp), dtype=float)
vec_eval_cubic_splines_G(
self.a, self.b, self.orders, self.__mcoeffs__, points, values, dvalues
)
return [values, dvalues]
@property
def grid(self):
"""Cartesian enumeration of all nodes."""
if self.__grid__ is None:
self.__grid__ = mlinspace(self.a, self.b, self.orders)
return self.__grid__
def __call__(self, s):
"""Interpolate the splines at one or many points.
Parameters
----------
s : (array-like with 1 or 2 dimensions)
Coordinates of one point, or list of coordinates, at which the splines
are interpolated.
Returns:
--------
res : (array-like with 1 or 2 dimensions)
Vector or list of vectors containing the interpolator evaluated at `s`.
"""
if s.ndim == 1:
res = self.__call__(numpy.atleast_2d(s))
return res.ravel()
return self.interpolate(s)
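# Hedged usage sketch (not part of the module): interpolate sin on [0, 2π]
# with 50 nodes. Assumes the package's filter/eval kernels are importable;
# because of the relative import above, run via
# `python -m interpolation.splines.splines` rather than as a bare script.
if __name__ == "__main__":
    nodes = mlinspace([0.0], [2 * np.pi], [50])
    spline = CubicSpline([0.0], [2 * np.pi], [50], values=np.sin(nodes[:, 0]))
    pts = np.array([[0.5], [1.0], [1.5]])
    print(spline(pts))  # expected to be close to np.sin(pts[:, 0])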
|
{"hexsha": "008f6000bd66928040f8ac4be05c0b00bc84069a", "size": 7070, "ext": "py", "lang": "Python", "max_stars_repo_path": "interpolation/splines/splines.py", "max_stars_repo_name": "vishalbelsare/interpolation.py", "max_stars_repo_head_hexsha": "116d144700a1c78b5ad86eee097d064610be8325", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 110, "max_stars_repo_stars_event_min_datetime": "2015-03-16T05:40:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T22:32:51.000Z", "max_issues_repo_path": "interpolation/splines/splines.py", "max_issues_repo_name": "vishalbelsare/interpolation.py", "max_issues_repo_head_hexsha": "116d144700a1c78b5ad86eee097d064610be8325", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 70, "max_issues_repo_issues_event_min_datetime": "2016-01-18T11:51:30.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-27T13:21:41.000Z", "max_forks_repo_path": "interpolation/splines/splines.py", "max_forks_repo_name": "vishalbelsare/interpolation.py", "max_forks_repo_head_hexsha": "116d144700a1c78b5ad86eee097d064610be8325", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 32, "max_forks_repo_forks_event_min_datetime": "2016-06-15T16:27:21.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T16:33:16.000Z", "avg_line_length": 30.474137931, "max_line_length": 103, "alphanum_fraction": 0.5755304102, "include": true, "reason": "import numpy", "num_tokens": 1616}
|
import h5py
from mpi4py import MPI
import numpy as np
import time
comm = MPI.COMM_WORLD
rank = comm.rank # The process ID (integer 0-3 for 4-process run)
size = comm.size
def MPI_open(comm,handle,key,nparts):
## figure out how many particles each MPI task is supposed to read
num_per_task = int(nparts//comm.size)+1
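    ## e.g. nparts=21 on 4 ranks gives num_per_task=6, so ranks read slices
    ## [0:6], [6:12], [12:18], [18:24]; h5py clips the final slice to nparts.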
## go ahead and read the data from the handle
sendbuf = handle[key][rank*num_per_task : (rank+1)*num_per_task].astype(np.float64)
recvbuf = None
if rank == 0:
## initialize the receiving memory buffer
recvbuf = np.empty(
[comm.size]+list(sendbuf.shape),
dtype=sendbuf.dtype)
## gather the data
comm.Gather(sendbuf, recvbuf, root=0)
if rank == 0:
recvbuf = np.concatenate(recvbuf,axis=0)[:nparts]
return recvbuf
## to test functionality, set fname = 'foo.hdf5' instead (a dummy file is
## created below); the default points at a FIRE snapshot on scratch.
fname = '/scratch/projects/xsede/GalaxiesOnFIRE/metal_diffusion/m12i_res7100/output/snapdir_600/snapshot_600.0.hdf5'
## how many particles to read
nparts = 21
if rank == 0:
if fname == 'foo.hdf5':
## initialize a dummy file just for this test
with h5py.File(fname, 'w') as f:
group = f.create_group("PartType0")
dset = group.create_dataset('Coordinates', (nparts,3), dtype='i')
dset[:] = np.arange(nparts*3).reshape(-1,3)
init_time = time.time()
## open the hdf5 file with the mpio driver so every rank can read it in parallel
with h5py.File(fname, 'r', driver='mpio', comm=comm) as handle:
if 'Header' in handle.keys():
nparts = int(handle['Header'].attrs['NumPart_ThisFile'][0])
## read the coordinates from the handle
coords = MPI_open(comm,handle['PartType0'],'Coordinates',nparts)
## clean up child MPI tasks
if rank != 0:
exit()
print(time.time()-init_time,'s elapsed reading %d particles'%nparts)
|
{"hexsha": "726abcfebf99abaa7911c38fd5778dd93db8a6f2", "size": 1869, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/abg_python/parallel/parallel_read.py", "max_stars_repo_name": "agurvich/abg_python", "max_stars_repo_head_hexsha": "f76425481781e6e8e28caf9e8290c0b5b920ab91", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-10T16:36:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-10T16:36:49.000Z", "max_issues_repo_path": "src/abg_python/parallel/parallel_read.py", "max_issues_repo_name": "agurvich/abg_python", "max_issues_repo_head_hexsha": "f76425481781e6e8e28caf9e8290c0b5b920ab91", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/abg_python/parallel/parallel_read.py", "max_forks_repo_name": "agurvich/abg_python", "max_forks_repo_head_hexsha": "f76425481781e6e8e28caf9e8290c0b5b920ab91", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-09-19T01:14:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-13T22:32:08.000Z", "avg_line_length": 29.203125, "max_line_length": 116, "alphanum_fraction": 0.6623863028, "include": true, "reason": "import numpy", "num_tokens": 519}
|
/-
Copyright (c) 2022 Yaël Dillies. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Yaël Dillies
! This file was ported from Lean 3 source module combinatorics.double_counting
! leanprover-community/mathlib commit 327c3c0d9232d80e250dc8f65e7835b82b266ea5
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Algebra.BigOperators.Order
/-!
# Double countings
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
This file gathers a few double counting arguments.
## Bipartite graphs
In a bipartite graph (considered as a relation `r : α → β → Prop`), we can bound the number of edges
between `s : finset α` and `t : finset β` by the minimum/maximum of edges over all `a ∈ s` times
the size of `s`. Similarly for `t`. Combining those two yields inequalities between the sizes of `s`
and `t`.
* `bipartite_below`: `s.bipartite_below r b` are the elements of `s` below `b` with respect to `r`. Its size
is the number of edges of `b` in `s`.
* `bipartite_above`: `t.bipartite_above r a` are the elements of `t` above `a` with respect to `r`. Its size
is the number of edges of `a` in `t`.
* `card_mul_le_card_mul`, `card_mul_le_card_mul'`: Double counting the edges of a bipartite graph
from below and from above.
* `card_mul_eq_card_mul`: Equality combination of the previous.
-/
open Finset Function Relator
open BigOperators
variable {α β : Type _}
/-! ### Bipartite graph -/
namespace Finset
section Bipartite
variable (r : α → β → Prop) (s : Finset α) (t : Finset β) (a a' : α) (b b' : β)
[DecidablePred (r a)] [∀ a, Decidable (r a b)] {m n : ℕ}
#print Finset.bipartiteBelow /-
/-- Elements of `s` which are "below" `b` according to relation `r`. -/
def bipartiteBelow : Finset α :=
s.filterₓ fun a => r a b
#align finset.bipartite_below Finset.bipartiteBelow
-/
#print Finset.bipartiteAbove /-
/-- Elements of `t` which are "above" `a` according to relation `r`. -/
def bipartiteAbove : Finset β :=
t.filterₓ (r a)
#align finset.bipartite_above Finset.bipartiteAbove
-/
#print Finset.bipartiteBelow_swap /-
theorem bipartiteBelow_swap : t.bipartiteBelow (swap r) a = t.bipartiteAbove r a :=
rfl
#align finset.bipartite_below_swap Finset.bipartiteBelow_swap
-/
/- warning: finset.bipartite_above_swap -> Finset.bipartiteAbove_swap is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {β : Type.{u2}} (r : α -> β -> Prop) (s : Finset.{u1} α) (b : β) [_inst_2 : forall (a : α), Decidable (r a b)], Eq.{succ u1} (Finset.{u1} α) (Finset.bipartiteAbove.{u2, u1} β α (Function.swap.{succ u1, succ u2, 1} α β (fun (ᾰ : α) (ᾰ : β) => Prop) r) s b (fun (a : α) => _inst_2 a)) (Finset.bipartiteBelow.{u1, u2} α β r s b (fun (a : α) => _inst_2 a))
but is expected to have type
forall {α : Type.{u2}} {β : Type.{u1}} (r : α -> β -> Prop) (s : Finset.{u2} α) (b : β) [_inst_2 : forall (a : α), Decidable (r a b)], Eq.{succ u2} (Finset.{u2} α) (Finset.bipartiteAbove.{u1, u2} β α (Function.swap.{succ u2, succ u1, 1} α β (fun (ᾰ : α) (ᾰ : β) => Prop) r) s b (fun (a : α) => _inst_2 a)) (Finset.bipartiteBelow.{u2, u1} α β r s b (fun (a : α) => _inst_2 a))
Case conversion may be inaccurate. Consider using '#align finset.bipartite_above_swap Finset.bipartiteAbove_swapₓ'. -/
theorem bipartiteAbove_swap : s.bipartiteAbove (swap r) b = s.bipartiteBelow r b :=
rfl
#align finset.bipartite_above_swap Finset.bipartiteAbove_swap
/- warning: finset.coe_bipartite_below -> Finset.coe_bipartiteBelow is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {β : Type.{u2}} (r : α -> β -> Prop) (s : Finset.{u1} α) (b : β) [_inst_2 : forall (a : α), Decidable (r a b)], Eq.{succ u1} (Set.{u1} α) ((fun (a : Type.{u1}) (b : Type.{u1}) [self : HasLiftT.{succ u1, succ u1} a b] => self.0) (Finset.{u1} α) (Set.{u1} α) (HasLiftT.mk.{succ u1, succ u1} (Finset.{u1} α) (Set.{u1} α) (CoeTCₓ.coe.{succ u1, succ u1} (Finset.{u1} α) (Set.{u1} α) (Finset.Set.hasCoeT.{u1} α))) (Finset.bipartiteBelow.{u1, u2} α β r s b (fun (a : α) => _inst_2 a))) (Sep.sep.{u1, u1} α (Set.{u1} α) (Set.hasSep.{u1} α) (fun (a : α) => r a b) ((fun (a : Type.{u1}) (b : Type.{u1}) [self : HasLiftT.{succ u1, succ u1} a b] => self.0) (Finset.{u1} α) (Set.{u1} α) (HasLiftT.mk.{succ u1, succ u1} (Finset.{u1} α) (Set.{u1} α) (CoeTCₓ.coe.{succ u1, succ u1} (Finset.{u1} α) (Set.{u1} α) (Finset.Set.hasCoeT.{u1} α))) s))
but is expected to have type
forall {α : Type.{u2}} {β : Type.{u1}} (r : α -> β -> Prop) (s : Finset.{u2} α) (b : β) [_inst_2 : forall (a : α), Decidable (r a b)], Eq.{succ u2} (Set.{u2} α) (Finset.toSet.{u2} α (Finset.bipartiteBelow.{u2, u1} α β r s b (fun (a : α) => _inst_2 a))) (setOf.{u2} α (fun (a : α) => And (Membership.mem.{u2, u2} α (Finset.{u2} α) (Finset.instMembershipFinset.{u2} α) a s) (r a b)))
Case conversion may be inaccurate. Consider using '#align finset.coe_bipartite_below Finset.coe_bipartiteBelowₓ'. -/
@[simp, norm_cast]
theorem coe_bipartiteBelow : (s.bipartiteBelow r b : Set α) = { a ∈ s | r a b } :=
coe_filter _ _
#align finset.coe_bipartite_below Finset.coe_bipartiteBelow
#print Finset.coe_bipartiteAbove /-
@[simp, norm_cast]
theorem coe_bipartiteAbove : (t.bipartiteAbove r a : Set β) = { b ∈ t | r a b } :=
coe_filter _ _
#align finset.coe_bipartite_above Finset.coe_bipartiteAbove
-/
variable {s t a a' b b'}
/- warning: finset.mem_bipartite_below -> Finset.mem_bipartiteBelow is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {β : Type.{u2}} (r : α -> β -> Prop) {s : Finset.{u1} α} {b : β} [_inst_2 : forall (a : α), Decidable (r a b)] {a : α}, Iff (Membership.Mem.{u1, u1} α (Finset.{u1} α) (Finset.hasMem.{u1} α) a (Finset.bipartiteBelow.{u1, u2} α β r s b (fun (a : α) => _inst_2 a))) (And (Membership.Mem.{u1, u1} α (Finset.{u1} α) (Finset.hasMem.{u1} α) a s) (r a b))
but is expected to have type
forall {α : Type.{u2}} {β : Type.{u1}} (r : α -> β -> Prop) {s : Finset.{u2} α} {b : β} [_inst_2 : forall (a : α), Decidable (r a b)] {a : α}, Iff (Membership.mem.{u2, u2} α (Finset.{u2} α) (Finset.instMembershipFinset.{u2} α) a (Finset.bipartiteBelow.{u2, u1} α β r s b (fun (a : α) => _inst_2 a))) (And (Membership.mem.{u2, u2} α (Finset.{u2} α) (Finset.instMembershipFinset.{u2} α) a s) (r a b))
Case conversion may be inaccurate. Consider using '#align finset.mem_bipartite_below Finset.mem_bipartiteBelowₓ'. -/
@[simp]
theorem mem_bipartiteBelow {a : α} : a ∈ s.bipartiteBelow r b ↔ a ∈ s ∧ r a b :=
mem_filter
#align finset.mem_bipartite_below Finset.mem_bipartiteBelow
#print Finset.mem_bipartiteAbove /-
@[simp]
theorem mem_bipartiteAbove {b : β} : b ∈ t.bipartiteAbove r a ↔ b ∈ t ∧ r a b :=
mem_filter
#align finset.mem_bipartite_above Finset.mem_bipartiteAbove
-/
/- warning: finset.sum_card_bipartite_above_eq_sum_card_bipartite_below -> Finset.sum_card_bipartiteAbove_eq_sum_card_bipartiteBelow is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {β : Type.{u2}} (r : α -> β -> Prop) {s : Finset.{u1} α} {t : Finset.{u2} β} [_inst_3 : forall (a : α) (b : β), Decidable (r a b)], Eq.{1} Nat (Finset.sum.{0, u1} Nat α Nat.addCommMonoid s (fun (a : α) => Finset.card.{u2} β (Finset.bipartiteAbove.{u1, u2} α β r t a (fun (a_1 : β) => _inst_3 a a_1)))) (Finset.sum.{0, u2} Nat β Nat.addCommMonoid t (fun (b : β) => Finset.card.{u1} α (Finset.bipartiteBelow.{u1, u2} α β r s b (fun (a : α) => _inst_3 a b))))
but is expected to have type
forall {α : Type.{u2}} {β : Type.{u1}} (r : α -> β -> Prop) {s : Finset.{u2} α} {t : Finset.{u1} β} [_inst_3 : forall (a : α) (b : β), Decidable (r a b)], Eq.{1} Nat (Finset.sum.{0, u2} Nat α Nat.addCommMonoid s (fun (a : α) => Finset.card.{u1} β (Finset.bipartiteAbove.{u2, u1} α β r t a (fun (a_1 : β) => _inst_3 a a_1)))) (Finset.sum.{0, u1} Nat β Nat.addCommMonoid t (fun (b : β) => Finset.card.{u2} α (Finset.bipartiteBelow.{u2, u1} α β r s b (fun (a : α) => _inst_3 a b))))
Case conversion may be inaccurate. Consider using '#align finset.sum_card_bipartite_above_eq_sum_card_bipartite_below Finset.sum_card_bipartiteAbove_eq_sum_card_bipartiteBelowₓ'. -/
theorem sum_card_bipartiteAbove_eq_sum_card_bipartiteBelow [∀ a b, Decidable (r a b)] :
(∑ a in s, (t.bipartiteAbove r a).card) = ∑ b in t, (s.bipartiteBelow r b).card :=
by
simp_rw [card_eq_sum_ones, bipartite_above, bipartite_below, sum_filter]
exact sum_comm
#align finset.sum_card_bipartite_above_eq_sum_card_bipartite_below Finset.sum_card_bipartiteAbove_eq_sum_card_bipartiteBelow
/- warning: finset.card_mul_le_card_mul -> Finset.card_mul_le_card_mul is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {β : Type.{u2}} (r : α -> β -> Prop) {s : Finset.{u1} α} {t : Finset.{u2} β} {m : Nat} {n : Nat} [_inst_3 : forall (a : α) (b : β), Decidable (r a b)], (forall (a : α), (Membership.Mem.{u1, u1} α (Finset.{u1} α) (Finset.hasMem.{u1} α) a s) -> (LE.le.{0} Nat Nat.hasLe m (Finset.card.{u2} β (Finset.bipartiteAbove.{u1, u2} α β r t a (fun (a_1 : β) => _inst_3 a a_1))))) -> (forall (b : β), (Membership.Mem.{u2, u2} β (Finset.{u2} β) (Finset.hasMem.{u2} β) b t) -> (LE.le.{0} Nat Nat.hasLe (Finset.card.{u1} α (Finset.bipartiteBelow.{u1, u2} α β r s b (fun (a : α) => _inst_3 a b))) n)) -> (LE.le.{0} Nat Nat.hasLe (HMul.hMul.{0, 0, 0} Nat Nat Nat (instHMul.{0} Nat Nat.hasMul) (Finset.card.{u1} α s) m) (HMul.hMul.{0, 0, 0} Nat Nat Nat (instHMul.{0} Nat Nat.hasMul) (Finset.card.{u2} β t) n))
but is expected to have type
forall {α : Type.{u2}} {β : Type.{u1}} (r : α -> β -> Prop) {s : Finset.{u2} α} {t : Finset.{u1} β} {m : Nat} {n : Nat} [_inst_3 : forall (a : α) (b : β), Decidable (r a b)], (forall (a : α), (Membership.mem.{u2, u2} α (Finset.{u2} α) (Finset.instMembershipFinset.{u2} α) a s) -> (LE.le.{0} Nat instLENat m (Finset.card.{u1} β (Finset.bipartiteAbove.{u2, u1} α β r t a (fun (a_1 : β) => _inst_3 a a_1))))) -> (forall (b : β), (Membership.mem.{u1, u1} β (Finset.{u1} β) (Finset.instMembershipFinset.{u1} β) b t) -> (LE.le.{0} Nat instLENat (Finset.card.{u2} α (Finset.bipartiteBelow.{u2, u1} α β r s b (fun (a : α) => _inst_3 a b))) n)) -> (LE.le.{0} Nat instLENat (HMul.hMul.{0, 0, 0} Nat Nat Nat (instHMul.{0} Nat instMulNat) (Finset.card.{u2} α s) m) (HMul.hMul.{0, 0, 0} Nat Nat Nat (instHMul.{0} Nat instMulNat) (Finset.card.{u1} β t) n))
Case conversion may be inaccurate. Consider using '#align finset.card_mul_le_card_mul Finset.card_mul_le_card_mulₓ'. -/
/-- Double counting argument. Considering `r` as a bipartite graph, the LHS is a lower bound on the
number of edges while the RHS is an upper bound. -/
theorem card_mul_le_card_mul [∀ a b, Decidable (r a b)]
(hm : ∀ a ∈ s, m ≤ (t.bipartiteAbove r a).card)
(hn : ∀ b ∈ t, (s.bipartiteBelow r b).card ≤ n) : s.card * m ≤ t.card * n :=
calc
_ ≤ ∑ a in s, (t.bipartiteAbove r a).card := s.card_nsmul_le_sum _ _ hm
_ = ∑ b in t, (s.bipartiteBelow r b).card :=
(sum_card_bipartiteAbove_eq_sum_card_bipartiteBelow _)
_ ≤ _ := t.sum_le_card_nsmul _ _ hn
#align finset.card_mul_le_card_mul Finset.card_mul_le_card_mul
#print Finset.card_mul_le_card_mul' /-
theorem card_mul_le_card_mul' [∀ a b, Decidable (r a b)]
(hn : ∀ b ∈ t, n ≤ (s.bipartiteBelow r b).card)
(hm : ∀ a ∈ s, (t.bipartiteAbove r a).card ≤ m) : t.card * n ≤ s.card * m :=
card_mul_le_card_mul (swap r) hn hm
#align finset.card_mul_le_card_mul' Finset.card_mul_le_card_mul'
-/
/- warning: finset.card_mul_eq_card_mul -> Finset.card_mul_eq_card_mul is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {β : Type.{u2}} (r : α -> β -> Prop) {s : Finset.{u1} α} {t : Finset.{u2} β} {m : Nat} {n : Nat} [_inst_3 : forall (a : α) (b : β), Decidable (r a b)], (forall (a : α), (Membership.Mem.{u1, u1} α (Finset.{u1} α) (Finset.hasMem.{u1} α) a s) -> (Eq.{1} Nat (Finset.card.{u2} β (Finset.bipartiteAbove.{u1, u2} α β r t a (fun (a_1 : β) => _inst_3 a a_1))) m)) -> (forall (b : β), (Membership.Mem.{u2, u2} β (Finset.{u2} β) (Finset.hasMem.{u2} β) b t) -> (Eq.{1} Nat (Finset.card.{u1} α (Finset.bipartiteBelow.{u1, u2} α β r s b (fun (a : α) => _inst_3 a b))) n)) -> (Eq.{1} Nat (HMul.hMul.{0, 0, 0} Nat Nat Nat (instHMul.{0} Nat Nat.hasMul) (Finset.card.{u1} α s) m) (HMul.hMul.{0, 0, 0} Nat Nat Nat (instHMul.{0} Nat Nat.hasMul) (Finset.card.{u2} β t) n))
but is expected to have type
forall {α : Type.{u2}} {β : Type.{u1}} (r : α -> β -> Prop) {s : Finset.{u2} α} {t : Finset.{u1} β} {m : Nat} {n : Nat} [_inst_3 : forall (a : α) (b : β), Decidable (r a b)], (forall (a : α), (Membership.mem.{u2, u2} α (Finset.{u2} α) (Finset.instMembershipFinset.{u2} α) a s) -> (Eq.{1} Nat (Finset.card.{u1} β (Finset.bipartiteAbove.{u2, u1} α β r t a (fun (a_1 : β) => _inst_3 a a_1))) m)) -> (forall (b : β), (Membership.mem.{u1, u1} β (Finset.{u1} β) (Finset.instMembershipFinset.{u1} β) b t) -> (Eq.{1} Nat (Finset.card.{u2} α (Finset.bipartiteBelow.{u2, u1} α β r s b (fun (a : α) => _inst_3 a b))) n)) -> (Eq.{1} Nat (HMul.hMul.{0, 0, 0} Nat Nat Nat (instHMul.{0} Nat instMulNat) (Finset.card.{u2} α s) m) (HMul.hMul.{0, 0, 0} Nat Nat Nat (instHMul.{0} Nat instMulNat) (Finset.card.{u1} β t) n))
Case conversion may be inaccurate. Consider using '#align finset.card_mul_eq_card_mul Finset.card_mul_eq_card_mulₓ'. -/
theorem card_mul_eq_card_mul [∀ a b, Decidable (r a b)]
(hm : ∀ a ∈ s, (t.bipartiteAbove r a).card = m)
(hn : ∀ b ∈ t, (s.bipartiteBelow r b).card = n) : s.card * m = t.card * n :=
(card_mul_le_card_mul _ (fun a ha => (hm a ha).ge) fun b hb => (hn b hb).le).antisymm <|
card_mul_le_card_mul' _ (fun a ha => (hn a ha).ge) fun b hb => (hm b hb).le
#align finset.card_mul_eq_card_mul Finset.card_mul_eq_card_mul
/- warning: finset.card_le_card_of_forall_subsingleton -> Finset.card_le_card_of_forall_subsingleton is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {β : Type.{u2}} (r : α -> β -> Prop) {s : Finset.{u1} α} {t : Finset.{u2} β}, (forall (a : α), (Membership.Mem.{u1, u1} α (Finset.{u1} α) (Finset.hasMem.{u1} α) a s) -> (Exists.{succ u2} β (fun (b : β) => And (Membership.Mem.{u2, u2} β (Finset.{u2} β) (Finset.hasMem.{u2} β) b t) (r a b)))) -> (forall (b : β), (Membership.Mem.{u2, u2} β (Finset.{u2} β) (Finset.hasMem.{u2} β) b t) -> (Set.Subsingleton.{u1} α (Sep.sep.{u1, u1} α (Set.{u1} α) (Set.hasSep.{u1} α) (fun (a : α) => r a b) ((fun (a : Type.{u1}) (b : Type.{u1}) [self : HasLiftT.{succ u1, succ u1} a b] => self.0) (Finset.{u1} α) (Set.{u1} α) (HasLiftT.mk.{succ u1, succ u1} (Finset.{u1} α) (Set.{u1} α) (CoeTCₓ.coe.{succ u1, succ u1} (Finset.{u1} α) (Set.{u1} α) (Finset.Set.hasCoeT.{u1} α))) s)))) -> (LE.le.{0} Nat Nat.hasLe (Finset.card.{u1} α s) (Finset.card.{u2} β t))
but is expected to have type
forall {α : Type.{u2}} {β : Type.{u1}} (r : α -> β -> Prop) {s : Finset.{u2} α} {t : Finset.{u1} β}, (forall (a : α), (Membership.mem.{u2, u2} α (Finset.{u2} α) (Finset.instMembershipFinset.{u2} α) a s) -> (Exists.{succ u1} β (fun (b : β) => And (Membership.mem.{u1, u1} β (Finset.{u1} β) (Finset.instMembershipFinset.{u1} β) b t) (r a b)))) -> (forall (b : β), (Membership.mem.{u1, u1} β (Finset.{u1} β) (Finset.instMembershipFinset.{u1} β) b t) -> (Set.Subsingleton.{u2} α (setOf.{u2} α (fun (a : α) => And (Membership.mem.{u2, u2} α (Finset.{u2} α) (Finset.instMembershipFinset.{u2} α) a s) (r a b))))) -> (LE.le.{0} Nat instLENat (Finset.card.{u2} α s) (Finset.card.{u1} β t))
Case conversion may be inaccurate. Consider using '#align finset.card_le_card_of_forall_subsingleton Finset.card_le_card_of_forall_subsingletonₓ'. -/
theorem card_le_card_of_forall_subsingleton (hs : ∀ a ∈ s, ∃ b, b ∈ t ∧ r a b)
(ht : ∀ b ∈ t, ({ a ∈ s | r a b } : Set α).Subsingleton) : s.card ≤ t.card := by
classical simpa using
card_mul_le_card_mul _
(fun a h =>
card_pos.2 <|
(by
rw [← coe_nonempty, coe_bipartite_above]
exact hs _ h : (t.bipartite_above r a).Nonempty))
fun b h =>
card_le_one.2 <| by
simp_rw [mem_bipartite_below]
exact ht _ h
#align finset.card_le_card_of_forall_subsingleton Finset.card_le_card_of_forall_subsingleton
#print Finset.card_le_card_of_forall_subsingleton' /-
theorem card_le_card_of_forall_subsingleton' (ht : ∀ b ∈ t, ∃ a, a ∈ s ∧ r a b)
(hs : ∀ a ∈ s, ({ b ∈ t | r a b } : Set β).Subsingleton) : t.card ≤ s.card :=
card_le_card_of_forall_subsingleton (swap r) ht hs
#align finset.card_le_card_of_forall_subsingleton' Finset.card_le_card_of_forall_subsingleton'
-/
end Bipartite
end Finset
open Finset
namespace Fintype
variable [Fintype α] [Fintype β] {r : α → β → Prop}
/- warning: fintype.card_le_card_of_left_total_unique -> Fintype.card_le_card_of_leftTotal_unique is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {β : Type.{u2}} [_inst_1 : Fintype.{u1} α] [_inst_2 : Fintype.{u2} β] {r : α -> β -> Prop}, (Relator.LeftTotal.{u1, u2} α β r) -> (Relator.LeftUnique.{u1, u2} α β r) -> (LE.le.{0} Nat Nat.hasLe (Fintype.card.{u1} α _inst_1) (Fintype.card.{u2} β _inst_2))
but is expected to have type
forall {α : Type.{u2}} {β : Type.{u1}} [_inst_1 : Fintype.{u2} α] [_inst_2 : Fintype.{u1} β] {r : α -> β -> Prop}, (Relator.LeftTotal.{u2, u1} α β r) -> (Relator.LeftUnique.{u2, u1} α β r) -> (LE.le.{0} Nat instLENat (Fintype.card.{u2} α _inst_1) (Fintype.card.{u1} β _inst_2))
Case conversion may be inaccurate. Consider using '#align fintype.card_le_card_of_left_total_unique Fintype.card_le_card_of_leftTotal_uniqueₓ'. -/
theorem card_le_card_of_leftTotal_unique (h₁ : LeftTotal r) (h₂ : LeftUnique r) :
Fintype.card α ≤ Fintype.card β :=
card_le_card_of_forall_subsingleton r (by simpa using h₁) fun b _ a₁ ha₁ a₂ ha₂ => h₂ ha₁.2 ha₂.2
#align fintype.card_le_card_of_left_total_unique Fintype.card_le_card_of_leftTotal_unique
/- warning: fintype.card_le_card_of_right_total_unique -> Fintype.card_le_card_of_rightTotal_unique is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} {β : Type.{u2}} [_inst_1 : Fintype.{u1} α] [_inst_2 : Fintype.{u2} β] {r : α -> β -> Prop}, (Relator.RightTotal.{u1, u2} α β r) -> (Relator.RightUnique.{u1, u2} α β r) -> (LE.le.{0} Nat Nat.hasLe (Fintype.card.{u2} β _inst_2) (Fintype.card.{u1} α _inst_1))
but is expected to have type
forall {α : Type.{u2}} {β : Type.{u1}} [_inst_1 : Fintype.{u2} α] [_inst_2 : Fintype.{u1} β] {r : α -> β -> Prop}, (Relator.RightTotal.{u2, u1} α β r) -> (Relator.RightUnique.{u2, u1} α β r) -> (LE.le.{0} Nat instLENat (Fintype.card.{u1} β _inst_2) (Fintype.card.{u2} α _inst_1))
Case conversion may be inaccurate. Consider using '#align fintype.card_le_card_of_right_total_unique Fintype.card_le_card_of_rightTotal_uniqueₓ'. -/
theorem card_le_card_of_rightTotal_unique (h₁ : RightTotal r) (h₂ : RightUnique r) :
Fintype.card β ≤ Fintype.card α :=
card_le_card_of_forall_subsingleton' r (by simpa using h₁) fun b _ a₁ ha₁ a₂ ha₂ => h₂ ha₁.2 ha₂.2
#align fintype.card_le_card_of_right_total_unique Fintype.card_le_card_of_rightTotal_unique
end Fintype
|
{"author": "leanprover-community", "repo": "mathlib3port", "sha": "62505aa236c58c8559783b16d33e30df3daa54f4", "save_path": "github-repos/lean/leanprover-community-mathlib3port", "path": "github-repos/lean/leanprover-community-mathlib3port/mathlib3port-62505aa236c58c8559783b16d33e30df3daa54f4/Mathbin/Combinatorics/DoubleCounting.lean"}
|
// Copyright Oliver Kowalke 2013.
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_FIBERS_H
#define BOOST_FIBERS_H
#include <boost/fiber/algo/algorithm.hpp>
#include <boost/fiber/algo/round_robin.hpp>
#include <boost/fiber/algo/shared_work.hpp>
#include <boost/fiber/algo/work_stealing.hpp>
#include <boost/fiber/algo/numa/work_stealing.hpp>
#include <boost/fiber/barrier.hpp>
#include <boost/fiber/buffered_channel.hpp>
#include <boost/fiber/channel_op_status.hpp>
#include <boost/fiber/condition_variable.hpp>
#include <boost/fiber/context.hpp>
#include <boost/fiber/exceptions.hpp>
#include <boost/fiber/fiber.hpp>
#include <boost/fiber/fixedsize_stack.hpp>
#include <boost/fiber/fss.hpp>
#include <boost/fiber/future.hpp>
#include <boost/fiber/numa/pin_thread.hpp>
#include <boost/fiber/numa/topology.hpp>
#include <boost/fiber/mutex.hpp>
#include <boost/fiber/operations.hpp>
#include <boost/fiber/policy.hpp>
#include <boost/fiber/pooled_fixedsize_stack.hpp>
#include <boost/fiber/properties.hpp>
#include <boost/fiber/protected_fixedsize_stack.hpp>
#include <boost/fiber/recursive_mutex.hpp>
#include <boost/fiber/recursive_timed_mutex.hpp>
#include <boost/fiber/scheduler.hpp>
#include <boost/fiber/segmented_stack.hpp>
#include <boost/fiber/timed_mutex.hpp>
#include <boost/fiber/type.hpp>
#include <boost/fiber/unbuffered_channel.hpp>
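// Hedged usage sketch (illustrative only, not part of this header): the
// umbrella include above is all that is needed to spawn and join a fiber.
//
//   #include <boost/fiber/all.hpp>
//   int main() {
//       boost::fibers::fiber f([]{ /* runs cooperatively on this thread */ });
//       f.join();
//   }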
#endif // BOOST_FIBERS_H
|
{"hexsha": "b1919dbbf0252cbdd4c865caa29f12affd601da9", "size": 1557, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "REDSI_1160929_1161573/boost_1_67_0/boost/fiber/all.hpp", "max_stars_repo_name": "Wultyc/ISEP_1718_2A2S_REDSI_TrabalhoGrupo", "max_stars_repo_head_hexsha": "eb0f7ef64e188fe871f47c2ef9cdef36d8a66bc8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 16.0, "max_stars_repo_stars_event_min_datetime": "2015-04-27T00:12:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-05T01:52:56.000Z", "max_issues_repo_path": "REDSI_1160929_1161573/boost_1_67_0/boost/fiber/all.hpp", "max_issues_repo_name": "Wultyc/ISEP_1718_2A2S_REDSI_TrabalhoGrupo", "max_issues_repo_head_hexsha": "eb0f7ef64e188fe871f47c2ef9cdef36d8a66bc8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1.0, "max_issues_repo_issues_event_min_datetime": "2021-12-11T00:36:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-11T15:39:01.000Z", "max_forks_repo_path": "REDSI_1160929_1161573/boost_1_67_0/boost/fiber/all.hpp", "max_forks_repo_name": "Wultyc/ISEP_1718_2A2S_REDSI_TrabalhoGrupo", "max_forks_repo_head_hexsha": "eb0f7ef64e188fe871f47c2ef9cdef36d8a66bc8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7.0, "max_forks_repo_forks_event_min_datetime": "2015-02-28T01:38:22.000Z", "max_forks_repo_forks_event_max_datetime": "2019-07-13T13:36:36.000Z", "avg_line_length": 37.0714285714, "max_line_length": 62, "alphanum_fraction": 0.7610789981, "num_tokens": 392}
|
# coding: utf-8
# In[ ]:
# Copyright (c) 2017 Andrew Glassner
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Deep Learning From Basics to Practice
# by Andrew Glassner, https://dlbasics.com, http://glassner.com
# Python utilities for saving and loading files, mostly images and Keras model weights
#
# -----------------------------------------------------------------------------
#
import os
import matplotlib.pyplot as plt
import numpy as np
import h5py
from keras.models import load_model
class File_Helper:
"""
These routines let us conveniently save and load input data, such
as text and image files, as well as save and load Keras model files
and weight files. When we save a file, the corresponding directory
is created if necessary.
When we make the object, we can optionally set the one argument really_save_files
to True or False, depending on whether or not we want save_xx() calls to really
write files. It's time-saving to set this to False when debugging because writing
files can take a while. The default value is True.
Here's a typical way to import this package from a file in a folder two levels down
(adapted from https://stackoverflow.com/questions/714063/importing-modules-from-parent-folder)
# find the absolute path to the parent folder and add that to Python's search list
import os, sys, inspect
current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
sys.path.insert(0, os.path.dirname(current_dir)) # path to grandparent dir
# Now that we can find the folder, import the package and instantiate a File_Helper object
from Python_Utilities import File_Helper
file_helper = File_Helper(True)
"""
def __init__(self, really_save_files=True):
self.really_save_files = really_save_files
self.saved_output_dir = 'saved_output'
self.input_data_dir = '../data'
self.saved_models_dir = 'saved_models'
self.saved_weights_dir = 'saved_weights'
def get_input_file_path(self, filename):
"""Get the local path relative to the calling file's location to the input file"""
return self.input_data_dir + '/' + filename
    def check_for_directory(self, directory, create_if_needed=True):
        """See if the directory exists, optionally creating it. Returns
        whether the directory already existed."""
        path_exists = os.path.exists(directory)
        if path_exists:
            if not os.path.isdir(directory):
                raise Exception('Found '+directory+' but it is a file, not a directory')
            return True
        if create_if_needed:
            os.makedirs(directory)
        return path_exists
def save_figure(self, filename):
"""Save the figure. Call this just before plt.show()."""
        if self.really_save_files and (filename is not None):
self.check_for_directory(self.saved_output_dir)
plt.savefig(self.saved_output_dir+'/'+filename+'.png', dpi=300, bbox_inches='tight')
def load_model_weights(self, model, weights_filename):
"""If the weights file exists, load from it and return True, else return False."""
fullpath = self.saved_weights_dir+'/'+weights_filename+'.h5'
if os.path.exists(fullpath):
if os.path.isfile(fullpath):
model.load_weights(fullpath)
return True
return False
def save_model_weights(self, model, weights_filename):
"""Save the weights file in the saved weights directory."""
        if self.really_save_files and (weights_filename is not None):
self.check_for_directory(self.saved_weights_dir)
fullpath = self.saved_weights_dir+'/'+weights_filename+'.h5'
model.save_weights(fullpath)
def load_model(self, model_filename):
"""If the model file exists, load from it and return the model, else return None."""
fullpath = self.saved_models_dir+'/'+model_filename+'.h5'
if os.path.exists(fullpath):
if os.path.isfile(fullpath):
model = load_model(fullpath)
return model
return None
def save_model(self, model, model_filename):
"""Save the model file in the saved models directory."""
        if self.really_save_files and (model_filename is not None):
self.check_for_directory(self.saved_models_dir)
fullpath = self.saved_models_dir+'/'+model_filename+'.h5'
model.save(fullpath)
def get_saved_output_dir(self):
"""Get the name of the directory where we save matplotlib output PNG files."""
return self.saved_output_dir
def get_input_data_dir(self):
"""Get the name of the directory where we look for input files."""
return self.input_data_dir
def get_saved_weights_dir(self):
"""Get the name of the directory where we read and write Keras weight files."""
return self.saved_weights_dir
def get_saved_models_dir(self):
"""Get the name of the directory where we read and write Keras model files."""
return self.saved_models_dir
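# Hedged demo (assumes keras and matplotlib are installed, since the module
# imports them at the top): write a trivial figure through the helper.
if __name__ == '__main__':
    helper = File_Helper(really_save_files=True)
    plt.plot([0, 1], [0, 1])
    helper.save_figure('demo_line')   # written to saved_output/demo_line.png
    plt.close()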
|
{"hexsha": "bc365c6474f60802f021ca3f3e24a21bd7c3bb78", "size": 5646, "ext": "py", "lang": "Python", "max_stars_repo_path": "docs/lectures/lecture18/notes/DLBasics_Utilities.py", "max_stars_repo_name": "sytseng/2019-CS109B", "max_stars_repo_head_hexsha": "3ecde086bcc77dbac2e930fa3bdc93f94a1f2b6d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-11T17:47:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-11T17:47:29.000Z", "max_issues_repo_path": "docs/lectures/lecture18/notes/DLBasics_Utilities.py", "max_issues_repo_name": "sytseng/2019-CS109B", "max_issues_repo_head_hexsha": "3ecde086bcc77dbac2e930fa3bdc93f94a1f2b6d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/lectures/lecture18/notes/DLBasics_Utilities.py", "max_forks_repo_name": "sytseng/2019-CS109B", "max_forks_repo_head_hexsha": "3ecde086bcc77dbac2e930fa3bdc93f94a1f2b6d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.4307692308, "max_line_length": 462, "alphanum_fraction": 0.7497343252, "include": true, "reason": "import numpy", "num_tokens": 1247}
|
# -*- coding: utf-8 -*-
from flask import Flask, render_template, request
import torch
from torch import nn
from torch.utils.data import Dataset
import gluonnlp as nlp
import numpy as np
import kss
from googletrans import Translator
from itertools import combinations
from krwordrank.word import summarize_with_keywords
from konlpy.tag import Mecab
mecab=Mecab(dicpath=r"/mecab/mecab-ko-dic")
from kobert.utils import get_tokenizer
from kobert.pytorch_kobert import get_pytorch_kobert_model
app = Flask(__name__)
with open('korean_stopwords.txt', 'r', encoding='UTF8') as f:
stopwords = f.read().split()
def extract_keyword(text):  # extract only nouns (NNP/NNG) from the text, then run the keyword model
keyword_list=[]
nouns=[w[0] for w in mecab.pos(text) if w[1]=="NNP" or w[1]=="NNG"]
noun_combine = ' '.join(nouns)
    # hyperparameter tuning: use min_count=1 for short texts, min_count=3 for long ones
try:
keywords = summarize_with_keywords([noun_combine], min_count=3, max_length=10,
beta=0.85, max_iter=10, stopwords=stopwords, verbose=True)
except:
# print("<except 실행>")
keywords = summarize_with_keywords([noun_combine], min_count=1, max_length=10,
beta=0.85, max_iter=10, stopwords=stopwords, verbose=True)
for word, r in sorted(keywords.items(), key=lambda x:x[1], reverse=True)[:5]:
keyword_list.append(word)
return keyword_list
bertmodel, vocab = get_pytorch_kobert_model()
tokenizer = get_tokenizer()
tok = nlp.data.BERTSPTokenizer(tokenizer, vocab, lower=False)
# use the default BERT tokenizer
class BERTDataset(Dataset):
def __init__(self, dataset, sent_idx, label_idx, bert_tokenizer, max_len,
pad, pair):
transform = nlp.data.BERTSentenceTransform(
bert_tokenizer, max_seq_length=max_len, pad=pad, pair=pair)
self.sentences = [transform([i[sent_idx]]) for i in dataset]
self.labels = [np.int32(i[label_idx]) for i in dataset]
def __getitem__(self, i):
return (self.sentences[i] + (self.labels[i], ))
def __len__(self):
return (len(self.labels))
class BERTClassifier(nn.Module):
def __init__(self,
bert,
hidden_size = 768,
                 num_classes = 5,  # multiclass softmax head; would be 2 for binary
dr_rate=None,
params=None):
super(BERTClassifier, self).__init__()
self.bert = bert
self.dr_rate = dr_rate
self.classifier = nn.Linear(hidden_size , num_classes)
if dr_rate:
self.dropout = nn.Dropout(p=dr_rate)
def gen_attention_mask(self, token_ids, valid_length):
attention_mask = torch.zeros_like(token_ids)
for i, v in enumerate(valid_length):
attention_mask[i][:v] = 1
return attention_mask.float()
def forward(self, token_ids, valid_length, segment_ids):
attention_mask = self.gen_attention_mask(token_ids, valid_length)
_, pooler = self.bert(input_ids = token_ids, token_type_ids = segment_ids.long(), attention_mask = attention_mask.float().to(token_ids.device))
if self.dr_rate:
out = self.dropout(pooler)
return self.classifier(out)
if torch.cuda.is_available():
map_location=lambda storage, loc: storage.cuda()
else:
map_location='cpu'
@app.route("/")
def index():
return render_template('main.html')
@app.route('/findemo',methods = ['GET', 'POST'])
def getFindemo():
if request.method =='GET':
return render_template('findemo.html')
elif request.method == 'POST':
input_text = request.form['input_text']
text = input_text
output_text = input_text
text=kss.split_sentences(text)
keywords = extract_keyword(input_text)
# translate the keyword for image search
keywords_eng = []
translator = Translator()
for word in keywords:
eng_word = translator.translate(word, dest = 'en')
keywords_eng.append(eng_word.text)
        # keyword pair combinations (materialized so the template can iterate)
        comb = list(combinations(keywords_eng, 2))
test_sen=[]
text_lst=[]
for i in range(len(text)):
text_lst=[text[i]]
text_lst.append('1')
test_sen.append(text_lst)
# Setting parameters
        max_len = 128  # tokens beyond this length are not seen by BERT
batch_size = 32
data_test_sen = BERTDataset(test_sen, 0, 1, tok, max_len, True, False)
test_sen_dataloader = torch.utils.data.DataLoader(data_test_sen, batch_size=batch_size, num_workers=0)
model = BERTClassifier(bertmodel, dr_rate=0.5)
model.load_state_dict(torch.load("full_model.pt", map_location=map_location))
model.eval()
for batch_id, (token_ids, valid_length, segment_ids, label) in enumerate(test_sen_dataloader):
token_ids = token_ids.long()
segment_ids = segment_ids.long()
valid_length= valid_length
label = label.long()
out = model(token_ids, valid_length, segment_ids)
print(out)
        # label indices: {0: '공포' (fear), 1: '기쁨' (joy), 2: '분노' (anger), 3: '사랑' (love), 4: '슬픔' (sadness)}
score=[0,0,0,0,0]
out_lst=out.tolist()
for i in range(len(out_lst)):
score[out_lst[i].index(max(out_lst[i]))]+=1
print(score)
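        # e.g. three sentences whose per-row argmaxes are 4, 4, 1 give
        # score == [0, 1, 0, 0, 2], so the majority emotion is index 4.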
        # pick the majority label: {0: '공포', 1: '기쁨', 2: '분노', 3: '사랑', 4: '슬픔'}
        emotions = ['공포', '기쁨', '분노', '사랑', '슬픔']
        result = emotions[score.index(max(score))]
return render_template('result.html', Result = result, Output_text = output_text, keywords = keywords_eng, score = score, combi = comb)
@app.route('/aboutus')
def getAboutus():
return render_template('about_us.html')
@app.route('/result',methods = ['GET', 'POST'])
def getResult():
return render_template('result.html')
if __name__ == '__main__':
app.run(debug = True)
|
{"hexsha": "ce4499dbb538ddffbdcf9b6542bc6d1814445646", "size": 6225, "ext": "py", "lang": "Python", "max_stars_repo_path": "web/server.py", "max_stars_repo_name": "hanna56/Teddy", "max_stars_repo_head_hexsha": "c89a2a81ca4cb4c6cc2e41e70b49028d81d5e391", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "web/server.py", "max_issues_repo_name": "hanna56/Teddy", "max_issues_repo_head_hexsha": "c89a2a81ca4cb4c6cc2e41e70b49028d81d5e391", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "web/server.py", "max_forks_repo_name": "hanna56/Teddy", "max_forks_repo_head_hexsha": "c89a2a81ca4cb4c6cc2e41e70b49028d81d5e391", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-07-23T02:42:12.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-23T02:42:12.000Z", "avg_line_length": 32.087628866, "max_line_length": 151, "alphanum_fraction": 0.6277911647, "include": true, "reason": "import numpy", "num_tokens": 1629}
|
%-----------------------------------------------------------------------------------------------
\section*{Conflict of Interest}
The authors declare that there is no conflict of interest in this work.
%-----------------------------------------------------------------------------------------------
|
{"hexsha": "14a9eafb333534d8a7a940b431e4b301a7bdf1f9", "size": 305, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "02-01-conflicts.tex", "max_stars_repo_name": "cnaak/man-CarnotPrinciple", "max_stars_repo_head_hexsha": "b258fa7508d8049d2c1a958ffab58ff9b17c78ff", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "02-01-conflicts.tex", "max_issues_repo_name": "cnaak/man-CarnotPrinciple", "max_issues_repo_head_hexsha": "b258fa7508d8049d2c1a958ffab58ff9b17c78ff", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "02-01-conflicts.tex", "max_forks_repo_name": "cnaak/man-CarnotPrinciple", "max_forks_repo_head_hexsha": "b258fa7508d8049d2c1a958ffab58ff9b17c78ff", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.125, "max_line_length": 96, "alphanum_fraction": 0.2721311475, "num_tokens": 31}
|
"""
Sampling from a poisson distribution
"""
abstract type AbstractPoissonDistribution <: AbstractSampleDistribution end
struct PoissonSampleDistribution{T} <: AbstractPoissonDistribution
lambda::T
function PoissonSampleDistribution(lambda::T) where {T <: AbstractFloat}
return new{T}(lambda)
end
end
"""
Sampling from Poisson Distribution
Reference:
Poisson Random Variate Generation, Appl. Statis. (1991),
40, No. 1, pp 143 - 158.
# Example:
sample(PoissonSampleDistribution(1.0), (20,), Val{1}())
"""
function sample(distrib::PoissonSampleDistribution{T}, shape::Tuple{Vararg{Int64}}, version::Val{1}) where {T <: AbstractFloat}
n = prod(shape)
p = exp(-distrib.lambda)
ret = zeros(T, shape)
for i in 1:n
s = T(1); x = T(0)
while true
u = rand(T, 1)[1]
s *= u
if s < p
break
end
x += 1
end
ret[i] = x
end
return ret
end
function sample(distrib::PoissonSampleDistribution{T}, shape::Int64, version::Val{1}) where {T <: AbstractFloat}
    return sample(distrib, (shape,), version)
end
"""
Function to sample from poisson from a lambda array
"""
function sample(::Type{<: AbstractPoissonDistribution}, lambda::Array{T}, version::Val{1}) where {T <: AbstractFloat}
shape = size(lambda)
n = prod(shape)
ret = zeros(T, shape)
for i in 1:n
p = exp(-lambda[i])
s = T(1); x = T(0)
while true
u = rand(T, 1)[1]
s *= u
if s < p
break
end
x += 1
end
ret[i] = x
end
return ret
end
function sample(distrib::PoissonSampleDistribution{T}, shape::Tuple{Vararg{Int64}}, version::Val{2}) where {T <: AbstractFloat}
n = prod(shape)
ret = zeros(T, shape)
for i in 1:n
p = exp(-distrib.lambda)
F = p; x = T(0)
u = rand(T, 1)[1]
while true
if u < F
break
end
x += 1
p *= (distrib.lambda/x)
F += p
end
ret[i] = x
end
return ret
end
function sample(distrib::PoissonSampleDistribution{T}, shape::Int64, version::Val{2}) where {T <: AbstractFloat}
    return sample(distrib, (shape,), version)
end
"""
Function to sample from poisson from a lambda array
"""
function sample(::Type{<: AbstractPoissonDistribution}, lambda::Array{T}, version::Val{2}) where {T <: AbstractFloat}
shape = size(lambda)
n = prod(shape)
ret = zeros(T, shape)
for i in 1:n
p = exp(-lambda[i])
F = p; x = T(0)
u = rand(T, 1)[1]
while true
if u < F
break
end
x += 1
p *= (lambda[i]/x)
F += p
end
ret[i] = x
end
return ret
end
"""
# Example simulate poisson distribution
X, eta = simulateData(Float64, 10, 1000);
y = linkinv(LogLink(), eta);
y = sample(AbstractPoissonDistribution, y, Val{2}());
"""
"""
Function to simulate data
# Example
using Random: seed!
seed!(0);
X, y = simulateData(Float64, PoissonDistribution(), LogLink(), 10, 1000)
X, y = simulateData(Float64, BinomialDistribution(), LogitLink(), 10, 1000)
X, y = simulateData(Float64, GammaDistribution(), LogLink(), 10, 1000)
X, y = simulateData(Float64, GaussianDistribution(), IdentityLink(), 10, 1000)
"""
function simulateData(::Type{T}, distrib::AbstractDistribution,
link::AbstractLink, p::Int64, n::Int64) where {T <: AbstractFloat}
    X, eta = simulateData(T, p, n)
y = linkinv(link, eta)
if (typeof(distrib) <: GammaDistribution)
y .+= 10
end
if typeof(distrib) <: PoissonDistribution
y = sample(AbstractPoissonDistribution, y, Val{2}())
end
if typeof(distrib) <: BinomialDistribution
y = map((x, u) -> T(1)*(x > u), y, rand(T, n))
end
y = reshape(y, (size(y)[1], 1))
return X, y
end
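# Hedged sanity check (assumes this file's `sample` methods are loaded):
#   draws = sample(PoissonSampleDistribution(4.0), (100_000,), Val{2}())
#   sum(draws) / length(draws)   # ≈ 4.0, the Poisson mean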
|
{"hexsha": "69b7e843109beea3ddd73b50e96ca4c27f3029d8", "size": 3781, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "glmSolverjl/src/simulate.jl", "max_stars_repo_name": "dataPulverizer/glmSolver", "max_stars_repo_head_hexsha": "d82623caac1e14d29ee09fbafa7efbbd5d6ee0bf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-08-17T15:34:44.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-17T15:34:44.000Z", "max_issues_repo_path": "glmSolverjl/src/simulate.jl", "max_issues_repo_name": "dataPulverizer/glmSolver", "max_issues_repo_head_hexsha": "d82623caac1e14d29ee09fbafa7efbbd5d6ee0bf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "glmSolverjl/src/simulate.jl", "max_forks_repo_name": "dataPulverizer/glmSolver", "max_forks_repo_head_hexsha": "d82623caac1e14d29ee09fbafa7efbbd5d6ee0bf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.4829545455, "max_line_length": 127, "alphanum_fraction": 0.6088336419, "num_tokens": 1189}
|
# -*- coding: utf-8 -*-
"""CNN.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1K2l75LJfglEOZG4M09pzFUqdUfemlWQ3
"""
# Colab shell step to unpack the dataset (kept as a comment so the file parses
# as plain Python; run it in the notebook before training):
# !unzip -qq /content/drive/MyDrive/Data/joined.zip
import numpy as np
import pandas as pd
import keras
from keras.layers import *
from keras.models import *
from keras.preprocessing import image
from keras.utils.vis_utils import plot_model
from keras.callbacks import EarlyStopping
import os
import gc
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import tensorflow as tf
from tqdm import tqdm
import cv2
from keras.layers import Activation
from keras.utils.generic_utils import get_custom_objects
from tensorflow.keras import layers
image_size = (180, 180)
batch_size = 32
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
"/content/joined/",
validation_split=0.2,
subset="training",
seed=42,
image_size=image_size,
batch_size=batch_size,
label_mode="binary",
labels='inferred'
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
"/content/joined/",
validation_split=0.2,
subset="validation",
seed=42,
image_size=image_size,
batch_size=batch_size,
label_mode="binary",
labels='inferred'
)
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal"),
layers.experimental.preprocessing.RandomRotation(0.1),
]
)
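# Optional sketch to eyeball the augmentation pipeline (assumes train_ds from
# above; left commented out so the script also runs headless):
# plt.figure(figsize=(10, 10))
# for images, _ in train_ds.take(1):
#     for i in range(9):
#         augmented = data_augmentation(images)
#         plt.subplot(3, 3, i + 1)
#         plt.imshow(augmented[0].numpy().astype("uint8"))
#         plt.axis("off")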
train_ds = train_ds.prefetch(buffer_size=32)
val_ds = val_ds.prefetch(buffer_size=32)
def make_model(input_shape, num_classes):
inputs = keras.Input(shape=input_shape)
x = data_augmentation(inputs)
x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(x)
x = layers.Conv2D(32, 3, strides=2, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
previous_block_activation = x
for size in [128, 256, 512, 728]:
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
residual = layers.Conv2D(size, 1, strides=2, padding="same")(
previous_block_activation
)
x = layers.add([x, residual])
previous_block_activation = x
x = layers.SeparableConv2D(1024, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.GlobalAveragePooling2D()(x)
activation = "sigmoid"
units = num_classes
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation=activation)(x)
return keras.Model(inputs, outputs)
model = make_model(input_shape=image_size + (3,), num_classes=1)
model.summary()
epochs = 50
model.compile(
optimizer=keras.optimizers.Adam(1e-3),
loss="binary_crossentropy",
metrics=["AUC"],
)
# main training block
model.fit(
train_ds, epochs=epochs, validation_data=val_ds,
)
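# Inference sketch (hypothetical image path; assumes the model trained above):
# img = keras.preprocessing.image.load_img("sample.png", target_size=image_size)
# arr = keras.preprocessing.image.img_to_array(img)
# arr = tf.expand_dims(arr, 0)  # make a batch of one
# score = float(model.predict(arr)[0][0])  # sigmoid output in [0, 1]
# print("positive-class probability:", score)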
|
{"hexsha": "b2a4be031d7427c9cb32a69382aa070b6c210a0b", "size": 3316, "ext": "py", "lang": "Python", "max_stars_repo_path": "Models/cnn.py", "max_stars_repo_name": "GunH-colab/CovidOps", "max_stars_repo_head_hexsha": "6a64a85d60f158fbe15a7f116a972ce419366996", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-15T20:05:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-15T20:05:00.000Z", "max_issues_repo_path": "Models/cnn.py", "max_issues_repo_name": "GunH-colab/CovidOps", "max_issues_repo_head_hexsha": "6a64a85d60f158fbe15a7f116a972ce419366996", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Models/cnn.py", "max_forks_repo_name": "GunH-colab/CovidOps", "max_forks_repo_head_hexsha": "6a64a85d60f158fbe15a7f116a972ce419366996", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.3174603175, "max_line_length": 77, "alphanum_fraction": 0.6957177322, "include": true, "reason": "import numpy", "num_tokens": 859}
|
# -*- coding: utf-8 -*-
"""
MWT collision graph manipulation general utilities
"""
from __future__ import (
absolute_import, division, print_function, unicode_literals)
import six
from six.moves import (zip, filter, map, reduce, input, range)
# standard library
import itertools
import collections
import types
import logging
# third party
import multiworm
import networkx as nx
import pandas as pd
# package specific
__all__ = [
'flat_node_list',
'component_size_summary',
#'suspected_collisions',
'consolidate_node_data',
'group_composite_nodes',
]
L = logging.getLogger(__name__)
def _check_assumptions(graph):
for node in graph:
successors = graph.successors(node)
predecessors = graph.predecessors(node)
if len(successors) == 2:
for successor, successor_indeg in graph.in_degree_iter(successors):
if successor_indeg != 1:
print(node, '-->', successors)
raise AssertionError("Fission children have unexpected number of parents (not 1)")
elif len(successors) == 1:
successor = successors[0]
successor_indeg = graph.in_degree(successor)
if successor_indeg != 2:
print(node, '-->', successors, 'indeg={}'.format(successor_indeg))
raise AssertionError("Fusion child has unexpected number of parents (not 2)")
if len(predecessors) == 2:
for predecessor, predecessor_indeg in graph.out_degree_iter(predecessors):
assert predecessor_indeg == 1, "Fission parents have unexpected number of children (not 1)"
elif len(predecessors) == 1:
assert graph.out_degree(predecessors[0]) == 2, "Fusion parent has unexpected number of children (not 2)"
def flatten(l):
for el in l:
if isinstance(el, collections.Iterable) and not isinstance(el, six.string_types):
for sub in flatten(el):
yield sub
else:
yield el
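# Usage sketch: strings are treated as atoms rather than iterables, e.g.
#   list(flatten([1, [2, (3, 'ab')], 'cd'])) -> [1, 2, 3, 'ab', 'cd']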
def frame_filter(threshold):
def conditional(graph, nodes):
for node in nodes:
lifespan = graph.lifespan_f(node)
if lifespan > threshold:
L.debug('node {} was older than threshold ({} > {})'.format(node, lifespan, threshold))
return False
return True
return conditional
def flat_node_list(graph):
"""
returns list of all non-compund nodes in a graph, inculding
all the nodes that are inside compound nodes.
"""
node_ids = []
for node in graph:
node_data = graph.node[node]
#print(node_data)
if 'components' in node_data:
internal_nodes = list(node_data['components'])
internal_nodes.append(node)
else:
internal_nodes = [node]
node_ids.extend(internal_nodes)
return list(set(node_ids))
def component_size_summary(graph):
Gcc = nx.connected_component_subgraphs(graph.to_undirected())
print("Component sizes and example nodes in descending order")
for n, G in enumerate(sorted(list(Gcc), key=len, reverse=True)[:10], start=1):
print("{:>2d}. {:>5d} nodes : {}{}".format(
n, len(G),
', '.join(str(node) for node, _ in zip(G.nodes_iter(), range(5))),
'...' if len(G) > 5 else ''))
def is_isolated(graph, node):
""" returns True if node does not have any parents or children
"""
return graph.degree(node) == 0
def consolidate_node_data(graph, experiment, node):
"""
Returns a pandas DataFrame with all blob data for a node.
Accepts both compound and single nodes.
For compound nodes, data includes all components.
params
-----
graph: (waldo.network.Graph object)
a directed graph of node interactions
experiment: (multiworm experiment object)
the experiment from which data can be extracted.
node: (int or tuple)
the id (from graph) for a node.
returns
-----
all_data: (pandas DataFrame)
index is 'frame'
columns are:
'area', 'centroid'
'contour_encode_len', 'contour_encoded', 'contour_start',
'midline', 'size', 'std_ortho', 'std_vector', 'time'
"""
data = []
for subnode in graph.components(node):
try:
blob = experiment[subnode]
df = blob.df
except (KeyError, multiworm.MWTDataError) as e:
print('{e} reading blob {i}'.format(e=e, i=subnode))
continue
if blob.empty:
continue
df.set_index('frame', inplace=True)
df['blob'] = subnode
data.append(df)
if data:
all_data = pd.concat(data)
all_data.sort_index(inplace=True)
return all_data
# should find a better spot for this
def group_composite_nodes(experiment, graph, dataframe):
cgen = ((node, graph.where_is(node)) for node in experiment.graph)
cgen = (x for x in cgen if x[1] in graph)
composites = pd.DataFrame(list(cgen), columns=['bid', 'composite_bid'])
# merge, only keep ids in dataframe
bounds = pd.merge(dataframe, composites, on='bid', how='left')
# clean up and group
bounds.drop('bid', axis=1, inplace=True)
bounds.rename(columns={'composite_bid': 'bid'}, inplace=True)
return bounds.groupby('bid')
|
{"hexsha": "409c62698872e36a9a170c160825a90538bcf75b", "size": 5340, "ext": "py", "lang": "Python", "max_stars_repo_path": "code/waldo/collider/simplifications/util.py", "max_stars_repo_name": "amarallab/waldo", "max_stars_repo_head_hexsha": "e38d23d9474a0bcb7a94e685545edb0115b12af4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "code/waldo/collider/simplifications/util.py", "max_issues_repo_name": "amarallab/waldo", "max_issues_repo_head_hexsha": "e38d23d9474a0bcb7a94e685545edb0115b12af4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "code/waldo/collider/simplifications/util.py", "max_forks_repo_name": "amarallab/waldo", "max_forks_repo_head_hexsha": "e38d23d9474a0bcb7a94e685545edb0115b12af4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.1677018634, "max_line_length": 116, "alphanum_fraction": 0.6269662921, "include": true, "reason": "import networkx", "num_tokens": 1215}
|
import os
import json
import datetime
import numpy as np
import glob
from easydict import EasyDict
def edict2dict(edict_obj):
dict_obj = {}
for key, vals in edict_obj.items():
if isinstance(vals, EasyDict):
dict_obj[key] = edict2dict(vals)
else:
dict_obj[key] = vals
return dict_obj
def gen_common_pathcfg(saver_cfg, is_train=False):
output_dir = saver_cfg.output_dir
saver_cfg.log_dir = os.path.join(output_dir, 'log')
saver_cfg.model_dir = os.path.join(output_dir, 'model')
saver_cfg.pred_dir = os.path.join(output_dir, 'pred')
os.makedirs(saver_cfg.log_dir, exist_ok=True)
os.makedirs(saver_cfg.model_dir, exist_ok=True)
os.makedirs(saver_cfg.pred_dir, exist_ok=True)
if is_train:
timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
saver_cfg.log_file = os.path.join(saver_cfg.log_dir, 'log-' + timestamp)
else:
saver_cfg.log_file = None
return saver_cfg
def find_best_val_models(log_dir, model_dir):
model_jsons = glob.glob(os.path.join(log_dir, 'val.epoch.*.step.*.json'))
val_names, val_scores = [], []
for json_file in model_jsons:
json_name = os.path.basename(json_file)
with open(json_file) as f:
scores = json.load(f)
val_names.append(json_name)
val_scores.append(scores)
measure_names = list(val_scores[0].keys())
model_files = {}
for measure_name in measure_names:
# for metrics: the lower the better
if 'loss' in measure_name or 'medr' in measure_name or 'meanr' in measure_name:
idx = np.argmin([scores[measure_name] for scores in val_scores])
# for metrics: the higher the better
else:
idx = np.argmax([scores[measure_name] for scores in val_scores])
json_name = val_names[idx]
model_file = os.path.join(model_dir, json_name[4:-5] + '.th')
model_files.setdefault(model_file, [])
model_files[model_file].append(measure_name)
name2file = {'-'.join(measure_names): model_file for model_file, measure_names in model_files.items()}
return name2file
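# Usage sketch (hypothetical directory layout):
#   name2file = find_best_val_models('output/log', 'output/model')
#   for measures, model_file in name2file.items():
#       print(measures, '->', model_file)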
|
{"hexsha": "7d4e562eaa2179c162bbdff292406f9d143f15dc", "size": 2121, "ext": "py", "lang": "Python", "max_stars_repo_path": "framework/run_utils.py", "max_stars_repo_name": "DeLightCMU/ElaborativeRehearsal", "max_stars_repo_head_hexsha": "0cf2a234b4716ff7a16fdbf4b18c173cc42fbcc4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 22, "max_stars_repo_stars_event_min_datetime": "2021-08-14T12:08:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T01:01:47.000Z", "max_issues_repo_path": "framework/run_utils.py", "max_issues_repo_name": "DeLightCMU/ElaborativeRehearsal", "max_issues_repo_head_hexsha": "0cf2a234b4716ff7a16fdbf4b18c173cc42fbcc4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-08-12T06:17:07.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-19T05:25:50.000Z", "max_forks_repo_path": "framework/run_utils.py", "max_forks_repo_name": "DeLightCMU/ElaborativeRehearsal", "max_forks_repo_head_hexsha": "0cf2a234b4716ff7a16fdbf4b18c173cc42fbcc4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2021-08-21T09:19:00.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-14T01:58:54.000Z", "avg_line_length": 31.1911764706, "max_line_length": 102, "alphanum_fraction": 0.7142857143, "include": true, "reason": "import numpy", "num_tokens": 556}
|
[STATEMENT]
lemma invariant_start:
"\<lbrakk>wf_state r; wf_state s\<rbrakk> \<Longrightarrow> invariant r s ([([], r, s)], [], {(post r, post s)})"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>wf_state r; wf_state s\<rbrakk> \<Longrightarrow> invariant r s ([([], r, s)], [], {(post r, post s)})
[PROOF STEP]
by (auto simp add: invariant_def bij_betw_def)
|
{"llama_tokens": 150, "file": "MSO_Regex_Equivalence_Pi_Equivalence_Checking", "length": 1}
|
import numpy as np
from matplotlib.axes import Axes
from matplotlib.patches import Ellipse
from .c_ellipsoid2 import AsymmetricEllipsoidalShell, EllipsoidalShellWithSizeDistribution
from .c_gauss_ellipsoid import I0Rgfromrho, rhofromI0Rg, F2GaussianEllipsoid
from ..core import FitFunction
class F2AsymmetricCoreShellEllipsoidWithBackgroundI0Rg(FitFunction):
name = 'Rotational core-shell ellipsoid'
arguments = [('I0', 'Intensity extrapolated to zero'),
('Rg', 'Radius of gyration'),
('a', 'Principal semi-axis of the core'),
('b', 'Equatorial semi-axis of the core'),
('ta', 'Shell thickness (along the principal axis)'),
('tb', 'Shell thickness (along the equatorial axes)'),
('bg', 'Constant background')]
unfittable_parameters = [
('ninteg','Number of points for numerical averaging', 2,100000,100),
]
description = "Scattering intensity of an asymmetric rotational core-shell ellipsoid with additional constant background"
def _get_rhos(self, I0,Rg,a,b,ta,tb):
btb=b+tb
ata=a+ta
btb2ata = btb**2*ata
b2a = b**2*a
rhocoredivrhoshell = (ata**3*btb**2+2*ata*btb**4-5*Rg**2*ata*btb**2-a**3*b**2-2*a*b**4+5*a*b**2*Rg**2)/(5*a*b**2*Rg**2-a**3*b**2-2*a*b**4)
eta_shell = 3*I0**0.5/4/np.pi/(rhocoredivrhoshell*b2a+btb2ata-b2a)
eta_core = rhocoredivrhoshell*eta_shell
return eta_core, eta_shell
def function(self, x, I0, Rg, a, b, ta, tb, C, ninteg):
eta_core, eta_shell = self._get_rhos(I0, Rg, a, b, ta, tb)
return AsymmetricEllipsoidalShell(x, eta_core, eta_shell, a, b, ta, tb) + C
def draw_representation(self, fig, x, I0, Rg, a, b, ta, tb, C, ninteg):
eta_core, eta_shell = self._get_rhos(I0, Rg, a, b, ta, tb)
fig.clear()
ax=fig.add_subplot(1,1,1)
assert isinstance(ax, Axes)
p=Ellipse((0,0),2*(b+tb),2*(a+ta),color='yellow')
ax.add_patch(p)
p=Ellipse((0,0),2*(b),2*(a), color='green')
ax.add_patch(p)
ax.vlines(0,-a-ta,a+ta,linestyle='--',color='black')
ax.autoscale_view(True, True, True)
ax.text(0.05,0.95,
'$a$: {}\n'
'$b$: {}\n'
'$t_a$: {}\n'
'$t_b$: {}\n'
'$\\rho_\\mathrm{{core}}$: {}\n'
'$\\rho_\\mathrm{{shell}}$: {}\n'
'$I_0$: {}\n'
'$R_g$: {}\n'.format(a,b,ta,tb, eta_core, eta_shell, I0, Rg),
transform=ax.transAxes,ha='left',va='top')
ax.axis('equal')
fig.canvas.draw()
class F2SymmetricCoreShellEllipsoidWithBackgroundI0RgA(F2AsymmetricCoreShellEllipsoidWithBackgroundI0Rg):
name = 'Rotational core-shell ellipsoid, symmetric shell'
arguments = [('I0', 'Intensity extrapolated to zero'),
('Rg', 'Radius of gyration'),
('a', 'Principal semi-axis of the core'),
('b', 'Equatorial semi-axis of the core'),
('t', 'Shell thickness (along both axes)'),
('bg', 'Constant background')]
description = "Scattering intensity of a symmetric rotational core-shell ellipsoid with additional constant background"
def function(self, x, I0, Rg, a, b, t, C, ninteg):
return super().function(x,I0,Rg,a,b,t,t,C,ninteg)
def draw_representation(self, fig, x, I0, Rg, a, b, t, C, ninteg):
return super().draw_representation(fig, x, I0, Rg, a, b, t, t, C, ninteg)
class F2EllipsoidalShellWithSizeDistribution(F2AsymmetricCoreShellEllipsoidWithBackgroundI0Rg):
name = 'Ellipsoidal core-shell particle with size distribution'
arguments = [('I0', 'Intensity extrapolated to zero'),
('Rg', 'Radius of gyration'),
('a', 'Rotational semi-axis of the core'),
('b/a', 'Ratio of the equatorial and the rotational semi-axes of the core'),
('sigma(b/a)','HWHM of the Gaussian distribution placed on b/a'),
('ta', 'Shell thickness along the rotational axis'),
('tb/ta', 'Ratio of the shell thicknesses along the rotational and the equatorial semi-axes'),
('sigma(tb/ta)','HWHM of the Gaussian distribution placed on tb/ta'),
('bg', 'Constant background')]
unfittable_parameters = [
('Nintorientation','Number of points for numerical averaging of the macroscopic orientation', 2,100000,100),
('Nintanisometrycore', 'Number of points for numerical averaging of the core anisometry',2,100000,100),
('Nintanisometryshell', 'Number of points for numerical averaging of the shell anisometry',2,100000,100),
]
def function(self, x, I0, Rg, a, bdiva_mean, bdiva_sigma, ta, tbdivta_mean, tbdivta_sigma, C, Nintorientation, Nintanisometrycore, Nintanisometryshell):
rhocore, rhoshell = self._get_rhos(I0, Rg, a,bdiva_mean*a, ta, tbdivta_mean*ta)
return EllipsoidalShellWithSizeDistribution(x, rhocore, rhoshell, a, bdiva_mean, bdiva_sigma, ta, tbdivta_mean, tbdivta_sigma, int(Nintorientation), int(Nintanisometrycore), int(Nintanisometryshell))+C
def draw_representation(self, fig, x, I0, Rg, a, bdiva_mean, bdiva_sigma, ta, tbdivta_mean, tbdivta_sigma, C, Nintorientation, Nintanisometrycore, Nintanisometryshell):
super().draw_representation(fig, x, I0, Rg, a, bdiva_mean*a, ta, tbdivta_mean*ta, C, Nintorientation)
class GaussianEllipsoid(FitFunction):
name = 'Gaussian shell in an ellipsoid'
arguments = [('I0', 'Intensity at q=0'),
('Rg', 'Radius of gyration'),
('a', 'Principal semi-axis of the core'),
('b', 'Equatorial semi-axis of the core'),
('sigmain', 'Core-side HWHM of the Gauss shell layer'),
('sigmaout', 'Solvent-side HWHM of the Gauss shell layer'),
('bg', 'Constant background')]
description = "Scattering intensity of a Gaussian shell on an ellipsoid of rotation"
unfittable_parameters = [
('Nintr','Number of points for numerical integration with respect to "r"', 10,100000,100),
('Nintu','Number of points for numerical integration with respect to "u"', 10,100000,100),
('NintU','Number of points for numerical integration with respect to "U"', 10,100000,100),
]
def function(self, x, I0, Rg, a, b, sigmain, sigmaout, bg, Nintr, Nintu, NintU):
rhocore, rhoshell = rhofromI0Rg(I0, Rg, a, b, sigmain, sigmaout)
return F2GaussianEllipsoid(x, a, b, rhocore, rhoshell, sigmain, sigmaout, int(Nintr), int(Nintu), int(NintU))+bg
class GaussianEllipsoidSymmetricHead(FitFunction):
name = 'Gaussian shell in an ellipsoid, symmetric head'
arguments = [('I0', 'Intensity at q=0'),
('Rg', 'Radius of gyration'),
('a', 'Principal semi-axis of the core'),
('b', 'Equatorial semi-axis of the core'),
('sigma', 'HWHM of the Gauss shell layer'),
('bg', 'Constant background')]
description = "Scattering intensity of a Gaussian shell on an ellipsoid of rotation. The head is symmetric"
unfittable_parameters = [
('Nintr','Number of points for numerical integration with respect to "r"', 10,100000,100),
('Nintu','Number of points for numerical integration with respect to "u"', 10,100000,100),
('NintU','Number of points for numerical integration with respect to "U"', 10,100000,100),
]
def function(self, x, I0, Rg, a, b, sigma, bg, Nintr, Nintu, NintU):
rhocore, rhoshell = rhofromI0Rg(I0, Rg, a, b, sigma, sigma)
return F2GaussianEllipsoid(x, a, b, rhocore, rhoshell, sigma, sigma, int(Nintr), int(Nintu), int(NintU))+bg
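# Usage sketch (hypothetical parameter values; assumes fit functions are
# evaluated through their .function() method as defined above):
#   import numpy as np
#   q = np.linspace(0.01, 0.5, 200)
#   model = GaussianEllipsoidSymmetricHead()
#   intensity = model.function(q, 1.0, 20.0, 30.0, 15.0, 3.0, 1e-3, 100, 100, 100)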
|
{"hexsha": "3d4c0046eb6ca878084f7ac829d8bcf88d830601", "size": 7843, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/saxsfittool/fitfunction/ellipsoid/ellipsoid.py", "max_stars_repo_name": "awacha/saxsfittool", "max_stars_repo_head_hexsha": "20e9a8aecf680cfc5f3b84264b932c9a50e47085", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/saxsfittool/fitfunction/ellipsoid/ellipsoid.py", "max_issues_repo_name": "awacha/saxsfittool", "max_issues_repo_head_hexsha": "20e9a8aecf680cfc5f3b84264b932c9a50e47085", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-06-16T09:39:00.000Z", "max_issues_repo_issues_event_max_datetime": "2017-06-16T09:39:00.000Z", "max_forks_repo_path": "src/saxsfittool/fitfunction/ellipsoid/ellipsoid.py", "max_forks_repo_name": "awacha/saxsfittool", "max_forks_repo_head_hexsha": "20e9a8aecf680cfc5f3b84264b932c9a50e47085", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.9285714286, "max_line_length": 209, "alphanum_fraction": 0.6231034043, "include": true, "reason": "import numpy", "num_tokens": 2242}
|
! requires pulchra.dat
module nmr
implicit none
! variables for adding backbone atoms
integer,parameter:: num_stat = 2363
integer,parameter:: num_stat_pro = 1432 ! stats for proline
integer, dimension(num_stat,3),save:: nco_stat_bin
integer, dimension(num_stat_pro,3),save:: nco_stat_pro_bin
real, dimension(num_stat,8,3),save:: nco_stat
real, dimension(num_stat_pro,8,3),save:: nco_stat_pro
real,parameter:: INVALIDANGLE = 999
real,parameter:: M_PI = 3.14159265358979323846
real,parameter:: pi180 = 180.0/M_PI
logical,save:: initStatFlag=.false. ! set to true the first time build_bb is called
! variables for NOE restraints
integer,parameter:: MAXSEQLEN = 1000
integer,parameter:: NUMCONTYPES = 16
integer,parameter:: NNINDEX = 1
integer,parameter:: CACAINDEX = 2
integer,parameter:: CBCBINDEX = 3
integer,parameter:: SCSCINDEX = 4
integer,parameter:: NCAINDEX = 5
integer,parameter:: NCBINDEX = 6
integer,parameter:: NSCINDEX = 7
integer,parameter:: CACBINDEX = 8
integer,parameter:: CASCINDEX = 9
integer,parameter:: CBSCINDEX = 10
integer,parameter:: CANINDEX = 11
integer,parameter:: CBNINDEX = 12
integer,parameter:: SCNINDEX = 13
integer,parameter:: CBCAINDEX = 14
integer,parameter:: SCCAINDEX = 15
integer,parameter:: SCCBINDEX = 16
character(len=4), dimension(NUMCONTYPES), parameter:: TYPENAMES = (/'NN ','CACA','CBCB','SCSC','NCA ', &
'NCB ','NSC ','CACB','CASC','CBSC','CAN ','CBN ','SCN ','CBCA','SCCA','SCCB'/)
integer,parameter:: NTYPE = 1
integer,parameter:: CATYPE = 2
integer,parameter:: CBTYPE = 3
integer,parameter:: SCTYPE = 4
integer,parameter,dimension(NUMCONTYPES):: ATOMTYPE1 = (/NTYPE,CATYPE,CBTYPE,SCTYPE, NTYPE,NTYPE,NTYPE,CATYPE, &
CATYPE,CBTYPE,CATYPE,CBTYPE, SCTYPE,CBTYPE,SCTYPE,SCTYPE/) ! first atom type of contact
integer,parameter,dimension(NUMCONTYPES):: ATOMTYPE2 = (/NTYPE,CATYPE,CBTYPE,SCTYPE, CATYPE,CBTYPE,SCTYPE,CBTYPE, &
SCTYPE,SCTYPE,NTYPE,NTYPE, NTYPE,CATYPE,CATYPE,CBTYPE/) ! second atom type of contact
integer,parameter:: MAXCONTACTPARTNERS = 100
integer, dimension(MAXSEQLEN,NUMCONTYPES),save:: numContactPartners = 0 ! [res i][TYPE_INDEX] = num contacts incident to i
integer, dimension(MAXSEQLEN,NUMCONTYPES,MAXCONTACTPARTNERS),save:: contactPartners = 0 ! [res i][TYPE_INDEX][ith partner] = res j
integer,parameter:: SCOREINDEX = 1 ! used to index into scoreDistBound
integer,parameter:: DISTINDEX = 2 ! distances
real, dimension(MAXSEQLEN,MAXSEQLEN,NUMCONTYPES,2),save:: scoreDistBound = 0! stores the score,distbound; matrix not necessarily symmetric
integer,parameter:: MAXAMBIG = 200 ! maximum number of ambiguous restraints
integer,parameter:: MAXAMBIGGROUP = 10 ! maximum number of restraints in an ambiguous assignment group
integer,parameter:: RES1INDEX = 1 ! used for ambigRestraints
integer,parameter:: RES2INDEX = 2 ! used for ambigRestraints
integer,parameter:: TYPEINDEX = 3 ! used for ambigRestraints
integer, dimension(MAXAMBIG,MAXAMBIGGROUP,3),save:: ambigRestraints = 0 ! res1, res2, typeIndex (e.g. NNINDEX);
real, dimension(MAXAMBIG,MAXAMBIGGROUP,2),save:: ambigScoreDistBound = 0 ! score,distbound
integer, dimension(MAXAMBIG),save:: ambigCounts = 0 ! for each ambiguous restraint, the number of contacts in the restraint
real, dimension(MAXAMBIG),save:: ambigPenalty = 0 ! penalty of ambig restr viol, equal to sum of individual restraints
integer,save:: numAmbig = 0 ! will be set when read restraint file
real,parameter:: LINPENALTYBOUND = 1.0 ! 2.0 ! if dist-distRestr > 0 && dist-distRestr < LINPENALTYBOUND, a linear penalty is applied
! energy terms to save for weight training (used by energy_tot,ESHORT,EHB)
real,dimension(13),save:: NMR_ETERMS = 0
! number of restraint violations (includes ambig restraints), set by noeRestraintPenalty
integer,save:: violCount = 0
! number of restraints (includes ambig restraints), set by readNOEAssignments and readAmbigNOEAssignments
integer,save:: numRestr = 0
! 1 2 3 4 5 6 7 8 9 10 11 12 13
! ENOE, ESHORT3, ESHORT4, ESHORT4a, ESHORT9, ESHORT10, ESHORT11 EHB1c, EHB1, EHB1a, EHB1b, EHB4, ENOEA
! ENOE_wt, er1, er3, er4, er5, er6, er7, eh1c, eh1, eh1a, eh1b, eh4, ENOEA_wt
public:: build_bb, build_bb_ij, build_bb_ij_nocheck, readTalos, getPhiPsi, getPhiPsi_ij, &
torsionRestraintPenalty, readNOEAssignments, readAmbigNOEAssignments, noeRestraintPenalty, & ! noeRestraintPenaltyAnti, &
strtok
contains
! builds the backbone atoms N,C,O from the input CA coordinates
! based on algorithm in pulchra
! OXT is stored in ox(Lch+1), oy(Lch+1), oz(Lch+1)
! seq contains the AA type of each residue (in cas.f). Defined by aa
! aa = (/ 'BCK','GLY','ALA','SER','CYS','VAL','THR','ILE', &
!c -1 0 1 2 3 4 5 6
! 'PRO','MET','ASP','ASN','LEU', &
!c 7 8 9 10 11
! 'LYS','GLU','GLN','ARG', &
!c 12 13 14 15
! 'HIS','PHE','TYR','TRP','CYX'/)
!c 16 17 18 19 20
subroutine build_bb(cax,cay,caz,nx,ny,nz,cx,cy,cz,ox,oy,oz,Lch,seq)
implicit none
integer,intent(in):: Lch
real,dimension(Lch),intent(in):: cax, cay, caz
real,dimension(Lch),intent(out):: nx, ny, nz
real,dimension(Lch),intent(out):: cx, cy, cz
real,dimension(Lch+1),intent(out):: ox, oy, oz ! +1 for terminal O
integer,dimension(Lch+1,3):: RBINS
integer,dimension(Lch),intent(in):: seq ! pro=7
real,dimension(8):: cacoordsx, cacoordsy, cacoordsz
real,dimension(8):: tmpcoordsx, tmpcoordsy, tmpcoordsz
real,dimension(8):: tmpstatx, tmpstaty, tmpstatz
real:: x1, y1, z1
real:: x2, y2, z2
real:: x3, y3, z3
real:: x4, y4, z4
real:: besthit, hit
integer:: bestpos
integer:: i, j,bin13_1, bin13_2, bin14
real:: rmsd, total, maxrms
real,dimension(Lch+10,3):: X_COORDS
if (.not. initStatFlag) then
call init_backbone_bins()
initStatFlag = .true.
endif
do i=1,Lch+10
do j=1,3
X_COORDS(i,j) = 0.
end do
end do
call prepare_rbins(cax,cay,caz,Lch,RBINS,X_COORDS)
total = 0
maxrms = 0.0
do i=1,Lch+1
x1 = X_COORDS(i+3,1)
y1 = X_COORDS(i+3,2)
z1 = X_COORDS(i+3,3)
x2 = X_COORDS(i+4,1)
y2 = X_COORDS(i+4,2)
z2 = X_COORDS(i+4,3)
x3 = X_COORDS(i+5,1)
y3 = X_COORDS(i+5,2)
z3 = X_COORDS(i+5,3)
x4 = X_COORDS(i+6,1)
y4 = X_COORDS(i+6,2)
z4 = X_COORDS(i+6,3)
cacoordsx(1) = x1
cacoordsy(1) = y1
cacoordsz(1) = z1
cacoordsx(2) = x2
cacoordsy(2) = y2
cacoordsz(2) = z2
cacoordsx(3) = x3
cacoordsy(3) = y3
cacoordsz(3) = z3
cacoordsx(4) = x4
cacoordsy(4) = y4
cacoordsz(4) = z4
bin13_1 = RBINS(i,1)
bin13_2 = RBINS(i,2)
bin14 = RBINS(i,3)
if (i .ne. 1 .and. seq(i-1) == 7) then ! prev res is pro
j=1
besthit=1000.
bestpos=1
do
hit = abs(nco_stat_pro_bin(j,1)-bin13_1)+abs(nco_stat_pro_bin(j,2)-bin13_2)+0.2*abs(nco_stat_pro_bin(j,3)-bin14)
if (hit<besthit) then
besthit=hit
bestpos=j
endif
j = j+1
if (nco_stat_pro_bin(j,1) < 0 .or. hit <= 1e-3) exit
end do
do j=1,4
tmpstatx(j) = nco_stat_pro(bestpos,j,1)
tmpstaty(j) = nco_stat_pro(bestpos,j,2)
tmpstatz(j) = nco_stat_pro(bestpos,j,3)
end do
do j=1,8
tmpcoordsx(j) = nco_stat_pro(bestpos,j,1)
tmpcoordsy(j) = nco_stat_pro(bestpos,j,2)
tmpcoordsz(j) = nco_stat_pro(bestpos,j,3)
end do
else
j=1
besthit=1000.
bestpos=1
do
hit = abs(nco_stat_bin(j,1)-bin13_1)+abs(nco_stat_bin(j,2)-bin13_2)+0.2*abs(nco_stat_bin(j,3)-bin14)
if (hit<besthit) then
besthit=hit
bestpos=j
endif
j = j+1
if (nco_stat_bin(j,1) < 0 .or. hit<=1e-3) exit
end do
do j=1,4
tmpstatx(j) = nco_stat(bestpos,j,1)
tmpstaty(j) = nco_stat(bestpos,j,2)
tmpstatz(j) = nco_stat(bestpos,j,3)
end do
do j=1,8
tmpcoordsx(j) = nco_stat(bestpos,j,1)
tmpcoordsy(j) = nco_stat(bestpos,j,2)
tmpcoordsz(j) = nco_stat(bestpos,j,3)
end do
endif
rmsd=superimpose2(cacoordsx,cacoordsy,cacoordsz,tmpstatx,tmpstaty,tmpstatz,4,tmpcoordsx,tmpcoordsy,tmpcoordsz,8)
total = total + rmsd
if (rmsd>maxrms) then
maxrms=rmsd
endif
! add C,O,N
if (i .ne. 1) then
cx(i-1) = tmpcoordsx(5)
cy(i-1) = tmpcoordsy(5)
cz(i-1) = tmpcoordsz(5)
ox(i-1) = tmpcoordsx(6)
oy(i-1) = tmpcoordsy(6)
oz(i-1) = tmpcoordsz(6)
endif
if (i .ne. Lch+1) then
nx(i) = tmpcoordsx(7)
ny(i) = tmpcoordsy(7)
nz(i) = tmpcoordsz(7)
else ! terminal oxygen instead of nitrogen
ox(i) = tmpcoordsx(7)
oy(i) = tmpcoordsy(7)
oz(i) = tmpcoordsz(7)
endif
end do ! end for each residue
end subroutine
! build_bb_ij will adjust start and end according to the following rules:
! Residues outside of start, end can have torsion angle changes when residues in
! start, end (inclusive) change. Therefore, +/- 2 residues are added to start, end
! end-start+1 must be at least 5 because this is the minimum number needed for construction
! If not, start and end are enlarged equally in both directions (unless one end is
! at the boundary in which case it is enlarged more in one direction
! The first 2 residues and the last 3 of the sequence is used to build the ends of
! the chain, so if start is 2, then it is set to 1, and if end is set to Lch-2 or
! Lch-1, then it is set to Lch
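! Worked example of these rules (illustrative): with Lch=100, start=10, end=11,
! the interval grows to s=8, e=13 (width 6 >= 5, away from the ends, so no
! further change). With start=1, end=1: s=1 (clamped), e=3; width < 5 gives
! temp=2, s would underflow so s stays 1 while e grows by d=2 to 5, and then
! e=e+temp=7, for a final interval s=1, e=7.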
subroutine build_bb_ij(cax,cay,caz,nx,ny,nz,cx,cy,cz,ox,oy,oz,start,end,Lch,seq)
implicit none
integer,intent(in):: start,end
integer,intent(in):: Lch
real,dimension(Lch),intent(in):: cax, cay, caz
real,dimension(Lch),intent(inout):: nx, ny, nz
real,dimension(Lch),intent(inout):: cx, cy, cz
real,dimension(Lch+1),intent(inout):: ox, oy, oz ! +1 for terminal O
integer,dimension(Lch),intent(in):: seq ! pro=7
integer:: s, e, d, temp
if (.not. initStatFlag) then
call init_backbone_bins()
if (end-start+1 == Lch) then
initStatFlag = .true.
endif
endif
! enlarge interval
s = start-2
e = end+2
if (s < 1) then
s = 1
endif
if (e > Lch) then
e = Lch
endif
! make sure at least width 5
if (e-s < 5) then
temp = 5-e+s
temp = ceiling(real(temp)/2.0)
s = s-temp
if (s < 1) then
d = 1-s
s = 1
e = e+d
if (e > Lch) then
e = Lch
endif
endif
e = e+temp
if (e > Lch) then
d = e-Lch
e = Lch
s = s-d
if (s < 1) then
s = 1
endif
endif
endif
! round end points if near chain boundary
if (s == 2) then
s = 1
endif
if (e == Lch-2 .or. e == Lch-1) then
e = Lch
endif
call build_bb_ij_nocheck(cax,cay,caz,nx,ny,nz,cx,cy,cz,ox,oy,oz,s,e,Lch,seq)
end subroutine
! used by build_bb_ij
! see build_bb_ij for conditions on start, end. The conditions are skipped here
subroutine build_bb_ij_nocheck(cax,cay,caz,nx,ny,nz,cx,cy,cz,ox,oy,oz,start,end,Lch,seq)
implicit none
integer,intent(in):: start,end
integer,intent(in):: Lch
real,dimension(Lch),intent(in):: cax, cay, caz
real,dimension(Lch),intent(inout):: nx, ny, nz
real,dimension(Lch),intent(inout):: cx, cy, cz
real,dimension(Lch+1),intent(inout):: ox, oy, oz ! +1 for terminal O
integer,dimension(Lch),intent(in):: seq ! pro=7
integer,dimension(end-start+2,3):: RBINS
real,dimension(8):: cacoordsx, cacoordsy, cacoordsz
real,dimension(8):: tmpcoordsx, tmpcoordsy, tmpcoordsz
real,dimension(8):: tmpstatx, tmpstaty, tmpstatz
real:: x1, y1, z1
real:: x2, y2, z2
real:: x3, y3, z3
real:: x4, y4, z4
real:: besthit, hit
real:: rmsd
integer:: bestpos, length, si
integer:: i, j,bin13_1, bin13_2, bin14
real,dimension(end-start+11,3):: X_COORDS
length = end-start+1
if (.not. initStatFlag) then
call init_backbone_bins()
if (end-start+1 == Lch) then
initStatFlag = .true.
endif
endif
do i=1,length+10
do j=1,3
X_COORDS(i,j) = 0.
end do
end do
call prepare_rbins_ij(cax,cay,caz,start,end,Lch,RBINS,X_COORDS)
do i=1,length+1
x1 = X_COORDS(i+3,1)
y1 = X_COORDS(i+3,2)
z1 = X_COORDS(i+3,3)
x2 = X_COORDS(i+4,1)
y2 = X_COORDS(i+4,2)
z2 = X_COORDS(i+4,3)
x3 = X_COORDS(i+5,1)
y3 = X_COORDS(i+5,2)
z3 = X_COORDS(i+5,3)
x4 = X_COORDS(i+6,1)
y4 = X_COORDS(i+6,2)
z4 = X_COORDS(i+6,3)
cacoordsx(1) = x1
cacoordsy(1) = y1
cacoordsz(1) = z1
cacoordsx(2) = x2
cacoordsy(2) = y2
cacoordsz(2) = z2
cacoordsx(3) = x3
cacoordsy(3) = y3
cacoordsz(3) = z3
cacoordsx(4) = x4
cacoordsy(4) = y4
cacoordsz(4) = z4
bin13_1 = RBINS(i,1)
bin13_2 = RBINS(i,2)
bin14 = RBINS(i,3)
si = i+start-1 ! index between start and end
if (si .ne. 1 .and. seq(si-1) == 7) then ! prev res is pro
j=1
besthit=1000.
bestpos=1
do
hit = abs(nco_stat_pro_bin(j,1)-bin13_1)+abs(nco_stat_pro_bin(j,2)-bin13_2)+0.2*abs(nco_stat_pro_bin(j,3)-bin14)
if (hit<besthit) then
besthit=hit
bestpos=j
endif
j = j+1
if (nco_stat_pro_bin(j,1) < 0 .or. hit <= 1e-3) exit
end do
do j=1,4
tmpstatx(j) = nco_stat_pro(bestpos,j,1)
tmpstaty(j) = nco_stat_pro(bestpos,j,2)
tmpstatz(j) = nco_stat_pro(bestpos,j,3)
end do
do j=1,8
tmpcoordsx(j) = nco_stat_pro(bestpos,j,1)
tmpcoordsy(j) = nco_stat_pro(bestpos,j,2)
tmpcoordsz(j) = nco_stat_pro(bestpos,j,3)
end do
else
j=1
besthit=1000.
bestpos=1
do
hit = abs(nco_stat_bin(j,1)-bin13_1)+abs(nco_stat_bin(j,2)-bin13_2)+0.2*abs(nco_stat_bin(j,3)-bin14)
if (hit<besthit) then
besthit=hit
bestpos=j
endif
j = j+1
if (nco_stat_bin(j,1) < 0 .or. hit<=1e-3) exit
end do
do j=1,4
tmpstatx(j) = nco_stat(bestpos,j,1)
tmpstaty(j) = nco_stat(bestpos,j,2)
tmpstatz(j) = nco_stat(bestpos,j,3)
end do
do j=1,8
tmpcoordsx(j) = nco_stat(bestpos,j,1)
tmpcoordsy(j) = nco_stat(bestpos,j,2)
tmpcoordsz(j) = nco_stat(bestpos,j,3)
end do
endif
rmsd = superimpose2(cacoordsx,cacoordsy,cacoordsz,tmpstatx,tmpstaty,tmpstatz,4,tmpcoordsx,tmpcoordsy,tmpcoordsz,8)
! add C,O,N
if (si .ne. 1) then
cx(si-1) = tmpcoordsx(5)
cy(si-1) = tmpcoordsy(5)
cz(si-1) = tmpcoordsz(5)
ox(si-1) = tmpcoordsx(6)
oy(si-1) = tmpcoordsy(6)
oz(si-1) = tmpcoordsz(6)
endif
if (si .ne. Lch+1) then
nx(si) = tmpcoordsx(7)
ny(si) = tmpcoordsy(7)
nz(si) = tmpcoordsz(7)
else ! terminal oxygen instead of nitrogen
ox(si) = tmpcoordsx(7)
oy(si) = tmpcoordsy(7)
oz(si) = tmpcoordsz(7)
endif
end do ! end for each residue
end subroutine
! helper function for build_bb
! populates RBINS and X_COORDS based on input ca coordinates
subroutine prepare_rbins(cax,cay,caz,Lch,RBINS,X_COORDS)
implicit none
integer,intent(in):: Lch
real,dimension(Lch),intent(in):: cax, cay, caz
integer,dimension(Lch+1,3),intent(out):: RBINS
integer:: i, bin13_1, bin13_2, bin14
real:: x1, y1, z1
real:: x2, y2, z2
real:: x3, y3, z3
real:: x4, y4, z4
real:: r13_1, r13_2, r14
real,dimension(8):: cacoordsx, cacoordsy, cacoordsz
real,dimension(8):: tmpcoordsx, tmpcoordsy, tmpcoordsz
real,dimension(8):: tmpstatx, tmpstaty, tmpstatz
real,dimension(Lch+10,3),intent(inout):: X_COORDS
real:: rmsd
do i=1,Lch
X_COORDS(i+5,1) = cax(i);
X_COORDS(i+5,2) = cay(i);
X_COORDS(i+5,3) = caz(i);
end do
! for rebuilding N-term end (build 2 pseudo res before N-term by superposing
! res 3-5 to 1-3, and then keeping the shifted res 1-2 as res -1, 0)
do i=1,5
tmpcoordsx(i) = X_COORDS(i+5,1)
tmpcoordsy(i) = X_COORDS(i+5,2)
tmpcoordsz(i) = X_COORDS(i+5,3)
end do
do i=1,3
cacoordsx(i) = X_COORDS(i+7,1)
cacoordsy(i) = X_COORDS(i+7,2)
cacoordsz(i) = X_COORDS(i+7,3)
end do
do i=1,3
tmpstatx(i) = X_COORDS(i+5,1)
tmpstaty(i) = X_COORDS(i+5,2)
tmpstatz(i) = X_COORDS(i+5,3)
end do
rmsd = superimpose2(tmpstatx,tmpstaty,tmpstatz,cacoordsx,cacoordsy,cacoordsz,3,tmpcoordsx,tmpcoordsy,tmpcoordsz,5)
do i=1,2
X_COORDS(i+3,1) = tmpcoordsx(i)
X_COORDS(i+3,2) = tmpcoordsy(i)
X_COORDS(i+3,3) = tmpcoordsz(i)
end do
! for rebuilding C-term end (build 3 pseudo res after C-term by superposing
! res [L-4,L-2] to [L-2,L] and then keeping the shifted [L-2,L] as res [L+1,L+3])
do i=1,5
tmpcoordsx(i) = X_COORDS(i+Lch,1)
tmpcoordsy(i) = X_COORDS(i+Lch,2)
tmpcoordsz(i) = X_COORDS(i+Lch,3)
end do
do i=1,3
cacoordsx(i) = X_COORDS(i+Lch,1)
cacoordsy(i) = X_COORDS(i+Lch,2)
cacoordsz(i) = X_COORDS(i+Lch,3)
end do
do i=1,3
tmpstatx(i) = X_COORDS(i+Lch+2,1)
tmpstaty(i) = X_COORDS(i+Lch+2,2)
tmpstatz(i) = X_COORDS(i+Lch+2,3)
end do
rmsd = superimpose2(tmpstatx,tmpstaty,tmpstatz,cacoordsx,cacoordsy,cacoordsz,3,tmpcoordsx,tmpcoordsy,tmpcoordsz,5)
do i=1,3
X_COORDS(i+Lch+5,1) = tmpcoordsx(i+3)
X_COORDS(i+Lch+5,2) = tmpcoordsy(i+3)
X_COORDS(i+Lch+5,3) = tmpcoordsz(i+3)
end do
! use ca coords of i-2,i-1,i,i+1 to get res i's bin
do i=1,Lch+1
x1 = X_COORDS(i+3,1)
y1 = X_COORDS(i+3,2)
z1 = X_COORDS(i+3,3)
x2 = X_COORDS(i+4,1)
y2 = X_COORDS(i+4,2)
z2 = X_COORDS(i+4,3)
x3 = X_COORDS(i+5,1)
y3 = X_COORDS(i+5,2)
z3 = X_COORDS(i+5,3)
x4 = X_COORDS(i+6,1)
y4 = X_COORDS(i+6,2)
z4 = X_COORDS(i+6,3)
r13_1 = calc_distance(x1, y1, z1, x3, y3, z3)
r13_2 = calc_distance(x2, y2, z2, x4, y4, z4)
r14 = calc_r14(x1, y1, z1, x2, y2, z2, x3, y3, z3, x4, y4, z4)
bin13_1 = int((r13_1-4.6)/0.3)
bin13_2 = int((r13_2-4.6)/0.3)
bin14 = int((r14+11.)/0.3)
if (bin13_1<0) bin13_1=0
if (bin13_2<0) bin13_2=0
if (bin14<0) bin14=0
if (bin13_1>9) bin13_1=9
if (bin13_2>9) bin13_2=9
if (bin14>73) bin14=73
RBINS(i,1) = bin13_1
RBINS(i,2) = bin13_2
RBINS(i,3) = bin14
end do
end subroutine
! helper function for build_bb_ij
subroutine prepare_rbins_ij(cax,cay,caz,start,end,Lch,RBINS,X_COORDS)
implicit none
integer,intent(in):: start, end
integer,intent(in):: Lch
real,dimension(Lch),intent(in):: cax, cay, caz
integer,dimension(end-start+2,3),intent(out):: RBINS
integer:: i, bin13_1, bin13_2, bin14, length
real:: x1, y1, z1
real:: x2, y2, z2
real:: x3, y3, z3
real:: x4, y4, z4
real:: r13_1, r13_2, r14
real,dimension(8):: cacoordsx, cacoordsy, cacoordsz
real,dimension(8):: tmpcoordsx, tmpcoordsy, tmpcoordsz
real,dimension(8):: tmpstatx, tmpstaty, tmpstatz
real,dimension(end-start+11,3),intent(inout):: X_COORDS
real:: rmsd
length = end-start+1
do i=1,length
X_COORDS(i+5,1) = cax(i+start-1);
X_COORDS(i+5,2) = cay(i+start-1);
X_COORDS(i+5,3) = caz(i+start-1);
end do
if (start < 3) then
! for rebuilding N-term end (build 2 pseudo res before N-term by superposing
! res 3-5 to 1-3, and then keeping the shifted res 1-2 as res -1, 0)
do i=1,5
tmpcoordsx(i) = X_COORDS(i+5,1)
tmpcoordsy(i) = X_COORDS(i+5,2)
tmpcoordsz(i) = X_COORDS(i+5,3)
end do
do i=1,3
cacoordsx(i) = X_COORDS(i+7,1)
cacoordsy(i) = X_COORDS(i+7,2)
cacoordsz(i) = X_COORDS(i+7,3)
end do
do i=1,3
tmpstatx(i) = X_COORDS(i+5,1)
tmpstaty(i) = X_COORDS(i+5,2)
tmpstatz(i) = X_COORDS(i+5,3)
end do
rmsd = superimpose2(tmpstatx,tmpstaty,tmpstatz,cacoordsx,cacoordsy,cacoordsz,3,tmpcoordsx,tmpcoordsy,tmpcoordsz,5)
do i=1,2
X_COORDS(i+3,1) = tmpcoordsx(i)
X_COORDS(i+3,2) = tmpcoordsy(i)
X_COORDS(i+3,3) = tmpcoordsz(i)
end do
else
do i=1,2
X_COORDS(i+3,1) = cax(start-3+i)
X_COORDS(i+3,2) = cay(start-3+i)
X_COORDS(i+3,3) = caz(start-3+i)
end do
endif
if (end > Lch-3) then
! for rebuilding C-term end (build 3 pseudo res after C-term by superposing
! res [L-4,L-2] to [L-2,L] and then keeping the shifted [L-2,L] as res [L+1,L+3])
do i=1,5
tmpcoordsx(i) = X_COORDS(i+length,1)
tmpcoordsy(i) = X_COORDS(i+length,2)
tmpcoordsz(i) = X_COORDS(i+length,3)
end do
do i=1,3
cacoordsx(i) = X_COORDS(i+length,1)
cacoordsy(i) = X_COORDS(i+length,2)
cacoordsz(i) = X_COORDS(i+length,3)
end do
do i=1,3
tmpstatx(i) = X_COORDS(i+length+2,1)
tmpstaty(i) = X_COORDS(i+length+2,2)
tmpstatz(i) = X_COORDS(i+length+2,3)
end do
rmsd = superimpose2(tmpstatx,tmpstaty,tmpstatz,cacoordsx,cacoordsy,cacoordsz,3,tmpcoordsx,tmpcoordsy,tmpcoordsz,5)
do i=1,3
X_COORDS(i+length+5,1) = tmpcoordsx(i+3)
X_COORDS(i+length+5,2) = tmpcoordsy(i+3)
X_COORDS(i+length+5,3) = tmpcoordsz(i+3)
end do
else
do i=1,3
X_COORDS(i+length+5,1) = cax(end+i)
X_COORDS(i+length+5,2) = cay(end+i)
X_COORDS(i+length+5,3) = caz(end+i)
end do
endif
! use ca coords of i-2,i-1,i,i+1 to get res i's bin
do i=1,length+1
x1 = X_COORDS(i+3,1)
y1 = X_COORDS(i+3,2)
z1 = X_COORDS(i+3,3)
x2 = X_COORDS(i+4,1)
y2 = X_COORDS(i+4,2)
z2 = X_COORDS(i+4,3)
x3 = X_COORDS(i+5,1)
y3 = X_COORDS(i+5,2)
z3 = X_COORDS(i+5,3)
x4 = X_COORDS(i+6,1)
y4 = X_COORDS(i+6,2)
z4 = X_COORDS(i+6,3)
r13_1 = calc_distance(x1, y1, z1, x3, y3, z3)
r13_2 = calc_distance(x2, y2, z2, x4, y4, z4)
r14 = calc_r14(x1, y1, z1, x2, y2, z2, x3, y3, z3, x4, y4, z4)
bin13_1 = int((r13_1-4.6)/0.3)
bin13_2 = int((r13_2-4.6)/0.3)
bin14 = int((r14+11.)/0.3)
if (bin13_1<0) bin13_1=0
if (bin13_2<0) bin13_2=0
if (bin14<0) bin14=0
if (bin13_1>9) bin13_1=9
if (bin13_2>9) bin13_2=9
if (bin14>73) bin14=73
RBINS(i,1) = bin13_1
RBINS(i,2) = bin13_2
RBINS(i,3) = bin14
end do
end subroutine
! reads noe.tbl
! initializes
! a) numContactPartners
! b) contactPartners
! c) scoreDistBound
! d) numRestr
subroutine readNOEAssignments(filename)
implicit none
character(len=*):: filename
character(len=50):: line
character(len=4):: type
integer:: IOstatus, numContacts, i, res1, res2
real:: score, dist
open(unit=64,file=filename,status='old')
read(64,'(A50)',IOSTAT=IOstatus) line
read(line,'(I6)') numContacts
! write(*,*) 'Num NOE Restraints: ',numContacts
do i=1,numContacts
read(64,'(A50)',IOSTAT=IOstatus) line
if (IOstatus.ne.0) then
write(*,*) 'Not a NOE restraint formatted file'
return
endif
read(line,100) type,res1,res2,score,dist
! add both directions
if (type.eq.'NN') then
if (numContactPartners(res1,NNINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,NNINDEX) = numContactPartners(res1,NNINDEX)+1
contactPartners(res1,NNINDEX,numContactPartners(res1,NNINDEX)) = res2
scoreDistBound(res1,res2,NNINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,NNINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,NNINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,NNINDEX) = numContactPartners(res2,NNINDEX)+1
contactPartners(res2,NNINDEX,numContactPartners(res2,NNINDEX)) = res1
scoreDistBound(res2,res1,NNINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,NNINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else if (type.eq.'CACA') then
if (numContactPartners(res1,CACAINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,CACAINDEX) = numContactPartners(res1,CACAINDEX)+1
contactPartners(res1,CACAINDEX,numContactPartners(res1,CACAINDEX)) = res2
scoreDistBound(res1,res2,CACAINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,CACAINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,CACAINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,CACAINDEX) = numContactPartners(res2,CACAINDEX)+1
contactPartners(res2,CACAINDEX,numContactPartners(res2,CACAINDEX)) = res1
scoreDistBound(res2,res1,CACAINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,CACAINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else if (type.eq.'CBCB') then
if (numContactPartners(res1,CBCBINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,CBCBINDEX) = numContactPartners(res1,CBCBINDEX)+1
contactPartners(res1,CBCBINDEX,numContactPartners(res1,CBCBINDEX)) = res2
scoreDistBound(res1,res2,CBCBINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,CBCBINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,CBCBINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,CBCBINDEX) = numContactPartners(res2,CBCBINDEX)+1
contactPartners(res2,CBCBINDEX,numContactPartners(res2,CBCBINDEX)) = res1
scoreDistBound(res2,res1,CBCBINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,CBCBINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else if (type.eq.'SCSC') then
if (numContactPartners(res1,SCSCINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,SCSCINDEX) = numContactPartners(res1,SCSCINDEX)+1
contactPartners(res1,SCSCINDEX,numContactPartners(res1,SCSCINDEX)) = res2
scoreDistBound(res1,res2,SCSCINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,SCSCINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,SCSCINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,SCSCINDEX) = numContactPartners(res2,SCSCINDEX)+1
contactPartners(res2,SCSCINDEX,numContactPartners(res2,SCSCINDEX)) = res1
scoreDistBound(res2,res1,SCSCINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,SCSCINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else if (type.eq.'NCA') then
if (numContactPartners(res1,NCAINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,NCAINDEX) = numContactPartners(res1,NCAINDEX)+1
contactPartners(res1,NCAINDEX,numContactPartners(res1,NCAINDEX)) = res2
scoreDistBound(res1,res2,NCAINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,NCAINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,CANINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,CANINDEX) = numContactPartners(res2,CANINDEX)+1
contactPartners(res2,CANINDEX,numContactPartners(res2,CANINDEX)) = res1
scoreDistBound(res2,res1,CANINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,CANINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else if (type.eq.'NCB') then
if (numContactPartners(res1,NCBINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,NCBINDEX) = numContactPartners(res1,NCBINDEX)+1
contactPartners(res1,NCBINDEX,numContactPartners(res1,NCBINDEX)) = res2
scoreDistBound(res1,res2,NCBINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,NCBINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,CBNINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,CBNINDEX) = numContactPartners(res2,CBNINDEX)+1
contactPartners(res2,CBNINDEX,numContactPartners(res2,CBNINDEX)) = res1
scoreDistBound(res2,res1,CBNINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,CBNINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else if (type.eq.'NSC') then
if (numContactPartners(res1,NSCINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,NSCINDEX) = numContactPartners(res1,NSCINDEX)+1
contactPartners(res1,NSCINDEX,numContactPartners(res1,NSCINDEX)) = res2
scoreDistBound(res1,res2,NSCINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,NSCINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,SCNINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,SCNINDEX) = numContactPartners(res2,SCNINDEX)+1
contactPartners(res2,SCNINDEX,numContactPartners(res2,SCNINDEX)) = res1
scoreDistBound(res2,res1,SCNINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,SCNINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else if (type.eq.'CACB') then
if (numContactPartners(res1,CACBINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,CACBINDEX) = numContactPartners(res1,CACBINDEX)+1
contactPartners(res1,CACBINDEX,numContactPartners(res1,CACBINDEX)) = res2
scoreDistBound(res1,res2,CACBINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,CACBINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,CBCAINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,CBCAINDEX) = numContactPartners(res2,CBCAINDEX)+1
contactPartners(res2,CBCAINDEX,numContactPartners(res2,CBCAINDEX)) = res1
scoreDistBound(res2,res1,CBCAINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,CBCAINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else if (type.eq.'CASC') then
if (numContactPartners(res1,CASCINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,CASCINDEX) = numContactPartners(res1,CASCINDEX)+1
contactPartners(res1,CASCINDEX,numContactPartners(res1,CASCINDEX)) = res2
scoreDistBound(res1,res2,CASCINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,CASCINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,SCCAINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,SCCAINDEX) = numContactPartners(res2,SCCAINDEX)+1
contactPartners(res2,SCCAINDEX,numContactPartners(res2,SCCAINDEX)) = res1
scoreDistBound(res2,res1,SCCAINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,SCCAINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else if (type.eq.'CBSC') then
if (numContactPartners(res1,CBSCINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res1,CBSCINDEX) = numContactPartners(res1,CBSCINDEX)+1
contactPartners(res1,CBSCINDEX,numContactPartners(res1,CBSCINDEX)) = res2
scoreDistBound(res1,res2,CBSCINDEX,SCOREINDEX) = score
scoreDistBound(res1,res2,CBSCINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
if (numContactPartners(res2,SCCBINDEX) < MAXCONTACTPARTNERS) then
numContactPartners(res2,SCCBINDEX) = numContactPartners(res2,SCCBINDEX)+1
contactPartners(res2,SCCBINDEX,numContactPartners(res2,SCCBINDEX)) = res1
scoreDistBound(res2,res1,SCCBINDEX,SCOREINDEX) = score
scoreDistBound(res2,res1,SCCBINDEX,DISTINDEX) = dist
numRestr = numRestr + 1
endif
else
cycle
endif
end do ! end for each contact in file
close(64)
100 format(a4,1x,i5,1x,i5,1x,f9.6,1x,f6.2)
end subroutine
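! Illustrative noe.tbl layout read above (hypothetical values): the first line
! holds the restraint count (format I6), then one line per contact giving
! type, res1, res2, score, distance bound (format 100), e.g.
!        2
!   NN       3    57  0.750000   6.50
!   SCSC    12    80  0.600000   7.20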
! initializes
! a) ambigRestraints
! b) ambigCounts
! c) ambigPenalty
! d) numAmbig
! e) numRestr
subroutine readAmbigNOEAssignments(filename)
implicit none
character(len=*):: filename
character(len=50):: line
character(len=4):: type
integer:: IOstatus, numContacts, i, res1, res2, typeI
real:: score, dist, scoreSum
logical:: foundFlag
open(unit=64,file=filename,status='old')
do
read(64,'(A50)',IOSTAT=IOstatus) line ! read AMB
if (IOstatus.ne.0) then
if (IOStatus.gt.0) then
write(*,*) 'Error reading ambig NOE file'
return
endif
exit ! end of file
endif
if (len(trim(line)).eq.0) then
cycle
endif
read(line,101) type, numContacts
if ( (numAmbig+1) > MAXAMBIG) then
exit;
endif
numAmbig = numAmbig+1
ambigCounts(numAmbig) = numContacts
scoreSum = 0
do i=1,numContacts
if (i > MAXAMBIGGROUP) then
exit
endif
read(64,'(A50)',IOSTAT=IOstatus) line
if (IOstatus.ne.0) then
write(*,*) 'Error: Invalid ambig.tbl format'
return
endif
read(line,102) type,res1,res2,score,dist
ambigScoreDistBound(numAmbig,i,SCOREINDEX) = score
scoreSum = scoreSum+score
ambigScoreDistBound(numAmbig,i,DISTINDEX) = dist
ambigRestraints(numAmbig,i,RES1INDEX) = res1
ambigRestraints(numAmbig,i,RES2INDEX) = res2
foundFlag = .false.
do typeI=1,NUMCONTYPES
if (TYPENAMES(typeI).eq.type) then
foundFlag = .true.
exit
endif
end do
if (foundFlag) then
ambigRestraints(numAmbig,i,TYPEINDEX) = typeI
else
write(*,*) 'Error: Invalid type in ambig.tbl'
return
endif
end do ! end for each contact in restraint group
ambigPenalty(numAmbig) = scoreSum
read(64,'(A50)',IOSTAT=IOstatus) line ! read END
if (IOstatus.ne.0) then
write(*,*) 'Error: Invalid ambig.tbl format. Missing END'
return
endif
end do ! end while read file
close(64)
numRestr = numRestr + numAmbig
101 format(a4,1x,i2)
102 format(a4,1x,i5,1x,i5,1x,f9.6,1x,f6.2)
end subroutine
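! Illustrative ambig.tbl layout read above (hypothetical values): each group
! starts with 'AMB' and a contact count (format 101), lists its member
! restraints (format 102), and closes with an END line, e.g.
!   AMB   2
!   CACA     5    42  0.500000   8.00
!   CBCB     5    43  0.500000   9.50
!   END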
! reads talos tab
! phi, psi and their errors in dphi, dpsi
! only the 'Good' predictions (according to TALOS) are stored
! otherwise the prediction is INVALIDANGLE
subroutine readTalos(filename, phiPre, psiPre, dphiPre, dpsiPre, Lch)
implicit none
character(len=*):: filename
integer,intent(in):: Lch
real,dimension(Lch),intent(out):: phiPre, psiPre, dphiPre, dpsiPre
character(len=70):: line
character:: resName
integer:: IOstatus, resNum, num3, cscount,i,offset
real:: phi, psi, dphi, dpsi, dist, s2
character(len=4):: classT, temp
do i=1,Lch
phiPre(i) = INVALIDANGLE
psiPre(i) = INVALIDANGLE
dphiPre(i) = INVALIDANGLE
dpsiPre(i) = INVALIDANGLE
end do
open(unit=71,file=filename,status='old')
! skip to FORMAT string, read OFFSET if available
offset = 0;
do
read(71,120,IOSTAT=IOstatus) line
if (IOstatus.ne.0) then
write(*,*) 'Not a TALOS formatted file'
return
endif
if (line(1:6).eq.'OFFSET') then
temp = line(8:)
temp = trim(temp)
read(temp,'(i4)') offset
endif
if (line(1:6).eq.'FORMAT') then
exit
endif
end do
do
read(71,120, IOSTAT=IOstatus) line
if (IOstatus.ne.0) then
if (IOStatus.gt.0) then
write(*,*) 'Error reading file'
return
endif
exit
endif
if (len(trim(line)).eq.0) then
cycle
endif
read(line,130) resNum,resName,phi,psi,dphi,dpsi,dist,s2,num3,cscount,classT
classT = trim(classT)
if(classT.eq.'Good') then
phiPre(resNum+offset) = phi
psiPre(resNum+offset) = psi
dphiPre(resNum+offset) = dphi
dpsiPre(resNum+offset) = dpsi
else
cycle
end if
end do
close(71)
120 format(a70)
130 format(i4,1x,a1,1x,f8.3,1x,f8.3,1x,f8.3,1x,f8.3,1x,f8.3,1x,f5.3,1x,i2,1x,i2,1x,a4)
return
end subroutine
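! A data line consistent with format 130 above (hypothetical values):
!   42 A  -65.000  -42.000   12.000   15.000    0.000 0.900 10 10 Good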
!real function noeRestraintPenaltyTest(start,end,Lch,cas,sgs)
! implicit none
! integer,intent(in)::start
! integer,intent(in)::end
! integer,intent(in)::Lch
! real,dimension(Lch,3),intent(out):: cas
! real,dimension(Lch,3),intent(out):: sgs
! real:: energy
! real:: xa,ya,za,xg,yg,zg,xp,yp,zp
! real:: dist, cut
! integer:: i,j,p,numVio
!
! energy = 0
! numVio = 0
! do i=start,end
! xg = sgs(i,1)
! yg = sgs(i,2)
! zg = sgs(i,3)
! xa = cas(i,1)
! ya = cas(i,2)
! za = cas(i,3)
! do j=1,numContactPartners(i,SCSCINDEX)
! p = contactPartners(i,SCSCINDEX,j)
! if (i < p .or. p < start) then ! to prevent duplicate computation
! xp = sgs(p,1)
! yp = sgs(p,2)
! zp = sgs(p,3)
! dist=sqrt((xg-xp)**2+(yg-yp)**2+(zg-zp)**2)
!
! cut = scsc(i,p,DISTINDEX)
!
! if (dist > cut) then
! energy = energy + scsc(i,p,SCOREINDEX)
! numVio = numVio +1
! endif
! endif
! end do
! do j=1,numContactPartners(i,SCCAINDEX)
! p = contactPartners(i,SCCAINDEX,j)
! if (i < p .or. p < start) then
! xp = cas(p,1)
! yp = cas(p,2)
! zp = cas(p,3)
! dist=sqrt((xg-xp)**2+(yg-yp)**2+(zg-zp)**2)
!
! cut = casc(p,i,DISTINDEX)
!
! if (dist > cut) then
! energy = energy + casc(p,i,SCOREINDEX)
! numVio = numVio +1
! endif
! endif
! end do
! do j=1,numContactPartners(i,CASCINDEX)
! p = contactPartners(i,CASCINDEX,j)
! if (i < p .or. p < start) then
! xp = sgs(p,1)
! yp = sgs(p,2)
! zp = sgs(p,3)
! dist=sqrt((xa-xp)**2+(ya-yp)**2+(za-zp)**2)
!
! cut = casc(i,p,DISTINDEX)
!
! if (dist > cut) then
! energy = energy + casc(i,p,SCOREINDEX)
! numVio = numVio +1
! endif
! endif
! end do
! do j=1,numContactPartners(i,CACAINDEX)
! p = contactPartners(i,CACAINDEX,j)
! if (i < p .or. p < start) then
! xp = cas(p,1)
! yp = cas(p,2)
! zp = cas(p,3)
! dist=sqrt((xa-xp)**2+(ya-yp)**2+(za-zp)**2)
!
! cut = caca(i,p,DISTINDEX)+1.5
!
! if (dist > cut) then
! energy = energy + caca(i,p,SCOREINDEX)
! numVio = numVio +1
! endif
! endif
! end do
! do j=1,numContactPartners(i,ASCSINDEX)
! p = contactPartners(i,ASCSINDEX,j)
! if (i < p .or. p < start) then ! to prevent duplicate computation
! xp = sgs(p,1)
! yp = sgs(p,2)
! zp = sgs(p,3)
! dist=sqrt((xg-xp)**2+(yg-yp)**2+(zg-zp)**2)
!
! cut = ascs(i,p,DISTINDEX)
!
! if (dist < cut) then
! energy = energy + ascs(i,p,SCOREINDEX)
! numVio = numVio +1
! endif
! endif
! end do
! end do
! noeRestraintPenaltyTest = energy
! return
!end function
real function noeRestraintPenalty(start,endI,printViol)
implicit none
integer,intent(in)::start
integer,intent(in)::endI
logical,intent(in)::printViol
real:: energy
real:: xi,yi,zi,xp,yp,zp ! coords of contact between residue i and p
real:: axi,ayi,azi,axp,ayp,azp ! ca of i, p; used for CB, SC
real:: dist, cut, distDiff
integer:: i,j,p,contactType,numVio
integer:: iseq,pseq,nv1,nv2,nv1p,nv2p ! used for CB, SC of residue i, p
integer:: aTypei, aTypep ! atom type of i and p
integer:: ambigIndex
logical:: reComputeFlag ! if true, ambiguous restraint is recomputed
real:: minDistDiff ! the minimum restraint violation is used to compute ambig restr energy
integer, parameter:: ndim=1000
integer, parameter:: nvec=416
integer seq(ndim),sec(ndim)
common/seqe/seq,sec
integer mv(ndim)
common/chainm/mv
! ica for CB,SC; x for all types except N
integer ica(0:ndim),x(ndim),y(ndim),z(ndim)
common/chain1/ica,x,y,z
! for CA
real ex(ndim), ey(ndim), ez(ndim)
common/echain1/ex,ey,ez
! for SC
real egx(ndim),egy(ndim),egz(ndim)
common/echain2/egx,egy,egz
real gx(nvec,nvec,0:19),gy(nvec,nvec,0:19),gz(nvec,nvec,0:19)
common/sg/gx,gy,gz
! for CB
real etx(ndim),ety(ndim),etz(ndim)
common/echain6/etx,ety,etz
real hx(nvec,nvec,0:19),hy(nvec,nvec,0:19),hz(nvec,nvec,0:19)
common/cb/hx,hy,hz
! for N
real cax_coord(ndim),cay_coord(ndim),caz_coord(ndim),nx_coord(ndim),ny_coord(ndim),nz_coord(ndim), &
cx_coord(ndim),cy_coord(ndim),cz_coord(ndim),ox_coord(ndim),oy_coord(ndim),oz_coord(ndim)
common/nmr_backbone/cax_coord,cay_coord,caz_coord,nx_coord,ny_coord,nz_coord,cx_coord,cy_coord, &
cz_coord,ox_coord,oy_coord,oz_coord
energy = 0
numVio = 0
do i=start,endI
iseq=seq(i)
do contactType=1,NUMCONTYPES
if (numContactPartners(i,contactType) > 0) then
! get the cartesian coords of aTypei, convert from grid if not NTYPE
aTypei = ATOMTYPE1(contactType)
aTypep = ATOMTYPE2(contactType)
if (aTypei.ne.NTYPE) then
if (aTypei.eq.CATYPE) then
if(mv(i).gt.0)then
xi=real(x(i))
yi=real(y(i))
zi=real(z(i))
else
xi=ex(i)
yi=ey(i)
zi=ez(i)
endif
else if (aTypei == CBTYPE) then
if(mv(i).gt.0)then
xi=x(i)+HX(ica(i-1),ica(i),iseq)
yi=y(i)+HY(ica(i-1),ica(i),iseq)
zi=z(i)+HZ(ica(i-1),ica(i),iseq)
else
xi=etx(i)
yi=ety(i)
zi=etz(i)
endif
else if (aTypei == SCTYPE) then
if(mv(i).gt.0)then
axi=real(x(i))
ayi=real(y(i))
azi=real(z(i))
nv1=ica(i-1)
nv2=ica(i)
xi=axi+GX(nv1,nv2,iseq)
yi=ayi+GY(nv1,nv2,iseq)
zi=azi+GZ(nv1,nv2,iseq)
else
xi=egx(i)
yi=egy(i)
zi=egz(i)
endif
else
cycle
endif
! convert to cartesian
xi = 0.87*xi
yi = 0.87*yi
zi = 0.87*zi
else
! use CA coordinates instead for efficiency
! xi = nx_coord(i)
! yi = ny_coord(i)
! zi = nz_coord(i)
if(mv(i).gt.0)then
xi=real(x(i))
yi=real(y(i))
zi=real(z(i))
else
xi=ex(i)
yi=ey(i)
zi=ez(i)
endif
! convert to cartesian if using CA (but not for N)
xi = 0.87*xi
yi = 0.87*yi
zi = 0.87*zi
endif
do j=1,numContactPartners(i,contactType)
p = contactPartners(i,contactType,j)
if (i < p .or. p < start) then ! to prevent duplicate computation
pseq=seq(p)
! get the cartesian coords of aTypep, convert from grid if not NTYPE
if (aTypep.ne.NTYPE) then
if (aTypep.eq.CATYPE) then
if(mv(p).gt.0)then
xp=real(x(p))
yp=real(y(p))
zp=real(z(p))
else
xp=ex(p)
yp=ey(p)
zp=ez(p)
endif
else if (aTypep == CBTYPE) then
if(mv(p).gt.0)then
xp=x(p)+HX(ica(p-1),ica(p),pseq)
yp=y(p)+HY(ica(p-1),ica(p),pseq)
zp=z(p)+HZ(ica(p-1),ica(p),pseq)
else
xp=etx(p)
yp=ety(p)
zp=etz(p)
endif
else if (aTypep == SCTYPE) then
if(mv(p).gt.0)then
axp=real(x(p))
ayp=real(y(p))
azp=real(z(p))
nv1p=ica(p-1)
nv2p=ica(p)
xp=axp+GX(nv1p,nv2p,pseq)
yp=ayp+GY(nv1p,nv2p,pseq)
zp=azp+GZ(nv1p,nv2p,pseq)
else
xp=egx(p)
yp=egy(p)
zp=egz(p)
endif
else
cycle
endif
! convert to cartesian
xp = 0.87*xp
yp = 0.87*yp
zp = 0.87*zp
else
! use CA coordinates instead for efficiency
! xp = nx_coord(p)
! yp = ny_coord(p)
! zp = nz_coord(p)
if(mv(p).gt.0)then
xp=real(x(p))
yp=real(y(p))
zp=real(z(p))
else
xp=ex(p)
yp=ey(p)
zp=ez(p)
endif
! convert to cartesian if using CA (but not for N)
xp = 0.87*xp
yp = 0.87*yp
zp = 0.87*zp
endif
! compute energy
dist=sqrt((xi-xp)**2+(yi-yp)**2+(zi-zp)**2)
cut = scoreDistBound(i,p,contactType,DISTINDEX)
distDiff = dist-cut
if (distDiff > 0) then
if (distDiff < LINPENALTYBOUND) then ! linear penalty
energy = energy + scoreDistBound(i,p,contactType,SCOREINDEX)*distDiff/LINPENALTYBOUND
else
energy = energy + scoreDistBound(i,p,contactType,SCOREINDEX)
! write (*,'(i4,i4,a5,3f12.6)') i,p,TYPENAMES(contactType),dist,cut,distDiff
numVio = numVio +1
if (printViol) then
write (*,*) 'viol: ',i,p
endif
endif
endif
endif ! end if (i < p .or. p < start)
end do ! end for each contact partner j
endif ! end if num contact partners > 0
end do ! end for each contact type
end do ! end for each residue between start end
! compute ambiguous restraint penalty
do ambigIndex=1,numAmbig
reComputeFlag = .false.
do j=1,ambigCounts(ambigIndex)
! check if restraint contains a residue in start,end
i = ambigRestraints(ambigIndex,j,RES1INDEX)
p = ambigRestraints(ambigIndex,j,RES2INDEX)
if ( (i >= start.and.i <= endI).or.(p >= start.and.p <= endI) ) then
! recompute this ambiguous restraint
reComputeFlag = .true.
exit
endif
end do
if (reComputeFlag) then
minDistDiff = 999999.0
do j=1,ambigCounts(ambigIndex)
i = ambigRestraints(ambigIndex,j,RES1INDEX)
p = ambigRestraints(ambigIndex,j,RES2INDEX)
contactType = ambigRestraints(ambigIndex,j,TYPEINDEX)
aTypei = ATOMTYPE1(contactType)
aTypep = ATOMTYPE2(contactType)
iseq=seq(i)
pseq=seq(p)
if (aTypei.ne.NTYPE) then
if (aTypei.eq.CATYPE) then
if(mv(i).gt.0)then
xi=real(x(i))
yi=real(y(i))
zi=real(z(i))
else
xi=ex(i)
yi=ey(i)
zi=ez(i)
endif
else if (aTypei == CBTYPE) then
if(mv(i).gt.0)then
xi=x(i)+HX(ica(i-1),ica(i),iseq)
yi=y(i)+HY(ica(i-1),ica(i),iseq)
zi=z(i)+HZ(ica(i-1),ica(i),iseq)
else
xi=etx(i)
yi=ety(i)
zi=etz(i)
endif
else if (aTypei == SCTYPE) then
if(mv(i).gt.0)then
axi=real(x(i))
ayi=real(y(i))
azi=real(z(i))
nv1=ica(i-1)
nv2=ica(i)
xi=axi+GX(nv1,nv2,iseq)
yi=ayi+GY(nv1,nv2,iseq)
zi=azi+GZ(nv1,nv2,iseq)
else
xi=egx(i)
yi=egy(i)
zi=egz(i)
endif
else
cycle
endif
! convert to cartesian
xi = 0.87*xi
yi = 0.87*yi
zi = 0.87*zi
else
! use CA coordinates instead for efficiency
! xi = nx_coord(i)
! yi = ny_coord(i)
! zi = nz_coord(i)
if(mv(i).gt.0)then
xi=real(x(i))
yi=real(y(i))
zi=real(z(i))
else
xi=ex(i)
yi=ey(i)
zi=ez(i)
endif
! convert to cartesian if using CA (but not for N)
xi = 0.87*xi
yi = 0.87*yi
zi = 0.87*zi
endif ! end if aTypei
if (aTypep.ne.NTYPE) then
if (aTypep.eq.CATYPE) then
if(mv(p).gt.0)then
xp=real(x(p))
yp=real(y(p))
zp=real(z(p))
else
xp=ex(p)
yp=ey(p)
zp=ez(p)
endif
else if (aTypep == CBTYPE) then
if(mv(p).gt.0)then
xp=x(p)+HX(ica(p-1),ica(p),pseq)
yp=y(p)+HY(ica(p-1),ica(p),pseq)
zp=z(p)+HZ(ica(p-1),ica(p),pseq)
else
xp=etx(p)
yp=ety(p)
zp=etz(p)
endif
else if (aTypep == SCTYPE) then
if(mv(p).gt.0)then
axp=real(x(p))
ayp=real(y(p))
azp=real(z(p))
nv1p=ica(p-1)
nv2p=ica(p)
xp=axp+GX(nv1p,nv2p,pseq)
yp=ayp+GY(nv1p,nv2p,pseq)
zp=azp+GZ(nv1p,nv2p,pseq)
else
xp=egx(p)
yp=egy(p)
zp=egz(p)
endif
else
cycle
endif
! convert to cartesian
xp = 0.87*xp
yp = 0.87*yp
zp = 0.87*zp
else
! use CA coordinates instead for efficiency
! xp = nx_coord(p)
! yp = ny_coord(p)
! zp = nz_coord(p)
if(mv(p).gt.0)then
xp=real(x(p))
yp=real(y(p))
zp=real(z(p))
else
xp=ex(p)
yp=ey(p)
zp=ez(p)
endif
! convert to cartesian if using CA (but not for N)
xp = 0.87*xp
yp = 0.87*yp
zp = 0.87*zp
endif ! end if aTypep
dist=sqrt((xi-xp)**2+(yi-yp)**2+(zi-zp)**2)
cut = ambigScoreDistBound(ambigIndex,j,DISTINDEX)
distDiff = dist-cut
if (distDiff > 0) then
if (distDiff < minDistDiff) then
minDistDiff = distDiff
endif
else
minDistDiff = 0 ! restraint satisfied
exit
endif
end do ! end for each contact in ambig restraint
if (minDistDiff > 0) then
if (minDistDiff < LINPENALTYBOUND) then
energy = energy + ambigPenalty(ambigIndex)*minDistDiff/LINPENALTYBOUND
else
energy = energy + ambigPenalty(ambigIndex)
numVio = numVio +1
if (printViol) then
do j=1,ambigCounts(ambigIndex)
i = ambigRestraints(ambigIndex,j,RES1INDEX)
p = ambigRestraints(ambigIndex,j,RES2INDEX)
write (*,*) 'ambig:',ambigIndex,'viol: ',i,p
end do
endif
endif
endif
endif ! end if reComputeFlag
end do ! end for each ambig restraint
noeRestraintPenalty = energy
violCount = numVio
! write (*,*) 'NumViols: ',numVio
! if (endI-start+1 == 102) then
! write (*,*) start,endi,energy
! endif;
! if (end-start > 100) then
! write (*,'(i5,i5,f12.2)') start, end, energy
! endif
! write(*,*) 'NOEEnergy',energy
return
end function
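! Worked example of the penalty ramp above (numbers illustrative, since
! LINPENALTYBOUND is defined elsewhere in the module): with
! LINPENALTYBOUND = 2.0 and a restraint score of 1.0, a violation of
! distDiff = 0.5 adds 0.25 to the energy via the linear ramp, while
! distDiff = 3.0 adds the full score of 1.0 and is the only case counted
! in numVio.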
!real function noeRestraintPenaltyAnti(start,endI)
! implicit none
! integer,intent(in)::start
! integer,intent(in)::endI
! real:: energy
! real:: axi,ayi,azi,axp,ayp,azp,agxi,agyi,agzi,agxp,agyp,agzp
! real:: dist, cut
! integer:: i,j,p,iseq,pseq,nv1,nv2, numVio
! integer, parameter:: ndim=1000
! integer, parameter:: nvec=416
! integer seq(ndim),sec(ndim)
! common/seqe/seq,sec
! integer mv(ndim)
! common/chainm/mv
! integer ica(0:ndim),x(ndim),y(ndim),z(ndim)
! common/chain1/ica,x,y,z
! real ex(ndim), ey(ndim), ez(ndim)
! common/echain1/ex,ey,ez
! real egx(ndim),egy(ndim),egz(ndim)
! common/echain2/egx,egy,egz
! real gx(nvec,nvec,0:19),gy(nvec,nvec,0:19),gz(nvec,nvec,0:19)
! common/sg/gx,gy,gz
! energy = 0
! numVio = 0
! do i=start,endI
! iseq=seq(i)
! if(mv(i).gt.0)then
! axi=real(x(i))
! ayi=real(y(i))
! azi=real(z(i))
! nv1=ica(i-1)
! nv2=ica(i)
! agxi=axi+GX(nv1,nv2,iseq)
! agyi=ayi+GY(nv1,nv2,iseq)
! agzi=azi+GZ(nv1,nv2,iseq)
! else
! axi=ex(i)
! ayi=ey(i)
! azi=ez(i)
! agxi=egx(i)
! agyi=egy(i)
! agzi=egz(i)
! endif
! do j=1,numContactPartners(i,ASCSINDEX)
! p = contactPartners(i,ASCSINDEX,j)
! if (i < p .or. p < start) then ! to prevent duplicate computation
! pseq=seq(p)
! if(mv(p).gt.0)then
! agxp=real(x(p))+gx(ica(p-1),ica(p),pseq)
! agyp=real(y(p))+gy(ica(p-1),ica(p),pseq)
! agzp=real(z(p))+gz(ica(p-1),ica(p),pseq)
! else
! agxp=egx(p)
! agyp=egy(p)
! agzp=egz(p)
! endif
! dist=0.87*sqrt((agxi-agxp)**2+(agyi-agyp)**2+(agzi-agzp)**2) ! in cartesian instead of grid
! cut = ascs(i,p,DISTINDEX);
! if (dist < cut) then
! energy = energy + ascs(i,p,SCOREINDEX)
! numVio = numVio +1
! endif
! endif
! end do
! end do
! noeRestraintPenaltyAnti = energy
! ! if (endI-start+1 == 102) then
! ! write (*,*) start,endi,energy
! ! endif;
!! if (end-start > 100) then
!! write (*,'(i5,i5,f12.2)') start, end, energy
!! endif
!! write(*,*) 'NOEEnergy',energy
! return
!end function
real function torsionRestraintPenalty(phi,psi,phiPre,psiPre,start,end,Lch)
implicit none
integer,intent(in):: Lch
integer,intent(in):: start
integer,intent(in):: end
real,dimension(Lch),intent(in):: phi, psi, phiPre, psiPre
integer:: i,s,e
real:: diff,energy
s = start
e = end
! don't consider the terminal residues
if (s < 4) then
s = 4
endif
if (e > Lch-4) then
e = Lch-4
endif
energy = 0
do i=s,e
if (phiPre(i) .ne. INVALIDANGLE .and. phi(i) .ne. INVALIDANGLE) then
diff = abs(phiPre(i)-phi(i)) ! max diff is 180
if (diff > 180.0) then
diff = 360.0-diff
endif
if (diff > 90.0) then
energy = energy+2.0*diff/180.0
endif
endif
if (psiPre(i) .ne. INVALIDANGLE .and. psi(i) .ne. INVALIDANGLE) then
diff = abs(psiPre(i)-psi(i))
if (diff > 180.0) then
diff = 360.0-diff
endif
if (diff > 90.0) then
energy = energy+1.5*diff/180.0 ! psi has larger range
endif
endif
end do
torsionRestraintPenalty = energy
return
end function
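! Worked example of the angle wrapping above: phiPre = 170 and phi = -170
! give a raw difference of 340, wrapped to 360-340 = 20, which is below the
! 90-degree threshold and adds no penalty; phiPre = 60 and phi = -60 give a
! difference of 120 and add 2.0*120/180 = 1.33 to the energy.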
! computes the phi, psi angles and stores them in phi, psi
! residues whose angles cannot be computed (e.g. the chain termini) are given INVALIDANGLE as their angles
subroutine getPhiPsi(cax,cay,caz,nx,ny,nz,cx,cy,cz,phi,psi,Lch)
implicit none
integer,intent(in):: Lch
real,dimension(Lch),intent(in):: cax, cay, caz ! ca coords
real,dimension(Lch),intent(in):: nx, ny, nz ! backbone n coords
real,dimension(Lch),intent(in):: cx, cy, cz ! backbone carbonyl C coords
real,dimension(Lch),intent(out):: phi, psi
real:: n(3), ca(3), c(3), n1(3), c1(3) ! n1 = Ni+1, c1=Ci-1
integer:: i
real:: u(3), v(3), w(3), n13(3), n24(3)
do i=1,Lch
phi(i) = INVALIDANGLE
psi(i) = INVALIDANGLE
end do
do i=2,Lch-1
n = (/nx(i),ny(i),nz(i)/)
ca = (/cax(i),cay(i),caz(i)/)
c = (/cx(i),cy(i),cz(i)/)
n1 = (/nx(i+1),ny(i+1),nz(i+1)/)
c1 = (/cx(i-1),cy(i-1),cz(i-1)/)
! phi
u = n-c1
v = ca-n
w = c-ca
! u x v
n13(1)=u(2)*v(3)-u(3)*v(2)
n13(2)=u(3)*v(1)-u(1)*v(3)
n13(3)=u(1)*v(2)-u(2)*v(1)
n13 = n13/sqrt(sum(n13**2))
n24(1)=v(2)*w(3)-v(3)*w(2)
n24(2)=v(3)*w(1)-v(1)*w(3)
n24(3)=v(1)*w(2)-v(2)*w(1)
n24 = n24/sqrt(sum(n24**2))
phi(i) = sign(acos(dot_product(n13,n24)), dot_product(n13,c-ca))
phi(i) = pi180*phi(i)
! psi
u = ca-n
v = c-ca
w = n1-c
! u x v
n13(1)=u(2)*v(3)-u(3)*v(2)
n13(2)=u(3)*v(1)-u(1)*v(3)
n13(3)=u(1)*v(2)-u(2)*v(1)
n13 = n13/sqrt(sum(n13**2))
n24(1)=v(2)*w(3)-v(3)*w(2)
n24(2)=v(3)*w(1)-v(1)*w(3)
n24(3)=v(1)*w(2)-v(2)*w(1)
n24 = n24/sqrt(sum(n24**2))
psi(i) = sign(acos(dot_product(n13,n24)), dot_product(n13,n1-c))
psi(i) = pi180*psi(i)
end do
end subroutine
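! Worked example of the dihedral computation above (pi180 is assumed here to
! convert radians to degrees): for the four points (0,0,0), (1,0,0), (1,1,0),
! (1,1,1) the bond vectors are u=(1,0,0), v=(0,1,0), w=(0,0,1); the unit
! normals are n13 = u x v = (0,0,1) and n24 = v x w = (1,0,0), so
! acos(dot(n13,n24)) = 90 degrees, and the sign term dot(n13,w) = 1 > 0
! makes the torsion +90 degrees.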
! torsion angles from start+1 to end-1 (inclusive)
subroutine getPhiPsi_ij(cax,cay,caz,nx,ny,nz,cx,cy,cz,phi,psi,start,end,Lch)
implicit none
integer,intent(in):: Lch
real,dimension(Lch),intent(in):: cax, cay, caz ! ca coords
real,dimension(Lch),intent(in):: nx, ny, nz ! backbone n coords
real,dimension(Lch),intent(in):: cx, cy, cz ! backbone carbonyl C coords
real,dimension(Lch),intent(out):: phi, psi
integer,intent(in)::start
integer,intent(in)::end
real:: n(3), ca(3), c(3), n1(3), c1(3) ! n1 = Ni+1, c1=Ci-1
integer:: i
real:: u(3), v(3), w(3), n13(3), n24(3)
do i=start,end
phi(i) = INVALIDANGLE
psi(i) = INVALIDANGLE
end do
do i=start+1,end-1
n = (/nx(i),ny(i),nz(i)/)
ca = (/cax(i),cay(i),caz(i)/)
c = (/cx(i),cy(i),cz(i)/)
n1 = (/nx(i+1),ny(i+1),nz(i+1)/)
c1 = (/cx(i-1),cy(i-1),cz(i-1)/)
! phi
u = n-c1
v = ca-n
w = c-ca
! u x v
n13(1)=u(2)*v(3)-u(3)*v(2)
n13(2)=u(3)*v(1)-u(1)*v(3)
n13(3)=u(1)*v(2)-u(2)*v(1)
n13 = n13/sqrt(sum(n13**2))
n24(1)=v(2)*w(3)-v(3)*w(2)
n24(2)=v(3)*w(1)-v(1)*w(3)
n24(3)=v(1)*w(2)-v(2)*w(1)
n24 = n24/sqrt(sum(n24**2))
phi(i) = sign(acos(dot_product(n13,n24)), dot_product(n13,c-ca))
phi(i) = pi180*phi(i)
! psi
u = ca-n
v = c-ca
w = n1-c
! u x v
n13(1)=u(2)*v(3)-u(3)*v(2)
n13(2)=u(3)*v(1)-u(1)*v(3)
n13(3)=u(1)*v(2)-u(2)*v(1)
n13 = n13/sqrt(sum(n13**2))
n24(1)=v(2)*w(3)-v(3)*w(2)
n24(2)=v(3)*w(1)-v(1)*w(3)
n24(3)=v(1)*w(2)-v(2)*w(1)
n24 = n24/sqrt(sum(n24**2))
psi(i) = sign(acos(dot_product(n13,n24)), dot_product(n13,n1-c))
psi(i) = pi180*psi(i)
end do
end subroutine
! superimposition of two sets of coordinates + optional transformation of tpoints
! coords2 (and tpoints) are superposed onto coords1
! (only tpoints are modified; coords1 and coords2 values stay the same)
! returns rmsd of coords2 -> coords1
real function superimpose2(coords1x,coords1y,coords1z, &
coords2x,coords2y,coords2z, &
npoints, &
tpointsx, tpointsy, tpointsz, &
ntpoints)
implicit none
integer, intent(in):: npoints
integer, intent(in):: ntpoints
real,dimension(npoints), intent(inout):: coords1x
real,dimension(npoints), intent(inout):: coords1y
real,dimension(npoints), intent(inout):: coords1z
real,dimension(npoints), intent(inout):: coords2x
real,dimension(npoints), intent(inout):: coords2y
real,dimension(npoints), intent(inout):: coords2z
real,dimension(ntpoints), intent(inout):: tpointsx
real,dimension(ntpoints), intent(inout):: tpointsy
real,dimension(ntpoints), intent(inout):: tpointsz
real,dimension(3,3):: mat_s, mat_a, mat_b, mat_g, mat_u, tmp_mat
real:: val, d, alpha, beta, gamma, x, y, z
real:: cx1, cy1, cz1, cx2, cy2, cz2, tmpx, tmpy, tmpz
integer:: i,j,k,n
cx1=0.
cy1=0.
cz1=0.
cx2=0.
cy2=0.
cz2=0.
do i=1,npoints
cx1 = cx1+coords1x(i)
cy1 = cy1+coords1y(i)
cz1 = cz1+coords1z(i)
cx2 = cx2+coords2x(i)
cy2 = cy2+coords2y(i)
cz2 = cz2+coords2z(i)
end do
cx1=cx1/real(npoints)
cy1=cy1/real(npoints)
cz1=cz1/real(npoints)
cx2=cx2/real(npoints)
cy2=cy2/real(npoints)
cz2=cz2/real(npoints)
do i=1,npoints
coords1x(i)=coords1x(i)-cx1
coords1y(i)=coords1y(i)-cy1
coords1z(i)=coords1z(i)-cz1
coords2x(i)=coords2x(i)-cx2
coords2y(i)=coords2y(i)-cy2
coords2z(i)=coords2z(i)-cz2
end do
do i=1,ntpoints
tpointsx(i) = tpointsx(i)-cx2
tpointsy(i) = tpointsy(i)-cy2
tpointsz(i) = tpointsz(i)-cz2
end do
do i=1,3
do j=1,3
if (i==j) then
mat_s(i,j)=1.0
mat_a(i,j)=1.0
mat_b(i,j)=1.0
mat_g(i,j)=1.0
else
mat_s(i,j)=0.0
mat_a(i,j)=0.0
mat_b(i,j)=0.0
mat_g(i,j)=0.0
endif
mat_u(i,j)=0.
end do
end do
do n=1,npoints
mat_u(1,1)=mat_u(1,1)+coords1x(n)*coords2x(n)
mat_u(1,2)=mat_u(1,2)+coords1x(n)*coords2y(n)
mat_u(1,3)=mat_u(1,3)+coords1x(n)*coords2z(n)
mat_u(2,1)=mat_u(2,1)+coords1y(n)*coords2x(n)
mat_u(2,2)=mat_u(2,2)+coords1y(n)*coords2y(n)
mat_u(2,3)=mat_u(2,3)+coords1y(n)*coords2z(n)
mat_u(3,1)=mat_u(3,1)+coords1z(n)*coords2x(n)
mat_u(3,2)=mat_u(3,2)+coords1z(n)*coords2y(n)
mat_u(3,3)=mat_u(3,3)+coords1z(n)*coords2z(n)
end do
do i=1,3
do j=1,3
tmp_mat(i,j)=0.
end do
end do
do
d=mat_u(3,2)-mat_u(2,3)
if (d==0) then
alpha=0
else
alpha=atan(d/(mat_u(2,2)+mat_u(3,3)))
endif
if (cos(alpha)*(mat_u(2,2)+mat_u(3,3))+sin(alpha)*(mat_u(3,2)-mat_u(2,3))<0.0) then
alpha=alpha+M_PI
endif
mat_a(2,2)=cos(alpha)
mat_a(3,3)=cos(alpha)
mat_a(3,2)=sin(alpha)
mat_a(2,3)=-mat_a(3,2)
do i=1,3
do j=1,3
do k=1,3
tmp_mat(i,j)=tmp_mat(i,j)+mat_u(i,k)*mat_a(j,k)
end do
end do
end do
do i=1,3
do j=1,3
mat_u(i,j)=tmp_mat(i,j)
tmp_mat(i,j)=0.
end do
end do
do i=1,3
do j=1,3
do k=1,3
tmp_mat(i,j)=tmp_mat(i,j)+mat_a(i,k)*mat_s(k,j)
end do
end do
end do
do i=1,3
do j=1,3
mat_s(i,j)=tmp_mat(i,j)
tmp_mat(i,j)=0.
end do
end do
d=mat_u(1,3)-mat_u(3,1)
if (d==0) then
beta=0
else
beta=atan(d/(mat_u(1,1)+mat_u(3,3)))
endif
if (cos(beta)*(mat_u(1,1)+mat_u(3,3))+sin(beta)*(mat_u(1,3)-mat_u(3,1))<0.0) then
beta=beta+M_PI
endif
mat_b(1,1)=cos(beta)
mat_b(3,3)=cos(beta)
mat_b(1,3)=sin(beta)
mat_b(3,1)=-mat_b(1,3)
do i=1,3
do j=1,3
do k=1,3
tmp_mat(i,j)=tmp_mat(i,j)+mat_u(i,k)*mat_b(j,k)
end do
end do
end do
do i=1,3
do j=1,3
mat_u(i,j)=tmp_mat(i,j)
tmp_mat(i,j)=0.
end do
end do
do i=1,3
do j=1,3
do k=1,3
tmp_mat(i,j)=tmp_mat(i,j)+mat_b(i,k)*mat_s(k,j)
end do
end do
end do
do i=1,3
do j=1,3
mat_s(i,j)=tmp_mat(i,j)
tmp_mat(i,j)=0.
end do
end do
d=mat_u(2,1)-mat_u(1,2)
if (d==0) then
gamma=0
else
gamma=atan(d/(mat_u(1,1)+mat_u(2,2)))
endif
if (cos(gamma)*(mat_u(1,1)+mat_u(2,2))+sin(gamma)*(mat_u(2,1)-mat_u(1,2))<0.0) then
gamma=gamma+M_PI
endif
mat_g(1,1)=cos(gamma)
mat_g(2,2)=cos(gamma)
mat_g(2,1)=sin(gamma)
mat_g(1,2)=-mat_g(2,1)
do i=1,3
do j=1,3
do k=1,3
tmp_mat(i,j)=tmp_mat(i,j)+mat_u(i,k)*mat_g(j,k)
end do
end do
end do
do i=1,3
do j=1,3
mat_u(i,j)=tmp_mat(i,j)
tmp_mat(i,j)=0.
end do
end do
do i=1,3
do j=1,3
do k=1,3
tmp_mat(i,j)=tmp_mat(i,j)+mat_g(i,k)*mat_s(k,j)
end do
end do
end do
do i=1,3
do j=1,3
mat_s(i,j)=tmp_mat(i,j)
tmp_mat(i,j)=0.
end do
end do
val=abs(alpha)+abs(beta)+abs(gamma)
if (val<=0.001) exit
end do
val=0.
do i=1,npoints
x=coords2x(i)
y=coords2y(i)
z=coords2z(i)
tmpx=x*mat_s(1,1)+y*mat_s(1,2)+z*mat_s(1,3)
tmpy=x*mat_s(2,1)+y*mat_s(2,2)+z*mat_s(2,3)
tmpz=x*mat_s(3,1)+y*mat_s(3,2)+z*mat_s(3,3)
x=coords1x(i)-tmpx
y=coords1y(i)-tmpy
z=coords1z(i)-tmpz
val=val+x*x+y*y+z*z
end do
do i=1,ntpoints
x=tpointsx(i)
y=tpointsy(i)
z=tpointsz(i)
tpointsx(i)=x*mat_s(1,1)+y*mat_s(1,2)+z*mat_s(1,3)
tpointsy(i)=x*mat_s(2,1)+y*mat_s(2,2)+z*mat_s(2,3)
tpointsz(i)=x*mat_s(3,1)+y*mat_s(3,2)+z*mat_s(3,3)
end do
do i=1,npoints
coords1x(i)=coords1x(i)+cx1
coords1y(i)=coords1y(i)+cy1
coords1z(i)=coords1z(i)+cz1
coords2x(i)=coords2x(i)+cx2
coords2y(i)=coords2y(i)+cy2
coords2z(i)=coords2z(i)+cz2
end do
do i=1,ntpoints
tpointsx(i)=tpointsx(i)+cx1
tpointsy(i)=tpointsy(i)+cy1
tpointsz(i)=tpointsz(i)+cz1
end do
superimpose2 = sqrt(val/real(npoints))
return
end function
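! Illustrative usage sketch (variable names hypothetical): superpose a model
! fragment onto a reference fragment and carry extra coordinates along, e.g.
! rmsd = superimpose2(refx,refy,refz, modx,mody,modz, nfit, &
! allx,ally,allz, nall)
! On return only allx/ally/allz have been rotated and translated into the
! reference frame; the ref and mod arrays are numerically unchanged, and
! rmsd is computed over the nfit fitted point pairs.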
! computes the Euclidean distance between two 3D points
real function calc_distance(x1,y1,z1,x2,y2,z2)
implicit none
real,intent(in):: x1,y1,z1,x2,y2,z2
real:: dx,dy,dz
real:: dist2
dx = (x1) - (x2)
dy = (y1) - (y2)
dz = (z1) - (z2)
if (dx .ne. 0 .or. dy .ne. 0 .or. dz .ne. 0 ) then
dist2 = dx*dx+dy*dy+dz*dz
calc_distance = sqrt(dist2)
else
calc_distance = 0.0
endif
return
end function
! r14 chiral distance: the 1-4 distance, negated when the triple product of
! the three connecting vectors is negative (left-handed configuration)
real function calc_r14(x1,y1,z1,x2,y2,z2,x3,y3,z3,x4,y4,z4)
implicit none
real,intent(in):: x1,y1,z1,x2,y2,z2,x3,y3,z3,x4,y4,z4
real:: r, dx, dy, dz
real:: vx1, vy1, vz1, vx2, vy2, vz2, vx3, vy3, vz3
real:: hand
dx = x4-x1
dy = y4-y1
dz = z4-z1
r = sqrt(dx*dx+dy*dy+dz*dz)
vx1=x2-x1
vy1=y2-y1
vz1=z2-z1
vx2=x3-x2
vy2=y3-y2
vz2=z3-z2
vx3=x4-x3
vy3=y4-y3
vz3=z4-z3
hand = (vy1*vz2-vy2*vz1)*vx3+(vz1*vx2-vz2*vx1)*vy3+(vx1*vy2-vx2*vy1)*vz3
if (hand<0) then
r=-r
endif
calc_r14 = r
return
end function
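! Worked example: for the points (0,0,0), (1,0,0), (1,1,0), (1,1,1) the 1-4
! distance is sqrt(3) ~ 1.732 and the triple product (v1 x v2).v3 = +1, so
! calc_r14 returns +1.732; mirroring the fourth point to (1,1,-1) flips the
! sign of the triple product and the function returns -1.732.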
! whitespace tokenizer, analogous to C's strtok
! initial call: strtok(string, len(string), startIndex=1, endIndex=1)
! the returned indices startIndex, endIndex are inclusive
! returns .true. if a new token was found; .false. otherwise. If .false.,
! startIndex and endIndex are undefined
! subsequent calls pass startIndex = previous endIndex+1 and endIndex = startIndex
! e.g.
! startIndex = 1
! endIndex = 1
! length = len(line)
! status = strtok(line, length, startIndex, endIndex)
! do while (status)
!   ! process token line(startIndex:endIndex)
!   startIndex = endIndex+1
!   endIndex = startIndex
!   status = strtok(line, length, startIndex, endIndex)
! end do
logical function strtok(strin,strlen,startIndex,endIndex)
implicit none
character(len=strlen), intent(in):: strin
integer, intent(in):: strlen
integer, intent(inout):: startIndex, endIndex
character(len=1), parameter:: tab = char(9)
character(len=1), parameter:: newline = char(10)
character(len=1), parameter:: ff = char(12)
character(len=1), parameter:: cr = char(13)
character(len=1), parameter:: ws(5) = (/' ',tab,newline,cr,ff /)
integer:: i,j
logical:: hasDelim, hasChar
if (endIndex.gt.strlen .or. startIndex.gt.strlen) then
strtok = .false.
return
endif
! advance startIndex until char no longer white space
do i=startIndex,strlen
hasDelim = .false.
do j=1,5
if (strin(i:i).eq.ws(j)) then
hasDelim = .true.
exit
endif
end do
if (.not.hasDelim) then
exit
else
startIndex = startIndex+1
endif
end do
! might have reached the end
if (startIndex.gt.strlen) then
strtok = .false.
return
endif
! check that the char at startIndex is not a delimiter
! (hasChar is reset on every call; initializing it in its declaration would
! give it the SAVE attribute and carry state across calls)
hasChar = .true.
do j=1,5
if (strin(startIndex:startIndex).eq.ws(j)) then
hasChar = .false.
exit
endif
end do
if (.not.haschar) then
strtok = .false.
return
endif
! move forward until encounter first white space or reach end of string
endIndex = startIndex
do i=startIndex,strlen
hasDelim = .false.
do j=1,5
if (strin(i:i).eq.ws(j)) then
hasDelim = .true.
exit
endif
end do
if (hasDelim) then
exit
endif
end do
if (hasDelim) then
endIndex = i-1 ! stopped at first white space
else
endIndex = strlen ! no delimiter found; the do loop leaves i = strlen+1, so clamp to strlen
endif
strtok = .true.
return
end function strtok
! requires pulchra.dat
subroutine init_backbone_bins()
implicit none
character(len=25):: line
integer:: i,j,b1,b2,b3
real:: r1,r2,r3
open(unit=87,file='pulchra.dat',status='old')
do i=1,num_stat
read(87,'(a25)') line
read(line,150) b1, b2, b3
nco_stat_bin(i,1) = b1
nco_stat_bin(i,2) = b2
nco_stat_bin(i,3) = b3
do j=1,8
read(87,'(a25)') line
read(line,160) r1,r2,r3
nco_stat(i,j,1) = r1
nco_stat(i,j,2) = r2
nco_stat(i,j,3) = r3
end do
end do
do i=1,num_stat_pro
read(87,'(a25)') line
read(line,150) b1, b2, b3
nco_stat_pro_bin(i,1) = b1
nco_stat_pro_bin(i,2) = b2
nco_stat_pro_bin(i,3) = b3
do j=1,8
read(87,'(a25)') line
read(line,160) r1,r2,r3
nco_stat_pro(i,j,1) = r1
nco_stat_pro(i,j,2) = r2
nco_stat_pro(i,j,3) = r3
end do
end do
close(87)
150 format(3i4)
160 format(3f8.3)
end subroutine
end module
|
{"hexsha": "397065ba9863641975816eaa7b2da93a9590859e", "size": 79213, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "itasser/nmr.f90", "max_stars_repo_name": "fanufree/assign-it", "max_stars_repo_head_hexsha": "7eae0828aea4964f1d459a14a0e13025fefc4c9a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "itasser/nmr.f90", "max_issues_repo_name": "fanufree/assign-it", "max_issues_repo_head_hexsha": "7eae0828aea4964f1d459a14a0e13025fefc4c9a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "itasser/nmr.f90", "max_forks_repo_name": "fanufree/assign-it", "max_forks_repo_head_hexsha": "7eae0828aea4964f1d459a14a0e13025fefc4c9a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.4854157597, "max_line_length": 142, "alphanum_fraction": 0.510938861, "num_tokens": 25079}
|
# Authors: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
from numpy.polynomial.legendre import legval
from scipy import linalg
from ..fixes import einsum
from ..utils import logger, warn, verbose
from ..io.pick import pick_types, pick_channels, pick_info
from ..surface import _normalize_vectors
from ..bem import _fit_sphere
from ..forward import _map_meg_channels
def _calc_g(cosang, stiffness=4, num_lterms=50):
"""Calculate spherical spline g function between points on a sphere.
Parameters
----------
cosang : array-like of float, shape(n_channels, n_channels)
cosine of angles between pairs of points on a spherical surface. This
is equivalent to the dot product of unit vectors.
stiffness : float
stiffness of the spline.
num_lterms : int
number of Legendre terms to evaluate.
Returns
-------
    G : np.ndarray of float, shape(n_channels, n_channels)
The G matrix.
"""
factors = [(2 * n + 1) / (n ** stiffness * (n + 1) ** stiffness *
4 * np.pi) for n in range(1, num_lterms + 1)]
return legval(cosang, [0] + factors)
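# Illustrative sketch (not part of the library API): because cosang is
# symmetric and legval is applied elementwise, G is symmetric, and it is
# largest where points coincide (cosang == 1, since P_n(1) == 1 for all n):
#
#     import numpy as np
#     pos = np.random.randn(5, 3)
#     pos /= np.linalg.norm(pos, axis=1, keepdims=True)  # project to sphere
#     G = _calc_g(pos @ pos.T)
#     assert np.allclose(G, G.T)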
def _make_interpolation_matrix(pos_from, pos_to, alpha=1e-5):
"""Compute interpolation matrix based on spherical splines.
Implementation based on [1]
Parameters
----------
    pos_from : np.ndarray of float, shape(n_good_sensors, 3)
        The positions to interpolate from.
    pos_to : np.ndarray of float, shape(n_bad_sensors, 3)
        The positions to interpolate to.
alpha : float
Regularization parameter. Defaults to 1e-5.
Returns
-------
    interpolation : np.ndarray of float, shape(len(pos_to), len(pos_from))
The interpolation matrix that maps good signals to the location
of bad signals.
References
----------
[1] Perrin, F., Pernier, J., Bertrand, O. and Echallier, JF. (1989).
Spherical splines for scalp potential and current density mapping.
Electroencephalography Clinical Neurophysiology, Feb; 72(2):184-7.
"""
pos_from = pos_from.copy()
pos_to = pos_to.copy()
# normalize sensor positions to sphere
_normalize_vectors(pos_from)
_normalize_vectors(pos_to)
# cosine angles between source positions
cosang_from = pos_from.dot(pos_from.T)
cosang_to_from = pos_to.dot(pos_from.T)
G_from = _calc_g(cosang_from)
G_to_from = _calc_g(cosang_to_from)
if alpha is not None:
G_from.flat[::len(G_from) + 1] += alpha
n_channels = G_from.shape[0] # G_from should be square matrix
C = np.r_[np.c_[G_from, np.ones((n_channels, 1))],
np.c_[np.ones((1, n_channels)), 0]]
C_inv = linalg.pinv(C)
interpolation = np.c_[G_to_from,
np.ones((G_to_from.shape[0], 1))].dot(C_inv[:, :-1])
return interpolation
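# Illustrative usage sketch (hypothetical arrays, not a library entry point):
# the returned matrix maps data recorded at pos_from onto the pos_to
# locations with a plain matrix product.
#
#     mapping = _make_interpolation_matrix(pos_good, pos_bad)  # (n_bad, n_good)
#     data_bad = mapping.dot(data_good)                        # (n_bad, n_times)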
def _do_interp_dots(inst, interpolation, goods_idx, bads_idx):
"""Dot product of channel mapping matrix to channel data."""
from ..io.base import BaseRaw
from ..epochs import BaseEpochs
from ..evoked import Evoked
if isinstance(inst, (BaseRaw, Evoked)):
inst._data[bads_idx] = interpolation.dot(inst._data[goods_idx])
elif isinstance(inst, BaseEpochs):
inst._data[:, bads_idx, :] = einsum(
'ij,xjy->xiy', interpolation, inst._data[:, goods_idx, :])
else:
raise ValueError('Inputs of type {0} are not supported'
.format(type(inst)))
@verbose
def _interpolate_bads_eeg(inst, verbose=None):
"""Interpolate bad EEG channels.
Operates in place.
Parameters
----------
inst : mne.io.Raw, mne.Epochs or mne.Evoked
The data to interpolate. Must be preloaded.
"""
    bads_idx = np.zeros(len(inst.ch_names), dtype=bool)
    goods_idx = np.zeros(len(inst.ch_names), dtype=bool)
picks = pick_types(inst.info, meg=False, eeg=True, exclude=[])
inst.info._check_consistency()
bads_idx[picks] = [inst.ch_names[ch] in inst.info['bads'] for ch in picks]
if len(picks) == 0 or bads_idx.sum() == 0:
return
goods_idx[picks] = True
goods_idx[bads_idx] = False
pos = inst._get_channel_positions(picks)
# Make sure only EEG are used
bads_idx_pos = bads_idx[picks]
goods_idx_pos = goods_idx[picks]
pos_good = pos[goods_idx_pos]
pos_bad = pos[bads_idx_pos]
# test spherical fit
radius, center = _fit_sphere(pos_good)
distance = np.sqrt(np.sum((pos_good - center) ** 2, 1))
distance = np.mean(distance / radius)
if np.abs(1. - distance) > 0.1:
warn('Your spherical fit is poor, interpolation results are '
'likely to be inaccurate.')
logger.info('Computing interpolation matrix from {0} sensor '
'positions'.format(len(pos_good)))
interpolation = _make_interpolation_matrix(pos_good, pos_bad)
logger.info('Interpolating {0} sensors'.format(len(pos_bad)))
_do_interp_dots(inst, interpolation, goods_idx, bads_idx)
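# Illustrative call path (sketch): this helper is typically reached through
# the public interpolate_bads() method after marking channels as bad, e.g.:
#
#     raw.info['bads'] = ['EEG 053']  # hypothetical channel name
#     raw.interpolate_bads()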
@verbose
def _interpolate_bads_meg(inst, mode='accurate', verbose=None):
"""Interpolate bad channels from data in good channels.
Parameters
----------
inst : mne.io.Raw, mne.Epochs or mne.Evoked
The data to interpolate. Must be preloaded.
mode : str
Either `'accurate'` or `'fast'`, determines the quality of the
Legendre polynomial expansion used for interpolation. `'fast'` should
be sufficient for most applications.
verbose : bool, str, int, or None
If not None, override default verbose level (see :func:`mne.verbose`
and :ref:`Logging documentation <tut_logging>` for more).
"""
picks_meg = pick_types(inst.info, meg=True, eeg=False,
ref_meg=True, exclude=[])
picks_good = pick_types(inst.info, meg=True, eeg=False,
ref_meg=True, exclude='bads')
meg_ch_names = [inst.info['ch_names'][p] for p in picks_meg]
bads_meg = [ch for ch in inst.info['bads'] if ch in meg_ch_names]
# select the bad meg channel to be interpolated
if len(bads_meg) == 0:
picks_bad = []
else:
picks_bad = pick_channels(inst.info['ch_names'], bads_meg,
exclude=[])
# return without doing anything if there are no meg channels
if len(picks_meg) == 0 or len(picks_bad) == 0:
return
inst_info = inst.info.copy()
inst_info['comps'] = []
info_from = pick_info(inst_info, picks_good)
info_to = pick_info(inst_info, picks_bad)
mapping = _map_meg_channels(info_from, info_to, mode=mode)
_do_interp_dots(inst, mapping, picks_good, picks_bad)
|
{"hexsha": "9c6fdafa2353d4f82397df632436ffe9bfb9d6f1", "size": 6725, "ext": "py", "lang": "Python", "max_stars_repo_path": "mne/channels/interpolation.py", "max_stars_repo_name": "jasmainak/mne-python", "max_stars_repo_head_hexsha": "039cb1bf52770019bd48ac028795af0861792fa2", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mne/channels/interpolation.py", "max_issues_repo_name": "jasmainak/mne-python", "max_issues_repo_head_hexsha": "039cb1bf52770019bd48ac028795af0861792fa2", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-02-27T13:43:15.000Z", "max_issues_repo_issues_event_max_datetime": "2018-07-18T19:44:45.000Z", "max_forks_repo_path": "mne/channels/interpolation.py", "max_forks_repo_name": "jasmainak/mne-python", "max_forks_repo_head_hexsha": "039cb1bf52770019bd48ac028795af0861792fa2", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2017-03-05T20:44:07.000Z", "max_forks_repo_forks_event_max_datetime": "2017-03-05T20:44:07.000Z", "avg_line_length": 34.1370558376, "max_line_length": 78, "alphanum_fraction": 0.6520446097, "include": true, "reason": "import numpy,from numpy,from scipy", "num_tokens": 1730}
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2020 Ryan L. Collins <rlcollins@g.harvard.edu>
# and the Talkowski Laboratory
# Distributed under terms of the MIT license.
"""
Identify, cluster, and refine all significant segments per HPO from rCNV sliding window analysis
"""
from os import path
import gzip
import csv
import pybedtools as pbt
import numpy as np
import pandas as pd
from scipy.stats import norm
import string
import networkx as nx
from sklearn.cluster import KMeans
from operator import itemgetter
from itertools import combinations
from collections import Counter
import argparse
from sys import stdout
def format_stat(x, is_neg_log10=False, na_val=0):
"""
Helper function to format & convert p-values as needed
"""
if x == 'NA':
return float(na_val)
else:
if is_neg_log10:
return 10 ** -float(x)
else:
return float(x)
def get_sig_label(primary_p, secondary_p, n_nominal, primary_q, primary_p_cutoff,
secondary_p_cutoff=0.05, n_nominal_cutoff=2,
secondary_or_nominal=True, fdr_q_cutoff=0.05,
secondary_for_fdr=False):
"""
Checks if a window should be considered exome-wide or FDR significant
"""
# Run all comparisons
primary_p_is_sig = (primary_p < primary_p_cutoff)
secondary_p_is_sig = (secondary_p < secondary_p_cutoff)
n_nominal_is_sig = (n_nominal >= n_nominal_cutoff)
fdr_q_is_sig = (primary_q < fdr_q_cutoff)
# Determine secondary criteria
if secondary_p_is_sig and n_nominal_is_sig:
secondary_is_sig = True
elif secondary_or_nominal and (secondary_p_is_sig or n_nominal_is_sig):
secondary_is_sig = True
else:
secondary_is_sig = False
# First consider genome-wide significance
if primary_p_is_sig and secondary_is_sig:
return 'GWS'
# Second consider FDR significance
elif fdr_q_is_sig and secondary_for_fdr and secondary_is_sig:
return 'FDR'
elif fdr_q_is_sig and not secondary_for_fdr:
return 'FDR'
# Otherwise, non-significant
else:
return 'NS'
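# Worked example: with primary_p_cutoff = 1e-6, a window with primary_p = 1e-8,
# secondary_p = 0.01, and n_nominal = 3 passes both the primary and secondary
# criteria and is labeled 'GWS'; a window with primary_p = 1e-3 but
# primary_q = 0.01 (< fdr_q_cutoff) is labeled 'FDR' when
# secondary_for_fdr = False.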
def ci2se(ci):
"""
Converts a tuple of (lower, upper) confidence interval bounds to standard error
"""
ci = sorted(ci)
return (ci[1] - ci[0]) / (2 * 1.96)
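# e.g. ci2se((0.2, 1.0)) == (1.0 - 0.2) / (2 * 1.96) ~= 0.204, the standard
# error implied by a 95% confidence interval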
def make_cs(refine_res, cs_val=0.95, cs_merge_buffer=200000,
cov_df=None, jac_cutoff=0.8, sig_wids=[]):
"""
Merge windows into credible set based on sum of ranked PIPs
Pads CS with ±cs_merge_buffer; recommended to set to size of one full bin
"""
cs_windows = []
vals = pd.DataFrame.from_dict(refine_res, orient='index')
vals = vals.sort_values(by='PIP', ascending=False)
cs_sum = 0
for i in range(len(vals)):
cs_windows.append(vals.index[i])
cs_sum += vals.PIP[i]
if cs_sum >= cs_val:
break
# Convert window IDs back to pbt.BedTool and merge to nonredundant intervals
# Include all windows with sufficient covariance with any significant window from credset
if cov_df is not None:
cs_sig_windows = set(sig_wids).intersection(set(cs_windows))
best_cov = cov_df.loc[cov_df.index.isin(cs_sig_windows), :].max()
cs_windows = list(set(cs_windows).union(set(best_cov.index[best_cov >= jac_cutoff])))
credset_bt = pbt.BedTool('\n'.join([x.replace('_', '\t') for x in cs_windows]),
from_string=True).sort().merge()
# Otherwise, use fixed merge distance
else:
credset_bt = pbt.BedTool('\n'.join([x.replace('_', '\t') for x in cs_windows]),
from_string=True).sort().merge(d=cs_merge_buffer)
credset_coords = [[x.chrom, x.start, x.end] for x in credset_bt]
return credset_coords, credset_bt, cs_windows
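# Worked example of the ranked-PIP rule above: with PIPs {w1: 0.60, w2: 0.25,
# w3: 0.10, w4: 0.05} and cs_val = 0.95, the credible set takes w1, w2, and w3
# (cumulative PIP 0.60 -> 0.85 -> 0.95) and stops before w4.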
def refine(window_priors, window_info, null_variance=0.42 ** 2, cs_val=0.95,
cs_merge_buffer=200000, cov_df=None, jac_cutoff=0.8):
"""
Refines a set of windows given their prior probability and effect size estimates
Inputs:
window_priors : dict of windows with prior probabilities
window_info : dict of window association stats as processed by process_hpo()
        null_variance : float, prior variance of the lnOR effect sizes. By
                  default, this value is set such that there is a 5% chance
                  that ORs are > 2, as suggested by Wakefield 2009
"""
windows = list(window_priors.keys())
if len(windows) == 1:
refine_res = {windows[0] : {'ABF' : None, 'PIP' : 1}}
else:
refine_res = {}
# Compute ABF per window
for window in windows:
# From Wakefield, 2009
theta = window_info[window]['lnOR']
se = ci2se((window_info[window]['lnOR_lower'], window_info[window]['lnOR_upper']))
V = se ** 2
if V > 0:
zsq = (theta ** 2) / V
W = null_variance
ABF = np.sqrt((V+W) / V) * np.exp((-zsq / 2) * (W / (V+W)))
# Wakefield 2009 formulates BF relative to H0. We need to invert to
# obtain evidence & posterior for H1 (i.e., true non-zero effect)
# However, we also are only testing for a _positive_ effect in cases,
# so we will only invert ABF if theta >= 0. This is necessary to
# prevent sites with enrichments in controls being scored with high ABF
if theta >= 0:
ABF = 1 / ABF
else:
ABF = 0
refine_res[window] = {'ABF' : ABF}
# Compute PIP per window as fraction of total BF, adjusted for prior probs
        # As per Mahajan 2018 (T2D fine-mapping paper, Nat. Genet.)
posteriors = {}
for window in windows:
posteriors[window] = refine_res[window]['ABF'] * window_priors[window]
posterior_sum = np.sum(list(posteriors.values()))
for window in windows:
refine_res[window]['PIP'] = posteriors[window] / posterior_sum
# Define credible set as sum of ranked PIPs >= credset_val
sig_wids = [k for k, v in window_info.items() if v['gw_sig'] or v['fdr_sig']]
credset_coords, credset_bt, credset_windows \
= make_cs(refine_res, cs_val, cs_merge_buffer, cov_df, jac_cutoff, sig_wids)
return refine_res, credset_coords, credset_bt, credset_windows
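# Worked example of the Wakefield ABF above (illustrative numbers): a window
# with lnOR = 0.6 and SE = 0.2 gives V = 0.04 and z^2 = 9; under the default
# W = 0.42**2 = 0.1764, ABF_H0 = sqrt((V+W)/V) * exp(-(z^2/2) * W/(V+W))
# ~= 2.326 * exp(-3.668) ~= 0.059, which is inverted (theta >= 0) to ~16.8
# in favor of a true positive effect.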
def parse_stats(stats_in, primary_p_cutoff, p_is_neg_log10=True,
secondary_p_cutoff=0.05, n_nominal_cutoff=2,
secondary_or_nominal=True, fdr_q_cutoff=0.05,
secondary_for_fdr=False, sig_only=False, keep_windows=None,
refine_secondary=False):
"""
Parse all association stats for a single phenotype
"""
stats_dict = {}
if path.splitext(stats_in)[1] in '.gz .bgz .bgzip'.split():
csvin = gzip.open(stats_in, 'rt')
else:
csvin = open(stats_in)
reader = csv.reader(csvin, delimiter='\t')
for chrom, start, end, n_nominal, top_cohort, excluded_cohorts, case_freq, \
control_freq, lnOR, lnOR_lower, lnOR_upper, zscore, primary_p, primary_q, \
secondary_lnOR, secondary_lnOR_lower, secondary_lnOR_upper, \
secondary_zscore, secondary_p, secondary_q \
in reader:
# Skip header line
if chrom.startswith('#'):
continue
# Set window id
window = '_'.join([chrom, start, end])
# If optioned, restrict on window name
if keep_windows is not None:
if window not in keep_windows:
continue
# Clean up window data
primary_p = format_stat(primary_p, p_is_neg_log10, 1)
primary_q = format_stat(primary_q, p_is_neg_log10, 1)
secondary_p = format_stat(secondary_p, p_is_neg_log10, 1)
n_nominal = int(n_nominal)
sig_label = get_sig_label(primary_p, secondary_p, n_nominal, primary_q,
primary_p_cutoff, secondary_p_cutoff,
n_nominal_cutoff, secondary_or_nominal,
fdr_q_cutoff, secondary_for_fdr)
gw_sig = False
fdr_sig = False
if sig_label == 'GWS':
gw_sig = True
fdr_sig = True
elif sig_label == 'FDR':
fdr_sig = True
case_freq = format_stat(case_freq)
control_freq = format_stat(control_freq)
if refine_secondary:
use_lnOR = format_stat(secondary_lnOR)
use_lnOR_lower = format_stat(secondary_lnOR_lower)
use_lnOR_upper = format_stat(secondary_lnOR_upper)
use_zscore = format_stat(secondary_zscore)
else:
use_lnOR = format_stat(lnOR)
use_lnOR_lower = format_stat(lnOR_lower)
use_lnOR_upper = format_stat(lnOR_upper)
use_zscore = format_stat(zscore)
window_stats = {'case_freq' : case_freq, 'control_freq' : control_freq,
'lnOR' : use_lnOR, 'lnOR_lower' : use_lnOR_lower,
'lnOR_upper' : use_lnOR_upper, 'zscore' : use_zscore,
'primary_p' : primary_p, 'primary_q' : primary_q,
'secondary_p' : secondary_p, 'n_nominal' : n_nominal,
'gw_sig' : gw_sig, 'fdr_sig' : fdr_sig}
# Store window association stats
if sig_only:
if sig_label in 'GWS FDR'.split():
window_bt = pbt.BedTool('\t'.join([chrom, start, end, window]),
from_string=True)
window_stats['window_bt'] = window_bt
stats_dict[window] = window_stats
else:
window_bt = pbt.BedTool('\t'.join([chrom, start, end, window]),
from_string=True)
window_stats['window_bt'] = window_bt
stats_dict[window] = window_stats
csvin.close()
return stats_dict
def process_hpo(hpo, stats_in, primary_p_cutoff, p_is_neg_log10=True,
secondary_p_cutoff=0.05, n_nominal_cutoff=2,
secondary_or_nominal=True, fdr_q_cutoff=0.05,
secondary_for_fdr=False, block_merge_dist=1000000,
block_prefix='window_block', null_variance=0.42 ** 2,
refine_secondary=False, cs_val=0.95):
"""
Loads & processes all necessary data for a single phenotype
Returns a dict with the following entries:
sig_windows : dict of sig windows, with each entry corresponding to stats
for a single significant window
all_windows : dict of sig windows + all windows within block_merge_dist with
stats per window
blocks : pbt.BedTool of all clustered windows to be refined
"""
hpo_info = {'blocks' : {}}
# First pass: parse data for significant windows only
hpo_info['sig_windows'] = parse_stats(stats_in, primary_p_cutoff, p_is_neg_log10,
secondary_p_cutoff, n_nominal_cutoff,
secondary_or_nominal, fdr_q_cutoff,
secondary_for_fdr, sig_only=True,
refine_secondary=refine_secondary)
# Second pass: parse data for all windows within block_merge_dist of sig_windows
if len(hpo_info['sig_windows']) > 0:
# Make bt of significant windows
sig_window_bts = [g['window_bt'] for g in hpo_info['sig_windows'].values()]
if len(sig_window_bts) > 1:
sig_windows_bt = sig_window_bts[0].cat(*sig_window_bts[1:], postmerge=False).sort()
else:
sig_windows_bt = sig_window_bts[0]
# Intersect sig windows with all windows
all_windows_bt = pbt.BedTool(stats_in).cut(range(3)).sort()
nearby_windows_bt = all_windows_bt.closest(sig_windows_bt.sort(), d=True).\
filter(lambda x: int(x[-1]) > -1 and \
int(x[-1]) <= block_merge_dist).\
saveas()
nearby_windows = ['_'.join([x.chrom, str(x.start), str(x.end)]) for x in nearby_windows_bt]
# Gather window stats
hpo_info['all_windows'] = parse_stats(stats_in, primary_p_cutoff, p_is_neg_log10,
secondary_p_cutoff, n_nominal_cutoff,
secondary_or_nominal, fdr_q_cutoff,
secondary_for_fdr, sig_only=False,
keep_windows=nearby_windows,
refine_secondary=refine_secondary)
# Cluster significant windows into blocks to be refined
window_bts = [g['window_bt'] for g in hpo_info['all_windows'].values()]
if len(window_bts) > 1:
windows_bt = window_bts[0].cat(*window_bts[1:], postmerge=False).sort()
else:
windows_bt = window_bts[0]
blocks = windows_bt.merge(d=block_merge_dist, c=4, o='distinct')
# Assign block IDs
blocks_wids = {'_'.join([hpo, block_prefix, str(k)]) : block \
for k, block in enumerate(blocks)}
# Perform initial refinment of each block with flat prior
for block_id, block in blocks_wids.items():
windows = block[3].split(',')
window_priors = {window : 1 / len(windows) for window in windows}
refine_res, credset_coords, credset_bt, credset_windows \
= refine(window_priors, hpo_info['all_windows'], null_variance,
cs_val, block_merge_dist)
hpo_info['blocks'][block_id] = {'coords' : block,
'refine_res' : refine_res,
'credset_coords' : credset_coords,
'credset_bt' : credset_bt,
'credset_windows' : credset_windows}
# If no windows are significant, add empty placeholder dict for all windows
else:
hpo_info['all_windows'] = {}
return hpo_info
def load_all_hpos(statslist, secondary_p_cutoff=0.05, n_nominal_cutoff=2,
secondary_or_nominal=True, fdr_q_cutoff=0.05,
secondary_for_fdr=False, block_merge_dist=200000,
block_prefix='window_block', refine_secondary=False,
cs_val=0.95):
"""
Wrapper function to process each HPO with process_hpo()
Returns a dict with one entry per HPO
"""
hpo_data = {}
with open(statslist) as infile:
reader = csv.reader(infile, delimiter='\t')
        for hpo, stats_in, pval in reader:
print('Loading data from {}...'.format(hpo))
primary_p_cutoff = float(pval)
hpo_data[hpo] = process_hpo(hpo, stats_in, primary_p_cutoff,
p_is_neg_log10=True,
secondary_p_cutoff=secondary_p_cutoff,
n_nominal_cutoff=n_nominal_cutoff,
secondary_or_nominal=secondary_or_nominal,
fdr_q_cutoff=fdr_q_cutoff,
secondary_for_fdr=secondary_for_fdr,
block_merge_dist=block_merge_dist,
block_prefix=block_prefix,
refine_secondary=refine_secondary,
cs_val=cs_val)
return hpo_data
def calc_cnv_cov(cnvbed, hpo_data, cnv, frac=0.5, max_search_dist=20000000):
"""
Compute a CNV covariance matrix for cases from each HPO per chromosome
"""
cnv_cov = {}
cnvbt_orig = pbt.BedTool(cnvbed)
contigs = set([x.chrom for x in cnvbt_orig])
# Iterate over each HPO
for hpo, hdat in hpo_data.items():
print('Computing covariance matrixes for {}...'.format(hpo))
# Make single bedtool of all windows per contig
wbt_dict = {contig : {'all_wids' : set()} for contig in contigs}
for wid in hdat['all_windows'].keys():
contig = wid.split('_')[0]
wbt_dict[contig]['all_wids'].add(wid)
for contig in contigs:
wbt_str = ''
for wid in wbt_dict[contig]['all_wids']:
wbt_str += '\t'.join(wid.split('_') + [wid]) + '\n'
wbt_dict[contig]['wbt'] = pbt.BedTool(wbt_str, from_string=True)
# Filter CNVs by HPO and CNV type
cnvbt = cnvbt_orig.filter(lambda x: hpo in x[5])
if cnv != 'NS':
cnvbt = cnvbt.filter(lambda x: x[4] == cnv).saveas()
# Make covariance matrix of all by all windows per chromosome
cov_dfs = {}
for contig in contigs:
# Filter CNVs and windows to contig of interest
cnvbt_contig = cnvbt.filter(lambda x: x.chrom == contig)
wbt_contig = wbt_dict[contig]['wbt']
all_contig_wids = wbt_dict[contig]['all_wids']
# Make dict mapping window ID to dict of set(CNV ids)
cnvs_per_window = {wid : set() for wid in all_contig_wids}
for hit in cnvbt_contig.intersect(wbt_contig, wa=True, wb=True, F=frac):
cnvid = hit[3]
wid = hit[-1]
cnvs_per_window[wid].add(cnvid)
# Compute covarance for all pairs of windows
cov_dfs[contig] = pd.DataFrame(columns=all_contig_wids)
for wid_a in all_contig_wids:
jac_l = []
# If first window has no CNVs, Jaccard index = 0 for all mates
cnvs_a = cnvs_per_window[wid_a]
if len(cnvs_a) == 0:
cov_dfs[contig].loc[wid_a] = [0.0] * len(all_contig_wids)
continue
for wid_b in all_contig_wids:
# If the Jaccard index has already been computed,
# can copy value across matrix diagonal
if wid_b in cov_dfs[contig].index:
jac_l.append(cov_dfs[contig].loc[wid_b, wid_a])
continue
# If second window has no CNVs, Jaccard index = 0
cnvs_b = cnvs_per_window[wid_b]
if len(cnvs_b) == 0:
jac_l.append(0.0)
continue
# Otherwise, compute Jaccard index as long as windows are
# closer than max_search_dist apart
mid_a = np.mean([int(x) for x in wid_a.split('_')[1:]])
mid_b = np.mean([int(x) for x in wid_b.split('_')[1:]])
if np.abs(mid_b - mid_a) > max_search_dist:
jac_l.append(0.0)
else:
jac_l.append(len(cnvs_a.intersection(cnvs_b)) / len(cnvs_a.union(cnvs_b)))
cov_dfs[contig].loc[wid_a] = jac_l
cnv_cov[hpo] = cov_dfs
return cnv_cov
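# Worked example of the Jaccard covariance above: if window A is hit by CNVs
# {1, 2, 3} and window B by CNVs {2, 3, 4}, their covariance entry is
# |{2, 3}| / |{1, 2, 3, 4}| = 2 / 4 = 0.5.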
def merge_blocks_by_cov(hpo_data, cnv_cov, jac_cutoff=0.8,
block_merge_dist=200000, cs_val=0.95,
block_prefix='window_block'):
"""
Merge blocks from the same HPO by CNV covariance
"""
for hpo, hdat in hpo_data.items():
all_bids = list(hdat['blocks'].keys())
# Construct graph of all blocks
G = nx.Graph()
for bid in all_bids:
G.add_node(bid)
# Add edges between pairs of nodes where at least one pair of windows from
# their credible intervals shares CNV cov >= jac_cutoff
for bid_a in all_bids:
chrom_a = hdat['blocks'][bid_a]['credset_coords'][0][0]
for bid_b in all_bids:
chrom_b = hdat['blocks'][bid_b]['credset_coords'][0][0]
# Only process nonredundant block pairs on the same chromosome
if bid_a == bid_b or chrom_a != chrom_b:
continue
cov_df = cnv_cov[hpo][chrom_a]
wids_a = hdat['blocks'][bid_a]['credset_windows']
wids_b = hdat['blocks'][bid_b]['credset_windows']
cov_df = cov_df.loc[cov_df.index.isin(wids_a),
cov_df.columns.isin(wids_b)]
best_jac = cov_df.max().max()
if best_jac >= jac_cutoff:
G.add_edge(bid_a, bid_b)
# Collapse all subgraphs of two or more nodes
k = 0
for cluster in nx.connected_components(G):
if len(cluster) > 1:
k += 1
new_bid = '_'.join([hpo, block_prefix, 'merged', str(k)])
# Take union of all windows
windows = set()
for bid in cluster:
windows.update(hdat['blocks'][bid]['refine_res'].keys())
# Update refinement
window_priors = {window : 1 / len(windows) for window in windows}
refine_res, credset_coords, credset_bt, credset_windows = \
refine(window_priors, hdat['all_windows'], cs_val=cs_val,
cs_merge_buffer=block_merge_dist)
# Determine maximum significance level of any window in credible set
if any([hdat['all_windows'][wid]['gw_sig'] for wid in credset_windows]):
credset_max_sig = 'genome_wide'
elif any([hdat['all_windows'][wid]['fdr_sig'] for wid in credset_windows]):
credset_max_sig = 'FDR'
else:
credset_max_sig = 'not_significant'
# Add new merged block to hpo_data
block = credset_bt.merge(d=int(10e10))
hpo_data[hpo]['blocks'][new_bid] = \
{'coords' : block,
'refine_res' : refine_res,
'credset_coords' : credset_coords,
'credset_bt' : credset_bt,
'credset_windows' : credset_windows,
'credset_max_sig' : credset_max_sig}
# Remove all original member blocks
for bid in cluster:
hpo_data[hpo]['blocks'].pop(bid)
return hpo_data
def clump_windows(cov_df, sig_wids, jac_cutoff=0.2):
"""
    inputs:
        cov_df : an input matrix of CNV covariance for pairs of windows
        sig_wids : list of significant window IDs used to seed clumps
        jac_cutoff : minimum Jaccard index to treat windows as non-independent
    outputs:
        a list of clumps of window IDs; only clumps containing two or more
        significant windows are returned
"""
# Make graph of all windows
all_wids = list(cov_df.columns)
wg = nx.Graph()
for wid in all_wids:
wg.add_node(wid)
# Annotate edges with Jaccard index if >= jac_cutoff
for sig_wid in sig_wids:
for other_wid in all_wids:
jac = cov_df.loc[sig_wid, other_wid]
if jac >= jac_cutoff:
wg.add_edge(sig_wid, other_wid)
wg.edges[sig_wid, other_wid]['jac'] = jac
clumps = []
for subg in nx.connected_components(wg):
if len(set(sig_wids).intersection(set(subg))) > 1:
clumps.append(list(subg))
return clumps
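# Worked example: with jac_cutoff = 0.2, significant windows A and B sharing
# Jaccard 0.3 fall in one connected component and are returned as a clump,
# while a significant window with no edge to another significant window forms
# a singleton component and is dropped, since clumps require more than one
# significant member.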
def split_blocks_by_cov(hpo_data, cnv_cov, jac_cutoff=0.2, block_prefix='window_block',
null_variance=0.42 ** 2, cs_val=0.95):
"""
Split blocks based on CNV covariance
"""
for hpo, hdat in hpo_data.items():
kn = 0
sig_wids = list(hdat['sig_windows'].keys())
orig_bids = list(hdat['blocks'].keys())
for block_id in orig_bids:
bdat = hdat['blocks'][block_id]
block_wids = list(bdat['refine_res'].keys())
sig_block_wids = list(set(sig_wids).intersection(set(block_wids)))
chrom = bdat['credset_coords'][0][0]
cov_df = cnv_cov[hpo][chrom]
# Note: only consider covariance between sig windows when assessing independence
# to avoid chaining of single-linkage between intermediate blocks
cov_df = cov_df.loc[cov_df.index.isin(sig_block_wids),
cov_df.columns.isin(sig_block_wids)]
window_clumps = clump_windows(cov_df, sig_block_wids, jac_cutoff)
# If multiple independent CNV clumps are found, split into independent blocks
if len(window_clumps) > 1:
for i in range(len(window_clumps)):
# Collect data corresponding to split block
windows = window_clumps[i]
windows_bt = pbt.BedTool('\n'.join(['\t'.join(x.split('_') + [x]) for x in windows]),
from_string=True).sort()
coords = windows_bt.merge(c=4, o='distinct')
window_priors = {window : 1 / len(windows) for window in windows}
refine_res, credset_coords, credset_bt, credset_windows \
= refine(window_priors, hdat['all_windows'], null_variance,
cs_val, cs_merge_buffer=0)
kn += 1
new_block_id = '_'.join([hpo, block_prefix, 'split', str(kn)])
# Determine maximum significance level of any window in credible set
if any([hdat['all_windows'][wid]['gw_sig'] for wid in credset_windows]):
credset_max_sig = 'genome_wide'
elif any([hdat['all_windows'][wid]['fdr_sig'] for wid in credset_windows]):
credset_max_sig = 'FDR'
else:
credset_max_sig = 'not_significant'
# Update hpo_data
hpo_data[hpo]['blocks'][new_block_id] = \
{'coords' : coords,
'refine_res' : refine_res,
'credset_coords' : credset_coords,
'credset_bt' : credset_bt,
'credset_windows' : credset_windows,
'credset_max_sig' : credset_max_sig}
# Remove original block after splitting
hpo_data[hpo]['blocks'].pop(block_id)
return hpo_data
def assign_or_quantiles(hpo_data, n_or_bins=1):
"""
Assign all associations into quantiles based on effect size
"""
# Gather effect size estimate of most significant window per block
lnors = {}
for hpo, hdat in hpo_data.items():
for bid, bdat in hdat['blocks'].items():
best_p = 1
best_wid = None
best_lnor = 0
for wid in list(bdat['refine_res'].keys()):
w_p = hdat['all_windows'][wid]['primary_p']
if w_p < best_p:
best_p = w_p
best_wid = wid
best_lnor = hdat['all_windows'][wid]['lnOR']
lnors[bid] = {'lnor' : best_lnor, 'hpo' : hpo}
    # Assign blocks to quantiles based on the rank of each block's effect size
    # (argsort applied twice converts values to ranks; a single argsort gives
    # the sorting permutation, not per-block ranks)
    ranks = np.argsort(np.argsort([x['lnor'] for x in lnors.values()]))
    quants = np.floor(n_or_bins * ranks / len(lnors))
    qdict = {a : int(b) for a, b in zip(lnors.keys(), quants)}
for bid in qdict.keys():
hpo_data[lnors[bid]['hpo']]['blocks'][bid]['lnor_quantile'] = qdict[bid]
return hpo_data
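# Worked example of the rank-based binning above: with n_or_bins = 2 and four
# blocks whose top-window lnORs are [0.1, 0.9, 0.4, 0.7], the ranks are
# [0, 3, 1, 2] and the assigned quantiles are floor(2 * rank / 4) = [0, 1, 0, 1].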
def estimate_null_variance_basic(hpo_data, Wsq, dev_hpos=[], n_or_bins=1,
split_gw_fdr=False):
"""
Estimates null variance from average of all significant windows and best window per block
"""
# Compute 2 null variance estimates for refining:
# 1. Mean of all significant windows
# 2. Mean of all top windows (one per block)
vardict_sig = {hpo : {'gw' : {i : [] for i in range(n_or_bins + 1)},
'fdr' : {i : [] for i in range(n_or_bins + 1)}} \
for hpo in hpo_data.keys()}
vardict_best = {hpo : {'gw' : {i : [] for i in range(n_or_bins + 1)},
'fdr' : {i : [] for i in range(n_or_bins + 1)}} \
for hpo in hpo_data.keys()}
# Collect variance estimates from each significant block
for hpo, dat in hpo_data.items():
for bdat in dat['blocks'].values():
# Get block-wide odds ratio quantile and significance level
lnor_q = bdat['lnor_quantile']
gw_sig = any([dat['all_windows'][wid]['gw_sig'] for wid \
in bdat['refine_res'].keys()])
if gw_sig:
w_sig = 'gw'
else:
w_sig = 'fdr'
            # when not splitting by significance level, pool all estimates
            # into the 'gw' stratum, matching how update_refine() reads Wsq
            if not split_gw_fdr:
                w_sig = 'gw'
# Get odds ratio for each significant window within the block
sig_wids = [wid for wid in bdat['refine_res'].keys() \
if any([dat['all_windows'][wid]['gw_sig'],
dat['all_windows'][wid]['fdr_sig']])]
sig_lnors = {wid: dat['all_windows'][wid]['lnOR'] for wid in sig_wids}
sig_vars = {wid : (float(lnor) / 1.96) ** 2 for wid, lnor in sig_lnors.items()}
# Get odds ratio of single window with strongest P-value
bpvals = [(window, dat['all_windows'][window]['primary_p']) \
for window in sig_wids]
best_wid = sorted(bpvals, key=lambda x: x[1])[0][0]
best_var = sig_vars[best_wid]
# Update overall variance estimate collectors
vardict_sig[hpo][w_sig][lnor_q] += list(sig_vars.values())
vardict_best[hpo][w_sig][lnor_q].append(best_var)
# Summarize variance estimates across strata
for sig in 'gw fdr'.split():
for i in range(n_or_bins):
# Developmental HPOs
var_sig_dev_all = [vardict_sig[h][sig][i] for h in dev_hpos]
var_sig_dev = np.nanmean([v for sub in var_sig_dev_all for v in sub])
var_best_dev_all = [vardict_best[h][sig][i] for h in dev_hpos]
var_best_dev = np.nanmean([v for sub in var_best_dev_all for v in sub])
for hpo in dev_hpos:
Wsq[hpo][sig][i] += [var_sig_dev, var_best_dev]
# All other HPOs
other_hpos = list(set(hpo_data.keys()).difference(set(dev_hpos)))
var_sig_other_all = [vardict_sig[h][sig][i] for h in other_hpos]
var_sig_other = np.nanmean([v for sub in var_sig_other_all for v in sub])
var_best_other_all = [vardict_best[h][sig][i] for h in other_hpos]
var_best_other = np.nanmean([v for sub in var_best_other_all for v in sub])
for hpo in other_hpos:
Wsq[hpo][sig][i] += [var_sig_other, var_best_other]
return Wsq
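# Note on the variance heuristic above: each lnOR point estimate is converted
# to a prior variance via (lnOR / 1.96) ** 2, i.e., choosing W so that the
# observed effect sits about 1.96 prior-SDs from zero; e.g., lnOR = 0.98
# yields W = 0.25.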
def estimate_null_variance_gs(gs_lists, statslist, Wsq, single_gs_hpo=False,
n_or_bins=1):
"""
Estimates null variance from the average of a list of known causal windows
"""
statspaths = {h : p for h, p in [x.rstrip().split('\t')[:2] \
for x in open(statslist).readlines()]}
with gzip.open(list(statspaths.values())[0], 'rt') as ex_statfile:
statscols = ex_statfile.readline().rstrip().split('\t')
# Estimate null variance for each entry in gs_lists
for gspath in gs_lists:
for hpo, statspath in statspaths.items():
# Intersect sumstats for phenotype with GS regions
gsdf = pbt.BedTool(statspath).\
intersect(pbt.BedTool(gspath), u=True, f=1.0).\
to_dataframe(names=statscols)
gsdf['window'] = gsdf[['#chr', 'start', 'end']].astype(str).\
aggregate('_'.join, axis=1)
# Read effect sizes per window and convert to mean variance
stats = gsdf.loc[:, 'window meta_lnOR'.split()].\
rename(columns={'meta_lnOR' : 'lnOR'})
gs_var = np.nanmean((stats.lnOR.astype(float) / 1.96) ** 2)
# Update Wsq estimates for all sig. and effect size quantiles
if single_gs_hpo:
for hpo in Wsq.keys():
for sig in 'gw fdr'.split():
for i in range(n_or_bins):
Wsq[hpo][sig][i].append(gs_var)
break
else:
for sig in 'gw fdr'.split():
for i in range(n_or_bins):
Wsq[hpo][sig][i].append(gs_var)
return Wsq
def estimate_null_variance(hpo_data, gs_list, statslist, dev_hpos=[],
n_or_bins=1, split_gw_fdr=False, single_gs_hpo=False):
"""
Estimate null variance for fine-mapping, optionally splitting on several covariates
"""
# Build null variance hierarchy
Wsq = {hpo : {'gw' : {i : [] for i in range(n_or_bins)},
'fdr' : {i : [] for i in range(n_or_bins)}} \
for hpo in hpo_data.keys()}
# Estimate null variance using 2+ approaches
# 1. all significant windows
# 2. most significant window per block
# 3+. known causal regions (optional; can be multiple lists)
Wsq = estimate_null_variance_basic(hpo_data, Wsq, dev_hpos, n_or_bins, split_gw_fdr)
if gs_list is not None:
with open(gs_list) as gsf:
Wsq = estimate_null_variance_gs([x.rstrip() for x in gsf.readlines()],
statslist, Wsq, single_gs_hpo, n_or_bins)
# Prune nans from all null variance estimates
for hpo in Wsq.keys():
for sig in 'gw fdr'.split():
for i in range(n_or_bins):
var_list = Wsq[hpo][sig][i]
Wsq[hpo][sig][i] = list(np.array(var_list)[~np.isnan(var_list)])
return Wsq
def output_null_var_tsv(Wsq, outfile):
"""
Write table of null variance estimates to .tsv file
"""
with open(outfile, 'w') as fout:
header = '\t'.join('#HPO sig odds_ratio_quantile null_variance'.split())
fout.write(header + '\n')
for hpo in Wsq.keys():
for sig in 'gw fdr'.split():
for i, var_ests in Wsq[hpo][sig].items():
out_fmt = '\t'.join([hpo, sig, str(i), '{}']) + '\n'
for v in var_ests:
fout.write(out_fmt.format(v))
def update_refine(hpo_data, Wsq, split_gw_fdr=False, cs_val=0.95,
block_merge_dist=200000, cnv_cov=None, jac_cutoff=0.8):
"""
Update initial refinement results (flat prior) with null variances
If multiple Wsqs are provided, uses Bayesian model averaging across them
"""
for hpo, hdat in hpo_data.items():
for block_id, bdat in hdat['blocks'].items():
windows = list(bdat['refine_res'].keys())
sig_wids = [w for w in windows if w in hdat['sig_windows'].keys()]
window_priors = {window : 1 / len(windows) for window in windows}
# Get block-wide odds ratio quantile and significance level
lnor_q = bdat['lnor_quantile']
gw_sig = any([hdat['all_windows'][wid]['gw_sig'] for wid in windows])
if gw_sig or not split_gw_fdr:
b_sig = 'gw'
else:
b_sig = 'fdr'
# Subset to covariance matrix from this chromosome, if optioned
if cnv_cov is not None:
chrom = bdat['credset_coords'][0][0]
cov_df = cnv_cov[hpo][chrom]
else:
cov_df = None
# Compute ABFs and PIPs at each null variance estimate
bma_input = []
for W in Wsq[hpo][b_sig][lnor_q]:
refine_res, credset_coords, credset_bt, credset_windows \
= refine(window_priors, hpo_data[hpo]['all_windows'], W,
cs_val, block_merge_dist, cov_df, jac_cutoff)
bma_input.append(refine_res)
# Average ABFs for each window
refine_res = {w : {} for w in windows}
for window in windows:
refine_res[window]['ABF'] \
= np.nanmean([s[window]['ABF'] for s in bma_input])
# Recompute PIPs according to averaged ABFs
ABF_sum = np.nansum([x['ABF'] for x in refine_res.values()])
for window in windows:
refine_res[window]['PIP'] = refine_res[window]['ABF'] / ABF_sum
# Recompute credible set based on BMA PIPs
credset_coords, credset_bt, credset_windows = \
make_cs(refine_res, cs_val, cov_df=cov_df, jac_cutoff=jac_cutoff,
sig_wids=sig_wids)
# Determine maximum significance level of any window in credible set
if any([hpo_data[hpo]['all_windows'][wid]['gw_sig'] for wid in credset_windows]):
credset_max_sig = 'genome_wide'
elif any([hpo_data[hpo]['all_windows'][wid]['fdr_sig'] for wid in credset_windows]):
credset_max_sig = 'FDR'
else:
credset_max_sig = 'not_significant'
hpo_data[hpo]['blocks'][block_id] = {'refine_res' : refine_res,
'credset_coords' : credset_coords,
'credset_bt' : credset_bt,
'credset_windows' : credset_windows,
'credset_max_sig' : credset_max_sig}
return hpo_data
def get_cytobands(bt, cyto_bed):
"""
Return cytoband nomenclature for all cytobands overlapped by bt
Note: assumes all entries in bt are on the same chromosome
"""
chrom = bt[0].chrom
bands = sorted(list(set([x[-2] for x in bt.intersect(cyto_bed, wb=True)])))
if len(bands) > 1:
bandrange = '{}{}-{}'.format(chrom, bands[0], bands[-1])
else:
bandrange = chrom + bands[0]
return bandrange
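# Illustrative sketch (not part of the original script): get_cytobands()
# collapses the sorted set of overlapped cytobands into a single band name
# or a band range. A pure-string rendition of that naming rule, with band
# names made up for demonstration:
def _demo_bandrange():
    chrom, bands = '2', sorted(['p15', 'p16.1'])
    if len(bands) > 1:
        bandrange = '{}{}-{}'.format(chrom, bands[0], bands[-1])
    else:
        bandrange = chrom + bands[0]
    assert bandrange == '2p15-p16.1'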
def rename_blocks(block_dict, cyto_bed, block_prefix):
"""
Rename segments using cytoband nomenclature
Input: dict of { old_id : pbt.BedTool of credible intervals }
Returns a dict mapping { old_id : new_id }
"""
alpha_map = {i : l for i, l in enumerate(string.ascii_uppercase)}
rename_dict = {}
# Map cytobands to all blocks
cytomap = {}
for old_id, bt in block_dict.items():
cytomap[old_id] = get_cytobands(bt, cyto_bed)
band_counter = {b : 0 for b in set(list(cytomap.values()))}
# Rename blocks according to new scheme
for old_id in block_dict.keys():
bandrange = cytomap[old_id]
k = Counter(cytomap.values()).get(bandrange, 0)
if k < 2:
new_id = '_'.join([block_prefix, bandrange])
else:
new_id = '_'.join([block_prefix, bandrange,
alpha_map[band_counter[bandrange]]])
band_counter[bandrange] += 1
rename_dict[old_id] = new_id
return rename_dict
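# Illustrative sketch (not part of the original script): when multiple blocks
# map to the same cytoband, rename_blocks() appends letters in encounter
# order; unique cytobands get no suffix (block IDs made up for demonstration).
def _demo_block_suffixes():
    import string
    from collections import Counter
    cytomap = {'b1' : '2p15', 'b2' : '2p15', 'b3' : '5q11'}
    counts, seen, names = Counter(cytomap.values()), {}, {}
    for bid, band in cytomap.items():
        if counts[band] < 2:
            names[bid] = 'prefix_' + band
        else:
            names[bid] = '_'.join(['prefix', band,
                                   string.ascii_uppercase[seen.get(band, 0)]])
            seen[band] = seen.get(band, 0) + 1
    assert names == {'b1' : 'prefix_2p15_A', 'b2' : 'prefix_2p15_B',
                     'b3' : 'prefix_5q11'}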
def rename_all_blocks(hpo_data, cyto_bed, block_prefix):
"""
Rename segments using cytoband nomenclature
"""
for hpo, hdat in hpo_data.items():
block_dict = {bid : bd['credset_bt'] for bid, bd in hdat['blocks'].items()}
rename_dict = rename_blocks(block_dict, cyto_bed, '_'.join([hpo, block_prefix]))
for old_id, new_id in rename_dict.items():
hpo_data[hpo]['blocks'][new_id] = hpo_data[hpo]['blocks'][old_id]
hpo_data[hpo]['blocks'].pop(old_id)
return hpo_data
def iv_mean(values, variances, conf=0.95):
"""
Returns inverse-variance weighted mean of values and conf% confidence interval
"""
# Note: must restrict to values with positive, non-zero variance
keep_idxs = np.where(np.array(variances) > 0)[0].tolist()
values = itemgetter(*keep_idxs)(values)
if type(values) is not tuple:
values = (values, )
variances = itemgetter(*keep_idxs)(variances)
if type(variances) is not tuple:
variances = (variances, )
weights = [1 / v for v in variances]
wsum = np.nansum(weights)
numerator = np.nansum([x / v for x, v in zip(values, variances)])
ivm = numerator / wsum
pooled_se = np.sqrt(1 / wsum)
    # Two-sided interval: use the (1 + conf) / 2 quantile (1.96 for conf=0.95),
    # consistent with the CI-to-SE conversions (/ 1.96) used elsewhere here
    ci_dist = norm.ppf((1 + conf) / 2) * pooled_se
return ivm, (ivm - ci_dist, ivm + ci_dist)
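# Illustrative sketch (not part of the original script): inverse-variance
# weighting pools estimates x_i with weights w_i = 1 / v_i, giving
# mean = sum(w_i * x_i) / sum(w_i) and pooled SE = sqrt(1 / sum(w_i)).
# The inputs below are made up for demonstration.
def _demo_iv_mean():
    values, variances = [0.8, 1.2, 1.0], [0.04, 0.09, 0.01]
    ivm, (lower, upper) = iv_mean(values, variances)
    wsum = sum(1 / v for v in variances)
    manual = sum(x / v for x, v in zip(values, variances)) / wsum
    assert abs(ivm - manual) < 1e-9
    assert lower < ivm < upper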
def output_assoc_bed(hpo_data, cyto_bed, outfile, cnv='NS'):
"""
Format final list of credible sets with summary statistics
"""
cols = 'chr start_min end_max credible_set_id cnv hpo sig_level ' + \
'cytoband mean_control_freq mean_case_freq ' + \
'pooled_ln_or pooled_ln_or_ci_lower pooled_ln_or_ci_upper ' + \
'best_pvalue n_cred_intervals cred_interval_coords cred_intervals_size'
outfile.write('#' + '\t'.join(cols.split()) + '\n')
for hpo, hdat in hpo_data.items():
for block_id, binfo in hpo_data[hpo]['blocks'].items():
# Get basic credible set info
n_cred = len(binfo['credset_coords'])
cred_coords = ['{}:{}-{}'.format(x[0], x[1], x[2]) for x in binfo['credset_coords']]
cred_size = np.nansum([x.length for x in binfo['credset_bt']])
chrom = str(binfo['credset_coords'][0][0])
start = str(np.nanmin(binfo['credset_bt'].to_dataframe().start))
end = str(np.nanmax(binfo['credset_bt'].to_dataframe().end))
windows = sorted(list(set([w for w in binfo['credset_windows'] if w in hdat['all_windows']])))
wdat = hpo_data[hpo]['all_windows']
control_freq = np.nanmean([wdat[w]['control_freq'] for w in windows])
case_freq = np.nanmean([wdat[w]['case_freq'] for w in windows])
cytoband = get_cytobands(binfo['credset_bt'], cyto_bed)
sig_level = binfo['credset_max_sig']
# Get pooled effect size as inverse-variance weighted mean of all windows
n_windows = len(windows)
if n_windows > 1:
wors = [wdat[w]['lnOR'] for w in windows]
wvars = [ci2se((wdat[w]['lnOR_lower'], wdat[w]['lnOR_upper'])) ** 2 \
for w in windows]
lnor, lnor_ci = iv_mean(wors, wvars)
lnor_lower, lnor_upper = sorted(lnor_ci)
else:
lnor = wdat[windows[0]]['lnOR']
lnor_lower = wdat[windows[0]]['lnOR_lower']
lnor_upper = wdat[windows[0]]['lnOR_upper']
best_p = np.nanmin([wdat[w]['primary_p'] for w in windows])
if best_p == 0:
best_z = np.nanmax([wdat[w]['zscore'] for w in windows])
best_p = norm.sf(best_z)
# Write credset stats to file
outline = '\t'.join([chrom, start, end, block_id, cnv, hpo, sig_level, cytoband])
outnums_fmt = '\t{:.3E}\t{:.3E}\t{:.3}\t{:.3}\t{:.3}\t{:.3E}'
outline += outnums_fmt.format(control_freq, case_freq, lnor, lnor_lower,
lnor_upper, best_p)
outline += '\t' + '\t'.join([str(n_cred), ';'.join(cred_coords),
str(cred_size)]) + '\n'
if sig_level != 'not_significant':
outfile.write(outline)
outfile.close()
def cluster_credsets(hpo_data, block_merge_dist=200000, kmeans_subclustering=False):
"""
Cluster credible sets across HPOs to collapse overlapping regions
"""
# Pool credible sets, tagged with block ID
pooled_creds_str = ''
for hdat in hpo_data.values():
for bid, binfo in hdat['blocks'].items():
for ci in binfo['credset_bt']:
pooled_creds_str += '\t'.join([ci.chrom, str(ci.start),
str(ci.end), bid]) + '\n'
pooled_creds_bt = pbt.BedTool(pooled_creds_str, from_string=True).sort().saveas()
merged_creds_bt = pooled_creds_bt.merge(c=4, o='distinct', d=block_merge_dist)
# Build nx.Graph() of credible sets to be clustered
G = nx.Graph()
csids = \
list(set([e for s in [x.split(',') for x in merged_creds_bt.to_dataframe().\
iloc[:, 3].tolist()] for e in s]))
G.add_nodes_from(csids)
for x in merged_creds_bt:
creds = x[3].split(',')
if len(creds) > 1:
G.add_edges_from(list(combinations(creds, 2)))
# If optioned, attempt to split subgraphs containing multiple blocks from the same HPO
# This corrects for daisy chaining between distinct clumps identified
# earlier using split_blocks_by_cov()
if kmeans_subclustering:
v1_clusters = list(nx.connected_components(G))
for g in v1_clusters:
# Only need to correct clusters containing >1 independent block from the same HPO
hpo_counts = Counter([x.split('_')[0] for x in g])
if any(np.array(list(hpo_counts.values())) > 1):
# Get info on all blocks in cluster
all_bids = list(g)
block_bt = pooled_creds_bt.filter(lambda x: x[3] in all_bids).saveas()
# Compute matrix of coverage between all blocks
block_cov = pd.DataFrame(columns=all_bids)
for bid in all_bids:
bid_bt = block_bt.filter(lambda x: x[3] == bid).saveas()
cvals = {}
for x in block_bt.coverage(bid_bt):
ovr_bp = int(x[5])
union = bid_bt.cat(pbt.BedTool('\t'.join(x[:4]),
from_string=True))
union_bp = np.sum([len(x) for x in union])
cvals[x[3]] = ovr_bp / union_bp
block_cov.loc[bid] = block_cov.columns.map(cvals)
                # K-means clustering such that k clusters are obtained,
# where k is the max number of independent blocks from any one HPO
kmeans = KMeans(n_clusters=max(hpo_counts.values()), random_state=2021).fit(block_cov)
newclusters = {k : set() for k in range(max(hpo_counts.values()))}
for k, bid in zip(kmeans.labels_, block_cov.index):
newclusters[k].add(bid)
# Prune all edges between new subclusters
for members in newclusters.values():
for bid in members:
for other_id in [x for x in all_bids if x not in members]:
if G.has_edge(bid, other_id):
G.remove_edge(bid, other_id)
# Format each cluster in the graph of credsets
clustered_credsets = \
{ 'clustered_region_' + str(i) : x for i, x \
in enumerate(nx.connected_components(G))}
return clustered_credsets
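# Illustrative sketch (not part of the original script): overlapping credible
# sets become edges in a graph, and connected components define the final
# clusters. Node names are made up for demonstration.
def _demo_cc_clustering():
    import networkx as nx
    G = nx.Graph()
    G.add_nodes_from(['HP1_segA', 'HP2_segB', 'HP3_segC'])
    G.add_edge('HP1_segA', 'HP2_segB')   # these two credsets overlap
    clusters = list(nx.connected_components(G))
    assert {'HP1_segA', 'HP2_segB'} in clusters
    assert {'HP3_segC'} in clusters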
def define_joint_credints(hpo_data, hpo_dict, cs_val, cnv_cov=None,
jac_cutoff=0.8, ncase_dict={}, return_all=True):
"""
Function to average PIPs across all windows from two or more HPOs to define a
consensus set of credible intervals
"""
# Take union of all windows evaluated in all phenotypes
all_wids = [hpo_data[hpo]['blocks'][bid]['refine_res'].keys() \
for bid, hpo in hpo_dict.items()]
all_wids = set([w for l in all_wids for w in l])
pip_df = pd.DataFrame(index=all_wids)
# Compile list of significant window IDs
sig_wids = set()
for hpo in hpo_dict.values():
for wid in all_wids:
if wid in hpo_data[hpo]['sig_windows'].keys():
sig_wids.add(wid)
# Map PIPs from each HPO onto pip_df
for bid, hpo in hpo_dict.items():
pip_map = {k : v['PIP'] for k, v in hpo_data[hpo]['blocks'][bid]['refine_res'].items()}
pip_df[hpo] = pip_df.index.map(pip_map)
# Average PIPs across all HPOs
pip_df.fillna(value=0, inplace=True)
refine_res = pip_df.mean(axis=1).to_frame(name='PIP').to_dict(orient='index')
# Take CNV covariance from HPO with largest sample size for assessing CNV overlap
# If significant window is not present for largest HPO, iterate through
# other HPOs ordered by size until all significant windows are added
if cnv_cov is not None:
hpos_by_n = [x[0] for x in sorted(ncase_dict.items(), key=lambda x: x[1], reverse=True)]
chrom = list(all_wids)[0].split('_')[0]
for i, hpo in enumerate([h for h in hpos_by_n if h in hpo_dict.values()]):
if i == 0:
cov_df = cnv_cov[hpo][chrom]
cov_df = cov_df.loc[cov_df.index.isin(sig_wids), :]
else:
sup_df = cnv_cov[hpo][chrom]
sup_df = sup_df.loc[sup_df.index.isin(sig_wids), :]
new_rows = ~sup_df.index.isin(cov_df.index)
new_cols = ~sup_df.columns.isin(cov_df.columns)
# Three-step update of cov_df
# First step: update missing columns for existing rows, if any
if any(new_cols):
cov_df = pd.merge(cov_df, sup_df.loc[~new_rows, new_cols],
how='left', left_index=True, right_index=True)
# Second step: rbind new rows, if any
if any(new_rows):
                    # pd.concat replaces DataFrame.append (removed in pandas 2.0)
                    cov_df = pd.concat([cov_df, sup_df.loc[new_rows, :]])
# Third step: fill any NaNs with sup_df
cov_df.update(sup_df, overwrite=False)
    else:
        cov_df = None
    if cov_df is not None:
        cov_df.fillna(0, inplace=True)
# Redefine credible intervals
credset_coords, credset_bt, credset_windows = \
make_cs(refine_res, cs_val, cov_df=cov_df, jac_cutoff=jac_cutoff,
sig_wids=sig_wids)
if return_all:
return refine_res, credset_coords, credset_bt, credset_windows
else:
return credset_bt
def joint_refine_all(hpo_data, final_loci, cs_val=0.95, cnv_cov=None,
jac_cutoff=0.8, ncase_dict={}, cs_merge_buffer=200000):
"""
    Wrapper to run joint refinement (define_joint_credints) on all multi-HPO clusters
"""
for members in final_loci.values():
n_members = len(members)
if n_members > 1:
hpo_dict = {cs : cs.split('_')[0] for cs in members}
hpos = sorted(list(set(hpo_dict.values())))
n_hpos = len(hpos)
# Collapse blocks from the same HPO within the same cluster prior to joint refinement
hpo_counts = Counter(hpo_dict.values())
for hpo, n in hpo_counts.items():
if n > 1:
# Collect data corresponding to merged blocks, taking average of PIPs within a single HPO
hdat = hpo_data[hpo]
hbids = [k for k, v in hpo_dict.items() if v == hpo]
windows = set()
sig_windows = set()
for bid in hbids:
bwids = list(hdat['blocks'][bid]['refine_res'].keys())
windows.update(bwids)
sig_windows.update([w for w in hdat['sig_windows'] if w in bwids])
windows = list(windows)
sig_windows = list(sig_windows)
chrom = windows[0].split('_')[0]
pip_df = pd.DataFrame(index=windows)
for bid in hbids:
pip_map = {k : v['PIP'] for k, v in hdat['blocks'][bid]['refine_res'].items()}
pip_df[bid] = pip_df.index.map(pip_map)
pip_df.fillna(value=0, inplace=True)
refine_res = pip_df.mean(axis=1).to_frame(name='PIP').to_dict(orient='index')
credset_coords, credset_bt, credset_windows = \
make_cs(refine_res, cs_val, cs_merge_buffer,
cnv_cov[hpo][chrom], jac_cutoff, sig_windows)
coords = [chrom,
np.nanmin([x.start for x in credset_bt]),
np.nanmax([x.stop for x in credset_bt])]
# Determine maximum significance level of any window in credible set
if any([hdat['all_windows'][wid]['gw_sig'] for wid in credset_windows]):
credset_max_sig = 'genome_wide'
elif any([hdat['all_windows'][wid]['fdr_sig'] for wid in credset_windows]):
credset_max_sig = 'FDR'
else:
credset_max_sig = 'not_significant'
# Update hpo_data for first block and remove other blocks
for i, bid in enumerate(hbids):
if i == 0:
hpo_data[hpo]['blocks'][bid] = \
{'coords' : coords,
'refine_res' : refine_res,
'credset_coords' : credset_coords,
'credset_bt' : credset_bt,
'credset_windows' : credset_windows,
'credset_max_sig' : credset_max_sig}
else:
hpo_data[hpo]['blocks'].pop(bid)
hpo_dict.pop(bid)
# Joint refinement of blocks after collapsing redundant blocks (see above)
refine_res, cs_coords, cs_bt, cs_windows = \
define_joint_credints(hpo_data, hpo_dict, cs_val, cnv_cov, \
jac_cutoff, ncase_dict)
# Update hpo_data with results from joint refinement
new_coords = cs_bt.merge()
for bid, hpo in hpo_dict.items():
# Determine maximum significance level of any window in credible set
hcs_windows = [w for w in cs_windows if w in hpo_data[hpo]['all_windows'].keys()]
if any([hpo_data[hpo]['all_windows'][wid]['gw_sig'] for wid in hcs_windows]):
credset_max_sig = 'genome_wide'
elif any([hpo_data[hpo]['all_windows'][wid]['fdr_sig'] for wid in hcs_windows]):
credset_max_sig = 'FDR'
else:
credset_max_sig = 'not_significant'
# Update data
hpo_data[hpo]['blocks'][bid] = \
{'coords' : new_coords,
'refine_res' : refine_res,
'credset_coords' : cs_coords,
'credset_bt' : cs_bt,
'credset_windows' : cs_windows,
'credset_max_sig' : credset_max_sig}
return hpo_data
def prune_redundant_blocks(hpo_data):
"""
Searches for blocks from the same HPO with overlapping credible intervals
and retains just one
"""
for hpo, hdat in hpo_data.items():
if len(hdat['blocks']) == 0:
continue
# First step: make a graph of all blocks where edges indicate overlapping credible intervals
G = nx.Graph()
G.add_nodes_from(hdat['blocks'].keys())
cs_bt_strs = []
for bid, bdat in hdat['blocks'].items():
cs_bt_strs += ['{}\t{}\t{}\t{}\n'.format(*x, bid) for x in bdat['credset_coords']]
cs_bt = pbt.BedTool(''.join(cs_bt_strs), from_string=True)
for hit in cs_bt.sort().merge(c=4, o='distinct'):
bids = hit[3].split(',')
if len(bids) > 1:
for bid_a in bids:
for bid_b in bids:
if bid_a != bid_b:
G.add_edge(bid_a, bid_b)
# Second step: resolve subgraphs with multiple nodes
for g in nx.connected_components(G):
if len(g) > 1:
# Gather evidence for each block (significance level and size)
criteria = {bid : (hdat['blocks'][bid]['credset_max_sig'],
np.sum([x.length for x in hdat['blocks'][bid]['credset_bt']])) \
for bid in g}
                # Keep blocks with higher significance level (GW over FDR over
                # not significant); break ties by taking the larger block
                sig_rank = {'genome_wide' : 2, 'FDR' : 1, 'not_significant' : 0}
                criteria = {k : v for k, v in sorted(criteria.items(),
                                                     key=lambda x: (sig_rank[x[1][0]], x[1][1]),
                                                     reverse=True)}
for i, bid in enumerate(criteria.keys()):
if i > 0:
hpo_data[hpo]['blocks'].pop(bid)
return hpo_data
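# Illustrative sketch (not part of the original script): ranking blocks first
# by significance tier and then by credible-set size keeps the genome-wide
# block even when an FDR block is larger (IDs and sizes made up).
def _demo_block_ranking():
    sig_rank = {'genome_wide' : 2, 'FDR' : 1, 'not_significant' : 0}
    criteria = {'b1' : ('FDR', 900), 'b2' : ('genome_wide', 400)}
    keep = sorted(criteria, key=lambda b: (sig_rank[criteria[b][0]],
                                           criteria[b][1]), reverse=True)[0]
    assert keep == 'b2'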
def output_loci_bed(hpo_data, final_loci, cyto_bed, outfile, ncase_dict, cnv='NS',
block_prefix=None, joint_credsets=False, cs_val=0.95,
cnv_cov=None, jac_cutoff=0.8):
"""
Format final list of collapsed credible sets and compute pooled summary statistics
"""
if block_prefix is None:
if cnv == 'NS':
region_id_prefix = 'merged_segment'
else:
region_id_prefix = 'merged_{}_segment'.format(cnv)
else:
region_id_prefix = 'merged_{}_segment'.format(block_prefix)
cols = 'chr start_min end_max region_id cnv best_sig_level cytoband ' + \
'pooled_control_freq pooled_case_freq ' + \
'pooled_ln_or pooled_ln_or_ci_lower pooled_ln_or_ci_upper ' + \
'min_ln_or max_ln_or n_hpos hpos n_constituent_assocs constituent_assocs ' + \
'n_cred_intervals cred_interval_coords cred_intervals_size'
outfile.write('#' + '\t'.join(cols.split()) + '\n')
out_presort = {}
for k, members in enumerate(final_loci.values()):
# Get basic information for credsets in final cluster
n_members = len(members)
hpo_dict = {cs : cs.split('_')[0] for cs in members}
hpos = sorted(list(set(hpo_dict.values())))
n_hpos = len(hpos)
credints_bts = [hpo_data[hpo]['blocks'][cs]['credset_bt'] for cs, hpo in hpo_dict.items()]
if n_members > 1:
if joint_credsets:
credints_bt = \
define_joint_credints(hpo_data, hpo_dict, cs_val, cnv_cov,
jac_cutoff, ncase_dict, return_all=False)
else:
credints_bt = credints_bts[0].cat(*credints_bts[1:], postmerge=False).sort().merge()
else:
credints_bt = credints_bts[0].sort().merge()
n_credints = len(credints_bt)
credints_size = np.nansum([x.length for x in credints_bt])
credints_coords = ['{}:{}-{}'.format(x.chrom, x.start, x.end) for x in credints_bt]
# Get region-level basic information
chrom = credints_bt[0].chrom
start = str(np.nanmin(credints_bt.to_dataframe().start))
end = str(np.nanmax(credints_bt.to_dataframe().end))
cytoband = get_cytobands(credints_bt, cyto_bed)
region_id = '_'.join([region_id_prefix, cytoband, str(k)])
# Summarize HPO-specific information pooled across all windows from each
# contributing credset (note: *not* all windows for all merged cred intervals)
windows_dict = {}
for bid, hpo in hpo_dict.items():
windows_dict[hpo] = [w for w in hpo_data[hpo]['blocks'][bid]['credset_windows'] \
if w in hpo_data[hpo]['all_windows'].keys()]
# Compute pooled control & case frequencies as mean weighted by np.sqrt(N_cases)
control_freq_dict, case_freq_dict = {}, {}
for hpo, windows in windows_dict.items():
control_freq_dict[hpo] = \
np.nanmean([hpo_data[hpo]['all_windows'][w]['control_freq'] for w in windows])
case_freq_dict[hpo] = \
np.nanmean([hpo_data[hpo]['all_windows'][w]['case_freq'] for w in windows])
control_freq = np.nanmean(list(control_freq_dict.values()))
case_weights = [np.sqrt(ncase_dict[hpo]) for hpo in case_freq_dict.keys()]
case_freq = np.average(list(case_freq_dict.values()), weights=case_weights)
# Compute pooled effect size as inverse-variance weighted average
lnor_means, lnor_cis = {}, {}
for hpo in hpos:
wdat = hpo_data[hpo]['all_windows']
hlnors = [wdat[w]['lnOR'] for w in windows_dict[hpo]]
hvars = [ci2se((wdat[w]['lnOR_lower'], wdat[w]['lnOR_upper'])) ** 2 \
for w in windows_dict[hpo]]
hlnor, hlnor_ci = iv_mean(hlnors, hvars)
lnor_means[hpo] = hlnor
lnor_cis[hpo] = sorted(hlnor_ci)
min_lnor = np.nanmin(list(lnor_means.values()))
max_lnor = np.nanmax(list(lnor_means.values()))
lnor, lnor_ci = \
iv_mean(list(lnor_means.values()),
[ci2se(tuple(ci)) ** 2 for ci in lnor_cis.values()])
# Get best significance level from any window
sig_levels = [hpo_data[hpo]['blocks'][bid].get('credset_max_sig') \
for bid, hpo in hpo_dict.items()]
if 'genome_wide' in sig_levels:
best_sig_level = 'genome_wide'
elif 'FDR' in sig_levels:
best_sig_level = 'FDR'
else:
best_sig_level = 'not_significant'
# Prepare to write region stats to file
out_front = '\t'.join([chrom, start, end])
out_back = '\t'.join([cnv, best_sig_level, cytoband])
outnums_fmt = '\t{:.3E}\t{:.3E}\t{:.3}\t{:.3}\t{:.3}\t{:.3}\t{:.3}'
out_back += outnums_fmt.format(control_freq, case_freq, lnor, lnor_ci[0],
lnor_ci[1], min_lnor, max_lnor)
out_back += '\t' + '\t'.join([str(n_hpos), ';'.join(hpos),
str(n_members), ';'.join(sorted(members)),
str(n_credints), ';'.join(credints_coords),
str(credints_size)]) + '\n'
if best_sig_level != 'not_significant':
out_presort[(int(chrom), int(start), int(end))] = [out_front, region_id, out_back]
# Final renaming of blocks with cytoband nomenclature using rename_blocks()
block_dict = {v[1] : pbt.BedTool(v[0], from_string=True) for v in out_presort.values()}
rename_dict = rename_blocks(block_dict, cyto_bed, region_id_prefix)
# Iterate over sorted blocks and write to file
    for key in sorted(out_presort.keys()):
outline = '\t'.join([out_presort[key][0],
rename_dict[out_presort[key][1]],
out_presort[key][2]])
outfile.write(outline)
outfile.close()
def main():
"""
Command-line main block
"""
# Parse command line arguments and options
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument('statslist', help='tsv of metadata per phenotype. Three ' +
'required columns: HPO, path to meta-analysis stats, ' +
'and primary P-value cutoff.')
parser.add_argument('hpos_by_cohort', help='tsv of sample sizes per HPO.')
parser.add_argument('--secondary-p-cutoff', help='Maximum secondary P-value to ' +
'consider as significant. [default: 1]', default=1, type=float)
parser.add_argument('--cnv', help='Indicate CNV type. [default: NS]', default='NS')
parser.add_argument('--min-nominal', help='Minimum number of individual cohorts ' +
'required to be nominally significant to consider ' +
'significant. [default: 0]', default=0, type=int)
parser.add_argument('--secondary-or-nominal', dest='secondary_or_nom',
help='Allow windows to meet either --secondary-p-cutoff ' +
'or --min-nominal, but do not require both. ' +
'[default: require both]', default=False, action='store_true')
parser.add_argument('--fdr-q-cutoff', help='Maximum FDR Q-value to ' +
'consider as FDR significant. [default: 0.05]',
default=0.05, type=float)
parser.add_argument('--secondary-for-fdr', action='store_true',
default=False, help='Apply sample secondary and/or min. ' +
'nominal criteria when evaluating FDR significance. ' +
'[default: apply no secondary criteria to FDR segments]')
parser.add_argument('--cnv-bed', help='BED file of all CNVs from all cohorts. ' +
'Used for computing CNV covariance between pairs of windows.')
parser.add_argument('--credible-sets', dest='cs_val', type=float, default=0.95,
help='Credible set value. [default: 0.95]')
parser.add_argument('--joint-credset-definition', action='store_true',
default=False, dest='joint_credsets', help='After ' +
'clustering and refinement, add a final step of joint ' +
'credible set definition across all HPOs associated at ' +
'each locus [default: define credsets per HPO per locus]')
parser.add_argument('--search-distance', help='Load non-significant windows within ' +
'this distance of at least one significant window. [default: 1Mb]',
default=1000000, type=int)
parser.add_argument('--refine-pad', help='Distance to pad each significant window ' +
'prior to refinement. [default: 200kb]', default=200000,
type=int)
parser.add_argument('--block-jaccard', help='If --cnv-bed is provided, subdivide ' +
'blocks of significant windows if their maximum Jaccard index of ' +
'overlapping CNVs is not greater than this value. [default: 0.2]',
default=0.2, type=float, dest='block_jac')
parser.add_argument('--window-jaccard', help='If --cnv-bed is provided, clump ' +
'windows into credible set coordinates if their Jaccard index of ' +
'overlapping CNVs with a credible set is at least as large ' +
'as this value. [default: 0.8]',
default=0.8, type=float, dest='window_jac')
parser.add_argument('--known-causal-loci-list', help='.tsv list of paths to ' +
'.bed lists of known causal loci. Used for estimating null ' +
'variance. Can be specified multiple times. [default: ' +
'no known causal regions]', dest='gs_list')
parser.add_argument('--single-gs-hpo', action='store_true', default=False,
help='Use first row in --variance-estimation-statslist ' +
'to estimate null variance for all HPOs. [default: ' +
'estimate HPO-specific null variance]')
parser.add_argument('--variance-estimation-statslist', help='.tsv of paths to ' +
'meta-analysis per phenotype to use with --known-causal-loci-list, ' +
'if different from main statslist. [default: use main statslist]',
dest='var_est_statslist')
parser.add_argument('--developmental-hpos', help='.txt file listing which ' +
'HPOs to treat as developmental when estimating null ' +
'variance. [default: pool all HPOs for null variance]')
parser.add_argument('--n-effect-size-bins', dest='n_or_bins', type=int, default=1,
help='Number of effect size quantiles to evaluate when ' +
'estimating null variance [default: do not split ' +
'associations by effect size]')
parser.add_argument('--estimate-variance-by-sig', dest='split_gw_fdr',
action='store_true', help='Estimate null variance separately ' +
'for genome-wide significant and FDR-significant segments. ' +
'[default: pool all segments when estimating null variance]')
parser.add_argument('--cytobands', help='BED of chromosome cytobands. Used ' +
'for naming regions. Optional. [default: name based on coordinates]')
parser.add_argument('--refine-secondary', action='store_true', default=False,
help='Use secondary association statistics for segment refinement ' +
'[default: use primary association stats]')
parser.add_argument('--sig-loci-bed', help='Output BED of significant segments ' +
'and their overall association statistics.')
parser.add_argument('--sig-assoc-bed', help='Output BED of significant ' +
'segment-phenotype pairs and their corresponding association ' +
'statistics.')
parser.add_argument('--null-variance-estimates-tsv', help='Output .tsv with ' +
'full table of estimated null variances.', dest='null_var_out')
parser.add_argument('--prefix', help='Prefix for naming loci & associations.')
args = parser.parse_args()
# Set block prefix
block_prefix_components = ['segment']
if args.prefix is not None:
block_prefix_components.insert(0, args.prefix)
if args.cnv is not None and args.cnv != 'NS':
block_prefix_components.insert(0, args.cnv)
else:
block_prefix_components.insert(0, 'sig')
block_prefix = '_'.join(block_prefix_components)
# Process data per hpo
hpo_data = load_all_hpos(args.statslist, args.secondary_p_cutoff,
args.min_nominal, args.secondary_or_nom,
args.fdr_q_cutoff, args.secondary_for_fdr,
args.search_distance, block_prefix,
args.refine_secondary, args.cs_val)
# Read list of developmental HPOs:
if args.developmental_hpos is not None:
dev_hpos = [x.rstrip() for x in open(args.developmental_hpos).readlines()]
else:
dev_hpos = []
# Compute CNV covariance for all window pairs involving at least one significant window
if args.cnv_bed is not None:
cnv_cov = calc_cnv_cov(args.cnv_bed, hpo_data, args.cnv)
else:
cnv_cov = None
# Collapse blocks with high CNV covariance prior to updated refinement
hpo_data = merge_blocks_by_cov(hpo_data, cnv_cov, args.window_jac,
args.refine_pad, args.cs_val, block_prefix)
# Group blocks by effect size quantile
hpo_data = assign_or_quantiles(hpo_data, args.n_or_bins)
# Estimate null variance
if args.var_est_statslist is not None:
var_est_statslist = args.var_est_statslist
else:
var_est_statslist = args.statslist
Wsq = estimate_null_variance(hpo_data, args.gs_list, var_est_statslist,
dev_hpos, args.n_or_bins, args.split_gw_fdr,
args.single_gs_hpo)
# Update original refinement results with BMA of re-estimated null variances
hpo_data = update_refine(hpo_data, Wsq, args.split_gw_fdr, args.cs_val,
args.refine_pad, cnv_cov, args.window_jac)
# Split blocks based on CNV covariance, if optioned
if cnv_cov is not None:
hpo_data = split_blocks_by_cov(hpo_data, cnv_cov, args.block_jac, block_prefix)
# Read dict of N_case per HPO
ncase_df = pd.read_csv(args.hpos_by_cohort, sep='\t').loc[:, '#HPO Total'.split()]
ncase_df.index = ncase_df.iloc[:, 0]
ncase_dict = ncase_df.drop(columns='#HPO').transpose().to_dict(orient='records')[0]
# Joint refinement of overlapping credible intervals across HPOs, if optioned
if args.joint_credsets:
# First pass of clustering overlapping credible sets across HPOs
# While attempting to split clusters such that each cluster has only one block per HPO
clustered_loci_v1 = cluster_credsets(hpo_data, block_merge_dist=args.refine_pad,
kmeans_subclustering=True)
# Joint refinement of clustered blocks
hpo_data = joint_refine_all(hpo_data, clustered_loci_v1, args.cs_val,
cnv_cov, args.window_jac, ncase_dict,
args.refine_pad)
# Remove any lingering redundant blocks after joint refinement
hpo_data = prune_redundant_blocks(hpo_data)
# Rename all blocks according to cytobands of credible sets, if optioned
if args.cytobands is not None:
hpo_data = rename_all_blocks(hpo_data, args.cytobands, block_prefix)
# Format & write final table of significant associations
if args.sig_assoc_bed is not None:
sig_assoc_bed = open(args.sig_assoc_bed, 'w')
output_assoc_bed(hpo_data, args.cytobands, sig_assoc_bed, args.cnv)
# Cluster final associations based on simple overlap between credints
if args.joint_credsets:
final_loci = cluster_credsets(hpo_data, block_merge_dist=0)
else:
final_loci = cluster_credsets(hpo_data, block_merge_dist=args.refine_pad)
# Format & write final table of significant regions
# No joint refinement necessary at this step because it has already been accomplished above
if args.sig_loci_bed is not None:
sig_loci_bed = open(args.sig_loci_bed, 'w')
output_loci_bed(hpo_data, final_loci, args.cytobands, sig_loci_bed,
ncase_dict, args.cnv, block_prefix.replace('_segment', ''),
False, args.cs_val, cnv_cov, args.window_jac)
if __name__ == '__main__':
main()
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# coding: utf-8
# pylint: disable=arguments-differ
""" losses for training neural networks """
__all__ = ['Loss', 'L2Loss', 'L1Loss',
'SigmoidBinaryCrossEntropyLoss', 'SigmoidBCELoss',
'SoftmaxCrossEntropyLoss', 'SoftmaxCELoss',
'KLDivLoss', 'CTCLoss', 'HuberLoss', 'HingeLoss',
'SquaredHingeLoss', 'LogisticLoss', 'TripletLoss', 'PoissonNLLLoss', 'CosineEmbeddingLoss', 'SDMLLoss']
import numpy as np
from .. import ndarray
from ..base import numeric_types
from .block import HybridBlock
from ..util import is_np_array
def _apply_weighting(F, loss, weight=None, sample_weight=None):
"""Apply weighting to loss.
Parameters
----------
loss : Symbol
The loss to be weighted.
weight : float or None
Global scalar weight for loss.
sample_weight : Symbol or None
Per sample weighting. Must be broadcastable to
the same shape as loss. For example, if loss has
shape (64, 10) and you want to weight each sample
in the batch separately, `sample_weight` should have
shape (64, 1).
Returns
-------
loss : Symbol
Weighted loss
"""
if sample_weight is not None:
if is_np_array():
loss = loss * sample_weight
else:
loss = F.broadcast_mul(loss, sample_weight)
if weight is not None:
assert isinstance(weight, numeric_types), "weight must be a number"
loss = loss * weight
return loss
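# Illustrative sketch (not part of the original module): a (batch, 1)
# sample_weight broadcasts across the per-element loss, zeroing out or
# rescaling whole samples. Values are made up for demonstration.
def _demo_sample_weighting():
    import numpy as np
    loss = np.ones((4, 3))
    sample_weight = np.array([[1.], [0.], [2.], [1.]])   # shape (4, 1)
    weighted = loss * sample_weight
    assert weighted.shape == (4, 3) and weighted[1].sum() == 0.0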
def _reshape_like(F, x, y):
"""Reshapes x to the same shape as y."""
if F is ndarray:
return x.reshape(y.shape)
elif is_np_array():
F = F.npx
return F.reshape_like(x, y)
def _batch_mean(F, loss, batch_axis):
"""Return mean on the specified batch axis, not keeping the axis"""
if is_np_array():
axes = list(range(loss.ndim))
del axes[batch_axis]
return F.np.mean(loss, axis=axes)
else:
return F.mean(loss, axis=batch_axis, exclude=True)
def _batch_sum(F, loss, batch_axis):
"""Return sum on the specified batch axis, not keeping the axis"""
if is_np_array():
axes = list(range(loss.ndim))
del axes[batch_axis]
return F.np.sum(loss, axis=axes)
else:
return F.sum(loss, axis=batch_axis, exclude=True)
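# Illustrative sketch (not part of the original module): reducing over every
# axis except the batch axis turns an arbitrary-shape loss into one value
# per sample, mirroring _batch_mean/_batch_sum above (shapes made up).
def _demo_batch_reduction():
    import numpy as np
    loss = np.ones((4, 3, 2))
    non_batch_axes = tuple(i for i in range(loss.ndim) if i != 0)
    assert np.mean(loss, axis=non_batch_axes).shape == (4,)
    assert np.sum(loss, axis=non_batch_axes)[0] == 6.0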
class Loss(HybridBlock):
"""Base class for loss.
Parameters
----------
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
"""
def __init__(self, weight, batch_axis, **kwargs):
super(Loss, self).__init__(**kwargs)
self._weight = weight
self._batch_axis = batch_axis
def __repr__(self):
s = '{name}(batch_axis={_batch_axis}, w={_weight})'
return s.format(name=self.__class__.__name__, **self.__dict__)
def hybrid_forward(self, F, x, *args, **kwargs):
"""Overrides to construct symbolic graph for this `Block`.
Parameters
----------
x : Symbol or NDArray
The first input tensor.
*args : list of Symbol or list of NDArray
Additional input tensors.
"""
# pylint: disable= invalid-name
raise NotImplementedError
class L2Loss(Loss):
r"""Calculates the mean squared error between `label` and `pred`.
.. math:: L = \frac{1}{2} \sum_i \vert {label}_i - {pred}_i \vert^2.
`label` and `pred` can have arbitrary shape as long as they have the same
number of elements.
Parameters
----------
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **pred**: prediction tensor with arbitrary shape
- **label**: target tensor with the same size as pred.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
    - **loss**: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.
"""
def __init__(self, weight=1., batch_axis=0, **kwargs):
super(L2Loss, self).__init__(weight, batch_axis, **kwargs)
def hybrid_forward(self, F, pred, label, sample_weight=None):
square_fn = F.np.square if is_np_array() else F.square
label = _reshape_like(F, label, pred)
loss = square_fn(label - pred)
loss = _apply_weighting(F, loss, self._weight / 2, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
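# Illustrative sketch (not part of the original module): with the default
# weight of 1, L2Loss reduces to 0.5 * mean((label - pred)^2) per sample.
# The arrays below are made up for demonstration.
def _demo_l2_formula():
    import numpy as np
    pred = np.array([[1., 2.], [0., 0.]])
    label = np.array([[0., 2.], [0., 2.]])
    per_sample = 0.5 * np.mean((label - pred) ** 2, axis=1)
    assert np.allclose(per_sample, [0.25, 1.0])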
class L1Loss(Loss):
r"""Calculates the mean absolute error between `label` and `pred`.
.. math:: L = \sum_i \vert {label}_i - {pred}_i \vert.
`label` and `pred` can have arbitrary shape as long as they have the same
number of elements.
Parameters
----------
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **pred**: prediction tensor with arbitrary shape
- **label**: target tensor with the same size as pred.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
    - **loss**: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.
"""
def __init__(self, weight=None, batch_axis=0, **kwargs):
super(L1Loss, self).__init__(weight, batch_axis, **kwargs)
def hybrid_forward(self, F, pred, label, sample_weight=None):
abs_fn = F.np.abs if is_np_array() else F.abs
label = _reshape_like(F, label, pred)
loss = abs_fn(label - pred)
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
class SigmoidBinaryCrossEntropyLoss(Loss):
r"""The cross-entropy loss for binary classification. (alias: SigmoidBCELoss)
BCE loss is useful when training logistic regression. If `from_sigmoid`
is False (default), this loss computes:
.. math::
prob = \frac{1}{1 + \exp(-{pred})}
L = - \sum_i {label}_i * \log({prob}_i) * pos\_weight +
(1 - {label}_i) * \log(1 - {prob}_i)
If `from_sigmoid` is True, this loss computes:
.. math::
L = - \sum_i {label}_i * \log({pred}_i) * pos\_weight +
(1 - {label}_i) * \log(1 - {pred}_i)
A tensor `pos_weight > 1` decreases the false negative count, hence increasing
the recall.
Conversely setting `pos_weight < 1` decreases the false positive count and
increases the precision.
`pred` and `label` can have arbitrary shape as long as they have the same
number of elements.
Parameters
----------
from_sigmoid : bool, default is `False`
        Whether the input is from the output of sigmoid. Setting this to False makes
        the loss compute the sigmoid and BCE together, which is more numerically
        stable via the log-sum-exp trick.
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **pred**: prediction tensor with arbitrary shape
- **label**: target tensor with values in range `[0, 1]`. Must have the
same size as `pred`.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
- **pos_weight**: a weighting tensor of positive examples. Must be a vector with length
      equal to the number of classes. For example, if pred has shape (64, 10),
pos_weight should have shape (1, 10).
Outputs:
    - **loss**: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.
"""
def __init__(self, from_sigmoid=False, weight=None, batch_axis=0, **kwargs):
super(SigmoidBinaryCrossEntropyLoss, self).__init__(
weight, batch_axis, **kwargs)
self._from_sigmoid = from_sigmoid
def hybrid_forward(self, F, pred, label, sample_weight=None, pos_weight=None):
if is_np_array():
relu_fn = F.npx.relu
act_fn = F.npx.activation
abs_fn = F.np.abs
mul_fn = F.np.multiply
log_fn = F.np.log
else:
relu_fn = F.relu
act_fn = F.Activation
abs_fn = F.abs
mul_fn = F.broadcast_mul
log_fn = F.log
label = _reshape_like(F, label, pred)
if not self._from_sigmoid:
if pos_weight is None:
# We use the stable formula: max(x, 0) - x * z + log(1 + exp(-abs(x)))
loss = relu_fn(pred) - pred * label + \
act_fn(-abs_fn(pred), act_type='softrelu')
else:
# We use the stable formula: x - x * z + (1 + z * pos_weight - z) * \
# (log(1 + exp(-abs(x))) + max(-x, 0))
log_weight = 1 + mul_fn(pos_weight - 1, label)
loss = pred - pred * label + log_weight * \
(act_fn(-abs_fn(pred), act_type='softrelu') + relu_fn(-pred))
else:
eps = 1e-12
if pos_weight is None:
loss = -(log_fn(pred + eps) * label
+ log_fn(1. - pred + eps) * (1. - label))
else:
loss = -(mul_fn(log_fn(pred + eps) * label, pos_weight)
+ log_fn(1. - pred + eps) * (1. - label))
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
SigmoidBCELoss = SigmoidBinaryCrossEntropyLoss
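# Illustrative sketch (not part of the original module): numeric check that
# the stable form max(x, 0) - x*z + log(1 + exp(-|x|)) used above matches the
# naive sigmoid + log cross-entropy. Logits and labels are made up.
def _demo_stable_bce():
    import numpy as np
    x = np.array([3.0, -2.0, 0.5])
    z = np.array([1.0, 0.0, 1.0])
    prob = 1. / (1. + np.exp(-x))
    naive = -(z * np.log(prob) + (1. - z) * np.log(1. - prob))
    stable = np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))
    assert np.allclose(naive, stable)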
class SoftmaxCrossEntropyLoss(Loss):
r"""Computes the softmax cross entropy loss. (alias: SoftmaxCELoss)
If `sparse_label` is `True` (default), label should contain integer
category indicators:
.. math::
\DeclareMathOperator{softmax}{softmax}
p = \softmax({pred})
L = -\sum_i \log p_{i,{label}_i}
`label`'s shape should be `pred`'s shape with the `axis` dimension removed.
i.e. for `pred` with shape (1,2,3,4) and `axis = 2`, `label`'s shape should
be (1,2,4).
If `sparse_label` is `False`, `label` should contain probability distribution
and `label`'s shape should be the same with `pred`:
.. math::
p = \softmax({pred})
L = -\sum_i \sum_j {label}_j \log p_{ij}
Parameters
----------
axis : int, default -1
The axis to sum over when computing softmax and entropy.
sparse_label : bool, default True
Whether label is an integer array instead of probability distribution.
from_logits : bool, default False
Whether input is a log probability (usually from log_softmax) instead
of unnormalized numbers.
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **pred**: the prediction tensor, where the `batch_axis` dimension
ranges over batch size and `axis` dimension ranges over the number
of classes.
- **label**: the truth tensor. When `sparse_label` is True, `label`'s
shape should be `pred`'s shape with the `axis` dimension removed.
i.e. for `pred` with shape (1,2,3,4) and `axis = 2`, `label`'s shape
should be (1,2,4) and values should be integers between 0 and 2. If
`sparse_label` is False, `label`'s shape must be the same as `pred`
and values should be floats in the range `[0, 1]`.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as label. For example, if label has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
    - **loss**: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.
"""
def __init__(self, axis=-1, sparse_label=True, from_logits=False, weight=None,
batch_axis=0, **kwargs):
super(SoftmaxCrossEntropyLoss, self).__init__(
weight, batch_axis, **kwargs)
self._axis = axis
self._sparse_label = sparse_label
self._from_logits = from_logits
def hybrid_forward(self, F, pred, label, sample_weight=None):
if is_np_array():
log_softmax_fn = F.npx.log_softmax
pick_fn = F.npx.pick
else:
log_softmax_fn = F.log_softmax
pick_fn = F.pick
if not self._from_logits:
pred = log_softmax_fn(pred, self._axis)
if self._sparse_label:
loss = -pick_fn(pred, label, axis=self._axis, keepdims=True)
else:
label = _reshape_like(F, label, pred)
loss = -(pred * label).sum(axis=self._axis, keepdims=True)
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
SoftmaxCELoss = SoftmaxCrossEntropyLoss
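# Illustrative sketch (not part of the original module): with sparse labels,
# the loss picks -log_softmax(pred) at each label index, as the log_softmax
# and pick calls do above. Logits and labels are made up.
def _demo_softmax_ce():
    import numpy as np
    logits = np.array([[2.0, 0.5, -1.0]])
    label = np.array([0])
    logp = logits - np.log(np.sum(np.exp(logits), axis=-1, keepdims=True))
    loss = -logp[np.arange(len(label)), label]
    assert loss.shape == (1,) and loss[0] > 0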
class KLDivLoss(Loss):
r"""The Kullback-Leibler divergence loss.
    KL divergence measures the distance between continuous distributions. It
can be used to minimize information loss when approximating a distribution.
If `from_logits` is True (default), loss is defined as:
.. math::
L = \sum_i {label}_i * \big[\log({label}_i) - {pred}_i\big]
If `from_logits` is False, loss is defined as:
.. math::
\DeclareMathOperator{softmax}{softmax}
prob = \softmax({pred})
L = \sum_i {label}_i * \big[\log({label}_i) - \log({prob}_i)\big]
`label` and `pred` can have arbitrary shape as long as they have the same
number of elements.
Parameters
----------
from_logits : bool, default is `True`
Whether the input is log probability (usually from log_softmax) instead
of unnormalized numbers.
axis : int, default -1
The dimension along with to compute softmax. Only used when `from_logits`
is False.
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **pred**: prediction tensor with arbitrary shape. If `from_logits` is
True, `pred` should be log probabilities. Otherwise, it should be
unnormalized predictions, i.e. from a dense layer.
- **label**: truth tensor with values in range `(0, 1)`. Must have
the same size as `pred`.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
    - **loss**: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.
References
----------
`Kullback-Leibler divergence
<https://en.wikipedia.org/wiki/Kullback-Leibler_divergence>`_
"""
def __init__(self, from_logits=True, axis=-1, weight=None, batch_axis=0,
**kwargs):
super(KLDivLoss, self).__init__(weight, batch_axis, **kwargs)
self._from_logits = from_logits
self._axis = axis
def hybrid_forward(self, F, pred, label, sample_weight=None):
if is_np_array():
log_softmax_fn = F.npx.log_softmax
log_fn = F.np.log
else:
log_softmax_fn = F.log_softmax
log_fn = F.log
if not self._from_logits:
pred = log_softmax_fn(pred, self._axis)
loss = label * (log_fn(label + 1e-12) - pred)
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
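# Illustrative sketch (not part of the original module): KL(label || pred) is
# non-negative and zero when the two distributions coincide. Probabilities
# below are made up for demonstration.
def _demo_kl():
    import numpy as np
    label = np.array([0.5, 0.5])
    pred_logp = np.log(np.array([0.25, 0.75]))
    kl = np.sum(label * (np.log(label) - pred_logp))
    assert kl > 0
    assert np.isclose(np.sum(label * (np.log(label) - np.log(label))), 0)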
class CTCLoss(Loss):
r"""Connectionist Temporal Classification Loss.
Parameters
----------
layout : str, default 'NTC'
Layout of prediction tensor. 'N', 'T', 'C' stands for batch size,
sequence length, and alphabet_size respectively.
label_layout : str, default 'NT'
Layout of the labels. 'N', 'T' stands for batch size, and sequence
length respectively.
weight : float or None
Global scalar weight for loss.
Inputs:
- **pred**: unnormalized prediction tensor (before softmax).
Its shape depends on `layout`. If `layout` is 'TNC', pred
should have shape `(sequence_length, batch_size, alphabet_size)`.
Note that in the last dimension, index `alphabet_size-1` is reserved
for internal use as blank label. So `alphabet_size` is one plus the
actual alphabet size.
- **label**: zero-based label tensor. Its shape depends on `label_layout`.
If `label_layout` is 'TN', `label` should have shape
`(label_sequence_length, batch_size)`.
- **pred_lengths**: optional (default None), used for specifying the
length of each entry when different `pred` entries in the same batch
have different lengths. `pred_lengths` should have shape `(batch_size,)`.
- **label_lengths**: optional (default None), used for specifying the
length of each entry when different `label` entries in the same batch
have different lengths. `label_lengths` should have shape `(batch_size,)`.
Outputs:
- **loss**: output loss has shape `(batch_size,)`.
**Example**: suppose the vocabulary is `[a, b, c]`, and in one batch we
have three sequences 'ba', 'cbb', and 'abac'. We can index the labels as
`{'a': 0, 'b': 1, 'c': 2, blank: 3}`. Then `alphabet_size` should be 4,
where label 3 is reserved for internal use by `CTCLoss`. We then need to
pad each sequence with `-1` to make a rectangular `label` tensor::
        [[1, 0, -1, -1],
         [2, 1, 1, -1],
         [0, 1, 0, 2]]
References
----------
`Connectionist Temporal Classification: Labelling Unsegmented
Sequence Data with Recurrent Neural Networks
<http://www.cs.toronto.edu/~graves/icml_2006.pdf>`_
"""
def __init__(self, layout='NTC', label_layout='NT', weight=None, **kwargs):
assert layout in ['NTC', 'TNC'],\
"Only 'NTC' and 'TNC' layouts for pred are supported. Got: %s" % layout
assert label_layout in ['NT', 'TN'],\
"Only 'NT' and 'TN' layouts for label are supported. Got: %s" % label_layout
self._layout = layout
self._label_layout = label_layout
batch_axis = label_layout.find('N')
super(CTCLoss, self).__init__(weight, batch_axis, **kwargs)
def hybrid_forward(self, F, pred, label,
pred_lengths=None, label_lengths=None, sample_weight=None):
if is_np_array():
swapaxes_fn = F.np.swapaxes
ctc_fn = F.npx.ctc_loss
else:
swapaxes_fn = F.swapaxes
ctc_fn = F.ctc_loss
if self._layout == 'NTC':
pred = swapaxes_fn(pred, 0, 1)
if self._batch_axis == 1:
label = swapaxes_fn(label, 0, 1)
loss = ctc_fn(pred, label, pred_lengths, label_lengths,
use_data_lengths=pred_lengths is not None,
use_label_lengths=label_lengths is not None,
blank_label='last')
return _apply_weighting(F, loss, self._weight, sample_weight)
class HuberLoss(Loss):
r"""Calculates smoothed L1 loss that is equal to L1 loss if absolute error
exceeds rho but is equal to L2 loss otherwise. Also called SmoothedL1 loss.
.. math::
L = \sum_i \begin{cases} \frac{1}{2 {rho}} ({label}_i - {pred}_i)^2 &
\text{ if } |{label}_i - {pred}_i| < {rho} \\
|{label}_i - {pred}_i| - \frac{{rho}}{2} &
\text{ otherwise }
\end{cases}
`label` and `pred` can have arbitrary shape as long as they have the same
number of elements.
Parameters
----------
rho : float, default 1
Threshold for trimmed mean estimator.
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **pred**: prediction tensor with arbitrary shape
- **label**: target tensor with the same size as pred.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
    - **loss**: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.
"""
def __init__(self, rho=1, weight=None, batch_axis=0, **kwargs):
super(HuberLoss, self).__init__(weight, batch_axis, **kwargs)
self._rho = rho
def hybrid_forward(self, F, pred, label, sample_weight=None):
if is_np_array():
abs_fn = F.np.abs
where_fn = F.np.where
square_fn = F.np.square
else:
abs_fn = F.abs
where_fn = F.where
square_fn = F.square
label = _reshape_like(F, label, pred)
loss = abs_fn(label - pred)
loss = where_fn(loss > self._rho, loss - 0.5 * self._rho,
(0.5 / self._rho) * square_fn(loss))
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
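# Illustrative sketch (not part of the original module): the Huber loss is
# quadratic for errors below rho and linear beyond it, matching the where()
# branch above. Errors are made up for demonstration.
def _demo_huber():
    import numpy as np
    rho = 1.0
    err = np.array([0.5, 3.0])
    loss = np.where(err > rho, err - 0.5 * rho, (0.5 / rho) * err ** 2)
    assert np.allclose(loss, [0.125, 2.5])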
class HingeLoss(Loss):
r"""Calculates the hinge loss function often used in SVMs:
.. math::
L = \sum_i max(0, {margin} - {pred}_i \cdot {label}_i)
where `pred` is the classifier prediction and `label` is the target tensor
containing values -1 or 1. `label` and `pred` must have the same number of
elements.
Parameters
----------
margin : float
The margin in hinge loss. Defaults to 1.0
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **pred**: prediction tensor with arbitrary shape.
- **label**: truth tensor with values -1 or 1. Must have the same size
as pred.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
    - **loss**: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.
"""
def __init__(self, margin=1, weight=None, batch_axis=0, **kwargs):
super(HingeLoss, self).__init__(weight, batch_axis, **kwargs)
self._margin = margin
def hybrid_forward(self, F, pred, label, sample_weight=None):
relu_fn = F.npx.relu if is_np_array() else F.relu
label = _reshape_like(F, label, pred)
loss = relu_fn(self._margin - pred * label)
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
class SquaredHingeLoss(Loss):
r"""Calculates the soft-margin loss function used in SVMs:
.. math::
L = \sum_i max(0, {margin} - {pred}_i \cdot {label}_i)^2
where `pred` is the classifier prediction and `label` is the target tensor
containing values -1 or 1. `label` and `pred` can have arbitrary shape as
long as they have the same number of elements.
Parameters
----------
margin : float
The margin in hinge loss. Defaults to 1.0
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **pred**: prediction tensor with arbitrary shape
- **label**: truth tensor with values -1 or 1. Must have the same size
as pred.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
    - **loss**: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.
"""
def __init__(self, margin=1, weight=None, batch_axis=0, **kwargs):
super(SquaredHingeLoss, self).__init__(weight, batch_axis, **kwargs)
self._margin = margin
def hybrid_forward(self, F, pred, label, sample_weight=None):
if is_np_array():
relu_fn = F.npx.relu
square_fn = F.np.square
else:
relu_fn = F.relu
square_fn = F.square
label = _reshape_like(F, label, pred)
loss = square_fn(relu_fn(self._margin - pred * label))
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
class LogisticLoss(Loss):
r"""Calculates the logistic loss (for binary losses only):
.. math::
L = \sum_i \log(1 + \exp(- {pred}_i \cdot {label}_i))
where `pred` is the classifier prediction and `label` is the target tensor
containing values -1 or 1 (0 or 1 if `label_format` is binary).
`label` and `pred` can have arbitrary shape as long as they have the same number of elements.
Parameters
----------
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
label_format : str, default 'signed'
Can be either 'signed' or 'binary'. If the label_format is 'signed', all label values should
be either -1 or 1. If the label_format is 'binary', all label values should be either
0 or 1.
Inputs:
- **pred**: prediction tensor with arbitrary shape.
- **label**: truth tensor with values -1/1 (label_format is 'signed')
or 0/1 (label_format is 'binary'). Must have the same size as pred.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
    - **loss**: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.
"""
def __init__(self, weight=None, batch_axis=0, label_format='signed', **kwargs):
super(LogisticLoss, self).__init__(weight, batch_axis, **kwargs)
self._label_format = label_format
if self._label_format not in ["signed", "binary"]:
raise ValueError("label_format can only be signed or binary, recieved %s."
% label_format)
def hybrid_forward(self, F, pred, label, sample_weight=None):
if is_np_array():
relu_fn = F.npx.relu
act_fn = F.npx.activation
abs_fn = F.np.abs
else:
relu_fn = F.relu
act_fn = F.Activation
abs_fn = F.abs
label = _reshape_like(F, label, pred)
if self._label_format == 'signed':
label = (label + 1.0) / 2.0 # Transform label to be either 0 or 1
# Use a stable formula in computation
loss = relu_fn(pred) - pred * label + \
act_fn(-abs_fn(pred), act_type='softrelu')
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
class TripletLoss(Loss):
r"""Calculates triplet loss given three input tensors and a positive margin.
Triplet loss measures the relative similarity between a positive
example, a negative example, and prediction:
.. math::
        L = \sum_i \max(\Vert {pos}_i - {pred}_i \Vert_2^2 -
        \Vert {neg}_i - {pred}_i \Vert_2^2 + {margin}, 0)
    `positive`, `negative`, and `pred` can have arbitrary shape as long as they
have the same number of elements.
Parameters
----------
margin : float
Margin of separation between correct and incorrect pair.
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **pred**: prediction tensor with arbitrary shape
- **positive**: positive example tensor with arbitrary shape. Must have
the same size as pred.
- **negative**: negative example tensor with arbitrary shape Must have
the same size as pred.
Outputs:
- **loss**: loss tensor with shape (batch_size,).
"""
def __init__(self, margin=1, weight=None, batch_axis=0, **kwargs):
super(TripletLoss, self).__init__(weight, batch_axis, **kwargs)
self._margin = margin
def hybrid_forward(self, F, pred, positive, negative, sample_weight=None):
if is_np_array():
relu_fn = F.npx.relu
square_fn = F.np.square
else:
relu_fn = F.relu
square_fn = F.square
positive = _reshape_like(F, positive, pred)
negative = _reshape_like(F, negative, pred)
loss = _batch_sum(F, square_fn(positive - pred) - square_fn(negative - pred), self._batch_axis)
loss = relu_fn(loss + self._margin)
return _apply_weighting(F, loss, self._weight, sample_weight)
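# Hedged usage sketch (editor's addition): TripletLoss pulls `pred` toward
# `positive` and pushes it away from `negative` by at least `margin`.
def _example_triplet_loss():
    import mxnet as mx
    loss_fn = TripletLoss(margin=1)
    pred = mx.nd.random.uniform(shape=(4, 8))
    positive = pred + 0.05 * mx.nd.random.normal(shape=(4, 8))
    negative = mx.nd.random.uniform(shape=(4, 8))
    print(loss_fn(pred, positive, negative))  # shape: (4,)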
class PoissonNLLLoss(Loss):
r"""For a target (Random Variable) in a Poisson distribution, the function calculates the Negative
Log likelihood loss.
PoissonNLLLoss measures the loss accrued from a poisson regression prediction made by the model.
.. math::
        L = \text{pred} - \text{target} * \log(\text{pred}) + \log(\text{target!})
`target`, 'pred' can have arbitrary shape as long as they have the same number of elements.
Parameters
----------
from_logits : boolean, default True
indicating whether log(predicted) value has already been computed. If True, the loss is computed as
:math:`\exp(\text{pred}) - \text{target} * \text{pred}`, and if False, then loss is computed as
        :math:`\text{pred} - \text{target} * \log(\text{pred}+\text{epsilon})`. The default value of `epsilon` is 1e-08.
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
compute_full: boolean, default False
        Indicates whether to add an approximation (Stirling factor) for the factorial term in the formula for the loss.
The Stirling factor is:
:math:`\text{target} * \log(\text{target}) - \text{target} + 0.5 * \log(2 * \pi * \text{target})`
epsilon: float, default 1e-08
This is to avoid calculating log(0) which is not defined.
Inputs:
- **pred**: Predicted value
- **target**: Random variable(count or number) which belongs to a Poisson distribution.
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
- **loss**: Average loss (shape=(1,1)) of the loss tensor with shape (batch_size,).
"""
def __init__(self, weight=None, from_logits=True, batch_axis=0, compute_full=False, **kwargs):
super(PoissonNLLLoss, self).__init__(weight, batch_axis, **kwargs)
self._from_logits = from_logits
self._compute_full = compute_full
def hybrid_forward(self, F, pred, target, sample_weight=None, epsilon=1e-08):
if is_np_array():
exp_fn = F.np.exp
log_fn = F.np.log
else:
exp_fn = F.exp
log_fn = F.log
target = _reshape_like(F, target, pred)
if self._from_logits:
loss = exp_fn(pred) - target * pred
else:
loss = pred - target * log_fn(pred + epsilon)
if self._compute_full:
# Using numpy's pi value
stirling_factor = target * \
log_fn(target) - target + 0.5 * log_fn(2 * target * np.pi)
target_gt_1 = target > 1
stirling_factor *= target_gt_1
loss += stirling_factor
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
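# Hedged usage sketch (editor's addition): with from_logits=True the
# prediction is read as log(rate), which keeps the exp/log computation stable.
def _example_poisson_nll_loss():
    import mxnet as mx
    loss_fn = PoissonNLLLoss(from_logits=True)
    log_rate = mx.nd.array([[0.0], [1.0]])  # implied rates exp(0) and exp(1)
    counts = mx.nd.array([[1.0], [3.0]])
    print(loss_fn(log_rate, counts))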
class CosineEmbeddingLoss(Loss):
r"""For a target label 1 or -1, vectors input1 and input2, the function computes the cosine distance
between the vectors. This can be interpreted as how similar/dissimilar two input vectors are.
.. math::
        L = \sum_i \begin{cases} 1 - {cos\_sim({input1}_i, {input2}_i)} & \text{ if } {label}_i = 1\\
             \max(0, {cos\_sim({input1}_i, {input2}_i)} - {margin}) & \text{ if } {label}_i = -1 \end{cases}\\
        cos\_sim({input1}_i, {input2}_i) = \frac{{input1}_i \cdot {input2}_i}{\Vert {input1}_i \Vert \cdot \Vert {input2}_i \Vert}
`input1`, `input2` can have arbitrary shape as long as they have the same number of elements.
Parameters
----------
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
margin : float
Margin of separation between correct and incorrect pair.
Inputs:
- **input1**: a tensor with arbitrary shape
- **input2**: another tensor with same shape as pred to which input1 is
compared for similarity and loss calculation
- **label**: A 1-D tensor indicating for each pair input1 and input2, target label is 1 or -1
- **sample_weight**: element-wise weighting tensor. Must be broadcastable
to the same shape as input1. For example, if input1 has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).
Outputs:
- **loss**: The loss tensor with shape (batch_size,).
"""
def __init__(self, weight=None, batch_axis=0, margin=0, **kwargs):
super(CosineEmbeddingLoss, self).__init__(weight, batch_axis, **kwargs)
self._margin = margin
def hybrid_forward(self, F, input1, input2, label, sample_weight=None):
if is_np_array():
where_fn = F.np.where
clip_fn = F.np.clip
else:
where_fn = F.where
clip_fn = F.clip
input1 = _reshape_like(F, input1, input2)
cos_sim = self._cosine_similarity(F, input1, input2)
label = _reshape_like(F, label, cos_sim)
loss = where_fn(label == 1,
1 - cos_sim,
clip_fn(cos_sim - self._margin, 0, 1 - self._margin))
loss = _apply_weighting(F, loss, self._weight, sample_weight)
return _batch_mean(F, loss, self._batch_axis)
def _cosine_similarity(self, F, x, y, axis=-1):
if is_np_array():
reshape_fn = F.npx.reshape
norm_fn = F.npx.norm
sum_fn = F.np.sum
full_fn = F.np.full
max_fn = F.np.maximum
else:
reshape_fn = F.reshape
norm_fn = F.norm
sum_fn = F.sum
full_fn = F.full
max_fn = F.broadcast_maximum
# Calculates the cosine similarity between 2 vectors
x_norm = reshape_fn(norm_fn(x, axis=axis), (-1, 1))
y_norm = reshape_fn(norm_fn(y, axis=axis), (-1, 1))
x_dot_y = reshape_fn(sum_fn(x * y, axis=axis), (-1, 1))
eps_arr = full_fn((1, 1), 1e-12)
return (x_dot_y / max_fn(x_norm * y_norm, eps_arr))
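# Hedged usage sketch (editor's addition): label 1 asks the pair to be
# similar; label -1 penalizes similarity only above `margin`.
def _example_cosine_embedding_loss():
    import mxnet as mx
    loss_fn = CosineEmbeddingLoss(margin=0.5)
    input1 = mx.nd.array([[1.0, 0.0], [0.0, 1.0]])
    input2 = mx.nd.array([[1.0, 0.1], [1.0, 0.0]])
    label = mx.nd.array([1, -1])
    print(loss_fn(input1, input2, label))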
class SDMLLoss(Loss):
r"""Calculates Batchwise Smoothed Deep Metric Learning (SDML) Loss given two input tensors and a smoothing weight
SDM Loss learns similarity between paired samples by using unpaired samples in the minibatch
as potential negative examples.
The loss is described in greater detail in
"Large Scale Question Paraphrase Retrieval with Smoothed Deep Metric Learning."
- by Bonadiman, Daniele, Anjishnu Kumar, and Arpit Mittal. arXiv preprint arXiv:1905.12786 (2019).
URL: https://arxiv.org/pdf/1905.12786.pdf
According to the authors, this loss formulation achieves comparable or higher accuracy to
Triplet Loss but converges much faster.
The loss assumes that the items in both tensors in each minibatch
are aligned such that x1[0] corresponds to x2[0] and all other datapoints in the minibatch are unrelated.
`x1` and `x2` are minibatches of vectors.
Parameters
----------
smoothing_parameter : float
Probability mass to be distributed over the minibatch. Must be < 1.0.
weight : float or None
Global scalar weight for loss.
batch_axis : int, default 0
The axis that represents mini-batch.
Inputs:
- **x1**: Minibatch of data points with shape (batch_size, vector_dim)
- **x2**: Minibatch of data points with shape (batch_size, vector_dim)
Each item in x2 is a positive sample for the same index in x1.
That is, x1[0] and x2[0] form a positive pair, x1[1] and x2[1] form a positive pair - and so on.
All data points in different rows should be decorrelated
Outputs:
- **loss**: loss tensor with shape (batch_size,).
"""
def __init__(self, smoothing_parameter=0.3, weight=1., batch_axis=0, **kwargs):
super(SDMLLoss, self).__init__(weight, batch_axis, **kwargs)
self.kl_loss = KLDivLoss(from_logits=True)
self.smoothing_parameter = smoothing_parameter # Smoothing probability mass
def _compute_distances(self, F, x1, x2):
"""
This function computes the euclidean distance between every vector
in the two batches in input.
"""
if is_np_array():
expand_dims_fn = F.np.expand_dims
broadcast_to_fn = F.np.broadcast_to
else:
expand_dims_fn = F.expand_dims
broadcast_to_fn = F.broadcast_to
# extracting sizes expecting [batch_size, dim]
assert x1.shape == x2.shape
batch_size, dim = x1.shape
# expanding both tensor form [batch_size, dim] to [batch_size, batch_size, dim]
x1_ = broadcast_to_fn(expand_dims_fn(x1, 1), [batch_size, batch_size, dim])
x2_ = broadcast_to_fn(expand_dims_fn(x2, 0), [batch_size, batch_size, dim])
# pointwise squared differences
squared_diffs = (x1_ - x2_)**2
# sum of squared differences distance
return squared_diffs.sum(axis=2)
def _compute_labels(self, F, batch_size):
"""
The function creates the label matrix for the loss.
It is an identity matrix of size [BATCH_SIZE x BATCH_SIZE]
labels:
[[1, 0]
[0, 1]]
after the proces the labels are smoothed by a small amount to
account for errors.
labels:
[[0.9, 0.1]
[0.1, 0.9]]
Pereyra, Gabriel, et al. "Regularizing neural networks by penalizing
confident output distributions." arXiv preprint arXiv:1701.06548 (2017).
"""
gold = F.eye(batch_size)
labels = gold * (1 - self.smoothing_parameter) + (1 - gold) * self.smoothing_parameter / (batch_size - 1)
return labels
def hybrid_forward(self, F, x1, x2):
"""
the function computes the kl divergence between the negative distances
(internally it compute a softmax casting into probabilities) and the
identity matrix.
This assumes that the two batches are aligned therefore the more similar
vector should be the one having the same id.
Batch1 Batch2
President of France French President
President of US American President
Given the question president of France in batch 1 the model will
learn to predict french president comparing it with all the other
vectors in batch 2
"""
if is_np_array():
log_softmax_fn = F.npx.log_softmax
else:
log_softmax_fn = F.log_softmax
batch_size = x1.shape[0]
labels = self._compute_labels(F, batch_size)
distances = self._compute_distances(F, x1, x2)
log_probabilities = log_softmax_fn(-distances, axis=1)
        # Multiply by the number of labels to obtain the correct loss (gluon kl_loss averages instead of summing).
        # PR#18423: the multiplier should have been x1.shape[1] rather than x1.shape[0].
        # After PR#18423, the extra multiplication is no longer needed.
return self.kl_loss(log_probabilities, labels.as_in_context(distances.context))
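# Hedged usage sketch (editor's addition): x1[i] and x2[i] must be aligned
# positive pairs; every other row of the minibatch acts as a soft negative.
def _example_sdml_loss():
    import mxnet as mx
    loss_fn = SDMLLoss(smoothing_parameter=0.3)
    x1 = mx.nd.random.uniform(shape=(8, 16))
    x2 = x1 + 0.01 * mx.nd.random.normal(shape=(8, 16))
    print(loss_fn(x1, x2))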
# -*- coding: utf-8 -*-
from base import Parameter, Channel, Estimator, Report, Constant
import numpy as np
def _cos_partial(x):
return (x/2.0 + 1/4.0 * np.sin(2.0*x))
def _sin_partial(x):
return (x/2.0 - 1/4.0 * np.sin(2.0*x))
class ThetaFFT(Parameter):
"""An angle of decesion from the |0> pole."""
def __init__(self, max_time, grains, sigma, k):
S = np.linspace(0, np.pi, grains+1)
super(ThetaFFT, self).__init__(S, max_time, "Theta")
start = np.random.rand()*2*np.pi
drift = np.random.normal(0.0, sigma, max_time+1)
drift = np.cumsum(drift)
self.val = np.mod(start + drift, np.pi)
self.k = k
def update(self, s, time):
w_x, w_z = np.sum(s[0]), np.sum(s[2])
if (w_x > 0) | (w_z > 0):
update = np.ones(len(self.M))
if (w_x > 0):
x_update = _cos_partial(self.S[1:] - self.hat[time]) \
- _cos_partial(self.S[:-1] - self.hat[time])
update = update * (x_update ** (2 * w_x))
if (w_z > 0):
z_update = _sin_partial(self.S[1:] - self.hat[time]) \
- _sin_partial(self.S[:-1] - self.hat[time])
update = update * (z_update ** (2 * w_z))
self.p = self.p * update
n = len(self.p)
k = self.k
        w = np.fft.fft(self.p)
        w = np.roll(w, n // 2)
        w[:n // 2 - k] = 0.0
        w[n // 2 + k:] = 0.0
        w = np.roll(w, n // 2)
        y = np.fft.ifft(w).real
        self.p = y / np.sum(y)
self.hat[time+1] = self.M[np.argmax(self.p)]
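# Hedged illustration (editor's addition): the update above low-passes the
# posterior by keeping only the 2k+1 lowest Fourier modes. The same filter,
# stated standalone for a 1-D probability vector p:
def _lowpass_posterior(p, k):
    n = len(p)
    w = np.fft.fft(p)
    w = np.roll(w, n // 2)     # center the zero-frequency mode
    w[:n // 2 - k] = 0.0       # drop high negative frequencies
    w[n // 2 + k:] = 0.0       # drop high positive frequencies
    w = np.roll(w, n // 2)     # undo the shift
    y = np.fft.ifft(w).real    # imaginary residue is numerical noise
    return y / np.sum(y)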
class ThetaNormals(Parameter):
"""An angle of decesion from the |0> pole."""
def __init__(self, max_time, grains, sigma, k):
S = np.linspace(0, np.pi, grains+1)
        super(ThetaNormals, self).__init__(S, max_time, "Theta")
start = np.random.rand()*2*np.pi
drift = np.random.normal(0.0, sigma, max_time+1)
drift = np.cumsum(drift)
self.val = np.mod(start + drift, np.pi)
self.k = k
def update(self, s, time):
w_x, w_z = np.sum(s[0]), np.sum(s[2])
if (w_x > 0) | (w_z > 0):
update = np.ones(len(self.M))
if (w_x > 0):
x_update = _cos_partial(self.S[1:] - self.hat[time]) \
- _cos_partial(self.S[:-1] - self.hat[time])
update = update * (x_update ** (2 * w_x))
if (w_z > 0):
z_update = _sin_partial(self.S[1:] - self.hat[time]) \
- _sin_partial(self.S[:-1] - self.hat[time])
update = update * (z_update ** (2 * w_z))
self.p = self.p * update
n = len(self.p)
k = self.k
        w = np.fft.fft(self.p)
        w = np.roll(w, n // 2)
        w[:n // 2 - k] = 0.0
        w[n // 2 + k:] = 0.0
        w = np.roll(w, n // 2)
        y = np.fft.ifft(w).real
        self.p = y / np.sum(y)
self.hat[time+1] = self.M[np.argmax(self.p)]
class OneAngleDephasingChannel(Channel):
def __init__(self, n, max_time):
super(OneAngleDephasingChannel, self).__init__(n, max_time)
def px(self, params, constants, time):
theta = params["Theta"].hat[time] - params["Theta"].val[time]
p = constants["p"]
return p.val * (np.cos(theta) ** 2)
def py(self, params, constant, time):
return 0.0
def pz(self, params, constants, time):
theta = params["Theta"].hat[time] - params["Theta"].val[time]
p = constants["p"]
return p.val * (np.sin(theta) ** 2)
class OneAngleDephasingEstimator(Estimator):
def __init__(self, params, constants):
super(OneAngleDephasingEstimator, self).__init__(params, constants)
import numpy as np
def process_state_static(state, dist_norm):
location = state.physics.location
rotation = state.physics.rotation
velocity = state.physics.velocity
ang_vel = state.physics.angular_velocity
boost = state.boost_amount
jumped = 1 if state.jumped else -1
double_j = 1 if state.double_jumped else -1
dist = dist_norm
return location.x, location.y, location.z, \
rotation.roll, rotation.pitch, rotation.yaw, \
velocity.x, velocity.y, velocity.z, \
ang_vel.x, ang_vel.y, ang_vel.z, \
dist, boost, jumped, double_j
class ExperienceReplay():
def __init__(self, current_state, action, next_state, reward, norm_dist, done):
self.current_state = current_state
self.action = action
self.next_state = next_state
self.reward = reward
self.done = done
self.norm_dist = norm_dist
def process_state(self, current=True):
if current:
state = self.current_state
else:
state = self.next_state
location = state.physics.location
rotation = state.physics.rotation
velocity = state.physics.velocity
ang_vel = state.physics.angular_velocity
boost = state.boost_amount
jumped = 1 if state.jumped else -1
double_j = 1 if state.double_jumped else -1
dist = self.norm_dist
return location.x, location.y, location.z, \
rotation.roll, rotation.pitch, rotation.yaw, \
velocity.x, velocity.y, velocity.z, \
ang_vel.x, ang_vel.y, ang_vel.z, \
dist, boost, jumped, double_j
def process_action(self):
steer = self.action.steer
throttle = self.action.throttle
pitch = self.action.pitch
yaw = self.action.yaw
roll = self.action.roll
jump = 1 if self.action.jump else -1
boost = 1 if self.action.boost else -1
handbrake = 1 if self.action.handbrake else -1
return steer, throttle, \
pitch, yaw, roll, \
jump, boost, handbrake
class ReplayBuffer():
def __init__(self, max_size):
self.max_size = max_size
self.replay_buffer = []
def push(self, experience):
self.replay_buffer.append(experience)
if len(self.replay_buffer) > self.max_size:
self.replay_buffer.pop(0)
def clear(self):
self.replay_buffer = []
    def sample_batch(self, batch_size=32):
        buff_size = len(self.replay_buffer)
        sample_size = min(batch_size, buff_size)
        # Indices are drawn with replacement, so a batch may contain duplicates.
        idxs = np.random.randint(0, buff_size, size=sample_size)
batch = []
for i in idxs:
batch.append(self.replay_buffer[i])
return batch
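# Hedged usage sketch (editor's addition): any object can stand in for an
# ExperienceReplay instance here; eviction and sampling behave the same way.
def _replay_buffer_demo():
    buf = ReplayBuffer(max_size=100)
    for step in range(150):
        buf.push({"step": step})  # entries beyond max_size evict the oldest
    batch = buf.sample_batch(batch_size=8)
    print(len(buf.replay_buffer), len(batch))  # -> 100 8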
import cv2 as cv
import numpy as np
def read():
return cv.imread("images/bolt.jpg")
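# Hedged extension (editor's addition): cv.imread returns None when the file
# is missing or unreadable, so a guarded variant is often safer.
def read_checked(path="images/bolt.jpg"):
    img = cv.imread(path)
    if img is None:
        raise FileNotFoundError("could not read image: %s" % path)
    return img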
import rospy
import bisect
import numpy as np
import time
import logging
import math
from time_msg_container import *
from plan_scoring import *
# Messages
from geometry_msgs.msg import Twist, PoseStamped, Point, Quaternion
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import Path, Odometry
from move_base_msgs.msg import MoveBaseActionFeedback
from std_msgs.msg import Empty
class Mission():
def __init__(self):
self.loc_msgs = TimeMsgContainer()
self.odom_msgs = TimeMsgContainer()
self.vel_cmd_msgs = TimeMsgContainer()
self.scan_msgs = TimeMsgContainer()
        self.goal = PoseStamped()
self.start_time = rospy.Time()
self.end_time = rospy.Time()
self.params = {'threshold_range': 0.2}
def get_trajectory(self):
traj = np.zeros([2, len(self.loc_msgs)])
for i, msg in enumerate(self.loc_msgs.msgs):
traj[:,i] = np.array([msg.pose.pose.position.x, msg.pose.pose.position.y])
return traj
    def get_trajectory_for_time_interval(self, interval):
        x_vec = []
        y_vec = []
for idx, msg in enumerate(self.loc_msgs.msgs):
if self.loc_msgs.times[idx] > interval[0] and self.loc_msgs.times[idx] < interval[1]:
x_vec.append(msg.pose.pose.position.x)
y_vec.append(msg.pose.pose.position.y)
return np.array([x_vec, y_vec])
def duration(self):
return (self.end_time - self.start_time).to_sec()
def obstacle_closeness(self):
cost = 0.0
        if not self.params['threshold_range']:
            raise ValueError('Please define a threshold range for the obstacle closeness feature.')
for scan in self.scan_msgs.msgs:
min_range = min(scan.ranges)
if min_range < self.params['threshold_range']:
cost += 1/min_range
return cost
def distance(self):
dist = 0.0
position_prev = self.odom_msgs.msgs[0].pose.pose.position
for ii in range(1, len(self.odom_msgs)):
position_new = self.odom_msgs.msgs[ii].pose.pose.position
dist += math.hypot(position_new.x - position_prev.x, position_new.y - position_prev.y)
position_prev = position_new
return dist
def avg_speed(self):
if self.duration() > 0.0:
return self.distance()/self.duration()
else:
return 0.0
def inverse_avg_speed(self):
avg_speed = self.avg_speed()
if avg_speed > 0:
return 1/avg_speed
else:
return 100
def acceleration_vector(self):
acc_trans = np.zeros([len(self.odom_msgs)])
acc_rot = np.zeros([len(self.odom_msgs)])
t_prev = self.odom_msgs.times[0]
v_trans_prev = math.hypot(self.odom_msgs.msgs[0].twist.twist.linear.x, self.odom_msgs.msgs[0].twist.twist.linear.y)
v_rot_prev = self.odom_msgs.msgs[0].twist.twist.angular.z
        for ii in range(1, len(self.odom_msgs)):
            t = self.odom_msgs.times[ii]
            v_trans = math.hypot(self.odom_msgs.msgs[ii].twist.twist.linear.x, self.odom_msgs.msgs[ii].twist.twist.linear.y)
            v_rot = self.odom_msgs.msgs[ii].twist.twist.angular.z
            t_diff = (t - t_prev).to_sec()
            if t_diff > 0:
                acc_trans[ii] = (v_trans - v_trans_prev) / t_diff
                acc_rot[ii] = (v_rot - v_rot_prev) / t_diff
            # Advance the previous sample so accelerations are computed per
            # step rather than always relative to the first message.
            t_prev = t
            v_trans_prev = v_trans
            v_rot_prev = v_rot
return acc_trans, acc_rot
def normalized_rotational_energy(self):
acc_trans, acc_rot = self.acceleration_vector()
return np.sum(np.abs(acc_rot))/self.duration()
def normalized_translational_energy(self):
acc_trans, acc_rot = self.acceleration_vector()
return np.sum(np.abs(acc_trans))/self.duration()
    def normalized_energy(self):
        # The original called an undefined self.energy(); sum the two
        # components computed above instead.
        return self.normalized_rotational_energy() + self.normalized_translational_energy()
def final_goal_dist(self):
try:
return math.hypot(self.odom_msgs.msgs[-1].pose.pose.position.x - self.goal.pose.position.x,
self.odom_msgs.msgs[-1].pose.pose.position.y - self.goal.pose.position.y)
except (AttributeError):
return math.hypot(self.odom_msgs.msgs[-1].pose.pose.position.x - self.goal.goal.target_pose.pose.position.x,
self.odom_msgs.msgs[-1].pose.pose.position.y - self.goal.goal.target_pose.pose.position.y)
def compute_cost(self):
feature_list = [self.normalized_rotational_energy,
self.normalized_translational_energy,
self.final_goal_dist,
self.distance,
self.duration]
feature_names = ['$E_{rot}$', '$E_{trans}$', '$d_{goal}$', '$dist$', '$time$']
feature_weights = [1.0, 1.0, 1.0, 1.0, 1.0]
cost = 0.0
cost_dict = {}
for n, f, w in zip(feature_names, feature_list, feature_weights):
feature_cost = f()
cost += w * feature_cost
cost_dict[n] = w * feature_cost
return cost, cost_dict
def adjust_start_stop_msgs(start_msgs, stop_msgs):
# All trajectories are complete
start_adjusted = TimeMsgContainer()
stop_adjusted = TimeMsgContainer()
# recording started during driving and ended after end of mission -> delete first stop message
if len(start_msgs) < len(stop_msgs) and start_msgs.times[0] > stop_msgs.times[0]:
start_adjusted = start_msgs
stop_adjusted.times = stop_msgs.times[1:]
stop_adjusted.msgs = stop_msgs.msgs[1:]
# recording started during driving and ended during driving -> delete last start and first stop message
elif len(start_msgs) == len(stop_msgs) and start_msgs.times[0] > stop_msgs.times[0] and start_msgs.times[-1] > stop_msgs.times[-1]:
start_adjusted.times = start_msgs.times[:-1]
start_adjusted.msgs = start_msgs.msgs[:-1]
stop_adjusted.times = stop_msgs.times[1:]
stop_adjusted.msgs = stop_msgs.msgs[1:]
# recording started before driving and ended during driving -> delete last start message
elif len(start_msgs) > len(stop_msgs) and start_msgs.times[-1] > stop_msgs.times[-1]:
stop_adjusted = stop_msgs
start_adjusted.times = start_msgs.times[:-1]
start_adjusted.msgs = start_msgs.msgs[:-1]
else:
start_adjusted = start_msgs
stop_adjusted = stop_msgs
return start_adjusted, stop_adjusted
def extract_missions(msg_container):
    # Only consider "full" trajectories from start to goal
start_msgs, stop_msgs = adjust_start_stop_msgs(msg_container['start'], msg_container['stop'])
missions = []
for ii in range(len(start_msgs)):
data = Mission()
data.start_time = start_msgs.times[ii]
data.end_time = stop_msgs.times[ii]
if len(msg_container['loc']) > 0:
data.loc_msgs = msg_container['loc'].get_data_for_interval(data.start_time, data.end_time)
if len(msg_container['joy']) > 0:
data.joy_msgs = msg_container['joy'].get_data_for_interval(data.start_time, data.end_time)
data.odom_msgs = msg_container['odom'].get_data_for_interval(data.start_time, data.end_time)
data.vel_cmd_msgs = msg_container['vel_cmd'].get_data_for_interval(data.start_time, data.end_time)
data.scan_msgs = msg_container['scan'].get_data_for_interval(data.start_time, data.end_time)
data.goal = msg_container['goal'].get_previous_msg(data.start_time)
missions.append(data)
return missions
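# Hedged usage sketch (editor's addition): `container` is assumed to map the
# topic keys used above ('start', 'stop', 'loc', 'joy', 'odom', 'vel_cmd',
# 'scan', 'goal') to populated TimeMsgContainer objects, e.g. read from a
# rosbag recording.
def score_all_missions(container):
    missions = extract_missions(container)
    for idx, mission in enumerate(missions):
        cost, cost_dict = mission.compute_cost()
        print('mission %d: total cost %.3f, breakdown %s' % (idx, cost, cost_dict))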
function solve()
n, a, b = [parse(Int, x) for x in split(readline())]
n - a + b
end
println(solve())
### A Pluto.jl notebook ###
# v0.15.1
using Markdown
using InteractiveUtils
# This Pluto notebook uses @bind for interactivity. When running this notebook outside of Pluto, the following 'mock version' of @bind gives bound variables a default value (instead of an error).
macro bind(def, element)
quote
local el = $(esc(element))
global $(esc(def)) = Core.applicable(Base.get, el) ? Base.get(el) : missing
el
end
end
# ╔═╡ 5295694c-344f-11ec-2842-d7207c32a925
begin
using Pkg
Pkg.activate("../Project.toml")
end
# ╔═╡ 57884ba0-1209-4ebe-aaaf-b03edd5814d8
begin
using Flux
using Flux.Data: MNIST
using Statistics
using Flux: throttle, params
using Flux: @epochs, @functor
using Flux: onehotbatch, argmax
using Flux.Losses: logitbinarycrossentropy, crossentropy, mse
using Flux: chunk
using ProgressMeter: Progress, next!
using Base.Iterators: repeated, partition
using Plots
using Colors
using Images
end
# ╔═╡ e5ae0713-8bb0-4568-8eda-3cf8fdc5cef0
using PlutoUI
# ╔═╡ f47b82e5-1110-4bfa-9ac1-637076c6faf2
md"""
## Some Examples of using Flux (reference: EECS505, other code from this forked repo)
"""
# ╔═╡ 09fea98d-aaf0-4971-a81f-90b50ef04b47
md"""
## Classifier for data belonging to two circles
"""
# ╔═╡ 3e79f9cc-f01c-4622-ade9-92d6aaa954e1
md"""
Inspect a few images.
"""
# ╔═╡ 6ebac32e-ad7d-4321-be2f-b61127c47771
# [img(X[:, i]) for i in 6:10]
# ╔═╡ f74b47d4-7dff-40a1-b086-26bf8b5c88d7
function generatedata_circle(r1, r2, N, σ=0.1)
φ1 = LinRange(0, 2 * π, N)
φ2 = LinRange(0, 2 * π, N)
rx1 = r1 .+ σ * randn(N)
rx2 = r2 .+ σ * randn(N)
X1 = [rx1 .* cos.(φ1) rx1 .* sin.(φ1)]
X2 = [rx2 .* cos.(φ2) rx2 .* sin.(φ2)]
return X1', X2'
end
# ╔═╡ b0986b17-bd42-4fb4-a3f1-54a58e52593c
X1c, X2c = generatedata_circle(2, 0.5, 100)
# ╔═╡ 5b13fe75-6636-4ee2-9396-98ba53cedadd
begin
p1 = scatter(
X1c[1, :],
X1c[2, :];
color="red",
label="Class 1",
aspectratio=:equal,
)
scatter!(
X2c[1, :],
X2c[2, :];
color="blue",
label="Class 2",
legend=:bottomright,
)
end
# ╔═╡ c1b9423c-c79b-46b5-8f19-c252aaa7473b
X = [X1c X2c]
# ╔═╡ 100900ac-2b1f-492f-9a50-14f4fc5506cb
Y = [zeros(1, size(X1c, 2)) ones(1, size(X2c, 2))]
# ╔═╡ d8ebaeef-27a9-4530-a19f-7987919178df
@bind nHidden Slider(2:10, default=3, show_value=true)
# ╔═╡ b2b1c6a3-acbe-4a39-8f1c-378de73a96ee
nHidden
# ╔═╡ 24d87786-904c-4c2e-9c97-e3574521f49e
activeFunction = relu
# ╔═╡ 123965a9-a4e3-4194-b079-d32bd941aa17
m = Chain(
Dense(size(X,1), nHidden, activeFunction),
Dense(nHidden, nHidden, activeFunction),
Dense(nHidden, 1, activeFunction)
)
# ╔═╡ bdde6e5e-72ad-4071-ab98-15724f72ba86
md"Output has size $(size(m(X), 2))"
# ╔═╡ 8f5068ce-6301-4234-b284-9ec03b32a8e0
loss(x, y) = mse(m(x), y)
# ╔═╡ d3b9eaa8-be71-4b83-91fd-2662ce842259
iters = 10000
# ╔═╡ f373f536-9929-40a4-ad0d-b2adb606376a
md"""
We package the dataset into `(X, Y)` tuples.
"""
# ╔═╡ ea9abbbe-36a8-46d6-8987-0f4840e4b804
dataset = repeated((X, Y), iters)
# ╔═╡ 83b8153e-8b06-4f25-ad4d-b92c5ed21884
begin
opt = ADAM()
evalcb = () -> @show([loss(X, Y)])
end
# ╔═╡ 2c8025b6-cedc-463a-9625-551c4e402052
with_terminal() do
Flux.train!(loss, params(m), dataset, opt; cb = throttle(evalcb, 0.5))
end
# ╔═╡ b36530a1-ad38-4178-a914-51cc59c1d294
function display_decision_boundaries(X1c, X2c, m, x1range, x2range, τ=0.0)
D = [m([x1; x2])[1] for x2 in x2range, x1 in x1range]
heatmap(
x1range,
x2range,
sign.(D .- τ);
color=:grays,
xlim = [minimum(x1range),maximum(x1range)],
ylim = [minimum(x2range),maximum(x2range)],
)
scatter!(
X1c[1, :],
X1c[2, :],
color="red",
label="Class 1",
        aspectratio=:equal,
)
scatter!(
X2c[1, :],
X2c[2, :],
color="blue",
label="Class 2",
)
end
# ╔═╡ 09081010-4a09-42fc-8658-f02d7c35a4e9
x1range = range(-3; stop=3, length=200)
# ╔═╡ a184440d-621b-42ff-b4a9-a596b248acbc
x2range = range(-3; stop=3, length=200)
# ╔═╡ e7f6169f-aa66-4a43-9831-227507047156
begin
display_decision_boundaries(X1c, X2c, m, x1range, x2range)
plot!(; title="Loss = $(round(loss(X,Y), digits=4))")
end
# ╔═╡ 6eccef88-ac15-4538-93f7-7e807e4891fd
with_terminal() do
@show m.layers[1].W
@show m.layers[1].b
@show m.layers[1].σ;
end
# ╔═╡ b22cf1c6-aed7-43b4-9687-ad04dcd983d4
md"""
## Classifier for MNIST Data
"""
# ╔═╡ 6159bf70-b4d8-4bcb-ba24-4d3502d98e7c
imgs = MNIST.images()
# ╔═╡ 1df18a22-46e5-4c8a-a54e-88fe342ac39c
typeof(imgs)
# ╔═╡ 5a752624-23ad-41ab-8c53-2caf0ded9aa9
# ╔═╡ 91b6edde-c595-4bd4-99c8-dd07752c7a7f
# ╔═╡ 08bb5a70-39e0-46d3-81f7-71e27a87a4e0
function convert_to_image(x, y_size)
Gray.(permutedims(vcat(reshape.(chunk(x |> cpu, y_size), 28, :)...), (2, 1)))
end
# ╔═╡ 54bc078c-8069-4d2c-a578-7b9d1d2fc367
# ╔═╡ 2cd4d120-f02d-4667-8977-a40e800e9a3a
# ╔═╡ e1f0af51-2519-4846-82d5-76e02f1bd938
# ╔═╡ a8c4ae86-2b50-4f7d-bfd3-763e2d5d2321
# ╔═╡ 10de7adb-52ec-4e24-bfa5-e5652cb3f226
# ╔═╡ Cell order:
# ╟─f47b82e5-1110-4bfa-9ac1-637076c6faf2
# ╠═5295694c-344f-11ec-2842-d7207c32a925
# ╠═57884ba0-1209-4ebe-aaaf-b03edd5814d8
# ╠═e5ae0713-8bb0-4568-8eda-3cf8fdc5cef0
# ╠═09fea98d-aaf0-4971-a81f-90b50ef04b47
# ╟─3e79f9cc-f01c-4622-ade9-92d6aaa954e1
# ╠═6ebac32e-ad7d-4321-be2f-b61127c47771
# ╠═f74b47d4-7dff-40a1-b086-26bf8b5c88d7
# ╠═b0986b17-bd42-4fb4-a3f1-54a58e52593c
# ╟─5b13fe75-6636-4ee2-9396-98ba53cedadd
# ╠═c1b9423c-c79b-46b5-8f19-c252aaa7473b
# ╠═100900ac-2b1f-492f-9a50-14f4fc5506cb
# ╠═d8ebaeef-27a9-4530-a19f-7987919178df
# ╠═b2b1c6a3-acbe-4a39-8f1c-378de73a96ee
# ╠═24d87786-904c-4c2e-9c97-e3574521f49e
# ╠═123965a9-a4e3-4194-b079-d32bd941aa17
# ╠═bdde6e5e-72ad-4071-ab98-15724f72ba86
# ╠═8f5068ce-6301-4234-b284-9ec03b32a8e0
# ╠═d3b9eaa8-be71-4b83-91fd-2662ce842259
# ╟─f373f536-9929-40a4-ad0d-b2adb606376a
# ╠═ea9abbbe-36a8-46d6-8987-0f4840e4b804
# ╠═83b8153e-8b06-4f25-ad4d-b92c5ed21884
# ╠═2c8025b6-cedc-463a-9625-551c4e402052
# ╠═b36530a1-ad38-4178-a914-51cc59c1d294
# ╠═09081010-4a09-42fc-8658-f02d7c35a4e9
# ╠═a184440d-621b-42ff-b4a9-a596b248acbc
# ╠═e7f6169f-aa66-4a43-9831-227507047156
# ╠═6eccef88-ac15-4538-93f7-7e807e4891fd
# ╟─b22cf1c6-aed7-43b4-9687-ad04dcd983d4
# ╠═6159bf70-b4d8-4bcb-ba24-4d3502d98e7c
# ╠═1df18a22-46e5-4c8a-a54e-88fe342ac39c
# ╠═5a752624-23ad-41ab-8c53-2caf0ded9aa9
# ╠═91b6edde-c595-4bd4-99c8-dd07752c7a7f
# ╠═08bb5a70-39e0-46d3-81f7-71e27a87a4e0
# ╠═54bc078c-8069-4d2c-a578-7b9d1d2fc367
# ╠═2cd4d120-f02d-4667-8977-a40e800e9a3a
# ╠═e1f0af51-2519-4846-82d5-76e02f1bd938
# ╠═a8c4ae86-2b50-4f7d-bfd3-763e2d5d2321
# ╠═10de7adb-52ec-4e24-bfa5-e5652cb3f226
C Copyright restrictions apply - see stsdas$copyright.stsdas
C
SUBROUTINE GEOPOS(LATI,LONGI,MLAT,MLONG,ISTAT)
*
* Module number:
*
* Module name: GEOPOS
*
* Keyphrase:
* ----------
* Calculate geomagnetic position from geographic position
*
* Description:
* ------------
* This routine calculates the geomagnetic position given the
* geographic latitude and longitude.
* This routine is adapted from an algorithm supplied by the FOS
* IDT whose ultimate source is somewhere in the NSSDCA
*
* FORTRAN name: GEOPOS.for
*
* Keywords of accessed files and tables:
* --------------------------------------
* Name I/O Description / Comments
* Subroutines Called:
* -------------------
* CDBS:
*
* History:
* --------
* Version Date Author Description
* 1 Sep 91 S. Hulbert Designed and coded
* 1.1 Sep 98 M.D. De La Pena Removed dead statement.
*-------------------------------------------------------------------------------
*
* INPUTS:
* lati - geographic latitude
* longi - geographic longitude
*
* OUTPUT:
* mlat - geomagnetic latitude
* mlong - geomagnetic longitude
* istat - error status
*
*----------------------------------------------------------------------------
INTEGER ISTAT
DOUBLE PRECISION LATI,LONGI,MLAT,MLONG
C------------------------------------------------------------------------------
C UMSPUT DESTINATIONS -- CB, DAO, 4-SEP-87
C
INTEGER STDOUT
PARAMETER (STDOUT = 1)
INTEGER STDERR
PARAMETER (STDERR = 2)
C
DOUBLE PRECISION PI
PARAMETER (PI = 3.1415926535898D0)
DOUBLE PRECISION RADDEG
PARAMETER (RADDEG = 57.295779513082D0)
C
C LOCAL VARIABLES ------------------------------------------------------
DOUBLE PRECISION DEGRAD,TWOPI,CBG,CI,SI,YLG,SBG,CLG,SLG
DOUBLE PRECISION SBM,CBM,SLM,CLM
C
C-----------------------------------------------------------------------
C
DEGRAD=1./RADDEG
TWOPI=2.D0*PI
CBG=11.4D0*DEGRAD
CI=COS(CBG)
SI=SIN(CBG)
C
C convert geographic latitude and longitude to geomagnetic
C
YLG=LONGI+69.8D0
CBG=COS(LATI*DEGRAD)
SBG=SIN(LATI*DEGRAD)
CLG=COS(YLG*DEGRAD)
SLG=SIN(YLG*DEGRAD)
SBM=SBG*CI+CBG*CLG*SI
MLAT=ASIN(SBM)
CBM=COS(MLAT)
SLM=(CBG*SLG)/CBM
CLM=(-SBG*SI+CBG*CLG*CI)/CBM
IF(CLM.GT.1.D0)CLM=1.D0
MLONG=ACOS(CLM)
IF(SLM.LT.0.D0)MLONG=TWOPI-ACOS(CLM)
MLAT=MLAT*RADDEG
MLONG=MLONG*RADDEG
C
ISTAT=0
1000 RETURN
END
"""
Script to compare different approaches to selecting Q-values according to integer action indices.
Compares two approaches:
1. Create a new one-hot encoding, apply the element-wise product, and reduce
2. Use torch.gather()
"""
import time
import torch
import torch.nn as nn
class Stopwatch:
def __init__(self):
self._started = None
self._elapsed = 0
def start(self):
if self._started is None:
self._started = time.time()
def stop(self):
stopped = time.time()
if self._started is not None:
self._elapsed += stopped - self._started
self._started = None
def elapsed(self):
return self._elapsed
if __name__ == "__main__":
# Parameters
DEVICE = 'cpu'
NUM_SEEDS = 10
NUM_ITERATIONS = 1000
NUM_ACTIONS = 4
BATCH_SIZE = 1024
# Print config
print("\nRunning Q-value benchmark")
print(f"device: {DEVICE}")
print(f"num seeds: {NUM_SEEDS}")
print(f"num iterations: {NUM_ITERATIONS}")
print(f"num actions: {NUM_ACTIONS}")
print(f"batch size: {BATCH_SIZE}")
# Timers
gather_timer = Stopwatch()
mask_timer = Stopwatch()
# Accumulator - to ensure values are computed
    acc = torch.zeros((BATCH_SIZE,), dtype=torch.float32, device=DEVICE)
# Iterate over seeds
for seed in range(NUM_SEEDS):
generator = torch.manual_seed(seed)
q_values = torch.rand(BATCH_SIZE, NUM_ACTIONS,
dtype=torch.float32,
device="cpu",
generator=generator)
actions = torch.randint(0, NUM_ACTIONS, (BATCH_SIZE,),
dtype=torch.int64,
device="cpu",
generator=generator)
        q_values = q_values.to(DEVICE)
        actions = actions.to(DEVICE)
mask = nn.functional.one_hot(actions, NUM_ACTIONS)
# Test gather
gather_timer.start()
for _ in range(NUM_ITERATIONS):
action_values = torch.gather(q_values, -1, actions.unsqueeze(-1)).squeeze(-1)
acc += action_values
gather_timer.stop()
# Test mask
mask_timer.start()
for _ in range(NUM_ITERATIONS):
# mask = nn.functional.one_hot(actions, NUM_ACTIONS)
action_values = (mask * q_values).sum(-1)
acc += action_values
mask_timer.stop()
# Print results
print(f"\naccumulated values:\n{acc}")
print(f"\ngather - total time: {gather_timer.elapsed()}s")
print(f"mask - total time: {mask_timer.elapsed()}s")
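    # Hedged extension (editor's addition): a third common selection approach,
    # integer advanced indexing, which needs neither a one-hot mask nor gather.
    # Cross-check it against the gather result on the last seed's tensors.
    batch_idx = torch.arange(BATCH_SIZE, device=DEVICE)
    indexed_values = q_values[batch_idx, actions]
    gathered_values = torch.gather(q_values, -1, actions.unsqueeze(-1)).squeeze(-1)
    print(f"indexing matches gather: {torch.allclose(indexed_values, gathered_values)}")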
# coding=utf-8
import tensorflow as tf
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import cv2
import forward
import backward
import file
Z1_PREDICT_PATH = '.\\predict\\Z1\\Z1-'
def restore_model(time):
with tf.Graph().as_default() as tg:
x = tf.placeholder(tf.float32, [None, forward.INPUT_NODE])
y = forward.forward(x, 0)
variable_averages = tf.train.ExponentialMovingAverage(backward.MOVING_AVERAGE_DECAY)
variables_to_restore = variable_averages.variables_to_restore()
saver = tf.train.Saver(variables_to_restore)
with tf.Session() as sess:
ckpt = tf.train.get_checkpoint_state(backward.MODEL_SAVE_PATH)
if ckpt and ckpt.model_checkpoint_path:
saver.restore(sess, ckpt.model_checkpoint_path)
pre_array = sess.run(y, feed_dict={x: time})
return pre_array
else:
print("No checkpoint file found")
return -1
# Convert the predicted matrix to an image and save it
def image_arr(time):
matrix = np.zeros(file.TIME_SIZE)
matrix[time + 1] = 1
matrix = matrix.reshape([1, file.TIME_SIZE])
matrix.astype(np.float32)
arr = restore_model(matrix)
arr = arr.reshape((file.ROW_SIZE, file.COL_SIZE))
arr = np.multiply(arr, 255)
new_dimension = (1200, 1200)
    arr_ready = cv2.resize(arr, new_dimension, interpolation=cv2.INTER_LANCZOS4)  # Lanczos interpolation over an 8x8 pixel neighborhood
    im = Image.fromarray(arr_ready)
    plt.imshow(arr_ready, cmap='gray')  # Display the current image
    plt.show()
im.save(Z1_PREDICT_PATH + str(time) + ".tif")
def main():
time = int(input("Input the time:"))
image_arr(time)
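# Hedged extension (editor's addition): batch several timesteps in one run,
# assuming the checkpoint referenced by backward.MODEL_SAVE_PATH is available.
def predict_range(start, stop):
    for t in range(start, stop):
        image_arr(t)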
if __name__ == '__main__':
main()
# Databricks notebook source
# MAGIC %md ** Credit Card Fraud Detection ** : Supervised Machine Learning model is built to classify whether the transaction is fraud or not.<br>
# MAGIC ** Dataset Source ** : The dataset used in this experiment has been downloaded from kaggle. The input dataset contains 3075 rows and 12 numeric/string columns. <br>
# MAGIC ** Algorithms ** - Support Vector Machine algorithm is used in this experiment.<br>
# MAGIC ** Note: ** This script is integrated to train the ML model in Azure Machine Learning Service in DevOps pipeline.<br>
# MAGIC To run this script independently in Azure Databricks and view the results, please comment out cells 17-20 and uncomment cell 4 (load input CSV data in a dataframe).<br>
# MAGIC Run the experiment from the beginning.<br>
# COMMAND ----------
#Import Required Python Libraries for ML experiment
import pickle
from azureml.core import Workspace
from azureml.core.run import Run
import os
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
import numpy as np
import pandas as pd
import json
import subprocess
from typing import Tuple, List
from sklearn.metrics import classification_report, confusion_matrix, roc_curve, roc_auc_score,auc
from sklearn import preprocessing
from sklearn.svm import SVC
# COMMAND ----------
# Start recording results to Azure Machine Learning
run = Run.get_context()
# COMMAND ----------
#load input CSV data in dataframe
# df_credit = pd.read_csv('/dbfs/FileStore/tables/creditcardcsvpresent.csv')
# display(df_credit)
# load input CSV data from local git repo
df_credit = pd.read_csv('data/creditcardcsvpresent.csv')
# COMMAND ----------
# Statistical summary of input dataset
# df_credit.describe()
# COMMAND ----------
# Total number of rows and columns
print("Number of rows, number of columns in the input dataset:",df_credit.shape)
# print("List of attributes or cloumns:\n",df_credit.columns)
# COMMAND ----------
#Check if there is any missing rows and columns and Count the number of rows with missing values
print("The Column contains null values:", df_credit.columns[df_credit.isnull().any()].tolist())
print("Number of missing values: ",df_credit.isnull().sum().max())
# COMMAND ----------
# drop the missing column
newdf_credit = df_credit.drop(['Transaction date'], axis='columns')
print("List of columns or attributes:\n",newdf_credit.columns)
# COMMAND ----------
#Select the specific columns as features.
newdf_credit = newdf_credit[['Average Amount/transaction/day', 'Transaction_amount', 'Is declined', 'Total Number of declines/day', 'isForeignTransaction','isHighRiskCountry', 'Daily_chargeback_avg_amt', '6_month_avg_chbk_amt', '6-month_chbk_freq','isFradulent']]
# display(newdf_credit)
# COMMAND ----------
# Encode string columns as integers so all features share a numeric type
for column in newdf_credit.columns:
if newdf_credit[column].dtype == type(object):
le = preprocessing.LabelEncoder()
newdf_credit[column] = le.fit_transform(newdf_credit[column])
# COMMAND ----------
#Divide the data into attributes and labels
X = newdf_credit.drop('isFradulent', axis=1)
y = newdf_credit['isFradulent']
# COMMAND ----------
# Split the data: 70% to training and 30% to testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30,random_state=0)
X_train.shape, X_test.shape
# COMMAND ----------
# Running training script
print("Running CreditMlExperiment.py")
# COMMAND ----------
# Training the model using SVM algorithm
svm_classifier = SVC(kernel='linear')
svm_classifier.fit(X_train, y_train)
# COMMAND ----------
# Prediction on test data
y_pred = svm_classifier.predict(X_test)
# COMMAND ----------
# SVM Classifier results
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix:\n", cm)
# print("Classification Metrcis:\n", classification_report(y_test, y_pred))
# COMMAND ----------
TP = cm[0][0]
FP = cm[0][1]
FN = cm[1][0]
TN = cm[1][1]
sensitivity = (TP)/ float(TP+FN)
specificity = TN / float(TN+FP)
precision = TP / float(TP+FP)
accuracy = (TN+TP) / float(TP+TN+FP+FN)
print("SVM classifier results for test data:")
print("Recall: ", sensitivity)
print("Specificity: ", specificity)
print("Precision: ", precision)
print("Accuracy: ", accuracy)
# COMMAND ----------
# Adding classification results to machine learning experiment in Azure Machine Learning service workspace
run.log("Accuracy",accuracy)
run.log("Precision",precision)
run.log("True Positive Rate",sensitivity)
run.log("True Negative Rate",specificity)
# COMMAND ----------
# Save model as part of the run history
model_name = "svm_mldevcredit_model.pkl"
with open(model_name, "wb") as file:
joblib.dump(value=svm_classifier, filename=model_name)
# COMMAND ----------
# upload the model file explicitly into artifacts
run.upload_file(name="./outputs/" + model_name, path_or_stream=model_name)
print("Uploaded the model {} to experiment {}".format(model_name, run.experiment.name))
dirpath = os.getcwd()
print(dirpath)
# COMMAND ----------
print("Following files are uploaded ")
print(run.get_file_names())
run.complete()
epsilon=1e-9
from math import *
from alpha_zero.player.player_inherit_from import Player
import random
class Node:
""" A node in the game tree. Note wins is always from the viewpoint of playerJustMoved.
Crashes if state not specified.
"""
def __init__(self, move=None, parent=None, state=None):
self.move = move # the move that got us to this node - "None" for the root node
self.parentNode = parent # "None" for the root node
self.childNodes = []
self.wins = 0
self.visits = epsilon #this is so that we don't get a divide by 0
self.untriedMoves = state.get_legal_moves().tolist()
self.playerJustMoved = 3-state.player_turn() # the only part of the state that the Node needs later
self.UCTVal=0
self.state=state
    def UCTSelectChild(self):
        # Select the child that maximizes the UCB1 score.
        s = sorted(self.childNodes, key=lambda c: c.wins / c.visits + sqrt(log(self.visits + 1) / c.visits) + random.random() * epsilon)[-1]
        # The random term breaks ties. It might be unnecessary and wasteful in big games.
        return s
    def getUCTVal(self, player=None):
        # Return the UCT value of this node from the given player's viewpoint.
        if player is None:
            player = 3 - self.playerJustMoved
        UCTVal = self.wins / self.visits
        if player == 3 - self.playerJustMoved:  # TODO: IS THIS A SOURCE OF ERROR- IS IT CORRECT TO HAVE THIS. PREVIOUSLY I MADE NO REFERENCE AS TO WHICH PLAYER WAS GETTING VALUE. Passes the MCTS tests though.
            UCTVal = 1 - UCTVal
        return UCTVal
def getUCTPolicy(self):
#returns the [Child's Move, UCTVisits, UCT values] of all the children of this node.
UCTValues=[]
UCTVisits=[]
UCTMove=[]
for c in self.childNodes :
UCTValues.append(c.wins / c.visits)
UCTVisits.append(c.visits)
UCTMove.append(c.move)
return [UCTMove,UCTVisits,UCTValues]
def AddChild(self, m, state):
""" Remove m from untriedMoves and add a new child node for this move.
Return the added child node
"""
n = Node(move=m, parent=self, state=state)
self.untriedMoves.remove(m)
self.childNodes.append(n)
return n
def Update(self, result):
""" Update this node - one additional visit and result additional wins.
result must be from the viewpoint of playerJustmoved.
"""
self.visits += 1
self.wins += result
def __repr__(self):
return "[M:" + str(self.move) + " W/V:" + str(self.wins) + "/" + str(self.visits) + " U:" + str(
self.untriedMoves) + "]"
def TreeToString(self, indent):
s = self.IndentString(indent) + str(self)
for c in self.childNodes:
s += c.TreeToString(indent + 1)
return s
def IndentString(self, indent):
s = "\n"
for i in range(1, indent + 1):
s += "| "
return s
def ChildrenToString(self):
s = ""
for c in self.childNodes:
s += str(c) + "\n"
return s
def GetUCTVal(game,itermax, verbose=False,player=None) :
# Returns the UCTValue of the given state, using itermax iterations..
rootnode = Node(state=game)
for i in range(itermax):
MCTSBuild(rootnode)
return rootnode.getUCTVal(player)
def GetUCTMove(rootstate, itermax, verbose=False):
    """Gets the best move given the particular gamestate. Usually to interface with another game player you will
    need to simply convert their move into a compliant gamestate. If you have a representation in a compliant state format
    then you can use it as the rootstate in this function. Return the best move from the rootstate.
    """
    rootnode = Node(state=rootstate.clone())
for i in range(itermax):
MCTSBuild(rootnode)
# Output some information about the tree
if (verbose):
print (rootnode.TreeToString(0))
else:
pass # print rootnode.ChildrenToString()
return sorted(rootnode.childNodes, key=lambda c: c.visits)[-1].move # return the move that was most visited
def MCTSBuild(rootnode, verbose=False) :
#Builds the MCTS tree
#rootnode.state.reset()
node = rootnode
#state = rootnode.state.Clone()
# Select
state=node.state.clone()
while node.untriedMoves == [] and node.childNodes != []:
            node = node.UCTSelectChild()  # can't optimise this because the node changes with each iteration
state.step(node.move)
    # Expand
    # The original MCTS.AI code tested `node.untriedMoves == []` here, which is a
    # bug: expanding the tree beyond a terminal state is wasteful and returns bad
    # results, and starting from a terminal state can leave untriedMoves empty.
    if not state.is_terminal():  # only expand if the state/node is non-terminal
#TODO: Make sure the state has a move representing pass eg -1
try :
m = random.choice(node.untriedMoves)
state.step(m)
node = node.AddChild(m, state.clone()) # add child and descend tree
except IndexError :
pass
    # TODO: Some games will need a pass option. This should reside in the game state,
    # but ensure that both players don't pass forever.
# Rollout -
while not state.done:
m=state.get_legal_moves()
if len(m)==0 :
print(state.render())
print(state.winner)
print(state.done)
assert False #Shouldn't get here.
a=random.choice(m)
state.step(a)
    # state.rndPlayout()  # faster than a while loop in the MCTS, but won't help unless doing a random playout
result = state.get_result()
# Backpropagate
    while node is not None:  # backpropagate from the expanded node and work back to the root node
val = -1000
if result == node.playerJustMoved: #Since we are progressing down the tree we need to keep track of each players reward at each level.
val = 1.0
elif result == 3-node.playerJustMoved:
val = 0.0
elif result == 3:
val = 0.5
        else:
            assert False  # unreachable: result must be a player id (1 or 2) or a draw (3)
node.Update(val) # state is terminal. Update node with result from POV of node.playerJustMoved
node = node.parentNode
class MCTSPlayer(Player):
def __init__(self,env,playing_as=1,iterations=300,name="MCTS"):
shortName="M"+str(iterations)
super().__init__(env,playing_as,name,shortName)
self.iterations=iterations
self.gameplayed=0
def newGame(self):
pass
def get_move(self,env):
"""Gets the best move given the particular gamestate. Usually to interface with another game player you will
need to simply convert their move into a compliant gamestate. If you have a representation in a compliant state format
then you can use it as the roostate in this function . Return the best move from the rootstate.
"""
rootnode = Node(state=env)
#Game = rootstate.Clone()
[MCTSBuild(rootnode=rootnode) for _ in range(self.iterations)]
return sorted(rootnode.childNodes, key=lambda c: c.visits)[-1].move # return the move that was most visited
def getMoveDist(self,playerPieces):
return np.zeros_like(playerPieces)
def DoMove(self,env):
        m = self.get_move(env)  # the original called self.GetMove(env, env.board), which is not defined
env.step(m)
def NewGame(self):
pass
def GameOver(self,game):
pass
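

# --- Usage sketch (not part of the original module) ---
# A minimal toy game implementing the state interface this MCTS appears to
# expect (clone, step, get_legal_moves, player_turn, is_terminal, done,
# winner, get_result). The interface is inferred from the calls above and the
# example is illustrative only.
class _ToyNim:
    """Players 1 and 2 alternately take 1-3 sticks; taking the last stick wins."""

    def __init__(self, sticks=7, turn=1):
        self.sticks = sticks
        self._turn = turn  # 1 or 2
        self.done = False
        self.winner = 0

    def clone(self):
        c = _ToyNim(self.sticks, self._turn)
        c.done, c.winner = self.done, self.winner
        return c

    def player_turn(self):
        return self._turn

    def get_legal_moves(self):
        return np.array([m for m in (1, 2, 3) if m <= self.sticks])

    def is_terminal(self):
        return self.done

    def step(self, move):
        self.sticks -= move
        if self.sticks == 0:  # whoever takes the last stick wins
            self.done = True
            self.winner = self._turn
        self._turn = 3 - self._turn

    def get_result(self):
        return self.winner


if __name__ == "__main__":
    # From 7 sticks the first player can force a win; MCTS should find a good move.
    print("Best first move:", GetUCTMove(_ToyNim(), itermax=500))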
|
{"hexsha": "6b89e609d7ab0ab2c38eaa38cf10da7994713dfd", "size": 7721, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/alpha_zero/player/mcts.py", "max_stars_repo_name": "theputernerd/alpha-zero-theputernerd", "max_stars_repo_head_hexsha": "7d0defbdbd4824a6104f8a889ca4c24d330c4ed2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-01-03T00:43:17.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-03T00:43:17.000Z", "max_issues_repo_path": "src/alpha_zero/player/mcts.py", "max_issues_repo_name": "theputernerd/alpha-zero-theputernerd", "max_issues_repo_head_hexsha": "7d0defbdbd4824a6104f8a889ca4c24d330c4ed2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/alpha_zero/player/mcts.py", "max_forks_repo_name": "theputernerd/alpha-zero-theputernerd", "max_forks_repo_head_hexsha": "7d0defbdbd4824a6104f8a889ca4c24d330c4ed2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.7666666667, "max_line_length": 203, "alphanum_fraction": 0.6355394379, "include": true, "reason": "import numpy", "num_tokens": 1882}
|
PIPaginationLinks <- function(first = NULL, previous = NULL, last = NULL) {
  # Validate that each provided link is a character string; abort with a message otherwise.
  if (!is.null(first) && !is.character(first)) {
    return(print("Error: first must be a string."))
  }
  if (!is.null(previous) && !is.character(previous)) {
    return(print("Error: previous must be a string."))
  }
  if (!is.null(last) && !is.character(last)) {
    return(print("Error: last must be a string."))
  }
value <- list(
First = first,
Previous = previous,
Last = last)
valueCleaned <- rmNullObs(value)
attr(valueCleaned, "className") <- "PIPaginationLinks"
return(valueCleaned)
}
|
{"hexsha": "cc5e2c1f3036053ca2bd3bce856cb1a70cb937fd", "size": 678, "ext": "r", "lang": "R", "max_stars_repo_path": "R/PIPaginationLinks.r", "max_stars_repo_name": "frbl/PI-Web-API-Client-R", "max_stars_repo_head_hexsha": "1e12907493053fe10c8b9feb229584b741d4ae2e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-04T17:29:38.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-04T17:29:38.000Z", "max_issues_repo_path": "R/PIPaginationLinks.r", "max_issues_repo_name": "frbl/PI-Web-API-Client-R", "max_issues_repo_head_hexsha": "1e12907493053fe10c8b9feb229584b741d4ae2e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "R/PIPaginationLinks.r", "max_forks_repo_name": "frbl/PI-Web-API-Client-R", "max_forks_repo_head_hexsha": "1e12907493053fe10c8b9feb229584b741d4ae2e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-09-04T15:03:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-16T04:57:06.000Z", "avg_line_length": 27.12, "max_line_length": 75, "alphanum_fraction": 0.6371681416, "num_tokens": 198}
|
import logging
import re
import time
from typing import List, Optional, Tuple, Union
import numpy as np
import pandas as pd
try:
from tqdm import tqdm
except ImportError:
def tqdm(*args, **kwargs):
if args:
return args[0]
return kwargs.get("iterable", None)
from collections import OrderedDict
from functools import partial
from itertools import chain
from multiprocessing import Pool, cpu_count
from multiprocessing.dummy import Pool as ThreadPool
from .utils import DB, chunkify, no_limit_timber_get, sanitize_t
class HeaderMaker:
def __init__(
self,
t: Union[float, str],
look_back: str = "30M",
look_forward: str = "30M",
t2: Optional[Union[float, str]] = None,
vec_var: str = "LHC.BLMI:LOSS_RS09",
n_jobs: int = -1,
n_threads: int = 512,
):
"""Makes timber's vector numeric BLM header for a given timestamp. By
brute forcely determining which column corresponds to which individual
BLM timber variable. Building the header can be quite slow.
Args:
t:
- int or float: assumes utc time, converts to pd.Timestamp and
to Europe/Zurich timezone.
- str: a pd.to_datetime compatible str, converts to
pd.Timestamp and to Europe/Zurich timezone, if not already.
look_back: look back amount, pd.Timedelta compatible string.
look_forward: look forward amount, pd.Timedelta compatible string.
t2: same type logic as t, if provided will ignore any look_forward/look_back
arguments and will use t1 = t, t2=t2.
vec_var: blm vector numeric timber variable.
n_jobs: number of jobs with which to do the distance matrix calculations.
n_threads: number of threads with which to fetch timber data.
"""
self._logger = logging.getLogger(__name__)
self.vec_var = vec_var
self.t = sanitize_t(t)
if t2 is not None:
self.t1 = self.t
self.t2 = sanitize_t(t2)
else:
self._look_back = pd.Timedelta(look_back)
self._look_forward = pd.Timedelta(look_forward)
self.t1 = self.t - self._look_back
self.t2 = self.t + self._look_forward
self._logger.info("Requested time: %s", self.t)
self._logger.info(
"Using data in range: [%s -> %s] for matching.", self.t1, self.t2
)
if n_jobs == -1:
n_jobs = cpu_count()
self._n_jobs = n_jobs
self._n_threads = n_threads
def fetch_vec(self) -> pd.DataFrame:
"""Fetches the vector numeric data from timber, and converts it to dataframe.
Returns:
`pd.DataFrame` of the vector numeric data with as index the timestamp
in Europe/Zurich timezone.
"""
self._logger.info("Fetching vector numeric data.")
# out = DB.get(self.vec_var, self.t1, self.t2)[self.vec_var]
out = no_limit_timber_get(self.vec_var, self.t1, self.t2)[self.vec_var]
timestamps = out[0][:, np.newaxis]
data = out[1]
if data.size == 0:
raise ValueError(
f"No vectornumeric data in time range [{self.t1} -> {self.t2}]."
)
df = pd.DataFrame(np.hstack([data, timestamps]))
df.iloc[:, -1] = pd.to_datetime(
df.iloc[:, -1], unit="s", utc=True
).dt.tz_convert("Europe/Zurich")
df.set_index(df.columns[-1], inplace=True)
df.index.name = "timestamp"
# rounding time to the second because there is slight offset between
# vector numeric and single blm var.
df.index = df.index.round("S")
self._logger.info("Vector numeric shape: %s", df.shape)
return df
def fetch_single(
self,
BLM_list: Optional[List[str]] = None,
timber_filter: str = "BLM%:LOSS_RS09",
reg_filter: Optional[str] = None,
) -> dict:
"""Fetches the all the individual BLM variable data from timber, and
stores them in a dict.
Args:
            BLM_list: to only fetch specific blms, provide a list of blm names.
timber_filter: filtering when fetching variable list from timber,
"%" is the wild card.
reg_filter: additional regex filtering.
Returns:
            dictionary containing the blm names as keys and `pd.Series`
            of the data as values.
"""
self._logger.info("Fetching individual BLM data.")
if BLM_list is None:
BLM_list = self._fetch_blm_var_list(
timber_filter=timber_filter, reg_filter=reg_filter
)
if self._n_threads > 1:
self._logger.debug("Using %i threads.", self._n_threads)
with ThreadPool(self._n_threads) as p:
out = list(
tqdm(
p.imap(
lambda x: DB.get(x, self.t1, self.t2),
BLM_list,
# chunksize=len(BLM_list) // self._n_threads),
),
total=len(BLM_list),
desc="Fetching BLM data",
)
)
# Merges list of dicts
out = {k: v for d in out for k, v in d.items()}
else:
out = DB.get(BLM_list, self.t1, self.t2)
blm_data = OrderedDict()
for blm, time_data in out.items():
if time_data[0].size > 0:
blm_data[blm.split(":")[0]] = self._clean_get(time_data)
else:
self._logger.debug("Timber variable %s has no data.", blm)
self._logger.info("Number of individual BLMs: %i", len(blm_data))
return blm_data
@staticmethod
def _fetch_blm_var_list(
timber_filter: str = "BLM%:LOSS_RS09", reg_filter: Optional[str] = None
):
"""Gets a list of all the BLM, respecting the filtering, from timber.
Args:
timber_filter: filtering when fetching from timber, "%" is the wild card.
reg_filter: additional regex filtering.
Returns:
List of str containing the timber variables respecting the filtering.
"""
        # e.g. reg_filter='BLM.[IL]'
out = DB.search(timber_filter)
if reg_filter is not None:
out = [blm for blm in out if re.search(reg_filter, blm)]
if not out:
raise ValueError("No timber BLM variables passed the filters.")
return out
def make_header(
self,
vec_data: Optional[pd.DataFrame] = None,
single_data: Optional[dict] = None,
**kwargs,
) -> List[str]:
"""Makes the header.
        Note: this takes a long time... and there is no guarantee that the header
is correct.
Args:
vec_data: pd.DataFrame containing the vectornumeric data. If None will
fetch the vectornumeric data.
single_data: dictionary containing the individual BLM data. If None
will fetch individual BLM data.
**kwargs: blm filtering, see self.fetch_single.
Returns:
The header, a list of blm names.
"""
if vec_data is None:
vec_data = self.fetch_vec()
if single_data is None:
single_data = self.fetch_single(**kwargs)
d_mat = self.calc_distance_matrix(vec_data, single_data)
return self._distance_matrix_to_header(d_mat)
def calc_distance_matrix(
self, vec_data: pd.DataFrame, single_data: dict
) -> pd.DataFrame:
"""Constructs the distance matrix between the vectonumeric data and the
individual BLM variables.
Args:
vec_data: pd.DataFrame containing the vectornumeric data. If None will
fetch the vectornumeric data.
single_data: dictionary containing the individual BLM data. If None
will fetch individual BLM data.
Returns:
`pd.DataFrame` with as index the columns of the vector numeric data, as
columns the blm names and contains the "distance" between each.
"""
self._logger.info("Constructing distance matrix. This will take a while...")
start_t = time.time()
# calculate the distance matrix
col_diff = partial(self._multi_column_diff, single_data=single_data)
self._logger.debug("Using %i jobs.", self._n_jobs)
# Split the vec_data into chunks for more efficient multiprocessing
with Pool(self._n_jobs) as p:
res = p.imap(
col_diff,
enumerate(chunkify([c for _, c in vec_data.iteritems()], self._n_jobs)),
)
res = list(chain(*res))
self._logger.info("Time elapsed: %s s", round(time.time() - start_t))
return pd.DataFrame(res)
@staticmethod
def _clean_get(time_data: tuple) -> pd.Series:
"""Converts a tuple of (time, blm_data), typically coming from a pytimber
`db.get()` call, to a `pd.Series`.
Args:
time_data (tuple): tuple of unix timestamps array and blm data
array.
Returns:
`pd.Series` with the timestamp converted to pd.Timestamp, in Europe/Zurich
timezone, with the data array as data.
"""
series = pd.Series(data=time_data[1], index=time_data[0])
series.index = pd.to_datetime(series.index, unit="s", utc=True).tz_convert(
"Europe/Zurich"
)
# rounding time to the second because there is slight offset between
# vector numeric and single blm var.
series.index = series.index.round("S")
series.index.name = "timestamp"
return series
@staticmethod
def _multi_column_diff(
index_list_of_series: Tuple[int, list], single_data: dict
) -> List[dict]:
"""Run the distance calculation on a chunk of the vec_data. This is run
in one multiprocessing job.
Args:
index_list_of_series: tuple of (progress bar position, list of vec_data columns).
single_data: dict of the individual blm data.
Returns:
A list of dicts.
"""
def _single_column_diff(series: pd.Series):
"""Runs the diff of one of the vector numeric columns on each of the
individual blm data.
Args:
series: Column of the vector numeric dataframe.
Returns:
Dictionary with the blm name as keys and the result of the diff as values.
"""
return {b: (series - s).abs().mean() for b, s in single_data.items()}
        # i is the position of the progress bar; list_of_series is the chunk of
        # vector numeric columns on which to run the computation.
i, list_of_series = index_list_of_series[0], index_list_of_series[1]
return [
_single_column_diff(series)
for series in tqdm(
list_of_series,
position=i,
desc=f"Computing matrix chunk {i:02}",
leave=False,
)
]
@staticmethod
    def _distance_matrix_to_header(distance_matrix: pd.DataFrame) -> List[str]:
"""Converts a distance matrix to a header.
Args:
distance_matrix: as index the columns of the vector numeric data, as
columns the blm names and contains the "distance" between each.
Returns:
The header, a list of blm names.
"""
return distance_matrix.idxmin(axis=1).tolist()
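

# --- Usage sketch (not part of the original module) ---
# Requires access to the timber logging database via the `DB` object in
# .utils (pytimber); the timestamp and windows below are illustrative only.
if __name__ == "__main__":
    maker = HeaderMaker("2018-05-12 10:00", look_back="15M", look_forward="15M")
    header = maker.make_header()
    print(f"Matched {len(header)} vector numeric columns to BLM names.")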
|
{"hexsha": "094a20343361b66630c9236717f89eac1e38f46c", "size": 11846, "ext": "py", "lang": "Python", "max_stars_repo_path": "blm_header/header_maker.py", "max_stars_repo_name": "loiccoyle/blm_header", "max_stars_repo_head_hexsha": "f3430ad5013bcd6f1918ca1ec1ef9b676335c885", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-21T09:51:22.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-21T09:51:22.000Z", "max_issues_repo_path": "blm_header/header_maker.py", "max_issues_repo_name": "loiccoyle/blm_header", "max_issues_repo_head_hexsha": "f3430ad5013bcd6f1918ca1ec1ef9b676335c885", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "blm_header/header_maker.py", "max_forks_repo_name": "loiccoyle/blm_header", "max_forks_repo_head_hexsha": "f3430ad5013bcd6f1918ca1ec1ef9b676335c885", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.2262996942, "max_line_length": 93, "alphanum_fraction": 0.5868647645, "include": true, "reason": "import numpy", "num_tokens": 2616}
|
\documentclass[12pt, titlepage]{article}
\usepackage{booktabs}
\usepackage{tabularx}
\usepackage{hyperref}
\usepackage{verbatim}
\usepackage{fancyhdr}
\usepackage{graphicx}
\pagestyle{fancy}
\hypersetup{
colorlinks,
citecolor=black,
filecolor=blue,
linkcolor=red,
urlcolor=blue
}
\usepackage[round]{natbib}
\title{SE 3XA3: Test Plan\\Ultimate Tic Tac Toe}
\author{Team 3
\\ Kunal Shah - shahk24
\\ Pareek Ravi - ravip2
}
\date{\today}
\input{../Comments}
\begin{document}
\lhead{Team 3 - Test Plan}
\rhead{Ultimate Tic Tac Toe}
\maketitle
\pagenumbering{roman}
\tableofcontents
\listoftables
\listoffigures
\newpage
\begin{table}[hp]
\caption{\bf Revision History}
\begin{tabularx}{\textwidth}{p{3cm}p{2cm}X}
\toprule {\bf Date} & {\bf Version} & {\bf Notes}\\
\midrule
October 19 & 0.0 & Initial setup\\
October 31 & 1.0 & Finished Document \\
November 28 & 1.1 & Changed survey questions slightly\\
December 5 & 1.2 & Made changes Chris suggested\\
December 6 & 1.9 & Finalized document\\
December 7 & 2.0 & Rev 1 Submission\\
\bottomrule
\end{tabularx}
\end{table}
\newpage
\pagenumbering{arabic}
\section*{Abstract}
This document describes the Testing Plan for the Ultimate Tic Tac Toe Project.
\section{General Information}
\subsection{Purpose}
The purpose of testing this project is to confirm that all requirements
outlined in the Requirements Specifications document have been met and to build
confidence that the software for the project was implemented correctly.
\subsection{Scope}
The Test Plan presents a basis for testing the functionality of the
re-implementation of Ultimate Tic Tac Toe. It has the objective of proving that
Ultimate Tic Tac Toe has met the requirements specified in the Requirements
Document and to attach metrics to those requirements so that adherence to
requirements is quantifiable and can be measured. The testing plan acts as a
means to organize testing activities. This document will present which parts of
the software are to be tested. It will also outline the testing methods and
the tools that will be utilized.
\subsection{Acronyms, Abbreviations, and Symbols}
\begin{table}[hbp]
\caption{\textbf{Table of Abbreviations}} \label{tab:abbreviations}
\begin{tabularx}{\textwidth}{p{3cm}X}
\toprule
\textbf{Abbreviation} & \textbf{Definition} \\
\midrule
JS & JavaScript\\
UTTT & Ultimate Tic Tac Toe\\
npm & Node Package Manager\\
html & HyperText Markdown Language\\
POC & Proof of Concept\\
\bottomrule
\end{tabularx}
\end{table}
\begin{table}[!htbp]
\caption{\textbf{Table of Definitions}} \label{tab:definitions}
\begin{tabularx}{\textwidth}{p{3cm}X}
\toprule
\textbf{Term} & \textbf{Definition}\\
\midrule
Main Board & Full game board, all clickable regions \\
Inner Board [ID] & One of 9 Tic Tac Toe boards within the Main Board\\
Cell [ID] & One of 9 Tic Tac Toe elements of an Inner Board\\
Active Board & Inner board that next player is able to play on\\
Complete & Result has been determined for an Inner board \\
In-complete & Result has yet to be determined for an Inner board \\
\bottomrule
\end{tabularx}
\end{table}
\subsection{Overview of Document}
The Ultimate project will re-implement the project Ultimate Tic Tac Toe. The
software will allow the user to play the game based on the rules defined by
mathwithbaddrawings.com~\citep{Rules}. All the software's requirements can be
referenced in the \href{run:../SRS/SRS.pdf}{Requirements Document}. This document
demonstrates how the game Ultimate Tic Tac Toe will be tested, the testing
schedule and the tools used.
\section{Plan}
\subsection{Software Description}
The software is a JavaScript implementation of the game Ultimate Tic Tac Toe.
\subsection{Test Team}
The individuals responsible for testing are Kunal Shah and Pareek Ravi. For a
detailed breakdown of responsibilities refer to the
\href{run:../../ProjectSchedule/Gantt Chart.gan}{Gantt Chart}
\subsection{Automated Testing Approach}
The primary testing approach will be to use Karma Unit Testing to automate unit
tests. Karma will automatically play the game in a headless browser following
pre-defined moves. The unit testing framework will compare the results to
expected values.
\subsection{Testing Tools}
The tool that will be utilized for this project is Karma unit testing with the
Jasmine framework. It will be used to automate the unit testing. There will be a
package.json file in the src directory to download all dependencies required for
Karma Unit Testing.\\
There will also be a Google Form used to survey users' experience and provide
an avenue for feedback and suggestions.
\subsection{Testing Schedule}
Refer to the \href{run:../../ProjectSchedule/Gantt Chart.gan}{Gantt Chart} for
details about Testing Schedule
\section{System Test Description}
\subsection{Tests for Functional Requirements}
\subsubsection{User Input}
\begin{enumerate}
\item{in-test-id1\\}
\textbf{Testing if user's input is being received}
Type: Functional, Dynamic, Automatic
Initial State: It is the start of a new game and the board is empty
Input: A click on any cell of any inner tic tac toe board
Output: That cell should have the user's character on it
How test will be performed: This test will be performed automatically with the
use of the Karma Unit Test. In the test, an element will be clicked and then
the inner HTML of the cell will be checked to see if it is updated to the
character that represents that person's turn. There will also be a check on
the variable that stores all the game information to see that it is updated.
\item{in-test-id2\\}
\textbf{Testing if user clicks on an invalid inner board}
Type: Functional, Dynamic, Automatic
Initial State: Opponent has played on an inner board and it is the user's turn
Input: Click on a tile that they are not designated to click based on the
rules of the game
Output: The board should not change at all. The game data should not change
and it is still the user's turn
How test will be performed: This test will be done automatically with the use
of the Karma Unit Test. When the user clicks on an inner board that they cannot
select, the JavaScript will prevent anything from changing in the game data.
\item{in-test-id3\\}
\textbf{Testing user's input on a cell already clicked}
Type: Functional, Dynamic, Automatic
Initial State: Opponent has played on an inner board and it is the user's turn
Input: Click on a tile that has already been clicked previously
Output: The board should not change at all. The game data should not change
and it is still the user's turn
How test will be performed: This test will be performed automatically with the
use of the Karma Unit Test. In the test, a cell that was already selected
previously, in the inner board the player is meant to play in, will be clicked.
There will be no change to the board as the cell was previously selected. The
script will not change anything because that cell is not a null value; it holds
the character representing the player that selected it.
\end{enumerate}
\subsubsection{Game Logic}
\begin{enumerate}
\item{log-test-id1\\}
\textbf{Test if inner board gets completed}
Type: Functional, Dynamic, Automatic
Initial State: Inner board is almost completed by user. It is their turn to
play in the inner board where they will complete it.
Input: Clicks on the cell that will complete the inner board
Output: The inner board will be marked with the character representing the
user
How test will be performed: This test will be performed automatically with the
use of the Karma Unit Test. The cell element will be clicked by the tester.
The logic will check if there is any three in a row in that inner board that
the player just played in. If there is a three in a row, the entire inner board
is deemed completed and marked as such. The check for a three in a row is to
check all 8 possible win scenarios, i.e. three rows, three columns and two
diagonals.
\item{log-test-id2\\}
\textbf{Test if inner board ends in draw}
Type: Functional, Dynamic, Automatic
Initial State: The board is in a state where an inner board only has 1 cell
available to click and it will result in that board being a draw
Input: Click on the only available cell
Output: The inner board will be marked with the character '-' meaning it is a
draw
How test will be performed: This test will be performed automatically with the
use of the Karma Unit Test. The cell element will be clicked by the tester.
The logic will check all 8 possible cases for a completed three in a row. If
none exist and the inner board is full, it is classified as a draw.
\item{log-test-id3\\}
\textbf{Test if next move can be made on any incomplete inner board}
Type: Functional, Dynamic, Automatic
Initial State: Game board is partially filled with one inner board completed
Input: Click at a cell corresponding to a completed inner board
Output: All incomplete inner boards active
How test will be performed: When the click is made on the inner board, the
background of all incomplete inner boards is set to blue. Based on the array
which contains a map of the board, loop through all the inner board elements
and check their background colors. Any inner board that is not complete will
have a blue background style.
\item{log-test-id4\\}
\textbf{Test if user turns are alternating}
Type: Functional, Dynamic, Automatic
Initial State: Player with character O just made a move
Input: Click at any available cell
Output: The character changes to X
How test will be performed: When a click is made, the character should change
and the innerHTML should represent the other one.
\end{enumerate}
\subsubsection{Game Logistics}
\begin{enumerate}
\item{logistic-test-id1\\}
\textbf{Test if game launches}
Type: Functional, Dynamic, Manual
Initial State: User is in the file explorer
Input: User launches the html file in browser
Output: The game launches in all browsers and shows an empty UTTT game board.
How test will be performed: The user will launch the game from their file
explorer. If they are able to see a blank UTTT board, the game has launched.
\item{logistic-test-id2\\}
\textbf{Test if user input shows in window}
Type: Functional, Dynamic, Manual
Initial State: User has just opened the game\\
Input: User clicks on any cell
Output: Cell they clicked on should change appearance
How test will be performed: If the user's click was registered, it would
indicate that to the user by changing the cell they clicked on to the
character that represents them. This will be seen graphically.
\end{enumerate}
\subsection{Tests for Nonfunctional Requirements}
\subsubsection{Look and Feel} \label{LookAndFeel}
\begin{enumerate}
\item{laf-test-id1\\}
This will be tested by survey question~\ref{question:q4}.
Pass: 4.5/5 average rating on this question
\item{laf-test-id2\\}
This will be tested by survey question~\ref{question:q10}.
Pass 4.5/5 average rating for this question. The details will provide more
information on repairs needed
\item{laf-test-id3\\}
This will be tested by survey question~\ref{question:q11}.
Pass 4.5/5 average rating for this question. The details will provide more
information on repairs needed
\end{enumerate}
\subsubsection{Ease of Use}
\begin{enumerate}
\item{eou-test-id1\\}
This will be tested by survey question~\ref{question:q1}.
Pass: 4.5/5 average rating on this question
\item{eou-test-id2\\}
This will be tested by survey question~\ref{question:q2}.
Pass 4.5/5 average rating for this question.
\item{eou-test-id3\\}
This will be tested by survey question~\ref{question:q3}.
Pass 4.5/5 average rating for this question.
\end{enumerate}
\subsubsection{Environmental Requirements}
\begin{enumerate}
\item{er-test-id1\\}
This will be tested by survey question~\ref{question:q6}.
Pass: 4.5/5 average rating on this question. The details could lead to optimization
\end{enumerate}
\subsubsection{Performance Requirements}
\begin{enumerate}
\item{pr-test-id1\\}
This will be tested by survey question~\ref{question:q5}.
Pass: 4.5/5 average rating on this question.
\item{pr-test-id2\\}
This will be tested by survey question~\ref{question:q6}.
Pass: 4.5/5 average rating on this question. The details could lead to optimization
\end{enumerate}
\subsubsection{Safety Requirements}
\begin{enumerate}
\item{safreq-test-id1\\}
This will be tested by survey question~\ref{question:q9}.
Pass: 4.5/5 average rating on this question.
\end{enumerate}
\subsubsection{Cultural Requirements}
\begin{enumerate}
\item{culreq-test-id1\\}
This will be tested by survey question~\ref{question:q8}.
Pass: 4.5/5 average rating on this question. The details could lead to optimization
\end{enumerate}
\subsubsection{Legal Requirements}
\begin{enumerate}
\item{legreq-test-id1\\}
Description: This game should have a rating suitable for all ages to play
according to ESRB.
How: ESRB will rate the game based on their standards, which are accepted
universally.
Pass: The game has a rating of E for Everyone.
\end{enumerate}
\subsubsection{Security Requirements}
\begin{enumerate}
\item{secreq-test-id1\\}
Description: When the game is implemented on a server, no personal data should
be transferred.
How: Check all the packages that are transferred over the server.
Pass: Only game data is passed, no personal data.
\end{enumerate}
\section{Tests for Proof of Concept} \label{POC}
\subsection{User Input}
\paragraph{User inputs from the same local machine}
\begin{enumerate}
\item{Test first click\\}
Type: Manual
Initial State: On Load
Input: click on Inner Board [B00] Cell [1]
Output: Player 1 symbol (either X or O). Figure: \ref{fig:Test1_output}
How test will be performed: User clicks input on the game page loaded in a
browser. User watches for graphical response.
\item{Set Active Board\\}
Type: Manual
Initial State: On Load
Input: User click Inner Board [B03] Cell [5]
Output: Active Board set to Inner Board [B11] Figure: \ref{fig:Test2_output}
How test will be performed: User clicks input on the game page loaded in a
browser. User watches for graphical response.
\item{All incomplete Inner Boards active when player sets complete inner board active \\}
Type: Manual
Initial State: One move till Player 1 completes inner board [B02]
Input: User click Inner Board [B02] Cell [3] Figure: \ref{fig:Test3_Test4_output}
Output: all Inner Boards excluding Inner Board [B02] show blue background colour
How test will be performed: User clicks the following sequence on the game page
loaded in a browser. User watches for graphical response.
\begin{enumerate}
\item Player 1 clicks on Inner Board [B02] Cell [5]
\item Player 2 clicks on Inner Board [B11] Cell [3]
\item Player 1 clicks on Inner Board [B02] Cell [7]
\item Player 2 clicks on Inner Board [B20] Cell [3]
\item Player 1 clicks on Inner Board [B02] Cell [3]
\end{enumerate}
\subsection{Game Logic}
\item{Complete Inner Board\\}
Type: Manual
Initial State: One move till Player 1 completes inner board [B01]
Input: User click Inner Board [B01] Cell [3]
Output: Inner Board [B01] displays Player 1 symbol Figure: \ref{fig:Test3_Test4_output}
How test will be performed: User clicks the following sequence on the game page
loaded in a browser. User watches for graphical response.
\begin{enumerate}
\item Player 1 clicks on Inner Board [B02] Cell [5]
\item Player 2 clicks on Inner Board [B11] Cell [3]
\item Player 1 clicks on Inner Board [B02] Cell [7]
\item Player 2 clicks on Inner Board [B20] Cell [3]
\item Player 1 clicks on Inner Board [B02] Cell [3]
\end{enumerate}
\item{Complete Inner Board with draw (or tie)\\}
Type: Manual
Initial State: One move till Player completes inner board
Input: User click to cause Inner Board to draw Figure: \ref{fig:Test5_input}
Output: Inner Board displays dash to indicate draw Figure: \ref{fig:Test5_output}
How test will be performed: User clicks input on the game page loaded in a
browser. User watches for graphical response.
\item{Win Full Game\\}
Type: Manual
Initial State: One move till Player 1 wins full game
Input: User click cell to complete last inner board needed to win Main Board
Figure: \ref{fig:Test6_input}
Output: Browser alert with player that won (X or O). Figure: \ref{fig:Test6_output}
How test will be performed: User clicks the following sequence on the game page
loaded in a browser. User watches for graphical response.
\end{enumerate}
\section{Comparison to Existing Implementation}
There are 9 tests that compare the program to the existing implementation of
the program. Please refer to:
\begin{itemize}
\item tests 1 to 6 in Tests for Proof of Concept.
Reference section \ref{POC} of the document.
\item tests 1 to 3 in Look and Feel Nonfunctional Requirements.
Reference section \ref{LookAndFeel} of Tests for Nonfunctional Requirements.
\end{itemize}
\section{Unit Testing Plan}
The Karma and Jasmine frameworks will be used to implement unit testing for
this project. This will require the installation of multiple dependencies that
can be found in a json file. Using the npm (Node Package Manager) framework
these can all be easily installed.
\subsection{Unit testing of internal functions}
In order to unit test the internal functions of the game, the values of the
arrays will be checked after the function is run. There are arrays which are
used to record the user who controls each inner board, and also an array which
has the full 81 cells of the board in a multi-dimensional array. After running
a function which will update these arrays, compare the results to expected.
The values in the arrays will need to match. For the functions that have a
return, providing the proper inputs and comparing it to the expected output,
test cases can be created. Knowing that all functions are not testable, the
aim will be to have at least 75\% of the functions tested using unit testing.
\subsection{Unit testing of output files}
Given that this project does not have an output file, the unit testing out the
output will be the graphical portion. The unit testing will involve clicking
on cells of the game board and matching the change in the html to the expected
result. For example, a click on an available playable cell, should result in
the innerHTML of that cell to have either an X or O. Similar tests will be run
for numerous use cases such as:
\begin{itemize}
\item
Winning an inner board
\item
Inner board results in a draw
\item
Next move is set to entire board because previous click resulted in selection
of completed inner board
\item
Completion of inner board corresponds to same inner board resulting in next
click to be made to entire board
\item
Winning the entire game
\item
Entire game results in a draw
\end{itemize}
\bibliographystyle{plainnat}
\bibliography{TestPlan}
\newpage
\section{Appendix}
Karma JS Installation Tutorial and Example source code~\citep{Karma}
\subsection{Usability Survey Questions}
\begin{comment}
\textcolor{red}{Metrics for users may be helpful. I am not sure what constitutes a 5 on the understanding rules scale for example. Also some answers are ambiguous for the user, see question \ref{question:q6}. Take out some things that do not require a scale and have in separate section of survey - CM} \\
\end{comment}
Please provide a rating from 1 to 5, 1 representing strongly disagreeing with the statement, and 5 strongly agreeing.
\begin{enumerate}
\item
The rules were very easy to find. \label{question:q1}
\item
The rules are easy to understand. \label{question:q2}
\item
The color palette is visually appealing. \label{question:q4}
\item
The response time was satisfying. \label{question:q5}
\end{enumerate}
Please select True or False and provide details where requested
\begin{enumerate}
\item
It is clear which player's turn it is. (X or O). \label{question:q3}
\item
The game runs smoothly on my browser. Please indicate the browser you used and computer model. \label{question:q6}
\item
Were you offended by anything in the game? Please provide details. \label{question:q8}
\item
Did you encounter epileptic symptoms while playing the game? \label{question:q9}
\item
Did anything unexpected appear on the screen while playing? Please describe.
\label{question:q10}
\item
Is the entire game board visible on the screen? Are there any aspects that are
cut off? Please describe. \label{question:q11}
\end{enumerate}
\newpage
\subsection{Symbolic Parameters}
\begin{table}[h]
\caption{\textbf{Table of Symbolic Parameters}} \label{TableSP}
\begin{tabularx}{\textwidth}{p{5cm}X}
\toprule
\textbf{Symbolic Parameters} & \textbf{Description} \\
\midrule
MAX\_PLAYERS & Maximum number of concurrent players\\
activeColor & Active region Color\\
Ocolor &Player O winning inner board color\\
Xcolor & Player X winning inner board color\\
\bottomrule
\end{tabularx}
\end{table}
\begin{figure}
\includegraphics[width=\linewidth]{Figures/Test1-output.png}
\caption{POC Test 1 Output}
\label{fig:Test1_output}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{Figures/Test2-output.png}
\caption{POC Test 2 Output}
\label{fig:Test2_output}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{Figures/Test3-4-output.png}
\caption{POC Test 3 and Test 4 Output}
\label{fig:Test3_Test4_output}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{Figures/Test5-input.png}
\caption{POC Test 5 Example input}
\label{fig:Test5_input}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{Figures/Test5-output.png}
\caption{POC Test 5 Output}
\label{fig:Test5_output}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{Figures/Test6-input.png}
\caption{POC Test 6 Example input}
\label{fig:Test6_input}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{Figures/Test6-output.png}
\caption{POC Test 6 Output}
\label{fig:Test6_output}
\end{figure}
\end{document}
|
{"hexsha": "802178fcf06440084ff029cf42ffa5c4ee3680f3", "size": 22268, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Doc/TestPlan/TestPlan.tex", "max_stars_repo_name": "curiouskunal/Ultimate_TicTacToe", "max_stars_repo_head_hexsha": "e77381d526e4f2e8ef325238d3e73419d7ed35ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-11-18T21:51:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-18T21:51:46.000Z", "max_issues_repo_path": "Doc/TestPlan/TestPlan.tex", "max_issues_repo_name": "curiouskunal/Ultimate_TicTacToe", "max_issues_repo_head_hexsha": "e77381d526e4f2e8ef325238d3e73419d7ed35ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Doc/TestPlan/TestPlan.tex", "max_forks_repo_name": "curiouskunal/Ultimate_TicTacToe", "max_forks_repo_head_hexsha": "e77381d526e4f2e8ef325238d3e73419d7ed35ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-12-13T04:23:59.000Z", "max_forks_repo_forks_event_max_datetime": "2017-01-27T17:03:19.000Z", "avg_line_length": 31.7207977208, "max_line_length": 305, "alphanum_fraction": 0.7629782648, "num_tokens": 5646}
|
# Copyright 2017 Martin Haesemeyer. All rights reserved.
#
# Licensed under the MIT license
"""
Script to create movie frames of network activations upon temperature stimulation and behavior generation
"""
import sys
import numpy as np
import h5py
import matplotlib as mpl
import matplotlib.pyplot as pl
import tkinter as tk
from tkinter import filedialog
from core import GradientData, ModelData, ZfGpNetworkModel
from trainingData import CircGradientTrainer
from global_defs import GlobalDefs
if __name__ == "__main__":
if sys.platform == "darwin" and "Tk" not in mpl.get_backend():
print("On OSX tkinter likely does not work properly if matplotlib uses a backend that is not TkAgg!")
print("If using ipython activate TkAgg backend with '%matplotlib tk' and retry.")
sys.exit(1)
# load training data to obtain temperature scaling
try:
std = GradientData.load_standards("gd_training_data.hdf5")
except IOError:
print("No standards found attempting to load full training data")
train_data = GradientData.load("gd_training_data.hdf5")
std = train_data.standards
# load and interpolate temperature stimulus
dfile = h5py.File("stimFile.hdf5", 'r')
tsin = np.array(dfile['sine_L_H_temp'])
x = np.arange(tsin.size) # stored at 20 Hz !
xinterp = np.linspace(0, tsin.size, tsin.size * GlobalDefs.frame_rate // 20)
temp = np.interp(xinterp, x, tsin)
dfile.close()
print("Select model directory")
root = tk.Tk()
root.update()
root.withdraw()
model_dir = filedialog.askdirectory(title="Select directory with model checkpoints", initialdir="./model_data/")
mdata = ModelData(model_dir)
root.update()
# create our model and load from last checkpoint
gpn = ZfGpNetworkModel()
gpn.load(mdata.ModelDefinition, mdata.LastCheckpoint)
# prepend lead-in to stimulus
lead_in = np.full(gpn.input_dims[2] - 1, np.mean(temp[:10]))
temp = np.r_[lead_in, temp]
# run a short simulation to create some sample trajectories for speed and angle inputs
sim = CircGradientTrainer(100, 22, 37)
    sim.p_move = 0.1 / GlobalDefs.frame_rate  # use reduced movement rate to aid visualization
pos = sim.run_simulation(temp.size + 1)
spd = np.sqrt(np.sum(np.diff(pos[:, :2], axis=0)**2, 1))
da = np.diff(pos[:, 2])
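    # spd: per-frame displacement magnitude; da: per-frame change in heading angle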
activities = gpn.unit_stimulus_responses(temp, spd, da, std)
    # make the actual movie at five hertz, simply by skipping frames, and only create the first repeat
for i in range(activities['o'][0].shape[0] // 60):
fig, ax = ZfGpNetworkModel.plot_network(activities, i * 20)
        fig.savefig("./networkMovie/{0}.png".format(i), format="png", dpi=150)
pl.close(fig)
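    # The PNG frames in ./networkMovie can then be assembled into a movie, for
    # example with ffmpeg (illustrative): ffmpeg -framerate 5 -i %d.png out.mp4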
|
{"hexsha": "e0b08a249bd160f91a206348dca94120a01c3a74", "size": 2737, "ext": "py", "lang": "Python", "max_stars_repo_path": "activationMovie.py", "max_stars_repo_name": "haesemeyer/GradientPrediction", "max_stars_repo_head_hexsha": "679b48768ad74dccd58f8c2f434ad60036fc5cb7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-08-01T19:30:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T16:32:56.000Z", "max_issues_repo_path": "activationMovie.py", "max_issues_repo_name": "haesemeyer/GradientPrediction", "max_issues_repo_head_hexsha": "679b48768ad74dccd58f8c2f434ad60036fc5cb7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "activationMovie.py", "max_forks_repo_name": "haesemeyer/GradientPrediction", "max_forks_repo_head_hexsha": "679b48768ad74dccd58f8c2f434ad60036fc5cb7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-02T07:42:56.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-02T07:42:56.000Z", "avg_line_length": 41.4696969697, "max_line_length": 116, "alphanum_fraction": 0.7025940811, "include": true, "reason": "import numpy", "num_tokens": 699}
|
[STATEMENT]
lemma lift_add : "insertion (f::nat\<Rightarrow>real) (liftPoly 0 z (a + b)) = insertion f (liftPoly 0 z a + liftPoly 0 z b)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. insertion f (liftPoly 0 z (a + b)) = insertion f (liftPoly 0 z a + liftPoly 0 z b)
[PROOF STEP]
using liftPoly_add[of 0 z a b]
[PROOF STATE]
proof (prove)
using this:
liftPoly 0 z (a + b) = liftPoly 0 z a + liftPoly 0 z b
goal (1 subgoal):
1. insertion f (liftPoly 0 z (a + b)) = insertion f (liftPoly 0 z a + liftPoly 0 z b)
[PROOF STEP]
by simp
|
{"llama_tokens": 250, "file": "Virtual_Substitution_Debruijn", "length": 2}
|
import os
import sys
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import cv2
from skimage import io
from skimage.transform import resize
sys.path.append('../')
from model.models import CRNet
from config.cfg import cfg
def prepare_data(model):
"""
prepare training and test set
:param model:
:return:
"""
df = pd.read_csv('./cvsplit/jaffe.csv', sep=',')
files = [os.path.join(cfg['jaffe_dir'], df['PIC'][i].replace('-', '.') + '.' + str(df['#'][i]) + '.tiff') for i
in range(len(df['#'].tolist()))]
labels = []
for index, row in df.iterrows():
labels.append(np.argmax(np.array(row[1: 7].tolist())))
print(files)
print(labels)
X = []
y = []
for i in range(len(files)):
if os.path.exists(files[i]):
x = deep_ft(files[i], model)
X.append(x)
y.append(labels[i])
return X, y
def deep_ft(imgfile, model):
"""
extract deep features from pretrained model
:param imgfile:
:param model:
:return:
"""
print(imgfile)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
image = resize(io.imread(imgfile), (224, 224), mode='constant')
# image = cv2.imread(imgfile)
# image = cv2.resize(image, (224, 224)).astype(np.float32)
if image.shape == (224, 224, 3):
image[:, :, 0] -= np.mean(image[:, :, 0])
image[:, :, 1] -= np.mean(image[:, :, 1])
image[:, :, 2] -= np.mean(image[:, :, 2])
else:
tmp = np.zeros([224, 224, 3])
image -= np.mean(image)
tmp[:, :, 0] = image
tmp[:, :, 1] = image
tmp[:, :, 2] = image
image = tmp
image = np.transpose(image, [2, 0, 1])
x = torch.from_numpy(image).unsqueeze(0).float()
x = x.to(device)
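    # NOTE: if `model` is wrapped in nn.DataParallel, its children are nested
    # under a single top-level 'module' child, so this name-based traversal
    # will not match 'model'/'regressor' in that case.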
for name, module in model.named_children():
if name == 'model':
for nm, mod in module.named_children():
if nm != 'fc':
x = mod(x)
else:
break
elif name == 'regressor':
x = x.view(-1, 512)
return x.to('cpu').detach().numpy().ravel().tolist()
# for nm, mod in module.named_children():
# if nm != 'fc1':
# x = mod.forward(x)
# else:
# return x.to('cpu').detach().numpy().ravel().tolist()
def fer_jaffe(X, y):
"""
train and test on JAFFE with SVM
:param X:
:param y:
:return:
"""
from sklearn.model_selection import train_test_split
from sklearn import svm
X_train, X_test, y_train, y_test = train_test_split(np.array(X, dtype=np.float64), np.array(y), test_size=0.3,
random_state=42)
svc = svm.SVC(kernel='rbf')
print('start training SVM with RBF kernel...')
svc.fit(X_train, y_train)
print('finish training SVM with RBF kernel...\nstart evaluating...')
acc = svc.score(X_test, y_test)
print('Accuracy is ', acc)
if __name__ == '__main__':
model = CRNet()
model = model.float()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
model = nn.DataParallel(model)
model = model.to(device)
print('Loading pre-trained model...')
    model.load_state_dict(torch.load('./model/crnet.pth'))
model.eval()
X, y = prepare_data(model)
fer_jaffe(X, y)
|
{"hexsha": "8dc75af8a824cfa892c3a79ad1dfc88508012d4c", "size": 3579, "ext": "py", "lang": "Python", "max_stars_repo_path": "main/fer.py", "max_stars_repo_name": "lucasxlu/CRNet", "max_stars_repo_head_hexsha": "17d27e39a77181921cc2bd5a5a8866a25282b4de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2018-06-26T07:13:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-30T02:12:38.000Z", "max_issues_repo_path": "main/fer.py", "max_issues_repo_name": "lucasxlu/CRNet", "max_issues_repo_head_hexsha": "17d27e39a77181921cc2bd5a5a8866a25282b4de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main/fer.py", "max_forks_repo_name": "lucasxlu/CRNet", "max_forks_repo_head_hexsha": "17d27e39a77181921cc2bd5a5a8866a25282b4de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.9609375, "max_line_length": 115, "alphanum_fraction": 0.5465213747, "include": true, "reason": "import numpy", "num_tokens": 931}
|
import sys
import mwapi
import toolforge
import pandas as pd
import pymysql
import numpy as np
import argparse
from urllib.parse import unquote
import utils.db_access as db_acc
import constants
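# Teach pymysql how to escape numpy.int64 values (pandas yields them) so they
# can be passed directly as SQL query parameters.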
pymysql.converters.encoders[np.int64] = pymysql.converters.escape_int
pymysql.converters.conversions = pymysql.converters.encoders.copy()
pymysql.converters.conversions.update(pymysql.converters.decoders)
## define constants
MIN_IDX = 0
def get_wiki_list(start_idx, end_idx, user_db_port=None, user=None, password=None):
"""
Fetches urls of all wikis and chooses the ones in the given indexes (both start and end indexes are included).
:param start_idx: starting index of the wikis, which should be processed.
:param end_idx: starting index of the wikis, which should be processed.
:param user_db_port: port for connecting to local Sources table through ssh tunneling, if used.
:param user: Toolforge username of the tool.
:param password: Toolforge password of the tool.
:return: list of wikis' urls within given indexes
"""
try:
conn = db_acc.connect_to_user_database(
constants.DATABASE_NAME, user_db_port, user, password
)
with conn.cursor() as cur:
cur.execute(
"select url from Sources where url is not NULL"
) # all, except 'meta'
ret = [wiki[0] for wiki in cur][start_idx : end_idx + 1]
conn.close()
return ret
except Exception as err:
print("Something went wrong.\n", err)
exit(1)
def save_content(
wiki, data_list, in_api, in_database, user_db_port=None, user=None, password=None
):
"""
Saves data into Scripts table.
:param wiki: The wiki project corresponding to the data provided.
:param data_list: The data to be saved in Scripts table.
:param in_database: Whether data was collected from databases.
:param in_api: Whether data was collected from API.
:param user_db_port: port for connecting to local Sources table through ssh tunneling, if used.
:param user: Toolforge username of the tool.
:param password: Toolforge password of the tool.
:return: None
"""
data_df = pd.DataFrame(
data_list,
columns=[
"id",
"title",
"url",
"length",
"content",
"content_model",
"touched",
"lastrevid",
],
)
query = (
"INSERT INTO Scripts(dbname, page_id, title, sourcecode, touched, "
"in_api, in_database, length, content_model, lastrevid, url) "
"VALUES(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s) "
"ON DUPLICATE KEY UPDATE title = %s, sourcecode = %s, touched = %s, in_api = %s, in_database = %s, "
"length = %s, content_model = %s, lastrevid = %s, url = %s, is_missed=%s"
)
try:
conn = db_acc.connect_to_user_database(
constants.DATABASE_NAME, user_db_port, user, password
)
with conn.cursor() as cur:
cur.execute("SELECT dbname FROM Sources WHERE url = %s", wiki)
dbname = cur.fetchone()[0]
for index, elem in data_df.iterrows():
time = elem["touched"].replace("T", " ").replace("Z", " ")
cur.execute(
query,
[
dbname,
elem["id"],
elem["title"],
elem["content"],
time,
in_api,
in_database,
elem["length"],
elem["content_model"],
elem["lastrevid"],
elem["url"],
elem["title"],
elem["content"],
time,
in_api,
in_database,
elem["length"],
elem["content_model"],
elem["lastrevid"],
elem["url"],
0,
],
)
conn.commit()
conn.close()
except Exception as err:
print("Error saving pages from", wiki)
print(err)
def save_missed_content(wiki, missed, user_db_port=None, user=None, password=None):
"""
Mark missed pages as is_missed=True in Scripts table.
:param wiki: The wiki project corresponding to the data provided.
:param missed: List of pages missed.
:param user_db_port: port for connecting to local Sources table through ssh tunneling, if used.
:param user: Toolforge username of the tool.
:param password: Toolforge password of the tool.
:return: None
"""
missed_df = pd.DataFrame(missed, columns=["id"])
query = (
"INSERT INTO Scripts(dbname, page_id, in_api, is_missed) "
"values(%s, %s, %s, %s) "
"ON DUPLICATE KEY UPDATE in_api = %s, is_missed = %s"
)
try:
conn = db_acc.connect_to_user_database(
constants.DATABASE_NAME, user_db_port, user, password
)
with conn.cursor() as cur:
cur.execute("SELECT dbname FROM Sources WHERE url = %s", wiki)
dbname = cur.fetchone()[0]
for index, elem in missed_df.iterrows():
cur.execute(query, [dbname, elem["id"], 1, 1, 1, 1])
conn.commit()
conn.close()
except Exception as err:
print("Something went wrong.\n", err)
exit(1)
def needs_update(wiki, pageid, title, touched, revid):
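    # Placeholder: always re-fetch. A fuller implementation would compare
    # `touched` and `revid` against the values already stored for (wiki, pageid).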
return True
def get_contents(wikis, revise=False, user_db_port=None, user=None, password=None):
"""
Connects to the wiki by using API, fetches Scribunto modules and additional info from there
and saves them to the user's database.
Possible ways for the process to fail:
1. Failed to connect to wiki (See from output)
2. Connected but could not GET wiki (See from output)
3. Could not grab a page (Listed in missed pages)
:param wikis: list of urls of wikis, from which the modules will be collected
:param revise: `False` collects all contents and saves fresh
`True` only collects those that have been edited
:param user_db_port: port for connecting to local Sources table through ssh tunneling, if used.
:param user: Toolforge username of the tool.
:param password: Toolforge password of the tool.
:return: None
"""
for wiki in wikis:
try:
session = mwapi.Session(wiki, user_agent="abstract-wiki-ds")
except Exception as e:
print("Failed to connect to", wiki, "\n", e)
continue
data_list = []
cnt_data_list = 0
missed = []
cnt_missed = 0
_gapcontinue = ""
_continue = ""
while True:
params = {
"action": "query",
"generator": "allpages",
"gapnamespace": 828,
"gaplimit": 300,
"format": "json",
"prop": "info",
"inprop": "url",
"gapcontinue": _gapcontinue,
"continue": _continue,
}
try:
result = session.get(params)
except Exception as e:
print("Could not GET", wiki, "\n", e)
break
if "query" in result.keys():
for page in list(result["query"]["pages"].values()):
try:
pageid = page["pageid"]
title = page["title"]
touched = page["touched"]
length = page["length"]
url = unquote(page["fullurl"])
revid = page["lastrevid"]
if (not revise) or needs_update(
wiki, pageid, title, touched, revid
):
params = {
"action": "query",
"format": "json",
"prop": "revisions",
"revids": revid,
"rvprop": "content",
"rvslots": "main",
"formatversion": 2,
}
rev_result = session.get(params)
content_info = rev_result["query"]["pages"][0]["revisions"][
0
]["slots"]["main"]
content = content_info["content"]
content_model = content_info["contentmodel"]
if content_model == "Scribunto":
data_list.append(
[
pageid,
title,
url,
length,
content,
content_model,
touched,
revid,
]
)
except Exception as err:
if "pageid" in page.keys():
missed.append([pageid])
print("Miss:", wiki, title, pageid, "\n", err)
cnt_data_list += len(data_list)
cnt_missed += len(missed)
save_missed_content(wiki, missed, user_db_port, user, password)
save_content(wiki, data_list, 1, 0, user_db_port, user, password)
print(cnt_data_list, "pages loaded...")
data_list, missed = [], []
try:
_continue = result["continue"]["continue"]
_gapcontinue = (
result["continue"]["gapcontinue"]
if "gapcontinue" in result["continue"]
else ""
)
except:
break
print(
"All pages loaded for %s. Missed: %d, Loaded: %d"
% (wiki, cnt_missed, cnt_data_list)
)
print("Done loading!")
def get_db_map(wikis=[], dbs=[], user_db_port=None, user=None, password=None):
"""
Fetches info from the users database about the wikis with given dbnames or urls.
    Searches by urls by default; if none are given (wikis is empty), searches by dbnames from dbs.
:param wikis: list of wikis' urls, whose info needed.
:param dbs: list of wikis' dbnames, whose info needed.
:param user_db_port: port for connecting to local Sources table through ssh tunneling, if used.
:param user: Toolforge username of the tool.
:param password: Toolforge password of the tool.
:return: dictionary of fetched info in form {dbname1: url1, dbname2:url2,...},
        a comma-separated placeholder (%s) string for the number of dbs or wikis.
"""
query_input = []
if len(wikis) > 0:
placeholders = ",".join("%s" for _ in wikis)
query_input = wikis
query = "SELECT dbname, url FROM Sources WHERE url IN (%s)" % placeholders
else:
placeholders = ",".join("%s" for _ in dbs)
query_input = dbs
query = "SELECT dbname, url FROM Sources WHERE dbname IN (%s)" % placeholders
db_map = {}
try:
conn = db_acc.connect_to_user_database(
constants.DATABASE_NAME, user_db_port, user, password
)
with conn.cursor() as cur:
cur.execute(query, query_input)
db_map = {data[0]: data[1] for data in cur}
conn.close()
except Exception as err:
print("Something went wrong.\n", err)
exit(1)
return db_map, placeholders
def get_pages(df, in_api, in_database, user_db_port=None, user=None, password=None):
"""
    Connects to the wikis from the wiki field, fetches information for the pages with the given page_id,
    then saves fetched content and missing content to the user's database.
:param df: dataframe with columns page_id, dbname, wiki (represents url of wiki). dbname is not required.
:param in_api: the value to which in_api field will be set
:param in_database: the value to which in_database field will be set
:param user_db_port: port for connecting to local Sources table through ssh tunneling, if used.
:param user: Toolforge username of the tool.
:param password: Toolforge password of the tool.
:return: None
"""
for wiki, w_df in df.groupby("wiki"):
try:
session = mwapi.Session(wiki, user_agent="abstract-wiki-ds")
except Exception as e:
print("Failed to connect to", wiki, "\n", e)
continue
pageids = w_df["page_id"].values
data_list = []
missed = []
for pageid in list(pageids):
params = {
"action": "query",
"format": "json",
"prop": "revisions|info",
"pageids": pageid,
"rvprop": "content",
"rvslots": "main",
"inprop": "url",
"formatversion": 2,
}
try:
result = session.get(params)
page = result["query"]["pages"][0]
if page["lastrevid"] != 0:
url = unquote(page["fullurl"])
title = page["title"]
length = page["length"]
content_info = page["revisions"][0]["slots"]["main"]
content = content_info["content"]
content_model = content_info["contentmodel"]
touched = page["touched"]
revid = page["lastrevid"]
if content_model == "Scribunto":
data_list.append(
[
pageid,
title,
url,
length,
content,
content_model,
touched,
revid,
]
)
except Exception as err:
missed.append([pageid])
print("Miss:", pageid, "from wiki:", wiki, "\n", err)
save_content(wiki, data_list, in_api, in_database, user_db_port, user, password)
save_missed_content(wiki, missed, user_db_port, user, password)
print(
"All pages loaded for %s. Missed: %d, Loaded: %d"
% (wiki, len(missed), len(data_list))
)
def get_missed_contents(wikis, user_db_port=None, user=None, password=None):
"""
Retry fetching data for missed pages.
    :param wikis: list of urls of the wikis whose missed pages should be retried.
:param user_db_port: port for connecting to local Sources table through ssh tunneling, if used.
:param user: Toolforge username of the tool.
:param password: Toolforge password of the tool.
:return: None
"""
print("Started loading missed contents...")
db_map, placeholders = get_db_map(
wikis=wikis, user_db_port=user_db_port, user=user, password=password
)
query = (
"SELECT page_id, dbname FROM Scripts WHERE dbname IN (%s) AND in_api=1 AND is_missed=1"
% placeholders
)
try:
conn = db_acc.connect_to_user_database(
constants.DATABASE_NAME, user_db_port, user, password
)
with conn.cursor() as cur:
cur.execute(query, list(db_map.keys()))
df = pd.DataFrame(cur, columns=["page_id", "dbname"])
conn.close()
except Exception as err:
print("Something went wrong.\n", err)
exit(1)
df["wiki"] = df["dbname"].map(db_map)
    # forward the connection info so save_content/save_missed_content inside
    # get_pages can reach the user's database
    get_pages(df, 1, 0, user_db_port, user, password)
print("Done loading missed pages!")
def index_type(x):
x = int(x)
if x < MIN_IDX:
        raise argparse.ArgumentTypeError("Minimum index is 0")
return x
if __name__ == "__main__":
parser = argparse.ArgumentParser(
        description="Updates Lua scripts and their additional info in the Toolforge database, "
        "fetching info from the Wikimedia API. For testing and parallelization purposes, use the "
        "start-idx and end-idx parameters to choose which wikis from the Sources table will be "
        "processed. To use from a local PC, provide all the additional flags needed for "
        "establishing a connection through ssh tunneling. "
        "More help is available at "
"https://wikitech.wikimedia.org/wiki/Help:Toolforge/Database#SSH_tunneling_for_local_testing_which_makes_use_of_Wiki_Replica_databases"
)
parser.add_argument(
"start_idx",
type=index_type,
help="Starting index of info, fetched from database Sources, sorted by key (min=0).",
)
parser.add_argument(
"end_idx",
type=index_type,
        help="Ending index of info, fetched from database Sources, sorted by key (min=0).",
)
parser.add_argument(
"--revise",
"-rev",
action="store_true",
help="Whether content should be revised.",
)
local_data = parser.add_argument_group(
title="Info for connecting to Toolforge from local pc"
)
local_data.add_argument(
"--user-db-port",
"-udb",
type=int,
default=None,
help="Port for connecting to tables, created by user in Toolforge, "
"through ssh tunneling, if used.",
)
local_data.add_argument(
"--user", "-u", type=str, default=None, help="Toolforge username of the tool."
)
local_data.add_argument(
"--password",
"-p",
type=str,
default=None,
help="Toolforge password of the tool.",
)
args = parser.parse_args()
if args.start_idx > args.end_idx:
        sys.exit("Error: Ending index must be greater than or equal to the start index.")
wikis = get_wiki_list(
args.start_idx, args.end_idx, args.user_db_port, args.user, args.password
)
get_contents(wikis, args.revise, args.user_db_port, args.user, args.password)
get_missed_contents(wikis, args.user_db_port, args.user, args.password)
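
# Example invocation (hypothetical values; the flags are exactly those defined
# by the argparse setup above):
#
#   python fetch_content.py 0 9 --revise --user-db-port 3306 \
#       --user <toolforge-user> --password <toolforge-password>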
|
{"hexsha": "32f01865cdd781d826ce9369782f7459daf4321b", "size": 18622, "ext": "py", "lang": "Python", "max_stars_repo_path": "fetch_content.py", "max_stars_repo_name": "wikimedia/abstract-wikipedia-data-science", "max_stars_repo_head_hexsha": "e71cee92f3c8273749b747a155f802e62a425d0a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-09T16:15:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-26T07:33:24.000Z", "max_issues_repo_path": "fetch_content.py", "max_issues_repo_name": "wikimedia/abstract-wikipedia-data-science", "max_issues_repo_head_hexsha": "e71cee92f3c8273749b747a155f802e62a425d0a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 13, "max_issues_repo_issues_event_min_datetime": "2020-12-14T13:08:22.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-07T20:17:04.000Z", "max_forks_repo_path": "fetch_content.py", "max_forks_repo_name": "wikimedia/abstract-wikipedia-data-science", "max_forks_repo_head_hexsha": "e71cee92f3c8273749b747a155f802e62a425d0a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-06-07T02:49:23.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-15T11:13:20.000Z", "avg_line_length": 36.8752475248, "max_line_length": 143, "alphanum_fraction": 0.5367307486, "include": true, "reason": "import numpy", "num_tokens": 3999}
|
// -*- C++ -*-
//
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//
// Jiao Lin
// California Institute of Technology
// (C) 2007 All Rights Reserved
//
// {LicenseText}
//
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//
#include <sstream>
#include <boost/python.hpp>
#include "mccomponents/kernels/IsotropicKernel.h"
#include "mccomponents/boostpython_binding/wrap_kernel.h"
namespace wrap_mccomponents {
void wrap_IsotropicKernel()
{
using namespace boost::python;
using namespace mccomponents::boostpython_binding;
typedef mccomponents::kernels::IsotropicKernel w_t;
kernel_wrapper<w_t>::wrap
("IsotropicKernel",
init<double, double> ()
)
;
}
}
// version
// $Id$
// End of file
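
// Hedged usage sketch: once the enclosing boost.python module is imported, the
// wrapper above should expose a Python class constructible from the two doubles
// declared in init<double, double>(). The module name and argument meanings
// below are assumptions -- consult IsotropicKernel.h for the real ones:
//
//   import mccomponentsbp                              # hypothetical module name
//   kernel = mccomponentsbp.IsotropicKernel(1.0, 2.0)  # illustrative values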
|
{"hexsha": "1452919b77b36469c5c96f2832b06e4eb80ef854", "size": 894, "ext": "cc", "lang": "C++", "max_stars_repo_path": "packages/mccomponents/mccomponentsbpmodule/wrap_IsotropicKernel.cc", "max_stars_repo_name": "mcvine/mcvine", "max_stars_repo_head_hexsha": "42232534b0c6af729628009bed165cd7d833789d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5.0, "max_stars_repo_stars_event_min_datetime": "2017-01-16T03:59:47.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-23T02:54:19.000Z", "max_issues_repo_path": "packages/mccomponents/mccomponentsbpmodule/wrap_IsotropicKernel.cc", "max_issues_repo_name": "mcvine/mcvine", "max_issues_repo_head_hexsha": "42232534b0c6af729628009bed165cd7d833789d", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 293.0, "max_issues_repo_issues_event_min_datetime": "2015-10-29T17:45:52.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-07T16:31:09.000Z", "max_forks_repo_path": "packages/mccomponents/mccomponentsbpmodule/wrap_IsotropicKernel.cc", "max_forks_repo_name": "mcvine/mcvine", "max_forks_repo_head_hexsha": "42232534b0c6af729628009bed165cd7d833789d", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1.0, "max_forks_repo_forks_event_min_datetime": "2019-05-25T00:53:31.000Z", "max_forks_repo_forks_event_max_datetime": "2019-05-25T00:53:31.000Z", "avg_line_length": 20.3181818182, "max_line_length": 81, "alphanum_fraction": 0.4843400447, "num_tokens": 183}
|
#=
ported from:
COTD Entry submitted by John W. Ratcliff [jratcliff@verant.com]
THIS IS A CODE SNIPPET WHICH WILL EFFICIENTLY TRIANGULATE ANY
POLYGON/CONTOUR (without holes) AS A STATIC CLASS. THIS SNIPPET
IS COMPRISED OF 3 FILES: TRIANGULATE.H, THE HEADER FILE FOR THE
TRIANGULATE BASE CLASS; TRIANGULATE.CPP, THE IMPLEMENTATION OF
THE TRIANGULATE BASE CLASS; AND TEST.CPP, A SMALL TEST PROGRAM
DEMONSTRATING THE USAGE OF THE TRIANGULATOR. THE TRIANGULATE
BASE CLASS ALSO PROVIDES TWO USEFUL HELPER METHODS: ONE WHICH
COMPUTES THE AREA OF A POLYGON, AND ANOTHER WHICH DOES AN
EFFICIENT POINT-IN-TRIANGLE TEST.
SUBMITTED BY JOHN W. RATCLIFF (jratcliff@verant.com) July 22, 2000

Static class to triangulate any contour/polygon efficiently.
You should replace Vector2d with whatever your own Vector
class might be. Does not support polygons with holes.
Uses STL vectors to represent a dynamic array of vertices.
This code snippet was submitted to FlipCode.com by
John W. Ratcliff (jratcliff@verant.com) on July 22, 2000.
I did not write the original code/algorithm for this
triangulator; in fact, I can't even remember where I
found it in the first place. However, I did rework it into
the following black-box static class so you can make easy
use of it in your own code. Simply replace Vector2d with
whatever your own Vector implementation might be.
=#
function area{N, T}(contour::AbstractVector{Point{N, T}})
n = length(contour)
A = zero(T)
p=n; q=1
while q <= n
A += cross(contour[p], contour[q])
p = q; q +=1
end
return A*T(0.5)
end
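
# The loop above implements the shoelace formula: for consecutive vertices
# p and q, A = (1/2) * sum(x_p * y_q - y_p * x_q). A positive result means the
# contour winds counter-clockwise, which polygon2faces below relies on.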
"""
InsideTriangle decides if a point P is Inside of the triangle
defined by A, B, C.
"""
function InsideTriangle{T<:Point}(A::T, B::T, C::T, P::T)
a = C-B; b = A-C; c = B-A
ap = P-A; bp = P-B; cp = P-C
a_bp = a[1]*bp[2] - a[2]*bp[1];
c_ap = c[1]*ap[2] - c[2]*ap[1];
b_cp = b[1]*cp[2] - b[2]*cp[1];
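    # Each term is a 2D cross product of a triangle edge with the vector from
    # that edge's start vertex to P; all three non-negative means P lies on or
    # inside the counter-clockwise triangle ABC.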
t0 = zero(eltype(T))
return ((a_bp >= t0) && (b_cp >= t0) && (c_ap >= t0))
end
function snip{N, T}(
contour::AbstractVector{Point{N, T}}, u, v, w, n, V
)
A = contour[V[u]]
B = contour[V[v]]
C = contour[V[w]]
x = (
((B[1]-A[1])*(C[2]-A[2])) -
((B[2]-A[2])*(C[1]-A[1]))
)
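    # x is twice the signed area of the candidate ear (A, B, C); ears that are
    # clockwise or numerically degenerate are rejected by the epsilon test below.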
if 0.0000000001f0 > x
return false
end
for p=1:n
((p == u) || (p == v) || (p == w)) && continue;
P = contour[V[p]]
if InsideTriangle(A, B, C, P)
return false;
end
end
return true;
end
"""
Triangulates a Polygon given as a `contour`::AbstractArray{Point} without holes.
It will return a Vector{`facetype`}, defining indexes into `contour`
"""
function polygon2faces{P<:Point}(
contour::AbstractArray{P}, facetype=GLTriangle
)
#= allocate and initialize list of Vertices in polygon =#
result = facetype[]
n = length(contour)
if n < 3
error("Not enough points in the contour. Found: $contour")
end
#= we want a counter-clockwise polygon in V =#
if 0 < area(contour)
V = Int[i for i=1:n]
else
V = Int[i for i=n:-1:1]
end
nv = n
#= remove nv-2 Vertices, creating 1 triangle every time =#
count = 2*nv #= error detection =#
v=nv
while nv>2
#= if we loop, it is probably a non-simple polygon =#
if 0 >= count
return result
end
count -= 1
#= three consecutive vertices in current polygon, <u,v,w> =#
u = v; (u > nv) && (u = 1) #= previous =#
v = u+1; (v > nv) && (v = 1) #= new v =#
w = v+1; (w > nv) && (w = 1) #= next =#
if snip(contour, u, v, w, nv, V)
#= true names of the vertices =#
a = V[u]; b = V[v]; c = V[w];
#= output Triangle =#
push!(result, facetype(Triangle{Int}(a, b, c)))
#= remove v from remaining polygon =#
s = v; t = v+1
while t<=nv
V[s] = V[t]
s += 1; t += 1
end
nv -= 1
            #= reset error detection counter =#
count = 2*nv
end
end
return result
end
function topoint{T}(::Type{Point{3, T}}, p::Point{2, T})
Point{3, T}(p[1], p[2], T(0))
end
function topoint{T}(::Type{Point{3, T}}, p::Point{3, T})
p
end
function topoint{T}(::Type{Point{2, T}}, p::Point{3, T})
Point{2, T}(p[1], p[2])
end
@compat function (::Type{M}){M<:AbstractMesh, P<:Point}(
points::AbstractArray{P}
)
faces = polygon2faces(points, facetype(M))
VT = vertextype(M)
GLNormalMesh(faces=faces, vertices=VT[topoint(VT, p) for p in points])
end
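
# Minimal usage sketch (assumes GeometryTypes' Point and GLTriangle types are
# in scope; the unit square below is hypothetical test data):
#
#   contour = [Point(0f0, 0f0), Point(1f0, 0f0), Point(1f0, 1f0), Point(0f0, 1f0)]
#   faces = polygon2faces(contour)   # Vector{GLTriangle} of indices into contour
#   mesh = GLNormalMesh(contour)     # uses the (::Type{M})(points) method above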
|
{"hexsha": "866076c452d6d957651485835d58e9f147729d5c", "size": 5132, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/polygons.jl", "max_stars_repo_name": "JuliaPackageMirrors/GeometryTypes.jl", "max_stars_repo_head_hexsha": "705e5a646dd2177bbb4b9f8c26b52bc832b38d65", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/polygons.jl", "max_issues_repo_name": "JuliaPackageMirrors/GeometryTypes.jl", "max_issues_repo_head_hexsha": "705e5a646dd2177bbb4b9f8c26b52bc832b38d65", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/polygons.jl", "max_forks_repo_name": "JuliaPackageMirrors/GeometryTypes.jl", "max_forks_repo_head_hexsha": "705e5a646dd2177bbb4b9f8c26b52bc832b38d65", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.8757763975, "max_line_length": 80, "alphanum_fraction": 0.5366328917, "num_tokens": 1546}
|
# Getting predictions from a deployed resnet-18 model
import json
import requests
import numpy as np
from pathlib import Path
import typer
import decord
from mlserve.common.logger import logger
from mlserve.common.misc import stopwatch
app = typer.Typer(name="Deployment tests", add_completion=False)
def load_video(
vid_loc:Path = typer.Option(..., "-i", help="Video Location")
) -> decord.VideoReader:
vr = decord.VideoReader(vid_loc)
return vr
@app.command()
@stopwatch
def get_preds(
vid_loc:Path = typer.Option("/home/srivatsas/work/data/sample-mp4-file.mp4", "-i", help="Video Location", exists=True),
    num_frames:int = typer.Option(10, "-n", help="Number of frames to process")
) -> None:
    vr = load_video(vid_loc=str(vid_loc))
np_arr:np.ndarray = vr.get_batch(indices=range(num_frames)).asnumpy()
# Ping the server
    url = 'http://127.0.0.1:8000/say_hi'
print(json.loads((requests.get(url=url)).content))
# Get predictions
logger.info(f"Requesting prediction for {np_arr.shape[0]} frames")
response = requests.post(
url='http://127.0.0.1:8000/get_vecs',
json={'lvecs':np_arr.tolist()}
#data=json.dumps({'lvecs':np_arr.tolist()})
)
print(f"Status: {response.status_code}")
if response.status_code == 200:
fvecs = np.array(json.loads(response.content)['fvecs'])
logger.info(f"Obtained feature vectors of shape: {fvecs.shape}")
else:
logger.error("Expected response not received")
if __name__ == '__main__':
app()
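
# Example run (hypothetical path; -i and -n are the typer options above),
# against a server exposing /say_hi and /get_vecs on 127.0.0.1:8000:
#
#   python client.py -i ~/videos/sample.mp4 -n 16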
|
{"hexsha": "338b6f5d24816138874af197a38e067ab1a885a0", "size": 1551, "ext": "py", "lang": "Python", "max_stars_repo_path": "mlserve/local_serve/client.py", "max_stars_repo_name": "svats2k/mlserve", "max_stars_repo_head_hexsha": "3f93e3db872612fc6222cb585b425304d9429e45", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-12-23T07:53:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T07:04:57.000Z", "max_issues_repo_path": "mlserve/local_serve/client.py", "max_issues_repo_name": "svats2k/mlserve", "max_issues_repo_head_hexsha": "3f93e3db872612fc6222cb585b425304d9429e45", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mlserve/local_serve/client.py", "max_forks_repo_name": "svats2k/mlserve", "max_forks_repo_head_hexsha": "3f93e3db872612fc6222cb585b425304d9429e45", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.02, "max_line_length": 123, "alphanum_fraction": 0.6847195358, "include": true, "reason": "import numpy", "num_tokens": 405}
|
[STATEMENT]
lemma erf_minus [simp]: "erf (-z) = - erf z"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. erf (- z) = - erf z
[PROOF STEP]
unfolding erf_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<Sum>n. of_real (erf_coeffs n) * (- z) ^ n) = - (\<Sum>n. of_real (erf_coeffs n) * z ^ n)
[PROOF STEP]
by (subst suminf_minus [OF summable_erf, symmetric], rule suminf_cong)
(simp_all add: erf_coeffs_def)
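
(* Informally, erf is odd because its defining power series has only
   odd-degree terms. For real z the same fact follows from the integral form,
   substituting u = -t:
     erf(-z) = \frac{2}{\sqrt{\pi}} \int_0^{-z} e^{-t^2}\,dt
             = -\frac{2}{\sqrt{\pi}} \int_0^{z} e^{-u^2}\,du = -erf(z). *)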
|
{"llama_tokens": 190, "file": "Error_Function_Error_Function", "length": 2}
|
# -*- coding: utf-8 -*-
# /usr/bin/python3.9
# import cv2
# import elasticdeform
import numpy as np
from skimage.exposure import rescale_intensity
class Background:
"""Background definition"""
def __init__(
self,
img_size,
perlin_noise_level,
poisson_noise_level,
perlin_out_range,
full_noise_level,
):
self.perlin_noise_level = perlin_noise_level
self.poisson_noise_level = poisson_noise_level
self.perlin_out_range = perlin_out_range
self.full_noise_level = full_noise_level
self.img_size = img_size
self.noise_map = np.zeros((self.img_size, self.img_size))
def create_background(self):
        ### Rescale Gaussian noise (used here as a stand-in for Poisson noise)
self.poisson_noise = np.random.normal(
loc=0, scale=1, size=(self.img_size, self.img_size)
)
self.poisson_noise = rescale_intensity(
self.poisson_noise, out_range=(0, self.poisson_noise_level)
)
#### Generate perlin noise
self.perlin_noise = self.generate_fractal_noise_2d(
(self.img_size, self.img_size),
(self.perlin_noise_level, self.perlin_noise_level),
5,
)
self.perlin_noise = rescale_intensity(
self.perlin_noise, out_range=(0, self.perlin_out_range)
)
        #### Combine and rescale the total noise
self.sum_noise = self.perlin_noise + self.poisson_noise
self.noise_map = rescale_intensity(
self.sum_noise, out_range=(0, self.full_noise_level)
)
def generate_perlin_noise_2d(self, shape, res):
def f(t):
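            # Perlin's quintic fade curve 6t^5 - 15t^4 + 10t^3: its first and
            # second derivatives vanish at t = 0 and t = 1, so neighboring
            # noise cells blend together without visible seams.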
return 6 * t ** 5 - 15 * t ** 4 + 10 * t ** 3
delta = (res[0] / shape[0], res[1] / shape[1])
d = (shape[0] // res[0], shape[1] // res[1])
grid = (
np.mgrid[0 : res[0] : delta[0], 0 : res[1] : delta[1]].transpose(1, 2, 0)
% 1
)
# Gradients
angles = 2 * np.pi * np.random.rand(res[0] + 1, res[1] + 1)
gradients = np.dstack((np.cos(angles), np.sin(angles)))
g00 = gradients[0:-1, 0:-1].repeat(d[0], 0).repeat(d[1], 1)
g10 = gradients[1:, 0:-1].repeat(d[0], 0).repeat(d[1], 1)
g01 = gradients[0:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
# Ramps
n00 = np.sum(np.dstack((grid[:, :, 0], grid[:, :, 1])) * g00, 2)
n10 = np.sum(np.dstack((grid[:, :, 0] - 1, grid[:, :, 1])) * g10, 2)
n01 = np.sum(np.dstack((grid[:, :, 0], grid[:, :, 1] - 1)) * g01, 2)
n11 = np.sum(np.dstack((grid[:, :, 0] - 1, grid[:, :, 1] - 1)) * g11, 2)
# Interpolation
t = f(grid)
n0 = n00 * (1 - t[:, :, 0]) + t[:, :, 0] * n10
n1 = n01 * (1 - t[:, :, 0]) + t[:, :, 0] * n11
return np.sqrt(2) * ((1 - t[:, :, 1]) * n0 + t[:, :, 1] * n1)
def generate_fractal_noise_2d(self, shape, res, octaves=1, persistence=0.5):
noise = np.zeros(shape)
frequency = 1
amplitude = 1
for _ in range(octaves):
noise += amplitude * self.generate_perlin_noise_2d(
shape, (frequency * res[0], frequency * res[1])
)
frequency *= 2
amplitude *= persistence
return noise
def smooth_background(self, sigma):
from scipy.ndimage import gaussian_filter
self.noise_map = gaussian_filter(self.noise_map, sigma=sigma)
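
# Minimal usage sketch (parameter values are illustrative, not tuned):
#
#   bg = Background(
#       img_size=256, perlin_noise_level=4, poisson_noise_level=0.1,
#       perlin_out_range=0.5, full_noise_level=1.0,
#   )
#   bg.create_background()
#   bg.smooth_background(sigma=2)
#   # bg.noise_map now holds the composed, rescaled background map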
|
{"hexsha": "c4f7af17b31798233eee70ed35ebc5e11e1248b2", "size": 3470, "ext": "py", "lang": "Python", "max_stars_repo_path": "spheroid_simulator/background.py", "max_stars_repo_name": "ebouilhol/neuron_simulator", "max_stars_repo_head_hexsha": "bcaca4710d3f9b423ecd5253f9a67bd468b1af87", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "spheroid_simulator/background.py", "max_issues_repo_name": "ebouilhol/neuron_simulator", "max_issues_repo_head_hexsha": "bcaca4710d3f9b423ecd5253f9a67bd468b1af87", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "spheroid_simulator/background.py", "max_forks_repo_name": "ebouilhol/neuron_simulator", "max_forks_repo_head_hexsha": "bcaca4710d3f9b423ecd5253f9a67bd468b1af87", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.4081632653, "max_line_length": 85, "alphanum_fraction": 0.54870317, "include": true, "reason": "import numpy,from scipy", "num_tokens": 1039}
|
[STATEMENT]
lemma approx_HComplex:
"\<lbrakk>a \<approx> b; c \<approx> d\<rbrakk> \<Longrightarrow> HComplex a c \<approx> HComplex b d"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>a \<approx> b; c \<approx> d\<rbrakk> \<Longrightarrow> HComplex a c \<approx> HComplex b d
[PROOF STEP]
unfolding approx_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>a - b \<in> Infinitesimal; c - d \<in> Infinitesimal\<rbrakk> \<Longrightarrow> HComplex a c - HComplex b d \<in> Infinitesimal
[PROOF STEP]
by (simp add: Infinitesimal_HComplex)
|
{"llama_tokens": 229, "file": null, "length": 2}
|
[STATEMENT]
lemma headconst_zero:
fixes p::"'a::zero poly"
shows "isnpolyh p n0 \<Longrightarrow> headconst p = 0 \<longleftrightarrow> p = 0\<^sub>p"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. isnpolyh p n0 \<Longrightarrow> (headconst p = (0::'a)) = (p = 0\<^sub>p)
[PROOF STEP]
by (induct p arbitrary: n0 rule: headconst.induct) auto
|
{"llama_tokens": 140, "file": "Taylor_Models_Polynomial_Expression", "length": 1}
|
#include <cstdlib>
#include <string>
#include <boost/filesystem.hpp>
#include <ros/ros.h>
#include <rosbag/recorder.h>
namespace fs = boost::filesystem;
using namespace std::string_literals;
int main(int argc, char** argv) {
ros::init(argc, argv, "record_snapshot");
rosbag::RecorderOptions opts;
const char* home = std::getenv("HOME");
if (home == nullptr) {
ROS_FATAL("HOME environment variable not set");
return 1;
}
ros::NodeHandle nh("~");
fs::path directory(
home + "/"s + nh.param<std::string>("snapshot_path", "rosbag_snapshots") +
"/");
if (!fs::exists(directory) && !fs::create_directory(directory)) {
ROS_FATAL("record_snapshot was not able to create directory for rosbags %s",
directory.string().c_str());
return 1;
}
  constexpr uint32_t bytes_per_mb = 1048576;
  opts.snapshot = true;
  opts.buffer_size = bytes_per_mb * nh.param<int>("snapshot_size", 1024);
opts.prefix =
directory.string() +
nh.param<std::string>("snapshot_file_prefix", "rosbag_snapshot");
opts.topics = nh.param<std::vector<std::string>>(
"topics_to_record", {"/tf", "/camera/image_raw"});
ROS_INFO("started rosbag snapshot record\n");
ROS_INFO(
"publish a std_msgs/Empty on the topic /snapshot_trigger to record what "
"just happened");
ROS_INFO("the rosbag is then saved to ~/%s", directory.string().c_str());
// Run the recorder
rosbag::Recorder recorder(opts);
const int result = recorder.run();
return result;
}
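
// To capture the buffer after something interesting happens, publish an empty
// message on the trigger topic named in the log output above, e.g.:
//
//   rostopic pub -1 /snapshot_trigger std_msgs/Empty "{}"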
|
{"hexsha": "351631a9055322c15cda5f4a4dd5a885ad9309b6", "size": 1519, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "src/kitcar_rosbag/src/record_snapshot/record_snapshot.cpp", "max_stars_repo_name": "KITcar-Team/kitcar-rosbag", "max_stars_repo_head_hexsha": "5fcbdcab16765862802e4c4cb69ed71deceea4f3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/kitcar_rosbag/src/record_snapshot/record_snapshot.cpp", "max_issues_repo_name": "KITcar-Team/kitcar-rosbag", "max_issues_repo_head_hexsha": "5fcbdcab16765862802e4c4cb69ed71deceea4f3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/kitcar_rosbag/src/record_snapshot/record_snapshot.cpp", "max_forks_repo_name": "KITcar-Team/kitcar-rosbag", "max_forks_repo_head_hexsha": "5fcbdcab16765862802e4c4cb69ed71deceea4f3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.125, "max_line_length": 80, "alphanum_fraction": 0.6655694536, "num_tokens": 385}
|