% SIAM Article Template
\documentclass[review,onefignum,onetabnum]{siamart171218}
% Information that is shared between the article and the supplement
% (title and author information, macros, packages, etc.) goes into
% ex_shared.tex. If there is no supplement, this file can be included
% directly.
\input{ex_shared}
% Optional PDF information
\ifpdf
\hypersetup{
pdftitle={Inferring Relative Ability from Winning Probability
in Multi-Entrant Contests},
pdfauthor={Peter Cotton}
}
\fi
% The next statement enables references to information in the
% supplement. See the xr-hyperref package for details.
%% Use \myexternaldocument on Overleaf
\myexternaldocument{ex_supplement}
% FundRef data to be entered by SIAM
%<funding-group>
%<award-group>
%<funding-source>
%<named-content content-type="funder-name">
%</named-content>
%<named-content content-type="funder-identifier">
%</named-content>
%</funding-source>
%<award-id> </award-id>
%</award-group>
%</funding-group>
\begin{document}
\maketitle
% REQUIRED
\begin{abstract}
We provide a fast and scalable numerical algorithm for inferring the distributions of participant scores in a multi-party contest, under the assumption that each participant's score distribution is a translation of every other's. We term this the horse race problem, as the solution provides one way of assigning coherent probabilities to all outcomes, and of pricing arbitrarily complex horseracing wagers. However, the algorithm may also find use wherever winning probabilities are apparent, such as in e-commerce product placement, in web search, or, as we show, in addressing a fundamental problem of trade: who to call, based on market share statistics and inquiry cost.
\end{abstract}
% REQUIRED
\begin{keywords}
order statistics, rank statistics, trifecta, exacta, quinella, baseball pythagorean theorem, inverse problems, horse racing, handicapping, asset pricing, coherent probability, risk neutral pricing, web search
\end{keywords}
% REQUIRED
\begin{AMS}
62P20
\end{AMS}
\section{The discrete horse race problem}
\label{sec:problem}
In this section we define the discrete horse race problem, for which we present a solution in Section \ref{sec:solution}. The solution finds use in several horseracing-related analytical tasks, enumerated in Section \ref{sec:implications}. Each of those tasks is then shown, in Section \ref{sec:applications}, to bear essential similarity to commercial problems of greater economic significance.
Let \(X_1,...,X_n\) be discrete univariate random variables assumed to take values on a lattice of equally spaced points. Let \(X^{(k)}\) denote the $k$'th order statistic and in particular let $X^{(1)}$ denote the minimum. The random variable $X^{(1)}$ represents the winning score for a contest in which \(X_1,...,X_n\) represent the performances of contestants (and where a lower score is considered better than a higher one).
We define the implied state price for each contestant $i$ as follows.
\begin{equation} \label{eqn:state}
p_i = E \left[ \frac{\iota_{ X_i=X^{(1)}}}{\sum_k \iota_{X_k=X^{(1)}}} \right]
\end{equation}
where $\iota$ is the indicator function. In words this is equal to the expected payout in a game where we get one dollar if our contestant wins, fifty cents if it ties for the lowest score with one other contestant, one third of a dollar if it ties for lowest with two other contestants, and so forth.
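To make the definition concrete, the following minimal Python sketch (illustrative only, using hypothetical binomial performances on the integer lattice) estimates the state prices of equation \ref{eqn:state} by Monte Carlo, splitting the dollar equally among dead heats.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def state_prices_mc(samplers, n_paths=100_000):
    """Monte Carlo estimate of the state prices: one dollar to the
    winner, split equally among contestants tying for the minimum."""
    n = len(samplers)
    payout = np.zeros(n)
    for _ in range(n_paths):
        scores = np.array([draw() for draw in samplers])
        winners = np.flatnonzero(scores == scores.min())
        payout[winners] += 1.0 / len(winners)  # dead-heat split
    return payout / n_paths

# three hypothetical contestants, lower scores being better
samplers = [lambda a=a: a + rng.binomial(20, 0.5) for a in (0, 1, 2)]
print(state_prices_mc(samplers))
\end{verbatim}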
Let \(f_i\) denote the $i$'th density function
\begin{equation}
f_i(j) = Prob(X_i=j)
\end{equation}
Parenthetically, one might instead write $Prob(X_i=\nu_j)$ where $\nu_j$ is the $j$'th evenly spaced value in the support of $f_i$. However, until we reach the point where performance takes on physical meaning (i.e. becomes more than a device for determining probabilities on orderings), there is no loss of generality in assuming the lattice is the integers, since $f^*$ and the lattice unit of measurement $\nu_{i+1}-\nu_i$ can be scaled simultaneously.
To set up the discrete horse race problem we will assume that any two $f_i$ and $f_j$ are almost the same to within translation. Fix at the outset some distribution $f^*$. All horse performance distributions will be {\em approximate} translations of $f^*$. To make this assumption precise we define a translation operator on distributions on the natural numbers as follows. For any $f(\cdot)$ and any $a \in \mathbb{R}$ we define the shifted distribution $f^{\rightarrow a}(\cdot)$.
\begin{equation}
f^{\rightarrow a}(j) = (1-r) f^{\rightarrow \floor{a}}(j) + r f^{\rightarrow \floor{a}+1}(j)
\end{equation}
where $r=a-\floor{a}$ is the fractional part of the shift $a$ obtained by subtracting the floor. Here the shift operator $f^{\rightarrow \floor{a}}$ for integer shifts is, as expected, a shifting of the distribution to the right by an integer number of steps. Formally, $f^{\rightarrow \floor{a}}(j):=f(j-\floor{a})$. If $f(\cdot)$ is the distribution for $X$ then $f^{\rightarrow a}$ approximates the distribution of $X+a$ (which cannot be exactly represented on the lattice unless $a$ happens to be an integer). Other definitions of translation could be supplied without materially changing what follows.
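A minimal sketch of this operator on a lattice represented as a numpy array follows (probability mass shifted off the end of the array is simply dropped, an assumption appropriate only when the lattice comfortably contains the support).
\begin{verbatim}
import numpy as np

def translate(f, a):
    """Approximate translation f -> f^{(->a)}: mix the two
    neighbouring integer shifts with weight r = a - floor(a)."""
    k = int(np.floor(a))
    r = a - np.floor(a)
    def integer_shift(g, m):
        out = np.zeros_like(g)
        m = int(np.clip(m, -g.size, g.size))
        if m >= 0:
            out[m:] = g[:g.size - m]
        else:
            out[:g.size + m] = g[-m:]
        return out
    return (1 - r) * integer_shift(f, k) + r * integer_shift(f, k + 1)
\end{verbatim}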
With this formality out of the way we can now define the discrete horse race problem. Given a distribution $f^*$ on the integers and a vector $\{p_i\}_{i=1}^n$ of state prices summing to unity, find a vector of offsets $(a_1,\dots,a_n)$ such that equation \ref{eqn:state} holds for every $i$ when the distribution of the $i$'th score $X_i$ is given by $f^{*\rightarrow a_i}$.
\subsection{The continuous horse race problem}
A continuous horse race problem close in spirit to the discrete problem we consider assumes a density $f^*$ with distribution $F^*$ and seeks parameters $(a_1,\dots,a_n)$ modifying $f^*$ in some way (typically by scale or location) to satisfy
\begin{equation}
\label{eqn:continuous}
p_i = \int f^*(x;a_i) \prod_{j\neq i} \left( 1- F^*(x;a_j)\right) dx
\end{equation}
for some specified winning probabilities $p_i$. For example the Harville model considered in Section \ref{sec:harville} assumes all horses share the same distribution up to a scale parameter. In this paper we take interest in the special case of the continuous problem where $f^*(\cdot,a_i)$ is a translation by $a_i$ of a canonical horse performance distribution. The discrete horse race problem, as we have defined it, is an approximation to the continuous problem.
\subsection{Contribution}
By providing a fast solution to the {\em general} discrete problem for any $f^*$, and thereby an approximation to the general continuous problem also, we depart substantially from the literature which has, to date, either used known distribution functions $f^*$ with convenient analytic properties; or made approximations that are somewhat crude; or restricted attention to two horse races. Also, many prior approaches model probabilities in such a way that performance is not amenable to any well motivated updating procedure after the race has begun.
\subsection{Wagers}
To dispense with some jargon, a quinella is a bet on which two horses will finish a race first, in any order. An exacta is a similar wager in which finishing order matters. A trifecta is a wager on which three horses will finish in the top three, with order mattering. When a horse is removed from a race prior to its running, it is said to be scratched. Wagers placed on sporting events {\em after} the race has begun are said to be in-the-run wagers. There are bets settled on even more precise outcomes, such as the top four or top six finishers, though their names tend to vary more from place to place. Quinellas, exactas, trifectas and their ilk are termed exotic or combinatorial bets in this paper. A show bet is a bet that a horse will finish in the top three, a place bet refers to a top two finish.\footnote{We adopt U.S. conventions. In Australia and elsewhere a place bet refers to a top three finish typically, assuming eight starters or more.} An each-way bet is a win bet and a show bet bundled together.
\subsection{Coherent pricing}
The horse race problem is motivated by the case where $p_i$ is revealed. We might view this as revelation of some small part of the joint distribution applying to all horses - under some probability measure that need not correspond to reality. Rather, the measure might be regarded as risk neutral probability, sometimes referred to as Dutch book probability, or merely as a measure that helps guide relative value decisions. A gambler has a choice of many different types of wagers, so given a view of objective reality it is by no means a trivial task to construct a portfolio of wagers that best take advantage of their homework. A positive view on a horse might best be expressed as a collection of trifecta bets in which the horse finishes second, for example.\footnote{Some racetracks report the amount of money invested in trifectas that involve horse $i$ finishing second or third.}
\subsection{Harville's model and Luce's axiom of choice}
\label{sec:harville}
No comment on the mathematics of pricing horseracing wagers, especially combinatorial wagers, is complete without mention of the longstanding benchmark named for Harville \cite{Harville1973AssigningCompetitions}. Viewed from the perspective of running times, this benchmark is, as noted above, a scale family rather than a translation family. It assumes that time to finish a race is exponentially distributed and all horses share the same location parameter (which can be set to zero without loss of generality). The inference of ability from a collection of winning probabilities $\{p_i\}_{i=1}^n$ is trivial if we identify ability $a_i$ with some function of the hazard rate $h_i$ of the exponential distribution for the $i$'th horse. Clearly $h_i$ is proportional to $p_i$. In particular, for a two horse race, the probability that the horse with hazard rate $h_i$ beats the horse with hazard rate $h_j$ is given by $\frac{h_i}{h_i + h_j}$, and we recover one version of the Bradley-Terry model for pairwise comparisons \cite{Bradley1952RankComparisons}.
Exponential race times, or constant plus exponential race times, may not seem immediately compelling as a model for performance.\footnote{One can try to imagine certain games where exponential arrival is plausible and related to ability - such as darts played at an extremely high level.} Our job here is not to take sides on the plausibility of Harville's model but rather to provide the reader with a much more flexible framework in which any $f^*$ may be used. However, the exponential assumption is certainly tractable. The probability of coming second (or third, or fourth) conditional on some horses having already finished remains proportional to the win probability $p_i$. Thus to price an exacta it suffices to use:
\begin{equation}
\label{eqn:conditional}
Prob(Horse\ j\ second | Horse\ k\ wins) = \frac{p_j}{\sum_{i \neq k}p_i }
\end{equation}
which by repeated application gives rise to probabilities for all permutations (trifectas and so forth) and, by summation, pricing of quinellas, show and place bets.
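For concreteness, a short sketch of this repeated application (with hypothetical win probabilities) follows; summing over permutations recovers quinella, show and place prices.
\begin{verbatim}
from itertools import permutations

def harville(order, p):
    """Probability that the horses in `order` fill the first
    len(order) places, by repeated use of the conditional rule."""
    prob, remaining = 1.0, 1.0
    for i in order:
        prob *= p[i] / remaining
        remaining -= p[i]
    return prob

p = [1/2, 1/3, 1/6]                      # win probabilities
exacta = harville([1, 2], p)             # horse 2 wins, horse 3 second
quinella = exacta + harville([2, 1], p)  # either order
trifectas = {o: harville(list(o), p) for o in permutations(range(3))}
assert abs(sum(trifectas.values()) - 1.0) < 1e-12
\end{verbatim}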
It has been our experience that equation \ref{eqn:conditional} is taken as an ansatz far more often than the underlying generative model (exponential running times) is consciously considered. Therefore, the use of Harville's formula might be better recognized as an application of the more general principle known as Luce's axiom of choice. Given a set of alternatives including choices $i$ and $j$, this axiom posits that the relative probability of choosing $i$ over $j$ is independent of the other alternatives \cite{Cane1960IndividualAnalysis.}. In our usage, we have
$$
\frac{ P(i \mid \Omega) }{ P(j \mid \Omega) } = \frac{ P(i \mid \Omega') }{ P(j \mid \Omega') }
$$
where $\Omega$ represents the full field for the horse race and $\Omega'$ some subset of horses (after we remove the winner assumed not to be $i$ or $j$, say, or the winner and second placegetter, and so on).
Luce's axiom can be viewed as a restatement of the Harville model (unless there is use for a model of running times that is separately motivated beyond the desire to price combinatorial outcomes, which seems somewhat unlikely). Luce's axiom also suggests limitations in the Harville model. It may seem unlikely, for example, that revelation of the winner of a race would not also reveal some information impacting the ratio $\frac{ P(i \mid \Omega) }{ P(j \mid \Omega) }$. For example if a horse deemed very unlikely to win succeeds against all odds (literally), it may suggest that something unexpected has occurred. The entropy of the race for second place may be higher than the axiom of choice suggests. In particular, Harville may tend to overestimate the probability of a highly favoured horse finishing in the top three, a topic we investigate in Section \ref{sec:quarter}.
\subsection{Other distributions}
The fact that Luce's axiom of choice is consistent with exponential running times might further suggest, to some readers, that the simple conditional calculations giving rise to Harville quinella, exacta and trifecta pricing might be substantially improved. Again, we are agnostic on the question of which $f^*$ is the more sensible a priori, preferring to leave the reader with a completely general framework with which to form their own opinion. However there have been numerous attempts to depart from Harville in the horseracing literature, and it is reasonable to assume that some of these are motivated by a desire for more realistic performance distributions.
The case of normal $f^*$ in a translation family (as per equation \ref{eqn:continuous}) is considered by \cite{Henery1981PlaceRaces} \cite{Henery1981PermutationRaces} and approximate analytical results derived by Taylor expansion:
$$
a_i = \frac{ (n-1) \phi\left( \Phi^{-1}(\frac{1}{n}) \right) \left(\Phi^{-1}(p_i)- \Phi^{-1}(\frac{1}{n})\right) }{ \Phi^{-1}\left( \frac{i-\frac{3}{8}}{n + \frac{3}{4} } \right) }
$$
where $\phi$ and $\Phi$ are the standard normal density and cumulative distribution respectively. A theoretical comparison between Harville and Henery's approach is made in \cite{Lo2006ACompetitions}. Other suggestions are made in \cite{Stern2010EstimatingFormulas} who previously noted the tractability of the case of Gamma distributed $X_i$ in \cite{Stern1990ModelsPermutations}.
The case of two horse races brings in literature from other sports. Baseball fans have long used the Pythagorean formula
\begin{equation}
\label{eqn:pythagorean}
\overbrace{p}^{win\ probability} = \frac{\overbrace{RS^\gamma}^{season\ runs}}{RS^\gamma+\underbrace{RA^\gamma}_{season\ runs\ against}}
\end{equation}
so named because discoverer Bill James used the exponent $\gamma=2$. Relatively recently this was shown to be consistent with Weibull distributed scores \cite{Miller2012TheHockey}.
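As a worked example (hypothetical season totals):
\begin{verbatim}
def pythagorean(rs, ra, gamma=2.0):
    """Win probability from season runs scored (rs) and runs
    allowed (ra) under the Pythagorean formula."""
    return rs ** gamma / (rs ** gamma + ra ** gamma)

print(pythagorean(800, 700))        # ~0.566 with Bill James' gamma = 2
print(pythagorean(800, 700, 1.8))   # ~0.560 with the empirical exponent
\end{verbatim}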
\subsection{Ad hoc approaches}
Anecdotally, some professional handicappers have attempted to improve Harville by replacing $p_i$ with $p_i$ raised to a power $\beta$ (then normalized across horses).\footnote{As an aside, a different use of a power transform would apply it directly to running time distributions. For instance raising an exponentially distributed variable to the power $1/4$ results in a roughly normal distribution.} This is not unlike the literature devoted to empirical study of the Pythagorean coefficients (or generalizations) that apply to baseball ($\gamma \approx 1.8$), association football \cite{Hamilton2011AnFootball}, hockey \cite{Miller2012TheHockey}, basketball \cite{Alamar2016BasketballAnalysis}, tennis \cite{Kovalchik2016IsTennis} and specific aspects of games such as goal-tending \cite{Shea2012CalculatingGoaltenders}. However none of these approaches presents a coherent, compelling framework - even in the rare instances where they accommodate $n>2$ entrants and a coherent set of probabilities (even then, one must be convinced of the correctness of exponential (say) or Weibull (say) performance distributions before any model estimation is performed).
\subsection{Multivariate optimization}
Naturally the horse race problem, either discrete or continuous, can be viewed as a straight-up optimization of $\{a_i\}_{i=1}^n$ where the objective function is some measure of aggregate discrepancy between implied and provided win probabilities. Numerical approaches to this calibration problem for general $f^*$ and an arbitrary number of horses don't appear to have attracted a lot of academic attention - though it would be surprising if practitioners were not availing themselves of this possibility. In \cite{cotton_blog} a general purpose solver is employed to find all $n$ of the $a_i$. It is shown that reducing the dimension of the search space from $n-1$ to $d \ll n$ by assuming some regularity on the $a_i$ can reduce computation time by two orders of magnitude for large $n \approx 200$. However a search in a space of dimension $d \approx 15$ is required to reduce errors in $p_i$ to negligible levels.
Optimization packages have been employed in closely related work. Though not intended for exactly the same purpose, the Plackett-Luce package \cite{plackettlucepkg} is another example of the use of customized multivariate optimization to accomplish calibration. However the iterative method presented herein differs markedly, from a practical perspective, from all such approaches. It seems safe to say that any approach reducing the horse race problem to Broyden-Fletcher-Goldfarb-Shanno (or some other general purpose search routine) is unlikely to compete with the trivial one dimensional interpolation used in our method. Optimization packages have advanced in efficacy, but for problems involving hundreds of variables one can expect at least hundreds of iterations. Hand tuning the problem may not materially change the scaling. For instance \cite{cotton_blog} slows down considerably as the number of horses climbs beyond a few dozen. Similarly, most approaches suffer the curse of dimensionality. We shall see that the method presented herein does not.
\section{A solution to the discrete horse race problem}
\label{sec:solution}
We now present a simple, fast and surprisingly scalable calibration procedure solving the discrete horse race problem.
\subsection{Notation}
Recall that we have used $f^*$, and $f_i = f^{*\rightarrow a_i}$ to denote a common density and its approximate translations by the $a_i$. The $a_i$ could be called ability, if we are comfortable with lower ability being better (for instance lower golf handicaps are better). We use $f_i$ as shorthand for the translated density. We will also be interested in the distribution of the first order statistic $f^{(1)}$ of a group of horses - either the entire race or some subset $A \subset \{1,\dots,n\}$ thereof, denoted $f^{(1)}_{A}$. We use the shorthand $f_{\hat{i}}$ to refer to the first order statistic when $A$ is the set of all horses except $i$. As only the first order statistic is used here, we will sometimes drop the superscript $(1)$ to avoid cluttering the formulas.
For any density, whether referring to an individual horse or the first order statistic of a group, it will be convenient to define
\begin{equation}
\label{eqn:survival}
S(j) = Prob( X > j ) = 1 - F(j)
\end{equation}
to be the survival function equal to one minus the cumulative distribution function. By independence, survival for a group (meaning the $S$ that corresponds to the density $f^{(1)}$ for the first order statistic) is the product of survival of each member.
\begin{equation}
\label{eqn:ss}
S_A(j) = \prod_{i \in A} S_i(j)
\end{equation}
The discrete horse race demands a treatment of ties. A key idea is maintenance of the conditional expectation of ties given a winning score. Define the {\em conditional multiplicity}, briefly multiplicity, to be the expected number of variables (horses) that tie for the lowest value, assuming the lowest value is $j$:
\begin{equation}
m(j) = E \left[ \sum_{i=1}^n \iota_{X_i=j}| X^{(1)}=j \right]
\end{equation}
where \(\iota\) is the indicator function.
\subsection{Combining two races}
We observe that for any group of horses $A$ the three-tuple $\Upsilon_A = (f_A(), S_A(), m_A())$ can now characterize how the horses act as one, at least as far as first place finishes are concerned.\footnote{Strictly speaking the inclusion of both $f_A$ and $S_A$ might be considered redundant on account of Equation \ref{eqn:survival}, but the formulas are much cleaner when both $f$ and $S$ appear.} As a special case, a single horse can be considered a group unto itself, with conditional multiplicity identically one for every lattice point $j$.
Next consider what happens to the distribution of the winning performance $f=f^{(1)}$ and the multiplicity $m$ when two groups of horses, characterized in this manner, are combined into a single race. The operation of taking a union of two groups of horses has a corresponding operation on $\Upsilon$. This operation is straightforward for $f$ and $S$, because $S$'s multiply as per Equation \ref{eqn:ss}, and $S$ then determines $f$ via $F$, as per Equation \ref{eqn:survival}. Estimating the number of ties conditioned on a winning score is a little more involved. The multiplicity of a union of two disjoint collections of horses can be estimated as a weighted combination of multiplicities:
\begin{equation}
\label{eqn:two}
m_{A\cup B}(j) = \frac{ m_A(j) f_A(j) S_B(j) + (m_A(j) + m_B(j)) f_A(j) f_B(j) + m_B(j) f_B(j) S_A(j) }
{ f_A(j) S_B(j) + f_A(j) f_B(j) + f_B(j) S_A(j) }
\end{equation}
where, as a reminder, in our abbreviated notation $f_A=f^{(1)}_A$ refers to the density of the first order statistic for the first group of horses, and so on. In the numerator the first term is the multiplicity of the first group of horses weighted by the probability that all first placegetters derive from this group. The third term is the converse. The second term is the sum of multiplicities weighted by the probability that the groups tie.
Equation \ref{eqn:two} can be applied successively, each time adding one horse, until the survival $S$ and multiplicity $m$ of the race is known. This gives rise to Algorithm \ref{alg:multiplicity}.
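A minimal numpy sketch of this accumulation (illustrative, and consistent with the equations above rather than a transcription of the reference implementation) follows.
\begin{verbatim}
import numpy as np

def survival(f):
    """S(j) = P(X > j) on the lattice, from a density f."""
    return 1.0 - np.cumsum(f)

def density_from_survival(S):
    """f(j) = S(j-1) - S(j), with the convention S(-1) = 1."""
    return np.append(1.0, S[:-1]) - S

def combine(fA, SA, mA, fB, SB, mB):
    """Union of two disjoint groups: survivals multiply, and
    multiplicities combine as the weighted average above."""
    num = mA * fA * SB + (mA + mB) * fA * fB + mB * fB * SA
    den = fA * SB + fA * fB + fB * SA
    m = np.divide(num, den, out=np.ones_like(num), where=den > 0)
    S = SA * SB
    return density_from_survival(S), S, m

def race(fs):
    """Fold the horses in one at a time, as in the algorithm."""
    f, S, m = fs[0], survival(fs[0]), np.ones_like(fs[0])
    for fi in fs[1:]:
        f, S, m = combine(f, S, m, fi, survival(fi), np.ones_like(fi))
    return f, S, m
\end{verbatim}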
\subsection{Removing one horse}
\label{sec:remove}
To briefly preview the solution to the discrete horse race problem we have in mind, we wish to consider each horse as facing off against all others - which is to say against a representative best performance distribution of the rest and a corresponding multiplicity. The set of all runners excluding the $i$'th horse will be characterized by $\Upsilon_{\hat{i}}$ as before (that is to say, by its first order statistic and multiplicity). This will allow us to estimate the currently implied state prices, denoted $\tilde{p}_i$, and it will suggest adjustments to the $\{a_i\}_{i=1}^n$ that will bring the $\tilde{p}_i$ into line with the supplied $p_i$. This leads to a very fast convergence with a close match achieved after just two or three iterations. Figure \ref{fig:covergence} demonstrates the drop in mean absolute error between implied and true relative ability.
But evidently if $n$ is very large we have no desire to recompute the multiplicity and first order statistic from scratch. Instead we carry from one iteration to the next a characterization $\Upsilon$ of the entire race. We require a method of quickly determining how this changes when one horse is removed.
\begin{figure}
\centering
\includegraphics[scale=0.4]{convergence.png}
\caption{Fast convergence of the iterative algorithm for relative horse ability. This example assumes a race with $100$ participants and a lattice of size $250$. A close match is achieved almost immediately. A perfect match is never achieved because the inversion algorithm is not supplied with the position of the first horse relative to the lattice, and there is a small quantization effect. This effect does not prevent close convergence to win probabilities, however - so this error will tend to overstate the numerical error that would be introduced in the pricing of quinellas or trifectas, say.}
\label{fig:covergence}
\end{figure}
To that end, we now note that all three quantities comprising $\Upsilon_{\hat{i}}$ can be quickly calculated from their unhatted, full-field equivalents. Once again the easiest calculation follows from \(S_i S_{\hat{i}} = S\) where $S$ is the survival function for all horses. Thus
\begin{equation}
\label{eqn:s}
S_{\hat{i}}(j) = \frac{ S(j)}{ S_i(j) }
\end{equation}
and once again, this determines the density.
Now appealing to equation \ref{eqn:two} with the singleton horse $A=\{i\}$ and $B$ the complement, the full field multiplicity is given by
$$
m(j) = \frac{m_i(j) f_i(j) S_{\hat{i}}(j)
+ \left( m_i(j) + m_{\hat{i}}(j) \right) f_i(j) f_{\hat{i}}(j) + m_{\hat{i}}(j) f_{\hat{i}}(j) S_i(j)}{f_i(j) S_{\hat{i}}(j) + f_i(j) f_{\hat{i}}(j) + f_{\hat{i}}(j) S_i(j)}
$$
which we can rearrange to determine the multiplicity with $i$ left out:
\begin{equation}
\label{eqn:inversion}
m_{\hat{i}}(j) = \frac{m(j) \left( f_i(j) S_{\hat{i}}(j) + f_i(j) f_{\hat{i}}(j)
+ f_{\hat{i}}(j) S_i(j) \right) - m_i(j) f_i(j) \left( S_{\hat{i}}(j) + f_{\hat{i}}(j) \right) }
{ f_{\hat{i}}(j) \left( f_i(j) + S_i(j) \right) }
\end{equation}
This establishes a fast way to remove one horse.
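A sketch of the removal step, mirroring the two equations above (singleton multiplicity taken to be identically one):
\begin{verbatim}
import numpy as np

def remove_horse(S, m, fi, Si, mi=None):
    """Survival, density and multiplicity of the field with horse i
    left out, by dividing out its survival and inverting the
    multiplicity combination."""
    if mi is None:
        mi = np.ones_like(fi)   # a lone horse has multiplicity 1
    # convention where Si == 0; immaterial since f_i vanishes there
    S_hat = np.divide(S, Si, out=np.zeros_like(S), where=Si > 0)
    f_hat = np.append(1.0, S_hat[:-1]) - S_hat
    num = m * (fi * S_hat + fi * f_hat + f_hat * Si) \
        - mi * fi * (S_hat + f_hat)
    den = f_hat * (fi + Si)
    m_hat = np.divide(num, den, out=np.ones_like(num), where=den > 0)
    return f_hat, S_hat, m_hat
\end{verbatim}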
\subsection{Approximate state prices}
The final component to the algorithm is a method of estimating the state price $p_i$ under the assumption that we have been supplied with the density $f_i$ for horse $i$ and also supplied a characterization of the remainder of the horses acting as one (i.e. $\Upsilon_{\hat{i}}$ which embodies the density and multiplicity). Recall the definition of the state price.
\begin{eqnarray*}
p_i & = & E \left[ \frac{ \overbrace{\iota_{ X_i=X^{(1)}}}^{horse\ i\ wins}}{\sum_k \iota_{X_k=X^{(1)}}} \right]
\end{eqnarray*}
We sum over all values $j$ on the lattice taken by $X_i$ and also over values $j'$ taken by $X_{\hat{i}}^{(1)}$. For brevity denote the denominator (the number of ties) by
$
M = \sum_k \iota_{X_k=X^{(1)}}
$
and the win indicator by
$
W = \iota_{ X_i=X^{(1)}}.
$
Then splitting off the diagonal terms (ties) we have by independence of performance:
\begin{eqnarray}
\label{eqn:approximation}
p_i & = & \sum_{j,j'} f_i(j) f_{\hat{i}}(j') E\left[ \frac{W}{M} | X_i=j,X^{(1)}_{\hat{i}}=j' \right] \nonumber \\
& = & \sum_{j} f_i(j) f_{\hat{i}}(j) E\left[ \frac{1}{M} |X_i=X^{(1)}_{\hat{i}}=j\right] + \sum_{j, j' > j} f_i(j) f_{\hat{i}}(j') \cdot 1 \nonumber \\
& = & \sum_{j} f_i(j) f_{\hat{i}}(j) E\left[ \frac{1}{M} |j\right] + \sum_{j} f_i(j) S_{\hat{i}}(j) \nonumber \\
& \approx & \sum_{j} f_i(j) f_{\hat{i}}(j) \frac{1}{E[M|j]} + \sum_{j} f_i(j) S_{\hat{i}}(j) \nonumber \\
& = & \sum_j f_i(j) \left\{ \frac{f_{\hat{i}}(j)}{1+m_{\hat{i}}(j)}+ S_{\hat{i}}(j) \right\}
\end{eqnarray}
The approximation in the second to last line, which replaces $E[1/M]$ with $1/E[M]$, could potentially be refined using a Jensen's inequality estimate. However the approximation applies only to ties, and since $M \geq 2$ whenever a tie occurs, the Jensen correction term is small.
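The final approximation is a one-line computation once $\Upsilon_{\hat{i}}$ is in hand; a sketch:
\begin{verbatim}
import numpy as np

def implied_state_price(fi, f_hat, S_hat, m_hat):
    """Approximate state price of horse i: the chance of beating
    the best of the rest outright, plus a dead-heat share when
    horse i ties the best of the others."""
    return float(np.sum(fi * (f_hat / (1.0 + m_hat) + S_hat)))
\end{verbatim}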
\subsection{Update step}
An iterative algorithm for determining $a_i$ from $p_i$ can now be given in Algorithm \ref{alg:inversion}. The approximate state price calculation is used to determine a lookup table from ability to win probability. Linear interpolation is used to adjust the abilities of the horses. Note that we compute equation \ref{eqn:s} first, then $f_{\hat{i}}$ from $S_{\hat{i}}$ using the definition \ref{eqn:survival}. An example of multiplicity $m_i(j)$ is plotted for a race with three entrants in Figure \ref{fig:multiplicity}. A similar example with $25$ participants is shown in Figure \ref{fig:multiplicity25}.
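A minimal sketch of the interpolation update (assuming the $(a_i, \tilde{p}_i)$ pairs, augmented by a few extra probes, have already been computed):
\begin{verbatim}
import numpy as np

def update_abilities(a, p_implied, p_target):
    """Treat the computed (ability, implied probability) pairs as
    a lookup table and invert it for each horse by piecewise
    linear interpolation. numpy.interp requires increasing
    abscissae, hence the sort by implied probability."""
    order = np.argsort(p_implied)
    return np.interp(p_target, p_implied[order], a[order])
\end{verbatim}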
\begin{figure}
\centering
\includegraphics[scale=0.4]{multiplicity_fixed.png}
\caption{Example of performance distributions (green, orange and blue), first order statistic (red) showing the distribution of the winning performance, and multiplicity (purple) for a race with three entrants. The red and purple curves are sufficient statistics when these three contestants are added to a race, at least as far as winning state prices $p_i$ are concerned. This example uses a lattice with $250$ points and
skew-normal performance distributions. Note the slightly fatter right tails, and the fact that multiplicity is numerically stable well beyond the region where it matters.}
\label{fig:multiplicity}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35]{multiplicities_25.png}
\caption{Example of performance distributions, first order statistics, and multiplicity (top pink curve) for a race with twenty five entrants. Only every fifth performance curve is plotted.}
\label{fig:multiplicity25}
\end{figure}
\begin{algorithm}[tb]
\caption{Computing the density and multiplicity for the least of $n$ discrete random variables}
\label{alg:multiplicity}
\begin{algorithmic}
\STATE {\bfseries Input:} Discrete densities $f_i: \mathbb{N}\rightarrow \mathbb{R}$, multiplicities $m_i:\mathbb{N}\rightarrow \mathbb{R}$ for $i \in \{1,\dots,n\}$
\STATE Initialize $S=S_1$ using equation \ref{eqn:survival}
\STATE Initialize $f=f_1$, $m=m_1=1$
\FOR{$i=2$ {\bfseries to} $n$}
\STATE $S \rightarrow S \cdot S_i$ using equation \ref{eqn:ss}
\STATE $f(j) = S(j-1)-S(j)$ for all $j$
\STATE Assign $m(j)$ for all $j$ using eqn \ref{eqn:two} with $f,m,S$ taking the role of group $A$ and $f_i,m_i,S_i$ the role of group $B$.
\ENDFOR
\RETURN $m$, $S$ and $f$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[tb]
\caption{Discrete Horse Race Calibration}
\label{alg:inversion}
\begin{algorithmic}
\STATE {\bfseries Input:} data $p_i$, discrete density $f$ representing common performance density up to translation.
\STATE Initialize $a_i=0$ for all $i$.
\REPEAT
\STATE Use Algorithm \ref{alg:multiplicity} to compute $S,m,f$ from densities $f_i:=f^{\rightarrow a_i}$
\FOR{$i=1$ {\bfseries to} $n$}
\STATE Compute $S_{\hat{i}}$, $m_{\hat{i}}$ using eqn \ref{eqn:inversion} and \ref{eqn:s}
\STATE Compute the implied state price $\tilde{p}_i$ using eqn \ref{eqn:approximation}
\ENDFOR
\STATE Add to the set of computed $(a,\tilde{p})$ pairs some additional implied state prices for a small number of evenly spaced, additional $a_i$ values (in order to create a stable lookup table from $a \rightarrow \tilde{p}$ used in the next step).
\STATE Assign new $a_i$ by piecewise linear interpolation.
\UNTIL $\tilde{p}_i \approx p_i$ for all $i$
\RETURN $a_i$'s
\end{algorithmic}
\end{algorithm}
\subsection{Reference implementation}
The novelty in this algorithm lies in the use of Equation \ref{eqn:inversion} to facilitate an extremely fast iterative sweep. This obviates any kind of multidimensional parameter search. There are some other details of the reference open source implementation that bear brief comment, though modifying them is unlikely to make nearly as big a difference. We refer the reader to the open source code provided \cite{pysport}. There is also a calculator made available at \cite{horseapi}.
The first comment is that, given the $\tilde{p}_i$'s, the updating of the $a_i$'s can be implemented in several ways. It was initially thought that some other monotonic transformation of $\tilde{p}_i$ would help, such as assuming log odds ratios were linear in $a_i$ (or some other approximation motivated by extreme value theory perhaps). But surprisingly, piecewise linear interpolation leads to almost immediate convergence. The reference implementation \cite{pysport} uses scipy's \texttt{interp1d} linear interpolation routine for simplicity.
Also, it should be clear that any reasonable convergence (i.e. termination) criterion might be used, such as requiring that $\tilde{p_i}$ differ from $p_i$ by less than a fixed threshold. Of course one might adopt some other criterion such as divergence of logarithms of probabilities, or of odds ratios, or standard measures of dispersion between discrete distributions.
\subsection{Convergence}
There are two notions of convergence that might be used to test Algorithm \ref{alg:inversion}. We can start with a collection of winning probabilities, infer relative ability, recompute implied probability of winning and compare. Alternatively, we can start with relative ability and complete a similar round trip. We have already noted the second possibility and, in Figure \ref{fig:covergence}, the rapid convergence after two iterations.
The matching of win probabilities is probably the more relevant of the two criteria, if determining pricing for exotic wagers is the intent. For $n=10$ the relative error after five iterations is on the order of 1 part in 10,000, meaning that a horse specified to have a ten percent chance of winning will have an inferred probability of winning that is within $1/100,000$ of $0.1$.
We tested the ability to reproduce probabilities to high precision even when race size is increased. Figure \ref{fig:accuracy} reports the mean absolute relative error defined as
$$
\text{relative error} = \frac{1}{n} \sum_i \frac{|\tilde{p}_i-p_i|}{p_i}
$$
where $p_i$ and $\tilde{p}_i$ are the given and calibrated probabilities respectively. This experiment suggests that relative errors increase with the logarithm of the number of runners. However even for races of size $n=60,000$ this corresponds to relative errors on the order of five one-hundredths of a percent.
\begin{figure}
\centering
\includegraphics[scale=0.4]{accuracy.png}
\caption{Relative error in calibrated versus supplied probability as a function of race size $n$. Even for a race with $60,000$ entrants, the relative error between supplied and calibrated win probability is on the order of hundredths of a percentage point.}
\label{fig:accuracy}
\end{figure}
We do not provide a proof of convergence, which is an acknowledged shortcoming at present. The definition we have used for translation in the formulation of the discrete horse race problem guarantees monotonicity in winning probability as we vary $a_i$. Thus, in establishing that an $a_i$ can be chosen to match $p_i$ holding all other $a_j$ constant (and absent degenerate cases such as the one noted below) the intermediate value theorem suffices. This suggests, but does not prove, convergence of the vector $\tilde{p}_i$ to $p_i$ using the accelerated algorithm we suggest (in which all $a_i$'s are altered simultaneously).
It is worth noting that this algorithm, or any other, cannot converge for degenerate choices of $f^*$ if no solution exists! For instance if $f^*$ takes only one value, it will evidently be impossible to calibrate a translation family matching any set of winning probabilities other than $(1,0,\dots, 0)$, $(\frac{1}{2},\frac{1}{2},0,\dots, 0)$ and so on.
\subsection{Computational performance}
\begin{figure}
\centering
\includegraphics[scale=0.4]{performance.png}
\caption{Computation time of the Python reference implementation of Algorithm \ref{alg:inversion}. Five iterations were used, and a lattice size of $500$. Running time is initially super-linear due to one part of the algorithm being relatively inefficiently implemented in Python, but for large $n$ running time is empirically sub-linear. Ability can be implied for a race with $n=1000$ horses in a few seconds, or $n=100,000$ if one has more time (despite the fact that the latter is ostensibly a $99,999$ dimensional optimization).}
\label{fig:one_thousand}
\end{figure}
We claim this algorithm is fast. For races up to size $n=1000$ the calibration using Algorithm \ref{alg:inversion} is accomplished in a few seconds, as illustrated in Figure \ref{fig:one_thousand}. As the obvious alternative would involve fitting a model with $999$ free parameters and thus a $999$ dimensional optimization, it is unclear how this should best be benchmarked.
A further difficulty in finding comparable algorithms arises from the scaling performance of Algorithm \ref{alg:inversion} well beyond $n=100$ or $n=1000$, which are typical upper limits of the dimensionality of problems for which most optimization routines are intended. In contrast Algorithm \ref{alg:inversion} is more akin to a fast inversion of a band matrix, or some other equally fast and scalable technique taking advantage of a special setup.
As demonstrated in Figure \ref{fig:one_thousand} the growth in computation time is approximately linear as $n$ rises (empirically slightly sub-linear, in fact). This is to be expected for Algorithm \ref{alg:inversion} given the efficiency of one dimensional linear interpolation in numpy, and the fact that Algorithm \ref{alg:multiplicity} is called only once per cycle.
Clearly the specific nature of the horse race problem or the pattern of the $a_i$ must be incorporated in some way, if we are to create any kind of benchmark to Algorithm \ref{alg:inversion} that has remotely comparable performance. Any alternative technique that calls into a multi-dimensional optimization routine at some point will surely fail to match the linear performance of Algorithm \ref{alg:inversion}.\footnote{Potentially, swarm optimization or other techniques designed for very high dimensional problems might be used, but are likely to be slower by many orders of magnitude by the time we get to $n=1000$, never mind $n=100,000$.
} For this reason we do not know what a reasonable alternative might be, or, dare we say, how to arrange an interesting horse race between horse race calibration algorithms.
As noted, we previously attempted to use a low dimensional parametrization of the $a_i$ together with general purpose solvers in \cite{cotton_blog}, and this did lead to order of magnitude improvements - though still nothing remotely comparable to Algorithm \ref{alg:inversion}. A further drawback with this prior work is that it did not lead to tight calibration of the probabilities.
\section{Implications}
\label{sec:implications}
In this section we establish that some models made practical by Algorithm \ref{alg:inversion} will differ markedly from prior approaches when it comes to assigning probabilities, and therefore the space of models opened up is important. However we also explain why that is not the only implication. The use of explicit performance models (parametrized by a performance distribution $f^*$ and relative abilities) is not widespread but is likely to become so as in-the-run wagering grows in popularity.
There are other ways in which a fast algorithm for solving for the $a_i$'s, when the $p_i$'s and $f^*$ are known, can further research. For example, any choice of $f^*$ implies a likelihood on race outcomes, and thereby different choices of $f^*$ can be compared by their empirical performance over time.
\subsection{Deviation from axiom of choice in exacta pricing}
We first demonstrate that the choice of $f^*$ greatly influences probabilities of joint outcomes - such as the probability of two particular horses finishing first and second in a given order. It follows from this that it is dangerous to assume exponential running times, or equivalently assume Luce's axiom of choice holds, even if runners are assumed to be independent.
Starting with a three horse race where winning probabilities are $p_1=1/2$, $p_2=1/3$ and $p_3=1/6$ we compute the extent to which the conditional probability of finishing second deviates from the axiom of choice. Table \ref{tab:exacta} reports the percentage differences in probability for each of the six possible outcomes. The increase in likelihood can be as much as thirty percent.
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
First $\backslash$ Second & $1/2$ & $1/3$ & $1/6$ \\
\hline
$p_1=1/2$ & & -8.10 & 16.3 \\
$p_2=1/3$ & -10.2 & & 30.5 \\
$p_3=1/6$ & -5.70 & 8.50 & \\
\hline
\end{tabular}
\caption{Percentage increase in exacta probability relative to Harville's formula for a three horse race with winning probabilities $1/2$, $1/3$ and $1/6$, assuming normally distributed performance. Rows indicate the winner and columns the second place finisher. For example the chance of the least-favored horse finishing second behind the second-favored horse is thirty percent higher in this model than the axiom of choice (Harville model) would suggest. Harville predicts a probability of $\frac{1/3 \times 1/6}{1-1/3} = 1/12$ for this ``exacta'' outcome. Normal independent performance suggests a probability closer to $1/9$.}
\label{tab:exacta}
\end{table}
We also repeated that experiment for a larger field (using odds for the 2020 Kentucky Derby as a guide). In the interest of space we do not report the exacta matrix, but we found even bigger discrepancies. Exacta probabilities involving horses with odds over $100/1$ differed by a factor of $2$ compared to their Harville equivalents. This effect was even more pronounced when normally distributed performance was replaced by skew-normal.
\subsection{Scratchings}
\label{sec:scratching}
When a horse is removed from a race at a late stage, it is the custom to refund wagers. However this is unfair to bookmakers who have offered fixed prices on the other horses, whose winning probabilities have now improved. The customary remedy is an after-the-fact reduction in payouts (i.e. odds). This reduction is based on Luce's axiom of choice, which is to say a simple rescaling. We would submit that a more equitable approach would involve a computation of reductions using some plausible choice of $f^*$, and Algorithm \ref{alg:inversion}.
\subsection{Deviation from the bookmakers' quarter rule for show wagers}
\label{sec:quarter}
A longstanding practice amongst bookmakers sets the odds for a show bet at one quarter of the odds for a win bet. A show wager, in this note, refers to a bet that a horse will finish first, second or third.\footnote{We use U.S. terminology where place refers to top two and show to top three. In Australia place refers to top three in fields with eight or more runners.} The quarter rule is not always offered for show betting in isolation, but for a combination of a win and a show bet (called an each-way bet). The return on longer priced horses is typically poorer than for shorter priced, so the rule of a quarter can sometimes be viewed as an enticement to enter an each-way bet. Setting aside these issues and the bookmaker's typical profit (greater for less favored horses), the quarter rule implies the following show probability:
$$
P(show) = \frac{1}{o/4 + 1} = \frac{4p}{1+3p}
$$
where $p=1/(o+1)$ is the probability implied by bookmaker odds quoted as $o:1$. For example, an even money favourite with $o=1$ is assigned odds of $1/4$ of finishing in the top three (meaning an eighty percent chance). We compared this heuristic against show probabilities implied by skew-normal performance distributions.
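For reference, the rule is a one-liner (hypothetical odds shown):
\begin{verbatim}
def quarter_rule_show_prob(o):
    """Show probability implied by the rule of a quarter:
    win odds o:1 imply show odds (o/4):1."""
    return 1.0 / (o / 4.0 + 1.0)

print(quarter_rule_show_prob(1.0))    # even-money favourite: 0.8
print(quarter_rule_show_prob(100.0))  # a 100/1 longshot: ~0.038
\end{verbatim}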
Once again using probabilities inspired by the Kentucky Derby long range odds, Table \ref{tab:show} reports the normalized bookmaker odds, quarter rule show odds, quarter rule show probabilities and skew-normal model show probabilities for all runners. As is readily apparent (and not altogether surprising) the presence of a pronounced favourite makes it perilous for bookmakers to apply the quarter rule. The probability of less favored runners finishing in the top three is substantially higher than the rule of a quarter would suggest.
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& Bookmaker & Bookmaker & Bookmaker & Skew model & \\
Rank & Win & Show & P(show) & P(show) & Difference (\%) \\
\hline
0 & 1.382&0.345&0.743&0.651&-12.4 \\
1 & 8.423 & 2.106 & 0.322 & 0.311 & -3.5 \\
2 & 10.442 & 2.61 & 0.277 & 0.271 & -2.2 \\
3 & 13.807 & 3.452 & 0.225 & 0.23 & 2.2 \\
4 & 27.268 & 6.817 & 0.128 & 0.144 & 12.6 \\
5 & 27.268 & 6.817 & 0.128 & 0.144 & 12.4 \\
6 & 33.998 & 8.5 & 0.105 & 0.124 & 18.0 \\
7 & 38.037 & 9.509 & 0.095 & 0.113 & 18.7 \\
8 & 40.729 & 10.182 & 0.089 & 0.109 & 22.0 \\
9 & 44.767 & 11.192 & 0.082 & 0.1 & 22.5 \\
10 & 54.19 & 13.547 & 0.069 & 0.087 & 26.9 \\
11 & 54.19 & 13.547 & 0.069 & 0.088 & 28.3 \\
12 & 67.651 & 16.913 & 0.056 & 0.073 & 31.6 \\
13 & 74.381 & 18.595 & 0.051 & 0.069 & 34.7 \\
14 & 89.188 & 22.297 & 0.043 & 0.059 & 36.6 \\
15 & 89.188 & 22.297 & 0.043 & 0.059 & 38.4 \\
16 & 108.033 & 27.008 & 0.036 & 0.051 & 43.8 \\
17 & 108.033 & 27.008 & 0.036 & 0.052 & 44.4\\
18 & 134.955 & 33.739 & 0.029 & 0.043 & 49.5\\
19 & 134.955 & 33.739 & 0.029 & 0.043 & 49.8\\
20 & 134.955 & 33.739 & 0.029 & 0.043 & 49.4\\
21 & 168.608 & 42.152 & 0.023 & 0.036 & 56.1\\
22 & 168.608 & 42.152 & 0.023 & 0.036 & 53.5\\
23 & 202.26 & 50.565 & 0.019 & 0.032 & 63.6 \\
24 & 202.26 & 50.565 & 0.019 & 0.032 & 64.5 \\
\hline
\end{tabular}
\caption{Comparison between show probabilities (finishing in the top three) given by the bookmakers' rule of a quarter and those computed using skew-normal performance distributions. We use ordered, normalized early bookmaker odds for the Kentucky Derby (as of August 22, 2020). The bookmaker convention applied to the favorite implies that a bookmaker risks $1.382$ for every dollar a patron risks when they take a win bet, whereas for a show bet the bookmaker risks $0.345$ to the patron's dollar (one quarter). Break-even probabilities are shown for show betting. The skew-normal performance model is calibrated to the bookmaker implied probabilities (not shown) using Algorithm \ref{alg:inversion}. The data suggests that less favored horses have a much higher chance of finishing in the top three than the quarter rule would suggest. However this discrepancy might be reduced or eliminated entirely were we to perform a power transform on horse probabilities prior to normalization, or otherwise account for lower returns on longer priced horses (known as the longshot effect). Moreover, show bets are often only available when bundled with win bets. For these reasons this table does not necessarily suggest positive returns are available, despite the large discrepancy. In generating this table a parameter $\alpha=2.0$ for the skew-normal distribution was used.}
\label{tab:show}
\end{table}
\subsection{A pricing measure for all wagers determined by finish positions}
Because the distribution of the individual $X_i$ is not assumed but left general, the user of Algorithm \ref{alg:inversion} is granted great flexibility. In particular, varying $f^*$ generates different families of pricing schemes for quinellas, trifectas and other types of exotic wagers - all of which are consistent with observed win prices. Some of those pricing methodologies may be more accurate than others, possibly much more accurate than Harville.
Indeed, if we recklessly make the assumption that the win market for horse racing contains most of the information (and quinellas, exactas and all other varieties of combinatorial bets are mere derivatives of the win market) then Algorithm \ref{alg:inversion} defines a coherent and in some sense comprehensive solution for the pricing of all possible claims which are contingent on the realized rank order of finishers.
We hastily add, in the interest of the reader's financial well-being, that there is strong reason to believe not all information is absorbed into win prices. One reason is that professional handicappers achieve more leverage in combinatorial markets, and thus trifecta markets can provide additional information (and might even be more efficient than win markets). That is why we have suggested that varying $f^*$ in order to calibrate to additional market information might be a worthwhile improvement.
\subsection{... and some that are not.}
With further calibration of the scale of $f^*$ (leaving probabilities for rank determined events unchanged) it will be possible to price derivatives which settle on outcomes other than finishing position. For example, with an empirically estimated scale parameter it is possible to price wagers that settle based on the relative finishing times of two horses. These are known as spread bets.
\subsection{Feature generation}
The reader intent on tackling the horse racing markets might wish to view this contribution as a technique for generating features $a_i$ from market prices $p_i$. Those features can then be used in conjunction with additional data to produce more accurate estimates. For instance the $p_i$ might enter an empirical iterative, probit or logistic regression model as in \cite{compleat}, \cite{Lo2006ACompetitions}, \cite{Bolton2008SearchingRaces}, conveyed verbally in \cite{probit_talk} and anecdotally used widely by successful handicapping syndicates.
Using transforms of market probabilities as predictive features is also considered for soccer forecasting \cite{Wunderlich2018TheSoccer}. As a more general comment, it is well appreciated that mapping probabilities $p_i$ onto quantities that are closer to being normally distributed can allow the researcher to exploit a wider range of methods such as Kalman filtering or Gaussian processes. Beyond wagering this may also assist the econometric literature that uses racetracks as laboratories \cite{Camerer2002CanBetting} to draw wider conclusions about asset markets, or as a lens into human behaviour such as risk preference and perceptions of probability \cite{ErikSnowberg2010ExplainingMisperceptions}, \cite{Williams2003WhyMarkets}, \cite{Thaler2012Anomalies:Lotteries}, \cite{Asch2002MarketCorrection}.
\subsection{Ratings and implied observations}
Chess players have long been rated using the Elo system - which is roughly consistent with normal $f^*$ or logistic $F^*$ depending on the implementation. Normally distributed performance and Bayesian updating on a graph has been used in improvements to the Elo system \cite{2018TrueSkill:System}. Typically these rating systems use updates based on actual realized results. However there is no reason why market information cannot also be incorporated. When implied abilities $\{a_i\}_{i=1}^n$ are interpreted as a result, they too can be fed into any rating system. The importance of price implied updating versus result implied updating can be estimated from empirical data.
\subsection{Pre-post price fluctuations}
Another practical application of the transform $\{p_i\} \mapsto \{a_i\}$, that we have established in Algorithm \ref{alg:inversion}, is the modeling of the time varying dynamics of prices in a multi-entry contest {\em prior} to the event. Pre-event bookmaker and parimutuel price fluctuations are studied intently by professional handicappers and our approach provides them with a different, and perhaps more convenient, parameterization of prices. For instance $f^*$ may be chosen to be normal, lognormal or some other convenient distribution admitting a correspondingly convenient stochastic process on the $a_i$. This in turn confers a stochastic process on $p_i$.
\subsection{In-the-run wagering}
\label{sec:intherun}
While relative ability might be a convenient way to consider pre-event price fluctuations, the use of Algorithm \ref{alg:inversion} or an alternative is even more strongly motivated, dare we say mandatory, if one is required to model price changes after the event has begun. The reader will note that relative ability is arbitrary up to a scaling factor, since the performance distribution $f^*$ can be scaled. By choosing this scaling carefully, $f^*$ may be chosen to relate directly to measurable progress in a contest (such as score to par in a golf tournament) and, thereby, price estimates or probabilities may be updated.
\subsection{Possible extensions}
As a technical comment, we assumed in our description of the algorithm that each variable took on the same distribution $f^*$ up to translation. The careful reader will observe that this is leveraged only in the interpolation step, and that with care it may be possible to generalize. For example a large number of horses might share a small number of distinct choices of $f^*$.
A further limitation is that horse performances have been assumed to be independent. This may not prevent the approach from improving on Harville, where horses are also independent, but one can certainly question the generality of this assumption. For example in the ``straight six'' race held at Flemington Racecourse the field will sometimes bifurcate into two packs, one running the inside rail and one the outer. Differing track conditions will induce strong dependency. Another source of dependency is preference for pace.
It may be possible to generalize Algorithm \ref{alg:inversion} to allow for probabilities that are influenced by a single common factor (such as race pace) - for instance by the use of a Normal Copula with constant off-diagonal correlation matrix applied to win probabilities. This will, however, be more computationally expensive.
\section{Commercial Uses}
\label{sec:applications}
We now leave behind wagering - though the title of this section is not intended to suggest that horseracing markets are not economically significant in their own right (on the contrary, it is not unusual for hundreds of millions of dollars to be bet on a single race meeting).\footnote{In Hong Kong, Sha Tin and Happy Valley racecourse regularly report meeting volumes between \$$100$m and \$$200$m USD. The Melbourne Cup volume is estimated at \$$350$m AUD.} However, horseracing volumes pale in comparison with equity block orders, over the counter financial markets and the sum over all categories of trading activity in art, materials, apartments and illiquid goods of all varieties. We shall demonstrate an application of Algorithm \ref{alg:inversion} to trade of most kinds.
Algorithm \ref{alg:inversion} is also a basic tool for the study of order statistics and rankings. It will find many applications far from the racetrack any time empirical evidence for ``ability'', broadly defined, manifests as probability that one item out of several presented is chosen.
\subsection{Trading and the cost of inquiry}
\label{sec:trade}
A fundamental decision when selling an asset, and in some stylized setups the only decision, is the set of people to reach out to. When a party seeks to buy or sell a large amount of a stock, or a corporate bond, or a used car, they will often seek price quotes from several dealers. We present an approach to dealing with this ubiquitous problem of trade in which Algorithm \ref{alg:multiplicity} is used once, and subsequently Algorithm \ref{alg:inversion} is used repeatedly.
In this stylized version of trade each dealer is participating in a sealed bid auction and the customer will take the best price offered. This pattern of behaviour has not been replaced by electronification, except for a few cases where central limit order books prevail. Electronic venues for government and corporate bond trading continue to use variations of a ``request for quote'' (RFQ) protocol. In essence, this replaces a phone call with an electronic message to a chosen set of dealers, but does not otherwise alter the nature of trade, or the core decision: who to contact.\footnote{Examples include TradeWeb and MarketAxess.}
For most assets there are material economic costs of inquiry. For smaller assets the direct expense as a proportion of the asset's worth can quickly mount up, especially if inquiry is time consuming (as with filling out lengthy forms at a car dealership). For larger assets the more important cost is information revelation.
An inquiry might be construed as information about portfolio holdings or future trading intent. A call to an art dealer about a treasured family heirloom might be construed as personal financial distress, modifying the behaviour of the dealer the next time an inquiry is made and perhaps the behaviour of any other dealer catching wind. Thus, all else being equal, the holder of an illiquid security might prefer to indicate the desire to trade in that security only to a small number of market participants.
The tradeoff between inquiry cost and expected price improvement is usually left to judgement - but there is no reason why a calculation cannot assist or even replace a human in the task of choosing a subset of people to reach out to. To formalize we will assume the inquiry cost is a parameter set by the customer, presumably informed by their opinion as to the likely behaviour of the counterparty. On the other hand, we directly model the best price distribution as a function of who is called.
To this latter task, we note that {\em in theory} a customer might, with sufficient information, construct a detailed model for the joint distribution of dealer responses - and then by further optimization determine the best subset of dealers to call. In practice this will be unwieldy for the majority of market participants. Instead we provide, using Algorithm \ref{alg:inversion}, a relatively simple parameterization of the problem which requires the customer to enter only one number for each dealer. That number is (as the reader would anticipate) the probability $p_i$ that the dealer will return with the best price.
Adopting terminology from \cite{cotton_papanicolaou} (where the same setup is considered from the dealer's perspective) we assume $n$ dealers are aware of a true value of an asset $\pi$ and, when asked to bid on this asset, respond with a bid of $\pi-m_i(\omega)$ where $m_i$ is the markdown of the $i$'th dealer.\footnote{For clarity we speak only of the case of selling an asset. In the case of buying the cost is the markup rather than the markdown. Equation \ref{eqn:utility} still applies.} Our notation emphasizes that $m_i(\omega)$ is a random variable with $\omega \in \Omega$, some probability space. We assume the markdown for the $i$'th dealer has translated density $f^{*\rightarrow a_i}()$ for some parameter $a_i$.
The owner of the asset must decide who to call by minimizing the aggregate cost (the markdown relative to the fair price, plus the search cost). We write this
\begin{equation}
\label{eqn:utility}
V^* = \mathop{\mathrm{argmin}}_{V \subseteq S} \left\{ x\, E\left[ \min_{i \in V} m_i \right] + I(V;x) \right\}
\end{equation}
where $x$ is the size of the potential trade, $S = \{1,\dots, n\}$ is the set of all dealers, $V$ is a subset and $I(V;x)$ is the economic cost of revealing the information to all dealers in this set. Again, $I(V;x)$ is exogenous whereas the expected markdown for the best response is to be estimated based on limited information about the dealers.
We assume that if all dealers are asked then the unconditional probability of the $i$'th dealer providing the best price is $p_i$. The quantity $p_i$ may be informed by market share statistics if more relevant idiosyncratic statistics are not tracked by the customer. Market share numbers are often publicized or can be indirectly inferred.
The quantity
\begin{equation}
\label{eqn:phi}
\phi(V) = E\left[\min_{i \in V} m_i \right]
\end{equation}
can be computed by first calibrating the densities $f_i$ of dealer markdowns to the collection $\{p_i\}$ of market share statistics using Algorithm \ref{alg:inversion}. In the appendix we prove that the set optimization problem that results is a submodular minimization and thus, informally speaking, easy.
We remark that Algorithm \ref{alg:inversion} may be particularly useful because it leaves open the specification of $f^*$ and therefore can accommodate quantized, skewed, or fat-tailed markdown distributions. In some markets convention dictates that disseminated prices lie on a discrete lattice. Skewed markdowns can also be a requirement because missing a trade is a small loss compared to taking on a losing trade (the problem of adverse selection rarely escapes the mind of a cautious dealer, so modeling one sided bids or offers with a skewed distribution is natural).
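To make the workflow concrete, the following sketch (in Python; the routine \texttt{calibrate\_offsets} is a hypothetical stand-in for Algorithm \ref{alg:inversion}, and the lognormal density is an illustrative choice, not part of the method) estimates $\phi(V)$ by Monte Carlo and minimizes the objective \ref{eqn:utility} by exhaustive search over subsets, which is feasible for small $n$.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)

def phi(offsets, V, n_samples=100_000):
    # Monte Carlo estimate of E[min_{i in V} m_i] for a skewed
    # (lognormal) markdown density translated by each dealer's offset
    draws = rng.lognormal(0.0, 0.5, size=(n_samples, len(V)))
    return (draws + np.array([offsets[i] for i in V])).min(axis=1).mean()

# offsets would come from calibrating to the win probabilities p_i,
# e.g. offsets = calibrate_offsets(p, f_star)   (hypothetical call)
offsets = [0.0, 0.1, 0.25, 0.4]           # assumed, for illustration
inquiry_cost = [0.05, 0.04, 0.02, 0.01]   # I_i, per unit of size
x = 1.0                                   # trade size

subsets = (s for r in range(1, 5) for s in itertools.combinations(range(4), r))
best = min(subsets, key=lambda V: x * (phi(offsets, V)
                                       + sum(inquiry_cost[i] for i in V)))
print("optimal set of dealers to call:", best)
\end{verbatim}
For larger $n$ the exhaustive search would be replaced by a routine exploiting the structure established in the appendix.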
\subsection{Search, recommendation and placement}
In web search a horse race occurs every time a search phrase is entered. The winner of the race is the link that is clicked on. Risk neutral win probabilities ($\{p_i\}$) may not exist in quite the same fashion as at the racetrack, but the vast number of searches that occur (at least for common phrases) pins down a set of $\{p_i\}$ precisely nonetheless.
The position of a link on the page strongly influences the user's decision (not completely dissimilar to horseracing, where barrier position also matters) but we shall assume we are in the experimental phase and that a random permutation is applied so as not to bias any particular link. The user might in theory scroll or page down many times, so the field in this particular horse race may well run into the hundreds or more.
An analogous situation occurs in e-commerce, where there may be sufficient volume to accurately imply a probability that a product wins out over others in a given category. Similarly, there are services that try to estimate which image of a house or clothing item a person might click when presented with numerous possibilities arrayed randomly. These are examples of contests occurring with high velocity. The problem is not the estimation of the $\{p_i\}$ from a surfeit of historical data, but rather inferring what probabilities will apply when a new, very similar search is performed, or when some results are removed (in analogy to the scratching of a horse from a race, as considered in Section \ref{sec:scratching}).
Let us further suppose that at some cost, which might be direct or indirect, it is possible for an owner of a business to improve the underlying ``ability'' of their link to attract a click. This ability is related to numerous characteristics, depending on the venue, but examples might include an investment in search engine optimization, effort put into content creation, brand awareness or other activity intended to increase the likelihood of selection (it may even be possible to directly estimate the influence of factors such as relevance, the logarithm of site popularity and so on, even if the underlying mechanics of search are opaque).
In this setting a shift in ability to attract click-throughs, purchases or whatever constitutes a business outcome (a call to a real estate broker, for example) will come at a cost for business $i$, but the benefit will be proportional to the change in $p_i$. This is directly analogous to an updating of probabilities after a sporting event has started, as considered in Section \ref{sec:intherun}, and therefore the application of the horse race calibration Algorithm \ref{alg:inversion} is clear. To tie this back to the example of trade considered in Section \ref{sec:trade}, we note that a dealer responding to inquiry may also benefit from exactly the same calculus.
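As a sketch of the forward direction (from abilities to winning probabilities), the following Python fragment estimates the state prices of equation \ref{eqn:state} by Monte Carlo on an integer lattice, splitting the payout evenly among ties; the uniform $f^*$ and the offsets are illustrative assumptions only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def state_prices(offsets, base_pmf, n_samples=200_000):
    # draw lattice performances X_i = a_i + Z_i with Z_i ~ base_pmf,
    # then split the dollar evenly among contestants tied for the minimum
    support = np.arange(len(base_pmf))
    z = rng.choice(support, p=base_pmf, size=(n_samples, len(offsets)))
    x = z + np.array(offsets)
    winners = x == x.min(axis=1, keepdims=True)
    payouts = winners / winners.sum(axis=1, keepdims=True)
    return payouts.mean(axis=0)

base = np.ones(21) / 21    # assumed f*: uniform on {0, ..., 20}
print(state_prices([0, 1, 3], base))   # a one-unit ability shift moves p_i
\end{verbatim}
Wrapping this forward map in a root-finding loop over the offsets recovers, in spirit, the inversion performed by Algorithm \ref{alg:inversion}; a shift in ability then translates directly into the change in $p_i$ that the business case above monetizes.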
\section{Summary}
We have provided a fast and scalable algorithm, Algorithm \ref{alg:inversion}, for inferring the relative location of univariate variables $\{X_i\}_{i=1}^n$ sharing a common distribution up to translation. The evidence used is the probability of variable $i$ taking the lowest value. We approach this formally via the discrete horse race problem, where distributions are supported on a set of evenly spaced points, ties are permitted, and the constraints provided are not probabilities but the closely related state prices defined by formula \ref{eqn:state}.
This algorithm is likely to be particularly useful when information informing ability relates predominantly to winning probability, and:
\begin{enumerate}
\item It is necessary to infer winning probabilities for a subset; or
\item It is necessary to infer probabilities on events defined by more than one variable (such as the probability that
$X_i$ takes the least value and $X_j$ the second least); or
\item Ability finds interpretation as an expected scoring differential, and it is necessary to
infer probabilities after scores have been updated.
\end{enumerate}
For this reason the algorithm finds obvious use in sports wagering, both for in-the-run analytics and for the pricing of wagers other than win bets. However we have also illustrated uses that are seemingly unrelated, in recommendation and in the formulation of a consistent framework for optimizing inquiry.
\section*{Appendix: Proof of submodularity}
The customer problem set up with the help of Algorithm \ref{alg:inversion} falls into an ``easy'' class of set optimization problems, allowing one to take advantage of the theory of submodular function minimization \cite{Schrijver2000ATime}, \cite{Iwata2001AFunctions}, \cite{Iwata2013AMinimization}. For simplicity of exposition we assume $I(V;x) \propto x \sum_{i \in V} I_i$, where $I_i$ is the per-unit-size cost of revealing to party $i$ the intent to trade a hypothetical size $x$.\footnote{More generally we can assume submodularity of the inquiry cost and the assertion holds.}
Recall that a function $g: 2^S \rightarrow \mathbb{R}$, with $S=\{1,\dots,n\}$, is submodular if for any two subsets $V, W \subseteq S$ we have
\begin{equation}
\label{eqn:submodular1}
g(V \cup W) + g(W \cap V) \le g(V) + g(W)
\end{equation}
We claim that $\phi(V)$ is submodular, from which it follows, by linearity of $I(V;x)$, that inequality \ref{eqn:submodular1} also holds when applied to the quantity to be minimized in \ref{eqn:utility}.
To prove that $\phi()$ as defined in equation \ref{eqn:phi} is submodular we turn to an equivalent characterization. Denote by $\phi(A,i) := \phi(A) - \phi(A \cup \{i\})$ the marginal gain of $i$ with respect to $A$: the expected improvement in the best price obtained by calling one additional dealer $i$. It is well known that $\phi$ is submodular if these marginal gains are decreasing, that is to say, if for all $V \subseteq W \subseteq \{1,\dots,n\}$ and $i \notin W$ we have $\phi(V,i) \ge \phi(W,i)$.\footnote{This property is perhaps more intuitive in our earlier setting. Entering an additional horse in a small field reduces the expected winning time by more than entering the same horse in a larger field containing all the horses of the first race and some more. The horse in question is less likely to win as the field is enlarged, and winning is the only way it can affect the winning time.}
We introduce notation $m_W := \min_{j \in W} m_j$ for the minimal markdown over any set. Further define
\begin{eqnarray}
\delta_W(i) & = & \min_{j \in W} m_j - \min_{j \in W \cup \{i\}} m_j \\
& = & \left\{ \begin{array}{cc}
0 & m_i \ge m_W \\
m_W - m_i & m_i < m_W \\
\end{array}
\right. \nonumber \\
& \ge & 0
\end{eqnarray}
Then for $V \subset W$ we have $m_V \ge m_W$ and thus
\begin{eqnarray*}
\delta_W(i) - \delta_V(i) & = & \left\{ \begin{array}{cc}
0 & m_i \ge m_V \ge m_W \\
m_i - m_V & m_W \le m_i < m_V \\
m_W - m_V & m_i < m_W \le m_V
\end{array}
\right. \nonumber \\
\end{eqnarray*}
In each case $\delta_W(i) - \delta_V(i) \le 0$. Since both $\delta_W(i)$ and $\delta_V(i)$ are non-negative, it follows that we can define a (pathwise) quantity $\eta(W,V)$ satisfying $\eta(W,V)\,\delta_V(i) = \delta_W(i)$ with $0\le \eta(W,V) \le 1$.
Submodularity of $\phi(\cdot)$ now follows by iterated expectations, taking advantage of the fact that dealer bids are independent. We write $E_V$ for the expectation over the random variables $m_i$ with $i \in V$, and $E_{W \setminus V}$ for the expectation over those in the complement.
\begin{eqnarray*}
\phi(W,i) & = & E \left[ \min_{j \in W} m_j \right] - E \left[ \min_{j \in W \cup \{i\}} m_j \right] \\
& = & E_V \left[ E_{W \setminus V} \underbrace{
\left[ \min_{j \in W} m_j - \min_{j \in W \cup \{i\}} m_j
\right] }_{ =:\delta_W(i) }
\right] \\
& = & E_V \left[ \delta_V(i)\, E_{W\setminus V} \underbrace{\left[ \eta(W,V)
\right]}_{\le 1} \right]\\
& \le & E_V \left[ \delta_V(i) \right]\\
& = & \phi(V,i)
\end{eqnarray*}
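The diminishing-gains property at the heart of the proof is also easy to check numerically. The sketch below (independent lognormal markdowns and the chosen offsets are illustrative assumptions) estimates $\phi(V,i)$ and $\phi(W,i)$ for nested sets $V \subset W$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
offsets = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
draws = rng.lognormal(0.0, 0.5, size=(500_000, 5)) + offsets

def gain(members, i):
    # phi(A, i): expected price improvement from adding dealer i to A
    m_A = draws[:, members].min(axis=1)
    return (m_A - np.minimum(m_A, draws[:, i])).mean()

V, W, i = [0], [0, 1, 2, 3], 4
print(gain(V, i), ">=", gain(W, i))   # the gain shrinks as the set grows
\end{verbatim}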
\bibliographystyle{siamplain}
\bibliography{ROAR,unpublished}
\end{document}
--- end of source file: docs/latex_src/ex_article.tex (TeX, repo microprediction/winning) ---
# vim: expandtab tabstop=4 shiftwidth=4
from numpy import array as nparray
from numpy import floor, ceil, sqrt, log2, log10, arange
from matplotlib import pyplot as plt
from dspftw import decimal_convert_to_base, vector_power
def constellation_plot(constellation: nparray, binary_mode: int=0, scale: int=100):
"""
Plots symbol numbers at corresponding constellation points
Parameters
----------
constellation:
QAM constellation as a row numpy array vector of complex numbers
binary_mode:
Mode for printing symbols: 0 = decimals (default), 1 = binary
scale:
Size of characters on plot (default 100)
    Returns
    -------
    nparray
        Power normalized constellation as a numpy array
    """
# Ensure constellation is a 1-dimensional complex array
con = constellation.flatten().astype(complex)
# Constellation size
con_size = len(con)
# Check Parameters
# Constellations must contain at least 1 element
if con_size < 1:
print("constellation_plot error: Constellation size (",con_size,") must be at least 1.")
return constellation
    # If user provided mode, then it must be 0 or 1
    if binary_mode not in (0, 1):
print("constellation_plot error: User input mode (",binary_mode,") must be either 0 or 1.")
return constellation
    # If user provided scale, then it must be nonnegative
if scale < 0:
print("constellation_plot error: User input scale (",scale,") must be nonnegative.")
return constellation
# Number of bits per symbol
bps = ceil(log2(con_size)).astype(int)
# Plotting logic
plt.suptitle("Scatterplot of {0}-point constellation".format(con_size))
for k in arange(0,con_size):
# Determine plot scale
if binary_mode == 0: # Decimal Mode
# Number of decimal digits in k
if k == 0:
ndigit = 1
else:
ndigit = floor(log10(k))+1
size = scale*ndigit
txt = "${0}$".format(k)
elif binary_mode == 1: # Binary Mode
size = scale*bps
tmp = decimal_convert_to_base(k,2,bps).flatten()
# Convert binary array to string
txt = "$"
for idx in arange(0,bps):
txt = "{0}{1}".format(txt,tmp[idx])
txt = "{0}$".format(txt)
plt.scatter(con[k].real,con[k].imag,s=size,marker=txt,c='k')
plt.show()
# Return power normalized constellation
return (con/sqrt(vector_power(con))).flatten()
def conplot(*args, **kwargs) -> nparray:
'''
Alias for constellation_plot.
'''
return constellation_plot(*args, **kwargs)
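# Minimal usage sketch (illustrative, not part of the library): build a square
# 16-QAM constellation and plot it with decimal symbol labels.
if __name__ == '__main__':
    grid = nparray([complex(i, q) for q in (-3, -1, 1, 3) for i in (-3, -1, 1, 3)])
    normalized = constellation_plot(grid, binary_mode=0, scale=100)
    print(normalized)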
--- end of source file: dspftwplot/constellation_plot.py (Python, repo dspftw/dspftwplot) ---
#---------------------------------------------------------------------
#* only vapour phase
#*--------------------------------------------------------------------
type tank_vap
tank_vap()=begin
new(
tank_basic(),
[
:(_base_1.Outletm.v = 1),
:(_base_1.Outletm.h = _base_1.PP.VapourEnthalpy(_base_1.Outletm.T,_base_1.Outletm.P,_base_1.Outletm.z)),
:(_base_1.Tank.V = _base_1.Mt*_base_1.PP.VapourVolume(_base_1.Outletm.T,_base_1.Outletm.P,_base_1.Outletm.z)),
:(E = _base_1.Mt*_base_1.Outletm.h),
],
[
"Vapourisation fraction","Vapour Enthalpy","Volume constraint","Total internal energy",
],
)
end
_base_1::tank_basic
equations::Array{Expr,1}
equationNames::Array{String,1}
attributes::Dict{Symbol,Any}
end
export tank_vap
function setEquationFlow(in::tank_vap)
addEquation(1)
addEquation(2)
addEquation(3)
addEquation(4)
end
function atributes(in::tank_vap,_::Dict{Symbol,Any})
fields::Dict{Symbol,Any}=Dict{Symbol,Any}()
fields[:Brief]="Model of a generic vapour-phase tank"
drive!(fields,_)
return fields
end
tank_vap(_::Dict{Symbol,Any})=begin
newModel=tank_vap()
newModel.attributes=atributes(newModel,_)
newModel
end
--- end of source file: JuliaEMSOModels/reactors/tank_basic/tank_vap.jl (Julia, repo DANA-Laboratory/EMSOModelLibrary.jl) ---
import numpy as np
import matplotlib.pyplot as plt
import os
from PIL import Image
def change_range(data, input_min, input_max, output_min, output_max):
result = ((data - input_min) / (input_max - input_min)) * (output_max - output_min) + output_min
return result
def plot_other(imgs, subtitles, title='', ncols=3, nrows=1, vmin=None, vmax=None, size_col=2,
size_row=2.5):
fig = plt.figure(figsize=(size_col * ncols, size_row * nrows))
fig.suptitle(title, fontsize=28)
nimgs = len(imgs)
for i in range(nimgs):
row = int(i / ncols)
col = i % ncols
ax = plt.subplot2grid((nrows, ncols), (row, col))
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(subtitles[i], fontsize=24)
ax.imshow(imgs[i], cmap='gray', vmin=vmin, vmax=vmax)
plt.show()
def tif2png(path_src, path_dst):
listOfFiles_src = os.listdir(path_src)
for file in listOfFiles_src:
        # read image and re-save it as PNG
img = Image.open(os.path.join(path_src, file))
img.save(os.path.join(path_dst, file[:-3]+'png'), 'png')
def read_grayscale_img(path):
img = Image.open(path).convert('L')
m, n = img.getdata().size
img = np.asarray(img.getdata(), dtype=np.uint8).reshape(n, m) / 255.
return img
def read_color_img(path):
img = Image.open(path)
m, n = img.getdata().size
img = np.asarray(img.getdata(), dtype=np.uint8).reshape(n, m, 3) / 255.
return img
def save_numpy_as_img(img, path):
    # map [0, 1] floats to [0, 255] and cast to uint8 before handing to PIL
    # (PIL cannot build an image directly from a float64 array)
    img = change_range(img, 0., 1., 0., 255.)
    img = Image.fromarray(img.astype(np.uint8)).convert('RGB')
    img.save(path)
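# Minimal usage sketch (illustrative): show two random grayscale images side by
# side with plot_other.
if __name__ == '__main__':
    demo_imgs = [np.random.rand(32, 32) for _ in range(2)]
    plot_other(demo_imgs, ['noise A', 'noise B'], title='demo', ncols=2, nrows=1,
               vmin=0., vmax=1.)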
--- end of source file: src/misc_img_func.py (Python, repo dora-alvarado/robust-MTA-detection-modeling) ---
import os
from pathlib import Path
from abc import abstractmethod
from typing import Type, Dict, Sequence, List, Callable, Tuple
import torch
import numpy as np
from tqdm import tqdm
from loguru import logger
from torchvision.models.detection.rpn import AnchorGenerator
from nndet.utils.info import SuppressPrint
with SuppressPrint():
from nnunet.experiment_planning.common_utils import get_pool_and_conv_props
from nndet.io.load import load_pickle
from nndet.arch.abstract import AbstractModel
from nndet.planning.estimator import MemoryEstimator, MemoryEstimatorDetection
from nndet.planning.architecture.abstract import ArchitecturePlanner
from nndet.core.boxes import (
get_anchor_generator,
expand_to_boxes,
box_center,
box_size,
compute_anchors_for_strides,
box_iou,
box_size_np,
box_area_np,
permute_boxes,
)
from nndet.planning.architecture.boxes.utils import (
fixed_anchor_init,
scale_with_abs_strides,
)
class BaseBoxesPlanner(ArchitecturePlanner):
def __init__(self,
preprocessed_output_dir: os.PathLike,
save_dir: os.PathLike,
network_cls: Type[AbstractModel] = None,
estimator: MemoryEstimator = None,
**kwargs,
):
"""
Plan the architecture for training
Args:
min_feature_map_length (int): minimal size of feature map in bottleneck
"""
super().__init__(**kwargs)
self.preprocessed_output_dir = Path(preprocessed_output_dir)
self.save_dir = Path(save_dir)
self.save_dir.mkdir(parents=True, exist_ok=True)
self.network_cls = network_cls
self.estimator = estimator
self.dataset_properties = load_pickle(
self.preprocessed_output_dir / "properties" / 'dataset_properties.pkl')
# parameters initialized from process properties
self.all_boxes: np.ndarray = None
self.all_ious: np.ndarray = None
self.class_ious: Dict[str, np.ndarray] = None
self.num_instances: Dict[int, int] = None
self.dim: int = None
self.architecture_kwargs: dict = {}
self.transpose_forward = None
def process_properties(self, **kwargs):
"""
Load dataset properties and extract information
"""
assert self.transpose_forward is not None
boxes = [case["boxes"] for case_id, case
in self.dataset_properties["instance_props_per_patient"].items()]
self.all_boxes = np.concatenate([b for b in boxes if not isinstance(b, list) and b.size > 0], axis=0)
self.all_boxes = permute_boxes(self.all_boxes, dims=self.transpose_forward)
self.all_ious = self.dataset_properties["all_ious"]
self.class_ious = self.dataset_properties["class_ious"]
self.num_instances = self.dataset_properties["num_instances"]
self.num_instances_per_case = {case_id: sum(case["num_instances"].values())
for case_id, case in self.dataset_properties["instance_props_per_patient"].items()}
self.dim = self.dataset_properties["dim"]
self.architecture_kwargs["classifier_classes"] = \
len(self.dataset_properties["class_dct"])
self.architecture_kwargs["seg_classes"] = \
self.architecture_kwargs["classifier_classes"]
self.architecture_kwargs["in_channels"] = \
len(self.dataset_properties["modalities"])
self.architecture_kwargs["dim"] = \
self.dataset_properties["dim"]
def plot_box_distribution(self, **kwargs):
"""
Plot histogram with ground truth bounding box distribution for
all axis
"""
try:
import matplotlib.pyplot as plt
except ImportError:
plt = None
logger.error("Failed to import matplotlib continue anyway.")
if plt is not None:
if isinstance(self.all_boxes, list):
_boxes = np.concatenate(
[b for b in self.all_boxes if not isinstance(b, list) and b.size > 0], axis=0)
dists = box_size_np(_boxes)
else:
dists = box_size_np(self.all_boxes)
for axis in range(dists.shape[1]):
dist = dists[:, axis]
plt.hist(dist, bins=100)
plt.savefig(
self.save_dir / f'bbox_sizes_axis_{axis}.png')
plt.xscale('log')
plt.savefig(
self.save_dir / f'bbox_sizes_axis_{axis}_xlog.png')
plt.yscale('log')
plt.savefig(
self.save_dir / f'bbox_sizes_axis_{axis}_xylog.png')
plt.close()
def plot_box_area_distribution(self, **kwargs):
"""
Plot histogram of areas of all ground truth boxes
"""
try:
import matplotlib.pyplot as plt
except ImportError:
plt = None
logger.error("Failed to import matplotlib continue anyway.")
if plt is not None:
if isinstance(self.all_boxes, list):
_boxes = np.concatenate(
[b for b in self.all_boxes if not isinstance(b, list) and b.size > 0], axis=0)
area = box_area_np(_boxes)
else:
area = box_area_np(self.all_boxes)
plt.hist(area, bins=100)
plt.savefig(self.save_dir / f'box_areas.png')
plt.xscale('log')
plt.savefig(self.save_dir / f'box_areas_xlog.png')
plt.yscale('log')
plt.savefig(self.save_dir / f'box_areas_xylog.png')
plt.close()
def plot_class_distribution(self, **kwargs):
try:
import matplotlib.pyplot as plt
except ImportError:
plt = None
logger.error("Failed to import matplotlib continue anyway.")
if plt is not None:
num_instances_dict = self.dataset_properties["num_instances"]
num_instances = []
classes = []
for key, item in num_instances_dict.items():
num_instances.append(item)
classes.append(str(key))
ind = np.arange(len(num_instances))
plt.bar(ind, num_instances)
plt.xlabel("Classes")
plt.ylabel("Num Instances")
plt.xticks(ind, classes)
plt.savefig(self.save_dir / f'num_classes.png')
plt.yscale('log')
plt.savefig(self.save_dir / f'num_classes_ylog.png')
plt.close()
def plot_instance_distribution(self, **kwargs):
try:
import matplotlib.pyplot as plt
except ImportError:
plt = None
logger.error("Failed to import matplotlib continue anyway.")
if plt is not None:
num_instances_per_case = list(self.num_instances_per_case.values())
plt.hist(num_instances_per_case, bins=100, range=(0, 100))
plt.savefig(self.save_dir / f'instances_per_case.png')
plt.close()
plt.hist(num_instances_per_case, bins=30, range=(0, 30))
plt.savefig(self.save_dir / f'instances_per_case_0_30.png')
plt.close()
plt.hist(num_instances_per_case, bins=11, range=(0, 11))
plt.savefig(self.save_dir / f'instances_per_case_0_10.png')
plt.close()
@abstractmethod
def _plan_anchors(self) -> dict:
"""
Plan anchors hyperparameters
"""
raise NotImplementedError
@abstractmethod
def _plan_architecture(self) -> Sequence[int]:
"""
Plan architecture
"""
raise NotImplementedError
def plan(self, **kwargs) -> dict:
"""
Plan architecture and training params
"""
for key, item in kwargs.items():
setattr(self, key, item)
self.create_default_settings()
if self.all_boxes is None:
self.process_properties(**kwargs)
self.plot_box_area_distribution(**kwargs)
self.plot_box_distribution(**kwargs)
self.plot_class_distribution(**kwargs)
self.plot_instance_distribution(**kwargs)
return {}
def create_default_settings(self):
pass
def compute_class_weights(self) -> List[float]:
"""
Compute classification weighting for inbalanced datasets
(background samples get weight 1 / (num_classes + 1) and forground
classes are weighted with (1 - 1 / (num_classes + 1))*(1 - ni / nall))
where ni is the number of sampler for class i and n all
is the number of all ground truth samples
Returns:
List[float]: weights
"""
num_instances_dict = self.dataset_properties["num_instances"]
num_classes = len(num_instances_dict)
num_instances = [0] * num_classes
for key, item in num_instances_dict.items():
num_instances[int(key)] = int(item)
bg_weight = 1 / (num_classes + 1)
remaining_weight = 1 - bg_weight
weights = [remaining_weight * (1 - ni / sum(num_instances)) for ni in num_instances]
return [bg_weight] + weights
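    # Example (illustrative): with two foreground classes and instance counts
    # {0: 30, 1: 70}, bg_weight = 1/3 and the foreground weights are
    # (2/3) * (1 - 30/100) ~= 0.47 and (2/3) * (1 - 70/100) = 0.2, so the
    # method returns approximately [0.33, 0.47, 0.2].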
def get_planner_id(self) -> str:
"""
Create identifier for this planner. If available append
:attr:`plan_tag` to the base name
Returns:
str: identifier
"""
base = super().get_planner_id()
if hasattr(self, "plan_tag"):
base = base + getattr(self, "plan_tag")
return base
class BoxC001(BaseBoxesPlanner):
def __init__(self,
preprocessed_output_dir: os.PathLike,
save_dir: os.PathLike,
network_cls: Callable,
estimator: MemoryEstimator = MemoryEstimatorDetection(),
model_cfg: dict = None,
**kwargs,
):
"""
Plan training architecture with heuristics
Args:
preprocessed_output_dir: base preprocessed directory to
access properties and save analysis files
save_dir: directory to save analysis plots
network_cls: constructor of network to plan
estimator: estimate GPU memory requirements for specific GPU
architectures. Defaults to MemoryEstimatorDetection().
"""
super().__init__(
preprocessed_output_dir=preprocessed_output_dir,
save_dir=save_dir,
network_cls=network_cls,
estimator=estimator,
**kwargs,
)
self.additional_params = {}
if model_cfg is None:
model_cfg = {}
self.model_cfg = model_cfg
self.plan_anchor_for_estimation = fixed_anchor_init(self.dim)
def create_default_settings(self):
"""
Generate some default settings for the architecture
"""
# MAX_NUM_FILTERS_2D, MAX_NUM_FILTERS_3D from nnUNet
self.architecture_kwargs["max_channels"] = 480 if self.dim == 2 else 320
# BASE_NUM_FEATURES_3D from nnUNet
self.architecture_kwargs["start_channels"] = 32
# DEFAULT_BATCH_SIZE_3D from nnUNet
self.batch_size = 32 if self.dim == 2 else 2
self.max_num_pool = 999
self.min_feature_map_size = 4
self.min_decoder_level = 2
self.num_decoder_level = 4
self.architecture_kwargs["fpn_channels"] = \
self.architecture_kwargs["start_channels"] * 2
self.architecture_kwargs["head_channels"] = \
self.architecture_kwargs["fpn_channels"]
def plan(self,
target_spacing_transposed: Sequence[float],
median_shape_transposed: Sequence[float],
transpose_forward: Sequence[int],
mode: str = "3d",
) -> dict:
"""
Plan network architecture, anchors, patch size and batch size
Args:
target_spacing_transposed: spacing after data is transposed and resampled
median_shape_transposed: median shape after data is
transposed and resampled
transpose_forward: new ordering of axes for forward pass
mode: mode to use for planning (this planner only supports 3d!)
Returns:
dict: training and architecture information
See Also:
:method:`_plan_architecture`, :method:`_plan_anchors`
"""
super().plan(
transpose_forward=transpose_forward,
target_spacing_transposed=target_spacing_transposed,
median_shape_transposed=median_shape_transposed,
)
self.architecture_kwargs["class_weight"] = self.compute_class_weights()
patch_size = self._plan_architecture(
transpose_forward=transpose_forward,
target_spacing_transposed=target_spacing_transposed,
target_median_shape_transposed=median_shape_transposed,
)
anchors = self._plan_anchors(
target_spacing_transposed=target_spacing_transposed,
median_shape_transposed=median_shape_transposed,
transpose_forward=transpose_forward,
)
plan = {"patch_size": patch_size,
"batch_size": self.batch_size,
"architecture": {
"arch_name": self.network_cls.__name__,
**self.architecture_kwargs
},
"anchors": anchors,
}
logger.info(f"Using architecture plan: \n{plan}")
return plan
def _plan_anchors(self, **kwargs) -> dict:
"""
Optimize anchors
"""
boxes_np_full = self.all_boxes.astype(np.float32)
boxes_np = self.filter_boxes(boxes_np_full)
logger.info(f"Filtered {boxes_np_full.shape[0] - boxes_np.shape[0]} "
f"boxes, {boxes_np.shape[0]} boxes remaining for anchor "
"planning.")
boxes_torch = torch.from_numpy(boxes_np).float()
boxes_torch = boxes_torch - expand_to_boxes(box_center(boxes_torch))
anchor_generator = get_anchor_generator(self.dim, s_param=True)
rel_strides = self.architecture_kwargs["strides"]
filt_rel_strides = [[1] * self.dim, *rel_strides]
filt_rel_strides = [filt_rel_strides[i] for i in self.architecture_kwargs["decoder_levels"]]
strides = np.cumprod(filt_rel_strides, axis=0) / np.asarray(rel_strides[0])
params = self.find_anchors(boxes_torch, strides.astype(np.int32), anchor_generator)
scaled_params = {key: scale_with_abs_strides(item, strides, dim_idx) for dim_idx, (key, item) in enumerate(params.items())}
logger.info(f"Determined Anchors: {params}; Results in params: {scaled_params}")
self.anchors = scaled_params
self.anchors["stride"] = 1
return self.anchors
@staticmethod
def filter_boxes(boxes_np: np.ndarray,
upper_percentile: float = 99.5,
                     lower_percentile: float = 0.5,
) -> np.ndarray:
"""
Determine upper and lower percentiles of bounding box sizes for each
axis and remove boxes which are outside the specified range
Args:
boxes_np (np.ndarray): bounding boxes [N, dim * 2](x1, y1, x2, y2, (z1, z2))
upper_percentile: percentile for upper boundary. Defaults to 99.5.
lower_percentile: percentile for lower boundary. Defaults to 00.5.
Returns:
np.ndarray: filtered boxes
See Also:
:func:`np.percentile`
"""
mask = np.ones(boxes_np.shape[0]).astype(bool)
box_sizes = box_size_np(boxes_np)
for ax in range(box_sizes.shape[1]):
ax_sizes = box_sizes[:, ax]
upper_th = np.percentile(ax_sizes, upper_percentile)
lower_th = np.percentile(ax_sizes, lower_percentile)
ax_mask = (ax_sizes < upper_th) * (ax_sizes > lower_th)
mask = mask * ax_mask
return boxes_np[mask.astype(bool)]
def find_anchors(self,
boxes_torch: torch.Tensor,
strides: Sequence[Sequence[int]],
anchor_generator: AnchorGenerator,
) -> Dict[str, Sequence[int]]:
"""
Find anchors which maximize iou over dataset
Args:
boxes_torch: filtered ground truth boxes
strides (Sequence[Sequence[int]]): strides of network to compute
anchor sizes of lower levels
anchor_generator (AnchorGenerator): anchor generator for generate
the anchors
Returns:
            Dict[str, Sequence[int]]: parameterization of anchors
                `width` (Sequence[float]): width values for bounding boxes
                `height` (Sequence[float]): height values for bounding boxes
                (`depth` (Sequence[float]): depth values for bounding boxes)
"""
import nevergrad as ng
dim = int(boxes_torch.shape[1] // 2)
sizes = box_size(boxes_torch)
maxs = sizes.max(dim=0)[0]
best_iou = 0
# TBPSA, PSO
for algo in ["TwoPointsDE", "TwoPointsDE", "TwoPointsDE"]:
_best_iou = 0
params = []
for axis in range(dim):
# TODO: find better initialization
anchor_init = self.get_anchor_init(boxes_torch)
p = ng.p.Array(init=np.asarray(anchor_init[axis]))
p.set_integer_casting()
# p.set_bounds(1, maxs[axis].item())
p.set_bounds(lower=1)
params.append(p)
instrum = ng.p.Instrumentation(*params)
optimizer = ng.optimizers.registry[algo](
parametrization=instrum, budget=5000, num_workers=1)
with torch.no_grad():
pbar = tqdm(range(optimizer.budget), f"Anchor Opt {algo}")
for _ in pbar:
x = optimizer.ask()
anchors = anchor_generator.generate_anchors(*x.args)
anchors = compute_anchors_for_strides(
anchors, strides=strides, cat=True)
# TODO: add checks if GPU is availabe and has enough VRAM
iou = box_iou(boxes_torch.cuda(), anchors.cuda()) # boxes x anchors
mean_iou = iou.max(dim=1)[0].mean().cpu()
optimizer.tell(x, -mean_iou.item())
pbar.set_postfix(mean_iou=mean_iou)
                    _best_iou = max(_best_iou, mean_iou.item())
if _best_iou > best_iou:
best_iou = _best_iou
recommendation = optimizer.provide_recommendation().value[0]
return {key: list(val) for key, val in zip(["width", "height", "depth"], recommendation)}
def get_anchor_init(self, boxes: torch.Tensor) -> Sequence[Sequence[int]]:
"""
Initialize anchors sizes for optimization
Args:
            boxes: scaled and transposed ground truth boxes
Returns:
Sequence[Sequence[int]]: anchor initialization
"""
        # three axes are returned; only the first `dim` entries are consumed
        return [(2, 4, 8)] * 3
def _plan_architecture(self,
target_spacing_transposed: Sequence[float],
target_median_shape_transposed: Sequence[float],
**kwargs,
) -> Sequence[int]:
"""
Plan patchsize and main aspects of the architecture
Fills entries in :param:`self.architecture_kwargs`:
`conv_kernels`
`strides`
`decoder_levels`
Args:
target_spacing_transposed: spacing after data is transposed and resampled
target_median_shape_transposed: median shape after data is
transposed and resampled
Returns:
Sequence[int]: patch size to use for training
"""
self.estimator.batch_size = self.batch_size
patch_size = np.asarray(self._get_initial_patch_size(
target_spacing_transposed, target_median_shape_transposed))
first_run = True
while True:
            if not first_run:
                patch_size = self._decrease_patch_size(
                    patch_size, target_median_shape_transposed, pooling, must_be_divisible_by)
num_pool_per_axis, pooling, convs, patch_size, must_be_divisible_by = \
self.plan_pool_and_conv_pool_late(patch_size, target_spacing_transposed)
self.architecture_kwargs["conv_kernels"] = convs
self.architecture_kwargs["strides"] = pooling
num_resolutions = len(self.architecture_kwargs["conv_kernels"])
decoder_levels_start = min(max(0, num_resolutions - self.num_decoder_level), self.min_decoder_level)
self.architecture_kwargs["decoder_levels"] = \
tuple([i for i in range(decoder_levels_start, num_resolutions)])
            logger.debug(f"planned decoder levels: {self.architecture_kwargs['decoder_levels']}")
            logger.debug(f"anchors for estimation: {self.get_anchors_for_estimation()}")
_, fits_in_mem = self.estimator.estimate(
min_shape=must_be_divisible_by,
target_shape=patch_size,
in_channels=self.architecture_kwargs["in_channels"],
network=self.network_cls.from_config_plan(
model_cfg=self.model_cfg,
plan_arch=self.architecture_kwargs,
plan_anchors=self.get_anchors_for_estimation()),
optimizer_cls=torch.optim.Adam,
)
if fits_in_mem:
break
first_run = False
logger.info(f"decoder levels: {self.architecture_kwargs['decoder_levels']}; \n"
f"pooling strides: {self.architecture_kwargs['strides']}; \n"
f"kernel sizes: {self.architecture_kwargs['conv_kernels']}; \n"
f"patch size: {patch_size}; \n")
return patch_size
def _decrease_patch_size(self,
patch_size: np.ndarray,
target_median_shape_transposed: np.ndarray,
pooling: Sequence[Sequence[int]],
must_be_divisible_by: Sequence[int],
) -> np.ndarray:
"""
Decrease largest physical axis. If it larger than bottleneck size is
is decreased by the minimum value to be divisable by computed pooling
strides and will be halfed otherwise.
Args:
patch_size: current patch size
target_median_shape_transposed: median shape of dataset
correctly transposed
pooling: pooling kernels of network
must_be_divisible_by: necessary divisor per axis
Returns:
np.ndarray: new patch size
"""
argsrt = np.argsort(patch_size / target_median_shape_transposed)[::-1]
pool_fct_per_axis = np.prod(pooling, 0)
bottleneck_size_per_axis = patch_size / pool_fct_per_axis
reduction = []
for i in range(len(patch_size)):
if bottleneck_size_per_axis[i] > self.min_feature_map_size:
reduction.append(must_be_divisible_by[i])
else:
reduction.append(must_be_divisible_by[i] / 2)
patch_size[argsrt[0]] -= reduction[argsrt[0]]
return patch_size
@staticmethod
def _get_initial_patch_size(target_spacing_transposed: np.ndarray,
target_median_shape_transposed: Sequence[int]) -> List[int]:
"""
Generate initial patch which relies on the spacing of underlying images.
This is based on the fact that most acquisition protocols are optimized
to focus on the most importatnt aspects.
Returns:
List[int]: initial patch size
"""
voxels_per_mm = 1 / np.array(target_spacing_transposed)
# normalize voxels per mm
input_patch_size = voxels_per_mm / voxels_per_mm.mean()
# create an isotropic patch of size 512x512x512mm
input_patch_size *= 1 / min(input_patch_size) * 512 # to get a starting value
input_patch_size = np.round(input_patch_size).astype(np.int32)
# clip it to the median shape of the dataset because patches larger then that make not much sense
input_patch_size = [min(i, j) for i, j in zip(
input_patch_size, target_median_shape_transposed)]
return np.round(input_patch_size).astype(np.int32)
def plan_pool_and_conv_pool_late(self,
patch_size: Sequence[int],
spacing: Sequence[float],
) -> Tuple[List[int], List[Tuple[int]], List[Tuple[int]],
Sequence[int], Sequence[int]]:
"""
        Plan pooling and convolutions of the encoder network.
        Axes which do not need pooling in every block are pooled as late as
        possible. Kernel size 1 is used for anisotropic axes which are not yet
        reached by the field of view.

        Args:
            patch_size: target patch size
            spacing: target spacing transposed

        Returns:
            List[int]: max number of pooling operations per axis
            List[Tuple[int]]: kernel sizes of pooling operations
            List[Tuple[int]]: kernel sizes of convolution layers
            Sequence[int]: patch size
            Sequence[int]: coefficient each axis needs to be divisible by
        """
num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, \
patch_size, must_be_divisible_by = get_pool_and_conv_props(
spacing=spacing, patch_size=patch_size,
min_feature_map_size=self.min_feature_map_size,
max_numpool=self.max_num_pool)
return num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, patch_size, must_be_divisible_by
def get_anchors_for_estimation(self):
"""
Adjust anchor plan for varying number of feature maps
Returns:
dict: adjusted anchor plan
"""
num_levels = len(self.architecture_kwargs["decoder_levels"])
anchor_plan = {"stride": 1, "aspect_ratios": (0.5, 1, 2)}
        _sizes = [(16, 32, 64)] * num_levels
        anchor_plan["sizes"] = tuple(_sizes)
        if self.dim == 3:
            anchor_plan["zsizes"] = tuple(_sizes)
return anchor_plan
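# Minimal usage sketch (illustrative only): percentile based box filtering as
# used during anchor planning. Boxes are (x1, y1, x2, y2) here; the values are
# arbitrary demo data, not part of the library.
if __name__ == '__main__':
    rng = np.random.default_rng(0)
    corners = rng.uniform(0, 50, size=(1000, 2))
    sizes = rng.uniform(1, 20, size=(1000, 2))
    demo_boxes = np.concatenate([corners, corners + sizes], axis=1).astype(np.float32)
    kept = BoxC001.filter_boxes(demo_boxes)
    print(f"kept {kept.shape[0]} of {demo_boxes.shape[0]} boxes")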
--- end of source file: nndet/planning/architecture/boxes/base.py (Python, repo joeranbosma/nnDetection) ---
from datetime import datetime
import numpy as np
import h5py
from exetera.core import persistence, utils
from exetera.core.persistence import DataStore
def method_paper_summary_pipeline(ds, src_data, dest_data, first_timestamp, last_timestamp):
s_ptnts = src_data['patients']
s_asmts = src_data['assessments']
filters = ds.get_or_create_group(dest_data, 'filters')
print(s_ptnts.keys())
print(src_data['tests'].keys())
conditions = ('has_kidney_disease', 'has_lung_disease', 'has_heart_disease', 'has_diabetes',
'has_hayfever', 'has_cancer')
symptoms = ('persistent_cough', 'fatigue', 'delirium', 'shortness_of_breath', 'fever',
'diarrhoea', 'abdominal_pain', 'chest_pain', 'hoarse_voice', 'skipped_meals',
'loss_of_smell')
symptom_thresholds = {s: 2 for s in symptoms}
symptom_thresholds['fatigue'] = 3
symptom_thresholds['shortness_of_breath'] = 3
intercept = -1.19015973
weights = {'persistent_cough': 0.23186655,
'fatigue': 0.56532346,
'delirium': -0.12935112,
'shortness_of_breath': 0.58273967,
'fever': 0.16580974,
'diarrhoea': 0.10236126,
'abdominal_pain': -0.11204163,
'chest_pain': -0.12318634,
'hoarse_voice': -0.17818597,
'skipped_meals': 0.25902482,
'loss_of_smell': 1.82895239}
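    # Illustrative check (not part of the pipeline): the prediction below is a
    # linear logistic score, flagged positive when
    # intercept + sum_s weights[s] * symptom[s] > 0. For example, loss_of_smell
    # together with fatigue already crosses the threshold:
    # -1.19015973 + 1.82895239 + 0.56532346 > 0.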
    # Collect the set of patients from England (lsoa11cd starting with 'E').
    # Note: the England-only filter itself is currently commented out below,
    # so this set is informational only.
    # ===================================================================
    eng_pats = set()
p_ids_ = ds.get_reader(s_ptnts['id'])[:]
p_lsoas_ = ds.get_reader(s_ptnts['lsoa11cd'])[:]
for i in range(len(p_ids_)):
lsoa = p_lsoas_[i]
if len(lsoa) > 0 and lsoa[0] == 69: # E
eng_pats.add(p_ids_[i])
print("eng pats:", len(eng_pats))
# generating patient filter
# -------------------------
if 'patient_filter' not in filters.keys():
with utils.Timer("generating patient filter", new_line=True):
p_filter = ds.get_reader(s_ptnts['year_of_birth_valid'])[:]
# valid age ranges
r_ = ds.get_reader(s_ptnts['age'])[:]
f_ = (r_ >= 18) & (r_ <= 100)
p_filter = p_filter & f_
# gender filter
r_ = ds.get_reader(s_ptnts['gender'])[:]
f_ = (r_ == 1) | (r_ == 2)
p_filter = p_filter & f_
# country code
r_ = ds.get_reader(s_ptnts['country_code'])[:]
f_ = r_ == b'GB'
p_filter = p_filter & f_
print("UK:", p_filter.sum(), len(p_filter))
# # England only
# r_ = ds.get_reader(s_ptnts['lsoa11cd'])[:]
# f_ = np.zeros(len(r_), dtype=np.bool)
# for i in range(len(r_)):
# lsoa = r_[i]
# if len(lsoa) > 0 and lsoa[0] == 69: # E
# f_[i] = True
# p_filter = p_filter & f_
# print("Eng:", p_filter.sum(), len(p_filter))
# no assessments
r_ = ds.get_reader(s_ptnts['assessment_count'])[:]
f_ = r_ > 0
p_filter = p_filter & f_
print("No asmts:", p_filter.sum(), len(p_filter))
print(" {}, {}".format(np.count_nonzero(p_filter),
np.count_nonzero(p_filter == False)))
ds.get_numeric_writer(filters, 'patient_filter', 'bool').write(p_filter)
# generating assessment filter
# ----------------------------
if 'assessment_filter' not in filters.keys():
with utils.Timer("generating assessment filter", new_line=True):
            a_filter = np.ones(len(ds.get_reader(s_asmts['id'])), dtype=bool)
# created_at in range
r_ = ds.get_reader(s_asmts['created_at'])[:]
f_ = (r_ >= first_timestamp) & (r_ < last_timestamp)
a_filter = a_filter & f_
# country code
r_ = ds.get_reader(s_asmts['country_code'])[:]
f_ = r_ == b'GB'
a_filter = a_filter & f_
with utils.Timer(f"filtering out orphaned assessments"):
p_ids_ = ds.get_reader(s_ptnts['id'])[:]
p_ids_ = ds.apply_filter(ds.get_reader(filters['patient_filter'])[:], p_ids_)
a_pids_ = ds.get_reader(s_asmts['patient_id'])[:]
f_ = persistence.foreign_key_is_in_primary_key(p_ids_, a_pids_)
a_filter = a_filter & f_
print(" {}, {}".format(np.count_nonzero(a_filter),
np.count_nonzero(a_filter == False)))
ds.get_numeric_writer(filters, 'assessment_filter', 'bool').write(a_filter)
# filtering patients
# ------------------
if 'filtered_patients' not in dest_data.keys():
flt_ptnts = dest_data.create_group('filtered_patients')
with utils.Timer("filtering/flattening patient fields", new_line=True):
p_filter = ds.get_reader(filters['patient_filter'])[:]
r = ds.get_reader(s_ptnts['age'])
r.get_writer(flt_ptnts, 'age').write(ds.apply_filter(p_filter, r[:]))
for k in conditions:
r = ds.get_reader(s_ptnts[k])
ds.get_numeric_writer(flt_ptnts, k, 'bool').write(
ds.apply_filter(p_filter, r[:]) == 2)
            smoker1 = ds.get_reader(s_ptnts['is_smoker'])
            smoker2 = ds.get_reader(s_ptnts['smoker_status'])
            # apply the patient filter so 'smoker' lines up with the other
            # filtered patient fields
            smoker = ds.apply_filter(p_filter, (smoker1[:] == 2) | (smoker2[:] == 3))
            ds.get_numeric_writer(flt_ptnts, 'smoker', 'bool').write(smoker)
            gender_ = ds.get_reader(s_ptnts['gender'])[:]
            ds.get_numeric_writer(flt_ptnts, 'gender', 'uint8').write(
                ds.apply_filter(p_filter, gender_) - 1)
else:
flt_ptnts = dest_data['filtered_patients']
# filtering assessments
# ---------------------
if 'filtered_assessments' not in dest_data.keys():
flt_asmts = dest_data.create_group('filtered_assessments')
with utils.Timer("filtering/flattening symptoms",
new_line=True):
a_filter = ds.get_reader(filters['assessment_filter'])[:]
for s in symptoms:
r_ = ds.get_reader(s_asmts[s])[:]
ds.get_numeric_writer(flt_asmts, s, 'bool').write(
ds.apply_filter(a_filter, r_) >= symptom_thresholds[s])
a_pids = ds.get_reader(s_asmts['patient_id'])
a_pids.get_writer(flt_asmts, 'patient_id').write(ds.apply_filter(a_filter, a_pids[:]))
else:
flt_asmts = dest_data['filtered_assessments']
# predicting covid
# ----------------
if 'prediction' not in dest_data['filtered_assessments']:
with utils.Timer("generating covid prediction", new_line=True):
cumulative = np.zeros(len(ds.get_reader(flt_asmts['persistent_cough'])), dtype='float64')
for s in symptoms:
reader = ds.get_reader(flt_asmts[s])
cumulative += reader[:] * weights[s]
cumulative += intercept
print("positive predictions", np.count_nonzero(cumulative > 0.0), len(cumulative))
a_pids_ = ds.get_reader(flt_asmts['patient_id'])[:]
spans = ds.get_spans(a_pids_)
max_prediction_inds = ds.apply_spans_index_of_max(spans, cumulative)
max_predictions = cumulative[max_prediction_inds]
ds.get_numeric_writer(flt_asmts, 'prediction', 'float32').write(max_predictions)
pos_filter = max_predictions > 0.0
print("pos_filter: ", np.count_nonzero(pos_filter), len(pos_filter))
# generating table results
print('total_assessments:', np.count_nonzero(ds.get_reader(filters['assessment_filter'])[:]))
subjects = np.count_nonzero(ds.get_reader(filters['patient_filter'])[:])
genders = ds.get_reader(flt_ptnts['gender'])[:]
predicted_c19 = np.count_nonzero(ds.get_reader(flt_asmts['prediction'])[:] > 0.0)
age_mean = np.mean(ds.get_reader(flt_ptnts['age'])[:])
age_std = np.std(ds.get_reader(flt_ptnts['age'])[:])
print('subjects:', subjects)
male = np.count_nonzero(genders)
female = np.count_nonzero(genders == False)
print('gender: {}:{}, {:.2%}:{:.2%}'.format(male, female,
male / len(genders), female / len(genders)))
# print('predicted covid-19:', predicted_c19)
print('{}:'.format('predicted covid-19'),
'{} {:.2%}'.format(predicted_c19,
predicted_c19 / len(ds.get_reader(flt_asmts['prediction']))))
print('age {:.2f} ({:.2f})'.format(age_mean, age_std))
for k in conditions + ('smoker',):
kr_ = ds.get_reader(flt_ptnts[k])[:]
pos = np.count_nonzero(kr_)
print('{}:'.format(k), '{} {:.2%}'.format(pos, pos / len(kr_)))
# print(np.count_nonzero(kr_), len(kr_))
if __name__ == '__main__':
datastore = DataStore()
src_file = '/home/ben/covid/ds_20201008_full.hdf5'
dest_file = '/home/ben/covid/ds_20201008_summary.hdf5'
with h5py.File(src_file, 'r+') as src_data:
with h5py.File(dest_file, 'w') as dest_data:
method_paper_summary_pipeline(datastore, src_data, dest_data,
datetime.strptime("2020-03-01", '%Y-%m-%d').timestamp(),
datetime.strptime("2020-10-08", '%Y-%m-%d').timestamp())
# if __name__ == '__main__':
# datastore = DataStore()
# src_file = '/home/ben/covid/ds_20200929_full.hdf5'
# dest_file = '/home/ben/covid/ds_20200929_supplement_table.hdf5'
# with h5py.File(src_file, 'r+') as src_data:
# with h5py.File(dest_file, 'w') as dest_data:
# method_paper_summary_pipeline(datastore, src_data, dest_data,
# datetime.strptime("2020-04-01", '%Y-%m-%d').timestamp(),
# datetime.strptime("2020-09-29", '%Y-%m-%d').timestamp())
--- end of source file: exeteracovid/scripts/method_paper_summary.py (Python, repo deng113jie/ExeTeraCovid) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""harrypotter magicawakened auto quiz
auto_quiz is the friendly quiz fork by biechuyangwang (and Contributors).
https://github.com/biechuyangwang/universalautomaticanswer/
usage:
python auto_quiz.py
Interactive input
sel:
{1:大神云游戏麻瓜,2:魔法史,3:院活,4:麻瓜多开,5:魔法史多开,6:院活多开,7:社团, 11:网易云游戏麻瓜,...}
# num<=10表示是chrome浏览器的大神云游戏中的截图来识别题目和选项,支持多开选项
# num> 10表示是chrome浏览器的网易云游戏中的截图来识别题目和选项,不支持多开选项
epoch:
{int}
# 表示运行的轮数
auto_quiz is the Python Library by biechuyangwang (and Contributors).
Copyright (c) 2022.04.18
"""
import sys
import numpy as np
from PIL import ImageGrab,Image
import matplotlib.pyplot as plt
import cv2
import time
import random
import logging
import warnings
warnings.filterwarnings('ignore') # quite a few warnings, filter them out
# import custom packages
sys.path.append(r"C:\\Users\\SAT") # add the path of the custom packages
from UniversalAutomaticAnswer.conf.confImp import get_yaml_file
from UniversalAutomaticAnswer.screen.screenImp import ScreenImp # custom package
from UniversalAutomaticAnswer.ocr.ocrImp import OCRImp
from UniversalAutomaticAnswer.util.filter import filterQuestion, filterLine, filterPersonState, maguafilterQuestion, filtertimeLine
from UniversalAutomaticAnswer.match.matchImp import DataMatcher, match_options
# set the logger output level
from UniversalAutomaticAnswer.util.logging import get_logger
logger = get_logger() # a logger to help with debugging
logger.setLevel(logging.INFO)
# log file, redirect stdout
def make_print_to_file(path='./'):
    '''
    path: the directory in which to save the log of everything printed
    example:
        call make_print_to_file() and all the output of the print function
        will also be written to a log file
    :return:
    '''
import sys
import os
# import config_file as cfg_file
import sys
import datetime
class Logger(object):
def __init__(self, filename="Default.log", path="./"):
self.terminal = sys.stdout
import os
            if not os.path.exists(path): # create the folder if it does not exist
os.mkdir(path)
            self.log = open(os.path.join(path, filename), "a", encoding='utf8',) # open the log file in append mode
def write(self, message):
            self.terminal.write(message) # print to the terminal
            self.log.write(message) # also write to the log file
def flush(self):
self.terminal.flush()
self.log.flush()
fileName = datetime.datetime.now().strftime('log_'+'%Y_%m_%d_%H')
sys.stdout = Logger(fileName + '.log', path=path)
# load the configuration file
conf_path = 'conf/conf.yml'
conf_data = get_yaml_file(conf_path)
# initialize the OCR model
ocr = OCRImp(conf_data)
# initialize the matcher (question bank)
data_matcher = DataMatcher(conf_data)
# initialize the screen instance
screen = ScreenImp(conf_data)
# Dashen cloud-gaming globals (adjust these to your own setup)
chrom_dashen_left_padding = 244 # x of the top-left corner of the game area, Dashen cloud gaming in Chrome
chrom_dashen_top_padding = 155 # y of the top-left corner of the game area, Dashen cloud gaming in Chrome
chrom_dashen_right_pading = -244 # x of the top-right corner of the game area, Dashen cloud gaming in Chrome
chrom_dashen_bottom_padding = -120 # y of the bottom edge of the game area, Dashen cloud gaming in Chrome
chrom_wangyiyun_left_padding = 190
chrom_wangyiyun_top_padding = 155
chrom_wangyiyun_right_pading = -190
chrom_wangyiyun_bottom_padding = -60
rect_start = [705,750,1140,1340] # click 1111,777 # [h1,h2,w1,w2] of the in-game "Ready" button; cropping to the button keeps the recognized area small and the response fast
rect_failure_back = [715,750,680,755] # click 710,735 # [h1,h2,w1,w2] of the in-game "Back" button, cropped for fast recognition
rect_magua_countdown = [15,105,670,769] # [h1,h2,w1,w2] of the Muggle Studies countdown timer, cropped for fast recognition
rect_mofashi_countdown = [7,70,675,760] # cannot recognize 11 and 3 # [h1,h2,w1,w2] of the History of Magic countdown timer, cropped for fast recognition
rect_question = [415,518,155,800] # top bottom left right # question region [415,518,155,800]
rect_question_ex = [415,522,155,800] # top bottom left right # question region [415,518,155,800]
rect_option_a = [595,655,104,510] # click 510,645 # option A
rect_option_b = [595,655,804,1210] # click 1210,645 # option B
rect_option_c = [695,755,104,510] # click 510,745 # option C
rect_option_d = [695,755,804,1210] # click 1210,745 # option D
rect_continue = [755,790,1055,1180] # click 1111,777 # "Continue" button at the bottom right
coordinate = [ # click coordinates of the four option buttons, plus a center point (a buffer in case a rushed answer goes wrong)
    [510,645],
    [1210,645],
    [510,745],
    [1210,745],
    [716,687]
]
click_start = [1240,730] # coordinates to click "Start"
click_continue = [1111,777] # coordinates to click "Continue"
click_failure_back = [710,735] # coordinates to click "Back"
# NetEase cloud-gaming globals (same meanings as the Dashen variables above)
wangyiyun_rect_start = [755,810,1230,1430] # click
wangyiyun_rect_magua_countdown = [15,125,720,820] # 12, 9, 8, 6 recognized reliably
wangyiyun_rect_mofashi_countdown = [13,75,730,810]
wangyiyun_rect_question = [450,558,170,890]
wangyiyun_rect_question_ex = [450,562,170,890]
wangyiyun_rect_option_a = [643,710,110,510] # click
wangyiyun_rect_option_b = [643,710,870,1270] # click
wangyiyun_rect_option_c = [765,808,110,510] # click
wangyiyun_rect_option_d = [765,808,870,1270] # click
wangyiyun_rect_continue = [810,850,1140,1270] # click todo
wangyiyun_coordinate = [
[510,693],
[1270,693],
[510,808],
[1270,808],
[770,740]
]
wangyiyun_click_start = [1340,785]
wangyiyun_click_continue = [1205,830]
# MuMu emulator
mumu_solution = [1440,810]
mumu_hand = 24
mumu_rect_start = [710,750,1145,1345] # click
mumu_rect_magua_countdown = [15,125,720,820] # Todo
mumu_rect_mofashi_countdown = [8,70,680,745]
mumu_rect_question = [420,525,155,800]
mumu_rect_question_ex = [420,530,155,860]
mumu_rect_option_a = [600,660,103,503] # click
mumu_rect_option_b = [600,660,807,1207] # click
mumu_rect_option_c = [703,763,103,503] # click
mumu_rect_option_d = [703,763,807,1207] # click
mumu_rect_continue = [760,795,1060,1190] # click todo
mumu_coordinate = [
[503,660],
[1207,660],
[503,763],
[1207,763],
[720,693]
]
mumu_click_start = [1250,733]
mumu_click_continue = [1122,777]
# multi
coordinate_mul = [
[366,753],
[753,753],
[366,810],
[753,810]
]
padd2wy = -195 # top-aligned 110, centered 265, -155; NetEase cloud game 300
import win32gui
hwnd_title = dict() # dictionary refreshed via update()
def get_all_hwnd(hwnd,nouse):
    """Collect every visible, enabled top-level window handle into hwnd_title.
    Args:
        hwnd: window handle passed in by EnumWindows
        nouse: unused callback argument
    """
    if win32gui.IsWindow(hwnd) and win32gui.IsWindowEnabled(hwnd) and win32gui.IsWindowVisible(hwnd):
        hwnd_title.update({hwnd:win32gui.GetWindowText(hwnd)})
def get_hwnd(hwnd_title:list, target_title:str, brower_type:str='Chrom'): # brower_type: {'Chrom','Edge'}
    """Get the handle of the window whose title contains target_title.
    Args:
        hwnd_title (list): collection of system window handles
        target_title (str): target window title
        brower_type (str, optional): browser type. Defaults to 'Chrom'.
    Returns:
        the window handle if found, otherwise None
    """
win32gui.EnumWindows(get_all_hwnd, 0)
for h,t in hwnd_title.items():
if target_title in t and brower_type in t:
ret_hwnd = h
return ret_hwnd
return None
chrom_dashen_hwnd = get_hwnd(hwnd_title,"大神云游戏",'Chrom') # global handle (Dashen cloud-gaming window in Chrome)
chrom_wangyiyun_hwnd = get_hwnd(hwnd_title,"网易云游戏",'Chrom') # global handle (NetEase cloud-gaming window in Chrome)
edge_dashen_hwnd = get_hwnd(hwnd_title,"大神云游戏",'Edge') # global handle (Dashen cloud-gaming window in Edge)
mumu_hwnd = get_hwnd(hwnd_title,"MuMu模拟器",'') # global handle (MuMu emulator window)
def get_rect_img(hwnd,plat_type='dashen'):
if chrom_dashen_hwnd == chrom_wangyiyun_hwnd:
plat_type='wangyiyun'
ret_rect = win32gui.GetWindowRect(hwnd)
if len(ret_rect)==0:
return None
else:
img = ImageGrab.grab() # 1920 1080
img = Image.frombytes('RGB', img.size, img.tobytes())
img = np.array(img)
# logger.info(ret_img.shape) # hwc={1080, 1920, 3} # width 900 width 954 height 537 left 163 top 0
# hightpadding 155 -120 -60 960-155=805 leftpadding 244 -244 1920-488=1420+12=1432
if plat_type=='dashen':
ret_img = img[chrom_dashen_top_padding:chrom_dashen_bottom_padding,chrom_dashen_left_padding:chrom_dashen_right_pading]
else:
ret_img = img[chrom_wangyiyun_top_padding:chrom_wangyiyun_bottom_padding,chrom_wangyiyun_left_padding:chrom_wangyiyun_right_pading]
# ret_img = img[:,:]
# logger.info(ret_img.shape) # hw:{805,1432} leftpadding 244 toppadding 155
return ret_rect, ret_img
def get_mumu_rect_img(hwnd):
rect = win32gui.GetWindowRect(hwnd)
    ret_rect = [int(1.5*(rect[1]+mumu_hand)),int(1.5*(rect[1]+mumu_hand)+mumu_solution[1]),int(1.5*rect[0]),int(1.5*rect[0]+mumu_solution[0])] # [y0:y1,x0:x1]
if len(ret_rect)==0:
return None
else:
img = ImageGrab.grab() # 1920 1080
img = Image.frombytes('RGB', img.size, img.tobytes())
img = np.array(img)
# logger.info(ret_img.shape) # hwc={1080, 1920, 3} # width 900 width 954 height 537 left 163 top 0
# hightpadding 155 -120 -60 960-155=805 leftpadding 244 -244 1920-488=1420+12=1432
ret_img = img[ret_rect[0]:ret_rect[1],ret_rect[2]:ret_rect[3]]
return ret_rect, ret_img
import win32api
import win32con
def left_click(x:int, y:int,times:int=1):
    """Click the left mouse button `times` times.
    Args:
        x (int): x coordinate
        y (int): y coordinate
        times (int, optional): number of clicks. Defaults to 1.
    """
    win32api.SetCursorPos((x,y))
    while times:
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,x,y,0,0)
        win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,x,y,0,0)
        times -= 1
def zoom_left_click(x, y, times:int=1, ratio=1.5): # check the text scaling factor in your display settings; laptops are usually 150%
    """Click the screen, compensating for the system display scaling.
    Args:
        x (_type_): x coordinate
        y (_type_): y coordinate
        ratio (float, optional): system display scaling factor. Defaults to 1.5.
        times (int, optional): number of clicks. Defaults to 1.
    """
    rel_x = int((chrom_dashen_left_padding + x)/ratio)
    rel_y = int((chrom_dashen_top_padding + y)/ratio)
    left_click(rel_x, rel_y, times)
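# A minimal usage sketch (hypothetical in-game coordinates): with the default
# paddings above and 150% display scaling, zoom_left_click(1111, 777, times=2)
# resolves the in-game point (1111, 777) to the absolute screen point
# ((244 + 1111) / 1.5, (155 + 777) / 1.5) before sending two left clicks.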
def click_instance_center(instance_name:str, ocr_result:list):
"""鼠标单击instance_name
Args:
instance_name (str): 实例名称
ocr_result (list): 识别结果列表
"""
if len(ocr_result) == 0:
return
ret_list = [ line[0] for line in ocr_result if instance_name in line[1][0] ]
if len(ret_list) == 0:
        logger.info('"{}" not found on the current page'.format(instance_name))
return
else:
ret = ret_list[0]
x = int((ret[0][0]+ret[1][0])/2)
y = int((ret[1][1]+ret[2][1])/2)
zoom_left_click(x, y)
def get_countdown(img, class_type='mofashi'):
countdown = ''
if class_type=='maguayanjiu':
img_countdown = img[rect_magua_countdown[0]:rect_magua_countdown[1],rect_magua_countdown[2]:rect_magua_countdown[3]]
elif class_type=='mofashi':
img_countdown = img[rect_mofashi_countdown[0]:rect_mofashi_countdown[1],rect_mofashi_countdown[2]:rect_mofashi_countdown[3]]
result = ocr.ocr(img_countdown)
content = ocr.ocr_content(result)
if len(filterLine(content)) != 0:
countdown = filterLine(content)[0]
return countdown
def get_question_options(img):
res = []
QBtn = img[rect_question[0]:rect_question[1],rect_question[2]:rect_question[3]]
ABtn = img[rect_option_a[0]:rect_option_a[1],rect_option_a[2]:rect_option_a[3]]
BBtn = img[rect_option_b[0]:rect_option_b[1],rect_option_b[2]:rect_option_b[3]]
CBtn = img[rect_option_c[0]:rect_option_c[1],rect_option_c[2]:rect_option_c[3]]
DBtn = img[rect_option_d[0]:rect_option_d[1],rect_option_d[2]:rect_option_d[3]]
resultq = ocr.ocr(QBtn)
resulta = ocr.ocr(ABtn)
resultb = ocr.ocr(BBtn)
resultc = ocr.ocr(CBtn)
resultd = ocr.ocr(DBtn)
contentq = ocr.ocr_content(resultq)
contenta = ocr.ocr_content(resulta)
contentb = ocr.ocr_content(resultb)
contentc = ocr.ocr_content(resultc)
contentd = ocr.ocr_content(resultd)
if(len(contentq)>0):
# print(contentq)
logger.info(contentq)
question, optiona,optionb,optionc,optiond = '', '', '', '' ,''
if len(filterQuestion(contentq))>0:
question = filterQuestion(contentq)[0]
# print(question)
if len(question)==0:
            print('Question not recognized!')
            # print('raw data:', resultq)
return res
if len(filterLine(contenta))>0:
optiona = filterLine(contenta)[0]
if len(filterLine(contentb))>0:
optionb = filterLine(contentb)[0]
if len(filterLine(contentc))>0:
optionc = filterLine(contentc)[0]
if len(filterLine(contentd))>0:
optiond = filterLine(contentd)[0]
options = [optiona, optionb, optionc, optiond]
        print('OCR result:', [question,options])
answer_list = list(data_matcher.get_close_match(question))
if len(answer_list) == 0 or list(answer_list[0])[1] < 40:
print('*'*20)
            print('No match found in the question bank')
# time.sleep(2)
return res
else:
            print('Question bank match:', answer_list[0])
            if(answer_list[0][1] <= 50):
                print('the question bank match may be wrong')
answer = answer_list[0][0][1]
res = match_options(answer, options)
if len(res) == 0:
print('*'*20)
                print('option OCR failed')
                return res
            print('Option match:', res)
            if(res[0][1]<=50):
                print('the option match may be wrong')
return res
def get_line_content(img,rect):
res = ''
img_rect = img[rect[0]:rect[1],rect[2]:rect[3]]
result = ocr.ocr(img_rect)
content = ocr.ocr_content(result)
if len(filterLine(content))!=0:
res = filterLine(content)[0]
return res
# """
# 答题(包括魔法史 麻瓜研究 院活 社团魔法史)
def dati(time_chutdown=15, iter_num=15, epoch_num=1, num_multiple=1, class_type='mofashi', plat_type='dashen'):
if num_multiple>=2:
if plat_type == 'mumu':
win_rect_mul_leftgoogle_yunyouxi = win32gui.GetWindowRect(chrom_dashen_hwnd)
else:
win_rect_mul_leftgoogle_yunyouxi = win32gui.GetWindowRect(chrom_wangyiyun_hwnd)
if num_multiple>=3:
win_rect_mul_rightgoogle_dashen = win32gui.GetWindowRect(edge_dashen_hwnd)
question_num = 0
pre_answer = 0
is_answered = 0
total_time = 0
total_num = 0
while True:
if(question_num==iter_num):
epoch_num -= 1
question_num = 0
if epoch_num == 0:
                print('Average time per question this run: {}ms'.format(1.0*total_time/total_num))
break
if plat_type=='mumu':
_, img = get_mumu_rect_img(mumu_hwnd)
else:
            _, img = get_rect_img(chrom_dashen_hwnd,plat_type=plat_type) # screenshot
        # read the countdown timer
if class_type == 'maguayanjiu':
content_countdown = get_countdown(img,'maguayanjiu')
else:
content_countdown = get_countdown(img)
countdown_num = -1
if len(content_countdown)!=0 and content_countdown.isdigit():
countdown_num = int(content_countdown)
# print(countdown_num)
        else: # timer not recognized: look for the start / continue buttons instead
            # recognize the "continue" button in the lower middle of the reward screen, then the match button for the next round
            # if is_answered==1:
            if class_type == 'shetuan': # club quick-answer mode
                x,y = coordinate[pre_answer] # on entry, blind-guess A first; nobody picks A, so we can likely grab the first answer
                zoom_left_click(x, y, times=4)
                continue
            content_continue = get_line_content(img, rect_continue)
            if content_continue == '点击继续': # "click to continue" # 655,750,770,790
x, y = click_continue
zoom_left_click(x, y, times=2)
if num_multiple>=2:
x, y = 747,830
if plat_type=='mumu':
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y+padd2wy,4)
else :
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y,4)
if num_multiple>=3:
x, y = 747,830
left_click(win_rect_mul_rightgoogle_dashen[0]+x,win_rect_mul_rightgoogle_dashen[1]+y+padd2wy,4)
if num_multiple>=1:
time.sleep(4)
x, y = click_continue
zoom_left_click(x, y, times=2)
if num_multiple>=2:
x, y = 747,830
if plat_type=='mumu':
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y+padd2wy,4)
else :
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y,4)
time.sleep(2)
if plat_type=='mumu':
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y+padd2wy,4)
else :
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y,4)
if num_multiple>=3:
x, y = 747,830
left_click(win_rect_mul_rightgoogle_dashen[0]+x,win_rect_mul_rightgoogle_dashen[1]+y+padd2wy,4)
time.sleep(2)
left_click(win_rect_mul_rightgoogle_dashen[0]+x,win_rect_mul_rightgoogle_dashen[1]+y+padd2wy,4)
continue
            # look for the failure "Back" button
            content_failure_back = get_line_content(img, rect_failure_back)
            if content_failure_back == '返回': # "back"
x,y = click_failure_back
zoom_left_click(x, y, times=2)
if num_multiple>=2:
                    pass # click continue
                if num_multiple>=3:
                    pass # click continue
                if num_multiple>=1:
                    pass # club
                if num_multiple>=2:
                    pass # club
                if num_multiple>=3:
                    pass # club
                continue
            # start a class
            content_start = get_line_content(img, rect_start)
            if '匹配上课'==content_start or '上课'==content_start or '学院活动匹配'==content_start or '准备'==content_start: # "match class" / "class" / "house activity match" / "ready"
x,y = click_start
zoom_left_click(x, y, times=1)
time.sleep(3)
if num_multiple>=2:
x, y = 800,800
if plat_type=='mumu':
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y+padd2wy,1)
else :
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y,1)
if num_multiple>=3:
x, y = 800,800
left_click(win_rect_mul_rightgoogle_dashen[0]+x,win_rect_mul_rightgoogle_dashen[1]+y+padd2wy,1)
time.sleep(1)
continue
if countdown_num == time_chutdown or (countdown_num <=20 and countdown_num > 15):
if countdown_num != time_chutdown and is_answered==1:
continue
question_num += 1
            if class_type == 'shetuan': # club quick-answer mode
                x,y = coordinate[pre_answer] # blind-guess A first; nobody picks A, so we can likely grab the first answer
zoom_left_click(x, y, times=4)
is_answered = 0
print('epoch_num:{}, question_num:{}, countdown_num:{} '.format(epoch_num,question_num,countdown_num))
s_time = time.time()*1000
            time.sleep(0.1) # give the question time to render before scanning
if plat_type=='mumu':
_, img = get_mumu_rect_img(mumu_hwnd)
else:
                _, img = get_rect_img(chrom_dashen_hwnd,plat_type=plat_type) # screenshot
            res = get_question_options(img) # read the question as early as possible so we can answer first
            if len(res) == 0: # make sure the question was actually captured
# time.sleep(0.1)
global rect_question
tmp = rect_question
rect_question = rect_question_ex
if plat_type=='mumu':
_, img = get_mumu_rect_img(mumu_hwnd)
else:
                    _, img = get_rect_img(chrom_dashen_hwnd,plat_type=plat_type) # screenshot
res = get_question_options(img)
rect_question = tmp
if len(res) >0:
countdown_num = -1
pre_answer = res[0][2]
x,y = coordinate[res[0][2]]
zoom_left_click(x, y, times=2)
if num_multiple>=2:
x,y = coordinate_mul[res[0][2]][0], coordinate_mul[res[0][2]][1]
if plat_type=='mumu':
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y+padd2wy,2)
else :
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y,2)
if num_multiple>=3:
x,y = coordinate_mul[res[0][2]][0], coordinate_mul[res[0][2]][1]
left_click(win_rect_mul_rightgoogle_dashen[0]+x,win_rect_mul_rightgoogle_dashen[1]+y+padd2wy,2)
                print('Picking option', chr(ord('A')+int(res[0][2])))
                time.sleep(0.5)
                # timing block
                print('OCR time: {}ms'.format(time.time()*1000-s_time),flush=True)
                total_time += time.time()*1000-s_time
                total_num += 1
                is_answered = 1
                # save the frame for error analysis
                img = img[:,:,::-1]
                cv2.imwrite('img/atest_harry_{}_{}.png'.format(plat_type, question_num), img)
                # time.sleep(1)
            else:
                pre_answer = 4 # a harmless coordinate that keeps us from repeating a wrong quick answer
                print('Better copy the answer!')
time.sleep(1)
continue
elif (is_answered == 0 and (countdown_num <=20 and countdown_num > 8)):
            is_answered = 2 # nothing to copy from: blind guess
        if is_answered == 2:
            print('Blind-guessing D for this question')
x,y = coordinate[3]
zoom_left_click(x, y, times=2)
if num_multiple>=2:
x, y = coordinate_mul[3][0], coordinate_mul[3][1]
if plat_type=='mumu':
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y+padd2wy,4)
else :
left_click(win_rect_mul_leftgoogle_yunyouxi[0]+x,win_rect_mul_leftgoogle_yunyouxi[1]+y,4)
if num_multiple>=3:
x, y = coordinate_mul[3][0], coordinate_mul[3][1]
left_click(win_rect_mul_rightgoogle_dashen[0]+x,win_rect_mul_rightgoogle_dashen[1]+y+padd2wy,4)
if plat_type=='mumu':
_, img = get_mumu_rect_img(mumu_hwnd)
else:
                _, img = get_rect_img(chrom_dashen_hwnd,plat_type=plat_type) # screenshot
import datetime
fileName = datetime.datetime.now().strftime('%Y_%m_%d_%H_%M_%S')+'.png'
img = img[:,:,::-1]
cv2.imwrite('img/error_harry_{}_'.format(plat_type)+fileName, img)
is_answered = 3
# Muggle Studies
def maguayanjiu(time_chutdown=12,iter_num=15,epoch_num=1,num_multiple=1, plat_type='dashen',is_preclick=True):
    if is_preclick == False: # fully automatic: enter from the main screen
        # click Muggle Studies
        x, y = 580, 550
        zoom_left_click(x, y)
        time.sleep(2)
        # click Enter
        x, y = 1122, 720
        zoom_left_click(x, y)
        time.sleep(4)
dati(time_chutdown=time_chutdown, iter_num=iter_num, epoch_num=epoch_num, num_multiple=num_multiple, class_type='maguayanjiu', plat_type=plat_type)
# """
# 魔法史
def mofashi(time_chutdown=12,iter_num=15,epoch_num=1,num_multiple=1, plat_type='dashen',is_preclick=True):
if is_preclick == False: # 全自动,从主界面进
# 点击魔法史
x, y = 580, 550
zoom_left_click(x, y)
time.sleep(2)
# 点击进入
x, y = 1122, 720
zoom_left_click(x, y)
time.sleep(4)
dati(time_chutdown=time_chutdown, iter_num=iter_num, epoch_num=epoch_num, num_multiple=num_multiple, class_type='mofashi', plat_type=plat_type)
# House activity
def yuanhuo(time_chutdown=12,iter_num=15,epoch_num=1,num_multiple=1, plat_type='dashen',is_preclick=True):
    if is_preclick == False: # fully automatic: enter from the main screen
        # click the house activity
        x, y = 580, 550
        zoom_left_click(x, y)
        time.sleep(2)
        # click Enter
x, y = 1122, 720
zoom_left_click(x, y)
time.sleep(4)
dati(time_chutdown=time_chutdown, iter_num=iter_num, epoch_num=epoch_num, num_multiple=num_multiple, class_type='yuanhuo', plat_type=plat_type)
# Club
def shetuan(time_chutdown=20,iter_num=20,epoch_num=1,num_multiple=1, plat_type='dashen',is_preclick=True):
    if is_preclick == False: # fully automatic: enter from the main screen
        # click the club
        x, y = 580, 550
        zoom_left_click(x, y)
        time.sleep(2)
        # click Enter
x, y = 1122, 720
zoom_left_click(x, y)
time.sleep(4)
dati(time_chutdown=time_chutdown, iter_num=iter_num, epoch_num=epoch_num, num_multiple=num_multiple, class_type='shetuan', plat_type=plat_type)
# if __name__ == '__main__':
# plat_type = 'mumu'
# ret_rect,img = get_mumu_rect_img(mumu_hwnd)
# if plat_type == 'mumu':
# chrom_dashen_left_padding = ret_rect[2]
# chrom_dashen_top_padding = ret_rect[0]
# det = True
# # for idx in range(6,19):
# # idx = 13
# # img_path = 'img/atest_harry_mumu_{}.png'.format(idx)
# # img_path = 'img/error_harry_dashen_2022_04_19_07_16_04.png'
# # img = cv2.imread(img_path)
# img = img[710:750,1145:1345]
# # clock 8:70,680:745 # cannot recognize 7 and 3
# # q 420:525,155:800
# # a 600:660,103:503
# # b 600:660,807:1207
# # c 703:763,103:503
# # d 703:763,807:1207
# # start 710:750,1145:1345 # 1250 733
# result = ocr.ocr(img,cls=True,det=det) # recognition only 14.34ms, with detection 24.70ms, with angle classification 36.48ms
# # click_instance_center(instance_name='进入', ocr_result=result)
# # logger.info(result)
# print(result)
# # batch-read images
# # for i in range(0,20):
# # ret_rect,ret_img = get_mumu_rect_img(mumu_hwnd) # 截屏
# # ret_img = ret_img[:,:,::-1]
# # cv2.imwrite('img/atest_harry_mumu_{}.png'.format(i), ret_img)
# # time.sleep(0.8)
# plt.imshow(img)
# plt.show()
left_padding, top_padding = 244, 155
if __name__ == '__main__':
    # set up the log file
    make_print_to_file(path='./log/')
    plat_list = ['dashen','wangyiyun','mumu']
    sel = '1'
    sel = input('1. Muggle Studies (Dashen)  2. History of Magic  3. House activity  4. Muggle Studies multi  5. History of Magic multi  6. House activity multi  7. Club (quick answer)  8. Club multi (quick answer)  9. Club  11. Muggle Studies (NetEase cloud)  22. History of Magic (MuMu)\n')
    num_sel = int(sel)
    if num_sel > 20: # mumu
        plat_type = plat_list[2] # select the platform; controls the screenshot size
ret_rect,img = get_mumu_rect_img(mumu_hwnd)
chrom_dashen_left_padding = ret_rect[2]
chrom_dashen_top_padding = ret_rect[0]
elif num_sel > 10: # wangyiyun
        plat_type = plat_list[1] # select the platform; controls the screenshot size
    else: # dashen
        plat_type = plat_list[0] # select the platform; controls the screenshot size
    if plat_type =='wangyiyun': # switch the global parameters
        chrom_dashen_hwnd = chrom_wangyiyun_hwnd # switch the handle
chrom_dashen_left_padding = chrom_wangyiyun_left_padding
chrom_dashen_top_padding = chrom_wangyiyun_top_padding
chrom_dashen_right_pading = chrom_wangyiyun_right_pading
chrom_dashen_bottom_padding = chrom_wangyiyun_bottom_padding
rect_start = wangyiyun_rect_start
rect_question = wangyiyun_rect_question
rect_question_ex = wangyiyun_rect_question_ex
rect_option_a = wangyiyun_rect_option_a
rect_option_b = wangyiyun_rect_option_b
rect_option_c = wangyiyun_rect_option_c
rect_option_d = wangyiyun_rect_option_d
rect_continue = wangyiyun_rect_continue
coordinate = wangyiyun_coordinate
rect_magua_countdown = wangyiyun_rect_magua_countdown
rect_mofashi_countdown = wangyiyun_rect_mofashi_countdown
click_start = wangyiyun_click_start
click_continue = wangyiyun_click_continue
    elif plat_type =='mumu': # switch the global parameters
        # chrom_dashen_hwnd = mumu_hwnd # switch the handle
rect_start = mumu_rect_start
rect_question = mumu_rect_question
rect_question_ex = mumu_rect_question_ex
rect_option_a = mumu_rect_option_a
rect_option_b = mumu_rect_option_b
rect_option_c = mumu_rect_option_c
rect_option_d = mumu_rect_option_d
rect_continue = mumu_rect_continue
coordinate = mumu_coordinate
rect_magua_countdown = mumu_rect_magua_countdown
rect_mofashi_countdown = mumu_rect_mofashi_countdown
click_start = mumu_click_start
click_continue = mumu_click_continue
epoch_num = 1
    epoch = input('How many rounds?\n')
if epoch!='':
epoch_num=int(epoch)
    # Muggle Studies
if num_sel == 1 or num_sel==11:
maguayanjiu(epoch_num=epoch_num, plat_type=plat_type)
    # History of Magic
if num_sel == 2 or num_sel==12 or num_sel==22:
mofashi(epoch_num=epoch_num, plat_type=plat_type)
    # House activity
if num_sel == 3 or num_sel==13 or num_sel==23:
yuanhuo(epoch_num=epoch_num, num_multiple=1, plat_type=plat_type)
    # Muggle Studies, multiple instances
if num_sel == 4:
maguayanjiu(epoch_num=epoch_num, num_multiple=3, plat_type=plat_type)
    # History of Magic, multiple instances
if num_sel == 5 or num_sel==25:
mofashi(epoch_num=epoch_num, num_multiple=3, plat_type=plat_type)
    # House activity, multiple instances
if num_sel == 6 or num_sel==26:
epoch_num=15
yuanhuo(epoch_num=epoch_num, num_multiple=3, plat_type=plat_type)
    # Club (quick answer)
if num_sel == 7 or num_sel==17 or num_sel==27:
epoch_num=1
shetuan(time_chutdown=20,iter_num=20,epoch_num=epoch_num, plat_type=plat_type)
    # Club, multiple instances (quick answer)
if num_sel == 8 or num_sel==28:
epoch_num=1
shetuan(time_chutdown=20,iter_num=22, num_multiple=3, epoch_num=epoch_num, plat_type=plat_type)
    # Club
if num_sel == 9 or num_sel==19 or num_sel==29:
epoch_num=1
mofashi(time_chutdown=20,iter_num=22, epoch_num=epoch_num, plat_type=plat_type)
    # single-image test
# idx = 1
# img_path = 'img/atest_harry_{}_{}.png'.format(plat_type,idx)
# img_path = 'img/error_harry_dashen_2022_04_19_07_16_04.png'
# img = cv2.imread(img_path)
# if plat_type=='mumu':
# _, img = get_mumu_rect_img(mumu_hwnd)
# else:
    #     _, img = get_rect_img(chrom_dashen_hwnd,plat_type=plat_type) # screenshot
# ret_img = img[:,:,::-1]
# cv2.imwrite('img/atest_shetuan_0.png', ret_img)
# [[[1105.0, 816.0], [1261.0, 816.0], [1261.0, 844.0], [1105.0, 844.0]], ('#点击继续', 0.81634045)]
# [[[618.0, 765.0], [823.0, 762.0], [824.0, 788.0], [618.0, 791.0]], ('点击任意位置继续', 0.99693435)]
# imgcrop = img
    # imgcrop = img[755:800,610:835] # test region recognition # q 514[420,525,155,800]
    # det = True
    # result = ocr.ocr(imgcrop,cls=True,det=det) # recognition only 14.34ms, with detection 24.70ms, with angle classification 36.48ms
    # if len(result)==0:
    #     imgcrop = img[415:530,155:800]
    #     result = ocr.ocr(imgcrop,cls=True,det=det) # recognition only 14.34ms, with detection 24.70ms, with angle classification 36.48ms
# logger.info(result)
# plt.imshow(imgcrop)
# plt.show()
    # batch-read images
    # ret_rect, ret_img = get_rect_img(chrom_dashen_hwnd,plat_type=plat_type) # screenshot
    # ret_img = ret_img[:,:,::-1]
    # cv2.imwrite('img/atest_harry_dashen_50.png', ret_img)
    # for i in range(0,20):
    #     ret_rect, ret_img = get_rect_img(chrom_dashen_hwnd,plat_type=plat_type) # screenshot
# ret_img = ret_img[:,:,::-1]
# cv2.imwrite('img/atest_harry_wangyiyun_{}.png'.format(i), ret_img)
# time.sleep(0.7)
#
# instance_name = '进入'
# click_instance_center(instance_name, result)
    # castle 1340 740
    # x, y = 1340, 740
    # zoom_left_click(x, y)
    # activities 1380 230
    # x, y = 1380, 230
    # zoom_left_click(x, y)
    # homework 1380 310
    # x, y = 1380, 310
    # zoom_left_click(x, y)
    # treasure chest 85 185
    # x, y = 85, 185
    # zoom_left_click(x, y)
    # back 50 46
    # x, y = 50, 46
    # zoom_left_click(x, y)
|
{"hexsha": "4042e9c768d03ba2571cbaa03738f7f3c78f227c", "size": 31377, "ext": "py", "lang": "Python", "max_stars_repo_path": "mumu_auto_quiz.py", "max_stars_repo_name": "biechuyangwang/UniversalAutomaticAnswer", "max_stars_repo_head_hexsha": "4c558396cc04b36224e9be4409f80f9654c4aa88", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-12-11T19:11:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-24T19:32:12.000Z", "max_issues_repo_path": "mumu_auto_quiz.py", "max_issues_repo_name": "biechuyangwang/UniversalAutomaticAnswer", "max_issues_repo_head_hexsha": "4c558396cc04b36224e9be4409f80f9654c4aa88", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mumu_auto_quiz.py", "max_forks_repo_name": "biechuyangwang/UniversalAutomaticAnswer", "max_forks_repo_head_hexsha": "4c558396cc04b36224e9be4409f80f9654c4aa88", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.1765402844, "max_line_length": 151, "alphanum_fraction": 0.6258724543, "include": true, "reason": "import numpy", "num_tokens": 10679}
|
"""
Camera module for camera calibration and undistorting camera images
"""
import pickle
import logging
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
class Warper:
def __init__(self):
bottom = 720
left = 210
right = 1110
top = 460
top_left = 580
top_right = 705
dst_left = 320
dst_right = 960
dst_top = 0
self.src = np.float32([[left, bottom], [top_left, top], [
top_right, top], [right, bottom]])
self.dst = np.float32([[dst_left, bottom], [dst_left, dst_top], [
dst_right, dst_top], [dst_right, bottom]])
self.M = cv2.getPerspectiveTransform(self.src, self.dst)
self.Minv = cv2.getPerspectiveTransform(self.dst, self.src)
def warp(self, image):
img_size = (image.shape[1], image.shape[0])
warped = cv2.warpPerspective(
image, self.M, img_size, flags=cv2.INTER_LINEAR)
# crop overly distorted area
warped[:, 0:150] = 0
warped[:, -150:] = 0
return warped
def unwarp(self, image):
img_size = (image.shape[1], image.shape[0])
return cv2.warpPerspective(image, self.Minv, img_size, flags=cv2.INTER_LINEAR)
def draw_src(self, image):
cv2.polylines(image, [np.int32(self.src)], 1, (0, 255, 0), thickness=2)
def draw_dst(self, image):
cv2.polylines(image, [np.int32(self.dst)], 1, (255, 0, 0), thickness=2)
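# Minimal usage sketch (hypothetical image path; assumes a 1280x720 road frame):
#   warper = Warper()
#   frame = cv2.imread('test_images/straight_lines1.jpg')
#   birds_eye = warper.warp(frame)       # perspective-transform to a top-down view
#   restored = warper.unwarp(birds_eye)  # map the top-down view back to camera view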
class PickleFile:
""" Utility class to load/save pickle file"""
def __init__(self, filename):
self.filename = filename
def exists(self):
return os.path.isfile(self.filename)
def save(self, data):
with open(self.filename, 'wb') as file_handle:
pickle.dump(obj=data, file=file_handle)
def load(self):
with open(self.filename, 'rb') as file_handle:
return pickle.load(file=file_handle)
class Camera:
"""Computes objpoints, imgpoints pair based on chessboard images for calibration"""
def __init__(self, nx, ny, calibration_images, calibration_filename):
logging.info('Initializing camera model...')
self.calibration_images = calibration_images
self.calibration_file = PickleFile(calibration_filename)
self.image_shape = self.get_shape(self.calibration_images[0])
self.corner_images = []
self.undistorted_images = []
self.image_names = []
self.calibration = {}
self.nx = nx
self.ny = ny
logging.info('Camera image shape x:%d y:%d',
self.image_shape[0], self.image_shape[1])
self.get_calibration_filenames_()
self.calibrate()
def get_shape(self, image_filename):
""" Get shape of camera images """
shape = cv2.imread(image_filename).shape
return (shape[1], shape[0])
def get_calibration_filenames_(self):
""" Get basename of all calibration images, just to display in plot's title """
for filename in self.calibration_images:
self.image_names.append(os.path.basename(filename))
def save_calibration_file(self):
""" Save calibration file """
self.calibration_file.save(self.calibration)
def load_calibration_file(self):
""" Load calibration file """
self.calibration = self.calibration_file.load()
def calibrate(self):
""" Calculate calibration values based on chessboard images or load values from file """
if self.calibration_file.exists():
logging.info('Loading calibration file: "%s"',
self.calibration_file.filename)
self.load_calibration_file()
else:
logging.info('No calibration file available, calibrating...')
objp = np.zeros((self.nx * self.ny, 3), np.float32)
objp[:, :2] = np.mgrid[0: self.nx, 0: self.ny].T.reshape(-1, 2)
objpoints = []
imgpoints = []
for filename in self.calibration_images:
logging.info('Finding corners in: "%s"', filename)
imgp = self.find_corners(filename)
if imgp is not None:
objpoints.append(objp)
imgpoints.append(imgp)
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
objpoints, imgpoints, self.image_shape, None, None)
self.calibration = {'ret': ret, 'mtx': mtx, 'dist': dist,
'rvecs': rvecs, 'tvecs': tvecs}
self.save_calibration_file()
def find_corners(self, filename):
""" Get identified corner points from a single chessboard image """
bgr_image = cv2.imread(filename)
gray_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
retval, corners = cv2.findChessboardCorners(
gray_image, (self.nx, self.ny), None)
chessboard_corners = cv2.drawChessboardCorners(
bgr_image, (self.nx, self.ny), corners, retval)
self.corner_images.append(chessboard_corners)
if retval:
return corners
else:
logging.warning(
"Unable to find corners in: %s", filename
)
return
def show_corner_images(self, save_file='chessboard_identified_corners.png'):
""" Show calibration images with identified corners """
logging.info("Plotting calibration images with corners")
self.show_images_(self.corner_images, self.image_names, save_file)
def show_undistorted_images(self, save_file='chessboard_undistorted.png'):
""" Show calibration images after undistorting """
logging.info("Plotting undistorted images")
self.undistort_calibration_images_()
self.show_images_(self.undistorted_images,
self.image_names, save_file)
def show_images_(self, images, titles, save_file, rows=5, cols=4):
""" Display calibration images in 5x4 subplot """
figure, axes = plt.subplots(
rows, cols, figsize=(15, 10), frameon=False)
logging.info("Displaying %d images", len(images))
for index, sub in enumerate(axes.flat):
sub.axis('off')
sub.set_title(titles[index], fontsize=9)
sub.imshow(images[index])
plt.tight_layout()
plt.show()
figure.savefig(save_file, dpi='figure')
def undistort_image(self, img):
""" Returns undistorted image """
return cv2.undistort(img, self.calibration['mtx'], self.calibration['dist'])
def undistort_calibration_images_(self):
""" Undistort calibration files """
for calibration_image in self.calibration_images:
bgr_image = cv2.imread(calibration_image)
self.undistorted_images.append(self.undistort_image(bgr_image))
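# Minimal usage sketch (hypothetical paths; assumes chessboards with 9x6 inner corners):
#   import glob
#   camera = Camera(nx=9, ny=6,
#                   calibration_images=glob.glob('camera_cal/calibration*.jpg'),
#                   calibration_filename='calibration.p')
#   undistorted = camera.undistort_image(cv2.imread('test_images/test1.jpg'))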
|
{"hexsha": "a9f047e05165e31d4cc531e8161b7501a46c3e33", "size": 6945, "ext": "py", "lang": "Python", "max_stars_repo_path": "camera.py", "max_stars_repo_name": "flsilves/CarND-Advanced-Lane-Lines", "max_stars_repo_head_hexsha": "5f4c3bf29d52df56371740d576e7a8c6d9db21ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "camera.py", "max_issues_repo_name": "flsilves/CarND-Advanced-Lane-Lines", "max_issues_repo_head_hexsha": "5f4c3bf29d52df56371740d576e7a8c6d9db21ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "camera.py", "max_forks_repo_name": "flsilves/CarND-Advanced-Lane-Lines", "max_forks_repo_head_hexsha": "5f4c3bf29d52df56371740d576e7a8c6d9db21ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.3894230769, "max_line_length": 96, "alphanum_fraction": 0.6125269978, "include": true, "reason": "import numpy", "num_tokens": 1561}
|
# This file was generated, do not modify it. # hide
yMode = [mode(ŷ[i]) for i in 1:length(ŷ)]
y = coerce(y[:,1], OrderedFactor)
yMode = coerce(yMode, OrderedFactor)
confusion_matrix(yMode, y)
|
{"hexsha": "6809c54c8f2d3da959e81ec72256cfff37c1d12f", "size": 191, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "__site/assets/end-to-end/glm/code/ex12.jl", "max_stars_repo_name": "giordano/DataScienceTutorials.jl", "max_stars_repo_head_hexsha": "8284298842e0d77061cf8ee767d0899fb7d051ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 29, "max_stars_repo_stars_event_min_datetime": "2021-08-09T11:35:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-07T06:20:43.000Z", "max_issues_repo_path": "__site/assets/end-to-end/glm/code/ex12.jl", "max_issues_repo_name": "giordano/DataScienceTutorials.jl", "max_issues_repo_head_hexsha": "8284298842e0d77061cf8ee767d0899fb7d051ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 56, "max_issues_repo_issues_event_min_datetime": "2019-10-22T00:06:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-05-21T14:38:09.000Z", "max_forks_repo_path": "__site/assets/end-to-end/glm/code/ex12.jl", "max_forks_repo_name": "giordano/DataScienceTutorials.jl", "max_forks_repo_head_hexsha": "8284298842e0d77061cf8ee767d0899fb7d051ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2019-11-20T16:25:04.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-05T11:55:15.000Z", "avg_line_length": 38.2, "max_line_length": 51, "alphanum_fraction": 0.7068062827, "num_tokens": 62}
|
#%%
import random
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras as keras
import numpy as np
import pickle
from sklearn.model_selection import StratifiedKFold
from math import log2, ceil
import sys
sys.path.append("../../src/")
from lifelong_dnn import LifeLongDNN
from joblib import Parallel, delayed
# %%
def unpickle(file):
    with open(file, 'rb') as fo:
        data = pickle.load(fo, encoding='bytes')
    return data
def get_colors(colors, inds):
c = [colors[i] for i in inds]
return c
def generate_2d_rotation(theta=0, acorn=None):
if acorn is not None:
np.random.seed(acorn)
R = np.array([
[np.cos(theta), np.sin(theta)],
[-np.sin(theta), np.cos(theta)]
])
return R
def generate_gaussian_parity(n, mean=np.array([-1, -1]), cov_scale=1, angle_params=None, k=1, acorn=None):
if acorn is not None:
np.random.seed(acorn)
d = len(mean)
if mean[0] == -1 and mean[1] == -1:
mean = mean + 1 / 2**k
mnt = np.random.multinomial(n, 1/(4**k) * np.ones(4**k))
cumsum = np.cumsum(mnt)
cumsum = np.concatenate(([0], cumsum))
Y = np.zeros(n)
X = np.zeros((n, d))
for i in range(2**k):
for j in range(2**k):
temp = np.random.multivariate_normal(mean, cov_scale * np.eye(d),
size=mnt[i*(2**k) + j])
temp[:, 0] += i*(1/2**(k-1))
temp[:, 1] += j*(1/2**(k-1))
X[cumsum[i*(2**k) + j]:cumsum[i*(2**k) + j + 1]] = temp
if i % 2 == j % 2:
Y[cumsum[i*(2**k) + j]:cumsum[i*(2**k) + j + 1]] = 0
else:
Y[cumsum[i*(2**k) + j]:cumsum[i*(2**k) + j + 1]] = 1
if d == 2:
if angle_params is None:
angle_params = np.random.uniform(0, 2*np.pi)
R = generate_2d_rotation(angle_params)
X = X @ R
else:
raise ValueError('d=%i not implemented!'%(d))
return X, Y.astype(int)
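# Quick sanity check (a sketch; default mean, so the XOR grid offsets apply):
#   X, Y = generate_gaussian_parity(1000, cov_scale=0.1, angle_params=0, acorn=0)
#   plt.scatter(X[:, 0], X[:, 1], c=get_colors(['r', 'b'], Y), s=5); plt.show()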
# %%
def exp(n_sample, n_test, angle_params, n_trees, reps, acorn=None):
    if acorn is not None:
np.random.seed(acorn)
error = np.zeros(reps,dtype=float)
for i in range(reps):
train, label = generate_gaussian_parity(n_sample,cov_scale=0.1,angle_params=angle_params)
test, test_label = generate_gaussian_parity(n_test,cov_scale=0.1,angle_params=angle_params)
l2f = LifeLongDNN()
l2f.new_forest(train, label, n_estimators=n_trees, max_samples=ceil(log2(n_sample)))
uf_task = l2f.predict(test, representation=0, decider=0)
error[i] = 1 - np.sum(uf_task == test_label)/n_test
return np.mean(error,axis=0), np.std(error,ddof=1,axis=0)
#%%
n_trees = range(1,50,1)
n_test = 1000
n_sample = 1500
reps = 20
error1 = np.zeros(len(n_trees),dtype=float)
error2 = np.zeros(len(n_trees),dtype=float)
'''for count,n_tree in enumerate(n_trees):
print(count)
error1[count],_ = exp(n_sample,n_test,angle_params=0,n_trees=n_tree,reps=reps)
for count,n_tree in enumerate(n_trees):
error2[count],_ = exp(n_sample,n_test,angle_params=np.pi/4,n_trees=n_tree,reps=reps)'''
error1 = np.array(
Parallel(n_jobs=-2,verbose=1)(
delayed(exp)(n_sample,n_test,angle_params=0,n_trees=n_tree,reps=reps) for n_tree in n_trees
)
)
error2 = np.array(
Parallel(n_jobs=-2,verbose=1)(
delayed(exp)(n_sample,n_test,angle_params=np.pi/4,n_trees=n_tree,reps=reps) for n_tree in n_trees
)
)
with open('./result/control_xor.pickle','wb') as f:
pickle.dump(error1,f)
with open('./result/control_nxor.pickle','wb') as f:
pickle.dump(error2,f)
#%% plotting the results
n_trees = np.arange(1,50,1)
tmp1 = unpickle('./result/control_xor.pickle')
tmp2 = unpickle('./result/control_nxor.pickle')
err1 = np.zeros(len(n_trees),dtype=float)
err2 = np.zeros(len(n_trees),dtype=float)
for i in range(len(n_trees)):
err1[i] = 1-tmp1[i][0]
err2[i] = 1-tmp2[i][0]
fig, ax = plt.subplots(1,2, figsize=(26,8))
ax[0].plot(n_trees, err1, marker='.', markersize=14, linewidth=3)
ax[1].plot(n_trees, err2, marker='.', markersize=14, linewidth=3)
ax[0].set_title('XOR',fontsize=30)
ax[0].set_ylabel('Accuracy', fontsize=30)
ax[0].set_xlabel('Number of trees', fontsize=30)
ax[0].tick_params(labelsize=27.5)
#ax.set_xticks(rotation=90)
ax[1].set_title('RXOR',fontsize=30)
ax[1].set_ylabel('Accuracy', fontsize=30)
ax[1].set_xlabel('Number of trees', fontsize=30)
ax[1].tick_params(labelsize=27.5)
for i in range(1,50,10):
ax[0].axvline(x = i, linewidth=1.5,alpha=0.5, color='k')
ax[1].axvline(x = i, linewidth=1.5,alpha=0.5, color='k')
plt.savefig('./result/control_xor_rxor.png',dpi=500)
plt.savefig('./result/control_xor_rxor.pdf',dpi=500)
# %%
|
{"hexsha": "fd952ed530a45bdb391e35178a5040b264aa2371", "size": 5043, "ext": "py", "lang": "Python", "max_stars_repo_path": "experiments/xor_rxor_spiral_exp/control_exp.py", "max_stars_repo_name": "mkusman1/progressive-learning", "max_stars_repo_head_hexsha": "af26fc89a14d6d41505d10d0d30a949ef8aa69d9", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-03T12:36:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-03T12:36:28.000Z", "max_issues_repo_path": "experiments/xor_rxor_spiral_exp/control_exp.py", "max_issues_repo_name": "mkusman1/progressive-learning", "max_issues_repo_head_hexsha": "af26fc89a14d6d41505d10d0d30a949ef8aa69d9", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "experiments/xor_rxor_spiral_exp/control_exp.py", "max_forks_repo_name": "mkusman1/progressive-learning", "max_forks_repo_head_hexsha": "af26fc89a14d6d41505d10d0d30a949ef8aa69d9", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.8171428571, "max_line_length": 113, "alphanum_fraction": 0.6012294269, "include": true, "reason": "import numpy", "num_tokens": 1506}
|
import numpy as np
from numpy.random import default_rng
from scipy.special import expit
import numpy.linalg as la
from scipy.sparse import coo_matrix, csr_matrix, hstack, vstack
from prep_data import number_of_features
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
import warnings
import copy
MAX_VALIDATION_NOT_DECREASING = 200
class Node:
def __init__(self, id_node, alpha, x_train, y_train, regularization, nn_flag=False, compute_smoothness_min=True, **args):
if x_train.shape[0] != y_train.shape[0]:
raise ValueError('number of rows in x_train ({}) and y_train ({}) must be equal'.format(x_train.shape[0], y_train.shape[0]))
if alpha < 0.0 or alpha > 1.0:
raise ValueError('parameter alpha must be between 0 and 1')
self.id = id_node
self.alpha = alpha
if not nn_flag:
self.x_train = x_train
self.y_train = y_train
self.tolerance = args.get('tolerance', 1e-6)
self.validation = args.get('validation')
self.val_rat = args.get('val_rat')
self.n = self.x_train.shape[0]
if self.validation is True:
if self.val_rat is None:
self.x_validation = args.get('x_validation')
self.y_validation = args.get('y_validation')
if self.x_validation is None:
raise ValueError('Validation dataset is not provided. Either provide it, or set val_rat to some value')
else:
assert self.val_rat > 0.0
assert self.val_rat < 1.0
train_size = int((1 - self.val_rat) * self.n)
self.train_indices = np.arange(train_size)
self.validation_indices = np.arange(train_size, self.n)
self._add_intercept()
self.d = self.x_train.shape[1]
if len(self.y_train.shape) > 1:
self.d_y = self.y_train.shape[1]
else:
self.d_y = None
self.reg = regularization
if compute_smoothness_min:
if not nn_flag:
self.smoothness = self._smoothness()
self.w_opt = self.find_min()
else:
self.smoothness = None
self.w_opt = self.find_min()
else:
self.smoothness = None
self.w_opt = None
def change_alpha(self, alpha):
self.alpha = alpha
def _add_intercept(self):
ones = np.ones((self.x_train.shape[0], 1))
if type(self.x_train) == np.ndarray:
self.x_train = np.hstack([ones, self.x_train])
else:
if type(self.x_train) == coo_matrix:
fmt = 'coo'
elif type(self.x_train) == csr_matrix:
fmt = 'csr'
self.x_train = hstack([ones, self.x_train], format=fmt)
if self.validation is True and self.val_rat is None:
# if validation dataset is provided
ones = np.ones((self.x_validation.shape[0], 1))
if type(self.x_validation) == np.ndarray:
self.x_validation = np.hstack([ones, self.x_validation])
else:
if type(self.x_train) == coo_matrix:
fmt = 'coo'
elif type(self.x_train) == csr_matrix:
fmt = 'csr'
self.x_validation = hstack([ones, self.x_validation], format=fmt)
def f_value(self, w):
raise NotImplementedError
def fun_value(self, w):
return NotImplementedError
def f_value_general(self, x, y, w):
return NotImplementedError
def fun_value_general(self, x, y, w):
return NotImplementedError
def g(self, x, y, w):
raise NotImplementedError
def grad(self, x, y, w):
return NotImplementedError
def grad_shift(self, x, y, w):
grad_shift = self.alpha * self.grad(x, y, self.compute_local(w))
return grad_shift
def fun_value_shift(self, w):
f_val_shift = self.fun_value(self.compute_local(w))
return f_val_shift
def _smoothness(self, **args):
raise NotImplementedError
def find_min(self, **args):
raise NotImplementedError
def compute_local(self, w):
return self.alpha * w + (1 - self.alpha) * self.w_opt
def compute_local_modified(self, local_w: np.ndarray, global_w: np.ndarray):
"""Compute linear combination of weights for Modified Explicit Mixture Algorithm."""
return self.alpha * global_w + (1 - self.alpha) * local_w
def iterate_size(self):
return NotImplementedError
def local_steps(self, w: np.ndarray, stepsize: float, k: int, x=None, y=None) -> np.ndarray:
"""Perform local steps.
Arguments:
w -- initial vector
stepsize -- stepsize for Gradient Descent
k -- number of gradient steps
x -- features dataset; if None, self.x_train is used (default None)
y -- labels; if None, self.y_train is used (default None)
"""
assert stepsize > 0
if x is None:
x = self.x_train
if y is None:
y = self.y_train
weights = copy.deepcopy(w)
for _ in range(k):
weights -= stepsize * self.grad(x, y, weights)
return weights
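    # Usage sketch (hypothetical names): ten local gradient-descent steps on this
    # node's own data, starting from a copy of the global iterate w0:
    #   w_local = node.local_steps(w0, stepsize=0.1, k=10)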
def fomaml_grad(self, x, y, w, k, inner_loop_lr):
"""Return FOMAML with k SGD steps gradient. Deprecated."""
weights = w
for i in range(k):
weights -= inner_loop_lr * self.grad(x, y, weights)
return self.grad(x, y, weights)
def reptile_update(self, x, y, w, k, inner_loop_lr):
"""Return Reptile update after k inner steps. Deprecated."""
weights = copy.deepcopy(w)
initial_weights = copy.deepcopy(w)
for i in range(k):
weights -= inner_loop_lr * self.grad(x, y, weights)
print((((weights - initial_weights) / inner_loop_lr / k) ** 2).sum())
return (weights - initial_weights) / inner_loop_lr / k
def model_stochastic_update(self, model: str, *args, **kwargs):
"""Return a stochastic update depending on model.
E.g., if model is 'expmix', the function returns explicit_mixture_stochastic_grad(*args, **kwargs).
Arguments:
model -- model for which the update is computed; available values are 'expmix' for Explicit Mixture, 'modexpmix' for Modified Explicit Mixture, 'fomaml' for FOMAML, and 'reptile' for Reptile
"""
if model == 'expmix':
return self.explicit_mixture_stochastic_grad(*args, **kwargs)
if model == 'modexpmix':
return self.modified_explicit_mixture_stochastic_gradient(*args, **kwargs)
if model == 'fomaml':
return self.fomaml_stochastic_grad(*args, **kwargs)
if model == 'reptile':
return self.reptile_stochastic_update(*args, **kwargs)
raise NameError('Unknown model {}. Available models are expmix, modexpmix, fomaml and reptile.'.format(model))
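    # Usage sketch (hypothetical node): dispatching a FOMAML update with 5 inner
    # SGD steps on batches of 32, i.e.
    #   node.model_stochastic_update('fomaml', w, k=5, inner_loop_lr=0.01, batch=32)
    # is equivalent to node.fomaml_stochastic_grad(w, 5, 0.01, batch=32).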
def explicit_mixture_stochastic_grad(self, w: np.ndarray, batch='full'):
if batch == 'full':
return self.grad_shift(self.x_train, self.y_train, w)
else:
generator = default_rng()
choices = generator.choice(self.n, size=batch, replace=False)
return self.grad_shift(self.x_train[choices], self.y_train[choices], w)
def fomaml_stochastic_grad(self, w, k, inner_loop_lr, batch='full'):
assert self.validation
        assert self.val_rat is not None or (self.x_validation is not None and self.y_validation is not None)
weights = copy.deepcopy(w)
if batch == 'full':
if self.val_rat is None:
x = self.x_train
y = self.y_train
else:
x = self.x_train[self.train_indices]
y = self.y_train[self.train_indices]
for i in range(k):
weights -= inner_loop_lr * self.grad(x, y, weights)
else:
generator = default_rng()
for i in range(k):
if self.val_rat is None:
inds = self.n
else:
inds = self.train_indices
choices = generator.choice(inds, size=batch, replace=False)
x = self.x_train[choices]
y = self.y_train[choices]
weights -= inner_loop_lr * self.grad(x, y, weights)
# if (torch.isnan(self.x_train).any() is None or torch.isnan(self.y_train).any() is None):
# warnings.warn("fomaml_stochastic_grad; None value is encountered.")
# print("FOMAML stoch gradient: {}".format(self.fomaml_grad(x, y, w, k, inner_loop_lr)))
if self.val_rat is None:
return self.grad(self.x_validation, self.y_validation, weights)
else:
return self.grad(self.x_train[self.validation_indices], self.y_train[self.validation_indices], weights)
def reptile_stochastic_update(self, w: np.ndarray, k: int, inner_loop_lr: float, batch: [int, 'full']='full', joint_dataset: bool=False):
"""Compute a stochastic Reptile update.
Stochasticity comes from random data selection of size 'batch'. Arguments:
w -- point to estimate the gradient at
k -- number of inner steps
inner_loop_lr -- inner loop learning step size
batch -- batch size
joint_dataset -- if True, data is selected from joint train and validation dataset, otherwise, only from train dataset
"""
if joint_dataset and (self.val_rat is None):
            warnings.warn('Parameter <joint_dataset> is set to True, although the validation dataset is provided as a separate object. Copying slows down the computation.')
weights = copy.deepcopy(w)
initial_weights = copy.deepcopy(w)
assert joint_dataset is not None
if joint_dataset:
assert self.validation is True
if joint_dataset and self.val_rat is None:
joint_x = copy.deepcopy(self.x_train)
joint_y = copy.deepcopy(self.y_train)
if joint_dataset:
assert self.validation
if type(self.x_train) == np.ndarray:
joint_x = np.concatenate((joint_x, self.x_validation))
joint_y = np.concatenate((joint_y, self.y_validation))
elif type(self.x_train) == torch.Tensor:
joint_x = torch.cat((joint_x, self.x_validation))
joint_y = torch.cat((joint_y, self.y_validation))
elif type(self.x_train) == coo_matrix:
joint_x = vstack([joint_x, self.x_validation])
joint_y = np.concatenate((joint_y, self.y_validation))
else:
                raise TypeError('{}: unexpected type of the dataset'.format(self.reptile_stochastic_update.__name__))
else:
# depending on self.val_rat the function chooses points either from all rows of joint_x and joint_y or from self.train_indices
joint_x = self.x_train
joint_y = self.y_train
generator = default_rng()
for i in range(k):
if batch == 'full':
x = joint_x
y = joint_y
else:
if joint_dataset is False and self.val_rat is not None:
assert self.validation is True
arr = self.train_indices
else:
arr = joint_x.shape[0]
choices = generator.choice(arr, size=batch, replace=False)
if type(joint_x) == coo_matrix:
                    warnings.warn('Features data type <coo_matrix> does not support indexing. Computations slow down.')
x = joint_x.toarray()[choices]
else:
x = joint_x[choices]
y = joint_y[choices]
weights -= inner_loop_lr * self.grad(x, y, weights)
return (weights - initial_weights) / inner_loop_lr / k
def modified_explicit_mixture_stochastic_gradient(self, w: np.ndarray, k: int, inner_loop_lr: float, batch='full'):
assert self.validation # output gradient is computed on validation dataset
weights = copy.deepcopy(w)
initial_weights = copy.deepcopy(w) # initial weights imitate global mixture
def modified_grad(x, y, local_weights):
return (1 - self.alpha) * self.grad(x, y, self.compute_local_modified(local_weights, initial_weights))
generator = default_rng()
for i in range(k):
if batch == 'full':
if self.val_rat is None:
x = self.x_train
y = self.y_train
else:
x = self.x_train[self.train_indices]
y = self.y_train[self.train_indices]
else:
if self.val_rat is None:
inds = self.n
else:
inds = self.train_indices
choices = generator.choice(inds, size=batch, replace=False)
if type(self.x_train) == coo_matrix:
                    warnings.warn('Features data type <coo_matrix> does not support indexing. Computations slow down.')
x = self.x_train.toarray()[choices]
else:
x = self.x_train[choices]
y = self.y_train[choices]
weights -= inner_loop_lr * modified_grad(x, y, weights)
if self.val_rat is None:
return self.grad(self.x_validation, self.y_validation, self.compute_local_modified(weights, initial_weights))
else:
return self.grad(self.x_train[self.validation_indices], self.y_train[self.validation_indices], self.compute_local_modified(weights, initial_weights))
class LogReg(Node):
def f_value(self, w):
h = self.get_h(self.x_train, w)
y = self.y_train
zeros = np.zeros_like(h)
return -(np.where(y == 1, np.log(h), zeros) + np.where(y == 0, np.log(1-h), zeros)).mean()
def g(self, x, y, w):
h = self.get_h(x, w)
return x.T.dot(h - y) / y.shape[0]
def fun_value(self, w):
return self.f_value(w) + self.reg * np.sum(w ** 2)/2
def fun_value_general(self, x, y, w):
return self.f_value_general(x, y, w) + self.reg * np.sum(w ** 2)/2
def grad(self, x, y, w):
return self.g(x, y, w) + self.reg * w
def _smoothness(self):
xtx = self.x_train.T.dot(self.x_train)
if type(xtx) == coo_matrix or type(xtx) == csr_matrix:
xtx = xtx.toarray()
return np.max(la.eigvalsh(xtx + self.reg * np.eye(self.d))) / (4 * self.n)
def find_min(self):
max_it = 100000
print("Worker(logreg) {} tolerance {}".format(self.id, self.tolerance))
w = np.zeros(self.x_train.shape[1])
lr = 1/self.smoothness
print("Learning rate is {}".format(lr))
for it in range(max_it):
grad = self.grad(self.x_train, self.y_train, w)
w -= lr * grad
print('{:5d}/{:5d} Iterations: fun_value {:f} norm of the gradient {}'
.format(it+1, max_it, self.fun_value(w), la.norm(grad)), end='\r')
if la.norm(grad) < self.tolerance:
break
print('')
return w
def get_h(self, x, w):
z = x.dot(w)
h = self._sigmoid(z)
return h
@staticmethod
def _sigmoid(x):
return 1.0 / (1 + np.exp(-x))
def iterate_size(self):
return self.d
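# Minimal usage sketch (synthetic data; keyword names follow Node.__init__ above):
#   rng = default_rng(0)
#   x = rng.standard_normal((100, 5))
#   y = (x[:, 0] > 0).astype(float)
#   node = LogReg(0, alpha=0.5, x_train=x, y_train=y, regularization=0.1,
#                 tolerance=1e-4, validation=False)
#   g = node.grad_shift(node.x_train, node.y_train, np.zeros(node.d))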
class MulticlassLogReg(LogReg):
def f_value(self, w):
return self.f_value_general(self.x_train, self.y_train, w)
def f_value_general(self, x, y, w):
h = self.get_h(x, w).reshape((-1))
y = y.reshape((-1))
zeros = np.zeros_like(h)
return -(np.where(y == 1, np.log(h), zeros) + np.where(y == 0, np.log(1-h), zeros)).mean()
def g(self, x, y, w):
h = self.get_h(x, w)
assert y.ndim == 2
assert y.shape == h.shape
return (x.T.dot(h - y)).reshape((-1)) / y.size
def get_h(self, x, w):
"""Return probability of belonging to each class."""
z = x @ w.reshape((self.d, -1))
h = self._sigmoid(z)
return h
def find_min(self):
max_it = 100000
print("Worker(logreg) {} tolerance {}".format(self.id, self.tolerance))
w = np.zeros(self.d * self.y_train.shape[1])
lr = 1 / self.smoothness
print("Learning rate is {}".format(lr))
for it in range(max_it):
grad = self.grad(self.x_train, self.y_train, w)
w -= lr * grad
print('{:5d}/{:5d} Iterations: fun_value {:f} norm of the gradient {}'
.format(it+1, max_it, self.fun_value(w), la.norm(grad)), end='\r')
if la.norm(grad) < self.tolerance:
break
print('')
return w
def iterate_size(self):
return self.d * self.y_train.shape[1]
class LogRegNoncvx(Node):
"""
Implement logistic regression function with a nonconvex regularizer.
See Tran-Dinh et al. "Hybrid Stochastic Gradient Descent Algorithms for Stochastic Nonconvex Optimization"
"""
def f_value(self, w):
h = self.get_h(self.x_train, w)
y = self.y_train
zeros = np.zeros_like(h)
return -(np.where(y == 1, np.log(h), zeros) + np.where(y == 0, np.log(1-h), zeros)).mean()
def fun_value(self, w):
return self.f_value(w) - self.reg * ( 1 / (1 + w ** 2) ).sum() + self.d * self.reg
def g(self, x, y, w):
h = self.get_h(x, w)
return x.T.dot(h - y) / y.shape[0]
def grad(self, x, y, w):
return self.g(x, y, w) + 2 * self.reg * w * (1 / (1 + w ** 2) ** 2)
def _smoothness(self):
xtx = self.x_train.T.dot(self.x_train)
# the hessian of the regularizer is bounded above by 2 * self.reg * I
return np.max(la.eigvalsh(xtx.toarray())) / (4 * self.n) + 2 * self.reg
def find_min(self):
max_it = 100000
w = np.zeros(self.x_train.shape[1])
lr = 1/self.smoothness
for it in range(max_it):
grad = self.grad(self.x_train, self.y_train, w)
w -= lr * grad
# print('{:5d}/{:5d} Iterations: fun_value {:f}'
# .format(it+1, max_it, self.fun_value(w)), end='\r')
if la.norm(grad) < self.tolerance:
break
# print('')
return w
def get_h(self, x, w):
z = x.dot(w)
h = self._sigmoid(z)
return h
@staticmethod
def _sigmoid(x):
return 1 / (1 + np.exp(-x))
def iterate_size(self):
return self.d
class LinReg(Node):
def f_value(self, w):
z = self.x_train.dot(w) - self.y_train
return 1/2 * np.mean(z**2)
def g(self, x, y, w):
xtx = x.T.dot(x)
xty = x.T.dot(y)
return (xtx.dot(w) - xty) / y.shape[0]
def grad(self, x, y, w):
return self.g(x, y, w) + self.reg * w
def fun_value(self, w):
return self.f_value(w) + self.reg * np.sum(w ** 2)/2
def _smoothness(self):
xtx = self.x_train.T.dot(self.x_train)
return np.max(la.eigvalsh(xtx.toarray() + self.reg * np.eye(self.d))) / self.n
def find_min(self):
xtx = self.x_train.T.dot(self.x_train)
xty = self.x_train.T.dot(self.y_train)
w = la.solve(xtx.toarray() + self.reg * np.eye(self.d), xty)
# print('The exact formula for minimum computed. Fun. value: {:f}'
# .format(self.fun_value(w)))
return w
def iterate_size(self):
return self.d
class NN_2layer_regression(nn.Module):
def __init__(self, input_dim, interm_dim1, interm_dim2):
super().__init__()
self.d = input_dim
self.interm_dim1 = interm_dim1
self.interm_dim2 = interm_dim2
self.fc1 = nn.Linear(input_dim, interm_dim1)
self.fc2 = nn.Linear(interm_dim1, interm_dim2)
self.fc3 = nn.Linear(interm_dim2, 1)
self.modules_sizes = [self.d * self.interm_dim1,
self.interm_dim1,
self.interm_dim1 * self.interm_dim2,
self.interm_dim2,
self.interm_dim2,
1]
self.w_opt = None
self.alpha = None
        self.mixed_linear_weights = None
    def set_mixed_linear_weights(self, w):
        self.mixed_linear_weights = self.alpha * w + (1 - self.alpha) * self.w_opt
        self.mixed_linear_weights.retain_grad()
        fc_parameters = torch.split(self.mixed_linear_weights, self.modules_sizes)
ind = 0
for module in self.modules():
if type(module) == nn.Linear:
module.weight = torch.nn.Parameter(fc_parameters[ind].view(module.weight.shape))
ind += 1
module.bias = torch.nn.Parameter(fc_parameters[ind].view(module.bias.shape))
ind += 1
def forward(self, x, w=None):
if w is not None:
assert w.requires_grad
assert self.alpha is not None
assert self.w_opt is not None
self.set_mixed_linear_weights(w)
out = torch.tanh(self.fc1(x))
out = torch.tanh(self.fc2(out))
out = self.fc3(out)
return out
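# Shape sanity check (a sketch; hypothetical dimensions):
#   net = NN_2layer_regression(input_dim=3, interm_dim1=40, interm_dim2=40)
#   out = net(torch.randn(8, 3))  # -> tensor of shape (8, 1)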
class NN_1d_regression(Node):
def __init__(self, id_node, alpha, x_train, y_train, tolerance, compute_smoothness_min=True, validation=False, x_validation=None, y_validation=None, gpu=False, stepsize=1e-2, interm_dim1=40, interm_dim2=40):
if gpu == True:
assert torch.cuda.is_available() == True
self.interm_dim1 = interm_dim1
self.interm_dim2 = interm_dim2
self.stepsize = stepsize
self.tolerance = tolerance
self.n, self.d = x_train.shape
        self.separators = np.array([self.d * self.interm_dim1, # fc1.weight
                                    (self.d + 1) * self.interm_dim1, # fc1.bias
                                    (self.d + 1) * self.interm_dim1 + self.interm_dim1 * self.interm_dim2, # fc2.weight
                                    (self.d + 1) * self.interm_dim1 + (self.interm_dim1 + 1) * self.interm_dim2, # fc2.bias
                                    (self.d + 1) * self.interm_dim1 + (self.interm_dim1 + 2) * self.interm_dim2], dtype=int) # fc3.weight
self.model = NN_2layer_regression(self.d, self.interm_dim1, self.interm_dim2)
self.x_train = torch.from_numpy(x_train).float()
self.y_train = torch.from_numpy(np.array(y_train)).view(-1, 1).float()
self.memory = None # self._smoothness() for this class computes optimal point which is preserved in self.memory and passed in self.find_min()
self.validation = validation
self.x_validation = torch.from_numpy(x_validation).float() if validation else None
self.y_validation = torch.from_numpy(np.array(y_validation)).view(-1, 1).float() if validation else None
if gpu:
self.model = self.model.to('cuda')
self.x_train = self.x_train.to('cuda')
self.y_train = self.y_train.to('cuda')
self.x_validation = self.x_validation.to('cuda')
self.y_validation = self.y_validation.to('cuda')
self.gpu = gpu
super().__init__(id_node, alpha, x_train, y_train, 0, True, compute_smoothness_min)
def set_weights(self, w):
fc_parameters = np.split(w, self.separators)
device = torch.device('cuda') if self.gpu else torch.device('cpu')
ind = 0
with torch.no_grad():
for module in self.model.modules():
if type(module) == nn.Linear:
# print("fc_parameters[{}] shape = {}".format(ind, fc_parameters[ind].shape))
# print("weight_shape_to_transform = {}".format(module.weight.shape))
module.weight = torch.nn.Parameter(torch.from_numpy(fc_parameters[ind]).float().view(module.weight.shape).to(device))
ind += 1
# print("fc_parameters[{}] shape = {}".format(ind, fc_parameters[ind].shape))
# print("bias_shape_to_transform = {}".format(module.bias.shape))
module.bias = torch.nn.Parameter(torch.from_numpy(fc_parameters[ind]).float().view(module.bias.shape).to(device))
ind += 1
def get_weights(self):
fc_parameters = []
with torch.no_grad():
for module in self.model.modules():
if type(module) == nn.Linear:
fc_parameters.append(module.weight.data.clone().detach().view(-1).cpu().numpy())
fc_parameters.append(module.bias.data.clone().detach().view(-1).cpu().numpy())
return np.concatenate(fc_parameters)
def iterate_size(self):
return (self.d + 1) * self.interm_dim1 + (self.interm_dim1 + 1) * self.interm_dim2 + self.interm_dim2 + 1
def f_value(self, w):
self.set_weights(w)
criterion = nn.MSELoss()
return criterion(self.model(self.x_train), self.y_train).detach().numpy()
def fun_value(self, w):
return self.f_value(w)
def g(self, x, y, w):
self.set_weights(w)
self.model.zero_grad()
criterion = nn.MSELoss()
mse = criterion(self.model(x), y)
mse.backward()
g = []
ind = 0
for module in self.model.modules():
if type(module) == nn.Linear:
g.append(module.weight.grad.clone().detach().numpy().flatten()) # append weight gradient
g.append(module.bias.grad.clone().detach().numpy().flatten()) # append bias gradient
g = np.concatenate(g)
return g
def grad(self, x, y, w):
return self.g(x, y, w)
# def _smoothness(self):
# if not self.gpu:
# return self._smoothness_non_gpu()
# return None
# def _smoothness_non_gpu(self):
# print('Computing Lipschitz smoothness constant...')
# L = 0.1
# max_L = 0.1
# max_it = 60000
# w = np.random.randn(self.iterate_size())
# tol = self.tolerance
# max_L_constant = 0.1 * 2 ** 40
# grad_norm = None
# min_f_value = float('Inf')
# min_f_value_validation = float('Inf')
# validation_not_decreasing_counter = 0
# data_batch = int(0.7 * self.n)
# generator = default_rng()
# for it in range(max_it):
# grad = self.grad(self.x_train, self.y_train, w)
# L = 0.1
# curr_fun_value = self.fun_value(w)
# while True:
# if L > max_L_constant: # if L becomes too large, jump to another random point w
# w = np.random.randn(self.iterate_size())
# grad = self.grad(self.x_train, self.y_train, w)
# L = 0.1
# curr_fun_value = self.fun_value(w)
# print('Current L = {:f}'.format(L), end='\r')
# f_value_ = self.fun_value(w - grad / L)
# if curr_fun_value - f_value_ > 0:
# break
# L *= 2.0
# w -= grad / L
# grad_norm = la.norm(grad)
# if not self.validation:
# if f_value_ < min_f_value:
# min_f_value = f_value_
# self.memory = w
# if max_L < L:
# max_L = L
# print(' {:5d}/{:5d} Iterations: fun_value {:f} grad_norm {:f}'.format(it+1, max_it, f_value_, grad_norm), end='\r')
# if grad_norm < tol and f_value_ < tol ** 2:
# break
# else:
# self.set_weights(w)
# criterion = nn.MSELoss()
# f_value_validation = criterion(self.model(self.x_validation), self.y_validation).detach().numpy()
# if f_value_validation < min_f_value_validation:
# min_f_value_validation = f_value_validation
# self.memory = w
# else:
# validation_not_decreasing_counter += 1
# if validation_not_decreasing_counter >= MAX_VALIDATION_NOT_DECREASING:
# break
# print(' {:5d}/{:5d} Iterations: fun_value {:f} grad_norm {:f} fun_value_on_validation {:f}'.format(it+1, max_it, f_value_, grad_norm, f_value_validation), end='\r')
# print('')
# print('Worker {} smoothness constant: {}'.format(self.id, max_L))
# return max_L
def find_min(self):
print('Finding minimum of the local model. Worker id is {}'.format(self.id))
max_it = 60000
w = np.random.randn(self.iterate_size())
best_w = copy.deepcopy(w)
grad_norm = None
fun_value = None
min_f_value_validation = float('Inf')
validation_not_decreasing_counter = 0
criterion = nn.MSELoss()
generator = default_rng()
data_batch = int(0.7 * self.n) # hardcoded
for it in range(max_it):
fun_value = self.fun_value(w)
choices = generator.choice(self.n, size=data_batch, replace=False)
x = self.x_train[choices]
y = self.y_train[choices]
grad = self.grad(x, y, w)
w -= self.stepsize * grad
if self.validation:
self.set_weights(w)
out_val = self.model(self.x_validation)
loss_val = criterion(out_val, self.y_validation).to('cpu').item()
if loss_val < min_f_value_validation:
min_f_value_validation = loss_val
validation_not_decreasing_counter = 0
best_w = copy.deepcopy(w)
else:
validation_not_decreasing_counter += 1
print('{:5d}/{:5d} Iterations: fun_value {:f} fun_value_on_validation {:f}'.format(
it + 1, max_it, fun_value, loss_val), end='\r', flush=True)
if validation_not_decreasing_counter >= MAX_VALIDATION_NOT_DECREASING:
break
else:
if la.norm(self.grad(self.x_train, self.y_train, w)) < self.tolerance and self.fun_value(w) < self.tolerance ** 2:
return w
                print('{:5d}/{:5d} Iterations: fun_value {:f}'.format(it + 1, max_it, fun_value), end='\r', flush=True)
        return best_w if self.validation else w
|
{"hexsha": "fc7cef95a51756866daa1d827888d67673bbda44", "size": 31725, "ext": "py", "lang": "Python", "max_stars_repo_path": "models.py", "max_stars_repo_name": "gaseln/FLIX_small_scale_experiments", "max_stars_repo_head_hexsha": "af9ebd7f192fc0f67a6a94af7939fd3d6f548bd6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-04T07:31:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-04T07:31:52.000Z", "max_issues_repo_path": "models.py", "max_issues_repo_name": "gaseln/FLIX_small_scale_experiments", "max_issues_repo_head_hexsha": "af9ebd7f192fc0f67a6a94af7939fd3d6f548bd6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "models.py", "max_forks_repo_name": "gaseln/FLIX_small_scale_experiments", "max_forks_repo_head_hexsha": "af9ebd7f192fc0f67a6a94af7939fd3d6f548bd6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.7252888318, "max_line_length": 228, "alphanum_fraction": 0.5523089046, "include": true, "reason": "import numpy,from numpy,from scipy", "num_tokens": 7322}
|
[GOAL]
α : Type u_1
β : Type u_2
inst✝ : TopologicalSpace α
r : Rel β α
l : Filter β
a : α
⊢ RTendsto' r l (𝓝 a) ↔ ∀ (s : Set α), IsOpen s → a ∈ s → Rel.preimage r s ∈ l
[PROOFSTEP]
rw [rtendsto'_def]
[GOAL]
α : Type u_1
β : Type u_2
inst✝ : TopologicalSpace α
r : Rel β α
l : Filter β
a : α
⊢ (∀ (s : Set α), s ∈ 𝓝 a → Rel.preimage r s ∈ l) ↔ ∀ (s : Set α), IsOpen s → a ∈ s → Rel.preimage r s ∈ l
[PROOFSTEP]
apply all_mem_nhds_filter
[GOAL]
case hf
α : Type u_1
β : Type u_2
inst✝ : TopologicalSpace α
r : Rel β α
l : Filter β
a : α
⊢ ∀ (s t : Set α), s ⊆ t → Rel.preimage r s ⊆ Rel.preimage r t
[PROOFSTEP]
apply Rel.preimage_mono
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
h : PContinuous f
⊢ IsOpen (PFun.Dom f)
[PROOFSTEP]
rw [← PFun.preimage_univ]
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
h : PContinuous f
⊢ IsOpen (PFun.preimage f Set.univ)
[PROOFSTEP]
exact h _ isOpen_univ
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
⊢ PContinuous f ↔ ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
[PROOFSTEP]
constructor
[GOAL]
case mp
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
⊢ PContinuous f → ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
[PROOFSTEP]
intro h x y h'
[GOAL]
case mp
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
h : PContinuous f
x : α
y : β
h' : y ∈ f x
⊢ PTendsto' f (𝓝 x) (𝓝 y)
[PROOFSTEP]
simp only [ptendsto'_def, mem_nhds_iff]
[GOAL]
case mp
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
h : PContinuous f
x : α
y : β
h' : y ∈ f x
⊢ ∀ (s : Set β), (∃ t, t ⊆ s ∧ IsOpen t ∧ y ∈ t) → ∃ t, t ⊆ PFun.preimage f s ∧ IsOpen t ∧ x ∈ t
[PROOFSTEP]
rintro s ⟨t, tsubs, opent, yt⟩
[GOAL]
case mp.intro.intro.intro
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
h : PContinuous f
x : α
y : β
h' : y ∈ f x
s t : Set β
tsubs : t ⊆ s
opent : IsOpen t
yt : y ∈ t
⊢ ∃ t, t ⊆ PFun.preimage f s ∧ IsOpen t ∧ x ∈ t
[PROOFSTEP]
exact ⟨f.preimage t, PFun.preimage_mono _ tsubs, h _ opent, ⟨y, yt, h'⟩⟩
[GOAL]
case mpr
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
⊢ (∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)) → PContinuous f
[PROOFSTEP]
intro hf s os
[GOAL]
case mpr
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
⊢ IsOpen (PFun.preimage f s)
[PROOFSTEP]
rw [isOpen_iff_nhds]
[GOAL]
case mpr
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
⊢ ∀ (a : α), a ∈ PFun.preimage f s → 𝓝 a ≤ 𝓟 (PFun.preimage f s)
[PROOFSTEP]
rintro x ⟨y, ys, fxy⟩ t
[GOAL]
case mpr.intro.intro
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
⊢ t ∈ 𝓟 (PFun.preimage f s) → t ∈ 𝓝 x
[PROOFSTEP]
rw [mem_principal]
[GOAL]
case mpr.intro.intro
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
⊢ PFun.preimage f s ⊆ t → t ∈ 𝓝 x
[PROOFSTEP]
intro (h : f.preimage s ⊆ t)
[GOAL]
case mpr.intro.intro
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s ⊆ t
⊢ t ∈ 𝓝 x
[PROOFSTEP]
change t ∈ 𝓝 x
[GOAL]
case mpr.intro.intro
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s ⊆ t
⊢ t ∈ 𝓝 x
[PROOFSTEP]
apply mem_of_superset _ h
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s ⊆ t
⊢ PFun.preimage f s ∈ 𝓝 x
[PROOFSTEP]
have h' : ∀ s ∈ 𝓝 y, f.preimage s ∈ 𝓝 x := by
intro s hs
have : PTendsto' f (𝓝 x) (𝓝 y) := hf fxy
rw [ptendsto'_def] at this
exact this s hs
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s ⊆ t
⊢ ∀ (s : Set β), s ∈ 𝓝 y → PFun.preimage f s ∈ 𝓝 x
[PROOFSTEP]
intro s hs
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s✝ : Set β
os : IsOpen s✝
x : α
y : β
ys : y ∈ s✝
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s✝ ⊆ t
s : Set β
hs : s ∈ 𝓝 y
⊢ PFun.preimage f s ∈ 𝓝 x
[PROOFSTEP]
have : PTendsto' f (𝓝 x) (𝓝 y) := hf fxy
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s✝ : Set β
os : IsOpen s✝
x : α
y : β
ys : y ∈ s✝
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s✝ ⊆ t
s : Set β
hs : s ∈ 𝓝 y
this : PTendsto' f (𝓝 x) (𝓝 y)
⊢ PFun.preimage f s ∈ 𝓝 x
[PROOFSTEP]
rw [ptendsto'_def] at this
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s✝ : Set β
os : IsOpen s✝
x : α
y : β
ys : y ∈ s✝
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s✝ ⊆ t
s : Set β
hs : s ∈ 𝓝 y
this : ∀ (s : Set β), s ∈ 𝓝 y → PFun.preimage f s ∈ 𝓝 x
⊢ PFun.preimage f s ∈ 𝓝 x
[PROOFSTEP]
exact this s hs
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s ⊆ t
h' : ∀ (s : Set β), s ∈ 𝓝 y → PFun.preimage f s ∈ 𝓝 x
⊢ PFun.preimage f s ∈ 𝓝 x
[PROOFSTEP]
show f.preimage s ∈ 𝓝 x
[GOAL]
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s ⊆ t
h' : ∀ (s : Set β), s ∈ 𝓝 y → PFun.preimage f s ∈ 𝓝 x
⊢ PFun.preimage f s ∈ 𝓝 x
[PROOFSTEP]
apply h'
[GOAL]
case a
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s ⊆ t
h' : ∀ (s : Set β), s ∈ 𝓝 y → PFun.preimage f s ∈ 𝓝 x
⊢ s ∈ 𝓝 y
[PROOFSTEP]
rw [mem_nhds_iff]
[GOAL]
case a
α : Type u_1
β : Type u_2
inst✝¹ : TopologicalSpace α
inst✝ : TopologicalSpace β
f : α →. β
hf : ∀ {x : α} {y : β}, y ∈ f x → PTendsto' f (𝓝 x) (𝓝 y)
s : Set β
os : IsOpen s
x : α
y : β
ys : y ∈ s
fxy : y ∈ f x
t : Set α
h : PFun.preimage f s ⊆ t
h' : ∀ (s : Set β), s ∈ 𝓝 y → PFun.preimage f s ∈ 𝓝 x
⊢ ∃ t, t ⊆ s ∧ IsOpen t ∧ y ∈ t
[PROOFSTEP]
exact ⟨s, Set.Subset.refl _, os, ys⟩
|
{"mathlib_filename": "Mathlib.Topology.Partial", "llama_tokens": 4179}
|
import numpy as np
import aimsprop as ai
# => Parse/Align/Weight/Clean <= #
# Parse a series of FMS90 trajectories that Hayley has run for Stilbene
# Troubles: 11, 12 (not finished), 15
# trajs = [ai.parse_fms90('/home/hweir/stilbene/5-aims/s0_extension/aims_%04d/job1' % x) for x in range(1,16+1) if x not in [11, 12, 15]]
trajs = [ai.parse_fms90("/home/parrish/chem/stil/6-aims/%04d" % x) for x in [1, 2]]
# TODO: Align trajectories to IC transition dipole moment (on z) and weight by oscillator strength at IC
# Merge the trajectories into one super-big Bundle with uniform weights
traj = ai.Bundle.merge(trajs, ws=[1.0 / len(trajs)] * len(trajs), labels=[1, 2])
# Compute properties at ~1 fs intervals, removing nonsense due to adaptive timesteps
ts = np.arange(0.0, max(traj.ts), 400.0) # TODO: Cleaner edges
traj = traj.interpolate_nearest(ts)
# => Tag with X-Ray Scattering Signal <= #
# Grab form factors for all atoms in this trajectory
factors = ai.iam.AtomicFormFactor.build_factors(traj.frames[0], mode="xray")
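# ("iam" presumably stands for the independent atom model, the standard
# approximation that builds the scattering signal from atomic form factors)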
# The q values to compute scattering cross section at (in A^-1)
q = np.linspace(0.5, 3.0, 100)
# Compute the diffraction pattern moments (l=0,2,4)
ai.iam.compute_diffraction_moments(
traj=traj,
key="xray",
q=q,
factors=factors,
nlebedev=74,
nlebedev2=74,
nomega2=12,
nlegendre=4,
print_level=True,
)
# Plots of the result
ai.plot_vector(
"I0.pdf",
traj,
"xray-0",
y=q,
ylabel=r"$q [\AA{}^{-1}]$",
time_units="fs",
diff=True,
)
ai.plot_vector(
"I2.pdf",
traj,
"xray-2",
y=q,
ylabel=r"$q [\AA{}^{-1}]$",
time_units="fs",
diff=True,
)
ai.plot_vector(
"I4.pdf",
traj,
"xray-4",
y=q,
ylabel=r"$q [\AA{}^{-1}]$",
time_units="fs",
diff=True,
) # should be zero
# Compute the diffraction pattern moments (l=0 only)
ai.iam.compute_diffraction_moment0(
traj=traj,
key="xray0",
q=q,
factors=factors,
nlebedev=74,
print_level=True,
)
# Plots of the result
ai.plot_vector(
"I00.pdf",
traj,
"xray0-0",
y=q,
ylabel=r"$q [\AA{}^{-1}]$",
time_units="fs",
diff=True,
)
|
{"hexsha": "b6a95ab9125947de0051a09a5fa111a9b2be2c4e", "size": 2155, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/xray-1/example.py", "max_stars_repo_name": "mtzgroup/aimsprop", "max_stars_repo_head_hexsha": "464d88ad7a817da73027fd2ab7b12476bf59f83d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-28T13:11:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T13:11:56.000Z", "max_issues_repo_path": "examples/xray-1/example.py", "max_issues_repo_name": "mtzgroup/aimsprop", "max_issues_repo_head_hexsha": "464d88ad7a817da73027fd2ab7b12476bf59f83d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2021-03-17T17:53:58.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-17T17:59:25.000Z", "max_forks_repo_path": "examples/xray-1/example.py", "max_forks_repo_name": "mtzgroup/aimsprop", "max_forks_repo_head_hexsha": "464d88ad7a817da73027fd2ab7b12476bf59f83d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-04-05T08:36:35.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-20T22:12:12.000Z", "avg_line_length": 25.3529411765, "max_line_length": 137, "alphanum_fraction": 0.6417633411, "include": true, "reason": "import numpy", "num_tokens": 709}
|
\documentclass[12pt]{article}
\usepackage{lastpage}
\usepackage{fancyhdr}
\usepackage{graphicx}
\usepackage{lipsum} % for dummy text
\pagestyle{fancy}
\fancyhf{}
\setlength{\headheight}{30pt}
\renewcommand{\headrulewidth}{1pt}
\renewcommand{\footrulewidth}{2pt}
\lhead{\includegraphics[width=1cm]{example-image-a}}
\rhead{}
\lfoot{ABC}
\rfoot{\thepage/\pageref{LastPage}}
\begin{document}
\subsection{One}
\lipsum[1-3]
\subsection{Two}
\lipsum[4-6]
\end{document}
|
{"hexsha": "0eaf0ec18d7b4ac6d9968c56ff8c37314e26b75d", "size": 486, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HS Bremerhaven Latex Vorlage/examples/header-footer-lastpage.tex", "max_stars_repo_name": "DiesDasJenes/Latex-Template-HS-Bremerhaven", "max_stars_repo_head_hexsha": "6579125cbaafe47b9d67508830673b7d1e7ad409", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-08-26T18:15:43.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-26T18:15:43.000Z", "max_issues_repo_path": "HS Bremerhaven Latex Vorlage/examples/header-footer-lastpage.tex", "max_issues_repo_name": "DiesDasJenes/Latex-Template-HS-Bremerhaven", "max_issues_repo_head_hexsha": "6579125cbaafe47b9d67508830673b7d1e7ad409", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HS Bremerhaven Latex Vorlage/examples/header-footer-lastpage.tex", "max_forks_repo_name": "DiesDasJenes/Latex-Template-HS-Bremerhaven", "max_forks_repo_head_hexsha": "6579125cbaafe47b9d67508830673b7d1e7ad409", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.1428571429, "max_line_length": 52, "alphanum_fraction": 0.7654320988, "num_tokens": 170}
|
from __future__ import division
import math
import os
import numpy as np
import tensorflow as tf
from sklearn import metrics
from sklearn.datasets import load_svmlight_file
from sklearn.externals.joblib import Memory
from sklearn.model_selection import train_test_split
from tensorflow.examples.tutorials.mnist import input_data
from utils import *
mnist = input_data.read_data_sets('MNIST_data', one_hot=False)
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
############# define input options ################
flags = tf.flags
flags.DEFINE_string("output_file", None,
"Where the training/test experiment data is stored.")
flags.DEFINE_float("request_ratio", 0.7, "Positive / Negative data ratio. Default value 0.7:0.3")
flags.DEFINE_integer("batch_size", 32, "batch_size (default = 32).")
flags.DEFINE_integer("num_time_steps", 30000, "number of time steps for the AUC optimization")
flags.DEFINE_integer("num_epochs", 10, "number of times to repeat the same experiment")
FLAGS = flags.FLAGS
###################################################
"""
# Process MNIST
"""
# training set
train_num = mnist.train.images.shape[0]
test_num = mnist.test.images.shape[0]
mnist_train = mnist.train.images.reshape([train_num, 28, 28, 1]).astype(np.float32)
train_mean = np.mean(mnist_train, axis=0)
train_norm = np.linalg.norm(mnist_train)
# mean 0
mnist_train = mnist_train - np.stack([train_mean for _ in range(train_num)])
# norm 1
for i in range(train_num):
mnist_train[i, :] = mnist_train[i, :] / np.linalg.norm(mnist_train[i, :])
# testing set
mnist_test = mnist.test.images.reshape([test_num, 28, 28, 1]).astype(np.float32)
test_mean = np.mean(mnist_test, axis=0)
test_norm = np.linalg.norm(mnist_test)
# mean 0
mnist_test = mnist_test - np.stack([test_mean for _ in range(test_num)])
# norm 1
for i in range(test_num):
mnist_test[i, :] = mnist_test[i, :] / np.linalg.norm(mnist_test[i, :])
p = FLAGS.request_ratio
# partition training set into +/- groups: ratio=(7:3)
mnist_train_single_labels = []
for i in range(np.shape(mnist.train.labels)[0]):
if mnist.train.labels[i] > np.ceil(10 * (1 - p)) - 1:
mnist_train_single_labels.append(1)
else:
mnist_train_single_labels.append(0)
# further reshuffle
new_idx = np.random.permutation(mnist.train.labels.shape[0])
mnist_train = mnist_train[new_idx]
mnist_train_single_labels = np.asarray(mnist_train_single_labels)[new_idx]
# partition testing set into +/- groups: ratio=(7:3)
mnist_test_single_labels = []
for i in range(np.shape(mnist.test.labels)[0]):
    if mnist.test.labels[i] > np.ceil(10 * (1 - p)) - 1:
mnist_test_single_labels.append(1)
else:
mnist_test_single_labels.append(0)
# as np array
mnist_test_single_labels = np.asarray(mnist_test_single_labels)
W_range = 1.0
batch_size = FLAGS.batch_size
# AUC neural net model
class AUCModel(object):
global p
def __init__(self):
self._build_model()
def _build_model(self):
self.X = tf.placeholder(tf.float32, [None, 28, 28, 1])
self.y_sing = tf.placeholder(tf.float32, [None])
with tf.variable_scope('feature_extraction'):
#CNN layers:
self.W_conv0 = tf.get_variable("W_conv0",[5,5,1,4],dtype=tf.float32,initializer=tf.random_normal_initializer(0.0,1e-3))
#self.b_conv0 = tf.get_variable("b_conv0",[4],dtype=tf.float32,initializer=tf.random_normal_initializer(0.0,1e-3))
self.h_conv0 = conv2d_stride(self.X, self.W_conv0, 2) # + self.b_conv0
self.h_relu0 = -tf.nn.elu(self.h_conv0)
self.bn_conv0 = tf.contrib.layers.batch_norm(self.h_relu0, center=True, scale=True, scope='bn0')
self.W_conv1 = tf.get_variable("W_conv1", [5, 5, 4, 16], dtype=tf.float32,
initializer=tf.random_normal_initializer(0.0, 1e-3))
#self.b_conv1 = tf.get_variable("b_conv1",[16],dtype=tf.float32,initializer=tf.random_normal_initializer(0.0,1e-3))
self.h_conv1 = conv2d_stride(self.bn_conv0, self.W_conv1, 2) # + self.b_conv1
self.h_relu1 = tf.nn.elu(self.h_conv1)
self.bn_conv1 = tf.contrib.layers.batch_norm(self.h_relu1,
center=True, scale=True,
scope='bn1')
# The feature vector
self.feature = tf.reshape(self.bn_conv1, [-1, 7*7*16],name='feature')
self.feature_ave = tf.Variable(tf.zeros([batch_size,7*7*16],
dtype=tf.float32), name='feature_ave')
with tf.variable_scope('weight'):
# current copy of w
self.w = tf.get_variable("w",[7*7*16,1],dtype=tf.float32,initializer=tf.random_normal_initializer(0.0,1e-3))
self.w_ave = tf.get_variable("w_ave",[7*7*16,1],dtype=tf.float32,initializer=tf.random_normal_initializer(0.0, 1e-3))
self.inner_prod = tf.matmul(self.feature, self.w)
#self.pred = 0.5*tf.sign(self.inner_prod)+0.5
with tf.variable_scope('network'):
# current copies of (a,b)
self.a = tf.Variable(tf.zeros([1], dtype=tf.float32), name='a')
self.b = tf.Variable(tf.zeros([1], dtype=tf.float32), name='b')
self.alpha = tf.Variable(tf.zeros([1], dtype=tf.float32), name='alpha')
# average versions of (a,b)
self.a_ave = tf.Variable(tf.zeros([1], dtype=tf.float32), name='a_ave')
self.b_ave = tf.Variable(tf.zeros([1], dtype=tf.float32), name='b_ave')
self.alpha_ave = tf.Variable(tf.zeros([1], dtype=tf.float32), name='alpha_ave')
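            # The loss below appears to follow the min-max reformulation of the
            # squared pairwise AUC surrogate (cf. Ying et al., "Stochastic Online
            # AUC Maximization", NIPS 2016): minimize over (w, a, b), maximize
            # over alpha, with p the positive-class ratio.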
self.loss1 = (1 - p) * tf.multiply(tf.square(self.inner_prod - tf.tile(self.a, [batch_size])), self.y_sing)
self.loss2 = p * tf.multiply(tf.square(self.inner_prod - tf.tile(self.b, [batch_size])), 1 - self.y_sing)
self.loss3 = 2 * (1 + self.alpha) * (p*tf.multiply(self.inner_prod, (1 - self.y_sing)) - (1 - p) * tf.multiply(self.inner_prod, self.y_sing)) - p * (1 - p) * tf.square(self.alpha)
self.loss = tf.reduce_mean(self.loss1 + self.loss2 + self.loss3 + tf.nn.l2_loss(self.inner_prod))
# Build the model graph
graph = tf.get_default_graph()
with graph.as_default():
model = AUCModel()
learning_rate = tf.placeholder(tf.float32, [])
weighted_coeff = tf.placeholder(tf.float32, [])
fraction = tf.divide(learning_rate, weighted_coeff)
# assign new weighted-averages of (w,a,b,alpha)
save_w_op = tf.assign(model.w_ave,
(1 - fraction) * model.w_ave + fraction * model.w)
save_a_op = tf.assign(model.a_ave,
(1 - fraction) * model.a_ave + fraction * model.a)
save_b_op = tf.assign(model.b_ave,
(1 - fraction) * model.b_ave + fraction * model.b)
save_alpha_op = tf.assign(model.alpha_ave,
(1 - fraction) * model.alpha_ave + fraction * model.alpha)
reset_a_op = tf.assign(model.a, tf.reshape(0.0, [1]))
reset_b_op = tf.assign(model.b, tf.reshape(0.0, [1]))
reset_alpha_op = tf.assign(model.alpha, tf.reshape(0.0, [1]))
t_vars = tf.trainable_variables()
# -------------------------------------------------------------------------------------------------
# Optimize (a,b):
# define min optimizer
min_train_op = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
# stochastic descent
# compute the gradients of a list of vars: a,b
grads_and_vars_min = min_train_op.compute_gradients(model.loss,[v for v in t_vars if(v.name == 'network/a:0' or v.name == 'network/b:0')])
min_op = min_train_op.apply_gradients(grads_and_vars_min)
# clip a and b
clip_a_op = tf.assign(model.a, tf.clip_by_value(model.a, clip_value_min=-W_range, clip_value_max=W_range))
clip_b_op = tf.assign(model.b, tf.clip_by_value(model.b, clip_value_min=-W_range, clip_value_max=W_range))
# ------------------------------------------------------------------------------------------------
# Optimize alpha:
# define max optimizer
max_train_op = tf.train.GradientDescentOptimizer(learning_rate = learning_rate)
# stochastic ascent
# compute the gradients of a list of vars: alpha
grads_and_vars_max=max_train_op.compute_gradients(tf.negative(model.loss),[v for v in t_vars if v.name=='network/alpha:0'])
    max_op = max_train_op.apply_gradients(grads_and_vars_max)
# clip alpha
clip_alpha_op=tf.assign(model.alpha,tf.clip_by_value(model.alpha, clip_value_min=-2*W_range, clip_value_max=2*W_range))
# -------------------------------------------------------------------------------------------------
# Optimize w:
# define weight optimizer
w_train_op = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
# weight optimization:
grads_and_vars_w = w_train_op.compute_gradients(model.loss, [v for v in t_vars if(v.name == 'weight/w:0')])
    w_op = w_train_op.apply_gradients(grads_and_vars_w)
# clip w
clip_w_op = tf.assign(model.w,tf.clip_by_norm(model.w, clip_norm=W_range, axes=[0]))
# -----------------------------------------------------------------------------------------------
# Optimize feature:
# define feature optimizer
feat_train_op = tf.train.AdamOptimizer(learning_rate=learning_rate)
# feature layer optimization:
feat_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='feature_extraction')
grads_and_vars_feat = feat_train_op.compute_gradients(model.loss,feat_vars)
feat_op = feat_train_op.apply_gradients(grads_and_vars_feat)
# ------------------------------------------------------------------------------------------------
# Critic:
# apply SGD/SGA
critic_op = tf.group(min_op, clip_a_op, clip_b_op, save_a_op, save_b_op,
max_op, clip_alpha_op, save_alpha_op,
min_op, clip_a_op, clip_b_op, save_a_op, save_b_op,
max_op, clip_alpha_op, save_alpha_op,
min_op, clip_a_op, clip_b_op, save_a_op, save_b_op,
max_op, clip_alpha_op, save_alpha_op,
min_op, clip_a_op, clip_b_op, save_a_op, save_b_op,
max_op, clip_alpha_op, save_alpha_op)
# -----------------------------------------------------------------------------------------------
# Actor:
# apply SGD on feature and weight
actor_op = tf.group(w_op, clip_w_op, save_w_op, feat_op)
train_op = tf.group(critic_op, critic_op, critic_op, actor_op)
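    # Note: tf.group only aggregates control dependencies; listing the same op
    # several times (as with critic_op above) does not make it run more than
    # once per session step.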
# ------------------------------------------------------------------------------------------------
# Params
num_steps = FLAGS.num_time_steps
def train_and_evaluate(training_mode, graph, model, verbose=True):
"""Helper to run the model with different training modes."""
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.20)
    config = tf.ConfigProto(gpu_options=gpu_options)
    with tf.Session(graph=graph, config=config) as sess:
tf.global_variables_initializer().run()
gen_train_batch = batch_generator(
[mnist_train, mnist_train_single_labels], batch_size)
gen_test_batch = batch_generator(
[mnist_test, mnist_test_single_labels], batch_size)
# Training loop
wc = 0.0
# wc_s=0.0
for i in range(num_steps):
# Learning rate as in Stochastic Online AUC optimization
X, y_sing = gen_train_batch.next()
lr = 2e-5
wc = wc + lr
batch_loss, frac, A, B, Alpha, inner_product, _ = sess.run(
[model.loss, fraction, model.a, model.b, model.alpha, model.inner_prod, train_op],
feed_dict={model.X: X, model.y_sing: y_sing, learning_rate: lr, weighted_coeff: wc}
)
if verbose and i % 200 == 199:
print '\n\nAUC optimization training, (+/-) ratio =', FLAGS.request_ratio, ':', 1 - FLAGS.request_ratio
                print 'step', i, '/', num_steps
print 'batch_loss', batch_loss
print 'learning_rate_s', lr
print 'fraction_s', frac
try:
                    batch_auc = metrics.roc_auc_score(y_sing, inner_product)
except Exception:
continue
print 'A', A
print 'B', B
print 'Alpha', Alpha
# print('weighted_coeff',weight)
print 'sklearn_auc', batch_auc
# print 'train_acc',accuracy
print 'inner_product', inner_product.T
# print 'prediction ', prediction.reshape([batch_size]).astype(int)
print 'groundtruth', y_sing.astype(int)
# Compute final evaluation on test data
mnist_TEST = mnist_test[::50, :] # only take one-50th of the testing data
TEST_num = mnist_TEST.shape[0]
mnist_TEST_single = mnist_test_single_labels[::50]
inner_product = sess.run(
[model.inner_prod],
feed_dict={model.X: mnist_TEST, model.y_sing: mnist_TEST_single}
)
inner_product = np.asarray(inner_product).reshape([TEST_num])
test_auc = metrics.roc_auc_score(mnist_TEST_single, inner_product)
return test_auc # , train_auc, train_pre, train_rec
if not FLAGS.output_file:
raise ValueError("Must set --output_file for experiments")
fout = open('./output/' + FLAGS.output_file, 'a')
fout.write('dataset: MNIST')
fout.write('\noutput_file: ' + FLAGS.output_file)
fout.write('\nbatch_size: ' + str(FLAGS.batch_size))
fout.write('\n(+/-) ratio: ' + str(FLAGS.request_ratio) + ':' + str(1 - FLAGS.request_ratio))
fout.write('\nAUC optimization (method 2) with ' + str(FLAGS.num_time_steps) + ' training steps')
fout.close()
print 'dataset: MNIST'
print 'output_file:', FLAGS.output_file
print '(+/-) ratio:', FLAGS.request_ratio, ' : ', 1 - FLAGS.request_ratio
print '\nauc optimization training'
auc_ave = 0.0
for i in range(FLAGS.num_epochs):
print 'epoch', i, '/', str(FLAGS.num_epochs)
fopen = open('./output/' + FLAGS.output_file, 'a')
fopen.write('\nNumber of experiment ' + str(i) + ' / ' +
str(FLAGS.num_epochs) + '\n')
auc = train_and_evaluate('auc', graph, model)
print 'testing data overall AUC', auc
fopen.write('testing data overall AUC: ' + str(auc) + '\n')
auc_ave = (i * auc_ave + auc) / ((i + 1) * 1.0)
fopen.close()
fopen = open('./output/' + FLAGS.output_file, 'a')
fopen.write('testing data average overall AUC score over ' +
str(FLAGS.num_epochs) + ' epochs: ' + str(auc_ave) + '\n')
fopen.close()
|
{"hexsha": "9db094265f5ba5451831a77afd8d7ea7a17b444f", "size": 14757, "ext": "py", "lang": "Python", "max_stars_repo_path": "m2_feat_AUC.py", "max_stars_repo_name": "chengweitsai/AUC-opt-DNN", "max_stars_repo_head_hexsha": "89de46983a68a4dc634402241dfd80d4a71e8795", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "m2_feat_AUC.py", "max_issues_repo_name": "chengweitsai/AUC-opt-DNN", "max_issues_repo_head_hexsha": "89de46983a68a4dc634402241dfd80d4a71e8795", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "m2_feat_AUC.py", "max_forks_repo_name": "chengweitsai/AUC-opt-DNN", "max_forks_repo_head_hexsha": "89de46983a68a4dc634402241dfd80d4a71e8795", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.8476190476, "max_line_length": 191, "alphanum_fraction": 0.6171986176, "include": true, "reason": "import numpy", "num_tokens": 3603}
|
# Dependencies
import openweathermapy.core as owm
import datetime
import pandas as pd
import numpy as np
import requests
import json
def get_lon_lat(city, country_code, units):
    """
    Fetch the current conditions and hourly forecast for a city.
    Input: (city, country_code, units)
    Output: (forecast_data, lon, lat)
    """
# Open weather API
url = "http://api.openweathermap.org/data/2.5/weather?"
weather_key = "bf0eb865eaf1ab307e6b534d32a3da6f"
# Build partial query URL
query_url = f"{url}appid={weather_key}&units={units}&q="
response = requests.get(query_url + city).json()
lon = response['coord']['lon']
lat = response['coord']['lat']
temp = response['main']['temp']
# Create settings parameters
settings = {"units": units, "APPID": weather_key}
    # Make the API call using owm's get_forecast_hourly method
forecast = owm.get_forecast_hourly(f"{city}, {country_code}", **settings)
# Extract the date in text format and the temperature from each record
# and save them in a list
summary = ["dt_txt", "main.temp",
"main.humidity", "wind.speed", "weather", ]
data = [hourly_forecast(*summary) for hourly_forecast in forecast]
    # data is a list of tuples
forecast_data = {
"date_time": [],
"temp": [],
"humidity": [],
"wind.speed": [],
"description": []
}
# format the printing of each record
for hourly_forecast in data:
forecast_data["date_time"].append(hourly_forecast[0])
forecast_data["temp"].append(hourly_forecast[1])
forecast_data["humidity"].append(hourly_forecast[2])
forecast_data["wind.speed"].append(hourly_forecast[3])
forecast_data["description"].append(
hourly_forecast[4][0]["description"])
description = forecast_data['description']
    conversion_dict = {}
    unique_categories = list(np.unique(np.array(description)))
    for i, desc in enumerate(unique_categories):
        conversion_dict[desc] = i
converted_description = []
for desc in description:
converted_description.append(conversion_dict[desc])
forecast_data["conv_description"] = converted_description
forecast_data["categories"] = unique_categories
return (forecast_data, lon, lat)
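# Example usage (hypothetical city and units; requires a valid OpenWeatherMap key):
#   forecast_data, lon, lat = get_lon_lat("London", "uk", "metric")
#   print(lon, lat, forecast_data["temp"][:3])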
|
{"hexsha": "c04f900339d77cc41b7ce218ebc7de3c2f68ccff", "size": 2340, "ext": "py", "lang": "Python", "max_stars_repo_path": "APIstuff/lonLat.py", "max_stars_repo_name": "teomotun/Restaurant-Plug", "max_stars_repo_head_hexsha": "1ecaab7bb60706ec0eca96c2f3efb31276c536e7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "APIstuff/lonLat.py", "max_issues_repo_name": "teomotun/Restaurant-Plug", "max_issues_repo_head_hexsha": "1ecaab7bb60706ec0eca96c2f3efb31276c536e7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "APIstuff/lonLat.py", "max_forks_repo_name": "teomotun/Restaurant-Plug", "max_forks_repo_head_hexsha": "1ecaab7bb60706ec0eca96c2f3efb31276c536e7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.4285714286, "max_line_length": 77, "alphanum_fraction": 0.6585470085, "include": true, "reason": "import numpy", "num_tokens": 564}
|
import unittest
import os
import sys
from contextlib import contextmanager
import numpy as np
from scipy.constants import pi
import matplotlib.pyplot as plt
from echonn.sys import Animator
class ExampleAnimator(Animator):
def __init__(self, max_step=10):
super().__init__()
self.t = np.arange(0, max_step, .1)
self.y = np.sin(2*pi*self.t)
self.max_step = max_step
def init_plot(self):
fig = plt.figure()
plt.xlim((0, self.max_step))
plt.ylim((-2, 2))
t, y = self.get_data(0, self.t, self.y)
line, *_ = plt.plot(t, y)
return fig, [line]
def animator(self, framei):
t, y = self.get_data(framei, self.t, self.y)
data, = self.lines
data.set_data(t, y)
class TestAnimator(unittest.TestCase):
def testAnimation(self):
animator = ExampleAnimator()
animator.render()
fname = os.path.join('src', 'test', 'test_data', 'test_video')
# http://thesmithfam.org/blog/2012/10/25/temporarily-suppress-console-output-in-python/
@contextmanager
def suppress_stdout():
with open(os.devnull, "w") as devnull:
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = devnull
sys.stderr = devnull
try:
yield
finally:
sys.stdout = old_stdout
sys.stderr = old_stderr
        try:
            os.remove(fname + '.gif')
        except OSError:
            pass  # the file may not exist yet
self.assertFalse(os.path.isfile(fname+'.gif'))
with suppress_stdout():
animator.save(fname)
self.assertTrue(os.path.isfile(fname+'.gif'))
|
{"hexsha": "9d929f8106f6a1ee6b2c0a629e2e717e3ebcb742", "size": 1766, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/test/sys/test_animator.py", "max_stars_repo_name": "larkwt96/honors-thesis", "max_stars_repo_head_hexsha": "7e3a52c285c1fdaf4ae9659497154ba04e522f48", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/test/sys/test_animator.py", "max_issues_repo_name": "larkwt96/honors-thesis", "max_issues_repo_head_hexsha": "7e3a52c285c1fdaf4ae9659497154ba04e522f48", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/test/sys/test_animator.py", "max_forks_repo_name": "larkwt96/honors-thesis", "max_forks_repo_head_hexsha": "7e3a52c285c1fdaf4ae9659497154ba04e522f48", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.0317460317, "max_line_length": 95, "alphanum_fraction": 0.566817667, "include": true, "reason": "import numpy,from scipy", "num_tokens": 399}
|
# Copyright 2019 PerfKitBenchmarker Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Runs stress-ng.
From the stress-ng ubuntu documentation:
stress-ng will stress test a computer system in various selectable ways.
It was designed to exercise various physical subsystems of a computer as
well as the various operating system kernel interfaces. stress-ng also has
a wide range of CPU specific stress tests that exercise floating point,
integer, bit manipulation and control flow.
stress-ng manpage:
http://manpages.ubuntu.com/manpages/xenial/man1/stress-ng.1.html
"""
import logging
import numpy
from perfkitbenchmarker import configs
from perfkitbenchmarker import flags
from perfkitbenchmarker import sample
FLAGS = flags.FLAGS
BENCHMARK_NAME = 'stress_ng'
BENCHMARK_CONFIG = """
stress_ng:
description: Runs stress-ng
vm_groups:
default:
vm_spec: *default_single_core
disk_spec: *default_50_gb
"""
STRESS_NG_DIR = '~/stress_ng'
GIT_REPO = 'https://github.com/ColinIanKing/stress-ng'
GIT_TAG_MAP = {
'0.05.23': '54722768329c9f8184c1c98db63435f201377df1', # ubuntu1604
'0.09.25': '2db2812edf99ec80c08edf98ee88806a3662031c', # ubuntu1804
}
VALID_CPU_METHODS = {
'all', 'ackermann', 'bitops', 'callfunc', 'cdouble', 'cfloat',
'clongdouble', 'correlate', 'crc16', 'decimal32', 'decimal64', 'decimal128',
'dither', 'djb2a', 'double', 'euler', 'explog', 'fft', 'fibonacci', 'float',
'fnv1a', 'gamma', 'gcd', 'gray', 'hamming', 'hanoi', 'hyperbolic', 'idct',
'int128', 'int64', 'int32', 'int16', 'int8', 'int128float', 'int128double',
'int128longdouble', 'int128decimal32', 'int128decimal64',
'int128decimal128', 'int64float', 'int64double', 'int64longdouble',
'int32float', 'int32double', 'int32longdouble', 'jenkin', 'jmp', 'ln2',
'longdouble', 'loop', 'matrixprod', 'nsqrt', 'omega', 'parity', 'phi', 'pi',
'pjw', 'prime', 'psi', 'queens', 'rand', 'rand48', 'rgb', 'sdbm', 'sieve',
'sqrt', 'trig', 'union', 'zeta'
}
VALID_STRESSORS = {
'affinity', 'af-alg', 'aio', 'aio-linux', 'apparmor', 'bigheap', 'brk',
'bsearch', 'cache', 'chdir', 'chmod', 'clock', 'clone', 'context', 'cpu',
'cpu-online', 'crypt', 'daemon', 'dentry', 'dir', 'dup', 'epoll', 'eventfd',
'exec', 'fallocate', 'fault', 'fcntl', 'fiemap', 'fifo', 'filename',
'flock', 'fork', 'fp-error', 'fstat', 'futex', 'get', 'getrandom',
'getdent', 'handle', 'hdd', 'heapsort', 'hsearch', 'icache', 'iosync',
'inotify', 'itimer', 'kcmp', 'key', 'kill', 'klog', 'lease', 'link',
'lockbus', 'lockf', 'longjmp', 'lsearch', 'malloc', 'matrix', 'membarrier',
'memcpy', 'memfd', 'mergesort', 'mincore', 'mknod', 'mlock', 'mmap',
'mmapfork', 'mmapmany', 'mremap', 'msg', 'mq', 'nice', 'null', 'numa',
'oom-pipe', 'open', 'personality', 'pipe', 'poll', 'procfs', 'pthread',
'ptrace', 'qsort', 'quota', 'rdrand', 'readahead', 'remap-file-pages',
'rename', 'rlimit', 'seccomp', 'seek', 'sem-posix', 'sem-sysv', 'shm-posix',
'shm-sysv', 'sendfile', 'sigfd', 'sigfpe', 'sigpending', 'sigq', 'sigsegv',
'sigsuspend', 'sleep', 'socket', 'socket-fd', 'socket-pair', 'spawn',
'splice', 'stack', 'str', 'stream', 'switch', 'symlink', 'sync-file',
'sysinfo', 'sysfs', 'tee', 'timer', 'timerfd', 'tsc', 'tsearch', 'udp',
'udp-flood', 'unshare', 'urandom', 'userfaultfd', 'utime', 'vecmath',
'vfork', 'vm', 'vm-rw', 'vm-splice', 'wait', 'wcs', 'xattr', 'yield',
'zero', 'zlib', 'zombie'
}
CPU_SUITE = {
'af-alg', 'bsearch', 'context', 'cpu', 'cpu-online', 'crypt', 'fp-error',
'getrandom', 'heapsort', 'hsearch', 'longjmp', 'lsearch', 'matrix',
'mergesort', 'numa', 'qsort', 'rdrand', 'str', 'stream', 'tsc', 'tsearch',
'vecmath', 'wcs', 'zlib'
}
CPU_CACHE_SUITE = {
'bsearch', 'cache', 'heapsort', 'hsearch', 'icache', 'lockbus', 'lsearch',
'malloc', 'matrix', 'membarrier', 'memcpy', 'mergesort', 'qsort', 'str',
'stream', 'tsearch', 'vecmath', 'wcs', 'zlib'
}
MEMORY_SUITE = {
'bsearch', 'context', 'heapsort', 'hsearch', 'lockbus', 'lsearch', 'malloc',
'matrix', 'membarrier', 'memcpy', 'memfd', 'mergesort', 'mincore', 'null',
'numa', 'oom-pipe', 'pipe', 'qsort', 'stack', 'str', 'stream', 'tsearch',
'vm', 'vm-rw', 'wcs', 'zero', 'zlib'
}
# Run the stressors that are each part of all of the compute related stress-ng
# classes: cpu, cpu-cache, and memory.
DEFAULT_STRESSORS = sorted(
CPU_SUITE.intersection(CPU_CACHE_SUITE).intersection(MEMORY_SUITE))
flags.DEFINE_integer('stress_ng_duration', 10,
'Number of seconds to run the test.')
flags.DEFINE_boolean('stress_ng_calc_geomean', True,
'Whether to calculate geomean or not.')
flags.DEFINE_list('stress_ng_custom_stressors', DEFAULT_STRESSORS,
'List of stressors to run against. Default combines cpu,'
'cpu-cache, and memory suites')
flags.DEFINE_list('stress_ng_cpu_methods', [],
'List of cpu methods to run with. By default none are ran.')
ALL_WORKLOADS = ['small', 'medium', 'large']
flags.DEFINE_list(
'stress_ng_thread_workloads', ['large'],
'List of threads sizes to run against. Options are'
'small (1 thread total), medium (1 thread per 2 cpus), and '
'large (1 thread per cpu).')
flags.register_validator(
'stress_ng_thread_workloads',
lambda workloads: workloads and set(workloads).issubset(ALL_WORKLOADS))
ALL_VERSIONS = ['0.05.23', '0.09.25']
flags.DEFINE_enum(
'stress_ng_version', '0.09.25', ALL_VERSIONS,
'Stress-ng version to use. Default is 0.09.25 which '
'is the default package on Ubuntu 1804.')
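# Example invocation (assuming the standard PerfKitBenchmarker entry point):
#   ./pkb.py --benchmarks=stress_ng --stress_ng_custom_stressors=cpu,matrix \
#     --stress_ng_duration=30 --stress_ng_thread_workloads=small,large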
def _GeoMeanOverflow(iterable):
"""Returns the geometric mean.
See https://en.wikipedia.org/wiki/Geometric_mean#Relationship_with_logarithms
Args:
iterable: a list of positive floats to take the geometric mean of.
Returns: The geometric mean of the list.
"""
a = numpy.log(iterable)
return numpy.exp(a.sum() / len(a))
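# Worked example: _GeoMeanOverflow([2.0, 8.0]) returns 4.0, since
# exp((ln 2 + ln 8) / 2) = exp(ln 16 / 2) = exp(ln 4) = 4.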
def StressngCustomStressorsValidator(stressors):
"""Returns whether or not the list of custom stressors is valid."""
return VALID_STRESSORS.issuperset(set(stressors))
def StressngCpuMethodsValidator(cpu_methods):
"""Returns whether or not the list of cpu methods is valid."""
return ('all_cpu_methods' in cpu_methods or
VALID_CPU_METHODS.issuperset(set(cpu_methods)))
flags.register_validator('stress_ng_custom_stressors',
StressngCustomStressorsValidator)
flags.register_validator('stress_ng_cpu_methods',
StressngCpuMethodsValidator)
def GetConfig(user_config):
return configs.LoadConfig(BENCHMARK_CONFIG, user_config, BENCHMARK_NAME)
def Prepare(benchmark_spec):
"""Installs stress-ng on the target vm.
Args:
benchmark_spec: The benchmark specification. Contains all data that is
required to run the benchmark.
"""
vm = benchmark_spec.vms[0]
vm.InstallPackages(
'build-essential libaio-dev libapparmor-dev libattr1-dev libbsd-dev libcap-dev libgcrypt11-dev libkeyutils-dev libsctp-dev zlib1g-dev'
)
vm.RemoteCommand('git clone {0} {1}'.format(GIT_REPO, STRESS_NG_DIR))
vm.RemoteCommand('cd {0} && git checkout {1}'.format(
STRESS_NG_DIR, GIT_TAG_MAP[FLAGS.stress_ng_version]))
vm.RemoteCommand('cd {0} && make && sudo make install'.format(STRESS_NG_DIR))
def _ParseStressngResult(metadata, output, cpu_method=None):
"""Returns stress-ng data as a sample.
Sample output eg:
stress-ng: info: [2566] dispatching hogs: 2 context
stress-ng: info: [2566] successful run completed in 5.00s
stress-ng: info: [2566] stressor bogo ops real time usr time sys
time bogo ops/s bogo ops/s
stress-ng: info: [2566] (secs) (secs) (secs)
(real time) (usr+sys time)
stress-ng: info: [2566] context 22429 5.00 5.49
4.48 4485.82 2249.65
Args:
metadata: metadata of the sample.
output: the output of the stress-ng benchmark.
cpu_method: an optional flag for the cpu method for the cpu stressor.
"""
output_list = output.splitlines()
output_matrix = [i.split() for i in output_list]
if len(output_matrix) != 5:
logging.error('output is missing')
return ''
assert output_matrix[2][-4] == 'bogo' and output_matrix[2][-3] == 'ops/s'
assert output_matrix[3][-4] == '(real' and output_matrix[3][-3] == 'time)'
line = output_matrix[4]
name = line[3]
value = float(line[-2]) # parse bogo ops/s (real time)
if name == 'cpu' and cpu_method:
return sample.Sample(
metric=cpu_method,
value=value,
unit='bogus_ops_sec', # bogus operations per second
metadata=metadata)
return sample.Sample(
metric=name,
value=value,
unit='bogus_ops_sec', # bogus operations per second
metadata=metadata)
def _RunWorkload(vm, num_threads):
"""Runs stress-ng on the target vm.
Args:
vm: The target vm to run on.
num_threads: Number of instances of stressors to launch.
Returns:
A list of sample.Sample objects.
"""
metadata = {
'duration_sec': FLAGS.stress_ng_duration,
'threads': num_threads,
'version': FLAGS.stress_ng_version,
}
samples = []
values_to_geomean_list = []
stressors = FLAGS.stress_ng_custom_stressors
for stressor in stressors:
cmd = ('stress-ng --{stressor} {numthreads} --metrics-brief '
'-t {duration}'.format(
stressor=stressor,
numthreads=num_threads,
duration=FLAGS.stress_ng_duration))
stdout, stderr = vm.RemoteCommand(cmd)
# TODO(user): Find the actual stress-ng version that changes output to
# stderr instead of stdout
if FLAGS.stress_ng_version > '0.05.23':
stdout = stderr
stressng_sample = _ParseStressngResult(metadata, stdout)
if stressng_sample:
samples.append(stressng_sample)
values_to_geomean_list.append(stressng_sample.value)
cpu_methods = (VALID_CPU_METHODS
if 'all_cpu_methods' in FLAGS.stress_ng_cpu_methods
else FLAGS.stress_ng_cpu_methods)
for cpu_method in cpu_methods:
cmd = ('stress-ng --cpu {numthreads} --metrics-brief '
'-t {duration} --cpu-method {cpu_method}'.format(
numthreads=num_threads,
duration=FLAGS.stress_ng_duration,
cpu_method=cpu_method))
stdout, _ = vm.RemoteCommand(cmd)
stressng_sample = _ParseStressngResult(metadata, stdout, cpu_method)
if stressng_sample:
samples.append(stressng_sample)
values_to_geomean_list.append(stressng_sample.value)
if FLAGS.stress_ng_calc_geomean:
geomean_metadata = metadata.copy()
geomean_metadata['stressors'] = stressors
# True only if each stressor provided a value
geomean_metadata['valid_run'] = (
len(values_to_geomean_list) == len(stressors) + len(cpu_methods))
geomean_sample = sample.Sample(
metric='STRESS_NG_GEOMEAN',
value=_GeoMeanOverflow(values_to_geomean_list),
unit='bogus_ops_sec',
metadata=geomean_metadata)
samples.append(geomean_sample)
return samples
def Run(benchmark_spec):
"""Runs stress-ng on the target vm.
Args:
benchmark_spec: The benchmark specification. Contains all data that is
required to run the benchmark.
Returns:
A list of sample.Sample objects.
"""
vm = benchmark_spec.vms[0]
samples = []
for workload in FLAGS.stress_ng_thread_workloads:
if workload == 'small':
samples.extend(_RunWorkload(vm, 1))
elif workload == 'medium':
      samples.extend(_RunWorkload(vm, vm.NumCpusForBenchmark() // 2))
elif workload == 'large':
samples.extend(_RunWorkload(vm, vm.NumCpusForBenchmark()))
return samples
def Cleanup(benchmark_spec):
"""Cleans up stress-ng from the target vm.
Args:
benchmark_spec: The benchmark specification. Contains all data that is
required to run the benchmark.
"""
vm = benchmark_spec.vms[0]
vm.RemoteCommand('cd {0} && sudo make uninstall'.format(STRESS_NG_DIR))
|
{"hexsha": "dbb4f1701c8d6e56b79444d193ee8a5b5676f904", "size": 12661, "ext": "py", "lang": "Python", "max_stars_repo_path": "perfkitbenchmarker/linux_benchmarks/stress_ng_benchmark.py", "max_stars_repo_name": "inflatador/PerfKitBenchmarker", "max_stars_repo_head_hexsha": "9a12f44aa0c3fe6873e57a7920b1d13c006073e3", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "perfkitbenchmarker/linux_benchmarks/stress_ng_benchmark.py", "max_issues_repo_name": "inflatador/PerfKitBenchmarker", "max_issues_repo_head_hexsha": "9a12f44aa0c3fe6873e57a7920b1d13c006073e3", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-02-23T12:07:44.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-23T12:07:44.000Z", "max_forks_repo_path": "perfkitbenchmarker/linux_benchmarks/stress_ng_benchmark.py", "max_forks_repo_name": "isabella232/PerfKitBenchmarker", "max_forks_repo_head_hexsha": "8dd509ac0e024b7deeccd74266c8e6211a69529e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.250755287, "max_line_length": 140, "alphanum_fraction": 0.6702472159, "include": true, "reason": "import numpy", "num_tokens": 3568}
|
# This file is a part of BAT.jl, licensed under the MIT License (MIT).
include("bat_sample.jl")
include("mcmc/mcmc.jl")
include("sampled_density.jl")
include("importance/importance_sampler.jl")
include("nested_sampling/nested_sampling.jl")
include("partitioned_sampling/partitioned_sampling.jl")
|
{"hexsha": "6832c185862d2a3c53ae4e55d305aedf651c596d", "size": 297, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/samplers/samplers.jl", "max_stars_repo_name": "celaue/BAT.jl", "max_stars_repo_head_hexsha": "521366e992fb764877615af0926f1acaa7f2aea1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 136, "max_stars_repo_stars_event_min_datetime": "2017-11-28T20:26:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-16T11:05:27.000Z", "max_issues_repo_path": "src/samplers/samplers.jl", "max_issues_repo_name": "celaue/BAT.jl", "max_issues_repo_head_hexsha": "521366e992fb764877615af0926f1acaa7f2aea1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 283, "max_issues_repo_issues_event_min_datetime": "2017-09-04T09:15:20.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-25T15:55:49.000Z", "max_forks_repo_path": "src/samplers/samplers.jl", "max_forks_repo_name": "celaue/BAT.jl", "max_forks_repo_head_hexsha": "521366e992fb764877615af0926f1acaa7f2aea1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 33, "max_forks_repo_forks_event_min_datetime": "2017-08-23T22:17:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-20T23:52:33.000Z", "avg_line_length": 33.0, "max_line_length": 70, "alphanum_fraction": 0.7946127946, "num_tokens": 75}
|
[STATEMENT]
lemma CondInv:
assumes wp: "P \<subseteq> Q"
assumes inv: "Q \<subseteq> {s. (s\<in>b \<longrightarrow> s\<in>P\<^sub>1) \<and> (s\<notin>b \<longrightarrow> s\<in>P\<^sub>2)}"
assumes deriv_c1: "\<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P\<^sub>1 c\<^sub>1 Q,A"
assumes deriv_c2: "\<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P\<^sub>2 c\<^sub>2 Q,A"
shows "\<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P (Cond b c\<^sub>1 c\<^sub>2) Q,A"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P Cond b c\<^sub>1 c\<^sub>2 Q,A
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P Cond b c\<^sub>1 c\<^sub>2 Q,A
[PROOF STEP]
from wp inv
[PROOF STATE]
proof (chain)
picking this:
P \<subseteq> Q
Q \<subseteq> {s. (s \<in> b \<longrightarrow> s \<in> P\<^sub>1) \<and> (s \<notin> b \<longrightarrow> s \<in> P\<^sub>2)}
[PROOF STEP]
have "P \<subseteq> {s. (s\<in>b \<longrightarrow> s\<in>P\<^sub>1) \<and> (s\<notin>b \<longrightarrow> s\<in>P\<^sub>2)}"
[PROOF STATE]
proof (prove)
using this:
P \<subseteq> Q
Q \<subseteq> {s. (s \<in> b \<longrightarrow> s \<in> P\<^sub>1) \<and> (s \<notin> b \<longrightarrow> s \<in> P\<^sub>2)}
goal (1 subgoal):
1. P \<subseteq> {s. (s \<in> b \<longrightarrow> s \<in> P\<^sub>1) \<and> (s \<notin> b \<longrightarrow> s \<in> P\<^sub>2)}
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
P \<subseteq> {s. (s \<in> b \<longrightarrow> s \<in> P\<^sub>1) \<and> (s \<notin> b \<longrightarrow> s \<in> P\<^sub>2)}
goal (1 subgoal):
1. \<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P Cond b c\<^sub>1 c\<^sub>2 Q,A
[PROOF STEP]
from Cond [OF this deriv_c1 deriv_c2]
[PROOF STATE]
proof (chain)
picking this:
\<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P Cond b c\<^sub>1 c\<^sub>2 Q,A
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
\<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P Cond b c\<^sub>1 c\<^sub>2 Q,A
goal (1 subgoal):
1. \<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P Cond b c\<^sub>1 c\<^sub>2 Q,A
[PROOF STEP]
.
[PROOF STATE]
proof (state)
this:
\<Gamma>,\<Theta>\<turnstile>\<^sub>t\<^bsub>/F\<^esub> P Cond b c\<^sub>1 c\<^sub>2 Q,A
goal:
No subgoals!
[PROOF STEP]
qed
|
{"llama_tokens": 1036, "file": "Simpl_HoareTotal", "length": 8}
|
#ifndef GSL_WRAPPERS_H
#define GSL_WRAPPERS_H
// #include <gsl/gsl_check_range.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_permutation.h>
#include <gsl/gsl_linalg.h>
#include <gsl/gsl_eigen.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_multimin.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_blas.h>
#include <math.h>
#include <assert.h>
#include <time.h>
#include <sys/stat.h>
#include <sys/types.h>
#define outlog(format, args...) \
fprintf(stderr, format, args); \
fprintf(stderr, "\n");
double safe_log(double);
double log_sum(double, double);
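// log_sum(a, b) presumably returns log(exp(a) + exp(b)) computed in a
// numerically stable way; safe_log presumably guards against log(0).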
static inline double vget(const gsl_vector* v, int i)
{ return(gsl_vector_get(v, i)); }
static inline void vset(gsl_vector* v, int i, double x)
{ gsl_vector_set(v, i, x); }
// Increment a vector element by a double.
void vinc(gsl_vector*, int, double);
static inline double mget(const gsl_matrix* m, int i, int j)
{ return(gsl_matrix_get(m, i, j)); }
static inline void mset(gsl_matrix* m, int i, int j, double x)
{ gsl_matrix_set(m, i, j, x); }
void msetcol(gsl_matrix* m, int r, const gsl_vector* val);
// Increment a matrix element by a double.
void minc(gsl_matrix*, int, int, double);
void msetrow(gsl_matrix*, int, const gsl_vector*);
void col_sum(gsl_matrix*, gsl_vector*);
void vct_printf(const gsl_vector* v);
void mtx_printf(const gsl_matrix* m);
void vct_fscanf(const char*, gsl_vector* v);
void mtx_fscanf(const char*, gsl_matrix* m);
void vct_fprintf(const char* filename, gsl_vector* v);
void mtx_fprintf(const char* filename, const gsl_matrix* m);
double log_det(gsl_matrix*);
void matrix_inverse(gsl_matrix*, gsl_matrix*);
void sym_eigen(gsl_matrix*, gsl_vector*, gsl_matrix*);
double sum(const gsl_vector* v);
double norm(gsl_vector * v);
void vct_log(gsl_vector* v);
void vct_exp(gsl_vector* x);
void choose_k_from_n(int k, int n, int* result);
void log_normalize(gsl_vector* x);
void normalize(gsl_vector* x);
void optimize(int dim,
gsl_vector* x,
void* params,
void (*fdf)(const gsl_vector*, void*, double*, gsl_vector*),
void (*df)(const gsl_vector*, void*, gsl_vector*),
double (*f)(const gsl_vector*, void*));
void optimize_fdf(int dim,
gsl_vector* x,
void* params,
void (*fdf)(const gsl_vector*, void*, double*, gsl_vector*),
void (*df)(const gsl_vector*, void*, gsl_vector*),
double (*f)(const gsl_vector*, void*),
double* f_val,
double* conv_val,
int* niter);
void log_write(FILE* f, char* string);
int directory_exist(const char *dname);
void make_directory(char* name);
gsl_rng* new_random_number_generator();
#endif
|
{"hexsha": "02956b0c5ea2da80c20b865be7b6e33c136ac6ba", "size": 2806, "ext": "h", "lang": "C", "max_stars_repo_path": "scripts/lib/DTM/dtm/gsl-wrappers.h", "max_stars_repo_name": "iwangjian/dtm-lab", "max_stars_repo_head_hexsha": "07c936c07d268208dcc2f19e07fb8d2a18e39ba8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 11.0, "max_stars_repo_stars_event_min_datetime": "2018-07-12T11:05:51.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-22T08:34:34.000Z", "max_issues_repo_path": "scripts/lib/DTM/dtm/gsl-wrappers.h", "max_issues_repo_name": "iwangjian/topic-extractor", "max_issues_repo_head_hexsha": "07c936c07d268208dcc2f19e07fb8d2a18e39ba8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scripts/lib/DTM/dtm/gsl-wrappers.h", "max_forks_repo_name": "iwangjian/topic-extractor", "max_forks_repo_head_hexsha": "07c936c07d268208dcc2f19e07fb8d2a18e39ba8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7.0, "max_forks_repo_forks_event_min_datetime": "2019-03-15T04:11:00.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-10T09:29:17.000Z", "avg_line_length": 28.06, "max_line_length": 78, "alphanum_fraction": 0.6724875267, "num_tokens": 736}
|
"""This is a exmaple of random walke simultion using the WExplore
resampler.
"""
import sys
import os
import os.path as osp
import numpy as np
from wepy.resampling.resamplers.wexplore import WExploreResampler
from wepy.resampling.distances.randomwalk import RandomWalkDistance
from wepy.runners.randomwalk import RandomWalkRunner, UNIT_NAMES
from wepy.walker import Walker, WalkerState
from wepy_tools.toys.randomwalk import RandomwalkProfiler
ON = True
OFF = False
# the maximum weight allowed for a walker
PMAX = 0.1
# the minimum weight allowed for a walker
PMIN = 1e-100
# set the value of distance exponent
DIST_EXPONENT = 4
# the merge distance value
MERGE_DIST = 2.5
# fields saved in the HDF5 (the trailing comma makes this a 1-tuple, not a string)
SAVE_FIELDS = ('positions',)
# Name of field's unit in the HDF5
UNITS = UNIT_NAMES
PROBABILITY = 0.25
WEIGHTS = ON
# the maximum number of regions allowed under each parent region
MAX_N_REGIONS = (10, 10, 10, 10)
# the maximum size of regions, new regions will be created if a walker
# is beyond this distance from each voronoi image unless there is an
# already maximal number of regions
MAX_REGION_SIZES = (16, 4, 1, .25)
outputs_dir = osp.realpath('./outputs')
if not osp.exists(outputs_dir):
os.makedirs(outputs_dir)
# sets the input paths
hdf5_filename = 'rw_results.wepy.h5'
reporter_filename = 'randomwalk_wexplore.org'
hdf5_path= osp.join(outputs_dir, hdf5_filename)
reporter_path = osp.join(outputs_dir, reporter_filename)
if __name__=="__main__":
if sys.argv[1] == "--help" or sys.argv[1] == '-h':
print("arguments: n_cycles, n_walkers, dimension")
else:
n_runs = int(sys.argv[1])
n_cycles = int(sys.argv[2])
n_walkers = int(sys.argv[3])
dimension = int(sys.argv[4])
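        # example invocation (values are illustrative, not from the original):
        #   python rw_wexplore.py 1 100 48 5
        # i.e. 1 run of 100 cycles with 48 walkers in a 5-dimensional walk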
# set up initial state for walkers
position_coords = np.zeros((1, dimension))
init_state = WalkerState(positions=position_coords, time=0.0)
# set up the distance function
        distance = RandomWalkDistance()
# set up the WExplore Resampler with the parameters
resampler = WExploreResampler(distance=distance,
init_state=init_state,
max_n_regions=MAX_N_REGIONS,
max_region_sizes=MAX_REGION_SIZES,
pmin=PMIN, pmax=PMAX)
# set up a RandomWalkProfilier
rw_profiler = RandomwalkProfiler(resampler,
dimension,
hdf5_filename=hdf5_path,
reporter_filename=reporter_path)
# runs the simulations and gets the result
rw_profiler.run(num_runs=n_runs, num_cycles=n_cycles,
num_walkers=n_walkers)
|
{"hexsha": "69705beeff028018ce849bfc4c8d3801e8e4290d", "size": 2813, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/RandomWalk/rw_wexplore.py", "max_stars_repo_name": "edeustua/wepy", "max_stars_repo_head_hexsha": "f1a2ef5c8cc368d5602c9d683983b3af69a48ce2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/RandomWalk/rw_wexplore.py", "max_issues_repo_name": "edeustua/wepy", "max_issues_repo_head_hexsha": "f1a2ef5c8cc368d5602c9d683983b3af69a48ce2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/RandomWalk/rw_wexplore.py", "max_forks_repo_name": "edeustua/wepy", "max_forks_repo_head_hexsha": "f1a2ef5c8cc368d5602c9d683983b3af69a48ce2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.13, "max_line_length": 70, "alphanum_fraction": 0.6768574476, "include": true, "reason": "import numpy", "num_tokens": 688}
|
import Data.Nat
import Data.Vect
{-
Exercise 1
Using plusZeroRightNeutral and plusSuccRightSucc, write your own version of plusCommutes:
myPlusCommutes : (n : Nat) -> (m : Nat) -> n + m = m + n
Hint: Write this by case splitting on n. In the case of S k,
you can rewrite with a recursive call to myPlusCommutes k m, and rewrites can be nested.
-}
myPlusCommutative : (n : Nat) -> (m : Nat) -> n + m = m + n
myPlusCommutative 0 m = sym (plusZeroRightNeutral m)
{-
Need to prove: S (plus k m) = plus m (S k)
prf1 : plus k m = plus m k
prf2 : S (plus m k) = plus m (S k)
Using prf1,
Rewrite S (plus m k) = plus m (S k)
as S (plus k m) = plus m (S k)
-}
myPlusCommutative (S k) m = let prf1 = myPlusCommutative k m
prf2 = plusSuccRightSucc m k
in
rewrite prf1 in prf2
-- Exercise 2
reverseProof_nil : Vect k a -> Vect (plus k 0) a
reverseProof_nil {k} xs = rewrite plusZeroRightNeutral k in xs
reverseProof_xs : Vect (S (plus k len)) a -> Vect (plus k (S len)) a
reverseProof_xs {k} {len} xs = rewrite sym (plusSuccRightSucc k len) in xs
myReverse2 : Vect n a -> Vect n a
myReverse2 xs = reverse' [] xs where
reverse' : Vect k a -> Vect m a -> Vect (k + m) a
reverse' acc [] = reverseProof_nil acc
reverse' acc (x :: xs) = reverseProof_xs (reverse' (x :: acc) xs)
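{-
Quick sanity checks (assuming the definitions above typecheck):
  myReverse2 [1,2,3]     -- evaluates to [3,2,1]
  myPlusCommutative 2 3  -- a proof that 2 + 3 = 3 + 2
-}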
|
{"hexsha": "c042d3b49ce8174ff51fcd470755806818565b02", "size": 1414, "ext": "idr", "lang": "Idris", "max_stars_repo_path": "src/ch8/ex_8_2.idr", "max_stars_repo_name": "trevarj/tdd_idris_examples", "max_stars_repo_head_hexsha": "63a7128a797b64cb0ba54562778cdf9712b4d9f5", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/ch8/ex_8_2.idr", "max_issues_repo_name": "trevarj/tdd_idris_examples", "max_issues_repo_head_hexsha": "63a7128a797b64cb0ba54562778cdf9712b4d9f5", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/ch8/ex_8_2.idr", "max_forks_repo_name": "trevarj/tdd_idris_examples", "max_forks_repo_head_hexsha": "63a7128a797b64cb0ba54562778cdf9712b4d9f5", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.7391304348, "max_line_length": 89, "alphanum_fraction": 0.6018387553, "num_tokens": 443}
|
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
#include "kudu/master/sentry_privileges_fetcher.h"
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <memory>
#include <mutex>
#include <set>
#include <type_traits>
#include <unordered_map>
#include <vector>
#include <boost/algorithm/string/predicate.hpp>
#include <gflags/gflags.h>
#include <glog/logging.h>
#include "kudu/common/table_util.h"
#include "kudu/gutil/macros.h"
#include "kudu/gutil/map-util.h"
#include "kudu/gutil/port.h"
#include "kudu/gutil/strings/substitute.h"
#include "kudu/master/sentry_client_metrics.h"
#include "kudu/master/sentry_privileges_cache_metrics.h"
#include "kudu/sentry/sentry_action.h"
#include "kudu/sentry/sentry_authorizable_scope.h"
#include "kudu/sentry/sentry_client.h"
#include "kudu/sentry/sentry_policy_service_types.h"
#include "kudu/thrift/client.h"
#include "kudu/thrift/ha_client_metrics.h"
#include "kudu/util/async_util.h"
#include "kudu/util/flag_tags.h"
#include "kudu/util/flag_validators.h"
#include "kudu/util/malloc.h"
#include "kudu/util/monotime.h"
#include "kudu/util/net/net_util.h"
#include "kudu/util/slice.h"
#include "kudu/util/test_util_prod.h"
#include "kudu/util/trace.h"
#include "kudu/util/ttl_cache_metrics.h"
DEFINE_string(sentry_service_rpc_addresses, "",
"Comma-separated list of RPC addresses of the Sentry service(s). When "
"set, Sentry integration is enabled, fine-grained access control is "
"enforced in the master, and clients are issued authorization tokens. "
"Must match the value of the sentry.service.client.server.rpc-addresses "
"option in the Sentry server configuration.");
DEFINE_string(server_name, "server1",
"Configures which server namespace the Kudu instance belongs to for defining "
"server-level privileges in Sentry. Used to distinguish a particular Kudu "
"cluster in case of a multi-cluster setup. Must match the value of the "
"hive.sentry.server option in the HiveServer2 configuration, and the value "
"of the --server_name in Impala configuration.");
DEFINE_string(kudu_service_name, "kudu",
"The service name of the Kudu server. Must match the service name "
"used for Kudu server of sentry.service.admin.group option in the "
"Sentry server configuration.");
DEFINE_string(sentry_service_kerberos_principal, "sentry",
"The service principal of the Sentry server. Must match the primary "
"(user) portion of sentry.service.server.principal option in the "
"Sentry server configuration.");
DEFINE_string(sentry_service_security_mode, "kerberos",
"Configures whether Thrift connections to the Sentry server use "
"SASL (Kerberos) security. Must match the value of the "
"‘sentry.service.security.mode’ option in the Sentry server "
"configuration.");
DEFINE_int32(sentry_service_retry_count, 1,
"The number of times that Sentry operations will retry after "
"encountering retriable failures, such as network errors.");
TAG_FLAG(sentry_service_retry_count, advanced);
DEFINE_int32(sentry_service_send_timeout_seconds, 60,
"Configures the socket send timeout, in seconds, for Thrift "
"connections to the Sentry server.");
TAG_FLAG(sentry_service_send_timeout_seconds, advanced);
DEFINE_int32(sentry_service_recv_timeout_seconds, 60,
"Configures the socket receive timeout, in seconds, for Thrift "
"connections to the Sentry server.");
TAG_FLAG(sentry_service_recv_timeout_seconds, advanced);
DEFINE_int32(sentry_service_conn_timeout_seconds, 60,
"Configures the socket connect timeout, in seconds, for Thrift "
"connections to the Sentry server.");
TAG_FLAG(sentry_service_conn_timeout_seconds, advanced);
DEFINE_int32(sentry_service_max_message_size_bytes, 100 * 1024 * 1024,
"Maximum size of Sentry objects that can be received by the "
"Sentry client in bytes. Must match the value of the "
"sentry.policy.client.thrift.max.message.size option in the "
"Sentry server configuration.");
TAG_FLAG(sentry_service_max_message_size_bytes, advanced);
// TODO(aserbin): provide some reasonable default value for the
// --sentry_privileges_cache_capacity_mb flag. Maybe, make it
// a multiple of FLAGS_sentry_service_max_message_size_bytes?
DEFINE_uint32(sentry_privileges_cache_capacity_mb, 256,
"Capacity for the authz cache, in MiBytes. The cache stores "
"information received from Sentry. A value of 0 means Sentry "
"responses will not be cached.");
TAG_FLAG(sentry_privileges_cache_capacity_mb, advanced);
DEFINE_uint32(sentry_privileges_cache_ttl_factor, 10,
"Factor of multiplication for the authz token validity interval "
"defined by --authz_token_validity_seconds flag. The result of "
"the multiplication of this factor and authz token validity "
"defines the TTL of entries in the authz cache.");
TAG_FLAG(sentry_privileges_cache_ttl_factor, advanced);
DEFINE_uint32(sentry_privileges_cache_scrubbing_period_sec, 20,
"The interval to run the periodic task that scrubs the "
"privileges cache of expired entries. A value of 0 means expired "
"entries are only evicted when inserting new entries into a full "
"cache.");
TAG_FLAG(sentry_privileges_cache_scrubbing_period_sec, advanced);
DEFINE_uint32(sentry_privileges_cache_max_scrubbed_entries_per_pass, 32,
"Maximum number of entries in the privileges cache to process "
"in one pass of the periodic scrubbing task. A value of 0 means "
"there is no limit, i.e. all expired entries, if any, "
"are invalidated every time the scrubbing task runs. Note "
"that the cache is locked while the scrubbing task is running.");
TAG_FLAG(sentry_privileges_cache_max_scrubbed_entries_per_pass, advanced);
DECLARE_int64(authz_token_validity_seconds);
DECLARE_string(hive_metastore_uris);
DECLARE_string(kudu_service_name);
DECLARE_string(server_name);
using kudu::sentry::AuthorizableScopesSet;
using kudu::sentry::SentryAction;
using kudu::sentry::SentryAuthorizableScope;
using kudu::sentry::SentryClient;
using sentry::TListSentryPrivilegesRequest;
using sentry::TListSentryPrivilegesResponse;
using sentry::TSentryAuthorizable;
using sentry::TSentryGrantOption;
using sentry::TSentryPrivilege;
using std::make_shared;
using std::shared_ptr;
using std::string;
using std::unique_ptr;
using std::unordered_map;
using std::vector;
using strings::Substitute;
namespace kudu {
namespace master {
// Validates the sentry_service_rpc_addresses gflag.
static bool ValidateAddresses(const char* flag_name, const string& addresses) {
vector<HostPort> host_ports;
Status s = HostPort::ParseStringsWithScheme(addresses,
SentryClient::kDefaultSentryPort,
&host_ports);
if (!s.ok()) {
LOG(ERROR) << "invalid flag " << flag_name << ": " << s.ToString();
}
return s.ok();
}
DEFINE_validator(sentry_service_rpc_addresses, &ValidateAddresses);
// This group flag validator enforces the logical dependency of the Sentry+Kudu
// fine-grain authz scheme on the integration with HMS catalog.
//
// The validator makes it necessary to set the --hive_metastore_uris flag
// if the --sentry_service_rpc_addresses flag is set.
//
// Even if Kudu could successfully fetch information on granted privileges from
// Sentry to allow or deny commencing DML operations on already existing
// tables, the information on privileges in Sentry would become inconsistent
// after DDL operations (e.g., renaming a table).
bool ValidateSentryServiceRpcAddresses() {
if (!FLAGS_sentry_service_rpc_addresses.empty() &&
FLAGS_hive_metastore_uris.empty()) {
LOG(ERROR) << "Hive Metastore catalog is required (--hive_metastore_uris) "
"to run Kudu with Sentry-backed authorization scheme "
"(--sentry_service_rpc_addresses).";
return false;
}
return true;
}
GROUP_FLAG_VALIDATOR(sentry_service_rpc_addresses,
ValidateSentryServiceRpcAddresses);
namespace {
// Fetching privileges from Sentry gets more expensive the broader the scope of
// the authorizable is, since the API used in a fetch returns all ancestors and
// all descendants of an authorizable in its hierarchy tree.
//
// Even if requesting privileges at a relatively broad scope, e.g. DATABASE,
// fill in the authorizable to request a narrower scope, since the broader
// privileges (i.e. the ancestors) will be returned from Sentry anyway.
void NarrowAuthzScopeForFetch(const string& db, const string& table,
TSentryAuthorizable* authorizable) {
if (authorizable->db.empty()) {
authorizable->__set_db(db);
}
if (authorizable->table.empty()) {
authorizable->__set_table(table);
}
}
// Returns an authorizable based on the given database and table name and the
// given scope.
Status GetAuthorizable(const string& db, const string& table,
SentryAuthorizableScope::Scope scope,
TSentryAuthorizable* authorizable) {
// We should only ever request privileges from Sentry for authorizables of
// scope equal to or higher than 'TABLE'.
DCHECK_NE(scope, SentryAuthorizableScope::Scope::COLUMN);
switch (scope) {
case SentryAuthorizableScope::Scope::TABLE:
authorizable->__set_table(table);
FALLTHROUGH_INTENDED;
case SentryAuthorizableScope::Scope::DATABASE:
authorizable->__set_db(db);
FALLTHROUGH_INTENDED;
case SentryAuthorizableScope::Scope::SERVER:
authorizable->__set_server(FLAGS_server_name);
break;
default:
LOG(FATAL) << "unsupported SentryAuthorizableScope: "
<< sentry::ScopeToString(scope);
break;
}
return Status::OK();
}
// A utility class to help with Sentry privilege scoping, generating sequence
// of keys to lookup corresponding entries in the cache.
class AuthzInfoKey {
public:
// The maximum possible number of the elements in the key lookup sequence
// returned by the key_sequence() method (see below). Maximum number of keys
// to lookup in the cache is 2. See the comment for the GenerateKeySequence()
// method below for more details.
constexpr static size_t kKeySequenceMaxSize = 2;
AuthzInfoKey(const string& user,
const ::sentry::TSentryAuthorizable& authorizable);
// Get the key to lookup the corresponding entry in the cache with the scope
// of the authorizable widened as specified by the 'scope' parameter.
  // E.g., if the original scope of the authorizable specified in the constructor
// was COLUMN, with the 'scope' set to TABLE the returned key is 'U/S/D/T',
// while the key for the authorizable as is would be 'U/S/D/T/C'.
const string& GetKey(SentryAuthorizableScope::Scope scope) const;
// This method returns the sequence of keys to look up in the cache
// if retrieving privileges granted to the 'user' on the 'authorizable'
// specified in the constructor.
const vector<string>& key_sequence() const {
return key_sequence_;
}
private:
// Generate the raw key sequence: a sequence of keys for the authz scope
// hierarchy, starting from the very top (i.e. SERVER scope) and narrowing
// down to the scope of the 'authorizable' specified in the constructor.
//
// For example, for user 'U' and authorizable { server:S, db:D, table:T }
// the raw sequence of keys is { 'U/S', 'U/S/D', 'U/S/D/T' }.
static vector<string> GenerateRawKeySequence(
const string& user, const ::sentry::TSentryAuthorizable& authorizable);
// Generate the cache key lookup sequence: a sequence of keys to use while
  // looking up the corresponding entry in the authz cache. The maximum
  // length of the returned sequence is limited by kKeySequenceMaxSize.
//
// For authorizables of the TABLE scope and narrower, it returns sequence
// { 'U/S/D', 'U/S/D/T' }. For authorizables of the DATABASE scope it returns
// { 'U/S/D' }. For authorizables of the SERVER scope it returns { 'U/S' }.
static vector<string> GenerateKeySequence(const vector<string>& raw_sequence);
// Convert the Sentry authz scope to an index in the list
// { SERVER, DATABASE, TABLE, COLUMN }.
static size_t ScopeToRawSequenceIdx(SentryAuthorizableScope::Scope scope);
const vector<string> raw_key_sequence_;
const vector<string> key_sequence_;
};
AuthzInfoKey::AuthzInfoKey(const string& user,
const ::sentry::TSentryAuthorizable& authorizable)
: raw_key_sequence_(GenerateRawKeySequence(user, authorizable)),
key_sequence_(GenerateKeySequence(raw_key_sequence_)) {
DCHECK(!raw_key_sequence_.empty());
DCHECK(!key_sequence_.empty());
DCHECK_GE(kKeySequenceMaxSize, key_sequence_.size());
}
const string& AuthzInfoKey::GetKey(SentryAuthorizableScope::Scope scope) const {
const size_t level = ScopeToRawSequenceIdx(scope);
if (level < raw_key_sequence_.size()) {
return raw_key_sequence_[level];
}
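  // The requested scope is narrower than anything the authorizable supplied
  // to the constructor provides a key for; fall back to the narrowest
  // available key.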
return raw_key_sequence_.back();
}
vector<string> AuthzInfoKey::GenerateRawKeySequence(
const string& user, const ::sentry::TSentryAuthorizable& authorizable) {
DCHECK(!user.empty());
DCHECK(!authorizable.server.empty());
if (!authorizable.__isset.db || authorizable.db.empty()) {
return {
Substitute("$0/$1", user, authorizable.server),
};
}
if (!authorizable.__isset.table || authorizable.table.empty()) {
auto k0 = Substitute("$0/$1", user, authorizable.server);
auto k1 = Substitute("$0/$1", k0, authorizable.db);
return { std::move(k0), std::move(k1), };
}
if (!authorizable.__isset.column || authorizable.column.empty()) {
auto k0 = Substitute("$0/$1", user, authorizable.server);
auto k1 = Substitute("$0/$1", k0, authorizable.db);
auto k2 = Substitute("$0/$1", k1, authorizable.table);
return { std::move(k0), std::move(k1), std::move(k2), };
}
auto k0 = Substitute("$0/$1", user, authorizable.server);
auto k1 = Substitute("$0/$1", k0, authorizable.db);
auto k2 = Substitute("$0/$1", k1, authorizable.table);
auto k3 = Substitute("$0/$1", k2, authorizable.column);
return { std::move(k0), std::move(k1), std::move(k2), std::move(k3), };
}
vector<string> AuthzInfoKey::GenerateKeySequence(
const vector<string>& raw_sequence) {
DCHECK(!raw_sequence.empty());
vector<string> sequence;
const auto idx_db = ScopeToRawSequenceIdx(SentryAuthorizableScope::DATABASE);
if (idx_db < raw_sequence.size()) {
sequence.emplace_back(raw_sequence[idx_db]);
}
const auto idx_table = ScopeToRawSequenceIdx(SentryAuthorizableScope::TABLE);
if (idx_table < raw_sequence.size()) {
sequence.emplace_back(raw_sequence[idx_table]);
}
if (sequence.empty()) {
sequence.emplace_back(raw_sequence.back());
}
DCHECK_GE(kKeySequenceMaxSize, sequence.size());
return sequence;
}
size_t AuthzInfoKey::ScopeToRawSequenceIdx(SentryAuthorizableScope::Scope scope) {
size_t idx = 0;
switch (scope) {
case SentryAuthorizableScope::Scope::SERVER:
idx = 0;
break;
case SentryAuthorizableScope::Scope::DATABASE:
idx = 1;
break;
case SentryAuthorizableScope::Scope::TABLE:
idx = 2;
break;
case SentryAuthorizableScope::Scope::COLUMN:
idx = 3;
break;
default:
LOG(DFATAL) << "unexpected scope: " << static_cast<int16_t>(scope);
break;
}
return idx;
}
// Returns a unique string key for the given authorizable, at the given scope.
// The authorizable must be well-formed at the given scope.
string GetKey(const string& server,
const string& db,
const string& table,
const string& column,
SentryAuthorizableScope::Scope scope) {
DCHECK(!server.empty());
switch (scope) {
case SentryAuthorizableScope::SERVER:
return server;
case SentryAuthorizableScope::DATABASE:
DCHECK(!db.empty());
return Substitute("$0/$1", server, db);
case SentryAuthorizableScope::TABLE:
DCHECK(!db.empty());
DCHECK(!table.empty());
return Substitute("$0/$1/$2", server, db, table);
case SentryAuthorizableScope::COLUMN:
DCHECK(!db.empty());
DCHECK(!table.empty());
DCHECK(!column.empty());
return Substitute("$0/$1/$2/$3", server, db, table, column);
default:
LOG(DFATAL) << "not reachable";
break;
}
return "";
}
} // anonymous namespace
SentryPrivilegesBranch::SentryPrivilegesBranch(
const ::sentry::TSentryAuthorizable& authorizable,
const TListSentryPrivilegesResponse& response) {
DoInit(authorizable, response);
}
size_t SentryPrivilegesBranch::memory_footprint() const {
size_t res = kudu_malloc_usable_size(this);
// This is a simple approximation: the exact information could be available
// from the allocator of std::vector and std::string.
res += privileges_.capacity() * sizeof(AuthorizablePrivileges);
for (const auto& p : privileges_) {
res += p.db_name.capacity();
res += p.table_name.capacity();
res += p.column_name.capacity();
res += sizeof(decltype(p.allowed_actions));
}
return res;
}
void SentryPrivilegesBranch::Merge(const SentryPrivilegesBranch& other) {
std::copy(other.privileges_.begin(), other.privileges_.end(),
std::back_inserter(privileges_));
}
void SentryPrivilegesBranch::Split(
SentryPrivilegesBranch* other_scope_db,
SentryPrivilegesBranch* other_scope_table) const {
SentryPrivilegesBranch scope_db;
SentryPrivilegesBranch scope_table;
for (const auto& e : privileges_) {
switch (e.scope) {
case SentryAuthorizableScope::SERVER:
case SentryAuthorizableScope::DATABASE:
scope_db.privileges_.emplace_back(e);
break;
case SentryAuthorizableScope::TABLE:
case SentryAuthorizableScope::COLUMN:
scope_table.privileges_.emplace_back(e);
break;
default:
LOG(DFATAL) << "not reachable";
break;
}
}
*other_scope_db = std::move(scope_db);
*other_scope_table = std::move(scope_table);
}
void SentryPrivilegesBranch::DoInit(
const ::sentry::TSentryAuthorizable& authorizable,
const TListSentryPrivilegesResponse& response) {
unordered_map<string, AuthorizablePrivileges> privileges_map;
for (const auto& privilege_resp : response.privileges) {
SentryAuthorizableScope::Scope scope;
SentryAction::Action action;
if (!SentryPrivilegesFetcher::SentryPrivilegeIsWellFormed(
privilege_resp, authorizable, &scope, &action)) {
VLOG(1) << "ignoring privilege response: " << privilege_resp;
continue;
}
const auto& db = privilege_resp.dbName;
const auto& table = privilege_resp.tableName;
const auto& column = privilege_resp.columnName;
const string authorizable_key = GetKey(privilege_resp.serverName,
db, table, column, scope);
auto& privilege = LookupOrInsert(&privileges_map, authorizable_key,
AuthorizablePrivileges(scope, db, table, column));
InsertIfNotPresent(&privilege.allowed_actions, action);
if (action == SentryAction::ALL || action == SentryAction::OWNER) {
privilege.all_with_grant =
(privilege_resp.grantOption == TSentryGrantOption::ENABLED);
}
if (VLOG_IS_ON(1)) {
if (action != SentryAction::ALL && action != SentryAction::OWNER &&
privilege_resp.grantOption == TSentryGrantOption::ENABLED) {
VLOG(1) << "ignoring ENABLED grant option for unknown action: "
<< static_cast<int16_t>(action);
}
}
}
EmplaceValuesFromMap(std::move(privileges_map), &privileges_);
}
SentryPrivilegesFetcher::SentryPrivilegesFetcher(
scoped_refptr<MetricEntity> metric_entity)
: metric_entity_(std::move(metric_entity)) {
if (metric_entity_) {
std::unique_ptr<SentryClientMetrics> metrics(
new SentryClientMetrics(metric_entity_));
sentry_client_.SetMetrics(std::move(metrics));
}
}
Status SentryPrivilegesFetcher::Start() {
// The semantics of SentryAuthzProvider's Start()/Stop() don't guarantee
// immutability of the Sentry service's end-point between restarts. So, since
// the information in the cache might become irrelevant after restarting
// 'sentry_client_' with different Sentry address, it makes sense to clear
// the cache of all accumulated entries.
ResetCache();
vector<HostPort> addresses;
RETURN_NOT_OK(HostPort::ParseStringsWithScheme(
FLAGS_sentry_service_rpc_addresses,
SentryClient::kDefaultSentryPort,
&addresses));
thrift::ClientOptions options;
options.enable_kerberos = boost::iequals(
FLAGS_sentry_service_security_mode, "kerberos");
options.service_principal =
FLAGS_sentry_service_kerberos_principal;
options.send_timeout = MonoDelta::FromSeconds(
FLAGS_sentry_service_send_timeout_seconds);
options.recv_timeout = MonoDelta::FromSeconds(
FLAGS_sentry_service_recv_timeout_seconds);
options.conn_timeout = MonoDelta::FromSeconds(
FLAGS_sentry_service_conn_timeout_seconds);
options.max_buf_size =
FLAGS_sentry_service_max_message_size_bytes;
options.retry_count =
FLAGS_sentry_service_retry_count;
return sentry_client_.Start(std::move(addresses), std::move(options));
}
void SentryPrivilegesFetcher::Stop() {
sentry_client_.Stop();
}
Status SentryPrivilegesFetcher::ResetCache() {
const auto cache_capacity_bytes =
FLAGS_sentry_privileges_cache_capacity_mb * 1024 * 1024;
shared_ptr<PrivilegeCache> new_cache;
if (cache_capacity_bytes != 0) {
const auto cache_entry_ttl = MonoDelta::FromSeconds(
FLAGS_authz_token_validity_seconds *
FLAGS_sentry_privileges_cache_ttl_factor);
    // Deliberately left uninitialized: an unset scrubbing period disables
    // the periodic scrubbing task in the cache.
    MonoDelta cache_scrubbing_period;
if (FLAGS_sentry_privileges_cache_scrubbing_period_sec > 0) {
cache_scrubbing_period = std::min(cache_entry_ttl, MonoDelta::FromSeconds(
FLAGS_sentry_privileges_cache_scrubbing_period_sec));
}
new_cache = make_shared<PrivilegeCache>(
cache_capacity_bytes, cache_entry_ttl, cache_scrubbing_period,
FLAGS_sentry_privileges_cache_max_scrubbed_entries_per_pass,
"sentry-privileges-ttl-cache");
if (metric_entity_) {
unique_ptr<SentryPrivilegesCacheMetrics> metrics(
new SentryPrivilegesCacheMetrics(metric_entity_));
new_cache->SetMetrics(std::move(metrics));
}
}
{
std::lock_guard<rw_spinlock> l(cache_lock_);
cache_ = new_cache;
}
return Status::OK();
}
Status SentryPrivilegesFetcher::GetSentryPrivileges(
SentryAuthorizableScope::Scope requested_scope,
const string& table_ident,
const string& user,
SentryCaching caching,
SentryPrivilegesBranch* privileges) {
Slice db_slice;
Slice table_slice;
RETURN_NOT_OK(ParseHiveTableIdentifier(table_ident, &db_slice, &table_slice));
DCHECK(!table_slice.empty());
DCHECK(!db_slice.empty());
const string table = table_slice.ToString();
const string db = db_slice.ToString();
// 1. Put together the requested authorizable.
TSentryAuthorizable authorizable;
RETURN_NOT_OK(GetAuthorizable(db, table, requested_scope, &authorizable));
if (PREDICT_FALSE(requested_scope == SentryAuthorizableScope::SERVER &&
!IsGTest())) {
// A request for an authorizable of the scope wider than DATABASE is served,
// but the response from Sentry is not cached. With current privilege
// scheme, SentryPrivilegesFetcher is not expected to request authorizables
// of the SERVER scope unless this method is called from test code.
LOG(DFATAL) << Substitute(
"requesting privileges of the SERVER scope from Sentry "
"on authorizable '$0' for user '$1'", table_ident, user);
}
// Not expecting requests for authorizables of the scope narrower than TABLE,
// even in tests.
DCHECK_NE(SentryAuthorizableScope::COLUMN, requested_scope);
const AuthzInfoKey requested_info(user, authorizable);
// Do not query Sentry for authz scopes narrower than 'TABLE'.
const auto& requested_key = requested_info.GetKey(SentryAuthorizableScope::TABLE);
const auto& requested_key_seq = requested_info.key_sequence();
// 2. Check the cache to see if it contains the requested privileges.
// Copy the shared pointer to the cache. That's necessary because:
// * the cache_ member may be reset by concurrent ResetCache()
// * TTLCache is based on Cache that doesn't allow for outstanding handles
// if the cache itself destructed (in this case, goes out of scope).
shared_ptr<PrivilegeCache> cache;
{
shared_lock<rw_spinlock> l(cache_lock_);
cache = cache_;
}
vector<typename PrivilegeCache::EntryHandle> handles;
handles.reserve(AuthzInfoKey::kKeySequenceMaxSize);
if (PREDICT_TRUE(cache)) {
for (const auto& e : requested_key_seq) {
auto handle = cache->Get(e);
VLOG(3) << Substitute("'$0': '$1' key lookup", requested_key, e);
if (!handle) {
continue;
}
VLOG(2) << Substitute("'$0': '$1' key found", requested_key, e);
handles.emplace_back(std::move(handle));
}
}
// If the cache contains all the necessary information, repackage the
// cached information and return as the result.
if (handles.size() == requested_key_seq.size()) {
SentryPrivilegesBranch result;
for (const auto& e : handles) {
DCHECK(e);
result.Merge(e.value());
}
*privileges = std::move(result);
return Status::OK();
}
// 3. The required privileges do not exist in the cache. Fetch them from
// Sentry.
// Narrow the scope of the authorizable to limit the number of privileges
// sent back from Sentry to be relevant to the provided table.
NarrowAuthzScopeForFetch(db, table, &authorizable);
const AuthzInfoKey full_authz_info(user, authorizable);
const string& full_key = full_authz_info.GetKey(SentryAuthorizableScope::TABLE);
Synchronizer sync;
bool is_first_request = false;
  // The result (i.e. the retrieved information on privileges) might be used
// independently by multiple threads. The shared ownership approach simplifies
// passing the information around.
shared_ptr<SentryPrivilegesBranch> fetched_privileges;
{
std::lock_guard<simple_spinlock> l(pending_requests_lock_);
auto& pending_request = LookupOrEmplace(&pending_requests_,
full_key, SentryRequestsInfo());
// Is the queue of pending requests for the same key empty?
// If yes, that's the first request being sent out.
is_first_request = pending_request.callbacks.empty();
pending_request.callbacks.emplace_back(sync.AsStatusCallback());
if (is_first_request) {
DCHECK(!pending_request.result);
pending_request.result = make_shared<SentryPrivilegesBranch>();
}
fetched_privileges = pending_request.result;
}
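  // Only the first requester for a given key performs the actual RPC to
  // Sentry; concurrent callers piggy-back on that in-flight request via the
  // callbacks registered above and share 'fetched_privileges' on completion.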
if (!is_first_request) {
TRACE("Waiting for in-flight request to Sentry");
RETURN_NOT_OK(sync.Wait());
*privileges = *fetched_privileges;
return Status::OK();
}
TRACE("Fetching privileges from Sentry");
const auto s = FetchPrivilegesFromSentry(FLAGS_kudu_service_name,
user, authorizable,
fetched_privileges.get());
// 4. Cache the privileges from Sentry.
if (s.ok() && PREDICT_TRUE(cache)) {
// Put the result into the cache. Negative results (i.e. errors) are not
// cached. Split the information on privileges into at most two cache
// entries, for authorizables of scope:
// * SERVER, DATABASE
// * TABLE, COLUMN
//
// From this perspective, privileges on a corresponding authorizable of the
// DATABASE scope might be cached as a by-product when the original request
// comes for an authorizable of the TABLE scope.
SentryPrivilegesBranch priv_srv_db;
SentryPrivilegesBranch priv_table_column;
fetched_privileges->Split(&priv_srv_db, &priv_table_column);
if (requested_scope != SentryAuthorizableScope::SERVER) {
{
unique_ptr<SentryPrivilegesBranch> result_ptr(
new SentryPrivilegesBranch(std::move(priv_srv_db)));
const auto& db_key = full_authz_info.GetKey(SentryAuthorizableScope::DATABASE);
const auto result_footprint =
result_ptr->memory_footprint() + db_key.capacity();
cache->Put(db_key, std::move(result_ptr), result_footprint);
VLOG(2) << Substitute(
"added entry of size $0 bytes for key '$1' (server-database scope)",
result_footprint, db_key);
}
if (caching == ALL) {
unique_ptr<SentryPrivilegesBranch> result_ptr(
new SentryPrivilegesBranch(std::move(priv_table_column)));
const auto& table_key = full_authz_info.GetKey(SentryAuthorizableScope::TABLE);
const auto result_footprint =
result_ptr->memory_footprint() + table_key.capacity();
cache->Put(table_key, std::move(result_ptr), result_footprint);
VLOG(2) << Substitute(
"added entry of size $0 bytes for key '$1' (table-column scope)",
result_footprint, table_key);
}
}
}
// 5. Run any pending callbacks and return.
SentryRequestsInfo info;
{
std::lock_guard<simple_spinlock> l(pending_requests_lock_);
info = EraseKeyReturnValuePtr(&pending_requests_, full_key);
}
CHECK_LE(1, info.callbacks.size());
for (auto& cb : info.callbacks) {
cb(s);
}
RETURN_NOT_OK(s);
*privileges = *fetched_privileges;
return Status::OK();
}
// In addition to sanity checking of the contents of TSentryPrivilege in
// 'privilege', this function has DCHECKs to spot programmer's errors
// with regard to correctly setting fields of the 'requested_authorizable'
// parameter.
bool SentryPrivilegesFetcher::SentryPrivilegeIsWellFormed(
const TSentryPrivilege& privilege,
const TSentryAuthorizable& requested_authorizable,
SentryAuthorizableScope::Scope* scope,
SentryAction::Action* action) {
DCHECK_EQ(FLAGS_server_name, requested_authorizable.server);
DCHECK(!requested_authorizable.server.empty());
DCHECK(requested_authorizable.column.empty());
// A requested table must be accompanied by a database.
bool authorizable_has_db = !requested_authorizable.db.empty();
bool authorizable_has_table = !requested_authorizable.table.empty();
DCHECK((authorizable_has_db && authorizable_has_table) || !authorizable_has_table);
// Ignore anything that isn't a Kudu-related privilege.
SentryAuthorizableScope granted_scope;
SentryAction granted_action;
Status s = SentryAuthorizableScope::FromString(privilege.privilegeScope, &granted_scope)
.AndThen([&] {
return SentryAction::FromString(privilege.action, &granted_action);
});
if (!s.ok()) {
return false;
}
// Make sure that there aren't extraneous fields set in the privilege.
for (const auto& empty_field : ExpectedEmptyFields(granted_scope.scope())) {
switch (empty_field) {
case SentryAuthorizableScope::COLUMN:
if (!privilege.columnName.empty()) {
return false;
}
break;
case SentryAuthorizableScope::TABLE:
if (!privilege.tableName.empty()) {
return false;
}
break;
case SentryAuthorizableScope::DATABASE:
if (!privilege.dbName.empty()) {
return false;
}
break;
case SentryAuthorizableScope::SERVER:
if (!privilege.serverName.empty()) {
return false;
}
break;
default:
LOG(DFATAL) << Substitute("Granted privilege has invalid scope: $0",
sentry::ScopeToString(granted_scope.scope()));
}
}
// Make sure that all expected fields are set, and that they match those in
// the requested authorizable. Sentry authorizables are case-insensitive
// due to the properties of Kudu-HMS integration.
for (const auto& nonempty_field : ExpectedNonEmptyFields(granted_scope.scope())) {
switch (nonempty_field) {
case SentryAuthorizableScope::COLUMN:
if (!privilege.__isset.columnName || privilege.columnName.empty()) {
return false;
}
break;
case SentryAuthorizableScope::TABLE:
if (!privilege.__isset.tableName || privilege.tableName.empty() ||
(authorizable_has_table &&
!boost::iequals(privilege.tableName, requested_authorizable.table))) {
return false;
}
break;
case SentryAuthorizableScope::DATABASE:
if (!privilege.__isset.dbName || privilege.dbName.empty() ||
(authorizable_has_db &&
!boost::iequals(privilege.dbName, requested_authorizable.db))) {
return false;
}
break;
case SentryAuthorizableScope::SERVER:
if (privilege.serverName.empty() ||
!boost::iequals(privilege.serverName, requested_authorizable.server)) {
return false;
}
break;
default:
LOG(DFATAL) << Substitute("Granted privilege has invalid scope: $0",
sentry::ScopeToString(granted_scope.scope()));
}
}
*scope = granted_scope.scope();
*action = granted_action.action();
return true;
}
const AuthorizableScopesSet& SentryPrivilegesFetcher::ExpectedEmptyFields(
SentryAuthorizableScope::Scope scope) {
static const AuthorizableScopesSet kServerFields{ SentryAuthorizableScope::DATABASE,
SentryAuthorizableScope::TABLE,
SentryAuthorizableScope::COLUMN };
static const AuthorizableScopesSet kDbFields{ SentryAuthorizableScope::TABLE,
SentryAuthorizableScope::COLUMN };
static const AuthorizableScopesSet kTableFields{ SentryAuthorizableScope::COLUMN };
static const AuthorizableScopesSet kColumnFields{};
switch (scope) {
case SentryAuthorizableScope::SERVER:
return kServerFields;
case SentryAuthorizableScope::DATABASE:
return kDbFields;
case SentryAuthorizableScope::TABLE:
return kTableFields;
case SentryAuthorizableScope::COLUMN:
return kColumnFields;
default:
LOG(DFATAL) << "not reachable";
}
return kColumnFields;
}
const AuthorizableScopesSet& SentryPrivilegesFetcher::ExpectedNonEmptyFields(
SentryAuthorizableScope::Scope scope) {
static const AuthorizableScopesSet kColumnFields{ SentryAuthorizableScope::SERVER,
SentryAuthorizableScope::DATABASE,
SentryAuthorizableScope::TABLE,
SentryAuthorizableScope::COLUMN };
static const AuthorizableScopesSet kTableFields{ SentryAuthorizableScope::SERVER,
SentryAuthorizableScope::DATABASE,
SentryAuthorizableScope::TABLE };
static const AuthorizableScopesSet kDbFields{ SentryAuthorizableScope::SERVER,
SentryAuthorizableScope::DATABASE };
static const AuthorizableScopesSet kServerFields{ SentryAuthorizableScope::SERVER };
switch (scope) {
case SentryAuthorizableScope::COLUMN:
return kColumnFields;
case SentryAuthorizableScope::TABLE:
return kTableFields;
case SentryAuthorizableScope::DATABASE:
return kDbFields;
case SentryAuthorizableScope::SERVER:
return kServerFields;
default:
LOG(DFATAL) << "not reachable";
}
return kColumnFields;
}
Status SentryPrivilegesFetcher::FetchPrivilegesFromSentry(
const string& service_name,
const string& user,
const TSentryAuthorizable& authorizable,
SentryPrivilegesBranch* result) {
TListSentryPrivilegesRequest request;
request.__set_requestorUserName(service_name);
request.__set_principalName(user);
request.__set_authorizableHierarchy(authorizable);
TListSentryPrivilegesResponse response;
RETURN_NOT_OK(sentry_client_.Execute(
[&] (SentryClient* client) {
return client->ListPrivilegesByUser(request, &response);
}));
*result = SentryPrivilegesBranch(authorizable, response);
return Status::OK();
}
} // namespace master
} // namespace kudu
|
{"hexsha": "b9d35b7faf09a48e842e8ac2107e1072c1e1ee2e", "size": 37765, "ext": "cc", "lang": "C++", "max_stars_repo_path": "src/kudu/master/sentry_privileges_fetcher.cc", "max_stars_repo_name": "toddlipcon/kudu", "max_stars_repo_head_hexsha": "e5ee5e08c68c9c661ce676ad629b4ad3abf57def", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6.0, "max_stars_repo_stars_event_min_datetime": "2020-05-12T02:18:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-15T20:39:21.000Z", "max_issues_repo_path": "src/kudu/master/sentry_privileges_fetcher.cc", "max_issues_repo_name": "toddlipcon/kudu", "max_issues_repo_head_hexsha": "e5ee5e08c68c9c661ce676ad629b4ad3abf57def", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/kudu/master/sentry_privileges_fetcher.cc", "max_forks_repo_name": "toddlipcon/kudu", "max_forks_repo_head_hexsha": "e5ee5e08c68c9c661ce676ad629b4ad3abf57def", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.4336188437, "max_line_length": 92, "alphanum_fraction": 0.7008870647, "num_tokens": 8520}
|
###############################################################################
# WaterTAP Copyright (c) 2021, The Regents of the University of California,
# through Lawrence Berkeley National Laboratory, Oak Ridge National
# Laboratory, National Renewable Energy Laboratory, and National Energy
# Technology Laboratory (subject to receipt of any required approvals from
# the U.S. Dept. of Energy). All rights reserved.
#
# Please see the files COPYRIGHT.md and LICENSE.md for full copyright and license
# information, respectively. These files are also available online at the URL
# "https://github.com/watertap-org/watertap/"
#
###############################################################################
import pytest
from pyomo.environ import (
ConcreteModel,
Block,
Var,
Constraint,
TerminationCondition,
SolverStatus,
value,
assert_optimal_termination,
SolverFactory,
Expression,
TransformationFactory,
units as pyunits,
)
from pyomo.network import Arc, Port
from idaes.core import FlowsheetBlock
from idaes.core.solvers import get_solver
from idaes.core.util.model_statistics import degrees_of_freedom
from idaes.core.util.initialization import solve_indexed_blocks, propagate_state
from idaes.models.unit_models import Mixer, Separator, Product, Feed
from idaes.models.unit_models.mixer import MomentumMixingType
from pyomo.util.check_units import assert_units_consistent
from idaes.core.util.scaling import (
unscaled_variables_generator,
unscaled_constraints_generator,
)
from watertap.core.util.initialization import assert_degrees_of_freedom
from watertap.examples.flowsheets.case_studies.wastewater_resource_recovery.metab.metab import (
main,
build,
set_operating_conditions,
initialize_system,
solve,
add_costing,
display_costing,
display_results,
)
solver = get_solver()
# -----------------------------------------------------------------------------
class TestMetabFlowsheet:
@pytest.fixture(scope="class")
def system_frame(self):
m = build()
return m
@pytest.mark.unit
def test_build(self, system_frame):
m = system_frame
assert_degrees_of_freedom(m, 20)
assert_units_consistent(m)
@pytest.mark.component
def test_set_operating_conditions(self, system_frame):
m = system_frame
set_operating_conditions(m)
# check feed
assert pytest.approx(0.3264, rel=1e-3) == value(
m.fs.feed.flow_mass_comp[0, "H2O"]
)
assert pytest.approx(2.221e-3, rel=1e-3) == value(
m.fs.feed.flow_mass_comp[0, "cod"]
)
assert pytest.approx(0, abs=1e-6) == value(
m.fs.feed.flow_mass_comp[0, "hydrogen"]
)
assert pytest.approx(0, abs=1e-6) == value(
m.fs.feed.flow_mass_comp[0, "methane"]
)
# check one fixed variable on hydrogen and methane reactor
assert pytest.approx(0.101, rel=1e-3) == value(
m.fs.metab_methane.generation_ratio["cod_to_methane", "methane"]
)
assert pytest.approx(5.03e-3, rel=1e-3) == value(
m.fs.metab_hydrogen.generation_ratio["cod_to_hydrogen", "hydrogen"]
)
@pytest.mark.component
def test_initialize(self, system_frame):
m = system_frame
initialize_system(m)
# check products
assert pytest.approx(0.32637, rel=1e-3) == value(
m.fs.product_H2O.flow_mass_comp[0, "H2O"]
)
assert pytest.approx(1.033e-4, rel=1e-3) == value(
m.fs.product_methane.flow_mass_comp[0, "methane"]
)
assert pytest.approx(2.468e-6, rel=1e-3) == value(
m.fs.product_hydrogen.flow_mass_comp[0, "hydrogen"]
)
@pytest.mark.component
def test_solve(self, system_frame):
m = system_frame
results = solve(m)
assert_optimal_termination(results)
# check products
assert pytest.approx(0.32637, rel=1e-3) == value(
m.fs.product_H2O.flow_mass_comp[0, "H2O"]
)
assert pytest.approx(1.033e-4, rel=1e-3) == value(
m.fs.product_methane.flow_mass_comp[0, "methane"]
)
assert pytest.approx(2.468e-6, rel=1e-3) == value(
m.fs.product_hydrogen.flow_mass_comp[0, "hydrogen"]
)
@pytest.mark.component
def test_costing(self, system_frame):
m = system_frame
add_costing(m)
m.fs.costing.initialize()
results = solve(m)
assert_optimal_termination(results)
# check values
assert pytest.approx(2.6895e3, rel=1e-3) == value(m.fs.costing.LCOW)
assert pytest.approx(2.716e4, rel=1e-3) == value(m.fs.costing.LCOH)
assert pytest.approx(7.867e3, rel=1e-3) == value(m.fs.costing.LCOM)
@pytest.mark.component
def test_display(self, system_frame):
m = system_frame
display_results(m)
display_costing(m)
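# Marker-based selection (names taken from the decorators above): e.g.
# "pytest -m unit" runs only the cheap structural checks, while
# "pytest -m component" also runs the initialization/solve/costing tests.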
|
{"hexsha": "600b7147e49b817ff4e51e080847cbbf43710ef8", "size": 4984, "ext": "py", "lang": "Python", "max_stars_repo_path": "watertap/examples/flowsheets/case_studies/wastewater_resource_recovery/metab/tests/test_metab.py", "max_stars_repo_name": "k1nshuk/watertap", "max_stars_repo_head_hexsha": "b565687409e4001831486221250b83829b8e354e", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "watertap/examples/flowsheets/case_studies/wastewater_resource_recovery/metab/tests/test_metab.py", "max_issues_repo_name": "k1nshuk/watertap", "max_issues_repo_head_hexsha": "b565687409e4001831486221250b83829b8e354e", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "watertap/examples/flowsheets/case_studies/wastewater_resource_recovery/metab/tests/test_metab.py", "max_forks_repo_name": "k1nshuk/watertap", "max_forks_repo_head_hexsha": "b565687409e4001831486221250b83829b8e354e", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.7894736842, "max_line_length": 96, "alphanum_fraction": 0.6370385233, "include": true, "reason": "from pyomo", "num_tokens": 1227}
|
# # Heat Equation
#
# This example demonstrates how to combine `OrdinaryDiffEq` with `DiffEqOperators` to solve a time dependent PDE.
# We consider the heat equation on the unit interval, with Dirichlet boundary conditions:
# ∂ₜu = Δu
# u(x=0,t) = a
# u(x=1,t) = b
# u(x, t=0) = u₀(x)
#
# For `a = b = 0` and `u₀(x) = sin(2πx)` a solution is given by:
u_analytic(x, t) = sin(2*π*x) * exp(-t*(2*π)^2)
#
# We want to reproduce it numerically
#
using DiffEqOperators, OrdinaryDiffEq
nknots = 100
h = 1.0/(nknots+1)
knots = range(h, step=h, length=nknots)
ord_deriv = 2
ord_approx = 2
const Δ = CenteredDifference(ord_deriv, ord_approx, h, nknots)
const bc = Dirichlet0BC(Float64)
t0 = 0.0
t1 = 0.03
u0 = u_analytic.(knots, t0)
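# Right-hand side of ∂ₜu = Δu: `bc*u` lazily extends u with the (zero)
# Dirichlet boundary values, and `Δ` then applies the centered
# second-derivative stencil to the extended vector.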
step(u,p,t) = Δ*bc*u
prob = ODEProblem(step, u0, (t0, t1))
alg = KenCarp4()
sol = solve(prob, alg)
using Test
@test u_analytic.(knots, t1) ≈ sol[end] rtol=1e-3
|
{"hexsha": "16bd41ffbc1eee3ee81a4dce64cb6f34b4742de4", "size": 891, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "examples/heat_equation.jl", "max_stars_repo_name": "FuZhiyu/DiffEqOperators.jl", "max_stars_repo_head_hexsha": "a55b696ec6f4fe8b0307a2dcfec06665a382a7d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/heat_equation.jl", "max_issues_repo_name": "FuZhiyu/DiffEqOperators.jl", "max_issues_repo_head_hexsha": "a55b696ec6f4fe8b0307a2dcfec06665a382a7d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/heat_equation.jl", "max_forks_repo_name": "FuZhiyu/DiffEqOperators.jl", "max_forks_repo_head_hexsha": "a55b696ec6f4fe8b0307a2dcfec06665a382a7d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.0810810811, "max_line_length": 113, "alphanum_fraction": 0.6722783389, "num_tokens": 352}
|
import numpy as np
from guesswhat.statistics.abstract_plotter import *
import seaborn as sns
import matplotlib.pyplot as plt
import collections
class QuestionVsObject(AbstractPlotter):
def __init__(self, path, games, logger, suffix):
super(QuestionVsObject, self).__init__(path, self.__class__.__name__, suffix)
ratio_q_object = []
for game in games:
no_object = len(game.objects)
no_question = len(game.questions)
ratio_q_object.append([no_object,no_question])
ratio_q_object = np.array(ratio_q_object)
sns.set(style="white")
x = np.linspace(3, 20, 80)
counter = collections.defaultdict(list)
for k, val in ratio_q_object:
counter[k] += [val]
        arr = np.zeros([4, 21])
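        # arr rows: 0 = object count, 1 = mean number of questions,
        # 2 = standard deviation, 3 = half-width of the 95% confidence interval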
for k, val in counter.items():
if len(val) > 0:
arr[0,k] = k
arr[1,k] = np.mean(val)
# Std
arr[2, k] = np.std(val)
# confidence interval 95%
arr[3,k] = 1.95*np.std(val)/np.sqrt(len(val))
#plt.plot(arr[0,:],arr[1,:] , 'b.', label="Human behavior")
sns.regplot(x=ratio_q_object[:, 0], y=ratio_q_object[:, 1], x_ci=None, x_bins=20, order=4, label="Human behavior", marker="o", line_kws={'linestyle':'-'})
plt.fill_between(x=arr[0,:], y1=arr[1,:]-arr[2,:], y2=arr[1,:]+arr[2,:], alpha=0.2)
sns.regplot (x=x, y=np.log2(x), order=6, scatter=False, label="y = log2(x)", line_kws={'linestyle':'--'})
f = sns.regplot(x=x, y=x , order=1, scatter=False, label="y = x" , line_kws={'linestyle':'--'})
f.legend(loc="best", fontsize='x-large')
f.set_xlim(3,20)
f.set_ylim(0,20)
f.set_xlabel("Number of objects", {'size':'14'})
f.set_ylabel("Number of questions", {'size':'14'})
|
{"hexsha": "525e2fe07b4bc3ad353c325593113d834f61c2ce", "size": 1893, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/guesswhat/statistics/question_object.py", "max_stars_repo_name": "devineproject/guesswhat", "max_stars_repo_head_hexsha": "512e136c868ceccf047cdba243cf46037d4037fe", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 72, "max_stars_repo_stars_event_min_datetime": "2017-07-07T04:40:32.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-05T13:00:02.000Z", "max_issues_repo_path": "src/guesswhat/statistics/question_object.py", "max_issues_repo_name": "devineproject/guesswhat", "max_issues_repo_head_hexsha": "512e136c868ceccf047cdba243cf46037d4037fe", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 25, "max_issues_repo_issues_event_min_datetime": "2017-06-30T18:35:24.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-21T12:01:09.000Z", "max_forks_repo_path": "src/guesswhat/statistics/question_object.py", "max_forks_repo_name": "devineproject/guesswhat", "max_forks_repo_head_hexsha": "512e136c868ceccf047cdba243cf46037d4037fe", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 40, "max_forks_repo_forks_event_min_datetime": "2017-06-30T12:13:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-25T07:34:54.000Z", "avg_line_length": 30.0476190476, "max_line_length": 163, "alphanum_fraction": 0.5625990491, "include": true, "reason": "import numpy", "num_tokens": 518}
|
!
! CalculiX - A 3-dimensional finite element program
! Copyright (C) 1998-2015 Guido Dhondt
!
! This program is free software; you can redistribute it and/or
! modify it under the terms of the GNU General Public License as
! published by the Free Software Foundation(version 2);
!
!
! This program is distributed in the hope that it will be useful,
! but WITHOUT ANY WARRANTY; without even the implied warranty of
! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
! GNU General Public License for more details.
!
! You should have received a copy of the GNU General Public License
! along with this program; if not, write to the Free Software
! Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
!
subroutine fillknotmpc(co,ipompc,nodempc,coefmpc,labmpc,
& nmpc,nmpcold,mpcfree,idim,e1,e2,t1)
!
! updates the coefficients in nonlinear MPC's
!
implicit none
!
character*20 labmpc(*)
!
integer ipompc(*),nodempc(3,*),irefnode,irotnode,idir,idim,n,
& nmpc,index,ii,inode,nmpcold,iexpnode,irefnodeprev,i,ndepnodes,
& matz,ier,j,indexnext,node,mpcfree,nodeprev
!
real*8 co(3,*),coefmpc(*),e(3,3,3),dc(3,3,3) ,sx,sy,sz,sxx,
& sxy,sxz,syy,syz,szz,s(3,3),w(3),z(3,3),fv1(3),fv2(3),e1(3),
& e2(3),t1(3),u2(3,3),u3(3,3)
!
! e_ijk symbol
!
data e /0.,0.,0.,0.,0.,-1.,0.,1.,0.,
& 0.,0.,1.,0.,0.,0.,-1.,0.,0.,
& 0.,-1.,0.,1.,0.,0.,0.,0.,0./
!
! dc_ijk=e_ikj
!
data dc /0.,0.,0.,0.,0.,1.,0.,-1.,0.,
& 0.,0.,-1.,0.,0.,0.,1.,0.,0.,
& 0.,1.,0.,-1.,0.,0.,0.,0.,0./
!
irefnodeprev=0
!
do ii=nmpcold+1,nmpc
if(labmpc(ii)(1:4).eq.'KNOT') then
!
! from labmpc: if idim=1: only 2-d elements
! if idim=3: at least one 1-d element
!
irefnode=nodempc(1,nodempc(3,ipompc(ii)))
!
if(irefnode.ne.irefnodeprev) then
!
! new knot
!
irefnodeprev=irefnode
read(labmpc(ii)(5:5),'(i1)') idim
!
! determine the area moments of inertia
!
sx=0.d0
sy=0.d0
sz=0.d0
sxx=0.d0
sxy=0.d0
sxz=0.d0
syy=0.d0
syz=0.d0
szz=0.d0
!
ndepnodes=0
!
nodeprev=0
do i=ii,nmpc
if(labmpc(i)(1:4).eq.'KNOT') then
if(nodempc(1,nodempc(3,ipompc(i))).eq.irefnode)then
!
! node belonging to the same knot
!
node=nodempc(1,ipompc(i))
!
if(node.ne.nodeprev) then
nodeprev=node
ndepnodes=ndepnodes+1
!
sx=sx+co(1,node)
sy=sy+co(2,node)
sz=sz+co(3,node)
sxx=sxx+co(1,node)*co(1,node)
sxy=sxy+co(1,node)*co(2,node)
sxz=sxz+co(1,node)*co(3,node)
syy=syy+co(2,node)*co(2,node)
syz=syz+co(2,node)*co(3,node)
szz=szz+co(3,node)*co(3,node)
endif
else
exit
endif
else
exit
endif
enddo
!
sxx=sxx-sx*sx/ndepnodes
sxy=sxy-sx*sy/ndepnodes
sxz=sxz-sx*sz/ndepnodes
syy=syy-sy*sy/ndepnodes
syz=syz-sy*sz/ndepnodes
szz=szz-sz*sz/ndepnodes
!
s(1,1)=sxx
s(1,2)=sxy
s(1,3)=sxz
s(2,1)=sxy
s(2,2)=syy
s(2,3)=syz
s(3,1)=sxz
s(3,2)=syz
s(3,3)=szz
!
! determining the eigenvalues
!
n=3
matz=1
ier=0
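!
!              rs is the (EISPACK-style) driver bundled with CalculiX that
!              returns the eigenvalues w and eigenvectors z of the real
!              symmetric matrix s (assumed semantics of the call below)
!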
call rs(n,n,s,w,matz,z,fv1,fv2,ier)
if(ier.ne.0) then
               write(*,*) '*ERROR in fillknotmpc while calculating the'
write(*,*) ' eigenvalues/eigenvectors'
call exit(201)
endif
!
! the eigenvalues are the moments of inertia w.r.t. the
! plane orthogonal to the eigenvector
!
! dimension=1 if the two lowest eigenvalues are zero
! dimension=2 if only the lowest eigenvalue is zero
! else dimension=3
!
! the dimension cannot exceed the maximum dimension of
! the elements connected to the knot (2d-element nodes:
! dimension 1, 1d-element nodes: dimension 3)
!
c write(*,*) 'fillknotmpc eigenvalues ',w(1),w(2),w(3)
if((w(1).lt.1.d-10).and.(w(2).lt.1.d-10)) then
idim=min(idim,1)
c idim=1
elseif(w(1).lt.1.d-10) then
idim=min(idim,2)
c idim=2
else
idim=min(idim,1)
c idim=3
endif
c write(*,*) 'fillknotmpc iref= ',irefnode,' idim= ',idim
!
! defining a local coordinate system for idim=2
!
if(idim.eq.2) then
do i=1,3
t1(i)=z(i,1)
e2(i)=z(i,2)
e1(i)=z(i,3)
enddo
!
! check whether e1-e2-t1 is a rhs system
!
if(t1(1)*(e1(2)*e2(3)-e1(3)*e2(2))-
& t1(2)*(e1(1)*e2(3)-e1(3)*e2(1))+
& t1(3)*(e1(1)*e2(2)-e1(2)*e2(1)).lt.0.d0) then
do i=1,3
t1(i)=-t1(i)
enddo
endif
c write(*,*) 't1 ',t1(1),t1(2),t1(3)
c write(*,*) 'e1 ',e1(1),e1(2),e1(3)
c write(*,*) 'e2 ',e2(1),e2(2),e2(3)
!
! storing t1 and e1 as coordinates of irotnode and
! iexpnode, respectively
!
iexpnode=nodempc(1,nodempc(3,nodempc(3,ipompc(ii))))
irotnode=
& nodempc(1,nodempc(3,nodempc(3,nodempc(3,ipompc(ii)))))
do i=1,3
co(i,irotnode)=t1(i)
co(i,iexpnode)=e1(i)
enddo
endif
endif
!
if((idim.eq.1).or.(idim.eq.3)) then
!
! knot on a line
!
labmpc(ii)(5:5)='1'
!
! dependent node
!
index=ipompc(ii)
inode=nodempc(1,index)
idir=nodempc(2,index)
!
! translation node
!
index=nodempc(3,index)
irefnode=nodempc(1,index)
!
! expansion node
!
index=nodempc(3,index)
iexpnode=nodempc(1,index)
coefmpc(index)=co(idir,irefnode)-co(idir,inode)
!
! rotation node
!
index=nodempc(3,index)
irotnode=nodempc(1,index)
!
! determining the coefficients of the rotational degrees
! of freedom
!
coefmpc(index)=dc(idir,1,1)*(co(1,irefnode)-co(1,inode))+
& dc(idir,2,1)*(co(2,irefnode)-co(2,inode))+
& dc(idir,3,1)*(co(3,irefnode)-co(3,inode))
!
index=nodempc(3,index)
coefmpc(index)=dc(idir,1,2)*(co(1,irefnode)-co(1,inode))+
& dc(idir,2,2)*(co(2,irefnode)-co(2,inode))+
& dc(idir,3,2)*(co(3,irefnode)-co(3,inode))
!
index=nodempc(3,index)
coefmpc(index)=dc(idir,1,3)*(co(1,irefnode)-co(1,inode))+
& dc(idir,2,3)*(co(2,irefnode)-co(2,inode))+
& dc(idir,3,3)*(co(3,irefnode)-co(3,inode))
!
elseif(idim.eq.2) then
!
! nodes of knot lie in a plane
!
labmpc(ii)(5:5)='2'
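!
!     build u2 = 2*e1*e1^T and u3 = 2*e2*e2^T, rank-one matrices
!     used below as coefficients of the in-plane expansion terms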
!
do i=1,3
do j=1,3
u2(i,j)=2.d0*e1(i)*e1(j)
u3(i,j)=2.d0*e2(i)*e2(j)
enddo
enddo
!
! dependent node
!
index=ipompc(ii)
inode=nodempc(1,index)
idir=nodempc(2,index)
!
! translation node
!
index=nodempc(3,index)
irefnode=nodempc(1,index)
!
!     expansion node (first term is amalgamated with the second
!     term since the coefficient matrix is zero)
!
index=nodempc(3,index)
iexpnode=nodempc(1,index)
nodempc(2,index)=2
coefmpc(index)=0.d0
indexnext=nodempc(3,index)
nodempc(3,index)=mpcfree
!
nodempc(1,mpcfree)=iexpnode
nodempc(2,mpcfree)=2
coefmpc(mpcfree)=u2(idir,1)*(co(1,irefnode)-co(1,inode))+
& u2(idir,2)*(co(2,irefnode)-co(2,inode))+
& u2(idir,3)*(co(3,irefnode)-co(3,inode))
mpcfree=nodempc(3,mpcfree)
!
nodempc(1,mpcfree)=iexpnode
nodempc(2,mpcfree)=3
coefmpc(mpcfree)=u3(idir,1)*(co(1,irefnode)-co(1,inode))+
& u3(idir,2)*(co(2,irefnode)-co(2,inode))+
& u3(idir,3)*(co(3,irefnode)-co(3,inode))
index=mpcfree
mpcfree=nodempc(3,mpcfree)
nodempc(3,index)=indexnext
!
! rotation node
!
index=indexnext
irotnode=nodempc(1,index)
!
! determining the coefficients of the rotational degrees
! of freedom
!
coefmpc(index)=dc(idir,1,1)*(co(1,irefnode)-co(1,inode))+
& dc(idir,2,1)*(co(2,irefnode)-co(2,inode))+
& dc(idir,3,1)*(co(3,irefnode)-co(3,inode))
!
index=nodempc(3,index)
coefmpc(index)=dc(idir,1,2)*(co(1,irefnode)-co(1,inode))+
& dc(idir,2,2)*(co(2,irefnode)-co(2,inode))+
& dc(idir,3,2)*(co(3,irefnode)-co(3,inode))
!
index=nodempc(3,index)
coefmpc(index)=dc(idir,1,3)*(co(1,irefnode)-co(1,inode))+
& dc(idir,2,3)*(co(2,irefnode)-co(2,inode))+
& dc(idir,3,3)*(co(3,irefnode)-co(3,inode))
!
endif
endif
enddo
!
return
end
|
{"hexsha": "2affb842a8b84cb990f32f114330f90265cefed1", "size": 10630, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "ccx_prool/CalculiX/ccx_2.9/src/fillknotmpc.f", "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ccx_prool/CalculiX/ccx_2.9/src/fillknotmpc.f", "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_forks_repo_path": "ccx_prool/CalculiX/ccx_2.9/src/fillknotmpc.f", "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "avg_line_length": 33.1152647975, "max_line_length": 72, "alphanum_fraction": 0.4464722484, "num_tokens": 3363}
|
[STATEMENT]
lemma db\<^sub>s\<^sub>s\<^sub>t_in_cases:
assumes "(t,s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t A I D)"
shows "(t,s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set A \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set A \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
(t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t A I D)
goal (1 subgoal):
1. (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set A \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
[PROOF STEP]
proof (induction A arbitrary: D)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. \<And>D. (t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t [] I D) \<Longrightarrow> (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set [] \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
2. \<And>a A D. \<lbrakk>\<And>D. (t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t A I D) \<Longrightarrow> (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set A \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I); (t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t (a # A) I D)\<rbrakk> \<Longrightarrow> (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set (a # A) \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
[PROOF STEP]
case (Cons a A)
[PROOF STATE]
proof (state)
this:
(t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t A I ?D) \<Longrightarrow> (t, s) \<in> set ?D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set A \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
(t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t (a # A) I D)
goal (2 subgoals):
1. \<And>D. (t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t [] I D) \<Longrightarrow> (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set [] \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
2. \<And>a A D. \<lbrakk>\<And>D. (t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t A I D) \<Longrightarrow> (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set A \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I); (t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t (a # A) I D)\<rbrakk> \<Longrightarrow> (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set (a # A) \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
[PROOF STEP]
thus ?case
[PROOF STATE]
proof (prove)
using this:
(t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t A I ?D) \<Longrightarrow> (t, s) \<in> set ?D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set A \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
(t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t (a # A) I D)
goal (1 subgoal):
1. (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set (a # A) \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
[PROOF STEP]
by (cases a) fastforce+
[PROOF STATE]
proof (state)
this:
(t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set (a # A) \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
goal (1 subgoal):
1. \<And>D. (t, s) \<in> set (db'\<^sub>s\<^sub>s\<^sub>t [] I D) \<Longrightarrow> (t, s) \<in> set D \<or> (\<exists>t' s'. insert\<langle>t',s'\<rangle> \<in> set [] \<and> t = t' \<cdot> I \<and> s = s' \<cdot> I)
[PROOF STEP]
qed simp
|
{"llama_tokens": 1602, "file": "Stateful_Protocol_Composition_and_Typing_Stateful_Strands", "length": 6}
|
from utils.formater import mc2pose6dof,kitti2pose6dof
from utils.formater import eular2rotcoord,vec_length
import numpy as np
import matplotlib.pyplot as plt
def norm(arr):
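    """Min-max normalize an array into the range [0, 1]."""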
    amax = arr.max()
    amin = arr.min()
    arr = (arr - amin) / (amax - amin)
    return arr
def drawer():
PLOT2=True
# poses_kitti = np.loadtxt("/home/roit/datasets/kitti_odo_poses/06.txt")
# poses_6dof = kitti2pose6dof(poses_kitti,order='xyz')
# out_str = 'kitti_09'
# poses = np.loadtxt("/home/roit/datasets/mcv3/0001/time_poses.txt")
poses = np.loadtxt("../data_out/mcv2/0000/time_poses.txt")
poses_6dof = mc2pose6dof(poses)
out_str='mcrandom_01'
IS_NORM=True
position = poses_6dof[:, 3:] # xyz
eular = np.deg2rad(poses_6dof[:, :3]) # Nx3
rotcoord = eular2rotcoord(eular,'mc') # rpy
ts=[]
phis=[]
step =1
for i in range(step, len(position)-step):
delta_t = position[i+step]-position[i-step]
if IS_NORM:
delta_t/=vec_length(delta_t)
ts.append(delta_t)
x,y,z = rotcoord[i, 0,0], rotcoord[i, 1,0], rotcoord[i, 2,0]
delta_phi=np.array([x,y,z])
if IS_NORM:
delta_phi/=vec_length(delta_phi)
phis.append(delta_phi)
    Lambda = []  # np.array(t)*np.array(phi)
ts=np.array(ts)
phis=np.array(phis)
Lambda = (ts*phis).sum(1)
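    # Each entry of Lambda is the dot product <t_i, phi_i>; with IS_NORM both
    # vectors are unit length, so Lambda is the cosine of the angle between the
    # frame's translation direction and its rotation axis.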
hist,bins = np.histogram(Lambda,range=[-1,1],bins=100)
bins_hist = np.array([bins[:-1],hist])
np.savetxt('./{}_bins_hist.txt'.format(out_str),bins_hist,fmt='%.3f', delimiter=',')
np.savetxt('./{}_lambda.txt'.format(out_str),Lambda,fmt='%.3f',delimiter=',')
# Lambda /= (vec_length(ts)*vec_length(phis))
# Lambda =abs(Lambda)
#for i in range(len(t)):
# Lambda.append(abs(sum(t[i]*phi[i]/vec_length(t[i]*phi[i]))))
if PLOT2:
plt.figure(figsize=[8,4])
plt.subplot(1,2,1)
plt.plot(Lambda)
# plt.ylim([-1.5,1.5])
plt.subplot(1,2,2)
plt.plot(bins[:-1],hist)
else:
plt.plot(Lambda)
    plt.title(r"The $\lambda$ of a random motion sequence")
    plt.xlabel('Frames(n)')
    plt.ylabel(r'$\lambda$')
plt.scatter(x = [50,168,250],
y = [Lambda[50],Lambda[168],Lambda[250]],c='r')
# plt.grid()
plt.show()
if __name__ == '__main__':
drawer()
|
{"hexsha": "750fd216af19ca9922cd2c8e3cd93888d8938251", "size": 2387, "ext": "py", "lang": "Python", "max_stars_repo_path": "viz/lambda_draw.py", "max_stars_repo_name": "xdr940/trajectory", "max_stars_repo_head_hexsha": "d4bf9e0cc9584cffd78dc469d15fb70a9e19f8d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "viz/lambda_draw.py", "max_issues_repo_name": "xdr940/trajectory", "max_issues_repo_head_hexsha": "d4bf9e0cc9584cffd78dc469d15fb70a9e19f8d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "viz/lambda_draw.py", "max_forks_repo_name": "xdr940/trajectory", "max_forks_repo_head_hexsha": "d4bf9e0cc9584cffd78dc469d15fb70a9e19f8d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.2307692308, "max_line_length": 88, "alphanum_fraction": 0.5940511102, "include": true, "reason": "import numpy", "num_tokens": 737}
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
sfftk.unittests.test_readers
This testing module should have no side-effects because it only reads.
"""
from __future__ import division, print_function
import glob
import os
import struct
import sys
import unittest
import numpy
import random_words
import __init__ as tests
import ahds
from ..readers import amreader, mapreader, modreader, segreader, stlreader, surfreader
__author__ = "Paul K. Korir, PhD"
__email__ = "pkorir@ebi.ac.uk, paul.korir@gmail.com"
__date__ = "2017-05-15"
__updated__ = '2018-02-14'
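# NOTE: this suite relies on Python 2 idioms (dict.itervalues, assertItemsEqual,
# implicit relative imports), so it is expected to run under a Python 2 interpreter.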
rw = random_words.RandomWords()
# readers
class TestReaders_amreader(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.am_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.am')
cls.header, cls.segments_by_stream = amreader.get_data(cls.am_file)
def test_get_data(self):
"""Test the main entry point: get_data(...)"""
self.assertIsInstance(self.header, ahds.header.AmiraHeader)
self.assertIsInstance(self.segments_by_stream, numpy.ndarray)
self.assertGreaterEqual(len(self.segments_by_stream), 1)
def test_first_line_amiramesh(self):
"""test that it's declared as an AmiraMesh file"""
self.assertEqual(self.header.designation.filetype, 'AmiraMesh')
def test_first_line_binary_little_endian(self):
"""test that it is formatted as BINARY-LITTLE-ENDIAN"""
self.assertEqual(self.header.designation.format, 'BINARY-LITTLE-ENDIAN')
def test_first_line_version(self):
"""test that it is version 2.1"""
self.assertEqual(self.header.designation.version, '2.1')
def test_lattice_present(self):
"""test Lattice definition exists in definitions"""
self.assertTrue('Lattice' in self.header.definitions.attrs)
def test_materials_present(self):
"""test Materials exist in parameters"""
self.assertIsNotNone('Materials' in self.header.parameters.attrs)
def test_read_hxsurface(self):
"""Test handling of AmiraMesh hxsurface files"""
am_hxsurface_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data_hxsurface.am')
header, segments_by_stream = amreader.get_data(am_hxsurface_file)
self.assertIsInstance(header, ahds.header.AmiraHeader)
self.assertIsNone(segments_by_stream)
class TestReaders_mapreader(unittest.TestCase):
def setUp(self):
self.map_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.map')
def test_get_data(self):
"""Test the main entry point: get_data(...)"""
map_ = mapreader.get_data(self.map_file)
self.assertIsInstance(map_, mapreader.Map)
self.assertGreater(map_._nc, 0)
self.assertGreater(map_._nr, 0)
self.assertGreater(map_._ns, 0)
self.assertIn(map_._mode, range(5))
self.assertIsInstance(map_._ncstart, int)
self.assertIsInstance(map_._nrstart, int)
self.assertIsInstance(map_._nsstart, int)
self.assertGreater(map_._nx, 0)
self.assertGreater(map_._ny, 0)
self.assertGreater(map_._nz, 0)
self.assertGreater(map_._x_length, 0)
self.assertGreater(map_._y_length, 0)
self.assertGreater(map_._z_length, 0)
self.assertTrue(0 < map_._alpha < 180)
self.assertTrue(0 < map_._beta < 180)
self.assertTrue(0 < map_._gamma < 180)
self.assertIn(map_._mapc, range(1, 4))
self.assertIn(map_._mapr, range(1, 4))
self.assertIn(map_._maps, range(1, 4))
self.assertIsInstance(map_._amin, float)
self.assertIsInstance(map_._amax, float)
self.assertIsInstance(map_._amean, float)
self.assertIn(map_._ispg, range(1, 231))
self.assertTrue(map_._nsymbt % 80 == 0)
self.assertIn(map_._lskflg, range(2))
self.assertIsInstance(map_._s11, float)
self.assertIsInstance(map_._s12, float)
self.assertIsInstance(map_._s13, float)
self.assertIsInstance(map_._s21, float)
self.assertIsInstance(map_._s22, float)
self.assertIsInstance(map_._s23, float)
self.assertIsInstance(map_._s31, float)
self.assertIsInstance(map_._s32, float)
self.assertIsInstance(map_._s33, float)
self.assertIsInstance(map_._t1, float)
self.assertIsInstance(map_._t2, float)
self.assertIsInstance(map_._t3, float)
self.assertEqual(map_._map, 'MAP ')
self.assertIsInstance(map_._machst, tuple)
self.assertGreater(map_._rms, 0)
self.assertGreater(map_._nlabl, 0)
def test_write(self):
"""Test write map file"""
map_to_write = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_write_map.map')
written_maps = glob.glob(map_to_write)
self.assertEqual(len(written_maps), 0)
with open(map_to_write, 'w') as f:
map_ = mapreader.get_data(self.map_file)
map_.write(f)
written_maps = glob.glob(map_to_write)
self.assertEqual(len(written_maps), 1)
        # remove with a plain loop (map() would be a lazy no-op under Python 3)
        for written_map in written_maps:
            os.remove(written_map)
def test_invert(self):
"""Test invert map intensities"""
map_ = mapreader.get_data(self.map_file, inverted=False)
self.assertFalse(map_._inverted)
map_.invert()
self.assertTrue(map_._inverted)
map_ = mapreader.get_data(self.map_file, inverted=True)
self.assertTrue(map_._inverted)
# check the inversion is complete and that we add a new label
with open('rm.map', 'w') as f:
map_.write(f)
map__ = mapreader.get_data('rm.map')
self.assertEqual(map__._nlabl, 2)
os.remove('rm.map')
def test_fix_mask(self):
"""Test fix mask for fixable mask"""
fixable_mask = mapreader.Map(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_fixable_mask.map'))
self.assertFalse(fixable_mask.is_mask)
fixable_mask.fix_mask()
self.assertTrue(fixable_mask.is_mask)
def test_unfixable_mask(self):
"""Test exception for unfixable mask"""
unfixable_mask = mapreader.Map(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_unfixable_mask.map'))
self.assertFalse(unfixable_mask.is_mask)
with self.assertRaises(ValueError):
unfixable_mask.fix_mask()
self.assertFalse(unfixable_mask.is_mask)
def test_bad_data_fail(self):
"""Test that a corrupted file (extra data at end) raises Exception"""
with self.assertRaises(ValueError):
mapreader.Map(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_bad_data1.map'))
def test_bad_data_fail2(self):
"""Test that we can raise an exception with a malformed header"""
with self.assertRaises(ValueError):
mapreader.get_data(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data_corrupt_header.map'))
def test_bad_data_fail3(self):
"""Test that we can't have too long a header"""
with self.assertRaises(ValueError):
# create a map file with a header larger than 1024 to see the exception
            map_ = mapreader.get_data(os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.map'))
            for i in range(map_._nlabl):
                label = getattr(map_, '_label_{}'.format(i))
            y = 11
            for j in range(1, y):
                setattr(map_, '_label_{}'.format(j), label)
            map_._nlabl = y
            with open('rm.map', 'w') as f:
                map_.write(f)
class TestReaders_modreader(unittest.TestCase):
@classmethod
    def setUpClass(cls):
cls.mod_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.mod')
cls.mod = modreader.get_data(cls.mod_file)
def test_get_data(self):
"""Test the main entry point: get_data(...)"""
self.assertTrue(self.mod.isset)
self.assertGreater(len(self.mod.objts), 0)
self.assertGreater(self.mod.objt_count, 0)
self.assertEqual(self.mod.version, 'V1.2')
self.assertEqual(self.mod.name, 'IMOD-NewModel')
self.assertGreater(self.mod.xmax, 0)
self.assertGreater(self.mod.ymax, 0)
self.assertGreater(self.mod.zmax, 0)
self.assertGreaterEqual(self.mod.objsize, 1)
self.assertIn(self.mod.drawmode, [-1, 1])
self.assertIn(self.mod.mousemode, range(3)) # unclear what 2 is equal to INVALID VALUE
self.assertIn(self.mod.blacklevel, range(256))
self.assertIn(self.mod.whitelevel, range(256))
self.assertEqual(self.mod.xoffset, 0)
self.assertEqual(self.mod.yoffset, 0)
self.assertEqual(self.mod.zoffset, 0)
self.assertGreater(self.mod.xscale, 0)
self.assertGreater(self.mod.yscale, 0)
self.assertGreater(self.mod.zscale, 0)
self.assertGreaterEqual(self.mod.object, 0)
self.assertGreaterEqual(self.mod.contour, -1)
self.assertGreaterEqual(self.mod.point, -1)
self.assertGreaterEqual(self.mod.res, 0)
self.assertIn(self.mod.thresh, range(256))
self.assertGreater(self.mod.pixsize, 0)
self.assertIn(self.mod.units, ['pm', 'Angstroms', 'nm', 'microns', 'mm', 'cm', 'm', 'pixels', 'km'])
self.assertIsInstance(self.mod.csum, int)
self.assertEqual(self.mod.alpha, 0)
self.assertEqual(self.mod.beta, 0)
self.assertEqual(self.mod.gamma, 0)
def test_read_fail1(self):
"""Test that file missing 'IMOD' at beginning fails"""
mod_fn = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_bad_data1.mod')
with self.assertRaises(ValueError):
modreader.get_data(mod_fn) # missing 'IMOD' start
def test_read_fail2(self):
"""Test that file missing 'IEOF' at end fails"""
mod_fn = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_bad_data2.mod')
with self.assertRaises(ValueError):
modreader.get_data(mod_fn) # missing 'IEOF' end
def test_IMOD_pass(self):
"""Test that IMOD chunk read"""
self.assertTrue(self.mod.isset)
def test_OBJT_pass(self):
"""Test that OBJT chunk read"""
for O in self.mod.objts.itervalues():
self.assertTrue(O.isset)
def test_CONT_pass(self):
"""Test that CONT chunk read"""
for O in self.mod.objts.itervalues():
for C in O.conts.itervalues():
self.assertTrue(C.isset)
def test_MESH_pass(self):
"""Test that MESH chunk read"""
for O in self.mod.objts.itervalues():
for M in O.meshes.itervalues():
self.assertTrue(M.isset)
def test_IMAT_pass(self):
"""Test that IMAT chunk read"""
for O in self.mod.objts.itervalues():
self.assertTrue(O.imat.isset)
def test_VIEW_pass(self):
"""Test that VIEW chunk read"""
for V in self.mod.views.itervalues():
self.assertTrue(V.isset)
def test_MINX_pass(self):
"""Test that MINX chunk read"""
self.assertTrue(self.mod.minx.isset)
def test_MEPA_pass(self):
"""Test that MEPA chunk read"""
for O in self.mod.objts.itervalues():
try:
self.assertTrue(O.mepa.isset)
except AttributeError:
self.assertEqual(O.mepa, None)
def test_CLIP_pass(self):
"""Test that CLIP chunk read"""
for O in self.mod.objts.itervalues():
try:
self.assertTrue(O.clip.isset)
except AttributeError:
self.assertEqual(O.clip, None)
def test_number_of_OBJT_chunks(self):
"""Test that compares declared and found OBJT chunks"""
self.assertEqual(self.mod.objsize, len(self.mod.objts))
def test_number_of_CONT_chunks(self):
"""Test that compares declared and found CONT chunks"""
for O in self.mod.objts.itervalues():
self.assertEqual(O.contsize, len(O.conts))
def test_number_of_MESH_chunks(self):
"""Test that compares declared and found MESH chunks"""
for O in self.mod.objts.itervalues():
self.assertEqual(O.meshsize, len(O.meshes))
def test_number_of_surface_objects(self):
"""Test that compares declared and found surface objects"""
for O in self.mod.objts.itervalues():
no_of_surfaces = 0
for C in O.conts.itervalues():
if C.surf != 0:
no_of_surfaces += 1
self.assertEqual(O.surfsize, no_of_surfaces)
def test_number_of_points_in_CONT_chunk(self):
"""Test that compares declared an found points in CONT chunks"""
for O in self.mod.objts.itervalues():
for C in O.conts.itervalues():
self.assertEqual(C.psize, len(C.pt))
def test_number_of_vertex_elements_in_MESH_chunk(self):
"""Test that compares declared an found vertices in MESH chunks"""
for O in self.mod.objts.itervalues():
for M in O.meshes.itervalues():
self.assertEqual(M.vsize, len(M.vert))
def test_number_of_list_elements_in_MESH_chunk(self):
"""Test that compares declared an found indices in MESH chunks"""
for O in self.mod.objts.itervalues():
for M in O.meshes.itervalues():
self.assertEqual(M.lsize, len(M.list))
class TestReaders_segreader(unittest.TestCase):
def setUp(self):
self.seg_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.seg')
def test_get_data(self):
"""Test the main entry point: get_data(...)"""
seg = segreader.get_data(self.seg_file)
print(seg, file=sys.stderr)
self.assertIsInstance(seg, segreader.SeggerSegmentation)
self.assertEqual(seg.map_level, 0.852)
self.assertEqual(seg.format_version, 2)
self.assertItemsEqual(seg.map_size, [26, 27, 30])
self.assertEqual(seg.format, 'segger')
self.assertEqual(seg.mask.shape, (30, 27, 26))
class TestReaders_stlreader(unittest.TestCase):
def setUp(self):
self.stl_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.stl')
self.stl_bin_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data_binary.stl')
self.stl_multi_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data_multiple.stl')
def test_get_data(self):
"""Test the main entry point: get_data(...)"""
meshes = stlreader.get_data(self.stl_file) # only one mesh here
name, vertices, polygons = meshes[0]
num_vertices = len(vertices)
a, b, c = zip(*polygons.values())
vertex_ids = set(a + b + c)
self.assertEqual(name, "{}#{}".format(os.path.basename(self.stl_file), 0))
self.assertGreaterEqual(num_vertices, 1)
self.assertEqual(min(vertex_ids), min(vertices.keys()))
self.assertEqual(max(vertex_ids), max(vertices.keys()))
self.assertEqual(sum(set(vertex_ids)), sum(vertices.keys()))
self.assertEqual(set(vertex_ids), set(vertices.keys()))
def test_read_binary(self):
"""Test that we can read a binary STL file"""
meshes = stlreader.get_data(self.stl_bin_file)
print(meshes[0][0], file=sys.stderr)
name, vertices, polygons = meshes[0]
self.assertEqual(name, "{}#{}".format(os.path.basename(self.stl_bin_file), 0))
self.assertTrue(len(vertices) > 0)
self.assertTrue(len(polygons) > 0)
polygon_ids = list()
for a, b, c in polygons.itervalues():
polygon_ids += [a, b, c]
self.assertItemsEqual(set(vertices.keys()), set(polygon_ids))
def test_read_multiple(self):
"""Test that we can read a multi-solid STL file
Only works for ASCII by concatenation"""
meshes = stlreader.get_data(self.stl_multi_file)
for name, vertices, polygons in meshes:
self.assertEqual(name, "{}#{}".format(os.path.basename(self.stl_multi_file), 0))
self.assertTrue(len(vertices) > 0)
self.assertTrue(len(polygons) > 0)
polygon_ids = list()
for a, b, c in polygons.itervalues():
polygon_ids += [a, b, c]
self.assertItemsEqual(set(vertices.keys()), set(polygon_ids))
class TestReaders_surfreader(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.surf_file = os.path.join(tests.TEST_DATA_PATH, 'segmentations', 'test_data.surf')
cls.header, cls.segments = surfreader.get_data(cls.surf_file) # only one mesh here
def test_get_data(self):
"""Test the main entry point: get_data(...)"""
name = self.segments[2].name
vertices = self.segments[2].vertices
triangles = self.segments[2].triangles
num_vertices = len(vertices)
a, b, c = zip(*triangles)
vertex_ids = set(a + b + c)
self.assertIsInstance(self.header, ahds.header.AmiraHeader)
self.assertIsInstance(self.segments, dict)
self.assertEqual(name, 'medulla_r')
self.assertGreaterEqual(num_vertices, 1)
self.assertGreaterEqual(len(self.segments), 1)
self.assertEqual(min(vertex_ids), min(vertices.keys()))
self.assertEqual(max(vertex_ids), max(vertices.keys()))
self.assertEqual(sum(set(vertex_ids)), sum(vertices.keys()))
self.assertEqual(set(vertex_ids), set(vertices.keys()))
if __name__ == "__main__":
unittest.main()
|
{"hexsha": "8e07779be64b5247716c3c8bb7b9dc225c5ec93a", "size": 17595, "ext": "py", "lang": "Python", "max_stars_repo_path": "sfftk/unittests/test_readers.py", "max_stars_repo_name": "ssabrii/sfftk", "max_stars_repo_head_hexsha": "5cab37f1c9ecb9c2507672fd3232b1b9b5405511", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sfftk/unittests/test_readers.py", "max_issues_repo_name": "ssabrii/sfftk", "max_issues_repo_head_hexsha": "5cab37f1c9ecb9c2507672fd3232b1b9b5405511", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sfftk/unittests/test_readers.py", "max_forks_repo_name": "ssabrii/sfftk", "max_forks_repo_head_hexsha": "5cab37f1c9ecb9c2507672fd3232b1b9b5405511", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.4, "max_line_length": 118, "alphanum_fraction": 0.6499573743, "include": true, "reason": "import numpy", "num_tokens": 4113}
|
import os
import numpy as np
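# Summarises optimiser benchmark runs. Judging by the parsing below, the script
# expects to run inside a directory of per-run CSV files named like
# <reaction>_<acquisition function>_<batch size>_<budget>_<run>.csv, each with a
# header row and per-iteration yields in the second column.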
def process_file(filename):
"""Want to return tuple of the form:
(reaction, acqusition function, batch size, experiment budget, mean final yield, standard deviation)"""
final_yields = list()
with open(filename) as f:
for line in f.readlines()[1:]:
final_yields.append(float(line.split(",")[1]))
    reaction, acquisition_func, batch_size, budget, _ = filename.split("_")
    mean_yield = sum(final_yields) / len(final_yields)
    standard_dev = np.std(final_yields)
    return (reaction, acquisition_func, batch_size, budget, mean_yield, standard_dev)
OUTPUT_NAME = "Optimiser Summary Statistics.csv"
def main():
summary_stats = list()
for filename in os.listdir():
        if not filename.endswith(".csv") or filename == OUTPUT_NAME:
continue
summary_stats.append(process_file(filename))
output_file = "reaction,acqusition function,batch size,experiment budget,mean final yield,standard deviation\n"
for stat in summary_stats:
output_file += ",".join(str(item) for item in stat) + "\n"
with open(OUTPUT_NAME, "w") as f:
f.write(output_file)
return 0
if __name__ == "__main__":
    main()
|
{"hexsha": "82e795c617df6192fbce4fff16edd9ca36f22c0c", "size": 1220, "ext": "py", "lang": "Python", "max_stars_repo_path": "Shields paper/test_bo_aryl_amination/data/summary_stats.py", "max_stars_repo_name": "Pseudonium/edbo", "max_stars_repo_head_hexsha": "fe7fb332ea1a90342950124160c590adab2f4e59", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Shields paper/test_bo_aryl_amination/data/summary_stats.py", "max_issues_repo_name": "Pseudonium/edbo", "max_issues_repo_head_hexsha": "fe7fb332ea1a90342950124160c590adab2f4e59", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Shields paper/test_bo_aryl_amination/data/summary_stats.py", "max_forks_repo_name": "Pseudonium/edbo", "max_forks_repo_head_hexsha": "fe7fb332ea1a90342950124160c590adab2f4e59", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.5, "max_line_length": 115, "alphanum_fraction": 0.6631147541, "include": true, "reason": "import numpy", "num_tokens": 289}
|
#include <boost/rational.hpp> /// For calculations related to sampling coefficients.
#include <optional>
#include <Common/FieldVisitors.h>
#include <Storages/MergeTree/MergeTreeDataSelectExecutor.h>
#include <Storages/MergeTree/MergeTreeBlockInputStream.h>
#include <Storages/MergeTree/MergeTreeReadPool.h>
#include <Storages/MergeTree/MergeTreeThreadBlockInputStream.h>
#include <Storages/MergeTree/KeyCondition.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTSampleRatio.h>
/// Allow to use __uint128_t as a template parameter for boost::rational.
// https://stackoverflow.com/questions/41198673/uint128-t-not-working-with-clang-and-libstdc
#if !defined(__GLIBCXX_BITSIZE_INT_N_0) && defined(__SIZEOF_INT128__)
namespace std
{
template <>
struct numeric_limits<__uint128_t>
{
static constexpr bool is_specialized = true;
static constexpr bool is_signed = false;
static constexpr bool is_integer = true;
static constexpr int radix = 2;
static constexpr int digits = 128;
static constexpr __uint128_t min () { return 0; } // used in boost 1.65.1+
};
}
#endif
#include <DataStreams/ExpressionBlockInputStream.h>
#include <DataStreams/FilterBlockInputStream.h>
#include <DataStreams/CollapsingFinalBlockInputStream.h>
#include <DataStreams/AddingConstColumnBlockInputStream.h>
#include <DataStreams/CreatingSetsBlockInputStream.h>
#include <DataStreams/NullBlockInputStream.h>
#include <DataStreams/SummingSortedBlockInputStream.h>
#include <DataStreams/ReplacingSortedBlockInputStream.h>
#include <DataStreams/AggregatingSortedBlockInputStream.h>
#include <DataStreams/VersionedCollapsingSortedBlockInputStream.h>
#include <DataTypes/DataTypesNumber.h>
#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeEnum.h>
#include <Storages/VirtualColumnUtils.h>
namespace ProfileEvents
{
extern const Event SelectedParts;
extern const Event SelectedRanges;
extern const Event SelectedMarks;
}
namespace DB
{
namespace ErrorCodes
{
    extern const int INDEX_NOT_USED;
    extern const int SAMPLING_NOT_SUPPORTED;
    extern const int ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER;
    extern const int ILLEGAL_COLUMN;
    extern const int ARGUMENT_OUT_OF_BOUND;
    extern const int LOGICAL_ERROR;
}
MergeTreeDataSelectExecutor::MergeTreeDataSelectExecutor(MergeTreeData & data_)
: data(data_), log(&Logger::get(data.getLogName() + " (SelectExecutor)"))
{
}
/// Construct a block consisting only of possible values of virtual columns
static Block getBlockWithPartColumn(const MergeTreeData::DataPartsVector & parts)
{
auto column = ColumnString::create();
for (const auto & part : parts)
column->insert(part->name);
return Block{ColumnWithTypeAndName(std::move(column), std::make_shared<DataTypeString>(), "_part")};
}
size_t MergeTreeDataSelectExecutor::getApproximateTotalRowsToRead(
const MergeTreeData::DataPartsVector & parts, const KeyCondition & key_condition, const Settings & settings) const
{
size_t full_marks_count = 0;
/// We will find out how many rows we would have read without sampling.
LOG_DEBUG(log, "Preliminary index scan with condition: " << key_condition.toString());
for (size_t i = 0; i < parts.size(); ++i)
{
const MergeTreeData::DataPartPtr & part = parts[i];
MarkRanges ranges = markRangesFromPKRange(part->index, key_condition, settings);
/** In order to get a lower bound on the number of rows that match the condition on PK,
* consider only guaranteed full marks.
* That is, do not take into account the first and last marks, which may be incomplete.
*/
for (size_t j = 0; j < ranges.size(); ++j)
if (ranges[j].end - ranges[j].begin > 2)
full_marks_count += ranges[j].end - ranges[j].begin - 2;
}
return full_marks_count * data.index_granularity;
}
using RelativeSize = boost::rational<ASTSampleRatio::BigNum>;
std::string toString(const RelativeSize & x)
{
return ASTSampleRatio::toString(x.numerator()) + "/" + ASTSampleRatio::toString(x.denominator());
}
/// Converts sample size to an approximate number of rows (ex. `SAMPLE 1000000`) to relative value (ex. `SAMPLE 0.1`).
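/// For example, with approx_total_rows == 10000000, `SAMPLE 1000000` becomes the
/// relative size 1/10; requests larger than the table are clamped to 1.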
static RelativeSize convertAbsoluteSampleSizeToRelative(const ASTPtr & node, size_t approx_total_rows)
{
if (approx_total_rows == 0)
return 1;
const ASTSampleRatio & node_sample = typeid_cast<const ASTSampleRatio &>(*node);
auto absolute_sample_size = node_sample.ratio.numerator / node_sample.ratio.denominator;
return std::min(RelativeSize(1), RelativeSize(absolute_sample_size) / RelativeSize(approx_total_rows));
}
BlockInputStreams MergeTreeDataSelectExecutor::read(
const Names & column_names_to_return,
const SelectQueryInfo & query_info,
const Context & context,
QueryProcessingStage::Enum & processed_stage,
const size_t max_block_size,
const unsigned num_streams,
Int64 max_block_number_to_read) const
{
size_t part_index = 0;
MergeTreeData::DataPartsVector parts = data.getDataPartsVector();
/// If query contains restrictions on the virtual column `_part` or `_part_index`, select only parts suitable for it.
/// The virtual column `_sample_factor` (which is equal to 1 / used sample rate) can be requested in the query.
Names virt_column_names;
Names real_column_names;
bool part_column_queried = false;
bool sample_factor_column_queried = false;
Float64 used_sample_factor = 1;
for (const String & name : column_names_to_return)
{
if (name == "_part")
{
part_column_queried = true;
virt_column_names.push_back(name);
}
else if (name == "_part_index")
{
virt_column_names.push_back(name);
}
else if (name == "_sample_factor")
{
sample_factor_column_queried = true;
virt_column_names.push_back(name);
}
else
{
real_column_names.push_back(name);
}
}
NamesAndTypesList available_real_columns = data.getColumns().getAllPhysical();
NamesAndTypesList available_real_and_virtual_columns = available_real_columns;
for (const auto & name : virt_column_names)
available_real_and_virtual_columns.emplace_back(data.getColumn(name));
/// If there are only virtual columns in the query, you must request at least one non-virtual one.
if (real_column_names.empty())
real_column_names.push_back(ExpressionActions::getSmallestColumn(available_real_columns));
/// If `_part` virtual column is requested, we try to use it as an index.
Block virtual_columns_block = getBlockWithPartColumn(parts);
if (part_column_queried)
VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, context);
std::multiset<String> part_values = VirtualColumnUtils::extractSingleValueFromBlock<String>(virtual_columns_block, "_part");
data.check(real_column_names);
processed_stage = QueryProcessingStage::FetchColumns;
const Settings & settings = context.getSettingsRef();
SortDescription sort_descr = data.getPrimarySortDescription();
KeyCondition key_condition(query_info, context, available_real_and_virtual_columns, sort_descr,
data.getPrimaryExpression());
if (settings.force_primary_key && key_condition.alwaysUnknownOrTrue())
{
std::stringstream exception_message;
exception_message << "Primary key (";
for (size_t i = 0, size = sort_descr.size(); i < size; ++i)
exception_message << (i == 0 ? "" : ", ") << sort_descr[i].column_name;
exception_message << ") is not used and setting 'force_primary_key' is set.";
throw Exception(exception_message.str(), ErrorCodes::INDEX_NOT_USED);
}
std::optional<KeyCondition> minmax_idx_condition;
if (data.minmax_idx_expr)
{
minmax_idx_condition.emplace(
query_info, context, available_real_and_virtual_columns,
data.minmax_idx_sort_descr, data.minmax_idx_expr);
if (settings.force_index_by_date && minmax_idx_condition->alwaysUnknownOrTrue())
{
String msg = "MinMax index by columns (";
bool first = true;
for (const String & col : data.minmax_idx_columns)
{
if (first)
first = false;
else
msg += ", ";
msg += col;
}
msg += ") is not used and setting 'force_index_by_date' is set";
throw Exception(msg, ErrorCodes::INDEX_NOT_USED);
}
}
/// Select the parts in which there can be data that satisfy `minmax_idx_condition` and that match the condition on `_part`,
/// as well as `max_block_number_to_read`.
{
auto prev_parts = parts;
parts.clear();
for (const auto & part : prev_parts)
{
if (part_values.find(part->name) == part_values.end())
continue;
if (minmax_idx_condition && !minmax_idx_condition->mayBeTrueInRange(
data.minmax_idx_columns.size(),
&part->minmax_idx.min_values[0], &part->minmax_idx.max_values[0],
data.minmax_idx_column_types))
continue;
if (max_block_number_to_read && part->info.max_block > max_block_number_to_read)
continue;
parts.push_back(part);
}
}
/// Sampling.
Names column_names_to_read = real_column_names;
std::shared_ptr<ASTFunction> filter_function;
ExpressionActionsPtr filter_expression;
RelativeSize relative_sample_size = 0;
RelativeSize relative_sample_offset = 0;
ASTSelectQuery & select = typeid_cast<ASTSelectQuery &>(*query_info.query);
auto select_sample_size = select.sample_size();
auto select_sample_offset = select.sample_offset();
if (select_sample_size)
{
relative_sample_size.assign(
typeid_cast<const ASTSampleRatio &>(*select_sample_size).ratio.numerator,
typeid_cast<const ASTSampleRatio &>(*select_sample_size).ratio.denominator);
if (relative_sample_size < 0)
throw Exception("Negative sample size", ErrorCodes::ARGUMENT_OUT_OF_BOUND);
relative_sample_offset = 0;
if (select_sample_offset)
relative_sample_offset.assign(
typeid_cast<const ASTSampleRatio &>(*select_sample_offset).ratio.numerator,
typeid_cast<const ASTSampleRatio &>(*select_sample_offset).ratio.denominator);
if (relative_sample_offset < 0)
throw Exception("Negative sample offset", ErrorCodes::ARGUMENT_OUT_OF_BOUND);
/// Convert absolute value of the sampling (in form `SAMPLE 1000000` - how many rows to read) into the relative `SAMPLE 0.1` (how much data to read).
size_t approx_total_rows = 0;
if (relative_sample_size > 1 || relative_sample_offset > 1)
approx_total_rows = getApproximateTotalRowsToRead(parts, key_condition, settings);
if (relative_sample_size > 1)
{
relative_sample_size = convertAbsoluteSampleSizeToRelative(select_sample_size, approx_total_rows);
LOG_DEBUG(log, "Selected relative sample size: " << toString(relative_sample_size));
}
/// SAMPLE 1 is the same as the absence of SAMPLE.
if (relative_sample_size == RelativeSize(1))
relative_sample_size = 0;
if (relative_sample_offset > 0 && 0 == relative_sample_size)
throw Exception("Sampling offset is incorrect because no sampling", ErrorCodes::ARGUMENT_OUT_OF_BOUND);
if (relative_sample_offset > 1)
{
relative_sample_offset = convertAbsoluteSampleSizeToRelative(select_sample_offset, approx_total_rows);
LOG_DEBUG(log, "Selected relative sample offset: " << toString(relative_sample_offset));
}
}
/** Which range of sampling key values do I need to read?
* First, in the whole range ("universe") we select the interval
* of relative `relative_sample_size` size, offset from the beginning by `relative_sample_offset`.
*
* Example: SAMPLE 0.4 OFFSET 0.3
*
* [------********------]
* ^ - offset
* <------> - size
*
* If the interval passes through the end of the universe, then cut its right side.
*
* Example: SAMPLE 0.4 OFFSET 0.8
*
* [----------------****]
* ^ - offset
* <------> - size
*
* Next, if the `parallel_replicas_count`, `parallel_replica_offset` settings are set,
* then it is necessary to break the received interval into pieces of the number `parallel_replicas_count`,
* and select a piece with the number `parallel_replica_offset` (from zero).
*
* Example: SAMPLE 0.4 OFFSET 0.3, parallel_replicas_count = 2, parallel_replica_offset = 1
*
* [----------****------]
* ^ - offset
* <------> - size
* <--><--> - pieces for different `parallel_replica_offset`, select the second one.
*
* It is very important that the intervals for different `parallel_replica_offset` cover the entire range without gaps and overlaps.
* It is also important that the entire universe can be covered using SAMPLE 0.1 OFFSET 0, ... OFFSET 0.9 and similar decimals.
*/
bool use_sampling = relative_sample_size > 0 || settings.parallel_replicas_count > 1;
bool no_data = false; /// There is nothing left after sampling.
if (use_sampling)
{
if (!data.sampling_expression)
throw Exception("Illegal SAMPLE: table doesn't support sampling", ErrorCodes::SAMPLING_NOT_SUPPORTED);
if (sample_factor_column_queried && relative_sample_size != 0)
used_sample_factor = 1.0 / boost::rational_cast<Float64>(relative_sample_size);
RelativeSize size_of_universum = 0;
DataTypePtr type = data.getPrimaryExpression()->getSampleBlock().getByName(data.sampling_expression->getColumnName()).type;
if (typeid_cast<const DataTypeUInt64 *>(type.get()))
size_of_universum = RelativeSize(std::numeric_limits<UInt64>::max()) + RelativeSize(1);
else if (typeid_cast<const DataTypeUInt32 *>(type.get()))
size_of_universum = RelativeSize(std::numeric_limits<UInt32>::max()) + RelativeSize(1);
else if (typeid_cast<const DataTypeUInt16 *>(type.get()))
size_of_universum = RelativeSize(std::numeric_limits<UInt16>::max()) + RelativeSize(1);
else if (typeid_cast<const DataTypeUInt8 *>(type.get()))
size_of_universum = RelativeSize(std::numeric_limits<UInt8>::max()) + RelativeSize(1);
else
throw Exception("Invalid sampling column type in storage parameters: " + type->getName() + ". Must be unsigned integer type.",
ErrorCodes::ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER);
if (settings.parallel_replicas_count > 1)
{
if (relative_sample_size == RelativeSize(0))
relative_sample_size = 1;
relative_sample_size /= settings.parallel_replicas_count.value;
relative_sample_offset += relative_sample_size * RelativeSize(settings.parallel_replica_offset.value);
}
if (relative_sample_offset >= RelativeSize(1))
no_data = true;
/// Calculate the half-interval of `[lower, upper)` column values.
bool has_lower_limit = false;
bool has_upper_limit = false;
RelativeSize lower_limit_rational = relative_sample_offset * size_of_universum;
RelativeSize upper_limit_rational = (relative_sample_offset + relative_sample_size) * size_of_universum;
UInt64 lower = boost::rational_cast<ASTSampleRatio::BigNum>(lower_limit_rational);
UInt64 upper = boost::rational_cast<ASTSampleRatio::BigNum>(upper_limit_rational);
if (lower > 0)
has_lower_limit = true;
if (upper_limit_rational < size_of_universum)
has_upper_limit = true;
/*std::cerr << std::fixed << std::setprecision(100)
<< "relative_sample_size: " << relative_sample_size << "\n"
<< "relative_sample_offset: " << relative_sample_offset << "\n"
<< "lower_limit_float: " << lower_limit_rational << "\n"
<< "upper_limit_float: " << upper_limit_rational << "\n"
<< "lower: " << lower << "\n"
<< "upper: " << upper << "\n";*/
if ((has_upper_limit && upper == 0)
|| (has_lower_limit && has_upper_limit && lower == upper))
no_data = true;
if (no_data || (!has_lower_limit && !has_upper_limit))
{
use_sampling = false;
}
else
{
/// Let's add the conditions to cut off something else when the index is scanned again and when the request is processed.
std::shared_ptr<ASTFunction> lower_function;
std::shared_ptr<ASTFunction> upper_function;
if (has_lower_limit)
{
if (!key_condition.addCondition(data.sampling_expression->getColumnName(), Range::createLeftBounded(lower, true)))
throw Exception("Sampling column not in primary key", ErrorCodes::ILLEGAL_COLUMN);
ASTPtr args = std::make_shared<ASTExpressionList>();
args->children.push_back(data.sampling_expression);
args->children.push_back(std::make_shared<ASTLiteral>(lower));
lower_function = std::make_shared<ASTFunction>();
lower_function->name = "greaterOrEquals";
lower_function->arguments = args;
lower_function->children.push_back(lower_function->arguments);
filter_function = lower_function;
}
if (has_upper_limit)
{
if (!key_condition.addCondition(data.sampling_expression->getColumnName(), Range::createRightBounded(upper, false)))
throw Exception("Sampling column not in primary key", ErrorCodes::ILLEGAL_COLUMN);
ASTPtr args = std::make_shared<ASTExpressionList>();
args->children.push_back(data.sampling_expression);
args->children.push_back(std::make_shared<ASTLiteral>(upper));
upper_function = std::make_shared<ASTFunction>();
upper_function->name = "less";
upper_function->arguments = args;
upper_function->children.push_back(upper_function->arguments);
filter_function = upper_function;
}
if (has_lower_limit && has_upper_limit)
{
ASTPtr args = std::make_shared<ASTExpressionList>();
args->children.push_back(lower_function);
args->children.push_back(upper_function);
filter_function = std::make_shared<ASTFunction>();
filter_function->name = "and";
filter_function->arguments = args;
filter_function->children.push_back(filter_function->arguments);
}
filter_expression = ExpressionAnalyzer(filter_function, context, nullptr, available_real_columns).getActions(false);
/// Add columns needed for `sampling_expression` to `column_names_to_read`.
std::vector<String> add_columns = filter_expression->getRequiredColumns();
column_names_to_read.insert(column_names_to_read.end(), add_columns.begin(), add_columns.end());
std::sort(column_names_to_read.begin(), column_names_to_read.end());
column_names_to_read.erase(std::unique(column_names_to_read.begin(), column_names_to_read.end()), column_names_to_read.end());
}
}
if (no_data)
{
LOG_DEBUG(log, "Sampling yields no data.");
return {};
}
LOG_DEBUG(log, "Key condition: " << key_condition.toString());
if (minmax_idx_condition)
LOG_DEBUG(log, "MinMax index condition: " << minmax_idx_condition->toString());
/// PREWHERE
ExpressionActionsPtr prewhere_actions;
String prewhere_column;
if (select.prewhere_expression)
{
ExpressionAnalyzer analyzer(select.prewhere_expression, context, nullptr, available_real_columns);
prewhere_actions = analyzer.getActions(false);
prewhere_column = select.prewhere_expression->getColumnName();
SubqueriesForSets prewhere_subqueries = analyzer.getSubqueriesForSets();
/** Compute the subqueries right now.
* NOTE Disadvantage - these calculations do not fit into the query execution pipeline.
* They are done before the execution of the pipeline; they can not be interrupted; during the computation, packets of progress are not sent.
*/
if (!prewhere_subqueries.empty())
CreatingSetsBlockInputStream(std::make_shared<NullBlockInputStream>(Block()), prewhere_subqueries,
SizeLimits(settings.max_rows_to_transfer, settings.max_bytes_to_transfer, settings.transfer_overflow_mode)).read();
}
RangesInDataParts parts_with_ranges;
/// Let's find what range to read from each part.
size_t sum_marks = 0;
size_t sum_ranges = 0;
for (auto & part : parts)
{
RangesInDataPart ranges(part, part_index++);
if (data.hasPrimaryKey())
ranges.ranges = markRangesFromPKRange(part->index, key_condition, settings);
else
ranges.ranges = MarkRanges{MarkRange{0, part->marks_count}};
if (!ranges.ranges.empty())
{
parts_with_ranges.push_back(ranges);
sum_ranges += ranges.ranges.size();
for (const auto & range : ranges.ranges)
sum_marks += range.end - range.begin;
}
}
LOG_DEBUG(log, "Selected " << parts.size() << " parts by date, " << parts_with_ranges.size() << " parts by key, "
<< sum_marks << " marks to read from " << sum_ranges << " ranges");
if (parts_with_ranges.empty())
return {};
ProfileEvents::increment(ProfileEvents::SelectedParts, parts_with_ranges.size());
ProfileEvents::increment(ProfileEvents::SelectedRanges, sum_ranges);
ProfileEvents::increment(ProfileEvents::SelectedMarks, sum_marks);
BlockInputStreams res;
if (select.final())
{
/// Add columns needed to calculate primary key and the sign.
std::vector<String> add_columns = data.getPrimaryExpression()->getRequiredColumns();
column_names_to_read.insert(column_names_to_read.end(), add_columns.begin(), add_columns.end());
if (!data.merging_params.sign_column.empty())
column_names_to_read.push_back(data.merging_params.sign_column);
if (!data.merging_params.version_column.empty())
column_names_to_read.push_back(data.merging_params.version_column);
std::sort(column_names_to_read.begin(), column_names_to_read.end());
column_names_to_read.erase(std::unique(column_names_to_read.begin(), column_names_to_read.end()), column_names_to_read.end());
res = spreadMarkRangesAmongStreamsFinal(
std::move(parts_with_ranges),
column_names_to_read,
max_block_size,
settings.use_uncompressed_cache,
prewhere_actions,
prewhere_column,
virt_column_names,
settings);
}
else
{
res = spreadMarkRangesAmongStreams(
std::move(parts_with_ranges),
num_streams,
column_names_to_read,
max_block_size,
settings.use_uncompressed_cache,
prewhere_actions,
prewhere_column,
virt_column_names,
settings);
}
if (use_sampling)
for (auto & stream : res)
stream = std::make_shared<FilterBlockInputStream>(stream, filter_expression, filter_function->getColumnName());
/// By the way, if a distributed query or query to a Merge table is made, then the `_sample_factor` column can have different values.
if (sample_factor_column_queried)
for (auto & stream : res)
stream = std::make_shared<AddingConstColumnBlockInputStream<Float64>>(
stream, std::make_shared<DataTypeFloat64>(), used_sample_factor, "_sample_factor");
return res;
}
BlockInputStreams MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreams(
RangesInDataParts && parts,
size_t num_streams,
const Names & column_names,
size_t max_block_size,
bool use_uncompressed_cache,
ExpressionActionsPtr prewhere_actions,
const String & prewhere_column,
const Names & virt_columns,
const Settings & settings) const
{
const size_t min_marks_for_concurrent_read =
(settings.merge_tree_min_rows_for_concurrent_read + data.index_granularity - 1) / data.index_granularity;
const size_t max_marks_to_use_cache =
(settings.merge_tree_max_rows_to_use_cache + data.index_granularity - 1) / data.index_granularity;
/// Count marks for each part.
std::vector<size_t> sum_marks_in_parts(parts.size());
size_t sum_marks = 0;
for (size_t i = 0; i < parts.size(); ++i)
{
/// Let the ranges be listed from right to left so that the leftmost range can be dropped using `pop_back()`.
std::reverse(parts[i].ranges.begin(), parts[i].ranges.end());
for (const auto & range : parts[i].ranges)
sum_marks_in_parts[i] += range.end - range.begin;
sum_marks += sum_marks_in_parts[i];
}
if (sum_marks > max_marks_to_use_cache)
use_uncompressed_cache = false;
BlockInputStreams res;
if (sum_marks > 0 && settings.merge_tree_uniform_read_distribution == 1)
{
/// Reduce the number of num_streams if the data is small.
if (sum_marks < num_streams * min_marks_for_concurrent_read && parts.size() < num_streams)
num_streams = std::max((sum_marks + min_marks_for_concurrent_read - 1) / min_marks_for_concurrent_read, parts.size());
MergeTreeReadPoolPtr pool = std::make_shared<MergeTreeReadPool>(
num_streams, sum_marks, min_marks_for_concurrent_read, parts, data, prewhere_actions, prewhere_column, true,
column_names, MergeTreeReadPool::BackoffSettings(settings), settings.preferred_block_size_bytes, false);
/// Let's estimate total number of rows for progress bar.
const size_t total_rows = data.index_granularity * sum_marks;
LOG_TRACE(log, "Reading approx. " << total_rows << " rows");
for (size_t i = 0; i < num_streams; ++i)
{
res.emplace_back(std::make_shared<MergeTreeThreadBlockInputStream>(
i, pool, min_marks_for_concurrent_read, max_block_size, settings.preferred_block_size_bytes,
settings.preferred_max_column_in_block_size_bytes, data, use_uncompressed_cache,
prewhere_actions, prewhere_column, settings, virt_columns));
if (i == 0)
{
/// Set the approximate number of rows for the first source only
static_cast<IProfilingBlockInputStream &>(*res.front()).addTotalRowsApprox(total_rows);
}
}
}
else if (sum_marks > 0)
{
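        /// Ceiling division: each stream is assigned at least ceil(sum_marks / num_streams) marks.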
const size_t min_marks_per_stream = (sum_marks - 1) / num_streams + 1;
for (size_t i = 0; i < num_streams && !parts.empty(); ++i)
{
size_t need_marks = min_marks_per_stream;
/// Loop over parts.
/// We will iteratively take part or some subrange of a part from the back
/// and assign a stream to read from it.
while (need_marks > 0 && !parts.empty())
{
RangesInDataPart part = parts.back();
size_t & marks_in_part = sum_marks_in_parts.back();
/// We will not take too few rows from a part.
if (marks_in_part >= min_marks_for_concurrent_read &&
need_marks < min_marks_for_concurrent_read)
need_marks = min_marks_for_concurrent_read;
/// Do not leave too few rows in the part.
if (marks_in_part > need_marks &&
marks_in_part - need_marks < min_marks_for_concurrent_read)
need_marks = marks_in_part;
MarkRanges ranges_to_get_from_part;
/// We take the whole part if it is small enough.
if (marks_in_part <= need_marks)
{
/// Restore the order of segments.
std::reverse(part.ranges.begin(), part.ranges.end());
ranges_to_get_from_part = part.ranges;
need_marks -= marks_in_part;
parts.pop_back();
sum_marks_in_parts.pop_back();
}
else
{
/// Loop through ranges in part. Take enough ranges to cover "need_marks".
while (need_marks > 0)
{
if (part.ranges.empty())
throw Exception("Unexpected end of ranges while spreading marks among streams", ErrorCodes::LOGICAL_ERROR);
MarkRange & range = part.ranges.back();
const size_t marks_in_range = range.end - range.begin;
const size_t marks_to_get_from_range = std::min(marks_in_range, need_marks);
ranges_to_get_from_part.emplace_back(range.begin, range.begin + marks_to_get_from_range);
range.begin += marks_to_get_from_range;
marks_in_part -= marks_to_get_from_range;
need_marks -= marks_to_get_from_range;
if (range.begin == range.end)
part.ranges.pop_back();
}
}
BlockInputStreamPtr source_stream = std::make_shared<MergeTreeBlockInputStream>(
data, part.data_part, max_block_size, settings.preferred_block_size_bytes,
settings.preferred_max_column_in_block_size_bytes, column_names, ranges_to_get_from_part,
use_uncompressed_cache, prewhere_actions, prewhere_column, true, settings.min_bytes_to_use_direct_io,
settings.max_read_buffer_size, true, virt_columns, part.part_index_in_query);
res.push_back(source_stream);
}
}
if (!parts.empty())
throw Exception("Couldn't spread marks among streams", ErrorCodes::LOGICAL_ERROR);
}
return res;
}
BlockInputStreams MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal(
RangesInDataParts && parts,
const Names & column_names,
size_t max_block_size,
bool use_uncompressed_cache,
ExpressionActionsPtr prewhere_actions,
const String & prewhere_column,
const Names & virt_columns,
const Settings & settings) const
{
const size_t max_marks_to_use_cache =
(settings.merge_tree_max_rows_to_use_cache + data.index_granularity - 1) / data.index_granularity;
size_t sum_marks = 0;
for (size_t i = 0; i < parts.size(); ++i)
for (size_t j = 0; j < parts[i].ranges.size(); ++j)
sum_marks += parts[i].ranges[j].end - parts[i].ranges[j].begin;
if (sum_marks > max_marks_to_use_cache)
use_uncompressed_cache = false;
BlockInputStreams to_merge;
/// NOTE `merge_tree_uniform_read_distribution` is not used for FINAL
for (size_t part_index = 0; part_index < parts.size(); ++part_index)
{
RangesInDataPart & part = parts[part_index];
BlockInputStreamPtr source_stream = std::make_shared<MergeTreeBlockInputStream>(
data, part.data_part, max_block_size, settings.preferred_block_size_bytes,
settings.preferred_max_column_in_block_size_bytes, column_names, part.ranges, use_uncompressed_cache,
prewhere_actions, prewhere_column, true, settings.min_bytes_to_use_direct_io, settings.max_read_buffer_size, true,
virt_columns, part.part_index_in_query);
to_merge.emplace_back(std::make_shared<ExpressionBlockInputStream>(source_stream, data.getPrimaryExpression()));
}
BlockInputStreamPtr merged;
switch (data.merging_params.mode)
{
case MergeTreeData::MergingParams::Ordinary:
merged = std::make_shared<MergingSortedBlockInputStream>(to_merge, data.getSortDescription(), max_block_size);
break;
case MergeTreeData::MergingParams::Collapsing:
merged = std::make_shared<CollapsingFinalBlockInputStream>(
to_merge, data.getSortDescription(), data.merging_params.sign_column);
break;
case MergeTreeData::MergingParams::Summing:
merged = std::make_shared<SummingSortedBlockInputStream>(to_merge,
data.getSortDescription(), data.merging_params.columns_to_sum, max_block_size);
break;
case MergeTreeData::MergingParams::Aggregating:
merged = std::make_shared<AggregatingSortedBlockInputStream>(to_merge, data.getSortDescription(), max_block_size);
break;
case MergeTreeData::MergingParams::Replacing: /// TODO Make ReplacingFinalBlockInputStream
merged = std::make_shared<ReplacingSortedBlockInputStream>(to_merge,
data.getSortDescription(), data.merging_params.version_column, max_block_size);
break;
case MergeTreeData::MergingParams::VersionedCollapsing: /// TODO Make VersionedCollapsingFinalBlockInputStream
merged = std::make_shared<VersionedCollapsingSortedBlockInputStream>(
to_merge, data.getSortDescription(), data.merging_params.sign_column, max_block_size, true);
break;
case MergeTreeData::MergingParams::Graphite:
throw Exception("GraphiteMergeTree doesn't support FINAL", ErrorCodes::LOGICAL_ERROR);
}
return {merged};
}
void MergeTreeDataSelectExecutor::createPositiveSignCondition(
ExpressionActionsPtr & out_expression, String & out_column, const Context & context) const
{
auto function = std::make_shared<ASTFunction>();
auto arguments = std::make_shared<ASTExpressionList>();
auto sign = std::make_shared<ASTIdentifier>(data.merging_params.sign_column);
auto one = std::make_shared<ASTLiteral>(Field(static_cast<Int64>(1)));
function->name = "equals";
function->arguments = arguments;
function->children.push_back(arguments);
arguments->children.push_back(sign);
arguments->children.push_back(one);
out_expression = ExpressionAnalyzer(function, context, {}, data.getColumns().getAllPhysical()).getActions(false);
out_column = function->getColumnName();
}
/// Calculates the set of mark ranges that could possibly contain the keys required by the condition.
/// In other words, it removes from the whole range the subranges that definitely cannot contain the required keys.
MarkRanges MergeTreeDataSelectExecutor::markRangesFromPKRange(
const MergeTreeData::DataPart::Index & index, const KeyCondition & key_condition, const Settings & settings) const
{
size_t min_marks_for_seek = (settings.merge_tree_min_rows_for_seek + data.index_granularity - 1) / data.index_granularity;
MarkRanges res;
size_t used_key_size = key_condition.getMaxKeyColumn() + 1;
size_t marks_count = index.at(0)->size();
/// If index is not used.
if (key_condition.alwaysUnknownOrTrue())
{
res.push_back(MarkRange(0, marks_count));
}
else
{
        /** The stack always holds disjoint suspicious segments, with the leftmost one on top (at the back).
          * At each step, take the leftmost segment and check whether the condition may hold in it.
          * If it may, split the segment into smaller ones and push them onto the stack. If not, discard it.
          * If the segment is already one mark long, add it to the response and discard it.
          */
std::vector<MarkRange> ranges_stack{ {0, marks_count} };
/// NOTE Creating temporary Field objects to pass to KeyCondition.
Row index_left(used_key_size);
Row index_right(used_key_size);
while (!ranges_stack.empty())
{
MarkRange range = ranges_stack.back();
ranges_stack.pop_back();
bool may_be_true;
if (range.end == marks_count)
{
for (size_t i = 0; i < used_key_size; ++i)
{
index[i]->get(range.begin, index_left[i]);
}
may_be_true = key_condition.mayBeTrueAfter(
used_key_size, &index_left[0], data.primary_key_data_types);
}
else
{
for (size_t i = 0; i < used_key_size; ++i)
{
index[i]->get(range.begin, index_left[i]);
index[i]->get(range.end, index_right[i]);
}
may_be_true = key_condition.mayBeTrueInRange(
used_key_size, &index_left[0], &index_right[0], data.primary_key_data_types);
}
if (!may_be_true)
continue;
if (range.end == range.begin + 1)
{
                /// This one-mark range may contain required keys. If the gap to the previous range
                /// is big enough to justify a seek, start a new range; otherwise glue it onto the previous one.
if (res.empty() || range.begin - res.back().end > min_marks_for_seek)
res.push_back(range);
else
res.back().end = range.end;
}
else
{
/// Break the segment and put the result on the stack from right to left.
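                /// Illustration (made-up numbers): for the range [0, 100) with
                /// merge_tree_coarse_index_granularity = 8, step = (100 - 0 - 1) / 8 + 1 = 13.
                /// The loop pushes [87,100), [74,87), ..., [9,22) and finally the remainder
                /// [0,9), which lands on top of the stack and is examined first.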
size_t step = (range.end - range.begin - 1) / settings.merge_tree_coarse_index_granularity + 1;
size_t end;
for (end = range.end; end > range.begin + step; end -= step)
ranges_stack.push_back(MarkRange(end - step, end));
ranges_stack.push_back(MarkRange(range.begin, end));
}
}
}
return res;
}
}
|
{"hexsha": "333f9c7cc6026c062023e1c8883cd18127a663ed", "size": 38608, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "dbms/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp", "max_stars_repo_name": "ywandy/ClickHouse", "max_stars_repo_head_hexsha": "a4093f2b1aba01eca7aa901bd0543f17c178b796", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5.0, "max_stars_repo_stars_event_min_datetime": "2018-05-10T14:40:44.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-13T11:43:15.000Z", "max_issues_repo_path": "dbms/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp", "max_issues_repo_name": "ywandy/ClickHouse", "max_issues_repo_head_hexsha": "a4093f2b1aba01eca7aa901bd0543f17c178b796", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dbms/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp", "max_forks_repo_name": "ywandy/ClickHouse", "max_forks_repo_head_hexsha": "a4093f2b1aba01eca7aa901bd0543f17c178b796", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2.0, "max_forks_repo_forks_event_min_datetime": "2020-05-23T04:55:22.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-24T11:30:51.000Z", "avg_line_length": 41.5139784946, "max_line_length": 157, "alphanum_fraction": 0.6521705346, "num_tokens": 8223}
|
import numpy as np
class Module:
def forward(self, X):
raise NotImplementedError
def __call__(self, X):
return self.forward(X)
def backward(self, X, grad):
raise NotImplementedError
class Sigmoid(Module):
def forward(self, X):
return 1/(1 + np.exp(-X))
def backward(self, X, grad):
z = self.forward(X)
return grad*z*(1 - z)
def __repr__(self):
return 'Sigmoid'
class Linear(Module):
def forward(self, X):
return X
def backward(self, X, grad):
return grad
def __repr__(self):
return 'Linear'
class ReLU(Module):
def forward(self, X):
return np.maximum(X, 0)
def backward(self, X, grad):
grad_t = np.array(grad, copy = True)
grad_t[X <= 0] = 0
return grad_t
def __repr__(self):
return 'ReLU'
class TanH(Module):
def forward(self, X):
return np.tanh(X)
def backward(self, X, grad):
z = self.forward(X)
return grad* (1 - z**2)
def __repr__(self):
return 'TanH'
def softmax(x, axis=0):
    # subtracting the max is the standard overflow guard; keepdims keeps the
    # subtraction and normalization broadcastable along any axis
    s = np.exp(x - x.max(axis=axis, keepdims=True))
    return s/s.sum(axis=axis, keepdims=True)
class Layer(Module):
def __init__(self, dim_in : int, dim_out : int, act : Module):
self.W = np.random.randn(dim_out, dim_in)
self.B = np.zeros(shape=(dim_out,1), dtype=float)
self.act = act
def __str__(self):
return f'Layer(W={self.W.shape}, B={self.B.shape}, act={self.act})'
def __repr__(self):
return self.__str__()
def forward(self, X):
o = self.act(self.W@X + self.B)
return o
    def backward(self, X : np.ndarray, grad : np.ndarray, lr : float):
        z = self.W@X + self.B
        m = z.shape[1]
        # The activation modules expect the pre-activation input, since their
        # backward() recomputes forward(X) internally; passing the activated
        # output here would apply the non-linearity twice.
        grad = self.act.backward(z, grad)
        grad_W = (1/m) * grad @ X.T
        grad_B = (1/m) * np.sum(grad, axis=1, keepdims=True)
        grad_X = self.W.T @ grad  # gradient w.r.t. the layer input, using the pre-update weights
        self.W -= lr*grad_W
        self.B -= lr*grad_B
        return grad_X
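# --- Minimal training sketch (illustrative only: the sizes, seed, loss and
# --- learning rate below are made-up values, not part of the original module)
if __name__ == '__main__':
    np.random.seed(0)
    X = np.random.randn(3, 8)          # 3 input features, batch of 8 columns
    y = np.random.randn(1, 8)          # regression targets
    hidden = Layer(3, 4, TanH())
    output = Layer(4, 1, Linear())
    for _ in range(200):
        h = hidden(X)
        grad = 2 * (output(h) - y)     # gradient of the squared error
        grad = output.backward(h, grad, lr=0.05)
        hidden.backward(X, grad, lr=0.05)
    print('final mse:', np.mean((output(hidden(X)) - y) ** 2))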
|
{"hexsha": "bbd58534cddd65465e1b7270ec4f8fd44c4185d4", "size": 2224, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/modules.py", "max_stars_repo_name": "s-omranpour/MLP-Numpy", "max_stars_repo_head_hexsha": "7bb05e2c0b680fcd7b5a2c3449281a718b6ecb68", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-01T09:16:51.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-01T09:16:51.000Z", "max_issues_repo_path": "src/modules.py", "max_issues_repo_name": "s-omranpour/MLP-Numpy", "max_issues_repo_head_hexsha": "7bb05e2c0b680fcd7b5a2c3449281a718b6ecb68", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/modules.py", "max_forks_repo_name": "s-omranpour/MLP-Numpy", "max_forks_repo_head_hexsha": "7bb05e2c0b680fcd7b5a2c3449281a718b6ecb68", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.2727272727, "max_line_length": 88, "alphanum_fraction": 0.5386690647, "include": true, "reason": "import numpy", "num_tokens": 595}
|
import os, sys
import random
import numpy as np
data_dir = 'xsum'
train_path_src = os.path.join(data_dir, 'train.source')
train_path_tgt = os.path.join(data_dir, 'train.target')
dev_path_src = os.path.join(data_dir, 'val.source')
dev_path_tgt = os.path.join(data_dir, 'val.target')
test_path_src = os.path.join(data_dir, 'test.source')
test_path_tgt = os.path.join(data_dir, 'test.target')
# 100, 200, 500, 1k
def select(path_src, path_tgt, num, outp_src, outp_tgt):
    with open(path_src) as src:
        src_full = [line.strip() for line in src]
    with open(path_tgt) as tgt:
        tgt_full = [line.strip() for line in tgt]
    assert len(src_full) == len(tgt_full)
    print(len(src_full), len(tgt_full))
    # sample `num` aligned pairs without replacement
    temp_lst = np.random.choice(len(src_full), num, replace=False)
    with open(outp_src, 'w') as src_out, open(outp_tgt, 'w') as tgt_out:
        for idx in temp_lst:
            src_out.write(src_full[idx] + '\n')
            tgt_out.write(tgt_full[idx] + '\n')
if __name__ == '__main__':
num_ = int(sys.argv[1])
seed_ = int(sys.argv[2])
np.random.seed(seed_)
data_dir2 = 'lowdata_xsum/xsum_{}_{}'.format(num_, seed_)
os.mkdir(data_dir2)
# data_dir2 = 'lowdata_xsum/xsum_small_test'
train_path_src2 = os.path.join(data_dir2, 'train.source')
train_path_tgt2 = os.path.join(data_dir2, 'train.target')
dev_path_src2 = os.path.join(data_dir2, 'val.source')
dev_path_tgt2 = os.path.join(data_dir2, 'val.target')
test_path_src2 = os.path.join(data_dir2, 'test.source')
test_path_tgt2 = os.path.join(data_dir2, 'test.target')
select(train_path_src, train_path_tgt, num_, train_path_src2, train_path_tgt2)
select(dev_path_src, dev_path_tgt, int(num_*0.3), dev_path_src2, dev_path_tgt2)
# os.mkdir(data_dir2)
# select(test_path_src, test_path_tgt, 1500, test_path_src2, test_path_tgt2)
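# Example invocation (made-up values): `python lowdata_gen.py 100 42` creates
# lowdata_xsum/xsum_100_42/ with a 100-example train split and a 30-example
# (num * 0.3) validation split, both sampled without replacement under seed 42.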
|
{"hexsha": "34b669cbaee7188b30716a2f1289709fec9acb66", "size": 2041, "ext": "py", "lang": "Python", "max_stars_repo_path": "transformers/examples/seq2seq/lowdata_gen.py", "max_stars_repo_name": "jordiclive/ControlPrefixes", "max_stars_repo_head_hexsha": "b647f68bf0c7e771f847c4a51e5f22af2ac95699", "max_stars_repo_licenses": ["Apache-1.1"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2021-11-23T09:01:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T11:34:15.000Z", "max_issues_repo_path": "transformers/examples/seq2seq/lowdata_gen.py", "max_issues_repo_name": "jordiclive/ControlPrefixes", "max_issues_repo_head_hexsha": "b647f68bf0c7e771f847c4a51e5f22af2ac95699", "max_issues_repo_licenses": ["Apache-1.1"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-12-10T17:43:23.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-18T11:37:19.000Z", "max_forks_repo_path": "transformers/examples/seq2seq/lowdata_gen.py", "max_forks_repo_name": "jordiclive/ControlPrefixes", "max_forks_repo_head_hexsha": "b647f68bf0c7e771f847c4a51e5f22af2ac95699", "max_forks_repo_licenses": ["Apache-1.1"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2021-12-19T03:22:08.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-14T04:41:04.000Z", "avg_line_length": 29.5797101449, "max_line_length": 83, "alphanum_fraction": 0.6766291034, "include": true, "reason": "import numpy", "num_tokens": 579}
|
c
c ---------------------------------------------------------------
c
subroutine tirrfil(irr,nrow,ncol,xl,xr,yb,yt,level,lstnew)
c
c set the portion of the irregular array at the given level
c bounded by the given coordinates
c
implicit double precision (a-h,o-z)
logical graf
parameter (maxgr = 192, maxlv=12)
common /nodal/ rnode(12,maxgr),node(17,maxgr),lstart(maxlv),
* newstl(maxlv),
* listsp(maxlv),intrat(maxlv),tol,bzonex,bzoney,mstart,ndfree,
* lfine,iorder,mxnest,kcheck,lwidth,
* maxcl, graf, lhead
include "calloc.i"
dimension ialloc(allocsize)
equivalence (alloc, ialloc)
c
logical xint,yint
dimension irr(nrow,ncol)
c
iirr(i,j) = 2*(locirr-1)+1+i-1+mitot*(j-1)
c
hx = hxposs(level)
hy = hyposs(level)
c
mptr = lstart(level)
c
c check if mptr and patch intersect. recall that irr array
c is enlarged by lwidth around grid, so use effective size.
c
10 maxi = node(5,mptr)
maxj = node(6,mptr)
maxip1 = maxi + 1
maxjp1 = maxj + 1
locirr = node(14,mptr)
mitot = maxi-1+2*lwidth
mjtot = maxj-1+2*lwidth
xst = rnode(1,mptr)-lwidth*hx
yst = rnode(2,mptr)-lwidth*hy
xend = rnode(7,mptr)+lwidth*hx
yend = rnode(4,mptr)+lwidth*hy
lstgrd = node(17,mptr)
xint = .false.
yint = .false.
if (((xl .le. xst) .and. (xr .gt. xst)) .or.
1 ((xl .ge. xst) .and. (xl .lt. xend))) xint = .true.
if (((yb .le. yst) .and. (yt .gt. yst)) .or.
1 ((yb .ge. yst) .and. (yb .lt. yend))) yint = .true.
if (.not. (xint .and. yint)) go to 90
c
c calculate starting and ending indices for copying values from
c mptr to patch
c
ist = max(1, idint((xst-xl)/hx + 1.1))
jst = max(1, idint((yst-yb)/hy + 1.1))
iend = min(nrow,idint( (xend-xl)/hx + .1))
jend = min(ncol,idint( (yend-yb)/hy + .1))
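c     illustration with made-up numbers: xl = 0.0, xst = -0.5, hx = 0.25
c     gives ist = max(1, idint(-0.9)) = 1, and xend = 2.0 with nrow = 6
c     gives iend = min(6, idint(8.1)) = 6, i.e. the whole patch row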
c play it safe
if ((ist .gt. iend) .or. (jst .gt. jend)) go to 90
c
c calculate starting index for mptr as source updater
c
isrc = max(1, idint((xl-xst)/hx + 1.1))
jsrc = max(1, idint((yb-yst)/hy + 1.1))
c
jdonor = jsrc
do 55 j = jst, jend
idonor = isrc
do 45 i = ist, iend
irr(i,j) = ialloc(iirr(idonor,jdonor))
if (irr(i,j) .eq. lstgrd) then
c # need new lstgrd over all grids for this patch
irr(i,j) = lstnew
endif
idonor = idonor + 1
45 continue
jdonor = jdonor + 1
55 continue
c
90 mptr = node(10, mptr)
if (mptr .ne. 0) go to 10
c
return
end
|
{"hexsha": "af8477a3e0c13f8b8be26eca82638e3308f0ff8f", "size": 2684, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "src/2d/ag_ho/tirrfil.f", "max_stars_repo_name": "mjberger/ho_amrclaw_amrcart", "max_stars_repo_head_hexsha": "0e0d37dda52b8c813f7fc4bd7e61c5fdb33b0ada", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-06T23:14:49.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-06T23:14:49.000Z", "max_issues_repo_path": "src/2d/ag_ho/tirrfil.f", "max_issues_repo_name": "mjberger/ho_amrclaw_amrcart", "max_issues_repo_head_hexsha": "0e0d37dda52b8c813f7fc4bd7e61c5fdb33b0ada", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/2d/ag_ho/tirrfil.f", "max_forks_repo_name": "mjberger/ho_amrclaw_amrcart", "max_forks_repo_head_hexsha": "0e0d37dda52b8c813f7fc4bd7e61c5fdb33b0ada", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.1573033708, "max_line_length": 68, "alphanum_fraction": 0.5439642325, "num_tokens": 994}
|
"""
Author: Taiwo O. Adetiloye
Date: Dec 27, 2019
"""
"""Routes for logged-in application."""
from bokeh.plotting import figure
from bokeh.embed import components
from math import pi
import pandas as pd
from bokeh.palettes import Category20c
from bokeh.transform import cumsum
from flask import Blueprint, render_template, request
from flask_login import current_user
from flask import current_app as app
from .assets import compile_auth_assets
from flask_login import login_required
from .models import db
import numpy as np
from flask_paginate import Pagination, get_page_args
# Blueprint Configuration
main_bp = Blueprint('main_bp', __name__,
template_folder='templates',
static_folder='static')
compile_auth_assets(app)
@main_bp.route('/', methods=['GET', 'POST'])
@login_required
def dashboard():
"""Serve logged in Dashboard."""
facilities = [''] #Enter list items
selected_facility = request.form.get('facility')
facilities2 = [''] #Enter list items
if (not selected_facility) or selected_facility == 'Select facility':
return render_template('dashboard.html',
title='Sentiment Analytics Dashboard.',
template='dashboard-template',
current_user=current_user,
facilities=facilities,
Facilities='Facilities',
display='none',
selected_facility='24',
body="You are now logged in!")
elif (selected_facility in facilities2):
query_count_patient = 'MATCH (n:Patient)-[r:SURVEY_ON]->(f:Facility) WHERE f.facility_code={facility_code} return count(n)'
count_patient = db.graph.run(query_count_patient, parameters={'facility_code': selected_facility}).evaluate()
query_gross = 'MATCH (n:Patient)-[r:SURVEY_ON]->(f:Facility) return count(ID(n))'
count_gross = db.graph.run(query_gross).evaluate()
#------------------------------------------Bar plot
def make_bar_plot():
label = [selected_facility, 'All facilities']
y = [count_patient, count_gross]
plot = figure(x_range=label, y_range=(0, count_gross), plot_width=500, plot_height=300,
title="Patient Survey(facility) and Total Survey(all facilities)")
plot.xaxis.major_label_orientation = np.pi / 4
plot.vbar(label, top=y, width=0.5, color="#CAB2D6")
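            # components() is the stock Bokeh helper that splits a figure into a
            # <script> block and a matching <div>; the dashboard template is
            # assumed to embed both separately.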
script, div = components(plot)
return script, div
plots = []
plots.append(make_bar_plot())
#------------------------------------------End Bar plot
#------------------------------------------Grid table
query_response_negative = "match (p:Patient)-[r:SURVEY_ON]->(f:Facility) where f.facility_code={facility_code} and p.sentiment='Negative' return ID(p), p.response, p.date,p.service_type_code ORDER BY p.date DESC"
query_response_positive = "match (p:Patient)-[r:SURVEY_ON]->(f:Facility) where f.facility_code={facility_code} and p.sentiment='Positive' return ID(p), p.response, p.date,p.service_type_code ORDER BY p.date DESC"
query_response_neutral = "match (p:Patient)-[r:SURVEY_ON]->(f:Facility) where f.facility_code={facility_code} and p.sentiment='Neutral' return ID(p), p.response,p.date ,p.service_type_code ORDER BY p.date DESC"
query_response_negative2 = "match (p:Patient)-[r:SURVEY_ON]->(f:Facility) where f.facility_code={facility_code} and p.sentiment='Negative' return count(p)"
query_response_positive2 = "match (p:Patient)-[r:SURVEY_ON]->(f:Facility) where f.facility_code={facility_code} and p.sentiment='Positive' return count(p)"
query_response_neutral2 = "match (p:Patient)-[r:SURVEY_ON]->(f:Facility) where f.facility_code={facility_code} and p.sentiment='Neutral' return count(p)"
count_negative = db.graph.run(query_response_negative2, parameters={'facility_code': selected_facility}).evaluate()
count_positive = db.graph.run(query_response_positive2, parameters={'facility_code': selected_facility}).evaluate()
count_neutral = db.graph.run(query_response_neutral2, parameters={'facility_code': selected_facility}).evaluate()
total_sentiment = count_negative + count_positive + count_neutral
def sentiment(query_response):
comment_id= []
comment_list = []
date = []
comment_svt_code = []
comments_db = db.graph.run(query_response, parameters={'facility_code': selected_facility}).data()
for comment in comments_db:
comment_id.append(comment['ID(p)'])
comment_list.append(comment['p.response'])
date.append(comment['p.date'])
comment_svt_code.append(comment['p.service_type_code'])
            return (list(dict.fromkeys(comment_id)), comment_list, date, comment_svt_code)  # dict.fromkeys removes duplicate IDs
        sent_pos = sentiment(query_response_positive)
        sent_neg = sentiment(query_response_negative)
        sent_neu = sentiment(query_response_neutral)
        # run each query once, then zip (response, date, service_type_code)
        sentiment_pos = zip(sent_pos[1], sent_pos[2], sent_pos[3])
        sentiment_neg = zip(sent_neg[1], sent_neg[2], sent_neg[3])
        sentiment_neu = zip(sent_neu[1], sent_neu[2], sent_neu[3])
# ------------------------------------------Grid table Ends
#------------------------------------------Pie Chart plot
def make_pie_plot():
x = {
'Positive': count_positive/total_sentiment,
'Neutral': count_neutral/total_sentiment,
'Negative': count_negative/total_sentiment
}
data = pd.Series(x).reset_index(name='value').rename(columns={'index': 'sentiments'})
data['angle'] = data['value'] / data['value'].sum() * 2 * pi
data['color'] = Category20c[len(x)]
p = figure(plot_height=250,plot_width=300, title="Patient Sentiment Pie Chart", toolbar_location=None,
tools="hover", tooltips="@sentiments: @value{0.00%}", x_range=(-0.5, 1.0),
background_fill_color='#CAB2D6',
border_fill_color='#CAB2D6',)
p.wedge(x=0, y=1, radius=0.4,
start_angle=cumsum('angle', include_zero=True), end_angle=cumsum('angle'),
line_color="white", fill_color='color', legend_field='sentiments', source=data)
p.axis.axis_label = None
p.axis.visible = False
p.grid.grid_line_color = None
script, div = components(p)
return script, div
plots_pie = []
plots_pie.append(make_pie_plot())
#------------------------------------------Pie Chart End plot
return render_template('dashboard.html',
title='Sentiment Analytics Dashboard.',
template='dashboard-template',
current_user=current_user,
selected_facility=selected_facility,
facilities=facilities,
Facilities='Facility',
count_patient=total_sentiment,
count_gross=count_gross,
percentage= str(round((total_sentiment * 100 / count_gross), 2)) + '%', #percentage
plots=plots,
plots_pie=plots_pie,
display='block',
positive=sentiment_pos,
negative=sentiment_neg,
neutral=sentiment_neu,
body="You are now logged in!")
|
{"hexsha": "1280156e39a55ccf4892c81bb292b7b77afd1ff7", "size": 8076, "ext": "py", "lang": "Python", "max_stars_repo_path": "application/routes.py", "max_stars_repo_name": "taiwotman/flasklogin_neo4j", "max_stars_repo_head_hexsha": "11bbadc2c453ad222c468734d0feb82ca6f17c74", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-05-17T03:05:23.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-03T23:29:54.000Z", "max_issues_repo_path": "application/routes.py", "max_issues_repo_name": "taiwotman/flasklogin_neo4j", "max_issues_repo_head_hexsha": "11bbadc2c453ad222c468734d0feb82ca6f17c74", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-07-14T16:43:00.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-04T05:19:43.000Z", "max_forks_repo_path": "application/routes.py", "max_forks_repo_name": "taiwotman/flasklogin_neo4j", "max_forks_repo_head_hexsha": "11bbadc2c453ad222c468734d0feb82ca6f17c74", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-07-15T18:22:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-09T03:06:18.000Z", "avg_line_length": 46.1485714286, "max_line_length": 222, "alphanum_fraction": 0.5950965825, "include": true, "reason": "import numpy", "num_tokens": 1678}
|
# -*- coding: utf-8 -*-
"""
Created on Sun Dec 9 12:03:43 2018
Updated on Thu Dec 13 15:35:00 2018
@author: zhanglt
Python module for standard scorecard modeling
"""
from sklearn.linear_model import LogisticRegression
import pandas as pd
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
# ============================================================
# Basic Functions
# ============================================================
def _assign_interval_base(x, boundaries):
"""Assign each value in x an interval from boundaries.
Parameters
----------
x: numpy.array, shape (number of examples,)
The column of data that need to be discretized.
boundaries: numpy.array, shape (number of interval boundaries,)
The boundary values of the intervals to discretize target x.
Returns
-------
intervals: numpy.ndarray, shape (number of examples,2)
The array of intervals that are closed to the right.
The left column and right column of the array are the
left and right boundary respectively.
"""
# Add -inf and inf to the start and end of boundaries
boundaries = np.unique(
np.concatenate((np.array([-float('inf')]),
boundaries,np.array([float('inf')])),
axis=0))
# The max boundary that is smaller than x_i is its lower boundary.
# The min boundary that is >= than x_i is its upper boundary.
# Adding equal is because all intervals here are closed to the right.
boundaries_diff_boolean = x.reshape(1,-1).T > boundaries.reshape(1,-1)
lowers = [boundaries[b].max() for b in boundaries_diff_boolean]
uppers = [boundaries[b].min() for b in ~boundaries_diff_boolean]
# Array of intervals that are closed to the right
intervals= np.stack((lowers, uppers), axis=1)
return intervals
def assign_interval_str(x, boundaries, delimiter='~'):
"""Assign each value in x an interval from boundaries.
Parameters
----------
x: numpy.array, shape (number of examples,)
The column of data that need to be discretized.
boundaries: numpy.array, shape (number of interval boundaries,)
The boundary values of the intervals to discretize target x.
delimiter: string, optional(default='~')
        The returned array will be an array of intervals. Each interval is
        represented by a string (e.g. '1~2') of the form
        lower+delimiter+upper. This parameter controls the symbol that
        connects the lower and upper boundaries.
Returns
-------
intervals_str: numpy.array, shape (number of examples,)
Discretized result. The array of intervals that represented by strings.
"""
intervals= _assign_interval_base(x, boundaries)
# use join rather than use a+delimiter+b makes this line faster
intervals_str = np.array(
[delimiter.join((str(a),str(b))) for a,b in zip(intervals[:,0],
intervals[:,1])]
)
return intervals_str
def interval_to_boundary_vector(vector, delimiter='~'):
"""Transform an array of interval strings into the
unique boundaries of such intervals.
Parameters
----------
vector: numpy.array, shape (number of examples,)
The array of interval whose unique boundaries will
be returned.
delimiter: string, optional(default='~')
        The interval is represented by a string (e.g. '1~2') of the form
        lower+delimiter+upper. This parameter controls the symbol that
        connects the lower and upper boundaries.
Returns
-------
boundaries: numpy.array, shape (number of interval boundaries,)
An array of boundary values.
"""
boundaries = np.array(list(set(delimiter.join(np.unique(vector)).split(delimiter))))
boundaries = boundaries[(boundaries!='-inf') & (boundaries!='inf')].astype(float)
return boundaries
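# A minimal illustration of the two helpers above (made-up values):
#   assign_interval_str(np.array([1.5, 3.2, 7.8]), np.array([2.0, 5.0]))
# returns ['-inf~2.0', '2.0~5.0', '5.0~inf'] since intervals are closed to
# the right, and interval_to_boundary_vector() inverts the encoding,
# recovering the finite boundaries [2.0, 5.0].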
def map_np(array, dictionary):
"""map function for numpy array
Parameters
----------
array: numpy.array, shape (number of examples,)
The array of data to map values to.
    dictionary: dict
        The dictionary object.
Return
----------
result: numpy.array, shape (number of examples,)
The mapped result.
"""
return [dictionary[e] for e in array]
def _applyScoreCard(scorecard, feature_name, feature_array, delimiter='~'):
"""Apply the scorecard to a column. Return the score for each value.
Parameters
----------
scorecard: pandas.DataFrame,
the Scorecard rule table.
feature_name: string,
the name of the feature to score.
feature_array: numpy.array,
the values of the feature to score.
"""
score_rules = scorecard[scorecard['feature']==feature_name]
boundaries = interval_to_boundary_vector(score_rules.value.values, delimiter=delimiter)
intervals = assign_interval_str(feature_array, boundaries, delimiter=delimiter)
score_dict = dict(zip(score_rules.value, score_rules.score))
scores = map_np(intervals, score_dict)
return scores
# ============================================================
# Main Functions
# ============================================================
class LogisticRegressionScoreCard(BaseEstimator, TransformerMixin):
"""Take woe-ed features, fit a regression and turn it into a scorecard.
Parameters
----------
woe_transformer: WOE transformer object from WOE module.
C: float, optional(Default=1.0)
regularization parameter in linear regression. Default value is 1.
A smaller value implies more regularization.
See details in scikit-learn document.
class_weight: dict, optional(default=None)
weights for each class of samples (e.g. {class_label: weight})
in linear regression. This is to deal with imbalanced training data.
        Setting this parameter to 'auto' will automatically use the
class_weight function from scikit-learn to calculate the weights.
The equivalent codes are:
>>> from sklearn.utils import class_weight
>>> class_weights = class_weight.compute_class_weight('balanced',
np.unique(y), y)
random_state: int, optional(default=None)
random seed in linear regression. See details in scikit-learn doc.
PDO: int, optional(default=-20)
Points to double odds. One of the parameters of Scorecard.
Default value is -20.
A positive value means the higher the scores, the lower
the probability of y being 1.
A negative value means the higher the scores, the higher
the probability of y being 1.
basePoints: int, optional(default=100)
        The score corresponding to the base odds (# of y=1 / # of y=0).
decimal: int, optional(default=0)
Control the number of decimals that the output scores have.
Default is 0 (no decimal).
start_points: boolean, optional(default=False)
        There are two types of scorecards, with and without start points.
        True means the scorecard will have start points.
output_option: string, optional(default='excel')
Controls the output format of scorecard. For now 'excel' is
the only option.
output_path: string, optional(default=None)
The location to save the scorecard. e.g. r'D:\\Work\\jupyter\\'.
    verbose: boolean, optional(default=False)
When verbose is set to False, the predict() method only returns
the total scores of samples. In this case the output of predict()
method will be numpy.array;
When verbose is set to True, the predict() method will return
the total scores, as well as the scores of each feature. In this case
The output of predict() method will be pandas.DataFrame in order to
specify the feature names.
delimiter: string, optional(default='~')
        The feature interval is represented by a string (e.g. '1~2') of the
        form lower+delimiter+upper. This parameter is the symbol that
        connects the lower and upper boundaries.
Attributes
----------
woe_df_: pandas.DataFrame, the scorecard.
AB_ : A and B when converting regression to scorecard.
Methods
-------
fit(woed_X, y):
fit the Scorecard model.
predict(X_beforeWOE, load_scorecard=None):
Apply the model to the original feature
(before discretization and woe encoding).
If user choose to upload their own Scorecard,
user can pass a pandas.DataFrame to `load_scorecard`
parameter. The dataframe should contain columns such as
feature, value, woe, beta and score. An example would
        be as follows (value is the range of feature values, woe
        is the WOE encoding of that range, and score is the score
        for that range):
feature value woe beta score
x1 30~inf 0.377563 0.631033 5.0
        x1 20~30 1.351546 0.631033 37.0
x1 -inf~20 1.629890 0.631033 -17.0
"""
def __init__(self, woe_transformer, C=1, class_weight=None, random_state=None,
PDO=-20, basePoints = 100, decimal=0, start_points = False,
output_option='excel', output_path=None, verbose=False,
delimiter='~'):
self.__woe_transformer__ = woe_transformer
self.__C__ = C
self.__class_weight__ = class_weight
self.__random_state__ = random_state
self.__PDO__ = PDO
self.__basePoints__ = basePoints
self.__output_option__ = output_option
self.__decimal__ = decimal
self.__output_path__ = output_path
self.__start_points__ = start_points
self.__verbose__ = verbose
self.__delimiter__ = delimiter
def fit(self, woed_X, y):
"""
Parameters
----------
woed_X: numpy.ndarray or pandas.DataFrame, shape (number of examples,
number of features)
The woe encoded X.
y: numpy.array or pandas.Series, shape (number of examples,)
The target array (or dependent variable).
"""
# if X is pandas.DataFrame, turn it into numpy.ndarray and
# associate each column array with column names.
# if X is numpy.ndarray, generate column names for it (x1, x2,...)
self.fit_sample_size_, self.num_of_x_ = woed_X.shape
if isinstance(woed_X, pd.DataFrame):
self.columns_ = woed_X.columns.values # column names
features = woed_X.values
elif isinstance(woed_X, np.ndarray):
self.columns_ = np.array(
[''.join(('x',str(a))) for a in range(self.num_of_x_)]
) # # column names (i.e. x0, x1, ...)
features = woed_X
else:
raise TypeError('woed_X should be either numpy.ndarray or pandas.DataFrame')
if isinstance(y, pd.Series):
target = y.values
elif isinstance(y, np.ndarray):
target = y
else:
raise TypeError('y should be either numpy.array or pandas.Series')
# Basic settings of Scorecard
positive, total = y.sum(), y.shape[0]
self.__baseOdds__ = positive / (total - positive)
self.__p__ = self.__baseOdds__ / (1 + self.__baseOdds__)
B = self.__PDO__/np.log(2)
A = self.__basePoints__ + B * np.log(self.__baseOdds__)
self.AB_ = (A, B)
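        # Illustration with made-up numbers: PDO = -20, basePoints = 100 and
        # baseOdds = 0.25 give B = -20/ln(2) ≈ -28.85 and
        # A = 100 + B*ln(0.25) = 100 + 40 = 140, so higher scores mean higher odds of y=1.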
# Concat vertically all woe values (scorecard rules table)
woe_list = [pd.concat([
pd.Series([col]*len(self.__woe_transformer__.result_dict_[col][0])), # Name of x
pd.Series(list(self.__woe_transformer__.result_dict_[col][0].keys())), # interval strings of x
pd.Series(list(self.__woe_transformer__.result_dict_[col][0].values())) # woe values of x
],axis=1) for col in self.columns_]
self.woe_df_ = pd.concat(woe_list, axis=0, ignore_index=True
).rename(columns={0:'feature',1:'value',2:'woe'})
# Fit a logistic regression
lr = LogisticRegression(C=self.__C__,
class_weight=self.__class_weight__,
random_state=self.__random_state__)
lr.fit(features, y)
# Calculate scores for each value in each feature, and the start scores
beta_map = dict(zip(list(self.columns_),
lr.coef_[0,:]))
self.woe_df_['beta'] = map_np(self.woe_df_.feature, beta_map)
self.startPoints_ = A - B * lr.intercept_[0]
        # Rule table for the Scorecard
if self.__start_points__ is True:
self.woe_df_['score'] = np.around(
-B * self.woe_df_['beta'].values * self.woe_df_['woe'].values,
decimals=self.__decimal__)
startPoints = pd.DataFrame({'feature': ['StartPoints'],
'value': [np.nan],
'woe': [np.nan],
'beta': [np.nan],
'score': np.around(self.startPoints_, decimals=self.__decimal__)
})
#the scorecard
self.woe_df_ = pd.concat([startPoints, self.woe_df_],
axis=0,
ignore_index=True)
elif self.__start_points__ is False:
self.woe_df_['score'] = np.around(
-B * self.woe_df_['beta'].values * self.woe_df_['woe'].values + self.startPoints_ / self.num_of_x_,
decimals=self.__decimal__)
# Output the scorecard
if self.__output_option__ == 'excel' and self.__output_path__ is None:
self.woe_df_.to_excel('scorecards.xlsx', index=False)
elif self.__output_option__ == 'excel' and self.__output_path__ is not None:
self.woe_df_.to_excel(self.__output_path__+'scorecards.xlsx', index=False)
self.lr_ = lr
def predict(self, X_beforeWOE, load_scorecard=None):
"""Apply the scorecard.
Parameters
----------
X_beforeWOE: numpy.ndarray or pandas.DataFrame, shape (number of examples,
number of features)
The features before WOE transformation (the original X).
load_scorecard: pandas.DataFrame, optional(default=None)
If we want to use a modified scorecard
rather than the one automatically generated, we can pass the scorecard
we want to use using this parameter.
"""
# if X is pandas.DataFrame, turn it into numpy.ndarray and
# associate each column array with column names.
# if X is numpy.ndarray, generate column names for it (x1, x2,...)
self.transform_sample_size_ = X_beforeWOE.shape[0]
if isinstance(X_beforeWOE, pd.DataFrame):
features = X_beforeWOE.values.T
elif isinstance(X_beforeWOE, np.ndarray):
features = X_beforeWOE.T
else:
raise TypeError('X_beforeWOE should be either numpy.ndarray or pandas.DataFrame')
# Check whether the user choose to load a Scorecard rule table
if load_scorecard is None:
scorecard = self.woe_df_
else:
scorecard = load_scorecard
# Apply the Scorecard rules
scored_result = pd.concat(
[pd.Series(_applyScoreCard(scorecard,
name,
x,
delimiter=self.__delimiter__
),
name=name
) for name,x in zip(self.columns_, features)],
axis=1)
scored_result['TotalScore'] = scored_result.sum(axis=1)
if self.__verbose__:
return scored_result
else:
return scored_result['TotalScore'].values
|
{"hexsha": "e670da363012a29a9bafee72641f457e0120c243", "size": 16423, "ext": "py", "lang": "Python", "max_stars_repo_path": "scorecardbundle/model_training/LogisticRegressionScoreCard.py", "max_stars_repo_name": "zixuedanxin/Scorecard-Bundle", "max_stars_repo_head_hexsha": "1f87aed875123b6d3b8f4c0f41f5195975a0de66", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-11-02T06:49:16.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-02T06:49:16.000Z", "max_issues_repo_path": "scorecardbundle/model_training/LogisticRegressionScoreCard.py", "max_issues_repo_name": "zixuedanxin/Scorecard-Bundle", "max_issues_repo_head_hexsha": "1f87aed875123b6d3b8f4c0f41f5195975a0de66", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scorecardbundle/model_training/LogisticRegressionScoreCard.py", "max_forks_repo_name": "zixuedanxin/Scorecard-Bundle", "max_forks_repo_head_hexsha": "1f87aed875123b6d3b8f4c0f41f5195975a0de66", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.5772151899, "max_line_length": 116, "alphanum_fraction": 0.6004992998, "include": true, "reason": "import numpy", "num_tokens": 3607}
|
from __future__ import absolute_import
import os
import gym
import time
import math
import numpy as np
import urllib, hashlib
from collections import deque
from gym import error, spaces, utils
from gym.utils import seeding
from .four_keys_actions import FourKeysActions
from .four_keys_board_items import FourKeysBoardItems
from .four_keys_constants import ATTACK_FULL_CHARGE,MOVEMENT_FULL_CHARGE,HISTORY_QUEUE_LENGTH,TOTAL_KEYS,STARTING_PLAYER_NUMBER,BREADCRUMB_LIFESPAN
class FourKeysEnvironment(gym.Env):
def __init__(self,shared_state,player_number,details=None):
self._shared_state = shared_state
self.actions = FourKeysActions
self.action_space = self._shared_state.action_space
self.observation_space = self._shared_state.observation_space
self.last_action_keys = 0
self.total_steps = 0 # Total number of steps taken since reset
self.max_steps = 200 # Max number of possible steps for training
self.number = player_number
self.status_update_handler = None
# Last breadcrumb value at current tile. Zero when stepping on new tiles.
self.last_breadcrumb = 0
self.history_queue = deque([]) # Keeps track of historical board snapshots for observation
self.breadcrumb_queue = deque([]) # Keeps track of breadcrumb positions so they can be decayed
self.prev_distance_reward = None # Used to help shape rewards by tracking previous distance to key
self.current_distance_reward = 0 # Additional state tracking for reward shaping
self.breadcrumbs = [] # Tracks where the agent has been on a board
self.total_drops = 0 # Total drops since last reset
# If this is set to true then all other players are set to
# OTHER_PLAYER in observation and primary player is always player 1
# Currently only supported for agent_centered_observation shared state
self.homogenize_player_numbers = True
# If supplied, the reward mechanism is completely overwritten by this callback
# This is called with a dictionary containing relevant items
self.custom_reward = None
self._last_board_observation = None
self.reset(reset_shared_state=False)
self.status = { 'number':player_number, 'name': 'Unknown '+str(player_number+1), 'pic':'','description':'','address':'','endpoint':'','pickups':0, 'attacks':0, 'status':'live', 'position':{'x':0,'y':0}}
if details:
if 'username' in details and details['username']:
self.status['name'] = details['username']
if 'eth_address' in details and details['eth_address']:
self.status['address'] = details['eth_address']
if 'email' in details and details['email']:
gravatar_url = "https://www.gravatar.com/avatar/" + hashlib.md5(details['email'].lower().encode('utf-8')).hexdigest() + "?"
gravatar_url += 's=42'
self.status['pic'] = gravatar_url
size = int(math.sqrt(len(self._shared_state.board)))
self.position = shared_state.starting_points[player_number]
self.last_position = self.position
pos = self.position[0]*size + self.position[1]
self._shared_state.board[pos] = self.number+STARTING_PLAYER_NUMBER
self._shared_state.register_attached_environment(self)
# Updates the status snapshot
def process_status(self):
self.status['keys'] = self.keys
self.status['attack_recharge'] = self.attack_recharge
self.status['move_recharge'] = self.move_recharge
self.status['position']['x'] = self.position[1]
self.status['position']['y'] = self.position[0]
self.status['pickups'] = self.pickups
self.status['attacks'] = self.attacks
if self.status_update_handler is not None:
self.status_update_handler(self)
# Finds nearby players. Helpful for attacks.
def get_adjacent_players(self):
max_item = len(self._shared_state.board)
size = int(math.sqrt(max_item)) # Side size
pos = self.position[0]*size + self.position[1]
locs = [pos-1 if pos-1 >= 0 and (pos-1)%size != (size-1) else None,
pos+1 if pos+1 < max_item and (pos+1)%size != 0 else None,
pos+size if pos+size < max_item else None,
pos-size if pos-size >= 0 else None]
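        # Illustration (made-up numbers): on an 8x8 board, position (2,3)
        # flattens to pos = 2*8 + 3 = 19; the candidate neighbours are 18
        # (left), 20 (right), 27 (down) and 11 (up), with None marking edges.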
adjacent_players = []
for loc in locs:
if loc is None:
continue
if self._shared_state.board[loc] >= STARTING_PLAYER_NUMBER and self._shared_state.board[loc] <= STARTING_PLAYER_NUMBER+8:
adjacent_players.append(loc)
return adjacent_players
# Determines the number of adjacent players
def players_adjacent(self):
return len(self.get_adjacent_players()) > 0
# Commits a specified action to the shared state
def perform_action(self,action):
if action == FourKeysActions.NOTHING:
return True
if action == FourKeysActions.DROP_KEY:
self.drop_key()
self.total_drops += 1
if action == FourKeysActions.ATTACK:
if self._shared_state.attack_handler is not None:
adjacent_players = self.get_adjacent_players()
self._shared_state.attack_handler(adjacent_players)
self.attack_recharge = 0
self.attacks += 1
return True
else:
return False
max_item = len(self._shared_state.board)
size = int(math.sqrt(max_item)) # Side size
# L,R,D,U - todo: DRY
pos = self.position[0]*size + self.position[1]
locs = [pos-1 if pos-1 >= 0 else None,
pos+1 if pos+1 < max_item else None,
pos+size if pos+size < max_item else None,
pos-size if pos-size >= 0 else None]
position_action_map = {FourKeysActions.LEFT:locs[0], FourKeysActions.RIGHT:locs[1],FourKeysActions.DOWN:locs[2], FourKeysActions.UP:locs[3]}
if action in position_action_map.keys():
return self.move(position_action_map[action])
# Receives an attack from another agent in the shared state
def receive_attack(self):
self.move_recharge = 0
self.attack_recharge = math.floor(self.attack_recharge/2)
self.drop_keys()
self.process_status()
# Drop all keys (ex: when attacked)
def drop_keys(self):
while self.keys > 0:
self.drop_key()
# Drop single key
def drop_key(self):
if self.keys > 0:
key_placement = self.closest_open_tile()
self.keys -= 1
self._shared_state.board[key_placement] = FourKeysBoardItems.KEY
if self._shared_state.new_key_handler is not None:
size = int(math.sqrt(len(self._shared_state.board))) # Side size
coordinates = (math.floor(key_placement / size),key_placement%size)
self._shared_state.new_key_handler(coordinates,self)
return True
else:
return False
# Finds the closest tile that is not occupied
def closest_open_tile(self):
spots = []
layer = 1
while len(spots) == 0:
max_item = len(self._shared_state.board)
size = int(math.sqrt(max_item)) # Side size
pos = self.position[0]*size + self.position[1]
locs = [pos-layer if pos-layer >= 0 and (pos-layer)%size != (size-1) else None,
pos+layer if pos+layer < max_item and (pos+layer)%size != 0 else None,
pos+size*layer if pos+size*layer < max_item else None,
pos-size*layer if pos-size*layer >= 0 else None]
for loc in locs:
if loc is None:
continue
if self._shared_state.board[loc] == FourKeysBoardItems.EMPTY:
spots.append(loc)
layer+= 1
if len(spots) > 0:
key_placement = spots[0]
if len(spots) > 1:
spot_selection = self._shared_state.rng.randint(0,len(spots)-1)
key_placement = spots[spot_selection]
return key_placement
# Moves to the specified tile
def move(self,new_position_flat):
size = int(math.sqrt(len(self._shared_state.board))) # Side size
pos_flat = self.position[0]*size + self.position[1]
self.position = (math.floor(new_position_flat / size),new_position_flat%size)
if self._shared_state.board[new_position_flat] == FourKeysBoardItems.KEY:
self.keys += 1
self.pickups += 1
if self._shared_state.key_consumed_handler is not None:
self._shared_state.key_consumed_handler(self.position,self)
self._shared_state.board[pos_flat] = FourKeysBoardItems.EMPTY
self._shared_state.board[new_position_flat] = STARTING_PLAYER_NUMBER + self.number
# Checks if the specified action is valid
def is_action_valid(self,action):
if action == FourKeysActions.NOTHING:
return True
if action == FourKeysActions.DROP_KEY:
return self.keys > 0
if action == FourKeysActions.ATTACK and self.attack_recharge >= ATTACK_FULL_CHARGE:
return self.players_adjacent() # Can only attack adjacent players
max_item = len(self._shared_state.board)
size = int(math.sqrt(max_item)) # Side size
# L,R,D,U - todo: DRY
pos = self.position[0]*size + self.position[1]
locs = [pos-1 if pos-1 >= 0 and (pos-1)%size != (size-1) else None,
pos+1 if pos+1 < max_item and (pos+1)%size != 0 else None,
pos+size if pos+size < max_item else None,
pos-size if pos-size >= 0 else None]
board = self._shared_state.board
# ToDo: refactor
if action == FourKeysActions.LEFT and locs[0] is not None and (board[locs[0]] == FourKeysBoardItems.EMPTY or board[locs[0]] == FourKeysBoardItems.KEY):
return self.move_recharge >= MOVEMENT_FULL_CHARGE
if action == FourKeysActions.RIGHT and locs[1] is not None and (board[locs[1]] == FourKeysBoardItems.EMPTY or board[locs[1]] == FourKeysBoardItems.KEY):
return self.move_recharge >= MOVEMENT_FULL_CHARGE
if action == FourKeysActions.DOWN and locs[2] is not None and (board[locs[2]] == FourKeysBoardItems.EMPTY or board[locs[2]] == FourKeysBoardItems.KEY):
return self.move_recharge >= MOVEMENT_FULL_CHARGE
if action == FourKeysActions.UP and locs[3] is not None and (board[locs[3]] == FourKeysBoardItems.EMPTY or board[locs[3]] == FourKeysBoardItems.KEY):
return self.move_recharge >= MOVEMENT_FULL_CHARGE
return False
# Performs a specified action (from FourKeysActions) and updates the shared state if needed
def step(self, action):
# Snapshot keys right before action
self.last_action_keys = self.keys
self.prev_distance_reward = self.current_distance_reward
self.last_position = self.position
# Leave breadcrumb in previous position - track up to 20 unique visits to a given spot
previous_value = self.breadcrumbs[self.position[0],self.position[1]]
self.breadcrumbs[self.position[0],self.position[1]] = min(previous_value+1,20)
self.breadcrumb_queue.append(self.position)
if len(self.breadcrumb_queue) > BREADCRUMB_LIFESPAN:
expired_breadcrumb_location = self.breadcrumb_queue.popleft()
# Remove expiring breadcrumb from board
if self.breadcrumbs[expired_breadcrumb_location[0],expired_breadcrumb_location[1]] > 0:
#print('expiring %i,%i (%i)' % (expired_breadcrumb_location[0],expired_breadcrumb_location[1], self.breadcrumbs[expired_breadcrumb_location[0],expired_breadcrumb_location[1]]))
self.breadcrumbs[expired_breadcrumb_location[0],expired_breadcrumb_location[1]] -= 1
if (self.is_action_valid(action)):
self.perform_action(action)
self.last_breadcrumb = self.breadcrumbs[self.position[0],self.position[1]]
if self.attack_recharge < ATTACK_FULL_CHARGE:
self.attack_recharge += 1
if self.move_recharge < 8:
self.move_recharge += 1
self.total_steps += 1
self.process_status()
observation = self.observe()
self.snapshot_state(use_cached_observation=True)
return observation
def generate_current_observation(self):
attack_recharge_percent = (self.attack_recharge / ATTACK_FULL_CHARGE) * 100
movement_recharge_percent = (self.move_recharge / MOVEMENT_FULL_CHARGE) * 100
result,board,visible_breadcrumbs = self._shared_state.generate_observation(self.position,self.history_queue,self.keys,attack_recharge_percent,movement_recharge_percent,self.homogenize_player_numbers,self.breadcrumbs)
self._last_board_observation = board
self._last_visible_breadcrumbs = visible_breadcrumbs
return result
def observe(self,include_status=False):
attack_recharge_percent = (self.attack_recharge / ATTACK_FULL_CHARGE) * 100
movement_recharge_percent = (self.move_recharge / MOVEMENT_FULL_CHARGE) * 100
# board_observation is the isolated board component
observation = self.generate_current_observation()
done = self._shared_state.any_agent_won or self.total_steps >= self.max_steps
if self.max_steps < 99999999:
if self.custom_reward is None:
# The primary reward is just the amount of keys picked up (negative if dropped)
reward = self.keys - self.last_action_keys
# Add small penalty for dropping keys
if reward < 0:
reward = (self.keys - self.last_action_keys) * 1.05
# When finished a bonus is added proportional to the number of steps taken
if done and self.keys == TOTAL_KEYS:
final_key_reward = self.max_steps / self.total_steps
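                    # e.g. (made-up numbers): collecting all keys in 100 of the
                    # 200 allowed steps yields a finishing bonus of 200/100 = 2.0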
reward += final_key_reward
else:
# Adds a tiny bonus for exploring tiles that haven't yet been visited
if self.last_breadcrumb == 0:
                        reward += (1 - (self.total_steps / self.max_steps * 4)) * 0.01
                    # Adds a small penalty for stepping on frequently visited tiles
                    if self.last_breadcrumb >= 10:
                        reward -= (self.total_steps / self.max_steps * 4) * 0.01
else:
# Custom reward specified
values = dict(last_breadcrumb=self.last_breadcrumb,
total_steps=self.total_steps,
max_steps=self.max_steps,
current_keys=self.keys,
previous_keys=self.last_action_keys,
done=done)
reward = self.custom_reward(values)
else:
reward = 0
#if self.total_steps > self.max_steps or reward > 500 or reward < -500:
# print('Unexpected condition: [R: %i, TS: %i]' % (reward,self.total_steps))
info = self.status if include_status else {}
return observation, reward, done, info
# Adds current position to the queue and removes last item if present
def snapshot_state(self,use_cached_observation=False):
board_observation = self._last_board_observation
if not use_cached_observation:
board_observation = self._shared_state.get_observable_board(self.position)
self.history_queue.append(board_observation)
if len(self.history_queue) > HISTORY_QUEUE_LENGTH:
self.history_queue.popleft()
# Resets both agent-specific state and shared state by default
def reset(self,reset_shared_state=True):
self.position = self._shared_state.starting_points[self.number]
side = int(math.sqrt(self._shared_state.board.shape[0]))
pos = self.position[0]*side + self.position[1]
self._shared_state.board[pos] = self.number+STARTING_PLAYER_NUMBER
self.history_queue = deque([])
self.breadcrumb_queue = deque([])
# Fill history queue with current state
for i in range(HISTORY_QUEUE_LENGTH):
self.snapshot_state()
self.keys = 0 # Number of keys held by this specific agent
self.attack_recharge = 0 # Current attack recharge state
self.move_recharge = MOVEMENT_FULL_CHARGE # Current movement recharge state
self.pickups = 0 # How many keys have been picked up since reset
self.attacks = 0 # How many attacks have been made since reset
self.total_drops = 0
self.total_steps = 0
self.last_action_keys = 0 # Used to keep track of reward over multi-agent action cycle
self.breadcrumbs = np.zeros((side,side))
if reset_shared_state:
self._shared_state.reset()
obs = self.generate_current_observation()
return obs
# Translates a single board item (from a discrete cell) to text
def describe(self,state_item):
descriptions = {FourKeysBoardItems.EMPTY:' ', FourKeysBoardItems.WALL: '███', FourKeysBoardItems.KEY: ' K ', 3:' 1 ', 4:' 2 ',5:' 3 ', 6:' 4 ', 7:' 5 ', 8:' 6 ', 9:' 7 ', 10:' 8 ', 11: ' O '}
return descriptions[state_item]
# Quick visualization of probabilities or other ranged numbers
def describe_prob(self,probability,max,min):
if probability == 0:
return ' '
if probability >= max:
return '███'
if probability > (max/8)*7:
return '▓▓▓'
if probability > (max/8)*5:
return '▒▒▒'
if probability > min:
return '░░░'
# Quick visualization of single digit numbers
def describe_raw(self,probability,scale=2):
return '[%01d]' % probability*scale
# A simple console render of the board
def render(self,label='',clear=True):
board_square = self._shared_state.get_observable_board(self.position,flat=False)
side = int(board_square.shape[0])
history = []
for i in range(len(self.history_queue)):
history.append(self.history_queue[i].reshape((side,side)))
row_index = 0
divider = ' ┃ '
output = ''
for row in board_square:
history_row_print = ''
for i in reversed(range(len(history))):
history_item_row = history[i][row_index]
history_row_print += ' ' + ''.join([(self.describe(i)) for i in history_item_row])
breadcrumbs_square = self._last_visible_breadcrumbs.reshape((side,side))
breadcrumb_row_print = ''.join([(self.describe_prob(i,20,0)) for i in breadcrumbs_square[row_index]])
output += ''.join([(self.describe(i)) for i in row]) + divider + breadcrumb_row_print + '\n'
row_index += 1
whole_board_square = self._shared_state.get_whole_square_board()
side_length = whole_board_square.shape[0]
min_divider_length = 46
divider_length = max(min_divider_length,side_length)
output += '━' * divider_length + '\n'
for row in whole_board_square:
output += ''.join([(self.describe(i)) for i in row]) + '\n'
if clear:
os.system('cls' if os.name=='nt' else 'clear')
print(output)
if label:
print(label)
|
{"hexsha": "b170bd66fc39cbfbbdcfcbef1593877399463540", "size": 19629, "ext": "py", "lang": "Python", "max_stars_repo_path": "skypond/games/four_keys/four_keys_environment.py", "max_stars_repo_name": "upkoi/skypond", "max_stars_repo_head_hexsha": "5e366a18f2c5c85ce7b092d69b28c8f8aaad8718", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "skypond/games/four_keys/four_keys_environment.py", "max_issues_repo_name": "upkoi/skypond", "max_issues_repo_head_hexsha": "5e366a18f2c5c85ce7b092d69b28c8f8aaad8718", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "skypond/games/four_keys/four_keys_environment.py", "max_forks_repo_name": "upkoi/skypond", "max_forks_repo_head_hexsha": "5e366a18f2c5c85ce7b092d69b28c8f8aaad8718", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-06-13T18:08:01.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-17T02:42:19.000Z", "avg_line_length": 38.4129158513, "max_line_length": 224, "alphanum_fraction": 0.6392072953, "include": true, "reason": "import numpy", "num_tokens": 4395}
|
program main
integer :: ii, jj
logical :: ll
ii = 13 + 33
jj = ii
jj = 2021
ii = jj - jj
ii = (3 + 4)*5
ll = .false.
end program
|
{"hexsha": "c4e9bf7bf21609f908fd3359ee2fe70d9decb33b", "size": 145, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "tests/nonsmoke/functional/CompileTests/experimental_fortran_tests/expressions.f90", "max_stars_repo_name": "ouankou/rose", "max_stars_repo_head_hexsha": "76f2a004bd6d8036bc24be2c566a14e33ba4f825", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 488, "max_stars_repo_stars_event_min_datetime": "2015-01-09T08:54:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T07:15:46.000Z", "max_issues_repo_path": "tests/nonsmoke/functional/CompileTests/experimental_fortran_tests/expressions.f90", "max_issues_repo_name": "ouankou/rose", "max_issues_repo_head_hexsha": "76f2a004bd6d8036bc24be2c566a14e33ba4f825", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 174, "max_issues_repo_issues_event_min_datetime": "2015-01-28T18:41:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T16:51:05.000Z", "max_forks_repo_path": "tests/nonsmoke/functional/CompileTests/experimental_fortran_tests/expressions.f90", "max_forks_repo_name": "ouankou/rose", "max_forks_repo_head_hexsha": "76f2a004bd6d8036bc24be2c566a14e33ba4f825", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 146, "max_forks_repo_forks_event_min_datetime": "2015-04-27T02:48:34.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-04T07:32:53.000Z", "avg_line_length": 13.1818181818, "max_line_length": 19, "alphanum_fraction": 0.5172413793, "num_tokens": 63}
|
# Licensed under a 3-clause BSD style license - see LICENSE.rst
"""
A module for walking through the query response of VO data access layer
(DAL) queries and general VOTable-based datasets.
Most data queries in the VO return a table as a result, usually
formatted as a VOTable. Each row of the table describes a single
physical or virtual dataset which can be retrieved. For uniformity,
datasets are described via standard metadata defined by a data model
specific to the type of data being queried. The fields of the data
model are identified most generally by their VOClient alias as defined
in this interface, or at a lower level by the Utype or UCD of the
specific standard and version of the standard being queried. While
the data model differs depending upon the type of data being queried,
the form of the query response is the same for all classes of data,
allowing a common query response interface to be used.
An exception to this occurs when querying an astronomical catalog or
other externally defined table. In this case there is no VO defined
standard data model. Usually the field names are used to uniquely
identify table columns.
"""
__all__ = ["DALService", "DALQuery", "DALResults", "Record"]
import os
import shutil
import re
import requests
try:
from collections.abc import Mapping
except ImportError:
from collections import Mapping
import collections
from warnings import warn
from astropy.table.table import Table
from astropy.io.votable import parse as votableparse
from astropy.io.votable.ucd import parse_ucd
from astropy.utils.exceptions import AstropyDeprecationWarning
from .mimetype import mime_object_maker
from .exceptions import (DALFormatError, DALServiceError, DALQueryError)
from .. import samp
from ..utils.decorators import stream_decode_content
from ..utils.http import use_session
class DALService:
"""
an abstract base class representing a DAL service located at a particular
endpoint.
"""
def __init__(self, baseurl, session=None):
"""
instantiate the service, connecting it to a base URL
Parameters
----------
baseurl : str
the base URL that should be used for forming queries to the service.
session : object
optional session to use for network requests
"""
self._baseurl = baseurl
self._session = use_session(session)
@property
def baseurl(self):
"""
the base URL identifying the location of the service and where
queries are submitted (read-only)
"""
return self._baseurl
def search(self, **keywords):
"""
send a search query to this service.
This implementation has no knowledge of the type of service being
queried. The query parameters are given as arbitrary keywords which
will be assumed to be understood by the service (i.e. there is no
argument checking). The response is a generic DALResults object.
Raises
------
DALServiceError
for errors connecting to or communicating with the service
DALQueryError
for errors either in the input query syntax or other user errors
detected by the service
DALFormatError
for errors parsing the VOTable response
"""
q = self.create_query(**keywords)
return q.execute()
def create_query(self, **keywords):
"""
create a query object that constraints can be added to and then
executed.
Returns
-------
DALQuery
a generic query object
"""
q = DALQuery(self.baseurl, session=self._session, **keywords)
return q
def describe(self):
print('DAL Service at {}'.format(self.baseurl))
class DALQuery(dict):
"""
a class for preparing a query to a particular service. Query constraints
are added via its service type-specific methods. The various execute()
functions will submit the query and return the results.
The base URL for the query can be changed via the baseurl property.
A session can also optionally be passed in that will be used for
network transactions made by this object to remote services.
"""
_ex = None
def __init__(self, baseurl, session=None, **keywords):
"""
initialize the query object with a baseurl
"""
if isinstance(baseurl, bytes):
baseurl = baseurl.decode("utf-8")
self._baseurl = baseurl.rstrip("?")
self._session = use_session(session)
self.update({key.upper(): value for key, value in keywords.items()})
@property
def baseurl(self):
"""
the base URL that this query will be sent to when one of the
execute functions is called.
"""
return self._baseurl
def execute(self):
"""
submit the query and return the results as a Results subclass instance
Raises
------
DALServiceError
for errors connecting to or communicating with the service
DALQueryError
for errors either in the input query syntax or
other user errors detected by the service
DALFormatError
for errors parsing the VOTable response
"""
return DALResults(self.execute_votable(), self.queryurl, session=self._session)
def execute_raw(self):
"""
submit the query and return the raw response as a string.
No exceptions are raised here because non-2xx responses might still
contain payload. They can be raised later by calling ``raise_if_error``
"""
f = self.execute_stream()
out = None
try:
out = f.read()
finally:
f.close()
return out
@stream_decode_content
def execute_stream(self):
"""
Submit the query and return the raw response as a file stream.
No exceptions are raised here because non-2xx responses might still
contain payload. They can be raised later by calling ``raise_if_error``
"""
response = self.submit()
try:
response.raise_for_status()
except requests.RequestException as ex:
# save for later use
self._ex = ex
finally:
return response.raw
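# Illustrative deferred-error pattern (names as defined in this module):
#     raw = query.execute_stream()   # does not raise on HTTP error status
#     data = raw.read()
#     query.raise_if_error()         # raises DALServiceError now if the status was bad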
def submit(self):
"""
does the actual request
"""
url = self.queryurl
params = {k: v for k, v in self.items()}
response = self._session.get(url, params=params, stream=True)
return response
def execute_votable(self):
"""
Submit the query and return the results as an AstroPy votable instance.
As this is the level where qualified error messages are available,
they are raised here instead of in the underlying execute_stream.
Returns
-------
astropy.io.votable.tree.Table
an Astropy votable Table instance
Raises
------
DALServiceError
for errors connecting to or communicating with the service
DALFormatError
for errors parsing the VOTable response
See Also
--------
astropy.io.votable
DALServiceError
DALFormatError
DALQueryError
"""
try:
return votableparse(self.execute_stream().read)
except Exception as e:
self.raise_if_error()
raise DALFormatError(e, self.queryurl)
def raise_if_error(self):
"""
Raise if there was an error on http level.
"""
if self._ex:
e = self._ex
raise DALServiceError.from_except(e, self.queryurl)
@property
def queryurl(self):
"""
The URL that encodes the current query. This is the
URL that the execute functions will use if called next.
"""
return self.baseurl
class DALResults:
"""
Results from a DAL query. It provides random access to records in
the response. Alternatively, it can provide results via a Cursor
(compliant with the Python Database API) or an iterable.
"""
@classmethod
@stream_decode_content
def _from_result_url(cls, result_url, session):
return session.get(result_url, stream=True).raw
@classmethod
def from_result_url(cls, result_url, session=None):
"""
Create a result object from a url.
Uses the optional session to make the request.
"""
session = use_session(session)
return cls(
votableparse(cls._from_result_url(result_url, session).read),
url=result_url,
session=session)
def __init__(self, votable, url=None, session=None):
"""
initialize the result set. This constructor is not typically called
directly by applications; rather, an instance is obtained by calling a
DALQuery's execute().
Parameters
----------
votable : str
the service response parsed into an
astropy.io.votable.tree.VOTableFile instance.
url : str
the URL that produced the response
session : object
optional session to use for network requests
Raises
------
DALFormatError
if the response VOTable does not contain a response table
See Also
--------
DALFormatError
"""
self._votable = votable
self._url = url
self._session = use_session(session)
self._status = self._findstatus(votable)
if self._status[0].lower() not in ("ok", "overflow"):
raise DALQueryError(self._status[1], self._status[0], url)
self._resultstable = self._findresultstable(votable)
if not self._resultstable:
raise DALFormatError(
reason="VOTable response missing results table", url=url)
self._fldnames = tuple(
field.name for field in self._resultstable.fields)
if not self._fldnames:
raise DALFormatError(
reason="response table missing column descriptions.", url=url)
self._infos = self._findinfos(votable)
def _findresultstable(self, votable):
# this can be overridden to specialize for a particular DAL protocol
res = self._findresultsresource(votable)
if not res or len(res.tables) < 1:
return None
return res.tables[0]
def _findresultsresource(self, votable):
# this can be overridden to specialize for a particular DAL protocol
if len(votable.resources) < 1:
return None
for res in votable.resources:
if res.type.lower() == "results":
return res
return votable.resources[0]
def _findstatus(self, votable):
# this can be overridden to specialize for a particular DAL protocol
# look first in the result resource
res = self._findresultsresource(votable)
if res:
# should be a RESOURCE/INFO
info = self._findstatusinfo(res.infos)
if info:
return (info.value, info.content)
# if not there, check inside first table
if len(res.tables) > 0:
info = self._findstatusinfo(res.tables[0].infos)
if info:
return (info.value, info.content)
# otherwise, look just below the root element
info = self._findstatusinfo(votable.infos)
if info:
return (info.value, info.content)
# assume it's okay
return ("OK", "QUERY_STATUS not specified")
def _findstatusinfo(self, infos):
# this can be overridden to specialize for a particular DAL protocol
for info in infos:
if info.name.lower() == 'query_status':
return info
def _findinfos(self, votable):
# this can be overridden to specialize for a particular DAL protocol
infos = {}
res = self._findresultsresource(votable)
for info in res.infos:
infos[info.name] = info.value
for info in votable.infos:
infos[info.name] = info.value
return infos
def __repr__(self):
return repr(self.to_table())
@property
def queryurl(self):
"""
the query URL that produced these results, or None if unknown
"""
return self._url
@property
def votable(self):
"""
The complete votable XML Document `astropy.io.votable.tree.VOTableFile`
"""
return self._votable
@property
def resultstable(self):
"""
The votable XML element `astropy.io.votable.tree.Table`
"""
return self._resultstable
def to_table(self):
"""
Returns an astropy Table object.
Returns
-------
`astropy.table.Table`
"""
return self.resultstable.to_table(use_names_over_ids=True)
@property
def table(self):
warn(AstropyDeprecationWarning(
'Using the table property is deprecated. '
'Please use to_table() instead.'
))
return self.to_table()
def __len__(self):
"""
return the record count
"""
return len(self.resultstable.array)
def __getitem__(self, indx):
"""
if indx is a string, r[indx] will return the field with the name of
indx; if indx is an integer, r[indx] will return the indx-th record.
"""
if isinstance(indx, int):
return self.getrecord(indx)
elif isinstance(indx, tuple):
return self.getvalue(*indx)
else:
return self.getcolumn(indx)
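# Illustrative indexing (the column name "ra" is hypothetical):
#     results[0]        -> first Record
#     results["ra"]     -> numpy array for the column
#     results["ra", 0]  -> single value at column "ra", row 0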
@property
def fieldnames(self):
"""
return the names of the columns. These are the names that are used
to access values from the dictionaries returned by getrecord(). They
correspond to the column names.
"""
return self._fldnames
@property
def fielddescs(self):
"""
return the full metadata of the columns as a list of Field instances,
simple objects with attributes corresponding to the VOTable FIELD
attributes, namely: name, id, type, ucd, utype, arraysize, description
"""
return self.resultstable.fields
@property
def status(self):
"""
The query status as a 2-element tuple e.g. ('OK', 'Everything is fine')
"""
return self._status
def fieldname_with_ucd(self, ucd):
"""
return the field name that has a given UCD value or None if the UCD
is not found.
"""
search_ucds = set(parse_ucd(ucd, has_colon=True))
for field in (field for field in self.fielddescs if field.ucd):
field_ucds = set(parse_ucd(field.ucd, has_colon=True))
if search_ucds & field_ucds:
return field.name
return None
def fieldname_with_utype(self, utype):
"""
return the field name that has a given UType value or None if the UType
is not found.
"""
try:
iterchain = (
self.getdesc(fieldname) for fieldname in self.fieldnames)
iterchain = (field for field in iterchain if field.utype == utype)
return next(iterchain).name
except StopIteration:
return None
def getcolumn(self, name):
"""
return a numpy array containing the values for the column with the
given name
"""
try:
if name not in self.fieldnames:
name = self.resultstable.get_field_by_id(name).name
return self.resultstable.array[name]
except KeyError:
raise KeyError("No such column: {}".format(name))
def getrecord(self, index):
"""
return a representation of a result record that follows dictionary
semantics. The keys of the dictionary are those returned by this
instance's fieldnames attribute. The returned record may have additional
accessor methods for getting at standard DAL response metadata
(e.g. ra, dec).
Parameters
----------
index : int
the integer index of the desired record where 0 returns the first
record
Returns
-------
Record
a dictionary-like wrapper containing the result record metadata.
Raises
------
IndexError
if index is negative or equal or larger than the number of rows in
the result table.
See Also
--------
Record
"""
return Record(self, index, session=self._session)
def getvalue(self, name, index):
"""
return the value of a record attribute--a value from a column and row.
Parameters
----------
name : str
the name of the attribute (column)
index : int
the zero-based index of the record
Raises
------
IndexError
if index is negative or equal or larger than the
number of rows in the result table.
KeyError
if name is not a recognized column name
"""
return self.getrecord(index)[name]
def getdesc(self, name):
"""
return the field description for the record attribute (column) with
the given name
Parameters
----------
name : str
the name of the attribute (column)
Returns
-------
object
with attributes (name, id, datatype, unit, ucd, utype, arraysize)
which describe the column
"""
if name not in self._fldnames:
raise KeyError(name)
return self.resultstable.get_field_by_id_or_name(name)
def __iter__(self):
"""
return a python iterable for stepping through the records in this
result
"""
pos = 0
while True:
try:
out = self.getrecord(pos)
except IndexError:
break
yield out
pos += 1
def broadcast_samp(self, client_name=None):
"""
Broadcast the table to ``client_name`` via SAMP
"""
with samp.connection() as conn:
samp.send_table_to(
conn, self.to_table(),
client_name=client_name, name=self.queryurl)
def cursor(self):
"""
return a cursor that is compliant with the Python Database API's
:class:`.Cursor` interface. See PEP 249 for details.
"""
from .dbapi2 import Cursor
return Cursor(self)
class Record(Mapping):
"""
one record from a DAL query result. The column values are accessible
as dictionary items. It also provides special added functions for
accessing the dataset the record corresponds to. Subclasses may provide
additional functions for access to service type-specific data.
"""
def __init__(self, results, index, session=None):
self._results = results
self._index = index
self._session = use_session(session)
self._mapping = collections.OrderedDict(
zip(
results.fieldnames,
results.resultstable.array.data[index]
)
)
def __getitem__(self, key):
try:
if key not in self._mapping:
key = self._results.resultstable.get_field_by_id(key).name
return self._mapping[key]
except KeyError:
raise KeyError("No such column: {}".format(key))
def __iter__(self):
return iter(self._mapping)
def __len__(self):
return len(self._mapping)
def __repr__(self):
return repr(tuple(self.values()))
def get(self, key, default=None, decode=False):
"""
This method mimics the dict get method and adds a decode parameter
to allow decoding of binary strings.
"""
out = self._mapping.get(key, default)
if decode and isinstance(out, bytes):
out = out.decode('ascii')
return out
def getbyucd(self, ucd, default=None, decode=False):
"""
return the column with the given ucd.
"""
return self.get(
self._results.fieldname_with_ucd(ucd), default, decode)
def getbyutype(self, utype, default=None, decode=False):
"""
return the column with the given utype.
Raises
------
KeyError
if there's no column with the given utype.
"""
return self.get(
self._results.fieldname_with_utype(utype), default, decode)
def getdataformat(self):
"""
return the mimetype of the dataset described by this record.
"""
return self.getbyucd('meta.code.mime', decode=True)
def getdataurl(self):
"""
return the URL contained in the access URL column which can be used
to retrieve the dataset described by this record. None is returned
if no such column exists.
"""
for fieldname in self._results.fieldnames:
field = self._results.getdesc(fieldname)
if (field.utype and "access.reference" in field.utype.lower()) or (
field.ucd and "meta.dataset" in field.ucd and
"meta.ref.url" in field.ucd
):
out = self[fieldname]
if isinstance(out, bytes):
out = out.decode('utf-8')
return out
return None
def getdataobj(self):
"""
return the appropriate data object for the data content behind
this record.
"""
return mime_object_maker(self.getdataurl(), self.getdataformat())
@stream_decode_content
def getdataset(self, timeout=None):
"""
Get the dataset described by this record from the server.
Parameters
----------
timeout : float
the time in seconds to allow for a successful
connection with server before failing with an
IOError (specifically, socket.timeout) exception
Returns
-------
A file-like object which may be read to retrieve the referenced
dataset.
Raises
------
KeyError
if no dataset access URL is included in the record
URLError
if the dataset access URL is invalid (note: subclass of IOError)
HTTPError
if an HTTP error occurs while accessing the dataset
(note: subclass of IOError)
socket.timeout
if the timeout is exceeded before a connection is established.
(note: subclass of IOError)
IOError
if some other error occurs while establishing the data stream.
"""
url = self.getdataurl()
if not url:
raise KeyError("no dataset access URL recognized in record")
if timeout:
response = self._session.get(url, stream=True, timeout=timeout)
else:
response = self._session.get(url, stream=True)
response.raise_for_status()
return response.raw
def cachedataset(self, filename=None, dir=".", timeout=None, bufsize=None):
"""
retrieve the dataset described by this record and write it out to
a file with the given name. If the file already exists, it will be
over-written.
Parameters
----------
filename : str
the name of the file to write dataset to. If the
value represents a relative path, it will be taken
to be relative to the value of the ``dir``
parameter. If None, a default name is attempted
based on the record title and format.
dir : str
the directory to write the file into. This value
will be ignored if filename is an absolute path.
timeout : int
the time in seconds to allow for a successful
connection with server before failing with an
IOError (specifically, socket.timeout) exception
bufsize : int
a buffer size in bytes for copying the data to disk
(default: 0.5 MB)
Raises
------
KeyError
if no dataset access URL is included in the record
URLError
if the dataset access URL is invalid
HTTPError
if an HTTP error occurs while accessing the dataset
socket.timeout
if the timeout is exceeded before a connection is established.
(note: subclass of IOError)
IOError
if an error occurs while writing out the dataset
"""
if not bufsize:
bufsize = 524288
if not filename:
filename = self.make_dataset_filename(dir)
inp = self.getdataset(timeout)
try:
with open(filename, 'wb') as out:
shutil.copyfileobj(inp, out)
finally:
inp.close()
_dsname_no = 0 # used by make_dataset_filename
def make_dataset_filename(self, dir=".", base=None, ext=None):
"""
create a viable pathname in a given directory for saving the dataset
available via getdataset(). The pathname that is returned is
guaranteed not to already exist (under single-threaded conditions).
This implementation will first try combining the base name with the
file extension (with a dot). If this file already exists in the
directory, a name that appends an integer suffix ("-#") to the base
before joining with the extension will be tried. The integer will
be incremented until a non-existent filename is created.
Parameters
----------
dir : str
the directory to save the dataset under. This must already exist.
base : str
a basename to use to as the base of the filename. If None, the
result of ``suggest_dataset_basename()`` will be used.
ext : str
the filename extension to use. If None, the result of
``suggest_extension()`` will be used.
"""
if not dir:
raise ValueError(
"make_dataset_filename(): no dir parameter provided")
if not os.path.exists(dir):
os.mkdir(dir)
if not os.path.isdir(dir):
raise ValueError("{}: not a directory".format(dir))
if not base:
base = self.suggest_dataset_basename()
if not ext:
ext = self.suggest_extension("dat")
# be efficient when writing a bunch of files into the same directory
# in succession
n = self._dsname_no
def mkpath(i):
return os.path.join(dir, "{}-{}.{}".format(base, i, ext))
if n > 0:
# find the last file written of the form, base-n.ext
while n > 0 and not os.path.exists(mkpath(n)):
n -= 1
if n > 0:
n += 1
if n == 0:
# never wrote a file of form, base-n.ext; try base.ext
path = os.path.join(dir, "{}.{}".format(base, ext))
if not os.path.exists(path):
return path
n += 1
# find next available name
while os.path.exists(mkpath(n)):
n += 1
self._dsname_no = n
return mkpath(n)
def suggest_dataset_basename(self):
"""
return a default base filename that the dataset available via
``getdataset()`` can be saved as. This function is
specialized for a particular service type this record originates from
so that it can be used by ``cachedataset()`` via
``make_dataset_filename()``.
"""
# abstract; specialized for the different service types
return "dataset"
def suggest_extension(self, default=None):
"""
returns a recommended filename extension for the dataset described
by this record. Typically, this would look at the column describing
the format and choose an extension accordingly. This function is
specialized for a particular service type this record originates from
so that it can be used by ``cachedataset()`` via
``make_dataset_filename()``.
"""
# abstract; specialized for the different service types
return default
class Iter:
def __init__(self, res):
self.resultset = res
self.pos = 0
def __iter__(self):
return self
def __next__(self):
try:
out = self.resultset.getrecord(self.pos)
self.pos += 1
return out
except IndexError:
raise StopIteration()
next = __next__
class Upload:
"""
This class represents a DALI Upload as described in
http://www.ivoa.net/documents/DALI/20161101/PR-DALI-1.1-20161101.html#tth_sEc3.4.5
"""
def __init__(self, name, content):
"""
Initialise the Upload object with the given parameters
Parameters
----------
name : str
Tablename for use in queries
content : object
If it's a file-like object, a string pointing to a local file,
a `DALResults` object, or an astropy table, `is_inline` will be true
and it will expose a file-like object under `fileobj`.
Otherwise it exposes a URI under `uri`
"""
try:
self._is_file = os.path.isfile(content)
except Exception:
self._is_file = False
self._is_fileobj = hasattr(content, "read")
self._is_table = isinstance(content, Table)
self._is_resultset = isinstance(content, DALResults)
self._inline = any((
self._is_file,
self._is_fileobj,
self._is_table,
self._is_resultset,
))
self._name = name
self._content = content
@property
def is_inline(self):
"""
True if the upload can be inlined
"""
return self._inline
@property
def name(self):
return self._name
def fileobj(self):
"""
A file-like object for a local resource
Raises
------
ValueError
if there's no valid local resource
"""
if not self.is_inline:
raise ValueError(
"Upload {name} doesn't refer to a local resource".format(
name=self.name))
# astropy table
if isinstance(self._content, Table):
from io import BytesIO
fileobj = BytesIO()
self._content.write(output=fileobj, format="votable")
fileobj.seek(0)
return fileobj
elif isinstance(self._content, DALResults):
from io import BytesIO
fileobj = BytesIO()
table = self._content.to_table()
table.write(output=fileobj, format="votable")
fileobj.seek(0)
return fileobj
fileobj = self._content
try:
# fall back to treating the content as a local file path;
# if opening fails, assume it is already a file-like object
fileobj = open(self._content)
except (TypeError, OSError):
pass
return fileobj
def uri(self):
"""
The URI pointing to the result
"""
# TODO: use a async job base class instead of hasattr for inspection
if hasattr(self._content, "result_uri"):
self._content.raise_if_error()
uri = self._content.result_uri
else:
uri = str(self._content)
return uri
def query_part(self):
"""
The query part for use in DALI requests
"""
if self.is_inline:
value = "{name},param:{name}"
else:
value = "{name},{uri}"
return value.format(name=self.name, uri=self.uri())
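# Illustrative results (the table name "mytab" is hypothetical):
# inline upload -> "mytab,param:mytab"
# reference upload -> "mytab,http://example.org/table.xml"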
class UploadList(list):
"""
This class extends the native python list with utility functions for
upload handling
"""
@classmethod
def fromdict(cls, dct):
"""
Constructs an upload list from a dictionary with table_name: content
"""
return cls(Upload(key, value) for key, value in dct.items())
def param(self):
"""
Returns a string suitable for use in UPLOAD parameters
"""
return ";".join(upload.query_part() for upload in self)
_image_mt_re = re.compile(r'^image/(\w+)')
_text_mt_re = re.compile(r'^text/(\w+)')
_votable_mt_re = re.compile(r'^(\w+)/(x-)?votable(\+\w+)?')
|
{"hexsha": "0ce777d97a5fce588d1b5a4c4325d1ebbd0613e5", "size": 32946, "ext": "py", "lang": "Python", "max_stars_repo_path": "pyvo/dal/query.py", "max_stars_repo_name": "tomdonaldson/pyvo", "max_stars_repo_head_hexsha": "229820bd04b243a092b13e25362a7e1b258519f5", "max_stars_repo_licenses": ["BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-12T22:38:36.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-12T22:38:36.000Z", "max_issues_repo_path": "pyvo/dal/query.py", "max_issues_repo_name": "tomdonaldson/pyvo", "max_issues_repo_head_hexsha": "229820bd04b243a092b13e25362a7e1b258519f5", "max_issues_repo_licenses": ["BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pyvo/dal/query.py", "max_forks_repo_name": "tomdonaldson/pyvo", "max_forks_repo_head_hexsha": "229820bd04b243a092b13e25362a7e1b258519f5", "max_forks_repo_licenses": ["BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0225988701, "max_line_length": 87, "alphanum_fraction": 0.5962180538, "include": true, "reason": "from astropy", "num_tokens": 6929}
|
//----------------------------------------------------------------------------
/** @file SgPointSetTest.cpp
Unit tests for SgPointSet.
*/
//----------------------------------------------------------------------------
#include "SgSystem.h"
#include <sstream>
#include <boost/test/auto_unit_test.hpp>
#include "SgPointSet.h"
using namespace std;
using SgPointUtil::Pt;
//----------------------------------------------------------------------------
namespace {
BOOST_AUTO_TEST_CASE(SgPointSetTest_AllAdjacentTo)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
BOOST_CHECK(! b.AllAdjacentTo(a));
b.Exclude(Pt(1, 1));
b.Include(Pt(1, SG_MAX_SIZE));
BOOST_CHECK(! b.AllAdjacentTo(a));
b.Clear();
b.Include(Pt(1, 2));
BOOST_CHECK(b.AllAdjacentTo(a));
b.Include(Pt(3, 2));
BOOST_CHECK(b.AllAdjacentTo(a));
b.Include(Pt(5, 5));
BOOST_CHECK(! b.AllAdjacentTo(a));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Adjacent)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
BOOST_CHECK(! b.Adjacent(a));
b.Include(Pt(1, SG_MAX_SIZE));
BOOST_CHECK(! b.Adjacent(a));
b.Include(Pt(1, 2));
BOOST_CHECK(b.Adjacent(a));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_AdjacentOnlyTo)
{
SgPointSet a;
a.Include(Pt(19, 1));
a.Include(Pt(19, 2));
SgPointSet b;
b.Include(Pt(19, 1));
b.Include(Pt(19, 2));
b.Include(Pt(19, 3));
b.Include(Pt(18, 1));
BOOST_CHECK(! a.AdjacentOnlyTo(b, 19));
b.Include(Pt(18, 2));
BOOST_CHECK(a.AdjacentOnlyTo(b, 19));
b.Include(Pt(18, 3));
BOOST_CHECK(a.AdjacentOnlyTo(b, 19));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_AdjacentTo)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
BOOST_CHECK(! a.AdjacentTo(Pt(1, 1)));
BOOST_CHECK(! a.AdjacentTo(Pt(1, SG_MAX_SIZE)));
BOOST_CHECK(a.AdjacentTo(Pt(1, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Adjacent8To)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
BOOST_CHECK(! a.Adjacent8To(Pt(1, SG_MAX_SIZE)));
BOOST_CHECK(a.Adjacent8To(Pt(1, 1)));
BOOST_CHECK(a.Adjacent8To(Pt(1, 2)));
BOOST_CHECK(a.Adjacent8To(Pt(3, 3)));
}
void SgPointSetTestAllPointsAtSize(int boardSize)
{
const SgPointSet& s = SgPointSet::AllPoints(boardSize);
BOOST_CHECK_EQUAL(s.Size(), boardSize * boardSize);
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_AllPoints)
{
SgPointSetTestAllPointsAtSize(SG_MIN_SIZE);
SgPointSetTestAllPointsAtSize(9);
SgPointSetTestAllPointsAtSize(10);
SgPointSetTestAllPointsAtSize(SG_MAX_SIZE);
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_And)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
SgPointSet c(a & b);
BOOST_CHECK_EQUAL(c.Size(), 1);
BOOST_CHECK(c.Contains(Pt(1, 1)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_AndAssign)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
a &= b;
BOOST_CHECK_EQUAL(a.Size(), 1);
BOOST_CHECK(a.Contains(Pt(1, 1)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Assign)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(3, 3));
b = a;
BOOST_CHECK_EQUAL(b.Size(), 2);
BOOST_CHECK(b.Contains(Pt(1, 1)));
BOOST_CHECK(b.Contains(Pt(2, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Border)
{
SgPointSet a;
a.Include(Pt(19, 1));
a.Include(Pt(19, 2));
SgPointSet b = a.Border(19);
BOOST_CHECK_EQUAL(b.Size(), 3);
BOOST_CHECK(b.Contains(Pt(19, 3)));
BOOST_CHECK(b.Contains(Pt(18, 1)));
BOOST_CHECK(b.Contains(Pt(18, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Border8)
{
SgPointSet a;
a.Include(Pt(19, 1));
a.Include(Pt(19, 2));
SgPointSet b = a.Border8(19);
BOOST_CHECK_EQUAL(b.Size(), 4);
BOOST_CHECK(b.Contains(Pt(19, 3)));
BOOST_CHECK(b.Contains(Pt(18, 1)));
BOOST_CHECK(b.Contains(Pt(18, 2)));
BOOST_CHECK(b.Contains(Pt(18, 3)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Center)
{
SgPointSet a;
a.Include(Pt(19, 1));
a.Include(Pt(19, 2));
a.Include(Pt(19, 3));
BOOST_CHECK_EQUAL(a.Center(), Pt(19, 2));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Clear)
{
SgPointSet a;
a.Include(Pt(19, 1));
a.Include(Pt(19, 2));
a.Include(Pt(19, 3));
a.Clear();
BOOST_CHECK_EQUAL(a.Size(), 0);
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Component)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(1, 2));
a.Include(Pt(2, 1));
a.Include(Pt(1, SG_MAX_SIZE));
SgPointSet b = a.Component(Pt(1, 1));
BOOST_CHECK_EQUAL(b.Size(), 3);
BOOST_CHECK(b.Contains(Pt(1, 1)));
BOOST_CHECK(b.Contains(Pt(1, 2)));
BOOST_CHECK(b.Contains(Pt(2, 1)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_ConnComp)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(1, 2));
a.Include(Pt(2, 1));
a.Include(Pt(1, SG_MAX_SIZE));
SgPointSet b = a.ConnComp(Pt(1, 1));
BOOST_CHECK_EQUAL(b.Size(), 3);
BOOST_CHECK(b.Contains(Pt(1, 1)));
BOOST_CHECK(b.Contains(Pt(1, 2)));
BOOST_CHECK(b.Contains(Pt(2, 1)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Disjoint)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
BOOST_CHECK(a.Disjoint(b));
b.Include(Pt(3, 3));
BOOST_CHECK(a.Disjoint(b));
b.Include(Pt(2, 2));
BOOST_CHECK(! a.Disjoint(b));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_EnclosingRect)
{
SgPointSet a;
a.Include(Pt(19, 1));
a.Include(Pt(19, 3));
a.Include(Pt(18, 2));
BOOST_CHECK_EQUAL(a.EnclosingRect(), SgRect(18, 19, 1, 3));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Equals)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
b.Include(Pt(2, 2));
BOOST_CHECK(a == b);
BOOST_CHECK(! (a != b));
b.Exclude(Pt(2, 2));
BOOST_CHECK(! (a == b));
BOOST_CHECK(a != b);
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Grow)
{
SgPointSet a;
a.Include(Pt(8, 1));
a.Include(Pt(8, 2));
a.Include(Pt(9, 1));
a.Include(Pt(9, 2));
a.Grow(9);
BOOST_CHECK_EQUAL(a.Size(), 8);
BOOST_CHECK(a.Contains(Pt(7, 1)));
BOOST_CHECK(a.Contains(Pt(7, 2)));
BOOST_CHECK(a.Contains(Pt(8, 1)));
BOOST_CHECK(a.Contains(Pt(8, 2)));
BOOST_CHECK(a.Contains(Pt(8, 3)));
BOOST_CHECK(a.Contains(Pt(9, 1)));
BOOST_CHECK(a.Contains(Pt(9, 2)));
BOOST_CHECK(a.Contains(Pt(9, 3)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_GrowNewArea)
{
SgPointSet a;
a.Include(Pt(8, 1));
a.Include(Pt(8, 2));
a.Include(Pt(9, 1));
a.Include(Pt(9, 2));
SgPointSet newArea;
a.Grow(&newArea, 9);
BOOST_CHECK_EQUAL(a.Size(), 8);
BOOST_CHECK(a.Contains(Pt(7, 1)));
BOOST_CHECK(a.Contains(Pt(7, 2)));
BOOST_CHECK(a.Contains(Pt(8, 1)));
BOOST_CHECK(a.Contains(Pt(8, 2)));
BOOST_CHECK(a.Contains(Pt(8, 3)));
BOOST_CHECK(a.Contains(Pt(9, 1)));
BOOST_CHECK(a.Contains(Pt(9, 2)));
BOOST_CHECK(a.Contains(Pt(9, 3)));
BOOST_CHECK_EQUAL(newArea.Size(), 4);
BOOST_CHECK(newArea.Contains(Pt(7, 1)));
BOOST_CHECK(newArea.Contains(Pt(7, 2)));
BOOST_CHECK(newArea.Contains(Pt(8, 3)));
BOOST_CHECK(newArea.Contains(Pt(9, 3)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Grow8)
{
SgPointSet a;
a.Include(Pt(8, 1));
a.Include(Pt(8, 2));
a.Include(Pt(9, 1));
a.Include(Pt(9, 2));
a.Grow8(9);
BOOST_CHECK_EQUAL(a.Size(), 9);
BOOST_CHECK(a.Contains(Pt(7, 1)));
BOOST_CHECK(a.Contains(Pt(7, 2)));
BOOST_CHECK(a.Contains(Pt(7, 3)));
BOOST_CHECK(a.Contains(Pt(8, 1)));
BOOST_CHECK(a.Contains(Pt(8, 2)));
BOOST_CHECK(a.Contains(Pt(8, 3)));
BOOST_CHECK(a.Contains(Pt(9, 1)));
BOOST_CHECK(a.Contains(Pt(9, 2)));
BOOST_CHECK(a.Contains(Pt(9, 3)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_IsCloseTo)
{
SgPointSet a;
a.Include(Pt(1, 1));
BOOST_CHECK(a.IsCloseTo(Pt(1, 1)));
BOOST_CHECK(a.IsCloseTo(Pt(1, 2)));
BOOST_CHECK(a.IsCloseTo(Pt(1, 3)));
BOOST_CHECK(a.IsCloseTo(Pt(1, 4)));
BOOST_CHECK(! a.IsCloseTo(Pt(1, 5)));
BOOST_CHECK(a.IsCloseTo(Pt(1, 1)));
BOOST_CHECK(a.IsCloseTo(Pt(2, 1)));
BOOST_CHECK(a.IsCloseTo(Pt(3, 1)));
BOOST_CHECK(a.IsCloseTo(Pt(4, 1)));
BOOST_CHECK(! a.IsCloseTo(Pt(5, 1)));
BOOST_CHECK(a.IsCloseTo(Pt(4, 4)));
BOOST_CHECK(! a.IsCloseTo(Pt(5, 5)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_IsConnected)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(1, 2));
a.Include(Pt(2, 1));
BOOST_CHECK(a.IsConnected());
a.Include(Pt(1, SG_MAX_SIZE));
BOOST_CHECK(! a.IsConnected());
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_IsEmpty)
{
SgPointSet a;
BOOST_CHECK(a.IsEmpty());
a.Include(Pt(1, 1));
BOOST_CHECK(! a.IsEmpty());
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Kernel)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(1, 2));
a.Include(Pt(1, 3));
a.Include(Pt(2, 1));
a.Include(Pt(2, 2));
a.Include(Pt(2, 3));
a.Include(Pt(3, 1));
a.Include(Pt(3, 2));
a.Include(Pt(3, 3));
SgPointSet k = a.Kernel(19);
BOOST_CHECK_EQUAL(k.Size(), 4);
BOOST_CHECK(k.Contains(Pt(1, 1)));
BOOST_CHECK(k.Contains(Pt(1, 2)));
BOOST_CHECK(k.Contains(Pt(2, 1)));
BOOST_CHECK(k.Contains(Pt(2, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_MaxOverlap)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(2, 2));
b.Include(Pt(3, 3));
BOOST_CHECK(! a.MaxOverlap(b, 0));
BOOST_CHECK(a.MaxOverlap(b, 1));
BOOST_CHECK(a.MaxOverlap(b, 2));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_MinOverlap)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(2, 2));
b.Include(Pt(3, 3));
BOOST_CHECK(a.MinOverlap(b, 0));
BOOST_CHECK(a.MinOverlap(b, 1));
BOOST_CHECK(! a.MinOverlap(b, 2));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Minus)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
SgPointSet c(a - b);
BOOST_CHECK_EQUAL(c.Size(), 1);
BOOST_CHECK(c.Contains(Pt(2, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_MinusAssign)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
a -= b;
BOOST_CHECK_EQUAL(a.Size(), 1);
BOOST_CHECK(a.Contains(Pt(2, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Or)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
SgPointSet c(a | b);
BOOST_CHECK_EQUAL(c.Size(), 2);
BOOST_CHECK(c.Contains(Pt(1, 1)));
BOOST_CHECK(c.Contains(Pt(2, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_OrAssign)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
a |= b;
BOOST_CHECK_EQUAL(a.Size(), 2);
BOOST_CHECK(a.Contains(Pt(1, 1)));
BOOST_CHECK(a.Contains(Pt(2, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Overlaps)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(3, 3));
BOOST_CHECK(! a.Overlaps(b));
BOOST_CHECK(! b.Overlaps(a));
b.Include(Pt(2, 2));
BOOST_CHECK(a.Overlaps(b));
BOOST_CHECK(b.Overlaps(a));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_PointOf)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
BOOST_CHECK_EQUAL(a.PointOf(), Pt(1, 1));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_SubsetOf)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
BOOST_CHECK(b.SubsetOf(a));
b.Include(Pt(1, 1));
BOOST_CHECK(b.SubsetOf(a));
b.Include(Pt(2, 2));
BOOST_CHECK(b.SubsetOf(a));
b.Include(Pt(3, 3));
BOOST_CHECK(! b.SubsetOf(a));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_SgPointSet_SgVector)
{
SgVector<SgPoint> a;
SgPointSet b(a);
BOOST_CHECK_EQUAL(b.Size(), 0);
a.PushBack(Pt(1, 1));
a.PushBack(Pt(2, 2));
a.PushBack(Pt(3, 3));
SgPointSet c(a);
BOOST_CHECK_EQUAL(c.Size(), 3);
SgVector<SgPoint> d;
BOOST_CHECK_EQUAL(d.Length(), 0);
c.ToVector(&d);
BOOST_CHECK_EQUAL(d.Length(), 3);
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_SupersetOf)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
BOOST_CHECK(! b.SupersetOf(a));
b.Include(Pt(1, 1));
BOOST_CHECK(! b.SupersetOf(a));
b.Include(Pt(2, 2));
BOOST_CHECK(b.SupersetOf(a));
b.Include(Pt(3, 3));
BOOST_CHECK(b.SupersetOf(a));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Swap)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(3, 3));
b.Include(Pt(4, 4));
b.Include(Pt(5, 5));
a.Swap(b);
BOOST_CHECK_EQUAL(a.Size(), 3);
BOOST_REQUIRE(a.Size() == 3);
BOOST_CHECK(a.Contains(Pt(3, 3)));
BOOST_CHECK(a.Contains(Pt(4, 4)));
BOOST_CHECK(a.Contains(Pt(5, 5)));
BOOST_CHECK_EQUAL(b.Size(), 2);
BOOST_REQUIRE(b.Size() == 2);
BOOST_CHECK(b.Contains(Pt(1, 1)));
BOOST_CHECK(b.Contains(Pt(2, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Write)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
ostringstream out;
a.Write(out, 5);
BOOST_CHECK_EQUAL(out.str(),
"-----\n"
"-----\n"
"-----\n"
"-@---\n"
"@----\n");
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_Xor)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
SgPointSet c(a ^ b);
BOOST_CHECK_EQUAL(c.Size(), 1);
BOOST_CHECK(c.Contains(Pt(2, 2)));
}
BOOST_AUTO_TEST_CASE(SgPointSetTest_XorAssign)
{
SgPointSet a;
a.Include(Pt(1, 1));
a.Include(Pt(2, 2));
SgPointSet b;
b.Include(Pt(1, 1));
a ^= b;
BOOST_CHECK_EQUAL(a.Size(), 1);
BOOST_CHECK(a.Contains(Pt(2, 2)));
}
} // namespace
//----------------------------------------------------------------------------
|
{"hexsha": "70c0c40fd45689ec708d49faae2167dd405ab9c7", "size": 14267, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "fuego-0.4/smartgame/test/SgPointSetTest.cpp", "max_stars_repo_name": "MisterTea/HyperNEAT", "max_stars_repo_head_hexsha": "516fef725621991ee709eb9b4afe40e0ce82640d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 85.0, "max_stars_repo_stars_event_min_datetime": "2015-02-08T20:36:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-14T20:38:31.000Z", "max_issues_repo_path": "fuego-0.4/smartgame/test/SgPointSetTest.cpp", "max_issues_repo_name": "afcarl/HyperNEAT", "max_issues_repo_head_hexsha": "516fef725621991ee709eb9b4afe40e0ce82640d", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9.0, "max_issues_repo_issues_event_min_datetime": "2015-01-28T16:33:19.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T23:03:28.000Z", "max_forks_repo_path": "fuego-0.4/smartgame/test/SgPointSetTest.cpp", "max_forks_repo_name": "afcarl/HyperNEAT", "max_forks_repo_head_hexsha": "516fef725621991ee709eb9b4afe40e0ce82640d", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 27.0, "max_forks_repo_forks_event_min_datetime": "2015-01-28T16:33:30.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-12T05:04:39.000Z", "avg_line_length": 24.5559380379, "max_line_length": 78, "alphanum_fraction": 0.6029298381, "num_tokens": 4866}
|
@mlj_model mutable struct APIx0 <: Model
f0::Int
end
@mlj_model mutable struct APIx0b <: Model
f0::Int
end
mutable struct APIx1 <: Model end
@testset "selectrows(model, data...)" begin
X = (x1 = [2, 4, 6],)
y = [10.0, 20.0, 30.0]
@test selectrows(APIx0(), 2:3, X, y) == ((x1 = [4, 6],), [20.0, 30.0])
end
@testset "fit-x" begin
m0 = APIx0(f0=1)
m1 = APIx0b(f0=3)
# no weight support: explicit fallback
M.fit(m::APIx0, v::Int, X, y) = (5, nothing, nothing)
M.fit(m::APIx0, v::Int, X, y, w) = (5, nothing, nothing)
@test fit(m0, 1, randn(2), randn(2), 5) == (5, nothing, nothing)
# with weight support: use
M.fit(m::APIx0b, v::Int, X, y, w) = (7, nothing, nothing)
@test fit(m1, 1, randn(2), randn(2), 5) == (7, nothing, nothing)
# default fitted params
@test M.fitted_params(m1, 7) == (fitresult=7,)
# default training_losses fallback
@test M.training_losses(m0, nothing) === nothing
# update fallback = fit
@test update(m0, 1, 5, nothing, randn(2), 5) == (5, nothing, nothing)
# training losses:
f, c, r = MLJModelInterface.fit(m0, 1, rand(2), rand(2))
@test M.training_losses(m0, r) === nothing
end
struct DummyUnivariateFinite end
mutable struct UnivariateFiniteFitter <: Probabilistic end
@testset "models fitting a distribution to data" begin
MMI = MLJModelInterface
function MMI.fit(model::UnivariateFiniteFitter, verbosity::Int, X, y)
fitresult = DummyUnivariateFinite()
report = nothing
cache = nothing
verbosity > 0 && @info "Fitted a $fitresult"
return fitresult, cache, report
end
function MMI.predict(model::UnivariateFiniteFitter, fitresult, X)
return fill(fitresult, length(X))
end
MMI.input_scitype(::Type{<:UnivariateFiniteFitter}) = Nothing
MMI.target_scitype(::Type{<:UnivariateFiniteFitter}) = AbstractVector{<:Finite}
y = categorical(collect("aabbccaa"))
X = nothing
model = UnivariateFiniteFitter()
fitresult, cache, report = MMI.fit(model, 1, X, y)
@test cache === nothing
@test report === nothing
ytest = y[1:3]
yhat = predict(model, fitresult, fill(nothing, 3))
@test yhat == fill(DummyUnivariateFinite(), 3)
end
|
{"hexsha": "e42ebf5082557e7d5ec94fdd7bfff435a266bd1f", "size": 2254, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/model_api.jl", "max_stars_repo_name": "davnn/MLJModelInterface.jl", "max_stars_repo_head_hexsha": "b86722bbaa82b1550918eb796d3a674ac692d51d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "test/model_api.jl", "max_issues_repo_name": "davnn/MLJModelInterface.jl", "max_issues_repo_head_hexsha": "b86722bbaa82b1550918eb796d3a674ac692d51d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "test/model_api.jl", "max_forks_repo_name": "davnn/MLJModelInterface.jl", "max_forks_repo_head_hexsha": "b86722bbaa82b1550918eb796d3a674ac692d51d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0533333333, "max_line_length": 83, "alphanum_fraction": 0.6326530612, "num_tokens": 748}
|
function polys = readPolygonSet(filename)
%READPOLYGONSET Read a set of simple polygons stored in a file.
%
% POLY = readPolygonSet(FILENAME);
% Returns the polygon stored in the file FILENAME.
% Polygons are assumed to be stored in text files, without headers, with
% x and y coordinates packed in two separate lines:
% X11 X12 X13 ... X1N
% Y11 Y12 Y13 ... Y1N
% X21 X22 X23 ... X2N
% Y21 Y22 Y23 ... Y2N
%
% Each polygon may have a different number of vertices. The result is a
% cell array of polygons, each cell containing an N-by-2 array representing
% the vertex coordinates.
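%
%   Example
%     % (illustrative file name)
%     polys = readPolygonSet('rings.txt');
%     poly1 = polys{1}; % N-by-2 array of vertex coordinates
%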
%
% See also
% polygons2d
% ------
% Author: David Legland
% E-mail: david.legland@inrae.fr
% Created: 2004-04-11
% Copyright 2004-2022 INRA - TPV URPOI - BIA IMASTE
% the set of polygons (no pre-allocation, as we do not know how many
% polygons are stored)
polys = {};
% index of polygon
p = 0;
% open file for reading
fid = fopen(filename, 'rt');
% use an infinite loop, terminated in case of EOF
while true
% set of X, and Y coordinates
line1 = fgetl(fid);
line2 = fgetl(fid);
% break loop if end of file is reached
if line1 == -1
break;
end
% create a new polygon by concatenating vertex coordinates
p = p + 1;
polys{p} = [str2num(line1)' str2num(line2)']; %#ok<AGROW,ST2NM>
end
% close file
fclose(fid);
|
{"author": "mattools", "repo": "matGeom", "sha": "1fd2c937064be1ee1f4fd09fbfdf96145ebe5271", "save_path": "github-repos/MATLAB/mattools-matGeom", "path": "github-repos/MATLAB/mattools-matGeom/matGeom-1fd2c937064be1ee1f4fd09fbfdf96145ebe5271/matGeom/polygons2d/readPolygonSet.m"}
|
| pc = 0xc00e | a = 0x00 | x = 0x00 | y = 0x00 | sp = 0x01fb | p[NV-BDIZC] = 00110100 |
|
{"hexsha": "a22caf5e63f72dbc36c5ff506cb2d0ab818546ca", "size": 88, "ext": "r", "lang": "R", "max_stars_repo_path": "res/jsr_wo_rts.r", "max_stars_repo_name": "JSpuri/EmuParadise", "max_stars_repo_head_hexsha": "b8f6cf8823f8553f28dab5c6b44df20978ad6ba0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "res/jsr_wo_rts.r", "max_issues_repo_name": "JSpuri/EmuParadise", "max_issues_repo_head_hexsha": "b8f6cf8823f8553f28dab5c6b44df20978ad6ba0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "res/jsr_wo_rts.r", "max_forks_repo_name": "JSpuri/EmuParadise", "max_forks_repo_head_hexsha": "b8f6cf8823f8553f28dab5c6b44df20978ad6ba0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.0, "max_line_length": 87, "alphanum_fraction": 0.5340909091, "num_tokens": 52}
|
import numpy
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
def updateIndexName(df, dictionary):
allIndex = df.index.values
for i in range(len(allIndex)):
if allIndex[i] in dictionary:
allIndex[i] = dictionary[allIndex[i]]
df.index = allIndex
return df
#%%
stimName = 'LPS'
controlName = 'PBS-BSA'
countCutOf = 1
#rangeCutOff = 0.1 # at least one condition where it is within X of the extremes
consistencyCutOf = 0.2 # cutoff for the STD of a TF within a condition
plt.rcParams["figure.figsize"] = (5, 5)
dorotheaData = pd.read_csv('results/dorothea.tsv', sep='\t')
allTFs = pd.read_csv('data/tfList.tsv', sep='\t').values.flatten()
dorotheaData = dorotheaData.loc[allTFs,:]
#dropConditions = ['IL1RN', 'IL33']
dropConditions = []
ligandMapping = pd.read_csv('data/ligandMap.tsv', sep='\t')
ligand2id = dict(zip(ligandMapping['Name'], ligandMapping['Code']))
uniprot = pd.read_csv('data/uniprot-reviewed_yes+AND+organism__Homo+sapiens+(Human)+[9606]_.tab', sep='\t')
gene2uniprot = dict(zip(uniprot['Gene names (primary )'], uniprot['Entry']))
metaData = pd.read_csv('filtered/metaData.tsv', sep='\t')
metaData.index = metaData['uniqueId']
metaData = metaData.loc[dorotheaData.columns.values,:]
t = 1
dorotheaData = 1/(1 + numpy.exp(-t * dorotheaData))
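# logistic squashing: maps the unbounded TF activity scores into (0, 1)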
#plt.hist(dorotheaData.values.flatten())
#plt.figure()
#dorotheaData = customSigmodal(dorotheaData)
stim = numpy.where(metaData['stim'], '_' + stimName, '')
conditionId = metaData['ligand'].values + stim
allConditions, counts = numpy.unique(conditionId, return_counts=True)
allConditions = allConditions[counts >= countCutOf]
allConditions = numpy.setdiff1d(allConditions, dropConditions)
allTFs = dorotheaData.index.values
outputs = numpy.zeros((len(allTFs), len(allConditions)))
outputStd = numpy.zeros((len(allTFs), len(allConditions)))
outputCount = numpy.zeros(len(allConditions), dtype=int)
for i in range(len(allConditions)):
affectedSamples = allConditions[i].split('_')
affectedLigand = numpy.isin(metaData['ligand'].values, affectedSamples[0])
stimState = len(affectedSamples) == 2
affectedStim = metaData['stim'].values == stimState
affectedFilter = numpy.logical_and(affectedLigand, affectedStim)
selectedConditions = metaData.index.values[affectedFilter]
curData = dorotheaData.loc[:,selectedConditions].values
outputs[:, i] = numpy.mean(curData, axis=1)
outputStd[:, i] = numpy.std(curData, axis=1)
outputCount[i] = curData.shape[1]
print(outputs.shape)
print(allConditions[numpy.argsort(numpy.median(outputStd, axis=0))])
print(numpy.sort(numpy.median(outputStd, axis=0)))
signalConsistency = numpy.percentile(outputStd, 75, axis=1) < consistencyCutOf
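# keep a TF only if its 75th-percentile STD across conditions is below the cutoff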
#signalConsistency = numpy.median(outputStd, axis=1) < consistencyCutOf
inconsistentTFs = allTFs[signalConsistency==False]
print(inconsistentTFs)
conditionsPlusN = numpy.array(allConditions.copy(), dtype=object)
for i in range(len(conditionsPlusN)):
conditionsPlusN[i] = '{:s} (N={:d})'.format(allConditions[i], outputCount[i])
plt.rcParams["figure.figsize"] = (7,20)
df = pd.DataFrame(outputStd.T, columns = allTFs, index=conditionsPlusN)
order = numpy.argsort(numpy.percentile(df.values, 75, axis=0))
df = df.iloc[:,order]
ax = sns.boxplot(data=df, orient='h', showfliers=False)
ax = sns.stripplot(data=df, orient='h', color = 'black')
leftRight = plt.ylim()
plt.plot([consistencyCutOf, consistencyCutOf], [leftRight[0], leftRight[1]], color='black')
ax.set_title('TF consistency')
plt.xlabel('STD')
plt.savefig("figures/TFSTD.svg")
plt.figure()
order = numpy.argsort(numpy.percentile(df.values, 75, axis=1))
df = df.iloc[order,:]
ax = sns.boxplot(data=df.T, orient='h', showfliers=False)
ax = sns.stripplot(data=df.T, orient='h', color = 'black')
ax.set_title('Condition consistency')
plt.xlabel('STD')
plt.savefig("figures/ConditionSTD.svg")
df = pd.DataFrame(outputs, index = allTFs, columns = allConditions)
qualityCriteria = signalConsistency
df = df.loc[qualityCriteria,:].copy()
#df = logIt(df)
# signalRange = numpy.logical_and(numpy.max(df, axis=1)>(1-rangeCutOff), numpy.min(df, axis=1)<rangeCutOff)
# qualityCriteria = signalRange
# print(df.index[qualityCriteria==False])
# df = df.loc[qualityCriteria,:].copy()
#df = customSigmodal(df)
#h = sns.clustermap(df, cmap='RdBu_r', vmin=0, vmax=1)
#sns.set(font_scale=0.7)
folder = '../model/figures/Figure 6/'
h = sns.clustermap(df.T, cmap='RdBu_r', vmin=0, vmax=1, xticklabels=True, yticklabels=True, figsize=(15, 20), dendrogram_ratio=0.08, cbar_pos=(0.02, 0.02, 0.02, 0.08))
h.ax_heatmap.set_xticklabels(h.ax_heatmap.get_xmajorticklabels(), fontsize = 14)
h.ax_heatmap.set_yticklabels(h.ax_heatmap.get_ymajorticklabels(), fontsize = 14)
plt.savefig(folder + "B.svg")
h.data2d.to_csv(folder + 'B.tsv', sep='\t')
df = updateIndexName(df, gene2uniprot)
df = df.T
df.to_csv('results/ligandScreen-TFs.tsv', sep='\t')
|
{"hexsha": "072b0cae283f96ee23b0adc8f51d5c8ee9a84ac4", "size": 5009, "ext": "py", "lang": "Python", "max_stars_repo_path": "TF activities Ligand Screen/convertToASNFormat.py", "max_stars_repo_name": "Lauffenburger-Lab/Artificial-Signaling-Network", "max_stars_repo_head_hexsha": "707e79c7e2ad341d68a719443b9e17fe9e7bb7c1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2021-11-12T17:35:05.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-16T19:12:00.000Z", "max_issues_repo_path": "TF activities Ligand Screen/convertToASNFormat.py", "max_issues_repo_name": "Lauffenburger-Lab/Artificial-Signaling-Network", "max_issues_repo_head_hexsha": "707e79c7e2ad341d68a719443b9e17fe9e7bb7c1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TF activities Ligand Screen/convertToASNFormat.py", "max_forks_repo_name": "Lauffenburger-Lab/Artificial-Signaling-Network", "max_forks_repo_head_hexsha": "707e79c7e2ad341d68a719443b9e17fe9e7bb7c1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-17T14:53:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-17T14:53:41.000Z", "avg_line_length": 35.7785714286, "max_line_length": 167, "alphanum_fraction": 0.7312836894, "include": true, "reason": "import numpy", "num_tokens": 1421}
|
[STATEMENT]
lemma nat_explode'_digit: \<open>hd (nat_explode' n ) < 10\<close>
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. hd (nat_explode' n) < 10
[PROOF STEP]
proof(induct \<open>n\<close>)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. hd (nat_explode' 0) < 10
2. \<And>n. hd (nat_explode' n) < 10 \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
case 0
[PROOF STATE]
proof (state)
this:
goal (2 subgoals):
1. hd (nat_explode' 0) < 10
2. \<And>n. hd (nat_explode' n) < 10 \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
[PROOF STEP]
show ?case
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. hd (nat_explode' 0) < 10
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
hd (nat_explode' 0) < 10
goal (1 subgoal):
1. \<And>n. hd (nat_explode' n) < 10 \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>n. hd (nat_explode' n) < 10 \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
case (Suc n)
[PROOF STATE]
proof (state)
this:
hd (nat_explode' n) < 10
goal (1 subgoal):
1. \<And>n. hd (nat_explode' n) < 10 \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
hd (nat_explode' n) < 10
[PROOF STEP]
show ?case
[PROOF STATE]
proof (prove)
using this:
hd (nat_explode' n) < 10
goal (1 subgoal):
1. hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
proof (cases \<open>n < 9\<close>)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. \<lbrakk>hd (nat_explode' n) < 10; n < 9\<rbrakk> \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
2. \<lbrakk>hd (nat_explode' n) < 10; \<not> n < 9\<rbrakk> \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
case True
[PROOF STATE]
proof (state)
this:
n < 9
goal (2 subgoals):
1. \<lbrakk>hd (nat_explode' n) < 10; n < 9\<rbrakk> \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
2. \<lbrakk>hd (nat_explode' n) < 10; \<not> n < 9\<rbrakk> \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
n < 9
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
n < 9
goal (1 subgoal):
1. hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
hd (nat_explode' (Suc n)) < 10
goal (1 subgoal):
1. \<lbrakk>hd (nat_explode' n) < 10; \<not> n < 9\<rbrakk> \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<lbrakk>hd (nat_explode' n) < 10; \<not> n < 9\<rbrakk> \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
case False
[PROOF STATE]
proof (state)
this:
\<not> n < 9
goal (1 subgoal):
1. \<lbrakk>hd (nat_explode' n) < 10; \<not> n < 9\<rbrakk> \<Longrightarrow> hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
\<not> n < 9
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
\<not> n < 9
goal (1 subgoal):
1. hd (nat_explode' (Suc n)) < 10
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
hd (nat_explode' (Suc n)) < 10
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
hd (nat_explode' (Suc n)) < 10
goal:
No subgoals!
[PROOF STEP]
qed
|
{"llama_tokens": 1618, "file": "Solidity_ReadShow", "length": 21}
|
import numpy as np
from math import log10
from PyRadioLoc.Enums import LeeAreaType
from PyRadioLoc.Enums import AreaKind
from PyRadioLoc.Enums import CityKind
from PyRadioLoc.Enums import TerrainKind
__all__ = ['FreeSpaceModel',
           'FlatEarthModel',
           'OkumuraHataModel',
           'Cost231Model',
           'Cost231HataModel',
           'LeeModel',
           'EricssonModel',
           'Ecc33Model',
           'SuiModel']
class FreeSpaceModel(object):
    """Free Space path loss Model"""
    def __init__(self, freq):
        self.freq = freq
    def pathloss(self, dist):
        """Free space path loss (dist in km, freq in MHz)."""
        return 32.44 + 20*np.log10(dist) + 20*np.log10(self.freq)
class FlatEarthModel(object):
    """Flat Earth (plane earth) path loss Model"""
def __init__(self, freq):
self.freq = freq
self.txH = 50.0
self.rxH = 1.5
def pathloss(self,dist):
return 120 + (40*np.log10(dist))-(20*np.log10(self.txH))-(20*np.log10(self.rxH))
class LeeModel(object):
"""Lee Point-to-Point Model"""
def __init__(self, freq):
self.freq = freq
self.n0 = LeeAreaType.SubUrban.value[0][0]
self.p0 = LeeAreaType.SubUrban.value[0][1]
self.txH = 50.0
self.rxH = 1.5
def pathloss(self,dist):
nf = 3 if self.freq > 850 else 2
X2 = 2 if self.rxH > 3 else 1
L1 = -20*np.log10(self.txH/30)
L2 = -10*X2*np.log10(self.rxH/3)
Lo = 50.3 + self.p0 - 10*self.n0*np.log10(1.61)-10*nf*np.log10(900)
L = Lo + 10*self.n0*np.log10(dist)+10*nf*np.log10(self.freq)+L1+L2
return L
class EricssonModel(object):
"""Ericcson Model"""
def __init__(self, freq, checkFreq=True):
self.freq = freq
self.cityKind = CityKind.Medium
self.areaKind = AreaKind.Urban
self.checkFreq = checkFreq
self.txH = 50.0
self.rxH = 1.5
def pathloss(self,dist):
if (self.checkFreq):
            if (self.freq<=500 or self.freq>=2000):
                raise ValueError('The frequency range for the Ericsson Model is 500 MHz - 2000 MHz')
f, d, hm, hb = self.freq, dist, self.rxH, self.txH
g = 44.49*np.log10(f)-4.78*(np.log10(f)**2)
a2= 12
a3= 0.1
if (self.cityKind== CityKind.Large):
a0,a1 = 36.2,30.2
elif (self.cityKind== CityKind.Medium):
a0,a1 = 43.2,68.9
else:
a0,a1=45.9,100.6
PL=a0+a1*np.log10(d)+a2*np.log10(hb)+a3*(np.log10(hb))*(np.log10(d))-3.2*np.log10((11.75*hm)**2)+g
return PL
class Cost231Model(object):
    """COST 231 Walfisch-Ikegami Model"""
def __init__(self, freq, checkFreq=True):
self.freq = freq
self.txH = 50.0
self.rxH = 1.5
self.ws = 15.0
self.bs =0.5
self.hr =3.0
self.areaKind = AreaKind.Urban
self.cityKind = CityKind.Medium
self.checkFreq = checkFreq
def pathloss(self,dist):
if (self.checkFreq):
            if (self.freq<=150 or self.freq>=2000):
                raise ValueError('The frequency range for the COST 231 Walfisch-Ikegami Model is 150 MHz - 2000 MHz')
f, d, hm, hb,hr,ws,bs = self.freq, dist, self.rxH, self.txH,self.hr,self.ws,self.bs
        deltaH = hm/hb # ratio between the receiver and transmitter heights
        Lbsh= 18*np.log10(1+deltaH) # loss term due to the height difference
        Ka=54.0 # coefficient for proximity to buildings
        Kd=18.0 # coefficient for proximity to buildings (distance dependence)
        Kf=4.0 # coefficient for the environment (urban or not)
        # compute the coefficients
if (hr > hb):
Lbsh=0.0
if (hb<=hr and d>=0.5):
Ka = Ka - 0.8*deltaH
elif (hb<=hr and d<0.5):
Ka = Ka - 0.8*deltaH*(d/0.5)
if (hb < hr):
Kd=Kd-15*(hb-hr)/(hr-hm)
if (self.cityKind==CityKind.Small):
Kf = Kf +0.7*(f/925-1)
else:
Kf = Kf +1.5*(f/925-1)
        # compute the path loss terms
        Lo = 32.4+20*np.log10(d)+20*np.log10(f) # free space path loss
        Lrts = 8.2+10*np.log10(ws) + 10*np.log10(f) + 10*np.log10(deltaH) # rooftop-to-street diffraction loss
        Lmsd =Lbsh+ Ka+ Kd*np.log10(d)+Kf*np.log10(f)-9*np.log10(bs) # multi-screen diffraction loss
        # final path loss
        PL = Lo + Lrts + Lmsd
return PL
class Cost231HataModel(object):
"""COST 231-Cost-Hata Extension Model"""
def __init__(self, freq, checkFreq = True):
self.freq = freq
self.rxH =1.5
self.txH = 50.0
self.areaKind = AreaKind.Urban
self.checkFreq = checkFreq
def pathloss(self,dist):
if (self.checkFreq):
            if (self.freq<=150 or self.freq>=2000):
                raise ValueError('The frequency range for the COST 231 Hata Extension Model is 150 MHz - 2000 MHz')
f,hm,hb,d = self.freq,self.rxH,self.txH,dist
ar=(1.1*np.log10(f)-0.7)*hm-(1.56*np.log10(f)-0.8)
C = 3 if (self.areaKind==AreaKind.Urban) else 0
L= 46.3 +33.9*np.log10(f)-13.82*np.log10(hb)-ar+(44.9-6.55*np.log10(hb))*np.log10(d)+C
return L
class OkumuraHataModel(object):
"""Okumura-Hata Model"""
def __init__(self, freq, checkFreq=True):
self.freq = freq
self.rxH =1.5
self.txH = 50.0
self.areaKind = AreaKind.Urban
self.cityKind = CityKind.Large
self.checkFreq = checkFreq
def pathloss(self,dist):
if (self.checkFreq):
            if (self.freq<=500 or self.freq>=1500):
                raise ValueError('The frequency range for the Okumura-Hata Model is 500 MHz - 1500 MHz')
hm,hb,f = self.rxH,self.txH,self.freq
# a Calc
if (f<=200 and self.cityKind==CityKind.Large):
a = 8.29*(np.log10(1.54*hm))**2-1.1
elif (f>=400 and self.cityKind==CityKind.Large):
a = 3.2*(np.log10(11.75*hm)**2)-4.97
        else:
            a = (1.1*np.log10(f)-0.7)*hm - (1.56*np.log10(f)-0.8)
# Pathloss Calc
lossUrban = 69.55 +(26.16)*np.log10(f)-13.82*np.log10(hb) - a + (44.9-6.55*np.log10(hb))*np.log10(dist)
        if (self.areaKind==AreaKind.Rural):
            lossOpen = lossUrban - 4.78*((np.log10(f))**2)+18.33*np.log10(f)-40.94
            return lossOpen
        elif (self.areaKind==AreaKind.Suburban):
            lossSubUrban= lossUrban - 2*(np.log10(f/28.0))**2 - 5.4
            return lossSubUrban
else:
return lossUrban
class Ecc33Model(object):
def __init__(self, freq, checkFreq=True):
self.freq = freq
self.rxH =1.5
self.txH = 50.0
self.areaKind = AreaKind.Urban
self.cityKind = CityKind.Large
self.checkFreq = checkFreq
def pathloss(self,dist):
if (self.checkFreq):
            if (self.freq<=500 or self.freq>=1500):
                raise ValueError('The frequency range for the Ecc-33 Model is 500 MHz - 1500 MHz')
hm,hb,f,d = self.rxH,self.txH,self.freq,dist
PLfs = 92.4+20*np.log10(d)+20*np.log10(f/1000)
PLbm = 20.41+9.83*np.log10(d)+7.894*(np.log10(f/1000))+9.56*(np.log10(f/1000))**2
Gb = np.log10(hb/200)*(13.98+5.8*(np.log10(d))**2)
Gm =(42.57+13.7*np.log10(f/1000))*(np.log10(hm)-0.585)
PL= PLfs+PLbm-Gb-Gm
return PL
class SuiModel(object):
def __init__(self, freq, checkFreq=True):
self.freq = freq
self.rxH =1.5
self.txH = 50.0
self.terrainKind = TerrainKind.A
self.checkFreq = checkFreq
self.shadowFading = 8.2
def pathloss(self,dist):
if (self.checkFreq):
            if (self.freq<=1900 or self.freq>=11000):
                raise ValueError('The frequency range for the SUI Model is 1900 MHz - 11000 MHz')
txH, rxH, f,d = self.txH, self.rxH, self.freq, np.multiply(dist,1000)
coef_a = (4.6,0.0075,12.6,-10.8)
coef_b = (4.0,0.0065,17.1,-10.8)
coef_c = (3.6,0.005,20,-20)
s = self.shadowFading
        # select coefficients for the terrain kind
if (self.terrainKind == TerrainKind.A):
a, b, c, XhCF = coef_a
elif (self.terrainKind == TerrainKind.B):
a, b, c, XhCF = coef_b
else:
a, b, c, XhCF = coef_c
d0 = 100
A = 20 *np.log10((4 *np.pi * d0) / (300 / f))
y = (a - b * txH) + c / txH
Xf = 6 * np.log10(f / 2000)
Xh = XhCF * np.log10(rxH / 2)
dr = np.multiply(d,1/d0)
return (10 * y * np.log10(dr)) + Xf + Xh + s+ A
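# Minimal usage sketch (illustrative only: the 900 MHz carrier and the 1-10 km
# distances below are example values of mine, not part of the library):
if __name__ == '__main__':
    distances = np.linspace(1.0, 10.0, 10) # km
    for model_cls in (FreeSpaceModel, OkumuraHataModel, Cost231HataModel):
        model = model_cls(900)
        # print the first few path loss values (dB) for each model
        print(model_cls.__name__, np.round(model.pathloss(distances)[:3], 2))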
|
{"hexsha": "73b2b5151ff793e4236b54f8a825473edbc54c70", "size": 8259, "ext": "py", "lang": "Python", "max_stars_repo_path": "PyRadioLoc/Pathloss/Models.py", "max_stars_repo_name": "mbs8/CMoveis", "max_stars_repo_head_hexsha": "52bec29194345caf669049e63878e39818900058", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "PyRadioLoc/Pathloss/Models.py", "max_issues_repo_name": "mbs8/CMoveis", "max_issues_repo_head_hexsha": "52bec29194345caf669049e63878e39818900058", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PyRadioLoc/Pathloss/Models.py", "max_forks_repo_name": "mbs8/CMoveis", "max_forks_repo_head_hexsha": "52bec29194345caf669049e63878e39818900058", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-24T21:23:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-24T21:23:05.000Z", "avg_line_length": 36.8705357143, "max_line_length": 111, "alphanum_fraction": 0.5645961981, "include": true, "reason": "import numpy", "num_tokens": 2722}
|
import make_env_
# from gym.spaces
import prng
import numpy as np
import random
import pprint
import matplotlib.pyplot as plt
import pandas as pd
# plt.style.use('seaborn')
# creates a multiagent environment which has reset, render, step
env = make_env_.make_env('traffic',benchmark=True) # ! 2vs1 Swarm environment
# env = make_env_.make_env('simple_tag_guided_1v2')
# create interactive policies for each agent
# policies = [InteractivePolicy(env,i) for i in range(env.n)]
# print(env.observation_space)
# print(env.action_space)
# env.observation_space
# exit()
# print(policies)
# exit()
def sample_actions():
action = [env.action_space[i].sample() for i in range(env.n)]
action.insert(0, 0)
# print(action)
return action
def deterministic_actions():
action = fixed()
action.insert(0, 0)
# print(action)
return action
def stop_action():
action = np.array([1,0,0,0,0])
return action
def deterministic_actions2():
action = fixed2()
action.insert(0, 0)
# print(action)
return action
def fixed():
# act = list(np.random.uniform(-1,0.5,4))
act = [random.uniform(-0.1, 0.1), 0, 0, random.choice([1, 1, 1, 0, 0],)]
return act
def fixed2():
# act = list(np.random.uniform(-1,0.5,4))
act = [0, np.random.choice([1, 1, 1, 0, 0]), 0, random.uniform(-0.1, 0.1)]
return act
for i_episode in range(3): # number of simulations
observation = env.reset()
# print(len(observation[0]))
rewards=[]
for t in range(150): # number of steps
env.render()
# my_action = [deterministic_actions2()]
# my_action = [deterministic_actions() if i < env.n /
# 2 else deterministic_actions2()for i in range(env.n)]
my_action = []
for i,agent in enumerate(env.agents):
if agent.isDone:
my_action.append(stop_action())
else:
if i < env.n/2:
my_action.append(deterministic_actions())
else:
my_action.append(deterministic_actions2())
next_state, reward, done, info = env.step(my_action)
for agent in env.agents:
# if(agent.isCollided):
# print('-'*5)
# print('collided!')
agent.isCollided = False
# print('*'*30)
# print('next_state\n',next_state)
print('\nreward\n',reward)
rewards.append(reward)
# print('\ndone\n',done)
print('\ninfo\n',info)
# print('*'*30)
# print(len(observation))
# print(observation[0].shape)
# pprint.pprint(observation)
# pprint.pprint(rewards)
df = pd.DataFrame(rewards)
df.to_csv('save/rewards{}.csv'.format(i_episode))
# plt.plot(rewards)
# plt.show()
env.close()
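# Possible post-processing of the saved rewards (a sketch, under the
# assumption that each column of save/rewards{i}.csv is one agent's trace):
# dfs = [pd.read_csv('save/rewards{}.csv'.format(i), index_col=0) for i in range(3)]
# mean_rewards = pd.concat(dfs).groupby(level=0).mean() # average over episodes per step
# mean_rewards.plot(title='mean reward per step')
# plt.show()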
|
{"hexsha": "b6bbc42100301a545323963acba25dbebb584df7", "size": 2843, "ext": "py", "lang": "Python", "max_stars_repo_path": "deneme.py", "max_stars_repo_name": "kazimsanlav/RL_Traffic", "max_stars_repo_head_hexsha": "afac032af989fe8e08746b47dcb69732fab73e83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "deneme.py", "max_issues_repo_name": "kazimsanlav/RL_Traffic", "max_issues_repo_head_hexsha": "afac032af989fe8e08746b47dcb69732fab73e83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "deneme.py", "max_forks_repo_name": "kazimsanlav/RL_Traffic", "max_forks_repo_head_hexsha": "afac032af989fe8e08746b47dcb69732fab73e83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-08T18:41:06.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-08T18:41:06.000Z", "avg_line_length": 25.6126126126, "max_line_length": 78, "alphanum_fraction": 0.5986633837, "include": true, "reason": "import numpy", "num_tokens": 747}
|
# -*- coding: utf-8 -*-
import sys, os
# path = os.path.dirname(os.path.dirname(__file__))
path = os.path.dirname(os.path.dirname(sys.argv[0]))
sys.path.insert(0, os.path.join(path,'System'))
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# from System.Network_3ph_pf import Network_3ph
from Network_3ph_pf import Network_3ph
import copy
import time
feeder = '13BusOxEmf'
ntwxs = os.path.join(path,'Data','Networks')
dir0 = os.path.join(ntwxs,feeder+'_dss')
sn0 = os.path.join(dir0,feeder)
net_ieee13 = Network_3ph()
line_df0 = net_ieee13.line_df
trn_df = net_ieee13.transformer_df
bus_columns = ['name','number','v_base','load_type','connect','Pa','Pb','Pc','Qa','Qb','Qc']
bus_index = range(net_ieee13.N_buses)
# #bus_df = pd.DataFrame(index = bus_index,columns = bus_columns)
# #bus_df.iloc[0]= {'name':'650','number':0, 'load_type':'S', 'connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
# #bus_df.iloc[1]= {'name':'632','number':1, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
# #bus_df.iloc[2]= {'name':'645','number':2, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb':150,'Pc': 0,'Qa': 0,'Qb': 50,'Qc': 0}
# #bus_df.iloc[3]= {'name':'646','number':3, 'load_type':'PQ','connect':'D','Pa': 0,'Pb':150,'Pc': 0,'Qa': 0,'Qb': 50,'Qc': 0}
# #bus_df.iloc[4]= {'name':'633','number':4, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
# #bus_df.iloc[5]= {'name':'634','number':5, 'load_type':'PQ','connect':'Y','Pa':200,'Pb':200,'Pc':200,'Qa': 50,'Qb': 50,'Qc': 50}
# #bus_df.iloc[6]= {'name':'671','number':6, 'load_type':'PQ','connect':'D','Pa':150,'Pb':150,'Pc':150,'Qa': 50,'Qb': 50,'Qc': 50}
# #bus_df.iloc[7]= {'name':'680','number':7, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
# #bus_df.iloc[8]= {'name':'684','number':8, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
# #bus_df.iloc[9]= {'name':'611','number':9, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc':150,'Qa': 0,'Qb': 0,'Qc': 50}
# #bus_df.iloc[10]={'name':'652','number':10, 'load_type':'PQ','connect':'Y','Pa':150,'Pb': 0,'Pc': 0,'Qa': 50,'Qb': 0,'Qc': 0}
# #bus_df.iloc[11]={'name':'692','number':11, 'load_type':'PQ','connect':'D','Pa': 0,'Pb': 0,'Pc':150,'Qa': 0,'Qb': 0,'Qc': 50}
# #bus_df.iloc[12]={'name':'675','number':12, 'load_type':'PQ','connect':'Y','Pa':150,'Pb':150,'Pc':150,'Qa': 50,'Qb': 50,'Qc': 50}
bus_df = pd.DataFrame(index = bus_index,columns = bus_columns)
bus_df.iloc[0]= {'name':'650','number':0,'v_base':net_ieee13.Vslack_ph, 'load_type':'S', 'connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
bus_df.iloc[1]= {'name':'632','number':1,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
bus_df.iloc[2]= {'name':'645','number':2,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb':150,'Pc': 0,'Qa': 0,'Qb': 50,'Qc': 0}
bus_df.iloc[3]= {'name':'646','number':3,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'D','Pa': 0,'Pb':150,'Pc': 0,'Qa': 0,'Qb': 50,'Qc': 0}
bus_df.iloc[4]= {'name':'633','number':4,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
bus_df.iloc[5]= {'name':'634','number':5,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'Y','Pa':200,'Pb':200,'Pc':200,'Qa': 50,'Qb': 50,'Qc': 50}
bus_df.iloc[6]= {'name':'671','number':6,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'D','Pa':150,'Pb':150,'Pc':150,'Qa': 50,'Qb': 50,'Qc': 50}
bus_df.iloc[7]= {'name':'680','number':7,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
bus_df.iloc[8]= {'name':'684','number':8,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
bus_df.iloc[9]= {'name':'611','number':9,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'Y','Pa': 0,'Pb': 0,'Pc':0,'Qa': 0,'Qb': 0,'Qc': 0}
bus_df.iloc[10]={'name':'652','number':10,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'Y','Pa':0,'Pb': 0,'Pc': 0,'Qa': 0,'Qb': 0,'Qc': 0}
bus_df.iloc[11]={'name':'692','number':11,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'D','Pa': 0,'Pb': 0,'Pc':0,'Qa': 0,'Qb': 0,'Qc': 0}
bus_df.iloc[12]={'name':'675','number':12,'v_base':net_ieee13.Vslack_ph, 'load_type':'PQ','connect':'Y','Pa':150,'Pb':150,'Pc':150,'Qa': 50,'Qb': 50,'Qc': 50}
net_ieee13.bus_df=bus_df
# net_ieee13.zbus_pf()
# RES = net_ieee13.res_bus_df
bus_df2 = pd.read_csv(sn0+"_bus_df.csv",index_col=0)
bus_df2.index=np.arange(len(bus_df2))
bus_df2.loc[:,'load_type'] = 'PQ'
bus_df2.loc[0,'load_type'] = 'S'
bus_df2['name'] = bus_df2['name'].astype("str") # <------ required
# for i in range(bus_df2.shape[0]): # MWE showing that we can't easily convert to an int, for some annoying reason (CF lines_df2)
# bus_df2.iloc[i,1] = 0
# print(type(0))
# print(type(bus_df2.iloc[i,1]))
line_df2 = pd.read_csv(sn0+"_line_df.csv",index_col=0)
line_df2.index=np.arange(len(line_df2))
line_df2.loc[:,'busA'] = list(map(str,line_df2.loc[:,'busA'])) # cannot save numbers as 'str' type when saving as csv
line_df2.loc[:,'busB'] = list(map(str,line_df2.loc[:,'busB'])) # cannot save numbers as 'str' type when saving as csv
# fix up all of the complex numbers to be complex:
for i in range(2,line_df2.shape[1]):
    for j in range(line_df2.shape[0]):
        line_df2.iloc[j,i] = np.complex_(complex(line_df2.iloc[j,i])) # convert to (exactly) the same format; complex() parses the csv strings
line_df = net_ieee13.line_df
trn_df2 = pd.DataFrame(columns=trn_df.columns)
# shift = []
# for bus in buses2:
# shift = shift + np.nonzero(bus==buses)[0].tolist()
net_ieee13.bus_df = bus_df2
net_ieee13.line_df = line_df2
net_ieee13.transformer_df = trn_df2
# net_ieee13.bus_df = bus_df # for testing
# net_ieee13.line_df = line_df
# net_ieee13.transformer_df = trn_df
net_ieee13.update_YandZ()
print(sum(sum(net_ieee13.Y))) # check that this is not zero
net_ieee13.zbus_pf()
RES = net_ieee13.res_bus_df
plt.subplot(131)
plt.plot(abs(RES['Va'])/(4160/np.sqrt(3)),'kx')
plt.ylim((0.85,1.15))
plt.subplot(132)
plt.plot(abs(RES['Vb'])/(4160/np.sqrt(3)),'rx')
plt.ylim((0.85,1.15))
plt.subplot(133)
plt.plot(abs(RES['Vc'])/(4160/np.sqrt(3)),'bx')
plt.ylim((0.85,1.15))
plt.show()
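# A numerical companion to the plots above (sketch; reuses the same phase
# voltage base 4160/sqrt(3) assumed by the plotting code):
v_base = 4160/np.sqrt(3)
for ph in ('Va','Vb','Vc'):
    v_pu = abs(RES[ph])/v_base
    print(ph, 'min = %.4f pu, max = %.4f pu' % (v_pu.min(), v_pu.max()))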
# # bus_name_ev_station = '634'
# # bus_index_ev_station = np.argwhere(net_ieee13.bus_df['name']==bus_name_ev_station)[0,0]
# # #Linear power flow
# # P_ev_station_lin = 500e3
# # P_ev_station_base = 450e3
# # #net_ieee13.bus_df.loc[bus_index_ev_station,'Pa'] = P_ev_station/3/1e3
# # #net_ieee13.bus_df.loc[bus_index_ev_station,'Pb'] = P_ev_station/3/1e3
# # #net_ieee13.bus_df.loc[bus_index_ev_station,'Pc'] = P_ev_station/3/1e3
# # net_ieee13.bus_df.loc[bus_index_ev_station,'Pa'] = P_ev_station_base/3/1e3
# # net_ieee13.bus_df.loc[bus_index_ev_station,'Pb'] = P_ev_station_lin/1e3 + P_ev_station_base/3/1e3
# # net_ieee13.bus_df.loc[bus_index_ev_station,'Pc'] = P_ev_station_base/3/1e3
# # net_ieee13.bus_df.loc[bus_index_ev_station,'Qa'] = 0
# # net_ieee13.bus_df.loc[bus_index_ev_station,'Qb'] = 0
# # net_ieee13.bus_df.loc[bus_index_ev_station,'Qc'] = 0
# # clock_start = time.time()
# # net_ieee13.zbus_pf()
# # clock_elapsed = time.time() - clock_start
# # print('Time for power flow = '+ str(clock_elapsed)+ 's')
# # #v_lin0 = net_ieee13.v_flat()
# # v_lin0 = net_ieee13.v_net_res
# # S_wye_lin0 = net_ieee13.S_PQloads_wye_res
# # S_del_lin0 = net_ieee13.S_PQloads_del_res
# # net_ieee13.linear_model_setup(v_lin0,S_wye_lin0,S_del_lin0) #note that phases need to be 120degrees out for good results
# # net_ieee13.linear_pf()
# # pf_model = copy.deepcopy(net_ieee13)
# # #Actual power flow solution
# # P_ev_station = P_ev_station_lin + 100e3
# # #net_ieee13.bus_df.loc[bus_index_ev_station,'Pa'] = P_ev_station/3/1e3
# # #net_ieee13.bus_df.loc[bus_index_ev_station,'Pb'] = P_ev_station/3/1e3
# # #net_ieee13.bus_df.loc[bus_index_ev_station,'Pc'] = P_ev_station/3/1e3
# # net_ieee13.bus_df.loc[bus_index_ev_station,'Pa'] = P_ev_station_base/3/1e3
# # net_ieee13.bus_df.loc[bus_index_ev_station,'Pb'] = P_ev_station/1e3 + P_ev_station_base/3/1e3
# # net_ieee13.bus_df.loc[bus_index_ev_station,'Pc'] = P_ev_station_base/3/1e3
# net_ieee13.zbus_pf()
# # clock_start_lpf = time.time()
# # # net_ieee13.linear_pf()
# # # clock_elapsed_lpf = time.time() - clock_start_lpf
# # # print('Time for linear power flow = '+ str(clock_elapsed)+ 's')
# # plt.figure(1)
# # plt.clf()
# # for bus_i in range(0,len(net_ieee13.bus_df)):
# # aph_index = (bus_i)*3
# # bph_index = (bus_i)*3+1
# # cph_index = (bus_i)*3+2
# # a_ph_hand = plt.scatter(bus_i,np.abs(net_ieee13.v_net_res[aph_index])/net_ieee13.Vslack_ph,color='C0',marker='x')
# # b_ph_hand = plt.scatter(bus_i,np.abs(net_ieee13.v_net_res[bph_index])/net_ieee13.Vslack_ph,color='C1',marker='x')
# # c_ph_hand = plt.scatter(bus_i,np.abs(net_ieee13.v_net_res[cph_index])/net_ieee13.Vslack_ph,color='C2',marker='x')
# # for bus_i in range(0,len(net_ieee13.bus_df)):
# # aph_index = (bus_i)*3
# # bph_index = (bus_i)*3+1
# # cph_index = (bus_i)*3+2
# # a_ph_lin_hand = plt.scatter(bus_i,net_ieee13.v_net_lin_abs_res[aph_index]/net_ieee13.Vslack_ph,s=35,facecolors='none',edgecolor='C0')
# # b_ph_lin_hand = plt.scatter(bus_i,net_ieee13.v_net_lin_abs_res[bph_index]/net_ieee13.Vslack_ph,s=35,facecolors='none',edgecolor='C1')
# # c_ph_lin_hand = plt.scatter(bus_i,net_ieee13.v_net_lin_abs_res[cph_index]/net_ieee13.Vslack_ph,s=35,facecolors='none',edgecolor='C2')
# # #plt.plot(net_ieee13.v_abs_min/net_ieee13.Vslack_ph,'--')
# # #plt.plot(net_ieee13.v_abs_max/net_ieee13.Vslack_ph,'--')
# # plt.xlabel('Bus Number')
# # plt.ylabel('V (pu)')
# # plt.legend([a_ph_hand,b_ph_hand,c_ph_hand,a_ph_lin_hand,b_ph_lin_hand,c_ph_lin_hand]\
# # ,["Nonlinear Phase A","Nonlinear Phase B","Nonlinear Phase C","Linear Phase A","Linear Phase B","Linear Phase C"])
# # #Check linear model for slack bus and voltage limits
# # #Linear slack bus power flow limit
# # ev_station_phase_index_a= 15-3
# # ev_station_phase_index_b= 16-3
# # ev_station_phase_index_c= 17-3
# # ev_station_phase_index=ev_station_phase_index_b
# # d_P_ev = (P_ev_station-P_ev_station_lin)
# # M_wye = pf_model.M_wye
# # M_del = pf_model.M_del
# # M_wye_ph = M_wye[:,ev_station_phase_index]
# # M_del_ph = M_del[:,ev_station_phase_index]
# # Ysn = pf_model.Ysn
# # PQ0_wye = np.concatenate((np.real(pf_model.S_PQloads_wye_res),np.imag(pf_model.S_PQloads_wye_res)))*1e3
# # PQ0_del = np.concatenate((np.real(pf_model.S_PQloads_del_res),np.imag(pf_model.S_PQloads_del_res)))*1e3
# # A_Pslack = np.real(np.matmul(pf_model.vs.T,np.matmul(np.conj(Ysn),np.conj(M_wye_ph))))
# # b_Pslack = np.real(np.matmul(pf_model.vs.T,np.matmul(np.conj(Ysn),np.matmul(np.conj(M_wye),PQ0_wye))))\
# # +np.real(np.matmul(pf_model.vs.T,np.matmul(np.conj(Ysn),np.matmul(np.conj(M_del),PQ0_del))))\
# # +np.real(pf_model.M0[0])
# # P_slack_calc = (A_Pslack*d_P_ev+b_Pslack)/1e3
# # P_slack_actual = np.real(np.sum(net_ieee13.S_net_res[0:3]))
# # print('P slack linear = ' + str(np.around(P_slack_calc)) + 'kW')
# # print('P slack actual = ' + str(np.around(P_slack_actual)) + 'kW')
# # print('P slack calc error = ' + str((np.abs(P_slack_calc-P_slack_actual)/P_slack_actual*100)) + '%')
# # #linear voltage limits
# # A_vlim = pf_model.K_wye[:,ev_station_phase_index]
# # b_vlim = pf_model.v_lin_abs_res #- pf_model.K_wye[:,ev_station_phase_index]*P_ev_station_lin
# # V_calc = A_vlim*d_P_ev + b_vlim
# # V_actual = np.abs(net_ieee13.v_res)
# # for i in range((net_ieee13.N_buses-1)*3):
# # print('Bus ' + str(int(i/3)+1) + ' Phase ' + str(i%3))
# # if V_actual[i] ==0:
# # print('not connected')
# # else:
# # print('V abs error = ' + str(V_calc[i]-V_actual[i]) + 'V')
# # print('V abs error = ' + str((V_calc[i]-V_actual[i])/V_actual[i]*100) + '%')
# # #check line current calculations
# # #for each bus, check current injected and line currents sum to zero on each phase
# # Ibus_inj = np.zeros([net_ieee13.N_buses,3],dtype=np.complex_)
# # Ibus_lines = np.zeros([net_ieee13.N_buses,3],dtype=np.complex_)
# # for bus_i in range(net_ieee13.N_buses):
# # bus_i_df = net_ieee13.res_bus_df.iloc[bus_i]
# # Ibus_inj[bus_i,0] = bus_i_df['Ia']
# # Ibus_inj[bus_i,1] = bus_i_df['Ib']
# # Ibus_inj[bus_i,2] = bus_i_df['Ic']
# # for line_ij in range(net_ieee13.N_lines):
# # line_ij_df = net_ieee13.res_lines_df.iloc[line_ij]
# # if line_ij_df['busA'] == bus_i_df['name']:
# # Ibus_lines[bus_i,0] += line_ij_df['Ia'] #current FROM bus i
# # Ibus_lines[bus_i,1] += line_ij_df['Ib']
# # Ibus_lines[bus_i,2] += line_ij_df['Ic']
# # elif line_ij_df['busB'] == bus_i_df['name']:
# # Ibus_lines[bus_i,0] -= line_ij_df['Ia'] #current INTO bus i
# # Ibus_lines[bus_i,1] -= line_ij_df['Ib']
# # Ibus_lines[bus_i,2] -= line_ij_df['Ic']
# # #plt.figure()
# # #plt.plot(np.abs(Ibus_lines-Ibus_inj))
# # #Check linear model for line currents
# # I_lines = np.zeros([net_ieee13.N_lines,3],dtype=np.complex_)
# # I_lines_calc = np.zeros([net_ieee13.N_lines,3],dtype=np.complex_)
# # Iabs_lines = np.zeros([net_ieee13.N_lines,3],dtype=np.complex_)
# # Iabs_lines_calc = np.zeros([net_ieee13.N_lines,3],dtype=np.complex_)
# # for line_ij in range(net_ieee13.N_lines):
# # line_ij_df = net_ieee13.res_lines_df.iloc[line_ij]
# # I_lines[line_ij,0] = line_ij_df['Ia']
# # I_lines[line_ij,1] = line_ij_df['Ib']
# # I_lines[line_ij,2] = line_ij_df['Ic']
# # Iabs_lines[line_ij,0] = np.abs(line_ij_df['Ia'])
# # Iabs_lines[line_ij,1] = np.abs(line_ij_df['Ib'])
# # Iabs_lines[line_ij,2] = np.abs(line_ij_df['Ic'])
# # I_lines_calc[line_ij,0] = net_ieee13.J_dPQwye_list[line_ij][0,ev_station_phase_index]*d_P_ev + net_ieee13.J_I0_list[line_ij][0]
# # I_lines_calc[line_ij,1] = net_ieee13.J_dPQwye_list[line_ij][1,ev_station_phase_index]*d_P_ev + net_ieee13.J_I0_list[line_ij][1]
# # I_lines_calc[line_ij,2] = net_ieee13.J_dPQwye_list[line_ij][2,ev_station_phase_index]*d_P_ev + net_ieee13.J_I0_list[line_ij][2]
# # Iabs_lines_calc[line_ij,0] = net_ieee13.Jabs_dPQwye_list[line_ij][0,ev_station_phase_index]*d_P_ev + net_ieee13.Jabs_I0_list[line_ij][0]
# # Iabs_lines_calc[line_ij,1] = net_ieee13.Jabs_dPQwye_list[line_ij][1,ev_station_phase_index]*d_P_ev + net_ieee13.Jabs_I0_list[line_ij][1]
# # Iabs_lines_calc[line_ij,2] = net_ieee13.Jabs_dPQwye_list[line_ij][2,ev_station_phase_index]*d_P_ev + net_ieee13.Jabs_I0_list[line_ij][2]
# # I_lines_calc2 = np.zeros([net_ieee13.N_lines,3],dtype=np.complex_)
# # I_lines_calc3 = np.zeros([net_ieee13.N_lines,3],dtype=np.complex_)
# # Iabs_lines_calc2 = np.zeros([net_ieee13.N_lines,3])
# # v_lin_calc = np.append(net_ieee13.vs,M_wye_ph*d_P_ev+np.matmul(M_wye,PQ0_wye)+np.matmul(M_del,PQ0_del) + pf_model.M0)#np.append(net_ieee13.vs,net_ieee13.v_lin_res) #
# # for line_ij in range(net_ieee13.N_lines):
# # line_ij_df = net_ieee13.res_lines_df.iloc[line_ij]
# # Yseries = net_ieee13.list_Yseries[line_ij]
# # Yshunt = net_ieee13.list_Yshunt[line_ij]
# # bus_i = net_ieee13.bus_df[net_ieee13.bus_df['name']==line_ij_df['busA']]['number'].values[0]
# # bus_j = net_ieee13.bus_df[net_ieee13.bus_df['name']==line_ij_df['busB']]['number'].values[0]
# # v_i_abc_lin = v_lin_calc[3*bus_i:3*(bus_i+1)]
# # v_i_abc = net_ieee13.v_net_res[3*(bus_i):3*(bus_i+1)]
# # v_j_abc_lin = v_lin_calc[3*bus_j:3*(bus_j+1)]
# # v_j_abc = net_ieee13.v_net_res[3*(bus_j):3*(bus_j+1)]
# # I_lines_calc2[line_ij,0:3] = np.matmul(Yshunt+Yseries,v_i_abc_lin)-np.matmul(Yseries,v_j_abc_lin)
# # I_lines_calc3[line_ij,0:3] = np.matmul(Yshunt+Yseries,v_i_abc)-np.matmul(Yseries,v_j_abc)
# # #plt.figure()
# # #plt.plot(np.abs(v_lin_calc))
# # #plt.plot(np.abs(net_ieee13.v_net_res),'--')
# # plt.figure(2)
# # for line_ij in range(net_ieee13.N_lines):
# # a_ph_hand = plt.scatter(line_ij,np.abs(I_lines_calc3[line_ij,0]),color='C0',marker='x')
# # b_ph_hand = plt.scatter(line_ij,np.abs(I_lines_calc3[line_ij,1]),color='C1',marker='x')
# # c_ph_hand = plt.scatter(line_ij,np.abs(I_lines_calc3[line_ij,2]),color='C2',marker='x')
# # a_ph_lin_hand = plt.scatter(line_ij,Iabs_lines_calc[line_ij,0],s=35,facecolors='none',edgecolor='C0')
# # b_ph_lin_hand = plt.scatter(line_ij,Iabs_lines_calc[line_ij,1],s=35,facecolors='none',edgecolor='C1')
# # c_ph_lin_hand = plt.scatter(line_ij,Iabs_lines_calc[line_ij,2],s=35,facecolors='none',edgecolor='C2')
# # plt.legend([a_ph_hand,b_ph_hand,c_ph_hand,a_ph_lin_hand,b_ph_lin_hand,c_ph_lin_hand]\
# # ,["Nonlinear Phase A","Nonlinear Phase B","Nonlinear Phase C","Linear Phase A","Linear Phase B","Linear Phase C"])
# # plt.xlabel('Line Number')
# # plt.ylabel('Current Magnitude (A)')
|
{"hexsha": "efaf928be9776f4f5c8e4a15239fc2aeadb14405", "size": 17268, "ext": "py", "lang": "Python", "max_stars_repo_path": "Test_Scripts/zbus_3ph_pf_test_MD.py", "max_stars_repo_name": "LF2L/OPEN", "max_stars_repo_head_hexsha": "4ed3f79be0b128ce623e48758d48eae30c4e7686", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 41, "max_stars_repo_stars_event_min_datetime": "2020-07-10T17:58:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T02:21:52.000Z", "max_issues_repo_path": "Test_Scripts/zbus_3ph_pf_test_MD.py", "max_issues_repo_name": "LF2L/OPEN", "max_issues_repo_head_hexsha": "4ed3f79be0b128ce623e48758d48eae30c4e7686", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-01-06T11:18:32.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-06T11:18:33.000Z", "max_forks_repo_path": "Test_Scripts/zbus_3ph_pf_test_MD.py", "max_forks_repo_name": "LF2L/OPEN", "max_forks_repo_head_hexsha": "4ed3f79be0b128ce623e48758d48eae30c4e7686", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 15, "max_forks_repo_forks_event_min_datetime": "2020-05-14T01:56:16.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-15T12:15:14.000Z", "avg_line_length": 54.819047619, "max_line_length": 169, "alphanum_fraction": 0.6643502432, "include": true, "reason": "import numpy", "num_tokens": 6451}
|
function i4vec_write ( output_filename, n, x )
%*****************************************************************************80
%
%% I4VEC_WRITE writes an I4VEC file.
%
% Discussion:
%
% An I4VEC is a vector of I4's.
%
% Licensing:
%
% This code is distributed under the GNU LGPL license.
%
% Modified:
%
% 12 July 2014
%
% Author:
%
% John Burkardt
%
% Parameters:
%
% Input, string OUTPUT_FILENAME, the output filename.
%
% Input, integer N, the number of points.
%
% Input, integer X(N), the vector.
%
%
% Open the file.
%
output_unit = fopen ( output_filename, 'wt' );
if ( output_unit < 0 )
fprintf ( 1, '\n' );
fprintf ( 1, 'I4VEC_WRITE - Error!\n' );
fprintf ( 1, ' Could not open the output file.\n' );
error ( 'I4VEC_WRITE - Error!' );
end
for j = 1 : n
fprintf ( output_unit, '%d\n', x(j) );
end
%
% Close the file.
%
fclose ( output_unit );
return
end
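%
%  Example usage (illustrative values of my own, not part of the original
%  file):
%
%    x = [ 1; 2; 3 ];
%    i4vec_write ( 'x.txt', 3, x );
%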
|
{"author": "johannesgerer", "repo": "jburkardt-m", "sha": "1726deb4a34dd08a49c26359d44ef47253f006c1", "save_path": "github-repos/MATLAB/johannesgerer-jburkardt-m", "path": "github-repos/MATLAB/johannesgerer-jburkardt-m/jburkardt-m-1726deb4a34dd08a49c26359d44ef47253f006c1/cc_io/i4vec_write.m"}
|
[STATEMENT]
lemma singleton_times_right_hmset[simp]: "N * \<omega>^M = HMSet (image_mset ((+) M) (hmsetmset N))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. N * \<omega>^ M = HMSet (image_mset ((+) M) (hmsetmset N))
[PROOF STEP]
by (metis mult.commute singleton_times_left_hmset)
|
{"llama_tokens": 123, "file": "Nested_Multisets_Ordinals_Syntactic_Ordinal", "length": 1}
|
!################################################################
!################################################################
!################################################################
!################################################################
subroutine make_boundary_force(ilevel)
use amr_commons
use poisson_commons
implicit none
integer::ilevel
! -------------------------------------------------------------------
  ! This routine sets up boundary conditions for fine levels.
! -------------------------------------------------------------------
integer::ibound,boundary_dir,idim,inbor
integer::i,ncache,ivar,igrid,ngrid,ind
integer::iskip,iskip_ref,gdim,nx_loc,ix,iy,iz
integer,dimension(1:8)::ind_ref,alt
integer,dimension(1:nvector),save::ind_grid,ind_grid_ref
integer,dimension(1:nvector),save::ind_cell,ind_cell_ref
real(dp)::switch,dx,dx_loc,scale
real(dp),dimension(1:3)::gs,skip_loc
real(dp),dimension(1:twotondim,1:3)::xc
real(dp),dimension(1:nvector,1:ndim),save::xx
real(dp),dimension(1:nvector,1:ndim),save::ff
if(.not. simple_boundary)return
if(verbose)write(*,111)ilevel
! Mesh size at level ilevel
dx=0.5D0**ilevel
! Rescaling factors
nx_loc=(icoarse_max-icoarse_min+1)
skip_loc=(/0.0d0,0.0d0,0.0d0/)
if(ndim>0)skip_loc(1)=dble(icoarse_min)
if(ndim>1)skip_loc(2)=dble(jcoarse_min)
if(ndim>2)skip_loc(3)=dble(kcoarse_min)
scale=boxlen/dble(nx_loc)
dx_loc=dx*scale
! Set position of cell centers relative to grid center
do ind=1,twotondim
iz=(ind-1)/4
iy=(ind-1-4*iz)/2
ix=(ind-1-2*iy-4*iz)
if(ndim>0)xc(ind,1)=(dble(ix)-0.5D0)*dx
if(ndim>1)xc(ind,2)=(dble(iy)-0.5D0)*dx
if(ndim>2)xc(ind,3)=(dble(iz)-0.5D0)*dx
end do
! Loop over boundaries
do ibound=1,nboundary
! Compute direction of reference neighbors
boundary_dir=boundary_type(ibound)-10*(boundary_type(ibound)/10)
if(boundary_dir==1)inbor=2
if(boundary_dir==2)inbor=1
if(boundary_dir==3)inbor=4
if(boundary_dir==4)inbor=3
if(boundary_dir==5)inbor=6
if(boundary_dir==6)inbor=5
! Compute index of reference cells
! Reflexive boundary
if(boundary_type(ibound)== 1)ind_ref(1:8)=(/2,1,4,3,6,5,8,7/)
if(boundary_type(ibound)== 2)ind_ref(1:8)=(/2,1,4,3,6,5,8,7/)
if(boundary_type(ibound)== 3)ind_ref(1:8)=(/3,4,1,2,7,8,5,6/)
if(boundary_type(ibound)== 4)ind_ref(1:8)=(/3,4,1,2,7,8,5,6/)
if(boundary_type(ibound)== 5)ind_ref(1:8)=(/5,6,7,8,1,2,3,4/)
if(boundary_type(ibound)== 6)ind_ref(1:8)=(/5,6,7,8,1,2,3,4/)
! Free boundary
if(boundary_type(ibound)==11)ind_ref(1:8)=(/1,1,3,3,5,5,7,7/)
if(boundary_type(ibound)==12)ind_ref(1:8)=(/2,2,4,4,6,6,8,8/)
if(boundary_type(ibound)==13)ind_ref(1:8)=(/1,2,1,2,5,6,5,6/)
if(boundary_type(ibound)==14)ind_ref(1:8)=(/3,4,3,4,7,8,7,8/)
if(boundary_type(ibound)==15)ind_ref(1:8)=(/1,2,3,4,1,2,3,4/)
if(boundary_type(ibound)==16)ind_ref(1:8)=(/5,6,7,8,5,6,7,8/)
! Imposed boundary (used only for flag1)
if(boundary_type(ibound)==21)ind_ref(1:8)=(/1,1,3,3,5,5,7,7/)
if(boundary_type(ibound)==22)ind_ref(1:8)=(/2,2,4,4,6,6,8,8/)
if(boundary_type(ibound)==23)ind_ref(1:8)=(/1,2,1,2,5,6,5,6/)
if(boundary_type(ibound)==24)ind_ref(1:8)=(/3,4,3,4,7,8,7,8/)
if(boundary_type(ibound)==25)ind_ref(1:8)=(/1,2,3,4,1,2,3,4/)
if(boundary_type(ibound)==26)ind_ref(1:8)=(/5,6,7,8,5,6,7,8/)
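  ! (The permutations above act on the 2x2x2 child-cell ordering: the
  ! reflexive cases mirror cells across the boundary normal, e.g. 1<->2 in x,
  ! while the free and imposed cases replicate the reference cells adjacent
  ! to the boundary.)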
! Vector sign switch for reflexive boundary conditions
gs=(/1,1,1/)
if(boundary_type(ibound)==1.or.boundary_type(ibound)==2)gs(1)=-1
if(boundary_type(ibound)==3.or.boundary_type(ibound)==4)gs(2)=-1
if(boundary_type(ibound)==5.or.boundary_type(ibound)==6)gs(3)=-1
! Loop over grids by vector sweeps
ncache=boundary(ibound,ilevel)%ngrid
do igrid=1,ncache,nvector
ngrid=MIN(nvector,ncache-igrid+1)
do i=1,ngrid
ind_grid(i)=boundary(ibound,ilevel)%igrid(igrid+i-1)
end do
! Gather neighboring reference grid
do i=1,ngrid
ind_grid_ref(i)=son(nbor(ind_grid(i),inbor))
end do
! Loop over cells
do ind=1,twotondim
iskip=ncoarse+(ind-1)*ngridmax
do i=1,ngrid
ind_cell(i)=iskip+ind_grid(i)
end do
! Gather neighboring reference cell
iskip_ref=ncoarse+(ind_ref(ind)-1)*ngridmax
do i=1,ngrid
ind_cell_ref(i)=iskip_ref+ind_grid_ref(i)
end do
! Wall and free boundary conditions
if((boundary_type(ibound)/10).ne.2)then
! Gather reference hydro variables
do ivar=1,ndim
do i=1,ngrid
ff(i,ivar)=f(ind_cell_ref(i),ivar)
end do
end do
! Scatter to boundary region
do ivar=1,ndim
switch=gs(ivar)
do i=1,ngrid
f(ind_cell(i),ivar)=ff(i,ivar)*switch
end do
end do
! Imposed boundary conditions
else
! Compute cell center in code units
do idim=1,ndim
do i=1,ngrid
xx(i,idim)=xg(ind_grid(i),idim)+xc(ind,idim)
end do
end do
! Rescale position from code units to user units
do idim=1,ndim
do i=1,ngrid
xx(i,idim)=(xx(i,idim)-skip_loc(idim))*scale
end do
end do
call gravana(xx,ff,dx_loc,ngrid)
! Scatter variables
do ivar=1,ndim
do i=1,ngrid
f(ind_cell(i),ivar)=ff(i,ivar)
end do
end do
end if
end do
! End loop over cells
end do
! End loop over grids
end do
! End loop over boundaries
111 format(' Entering make_boundary_force for level ',I2)
end subroutine make_boundary_force
!##########################################################################
!##########################################################################
!##########################################################################
!##########################################################################
subroutine make_boundary_phi(ilevel)
use amr_commons
use poisson_commons
implicit none
integer::ilevel
! -------------------------------------------------------------------
  ! This routine sets up boundary conditions for fine levels.
! -------------------------------------------------------------------
integer::ibound,boundary_dir,idim,inbor
integer::i,ncache,ivar,igrid,ngrid,ind
integer::iskip,iskip_ref,gdim,nx_loc,ix,iy,iz
integer,dimension(1:8)::ind_ref,alt
integer,dimension(1:nvector),save::ind_grid,ind_cell
real(dp)::dx,dx_loc,scale,fourpi,boxlen2
real(dp),dimension(1:3)::skip_loc
real(dp),dimension(1:twotondim,1:3)::xc
real(dp),dimension(1:nvector),save::rr,pp
real(dp),dimension(1:nvector,1:ndim),save::xx
real(dp),dimension(1:nvector,1:ndim),save::ff
if(.not. simple_boundary)return
if(verbose)write(*,111)ilevel
! Mesh size at level ilevel
dx=0.5D0**ilevel
! Rescaling factors
nx_loc=(icoarse_max-icoarse_min+1)
skip_loc=(/0.0d0,0.0d0,0.0d0/)
if(ndim>0)skip_loc(1)=dble(icoarse_min)
if(ndim>1)skip_loc(2)=dble(jcoarse_min)
if(ndim>2)skip_loc(3)=dble(kcoarse_min)
scale=boxlen/dble(nx_loc)
dx_loc=dx*scale
fourpi=4.D0*ACOS(-1.0D0)
boxlen2=boxlen**2
! Set position of cell centers relative to grid center
do ind=1,twotondim
iz=(ind-1)/4
iy=(ind-1-4*iz)/2
ix=(ind-1-2*iy-4*iz)
if(ndim>0)xc(ind,1)=(dble(ix)-0.5D0)*dx
if(ndim>1)xc(ind,2)=(dble(iy)-0.5D0)*dx
if(ndim>2)xc(ind,3)=(dble(iz)-0.5D0)*dx
end do
! Loop over boundaries
do ibound=1,nboundary
! Loop over grids by vector sweeps
ncache=boundary(ibound,ilevel)%ngrid
do igrid=1,ncache,nvector
ngrid=MIN(nvector,ncache-igrid+1)
do i=1,ngrid
ind_grid(i)=boundary(ibound,ilevel)%igrid(igrid+i-1)
end do
! Loop over cells
do ind=1,twotondim
iskip=ncoarse+(ind-1)*ngridmax
do i=1,ngrid
ind_cell(i)=iskip+ind_grid(i)
end do
! Compute cell center in code units
do idim=1,ndim
do i=1,ngrid
xx(i,idim)=xg(ind_grid(i),idim)+xc(ind,idim)
end do
end do
! Rescale position from code units to user units
rr(1:ngrid)=0d0
do idim=1,ndim
do i=1,ngrid
xx(i,idim)=(xx(i,idim)-skip_loc(idim))*scale
rr(i)=rr(i)+(xx(i,idim)-multipole(idim+1)/multipole(1))**2
end do
end do
do i=1,ngrid
rr(i)=sqrt(rr(i))
end do
if(ngrid>0) call phi_ana(rr,pp,ngrid)
! Scatter variables
do i=1,ngrid
phi(ind_cell(i))=pp(i)/scale
end do
end do
! End loop over cells
end do
! End loop over grids
end do
! End loop over boundaries
111 format(' Entering make_boundary_phi for level ',I2)
end subroutine make_boundary_phi
!##########################################################################
!##########################################################################
!##########################################################################
!##########################################################################
subroutine make_boundary_mask(ilevel)
use amr_commons
use poisson_commons
implicit none
integer::ilevel
! -------------------------------------------------------------------
  ! This routine sets up boundary conditions for fine levels.
! -------------------------------------------------------------------
integer::ibound,boundary_dir,idim,inbor
integer::i,ncache,igrid,ngrid,ind
integer::iskip
integer,dimension(1:nvector),save::ind_grid,ind_cell
if(.not. simple_boundary)return
if(verbose)write(*,111)ilevel
! Loop over boundaries
do ibound=1,nboundary
! Loop over grids by vector sweeps
ncache=boundary(ibound,ilevel)%ngrid
do igrid=1,ncache,nvector
ngrid=MIN(nvector,ncache-igrid+1)
do i=1,ngrid
ind_grid(i)=boundary(ibound,ilevel)%igrid(igrid+i-1)
end do
! Loop over cells
do ind=1,twotondim
iskip=ncoarse+(ind-1)*ngridmax
do i=1,ngrid
ind_cell(i)=iskip+ind_grid(i)
end do
! Set mask to -1d0
do i=1,ngrid
f(ind_cell(i),3)=-1d0
end do
end do
! End loop over cells
end do
! End loop over grids
end do
! End loop over boundaries
111 format(' Entering make_boundary_mask for level ',I2)
end subroutine make_boundary_mask
!##########################################################################
!##########################################################################
!##########################################################################
!##########################################################################
|
{"hexsha": "df8306e82e9c0d6da8ba91196afb901272837a79", "size": 11480, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "src/amuse/community/ramses/src/poisson/boundary_potential.f90", "max_stars_repo_name": "rknop/amuse", "max_stars_repo_head_hexsha": "85d5bdcc29cfc87dc69d91c264101fafd6658aec", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-28T22:47:51.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-28T22:47:51.000Z", "max_issues_repo_path": "src/amuse/community/ramses/src/poisson/boundary_potential.f90", "max_issues_repo_name": "rknop/amuse", "max_issues_repo_head_hexsha": "85d5bdcc29cfc87dc69d91c264101fafd6658aec", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-01-10T12:02:27.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-04T10:50:45.000Z", "max_forks_repo_path": "src/amuse/community/ramses/src/poisson/boundary_potential.f90", "max_forks_repo_name": "rknop/amuse", "max_forks_repo_head_hexsha": "85d5bdcc29cfc87dc69d91c264101fafd6658aec", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-11-19T04:41:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-20T02:11:17.000Z", "avg_line_length": 33.4693877551, "max_line_length": 75, "alphanum_fraction": 0.5100174216, "num_tokens": 3404}
|
\documentclass[leqno,labelfig,psfigt]{svmono}
%\usepackage{showkeys}
%\openup8pt
%\DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it}
\makeatletter
\providecommand*{\input@path}{}
\edef\input@path{{./}{../}\input@path}% prepend
%\makeatother
\input{../Xmacros}
%\renewcommand{\mgnote}[1]{}
\usepackage{geometry}
\geometry{letterpaper} % ... or a4paper or a5paper or ...
%\usepackage{listings}
\usepackage{mcode}
\usepackage{textcomp}
%\usepackage{algorithmic,algorithm}
\usepackage{enumerate}
\usepackage{empheq}
\usepackage{multirow}
\usepackage{undertilde}
\usepackage{centernot}
\usepackage{amscd}
\usepackage{indentfirst}
\usepackage{amsmath, amsfonts, amssymb}
\usepackage{tikz}
\makeatletter
\newenvironment{breakablealgorithm}
  {% \begin{breakablealgorithm}
\begin{center}
\refstepcounter{algorithm}% New algorithm
\hrule height.8pt depth0pt \kern2pt% \@fs@pre for \@fs@ruled
\renewcommand{\caption}[2][\relax]{% Make a new \caption
{\raggedright\textbf{\ALG@name~\thealgorithm} ##2\par}%
\ifx\relax##1\relax % #1 is \relax
\addcontentsline{loa}{algorithm}{\protect\numberline{\thealgorithm}##2}%
\else % #1 is not \relax
\addcontentsline{loa}{algorithm}{\protect\numberline{\thealgorithm}##1}%
\fi
\kern2pt\hrule\kern2pt
}
}{% \end{breakablealgorithm}
\kern2pt\hrule\relax% \@fs@post for \@fs@ruled
\end{center}
}
\makeatother
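% Example use of the environment defined above (illustrative only; note that
% the algorithmic/algorithm packages it relies on are commented out above and
% would need to be loaded):
% \begin{breakablealgorithm}
%   \caption{An example algorithm}
%   \begin{algorithmic}[1]
%     \STATE $x \gets x - \eta \nabla f(x)$
%   \end{algorithmic}
% \end{breakablealgorithm}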
\usepackage{multirow}
\usepackage{mathtools}
%\newtheorem{properties}[theorem]{Properties}
%\graphicspath{{figures/}{6DL/figures/}}
\graphicspath{{../}{../figures/}{../6DL/figures/}}
% add author name on header
\usepackage{fancyhdr}
%\pagestyle{fancy}
%\fancyhf{}
\rhead{\hfill Jinchao Xu}
%\newcommand{\sh}{{\cal S}_h}
\usepackage[style=numeric, backend=biber]{biblatex}
\addbibresource{3FEM.bib,6DL/ref.bib}
\begin{document}
\title{Finite Element and Deep Neural Networks}
\author{Jinchao Xu\\Penn State}
\date{Spring 2020}
\maketitle
\begin{quote}
This set of notes was prepared by Jinchao Xu for the class MATH/CSE
556 at Penn State in Spring 2019. All rights reserved. Not to be
disseminated without the explicit permission of the author.
\end{quote}
\tableofcontents
\chapter{Finite Element Method}
\input{3FEM/FEspaces}
\input{3FEM/Nodal-Interpolation}
\chapter{Deep Neural Network Functions}
\input{FEM2DNN}
\input{6DL/WhyDeep}
\input{6DL/DefineDNN}
%\input{6DL/DNN_Qualitative}
%\input{6DL/FourierRepresentation}
%\input{6DL/DoubleFourier2}
\chapter{Qualitative Approximation Properties of DNN}\label{ch:approx}
Three categories of approximation theory:
\begin{enumerate}
\item
\emph{Linear approximation from a fixed subspace.} In Barron's paper
there is a section on lower bounds for approximation using linear
subspaces: if the basis is fixed, the rate $n^{-1/d}$ cannot be
improved. A DNN, by contrast, uses a basis adapted to the target
function.
\item
\emph{Nonlinear $n$-term approximation with a fixed dictionary.} The
adaptive FEM we studied earlier is in effect of this type: for a
given, fixed basis one selects the best $n$ terms to approximate the
function. Nonlinear approximation theory (DeVore) relaxes the
smoothness required of the function to attain the optimal rate
$n^{-1/d}$, but it does not improve the rate itself.
\item
\emph{Truly nonlinear approximation.} Here even the basis may be
adapted to $f$. The dimension-independent rate $n^{-1/2}$ also appears
to be optimal; what can be improved is the characterization of the
smoothness class (see the bound displayed after this list).
\end{enumerate}
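For orientation, a standard form of the bound behind the $n^{-1/2}$ rate
(stated here, up to constants, from Barron's classical result rather than
derived in these notes): if
\begin{equation*}
\|f\|_{\mathcal B} := \int_{\mathbb{R}^d} |\omega|\,|\hat f(\omega)|\,d\omega
< \infty,
\end{equation*}
then there is a two-layer network $f_n$ with $n$ sigmoidal units such that
\begin{equation*}
\|f - f_n\|_{L^2(\Omega)} \le \frac{C\,\|f\|_{\mathcal B}}{\sqrt{n}},
\end{equation*}
with $C$ independent of the dimension $d$.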
\input{6DL/PolyWeierstrass}
\input{6DL/DNN_Qualitative}
\chapter{Monte Carlo and Stratified Samplings}
%\input{6DL/SampleLemma}
\section{Sampling Techniques}
\input{6DL/EEn}
\input{6DL/MonteCarloBasic}
\input{6DL/MonteCarlo}
%\input{6DL/StratifiedL2}
\chapter{Analysis for General Activation Functions}
\section{Integral representation of functions}
\input{6DL/FourierRepresentation}
\input{6DL/Barron-L2}
\input{6DL/Barron-Hk}
\input{6DL/DoubleFourier2}
\input{6DL/PeriodicActivation}
\section{Barron approximation for sigmoidal functions}
\input{6DL/Barron-Approx}
\chapter{Analysis for ReLU-Related Activation Functions}
\section{ReLU fourier representation}
\input{6DL/ReLUFourierSimple}
\input{6DL/ReLUFourier}
\input{6DL/Stratified-beta1}
\input{6DL/ReLU1d}
\chapter{Maximum Norm Estimate using Dudley's Entropy Integral}
\input{6DL/Dudley}
%\input{3FEM/556/FEM-Exercises}
%\input{6DL/DNN-Approx}
%\input{6DL/WhyDeep}
%%\input{6DL/Approxi_Sobolev}
%\end{document}
%
%\chapter{Artificial Neural Network (ANN) and Deep Neural Networks
% (DNN)}
%\chapter{Approximation Properties of Neural Network Function
%Class}\label{ch:approx}
%
%\input{6DL/FEM2DNN}
%%\input{6DL/DNN}
%\chapter{Approximation Properties of Neural Network Function
%Class}\label{ch:approx}
%\input{6DL/DNN-FEM}
%\input{6DL/PolyWeierstrass}
\end{document}
|
{"hexsha": "eb0c1f403fba5663575c3e44825d85026985d71f", "size": 4723, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/FEM-DNN.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/FEM-DNN.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/FEM-DNN.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.9467455621, "max_line_length": 75, "alphanum_fraction": 0.7573576117, "num_tokens": 1522}
|
import tkinter as tk
import tkinter.filedialog as fd
import tkinter.messagebox as mb
import numpy as np
import pyknotid
import pyknotid.spacecurves as pkidsc
from pyknotid.spacecurves import Knot
import sympy
import csv
import os
# set initial values
gc_str = ""
fileopen = False
t = sympy.Symbol("t") # for use in displaying polynomial invariant
### GENERAL FUNCTIONS ###
# determine equation of line
def defineline(x1, y1, x2, y2):
xbound = [x1, x2]
ybound = [y1, y2]
if x1 == x2:
slope = None
else:
slope = (y2 - y1) / (x2 - x1)
return [xbound, ybound, slope]
# find the y-intercept of a given line
def findyintercept(x, y, m):
b = y - (m * x)
return b
# check if intersect between two lines falls within their range
def checkrange(xbound1, xbound2, ybound1, ybound2, intersect):
line1x, line2x, line1y, line2y = [False] * 4
# check xrange of first line
if intersect[0] > min(xbound1) and intersect[0] < max(xbound1):
line1x = True
# check x range of second line
if intersect[0] > min(xbound2) and intersect[0] < max(xbound2):
line2x = True
# check y range of first line
if intersect[1] > min(ybound1) and intersect[1] < max(ybound1):
line1y = True
# check y range of second line
if intersect[1] > min(ybound2) and intersect[1] < max(ybound2):
line2y = True
if line1x and line2x and line1y and line2y == True:
return True
else:
return False
# TODO
# check if two lines intersect
def checkintersect(xbound1, xbound2, ybound1, ybound2, slope1, slope2):
# check if line 1 is vertical
if slope1 == None:
        # in this case, the two lines intersect everywhere
if slope2 == None:
# not correct but sufficient for the purposes of this script
return None
# otherwise, only line 1 is vertical
else:
b2 = findyintercept(xbound2[0], ybound2[0], slope2)
xintersect = xbound1[0]
yintersect = slope2 * xintersect + b2
# check if intersect in range
if (
checkrange(xbound1, xbound2, ybound1, ybound2, [xintersect, yintersect])
== True
):
return [xintersect, yintersect]
else:
return None
# check if line 2 is vertical
    elif slope2 == None:
        # the previous conditional checked whether line 1 was vertical
b1 = findyintercept(xbound1[0], ybound1[0], slope1)
xintersect = xbound2[0]
yintersect = slope1 * xintersect + b1
# check if intersect in range
if (
checkrange(xbound1, xbound2, ybound1, ybound2, [xintersect, yintersect])
== True
):
return [xintersect, yintersect]
else:
return None
# if neither line is vertical
else:
b1 = findyintercept(xbound1[0], ybound1[0], slope1)
b2 = findyintercept(xbound2[0], ybound2[0], slope2)
xintersect = (b2 - b1) / (slope1 - slope2)
yintersect = slope1 * xintersect + b1
# check if intersect in range
if (
checkrange(xbound1, xbound2, ybound1, ybound2, [xintersect, yintersect])
== True
):
return [xintersect, yintersect]
else:
return None
# determine which lines to check for intersection
# this function ignores the end point of the lines which is good for our purposes
# I am not sure if this is computationally more efficient than just checking for
# intersections with every line
def potentialintersection(xbound, ybound, linearray):
# take the bounds of one line and the bounds of the other lines stored in an array
# define bounding box
left = min(xbound)
right = max(xbound)
bottom = min(ybound)
top = max(ybound)
# empty array to store lines with potential intersections
potintersections = []
# each element of linearray is a list which contains the xbounds, ybounds and slope
for line in linearray:
xmin = min(line[0])
xmax = max(line[0])
ymin = min(line[1])
ymax = max(line[1])
# check if the line is in the bounding box at all
if (xmax > left and xmax < right) or (xmin > left and xmin < right):
if (ymax > bottom and ymax < top) or (ymin > bottom and ymin < top):
potintersections.append(line)
return potintersections
# determine which point in an array is closest to the point of interest
def pythagdistance(x0, y0, points):
    # points should be an array of [x, y] coordinates
    distlist = []
    for p in points:
        distlist.append(np.sqrt((x0 - p[0]) ** 2 + (y0 - p[1]) ** 2))
    return points[distlist.index(min(distlist))]


def pythag(x0, y0, x, y):
    # Euclidean distance between (x0, y0) and (x, y)
    distance = np.sqrt((x0 - x) ** 2 + (y0 - y) ** 2)
    return distance
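# A minimal sanity check for the geometry helpers above (my own illustrative
# values, not taken from the app; call it manually if desired):
def _check_geometry_helpers():
    # the diagonals of the square with corners (0, 0) and (4, 4) cross at (2, 2)
    l1 = defineline(0, 0, 4, 4)
    l2 = defineline(0, 4, 4, 0)
    assert checkintersect(l1[0], l2[0], l1[1], l2[1], l1[2], l2[2]) == [2.0, 2.0]
    # of the two candidate points, [1, 1] is closer to the origin
    assert pythagdistance(0, 0, [[1, 1], [3, 3]]) == [1, 1]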
# define bounds of bridge
def definebridge(xintersect, yintersect, slope, x0, y0, radius, bridgeheight):
# slope represents the top line
# if top line is vertical
if slope == None:
x = 0
y = radius
# else if top line has an angle from vertical
else:
# find angle from slope
angle = np.arctan(slope)
x = radius * np.cos(angle)
y = radius * np.sin(angle)
# now we need to use pythagorean theorem to determine which end of the bridge to
# start drawing
edge1 = [xintersect - x, yintersect - y]
edge2 = [xintersect + x, yintersect + y]
closeedge = pythagdistance(x0, y0, [edge1, edge2])
if closeedge == edge1:
bridge = [[edge1[0], edge2[0]], [edge1[1], edge2[1]], bridgeheight]
else:
bridge = [[edge2[0], edge1[0]], [edge2[1], edge1[1]], bridgeheight]
return bridge
### DRAWING ###
# initialize lists to hold tags canvas elements
nodetags = [] # x, y
bridgetags = [] # xbounds, ybounds, (z?)
linetags = [] # xbounds, ybounds, (slope?)
closuretag = None # xbounds, ybounds
# graphical variables
#canvasbackground = "#d9d9d9"
canvasbackground = "#ffffff"
noderadius = 5
nodecolor = "#ce0000"
linethickness = 2
linecolor = "#5a79a5"
closurecolor = "#96439d"
activenode = "#000000"
bridgeheight = 1
# function to extract information from tags
def extracttaginfo(tag):
splittag = tag.split("_")
if splittag[0] == ("node") or splittag[0] == ("bridge"):
# return [x, y, z]
strtag = splittag[1:4]
numtags = 3
# for all other cases
else:
# return [x0, y0, x, y]
strtag = splittag[1:5]
numtags = 4
# create empty array to store floats
floattag = [0] * numtags
# convert to float
for i in range(numtags):
floattag[i] = float(strtag[i])
return floattag
# extract all line information
def extractlines(taglist):
# this will contain the mathematically defined lines
# [xbound, ybound, slope]
lines = []
for line in taglist:
linecoords = extracttaginfo(line)
lines.append(
defineline(linecoords[0], linecoords[2], linecoords[1], linecoords[3])
)
return lines
# extract 3D coordinate information from tags
# def extractcoords():
# # initialize a bridge counter
# bridgeloc = []
# numtags = len(nodetags)
# coords=[[0,0,0]]*(numtags+2)
# for c in range(numtags):
# coords[c+1]=extracttaginfo(nodetags[c])
# # check if the tag is a bridge
# if coords[c+1][2] == bridgeheight:
# #print("bridge detected!")
# bridgeloc.append(c+1)
# # iterate through array and insert bridge elements
# for b in range(len(bridgeloc)):
# bridgetag = extracttaginfo(bridgetags[b])
# loc = bridgeloc[b]+(4*(b))
# bridgestart= [bridgetag[0],bridgetag[2]]
# bridgeend=[bridgetag[1],bridgetag[3]]
# coords.insert(loc,[bridgestart[0],bridgestart[1],0])
# coords.insert(loc+1,[bridgestart[0],bridgestart[1],bridgeheight])
# coords.insert(loc+3,[bridgeend[0],bridgeend[1],bridgeheight])
# coords.insert(loc+4,[bridgeend[0],bridgeend[1],0])
# # add a bridge to first and last node so that closure goes below everything else
# if numtags > 1:
# # first copy first and last node
# coords[0] = extracttaginfo(nodetags[0])
# coords[-1] = extracttaginfo(nodetags[numtags-1])
# # then change the z to a positive higher value
# coords[0][2]=2.
# coords[-1][2]=2.
# return coords
# extract 3D coordinates from tags v2.0
def extractcoordinates():
# initialize a bridge counter
bridgeloc = []
numtags = len(nodetags)
coords = [[0, 0, 0]] * (numtags + 2)
for c in range(numtags):
coords[c + 1] = extracttaginfo(nodetags[c])
# check if tag is a bridge
if coords[c + 1][2] == bridgeheight:
bridgeloc.append(c + 1)
# iterate through array and insert bridge elements
for b in range(len(bridgeloc)):
loc = bridgeloc[b] + (4 * (b))
prevnode = coords[loc - 1] # x,y,z
nextnode = coords[loc + 1]
slope = defineline(prevnode[0], prevnode[1], nextnode[0], nextnode[1])[2]
tempbridge = definebridge(
coords[loc][0],
coords[loc][1],
slope,
prevnode[0],
prevnode[1],
noderadius,
bridgeheight,
)
x1 = tempbridge[0][0]
x2 = tempbridge[0][1]
y1 = tempbridge[1][0]
y2 = tempbridge[1][1]
bridge1 = [x1, y1, 0]
bridge2 = [x1, y1, 1]
bridge3 = [x2, y2, 1]
bridge4 = [x2, y2, 0]
coords.insert(loc, bridge1)
coords.insert(loc + 1, bridge2)
coords.insert(loc + 3, bridge3)
coords.insert(loc + 4, bridge4)
# add a bridge to first and last node so that closure goes below everything else
if numtags > 1:
# first copy first and last node
coords[0] = extracttaginfo(nodetags[0])
coords[-1] = extracttaginfo(nodetags[numtags - 1])
# then change the z to a positive higher value
coords[0][2] = 2.0
coords[-1][2] = 2.0
return coords
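# Sketch of how the coordinate list could be consumed (an assumption about the
# intended use; the actual call would live in the GUI handlers below):
#   k = Knot(np.array(extractcoordinates()))
#   k.plot()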
# check particular line for intersections against all other lines
# takes in the array of lines produced in extractlines()
def checklines(linearray, lineofinterest):
# this will contain intersection coordinates
# [xintersect, yintersect]
intersections = []
for line in linearray:
intersect = checkintersect(
line[0],
lineofinterest[0],
line[1],
lineofinterest[1],
line[2],
lineofinterest[2],
)
# self-intersection is avoided upstream: drawintersections only passes in
# lines drawn before the segment of interest; in the future we may want to
# implement a limit on which lines to check
# add an intersection if it exists
if intersect is not None:
intersections.append(intersect)
return intersections
# function to draw nodes
def drawnode(x, y, z, radius, color, type="node", activecolor=nodecolor):
tag = type + "_" + str(x) + "_" + str(y) + "_" + str(z)
canvas.create_oval(
x - radius,
y - radius,
x + radius,
y + radius,
fill=color,
activefill=activecolor,
width=0,
tags=tag,
)
nodetags.append(tag)
def drawline(x0, y0, x, y, thickness, color, type="line", bridgeinfo=None):
# we need to make closuretag global to modify it here
global closuretag
tag = type + "_" + str(x0) + "_" + str(x) + "_" + str(y0) + "_" + str(y)
canvas.create_line(x0, y0, x, y, fill=color, width=thickness, tags=tag)
if type == "line":
linetags.append(tag)
drawintersections(x0, y0, x, y)
elif type == "closure":
closuretag = tag
# drawintersections(x0, y0, x, y)
elif type == "bridgeline":
tag += (
"_node_"
+ str(bridgeinfo[0])
+ "_"
+ str(bridgeinfo[1])
+ "_"
+ str(bridgeinfo[2])
)
bridgetags.append(tag)
# main drawing function
def drawsegment(x, y):
# number of nodes
numnodes = len(nodetags)
# check if there is an initial node
if numnodes != 0:
# use the coordinates of the previous node
previousnode = extracttaginfo(nodetags[-1])
drawline(previousnode[0], previousnode[1], x, y, linethickness, linecolor)
drawnode(x, y, 0, noderadius, nodecolor)
# raise both endpoints of the new segment so nodes are drawn over the lines
canvas.tag_raise(nodetags[-1])  # the node just added
canvas.tag_raise(nodetags[numnodes - 1])  # the previous node
# otherwise just draw the initial node
else:
drawnode(x, y, 0, noderadius, nodecolor)
def drawclosure():
# delete previous closure line if it exists
if closuretag is not None:
canvas.delete(closuretag)
# check if there are at least three nodes so that a closure line is appropriate
if len(nodetags) > 2:
# find coordinates of first node
firstnode = extracttaginfo(nodetags[0])
lastnode = extracttaginfo(nodetags[-1])
drawline(
firstnode[0],
firstnode[1],
lastnode[0],
lastnode[1],
linethickness,
closurecolor,
type="closure",
)
def drawbridge(x, y, z, x0, y0, slope):
drawnode(
x, y, z, noderadius, canvasbackground, type="bridge", activecolor=activenode
)
bridge = definebridge(x, y, slope, x0, y0, noderadius, bridgeheight)
drawline(
bridge[0][0],
bridge[1][0],
bridge[0][1],
bridge[1][1],
linethickness,
linecolor,
type="bridgeline",
bridgeinfo=[x, y, z],
)
def drawintersections(x0, y0, x, y, type="line"):
# define the line we are checking for intersections
drawnline = defineline(x0, y0, x, y)
# extract all other line data
lines = extractlines(linetags[:-2])
# check for intersections
intersections = checklines(lines, drawnline)
# (the type parameter is currently unused; per-type coloring was sketched here)
intersectnumber = len(intersections)
orderedintersects = [0] * intersectnumber
# order the intersections by distance from the segment start, nearest first
for i in range(intersectnumber):
# find the closest remaining intersection
closestintersect = pythagdistance(x0, y0, intersections)
# append to ordered list
orderedintersects[i] = closestintersect
# delete from initial intersection list
del intersections[intersections.index(closestintersect)]
# draw a bridge for each intersection
for i in orderedintersects:
drawbridge(i[0], i[1], bridgeheight, x0, y0, drawnline[2])
def canvasinteract(event):
global knot
global coordinates
global coordarray
# capture mouse location
x, y = event.x, event.y
# this is the size of the bounding box for determining intersections
# this also is the region in which you cannot draw another node conveniently
hbsize = 10
tags = []
lines = []
# this should contain four elements: bridge node, bridge line and two intersecting
# lines
points = canvas.find_overlapping(
x - hbsize / 2, y - hbsize / 2, x + hbsize / 2, y + hbsize / 2
)
for p in points:
# this returns a tuple so we only want the first element of each tag
# we only use one tag per item on this canvas
tags.append(canvas.gettags(p)[0])
# check if anything was enclosed; more than four overlapping elements is
# ambiguous and handled separately below
if 0 < len(tags) <= 4:
for tag in tags:
splittag = extracttaginfo(tag)
if tag.split("_")[0] == ("line"):
# x0,x,y0,y
lines.append(
defineline(splittag[0], splittag[2], splittag[1], splittag[3])
)
if tag.split("_")[0] == ("bridge"):
# x,y,z
bridgenode = extracttaginfo(tag)
nodeb = tag
bridgenodeposition = nodetags.index(tag)
if tag.split("_")[0] == ("bridgeline"):
# this is inefficient and could be remedied by fixing the tag order in
# drawline
for btag in bridgetags:
if btag.startswith(tag):
# x0,y0,x,y
origbridgetag = extracttaginfo(btag)
bridge = [
[origbridgetag[0], origbridgetag[1]],
[origbridgetag[2], origbridgetag[3]],
]
# tag location in array of bridge tags
tagloc = bridgetags.index(btag)
# remove previous bridge from canvas
canvas.delete(tag)
# the bridge line is deleted but its tag still exists
# we still have a list of 4 elements
# want to calculate the bridge about the node using the two
# lines
# we calculate both potential bridges manually
# returns [[x0,x],[y0,y],z]
bridge1 = definebridge(
bridgenode[0],
bridgenode[1],
lines[0][2],
lines[0][0][0],
lines[0][1][0],
noderadius,
bridgeheight,
)[:-1]
bridge2 = definebridge(
bridgenode[0],
bridgenode[1],
lines[1][2],
lines[1][0][0],
lines[1][1][0],
noderadius,
bridgeheight,
)[:-1]
# these are the potential initial nodes which we will need to
# find start and end of the two intersection lines
node1 = (
"node_"
+ str(int(lines[0][0][0]))
+ "_"
+ str(int(lines[0][1][0]))
+ "_"
+ "0"
)
node2 = (
"node_"
+ str(int(lines[1][0][0]))
+ "_"
+ str(int(lines[1][1][0]))
+ "_"
+ "0"
)
# delete old bridge node
del nodetags[bridgenodeposition]
# find the array position of each of these
node1loc = nodetags.index(node1)
node2loc = nodetags.index(node2)
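# a line's bridge nodes are kept ordered by distance from the line's start;
# the loops below walk past existing bridge nodes until the moved bridge's
# distance fits, so the reinsertion lands at the correct index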
# iterate through bridges after line starts
bridgedist1 = pythag(
lines[0][0][0], lines[0][1][0], bridgenode[0], bridgenode[1]
)
n = 1
while nodetags[node1loc + n].split("_")[0] == "bridge":
otherbridgedist = pythag(
lines[0][0][0],
lines[0][1][0],
float(nodetags[node1loc + n].split("_")[1]),
float(nodetags[node1loc + n].split("_")[2]),
)
if otherbridgedist > bridgedist1:
break
n += 1
bridgedist2 = pythag(
lines[1][0][0], lines[1][1][0], bridgenode[0], bridgenode[1]
)
m = 1
while nodetags[node2loc + m].split("_")[0] == "bridge":
otherbridgedist = pythag(
lines[1][0][0],
lines[1][1][0],
float(nodetags[node2loc + m].split("_")[1]),
float(nodetags[node2loc + m].split("_")[2]),
)
if otherbridgedist > bridgedist2:
break
m += 1
# we want the new bridge, not a duplicate of the old one
nodepart = (
"_node_"
+ str(bridgenode[0])
+ "_"
+ str(bridgenode[1])
+ "_"
+ str(bridgenode[2])
)
if bridge1 == bridge:
newtag = (
"bridgeline_"
+ str(bridge2[0][0])
+ "_"
+ str(bridge2[0][1])
+ "_"
+ str(bridge2[1][0])
+ "_"
+ str(bridge2[1][1])
)
# draw bridge2
canvas.create_line(
bridge2[0][0],
bridge2[1][0],
bridge2[0][1],
bridge2[1][1],
fill=linecolor,
width=linethickness,
tags=newtag,
)
# insert the new bridge node at its ordered position along the other line
nodetags.insert(node2loc + m, nodeb)
elif bridge2 == bridge:
newtag = (
"bridgeline_"
+ str(bridge1[0][0])
+ "_"
+ str(bridge1[0][1])
+ "_"
+ str(bridge1[1][0])
+ "_"
+ str(bridge1[1][1])
)
# draw the replacement bridge, then insert its node at its ordered position
canvas.create_line(
bridge1[0][0],
bridge1[1][0],
bridge1[0][1],
bridge1[1][1],
fill=linecolor,
width=linethickness,
tags=newtag,
)
nodetags.insert(node1loc + n, nodeb)
else:
print("something went wrong")
# replace the tag for the old bridge
bridgetags[tagloc] = newtag + nodepart
elif len(tags) > 4:
print("Too many overlapping elements for program to distinguish")
# otherwise just draw a new line segment
else:
drawsegment(x, y)
drawclosure()
# print(ncoords)
# k = Knot(ncoords)
# k.plot(mode='matplotlib')
# print("bridges")
# print(bridgetags)
# convert in pyknotid knot object
coordinates = extractcoordinates()
coordarray = np.array(coordinates)
knot = Knot(coordarray)
### ANALYSIS
def findgausscode(event):
global gc
# global k
if len(coordinates) > 1:
gc = knot.gauss_code()
# simplify the gauss code
gc.simplify()
# display gauss code
gcode.config(text=str(gc))
def clearcanvas(event):
global nodetags, bridgetags, linetags, coordinates, coordarray, closuretag, knot
canvas.delete("all")
nodetags = [] # x, y
bridgetags = [] # xbounds, ybounds, (z?)
linetags = [] # xbounds, ybounds, (slope?)
coordinates = []
coordarray = []
closuretag = None # xbounds, ybounds
knot = None
def displayrealtime(event):
x, y = event.x, event.y
coords = str(x) + ", " + str(y)
coordsrealtime.config(text=coords)
def closewindow():
window.destroy()
# clear clipboard and copy the currently displayed gauss code
def copygauss(event):
root.clipboard_clear()
# gc is the global gauss code computed in findgausscode
root.clipboard_append(str(gc))
# open file to save data:
def openfile(event):
global numknots
global fileopen
# set number of entries to -1 initially so we don't count the header
numknots = -1
root.filename = fd.askopenfilename(
initialdir="/",
title="Select file",
filetypes=[("comma-separated values", ".csv")],
)
filename.config(text=root.filename.split("/")[-1])
fileopen = True
# update numknots NEEDS TESTING
with open(root.filename, newline="") as csvfile:
knotentries = csv.reader(csvfile, delimiter=" ", quotechar="|")
for row in knotentries:
numknots += 1
entries.config(text=str(numknots) + " entries")
# new file to save data to
def newfile(event):
global numknots
global fileopen
# ask for new filename and location
root.filename = fd.asksaveasfilename(
initialdir="/", title="New file", defaultextension=".csv"
)
# make file
with open(root.filename, "w", newline="") as f:
writer = csv.writer(f)
writer.writerow(["gauss", "crossingnum", "alexander"])
# update filename
filename.config(text=root.filename.split("/")[-1])
fileopen = True
# set number of knots to 0
numknots = 0
entries.config(text=str(numknots) + " entries")
def writedata(event):
global numknots
if fileopen:
# update number of knots
numknots = numknots + 1
entries.config(text=str(numknots) + " entries")
# check if knot directory exists and creates it if not
jsonpath = root.filename[:-4] + "_json"
if not os.path.exists(jsonpath):
os.makedirs(jsonpath)
# save knot coordinate data to json file
knot.to_json(jsonpath + "/" + str(numknots) + ".json")
# write knot analysis to csv
with open(root.filename, "a") as f:
writer = csv.writer(f)
writer.writerow(
[str(gc), len(gc), str(knot.alexander_polynomial(variable=t))]
)
else:
mb.showerror(
"Error", "No active file. Open a file or start a new file to save data."
)
def editdata(event):
global numknots
if fileopen:
# check if there is at least one knot
if numknots == 0:
mb.showerror("Error", "This file has no knots to edit.")
else:
# check if the knot listed in edit field exists
mknot = modknot.get()
if int(mknot) > numknots:
mb.showerror("Error", "You are trying to edit a knot that does not exist.")
else:
# delete any existing json data for the old knot, then save the new data
jsonpath = root.filename[:-4] + "_json/" + mknot + ".json"
if os.path.exists(jsonpath):
os.remove(jsonpath)
knot.to_json(jsonpath)
# update knot analysis in csv
# read current file
with open(root.filename, "r") as f:
knotreader = csv.reader(f)
rows = []
for row in knotreader:
rows.append(row)
# remove original file
os.remove(root.filename)
# recreate file to reload knots
with open(root.filename, "a") as f:
knotwriter = csv.writer(f)
nrow = 0
for row in rows:
nrow += 1
# check if this is the knot we want to modify and modify it
if nrow == int(mknot) + 1:
knotwriter.writerow(
[str(gc), len(gc), str(knot.alexander_polynomial(variable=t))]
)
else:
# otherwise just write normal knot
knotwriter.writerow(row)
# delete value in entry field
modknot.delete(0, 'end')
else:
mb.showerror(
"Error", "No active file. Open a file that you want to edit."
)
def addunknot(event):
global numknots
if fileopen:
# update number of knots
numknots = numknots + 1
entries.config(text=str(numknots) + " entries")
# write knot analysis to csv
with open(root.filename, "a") as f:
writer = csv.writer(f)
writer.writerow(["nil", 0, "nil"])
else:
mb.showerror(
"Error", "No active file. Open a file or start a new file to save data."
)
def addcomplex(event):
global numknots
if fileopen:
# update number of knots
numknots = numknots + 1
entries.config(text=str(numknots) + " entries")
# write knot analysis to csv
with open(root.filename, "a") as f:
writer = csv.writer(f)
writer.writerow(["complex", 99, "complex"])
else:
mb.showerror(
"Error", "No active file. Open a file or start a new file to save data."
)
def popuphelp(event):
mb.showinfo(
"Help",
"A wrapper which allows graphical input of knots to be analyzed with pyknotid.\nYou can open a previous csv file or start a new one and then draw your knot. The closure option doesn't change the analysis. It only affects the appearance. Upon saving (w), the knot coordinates are saved as a json file in a subfolder created by the program in the working directory. The analysis details are also appended to the csv. The canvas can be cleared (c) and the program can be closed easily (q).\nCreated by Xavier and Luc Capaldi and released with the MIT License (c) 2019. ",
)
def popupmoo(event):
mb.showwarning(
"Moo",
" -----------\n< whyknot >\n -----------\n \ ^__^\n \ (oo)\_______\n (__)\ )\/\ \n ||-----w |\n || ||",
)
# CAN USE activefill to show when switching
# can also use canvas.config(cursor="exchange")
### TKINTER ###
# create main window
root = tk.Tk()
root.title("WhyKnot")
root.geometry("1280x720")
root.wait_visibility(root)
root.wm_attributes('-alpha', 1.0)
# create main frames within the root frame
drawframe = tk.Frame(root, width=980, height=720)
interfaceframe = tk.Frame(root, width=300, height=720)
# set geometry manager class for main frames and organize within root frame
drawframe.grid(column=0, row=0, sticky="nsw")
interfaceframe.grid(column=1, row=0, sticky="nse")
# create canvas widget for draw frame
canvas = tk.Canvas(
drawframe, bg=canvasbackground, cursor="cross", width=980, height=720
)
# place widget in draw frame
canvas.grid(row=0, column=0)
# create widgets for interface frame
title = tk.Label(interfaceframe, text="WhyKnot", font=("Helvetica", 18))
gcodevar = tk.StringVar() # button var so text can be updated
gcodevar.set("--")
gcode = tk.Label(interfaceframe, text="--", font=("Helvetica", 15), wraplength=300)
file = tk.Button(interfaceframe, text="File")
new = tk.Button(interfaceframe, text="New")
save = tk.Button(interfaceframe, text="Save (w)")
unknot = tk.Button(interfaceframe, text="Unknot (u)")
complexknot = tk.Button(interfaceframe, text="Too Complex (t)")
help = tk.Button(interfaceframe, text="Help")
# results = tk.Label(interface_frame, text="Results Goes Here")
close = tk.Button(interfaceframe, text="Quit (q)")
clear = tk.Button(interfaceframe, text="Clear (c)")
coordsrealtime = tk.Label(interfaceframe, text="--")
filename = tk.Label(interfaceframe, text="no file", font=("Helvetica", 10))
entries = tk.Label(interfaceframe, text="0 entries", font=("Helvetica", 10))
modknot = tk.Entry(interfaceframe)
edit = tk.Button(interfaceframe, text="Edit")
# place widgets in interface frame
title.grid(row=1, columnspan=2)
gcode.grid(row=7, columnspan=2)
# closures.grid(row=5, columnspan=2)
file.grid(row=2, column=0)
new.grid(row=2, column=1)
save.grid(row=3, column=0)
unknot.grid(row=3, column=1)
complexknot.grid(row=4, columnspan=2)
help.grid(row=11, columnspan=2)
filename.grid(row=5, columnspan=2)
entries.grid(row=6, columnspan=2)
clear.grid(row=9, column=0)
close.grid(row=9, column=1)
modknot.grid(row=10, column=0)
edit.grid(row=10, column=1)
coordsrealtime.grid(row=13, columnspan=2)
# event handlers
canvas.bind("<Button-1>", canvasinteract, add="+")
clear.bind("<Button-1>", clearcanvas)
root.bind("c", findgausscode)
root.bind("C", clearcanvas)
close.bind("<Button-1>", lambda e: root.destroy())
root.bind("q", lambda e: root.destroy())
root.bind("a", lambda e: root.wm_attributes('-alpha', 0.5))
root.bind("s", lambda e: root.wm_attributes('-alpha', 1.0))
# closures.bind("<Button-1>", include_closures)
# closures.bind("<Button-1>", find_gc)
canvas.bind("<Motion>", displayrealtime)
root.bind("y", copygauss)
save.bind("<Button-1>", writedata)
unknot.bind("<Button-1>", addunknot)
complexknot.bind("<Button-1>", addcomplex)
help.bind("<Button-1>", popuphelp)
root.bind("w", writedata)
edit.bind("<Button-1>", editdata)
root.bind("u", addunknot)
root.bind("t", addcomplex)
file.bind("<Button-1>", openfile)
new.bind("<Button-1>", newfile)
root.bind("m", popupmoo)
root.bind("p", lambda e: knot.plot(mode="matplotlib"))
# begin program main loop
root.mainloop()
|
{"hexsha": "1664d20f3cee1e0ef0ad89348963716c33e587bd", "size": 35459, "ext": "py", "lang": "Python", "max_stars_repo_path": "whyknot-big-boys.py", "max_stars_repo_name": "luccapaldi/whyknot", "max_stars_repo_head_hexsha": "bb6a1f768b0cd7d39cc966c2a512240b9c8ca9f3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-31T13:16:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T13:16:11.000Z", "max_issues_repo_path": "whyknot-big-boys.py", "max_issues_repo_name": "luccapaldi/whyknot", "max_issues_repo_head_hexsha": "bb6a1f768b0cd7d39cc966c2a512240b9c8ca9f3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-06-01T23:51:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-01T23:51:43.000Z", "max_forks_repo_path": "whyknot-big-boys.py", "max_forks_repo_name": "luccapaldi/whyknot", "max_forks_repo_head_hexsha": "bb6a1f768b0cd7d39cc966c2a512240b9c8ca9f3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-05-12T19:12:54.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-20T01:01:04.000Z", "avg_line_length": 36.4804526749, "max_line_length": 576, "alphanum_fraction": 0.5330945599, "include": true, "reason": "import numpy,import sympy", "num_tokens": 8658}
|
"""
TESTS::
sage: R.<x,y> = QQbar[]
sage: from sage.rings.polynomial.multi_polynomial_ring_generic import MPolynomialRing_generic
doctest:...: DeprecationWarning: the module sage.rings.polynomial.multi_polynomial_ring_generic is deprecated, use sage.rings.polynomial.multi_polynomial_ring_base instead.
See https://trac.sagemath.org/25563 for details.
sage: isinstance(R, MPolynomialRing_generic)
True
"""
from sage.misc.superseded import deprecation
deprecation(25563, "the module sage.rings.polynomial.multi_polynomial_ring_generic is deprecated, "
"use sage.rings.polynomial.multi_polynomial_ring_base instead.")
from .multi_polynomial_ring_base import *
MPolynomialRing_generic = MPolynomialRing_base
|
{"hexsha": "da84dba6c9553f63c05bb3f3b318c24bee4d3dfa", "size": 751, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/sage/rings/polynomial/multi_polynomial_ring_generic.py", "max_stars_repo_name": "bopopescu/sage", "max_stars_repo_head_hexsha": "2d495be78e0bdc7a0a635454290b27bb4f5f70f0", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-07-15T13:48:24.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-08T12:31:43.000Z", "max_issues_repo_path": "src/sage/rings/polynomial/multi_polynomial_ring_generic.py", "max_issues_repo_name": "bopopescu/sage", "max_issues_repo_head_hexsha": "2d495be78e0bdc7a0a635454290b27bb4f5f70f0", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-10-30T13:40:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-23T12:13:30.000Z", "max_forks_repo_path": "src/sage/rings/polynomial/multi_polynomial_ring_generic.py", "max_forks_repo_name": "bopopescu/sage", "max_forks_repo_head_hexsha": "2d495be78e0bdc7a0a635454290b27bb4f5f70f0", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-03-29T17:13:36.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-03T18:11:28.000Z", "avg_line_length": 41.7222222222, "max_line_length": 176, "alphanum_fraction": 0.7842876165, "include": true, "reason": "from sage", "num_tokens": 173}
|
c full copy and timing tests
program test73
integer n, i, j, time, ntime
parameter (n=1024)
parameter (ntime=10)
real a(n,n), b(n,n)
cfcd$ setbool('HPFC_LAZY_MESSAGES',0)
cfcd$ setbool('HPFC_USE_BUFFERS', 1)
chpf$ dynamic a, b
chpf$ template tc(n,n), tb(n,n)
chpf$ processors p(2,2)
chpf$ align a(i,j) with tc(i,j)
chpf$ align b(i,j) with tb(i,j)
chpf$ distribute tc(block,cyclic) onto p
chpf$ distribute tb(block,block) onto p
c
c initialize b
c
cfcd$ timeon
cfcd$ timeon
cfcd$ timeoff('empty measure')
c
chpf$ independent(j,i)
do j=1, n
do i=1, n
b(i,j) = real(i+j)
enddo
enddo
c
c realign b on tc => BLOCK to CYCLIC
c
chpf$ realign b(i,j) with tc(i,j)
c
c first a simple copy
c
chpf$ independent(j,i)
do j=1, n
do i=1, n
a(i,j) = b(i,j)
enddo
enddo
do time=1, ntime
chpf$ realign b(i,j) with tc(i,j)
cfcd$ timeon
chpf$ realign a(i,j) with tc(j,i)
chpf$ independent(j,i)
do j=1, n
do i=1, n
b(i,j) = a(j,i)
enddo
enddo
cfcd$ timeoff('transposition 1')
chpf$ realign a(i,j) with tc(i,j)
cfcd$ timeon
chpf$ realign b(i,j) with tc(j,i)
chpf$ independent(j,i)
do j=1, n
do i=1, n
a(i,j) = b(j,i)
enddo
enddo
cfcd$ timeoff('transposition 2')
enddo
cfcd$ timeoff('whole time')
end
|
{"hexsha": "558f46e59b806c13b9353543468276aa2a6cff9e", "size": 1454, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "packages/PIPS/validation/Hpfc/hpftest73c.f", "max_stars_repo_name": "DVSR1966/par4all", "max_stars_repo_head_hexsha": "86b33ca9da736e832b568c5637a2381f360f1996", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 51, "max_stars_repo_stars_event_min_datetime": "2015-01-31T01:51:39.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T02:01:50.000Z", "max_issues_repo_path": "packages/PIPS/validation/Hpfc/hpftest73c.f", "max_issues_repo_name": "DVSR1966/par4all", "max_issues_repo_head_hexsha": "86b33ca9da736e832b568c5637a2381f360f1996", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2017-05-29T09:29:00.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-11T16:01:39.000Z", "max_forks_repo_path": "packages/PIPS/validation/Hpfc/hpftest73c.f", "max_forks_repo_name": "DVSR1966/par4all", "max_forks_repo_head_hexsha": "86b33ca9da736e832b568c5637a2381f360f1996", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2015-03-26T08:05:38.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-18T02:01:51.000Z", "avg_line_length": 22.0303030303, "max_line_length": 40, "alphanum_fraction": 0.5550206327, "num_tokens": 521}
|
import sys
sys.path.append('..')
import torch
from torch_geometric.datasets import Planetoid
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch_geometric.nn import GCNConv
from bijou.data import PyGDataWrapper, DataBunch
from bijou.learner import Learner
from bijou.metrics import masked_cross_entropy, masked_accuracy
from bijou.datasets import cora
from bijou.callbacks import PyGNodeInterpreter
import networkx as nx
import matplotlib.pyplot as plt
if torch.cuda.is_available():
torch.cuda.manual_seed_all(1)
else:
torch.manual_seed(1)
dataset = Planetoid(root=cora(), name='Cora')
train_data = PyGDataWrapper(dataset[0], 'train')
val_data = PyGDataWrapper(dataset[0], 'val')
test_data = PyGDataWrapper(dataset[0], 'test')
data = DataBunch(train_data, val_data)
class Model(nn.Module):
def __init__(self, feature_num, class_num):
super().__init__()
self.conv1 = GCNConv(feature_num, 16)
self.conv2 = GCNConv(16, class_num)
def forward(self, data):
x, edge_index = data.x, data.edge_index
x = self.conv1(x, edge_index)
x = F.relu(x)
x = self.conv2(x, edge_index)
outputs = F.relu(x)
return outputs
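# the scores returned above are fed to bijou's masked loss/metric helpers
# below (masked_cross_entropy, masked_accuracy), which presumably apply the
# wrapper's train/val/test node masks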
model = Model(dataset.num_node_features, dataset.num_classes)
opt = optim.SGD(model.parameters(), lr=0.5, weight_decay=0.01)
learner = Learner(model, opt, masked_cross_entropy, data, metrics=[masked_accuracy], callbacks=PyGNodeInterpreter)
learner.fit(100)
learner.test(test_data)
def loss_noreduction(pred, target):
return F.cross_entropy(pred[target.mask], target.data[target.mask], reduction='none')
scores, xs, ys, preds, indecies = learner.interpreter.top_data(loss_noreduction, k=10, target='train', largest=True)
learner.interpreter.plot_confusion(target='train')
learner.interpreter.plot_confusion(target='val')
learner.interpreter.plot_confusion(target='test')
scores, xs, ys, preds, indecies = learner.interpreter.most_confused()
print(scores)
print(ys)
print(preds)
print(indecies)
# learner.interpreter.plot_graph(loss)
learner.interpreter.plot_graph(loss_noreduction, layout=nx.spring_layout, max_node_size=1000, min_node_size=300,
label_score=True, label_id=True, k=15, font_color='r', font_size=6)
plt.show()
|
{"hexsha": "72129cba1949fa540802a2c0901e198b6124ae77", "size": 2302, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/gnn-04-node_interprete.py", "max_stars_repo_name": "Tomas1861/bijou", "max_stars_repo_head_hexsha": "8db9a42a138c7480385c752c8106e35dd067a493", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-04T15:16:58.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-04T15:16:58.000Z", "max_issues_repo_path": "examples/gnn-04-node_interprete.py", "max_issues_repo_name": "Tomas1861/bijou", "max_issues_repo_head_hexsha": "8db9a42a138c7480385c752c8106e35dd067a493", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/gnn-04-node_interprete.py", "max_forks_repo_name": "Tomas1861/bijou", "max_forks_repo_head_hexsha": "8db9a42a138c7480385c752c8106e35dd067a493", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.8857142857, "max_line_length": 116, "alphanum_fraction": 0.7467419635, "include": true, "reason": "import networkx", "num_tokens": 573}
|
[STATEMENT]
lemma interchange:
assumes "seq \<nu> \<mu>" and "seq \<tau> \<sigma>"
shows "(\<nu> \<cdot> \<mu>) \<star> (\<tau> \<cdot> \<sigma>) = (\<nu> \<star> \<tau>) \<cdot> (\<mu> \<star> \<sigma>)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<nu> \<cdot> \<mu> \<star> \<tau> \<cdot> \<sigma> = (\<nu> \<star> \<tau>) \<cdot> (\<mu> \<star> \<sigma>)
[PROOF STEP]
using assms interchange
[PROOF STATE]
proof (prove)
using this:
seq \<nu> \<mu>
seq \<tau> \<sigma>
\<lbrakk>seq ?\<nu> ?\<mu>; seq ?\<tau> ?\<sigma>\<rbrakk> \<Longrightarrow> ?\<nu> \<cdot> ?\<mu> \<star> ?\<tau> \<cdot> ?\<sigma> = (?\<nu> \<star> ?\<tau>) \<cdot> (?\<mu> \<star> ?\<sigma>)
goal (1 subgoal):
1. \<nu> \<cdot> \<mu> \<star> \<tau> \<cdot> \<sigma> = (\<nu> \<star> \<tau>) \<cdot> (\<mu> \<star> \<sigma>)
[PROOF STEP]
by simp
|
{"llama_tokens": 340, "file": "Bicategory_Prebicategory", "length": 2}
|
# coding: utf8
import string
import numpy as np
from collections import Counter
from examples.commons.vocabulary import Vocabulary
from examples.commons.sequence_vocabulary import SequenceVocabulary
class NewsVectorizer(object):
def __init__(self, title_vocab, category_vocab):
self.title_vocab = title_vocab
self.category_vocab = category_vocab
def vectorize(self, title, vector_length=-1):
indices = [self.title_vocab.begin_seq_index]
indices.extend(self.title_vocab.lookup_token(token)
for token in title.split(" "))
indices.append(self.title_vocab.end_seq_index)
if vector_length < 0:
vector_length = len(indices)
out_vector = np.zeros(vector_length, dtype=np.int64)
out_vector[:len(indices)] = indices
out_vector[len(indices):] = self.title_vocab.mask_index
return out_vector
@classmethod
def from_dataframe(cls, news_df, cutoff=25):
title_vocab = SequenceVocabulary()
category_vocab = Vocabulary(add_unk=False)
word_counts = Counter()
for title in news_df.title:
for token in title.split(" "):
if token not in string.punctuation:
word_counts[token] += 1
for word, word_count in word_counts.items():
if word_count >= cutoff:
title_vocab.add_token(word)
for category in sorted(set(news_df.category)):
category_vocab.add_token(category)
return cls(title_vocab, category_vocab)
@classmethod
def from_serializable(cls, contents):
title_vocab = SequenceVocabulary.from_serializable(contents["title_vocab"])
category_vocab = Vocabulary.from_serializable(contents["category_vocab"])
return cls(title_vocab=title_vocab, category_vocab=category_vocab)
def to_serializable(self):
return {"title_vocab": self.title_vocab.to_serializable(),
"category_vocab": self.category_vocab.to_serializable()}
|
{"hexsha": "76ddb56242583666efc2e63fb555b903c903cc41", "size": 2042, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/doc_classification_with_pretrained_embeddings/news_vectorizer.py", "max_stars_repo_name": "Jochen-M/pytorch_nlp", "max_stars_repo_head_hexsha": "75ffbe60d1a9c383981396f346c6dcbabbb9e5d7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-21T13:07:41.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-21T13:07:41.000Z", "max_issues_repo_path": "examples/doc_classification_with_pretrained_embeddings/news_vectorizer.py", "max_issues_repo_name": "Jochen-M/pytorch_nlp", "max_issues_repo_head_hexsha": "75ffbe60d1a9c383981396f346c6dcbabbb9e5d7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/doc_classification_with_pretrained_embeddings/news_vectorizer.py", "max_forks_repo_name": "Jochen-M/pytorch_nlp", "max_forks_repo_head_hexsha": "75ffbe60d1a9c383981396f346c6dcbabbb9e5d7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.0333333333, "max_line_length": 83, "alphanum_fraction": 0.6772771792, "include": true, "reason": "import numpy", "num_tokens": 397}
|
from __future__ import annotations
import typing
import sympy
from polygon import Polygon
from sector import Sector
if typing.TYPE_CHECKING:
from circle import Circle
from point import Point
class Segment:
def __init__(self, circle: Circle, point_1: Point, point_2: Point) -> None:
self.circle = circle
self.point_1 = point_1
self.point_2 = point_2
@property
def central_angle(self) -> float:
# law of cosines on the isosceles triangle (r, r, chord):
# chord^2 = 2*r^2 - 2*r^2*cos(theta), so cos(theta) = (2*r^2 - chord^2) / (2*r^2)
distance = self.point_1.distance(self.point_2)
angle = float(sympy.acos((2 * (self.circle.radius ** 2) - distance ** 2) / (2 * (self.circle.radius ** 2))))
return angle
@property
def area(self) -> float:
sector = Sector(self.circle, self.point_1, self.point_2)
triangle = Polygon(self.circle.center, self.point_1, self.point_2)
return sector.area - triangle.area
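# Sanity check (assuming the companion Circle/Point classes): on a unit
# circle a chord of length sqrt(2) gives cos(theta) = (2 - 2) / 2 = 0, i.e.
# a central angle of pi/2.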
|
{"hexsha": "4a16c28859465e18f6edc7a3e5586d1acbc0267c", "size": 863, "ext": "py", "lang": "Python", "max_stars_repo_path": "data_structures/math/geometry/2D/segment.py", "max_stars_repo_name": "Pysics/Algorithm", "max_stars_repo_head_hexsha": "223f618e3e6d96e15091783b81b90ee00c771e8f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "data_structures/math/geometry/2D/segment.py", "max_issues_repo_name": "Pysics/Algorithm", "max_issues_repo_head_hexsha": "223f618e3e6d96e15091783b81b90ee00c771e8f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2022-03-30T01:30:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T12:52:04.000Z", "max_forks_repo_path": "data_structures/math/geometry/2D/segment.py", "max_forks_repo_name": "Pysics/Algorithm", "max_forks_repo_head_hexsha": "223f618e3e6d96e15091783b81b90ee00c771e8f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2022-03-29T12:27:48.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T05:02:31.000Z", "avg_line_length": 27.8387096774, "max_line_length": 116, "alphanum_fraction": 0.6685979143, "include": true, "reason": "import sympy", "num_tokens": 221}
|
#ifndef NETWORKS_HPP
#define NETWORKS_HPP
// For External Library
#include <torch/torch.h>
#include <boost/program_options.hpp>
// Define Namespace
namespace nn = torch::nn;
namespace po = boost::program_options;
// Function Prototype
void weights_init(nn::Module &m);
void LinearLayer(nn::Sequential &sq, const size_t in_dim, const size_t out_dim, const bool ReLU);
// -------------------------------------------------
// struct{AutoEncoder1dImpl}(nn::Module)
// -------------------------------------------------
struct AutoEncoder1dImpl : nn::Module{
private:
nn::Sequential encoder, decoder;
public:
AutoEncoder1dImpl(){}
AutoEncoder1dImpl(po::variables_map &vm);
torch::Tensor forward(torch::Tensor x);
};
TORCH_MODULE(AutoEncoder1d);
#endif
|
{"hexsha": "c713c54fc72798d53a37a71ab1faa7500f99015d", "size": 769, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "Dimensionality_Reduction/AE1d/src/networks.hpp", "max_stars_repo_name": "o8r/pytorch_cpp", "max_stars_repo_head_hexsha": "70ba1e64270da6d870847c074ce33afb154f1ef8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 181.0, "max_stars_repo_stars_event_min_datetime": "2020-03-26T12:33:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T04:04:25.000Z", "max_issues_repo_path": "Dimensionality_Reduction/AE1d/src/networks.hpp", "max_issues_repo_name": "o8r/pytorch_cpp", "max_issues_repo_head_hexsha": "70ba1e64270da6d870847c074ce33afb154f1ef8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11.0, "max_issues_repo_issues_event_min_datetime": "2020-07-26T13:18:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-09T10:04:10.000Z", "max_forks_repo_path": "Dimensionality_Reduction/AE1d/src/networks.hpp", "max_forks_repo_name": "o8r/pytorch_cpp", "max_forks_repo_head_hexsha": "70ba1e64270da6d870847c074ce33afb154f1ef8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 38.0, "max_forks_repo_forks_event_min_datetime": "2020-05-04T05:06:55.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T19:10:51.000Z", "avg_line_length": 24.03125, "max_line_length": 97, "alphanum_fraction": 0.6462938882, "num_tokens": 171}
|
# -*- coding: utf-8 -*-
from collections import OrderedDict
import datetime
from django.db import connection
import numpy
import time
def relativedelta2seconds(r):
u"""Conversion d'un relative delta en floatant."""
if r is None:
return 0
return (
r.days * 24 * 60 * 60 +
r.hours * 60 * 60 +
r.minutes * 60 +
r.seconds +
r.microseconds / 1000000.
)
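# Example: relativedelta(hours=1, minutes=30) -> 1*3600 + 30*60 = 5400.0;
# months and years are ignored since they have no fixed length in seconds.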
def run():
u"""Point d'entrée du script."""
cursor = connection.cursor()
cursor.execute("""select * from temps_integration""")
data = OrderedDict()
delta = 15 * 60
for row in cursor.fetchall():
start, end = row
week_st = time.mktime(
((start - datetime.timedelta(
seconds=start.second,
microseconds=start.microsecond,
))).timetuple()
)
week_st = int(week_st / delta)
dur = (end - start).total_seconds()
if week_st not in data:
data[week_st] = []
data[week_st].append(dur)
cat = []
series = []
for x, all_y in data.iteritems():
dico = {
'x': x * delta*1000,
'mi': min(all_y),
'ma': max(all_y),
'co': len(all_y),
'av': sum(all_y) / len(all_y),
'me': numpy.percentile(all_y, 50),
'lo': numpy.percentile(all_y, 5),
'hi': numpy.percentile(all_y, 95),
}
series.append(u"{{x: {x}, low: {mi}, q1: {lo}, median: {me}, q3: {hi}, high: {ma}}}".format(**dico))
output = u"""
$(function () {
$('#container').highcharts({
chart: {
type: 'boxplot',
zoomType: 'x'
},
legend: {
enabled: false
},
xAxis: {
type: 'datetime',
},
yAxis: {
max: 50,
},
series: [{
data: ["""
output += u", ".join(series)
output += """]
}]
});
});
"""
with open('/tmp/graph.js', 'w') as fp:
fp.write(output.encode('utf-8'))
|
{"hexsha": "de91ac22608d9e2cd42f38225eb04282cad39f70", "size": 2237, "ext": "py", "lang": "Python", "max_stars_repo_path": "bin/temps_integration3.py", "max_stars_repo_name": "court-jus/dotfiles", "max_stars_repo_head_hexsha": "f8294dba47cc9e7f0f8b5c645dde13ff34e45124", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bin/temps_integration3.py", "max_issues_repo_name": "court-jus/dotfiles", "max_issues_repo_head_hexsha": "f8294dba47cc9e7f0f8b5c645dde13ff34e45124", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bin/temps_integration3.py", "max_forks_repo_name": "court-jus/dotfiles", "max_forks_repo_head_hexsha": "f8294dba47cc9e7f0f8b5c645dde13ff34e45124", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.5824175824, "max_line_length": 108, "alphanum_fraction": 0.4403218596, "include": true, "reason": "import numpy", "num_tokens": 533}
|
[STATEMENT]
lemma ir_While_backwards_both:
"(\<And>n. ir_hoare (\<lambda> s s'. P n s s' \<and> bval b s \<and> bval b' s') c c' (P (Suc n))) \<Longrightarrow>
ir_hoare (P 0) (WHILE b DO c) (WHILE b' DO c') (\<lambda>s s'. \<exists>n. P n s s' \<and> \<not> bval b s \<and> \<not> bval b' s')"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<And>n. ir_hoare (\<lambda>s s'. P n s s' \<and> bval b s \<and> bval b' s') c c' (P (Suc n))) \<Longrightarrow> ir_hoare (P 0) (WHILE b DO c) (WHILE b' DO c') (\<lambda>s s'. \<exists>n. P n s s' \<and> \<not> bval b s \<and> \<not> bval b' s')
[PROOF STEP]
apply(rule ir_While_backwards_frontier_both)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. \<And>n. (\<And>n. ir_hoare (\<lambda>s s'. P n s s' \<and> bval b s \<and> bval b' s') c c' (P (Suc n))) \<Longrightarrow> ir_hoare (\<lambda>s s'. P n s s' \<and> bval b s \<and> bval b' s') c c' (P (Suc n))
2. (\<And>n. ir_hoare (\<lambda>s s'. P n s s' \<and> bval b s \<and> bval b' s') c c' (P (Suc n))) \<Longrightarrow> ir_hoare (\<lambda>s s'. \<exists>n. P n s s') (WHILE b DO c) (WHILE b' DO c') (\<lambda>s s'. \<exists>n. P n s s' \<and> \<not> bval b s \<and> \<not> bval b' s')
[PROOF STEP]
apply blast
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<And>n. ir_hoare (\<lambda>s s'. P n s s' \<and> bval b s \<and> bval b' s') c c' (P (Suc n))) \<Longrightarrow> ir_hoare (\<lambda>s s'. \<exists>n. P n s s') (WHILE b DO c) (WHILE b' DO c') (\<lambda>s s'. \<exists>n. P n s s' \<and> \<not> bval b s \<and> \<not> bval b' s')
[PROOF STEP]
by(simp add: ir_While_False ir_sym flip_def)
|
{"llama_tokens": 735, "file": "Relational-Incorrectness-Logic_RelationalIncorrectness", "length": 3}
|
import cv2
import numpy as np
def resize_tensor(tensor, new_shape):
"""
Resize a numeric input 3D tensor with opencv. Each channel is resized independently from the others.
Parameters
----------
tensor: ndarray
Numeric 3D tensor of shape (channels, h, w)
new_shape: tuple
Tuple (new_h, new_w)
Returns
-------
new_tensor: ndarray
Resized tensor having size (channels, new_h, new_w)
"""
channels = tensor.shape[0]
new_tensor = np.zeros(shape=(channels,) + new_shape)
for i in range(0, channels):
new_tensor[i] = cv2.resize(tensor[i], dsize=new_shape[::-1])
return new_tensor
def crop_tensor(tensor, indexes):
"""
Crop a numeric 3D input tensor.
Parameters
----------
tensor: ndarray
Numeric 3D tensor of shape (channels, h, w)
indexes: tuple
Crop indexes following convention (h1, h2, w1, w2)
Returns
-------
new_tensor: ndarray
Cropped tensor having size (channels, h2-h1, w2-w1)
"""
h1, h2, w1, w2 = indexes
new_tensor = tensor[:, h1:h2, w1:w2].copy()
return new_tensor
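# A minimal usage sketch with hypothetical shapes, runnable as a script:
if __name__ == '__main__':
demo = np.arange(2 * 4 * 6, dtype=np.float32).reshape(2, 4, 6)
print(resize_tensor(demo, (8, 12)).shape)  # -> (2, 8, 12)
print(crop_tensor(demo, (1, 3, 2, 5)).shape)  # -> (2, 2, 3)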
|
{"hexsha": "0aeed160500e092c1bbb4f07d6f14cb592def41a", "size": 1152, "ext": "py", "lang": "Python", "max_stars_repo_path": "tensor_manipulation.py", "max_stars_repo_name": "ndrplz/computer_vision_utils", "max_stars_repo_head_hexsha": "869ca8d5dcd6a95392d67127aa2a43042b33993c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 18, "max_stars_repo_stars_event_min_datetime": "2017-06-27T14:05:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-18T13:20:34.000Z", "max_issues_repo_path": "tensor_manipulation.py", "max_issues_repo_name": "anhhna/computer_vision_utils", "max_issues_repo_head_hexsha": "869ca8d5dcd6a95392d67127aa2a43042b33993c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tensor_manipulation.py", "max_forks_repo_name": "anhhna/computer_vision_utils", "max_forks_repo_head_hexsha": "869ca8d5dcd6a95392d67127aa2a43042b33993c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2017-12-17T05:40:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-12T23:51:55.000Z", "avg_line_length": 23.5102040816, "max_line_length": 104, "alphanum_fraction": 0.6102430556, "include": true, "reason": "import numpy", "num_tokens": 304}
|
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""
A helper script to generate N permutations for M patches. The permutations are
selected based on hamming distance.
"""
import argparse
import itertools
import logging
import sys
import numpy as np
from scipy.spatial.distance import cdist
# initiate the logger
FORMAT = "[%(levelname)s: %(filename)s: %(lineno)4d]: %(message)s"
logging.basicConfig(level=logging.INFO, format=FORMAT, stream=sys.stdout)
logger = logging.getLogger(__name__)
def main():
parser = argparse.ArgumentParser(description="Permutations for patches")
parser.add_argument("--N", type=int, default=1000, help="Number of permuations")
parser.add_argument("--M", type=int, default=9, help="Number of patches to permute")
parser.add_argument(
"--method",
type=str,
default="max_avg",
choices=["max_avg", "max_min"],
help="hamming distance : max_avg, max_min",
)
parser.add_argument(
"--min_distance",
type=float,
default=2.0 / 9.0,
help="min distance of permutations in final set",
)
parser.add_argument(
"--output_dir",
type=str,
default=None,
required=True,
help="Output directory where permutations should be saved",
)
args = parser.parse_args()
# now generate data permutation for num_perms, num_patches and save them.
# The algorithm followed is same as in https://arxiv.org/pdf/1603.09246.pdf
# Algorithm 1 on page 12.
logger.info(
f"Generating all perms: M (#patches): {args.M}, "
f"N (#perms): {args.N}, method: {args.method}"
)
num_perms, num_patches = args.N, args.M
all_perms = np.array(list(itertools.permutations(list(range(num_patches)))))
total_perms = all_perms.shape[0]
logger.info(f"Selecting perms from set of {total_perms} perms")
for idx in range(num_perms):
if idx == 0:
j = np.random.randint(total_perms) # uniformly sample first perm
selected_perms = all_perms[j].reshape([1, -1])
else:
selected_perms = np.concatenate(
[selected_perms, all_perms[j].reshape([1, -1])], axis=0
)
all_perms = np.delete(all_perms, j, axis=0)
# compute the hamming distance now between the remaining and selected
D = cdist(selected_perms, all_perms, metric="hamming")
if args.method == "max_avg":
D = D.mean(axis=0)
j = D.argmax()
elif args.method == "max_min":
min_to_selected = D.min(axis=0)
j = min_to_selected.argmax()
if min_to_selected.min() < args.min_distance:
logger.info(
f"min distance {min_to_selected.min()} "
f"< threshold {args.min_distance}"
)
elif args.method == "avg":
logger.info("not implemented yet")
if (idx + 1) % 100 == 0:
logger.info(f"selected_perms: {(idx + 1)} -> {selected_perms.shape}")
dists_sel = cdist(selected_perms, selected_perms, metric="hamming")
non_diag_elements = dists_sel[np.where(np.eye(dists_sel.shape[0]) != 1)]
mean_dist = non_diag_elements.mean() * selected_perms.shape[1]
min_dist = non_diag_elements.min() * selected_perms.shape[1]
logger.info(f"Permutation stats: avg dist {mean_dist}; min dist {min_dist}")
perm_file = (
f"{args.output_dir}/hamming_perms_{args.N}_patches_{args.M}_{ args.method}.npy"
)
logger.info(f"Writing permutations to: {perm_file}")
logger.info(f"permutations shape: {selected_perms.shape}")
np.save(perm_file, selected_perms)
logger.info("Done!")
if __name__ == "__main__":
main()
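# A toy illustration of the greedy selection (numbers chosen by hand): with
# M=3 there are 6 permutations. Suppose (0,1,2) is sampled first; cdist with
# metric="hamming" then scores every remaining permutation, and the next pick
# maximizes the mean (max_avg) or minimum (max_min) distance to the selected
# set, e.g. (1,2,0), which disagrees in all three positions (distance 1.0).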
|
{"hexsha": "36cee39811bde694ce6c392fcaa190093558a184", "size": 3876, "ext": "py", "lang": "Python", "max_stars_repo_path": "extra_scripts/generate_jigsaw_permutations.py", "max_stars_repo_name": "blazejdolicki/vissl", "max_stars_repo_head_hexsha": "9c10748a19fb1c637f32687142c8cd685f2410ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2512, "max_stars_repo_stars_event_min_datetime": "2021-01-27T18:44:44.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T19:33:49.000Z", "max_issues_repo_path": "extra_scripts/generate_jigsaw_permutations.py", "max_issues_repo_name": "blazejdolicki/vissl", "max_issues_repo_head_hexsha": "9c10748a19fb1c637f32687142c8cd685f2410ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 361, "max_issues_repo_issues_event_min_datetime": "2021-01-27T20:12:09.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T12:39:34.000Z", "max_forks_repo_path": "extra_scripts/generate_jigsaw_permutations.py", "max_forks_repo_name": "blazejdolicki/vissl", "max_forks_repo_head_hexsha": "9c10748a19fb1c637f32687142c8cd685f2410ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 277, "max_forks_repo_forks_event_min_datetime": "2021-01-29T08:09:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T07:57:35.000Z", "avg_line_length": 36.2242990654, "max_line_length": 88, "alphanum_fraction": 0.6367389061, "include": true, "reason": "import numpy,from scipy", "num_tokens": 910}
|
*
* Definition:
* ===========
*
* SUBROUTINE ZGELQ( M, N, A, LDA, WORK1, LWORK1, WORK2, LWORK2,
* INFO)
*
* .. Scalar Arguments ..
* INTEGER INFO, LDA, M, N, LWORK1, LWORK2
* ..
* .. Array Arguments ..
* COMPLEX*16 A( LDA, * ), WORK1( * ), WORK2( * )
* ..
*
*
*> \par Purpose:
* =============
*>
*> \verbatim
*>
*> ZGELQ computes an LQ factorization of an M-by-N matrix A,
*> using ZLASWLQ when A is short and wide
*> (N sufficiently greater than M), and otherwise ZGELQT:
*> A = L * Q .
*> \endverbatim
*
* Arguments:
* ==========
*
*> \param[in] M
*> \verbatim
*> M is INTEGER
*> The number of rows of the matrix A. M >= 0.
*> \endverbatim
*>
*> \param[in] N
*> \verbatim
*> N is INTEGER
*> The number of columns of the matrix A. N >= 0.
*> \endverbatim
*>
*> \param[in,out] A
*> \verbatim
*> A is COMPLEX*16 array, dimension (LDA,N)
*> On entry, the M-by-N matrix A.
*> On exit, the elements on and below the diagonal of the array
*> contain the M-by-min(M,N) lower trapezoidal matrix L
*> (L is lower triangular if M <= N);
*> the elements above the diagonal are the rows of
*> blocked V representing Q (see Further Details).
*> \endverbatim
*>
*> \param[in] LDA
*> \verbatim
*> LDA is INTEGER
*> The leading dimension of the array A. LDA >= max(1,M).
*> \endverbatim
*>
*> \param[out] WORK1
*> \verbatim
*> WORK1 is COMPLEX*16 array, dimension (MAX(1,LWORK1))
*> WORK1 contains part of the data structure used to store Q.
*> WORK1(1): algorithm type = 1, to indicate output from
*> ZLASWLQ or ZGELQT
*> WORK1(2): optimum size of WORK1
*> WORK1(3): minimum size of WORK1
*> WORK1(4): horizontal block size
*> WORK1(5): vertical block size
*> WORK1(6:LWORK1): data structure needed for Q, computed by
*> ZLASWLQ or ZGELQT
*> \endverbatim
*>
*> \param[in] LWORK1
*> \verbatim
*> LWORK1 is INTEGER
*> The dimension of the array WORK1.
*> If LWORK1 = -1, then a query is assumed. In this case the
*> routine calculates the optimal size of WORK1 and
*> returns this value in WORK1(2), and calculates the minimum
*> size of WORK1 and returns this value in WORK1(3).
*> No error message related to LWORK1 is issued by XERBLA when
*> LWORK1 = -1.
*> \endverbatim
*>
*> \param[out] WORK2
*> \verbatim
*> (workspace) COMPLEX*16 array, dimension (MAX(1,LWORK2))
*>
*> \endverbatim
*> \param[in] LWORK2
*> \verbatim
*> LWORK2 is INTEGER
*> The dimension of the array WORK2.
*> If LWORK2 = -1, then a query is assumed. In this case the
*> routine calculates the optimal size of WORK2 and
*> returns this value in WORK2(1), and calculates the minimum
*> size of WORK2 and returns this value in WORK2(2).
*> No error message related to LWORK2 is issued by XERBLA when
*> LWORK2 = -1.
*> \endverbatim
*>
*> \param[out] INFO
*> \verbatim
*> INFO is INTEGER
*> = 0: successful exit
*> < 0: if INFO = -i, the i-th argument had an illegal value
*> \endverbatim
*
* Authors:
* ========
*
*> \author Univ. of Tennessee
*> \author Univ. of California Berkeley
*> \author Univ. of Colorado Denver
*> \author NAG Ltd.
*
*> \par Further Details:
* =====================
*>
*> \verbatim
*> Depending on the matrix dimensions M and N, and row and column
*> block sizes MB and NB returned by ILAENV, GELQ will use either
*> LASWLQ(if the matrix is short-and-wide) or GELQT to compute
*> the LQ decomposition.
*> The output of LASWLQ or GELQT representing Q is stored in A and in
*> array WORK1(6:LWORK1) for later use.
*> WORK1(2:5) contains the matrix dimensions M,N and block sizes MB, NB
*> which are needed to interpret A and WORK1(6:LWORK1) for later use.
*> WORK1(1)=1 indicates that the code needed to take WORK1(2:5) and
*> decide whether LASWLQ or GELQT was used is the same as used below in
*> GELQ. For a detailed description of A and WORK1(6:LWORK1), see
*> Further Details in LASWLQ or GELQT.
*> \endverbatim
*>
* =====================================================================
SUBROUTINE ZGELQ( M, N, A, LDA, WORK1, LWORK1, WORK2, LWORK2,
$ INFO)
*
* -- LAPACK computational routine (version 3.5.0) --
* -- LAPACK is a software package provided by Univ. of Tennessee, --
* -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd. --
* November 2013
*
* .. Scalar Arguments ..
INTEGER INFO, LDA, M, N, LWORK1, LWORK2
* ..
* .. Array Arguments ..
COMPLEX*16 A( LDA, * ), WORK1( * ), WORK2( * )
* ..
*
* =====================================================================
*
* ..
* .. Local Scalars ..
LOGICAL LQUERY, LMINWS
INTEGER MB, NB, I, II, KK, MINLW1, NBLCKS
* ..
* .. EXTERNAL FUNCTIONS ..
LOGICAL LSAME
EXTERNAL LSAME
* .. EXTERNAL SUBROUTINES ..
EXTERNAL ZGELQT, ZLASWLQ, XERBLA
* .. INTRINSIC FUNCTIONS ..
INTRINSIC MAX, MIN, MOD
* ..
* .. EXTERNAL FUNCTIONS ..
INTEGER ILAENV
EXTERNAL ILAENV
* ..
* .. EXECUTABLE STATEMENTS ..
*
* TEST THE INPUT ARGUMENTS
*
INFO = 0
*
LQUERY = ( LWORK1.EQ.-1 .OR. LWORK2.EQ.-1 )
*
* Determine the block size
*
IF ( MIN(M,N).GT.0 ) THEN
MB = ILAENV( 1, 'ZGELQ ', ' ', M, N, 1, -1)
NB = ILAENV( 1, 'ZGELQ ', ' ', M, N, 2, -1)
ELSE
MB = 1
NB = N
END IF
IF( MB.GT.MIN(M,N).OR.MB.LT.1) MB = 1
IF( NB.GT.N.OR.NB.LE.M) NB = N
MINLW1 = M + 5
IF ((NB.GT.M).AND.(N.GT.M)) THEN
IF(MOD(N-M, NB-M).EQ.0) THEN
NBLCKS = (N-M)/(NB-M)
ELSE
NBLCKS = (N-M)/(NB-M) + 1
END IF
ELSE
NBLCKS = 1
END IF
*
* Determine if the workspace size satisfies minimum size
*
LMINWS = .FALSE.
IF((LWORK1.LT.MAX(1,MB*M*NBLCKS+5)
$ .OR.(LWORK2.LT.MB*M)).AND.(LWORK2.GE.M).AND.(LWORK1.GE.M+5)
$ .AND.(.NOT.LQUERY)) THEN
IF (LWORK1.LT.MAX(1,MB*M*NBLCKS+5)) THEN
LMINWS = .TRUE.
MB = 1
END IF
IF (LWORK1.LT.MAX(1,M*NBLCKS+5)) THEN
LMINWS = .TRUE.
NB = N
END IF
IF (LWORK2.LT.MB*M) THEN
LMINWS = .TRUE.
MB = 1
END IF
END IF
*
IF( M.LT.0 ) THEN
INFO = -1
ELSE IF( N.LT.0 ) THEN
INFO = -2
ELSE IF( LDA.LT.MAX( 1, M ) ) THEN
INFO = -4
ELSE IF( LWORK1.LT.MAX( 1, MB*M*NBLCKS+5 )
$ .AND.(.NOT.LQUERY).AND. (.NOT.LMINWS)) THEN
INFO = -6
ELSE IF( (LWORK2.LT.MAX(1,M*MB)).AND.(.NOT.LQUERY)
$ .AND.(.NOT.LMINWS) ) THEN
INFO = -8
END IF
*
IF( INFO.EQ.0) THEN
WORK1(1) = 1
WORK1(2) = MB*M*NBLCKS+5
WORK1(3) = MINLW1
WORK1(4) = MB
WORK1(5) = NB
WORK2(1) = MB * M
WORK2(2) = M
END IF
IF( INFO.NE.0 ) THEN
CALL XERBLA( 'ZGELQ', -INFO )
RETURN
ELSE IF (LQUERY) THEN
RETURN
END IF
*
* Quick return if possible
*
IF( MIN(M,N).EQ.0 ) THEN
RETURN
END IF
*
* The LQ Decomposition
*
IF((N.LE.M).OR.(NB.LE.M).OR.(NB.GE.N)) THEN
CALL ZGELQT( M, N, MB, A, LDA, WORK1(6), MB, WORK2, INFO)
ELSE
CALL ZLASWLQ( M, N, MB, NB, A, LDA, WORK1(6), MB, WORK2,
$ LWORK2, INFO)
END IF
RETURN
*
* End of ZGELQ
*
END
|
{"hexsha": "2e188df9c9026053951ae040197b97f50097fe2e", "size": 7925, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "SRC/zgelq.f", "max_stars_repo_name": "sydha/Lapack", "max_stars_repo_head_hexsha": "1290e6b39b73b2431976e0a9b7fb7d78aeab3ba7", "max_stars_repo_licenses": ["BSD-3-Clause-Open-MPI"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "SRC/zgelq.f", "max_issues_repo_name": "sydha/Lapack", "max_issues_repo_head_hexsha": "1290e6b39b73b2431976e0a9b7fb7d78aeab3ba7", "max_issues_repo_licenses": ["BSD-3-Clause-Open-MPI"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SRC/zgelq.f", "max_forks_repo_name": "sydha/Lapack", "max_forks_repo_head_hexsha": "1290e6b39b73b2431976e0a9b7fb7d78aeab3ba7", "max_forks_repo_licenses": ["BSD-3-Clause-Open-MPI"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.5708955224, "max_line_length": 76, "alphanum_fraction": 0.52, "num_tokens": 2522}
|
import os, stat
import warnings
from nustar_gen.utils import energy_to_chan, validate_det1_region
from astropy import units as u
def make_spectra(infile, mod, src_reg,
mode='01', bgd_reg='None', outpath='None', runmkarf='yes', extended='no'):
'''
Generate a script to run nuproducts to extract a source (and optionally
a background) spectrum along with their response files.
Always runs numkrmf and numkarf for now.
Parameters
----------
infile: str
Full path to the input event file.
mod: str
'A' or 'B'
src_reg: str
Full path to source region.
Other Parameters
-------------------
bgd_reg: str
If not 'None', then must be the full path to the background region file
outpath: str
Optional. Default is to put the spectra and responses in the same location as infile
mode: str
Optional. Used primarily if you're doing mode06 analysis and need to specify
output names that are more complicated.
'''
from astropy.io.fits import getheader
# Make sure environment is set up properly
_check_environment()
# Check to see that all files exist:
assert os.path.isfile(infile), 'make_spectra: infile does not exist!'
assert os.path.isfile(src_reg), 'make_spectra: src_reg does not exist!'
if bgd_reg != 'None':
assert os.path.isfile(bgd_reg), 'make_spectra: bgd_reg does not exist!'
bkgextract='yes'
else:
bkgextract='no'
reg_base = os.path.basename(src_reg)
reg_base = os.path.splitext(reg_base)[0]
evdir = os.path.dirname(infile)
seqid = os.path.basename(os.path.dirname(evdir))
if outpath == 'None':
outdir = evdir
else:
outdir = outpath
try:
os.makedirs(outpath)
except FileExistsError:
# directory already exists
pass
stemout = f'nu{seqid}{mod}{mode}_{reg_base}'
lc_script = outdir+f'/runspec_{stemout}.sh'
with open(lc_script, 'w') as f:
f.write('nuproducts imagefile=NONE lcfile=NONE bkglcfile=NONE ')
f.write(f'runmkarf={runmkarf} extended={extended} runmkrmf=yes ')
f.write(f'indir={evdir} outdir={outdir} instrument=FPM{mod} ')
f.write(f'steminputs=nu{seqid} stemout={stemout} ')
f.write(f'srcregionfile={src_reg} ')
if bkgextract == 'no':
f.write(f'bkgextract=no ')
else:
f.write(f'bkgextract=yes bkgregionfile={bgd_reg} ')
f.write('clobber=yes')
os.chmod(lc_script, stat.S_IRWXG+stat.S_IRWXU)
return lc_script
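# A usage sketch (sequence id and paths are hypothetical):
#
# script = make_spectra(
#     '/data/90201001002/event_cl/nu90201001002A01_cl.evt', 'A',
#     '/data/regions/src.reg', bgd_reg='/data/regions/bgd.reg')
# # then execute the returned shell script to run nuproducts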
def make_lightcurve(infile, mod, src_reg,
barycorr=False, time_bin=100*u.s, mode='01',
bgd_reg='None', outpath='None', elow=3, ehigh=20):
'''
Generate a script to run nuproducts
Parameters
----------
infile: str
Full path to the input event file.
mod: str
'A' or 'B'
src_reg: str
Full path to source region.
Other Parameters
-------------------
bgd_reg: str
If not 'None', then must be the full path to the background region file
barycorr: bool
Default is 'False'. If 'True', then queries the infile for the OBJ J2000
coordinates and uses these for the barycenter correction.
elow: float
Low-energy bound. Default is 3 keV.
ehigh: float
High-energy bound. Default is 20 keV.
outpath: str
Optional. Default is to put the lightcurves in the same location as infile
mode: str
Optional. Used primarily if you're doing mode06 analysis and need to specify
output names that are more complicated.
'''
from astropy.io.fits import getheader
# Make sure environment is set up properly
_check_environment()
# Check to see that all files exist:
assert os.path.isfile(infile), 'make_lightcurve: infile does not exist!'
assert os.path.isfile(src_reg), 'make_lightcurve: src_reg does not exist!'
if bgd_reg != 'None':
assert os.path.isfile(bgd_reg), 'make_lightcurve: bgd_reg does not exist!'
bkgextract='yes'
else:
bkgextract='no'
reg_base = os.path.basename(src_reg)
reg_base = os.path.splitext(reg_base)[0]
evdir = os.path.dirname(infile)
seqid = os.path.basename(os.path.dirname(evdir))
if outpath == 'None':
outdir = evdir
else:
outdir = outpath
try:
os.makedirs(outdir)
except FileExistsError:
# directory already exists
pass
time_bin = (time_bin.to(u.s)).value
stemout = f'nu{seqid}{mod}{mode}_{reg_base}_{elow}to{ehigh}_{time_bin:3.4}s'
lc_script = outdir+f'/runlc_{stemout}.sh'
pi_low = energy_to_chan(elow)
pi_high = energy_to_chan(ehigh)
with open(lc_script, 'w') as f:
f.write('nuproducts phafile=NONE bkgphafile=NONE imagefile=NONE ')
f.write(f'runmkarf=no runmkrmf=no pilow={pi_low} pihigh={pi_high} ')
f.write(f'indir={evdir} outdir={outdir} instrument=FPM{mod} ')
f.write(f'steminputs=nu{seqid} stemout={stemout} ')
f.write(f'srcregionfile={src_reg} ')
if bkgextract == 'no':
f.write(f'bkgextract=no ')
else:
f.write(f'bkgextract=yes bkgregionfile={bgd_reg} ')
f.write(f'binsize={time_bin} ')
if barycorr:
attorb=evdir+f'/nu{seqid}{mod}.attorb'
hdr = getheader(infile)
ra = hdr['RA_OBJ']
dec = hdr['DEC_OBJ']
f.write(f'barycorr=yes srcra_barycorr={ra} srcdec_barycorr={dec} ')
f.write(f'orbitfile={attorb} ')
f.write('clobber=yes')
os.chmod(lc_script, stat.S_IRWXG+stat.S_IRWXU)
return lc_script
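# Example (a sketch; the paths are hypothetical). Note that time_bin must
# carry astropy time units:
#
# lc_script = make_lightcurve('/data/30001002001/event_cl/nu30001002001A01_cl.evt',
#                             'A', '/data/regions/src.reg',
#                             time_bin=50*u.s, elow=3, ehigh=10)
# os.system(lc_script)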
def make_exposure_map(obs, mod, vign_energy = False,
det_expo=False, evf=False):
'''
Create a script to run nuexpomap. Returns the script name.
Parameters
----------
obs: nustar_gen.info.Observation(), required
A valid observation metadata.
mod: str
'A' or 'B'
Other Parameters
----------------
vign_energy: float, optional
Energy where you want to apply the vignetting. Default is no vignetting.
det_expo : boolean, optional, default=False
Whether or not to retain the DET1 exposure map file
'''
import glob
# Make sure environment is set up properly
_check_environment()
# Locate the mast file, attfile, which are what you need for inputs.
evdir = obs.evdir
# Find the mast file. glob is necessary to handle .gz or .fits extensions:
mastaspectfile = glob.glob(evdir+'/nu'+obs.seqid+'*mast*')[0]
# Find the attitude file:
attfile = glob.glob(evdir+'/nu'+obs.seqid+'*att.fits')[0]
# Find the det1reffile:
det1reffile = glob.glob(evdir+'/nu'+obs.seqid+mod+'_det1.fits')[0]
# Only do this for A01, since that's all that matters
# Override this with evfile keyword:
if evf is False:
evfile = obs.science_files[mod][0]
assert '01' in evfile, f'make_exposure_map: Not an 01 event file: {evfile}'
else:
evfile=evf
# Construct the nuexpomap call:
print(obs.seqid, mod)
expo_script = obs.out_path+'/runexpo_'+obs.seqid+mod+'.sh'
expo = open(expo_script, 'w')
cmd_string = 'nuexpomap '
cmd_string += f'infile={evfile} '
if vign_energy is not False:
cmd_string+=f'vignflag=yes energy={vign_energy} '
else:
cmd_string += 'vignflag=no '
cmd_string += f'mastaspectfile={mastaspectfile} '
cmd_string += f'attfile={attfile} '
cmd_string += f'det1reffile={det1reffile} '
sky_expo_file = obs.out_path+'/nu'+obs.seqid+mod+'_sky_expo.fits'
cmd_string += f'expomapfile={sky_expo_file} '
if det_expo:
det_expo_file = obs.out_path+'/nu'+obs.seqid+mod+'_det1_expo.fits'
cmd_string += f'det1instrfile={det_expo_file} '
cmd_string += 'clobber=yes '
expo.write(cmd_string)
expo.close()
os.chmod(expo_script, stat.S_IRWXG+stat.S_IRWXU)
return expo_script
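# Example (a sketch; assumes `obs` is a populated nustar_gen.info.Observation
# instance -- the constructor arguments shown are hypothetical):
#
# from nustar_gen import info
# obs = info.Observation('/data/30001002001')
# expo_script = make_exposure_map(obs, 'A', det_expo=True)
# os.system(expo_script)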
def make_image(infile, elow = 3, ehigh = 20, clobber=True, outpath=False, usrgti=False):
'''
Spawn an xselect instance that produces the image in the energy range.
Parameters
----------
infile: str
Full path to the file that you want to process
elow: float
Low-energy band for the image
ehigh: float
High-energy band for the image
Other Parameters
----------------
clobber: boolean, optional, default=True
Overwrite existing files?
outpath: str, optional, default=os.path.dirname(infile)
Set the destination for output. Defaults to same location as infile.
usrgti : str, optional, default = False
Use a GTI file to time-filter the data (see nustar_gen.utils.make_usr_gti)
If False, do nothing.
Return
-------
outfile: str
The full path to the output image.
'''
# Make sure environment is set up properly
_check_environment()
# Check if input file exists:
try:
with open(infile) as f:
pass
except IOError:
raise IOError("make_image: File does not exist %s" % (infile))
if not outpath:
outdir=os.path.dirname(infile)
else:
outdir=outpath
try:
os.makedirs(outdir)
except FileExistsError:
# directory already exists
pass
# Trim the filename:
sname=os.path.basename(infile)
if sname.endswith('.gz'):
sname = os.path.splitext(sname)[0]
sname = os.path.splitext(sname)[0]
if usrgti is not False:
rshort = os.path.basename(usrgti)
rname = os.path.splitext(rshort)[0]
sname += f'_{rname}'
# Generate outfile name
outfile = outdir + '/' + sname + f'_{elow}to{ehigh}keV.fits'
if (os.path.exists(outfile)):
if not clobber:
warnings.warn('make_image: %s exists, use clobber=True to regenerate' % (outfile))
else:
os.system("rm "+outfile)
xsel_file = _make_xselect_commands(infile, outfile, elow, ehigh, usrgti=usrgti)
os.system("xselect @"+xsel_file)
os.system("rm -r -f "+xsel_file)
return outfile
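# Example (a sketch): make a 3-10 keV sky image next to the event file.
# xselect is spawned directly, so no script needs to be run afterwards.
#
# img = make_image('/data/30001002001/event_cl/nu30001002001A01_cl.evt',
#                  elow=3, ehigh=10)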
def extract_sky_events(infile, regfile, clobber=True, outpath=False):
'''
Spawn an xselect instance that produces a new event file screened using a sky ds9
region file.
Parameters
----------
infile: str
Full path to the event file that you want to process
regfile: str
Full path to a ds9 region file (in sky coordinates) to be used to filter
the events.
Other Parameters
----------------
clobber: boolean, optional, default=True
Overwrite existing files?
outpath: str, optional, default=os.path.dirname(infile)
Set the destination for output. Defaults to same location as infile.
Return
-------
outfile: str
The full path to the output image.
'''
# Make sure environment is set up properly
_check_environment()
# Check if input file exists:
try:
with open(infile) as f:
pass
except IOError:
raise IOError("extract_det1_events: File does not exist %s" % (infile))
try:
with open(regfile) as f:
pass
except IOError:
raise IOError("extract_det1_events: File does not exist %s" % (regfile))
if not outpath:
outdir=os.path.dirname(infile)
else:
outdir=outpath
# Trim the filename:
sname=os.path.basename(infile)
if sname.endswith('.gz'):
sname = os.path.splitext(sname)[0]
sname = os.path.splitext(sname)[0]
rshort = os.path.basename(regfile)
rname = os.path.splitext(rshort)[0]
# Generate outfile name
outfile = outdir + '/'+sname+f'_{rname}.evt'
if (os.path.exists(outfile)) & (~clobber):
warnings.warn('extract_sky_events: %s exists, use clobber=True to regenerate' % (outfile))
else:
os.system("rm "+outfile)
xsel_file = _make_xselect_commands_sky_evts(infile, outfile, regfile)
os.system("xselect @"+xsel_file)
os.system("rm -r -f "+xsel_file)
return outfile
def barycenter_events(obs, infile, mod='A'):
'''
Run barycorr on an event file.
Parameters
--------------------
obs: nustar_gen.info.Observation
An instance of the Observation class
infile: str
Full path to input file
mod: str
Module to use. 'A' or 'B'
Other Parameters
-------------------
TO BE IMPLEMENTED
clockfile: str
Path to the clockfile you want to use. Default is to use the CALDB clockfile
'''
# Locate the attorb file:
evdir = obs.evdir
attorb = f'{obs.evdir}nu{obs.seqid}{mod}.attorb'
# Trim the filename:
if obs.out_path is False:
outdir = os.path.dirname(infile)
print(outdir)
else:
outdir = obs.out_path
sname=os.path.basename(infile)
sname=os.path.splitext(sname)[0]
# Generate outfile name
outfile = outdir + '/'+sname+f'_barycorr.fits'
bary_sh = outdir+'/run_bary_'+sname+'.sh'
with open(bary_sh, 'w') as f:
f.write(f'barycorr infile={infile} clobber=yes ')
f.write(f'outfile={outfile} orbitfiles={attorb} ')
f.write(f'ra={obs.source_position.ra.deg} dec={obs.source_position.dec.deg} ')
os.environ['HEADASNOQUERY'] = ""
os.environ['HEADASPROMPT'] = "/dev/null"
os.chmod(bary_sh, stat.S_IRWXG+stat.S_IRWXU)
os.system(f'{bary_sh}')
return outfile
def apply_gti(infile, gtifile, clobber=True, outpath=False):
'''
Spawn an xselect instance that produces a new event file screened using GTI file
Parameters
----------
infile: str
Full path to the event file that you want to process
gtifile: str
Full path to a GTI FITS file to be used to time-filter
the events.
Other Parameters
----------------
clobber: boolean, optional, default=True
Overwrite existing files?
outpath: str, optional, default=os.path.dirname(infile)
Set the destination for output. Defaults to same location as infile.
Return
-------
outfile: str
The full path to the output image.
'''
# Make sure environment is set up properly
_check_environment()
# Check if input file exists:
try:
with open(infile) as f:
pass
except IOError:
raise IOError("apply_gti: File does not exist %s" % (infile))
try:
with open(gtifile) as f:
pass
except IOError:
raise IOError("apply_gti: File does not exist %s" % (gtifile))
if not outpath:
outdir=os.path.dirname(infile)
else:
outdir=outpath
# Trim the filename:
sname=os.path.basename(infile)
if sname.endswith('.gz'):
sname = os.path.splitext(sname)[0]
sname = os.path.splitext(sname)[0]
rshort = os.path.basename(gtifile)
rname = os.path.splitext(rshort)[0]
# Generate outfile name
outfile = outdir + '/'+sname+f'_{rname}.evt'
if (os.path.exists(outfile)) & (~clobber):
warnings.warn('apply_gti: %s exists, use clobber=True to regenerate' % (outfile))
else:
os.system("rm "+outfile)
xsel_file = _make_xselect_commands_apply_gti(infile, outfile, gtifile)
os.system("xselect @"+xsel_file)
os.system("rm -r -f "+xsel_file)
return outfile
def _make_xselect_commands_apply_gti(infile, outfile, gtifile):
'''
Helper script to generate the xselect commands to time-filter events
using a GTI file.
'''
import glob
for oldfile in glob.glob("session1*"):
os.system(f"rm {oldfile}")
xsel=open("xsel.xco","w")
xsel.write("session1\n")
xsel.write("read events \n")
evdir=os.path.dirname(infile)
xsel.write(f'{evdir} \n ' )
evfile = os.path.basename(infile)
xsel.write(f'{evfile} \n ')
xsel.write('yes \n')
xsel.write(f'filter time \n')
xsel.write('file \n')
xsel.write(f'{gtifile}\n')
xsel.write('extract events\n')
xsel.write("save events\n")
xsel.write("%s \n" % outfile)
xsel.write('n \n')
xsel.write('exit\n')
xsel.write('n \n')
xsel.close()
return 'xsel.xco'
def _make_xselect_commands_sky_evts(infile, outfile, regfile):
'''
Helper script to generate the xselect commands to extract events from
a given sky region.
'''
import glob
for oldfile in glob.glob("session1*"):
os.system(f"rm {oldfile}")
xsel=open("xsel.xco","w")
xsel.write("session1\n")
xsel.write("read events \n")
evdir=os.path.dirname(infile)
xsel.write(f'{evdir} \n ' )
evfile = os.path.basename(infile)
xsel.write(f'{evfile} \n ')
xsel.write('yes \n')
xsel.write(f'filter region {regfile} \n')
xsel.write("extract events\n")
xsel.write("save events\n")
xsel.write("%s \n" % outfile)
xsel.write('n \n')
xsel.write('exit\n')
xsel.write('n \n')
xsel.close()
return 'xsel.xco'
def _make_xselect_commands(infile, outfile, elow, ehigh, usrgti=False):
'''
Helper script to generate the xselect commands to make an image in a given NuSTAR energy range
'''
xsel=open("xsel.xco","w")
xsel.write("session1\n")
xsel.write("read events \n")
evdir=os.path.dirname(infile)
xsel.write(f'{evdir} \n ' )
evfile = os.path.basename(infile)
xsel.write(f'{evfile} \n ')
xsel.write('yes \n')
pi_low = energy_to_chan(elow)
pi_high = energy_to_chan(ehigh)
if usrgti is not False:
xsel.write(f'filter time \n')
xsel.write('file \n')
xsel.write(f'{usrgti}\n')
xsel.write('extract events\n')
xsel.write('filter pha_cutoff {} {} \n'.format(pi_low, pi_high))
xsel.write('set xybinsize 1\n')
xsel.write("extract image\n")
xsel.write("save image\n")
xsel.write("%s \n" % outfile)
xsel.write('exit\n')
xsel.write('n \n')
xsel.close()
return 'xsel.xco'
def _check_environment():
# Both $CALDB and $HEADAS must be defined for the HEASoft tools to run.
if not (("CALDB" in os.environ) and ("HEADAS" in os.environ)):
raise IOError("Environment variables $CALDB and $HEADAS not set")
###
###
###
###
###
###
# From here down are DET1 methods
###
###
###
###
###
###
def make_det1_image(infile, elow = 3, ehigh = 20, clobber=True, outpath=False):
'''
Spawn an xselect instance that produces a DET1 image in the energy range.
Parameters
----------
infile: str
Full path tot eh file that you want to process
elow: float
Low-energy band for the image
ehigh: float
High-energy band for the image
Other Parameters
----------------
clobber: boolean, optional, default=True
Overwrite existing files?
outpath: str, optional, default=os.path.dirname(infile)
Set the destination for output. Defaults to same location as infile.
Return
-------
outfile: str
The full path to the output image.
'''
# Make sure environment is set up properly
_check_environment()
# Check if input file exists:
try:
with open(infile) as f:
pass
except IOError:
raise IOError("make_image: File does not exist %s" % (infile))
if not outpath:
outdir=os.path.dirname(infile)
else:
outdir=outpath
# Trim the filename:
sname=os.path.basename(infile)
if sname.endswith('.gz'):
sname = os.path.splitext(sname)[0]
sname = os.path.splitext(sname)[0]
# Generate outfile name
outfile = outdir + '/'+sname+f'_{elow}to{ehigh}keV_det1.fits'
if (os.path.exists(outfile)) & (~clobber):
warnings.warn('make_image: %s exists, use clobber=True to regenerate' % (outfile))
else:
os.system("rm "+outfile)
xsel_file = _make_xselect_commands_det1(infile, outfile, elow, ehigh)
os.system("xselect @"+xsel_file)
os.system("rm -r -f "+xsel_file)
return outfile
def extract_det1_events(infile, regfile, clobber=True, outpath=False):
'''
Spawn an xselect instance that produces a new event file screened using a det1 region
file.
Parameters
----------
infile: str
Full path to the event file that you want to process
regfile: str
Full path to a ds9 region file (in physical coordinates) to be used to filter
the events.
Other Parameters
----------------
clobber: boolean, optional, default=True
Overwrite existing files?
outpath: str, optional, default=os.path.dirname(infile)
Set the destination for output. Defaults to same location as infile.
Return
-------
outfile: str
The full path to the output image.
'''
# Make sure environment is set up properly
_check_environment()
# Make sure region file is correctly formatted
validate_det1_region(regfile)
# Check if input file exists:
try:
with open(infile) as f:
pass
except IOError:
raise IOError("extract_det1_events: File does not exist %s" % (infile))
try:
with open(regfile) as f:
pass
except IOError:
raise IOError("extract_det1_events: File does not exist %s" % (regfile))
if not outpath:
outdir=os.path.dirname(infile)
else:
outdir=outpath
# Trim the filename:
sname=os.path.basename(infile)
if sname.endswith('.gz'):
sname = os.path.splitext(sname)[0]
sname = os.path.splitext(sname)[0]
rshort = os.path.basename(regfile)
rname = os.path.splitext(rshort)[0]
# Generate outfile name
outfile = outdir + '/'+sname+f'_{rname}.evt'
if (os.path.exists(outfile)) & (~clobber):
warnings.warn('extract_det1_events: %s exists, use clobber=True to regenerate' % (outfile))
else:
os.system("rm "+outfile)
xsel_file = _make_xselect_commands_det1_evts(infile, outfile, regfile)
os.system("xselect @"+xsel_file)
os.system("rm -r -f "+xsel_file)
return outfile
def make_det1_lightcurve(infile, mod, obs,
time_bin=100*u.s, mode='01',
elow=3, ehigh=20, stemout=False, gtifile=False):
'''
Generate a script to run nuproducts to make a lightcurve using the whole
FoV and turning off all vignetting and PSF effects. Assumes that infile
has already been filtered using extract_det1_events().
Parameters
----------
infile: str
Full path to the input event file. This should be pre-filtered
by extract_det1_events
mod: str
'A' or 'B'
obs: nustar_gen.info.Observation
Observation meta data
Other Parameters
-------------------
elow: float, optional, default = 3 keV
Low-energy bound
ehigh: float, optional, default is 20 keV
High-energy bound
mode: str, optional, default is '01'
Optional. Used to specify stemout if you're doing mode06 analysis and want
to specify output names that are more complicated.
gtifile: str
Path to a GTI file. If this is set, then this is passed to nuproducts.
stemout: str, optional
Use the specified stemout string when calling nuproducts. Otherwise
uses the default value.
'''
from astropy.io.fits import getheader
# Make sure environment is set up properly
_check_environment()
# Check to see that all files exist:
assert os.path.isfile(infile), 'make_det1_lightcurve: infile does not exist!'
# evdir = os.path.dirname(infile)
evdir = obs.evdir
seqid = obs.seqid
# seqid = os.path.basename(os.path.dirname(evdir))
outdir = obs.out_path
# if outpath is None:
# outdir = evdir
# else:
# outdir = outpath
hdr = getheader(infile)
ra = hdr['RA_OBJ']
dec = hdr['DEC_OBJ']
time_bin = int((time_bin.to(u.s)).value)
if stemout is False:
stemout = f'nu{seqid}{mod}{mode}_full_FoV_{elow}to{ehigh}_{time_bin}s'
lc_script = f'{outdir}/rundet1lc_{stemout}.sh'
pi_low = energy_to_chan(elow)
pi_high = energy_to_chan(ehigh)
with open(lc_script, 'w') as f:
f.write('nuproducts phafile=NONE bkgphafile=NONE imagefile=NONE ')
f.write(f'infile={infile} ')
f.write('runmkarf=no runmkrmf=no ')
f.write(f'indir={evdir} outdir={outdir} instrument=FPM{mod} ')
f.write(f'steminputs=nu{seqid} stemout={stemout} ')
f.write(f'pilow={pi_low} pihigh={pi_high} ')
f.write(f'bkgextract=no ')
f.write(f'binsize={time_bin} ')
f.write(f'srcra={ra} srcdec={dec} srcregionfile=DEFAULT srcradius=299 ')
# Turn off all of the time-dependent corrections for the pointing here
f.write(f'lcpsfflag=no lcexpoflag=no lcvignflag=no ')
if (gtifile != False):
f.write(f'usrgtifile={gtifile} ')
f.write('clobber=yes')
os.chmod(lc_script, stat.S_IRWXG+stat.S_IRWXU)
return lc_script
def make_det1_spectra(infile, mod, obs,
stemout=False, gtifile=False):
'''
Generate a script to run nuproducts to extract a source
spectrum along with the associated RMF.
Assumes that infile has already been filtered using extract_det1_events().
Always runs numkrmf, never runs numkarf. Never extracts a background.
Parameters
----------
infile: str
Full path to the input event file. This should be pre-filtered
by extract_det1_events
mod: str
'A' or 'B'
obs: nustar_gen.info.Observation
Observation meta data
Other Parameters
-------------------
stemout: str
Optional. Use the specified stemout string when calling nuproducts. Otherwise
uses the default value.
gtifile: str
Path to a GTI file. If this is set, then this is passed to nuproducts.
**NOTE** As of now proper treatment of this being barycenter corrected (or not)
is not supported. If you're doing pulsar analysis, please write your own version.
'''
# from astropy.io.fits import getheader
from os.path import basename
# Make sure environment is set up properly
_check_environment()
# Check to see that all files exist:
assert os.path.isfile(infile), f'make_det1_spectra: {infile} does not exist!'
# assert os.path.isfile(src_reg), 'make_det1_spectra: src_reg does not exist!'
bkgextract='no'
evdir = obs.evdir
seqid = obs.seqid
outdir = obs.out_path
# Construct the output file name:
# hdr = getheader(infile)
ra =obs.source_position.ra.deg
dec = obs.source_position.dec.deg
# if outpath == 'None':
# outdir = evdir
# else:
# outdir = outpath
# try:
# os.makedirs(outdir)
# except FileExistsError:
# # directory already exists
# pass
# stemout = f'nu{seqid}{mod}{mode}_{reg_base}_det1'
# Use the default stemout unless this is set
if stemout is False:
stemout = basename(infile).split('.')[0]
lc_script = outdir+f'/rundet1spec_{stemout}.sh'
with open(lc_script, 'w') as f:
f.write('nuproducts imagefile=NONE lcfile=NONE bkglcfile=NONE ')
f.write(f'infile={infile} ')
f.write('runmkarf=no runmkrmf=yes ')
f.write(f'indir={evdir} outdir={outdir} instrument=FPM{mod} ')
f.write(f'steminputs=nu{seqid} stemout={stemout} ')
f.write(f'srcra={ra} srcdec={dec} srcregionfile=DEFAULT srcradius=299 ')
if (gtifile != False):
f.write(f'usrgtifile={gtifile} ')
f.write(f'runbackscale=no ')
f.write(f'bkgextract=no ')
f.write('clobber=yes')
os.chmod(lc_script, stat.S_IRWXG+stat.S_IRWXU)
return lc_script
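# A typical DET1 workflow (sketch; region and event paths are hypothetical):
# screen the events with a DET1-coordinates region first, then extract the
# spectrum from the screened file.
#
# det1_evt = extract_det1_events('/data/30001002001/event_cl/nu30001002001A01_cl.evt',
#                                '/data/regions/det1_src.reg')
# spec_script = make_det1_spectra(det1_evt, 'A', obs)  # obs: an Observation
# os.system(spec_script)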
def _make_xselect_commands_det1_evts(infile, outfile, regfile):
'''
Helper script to generate the xselect commands to extract events from
a given region.
'''
import glob
for oldfile in glob.glob("session1*"):
os.system(f"rm {oldfile}")
xsel=open("xsel.xco","w")
xsel.write("session1\n")
xsel.write("read events \n")
evdir=os.path.dirname(infile)
xsel.write(f'{evdir} \n ' )
evfile = os.path.basename(infile)
xsel.write(f'{evfile} \n ')
xsel.write('yes \n')
xsel.write('set xyname\n')
xsel.write('DET1X\n')
xsel.write('DET1Y\n')
xsel.write(f'filter region {regfile} \n')
xsel.write("extract events\n")
xsel.write("save events\n")
xsel.write("%s \n" % outfile)
xsel.write('n \n')
xsel.write('exit\n')
xsel.write('n \n')
xsel.close()
return 'xsel.xco'
def _make_xselect_commands_det1(infile, outfile, elow, ehigh):
'''
Helper script to generate the xselect commands to make an image in a
given NuSTAR energy range
'''
import glob
for oldfile in glob.glob("session1*"):
os.system(f"rm {oldfile}")
xsel=open("xsel.xco","w")
xsel.write("session1\n")
xsel.write("read events \n")
evdir=os.path.dirname(infile)
xsel.write(f'{evdir} \n ' )
evfile = os.path.basename(infile)
xsel.write(f'{evfile} \n ')
xsel.write('yes \n')
xsel.write('set xyname\n')
xsel.write('DET1X\n')
xsel.write('DET1Y\n')
pi_low = energy_to_chan(elow)
pi_high = energy_to_chan(ehigh)
xsel.write('filter pha_cutoff {} {} \n'.format(pi_low, pi_high))
xsel.write('set xybinsize 1\n')
xsel.write("extract image\n")
xsel.write("save image\n")
xsel.write("%s \n" % outfile)
xsel.write('exit\n')
xsel.write('n \n')
xsel.close()
return 'xsel.xco'
export Tile, render
@doc """
A `Tile` is the basic currency in Canvas.
Most of the functions in the Canvas API take `Tile`s
among other things as arguments, and return a `Tile` as the result.
Tiles are immutable: once created there is no way to mutate them.
""" ->
abstract Tile
render{T <: Tile}(x::T) =
error("$T cannot be rendered.")
immutable Leaf <: Tile
element::Elem
end
render(x::Elem) = x
render(x::Leaf) = x.element
convert(::Type{Tile}, x::String) = Leaf(Elem(:span, x))
convert{ns, tag}(::Type{Tile}, x::Elem{ns, tag}) = Leaf(x)
function bestmime(val)
for mime in ("text/html", "image/svg+xml", "image/png", "text/plain")
mimewritable(mime, val) && return MIME(symbol(mime))
end
error("Cannot render $val.")
end
render_fallback(m::MIME"text/plain", x) = Elem(:div, stringmime(m, x))
render_fallback(m::MIME"text/html", x) = Elem(:div, innerHTML=stringmime(m, x))
render_fallback(m::MIME"text/svg", x) = Elem(:div, innerHTML=stringmime(m, x))
render_fallback(m::MIME"image/png", x) = Elem(:img, src="data:image/png;base64," * stringmime(m, x))
render(x::FloatingPoint) = @sprintf "%0.3f" x
render(x::Symbol) = string(x)
render(x::String) = Elem(:span, x)
render(xs::AbstractArray, tag="div") = Elem(tag, map(render, xs))
immutable AnyWrap <: Tile
value
end
render{T}(x::T) =
# Try to convert first
method_exists(convert, (Type{Tile}, T)) ?
render(convert(Tile, x)) :
render(AnyWrap(x))
# Catch-all render
function render(x::AnyWrap)
render_fallback(bestmime(x.value), x.value)
end
@doc """
`Empty` is a handy tile that is, well... empty.
Use the `empty` constant exported by Canvas in your code.
""" ->
immutable Empty <: Tile
end
@doc """
An Empty element.
""" ->
const empty = Empty()
render(t::Empty) = Elem(:div)
writemime(io::IO, m::MIME"text/html", x::Tile) =
writemime(io, m, Elem(:div, Canvas.render(x), className="canvasRoot"))
writemime{T <: Tile}(io::IO, m::MIME"text/html", x::Signal{T}) =
writemime(io, m, lift(Canvas.render, Patchwork.Elem, x))
render{T <: Tile}(s::Signal{T}) =
render(value(s))
# Note a TileList is NOT a Tile
immutable TileList
tiles::AbstractArray
end
convert(::Type{TileList}, xs::AbstractArray) =
TileList(xs)
convert(::Type{TileList}, xs::Tuple) =
TileList([x for x in xs])
convert(::Type{TileList}, x) =
TileList([x])
render(t::TileList) =
map(render, t.tiles)
render(t::TileList, wrap) =
Elem(wrap, map(render, t.tiles))
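# A small usage sketch (assumes Patchwork's Elem is loaded and a display
# backend that calls writemime):
#
#     t = convert(Tile, "hello")         # a Leaf wrapping Elem(:span, "hello")
#     render(t)                          # => Elem(:span, "hello")
#     render(TileList(["a", "b"]), :ul)  # => Elem(:ul, ...)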
import .ops
open subtype nnf
@[simp] def force {states : Type} (k : kripke states) : states → nnf → Prop
| s (var n) := k.val n s
| s (neg n) := ¬ k.val n s
| s (and φ ψ) := force s φ ∧ force s ψ
| s (or φ ψ) := force s φ ∨ force s ψ
| s (box φ) := ∀ s', k.rel s s' → force s' φ
| s (dia φ) := ∃ s', k.rel s s' ∧ force s' φ
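-- A small sanity check (a sketch): `force` on `box φ` unfolds, via its simp
-- equations, to the usual Kripke clause, so any successor of a state forcing
-- `box φ` forces `φ`.
example {st : Type} (k : kripke st) (s s' : st) (φ : nnf)
  (h : force k s (box φ)) (hr : k.rel s s') : force k s' φ :=
by { simp at h, exact h s' hr }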
def sat {st} (k : kripke st) (s) (Γ : list nnf) : Prop :=
∀ φ ∈ Γ, force k s φ
def unsatisfiable (Γ : list nnf) : Prop :=
∀ (st) (k : kripke st) s, ¬ sat k s Γ
theorem unsat_singleton {φ} (h : unsatisfiable [φ]) : ∀ {st} (k : kripke st) s, ¬ force k s φ
:=
begin
intros st k s hf, apply h, intros ψ hψ, rw list.mem_singleton at hψ, rw hψ, exact hf
end
theorem sat_of_empty {st} (k : kripke st) (s) : sat k s [] :=
λ φ h, absurd h $ list.not_mem_nil _
theorem ne_empty_of_unsat {Γ} (h : unsatisfiable Γ): Γ ≠ [] :=
begin
intro heq, rw heq at h,
apply h, apply @sat_of_empty nat,
apply inhabited_kripke.1, exact 0
end
class val_constructible (Γ : list nnf) extends saturated Γ:=
(no_contra : ∀ {n}, var n ∈ Γ → neg n ∉ Γ)
(v : list ℕ)
(hv : ∀ {n}, var n ∈ Γ ↔ n ∈ v)
class modal_applicable (Γ : list nnf) extends val_constructible Γ :=
(φ : nnf)
(ex : dia φ ∈ Γ)
class model_constructible (Γ : list nnf) extends val_constructible Γ :=
(no_dia : ∀ {φ}, nnf.dia φ ∉ Γ)
-- unbox and undia take a list of formulas, and
-- get rid of the outermost box or diamond of each formula respectively
def unmodal (Γ : list nnf) : list $ list nnf :=
list.map (λ d, d :: (unbox Γ)) (undia Γ)
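-- For example, unmodal [dia φ, box ψ] = [[φ, ψ]]: one entry per diamond,
-- each pairing that diamond's body with all the unboxed formulas.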
theorem unmodal_size (Γ : list nnf) : ∀ (i : list nnf), i ∈ unmodal Γ → (node_size i < node_size Γ) :=
list.mapp _ _ begin intros φ h, apply undia_size h end
def unmodal_mem_box (Γ : list nnf) : ∀ (i : list nnf), i ∈ unmodal Γ → (∀ φ, box φ ∈ Γ → φ ∈ i) :=
list.mapp _ _ begin intros φ h ψ hψ, right, apply (@unbox_iff Γ ψ).1 hψ end
def unmodal_sat_of_sat (Γ : list nnf) : ∀ (i : list nnf), i ∈ unmodal Γ →
(∀ {st : Type} (k : kripke st) s Δ
(h₁ : ∀ φ, box φ ∈ Γ → box φ ∈ Δ)
(h₂ : ∀ φ, dia φ ∈ Γ → dia φ ∈ Δ),
sat k s Δ → ∃ s', sat k s' i) :=
list.mapp _ _
begin
intros φ hmem st k s Δ h₁ h₂ h,
have : force k s (dia φ),
{ apply h, apply h₂, rw undia_iff, exact hmem },
rcases this with ⟨w, hrel, hforce⟩,
split, swap, { exact w },
{ intro ψ, intro hψ, cases hψ,
{ rw hψ, exact hforce },
{ apply (h _ (h₁ _ ((@unbox_iff Γ ψ).2 hψ))) _ hrel} },
end
def unmodal_mem_head (Γ : list nnf) : ∀ (i : list nnf), i ∈ unmodal Γ → dia (list.head i) ∈ Γ :=
list.mapp _ _ begin intros φ hmem, rw undia_iff, exact hmem end
def unmodal_unsat_of_unsat (Γ : list nnf) : ∀ (i : list nnf), i ∈ unmodal Γ → Π h : unsatisfiable i, unsatisfiable (dia (list.head i) :: rebox (unbox Γ)) :=
list.mapp _ _
begin
intros φ _,
{ intro h, intro, intros k s hsat,
have ex := hsat (dia φ) (by simp),
cases ex with s' hs',
apply h st k s', intros ψ hmem,
cases hmem,
{ rw hmem, exact hs'.2 },
{ have := (@rebox_iff ψ (unbox Γ)).2 hmem,
apply hsat (box ψ) (by right; assumption) s' hs'.1 } }
end
def mem_unmodal (Γ : list nnf) (φ) (h : φ ∈ undia Γ) : (φ :: unbox Γ) ∈ unmodal Γ :=
begin apply list.mem_map_of_mem (λ φ, φ :: unbox Γ) h end
def unsat_of_unsat_unmodal {Γ : list nnf} (h : modal_applicable Γ) (i) : i ∈ unmodal Γ ∧ unsatisfiable i → unsatisfiable Γ :=
begin
intro hex, intros st k s h,
have := unmodal_sat_of_sat Γ i hex.1 k s Γ (λ x hx, hx) (λ x hx, hx) h,
cases this with w hw,
exact hex.2 _ k w hw
end
namespace list
universes u v w x
variables {α : Type u} {β : Type v} {γ : Type w} {δ : Type x}
theorem cons_diff_of_ne_mem [decidable_eq α] {a : α} : Π {l₁ l₂ : list α} (h : a ∉ l₂), (a::l₁).diff l₂ = a :: l₁.diff l₂
| l₁ [] h := by simp
| l₁ (hd::tl) h :=
begin
simp,
rw erase_cons_tail,
apply cons_diff_of_ne_mem,
{intro hin, apply h, simp [hin]},
{intro heq, apply h, simp [heq]}
end
-- TODO: Can we strengthen this?
theorem subset_of_diff_filter [decidable_eq α] {a : α} : Π {l₁ l₂ : list α}, l₁.diff (filter (≠ a) l₂) ⊆ a :: l₁.diff l₂
| l₁ [] := by simp
| l₁ (hd::tl) :=
begin
by_cases heq : hd = a,
{rw heq, simp,
by_cases ha : a ∈ l₁,
{have hsub₁ := @subset_of_diff_filter l₁ tl,
have hsp := @subperm_cons_diff _ _ a (l₁.erase a) tl,
have hsub₂ := subperm.subset hsp,
have hsub := (perm.subset (perm.diff_right tl (perm_cons_erase ha))),
intros x hx,
cases hsub₁ hx with hxa,
{left, exact hxa},
{exact hsub₂ (hsub h)}},
{rw erase_of_not_mem ha, apply subset_of_diff_filter}},
{simp [heq], apply subset_of_diff_filter}
end
end list
/- Regular lemmas for the propositional part. -/
section
variables (φ ψ : nnf) (Γ₁ Γ₂ Δ : list nnf) {st : Type}
variables (k : kripke st) (s : st)
open list
theorem sat_subset (h₁ : Γ₁ ⊆ Γ₂) (h₂ : sat k s Γ₂) : sat k s Γ₁ :=
λ x hx, h₂ _ (h₁ hx)
theorem unsat_subset (h₁ : Γ₁ ⊆ Γ₂) (h₂ : unsatisfiable Γ₁) : unsatisfiable Γ₂ :=
λ st k s h, (h₂ st k s (sat_subset _ Γ₂ _ _ h₁ h))
theorem sat_sublist (h₁ : Γ₁ <+ Γ₂) (h₂ :sat k s Γ₂) : sat k s Γ₁ :=
sat_subset _ _ _ _ (sublist.subset h₁) h₂
theorem unsat_sublist (h₁ : Γ₁ <+ Γ₂) (h₂ : unsatisfiable Γ₁) : unsatisfiable Γ₂ :=
λ st k s h, (h₂ st k s (sat_sublist _ Γ₂ _ _ h₁ h))
theorem unsat_contra {Δ n} : var n ∈ Δ → neg n ∈ Δ → unsatisfiable Δ:=
begin
intros h₁ h₂, intros v hsat, intros s hsat,
have := hsat _ h₁, have := hsat _ h₂, simpa
end
theorem sat_of_and : force k s (and φ ψ) ↔ (force k s φ) ∧ (force k s ψ) :=
by split; {intro, simpa}; {intro, simpa}
theorem sat_of_sat_erase (h₁ : sat k s $ Δ.erase φ) (h₂ : force k s φ) : sat k s Δ :=
begin
intro ψ, intro h,
by_cases (ψ = φ),
{rw h, assumption},
{have : ψ ∈ Δ.erase φ,
rw mem_erase_of_ne, assumption, exact h,
apply h₁, assumption}
end
theorem unsat_and_of_unsat_split
(h₁ : and φ ψ ∈ Δ)
(h₂ : unsatisfiable $ φ :: ψ :: Δ.erase (and φ ψ)) :
unsatisfiable Δ :=
begin
intro st, intros, intro h,
apply h₂, swap 3, exact k, swap, exact s,
intro e, intro he,
cases he,
{rw he, have := h _ h₁, rw sat_of_and at this, exact this.1},
{cases he,
{rw he, have := h _ h₁, rw sat_of_and at this, exact this.2},
{have := h _ h₁, apply h, apply mem_of_mem_erase he} }
end
theorem sat_and_of_sat_split
(h₁ : and φ ψ ∈ Δ)
(h₂ : sat k s $ φ :: ψ :: Δ.erase (and φ ψ)) :
sat k s Δ :=
begin
intro e, intro he,
by_cases (e = and φ ψ),
{ rw h, split, repeat {apply h₂, simp} },
{ have : e ∈ Δ.erase (and φ ψ),
{ rw mem_erase_of_ne, repeat { assumption } },
apply h₂, simp [this] }
end
theorem unsat_or_of_unsat_split
(h : or φ ψ ∈ Δ)
(h₁ : unsatisfiable $ φ :: Δ.erase (nnf.or φ ψ))
(h₂ : unsatisfiable $ ψ :: Δ.erase (nnf.or φ ψ)) :
unsatisfiable $ Δ :=
begin
intro, intros, intro hsat,
have := hsat _ h,
cases this,
{apply h₁, swap 3, exact k, swap, exact s, intro e, intro he,
cases he, rw he, exact this, apply hsat, apply mem_of_mem_erase he},
{apply h₂, swap 3, exact k, swap, exact s, intro e, intro he,
cases he, rw he, exact this, apply hsat, apply mem_of_mem_erase he}
end
theorem sat_or_of_sat_split_left
(h : or φ ψ ∈ Δ)
(hl : sat k s $ φ :: Δ.erase (nnf.or φ ψ)) :
sat k s Δ :=
begin
intros e he,
by_cases (e = or φ ψ),
{ rw h, left, apply hl, simp},
{have : e ∈ Δ.erase (or φ ψ),
{ rw mem_erase_of_ne, repeat { assumption } },
apply hl, simp [this]}
end
theorem sat_or_of_sat_split_right
(h : or φ ψ ∈ Δ)
(hl : sat k s $ ψ :: Δ.erase (nnf.or φ ψ)) :
sat k s Δ :=
begin
intros e he,
by_cases (e = or φ ψ),
{ rw h, right, apply hl, simp},
{ have : e ∈ Δ.erase (or φ ψ),
{ rw mem_erase_of_ne, repeat { assumption } },
apply hl, simp [this] }
end
end
def unmodal_jump (Γ : list nnf) : ∀ (i : list nnf), i ∈ unmodal Γ → Π Δ st (k : kripke st) s
(hsat : sat k s (Γ.diff Δ)) (hdia : dia i.head ∉ Δ),
∃ s, sat k s (i.diff (list.filter (≠ i.head) (unbox Δ))) :=
list.mapp _ _
begin
intros x hx Δ st k s hsat hdia,
rw list.cons_diff_of_ne_mem, swap,
{intro hmem, rw [list.mem_filter] at hmem, have := hmem.2,
simp at this, exact this},
{ rw [←undia_iff] at hx,
have := hsat _ (list.mem_diff_of_mem hx hdia),
rcases this with ⟨w, hw⟩,
split, swap, {exact w},
{ apply sat_subset, swap 3,
{ exact x::list.diff (unbox Γ) (unbox Δ) },
{ intros b hb, cases hb,
{ simp [hb] },
{ apply list.subset_of_diff_filter, exact hb } },
{ rw unbox_diff, intros c hc, cases hc,
{rw hc, exact hw.2},
{have := (@unbox_iff (list.diff Γ Δ) c).2 hc,
have hforce := hsat _ this, apply hforce, exact hw.1} } } }
end
/- Part of the soundness -/
theorem unsat_of_closed_and {Γ Δ} (i : and_instance Γ Δ) (h : unsatisfiable Δ) : unsatisfiable Γ :=
by cases i; { apply unsat_and_of_unsat_split, repeat {assumption} }
theorem unsat_of_closed_or {Γ₁ Γ₂ Δ : list nnf} (i : or_instance Δ Γ₁ Γ₂) (h₁ : unsatisfiable Γ₁) (h₂ : unsatisfiable Γ₂) : unsatisfiable Δ :=
by cases i; {apply unsat_or_of_unsat_split, repeat {assumption} }
/- Tree models -/
inductive model
| cons : list ℕ → list model → model
instance : decidable_eq model := by tactic.mk_dec_eq_instance
instance inhabited_model : inhabited model := ⟨model.cons [] []⟩
open model
@[simp] def mval : ℕ → model → bool
| p (cons v r) := p ∈ v
@[simp] def mrel : model → model → bool
| (cons v r) m := m ∈ r
theorem mem_of_mrel_tt : Π {v r m}, mrel (cons v r) m = tt → m ∈ r :=
begin
intros v r m h, by_contradiction,
simpa using h
end
@[simp] def builder : kripke model :=
{val := λ n s, mval n s, rel := λ s₁ s₂, mrel s₁ s₂}
inductive batch_sat : list model → list (list nnf) → Prop
| bs_nil : batch_sat [] []
| bs_cons (m Γ l₁ l₂) : sat builder m Γ → batch_sat l₁ l₂ →
batch_sat (m::l₁) (Γ::l₂)
open batch_sat
theorem bs_ex : Π l Γ,
batch_sat l Γ → ∀ m ∈ l, ∃ i ∈ Γ, sat builder m i
| l Γ bs_nil := λ m hm, by simpa using hm
| l Γ (bs_cons m Δ l₁ l₂ h hbs) :=
begin
intros n hn,
cases hn,
{split, swap, exact Δ, split, simp, rw hn, exact h},
{have : ∃ (i : list nnf) (H : i ∈ l₂), sat builder n i,
{apply bs_ex, exact hbs, exact hn},
{rcases this with ⟨w, hw, hsat⟩, split, swap, exact w, split,
{simp [hw]}, {exact hsat} } }
end
theorem bs_forall : Π l Γ,
batch_sat l Γ → ∀ i ∈ Γ, ∃ m ∈ l, sat builder m i
| l Γ bs_nil := λ m hm, by simpa using hm
| l Γ (bs_cons m Δ l₁ l₂ h hbs) :=
begin
intros i hi,
cases hi, {split, swap, exact m, split, simp, rw hi, exact h},
{have : ∃ (n : model) (H : n ∈ l₁), sat builder n i,
{apply bs_forall, exact hbs, exact hi},
{rcases this with ⟨w, hw, hsat⟩, split, swap, exact w, split,
{simp [hw]}, {exact hsat} } }
end
theorem sat_of_batch_sat : Π l Γ (h : modal_applicable Γ),
batch_sat l (unmodal Γ) → sat builder (cons h.v l) Γ :=
begin
intros l Γ h hbs φ hφ,
cases hfml : φ,
case nnf.var : n {rw hfml at hφ, simp, rw ←h.hv, exact hφ},
case nnf.box : ψ
{intros s' hs', have hmem := mem_of_mrel_tt hs',
have := bs_ex l (unmodal Γ) hbs s' hmem,
rcases this with ⟨w, hw, hsat⟩,
have := unmodal_mem_box Γ w hw ψ _,
swap, {rw ←hfml, exact hφ}, {apply hsat, exact this} },
case nnf.dia : ψ
{dsimp,
have := bs_forall l (unmodal Γ) hbs (ψ :: unbox Γ) _, swap,
{apply mem_unmodal, rw [←undia_iff, ←hfml], exact hφ},
{rcases this with ⟨w, hw, hsat⟩, split, swap, exact w, split,
{simp [hw]}, {apply hsat, simp} } },
case nnf.neg : n
{rw hfml at hφ, have : var n ∉ Γ,
{intro hin, have := h.no_contra, have := this hin, contradiction},
simp, rw ←h.hv, exact this },
case nnf.and : φ ψ
{rw hfml at hφ, have := h.no_and, have := @this φ ψ, contradiction},
case nnf.or : φ ψ
{rw hfml at hφ, have := h.no_or, have := @this φ ψ, contradiction}
end
theorem build_model : Π Γ (h : model_constructible Γ),
sat builder (cons h.v []) Γ :=
begin
intros, intro, intro hmem,
cases heq : φ,
case nnf.var : n {rw [heq,h.hv] at hmem, simp [hmem]},
case nnf.neg : n {rw heq at hmem, simp, rw ←h.hv, intro hin, exfalso, apply h.no_contra, repeat {assumption} },
case nnf.box : φ {simp},
case nnf.and : φ ψ { rw heq at hmem, exfalso, apply h.no_and, assumption},
case nnf.or : φ ψ { rw heq at hmem, exfalso, apply h.no_or, assumption},
case nnf.dia : φ { rw heq at hmem, exfalso, apply h.no_dia, assumption},
end
from functools import lru_cache
from dunders import dunders, maths
from numbers import Number
import numpy as np
class BaseNumber(Number):
def copy(self):
return self.__class__(self)
def inverse(self):
return self.conjugate() / self.square()
def square(self):
return (self.conjugate() * self).real
def norm(self):
return np.sqrt(self.square())
def __abs__(self):
return self.norm()
def __len__(self):
return self.dimensions
def __getitem__(self, index):
return self.coefficients()[index]
def __setitem__(self, index, value):
self.coefficients()[index] = value
def __delitem__(self, index):
self.coefficients()[index] = 0
def __contains__(self, needle):
return needle in self.coefficients()
def __str__(self):
return format(self)
def __repr__(self):
return str(self)
def __format__(self, fmt):
return "(" + ", ".join([str(x) for x in self.coefficients()]) + ")"
def cayley_dickson_real_base(base=float):
if not issubclass(base, Number):
raise TypeError("The base type must be derived from Number.")
@dunders(base=base, names=maths, force=False)
class Real(BaseNumber, base):
dimensions = 1
order = 0
@staticmethod
def base():
return base
@property
def real(self):
return base(self)
@property
def imag(self):
return base(0)
def coefficients(self):
return (self.real,)
def conjugate(self):
return Real(self)
def __hash__(self):
return hash(base(self))
return Real
def cayley_dickson_construction(parent):
if not hasattr(parent, "coefficients"):
raise ValueError("The parent type must be Real or HyperComplex. (No coefficients found.)")
def option(name, default, **options):
if name in options:
return options[name]
return default
class HyperComplex(BaseNumber):
# Class Data Properties
dimensions = parent.dimensions * 2
order = parent.order + 1
@property
def real(self):
return self.a.real
@property
def imag(self):
if len(self) == 2:
return self.b
return tuple(self.a.imag) + self.b.coefficients()
# HyperComplex Data Manipulation
@staticmethod
def coerce(other):
try:
return HyperComplex(other)
except TypeError:
return None
@staticmethod
def base():
return parent.base()
# HyperComplex.indexes(index) returns the base element at index, for HyperComplex.matrix use
# HyperComplex.values(index) returns index value for HyperComplex.outerproduct use
# HyperComplex.named(input) returns named index (e0, e1) or (1, i), etc
def indexes(self, index, **args):
base = self.base()
result = [base(0)] * self.dimensions
result[index] = base(1)
return HyperComplex(*result)
def values(self, index, **args):
base = self.base()
coefficients = self.coefficients()
result = [base(0)] * self.dimensions
result[index] = coefficients[index]
return HyperComplex(*result)
def named(self, input, **args):
# Optional Arguments
element = option("element", "e", **args)
indices = option("indices", "1ijkLIJKmpqrMPQRnstuNSTUovwxOVWX", **args)
translate = option("translate", False, **args)
asindex = option("asindex", False, **args)
asstring = option("asstring", False, **args)
astuple = option("astuple", False, **args)
aslist = option("aslist", False, **args)
asobject = option("asobject", False, **args)
showplus = option("showplus", False, **args)
index = option("index", None, **args)
value = option("value", input, **args)
plus = "+" if showplus else ""
base = self.base()
if type(indices) != list:
indices = list(indices)
# index, value filters
if hasattr(input, "coefficients"):
# Added this to fix issue when doing outerproduct and having 0 values anywhere
# if any value is 0, return 0 immediately so the next() section doesn't throw a
# type error
if not input:
return 0
coefficients = input.coefficients()
enum = enumerate(coefficients)
index, value = next(((index, value) for index, value in enum if value))
elif type(value) is int and type(index) is int:
# Used by the group() function to add named elements
# to the rotation graph
# group() expects 0-based indices, plot() 1-based starting arrays;
# group() also encodes negativity as index >= len(self)
is_negative = index >= len(self)
if asstring and translate:
value = base(-1) if is_negative else base(+1)
index -= len(self) if is_negative else 0
# text filters
if asindex:
# Used by the HyperComplex.plot() functions to generate
# the base matrices used to generate the graphs
input = int(F"-{index+1}") if value < 0 else int(F"{index+1}")
elif asstring and not (asobject | astuple | aslist):
# Output Named/String Array, using either e0 + e1 + e2 + e3 format or the
# letter indices like 1 + i + j + k
enabled = translate and self.dimensions <= len(indices)
translated = indices[index] if enabled else F"{element}{index}"
sign = "-" if value < 0 else plus
value = "" if abs(value) == base(1) else "{:g}".format(abs(value))
translated = "" if translated == "1" and value else translated
input = F"{sign}{value}{translated}"
return input
def coefficients(self):
return self.a.coefficients() + self.b.coefficients()
def zero(self):
if isinstance(self.a, Number):
return HyperComplex(0, 0)
else:
return HyperComplex(self.a.zero(), self.b.zero())
def __init__(self, *args):
# Added list/tuple type as allowed arguments
# Remove need for pair=True
if len(args) == 2:
self.a, self.b = map(parent, args)
else:
if len(args) == 1:
if hasattr(args[0], "coefficients"):
args = args[0].coefficients()
elif isinstance(args[0], complex):
args = args[0].real, args[0].imag
elif type(args[0]) is tuple:
args = args[0]
elif type(args[0]) is list:
args = tuple(args[0])
if len(args) > len(self):
la = len(args)
ls = len(self)
raise TypeError(F"Too many args. Got {la} expecting at most {ls}.")
if len(self) != len(args):
args += (HyperComplex.base()(),) * (len(self) - len(args))
self.a = parent(*args[:len(self) // 2])
self.b = parent(*args[len(self) // 2:])
def __hash__(self):
return hash(self.coefficients())
def __iter__(self):
if isinstance(self.a, Number):
yield self.a
yield self.b
return
for i in self.a:
yield i
for i in self.b:
yield i
# HyperComplex Products Display
# HyperComplex.outerproduct() returns tensor outer product of AB'
# HyperComplex.innerproduct() returns scalar inner product of A'B
# HyperComplex.hadamardproduct() returns vector hadamard product of AB
# HyperComplex.matrix() returns multiplication matrix
def matrixdisplay(self, result, **args):
asstring = option("asstring", False, **args)
asobject = option("asobject", False, **args)
astuple = option("astuple", False, **args)
aslist = option("aslist", False, **args)
if asstring and not (asobject | astuple | aslist):
result = [list(map(str, row)) for row in result]
length = max(len(cell) for row in result for cell in row)
offset = length - max(len(row[0]) for row in result)
rows = [" ".join(cell.rjust(length) for cell in row)[offset:] for row in result]
return "\n".join(rows)
return result
def outerproduct(self, other, **args):
asobject = option("asobject", False, **args)
astuple = option("astuple", False, **args)
aslist = option("aslist", False, **args)
other = HyperComplex.coerce(other)
if other is None:
return NotImplemented
other = other.conjugate()
a = list(map(self.values, range(self.dimensions)))
b = list(map(other.values, range(other.dimensions)))
result = [[self.named(i * j, **args) for j in b] for i in a]
if asobject | astuple | aslist:
size = len(result)
for i, j in np.ndindex(size, size):
temp = result[i][j]
temp = temp.astuple() if astuple else temp
temp = temp.asobject() if asobject else temp
temp = temp.aslist() if aslist else temp
result[i][j] = temp
result = self.matrixdisplay(result, **args)
return result
def innerproduct(self, other):
return (self.conjugate() * other).real
def hadamardproduct(self, other, **args):
asobject = option("asobject", False, **args)
astuple = option("astuple", False, **args)
aslist = option("aslist", False, **args)
other = HyperComplex.coerce(other)
base = self.base()
if other is None:
return NotImplemented
a = self.coefficients()
b = other.coefficients()
result = [base(0)] * self.dimensions
for i in range(self.dimensions):
x = a[i]
y = b[i]
result[i] = self.named(base(x * y), index=i, **args)
if asobject | astuple | aslist:
result = HyperComplex(result)
result = result.astuple() if astuple else result
result = result.asobject() if asobject else result
result = result.aslist() if aslist else result
result = self.matrixdisplay(result, **args)
return result
def matrix(self, **args):
asobject = option("asobject", False, **args)
astuple = option("astuple", False, **args)
aslist = option("aslist", False, **args)
a = list(map(self.indexes, range(self.dimensions)))
result = [[self.named(i * j, **args) for j in a] for i in a]
if asobject | astuple | aslist:
size = len(result)
for i, j in np.ndindex(size, size):
temp = result[i][j]
temp = temp.astuple() if astuple else temp
temp = temp.asobject() if asobject else temp
temp = temp.aslist() if aslist else temp
result[i][j] = temp
result = self.matrixdisplay(result, **args)
return result
# Output Types
def asobject(self):
return HyperComplex(self)
def astuple(self):
return tuple(self.coefficients())
def aslist(self):
return list(self.coefficients())
def asstring(self, **args):
values = list(map(self.values, range(self.dimensions)))
values = [self.named(i, asstring=True, **args) for i in values]
values = [str(row) for row in values]
result = values.pop(0)
for value in values:
result += " - " + value[1:] if value[:1] == "-" else " + " + value
return result
# HyperComplex Comparison
def __eq__(self, other):
other = HyperComplex.coerce(other)
if other is None:
return NotImplemented
return self.a == other.a and self.b == other.b
def __ne__(self, other):
return not self == other
def __lt__(self, other):
other = HyperComplex.coerce(other)
if other is None:
return NotImplemented
return self.square() < other.square()
def __le__(self, other):
return self < other or self == other
def __gt__(self, other):
other = HyperComplex.coerce(other)
if other is None:
return NotImplemented
return self.square() > other.square()
def __ge__(self, other):
return self > other or self == other
# Mathematical Conversion
def convert(self, ctype, dimensions=1):
coefficients = self.coefficients()
if any(coefficients[dimensions:]):
size = self.dimensions
classname = self.__class__.__name__
typename = ctype.__name__
raise TypeError(F"Error converting {classname}[{size}] to {typename}: There are non-zero incompatible coefficients.")
return ctype(*coefficients[:dimensions])
def __bool__(self):
return bool(self.a) or bool(self.b)
def __int__(self):
return self.convert(int, 1)
def __float__(self):
return self.convert(float, 1)
def __complex__(self):
return self.convert(complex, 2)
# Mathematical Operations
def conjugate(self):
return HyperComplex(self.a.conjugate(), -self.b)
def __neg__(self):
return HyperComplex(-self.a, -self.b)
def __pos__(self):
return HyperComplex(+self.a, +self.b)
def __add__(self, other):
other = HyperComplex.coerce(other)
if other is None:
return NotImplemented
return HyperComplex(self.a + other.a, self.b + other.b)
def __radd__(self, other):
return HyperComplex(other) + self
def __sub__(self, other):
other = HyperComplex.coerce(other)
if other is None:
return NotImplemented
return HyperComplex(self.a - other.a, self.b - other.b)
def __rsub__(self, other):
return HyperComplex(other) - self
@lru_cache(maxsize=128)
def __pow__(self, power):
if not isinstance(power, int):
return NotImplemented
base = HyperComplex.base()
value = HyperComplex(base(1))
if power:
multiplier = self if power > 0 else self.inverse()
for _ in range(abs(power)):
value *= multiplier
return value
@lru_cache(maxsize=128)
def __mul__(self, other):
other = HyperComplex.coerce(other)
if other is None:
return NotImplemented
a = self.a * other.a - other.b.conjugate() * self.b
b = other.b * self.a + self.b * other.a.conjugate()
return HyperComplex(a, b)
@lru_cache(maxsize=128)
def __rmul__(self, other):
return HyperComplex(other) * self
@lru_cache(maxsize=128)
def __truediv__(self, other):
base = HyperComplex.base()
if isinstance(other, base):
other = base(1) / other
else:
other = HyperComplex.coerce(other)
if other is None:
return NotImplemented
other = other.inverse()
return self * other
@lru_cache(maxsize=128)
def __rtruediv__(self, other):
return HyperComplex(other) / self
return HyperComplex
def cayley_dickson_algebra(level, base=float):
if not isinstance(level, int) or level < 0:
raise ValueError("The level must be a positive integer.")
numbers = cayley_dickson_real_base(base)
for _ in range(level):
numbers = cayley_dickson_construction(numbers)
return numbers
def debug(*values):
print(*values, sep="\n", end="\n\n")
Real = R = cayley_dickson_real_base()
Complex = C = cayley_dickson_construction(R)
Quaternion = Q = H = cayley_dickson_construction(C)
Octonion = O = cayley_dickson_construction(H)
Sedenion = S = cayley_dickson_construction(O)
Pathion = P = cayley_dickson_construction(S)
Chingon = X = cayley_dickson_construction(P)
Routon = U = cayley_dickson_construction(X)
Voudon = V = cayley_dickson_construction(U)
Order = {
0: Real(),
1: Complex(),
2: Quaternion(),
3: Octonion(),
4: Sedenion(),
5: Pathion(),
6: Chingon(),
7: Routon(),
8: Voudon()
}
Names = {
"Real": Real(),
"Complex": Complex(),
"Quaternion": Quaternion(),
"Octonion": Octonion(),
"Sedenion": Sedenion(),
"Pathion": Pathion(),
"Chingon": Chingon(),
"Routon": Routon(),
"Voudon": Voudon()
}
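# A short usage sketch of the algebras constructed above. The sign of a
# product of basis elements depends on the Cayley-Dickson convention in
# __mul__, so only convention-independent identities are checked here.
if __name__ == "__main__":
    q = Quaternion(1, 2, 3, 4)
    debug(q.square())                      # 30.0 = 1 + 4 + 9 + 16
    debug(q * q.inverse())                 # approximately (1, 0, 0, 0)
    debug(q.asstring(translate=True))      # "1 + 2i + 3j + 4k"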
# File to specify the project exports
# It is not a module as it is just included by the JavaCall module
# to facilitate the exports
export
# Init Options
defaultopts, fromcurrentvm, forjavahome, setfromcurrentvm!, unsetfromcurrentvm!,
setjavahome!, unsetjavahome!, pushclasspath!, pushoptions!,
# JNI
# Types.jl
jint, jlong, jbyte, jboolean, jchar, jshort, jfloat, jdouble, jsize,
jvoid, jobject, jclass, jthrowable, jweak, jmethodID, jfieldID, jstring, jarray,
JNINativeMethod, jobjectArray, jbooleanArray, jbyteArray, jshortArray, jintArray,
jlongArray, jfloatArray, jdoubleArray, jcharArray, jvalue, jobjectRefType,
# Constants.jl
JNI_FALSE, JNI_TRUE,
# Java VM
init, destroy,
# Jimport
@jimport
function [u,x]=SFrontTrack(u0,x0,T,u_flux,f_flux);
% Doing front tracking on the initial fronts defined by (u0,x0) in the
% time interval [0 T]. The piecewise linear flux function is in (u_flux,
% f_flux). u0 must take values in the set {u_flux}.
% This version plots the fronts as the Riemann problems are solved.
[u,s,t_coll,x]=Sinitialize(u0,x0,T,u_flux,f_flux);
% Initializes the collision times.
t_start=zeros(size(x));
asm=mean(abs(s));
xlower=min([x(1)-asm*T,x(1)]);
xhigher=max([x(length(x))+asm*T,x(length(x))]);
if xlower==xhigher
xlower=xhigher-0.5;
xhigher=xlower+1;
end;
axis([xlower xhigher 0 T]);
set(gca,'Box','on');
%axes('XLim',[xlower xhigher],'YLim',[0 T],'Box','on');
hold on;
[t i]=min(t_coll);
while t<T&~isempty(s), % The main loop until no more collisions ...
[ncr nu]=size(u);
nx=length(x);
if nx~=nu-1,
[nx nu];
error('Wrong length of x vs u in FrontTrack');
end;
xc=x(i)+s(i)*(t-t_start(i));
line([x(i) xc],[t_start(i) t]); % Plotting the lines to the
line([x(i-1) xc],[t_start(i-1) t]); % collision about to be solved
[ur sr]=SRiemannsol(u(:,i-1),u(:,i+1),u_flux,f_flux); % Solving the RP
[nc nr]=size(ur);
ns=length(sr);
if ns>0,
if i>2,
t_coll(i-1)=collision_time([s(i-2) sr(1)],[t_start(i-2),t],...
[x(i-2),xc],T+1);
if t_coll(i-1)<=t, % The collision is not after present time.
[-1 t_coll(i-1) t] % Fatal error.
[s(i-2) sr(1) t_start(i-2) t]
error('not increasing t_coll : a');
end;
end;
if i<nu-1,
t_coll(i+1)=collision_time([sr(ns),s(i+1)],[t,t_start(i+1)],...
[xc,x(i+1)],T+1);
if t_coll(i+1)<=t, % The collision is not after present time.
[1 t_coll(i+1) t] % Fatal error.
[sr(ns) s(i+1)]
error('not increasing t_coll: b ');
end;
end;
else
if i>2 && i<nu-1,
t_coll(i-1)=collision_time([s(i-2) s(i+1)],...
[t_start(i-2),t_start(i+1)],...
[x(i-2),x(i+1)],T+1);
end;
end;
u=[u(:,1:i-1) ur u(:,i+1:nu)]; % Managing the lists ...
s=[s(1:i-2) sr s(i+1:nu-1)];
hone=ones([1,ns]);
x=[x(1:i-2) xc*hone x(i+1:nu-1)];
t_start=[t_start(1:i-2) t*hone t_start(i+1:nu-1)];
t_coll=[t_coll(1:i-1) (T+1)*ones([1,nr]) t_coll(i+1:nu)];
[t i]=min(t_coll);
end;
n=length(x);
for i=1:n % Drawing the fronts up to final time T
line([x(i),x(i)+s(i)*(T-t_start(i))],[t_start(i) T]);
end;
x=x+s.*(T-t_start);
hold off;
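% Usage sketch (hypothetical inputs; Sinitialize/SRiemannsol must be on the path):
%   u0 = [0 1 0]; x0 = [0 1];             % piecewise-constant initial data
%   uf = 0:0.25:1; ff = 0.5*uf.^2;        % piecewise-linear (Burgers-type) flux
%   [u,x] = SFrontTrack(u0,x0,1,uf,ff);   % track fronts for t in [0,1]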
|
{"author": "wme7", "repo": "Aero-matlab", "sha": "9430008f2e3b84f28633775a44dff534e780fbac", "save_path": "github-repos/MATLAB/wme7-Aero-matlab", "path": "github-repos/MATLAB/wme7-Aero-matlab/Aero-matlab-9430008f2e3b84f28633775a44dff534e780fbac/OperatorSplitting/AppendixA/Scalar_Fronttracking/SFrontTrack.m"}
|
function rs = spec(s)
%tstoolbox/@signal/spec
% Syntax:
% * rs = spec(s)
%
% compute power spectrum for real valued scalar signals. Multivariate
% signals are accepted but may produce unwanted results as only the
% spectrum of the first column is returned.
%
% Copyright 1997-2001 DPI Goettingen, License http://www.physik3.gwdg.de/tstool/gpl.txt
narginchk(1,1);
c = spec(s.core); % call real working routine for parent core object
rs = signal(c, s); % special constructor calling syntax for working routines
a = getaxis(s, 1);
rs = setaxis(rs, 1, achse(unit(a)^(-1),0, samplerate(a)/dlens(s,1)));
rs = addhistory(rs, 'Calculated spectrum (spec)');
rs = setyunit(rs, yunit(s)^2);
rs = addcommandlines(rs, 's = spec(s');
|
{"author": "benfulcher", "repo": "hctsa", "sha": "919f2aed7cc8e1a3a03304c1ade573fa664c73f8", "save_path": "github-repos/MATLAB/benfulcher-hctsa", "path": "github-repos/MATLAB/benfulcher-hctsa/hctsa-919f2aed7cc8e1a3a03304c1ade573fa664c73f8/Toolboxes/OpenTSTOOL/tstoolbox/@signal/spec.m"}
|
import tempfile
import numpy as np
import pysinsy
import soundfile as sf
import streamlit as st
from nnmnkwii.io import hts
from nnsvs.pretrained import create_svs_engine
st.title("NNSVS Demo")
st.markdown("Upload your .xml music file with text as input to make it sing.")
models = {
"kiritan": "r9y9/20220321_kiritan_timelag_mdn_duration_mdn_acoustic_resf0conv",
"yoko": "r9y9/20220322_yoko_timelag_mdn_duration_mdn_acoustic_resf0conv",
}
voice_option = st.selectbox("Select the voice", models.keys())
uploaded_file = st.file_uploader("Choose a .xml music file", type="xml")
if st.button("synthesis") and uploaded_file:
with st.spinner("Synthesizing to wav"):
# synthesize
with tempfile.NamedTemporaryFile(suffix=".xml") as f:
f.write(uploaded_file.getbuffer())
contexts = pysinsy.extract_fullcontext(f.name)
labels = hts.HTSLabelFile.create_from_contexts(contexts)
engine = create_svs_engine(models[voice_option])
wav, sr = engine.svs(labels)
# show audio player
with tempfile.NamedTemporaryFile(suffix=".wav") as f:
sf.write(f.name, wav.astype(np.int16), sr)
with open(f.name, "rb") as wav_file:
st.audio(wav_file.read(), format="audio/wav")
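# To run this demo locally (assuming the packages imported above are installed):
#   streamlit run streamlit_demo/app.py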
|
{"hexsha": "2c0600fb4e10879b809647330c166c9d5cdbce18", "size": 1284, "ext": "py", "lang": "Python", "max_stars_repo_path": "streamlit_demo/app.py", "max_stars_repo_name": "r9y9/dnnsvs", "max_stars_repo_head_hexsha": "b028f76fd4f081859ec99a2034e0e0dc8ce1a409", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 72, "max_stars_repo_stars_event_min_datetime": "2020-04-19T16:14:09.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-02T04:02:05.000Z", "max_issues_repo_path": "streamlit_demo/app.py", "max_issues_repo_name": "r9y9/dnnsvs", "max_issues_repo_head_hexsha": "b028f76fd4f081859ec99a2034e0e0dc8ce1a409", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-19T16:28:03.000Z", "max_issues_repo_issues_event_max_datetime": "2020-05-02T13:49:13.000Z", "max_forks_repo_path": "streamlit_demo/app.py", "max_forks_repo_name": "r9y9/dnnsvs", "max_forks_repo_head_hexsha": "b028f76fd4f081859ec99a2034e0e0dc8ce1a409", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-04-20T02:34:31.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-26T01:04:35.000Z", "avg_line_length": 34.7027027027, "max_line_length": 83, "alphanum_fraction": 0.7024922118, "include": true, "reason": "import numpy", "num_tokens": 335}
|
""" philoseismos: with passion for the seismic method.
This file defines a BinaryFileHeader object that represents
a Binary File Header from a SEG-Y file.
@author: Ivan Dubrovin
e-mail: dubrovin.io@icloud.com """
from philoseismos.segy.tools.constants import BFH_columns, BFH_format_string
from philoseismos.segy import gfunc
import pandas as pd
import numpy as np
import struct
class BinaryFileHeader:
""" Binary File Header for the SEG-Y file.
Binary File Header consists of 400 bytes of binary values relevant to the whole SEG-Y file.
Certain values in this header are crucial for the processing of the data in the file,
particularly the sample interval, trace length and sample format code.
"""
def __init__(self, file=None):
""" Create an empty Binary File Header object. """
self._bytes = None
self.endian = None
self.table = pd.Series(index=BFH_columns, dtype=np.int64)
self.table.fillna(0, inplace=True)
if file:
self.load_from_file(file)
# ----- Loading, writing ----- #
def load_from_file(self, file):
""" Loads Binary File Header from file into self.
Parameters
----------
file : str
Path to the SEG-Y file to extract Binary File Header from.
"""
with open(file, 'br') as f:
f.seek(3200)
bytes = f.read(400)
self.load_from_bytes(bytes)
def load_from_bytes(self, bytes):
""" Loads and unpacks the bytes given into self.
Parameters
----------
bytes : bytes
400 bytes of Binary File header to unpack into self.
"""
self._bytes = bytes
endian = gfunc._detect_endianness_from_sample_format_bytes(self._bytes[24:26])
full_format_string = endian + BFH_format_string
self.endian = endian
unpacked = struct.unpack(full_format_string, self._bytes)
table = pd.Series(index=BFH_columns, data=unpacked)
self.table.update(table)
# ----- Working with files ----- #
def replace_in_file(self, file):
""" Replaces the Binary File Header in the file with self.
Parameters
----------
file : str
Path to the SEG-Y file to replace Binary File Header in.
"""
endian = gfunc.get_endianness(file)
self._update_bytes(endian)
with open(file, 'br+') as f:
f.seek(3200)
f.write(self._bytes)
# --- Text files --- #
def export_to_csv(self, file):
""" Saves the content of the Binary File Header in .csv format.
Parameters
----------
file : str
A path and a name of a file to export self to.
Notes
-----
In the created .csv file each line contains a key-value pair separated by a comma.
"""
self.table.to_csv(file)
def import_from_csv(self, file):
""" Loads the content from the .csv file.
Parameters
----------
file : str
Path to the file to import.
Notes
-----
File should have each key-value pair on separate lines, separated by a colon.
Missing or incorrectly specified keys will be assigned the value of 0.
"""
self.table = pd.Series(index=BFH_columns, dtype=np.int64)
imported = pd.read_csv(file, squeeze=True, index_col=0, header=None,
skipinitialspace=True, names=['Field', 'BFH'])
self.table.update(imported)
self.table.fillna(0, inplace=True)
# ----- Dunder methods ----- #
def __repr__(self):
return str(self.table.loc[self.table != 0])
def __str__(self):
return str(self.table)
def __getitem__(self, key):
return self.table[key]
def __setitem__(self, key, value):
self.table[key] = value
# ----- Internal methods ----- #
def _update_bytes(self, endian):
""" Updates self._bytes by packing self.table. """
full_format_string = endian + BFH_format_string
self._bytes = struct.pack(full_format_string, *self.table.values)
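# --- Usage sketch (hypothetical file path; field names come from BFH_columns) ---
# bfh = BinaryFileHeader('line1.sgy')   # reads bytes 3200-3600 of the file
# print(bfh)                            # full table of header fields
# bfh.export_to_csv('bfh.csv')          # key-value pairs, one per line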
|
{"hexsha": "eeedfbf10952f616ab8804e2cc08d240cbdf75e0", "size": 4198, "ext": "py", "lang": "Python", "max_stars_repo_path": "philoseismos/segy/components/BinaryFileHeader.py", "max_stars_repo_name": "sir-dio/old-philoseismos", "max_stars_repo_head_hexsha": "4c830971641313abd95693b24965ede261c6824b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-27T14:03:00.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-27T14:03:00.000Z", "max_issues_repo_path": "philoseismos/segy/components/BinaryFileHeader.py", "max_issues_repo_name": "sir-dio/old-philoseismos", "max_issues_repo_head_hexsha": "4c830971641313abd95693b24965ede261c6824b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "philoseismos/segy/components/BinaryFileHeader.py", "max_forks_repo_name": "sir-dio/old-philoseismos", "max_forks_repo_head_hexsha": "4c830971641313abd95693b24965ede261c6824b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.5696202532, "max_line_length": 95, "alphanum_fraction": 0.6017151024, "include": true, "reason": "import numpy", "num_tokens": 921}
|
#!/usr/bin/env python
# encoding: utf-8
"""
@author: Shanda Lau 刘祥德
@license: (C) Copyright 2019-now, Node Supply Chain Manager Corporation Limited.
@contact: shandalaulv@gmail.com
@software:
@file: read_img.py
@time: 5/18/20 5:47 PM
@version 1.0
@desc:
"""
import cv2
import numpy as np
import os
from pathlib import Path
input_dir = 'output/articts'
art_name = 'Hindu_Gods'
txt_dir = 'data/a2.txt'
output_dir = 'output/contents_a2'
Path(output_dir).mkdir(exist_ok=True, parents=True)
with open(txt_dir, 'r') as f:
pairs = [t.split('\t') for t in f.readlines()]
for p in pairs:
img_path = os.path.join(input_dir, art_name, 'warp', p[0], p[1].replace('\n', '') + '.png')
img = cv2.imread(img_path)
if img is None:
print(f'Empty Image: {img_path}')
continue
cv2.imwrite(os.path.join(output_dir, art_name + '-' + p[0] + '_' + p[1].replace('\n', '') + '.png'), img)
|
{"hexsha": "667ff8bd17604dc973050688e5e634c881b1ee5b", "size": 931, "ext": "py", "lang": "Python", "max_stars_repo_path": "cast/read_img.py", "max_stars_repo_name": "JinShiyin/sast_backend", "max_stars_repo_head_hexsha": "b2e282d393497da1d300d83c1a045c9f78f854ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "cast/read_img.py", "max_issues_repo_name": "JinShiyin/sast_backend", "max_issues_repo_head_hexsha": "b2e282d393497da1d300d83c1a045c9f78f854ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cast/read_img.py", "max_forks_repo_name": "JinShiyin/sast_backend", "max_forks_repo_head_hexsha": "b2e282d393497da1d300d83c1a045c9f78f854ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.3823529412, "max_line_length": 113, "alphanum_fraction": 0.6380236305, "include": true, "reason": "import numpy", "num_tokens": 291}
|
import unittest
from vigeneretranslator import *
from .. import utils
import numpy
class TestVigenereTranslator(unittest.TestCase):
def setUp(self):
self.T = VigenereTranslator("ABC")
def test_translate(self):
self.assertEqual(self.T.translate("AAABBB"), "BCDCDE")
def test_encode(self):
self.assertEqual(self.T.encode("B C"), "A A")
if __name__ == '__main__':
unittest.main()
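# Note: because of the relative import above, run this as a package module, e.g.
#   python -m unittest pycrypt.translators.test_vigeneretranslator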
|
{"hexsha": "bd752ebb58a2d4de5ded177b5c57e3e1543d8465", "size": 420, "ext": "py", "lang": "Python", "max_stars_repo_path": "pycrypt/translators/test_vigeneretranslator.py", "max_stars_repo_name": "mathead/pycrypt", "max_stars_repo_head_hexsha": "4534c965dcf7202e41a6c93da05d55cad83fea3c", "max_stars_repo_licenses": ["BSD-2-Clause-FreeBSD"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-06-08T08:24:33.000Z", "max_stars_repo_stars_event_max_datetime": "2017-06-08T08:24:33.000Z", "max_issues_repo_path": "pycrypt/translators/test_vigeneretranslator.py", "max_issues_repo_name": "mathead/pycrypt", "max_issues_repo_head_hexsha": "4534c965dcf7202e41a6c93da05d55cad83fea3c", "max_issues_repo_licenses": ["BSD-2-Clause-FreeBSD"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pycrypt/translators/test_vigeneretranslator.py", "max_forks_repo_name": "mathead/pycrypt", "max_forks_repo_head_hexsha": "4534c965dcf7202e41a6c93da05d55cad83fea3c", "max_forks_repo_licenses": ["BSD-2-Clause-FreeBSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.7058823529, "max_line_length": 62, "alphanum_fraction": 0.6880952381, "include": true, "reason": "import numpy", "num_tokens": 98}
|
(** * Macros for the definition of HILECOP Place and Transition designs. *)
Require Import NArith.
Require Import AbstractSyntax.
Require Import GlobalTypes.
Require Import HVhdlTypes.
(** /!\ We must ensure that the identifiers generated by the SITPN-to-VHDL
transformation do not override the set of reserved
identifiers. Therefore, the first generated identifier must begin
at the last reserved identifier + 1. *)
(** Defines the clock input port reserved identifier. Every design in
H-VHDL is equipped with a clock input port. *)
Definition clk : ident := 0.
(** Defines reserved identifiers for the place, and the transition
designs. *)
Definition place_id : ident := 1.
Definition place_archid : ident := 2.
Definition trans_id : ident := 3.
Definition transition_archid : ident := 4.
(** Reserved identifiers for the action activation and function
execution generated processes. *)
Definition actions_ps_id : ident := 5.
Definition functions_ps_id : ident := 6.
(** Defines the first fresh identifier.
Starts the available identifier range. *)
Definition ffid : ident := 10.
(** Defines multiple macros that are convenient for building
the Place and Transition designs in H-VHDL abstract syntax.
Constants, types, and subtypes defined in HILECOP's "petri.vhd"
find their equivalent Coq definitions here.
*)
(** Global constant macros. *)
Definition MAXIMAL_GLOBAL_MARKING : N := 255.
(** Type and subtype macros. *)
Definition arc_t : tind := tind_natural 0%N 2%N.
Definition transition_t : tind := tind_natural 0%N 3%N.
Definition weight_t : tind := tind_natural 0%N MAXIMAL_GLOBAL_MARKING.
Definition weight_vector_t : expr -> expr -> tind := tind_array weight_t.
Definition arc_vector_t : expr -> expr -> tind := tind_array arc_t.
Definition bool_vector_t : expr -> expr -> tind := tind_array tind_boolean.
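(** Example (sketch): identifiers 0..6 are reserved by the definitions above,
    so the fresh-identifier range handed to the transformation starts at
    [ffid] = 10. *)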
|
{"author": "viampietro", "repo": "ver-hilecop", "sha": "cb539e9bf4e73f70d8e039fd56ddcfccd1e660d4", "save_path": "github-repos/coq/viampietro-ver-hilecop", "path": "github-repos/coq/viampietro-ver-hilecop/ver-hilecop-cb539e9bf4e73f70d8e039fd56ddcfccd1e660d4/hvhdl/Petri.v"}
|
import math
import numpy as np
from PIL import Image
from skimage import color, io
def load(image_path):
"""Loads an image from a file path.
HINT: Look up `skimage.io.imread()` function.
Args:
image_path: file path to the image.
Returns:
out: numpy array of shape(image_height, image_width, 3).
"""
### YOUR CODE HERE
# Use skimage io.imread
out = io.imread(image_path)
### END YOUR CODE
# Let's convert the image to be between the correct range.
out = out.astype(np.float64) / 255
return out
def dim_image(image):
"""Change the value of every pixel by following
x_n = 0.5*x_p^2
where x_n is the new value and x_p is the original value.
Args:
image: numpy array of shape(image_height, image_width, 3).
Returns:
out: numpy array of shape(image_height, image_width, 3).
"""
### YOUR CODE HERE
out = 0.5*np.power(image, 2)
### END YOUR CODE
return out
def convert_to_grey_scale(image):
"""Change image to gray scale.
HINT: Look at `skimage.color` library to see if there is a function
there you can use.
Args:
image: numpy array of shape(image_height, image_width, 3).
Returns:
out: numpy array of shape(image_height, image_width).
"""
out = None
### YOUR CODE HERE
out = color.rgb2grey(image)
### END YOUR CODE
return out
def rgb_exclusion(image, channel):
"""Return image **excluding** the rgb channel specified
Args:
image: numpy array of shape(image_height, image_width, 3).
channel: str specifying the channel. Can be either "R", "G" or "B".
Returns:
out: numpy array of shape(image_height, image_width, 3).
"""
if channel == 'R':
k = 0
elif channel == 'G':
k = 1
else:
k = 2
cImage = np.copy(image)
### YOUR CODE HERE
cImage[:,:,k] = 0
out = cImage
### END YOUR CODE
return out
def lab_decomposition(image, channel):
"""Decomposes the image into LAB and only returns the channel specified.
Args:
image: numpy array of shape(image_height, image_width, 3).
channel: str specifying the channel. Can be either "L", "A" or "B".
Returns:
out: numpy array of shape(image_height, image_width).
"""
if channel == 'L':
k = 0
elif channel == 'A':
k = 1
else:
k = 2
lab = color.rgb2lab(image)
### YOUR CODE HERE
out = lab[:,:,k]
### END YOUR CODE
return out
def hsv_decomposition(image, channel='H'):
"""Decomposes the image into HSV and only returns the channel specified.
Args:
image: numpy array of shape(image_height, image_width, 3).
channel: str specifying the channel. Can be either "H", "S" or "V".
Returns:
out: numpy array of shape(image_height, image_width).
"""
if channel == 'H':
k = 0
elif channel == 'S':
k = 1
else:
k = 2
hsv = color.rgb2hsv(image)
### YOUR CODE HERE
out = hsv[:,:,k]
### END YOUR CODE
return out
def mix_images(image1, image2, channel1, channel2):
"""Combines image1 and image2 by taking the left half of image1
and the right half of image2. The final combination also excludes
channel1 from image1 and channel2 from image2 for each image.
HINTS: Use `rgb_exclusion()` you implemented earlier as a helper
function. Also look up `np.concatenate()` to help you combine images.
Args:
image1: numpy array of shape(image_height, image_width, 3).
image2: numpy array of shape(image_height, image_width, 3).
channel1: str specifying channel used for image1.
channel2: str specifying channel used for image2.
Returns:
out: numpy array of shape(image_height, image_width, 3).
"""
out = None
### YOUR CODE HERE
s1 = image1.shape[1]
s2 = image2.shape[1]
out = np.concatenate((image1[:,:int(s1/2)],image2[:,int(s2/2):]),axis = 1)
out = rgb_exclusion(out,channel1)
out = rgb_exclusion(out,channel2)
### END YOUR CODE
return out
def mix_quadrants(image):
"""THIS IS AN EXTRA CREDIT FUNCTION.
This function takes an image, and performs a different operation
to each of the 4 quadrants of the image. Then it combines the 4
quadrants back together.
Here are the 4 operations you should perform on the 4 quadrants:
Top left quadrant: Remove the 'R' channel using `rgb_exclusion()`.
Top right quadrant: Dim the quadrant using `dim_image()`.
Bottom left quadrant: Brighten the quadrant using the function:
x_n = x_p^0.5
Bottom right quadrant: Remove the 'R' channel using `rgb_exclusion()`.
Args:
image1: numpy array of shape(image_height, image_width, 3).
Returns:
out: numpy array of shape(image_height, image_width, 3).
"""
out = None
s = image.shape[0:2]
qTopLeft = np.copy(image[:int(s[0]/2),:int(s[1]/2)])
qTopRight = np.copy(image[:int(s[0]/2),int(s[1]/2):])
qBottomLeft = np.copy(image[int(s[0]/2):,:int(s[1]/2)])
qBottomRight = np.copy(image[int(s[0]/2):,int(s[1]/2):])
qTopLeft = rgb_exclusion(qTopLeft,'R')
qTopRight = dim_image(qTopRight)
qBottomLeft = np.power(qBottomLeft, 0.5)  # x_n = x_p^0.5, as the docstring specifies
qBottomRight = rgb_exclusion(qBottomRight,'R')
outTop = np.concatenate((qTopLeft,qTopRight),axis = 1)
outBottom = np.concatenate((qBottomLeft,qBottomRight),axis = 1)
out = np.concatenate((outTop,outBottom),axis = 0)
### END YOUR CODE
return out
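# --- Usage sketch (hypothetical image path) ---
# img = load('image.jpg')                            # float64 array in [0, 1]
# out = mix_quadrants(img)
# io.imsave('out.jpg', (out * 255).astype(np.uint8))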
|
{"hexsha": "2a53fd0b9780f12eb59242f1d0d55df500351b6c", "size": 5769, "ext": "py", "lang": "Python", "max_stars_repo_path": "hw0_release/imageManip.py", "max_stars_repo_name": "YuanSun-au/CS131", "max_stars_repo_head_hexsha": "2f0e4896beb80926636bdf2fb57a119ae77ff45a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw0_release/imageManip.py", "max_issues_repo_name": "YuanSun-au/CS131", "max_issues_repo_head_hexsha": "2f0e4896beb80926636bdf2fb57a119ae77ff45a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw0_release/imageManip.py", "max_forks_repo_name": "YuanSun-au/CS131", "max_forks_repo_head_hexsha": "2f0e4896beb80926636bdf2fb57a119ae77ff45a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-05T00:08:23.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-05T00:08:23.000Z", "avg_line_length": 25.192139738, "max_line_length": 78, "alphanum_fraction": 0.6131045242, "include": true, "reason": "import numpy", "num_tokens": 1539}
|
#!/usr/bin/env python
# import required libraries
from pyqtgraph.Qt import QtGui, QtCore
import pyqtgraph as pg
import sys
import numpy as np
from pyrf.devices.thinkrf import WSA
from pyrf.util import read_data_and_context
from pyrf.numpy_util import compute_fft
from pyrf.connectors.twisted_async import TwistedConnector
from twisted.internet import reactor, defer
import twisted.python.log
# plot constants
CENTER_FREQ = 2450 * 1e6
SAMPLE_SIZE = 1024
ATTENUATOR = 0
DECIMATION = 4
RFE_MODE = 'ZIF'
TRIGGER_SET = {'type': 'None',
'fstart': 2400 * 1e6,
'fstop': 2500 * 1e6,
'amplitude': -70}
# connect to WSA device
dut = WSA(connector=TwistedConnector(reactor))
win = pg.GraphicsWindow()
win.resize(1000,600)
win.setWindowTitle("PYRF FFT Plot Example")
@defer.inlineCallbacks
def show_i_q():
yield dut.connect(sys.argv[1])
# setup test conditions
yield dut.reset()
yield dut.request_read_perm()
yield dut.rfe_mode(RFE_MODE)
yield dut.freq(CENTER_FREQ)
yield dut.decimation(0)
yield dut.attenuator(ATTENUATOR)
yield dut.trigger(TRIGGER_SET)
dut.connector.vrt_callback = receive_vrt
# capture 1 packet
yield dut.capture(SAMPLE_SIZE, 1)
context = {}
def receive_vrt(packet):
# read until I get 1 data packet
global context, dut
if not packet.is_data_packet():
context.update(packet.fields)
return
else:
pow_data = compute_fft(dut, packet, context)
print(pow_data)
update(dut, pow_data)
# initialize plot
fft_plot = win.addPlot(title="Power Vs. Frequency")
# disable auto size of the x-y axis
fft_plot.enableAutoRange('xy', False)
curve = fft_plot.plot(pen='g')
def update(dut, pow_data):
curve.setData(pow_data, pen = 'g')
dut.capture(SAMPLE_SIZE, 1)
d = show_i_q()
d.addErrback(twisted.python.log.err)
reactor.run()
## Start Qt event loop unless running in interactive mode or using pyside.
if __name__ == '__main__':
import sys
if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
QtGui.QApplication.instance().exec_()
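# Usage sketch: pass the device IP address on the command line
# (hypothetical address shown):
#   python examples/twisted_pyqtgraph_plot.py 10.0.0.42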
|
{"hexsha": "c34758e3b3d945cd172393af390058d419db1684", "size": 2151, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/twisted_pyqtgraph_plot.py", "max_stars_repo_name": "thelaly/pyrf", "max_stars_repo_head_hexsha": "52d7e8059dcd5e9dedd6a2a689124372836aae1c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/twisted_pyqtgraph_plot.py", "max_issues_repo_name": "thelaly/pyrf", "max_issues_repo_head_hexsha": "52d7e8059dcd5e9dedd6a2a689124372836aae1c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/twisted_pyqtgraph_plot.py", "max_forks_repo_name": "thelaly/pyrf", "max_forks_repo_head_hexsha": "52d7e8059dcd5e9dedd6a2a689124372836aae1c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.2317073171, "max_line_length": 75, "alphanum_fraction": 0.7024639702, "include": true, "reason": "import numpy", "num_tokens": 577}
|
import numpy as np
from typing import Dict, List
RatingType = Dict[str, float]
class GetStudents3Q():
def __init__(self, rating: RatingType) -> None:
self.rating: RatingType = rating
self.students_q3: RatingType = {}
def get(self) -> RatingType:
rating_list = list(self.rating.values())
third_quartile = np.quantile(rating_list, 0.75)
median = np.quantile(rating_list, 0.5)
for key in self.rating:
if median <= self.rating[key] <= third_quartile:
self.students_q3[key] = float(self.rating[key])
return self.students_q3
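# Usage sketch (hypothetical ratings):
#   rating = {'a': 3.0, 'b': 4.0, 'c': 4.5, 'd': 5.0}
#   GetStudents3Q(rating).get()   # students scoring between the median and Q3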
|
{"hexsha": "eac411db4c7a615d9410b83255c79c8400f7a16f", "size": 610, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/GetStudents3Q.py", "max_stars_repo_name": "VsevolodOn/PTLab1", "max_stars_repo_head_hexsha": "d1407a863af661660fc637b2ef20c6ff23cb94ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/GetStudents3Q.py", "max_issues_repo_name": "VsevolodOn/PTLab1", "max_issues_repo_head_hexsha": "d1407a863af661660fc637b2ef20c6ff23cb94ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/GetStudents3Q.py", "max_forks_repo_name": "VsevolodOn/PTLab1", "max_forks_repo_head_hexsha": "d1407a863af661660fc637b2ef20c6ff23cb94ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-03T01:11:01.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-03T01:11:01.000Z", "avg_line_length": 29.0476190476, "max_line_length": 61, "alphanum_fraction": 0.6344262295, "include": true, "reason": "import numpy", "num_tokens": 158}
|
function blkStruct = slblocks
%SLBLOCKS Define the Simulink library block representation.
%
% Define the Simulink library block representation for the Testing
% CP 10-11-05
% Copyright 2005-2006, Fraunhofer FOKUS
% $Revision: 1.0.0.0 $ $Date: 2005/11/10 20:11:50 $
% $Revision: 1.0.0.0 $ $Date: 2006/07/10 20:11:50 $
blkStruct.Name = 'MIL Test Blockset';
% The function that is called when
% the user double-clicks on this icon.
blkStruct.OpenFcn = 'MIL_Test';
blkStruct.MaskInitialization = '';
% Define the library list for the Simulink Library browser.
% Return the name of the library model and the name for it
Browser(1).Library = 'MIL_Test';
Browser(1).Name = 'MIL Test Blockset';
%Browser(1).IsFlat = 1;% Is this library "flat" (i.e. no subsystems)?
blkStruct.Browser = Browser;
% End of slblocks.m
|
{"author": "Sable", "repo": "mcbench-benchmarks", "sha": "ba13b2f0296ef49491b95e3f984c7c41fccdb6d8", "save_path": "github-repos/MATLAB/Sable-mcbench-benchmarks", "path": "github-repos/MATLAB/Sable-mcbench-benchmarks/mcbench-benchmarks-ba13b2f0296ef49491b95e3f984c7c41fccdb6d8/30826-model-in-the-loop-for-embedded-system-test-milest-preliminary-version/Transformation of SUT to MiLEST model/slblocks.m"}
|
import os
from os.path import join
import re
import numpy as np
import time as time
from scipy.io import FortranFile
import ctypes as c
import struct
class ReadHaloMergerTree:
#low precision Treebricks (mostly HAGN)
def __init__(self,file_path=None):
"""
Make a treebricks dictionary out of file
"""
self.file_path = file_path
self.merger_tree = None
self.read_data()
def read_data(self):
t0 = time.time()
"""
header stuff
"""
f = FortranFile(self.file_path, 'r')
nsteps = f.read_record('i')
nsteps.tolist()
halos = f.read_record('i')
aexp = f.read_record('f')
omega_t = f.read_record('f')
age_univ = f.read_record('f')
nh_old = halos[::2]
nsubh_old = halos[1::2]
self.merger_tree = {}
self.merger_tree['nsteps'] = nsteps
self.merger_tree['aexp'] = aexp
self.merger_tree['omega_t'] = omega_t
self.merger_tree['age_univ'] = age_univ
self.merger_tree['nh_old'] = nh_old
self.merger_tree['nsub_old'] = nsubh_old
nhaloes = nh_old + nsubh_old
self.merger_tree['nhaloes'] = nhaloes
print('nsteps:',nsteps,
'aexp:', aexp,
'omega_t:', omega_t,
'age:', age_univ,
'nsub, nhost:',halos,
'nhaloes:', nhaloes)
#def read_halo():
t1 = time.time()
for i in range(int(nsteps)):
ts = str(i)
halo_key = 'halos_ts' + ts
# this will hold dictionaries for all halo information at this time step
self.merger_tree[halo_key] = []
nhalos = nh_old[i] + nsubh_old[i]
for _ in range(nhalos):  # the loop variable is unused; avoid shadowing the outer i
halo_dict = {}
#read halo stuff
my_number = f.read_record('i')
halo_dict['my_number'] = my_number.tolist()
bush_id = f.read_record('i')
halo_dict['bush_ID'] = bush_id
st = f.read_record('i')
halo_dict['st'] = st
halo_stuff = f.read_record('i')
halo_stuff.tolist()
#print(halo_stuff)
level, hosthalo, hostsub, nbsub, nextsub = halo_stuff[0],halo_stuff[1],halo_stuff[2],halo_stuff[3],halo_stuff[4]
halo_dict['level'] = level
halo_dict['host_halo'] = hosthalo
halo_dict['host_sub'] = hostsub
halo_dict['nbsub'] = nbsub
halo_dict['nextsub'] = nextsub
mass = f.read_record('f') #d for NH
halo_dict['mass'] = mass.tolist()
#print(mass)
macc = f.read_record('f') #d for NH
halo_dict['macc'] = macc.tolist()
#print(macc)
p = f.read_record('f')
p = p.tolist()
py,px,pz = p[0],p[1],p[2]
halo_dict['px'] = px
halo_dict['py'] = py
halo_dict['pz'] = pz
#print(p)
v = f.read_record('f')
v = v.tolist()
vx,vy,vz = v[0],v[1],v[2]
halo_dict['vx'] = vx
halo_dict['vy'] = vy
halo_dict['vz'] = vz
#print(v)
L = f.read_record('f')
L = L.tolist()
Lx,Ly,Lz = L[0],L[1],L[2]
halo_dict['Lx'] = Lx
halo_dict['Ly'] = Ly
halo_dict['Lz'] = Lz
#print(L)
shape = f.read_record('f')
shape = shape.tolist()
rmax,a,b,c = shape[0],shape[1],shape[2],shape[3]
halo_dict['rmax'] = rmax
halo_dict['a'] = a
halo_dict['b'] = b
halo_dict['c'] = c
#print(shape)
energy = f.read_record('f')
energy = energy.tolist()
ek,ep,et = energy[0],energy[1],energy[2]
halo_dict['ek'] = ek
halo_dict['ep'] = ep
halo_dict['et'] = et
#print(energy)
spin = f.read_record('f')
halo_dict['spin'] = spin.tolist()
#print('Spin',spin)
nb_fathers = f.read_record('i')
#print('nb_fathers',nb_fathers)
halo_dict['nb_fathers'] = nb_fathers
if nb_fathers != 0:
list_fathers = f.read_record('i')
#print('list_fathers', list_fathers)
halo_dict['list_fathers'] = list_fathers
mass_fathers = f.read_record('f')
#print('mass fathers', mass_fathers)
halo_dict['mass_fathers'] = mass_fathers
nb_sons = f.read_record('i')
#print('nb sons',nb_sons)
halo_dict['nb_sons'] = nb_sons
if nb_sons != 0:
list_sons = f.read_record('i')
#print('list_sons', list_sons)
halo_dict['list_sons'] = list_sons
virial = f.read_record('f')
virial = virial.tolist()
rvir,mvir,tvir,cvel = virial[0],virial[1],virial[2],virial[3]
#print('Virial',virial)
halo_dict['rvir'] = rvir
halo_dict['mvir'] = mvir
halo_dict['tvir'] = tvir
halo_dict['cvel'] = cvel
halo_profile = f.read_record('f')
halo_profile = halo_profile.tolist()
rho_0, r_c = halo_profile[0],halo_profile[1]
#print('halo profile',halo_profile)
halo_dict['rho_0'] = rho_0
halo_dict['r_c'] = r_c
self.merger_tree[halo_key].append(halo_dict)
print(f'Done reading halos in time step {ts}.')
t2 = time.time()
print('Reading haloes took {:0.2f} secs.'.format(t2-t1))
print('Total time was {:0.2f} secs.'.format(t2-t0))
return self.merger_tree
class ReadGalMergerTree:
#low precision Treebricks (mostly HAGN)
def __init__(self,file_path=None):
"""
Make a treebricks dictionary out of file
"""
self.file_path = file_path
self.merger_tree = None
self.read_data()
def read_data(self):
t0 = time.time()
"""
header stuff
"""
f = FortranFile(self.file_path, 'r')
nsteps = f.read_record('i')
nsteps.tolist()
nbodies = f.read_record('i')
aexp = f.read_reals('d')
omega_t = f.read_record('d')
age_univ = f.read_record('d')
ngal_old = nbodies[::2]
nsubgal_old = nbodies[1::2]
self.merger_tree = {}
self.merger_tree['nsteps'] = nsteps
self.merger_tree['aexp'] = aexp
self.merger_tree['omega_t'] = omega_t
self.merger_tree['age_univ'] = age_univ
self.merger_tree['ngal_old'] = ngal_old
self.merger_tree['nsubgal_old'] = nsubgal_old
ngals = ngal_old + nsubgal_old
self.merger_tree['ngals'] = ngals
print('nsteps:',nsteps,
'aexp:', aexp,
'omega_t:', omega_t,
'age:', age_univ,
'nsubgals, ngals:',nbodies,
'ngals:', ngals)
#def read_gal():
t1 = time.time()
for i in range(int(nsteps)):
ts = str(i)
gal_key = 'gals_ts' + ts
# this will hold dictionaries for all galaxy information at this time step
self.merger_tree[gal_key] = []
ngals = ngal_old[i] + nsubgal_old[i]
for _ in range(ngals):  # the loop variable is unused; avoid shadowing the outer i
gal_dict = {}
#read gal stuff
my_number = f.read_record('i')
gal_dict['my_number'] = my_number.tolist()
bush_id = f.read_record('i')
gal_dict['bush_ID'] = bush_id
st = f.read_record('i')
gal_dict['st'] = st
gal_stuff = f.read_record('i')
gal_stuff.tolist()
#print(halo_stuff)
level,host_gal,host_subgal,nchild,nextsub = gal_stuff[0],gal_stuff[1],gal_stuff[2],gal_stuff[3],gal_stuff[4]
gal_dict['level'] = level
gal_dict['host_gal'] = host_gal
gal_dict['host_subgal'] = host_subgal
gal_dict['nchild'] = nchild
gal_dict['nextsub'] = nextsub
mass = f.read_record('d') #d for NH
gal_dict['mass'] = mass.tolist()
macc = f.read_record('d') #d for NH
gal_dict['macc'] = macc.tolist()
#print(macc)
p = f.read_record('d')
p = p.tolist()
py,px,pz = p[0],p[1],p[2]
gal_dict['px'] = px
gal_dict['py'] = py
gal_dict['pz'] = pz
v = f.read_record('d')
#print('v',v)
v = v.tolist()
vx,vy,vz = v[0],v[1],v[2]
gal_dict['vx'] = vx
gal_dict['vy'] = vy
gal_dict['vz'] = vz
L = f.read_record('d')
#print('L',L)
L = L.tolist()
Lx,Ly,Lz = L[0],L[1],L[2]
gal_dict['Lx'] = Lx
gal_dict['Ly'] = Ly
gal_dict['Lz'] = Lz
shape = f.read_record('d')
#print('shape',shape)
shape = shape.tolist()
rmax,a,b,c = shape[0],shape[1],shape[2],shape[3]
gal_dict['rmax'] = rmax
gal_dict['a'] = a
gal_dict['b'] = b
gal_dict['c'] = c
energy = f.read_record('d')
#print('energy',energy)
energy = energy.tolist()
ek,ep,et = energy[0],energy[1],energy[2]
gal_dict['ek'] = ek
gal_dict['ep'] = ep
gal_dict['et'] = et
spin = f.read_record('d')
#print('spin',spin)
gal_dict['spin'] = spin.tolist()
nb_fathers = f.read_record('i')
#print('nb_fathers',nb_fathers)
gal_dict['nb_fathers'] = nb_fathers
if nb_fathers != 0:
list_fathers = f.read_record('i')
#print('list_fathers', list_fathers)
gal_dict['list_fathers'] = list_fathers
mass_fathers = f.read_record('d')
#print('mass fathers', mass_fathers)
gal_dict['mass_fathers'] = mass_fathers
nb_sons = f.read_record('i')
#print('nb sons',nb_sons)
gal_dict['nb_sons'] = nb_sons
if nb_sons != 0:
list_sons = f.read_record('i')
#print('list_sons', list_sons)
gal_dict['list_sons'] = list_sons
virial = f.read_record('d')
virial = virial.tolist()
rvir,mvir,tvir,cvel = virial[0],virial[1],virial[2],virial[3]
gal_dict['rvir'] = rvir
gal_dict['mvir'] = mvir
gal_dict['tvir'] = tvir
gal_dict['cvel'] = cvel
halo_profile = f.read_record('d')
halo_profile = halo_profile.tolist()
rho_0, r_c = halo_profile[0],halo_profile[1]
#print('halo profile',halo_profile)
gal_dict['rho_0'] = rho_0
gal_dict['r_c'] = r_c
self.merger_tree[gal_key].append(gal_dict)
print(f'Done reading galaxies in time step {ts}.')
t2 = time.time()
print('Reading galaxies took {:0.2f} secs.'.format(t2-t1))
print('Total time was {:0.2f} secs.'.format(t2-t0))
return self.merger_tree
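# --- Usage sketch (hypothetical file path) ---
# tree = ReadHaloMergerTree('tree_bricks043').merger_tree
# step0 = tree['halos_ts0']             # list of per-halo dicts at time step 0
# masses = [h['mass'] for h in step0]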
|
{"hexsha": "a9f107ec7fb9edccaea79dc2210b10b9f0537f95", "size": 13125, "ext": "py", "lang": "Python", "max_stars_repo_path": "tree_reader/__init__.py", "max_stars_repo_name": "janvimadhani/satellite_planes", "max_stars_repo_head_hexsha": "cb80d3c3a840e681ff26c78f40218ddbbec985b0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tree_reader/__init__.py", "max_issues_repo_name": "janvimadhani/satellite_planes", "max_issues_repo_head_hexsha": "cb80d3c3a840e681ff26c78f40218ddbbec985b0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tree_reader/__init__.py", "max_forks_repo_name": "janvimadhani/satellite_planes", "max_forks_repo_head_hexsha": "cb80d3c3a840e681ff26c78f40218ddbbec985b0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.3121827411, "max_line_length": 129, "alphanum_fraction": 0.4345904762, "include": true, "reason": "import numpy,from scipy", "num_tokens": 3172}
|
#!/usr/bin/env python
import sys
# 2 inputs and 2 outputs
class MIMO22:
def __init__(self, T=1e3, **kwargs):
a = 1
b = 0.1
c = 0.1
d = 1
self.A = np.array([[-a, -b],
[-c, -d]], dtype=np.float)
self.B = np.array([[a, b],
[c, d]], dtype=np.float)
self.C = np.array([[1, 0],
[0, 1]], dtype=np.float)
self.T = T
self.x = np.zeros((self.A.shape[1],1), dtype=np.float)
self.u = np.zeros((self.B.shape[1],1), dtype=np.float)
self.y = np.zeros((self.C.shape[0],1), dtype=np.float)
def output(self, t, x, u=0):
dx = self.A.dot(x.reshape(self.x.shape)) + self.B.dot(u.reshape(self.u.shape))
return dx
# 3 inputs and 3 outputs
class MIMO33:
def __init__(self, T=1e3, **kwargs):
a = 1
b = 0.1
c = 0.1
d = 0.1
e = 1
f = 0.1
g = 0.1
h = 0.1
i = 1
self.A = np.array([[-a, -b, -c],
[-d, -e, -f],
[-g, -h, -i]], dtype=np.float)
self.B = np.array([[a, b, c],
[d, e, f],
[g, h, i]], dtype=np.float)
self.C = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]], dtype=np.float)
self.T = T
self.x = np.zeros((self.A.shape[1],1), dtype=np.float)
self.u = np.zeros((self.B.shape[1],1), dtype=np.float)
self.y = np.zeros((self.C.shape[0],1), dtype=np.float)
def output(self, t, x, u=0):
dx = self.A.dot(x.reshape(self.x.shape)) + self.B.dot(u.reshape(self.u.shape))
return dx
if __name__ == '__main__':
# Try importing predictivecontrol package
try:
from predictivecontrol import MPC
except ImportError:
print "\nPredictive control, scipy or numpy packages not installed."
print "To install, go to the root folder of this repository and run \"pip install -e .\""
print "The predictivecontrol package will automatically install scipy and numpy.\n"
sys.exit(0)
# Try importing the ODE solver
try:
from scipy.integrate import ode
except ImportError:
print "\nThis simulation depends on the ODE solver from the scipy package."
print "To install, run \"pip install -U scipy\"\n"
sys.exit(0)
# Try importing numpy
try:
import numpy as np
except ImportError:
print "\nThis simulation depends on the numpy package."
print "To install, run \"pip install -U numpy\"\n"
sys.exit(0)
# Instantiate system model (named 'plant' to avoid shadowing the sys module)
plant = MIMO22(T=1e-1)
# Instantiate MPC with the MIMO model
mpc = MPC(plant.A, plant.B, plant.C, T=plant.T)
mpc.set_predict_horizon(50) # Set prediction horizon
mpc.set_control_horizon(2) # Set control horizon
mpc.dumin, mpc.dumax = np.array([-10,-10]), np.array([10,10]) # Set restrictions to actuator variation and amplitude
mpc.umin, mpc.umax = np.array([-100,-100]), np.array([100,500])
mpc.set_reference(np.array([10,100])) # Set reference values
mpc.set_output_weights(np.array([0,10])) # Set output weigths
# Setup Nonstiff Ordinary Diff. Equation (ODE) solver (equivalent to matlab's ODE45)
dt = 1e-3 # ODE derivation time
solv = ode(plant.output).set_integrator('dopri5', method='rtol')
# Run for some seconds
timeout = 10
x = np.zeros((mpc.A.shape[0],2))
u = np.zeros((mpc.B.shape[1],2))
y = np.zeros((mpc.C.shape[0],2))
while True:
# Run MPC (will update controlled input u)
mpc.run()
# Solve ODE (simulate sys based on model)
solv.set_initial_value(mpc.x[:,-1]) # Current initial value is last state
solv.set_f_params(mpc.u[:,-1]) # Apply control input into system
while solv.successful() and solv.t < mpc.T:
solv.integrate(solv.t+dt)
# Update states (equivalent to sensing)
# Number of states kept by MPC are bound by prediction horizon, to avoid memory issues on continuous use
mpc.x = np.roll(mpc.x[-mpc.x.shape[0]:,:], -1)
mpc.x[:,-1] = solv.y
mpc.y[:,-1] = mpc.C.dot(mpc.x[:,-1].reshape(mpc.x[:,-1].shape))
x = np.c_[x, mpc.x[:,-1]]
u = np.c_[u, mpc.u[:,-1]]
y = np.c_[y, mpc.y[:,-1]]
# Append time
mpc.t = np.append(mpc.t, mpc.t[-1]+mpc.T)
if mpc.t[-1] >= timeout: # If timeout, break loop
break
# Print results
print "\nSimulation finished\n"
print "Setpoints:"
for i in range(len(y[:,-1])):
print "\tR%d: \t%.2f" % (i+1, mpc.get_reference()[i])
print "\nFinal states at time %.2f seconds:" % mpc.t[-1]
for i in range(len(x[:,-1])):
print "\tx%d: \t%.2f" % (i+1, x[i,-1])
print "\nOutputs at time %.2f seconds:" % mpc.t[-1]
for i in range(len(y[:,-1])):
print "\ty%d: \t%.2f" % (i+1, y[i,-1])
print "\nSteady-state error:"
for i in range(len(y[:,-1])):
print "\ty%d: \t%.2f" % (i+1, mpc.get_reference()[i]-y[i,-1])
# Plot results
try:
import matplotlib.pyplot as plt
# Plot states
plt.figure()
for k in range(x.shape[0]):
plt.plot(mpc.t, x[k,:], lw=2.0)
plt.xlabel('Time (s)')
plt.ylabel('x')
plt.title('States')
legend = []
for k in range(0,x.shape[0]):
legend.append('x%d' % (k+1))
plt.legend(legend)
plt.grid()
# Plot inputs
plt.figure()
for k in range(u.shape[0]):
plt.plot(mpc.t, u[k,:], lw=2.0)
plt.xlabel('Time (s)')
plt.ylabel('u')
plt.title('Inputs')
legend = [0 for _ in range(u.shape[0]*2)]
for k in range(u.shape[0]):
legend[k] = 'u%d' % (k+1)
plt.legend(legend)
plt.grid()
# Plot outputs
plt.figure()
for k in range(y.shape[0]):
ax = plt.plot(mpc.t, np.ones(mpc.t.shape)*mpc.get_reference()[k], '--', lw=2.0)
plt.plot(mpc.t, y[k,:], color=ax[0].get_color(), lw=2.0)
plt.xlabel('Time (s)')
plt.ylabel('y')
plt.title('Outputs')
legend = []
for k in range(0,y.shape[0]):
legend.append('Reference %d' % (k+1))
legend.append('y%d' % (k+1))
plt.legend(legend)
plt.grid()
# Show figures
plt.show()
except ImportError:
pass
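# Usage sketch: run directly (Python 2; needs predictivecontrol, scipy, numpy,
# and optionally matplotlib for the plots):
#   python examples/toy_mimo.py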
|
{"hexsha": "2eee913355ca85a3faa3fc9a77a0e95ba618d3e9", "size": 6710, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/toy_mimo.py", "max_stars_repo_name": "rgmaidana/predictiveControl", "max_stars_repo_head_hexsha": "8a9ba62fee4e19ce8bc55ef28847b2d1ae87f9d4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-07-26T22:10:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T10:16:57.000Z", "max_issues_repo_path": "examples/toy_mimo.py", "max_issues_repo_name": "passion4energy/predictiveControl", "max_issues_repo_head_hexsha": "8a9ba62fee4e19ce8bc55ef28847b2d1ae87f9d4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/toy_mimo.py", "max_forks_repo_name": "passion4energy/predictiveControl", "max_forks_repo_head_hexsha": "8a9ba62fee4e19ce8bc55ef28847b2d1ae87f9d4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-09-24T16:06:01.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-10T20:30:29.000Z", "avg_line_length": 33.55, "max_line_length": 123, "alphanum_fraction": 0.5105812221, "include": true, "reason": "import numpy,from scipy", "num_tokens": 1952}
|
program use_ext
use constants, only: get_pi
implicit none (type, external)
print *,'using submodule, pi=',get_pi()
end program
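! Build sketch (hypothetical file names): the 'constants' module and its
! submodule must be compiled before this program, e.g.
!   gfortran constants.f90 constants_smod.f90 use_ext.f90 -o use_ext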
|
{"hexsha": "4ec0645b36b56f9b079492329d7b55cb88fa845b", "size": 130, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "submodule/use_ext.f90", "max_stars_repo_name": "supershushu/fortran2018-examples", "max_stars_repo_head_hexsha": "f0dc03b80326bc7c06fa31945b6e7406a60c1fa8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 305, "max_stars_repo_stars_event_min_datetime": "2017-12-07T12:47:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T12:03:16.000Z", "max_issues_repo_path": "src/submodule/use_ext.f90", "max_issues_repo_name": "scivision/fortran2015-examples", "max_issues_repo_head_hexsha": "23fc7090997ecb4b838ebc1f09b86e2872d7141c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2018-11-24T15:45:53.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-06T08:10:43.000Z", "max_forks_repo_path": "src/submodule/use_ext.f90", "max_forks_repo_name": "scivision/fortran2015-examples", "max_forks_repo_head_hexsha": "23fc7090997ecb4b838ebc1f09b86e2872d7141c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 60, "max_forks_repo_forks_event_min_datetime": "2017-11-28T07:56:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-20T01:37:53.000Z", "avg_line_length": 14.4444444444, "max_line_length": 39, "alphanum_fraction": 0.7461538462, "num_tokens": 32}
|
# This file was generated by the Julia Swagger Code Generator
# Do not modify this file directly. Modify the swagger specification instead.
struct PrivateEndpointConnectionsApi <: SwaggerApi
client::Swagger.Client
end
"""
Deletes the specified private endpoint connection associated with the storage account.
Param: resourceGroupName::String (required)
Param: accountName::String (required)
Param: api_version::String (required)
Param: subscriptionId::String (required)
Param: privateEndpointConnectionName::String (required)
Return: Nothing
"""
function _swaggerinternal_privateEndpointConnectionsDelete(_api::PrivateEndpointConnectionsApi, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String, privateEndpointConnectionName::String; _mediaType=nothing)
Swagger.validate_param("resourceGroupName", "privateEndpointConnectionsDelete", :maxLength, resourceGroupName, 90)
Swagger.validate_param("resourceGroupName", "privateEndpointConnectionsDelete", :minLength, resourceGroupName, 1)
Swagger.validate_param("accountName", "privateEndpointConnectionsDelete", :maxLength, accountName, 24)
Swagger.validate_param("accountName", "privateEndpointConnectionsDelete", :minLength, accountName, 3)
Swagger.validate_param("api_version", "privateEndpointConnectionsDelete", :minLength, api_version, 1)
Swagger.validate_param("subscriptionId", "privateEndpointConnectionsDelete", :minLength, subscriptionId, 1)
_ctx = Swagger.Ctx(_api.client, "DELETE", Nothing, "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/privateEndpointConnections/{privateEndpointConnectionName}", ["azure_auth"])
Swagger.set_param(_ctx.path, "resourceGroupName", resourceGroupName) # type String
Swagger.set_param(_ctx.path, "accountName", accountName) # type String
Swagger.set_param(_ctx.path, "subscriptionId", subscriptionId) # type String
Swagger.set_param(_ctx.path, "privateEndpointConnectionName", privateEndpointConnectionName) # type String
Swagger.set_param(_ctx.query, "api-version", api_version) # type String
Swagger.set_header_accept(_ctx, ["application/json"])
Swagger.set_header_content_type(_ctx, (_mediaType === nothing) ? ["application/json"] : [_mediaType])
return _ctx
end
function privateEndpointConnectionsDelete(_api::PrivateEndpointConnectionsApi, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String, privateEndpointConnectionName::String; _mediaType=nothing)
_ctx = _swaggerinternal_privateEndpointConnectionsDelete(_api, resourceGroupName, accountName, api_version, subscriptionId, privateEndpointConnectionName; _mediaType=_mediaType)
Swagger.exec(_ctx)
end
function privateEndpointConnectionsDelete(_api::PrivateEndpointConnectionsApi, response_stream::Channel, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String, privateEndpointConnectionName::String; _mediaType=nothing)
_ctx = _swaggerinternal_privateEndpointConnectionsDelete(_api, resourceGroupName, accountName, api_version, subscriptionId, privateEndpointConnectionName; _mediaType=_mediaType)
Swagger.exec(_ctx, response_stream)
end
"""
Gets the specified private endpoint connection associated with the storage account.
Param: resourceGroupName::String (required)
Param: accountName::String (required)
Param: api_version::String (required)
Param: subscriptionId::String (required)
Param: privateEndpointConnectionName::String (required)
Return: PrivateEndpointConnection
"""
function _swaggerinternal_privateEndpointConnectionsGet(_api::PrivateEndpointConnectionsApi, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String, privateEndpointConnectionName::String; _mediaType=nothing)
Swagger.validate_param("resourceGroupName", "privateEndpointConnectionsGet", :maxLength, resourceGroupName, 90)
Swagger.validate_param("resourceGroupName", "privateEndpointConnectionsGet", :minLength, resourceGroupName, 1)
Swagger.validate_param("accountName", "privateEndpointConnectionsGet", :maxLength, accountName, 24)
Swagger.validate_param("accountName", "privateEndpointConnectionsGet", :minLength, accountName, 3)
Swagger.validate_param("api_version", "privateEndpointConnectionsGet", :minLength, api_version, 1)
Swagger.validate_param("subscriptionId", "privateEndpointConnectionsGet", :minLength, subscriptionId, 1)
_ctx = Swagger.Ctx(_api.client, "GET", PrivateEndpointConnection, "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/privateEndpointConnections/{privateEndpointConnectionName}", ["azure_auth"])
Swagger.set_param(_ctx.path, "resourceGroupName", resourceGroupName) # type String
Swagger.set_param(_ctx.path, "accountName", accountName) # type String
Swagger.set_param(_ctx.path, "subscriptionId", subscriptionId) # type String
Swagger.set_param(_ctx.path, "privateEndpointConnectionName", privateEndpointConnectionName) # type String
Swagger.set_param(_ctx.query, "api-version", api_version) # type String
Swagger.set_header_accept(_ctx, ["application/json"])
Swagger.set_header_content_type(_ctx, (_mediaType === nothing) ? ["application/json"] : [_mediaType])
return _ctx
end
function privateEndpointConnectionsGet(_api::PrivateEndpointConnectionsApi, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String, privateEndpointConnectionName::String; _mediaType=nothing)
_ctx = _swaggerinternal_privateEndpointConnectionsGet(_api, resourceGroupName, accountName, api_version, subscriptionId, privateEndpointConnectionName; _mediaType=_mediaType)
Swagger.exec(_ctx)
end
function privateEndpointConnectionsGet(_api::PrivateEndpointConnectionsApi, response_stream::Channel, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String, privateEndpointConnectionName::String; _mediaType=nothing)
_ctx = _swaggerinternal_privateEndpointConnectionsGet(_api, resourceGroupName, accountName, api_version, subscriptionId, privateEndpointConnectionName; _mediaType=_mediaType)
Swagger.exec(_ctx, response_stream)
end
"""
List all the private endpoint connections associated with the storage account.
Param: resourceGroupName::String (required)
Param: accountName::String (required)
Param: api_version::String (required)
Param: subscriptionId::String (required)
Return: PrivateEndpointConnectionListResult
"""
function _swaggerinternal_privateEndpointConnectionsList(_api::PrivateEndpointConnectionsApi, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String; _mediaType=nothing)
Swagger.validate_param("resourceGroupName", "privateEndpointConnectionsList", :maxLength, resourceGroupName, 90)
Swagger.validate_param("resourceGroupName", "privateEndpointConnectionsList", :minLength, resourceGroupName, 1)
Swagger.validate_param("accountName", "privateEndpointConnectionsList", :maxLength, accountName, 24)
Swagger.validate_param("accountName", "privateEndpointConnectionsList", :minLength, accountName, 3)
Swagger.validate_param("api_version", "privateEndpointConnectionsList", :minLength, api_version, 1)
Swagger.validate_param("subscriptionId", "privateEndpointConnectionsList", :minLength, subscriptionId, 1)
_ctx = Swagger.Ctx(_api.client, "GET", PrivateEndpointConnectionListResult, "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/privateEndpointConnections", ["azure_auth"])
Swagger.set_param(_ctx.path, "resourceGroupName", resourceGroupName) # type String
Swagger.set_param(_ctx.path, "accountName", accountName) # type String
Swagger.set_param(_ctx.path, "subscriptionId", subscriptionId) # type String
Swagger.set_param(_ctx.query, "api-version", api_version) # type String
Swagger.set_header_accept(_ctx, ["application/json"])
Swagger.set_header_content_type(_ctx, (_mediaType === nothing) ? ["application/json"] : [_mediaType])
return _ctx
end
function privateEndpointConnectionsList(_api::PrivateEndpointConnectionsApi, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String; _mediaType=nothing)
_ctx = _swaggerinternal_privateEndpointConnectionsList(_api, resourceGroupName, accountName, api_version, subscriptionId; _mediaType=_mediaType)
Swagger.exec(_ctx)
end
function privateEndpointConnectionsList(_api::PrivateEndpointConnectionsApi, response_stream::Channel, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String; _mediaType=nothing)
_ctx = _swaggerinternal_privateEndpointConnectionsList(_api, resourceGroupName, accountName, api_version, subscriptionId; _mediaType=_mediaType)
Swagger.exec(_ctx, response_stream)
end
"""
Update the state of specified private endpoint connection associated with the storage account.
Param: resourceGroupName::String (required)
Param: accountName::String (required)
Param: api_version::String (required)
Param: subscriptionId::String (required)
Param: privateEndpointConnectionName::String (required)
Param: properties::PrivateEndpointConnection (required)
Return: PrivateEndpointConnection
"""
function _swaggerinternal_privateEndpointConnectionsPut(_api::PrivateEndpointConnectionsApi, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String, privateEndpointConnectionName::String, properties; _mediaType=nothing)
Swagger.validate_param("resourceGroupName", "privateEndpointConnectionsPut", :maxLength, resourceGroupName, 90)
Swagger.validate_param("resourceGroupName", "privateEndpointConnectionsPut", :minLength, resourceGroupName, 1)
Swagger.validate_param("accountName", "privateEndpointConnectionsPut", :maxLength, accountName, 24)
Swagger.validate_param("accountName", "privateEndpointConnectionsPut", :minLength, accountName, 3)
Swagger.validate_param("api_version", "privateEndpointConnectionsPut", :minLength, api_version, 1)
Swagger.validate_param("subscriptionId", "privateEndpointConnectionsPut", :minLength, subscriptionId, 1)
_ctx = Swagger.Ctx(_api.client, "PUT", PrivateEndpointConnection, "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/privateEndpointConnections/{privateEndpointConnectionName}", ["azure_auth"], properties)
Swagger.set_param(_ctx.path, "resourceGroupName", resourceGroupName) # type String
Swagger.set_param(_ctx.path, "accountName", accountName) # type String
Swagger.set_param(_ctx.path, "subscriptionId", subscriptionId) # type String
Swagger.set_param(_ctx.path, "privateEndpointConnectionName", privateEndpointConnectionName) # type String
Swagger.set_param(_ctx.query, "api-version", api_version) # type String
Swagger.set_header_accept(_ctx, ["application/json"])
Swagger.set_header_content_type(_ctx, (_mediaType === nothing) ? ["application/json"] : [_mediaType])
return _ctx
end
function privateEndpointConnectionsPut(_api::PrivateEndpointConnectionsApi, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String, privateEndpointConnectionName::String, properties; _mediaType=nothing)
_ctx = _swaggerinternal_privateEndpointConnectionsPut(_api, resourceGroupName, accountName, api_version, subscriptionId, privateEndpointConnectionName, properties; _mediaType=_mediaType)
Swagger.exec(_ctx)
end
function privateEndpointConnectionsPut(_api::PrivateEndpointConnectionsApi, response_stream::Channel, resourceGroupName::String, accountName::String, api_version::String, subscriptionId::String, privateEndpointConnectionName::String, properties; _mediaType=nothing)
_ctx = _swaggerinternal_privateEndpointConnectionsPut(_api, resourceGroupName, accountName, api_version, subscriptionId, privateEndpointConnectionName, properties; _mediaType=_mediaType)
Swagger.exec(_ctx, response_stream)
end
export privateEndpointConnectionsDelete, privateEndpointConnectionsGet, privateEndpointConnectionsList, privateEndpointConnectionsPut
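# A hedged usage sketch (not part of the generated client). The endpoint URL,
# resource names, and subscription id below are hypothetical placeholders, and
# the `Swagger.Client` construction is assumed per Swagger.jl conventions.
#
#   client = Swagger.Client("https://management.azure.com")
#   api = PrivateEndpointConnectionsApi(client)
#   conns = privateEndpointConnectionsList(api, "my-resource-group",
#       "mystorageacct", "2019-06-01", "<subscription-id>")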
|
{"hexsha": "8f0297f87910298b2117d302ed1e1905f00d244e", "size": 12343, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/Storage/StorageManagementClient/api_PrivateEndpointConnectionsApi.jl", "max_stars_repo_name": "JuliaComputing/Azure.jl", "max_stars_repo_head_hexsha": "0e2b55e7602352d86bdf3579e547a74a9b5f44f8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2019-12-18T16:23:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T07:39:13.000Z", "max_issues_repo_path": "src/Storage/StorageManagementClient/api_PrivateEndpointConnectionsApi.jl", "max_issues_repo_name": "JuliaComputing/Azure.jl", "max_issues_repo_head_hexsha": "0e2b55e7602352d86bdf3579e547a74a9b5f44f8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-05-08T19:57:11.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-11T11:20:41.000Z", "max_forks_repo_path": "src/Storage/StorageManagementClient/api_PrivateEndpointConnectionsApi.jl", "max_forks_repo_name": "JuliaComputing/Azure.jl", "max_forks_repo_head_hexsha": "0e2b55e7602352d86bdf3579e547a74a9b5f44f8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-05-07T10:26:07.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-11T13:04:47.000Z", "avg_line_length": 70.1306818182, "max_line_length": 284, "alphanum_fraction": 0.8182775662, "num_tokens": 2621}
|
[STATEMENT]
lemma lq_prod: "u \<le>p v\<cdot>u \<Longrightarrow> u \<le>p w \<Longrightarrow> u\<inverse>\<^sup>>(v\<cdot>u)\<cdot>u\<inverse>\<^sup>>w = u\<inverse>\<^sup>>(v\<cdot>w)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>u \<le>p v \<cdot> u; u \<le>p w\<rbrakk> \<Longrightarrow> u\<inverse>\<^sup>>(v \<cdot> u) \<cdot> u\<inverse>\<^sup>>w = u\<inverse>\<^sup>>(v \<cdot> w)
[PROOF STEP]
using lq_reassoc[of u "v \<cdot> u" "u\<inverse>\<^sup>>w"] lq_rq_reassoc_suf[of u w "v \<cdot> u", unfolded rq_triv[of v u]]
[PROOF STATE]
proof (prove)
using this:
u \<le>p v \<cdot> u \<Longrightarrow> u\<inverse>\<^sup>>(v \<cdot> u) \<cdot> u\<inverse>\<^sup>>w = u\<inverse>\<^sup>>((v \<cdot> u) \<cdot> u\<inverse>\<^sup>>w)
\<lbrakk>u \<le>p w; u \<le>s v \<cdot> u\<rbrakk> \<Longrightarrow> (v \<cdot> u) \<cdot> u\<inverse>\<^sup>>w = v \<cdot> w
goal (1 subgoal):
1. \<lbrakk>u \<le>p v \<cdot> u; u \<le>p w\<rbrakk> \<Longrightarrow> u\<inverse>\<^sup>>(v \<cdot> u) \<cdot> u\<inverse>\<^sup>>w = u\<inverse>\<^sup>>(v \<cdot> w)
[PROOF STEP]
by (simp add: suf_def)
|
{"llama_tokens": 476, "file": "Combinatorics_Words_CoWBasic", "length": 2}
|
"""
Constructing and loading dictionaries
"""
import cPickle as pkl
import numpy
from collections import OrderedDict
from scipy.io import loadmat
import os
import nltk
from nltk.tokenize import TweetTokenizer
DATA_DIR = '/home/shunan/Code/Data'
def build_dictionary(text):
"""
Build a dictionary
text: list of sentences (pre-tokenized)
"""
wordcount = OrderedDict()
for cc in text:
words = cc.split()
for w in words:
if w not in wordcount:
wordcount[w] = 0
wordcount[w] += 1
words = wordcount.keys()
freqs = wordcount.values()
sorted_idx = numpy.argsort(freqs)[::-1]
worddict = OrderedDict()
    # Words are indexed in decreasing order of frequency; indices 0 and 1 are reserved for <eos> and <unk>.
for idx, sidx in enumerate(sorted_idx):
worddict[words[sidx]] = idx+2 # 0: <eos>, 1: <unk>
return worddict, wordcount
def load_dictionary(loc='/ais/gobi3/u/rkiros/bookgen/book_dictionary_large.pkl'):
"""
Load a dictionary
"""
with open(loc, 'rb') as f:
worddict = pkl.load(f)
return worddict
def save_dictionary(worddict, wordcount, loc):
"""
Save a dictionary to the specified location
"""
with open(loc, 'wb') as f:
pkl.dump(worddict, f)
pkl.dump(wordcount, f)
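def _example_roundtrip():
    """A minimal round-trip sketch (not in the original module) using the
    helpers above; the path 'dict.pkl' is a hypothetical placeholder."""
    sentences = ['the cat sat', 'the dog ran']
    worddict, wordcount = build_dictionary(sentences)
    save_dictionary(worddict, wordcount, 'dict.pkl')
    # load_dictionary reads back only the word-to-index mapping
    assert load_dictionary('dict.pkl') == worddict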
def build_dictionary_imdb():
'''
    Build a dictionary keyed to indices into the word2vec data; the word2vec matrix is re-indexed to match.
'''
# Create the word to index mapping.
word_to_index = dict()
word2vec_dict_file = open(os.path.join(DATA_DIR, 'word2vec/dict.txt'), 'r')
i = 0
line = word2vec_dict_file.readline()
while line != '':
word_to_index[line.strip()] = i
i += 1
line = word2vec_dict_file.readline()
word2vec_dict_file.close()
imdb_data = loadmat(os.path.join(DATA_DIR, 'imdb_sentiment/imdb_sentiment.mat'))
train_data = imdb_data['train_data']
test_data = imdb_data['test_data']
all_text = []
for i in range(len(train_data)):
line = train_data[i][0][0]
tokens = nltk.word_tokenize(line)
s = []
for token in tokens:
if token.lower() in word_to_index:
s.append(token.lower())
all_text.append(' '.join(s))
for i in range(len(test_data)):
        line = test_data[i][0][0]
tokens = nltk.word_tokenize(line)
s = []
for token in tokens:
if token.lower() in word_to_index:
s.append(token.lower())
all_text.append(' '.join(s))
worddict, wordcount = build_dictionary(all_text)
# Re-indexing the word2vec vectors.
word2vec = loadmat(os.path.join(DATA_DIR, 'word2vec/GoogleNews-vectors-negative300.mat'))
word2vec = word2vec['vectors']
new_matrix = numpy.random.uniform(-1, 1, (len(wordcount) + 2, 300))
for word in worddict:
old_ind = word_to_index[word]
new_ind = worddict[word]
vec = word2vec[old_ind, :]
new_matrix[new_ind, :] = vec
# Saving the new word2vec matrix.
numpy.save('/home/shunan/Code/skip-thoughts/experiments/skip_thought_word2vec_embeds.npy', new_matrix)
return worddict, wordcount
def build_dictionary_amazon():
'''
Similar to the above, except that this is for the Amazon dataset.
'''
word_to_index = dict()
word2vec_dict_file = open(os.path.join(DATA_DIR, 'word2vec/dict.txt'), 'r')
i = 0
line = word2vec_dict_file.readline()
while line != '':
word_to_index[line.strip()] = i
i += 1
line = word2vec_dict_file.readline()
word2vec_dict_file.close()
with open(os.path.join(DATA_DIR, 'amazon_food/train_data.pkl')) as f:
train_data = pkl.load(f)
train_data = train_data[0]
with open(os.path.join(DATA_DIR, 'amazon_food/test_data.pkl')) as f:
test_data = pkl.load(f)
test_data = test_data[0]
tokenizer = TweetTokenizer()
all_text = []
for sen in train_data:
sen = sen.strip().lower()
tokens = nltk.word_tokenize(' '.join(tokenizer.tokenize(sen)))
s = []
for token in tokens:
if token.lower() in word_to_index:
s.append(token.lower())
all_text.append(' '.join(s))
for sen in test_data:
sen = sen.strip().lower()
tokens = nltk.word_tokenize(' '.join(tokenizer.tokenize(sen)))
s = []
for token in tokens:
if token.lower() in word_to_index:
s.append(token.lower())
all_text.append(' '.join(s))
worddict, wordcount = build_dictionary(all_text)
    print('Number of words: {}'.format(len(wordcount)))
# Re-indexing the word2vec vectors.
word2vec = loadmat(os.path.join(DATA_DIR, 'word2vec/GoogleNews-vectors-negative300.mat'))
word2vec = word2vec['vectors']
new_matrix = numpy.random.uniform(-1, 1, (len(wordcount) + 2, 300))
for word in worddict:
old_ind = word_to_index[word]
new_ind = worddict[word]
vec = word2vec[old_ind, :]
new_matrix[new_ind, :] = vec
# Saving the new word2vec matrix.
numpy.save('/home/shunan/Code/skip-thoughts/experiments/amazon/word2vec_embeds.npy', new_matrix)
return worddict, wordcount
|
{"hexsha": "b3359eae0b7b48a494ead386c14950780cd98870", "size": 5273, "ext": "py", "lang": "Python", "max_stars_repo_path": "training/vocab.py", "max_stars_repo_name": "zashuna/skip-thoughts", "max_stars_repo_head_hexsha": "dec2c97f47d2ad139f5ae8602faca40c81ac096b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "training/vocab.py", "max_issues_repo_name": "zashuna/skip-thoughts", "max_issues_repo_head_hexsha": "dec2c97f47d2ad139f5ae8602faca40c81ac096b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "training/vocab.py", "max_forks_repo_name": "zashuna/skip-thoughts", "max_forks_repo_head_hexsha": "dec2c97f47d2ad139f5ae8602faca40c81ac096b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.6569767442, "max_line_length": 118, "alphanum_fraction": 0.6281054428, "include": true, "reason": "import numpy,from scipy", "num_tokens": 1333}
|
function xn=linEqSolveFirstnRows(A,b,n)
%%LINEQSOLVEFIRSTNROWS This function solves the problem A*x=b for the first
% n rows of x. The matrix A CAN be singular as long as
% a unique solution exists for the first n rows of x.
% This function is useful for finding the position
% estimate in an information filter state even before
% estimates of all of the other target state
% components have become observable. In such an
% instance A is the inverse covariance matrix, x is
% the target state and b is the information state.
%
%INPUTS: A The NXN matrix A in the equation A*x=b, where x is unknown. The
% matrix can be singular, but the first n rows of x should be
% observable.
% b The NX1 column vector b in the equation A*x=b.
% n The number of rows of x, starting from the first row and going
% down, for which one wishes to solve in the equation A*x=b. n must
% be less than or equal to the number of rows in A.
%
%OUTPUTS: xn The first n rows of the column vector of x solved from A*x=b.
% If any of the components of xn are not finite, then the
% problem was not completely observable. Because of how the
% problem is solved, if the kth component is not finite, then
% all subsequent components will not be finite, regardless of
% whether they are observable.
%
%The linear equation is solved by performing a modified qr decomposition on
%the matrix A such that A=Q*R, where Q is an orthogonal matrix and R is a
%LOWER triangular matrix. A standard QR decomposition produces an upper
%triangular matrix. By flipping the rows and columns of A and then
%performing the inverse operations on Q and R, one can get a decomposition
%where R is a lower-triangular matrix. One can then write R*x=Q'*b, since
%the transpose of an orthogonal matrix is its inverse. The first n rows of x
%can then be solved using forward substitution.
%
%The QR decomposition is computed with Matlab's built-in function qr. It is also
%discussed in Chapter 5.2 of [1].
%
%REFERENCES:
%[1] G. H. Golub and C. F. van Loan, Matrix Computations, 4th ed.
% Baltimore, MD: Johns Hopkins University Press, 2013.
%
%June 2014 David F.Crouse, Naval Research Laboratory, Washington D.C.
%(UNCLASSIFIED) DISTRIBUTION STATEMENT A. Approved for public release.
%Perform a lower-triangular qr decomposition.
[Q,R]=qr(rot90(A,2));
Q=rot90(Q,2);
R=rot90(R,2);
%Now, Q*R=A and R is lower-triangular.
b=Q'*b;
%Perform forward substitution to solve for the first n components of x.
xn=zeros(n,1);
xn(1)=b(1)/R(1,1);
for curRow=2:n
xn(curRow)=(b(curRow)-sum(xn(1:(curRow-1))'.*R(curRow,1:(curRow-1))))/R(curRow,curRow);
end
end
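%Example (as comments, since a function file cannot hold script code; the
%matrix below is illustrative and not from the original file):
% A=[2 0 0;
%    0 1 1;
%    0 1 1];
% b=[4;3;3];
% %A is singular, but the first component of x is uniquely observable:
% x1=linEqSolveFirstnRows(A,b,1) %returns 2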
%LICENSE:
%
%The source code is in the public domain and not licensed or under
%copyright. The information and software may be used freely by the public.
%As required by 17 U.S.C. 403, third parties producing copyrighted works
%consisting predominantly of the material produced by U.S. government
%agencies must provide notice with such work(s) identifying the U.S.
%Government material incorporated and stating that such material is not
%subject to copyright protection.
%
%Derived works shall not identify themselves in a manner that implies an
%endorsement by or an affiliation with the Naval Research Laboratory.
%
%RECIPIENT BEARS ALL RISK RELATING TO QUALITY AND PERFORMANCE OF THE
%SOFTWARE AND ANY RELATED MATERIALS, AND AGREES TO INDEMNIFY THE NAVAL
%RESEARCH LABORATORY FOR ALL THIRD-PARTY CLAIMS RESULTING FROM THE ACTIONS
%OF RECIPIENT IN THE USE OF THE SOFTWARE.
|
{"author": "USNavalResearchLaboratory", "repo": "TrackerComponentLibrary", "sha": "9f6e329de5be06a371757c4b853200beb6def2d0", "save_path": "github-repos/MATLAB/USNavalResearchLaboratory-TrackerComponentLibrary", "path": "github-repos/MATLAB/USNavalResearchLaboratory-TrackerComponentLibrary/TrackerComponentLibrary-9f6e329de5be06a371757c4b853200beb6def2d0/Mathematical_Functions/Basic_Matrix_Operations/linEqSolveFirstnRows.m"}
|
import glob
import imp
import os
import pkgutil
import re
import sys
import tarfile
import pytest
from . import reset_setup_helpers, reset_distutils_log, fix_hide_setuptools # noqa
from . import run_cmd, run_setup, cleanup_import
PY3 = sys.version_info[0] == 3
if PY3:
_text_type = str
else:
_text_type = unicode # noqa
_DEV_VERSION_RE = re.compile(r'\d+\.\d+(?:\.\d+)?\.dev(\d+)')
TEST_VERSION_SETUP_PY = """\
#!/usr/bin/env python
from setuptools import setup
NAME = 'apyhtest_eva'
VERSION = {version!r}
RELEASE = 'dev' not in VERSION
from astropy_helpers.git_helpers import get_git_devstr
from astropy_helpers.version_helpers import generate_version_py
if not RELEASE:
VERSION += get_git_devstr(False)
generate_version_py(NAME, VERSION, RELEASE, False, uses_git=not RELEASE)
setup(name=NAME, version=VERSION, packages=['apyhtest_eva'])
"""
TEST_VERSION_INIT = """\
try:
from .version import version as __version__
from .version import githash as __githash__
except ImportError:
__version__ = __githash__ = ''
"""
@pytest.fixture
def version_test_package(tmpdir, request):
def make_test_package(version='42.42.dev'):
test_package = tmpdir.mkdir('test_package')
test_package.join('setup.py').write(
TEST_VERSION_SETUP_PY.format(version=version))
test_package.mkdir('apyhtest_eva').join('__init__.py').write(TEST_VERSION_INIT)
with test_package.as_cwd():
run_cmd('git', ['init'])
run_cmd('git', ['add', '--all'])
run_cmd('git', ['commit', '-m', 'test package'])
if '' in sys.path:
sys.path.remove('')
sys.path.insert(0, '')
def finalize():
cleanup_import('apyhtest_eva')
request.addfinalizer(finalize)
return test_package
return make_test_package
def test_update_git_devstr(version_test_package, capsys):
"""Tests that the commit number in the package's version string updates
after git commits even without re-running setup.py.
"""
# We have to call version_test_package to actually create the package
test_pkg = version_test_package()
with test_pkg.as_cwd():
run_setup('setup.py', ['--version'])
stdout, stderr = capsys.readouterr()
version = stdout.strip()
m = _DEV_VERSION_RE.match(version)
assert m, (
"Stdout did not match the version string pattern:"
"\n\n{0}\n\nStderr:\n\n{1}".format(stdout, stderr))
revcount = int(m.group(1))
import apyhtest_eva
assert apyhtest_eva.__version__ == version
# Make a silly git commit
with open('.test', 'w'):
pass
run_cmd('git', ['add', '.test'])
run_cmd('git', ['commit', '-m', 'test'])
import apyhtest_eva.version
imp.reload(apyhtest_eva.version)
# Previously this checked packagename.__version__, but in order for that to
# be updated we also have to re-import _astropy_init which could be tricky.
# Checking directly that the packagename.version module was updated is
# sufficient:
m = _DEV_VERSION_RE.match(apyhtest_eva.version.version)
assert m
assert int(m.group(1)) == revcount + 1
# This doesn't test astropy_helpers.get_helpers.update_git_devstr directly
# since a copy of that function is made in packagename.version (so that it
# can work without astropy_helpers installed). In order to get test
# coverage on the actual astropy_helpers copy of that function just call it
# directly and compare to the value in packagename
from astropy_helpers.git_helpers import update_git_devstr
newversion = update_git_devstr(version, path=str(test_pkg))
assert newversion == apyhtest_eva.version.version
def test_version_update_in_other_repos(version_test_package, tmpdir):
"""
Regression test for https://github.com/astropy/astropy-helpers/issues/114
and for https://github.com/astropy/astropy-helpers/issues/107
"""
test_pkg = version_test_package()
with test_pkg.as_cwd():
run_setup('setup.py', ['build'])
# Add the path to the test package to sys.path for now
sys.path.insert(0, str(test_pkg))
try:
import apyhtest_eva
m = _DEV_VERSION_RE.match(apyhtest_eva.__version__)
assert m
correct_revcount = int(m.group(1))
with tmpdir.as_cwd():
testrepo = tmpdir.mkdir('testrepo')
testrepo.chdir()
# Create an empty git repo
run_cmd('git', ['init'])
import apyhtest_eva.version
imp.reload(apyhtest_eva.version)
m = _DEV_VERSION_RE.match(apyhtest_eva.version.version)
assert m
assert int(m.group(1)) == correct_revcount
correct_revcount = int(m.group(1))
# Add several commits--more than the revcount for the apyhtest_eva package
for idx in range(correct_revcount + 5):
test_filename = '.test' + str(idx)
testrepo.ensure(test_filename)
run_cmd('git', ['add', test_filename])
run_cmd('git', ['commit', '-m', 'A message'])
import apyhtest_eva.version
imp.reload(apyhtest_eva.version)
m = _DEV_VERSION_RE.match(apyhtest_eva.version.version)
assert m
assert int(m.group(1)) == correct_revcount
correct_revcount = int(m.group(1))
finally:
sys.path.remove(str(test_pkg))
@pytest.mark.parametrize('version', ['1.0.dev', '1.0'])
def test_installed_git_version(version_test_package, version, tmpdir, capsys):
"""
Test for https://github.com/astropy/astropy-helpers/issues/87
Ensures that packages installed with astropy_helpers have a correct copy
of the git hash of the installed commit.
"""
# To test this, it should suffice to build a source dist, unpack it
# somewhere outside the git repository, and then do a build and import
# from the build directory--no need to "install" as such
test_pkg = version_test_package(version)
with test_pkg.as_cwd():
run_setup('setup.py', ['build'])
try:
import apyhtest_eva
githash = apyhtest_eva.__githash__
assert githash and isinstance(githash, _text_type)
# Ensure that it does in fact look like a git hash and not some
# other arbitrary string
assert re.match(r'[0-9a-f]{40}', githash)
finally:
cleanup_import('apyhtest_eva')
run_setup('setup.py', ['sdist', '--dist-dir=dist', '--formats=gztar'])
tgzs = glob.glob(os.path.join('dist', '*.tar.gz'))
assert len(tgzs) == 1
tgz = test_pkg.join(tgzs[0])
build_dir = tmpdir.mkdir('build_dir')
tf = tarfile.open(str(tgz), mode='r:gz')
tf.extractall(str(build_dir))
with build_dir.as_cwd():
pkg_dir = glob.glob('apyhtest_eva-*')[0]
os.chdir(pkg_dir)
run_setup('setup.py', ['build'])
try:
import apyhtest_eva
loader = pkgutil.get_loader('apyhtest_eva')
# Ensure we are importing the 'packagename' that was just unpacked
# into the build_dir
assert loader.get_filename().startswith(str(build_dir))
assert apyhtest_eva.__githash__ == githash
finally:
cleanup_import('apyhtest_eva')
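def _dev_version_re_sketch():
    """A small standalone sketch (not part of the original tests) of what the
    _DEV_VERSION_RE pattern above captures; the version string is illustrative."""
    match = _DEV_VERSION_RE.match('42.42.dev5')
    assert match is not None
    assert int(match.group(1)) == 5  # the captured group is the dev/commit count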
|
{"hexsha": "5d1c3f8e557a023e294b7bedd1cfe2642454697d", "size": 7458, "ext": "py", "lang": "Python", "max_stars_repo_path": "astropy_helpers/astropy_helpers/tests/test_git_helpers.py", "max_stars_repo_name": "brechmos-stsci/deleteme", "max_stars_repo_head_hexsha": "02590bbe715750c908b6d8013b3ec935eeaf4040", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "astropy_helpers/astropy_helpers/tests/test_git_helpers.py", "max_issues_repo_name": "brechmos-stsci/deleteme", "max_issues_repo_head_hexsha": "02590bbe715750c908b6d8013b3ec935eeaf4040", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-10-29T19:53:22.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-29T19:53:22.000Z", "max_forks_repo_path": "astropy_helpers/astropy_helpers/tests/test_git_helpers.py", "max_forks_repo_name": "brechmos-stsci/deleteme", "max_forks_repo_head_hexsha": "02590bbe715750c908b6d8013b3ec935eeaf4040", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.4683544304, "max_line_length": 87, "alphanum_fraction": 0.6486993832, "include": true, "reason": "from astropy", "num_tokens": 1829}
|
#!/usr/bin/env python3
import numpy as np
import rospy
from duckietown_msgs.msg import Twist2DStamped, Pose2DStamped
from duckietown.dtros import DTROS, NodeType, TopicType
class VelocityToPoseNode(DTROS):
"""
VelocityToPoseNode integrates the velocity of the Duckiebot in order to continuously obtain a pose
relative to the pose at which the node was started.
Args:
node_name (:obj:`str`): a unique, descriptive name for the node that ROS will use
Subscriber:
~velocity (:obj:`Twist2DStamped`): The robot velocity, typically obtained from forward kinematics
Publisher:
~pose (:obj:`Pose2DStamped`): The integrated pose relative to the pose of the robot at node initialization
"""
def __init__(self, node_name):
# Initialize the DTROS parent class
super(VelocityToPoseNode, self).__init__(
node_name=node_name,
node_type=NodeType.LOCALIZATION
)
# Get the vehicle name
self.veh_name = rospy.get_namespace().strip("/")
# Keep track of the last known pose
self.last_pose = Pose2DStamped()
self.last_theta_dot = 0
self.last_v = 0
# Setup the publisher
self.pub_pose = rospy.Publisher(
"~pose",
Pose2DStamped,
queue_size=1,
dt_topic_type=TopicType.LOCALIZATION
)
# Setup the subscriber
self.sub_velocity = rospy.Subscriber(
"~velocity",
Twist2DStamped,
self.velocity_callback,
queue_size=1
)
# ---
self.log("Initialized.")
def velocity_callback(self, msg_velocity):
"""
        Performs the calculation from velocity to pose and publishes a message with the result.
Args:
msg_velocity (:obj:`Twist2DStamped`): the current velocity message
"""
if self.last_pose.header.stamp.to_sec() > 0: # skip first frame
dt = (msg_velocity.header.stamp - self.last_pose.header.stamp).to_sec()
# Integrate the relative movement between the last pose and the current
theta_delta = self.last_theta_dot * dt
# to ensure no division by zero for radius calculation:
if np.abs(self.last_theta_dot) < 0.000001:
# straight line
x_delta = self.last_v * dt
y_delta = 0
else:
# arc of circle
radius = self.last_v / self.last_theta_dot
x_delta = radius * np.sin(theta_delta)
y_delta = radius * (1.0 - np.cos(theta_delta))
# Add to the previous to get absolute pose relative to the starting position
theta_res = self.last_pose.theta + theta_delta
x_res = self.last_pose.x + x_delta * np.cos(self.last_pose.theta) - y_delta * np.sin(self.last_pose.theta)
y_res = self.last_pose.y + y_delta * np.cos(self.last_pose.theta) + x_delta * np.sin(self.last_pose.theta)
# Update the stored last pose
self.last_pose.theta = theta_res
self.last_pose.x = x_res
self.last_pose.y = y_res
# Stuff the new pose into a message and publish
msg_pose = Pose2DStamped()
msg_pose.header = msg_velocity.header
msg_pose.header.frame_id = self.veh_name
msg_pose.theta = theta_res
msg_pose.x = x_res
msg_pose.y = y_res
self.pub_pose.publish(msg_pose)
self.last_pose.header.stamp = msg_velocity.header.stamp
self.last_theta_dot = msg_velocity.omega
self.last_v = msg_velocity.v
if __name__ == '__main__':
# Initialize the node
velocity_to_pose_node = VelocityToPoseNode(node_name='velocity_to_pose_node')
# Keep it spinning to keep the node alive
rospy.spin()
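def _integration_step_sketch(v=0.2, omega=0.5, dt=0.1):
    """A minimal standalone sketch (not part of the node) of the exact
    integration step used in velocity_callback, assuming v and omega are
    constant over dt. All default values are illustrative."""
    theta_delta = omega * dt
    if np.abs(omega) < 1e-6:                 # straight-line limit
        return v * dt, 0.0
    radius = v / omega                       # arc of a circle
    x_delta = radius * np.sin(theta_delta)
    y_delta = radius * (1.0 - np.cos(theta_delta))
    return x_delta, y_delta                  # displacement in the robot frame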
|
{"hexsha": "ef86ad83b83fabaa28e762b3be4980be7b46d95c", "size": 3903, "ext": "py", "lang": "Python", "max_stars_repo_path": "solution/exercise_ws/src/dagu_car/src/velocity_to_pose_node.py", "max_stars_repo_name": "rjean/duckie-segmentation", "max_stars_repo_head_hexsha": "5e720e1a96ef61c4560823030549ac1d5d16e2a4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-03T02:23:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-03T02:23:34.000Z", "max_issues_repo_path": "solution/exercise_ws/src/dagu_car/src/velocity_to_pose_node.py", "max_issues_repo_name": "rjean/mobile-segmentation", "max_issues_repo_head_hexsha": "5e720e1a96ef61c4560823030549ac1d5d16e2a4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "solution/exercise_ws/src/dagu_car/src/velocity_to_pose_node.py", "max_forks_repo_name": "rjean/mobile-segmentation", "max_forks_repo_head_hexsha": "5e720e1a96ef61c4560823030549ac1d5d16e2a4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.5398230088, "max_line_length": 118, "alphanum_fraction": 0.6208045094, "include": true, "reason": "import numpy", "num_tokens": 862}
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Aug 31 21:21:31 2019
@author: mostafamousavi
last update: 06-21-2020
- downsampling using the interpolation function can cause a false segmentation error.
  This depends on your data and its sampling rate. If you keep getting this error when
  using multiple processors, try using only a single CPU.
"""
from obspy import read
import os
from os import listdir
from os.path import join
import h5py
import numpy as np
import csv
#from tqdm import tqdm
import shutil
import json
import pandas as pd
from multiprocessing.pool import ThreadPool
import multiprocessing
import pickle
import faulthandler; faulthandler.enable()
def preprocessor(mseed_dir, stations_json, overlap=0.3, n_processor=None):
"""
Performs preprocessing and partitions the continuous waveforms into 1-minute slices.
Parameters
----------
    mseed_dir: str
        Directory containing the continuous miniSEED waveforms, organized into one subdirectory per station.
stations_json: str
Path to a JSON file containing station information.
    overlap: float, default=0.3
        Fraction of overlap between consecutive 1-minute slices; the slide step is 60*(1-overlap) seconds.
n_processor: int, default=None
        The number of CPU processors used for parallel processing of stations.
Returns
----------
    mseed_dir_processed_hdfs/station.csv: Trace names and start times for all written slices.
    mseed_dir_processed_hdfs/station.hdf5: Contains all 1-minute slices of the preprocessed traces.
    X_preprocessor_report.txt: A summary of processing performance.
    time_tracks.pkl: Contains the time coverage of the continuous data and its component type.
"""
if not n_processor:
n_processor = multiprocessing.cpu_count()
json_file = open(stations_json)
stations_ = json.load(json_file)
save_dir = os.path.join(os.getcwd(), str(mseed_dir)+'_processed_hdfs')
    if os.path.isdir(save_dir):
        print(f' *** " {save_dir} " directory already exists!')
        inp = input(" * --> Do you want to create a new empty folder? Type (Yes or y) ")
        if inp.lower() == "yes" or inp.lower() == "y":
            shutil.rmtree(save_dir)
            os.makedirs(save_dir)
    else:
        os.makedirs(save_dir)
repfile = open("X_preprocessor_report.txt", 'w')
station_list = [join(mseed_dir, ev) for ev in listdir(mseed_dir) if ev.split('/')[-1] != '.DS_Store'];
data_track = dict()
def process(station):
# for station in station_list:
output_name = station.split('/')[1]
try:
os.remove(output_name+'.hdf5')
os.remove(output_name+".csv")
except Exception:
pass
HDF = h5py.File(os.path.join(save_dir, output_name+'.hdf5'), 'a')
HDF.create_group("data")
csvfile = open(os.path.join(save_dir, output_name+".csv"), 'w')
output_writer = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
output_writer.writerow(['trace_name', 'start_time'])
csvfile.flush()
file_list = [join(station, ev) for ev in listdir(station) if ev.split('/')[-1] != '.DS_Store'];
mon = [ev.split('__')[1]+'__'+ev.split('__')[2] for ev in file_list ];
uni_list = list(set(mon))
uni_list.sort()
tim_shift = int(60-(overlap*60))
time_slots, comp_types = [], []
print('============ Station {} has {} chunks of data.'.format(station.split('/')[1], len(uni_list)), flush=True)
count_chuncks=0; fln=0; c1=0; c2=0; c3=0; fl_counts=1; slide_estimates=[];
for ct, month in enumerate(uni_list):
matching = [s for s in file_list if month in s]
if len(matching) == 3:
st1 = read(matching[0], debug_headers=True)
org_samplingRate = st1[0].stats.sampling_rate
for tr in st1:
time_slots.append((tr.stats.starttime, tr.stats.endtime))
comp_types.append(3)
try:
st1.merge(fill_value=0)
except Exception:
st1=_resampling(st1)
st1.merge(fill_value=0)
st1.detrend('demean')
count_chuncks += 1; c3 += 1
print(' * '+station.split('/')[1]+' ('+str(count_chuncks)+') .. '+month.split('T')[0]+' --> '+month.split('__')[1].split('T')[0]+' .. 3 components .. sampling rate: '+str(org_samplingRate))
st2 = read(matching[1], debug_headers=True)
try:
st2.merge(fill_value=0)
except Exception:
st2=_resampling(st2)
st2.merge(fill_value=0)
st2.detrend('demean')
st3 = read(matching[2], debug_headers=True)
try:
st3.merge(fill_value=0)
except Exception:
st3=_resampling(st3)
st3.merge(fill_value=0)
st3.detrend('demean')
st1.append(st2[0])
st1.append(st3[0])
st1.filter('bandpass',freqmin = 1.0, freqmax = 45, corners=2, zerophase=True)
st1.taper(max_percentage=0.001, type='cosine', max_length=2)
if len([tr for tr in st1 if tr.stats.sampling_rate != 100.0]) != 0:
try:
st1.interpolate(100, method="linear")
except Exception:
st1=_resampling(st1)
longest = st1[0].stats.npts
start_time = st1[0].stats.starttime
end_time = st1[0].stats.endtime
for tt in st1:
if tt.stats.npts > longest:
longest = tt.stats.npts
start_time = tt.stats.starttime
end_time = tt.stats.endtime
st1.trim(start_time, end_time, pad=True, fill_value=0)
start_time = st1[0].stats.starttime
end_time = st1[0].stats.endtime
slide_estimates.append((end_time - start_time)//tim_shift)
fl_counts += 1
chanL = [st1[0].stats.channel[-1], st1[1].stats.channel[-1], st1[2].stats.channel[-1]]
next_slice = start_time+60
while next_slice <= end_time:
w = st1.slice(start_time, next_slice)
npz_data = np.zeros([6000,3])
npz_data[:,2] = w[chanL.index('Z')].data[:6000]
try:
npz_data[:,0] = w[chanL.index('E')].data[:6000]
except Exception:
npz_data[:,0] = w[chanL.index('1')].data[:6000]
try:
npz_data[:,1] = w[chanL.index('N')].data[:6000]
except Exception:
npz_data[:,1] = w[chanL.index('2')].data[:6000]
tr_name = st1[0].stats.station+'_'+st1[0].stats.network+'_'+st1[0].stats.channel[:2]+'_'+str(start_time)
                    HDF = h5py.File(os.path.join(save_dir, output_name+'.hdf5'), 'a')  # append mode so create_dataset can write
dsF = HDF.create_dataset('data/'+tr_name, npz_data.shape, data = npz_data, dtype= np.float32)
dsF.attrs["trace_name"] = tr_name
dsF.attrs["receiver_code"] = station.split('/')[1]
dsF.attrs["network_code"] = stations_[station.split('/')[1]]['network']
dsF.attrs["receiver_latitude"] = stations_[station.split('/')[1]]['coords'][0]
dsF.attrs["receiver_longitude"] = stations_[station.split('/')[1]]['coords'][1]
dsF.attrs["receiver_elevation_m"] = stations_[station.split('/')[1]]['coords'][2]
start_time_str = str(start_time)
start_time_str = start_time_str.replace('T', ' ')
start_time_str = start_time_str.replace('Z', '')
dsF.attrs['trace_start_time'] = start_time_str
HDF.flush()
output_writer.writerow([str(tr_name), start_time_str])
csvfile.flush()
fln += 1
start_time = start_time+tim_shift
next_slice = next_slice+tim_shift
if len(matching) == 1:
count_chuncks += 1; c1 += 1
st1 = read(matching[0], debug_headers=True)
org_samplingRate = st1[0].stats.sampling_rate
for tr in st1:
time_slots.append((tr.stats.starttime, tr.stats.endtime))
comp_types.append(1)
try:
st1.merge(fill_value=0)
except Exception:
st1=_resampling(st1)
st1.merge(fill_value=0)
st1.detrend('demean')
print(' * '+station.split('/')[1]+' ('+str(count_chuncks)+') .. '+month.split('T')[0]+' --> '+month.split('__')[1].split('T')[0]+' .. 1 components .. sampling rate: '+str(org_samplingRate))
st1.filter('bandpass',freqmin = 1.0, freqmax = 45, corners=2, zerophase=True)
st1.taper(max_percentage=0.001, type='cosine', max_length=2)
if len([tr for tr in st1 if tr.stats.sampling_rate != 100.0]) != 0:
try:
st1.interpolate(100, method="linear")
except Exception:
st1=_resampling(st1)
chan = st1[0].stats.channel
start_time = st1[0].stats.starttime
end_time = st1[0].stats.endtime
slide_estimates.append((end_time - start_time)//tim_shift)
fl_counts += 1
next_slice = start_time+60
while next_slice <= end_time:
w = st1.slice(start_time, next_slice)
npz_data = np.zeros([6000,3])
if chan[-1] == 'Z':
npz_data[:,2] = w[0].data[:6000]
if chan[-1] == 'E' or chan[-1] == '1':
npz_data[:,0] = w[0].data[:6000]
if chan[-1] == 'N' or chan[-1] == '2':
npz_data[:,1] = w[0].data[:6000]
tr_name = st1[0].stats.station+'_'+st1[0].stats.network+'_'+st1[0].stats.channel[:2]+'_'+str(start_time)
                    HDF = h5py.File(os.path.join(save_dir, output_name+'.hdf5'), 'a')  # append mode so create_dataset can write
dsF = HDF.create_dataset('data/'+tr_name, npz_data.shape, data = npz_data, dtype= np.float32)
dsF.attrs["trace_name"] = tr_name
dsF.attrs["receiver_code"] = station.split('/')[1]
dsF.attrs["network_code"] = stations_[station.split('/')[1]]['network']
dsF.attrs["receiver_latitude"] = stations_[station.split('/')[1]]['coords'][0]
dsF.attrs["receiver_longitude"] = stations_[station.split('/')[1]]['coords'][1]
dsF.attrs["receiver_elevation_m"] = stations_[station.split('/')[1]]['coords'][2]
start_time_str = str(start_time)
start_time_str = start_time_str.replace('T', ' ')
start_time_str = start_time_str.replace('Z', '')
dsF.attrs['trace_start_time'] = start_time_str
HDF.flush()
output_writer.writerow([str(tr_name), start_time_str])
csvfile.flush()
fln += 1
start_time = start_time+tim_shift
next_slice = next_slice+tim_shift
if len(matching) == 2:
count_chuncks += 1; c2 += 1
st1 = read(matching[0], debug_headers=True)
org_samplingRate = st1[0].stats.sampling_rate
for tr in st1:
time_slots.append((tr.stats.starttime, tr.stats.endtime))
comp_types.append(2)
try:
st1.merge(fill_value=0)
except Exception:
st1=_resampling(st1)
st1.merge(fill_value=0)
st1.detrend('demean')
org_samplingRate = st1[0].stats.sampling_rate
print(' * '+station.split('/')[1]+' ('+str(count_chuncks)+') .. '+month.split('T')[0]+' --> '+month.split('__')[1].split('T')[0]+' .. 2 components .. sampling rate: '+str(org_samplingRate))
st2 = read(matching[1], debug_headers=True)
try:
st2.merge(fill_value=0)
except Exception:
                    st2=_resampling(st2)
st2.merge(fill_value=0)
st2.detrend('demean')
st1.append(st2[0])
st1.filter('bandpass',freqmin = 1.0, freqmax = 45, corners=2, zerophase=True)
st1.taper(max_percentage=0.001, type='cosine', max_length=2)
if len([tr for tr in st1 if tr.stats.sampling_rate != 100.0]) != 0:
try:
st1.interpolate(100, method="linear")
except Exception:
st1=_resampling(st1)
longest = st1[0].stats.npts
start_time = st1[0].stats.starttime
end_time = st1[0].stats.endtime
for tt in st1:
if tt.stats.npts > longest:
longest = tt.stats.npts
start_time = tt.stats.starttime
end_time = tt.stats.endtime
st1.trim(start_time, end_time, pad=True, fill_value=0)
start_time = st1[0].stats.starttime
end_time = st1[0].stats.endtime
slide_estimates.append((end_time - start_time)//tim_shift)
chan1 = st1[0].stats.channel
chan2 = st1[1].stats.channel
fl_counts += 1
next_slice = start_time+60
while next_slice <= end_time:
w = st1.slice(start_time, next_slice)
npz_data = np.zeros([6000,3])
if chan1[-1] == 'Z':
npz_data[:,2] = w[0].data[:6000]
elif chan1[-1] == 'E' or chan1[-1] == '1':
npz_data[:,0] = w[0].data[:6000]
elif chan1[-1] == 'N' or chan1[-1] == '2':
npz_data[:,1] = w[0].data[:6000]
if chan2[-1] == 'Z':
npz_data[:,2] = w[1].data[:6000]
elif chan2[-1] == 'E' or chan2[-1] == '1':
npz_data[:,0] = w[1].data[:6000]
elif chan2[-1] == 'N' or chan2[-1] == '2':
npz_data[:,1] = w[1].data[:6000]
tr_name = st1[0].stats.station+'_'+st1[0].stats.network+'_'+st1[0].stats.channel[:2]+'_'+str(start_time)
                    HDF = h5py.File(os.path.join(save_dir, output_name+'.hdf5'), 'a')  # append mode so create_dataset can write
dsF = HDF.create_dataset('data/'+tr_name, npz_data.shape, data = npz_data, dtype= np.float32)
dsF.attrs["trace_name"] = tr_name
dsF.attrs["receiver_code"] = station.split('/')[1]
dsF.attrs["network_code"] = stations_[station.split('/')[1]]['network']
dsF.attrs["receiver_latitude"] = stations_[station.split('/')[1]]['coords'][0]
dsF.attrs["receiver_longitude"] = stations_[station.split('/')[1]]['coords'][1]
dsF.attrs["receiver_elevation_m"] = stations_[station.split('/')[1]]['coords'][2]
start_time_str = str(start_time)
start_time_str = start_time_str.replace('T', ' ')
start_time_str = start_time_str.replace('Z', '')
dsF.attrs['trace_start_time'] = start_time_str
HDF.flush()
output_writer.writerow([str(tr_name), start_time_str])
csvfile.flush()
fln += 1
start_time = start_time+tim_shift
next_slice = next_slice+tim_shift
st1, st2, st3 = None, None, None
HDF.close()
dd = pd.read_csv(os.path.join(save_dir, output_name+".csv"))
assert count_chuncks == len(uni_list)
assert sum(slide_estimates)-(fln/100) <= len(dd) <= sum(slide_estimates)+10
data_track[output_name]=[time_slots, comp_types]
print(f" Station {output_name} had {len(uni_list)} chuncks of data")
print(f"{len(dd)} slices were written, {sum(slide_estimates)} were expected.")
print(f"Number of 1-components: {c1}. Number of 2-components: {c2}. Number of 3-components: {c3}.")
try:
print(f"Original samplieng rate: {org_samplingRate}.")
repfile.write(f' Station {output_name} had {len(uni_list)} chuncks of data, {len(dd)} slices were written, {int(sum(slide_estimates))} were expected. Number of 1-components: {c1}, Number of 2-components: {c2}, number of 3-components: {c3}, original samplieng rate: {org_samplingRate}\n')
except Exception:
pass
with ThreadPool(n_processor) as p:
p.map(process, station_list)
with open('time_tracks.pkl', 'wb') as f:
pickle.dump(data_track, f, pickle.HIGHEST_PROTOCOL)
def _resampling(st):
need_resampling = [tr for tr in st if tr.stats.sampling_rate != 100.0]
if len(need_resampling) > 0:
# print('resampling ...', flush=True)
for indx, tr in enumerate(need_resampling):
if tr.stats.delta < 0.01:
tr.filter('lowpass',freq=45,zerophase=True)
tr.resample(100)
tr.stats.sampling_rate = 100
tr.stats.delta = 0.01
tr.data.dtype = 'int32'
st.remove(tr)
st.append(tr)
return st
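# A hedged usage sketch (not part of the original module); the directory and
# JSON file names below are hypothetical placeholders.
#
#   preprocessor(mseed_dir='downloads_mseeds',
#                stations_json='station_list.json',
#                overlap=0.3,
#                n_processor=2)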
|
{"hexsha": "e090553429c5fde27ed1dc7daab9c85baa1e1bb0", "size": 19469, "ext": "py", "lang": "Python", "max_stars_repo_path": "EQTransformer/utils/hdf5_maker.py", "max_stars_repo_name": "iceseismic/EQTransformer", "max_stars_repo_head_hexsha": "195559f9728604906b9c4352a2f1f782541a0533", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2021-05-07T08:17:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T04:25:31.000Z", "max_issues_repo_path": "src/S_EqT_codes/src/EqT_libs/hdf5_maker.py", "max_issues_repo_name": "Damin1909/ESPRH", "max_issues_repo_head_hexsha": "2b26a7e698fe7c411d44ce5f51d52fffdb742d48", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-12-04T17:00:03.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-17T04:02:11.000Z", "max_forks_repo_path": "src/S_EqT_codes/src/EqT_libs/hdf5_maker.py", "max_forks_repo_name": "Damin1909/ESPRH", "max_forks_repo_head_hexsha": "2b26a7e698fe7c411d44ce5f51d52fffdb742d48", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-07-15T11:37:20.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-07T10:49:50.000Z", "avg_line_length": 46.8004807692, "max_line_length": 299, "alphanum_fraction": 0.4855924804, "include": true, "reason": "import numpy", "num_tokens": 4398}
|
classdef spaces
% Utility functions to work with vector spaces
methods(Static)
function result = insersection_space(A, B)
% Let A and B represent vector spaces
            % The function returns a basis for their intersection
if size(A, 1) ~= size(B, 1)
error('The ambient space must be same.');
end
% Orthogonalize the two bases
A = orth(A); % A is n x a
B = orth(B); % B is n x b
% Count the rank of each of them
% After orthogonalization number of columns may have reduced
rank_a = size(A, 2);
rank_b = size(B, 2);
% Combine them
C = [A B]; % C is n x (ra + rb)
% Compute the null space of the whole thing
D = null(C); % C * D = 0. D is (ra + rb) x m
% m is the nullity of C
% Pick up the first rank_a rows from D
D2 = D(1:rank_a, :); % ra * m
% Multiply with A
result = A * D2 ; % n * m.
% We get the basis we were looking for
end
function result = orth_complement(A, B)
% Find orthogonal complement of A in B.
if size(A, 1) ~= size(B, 1)
error('The ambient space must be same.');
end
% Orthogonalize A to make sure that it is full rank.
A = orth(A);
rank_a = size(A,2);
C = [A B];
[Q, R] = qr(C, 0);
% Leave the first rank_a columns from A
result = Q(:, rank_a+ 1:end);
end
function result = principal_angles_orth_cos(A, B)
% assumes that A and B are orthonormal bases
% Compute the matrix of inner products of bases of A and B.
M = A' * B;
% Compute the SVD of M. The singular values are the cosine of principal angles
result = svd(M);
% make sure that result is bounded by 1.
result = min(1, result);
end
function result = principal_angles_cos(A, B)
% Finds the principal angles between the subspaces spanned by A and B
% References
% - http://in.mathworks.com/matlabcentral/newsreader/view_thread/284282
if size(A, 1) ~= size(B, 1)
error('The ambient space must be same.');
end
A = orth(A);
B = orth(B);
result = spx.la.spaces.principal_angles_orth_cos(A, B);
end
function result = principal_angles_radian(A, B)
% Returns the principal angles in radians
s = spx.la.spaces.principal_angles_cos(A, B);
% Return the angles as cos inverse of singular values
% ensure that cos-theta values are <= 1. Otherwise complex numbers may appear.
result = acos(min(1, s));
end
function result = principal_angles_degree(A, B)
% Returns the principal angles in degrees
radians = spx.la.spaces.principal_angles_radian(A, B);
% Return the angles as cos inverse of singular values
result = rad2deg(radians);
end
function result = smallest_angle_cos(A, B)
% Returns the smallest angle between two subspaces as cos(theta)
s = spx.la.spaces.principal_angles_cos(A, B);
result = s(1);
end
function result = smallest_angle_orth_cos(A, B)
% Returns the smallest angle between two subspaces as cos(theta)
% assumes A and B are orthonormal bases
s = spx.la.spaces.principal_angles_orth_cos(A, B);
result = s(1);
end
function result = smallest_angle_rad(A, B)
% Returns the smallest angle between two subspaces in radians
cos_theta = spx.la.spaces.smallest_angle_cos(A, B);
result = acos(cos_theta);
end
function result = smallest_angle_deg(A, B)
% Returns the smallest angle between two subspaces in degree
theta = spx.la.spaces.smallest_angle_rad(A, B);
result = rad2deg(theta);
end
function result = smallest_angles_cos(subspaces, d)
% subspaces is either a cell array of bases or a concatenated matrix.
% d is the dimension of each subspace [needed only if all bases are concatenated]
if iscell(subspaces)
bases = subspaces;
% number of subspaces
s = numel(subspaces);
% the ambient dimension
m = size(bases{1}, 1);
else
if nargin < 2
error('Dimension of each subspace must be specified.');
end
[m, n] = size(subspaces);
if mod(n, d) ~= 0
error('n must be multiple of d');
end
% number of subspaces
s = n /d;
% create the cell array for bases
bases = cell(s, 1);
for i=0:s-1
%i-th subspace basis
si = subspaces(:, i*d + (1:d));
bases{i+1} = si;
end
end
% Orthogonalize all subspaces
for i=1:s
bases{i} = orth(bases{i});
end
% The smallest angles result matrix
result = eye(s);
for i=1:s
si = bases{i};
for j=i+1:s
% j-th subspace
sj = bases{j};
result(i, j) = spx.la.spaces.smallest_angle_orth_cos(si, sj);
result(j, i) = result(i, j);
end
end
end
function result = smallest_angles_rad(subspaces, d)
if nargin < 2
% subspace dimensions are unspecified
d = -1;
end
result = spx.la.spaces.smallest_angles_cos(subspaces, d);
result = acos(result);
end
function result = smallest_angles_deg(subspaces, d)
if nargin < 2
% subspace dimensions are unspecified
d = -1;
end
result = spx.la.spaces.smallest_angles_rad(subspaces, d);
result = rad2deg(result);
end
function result = subspace_distance(A, B)
% The distance between two subspaces based on Grassmannian
% References
% - http://math.stackexchange.com/questions/198111/distance-between-real-finite-dimensional-linear-subspaces
if size(A, 1) ~= size(B, 1)
error('The ambient space must be same.');
end
A = orth(A);
B = orth(B);
if size(A, 2) ~= size(B, 2)
error('The two subspaces must be of same dimensions');
end
% Compute the projection matrices for the two spaces
PA = A*A';
PB = B*B';
D = PA - PB;
% We return the operator norm of D as the distance between the two subspaces.
result = norm(D);
end
function result = is_in_range_orth(v, U)
% Returns if v is in the range of U where U is a unitary matrix
nv = norm(v);
if nv == 0
% zero vector is always in the column space of U
result = true;
return;
end
% Compute the projection of v into the space spanned by U.
pv = U (U' * v);
% Compute the difference [projection to the orthogonal complement of U]
d = v - pv;
% Compute it's norm
nd = norm(d);
% Verify that the norm of difference is indeed very small
result = nd <= 1e-6 * nv;
end
function result = is_in_range(v, A)
% Returns if v is in the range of an arbitrary matrix A
            result = spx.la.spaces.is_in_range_orth(v, orth(A));
end
function result = find_basis(A)
% Returns a (not necessarily orthogonal) basis of A from columns of A
[R, pivot_cols] = rref(A);
% Return a basis
result = A (:, pivot_cols);
end
function [E, R] = elim(A)
% References
% - http://web.mit.edu/18.06/www/Course-Info/Mfiles/elim.m
            % Factorize: E A = R
% where E is a product of elementary matrices and
% R is the row reduced echelon form
[m, n] = size(A);
I = eye(m);
RE = rref([A I]);
R = RE(:, 1:n);
E = RE(:, (n+1):m+n);
end
function N = null_basis(A)
% Returns a null basis for A
% References
% - http://web.mit.edu/18.06/www/Course-Info/Mfiles/nulbasis.m
[m, n] = size(A);
[R, pivot_cols] = rref(A, sqrt(eps));
r = length(pivot_cols);
% The columns of A which are not part of pivots
freecol = 1:n;
freecol(pivot_cols) = [];
% Create space for storing the null basis
N = zeros(n, n-r);
N(freecol, : ) = eye(n-r);
N(pivot_cols, : ) = -R(1:r, freecol);
end
function [col_space, null_space, row_space, left_null_space] = four_bases(A)
% Returns bases for the four spaces associated with the matrix A
% These are not necessarily orthonormal bases
[m, n] = size(A);
            [R, pivot_cols] = rref(A, sqrt(eps));
            r = length(pivot_cols);
            % The first r rows of the echelon matrix form the row space
            row_space = R(1:r, :)';
            % The columns of A indexed by the pivot columns form the column space
            col_space = A(:, pivot_cols);
            % Computation of the null space
            % The columns of A which are not part of pivots
            freecol = 1:n;
            freecol(pivot_cols) = [];
            % Allocate memory for the null space basis
            null_space = zeros(n, n-r);
            null_space(freecol, : ) = eye(n-r);
            null_space(pivot_cols, : ) = -R(1:r, freecol);
            % The last m-r rows of the elimination matrix E (with E A = R) span the left null space
            [E, ~] = spx.la.spaces.elim(A);
            left_null_space = E((r+1):m, :)';
end
function [col_space, null_space, row_space, left_null_space] = four_orth_bases(A)
            % Prepares the four bases using SVD
            [m, n] = size(A);
            [U, S, V] = svd(A);
            % Finding the rank of A
            s = diag(S);
            tol = max(size(A))*eps(max(s));
            r = sum(s > tol);
            % Left singular vectors give the column space and left null space
            col_space = U(:, 1:r);
            left_null_space = U(:, r+1:m);
            % Right singular vectors give the row space and null space
            row_space = V(:, 1:r);
            null_space = V(:, r+1:n);
end
function Y = k_dim_to_n_dim(X, n, indices)
% Maps the data in K dimensions to N-dimensions.
[k, d] = size(X);
if nargin < 2
error('Target dimension must be specified');
end
if n < k
error('n must be larger than k');
end
if nargin < 3
indices = 1:k;
end
if ~isvector(indices)
error('indices must be a vector.');
end
if numel(indices) > k
error('Number of indices must be k');
end
if length(unique(indices))<length(indices)
error('There must be exactly k unique entries in indices');
end
if max(indices) > n || min(indices) < 1
error('Indices cannot point outside 1:n ');
end
Y = zeros(n, d);
Y(indices, :) = X;
end
function [A, B, C] = three_spaces_at_angle(N, theta)
            if mod(N, 3) ~= 0
error('N must be divisible by 3');
end
% First create two random orthonormal vectors
X = orth(randn(N, 3));
% Then tilt the second one w.r.t. first
a1 = X(:, 1);
a2 = X(:, 2);
a3 = X(:, 3);
p = cos(theta);
% first vector for second space
b1 = sqrt(1 - p^2) * a2 + p * a1;
% first vector for third space
c1_1 = p;
c1_2 = p * (1 - p) / sqrt(1 - p^2);
c1_3 = sqrt(1 - c1_1^2 - c1_2^2);
c1 = c1_1 * a1 + c1_2 * a2 + c1_3 * a3;
X = [a1 b1 c1];
% Find the orthogonal complement of X
[U S V] = svd(X);
Y = U(:, 4:end);
[~, n] = size(Y);
            % Distribute vectors from Y into A, B and C
A = [a1 Y(:, 1:n/3)];
B = [b1 Y(:, n/3 + (1:n/3))];
C = [c1 Y(:, 2*n/3 + (1:n/3))];
end
function [A, B, C] = three_disjoint_spaces_at_angle(N, theta)
X = eye(4);
R1 = [cos(theta) -sin(theta); sin(theta) cos(theta)];
a1 = [1 ; 0];
a2 = R1*a1;
a3 = R1*a2;
theta2 = deg2rad(59);
R2 = [cos(theta2) -sin(theta2); sin(theta2) cos(theta2)];
b1 = [1 ; 0];
b2 = R2*b1;
b3 = R2*b2;
z = zeros(2, 1);
A = [a1 z; z b1];
B = [a2 z; z b2];
C = [a3 z; z b3];
A =kron(A , eye(N / 2));
B =kron(B , eye(N / 2));
C =kron(C , eye(N / 2));
end
function describe_three_spaces(A, B, C)
% Compute principal angles between the subspaces
            fprintf('Ranks: [A]: %d, [B]: %d, [C]: %d\n', ...
                rank(A), rank(B), rank(C));
            fprintf('Cols: [A]: %d, [B]: %d, [C]: %d\n', ...
                size(A, 2), size(B, 2), size(C, 2));
fprintf('Ranks: [A B]: %d, [B C]: %d, [A C]: %d, \n', ...
rank([A B]), rank([B C]), rank([A C]));
D = [A B C];
fprintf('Rank [A B C]: %d\n', rank(D));
fprintf('Angle between A and B: %.4f deg\n', spx.la.spaces.smallest_angle_deg(A, B));
fprintf('Angle between B and C: %.4f deg\n', spx.la.spaces.smallest_angle_deg(B, C));
fprintf('Angle between A and C: %.4f deg\n', spx.la.spaces.smallest_angle_deg(A, C));
fprintf('Column wise norms: \n');
fprintf(' %.2f', spx.norm.norms_l2_cw(A));
fprintf('\n');
fprintf(' %.2f', spx.norm.norms_l2_cw(B));
fprintf('\n');
fprintf(' %.2f', spx.norm.norms_l2_cw(C));
fprintf('\n');
end
function [A, B, C] = abc_spaces_junk_1(N, theta)
            if mod(N, 2) ~= 0
error('N must be divisible by 2');
end
d = N / 2;
% First create three random orthonormal vectors
X = orth(randn(N, 2));
% Then tilt the second one w.r.t. first
a1 = X(:, 1);
a2 = X(:, 2);
p = cos(theta);
% first vector for second space
b1 = sqrt(1 - p^2) * a2 + p * a1;
X = [a1 b1];
% Find the orthogonal complement of X
[U S V] = svd(X);
Y = U(:, 3:end);
[~, n] = size(Y);
% Distribute vectors from Y into A and B
A = [a1 Y(:, 1:n/2)];
B = [b1 Y(:, n/2 + 1:end)];
% choose a vector a3 which is orthogonal to A and b1
X = [A B(:, 1:d-1)];
[U S V] = svd(X);
a3 = U(:, size(X, 2) + 1);
% first vector for third space
c1_1 = p;
c1_2 = p * (1 - p) / sqrt(1 - p^2);
c1_3 = sqrt(1 - c1_1^2 - c1_2^2);
c1 = c1_1 * a1 + c1_2 * a2 + c1_3 * a3;
X = [A c1];
% Find the orthogonal complement of X
[U S V] = svd(X);
Y = U(:, size(X, 2)+1:end);
C = [c1 Y];
end
function result = have_same_column_spans(A, B)
% Checks if the column spans of two matrices are same.
r1 = rank(A);
r2 = rank(B);
if r1 ~= r2
result = false;
return;
end
r3 = rank([A B]);
if r3 ~= r1
result = false;
return;
end
result = true;
end
function result = bases(X, counts)
% bases for individual subspaces
K = length(counts);
% bases cell array
result = cell(1, K);
[start_indices, end_indices] = spx.cluster.start_end_indices(counts);
for k=1:K
ss = start_indices(k);
ee = end_indices(k);
XX = X(:, ss:ee);
basis = orth(XX);
result{k} = basis;
end
end
end
end
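% A hedged usage sketch (comments only, since a classdef file cannot hold
% script code); the dimensions below are illustrative:
%   A = orth(randn(10, 3)); B = orth(randn(10, 3));
%   thetas = spx.la.spaces.principal_angles_degree(A, B)
%   d = spx.la.spaces.subspace_distance(A, B)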
|
{"author": "indigits", "repo": "sparse-plex", "sha": "43cae2978f62938d001baaa03308a2a717ee6c9b", "save_path": "github-repos/MATLAB/indigits-sparse-plex", "path": "github-repos/MATLAB/indigits-sparse-plex/sparse-plex-43cae2978f62938d001baaa03308a2a717ee6c9b/library/+spx/+la/spaces.m"}
|
"""
This module provides tools for stacking a model on top of other
models without information leakage from a target variable to
predictions made by base models.
@author: Nikolay Lysenko
"""
from typing import List, Dict, Tuple, Callable, Union, Optional, Any
from abc import ABC, abstractmethod
import numpy as np
from sklearn.base import (
BaseEstimator, RegressorMixin, ClassifierMixin,
clone
)
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
from sklearn.utils.multiclass import check_classification_targets
from sklearn.model_selection import (
KFold, StratifiedKFold, GroupKFold, TimeSeriesSplit
)
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from joblib import Parallel, delayed
from .utils import InitablePipeline
# For the sake of convenience, define a new type.
FoldType = Union[KFold, StratifiedKFold, GroupKFold, TimeSeriesSplit]
def _fit_estimator(
X: np.ndarray,
y: np.ndarray,
estimator: BaseEstimator,
fit_kwargs: Dict[str, Any]
) -> BaseEstimator:
"""
A private function for fitting base estimators in parallel.
It should not be called outside of this module.
"""
return estimator.fit(X, y, **fit_kwargs)
class BaseStacking(BaseEstimator, ABC):
"""
A parent class for regression and classification stacking.
:param base_estimators_types:
list of types of first stage estimators, a type can occur
multiple times here
:param base_estimators_params:
list of (hyper)parameters of first stage estimators such
that its i-th element relates to the i-th element of
`base_estimator_types`
:param meta_estimator_type:
a type of second stage estimator
:param meta_estimator_params:
(hyper)parameters of second stage estimator
:param splitter:
an object that splits learning sample into folds for making
out-of-fold predictions with base estimators, these predictions
are used as features by second stage estimator
:param keep_meta_X:
if it is `True`, out-of-fold predictions made by first stage
estimators are stored in the class attribute named `meta_X_`
:param random_state:
random state for all estimators and splitting into folds;
if it is set, it overrides all other random states,
i.e., the ones that are set in `base_estimators_params`,
`meta_estimator_params`, and `splitter`; it is not set
by default
:param n_jobs:
number of parallel jobs for fitting each of base estimators
to different folds
"""
def __init__(
self,
base_estimators_types: Optional[List[type]] = None,
base_estimators_params: Optional[List[Dict[str, Any]]] = None,
meta_estimator_type: Optional[type] = None,
meta_estimator_params: Optional[Dict[str, Any]] = None,
splitter: Optional[FoldType] = None,
keep_meta_X: bool = True,
random_state: Optional[int] = None,
n_jobs: int = 1
):
self.base_estimators_types = base_estimators_types
self.base_estimators_params = base_estimators_params
self.meta_estimator_type = meta_estimator_type
self.meta_estimator_params = meta_estimator_params
self.splitter = splitter
self.keep_meta_X = keep_meta_X
self.random_state = random_state
self.n_jobs = n_jobs
@staticmethod
def __preprocess_base_estimators_sources(
types: List[type],
params: List[Dict[str, Any]]
) -> Tuple[List[type], List[Dict[str, Any]]]:
# Prepare types and parameters of base estimators, replace `None`s.
types = [x if x != Pipeline else InitablePipeline for x in types]
params = params or [dict() for _ in range(len(types))]
return types, params
@staticmethod
def __match_base_estimators_sources(
types: List[type],
params: List[Dict[str, Any]]
) -> List[Tuple[type, Dict[str, Any]]]:
# Validate and zip `types` and `params`.
if len(types) != len(params):
raise ValueError(
(
'Lengths mismatch: `base_estimators_types` has length {}, '
'whereas `base_estimator_params` has length {}.'
).format(len(types), len(params))
)
pairs = list(zip(types, params))
for estimator_type, estimator_params in pairs:
if (estimator_type == InitablePipeline and
'steps' not in estimator_params.keys()):
raise ValueError('Argument `steps` is not passed to pipeline.')
return pairs
def _create_base_estimators_from_their_types(
self,
types: List[type]
) -> List[BaseEstimator]:
# Create a list of base estimators from a list of their types and
# parameters of `self`.
types, params = self.__preprocess_base_estimators_sources(
types, self.base_estimators_params
)
pairs = self.__match_base_estimators_sources(types, params)
pairs = [
(t, p)
if 'random_state' not in t().get_params().keys()
or self.random_state is None
else (t, dict(p, **{'random_state': self.random_state}))
for t, p in pairs
]
base_estimators = [
estimator_type().set_params(**params)
for estimator_type, params in pairs
]
return base_estimators
@abstractmethod
def _create_base_estimators(self) -> List[BaseEstimator]:
# Instantiate base estimators from initialization parameters.
pass
def __prepare_all_for_base_estimators_fitting(
self,
base_fit_kwargs: Optional[Dict[type, Dict[str, Any]]] = None
) -> Tuple[List[BaseEstimator], Dict[type, Dict[str, Any]]]:
# Run preprocessing that is needed for base estimators fitting.
base_estimators = self._create_base_estimators()
base_fit_kwargs = (
base_fit_kwargs or {type(x): dict() for x in base_estimators}
)
return base_estimators, base_fit_kwargs
def _create_meta_estimator_from_its_type(
self,
meta_estimator_type: type,
) -> BaseEstimator:
# Instantiate second stage estimator based on its type and parameters
# of `self`.
if meta_estimator_type == Pipeline:
meta_estimator_type = InitablePipeline
        # Copy to avoid mutating the user-supplied parameters below.
        meta_estimator_params = dict(self.meta_estimator_params or {})
random_state_is_applicable = (
'random_state' in meta_estimator_type().get_params().keys()
)
random_state_is_set = self.random_state is not None
if random_state_is_applicable and random_state_is_set:
meta_estimator_params['random_state'] = self.random_state
meta_estimator = (
meta_estimator_type().set_params(**meta_estimator_params)
)
return meta_estimator
@abstractmethod
def _create_meta_estimator(self) -> BaseEstimator:
# Instantiate second stage estimator from initialization parameters.
pass
def __create_splitter(self) -> FoldType:
# Create splitter that is used for the first stage of stacking.
splitter = self.splitter or KFold()
random_state_is_applicable = (
hasattr(splitter, 'shuffle') and splitter.shuffle
)
random_state_is_set = self.random_state is not None
if random_state_is_applicable and random_state_is_set:
splitter.random_state = self.random_state
return splitter
@abstractmethod
def _preprocess_target_variable(self, y: np.ndarray) -> np.ndarray:
# Run operations that are specific to regression or classification.
pass
def _fit_base_estimators(
self,
X: np.ndarray,
y: np.ndarray,
base_fit_kwargs: Optional[Dict[type, Dict[str, Any]]] = None
    ) -> None:
# Fit each of base estimators to a whole learning sample.
base_estimators, base_fit_kwargs = (
self.__prepare_all_for_base_estimators_fitting(base_fit_kwargs)
)
self.base_estimators_ = [
estimator.fit(X, y, **base_fit_kwargs.get(type(estimator), dict()))
for estimator in base_estimators
]
@staticmethod
@abstractmethod
def _infer_operation(fitted_estimator: BaseEstimator) -> Callable:
# Figure out what `fitted_estimator` must do according to its type.
pass
@abstractmethod
def _apply_fitted_base_estimator(
self,
apply_fn: Callable,
estimator: BaseEstimator,
X: np.ndarray,
labels_from_training_folds: Optional[List[int]] = None
) -> np.ndarray:
# Use `estimator` on `X` with `apply_fn` which calls one of
# `predict`, `predict_proba`, and `transform` methods.
pass
@staticmethod
def __take_folds_data(
X: np.ndarray,
y: np.ndarray,
folds: List[Tuple[np.ndarray, np.ndarray]]
) -> Tuple[List[np.ndarray], List[np.ndarray], List[np.ndarray]]:
# Return three lists: list of features for fitting, list of targets
# for fitting, and list of hold-out features for making of
# out-of-fold predictions.
zipped_folds_data = [
(X[fit_indices, :], y[fit_indices], X[hold_out_indices, :])
for fit_indices, hold_out_indices in folds
]
folds_data = tuple(list(data) for data in zip(*zipped_folds_data))
return folds_data
@staticmethod
def __restore_initial_order(
meta_features: np.ndarray,
folds: List[Tuple[np.ndarray]]
) -> np.ndarray:
        # Rearrange out-of-fold predictions so that their rows follow the
        # initial order of objects. This is needed because targets (and,
        # possibly, sample weights) for the meta estimator come in the
        # initial order.
ordering_column = np.hstack([x[1] for x in folds]).reshape((-1, 1))
meta_features = np.hstack((meta_features, ordering_column))
meta_features = meta_features[meta_features[:, -1].argsort(), :-1]
return meta_features
def __compute_meta_feature_produced_by_estimator(
self,
estimator_fits_to_folds: List[BaseEstimator],
apply_fn: Callable,
hold_out_Xs: List[np.ndarray],
fit_ys: List[np.ndarray]
) -> np.ndarray:
# Collect all out-of-fold predictions produced by the estimator
# such that its clones trained on different folds are stored in
# `estimator_fits_to_folds`, then combine these predictions in
# a single column.
meta_feature = [
self._apply_fitted_base_estimator(
apply_fn, estimator_on_other_folds, hold_out_X,
sorted(np.unique(fit_y).tolist())
if hasattr(self, 'classes_') else []
)
for estimator_on_other_folds, hold_out_X, fit_y
in zip(
estimator_fits_to_folds, hold_out_Xs, fit_ys
)
]
meta_x = np.vstack(meta_feature)
return meta_x
def __collect_out_of_fold_predictions(
self,
X: np.ndarray,
y: np.ndarray,
base_fit_kwargs: Optional[Dict[type, Dict[str, Any]]] = None
) -> np.ndarray:
# Collect out-of-fold predictions of all base estimators that are
# trained on all folds except the one for which predictions are
# being made, then return matrix of out-of-fold predictions.
base_estimators, base_fit_kwargs = (
self.__prepare_all_for_base_estimators_fitting(base_fit_kwargs)
)
splitter = self.__create_splitter()
folds = list(splitter.split(X))
fit_Xs, fit_ys, hold_out_Xs = self.__take_folds_data(
X, y, folds
)
meta_features = []
for estimator in base_estimators:
apply_fn = self._infer_operation(estimator)
estimator_fits_to_folds = Parallel(n_jobs=self.n_jobs)(
delayed(_fit_estimator)(
*fit_data,
clone(estimator),
base_fit_kwargs.get(type(estimator), dict())
)
for fit_data in zip(fit_Xs, fit_ys)
)
current_meta_x = self.__compute_meta_feature_produced_by_estimator(
estimator_fits_to_folds, apply_fn, hold_out_Xs, fit_ys
)
meta_features.append(current_meta_x)
shuffled_meta_X = np.hstack(meta_features)
meta_X = self.__restore_initial_order(
shuffled_meta_X, folds
)
return meta_X
def _fit_meta_estimator(
self,
meta_X: np.ndarray,
y: np.ndarray,
meta_fit_kwargs: Optional[Dict[str, Any]] = None
    ) -> None:
# Fit second stage estimator on out-of-fold predictions made by first
# stage estimators.
meta_estimator = self._create_meta_estimator()
meta_fit_kwargs = meta_fit_kwargs or dict()
self.meta_estimator_ = meta_estimator.fit(meta_X, y, **meta_fit_kwargs)
def _fit(
self,
X: np.ndarray,
y: np.ndarray,
base_fit_kwargs: Optional[Dict[type, Dict[str, Any]]] = None,
meta_fit_kwargs: Optional[Dict[str, Any]] = None,
) -> 'BaseStacking':
# Implement internal logic of fitting.
X, y = check_X_y(X, y)
y = self._preprocess_target_variable(y)
self._fit_base_estimators(X, y, base_fit_kwargs)
meta_X = self.__collect_out_of_fold_predictions(X, y, base_fit_kwargs)
self._fit_meta_estimator(meta_X, y, meta_fit_kwargs)
if self.keep_meta_X:
self.meta_X_ = meta_X
return self
def fit(
self,
X: np.ndarray,
y: np.ndarray,
base_fit_kwargs: Optional[Dict[type, Dict[str, Any]]] = None,
meta_fit_kwargs: Optional[Dict[str, Any]] = None
) -> 'BaseStacking':
"""
Train estimators from both stages of stacking.
:param X:
features
:param y:
target
        :param base_fit_kwargs:
            settings for training of first stage estimators; estimators
            are identified by their types and, as of now, two estimators
            of the same type cannot have different settings
:param meta_fit_kwargs:
settings of second stage estimator training
:return:
fitted instance
"""
return self._fit(X, y, base_fit_kwargs, meta_fit_kwargs)
@abstractmethod
def _apply_fitted_meta_estimator(
self,
meta_X: np.ndarray,
return_probabilities: bool = False
) -> np.ndarray:
# Make predictions with second stage estimator.
pass
def _predict(
self,
X: np.ndarray,
return_probabilities: bool = False
) -> np.ndarray:
# Implement internal logic of predicting.
check_is_fitted(self, ['base_estimators_', 'meta_estimator_'])
X = check_array(X)
meta_features = []
for estimator in self.base_estimators_:
apply_fn = self._infer_operation(estimator)
current_meta_feature = self._apply_fitted_base_estimator(
apply_fn, estimator, X,
list(range(len(self.classes_)))
if hasattr(self, 'classes_')
else []
)
meta_features.append(current_meta_feature)
meta_X = np.hstack(meta_features)
predictions = self._apply_fitted_meta_estimator(
meta_X, return_probabilities
)
return predictions
def predict(
self,
X: np.ndarray,
) -> np.ndarray:
"""
Predict target variable on a new dataset.
:param X:
features
:return:
predictions
"""
return self._predict(X)
def _fit_predict(
self,
X: np.ndarray,
y: np.ndarray,
base_fit_kwargs: Optional[Dict[type, Dict[str, Any]]] = None,
meta_fit_kwargs: Optional[Dict[str, Any]] = None,
return_probabilities: bool = False
) -> np.ndarray:
# Implement internal logic of predicting for the training set.
keep_meta_X = self.keep_meta_X
self.keep_meta_X = True
self._fit(X, y, base_fit_kwargs, meta_fit_kwargs)
if return_probabilities:
predictions = self.meta_estimator_.predict_proba(self.meta_X_)
else:
predictions = self.meta_estimator_.predict(self.meta_X_)
self.keep_meta_X = keep_meta_X
if not keep_meta_X:
self.drop_training_meta_features()
return predictions
def fit_predict(
self,
X: np.ndarray,
y: np.ndarray,
base_fit_kwargs: Optional[Dict[type, Dict[str, Any]]] = None,
meta_fit_kwargs: Optional[Dict[str, Any]] = None
) -> np.ndarray:
"""
Train stacking and predict target variable on a learning
sample.
        This is needed for correct measurement of training error:
        a composition of calls to `fit` and `predict` does not produce
        the same results, because there the features for the second
        stage estimator are produced on the full learning sample,
        whereas here they are produced out-of-fold.
:param X:
features
:param y:
target
        :param base_fit_kwargs:
            settings for training of first stage estimators; estimators
            are identified by their types and, as of now, two estimators
            of the same type cannot have different settings
:param meta_fit_kwargs:
settings of second stage estimator training
:return:
predictions
"""
return self._fit_predict(X, y, base_fit_kwargs, meta_fit_kwargs)
    def drop_training_meta_features(self) -> None:
"""
        Delete the sample on which the second stage estimator was trained.
:return:
None
"""
self.meta_X_ = None
class StackingRegressor(BaseStacking, RegressorMixin):
"""
A class that allows training a regressor on predictions made by
other regressors and/or transformations made by transformers.
Information does not leak through predictions and transformations,
because all of them are made in an out-of-fold manner.
"""
def _create_base_estimators(self) -> List[BaseEstimator]:
# Instantiate base estimators from initialization parameters.
default_types = [RandomForestRegressor, KNeighborsRegressor]
types = self.base_estimators_types or default_types
base_estimators = (
self._create_base_estimators_from_their_types(types)
)
return base_estimators
def _create_meta_estimator(self) -> BaseEstimator:
# Instantiate second stage estimator from initialization parameters.
meta_estimator_type = self.meta_estimator_type or LinearRegression
meta_estimator = self._create_meta_estimator_from_its_type(
meta_estimator_type
)
return meta_estimator
def _preprocess_target_variable(self, y: np.ndarray) -> np.ndarray:
# Just return `y`, regression targets do not need any preprocessing.
return y
@staticmethod
def _infer_operation(fitted_estimator: BaseEstimator) -> Callable:
# Figure out what `fitted_estimator` must do according to its type.
def predict(estimator: BaseEstimator, X: np.ndarray) -> np.ndarray:
return estimator.predict(X).reshape((-1, 1))
def transform(estimator: BaseEstimator, X: np.ndarray) -> np.ndarray:
result = estimator.transform(X)
result = (
result if len(result.shape) > 1 else result.reshape((-1, 1))
)
return result
if hasattr(fitted_estimator, 'predict'):
return predict
elif hasattr(fitted_estimator, 'transform'):
return transform
else:
raise TypeError(
'Invalid type of estimator: {}'.format(type(fitted_estimator))
)
def _apply_fitted_base_estimator(
self,
apply_fn: Callable,
estimator: BaseEstimator,
X: np.ndarray,
labels_from_training_folds: Optional[List[int]] = None
) -> np.ndarray:
# Use `estimator` on `X` with `apply_fn`.
result = apply_fn(estimator, X)
return result
def _apply_fitted_meta_estimator(
self,
meta_X: np.ndarray,
return_probabilities: bool = False
) -> np.ndarray:
# Make predictions with meta-estimator.
predictions = self.meta_estimator_.predict(meta_X)
return predictions
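# Illustrative usage sketch (not part of the original module; the synthetic
# data and hyperparameters below are hypothetical). It shows how to configure
# base estimators explicitly and why `fit_predict` is preferable to `fit`
# followed by `predict` when measuring training error for stacking.
def _demo_stacking_regressor() -> None:  # pragma: no cover
    rng = np.random.RandomState(0)
    X = rng.rand(200, 3)
    y = X[:, 0] + 2 * X[:, 1] ** 2 + 0.1 * rng.randn(200)
    regressor = StackingRegressor(
        base_estimators_types=[RandomForestRegressor, KNeighborsRegressor],
        base_estimators_params=[{'n_estimators': 50}, {'n_neighbors': 7}],
        random_state=0
    )
    # Out-of-fold predictions on the learning sample.
    train_predictions = regressor.fit_predict(X, y)
    print('train MSE:', np.mean((train_predictions - y) ** 2))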
class StackingClassifier(BaseStacking, ClassifierMixin):
"""
A class that allows training a classifier on predictions made by
other classifiers and/or transformations made by transformers.
Information does not leak through predictions and transformations,
because all of them are made in an out-of-fold manner.
"""
def _create_base_estimators(self) -> List[BaseEstimator]:
# Instantiate base estimators from initialization parameters.
# The list of default types is not analogous to that from
# `StackingRegressor`, because inclusion of `KNeighborsClassifier`
# instead of `LogisticRegression` causes occasional failure of
# `sklearn` support test (actually, the issue is with the test).
default_types = [RandomForestClassifier, LogisticRegression]
types = self.base_estimators_types or default_types
base_estimators = (
self._create_base_estimators_from_their_types(types)
)
return base_estimators
def _create_meta_estimator(self) -> BaseEstimator:
# Instantiate second stage estimator from initialization parameters.
meta_estimator_type = self.meta_estimator_type or LogisticRegression
meta_estimator = self._create_meta_estimator_from_its_type(
meta_estimator_type
)
return meta_estimator
def _preprocess_target_variable(self, y: np.ndarray) -> np.ndarray:
# Convert class labels to dense integers.
check_classification_targets(y)
self.classes_, y = np.unique(y, return_inverse=True)
return y
@staticmethod
def _infer_operation(fitted_estimator: BaseEstimator) -> Callable:
# Figure out what `fitted_estimator` must do according to its type.
def predict(
estimator: BaseEstimator,
X: np.ndarray,
*args, **kwargs
) -> np.ndarray:
return estimator.predict(X).reshape((-1, 1))
def predict_proba(
estimator: BaseEstimator,
X: np.ndarray,
*args, **kwargs
) -> np.ndarray:
def predict_proba_for_all_classes(
estimator: BaseEstimator,
X: np.ndarray,
train_labels: List[int],
n_all_labels: int
) -> np.ndarray:
                # Take into account that some classes may not be
                # represented in the training folds.
preds = np.zeros((X.shape[0], n_all_labels))
preds[:, train_labels] = estimator.predict_proba(X)
# Last column is dropped, because probabilities sum up to 1.
preds = preds[:, :-1]
return preds
return predict_proba_for_all_classes(estimator, X, *args, **kwargs)
def transform(
estimator: BaseEstimator,
X: np.ndarray,
*args, **kwargs
) -> np.ndarray:
result = estimator.transform(X)
result = (
result if len(result.shape) > 1 else result.reshape((-1, 1))
)
return result
if hasattr(fitted_estimator, 'predict_proba'):
return predict_proba
elif hasattr(fitted_estimator, 'predict'):
return predict
elif hasattr(fitted_estimator, 'transform'):
return transform
else:
raise TypeError(
'Invalid type of estimator: {}'.format(type(fitted_estimator))
)
def _apply_fitted_base_estimator(
self,
apply_fn: Callable,
estimator: BaseEstimator,
X: np.ndarray,
labels_from_training_folds: Optional[List[int]] = None
) -> np.ndarray:
# Use `estimator` on `X` with `apply_fn`.
result = apply_fn(
estimator,
X,
labels_from_training_folds,
len(self.classes_)
)
return result
def _apply_fitted_meta_estimator(
self,
meta_X: np.ndarray,
return_probabilities: bool = False
) -> np.ndarray:
# Make predictions with meta-estimator.
if return_probabilities:
if not hasattr(self.meta_estimator_, 'predict_proba'):
raise NotImplementedError(
"Second stage estimator has not `predict_proba` method."
)
predictions = self.meta_estimator_.predict_proba(meta_X)
else:
raw_predictions = self.meta_estimator_.predict(meta_X)
predictions = np.apply_along_axis(
lambda x: self.classes_[x],
axis=0,
arr=raw_predictions
)
return predictions
def predict_proba(
self,
X: np.ndarray
) -> np.ndarray:
"""
Predict probabilities of classes on a new dataset.
:param X:
features
:return:
estimated probabilities of classes
"""
return self._predict(X, return_probabilities=True)
def fit_predict_proba(
self,
X: np.ndarray,
y: np.ndarray,
base_fit_kwargs: Optional[Dict[type, Dict[str, Any]]] = None,
meta_fit_kwargs: Optional[Dict[str, Any]] = None
) -> np.ndarray:
"""
Train stacking and predict class probabilities on a learning
sample.
        This is needed for correct measurement of training performance:
        a composition of calls to `fit` and `predict_proba` does not
        produce the same results, because there the features for the
        second stage estimator are produced on the full learning sample,
        whereas here they are produced out-of-fold.
:param X:
features
:param y:
target
        :param base_fit_kwargs:
            settings for training of first stage estimators; estimators
            are identified by their types and, as of now, two estimators
            of the same type cannot have different settings
:param meta_fit_kwargs:
settings of second stage estimator training
:return:
predictions
"""
return self._fit_predict(
X, y, base_fit_kwargs, meta_fit_kwargs, return_probabilities=True
)
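# Companion sketch for `StackingClassifier` (illustrative only; the data is
# hypothetical). `fit_predict_proba` returns out-of-fold class probabilities
# for the learning sample, analogous to `fit_predict` for the regressor.
def _demo_stacking_classifier() -> None:  # pragma: no cover
    rng = np.random.RandomState(0)
    X = rng.rand(200, 3)
    y = (X[:, 0] + X[:, 1] > 1).astype(int)
    classifier = StackingClassifier(random_state=0)
    train_probabilities = classifier.fit_predict_proba(X, y)
    print('first rows of out-of-fold probabilities:')
    print(train_probabilities[:3])
if __name__ == '__main__':  # hypothetical entry point for the sketches above
    _demo_stacking_regressor()
    _demo_stacking_classifier()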
|
{"hexsha": "2e83d9555ef88e2190c90e1655fcbe9de4dd1029", "size": 28172, "ext": "py", "lang": "Python", "max_stars_repo_path": "dsawl/stacking/stackers.py", "max_stars_repo_name": "Nikolay-Lysenko/dsawl", "max_stars_repo_head_hexsha": "1bc3cd219d1f9e91cf8e3bf849c94cdd6b58e6f7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-02-08T21:44:12.000Z", "max_stars_repo_stars_event_max_datetime": "2019-02-08T21:44:12.000Z", "max_issues_repo_path": "dsawl/stacking/stackers.py", "max_issues_repo_name": "Nikolay-Lysenko/dsawl", "max_issues_repo_head_hexsha": "1bc3cd219d1f9e91cf8e3bf849c94cdd6b58e6f7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 30, "max_issues_repo_issues_event_min_datetime": "2017-09-13T22:00:08.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-15T12:46:35.000Z", "max_forks_repo_path": "dsawl/stacking/stackers.py", "max_forks_repo_name": "Nikolay-Lysenko/dsawl", "max_forks_repo_head_hexsha": "1bc3cd219d1f9e91cf8e3bf849c94cdd6b58e6f7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.6822916667, "max_line_length": 79, "alphanum_fraction": 0.6140494108, "include": true, "reason": "import numpy", "num_tokens": 5753}
|
# from dcor import distance_correlation as dc
import numpy as np
import dcor
def distance_corr(X, Y):
"""
Computes the distance correlation between X and Y.
    Uses the PyPI package `dcor`, which implements the paper
    *Measuring and testing dependence by correlation of distances*
    by Székely et al. (2007).
Parameters
----------
    X : numpy array-like of shape (n_samples, n_features), where rows
        correspond to samples and columns to features.
    Y : numpy array-like of shape (n_samples, 1) with a single output
        column and the same number of rows as X.
Returns
-------
    numpy array of shape (n_features, 1) holding the distance
    correlation between each feature of X and Y.
"""
n, d = X.shape
ny, nd = Y.shape
assert n == ny
assert nd == 1
dc_stats = np.zeros((d, 1))
for i in range(d):
dc_stats[i] = dcor.distance_correlation(X[:, i], Y[:, 0])
return dc_stats
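# Illustrative sanity check (not part of the original module; the synthetic
# data is hypothetical): the feature that Y depends on should receive a
# noticeably higher score than the pure-noise features.
if __name__ == '__main__':
    rng = np.random.RandomState(0)
    X = rng.rand(500, 3)
    Y = X[:, :1] ** 2  # Y depends on the first feature only.
    print(distance_corr(X, Y))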
|
{"hexsha": "c178ceec252e1896b0fed93ee33bd608a2e9eaf6", "size": 907, "ext": "py", "lang": "Python", "max_stars_repo_path": "knock_off/association_measures/distance_correlation.py", "max_stars_repo_name": "PeterJackNaylor/knockoff-MMD-HSIC", "max_stars_repo_head_hexsha": "76b78c3c4c7ecdc1e9ab9f27a427b1017e69dfeb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "knock_off/association_measures/distance_correlation.py", "max_issues_repo_name": "PeterJackNaylor/knockoff-MMD-HSIC", "max_issues_repo_head_hexsha": "76b78c3c4c7ecdc1e9ab9f27a427b1017e69dfeb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "knock_off/association_measures/distance_correlation.py", "max_forks_repo_name": "PeterJackNaylor/knockoff-MMD-HSIC", "max_forks_repo_head_hexsha": "76b78c3c4c7ecdc1e9ab9f27a427b1017e69dfeb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.1944444444, "max_line_length": 72, "alphanum_fraction": 0.6438809261, "include": true, "reason": "import numpy", "num_tokens": 226}
|
using Test
test_case = "../data/pglib_opf_case5_pjm.m"
test_case_cost = 17551.891
result_keys_required = ["case", "variables", "constraints",
"feasible", "cost", "time_total", "time_data", "time_build", "time_solve"]
function check_result_keys(result::Dict)
for k in result_keys_required
if !haskey(result, k)
@warn "result dict missing key \"$(k)\""
return false
end
end
return true
end
@testset "Rosetta OPF" begin
@testset "GalacticOptim" begin
include("../galacticoptim.jl")
result = solve_opf(test_case)
@test check_result_keys(result)
@test result["feasible"]
@test isapprox(result["cost"], test_case_cost)
end
@testset "JuMP" begin
include("../jump.jl")
result = solve_opf(test_case)
@test check_result_keys(result)
@test result["feasible"]
@test isapprox(result["cost"], test_case_cost)
end
@testset "NLPModels" begin
include("../nlpmodels.jl")
result = solve_opf(test_case)
@test check_result_keys(result)
@test result["feasible"]
@test isapprox(result["cost"], test_case_cost)
end
#=
# currently blocked by https://github.com/JuliaNonconvex/Nonconvex.jl/issues/130
@testset "NonConvex" begin
include("../nonconvex.jl")
result = solve_opf(test_case)
@test check_result_keys(result)
@test result["feasible"]
@test isapprox(result["cost"], test_case_cost)
end
=#
# does not converge to a feasible solution
@testset "Optim" begin
include("../optim.jl")
result = solve_opf(test_case)
@test check_result_keys(result)
@test !result["feasible"]
#@test isapprox(result["cost"], test_case_cost)
end
end
|
{"hexsha": "a245afb8fdccbc49ab532b4f1d3d4bcc37cbdbc4", "size": 1831, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/runtests.jl", "max_stars_repo_name": "lanl-ansi/rosetta-opf", "max_stars_repo_head_hexsha": "09e76f505c04cc788256a4f3f479033ba6abe1f0", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2022-03-25T19:09:50.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T22:42:04.000Z", "max_issues_repo_path": "test/runtests.jl", "max_issues_repo_name": "lanl-ansi/rosetta-opf", "max_issues_repo_head_hexsha": "09e76f505c04cc788256a4f3f479033ba6abe1f0", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2022-03-28T01:10:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T14:44:10.000Z", "max_forks_repo_path": "test/runtests.jl", "max_forks_repo_name": "lanl-ansi/rosetta-opf", "max_forks_repo_head_hexsha": "09e76f505c04cc788256a4f3f479033ba6abe1f0", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.4305555556, "max_line_length": 84, "alphanum_fraction": 0.6226105953, "num_tokens": 474}
|
using Luxor, Test, Colors, Random
Random.seed!(42)
function noisetest(fname)
Drawing(800, 800, fname)
background("chartreuse4")
origin()
initnoise(rand(1:12))
@test 0.0 < noise(.5) <= 1.0
@test 0.0 < noise(0.1, 0.5) <= 1.0
@test 0.0 < noise(0.1, 0.1, -0.1) <= 1.0
@test 0.0 < noise(0.5, 2.0, -2.0, 0.1) <= 1.0
freq = 0.02
tiles = Tiler(800, 800, 150, 150)
for k in 1:2:10
for (pos, n) in tiles
f, d = .01, k
ns = noise(pos.x * freq, pos.y * freq, detail=3, persistence=.3)
ns1 = noise(ns, detail = 3, persistence=2)
ns2 = noise(pos.x * freq, pos.y * freq, ns, detail = 2)
ns3 = noise(pos.x * freq, pos.y * freq, ns2)
setopacity(ns3)
sethue(LCHab(80ns, 100 * ns1, 360 * ns2))
box(pos, tiles.tilewidth, tiles.tileheight, :fill)
end
end
# test a custom rng
rng = MersenneTwister(1234)
initnoise(rng)
@test 0.0 < noise(.5) <= 1.0
@test 0.0 < noise(0.1, 0.5) <= 1.0
@test 0.0 < noise(0.1, 0.1, -0.1) <= 1.0
@test 0.0 < noise(0.5, 2.0, -2.0, 0.1) <= 1.0
@test finish() == true
end
fname = "noise-test.png"
noisetest(fname)
println("...finished noise test: output in $(fname)")
|
{"hexsha": "ce6358a4f35fd272e72f77f8ffcb3cf0747b5b08", "size": 1272, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/noise-test.jl", "max_stars_repo_name": "guo-yong-zhi/Luxor.jl", "max_stars_repo_head_hexsha": "3b4fe34fe1e05c17bfcc9cc5b074fa527e5d1ebf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 463, "max_stars_repo_stars_event_min_datetime": "2017-01-07T00:48:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T07:06:58.000Z", "max_issues_repo_path": "test/noise-test.jl", "max_issues_repo_name": "guo-yong-zhi/Luxor.jl", "max_issues_repo_head_hexsha": "3b4fe34fe1e05c17bfcc9cc5b074fa527e5d1ebf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 200, "max_issues_repo_issues_event_min_datetime": "2017-01-03T12:35:00.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-24T16:39:00.000Z", "max_forks_repo_path": "test/noise-test.jl", "max_forks_repo_name": "guo-yong-zhi/Luxor.jl", "max_forks_repo_head_hexsha": "3b4fe34fe1e05c17bfcc9cc5b074fa527e5d1ebf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 86, "max_forks_repo_forks_event_min_datetime": "2017-01-15T17:36:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T13:55:02.000Z", "avg_line_length": 27.0638297872, "max_line_length": 76, "alphanum_fraction": 0.5267295597, "num_tokens": 507}
|
# epoch - 0 , train_acc - 0.134183333333 , test_acc - 0.1341
# epoch - 1 , train_acc - 0.953983333333 , test_acc - 0.9519
# epoch - 2 , train_acc - 0.96605 , test_acc - 0.9618
# epoch - 3 , train_acc - 0.97145 , test_acc - 0.965
# epoch - 4 , train_acc - 0.97645 , test_acc - 0.9697
# epoch - 5 , train_acc - 0.979716666667 , test_acc - 0.9725
# epoch - 6 , train_acc - 0.981283333333 , test_acc - 0.9729
# epoch - 7 , train_acc - 0.983833333333 , test_acc - 0.975
# epoch - 8 , train_acc - 0.9853 , test_acc - 0.9756
# epoch - 9 , train_acc - 0.985833333333 , test_acc - 0.9754
# epoch - 10 , train_acc - 0.988 , test_acc - 0.9765
# epoch - 11 , train_acc - 0.98875 , test_acc - 0.9774
# epoch - 12 , train_acc - 0.9897 , test_acc - 0.9777
# epoch - 13 , train_acc - 0.990766666667 , test_acc - 0.9775
# epoch - 14 , train_acc - 0.9916 , test_acc - 0.9774
# epoch - 15 , train_acc - 0.991683333333 , test_acc - 0.9771
# epoch - 16 , train_acc - 0.9926 , test_acc - 0.9793
import sys, os
sys.path.append(os.pardir)
import numpy as np
from DeepLearningClass.dataset.mnist import load_mnist
from DeepLearningClass.chapter5.two_layer_net_3_layer import TwoLayerNet
from DeepLearningClass.common.optimizer import AdaGrad
# Read data
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)
network = TwoLayerNet(input_size=784, hidden_size1=200, hidden_size2=200, output_size=10)
iters_num = 10000
train_size = x_train.shape[0]
batch_size = 100
learning_rate = 0.1  # note: unused below; AdaGrad() is created with its default learning rate
train_loss_list = []
train_acc_list = []
test_acc_list = []
iter_per_epoch = max(train_size / batch_size, 1)
adagrad = AdaGrad()
for i in range(iters_num):
batch_mask = np.random.choice(train_size, batch_size)
x_batch = x_train[batch_mask]
t_batch = t_train[batch_mask]
    # Compute gradients
    # grad = network.numerical_gradient(x_batch, t_batch)  # numerical differentiation
    grad = network.gradient(x_batch, t_batch)  # backpropagation (much faster)
    # Update parameters
adagrad.update(network.params, grad)
loss = network.loss(x_batch, t_batch)
train_loss_list.append(loss)
if i % iter_per_epoch == 0:
train_acc = network.accuracy(x_train, t_train)
test_acc = network.accuracy(x_test, t_test)
train_acc_list.append(train_acc)
test_acc_list.append(test_acc)
print('epoch -', int(i / iter_per_epoch), ', train_acc -', train_acc, ', test_acc -', test_acc)
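# Optional visualization sketch (not in the original script; assumes
# matplotlib is available): plot the accuracy curves collected above.
import matplotlib.pyplot as plt
plt.plot(train_acc_list, label='train accuracy')
plt.plot(test_acc_list, label='test accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()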
|
{"hexsha": "50e54af45c57ebbf5c4fc14949a4b30fec79b734", "size": 2378, "ext": "py", "lang": "Python", "max_stars_repo_path": "Lectures/DeepLearningClass/chapter5/train_neuralnet_mnist_3_layer_adagrade.py", "max_stars_repo_name": "Tim232/Python-Things", "max_stars_repo_head_hexsha": "05f0f373a4cf298e70d9668c88a6e3a9d1cd8146", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-05T07:42:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-06T23:23:18.000Z", "max_issues_repo_path": "Lectures/DeepLearningClass/chapter5/train_neuralnet_mnist_3_layer_adagrade.py", "max_issues_repo_name": "Tim232/Python-Things", "max_issues_repo_head_hexsha": "05f0f373a4cf298e70d9668c88a6e3a9d1cd8146", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lectures/DeepLearningClass/chapter5/train_neuralnet_mnist_3_layer_adagrade.py", "max_forks_repo_name": "Tim232/Python-Things", "max_forks_repo_head_hexsha": "05f0f373a4cf298e70d9668c88a6e3a9d1cd8146", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.5846153846, "max_line_length": 103, "alphanum_fraction": 0.6963835156, "include": true, "reason": "import numpy", "num_tokens": 832}
|
#include <fmt/format.h>
#include <frozen/map.h>
#include <frozen/string.h>
#include <nextclade/nextclade.h>
#include <rapidcsv.h>
#include <array>
#include <boost/algorithm/string/join.hpp>
#include <boost/algorithm/string/replace.hpp>
#include <string>
#include <common/safe_vector.h>
#include "../utils/at.h"
#include "../utils/concat.h"
#include "../utils/contains.h"
#include <common/contract.h>
#include "../utils/eraseDuplicates.h"
#include "../utils/safe_cast.h"
#include "formatMutation.h"
#include "formatQcStatus.h"
namespace Nextclade {
namespace {
// Lists column names up to and including "clade" column
inline safe_vector<std::string> getDefaultColumnNamesUpToClades() {
return safe_vector<std::string>{
"seqName",
"clade",
};
}
  // Lists column names after the "clade" column
  // The split is needed because dynamically defined columns are inserted
  // between the two groups.
inline safe_vector<std::string> getDefaultColumnNamesAfterClades() {
return safe_vector<std::string>{
"qc.overallScore",
"qc.overallStatus",
"totalSubstitutions",
"totalDeletions",
"totalInsertions",
"totalFrameShifts",
"totalAminoacidSubstitutions",
"totalAminoacidDeletions",
"totalMissing",
"totalNonACGTNs",
"totalPcrPrimerChanges",
"substitutions",
"deletions",
"insertions",
"frameShifts",
"aaSubstitutions",
"aaDeletions",
"missing",
"nonACGTNs",
"pcrPrimerChanges",
"alignmentScore",
"alignmentStart",
"alignmentEnd",
"qc.missingData.missingDataThreshold",
"qc.missingData.score",
"qc.missingData.status",
"qc.missingData.totalMissing",
"qc.mixedSites.mixedSitesThreshold",
"qc.mixedSites.score",
"qc.mixedSites.status",
"qc.mixedSites.totalMixedSites",
"qc.privateMutations.cutoff",
"qc.privateMutations.excess",
"qc.privateMutations.score",
"qc.privateMutations.status",
"qc.privateMutations.total",
"qc.snpClusters.clusteredSNPs",
"qc.snpClusters.score",
"qc.snpClusters.status",
"qc.snpClusters.totalSNPs",
"qc.frameShifts.frameShifts",
"qc.frameShifts.totalFrameShifts",
"qc.frameShifts.frameShiftsIgnored",
"qc.frameShifts.totalFrameShiftsIgnored",
"qc.frameShifts.score",
"qc.frameShifts.status",
"qc.stopCodons.stopCodons",
"qc.stopCodons.totalStopCodons",
"qc.stopCodons.score",
"qc.stopCodons.status",
"errors",
};
}
}//namespace
class CSVWriter : public CsvWriterAbstract {
rapidcsv::Document doc;
size_t numRows = 1;
std::map<std::string, int> columnNames;
int getColumnIndex(const std::string& columnName) {
return columnNames.at(columnName);
}
public:
explicit CSVWriter(const CsvWriterOptions& opt, const safe_vector<std::string>& customNodeAttrKeys)
: doc{
"",
rapidcsv::LabelParams{/* pColumnNameIdx */ -1, /* pRowNameIdx */ -1},
rapidcsv::SeparatorParams{
/* pSeparator */ opt.delimiter,//
/* pTrim */ false, //
/* pHasCR */ true, //
/* pQuotedLinebreaks */ false, //
/* pAutoQuote */ true //
},
rapidcsv::ConverterParams{},
rapidcsv::LineReaderParams{
/* pSkipCommentLines */ true,//
/* pCommentPrefix */ '#', //
/* pSkipEmptyLines */ true //
},
} {
// Merge default column names with the incoming custom ones
auto columnNamesVec = merge(getDefaultColumnNamesUpToClades(), customNodeAttrKeys);
columnNamesVec = merge(columnNamesVec, getDefaultColumnNamesAfterClades());
      // Duplicate column names must be removed because std::map cannot hold
      // them; with duplicates present, the loop below would produce incorrect
      // indices and out-of-bounds errors could occur.
eraseDuplicatesUnsortedInPlace(columnNamesVec);
// Insert headers row and build a map from column name to column index for lookups when writing data rows
doc.InsertRow<std::string>(0);
int columnIndex = 0;
for (const auto& columnName : columnNamesVec) {
columnNames[columnName] = columnIndex;
doc.SetCell(columnIndex, 0, columnName);
++columnIndex;
}
}
void addRow(const AnalysisResult& result) override {
const auto& rowName = numRows;
const auto numColumns = columnNames.size();
const safe_vector<std::string> rowData(numColumns, "");
doc.InsertRow<std::string>(numRows, rowData);
doc.SetCell(getColumnIndex("seqName"), rowName, result.seqName);
doc.SetCell(getColumnIndex("clade"), rowName, result.clade);
for (const auto& [key, value] : result.customNodeAttributes) {
const auto columnIndex = getColumnIndex(key);
doc.SetCell(columnIndex, rowName, value);
}
doc.SetCell(getColumnIndex("qc.overallScore"), rowName, std::to_string(result.qc.overallScore));
doc.SetCell(getColumnIndex("qc.overallStatus"), rowName, formatQcStatus(result.qc.overallStatus));
doc.SetCell(getColumnIndex("totalSubstitutions"), rowName, std::to_string(result.totalSubstitutions));
doc.SetCell(getColumnIndex("totalDeletions"), rowName, std::to_string(result.totalDeletions));
doc.SetCell(getColumnIndex("totalInsertions"), rowName, std::to_string(result.totalInsertions));
doc.SetCell(getColumnIndex("totalFrameShifts"), rowName, std::to_string(result.totalFrameShifts));
doc.SetCell(getColumnIndex("totalAminoacidSubstitutions"), rowName,
std::to_string(result.totalAminoacidSubstitutions));
doc.SetCell(getColumnIndex("totalAminoacidDeletions"), rowName, std::to_string(result.totalAminoacidDeletions));
doc.SetCell(getColumnIndex("totalMissing"), rowName, std::to_string(result.totalMissing));
doc.SetCell(getColumnIndex("totalNonACGTNs"), rowName, std::to_string(result.totalNonACGTNs));
doc.SetCell(getColumnIndex("totalPcrPrimerChanges"), rowName, std::to_string(result.totalPcrPrimerChanges));
doc.SetCell(getColumnIndex("substitutions"), rowName, formatAndJoin(result.substitutions, formatMutation, ","));
doc.SetCell(getColumnIndex("deletions"), rowName, formatAndJoin(result.deletions, formatDeletion, ","));
doc.SetCell(getColumnIndex("insertions"), rowName, formatAndJoin(result.insertions, formatInsertion, ","));
doc.SetCell(getColumnIndex("frameShifts"), rowName, formatAndJoin(result.frameShifts, formatFrameShift, ","));
doc.SetCell(getColumnIndex("aaSubstitutions"), rowName,
formatAndJoin(result.aaSubstitutions, formatAminoacidMutation, ","));
doc.SetCell(getColumnIndex("aaDeletions"), rowName,
formatAndJoin(result.aaDeletions, formatAminoacidDeletion, ","));
doc.SetCell(getColumnIndex("missing"), rowName, formatAndJoin(result.missing, formatMissing, ","));
doc.SetCell(getColumnIndex("nonACGTNs"), rowName, formatAndJoin(result.nonACGTNs, formatNonAcgtn, ","));
doc.SetCell(getColumnIndex("pcrPrimerChanges"), rowName,
formatAndJoin(result.pcrPrimerChanges, formatPcrPrimerChange, ","));
doc.SetCell(getColumnIndex("alignmentScore"), rowName, std::to_string(result.alignmentScore));
doc.SetCell(getColumnIndex("alignmentStart"), rowName, std::to_string(result.alignmentStart));
doc.SetCell(getColumnIndex("alignmentEnd"), rowName, std::to_string(result.alignmentEnd));
if (result.qc.missingData) {
doc.SetCell(getColumnIndex("qc.missingData.missingDataThreshold"), rowName,
std::to_string(result.qc.missingData->missingDataThreshold));
doc.SetCell(getColumnIndex("qc.missingData.score"), rowName, std::to_string(result.qc.missingData->score));
doc.SetCell(getColumnIndex("qc.missingData.status"), rowName, formatQcStatus(result.qc.missingData->status));
doc.SetCell(getColumnIndex("qc.missingData.totalMissing"), rowName,
std::to_string(result.qc.missingData->totalMissing));
}
if (result.qc.mixedSites) {
doc.SetCell(getColumnIndex("qc.mixedSites.mixedSitesThreshold"), rowName,
std::to_string(result.qc.mixedSites->mixedSitesThreshold));
doc.SetCell(getColumnIndex("qc.mixedSites.score"), rowName, std::to_string(result.qc.mixedSites->score));
doc.SetCell(getColumnIndex("qc.mixedSites.status"), rowName, formatQcStatus(result.qc.mixedSites->status));
doc.SetCell(getColumnIndex("qc.mixedSites.totalMixedSites"), rowName,
std::to_string(result.qc.mixedSites->totalMixedSites));
}
if (result.qc.privateMutations) {
doc.SetCell(getColumnIndex("qc.privateMutations.cutoff"), rowName,
std::to_string(result.qc.privateMutations->cutoff));
doc.SetCell(getColumnIndex("qc.privateMutations.excess"), rowName,
std::to_string(result.qc.privateMutations->excess));
doc.SetCell(getColumnIndex("qc.privateMutations.score"), rowName,
std::to_string(result.qc.privateMutations->score));
doc.SetCell(getColumnIndex("qc.privateMutations.status"), rowName,
formatQcStatus(result.qc.privateMutations->status));
doc.SetCell(getColumnIndex("qc.privateMutations.total"), rowName,
std::to_string(result.qc.privateMutations->total));
}
if (result.qc.snpClusters) {
doc.SetCell(getColumnIndex("qc.snpClusters.clusteredSNPs"), rowName,
formatAndJoin(result.qc.snpClusters->clusteredSNPs, formatClusteredSnp, ","));
doc.SetCell(getColumnIndex("qc.snpClusters.score"), rowName, std::to_string(result.qc.snpClusters->score));
doc.SetCell(getColumnIndex("qc.snpClusters.status"), rowName, formatQcStatus(result.qc.snpClusters->status));
doc.SetCell(getColumnIndex("qc.snpClusters.totalSNPs"), rowName,
std::to_string(result.qc.snpClusters->totalSNPs));
}
if (result.qc.frameShifts) {
doc.SetCell(getColumnIndex("qc.frameShifts.frameShifts"), rowName,
formatAndJoin(result.qc.frameShifts->frameShifts, formatFrameShift, ","));
doc.SetCell(getColumnIndex("qc.frameShifts.totalFrameShifts"), rowName,
std::to_string(result.qc.frameShifts->totalFrameShifts));
doc.SetCell(getColumnIndex("qc.frameShifts.frameShiftsIgnored"), rowName,
formatAndJoin(result.qc.frameShifts->frameShiftsIgnored, formatFrameShift, ","));
doc.SetCell(getColumnIndex("qc.frameShifts.totalFrameShiftsIgnored"), rowName,
std::to_string(result.qc.frameShifts->totalFrameShiftsIgnored));
doc.SetCell(getColumnIndex("qc.frameShifts.score"), rowName, std::to_string(result.qc.frameShifts->score));
doc.SetCell(getColumnIndex("qc.frameShifts.status"), rowName, formatQcStatus(result.qc.frameShifts->status));
}
if (result.qc.stopCodons) {
doc.SetCell(getColumnIndex("qc.stopCodons.stopCodons"), rowName,
formatAndJoin(result.qc.stopCodons->stopCodons, formatStopCodon, ","));
doc.SetCell(getColumnIndex("qc.stopCodons.totalStopCodons"), rowName,
std::to_string(result.qc.stopCodons->totalStopCodons));
doc.SetCell(getColumnIndex("qc.stopCodons.score"), rowName, std::to_string(result.qc.stopCodons->score));
doc.SetCell(getColumnIndex("qc.stopCodons.status"), rowName, formatQcStatus(result.qc.stopCodons->status));
}
++numRows;
}
void addErrorRow(const std::string& seqName, const std::string& errorFormatted) override {
doc.SetCell(getColumnIndex("seqName"), numRows, seqName);
doc.SetCell(getColumnIndex("errors"), numRows, errorFormatted);
++numRows;
}
void write(std::ostream& outputStream) override {
doc.Save(outputStream);
}
};
std::unique_ptr<CsvWriterAbstract> createCsvWriter(const CsvWriterOptions& options,
const safe_vector<std::string>& customNodeAttrKeys) {
return std::make_unique<CSVWriter>(options, customNodeAttrKeys);
}
}// namespace Nextclade
|
{"hexsha": "4360d0bde59aa2c7daa44ab08b37792211746ec4", "size": 12333, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "packages/nextclade/src/io/CsvWriter.cpp", "max_stars_repo_name": "Centralize/nextclade", "max_stars_repo_head_hexsha": "22115d497a877369f1e8b765e80ddc68cbbca65f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packages/nextclade/src/io/CsvWriter.cpp", "max_issues_repo_name": "Centralize/nextclade", "max_issues_repo_head_hexsha": "22115d497a877369f1e8b765e80ddc68cbbca65f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packages/nextclade/src/io/CsvWriter.cpp", "max_forks_repo_name": "Centralize/nextclade", "max_forks_repo_head_hexsha": "22115d497a877369f1e8b765e80ddc68cbbca65f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.0464285714, "max_line_length": 118, "alphanum_fraction": 0.6815049055, "num_tokens": 2970}
|