\Easel\ is a C code library for computational analysis of biological
sequences using probabilistic models. \Easel\ is used by \HMMER\
\citep{hmmer,Eddy98}, the profile hidden Markov model software that
underlies the \Pfam\ protein family database
\citep{Finn06,Sonnhammer97} and several other protein family
databases. \Easel\ is also used by \Infernal\
\citep{infernal,NawrockiEddy07}, the covariance model software that
underlies the \Rfam\ RNA family database
\citep{Griffiths-Jones05}.
There are other biosequence analysis libraries out there, in a variety
of languages
\citep{Vahrson96,Pitt01,Mangalam02,Butt05,Dutheil06,Giancarlo07,Doring08};
but this is ours. \Easel\ is not meant to be comprehensive.
\Easel\ is for supporting what's needed in our group's work on probabilistic
modeling of biological sequences, in applications like \HMMER\ and
\Infernal. It includes code for generative probabilistic models of
sequences, phylogenetic models of evolution, bioinformatics tools for
sequence manipulation and annotation, numerical computing, and some
basic utilities.
\Easel\ is written in ANSI/ISO C because its primary goals are high
performance and portability. Additionally, \Easel\ aims to provide an
ease of use reasonably close to Perl or Python code.
\Easel\ is designed to be reused, but not only as a black box. I might
use a black box library for routine functions that are tangential to
my research, but for anything research-critical, I want to understand
and control the source code. It's rational to treat reusing other
people's code like using their toothbrush, because god only knows what
they've done to it. For me, code reuse more often means acting like a
magpie, studying and stealing shiny bits of other people's source
code, and weaving them into one's own nest. \Easel\ is designed so you
can easily pull useful baubles from it.
\Easel\ is also designed to enable us to publish reproducible and
extensible research results as supplementary material for our research
papers. We put work into documenting \Easel\ as carefully as any other
research data we distribute.
These considerations are reflected in \Easel's design decisions.
\Easel's documentation includes tutorial examples to make it easy to
understand and get started using any given \Easel\ module, independent
of other parts of \Easel. \Easel\ is modular, in a way that should
enable you to extract individual files or functions for use in your
own code, with minimum disentanglement work. \Easel\ uses some
precepts of object-oriented design, but its objects are just C
structures with visible, documented contents. \Easel's source code is
consciously designed to be read as a reference work. It reflects, in a
modest way, principles of ``literate programming'' espoused by Donald
Knuth. \Easel\ code and documentation are interwoven. Most of this
book is automatically generated from \Easel's source code.
\section{Quick start}
Let's start with a quick tour. If you have any experience with the
variable quality of bioinformatics software, the first thing you'll
want to know is whether you can get \Easel\ compiled without having
to install a million dependencies first. The next thing you'll want
to know is whether \Easel\ is going to be useful to you. We'll start
with compiling it. You can compile \Easel\ and try it out without
permanently installing it.
\subsection{Downloading and compiling Easel for the first time}
\Easel\ is self-sufficient, with no dependencies other than what's
already on your system, provided you have an ANSI C99 compiler
installed. You can obtain an \Easel\ source tarball and compile it
cleanly on any UNIX, Linux, or Mac OS X operating system with an
incantation like the following (where \ccode{xxx} is the current
version number):
\begin{cchunk}
% wget http://eddylab.org/easel/easel.tar.gz
% tar zxf easel.tar.gz
% cd easel-xxx
% ./configure
% make
% make check
\end{cchunk}
The \ccode{make check} command is optional. It runs a battery of
quality control tests. All of these should pass. You should now see
\ccode{libeasel.a} in the directory. If you look in the directory
\ccode{miniapps}, you'll also see a bunch of small utility programs,
the \Easel\ ``miniapps''.
There are more complicated things you can do to customize the
\ccode{./configure} step for your needs. That includes customizing the
installation locations. If you decide you want to install
\Easel\ permanently, see the full installation instructions in
chapter~\ref{chapter:installation}.
\subsection{Cribbing from code examples}
Every source code module (that is, each \ccode{.c} file) ends with one
or more \esldef{driver programs}, including programs for unit tests
and benchmarks. These are \ccode{main()} functions that can be
conditionally included when the module is compiled. At the very end
of each module there is always at least one \esldef{example driver}
that shows you how to use the module. You can find the example code
in a module
\eslmod{foo} by searching the \ccode{esl\_foo.c} file for the tag
\ccode{eslFOO\_EXAMPLE}, or just navigating to the end of the file. To
compile the example for module \eslmod{foo} as a working program, do:
\begin{cchunk}
% cc -o example -L. -I. -DeslFOO_EXAMPLE esl_foo.c -leasel -lm
\end{cchunk}
You may need to replace the standard C compiler \ccode{cc} with a
different compiler name, depending on your system. Linking to the
standard math library (\ccode{-lm}) may not be necessary, depending on
what module you're compiling, but it won't hurt. Replace \ccode{foo}
with the name of a module you want to play with, and you can compile
any of \Easel's example drivers this way.
To run it, read the source code (or the corresponding section in this
book) to see if it needs any command line arguments, like the name of
a file to open, then:
\begin{cchunk}
% ./example <any args needed>
\end{cchunk}
You can edit the example driver to play around with it, if you like,
but it's better to make a copy of it in your own file (say,
\ccode{foo\_example.c}) so you're not changing \Easel's code. When you
extract the code into a file, copy what's between the \ccode{\#ifdef
eslFOO\_EXAMPLE} and \ccode{\#endif /*eslFOO\_EXAMPLE*/} flags that
conditionally include the example driver (don't copy the flags
themselves). Then compile your example code and link to \Easel\ like
this:
\begin{cchunk}
% cc -o foo_example -L. -I. foo_example.c -leasel -lm
\end{cchunk}
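To make that concrete, the tail of a hypothetical module \eslmod{foo}
looks schematically like this (the includes and the body are
placeholders, not a real module; the part you copy is everything
between the two flag lines):
\begin{cchunk}
#ifdef eslFOO_EXAMPLE
#include "easel.h"
#include "esl_foo.h"

int
main(int argc, char **argv)
{
  /* ... code exercising the esl_foo API ... */
  return 0;
}
#endif /*eslFOO_EXAMPLE*/
\end{cchunk}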
\subsection{Cribbing from Easel miniapplications}
The \ccode{miniapps} directory contains \Easel's
\esldef{miniapplications}: several utility programs that \Easel\
installs, in addition to the library \ccode{libeasel.a} and its header
files.
The miniapplications are described in more detail later, but for the
purpose of getting a feel for how \Easel\ is used, they provide some
useful examples of small \Easel-based applications that are a little
more complicated than the individual module example drivers.
You can probably get a long way into \Easel\ just by browsing the
source code of the modules' examples and the miniapplications. If
you're the type (like me) who prefers to learn by example, you're
done; you can close this book now.
\section{Overview of Easel's modules}
Possibly your next question is, does \Easel\ provide any functionality
you're interested in?
Each \ccode{.c} file in \Easel\ corresponds to one \Easel\
\esldef{module}. A module consists of a group of functions for some
task. For example, the \eslmod{sqio} module can automatically parse
many common unaligned sequence formats, and the \eslmod{msa} module
can parse many common multiple alignment formats.
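To give a flavor of what using a module looks like, here is a minimal
sketch of reading sequences with \eslmod{sqio}. It is written from
memory of the API, so treat the particular calls
(\ccode{esl\_sqfile\_Open()}, \ccode{esl\_sqio\_Read()}) as
assumptions to be checked against the \eslmod{sqio} chapter and the
module's own example driver:
\begin{cchunk}
#include <stdio.h>
#include "easel.h"
#include "esl_sq.h"
#include "esl_sqio.h"

int
main(int argc, char **argv)
{
  ESL_SQFILE *sqfp = NULL;
  ESL_SQ     *sq   = esl_sq_Create();

  /* open the file named on the command line, autodetecting its format */
  if (esl_sqfile_Open(argv[1], eslSQFILE_UNKNOWN, NULL, &sqfp) != eslOK)
    esl_fatal("failed to open %s", argv[1]);

  /* read one sequence at a time, printing its name */
  while (esl_sqio_Read(sqfp, sq) == eslOK)
    {
      printf("%s\n", sq->name);
      esl_sq_Reuse(sq);
    }

  esl_sqfile_Close(sqfp);
  esl_sq_Destroy(sq);
  return 0;
}
\end{cchunk}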
There are modules concerned with manipulating biological sequences and
sequence files (including a full-fledged parser for Stockholm multiple
alignment format and all its complex and powerful annotation markup):
\begin{center}
\begin{tabular}{p{1in}p{3.7in}}
\eslmod{sq} & Single biological sequences \\
\eslmod{msa} & Multiple sequence alignments and i/o \\
\eslmod{alphabet} & Digitized biosequence alphabets \\
\eslmod{randomseq}& Sampling random sequences \\
\eslmod{sqio} & Sequence file i/o \\
\eslmod{ssi} & Indexing large sequence files for rapid random access \\
\end{tabular}
\end{center}
There are modules implementing common operations on multiple sequence
alignments (including many published sequence weighting algorithms,
and a memory-efficient single linkage sequence clustering algorithm):
\begin{center}
\begin{tabular}{p{1in}p{3.7in}}
\eslmod{msacluster} & Efficient single linkage clustering of aligned sequences by \% identity\\
\eslmod{msaweight} & Sequence weighting algorithms \\
\end{tabular}
\end{center}
There are modules for probabilistic modeling of sequence residue
alignment scores (including routines for solving for the implicit
probabilistic basis of arbitrary score matrices):
\begin{center}
\begin{tabular}{p{1in}p{3.7in}}
\eslmod{scorematrix} & Pairwise residue alignment scoring systems\\
\eslmod{ratematrix} & Standard continuous-time Markov models of residue evolution\\
\eslmod{paml} & Reading PAML data files (including rate matrices)\\
\end{tabular}
\end{center}
There is a module for sequence annotation:
\begin{center}
\begin{tabular}{p{1in}p{3.7in}}
\eslmod{wuss} & ASCII RNA secondary structure annotation strings\\
\end{tabular}
\end{center}
There are modules implementing some standard scientific numerical
computing concepts (including a free, fast implementation of conjugate
gradient optimization):
\begin{center}
\begin{tabular}{p{1in}p{3.7in}}
\eslmod{vectorops} & Vector operations\\
\eslmod{dmatrix} & 2D matrix operations\\
\eslmod{minimizer} & Numerical optimization by conjugate gradient descent\\
\eslmod{rootfinder}& One-dimensional root finding (Newton-Raphson)\\
\end{tabular}
\end{center}
There are modules implementing phylogenetic trees and evolutionary
distance calculations:
\begin{center}
\begin{tabular}{p{1in}p{3.7in}}
\eslmod{tree} & Manipulating phylogenetic trees\\
\eslmod{distance} & Pairwise evolutionary sequence distance calculations\\
\end{tabular}
\end{center}
There are a number of modules that implement routines for many common
probability distributions (including maximum likelihood fitting
routines):
\begin{center}
\begin{tabular}{p{1in}p{3.7in}}
\eslmod{stats} & Basic routines and special statistics functions\\
\eslmod{histogram} & Collecting and displaying histograms\\
\eslmod{dirichlet} & Beta, Gamma, and Dirichlet distributions\\
\eslmod{exponential} & Exponential distributions\\
\eslmod{gamma} & Gamma distributions\\
\eslmod{gev} & Generalized extreme value distributions\\
\eslmod{gumbel} & Gumbel (Type I extreme value) distributions\\
\eslmod{hyperexp} & Hyperexponential distributions\\
\eslmod{mixdchlet} & Mixture Dirichlet distributions and priors\\
\eslmod{mixgev} & Mixture generalized extreme value distributions\\
\eslmod{normal} & Normal (Gaussian) distributions\\
\eslmod{stretchexp} & Stretched exponential distributions\\
\eslmod{weibull} & Weibull distributions\\
\end{tabular}
\end{center}
There are several modules implementing some common utilities
(including a good portable random number generator and a powerful
command line parser):
\begin{center}
\begin{tabular}{p{1in}p{3.7in}}
\eslmod{cluster} & Efficient single linkage clustering\\
\eslmod{fileparser} & Parsing simple token-based (tab/space-delimited) files\\
\eslmod{getopts} & Parsing command line arguments and options.\\
\eslmod{keyhash} & Hash tables for emulating Perl associative arrays\\
\eslmod{random} & Pseudorandom number generation and sampling\\
\eslmod{regexp} & Regular expression matching\\
\eslmod{stack} & Pushdown stacks for integers, chars, pointers\\
\eslmod{stopwatch} & Timing parts of programs\\
\end{tabular}
\end{center}
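As a taste of the \eslmod{getopts} module: an application declares its
options in an \ccode{ESL\_OPTIONS} table and processes the command
line roughly as follows (a sketch from memory of the API; see the
\eslmod{getopts} chapter for the authoritative form):
\begin{cchunk}
#include <stdio.h>
#include "easel.h"
#include "esl_getopts.h"

static ESL_OPTIONS options[] = {
  /* name  type         default env  range  toggles reqs incomp help             docgroup */
  { "-h",  eslARG_NONE, FALSE,  NULL, NULL,  NULL,  NULL, NULL, "show help",         0 },
  { "-n",  eslARG_INT,  "1",    NULL, "n>0", NULL,  NULL, NULL, "number of reps",    0 },
  {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
};

int
main(int argc, char **argv)
{
  ESL_GETOPTS *go = esl_getopts_Create(options);

  if (esl_opt_ProcessCmdline(go, argc, argv) != eslOK)
    esl_fatal("bad command line");
  printf("will do %d reps\n", esl_opt_GetInteger(go, "-n"));

  esl_getopts_Destroy(go);
  return 0;
}
\end{cchunk}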
There are some specialized modules in support of accelerated and/or parallel computing:
\begin{center}
\begin{tabular}{p{1in}p{3.7in}}
\eslmod{sse}      & Routines for SSE (Streaming SIMD Extensions) vector computation support on Intel/AMD platforms\\
\eslmod{vmx} & Routines for Altivec/VMX vector computation support on PowerPC platforms\\
\eslmod{mpi} & Routines for MPI (message passing interface) support\\
\end{tabular}
\end{center}
\section{Navigating documentation and source code}
The quickest way to learn about what each module provides is to go to
the corresponding chapter in this document. Each chapter starts with a
brief introduction of what the module does, and highlights anything
that \Easel's implementation does that we think is particularly
useful, unique, or powerful. That's followed by a table describing
each function provided by the module, and at least one example code
listing of how the module can be used. The chapter might then go into
more detail about the module's functionality, though many chapters do
not, because the functionality is straightforward or self-explanatory.
Finally, each chapter ends with detailed documentation on each
function.
\Easel's source code is designed to be read. Indeed, most of this
documentation is generated automatically from the source code itself
-- in particular, the table listing the available functions, the
example code snippets, and the documentation of the individual
functions.
Each module \ccode{.c} file starts with a table of contents to help
you navigate.\footnote{\Easel\ source files are designed as complete
free-standing documents, so they tend to be larger than most people's
\ccode{.c} files; the more usual practice in C programming is to have
a smaller number of functions per file.} The first section will often
define how to create one or more \esldef{objects} (C structures) that
the module uses. The next section will typically define the rest of
the module's exposed API. Following that are any private (internal)
functions used in the module. Last are the drivers, including
benchmarks, unit tests, and one or more examples.
Each function has a structured comment header that describes how it is
called and used, including what arguments it takes, what it returns,
and what error conditions it may raise. These structured comments are
extracted for inclusion in this document, so what you read here for
each function's documentation is identical to what is in the source
code.
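Schematically, such a header looks like this (the function and its
particulars are a made-up illustration; the field layout follows the
convention used throughout the source):
\begin{cchunk}
/* Function:  esl_foo_Frobnicate()
 * Synopsis:  One-line summary of what the function does.
 *
 * Purpose:   Longer description of the behavior: what the function
 *            computes, what <obj> and <n> mean, and any side effects
 *            or preconditions.
 *
 * Args:      obj - object to operate on
 *            n   - number of elements in <obj>
 *
 * Returns:   <eslOK> on success.
 *
 * Throws:    <eslEMEM> on allocation failure.
 */
\end{cchunk}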
*
* subroutine tstipa.f atmospheric timestep for c-goldstein
* JGS iterative implicit version 2/10/00
* NRE 2-step version, 2nd attempt 2/11/01
* to recover old explicit code change the lines indicated
* coefficient of implicitness cimp included 27/5/2
* cimp=1 fully implicit, cimp=0 explicit
* coeffs for iterative implicit scheme are defined at cell faces.
* eg flux across east face = cie(i)*T(i+1) + ciw(i)*T(i)
* converted from ocean to atmosphere 23/8/2
*
subroutine tstipa
#include "embm.cmn"
crma extra declaration (22/12/03)
c integer tnow
real tv, ups, ups0, pec, diffpp, cimp, centre, dtloc
real cie(0:maxi,0:maxj),ciw(0:maxi,0:maxj),
+ cin(0:maxi,0:maxj),cis(0:maxi,0:maxj)
real tq2(0:maxi+1,0:maxj+1)
c iterations to solve timestep
integer iits, nii
integer i, j, l
logical correct
parameter(correct=.true. )
c implicit
c NOTE: ups0 not currently used
if(igrid.eq.1)then
c const dlat grid ... increased nii for stability
c parameter (nii=16, ups0=999, cimp=0.5)
nii=16
cimp=0.5
else
c default, const dsinlat grid
c parameter (nii=4, ups0=999, cimp=0.5)
nii=4
cimp=0.5
endif
c parameter (nii=4, ups0=0.8, cimp=1.0)
c recover old explicit
c parameter (nii=8, ups0=0.0, cimp=0.0)
dtloc = dtatm
c set b.c's on local variables
do i=0,imax
cin(i,0) = 0.
cis(i,0) = 0.
tq2(i,0) = 0.
cin(i,jmax) = 0.
cis(i,jmax) = 0.
tq2(i,jmax+1) = 0.
enddo
do l=1,2
do j=1,jmax
do i=1,imax
c flux to east
cie(i,j) = betaz(l)*uatm(1,i,j)*rc(j)*0.5*rdphi
diffpp = diffa(l,1,j) +
1 (2-l)*diffmod0*max(0.0,min(1.0,
1 (pptn(i,j)-ppmin)/(ppmax-ppmin)))
tv = rc(j)*rc(j)*rdphi*diffpp*rdphi
c recover old explicit
c ups = sign(ups0, uatm(1,i,j))
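c upstream weighting from the local grid Peclet number (ratio of
c advective to diffusive transport): ups tends to +-1 when
c advection dominates and to 0 when diffusion dominates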
pec = betaz(l)*uatm(1,i,j)*dphi/diffpp
ups = pec / (2.0 + abs(pec))
ciw(i,j) = cie(i,j)*(1+ups) + tv
cie(i,j) = cie(i,j)*(1-ups) - tv
c flux to north
cin(i,j) = cv(j)*betam(l)*uatm(2,i,j)*0.5
diffpp = diffa(l,2,j) +
1 (2-l)*diffmod0*max(0.0,min(1.0,
1 (pptn(i,j)-ppmin)/(ppmax-ppmin)))
c cv(jmax) = 0 but dsv not defined so mask needed
if(j.lt.jmax)then
tv = cv(j)*cv(j)*rdsv(j)*diffa(l,2,j)
c recover old explicit
c ups = sign(ups0, uatm(2,i,j))
pec = betam(l)*uatm(2,i,j)*dsv(j)/diffpp
ups = pec / (2.0 + abs(pec))
else
tv = 0.
ups = 0.
endif
cis(i,j) = cin(i,j)*(1+ups) + tv
cin(i,j) = cin(i,j)*(1-ups) - tv
enddo
enddo
do j=1,jmax
cie(0,j) = cie(imax,j)
ciw(0,j) = ciw(imax,j)
enddo
c iterate to solve timestep
do iits=1,nii
do j=1,jmax
do i=1,imax
tq2(i,j) = cimp*tq(l,i,j)
1 + (1.0 - cimp)*tq1(l,i,j)
enddo
enddo
do j=1,jmax
tq2(0,j) = tq2(imax,j)
tq2(imax+1,j) = tq2(1,j)
enddo
do j=1,jmax
do i=1,imax
centre = dtloc*(ciw(i,j) - cie(i-1,j)
1 + (cis(i,j) - cin(i,j-1))*rds(j))
tq(l,i,j) = (tq1(l,i,j)*(1.0 - (1.0-cimp)
1 *centre) - dtloc*(-tqa(l,i,j)
2 + cie(i,j) *tq2(i+1,j)
3 - ciw(i-1,j)*tq2(i-1,j)
4 + (cin(i,j) *tq2(i,j+1)
5 - cis(i,j-1)*tq2(i,j-1))*rds(j)))/
8 (1 + cimp*centre)
enddo
enddo
enddo
if(correct)then
do j=1,jmax
do i=1,imax
tq2(i,j) = 0.5*(tq2(i,j) + cimp*tq(l,i,j)
1 + (1.0 - cimp)*tq1(l,i,j))
enddo
enddo
do j=1,jmax
tq2(0,j) = tq2(imax,j)
tq2(imax+1,j) = tq2(1,j)
enddo
do j=1,jmax
do i=1,imax
c
c explicit and conservative corrector step
c
tq(l,i,j) = tq1(l,i,j) - dtloc*(-tqa(l,i,j)
1 + cie(i,j) *tq2(i+1,j)
2 - ciw(i-1,j)*tq2(i-1,j)
3 + (cin(i,j) *tq2(i,j+1)
4 - cis(i,j-1)*tq2(i,j-1))*rds(j))
7 - dtloc*tq2(i,j)*(
8 ciw(i,j) - cie(i-1,j)
9 + (cis(i,j) - cin(i,j-1))*rds(j) )
enddo
enddo
endif
enddo
c update tq1
do j=1,jmax
do i=1,imax
do l=1,2
c tv = abs(tq1(l,i,j) - tq(l,i,j))
tq1(l,i,j) = tq(l,i,j)
enddo
enddo
enddo
end
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import shutil
import numpy as np
import time
import argparse
import functools
import paddle.fluid as fluid
from pyramidbox import PyramidBox
import reader
from utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)
# yapf: disable
add_arg('parallel', bool, True, "Whether use multi-GPU/threads or not.")
add_arg('learning_rate', float, 0.001, "The start learning rate.")
add_arg('batch_size', int, 16, "Minibatch size.")
add_arg('num_passes', int, 160, "Epoch number.")
add_arg('use_gpu', bool, True, "Whether use GPU.")
add_arg('use_pyramidbox', bool, True, "Whether use PyramidBox model.")
add_arg('model_save_dir', str, 'output', "The path to save model.")
add_arg('resize_h', int, 640, "The resized image height.")
add_arg('resize_w', int, 640, "The resized image width.")
add_arg('with_mem_opt', bool, True, "Whether to use memory optimization or not.")
add_arg('pretrained_model', str, './vgg_ilsvrc_16_fc_reduced/', "The init model path.")
add_arg('data_dir', str, 'data', "The base dir of dataset")
# yapf: enable
def train(args, config, train_file_list, optimizer_method):
learning_rate = args.learning_rate
batch_size = args.batch_size
num_passes = args.num_passes
height = args.resize_h
width = args.resize_w
use_gpu = args.use_gpu
use_pyramidbox = args.use_pyramidbox
model_save_dir = args.model_save_dir
pretrained_model = args.pretrained_model
with_memory_optimization = args.with_mem_opt
num_classes = 2
image_shape = [3, height, width]
devices = os.getenv("CUDA_VISIBLE_DEVICES") or ""
devices_num = len(devices.split(","))
fetches = []
network = PyramidBox(image_shape, num_classes,
sub_network=use_pyramidbox)
if use_pyramidbox:
face_loss, head_loss, loss = network.train()
fetches = [face_loss, head_loss]
else:
loss = network.vgg_ssd_loss()
fetches = [loss]
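    # 12880 is the number of images in the WIDER FACE training split,
    # so this approximates the number of minibatches per pass (epoch)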
steps_per_pass = 12880 // batch_size
boundaries = [steps_per_pass * 99, steps_per_pass * 124,
steps_per_pass * 149]
values = [
learning_rate, learning_rate * 0.1,
learning_rate * 0.01, learning_rate * 0.001
]
if optimizer_method == "momentum":
optimizer = fluid.optimizer.Momentum(
learning_rate=fluid.layers.piecewise_decay(
boundaries=boundaries, values=values),
momentum=0.9,
regularization=fluid.regularizer.L2Decay(0.0005),
)
else:
optimizer = fluid.optimizer.RMSProp(
learning_rate=fluid.layers.piecewise_decay(boundaries, values),
regularization=fluid.regularizer.L2Decay(0.0005),
)
optimizer.minimize(loss)
if with_memory_optimization:
fluid.memory_optimize(fluid.default_main_program())
place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
start_pass = 0
if pretrained_model:
if pretrained_model.isdigit():
start_pass = int(pretrained_model) + 1
pretrained_model = os.path.join(model_save_dir, pretrained_model)
print("Resume from %s " %(pretrained_model))
if not os.path.exists(pretrained_model):
raise ValueError("The pre-trained model path [%s] does not exist." %
(pretrained_model))
def if_exist(var):
return os.path.exists(os.path.join(pretrained_model, var.name))
fluid.io.load_vars(exe, pretrained_model, predicate=if_exist)
if args.parallel:
train_exe = fluid.ParallelExecutor(
use_cuda=use_gpu, loss_name=loss.name)
train_reader = reader.train_batch_reader(config, train_file_list, batch_size=batch_size)
def save_model(postfix):
model_path = os.path.join(model_save_dir, postfix)
if os.path.isdir(model_path):
shutil.rmtree(model_path)
print('save models to %s' % (model_path))
fluid.io.save_persistables(exe, model_path)
def tensor(data, place, lod=None):
t = fluid.core.LoDTensor()
t.set(data, place)
if lod:
t.set_lod(lod)
return t
for pass_id in range(start_pass, num_passes):
start_time = time.time()
prev_start_time = start_time
end_time = 0
for batch_id in range(steps_per_pass):
im, face_box, head_box, labels, lod = next(train_reader)
im_t = tensor(im, place)
box1 = tensor(face_box, place, [lod])
box2 = tensor(head_box, place, [lod])
lbl_t = tensor(labels, place, [lod])
feeding = {'image': im_t, 'face_box': box1,
'head_box': box2, 'gt_label': lbl_t}
prev_start_time = start_time
start_time = time.time()
if args.parallel:
fetch_vars = train_exe.run(fetch_list=[v.name for v in fetches],
feed=feeding)
else:
fetch_vars = exe.run(fluid.default_main_program(),
feed=feeding,
fetch_list=fetches)
end_time = time.time()
fetch_vars = [np.mean(np.array(v)) for v in fetch_vars]
if batch_id % 10 == 0:
if not args.use_pyramidbox:
print("Pass {0}, batch {1}, loss {2}, time {3}".format(
pass_id, batch_id, fetch_vars[0],
start_time - prev_start_time))
else:
print("Pass {0}, batch {1}, face loss {2}, head loss {3}, " \
"time {4}".format(pass_id,
batch_id, fetch_vars[0], fetch_vars[1],
start_time - prev_start_time))
if pass_id % 1 == 0 or pass_id == num_passes - 1:
save_model(str(pass_id))
if __name__ == '__main__':
args = parser.parse_args()
print_arguments(args)
data_dir = os.path.join(args.data_dir, 'WIDER_train/images/')
train_file_list = os.path.join(args.data_dir,
'wider_face_split/wider_face_train_bbx_gt.txt')
config = reader.Settings(
data_dir=data_dir,
resize_h=args.resize_h,
resize_w=args.resize_w,
apply_distort=True,
apply_expand=False,
mean_value=[104., 117., 123.],
ap_version='11point')
train(args, config, train_file_list, optimizer_method="momentum")
/**
* ExperimentPackager.cpp
*
* History:
* paul on 1/25/06 - Created.
*
* Copyright 2006 MIT. All rights reserved.
*/
#include "ExperimentPackager.h"
#include "Utilities.h"
#include "LoadingUtilities.h"
#include "SystemEventFactory.h"
#include <iostream>
#include <fstream>
#include "PlatformDependentServices.h"
#include "boost/filesystem/path.hpp"
#include "boost/algorithm/string/replace.hpp"
#include <boost/scope_exit.hpp>
BEGIN_NAMESPACE_MW
Datum ExperimentPackager::packageSingleFile(const Datum &contents, const std::string &filename) {
Datum unit(M_DICTIONARY, M_EXPERIMENT_PACKAGE_NUMBER_ELEMENTS_PER_UNIT);
Datum name;
name.setString(filename.c_str(), filename.length()+1);
unit.addElement(M_PACKAGER_FILENAME_STRING, name);
unit.addElement(M_PACKAGER_CONTENTS_STRING, contents);
return unit;
}
Datum
ExperimentPackager::packageSingleFile(const boost::filesystem::path filepath, const std::string filename) {
namespace bf = boost::filesystem;
std::ifstream mediaFile;
mediaFile.open(filepath.string().c_str(), std::ios::binary);
// get length of file:
mediaFile.seekg(0, std::ios::end);
int length = mediaFile.tellg();
// if the file was never opened
if (length < 0) {
mediaFile.close();
Datum undef;
return undef;
}
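    // Read the whole file into a temporary buffer; Datum::setString
    // copies the bytes, so the buffer can be freed immediately after.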
char * buffer = new char[length];
mediaFile.seekg(0, std::ios::beg);
mediaFile.read(buffer, length);
mediaFile.close();
Datum bufferData;
bufferData.setString(buffer, length);
delete [] buffer;
return packageSingleFile(bufferData, filename);
}
Datum ExperimentPackager::packageExperiment(const boost::filesystem::path filename) {
namespace bf = boost::filesystem;
IncludedFilesParser parser(filename.string());
std::string working_path_string;
Datum include_files;
try{
parser.parse(false);
working_path_string = parser.getWorkingPathString();
include_files = parser.getIncludedFilesManifest();
} catch(std::exception& e){
merror(M_PARSER_MESSAGE_DOMAIN, "Experiment packaging failed: %s",
e.what());
return Datum();
}
Datum eventPayload(M_DICTIONARY, M_EXPERIMENT_PACKAGE_NUMBER_ELEMENTS);
{
// Use getDocumentData to get the experiment file with any preprocessing and/or
// XInclude substitutions already applied
std::vector<xmlChar> fileData;
parser.getDocumentData(fileData);
Datum contents(reinterpret_cast<char *>(&(fileData.front())), fileData.size());
eventPayload.addElement(M_PACKAGER_EXPERIMENT_STRING,
packageSingleFile(contents, XMLParser::squashFileName(filename.string())));
}
if(include_files.getNElements() >= 1) {
Datum mediaFilesPayload(M_LIST, include_files.getNElements());
for(int i=0; i< include_files.getNElements(); ++i) {
// DDC there seem to be a lot of unnecessary steps here
// simplified hackily for the moment
std::string mediaName(include_files.getElement(i).getString());
bf::path mediaPath = expandPath(working_path_string, mediaName);
//bf::path mediaPath(include_files.getElement(i).getElement(M_PACKAGER_FULL_NAME).getString());
//std::string mediaName(include_files.getElement(i).getElement(M_PACKAGER_RELATIVE_NAME).getString());
Datum mediaElement = packageSingleFile(mediaPath, mediaName);
if(!mediaElement.isUndefined()) {
mediaFilesPayload.addElement(mediaElement);
} else {
merror(M_FILE_MESSAGE_DOMAIN,
"Can't find file: %s", mediaPath.string().c_str());
Datum undef;
return undef;
}
}
eventPayload.addElement(M_PACKAGER_MEDIA_BUFFERS_STRING,
mediaFilesPayload);
}
return SystemEventFactory::systemEventPackage(M_SYSTEM_DATA_PACKAGE,
M_EXPERIMENT_PACKAGE,
eventPayload);
}
IncludedFilesParser::IncludedFilesParser(const std::string &_path) :
XMLParser(_path, "MWMediaPackagerTransformation.xsl"),
manifest(M_LIST, 1)
{ }
void IncludedFilesParser::parse(bool announce_progress) {
// Load the experiment file, applying any preprocessing and/or XInclude substitutions
loadFile();
// Look for resource declarations
auto xpathObject = evaluateXPathExpression("//resource/@path[string-length() != 0]");
if (xpathObject) {
BOOST_SCOPE_EXIT( xpathObject ) {
xmlXPathFreeObject(xpathObject);
} BOOST_SCOPE_EXIT_END
if (xpathObject->type == XPATH_NODESET && xpathObject->nodesetval && xpathObject->nodesetval->nodeNr > 0) {
// Found one or more resource declarations, so add the identified files and directories to
// the manifest
for (int nodeIndex = 0; nodeIndex < xpathObject->nodesetval->nodeNr; nodeIndex++) {
auto path = _getContent(xpathObject->nodesetval->nodeTab[nodeIndex]);
if (boost::filesystem::is_directory(expandPath(getWorkingPathString(), path))) {
addDirectory(path, true); // Recursive
} else {
manifest.addElement(path);
}
}
// Return without parsing the file. This allows experiments that declare resources to use
// run-time expressions in "path" and other attributes that would otherwise need to be
// evaluated at parse time.
return;
}
}
// No resource declarations found, so parse the experiment (expanding any replicators), and
// infer the included files from component attributes
XMLParser::parse(announce_progress);
}
void IncludedFilesParser::addDirectory(const std::string &directoryPath, bool recursive) {
std::vector<std::string> filePaths;
mw::getFilePaths(getWorkingPathString(), directoryPath, filePaths, recursive);
for (const auto &path : filePaths) {
manifest.addElement(path);
}
}
void IncludedFilesParser::_processCreateDirective(xmlNode *node) {
xmlNode *child = node->children;
while (child != NULL) {
string name((const char *)(child->name));
if (name == "path") {
string filePath = _getContent(child);
manifest.addElement(filePath);
} else if (name == "directory_path") {
string directoryPath = _getContent(child);
addDirectory(directoryPath, false); // Not recursive
}
child = child->next;
}
}
void IncludedFilesParser::_processAnonymousCreateDirective(xmlNode *node) {
_processCreateDirective(node);
}
END_NAMESPACE_MW
import matplotlib.pyplot as plt
import scipy
import scipy.stats as stats
import json
import csv
#import seaborn as sns; sns.set()
import numpy as np
import pandas as pd
import missingno as msno
from matplotlib.pyplot import *
import warnings
import random
# Draw plot
import matplotlib.patches as patches
from matplotlib.ticker import MultipleLocator, ScalarFormatter
#get_ipython().run_line_magic('matplotlib', 'inline')
# Prepare Data
df = pd.read_csv("./corporate_diversity.csv")
# Prepare Data
#df = df.groupby('Releases').size().reset_index(name='Companies')
# n = df['Releases'].unique().__len__()+1
# all_colors = list(plt.cm.colors.cnames.keys())
# random.seed(9)
# c = random.choices(all_colors, k=n)
# Plot Bars
fig, host = plt.subplots(figsize=(6,4), dpi=80) #, facecolor=(1, 1, 1) , facecolor="white"
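# host carries the primary y-axis; the twinx() calls below add extra
# y-axes sharing the same x-axis (the "parasite axes" pattern)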
par1 = host.twinx()
par2 = host.twinx()
def make_patch_spines_invisible(ax):
ax.set_frame_on(True)
ax.patch.set_visible(False)
for sp in ax.spines.values():
sp.set_visible(False)
# host.set_facecolor('xkcd:white')
# Offset the right spine of par2. The ticks and label have already been
# placed on the right by twinx above.
par2.spines["right"].set_position(("axes", 1.2))
# Having been created by twinx, par2 has its frame off, so the line of its
# detached spine is invisible. First, activate the frame but make the patch
# and spines invisible.
make_patch_spines_invisible(par2)
##Second, show the right spine.
par2.spines["right"].set_visible(True)
grid(color='r', linestyle='-', linewidth=.2)
host.set_xlim(0, 13)
host.set_ylim(0, 220)
par1.set_ylim(0, 5000)
host.set_xlabel("OpenStack Releases", fontsize=16)
host.set_ylabel("#Contributing Companies/release cycle", fontsize=16)
par1.set_ylabel("#Companies (NoC) with 50% of total commits", fontsize=16)
# # Add patches to color the X axis labels
f1 = patches.Rectangle((.50, -0.005), width=.40, height=.10, alpha=.2,
facecolor='green', transform=fig.transFigure)
f2 = patches.Rectangle((.120, -0.005), width=.370, height=.10, alpha=.2,
facecolor='yellow', transform=fig.transFigure)
fig.add_artist(f1)
fig.add_artist(f2)
p1, = host.plot(df['Releases'], df['Companies'], "darkblue", label="#Companies/Release")
p2, = par1.plot(df['Releases'], df['comcmts'], "k--", label="NoC with 50% commits")
host.yaxis.label.set_color(p1.get_color())
par1.yaxis.label.set_color(p2.get_color())
tkw = dict(size=5, width=2.5)
host.tick_params(axis='y', colors=p1.get_color(), **tkw)
par1.tick_params(axis='y', colors=p2.get_color(), **tkw)
host.tick_params(axis='x', **tkw, rotation=30)
lines = [p1, p2]
# plt.rcParams['axes.facecolor'] = 'white'
# sns.despine(left=True)
plt.yticks(visible=True)
plt.xticks(visible=True)
# plt.rcParams['axes.facecolor'] = 'w'
host.legend(lines, [l.get_label() for l in lines], fontsize=16)
# axhline(0,color='red') # x = 0
# axvline(0,color='red') # y = 0
host.grid(color='navy')
plt.rcParams['grid.linewidth'] = 2.
plt.rcParams.update({'axes.spines.left': True, 'axes.spines.right': True})
# Don't allow the axis to be on top of your data
host.set_axisbelow(True)
# Turn on the minor TICKS, which are required for the minor GRID
host.minorticks_on()
# Customize the major grid
host.grid(which='major', linestyle='-', linewidth='1.5', color='navy')
# Customize the minor grid
host.grid(which='minor', linestyle=':', linewidth='0.5', color='gray')
plt.show()
[STATEMENT]
lemma positive_add: "positive x \<Longrightarrow> positive y \<Longrightarrow> positive (x + y)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>positive x; positive y\<rbrakk> \<Longrightarrow> positive (x + y)
[PROOF STEP]
apply transfer
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>x y. \<lbrakk>snd x \<noteq> 0; 0 < fst x * snd x; snd y \<noteq> 0; 0 < fst y * snd y\<rbrakk> \<Longrightarrow> 0 < fst (fst x * snd y + fst y * snd x, snd x * snd y) * snd (fst x * snd y + fst y * snd x, snd x * snd y)
[PROOF STEP]
apply (auto simp add: zero_less_mult_iff add_pos_pos add_neg_neg mult_pos_neg mult_neg_pos mult_neg_neg)
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Copyright (C) @author: jose
Simple Perceptron for logical gates
FI UNAM
Created on Thu Jun 22 20:24:46 2017
"""
from random import choice,random
import numpy as np
import matplotlib.pyplot as plt
def set_data(target):
x = [np.array([0,0]),
np.array([0,1]),
np.array([1,0]),
np.array([1,1])]
bias = np.array([1 for _ in range(4)])
# inputs: [x1 | x0 | bias]
inputs = np.column_stack((x,bias))
# return data: [x1 | x0 | bias | target]
return [(np.array(i),j) for i,j in zip(inputs,target)]
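# Bipolar step activation: +1 when the weighted sum is >= 0, else -1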
heaviside = lambda x: 1 if x >= 0 else -1
def train(target,w,eta=0.1,epochs=40):
    # record the error made at each weight update
errors = []
#w_tmp = []
# Updating weights
for _ in range(epochs):
x,expected = choice(target)
result = np.dot(w,x)
error = expected - heaviside(result)
errors.append(error)
w += eta*error*x
return [w,error]
def predict(inputs,w):
# inputs: X + bias
return 1 if np.dot(inputs,w) >= 0 else -1
def run():
# output for a nand gate
target = np.array([1,1,1,-1])
# Random weights
w = np.array([random() for _ in range(3)])
print("random weights: {0}".format(w))
nand = set_data(target)
w,error = train(nand,w,eta=0.1,epochs=65)
print("weights updated: {0}".format(w))
print("Predicting\tAproximation\tResult")
print("{0}\t\t{1:.5f}\t\t{2}".format([0,0],np.dot([0,0,1],w),predict([0,0,1],w)))
print("{0}\t\t{1:.5f}\t\t{2}".format([0,1],np.dot([0,1,1],w),predict([0,1,1],w)))
print("{0}\t\t{1:.5f}\t\t{2}".format([1,0],np.dot([1,0,1],w),predict([1,0,1],w)))
print("{0}\t\t{1:.5f}\t{2}".format([1,1],np.dot([1,1,1],w),predict([1,1,1],w)))
if __name__ == '__main__':
run()
### FRAMEWORKS AND DEPENDENCIES
import copy
#from google.cloud import bigquery
import os
import sys
from collections import OrderedDict
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as mpl_color_map
from PIL import Image, ImageFilter
import matplotlib as mpl
import streamlit as st
import matplotlib
from matplotlib import cm
import time
import random
import datetime
from datetime import date
from datetime import timedelta
from dateutil.relativedelta import relativedelta
import pickle
import pandas_bokeh
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource
pandas_bokeh.output_notebook()
from pypfopt.expected_returns import mean_historical_return
from pypfopt.risk_models import CovarianceShrinkage
from pypfopt.efficient_frontier import EfficientFrontier
from pypfopt import risk_models
from pypfopt import expected_returns
from pypfopt import plotting
from pypfopt.efficient_frontier import EfficientCVaR,EfficientCDaR
from pypfopt.discrete_allocation import DiscreteAllocation,get_latest_prices
from pypfopt import objective_functions
from pyomo.environ import *
from pyomo.opt import SolverFactory
import plotly.graph_objects as go
from src.test_pipeline import test_pipeline
from src.test_pipeline import test_rolling
from src.test_pipeline import random_test
from src.test_pipeline import Hierarchical_Computing
plt.rcParams["figure.figsize"] = (18,5)
@st.cache
def data_loader():
print('Loading ... ')
# LOAD DATA
complete_df = pd.read_csv("data/complete_df.csv")
betas = pd.read_csv("data/betas.csv")
category = pd.read_csv('data/category.csv')
train = pd.read_csv("data/prices_train.csv")
train.set_index("Unnamed: 0",inplace=True)
train.index.name= 'date'
test = pd.read_csv("data/prices_test.csv")
test.set_index("Unnamed: 0",inplace=True)
test.index.name= 'date'
with open('data/different_funds_7.pkl', 'rb') as f: #Cleaning duplicated name funds
DifferentNameFunds = pickle.load(f)
train = train[DifferentNameFunds]
test = test[DifferentNameFunds]
return complete_df,betas,category,train,test
### Title
def header():
html_header="""
<head>
<title>PControlDB</title>
<meta charset="utf-8">
<meta name="keywords" content="IroAdvisor , Your personal fund portfolio optimizer">
<meta name="description" content="IroAdvisor Your personal fund portfolio optimizer">
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<h1 style="font-size:300%; color:#008080; font-family:Georgia"> IROADVISOR (Beta)</h1>
<h2 style="color:#008080; font-family:Georgia"> Your personal fund portfolio optimizer</h2> <br>
<hr style= " display: block;
margin-top: 0.5em;
margin-bottom: 0.5em;
margin-left: auto;
margin-right: auto;
border-style: inset;
border-width: 1.5px;">
"""
st.set_page_config(
page_title = "IroAdvisor — Your personal fund portfolio optimizer",
page_icon = Image.open('./data/crop_circle.png') ,
layout = "wide",
initial_sidebar_state = "auto")
st.markdown('<style>body{background-color: #fbfff0}</style>',unsafe_allow_html=True)
st.markdown(html_header, unsafe_allow_html=True)
# st.markdown(""" <style>
# #MainMenu {visibility: hidden;}
# footer {visibility: hidden;}
# </style> """, unsafe_allow_html=True)
html_header1="""
<h2 style="font-size:300%; color:#008080; font-family:Georgia">Risk Assessment Questionnaire</h2>
"""
st.markdown(html_header1, unsafe_allow_html=True)
with st.expander('If you have low financial knowledge, we recommend you to fill this Questionnaire'):
html_header1="""
<h5 style="color:#008080; text-align:center;font-family:Georgia"> We will ask you 7 questions with the aim
of getting to know you better <br> and in this way rule out certain funds.</h5> <br>
"""
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.markdown(html_header1, unsafe_allow_html=True)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
score = 0
col1, col2,col3,col4,col5= st.columns([3,10,5,10,3])
with col1:
st.write("")
with col2:
q1 = st.radio(
"1. If you had to choose between more job security with a small pay increase and less job security with a big pay increase, which would you pick?",
('A. Definitely more job security with a small pay increase',
'B. Probably more job security with a small pay increase',
'C. Probably less job security with a big pay increase',
'D. Definitely less job security with a big pay increase '))
if 'A.' in q1:
score += 4
elif 'B.' in q1:
score += 3
elif 'C.' in q1:
score += 2
elif 'D.' in q1:
score += 1
with col3:
st.write("")
with col4:
q4 = st.radio(
"4. Which of the statements better reflect the way you feel in situations in which you have little to no control over the outcome?",
('A. I tend to panic and start making bad decisions.',
'B. I feel powerless and start overthinking.',
'C. I get a bit nervous but I let the situation develop.',
'D. I remain completely calm.'))
if 'A.' in q4:
score += 4
elif 'B.' in q4:
score += 3
elif 'C.' in q4:
score += 2
elif 'D.' in q4:
score += 1
with col5:
st.write("")
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
col1, col2,col3,col4,col5= st.columns([3,10,5,10,3])
with col1:
st.write("")
with col2:
q2 = st.radio(
"2. Imagine you were in a job where you could choose to be paid salary, commission, or a mix of both. Which would you pick?",
('A. All salary',
'B. Mainly salary ',
'C. Mainly commission ',
'D. All commission'))
if 'A.' in q2:
score += 4
elif 'B.' in q2:
score += 3
elif 'C.' in q2:
score += 2
elif 'D.' in q2:
score += 1
with col3:
st.write("")
with col4:
q5 = st.radio(
"5. Of the following investments, which of the following scenarios would you be most comfortable with:",
('A. You can lose down to -2%, and gain up to +9%',
'B. You can lose down to -7%, and gain up to +13%',
'C. You can lose down to -15%, and gain up to +26%',
'D. You can lose down to -31%, and gain up to +48%'))
if 'A.' in q5:
score += 4
elif 'B.' in q5:
score += 3
elif 'C.' in q5:
score += 2
elif 'D.' in q5:
score += 1
with col5:
st.write("")
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
col1, col2,col3,col4,col5= st.columns([3,10,5,10,3])
with col1:
st.write("")
with col2:
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
q3 = st.radio(
"3. When investing, you are primarily concerned about:",
('A. Not losing money (combat inflation)',
'B. Keeping the money you invest and making a bit more',
'C. Relatively consistent growth over time',
'D. Making as much money as possible from your investments'))
if 'A.' in q3:
score += 4
elif 'B.' in q3:
score += 3
elif 'C.' in q3:
score += 2
elif 'D.' in q3:
score += 1
with col3:
st.write("")
with col4:
q6 = st.radio(
"6. Back in 2008, the market took a major hit and stocks went down nearly 30%. If you had owned stocks at that time, how would you have reacted (or your real reaction if you actually did have money invested).",
('A. You prefer losing some money than risk losing any more: sell everything!',
'B. Just to be safe, you prefer to sell some of your assets and keep a small part.',
'C. Do nothing! Let the market flow and see how it plays.',
'D. Drawdown? Buy more, now that the price is low!'))
if 'A.' in q6:
score += 4
elif 'B.' in q6:
score += 3
elif 'C.' in q6:
score += 2
elif 'D.' in q6:
score += 1
with col5:
st.write("")
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
col1, col2,col3 = st.columns([12,10,7])
with col1:
st.write("")
with col2:
q7 = st.radio(
"7. Your current age is:",
('A. Over 50',
'B. Between 35 and 49',
'C. Between 25 and 34',
'D. Under 25'))
if 'A.' in q7:
score += 4
elif 'B.' in q7:
score += 3
elif 'C.' in q7:
score += 2
elif 'D.' in q7:
score += 1
with col3:
st.write("")
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
        agree = st.checkbox('Have you finished filling in the Questionnaire?',help='If you want to modify several answers after ticking this checkbox, we recommend unticking it first so that the program does not recalculate while you are modifying the Questionnaire.')
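        # Translate the questionnaire total (7 questions, 1-4 points each,
        # so scores range from 7 to 28) into one of four risk buckets; this
        # lower_limit is later handed to the optimizer as its risk_level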
if agree:
if score in range(21,29):
lower_limit= 1
elif score in range(15,21):
lower_limit= 2
elif score in range(8,15):
lower_limit= 3
elif score in range(1,8):
lower_limit= 4
else:
lower_limit= 0
html_header1="""
<hr style= " display: block;
margin-top: 0.5em;
margin-bottom: 0.5em;
margin-left: auto;
margin-right: auto;
border-style: inset;
border-width: 1.5px;">
"""
st.markdown(html_header1, unsafe_allow_html=True)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
return lower_limit
def initial_metrics(info_dict,budget):
html_card_header1="""
<div class="card">
<div class="card-body" style="border-radius: 10px 10px 0px 0px; background: #eef9ea; padding-top: 5px; width: 300px;
height: 50px;">
<h3 class="card-title" style="background-color:#eef9ea; color:#008080; font-family:Georgia; text-align: center; padding: 0px 0;">Volatility</h3>
</div>
</div>
"""
html_card_header2="""
<div class="card">
<div class="card-body" style="border-radius: 10px 10px 0px 0px; background: #eef9ea; padding-top: 5px; width: 300px;
height: 50px;">
<h3 class="card-title" style="background-color:#eef9ea; color:#008080; font-family:Georgia; text-align: center; padding: 0px 0;">Total Returns</h3>
</div>
</div>
"""
html_card_header3="""
<div class="card">
<div class="card-body" style="border-radius: 10px 10px 0px 0px; background: #eef9ea; padding-top: 5px; width: 300px;
height: 50px;">
<h3 class="card-title" style="background-color:#eef9ea; color:#008080; font-family:Georgia; text-align: center; padding: 0px 0;">Capital Earned</h3>
</div>
</div>
"""
html_card_header4="""
<div class="card">
<div class="card-body" style="border-radius: 10px 10px 0px 0px; background: #eef9ea; padding-top: 5px; width: 300px;
height: 50px;">
<h3 class="card-title" style="background-color:#eef9ea; color:#008080; font-family:Georgia; text-align: center; padding: 0px 0;">Capital after 1 Year</h3>
</div>
</div>
"""
with st.container():
col1, col2, col3, col4, col5, col6, col7, col8,col9 = st.columns([1,15,1,15,1,15,1,15,1])
with col1:
st.write("")
with col2:
st.markdown(html_card_header1, unsafe_allow_html=True)
fig_c1 = go.Figure(go.Indicator(
mode="number",
value=info_dict['test_volatility'],
number={'suffix': "%", "font": {"size": 40, 'color': "#008080", 'family': "Arial"}, 'valueformat': '.3f'}))
fig_c1.update_layout(autosize=False,
width=350, height=90, margin=dict(l=20, r=20, b=20, t=30),
paper_bgcolor="#fbfff0", font={'size': 20})
st.plotly_chart(fig_c1)
with col3:
st.write("")
with col4:
st.markdown(html_card_header2, unsafe_allow_html=True)
fig_c2 = go.Figure(go.Indicator(
mode="number",
value= info_dict['test_return'],
number={'suffix': "%", "font": {"size": 40, 'color': "#008080", 'family': "Arial"}, 'valueformat': '.2f'}))
fig_c2.update_layout(autosize=False,
width=350, height=90, margin=dict(l=20, r=20, b=20, t=30),
paper_bgcolor="#fbfff0", font={'size': 20})
st.plotly_chart(fig_c2)
with col5:
st.write("")
with col6:
st.markdown(html_card_header3, unsafe_allow_html=True)
fig_c3 = go.Figure(go.Indicator(
mode="number",
value= info_dict['money_test_year'],
number={'suffix': "$","font": {"size": 40, 'color': "#008080", 'family': "Arial"}, 'valueformat': '.2f'}))
fig_c3.update_layout(autosize=False,
width=350, height=90, margin=dict(l=20, r=20, b=20, t=30),
paper_bgcolor="#fbfff0", font={'size': 20})
st.plotly_chart(fig_c3)
with col7:
st.write("")
with col8:
st.markdown(html_card_header4, unsafe_allow_html=True)
fig_c4 = go.Figure(go.Indicator(
mode="number",
value= budget+info_dict['money_test_year'],
number={'suffix': "$", "font": {"size": 40, 'color': "#008080", 'family': "Arial"}, 'valueformat': '.2f'}))
fig_c4.update_layout(autosize=False,
width=350, height=90, margin=dict(l=20, r=20, b=20, t=30),
paper_bgcolor="#fbfff0", font={'size': 20})
st.plotly_chart(fig_c4)
with col9:
st.write("")
html_br="""
<br>
"""
st.markdown(html_br, unsafe_allow_html=True)
def user_portfolio(weights,returns2):
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
with st.container():
col1, col2, col3 = st.columns([12,0.5,9])
with col1:
p = returns2.plot_bokeh.line(
figsize=(650, 500),
title="Evolution of budget",
xlabel="Date",
ylabel="Your budget [$]",
panning=False,
zooming=True,
legend="top_left")
p.legend.label_text_font_size = '8pt'
st.bokeh_chart(p)
with col2:
st.write("")
with col3:
st.markdown("<h3 style='text-align: center;color:#008080; font-family:Georgia;'>Your Investments</h3>",unsafe_allow_html=True)
#PIE
f_names= []
data = []
for elem in weights:
if weights[elem]> 0:
data.append(weights[elem])
f_names.append(elem)
# fig, ax = plt.subplots(figsize=(6, 3), subplot_kw=dict(aspect="equal"))
def func(pct):
return "{:.1f}%".format(pct)
fig = go.Figure(data=[go.Pie(labels=f_names, values=data, hole=.3,sort=True)])
# fig.update_layout( title_text="Your Investments")
st.plotly_chart(fig)
### Computations
@st.cache
def perform_test_pipe(option_risk,budget,train, test,lower_limit):
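    # Two-stage optimization: Hierarchical_Computing screens the full fund
    # universe in chunks to pre-select candidate funds, then test_pipeline
    # fits portfolio weights on that shortlist with the chosen risk measure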
selected_funds = Hierarchical_Computing(train,test,market_neutral=False,n_steps=2,split_size=500,print_every=20,
min_weight=0.001,add_leftovers=False,method=option_risk,risk_level=lower_limit,risk=0.05,gamma=0.2)
weights,returns2,info_dict = test_pipeline(train[selected_funds],test,market_neutral=False,
min_weight=0.04,add_leftovers=True,samples=0,method=option_risk,risk_level=lower_limit,
risk=0.05,budget=budget,gamma=0.15,rs=40) #Methods = CDaR, CVaR, sharpe, MAD, ML
return weights,returns2,info_dict
### Function for performing an apply on the chosen funds (extracting extra data from the category csv)
def add_extra_info(bench_id,category):
sub_filter = category[category['benchmark_finametrix_id'] == bench_id]
indx = sub_filter.index[0]
return category.iloc[indx][['benchmark','category','morningstar_category_id']]
### Controllers
def controllers2():
### Description
    # st.sidebar.markdown("""<p style='text-align: center;'>This is a pocket application focused on advising individuals on starting to invest
    #                      in the financial world. This app is for those who have a basic idea of how they want to invest, but don't have
    #                      enough knowledge to build their own investment portfolio.</p>""",unsafe_allow_html=True)
st.sidebar.image("data/complete_logo.png")
st.sidebar.markdown("<h1 style='text-align: center;'>Choose the following Measures</h1>",unsafe_allow_html=True)
    option_risk = st.sidebar.selectbox('Select a Risk Measure',['CVaR', 'CDaR', 'MAD','ML','sharpe'],help="""
    WARNING --> Leave CVaR selected if you are not familiar with these terms!
    - Conditional Value at Risk (CVaR) : Risk assessment measure that quantifies the amount of tail risk an investment portfolio has.
    - Conditional Drawdown at Risk (CDaR) : Risk measure which quantifies, in aggregate, the number and magnitude of the portfolio drawdowns over some period of time.
    - MaxLoss (ML) : Worst-case measure given by the largest single-period loss of the portfolio over the sample.
    - Mean Absolute Deviation (MAD) : Average absolute deviation of the portfolio returns from their mean.
    - Sharpe Ratio (sharpe) : Average return earned in excess of the risk-free rate per unit of volatility or total risk.""")
# risk_lvl = st.sidebar.slider(label="Risk Level",min_value=0.0,max_value=1.0,value=0.2,step=0.005,help="Between 0 and 1, choose a value. Keep in mind that the lower the value you choose the lower risk you are taking and thus you are being more conservative. ")
    budget = st.sidebar.number_input('Enter your Investment Budget',min_value=0,value=2000,help="Total amount of money that you want to invest in this Portfolio" )
st.sidebar.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.sidebar.markdown('''
    - <h3>Risk Evaluation Method </h3> Method that the algorithm will use to perform the risk optimization (we recommend using CVaR or CDaR).
- <h3>Budget </h3> Amount of money the Client is willing to invest.
''',unsafe_allow_html=True)
st.sidebar.markdown("<h1 style='text-align: center;'></h1>",unsafe_allow_html=True)
st.sidebar.markdown("<h1 style='text-align: center;'>PARTNERED WITH</h1>",unsafe_allow_html=True)
st.sidebar.markdown("<h1 style='text-align: center;'></h1>",unsafe_allow_html=True)
st.sidebar.image("data/uc3m_logo.png")
st.sidebar.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.sidebar.image("data/aliance.png")
st.sidebar.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.sidebar.image("data/ironia.png")
return option_risk,budget
# Setting commas
def place_value(number):
return ("{:,}".format(number))
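# e.g. place_value(1234567) -> '1,234,567'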
def main():
sys.path.insert(0,"..")
### PAGE TITLE + ICON
lower_limit=header()
#### LOAD THE DATA AND PERFORM THE OPERATIONS
complete_df,betas,category,train,test=data_loader()
# Reuse the Controllers output
option_risk,budget = controllers2()
# Call our Function for performing all the computations
weights,returns2,info_dict = perform_test_pipe(option_risk,budget,train,test,lower_limit)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
html_subtitle="""
<h2 style="color:#008080; font-family:Georgia;"> User Portfolio </h2>
"""
st.markdown(html_subtitle, unsafe_allow_html=True)
### Portfolio Evolution Chart + Pie Chart ##########################################################################
user_portfolio(weights,returns2)
### Summary Metrics of the Portfolio ##########################################################################
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
initial_metrics(info_dict,budget)
### Additional Funds Information ##########################################################################
#COMPUTATIONS
    # In order to show additional info for the chosen funds
    funds_investment = [str(round(i*budget,3))+'$' for i in list(weights.values())] # [str(round(i * 100,2))+'%' for i in list(weights.values())]
    chosen_funds = list(returns2.columns[:-1])
    chosen_funds_info = complete_df.loc[complete_df['names'].isin(chosen_funds)].copy()
    chosen_funds_info['budget investment'] = funds_investment
    chosen_funds_info[['benchmark','category','morningstar_category_id']] = chosen_funds_info.benchmark_id.apply(lambda x: add_extra_info(x,category))
    chosen_funds_info = chosen_funds_info[['names','benchmark_id','budget investment','risk_level','category','benchmark','morningstar_category_id']]
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
html_subtitle="""
<h2 style="color:#008080; font-family:Georgia;"> Additional Fund's Information</h2>
"""
st.markdown(html_subtitle, unsafe_allow_html=True)
    # st.dataframe(chosen_funds_info)
st.markdown("<h3 style='text-align: center;'></h3>",unsafe_allow_html=True)
table = "<table>\n"
# Create the table's column headers
    head = ['Names','Benchmark Id','Budget Investment','Risk Level','Category','Benchmark','Morningstar Category Id']
table += ' <tr style="background-color:#eef9ea; color:#008080; font-family:Georgia; font-size: 15px">\n'
for column in head:
table += " <th>{0}</th>\n".format(column.strip())
table += " </tr>\n"
# # Create the table's row data
    for row in chosen_funds_info.to_numpy().tolist():
        table += "    <tr>\n"
        for column in row:
            table += "        <td>{0}</td>\n".format(str(column).strip())
        table += "    </tr>\n"
table += "</table>"
st.markdown(table, unsafe_allow_html=True)
if __name__=="__main__":
main()
#textColor="#989595"
lemma ternary_to_bool_bool_to_ternary: "ternary_to_bool (bool_to_ternary X) = Some X"
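  (* Round trip: converting a boolean to a ternary value and back yields the original boolean, wrapped in Some. *)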
  by (cases X, simp_all)
#################################################################
# Name: BHtest.py #
# Authors: Michael Battaglia #
# Course: Phy407 #
# Inst.: Paul Kushner #
# Date: December 17, 2016 #
# Function: Program tests speed and other characteristics of        #
# quadtree algorithms computing forces on N bodies #
# interacting under gravity. #
#################################################################
#essential modules
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from time import perf_counter
#essential imports
from Body import Body
from Quad import Quad
from BHTree import BHTree
from MCgalaxy import generateGalaxy
#function: main
if __name__ == '__main__':
#Milky Way parameters
r0 = 3 #kpc, scale length of galaxy
m0 = 50.0 #10^9 solar mass, mass of galaxy
#simulation space
N=100
L = 15.0 #length of box, kpc
#Barnes-Hut simulation resolution
theta = 1.0
epsilon = theta*L/np.sqrt(N) #softening length
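    #theta is the Barnes-Hut opening angle: a tree cell of size s at distance d
    #is approximated as one pseudo-body when s/d < theta (smaller theta = more accurate, slower)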
#time evolution parameters
dt = 0.1 #10Myr
T = 100.0 #10Myr
steps = int(T/dt)
    #generate N masses in the simulation box
bodies = generateGalaxy(r0, m0, N, L)
#plot galactic bodies, initial distribution
for body in bodies:
body.plot()
plt.xlim([-L,L])
plt.ylim([-L,L])
plt.show()
#generate Barnes-Hut tree on original grid
tree = BHTree(Quad(-L,-L,2*L))
#populate tree with bodies from list
for body in bodies:
tree.insertBody(body)
#calculate force on every body from tree and evolve leapfrog step
for body in bodies:
body.resetForce()
tree.applyForce(body, theta, epsilon)
#take a half time step
body.leapFrog(dt)
    #test energy conservation over evolution (steps+1 samples: initial energy plus one per step)
    t = np.linspace(0, steps*dt, steps+1)
    E = np.zeros(len(t))
#sum kinetic
for body in bodies:
E[0]+=body.Kenergy(dt)
#sum potential
for j in range(len(bodies)):
for k in range(j+1,len(bodies)):
E[0]+=bodies[j].Uinteract(bodies[k])
#evolve N-body in time
for i in range(steps):
#computation counter
        print("Computing time step "+str(i+1)+"/"+str(steps))
#generate Barnes-Hut tree on original grid
tree = BHTree(Quad(-L,-L,2*L))
#populate tree with bodies from list
for body in bodies:
tree.insertBody(body)
#calculate force on every body from tree and evolve
for body in bodies:
body.resetForce()
tree.applyForce(body, theta, epsilon)
#take a time step
body.update(dt)
        #calculate energy after this time step
        for body in bodies:
            E[i+1] += body.Kenergy(dt)
        for j in range(len(bodies)):
            for k in range(j+1,len(bodies)):
                E[i+1] += bodies[j].Uinteract(bodies[k])
plt.plot(t, E)
plt.title("Energy conservation")
plt.ylabel("Energy [kMs*kpc^2/(10Myr)^2]")
plt.show()
#test BH tree construction/traversal speed
    nums = list(range(1,101)) + list(range(101,1001,10)) + list(range(1001,10000,100))
timesTree = []
timesForce = []
for i in range(len(nums)):
num = nums[i]
        print("Computing number "+str(num)+"/10000")
bodies = generateGalaxy(r0, m0, num, L)
        #tree construction
        t_start = perf_counter()
        tree = BHTree(Quad(-L,-L,2*L))
        for body in bodies:
            tree.insertBody(body)
        t_end = perf_counter()
        timesTree.append(t_end - t_start)
        #tree traversal
        t_start = perf_counter()
        for body in bodies:
            body.resetForce()
            tree.applyForce(body, theta, epsilon)
        t_end = perf_counter()
        timesForce.append(t_end - t_start)
plt.plot(nums, timesTree)
plt.xlabel("N-bodies")
plt.ylabel("Tree time")
plt.title("time to generate Barnes-Hut quadtree")
plt.show()
plt.plot(nums, timesForce)
plt.xlabel("N-bodies")
plt.ylabel("traverse time")
plt.title("traversing Barnes-Hut quadtree for N bodies")
plt.show()
import os
import time
import numpy as np
import pickle
from utils import *
from gym_pybullet_drones.envs.BaseAviary import DroneModel, Physics
from gym_pybullet_drones.envs.CtrlAviary import CtrlAviary
from gym_pybullet_drones.control.DSLPIDControl import DSLPIDControl
from gym_pybullet_drones.utils.Logger import Logger
GUI = False
RECORD_VIDEO = False
TRACE_FILE = "example_trace.pkl"
PHYSICS = Physics.PYB
if __name__ == "__main__":
#### Load a trace and control reference from a .pkl file ###########################################
with open(os.path.dirname(os.path.abspath(__file__))+"/../files/"+TRACE_FILE, 'rb') as in_file:
TRACE_TIMESTAMPS, TRACE_DATA, TRACE_CTRL_REFERENCE, _, _, _ = pickle.load(in_file)
#### Compute trace's parameters ####################################################################
DURATION_SEC = int(TRACE_TIMESTAMPS[-1]); SIMULATION_FREQ_HZ = int(len(TRACE_TIMESTAMPS)/TRACE_TIMESTAMPS[-1])
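    # (SIMULATION_FREQ_HZ is the trace sampling rate: number of samples divided by total duration)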
#### Initialize the simulation #####################################################################
env = CtrlAviary(drone_model=DroneModel.CF2X, num_drones=1, initial_xyzs=np.array([0,0,.1]).reshape(1,3), \
physics=PHYSICS, freq=SIMULATION_FREQ_HZ, gui=GUI, record=RECORD_VIDEO, obstacles=False)
INITIAL_STATE = env.reset(); action = {"0": np.zeros(4)}; pos_err = 9999.
#### Assuming TRACE_FILE starts at position [0,0,0] and the sim starts at [0,0,INITIAL_STATE[2]] ###
TRACE_CTRL_REFERENCE[:,2] = INITIAL_STATE["0"]["state"][2]
#### Initialize the logger #########################################################################
logger = Logger(logging_freq_hz=SIMULATION_FREQ_HZ, num_drones=2, duration_sec=DURATION_SEC)
#### Initialize the controller #####################################################################
ctrl = DSLPIDControl(env)
#### Run the comparison ############################################################################
START = time.time()
for i in range(DURATION_SEC*env.SIM_FREQ):
#### Step the simulation ###########################################################################
obs, reward, done, info = env.step(action)
#### Compute the next action using the set points from the trace file ##############################
action["0"], pos_err, yaw_err = ctrl.computeControlFromState(control_timestep=env.TIMESTEP, state=obs["0"]["state"], \
target_pos=TRACE_CTRL_REFERENCE[i,0:3], target_vel=TRACE_CTRL_REFERENCE[i,3:6])
#### Re-arrange the trace for consistency with the logger #########################################
trace_obs = np.hstack([TRACE_DATA[i,0:3], np.zeros(4), TRACE_DATA[i,6:9], TRACE_DATA[i,3:6], TRACE_DATA[i,9:12], TRACE_DATA[i,12:16]])
#### Log the trace #################################################################################
logger.log(drone=0, timestamp=TRACE_TIMESTAMPS[i], state=trace_obs, control=np.hstack([TRACE_CTRL_REFERENCE[i,:], np.zeros(6)]))
#### Log the simulation ############################################################################
logger.log(drone=1, timestamp=i/env.SIM_FREQ, state=obs["0"]["state"], control=np.hstack([TRACE_CTRL_REFERENCE[i,:], np.zeros(6)]))
#### Printout ######################################################################################
if i%env.SIM_FREQ==0: env.render()
#### Sync the simulation ###########################################################################
if GUI: sync(i, START, env.TIMESTEP)
#### Close the environment #########################################################################
env.close()
#### Save the simulation results ###################################################################
logger.save()
#### Plot the simulation results ###################################################################
logger.plot(pwm=True)
(************************************************************************)
(* Icharate Toolkit *)
(* Houda ANOUN *)
(* Pierre CASTERAN *)
(* 2003 -2004 *)
(* LaBRI *)
(************************************************************************)
Add LoadPath "..".
Set Implicit Arguments.
Require Export crossDep.
(* Analysis of crossed dependencies in Dutch: this phenomenon cannot be analysed by a CFG! *)
(* Examples taken from 'Labelled Deduction in the Composition of Form and Meaning', M. Moortgat, 1999 *)
Require Export notations.
Inductive I:Set:= |i0|i1.
Definition eqDecI: eqdec I.
unfold eqdec; decide equality.
Defined.
Inductive W:Set:=
Peter|Mary|wil|plagen|slapen.
(* here the translation of the above words:
wil --> wants
plagen-->tease
slapen-->sleep
*)
Definition eqDecW:eqdec W.
unfold eqdec; decide equality.
Defined.
Inductive atoms:Set:= np |s|inf.
Definition eqDecA:eqdec atoms.
unfold eqdec; decide equality.
Defined.
Open Scope lamb_scope.
Definition mapping(ato: atoms):semType:=
match ato with
|np =>e
|s=>t
|inf=> e-->t (* to review: I am not at all sure! *)
end.
Open Scope mmg_scope.
Notation "'!' F":=(At F)(at level 40):mmg_scope.
Notation "A '/o' B"
:= (Slash i0 A B) (at level 41, right associativity) : mmg_scope.
Notation " A '\o' B"
:= (Backslash i0 A B) (at level 41, right associativity) : mmg_scope.
Notation " A 'Oo B"
:= (Dot i0 A B) (at level 38, left associativity) : mmg_scope.
Notation "'[]o' A" :=(Box i0 A) (at level 30):mmg_scope.
Notation "'<>o' A":=(Diamond i0 A)(at level 30):mmg_scope.
Notation "A '/1' B"
:= (Slash i1 A B) (at level 41, right associativity) : mmg_scope.
Notation " A '\1' B"
:= (Backslash i1 A B) (at level 41, right associativity) : mmg_scope.
Notation " A 'O1 B"
:= (Dot i1 A B) (at level 38, left associativity) : mmg_scope.
Notation "'[]1' A" :=(Box i1 A) (at level 30):mmg_scope.
Notation "'<>1' A":=(Diamond i1 A)(at level 30):mmg_scope.
(* set of semantic constants*)
Inductive consSem:Set:=
|mary|peter|wi|plag|slap.
Notation "'$' n" :=(num (semRess I I atoms W) n) (at level 40):mmg_scope.
Notation "% n" :=(num consSem n)(at level 40):mmg_scope.
Notation "'#' w":= (oneW I I w)(at level 40):mmg_scope .
Notation " T1 ';1' T2 ":=(comW i1 T1 T2)(at level 41,right associativity):mmg_scope.
Notation " T1 ';o' T2 ":=(comW i0 T1 T2)(at level 41,right associativity):mmg_scope.
Definition setType(cs:consSem):semType:=
match cs with
| peter => e
| mary => e
|wi =>(e-->t)-->e-->t
|plag=>(intention (e-->e-->t))
|slap=>(intention (e-->t))
end.
Definition lexic(w:W):list (prod (Form I I atoms) (lambdaC consSem)):=
match w with
|Peter => (! np , @peter)::nil
|Mary =>(! np, @mary)::nil
|wil=>([]o ((! np \1 !s)/o !inf) ,@wi)::nil
|plagen=>([]o (!np \1 !inf) ,@plag)::nil
|slapen=>([]o (!inf) ,@slap)::nil
end.
Definition lexic1:lexicon.
econstructor.
eexact eqDecI.
eexact eqDecI.
eexact eqDecA.
eexact eqDecW.
eexact mapping.
eexact setType.
eexact lexic.
Defined.
Definition ext:=add_rule (K2Diam i1 i1) (add_rule (incDiam I i1 i0)
(add_rule (KDiam i0 i0) (add_rule (MPDiam i1 i0 i0) NL))).
Definition gram:Grammar.
eapply mk_gram with (lexic:=lexic1).
simpl.
exact ext.
Defined.
Definition frag:= Mary::wil::slapen::nil.
Definition my_contextW:= #Mary ;1 #wil ;o #slapen.
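(* The fragment "Mary wil slapen" means "Mary wants to sleep". *)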
Definition treeDeriv:[gram] frag>>[]1 (!s).
unfold deriveTo.
setCont0 my_contextW.
simpl.
boxI.
unfold ext.
eapply cross_depend.
repeat econstructor. (* make a tactic for this! *)
repeat econstructor.
repeat econstructor.
simpl.
ebackE.
axiom.
eslashE.
boxE.
axiom.
boxE.
axiom.
Defined.
Definition frag2:=Peter::Mary::wil::plagen::nil.
Definition cw:= #Peter ;1 #Mary ;1 #wil ;o #plagen.
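(* "Peter Mary wil plagen" (subordinate-clause word order) means "Peter wants to tease Mary": a crossed dependency. *)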
Definition tree2: [gram] frag2>> []1 (!s).
unfold deriveTo.
setCont0 cw.
simpl.
boxI.
eapply cross_depend;unfold ext.
repeat constructor.
repeat constructor.
repeat constructor.
simpl.
eapply StructRule.
constructor 2.
constructor 2.
constructor 2.
constructor 1.
constructor 3.
constructor.
apply MPDiam_rw.
ebackE.
axiom.
eslashE.
boxE.
axiom.
ebackE.
axiom.
boxE;axiom.
Defined.
module Docker
using Requests
using JSON
immutable DockerError
status::Int
msg::ByteString
end
const headers = Dict{}("Content-Type" => "application/json")
docker_uri(host) = URI("http://$host/v1.21")
docker_uri(host,endpoint) = URI("http://$host/v1.21/$endpoint")
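# e.g. (with a hypothetical host) docker_uri("localhost:2375","containers/json") == URI("http://localhost:2375/v1.21/containers/json")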
parse(data) = JSON.parse(join(map(Char,data)))
function create_container(host, image;
cmd::Cmd = ``,
entryPoint = "",
tty = true,
attachStdin = false,
openStdin = false,
attachStdout = true,
attachStderr = true,
memory = 0,
cpuSets = "",
volumeDriver = "",
portBindings = ["",""], # [ContainerPort,HostPort]
ports = [],
pwd = "")
url = docker_uri(host)
params = Dict{}("Image" => image,
"Cmd" => collect(cmd.exec),
"Tty" => tty,
"AttachStdin" => attachStdin,
"OpenStdin" => openStdin,
"AttachStdout" => attachStdout,
"AttachStderr" => attachStderr,
"ExposedPorts" => [string(dec(p),"/tcp")=>Dict{}() for p in ports],
"HostConfig" => Dict{}(
"Memory" => memory,
"CpusetCpus" => cpuSets,
"VolumeDriver" => volumeDriver,
"PortBindings" => Dict{}( string(portBindings[1],"/tcp") => [Dict{}( "HostPort" => string(portBindings[2]))]
)
)
)
if !isempty(entryPoint)
params["Entrypoint"] = entryPoint
end
if !isempty(cmd.exec)
params["Cmd"] = cmd
end
if !isempty(pwd)
params["WorkingDir"] = pwd
end
resp = post(URI("$url/containers/create"),json=params,headers=headers)
if resp.status != 201
throw(DockerError(resp.status,resp.data))
end
parse(resp.data)
end
function inspect_container(host,id)
resp = get(docker_uri(host,"containers/$id/json"))
if resp.status != 200
throw(DockerError(resp.status,resp.data))
end
parse(resp.data)
end
function start_container(host, id)
resp = post(docker_uri(host,"containers/$id/start"))
if resp.status != 204
throw(DockerError(resp.status,resp.data))
end
id
end
function restart_container(host, id)
resp = post(docker_uri(host,"containers/$id/restart"))
if resp.status != 204
throw(DockerError(resp.status,resp.data))
end
resp
end
function stop_container(host, id)
resp = post(docker_uri(host,"containers/$id/stop"))
if resp.status != 204
throw(DockerError(resp.status,resp.data))
end
resp
end
function pause_container(host, id)
resp = post(docker_uri(host,"containers/$id/pause"))
if resp.status != 204
throw(DockerError(resp.status,resp.data))
end
resp
end
function unpause_container(host, id)
resp = post(docker_uri(host,"containers/$id/unpause"))
if resp.status != 204
throw(DockerError(resp.status,resp.data))
end
id
end
function kill_container(host, id)
resp = post(docker_uri(host,"containers/$id/kill"),"")
if resp.status != 204
throw(DockerError(resp.status,resp.data))
end
resp
end
function remove_container(host, id)
resp = Requests.delete(docker_uri(host,"containers/$id?force=1"))
if resp.status != 204
throw(DockerError(resp.status,resp.data))
end
resp
end
function processes_container(host, id)
resp = get(docker_uri(host,"containers/$id/top"))
println(resp.status)
if resp.status != 200
throw(DockerError(resp.status,resp.data))
end
parse(resp.data)
end
function list_containers(host)
resp = get(docker_uri(host,"containers/json"))
if resp.status != 200
throw(DockerError(resp.status,resp.data))
end
parse(resp.data)
end
function stats_container(host,id)
resp = get(docker_uri(host,"containers/$id/stats?stream=0"))
if resp.status != 200
throw(DockerError(resp.status,resp.data))
end
parse(resp.data)
end
function open_logs_stream(host, id; history=false)
path = "containers/$id/attach?logs&follow=1&stdout=1"
if history
path *= "&logs=1"
end
url = docker_uri(host,path)
Requests.open_stream(url,[Dict{}("Content-Type"=>"plain/text")],"","POST")
end
function cleanse!(host)
resp = get(docker_uri(host,"containers/json?all=true"))
if resp.status != 200
throw(DockerError(resp.status,resp.data))
end
data = parse(resp.data)
    for c in data
        # force-removal (force=1) also stops a running container; killing after
        # removal, as before, would fail on the already-removed id
        remove_container(host,c["Id"])
    end
nothing
end
end
\documentclass[11pt]{article}
\usepackage[breaklinks]{hyperref}
\newcommand{\win}{\ensuremath{w_{\rm i}}}
\newcommand{\wout}{\ensuremath{w_{\rm o}}}
\begin{document}
\title{Some Notes on 3D Raytracing}
\author{Peter Erwin}
\maketitle
\section{Basics of Light Interaction with Surfaces}
We can divide the interactions of light into two basic types:
\textbf{radiative transfer through a medium} and \textbf{scattering}.
Scattering can take place in two ways: interactions with small particles
in a medium (e.g., interactions of light with fog, etc.), and scattering
at a boundary between media with different indices of refractions.
\subsection{Scattering (Reflection and Refraction) at a Planar Boundary}
Scattering at a (locally planar) boundary accounts for \textit{most} of the
interesting effects in 3D rendering: light is reflected, scattered, and/or
refracted when it hits an object. The basic
math for this was worked out by Fresnel in the early 19th Century, and
is codified in the Fresnel equations. In simple terms (ignoring the role
of polarization), a light ray that intersects a planar boundary is
scattered in two discrete ways: reflection away from the boundary and
refraction through the boundary. Energy conservation means that the
sum of the reflected and refracted light must equal the incoming light.
\textit{Direct (or specular) reflection}, on a locally planar level, follows the
standard angle of incidence = angle of reflection model. For perfectly
flat surfaces, this is a single angle. For surfaces with microstructure,
the surfaces can be treated as a collection of locally flat regions (the
``microfacets'' model); the combined direct reflection will involve a
small range of angles about the mean direct reflection: generalized
\textit{specular reflection.}
\textit{Refraction:} Refracted light travels into the second object/medium (at
an angle determined by Snell's Law). It then undergoes scattering and absorption
within the second object/medium.
For metals (conductors), the refracted light is absorbed almost immediately (apparently the free
electrons ensure this), and can thus be ignored (except, of course, in terms of
energy conservation: the refracted light is lost).
For dielectrics, the refracted light travels through the object/medium
and is subject to absorption and scattering. \textit{Some} of the
scattered light makes it way back out of the surface (e.g., after
multiple scatterings). This is knowns as ``subsurface scattering'', and
is in fact the origin of ``diffuse reflection''. Whether or not it
\textit{needs} to be explicitly modeled as subsurface scattering depends
on the scale of the scattering, including this scale relative to pixel
size. In some substances, the scattering is very local (presumably
because photons get absorbed too quickly to permit multiple scatterings
over longer ranges) and so the re-emergence of scattered photons from
the surface can be treated as Lambertian diffuse reflection. In other
(more translucent) substances, multiple scatterings without absorption
allow photons to re-emerge at large distances from the original
intersection point -- possibly with an asymmetric distribution, or even
on the other side of a sufficiently narrow object (thin edges of ears,
etc.).
\subsection{Reflection}
Reflection involves light scattering away from a surface. The \textit{amount} of light
that is scattered must always be $\le$ the incoming light; this can be
described using a single scalar $K_{r} \le 1$, or a spectrum of reflectance values.
For the standard RGB case, $\mathbf{K_{r}} = (K_{r,R}, K_{r,G}, K_{r,B})$.
If we imagine white light reflected diffusely from a surface, then
$\mathbf{L_{o}} = \mathbf{L_{i}} \mathbf{K_{r}} = (K_{r,R} L_{i,R},
K_{r,G} L_{i,G}, K_{r,B} L_{i,B})$, and $\mathbf{K_{r}}$ is then the
same as what we perceive as ``the'' color of the surface.
\section{Path-Tracing Basics}
``hit point'' = point on the surface where our current ray has intersected
an object, where we want to calculate the outgoing light (``outgoing'' = back
along the sequence of rays/paths toward the camera).
\textbf{Variables:}
\begin{itemize}
\item $L$ -- light traveling along a ray, in the form of Color [R,G,B float
components, positive but no upper limit]
\item $w$ -- a direction vector for a ray
\item $n$ -- a surface normal vector
\end{itemize}
Ignoring refraction for the moment, the rendering equation looks like this
\begin{equation}
L_{\rm out}(\wout) \; = \; L_{e}(\wout) \, + \, \int_{\Omega} f_{\rm brdf}(\win,\wout) \: L_{\rm in}(\win) \, (\win \cdot n) \, d{\win}
\end{equation}
where $L_{e}(\wout)$ is \textit{emission} from the local surface.
The second term (i.e., the integral) represents the contribution from
reflected (specular or diffuse) light, and in particular describes how
much of the \textit{incoming} light $L_{\rm in}$ is reflected. This can
be broken down into three pieces:
\begin{enumerate}
\item $L_{\rm in}(\win)$ -- amount of light coming in along
vector \win{} (e.g., from
background, or from previous reflection or refraction from further out) -- Color value;
\item $(\win \cdot n) = |\win| |n| \cos \theta_{i}$ -- the geometric dilution of that light
due to the angle between the incoming light and the surface normal -- float ;
\item $f_{\rm brdf}(\win,\wout)$ -- how much of the light is actually reflected by the surface
into the outgoing vector \wout{} -- Color value (or float for achromatic).
\end{enumerate}
Breaking this down, we can imagine calculating the reflected light for individual
incoming rays $\win$. In the classic ray-tracing case, this corresponds to one
incoming ray per (visible, non-shadowed) light. If there is an incoming reflection
ray, then this is also covered via the $f_{s}$ term.
In the idealized full integration, we simply sum up the contributions from all possible
incoming \win{} rays, distributed over the unit hemisphere above the hit point.
In practice, we approximate this with a Monte Carlo approach, shooting out
multiple rays to represent \win{} and summing up their contributions (weighted appropriately).
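Written out (a standard Monte Carlo estimator, with the $N$ sample
directions $\win^{(k)}$ drawn from some probability density $p$ over the
hemisphere), the integral is approximated by
\begin{equation}
\int_{\Omega} f_{\rm brdf}(\win,\wout) \, L_{\rm in}(\win) \, (\win \cdot n) \, d\win
\; \approx \;
\frac{1}{N} \sum_{k=1}^{N} \frac{f_{\rm brdf}(\win^{(k)},\wout) \, L_{\rm in}(\win^{(k)}) \, (\win^{(k)} \cdot n)}{p(\win^{(k)})} .
\end{equation}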
Put another way: ``The integral just means that we are going to take the
result of everything between those two symbols, and add them up for
every point in a hemisphere, multiplying each value by the fractional
size of the point’s area for the hemisphere. The hemisphere we are
talking about is the positive hemisphere surrounding the normal of the
surface we are looking at.''\footnote{\url{https://blog.demofox.org/2016/09/21/path-tracing-getting-started-with-diffuse-and-emissive/}}
\section{Environment Maps}
\subsection{Cubic Maps}
The basic idea is that when a ray (reflected or otherwise) goes to infinity without
intersecting an object, you can give it a color $L$ from:
\begin{enumerate}
\item A constant background color value $L_{\rm back}$ [what we do currently]
\item A background/environment image.
\end{enumerate}
The simplest and most popular modern approach is to use a cubic map, where each of the
$+$ and $-$ axes has one face of the cube associated with it.
How do we choose which face of the map to use? Simple: identify the largest component
of the ray in question and take its sign. Thus, for a ray like $(x, y, z) = (0.5, -0.1, 0.9)$,
you would choose the $+z$ axis and its face; for $(x, y, z) = (-0.5, -0.1, -0.4)$, you
would choose the $-x$ axis; and so forth.
How do we choose where in the individual-face image to get a pixel value? We take the
complementary component values of the ray and convert them into scaled coordinates on that face,
using the value of the dominant component to scale them.
Thus, in the case of $(x, y, z) = (0.5, -0.1, 0.9)$, where we're using the map associated
with the $+z$ face, we take $x/z = 0.5/0.9$ and $y/z = -0.1/0.9$ to get values on the $(-1,1)$
interval; then we map these into the $(0,1)$ interval by adding 1 to each and then dividing the
result by 2:
\begin{equation}
u = (x/z + 1)/2, \; \; v = (y/z + 1)/2 .
\end{equation}
Now the problem is one of mapping values on the $(0,1)$ interval into pixel values
in the image $u$ and $v$ axes.
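For a square face image that is $N$ pixels on a side, one simple such
mapping (clamping to guard against the edge case $u = 1$) is
\begin{equation}
i = \min(\lfloor u N \rfloor, \, N - 1), \; \; j = \min(\lfloor v N \rfloor, \, N - 1) ,
\end{equation}
where $(i, j)$ indexes the pixel along the image $u$ and $v$ axes.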
\end{document}
import numpy as np
import torch
from PIL import Image
from torchvision import datasets, transforms
TRAIN = 'train'
VALID = 'valid'
TEST = 'test'
NORMAL_MEANS = (0.485, 0.456, 0.406)
NORMAL_STD_DEVIATIONS = (0.229, 0.224, 0.225)
TRANSFORM_TRAIN = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(NORMAL_MEANS, NORMAL_STD_DEVIATIONS)])
TRANSFORM_TEST_VALIDATION = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(NORMAL_MEANS, NORMAL_STD_DEVIATIONS)])
def get_data_sets_loaders(data_dir: str = 'images'):
"""
Creates and returns image datasets and data loaders from a directory of images.
:param data_dir: the path to the directory of images
:return: datasets and dataloaders for training, testing & validation
"""
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'
# Transforms:
# - All: Resize and crop to 224x224
# - All: Apply normalization via mean & std dev
# - Train Only: Apply random scaling, cropping & flipping
# Define your transforms for the training, validation, and testing sets
data_transforms = {TRAIN: TRANSFORM_TRAIN,
VALID: TRANSFORM_TEST_VALIDATION,
TEST: TRANSFORM_TEST_VALIDATION}
# Load the datasets with ImageFolder
image_datasets = {TRAIN: datasets.ImageFolder(train_dir, transform=data_transforms[TRAIN]),
VALID: datasets.ImageFolder(valid_dir, transform=data_transforms[VALID]),
TEST: datasets.ImageFolder(test_dir, transform=data_transforms[TEST])}
# Using the image datasets and the transforms, define the dataloaders
dataloaders = {TRAIN: torch.utils.data.DataLoader(image_datasets[TRAIN], batch_size=64, shuffle=True),
VALID: torch.utils.data.DataLoader(image_datasets[VALID], batch_size=32),
TEST: torch.utils.data.DataLoader(image_datasets[TEST], batch_size=32)}
return image_datasets, dataloaders
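# Example usage (assumes an 'images' directory with train/, valid/ and test/ subfolders):
#   image_datasets, dataloaders = get_data_sets_loaders('images')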
def process_image(image_path):
"""
Given a path to a file, pre-process that image in preparation for making a prediction.
:param image_path: the path to the image file
    :return: the image as a normalized numpy array of shape (3, 224, 224)
"""
im_transforms = TRANSFORM_TEST_VALIDATION
# Open image
im = Image.open(image_path)
# Transform it: creates pytorch tensor
im_transformed_tensor = im_transforms(im)
# Return np array
np_image = np.array(im_transformed_tensor)
return np_image
###################################################
######## Programmable filter script ########
###################################################
# Author: Matthieu Heitz
# Output DataSet Type = Same as Input
############# Properties for auto-generated XML #############
Name = 'ApplyPoseTransformToPointCloud'
Label = 'Apply Pose To PointCloud'
Help = 'This applies the device pose to the point cloud at each timestep'
# Still don't know if these lines are actually necessary
NumberOfInputs = 2
InputDataType1 = 'vtkPolyData'
InputDataType2 = 'vtkPolyData'
# OutputDataType = 'vtkPolyData' # omit this line to use 'same as input'
Properties = {}
def RequestData():
############# Initialize the filter #############
import vtk
import numpy as np
# print "\n\n"
# print "********************************************************\n" \
# "Programmable Filter: Apply Pose Transform to Point Cloud\n" \
# "********************************************************\n"
############# Get I/O #############
# Get the two inputs, and the output
polyDataA = self.GetInputDataObject(0, 0)
polyDataB = self.GetInputDataObject(0, 1)
pdo = self.GetPolyDataOutput()
# If only one input is given, raise an exception
if polyDataA is None or polyDataB is None:
raise Exception("\nThis filter takes 2 inputs:\n"
"Point Cloud Data files: pc_HHMMSSDD_NNN.vtk\n"
"Pose Data file: pc_HHMMSSDD_poses.vtk\n"
"Note that ParaView groups all the Point Cloud Data files in one\n")
# Initialize vtkPolyData for point cloud data (PC) and pose data (P)
polyData_PC = vtk.vtkPolyData()
polyData_P = vtk.vtkPolyData()
# Figure out which PolyData is which
if polyDataA.GetFieldData().GetArray("timestamp") is not None and \
polyDataB.GetPointData().GetArray("timestamp") is not None:
polyData_PC = polyDataA
polyData_P = polyDataB
else:
if polyDataB.GetFieldData().GetArray("timestamp") is not None and \
polyDataA.GetPointData().GetArray("timestamp") is not None:
polyData_PC = polyDataB
polyData_P = polyDataA
        else: # If neither configuration above is met, raise an exception
raise Exception("\nOne or both of the inputs don't have a \"timestamp\" Point/Field Data\n"
"Is this data coming from the \"Paraview Tango Recorder\" app ?\n"
"The input that ends with \'_poses.vtk\" must have a \"timestamp\" PointData\n"
"The input that ends with \'*.vtk\" must have a \"timestamp\" FieldData\n")
# If the pose data doesn't contain an "orientation" PointData array, raise an exception
if polyData_P.GetPointData().GetArray("orientation") is None:
raise Exception("\nThe Pose file (that ends with \"_poses.vtk\") has no dataArray called \"orientation\"\n")
############# Find the point cloud timestamp #############
timestamp_PC = polyData_PC.GetFieldData().GetArray("timestamp").GetTuple(0)[0]
#print "Point cloud timestamp: " + str(timestamp_PC)
############# Find the closest timestamp in the poses #############
timestampArray_P = polyData_P.GetPointData().GetArray("timestamp")
minDiff = 1e10
closestIndex = 0
for i in range(0, timestampArray_P.GetNumberOfTuples()):
diff = abs(timestampArray_P.GetTuple(i)[0]-timestamp_PC)
if diff < minDiff:
closestIndex = i
minDiff = diff
#print "Closest Pose timestamp: " + str(timestampArray_P.GetTuple(closestIndex)[0])
#print "Index: " + str(closestIndex)
############# Calculate the pose transform #############
q = polyData_P.GetPointData().GetArray("orientation").GetTuple(closestIndex)
# Add the orientation
# Warning: orientation gives (x, y, z, w) but vtkQuaternion takes (w, x, y, z)
myQuaternion = vtk.vtkQuaternionf(q[3], q[0], q[1], q[2])
rotMatrix = np.zeros((4, 4))
rotMatrix[3, 3] = 1
myQuaternion.ToMatrix3x3(rotMatrix[0:3, 0:3])
# Add the translation components
pointArray_P = polyData_P.GetPoints()
translation = pointArray_P.GetPoint(closestIndex)
rotMatrix[0:3, 3] = translation
############# Read the Camera2Device transform #############
# Array of 16 values
raw_Cam2Dev_TFM = polyData_P.GetFieldData().GetArray("Cam2Dev_transform").GetTuple(0)
# Reshape the matrix
Camera2DeviceTFM = np.array(raw_Cam2Dev_TFM).reshape((4,4), order='F')
#print "Cam2DevTFM_read_np_reshaped =\n" + str(Camera2DeviceTFM)
############# Apply the transforms to the point cloud #############
vtkTFM = vtk.vtkTransform()
vtkTFM.PostMultiply()
vtkTFM.Identity()
vtkTFM.Concatenate(Camera2DeviceTFM.flatten())
vtkTFM.Concatenate(rotMatrix.flatten())
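    # With PostMultiply, each concatenated matrix is applied after the current
    # transform, so points are mapped camera-to-device first, then by the pose.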
vtkTFMFilter = vtk.vtkTransformPolyDataFilter()
vtkTFMFilter.SetTransform(vtkTFM)
vtkTFMFilter.SetInputData(polyData_PC)
vtkTFMFilter.Update()
pdo.ShallowCopy(vtkTFMFilter.GetOutput())
def RequestInformation():
import vtk
############# Get I/O #############
# Get the two inputs, and the output
polyDataA = self.GetInputDataObject(0, 0)
polyDataB = self.GetInputDataObject(0, 1)
pdo = self.GetPolyDataOutput()
# If only one input is given, raise an exception
if polyDataA is None or polyDataB is None:
raise Exception("\nThis filter takes 2 inputs:\n"
"Point Cloud Data files: pc_HHMMSSDD_NNN.vtk\n"
"Pose Data file: pc_HHMMSSDD_poses.vtk\n"
"Note that ParaView groups all the Point Cloud Data files in one\n")
# Initialize vtkPolyData for point cloud data (PC) and pose data (P)
polyData_PC = vtk.vtkPolyData()
polyData_P = vtk.vtkPolyData()
if polyDataA.GetFieldData().GetArray("timestamp") is not None and \
polyDataB.GetPointData().GetArray("timestamp") is not None:
pointCloudPortIndex = 0
else:
if polyDataB.GetFieldData().GetArray("timestamp") is not None and \
polyDataA.GetPointData().GetArray("timestamp") is not None:
pointCloudPortIndex = 1
        else: # If neither configuration above is met, raise an exception
raise Exception("\nOne or both of the inputs don't have a \"timestamp\" Point/Field Data\n"
"Is this data coming from the \"Paraview Tango Recorder\" app ?\n"
"The input that ends with \'_poses.vtk\" must have a \"timestamp\" PointData\n"
"The input that ends with \'*.vtk\" must have a \"timestamp\" FieldData\n")
    def setOutputTimesteps(algorithm, timesteps):
        "helper routine to set timestep information"
        executive = algorithm.GetExecutive()
        outInfo = executive.GetOutputInformation(0)
        outInfo.Remove(executive.TIME_STEPS())
        for timestep in timesteps:
            outInfo.Append(executive.TIME_STEPS(), timestep)
        outInfo.Remove(executive.TIME_RANGE())
        outInfo.Append(executive.TIME_RANGE(), timesteps[0])
        outInfo.Append(executive.TIME_RANGE(), timesteps[-1])
    def getInputTimesteps(algorithm, portindex):
        "helper routine to get timestep information"
        executive = algorithm.GetExecutive()
        inInfo = executive.GetInputInformation(0, portindex)
        return inInfo.Get(executive.TIME_STEPS())
myrange = getInputTimesteps(self, pointCloudPortIndex)
setOutputTimesteps(self, myrange)
using Juqst
using Test
@testset "Random Channel" begin
testChannel = [1.0 0.0 0.0 0.0
0.00101911 0.971286 -0.0706597 -0.000750742
0.0126756 0.0711915 0.964136 0.00751068
-0.0142784 0.000199791 -0.00735239 0.953949 ]
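    # For a single-qubit Pauli transfer matrix R, the average gate fidelity is
    # (tr(R)/2 + 1)/3 and the unitarity is the squared Frobenius norm of the
    # lower-right 3x3 (unital) block divided by 3; the values below follow.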
@test round(fidelity(testChannel),digits=10) == 0.9815618333
@test round(unitarity(testChannel),digits=10) == .9310485031
@test round(unitarityPercent(testChannel),digits=10) ==0.0475365736
testChannel = randomFidelityNoise()
@test size(testChannel) == (4,4)
@test iscp(pauliliou2liou(testChannel))
testChannel = randomPrepNoise()
@test size(testChannel) == (4,4)
@test iscp(pauliliou2liou(testChannel))
testChannel = randomMeasureNoise()
@test size(testChannel) == (4,4)
@test iscp(pauliliou2liou(testChannel))
end
@testset "Open Systems" begin
testChannel = [1.0 0.0 0.0 0.0
0.00101911 0.971286 -0.0706597 -0.000750742
0.0126756 0.0711915 0.964136 0.00751068
-0.0142784 0.000199791 -0.00735239 0.953949 ]
t1 = pauliliou2liou(testChannel)
c_s = [0.969835+0.0im 9.98955e-5+0.0036762im 9.98955e-5-0.0036762im 0.0158863+0.0im
0.000134184+0.0100931im 0.967711+0.0709256im 0.003575+0.0002659im 0.000884926+0.00258246im
0.000134184-0.0100931im 0.003575-0.0002659im 0.967711-0.0709256im 0.000884926-0.00258246im
0.0301647+0.0im -9.98955e-5-0.0036762im -9.98955e-5+0.0036762im 0.984114+0.0im ]
@test isapprox(round.(t1,digits=6),round.(c_s,digits=6))
c_s = [0.969835+0.0im 0.000134184-0.0100931im 9.98955e-5-0.0036762im 0.967711-0.0709256im
0.000134184+0.0100931im 0.0301647+0.0im 0.003575+0.0002659im -9.98955e-5+0.0036762im
9.98955e-5+0.0036762im 0.003575-0.0002659im 0.0158863+0.0im 0.000884926-0.00258246im
0.967711+0.0709256im -9.98955e-5-0.0036762im 0.000884926+0.00258246im 0.984114+0.0im ]
@test isapprox(round.(liou2choi(t1),digits=6),round.(c_s,digits=6))
c_s = [0.969835-0.0im 9.98955e-5-0.0036762im 0.000134184-0.0100931im 0.967711-0.0709256im
9.98955e-5+0.0036762im 0.0158863-0.0im 0.003575-0.0002659im 0.000884926-0.00258246im
0.000134184+0.0100931im 0.003575+0.0002659im 0.0301647-0.0im -9.98955e-5+0.0036762im
0.967711+0.0709256im 0.000884926+0.00258246im -9.98955e-5-0.0036762im 0.984114-0.0im ]
@test isapprox(round.(liou2choiX(t1),digits=6),round.(c_s,digits=6))
@test isapprox(round.(testChannel,digits=6),round.(liou2pauliliou(choi2liou(liou2choi(pauliliou2liou(testChannel)))),digits=6))
c_s = [3.88937+0.0im 0.00101911-0.0148631im -0.0126756+0.000950533im -0.0142784+0.141851im
0.00101911+0.0148631im 0.053201+0.0im -0.0005318-0.0142784im -0.000550951+0.0126756im
-0.0126756-0.000950533im -0.0005318+0.0142784im 0.038901+0.0im -0.00015829+0.00101911im
-0.0142784-0.141851im -0.000550951-0.0126756im -0.00015829-0.00101911im 0.018527+0.0im ]
@test isapprox(round.(c_s,digits=5),round.(choi2chi(liou2choi(pauliliou2liou(testChannel))),digits=5))
end
import zipfile
import os
import pandas as pd
path_reef = 'E:/kaggle_great_barrier_reef/Great_Barrier_Reef/'
zip_file = 'tensorflow-great-barrier-reef.zip'
data_path = os.path.join(path_reef, 'data_reef')
# with zipfile.ZipFile(path_reef + zip_file, 'r') as zip_ref:
# zip_ref.extractall(data_path)
# Training and Testing dataframes
train_csv = pd.read_csv(data_path + '/train.csv')
test_csv = pd.read_csv(data_path + '/test.csv')
# Length of the training and testing data
print(len(train_csv))
print(len(test_csv))
# Dataframe head
print(train_csv.head(200))
# Frames with starfish
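# (the annotations column holds a stringified list of bounding boxes, so "[]" marks frames without starfish)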
train = train_csv.loc[train_csv["annotations"] != "[]"]
# Dataframe head
print(train.head(100))
# Feature Summary - copied from Diego Gomez
def resumetable(df):
'''function to create feature summary'''
print(f'Shape: {df.shape}')
summary = pd.DataFrame(df.dtypes, columns=['Data Type'])
summary = summary.reset_index()
summary = summary.rename(columns={'index': 'Features'})
summary['Num of Null Value'] = df.isnull().sum().values
summary['Num of Unique Value'] = df.nunique().values
summary['1st Value'] = df.loc[0].values
summary['2nd Value'] = df.loc[1].values
summary['3rd Value'] = df.loc[2].values
return summary
print(resumetable(train))
import numpy as np
MAX_LONGITUDE = 180.0
MAX_LATITUDE = 85.05112877980659 # (2*atan(exp(M_PI))*180.0/M_PI - 90.0)
MIN_LONGITUDE = -MAX_LONGITUDE
MIN_LATITUDE = -MAX_LATITUDE
MAX_ZOOM = 31
def quadint_from_ZXY(zoom, X, Y):
"""
Convert tile coordinates to quadint at specific zoom level
"""
if zoom < 0 or zoom > MAX_ZOOM:
raise Exception('Wrong zoom')
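    # Bit layout (low to high): 5 bits of zoom, then zoom bits of X, with Y above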
quadint = Y
quadint = quadint << zoom
quadint = quadint | X
quadint = quadint << 5
quadint = quadint | zoom
return quadint
# return (zoom & MAX_ZOOM) | (X << 5) | (Y << (zoom + 5))
def point_to_tile_fraction(lons, lats, zoom):
"""
Get the precise fractional tile location for a point at a zoom level
"""
sin = np.sin(lats * np.pi / 180.0)
    z2 = 2 ** zoom  # number of tiles per axis at this zoom level
X = z2 * (lons / 360 + 0.5)
Y = z2 * (0.5 - 0.25 * np.log((1 + sin) / (1 - sin)) / np.pi)
# Wrap Tile X
X = X % z2
X[X < 0] += z2
return X, Y
def point_to_tile(lons, lats, zoom):
"""
Get the tile for a point at a specified zoom level
"""
X, Y = point_to_tile_fraction(lons, lats, zoom)
X = np.floor(X).astype(np.int64)
Y = np.floor(Y).astype(np.int64)
return X, Y
def quadint_from_location(lons, lats, zoom):
"""
Get quadint for location at specific zoom level
"""
if zoom < 0 or zoom > MAX_ZOOM:
raise Exception('Wrong zoom')
lons = np.minimum(MAX_LONGITUDE, np.maximum(MIN_LONGITUDE, lons))
lats = np.minimum(MAX_LATITUDE, np.maximum(MIN_LATITUDE, lats))
X, Y = point_to_tile(lons, lats, zoom)
return quadint_from_ZXY(zoom, X, Y)
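# Example (illustrative coordinates): encode the tile containing lon=-3.7, lat=40.4 at zoom 10:
#   quadint_from_location(np.array([-3.7]), np.array([40.4]), 10)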
"""
Add Cell Connectivity To Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Example for :class:`PVGeo.filters.AddCellConnToPoints`
This filter will add **linear** cell connectivity between scattered points.
You have the option to add ``VTK_LINE`` or ``VTK_POLYLINE`` connectivity.
``VTK_LINE`` connectivity makes a straight line between the points in order
(either in the order by index or using a nearest neighbor calculation).
The ``VTK_POLYLINE`` adds polyline connectivity between all points as one
spline (either in the order by index or using a nearest neighbor calculation).
"""
###############################################################################
# sphinx_gallery_thumbnail_number = 2
import numpy as np
import pyvista
from PVGeo import points_to_poly_data
from PVGeo.filters import AddCellConnToPoints
###############################################################################
# First, let's generate some points that we'd like to connect
def path1(y):
"""Equation: x = a(y-h)^2 + k"""
a = -110.0 / 160.0 ** 2
x = a * y ** 2 + 110.0
idxs = np.argwhere(x > 0)
return x[idxs][:, 0], y[idxs][:, 0]
x, y = path1(np.arange(0.0, 200.0, 25.0))
zo = np.linspace(9.0, 11.0, num=len(y))
coords = np.vstack((x, y, zo)).T
# Shuffle points to demonstrate value of Nearest Neighbor
np.random.shuffle(coords)
# Make a VTK data object for the filter to use
vtkPoints = points_to_poly_data(coords)
###############################################################################
# Apply the Filter
# ++++++++++++++++
#
# Now that the points are generated, let's apply
# the **Add Cell Connectivity To Points** filter from
# *Filters->PVGeo: General Filters->Add Cell Connectivity To Points*.
# The output will look wacky and incorrectly connected, like the image
# below; this is expected.
line = AddCellConnToPoints().apply(vtkPoints)
p = pyvista.Plotter()
p.add_mesh(line, line_width=5, point_size=10)
p.show()
###############################################################################
# Remember that we shuffled the points above to show that, while the points
# form a usable line, their order must first be reconstructed. In the
# ParaView GUI this is done with the *Use Nearest Nbr Approx* checkbox;
# programmatically, pass the ``nearest_nbr`` argument to the algorithm.
# This ensures a usable path is generated from the points.
# Now it looks good (see image below)!
# Here is the ``vtkPolyData`` containing the connected line:
line_o = AddCellConnToPoints(nearest_nbr=True).apply(vtkPoints)
p = pyvista.Plotter()
p.add_mesh(line_o, line_width=5, point_size=10)
p.show()
# Source: examples/filters-general/add-cell-connectivity-to-points.py (tkoyama010/PVGeo, BSD-3-Clause)
from math import pi
import random
from typing import List
import numpy as np
import torch
from torch.nn.functional import (
binary_cross_entropy_with_logits,
mse_loss, tanh, relu
)
import matplotlib.pyplot as plt
import kornia
import pydiffvg
from models import BaseVAE, interpolate_vectors, reparameterize
OPAQUE_BLACK = (0, 0, 0, 1)
HIGH = np.array((0.565, 0.392, 0.173, 1))
LOW = np.array((0.094, 0.310, 0.635, 1))
dsample = kornia.transform.PyrDown()
def fig2data(fig) -> np.ndarray:
    """
    @brief Convert a Matplotlib figure to an H x W x 3 numpy array of RGB
           values (the alpha channel is dropped) and return it
    @param fig a matplotlib figure
    @return a numpy 3D array of RGB values
    """
# draw the renderer
fig.canvas.draw()
x = np.array(fig.canvas.renderer.buffer_rgba())
return x[:, :, :3]
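# For reference, `fig2data(plt.figure())` yields an H x W x 3 uint8 RGB array.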
def bilinear_downsample(tensor: torch.Tensor, size: int) -> torch.Tensor:
return torch.nn.functional.interpolate(tensor, size, mode='bilinear')
def sample_circle(r: int, angles: torch.Tensor, sample_rate: int = 10):
pos = []
for i in range(1, sample_rate + 1):
x = (torch.cos(angles * (sample_rate / i)) * r) # + r
y = (torch.sin(angles * (sample_rate / i)) * r) # + r
pos.append(x)
pos.append(y)
return torch.stack(pos, dim=-1)
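# For reference: with `angles` of shape [N] and the default sample_rate=10,
# sample_circle returns a tensor of shape [N, 2 * sample_rate] = [N, 20]
# (interleaved x/y samples, one pair per rate).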
def decode_transform(x):
return x.permute(0, 2, 1)
def gaussian_pyramid_loss(recons, inp, loss_fn):
recon_loss = loss_fn(recons, inp, reduction='none').mean(dim=[1, 2, 3])
for j in range(2, 5):
recons = dsample(recons)
inp = dsample(inp)
recon_loss += loss_fn(recons, inp, reduction='none').mean(dim=[1, 2, 3]) / j
return recon_loss
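# An illustrative sketch of calling the pyramid loss (hypothetical tensors;
# `mse_loss` is imported above):
#   recons = torch.rand(8, 3, 128, 128)
#   target = torch.rand(8, 3, 128, 128)
#   per_sample = gaussian_pyramid_loss(recons, target, mse_loss)  # shape [8]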
def raster_verbose(curves, points) -> ([pydiffvg.Path], [pydiffvg.ShapeGroup]):
np.random.seed(0)
colors = np.random.rand(curves, 4)
colors[:, 3] = 1
diff = (HIGH - LOW) / curves
shapes = []
shape_groups = []
for i in range(curves):
scale = diff * i
color = LOW + scale
color[3] = 1
color = torch.tensor(color)
num_ctrl_pts = torch.zeros(1, dtype=torch.int32) + 2
if i * 3 + 4 > curves * 3:
curve_points = torch.stack([points[i * 3], points[i * 3 + 1], points[i * 3 + 2], points[0]])
else:
curve_points = points[i * 3:i * 3 + 4]
path = pydiffvg.Path(
num_control_points=num_ctrl_pts, points=curve_points,
is_closed=False, stroke_width=torch.tensor(4))
path_group = pydiffvg.ShapeGroup(
shape_ids=torch.tensor([i]),
fill_color=None,
stroke_color=color)
shapes.append(path)
shape_groups.append(path_group)
return shapes, shape_groups
class VectorVAE(BaseVAE):
def __init__(self,
in_channels: int,
latent_dim: int,
hidden_dims: List[int] = None,
loss_fn: str = 'MSE',
img_size: int = 128,
paths: int = 4,
**kwargs) -> None:
super(VectorVAE, self).__init__()
self.latent_dim_ = latent_dim # Used by VectorVAEnLayers
self.img_size_ = img_size
self.should_reparameterize_ = kwargs.get('reparameterize', False)
self.other_losses_weight_ = kwargs.get('other_losses_weight', 0)
self.curves_ = paths
self.scale_factor_ = kwargs['scale_factor']
self.learn_sampling_ = kwargs['learn_sampling']
self.beta = kwargs['beta']
self.only_auxiliary_training = kwargs['only_auxiliary_training']
if loss_fn == 'BCE':
self.loss_fn_ = binary_cross_entropy_with_logits
else:
self.loss_fn_ = mse_loss
if hidden_dims is None:
hidden_dims = [32, 64, 128, 256, 512]
# Build Encoder
modules = []
for h_dim in hidden_dims:
modules.append(
torch.nn.Sequential(
torch.nn.Conv2d(
in_channels,
out_channels=h_dim,
kernel_size=3,
stride=2,
padding=1
),
# nn.BatchNorm2d(h_dim),
torch.nn.ReLU())
)
in_channels = h_dim
self.encoder_ = torch.nn.Sequential(*modules)
out_size = img_size // (2 ** 5)
self.fc_mu_ = torch.nn.Linear(hidden_dims[-1] * out_size * out_size, latent_dim)
self.fc_var_ = torch.nn.Linear(hidden_dims[-1] * out_size * out_size, latent_dim)
self.circle_rad_ = kwargs['radius']
self.number_of_points_ = self.curves_ * 3
angles = torch.arange(0, self.number_of_points_, dtype=torch.float32) * pi * 2 / self.number_of_points_
sample_rate = 1
self.id_circle_ = sample_circle(self.circle_rad_, angles, sample_rate)[:, :]
base_control_features = torch.tensor([[1, 0], [0, 1], [0, 1]], dtype=torch.float32)
self.register_buffer('base_control_features', base_control_features)
self.angles_ = angles
def get_computational_unit(in_chan, out_chan, unit_type):
if unit_type == 'conv':
return torch.nn.Conv1d(
in_chan,
out_chan,
kernel_size=3,
padding=2,
padding_mode='circular',
stride=1,
dilation=1
)
else:
return torch.nn.Linear(in_chan, out_chan)
# Build Decoder
num_one_hot = base_control_features.shape[1]
fused_latent_dim = latent_dim + num_one_hot + (sample_rate * 2)
unit = 'conv'
self.decoder_input_ = get_computational_unit(fused_latent_dim, fused_latent_dim * 2, unit)
self.point_predictor_ = torch.nn.ModuleList([
get_computational_unit(fused_latent_dim * 2, fused_latent_dim * 2, unit),
get_computational_unit(fused_latent_dim * 2, fused_latent_dim * 2, unit),
get_computational_unit(fused_latent_dim * 2, fused_latent_dim * 2, unit),
get_computational_unit(fused_latent_dim * 2, fused_latent_dim * 2, unit),
get_computational_unit(fused_latent_dim * 2, 2, unit),
# nn.Sigmoid() # bound spatial extent
])
if self.learn_sampling_:
self.sample_deformation_ = torch.nn.Sequential(
get_computational_unit(latent_dim + 2 + (sample_rate * 2), latent_dim * 2, unit),
torch.nn.ReLU(),
get_computational_unit(latent_dim * 2, latent_dim * 2, unit),
torch.nn.ReLU(),
get_computational_unit(latent_dim * 2, 1, unit),
)
unit = 'mlp'
self.aux_network_ = torch.nn.Sequential(
get_computational_unit(latent_dim, latent_dim * 2, unit),
torch.nn.LeakyReLU(),
get_computational_unit(latent_dim * 2, latent_dim * 2, unit),
torch.nn.LeakyReLU(),
get_computational_unit(latent_dim * 2, latent_dim * 2, unit),
torch.nn.LeakyReLU(),
get_computational_unit(latent_dim * 2, 3, unit),
)
self.latent_lossvpath_ = {}
self.save_lossvspath = False
if self.only_auxiliary_training:
self.save_lossvspath = True
for name, param in self.named_parameters():
if 'aux_network' in name:
print(name)
param.requires_grad = True
else:
param.requires_grad = False
def redo_features(self, n):
self.curves_ = n
self.number_of_points_ = self.curves_ * 3
self.angles_ = (torch.arange(0, self.number_of_points_, dtype=torch.float32) * pi * 2 / self.number_of_points_)
self.id_circle_ = sample_circle(self.circle_rad_, self.angles_, sample_rate=1)[:, :]
    def encode(self, inp: torch.Tensor) -> (torch.Tensor, torch.Tensor):
        """
        Encodes the input by passing it through the encoder network
        and returns the parameters of the latent Gaussian.
        :param inp: (Tensor) Input tensor to encoder [N x C x H x W]
        :return: (Tensor, Tensor) Mean and log-variance of the latent codes
        """
result = self.encoder_(inp)
result = torch.flatten(result, start_dim=1)
# Split the result into mu and var components
# of the latent Gaussian distribution
mu = self.fc_mu_(result)
log_var = self.fc_var_(result)
return mu, log_var
def raster_(self, all_points, color=OPAQUE_BLACK, verbose=False, white_background=True) -> torch.Tensor:
assert len(color) == 4
render_size = self.img_size_
if verbose:
render_size *= 2
all_points = all_points * render_size
num_ctrl_pts = torch.zeros(self.curves_, dtype=torch.int32).to(all_points.device) + 2
color = torch.tensor(color).to(all_points.device)
batch_size = all_points.shape[0]
outputs = []
for k in range(batch_size):
# Get point parameters from network
render = pydiffvg.RenderFunction.apply
points = all_points[k].contiguous() # [self.sort_idx[k]] # .cpu()
if verbose:
shapes, shape_groups = raster_verbose(self.curves_, points)
else:
shapes = [pydiffvg.Path(num_control_points=num_ctrl_pts, points=points, is_closed=True)]
shape_groups = [
pydiffvg.ShapeGroup(
shape_ids=torch.tensor([len(shapes) - 1]),
fill_color=color,
stroke_color=color
)
]
scene_args = pydiffvg.RenderFunction.serialize_scene(render_size, render_size, shapes, shape_groups)
out = render(render_size, # width
render_size, # height
3, # num_samples_x
3, # num_samples_y
102, # seed
None,
*scene_args)
out = out.permute(2, 0, 1).view(4, render_size, render_size) # [:3]#.mean(0, keepdim=True)
outputs.append(out)
output = torch.stack(outputs).to(all_points.device)
        # Composite the RGBA output onto a white background.
if white_background:
alpha = output[:, 3:4, :, :]
output_white_bg = output[:, :3, :, :] * alpha + (1 - alpha)
output = torch.cat([output_white_bg, alpha], dim=1)
del num_ctrl_pts, color
return output
def decode(self, z: torch.Tensor) -> torch.Tensor:
"""
Maps the given latent codes onto the image space.
:param z: (Tensor) [B x D]
:return: (Tensor) [B x C x H x W]
"""
self.id_circle_ = self.id_circle_.to(z.device)
batch_size = z.shape[0]
z = z[:, None, :].repeat([1, self.curves_ * 3, 1])
base_control_features = self.base_control_features[None, :, :].repeat(batch_size, self.curves_, 1)
z_base = torch.cat([z, base_control_features], dim=-1)
if self.learn_sampling_:
self.angles_ = self.angles_.to(z.device)
angles = self.angles_[None, :, None].repeat(batch_size, 1, 1)
x = torch.cos(angles) # + r
y = torch.sin(angles) # + r
z_angles = torch.cat([z_base, x, y], dim=-1)
angles_delta = self.sample_deformation_(decode_transform(z_angles))
angles_delta = tanh(angles_delta / 50) * pi / 2
angles_delta = decode_transform(angles_delta)
new_angles = angles + angles_delta
x = (torch.cos(new_angles) * self.circle_rad_) # + r
y = (torch.sin(new_angles) * self.circle_rad_) # + r
z = torch.cat([z_base, x, y], dim=-1)
else:
id_circle = self.id_circle_[None, :, :].repeat(batch_size, 1, 1)
z = torch.cat([z_base, id_circle], dim=-1)
all_points = self.decoder_input_(decode_transform(z))
for compute_block in self.point_predictor_:
all_points = relu(all_points)
all_points = compute_block(all_points)
all_points = decode_transform(torch.sigmoid(all_points / self.scale_factor_))
return all_points
def reparameterize_(self, mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
return reparameterize(mu, log_var) if self.should_reparameterize_ else mu
def forward(self, inp: torch.Tensor, **kwargs) -> List[torch.Tensor]:
mu, log_var = self.encode(inp)
z = self.reparameterize_(mu, log_var)
all_points = self.decode(z)
if not self.only_auxiliary_training or self.save_lossvspath:
output = self.raster_(all_points, white_background=True)
else:
output = torch.zeros([1, 3, 64, 64])
return [output, inp, mu, log_var]
def loss_function(self, *args, **kwargs) -> dict:
"""
Computes the VAE loss function.
KL(N(\\mu, \\sigma), N(0, 1)) = \\log \\frac{1}{\\sigma} + \\frac{\\sigma^2 + \\mu^2}{2} - \\frac{1}{2}
:param args:
:param kwargs:
:return:
"""
recons, inp, mu, log_var = args[:4]
recons = recons[:, :3, :, :]
        # Use a tensor default so `.detach()` below also works when no extra
        # losses are passed in.
        other_losses = args[4] if len(args) == 5 else torch.tensor(0.0)
kld_weight = kwargs['M_N'] # Account for the minibatch samples from the dataset
if not self.only_auxiliary_training or self.save_lossvspath:
recon_loss = gaussian_pyramid_loss(recons, inp, self.loss_fn_)
else:
recon_loss = torch.zeros([1])
if self.only_auxiliary_training:
recon_loss_non_reduced = recon_loss[:, None].clone().detach()
spacing = self.aux_network_(mu.clone().detach())
latents = mu.cpu().numpy()
num_latents = latents.shape[0]
if self.save_lossvspath:
recon_loss_non_reduced_cpu = recon_loss_non_reduced.cpu().numpy()
keys = self.latent_lossvpath_.keys()
for i in range(num_latents):
if np.array2string(latents[i]) in keys:
pair = torch.tensor([self.curves_, recon_loss_non_reduced_cpu[i, 0], ])[None, :].to(mu.device)
self.latent_lossvpath_[np.array2string(latents[i])] \
= torch.cat([self.latent_lossvpath_[np.array2string(latents[i])], pair], dim=0)
else:
self.latent_lossvpath_[np.array2string(latents[i])] = torch.tensor(
[[self.curves_, recon_loss_non_reduced_cpu[i, 0]], ]).to(mu.device)
num = torch.ones_like(spacing[:, 0]) * self.curves_
est_loss = spacing[:, 2] + 1 / torch.exp(num * spacing[:, 0] - spacing[:, 1])
aux_loss = torch.abs(num * (est_loss - recon_loss_non_reduced)).mean() * 10
else:
aux_loss = 0
for i in range(num_latents):
pair = self.latent_lossvpath_[np.array2string(latents[i])]
est_loss = spacing[i, 2] + 1 / torch.exp(pair[:, 0] * spacing[i, 0] - spacing[i, 1])
aux_loss += torch.abs(pair[:, 0] * (est_loss - pair[:, 1])).mean()
logs = {'Reconstruction_Loss': recon_loss.mean(), 'aux_loss': aux_loss}
return {'loss': aux_loss, 'progress_bar': logs}
recon_loss = recon_loss.mean()
kld_loss = 0
if self.beta > 0:
kld_loss = torch.mean(-0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim=1), dim=0)\
* self.beta * kld_weight
recon_loss = recon_loss * 10
loss = recon_loss + kld_loss + other_losses * self.other_losses_weight_
logs = {
'Reconstruction_Loss': recon_loss.detach(),
'KLD': -kld_loss,
'other losses': other_losses.detach() * self.other_losses_weight_
}
return {'loss': loss, 'progress_bar': logs}
def generate(self, x: torch.Tensor, **kwargs) -> torch.Tensor:
"""
Given an input image x, returns the reconstructed image
:param x: (Tensor) [B x C x H x W]
:return: (Tensor) [B x C x H x W]
"""
mu, log_var = self.encode(x)
z = self.reparameterize_(mu, log_var)
return self.raster_(self.decode(z), verbose=random.choice([True, False]))
def save(self, x, save_dir, name):
z, log_var = self.encode(x)
all_points = self.decode(z)
# Get point parameters from network
points = all_points[0].cpu() # [self.sort_idx[k]]
color = torch.cat([torch.tensor([0, 0, 0, 1]), ])
num_ctrl_pts = torch.zeros(self.curves_, dtype=torch.int32) + 2
shapes = [
pydiffvg.Path(num_control_points=num_ctrl_pts, points=points, is_closed=True)
]
shape_groups = [
pydiffvg.ShapeGroup(
shape_ids=torch.tensor([len(shapes) - 1]),
fill_color=color,
stroke_color=color
)
]
pydiffvg.save_svg(f"{save_dir}{name}/{name}.svg", self.img_size_, self.img_size_, shapes, shape_groups)
# TODO: interpolation functions seem hardcoded
def interpolate(self, x: torch.Tensor, **kwargs) -> List[torch.Tensor]:
mu, log_var = self.encode(x)
all_interpolations = []
for i in range(mu.shape[0]):
z = interpolate_vectors(mu[2], mu[i], 10)
all_points = self.decode(z)
all_interpolations.append(self.raster_(all_points, verbose=kwargs['verbose']))
return all_interpolations
def interpolate_2d(self, x: torch.Tensor, **kwargs) -> List[torch.Tensor]:
mu, log_var = self.encode(x)
all_interpolations = []
y_axis = interpolate_vectors(mu[7], mu[6], 10)
for i in range(10):
z = interpolate_vectors(y_axis[i], mu[3], 10)
all_points = self.decode(z)
all_interpolations.append(self.raster_(all_points, verbose=kwargs['verbose']))
return all_interpolations
def naive_vector_interpolate(self, x: torch.Tensor, **kwargs) -> [torch.Tensor]:
mu, log_var = self.encode(x)
all_points = self.decode(mu)
all_interpolations = []
for i in range(mu.shape[0]):
z = interpolate_vectors(all_points[2], all_points[i], 10)
all_interpolations.append(self.raster_(z, verbose=kwargs['verbose']))
return all_interpolations
def visualize_sampling(self, x: torch.Tensor, **kwargs) -> [torch.Tensor]:
mu, log_var = self.encode(x)
all_interpolations = []
for i in range(5, 27):
self.redo_features(i)
all_points = self.decode(mu)
all_interpolations.append(self.raster_(all_points, verbose=kwargs['verbose']))
return all_interpolations
def sampling_error(self, x: torch.Tensor) -> torch.Tensor:
error = []
figure = plt.figure(figsize=(6, 6))
batch_size = x.shape[0]
for i in range(7, 25):
self.redo_features(i)
results = self.forward(x)
recons = results[0][:, :3, :, :]
input_batch = results[1]
recon_loss = gaussian_pyramid_loss(recons, input_batch, self.loss_fn_)
error.append(recon_loss)
etn = torch.stack(error, dim=1).numpy()
np.savetxt('sample_error.csv', etn, delimiter=',')
y = np.arange(7, 25)
for i in range(batch_size):
plt.plot(y, etn[i, :], label=str(i + 1))
plt.legend(loc='upper right')
img = fig2data(figure)
return img
def visualize_aux_error(self, x: torch.Tensor) -> torch.Tensor:
"""
Given an input image x, returns the reconstructed image
:param x: (Tensor) [B x C x H x W]
:return: (Tensor) [B x C x H x W]
"""
mu, log_var = self.encode(x)
batch_size = mu.shape[0]
all_spacing = []
figure = plt.figure(figsize=(6, 6))
for i in np.arange(7, 25):
spacing = self.aux_network_(mu.clone().detach())
num = torch.ones_like(spacing[:, 0]) * i
est_loss = spacing[:, 2] + (spacing[:, 0] / num)
all_spacing.append(est_loss)
all_spacing = torch.stack(all_spacing, dim=1).detach().cpu().numpy()
y = np.arange(7, 25)
for i in range(batch_size):
plt.plot(y, all_spacing[i, :], label=str(i + 1))
plt.legend(loc='upper right')
img = fig2data(figure)
return img
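# An illustrative construction sketch (hypothetical hyperparameter values;
# the keyword arguments below are the ones __init__ requires):
#   model = VectorVAE(in_channels=3, latent_dim=128, paths=4, img_size=128,
#                     scale_factor=1.0, learn_sampling=False, beta=0.0,
#                     only_auxiliary_training=False, radius=3)
#   recon, inp, mu, log_var = model(torch.rand(2, 3, 128, 128))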
# Source: models/vector_vae.py (IlyaBizyaev/Im2Vec, Apache-2.0)
from __future__ import (division, print_function)
from pomegranate import *
from nose.tools import with_setup
from nose.tools import assert_equal
from nose.tools import assert_not_equal
from nose.tools import assert_raises
import random
import numpy as np
import json
def setup():
'''
Build a model that we want to use to test sequences. This model will
be somewhat complicated, in order to extensively test YAHMM. This will be
a three state global sequence alignment HMM. The HMM models a reference of
'ACT', with pseudocounts to allow for slight deviations from this
reference.
'''
random.seed(0)
global model
model = HiddenMarkovModel( "Global Alignment")
# Define the distribution for insertions
i_d = DiscreteDistribution( { 'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25 } )
# Create the insert states
i0 = State( i_d, name="I0" )
i1 = State( i_d, name="I1" )
i2 = State( i_d, name="I2" )
i3 = State( i_d, name="I3" )
# Create the match states
m1 = State( DiscreteDistribution({ "A": 0.95, 'C': 0.01, 'G': 0.01, 'T': 0.02 }) , name="M1" )
m2 = State( DiscreteDistribution({ "A": 0.003, 'C': 0.99, 'G': 0.003, 'T': 0.004 }) , name="M2" )
m3 = State( DiscreteDistribution({ "A": 0.01, 'C': 0.01, 'G': 0.01, 'T': 0.97 }) , name="M3" )
# Create the delete states
d1 = State( None, name="D1" )
d2 = State( None, name="D2" )
d3 = State( None, name="D3" )
# Add all the states to the model
model.add_states( [i0, i1, i2, i3, m1, m2, m3, d1, d2, d3 ] )
# Create transitions from match states
model.add_transition( model.start, m1, 0.9 )
model.add_transition( model.start, i0, 0.1 )
model.add_transition( m1, m2, 0.9 )
model.add_transition( m1, i1, 0.05 )
model.add_transition( m1, d2, 0.05 )
model.add_transition( m2, m3, 0.9 )
model.add_transition( m2, i2, 0.05 )
model.add_transition( m2, d3, 0.05 )
model.add_transition( m3, model.end, 0.9 )
model.add_transition( m3, i3, 0.1 )
# Create transitions from insert states
model.add_transition( i0, i0, 0.70 )
model.add_transition( i0, d1, 0.15 )
model.add_transition( i0, m1, 0.15 )
model.add_transition( i1, i1, 0.70 )
model.add_transition( i1, d2, 0.15 )
model.add_transition( i1, m2, 0.15 )
model.add_transition( i2, i2, 0.70 )
model.add_transition( i2, d3, 0.15 )
model.add_transition( i2, m3, 0.15 )
model.add_transition( i3, i3, 0.85 )
model.add_transition( i3, model.end, 0.15 )
# Create transitions from delete states
model.add_transition( d1, d2, 0.15 )
model.add_transition( d1, i1, 0.15 )
model.add_transition( d1, m2, 0.70 )
model.add_transition( d2, d3, 0.15 )
model.add_transition( d2, i2, 0.15 )
model.add_transition( d2, m3, 0.70 )
model.add_transition( d3, i3, 0.30 )
model.add_transition( d3, model.end, 0.70 )
# Call bake to finalize the structure of the model.
model.bake()
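# For illustration (a hypothetical standalone query, outside the nose
# harness), once setup() has built the global model one could score a
# sequence against the 'ACT' reference:
#   setup()
#   logp, path = model.viterbi(list('ACT'))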
def multitransition_setup():
'''
Build a model that we want to use to test sequences. This is the same as the
above model, except that it uses the multiple transition methods for building.
'''
random.seed(0)
global model
model = HiddenMarkovModel( "Global Alignment")
# Define the distribution for insertions
i_d = DiscreteDistribution( { 'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25 } )
# Create the insert states
i0 = State( i_d, name="I0" )
i1 = State( i_d, name="I1" )
i2 = State( i_d, name="I2" )
i3 = State( i_d, name="I3" )
# Create the match states
m1 = State( DiscreteDistribution({ "A": 0.95, 'C': 0.01, 'G': 0.01, 'T': 0.02 }) , name="M1" )
m2 = State( DiscreteDistribution({ "A": 0.003, 'C': 0.99, 'G': 0.003, 'T': 0.004 }) , name="M2" )
m3 = State( DiscreteDistribution({ "A": 0.01, 'C': 0.01, 'G': 0.01, 'T': 0.97 }) , name="M3" )
# Create the delete states
d1 = State( None, name="D1" )
d2 = State( None, name="D2" )
d3 = State( None, name="D3" )
# Add all the states to the model
model.add_states( [i0, i1, i2, i3, m1, m2, m3, d1, d2, d3 ] )
# Create transitions from match states
model.add_transitions( model.start, [m1, i0], [0.9, 0.1] )
model.add_transitions( m1, [m2, i1, d2], [0.9, 0.05, 0.05] )
model.add_transitions( m2, [m3, i2, d3], [0.9, 0.05, 0.05] )
model.add_transitions( m3, [model.end, i3], [0.9, 0.1] )
# Create transitions from insert states
model.add_transitions( i0, [i0, d1, m1], [0.7, 0.15, 0.15] )
model.add_transitions( i1, [i1, d2, m2], [0.7, 0.15, 0.15] )
model.add_transitions( i2, [i2, d3, m3], [0.7, 0.15, 0.15] )
model.add_transitions( [i3, i3], [i3, model.end], [0.85, 0.15] )
# Create transitions from delete states
model.add_transitions( d1, [d2, i1, m2], [0.15, 0.15, 0.70] )
model.add_transitions( [d2, d2, d2, d3, d3], [d3, i2, m3, i3, model.end],
[0.15, 0.15, 0.70, 0.30, 0.70 ] )
# Call bake to finalize the structure of the model.
model.bake()
def tied_edge_setup():
'''
Build a model that we want to use to test sequences. This model has
tied edges.
'''
random.seed(0)
global model
model = HiddenMarkovModel( "Global Alignment")
# Define the distribution for insertions
i_d = DiscreteDistribution( { 'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25 } )
# Create the insert states
i0 = State( i_d, name="I0" )
i1 = State( i_d, name="I1" )
i2 = State( i_d, name="I2" )
i3 = State( i_d, name="I3" )
# Create the match states
m1 = State( DiscreteDistribution({ "A": 0.95, 'C': 0.01, 'G': 0.01, 'T': 0.02 }) , name="M1" )
m2 = State( DiscreteDistribution({ "A": 0.003, 'C': 0.99, 'G': 0.003, 'T': 0.004 }) , name="M2" )
m3 = State( DiscreteDistribution({ "A": 0.01, 'C': 0.01, 'G': 0.01, 'T': 0.97 }) , name="M3" )
# Create the delete states
d1 = State( None, name="D1" )
d2 = State( None, name="D2" )
d3 = State( None, name="D3" )
# Add all the states to the model
model.add_states( [i0, i1, i2, i3, m1, m2, m3, d1, d2, d3 ] )
# Create transitions from match states
model.add_transition( model.start, m1, 0.9 )
model.add_transition( model.start, i0, 0.1 )
model.add_transition( m1, m2, 0.9 )
model.add_transition( m1, i1, 0.05 )
model.add_transition( m1, d2, 0.05 )
model.add_transition( m2, m3, 0.9 )
model.add_transition( m2, i2, 0.05 )
model.add_transition( m2, d3, 0.05 )
model.add_transition( m3, model.end, 0.9 )
model.add_transition( m3, i3, 0.1 )
# Create transitions from insert states
model.add_transition( i0, i0, 0.70, group="i_a" )
model.add_transition( i0, d1, 0.15, group="i_b" )
model.add_transition( i0, m1, 0.15, group="i_c" )
model.add_transition( i1, i1, 0.70, group="i_a" )
model.add_transition( i1, d2, 0.15, group="i_b" )
model.add_transition( i1, m2, 0.15, group="i_c" )
model.add_transition( i2, i2, 0.70, group="i_a" )
model.add_transition( i2, d3, 0.15, group="i_b" )
model.add_transition( i2, m3, 0.15, group="i_c" )
model.add_transition( i3, i3, 0.85, group="i_a" )
model.add_transition( i3, model.end, 0.15 )
# Create transitions from delete states
model.add_transition( d1, d2, 0.15, group="d_a" )
model.add_transition( d1, i1, 0.15, group="d_b" )
model.add_transition( d1, m2, 0.70, group="d_c" )
model.add_transition( d2, d3, 0.15, group="d_a" )
model.add_transition( d2, i2, 0.15, group="d_b" )
model.add_transition( d2, m3, 0.70, group="d_c" )
model.add_transition( d3, i3, 0.30 )
model.add_transition( d3, model.end, 0.70 )
# Call bake to finalize the structure of the model.
model.bake()
def teardown():
    '''
    Teardown hook run at the end of each unit test. The model is stored in a
    global variable and is rebuilt by the next setup call, so nothing needs
    to be done here.
    '''
    pass
@with_setup( setup, teardown )
def test_same_length_viterbi():
scores = [ -0.5132449003570658, -11.048101241343396, -9.125519674022627,
-5.0879558788604475 ]
sequences = [ list(x) for x in [ 'ACT', 'GGC', 'GAT', 'ACC' ] ]
for seq, score in zip( sequences, scores ):
assert_equal( model.viterbi( seq )[0], score )
assert_raises( ValueError, model.viterbi, list('XXX') )
@with_setup( setup, teardown )
def test_variable_length_viterbi():
scores = [ -5.406181012423981, -10.88681993576597, -3.6244718790494277,
-3.644880750680635, -10.674332964640293, -10.393824835172445,
-8.67126440174503, -16.903451796110275, -16.451699654050792 ]
sequences = [ list(x) for x in ('A', 'GA', 'AC', 'AT', 'ATCC',
'ACGTG', 'ATTT', 'TACCCTC', 'TGTCAACACT') ]
for seq, score in zip( sequences, scores ):
assert_equal( model.viterbi( seq )[0], score )
@with_setup( setup, teardown )
def test_log_probability():
scores = [ -5.3931, -0.5052, -11.8478, -14.3482 ]
sequences = [ list(x) for x in ( 'A', 'ACT', 'GGCA', 'TACCTGT' ) ]
for seq, score in zip( sequences, scores ):
assert_equal( round( model.log_probability( seq ), 4 ), score )
@with_setup( setup, teardown )
def test_posterior_transitions():
a_scores = [ 0.0, 0.0021, 0.2017, 1.5105 ]
b_scores = [ 0.013, 0.0036, 1.9836, 2.145 ]
c_scores = [ 0.013, 0.0035, 0.817, 0.477 ]
d_scores = [ 1.0, 0.0023, 0.2636, 0.3682 ]
t_scores = [ 4.013, 4.0083, 6.457, 8.9812 ]
sequences = [ list(x) for x in ( 'A', 'ACT', 'GGCA', 'TACCTGT' ) ]
indices = { state.name: i for i, state in enumerate( model.states ) }
i, j, k, l = indices['I2'], indices['I0'], indices['D1'], indices['D2']
scores = zip( sequences, a_scores, b_scores, c_scores, d_scores, t_scores )
for seq, a, b, c, d, t in scores:
trans, ems = model.forward_backward( seq )
assert_equal( round( trans[i].sum(), 4 ), a )
assert_equal( round( trans[j].sum(), 4 ), b )
assert_equal( round( trans[k].sum(), 4 ), c )
assert_equal( round( trans[l].sum(), 4 ), d )
assert_equal( round( trans.sum(), 4 ), t )
@with_setup( setup, teardown )
def test_posterior_transitions_w_training():
sequences = [ list(x) for x in ( 'A', 'ACT', 'GGCA', 'TACCTGT' ) ]
indices = { state.name: i for i, state in enumerate( model.states ) }
transitions = model.dense_transition_matrix()
i0, i1, i2 = indices['I0'], indices['I1'], indices['I2']
d1, d2, d3 = indices['D1'], indices['D2'], indices['D3']
m1, m2, m3 = indices['M1'], indices['M2'], indices['M3']
assert_equal( transitions[d1, i1], transitions[d2, i2] )
assert_equal( transitions[i0, i0], transitions[i1, i1] )
assert_equal( transitions[i0, i0], transitions[i2, i2] )
assert_equal( transitions[i0, m1], transitions[i1, m2] )
assert_equal( transitions[d1, d2], transitions[d2, d3] )
assert_equal( transitions[i0, d1], transitions[i1, d2] )
assert_equal( transitions[i0, d1], transitions[i2, d3] )
model.fit( sequences, verbose=False )
transitions = model.dense_transition_matrix()
assert_not_equal( transitions[d1, i1], transitions[d2, i2] )
assert_not_equal( transitions[i0, m1], transitions[i1, m2] )
assert_not_equal( transitions[d1, d2], transitions[d2, d3] )
assert_not_equal( transitions[i0, d1], transitions[i1, d2] )
assert_not_equal( transitions[i0, d1], transitions[i2, d3] )
@with_setup( setup, teardown )
def test_posterior_transitions_w_vtraining():
sequences = [ list(x) for x in ( 'A', 'ACT', 'GGCA', 'TACCTGT' ) ]
indices = { state.name: i for i, state in enumerate( model.states ) }
transitions = model.dense_transition_matrix()
i0, i1, i2, i3 = indices['I0'], indices['I1'], indices['I2'], indices['I3']
d1, d2, d3 = indices['D1'], indices['D2'], indices['D3']
m1, m2, m3 = indices['M1'], indices['M2'], indices['M3']
assert_equal( transitions[d1, i1], transitions[d2, i2] )
assert_equal( transitions[i0, i0], transitions[i1, i1] )
assert_equal( transitions[i0, i0], transitions[i2, i2] )
assert_equal( transitions[i0, m1], transitions[i1, m2] )
assert_equal( transitions[d1, d2], transitions[d2, d3] )
assert_equal( transitions[i0, d1], transitions[i1, d2] )
assert_equal( transitions[i0, d1], transitions[i2, d3] )
model.fit( sequences, verbose=False, algorithm='viterbi' )
transitions = model.dense_transition_matrix()
assert_not_equal( transitions[i0, i0], transitions[i1, i1] )
assert_not_equal( transitions[d1, d2], transitions[d2, d3] )
assert_not_equal( transitions[i0, d1], transitions[i1, d2] )
assert_not_equal( transitions[i0, d1], transitions[i2, d3] )
@with_setup( tied_edge_setup, teardown )
def test_posterior_transitions_w_tied_training():
sequences = [ list(x) for x in ( 'A', 'ACT', 'GGCA', 'TACCTGT' ) ]
indices = { state.name: i for i, state in enumerate( model.states ) }
transitions = model.dense_transition_matrix()
i0, i1, i2, i3 = indices['I0'], indices['I1'], indices['I2'], indices['I3']
d1, d2, d3 = indices['D1'], indices['D2'], indices['D3']
m1, m2, m3 = indices['M1'], indices['M2'], indices['M3']
assert_equal( transitions[d1, i1], transitions[d2, i2] )
assert_equal( transitions[i0, i0], transitions[i1, i1] )
assert_equal( transitions[i0, i0], transitions[i2, i2] )
assert_equal( transitions[i0, m1], transitions[i1, m2] )
assert_equal( transitions[d1, d2], transitions[d2, d3] )
assert_equal( transitions[i0, d1], transitions[i1, d2] )
assert_equal( transitions[i0, d1], transitions[i2, d3] )
model.fit( sequences, verbose=False )
transitions = model.dense_transition_matrix()
assert_equal( transitions[i0, i0], transitions[i1, i1] )
assert_equal( transitions[d1, d2], transitions[d2, d3] )
assert_equal( transitions[i0, d1], transitions[i1, d2] )
assert_equal( transitions[i0, d1], transitions[i2, d3] )
@with_setup( tied_edge_setup, teardown )
def test_posterior_transitions_w_tied_vtraining():
sequences = [ list(x) for x in ( 'A', 'ACT', 'GGCA', 'TACCTGT' ) ]
indices = { state.name: i for i, state in enumerate( model.states ) }
transitions = model.dense_transition_matrix()
i0, i1, i2 = indices['I0'], indices['I1'], indices['I2']
d1, d2, d3 = indices['D1'], indices['D2'], indices['D3']
m1, m2, m3 = indices['M1'], indices['M2'], indices['M3']
assert_equal( transitions[d1, i1], transitions[d2, i2] )
assert_equal( transitions[i0, i0], transitions[i1, i1] )
assert_equal( transitions[i0, i0], transitions[i2, i2] )
assert_equal( transitions[i0, m1], transitions[i1, m2] )
assert_equal( transitions[d1, d2], transitions[d2, d3] )
assert_equal( transitions[i0, d1], transitions[i1, d2] )
assert_equal( transitions[i0, d1], transitions[i2, d3] )
model.fit( sequences, verbose=False, algorithm='viterbi' )
transitions = model.dense_transition_matrix()
assert_equal( transitions[d1, i1], transitions[d2, i2] )
assert_equal( transitions[i0, i0], transitions[i1, i1] )
assert_equal( transitions[i0, i0], transitions[i2, i2] )
assert_equal( transitions[i0, m1], transitions[i1, m2] )
assert_equal( transitions[d1, d2], transitions[d2, d3] )
assert_equal( transitions[i0, d1], transitions[i1, d2] )
assert_equal( transitions[i0, d1], transitions[i2, d3] )
@with_setup( setup, teardown )
def test_posterior_emissions():
a_scores = [ 0.987, 0.9965, 0.183, 0.523 ]
b_scores = [ 0.0, 0.9977, 0.7364, 0.6318 ]
c_scores = [ 0.0, 0.9975, 0.6237, 0.8641 ]
d_scores = [ 0.0, 0.0021, 0.2017, 1.5105 ]
sequences = [ list(x) for x in ( 'A', 'ACT', 'GGCA', 'TACCTGT' ) ]
indices = { state.name: i for i, state in enumerate( model.states ) }
i, j, k, l = indices['M1'], indices['M2'], indices['M3'], indices['I2']
for seq, a, b, c, d in zip( sequences, a_scores, b_scores, c_scores, d_scores ):
trans, ems = model.forward_backward( seq )
ems = np.exp( ems )
assert_equal( round( ems[:,i].sum(), 4 ), a )
assert_equal( round( ems[:,j].sum(), 4 ), b )
assert_equal( round( ems[:,k].sum(), 4 ), c )
assert_equal( round( ems[:,l].sum(), 4 ), d )
assert_equal( round( ems.sum() ), len( seq ) )
@with_setup( multitransition_setup, teardown )
def test_posterior_emissions_w_multitransition_setup():
a_scores = [ 0.987, 0.9965, 0.183, 0.523 ]
b_scores = [ 0.0, 0.9977, 0.7364, 0.6318 ]
c_scores = [ 0.0, 0.9975, 0.6237, 0.8641 ]
d_scores = [ 0.0, 0.0021, 0.2017, 1.5105 ]
sequences = [ list(x) for x in ( 'A', 'ACT', 'GGCA', 'TACCTGT' ) ]
indices = { state.name: i for i, state in enumerate( model.states ) }
i, j, k, l = indices['M1'], indices['M2'], indices['M3'], indices['I2']
for seq, a, b, c, d in zip( sequences, a_scores, b_scores, c_scores, d_scores ):
trans, ems = model.forward_backward( seq )
ems = np.exp( ems )
assert_equal( round( ems[:,i].sum(), 4 ), a )
assert_equal( round( ems[:,j].sum(), 4 ), b )
assert_equal( round( ems[:,k].sum(), 4 ), c )
assert_equal( round( ems[:,l].sum(), 4 ), d )
assert_equal( round( ems.sum() ), len( seq ) )
@with_setup( tied_edge_setup, teardown )
def test_posterior_emissions_w_tied_edge_setup():
a_scores = [ 0.987, 0.9965, 0.183, 0.523 ]
b_scores = [ 0.0, 0.9977, 0.7364, 0.6318 ]
c_scores = [ 0.0, 0.9975, 0.6237, 0.8641 ]
d_scores = [ 0.0, 0.0021, 0.2017, 1.5105 ]
sequences = [ list(x) for x in ( 'A', 'ACT', 'GGCA', 'TACCTGT' ) ]
indices = { state.name: i for i, state in enumerate( model.states ) }
i, j, k, l = indices['M1'], indices['M2'], indices['M3'], indices['I2']
for seq, a, b, c, d in zip( sequences, a_scores, b_scores, c_scores, d_scores ):
trans, ems = model.forward_backward( seq )
ems = np.exp( ems )
assert_equal( round( ems[:,i].sum(), 4 ), a )
assert_equal( round( ems[:,j].sum(), 4 ), b )
assert_equal( round( ems[:,k].sum(), 4 ), c )
assert_equal( round( ems[:,l].sum(), 4 ), d )
assert_equal( round( ems.sum() ), len( seq ) )
@with_setup( setup, teardown )
def test_properties():
assert_equal( model.edge_count(), 29 )
assert_equal( model.state_count(), 12 )
assert_equal( model.name, "Global Alignment" )
@with_setup( setup, teardown )
def test_to_json():
b = json.loads( model.to_json() )
assert_equal( b['name'], 'Global Alignment' )
assert_equal( len(b['edges']), 29 )
assert_equal( len(b['states']), 12 )
assert_equal( b['silent_index'], 7 )
@with_setup( setup, teardown )
def test_from_json():
hmm = HiddenMarkovModel.from_json( model.to_json() )
assert_equal( hmm.edge_count(), 29 )
assert_equal( hmm.state_count(), 12 )
assert_equal( hmm.name, "Global Alignment" )
# Source: tests/test_profile_hmm.py (m-martin-j/pomegranate, MIT)
import torch
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
from math import exp
from torch import nn
from torch import Tensor
class L1_LOSS(nn.Module):
def __init__(self) -> None:
super(L1_LOSS, self).__init__()
def forward(self, input: Tensor) -> Tensor:
_, _, h, w = input.size()
norm = torch.sum(torch.abs(input), axis=(2, 3))
norm = norm.div(h*w)
return norm.mean()
class Per_LOSS(nn.Module):
def __init__(self) -> None:
super(Per_LOSS, self).__init__()
def forward(self, input: Tensor) -> Tensor:
_, c, h, w = input.size()
norm = torch.sum(torch.square(input), axis=(1, 2, 3))
loss = norm.div(h*w*c)
return loss
class Fro_LOSS(nn.Module):
def __init__(self) -> None:
super(Fro_LOSS, self).__init__()
def forward(self, input: Tensor) -> Tensor:
_, _, h, w = input.size()
fro_norm = torch.square(torch.norm(input, p='fro', dim=(2, 3))).div(h*w)
fro_norm = torch.mean(fro_norm)
return fro_norm
def gaussian(window_size, sigma):
gauss = torch.Tensor([exp(-(x - window_size//2)**2/float(2*sigma**2)) for x in range(window_size)])
return gauss/gauss.sum()
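# e.g. gaussian(11, 1.5) yields an 11-tap kernel that sums to 1; it is used
# below to build the separable 2-D window for SSIM.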
def create_window(window_size, channel):
_1D_window = gaussian(window_size, sigma=1.5).unsqueeze(1)
_2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous())
return window
def _ssim(img1, img2, window, window_size, channel, size_average = True):
mu1 = F.conv2d(img1, window, padding = window_size//2, groups = channel)
mu2 = F.conv2d(img2, window, padding = window_size//2, groups = channel)
mu1_sq = mu1.pow(2)
mu2_sq = mu2.pow(2)
mu1_mu2 = mu1*mu2
sigma1_sq = F.conv2d(img1*img1, window, padding = window_size//2, groups = channel) - mu1_sq
sigma2_sq = F.conv2d(img2*img2, window, padding = window_size//2, groups = channel) - mu2_sq
sigma12 = F.conv2d(img1*img2, window, padding = window_size//2, groups = channel) - mu1_mu2
C1 = 0.01**2
C2 = 0.03**2
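    # C1 and C2 stabilize the divisions when means/variances are near zero;
    # with pixel values in [0, 1] (dynamic range L = 1), C1 = (0.01*L)^2 and
    # C2 = (0.03*L)^2, following Wang et al. (2004).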
ssim_map = ((2*mu1_mu2 + C1)*(2*sigma12 + C2))/((mu1_sq + mu2_sq + C1)*(sigma1_sq + sigma2_sq + C2))
if size_average:
return ssim_map.mean()
else:
return ssim_map.mean(1).mean(1).mean(1)
class SSIM(torch.nn.Module):
"""Reference
https://ece.uwaterloo.ca/~z70wang/research/ssim/
"""
def __init__(self, window_size = 11, size_average = True):
super(SSIM, self).__init__()
self.window_size = window_size
self.size_average = size_average
self.channel = 1
self.window = create_window(window_size, self.channel)
def forward(self, img1, img2):
(_, channel, _, _) = img1.size()
if channel == self.channel and self.window.data.type() == img1.data.type():
window = self.window
else:
window = create_window(self.window_size, channel)
if img1.is_cuda:
window = window.cuda(img1.get_device())
window = window.type_as(img1)
self.window = window
self.channel = channel
return _ssim(img1, img2, window, self.window_size, channel, self.size_average)
def ssim(img1, img2, window_size = 11, size_average = True):
(_, channel, _, _) = img1.size()
window = create_window(window_size, channel)
if img1.is_cuda:
window = window.cuda(img1.get_device())
window = window.type_as(img1)
return _ssim(img1, img2, window, window_size, channel, size_average)
if __name__ == "__main__":
l1 = L1_LOSS()
per = Per_LOSS()
fro = Fro_LOSS()
x1 = torch.randn([16, 3, 256, 256])
x2 = torch.randn([16, 3, 256, 256])
sim = SSIM()
# Source: loss.py (LEE-SEON-WOO/FusionDN_pytorch, Apache-2.0)
import torch
import util.util as util
from util.NonparametricShift import Modified_NonparametricShift, Batch_NonShift
from torch.nn import functional as F
import numpy as np
import matplotlib.pyplot as plt
bz = 2
c = 3 # at least 2
w = 16
h = 16
feature_size = [bz, c, w, h]
former = torch.rand(bz*c*h*w).mul_(50).reshape(bz, c, h, w).int().float()
latter = torch.rand(bz*c*h*w).mul_(50).reshape(bz, c, h, w).int().float()
flag = torch.zeros(bz, h, w).byte()
flag[:, h//4:h//2+1, h//4:h//2+1] = 1
flag = flag.view(bz, h*w)
ind_lst = torch.FloatTensor(bz, h*w, h*w).zero_()
shift_offsets = []
#Nonparm = Modified_NonparametricShift()
bNonparm = Batch_NonShift()
cosine, latter_windows, i_2, i_3, i_1 = bNonparm.cosine_similarity(former.clone(), latter.clone(), 1, 1, flag)
print(cosine.size())
print(latter_windows.size())
## GET INDEXES THAT MAXIMIZE COSINE SIMILARITY
_, indexes = torch.max(cosine, dim=2)
print('indexes dim')
print(indexes.size())
# SET TRANSITION MATRIX
mask_indexes = (flag == 1).nonzero()
mask_indexes = mask_indexes[:,1] # remove indexes that indicates the batch dim
mask_indexes = mask_indexes.view(bz, -1)
# Also remove indexes of batch
tmp = (flag==0).nonzero()[:,1]
tmp = tmp.view(bz, -1)
print('tmp size')
print(tmp.size())
# Vectorized gather: offset each batch row's indexes into the flattened `tmp`.
idx_tmp = indexes + torch.arange(indexes.size(0)).view(-1, 1) * tmp.size(1)
non_mask_indexes = tmp.view(-1)[idx_tmp]
# Original method
non_mask_indexes_2 = []
for i in range(bz):
non_mask_indexes_tmp = tmp[i][indexes[i]]
non_mask_indexes_2.append(non_mask_indexes_tmp)
non_mask_indexes_2 = torch.stack(non_mask_indexes_2, dim=0)
print('These two methods should be the same, as the error is 0!')
print(torch.sum(non_mask_indexes-non_mask_indexes_2))
ind_lst2 = ind_lst.clone()
for i in range(bz):
ind_lst[i][mask_indexes[i], non_mask_indexes[i]] = 1
print(ind_lst.sum())
print(ind_lst)
for i in range(bz):
for mi, nmi in zip(mask_indexes[i], non_mask_indexes[i]):
print('The %d\t-th pixel in the %d-th tensor will shift to %d\t-th coordinate' %(nmi, i, mi))
print('~~~')
# GET FINAL SHIFT FEATURE
shift_masked_all = bNonparm._paste(latter_windows, ind_lst, i_2, i_3, i_1)
print(shift_masked_all.size())
assert 1 == 2  # intentional halt: the exploratory code below is not executed
# print('flag')
# print(flag.reshape(h,w))
# print('ind_lst')
# print(ind_lst)
# print('out')
# print(shift_masked_all)
# Convert the flat non-mask indices into (row, col) shift offsets.
shift_offset = torch.stack([non_mask_indexes.squeeze() // w, torch.fmod(non_mask_indexes.squeeze(), w)], dim=-1)
print('shift_offset')
print(shift_offset)
print(shift_offset.size())
shift_offsets.append(shift_offset)
shift_offsets = torch.cat(shift_offsets, dim=0).float()
print(shift_offsets.size())
print(shift_offsets)
shift_offsets_cl = shift_offsets.clone()
lt = (flag==1).nonzero()[0]
rb = (flag==1).nonzero()[-1]
mask_h = rb//w+1 - lt//w
mask_w = rb%w+1 - lt%w
shift_offsets = shift_offsets.view([bz] + [2] + [mask_h, mask_w]) # So only appropriate for square mask.
print(shift_offsets.size())
print(shift_offsets)
h_add = torch.arange(0, float(h)).view([1, 1, h, 1]).float()
h_add = h_add.expand(bz, 1, h, w)
w_add = torch.arange(0, float(w)).view([1, 1, 1, w]).float()
w_add = w_add.expand(bz, 1, h, w)
com_map = torch.cat([h_add, w_add], dim=1)
print('com_map')
print(com_map)
com_map_crop = com_map[:, :, lt//w:rb//w+1, lt%w:rb%w+1]
print('com_map crop')
print(com_map_crop)
shift_offsets = shift_offsets - com_map_crop
print('final shift_offsets')
print(shift_offsets)
# to flow image
flow = torch.from_numpy(util.flow_to_image(shift_offsets.permute(0,2,3,1).cpu().data.numpy()))
flow = flow.permute(0,3,1,2)
#visualize which pixels are attended
print(flag.size())
print(shift_offsets.size())
# global and N*C*H*W
# put shift_offsets_cl back to the global map.
shift_offsets_map = flag.clone().view(-1)
shift_offsets_map[indexes] = shift_offsets_cl.view(-1)
print(shift_offsets_map)
assert 1 == 2  # intentional halt: the flow plotting below is not executed
flow2 = torch.from_numpy(util.highlight_flow((shift_offsets_cl).numpy()))
upflow = F.interpolate(flow, scale_factor=4, mode='nearest')
upflow2 = F.interpolate(flow2, scale_factor=4, mode='nearest')
upflow = upflow.squeeze().permute(1,2,0)
upflow2 = upflow2.squeeze().permute(1,2,0)
print('flow 1')
print(upflow)
print(upflow.size())
print('flow 2')
print(upflow2)
print(upflow2.size())
fig, axs = plt.subplots(ncols=2)
axs[0].imshow(upflow)
axs[1].imshow(upflow2)
plt.show()
# Source: test_acc_shift.py (mauriliosalg/Seis_Shift-Net_pytorch, MIT)
C$Procedure CBINIT ( Character buffer, initialize )
SUBROUTINE CBINIT_1 ( DIM, BUFFER )
IMPLICIT NONE
C$ Abstract
C
C Initialize a character buffer.
C
C$ Disclaimer
C
C THIS SOFTWARE AND ANY RELATED MATERIALS WERE CREATED BY THE
C CALIFORNIA INSTITUTE OF TECHNOLOGY (CALTECH) UNDER A U.S.
C GOVERNMENT CONTRACT WITH THE NATIONAL AERONAUTICS AND SPACE
C ADMINISTRATION (NASA). THE SOFTWARE IS TECHNOLOGY AND SOFTWARE
C PUBLICLY AVAILABLE UNDER U.S. EXPORT LAWS AND IS PROVIDED "AS-IS"
C TO THE RECIPIENT WITHOUT WARRANTY OF ANY KIND, INCLUDING ANY
C WARRANTIES OF PERFORMANCE OR MERCHANTABILITY OR FITNESS FOR A
C PARTICULAR USE OR PURPOSE (AS SET FORTH IN UNITED STATES UCC
C SECTIONS 2312-2313) OR FOR ANY PURPOSE WHATSOEVER, FOR THE
C SOFTWARE AND RELATED MATERIALS, HOWEVER USED.
C
C IN NO EVENT SHALL CALTECH, ITS JET PROPULSION LABORATORY, OR NASA
C BE LIABLE FOR ANY DAMAGES AND/OR COSTS, INCLUDING, BUT NOT
C LIMITED TO, INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND,
C INCLUDING ECONOMIC DAMAGE OR INJURY TO PROPERTY AND LOST PROFITS,
C REGARDLESS OF WHETHER CALTECH, JPL, OR NASA BE ADVISED, HAVE
C REASON TO KNOW, OR, IN FACT, SHALL KNOW OF THE POSSIBILITY.
C
C RECIPIENT BEARS ALL RISK RELATING TO QUALITY AND PERFORMANCE OF
C THE SOFTWARE AND ANY RELATED MATERIALS, AND AGREES TO INDEMNIFY
C CALTECH AND NASA FOR ALL THIRD-PARTY CLAIMS RESULTING FROM THE
C ACTIONS OF RECIPIENT IN THE USE OF THE SOFTWARE.
C
C$ Required_Reading
C
C CB
C
C$ Keywords
C
C ASCII
C CHARACTER
C STRING
C TEXT
C
C$ Declarations
INTEGER LBCBUF
PARAMETER ( LBCBUF = 0 )
INTEGER DIM
CHARACTER*(*) BUFFER ( LBCBUF:DIM )
C$ Brief_I/O
C
C Variable I/O Description
C -------- --- --------------------------------------------------
C DIM I Dimension of the character buffer array.
C BUFFER I,O Character buffer.
C
C$ Detailed_Input
C
C DIM is the dimension of the array containing the
C character buffer to be initialized.
C
C BUFFER is the array.
C
C$ Detailed_Output
C
C BUFFER is an initialized character buffer.
C
C$ Files
C
C None.
C
C$ Exceptions
C
C 1) The error 'SPICE(NOTLEGALCB)' is signalled whenever any of
C the following conditions is detected.
C
C -- The length of the individual array elements is less
C than eight.
C
C -- DIM is less than one.
C
C$ Particulars
C
C A character buffer must be initialized to allow subsequent
C operations on the buffer to detect possible overflows.
C
C$ Examples
C
C The following code fragment illustrates the initialization
C of a character buffer.
C
C INTEGER LBCBUF
C PARAMETER ( LBCBUF = 0 )
C
C INTEGER BUFDIM
C PARAMETER ( BUFDIM = 256 )
C
C INTEGER BUFLEN
C PARAMETER ( BUFLEN = 1024 )
C
C CHARACTER*(BUFLEN) BUFFER ( LBCBUF:BUFDIM )
C .
C .
C
C CALL CBINIT ( BUFDIM, BUFFER )
C
C In this example, the buffer contains 256K characters of available
C storage (256 array elements of 1024 characters each). Note that
C it is only necessary to supply the dimension of the array (256),
C and not the length of the individual elements (1024).
C
C$ Restrictions
C
C None.
C
C$ Literature_References
C
C None.
C
C$ Author_and_Institution
C
C Dagny Taggart, (JPL)
C
C$ Version
C
C- Beta Version 1.0.0, 19-JAN-1989 (DT)
C
C-&
C
C SPICELIB functions
C
LOGICAL RETURN
C
C Standard error handling.
C
IF ( RETURN() ) THEN
RETURN
ELSE
CALL CHKIN ( 'CBINIT_1' )
IF ( LEN ( BUFFER(0) ) .LT. 8 ) THEN
CALL SETMSG ( 'Length is #.' )
CALL ERRINT ( '#', LEN ( BUFFER(0) ) )
CALL SIGERR ( 'SPICE(NOTLEGALCB)' )
CALL CHKOUT ( 'CBINIT_1' )
RETURN
ELSE IF ( DIM .LT. 1 ) THEN
CALL SETMSG ( 'Dimension is #.' )
CALL ERRINT ( '#', DIM )
CALL SIGERR ( 'SPICE(NOTLEGALCB)' )
CALL CHKOUT ( 'CBINIT_1' )
RETURN
END IF
END IF
C
C Store only the dimension.
C
CALL ENCHAR ( DIM, BUFFER(0)(1:8) )
CALL CHKOUT ( 'CBINIT_1' )
RETURN
END
C     Source: source/nasa_f/cbinit_1.f (agforero/FTFramework, MIT)
import abc
import copy
import os
import numpy as np
import tqdm
from collections import deque
from . import stage as st
from .luminos_stage import luminos_stage as ls
from ..utils import gnuplot as gp
def _unique(seq):
    # Order-preserving de-duplication: `seen.add(x)` returns None (falsy),
    # so `seen.add(x) or x` evaluates to x while recording it as seen.
    seen = set()
    return [seen.add(x) or x for x in seq if x not in seen]
class ScannerDesign:
'''
A list-style object containing a group of `scan` objects.
Supports performing various nested scans and chains of scans.
'''
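    # An illustrative usage sketch (hypothetical `Scan` objects built
    # elsewhere):
    #   design = ScannerDesign()
    #   design._add_nested(fine_scan, coarse_scan)  # run `fine_scan` at
    #                                               # every `coarse_scan` point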
def __init__(self):
self._get_pos_list = []
self._set_pos_list = []
self._steps = deque()
self._scan_types = deque()
def _add(self, scans):
'''
Adds a list of `scan` objects. When `scan(...)`
is called, all scans in the list will be run
sequentially, and, if specified, the stages will
go to the max power at the end of each scan.
Args:
scans(list(Scan)): List of `scan` objects.
'''
self._steps.append((scans, None))
self._scan_types.append('scans')
for scan in scans:
self._get_pos_list_add(scan._get_pos_funcs)
self._set_pos_list_add(scan._move_funcs)
def _add_nested(self, nested_scan, outer_scan):
'''
Adds a nested scan step. The nested scan
performs the `nested_scan` scan in the
`outer_scan` scan.
Args:
nested_scan(Scan): The nested scan.
outer_scan(Scan): The outer scan.
'''
self._steps.append((nested_scan, outer_scan))
self._scan_types.append('nested')
self._get_pos_list_add(nested_scan._get_pos_funcs)
self._set_pos_list_add(nested_scan._move_funcs)
self._get_pos_list_add(outer_scan._get_pos_funcs)
self._set_pos_list_add(outer_scan._move_funcs)
def _add_nested_each_max(self, nested_scans, outer_scan):
'''
        Adds a nested scan step that moves the stages to the
        maximum-power position after each nested scan.
        Args:
            nested_scans(list(Scan)): A list of the nested
                scans.
            outer_scan(Scan): The outer scan.
'''
self._steps.append((nested_scans, outer_scan))
self._scan_types.append('nested_goto_max')
for nested_scan in nested_scans:
self._get_pos_list_add(nested_scan._get_pos_funcs)
self._set_pos_list_add(nested_scan._move_funcs)
self._get_pos_list_add(outer_scan._get_pos_funcs)
self._set_pos_list_add(outer_scan._move_funcs)
def _get_pos_list_add(self, get_pos_list):
self._get_pos_list.extend(get_pos_list)
self._get_pos_list = _unique(self._get_pos_list)
return self._get_pos_list
def _set_pos_list_add(self, set_pos_list):
self._set_pos_list.extend(set_pos_list)
self._set_pos_list = _unique(self._set_pos_list)
return self._set_pos_list
def _get_stages_pos(self):
pos = [get_pos() for get_pos in self._get_pos_list]
return pos
def _restore_stages_pos(self, pos):
for p, mf in zip(pos, self._set_pos_list):
mf(p)
    def _scan(self, scans, goto_max):
        # Run all scans in this step sequentially.
        coords_pows = []
        for scan in scans:
            coords_pows.append(scan.scan(goto_max))
        return coords_pows
def _scan_nested(self, scan, outer_scan, goto_max=True):
# Get initial pos.
pos_init = self._get_stages_pos()
        # Temporarily apply any offsets as an absolute move, then clear
        # them so they are not applied a second time inside the scans.
        # (The outer_scan may not strictly need this; the inner scan does.)
s_tmp = [None, None]
for i, s in enumerate([outer_scan, scan]):
if len(s.offsets):
axes_pos = np.array([get_pos() for get_pos in s._get_pos_funcs])
s._move_abs(axes_pos + s.offsets)
s_tmp[i] = copy.copy(s.offsets)
s.offsets = []
# Do scan.
coords_for_each, coords_pows = outer_scan.traverse_pattern(scan.scan,
kwargs={'goto_max': False})
print()
# Sort all coords from both scans.
coords_sorted = []
for coord_for_each, coord_pow in zip(coords_for_each, coords_pows):
for c, p in zip(coord_pow[0][0], coord_pow[0][1]):
comb = (coord_for_each, c, p)
coords_sorted.append(comb)
coords_sorted_T = np.array(coords_sorted).T
idx_max_pow = np.argmax(coords_sorted_T[2])
# Determine the max power coordinates, and the max power.
for_every_coord_max_pow = coords_sorted_T[0][idx_max_pow]
scan_coord_max_pow = coords_sorted_T[1][idx_max_pow]
max_pow = coords_sorted_T[2][idx_max_pow]
        # Restore the original stage positions, then optionally move to
        # the maximum-power coordinates.
        self._restore_stages_pos(pos_init)
        if goto_max:
            scan._move_abs(scan_coord_max_pow)
            outer_scan._move_abs(for_every_coord_max_pow)
        r = coords_sorted_T, (for_every_coord_max_pow, scan_coord_max_pow, max_pow)
        # Restore offsets. (`is not None` avoids the ambiguous
        # element-wise comparison that `!= None` triggers on arrays.)
        if s_tmp[0] is not None:
            outer_scan.offsets = s_tmp[0]
        if s_tmp[1] is not None:
            scan.offsets = s_tmp[1]
return r
def _scan_nested_each_max(self, scans, outer_scan, goto_max=False):
# Get initial pos.
pos_init = self._get_stages_pos()
# Do scan.
max_pow_pos = []
max_pows = []
def scan_all():
for scan in scans:
_, coord_max_pow = scan.scan(goto_max=True)
max_pow_pos.append(self._get_stages_pos())
max_pows.append(coord_max_pow[1])
return coord_max_pow
outer_scan.traverse_pattern(scan_all)
# Find max.
idx_mp = np.argmax(max_pows)
# Move to max pos if specified.
if goto_max:
self._restore_stages_pos(pos_init)
self._restore_stages_pos(max_pow_pos[idx_mp])
else:
self._restore_stages_pos(pos_init)
return max_pow_pos[idx_mp], max_pows[idx_mp]
def scan(self, goto_max=True):
for (scans, outer_scan), scan_type in zip(self._steps, self._scan_types):
if scan_type == 'nested':
res = self._scan_nested(scans, outer_scan, goto_max)
elif scan_type == 'nested_goto_max':
res = self._scan_nested_each_max(scans, outer_scan, goto_max)
elif scan_type == 'scans':
res = self._scan(scans, goto_max)
            else:
                raise ValueError('Unknown scan type: %s' % scan_type)
return res
    def __str__(self):
        recipe = ''
        for i, (scans, loop) in enumerate(self._steps):
            # A step may hold a single scan or a list of scans.
            if not isinstance(scans, (list, tuple)):
                scans = [scans]
            step_str = 'STEP %i: ' % i
            scan_str = ' SCAN '
            join_str = '\n' + ' '*len(scan_str) + ' THEN '
            scan_str += join_str.join(
                [scan.__class__.__name__ + ' USING ' +
                 ','.join([axis.__class__.__name__ for axis in scan.axes])
                 for scan in scans])
            if loop:
                loop_str = '\n' + ' '*len(step_str) + 'FOR EACH '
                loop_str += loop.__class__.__name__ + ' USING ' + \
                    ','.join([axis.__class__.__name__ for axis in loop.axes])
            else:
                loop_str = ''
            recipe += step_str + scan_str + ' ' + loop_str + '\n'
        return recipe
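# Minimal usage sketch (illustrative; `stage` and `power_meter` are
# assumed driver objects, and Rectangle/Line are defined below):
#
#     rect = Rectangle(stage.x, stage.y, power_meter,
#                      axis_1_pts=5, axis_2_pts=5,
#                      axis_1_step=1.0, axis_2_step=1.0)
#     lz = Line(stage.z, power_meter, axis_pts=10, axis_step=-1.0, origin='l')
#     sd = ScannerDesign()
#     sd._add_nested(rect, lz)   # raster the rectangle at each z position
#     sd.scan(goto_max=True)     # finish at the maximum-power coordinates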
class Scan(metaclass=abc.ABCMeta):
'''
The general interface a `scan` should adhere to.
A scan object consists of a set of axes, a pattern function
defining the path of the axes, as well as a power meter.
'''
def __init__(self, axes, power_meter, offsets=[], *args, **kwargs):
for axis in axes:
assert issubclass(type(axis), st.Axis) or not axis
assert len(offsets) in (0, len(axes)), 'Incorrect offset length.'
self.offsets = np.array(offsets)
self.axes = axes
self.power_meter = power_meter
self.dimensions = len(axes)
self.pattern = self.__class__._pattern(*args, **kwargs)
# Flat array of coords
        flat_shape = (np.prod(self.pattern.shape[:-1]), self.pattern.shape[-1])
self.pattern_flat = np.copy(self.pattern).reshape(flat_shape)
# Determine if move_abs_um or move_abs_degree
self._move_funcs = []
self._get_pos_funcs = []
self._min_max = []
for axis in axes:
if issubclass(type(axis), st.AxisLinear):
self._move_funcs.append(axis.move_abs_um)
self._get_pos_funcs.append(axis.get_current_position_um)
self._min_max.append((axis.get_position_absolute_min_um(),
axis.get_position_absolute_max_um()))
elif issubclass(type(axis), st.AxisRotate):
self._move_funcs.append(axis.move_abs_degree)
self._get_pos_funcs.append(axis.get_current_position_degree)
self._min_max.append((axis.get_position_absolute_min_degree(),
axis.get_position_absolute_max_degree()))
@staticmethod
@abc.abstractmethod
def _pattern(*args, **kwargs):
'''
An implementation of the pattern that will be swept.
All the arguments passed to the constructor captured
by *args and **kwargs will be passed to this function.
Return:
n-dimensional iterable: n-dimensional object that
                returns the coordinates to move to. The
                coordinates should be absolute (not relative)
                movements, where (0,0) is the assumed origin of
                the coordinates.
'''
pass
def traverse_pattern(self, func=None, args=[], kwargs={}):
'''
Sequentially move the stages to each point in the pattern
returned by `_pattern()`, calling `func(*args, **kwargs)`
at each point.
Args:
func(function): The function to call at each point.
args(list): The arguments to pass to `func`.
kwargs(dict): The kwargs to pass to `func`.
Returns:
(list, list): The first list is a flattened version
of the pattern coordinates, and the second list
contains the results of calling func.
'''
        # Back up and set xy axis speeds and accelerations.
        _luminos_xy_speeds = deque()
        _luminos_xy_accel = deque()
        for axis in self.axes:
            if issubclass(type(axis), (ls.LuminosAxisX, ls.LuminosAxisY)):
                _luminos_xy_speeds.append(axis.get_speed())
                _luminos_xy_accel.append(axis.get_acceleration())
                axis.set_speed(3000)
                axis.set_acceleration(100)
axes_pos = np.array([get_pos() for get_pos in self._get_pos_funcs])
# Apply offset
if len(self.offsets):
axes_pos += self.offsets
coords = np.array([coord+axes_pos for coord in self.pattern_flat])
        # Check the pattern stays within each axis's travel range.
        for (mn, mx), coords_axis, axis in zip(self._min_max, coords.T, self.axes):
            assert np.all(mn <= coords_axis), \
                'Pattern exceeds %s-axis minimum range.' % axis.name
            assert np.all(coords_axis <= mx), \
                'Pattern exceeds %s-axis maximum range.' % axis.name
results = [None]*coords.shape[0]
for i, coord in enumerate(tqdm.tqdm(coords, ncols=80)):
self._move_abs(coord)
if func:
results[i] = func(*args, **kwargs)
        # Restore xy axis speeds and accelerations.
        for axis in self.axes:
            if issubclass(type(axis), (ls.LuminosAxisX, ls.LuminosAxisY)):
                axis.set_speed(_luminos_xy_speeds.popleft())
                axis.set_acceleration(_luminos_xy_accel.popleft())
return coords, results
def scan(self, goto_max=True):
'''
Traverse the pattern returned by `_pattern()` and
measure the power at each point.
Args:
goto_max(bool): If `True`, move the axes to the maximum
power reading measured after the scan. If `False`,
return the axes to their original positions (where
they were before starting scan).
Returns:
((list, list), (2-tuple, float)): The first list is a
flattened version of the pattern coordinates, and
the second list contains power readings. The 2-tuple
are the (x,y) coordinates of the maximum power of the
scan, and the float is the maximum power.
'''
# Store initial position.
pos_init = np.array([get_pos() for get_pos in self._get_pos_funcs])
# Traverse pattern and get max power.
coords, powers = self.traverse_pattern(self.power_meter.get_power_W)
powers = np.array(powers)
coord_max_power = self._get_coord_max_power(coords, powers)
# Either goto max or restore the initial position.
if goto_max:
self._move_abs(pos_init)
self._move_abs(coord_max_power[0])
else:
self._move_abs(pos_init)
return (coords, powers), coord_max_power
def _move_abs(self, coord):
#assert len(coord) == self.dimensions
for point, move_abs in zip(coord, self._move_funcs):
move_abs(point)
def _get_coord_max_power(self, coords, powers):
idx = np.argmax(powers)
return coords[idx], powers[idx]
def __str__(self):
return self.__class__.__name__ + ': dim ' + str(self.dimensions) \
+ '; axes ' + ','.join([axis.__class__.__name__ for axis in self.axes])
class Rectangle(Scan):
'''
A two-dimensional `scan` in a rectangular shape along `axis_1` and
`axis_2`.
'''
def __init__(self, axis_1, axis_2, power_meter,
axis_1_pts, axis_2_pts, axis_1_step, axis_2_step,
offset=(0,0), meander=True, origin='c'):
axes = [axis_1, axis_2]
Scan.__init__(self, axes, power_meter, offset,
axis_1_pts, axis_2_pts,
axis_1_step, axis_2_step,
meander, origin)
def scan(self, goto_max=False, plot=False):
r = Scan.scan(self, goto_max)
(coords, powers), _ = r
if plot:
np.savetxt(plot, np.c_[self.pattern_flat.T[0], self.pattern_flat.T[1], powers], '%.6e', ',')
root, _ = os.path.splitext(plot)
filename_png = root + '.png'
plot_args = {
'filename': plot,
'filename_png': filename_png,
'axis_1': self.axes[0].name,
'axis_2': self.axes[1].name
}
path = os.path.abspath(__file__)
dir_path = os.path.dirname(path)
gp.Gnuplot(dir_path + '/scanner.gpi', plot_args)
os.system('display %s' % filename_png)
return r
@staticmethod
def _pattern(axis_1_pts, axis_2_pts, axis_1_step, axis_2_step, meander, *args):
pts = []
coord = [None, None]
axis_1_dist = (axis_1_pts-1) * axis_1_step
axis_2_dist = (axis_2_pts-1) * axis_2_step
parity = False
for n_2 in np.arange(-axis_2_dist/2, (axis_2_dist+0.1*axis_2_step)/2, axis_2_step):
parity ^= 1
row = []
coord[1] = n_2
for n_1 in np.arange(-axis_1_dist/2, (axis_1_dist+0.1*axis_1_step)/2, axis_1_step):
coord[0] = n_1
row.append(np.array(coord))
if parity and meander:
pts.append(row[::-1])
else:
pts.append(row)
return np.array(pts)
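    # For example (illustrative), _pattern(3, 2, 1.0, 1.0, meander=True)
    # yields a (2, 3, 2) array: two rows of three (axis_1, axis_2)
    # coordinates centred on the origin, with alternate rows reversed so
    # the stage meanders instead of flying back to the start of each row.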
@staticmethod
def plot(pattern, filename='pattern.dat'):
        flat_shape = (np.prod(pattern.shape[:-1]), pattern.shape[-1])
pattern_flat = np.copy(pattern).reshape(flat_shape)
np.savetxt(filename, pattern_flat)
filename_image, _ = os.path.splitext(filename)
filename_image += '.png'
args = {
'filename': filename,
'filename_image': filename_image
}
path = os.path.abspath(__file__)
dir_path = os.path.dirname(path)
gp.Gnuplot(dir_path+'/pattern.gpi', args)
class RectangleXY(Rectangle):
def __init__(self, stage, power_meter, x_pts, y_pts, x_step, y_step,
offset=(0,0), meander=True, origin='c'):
axis_1 = stage.axes['x']
axis_2 = stage.axes['y']
Rectangle.__init__(self, axis_1, axis_2, power_meter,
y_pts, x_pts, y_step, x_step,
offset, meander, origin)
@staticmethod
def _pattern(axis_1_pts, axis_2_pts, axis_1_step, axis_2_step, meander, origin):
pts = Rectangle._pattern(axis_1_pts, axis_2_pts, axis_1_step, axis_2_step, meander)
ref = RectangleXY._set_origin(pts, origin)
pts += ref
return pts
@staticmethod
def _set_origin(pts, origin):
assert origin in ('c', 'lm', 'rm', 'tm', 'bm', 'tl', 'tr', 'bl', 'br')
if origin == 'c':
ref = (0,0)
elif origin == 'tl':
ref = pts[0][-1]
elif origin == 'tr':
ref = pts[-1][-1]
elif origin == 'bl':
ref = pts[0][0]
elif origin == 'br':
ref = pts[-1][0]
elif origin == 'tm':
ref_x = 0
ref_y = pts[0][-1][1]
ref = (ref_x, ref_y)
elif origin == 'bm':
ref_x = 0
ref_y = pts[0][0][1]
ref = (ref_x, ref_y)
elif origin == 'lm':
ref_x = pts[0][-1][0]
ref_y = 0
ref = (ref_x, ref_y)
elif origin == 'rm':
ref_x = pts[-1][0][0]
ref_y = 0
ref = (ref_x, ref_y)
return ref
class Diamond(Rectangle):
def __init__(self, axis_1, axis_2, power_meter,
axis_1_pts, axis_2_pts, axis_1_step, axis_2_step,
offset=(0,0), meander=True, origin='c'):
        Rectangle.__init__(self, axis_1, axis_2, power_meter,
                           axis_1_pts, axis_2_pts, axis_1_step, axis_2_step,
                           offset, meander, origin)
    @staticmethod
    def _pattern(axis_1_pts, axis_2_pts,
                 axis_1_step, axis_2_step,
                 meander, *args):
        # *args swallows the `origin` argument, as in Rectangle._pattern.
pattern = Rectangle._pattern(axis_1_pts, axis_2_pts,
axis_1_step, axis_2_step,
meander)
t = np.array([[np.cos(np.pi/4),-np.sin(np.pi/4)],
[np.sin(np.pi/4),np.cos(np.pi/4)]])
return np.dot(pattern, t)
class Line(Scan):
def __init__(self, axis, power_meter, axis_pts, axis_step, origin='c'):
self.origin = origin
Scan.__init__(self, [axis], power_meter, [], axis_pts, axis_step, origin)
@staticmethod
def _pattern(axis_pts, axis_step, origin='c'):
pts = np.arange(0., axis_pts*axis_step, axis_step)
if origin == 'c':
pts -= pts[-1]/2
elif origin == 'r':
pts -= pts[-1]
elif origin == 'l':
pass
pts = np.array([[p] for p in pts])
return pts
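# For example (illustrative), Line._pattern(5, 1.0, origin='c') yields
# the column of coordinates [[-2.], [-1.], [0.], [1.], [2.]].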
class OptimiseRectZ(ScannerDesign):
def __init__(self, power_meter,
stage,
axis_x_pts, axis_y_pts, axis_z_pts,
axis_x_step, axis_y_step, axis_z_step,
offset=(0,0)):
ScannerDesign.__init__(self)
# Always step backwards in z.
if axis_z_step >= 0:
axis_z_step = -axis_z_step
rect = Rectangle(stage.x, stage.y, power_meter, axis_x_pts,
axis_y_pts, axis_x_step, axis_y_step, offset)
lz = Line(stage.z, power_meter, axis_z_pts, axis_z_step, 'l')
self._add_nested(rect, lz)
class Cross(Scan):
def __init__(self, axis_1, axis_2, power_meter,
axis_1_pts, axis_2_pts, axis_1_step, axis_2_step,
offset=(0,0)):
Scan.__init__(self, [axis_1, axis_2,], power_meter, offset,
axis_1_pts, axis_2_pts, axis_1_step, axis_2_step)
@staticmethod
def _pattern(axis_1_pts, axis_2_pts, axis_1_step, axis_2_step):
l1 = Line._pattern(axis_1_pts, axis_1_step, 'c')
l1 = np.concatenate((l1.T, [np.zeros(l1.size)])).T
l2 = Line._pattern(axis_2_pts, axis_2_step, 'c')
l2 = np.concatenate(([np.zeros(l2.size)], l2.T)).T
pts = np.concatenate((l1, l2))
return pts
@staticmethod
def plot(pattern, filename='pattern.dat'):
return Rectangle.plot(pattern, filename)
class CrossXY(Cross):
def __init__(self, stage, power_meter, axis_x_pts,
axis_y_pts, axis_x_step, axis_y_step,
offset=(0,0)):
        Cross.__init__(self, stage.x, stage.y, power_meter,
axis_x_pts, axis_y_pts,
axis_x_step, axis_y_step,
offset)
class Line2(ScannerDesign):
def __init__(self, axis_1, axis_2, power_meter, axis_1_pts,
axis_2_pts, axis_1_step, axis_2_step):
origin = 'c'
lx = Line(axis_1, power_meter, axis_1_pts, axis_1_step, origin)
ly = Line(axis_2, power_meter, axis_2_pts, axis_2_step, origin)
ScannerDesign.__init__(self)
self._add([ly])
self._add([lx])
class OptimiseLine2XY_Z(ScannerDesign):
def __init__(self, power_meter,
stage,
axis_x_pts, axis_y_pts, axis_z_pts,
axis_x_step, axis_y_step, axis_z_step,
offset=(0,0)):
ScannerDesign.__init__(self)
# Always step backwards in z.
if axis_z_step >= 0:
axis_z_step = -axis_z_step
line2 = Line2(stage.x, stage.y, power_meter,
axis_x_pts, axis_y_pts,
axis_x_step, axis_y_step)
lz = Line(stage.z, power_meter, axis_z_pts, axis_z_step, 'l')
ns = [step[0][0] for step in line2._steps]
self._add_nested_each_max(ns, lz)
class ScanRoutines(object):
def __init__(self, stages, power_meter):
self.inp = stages.input
self.out = stages.output
self.pm = power_meter
def _take_image(self, stage, x_pts, y_pts, x_step_um, y_step_um,
filename=None, goto_max=False, meander=False):
r = RectangleXY(stage, self.pm, x_pts, y_pts, x_step_um, y_step_um, (0,0), meander, 'c')
pos_pows = r.scan(goto_max, filename)
return pos_pows
def take_image_input(self, x_pts, y_pts, x_step_um, y_step_um,
filename='input.dat', goto_max=False):
return self._take_image(self.inp, x_pts, y_pts, x_step_um, y_step_um,
filename, goto_max, False)
def take_image_output(self, x_pts, y_pts, x_step_um, y_step_um,
filename='output.dat', goto_max=False):
return self._take_image(self.out, x_pts, y_pts, x_step_um, y_step_um,
filename, goto_max, False)
def _goto_max_rect(self, stage, x_pts, y_pts, x_step_um, y_step_um):
c = RectangleXY(stage, self.pm, x_pts, y_pts, x_step_um, y_step_um, meander=False)
pos_pows = c.scan(True)
return pos_pows
def goto_max_rect_input(self, x_pts, y_pts, x_step_um, y_step_um):
return self._goto_max_rect(self.inp, x_pts, y_pts, x_step_um, y_step_um)
def goto_max_rect_output(self, x_pts, y_pts, x_step_um, y_step_um):
return self._goto_max_rect(self.out, x_pts, y_pts, x_step_um, y_step_um)
def _goto_max_line2XY(self, stage, x_pts, y_pts, x_step_um, y_step_um):
c = Line2(stage.x, stage.y, self.pm, x_pts, y_pts, x_step_um, y_step_um)
pos_pows = c.scan(True)
return pos_pows
def goto_max_line2XY_input(self, x_pts, y_pts, x_step_um, y_step_um):
return self._goto_max_line2XY(self.inp, x_pts, y_pts, x_step_um, y_step_um)
def goto_max_line2XY_output(self, x_pts, y_pts, x_step_um, y_step_um):
return self._goto_max_line2XY(self.out, x_pts, y_pts, x_step_um, y_step_um)
def _goto_max_line2XY_z(self, stage, x_pts, y_pts, z_pts, x_step_um,
y_step_um, z_step_um):
o = OptimiseLine2XY_Z(self.pm, stage, x_pts, y_pts, z_pts,
x_step_um, y_step_um, z_step_um)
pos_pows = o.scan(True)
return pos_pows
def goto_max_line2XY_z_input(self, x_pts, y_pts, z_pts, x_step_um, y_step_um, z_step_um):
return self._goto_max_line2XY_z(self.inp, x_pts, y_pts, z_pts, x_step_um, y_step_um, z_step_um)
def goto_max_line2XY_z_output(self, x_pts, y_pts, z_pts, x_step_um, y_step_um, z_step_um):
return self._goto_max_line2XY_z(self.out, x_pts, y_pts, z_pts, x_step_um, y_step_um, z_step_um)
def _goto_max_rect_z(self, stage, x_pts, y_pts, z_pts, x_step_um,
y_step_um, z_step_um):
o = OptimiseRectZ(self.pm, stage, x_pts, y_pts, z_pts,
x_step_um, y_step_um, z_step_um)
pos_pows = o.scan(True)
return pos_pows
def goto_max_rect_z_input(self, x_pts, y_pts, z_pts, x_step_um, y_step_um, z_step_um):
return self._goto_max_rect_z(self.inp, x_pts, y_pts, z_pts, x_step_um, y_step_um, z_step_um)
def goto_max_rect_z_output(self, x_pts, y_pts, z_pts, x_step_um, y_step_um, z_step_um):
return self._goto_max_rect_z(self.out, x_pts, y_pts, z_pts, x_step_um, y_step_um, z_step_um)
def _goto_max_cross(self, stage, x_pts, y_pts, x_step_um, y_step_um):
        c = CrossXY(stage, self.pm, x_pts, y_pts, x_step_um, y_step_um)
pos_pows = c.scan(True)
return pos_pows
def goto_max_cross_input(self, x_pts, y_pts, x_step_um, y_step_um):
return self._goto_max_cross(self.inp, x_pts, y_pts, x_step_um, y_step_um)
def goto_max_cross_output(self, x_pts, y_pts, x_step_um, y_step_um):
return self._goto_max_cross(self.out, x_pts, y_pts, x_step_um, y_step_um)
def find_waveguide_rect(self, x_pts=7, y_pts=7, x_step_um=1, y_step_um=1,
offset=(0,-3)):
r_inp = RectangleXY(self.inp, self.pm,
x_pts, y_pts, x_step_um, y_step_um,
offset, True, 'c')
r_out = RectangleXY(self.out, self.pm,
x_pts, y_pts, x_step_um, y_step_um,
offset, True, 'c')
sd = ScannerDesign()
sd._add_nested(r_out, r_inp)
return sd.scan(True)
def find_waveguide_cross(self, x_pts=7, y_pts=7, x_step_um=3, y_step_um=3,
offset=(0,-3)):
r_inp = CrossXY(self.inp, self.pm,
x_pts, y_pts, x_step_um, y_step_um,
offset)
r_out = CrossXY(self.out, self.pm,
x_pts, y_pts, x_step_um, y_step_um,
offset)
sd = ScannerDesign()
sd._add_nested(r_out, r_inp)
return sd.scan(True)
def centre_x_y(self):
self.inp.x.move_abs_um(250)
self.out.x.move_abs_um(250)
self.inp.y.move_abs_um(250)
self.out.y.move_abs_um(250)
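# Typical usage sketch (illustrative; `stages` is assumed to expose
# `.input` and `.output` stage objects, and `power_meter` the
# get_power_W()/get_power_uW() interface used above):
#
#     sr = ScanRoutines(stages, power_meter)
#     sr.take_image_input(50, 50, 1, 1, filename='input.dat')
#     sr.goto_max_rect_z_input(7, 7, 5, 1, 1, 2)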
|
{"hexsha": "0a69baa64a0cfcc69ad52ca3151ec8d0aa9d9951", "size": 27761, "ext": "py", "lang": "Python", "max_stars_repo_path": "drivers/stages/scanner.py", "max_stars_repo_name": "jtambasco/photonic-coupling-drivers", "max_stars_repo_head_hexsha": "9f8e422b1b6e2e5ff783c9146130ed71ee01241d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-11-15T06:58:40.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-10T12:02:14.000Z", "max_issues_repo_path": "drivers/stages/scanner.py", "max_issues_repo_name": "jtambasco/photonic-coupling-drivers", "max_issues_repo_head_hexsha": "9f8e422b1b6e2e5ff783c9146130ed71ee01241d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "drivers/stages/scanner.py", "max_forks_repo_name": "jtambasco/photonic-coupling-drivers", "max_forks_repo_head_hexsha": "9f8e422b1b6e2e5ff783c9146130ed71ee01241d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-05-24T16:26:10.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-07T16:03:46.000Z", "avg_line_length": 37.2630872483, "max_line_length": 104, "alphanum_fraction": 0.5893519686, "include": true, "reason": "import numpy", "num_tokens": 6941}
|
import mdtraj as md
import bqplot as bq
import ipywidgets as w
from traitlets import Integer, Float, link, observe
from traittypes import Array
import nglview as nv
from .ezfigure import EZFigure
import numpy as np
class TrajPlot(w.Box):
"""Class responsible for compositing a traj with a figure"""
frame = Integer()
time = Float()
t_scale = Float(1000)
def __init__(self, traj, figure, *args, **kwargs):
super().__init__(*args, **kwargs)
view = nv.show_mdtraj(traj, **kwargs)
self.traj = traj
self.view = view
self.figure = figure
self.children = [figure, view]
link((self, 'frame'), (self.view, 'frame'))
self.view._remote_call(
"setSize",
target='Widget',
args=['100%', '100%']
)
self.view.layout = w.Layout(max_width="1000px")
self.figure.layout = w.Layout()
self.layout = w.Layout(
display='flex',
flex_flow='row wrap',
align_items='flex-start'
)
@observe('frame')
def frame2time(self, changes):
frame = changes['new']
dt = self.traj.timestep / self.t_scale
time = frame * dt
self.time = float(time)
@observe('time')
def time2frame(self, changes):
time = changes['new']
dt = self.traj.timestep / self.t_scale
frame = time / dt
self.frame = int(frame)
class TrajPlotTime(TrajPlot):
selected = Array(None, allow_none=True)
stride = Integer(1)
def __init__(self, traj, data_y, *args, stride=1, **kwargs):
if not isinstance(traj, md.Trajectory):
raise ValueError('traj must be an MDTraj Trajectory')
unit = {
1e-3: 'fs',
1e0: 'ps',
1e3: 'ns',
1e6: 'us',
1e9: 'ms',
1e12: 's'
}[self.t_scale]
kwargs.setdefault('label_x', f'Time ({unit})')
figure = EZFigure(**kwargs)
data_y = np.asarray(data_y)
if data_y.ndim == 1:
data_y = np.expand_dims(data_y, 0)
if not (data_y.ndim == 2 and data_y.shape[1] == len(traj)):
raise ValueError('traj and data_y should have same lengths')
        # One scatter per data series; note that only the last series
        # created here is linked to the selector and `selected` trait.
        for y_line in data_y:
            scatter = figure.scatter(
                x=traj.time[::stride] / self.t_scale,
                y=y_line[::stride]
            )
line = figure.vertline(self.frame)
selector = bq.interacts.IndexSelector(
line_width=0,
scale=figure.scale_x,
marks=[scatter]
)
figure.interaction = selector
super().__init__(traj, figure, *args, **kwargs)
self.stride = stride
link((self, 'selected'), (scatter, 'selected'))
link((self, 'time'), (line, 'position'))
@observe('selected')
def sele2time(self, change):
new = change['new']
if new is None:
return
else:
new = new[0]
x = self.traj.time[::self.stride][new] / self.t_scale
self.time = float(x)
# tp = TrajPlotTime(traj, props[prop], stride=1000, label_y=prop)
# tp
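# Fuller sketch (illustrative; assumes a loaded MDTraj trajectory and a
# per-frame observable of the same length):
#
#     import mdtraj as md
#     traj = md.load('traj.xtc', top='top.pdb')
#     rmsd = md.rmsd(traj, traj, 0)
#     tp = TrajPlotTime(traj, rmsd, stride=10, label_y='RMSD (nm)')
#     tp  # display the linked figure and 3D view in a notebook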
|
{"hexsha": "58228ba2e7084f6b53721ca49cfe7cae2625eead", "size": 3182, "ext": "py", "lang": "Python", "max_stars_repo_path": "bqploteins/trajplot.py", "max_stars_repo_name": "Yoshanuikabundi/JupyterJoy", "max_stars_repo_head_hexsha": "658f7bb0a33fdc34eb6366fae3b73d0481fd554f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-04-27T22:52:39.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-01T15:29:17.000Z", "max_issues_repo_path": "bqploteins/trajplot.py", "max_issues_repo_name": "Yoshanuikabundi/JupyterJoy", "max_issues_repo_head_hexsha": "658f7bb0a33fdc34eb6366fae3b73d0481fd554f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bqploteins/trajplot.py", "max_forks_repo_name": "Yoshanuikabundi/JupyterJoy", "max_forks_repo_head_hexsha": "658f7bb0a33fdc34eb6366fae3b73d0481fd554f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-04-01T15:29:21.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-01T15:29:21.000Z", "avg_line_length": 26.7394957983, "max_line_length": 72, "alphanum_fraction": 0.5493400377, "include": true, "reason": "import numpy", "num_tokens": 816}
|
#!/usr/bin/env python
import os
import os.path as op
import random
import string
import itertools as it
import textwrap as tw
from unittest import mock
from contextlib import contextmanager
import pytest
import numpy as np
import fsl.utils.settings as fslsettings
from fsl.utils.tempdir import tempdir
import fsleyes.colourmaps as fslcm
############
# Management
############
@contextmanager
def mockAssetDir():
with tempdir(changeto=False) as td:
with mock.patch('fsleyes.assetDir', td):
os.makedirs(op.join(td, 'colourmaps'))
os.makedirs(op.join(td, 'luts'))
        yield td
@contextmanager
def mockSettings():
with tempdir() as td:
fakesettings = fslsettings.Settings('fsleyes',
cfgdir=td,
writeOnExit=False)
os.makedirs(op.join(td, 'colourmaps'))
os.makedirs(op.join(td, 'luts'))
with fslsettings.use(fakesettings):
yield td
@contextmanager
def mockCmaps():
cmap = tw.dedent("""
0.3 0.4 0.5
0.6 0.7 0.8
""").strip()
lut = tw.dedent("""
1 0.3 0.4 0.5 label 1
2 0.6 0.7 0.8 label 2
""").strip()
with mockSettings() as sdir, \
mockAssetDir() as assetDir:
cmap1 = op.join(assetDir, 'colourmaps', 'cmap1.cmap')
cmap2 = op.join(sdir, 'colourmaps', 'cmap2.cmap')
lut1 = op.join(assetDir, 'luts', 'lut1.lut')
lut2 = op.join(sdir, 'luts', 'lut2.lut')
with open(cmap1, 'wt') as f: f.write(cmap)
with open(cmap2, 'wt') as f: f.write(cmap)
with open(lut1, 'wt') as f: f.write(lut)
with open(lut2, 'wt') as f: f.write(lut)
yield (assetDir, sdir)
def clearCmaps(func):
def regcmap(*a, **kwa):
pass
mo = mock.MagicMock()
def wrapper(*args, **kwargs):
with mock.patch('matplotlib.cm.register_cmap', regcmap), \
mock.patch('fsleyes.displaycontext.VolumeOpts', mo), \
mock.patch('fsleyes.displaycontext.VectorOpts', mo), \
mock.patch('fsleyes.displaycontext.MeshOpts', mo), \
mock.patch('fsleyes.displaycontext.LabelOpts', mo):
cmaps = fslcm._cmaps
luts = fslcm._luts
fslcm._cmaps = None
fslcm._luts = None
try:
func(*args, **kwargs)
finally:
fslcm._cmaps = cmaps
fslcm._luts = luts
return wrapper
def test_validMapKey():
for i in range(100):
instr = random.choice(string.ascii_letters) + \
''.join([random.choice(string.printable) for i in range(50)])
key = fslcm.makeValidMapKey(instr)
assert fslcm.isValidMapKey(key)
@clearCmaps
def test_scanDirs():
with mockSettings() as sdir, mockAssetDir() as assetDir:
os.mkdir(op.join(assetDir, 'colourmaps', 'sub'))
os.mkdir(op.join(assetDir, 'luts', 'sub'))
# builtins
bifiles = [op.join('luts', 'lut1_builtin.lut'),
op.join('luts', 'sub', 'lut2_builtin.lut'),
op.join('colourmaps', 'cmap1_builtin.cmap'),
op.join('colourmaps', 'sub', 'cmap2_builtin.cmap')]
# user added
uafiles = [op.join('luts', 'lut1_added.lut'),
op.join('luts', 'lut2_added.lut'),
op.join('colourmaps', 'cmap1_added.cmap'),
op.join('colourmaps', 'cmap2_added.cmap')]
for f in bifiles:
with open(op.join(assetDir, f), 'wt'):
pass
for f in uafiles:
with open(op.join(sdir, f), 'wt'):
pass
assert fslcm.getCmapDir() == op.join(assetDir, 'colourmaps')
assert fslcm.getLutDir() == op.join(assetDir, 'luts')
cmapsbuiltin = ['cmap1_builtin', 'sub_cmap2_builtin']
lutsbuiltin = ['lut1_builtin', 'sub_lut2_builtin']
cmapsadded = ['cmap1_added', 'cmap2_added']
lutsadded = ['lut1_added', 'lut2_added']
assert fslcm.scanBuiltInCmaps() == cmapsbuiltin
assert fslcm.scanBuiltInLuts() == lutsbuiltin
assert fslcm.scanUserAddedCmaps() == cmapsadded
assert fslcm.scanUserAddedLuts() == lutsadded
assert fslcm.scanColourMaps() == cmapsbuiltin + cmapsadded
assert fslcm.scanLookupTables() == lutsbuiltin + lutsadded
@clearCmaps
def test_init():
with mockCmaps() as (assetDir, sdir):
fslcm.init()
cmap1 = op.join(assetDir, 'colourmaps', 'cmap1.cmap')
cmap2 = op.join(sdir, 'colourmaps', 'cmap2.cmap')
lut1 = op.join(assetDir, 'luts', 'lut1.lut')
lut2 = op.join(sdir, 'luts', 'lut2.lut')
assert fslcm.getColourMaps() == ['cmap1', 'cmap2']
assert fslcm.getColourMapLabel( 'cmap1') == 'cmap1'
assert fslcm.getColourMapLabel( 'cmap2') == 'cmap2'
assert fslcm.getColourMapFile( 'cmap1') == cmap1
assert fslcm.getColourMapFile( 'cmap2') == cmap2
assert fslcm.getColourMapKey(cmap1) == 'cmap1'
assert fslcm.getColourMapKey(cmap2) == 'cmap2'
assert fslcm.getLookupTableFile('lut1') == lut1
assert fslcm.getLookupTableFile('lut2') == lut2
assert fslcm.getLookupTableKey(lut1) == 'lut1'
assert fslcm.getLookupTableKey(lut2) == 'lut2'
assert fslcm.isColourMapInstalled( 'cmap1')
assert fslcm.isColourMapInstalled( 'cmap2')
assert fslcm.isColourMapRegistered( 'cmap1')
assert fslcm.isColourMapRegistered( 'cmap2')
assert not fslcm.isColourMapRegistered( 'lut1')
assert fslcm.isColourMapRegistered(filename=cmap1)
assert fslcm.isColourMapRegistered(filename=cmap2)
assert fslcm.isColourMapRegistered(filename=cmap1)
assert fslcm.isColourMapRegistered(filename=cmap2)
assert not fslcm.isColourMapRegistered(filename=lut1)
assert fslcm.isLookupTableInstalled( 'lut1')
assert fslcm.isLookupTableInstalled( 'lut2')
assert fslcm.isLookupTableRegistered('lut1')
assert fslcm.isLookupTableRegistered('lut2')
assert not fslcm.isLookupTableRegistered('cmap1')
assert fslcm.isLookupTableRegistered(filename=lut1)
assert fslcm.isLookupTableRegistered(filename=lut2)
assert not fslcm.isLookupTableRegistered(filename=cmap1)
luts = fslcm.getLookupTables()
assert len(luts) == 2
assert luts[0].key == 'lut1'
assert luts[1].key == 'lut2'
@clearCmaps
def test_register():
cmap = tw.dedent("""
0 0 0
0 0 1
0 1 1
1 1 1
""").strip()
lut = tw.dedent("""
1 0 0 0 label 1
2 0 0 1 label 2
3 0 1 1 label 3
4 1 1 1 label 4
""").strip()
with mockCmaps() as (assetDir, sdir):
fslcm.init()
with open('cmap.txt', 'wt') as f: f.write(cmap)
with open('lut.txt', 'wt') as f: f.write(lut)
assert not fslcm.isColourMapRegistered('mycmap')
fslcm.registerColourMap('cmap.txt', key='mycmap', name='My cmap')
fslcm.getColourMap('mycmap')
assert fslcm.isColourMapRegistered('mycmap')
assert not fslcm.isColourMapInstalled( 'mycmap')
assert fslcm.getColourMapLabel('mycmap') == 'My cmap'
fslcm.installColourMap('mycmap')
assert fslcm.isColourMapInstalled( 'mycmap')
assert not fslcm.isLookupTableRegistered('mylut')
fslcm.registerLookupTable('lut.txt', key='mylut', name='My lut')
assert fslcm.isLookupTableRegistered('mylut')
assert not fslcm.isLookupTableInstalled( 'mylut')
assert fslcm.getLookupTable('mylut').name == 'My lut'
fslcm.installLookupTable('mylut')
assert fslcm.isLookupTableInstalled( 'mylut')
##########
# File I/O
##########
def test_fileType():
cmap = tw.dedent("""
0.5 0.7 0.1
0.5 0.7 0.1
""").strip()
lut = tw.dedent("""
1 0.5 0.7 0.1 label
2 0.5 0.7 0.1 label
""").strip()
vest = tw.dedent("""
%!VEST-LUT
<-color{0.000000,0.000000,0.000000}->
<-color{0.010000,0.010000,0.010000}->
""").strip()
bad = tw.dedent("""
this is not a colour map file
""").strip()
with tempdir():
with open('cmap.txt', 'wt') as f: f.write(cmap)
with open('lut.txt', 'wt') as f: f.write(lut)
with open('vest.txt', 'wt') as f: f.write(vest)
with open('bad.txt', 'wt') as f: f.write(bad)
assert fslcm.fileType('cmap.txt') == 'cmap'
assert fslcm.fileType('lut.txt') == 'lut'
assert fslcm.fileType('vest.txt') == 'vest'
with pytest.raises(ValueError):
fslcm.fileType('bad.txt')
def test_loadColourMapFile():
cmap = tw.dedent("""
0.0 0.5 1.0
0.3 0.4 0.5
0.5 0.6 0.7
""").strip()
vest = tw.dedent("""
%!VEST-LUT
<-color{0.0,0.5,1.0}->
<-color{0.3,0.4,0.5}->
<-color{0.5,0.6,0.7}->
""").strip()
exp = np.array([[0.0, 0.5, 1.0],
[0.3, 0.4, 0.5],
[0.5, 0.6, 0.7]])
explut = np.hstack((np.arange(1, 4).reshape(-1, 1), exp))
with tempdir():
with open('cmap.txt', 'wt') as f: f.write(cmap)
with open('vest.txt', 'wt') as f: f.write(vest)
gotcmap = fslcm.loadColourMapFile('cmap.txt')
gotvest = fslcm.loadColourMapFile('vest.txt')
gotcmaplut = fslcm.loadColourMapFile('cmap.txt', aslut=True)
assert np.all(np.isclose(gotcmap, exp))
assert np.all(np.isclose(gotvest, exp))
assert np.all(np.isclose(gotcmaplut, explut))
def test_loadLookupTableFile():
# Test file without names
lut = tw.dedent("""
1 0.0 0.5 1.0 label 1
4 0.3 0.4 0.5 label 4
7 0.5 0.6 0.7 label 7
""").strip()
lutnoname = tw.dedent("""
1 0.0 0.5 1.0
4 0.3 0.4 0.5
7 0.5 0.6 0.7
""").strip()
cmap = tw.dedent("""
0.0 0.5 1.0
0.3 0.4 0.5
0.5 0.6 0.7
""").strip()
exp = np.array([[1, 0.0, 0.5, 1.0],
[4, 0.3, 0.4, 0.5],
[7, 0.5, 0.6, 0.7]])
expcmap = np.array([[1, 0.0, 0.5, 1.0],
[2, 0.3, 0.4, 0.5],
[3, 0.5, 0.6, 0.7]])
with tempdir():
with open('lut.txt', 'wt') as f: f.write(lut)
with open('lutnoname.txt', 'wt') as f: f.write(lutnoname)
with open('cmap.txt', 'wt') as f: f.write(cmap)
gotlut = fslcm.loadLookupTableFile('lut.txt')
gotlutnoname = fslcm.loadLookupTableFile('lutnoname.txt')
gotcmap = fslcm.loadLookupTableFile('cmap.txt')
assert np.all(np.isclose(gotlut[ 0], exp))
assert np.all(np.isclose(gotlutnoname[0], exp))
assert np.all(np.isclose(gotcmap[ 0], expcmap))
assert gotlut[ 1] == ['label 1', 'label 4', 'label 7']
assert gotlutnoname[1] == ['1', '4', '7']
assert gotcmap[ 1] == ['1', '2', '3']
###############
# Miscellaneous
###############
def test_briconToScaleOffset():
assert fslcm.briconToScaleOffset(0.5, 0.5, 100) == (1, 0)
assert fslcm.briconToScaleOffset(0.25, 0.5, 100) == (1, -50)
assert fslcm.briconToScaleOffset(0.75, 0.5, 100) == (1, 50)
def test_briconToDisplayRange():
tests = list(it.product(np.linspace(0, 1, 5),
np.linspace(0, 1, 5)))
# bricon of 0.5/0.5 should result in a
# display range equal to the data range
assert fslcm.briconToDisplayRange((0, 100), 0.5, 0.5) == (0, 100)
for inbri, incon in tests:
dmin, dmax = fslcm.briconToDisplayRange((0, 100), inbri, incon)
outbri, outcon = fslcm.displayRangeToBricon((0, 100), (dmin, dmax))
assert np.all(np.isclose((inbri, incon), (outbri, outcon)))
def test_applyBricon():
rgb = np.random.random((10, 3))
rgba = np.random.random((10, 4))
# bricon of 0.5/0.5 should have no effect
assert np.all(np.isclose(rgb, fslcm.applyBricon(rgb, 0.5, 0.5)))
assert np.all(np.isclose(rgba, fslcm.applyBricon(rgba, 0.5, 0.5)))
# we should be able to pass in a single
# colour
onergb = [0.3, 0.4, 0.5]
onergba = [0.3, 0.4, 0.5, 0.6]
assert np.all(np.isclose(onergb, fslcm.applyBricon(onergb, 0.5, 0.5)))
assert np.all(np.isclose(onergba, fslcm.applyBricon(onergba, 0.5, 0.5)))
def test_randomX():
c1 = fslcm.randomColour()
c2 = fslcm.randomBrightColour()
c3 = fslcm.randomDarkColour()
for c in [c1, c2, c3]:
assert c.shape == (3,)
assert np.all((c >= 0) & (c <= 1))
def test_complementaryColour():
rgb = [0.3, 0.4, 0.5]
rgba = [0.3, 0.4, 0.5, 0.6]
crgb = fslcm.complementaryColour(rgb)
crgba = fslcm.complementaryColour(rgba)
assert len(crgb) == 3
assert len(crgba) == 4
assert crgba[3] == 0.6
def test_LookupTable():
lut = tw.dedent("""
1 0 0 0 Label 1
2 0 0 1 Label 2
3 0 1 1 Label 3
4 1 1 1 Label 4
""").strip()
colours = [(0, 0, 0, 1),
(0, 0, 1, 1),
(0, 1, 1, 1),
(1, 1, 1, 1)]
with tempdir():
with open('lut.txt', 'wt') as f:
f.write(lut)
lut = fslcm.LookupTable('mylut', 'My LUT', 'lut.txt')
assert lut.key == 'mylut'
assert lut.name == 'My LUT'
assert str( lut) == 'My LUT'
assert repr(lut) == 'My LUT'
assert len(lut) == 4
for i in range(3):
assert lut[i].value == i + 1
assert lut.index(i + 1) == i
assert lut.max() == 4
assert lut.saved
for i in range(4):
val = i + 1
lbl = lut.get(val)
name = 'Label {}'.format(val)
assert lbl.value == val
assert lbl.name == name
assert lbl.internalName == name.lower()
assert tuple(lbl.colour) == colours[i]
assert lut.getByName(name) == lbl
assert list(lut.labels())[i] == lbl
repr(lbl)
hash(lbl)
called = {}
def removed(lt, top, args):
called['removed'] = args
def added(lt, top, args):
called['added'] = args
def saved(lt, top, args):
called['saved'] = True
def label(lt, top, args):
called['label'] = args
lut.register('l1', removed, topic='removed')
lut.register('l2', added, topic='added')
lut.register('l3', saved, topic='saved')
lut.register('l4', label, topic='label')
lbl0 = list(lut.labels())[0]
lbl0.name = 'My Label 1!'
assert called['saved']
assert called['label'] == (lbl0, 0)
assert not lut.saved
called.pop('saved')
lut.save('newfile.lut')
assert called['saved']
assert lut.saved
called.pop('saved')
lut.delete(4)
assert len(lut) == 3
assert lut.max() == 3
assert not lut.saved
assert called['saved']
called.pop('saved')
lut.save('newfile.lut')
assert lut.saved
lbl = lut.new('New big label')
assert lbl.value == 4
assert lut.max() == 4
assert len(lut) == 4
assert not lut.saved
assert called['added'] == (lbl, 3)
called.pop('saved')
lut.save('newfile.lut')
assert lut.saved
lbl = lut.insert(7, name='New huge label')
assert lbl.value == 7
assert lut.max() == 7
assert len(lut) == 5
assert not lut.saved
assert called['added'] == (lbl, 4)
|
{"hexsha": "efc81a56fa373907183531342fdfbffcebf79bd3", "size": 15998, "ext": "py", "lang": "Python", "max_stars_repo_path": "fsleyes/tests/test_colourmaps.py", "max_stars_repo_name": "pauldmccarthy/fsleyes", "max_stars_repo_head_hexsha": "453a6b91ec7763c39195814d635257e3766acf83", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2018-05-05T01:36:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-23T20:44:08.000Z", "max_issues_repo_path": "fsleyes/tests/test_colourmaps.py", "max_issues_repo_name": "pauldmccarthy/fsleyes", "max_issues_repo_head_hexsha": "453a6b91ec7763c39195814d635257e3766acf83", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 97, "max_issues_repo_issues_event_min_datetime": "2018-05-05T02:17:23.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-29T14:58:42.000Z", "max_forks_repo_path": "fsleyes/tests/test_colourmaps.py", "max_forks_repo_name": "pauldmccarthy/fsleyes", "max_forks_repo_head_hexsha": "453a6b91ec7763c39195814d635257e3766acf83", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2017-12-09T09:02:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-05T18:55:13.000Z", "avg_line_length": 30.4144486692, "max_line_length": 76, "alphanum_fraction": 0.5463807976, "include": true, "reason": "import numpy", "num_tokens": 5079}
|
from nate.svonet.graph_svo import generate_ticks, find_max_burst
import networkx as nx
import stop_words as sw
import copy
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.ticker import MaxNLocator
import numpy as np
class DegreeOverTimeMixIn():
def __init__(self):
self.offset_dict: dict
self.edge_burst_dict: dict
self.s: int
self.gamma: int
self.from_svo: bool
self.lookup: dict
def top_degree(self,
number_of_slices: int = 8,
list_top: int = 10,
minimum_burst_level: int = 0,
degree_type="both",
remove_stop_words=True):
"""[summary]
Args:
number_of_slices (int, optional): [description]. Defaults to 20.
list_top (int, optional): [description]. Defaults to 10.
degree_type (str, optional): Type of degree calculation to use.
Must be one of "in", "out", or "both". Defaults to "both".
Returns:
[type]: [description]
"""
        if degree_type not in ("in", "out", "both"):
            raise Exception(
                "`degree_type` must be one of 'in', 'out', or 'both'")
# Create list of time slices:
offset_set = set()
for key in self.offset_dict:
for offset in self.offset_dict[key]:
offset_set.add(offset)
time_slices, time_labels = generate_ticks(
offset_set, number_of_ticks=(number_of_slices))
# Create network consisting of all Subjects and Objects:
G = nx.DiGraph()
for entry in self.edge_burst_dict:
G.add_node(entry[0])
G.add_node(entry[-1])
# Iterate over time slices
top_degree_by_slice = {}
for i in range(1, len(time_slices)):
graphCopy = copy.deepcopy(G)
for key in self.edge_burst_dict:
burst_level = find_max_burst(self.edge_burst_dict[key],
time_slices[i - 1], time_slices[i])
if burst_level > minimum_burst_level:
graphCopy.add_edge(key[0], key[-1])
if degree_type == "in":
degree_list = list(graphCopy.in_degree)
elif degree_type == "out":
degree_list = list(graphCopy.out_degree)
elif degree_type == "both":
degree_list = list(graphCopy.degree)
degree_list.sort(key=lambda x: x[1], reverse=True)
if remove_stop_words:
stops = sw.get_stop_words("english")
degree_list = [
item for item in degree_list if item[0] not in stops
]
top_degree_by_slice[time_labels[i]] = degree_list[0:list_top]
return top_degree_by_slice
def specific_degree(self,
tokens: list,
number_of_slices: int = 15,
minimum_burst_level: int = 0,
degree_type="both",
remove_stop_words=False):
"""[summary]
Args:
tokens (list): [description]
number_of_slices (int, optional): [description]. Defaults to 20.
minimum_burst_level (int, optional): [description]. Defaults to 0.
degree_type (str, optional): [description]. Defaults to "both".
remove_stop_words (bool, optional): [description]. Defaults to False.
Returns:
[type]: [description]
"""
        if not isinstance(tokens, list):
            tokens = [tokens]
full_lists = self.top_degree(number_of_slices=number_of_slices,
list_top=None,
minimum_burst_level=minimum_burst_level,
degree_type=degree_type,
remove_stop_words=remove_stop_words)
token_rank_dict = {}
for day in full_lists:
v = [item for item in full_lists[day] if item[0] in tokens]
token_rank_dict[day] = v
return token_rank_dict
def plot_top_degree(self,
number_of_slices: int = 8,
list_top: int = 10,
minimum_burst_level: int = 0,
degree_type="both",
remove_stop_words=True,
filename: str = False,
):
"""[summary]
Args:
number_of_slices (int, optional): [description]. Defaults to 20.
list_top (int, optional): [description]. Defaults to 10.
minimum_burst_level (int, optional): [description]. Defaults to 0.
degree_type (str, optional): [description]. Defaults to "both".
remove_stop_words (bool, optional): [description]. Defaults to True.
"""
data = self.top_degree(number_of_slices=number_of_slices,
list_top=list_top,
minimum_burst_level=minimum_burst_level,
degree_type=degree_type,
remove_stop_words=remove_stop_words)
date_names = []
time_slices = []
for k, v in data.items():
date_names.append(k)
time_slices.append(v)
for i in range(1, len(date_names)):
x = np.arange(list_top)
values = []
names = []
for top_degrees in time_slices[i]:
values.append(top_degrees[1])
names.append(top_degrees[0])
values.reverse()
names.reverse()
if np.sum(values) > 0:
fig, ax = plt.subplots()
fig.set_figwidth(6)
fig.set_figheight(10)
fig.suptitle('{} to {}'.format(date_names[i - 1],
date_names[i]),
fontsize=12, ha="center")
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.barh(x, values, color='#32363A')
plt.yticks(x, names)
if filename:
plt.savefig(str(filename) + str(i) + ".pdf")
else:
plt.show()
else:
print("No nodes with degree > 0 in this time slice.")
def plot_specific_degree(self,
tokens: list,
number_of_slices: int = 15,
minimum_burst_level: int = 0,
degree_type="both",
plot_type="line",
remove_stop_words=False,
filename: str = False,):
"""[summary]
Args:
tokens (list): [description]
number_of_slices (int, optional): [description]. Defaults to 20.
minimum_burst_level (int, optional): [description]. Defaults to 0.
degree_type (str, optional): [description]. Defaults to "both".
plot_type (str, optional): [description]. Defaults to "line".
remove_stop_words (bool, optional): [description]. Defaults to False.
Raises:
Exception: [description]
"""
        if not isinstance(tokens, list):
            tokens = [tokens]
        if plot_type not in ("line", "bar"):
            raise Exception("`plot_type` must be one of 'line' or 'bar'")
data = self.specific_degree(tokens=tokens,
number_of_slices=number_of_slices,
minimum_burst_level=minimum_burst_level,
degree_type=degree_type,
remove_stop_words=remove_stop_words)
inverted_dict = {}
for token in tokens:
full_list = []
for date, degree_list in data.items():
                degree = [item[1] for item in degree_list if item[0] == token]
                # Default to 0 if the token is absent from this slice.
                full_list.append((date, degree[0] if degree else 0))
inverted_dict[token] = full_list
x = np.arange(number_of_slices)
for k, v in inverted_dict.items():
values = [item[1] for item in v]
dates = [item[0].replace(", ", "\n") for item in v]
fig, ax = plt.subplots()
fig.set_figwidth(10)
fig.set_figheight(6)
fig.suptitle("'{}'".format(k), fontsize=12, ha="center")
ax.yaxis.set_major_locator(MaxNLocator(integer=True))
if plot_type == "bar":
plt.bar(x, values, color='#32363A')
elif plot_type == "line":
plt.plot(x, values, color='#32363A')
plt.xticks(x, dates)
if filename:
plt.savefig(str(filename) + str(k) + ".pdf")
else:
plt.show()
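# Minimal usage sketch (illustrative): this mixin expects the host class
# to provide `offset_dict` and `edge_burst_dict` (see
# nate.svonet.graph_svo). Given such an object, here hypothetically
# named `svo_burst`, one could do:
#
#     top = svo_burst.top_degree(number_of_slices=8, list_top=10)
#     svo_burst.plot_specific_degree(['police'], plot_type='bar')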
|
{"hexsha": "4f0424e98bb55daba9fb9d1ae04da58bd62bcec5", "size": 9258, "ext": "py", "lang": "Python", "max_stars_repo_path": "nate/svonet/degree_over_time.py", "max_stars_repo_name": "UWNETLAB/nelanna", "max_stars_repo_head_hexsha": "9029670d5804f478cac2e83d77ff86ff2a7266c2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-02-09T15:39:49.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T14:35:08.000Z", "max_issues_repo_path": "nate/svonet/degree_over_time.py", "max_issues_repo_name": "UWNETLAB/nlpnet", "max_issues_repo_head_hexsha": "9029670d5804f478cac2e83d77ff86ff2a7266c2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2020-03-13T18:46:46.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-02T18:58:57.000Z", "max_forks_repo_path": "nate/svonet/degree_over_time.py", "max_forks_repo_name": "UWNETLAB/nlpnet", "max_forks_repo_head_hexsha": "9029670d5804f478cac2e83d77ff86ff2a7266c2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-05T19:08:50.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-05T19:08:50.000Z", "avg_line_length": 35.0681818182, "max_line_length": 82, "alphanum_fraction": 0.5114495571, "include": true, "reason": "import numpy,import networkx", "num_tokens": 1851}
|
import scipy.io
import numpy as np
import pickle
image_size = 128
num_labels = 3
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
#labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
with open('/home/master/Desktop/weapons_complex/ATR_FNN/final_dataset.pickle', 'rb') as f:
    final_dataset = pickle.load(f)
train_dataset, train_labels = reformat(final_dataset['train_dataset'], final_dataset['train_labels'])
valid_dataset, valid_labels = reformat(final_dataset['valid_dataset'], final_dataset['valid_labels'])
test_dataset, test_labels = reformat(final_dataset['test_dataset'], final_dataset['test_labels'])
scipy.io.savemat('/home/master/Desktop/GIT/ATR-FNN/final_dataset.mat',
mdict={'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels
})
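# Sanity check (illustrative): the .mat file can be read back with
# scipy.io.loadmat, e.g.
#
#     mat = scipy.io.loadmat('/home/master/Desktop/GIT/ATR-FNN/final_dataset.mat')
#     assert mat['train_dataset'].shape[1] == image_size * image_size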
|
{"hexsha": "62c042cee033da2163c8137f7be394084ba784d8", "size": 1206, "ext": "py", "lang": "Python", "max_stars_repo_path": "pickle_2_mat.py", "max_stars_repo_name": "krantirk/ATR-FNN", "max_stars_repo_head_hexsha": "c69ca9e711e3fe0eb6a77b7b9ef257b5b83934d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2017-10-10T19:13:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-08T09:50:51.000Z", "max_issues_repo_path": "pickle_2_mat.py", "max_issues_repo_name": "Jagannathrk2020/ATR-FNN", "max_issues_repo_head_hexsha": "c69ca9e711e3fe0eb6a77b7b9ef257b5b83934d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2019-12-16T22:05:34.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-25T15:41:41.000Z", "max_forks_repo_path": "pickle_2_mat.py", "max_forks_repo_name": "Jagannathrk2020/ATR-FNN", "max_forks_repo_head_hexsha": "c69ca9e711e3fe0eb6a77b7b9ef257b5b83934d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2019-03-11T09:25:59.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-27T09:14:18.000Z", "avg_line_length": 35.4705882353, "max_line_length": 101, "alphanum_fraction": 0.6500829187, "include": true, "reason": "import numpy,import scipy", "num_tokens": 283}
|
\chapter{The \lhcb\ experiment}
\label{chap:intro:lhcb}
The \lhcb\ detector~\cite{Alves:2008zz,Aaij:2014jba} is situated at point 8 of
the \ac{LHC}.
The general goal of the experiment is to explore the area of heavy flavour
physics, that is the interactions of charm and beauty quarks and the hadrons
that contain them.
In particular, \lhcb\ aims to make, and already has made some of, the most
precise measurements of heavy flavour properties to date, as well as to
discover decays and particles that were not observed by previous experiments.
Heavy flavour has a rich phenomenology: neutral particles can oscillate between
their matter and antimatter states, such as $\PBds \leftrightarrow \APBds$ and
$\PDzero \leftrightarrow \APDzero$~\cite{Abulencia:2006ze,Aaij:2012nva}; bound
states of four or five quarks can form~\cite{Aaij:2014jqa,Aaij:2015tga}; they
exhibit distinct matter-antimatter
asymmetries~\cite{Aubert:2001nu,Abe:2001xe,Aaij:2012kz,Aaij:2013iua,Aaij:2016cla};
and their rare decays can be precisely predicted by the
\ac{SM}~\cite{CMS:2014xfa,Aaij:2015oid}, as such being very sensitive to
contributions from new theories.
With this, the \lhcb\ detector must be flexible enough to accommodate a wide
physics programme, yet powerful enough to discriminate against the large
backgrounds characteristic of a \pp\ collision environment.
This \lcnamecref{chap:intro:lhcb} will link the properties of heavy flavour
decays with the requirements imposed on the detector.
At the \ac{LHC}, heavy flavour quarks are primarily produced through
gluon-gluon fusion, illustrated in
\cref{fig:intro:lhcb:hf_production:gg_fusion}.
With this mechanism, \bbbar\ and \ccbar\ pairs are predominantly produced in
the forward region, at low values of $\theta$, as shown in
\cref{fig:intro:lhcb:hf_production:bbbar_angles}, where the polar angle
$\theta$ is defined as the angle made with the beamline.
To exploit this, the \lhcb\ detector is instrumented in the pseudorapidity
region $2 < \Eta < 5$, where
\begin{equation}
\Eta = -\ln{\tan{\frac{\theta}{2}}}.
\end{equation}
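For reference, inverting this relation gives $\theta =
2\arctan{e^{-\Eta}}$, so the acceptance spans polar angles from roughly
$\ang{15}$ at $\Eta = 2$ down to $\ang{0.8}$ at $\Eta = 5$.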
Particles produced in this region are highly boosted in the laboratory frame,
and so heavy flavour hadrons can fly several millimetres before decaying, given
that their lifetimes are of the order of
$0.1$--$\SI{1}{\pico\second}$~\cite{PDG2014}.
With a sufficiently sensitive detector, their decay vertices can be spatially
distinguished from the primary proton-proton interaction vertices, providing a
powerful discriminant between signal and background.
Good secondary vertex resolution also allows for good decay time resolution,
which is necessary for measuring the fast oscillations of \PBds\ and \PDzero
mesons, and for measuring any possible decay time asymmetries.
In addition, the large displacement of the decay vertices with respect to the
\ac{PV} causes the particles produced at the secondary vertex to have a high
\ac{IP}, defined in \cref{fig:intro:lhcb:vertexing} as the smallest distance
from the particle trajectory to the \ac{PV}.
A sufficient experimental resolution on the \ac{IP} allows for additional
discrimination between random tracks in the event and those from heavy flavour
decays.
The precise reconstruction of primary and secondary vertices requires a precise
tracking system, particularly so around the \pp\ interaction region, and hence
the detector design is highly optimised for tracking performance.
The tracking system will be described in
\cref{chap:intro:lhcb:detector:tracking}.
The properties of heavy flavour hadrons, such as their masses and lifetimes,
can be inferred by reconstructing the four-momenta of their decay products, and
so most analyses at \lhcb\ aim to fully reconstruct the decays of beauty and
charm hadrons, such as $\decay{\PBz}{\PKplus\PKminus}$,
$\decay{\PBs}{\PDspm\PKmp}$ with $\decay{\PDsplus}{\PKminus\PKplus\Ppiplus}$,
$\decay{\PLambdab}{\PJpsi\Pproton\PKminus}$ with
$\decay{\PJpsi}{\Pmuon\APmuon}$, and $\decay{\PDz}{\PKpm\Ppimp}$.
It is important to note that certain final states offer higher sensitivities to
physics observables than others, and so it is crucial for \lhcb\ to be able to
distinguish between different particle species.
For example, the decay $\decay{\PDz}{\PKplus\Ppiminus}$ is of the order of one
thousand times less likely than the decay $\decay{\PDz}{\PKminus\Ppiplus}$, and
so a poor \ac{PID} performance would result in a search for the
$\PKplus\Ppiminus$ final state being dominated by $\PKminus\Ppiplus$
`background'.
In general, misidentification cannot be excluded completely, and so in addition
good mass resolution is required to be able to distinguish between different
final states in the parent mass spectrum.
An example of the power of, and necessity for, good particle identification and
momentum resolution is shown in \cref{fig:intro:lhcb:pid_power}.
Here, the signal decay is \decay{\PBzero}{\pippim}, and the distributions are
the two-body invariant mass before \ac{PID} requirements are made on the
tracks~(\ref{fig:intro:lhcb:pid_power:pre}) and
after~(\ref{fig:intro:lhcb:pid_power:post}).
As can be seen, before the requirements the distribution is dominated by other
two-body \PBzero, \PBs, and \PLambdab\ decays, whereas after the selection the
backgrounds are almost entirely removed.
The presence of such backgrounds significantly increases the complexity of an
analysis and can reduce the precision of the measurement.
In the following, the construction of the detector will be described, as
motivated by the physics goals of the collaboration.
\begin{figure}
\begin{subfigure}[b]{0.4\textwidth}
\centering
\input{figures/introduction/gluon_gluon_fusion}
\caption{Quark pair production}
\label{fig:intro:lhcb:hf_production:gg_fusion}
\end{subfigure}
\begin{subfigure}[b]{0.6\textwidth}
\input{figures/introduction/bbbar_production_angles}
\caption{$\Pbottom\APbottom$ production distribution}
\label{fig:intro:lhcb:hf_production:bbbar_angles}
\end{subfigure}
\caption{%
Feynman diagram of quark pair production via gluon-gluon fusion
(\subref*{fig:intro:lhcb:hf_production:gg_fusion}), and a simulation of the
angular distribution of \bbbar\ production at the \ac{LHC} at $\sqrt{s} =
\SI{13}{\TeV}$ (\subref*{fig:intro:lhcb:hf_production:bbbar_angles}).
}
\label{fig:intro:lhcb:hf_production}
\end{figure}
\begin{figure}
\centering
\input{figures/introduction/vertexing}
\caption{%
Illustration of vertex reconstruction, showing a \PDz meson decaying in
flight to a kaon and a pion.
The kaon and pion are reconstructed as tracks, and then the \PDzero decay
vertex is inferred from the point of closest approach of the two tracks.
The minimum transverse distance the tracks make when extrapolated back
towards the primary proton-proton vertex, the \acf{IP}, is shown.
}
\label{fig:intro:lhcb:vertexing}
\end{figure}
\begin{figure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{introduction/B2pipi_pre_pid}
\caption{Before \ac{PID} requirements}
\label{fig:intro:lhcb:pid_power:pre}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{introduction/B2pipi_post_pid}
\caption{After \ac{PID} requirements}
\label{fig:intro:lhcb:pid_power:post}
\end{subfigure}
\caption{%
Two-body invariant mass spectrum, under the $\pimpip$ hypothesis,
before~(\subref*{fig:intro:lhcb:pid_power:pre}) and
after~(\subref*{fig:intro:lhcb:pid_power:post}) \ac{PID}
requirements~\cite{Aaij:2012as}.
The contribution from the signal decay is hatched with horizontal lines.
}
\label{fig:intro:lhcb:pid_power}
\end{figure}
\section{The \lhcb\ Detector}
\label{chap:intro:lhcb:detector}
Within the \lhcb\ experiment, the Cartesian co-ordinate system is defined such
that the $z$-axis is aligned along the beam direction, increasing in the
clockwise direction along the \ac{LHC} ring; the $y$-axis points vertically
upwards; and the $x$-axis, defined by $\hat{x} = \hat{y} \times \hat{z}$, then
points away from the centre of the accelerator.
The polar co-ordinate system is defined with the polar angle $\theta$ measured
with respect to the $z$-axis, and the azimuthal angle $\phi$ measured in the
$xy$ plane.
A schematic of the detector is given in \cref{fig:intro:lhcb:detector}.
Its forward geometry is evident, with the detector instrumenting the
pseudorapidity region $2 < \eta < 5$.
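The pseudorapidity used here is the standard function of the polar angle,
\begin{equation}
\eta = -\ln\tan\frac{\theta}{2},
\end{equation}
so this region corresponds to polar angles of roughly
\SIrange{10}{300}{\milli\radian}.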
A dipole magnet with an integrated field strength of \SI{4}{\tesla\metre} bends
charged particle trajectories in the $xz$ plane, with its polarity being
switched periodically during data-taking.
Charged particle trajectories are recorded by the tracking system.
This is composed of a vertex detector centred around the proton-proton
interaction region, three planes of tracking stations before the magnet, and
three planes after.
Immediately downstream of the vertex detector (increasing in $z$) is the first
of two \ac{RICH} detectors, both used for particle identification; the second
is located after the tracking stations downstream of the magnet.
After this, a calorimetry system is in place to identify neutral particles such
as the \Ppizero\ and the photon, as well as to measure their energy and that of
the charged particles.
Finally, a muon identification system is installed after the calorimeters.
The data are selected in real time by a hardware trigger, after which a
two-stage software trigger performs an event reconstruction.
There are not enough computing resources available to the experiment to record
and analyse every proton-proton bunch crossing, and so the trigger is
essential to achieve the physics goals.
Each sub-detector will now be described in turn, followed by a description of
the trigger system.
These descriptions will relate to the detector and its performance during
\runone, and will be followed by \cref{chap:intro:lhcb:offline} on the offline
processing model and the upgrades performed for \runtwo.
\begin{figure}
\centering
\input{figures/introduction/lhcb_detector}
\caption{%
A schematic of the \lhcb\ detector.
In this figure the $z$-axis, labelled, increases from left to right; the
$y$-axis, also labelled, increases from bottom to top; and the $x$-axis
increases into the page.
}
\label{fig:intro:lhcb:detector}
\end{figure}
\subsection{Tracking}
\label{chap:intro:lhcb:detector:tracking}
Charged particle trajectories are reconstructed as tracks using hits deposited
in the tracking system.
This comprises the vertex locator~(\velo) surrounding the \pp\ interaction
region, the Tracker Turicensis~(\ttracker) before the magnet, and the
T-stations after the magnet.
In \runone, the tracking system achieved a momentum resolution
$\unc{\ptot}/\ptot$ from \SI{0.5}{\percent} at $\ptot = \SI{5}{\GeVc}$ to
\SI{0.8}{\percent} at \SI{100}{\GeVc}, and a track \acl{IP} resolution varying
from around \SI{70}{\micro\metre} for tracks with a low \pT\ to
\SI{20}{\micro\metre} at high \pT.
The \velo\ is an extremely precise silicon strip detector, whose active
elements come as close as \SI{8}{\milli\metre} to the \ac{LHC} beams.
Two sets of 24 silicon modules are arranged either side of the beam, as shown
in \cref{fig:intro:lhcb:velo}, each of which measures particle hits in $r$ and
$\phi$ coordinates.
Charged particles traversing the active silicon can ionise the material,
creating electron-hole pairs that drift towards the electrodes, registering a
hit as an electrical signal.
The magnetic field strength inside the \velo\ is approximated to be zero, and
so track segments are reconstructed as straight lines using the hits in the
sensors.
So-called long tracks, the track type used for the majority of physics
analyses, are created by combining \velo\ segments with hits in the T-stations
after the magnet.
\begin{figure}
\centering
\input{figures/introduction/velo_detector}
\caption{%
A schematic of the \velo\ sub-detector~\cite{Alves:2008zz}, showing the
relative positions of the sensors when the detector is open and closed.
}
\label{fig:intro:lhcb:velo}
\end{figure}
The three T-stations, T1--3, each comprise a silicon strip detector close to
the beam pipe~(the \itracker) and a drift-tube detector in the outer
regions~(the \otracker).
The \itracker\ has a finer spatial resolution than the \otracker, and covers a
small area around the beam where the particle multiplicities are particularly
high.
The \otracker\ covers an area of approximately \SI{30}{\metre\squared}, about 8
times greater than that covered by the \itracker.
Ionising particles liberate electrons from the gas molecules within the tubes,
which are detected by anode wires and registered as hits.
The distance of the line of ionisation from the wire is determined by measuring
the electron drift time, improving the track momentum resolution.
The momentum resolution on long tracks is further improved by adding hits
detected in the \ttracker, upstream of the T-stations.
The \ttracker\ is a silicon strip detector of the same technology as the
\itracker, with the three stations covering a total area of around
\SI{8}{\metre\squared}.
The tracking efficiency, defined as the fraction of genuine tracks that are
reconstructed given that enough hits were deposited to do so, is measured using
a tag-and-probe technique with \JpsiTomumu\ decays, and is described in more
detail in \cref{chap:prod:effs:tracking}.
The average tracking efficiency is measured to be above
\SI{95}{\percent}~\cite{Aaij:2014pwa}.
The efficient creation of tracks requires the tracking system to be well
aligned, but the positions of the various stations can change with time.
To compensate for this effect, alignment constants are computed periodically
and used in the reconstruction software.
The effect of an improved alignment is illustrated in
\cref{fig:intro:lhcb:alignment}, where the $\APmuon\Pmuon$ mass resolution
improves from \SI{92}{\MeVcc} with an initial alignment to \SI{49}{\MeVcc} with
an improved one~\cite{Dujany:082010}.
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\centering
\input{figures/introduction/mumu_pre_alignment}
\caption{Before}
\label{fig:intro:lhcb:alignment:pre}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\input{figures/introduction/mumu_post_alignment}
\caption{After}
\label{fig:intro:lhcb:alignment:post}
\end{subfigure}
\caption{%
Invariant mass distribution of di-muon candidates in the region of the
first three \PUpsilon resonances~\cite{Dujany:082010}.
The left plot~(\subref*{fig:intro:lhcb:alignment:pre}) shows the data
reconstructed with a preliminary alignment, whilst the right
plot~(\subref*{fig:intro:lhcb:alignment:post}) shows the result of a
reconstruction performed with a revised alignment.
The di-muon mass resolution improves from \SI{92}{\MeVcc} to
\SI{49}{\MeVcc}.
}
\label{fig:intro:lhcb:alignment}
\end{figure}
\subsection{Particle identification}
\label{chap:intro:lhcb:detector:pid}
Through the momentum-velocity relation, the mass of a particle can be
determined by combining the momentum measurement from the tracking system with
a velocity measurement.
Velocity measurements are made using the \ac{RICH} detectors.
Tracks created by pions, kaons, and protons are identified using the response
of the two \ac{RICH} detectors: \richone, located upstream of the magnet, and
\richtwo, located downstream.
The \acp{RICH} also provide electron and muon discrimination, albeit at a
weaker level.
\richtwo\ has a larger surface area than \richone\ but a smaller angular
acceptance, being designed to efficiently identify tracks with momenta in the
range \SIrange{15}{100}{\GeVc}, which largely occupy the low-$\theta$ region.
\richone\ is effective in the momentum range \SIrange{3}{60}{\GeVc}.
During \runone, \richone\ contained both \ce{C4F10} gas and aerogel radiators,
the latter of which was removed before the start of \runtwo.
\richtwo\ contains \ce{CF4} gas mixed with a small amount of \ce{CO2} to quench
scintillation light.
As charged particles travel through one of the radiators, each of refractive
index $n$, they emit Cherenkov radiation if they are travelling faster than
the phase velocity of light in the radiator.
This light is emitted at an angle $\thetac$ to the particle trajectory, which
is related to the velocity $v$ of the particle as
\begin{equation}
\cos{\thetac} = \frac{1}{n\beta},
\end{equation}
where $\beta = v/c$.
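Explicitly, since the momentum of a particle of mass $m$ is
$p = \gamma m \beta c$, a simultaneous measurement of $p$ and $\beta$
determines the mass as
\begin{equation}
m = \frac{p}{\beta\gamma c} = \frac{p}{c}\sqrt{\frac{1}{\beta^{2}} - 1},
\end{equation}
which is the momentum-velocity relation referred to above.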
The refractive indices of the radiators are known and are controlled by
monitoring the gas pressure and temperature over time.
By measuring \thetac\ the \rich\ detectors provide velocity measurements of
charged tracks.
The Cherenkov light cones are reflected outside of the \lhcb\ detector
acceptance by spherical and plane mirrors onto planar arrays of photon
detectors.
The angle \thetac\ is related to the radius of the rings, and, for a given
momentum, particles of different masses will produce smaller or larger rings.
\Cref{fig:intro:lhcb:cherenkov_angles} illustrates the Cherenkov angles for
different particle species as a function of momentum.
Separation between mass hypotheses is given by the difference between the
angles, denoted on the $y$-axis.
Discrimination can also be provided by the lack of Cherenkov light if a track
has a momentum between two momentum thresholds.
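The threshold behaviour follows from the Cherenkov condition $\beta > 1/n$: a
particle of mass $m$ radiates only above the threshold momentum
\begin{equation}
p_{\text{thr}} = \frac{mc}{\sqrt{n^{2} - 1}},
\end{equation}
so the absence of a ring is itself informative.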
The Cherenkov angle resolution in \richone\ is around \SI{1.6}{\milli\radian},
and around \SI{0.66}{\milli\radian} in \richtwo.
In order to assign a particle mass hypothesis to a track, a maximum likelihood
method is used~\cite{Forty:1998eqa}.
Initially, all reconstructed tracks are assumed to be pions, and the
\emph{predicted} Cherenkov rings assuming this hypothesis set are projected
onto the photo-detector planes within both \richone\ and \richtwo.
The value of the log-likelihood is computed by comparing the predicted number
of photons within each photo-detector to that observed.
For each track, the mass hypothesis is then changed to each of (\Pe, \Pmu,
\Ppi, \PK, \Pproton), the rings assuming the new global hypothesis set are
projected, and the log-likelihood is re-computed.
The hypothesis giving the largest increase in the log-likelihood is recorded,
and the track hypothesis is returned to its original value.
After all individual track hypotheses have been trialled, the single hypothesis
change that gave the largest increase in the log-likelihood is applied to the
respective track.
This procedure is repeated until the log-likelihood no longer increases with
any change in the hypothesis set.
The outputs of this method are the \ac{DLL} variables, with one value per track
for each of the (\Pe, \Pmu, \PK, \Pproton) hypotheses.
Each \ac{DLL} variable is defined as the difference in the log-likelihood value
when the track mass hypothesis is changed from the pion hypothesis to the one
of the variable, for example \dllkpi.
The pion \dll\ value is then zero by definition.
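In this notation, the kaon \ac{DLL} with respect to the pion hypothesis, for
example, is
\begin{equation}
\dllkpi = \ln\mathcal{L}(\PK) - \ln\mathcal{L}(\Ppi).
\end{equation}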
\begin{figure}
\centering
\includegraphics[width=\textwidth]{introduction/cherenkov_angles}
\caption{%
Cherenkov angles for different particle species as a function of particle
momentum, in the different radiators used in the \rich\ detectors.
The Cherenkov angle resolution in \richone~(aerogel and \ce{C4F10}
radiators) is around \SI{1.6}{\milli\radian}, and around
\SI{0.66}{\milli\radian} in \richtwo~(\ce{CF4}).
}
\label{fig:intro:lhcb:cherenkov_angles}
\end{figure}
\subsection{Calorimetry}
\label{chap:intro:lhcb:detector:calo}
Given the momentum measurements from the tracking system and the velocity
measurements from the \ac{RICH} detectors, the energy of a charged particle is
already determined.
For charged hadrons, then, precise calorimetry is not strictly necessary.
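Indeed, once the momentum $p$ and the mass hypothesis $m$ are known, the
energy follows directly from
\begin{equation}
E^{2} = p^{2}c^{2} + m^{2}c^{4}.
\end{equation}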
However, the momentum and mass measurements are complex and require a
significant amount of time to compute, and so cannot be used in the first stage
of the trigger where the proton-proton collision rate must be reduced from
\SI{40}{\mega\hertz} down to around \SI{1}{\mega\hertz}.
In addition, the \rich\ detectors are much less sensitive to electron \ac{PID}
than to that of (\Ppi, \PK, \Pproton), and the tracking system cannot detect
photons or neutral pions.
To provide a fast positive trigger decision for events containing hadronic
decays of heavy flavour, a hadronic calorimeter, the \hcal, is employed.
This is a sampling calorimeter, composed of alternating layers of iron and
active scintillating material and positioned downstream of \richtwo.
Hadrons traversing the \hcal\ deposit energy in these layers, with the
scintillation light collected by wavelength-shifting optical fibres and
recorded by photo-multiplier tubes.
This general construction and detection mechanism is common to all \lhcb\
calorimeters.
The \hcal\ is segmented transversely into square cells, \SI{130}{\milli\metre}
in width near the beam and \SI{260}{\milli\metre} in width in the outer
regions, with the granularity decreasing away from the beam to account for the
lower particle multiplicities.
In the first-level trigger, the total transverse energy \ET\ in all clusters of
$2\times2$ cells is computed, and a positive trigger decision is made if the
maximum \ET\ is above some threshold, typically between $3.5$ and
\SI{3.7}{\GeV}.
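Here the transverse energy of a calorimeter deposit is defined with respect to
the beam axis as
\begin{equation}
\ET = E\sin\theta,
\end{equation}
where $\theta$ is the polar angle of the corresponding cell.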
The \hcal\ provides the highest rate of positive trigger decisions out of those
at the first trigger level, giving a signal efficiency of around
\SI{40}{\percent} for two-body \PB decays, and \SI{20}{\percent} for four-body
\PD decays, with respect to the offline selection.
The resolution on the energy measured by the \hcal\ has been measured to be
$\unc{E}/E = \SI{69 \pm 5}{\percent}/\sqrt{E}$ with a constant term of
\SI{9 \pm 2}{\percent}, where $E$ is expressed in \si{\GeV}~\cite{Perret:2015pla}.
An electromagnetic calorimeter, the \ecal, is located in front of the \hcal,
and measures the energies of electrons and photons.
It is segmented in the transverse plane into three regions, increasing in
granularity towards the beam.
The \ecal\ is partitioned more finely than the \hcal\ due to the smaller
transverse scale at which electromagnetic showers occur in comparison to
hadronic showers.
The energy measurements it makes are used in the first trigger level to accept
high-\ET\ electrons and photons, and in the reconstruction of \Ppizero and
\Peta mesons offline.
The ability to discriminate between electrons and photons in the trigger
decision is provided by the scintillating pad/preshower detector~(\spd/\ps),
which consists of two layers of near-identical scintillator either side of a
\SI{15}{\milli\metre}-thick layer of lead~(\ce{Pb}) converter, transversely
segmented in the same manner as the \ecal.
A positive electron trigger decision requires an \ET\ measurement of between
$2.5$ and \SI{3.0}{\GeV} in a cluster of $2\times2$ cells in the \ecal, along
with hits in both the \spd\ and the \ps\ pads in front of the corresponding
\ecal\ cluster. A positive photon trigger decision is near-identical to this,
except that no hits may be present in the corresponding \spd\ cells.
The resolution on \ecal\ energy measurements is $\unc{E}/E = \SI{9 \pm
0.5}{\percent}/\sqrt{E}$ with a constant term of \SI{0.8}{\percent}, again
with $E$ in \si{\GeV}~\cite{Perret:2015pla}.
\subsection{Muon reconstruction and identification}
\label{chap:intro:lhcb:muon}
A series of five muon stations, M1--5, is used to identify muons.
M1 is positioned before the \spd/\ps, and M2--5 are after the \hcal,
interleaved with \SI{80}{\centi\metre}-thick layers of iron.
For almost all regions across the stations, multi-wire proportional chambers
are used to collect hits, with the exception being the very inner
$20\times\SI{24}{\centi\metre\squared}$ region of M1 which uses triple-layered
gas electron multiplier~(GEM) detectors.
Each station is partitioned into four regions of increasing transverse
granularity closer to the beam pipe.
A muon momentum of at least \SI{6}{\GeVc} is required for a full traversal of
all five stations.
Hits recorded in the muon stations provide \pT\ measurements for the first
trigger stage, where a track segment must be formed using hits in all
stations.
The single muon decision requires a minimum segment \pT\ of between $1.48$ and
\SI{1.76}{\GeVc}, and the di-muon decision requires the product of the \pT\ of
the two muons to exceed a threshold of between $1.69$ and
\SI{2.56}{(\GeVc)\squared}.
\section{Online event reconstruction and selection}
\label{chap:intro:lhcb:trigger}
There are three stages to the \lhcb\ trigger system, run sequentially.
Within each stage there is a set of parallel trigger \emph{lines}, and each
stage runs only if at least one line in the previous stage gave a positive
decision.
The first stage is a hardware trigger, called the level-zero or \lzero.
It comprises a set of decision units, implemented in custom circuitry, that
evaluate decisions at the full \SI{40}{\mega\hertz} \pp\ collision rate
provided by the accelerator.
The \lzero\ output rate is limited to the \SI{1}{\mega\hertz} rate at which
the full detector can be read out.
The muon stations and the calorimeters can be read at \SI{40}{\mega\hertz}, and
it is the information from these sub-detectors that is used for making \lzero\
decisions.
The single muon decision requires a high \pT\ muon, and the di-muon decision
requires two.
The hadron, electron, and photon triggers require high \ET\ signatures in the
\hcal\ or \ecal, as appropriate.
On a positive \lzero\ decision, when one or more lines have `fired', the entire
detector is read out and the data are sent to the \ac{HLT} computing farm for
processing in software.
Since 2012, around \SI{20}{\percent} of events sent to the farm are deferred to
disk, to be processed during the inter-fill periods when the detector is
idle~\cite{1742-6596-513-1-012006}.
This technique increases the average CPU time available for processing each
event, allowing for looser momentum requirements in the reconstruction and
hence more efficient triggers.
The first stage of the software trigger, \hltone, performs a simplified track
reconstruction in order to confirm the \lzero\ decisions and improve their
discriminatory power.
Segments in the \velo\ detector are first reconstructed, and \pp\ vertices are
formed.
Segments that neither have a significant \ac{IP} with respect to all \acp{PV}
nor can be matched to hits in the muon stations are discarded.
The remaining segments are matched to hits in the T-stations to form long
tracks.
To save processing time arising due to combinatorics, a search window is
defined by a minimum momentum that varied from \SIrange{3}{6}{\GeVc} and a
minimum \pT\ requirement that varied from \SIrange{0.5}{1.25}{\GeVc}.
The transverse momentum window is not required on tracks that can be matched to
hits in the muon stations.
With the faster track reconstruction, the mass resolution on \JpsiTomumu\
candidates is around \SI{3}{\percent} worse than that achieved in the offline
reconstruction.
For final states not containing leptons, the most efficient trigger path is
through the one-track line, which requires the presence of a single, good
quality, high \pT\ track with a large \ac{IP}.
Typical thresholds were $1.6$--\SI{1.7}{\GeVc} in \pT\ and
\SI{0.1}{\milli\metre} in \ac{IP}.
This line dominates the \hltone\ output rate, contributing over
\SI{70}{\percent} of all triggers.
Similar lines exist for muon and di-muon candidates, where the corresponding
\ac{IP} thresholds are not required if the candidate has a very high \pT\ (or a
high $\APmuon\Pmuon$ invariant mass in the case of the di-muon line).
The muon triggers have efficiencies upwards of \SI{70}{\percent} for \PB\
decays containing muons.
Events passing \hltone\ are sent to the second software stage, \hlttwo, at a
rate of \SI{80}{\kilo\hertz}, where a full event reconstruction is performed,
including the tracks, vertices, and \ac{PID} information.
As more processing time is now available per event, all \velo\ segments are
used in the long-track reconstruction, with looser search window requirements
of $\ptot > \SI{3}{\GeVc}$ and $\pT > \SI{0.3}{\GeVc}$, in comparison with
\hltone.
A relative loss in efficiency of \SIrange{1}{2}{\percent} per track is seen
compared to the offline reconstruction.
In \hlttwo, tracks are combined into vertex candidates, and \hlttwo\ lines are
grouped as either \emph{exclusive}, where decays are fully reconstructed, or
\emph{inclusive}, where generic signatures are considered.
For beauty decays, the inclusive `topological' lines select two, three, and
four-body vertices of charged tracks that are characteristic of those produced
by \Pbottom hadrons.
Input tracks are required to have a high \ac{IP}, and the resulting vertex is
then selected based on: its distance from the \ac{PV}; the \ac{IP} of the
vertex momentum vector; the \ac{DOCA} and the scalar sum of the \pT\ of the
input tracks; and the vertex mass and corrected mass, where the latter
accounts for decay products not explicitly included in the vertex, such as
long-lived charm hadrons.
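The corrected mass is computed (in natural units) from the vertex mass $m$
and the momentum component $p_{\perp}$ transverse to the line of flight
between the \ac{PV} and the decay vertex as
\begin{equation}
m_{\text{corr}} = \sqrt{m^{2} + p_{\perp}^{2}} + p_{\perp}.
\end{equation}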
The \ac{DOCA} requirements are loose enough to accommodate \Pbottom hadron
decays that include long-lived charm hadrons, where a tertiary vertex would be
defined in the offline reconstruction.
The vertex quantities are used as input to a \ac{BDT}, and the single output
value is used to make the trigger
decision~\cite{Gligorov:2011qxa,Gligorov:1384380}.
A similar set of topological lines exists, optimised for \Pbottom hadron
decays that include a high \pT\ muon, and an inclusive \PDstarp\ line
exists for selecting \DstToDzpi\ with $\decay{\PDzero}{hhX}$ decays, where $hh$
are two charged hadrons.
The topological \PB triggers are around \SI{75}{\percent} efficient on average
at selecting fully charged \PB decays, and the inclusive \PDstarp\ trigger is
between $25$ and \SI{90}{\percent} efficient.
The efficiencies generally increase with increasing heavy flavour \pT, and for
the charm triggers are strongly dependent on the multiplicity of the final
state, with higher multiplicities having a lower efficiency.
The remaining inclusive lines are the single muon and di-muon lines, which are
similar to those in \hltone.
Exclusive \hlttwo\ lines fully reconstruct signal decays, such as
\decay{\PB}{\pippim} and \decay{\PLambdac}{\Pproton\PKminus\Ppiplus}.
Some of these lines are included to increase the efficiency of the inclusive
lines for specific final states, whilst others provide the only trigger path
that gives a high efficiency for their final state.
Events passing \hlttwo\ are saved to disk.
The offline processing then begins with the full event reconstruction, which is
more precise than that in the trigger due to the longer processing times
permitted.
This is exploited in, for example, the reconstruction of long tracks, where two
methods are employed offline rather than the one online.
The differences between the online and offline reconstruction were removed
during \ac{LS1}, which will be discussed in the following
\lcnamecref{chap:intro:lhcb:offline}.
\section{Offline data flow and the upgrades for \runtwo}
\label{chap:intro:lhcb:offline}
During \runone, all data passing \hlttwo\ were saved to disk and were
reconstructed offline by a separate software application.
The output of this reconstruction comprises tracks and the information
associated to them such as the \ac{DLL} \ac{PID} responses.
To facilitate easier offline processing for analysts, a central selection
called the `stripping' is run, which defines stripping lines that reconstruct
inclusive and exclusive decays in a similar manner to those defined in \hlttwo.
The lines are collected into `streams' of stripping lines with similar
selections, such as those for semileptonic \PB decays or di-muon decays.
In general, any one stripping line contains events which could have been saved
due to the decision of any trigger line, and so analysts are able to choose
which triggers to include in their analysis dataset offline.
This can be a complex procedure: there are at least two sets of selections to
consider, those in the trigger and those in the stripping, as well as the
difference in resolution between the two reconstructions; using the same
selection online and offline would result in visible resolution effects around
the selection boundaries, which can be hard to model.
To overcome these complications, three parallel efforts were made during
\ac{LS1}.
The first of these was a major review of the online and offline reconstruction
software, resulting in a large decrease in the execution time required per
event, such that the online reconstruction could be made identical to that run
offline.
The second effort was the development and deployment of a real-time alignment
and calibration of the full detector for each \acs{LHC} fill.
The full output of \hltone\ is buffered to a \SI{10}{\peta\byte} disk farm,
the detector alignment and calibration constants are computed, and these are
then applied during the reconstruction performed in
\hlttwo~\cite{Dujany:082010}.
The \hlttwo\ processing is then fully asynchronous with respect to the
\ac{LHC}.
The third effort was the introduction of the Turbo stream processing
model~\cite{Benson:2019752}.
Given that the additional offline processing was no longer necessary to improve
the resolution of the various physics quantities, and that the best possible
detector alignment and calibration are already applied, physics analyses could
be performed directly on the output of the trigger.
There are several benefits due to the \ac{LS1} efforts.
The improvement of the online reconstruction allows for parity between the
online and offline selections, increasing the trigger efficiency and the
possible number of signal candidates available for analysis, and decreasing the
total processing time required.
This in turn allowed for the inclusion of a new two-track trigger in \hltone,
which is able to build and select two-prong vertices, and a set of so-called
`lifetime unbiased' \hltone\ lines, which create two-body vertices from tracks
reconstructed without an \ac{IP} requirement, allowing for lifetime
measurements without the need to model a complex acceptance efficiency as a
function of particle lifetime.
In addition, these efforts negate the need for a separate offline
reconstruction, saving computing resources and allowing analyses and
data-quality reports to progress more quickly, as the data are made available
almost immediately: within hours, rather than the days or weeks of the
\runone\ processing model.
Analyses have already been performed that exploited this
model~\cite{LHCb-PAPER-2015-037,Aaij:2015bpa,Aaij:2016jht}.
The Turbo stream contains the signal candidates reconstructed in \hlttwo, and
the data are not centrally processed further offline.
The time taken from data-taking to analysable data being available is then
considerably reduced with respect to the usual data flow; however, the Turbo
processing model does have some limitations at present.
The most restrictive of these is that the candidates available to analysts are
only those explicitly used in the trigger decision.
For example, if a trigger line reconstructs \DzToKpi\ and the candidate passes
the selection, analysts can only use the information pertaining to the kaon
and pion tracks, the \PDzero vertex, and the associated primary vertex.
For many analyses, this information is all that is needed; however, sometimes
additional information about the event is required.
Effort is ongoing to resolve these problems, and it is expected that the vast
majority of analyses will use the Turbo stream in \runthree.
\section{Simulation}
\label{chap:intro:lhcb:simulation}
The simulation is used to estimate effects that cannot be determined from the
real data, such as acceptance efficiencies, and to perform detector
performance studies during design phases.
Proton-proton collisions are generated by the \pythia\ 8
program~\cite{Sjostrand:2007gs} using a tuning specific to
\lhcb~\cite{Belyaev:1322400}.
\pythia\ simulates the parton interactions, the fragmentation of the outgoing
coloured objects (quarks and gluons), and the resulting creation of colourless
particles through hadronisation.
The decays of the hadrons are simulated by the \evtgen\ program, which models
phenomena such as branching fractions, neutral meson mixing, \CP\ violation,
decay amplitudes, and decay times.
Simulated samples for a specific decay chain are generated by running \pythia\
in minimum bias mode, where proton-proton collisions are generated until the
particle at the top of the chain is created somewhere in the event.
This head particle is then forced to decay by \evtgen~\cite{Clemencic:2011zza}
through the decay chain under study.
The behaviour of the other particles generated in the event is also controlled
by \evtgen, but is not forced to any particular state.
After \evtgen, the particles are propagated through a simulation of the entire
detector using the \geant\ 4 program, where interactions with the active
detector elements are recorded as `\ac{MC} hits'.
These \ac{MC} hits are converted into a format mimicking the electrical
response of the real detector through an \lhcb-specific emulation, and the
response is processed through the trigger and reconstruction software in a
near-identical manner to real data.
The \lzero\ trigger decisions are emulated in software, and the trigger
configuration used for a given data-taking year, such as 2011, is chosen to be
the one that was used for the majority of the data-taking.
The simulation is processed in such a way that reconstructed objects can be
associated to the \ac{MC} objects that created them.
For charged, stable particles reconstructed via tracks, a particle is
associated to an \ac{MC} particle if at least \SI{70}{\percent} of the hits
comprising the associated track were deposited by the \ac{MC} particle in
question.
The process of assigning \ac{MC} objects to reconstructed objects is referred
to as truth matching.
Tracks are classified as `ghost' tracks if there are no \ac{MC} particles that
can be associated to them.
Vertices are assigned categories based on the associations of their input
tracks.
A `signal vertex' is defined as one in which: all inputs are associated to
\ac{MC} particles; all inputs have been assigned a particle identity the same
as that of their associated \ac{MC} particle; all inputs are associated with
\ac{MC} particles which come from the same true \ac{MC} parent; and the
identity of that \ac{MC} parent matches the one assigned to the vertex.
Any deviation from these requirements results in the vertex being assigned a
particular \emph{background category}, dependent on the nature of the
deviations~\cite{Gligorov:1035682}.
For example, if at least one track is a ghost, the vertex is classified as a
ghost, and if at least one track is associated to an \ac{MC} particle with a
different \ac{PID}, the vertex is classified as a misidentification.
In the case of a vertex that can be associated to an \ac{MC} particle, the
vertex can be further classified as prompt or secondary based on the true
lifetime of the \ac{MC} particle.
For consistency of terminology, signal vertices are referred to as having a
`background category' of `signal'.
In general, separate samples of simulated data are produced on a per-analysis
basis, with independent samples generated for different beam energies and
dipole magnet polarities (usually only `up' and `down').
The generation of \ac{MC} is done using a globally distributed network of
computers, and the resulting data files are then made available to analysts in
the same way as for real data, with the additional inclusion of the
truth-matching information.
# =============================================================================
# Basic functions from which the composite functions can be generated
# Written by Ali Ahrari (aliahrari1983@gmail.com)
# last update by Ali Ahrari on 15 March 2021
# =============================================================================
import sys
import numpy as np

class BasicFunc:
@staticmethod
def bohach2(z,h_GO): # f1: Bohachevsky 2 function (Modified)
A=0.5 # by default, it is 0.3. A larger value means a harder problem
x1=z[0::2]
x2=z[1::2]
f= x1**2+2*x2**2-A*h_GO*np.cos(3*np.pi*x1)*np.cos(4*np.pi*x2)+A*h_GO
return f
    @staticmethod
    def booth(x): # f2: Booth function - shifted 2D function, see https://www.sfu.ca/~ssurjano/booth.html
x1=x[0]+1
x2=x[1]+3
f = ( (x1+2*x2-7)**2 + (2*x1+x2-5)**2)
return f
    @staticmethod
    def branin(z): # f3: multi-output rescaled Branin function,
x=z[0::2]*4+3
y=z[1::2]*4+7
px= (x<-5)*(x+5)**2 + (x>10)*(x-10)**2
py= (y<0)*y**2 + (y>15)* (y-15)**2
f=(y-5.1/(4*np.pi**2)*x**2+5/np.pi*x-6)**2 + 10*(1-.125/np.pi)*np.cos(x)+10
f=f+ px+py
f=np.sqrt(f)-np.sqrt(5/(4*np.pi))
f=np.clip(f,a_min=0,a_max=np.inf)
return f
    @staticmethod
    def cmmp(x): # f4: x1*=+- sqrt( 27/7 ) x2= +- sqrt(4/7), f*=31/7
# see "Multimodal Optimization Using a Bi-Objective Evolutionary
# Algorithm" by Deb and Saha for the structure
f0=31/7
x1=x[0::2]
x2=x[1::2]
g1=(x1/2)**2+(x2/4)**2-1
g2=(x1/3)**2+(x2/1)**2-1
g1=np.abs(g1)*(g1<0)
g2=np.abs(g2)*(g2<0)
f=x1**2+x2**2+ 1e10*((g1+g2)+((g1+g2)>0))-f0
f= np.clip(f,a_min=0,a_max=np.inf) # correction for possible rounding error
return f
    @staticmethod
    def dp(x): # f5: Dixon & Price function (Shifted): single-output
n=np.arange(2,x.size+1)
x=x+2.0**(2.0**(1-np.arange(1,x.size+1)))/2
f=(x[0]-1)**2+sum(n*(2*x[1:]**2-x[0:-1])**2)
return f
    @staticmethod
    def griewank(x,h_GO): # f6: multi-output rescaled 2D Griewank function
x=x*10
x1=x[0::2]
x2=x[1::2]
f= (x1**2+x2**2)/100 - h_GO*np.cos(x1/1)*np.cos(x2/np.sqrt(2)) + h_GO
f=1000*f
return f
    @staticmethod
    def himl(z): # f7: Himmelblau function (Rescaled)
z=3*z
x=z[0::2]
y=z[1::2]
f=(x**2+y-11)**2 + (x+y**2-7)**2
return f
    @staticmethod
    def hump6(z): # f11: Six Hump Camel Back function (Rescaled): nonlinearly rescaled 6-hump function
a=0.1 # specifies the distortion (a=0 for default function)
x=z[0::2]
y=z[1::2]
x=x*(1+a*np.sign(x))
y=y*(1-a*np.sign(y))
f=(4-2.1*x**2+x**4/3)*x**2 + x*y + (-4+4*y**2)*y**2 +1.03162845349
return f
    @staticmethod
    def hump3(z,h_GO): # f13: multi-output three-Hump Camel Function (Modified)
x=z[0::2]
y=z[1::2]
Q2= (x**2+x**6+y**2)*.02
Q1=(2*x**2-1.05*x**4+x**6/6+x*y+y**2)
f=100*(h_GO*Q1+(1-h_GO)*Q2)
return f
    @staticmethod
    def lvn13(z,h_GO): # f8: multi-output shifted 2d Levy function N13
# see https://www.sfu.ca/~ssurjano/levy13.html for original function
x1=z[0::2]*2+1
x2=z[1::2]*2+1
f=h_GO*(np.sin(3*np.pi*x1))**2 + (x1-1)**2*(1+h_GO*(np.sin(3*np.pi*x2))**2) + (x2-1)**2*(1+h_GO*(np.sin(2*np.pi*x2))**2)
return f
    @staticmethod
    def neumaier3(x): # f9: neumaier3 aka Trid function
D=x.size
ind=np.arange(1,D+1)
sh=ind*(D+1-ind)
fstar=-D*(D+4)*(D-1)/6
x=x+sh
f=np.sum((x-1)**2) - np.sum(x[0:-1]*x[1:]) - fstar
return f
    @staticmethod
    def shubert(x): # f10: Shubert with penalty (Rescaled)
x=4*x
f = 1
k = np.arange(1,6)
D=x.size
for m in np.arange(D):
f=f*np.sum( k*np.cos( (k+1)*x[m]+k ) )
p=np.abs(x)-10
penalt= p**2*(p>0)
offset=np.array([12.8708854977257, 1.86730908831024e+02, 2.70909350557283e+03 ])
if D>=4:
            sys.exit('The rescaled shubert function is not defined for this dimension. Available dimensions are 1, 2, 3.')
f=f+offset[D-1]
f=f/10**(D-1)
f=f+sum(penalt)
return f
    @staticmethod
    def weierstrass(y,h_GO): # f14: Weierstrass function (Modified)
D=y.size
y=.1*y
a=np.sqrt(h_GO)*0.5 # controls the global basin (a higher a makes problem harder)
        b=3 # a higher value makes it more rugged; default is 3
k=np.arange(4) # level of optima
h=np.zeros(D)
for m in np.arange(D):
h[m]=np.sum(a**k*np.cos(2*np.pi*b**k*(y[m]+.5)))
hprime=sum(a**k*np.cos(np.pi*b**k))
P= np.sum( (np.abs(y)-.5)**1 *(np.abs(y)>.5) )
f=50*( sum(h-hprime) + P )
return f
    @staticmethod
    def zakharov(x): # f15: zakharov function (Modified)
n=np.arange(1,x.size+1)
g=(0.5*sum(n*x))**2
f=np.sqrt(np.sum(x**2)+g*(1+g))
return f
if __name__=='__main__':
x=np.array([1,2,3])/3.28
h_GO=.5
y=BasicFunc.zakharov(x)
print(y)
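    # Example sketch (not part of the original script): the two-dimensional
    # multi-output functions operate on interleaved coordinate pairs and take
    # the h_GO shape parameter, e.g. two 2-D points evaluated at once:
    z=np.array([0.1,-0.2,0.3,0.4]) # (x1,y1,x2,y2) interleaved
    print(BasicFunc.bohach2(z,h_GO)) # two objective values, one per point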
import random
import numpy
from matplotlib import pyplot as plt
import csv

# Fair-split probability used by rdm() inside SICTA; the original script never
# defines it at module level, so a conventional value of 0.5 is assumed here.
p = 0.5
'''
INPUT VARIABLES
n: the number of active devices in the network
id_length: the length of each binary ID string
'''
def initialiseIDs(id_length,n):
'''
Given parameters n(the number of active devices in the network) and id_length(The length of the binary ID string),
then n random unique binary ID strings will be generated, each represents a device in the network
'''
id_max = 2**id_length - 1
for i in range(n):
id = random.randint(0,id_max)
if id in id_list_int: #ensure all the IDs are unique
while(id in id_list_int):
id = random.randint(0,id_max)
id_list_int.append(id)
for a in id_list_int:
id_bin = ("{:0%db}"%(id_length)).format(a) # here addtional prefix (like"00" or "11") can be added
ID_list.append(id_bin)
return ID_list
def responseToquery(query):
'''
If the query is a prefix to a device, then this device will repond to the gateway.
This function will return:
0 if no devices responded
2 if more than 1 devices reponded (caused collision)
"ID" if only one ID successfully transmitted
'''
counter = 0
for i in ID_list:
if i.startswith(query):
counter += 1
res_id = i
if counter > 1:
return 2 # functions returns 2 when theres a collision
if counter == 1:
return res_id # if only one ID successfully transmitted, returns the ID
if counter == 0:
return 0 # returns 0 if no reponse.
def queryTree():
'''
Simulate the querytree algorithm,
at the end, the number of slots needed to resolve the collision will be returned
'''
slot = 1
query_list = ['0','1'] # the query list corresponding to the Q in algorithm
Memory_list = [] # memory list of the gateway, corresponding M in the algorithm
while(len(query_list)!=0):
slot += 1
q = query_list[0] # q is the query sended by the gateway at the beginning of each time slot
#print(q)
response = responseToquery(q)
if response == 0: #if no response, delete the query from list
query_list.remove(q)
elif response == 2: # if collided, append "q0" and "q1" to the list
q1 = q + '0'
q2 = q + '1'
query_list.append(q1)
query_list.append(q2)
query_list.remove(q)
else :
Memory_list.append(response)
query_list.remove(q)
#print(Memory_list)
#print ("QT: %d slots"%slot,file=f)
return slot
def rdm(p): # for SICTA: generate '0' with probability p, or '1' otherwise
'''
a random simulator, it returns 0 with probability p, returns 1 with probability of "1-p"
'''
a = random.random()
if a < p:
return 0
else:
return 1
def calcuK(reception,slot=0):
'''
For SICTA and SICQTA, calculate the feedback k, where k means how many packet or empty slot
are decoded after one successful transmission
'''
k = 1 #Because the Pre-condition to calculate k is "already one successful transmission"
i=0
while(1):
flag = 0
buff=[]
a = reception[-2-i].copy() #Father of the successful transmitted slot
if (2+i) == len(reception): #when Father is the same as the first slot, it comes to end.
flag = 1
b = reception[-1-i].copy() #the successful transmitted slot
for ele in b:
a.remove(ele) # Interference Cancellation
buff = a
if len(buff) > 1: # No single ID is decoded from SIC(after Cancellation, still collision)
break
else: #After Cancellation, one ID is successfully decoded
k = k + 1
i += 1
if len(buff)==1:
memory_list.append(buff[0])
res_list.append(slot)
if flag == 1:
break
return k
def SICTA():
'''
simulate the SICTA algorithm,
at the end, the number of slots needed to resolve the collision will be returned
'''
slot = 0
end_con = 0
sicta_id = [] # local counter of each ID
gateway= [] # Gateway received IDs from each time slot
buffer = []
for i in ID_list: #Initailise all local counter to '0'
sicta_id.append([i,0])
while (end_con!= 1):
slot += 1
buffer = []
for i in sicta_id: # when counter is '0'. this device can send its ID
if i[1] == 0:
buffer.append(i[0])
response = len(buffer) #detect if there's a collision(when response>1)
gateway.append(buffer)
if response > 1: #according the algorithm in PAPER 7
for i in sicta_id:
if i[1] > 0:
i[1] += 1
if i[1]==0:
i[1]=rdm(p)
if response == 0: #according the algorithm in PAPER 7
#slot = slot - 1 #MTA! saves one slot if empty slot(doesn't work by rdm method)
for i in sicta_id:
if i[1]>1:
pass
if i[1]==1:
i[1] = rdm(p)
if response == 1: #according the algorithm in PAPER 7
memory_list.append(buffer[0])
tmp=[]
for emp in gateway: # Delete the empty slots(1.NULL Transmission.2.some slots are decoded)
if emp != []:
tmp.append(emp)
gateway=tmp
k=calcuK(gateway) # Calculate the feedback K
for c in range(k):
gateway.pop() # pop out the decoded(saved through SIC) time slots fron gateway
sicta_id_copy = sicta_id.copy()
for i in sicta_id:
i[1] = i[1]-(k-1)
if i[1]<= 0:
#i[1]=-100 #to enhance the ID-Quit Condition, i set all decoded to -100
for j in gateway:
if i[0] in j:
j.remove(i[0]) #remove the decoded IDs from each slot
sicta_id_copy.remove(i)
# end_con += 1 # each time an ID is decoded, end_con + 1
elif i[1] == 1:
i[1]=rdm(p)
sicta_id = sicta_id_copy.copy()
if len(sicta_id)==0:
end_con = 1
# double check if all the IDs are decoded.
for check in ID_list:
if check not in memory_list:
print(check,file=f),
print("not decoded",file=f)
#print ("SICTA: %d slots"%slot,file=f)
return slot
def feedbackToSICQT(query): # By SICQT, all the slots received by gateway must be saved for SIC
'''
This function is for the SICQTA algorithm, it saves the response from each slot for the future use.
'''
receiving = []
for i in ID_list:
if i.startswith(query):
receiving.append(i)
return receiving
def SICQT(): #with shortcutting
'''
Simulate the SICQTA algorithm,
at the end, the number of slots needed to resolve the collision will be returned
'''
slot = 1 # all the IDs transmitted already in the first slot!
query_brother = [] # saves the brother of each time slot, so we know which query-slot to skip later
received = [] # all received time slots are saved here
re_id_tmp = ID_list.copy()
received.append(re_id_tmp) # initialise first time slot with all IDs
q = '0' # initalise first query with '0'
end_con = 0
while(end_con!=1):
buffer = []
q_b = q[:-1] +'1' #the brother of query
slot += 1
buffer = feedbackToSICQT(q)
if len(buffer) == 0:
q = q_b+'0' # if no response, append '0' to the last q_b, but not append q_b to list querybrother
elif len(buffer) > 1:
query_brother.append(q_b) #append q_b to list querybrother
q = q + '0'
received.append(buffer)
elif len(buffer) == 1 :
query_brother.append(q_b)
memory_list.append(buffer[0]) # save the decoded IDs. which must be the same as ID_list in the end
res_list.append(slot)
received.append(buffer)
k=calcuK(received,slot)
if k > len(query_brother): #end condtion when k> existed time slots, means it goes all way back to first slot
end_con = 1
break
pos_in_qbrother = -1 - (k-1) # find out which query can be skipped
q = query_brother[pos_in_qbrother] + '0' # the next query is the + '0'
query_brother = query_brother[:pos_in_qbrother] #delete the skipped query
for a in range(k):
received.pop()
for j in received: # delete the decoded IDs from list 'received'
for suc in memory_list:
if suc in j:
j.remove(suc)
# double check if all the IDs are decoded.
for check in ID_list:
if check not in memory_list:
print(check,file=f),
print("not decoded",file=f)
def possion(interval,lamda):
'''
    Generate the number of newly arriving users over the given interval,
    drawn as a sum of Poisson random variables with rate lamda per slot
'''
total=0
for i in range(interval):
come=numpy.random.poisson(lamda)
total=total+come
return total
def delay(lamda):
'''
    Use SICQTA to resolve a dynamic collision-resolution problem with Poisson
    arrival parameter lamda; a list of per-user delays is returned after at
    least 30000 new arrivals have been processed.
'''
global ID_list,memory_list,id_list_int,res_list
total_user=0
    interval=100 # initialise the first collision by assuming there was a CRI before
result=[]
while(total_user<30000):
ID_list=[]
memory_list=[]
id_list_int=[]
user_nr=possion(interval,lamda)
if user_nr == 0:
user_nr=1
total_user += user_nr
        # The ID length was left unspecified in the original source ('???');
        # assume just enough bits for unique IDs plus a safety margin.
        id_length = user_nr.bit_length() + 2
ID_list=initialiseIDs(id_length,user_nr)
if user_nr == 1:
res_list=[1]
else:
res_list=[]
SICQT()
res_list = list(numpy.asarray(res_list) + interval/2)
result = result + res_list
interval=res_list[-1]
return result
if __name__ == '__main__':
'''
    Main function: test Poisson parameters lamda from 0.15 to 1.10 in steps
    of 0.05; a csv file is generated which records the average delay for each
    lamda
'''
f = open("test_possion.csv","a",newline='')
csv_writer=csv.writer(f)
csv_writer.writerow(['lamda','mean-delay'])
    lamda=0.1
    for i in range(20):
        lamda=lamda+0.05
        a=delay(lamda)
        row=[lamda,numpy.mean(a)]
        csv_writer.writerow(row)
    f.close()
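
    # Optional static sanity check (a sketch, not part of the original
    # script): resolve a single one-shot collision among 8 random 6-bit IDs
    # with the plain query tree and report the number of slots used.
    ID_list=[]
    id_list_int=[]
    memory_list=[]
    initialiseIDs(6,8)
    print("QT resolved 8 IDs in %d slots"%queryTree())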
from googletrans import Translator
from textblob import TextBlob
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
def get_similarity(tweet_prep, column):
    # Compute a per-tweet polarity score for the first few rows and return it.
    senti = tweet_prep.loc[:10, [column]]
    senti['polarity'] = pd.DataFrame(senti[column]).apply(lambda x: TextBlob(x[column]).sentiment.polarity, axis = 1)
    return senti
def analize_sentiment(tweet):
analysis = TextBlob(tweet)
if analysis.sentiment.polarity > 0:
return 1
elif analysis.sentiment.polarity == 0:
return 0
else:
return -1
def showPieChart(positive, neutral, negative):
pc = plt.figure(figsize = (7, 7))
plt.pie([positive, neutral, negative],
autopct = '%1.1f%%',
colors = ['green','white','red'],
explode = (0.1, 0.1, 0.1),
startangle = 140)
plt.show()
def get_sentiment(tweet_prep, column):
get_similarity(tweet_prep, column)
senti = tweet_prep.loc[:, [column]]
senti['sentiment'] = np.array([analize_sentiment(tweet) for tweet in senti[column]])
pos_tweets = [tweet for index, tweet in enumerate(senti[column]) if senti['sentiment'][index] > 0]
neu_tweets = [tweet for index, tweet in enumerate(senti[column]) if senti['sentiment'][index] == 0]
neg_tweets = [tweet for index, tweet in enumerate(senti[column]) if senti['sentiment'][index] < 0]
showPieChart(positive=len(pos_tweets), neutral=len(neu_tweets), negative=len(neg_tweets))
return pos_tweets, neu_tweets, neg_tweets
def get_tweets_sentimen(pos, neu, neg):
    translator = Translator()
    # random.choice avoids the off-by-one IndexError of randint(0, len(...)).
    print(f'\n🟢 Positive \n{translator.translate(random.choice(pos), dest="id").text} \n')
    print(f'⚪ Neutral \n{translator.translate(random.choice(neu), dest="id").text} \n')
    print(f'🔴 Negative \n{translator.translate(random.choice(neg), dest="id").text}')
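
# Example usage sketch (the DataFrame and column name below are illustrative
# assumptions, not part of the original module):
#   tweets = pd.DataFrame({'text': ['I love this!', 'It is okay.', 'Awful.']})
#   pos, neu, neg = get_sentiment(tweets, 'text')
#   get_tweets_sentimen(pos, neu, neg)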
import math
import re
from configparser import ConfigParser
from ast import literal_eval
from decimal import Decimal, getcontext
getcontext().prec = 6
import numpy as np
import quaternion
from astropy.coordinates import SkyCoord
from astropy.time import Time
from astropy import units
from settings import *
from algo import tools
# Doc for SkyCoord:
# http://docs.astropy.org/en/stable/coordinates/skycoord.html
#
# C-G P67 coordinate system:
# http://www.aanda.org/articles/aa/full_html/2015/11/aa26349-15/aa26349-15.html
#
# COPIED FROM https://pds.nasa.gov/ds-view/pds/viewProfile.jsp
# ?dsid=RO-C-NAVCAM-2-ESC3-MTP021-V1.0
# >>>>
# ======================================================================
# Geometry Information - Coordinate System:
# ======================================================================
#
# The label files include the following geometric variables:
# - SC SUN POSITION VECTOR: The vector from the spacecraft to the Sun
# in equatorial J2000 inertial frame.
# - SC TARGET POSITION VECTOR: The vector from the spacecraft to the
# centre of the comet nucleus in equatorial J2000 inertial frame.
# - SC TARGET VELOCITY VECTOR: The spacecraft to comet nucleus velocity
# vector in in equatorial J2000 inertial frame.
# - TARGET CENTER DISTANCE: The distance between the spacecraft and the
# comet nucleus centre. (Note that also for checkout and stellar
# calibration images the comet nucleus distance is given here.)
# - SUB SPACECRAFT LATITUDE and SUB SPACECRAFT LONGITUDE: The latitude
# and longitude of the sub-spacecraft point derived from the Flight
# Dynamics body-fixed reference frame implicitly specified by the
# information provided in the comet attitude file CATT.
# - RIGHT ASCENSION and DECLINATION: Right Ascension and Declination of
# the camera boresight direction in equatorial J2000 inertial frame.
# - CELESTIAL NORTH CLOCK ANGLE: The direction of celestial north at the
# center of the image - measured from the upward direction,
# clockwise to the direction toward celestial north.
# - SOLAR ELONGATION: The angle between the line of sight of observation
# and the direction of the Sun.
# All geometric values are calculated for the time t = IMAGE TIME
# (and not START TIME).
def load_image_meta(src, sm):
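    """Parse a Rosetta NAVCAM PDS label file `src` and populate the system
    model `sm` with the image time, the spacecraft orientation and position,
    and the asteroid state, converting from equatorial J2000 as configured."""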
# params given in equatorial J2000 coordinates, details:
# https://pds.nasa.gov/ds-view/pds/viewProfile.jsp
# ?dsid=RO-C-NAVCAM-2-ESC3-MTP021-V1.0
with open(src, 'r') as f:
config_data = f.read()
config_data = '[meta]\n' + config_data
config_data = re.sub(r'^/\*', '#', config_data, flags=re.M)
config_data = re.sub(r'^\^', '', config_data, flags=re.M)
config_data = re.sub(r'^(\w+):(\w+)', r'\1__\2', config_data, flags=re.M)
config_data = re.sub(r'^END\s*$', '', config_data, flags=re.M)
config_data = re.sub(r'^NOTE\s*=\s*"[^"]*"', '', config_data, flags=re.M)
config_data = re.sub(r' <(deg|km)>','', config_data)
config = ConfigParser(converters={'tuple':literal_eval})
config.read_string(config_data)
image_time = config.get('meta', 'IMAGE_TIME')
# from sun to spacecraft, equatorial J2000
sun_sc_eq_x, sun_sc_eq_y, sun_sc_eq_z = \
-np.array(config.gettuple('meta', 'SC_SUN_POSITION_VECTOR'))
if USE_ICRS:
sun_sc_ec_p = np.array([sun_sc_eq_x, sun_sc_eq_y, sun_sc_eq_z])
else:
sc = SkyCoord(x=sun_sc_eq_x, y=sun_sc_eq_y, z=sun_sc_eq_z, unit='km',
frame='icrs', representation_type='cartesian', obstime='J2000')\
.transform_to('heliocentrictrueecliptic')\
.represent_as('cartesian')
sun_sc_ec_p = np.array([sc.x.value, sc.y.value, sc.z.value])
sun_sc_dist = np.sqrt(np.sum(sun_sc_ec_p**2))
# from spacecraft to asteroid, equatorial J2000
sc_ast_x, sc_ast_y, sc_ast_z = \
config.gettuple('meta', 'SC_TARGET_POSITION_VECTOR')
# from asteroid to spacecraft, asteroid fixed body coordinates
ast_sc_r = config.getfloat('meta', 'TARGET_CENTER_DISTANCE')
ast_sc_lat = config.getfloat('meta', 'SUB_SPACECRAFT_LATITUDE')
ast_sc_lon = config.getfloat('meta', 'SUB_SPACECRAFT_LONGITUDE')
# spacecraft orientation, equatorial J2000
sc_rot_ra = config.getfloat('meta', 'RIGHT_ASCENSION')
sc_rot_dec = config.getfloat('meta', 'DECLINATION')
sc_rot_cnca = config.getfloat('meta', 'CELESTIAL_NORTH_CLOCK_ANGLE')
solar_elongation = config.getfloat('meta', 'SOLAR_ELONGATION')
## set time
##
half_range = sm.asteroid.rotation_period/2
timestamp = Time(image_time, scale='utc', format='isot').unix
sm.time.range = (timestamp - half_range, timestamp + half_range)
sm.time.value = timestamp
sm.time.real_value = timestamp
## set spacecraft orientation
##
xc, yc, zc = 0, 0, 0
#xc, yc, zc = -0.283, -0.127, 0 # ROS_CAM1_20150720T113057
#xc, yc, zc = 0.2699, -0.09, 0 # ROS_CAM1_20150720T165249
#xc, yc, zc = 0.09, -0.02, 0 # ROS_CAM1_20150720T064939
if USE_ICRS:
assert sc_rot_dec+xc<90 and sc_rot_dec+xc>-90, 'bad correction'
sm.spacecraft_rot = (
sc_rot_dec+xc, # axis lat
(sc_rot_ra+yc+180)%360 - 180, # axis lon
(360-sc_rot_cnca+zc)%360 - 180, # rotation
)
else:
sc = SkyCoord(ra=sc_rot_ra*units.deg, dec=sc_rot_dec*units.deg,
frame='icrs', obstime='J2000')
sc = sc.transform_to('barycentrictrueecliptic')
assert sc.lat.value+xc<90 and sc.lat.value+xc>-90, 'bad correction'
sm.spacecraft_rot = (
sc.lat.value+xc, # axis lat
(sc.lon.value+yc+180)%360 - 180, # axis lon
(sc_rot_cnca+zc+180)%360 - 180, # rotation
)
sm.real_spacecraft_rot = sm.spacecraft_rot
## set spacecraft position
##
if USE_ICRS:
sc_ast_ec_p = np.array([sc_ast_x, sc_ast_y, sc_ast_z])
else:
sc = SkyCoord(x=sc_ast_x, y=sc_ast_y, z=sc_ast_z, unit='km', frame='icrs',
representation_type='cartesian', obstime='J2000')\
.transform_to('barycentrictrueecliptic')\
.represent_as('cartesian')
sc_ast_ec_p = np.array([sc.x.value, sc.y.value, sc.z.value])
sm.asteroid.real_position = sun_sc_ec_p + sc_ast_ec_p
# s/c orientation
sco = list(map(math.radians, sm.spacecraft_rot))
scoq = tools.ypr_to_q(*sco)
# project old position to new base vectors
sc2gl_q = sm.frm_conv_q(sm.SPACECRAFT_FRAME, sm.OPENGL_FRAME)
scub = tools.q_to_unitbase(scoq * sc2gl_q)
scub_o = tools.q_to_unitbase(scoq)
sc_ast_p = scub.dot(sc_ast_ec_p.transpose())
sm.real_spacecraft_pos = sc_ast_p
# if USE_IMG_LABEL_FOR_SC_POS:
# sm.spacecraft_pos = sc_ast_p
##
## done setting spacecraft position
# use calculated asteroid axis as real axis
sm.asteroid_rotation_from_model()
sm.real_asteroid_axis = sm.asteroid_axis
sm.asteroid.real_sc_ast_vertices = sm.sc_asteroid_vertices(real=True)
    # Workaround: time.value and time.real_value are assigned the same number
    # above, yet occasionally drift apart by a tiny float error; resync them.
    if not np.isclose(float(Decimal(sm.time.value) - Decimal(sm.time.real_value)), 0):
        sm.time.real_value = sm.time.value
        if DEBUG:
            print('warning: sm.time.value and sm.time.real_value drifted apart; resynced')
    if False:
        print((''
               + '\nsco:\n%s\n'
               + '\nscoq:\n%s\n'
               + '\nscub_o:\n%s\n'
               + '\nscub:\n%s\n'
               + '\nsc_ast_ec_p:\n%s\n'
               + '\nsc_ast_p:\n%s\n'
               ) % (
            sco,
            scoq,
            scub_o,
            scub,
            sc_ast_ec_p,
            sc_ast_p,
        ))
if DEBUG:
lbl_sun_ast_v = (sun_sc_ec_p+sc_ast_ec_p)*1e3
lbl_se, lbl_dir = tools.solar_elongation(lbl_sun_ast_v, scoq)
m_elong, m_dir = sm.solar_elongation()
mastp = sm.asteroid.position(sm.time.value)
print((
'solar elongation (deg), file: %.1f (%.1f), model: %.1f\n'
+ 'light direction (deg), file: %s, model: %s\n'
+ 'sun-asteroid loc (Gm), file: %s, model: %s\n'
) % (
solar_elongation, math.degrees(lbl_se), math.degrees(m_elong),
math.degrees(lbl_dir), math.degrees(m_dir),
lbl_sun_ast_v*1e-9, (mastp)*1e-9,
))
sm.save_state('none',printout=True)
#quit()
## Impossible to calculate asteroid rotation axis based on given data!!
## TODO: Or is it? Can use some help data from model.AsteroidModel?
## These should be used: ast_sc_lat, ast_sc_lon
#FOR TARGET_IMAGE = 'ROS_CAM1_20150720T113057', this seems to be a perfect match:
#system state:
# ast_x_rot = -74.81 in [-90.00, 90.00]
# ast_y_rot = -94.82 in [-180.00, 180.00]
# ast_z_rot = 138.96 in [-180.00, 180.00]
# time = 1437391848.27 in [1437376452.06, 1437407263.16]
# x_off = -0.54 in [-4.53, 3.45]
# x_rot = -29.99 in [-90.00, 90.00]
# y_off = 2.68 in [-1.34, 6.64]
# y_rot = 122.54 in [-180.00, 180.00]
# z_off = -170.19 in [-1280.00, -16.00]
# z_rot = -103.10 in [-180.00, 180.00]
#
#solar elongation: (94.07914335833404, 87.37274850492905)
#
#asteroid rotation: 104.05
#
#[main]
#ast_x_rot = -73.584
#ast_y_rot = -92.664
#ast_z_rot = 144.216
#time = 1437391848.267516
#x_off = -0.526339523473
#x_rot = -29.628
#y_off = 2.61886006801
#y_rot = 121.824
#z_off = -170.55296469020652
#z_rot = -104.544
#
#[real]
#x_off = 0.545467755596
#y_off = 2.48761450039
#z_off = -170.626950251
[source: src/iotools/lblloader.py (slamajakub/visnav-py, MIT)]
#!/usr/bin/env python
"""
RS 2018/04/24: Visualization of samples from geological prior/posterior
This rips off many visualization elements from bin/python/visWorld.
Looked short enough, and important enough, to be worth rewriting.
"""
import argparse
import numpy as np
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
import visvis as vv
plt.ioff()
vv.settings.preferredBackEnd = 'pyside'
vv.settings.figureSize=(560,420)
rockprop_names = ['Density', 'LogSusceptibility', 'ThermalConductivity',
'ThermalProductivity', 'LogResistivityX', 'LogResistivityY',
'LogResistivityZ', 'ResistivityPhase', 'PWaveVelocity']
class MasonView(object):
"""
Holds geological data.
"""
def __init__(self, npzfname):
"""
:param inputFile: numpy NPZ file from mason
"""
# Load all the base properties from the file
raw = np.load(npzfname)
xyzres = np.array(raw['resolution'].flatten(), dtype=int)
self.xres = xyzres[0]
self.yres = xyzres[1]
self.zres = xyzres[2]
self.xbounds = raw['x_bounds'].flatten()
self.ybounds = raw['y_bounds'].flatten()
self.zbounds = raw['z_bounds'].flatten()
self.layers = [raw[key] for key in raw if key.startswith('layer')]
self.fbounds = [raw[key] for key in raw if key.startswith('boundary')]
self.rockprops = { key: raw[key] for key in rockprop_names }
self.samples = self.fbounds[0].shape[0]
# Reshape to associate samples with 3-D voxel grids
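        # (column-major 'f' order; resulting axes are (sample, z, y, x) for
        # voxel arrays and (sample, y, x) for boundary surfaces)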
newshapeVox = np.concatenate([[-1], xyzres[::-1]])
newshapeSurf = np.concatenate([[-1], xyzres[1::-1]])
self.layers = [k.reshape(newshapeVox, order='f') for k in self.layers]
self.fbounds = [k.reshape(newshapeSurf, order='f') for k in self.fbounds]
self.rockprops = { k: v.reshape(newshapeVox, order='f')
for k, v in self.rockprops.items() }
def add_samples(self, other):
"""
:param other: MasonView object with same shape as this one
"""
        # Check grid compatibility before concatenating the sample arrays
        assert (self.xres, self.yres, self.zres) == \
               (other.xres, other.yres, other.zres), 'grid resolutions differ'
        npc = np.concatenate
for i in range(len(self.layers)):
self.layers[i] = npc([self.layers[i], other.layers[i]])
for i in range(len(self.fbounds)):
self.fbounds[i] = npc([self.fbounds[i], other.fbounds[i]])
for k in self.rockprops:
self.rockprops[k] = npc([self.rockprops[k], other.rockprops[k]])
def show_layer_boundaries(self, sample):
"""
Displays a 3-D rendering of boundary surfaces.
:param sample: index of sample for which to plot boundaries
"""
app = vv.use()
vv.figure(1)
X = np.linspace(self.xbounds[0], self.xbounds[1], self.xres)
        Y = np.linspace(self.ybounds[0], self.ybounds[1], self.yres)
        Z = np.linspace(self.zbounds[0], self.zbounds[1], self.zres)
vv.xlabel('Eastings (m)')
vv.ylabel('Northings (m)')
vv.zlabel('Depth (m)')
a = vv.gca()
a.camera.fov = 70
a.daspect = 1, 1, -1
for i in range(len(self.layers)):
C = plt.cm.jet(i/float(len(self.layers)))
C = np.array([[[C[0], C[1], C[2]]]])
m = vv.surf(X, Y, self.fbounds[i][sample], C)
vv.ColormapEditor(a)
app.Run()
def rockprop(self, prop, sample):
"""
Returns a voxelization of a rock property for a single sample.
"""
return self.rockprops[prop][sample]
def layerprop(self, layer, sample):
"""
Returns a voxelization of layer membership for a single sample.
"""
return self.layers[layer][sample]
def meanrockprop(self, prop):
"""
Returns a mean voxelized rock property over all samples.
"""
return np.mean(self.rockprops[prop], axis=0)
def meanlayer(self, layer):
"""
Returns a mean voxelized layer membership over all samples.
"""
return np.mean(self.layers[layer], axis=0)
def meanlayer_all(self):
"""
Returns a mean voxelized layer membership over all samples.
"""
layidx = np.arange(len(self.layers))
laywtd = [i*np.mean(self.layers[i], axis=0) for i in layidx]
return np.sum(laywtd, axis=0)
def show_vox(self, vfunc):
"""
Displays a 3-D rendering of a voxelized property.
:param vfunc: function accepting this MasonView instance and
returning a 3-D np.array of some voxelized property
"""
app = vv.use()
vv.figure(1)
vv.xlabel('Eastings (units)')
vv.ylabel('Northings (units)')
vv.zlabel('Depth (units)')
a = vv.gca()
a.camera.fov = 70
a.daspect = 1, 1, -1
vox = vfunc(self)
t = vv.volshow(vox, cm=vv.CM_JET, renderStyle='ray')
vv.ColormapEditor(a)
app.Run()
if __name__ == "__main__":
print("Initializing from", args.npzfname[0])
view = MasonView(args.npzfname[0])
for fn in args.npzfname[1:]:
print("Adding samples from", fn)
view.add_samples(MasonView(fn))
for i in range(len(view.layers)):
view.show_vox(lambda v: MasonView.meanlayer(v, i))
[source: scripts/geovis_notebook_version.py (divad-nhok/obsidian_fork, MIT)]
#!/usr/bin/env python3
"""
summarize orthologer output
"""
import sys, os
import numpy as np
def count_genes(counts, genes):
index = 0
for gene in genes:
if gene != '-':
counts[index] += 1
index += 1
return counts
def count_orthologs(counts, genes):
index = 1
if genes[0] != '-':
for gene in genes[1:]:
if gene != '-':
counts[index] += 1
index += 1
return counts
def append_scores(scores, current):
    current = current[0].split(' ** ')
    # enumerate keeps score positions aligned with genome indices even when a
    # genome has no ortholog ('-') for this gene
    for index, score in enumerate(current):
        if score == '!':
            scores[index].append(100)
        elif score.startswith('-'):
            continue
        else:
            pident = float(score.split()[2])
            scores[index].append(pident)
    return scores
def print_summary(genomes, genes, orthologs, pident):
if len(genomes) > 2:
header = ['# genome', 'genes', 'orthologs', 'average percent identity of orthologs']
print('\t'.join(header))
index = 0
for genome in genomes:
out = [genome]
out.append(genes[index])
out.append(orthologs[index])
out.append(pident[index])
out = [str(i) for i in out]
print('\t'.join(out))
index += 1
else:
header = ['# query genome', 'genes in query', 'reference genome', 'genes in reference', 'number of orthologs', 'average percent identity of orthologs']
print('\t'.join(header))
out = [genomes[0], genes[0], genomes[1], genes[1], orthologs[1], pident[1]]
out = [str(i) for i in out]
print('\t'.join(out))
def summarize(file):
switch = 0
for line in file:
line = line.strip().split('\t')
if line[0].startswith('### output'):
switch = 1
continue
if switch == 0:
continue
if len(line) == 1:
continue
if line[0].startswith('#'):
line[0] = line[0].split('# ')[1]
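            # fields repeat in groups of three per genome: gene ids at offset 0
            # (consumed by count_genes/count_orthologs) and match/score strings
            # at offset 1 (consumed by append_scores); hence the [::3] and
            # [1::3] slices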
genomes = line[::3]
gene_counts = [0 for i in genomes]
ortholog_counts = [0 for i in genomes]
scores = [[] for i in genomes]
continue
gene_counts = count_genes(gene_counts, line[::3])
ortholog_counts = count_orthologs(ortholog_counts, line[::3])
scores = append_scores(scores, line[1::3])
average_pident = [np.average(i) for i in scores]
return genomes, gene_counts, ortholog_counts, average_pident
if __name__ == '__main__':
if len(sys.argv) != 2:
print('usage: orthologer_summary.py <orthologer_output.tsv or - if from stdin>')
exit()
file = sys.argv[1]
if file == '-':
file = sys.stdin
else:
file = open(file)
genomes, gene_counts, ortholog_counts, average_pident = summarize(file)
print_summary(genomes, gene_counts, ortholog_counts, average_pident)
[source: ctbBio/orthologer_summary.py (christophertbrown/bioscripts, MIT)]
using ForwardDiff, UnPack, LinearAlgebra, Printf
using DiffEqBase: DiffCache, get_tmp, dualcache
mutable struct RALΛ{L <: Function, LC}
Λ::L
cache::LC
end
function RALΛ(Λ::Function, z::C1, matrix_type::DataType, dims::Tuple{Int, Int}) where {C1 <: AbstractVector{<: Number}}
cache = matrix_type(undef, 0, 0) # Create empty matrix first, just to check if Λ is in place or not
if applicable(Λ, cache, z)
cache = matrix_type(undef, dims)
Λnew = function _Λ_ip(cache::LCN, z::C1N) where {LCN <: DiffCache, C1N <: AbstractVector{<: Number}}
Λ(get_tmp(cache, z), z)
return get_tmp(cache, z)
end
return RALΛ(Λnew, dualcache(cache, Val{length(z)}))
else
        Λnew = function _Λ_oop(cache::LCN, z::C1N) where {LCN <: Nothing, C1N <: AbstractVector{<: Number}}
            return Λ(z)
        end
        return RALΛ(Λnew, nothing)
end
end
function RALΛ(Λin::LC, z::C1) where {LC <: AbstractMatrix{<: Number}, C1 <: AbstractVector{<: Number}}
Λ(cache::LCN, z::C1N) where {LCN <: AbstractMatrix{<: Number}, C1N <: AbstractVector{<: Number}} = cache
return RALΛ{Function, LC}(Λ, Λin)
end
function (ralλ::RALΛ)(z::C1) where {C1 <: AbstractVector{<: Number}}
return ralλ.Λ(ralλ.cache, z)
end
mutable struct RALΣ{S <: Function, SC}
Σ::S
cache::SC
end
function RALΣ(Σ::Function, z::C1, matrix_type::DataType, dims::Tuple{Int, Int}) where {C1 <: AbstractVector{<: Number}}
cache = matrix_type(undef, 0, 0)
if applicable(Σ, cache, z)
cache = matrix_type(undef, dims)
Σnew = function _Σ_ip(cache::SCN, z::C1N) where {SCN <: DiffCache, C1N <: AbstractVector{<: Number}}
du = get_tmp(cache, z)
Σ(du, z)
return du
end
return RALΣ(Σnew, dualcache(cache, Val{length(z)}))
else
Σnew = function _Σ_oop(cache::SCN, z::C1N) where {SCN <: Nothing, C1N <: AbstractVector{<: Number}}
return Σ(z)
end
return RALΣ(Σnew, nothing)
end
end
function RALΣ(Σin::SC, z::C1) where {SC <: AbstractMatrix{<: Number}, C1 <: AbstractVector{<: Number}}
Σ(cache::SCN, z::C1N) where {SCN <: AbstractMatrix{<: Number}, C1N <: AbstractVector{<: Number}} = cache
return RALΣ{Function, SC}(Σ, Σin)
end
function (ralσ::RALΣ)(z::C1) where {C1 <: AbstractVector{<: Number}}
return ralσ.Σ(ralσ.cache, z)
end
mutable struct RALNonlinearSystem{M <: Function, L <: RALΛ, S <: RALΣ, X <: Function, V <: Function,
VC1 <: AbstractVector{<: Number}, VC2 <: AbstractVector{<: Number}, VC3 <: AbstractVector{<: Number}}
μ::M # Functions
Λ::L # no type assertion for L b/c it can be Function or Matrix of zeros
Σ::S # no type assertion for S b/c it can be Function or constant Matrix
ξ::X
𝒱::V
μ_sss::VC1 # Stochastic steady state values, for caching
ξ_sss::VC2
𝒱_sss::VC3
inplace::NamedTuple{(:μ, :ξ, :𝒱), NTuple{3, Bool}}
end
function RALNonlinearSystem(μ::M, Λ::L, Σ::S, ξ::X, 𝒱::V, μ_sss::VC1, ξ_sss::VC2, 𝒱_sss::VC3,
z::C1, y::C1, Ψ::C2, Γ₅::JC5, Γ₆::JC6) where {M <: Function, L <: RALΛ, S <: RALΣ, X <: Function, V <: Function,
VC1 <: AbstractVector{<: Number}, VC2 <: AbstractVector{<: Number},
VC3 <: AbstractVector{<: Number},
C1 <: AbstractVector{<: Number}, C2 <: AbstractMatrix{<: Number},
JC5 <: AbstractMatrix{<: Number}, JC6 <: AbstractMatrix{<: Number}}
inplace = (μ = applicable(μ, μ_sss, z, y), ξ = applicable(ξ, ξ_sss, z, y), 𝒱 = applicable(𝒱, 𝒱_sss, z, Ψ, Γ₅, Γ₆))
return RALNonlinearSystem{M, L, S, X, V, VC1, VC2, VC3}(μ, Λ, Σ, ξ, 𝒱, μ_sss, ξ_sss, 𝒱_sss, inplace)
end
function update!(m::RALNonlinearSystem, z::C1, y::C1, Ψ::C2,
Γ₅::JC5, Γ₆::JC6) where {C1 <: AbstractVector{<: Number}, C2 <: AbstractMatrix{<: Number},
JC5 <: AbstractMatrix{<: Number}, JC6 <: AbstractMatrix{<: Number}}
if m.inplace[:μ]
m.μ(m.μ_sss, z, y)
else
m.μ_sss .= m.μ(z, y)
end
if m.inplace[:ξ]
m.ξ(m.ξ_sss, z, y)
else
m.ξ_sss .= m.ξ(z, y)
end
if m.inplace[:𝒱]
m.𝒱(m.𝒱_sss, z, Ψ, Γ₅, Γ₆)
else
m.𝒱_sss .= m.𝒱(z, Ψ, Γ₅, Γ₆)
end
m
end
mutable struct RALLinearizedSystem{Mz <: Function, My <: Function, Xz <: Function, Xy <: Function, J <: Function,
JC1 <: AbstractMatrix{<: Number}, JC2 <: AbstractMatrix{<: Number},
JC3 <: AbstractMatrix{<: Number}, JC4 <: AbstractMatrix{<: Number},
JC5 <: AbstractMatrix{<: Number}, JC6 <: AbstractMatrix{<: Number},
JC7 <: AbstractMatrix{<: Number}}
μz::Mz # Functions
μy::My
ξz::Xz
ξy::Xy
J𝒱::J
Γ₁::JC1 # Jacobians, for caching
Γ₂::JC2
Γ₃::JC3
Γ₄::JC4
Γ₅::JC5
Γ₆::JC6
JV::JC7
inplace::NamedTuple{(:μz, :μy, :ξz, :ξy, :J𝒱), NTuple{5, Bool}}
end
function RALLinearizedSystem(μz::Mz, μy::My, ξz::Xz, ξy::Xy, J𝒱::J,
Γ₁::JC1, Γ₂::JC2, Γ₃::JC3, Γ₄::JC4, Γ₅::JC5, Γ₆::JC6,
JV::JC7, z::C1, y::C1, Ψ::C2,
μ_sss::VC1, ξ_sss::VC2, 𝒱_sss::VC3) where {Mz <: Function, My <: Function, Xz <: Function,
Xy <: Function, J <: Function,
JC1 <: AbstractMatrix{<: Number}, JC2 <: AbstractMatrix{<: Number},
JC3 <: AbstractMatrix{<: Number}, JC4 <: AbstractMatrix{<: Number},
JC5 <: AbstractMatrix{<: Number}, JC6 <: AbstractMatrix{<: Number},
JC7 <: AbstractMatrix{<: Number},
C1 <: AbstractVector{<: Number}, C2 <: AbstractMatrix{<: Number},
VC1 <: AbstractVector{<: Number}, VC2 <: AbstractVector{<: Number},
VC3 <: AbstractVector{<: Number},}
inplace = (μz = applicable(μz, Γ₁, z, y, μ_sss), μy = applicable(μy, Γ₂, z, y, μ_sss), ξz = applicable(ξz, Γ₃, z, y, ξ_sss),
ξy = applicable(ξy, Γ₄, z, y, ξ_sss), J𝒱 = applicable(J𝒱, JV, z, Ψ, Γ₅, Γ₆, 𝒱_sss))
return RALLinearizedSystem(μz, μy, ξz, ξy, J𝒱, Γ₁, Γ₂, Γ₃, Γ₄, Γ₅, Γ₆, JV, inplace)
end
function update!(m::RALLinearizedSystem, z::C1, y::C1, Ψ::C2,
μ_sss::VC1, ξ_sss::VC2, 𝒱_sss::VC3) where {C1 <: AbstractVector{<: Number}, C2 <: AbstractMatrix{<: Number},
VC1 <: AbstractVector{<: Number}, VC2 <: AbstractVector{<: Number},
VC3 <: AbstractVector{<: Number}}
if m.inplace[:μz]
m.μz(m.Γ₁, z, y, μ_sss)
else
m.μz(m.Γ₁, z, y)
end
if m.inplace[:μy]
m.μy(m.Γ₂, z, y, μ_sss)
else
m.μy(m.Γ₂, z, y)
end
if m.inplace[:ξz]
m.ξz(m.Γ₃, z, y, ξ_sss)
else
m.ξz(m.Γ₃, z, y)
end
if m.inplace[:ξy]
m.ξy(m.Γ₄, z, y, ξ_sss)
else
m.ξy(m.Γ₄, z, y)
end
if m.inplace[:J𝒱]
m.J𝒱(m.JV, z, Ψ, m.Γ₅, m.Γ₆, 𝒱_sss)
else
m.J𝒱(m.JV, z, Ψ, m.Γ₅, m.Γ₆)
end
m
end
abstract type AbstractRiskAdjustedLinearization end
"""
RiskAdjustedLinearization(μ, Λ, Σ, ξ, Γ₅, Γ₆, 𝒱, Nz, Ny, Nε)
Creates a first-order perturbation around the stochastic steady state ``(z, y)`` of a discrete-time dynamic model.
(TODO: Move more of the formality to documentation, and make this shorter and concise, w/out explanation of matrix equations)
The affine approximation of the model is
``math
\\begin{aligned}
\\mathbb{E}[z_{t + 1}] & = \\mu(z, y) + \\Gamma_1(z_t - z) + \\Gamma_2(y_t - y)\\\\
0 & = \\xi(z, y) + \\Gamma_3(z_t - z) + \\Gamma_4(y_t - y) + \\Gamma_5 \\mathbb{E}_t z_{t + 1} + \\Gamma_6 \\mathbb{E}_t y_{t + 1} + \\mathscr{V}(z) + J\\mathscr{V}(z)(z_t - z),
\\end{aligned}
``
where ``\\Gamma_1, \\Gamma_2`` are the Jacobians of ``\\mu`` with respect to ``z_t`` and ``y_t``, respectively;
``\\Gamma_3, \\Gamma_4`` are the Jacobians of ``\\xi`` with respect to ``z_t`` and ``y_t``, respectively;
``\\Gamma_5, \\Gamma_6`` are constant matrices; ``\\mathscr{V}(z)`` is the model's entropy;
``J\\mathscr{V}(z)`` is the Jacobian of the entropy;
and the state variables ``z_t`` and jump variables ``y_t`` follow
``math
\\begin{aligned}
z_{t + 1} & = z + \\Gamma_1(z_t - z) + \\Gamma_2(y_t - y) + (I_{n_z} - \\Lambda(z_t) \\Psi)^{-1}\\Sigma(z_t)\\varepsilon_{t + 1},\\\\
y_t & = y + \\Psi(z_t - z)
\\end{aligned}
``
The unknowns ``(z, y, \\Psi)`` solve the system of equations
``math
\\begin{aligned}
0 & = \\mu(z, y) - z,\\\\
0 & = \\xi(z, y) + \\Gamma_5 z + \\Gamma_6 y + \\mathscr{V}(z),\\\\
0 & = \\Gamma_3 + \\Gamma_4 \\Psi + (\\Gamma_5 + \\Gamma_6 \\Psi)(\\Gamma_1 + \\Gamma_2 \\Psi) + J\\mathscr{V}(z).
\\end{aligned}
``
(TODO: Move the nonlinear model statement to documentation)
The true nonlinear equations defining model are assumed to take the form
``math
\\begin{aligned}
z_{t + 1} & = \\mu(z_t, y_t) + \\Lambda(z_t)(y_{t + 1} - \\mathbb{E}_t y_{t + 1}) + \\Sigma(z_t) \\varepsilon_{t + 1},\\\\
0 & = \\log\\mathbb{E}_t[\\exp(\\xi(z_t, y_t) + \\Gamma_5 z_{t + 1} + \\Gamma_6 y_{t + 1})].
\\end{aligned}
``
The vectors ``z_t\\in \\mathbb{R}^{n_z}`` and ``y_t \\in \\mathbb{R}^{n_y}`` are the state and jump variables, respectively.
The first vector equation comprises the transition equations of the state variables. The second
vector equation comprises the model's expectational equations, which are typically
the first-order conditions for the jump variables from agents' optimization problem. The exogenous shocks
``\\varepsilon\\in\\mathbb{R}^{n_\\varepsilon}`` form a martingale difference sequence whose distribution
is described by the differentiable, conditional cumulant generating function (ccgf)
``math
\\begin{aligned}
\\kappa[\\alpha(z_t) \\mid z_t] = \\log\\mathbb{E}_t[\\exp(\\alpha(z_t)' \\varepsilon_{t + 1})],\\quad \\text{ for any differentiable map }\\alpha:\\mathbb{R}^{n_z}\\rightarrow\\mathbb{R}^{n_\\varepsilon}.
\\end{aligned}
``
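For example, if ``\\varepsilon_{t + 1}`` is a standard multivariate Gaussian, then
``\\kappa[\\alpha(z_t) \\mid z_t] = \\alpha(z_t)'\\alpha(z_t)/2``.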
The functions
``math
\\begin{aligned}
\\xi:\\mathbb{R}^{2n_y + 2n_z}\\rightarrow \\mathbb{R}^{n_y},& \\quad \\mu:\\mathbb{R}^{n_y + n_z}\\rightarrow \\mathbb{R}^{n_z},\\\\
\\Lambda:\\mathbb{R}^{n_z} \\rightarrow \\mathbb{R}^{n_z \\times n_y}, & \\quad \\Sigma:\\mathbb{R}^{n_z}\\rightarrow \\mathbb{R}^{n_z\\times n_\\varepsilon}
\\end{aligned}
``
are differentiable. The first two functions characterize the effects of time ``t`` variables on the expectational and
state transition equations. The function ``\\Lambda`` characterizes heteroskedastic endogenous risk that depends on
innovations in jump variables while the function ``\\Sigma`` characterizes exogenous risk.
Refer to Lopez et al. (2018), "Risk-Adjusted Linearizations of Dynamic Equilibrium Models," for details.
"""
mutable struct RiskAdjustedLinearization{A <: RALNonlinearSystem, B <: RALLinearizedSystem,
C1 <: AbstractVector{<: Number}, C2 <: AbstractMatrix{<: Number}} <: AbstractRiskAdjustedLinearization
nonlinear::A
linearization::B
z::C1 # Coefficients
y::C1
Ψ::C2
Nz::Int # Dimensions
Ny::Int
Nε::Int
end
# TODO
# 1. Update the printing; maybe just write out "risk-adjusted linearization with dimensions ()"
#
# 2. Test update! functions for the various blocks as well as access functions for RiskAdjustedLinearization
#
# 3. Check inplace inference is correct, check construction of each block plus main block
#=
TODO: Finish this once the final struct is completed
# A series of lower level constructors
function RiskAdjustedLinearization(μ::M, Λ::L, Σ::S, ξ::X, 𝒱::V, μz::Mz, μy::My, ξz::Xz, ξy::Xy, J𝒱::J,
μ_sss::AbstractVector{T}, ξ_sss::AbstractVector{T}, 𝒱_sss::AbstractVector{T},
Γ₁::AbstractMatrix{T}, Γ₂::AbstractMatrix{T}, Γ₃::AbstractMatrix{T}
Γ₄::AbstractMatrix{T}, Γ₅::AbstractMatrix{T}, Γ₆::AbstractMatrix{T},
JV::AbstractMatrix{T}, z::AbstractVector{T}, y::AbstractVector{T}, Ψ::AbstractMatrix{T},
Nε::Int = -1) where {T <: Number, M <: Function, L,
S, X <: Function, V <: Function,
Mz <: Function, My <: Function, Xz <: Function,
Xy <: Function, J <: Function}
Nz = length(z)
Ny = length(y)
if Nε < 0
Nε = size(Σ(z), 2)
end
return RiskAdjustedLinearization{T, M, L, S, X, V, J}(μ, Λ, Σ, ξ, 𝒱, J𝒱, μ_sss, ξ_sss, 𝒱_sss,
Γ₁, Γ₂, Γ₃, Γ₄, Γ₅, Γ₆,
JV, z, y, Ψ, Nz, Ny, Nε)
end
function RiskAdjustedLinearization(μ::M, Λ::L, Σ::S, ξ::X, 𝒱::V, μz::Mz, μy::My, ξz::Xz, ξy::Xy, J𝒱::J,
Γ₁::AbstractMatrix{T}, Γ₂::AbstractMatrix{T}, Γ₃::AbstractMatrix{T}
Γ₄::AbstractMatrix{T}, Γ₅::AbstractMatrix{T}, Γ₆::AbstractMatrix{T},
JV::AbstractMatrix{T}, z::AbstractVector{T}, y::AbstractVector{T}, Ψ::AbstractMatrix{T},
Nε::Int = -1) where {T <: Number, M <: Function, L,
S, X <: Function, V <: Function,
Mz <: Function, My <: Function, Xz <: Function,
Xy <: Function, J <: Function}
Nz = length(z)
Ny = length(y)
if Nε < 0
Nε = size(Σ(z), 2)
end
# Cache stochastic steady state vectors
μ_sss, ξ_sss, 𝒱_sss = _cache_sss_vectors(z, y)
return RiskAdjustedLinearization{T, M, L, S, X, V, J}(μ, Λ, Σ, ξ, 𝒱, J𝒱, μ_sss, ξ_sss, 𝒱_sss,
Γ₁, Γ₂, Γ₃, Γ₄, Γ₅, Γ₆,
JV, z, y, Ψ, Nz, Ny, Nε)
end
function RiskAdjustedLinearization(μ::M, Λ::L, Σ::S, ξ::X, 𝒱::V, μz::Mz, μy::My, ξz::Xz, ξy::Xy, J𝒱::J,
z::AbstractVector{T}, y::AbstractVector{T}, Ψ::AbstractMatrix{T},
Nε::Int = -1) where {T <: Number, M <: Function, L,
S, X <: Function, V <: Function,
Mz <: Function, My <: Function, Xz <: Function,
Xy <: Function, J <: Function}
# Get dimensions
Nz = length(z)
Ny = length(y)
if Nε < 0
Nε = size(Σ(z), 2)
end
# Cache stochastic steady state vectors
μ_sss, ξ_sss, 𝒱_sss = _cache_sss_vectors(z, y)
# Cache stochastic steady state Jacobians
Γ₁, Γ₂, Γ₃, Γ₄, Γ₅, Γ₆, JV = _cache_jacobians(Ψ, Nz, Ny)
return RiskAdjustedLinearization{T, M, L, S, X, V, J}(μ, Λ, Σ, ξ, 𝒱, J𝒱, μ_sss, ξ_sss, 𝒱_sss,
Γ₁, Γ₂, Γ₃, Γ₄, Γ₅, Γ₆,
JV, z, y, Ψ, Nz, Ny, Nε)
end
=#
function RiskAdjustedLinearization(nonlinear::A, linearization::B, z::C1, y::C1, Ψ::C2,
Nz::Int, Ny::Int, Nε::Int;
check_inputs::Bool = true) where {A <: RALNonlinearSystem, B <: RALLinearizedSystem,
C1 <: AbstractVector{<: Number}, C2 <: AbstractMatrix{<: Number}}
# Make sure inputs are well-formed
if check_inputs
_check_inputs(nonlinear, linearization, z, y, Ψ)
end
return RiskAdjustedLinearization{A, B, C1, C2}(nonlinear, linearization, z, y, Ψ, Nz, Ny, Nε)
end
# Constructor that uses ForwardDiff to calculate Jacobian functions
# NOTE THAT here we pass in the ccgf, rather than 𝒱
function RiskAdjustedLinearization(μ::M, Λ::L, Σ::S, ξ::X, Γ₅::JC5, Γ₆::JC6, ccgf::CF,
z::AbstractVector{T}, y::AbstractVector{T}, Ψ::AbstractMatrix{T},
Nz::Int, Ny::Int, Nε::Int; sss_vector_type::DataType = Vector{T},
jacobian_type::DataType = Matrix{T}) where {T <: Number, M <: Function, L <: RALΛ, S <: RALΣ,
X <: Function,
JC5 <: AbstractMatrix{<: Number},
JC6 <: AbstractMatrix{<: Number},
CF <: Function}
# Cache stochastic steady state vectors
μ_sss, ξ_sss, 𝒱_sss = _cache_sss_vectors(z, y)
# Cache stochastic steady state Jacobians
Γ₁, Γ₂, Γ₃, Γ₄, JV = _cache_jacobians(Ψ, Nz, Ny, jacobian_type)
# Use cached Jacobians to create Jacobian functions for μ, ξ
if applicable(μ, z, y) # Check if μ is in place or not
μz = (F, z, y) -> ForwardDiff.jacobian!(F, x -> μ(x, y), z) # not in place
μy = (F, z, y) -> ForwardDiff.jacobian!(F, x -> μ(z, x), y)
else # in place
μz = (F, z, y, μ_sss) -> ForwardDiff.jacobian!(F, (G, x) -> μ(G, x, y), μ_sss, z)
μy = (F, z, y, μ_sss) -> ForwardDiff.jacobian!(F, (G, x) -> μ(G, z, x), μ_sss, y)
end
if applicable(ξ, z, y) # Check if ξ is in place or not
ξz = (F, z, y) -> ForwardDiff.jacobian!(F, x -> ξ(x, y), z) # not in place
ξy = (F, z, y) -> ForwardDiff.jacobian!(F, x -> ξ(z, x), y)
else # in place
ξz = (F, z, y, ξ_sss) -> ForwardDiff.jacobian!(F, (G, x) -> ξ(G, x, y), ξ_sss, z)
ξy = (F, z, y, ξ_sss) -> ForwardDiff.jacobian!(F, (G, x) -> ξ(G, z, x), ξ_sss, y)
end
# Create 𝒱 and its Jacobian J𝒱
if applicable(ccgf, Γ₅, z) # Check if ccgf is in place or not
𝒱 = function _𝒱(F, z, Ψ, Γ₅, Γ₆)
F .= ccgf((Γ₅ + Γ₆ * Ψ) * ((I - Λ(z) * Ψ) \ Σ(z)), z)
end
else # in place
𝒱 = (F, z, Ψ, Γ₅, Γ₆) -> ccgf(F, (Γ₅ + Γ₆ * Ψ) * ((I - Λ(z) * Ψ) \ Σ(z)), z)
end
J𝒱 = function _J𝒱(F, z, Ψ, Γ₅, Γ₆, 𝒱_sss)
ForwardDiff.jacobian!(F, (G, x) -> 𝒱(G, x, Ψ, Γ₅, Γ₆), 𝒱_sss, z)
end
# Form underlying RAL blocks
nonlinear_system = RALNonlinearSystem(μ, Λ, Σ, ξ, 𝒱, μ_sss, ξ_sss, 𝒱_sss, z, y, Ψ, Γ₅, Γ₆)
linearized_system = RALLinearizedSystem(μz, μy, ξz, ξy, J𝒱, Γ₁, Γ₂, Γ₃, Γ₄, Γ₅, Γ₆, JV, z, y, Ψ, μ_sss, ξ_sss, 𝒱_sss)
return RiskAdjustedLinearization(nonlinear_system, linearized_system, z, y, Ψ, Nz, Ny, Nε)
end
function RiskAdjustedLinearization(μ::M, Λ::L, Σ::S, ξ::X, Γ₅::JC5, Γ₆::JC6, ccgf::CF,
z::AbstractVector{T}, y::AbstractVector{T}, Ψ::AbstractMatrix{T},
Nε::Int; sss_vector_type::DataType = Vector{T}, sss_matrix_type::DataType = Matrix{T},
jacobian_type::DataType = Matrix{T}) where {T <: Number, M <: Function, L <: Function, S <: Function,
X <: Function,
JC5 <: AbstractMatrix{<: Number},
JC6 <: AbstractMatrix{<: Number},
CF <: Function}
# Get dimensions
Nz = length(z)
Ny = length(y)
if Nε < 0
error("Nε cannot be negative")
end
# Create wrappers enabling caching for Λ and Σ
Λ = RALΛ(Λ, z, sss_matrix_type, (Nz, Ny))
Σ = RALΣ(Σ, z, sss_matrix_type, (Nz, Nε))
return RiskAdjustedLinearization(μ, Λ, Σ, ξ, Γ₅, Γ₆, ccgf, z, y, Ψ, Nz, Ny, Nε, sss_vector_type = sss_vector_type,
jacobian_type = jacobian_type)
end
function RiskAdjustedLinearization(μ::M, Λ::L, Σ::S, ξ::X, Γ₅::JC5, Γ₆::JC6, ccgf::CF,
z::AbstractVector{T}, y::AbstractVector{T}, Ψ::AbstractMatrix{T},
Nε::Int = -1; sss_vector_type::DataType = Vector{T}, sss_matrix_type::DataType = Matrix{T},
jacobian_type::DataType = Matrix{T}) where {T <: Number, M <: Function, L <: AbstractMatrix{<: Number}, S <: Function,
X <: Function,
JC5 <: AbstractMatrix{<: Number},
JC6 <: AbstractMatrix{<: Number},
CF <: Function}
# Get dimensions
Nz = length(z)
Ny = length(y)
if Nε < 0
error("Nε cannot be negative")
end
# Create wrappers enabling caching for Λ and Σ
Λ = RALΛ(Λ, z)
Σ = RALΣ(Σ, z, sss_matrix_type, (Nz, Nε))
return RiskAdjustedLinearization(μ, Λ, Σ, ξ, Γ₅, Γ₆, ccgf, z, y, Ψ, Nz, Ny, Nε, sss_vector_type = sss_vector_type,
jacobian_type = jacobian_type)
end
function RiskAdjustedLinearization(μ::M, Λ::L, Σ::S, ξ::X, Γ₅::JC5, Γ₆::JC6, ccgf::CF,
z::AbstractVector{T}, y::AbstractVector{T}, Ψ::AbstractMatrix{T},
Nε::Int = -1; sss_vector_type::DataType = Vector{T}, sss_matrix_type::DataType = Matrix{T},
jacobian_type::DataType = Matrix{T}) where {T <: Number, M <: Function, L <: Function, S <: AbstractMatrix{<: Number},
X <: Function,
JC5 <: AbstractMatrix{<: Number},
JC6 <: AbstractMatrix{<: Number},
CF <: Function}
# Get dimensions
Nz = length(z)
Ny = length(y)
if Nε < 0
error("Nε cannot be negative")
end
# Create wrappers enabling caching for Λ and Σ
Λ = RALΛ(Λ, z, sss_matrix_type, (Nz, Ny))
Σ = RALΣ(Σ, z)
return RiskAdjustedLinearization(μ, Λ, Σ, ξ, Γ₅, Γ₆, ccgf, z, y, Ψ, Nz, Ny, Nε, sss_vector_type = sss_vector_type,
jacobian_type = jacobian_type)
end
function RiskAdjustedLinearization(μ::M, Λ::L, Σ::S, ξ::X, Γ₅::JC5, Γ₆::JC6, ccgf::CF,
z::AbstractVector{T}, y::AbstractVector{T}, Ψ::AbstractMatrix{T},
Nε::Int = -1; sss_vector_type::DataType = Vector{T}, sss_matrix_type::DataType = Matrix{T},
jacobian_type::DataType = Matrix{T}) where {T <: Number, M <: Function,
L <: AbstractMatrix{<: Number}, S <: AbstractMatrix{<: Number},
X <: Function,
JC5 <: AbstractMatrix{<: Number},
JC6 <: AbstractMatrix{<: Number},
CF <: Function}
# Get dimensions
Nz = length(z)
Ny = length(y)
if Nε < 0
error("Nε cannot be negative")
end
# Create wrappers enabling caching for Λ and Σ
Λ = RALΛ(Λ, z)
Σ = RALΣ(Σ, z)
return RiskAdjustedLinearization(μ, Λ, Σ, ξ, Γ₅, Γ₆, ccgf, z, y, Ψ, Nz, Ny, Nε, sss_vector_type = sss_vector_type,
jacobian_type = jacobian_type)
end
function _cache_jacobians(Ψ::AbstractMatrix{T}, Nz::Int, Ny::Int, mat_type::DataType) where {T <: Number}
Γ₁ = mat_type(undef, Nz, Nz)
Γ₂ = mat_type(undef, Nz, Ny)
Γ₃ = similar(Ψ)
Γ₄ = mat_type(undef, Ny, Ny)
JV = similar(Ψ)
return Γ₁, Γ₂, Γ₃, Γ₄, JV
end
function _cache_sss_vectors(z::AbstractVector{T}, y::AbstractVector{T}) where {T <: Number}
μ_sss = similar(z)
ξ_sss = similar(y)
𝒱_sss = similar(y)
return μ_sss, ξ_sss, 𝒱_sss
end
function _check_inputs(nonlinear::A, linearization::B, z::C1, y::C1, Ψ::C2) where {A <: RALNonlinearSystem, B <: RALLinearizedSystem,
C1 <: AbstractVector{<: Number}, C2 <: AbstractMatrix{<: Number}}
# Get contents of nonlinear and linearization blocks
@unpack μ, ξ, 𝒱, μ_sss, ξ_sss, 𝒱_sss = nonlinear
@unpack μz, μy, ξz, ξy, J𝒱, Γ₁, Γ₂, Γ₃, Γ₄, Γ₅, Γ₆, JV = linearization
@assert applicable(μ, z, y) ||
applicable(μ, μ_sss, z, y) "The function μ must take either the form " *
"μ(z, y) or the in-place equivalent μ(F, z, y)"
    @assert applicable(ξ, z, y) ||
        applicable(ξ, ξ_sss, z, y) "The function ξ must take either the form " *
        "ξ(z, y) or the in-place equivalent ξ(F, z, y)"
@assert applicable(𝒱, z, Ψ, Γ₅, Γ₆) ||
applicable(𝒱, y, z, Ψ, Γ₅, Γ₆) "The function 𝒱 must take either the form " *
"𝒱(z, Ψ, Γ₅, Γ₆) or the in-place equivalent 𝒱(F, z, Ψ, Γ₅, Γ₆)"
@assert applicable(μz, Γ₁, z, y) ||
applicable(μz, Γ₁, z, y, μ_sss) "The function μz must take either the form " *
"μz(F, z, y) or μz(F, z, y, μ_sss)"
@assert applicable(μy, Γ₂, z, y) ||
applicable(μy, Γ₂, z, y, μ_sss) "The function μy must take either the form " *
"μy(F, z, y) or μy(F, z, y, μ_sss)"
@assert applicable(ξz, Γ₃, z, y) ||
applicable(ξz, Γ₃, z, y, ξ_sss) "The function ξz must take either the form " *
"ξz(F, z, y) or ξz(F, z, y, ξ_sss)"
@assert applicable(ξy, Γ₄, z, y) ||
applicable(ξy, Γ₄, z, y, ξ_sss) "The function ξy must take either the form " *
"ξy(F, z, y) or ξy(F, z, y, ξ_sss)"
@assert applicable(J𝒱, z, Ψ, Γ₅, Γ₆) ||
applicable(J𝒱, JV, z, Ψ, Γ₅, Γ₆, 𝒱_sss) "The function J𝒱 must take either the form " *
"J𝒱(F, z, Ψ, Γ₅, Γ₆) or J𝒱(F, z, Ψ, Γ₅, Γ₆, 𝒱_sss)"
end
## Methods for using RiskAdjustedLinearization
@inline Γ₁(m::RiskAdjustedLinearization) = m.linearization.Γ₁
@inline Γ₂(m::RiskAdjustedLinearization) = m.linearization.Γ₂
@inline Γ₃(m::RiskAdjustedLinearization) = m.linearization.Γ₃
@inline Γ₄(m::RiskAdjustedLinearization) = m.linearization.Γ₄
@inline Γ₅(m::RiskAdjustedLinearization) = m.linearization.Γ₅
@inline Γ₆(m::RiskAdjustedLinearization) = m.linearization.Γ₆
@inline JV(m::RiskAdjustedLinearization) = m.linearization.JV
@inline getvalues(m::RiskAdjustedLinearization) = (m.z, m.y, m.Ψ)
@inline getvecvalues(m::RiskAdjustedLinearization) = vcat(m.z, m.y, vec(m.Ψ))
@inline nonlinear_system(m::RiskAdjustedLinearization) = m.nonlinear
@inline linearized_system(m::RiskAdjustedLinearization) = m.linearization
function update!(m::RiskAdjustedLinearization, z::C1, y::C1, Ψ::C2;
update_cache::Bool = true) where {C1 <: AbstractVector{<: Number}, C2 <: AbstractMatrix{<: Number}}
# Update values of the affine approximation
m.z .= z
m.y .= y
m.Ψ .= Ψ
# Update the cached vectors and Jacobians
if update_cache
update!(nonlinear_system(m), m.z, m.y, m.Ψ, Γ₅(m), Γ₆(m))
update!(linearized_system(m), m.z, m.y, m.Ψ, m.nonlinear.μ_sss, m.nonlinear.ξ_sss, m.nonlinear.𝒱_sss)
end
m
end
function Base.show(io::IO, m::RiskAdjustedLinearization)
@printf io "Risk-Adjusted Linearization of a Dynamic Economic Model\n"
@printf io "No. of state variables: %i\n" m.Nz
@printf io "No. of jump variables: %i\n" m.Ny
@printf io "No. of exogenous shocks: %i\n" m.Nε
end
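# Illustrative usage sketch (not part of this file): the model functions below
# are hypothetical stand-ins for an actual model's μ, ξ, Λ, Σ. For i.i.d.
# standard Gaussian shocks the ccgf reduces to row-wise ‖α‖²/2.
#
#   μ(z, y) = ...                        # expected state transition
#   ξ(z, y) = ...                        # expectational equations
#   Λ(z)    = zeros(Nz, Ny)              # no endogenous risk
#   Σ(z)    = σ * Matrix(1.0I, Nz, Nε)   # homoskedastic exogenous risk
#   ccgf(α, z) = vec(sum(α .^ 2, dims = 2)) ./ 2
#   m = RiskAdjustedLinearization(μ, Λ, Σ, ξ, Γ₅, Γ₆, ccgf, z0, y0, Ψ0, Nε)
#   update!(m, z0, y0, Ψ0)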
[source: src/risk_adjusted_linearization.jl (chenwilliam77/gen_affine, MIT)]
[STATEMENT]
lemma compute_tmax_rule [hoare_triple]:
"<int_tree t1 b1 * int_tree t2 b2>
compute_tmax it b1 b2
<\<lambda>r. int_tree t1 b1 * int_tree t2 b2 * \<up>(r = max3 it (interval_tree.tmax t1) (interval_tree.tmax t2))>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. <int_tree t1 b1 * int_tree t2 b2> compute_tmax it b1 b2 <\<lambda>r. int_tree t1 b1 * int_tree t2 b2 * \<up> (r = max3 it (interval_tree.tmax t1) (interval_tree.tmax t2))>
[PROOF STEP]
by auto2
[source: Auto2_Imperative_HOL_Imperative_IntervalTree_Impl (Isabelle)]
from __future__ import print_function
import os
import numpy as np
import csv
import matplotlib
from matplotlib import pyplot as plt
from matplotlib.ticker import PercentFormatter
from PIL import Image, ImageOps
from skimage import io
from utils.preprocess import load_pfm
FONTSIZE=20
matplotlib.rcParams.update({'font.size': FONTSIZE})
#matplotlib.rc('xtick', labelsize=FONTSIZE)
#matplotlib.rc('ytick', labelsize=FONTSIZE)
#matplotlib.rc('xlabel', labelsize=FONTSIZE)
#matplotlib.rc('ylabel', labelsize=FONTSIZE)
#DATAPATH = '/media/sf_Shared_Data/gpuhomedataset/dispnet'
#DATAPATH = '/home/datasets/imagenet/dispnet/virtual'
#DATAPATH = '/home/datasets/imagenet/dispnet'
DATAPATH = './data'
OUTPUTPATH = '/tmp'
#FILELIST = 'FlyingThings3D_release_TEST.list'
#FILELIST = 'FlyingThings3D_release_TRAIN.list'
#FILELIST='lists/real_release.list'
#FILELIST='lists/kitti-groundtruth.list'
#FILELIST='lists/kitti2015_train.list'
#FILELIST='lists/MB2014_TRAIN.list'
#FILELIST='lists/eth3d_train.list'
FILELIST='lists/FlyingThings3D_release_TRAIN_100.list'
RESULTLIST = 'NEW_' + FILELIST
CLEANRESULTLIST = 'CLEAN_' + FILELIST
def plot_hist(d, save=False, filename=None, plot=True, color='r'):
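    """Return (mean, std, max) of d; optionally plot (and save) a density
    histogram of its nonzero values."""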
flatten = d.ravel()
mean = np.mean(flatten)
max = np.max(flatten)
std = np.std(flatten)
print('len: %d, mean: %.3f, std: %.3f' % (len(flatten), mean, std))
#return n_neg, flatten.size # return #negative, total
if plot:
#count, bins, ignored = plt.hist(flatten, 50, color=color)
flatten = flatten[np.abs(flatten) > 0.0]
count, bins, ignored = plt.hist(flatten, bins=np.arange(0,300), density=True, color=color)
plt.gca().yaxis.set_major_formatter(PercentFormatter(1))
plt.xlabel('Disparity')
plt.ylabel('Percentage')
if save:
plt.savefig(os.path.join(OUTPUTPATH, '%s.pdf'%filename), bbox_inches='tight')
else:
#plt.show()
pass
#plt.clf()
return mean, std, max
def statistic(file_list):
img_pairs = []
with open(file_list, "r") as f:
img_pairs = f.readlines()
csv_file = open(RESULTLIST, 'a')
for f in img_pairs:
names = f.split()
name = names[2]
print('Name: ', name)
gt_disp_name = os.path.join(DATAPATH, name)
gt_disp, scale = load_pfm(gt_disp_name)
print('Shape: ', gt_disp.shape, ', Mean: ', np.mean(gt_disp))
name_items = name.split('/')
save_name = 'hist_{}_{}_{}'.format(name_items[-4], name_items[-3], name_items[-1].split('.')[0])
mean, std, max = plot_hist(gt_disp, save=True, filename=save_name, plot=False)
writer = csv.writer(csv_file, delimiter='\t')
writer.writerow([name, mean, std, max])
csv_file.close()
def statistic_with_file(fn):
result_file = open(CLEANRESULTLIST, 'a')
with open(fn, 'r') as f:
total_array = []
fns = []
for line in f.readlines():
items = line.split('\t')
total_array.append([float(i) for i in items[1:]])
fns.append(items[0])
total_array = np.array(total_array)
print('Shape: ', total_array[:, 0].shape)
for i, mean in enumerate(total_array[:, 0]):
if mean < 150:
grt = fns[i]
name_items = grt.split('/')
left = 'FlyingThings3D_release/frames_cleanpass/%s/%s/%s/left/%s.png' % (name_items[-5], name_items[-4], name_items[-3], name_items[-1].split('.')[0])
right = 'FlyingThings3D_release/frames_cleanpass/%s/%s/%s/right/%s.png' % (name_items[-5], name_items[-4], name_items[-3], name_items[-1].split('.')[0])
#result_file.write("%s %s %s\n" % (left, right, fns[i]))
plot_hist(total_array[:, 0])
#plot_hist(total_array[:, 1])
#plot_hist(total_array[:, 2])
result_file.close()
def statistic_mean_std(filelist):
img_pairs = []
with open(filelist, "r") as f:
img_pairs = f.readlines()
means = []
for f in img_pairs:
names = f.split()
leftname = names[0]
rightname = names[1]
leftfn = os.path.join(DATAPATH, leftname)
rightfn = os.path.join(DATAPATH, rightname)
leftimgdata = io.imread(leftfn)
rightimgdata = io.imread(rightfn)
leftmean = np.mean(leftimgdata.ravel())
rightmean = np.mean(rightimgdata.ravel())
print('leftmean: ', leftmean)
print('rightmean: ', rightmean)
means.append((leftmean+rightmean)/2)
means = np.array(means)
print('total mean: ', np.mean(means))
print('total std: ', np.std(means))
def statistic_disparity(filelist):
img_pairs = []
with open(filelist, "r") as f:
img_pairs = f.readlines()
all = np.array([], dtype=np.float32)
for f in img_pairs:
names = f.split()
dispname = names[2]
fn = os.path.join(DATAPATH, dispname)
print('fn: ', fn)
if fn.find('.png') >= 0:
gt_disp = Image.open(fn)
gt_disp = np.ascontiguousarray(gt_disp,dtype=np.float32)/256
else:
gt_disp, _ = load_pfm(fn)
gt_disp[np.isinf(gt_disp)] = 0
all = np.concatenate((gt_disp.ravel(), all))
mean = np.mean(all)
std = np.std(all)
color='#A9D18E'
mean, std, max = plot_hist(all, save=True, filename=filelist, plot=True, color=color)
print('total mean: ', mean)
print('total std: ', std)
def statistic_kitti_disparity(filelist):
img_pairs = []
with open(filelist, "r") as f:
img_pairs = f.readlines()
all = np.array([], dtype=np.float32)
for f in img_pairs:
dispname = f[:-1]
fn = dispname
gt_disp = Image.open(fn)
gt_disp = np.ascontiguousarray(gt_disp,dtype=np.float32)/256
#gt_disp, _ = load_pfm(fn)
all = np.concatenate((gt_disp.ravel(), all))
np.save('stat.npy', all)
mean = np.mean(all)
std = np.std(all)
print('total mean: ', mean)
print('total std: ', std)
mean, std, max = plot_hist(all, save=True, filename='real_disp.png', plot=False, color='r')
def force_plot():
all = np.load('stat.npy')
mean, std, max = plot_hist(all, save=True, filename='real_disp.png', plot=True, color='r')
def plot_hist_with_filename(fn):
fnt='img00000.bmp'
leftfn = '/media/sf_Shared_Data/gpuhomedataset/dispnet/real_release/frames_cleanpass/left/%s'%fnt
rightfn = '/media/sf_Shared_Data/gpuhomedataset/dispnet/real_release/frames_cleanpass/right/%s'%fnt
realimgdata = io.imread(leftfn)
#leftfn = '/media/sf_Shared_Data/gpuhomedataset/FlyingThings3D_release/frames_cleanpass/TRAIN/A/0001/left/%s'%fn
#rightfn = '/media/sf_Shared_Data/gpuhomedataset/FlyingThings3D_release/frames_cleanpass/TRAIN/A/0001/right/%s'%fn
#realimgdata = io.imread(leftfn)
leftfn = '/media/sf_Shared_Data/gpuhomedataset/FlyingThings3D_release/frames_cleanpass/TRAIN/A/0000/left/%s'%fn
rightfn = '/media/sf_Shared_Data/gpuhomedataset/FlyingThings3D_release/frames_cleanpass/TRAIN/A/0000/right/%s'%fn
leftimgdata = io.imread(leftfn)
rightimgdata = io.imread(rightfn)
mean, std, max = plot_hist(leftimgdata, save=False, filename=None, plot=True, color='r')
mean, std, max = plot_hist(realimgdata, save=False, filename=None, plot=True, color='b')
plt.show()
def extract_exception_of_occulution():
#occulution_list = 'CC_FlyingThings3D_release_TRAIN.list'
#occulution_list = './lists/CC_FlyingThings3D_release_TRAIN.list'
occulution_list = './lists/girl20_TRAIN.list'
img_pairs = []
with open(occulution_list, "r") as f:
img_pairs = f.readlines()
means = []
maxcount = 10000
i = 0
for f in img_pairs:
names = f.split()
name = names[2]
#gt_disp_name = os.path.join(DATAPATH, 'clean_dispnet', name)
gt_disp_name = os.path.join(DATAPATH, name)
if not os.path.isfile(gt_disp_name):
#print('Not found: ', gt_disp_name)
continue
gt_disp, scale = load_pfm(gt_disp_name)
mean = np.mean(gt_disp)
means.append(mean)
i+=1
if i > maxcount:
break
print('Name: ', name, ', Mean: ', mean, ', std: ', np.std(gt_disp), ' min: ', np.min(gt_disp), ' max: ', np.max(gt_disp))
np.save('virtualmean.log', np.array(means))
#mean, std, max = plot_hist(np.array(means), save=False, filename=None, plot=True, color='r')
#plt.show()
def parse_mean_log():
filename = './logs/meanstd_test.log'
f = open(filename, 'r')
means = []
fns = []
for line in f.readlines():
mean = line.split()[-4]
means.append(float(mean))
fns.append(line.split()[1])
means = np.array(means)
fns = np.array(fns)
k = 10
#sorted = np.argsort(means)[-k:]
sorted = np.argsort(means)[:k]
print(sorted)
print(means[sorted])
print(fns[sorted])
#plt.scatter(range(0, len(means)), means)
#plot_hist(np.array(means), plot=True)
#plt.show()
if __name__ == '__main__':
#statistic(FILELIST)
#statistic_with_file(RESULTLIST)
#fn='img00000.bmp'
#fn='0006.png'
#plot_hist_with_filename(fn)
#statistic_mean_std(FILELIST)
statistic_disparity(FILELIST)
#statistic_kitti_disparity(FILELIST)
#extract_exception_of_occulution()
#parse_mean_log()
[source: scripts/stastic_disparity.py (HKBU-HPML/FADNet-PP, MIT)]
# This is open-source software licensed under a BSD license.
# Please see the file LICENSE.txt for details.
#
import numpy as np
from ginga.canvas.CanvasObject import get_canvas_types
#from ginga import trcalc
from ginga.gw import Widgets
from ginga.util import action
from .base import Stage
class Crop(Stage):
_stagename = 'crop-image'
def __init__(self):
super(Crop, self).__init__()
self.dc = get_canvas_types()
self.cropcolor = 'yellow'
self.layertag = 'crop-layer'
self._crop_rect = (0.0, 0.0, 1.0, 1.0)
self._aspect = None
self._img_dims = (1, 1)
canvas = self.dc.DrawingCanvas()
canvas.enable_edit(True)
canvas.set_callback('edit-event', self.edit_cb)
canvas.set_draw_mode('edit')
self.canvas = canvas
def build_gui(self, container):
self.viewer = self.pipeline.get('viewer')
fr = Widgets.Frame("Crop")
captions = (('Crop %:', 'label', 'crop', 'llabel'),
('Output size:', 'label', 'size', 'llabel'),
('Aspect:', 'label', 'aspect', 'entryset'),
)
w, b = Widgets.build_info(captions, orientation='vertical')
self.w.update(b)
arr = np.asarray(self._crop_rect) * 100.0
crop = "%6.2f,%6.2f to %6.2f,%6.2f" % tuple(arr)
b.crop.set_text(crop)
b.aspect.set_tooltip("Set the aspect ratio (wd/ht)")
b.aspect.add_callback('activated', self._set_aspect_cb)
if self._aspect is not None:
b.aspect.set_text(str(self._aspect))
fr.set_widget(w)
container.set_widget(fr)
self.canvas.set_surface(self.viewer)
self.canvas.register_for_cursor_drawing(self.viewer)
wd, ht = 100, 100
self.crop_obj = self.dc.CompoundObject(
self.dc.Rectangle(0, 0, wd, ht,
color=self.cropcolor),
self.dc.Text(0, 0, "Crop",
color=self.cropcolor))
self.crop_obj.objects[1].editable = False
self._gui_update_crop()
self.w.size.set_text("unknown")
@property
def crop_rect(self):
return self._crop_rect
@crop_rect.setter
def crop_rect(self, val):
self._crop_rect = val
if self.gui_up:
self._gui_update_crop()
@property
def aspect(self):
return self._aspect
@aspect.setter
def aspect(self, val):
self._aspect = val
if self.gui_up:
asp = self._aspect
self.w.aspect.set_text('' if asp is None else str(asp))
def _gui_update_crop(self):
arr = np.asarray(self._crop_rect) * 100.0
crop = "%6.2f,%6.2f to %6.2f,%6.2f" % tuple(arr)
self.w.crop.set_text(crop)
def resume(self):
# insert canvas, if not already
p_canvas = self.viewer.get_canvas()
try:
p_canvas.get_object_by_tag(self.layertag)
except KeyError:
# Add ruler layer
p_canvas.add(self.canvas, tag=self.layertag)
if not self.canvas.has_object(self.crop_obj):
self.canvas.add(self.crop_obj)
self.canvas.ui_set_active(True, viewer=self.viewer)
def pause(self):
self.canvas.ui_set_active(False)
# remove the canvas from the image
p_canvas = self.viewer.get_canvas()
try:
p_canvas.delete_object_by_tag(self.layertag)
except Exception:
pass
def edit_cb(self, canvas, obj):
if obj.kind != 'rectangle':
return True
x1, y1, x2, y2 = obj.get_llur()
old = self._get_state()
self._update_crop_rect(x1, y1, x2, y2)
new = self._get_state()
self.pipeline.push(action.AttrAction(self, old, new,
descr="change crop"))
self.pipeline.run_from(self)
def _update_crop_rect(self, x1, y1, x2, y2):
# reposition other elements to match
if self.aspect is not None:
x1, y1, x2, y2 = self._enforce_aspect(x1, y1, x2, y2)
rect = self.crop_obj.objects[0]
rect.x1, rect.y1, rect.x2, rect.y2 = x1, y1, x2, y2
text = self.crop_obj.objects[1]
text.x, text.y = x1, y2 + 4
self.viewer.redraw(whence=3)
wd, ht = self._img_dims
self.set_crop_rect(wd, ht, x1, y1, x2, y2)
def set_crop_rect(self, wd, ht, x1, y1, x2, y2):
x1p, y1p, x2p, y2p = [x1 / wd, y1 / ht, x2 / wd, y2 / ht]
self.crop_rect = (x1p, y1p, x2p, y2p)
def get_crop_rect_px(self, wd, ht, use_image_lim=False):
x1p, y1p, x2p, y2p = np.array(self._crop_rect)
x1, y1, x2, y2 = (int(x1p * wd), int(y1p * ht),
int(x2p * wd), int(y2p * ht))
if use_image_lim:
x1, y1, x2, y2 = max(0, x1), max(0, y1), min(x2, wd), min(y2, ht)
return x1, y1, x2, y2
def _set_aspect(self, asp_s):
wd, ht = self._img_dims
x1, y1, x2, y2 = self.get_crop_rect_px(wd, ht)
if len(asp_s) == 0:
self._aspect = None
else:
if ':' in asp_s:
wd, ht = [float(n) for n in asp_s.split(':')]
self._aspect = wd / ht
else:
self._aspect = float(asp_s)
self._update_crop_rect(x1, y1, x2, y2)
def _set_aspect_cb(self, widget):
asp_s = widget.get_text().strip()
old = self._get_state()
self._set_aspect(asp_s)
new = self._get_state()
self.pipeline.push(action.AttrAction(self, old, new,
descr="set aspect ratio"))
self.pipeline.run_from(self)
def _enforce_aspect(self, x1, y1, x2, y2):
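        # Resize the rectangle about its center so that width/height matches
        # self.aspect; the non-positive branch appears to derive the width
        # from the height instead (sign convention inferred from the code).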
ctr_x, ctr_y = (x1 + x2) / 2.0, (y1 + y2) / 2.0
if self.aspect > 0:
wd = x2 - x1
hht = wd / self.aspect * 0.5
ctr_y = (y1 + y2) * 0.5
y1, y2 = ctr_y - hht, ctr_y + hht
else:
ht = y2 - y1
hwd = ht / self.aspect * 0.5
ctr_x = (x1 + x2) * 0.5
x1, x2 = ctr_x - hwd, ctr_x + hwd
return x1, y1, x2, y2
def _get_state(self):
return dict(crop_rect=[float(n) for n in self._crop_rect],
aspect=self._aspect)
def run(self, prev_stage):
data = self.pipeline.get_data(prev_stage)
self.verify_2d(data)
if self._bypass or data is None:
self.pipeline.send(res_np=data)
return
ht, wd = data.shape[:2]
dims = (wd, ht)
if self._img_dims != dims:
self._img_dims = dims
x1, y1, x2, y2 = self.get_crop_rect_px(wd, ht,
use_image_lim=True)
res_np = data[y1:y2, x1:x2, ...]
if self.gui_up:
_ht, _wd = res_np.shape[:2]
## try:
## asp_s = trcalc.calc_aspect_str(_wd, _ht)
## except Exception as e:
## # sometimes Numpy throws a NaN error here
## asp_s = "{}:{}".format(_wd, _ht)
asp = _wd / _ht
s = "{}x{} ({})".format(_wd, _ht, asp)
self.w.size.set_text(s)
self.pipeline.send(res_np=res_np)
def export_as_dict(self):
d = super(Crop, self).export_as_dict()
d.update(self._get_state())
return d
def import_from_dict(self, d):
super(Crop, self).import_from_dict(d)
self.crop_rect = d['crop_rect']
self.aspect = d['aspect']
[source: ginga/util/stages/crop.py (kyraikeda/ginga, BSD-3-Clause)]
[STATEMENT]
lemma Nil_O_never: "[] = O tr \<longleftrightarrow> never \<gamma> tr"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ([] = O tr) = never \<gamma> tr
[PROOF STEP]
unfolding O_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ([] = map g (filter \<gamma> tr)) = never \<gamma> tr
[PROOF STEP]
by (induction tr) auto
[source: BD_Security_Compositional_Independent_Secrets (Isabelle)]
import tictactoe as ttt
import numpy as np
board = np.array([
['o', 'x', 't'],
['d', 'x', 'g'],
['o', 'v', 'b']
])
available = ttt.find_available_moves(board)
best_move, metric_dict = ttt.optimal_ai_move(board=board, x_or_o='x', available=available)
print(best_move, metric_dict)
"""
The AI chooses bottom left - (0, 2) here. This means the human can choose bottom middle - (1,2) and win.
So what's going on here?
The AI is not making any assumptions about what kind of player it is against,
and just counts the number of wins and losses, without treating some opponent replies as more likely than others.
So policy='optimal' is really only optimal against a random player.
I can fix this by adding a 'weight' term that values wins, losses, and draws now
more highly than wins, losses, and draws later.
"""
# source: ttt_tests/greedy_ai_test.py (jcoombes/tictactoe)
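# Editorial sketch (not from the original repo): the comment above proposes a
# 'weight' term so near-term outcomes dominate distant ones. A simple depth
# discount achieves that; the name discounted_score is hypothetical and not
# part of the tictactoe module.
def discounted_score(outcome, depth, gamma=0.9):
    """Score an outcome (+1 win, -1 loss, 0 draw) reached `depth` plies ahead."""
    return outcome * (gamma ** depth)

# A win now outranks a win two plies later, and a loss two plies away
# hurts less than an immediate one:
assert discounted_score(+1, 0) > discounted_score(+1, 2)
assert discounted_score(-1, 2) > discounted_score(-1, 0)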
import tweepy
import numpy as np
import matplotlib.pyplot as plt
class Colorschemez:
def __init__(self, status):
"""Constructs a Colorscheme from a @colorschemez tweet.
Arguments:
status -- a @colorschemez tweet in a Tweepy status object.
"""
c = status.text.split('\n')
# The last line of the tweet text ends with the t.co media link; drop it.
c[-1] = ' '.join(c[-1].split()[:-1])
self.colornames = c
self.url = status._json['entities']['media'][0]['url']
self.image_url = status._json['entities']['media'][0]['media_url_https']
self.colors = self._extract_colors(
self._retrieve_image(self.image_url)
)
@classmethod
def latest(cls):
"""Retrieves the latest @colorschemez tweet and returns the corresponding Colorscheme."""
latest_tweet = retrieve_tweets(count=1)
return cls(latest_tweet[0])
def _extract_colors(self, fp):
"""Extract the three colors from a @colorschemez image and return their hex code.
Uses KMeans clustering to find the colors as @colorschemez images include color transitions which makes the raw sRGB codes unreliable.
Arguments:
fp -- A filename (string), pathlib.Path object or a file object. Accepts whatever PIL's Image.open() accepts.
"""
from PIL import Image
from sklearn.cluster import KMeans
im = Image.open(fp)
self.image = im
# Extract the sRGB codes for the colors in the image.
# The output of getcolors is unique colors and the number of
# pixel with that color. We 'uncompress' this in order for the
# K-means clustering to be able to account for observation
# weights.
sRGB = []
for w, srgb in im.getcolors(maxcolors=512*512):
sRGB += (w//512) * [srgb]
kmeans = KMeans(n_clusters=3).fit(sRGB)
# np.int was removed in NumPy 1.24; use the builtin int instead.
center_sRGB = np.round(kmeans.cluster_centers_).astype(int)
to_hex = lambda x: '#'+''.join(['{:02x}'.format(n) for n in x])
return [to_hex(c) for c in center_sRGB]
def _retrieve_image(self, url):
"""Downloads an image and returns it as a bytestream.
Arguments:
url - the url of the image.
"""
import requests
from io import BytesIO
r = requests.get(url)
r.raise_for_status()  # fail loudly on HTTP errors rather than parsing junk
return BytesIO(r.content)
def example_plot(self, ax):
"""Construct an example plot from the Colorscheme.
Arguments:
ax -- matplotlib axes object to draw the example plot on.
"""
x = np.linspace(-1, 1, 100)
functions = [np.sin, lambda x: np.cos(x)-0.30, lambda x: 0.5*x]
for f, c in zip(functions, self.colors):
ax.plot(x, f(x), c=c, lw=6)
ax.imshow(np.asarray(self.image),
extent=(0.3, 1.0, -0.8, -0.1))
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-0.9, 1.1)
ax.text(-1.05, 1.05, '\n'.join(self.colornames),
horizontalalignment='left',
verticalalignment='top')
ax.set_xlabel('Colors by @colorschemez: %s' % self.url)
def retrieve_tweets(count):
"""Retrieve the user timeline of @colorschemez and return it as list of Tweepy Status objects.
Arguments:
count -- the number of tweets to retrieve.
"""
import config as cfg
auth = tweepy.OAuthHandler(cfg.consumer_key, cfg.consumer_secret)
auth.set_access_token(cfg.access_token, cfg.access_token_secret)
api = tweepy.API(auth)
valid_tweets = []
oldest_tweet_checked_id = None
while True:
if len(valid_tweets) == count:
break
if oldest_tweet_checked_id is None:
tweets = api.user_timeline(screen_name='colorschemez',
count=count-len(valid_tweets))
else:
tweets = api.user_timeline(screen_name='colorschemez',
count=count-len(valid_tweets),
# max_id is inclusive, so step just below the oldest tweet already
# checked to avoid re-fetching it.
max_id=oldest_tweet_checked_id - 1)
oldest_tweet_checked_id = tweets[-1].id
valid_tweets += list(filter(valid_status, tweets))
return valid_tweets
def valid_status(status):
""" Checks if a status fullfills our assumptions. Return True if it does, False if it doesn't.
Arguments:
status -- a Tweepy Status object.
"""
# The tweet should consist of three lines: the names of the colours.
if len(status.text.split('\n')) != 3:
return False
json = status._json
# The tweet should include one image.
if 'media' not in json['entities']:
return False
if len(json['entities']['media']) != 1:
return False
media = json['entities']['media'][0]
if 'url' not in media:
return False
if 'media_url_https' not in media:
return False
if not valid_url(media['url']):
return False
if not valid_url(media['media_url_https']):
return False
return True
class ColorschemezError(Exception):
def __init__(self, message):
self.message = message
def valid_url(url):
""" Check that an url is valid.
Uses Django's regex. """
import re
regex = re.compile(
r'^(?:http|ftp)s?://' # http:// or https://
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' #domain...
r'localhost|' #localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)$', re.IGNORECASE)
return regex.match(url) is not None
if __name__ == '__main__':
"""Plot the last 16 color schemes as a grid.
Saves the result to ./sixteen.png.
"""
n = 4
tweets = retrieve_tweets(count=n**2)
fig, axes = plt.subplots(ncols=n, nrows=n, figsize=(5*n, 5*n))
for ax, tweet in zip(axes.flatten(), tweets):
cs = Colorschemez(tweet)
cs.example_plot(ax)
fig.savefig('sixteen.png', dpi=200)
# source: ScientificColorschemez.py (sliem/ScientificColorschemez)
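# Editorial usage sketch (not from the original repo): fetch the newest
# @colorschemez tweet and render the demo plot. A config.py with valid
# Twitter API credentials is assumed, as the module above requires.
import matplotlib.pyplot as plt
from ScientificColorschemez import Colorschemez

cs = Colorschemez.latest()
print(cs.colornames, cs.colors)  # the three colour names and their hex codes
fig, ax = plt.subplots()
cs.example_plot(ax)
fig.savefig('latest.png', dpi=200)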
"""Summary
"""
from collections import deque
import math
def constructIncomingGraph(og):
"""Construct the graph.
No weights preserved. Return grapg ig, where ig[i] = j, there is an edge
from node j to node i.
Args:
og (dict): Graph with ougoing edges.
Returns:
dict: Graph with incoming edges.
"""
ig = dict()
for node_from, nodes_to in og.items():
for node_to in nodes_to:
if node_to[0] in ig:
ig[node_to[0]].append(node_from)
else:
ig[node_to[0]] = [node_from]
return ig
def topologicalSorting(graph, start_node):
"""Summary
Args:
graph (TYPE): Description
start_node (TYPE): Description
Returns:
TYPE: Description
"""
ig = constructIncomingGraph(graph)
# queue of sorted nodes (result)
sorted_nodes = deque([])
# queue of nodes with no incoming edges
in_set = deque([start_node])
while in_set:  # while I is not empty
n = in_set.popleft()
# if out of I than no incoming
sorted_nodes.append(n)
if n not in graph:
    print("Your graph is missing node", n)
    continue
children = graph[n]  # children of type [(j, weight)]
for child in children:
# remove an incoming edge (release the node from a parent :))
child_id = child[0]
ig[child_id].remove(n)
if not ig[child_id]:
# no incoming edges -> add to I
in_set.append(child_id)
print("sorting done")
return sorted_nodes
def shortestPath(graph, S):
""" Computes the shortest path
Only computes the path in the topologically sorted graph
The result is the list of graph node ids that are in the shortest path
Args:
graph (dict): constructed graph
S (deque): topological sorting of the node ids
Returns:
list(int): shortest path
"""
# For algorithm to work the nodes ids should be positive int in sequence
# Shortest path to the last element in the queue
# Incoming: graph - graph with outgoing edges
# S - queue of sorted nodes
N = len(S) # total number of nodes in S
if N > len(graph):
print("ERROR: More sorted nodes than in the graph")
if N < len(graph):
print("ERROR: Less sorted nodes than in the graph")
# initialization of the dist and p based on graph
dist = dict()
p = dict()
for node_id, children in graph.items():
dist[node_id] = math.inf
p[node_id] = None
dist[S[0]] = 0
for el in S:
u = el
children = graph[u]
for child in children:
v = child[0] # node to
w = child[1] # weight between u and v
# print("Edge", u,"->", v, ":", w)
if (dist[v] > dist[u] + w):
dist[v] = dist[u] + w
p[v] = u
# retrieving path
path = []
par_id = S[-1]
# print("Retrieving path from last node: ", S[-1])
path.append(par_id)
# p[source] is None, so stop there instead of appending the None
# sentinel (testing truthiness would also break for node id 0).
while p[par_id] is not None:
    par_id = p[par_id]
    path.append(par_id)
return path
# source: src/topological_sorting.py (ovysotska/image_sequence_matcher)
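# Editorial usage sketch (not from the original repo), assuming the module
# above is importable as topological_sorting. Graphs are dicts of outgoing
# edges, og[i] = [(j, weight), ...].
from topological_sorting import topologicalSorting, shortestPath

dag = {0: [(1, 1), (2, 4)],
       1: [(2, 1), (3, 5)],
       2: [(3, 1)],
       3: []}
order = topologicalSorting(dag, 0)  # deque([0, 1, 2, 3])
path = shortestPath(dag, order)     # [3, 2, 1, 0]: target back to source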
# -*- coding: utf-8 -*-
"""
"""
from __future__ import print_function
import numpy as np
class StaticStructureProblemParams:
#Nastran initial geometry file
nastran_geometry = 'nastran_input_geometry.inp'
def __init__(self, node_id, node_id_all):
#List of structural nodes on the outer surface
self.node_id = node_id
#List of all structural nodes
self.node_id_all = node_id_all
#Dictionary containing the structural parameters
self.structure_params = self.get_structure_params()
#Structural nodes coordinates (on the outer surface)
self.node_coord = self.structure_params['node_coord']
#Structural nodes coordinates (all nodes)
self.node_coord_all = self.structure_params['node_coord_all']
#Shell thicknesses
self.t = self.structure_params['t']
#Concentrated masses
self.m = self.structure_params['m']
#Function that returns the structural parameters from a Nastran input file
def get_structure_params(self):
node_coord = np.zeros((len(self.node_id),3))
node_coord_all = np.zeros((len(self.node_id_all),3))
t = []
m = []
#Write the outer surface node coordinates into an array
with open(self.nastran_geometry) as f:
lines = f.readlines()
for line in lines:
#Detect Nastran free field (comma separated) or small field (8 character)
if ',' in line:
line = line.split(',')
else:
line = [line[i:i+8] for i in range(0, len(line), 8)]
#Remove blank spaces
line = [word.strip() for word in line]
if len(line) > 1:
if line[0] == 'GRID' and line[1] in self.node_id_all:
node_coord_all[self.node_id_all.index(line[1]), 0] = float(line[3])
node_coord_all[self.node_id_all.index(line[1]), 1] = float(line[4])
node_coord_all[self.node_id_all.index(line[1]), 2] = float(line[5])
if line[1] in self.node_id:
node_coord[self.node_id.index(line[1]), 0] = float(line[3])
node_coord[self.node_id.index(line[1]), 1] = float(line[4])
node_coord[self.node_id.index(line[1]), 2] = float(line[5])
#Write shell thicknesses into a list
if line[0] == 'PSHELL':
t.append(float(line[3]))
#Write concentrated masses into a list
if line[0] == 'CONM2':
m.append(float(line[4]))
#Save thickness and mass lists as array
t = np.asarray(t)
m = np.asarray(m)
structure_params = {}
structure_params['node_coord_all'] = node_coord_all
structure_params['node_coord'] = node_coord
structure_params['t'] = t
structure_params['m'] = m
return structure_params
# source: aerostructures/structures_static/structures_static_problem_params.py (joanmasco/aerostructures)
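# Editorial sketch (not from the original repo) of the card parsing done by
# get_structure_params above: a free-field Nastran line splits on commas, a
# small-field line into 8-character columns. For a GRID card, zero-based
# fields 3-5 hold the x, y, z coordinates, as the class assumes. The sample
# line below is illustrative only.
line = 'GRID    12              1.0     2.0     3.0'
if ',' in line:
    fields = line.split(',')
else:
    fields = [line[i:i + 8] for i in range(0, len(line), 8)]
fields = [word.strip() for word in fields]
print(fields[0], fields[1], fields[3:6])  # GRID 12 ['1.0', '2.0', '3.0']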
import os
import sys
import gzip
import dill
import tqdm
import pickle
import torch
from torch.utils.data import Dataset
import numpy as np
from PIL import Image

# numpy.random.random_integers is long deprecated and absent from recent
# NumPy releases; emulate its inclusive sampling with randint.
def random_integers(low, high):
    return np.random.randint(low, high + 1)
def progress_bar(count, total, status=''):
bar_len = 60
filled_len = int(round(bar_len * count / float(total)))
percents = round(100.0 * count / float(total), 1)
bar = '=' * filled_len + '-' * (bar_len - filled_len)
sys.stdout.write('[%s] %s%s ...%s\r' % (bar, percents, '%', status))
sys.stdout.flush()
def make_sprites(n=50000, height=64, width=64):
images = np.zeros((n, height, width, 3))
counts = np.zeros((n,))
print('Generating sprite dataset...')
for i in range(n):
num_sprites = random_integers(0, 2)
counts[i] = num_sprites
for j in range(num_sprites):
pos_y = random_integers(0, height - 12)
pos_x = random_integers(0, width - 12)
scale = random_integers(12, min(16, height-pos_y, width-pos_x))
cat = random_integers(0, 2)
sprite = np.zeros((height, width, 3))
if cat == 0: # draw circle
center_x = pos_x + scale // 2.0
center_y = pos_y + scale // 2.0
for x in range(height):
for y in range(width):
dist_center_sq = (x - center_x)**2 + (y - center_y)**2
if dist_center_sq < (scale // 2.0)**2:
sprite[x][y][cat] = 1.0
elif cat == 1: # draw square
sprite[pos_x:pos_x + scale, pos_y:pos_y + scale, cat] = 1.0
else: # draw square turned by 45 degrees
center_x = pos_x + scale // 2.0
center_y = pos_y + scale // 2.0
for x in range(height):
for y in range(width):
if abs(x - center_x) + abs(y - center_y) < (scale // 2.0):
sprite[x][y][cat] = 1.0
images[i] += sprite
if i % 100 == 0:
progress_bar(i, n)
images = np.clip(images, 0.0, 1.0)
return {'x_train': images[:4 * n // 5],
'count_train': counts[:4 * n // 5],
'x_test': images[4 * n // 5:],
'count_test': counts[4 * n // 5:]}
class Sprites(Dataset):
def __init__(self, directory, n=50000, canvas_size=64,
train=True, transform=None):
np_file = 'sprites_{}_{}.npz'.format(n, canvas_size)
full_path = os.path.join(directory, np_file)
if not os.path.isfile(full_path):
gen_data = make_sprites(n, canvas_size, canvas_size)
# Save to the same path the loader below reads from; the bare filename
# would land the file in the CWD instead of `directory`.
np.savez(full_path, **gen_data)
data = np.load(full_path)
self.transform = transform
self.images = data['x_train'] if train else data['x_test']
self.counts = data['count_train'] if train else data['count_test']
def __len__(self):
return self.images.shape[0]
def __getitem__(self, idx):
img = self.images[idx]
if self.transform is not None:
img = self.transform(img)
return img, self.counts[idx]
class Clevr(Dataset):
def __init__(self, directory, transform=None):
self.directory = directory
self.filenames = os.listdir(directory)
self.n = len(self.filenames)
self.transform = transform
def __len__(self):
return self.n
def __getitem__(self, idx):
imgpath = os.path.join(self.directory, self.filenames[idx])
img = Image.open(imgpath)
if self.transform is not None:
img = self.transform(img)
return img, 1
class Atari(Dataset):
# source_type selects the on-disk format ('dill' or 'pickle').
def __init__(self, directory, transform=None, source_type='dill'):
print('Loading data from ' + directory)
self.directory = directory
self.filenames = os.listdir(directory)
self.transform = transform
self.dataset = self.load_dataset(source_type=source_type)
self.n = self.dataset.shape[0]
def __len__(self):
return self.n
def load_dataset(self, source_type='dill'):
dataset = []
for imgfile in tqdm.tqdm(self.filenames):
imgpath = os.path.join(self.directory, imgfile)
if source_type == 'dill':
with gzip.open(imgpath, 'rb') as f:
img = dill.load(f)
dataset.append(img)
elif source_type == 'pickle':
with open(imgpath, 'rb') as f:
img = pickle.load(f)
img = img['X'].reshape(-1, 64, 64, 3)
dataset.append(img)
dataset = np.concatenate(dataset, axis=0)
np.random.shuffle(dataset)
return dataset
def __getitem__(self, idx):
raw = self.dataset[idx]
raw = raw * 255.
raw = raw.astype('uint8')
img = Image.fromarray(raw)
if self.transform is not None:
img = self.transform(img)
return img
def get_shape(self):
return self.dataset[0].shape
def get_all_imgs(self):
for raw in self.dataset:
raw = raw * 255.
raw = raw.astype('uint8')
img = Image.fromarray(raw)
if self.transform is not None:
img = self.transform(img)
yield img
class BlackWhite(Dataset):
def __init__(self, length, transform=None):
self.length = length
self.transform = transform
def __len__(self):
return self.length
def __getitem__(self, idx):
img = np.random.choice([0, 1])
img = [np.zeros((64, 64, 3)), np.ones((64, 64, 3))][img]
img = np.array(img, dtype='uint8') * 255
img = Image.fromarray(img)
if self.transform is not None:
img = self.transform(img)
return img
def get_shape(self):
return (3, 64, 64)
def get_all_imgs(self):
return []
# source: spatial_monet/util/datasets.py (cvoelcker/MONet)
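# Editorial usage sketch (not from the original repo): Sprites is a regular
# torch Dataset, so it plugs into a DataLoader directly. torchvision is
# assumed to be installed for the ToTensor transform.
from torch.utils.data import DataLoader
from torchvision import transforms

dataset = Sprites('.', n=1000, canvas_size=64, train=True,
                  transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=32, shuffle=True)
images, counts = next(iter(loader))  # images has shape (32, 3, 64, 64)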
# !usr/bin/env python3
import numpy as np
import sys
def display_instruct():
"""Display game instructions."""
return """
Welcome to the greatest intellectual challenge of all time: Kuba Game
This will be a showdown between your human brain and my silicon processor.
You will make your move known by entering a number, 0-49. The number
will correspond to the board position as illustrated:
W1 | W2 | -- | -- | -- | B1 | B2 |
-------------------------------------
        W3 | W4 | -- | R1 | -- | B3 | B4 |
-------------------------------------
-- | -- | R2 | R3 | R4 | -- | -- |
-------------------------------------
-- | R5 | R6 | R7 | R8 | R9 | -- |
-------------------------------------
-- | -- | R10 | R11 | R12 | -- | -- |
-------------------------------------
B5 | B6 | -- | R13 | -- | W5 | W6 |
-------------------------------------
B7 | B8 | -- | -- | -- | W7 | W8 |
-------------------------------------
Prepare yourself, human. The ultimate battle is about to begin. \n
"""
class Player:
def __init__(self):
self.playerA = sys.argv[1]
self.playerB = sys.argv[2]
def get_player(self):
self.playerA = self.playerA.split(',')
self.playerB = self.playerB.split(',')
if self.playerA[1] == self.playerB[1]:
    raise Exception('Both players can\'t have same color')
return self.playerA[0] + f' is {self.playerA[1]} \n' + self.playerB[0] + f' is {self.playerB[1]}'
def play(self):
# self.turn = f"It's {"
pass
class Balls:
def __init__(self):
self.white = 'W'
self.black = 'B'
self.red = 'R'
def balls(self):
# no == empty slot
self.arr = np.array(
[['w1','w2','-- ','-- ','-- ','B1','B2'],
['w3', 'w4','-- ','R1 ','-- ','B3','B4'],
['--', '--','R2 ','R3 ','R4 ','--','--'],
['--', 'R5','R6 ','R7 ','R8 ','R9','--'],
['--', '--','R10','R11','R12','--','--'],
['B5', 'B6','-- ','R13','-- ','W5','W6'],
['B7', 'B8','-- ','-- ','-- ','W7','W8']]
)
self.ball_dict = {
'W1': self.arr[0][0],
'W2': self.arr[0][1],
'W3': self.arr[1][0],
'W4': self.arr[1][1],
'W5': self.arr[5][5],
'W6': self.arr[5][6],
'W7': self.arr[6][5],
'W8': self.arr[6][6],
'B1': self.arr[0][5],
'B2': self.arr[0][6],
'B3': self.arr[1][5],
'B4': self.arr[1][6],
'B5': self.arr[5][0],
'B6': self.arr[5][1],
'B7': self.arr[6][0],
'B8': self.arr[6][1],
'R1': self.arr[1][3],
'R2': self.arr[2][2],
'R3': self.arr[2][3],
'R4': self.arr[2][4],
'R5': self.arr[3][1],
'R6': self.arr[3][2],
'R7': self.arr[3][3],
'R8': self.arr[3][4],
'R9': self.arr[3][5],
'R10': self.arr[4][2],
'R11': self.arr[4][3],
'R12': self.arr[4][4],
'R13': self.arr[5][3],
'no': '--'
}
return self.ball_dict
class Board(Balls):
"""
This is the board class for the balls and moving the elements. Also contains most of the logic
"""
def __init__(self):
Balls.__init__(self)
Balls.balls(self)
def draw_board(self):
sep = ' | '
return '\n'.join(sep.join(row).upper() for row in self.arr)
def shift_balls(self, ball, direction):
    # `ball` and `direction` come from the prompts in the __main__ loop;
    # taking them as parameters keeps the method self-contained instead
    # of relying on module-level globals.
if ball == 'w1'.upper() or ball == 'w1':
# if self.arr[0][0]:
if direction == 'down':
# if moves == 1:
if self.arr[1][0] != '--':
self.arr[2][0] = self.arr[2][0].replace(self.arr[2][0],self.arr[1][0])
self.arr[1][0] = self.arr[1][0].replace(self.arr[1][0], self.arr[0][0])
self.arr[0][0] = self.arr[0][0].replace(self.arr[0][0], self.ball_dict['no'])
return self.arr
# moves
elif self.arr[0][0] == '--':
if self.arr[3][0] == '--':
self.arr[3][0] = self.arr[3][0].replace(self.arr[3][0],self.arr[2][0])
self.arr[2][0] = self.arr[2][0].replace(self.arr[2][0], self.arr[1][0])
self.arr[1][0] = self.arr[1][0].replace(self.arr[1][0], self.ball_dict['no'])
return self.arr
elif self.arr[1][0] == '--':
self.arr[1][0] = self.arr[1][0].replace(self.arr[1][0], self.arr[0][0])
self.arr[0][0] = self.arr[0][0].replace(self.arr[0][0], self.ball_dict['no'])
return self.arr
if direction == 'right':
if self.arr[0][1] != '--' and self.arr[0][2] != '--':
self.arr[0][2] = self.arr[0][2].replace(self.arr[0][2],self.arr[0][1])
self.arr[0][1] = self.arr[0][1].replace(self.arr[0][1], self.arr[0][0])
self.arr[0][0] = self.arr[0][0].replace(self.arr[0][0], self.ball_dict['no'])
return self.arr
if direction == 'up' or direction == 'left':
return "It can't move that way right now :0 ....."
if ball == 'w2'.upper() or ball == 'w2':
if direction == 'down':
if self.arr[1][1] != '--':
self.arr[2][1] = self.arr[2][1].replace(self.arr[2][1],self.arr[1][1])
self.arr[1][1] = self.arr[1][1].replace(self.arr[1][1], self.arr[0][1])
self.arr[0][1] = self.arr[0][1].replace(self.arr[0][1], self.ball_dict['no'])
return self.arr
if direction == 'right':
if self.arr[1][1] != '--':
self.arr[0][3] = self.arr[0][3].replace(self.arr[0][3],self.arr[0][2])
self.arr[0][2] = self.arr[0][2].replace(self.arr[0][2], self.arr[0][1])
self.arr[0][1] = self.arr[0][1].replace(self.arr[0][1], self.ball_dict['no'])
return self.arr
if direction == 'up' or direction == 'left':
return "It can't move that way right now :0 ....."
if ball == 'w3'.upper() or ball == 'w3':
if direction == 'down':
if self.arr[2][0] == '--':
self.arr[3][0] = self.arr[3][0].replace(self.arr[3][0],self.arr[2][0])
self.arr[2][0] = self.arr[2][0].replace(self.arr[2][0], self.arr[1][0])
self.arr[1][0] = self.arr[1][0].replace(self.arr[1][0], self.ball_dict['no'])
return self.arr
elif self.arr[2][0] != '--':
if self.arr[4][0] == '--':
self.arr[4][0] = self.arr[4][0].replace(self.arr[4][0], self.arr[3][0])
self.arr[3][0] = self.arr[3][0].replace(self.arr[3][0],self.arr[2][0])
self.arr[2][0] = self.arr[2][0].replace(self.arr[2][0], self.arr[1][0])
self.arr[1][0] = self.arr[1][0].replace(self.arr[1][0], self.ball_dict['no'])
return self.arr
if direction == 'right':
if self.arr[1][2] == '--' or self.arr[1][1] != '--':
self.arr[1][2] = self.arr[1][2].replace(self.arr[1][2],self.arr[1][1])
self.arr[1][1] = self.arr[1][1].replace(self.arr[1][1], self.arr[1][0])
self.arr[1][0] = self.arr[1][0].replace(self.arr[1][0], self.ball_dict['no'])
return self.arr
if direction == 'up' or direction == 'left':
return "It can't move that way right now :0 ....."
if ball == 'w4'.upper() or ball == 'w4':
if direction == 'down':
if self.arr[2][1] == '--':
self.arr[2][1] = self.arr[2][1].replace(self.arr[2][1], self.arr[1][1])
self.arr[1][1] = self.arr[1][1].replace(self.arr[1][1], self.ball_dict['no'])
return self.arr
elif self.arr[2][1] != '--':
self.arr[3][1] = self.arr[3][1].replace(self.arr[3][1],self.arr[2][1])
self.arr[2][1] = self.arr[2][1].replace(self.arr[2][1], self.arr[1][1])
self.arr[1][1] = self.arr[1][1].replace(self.arr[1][1], self.ball_dict['no'])
return self.arr
if direction == 'right':
if self.arr[1][2] == '--' or self.arr[1][1] != '--':
self.arr[1][2] = self.arr[1][2].replace(self.arr[1][2],self.arr[1][1])
self.arr[1][1] = self.arr[1][1].replace(self.arr[1][1], self.arr[1][0])
self.arr[1][0] = self.arr[1][0].replace(self.arr[1][0], self.ball_dict['no'])
return self.arr
if direction == 'up' or direction == 'left':
return "It can't move that way right now :0 ....."
if ball == 'w5'.upper() or ball == 'w5':
if direction == 'up':
if self.arr[4][5] == '--':
self.arr[4][5]= self.arr[4][5].replace(self.arr[4][5], self.arr[5][5])
self.arr[5][5]= self.arr[5][5].replace(self.arr[5][5], self.ball_dict['no'])
return self.arr
# if ball == 'em1' or ball == 'em1'.upper():
# if direction == 'down':
# self.arr[4][0] = self.arr[4][0].replace(self.arr[4][0],self.arr[3][0])
# self.arr[3][0] = self.arr[3][0].replace(self.arr[3][0],self.arr[2][0])
# self.arr[2][0] = self.arr[2][0].replace(self.arr[2][0], self.ball_dict['no'])
# return self.arr
# more directions
if __name__ == "__main__":
# try:
# player_obj = Player()
# player = player_obj.get_player()
# print(player)
# except IndexError as err:
# print("run example\n'python kubagame.py ken,black mary,white'")
player_obj = Player()
first_player = player_obj.playerA.split(',')[0].capitalize()
second_player = player_obj.playerB.split(',')[0].capitalize()
print(display_instruct())
print(f"""
Player {first_player} begins the play
""")
mover_obj = Board()
running = True
turn = 1
while running:
ball = input("Enter ball to move...")
direction = input("Enter direction to move towards (up/down/left/right)...")
mover = mover_obj.shift_balls(ball, direction)
turn += 1
print(mover)
if turn % 2 != 0:
print(f"{first_player} is playing...")
else:
print(f"{second_player} is playing...")
# running = False
# player = player.play()
print(mover)
# source: kubagame.py (morehwachege/kubagame)
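# Editorial sketch (not part of the original repo): every branch of
# Board.shift_balls above repeats one pattern -- walk from the pushed ball to
# the first empty slot, then shift everything back one step. A generic helper
# (the name push_line is hypothetical) could replace those branches. Returning
# False on edge pushes keeps the sketch simple; real Kuba rules treat pushing
# a ball off the board as a capture.
def push_line(arr, row, col, drow, dcol, empty='--'):
    """Push the ball at (row, col) one step along (drow, dcol)."""
    size = len(arr)
    r, c = row, col
    # Walk forward until an empty slot or the board edge is reached.
    while 0 <= r < size and 0 <= c < size and arr[r][c].strip() != empty:
        r, c = r + drow, c + dcol
    if not (0 <= r < size and 0 <= c < size):
        return False  # the push would shove a ball off the board
    # Shift every ball between the empty slot and the source one step.
    while (r, c) != (row, col):
        pr, pc = r - drow, c - dcol
        arr[r][c] = arr[pr][pc]
        r, c = pr, pc
    arr[row][col] = empty
    return True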
# Combine the three plots into one figure, with a shared legend at the bottom.
library(ggpubr)
#ggarrange function
Regression_Profiles111824 <- ggarrange(Loess_profile11, Loess_profile18, Loess_profile24, labels = c("1", "2", "3"), ncol = 3, nrow = 1, common.legend = TRUE, legend = "bottom")
# source: Regression_3plots.r (paulinelemenkova/R-Regression-Analysis-Loess-)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright 2018 University of Groningen
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Provides a processor that can perform a resolution transformation on a
molecule.
"""
from collections import defaultdict
from itertools import product, combinations
import networkx as nx
from ..molecule import Molecule, attributes_match
from .processor import Processor
from ..utils import are_all_equal, format_atom_string
from ..log_helpers import StyleAdapter, get_logger
LOGGER = StyleAdapter(get_logger(__name__))
def build_graph_mapping_collection(from_ff, to_ff, mappings):
"""
Function that produces a collection of :class:`vermouth.map_parser.Mapping`
objects.
This function is deprecated.
Parameters
----------
from_ff: vermouth.forcefield.ForceField
Origin force field.
to_ff: vermouth.forcefield.ForceField
Destination force field.
mappings: dict[str, dict[str, vermouth.map_parser.Mapping]]
All known mappings
Returns
-------
collections.abc.Iterable
A collection of mappings that map from `from_ff` to `to_ff`.
"""
return mappings[from_ff.name][to_ff.name].values()
def edge_matcher(graph1, graph2, node11, node12, node21, node22):
"""
Checks whether the resids for node11 and node12 in graph1 are the same, and
whether that's also true for node21 and node22 in graph2.
Parameters
----------
graph1: networkx.Graph
graph2: networkx.Graph
node11: collections.abc.Hashable
A node key in `graph1`.
node12: collections.abc.Hashable
A node key in `graph1`.
node21: collections.abc.Hashable
A node key in `graph2`.
node22: collections.abc.Hashable
A node key in `graph2`.
Returns
-------
bool
"""
node11 = graph1.nodes[node11]
node12 = graph1.nodes[node12]
node21 = graph2.nodes[node21]
node22 = graph2.nodes[node22]
return (node11.get('resid') == node12.get('resid')) ==\
(node21.get('resid') == node22.get('resid'))
def node_matcher(node1, node2):
"""
Checks whether nodes should be considered equal for isomorphism. Takes all
attributes in `node2` into account, except for the attributes "atype",
"charge", "charge_group", "resid", "replace", and "_old_atomname".
Parameters
----------
node1: dict
node2: dict
Returns
-------
bool
"""
return attributes_match(node1, node2,
ignore_keys=('atype', 'charge', 'charge_group', 'mass',
'resid', 'replace', '_old_atomname'))
def _old_atomname_match(node1, node2):
"""
Adds a _name attribute to copies of the nodes, and feeds it all to
:func:`node_matcher`
"""
name1 = node1.get('_old_atomname', node1['atomname'])
name2 = node2.get('_old_atomname', node2['atomname'])
node1 = node1.copy()
node2 = node2.copy()
node1['_name'] = name1
node2['_name'] = name2
del node1['atomname']
del node2['atomname']
return node_matcher(node1, node2)
def node_should_exist(modification, node_idx):
"""
Returns True if the node with index `node_idx` in `modification` should
already exist in the parent molecule.
Parameters
----------
modification: networkx.Graph
node_idx: collections.abc.Hashable
The key of a node in `modification`.
Returns
-------
bool
True iff the node `node_idx` in `modification` should already exist in
the parent molecule.
"""
return not modification.nodes[node_idx].get('PTM_atom', False)
def ptm_resname_match(mol_node, map_node):
"""
As :func:`node_matcher`, except that empty resname and false PTM_atom
attributes from `node2` are removed.
"""
if 'resname' in map_node and not map_node['resname']:
map_node = map_node.copy()
del map_node['resname']
if 'PTM_atom' in map_node and not map_node['PTM_atom']:
map_node = map_node.copy()
del map_node['PTM_atom']
if 'modifications' in mol_node:
map_node = map_node.copy()
matching_mod = all(map_mod in mol_node.get('modifications', [])
for map_mod in map_node.pop('modifications', []))
else:
matching_mod = True
is_equal = node_matcher(mol_node, map_node)
return is_equal and matching_mod
def cover(to_cover, options):
"""
Implements a recursive backtracking algorithm to cover all elements of
`to_cover` with the elements from `options` that have the lowest index.
In this context "to cover" means that all items in an element of `options`
must be in `to_cover`. Elements in `to_cover` can only be covered *once*.
Parameters
----------
to_cover: collections.abc.MutableSet
The items that should be covered.
options: collections.abc.Sequence[collections.abc.MutableSet]
The elements that can be used to cover `to_cover`. All items in an
element of `options` must be present in `to_cover` to qualify.
Returns
-------
None or list
None if no covering can be found, or the list of items from `options`
with the lowest indices that exactly covers `to_cover`.
"""
if not to_cover:
return []
for idx, option in enumerate(options):
if all(item in to_cover for item in option):
left_to_cover = to_cover.copy()
for item in option:
# Only remove the leftmost item. PS. we know for sure all items
# in option are in left_to_cover at least once.
left_to_cover.remove(item)
found = cover(left_to_cover, options[idx:])
if found is not None:
return [option] + found
return None
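# An editorial illustration (not part of the original source): cover() returns
# the earliest-listed options that exactly tile `to_cover`, or None when no
# exact tiling exists.
#     cover(['a', 'b', 'c'], [['a', 'b'], ['c'], ['b']])  -> [['a', 'b'], ['c']]
#     cover(['a', 'b'], [['a', 'c']])                     -> None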
def get_mod_mappings(mappings):
"""
Returns a dict of all known modification mappings.
Parameters
----------
mappings: collections.abc.Iterable[vermouth.map_parser.Mapping]
All known mappings.
Returns
-------
dict[tuple[str], vermouth.map_parser.Mapping]
All mappings that describe a modification mapping.
"""
out = {}
for mapping in mappings:
if mapping.type == 'modification':
out[mapping.names] = mapping
return out
def modification_matches(molecule, mappings):
"""
Returns a minimal combination of modification mappings and where they
should be applied that describes all modifications in `molecule`.
Parameters
----------
molecule: networkx.Graph
The molecule whose modifications should be treated. Modifications are
described by the 'modifications' node attribute.
mappings: collections.abc.Iterable[vermouth.map_parser.Mapping]
All known mappings.
Returns
-------
list[tuple[dict, vermouth.molecule.Link, dict]]
A list with the following items:
Dict describing the correspondence of node keys in `molecule` to
node keys in the modification.
The modification.
Dict with all reference atoms, mapping modification nodes to
nodes in `molecule`.
"""
modified_nodes = set() # This will contain whole residues.
for idx, node in molecule.nodes.items():
if node.get('modifications', []):
modified_nodes.add(idx)
ptm_subgraph = molecule.subgraph(modified_nodes)
grouped = nx.connected_components(ptm_subgraph)
found_ptm_groups = []
# For every modification group we would like a set with the names of the
# involved modifications, so we can use that to figure out which mod
# mappings should be used.
for group in grouped:
modifications = {
mod.name for mol_idx in group for mod in molecule.nodes[mol_idx].get('modifications', [])
}
found_ptm_groups.append(modifications)
needed_mod_mappings = set()
known_mod_mappings = get_mod_mappings(mappings)
for group in found_ptm_groups:
# known_mod_mappings is a dict[tuple[str], Mapping]. We want to know
# the minimal combination of those needed to cover all the PTMs found
# in group. The cheapest solution is covering the names of the PTMs in
# group with keys from known_mod_mappings. An improvement would be to
# do the graph covering again.
# TODO?
covered_by = cover(list(group),
sorted(known_mod_mappings, key=len, reverse=True))
if covered_by is None:
LOGGER.warning("Can't find modification mappings for the "
"modifications {}. The following modification "
"mappings are known: {}",
list(group), known_mod_mappings,
type='unmapped-atom')
continue
needed_mod_mappings.update(covered_by)
matches = []
# Sort on the tuple[str] type names of the mappings so that mappings that
# define most modifications at the same time get processed first
for mod_name in sorted(needed_mod_mappings, key=len, reverse=True):
mod_mapping = known_mod_mappings[mod_name]
for mol_to_mod, modification, references in mod_mapping.map(molecule, node_match=ptm_resname_match):
matches.append((mol_to_mod, modification, references))
if not set(mol_to_mod) <= modified_nodes:
# TODO: better message
LOGGER.warning('Overlapping modification mappings', type='inconsistent-data')
modified_nodes -= set(mol_to_mod)
return matches
def apply_block_mapping(match, molecule, graph_out, mol_to_out, out_to_mol):
"""
Performs a mapping operation for a "block". `match` is a tuple of 3
elements that describes what nodes in `molecule` should correspond to
a :class:`vermouth.molecule.Block` that should be added to `graph_out`, and
any atoms that should be used a references.
Add the required :class:`vermouth.molecule.Block` to `graph_out`, and
updates `mol_to_out` and `out_to_mol` *in-place*.
Parameters
----------
match
molecule: networkx.Graph
The original molecule
graph_out: vermouth.molecule.Molecule
The newly created graph that describes `molecule` at a different
resolution.
mol_to_out: dict[collections.abc.Hashable, dict[collections.abc.Hashable, float]]
A dict mapping nodes in `molecule` to nodes in `graph_out` with the
associated weights.
out_to_mol: dict[collections.abc.Hashable, dict[collections.abc.Hashable, float]]
A dict mapping nodes in `graph_out` to nodes in `molecule` with the
associated weights.
Returns
-------
set
A set of all overlapping nodes that were already mapped before.
set
A set of none-to-one mappings. I.e. nodes that were created without
nodes mapping to them.
dict
A dict of reference atoms, mapping `graph_out` nodes to nodes in
`molecule`.
"""
mol_to_block, blocks_to, references = match
if graph_out.nrexcl is None:
graph_out.nrexcl = blocks_to.nrexcl
try:
# merge_molecule will return a dict mapping the node keys of the
# added block to the ones in graph_out
# FIXME: Issue #154 lives here.
block_to_out = graph_out.merge_molecule(blocks_to)
except ValueError:
# This probably means the nrexcl of the block is different from the
# others. This means the user messed up their data. Or there are
# different forcefields in the same forcefield folder...
LOGGER.exception('Residue(s) {} is not compatible with the others',
set(nx.get_node_attributes(blocks_to, 'resname').values()),
type='inconsistent-data')
raise
# overlap does not have to be a dict, since the values in block_to_out are
# guaranteed to be unique in graph_out. So we can look them up in
# mol_to_out
overlap = set(mol_to_out.keys()) & set(mol_to_block.keys())
for mol_idx in mol_to_block:
for block_idx, weight in mol_to_block[mol_idx].items():
out_idx = block_to_out[block_idx]
mol_to_out[mol_idx][out_idx] = weight
out_to_mol[out_idx][mol_idx] = weight
none_to_one_mappings = set()
mapped_block_idxs = {block_idx
for mol_idx in mol_to_block
for block_idx in mol_to_block[mol_idx]}
for spawned in set(blocks_to.nodes) - mapped_block_idxs:
# These nodes come from "nowhere", so, let's pretend they come from
# all nodes in the block. This helps with setting attributes such
# as 'chain'
# "None to one" mapping - this is fine. This happens with e.g.
# charge dummies.
spawned = block_to_out[spawned]
none_to_one_mappings.add(spawned)
for mol_idx in mol_to_block:
mol_to_out[mol_idx][spawned] = 0
out_to_mol[spawned][mol_idx] = 0
new_references = {block_to_out[mod_idx]: mol_idx for mod_idx, mol_idx in references.items()}
return overlap, none_to_one_mappings, new_references
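# Editorial illustration (not part of the original source): the correspondence
# dicts built here are nested weight maps. After mapping one output bead 0
# from input atoms 10 and 11 with equal weights, they would look like:
#     mol_to_out == {10: {0: 1.0}, 11: {0: 1.0}}
#     out_to_mol == {0: {10: 1.0, 11: 1.0}}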
def apply_mod_mapping(match, molecule, graph_out, mol_to_out, out_to_mol):
"""
Performs the mapping operation for a modification.
Parameters
----------
match
molecule: networkx.Graph
The original molecule
graph_out: vermouth.molecule.Molecule
The newly created graph that describes `molecule` at a different
resolution.
mol_to_out: dict[collections.abc.Hashable, dict[collections.abc.Hashable, float]]
A dict mapping nodes in `molecule` to nodes in `graph_out` with the
associated weights.
out_to_mol: dict[collections.abc.Hashable, dict[collections.abc.Hashable, float]]
A dict mapping nodes in `graph_out` to nodes in `molecule` with the
associated weights.
Returns
-------
dict[str, dict[tuple, vermouth.molecule.Link]]
A dict of all modifications that have been applied by this modification
mapping operations. Maps interaction type to involved atoms to the
modification responsible.
dict
A dict of reference atoms, mapping `graph_out` nodes to nodes in
`molecule`.
"""
mol_to_mod, modification, references = match
LOGGER.info('Applying modification mapping {}', modification.name, type='general')
graph_out.citations.update(modification.citations)
mod_to_mol = defaultdict(dict)
for mol_idx, mod_idxs in mol_to_mod.items():
for mod_idx in mod_idxs:
mod_to_mol[mod_idx][mol_idx] = mol_to_mod[mol_idx][mod_idx]
mod_to_mol = dict(mod_to_mol)
mod_to_out = {}
# Some nodes of modification will already exist. The question is
# which, and which index they have in graph_out.
for mod_idx in modification:
if not node_should_exist(modification, mod_idx):
# Node does not exist yet.
if not graph_out.nodes:
out_idx = 0
else:
out_idx = max(graph_out) + 1
mod_to_out[mod_idx] = out_idx
graph_out.add_node(out_idx, **modification.nodes[mod_idx])
else:
# Node should already exist
# We need to find the out_index of this node. Since the
# node already exists, there is at least one mol_idx in
# mol_to_out that refers to the correct out_idx. What we do
# is try to find those mol indices by looking at
# mod_to_mol.
# Find the other mol nodes that map to this bead according
# to the mod mapping...
mol_idxs = mod_to_mol[mod_idx]
# ...and make the node with the correct attributes.
out_idxs = set()
for mol_idx in mol_idxs:
out_idxs.update(mol_to_out.get(mol_idx, {}))
for out_idx in out_idxs:
out_node = graph_out.nodes[out_idx]
if modification.nodes[mod_idx]['atomname'] == out_node['atomname']:
break
else: # No break, so no matching node found
raise ValueError("No node found in molecule with "
"atomname {}".format(modification.nodes[mod_idx]['atomname']))
# Undefined loop variable is guarded against by the else-raise above
mod_to_out[mod_idx] = out_idx # pylint: disable=undefined-loop-variable
graph_out.nodes[out_idx].update(modification.nodes[mod_idx].get('replace', {})) # pylint: disable=undefined-loop-variable
graph_out.nodes[out_idx]['modifications'] = graph_out.nodes[out_idx].get('modifications', [])
if modification not in graph_out.nodes[out_idx]['modifications']:
graph_out.nodes[out_idx]['modifications'].append(modification)
for mol_idx in mol_to_mod:
for mod_idx, weight in mol_to_mod[mol_idx].items():
out_idx = mod_to_out[mod_idx]
if mol_idx not in mol_to_out:
mol_to_out[mol_idx] = {}
mol_to_out[mol_idx][out_idx] = weight
if out_idx not in out_to_mol:
out_to_mol[out_idx] = {}
out_to_mol[out_idx][mol_idx] = weight
for mod_idx, mod_jdx in modification.edges:
out_idx = mod_to_out[mod_idx]
out_jdx = mod_to_out[mod_jdx]
if not graph_out.has_edge(out_idx, out_jdx):
graph_out.add_edge(out_idx, out_jdx)
new_references = {mod_to_out[mod_idx]: mol_idx for mod_idx, mol_idx in references.items()}
# Apply interactions
applied_interactions = defaultdict(lambda: defaultdict(list))
for interaction_type, interactions in modification.interactions.items():
for interaction in interactions:
atoms = [mod_to_out[mod_idx] for mod_idx in interaction.atoms]
assert len(atoms) == len(interaction.atoms)
interaction = interaction._replace(atoms=atoms)
applied_interactions[interaction_type][tuple(atoms)].append(modification)
graph_out.add_interaction(interaction_type, **interaction._asdict())
return dict(applied_interactions), new_references
def attrs_from_node(node, attrs):
"""
Helper function that applies a "replace" operations on the node if
required, and then returns a dict of the attributes listed in `attrs`.
Parameters
----------
node: dict
attrs: collections.abc.Container
Attributes that should be in the output.
Returns
-------
dict
"""
if 'replace' in node:
node = node.copy()
node.update(node['replace'])
return {attr: val for attr, val in node.items() if attr in attrs}
def do_mapping(molecule, mappings, to_ff, attribute_keep=(), attribute_must=()):
"""
Creates a new :class:`~vermouth.molecule.Molecule` in force field `to_ff`
from `molecule`, based on `mappings`. It does this by doing a subgraph
isomorphism of all blocks in `mappings` and `molecule`. Will issue warnings
if there's atoms not contributing to the new molecule, or if there's
overlapping blocks.
Node attributes in the new molecule will come from the blocks constructing
it, except for those in `attribute_keep`, which lists the attributes that
will be kept from `molecule`.
Parameters
----------
molecule: :class:`~vermouth.molecule.Molecule`
The molecule to transform.
mappings: dict[str, dict[str, dict[str, tuple]]]
``{ff_name: {ff_name: {block_name: (mapping, weights, extra)}}}``
A collection of mappings, as returned by e.g.
:func:`~vermouth.map_input.read_mapping_directory`.
to_ff: :class:`~vermouth.forcefield.ForceField`
The force field to transform to.
attribute_keep: :class:`~collections.abc.Iterable`
The attributes that will always be transferred from `molecule` to the
produced graph.
attribute_must: :class:`~collections.abc.Iterable`
The attributes that the nodes in the output graph *must* have. If
they're not provided by the mappings/blocks they're taken from
`molecule`.
Returns
-------
:class:`~vermouth.molecule.Molecule`
A new molecule, created by transforming `molecule` to `to_ff` according
to `mappings`.
"""
attribute_keep = tuple(attribute_keep)
attribute_must = tuple(attribute_must)
# Transferring the meta maybe should be a copy, or a deep copy...
# If it breaks we look at this line.
graph_out = Molecule(force_field=to_ff, meta=molecule.meta)
mappings = build_graph_mapping_collection(molecule.force_field, to_ff, mappings)
block_matches = []
for mapping in mappings:
if mapping.type == 'block':
block_matches.extend(mapping.map(molecule, node_match=_old_atomname_match,
edge_match=edge_matcher))
mod_matches = modification_matches(molecule, mappings)
# Sort by lowest node key per residue. We need to do this, since
# merge_molecule creates new resid's in order.
block_sort_key = lambda x: min(x[0].keys())
# Sort modifications by the lowest mapped index when all touched atoms are
# newly created PTM atoms that do not exist yet. Otherwise, take the
# highest index. If we don't do this we run the risk that a PTM mapped to
# a node with a low index also changes a node of a higher residue, causing
# all sorts of havoc.
mod_sort_key = lambda x: (max(x[0].keys())
if any(node_should_exist(x[1], idx)
for idx in
{mod_idx for mol_idx in x[0]
for mod_idx in x[0][mol_idx]})
else
min(x[0].keys()))
block_matches = sorted(block_matches, key=block_sort_key, reverse=True)
mod_matches = sorted(mod_matches, key=mod_sort_key, reverse=True)
# There are a few separate mapping cases to be considered:
# One to one mapping - e.g. AA to AA, the simplest case
# Many to one mapping - e.g. AA to CG without sharing atoms between beads
# Many to many mapping - e.g. AA to CG *with* sharing atoms between beads
# These three cases are covered by the normal operation, the following are
# caught with some additional logic
# None to one - whole block taken as origin, with weights 0
# One to none - unmapped atoms (produces a warning)
# One to many - e.g. CG to AA. This mostly works, but we don't know how to
# make sure the "many" should be connected together. Gives a
# warning if it's disconnected.
mol_to_out = defaultdict(dict)
out_to_mol = defaultdict(dict)
overlapping_mappings = set()
none_to_one_mappings = set()
modified_interactions = {}
all_references = {}
all_matches = []
while block_matches or mod_matches:
# Take the match with the lowest atom id, and prefer blocks over
# modifications
if (not block_matches or
(mod_matches and
mod_sort_key(mod_matches[-1]) < block_sort_key(block_matches[-1]))):
match = mod_matches.pop(-1)
applied_interactions, refs = apply_mod_mapping(match,
molecule, graph_out,
mol_to_out, out_to_mol)
modified_interactions.update(applied_interactions)
else:
match = block_matches.pop(-1)
overlap, none_to_one, refs = apply_block_mapping(match,
molecule, graph_out,
mol_to_out, out_to_mol)
overlapping_mappings.update(overlap)
none_to_one_mappings.update(none_to_one)
all_matches.append(match)
all_references.update(refs)
# At this point, we should have created graph_out at the desired
# resolution, *and* have the associated correspondence in mol_to_out and
# out_to_mol.
# Set node attributes based on what the original atoms are.
to_remove = set()
for out_idx in out_to_mol:
mol_idxs = out_to_mol[out_idx].keys()
# Keep track of what bead comes from where
subgraph = molecule.subgraph(mol_idxs)
graph_out.nodes[out_idx]['graph'] = subgraph
weights = out_to_mol[out_idx]
graph_out.nodes[out_idx]['mapping_weights'] = weights
if out_idx in all_references:
ref_idx = all_references[out_idx]
new_attrs = attrs_from_node(molecule.nodes[ref_idx],
attribute_keep+attribute_must)
for attr, val in new_attrs.items():
# Attrs in attribute_keep we always transfer, those in
# attribute_must we transfer only if they're not already in the
# created node
if attr in attribute_keep or attr not in graph_out.nodes[out_idx]:
# Transfer just this attribute; updating with all of new_attrs would
# also copy attributes that should not be transferred.
graph_out.nodes[out_idx][attr] = val
else:
attrs = defaultdict(list)
for mol_idx in mol_idxs:
new_attrs = attrs_from_node(molecule.nodes[mol_idx],
attribute_keep+attribute_must)
for attr, val in new_attrs.items():
attrs[attr].append(val)
attrs_not_sane = []
for attr, vals in attrs.items():
if attr in attribute_keep or attr not in graph_out.nodes[out_idx]:
if vals:
graph_out.nodes[out_idx][attr] = vals[0]
else:
# No node had the attribute.
graph_out.nodes[out_idx][attr] = None
if not are_all_equal(vals):
attrs_not_sane.append(attr)
if attrs_not_sane:
LOGGER.warning('The attributes {} for atom {} are going to'
' be garbage because the attributes of the'
' constructing atoms are different.',
attrs_not_sane,
format_atom_string(graph_out.nodes[out_idx]),
type='inconsistent-data')
if graph_out.nodes[out_idx].get('atomname', '') is None:
to_remove.add(out_idx)
# We need to add edges between residues; edges within residues come from
# the blocks.
for match1, match2 in combinations(all_matches, 2):
match1 = match1[0]
match2 = match2[0]
edges = molecule.edges_between(match1.keys(), match2.keys())
# TODO: Backmapping needs love here
for mol_idx, mol_jdx in edges:
# Subtract none_to_one_mappings, since those should not be made to
# connect to things automatically.
out_idxs = mol_to_out[mol_idx].keys() - none_to_one_mappings
out_jdxs = mol_to_out[mol_jdx].keys() - none_to_one_mappings
for out_idx, out_jdx in product(out_idxs, out_jdxs):
if out_idx != out_jdx:
graph_out.add_edge(out_idx, out_jdx)
############################
# Sanity check the results #
############################
# "Many to one" mapping - overlapping blocks means dubious node properties
if overlapping_mappings:
LOGGER.warning('These atoms are covered by multiple blocks. This is a '
'bad idea: {}. This probably means the following output'
' particles are wrong: {}.',
{format_atom_string(molecule.nodes[mol_idx])
for mol_idx in overlapping_mappings},
{format_atom_string(graph_out.nodes[out_idx], atomid='')
for mol_idx in overlapping_mappings
for out_idx in mol_to_out[mol_idx]},
type='inconsistent-data')
# "One to many" mapping - not necessarily a problem, unless it leads to
# missing edges
for mol_idx in mol_to_out:
# Subtract the none to one mapped nodes, since those don't contribute
# and make false positives.
out_idxs = mol_to_out[mol_idx].keys() - none_to_one_mappings
if len(out_idxs) > 1 and not nx.is_connected(graph_out.subgraph(out_idxs)):
# In this case there's a single input particle mapping to multiple
# output particles. This probably means there's bonds missing
LOGGER.warning('The input particle {} maps to multiple output '
'particles: {}, which are disconnected. There are '
'probably edges missing.',
format_atom_string(molecule.nodes[mol_idx]),
{format_atom_string(graph_out.nodes[out_idx], atomid='')
for out_idx in out_idxs},
type='inconsistent-data')
# "One to none" mapping - this means your mapping files are incomplete
uncovered_atoms = set(molecule.nodes.keys()) - set(mol_to_out.keys())
if uncovered_atoms:
uncovered_hydrogens = {idx for idx in uncovered_atoms
if molecule.nodes[idx].get('element', '') == 'H'}
if uncovered_hydrogens:
# Maybe this should be info?
LOGGER.debug('These hydrogen atoms are not covered by a mapping.'
' This is not the best idea. {}',
[format_atom_string(molecule.nodes[idx])
for idx in uncovered_hydrogens],
type='unmapped-atom'
)
other_uncovered = uncovered_atoms - uncovered_hydrogens
if other_uncovered:
LOGGER.warning("These atoms are not covered by a mapping. Either"
" your mappings don't describe all atoms (bad idea),"
" or, there's no mapping available for all residues."
" {}",
[format_atom_string(molecule.nodes[idx])
for idx in other_uncovered],
type='unmapped-atom')
for interaction_type in modified_interactions:
for atoms, modifications in modified_interactions[interaction_type].items():
if len(modifications) != 1:
# TODO: better message
LOGGER.warning('Interaction set by multiple modification '
'mappings', type='inconsistent-data')
graph_out.remove_nodes_from(to_remove)
return graph_out
class DoMapping(Processor):
"""
Processor for performing a resolution transformation from one force field to
another.
This processor will create new Molecules by stitching together Blocks from
the target force field, as dictated by the available mappings.
Fragments/atoms/residues/modifications for which no mapping is available
will not be represented in the resulting molecule.
The resulting molecules will have intra-block edges and interactions as
specified in the blocks from the target force field. Inter-block edges will
be added based on the connectivity of the original molecule, but no
interactions will be added for those.
Attributes
----------
mappings: dict[str, dict[str, dict[str, tuple]]]
``{ff_name: {ff_name: {block_name: (mapping, weights, extra)}}}``
A collection of mappings, as returned by e.g.
:func:`~vermouth.map_input.read_mapping_directory`.
to_ff: vermouth.forcefield.ForceField
The force field to map to.
    delete_unknown: bool
        If True, molecules for which no mapping is found are dropped from the
        system instead of raising a :exc:`KeyError`.
attribute_keep: tuple[str]
The attributes that will always be transferred from the input molecule
to the produced graph.
attribute_must: tuple[str]
The attributes that the nodes in the output graph *must* have. If
they're not provided by the mappings/blocks they're taken from
the original molecule.
See Also
--------
:func:`do_mapping`
"""
def __init__(self, mappings, to_ff, delete_unknown=False, attribute_keep=(),
attribute_must=()):
self.mappings = mappings
self.to_ff = to_ff
self.delete_unknown = delete_unknown
self.attribute_keep = tuple(attribute_keep)
self.attribute_must = tuple(attribute_must)
super().__init__()
def run_molecule(self, molecule):
return do_mapping(
molecule,
mappings=self.mappings,
to_ff=self.to_ff,
attribute_keep=self.attribute_keep,
attribute_must=self.attribute_must
)
def run_system(self, system):
mols = []
for molecule in system.molecules:
try:
new_molecule = self.run_molecule(molecule)
            except KeyError as err:
                if not self.delete_unknown:
                    raise err
                else:
                    # TODO: raise a loud warning here
                    continue
else:
if new_molecule:
mols.append(new_molecule)
system.molecules = mols
system.force_field = self.to_ff
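
# ----------------------------------------------------------------------------
# Minimal usage sketch (added commentary, not part of the original module).
# ``mappings`` is the nested dict described in the class docstring, e.g. as
# returned by vermouth.map_input.read_mapping_directory; ``martini_ff`` and
# ``system`` are illustrative placeholders for the target ForceField and the
# System to transform.
#
#     processor = DoMapping(mappings, to_ff=martini_ff,
#                           attribute_keep=('chain',),
#                           attribute_must=('resname',))
#     processor.run_system(system)
#     # system.molecules are now built from martini_ff blocks and
#     # system.force_field is martini_ff
# ----------------------------------------------------------------------------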
|
{"hexsha": "a89bdf6a1a8f68042942ca2e6e74909e19a7b71c", "size": 34148, "ext": "py", "lang": "Python", "max_stars_repo_path": "vermouth/processors/do_mapping.py", "max_stars_repo_name": "biomolsim/vermouth-martinize", "max_stars_repo_head_hexsha": "332295078bfea680da7f488d2a9d61a97b8c9ae9", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 35, "max_stars_repo_stars_event_min_datetime": "2018-02-16T12:39:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-24T12:18:36.000Z", "max_issues_repo_path": "vermouth/processors/do_mapping.py", "max_issues_repo_name": "biomolsim/vermouth-martinize", "max_issues_repo_head_hexsha": "332295078bfea680da7f488d2a9d61a97b8c9ae9", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 300, "max_issues_repo_issues_event_min_datetime": "2018-02-16T12:24:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T13:41:36.000Z", "max_forks_repo_path": "vermouth/processors/do_mapping.py", "max_forks_repo_name": "biomolsim/vermouth-martinize", "max_forks_repo_head_hexsha": "332295078bfea680da7f488d2a9d61a97b8c9ae9", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2018-11-07T18:52:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-06T08:34:38.000Z", "avg_line_length": 42.0024600246, "max_line_length": 133, "alphanum_fraction": 0.62820663, "include": true, "reason": "import networkx", "num_tokens": 7389}
|
import os
import numpy as np
import tensorflow as tf
import pickle as pkl
import argparse
import gym
def load_expert_data(data_path):
with open(data_path, 'rb') as f:
data = pkl.load(f)
return data
class Agent():
def __init__(self, env):
self.sess = tf.Session()
self.env = gym.make(env)
self.obs = tf.placeholder(tf.float32, shape=tuple([None] + list(self.env.observation_space.shape)))
self.actions = tf.placeholder(tf.float32, shape= tuple([None] + list(self.env.action_space.shape)))
self.build_nn()
self.sess.run(tf.global_variables_initializer())
def build_nn(self):
act_1 = tf.layers.Dense(128, activation=tf.nn.tanh)(self.obs)
self.pred_actions = tf.layers.Dense(self.env.action_space.shape[0], activation=None)(act_1)
self.loss = tf.reduce_sum(tf.reduce_mean((self.pred_actions - self.actions)**2, axis=0))
self.optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
self.train_op = self.optimizer.minimize(self.loss)
    def train(self, obs, actions, batch_size=64, num_steps=10000):
        # Mini-batch gradient descent over the expert (observation, action) pairs.
        for _ in range(num_steps):
            idx = np.random.randint(0, obs.shape[0], size=batch_size)
            self.sess.run(self.train_op, feed_dict={self.obs: obs[idx],
                                                    self.actions: actions[idx]})
def predict(self, obs):
return self.sess.run(self.pred_actions, feed_dict={self.obs : obs})
def evaluate_agent(self, num_rollouts, render=True):
def rollout():
obs = self.env.reset()
done = False
total_return = 0
while not done:
                action = self.predict(np.array([obs]))[0]  # predict returns a (1, act_dim) batch
                obs, ret, done, _ = self.env.step(action)
                if render: self.env.render()
total_return += ret
return total_return
env_stats=[]
for _ in range(num_rollouts):
env_stats += [rollout()]
return np.mean(env_stats), np.var(env_stats)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("data_path", type=str, nargs=None,
help="pickle file path to rollout data")
cmd_args = parser.parse_args()
    # Derive the environment name from the data file name,
    # e.g. 'expert_data/Hopper-v2.pkl' -> 'Hopper-v2'.
    env_name = os.path.splitext(os.path.basename(cmd_args.data_path))[0]
    print("instantiating an agent for", env_name)
    rollouts = load_expert_data(cmd_args.data_path)
    bc_agent = Agent(env_name)
    # Flatten expert actions to (N, act_dim) instead of hard-coding the dim.
    act_dim = bc_agent.env.action_space.shape[0]
    bc_agent.train(rollouts['observations'], rollouts['actions'].reshape(-1, act_dim))
print(bc_agent.evaluate_agent(100))
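
# Compatibility note (a sketch, not part of the original script): the code
# above uses the TensorFlow 1.x graph/session API (tf.placeholder, tf.Session,
# tf.layers). Under a TensorFlow 2.x install it can still run by importing the
# v1 compatibility layer in place of the plain import above:
#
#     import tensorflow.compat.v1 as tf
#     tf.disable_eager_execution()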
|
{"hexsha": "b82a2e2b9403eae621078f7fe2494b13a103bbdf", "size": 2141, "ext": "py", "lang": "Python", "max_stars_repo_path": "hw1/bc.py", "max_stars_repo_name": "namuchan95/homework", "max_stars_repo_head_hexsha": "2bd9da1bdf5408d14097566dc7261956eb627e00", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw1/bc.py", "max_issues_repo_name": "namuchan95/homework", "max_issues_repo_head_hexsha": "2bd9da1bdf5408d14097566dc7261956eb627e00", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw1/bc.py", "max_forks_repo_name": "namuchan95/homework", "max_forks_repo_head_hexsha": "2bd9da1bdf5408d14097566dc7261956eb627e00", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.5322580645, "max_line_length": 102, "alphanum_fraction": 0.7141522653, "include": true, "reason": "import numpy", "num_tokens": 543}
|
//==============================================================================
// Copyright 2003 - 2013 LASMEA UMR 6602 CNRS/Univ. Clermont II
// Copyright 2009 - 2013 LRI UMR 8623 CNRS/Univ Paris Sud XI
//
// Distributed under the Boost Software License, Version 1.0.
// See accompanying file LICENSE.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt
//==============================================================================
#include <nt2/combinatorial/include/functions/factorial.hpp>
#include <nt2/sdk/functor/meta/call.hpp>
#include <nt2/sdk/unit/module.hpp>
#include <nt2/sdk/unit/tests/relation.hpp>
#include <nt2/sdk/unit/tests/type_expr.hpp>
#include <nt2/sdk/unit/tests/ulp.hpp>
#include <nt2/sdk/unit/tests/basic.hpp>
#include <boost/simd/sdk/config.hpp>
#include <nt2/include/functions/min.hpp>
#include <nt2/include/functions/saturate.hpp>
#include <nt2/include/constants/eight.hpp>
#include <nt2/include/constants/eleven.hpp>
#include <nt2/include/constants/five.hpp>
#include <nt2/include/constants/four.hpp>
#include <nt2/include/constants/nine.hpp>
#include <nt2/include/constants/one.hpp>
#include <nt2/include/constants/seven.hpp>
#include <nt2/include/constants/six.hpp>
#include <nt2/include/constants/ten.hpp>
#include <nt2/include/constants/three.hpp>
#include <nt2/include/constants/twelve.hpp>
#include <nt2/include/constants/two.hpp>
#include <nt2/include/constants/valmax.hpp>
#include <nt2/include/constants/zero.hpp>
#include <nt2/include/constants/inf.hpp>
#include <nt2/include/constants/nan.hpp>
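
// Commentary on the expected values below (added note): factorial overflows
// small types quickly (8! = 40320 already exceeds the range of 8- and 16-bit
// integers), so the reference values are clamped with
// nt2::min(..., nt2::Valmax<T>()) for floating-point types and wrapped in
// nt2::saturate<T>(...) for integral types: factorial is expected to
// saturate at Valmax rather than wrap around.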
NT2_TEST_CASE_TPL ( factorial_real__1_0, NT2_REAL_TYPES)
{
using nt2::factorial;
using nt2::tag::factorial_;
typedef typename nt2::meta::call<factorial_(T)>::type r_t;
typedef T wished_r_t;
// return type conformity test
NT2_TEST_TYPE_IS(r_t, wished_r_t);
// specific values tests
#ifndef BOOST_SIMD_NO_INVALIDS
NT2_TEST_ULP_EQUAL(factorial(nt2::Inf<T>()), nt2::Inf<T>(), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Nan<T>()), nt2::Nan<T>(), 0);
#endif
NT2_TEST_ULP_EQUAL(factorial(nt2::Eight<T>()), nt2::min((T(40320ll )),nt2::Valmax<T>()), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Eleven<T>()), nt2::min((T(39916800ll )),nt2::Valmax<T>()), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Five<T>()), T(120), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Four<T>()), T(24), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Nine<T>()), nt2::min((T(362880ll )),nt2::Valmax<T>()), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::One<T>()), nt2::One<T>(), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Seven<T>()), nt2::min((T(5040ll )),nt2::Valmax<T>()), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Six<T>()), nt2::min((T(720ll )),nt2::Valmax<T>()), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Ten<T>()), nt2::min((T(3628800ll )),nt2::Valmax<T>()), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Three<T>()), nt2::Six<T>(), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Twelve<T>()), nt2::min((T(479001600ll)),nt2::Valmax<T>()), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Two<T>()), nt2::Two<T>(), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Zero<T>()), nt2::One<T>(), 0);
}
NT2_TEST_CASE_TPL ( factorial_integer__1_0, NT2_INTEGRAL_TYPES)
{
using nt2::factorial;
using nt2::tag::factorial_;
typedef typename nt2::meta::call<factorial_(T)>::type r_t;
typedef T wished_r_t;
// return type conformity test
NT2_TEST_TYPE_IS(r_t, wished_r_t);
// specific values tests
NT2_TEST_ULP_EQUAL(factorial(nt2::Eight<T>()), T(nt2::saturate<T>(40320ull )), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Eleven<T>()), T(nt2::saturate<T>(39916800ull)), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Five<T>()), T(nt2::saturate<T>(120)), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Four<T>()), T(nt2::saturate<T>(24)), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Nine<T>()), T(nt2::saturate<T>(362880ull )), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::One<T>()), nt2::One<T>(), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Seven<T>()), T(nt2::saturate<T>(5040ull )), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Six<T>()), T(nt2::saturate<T>(720ull )), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Ten<T>()), T(nt2::saturate<T>(3628800ull )), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Three<T>()), nt2::Six<T>(), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Twelve<T>()), T(nt2::saturate<T>(479001600ull)), 0);
NT2_TEST_ULP_EQUAL(factorial(nt2::Zero<T>()), nt2::One<T>(), 0);
}
|
{"hexsha": "a76a8a72a5d11f5b565224081497ee9eba04fa28", "size": 4462, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "modules/core/combinatorial/unit/scalar/factorial.cpp", "max_stars_repo_name": "psiha/nt2", "max_stars_repo_head_hexsha": "5e829807f6b57b339ca1be918a6b60a2507c54d0", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 34.0, "max_stars_repo_stars_event_min_datetime": "2017-05-19T18:10:17.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-04T02:18:13.000Z", "max_issues_repo_path": "modules/core/combinatorial/unit/scalar/factorial.cpp", "max_issues_repo_name": "psiha/nt2", "max_issues_repo_head_hexsha": "5e829807f6b57b339ca1be918a6b60a2507c54d0", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "modules/core/combinatorial/unit/scalar/factorial.cpp", "max_forks_repo_name": "psiha/nt2", "max_forks_repo_head_hexsha": "5e829807f6b57b339ca1be918a6b60a2507c54d0", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": 7.0, "max_forks_repo_forks_event_min_datetime": "2017-12-02T12:59:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-31T12:46:14.000Z", "avg_line_length": 47.9784946237, "max_line_length": 98, "alphanum_fraction": 0.668982519, "num_tokens": 1431}
|
Lemma id : forall A : Type, A -> A.
Proof.
exact (fun A (x : A) => x).
Defined.
Print id.
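(* Expected output of [Print id] (a sketch; exact layout varies by Coq version):
     id = fun (A : Type) (x : A) => x
          : forall A : Type, A -> A *)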
|
{"author": "erikmd", "repo": "docker-run-github-workflow-example", "sha": "b30fbadb7a19078019791af76dcd8fdd06eb3a50", "save_path": "github-repos/coq/erikmd-docker-run-github-workflow-example", "path": "github-repos/coq/erikmd-docker-run-github-workflow-example/docker-run-github-workflow-example-b30fbadb7a19078019791af76dcd8fdd06eb3a50/test.v"}
|
# The water-tank example coded in Python
# TODO: Still need to write the parser
import macropy.activate
from language import *
from gen import *
from sympy import *
import shac
# SwitchTank From http://robotics.eecs.berkeley.edu/~sastry/ee291e/lygeros.pdf
# w=1 ; v1=0.3 ; v2=0.4 (the cited notes use v2=0.5; the ODEs below use 0.4)
# Max height of a tank : 1. At the beginning, both tanks are half full
# r1 = r2 = 0.25
# To have a WHA, replace the maximum height (which is 1) by 0.75 and set v1=v2=0.5
# Don't forget then to remove ABOF=True from the compile command below
ode_x1f = Ode(sympify("diff(x1(t))+0.3-1"), sympify("x1(t)"), 0.5, {})
ode_x1e = Ode(sympify("diff(x1(t))+0.3"), sympify("x1(t)"), 0.5, {})
ode_x2f = Ode(sympify("diff(x2(t))+0.4-1"), sympify("x2(t)"), 0.5, {})
ode_x2e = Ode(sympify("diff(x2(t))+0.4"), sympify("x2(t)"), 0.5, {})
# The locations of the hybrid automaton
t1 = Loc("t1", [ode_x1f, ode_x2e], [],
{S("x1(t)"): [Guard(S("x1<=1"))], S("x2(t)"): [Guard(S("x2>0.25"))]})
t2 = Loc("t2", [ode_x1e, ode_x2f], [],
{S("x1(t)"): [Guard(S("x1>0.25"))], S("x2(t)"): [Guard(S("x2<=1"))]})
# The edges
e1 = Edge('t1', 't2', {S("x2(t)"): [Guard(S("x2<=0.25"))]},
[Update.Update2(Symbol('x1'), Symbol('x1')),
Update.Update2(Symbol('x2'), Symbol('x2'))],
[])
e2 = Edge('t2', 't1', {S("x1(t)"): [Guard(S("x1<=0.25"))]},
[Update.Update2(Symbol('x1'), Symbol('x1')),
Update.Update2(Symbol('x2'), Symbol('x2'))],
[])
SwitchTank = Ha("SwitchTank", [t1, t2], t1,
[e1, e2], [], [])
# Compile
shac.compile(SwitchTank, ABOF=True)
|
{"hexsha": "38d787d677952aea3383ce728e5732e4af46f89f", "size": 1583, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/switchtank/SwitchTank.py", "max_stars_repo_name": "UoA-ECE-RP/sha", "max_stars_repo_head_hexsha": "0282f356a79729ab0e8706dcaa09c4af89ad7bbe", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-04-25T21:52:20.000Z", "max_stars_repo_stars_event_max_datetime": "2017-04-25T21:52:20.000Z", "max_issues_repo_path": "examples/switchtank/SwitchTank.py", "max_issues_repo_name": "UoA-ECE-RP/sha", "max_issues_repo_head_hexsha": "0282f356a79729ab0e8706dcaa09c4af89ad7bbe", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/switchtank/SwitchTank.py", "max_forks_repo_name": "UoA-ECE-RP/sha", "max_forks_repo_head_hexsha": "0282f356a79729ab0e8706dcaa09c4af89ad7bbe", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.9791666667, "max_line_length": 78, "alphanum_fraction": 0.5666456096, "include": true, "reason": "from sympy", "num_tokens": 617}
|
import sklearn
from pprint import pprint
# Standard Imports (Data Manipulation and Graphics)
import numpy as np # Load the Numpy library with alias 'np'
import pandas as pd # Load the Pandas library with alias 'pd'
import seaborn as sns # Load the Seabonrn, graphics library with alias 'sns'
import copy
from scipy import stats
from scipy import interp
from os import listdir; from os.path import isfile, join
from itertools import islice
from IPython import display
import ipywidgets as widgets
import itertools
import os; import sys
# Matplotlib pyplot provides plotting API
import matplotlib as mpl
from matplotlib import pyplot as plt
import chart_studio.plotly.plotly as py
import matplotlib.image as mpimg
# Preprocessing Imports
# from sklearn.preprocessing import StandardScaler
from sklearn import preprocessing
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler # Standardize data (0 mean, 1 stdev)
from sklearn.preprocessing import Normalizer # Normalize data (length of 1)
from sklearn.preprocessing import Binarizer # Binarization
# Imports for handling Training
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.model_selection import GridSearchCV
# After Training Analysis Imports
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
# Classifiers Imports
# SVMs Classifieres
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn import svm
# Bayesian Classifieres
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
# Decision Tree Classifieres
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import RandomForestClassifier
# Import scikit-learn classes: Hyperparameters Validation utility functions.
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import LeavePOut
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import validation_curve
from sklearn.model_selection import learning_curve
# Import scikit-learn classes: model's evaluation step utility functions.
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import plot_roc_curve
from sklearn.metrics import roc_curve
from sklearn.metrics import classification_report
# --------------------------------------------------------------------------- #
# Confusion Matirx & Roc Curve Custom
# --------------------------------------------------------------------------- #
def plot_conf_matrix(model, Xtest, ytest, title=None, plot_name="conf_matrix.png", show_figure=False, ax=None):
y_model = model.predict(Xtest)
mat = confusion_matrix(ytest, y_model)
if ax is None:
fig = plt.figure()
sns.heatmap(mat, square=True, annot=True, cbar=False)
plt.xlabel('predicted value')
plt.ylabel('true value')
if title:
plt.title(title)
plt.savefig(plot_name)
if show_figure is True:
plt.show()
else:
plt.close(fig)
else:
sns.heatmap(mat, square=True, annot=True, cbar=False, ax=ax)
ax.set_xlabel('predicted value')
ax.set_ylabel('true value')
if title:
ax.set_title(title)
pass
pass
def plot_roc_curve_custom(model,
X_test, y_test,
label=None, title=None, plot_name="roc_curve.png", show_figure=False, ax=None):
y_pred = model.predict_proba(X_test)
    # Use the probability of the positive class as the ROC score; taking the
    # argmax first would collapse the curve to a single operating point.
    fpr, tpr, _ = roc_curve(y_test, y_pred[:, 1])
roc_auc = auc(fpr, tpr)
if ax is None:
fig = plt.figure()
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % (roc_auc,))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
if title:
plt.title('ROC curve: {} | Auc {}'.format(title, f"{roc_auc:.2f}"))
else:
plt.title('ROC curve')
plt.legend(loc='best')
plt.savefig(plot_name)
# plt.show()
if show_figure is True:
plt.show()
else:
plt.close(fig)
else:
ax.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % (roc_auc,))
ax.plot([0, 1], [0, 1], 'k--')
ax.set_xlabel('False positive rate')
ax.set_ylabel('True positive rate')
if title:
ax.set_title('ROC curve: {} | Auc {}'.format(title, f"{roc_auc:.2f}"))
else:
ax.set_title('ROC curve')
ax.legend(loc='best')
# plt.savefig(plot_name)
# plt.show()
pass
return roc_auc
def show_plots_fit_by_n(clf, kernel, n_components, Xtest, ytest):
    # Show the ROC curve and confusion matrix for a classifier fitted with
    # the given kernel and n_components.
    plot_roc_curve_custom(
        clf,
        Xtest,
        ytest,
        title='n_components={} | kernel={}'.format(n_components, kernel))
    plot_conf_matrix(
        clf,
        Xtest,
        ytest,
        title='n_components={} | kernel={}'.format(n_components, kernel))
    pass
def add_records(data, cv_list, res_kf, res_loo, res_sscv):
# record = list(map(lambda xi: f"{xi[0]:.2f} (+/-) {xi[1]:.2f}", [xi[1:] for xi in res_kf]))
    record_acc = [f"{xi[1]:.2f}" for xi in res_kf]
    record_std = [f"(+/-) {xi[2]:.2f}" for xi in res_kf]
    record = list(itertools.chain.from_iterable(zip(record_acc, record_std)))
record = record + [f"{res_loo[0]:.2f}"]
record = record + [f"(+/-) {res_loo[1]:.2f}"]
record = record + [f"{res_sscv[0]:.2f}"]
record = record + [f"(+/-) {res_sscv[1]:.2f}"]
# print('len record:', len(record))
if len(data) == 0:
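        # Note: `[[]] * n` creates n references to one shared list, but the
        # rebinding below (data[ii] = data[ii] + [...]) never mutates it in
        # place, so the aliasing is harmless here.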
data = [[]] * (len(cv_list) + 2)
for ii in range(0, len(data)):
# print([record[ii*2], record[ii*2+1]])
data[ii] = data[ii] + [record[ii*2], record[ii*2+1]]
# print(f'len data[{ii}]:', len(data[ii]))
# data.append(copy.deepcopy(record))
# print(data)
pass
return data
def KernelPCA_transform_data(n_components, kernel, Xtrain, Xtest=None, verbose=0):
if verbose == 1:
print('KernelPCA')
print('-' * 100)
# Perform kernel PCA
    kernel_pca = KernelPCA(n_components=n_components, kernel=kernel)
if verbose == 1:
print('KernelPCA - Fit')
print('-' * 100)
kernel_pca.fit(Xtrain)
# Transform data accordingly with current Kernel Pca mode
if verbose == 1:
print('KernelPCA - Transform')
print('-' * 100)
Xtrain_transformed = kernel_pca.transform(Xtrain)
if Xtest is None:
return Xtrain_transformed, None
Xtest_transformed = kernel_pca.transform(Xtest)
return Xtrain_transformed, Xtest_transformed
def prepare_output_df(cv_list, pca_kernels_list, data):
# col_names_acc = list(map(lambda xi: f"ACC(cv={xi})", cv_list))
# col_names_st = list(map(lambda xi: f"STD(cv={xi})", cv_list))
# col_names = list(itertools.chain.from_iterable(list(zip(col_names_acc, col_names_st))))
# col_names = col_names + ['ACC(loo)', 'STD(loo)', 'ACC(Stfd-CV)', 'STD(Stfd-CV)']
col_names = list(map(lambda xi: f"CV={xi}".lower(), cv_list))
col_names = col_names + ['loo'.lower(), 'Stfd-CV'.lower()]
idx_names = copy.deepcopy(col_names)
col_names = []
for kernel in pca_kernels_list:
col_names = col_names + [f"{kernel} - ACC".lower().capitalize(), f"{kernel} - STD".lower().capitalize()]
# df = pd.DataFrame(data=data, columns=col_names, index=pca_kernels_list)
# pprint(data)
# pprint(col_names)
df = pd.DataFrame(data=data, columns=col_names, index=idx_names)
return df
def prepare_output_df_baseline_fit(pca_kernels_list, data, estimator_name):
col_names = []
for kernel in pca_kernels_list:
col_names = col_names + [f"{kernel} - ACC".lower().capitalize(), f"{kernel} - F1".lower().capitalize()]
df = pd.DataFrame(data=[data], columns=col_names, index=[estimator_name])
return df
def prepare_output_df_grid_search(grid_searchs, pca_kernels, estimator_names, flag_no_computation=False):
if flag_no_computation is True:
return None, None
data, data_auc = [], []
col_params_names = None
    for a_grid_search in grid_searchs:
        tmp_res, tmp_auc = [], []
        # `auc_val` avoids shadowing sklearn.metrics.auc imported above
        for a_grid, _, auc_val, acc_test in a_grid_search:
            best_params_values = list(map(str, a_grid.best_params_.values()))
            best_score_tst = "%.2f" % (acc_test,)
            best_score_train = "%.2f" % (a_grid.best_score_,)
            tmp_res = ([best_score_train, best_score_tst] + best_params_values)
            tmp_auc.append("%.2f" % (auc_val,))
            col_params_names = list(a_grid.best_params_.keys())
            data.append(tmp_res)
            pass
        data_auc.append(tmp_auc)
pass
# col_names = [f'{k} Acc' for k in pca_kernels]
col_names = ["Acc Train", "Acc Test"] + col_params_names
indeces = []
for estimator_name in estimator_names:
indeces.extend([f'{estimator_name} {k}' for k in pca_kernels])
df = pd.DataFrame(data=data, columns=col_names, index=indeces)
col_names = [f'{k} AUC' for k in pca_kernels]
df_auc = pd.DataFrame(data=data_auc, columns=col_names, index=estimator_names)
return df, df_auc
# --------------------------------------------------------------------------- #
# Utilities Functions Custom Stratified Training and Test Set Creation
# --------------------------------------------------------------------------- #
def get_indices(class_ith_indeces, chunks=2):
divisor = len(class_ith_indeces) // chunks
max_len = max(len(class_ith_indeces) - divisor, divisor)
p1a = class_ith_indeces[:max_len]
p2a = class_ith_indeces[max_len:]
return [p1a, p2a]
def get_data(p_train, p_test, X, y):
ytrain_ = np.array([y[ii] for ii in p_train])
ytest_ = np.array([y[ii] for ii in p_test])
Xtrain_ = np.array([np.array(X[ii]) for ii in p_train])
Xtest_ = np.array([np.array(X[ii]) for ii in p_test])
assert len(ytrain_) == len(Xtrain_), f"Train {len(ytrain_)} != {len(Xtrain_)} Test {len(ytest_)} ?? {len(Xtest_)}"
assert len(ytest_) == len(Xtest_),f"Train {len(ytrain_)} ?? {len(Xtrain_)} Test {len(ytest_)} != {len(Xtest_)}"
return Xtrain_, Xtest_, ytrain_, ytest_
def get_stratified_groups(X, y):
# Get N-stratified Groups
class_0_indeces = list(map(lambda val: val[0], filter(lambda val: val[1] == -1, enumerate(y))))
class_1_indeces = list(map(lambda val: val[0], filter(lambda val: val[1] == 1, enumerate(y))))
p_class0 = get_indices(class_0_indeces)
p_class1 = get_indices(class_1_indeces)
# ytrain_ = [y[ii]for ii in p1a] + [y[ii]for ii in p1b] # ytest_ = [y[ii]for ii in p2a] + [y[ii]for ii in p2b]
p_train = p_class0[0] + p_class1[0]
p_test = p_class0[1] + p_class1[1]
Xtrain_, Xtest_, ytrain_, ytest_ = get_data(p_train, p_test, X, y)
return Xtrain_, Xtest_, ytrain_, ytest_
def create_widget_list_df(df_list, show_widget=False):
res_list = []
for df in df_list:
if show_widget is True:
widget = widgets.Output()
with widget: display.display(df); pass
res_list.append(widget)
else:
print(df)
if show_widget is True:
hbox = widgets.HBox(res_list)
return hbox
return
def create_widget_list_df_vertical(df_list, show_widget=False):
res_list = []
for df in df_list:
if show_widget is True:
widget = widgets.Output()
with widget: display.display(df); pass
res_list.append(widget)
else:
print(df)
pass
if show_widget is True:
vbox = widgets.VBox(res_list)
return vbox
return
def merge_dfs_by_common_columns(df1, df2, axis=0, ignore_index=True):
if df2 is None:
return df1
elif df1 is None:
return df2
res = list(set(df1.columns).intersection(set(df2.columns)))
df_res = pd.concat([df1[res], df2[res]], axis=axis, ignore_index=ignore_index)
if df1.index.equals(df2.index) is False:
indeces = pd.Index(list(df1.index) + list(df2.index))
return df_res.set_index(indeces)
return df_res
def reshape_dfs_acc(list_df, num_col=4, n_cp_list=[2, 9, 11]):
assert len(list_df) == len(n_cp_list)
updated_list = []
for df, ncp in zip(list_df, n_cp_list):
indeces = list(df.index)
estimators_names = list(set(list(map(lambda xi: xi.split(" ")[0], indeces))))
columns_names = list(set(list(map(lambda xi: xi.split(" ")[1], indeces))))
data = []
for ii in range(0, df.shape[0], num_col):
a_record = df.iloc[ii:(ii+num_col), 0].values
data.append(a_record)
pass
columns_names = list(map(lambda xi: f"{xi}(PCs={ncp})", columns_names))
df = pd.DataFrame(data=data, columns=columns_names, index=estimators_names)
updated_list.append(df)
return updated_list
def show_df_with_mean_at_bottom(df):
# show_df_with_mean_at_bottom(df_strfd) # df_strfd.head(df_strfd.shape[0])
def s2f(a_str):
if a_str.startswith("("):
return float(a_str[5:])
return float(a_str)
result = df.applymap(s2f).mean(axis=0)
def f2s(a_num):
return "%.2f" % (a_num, )
data = np.array(list(map(f2s, result.values)))
df_tmp = pd.DataFrame(data=[data], columns=df.columns, index=["Mean Values"])
vbox = create_widget_list_df_vertical([df, df_tmp])
display.display(vbox)
pass
def merge_images_within_dir_(pca_kernels_list, figs_dest):
    # Pair up the plot images in each kernel's directory, two per figure.
    for kernel in pca_kernels_list:
        dir_target = os.path.join(figs_dest, kernel)
        images = [f for f in listdir(dir_target) if isfile(join(dir_target, f))]
        # skip images produced by a previous merge
        images = [f for f in images
                  if not os.path.basename(f).startswith("merged")]
        for j, image in enumerate(images):
            if j % 2 == 0:
                fig = plt.figure(figsize=(10, 10))
            ax = fig.add_subplot(1, 2, j % 2 + 1)
            full_path_img = os.path.join(dir_target, image)
            img = mpimg.imread(full_path_img)
            ax.imshow(img)
pass
# full_path_img = os.path.join(dir_target, "merged_learning_curves.png")
# plt.savefig(full_path_img)
pass
pass
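# --------------------------------------------------------------------------- #
# Usage sketch (added; not part of the original module). Demonstrates
# KernelPCA_transform_data on synthetic data; make_classification and the
# dataset sizes are illustrative assumptions.
# --------------------------------------------------------------------------- #
if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X_demo, y_demo = make_classification(n_samples=200, n_features=20,
                                         random_state=42)
    Xtrain_d, Xtest_d, ytrain_d, ytest_d = train_test_split(
        X_demo, y_demo, random_state=42)
    Xtrain_t, Xtest_t = KernelPCA_transform_data(5, 'rbf', Xtrain_d, Xtest_d,
                                                 verbose=1)
    print(Xtrain_t.shape, Xtest_t.shape)  # expected: (150, 5) (50, 5)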
|
{"hexsha": "76e78b2207fd1708844829b833c2b4f54ee89b3e", "size": 15672, "ext": "py", "lang": "Python", "max_stars_repo_path": "pittsburgh-bridges-data-set-analysis/resources/examples/notebooks/utils/utilities_functions.py", "max_stars_repo_name": "franec94/Pittsburgh-Bridge-Dataset", "max_stars_repo_head_hexsha": "682ff0e3979ca565637e858cc36dc07c2aeda7d6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "pittsburgh-bridges-data-set-analysis/resources/examples/notebooks/utils/utilities_functions.py", "max_issues_repo_name": "franec94/Pittsburgh-Bridge-Dataset", "max_issues_repo_head_hexsha": "682ff0e3979ca565637e858cc36dc07c2aeda7d6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2021-02-02T22:51:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:39:08.000Z", "max_forks_repo_path": "pittsburgh-bridges-data-set-analysis/resources/examples/notebooks/utils/utilities_functions.py", "max_forks_repo_name": "franec94/Pittsburgh-Bridge-Dataset", "max_forks_repo_head_hexsha": "682ff0e3979ca565637e858cc36dc07c2aeda7d6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3769751693, "max_line_length": 119, "alphanum_fraction": 0.6336778969, "include": true, "reason": "import numpy,from scipy", "num_tokens": 4049}
|
% !TEX root = ../main.tex
% !TEX spellcheck = en_US
\chapter{Background} While RTS games have been around since the 80s\cite{adams06, rtsHistory}, only a
handful of scientific articles can be found on teammate bots for RTS games, and current big RTS
titles have yet to implement a good teammate bot. In general, little research has been done on
teammate bots in any genre; RTS researchers have focused on enemy bots, either to create a
fun opponent\cite{hagelback09} or to create the best bot to compete in RTS bot tournaments, such as
AIIDE's StarCraft AI Competition\cite{scaiide} and CIG's StarCraft AI Competition\cite{sccig}.
We begin by covering related research topics, continue with teammate bots in games across
all genres, and end with what current RTS games lack.
\section{Research}
First we describe the definition of real-time teammate bots; then we cover teammate bots across
all genres, asking which guidelines apply to RTS games; we continue with communication between humans
and bots; and we end with RTS enemy bots, asking what implementation strategies exist and how a good
bot should play.
\subsection{Teammate bots} \label{sec:teammate_bots} As mentioned, little
research has been done in the area of teammate bots, especially for RTS games. To our knowledge
there exists one paper on teammate bots\cite{mcgee10} that brings up communication; in their survey,
McGee and Abraham present their definition of a real-time teammate, to which their survey is limited.
A summary of their definition reads: a real-time teammate bot
\begin{enumerate}
\item works together with team players while taking into account the state, needs, behavior,
goals, plans and intentions of these players;
\item uses coordinated behaviors or decision-making\ldots
\item {\ldots}that aligns with the team goals;
\item where these coordinated behaviors or decision-making includes player-related uncertainty
requiring inference, reasoning, classification, calculation, or another method; and
\item whenever possible, prioritizing the player experience.
\end{enumerate}
We will use the same definition and follow these five points for our proposed bot.
In their survey\cite{mcgee10} McGee and Abraham noticed that, although human player participation
and engagement are a key part of a game\cite{reynolds03}, the player's preferences are often
neglected and the bots behave as they think is best for either just themselves or for both the
player and themselves; while the second option might sound as if it prioritizes the player, it does
not: it steers the player into how to play rather than letting the player steer the bot. When the
bot prioritizes the player, some challenges arise, such as how to create priority rules that do
their job correctly\cite{mcgee10}, i.e. the bot has to know, or make a qualified guess about, what
the player wants. This is no easy task, probably impossible in the near future; humans have a hard
time understanding each other's intentions, so why assume AIs (that humans have created) understand
us better\cite{norman07} without even asking?
\paragraph{Communication}
McGee and Abraham point out the lack of research on communication between human players and bots:
“This survey suggests that there are also some aspects of real-time team-mate AI where there seems
to be little or no work: ..., and communication.”\cite{mcgee10} The “little or no work” thus spans
multiple topics and multiple genres; we have yet to find any paper that discusses
communication between human players and bots, for any genre. Games in other genres have, however,
implemented some sort of communication between human players and bots; this is covered in section
\ref{sec:game_communication}.
\paragraph{Teammate bots across all genres}
Abraham and McGee created a teammate bot for a simple
game: Capture the gunner\cite{abraham10}. The goal of this game is to capture the gunner by touching
him from both sides while not being shot. The game required cooperation with their bot because
selfish players never passed the first level. Players had great responsibility over the teammate;
because of this, they found that even if the bot was killed by the gunner, players never felt it was
unfair; in fact, some players felt that it was partly their fault if the bot died.
\paragraph{Player classification}
To beat an enemy in an RTS game, as a team or a single player, you
need a strategy that exploits the opponent's weaknesses while eliminating your own (team’s)
weaknesses. Both tactics and strategy need to be good to beat an enemy, although when human
beginners play against each other one can win with either just good strategy or just good tactics.
To find the teammate's and opponents' weaknesses one can use simple classification rules, or,
more advanced, a model. The findings are used either to exploit the opponent's
weaknesses or to complement the teammate's weaknesses; the same techniques can be used for both
purposes, although the information gathering and the bot's actions will differ.
\subparagraph{Teammate modeling}
Jansen created a player model using opponent modeling for his
RTS bot\cite{jansen07}. The bot's actions are calculated from a decision tree computed with
supervised learning and neural networks. The goals of the teammate bot are to
\begin{inparaenum}[1\upshape)]
\item match the number of units and structures the player has, including the unit/structure
type, e.g. aggressive, defensive;
\item deduce from the model when the player is under attack;
\item decide what action the bot shall take next, using the decision tree from the learning strategies;
\item find when the player has a hole in either their defense or their attack; and
\item detect when the player is switching from defensive to offensive mode, or vice versa.
\end{inparaenum}
He found that his bot could mimic the player, but two problems were identified: the bot could not
tell which of the actions was the best one, and some players do not always know or
perform the best action, so the learned actions can encode bad strategies.
In their paper\cite{pucheng11} Pucheng and Huiyan use Q-learning, teammate modeling, and reward
allotment for their teammate bot to learn faster which actions lead to a successful goal. Their
experiment tested this new learning technique against traditional Q-learning, where the bot does not
take the teammate into account, and teammate Q-learning, where the teammate is taken into account but
the reward is not split between bots.
For actually creating a player modeling system, Houlette presents a methodology for how to implement
a player model in code\cite{houlette03}. He talks about what a player model is, what it contains,
and what it is good at and used for. In addition he gives a simple code example, a description of
when to update the model, and two possible update implementations.
\subparagraph{Opponent modeling}
Kabanza et al. have implemented an RTS bot, HICOR (Hostile Intent,
Capability and Opportunity Recognizer). HICOR can, as the name suggests, infer the opponent's
intention (i.e. its plan) and use it to analyze the enemy's capabilities and the opportunities for
the bot. Put simply, it can infer what build order the enemy is using, what tactic it is using, and
where it will attack, and use this information to its advantage. The underlying system uses a Hidden
Markov Model to infer the enemy plan.
For recognizing the behavior of the opponent, i.e. aggressive/defensive and the type of
aggressive/defensive behavior, Schadd et al. use a hierarchical model with two classifiers
for their RTS bot. A top-level classifier uses fuzzy models to classify the opponent as aggressive
vs. defensive, and a bottom-level classifier determines the type of aggressive/defensive behavior,
e.g. whether the opponent mainly uses tanks or ships when aggressive, or techs when defensive.
Synnaeve and Bessière's RTS bot uses Bayesian networks to model enemy opening
strategies\cite{synnaeve11}. The bot learns by watching replays manually labeled with the strategy of
the players.
\section{Teammate bot in games}
Teammate bots have been around for quite a while in sports games,
such as FIFA\cite{fifa}, but have just started to make a breakthrough in other genres. In most
games\cite{callofduty, brotherinarms, rainbow6} the teammate bots cannot be replaced by another
player, as they are part of the story and thus might not be around all the time, may die, or may
have something else happen to them. In games that are meant to be played cooperatively with friends
(or strangers), the other players can be replaced with bots\cite{residentevil5, lostplanet2}.
\subsection{Communication}
\label{sec:game_communication}
Communication has been implemented across
several games and genres, most notably in genres where you play as one character, such as FPS and
third-person shooter (TPS) games. In these games some bots communicate with you, warning you when
they spot enemies or get shot, or offering tips when the player is stuck.
Mass Effect\cite{masseffect}, a TPS game, does this by letting the bots tell the player when, for
example, enemies are sighted or an area is cleared of enemies. Mass Effect 3 goes beyond regular
communication and lets players on Xbox 360 control the bots through voice commands, like a squad
leader. This creates a better flow in the game since players do not have to open the action screen
(which pauses the game) as often.
\subsection{Controllable bots} \label{sec:games_controllable} Today there exist quite a few games
that let the player actively control teammate bots (if the player
wants to). We cannot possibly find and go through every game that lets you control its teammate bots,
but we will mention a few to show that the feature exists in games.
Mass Effect\cite{masseffect} does this by giving the player the ability to decide where the bots
shall move for cover and hold that position, order them to retreat for cover, and even order the use
of certain abilities on target enemies. Rainbow Six Vegas 2\cite{rainbow6} and Brothers in Arms: Road
to Hill 30\cite{brotherinarms} let you control their teammate bots much like Mass Effect.
\subsection{RTS games}
Today, only one RTS game that we know of allows you to communicate with and
control your teammate bot: Red Alert 3\cite{redalert3}. Before describing Red Alert 3,
however, we describe how teammate bots in RTS games commonly work. These teammate
bots act more or less (depending on the game) on their own, i.e. they do not really collaborate with
the player; some bots might try to complement the player's behavior but do not ask if this is the
player's preferred choice. Because commercial games are closed source, we do not know to what
extent these bots complement the player's behavior, or if they take the player into account
at all.
The bot in the first StarCraft\cite{scbw} installment acts entirely on its own, and it does not feel
as if it behaves differently when you play together with it. In WarCraft 3: The Frozen
Throne\cite{wc3ft} the bot comes to the player's aid if s/he is under attack, and
communicates its attack position (by pinging on the minimap) to the player when moving out to attack
a target. Much like in WarCraft 3, the bot in StarCraft 2: Wings of Liberty\footnote{First game in
the StarCraft 2 trilogy.}\cite{sc2wol} aids the player when s/he is under attack, although it does
not ping the minimap when it attacks. In Age of Empires 3\cite{ageofempires3} the bot acts almost
entirely on its own; it can, however, request resources from the player and give the player hints.
Red Alert 3\cite{redalert3} on the other hand has the most advanced teammate bot. The game’s
campaign mode is played cooperatively with either another human player or bot. The bot can be given
simple commands: move to a specified position or strike a target, although these have some
restrictions, as the bot needs to have free units available to execute the commands. In special
missions, the bot will have super weapons that the player has full control over. Like in WarCraft
3 and StarCraft 2, the bot comes to the player's aid when s/he is under attack.
\section{Why StarCraft?} \label{sec:why_starcraft} Why choose StarCraft and not another RTS game?
Other games or engines the bot could be implemented in are SpringRTS\cite{springrts}, an open RTS
game engine that is by itself not a game and requires a game mod\footnote{A game mod in this case is
the set of rules, units, and graphics that create a new RTS game.} (several game mods are currently
available); ORTS\cite{orts}, which is aimed at developers and research; and finally Wargus\cite{wargus},
a WarCraft II clone that allows for modifications and the implementation of an AI. So why not choose one
of these instead of StarCraft?
\paragraph{Carefully balanced} Blizzard Entertainment released StarCraft: Brood War in 1998 and
continued to patch it until the beginning of 2009\footnote{No official date can be found;
the only unofficial page we found mentioning the date was Wikipedia at:
\url{http://en.wikipedia.org/wiki/StarCraft: Brood War}, accessed 2012-09-13.}. The other games have
had neither the time nor the number of players to balance the game as carefully. One factor might be
that StarCraft has become a huge e-sport in South Korea\cite{scKotakuKorea}.
\paragraph{Easy to find experienced players} Because StarCraft has been around for so long and is a
commercially successful game, it is easy to find experienced players to test the game. By using
experienced players as testers, the players do not have to learn the game mechanics and can focus on
evaluating the bot instead of the game.
\paragraph{Big community} StarCraft has a big community, which makes it easy to find and ask people
what functionality they would like to see in a teammate bot, both to gain more ideas and to evaluate
our own ideas.
\paragraph{Extending an already existing bot} We have the opportunity to extend an already existing
bot, BTHAI\cite{bthai}, for BWAPI\cite{bwapi}. By extending a bot we can focus on making the bot a
good teammate and not worry about all the other details, such as path finding and building
placement. We do, however, sometimes improve already existing systems, e.g. the build order, to meet
our needs, but we do not have to build the entire system from scratch. In addition, BTHAI is
developed by our supervisor, Hagelbäck, and we can therefore get fast help with the system if needed.
While we have not searched for other bots to extend, we figured it would be hard to top the
support we would get from BTHAI.
\section{Bot strategy} While our focus lies on communication and conveying intentions, a bot still
needs a decent strategy and tactics to win and be useful to the player. There is, however, no
specific research on what constitutes a good cooperation strategy that prioritizes the player in
RTS games. Instead we will rely on general single-player strategies from
``Day[9]''\cite{day9} ([9] is part of the name and not a citation), on our own experience playing
cooperative games, and on evaluating the bot throughout development.
|
{"hexsha": "72ddc53397c2ca914672248fffd599ad26a22c72", "size": 15198, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "BATS/docs/thesis/chapters/background.tex", "max_stars_repo_name": "Senth/bats", "max_stars_repo_head_hexsha": "51d4ec39f3a118ed0eb90ec27a1864c0ceef3898", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "BATS/docs/thesis/chapters/background.tex", "max_issues_repo_name": "Senth/bats", "max_issues_repo_head_hexsha": "51d4ec39f3a118ed0eb90ec27a1864c0ceef3898", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "BATS/docs/thesis/chapters/background.tex", "max_forks_repo_name": "Senth/bats", "max_forks_repo_head_hexsha": "51d4ec39f3a118ed0eb90ec27a1864c0ceef3898", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.0368663594, "max_line_length": 128, "alphanum_fraction": 0.801092249, "num_tokens": 3462}
|
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
import numpy as np
class _RandomForest:
    @staticmethod
    def init(verbose):
        model = RandomForestClassifier(max_depth=4, verbose=verbose)
        return model
    @staticmethod
    def train(model, X_public_t, y_public_t):
        model.fit(X_public_t, y_public_t)
        return model
    @staticmethod
    def initGrid(X, y):
min_samples_split = [2,4,6,8]
max_depth = [2,4,6,8,10]
max_features=["auto"]
class_weight=["balanced", "balanced_subsample"]
n_estimators=[50]
min_samples_leaf=[2,4,6,8]
grid = {
'min_samples_split':min_samples_split,
'max_depth': max_depth,
'max_features':max_features,
'class_weight':class_weight,
'n_estimators':n_estimators,
'min_samples_leaf':min_samples_leaf
}
        model = RandomForestClassifier()
search = GridSearchCV(estimator=model, param_grid=grid, verbose=10, n_jobs=-1)
search.fit(X,y)
return search
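
# Usage sketch (added; not part of the original module). Exercises initGrid on
# synthetic data from sklearn's make_classification; dataset sizes are
# illustrative. The grid above spans 4*5*1*2*1*4 = 160 candidates per CV split.
if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X_demo, y_demo = make_classification(n_samples=300, n_features=10,
                                         random_state=0)
    search = _RandomForest.initGrid(X_demo, y_demo)
    print(search.best_params_)
    print("best CV accuracy: %.3f" % search.best_score_)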
|
{"hexsha": "615c33a4e59bdcc103b7cd0a378668dfde6bfab0", "size": 1120, "ext": "py", "lang": "Python", "max_stars_repo_path": "school/assignment1/r_forest.py", "max_stars_repo_name": "kubekbreha/ML-Python-Algorithms", "max_stars_repo_head_hexsha": "8058b68a2d98a79a6debcc69abdd188c97420d75", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "school/assignment1/r_forest.py", "max_issues_repo_name": "kubekbreha/ML-Python-Algorithms", "max_issues_repo_head_hexsha": "8058b68a2d98a79a6debcc69abdd188c97420d75", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "school/assignment1/r_forest.py", "max_forks_repo_name": "kubekbreha/ML-Python-Algorithms", "max_forks_repo_head_hexsha": "8058b68a2d98a79a6debcc69abdd188c97420d75", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.7179487179, "max_line_length": 87, "alphanum_fraction": 0.6455357143, "include": true, "reason": "import numpy", "num_tokens": 258}
|
# -*- coding: utf-8 -*-
"""
Name: tic-tac-toe game
Edition: 1.1
Author: Li
Update log:
Version 1.1: add interface to show the winner (def winner())
Plan log:
Version 1.2: add buttons to restart and end the game (def button_restart() button_end())
Version 2.0: 1.add human-robot-fighting pattern
2.optimize the interface
"""
import pygame as pg
import sys
import time
import numpy as np
def main():
pg.init()
game_window = pg.display.set_mode((600, 600))
pg.display.set_caption('tic-tac-toe')
window_color = (0, 0, 0)
    circle_color = cross_color = line_color = (255, 255, 255)  # element colors (all white)
    font_score = pg.font.Font(None, 50)  # font for on-screen text
    game_table = [[5,5,5],[5,5,5],[5,5,5]]
    # board encoding: 5 (cell free), 1 (taken by the cross), 0 (taken by the circle)
    inf = 0  # 0 means the circle player moves first
    win = 0  # 0 means the game is still in progress
    game_window.fill(window_color)  # fill the window background
    draw_initial(game_window, line_color)  # draw the grid lines
    while True:
        for event in pg.event.get():  # handle events so the window stays responsive
            if event.type == pg.QUIT:
                sys.exit()
            elif event.type == pg.MOUSEBUTTONDOWN and event.button == 1:
                mouse_x, mouse_y = pg.mouse.get_pos()
                mouse_x_abstract = pos_abstract(mouse_x)
                mouse_y_abstract = pos_abstract(mouse_y)
                # map the click to one of the nine cells
                if game_table[mouse_x_abstract][mouse_y_abstract] != 5:
                    continue  # ignore clicks on occupied cells
                game_table = game_table_update(game_table, inf, mouse_x_abstract, mouse_y_abstract)
                # update the board
                draw(game_window, cross_color, circle_color, inf, mouse_x_abstract, mouse_y_abstract)
                # draw the new move
                inf = not inf
                # swap players after each move (0 becomes 1, 1 becomes 0)
                win = end_or_continue(game_table)  # update the win state
                if win == 1:
                    # end the game once a win condition is met
                    winner(game_window, inf)
                    pg.display.update()
                    time.sleep(2.012)
                    pg.quit()
                    sys.exit()
        pg.display.update()  # refresh the display every frame
        time.sleep(0.012)
def pos_abstract(pos):  # map a pixel coordinate (0-599) to a cell index (0-2)
    if 0 <= pos < 200:
        return 0
    if 200 <= pos < 400:
        return 1
    if 400 <= pos < 600:
        return 2
def pos_off_abstact(pos):  # map a cell index back to the cell-center pixel coordinate for drawing
    if pos == 0:
        return 100
    if pos == 1:
        return 300
    if pos == 2:
        return 500
def game_table_update(game_table, inf, pos_x, pos_y):  # record the current player's move on the board
    if inf == 1:
        game_table[pos_x][pos_y] = 1
    elif inf == 0:
        game_table[pos_x][pos_y] = 0
    return game_table
def end_or_continue(game_table):  # decide whether the game is over: 1 game over, 0 continue
    game_table_array = np.array(game_table)
    value_x = np.sum(game_table_array, 1)
    value_y = np.sum(game_table_array, 0)
    for i in range(len(value_x)):
        # a row or column summing to 0 means three circles, 3 means three crosses
        if value_x[i] in (0, 3) or value_y[i] in (0, 3):
            return 1
    # check both diagonals as well
    diagonal = [game_table[0][0], game_table[1][1], game_table[2][2]]
    anti_diagonal = [game_table[0][2], game_table[1][1], game_table[2][0]]
    for line in (diagonal, anti_diagonal):
        if line in ([0, 0, 0], [1, 1, 1]):
            return 1
    return 0
def winner(game_window, inf):
    font_score = pg.font.Font(None, 80)
    if inf == 1:
        my_text_score = font_score.render('Circle win!', False, (255, 255, 255))  # True would antialias the text
    else:
        my_text_score = font_score.render('Rectangular win!', False, (255, 255, 255))
    game_window.fill((0, 0, 0))  # clear the screen
    game_window.blit(my_text_score, (0, 0))
    pass
def draw(game_window, cross_color, circle_color, inf, pos_x, pos_y):  # draw a cross (inf=1) or a circle (inf=0) in cell (pos_x, pos_y)
    if inf == 1:
        pg.draw.rect(game_window, cross_color, (pos_off_abstact(pos_x)-50, pos_off_abstact(pos_y)-50, 100, 100))
        pass
    elif inf == 0:
        pg.draw.circle(game_window, circle_color, (pos_off_abstact(pos_x), pos_off_abstact(pos_y)), 50)
        pass
def draw_initial(game_window, line_color):  # draw the board grid lines
    pg.draw.rect(game_window, line_color, (0, 200, 600, 10))
    pg.draw.rect(game_window, line_color, (0, 400, 600, 10))
    pg.draw.rect(game_window, line_color, (200, 0, 10, 600))
    pg.draw.rect(game_window, line_color, (400, 0, 10, 600))
    pass
if __name__ == "__main__":
main()
|
{"hexsha": "505d9731d14e872fa7154d53d55975351537d2ae", "size": 4313, "ext": "py", "lang": "Python", "max_stars_repo_path": "Others/Python/projects/game3_ttt.py", "max_stars_repo_name": "ZhekaiLi/Code", "max_stars_repo_head_hexsha": "60788a2d2089b358a1c39e50acced96cb5eb3fa1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-10-29T11:09:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-19T06:48:52.000Z", "max_issues_repo_path": "Others/Python/projects/game3_ttt.py", "max_issues_repo_name": "ZhekaiLi/Code", "max_issues_repo_head_hexsha": "60788a2d2089b358a1c39e50acced96cb5eb3fa1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Others/Python/projects/game3_ttt.py", "max_forks_repo_name": "ZhekaiLi/Code", "max_forks_repo_head_hexsha": "60788a2d2089b358a1c39e50acced96cb5eb3fa1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.6742424242, "max_line_length": 111, "alphanum_fraction": 0.5991189427, "include": true, "reason": "import numpy", "num_tokens": 1477}
|
import os
import random
from pathlib import Path
from allennlp.data.iterators import BasicIterator
from allennlp.nn.util import move_to_device
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertAdam
import config
from bert_model_variances.bert_multilayer_output import BertMultiLayerSeqClassification
from data_utils.exvocab import ExVocabulary
from data_utils.readers.bert_reader_content_selection import BertContentSelectionReader
from evaluation import ext_hotpot_eval
from flint import torch_util
from hotpot_data_analysis.fullwiki_provided_upperbound import append_gt_downstream_to_get_upperbound_from_doc_retri
from hotpot_fact_selection_sampler.sampler_full_wiki import down_sample_neg
from hotpot_fact_selection_sampler.sampler_utils import select_top_k_and_to_results_dict
from hotpot_fact_selection_sampler.sentence_level_sampler import get_sentence_pair
from neural_modules.model_EMA import EMA, get_ema_gpu_id_list
from utils import common, list_dict_data_tool
import torch
from tqdm import tqdm
import numpy as np
import copy
import allennlp
from utils import save_tool
import torch.nn.functional as F
from hotpot_fact_selection_sampler import sentence_level_sampler
def eval_model(model, data_iter, device_num, with_probs=False, show_progress=False):
print("Evaluating ...")
with torch.no_grad():
model.eval()
        total_size = 0
y_pred_list = []
y_fid_list = []
y_pid_list = []
y_element_list = []
y_logits_list = []
y_probs_list = []
for batch_idx, batch in tqdm(enumerate(data_iter), disable=(not show_progress)):
batch = move_to_device(batch, device_num)
eval_paired_sequence = batch['paired_sequence']
eval_paired_segments_ids = batch['paired_segments_ids']
eval_labels_ids = batch['label']
eval_att_mask, _ = torch_util.get_length_and_mask(eval_paired_sequence)
s1_span = batch['bert_s1_span']
s2_span = batch['bert_s2_span']
out = model(eval_paired_sequence, token_type_ids=eval_paired_segments_ids, attention_mask=eval_att_mask,
mode=BertMultiLayerSeqClassification.ForwardMode.EVAL,
labels=eval_labels_ids)
y_pid_list.extend(list(batch['qid']))
y_fid_list.extend(list(batch['fid']))
y_element_list.extend(list(batch['item']))
y_pred_list.extend(torch.max(out, 1)[1].view(out.size(0)).tolist())
y_logits_list.extend(out.view(out.size(0)).tolist())
if with_probs:
y_probs_list.extend(torch.sigmoid(out).view(out.size(0)).tolist())
            total_size += out.size(0)
result_items_list = []
assert len(y_pred_list) == len(y_fid_list)
assert len(y_pred_list) == len(y_pid_list)
assert len(y_pred_list) == len(y_element_list)
assert len(y_pred_list) == len(y_logits_list)
if with_probs:
assert len(y_pred_list) == len(y_probs_list)
for i in range(len(y_pred_list)):
r_item = dict()
r_item['fid'] = y_fid_list[i]
r_item['qid'] = y_pid_list[i]
r_item['score'] = y_logits_list[i]
r_item['element'] = y_element_list[i]
if with_probs:
r_item['prob'] = y_probs_list[i]
result_items_list.append(r_item)
return result_items_list
# def select_top_k_and_to_results_dict(scored_dict, merged_field_name='merged_field',
# score_field_name='score', item_field_name='element',
# top_k=5):
#
# results_dict = {'sp_doc': dict(), 'scored_results': dict()}
# for key, value in scored_dict.items():
# fitems_dict = value[merged_field_name]
# scored_element_list = []
# for item in fitems_dict.values():
# score = item[score_field_name]
# element = item[item_field_name]
# scored_element_list.append((score, element)) # score is index 0.
#
# results_dict['scored_results'][key] = scored_element_list
# sorted_e_list = sorted(scored_element_list, key=lambda x: x[0], reverse=True)
# results_dict['sp_doc'][key] = [e for s, e in sorted_e_list[:top_k]]
#
# return results_dict
def model_go():
seed = 12
torch.manual_seed(seed)
# bert_model_name = 'bert-large-uncased'
bert_pretrain_path = config.PRO_ROOT / '.pytorch_pretrained_bert'
bert_model_name = 'bert-base-uncased'
lazy = False
# lazy = True
forward_size = 128
# batch_size = 64
batch_size = 128
gradient_accumulate_step = int(batch_size / forward_size)
warmup_proportion = 0.1
learning_rate = 5e-5
num_train_epochs = 5
eval_frequency = 2000
pos_ratio = 0.2
do_lower_case = True
document_top_k = 2
experiment_name = f'hotpot_v0_slevel_retri_(doc_top_k:{document_top_k})'
debug_mode = False
do_ema = True
# est_datasize = 900_000
num_class = 1
# num_train_optimization_steps
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device_num = 0 if torch.cuda.is_available() else -1
n_gpu = torch.cuda.device_count()
    unk_token_num = {'tokens': 1}  # workaround for initializing the vocabulary
vocab = ExVocabulary(unk_token_num=unk_token_num)
vocab.add_token_to_namespace("false", namespace="labels") # 0
vocab.add_token_to_namespace("true", namespace="labels") # 1
vocab.add_token_to_namespace("hidden", namespace="labels")
vocab.change_token_with_index_to_namespace("hidden", -2, namespace='labels')
# Load Dataset
train_list = common.load_json(config.TRAIN_FILE)
dev_list = common.load_json(config.DEV_FULLWIKI_FILE)
# train_fitems = sentence_level_sampler.get_train_sentence_pair(document_top_k, True, debug_mode)
# dev_fitems = sentence_level_sampler.get_dev_sentence_pair(document_top_k, False, debug_mode)
# Load train eval results list
cur_train_eval_results_list = common.load_jsonl(
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_paragraph_level/04-10-17:44:54_hotpot_v0_cs/"
"i(40000)|e(4)|t5_doc_recall(0.8793382849426064)|t5_sp_recall(0.879496479212887)|t10_doc_recall(0.888656313301823)|t5_sp_recall(0.8888325134240054)|seed(12)/train_p_level_bert_v1_results.jsonl")
cur_dev_eval_results_list = common.load_jsonl(
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_paragraph_level/04-10-17:44:54_hotpot_v0_cs/"
"i(40000)|e(4)|t5_doc_recall(0.8793382849426064)|t5_sp_recall(0.879496479212887)|t10_doc_recall(0.888656313301823)|t5_sp_recall(0.8888325134240054)|seed(12)/dev_p_level_bert_v1_results.jsonl")
train_fitems = get_sentence_pair(document_top_k, train_list, cur_train_eval_results_list, is_training=True,
debug_mode=debug_mode)
dev_fitems = get_sentence_pair(document_top_k, dev_list, cur_dev_eval_results_list, is_training=False,
debug_mode=debug_mode)
if debug_mode:
dev_list = dev_list[:100]
eval_frequency = 2
# print(dev_list[-1]['_id'])
# exit(0)
# sampled_train_list = down_sample_neg(train_fitems_list, ratio=pos_ratio)
est_datasize = len(train_fitems)
dev_o_dict = list_dict_data_tool.list_to_dict(dev_list, '_id')
# print(dev_o_dict)
bert_tokenizer = BertTokenizer.from_pretrained(bert_model_name, do_lower_case=do_lower_case,
cache_dir=bert_pretrain_path)
bert_cs_reader = BertContentSelectionReader(bert_tokenizer, lazy, is_paired=True,
example_filter=lambda x: len(x['context']) == 0, max_l=128,
element_fieldname='element')
bert_encoder = BertModel.from_pretrained(bert_model_name, cache_dir=bert_pretrain_path)
model = BertMultiLayerSeqClassification(bert_encoder, num_labels=num_class, num_of_pooling_layer=1,
act_type='tanh', use_pretrained_pooler=True, use_sigmoid=True)
ema = None
if do_ema:
ema = EMA(model, model.named_parameters(), device_num=1)
model.to(device)
if n_gpu > 1:
model = torch.nn.DataParallel(model)
#
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
num_train_optimization_steps = int(est_datasize / forward_size / gradient_accumulate_step) * \
num_train_epochs
if debug_mode:
num_train_optimization_steps = 100
print("Estimated training size", est_datasize)
print("Number of optimization steps:", num_train_optimization_steps)
optimizer = BertAdam(optimizer_grouped_parameters,
lr=learning_rate,
warmup=warmup_proportion,
t_total=num_train_optimization_steps)
dev_instances = bert_cs_reader.read(dev_fitems)
biterator = BasicIterator(batch_size=forward_size)
biterator.index_with(vocab)
forbackward_step = 0
update_step = 0
logging_agent = save_tool.ScoreLogger({})
# # # Create Log File
file_path_prefix, date = save_tool.gen_file_prefix(f"{experiment_name}")
# Save the source code.
script_name = os.path.basename(__file__)
with open(os.path.join(file_path_prefix, script_name), 'w') as out_f, open(__file__, 'r') as it:
out_f.write(it.read())
out_f.flush()
# # # Log File end
for epoch_i in range(num_train_epochs):
print("Epoch:", epoch_i)
# sampled_train_list = down_sample_neg(train_fitems_list, ratio=pos_ratio)
random.shuffle(train_fitems)
train_instance = bert_cs_reader.read(train_fitems)
train_iter = biterator(train_instance, num_epochs=1, shuffle=True)
for batch in tqdm(train_iter):
model.train()
batch = move_to_device(batch, device_num)
paired_sequence = batch['paired_sequence']
paired_segments_ids = batch['paired_segments_ids']
labels_ids = batch['label']
att_mask, _ = torch_util.get_length_and_mask(paired_sequence)
s1_span = batch['bert_s1_span']
s2_span = batch['bert_s2_span']
loss = model(paired_sequence, token_type_ids=paired_segments_ids, attention_mask=att_mask,
mode=BertMultiLayerSeqClassification.ForwardMode.TRAIN,
labels=labels_ids)
if n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu.
if gradient_accumulate_step > 1:
loss = loss / gradient_accumulate_step
loss.backward()
forbackward_step += 1
if forbackward_step % gradient_accumulate_step == 0:
optimizer.step()
if ema is not None and do_ema:
updated_model = model.module if hasattr(model, 'module') else model
ema(updated_model.named_parameters())
optimizer.zero_grad()
update_step += 1
if update_step % eval_frequency == 0:
print("Update steps:", update_step)
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
cur_eval_results_list = eval_model(model, dev_iter, device_num, with_probs=True)
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
# 0.5
cur_results_dict_v05 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5,
score_field_name='prob',
filter_value=0.5,
result_field='sp')
cur_results_dict_v02 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5,
score_field_name='prob',
filter_value=0.2,
result_field='sp')
_, metrics_v5 = ext_hotpot_eval.eval(cur_results_dict_v05, dev_list, verbose=False)
_, metrics_v2 = ext_hotpot_eval.eval(cur_results_dict_v02, dev_list, verbose=False)
v02_sp_f1 = metrics_v2['sp_f1']
v02_sp_recall = metrics_v2['sp_recall']
v02_sp_prec = metrics_v2['sp_prec']
v05_sp_f1 = metrics_v5['sp_f1']
v05_sp_recall = metrics_v5['sp_recall']
v05_sp_prec = metrics_v5['sp_prec']
logging_item = {
'v02': metrics_v2,
'v05': metrics_v5,
}
print(logging_item)
# print(logging_item)
if not debug_mode:
save_file_name = f'i({update_step})|e({epoch_i})' \
f'|v02_f1({v02_sp_f1})|v02_recall({v02_sp_recall})' \
f'|v05_f1({v05_sp_f1})|v05_recall({v05_sp_recall})|seed({seed})'
# print(save_file_name)
logging_agent.incorporate_results({}, save_file_name, logging_item)
logging_agent.logging_to_file(Path(file_path_prefix) / "log.json")
model_to_save = model.module if hasattr(model, 'module') else model
output_model_file = Path(file_path_prefix) / save_file_name
torch.save(model_to_save.state_dict(), str(output_model_file))
if do_ema and ema is not None:
ema_model = ema.get_inference_model()
master_device_num = 1
ema_inference_device_ids = get_ema_gpu_id_list(master_device_num=master_device_num)
ema_model = ema_model.to(master_device_num)
ema_model = torch.nn.DataParallel(ema_model, device_ids=ema_inference_device_ids)
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
cur_eval_results_list = eval_model(ema_model, dev_iter, master_device_num, with_probs=True)
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
# 0.5
cur_results_dict_v05 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5,
score_field_name='prob',
filter_value=0.5,
result_field='sp')
cur_results_dict_v02 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5,
score_field_name='prob',
filter_value=0.2,
result_field='sp')
_, metrics_v5 = ext_hotpot_eval.eval(cur_results_dict_v05, dev_list, verbose=False)
_, metrics_v2 = ext_hotpot_eval.eval(cur_results_dict_v02, dev_list, verbose=False)
v02_sp_f1 = metrics_v2['sp_f1']
v02_sp_recall = metrics_v2['sp_recall']
v02_sp_prec = metrics_v2['sp_prec']
v05_sp_f1 = metrics_v5['sp_f1']
v05_sp_recall = metrics_v5['sp_recall']
v05_sp_prec = metrics_v5['sp_prec']
logging_item = {
'label': 'ema',
'v02': metrics_v2,
'v05': metrics_v5,
}
print(logging_item)
if not debug_mode:
save_file_name = f'ema_i({update_step})|e({epoch_i})' \
f'|v02_f1({v02_sp_f1})|v02_recall({v02_sp_recall})' \
f'|v05_f1({v05_sp_f1})|v05_recall({v05_sp_recall})|seed({seed})'
# print(save_file_name)
logging_agent.incorporate_results({}, save_file_name, logging_item)
logging_agent.logging_to_file(Path(file_path_prefix) / "log.json")
model_to_save = ema_model.module if hasattr(ema_model, 'module') else ema_model
output_model_file = Path(file_path_prefix) / save_file_name
torch.save(model_to_save.state_dict(), str(output_model_file))
def eval_model_for_downstream(model_saved_path, doc_top_k=2, tag='dev'):
seed = 12
torch.manual_seed(seed)
bert_model_name = 'bert-base-uncased'
# lazy = False
lazy = True
# forward_size = 256
forward_size = 128
# batch_size = 64
batch_size = 128
do_lower_case = True
document_top_k = doc_top_k
debug_mode = False
# est_datasize = 900_000
num_class = 1
# num_train_optimization_steps
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device_num = 0 if torch.cuda.is_available() else -1
n_gpu = torch.cuda.device_count()
    unk_token_num = {'tokens': 1}  # workaround for initializing the vocabulary
vocab = ExVocabulary(unk_token_num=unk_token_num)
vocab.add_token_to_namespace("false", namespace="labels") # 0
vocab.add_token_to_namespace("true", namespace="labels") # 1
vocab.add_token_to_namespace("hidden", namespace="labels")
vocab.change_token_with_index_to_namespace("hidden", -2, namespace='labels')
# Load Dataset
train_list = common.load_json(config.TRAIN_FILE)
dev_list = common.load_json(config.DEV_FULLWIKI_FILE)
test_list = common.load_json(config.TEST_FULLWIKI_FILE)
# Load train eval results list
cur_train_eval_results_list = common.load_jsonl(
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_paragraph_level/04-10-17:44:54_hotpot_v0_cs/"
"i(40000)|e(4)|t5_doc_recall(0.8793382849426064)|t5_sp_recall(0.879496479212887)|t10_doc_recall(0.888656313301823)|t5_sp_recall(0.8888325134240054)|seed(12)/train_p_level_bert_v1_results.jsonl")
cur_dev_eval_results_list = common.load_jsonl(
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_paragraph_level/04-10-17:44:54_hotpot_v0_cs/"
"i(40000)|e(4)|t5_doc_recall(0.8793382849426064)|t5_sp_recall(0.879496479212887)|t10_doc_recall(0.888656313301823)|t5_sp_recall(0.8888325134240054)|seed(12)/dev_p_level_bert_v1_results.jsonl")
cur_test_eval_results_list = common.load_jsonl(
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_paragraph_level/04-10-17:44:54_hotpot_v0_cs/"
"i(40000)|e(4)|t5_doc_recall(0.8793382849426064)|t5_sp_recall(0.879496479212887)|t10_doc_recall(0.888656313301823)|t5_sp_recall(0.8888325134240054)|seed(12)/test_p_level_bert_v1_results.jsonl")
if tag == 'train':
train_fitems = get_sentence_pair(document_top_k, train_list, cur_train_eval_results_list, is_training=True,
debug_mode=debug_mode)
elif tag == 'dev':
dev_fitems = get_sentence_pair(document_top_k, dev_list, cur_dev_eval_results_list, is_training=False,
debug_mode=debug_mode)
elif tag == 'test':
test_fitems = get_sentence_pair(document_top_k, test_list, cur_test_eval_results_list, is_training=False,
debug_mode=debug_mode)
if debug_mode:
eval_frequency = 2
# dev_list = dev_list[:10]
# dev_fitems_list = dev_fitems_list[:296]
# train_fitems_list = train_fitems_list[:300]
# print(dev_list[-1]['_id'])
# exit(0)
dev_o_dict = list_dict_data_tool.list_to_dict(dev_list, '_id')
train_o_dict = list_dict_data_tool.list_to_dict(train_list, '_id')
bert_tokenizer = BertTokenizer.from_pretrained(bert_model_name, do_lower_case=do_lower_case)
bert_cs_reader = BertContentSelectionReader(bert_tokenizer, lazy, is_paired=True,
example_filter=lambda x: len(x['context']) == 0, max_l=128,
element_fieldname='element')
bert_encoder = BertModel.from_pretrained(bert_model_name)
model = BertMultiLayerSeqClassification(bert_encoder, num_labels=num_class, num_of_pooling_layer=1,
act_type='tanh', use_pretrained_pooler=True, use_sigmoid=True)
model.load_state_dict(torch.load(model_saved_path))
model.to(device)
if n_gpu > 1:
model = torch.nn.DataParallel(model)
#
if tag == 'train':
train_instance = bert_cs_reader.read(train_fitems)
elif tag == 'dev':
dev_instances = bert_cs_reader.read(dev_fitems)
elif tag == 'test':
test_instances = bert_cs_reader.read(test_fitems)
biterator = BasicIterator(batch_size=forward_size)
biterator.index_with(vocab)
if tag == 'train':
train_iter = biterator(train_instance, num_epochs=1, shuffle=False)
print(len(train_fitems))
elif tag == 'dev':
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
print(len(dev_fitems))
elif tag == 'test':
test_iter = biterator(test_instances, num_epochs=1, shuffle=False)
print(len(test_fitems))
print("Forward size:", forward_size)
if tag == 'train':
cur_train_eval_results_list_out = eval_model(model, train_iter, device_num, with_probs=True, show_progress=True)
common.save_jsonl(cur_train_eval_results_list_out,
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_sentence_level/04-19-02:17:11_hotpot_v0_slevel_retri_(doc_top_k:2)/i(12000)|e(2)|v02_f1(0.7153646038858843)|v02_recall(0.7114645831323757)|v05_f1(0.7153646038858843)|v05_recall(0.7114645831323757)|seed(12)/train_s_level_bert_v1_results.jsonl")
elif tag == 'dev':
cur_dev_eval_results_list_out = eval_model(model, dev_iter, device_num, with_probs=True, show_progress=True)
common.save_jsonl(cur_dev_eval_results_list_out,
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_sentence_level/04-19-02:17:11_hotpot_v0_slevel_retri_(doc_top_k:2)/i(12000)|e(2)|v02_f1(0.7153646038858843)|v02_recall(0.7114645831323757)|v05_f1(0.7153646038858843)|v05_recall(0.7114645831323757)|seed(12)/dev_s_level_bert_v1_results.jsonl")
elif tag == 'test':
cur_test_eval_results_list_out = eval_model(model, test_iter, device_num, with_probs=True, show_progress=True)
common.save_jsonl(cur_test_eval_results_list_out,
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_sentence_level/04-19-02:17:11_hotpot_v0_slevel_retri_(doc_top_k:2)/i(12000)|e(2)|v02_f1(0.7153646038858843)|v02_recall(0.7114645831323757)|v05_f1(0.7153646038858843)|v05_recall(0.7114645831323757)|seed(12)/test_s_level_bert_v1_results.jsonl")
if tag == 'train' or tag == 'test':
exit(0)
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
    list_dict_data_tool.append_subfield_from_list_to_dict(cur_dev_eval_results_list_out, copied_dev_o_dict,
'qid', 'fid', check=True)
# 0.5
cur_results_dict_v05 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5,
score_field_name='prob',
filter_value=0.5,
result_field='sp')
cur_results_dict_v02 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5,
score_field_name='prob',
filter_value=0.2,
result_field='sp')
_, metrics_v5 = ext_hotpot_eval.eval(cur_results_dict_v05, dev_list, verbose=False)
_, metrics_v2 = ext_hotpot_eval.eval(cur_results_dict_v02, dev_list, verbose=False)
logging_item = {
'v02': metrics_v2,
'v05': metrics_v5,
}
print(logging_item)
def eval_model_for_downstream_ablation(model_saved_path, doc_top_k=2, tag='dev'):
print(f"Run doc_top_k:{doc_top_k}")
bert_pretrain_path = config.PRO_ROOT / '.pytorch_pretrained_bert'
seed = 12
torch.manual_seed(seed)
bert_model_name = 'bert-base-uncased'
# lazy = False
lazy = True
# forward_size = 256
forward_size = 256
# batch_size = 64
batch_size = 128
do_lower_case = True
document_top_k = doc_top_k
debug_mode = False
# est_datasize = 900_000
num_class = 1
# num_train_optimization_steps
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device_num = 0 if torch.cuda.is_available() else -1
n_gpu = torch.cuda.device_count()
    unk_token_num = {'tokens': 1}  # workaround for initializing the vocabulary
vocab = ExVocabulary(unk_token_num=unk_token_num)
vocab.add_token_to_namespace("false", namespace="labels") # 0
vocab.add_token_to_namespace("true", namespace="labels") # 1
vocab.add_token_to_namespace("hidden", namespace="labels")
vocab.change_token_with_index_to_namespace("hidden", -2, namespace='labels')
# Load Dataset
train_list = common.load_json(config.TRAIN_FILE)
dev_list = common.load_json(config.DEV_FULLWIKI_FILE)
test_list = common.load_json(config.TEST_FULLWIKI_FILE)
# Load train eval results list
# cur_train_eval_results_list = common.load_jsonl(
# config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_paragraph_level/04-10-17:44:54_hotpot_v0_cs/"
# "i(40000)|e(4)|t5_doc_recall(0.8793382849426064)|t5_sp_recall(0.879496479212887)|t10_doc_recall(0.888656313301823)|t5_sp_recall(0.8888325134240054)|seed(12)/train_p_level_bert_v1_results.jsonl")
cur_dev_eval_results_list = common.load_jsonl(
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_paragraph_level/04-10-17:44:54_hotpot_v0_cs/"
"i(40000)|e(4)|t5_doc_recall(0.8793382849426064)|t5_sp_recall(0.879496479212887)|t10_doc_recall(0.888656313301823)|t5_sp_recall(0.8888325134240054)|seed(12)/dev_p_level_bert_v1_results.jsonl")
# cur_test_eval_results_list = common.load_jsonl(
# config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_paragraph_level/04-10-17:44:54_hotpot_v0_cs/"
# "i(40000)|e(4)|t5_doc_recall(0.8793382849426064)|t5_sp_recall(0.879496479212887)|t10_doc_recall(0.888656313301823)|t5_sp_recall(0.8888325134240054)|seed(12)/test_p_level_bert_v1_results.jsonl")
# if tag == 'train':
# train_fitems = get_sentence_pair(document_top_k, train_list, cur_train_eval_results_list, is_training=True,
# debug_mode=debug_mode)
if tag == 'dev':
dev_fitems = get_sentence_pair(document_top_k, dev_list, cur_dev_eval_results_list, is_training=False,
debug_mode=debug_mode)
# elif tag == 'test':
# test_fitems = get_sentence_pair(document_top_k, test_list, cur_test_eval_results_list, is_training=False,
# debug_mode=debug_mode)
if debug_mode:
eval_frequency = 2
# dev_list = dev_list[:10]
# dev_fitems_list = dev_fitems_list[:296]
# train_fitems_list = train_fitems_list[:300]
# print(dev_list[-1]['_id'])
# exit(0)
dev_o_dict = list_dict_data_tool.list_to_dict(dev_list, '_id')
train_o_dict = list_dict_data_tool.list_to_dict(train_list, '_id')
bert_tokenizer = BertTokenizer.from_pretrained(bert_model_name, do_lower_case=do_lower_case,
cache_dir=bert_pretrain_path)
bert_cs_reader = BertContentSelectionReader(bert_tokenizer, lazy, is_paired=True,
example_filter=lambda x: len(x['context']) == 0, max_l=128,
element_fieldname='element')
bert_encoder = BertModel.from_pretrained(bert_model_name, cache_dir=bert_pretrain_path)
model = BertMultiLayerSeqClassification(bert_encoder, num_labels=num_class, num_of_pooling_layer=1,
act_type='tanh', use_pretrained_pooler=True, use_sigmoid=True)
model.load_state_dict(torch.load(model_saved_path))
model.to(device)
if n_gpu > 1:
model = torch.nn.DataParallel(model)
#
if tag == 'train':
train_instance = bert_cs_reader.read(train_fitems)
elif tag == 'dev':
dev_instances = bert_cs_reader.read(dev_fitems)
elif tag == 'test':
test_instances = bert_cs_reader.read(test_fitems)
biterator = BasicIterator(batch_size=forward_size)
biterator.index_with(vocab)
if tag == 'train':
train_iter = biterator(train_instance, num_epochs=1, shuffle=False)
print(len(train_fitems))
elif tag == 'dev':
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
print(len(dev_fitems))
elif tag == 'test':
test_iter = biterator(test_instances, num_epochs=1, shuffle=False)
print(len(test_fitems))
print("Forward size:", forward_size)
if tag == 'train':
cur_train_eval_results_list_out = eval_model(model, train_iter, device_num, with_probs=True,
show_progress=True)
common.save_jsonl(cur_train_eval_results_list_out,
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_sentence_level/04-19-02:17:11_hotpot_v0_slevel_retri_(doc_top_k:2)/i(12000)|e(2)|v02_f1(0.7153646038858843)|v02_recall(0.7114645831323757)|v05_f1(0.7153646038858843)|v05_recall(0.7114645831323757)|seed(12)/train_s_level_bert_v1_results.jsonl")
elif tag == 'dev':
cur_dev_eval_results_list_out = eval_model(model, dev_iter, device_num, with_probs=True, show_progress=True)
common.save_jsonl(cur_dev_eval_results_list_out, f"hotpot_s_level_{tag}_results_top_k_doc_{document_top_k}.jsonl")
elif tag == 'test':
cur_test_eval_results_list_out = eval_model(model, test_iter, device_num, with_probs=True,
show_progress=True)
common.save_jsonl(cur_test_eval_results_list_out,
config.PRO_ROOT / "data/p_hotpotqa/hotpotqa_sentence_level/04-19-02:17:11_hotpot_v0_slevel_retri_(doc_top_k:2)/i(12000)|e(2)|v02_f1(0.7153646038858843)|v02_recall(0.7114645831323757)|v05_f1(0.7153646038858843)|v05_recall(0.7114645831323757)|seed(12)/test_s_level_bert_v1_results.jsonl")
if tag == 'train' or tag == 'test':
exit(0)
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
list_dict_data_tool.append_subfield_from_list_to_dict(cur_dev_eval_results_list_out, copied_dev_o_dict,
'qid', 'fid', check=True)
# 0.5
cur_results_dict_v05 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5,
score_field_name='prob',
filter_value=0.5,
result_field='sp')
cur_results_dict_v02 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5,
score_field_name='prob',
filter_value=0.2,
result_field='sp')
_, metrics_v5 = ext_hotpot_eval.eval(cur_results_dict_v05, dev_list, verbose=False)
_, metrics_v2 = ext_hotpot_eval.eval(cur_results_dict_v02, dev_list, verbose=False)
logging_item = {
'v02': metrics_v2,
'v05': metrics_v5,
}
print(logging_item)
f1 = metrics_v5['sp_f1']
em = metrics_v5['sp_em']
pr = metrics_v5['sp_prec']
rec = metrics_v5['sp_recall']
common.save_json(logging_item, f"top_k_doc:{document_top_k}_em:{em}_pr:{pr}_rec:{rec}_f1:{f1}")
# common.save_jsonl(cur_train_eval_results_list, "train_p_level_bert_v1_results.jsonl")
if __name__ == '__main__':
# model_go()
model_saved_path = config.PRO_ROOT / "saved_models/04-19-02:17:11_hotpot_v0_slevel_retri_(doc_top_k:2)/i(12000)|e(2)|v02_f1(0.7153646038858843)|v02_recall(0.7114645831323757)|v05_f1(0.7153646038858843)|v05_recall(0.7114645831323757)|seed(12)"
# eval_model_for_downstream(model_saved_path, tag='train')
for doc_top_k in [1, 3, 5, 7, 9, 10, 11, 12]:
eval_model_for_downstream_ablation(model_saved_path, doc_top_k, tag='dev')
# eval_model_for_downstream_ablation(model_saved_path, 100, tag='dev')
# eval_model_for_downstream(model_saved_path, tag='test')
[Source: ethanjperez/semanticRetrievalMRS, src/hotpot_content_selection/bert_s_level_v1.py (MIT license)]
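The training loop in model_go() above uses the standard gradient-accumulation pattern: the loss is divided by gradient_accumulate_step, and the optimizer (and the EMA shadow weights) only steps once every gradient_accumulate_step backward passes. A minimal, self-contained sketch of that pattern, with a hypothetical toy model and random data (none of the names below come from the script itself):

import torch

# Toy setup (illustrative only): a linear model and plain SGD.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

batch_size, forward_size = 128, 32
grad_accum_steps = batch_size // forward_size   # 4 backward passes per update

step = 0
for _ in range(2 * grad_accum_steps):           # a few toy mini-batches
    x = torch.randn(forward_size, 10)
    y = torch.randn(forward_size, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss / grad_accum_steps).backward()        # scale so gradients average
    step += 1
    if step % grad_accum_steps == 0:
        optimizer.step()                        # one update per effective batch
        optimizer.zero_grad()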
import numpy as np
import numba as nb
################################################################################
@nb.jit(nopython = True, nogil = True)
def Maxwell(u):
    # Normalized 1-D Maxwellian weight: integrates to 1 over (-inf, inf).
f = np.sqrt(1 / np.pi) * np.exp(- u ** 2)
#f = 2 * abs(u) * np.exp(- u ** 2)
#
return(f)
################################################################################
@nb.jit(nopython = True, nogil = True)
def CLL_R(ur, ui):
    # Cercignani-Lampis (CLL) scattering kernel with accommodation
    # coefficient sigma (ui: incident velocity, ur: reflected velocity).
    sigma = 0.5
R = 1 / np.sqrt(np.pi * sigma * (2 - sigma)) * np.exp(- (ur - (1 - sigma) * ui) ** 2 / (sigma * (2 - sigma)))
#
return(R)
################################################################################
@nb.jit(nopython = True, nogil = True)
def Initial():
#
N = 100
ur = np.zeros(N + 1)
ui = np.zeros(N + 1)
Low0 = -5.0
Low1 = 0.0
High0 = 0.0
High1 = 5.0
for i in range(len(ur)):
ur[i] = Low0 + (High1 - Low0) * i / N
ui[i] = Low0 + (High1 - Low0) * i / N
#vi[i] = Low0 + (High1 - Low0) * i / N
#wi[i] = Low0 + (High0 - Low0) * i / N
#
return(ur, ui)
################################################################################
def main():
#
ur, ui = Initial()
Sum = 0.0
for i in range(len(ui)):
Sum += CLL_R(ur[50], ui[i]) * Maxwell(ui[i]) * (ui[1] - ui[0])
print(Sum)
################################################################################
if __name__ == '__main__':
#
main()
[Source: KhalilWong/Kernel_MD, V_4th_Condensation/jifensheji.py (MIT license)]
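The main() above approximates, for the single reflected velocity ur[50] = 0, the outgoing distribution value f_out(ur) = ∫ R(ur, ui) f_M(ui) dui by a Riemann sum. A sketch (not part of the original file) that vectorizes the same quadrature over every reflected velocity with plain numpy:

import numpy as np

sigma = 0.5
u = np.linspace(-5.0, 5.0, 101)             # same grid as Initial()
du = u[1] - u[0]

f_M = np.sqrt(1 / np.pi) * np.exp(-u ** 2)  # Maxwellian weight
UR, UI = np.meshgrid(u, u, indexing="ij")   # (reflected, incident) grid
R = np.exp(-(UR - (1 - sigma) * UI) ** 2 / (sigma * (2 - sigma))) \
    / np.sqrt(np.pi * sigma * (2 - sigma))

f_out = R @ f_M * du                        # Riemann sum over incident ui
print(f_out[50])                            # agrees with main()'s Sum at ur = 0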
"""
Hannan-Rissanen procedure for estimating ARMA(p,q) model parameters.
Author: Chad Fulton
License: BSD-3
"""
import numpy as np
from scipy.signal import lfilter
from statsmodels.tools.tools import Bunch
from statsmodels.regression.linear_model import OLS, yule_walker
from statsmodels.tsa.tsatools import lagmat
from statsmodels.tsa.arima.specification import SARIMAXSpecification
from statsmodels.tsa.arima.params import SARIMAXParams
def hannan_rissanen(endog, ar_order=0, ma_order=0, demean=True,
initial_ar_order=None, unbiased=None):
"""
Estimate ARMA parameters using Hannan-Rissanen procedure.
Parameters
----------
endog : array_like
Input time series array, assumed to be stationary.
ar_order : int
Autoregressive order
ma_order : int
Moving average order
demean : bool, optional
Whether to estimate and remove the mean from the process prior to
fitting the ARMA coefficients. Default is True.
initial_ar_order : int, optional
Order of long autoregressive process used for initial computation of
residuals.
    unbiased : bool, optional
Whether or not to apply the bias correction step. Default is True if
the estimated coefficients from the previous step imply a stationary
and invertible process and False otherwise.
Returns
-------
parameters : SARIMAXParams object
other_results : Bunch
Includes three components: `spec`, containing the
`SARIMAXSpecification` instance corresponding to the input arguments;
`initial_ar_order`, containing the autoregressive lag order used in the
first step; and `resid`, which contains the computed residuals from the
last step.
Notes
-----
The primary reference is [1]_, section 5.1.4, which describes a three-step
procedure that we implement here.
1. Fit a large-order AR model via Yule-Walker to estimate residuals
2. Compute AR and MA estimates via least squares
3. (Unless the estimated coefficients from step (2) are non-stationary /
non-invertible or `unbiased=False`) Perform bias correction
The order used for the AR model in the first step may be given as an
argument. If it is not, we compute it as suggested by [2]_.
The estimate of the variance that we use is computed from the residuals
of the least-squares regression and not from the innovations algorithm.
This is because our fast implementation of the innovations algorithm is
only valid for stationary processes, and the Hannan-Rissanen procedure may
produce estimates that imply non-stationary processes. To avoid
inconsistency, we never compute this latter variance here, even if it is
possible. See test_hannan_rissanen::test_brockwell_davis_example_517 for
an example of how to compute this variance manually.
This procedure assumes that the series is stationary, but if this is not
true, it is still possible that this procedure will return parameters that
imply a non-stationary / non-invertible process.
Note that the third stage will only be applied if the parameters from the
second stage imply a stationary / invertible model. If `unbiased=True` is
given, then non-stationary / non-invertible parameters in the second stage
will throw an exception.
References
----------
.. [1] Brockwell, Peter J., and Richard A. Davis. 2016.
Introduction to Time Series and Forecasting. Springer.
    .. [2] Gómez, Víctor, and Agustín Maravall. 2001.
       "Automatic Modeling Methods for Univariate Series."
       A Course in Time Series Analysis, 171–201.
"""
spec = SARIMAXSpecification(endog, ar_order=ar_order, ma_order=ma_order)
endog = spec.endog
if demean:
endog = endog - endog.mean()
p = SARIMAXParams(spec=spec)
nobs = len(endog)
max_ar_order = spec.max_ar_order
max_ma_order = spec.max_ma_order
# Default initial_ar_order is as suggested by Gomez and Maravall (2001)
if initial_ar_order is None:
initial_ar_order = max(np.floor(np.log(nobs)**2).astype(int),
2 * max(max_ar_order, max_ma_order))
# Create a spec, just to validate the initial autoregressive order
_ = SARIMAXSpecification(endog, ar_order=initial_ar_order)
# Compute lagged endog
# (`ar_ix`, and `ma_ix` below, are to account for non-consecutive lags;
# for indexing purposes, must have dtype int)
ar_ix = np.array(spec.ar_lags, dtype=int) - 1
lagged_endog = lagmat(endog, max_ar_order, trim='both')[:, ar_ix]
# If no AR or MA components, this is just a variance computation
if max_ma_order == 0 and max_ar_order == 0:
p.sigma2 = np.var(endog, ddof=0)
resid = endog.copy()
# If no MA component, this is just CSS
elif max_ma_order == 0:
mod = OLS(endog[max_ar_order:], lagged_endog)
res = mod.fit()
resid = res.resid
p.ar_params = res.params
p.sigma2 = res.scale
# Otherwise ARMA model
else:
# Step 1: Compute long AR model via Yule-Walker, get residuals
initial_ar_params, _ = yule_walker(
endog, order=initial_ar_order, method='mle')
X = lagmat(endog, initial_ar_order, trim='both')
y = endog[initial_ar_order:]
resid = y - X.dot(initial_ar_params)
# Get lagged residuals for `exog` in least-squares regression
ma_ix = np.array(spec.ma_lags, dtype=int) - 1
lagged_resid = lagmat(resid, max_ma_order, trim='both')[:, ma_ix]
# Step 2: estimate ARMA model via least squares
ix = initial_ar_order + max_ma_order - max_ar_order
mod = OLS(endog[initial_ar_order + max_ma_order:],
np.c_[lagged_endog[ix:], lagged_resid])
res = mod.fit()
p.ar_params = res.params[:spec.k_ar_params]
p.ma_params = res.params[spec.k_ar_params:]
resid = res.resid
p.sigma2 = res.scale
# Step 3: bias correction (if requested)
if unbiased is True or unbiased is None:
if p.is_stationary and p.is_invertible:
Z = np.zeros_like(endog)
V = np.zeros_like(endog)
W = np.zeros_like(endog)
ar_coef = p.ar_poly.coef
ma_coef = p.ma_poly.coef
for t in range(nobs):
if t >= max(max_ar_order, max_ma_order):
# Note: in the case of non-consecutive lag orders, the
# polynomials have the appropriate zeros so we don't
# need to subset `endog[t - max_ar_order:t]` or
# Z[t - max_ma_order:t]
tmp_ar = np.dot(
-ar_coef[1:], endog[t - max_ar_order:t][::-1])
tmp_ma = np.dot(ma_coef[1:],
Z[t - max_ma_order:t][::-1])
Z[t] = endog[t] - tmp_ar - tmp_ma
V = lfilter([1], ar_coef, Z)
W = lfilter(np.r_[1, -ma_coef[1:]], [1], Z)
lagged_V = lagmat(V, max_ar_order, trim='both')
lagged_W = lagmat(W, max_ma_order, trim='both')
exog = np.c_[
lagged_V[max(max_ma_order - max_ar_order, 0):, ar_ix],
lagged_W[max(max_ar_order - max_ma_order, 0):, ma_ix]]
mod_unbias = OLS(Z[max(max_ar_order, max_ma_order):], exog)
res_unbias = mod_unbias.fit()
p.ar_params = (
p.ar_params + res_unbias.params[:spec.k_ar_params])
p.ma_params = (
p.ma_params + res_unbias.params[spec.k_ar_params:])
# Recompute sigma2
resid = mod.endog - mod.exog.dot(
np.r_[p.ar_params, p.ma_params])
p.sigma2 = np.inner(resid, resid) / len(resid)
elif unbiased is True:
raise ValueError('Cannot perform third step of Hannan-Rissanen'
                             ' estimation to remove parameter bias,'
' because parameters estimated from the'
' second step are non-stationary or'
' non-invertible')
# TODO: Gomez and Maravall (2001) or Gomez (1998)
# propose one more step here to further improve MA estimates
# Construct results
other_results = Bunch({
'spec': spec,
'initial_ar_order': initial_ar_order,
'resid': resid
})
return p, other_results
[Source: timgates42/statsmodels, statsmodels/tsa/arima/estimators/hannan_rissanen.py (BSD-3-Clause license)]
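A short usage sketch. The simulated series below is hypothetical, but the entry point and the returned fields (ar_params, ma_params, sigma2, other_results.resid) are exactly those defined above:

import numpy as np
from statsmodels.tsa.arima.estimators.hannan_rissanen import hannan_rissanen

# Simulate an ARMA(1,1) process: y_t = 0.6 y_{t-1} + e_t + 0.3 e_{t-1}
rng = np.random.default_rng(0)
e = rng.standard_normal(1000)
y = np.zeros(1000)
for t in range(1, 1000):
    y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]

p, other = hannan_rissanen(y, ar_order=1, ma_order=1, demean=True)
print(p.ar_params, p.ma_params, p.sigma2)   # estimates roughly 0.6, 0.3, 1.0
print(other.initial_ar_order, len(other.resid))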
/- 30 Aug 2019 -/
-- degree
-- incidence matrix
-- adjacency matrix
/-
## Definitions:
* A sequence of nonnegative integers is called `graphic` if it is the degree
sequence of a simple graph.
how does one write dn where n is a subscript?
Havel-Hakimi Theorem: Let d_1 ≥ d_2 ≥ ... ≥ d_n ≥ 0 be a (finite) sequence of
nonnegative integers. The sequence is graphic iff the sequence
d_2 - 1, ... , d_(t + 1) - 1, d_(t + 2), ... , d_n, where t = d_1, is graphic.
Let 0 ≤ d_1 ≤ d_2 ≤ ... ≤ d_n be a (finite) sequence of
nonnegative integers. The sequence is graphic iff the sequence
d_2 - 1, ... , d_(t + 1) - 1, d_(t + 2), ... , d_n, where t = d_1 is graphic.
-/
import data.list.sort
import combinatorics.simple_graph.basic
import data.multiset.sort
universe u
variables (V : Type u) [fintype V]
-- what type should i use?
-- `list.sorted` or `list.pairwise`
-- i think i can just use nat since that includes zero
-- oh god i need some kind of counter? or index
-- copy over the sequence except erase largest element and
-- subtract one from the n next largest elements
def sub_one_n_times' (n : ℕ) (l : list ℕ) : list ℕ :=
(l.take n).map (nat.pred) ++ l.drop n
-- this one works i think, but ordering does matter
/-def list.pos_filter (l : list ℕ) : list ℕ := l.filter (λ n, 0 < n)
-- this probably already exists, just don't feel like looking it up
def n_pos_list_check (n : ℕ) (l : list ℕ) : Prop := n ≤ l.pos_filter.length-/
-- def nth_is_pos (n : ℕ) (l : list ℕ) [l.sorted (≤)] : Prop := 0 < (l.nth n)
-- bad
def sub_one_n_times (n : ℕ) (l : list ℕ) (h : l.sorted (≥)) : option (list ℕ) :=
if n ≤ (l.filter (λ n, 0 < n)).length then some (sub_one_n_times' n l) else none
def havel_hakimi' (l : list ℕ) (h : l.sorted (≥)) : option (list ℕ) :=
if (l.filter (λ n, 0 < n)) = [] then some [] else sub_one_n_times l.head l.tail h.tail
-- you can't get the empty list out of applying sub_one_n_times and removing the largest degree repeatedly, so when
-- you get the empty list, you're done
-- is there another way of doing it? is there something else i can return
-- also need to re-sort
def havel_hakimi_step (l : list ℕ) (h : l.sorted (≥)) : multiset ℕ := sub_one_n_times' l.head l.tail
-- ideas for degree sequence
-- multiset of vertices, take the image
-- `multiset.sort` to get sorted list
variables {V}
def simple_graph.degree_multiset (G : simple_graph V) [decidable_rel G.adj] : multiset ℕ := finset.univ.val.map (λ v, G.degree v)
def simple_graph.degree_sequence (G : simple_graph V) [decidable_rel G.adj] : list ℕ := G.degree_multiset.sort (≥)
-- test out definition - good for algebraic graph theory? - look through lecture notes
--variables (l : list ℕ) [l.sorted (≥)]
-- in pseudocode,
-- a multiset ℕ is graphic if it is the degree sequence of some graph `G`
def graphic' (s : multiset ℕ) : Prop := ∃ (G : simple_graph V) [decidable_rel G.adj], by exactI s = G.degree_multiset
-- a sorted list is graphic if blah blah
def graphic (l : list ℕ) : Prop := ∃ (n : ℕ) (G : simple_graph $ fin n) [decidable_rel G.adj], by exactI l = G.degree_sequence
-- theorem statement from wikipedia:
/-
Let `S = (d_{1},\dots ,d_{n})` be a finite list of nonnegative integers that is nonincreasing.
List `S` is graphic if and only if the finite list `S' = (d_{2}-1,d_{3}-1,\dots ,d_{{d_{1}+1}}-1,d_{{d_{1}+2}},\dots ,d_{n})`
has nonnegative integers and is graphic.
-/
variables (S : list ℕ) (h : S.sorted (≥))
def simple_graph.degree' (G : simple_graph V) [decidable_rel G.adj] : V → ℕ := λ v, G.degree v
theorem havel_hakimi_A : graphic S → (S.head ≤ (S.filter (λ n, 0 < n)).length) ∧ graphic ((havel_hakimi_step S h).sort (≥)) :=
begin
intros h2,
split,
{ -- this is just the fact that S.head is largest degree, so the vertex with that degree is adjacent
-- to S.head many vertices, which then means that they have degree at least 1
rcases h2 with ⟨n, G, hdec, hds⟩,
have h3 : S.head = (@simple_graph.degree_sequence (fin n) _ G hdec).head,
exact congr_arg list.head hds,
let d1 := (@simple_graph.degree_sequence (fin n) _ G hdec).head,
-- let v1 := simple_graph.degree_multiset⁻¹ G d1, -- how to get to the preimage of the map in degree_multiset
sorry },
{ -- the proof here is that performing the algorithm step is allowed because you can do the edge swap
sorry },
end
lemma havel_hakimi_B : (S.head ≤ (S.filter (λ n, 0 < n)).length) ∧ graphic ((havel_hakimi_step S h).sort (≥)) → graphic S :=
begin
intros h2,
rcases h2 with ⟨hnneg, n, G, hdec, hds⟩,
sorry,
end
theorem havel_hakimi : graphic S ↔ (S.head ≤ (S.filter (λ n, 0 < n)).length) ∧ graphic ((havel_hakimi_step S h).sort (≥)) :=
⟨havel_hakimi_A S h, havel_hakimi_B S h⟩
variables (G : simple_graph V) [decidable_eq V] (v w x y : V)
variables (h1 : G.adj v w) (h2 : G.adj x y) (hn1 : ¬ G.adj v x) (hn2 : ¬ G.adj w y)
def new_graph : simple_graph V :=
{ adj := λ a b, if (((a = v) ∧ (b = w)) ∨ ((a = v) ∧ (b = x)) ∨ (((a = w) ∧ (b = y)) ∨ ((a = x) ∧ (b = y)))) then ¬ G.adj a b
else G.adj a b,
-- there's gotta be a better way of doing this
sym := λ a b,
begin
simp,
intros h,
sorry,
end,
loopless := sorry, }
/-def new_graph : simple_graph V :=
{ adj := λ a b, if ((a ≠ v) ∧ (a ≠ w)) ∨ ((b ≠ x) ∧ (b ≠ y)) then G.adj a b
else ¬ G.adj a b,
-- there's gotta be a better way of doing this
sym := λ a b,
begin
simp,
intros h,
end,
loopless := _ }-/
-- okay, this is going to be tedious
-- going to need to show that the max degree is le the number of remaining vertices
-- sequence D is graphic if ∃ (G : simple_graph V), D is deg seq for G
-- for proof, need to define swapping edge algo
-- BUT FIRST we need to define edge deletion
[Source: agusakov/math-688-lean, src/math-688/lectures/lec-2.lean]
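For reference, the recursion that havel_hakimi' formalizes above is easy to state executably. A small Python sketch of the classical Havel-Hakimi test (the function name is mine, not from the Lean file):

def is_graphic(degrees):
    # Havel-Hakimi test: True iff the nonnegative degree sequence is
    # realizable by some simple graph.
    seq = sorted(degrees, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)                 # remove the largest degree d
        if d > len(seq):               # not enough remaining vertices
            return False
        for i in range(d):             # subtract 1 from the next d entries
            seq[i] -= 1
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)         # re-sort before the next step
    return True                        # only zeros remain

assert is_graphic([3, 3, 2, 2, 2]) and not is_graphic([4, 1, 1, 1])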
#ifndef LIME_SERVICE_HPP
#define LIME_SERVICE_HPP
#include <boost/format.hpp>
#include <iostream>
#include <set>
#include <string>
namespace Lime {
/**
* Service Interface, only to show a service template
*/
class ServiceInterface {
public:
ServiceInterface() {}
virtual ~ServiceInterface() {}
virtual bool Init() { return true; }
virtual bool Start() { return true; }
virtual bool Boot() { return true; }
virtual bool Stop() { return true; }
virtual bool Running() { return true; }
virtual std::string TypeName() { return typeid(ServiceInterface).name(); }
};
/**
* Service class is used to resolve Service instance depedencies.
*/
template <typename T>
class Service : public ServiceInterface {
public:
Service() {}
virtual ~Service() {}
public:
virtual Service<T> *Depends(ServiceInterface *service) final {
if (service && dependencies_.find(service) == dependencies_.end()) {
dependencies_.insert(service);
}
return this;
}
virtual bool Boot() {
for (auto &dependency : dependencies_) {
if (dependency && !dependency->Running()) {
if (dependency->Boot()) {
std::cout << boost::format("[Service %s][%s] Boot success.\n") %
dependency->TypeName() % __FUNCTION__;
} else {
std::cout << boost::format("[Service %s][%s] Boot fail.\n") %
dependency->TypeName() % __FUNCTION__;
return false;
}
}
}
if (!Start()) {
std::cout << boost::format("[Service %s][%s] Start fail.\n") %
TypeName() % __FUNCTION__;
return false;
}
return true;
}
virtual const std::set<ServiceInterface *> &Dependencies() final {
return dependencies_;
}
protected:
std::set<ServiceInterface *> dependencies_;
};
} // namespace Lime
#endif // !LIME_SERVICE_HPP
[Source: lizongti/lime, src/Base/Service.hpp (MIT license)]
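The Boot() method above is a depth-first start of a service's dependency set: each dependency that is not yet running is booted first, and only then does the service start itself. A rough Python analogue of the same pattern (illustrative only, not part of the Lime codebase; like the C++ version, it does not detect dependency cycles):

class Service:
    def __init__(self, name):
        self.name, self.deps, self.running = name, [], False

    def depends(self, svc):
        if svc not in self.deps:
            self.deps.append(svc)
        return self                      # chainable, like Depends() above

    def boot(self):
        for dep in self.deps:            # boot dependencies first
            if not dep.running and not dep.boot():
                return False
        self.running = True              # then start ourselves
        print(f"[Service {self.name}] Boot success.")
        return True

db, cache, api = Service("db"), Service("cache"), Service("api")
api.depends(db).depends(cache)
api.boot()                               # boots db and cache before api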
function [roiFile, labelFile] = roiSaveAllAsNifti(vw, fname)
% Export all current ROIs in a mrVista view as a nifti segmentation file,
% with each ROI corresponding to a different layer (integer) in the nifti
% file.
%
% [roiFile, labelFile] = roiSaveAllAsNifti(vw, fname, roiColor)
%
% Oct 2008: JW
%
% See roiSaveAsNifti.m
% global variables
mrGlobals;
% get the view struct
if notDefined('vw'), vw = getCurView; end
% check that it's a gray or volume view
viewType = viewGet(vw, 'viewtype');
switch lower(viewType)
case {'gray', 'volume'}
nROIs = numel(vw.ROIs);
otherwise
error('[%s]: Must be in gray view', mfilename);
end
% check the name of the file to save
if notDefined('fname'),
fname = [fileparts(vANATOMYPATH) filesep 'ROIs-' datestr(now,1) '.nii.gz'];
end
% make a 3D image with all points set to zero except ROI = roiColor
roiData = zeros(size(vw.anat));
% loop through all ROIs in view struct
for rois = 1:nROIs
vw = viewSet(vw, 'selected ROI', rois);
%get ROI coords
coords = getCurROIcoords(vw);
nVoxels = size(coords, 2);
% assign all voxels within the ROI a unique value (the roinum). this
% will be the label number if the nifti file is imported to itkGray.
thelabel = rois;
for voxel = 1:nVoxels
roiData(coords(1,voxel), coords(2,voxel),coords(3,voxel)) = thelabel;
end
end
% save the file
roiFile = niftiSaveVistaVolume(vw, roiData, fname);
% create a label file for itkGray
useV1V2V3V4colors = true;
labelFile = saveLabels(vw, useV1V2V3V4colors);
message = sprintf...
('ROI file saved as %s.\n\nLabel file save as %s.', roiFile, labelFile);
disp(message);
%------------------------------------------------------------------------
end
%------------------------------------------------------------------------
function fname = saveLabels(vw, useV1V2V3V4colors)
% create and save an itkGray-compatible label file
mrGlobals
if notDefined('useV1V2V3V4colors'), useV1V2V3V4colors = true; end
% create a blank file
fname = [fileparts(vANATOMYPATH) filesep 'ROIs-' datestr(now,1), '.lbl'];
fid = fopen(fname, 'w');
% print some typical headers
h{1} = '################################################';
h{2} = '# ITK-SnAP Label Description File';
h{3} = '# File format:';
h{4} = '# IDX -R- -G- -B- -A-- VIS MSH LABEL';
h{5} = '# Fields:';
h{6} = '# IDX: Zero-based index ';
h{7} = '# -R-: Red color component (0..255)';
h{8} = '# -G-: Green color component (0..255)';
h{9} = '# -B-: Blue color component (0..255)';
h{10} = '# -A-: Label transparency (0.00 .. 1.00)';
h{11} = '# VIS: Label visibility (0 or 1)';
h{12} = '# MSH: Label mesh visibility (0 or 1)';
h{13} = '# LABEL: Label description ';
h{14} = '################################################';
for ii = 1:14
fwrite(fid, sprintf('%s\n',h{ii}));
end
% count the ROIs
nROIs = length(viewGet(vw, 'ROIs'));
% make some colors for the different labels (itkGray expects 0..255 integers)
theColors = round(255 * hsv(nROIs));
% type out a line of text into the label file for each ROI
for roi = 1:nROIs
c = theColors(roi, :);
rname = vw.ROIs(roi).name;
% -----------------------------------------------------------
if useV1V2V3V4colors
% force a color scheme on the labels for v1/v2/v3/v4
if strfind(lower(rname), 'v1')
c = [255 0 0 ];
elseif strfind(lower(rname), 'v2')
c = [255 255 0 ];
elseif strfind(lower(rname), 'v3a')
c = [255 255 255];
elseif strfind(lower(rname), 'v3b')
c = [0 255 255];
elseif strfind(lower(rname), 'v3')
c = [0 255 0 ];
elseif strfind(lower(rname), 'v4')
c = [0 0 255 ];
elseif strfind(lower(rname), 'vo1')
c = [255 255 255 ];
elseif strfind(lower(rname), 'vo2')
c = [0 255 255 ];
elseif strfind(lower(rname), 'lo1')
c = [255 0 255 ];
elseif strfind(lower(rname), 'lo2')
c = [127 127 255 ];
elseif strfind(lower(rname), 'to1')
c = [0 255 0];
elseif strfind(lower(rname), 'to2')
c = [255 0 0 ];
end
end
%------------------------------------------------------------
a = sprintf('%d\t%d\t%d\t%d\t1\t1\t1\t"%s"\n', ...
roi, c(1), c(2), c(3), rname);
fwrite(fid, a);
end
%fclose('all');
end
[Source: vistalab/vistasoft, mrBOLD/ROI/roiSaveAllAsNifti.m]
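The label file that saveLabels writes follows ITK-SnAP's plain-text format: one IDX R G B A VIS MSH "LABEL" row per label, matching the sprintf at the end of the function. A tiny Python sketch of the same row format (a hypothetical helper, not part of vistasoft):

def itk_snap_label_line(idx, rgb, name, alpha=1, vis=1, msh=1):
    # One ITK-SnAP label row: IDX  R  G  B  A  VIS  MSH  "LABEL"
    r, g, b = rgb
    return f'{idx}\t{r}\t{g}\t{b}\t{alpha}\t{vis}\t{msh}\t"{name}"'

print(itk_snap_label_line(1, (255, 0, 0), "v1"))   # 1  255  0  0  1  1  1  "v1"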
INTEGER FUNCTION LCMGDF(LUNIT,SUBSET)
C$$$ SUBPROGRAM DOCUMENTATION BLOCK
C
C SUBPROGRAM: LCMGDF
C PRGMMR: J. ATOR ORG: NP20 DATE: 2009-07-09
C
C ABSTRACT: THIS FUNCTION CHECKS WHETHER AT LEAST ONE "LONG" (I.E.
C GREATER THAN 8 BYTES) CHARACTER STRING EXISTS WITHIN THE INTERNAL
C DICTIONARY DEFINITION FOR THE TABLE A MESSAGE TYPE GIVEN BY SUBSET.
C
C PROGRAM HISTORY LOG:
C 2009-07-09 J. ATOR -- ORIGINAL AUTHOR
C
C USAGE: LCMGDF (LUNIT, SUBSET)
C INPUT ARGUMENT LIST:
C LUNIT - INTEGER: FORTRAN LOGICAL UNIT NUMBER ASSOCIATED WITH
C SUBSET DEFINITION
C SUBSET - CHARACTER*8: TABLE A MNEMONIC FOR MESSAGE TYPE
C
C OUTPUT ARGUMENT LIST:
C LCMGDF - INTEGER: RETURN CODE INDICATING WHETHER SUBSET CONTAINS
C AT LEAST ONE "LONG" CHARACTER STRING IN ITS DEFINITION
C 0 - NO
C 1 - YES
C
C REMARKS:
C THIS ROUTINE CALLS: BORT NEMTBA STATUS
C THIS ROUTINE IS CALLED BY: None
C Normally called only by application
C programs.
C
C ATTRIBUTES:
C LANGUAGE: FORTRAN 77
C MACHINE: PORTABLE TO ALL PLATFORMS
C
C$$$
INCLUDE 'bufrlib.prm'
COMMON /BTABLES/ MAXTAB,NTAB,TAG(MAXJL),TYP(MAXJL),KNT(MAXJL),
. JUMP(MAXJL),LINK(MAXJL),JMPB(MAXJL),
. IBT(MAXJL),IRF(MAXJL),ISC(MAXJL),
. ITP(MAXJL),VALI(MAXJL),KNTI(MAXJL),
. ISEQ(MAXJL,2),JSEQ(MAXJL)
CHARACTER*10 TAG
CHARACTER*8 SUBSET
CHARACTER*3 TYP
C-----------------------------------------------------------------------
C-----------------------------------------------------------------------
C Get LUN from LUNIT.
CALL STATUS(LUNIT,LUN,IL,IM)
IF (IL.EQ.0) GOTO 900
C Confirm that SUBSET is defined for this logical unit.
CALL NEMTBA(LUN,SUBSET,MTYP,MSBT,INOD)
C Check if there's a long character string in the definition.
NTE = ISC(INOD)-INOD
DO I = 1, NTE
IF ( (TYP(INOD+I).EQ.'CHR') .AND. (IBT(INOD+I).GT.64) ) THEN
LCMGDF = 1
RETURN
ENDIF
ENDDO
LCMGDF = 0
RETURN
900 CALL BORT('BUFRLIB: LCMGDF - INPUT BUFR FILE IS CLOSED, IT MUST'//
. ' BE OPEN')
END
[Source: matzegoebel/WRF-fluxavg, var/external/bufr/lcmgdf.f (BSD-2-Clause license)]
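In Python terms, the scan that LCMGDF performs over a subset's table entries amounts to the following (a hypothetical paraphrase for readability, not a BUFRLIB API):

def has_long_char_string(entries):
    # entries: (type, bit_width) pairs for one Table A subset definition.
    # "Long" means a character entry wider than 64 bits, i.e. more than 8 bytes.
    return 1 if any(t == "CHR" and bits > 64 for t, bits in entries) else 0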
HANSARD REVISE * NUMERO 168
Le mardi 8 decembre 1998
REPONSE DU GOUVERNEMENT A DES PETITIONS
LES COMITES DE LA CHAMBRE
Mme Nancy Karetak-Lindell
Procedure et affaires de la Chambre
LOI SUR L'AGENCE DES DOUANES ET DU REVENU DU CANADA
L'hon. Harbance Singh Dhaliwal
M. Jean-Paul Marchand
L'ACCIDENT D'AVION A POINTE-LEBEL
LES VICTIMES DU SYNDROME DE LA GUERRE DU GOLFE
L'ASSOCIATION LEGISLATIVE CANADA-CHINE
HOMMAGE A M. MAURICE CHAMPAGNE
LE SERVICE D'ASSISTANCE CANADIEN AUX ORGANISMES
LE PROGRAMME NATIONAL DE SOINS A DOMICILE
L'hon. Ralph E. Goodale
L'hon. Ralph E. Goodale
M. Jake E. Hoeppner
L'hon. Ralph E. Goodale
L'hon. Ralph E. Goodale
L'hon. Ralph E. Goodale
M. Robert D. Nault
Le Comite permanent des finances
LES TRAVAUX DE LA CHAMBRE
LOI SUR L'AGENCE DES DOUANES ET DU REVENU DU CANADA
Mme Jocelyne Girard-Bujold
LES COMITES DE LA CHAMBRE
LOI SUR L'AGENCE DES DOUANES ET DU REVENU DU CANADA
LES COMITES DE LA CHAMBRE
LOI SUR L'AGENCE DES DOUANES ET DU REVENU DU CANADA
LES TRAVAUX DE LA CHAMBRE
LOI SUR L'AGENCE DES DOUANES ET DU REVENU DU CANADA
M. Pierre de Savoye
HANSARD REVISE * NUMERO 168
Le mardi 8 decembre 1998
La seance est ouverte a 10 heures.
L'hon. Hedy Fry (secretaire d'Etat (Multiculturalisme) (Situation de la femme), Lib.):
REPONSE DU GOUVERNEMENT A DES PETITIONS
M. George Proud (Hillsborough, Lib.):
LES COMITES DE LA CHAMBRE
Mme Nancy Karetak-Lindell (Nunavut, Lib.):
PROCEDURE ET AFFAIRES DE LA CHAMBRE
Monsieur le President, je propose que le 49e rapport soit adopte.
M. Inky Mark (Dauphin-Swan River, Ref.):
M. Inky Mark (Dauphin-Swan River, Ref.):
Monsieur le President, la deuxieme petition a trait a la loi sur les jeunes contrevenants.
M. Inky Mark (Dauphin-Swan River, Ref.):
Monsieur le President, la troisieme petition porte sur les elections senatoriales.
M. Inky Mark (Dauphin-Swan River, Ref.):
M. Inky Mark (Dauphin-Swan River, Ref.):
M. John Finlay (Oxford, Lib.):
Mme Wendy Lill (Dartmouth, NPD):
Monsieur le President, j'ai deux petitions a presenter ce matin.
Mme Wendy Lill (Dartmouth, NPD):
Monsieur le President, je suggere que toutes les questions soient reservees.
CANADA CUSTOMS AND REVENUE AGENCY ACT
We established a special advisory committee to provide us with ongoing feedback.
Nearly 10,000 of our employees took part in this process.
The reasons for this broad base of support are very easy to understand.
Before entering politics, I was in business for 20 years.
What a waste of time for business people.
The provinces say so every day.
So do small businesses and individuals.
Canadians want governments to work together to serve their citizens.
They do not want parallel tax collection systems across the country.
They say so over and over.
Stop wasting money.
Governments must co-operate to reduce costs and simplify processes.
Here too, the new agency simply makes sense.
Do not take my word for it.
Those are good reasons to create the new agency.
Computing power doubles every 18 months.
We simply have to modernize this system.
We need to design a faster and fairer staffing process.
Technology forces us to make choices.
Or we can work together.
That would have cost it tens of millions of dollars.
Of that total, transactions made in Canada will account for $13 billion.
Does it make sense to force businesses to deal with 12 different systems?
It makes sense to work together to serve the interests of Canadians.
We are making a very serious effort to expand our services.
I must point out that provincial participation is entirely voluntary.
I have met with the provincial authorities.
I have met with them on several occasions.
They have all supported the concept, and we will work closely with them.
But I am a realist.
We could give many other examples.
When we try to get everyone's agreement, we never succeed.
We must forge ahead and show leadership.
The agency represents an opportunity.
Putting an infrastructure in place is already a starting point.
However, our most important initiative was to strengthen ministerial accountability.
Believe me, I took that message very much to heart.
The minister will table in Parliament an annual report on the agency's activities.
Parliament will conduct a legislative review after five years.
Such oversight authority may be exercised only by the minister.
As the Institute of Chartered Accountants stated before the same committee:
Winning the confidence of as many provinces as possible is essential.
I am trying to do so by putting in place the best possible structure.
It is crucial to ensure that every taxpayer is treated fairly.
I will make that my priority every hour of every day.
Canadians deserve no less.
After all, everything rests on that trust.
This system is supported by hard-working and honest public servants.
This bill represents a crucial step.
It is a major step forward.
I respect that reality, which lies at the very foundation of our democracy.
First, there is only one level of taxpayer in Canada.
This bill is not about politics.
Rather, it seeks to put in place something positive for Canada.
Mr. Jason Kenney (Calgary-Sud-Est, Ref.):
Why has he cut short democratic deliberation on this bill?
I believe I can answer that question.
Canadians will suddenly wake up wondering what is going on.
In a way, they are forcing the government to impose closure.
The minister spoke of common sense.
All of these improvements could be made without Revenue Canada transforming itself into an agency.
This transformation is simply not necessary.
I am not the only one who thinks so.
I want that to be absolutely clear.
The reasons given simply do not hold up.
Every business day, Revenue Canada collects roughly $1 billion.
That is $1 billion drawn, siphoned, from the pockets of Canadian taxpayers.
I can see, Mr. Speaker, that you can hardly believe it yourself.
I did not make that figure up.
That is simply not right.
I do not understand this bureaucratic jargon.
Where does accountability begin?
It is not at all clear.
Why is this necessary?
That is a terrible power being exercised.
Believe it or not, that power is sometimes abused.
I raise, for example, the case of Ms. Suzanne Thiessen of Winnipeg.
I raised this matter during question period, and elsewhere.
How is that possible?
And what answer did we get about the breach of confidentiality?
Ms. Janice Collingridge, a woman from Calgary, is a high-level quadriplegic.
They told her they were going to look at how she was spending that money.
The amount being demanded of her is more than her entire life savings.
They tried to extract $5,000 from her, plus interest and penalties.
That is what is wrong with our tax system.
I brought this case to the minister's attention.
I know there are others like it.
Tax practitioners could tell us about them.
What is the minister's answer?
The department does not answer that question.
Here is another case.
He came to see me recently at my office to tell me his story.
This kind of thing happens every day in Canada.
When it does, there is no one to be held accountable.
That is why we proposed adopting a taxpayers' bill of rights.
Yet the government has not even had the honesty to respond to our proposal.
What would a taxpayers' bill of rights do?
It is a commendable declaration, but an entirely ineffective one.
It has no teeth; it provides for no sanctions.
It has no statutory force.
It cannot impose any penalty on Revenue Canada when it steps out of line.
Officials would be required to inform taxpayers of any overpayment.
He could no longer pay his rent or buy food.
Perron (Riviere-des-Mille-iles, BQ):
The Acting Speaker (Mr. McClelland):
They say that governing means listening, consulting and acting.
They simply packed the place with Liberal members and gagged us.
Last year, I remember, I sat on the Standing Committee on Finance.
They are leaked to friends.
They say that governing means respecting people.
That is a lack of respect.
That is a flagrant lack of respect.
They say that governing means being fair.
What was done?
That is a lack of respect and a lack of fairness.
It makes profits on what?
I would like to have the Royal Bank's golf expense budget.
That is a lack of respect.
Governing is also about making choices.
This government has chosen to create agencies.
It is our own Auditor General who says so.
Why spend money creating another layer of bureaucrats?
By appointing a commissioner and a deputy commissioner, we create another level of officials.
Earlier I was speaking of existing agencies.
Take the new Canadian wheat agency in western Canada.
It is going all wrong.
Nav Canada simply decided to close the Gatineau control tower.
Yesterday, Nav Canada closed the Baie-Comeau control tower.
There was an accident at Baie-Comeau.
Nav Canada had said: No problem.
No jobs will be cut.
For at least two or three years, no jobs will be cut.
What happened to that promise?
We question the Minister of Transport and he says: They are adjusting.
They are doing their job.
Let us talk about ADM, the Montreal airports agency.
Before, there was a very good and very worthwhile organization.
That is what was done, but it has been chaos ever since.
That is what agencies get you.
It washes its hands of the matter.
The government is afraid to govern and do its job.
That is not what the government across the way is doing.
Ms. Suzanne Tremblay (Rimouski-Mitis, BQ):
Mr. Speaker, I am pleased to rise today for two reasons.
Over the weekend, I had the opportunity to meet many of my constituents.
I heard some choice stories.
I know of no disease as serious for the future of Canada.
It is a senseless plan that the majority of the population rejects.
It lets time go by.
It manages time, nothing more.
When things do not suit it, it manages time by gagging the opposition.
It brings out the baseball bat.
Let us look at the situation a little more closely.
In five years, the Liberals have already beaten the Mulroney government's record.
But the honeymoon is coming to an end.
Every day we denounce this government.
There is even more.
There is no urgency in this area.
There was urgency for that bill, in order to protect an industry.
We could have proceeded more quickly with that bill.
It was to defend one of our industries, to defend Canadian culture.
We are not facing a national crisis, or an international one.
We are facing a bill that needs major improvements.
That way, I will be able to get myself a nice job.
So Mr. Vallerand came and told us how magnificent this fine agency was.
The government turned a deaf ear.
Our opposition rests on a great many major and important reasons.
It refuses our co-operation and will not budge an iota.
I want the public to know that this agency will be harmful.
Ms. Wendy Lill (Dartmouth, NDP):
We believe this agency is a huge Liberal Trojan horse for privatization.
It goes far beyond the concept of improving services.
The government will boast of having cut spending by $2.2 billion.
Nor does it have the support of the majority of its employees.
Those most directly affected are not convinced of the merit of this idea.
A single collection agency does not enjoy solid support among the provinces.
British Columbia and Saskatchewan have not endorsed the concept.
Alberta supports the concept of an independent agency for strictly ideological reasons.
Nothing justifies creating an independent agency.
These claims are exaggerated at best.
Nothing prevents the government from hiring auditors right now.
The bill provides no detail on recourse mechanisms.
For all these reasons, the NDP will vote against Bill C-43.
Mr. Scott Brison (Kings-Hants, PC):
In that regard, the government has never had much success.
We believe this bill raises significant difficulties.
That is how he speaks of the unions.
A Canadian Auto Workers representative sits on the board of directors of Chrysler Canada.
The federal government says it cannot work with the public service.
The Minister of Industry said that high taxes boost productivity.
Never has a statement so perfectly illustrated economic illiteracy!
In reality, a weak dollar benefits no one.
In the short term, there may be momentary advantages for Canadian exporters.
In the long term, however, we cannot underestimate the cost to our prosperity.
The government should take this opportunity to lead by example.
We cannot do that.
We are prepared to run the risk of this agency's negative aspects.
At Dalhousie University, my cousin was headed for a career in public administration.
They were not consulted on this.
It was not discussed.
That is what we proposed in committee.
We made that suggestion in the House.
That is the kind of consultation Canadians want.
It is a systemic abuse of power by this government.
Right now, Canadians have access to the same information that we parliamentarians have.
They want to take part in major decisions like this one.
We will not be supporting Bill C-43.
He is asking the government to go back to the public for further consultations.
That is my question, and I would like to hear his answer.
Mr. Speaker, I thank my colleague for this very important question.
They were not created for that purpose.
We are witnessing a steady decline in the power of members of Parliament.
Since the late 1960s, we have witnessed the emasculation of members of Parliament.
Mr. Werner Schmidt (Kelowna, Ref.):
Mr. Speaker, I thank my colleague for his remarks.
Mr. Speaker, I thank the member for his question.
I will deal first with the issue of privacy.
There is no shortage of good ideas.
On occasion, we might even learn something.
There is no need to reinvent the wheel.
That can sometimes be a bad thing as well.
Canadians have wonderful ideas, and we must work with them.
That explains why the value of the Canadian dollar keeps on falling.
In the long term, this decline could reduce productivity even further.
There is no need to reinvent the wheel.
Mr. Roy Bailey (Souris-Moose Mountain, Ref.):
Mr. Speaker, I thank my colleague from the Progressive Conservative Party for his remarks.
I applaud his comments on the use of committees.
Mr. Speaker, I understand the member's remarks.
That is completely unacceptable.
The government's only answer is to announce that it will tighten the screws on the media.
For the government to tighten the screws on the media is utterly ridiculous.
It is a joke.
The member says it is a joke.
It is perverse; it may verge on a joke, but there is nothing funny about it.
It is a chronic problem that will have to be tackled.
Ms. Beth Phinney (Parliamentary Secretary to Minister of National Revenue, Lib.):
Mr. Speaker, I will be sharing my time with the member for Mississauga-Sud.
We must show leadership.
That is what the agency will do.
Those are the reasons we are proposing the new agency.
The new agency would have such powers.
Collective agreements will remain in force.
Pension rights and leave credits will remain intact.
And it has strengthened the legislation to that end.
I am convinced that the new agency will offer employees significant new opportunities.
There will therefore be greater job mobility.
Vacant positions will be filled in weeks rather than months.
That too is a clear example of better service for Canadians.
For small businesses, however, that is not at all the case.
That is what this bill is all about.
Better service through time savings.
Better service through money savings.
Better service through the judicious use of technology.
Better service through new partnership opportunities.
Better service through greater employee flexibility and autonomy.
Better service through streamlining and simplification.
The people who work at Revenue Canada are the very people the new agency needs.
And Canadians will be better served by the new agency.
We made people a priority in creating the agency.
We made people a priority in the agency's mission.
We made people a priority in the agency's administration.
I congratulate the Minister of National Revenue on this bill.
At the end of the day, this bill is in the public interest.
It serves the interests of the Canadian people.
Mr. Speaker, I would like my colleague to read the Hansard carefully tomorrow morning.
But my colleague must have forgotten to speak to Mr. Lampron.
Yet he came and submitted a brief to the committee.
She must have forgotten to speak to all those people.
The member should have read the survey results before giving her speech.
Where on earth did my colleague get her information?
It will take time to set up the board of management.
The answer is yes.
Employees also asked us which benefits would continue to exist.
After that, there will be a single certification process.
Mr. Jason Kenney (Calgary-Sud-Est, Ref.):
Mr. Paul Szabo (Mississauga-Sud, Lib.):
These employees came from different areas of the department.
For the new agency to do so effectively, it must have new powers.
I will now turn to the issue of user fees.
Let me briefly describe these controls.
First, the minister will have to approve any new fee or any fee increase.
[...] Time matters a great deal to businesses, particularly small businesses.
That, essentially, is why consolidating these activities is so important.
A single administration would allow the provinces to achieve real savings.
Transactions that take place in record time are hard to trace.
We must be able to respond to this new reality.
New problems call for new solutions.
Mr. Jean-Paul Marchand (Quebec-Est, BQ):
There is an old saying that taxation without representation is tyranny.
There seems to be a rush to cut taxes for the wealthiest.
This clause seems to give the agency unlimited powers.
Let me quote him the key sentence of that clause:
Mr. Speaker, the government of a country is no laughing matter.
Mr. Jason Kenney (Calgary-Sud-Est, Ref.):
The transition to the next millennium is certainly not a specious topic.
The member asked a question about the board.
All of that remains intact.
Mr. Ted White (North Vancouver, Ref.):
Mr. Speaker, it is good to see you in the chair.
I will be sharing my time with the member for New Westminster-Coquitlam-Burnaby.
The member who spoke before me talked about the new millennium.
It is typical of this government to be completely out of touch with reality.
The Swiss make good watches, so they know how to measure time.
I could get myself in trouble with that.
I can see that the government is scandalized to hear it.
In fact, the current government is worse than the one before it.
I know my colleagues have spoken about this in their speeches.
Having it answer to the minister is simply not enough.
There must be greater transparency and greater accountability.
Like the Prime Minister, perhaps they benefit from the advice of imaginary homeless people.
That is exactly what is happening with this bill.
The government scoffs at the advice of ordinary people.
The minister lives in an imaginary world ruled by political correctness.
A Squamish band is located in my riding.
The Squamish band reserve in North Vancouver has 16 chiefs.
It all comes down to hierarchy, and there is no democracy.
Mr. Paul Szabo (Mississauga-Sud, Lib.):
First there is the question of an ombudsman.
Consequently, Canadians do indeed have an ombudsman.
In fact, they have 301 of them.
Each of us here has that responsibility.
I know we have all served our constituents in that regard.
It is an interesting concept.
I will come back shortly to specific points in that regard.
I made no comment on that.
Perhaps they have good reasons not to like it.
I can tell the member why they do not like it.
There is absolutely no doubt about it.
Mr. Speaker, I see you are interrupting me again.
That is most unfortunate.
Mr. Paul Forseth (New Westminster-Coquitlam-Burnaby, Ref.):
Mr. Speaker, it is Christmastime.
It is the season for giving, not taking.
The Bible says, in Luke, chapter two:
This census, the first, took place while Quirinius was governor of Syria.
And everyone went to be registered, each to his own town.
Historically, then, governments have levied taxes, and people have paid them.
It has always been so.
The bill changes the legal foundations on which tax collection rests.
The agency is placed under the responsibility of the Minister of National Revenue.
It is a historic and dramatic change.
We have certainly come a long way since Caesar taxed the whole world.
Mr. Paul Szabo (Mississauga-Sud, Lib.):
The member is a lawyer.
Mr. Speaker, perhaps we could take the example of the Criminal Code.
The Young Offenders Act provides for the process in question.
I think that is the most appropriate place to put it.
It is an insult to the officials of the Department of Revenue.
No, it is not saleable.
Mr. Speaker, I believe the provinces will take a wait-and-see approach.
That professional approach has been well defined in the schools of public administration.
Taxpayers have the right to understand the laws they are expected to obey.
This subject is the stuff of great debate.
It could be the subject of a master's thesis.
Officials should be required to inform taxpayers of any overpayment.
If the government acts as it should, it will get a measure of co-operation.
Mr. Roy Bailey (Souris-Moose Mountain, Ref.):
Mr. Speaker, I have a question for the member.
I have listened closely to the remarks flying from both sides of the House.
Mr. Speaker, the fact is that the provinces have steered well clear.
Not one has announced that it will take part.
They are waiting to see whether the agency delivers the much-touted results.
The Auditor General has already touched on this.
We are waiting to see whether the agency is as innovative as claimed.
We are waiting for proof.
The provinces may sign on if the government delivers good results.
Mr. Paul DeVillers (Simcoe-Nord, Lib.):
Mr. Speaker, I will be sharing my time with the member for Waterloo-Wellington.
Given its importance, I will address it in my speech.
The minister will continue to be the person designated to exercise these powers.
Mr. Ghislain Lebel (Chambly, BQ):
The minister remains accountable.
It is not like other agencies.
It is for these reasons that I believe this agency will be more accountable.
The Auditor General will audit the agency's books.
In my view, several features of this agency set it apart from the others.
Mr. Yves Rocheleau (Trois-Rivieres, BQ):
That is something that concerns me as a Quebecker and as a sovereignist.
Mr. Lynn Myers (Waterloo-Wellington, Lib.):
Fairness is one of the fundamental pillars of overall revenue administration.
The consultations were broad and thorough.
The message was crystal clear.
Good service means fair service.
Fairness means openness, transparency, courtesy, adaptability, accessibility and promptness in responding to needs.
That is a very important element.
I think that should be done at the departmental level.
I believe we would do well not to forget that.
Canadians expect no less and deserve no less.
I therefore urge all members to support this very worthwhile legislation.
Mr. Jason Kenney (Calgary-Sud-Est, Ref.):
Mr. Speaker, I would like to thank the member for his very pertinent question.
There are about three minutes left for questions and comments.
Mr. Janko Peric (Cambridge, Lib.):
Mr. Chuck Strahl (Fraser Valley, Ref.):
Mr. Paul Steckle (Huron-Bruce, Lib.):
Mr. Peter Adams (Peterborough, Lib.):
THE PLANE CRASH AT POINTE-LEBEL
Mr. Claude Drouin (Beauce, Lib.):
Seven passengers lost their lives and three others were injured.
All the passengers were from the Cote-Nord.
May our prayers be with the victims, the injured, and their families.
Mr. Werner Schmidt (Kelowna, Ref.):
And that is only the beginning.
Yesterday, the minister refused to meet with the franchisees themselves.
Is it because he knows they are right?
Mr. Raymond Lavigne (Verdun-Saint-Henri, Lib.):
I sincerely hope this custom continues for many years to come.
Thanks to the organizer, the Toujours Ensemble organization, and to all the volunteers.
THE VICTIMS OF GULF WAR SYNDROME
Mr. Stephan Tremblay (Lac-Saint-Jean, BQ):
Mr. John Cannis (Scarborough-Centre, Lib.):
We have legislated on gun control.
Our government is determined to end violence against any Canadian.
We hope these measures will help make our society safer.
Mr. Gerry Ritz (Battlefords-Lloydminster, Ref.):
Since then, Elwin has been elected leader of the Saskatchewan Party, the official opposition in Regina.
I wish them success in all their political endeavours.
Ms. Marlene Jennings (Notre-Dame-de-Grace-Lachine, Lib.):
This Canadian feature film has been nominated for 10 Genie awards.
CANADA-CHINA LEGISLATIVE ASSOCIATION
Mr. Bill Blaikie (Winnipeg-Transcona, NDP):
China is now a place where people can get rich.
TRIBUTE TO MR. MAURICE CHAMPAGNE
Ms. Maud Debien (Laval-Est, BQ):
A poet and essayist, Mr. Champagne passed away recently.
Maurice Champagne's work will live on after him.
Mr. Bryon Wilfert (Oak Ridges, Lib.):
It is a program that works.
Mr. Bill Casey (Cumberland-Colchester, PC):
The House of Commons deserves to know who is behind this campaign.
That was settled yesterday, on a question of privilege.
The member for Thornhill.
CANADIAN EXECUTIVE SERVICE ORGANIZATION
Ms. Elinor Caplan (Thornhill, Lib.):
This organization had no practical experience with procurement or with the procedures to follow.
Mr. Derrek Konrad (Prince Albert, Ref.):
Mr. Speaker, Canadians in rural communities are at risk of losing their jobs.
Let us look at the facts.
First of all, Canada is the lowest-cost producer in the world.
Finally, the dehydration plants are mostly owned by farmers.
Mr. John Herron (Fundy-Royal, PC):
In 1974, he was named an Officer of the Order of Canada.
Mr. Julian Reed (Halton, Lib.):
Mr. Speaker, Canada is desperately short of skilled labour.
The skilled workforce is aging.
In Halton, we are doing something about it.
Mr. Preston Manning (Leader of the Opposition, Ref.):
Mr. Speaker, here is what the Premier of Alberta said:
Mr. Preston Manning (Leader of the Opposition, Ref.):
Mr. Preston Manning (Leader of the Opposition, Ref.):
Where is the new young offenders act?
Why does he not shore up the Canadian dollar?
Mr. Grant Hill (Macleod, Ref.):
Hon. Allan Rock (Minister of Health, Lib.):
Mr. Grant Hill (Macleod, Ref.):
She wants a new hip so that she can prepare her own meals.
Hon. Allan Rock (Minister of Health, Lib.):
Mr. Gilles Duceppe (Laurier-Sainte-Marie, BQ):
Mr. Speaker, no one is trying to isolate anyone.
We are all negotiating together.
And we do indeed hope that the Premier of Quebec will negotiate in good faith.
Mr. Gilles Duceppe (Laurier-Sainte-Marie, BQ):
We are not paranoid, Mr. Speaker; we are just not deaf.
There is a serious youth unemployment problem.
Mr. Michel Gauthier (Roberval, BQ):
He said he hoped he would act in good faith.
Mr. Michel Gauthier (Roberval, BQ):
They are big babies.
That is precisely why we are negotiating the Canadian social union.
One aspect is that we want to improve the processes for mutual consultation.
It is under negotiation, and we hope to improve this state of affairs.
Ms. Alexa McDonough (Halifax, NDP):
If the minister condemns these practices, what is he doing to put an end to them?
Hon. Allan Rock (Minister of Health, Lib.):
That is a decision of the Ontario government.
Ms. Alexa McDonough (Halifax, NDP):
The minister claims his government supports the five principles of medicare.
Hon. Allan Rock (Minister of Health, Lib.):
Mr. Scott Brison (Kings-Hants, PC):
As the dollar falls, we see Canadians suffering.
Hon. Paul Martin (Minister of Finance, Lib.):
The hon. member for Kings-Hants.
Mr. Scott Brison (Kings-Hants, PC):
Mr. Speaker, I do not think the minister understood my question.
While the minister agonizes over the dollar, Canadians suffer.
The hon. member for Kings-Hants.
Hon. Paul Martin (Minister of Finance, Lib.):
The Canadian currency was taking a beating.
Interest rates in Canada were rising.
Today, we have the strongest balance sheet...
The hon. member for Edmonton-Nord.
Ms. Deborah Grey (Edmonton-Nord, Ref.):
It is the highest court in the land, and we can do something about it.
Hon. Herb Gray (Deputy Prime Minister, Lib.):
Ms. Deborah Grey (Edmonton-Nord, Ref.):
Nice try, Mr. Speaker, but that forum is dead.
That commission is doing nothing at present.
It has been stripped of its powers, largely by the government.
Hon. Herb Gray (Deputy Prime Minister, Lib.):
That speaks for itself.
THE NATIONAL HOME CARE PROGRAM
Mr. Maurice Dumas (Argenteuil-Papineau, BQ):
We cannot do it without them; it would not be a good program.
Mr. Speaker, the member has quite an imagination.
Mr. Maurice Dumas (Argenteuil-Papineau, BQ):
Mr. Speaker, I withdraw my words.
The member has no imagination.
Mr. Monte Solberg (Medicine Hat, Ref.):
Why is the Minister of Finance so determined to be stingy with Canadians?
Hon. Paul Martin (Minister of Finance, Lib.):
Mr. Speaker, the member had better find himself another speechwriter.
What matters is the future of the Canada Pension Plan.
The Reform Party does not believe in it.
The Liberal Party does.
Mr. Monte Solberg (Medicine Hat, Ref.):
What is the real rate of return on the Canada Pension Plan?
Is it 11, 12, 13 per cent?
Hon. Paul Martin (Minister of Finance, Lib.):
It is on behalf of those people that we speak.
Ms. Christiane Gagnon (Quebec, BQ):
Mr. Speaker, the number of poor children in Canada keeps rising.
There is nothing for the poor.
Ms. Bonnie Brown (Parliamentary Secretary to Minister of Human Resources Development, Lib.):
We also believe the best solution is to put people back to work.
Ms. Christiane Gagnon (Quebec, BQ):
Mr. Speaker, my supplementary question is this time for the Minister of Finance.
There are poor children because there are poor parents.
Hon. Paul Martin (Minister of Finance, Lib.):
Mr. John Duncan (Ile de Vancouver-Nord, Ref.):
Many of them are changing radically...
The hon. member for Nanaimo-Alberni.
Mr. Bill Gilmour (Nanaimo-Alberni, Ref.):
Mr. Speaker, we must take action abroad on this issue.
Yet the Liberals are doing nothing to counter this campaign.
They are content to hope it will end.
Mr. Bob Speller (Parliamentary Secretary to Minister for International Trade, Lib.):
We will continue our fight on behalf of British Columbia's forestry workers.
Mr. Antoine Dube (Levis-et-Chutes-de-la-Chaudiere, BQ):
Hon. John Manley (Minister of Industry, Lib.):
Mr. Speaker, first of all, the preamble to the question is false.
These are elements of a strong shipbuilding policy for Canada.
Mr. Steve Mahoney (Mississauga-Ouest, Lib.):
How can this happen in Canada?
How can an organized ring bring sex slaves into Canada?
Hon. Lucienne Robillard (Minister of Citizenship and Immigration, Lib.):
Clearly, Canada is fighting this problem.
That is why we have joined with various countries to fight it.
Mr. Garry Breitkreuz (Yorkton-Melville, Ref.):
The minister defended the process.
All the results of these elections are now suspect.
Will the government immediately order an independent audit?
The problems will be corrected as quickly as possible.
Mr. Jake E. Hoeppner (Portage-Lisgar, Ref.):
What are the minister and the Canadian Wheat Board so anxious to hide?
Hon. Ralph E. Goodale:
And it is a genre that goes over well across the way.
It is the producer members of the board of directors...
The hon. member for Bras d'Or-Cape Breton.
Ms. Michelle Dockrill (Bras d'Or-Cape Breton, NDP):
Mr. Speaker, my question is for the Minister of Natural Resources.
Yes or no, will you come to Cape Breton?
The Minister of Natural Resources.
Mr. Peter Mancini (Sydney-Victoria, NDP):
Mr. Speaker, my question is for the same minister.
Mr. Speaker, we want an effective solution.
Ms. Elsie Wayne (Saint John, PC):
People in the industry are calling for a fair national shipbuilding policy.
People in this industry are not asking for subsidies.
Hon. John Manley (Minister of Industry, Lib.):
There is a 25 per cent duty on ships imported into Canada.
There is a government procurement policy in this sector.
We are not prepared to offer those subsidies.
Ms. Elsie Wayne (Saint John, PC):
The minister is in a state of utter confusion.
It has already been five years.
When will the minister and the government bring forward a shipbuilding policy?
Hon. John Manley (Minister of Industry, Lib.):
However, let me say that this is not necessarily a bad thing.
In this case, we are providing support to the shipbuilding industry.
There is also a 25 per cent duty.
But we cannot afford to do that.
Ms. Carolyn Bennett (St. Paul's, Lib.):
Mr. Speaker, my question is for the Minister of Industry.
Hon. John Manley (Minister of Industry, Lib.):
I thought to myself, what a stupid idea.
Imagine my dismay when I discovered it was attributed to me.
No, I am not in favour of high taxes.
Mr. Werner Schmidt (Kelowna, Ref.):
Hon. Alfonso Gagliano (Minister of Public Works and Government Services, Lib.):
Canada Post believes this is a reasonable plan.
Let us give it a chance and see what the results are.
Mr. Paul Mercier (Terrebonne-Blainville, BQ):
People are worried.
The hon. Parliamentary Secretary to the Minister of Transport.
Mr. Stan Dromisky (Parliamentary Secretary to Minister of Transport, Lib.):
Safety is our first priority.
Mr. Yvon Godin (Acadie-Bathurst, NDP):
At the expense of the unemployed.
Today, that figure has fallen to 38 per cent.
My question is for the Deputy Prime Minister.
Ms. Bonnie Brown (Parliamentary Secretary to Minister of Human Resources Development, Lib.):
Mr. Greg Thompson (Nouveau-Brunswick-Sud-Ouest, PC):
Yesterday I rose on a point of order; I did not raise a question of privilege.
And that is not all.
That is where it gets complicated.
This is a very serious matter.
I refer to specific citations from Beauchesne.
I will now read Beauchesne's citation 93.
I would like the House to listen very carefully.
These threats nonetheless raise serious problems for the House.
These threats were not anonymous.
They were made in person by the member for Kenora-Rainy River.
Citation 99 concludes as follows:
It is a threat, and there is nothing ambiguous about it.
I believe apologies are no longer enough.
There is, on the face of it, a breach of privilege.
Mr. Robert D. Nault (Kenora-Rainy River, Lib.):
I will ask just one question, Mr. Speaker.
Yesterday I ruled on the question of privilege that was raised.
This is not yesterday's question of privilege.
That matter is settled.
I am dealing with the question of privilege raised today.
The member for Kenora-Rainy River is with us.
If he wishes to speak, I invite him to do so.
Mr. Robert D. Nault:
It was entirely a debate over a difference of opinion.
That was the subject of the conversation.
There was no intimidation.
It was very much like the debates we have in this chamber all the time.
The member for Kenora-Rainy River.
Mr. Robert D. Nault:
They accuse me of attacking them when in fact that is not the case.
The other member in question gives us his version of the facts.
These are members of Parliament.
Each gives his own view of an event.
I am bound to believe the version of both members.
THE STANDING COMMITTEE ON FINANCE
Mr. Yvan Loubier (Saint-Hyacinthe-Bagot, BQ):
The case before us today is different.
There is a difference.
Mr. Speaker, I very respectfully submit this case to you.
Mr. Bob Kilger (Stormont-Dundas, Lib.):
It looks like a hemorrhage.
It really is more than regrettable.
It is absolutely unacceptable that we find ourselves in this situation again today.
Once I have heard them, I will make a ruling.
I will then invite members who wish to add something to speak.
No one is barred from speaking on this subject.
When I know, I will let the member know.
We will hear him now.
Mr. John Cummins (Delta-South Richmond, Ref.):
On October 28, I submitted a written question.
But that is not what I did.
All parts of the question dealt with the same subject.
However, House staff judged the question to be too long.
I was asked to divide it into five separate questions.
At the time, I was entitled to only one question on the Order Paper.
We currently have the worst of both worlds.
The Standing Orders give the staff no direction on splitting questions.
That struck a reasonable balance.
Standing Order 39 is being misapplied.
It is being used to the detriment of members.
It is being used to prevent them from asking questions.
This could be done under the present Standing Orders.
A similar practice exists in Australia.
The Australian practice seems to work.
Answers are normally provided within one working week.
That way, members could ask further questions.
All parliamentarians would certainly be very glad to get answers within a week.
The answer given to Question No. 91(i) is plainly wrong.
The Acting Speaker (Mr. McClelland):
They are not the government's lackeys.
The answer seems plausible until it is examined closely.
The Acting Speaker (Mr. McClelland):
That is quite another matter, and it is a subject for debate.
They relate to the problem of questions on the Order Paper.
The first thing I wanted to raise is the length of questions.
No rules prescribed by the Standing Orders are being followed.
That is the first concern I wanted to express.
Some of them take almost 200 days.
The third concern is related and has to do with the factual accuracy of the answers.
The Acting Speaker (Mr. McClelland):
For the information of members, Standing Order 39(2) states:
That provision is indeed in the Standing Orders of the House.
We take your point.
Mr. Randy White (Langley-Abbotsford, Ref.):
Mr. Speaker, I will be brief.
That is what is at issue here.
Standing Order 39(2) reads as follows:
I do not blame the clerk.
We gave him that responsibility without giving him precise direction.
The rules governing questions were established during the 33rd Parliament.
Unfortunately, the members of that Parliament struck a bad deal with the government.
Mr. Bill Blaikie (Winnipeg-Transcona, NDP):
Then we are no different from those who do not sit in the House.
This situation adds to the general malaise.
The government does not respect the Standing Orders.
It does not answer questions.
Committee reports are being leaked.
The Speaker intervenes to call members to order.
Members keep on chattering and shouting.
What is going on, Mr. Speaker?
It is not just Christmas.
The government makes announcements.
Government members do not even go to the press gallery to do it.
We complain about the government's lack of respect for the House.
It is part of a generalized pattern of behaviour.
I would like to support the member's point of order.
However, we would then be going back to the old system.
That is one of the dangers of this proposal.
That is all I have to say on the subject.
Mr. Speaker, I have listened carefully to the comments of the members opposite.
He takes a keen interest in the questions he has had placed on the Order Paper.
The Standing Orders now in force have been criticized.
You noted it yourself, Mr. Speaker.
The member voiced criticisms about it.
One of them is length.
The member mentioned it.
The number of questions was also mentioned.
Then there is the 45-day limit.
It has answered three quarters of them.
Those are the facts.
There is something else that concerns me.
The opposition House leader spoke a moment ago.
It is the House committee that is responsible for the Standing Orders.
The Acting Speaker (Mr. McClelland):
The Speaker will give his ruling to the House in due course.
I would like to clarify one point, Mr. Speaker.
I was not attacking the staff.
To suggest such a thing is to deflect criticism and stray from the problem.
The Acting Speaker (Mr. McClelland):
BUSINESS OF THE HOUSE
Mr. Bob Kilger (Stormont-Dundas, Lib.):
Mr. Speaker, I rise on a point of order.
The Acting Speaker (Mr. McClelland):
The House has heard the motion moved by the chief government whip.
The Acting Speaker (Mr. McClelland):
CANADA CUSTOMS AND REVENUE AGENCY ACT
Ms. Jocelyne Girard-Bujold (Jonquiere, BQ):
I must admit that this government worries me.
Yes, it worries me greatly.
Oh no, that would be far too much to ask of it.
But save them from what?
I repeat, this government has no mandate to act this way.
I am well aware that the decision is not an easy one.
Mr. Steve Mahoney (Mississauga-Ouest, Lib.):
The member should do as I did and read the report.
Nowhere in that report is there any mention of tax breaks for millionaire athletes.
Sport is an industry in Canada.
It is a colossal sum and a frightening situation.
I simply wanted to set the record straight.
I did a little research.
This bill has been debated.
Opinions were gathered.
How many clauses do members think the bill contains?
Well, there are 188.
What did some of the people we spoke to have to say?
Nova Scotia's finance minister, Mr. Don Downe, said:
Why fight over who is responsible for collection?
The money then flows to the provinces.
He goes on in his letter:
I am sure members understand me.
Our Prime Minister and the government's policies enjoy incredible support.
Let me share what Stockwell Day has to say:
In short, it is an agency whose creation comes at just the right time.
We were simply gagged when the witnesses came to committee.
He says there was very broad participation.
Robert Spindler of the Canadian Institute of Chartered Accountants said:
Mr. Jason Kenney (Calgary-Sud-Est, Ref.):
Mr. Speaker, the windbag across the way is dumbfounded.
He should know, since I have been following this issue...
The Acting Speaker (Mr. McClelland):
Mr. Speaker, I have contacted the finance ministers of all ten provinces.
I have spoken with several of them about this.
It makes no difference.
Representing Canadians from across Canada, we know this measure is very important.
This bill appears to be a sound measure.
The Acting Speaker (Mr. McClelland):
Mr. Roy Cullen (Etobicoke-Nord, Lib.):
Canadian exports are currently at an unprecedented level.
The volume of activity will continue to grow.
Our commitment to improving service to our clients will remain the same.
Revenue Canada has done its best to meet the new demand.
In any case, few gains could be made that way.
What could be done has been done.
New tasks await us.
Revenue administration is entrusted to no one else.
He will see to it that the agency provides Canadians with an adequate level of service.
I can assure members that taxpayers' personal information will remain confidential.
In fact, quite the opposite is true.
Revenue Canada can already administer harmonized taxes.
A new way of delivering services: that is what this is about.
I encourage all members to support this important legislation.
Mr. Speaker, I rise on a point of order.
The Acting Speaker (Mr. McClelland):
We will proceed in two stages.
Does the House give its unanimous consent for the motion to be moved?
I will try again with the second motion.
COMMITTEES OF THE HOUSE
The Acting Speaker (Mr. McClelland):
Does the House give its unanimous consent for the parliamentary secretary to move the motion?
The Acting Speaker (Mr. McClelland):
Is it the pleasure of the House to adopt the motion?
CANADA CUSTOMS AND REVENUE AGENCY ACT
Ms. Maud Debien (Laval-Est, BQ):
Mr. Speaker, I thank my colleague from the Bloc Quebecois for her comments.
This initiative is not a privatization of Revenue Canada.
On the contrary, there will be more accountability with this new bill.
So this bill improves the situation for citizens.
Mr. Yvan Bernier (Bonaventure-Gaspe-Iles-de-la-Madeleine-Pabok, BQ):
Mr. Speaker, I am not sure I understood my colleague correctly.
Is it to harmonize taxes with provincial laws?
I do not know whether that is what they want to do.
It clearly says that it will be the agency.
Will the minister end up with his driver as his only employee?
Is that what this means?
Do we even need a minister at that point?
Is it really to save money on the backs of public servants?
It strikes me as anti-union legislation.
Twenty per cent of the public servants in Canada will be made to disappear.
Is that the goal?
Mr. Speaker, I thank the Bloc Quebecois member for his comments.
I would add that this is typical of the paranoia of Bloc members.
What this agency will do is bring greater flexibility.
It will eliminate duplication and overlap.
For businesses, there will be a single point of contact.
For the government, it will improve efficiency.
COMMITTEES OF THE HOUSE
Mr. Speaker, I rise on a point of order.
I believe you will find unanimous consent for the following motion.
The Acting Speaker (Mr. McClelland):
The Acting Speaker (Mr. McClelland):
Is it the pleasure of the House to adopt the motion?
CANADA CUSTOMS AND REVENUE AGENCY ACT
Ms. Angela Vautour (Beausejour-Petitcodiac, NDP):
I am a former employee of the Public Service Alliance.
The services will go along with the jobs.
The government has found a way to eliminate 40,000 jobs.
It decided to create an agency.
It is also meant to destroy the unions.
They are not happy either.
They are tired of living with insecurity.
It is not complicated.
It will be the same with the Canada Customs and Revenue Agency.
But the member opposite forgot to say how many jobs would be lost.
Back home in Bouctouche, the employment centre was closed.
Five thousand jobs were cut in the Department of Human Resources Development alone.
People now have to travel to Richibucto, Shediac and Moncton.
The UN has just made the same statement we have been making for years.
The Liberal government says those are 1995 statistics.
How many people qualify for employment insurance today compared to 1995?
How many more children live in poverty today compared to 1995?
I would be ashamed to say those are the 1995 figures.
But today I rise to say that we know the truth.
We know there are people living in poverty.
Because those children live in poverty.
We must help; we must share our wealth.
But no, their only taste is for the banks and the millionaires.
The gap between rich and poor keeps widening.
Meanwhile, the Minister of Finance boasts of having done this and that.
But we hear nothing of all that.
That is the Liberal government's mandate.
What did we get in New Brunswick?
We are taxed at 15 per cent on everything.
Frank McKenna was very proud of having done it.
I think he got a little bonus at the same time.
When an agency is created, we have to look at the reality.
It is another way of taking security away from employees.
It is always like that.
It is the same thing when we talk about cutting taxes.
The opposition parties say they have not been lowered enough.
Is that so hard to understand?
Does that not cause problems?
Does that not cause problems for small and medium-sized businesses?
There is no money circulating any more.
Some members here say that premiums have not been lowered enough.
That does not help that small employer.
I can do a little arithmetic.
Go take a look in the hospitals.
Come see the waiting lists in New Brunswick.
We do not get the same service.
You cannot see a doctor in under 45 minutes in New Brunswick.
I am starting to wonder whether we really have the same services.
It goes on and on.
There is also the issue of pay equity.
There are 40,000 employees in the Department of National Revenue.
The UN has said so.
We are supposed to be a model country.
And now we have our famous toll highway.
It is the same thing for post-secondary education.
That is exactly what we see here with Bill C-43.
They say it is a good thing.
We have heard it on every issue.
It is good for a small group, the little millionaires.
Everything is good for them.
There are members who dare to heckle while I say this.
As long as they criticize, they refuse to admit that this country has a problem.
That is how they can go to bed and sleep at night.
They do not see reality, but one day it will hit home.
It is already starting to hit.
Canadians must realize that the services will go too.
If we lose the employees, we lose the services.
Mr. Grant Hill (Macleod, Ref.):
The Acting Speaker (Mr. McClelland):
A more appropriate way will have to be found to put things to the House.
Those words are simply not acceptable.
Mr. Yvan Bernier (Bonaventure-Gaspe-Iles-de-la-Madeleine-Pabok, BQ):
The second point seems to be the powerlessness of the Minister of Revenue.
I question whether there is any point in keeping such a minister at all.
What is being proposed here is no small matter.
It wants to offer its services to the provinces and even to the municipalities.
Where is the government going with this?
They will be able to put together the bid and manage their own affairs.
It speaks of appointing 15 directors, including a chair and a commissioner.
Well, I will do it.
Here is what subsection 30(1) says, and I quote:
30. (1) The Agency has authority over the following matters:
(a) its general administrative policy;
(c) its real property, within the meaning of section 73;
That is still very worrying.
So it is going to be a bit absurd.
Who will answer these questions on behalf of the shareholders, the Canadian people?
They are very skilled at covering things up.
Should we expect things like that?
But who lives along the water?
Fishers and plant workers.
The slump in the fishery is no longer his fault.
On the one hand, they are trying to get rid of 20 per cent of the public servants.
He is a real weathervane.
No one will understand him any more.
I am quite willing to believe...
The Acting Speaker (Mr. McClelland):
I regret to interrupt the hon. member.
The hon. member for Hamilton-Ouest is rising on a point of order.
BUSINESS OF THE HOUSE
Mr. Stan Keyes (Hamilton-Ouest, Lib.):
Mr. Speaker, I rise on a point of order.
Mr. Speaker, I believe you will find unanimous consent for the following motion.
The Acting Speaker (Mr. McClelland):
Does the House give its consent?
The Acting Speaker (Mr. McClelland):
CANADA CUSTOMS AND REVENUE AGENCY ACT
The Acting Speaker (Mr. McClelland):
Is it the pleasure of the House to adopt the motion?
The Acting Speaker (Mr. McClelland):
All those in favour of the motion will please say yea.
The Acting Speaker (Mr. McClelland):
All those opposed will please say nay.
The Acting Speaker (Mr. McClelland):
In my opinion the nays have it.
And more than five members having risen:
The Acting Speaker (Mr. McClelland):
(The motion was put to a vote and agreed to.)
I declare the motion carried.
(Bill read the third time and passed.)
Few businesses today are without a computer.
It is good economic policy and good social policy.
These figures are telling, and the government is paying close attention to the issue.
The PCPA is helping 380,000 students this year.
We have also increased loan limits by more than 50 per cent.
They had been frozen in 1984 by the previous government.
We have provided more flexible repayment terms.
This bill proposes more than just a tax credit.
Mr. Speaker, this evening I want to congratulate the member for London-Centre-Nord.
Yes, we must help our young people, as our colleague said.
In Quebec, we have been doing so since 1964.
We are very proud of that system.
We will be voting in favour of this bill.
Mr. Paul Forseth (New Westminster-Coquitlam-Burnaby, Ref.):
This new tax measure could cost some $800 million.
It would be a major student assistance measure.
The fund will do nothing to ease the debts students have already incurred.
That would protect Canada's human capital.
They are calling for grants covering 100 per cent of their costs.
However, the country cannot afford that at the moment.
Lack of financial means should not in itself be a barrier.
Ms. Libby Davies (Vancouver-Est, NDP):
Likewise, it cut training funding by $4 billion.
Students are paying the price.
It needs to be said.
These changes have had a truly dramatic impact on students.
Previously, a student could declare bankruptcy two years after finishing their studies.
I must point out that most students do not declare bankruptcy.
Most students do everything they can to repay their student loans.
That all but eliminates this option.
I am glad the member raised this aspect.
That is a measure we must take at the national level.
It is an appalling situation.
Mr. Charlie Power (St. John's-Ouest, PC):
The bill we are discussing today is a good measure.
It is an improvement.
Tuition fee increases are enormous.
It will have the full support of the Conservative caucus.
Mr. Nick Discepola (Vaudreuil-Soulanges, Lib.):
This responsibility belongs to all of us.
We recognize that in a global economy it is necessary to remain competitive.
As a government, we have devoted much of our effort to measures that promote education.
The scholarships will support a wide range of learning and skills acquisition.
Experience is also essential to finding a job.
This is a reality that young job seekers know all too well.
Employers want employees who are qualified, educated and have solid work experience.
For 1998-99, the strategy has a budget of $427 million.
We are aware that student debt is a real problem.
These initiatives include the millennium scholarships I just mentioned.
All of this shows that this government firmly believes in the importance of education.
It is a priority for us.
Mr. Tony Valeri (Parliamentary Secretary to Minister of Finance, Lib.):
Mr. Speaker, I will be very brief.
However, rising debt has put many students and graduates in difficulty.
Opinions are divided on that point.
I want to congratulate the member for introducing this bill.
I too want to congratulate him for bringing forward this measure.
We therefore provide grants to help students avoid going into debt.
In fact, the millennium scholarships are grants as well.
They will be awarded on the basis of need.
Bill C-316 deals with the question of interest.
The member for London-Centre-Nord is right.
Many compulsory ancillary fees must be added to that.
In fact, we can hardly call them ancillary fees any more.
Mr. Gerry Byrne (Parliamentary Secretary to Minister of Natural Resources, Lib.):
Mr. Joe Fontana (London-Centre-Nord, Lib.):
We must look at every measure.
This bill is not perfect.
We know that education falls under provincial jurisdiction.
We know that the provinces set tuition fees.
We know that they set the curriculum.
We know that education is the key to a more prosperous life.
The Acting Speaker (Mr. McClelland):
Mr. Pat Martin (Winnipeg-Centre, NDP) moved:
I am very grateful to him for that.
The subject of the motion is job creation through energy conservation.
I used to build power plants.
It was the kind of project people in my trade coveted.
We were all eager to work on it.
When Ontario cancelled the project, it was a terrible blow to us.
At the time, I represented the Manitoba carpenters' union.
We had 1,200 members eagerly waiting to build Conawapa.
It was something we wanted to do.
When the project was cancelled, people literally did not know what to do.
That led us to look at other ideas.
How could we put these people back to work?
It was a great relief.
That is what led us to this conclusion.
We have supported this idea for many years.
We are training our people for the day this idea is taken up.
In this motion, I point out that the federal government owns 50,000 buildings.
It has taken steps.
No one is trying to say the federal government is doing nothing in this regard.
There is a program called the Federal Buildings Initiative.
The savings are incredible.
I believe it was the Harry Hays Building.
That produced energy savings of $300,000 a year.
But that is just one building.
We created many jobs.
We also saved $300,000.
According to a document entitled A Brighter Future:
They wanted us to do everything with electricity and keep the lights on.
We simply cannot do that any more.
Some of the buildings produce considerable amounts of pollution.
In the trade I represent, the average age is 48.
We could set an example for the rest of the world.
Take the window industry, for example.
We must not forget all the other aspects of energy retrofits.
It is the energy services industry.
Many private financial institutions are already involved.
It is a very high-tech sector.
What are we waiting for?
Manitoba spends $3.2 billion a year on energy.
One would think this solution is brilliant.
That is seven nuclear plants that would have cost $10 billion each.
So it did not have to borrow money.
The Tennessee Valley Hydro Authority can provide similar statistics.
I do not know why we are so slow to follow suit.
Our winters are harsh, and that means enormous energy costs.
I believe it is possible.
People ask me what motivates me.
I always sing the same refrain.
That it was a true partnership between the public and private sectors.
We have the financiers in place.
Let us use your buildings.
Frankly, the project did not go very far.
They want a much better rate of return on their investment.
In Manitoba, we are still dealing with the flooding of South Indian Lake.
Building a hydroelectric project on a river is a radical intervention in an ecosystem.
Such a project should not be undertaken lightly.
Building another plant should be only a last resort.
It would be simply irresponsible of us.
There would be various energy retrofit measures.
A $15 investment will save us $75 a year.
The measures in question are also painfully obvious.
Why are we not taking them?
The last obstacle has now been removed.
It would create hundreds of thousands of jobs at no cost to taxpayers.
Toute l'industrie attend avec impatience de s'y mettre.
Ils attendent impatiemment de faire tout ce travail.
C'est un investissement correct et excellent.
Il y a une distribution plus egale des emplois.
Ce type de projet est nettement plus equitable.
Nous pourrions envisager cette initiative comme un megaprojet s'etendant a tout le pays.
Laissons l'industrie payer pour cela.
Elle veut simplement utiliser les immeubles gouvernementaux.
Nous aurions du le faire il y a longtemps.
J'invite les deputes a appuyer ma demarche.
Mme Carolyn Parrish (secretaire parlementaire du ministre des Travaux publics et des Services gouvernementaux, Lib.):
C'est une excellente motion.
Nous l'appuierons quand viendra le moment de voter.
Le gouvernement federal doit donner l'exemple a cet egard.
Pour ce faire, il vaut mieux agir que parler.
Nous avons aussi 22 793 vehicules qui devraient aussi etre efficients sur le plan energetique.
Au bout du compte, tout le monde est gagnant.
Je dois faire ici un bref aparte.
Je suis mariee a un ingenieur depuis 30 ans.
Son idee de l'efficience energetique prend la forme d'un nouveau pommeau de douche.
La creation d'emplois est une priorite pour le gouvernement.
Je voudrais vous faire part de quelques donnees fort impressionnantes.
Je pense que cela interessera les deputes presents.
Ces projets signifient encore plus d'economies et davantage d'emplois.
Le ministere des Travaux publics et des Services gouvernementaux n'est cependant pas seul.
C'etait un dur et c'est pourquoi on l'a charge de ce projet.
M. David Chatters (Athabasca, Ref.):
On ne peut pas etre contre la vertu.
L'exemple de Kyoto est certainement un des meilleurs.
Cela m'amene a ma premiere preoccupation en ce qui concerne cette motion.
A premiere vue, ce programme semble ideal et il a certainement du merite.
On serait porte a dire: Qu'est-ce qui nous retient?
Pourquoi ne nous lancons-nous pas a fond de train dans ce programme?
Lorsque les programmes ne repondent pas aux attentes, ils sont tenus dans l'ombre.
On ne nous a fourni que trois elements d'information.
La collecte de renseignements a incontestablement ete une tache ardue.
De toute evidence, la motion que propose le NPD aujourd'hui entre dans cette categorie.
Tous les Canadiens partagent cet ideal.
Tous les programmes viennent avec une facture que les Canadiens doivent payer.
Rien n'est gratuit.
Dans sa motion, il preconise egalement le developpement d'une expertise en haute technologie.
M. Pierre de Savoye (Portneuf, BQ):
Les avantages economiques lies a l'efficacite energetique sont incontestables.
A cet egard, d'ailleurs, l'experience du Quebec est probante.
Les dernieres donnees disponibles datent de 1994, mais elles sont tout a fait eloquentes.
Les instruments strategiques sont limites et Ressources naturelles Canada se concentre surtout sur la sensibilisation.
Tout cela, c'est le verificateur general du Canada qui le disait en 1997.
Logiquement, la motion implique que le gouvernement du Canada devrait lancer de nouvelles initiatives.
L'energie est d'abord de competence provinciale.
Cela englobe evidemment le domaine de l'efficacite energetique.
Au Quebec, nous sommes d'ailleurs a l'avant-garde en matiere d'efficacite energetique.
En 1997, le gouvernement du Quebec creait l'Agence de l'efficacite energetique.
Cette agence fait consensus au Quebec.
La loi qui l'a creee a ete adoptee unanimement par l'Assemblee nationale.
Le federal ne devrait pas dedoubler inutilement les efforts des provinces.
Le Bloc quebecois ne questionne pas cette logique.
Or, l'experience du gouvernement du Canada a cet egard, helas, est decevante.
Les initiatives actuelles pourraient etre beaucoup mieux gerees.
Voila donc pourquoi nous sommes hesitants a appuyer cette motion.
Avant d'investir davantage, le gouvernement doit s'assurer de l'efficacite des programmes existants.
M. John Herron (Fundy-Royal, PC):
Le Parti progressiste-conservateur est certainement d'accord avec cela.
Nous assistons a des manifestations meteorologiques extremes que nous pouvons lier a cela.
Nous devons nous pencher sur cette question.
Le gouvernement doit se soucier davantage d'efficacite energetique.
Le Parti reformiste est celui qui conteste toujours les preuves scientifiques des changements climatiques.
Le Parti progressiste conservateur appuiera cette motion presentee par le depute de Winnipeg-Centre.
M. Gerry Byrne (secretaire parlementaire du ministre des Ressources naturelles, Lib.):
Elle merite l'attention soutenue de la Chambre.
Ces programmes du ministere des Ressources naturelles ont eu un impact tres positif.
Le president suppleant (M. McClelland):
M. Yvon Godin (Acadie-Bathurst, NPD):
Ensuite, on leur demande de rembourser des montants de 15 000 $ ou 20 000 $.
Ils disent a la Chambre des choses qui ne sont pas vraies.
Est-ce que c'est devenu de la discrimination envers les femmes?
On n'a pas verifie les fils qui travaillent pour leur pere.
Pourquoi le faire pour la fille qui travaille pour son pere?
Pourquoi les enquetes portent-elles seulement sur la fille ou la mere?
Cela ne se produit pas seulement au Nouveau-Brunswick.
Aux Iles-de-la-Madeleine, beaucoup de femmes travaillent avec leur mari.
Meme les enqueteurs disaient que c'est une question de temps.
Il faut comprendre que la peche est quasiment une industrie familiale.
M. Gerry Byrne (secretaire parlementaire du ministre des Ressources naturelles, Lib.):
Monsieur le President, le depute souleve une question importante.
Ces femmes sont venues presenter au depute une plainte serieuse.
Elles estiment ne pas avoir beneficie de la procedure reguliere.
La encore, ces allegations d'abus viennent directement des membres de la collectivite.
Mme Christiane Gagnon (Quebec, BQ):
C'est trop peu.
Il n'agit pas.
Cent mille personnes en sont exclus, parce qu'ils ont quitte sans motif valable.
Ce que nous demandons, c'est que le regime soit bonifie.
Il repond aussi que le Bloc quebecois veut mettre les gens sur le chomage.
J'aurais eu besoin de quatre minutes de plus.
M. Gerry Byrne (secretaire parlementaire du ministre des Ressources naturelles, Lib.):
Ils ne veulent pas vraiment examiner les faits.
C'est un outil.
Le president suppleant (M. McClelland):
Le depute de Westminster-Coquitlam-Burnaby.
M. Paul Forseth (New Westminster-Coquitlam-Burnaby, Ref.):
Tous ces droits ont ete entraves par le gouvernement.
Nous reclamons la tenue d'une enquete judiciaire pour reparer ce gachis.
Le Parti reformiste reclame une enquete judiciaire independante et les Canadiens l'appuient.
C'etait le preambule de ma question au vice-premier ministre.
En fait, la Charte est surtout la pour restreindre les gouvernements.
Le paragraphe 2b) de la Charte porte sur la liberte d'expression.
Si le gouvernement impose des restrictions a ces pensees, il enfreint cette garantie.
Le paragraphe 2c) porte sur la liberte d'assemblee.
Les droits d'un accuse ne peuvent etre restreints par crainte d'un danger eventuel.
M. Gerry Byrne (secretaire parlementaire du ministre des Ressources naturelles, Lib.):
Cette loi est entree en vigueur pour permettre dans ce cas d'examiner cette plainte.
C'est une demande tres simple.
Nous n'avons aucun role actif.
Ce processus a ete enonce par le Parlement de facon non partisane.
Le president suppleant (M. McClelland):
La motion portant que la Chambre s'ajourne maintenant est reputee adoptee.
(La seance est levee a 20 h 02.)
|
{"hexsha": "901951c855e7c3d42c361d4e6985205f5dfab1f9", "size": 73118, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "data/Hansard/Training/hansard.36.1.house.debates.168.f", "max_stars_repo_name": "j1ai/Canadian_Hansards_Neural_Machine_Translation", "max_stars_repo_head_hexsha": "554666a89090fc1b1d1fb83601a2e9da132e6ad0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "data/Hansard/Training/hansard.36.1.house.debates.168.f", "max_issues_repo_name": "j1ai/Canadian_Hansards_Neural_Machine_Translation", "max_issues_repo_head_hexsha": "554666a89090fc1b1d1fb83601a2e9da132e6ad0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "data/Hansard/Training/hansard.36.1.house.debates.168.f", "max_forks_repo_name": "j1ai/Canadian_Hansards_Neural_Machine_Translation", "max_forks_repo_head_hexsha": "554666a89090fc1b1d1fb83601a2e9da132e6ad0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.6640378549, "max_line_length": 123, "alphanum_fraction": 0.7812166635, "num_tokens": 20938}
|
#coding: utf-8
import numpy as np
import pylab
from sklearn.datasets import load_digits
# Load scikit-learn's handwritten digits data
# 1797 samples, 8x8 pixels each
digits = load_digits()
# Plot the first 10 samples
# digits.images[i] : the i-th image (8x8 pixels)
# digits.target[i] : the class of the i-th image (digits, so 0-9)
for index, (image, label) in enumerate(list(zip(digits.images, digits.target))[:10]):
pylab.subplot(2, 5, index + 1)
pylab.axis('off')
pylab.imshow(image, cmap=pylab.cm.gray_r, interpolation='nearest')
pylab.title('%i' % label)
pylab.show()
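# A side note added here, not part of the original script: the pylab interface
# is discouraged in current matplotlib. A minimal equivalent sketch using
# matplotlib.pyplot directly (a hypothetical refactor, same data) would be:
#
#   import matplotlib.pyplot as plt
#   fig, axes = plt.subplots(2, 5)
#   for ax, image, label in zip(axes.ravel(), digits.images[:10], digits.target[:10]):
#       ax.set_axis_off()
#       ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
#       ax.set_title('%i' % label)
#   plt.show()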
|
{"hexsha": "5b9a41dbdd4e891309fab20f2511eb52c6e14bb4", "size": 511, "ext": "py", "lang": "Python", "max_stars_repo_path": "ch5/plot_digits.py", "max_stars_repo_name": "aidiary/PRML", "max_stars_repo_head_hexsha": "db2dfc10bd39dc5649528d3778aa5fffb283186b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2016-04-17T16:07:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-29T09:19:12.000Z", "max_issues_repo_path": "ch5/plot_digits.py", "max_issues_repo_name": "sojvai/PRML", "max_issues_repo_head_hexsha": "db2dfc10bd39dc5649528d3778aa5fffb283186b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch5/plot_digits.py", "max_forks_repo_name": "sojvai/PRML", "max_forks_repo_head_hexsha": "db2dfc10bd39dc5649528d3778aa5fffb283186b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 55, "max_forks_repo_forks_event_min_datetime": "2016-03-12T15:03:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-09T02:34:54.000Z", "avg_line_length": 26.8947368421, "max_line_length": 79, "alphanum_fraction": 0.7240704501, "include": true, "reason": "import numpy", "num_tokens": 197}
|
# This file was generated, do not modify it. # hide
using MLJ, RDatasets, PrettyPrinting
MLJ.color_off() # hide
@load DecisionTreeClassifier pkg=DecisionTree
carseats = dataset("ISLR", "Carseats")
first(carseats, 3) |> pretty
|
{"hexsha": "12f96bce5e97ff7423d550688668d86137529d9b", "size": 227, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "__site/assets/isl/lab-8/code/ex1.jl", "max_stars_repo_name": "ven-k/MLJTutorials", "max_stars_repo_head_hexsha": "42151c8a96ad701aeaf763d53c8b7c6689eb6e8d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "__site/assets/isl/lab-8/code/ex1.jl", "max_issues_repo_name": "ven-k/MLJTutorials", "max_issues_repo_head_hexsha": "42151c8a96ad701aeaf763d53c8b7c6689eb6e8d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "__site/assets/isl/lab-8/code/ex1.jl", "max_forks_repo_name": "ven-k/MLJTutorials", "max_forks_repo_head_hexsha": "42151c8a96ad701aeaf763d53c8b7c6689eb6e8d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.375, "max_line_length": 51, "alphanum_fraction": 0.7577092511, "num_tokens": 66}
|
! { dg-do compile }
! PR36724 - ICE on pointer to substring
! testcase contributed by Loukas Peristeras.
character(LEN=132), target :: line
character(LEN=1), pointer :: t
read(*,'(A)') line
t=>line(1:1)
end
|
{"hexsha": "054a29d56bbfd043d5908424c64a4ff7378ee02a", "size": 217, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "validation_tests/llvm/f18/gfortran.dg/pointer_to_substring.f90", "max_stars_repo_name": "brugger1/testsuite", "max_stars_repo_head_hexsha": "9b504db668cdeaf7c561f15b76c95d05bfdd1517", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 488, "max_stars_repo_stars_event_min_datetime": "2015-01-09T08:54:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T07:15:46.000Z", "max_issues_repo_path": "tests/CompileTests/Fortran_tests/gfortranTestSuite/gfortran.dg/pointer_to_substring.f90", "max_issues_repo_name": "sujankh/rose-matlab", "max_issues_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 174, "max_issues_repo_issues_event_min_datetime": "2015-01-28T18:41:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T16:51:05.000Z", "max_forks_repo_path": "tests/CompileTests/Fortran_tests/gfortranTestSuite/gfortran.dg/pointer_to_substring.f90", "max_forks_repo_name": "sujankh/rose-matlab", "max_forks_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 146, "max_forks_repo_forks_event_min_datetime": "2015-04-27T02:48:34.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-04T07:32:53.000Z", "avg_line_length": 19.7272727273, "max_line_length": 44, "alphanum_fraction": 0.6589861751, "num_tokens": 70}
|
import glob
from os.path import abspath, dirname
from pathlib import Path
import numpy as np
import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset
data_path = dirname(dirname(abspath(__file__))) + "/data/corruptmnist"
train_data = data_path + "/train/train_merged.npz"
test_data = data_path + "/test/test.npz"
file_list = glob.glob(data_path + "/train/*")
data_all = [np.load(fname) for fname in file_list]
merged_data = {}
for data in data_all:
    for k, v in data.items():
        merged_data[k] = v
np.savez(data_path + "/train/train_merged.npz", **merged_data)
def mnist():
    # load the merged corrupted MNIST train archive and the test archive
train = np.load(train_data)
test = np.load(test_data)
return train, test
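# A usage sketch (added, not in the original module): the torch imports above
# are unused by mnist(), but one plausible way to consume its output is to
# wrap the arrays in a TensorDataset. The key names "images" and "labels" are
# an assumption about the .npz contents, not something this file guarantees.
#
#   train, test = mnist()
#   train_set = TensorDataset(
#       torch.from_numpy(train["images"]).float(),
#       torch.from_numpy(train["labels"]).long(),
#   )
#   train_loader = DataLoader(train_set, batch_size=64, shuffle=True)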
|
{"hexsha": "cadd0925a06d445ef51d60c6fc8672ba3784bc17", "size": 779, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/models/data.py", "max_stars_repo_name": "samytessier/samy_mlops", "max_stars_repo_head_hexsha": "f52592d3b63d8fc11d0ea6cd2f51c80c4858ef4f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/models/data.py", "max_issues_repo_name": "samytessier/samy_mlops", "max_issues_repo_head_hexsha": "f52592d3b63d8fc11d0ea6cd2f51c80c4858ef4f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/models/data.py", "max_forks_repo_name": "samytessier/samy_mlops", "max_forks_repo_head_hexsha": "f52592d3b63d8fc11d0ea6cd2f51c80c4858ef4f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-20T00:56:36.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-20T00:56:36.000Z", "avg_line_length": 27.8214285714, "max_line_length": 71, "alphanum_fraction": 0.7073170732, "include": true, "reason": "import numpy", "num_tokens": 189}
|
"""Convenience functions."""
import inspect
import sys
import numpy as np
from scipy.integrate import quad
from scipy.stats import chi2, norm
def pprint(*s, output=True):
"""Hack to make for more informative print statements."""
f = inspect.stack()[1][1].split('/')[-1]
m = '{:13.13} |'.format(f)
if output:
print(m, *s)
else:
lines = []
for e in s:
            lines.append('\n'.join([f'{m} {line}' for line in e.split('\n')]))
return '\n'.join(lines)
def progressbar(it, prefix="", size=69, file=sys.stdout):
"""Progressbar from adapted from Stack Overflow.
Args:
it (generator): range of values
prefix (str): Words displayed before the progress bar
size (int): Display width
file: Where to direct output
    Yields:
        Items from `it`, printing a progress bar to `file` as a side effect.
"""
count = len(it)
size -= len(prefix)
def show(j):
x = int((size)*j/count)
        print(f'{prefix} [{"#"*x}{"."*(size-x)}] {j}/{count}', file=file)
show(0)
for i, item in enumerate(it):
yield item
show(i+1)
file.flush()
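# Example usage (illustrative, not part of the original module): wrap any
# sized iterable; the bar redraws once per yielded item.
#
#   for item in progressbar(range(100), prefix="sim"):
#       pass  # do per-item work here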
def hist(parameter, bin_type='lin', n_bins=25, norm='max', edges=True,
bins=None):
"""Bin up a parameter either in a lin or log space.
Why is this not a standard option in numpy or matplotlib?
Args:
parameter (array): To be binned
bin_type (str): Either 'lin', 'log' or 'ln'
n_bins (int): Number of bins. Can be overriden internally
        norm (str): Normalisation: 'max', 'prob', or anything else for none
        edges (bool): Whether to pad the outer bins with zero-height edge bins
        bins (array): Custom bin edges; overrides bin_type and n_bins if given
Returns:
tuple: bin centers, values per bin
"""
if isinstance(parameter, list):
parameter = np.array(parameter)
if len(parameter) == 0:
return np.nan, np.nan
# Drop NaN-values
parameter = parameter[~(np.isnan(parameter) | np.isinf(parameter))]
# Determine number of bins
if n_bins != 25:
pass
elif len(parameter) < 50:
n_bins = 15
elif len(parameter) > 500:
n_bins = 50
# Determine type of binning
if bin_type == 'lin':
_bins = n_bins
elif bin_type == 'log':
min_f = np.log10(np.min(parameter[parameter != 0]))
max_f = np.log10(max(parameter))
_bins = np.logspace(min_f, max_f, n_bins)
elif bin_type == 'ln':
min_f = np.log(np.min(parameter[parameter != 0]))
max_f = np.log(max(parameter))
_bins = np.logspace(min_f, max_f, n_bins, base=np.e)
# Allow for custom bins
if bins is not None:
_bins = bins
# Allow for probability weighting
weights = None
if norm == 'prob':
weights = np.ones(len(parameter)) / len(parameter)
# Bin
n, bin_edges = np.histogram(parameter, bins=_bins, weights=weights)
if norm == 'max':
n = n/max(n) # Normalise
# Centre bins
bins = (bin_edges[:-1] + bin_edges[1:]) / 2
# Ensure there are edges on the outer bins of the histograms
if edges:
if bin_type == 'lin':
bin_dif = np.diff(bins)[-1]
bins = np.insert(bins, 0, bins[0] - bin_dif)
bins = np.insert(bins, len(bins), bins[-1] + bin_dif)
n = np.insert(n, 0, 0)
n = np.insert(n, len(n), 0)
else:
bin_dif = np.diff(np.log10(bins))[-1]
bins = np.insert(bins, 0, 10**(np.log10(bins[0])-bin_dif))
bins = np.insert(bins, len(bins), 10**(np.log10(bins[-1])+bin_dif))
n = np.insert(n, 0, 0)
n = np.insert(n, len(n), 0)
return bins, n
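# A quick usage sketch (added for illustration): bin log-normally distributed
# samples on a log axis and normalise bin heights to the maximum.
#
#   samples = np.random.lognormal(mean=0, sigma=1, size=1000)
#   centres, heights = hist(samples, bin_type='log', n_bins=20, norm='max')
#   # centres and heights can then be passed to e.g. plt.step(centres, heights)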
def poisson_interval(k, sigma=1):
"""
Use chi-squared info to get the poisson interval.
Given a number of observed events, which range of observed events would
have been just as likely given a particular interval?
Based off https://stackoverflow.com/questions/14813530/
poisson-confidence-interval-with-numpy
"""
gauss = norm(0, 1).pdf
a = 1 - quad(gauss, -sigma, sigma, limit=1000)[0]
low, high = (chi2.ppf(a/2, 2*k) / 2, chi2.ppf(1-a/2, 2*k + 2) / 2)
if k == 0:
low = 0.0
return low, high
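# Example (added for illustration): for k = 10 observed events at sigma = 1,
# the interval returned is roughly (6.9, 14.3), i.e. expected rates in that
# range are consistent with the observation at about the 68% level.
#
#   low, high = poisson_interval(10, sigma=1)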
|
{"hexsha": "a2f900d1a0fcb2ed6e3193065d2d94e0acb147ea", "size": 4119, "ext": "py", "lang": "Python", "max_stars_repo_path": "frbpoppy/misc.py", "max_stars_repo_name": "macrocosme/frbpoppy", "max_stars_repo_head_hexsha": "b23a0c1dbf4e6559f26e79994147ed2a9352ffc7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 24, "max_stars_repo_stars_event_min_datetime": "2019-02-20T09:59:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-26T15:20:35.000Z", "max_issues_repo_path": "frbpoppy/misc.py", "max_issues_repo_name": "macrocosme/frbpoppy", "max_issues_repo_head_hexsha": "b23a0c1dbf4e6559f26e79994147ed2a9352ffc7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 38, "max_issues_repo_issues_event_min_datetime": "2017-03-16T09:03:49.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-19T02:34:41.000Z", "max_forks_repo_path": "frbpoppy/misc.py", "max_forks_repo_name": "macrocosme/frbpoppy", "max_forks_repo_head_hexsha": "b23a0c1dbf4e6559f26e79994147ed2a9352ffc7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2019-08-20T01:19:10.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-15T11:57:25.000Z", "avg_line_length": 27.644295302, "max_line_length": 79, "alphanum_fraction": 0.5746540422, "include": true, "reason": "import numpy,from scipy", "num_tokens": 1154}
|
import argparse
import copy
import json
import logging
import os
import random
import sys
import time
import cv2
import numpy as np
import PIL
import torch
import torch.nn as nn
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from PIL import Image
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts
from torch.utils.tensorboard import SummaryWriter
from angle import generate_angle
from slimmable_resnet20 import (SwitchableBatchNorm2d, max_arc_rep,
mutableResNet20)
from utils import (ArchLoader, AvgrageMeter, CrossEntropyLabelSmooth, accuracy,
get_lastest_model, get_num_correct, get_parameters,
save_checkpoint)
# os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2"
writer = SummaryWriter("./runs/%s-%05d" %
(time.strftime("%m-%d", time.localtime()), random.randint(0, 100)))
def get_args():
parser = argparse.ArgumentParser("ResNet20-Cifar100-oneshot")
parser.add_argument('--warmup', default=0, type=int,
help="warmup weight of the whole channels")
parser.add_argument('--total-iters', default=1000, type=int)
parser.add_argument('--num_workers', default=4, type=int)
parser.add_argument(
'--path', default="Track1_final_archs.json", help="path for json arch files")
parser.add_argument('--batch-size', type=int,
default=1280, help='batch size')
parser.add_argument('--learning-rate', type=float,
default=0.0447, help='init learning rate')
parser.add_argument('--momentum', type=float, default=0.9, help='momentum')
parser.add_argument('--weight-decay', type=float,
default=4e-5, help='weight decay')
parser.add_argument('--label-smooth', type=float,
default=0.1, help='label smoothing')
parser.add_argument('--save', type=str, default='./models',
help='path for saving trained models')
parser.add_argument('--save-interval', type=int,
default=100, help='report frequency')
parser.add_argument('--eval', default=False, action='store_true')
parser.add_argument('--eval-resume', type=str,
default='./snet_detnas.pkl', help='path for eval model')
parser.add_argument('--auto-continue', type=bool,
default=False, help='report frequency')
args = parser.parse_args()
return args
def main():
args = get_args()
# archLoader
arch_loader = ArchLoader(args.path)
# Log
log_format = '[%(asctime)s] %(message)s'
logging.basicConfig(stream=sys.stdout, level=logging.INFO,
format=log_format, datefmt='%m-%d %I:%M:%S')
t = time.time()
local_time = time.localtime(t)
if not os.path.exists('./log'):
os.mkdir('./log')
fh = logging.FileHandler(os.path.join(
'log/train-{}-{:02}-{:02}-{:.3f}'.format(local_time.tm_year % 2000, local_time.tm_mon, local_time.tm_mday, t)))
fh.setFormatter(logging.Formatter(log_format))
logging.getLogger().addHandler(fh)
use_gpu = False
if torch.cuda.is_available():
use_gpu = True
kwargs = {'num_workers': 4, 'pin_memory': True}
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(root="./data", train=True, download=True,
transform=transforms.Compose([
transforms.Resize(32),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.batch_size, shuffle=True, **kwargs)
val_loader = torch.utils.data.DataLoader(
datasets.MNIST(root="./data", train=False, transform=transforms.Compose([
transforms.Resize(32),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.batch_size, shuffle=False, **kwargs)
model = mutableResNet20(num_classes=10)
base_model = copy.deepcopy(model)
logging.info('load model successfully')
optimizer = torch.optim.SGD(get_parameters(model),
lr=args.learning_rate,
momentum=args.momentum,
weight_decay=args.weight_decay)
    # NOTE: the original passed 1000 classes here, which does not match the
    # 10-class MNIST head built above; 10 is used so the smoothing target
    # matches the model output.
    criterion_smooth = CrossEntropyLabelSmooth(10, 0.1)
if use_gpu:
model = nn.DataParallel(model)
loss_function = criterion_smooth.cuda()
device = torch.device("cuda")
base_model.cuda()
else:
loss_function = criterion_smooth
device = torch.device("cpu")
# scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer,
# lambda step: (1.0-step/args.total_iters) if step <= args.total_iters else 0, last_epoch=-1)
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=5)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
# optimizer, T_max=200)
model = model.to(device)
all_iters = 0
if args.auto_continue:
lastest_model, iters = get_lastest_model()
if lastest_model is not None:
all_iters = iters
checkpoint = torch.load(
lastest_model, map_location=None if use_gpu else 'cpu')
model.load_state_dict(checkpoint['state_dict'], strict=True)
logging.info('load from checkpoint')
for i in range(iters):
scheduler.step()
    # parameter setup: stash the shared training objects on args
args.optimizer = optimizer
args.loss_function = loss_function
args.scheduler = scheduler
args.train_loader = train_loader
args.val_loader = val_loader
if args.eval:
if args.eval_resume is not None:
checkpoint = torch.load(
args.eval_resume, map_location=None if use_gpu else 'cpu')
model.load_state_dict(checkpoint, strict=True)
validate(model, device, args, all_iters=all_iters,
arch_loader=arch_loader)
exit(0)
# warmup weights
if args.warmup is not None:
logging.info("begin warmup weights")
while all_iters < args.warmup:
all_iters = train_supernet(
model, device, args, bn_process=False, all_iters=all_iters)
validate(model, device, args, all_iters=all_iters,
arch_loader=arch_loader)
while all_iters < args.total_iters:
all_iters = train_subnet(model, base_model, device, args, bn_process=False,
all_iters=all_iters, arch_loader=arch_loader)
logging.info("validate iter {}".format(all_iters))
if all_iters % 9 == 0:
validate(model, device, args, all_iters=all_iters,
arch_loader=arch_loader)
validate(model, device, args, all_iters=all_iters,
arch_loader=arch_loader)
def adjust_bn_momentum(model, iters):
for m in model.modules():
if isinstance(m, nn.BatchNorm2d):
m.momentum = 1 / iters
elif isinstance(m, SwitchableBatchNorm2d):
m.momentum = 1 / iters
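# Added note: with momentum = 1/iters, PyTorch's running-stat update
#   running = (1 - momentum) * running + momentum * batch
# reduces to a cumulative average of all batch statistics seen so far
# (e.g. after 4 calls each batch mean carries weight 1/4), a common
# BatchNorm calibration trick when training weight-sharing supernets.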
def train_supernet(model, device, args, *, bn_process=False, all_iters=None):
logging.info("start warmup training...")
optimizer = args.optimizer
loss_function = args.loss_function
scheduler = args.scheduler
train_loader = args.train_loader
t1 = time.time()
model.train()
if bn_process:
adjust_bn_momentum(model, all_iters)
all_iters += 1
d_st = time.time()
# print(model)
total_correct = 0
for ii, (data, target) in enumerate(train_loader):
target = target.type(torch.LongTensor)
data, target = data.to(device), target.to(device)
data_time = time.time() - d_st
optimizer.zero_grad()
        # one batch: forward pass through the maximal architecture
output = model(data, max_arc_rep)
loss = loss_function(output, target)
loss.backward()
for p in model.parameters():
if p.grad is not None and p.grad.sum() == 0:
p.grad = None
total_correct += get_num_correct(output, target)
torch.nn.utils.clip_grad_norm_(model.parameters(), 5)
if ii % 2 == 0:
acc1, acc5 = accuracy(output, target, topk=(1, 5))
logging.info("warmup batch acc1: {:.6f} lr: {:.6f}".format(
acc1.item(), scheduler.get_last_lr()[0]))
writer.add_scalar("WTrain/Loss", loss.item(),
all_iters * len(train_loader) * args.batch_size+ii)
writer.add_scalar("WTrain/acc1", acc1.item(),
all_iters * len(train_loader) * args.batch_size+ii)
writer.add_scalar("WTrain/acc5", acc5.item(),
all_iters * len(train_loader) * args.batch_size+ii)
optimizer.step()
writer.add_scalar("Accuracy", total_correct /
(len(train_loader)*args.batch_size), all_iters)
writer.add_histogram("first_conv.weight",
model.module.first_conv.weight, all_iters)
writer.add_histogram(
"layer1[0].weight", model.module.layer1[0].body[0].weight, all_iters)
scheduler.step()
top1, top5 = accuracy(output, target, topk=(1, 5))
if True:
printInfo = 'TRAIN EPOCH {}: lr = {:.6f},\tloss = {:.6f},\t'.format(all_iters, scheduler.get_last_lr()[0], loss.item()) + \
'Top-1 acc = {:.5f}%,\t'.format(top1.item()) + \
'Top-5 acc = {:.5f}%,\t'.format(top5.item()) + \
'data_time = {:.5f},\ttrain_time = {:.5f}'.format(
data_time, (time.time() - t1))
logging.info(printInfo)
t1 = time.time()
if all_iters % args.save_interval == 0:
save_checkpoint({
'state_dict': model.state_dict(),
}, all_iters)
return all_iters
def train_subnet(model, base_model, device, args, *, bn_process=False, all_iters=None, arch_loader=None):
logging.info("start architecture training...")
assert arch_loader is not None
optimizer = args.optimizer
loss_function = args.loss_function
scheduler = args.scheduler
train_loader = args.train_loader
t1 = time.time()
model.train()
if bn_process:
adjust_bn_momentum(model, all_iters)
all_iters += 1
d_st = time.time()
total_correct = 0
for data, target in train_loader:
target = target.type(torch.LongTensor)
data, target = data.to(device), target.to(device)
data_time = time.time() - d_st
optimizer.zero_grad()
# fair_arc_list = arch_loader.generate_fair_batch()
# fair_arc_list = arch_loader.get_random_batch(25)
fair_arc_list = arch_loader.generate_niu_fair_batch()
for ii, arc in enumerate(fair_arc_list):
            # every architecture in the fair batch
output = model(data, arch_loader.convert_list_arc_str(arc))
loss = loss_function(output, target)
loss.backward()
for p in model.parameters():
if p.grad is not None and p.grad.sum() == 0:
p.grad = None
total_correct += get_num_correct(output, target)
if ii % 7 == 0:
acc1, acc5 = accuracy(output, target, topk=(1, 5))
angle = generate_angle(base_model, model.module, arch_loader.convert_list_arc_str(arc))
logging.info(
"epoch: {:4d} \t acc1:{:.4f} \t acc5:{:.4f} \t loss:{:.4f} \t angle:{:.3f}".format(all_iters, acc1.item(), acc5.item(), loss.item(), angle.item()))
writer.add_scalar("Train/Loss", loss.item(),
all_iters * len(train_loader) * args.batch_size+ii)
writer.add_scalar("Train/acc1", acc1.item(),
all_iters * len(train_loader) * args.batch_size+ii)
writer.add_scalar("Train/acc5", acc5.item(),
all_iters * len(train_loader) * args.batch_size+ii)
writer.add_scalar("Angle", angle.item(
), all_iters * len(train_loader) * args.batch_size+ii)
# 16 when using Fair sampling strategy
writer.add_scalar("Accuracy", total_correct /
(len(train_loader) * args.batch_size * 16), all_iters)
writer.add_histogram("first_conv.weight",
model.module.first_conv.weight, all_iters)
writer.add_histogram(
"layer1[0].weight", model.module.layer1[0].body[0].weight, all_iters)
torch.nn.utils.clip_grad_norm_(model.parameters(), 5)
optimizer.step()
scheduler.step()
if all_iters % args.save_interval == 0:
save_checkpoint({
'state_dict': model.state_dict(),
}, all_iters)
return all_iters
def validate(model, device, args, *, all_iters=None, arch_loader=None):
assert arch_loader is not None
objs = AvgrageMeter()
top1 = AvgrageMeter()
top5 = AvgrageMeter()
loss_function = args.loss_function
val_loader = args.val_loader
model.eval()
max_val_iters = 250
t1 = time.time()
result_dict = {}
arch_dict = arch_loader.get_part_dict()
with torch.no_grad():
for ii, (key, value) in enumerate(arch_dict.items()):
for data, target in val_loader:
target = target.type(torch.LongTensor)
data, target = data.to(device), target.to(device)
output = model(data, value["arch"])
loss = loss_function(output, target)
acc1, acc5 = accuracy(output, target, topk=(1, 5))
n = data.size(0)
objs.update(loss.item(), n)
top1.update(acc1.item(), n)
top5.update(acc5.item(), n)
                if ii % 100 == 0:
logging.info(
"validate acc:{:.6f} iter:{}".format(top1.avg/100, ii))
writer.add_scalar("Val/Loss", loss.item(),
all_iters * len(val_loader) * args.batch_size+ii)
writer.add_scalar("Val/acc1", acc1.item(),
all_iters * len(val_loader) * args.batch_size+ii)
writer.add_scalar("Val/acc5", acc5.item(),
all_iters * len(val_loader) * args.batch_size+ii)
result_dict[key] = top1.avg
logInfo = 'TEST Iter {}: loss = {:.6f},\t'.format(all_iters, objs.avg) + \
'Top-1 acc = {:.6f},\t'.format(top1.avg) + \
'Top-5 acc = {:.6f},\t'.format(top5.avg) + \
'val_time = {:.6f}'.format(time.time() - t1)
logging.info(logInfo)
logging.info("RESULTS")
for ii, (key, value) in enumerate(result_dict.items()):
logging.info("{: ^10} \t {:.6f}".format(key, value))
if ii > 10:
break
logging.info("E N D")
if __name__ == "__main__":
main()
|
{"hexsha": "2160a5813ab14fd6ec3c47128ae35c903f45e2fc", "size": 15054, "ext": "py", "lang": "Python", "max_stars_repo_path": "NAS/single-path-one-shot/src/MNIST/train.py", "max_stars_repo_name": "naviocean/SimpleCVReproduction", "max_stars_repo_head_hexsha": "61b43e3583977f42e6f91ef176ec5e1701e98d33", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 923, "max_stars_repo_stars_event_min_datetime": "2020-01-11T06:36:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T00:26:57.000Z", "max_issues_repo_path": "NAS/single-path-one-shot/src/MNIST/train.py", "max_issues_repo_name": "Twenty3hree/SimpleCVReproduction", "max_issues_repo_head_hexsha": "9939f8340c54dbd69b0017cecad875dccf428f26", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 25, "max_issues_repo_issues_event_min_datetime": "2020-02-27T08:35:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-25T08:54:19.000Z", "max_forks_repo_path": "NAS/single-path-one-shot/src/MNIST/train.py", "max_forks_repo_name": "Twenty3hree/SimpleCVReproduction", "max_forks_repo_head_hexsha": "9939f8340c54dbd69b0017cecad875dccf428f26", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 262, "max_forks_repo_forks_event_min_datetime": "2020-01-02T02:19:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-23T04:56:16.000Z", "avg_line_length": 35.0093023256, "max_line_length": 167, "alphanum_fraction": 0.5905407201, "include": true, "reason": "import numpy", "num_tokens": 3404}
|
from typing import Tuple
import numpy as np
def signbit_convert(data: int, maxbit: int) -> int:
if (data & (1 << (maxbit - 1))):
data -= (1 << maxbit)
return data
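# Worked example (added for illustration): with maxbit = 4 the raw values
# 0..15 map to two's-complement integers in [-8, 7], e.g.
#   signbit_convert(0b0111, 4) ->  7
#   signbit_convert(0b1111, 4) -> -1
#   signbit_convert(0b1000, 4) -> -8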
def get_next_bits(buf: bytes, current_data: int, idx: int, bits_left: int) -> Tuple[int, int, int]:
h_data = buf[idx]
h_data += (buf[idx+1] << 8)
current_data += h_data << bits_left
idx += 2
bits_left += 16
return current_data, idx, bits_left
def unpack_float_acphy(nbits: int, autoscale: int, shft: int, fmt: int, nman: int, nexp: int, nfft: int, H: np.ndarray) -> np.ndarray:
k_tof_unpack_sgn_mask = (1<<31)
He = [0] * nfft
iq_mask = (1 << (nman - 1)) - 1
e_mask = (1 << nexp) - 1
e_p = (1 << (nexp - 1))
sgnr_mask = (1 << (nexp + 2*nman - 1))
sgni_mask = (sgnr_mask >> nman)
e_zero = -nman
out = np.zeros((nfft*2, 1), dtype=np.int64)
n_out = (nfft << 1)
e_shift = 1
maxbit = -e_p
for i in range(len(H)):
vi = ((H[i] >> (nexp + nman)) & iq_mask)
vq = ((H[i] >> nexp) & iq_mask)
e = (H[i] & e_mask)
if e >= e_p:
e -= (e_p << 1)
He[i] = e
x = vi | vq
if autoscale and x:
m = 0xffff0000
b = 0xffff
s = 16
while s > 0:
if x & m:
e += s
x >>= s
s >>= 1
m = (m >> s) & b
b >>= s
if e > maxbit:
maxbit = e
if H[i] & sgnr_mask:
vi |= k_tof_unpack_sgn_mask
if H[i] & sgni_mask:
vq |= k_tof_unpack_sgn_mask
out[i<<1] = vi
out[(i<<1)+1] = vq
shft = nbits - maxbit
for i in range(n_out):
e = He[(i >> e_shift)] + shft
vi = out[i]
sgn = 1
if vi & k_tof_unpack_sgn_mask:
sgn = -1
vi &= ~k_tof_unpack_sgn_mask
if e < e_zero:
vi = 0
elif e < 0:
e = -e
vi = (vi >> e)
else:
vi = (vi << e)
out[i] = sgn * vi
return out
|
{"hexsha": "1c822db7ff2869215b1ed38f6ca0638dccd2d53b", "size": 2206, "ext": "py", "lang": "Python", "max_stars_repo_path": "CSIKit/util/byteops.py", "max_stars_repo_name": "serrhini/CSIKit", "max_stars_repo_head_hexsha": "1cc9ecb2c0444622b258e9de48841366cbbc667b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CSIKit/util/byteops.py", "max_issues_repo_name": "serrhini/CSIKit", "max_issues_repo_head_hexsha": "1cc9ecb2c0444622b258e9de48841366cbbc667b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CSIKit/util/byteops.py", "max_forks_repo_name": "serrhini/CSIKit", "max_forks_repo_head_hexsha": "1cc9ecb2c0444622b258e9de48841366cbbc667b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.2828282828, "max_line_length": 130, "alphanum_fraction": 0.4261106074, "include": true, "reason": "import numpy", "num_tokens": 718}
|
[STATEMENT]
lemma impE[elim!]:
assumes "eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')"
shows "eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
proof cases
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
2. \<not> ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
assume "(\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)"
[PROOF STATE]
proof (state)
this:
\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
goal (2 subgoals):
1. ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
2. \<not> ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
proof cases
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
2. \<not> ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
assume "\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>"
[PROOF STATE]
proof (state)
this:
\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
goal (2 subgoals):
1. ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
2. \<not> ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
goal (2 subgoals):
1. ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
2. \<not> ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
from \<open>eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')\<close>
[PROOF STATE]
proof (chain)
picking this:
eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')
[PROOF STEP]
have "eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)"
[PROOF STATE]
proof (prove)
using this:
eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')
goal (1 subgoal):
1. eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
[PROOF STEP]
using imp_def
[PROOF STATE]
proof (prove)
using this:
eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')
?\<gamma> \<longrightarrow>\<^sup>b ?\<gamma>' \<equiv> \<lambda>t n. ?\<gamma> t n \<longrightarrow> ?\<gamma>' t n
goal (1 subgoal):
1. eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
goal (2 subgoals):
1. ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
2. \<not> ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
[PROOF STEP]
have "\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>)
\<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>)"
[PROOF STATE]
proof (prove)
using this:
\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
goal (1 subgoal):
1. \<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>)
[PROOF STEP]
using validCE_act[where \<gamma>="\<lambda> t n. \<gamma> t n \<longrightarrow> \<gamma>' t n"]
[PROOF STATE]
proof (prove)
using this:
\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
\<lbrakk>\<exists>i\<ge>?n. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>; eval ?cid ?t ?t' ?n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)\<rbrakk> \<Longrightarrow> \<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (the_enat \<langle>?cid #\<^bsub>enat ?n\<^esub>inf_llist ?t\<rangle>) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (the_enat \<langle>?cid #\<^bsub>enat ?n\<^esub>inf_llist ?t\<rangle>)
goal (1 subgoal):
1. \<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>)
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>)
goal (2 subgoals):
1. ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
2. \<not> ?P \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
with \<open>\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>\<close>
[PROOF STATE]
proof (chain)
picking this:
\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>)
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>)
goal (1 subgoal):
1. eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
using eval_def
[PROOF STATE]
proof (prove)
using this:
\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (the_enat \<langle>cid #\<^bsub>enat n\<^esub>inf_llist t\<rangle>)
eval ?cid ?t ?t' ?n ?\<gamma> \<equiv> (\<exists>i\<ge>?n. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>) \<and> ?\<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (the_enat \<langle>?cid #\<^bsub>enat ?n\<^esub>inf_llist ?t\<rangle>) \<or> (\<exists>i. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>) \<and> \<not> (\<exists>i'\<ge>?n. \<parallel>?cid\<parallel>\<^bsub>?t i'\<^esub>) \<and> ?\<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (\<^bsub>?cid\<^esub>\<down>\<^bsub>?t\<^esub>?n) \<or> (\<nexists>i. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>) \<and> ?\<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) ?n
goal (1 subgoal):
1. eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
goal (1 subgoal):
1. \<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>) \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>) \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
assume "\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)"
[PROOF STATE]
proof (state)
this:
\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)
goal (1 subgoal):
1. \<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>) \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)
goal (1 subgoal):
1. \<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>) \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
from \<open>eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')\<close>
[PROOF STATE]
proof (chain)
picking this:
eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')
[PROOF STEP]
have "eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)"
[PROOF STATE]
proof (prove)
using this:
eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')
goal (1 subgoal):
1. eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
[PROOF STEP]
using imp_def
[PROOF STATE]
proof (prove)
using this:
eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')
?\<gamma> \<longrightarrow>\<^sup>b ?\<gamma>' \<equiv> \<lambda>t n. ?\<gamma> t n \<longrightarrow> ?\<gamma>' t n
goal (1 subgoal):
1. eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
goal (1 subgoal):
1. \<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>) \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
[PROOF STEP]
have "\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n)
\<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n)"
[PROOF STATE]
proof (prove)
using this:
\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
goal (1 subgoal):
1. \<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n)
[PROOF STEP]
using validCE_cont[where \<gamma>="\<lambda> t n. \<gamma> t n \<longrightarrow> \<gamma>' t n"] \<open>\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>\<close>
[PROOF STATE]
proof (prove)
using this:
\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
\<lbrakk>\<exists>i. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>; \<not> (\<exists>i'\<ge>?n. \<parallel>?cid\<parallel>\<^bsub>?t i'\<^esub>); eval ?cid ?t ?t' ?n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)\<rbrakk> \<Longrightarrow> \<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (\<^bsub>?cid\<^esub>\<down>\<^bsub>?t\<^esub>?n) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (\<^bsub>?cid\<^esub>\<down>\<^bsub>?t\<^esub>?n)
\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
goal (1 subgoal):
1. \<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n)
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n)
goal (1 subgoal):
1. \<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>) \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
with \<open>\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)\<close> \<open>\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>\<close>
[PROOF STATE]
proof (chain)
picking this:
\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)
\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n)
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)
\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n)
goal (1 subgoal):
1. eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
using eval_def
[PROOF STATE]
proof (prove)
using this:
\<not> (\<exists>i\<ge>n. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)
\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n) \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) (\<^bsub>cid\<^esub>\<down>\<^bsub>t\<^esub>n)
eval ?cid ?t ?t' ?n ?\<gamma> \<equiv> (\<exists>i\<ge>?n. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>) \<and> ?\<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (the_enat \<langle>?cid #\<^bsub>enat ?n\<^esub>inf_llist ?t\<rangle>) \<or> (\<exists>i. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>) \<and> \<not> (\<exists>i'\<ge>?n. \<parallel>?cid\<parallel>\<^bsub>?t i'\<^esub>) \<and> ?\<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (\<^bsub>?cid\<^esub>\<down>\<^bsub>?t\<^esub>?n) \<or> (\<nexists>i. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>) \<and> ?\<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) ?n
goal (1 subgoal):
1. eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
goal (1 subgoal):
1. \<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub> \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub> \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
assume "\<not>(\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)"
[PROOF STATE]
proof (state)
this:
\<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
goal (1 subgoal):
1. \<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub> \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
\<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
goal (1 subgoal):
1. \<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub> \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
from \<open>eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')\<close>
[PROOF STATE]
proof (chain)
picking this:
eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')
[PROOF STEP]
have "eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)"
[PROOF STATE]
proof (prove)
using this:
eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')
goal (1 subgoal):
1. eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
[PROOF STEP]
using imp_def
[PROOF STATE]
proof (prove)
using this:
eval cid t t' n (\<gamma> \<longrightarrow>\<^sup>b \<gamma>')
?\<gamma> \<longrightarrow>\<^sup>b ?\<gamma>' \<equiv> \<lambda>t n. ?\<gamma> t n \<longrightarrow> ?\<gamma>' t n
goal (1 subgoal):
1. eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
goal (1 subgoal):
1. \<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub> \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
\<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
[PROOF STEP]
have "\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n
\<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n"
[PROOF STATE]
proof (prove)
using this:
\<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
goal (1 subgoal):
1. \<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n
[PROOF STEP]
using validCE_not_act[where \<gamma>="\<lambda> t n. \<gamma> t n \<longrightarrow> \<gamma>' t n"]
[PROOF STATE]
proof (prove)
using this:
\<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
eval cid t t' n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)
\<lbrakk>\<nexists>i. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>; eval ?cid ?t ?t' ?n (\<lambda>t n. \<gamma> t n \<longrightarrow> \<gamma>' t n)\<rbrakk> \<Longrightarrow> \<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) ?n \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) ?n
goal (1 subgoal):
1. \<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n
goal (1 subgoal):
1. \<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub> \<Longrightarrow> eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
with \<open>\<not>(\<exists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>)\<close>
[PROOF STATE]
proof (chain)
picking this:
\<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
\<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n
goal (1 subgoal):
1. eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
using eval_def
[PROOF STATE]
proof (prove)
using this:
\<nexists>i. \<parallel>cid\<parallel>\<^bsub>t i\<^esub>
\<gamma> (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n \<longrightarrow> \<gamma>' (lnth (\<pi>\<^bsub>cid\<^esub>inf_llist t @\<^sub>l inf_llist t')) n
eval ?cid ?t ?t' ?n ?\<gamma> \<equiv> (\<exists>i\<ge>?n. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>) \<and> ?\<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (the_enat \<langle>?cid #\<^bsub>enat ?n\<^esub>inf_llist ?t\<rangle>) \<or> (\<exists>i. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>) \<and> \<not> (\<exists>i'\<ge>?n. \<parallel>?cid\<parallel>\<^bsub>?t i'\<^esub>) \<and> ?\<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) (\<^bsub>?cid\<^esub>\<down>\<^bsub>?t\<^esub>?n) \<or> (\<nexists>i. \<parallel>?cid\<parallel>\<^bsub>?t i\<^esub>) \<and> ?\<gamma> (lnth (\<pi>\<^bsub>?cid\<^esub>inf_llist ?t @\<^sub>l inf_llist ?t')) ?n
goal (1 subgoal):
1. eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
eval cid t t' n \<gamma> \<longrightarrow> eval cid t t' n \<gamma>'
goal:
No subgoals!
[PROOF STEP]
qed
PROGRAM vmec
USE vmec_input
USE vmec_seq
USE safe_open_mod
USE vparams, ONLY: nlog, nlog0
IMPLICIT NONE
C-----------------------------------------------
C L o c a l P a r a m e t e r s
C-----------------------------------------------
INTEGER, PARAMETER :: nseq0 = 12
C-----------------------------------------------
C L o c a l V a r i a b l e s
C-----------------------------------------------
INTEGER :: numargs, ierr_vmec, index_end,
1 iopen, isnml, iread, iseq, index_seq,
2 index_dat, ireset, iunit
CHARACTER*120 :: input_file, seq_ext, reset_file_name, arg
CHARACTER*120 :: log_file
CHARACTER*120, DIMENSION(10) :: command_arg
LOGICAL :: lfirst=.true., lreseta, lscreen
C-----------------------------------------------
!***
! D I S C L A I M E R
!
! You are using a BETA version of the PROGRAM VMEC, which is currently
! under development by S. P. Hirshman at the Fusion Energy Division,
! Oak Ridge National Laboratory. Please report ANY problems or comments
! to him. As a BETA version, this PROGRAM is subject to change
! and improvement without notice.
!
! 1. CODE SYNOPSIS
!
! THIS PROGRAM - VMEC (Variational Moments Equilibrium Code) -
! SOLVES THREE-DIMENSIONAL MHD EQUILIBRIUM EQUATIONS USING
! FOURIER SPECTRAL (MOMENTS) METHODS. A CYLINDRICAL COORDINATE
! REPRESENTATION IS USED (R-Z COORDINATES). THE POLOIDAL
! ANGLE VARIABLE IS RENORMALIZED THROUGH THE STREAM FUNCTION
! LAMBDA, WHICH IS SELF-CONSISTENTLY DETERMINED AND DIFFERENCED
! VARIATIONALLY ON THE HALF-RADIAL MESH. THE POLOIDAL ANGLE IS
! DETERMINED BY MINIMIZING <M> = m**2 S(m) , WHERE S(m) =
! Rm**2 + Zm**2 . AN EVEN-ODD DECOMPOSITION IN THE POLOIDAL MODE
! NO. OF R,Z, AND LAMBDA IS USED TO IMPROVE RADIAL RESOLUTION.
! A FREE-BOUNDARY OPTION IS AVAILABLE (FOR lfreeb=T), WITH A
! USER-SUPPLIED DATA-FILE "MGRID" NEEDED TO COMPUTE THE PLASMA
! VACUUM FIELD COMPONENTS BR, BPHI, BZ (see SUBROUTINE BECOIL)
!
! THE MAGNETIC FIELD IS REPRESENTED INTERNALLY AS FOLLOWS:
!
! B(s,u,v) = grad(phiT) X ( grad(u) + grad(lambda) ) +
!
! iota(s) * grad(v) X grad(phiT)
!
! WHERE phiT is the toroidal flux (called phi in code) and
! u,v are the poloidal, toroidal angles, respectively.
!
! 2. ADDITIONAL CODES REQUIRED
! For the fixed boundary calculation, the user must provide the Fourier
! coefficients for the plasma boundary (the last surface outside of which
! the pressure gradient vanishes). For ALL but the simplest geometry, the
! SCRUNCH code (available from R. Wieland), based on the DESCUR curve-fitting
! code, can be used to produce the optimized VMEC Fourier representation for
! an arbitrary closed boundary (it need not be a 'star-like' domain, nor
! need it possess vertical, or 'stellarator', symmetry).
!
! For the free boundary calculation, the MAKEGRID code (available upon
! request) is needed to create a binary Green's FUNCTION table for the
! vacuum magnetic field(s) and, IF data analysis is to be done, flux and
! field loops as well. The user provides a SUBROUTINE (BFIELD) which can be
! called at an arbitrary spatial location and which should RETURN the three
! cylindrical components of the vacuum field at that point. (Similarly,
! locations of diagnostic flux loops, Rogowski coils, etc. are required IF
! equilibrium reconstruction is to be done.)
!
! Plotting is handled by a stand-alone package, PROUT.NCARG (written by
! R. M. Wieland). It uses NCAR-graphics calls and reads the primary VMEC output
! file, WOUT.EXT, WHERE 'EXT' is the command-line extension of the INPUT file.
!
!
! 3. UNIX SCRIPT SETUP PARAMETERS
! The VMEC source code (vmec.lsqh) is actually a UNIX script file which uses
! the C-precompiler to produce both the machine-specific Fortran source and a
! make-file specific to ANY one of the following platforms:
!
! IBM-RISC6000, CRAY, ALPHA (DEC-STATION), HP-UX WORKSTATION,
! WINDOWS-NT, DEC-VMS
!
! Additional platforms are easy to add to the existing script as required.
!
!
! 4. FORTRAN PARAMETER STATEMENTS set by user
! In the Fortran-90 version of VMEC these PARAMETER statements have
! been replaced by dynamic memory allocation. So the user should set the
! run-time parameters ns (through ns_array), mpol, ntor in the NAMELIST INDATA.
!
!
! Added features since last edition
! 1. Implemented preconditioning algorithm for R,Z
! 2. The physical (unpreconditioned) residuals are used
! to determine the level of convergence
! 3. The original (MOMCON) scaling of lambda is used, i.e.,
! Bsupu = phip*(iota - lambda[sub]v)/SQRT(g). This is needed to
! maintain consistency with the time-stepper for arbitrary PHIP.
!
! WRITTEN BY S. P. HIRSHMAN (8/28/85 - REVISED 3/1/86) BASED ON
! 1. S. P. Hirshman and J. C. Whitson, Phys. Fluids 26, 3553 (1983).
! 2. S. P. Hirshman and H. K. Meier, Phys. Fluids 28, 1387 (1985).
! 3. S. P. Hirshman and D. K. Lee, Comp. Phys. Comm. 39, 161 (1986).
!***
!
! Read in command-line arguments to get input file or sequence file,
! screen display information, and restart information
!
CALL getcarg(1, command_arg(1), numargs)
DO iseq = 2, numargs
CALL getcarg(iseq, command_arg(iseq), numargs)
END DO
lreseta = .true. !!Default value: runvmec MUST be called this way the first time
lscreen = .true.
IF (numargs .lt. 1) THEN
STOP 'Invalid command line'
ELSE IF (command_arg(1).eq.'-h' .or. command_arg(1).eq.'/h') THEN
PRINT *,
1 ' ENTER INPUT FILE NAME OR INPUT-FILE SUFFIX ON COMMAND LINE'
PRINT *
PRINT *,' For example: '
PRINT *,' xvmec input.tftr OR xvmec tftr ',
1 'OR xvmec ../input.tftr'
PRINT *
PRINT *,' Sequence files, containing a LIST of input files',
1 ' are also allowed: '
PRINT *,' xvmec input.tftr_runs'
PRINT *
PRINT *,' Here, input.tftr_runs CONTAINS a &VSEQ NAMELIST',
1 ' ENTRY'
PRINT *
PRINT *,' Additional (optional) command arguments are',
1 ' allowed:'
PRINT *
PRINT *,' xvmec <filename> noscreen F reset_wout_file'
PRINT *
PRINT *,' noscreen: suppresses ALL output to screen ',
1 ' (default, or "screen", displays output)'
PRINT *,' F (or T): IF "T", forces reset on',
1 ' a coarse mesh (used for sequencing control)'
PRINT *,' name of reset wout file (defaults to this extension)'
STOP
ELSE IF (numargs .gt. 1) THEN
arg = command_arg(2)
IF (TRIM(arg).eq.'noscreen' .or. TRIM(arg).eq.'NOSCREEN')
1 lscreen = .false.
END IF
IF (numargs .gt. 2) THEN
arg = command_arg(3)
IF (arg(1:1).eq.'f' .or. arg(1:1).eq.'F') lreseta = .false.
END IF
IF (numargs .gt. 3) THEN
reset_file_name = command_arg(4)
END IF
!
! Determine type of file opened (sequential or input-data)
! ARG1 (char var)
! By DEFAULT, ARG1 obtained from the command
! line is parsed as follows to determine the input data file(s):
! a. Attempt to OPEN file ARG1 (full path + file name).
! Look for the VSEQ NAMELIST to obtain nseq, nseq_select, and
! extension array. If they exist and nseq>0, VMEC will run
! sequentially using input determined from the array EXTENSION[i]
! or input.EXTENSION[i]
! b. If the command argument is not a sequence NAMELIST, THEN the data file
! ARG1 or input.ARG1 is READ directly, with NSEQ=1.
!
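!      For illustration (not part of the original header), a command line
!      exercising all four arguments parsed above might look like:
!
!         xvmec input.tftr noscreen F wout.tftr_ref
!
!      i.e. read input.tftr, suppress screen output, do not force a reset
!      on a coarse mesh, and take restart data from wout.tftr_ref.
!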
arg = command_arg(1)
index_dat = index(arg,'.')
index_end = len_trim(arg)
IF (index_dat .gt. 0) THEN
seq_ext = arg(index_dat+1:index_end)
input_file = TRIM(arg)
ELSE
seq_ext = TRIM(arg)
input_file = 'input.'//TRIM(seq_ext)
END IF
IF (numargs .le. 3) reset_file_name = 'wout.' // seq_ext
nseq = 1
nseq_select(1) = 1
extension(1) = input_file
!
! READ IN NAMELIST VSEQ TO GET ARRAY
! OF INPUT FILE EXTENSIONS AND INDEXING ARRAY, NSEQ_select
!
nlog = nlog0
iunit = nseq0
DO iseq = 1, 2
IF (iseq .eq. 1) THEN
arg = input_file
ELSE
arg = seq_ext
END IF
CALL safe_open(iunit, iopen, TRIM(arg), 'old', 'formatted')
IF (iopen .eq. 0) THEN
CALL read_namelist (iunit, isnml, 'vseq')
IF (isnml.eq.0 .and. nseq .gt. nseqmax) STOP 'NSEQ>NSEQMAX'
!
! OPEN FILE FOR STORING SEQUENTIAL RUN HISTORY
!
IF (isnml .eq. 0) THEN
log_file = 'log.'//seq_ext
CALL safe_open(nlog, iread, log_file, 'replace',
1 'formatted')
IF (iread .ne. 0) THEN
PRINT *, log_file,
1 ' LOG FILE IS INACCESSIBLE: IOSTAT= ',iread
STOP 3
ELSE
EXIT !!Break out of loop
END IF
ENDIF
ENDIF
CLOSE (iunit)
END DO
!
! CALL EQUILIBRIUM SOLVER
!
! nseq_select: IF sequence file (VSEQ NAMELIST given with nseq >0)
! array giving indices into EXTENSION array prescribing
! the order in which the input files are run by VMEC
! nseq: number of sequential VMEC runs to make
!
!
! CALL VMEC WITH POSSIBLE SEQUENCE EXTENSION (SEQ_EXT)
! AND ARRAY OF INPUT FILE EXTENSIONS (EXTENSION)
!
DO iseq = 1, nseq
index_seq = nseq_select(iseq)
ireset = 0
ierr_vmec = 0
IF (iseq .gt. 1) reset_file_name =
1 'wout.' // TRIM(extension(index_seq))
100 CONTINUE
CALL runvmec (extension(index_seq), iseq-1, lreseta, ierr_vmec,
1 ireset, lfirst, lscreen, reset_file_name)
lfirst = .false.
IF(ierr_vmec == 4)then
IF(.not.lmoreiter) ierr_vmec = 0
ENDIF
SELECT CASE (ierr_vmec)
! CASE (1:2) !BAD JACOBIAN AFTER 75 ITERATIONS...
! ireset = ireset + 1
! lreseta = .true.
! IF (ireset .le. 2) GOTO 100
CASE (4) !Try a few more iterations
ireset = ireset + 1
lreseta = .false.
IF (ireset .le. 1) THEN
IF (lscreen) WRITE (6, '(/,1x,a)')
1 'RUNNING A FEW MORE ITERATIONS THAN REQUESTED'
GOTO 100
ELSE IF (lscreen) THEN
PRINT *, 'DECREASE DELT OR INCREASE NITER'
ENDIF
CASE (6) !BAD JACOBIAN AFTER AXIS RESET: TRY DECREASING TO NS=3
ireset = ireset + 1
lreseta = .true.
IF (ireset .le. 1) GOTO 100
CASE DEFAULT
lreseta = .false.
END SELECT
END DO
!
! FREE ANY LONG-TERM (PERSISTENT THROUGH ISEQ > 1, OR XC, SCALXC FOR
! ITERATIVE OPTIMIZATION) POINTERS
!
CALL free_persistent_mem
CLOSE (nlog)
END PROGRAM vmec
#pragma once
#include <boost/spirit/home/x3.hpp>
namespace client {
namespace parser {
namespace x3 = boost::spirit::x3;
using iterator_type = std::string::const_iterator;
using context_type = x3::phrase_parse_context<x3::space_type>::type;
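// These aliases pin down the iterator and skipper context with which grammars
// are compiled when rule definitions live in a separate translation unit.
// A hypothetical use, assuming a rule type `expression_type` declared
// elsewhere in this project:
//
//   BOOST_SPIRIT_INSTANTIATE(expression_type, iterator_type, context_type)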
} // namespace parser
} // namespace client
\documentclass[12pt]{article}
\title{A Sample \LaTeX\ Document}
\author{Math 300}
\usepackage{graphicx}
\begin{document}
\maketitle
\section{Typing Text}
Since \LaTeX\ is a markup language, any text we
type appears on the page, unless it contains
one of the nine reserved characters of \LaTeX, listed
below.
\begin{verbatim}
\ { } & _ ^ % ~ #
\end{verbatim}
If we want those characters to appear on the page, we
must precede them by a backslash, or, in the special
case of the backslash itself, we must use the
command \verb(\verb(. In math mode, we can use the command
\verb(\backslash(.
Note that there are three kinds of dash-objects. Hyphens
are very short, and are typed the way you would expect, using
"-". Dashes, as in the range 1--2, are wider, and are typed
using \verb+--+. If you feel a need for a wide---dash, you can
use \verb+---+.
\section{Units}
Lengths in \LaTeX\ can be given in a number of units.\\
\begin{tabular}{ll}
cm & Centimeters\\
em & The width of the letter M in the current font\\
ex & The height of the letter x in the current font\\
in & Inches\\
pc & Picas (1pc = 12pt)\\
pt & Points (1in = 72pt)\\
mm & Millimeters
\end{tabular}
\section{Space}
The most direct way to make spaces is simply to use the
\verb(\hspace( and \verb(\vspace( commands, for horizontal
and vertical space, respectively. Each takes one
argument: a distance specification for the size of the space.
The width or height of the space can be positive {\em or}
negative. Note that \verb(\vspace( can only be used in
vertical mode; that is, when you are starting a new paragraph
or starting a float or doing something else that shifts text
vertically. It will not work in a line.
\LaTeX\ also has a number of predefined spaces. To produce a
space with fixed width, and which cannot be used as a line break,
you may use the \verb(~( character. This would typically be used
to separate initials of an author, or in other situations where
we don't want to have a single letter or initial ending a line.
More often, we don't mind a line break, and want the space to shrink
or grow according to the justification requirements on the line.
In that case we make a standard space using \verb(\ (; backslash-space.
There are wider stretchy spaces available to us: \verb(\quad( and
\verb(\qquad(. There is also a ``thin space'': \verb(\,(.
The following words are separated by a thin space, a standard space,
a quad, and a qquad, respectively.
\begin{center}
space\,space\ space\quad{space}\qquad{space}
\end{center}
There are also predefined vertical spaces: \verb(\smallskip(
\verb(\medskip(, and \verb(\bigskip(, that behave as their
names imply.
There are also some exceptionally stretchy spaces that we can
use to push text around. For example, the following line is
set using the command \verb(text\hfill text(. The line below it
was set using \verb(text\hfill text\hfill text(.\\
{text\hfill text}\\
text\hfill text\hfill text\\
You get the idea: \verb(\hfill( makes enough space to fill the line
in question completely. If more than one \verb(\hfill( appears
on a line, then the two negotiate over how much space they each get.
\section{Lines and Boxes}
There are a variety of ways to make lines and boxes in \LaTeX.
The most basic is to make a horizontal rule using \verb(\hrule(.
\hrule
\verb(\hrule( makes a new line, and fills it up with a horizontal line.
If you don't want an entire line, you can use \verb(\hrulefill(, as
in \hrulefill.\\
This command works like \verb(\hfill(, but instead of filling with
space, it uses a horizontal rule to fill the line.
To make a box around some content, you can use the \verb(\framebox(
command. The framebox command puts a box around its argument,
so that \verb(\framebox{text}( looks like \framebox{text}. It takes
optional width and position arguments, so that
\verb(\framebox[3in][l]{text}(
appears as \\
\framebox[3in][l]{text}.
$E=mc^2$
\section{Margins}
This section is mistitled. \LaTeX\ does not really do margins,
so much as it places text. It uses several variables in placing the
text, which we can set. For example, to make the left margin on
all even-numbered pages 0.5 inches wider than the default (which is
1 inch), we would define\\
\verb(\evensidemargin=0.5in(.\\
A list of the variables we can set and their default values follows.
\begin{verbatim}
\evensidemargin
\oddsidemargin
\topmargin
\textwidth
\textheight
\parskip
\baselineskip
\end{verbatim}
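For example, to make the body text six inches wide, we could put
\verb(\textwidth=6in( in the preamble.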
\section{Tables}
Sometimes we need to typeset tables. For example,
consider Table \ref{animaltable}.
Any resemblance of the numbers in
Table \ref{animaltable} to those from any authentic poll is purely coincidental.
\begin{table}
\begin{center}
\caption{\label{animaltable}
Results from a poll that probably never happened.}
\begin{tabular}{||l|c||}
\hline
\cline{1-2}
\multicolumn{2}{||l||}{{\it What is your favorite animal?}}\\
\hline
Animal & Percentage of respondents\\
\hline
Dog & 43\%\\
Cat & 44\%\\
Schwarzenegger & 9\%\\
We kill animals & 4\%\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\section{Figures}
We can also put figures into our latex documents. For example, the
image in Figure \ref{sniper} is found at
{\tt http://www.math.wsu.edu/kcooper/M300/sniper.jpg}, but must
be converted to Encapsulated PostScript before it can be included
in this document.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=4in]{sniper.eps}
\caption{\label{sniper}
Cats take vengeance on 4\% of respondents \cite{calvin}.}
\vfill
\end{center}
\end{figure}
\section{Homemade commands}
\newcommand{\dickens}[1]{It was the best of #1. It was the worst of #1.}
\dickens{ideas}
\LaTeX\ was conceived as a programming language. This is what makes it
harder to process using a wysiwyg interface, but it also allows
us to make our own shortcuts.
If you know that a particular expression will appear repeatedly
in your document, you can make an abbreviation for it, or even
a command that allows you to specify arguments. In our case,
the sequence ``\dickens{{\it fill in the blank}}''
is to appear many times in this section, so we created a command
as follows:
\begin{verbatim}
\newcommand{\dickens}[1]{It was the best of #1.
It was the worst of #1.}
\end{verbatim}
This command takes one argument, which appears wherever a \#1 appears
in the text for the command. Thus, to typeset ``\dickens{examples}'',
we need only to type \verb(\dickens{examples}(.
\section{Citations and References}
In technical documents there are many references to
typographical objects from the document, and citations
of materials from outside the document. \TeX\ \cite{knuth}
and \LaTeX\ \cite{lamport} let us keep track of those citations
by name, rather than number. Using the ``thebibliography''
environment gives us automatic numbering of our references,
while associating those numbers with names, so that we can
refer to the references using the \verb(\cite( command.
Likewise, for internal references, such as those to Figure \ref{sniper},
we can assign labels to a counter using the \verb(\label(
command, and refer to them using the \verb(\ref( command.
\begin{thebibliography}{X}
\bibitem{knuth} Knuth, D., {\bf The \TeX book,} Addison-Wesley, Reading, 1984.
\bibitem{lamport} Lamport, L., {\bf \LaTeX: A Document Preparation System,}
Addison-Wesley, Reading, 1986.
\bibitem{calvin} Watterson, B., {\bf Homicidal Psycho Jungle Cat,}
Andrews McMeel, New Jersey, 1994.
\end{thebibliography}
\end{document}
[STATEMENT]
lemma (in list_alloc) "\<Gamma>,\<Theta>\<turnstile> {} \<lbrace>\<acute>p\<noteq>Null\<rbrace>\<longmapsto> \<acute>p\<rightarrow>\<acute>next :== \<acute>p \<lbrace>True\<rbrace>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<Gamma>,\<Theta>\<turnstile> {} (False, \<lbrace>\<acute>p \<noteq> Null\<rbrace>)\<longmapsto> \<acute>p\<rightarrow>\<acute>next :== \<acute>p \<lbrace>True\<rbrace>
[PROOF STEP]
apply vcg
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done
# coding: utf8
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import cv2
import numpy as np
from utils.config import cfg
from models.model_builder import ModelPhase
from pdseg.data_aug import get_random_scale, randomly_scale_image_and_label, random_rotation, \
rand_scale_aspect, hsv_color_jitter, rand_crop
def resize(img, grt=None, grt_instance=None, mode=ModelPhase.TRAIN):
"""
改变图像及标签图像尺寸
AUG.AUG_METHOD为unpadding,所有模式均直接resize到AUG.FIX_RESIZE_SIZE的尺寸
AUG.AUG_METHOD为stepscaling, 按比例resize,训练时比例范围AUG.MIN_SCALE_FACTOR到AUG.MAX_SCALE_FACTOR,间隔为AUG.SCALE_STEP_SIZE,其他模式返回原图
AUG.AUG_METHOD为rangescaling,长边对齐,短边按比例变化,训练时长边对齐范围AUG.MIN_RESIZE_VALUE到AUG.MAX_RESIZE_VALUE,其他模式长边对齐AUG.INF_RESIZE_VALUE
Args:
img(numpy.ndarray): 输入图像
grt(numpy.ndarray): 标签图像,默认为None
mode(string): 模式, 默认训练模式,即ModelPhase.TRAIN
Returns:
resize后的图像和标签图
"""
if cfg.AUG.AUG_METHOD == 'unpadding':
target_size = cfg.AUG.FIX_RESIZE_SIZE
img = cv2.resize(img, target_size, interpolation=cv2.INTER_LINEAR)
if grt is not None:
grt = cv2.resize(grt, target_size, interpolation=cv2.INTER_NEAREST)
if grt_instance is not None:
grt_instance = cv2.resize(
grt_instance, target_size, interpolation=cv2.INTER_NEAREST)
elif cfg.AUG.AUG_METHOD == 'stepscaling':
if mode == ModelPhase.TRAIN:
min_scale_factor = cfg.AUG.MIN_SCALE_FACTOR
max_scale_factor = cfg.AUG.MAX_SCALE_FACTOR
step_size = cfg.AUG.SCALE_STEP_SIZE
scale_factor = get_random_scale(min_scale_factor, max_scale_factor,
step_size)
img, grt = randomly_scale_image_and_label(
img, grt, scale=scale_factor)
elif cfg.AUG.AUG_METHOD == 'rangescaling':
min_resize_value = cfg.AUG.MIN_RESIZE_VALUE
max_resize_value = cfg.AUG.MAX_RESIZE_VALUE
if mode == ModelPhase.TRAIN:
if min_resize_value == max_resize_value:
random_size = min_resize_value
else:
random_size = int(
np.random.uniform(min_resize_value, max_resize_value) + 0.5)
else:
random_size = cfg.AUG.INF_RESIZE_VALUE
value = max(img.shape[0], img.shape[1])
scale = float(random_size) / float(value)
img = cv2.resize(
img, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
if grt is not None:
grt = cv2.resize(
grt, (0, 0),
fx=scale,
fy=scale,
interpolation=cv2.INTER_NEAREST)
else:
raise Exception("Unexpect data augmention method: {}".format(
cfg.AUG.AUG_METHOD))
return img, grt, grt_instance
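# A minimal usage sketch (hypothetical shapes; assumes cfg.AUG has been
# configured, e.g. cfg.AUG.AUG_METHOD == 'unpadding'):
#
#   import numpy as np
#   img = np.zeros((720, 1280, 3), dtype=np.uint8)
#   grt = np.zeros((720, 1280), dtype=np.uint8)
#   img, grt, _ = resize(img, grt)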
#!/usr/bin/env python
"""
DESCRIPTION
Functions for the total variation of atmospheric data
AUTHOR
Nicholas Hamilton
nicholas.hamilton@nrel.gov
Date:
29 January 2020
"""
import numpy as np
import pandas as pd
import inspect
import scipy as sp
#######################################
def standardize_data(data, axis=0):
'''
Standardize data - calculate z-score.
'''
dnorm = (data - data.mean(axis=axis)) / data.std(axis=axis)
return dnorm
#######################################
def covdet(data):
'''
calculate the total variation of a data as the determinant of the covariance matrix.
Parameters
----------
data : pd.dataframe, np.ndarray
data
    Returns
    -------
    tvar: float
        total variation of the input data, i.e. the determinant of its
        covariance matrix
    '''
if 'pandas' in str(type(data)):
data = data.values
cov = np.cov(data.T)
return np.linalg.det(cov)
#######################################
def mahalanobis_distance(data):
'''
Calculate the distance in standard deviations from the center of the data.
Assumes data is a normally distrubuted multi-dimensional random variable.
Parameters
----------
data : np.array [ndata x ndims]
input data array
Returns
-------
MD : np.array [ndata]
Mahalanobis distance
'''
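    # MD_i = sqrt((x_i - mu)^T Sigma^{-1} (x_i - mu)); the full matrix product
    # is formed for all points at once and np.diag keeps the per-point terms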
# inverse of covariance matrix
VI = np.linalg.inv(np.cov(data.T))
# estimate of center of data
center = np.repeat(data.mean(axis=0)[np.newaxis, :], data.shape[0], axis=0)
MD = np.diag(
np.sqrt(np.dot(np.dot((data - center), VI), (data - center).T)))
return MD
#######################################
def linearfit(x, a, b):
'''
linear fit function
'''
return a * x + b
#######################################
def quadfit(x, a, b, c, x0):
'''
quadriatic fit function
'''
return a * (x - x0)**2 + b * (x - x0) + c
#######################################
def sinefit(x, a, b, c, x0):
'''
sine fit function
'''
return a * np.sin((x - x0) * 2 * np.pi * b) + c
#######################################
def invtanfit(x, a, b, c, x0):
'''
inverse tangent fit function
'''
return a * np.arctan(b * (x - x0)) + c
#######################################
def parse_fitfunc(detrend):
    '''
    Map a detrend name onto the matching objective function.

    Parameters
    ----------
    detrend : str
        detrend type (objective function flavor): 'linear',
        'sine'/'sinusoidal' or 'inverse tangent'

    Returns
    -------
    fitfunc : function
        objective function
    param_names : list
        names of each parameter
    '''
    fitfunc, param_names = None, None
    if detrend is not None:
# fit and remove linear trend
if detrend.lower() in 'linear':
fitfunc = linearfit
param_names = ['slope', 'offset']
# fit and remove sinusoidal
if detrend.lower() in 'sinusoidal' or detrend.lower() in 'sine':
fitfunc = sinefit
param_names = ['amplitude', 'frequency', 'offset', 'phase']
# fit and remove inverse tangent trend
if detrend.lower() in 'inverse tangent':
fitfunc = invtanfit
param_names = ['amplitude', 'frequency', 'offset', 'phase']
return fitfunc, param_names
#######################################
def parse_init_fitvals(detrend, ydat):
    '''
    Choose initial parameter guesses for the selected detrend function.

    Parameters
    ----------
    detrend : str
        detrend type (objective function flavor)
    ydat : np.ndarray
        data against which to fit

    Returns
    -------
    p0 : list, None
        initial values for fitting (None when detrend is None)
    '''
    p0 = None
    if detrend is not None:
# initial fit values for linear trend
if detrend.lower() in 'linear':
p0 = [(ydat[-1] - ydat[0]) / len(ydat), ydat.mean()]
# initial fit values for sinusoidal trend
if detrend.lower() in 'sinusoidal' or detrend.lower() in 'sine':
p0 = [
1, (np.pi * 2 * np.abs(np.argmax(ydat) - np.argmin(ydat))),
ydat.mean(), 0
]
# initial fit values for inverse tangent trend
if detrend.lower() in 'inverse tangent':
p0 = [1, 1, 1, len(ydat) / 2]
return p0
#######################################
def find_outliers(data, threshold=3, searchtype=None):
    '''
    Flag and remove outliers from a 2-D data array.

    Parameters
    ----------
    data : np.ndarray [ndata x ndims]
        input data array
    threshold : int, optional
        number of standard deviations (or Mahalanobis distance) beyond which
        a point is flagged (default is 3)
    searchtype : str, optional
        None for a per-column standard-deviation test, or 'mahalanobis' for
        a multivariate distance test

    Returns
    -------
    clean_data : np.ndarray
        data with outlier rows removed
    outliers : np.ndarray
        the flagged rows
    outlier_index : list
        row indices of the flagged points
    '''
data_std = data.std(axis=0)
outliers = np.zeros((0, 2))
outlier_index = []
if searchtype is None:
for ii in range(data.shape[1]):
tmp = np.abs(data[:, ii]) > threshold * data_std[ii]
outlier_index.append([i for i, x in enumerate(tmp) if x])
tmp = data[tmp]
outliers = np.vstack([outliers, tmp])
outlier_index = [item for sublist in outlier_index for item in sublist]
clean_data = np.delete(data, outlier_index, axis=0)
elif searchtype.lower() in 'mahalanobis':
MD = mahalanobis_distance(data)
tmp = MD > threshold
outlier_index.append([i for i, x in enumerate(tmp) if x])
tmp = data[tmp]
outliers = np.vstack([outliers, tmp])
outlier_index = [item for sublist in outlier_index for item in sublist]
clean_data = np.delete(data, outlier_index, axis=0)
return clean_data, outliers, outlier_index
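#######################################
# Minimal usage sketch (synthetic data; the threshold is illustrative):
#
#   rng = np.random.RandomState(0)
#   sample = rng.normal(size=(500, 3))
#   clean, outliers, idx = find_outliers(sample, threshold=3,
#                                        searchtype='mahalanobis')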
"""
This code is used to Run the distributed model for jiboa rover in El Salvador
wher the catchment is consisted os a ustream lake and a volcanic area
- you have to make the root directory to the examples folder to enable the code
from reading input files
"""
from IPython import get_ipython
get_ipython().magic('reset -f')
import os
os.chdir("F:/02Case studies/El Salvador")
#%library
import gdal
import datetime as dt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# HAPI modules
from Hapi.run import RunHAPIwithLake
import Hapi.hbv as HBV
import Hapi.performancecriteria as Pf
import Hapi.raster as Raster
#%%
"""
paths to meteorological data
"""
start = dt.datetime(2012,6,14,19,00,00)
end = dt.datetime(2014,11,17,00,00,00)
calib_end = dt.datetime(2013,12,23,00,00,00)
PrecPath = prec_path = "inputs/Hapi/meteodata/4000/calib/prec_clipped"
Evap_Path = evap_path = "inputs/Hapi/meteodata/4000/calib/evap_clipped"
TempPath = temp_path = "inputs/Hapi/meteodata/4000/calib/temp_clipped"
#DemPath = path+"GIS/4000/dem4000.tif"
FlowAccPath = "inputs/Hapi/GIS/4000_matched/acc4000.tif"
FlowDPath = "inputs/Hapi/GIS/4000_matched/fd4000.tif"
ParPath = "inputs/Hapi/meteodata/4000/parameters/"
#ParPath = "inputs/Hapi/meteodata/4000/"+"parameters.txt"
Paths=[PrecPath, Evap_Path, TempPath, FlowAccPath, FlowDPath, ]
#p2=[24, 1530]
#init_st=[0,5,5,5,0]
init_st = np.loadtxt("inputs/Hapi/meteodata/Initia-jiboa.txt", usecols=0).tolist()
snow = 0
# lake meteorological data
ind = pd.date_range(start, end, freq = "H" )
lakedata = pd.read_csv("inputs/Hapi/meteodata/lakedata.csv", index_col = 0)
lakedata.index = ind
lakeCalib = lakedata.loc[start:calib_end]
lakeValid = lakedata.loc[calib_end:end]
# convert the dataframe into array
lakeCalibArray = lakeCalib.values
# take only the plake, et, t and tm columns and exclude the last column
lakeCalibArray = lakeCalibArray[:,0:-1]
# where the lake discharges its flow (give the indices of the cell)
lakecell = [2,1] # 4km
#lakecell = [4,2] # 2km
#lakecell = [10,4] # 1km
#lakecell = [19,10] # 500m
LakeParameters = np.loadtxt("inputs/Hapi/meteodata/4000/Lakeparameters.txt").tolist()
StageDischargeCurve = np.loadtxt("inputs/Hapi/meteodata/curve.txt")
p2 = [1, 227.31, 133.98, 70.64]
Lake_init_st = np.loadtxt("inputs/Hapi/meteodata/Initia-lake.txt", usecols=0).tolist()
#%% run the model
Sim =pd.DataFrame(index = lakeCalib.index)
st, Sim['Q'], q_uz_routed, q_lz_trans = RunHAPIwithLake(HBV, Paths, ParPath, p2, init_st,
snow, lakeCalibArray, StageDischargeCurve,
LakeParameters, lakecell,Lake_init_st)
#%% calculate some metrics
WS = {}
WS['type'] = 1
WS['N'] = 3
ModelMetrics=dict()
ModelMetrics['CalibErrorHf']=Pf.RMSEHF(lakeCalib['Q'],Sim['Q'],WS['type'],WS['N'],0.75)
ModelMetrics['CalibErrorLf']=Pf.RMSELF(lakeCalib['Q'],Sim['Q'],WS['type'],WS['N'],0.75)
ModelMetrics['CalibNSEHf']=Pf.NSE(lakeCalib['Q'],Sim['Q'])
ModelMetrics['CalibNSELf']=Pf.NSE(np.log(lakeCalib['Q']),np.log(Sim['Q']))
ModelMetrics['CalibRMSE']=Pf.RMSE(lakeCalib['Q'],Sim['Q'])
ModelMetrics['CalibKGE']=Pf.KGE(lakeCalib['Q'],Sim['Q'])
ModelMetrics['CalibWB']=Pf.WB(lakeCalib['Q'],Sim['Q'])
#%% plotting
plt.figure(50,figsize=(15,8))
Sim.Q.plot(color=[(0,0.3,0.7)],linewidth=2.5,label="Simulated data", zorder = 10)
ax1=lakeCalib['Q'].plot(color='#DC143C',linewidth=2.8,label='Observed calibration data')
ax1.annotate("Model performance" ,xy=('2012-12-01 00:00:00',20),fontsize=15)
ax1.annotate("RMSE = " + str(round(ModelMetrics['CalibRMSE'],3)),xy=('2012-12-01 00:00:00',20-1.5),fontsize=15)
ax1.annotate("NSE = " + str(round(ModelMetrics['CalibNSEHf'],2)),xy=('2012-12-01 00:00:00',20-3),fontsize=15)
plt.legend()
#ax1.annotate("RMSELF = " + str(round(committee['c_rmself'],3)),xy=('2013-01-01 00:00:00',max(calib['Q'])-3),fontsize=15)
#ax2=single_valid['Q'].plot(color='orange',linewidth=2.8,label='Simulated Validation')
#ax2.annotate("Model performance" ,xy=('2014-01-01 00:00:00',20),fontsize=15)
#ax2.annotate("RMSE = " +str(round(single['v_rmse'],3)),xy=('2014-01-01 00:00:00',20-1.5),fontsize=15)
#ax1.annotate("NSE = " + str(round(single['v_nsehf'],2)),xy=('2014-01-01 00:00:00',20-3),fontsize=15)
#ax2.annotate("RMSELF = " +str(round(committee['v_rmself'],3)),xy=('2014-12-01 00:00:00',max(calib['Q'])-3),fontsize=15)
#%% store the result into rasters
# create list of names
src=gdal.Open(FlowAccPath)
index=pd.date_range(start,calib_end,freq="1H")
resultspath="results/upper_zone_discharge/4000/"
names=[resultspath+str(i)[:-6] for i in index]
names=[i.replace("-","_") for i in names]
names=[i.replace(" ","_") for i in names]
names=[i+".tif" for i in names]
"""
to save the upper zone discharge distributerd discharge in a raster forms
uncomment the next line
"""
Raster.RastersLike(src,q_uz_routed[:,:,:-1],names)
import os
import cv2
import numpy as np
HEIGHT, WIDTH = 28, 28
def sortContours(dected_list):
    # merge horizontally adjacent bounding boxes (within `space` pixels and
    # on roughly the same baseline) into single multi-digit boxes
    connect_obj = []
    space = 13
while True:
if not dected_list:
break
cnt = 0
[x, y, w, h] = dected_list.pop()
for i, chk in enumerate(dected_list):
[x_, y_, w_, h_] = chk
#front
if (x+w)<x_ and (x+w+space)>x_ and y-5<=y_ and y_<=y+5:
#connect_obj.append([x, y, (x_-x+w_), h])
dected_list.pop(i)
dected_list.append([x, y, (x_-x+w_), h])
cnt += 1
break
#back
if (x_+w_)<x and (x_+w_+space)>x and y_-5<=y and y<=y_+5:
#connect_obj.append([x_, y_, (x-x_+w), h_])
dected_list.pop(i)
dected_list.append([x_, y_, (x-x_+w), h_])
cnt += 1
break
if cnt == 0 :
connect_obj.append([x, y, w, h])
return connect_obj
def make_image_list(src, point_list, imgPath):
img_arr = []
for point in point_list:
[x, y, w, h] = point
pad = 4
crop = src[y-pad:y+h+pad, x-pad:x+w+pad]
padding = cv2.resize(crop, dsize=(WIDTH, HEIGHT), interpolation=cv2.INTER_LINEAR)
padding = np.reshape(padding, [HEIGHT,WIDTH,1])
img_arr.append(padding)
return img_arr
def readIMG(imgPath):
img = cv2.imread(imgPath,cv2.IMREAD_GRAYSCALE)#.astype(np.float32) / 255.
_, thr = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3))
opening = cv2.morphologyEx(thr, cv2.MORPH_OPEN, kernel)
    # OpenCV 3.x API: findContours returns (image, contours, hierarchy)
    _, contours, hierarchy = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
detected_obj = []
for cnt in contours:
[x, y, w, h] = cv2.boundingRect(cnt)
if h>19 and h<30 and w>8 and w<30:
detected_obj.append([x, y, w, h])
#connect nearest digit
connect_obj = sortContours(detected_obj)
#make image array list
img_list = make_image_list(thr, connect_obj,imgPath.split('/')[-1].split('.')[0])
img_list = np.asarray(img_list)
return (connect_obj, img_list)
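# Minimal usage sketch (hypothetical file path):
#
#   boxes, digits = readIMG('scans/drawing_001.png')
#   # `boxes` holds [x, y, w, h] for each detected number field;
#   # `digits` is an (N, 28, 28, 1) array of binarized crops for recognition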
#!/usr/bin/env python2
import os, errno
import netCDF4 as nc
import numpy as np
import pandas as pd
import matplotlib.lines as mlines
import matplotlib.dates as dates
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib import style
from scipy import integrate
def create_phen_file(lai_ts):
total = lai_ts.resample('D', how='mean')
lai_veg = treegrass_frac(total, 30)
site_phen = divide_leaves(lai_veg)
site_phen['rootbiomass'] = [3840.]*len(total)
site_phen['nitrogen'] = [lai*2 for lai in total]
site_phen['TimeStep'] = total.index.dayofyear
site_phen_rot = site_phen.iloc[:, [13, 11] + range(0, 11) + [12]]
site_phen_out = pd.concat([site_phen_rot, site_phen.iloc[:, 1:11]], axis=1)
site_phen_out.columns = ['DOY', 'rootbiom', 'lai'] \
+ ['lfr_{0}'.format(i) for i in range(1, 11)] \
+ ['nit'] \
+ ['nfr_{0}'.format(i) for i in range(1, 11)]
return site_phen_out
def divide_leaves(lai_part):
    # NOTE: earlier weights of 0.7 (trees) and 1.3 (grass) were wrong; split
    # each component's LAI evenly across its five canopy layers instead
trees_frac = [lai_part['tree']/lai_part['total']/5. for i in range(5)]
grass_frac = [lai_part['grass']/lai_part['total']/5. for i in range(5)]
leaf_alloc = pd.concat([lai_part['total']] + trees_frac + grass_frac, axis=1)
return leaf_alloc
def treegrass_frac(ndvi, day_rs):
"""
Process based on Donohue et al. (2009) to separate out tree and grass cover,
using moving windows (adapted here for daily time-step)
"""
# first calculate the 7-month moving minimum window across the time-series
    # (an earlier revision used period=3 to suppress grass in the dry season)
fp1 = moving_something(np.min, ndvi, period=7, day_rs=day_rs)
fp2 = moving_something(lambda x: sum(x)/(9*day_rs), fp1, period=9, day_rs=day_rs)
fr1 = ndvi - fp2
ftree = [p2 - np.abs(r1) if r1 < 0 else p2 for p2, r1 in zip(fp2, fr1)]
fgrass = ndvi - ftree
return pd.DataFrame({'total':ndvi, 'tree':ftree, 'grass':fgrass})
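# e.g. (illustrative) splitting a daily LAI/NDVI series into components:
#
#   parts = treegrass_frac(daily_lai, day_rs=30)   # daily_lai: 1-D series
#   parts[['tree', 'grass']].plot()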
def moving_something(_fun, tseries, period, day_rs=16, is_days=True):
"""
Applies a function to a moving window of the time-series:
ft_ = function([ f(t-N), f(t). f(t+N)])
"""
# if the time-series is at a day-time step, update the window to a step-size of 16 days
if is_days:
p0 = period*day_rs
else:
p0 = period
# find upper and lower bounds of the moving window
half = p0//2
tlen = len(tseries)
twin = [0]*tlen
for im in range(tlen):
# find the something for the window that satisfy the edge conditions
if im < half:
# fold back onto the end of the time-series
twin[im] = _fun(np.hstack([tseries[tlen-(half-im):tlen],\
tseries[0:im+half]]))
elif im > tlen-half:
# fold back into the beginning of the time-series
twin[im] = _fun(np.hstack([tseries[im-half:tlen],\
tseries[0:half-(tlen-im)]]))
else:
twin[im] = _fun(tseries[im-half:im+half])
return twin
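# e.g. (illustrative) a short circular moving minimum, treating `period`
# directly as a number of samples:
#
#   moving_something(np.min, np.array([3, 1, 4, 1, 5]), period=3, is_days=False)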
def import_one_year(file_name):
"""
Imports the one-year climatology, resetting time columns
as a multi-index pandas dataframe
"""
# universal time labels
time_label = ['Month', 'Day', 'Hour', 'Min']
# import data
clim_raw = pd.read_csv(clim_met_file)
# fix column names
clim_raw.columns = time_label + list(clim_raw.columns[4:])
# datetime column
clim_raw['DT'] = pd.date_range("2004-01-01", periods=len(clim_raw), freq="30min")
# index on time
clim_data = clim_raw.set_index('DT')
# return to user
return clim_data.ix[:, 4:]
def import_tower_data(file_name):
"""
Imports dingo meteorology file and puts an index on the time column
"""
# read in file
tower = pd.read_csv(file_name)
# re-do the datetime column as it is out by 30 minutes
tower['DATE'] = pd.date_range(start="2001-01-01", periods=len(tower), freq="30min")
# set dataframe index on this column
tower_dated = tower.set_index(['DATE'])
# return to user
return tower_dated
def clean_tower_data(dataset):
"""
Cleans the dataset for misssing values, using a linear interpolation and backfill on
missing values.
Positive, non-physical values are removed by comparing values with the max of a 95% CI
monthly moving window.
Negative, non-physical values are set to 0
"""
# interpolate and back fill missing values
data_fill = dataset.interpolate().fillna(method='bfill')
# pick columns to clean
met_data = data_fill.ix[:, ["Ta_Con", "Fsd_Con", "VPD_Con"]]
# remove non-physical values (assume all are positive)
data_clean = met_data.apply(remove_nonphysical)
# add PAR data
data_clean["PAR"] = data_fill["Fsd_Con"]*2.3
# add timestep column for light interception geometric calculations
data_clean["TimeStep"] = [d.dayofyear + (d.hour + d.minute/60.)/24 \
for d in data_fill.index]
# add extra uncleaned columns
data_out = pd.concat([data_clean, \
data_fill.ix[:, ["CO2", "Ws_CSAT_Con", "Precip_Con"]]], \
axis=1)
# return data in this order of columns
return data_out.ix[:, ["TimeStep", "Ta_Con", "CO2", "Ws_CSAT_Con", "Fsd_Con", "VPD_Con", "PAR", "Precip_Con"]]
def remove_nonphysical(dstream, win=30):
"""
Non-physical values are removed by comparing values with the max of a 95% CI
monthly moving window.
Negative values are set to 0
"""
# rolling mean
mean = pd.rolling_mean(dstream, window=win*48).fillna(method="bfill")
# rolling standard deviation
std = pd.rolling_std(dstream, window=win*48).fillna(method="bfill")
# determined rolling ci
ci99 = [m + 2.5*s for (m, s) in zip(mean, std)]
# max CI 99
top_val = np.max(ci99)
# clean values
#dstream_clean = [np.min([top_val, ds[i]]) for (i, ds) in enumerate(dstream)]
dstream_clean = np.minimum(top_val, np.maximum(dstream, 0))
# return cleaned data stream
return dstream_clean
def expand_climatology(dataset):
"""
Takes the one-year (366 day) climatology and builds a 14 year dataset from it
"""
# remove leap day
non_leap = dataset[~((dataset.index.day == 29) & (dataset.index.month == 2))]
# build new time-series using non-leap years + leap year
grp_year = pd.concat([non_leap]*3 + [dataset], axis=0)
# there are 3 groups of non-leap + leap years PLUS two normal years
full_tseries = pd.concat([grp_year]*3 + [non_leap]*2, axis=0)
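    # sanity note: starting from 2001-01-01, the three (3 non-leap + 1 leap)
    # blocks put the leap years at 2004, 2008 and 2012, and the two trailing
    # non-leap years cover 2013-2014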
# returns a repeating climatological time-series over 14 years
full_tseries['DT'] = pd.date_range(start="2001-01-01", periods=len(full_tseries), freq="30 min")
full_tseries2 = full_tseries.reset_index(drop=True)
return full_tseries2.set_index(['DT'])
def mix_climatology(dataset1, dataset2, ycol):
dataset3 = dataset1.copy()
dataset3[ycol] = dataset2[ycol]
return dataset3
def smooth_data(dataset):
dayint = lambda x: integrate.trapz(x, dx=1800)*1e-6
# sample functions
samplers = [np.mean]*4 + [dayint, np.mean, dayint, np.sum]
# dictionary to pass to resample
sample_dict = {lab: samp for (lab, samp) in zip(dataset.columns, samplers*2)}
# downsampled to daily time-scale
daily_data = dataset.resample("D", how=sample_dict)
# smooth data using a 14-day rolling mean
rolling = lambda x: pd.rolling_mean(x, window=14, min_periods=1)
new_dataset = daily_data.apply(rolling, axis=0)
# return new dataset
return new_dataset
def plot_inputs(dataset, phen, swap_var=None, EX=1):
plt.rcParams['lines.linewidth'] = 1.25
plt.rcParams.update({'mathtext.default': 'regular'})
style.use('ggplot')
ncols = plt.rcParams['axes.color_cycle']
n_plots = 6
#import ipdb; ipdb.set_trace()
fig = plt.figure(figsize=(8, 9))
gs = gridspec.GridSpec(n_plots, 1)
ax1 = [plt.subplot(gs[i]) for i in range(n_plots)]
# turn off x labels
for i in range(5):
ax1[i].xaxis.set_ticklabels([])
agg_data = smooth_data(dataset)
agg_data['lai'] = phen['lai']
# plots
date_ticks = pd.date_range("2001", periods=15, freq='AS')
time_x = agg_data.index
plot_vars = ["Fsd_Con", "VPD_Con", "Ta_Con", "Ws_CSAT_Con", "Precip_Con", "lai"]
pcols = [ncols[0]]*7
if swap_var is not None:
if swap_var == "CO2":
            swap_ix = 4
else:
swap_ix = plot_vars.index(swap_var)
pcols[swap_ix] = ncols[1]
for (i, pvar) in enumerate(plot_vars):
if i != 4:
ax1[i].plot_date(time_x, agg_data[pvar], '-', c=pcols[i])
else:
if swap_var == "CO2":
bcol = ncols[0]
ccol = pcols[i]
elif swap_var == "Precip_Con":
bcol = pcols[i]
ccol = ncols[0]
else:
bcol = pcols[i]
ccol = pcols[i]
ax1[i].bar(time_x, agg_data[pvar], color=None, edgecolor=bcol, alpha=0.3)
ax1[i].xaxis_date()
ax2 = ax1[i].twinx()
ax2.plot_date(time_x, agg_data["CO2"], '-', c=ccol, lw=2.5)
# labels
plt_label = "Howard Springs Experiment {0} Inputs".format(EX)
ax1[0].set_title(plt_label)
ax1[0].set_ylabel("$R_{s}$ (MJ m$^{-2}$)")
ax1[1].set_ylabel("$D_{v}$ (kPa)")
ax1[2].set_ylabel("$T_{a}$ ($\degree$C)")
ax1[3].set_ylabel("$U_{v}$ (m s$^{-1}$)")
ax1[4].set_ylabel("$PPT$ (mm)")
ax1[5].set_ylabel("LAI")
ax2.set_ylabel("CO$_{2}$ (ppm)")
for i in range(len(ax1)):
ax1[i].yaxis.set_label_coords(-0.07, 0.5)
# limits
ax1[0].set_ylim([10, 30])
ax1[5].set_ylim([0.5, 2.5])
ax2.set_ylim([350, 400])
# axis
ax2.grid(False)
for i in range(len(ax1)):
ax1[i].set_xticks(date_ticks)
ax1[5].xaxis.set_ticklabels(date_ticks, rotation=45, ha="center", fontsize=11)
ax1[5].xaxis.set_major_formatter(dates.DateFormatter('%Y'))
# custom legend lines
held_line = mlines.Line2D([], [], color=ncols[0], lw=2, marker=None, label="held variables")
pert_line = mlines.Line2D([], [], color=ncols[1], lw=2, marker=None, label="perturbed variable")
# plot legend
ax1[5].legend(handles=[held_line, pert_line], bbox_to_anchor=(0.5, -0.4), \
loc='upper center', ncol=2, prop={'size':10})
plt.subplots_adjust(left=0.1, right=0.9, top=0.95, bottom=0.1, hspace=0.15)
# saving
figure_path = os.path.expanduser("~/Savanna/Analysis/figures/IAV/inputs/")
plt.savefig(figure_path + plt_label.replace(' ', '_') + ".pdf", rasterized=True)
return None
def assign_variables(nc_obj):
# CREATE DIMENSIONS
nc_obj.createDimension('x', 1)
nc_obj.createDimension('y', 1)
nc_obj.createDimension('z', 1)
nc_obj.createDimension('time', None)
# CREATE VARIABLES
nc_obj.createVariable('x', 'f8', ('x'))
nc_obj.createVariable('y', 'f8', ('y'))
nc_obj.createVariable('latitude', 'f8', ('x', 'y'))
nc_obj.createVariable('longitude', 'f8', ('x', 'y'))
nc_obj.createVariable('time', 'f8', ('time'))
# >> [Time-varying values]
# >> Local Meteorology
nc_obj.createVariable('SWdown', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('Tair', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('VPD', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('Cair', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('Wind', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('Rainfall', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('LAI', 'f8', ('time', 'x', 'y'))
# >> Climatologies
nc_obj.createVariable('clim_SWdown', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('clim_Tair', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('clim_VPD', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('clim_Cair', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('clim_Wind', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('clim_Rainfall', 'f8', ('time', 'x', 'y'))
nc_obj.createVariable('clim_LAI', 'f8', ('time', 'x', 'y'))
return None
def assign_units(nc_obj, start_date):
# ASSIGN UNITS
# >> [Dimensions]
nc_obj.variables['x'].units = ""
nc_obj.variables['y'].units = ""
nc_obj.variables['latitude'].units = "degrees_north"
nc_obj.variables['longitude'].units = "degrees_east"
nc_obj.variables['time'].units = "seconds since " + start_date
# >> [Time-varying values]
# >> Local Meteorology
nc_obj.variables['SWdown'].units = "W/m^2"
nc_obj.variables['Tair'].units = "degrees Celsius"
nc_obj.variables['VPD'].units = "kPa"
nc_obj.variables['Cair'].units = "umol/mol"
nc_obj.variables['Wind'].units = "m/s"
nc_obj.variables['Rainfall'].units = "mm"
nc_obj.variables['LAI'].units = "m^2/m^2"
# >> Climatologies
nc_obj.variables['clim_SWdown'].units = "W/m^2"
nc_obj.variables['clim_Tair'].units = "degrees Celsius"
nc_obj.variables['clim_VPD'].units = "kPa"
nc_obj.variables['clim_Cair'].units = "umol/mol"
nc_obj.variables['clim_Wind'].units = "m/s"
nc_obj.variables['clim_Rainfall'].units = "mm"
nc_obj.variables['clim_LAI'].units = "m^2/m^2"
return None
def assign_longNames(nc_obj):
# LONG NAMES
nc_obj.variables['SWdown'].longname = "Downwelling shortwave radiation"
nc_obj.variables['Tair'].longname = "Air temperature"
nc_obj.variables['VPD'].longname = "Vapour pressure deficit"
nc_obj.variables['Cair'].longname = "Atmospheric CO2 concentration"
nc_obj.variables['Wind'].longname = "Wind speed"
nc_obj.variables['Rainfall'].longname = "Precipitation"
nc_obj.variables['LAI'].longname = "MODIS 8-day composite leaf area index"
# Vegetation
nc_obj.variables['clim_SWdown'].longname = "Downwelling shortwave radiation"
nc_obj.variables['clim_Tair'].longname = "Air temperature"
nc_obj.variables['clim_VPD'].longname = "Vapour pressure deficit"
nc_obj.variables['clim_Cair'].longname = "Atmospheric CO2 concentration"
nc_obj.variables['clim_Wind'].longname = "Wind speed"
nc_obj.variables['clim_Rainfall'].longname = "Precipitation"
nc_obj.variables['clim_LAI'].longname = "MODIS 8-day composite leaf area index"
return None
def ensure_dir(path):
    # Create folders for storage if they don't exist
try:
if not os.path.exists(path):
os.makedirs(path)
except OSError as exc: # Python >2.5
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else: raise
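# Note: on Python 3.2+ the same race-safe behaviour is available in one call;
# a minimal alternative sketch (not used by this script):
#
#   def ensure_dir(path):
#       os.makedirs(path, exist_ok=True)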
def main():
#----------------------------------------------------------------------
# STAGE DATA
#----------------------------------------------------------------------
# create the directories if they don't exist
    for fpath in INPUT_FOLDERS:
        ensure_dir(fpath)
# import datasets to create experiment input files
climo_raw = import_one_year(clim_met_file)
tower_raw = import_tower_data(ec_tower_file)
# messy temporary solution to LAI artifacts
smooth_lai = np.minimum(climo_raw['Lai_1km_new_smooth'], 2.04)
    climo_raw['lai_sm14'] = smooth_lai.rolling(
        window=2*48, min_periods=1).mean()  # modern pandas rolling API
# expand climatology out to 14 years (2001 to 2015)
climo_14yr = expand_climatology(climo_raw)
# # Check that the tree:grass partitioning is working correctly
# temp = tower_raw["Lai_1km_new_smooth"]
# tg_temp = treegrass_frac(temp.resample('D', how='mean'), 30)
# plt.plot(tg_temp['total'], '-', label='total')
# plt.plot(tg_temp['tree'], '-', label='tree')
# plt.plot(tg_temp['grass'], '-', label='grass')
# plt.legend(loc='upper center', ncol=3)
# plt.show()
# return 1
#----------------------------------------------------------------------
# PHENOLOGY FILE CREATION
#----------------------------------------------------------------------
print("Creating phenology files\n")
# universal phenology file
spa_phen_1 = create_phen_file(tower_raw["Lai_1km_new_smooth"])
# experiment 2
spa_phen_2 = create_phen_file(climo_14yr["lai_sm14"])
    # write phenology files to disk
spa_phen_1.to_csv(INPUT_PHEN_1, sep=",", index=False, line_terminator=LT)
spa_phen_2.to_csv(INPUT_PHEN_2, sep=",", index=False, line_terminator=LT)
#----------------------------------------------------------------------
# METEOROLOGY FILE CREATION
#----------------------------------------------------------------------
print("Creating meteorology files\n")
# experiment 1
spa_met_1 = clean_tower_data(tower_raw)
# experiment 2
spa_met_2 = clean_tower_data(climo_14yr)
# swap on these variables
var_on = ["CO2", "Ta_Con", "Precip_Con", "Fsd_Con", "VPD_Con"]
    # experiments 3 to 7
spa_met_x1 = [mix_climatology(spa_met_2, spa_met_1, vo) for vo in var_on]
    # experiments 9 to 13
spa_met_x2 = [mix_climatology(spa_met_1, spa_met_2, vo) for vo in var_on]
# write experiment simulations to file
spa_met_1.to_csv(INPUT_FILES1[0], sep=",", index=False, line_terminator=LT)
spa_met_2.to_csv(INPUT_FILES1[1], sep=",", index=False, line_terminator=LT)
for (ix, (spa_df1, spa_df2)) in enumerate(zip(spa_met_x1, spa_met_x2)):
spa_df1.to_csv(INPUT_FILES1[2 + ix], sep=",", index=False, line_terminator=LT)
spa_df2.to_csv(INPUT_FILES2[ix], sep=",", index=False, line_terminator=LT)
    # NOTE: this early return skips the plotting and NetCDF stages below;
    # remove it to run them.
    return 1
#----------------------------------------------------------------------
# CREATE SYMBOLIC LINKS
#----------------------------------------------------------------------
#----------------------------------------------------------------------
# PLOT OUTPUTS **CHECKING
#----------------------------------------------------------------------
# experiments 1 and 2
plot_inputs(spa_met_1, spa_phen_1, None, 1)
plot_inputs(spa_met_2, spa_phen_2, None, 2)
# experiments 8 and 14
plot_inputs(spa_met_2, spa_phen_1, "lai", 8)
plot_inputs(spa_met_1, spa_phen_2, "lai", 14)
    # experiments 3 to 7 & 9 to 13
for (ic, (spa_df1, spa_df2)) in enumerate(zip(spa_met_x1, spa_met_x2)):
plot_inputs(spa_df1, spa_phen_2, var_on[ic], ic + 3)
plot_inputs(spa_df2, spa_phen_1, var_on[ic], ic + 9)
#----------------------------------------------------------------------
# NETCDF CREATION
#----------------------------------------------------------------------
# up-sample LAI to the same timestep as meteorology for ncdf export
    LAI_phen1_30min = spa_phen_1["lai"].resample('30min').ffill()  # resample(fill_method=...) was removed from pandas
    LAI_phen2_30min = spa_phen_2["lai"].resample('30min').ffill()
# Open a NCDF4 file for SPA simulation outputs
nc_fout = NCSAVEPATH + "spa_hws_inputs.nc"
nc_file = nc.Dataset(nc_fout, 'w', format='NETCDF4')
assign_variables(nc_file)
assign_units(nc_file, "2001-01-01 00:00:30")
assign_longNames(nc_file)
# Assign values to variables
tseries = pd.timedelta_range(0, periods=len(spa_met_1), freq="1800s") \
.astype('timedelta64[s]')
# Get time from netcdf driver file
nc_file.variables['time'][:] = np.array(tseries)
# Local Meteorologies
nc_file.variables['SWdown'][:] = np.array(spa_met_1['Fsd_Con'])
nc_file.variables['VPD'][:] = np.array(spa_met_1['VPD_Con'])
nc_file.variables['Tair'][:] = np.array(spa_met_1['Ta_Con'])
nc_file.variables['Cair'][:] = np.array(spa_met_1['CO2'])
nc_file.variables['Wind'][:] = np.array(spa_met_1['Ws_CSAT_Con'])
nc_file.variables['Rainfall'][:] = np.array(spa_met_1['Precip_Con'])
nc_file.variables['LAI'][:] = np.array(LAI_phen1_30min)
# Climatologies
nc_file.variables['clim_SWdown'][:] = np.array(spa_met_2['Fsd_Con'])
nc_file.variables['clim_VPD'][:] = np.array(spa_met_2['VPD_Con'])
nc_file.variables['clim_Tair'][:] = np.array(spa_met_2['Ta_Con'])
nc_file.variables['clim_Cair'][:] = np.array(spa_met_2['CO2'])
nc_file.variables['clim_Wind'][:] = np.array(spa_met_2['Ws_CSAT_Con'])
nc_file.variables['clim_Rainfall'][:] = np.array(spa_met_2['Precip_Con'])
nc_file.variables['clim_LAI'][:] = np.array(LAI_phen2_30min)
nc_file.close()
return 1
if __name__ == "__main__":
clim_met_file = os.path.expanduser("~/Dropbox/30 minute met driver climatology v12a HowardSprings.csv")
ec_tower_file = os.path.expanduser("~/Dropbox/30 minute met driver 2001-2015 v12a HowardSprings.csv")
INPUT_PATH = os.path.expanduser("~/Savanna/Models/SPA1/outputs/site_co2")
    INPUT_FOLDERS = ["{0}/HS_Exp{1}/inputs".format(INPUT_PATH, i)
                     for i in list(range(1, 8)) + list(range(9, 14))]  # range objects cannot be concatenated in Python 3
INPUT_FILES1 = ["{0}/hs_met_exp_{1}.csv".format(path, i+1) \
for (i, path) in enumerate(INPUT_FOLDERS[:7])]
INPUT_FILES2 = ["{0}/hs_met_exp_{1}.csv".format(path, i+9) \
for (i, path) in enumerate(INPUT_FOLDERS[7:])]
INPUT_PHEN_1 = "{0}/common_inputs/hs_phen_exp_1.csv".format(INPUT_PATH)
INPUT_PHEN_2 = "{0}/common_inputs/hs_phen_exp_all.csv".format(INPUT_PATH)
NCSAVEPATH = os.path.expanduser("~/Savanna/Data/HowardSprings_IAV/ncdf/")
# line terminator for CSV
LT = '\r\n'
main()
[source: src/setup_model/spa_input_creator.py (Python), repo rhyswhitley/savanna_iav, license CC0-1.0]
using CwJWeaveTpl
fnames = [
"limits",
"limits_extensions",
#
"continuity",
"intermediate_value_theorem"
]
process_file(nm; cache=:off) = CwJWeaveTpl.mmd(nm * ".jmd", cache=cache)
function process_files(;cache=:user)
for f in fnames
@show f
process_file(f, cache=cache)
end
end
"""
## TODO limits
"""
[source: CwJ/limits/process.jl (Julia), repo BryceStevenWilley/CalculusWithJulia.jl, license MIT]
from brightics.common.repr import BrtcReprBuilder
from brightics.common.repr import strip_margin
from brightics.common.repr import dict2MD
from brightics.common.repr import pandasDF2MD
from brightics.function.utils import _model_dict
from brightics.common.groupby import _function_by_group
from brightics.common.utils import check_required_parameters
from collections import Counter
import pandas as pd
import numpy as np
def add_row_number(table, group_by=None, **params):
check_required_parameters(_add_row_number, params, ['table'])
if group_by is not None:
return _function_by_group(_add_row_number, table, group_by=group_by, **params)
else:
return _add_row_number(table, **params)
def _add_row_number(table, new_col='add_row_number'):
n = len(table)
out_table = table.copy()
out_table[new_col] = range(n)
columns = table.columns.insert(0, new_col)
out_table = out_table.reindex(columns=columns)
return {'out_table': out_table}
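# Example usage (hypothetical data, illustrating the group_by pass-through):
#
#   df = pd.DataFrame({'g': ['a', 'a', 'b'], 'v': [10, 20, 30]})
#   add_row_number(df)['out_table']                  # one global 0..2 numbering
#   add_row_number(df, group_by=['g'])['out_table']  # numbering computed within each group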
def discretize_quantile(table, group_by=None, **params):
check_required_parameters(_discretize_quantile, params, ['table'])
if group_by is not None:
return _function_by_group(_discretize_quantile, table, group_by=group_by, **params)
else:
return _discretize_quantile(table, **params)
def _discretize_quantile(table, input_col, num_of_buckets=2, out_col_name='bucket_number'):
out_table = table.copy()
out_table[out_col_name], buckets = pd.qcut(table[input_col], num_of_buckets, labels=False, retbins=True, precision=10, duplicates='drop')
params = {
'input_col': input_col,
'num_of_buckets': num_of_buckets,
'out_col_name': out_col_name
}
cnt = Counter(out_table[out_col_name].values)
# index_list, bucket_list
index_list = []
bucket_list = []
cnt_list = []
for i in range(len(buckets) - 1):
left = '[' if i == 0 else '('
index_list.append(i)
cnt_list.append(cnt[i])
bucket_list.append("{left}{lower}, {upper}]".format(left=left, lower=buckets[i], upper=buckets[i + 1])) # 'buckets' is tuple type data.
    # Build result table (pd.DataFrame.from_items was removed in pandas 1.0)
    result = pd.DataFrame({
        'bucket number': index_list,
        'buckets': bucket_list,
        'count': cnt_list
    })
# Build model
rb = BrtcReprBuilder()
rb.addMD(strip_margin("""
| ## Quantile-based Discretization Result
| ### Result
| {result}
|
| ### Parameters
| {params}
""".format(result=pandasDF2MD(result), params=dict2MD(params))))
model = _model_dict('discretize_quantile')
model['result'] = result
model['params'] = params
model['_repr_brtc_'] = rb.get()
return {'out_table': out_table, 'model': model}
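# Example (hypothetical data): quantile-based bucketing into two buckets.
#
#   t = pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0]})
#   _discretize_quantile(t, input_col='x', num_of_buckets=2)['out_table']
#   # 'bucket_number' is 0 for the lower half of x and 1 for the upper half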
def binarizer(table, column, threshold=0, threshold_type='greater', out_col_name=None):
    # Work on a copy so the input table is not mutated in place
    out_table = table.copy()
    if out_col_name is None:
        out_col_name = 'binarized_' + str(column)
    if threshold_type == 'greater':
        out_table[out_col_name] = table[column].apply(lambda x: 1 if x > threshold else 0)
    else:
        out_table[out_col_name] = table[column].apply(lambda x: 1 if x >= threshold else 0)
    return {'table': out_table}
def capitalize_variable(table, input_cols, replace, out_col_suffix=None):
if out_col_suffix is None:
out_col_suffix = '_' + replace
out_table = table.copy()
for input_col in input_cols:
out_col_name = input_col + out_col_suffix
if replace == 'upper':
out_table[out_col_name] = table[input_col].str.upper()
else:
out_table[out_col_name] = table[input_col].str.lower()
return {'out_table': out_table}
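# Example usage of the two helpers above (hypothetical column names):
#
#   t = pd.DataFrame({'score': [0.2, 0.7], 'name': ['ab', 'cd']})
#   binarizer(t, 'score', threshold=0.5)['table']           # adds 'binarized_score' = [0, 1]
#   capitalize_variable(t, ['name'], 'upper')['out_table']  # adds 'name_upper' = ['AB', 'CD']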
[source: function/python/brightics/function/extraction/extraction.py (Python), repo seungrojoo/studio, license Apache-2.0]
/**
* @file emst_test.cpp
*
* Test file for EMST methods.
*/
#include <mlpack/core.hpp>
#include <mlpack/methods/emst/dtb.hpp>
#include <boost/test/unit_test.hpp>
#include "old_boost_test_definitions.hpp"
#include <mlpack/core/tree/cover_tree.hpp>
using namespace mlpack;
using namespace mlpack::emst;
using namespace mlpack::tree;
using namespace mlpack::bound;
using namespace mlpack::metric;
BOOST_AUTO_TEST_SUITE(EMSTTest);
/**
* Simple emst test with small, synthetic dataset. This is an
* exhaustive test, which checks that each method for performing the calculation
* (dual-tree, naive) produces the correct results. The dataset is in one
* dimension for simplicity -- the correct functionality of distance functions
* is not tested here.
*/
BOOST_AUTO_TEST_CASE(ExhaustiveSyntheticTest)
{
// Set up our data.
arma::mat data(1, 11);
data[0] = 0.05; // Row addressing is unnecessary (they are all 0).
data[1] = 0.37;
data[2] = 0.15;
data[3] = 1.25;
data[4] = 5.05;
data[5] = -0.22;
data[6] = -2.00;
data[7] = -1.30;
data[8] = 0.45;
data[9] = 0.91;
data[10] = 1.00;
arma::mat results;
// Build the tree by hand to get a leaf size of 1.
typedef KDTree<EuclideanDistance, DTBStat, arma::mat> TreeType;
std::vector<size_t> oldFromNew;
std::vector<size_t> newFromOld;
TreeType tree(data, oldFromNew, newFromOld, 1);
// Create the DTB object and run the calculation.
DualTreeBoruvka<> dtb(&tree);
dtb.ComputeMST(results);
// Now the exhaustive check for correctness.
if (newFromOld[1] < newFromOld[8])
{
BOOST_REQUIRE_EQUAL(results(0, 0), newFromOld[1]);
BOOST_REQUIRE_EQUAL(results(1, 0), newFromOld[8]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 0), newFromOld[1]);
BOOST_REQUIRE_EQUAL(results(0, 0), newFromOld[8]);
}
BOOST_REQUIRE_CLOSE(results(2, 0), 0.08, 1e-5);
if (newFromOld[9] < newFromOld[10])
{
BOOST_REQUIRE_EQUAL(results(0, 1), newFromOld[9]);
BOOST_REQUIRE_EQUAL(results(1, 1), newFromOld[10]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 1), newFromOld[9]);
BOOST_REQUIRE_EQUAL(results(0, 1), newFromOld[10]);
}
BOOST_REQUIRE_CLOSE(results(2, 1), 0.09, 1e-5);
if (newFromOld[0] < newFromOld[2])
{
BOOST_REQUIRE_EQUAL(results(0, 2), newFromOld[0]);
BOOST_REQUIRE_EQUAL(results(1, 2), newFromOld[2]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 2), newFromOld[0]);
BOOST_REQUIRE_EQUAL(results(0, 2), newFromOld[2]);
}
BOOST_REQUIRE_CLOSE(results(2, 2), 0.1, 1e-5);
if (newFromOld[1] < newFromOld[2])
{
BOOST_REQUIRE_EQUAL(results(0, 3), newFromOld[1]);
BOOST_REQUIRE_EQUAL(results(1, 3), newFromOld[2]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 3), newFromOld[1]);
BOOST_REQUIRE_EQUAL(results(0, 3), newFromOld[2]);
}
BOOST_REQUIRE_CLOSE(results(2, 3), 0.22, 1e-5);
if (newFromOld[3] < newFromOld[10])
{
BOOST_REQUIRE_EQUAL(results(0, 4), newFromOld[3]);
BOOST_REQUIRE_EQUAL(results(1, 4), newFromOld[10]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 4), newFromOld[3]);
BOOST_REQUIRE_EQUAL(results(0, 4), newFromOld[10]);
}
BOOST_REQUIRE_CLOSE(results(2, 4), 0.25, 1e-5);
if (newFromOld[0] < newFromOld[5])
{
BOOST_REQUIRE_EQUAL(results(0, 5), newFromOld[0]);
BOOST_REQUIRE_EQUAL(results(1, 5), newFromOld[5]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 5), newFromOld[0]);
BOOST_REQUIRE_EQUAL(results(0, 5), newFromOld[5]);
}
BOOST_REQUIRE_CLOSE(results(2, 5), 0.27, 1e-5);
if (newFromOld[8] < newFromOld[9])
{
BOOST_REQUIRE_EQUAL(results(0, 6), newFromOld[8]);
BOOST_REQUIRE_EQUAL(results(1, 6), newFromOld[9]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 6), newFromOld[8]);
BOOST_REQUIRE_EQUAL(results(0, 6), newFromOld[9]);
}
BOOST_REQUIRE_CLOSE(results(2, 6), 0.46, 1e-5);
if (newFromOld[6] < newFromOld[7])
{
BOOST_REQUIRE_EQUAL(results(0, 7), newFromOld[6]);
BOOST_REQUIRE_EQUAL(results(1, 7), newFromOld[7]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 7), newFromOld[6]);
BOOST_REQUIRE_EQUAL(results(0, 7), newFromOld[7]);
}
BOOST_REQUIRE_CLOSE(results(2, 7), 0.7, 1e-5);
if (newFromOld[5] < newFromOld[7])
{
BOOST_REQUIRE_EQUAL(results(0, 8), newFromOld[5]);
BOOST_REQUIRE_EQUAL(results(1, 8), newFromOld[7]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 8), newFromOld[5]);
BOOST_REQUIRE_EQUAL(results(0, 8), newFromOld[7]);
}
BOOST_REQUIRE_CLOSE(results(2, 8), 1.08, 1e-5);
if (newFromOld[3] < newFromOld[4])
{
BOOST_REQUIRE_EQUAL(results(0, 9), newFromOld[3]);
BOOST_REQUIRE_EQUAL(results(1, 9), newFromOld[4]);
}
else
{
BOOST_REQUIRE_EQUAL(results(1, 9), newFromOld[3]);
BOOST_REQUIRE_EQUAL(results(0, 9), newFromOld[4]);
}
BOOST_REQUIRE_CLOSE(results(2, 9), 3.8, 1e-5);
}
/**
* Test the dual tree method against the naive computation.
*
* Errors are produced if the results are not identical.
*/
BOOST_AUTO_TEST_CASE(DualTreeVsNaive)
{
arma::mat inputData;
// Hard-coded filename: bad!
// Code duplication: also bad!
if (!data::Load("test_data_3_1000.csv", inputData))
BOOST_FAIL("Cannot load test dataset test_data_3_1000.csv!");
// Set up matrices to work with.
arma::mat dualData = inputData;
arma::mat naiveData = inputData;
// Reset parameters from last test.
DualTreeBoruvka<> dtb(dualData);
arma::mat dualResults;
dtb.ComputeMST(dualResults);
// Set naive mode.
DualTreeBoruvka<> dtbNaive(naiveData, true);
arma::mat naiveResults;
dtbNaive.ComputeMST(naiveResults);
BOOST_REQUIRE_EQUAL(dualResults.n_cols, naiveResults.n_cols);
BOOST_REQUIRE_EQUAL(dualResults.n_rows, naiveResults.n_rows);
for (size_t i = 0; i < dualResults.n_cols; i++)
{
BOOST_REQUIRE_EQUAL(dualResults(0, i), naiveResults(0, i));
BOOST_REQUIRE_EQUAL(dualResults(1, i), naiveResults(1, i));
BOOST_REQUIRE_CLOSE(dualResults(2, i), naiveResults(2, i), 1e-5);
}
}
/**
* Make sure the cover tree works fine.
*/
BOOST_AUTO_TEST_CASE(CoverTreeTest)
{
arma::mat inputData;
if (!data::Load("test_data_3_1000.csv", inputData))
BOOST_FAIL("Cannot load test dataset test_data_3_1000.csv!");
DualTreeBoruvka<> bst(inputData);
DualTreeBoruvka<EuclideanDistance, arma::mat, StandardCoverTree>
ct(inputData);
arma::mat bstResults;
arma::mat coverResults;
// Run the algorithms.
bst.ComputeMST(bstResults);
ct.ComputeMST(coverResults);
for (size_t i = 0; i < bstResults.n_cols; i++)
{
BOOST_REQUIRE_EQUAL(bstResults(0, i), coverResults(0, i));
BOOST_REQUIRE_EQUAL(bstResults(1, i), coverResults(1, i));
BOOST_REQUIRE_CLOSE(bstResults(2, i), coverResults(2, i), 1e-5);
}
}
/**
* Test BinarySpaceTree with Ball Bound.
*/
BOOST_AUTO_TEST_CASE(BallTreeTest)
{
arma::mat inputData;
if (!data::Load("test_data_3_1000.csv", inputData))
BOOST_FAIL("Cannot load test dataset test_data_3_1000.csv!");
// naive mode.
DualTreeBoruvka<> bst(inputData, true);
// Ball tree.
DualTreeBoruvka<EuclideanDistance, arma::mat, BallTree> ballt(inputData);
arma::mat bstResults;
arma::mat ballResults;
// Run the algorithms.
bst.ComputeMST(bstResults);
ballt.ComputeMST(ballResults);
for (size_t i = 0; i < bstResults.n_cols; i++)
{
BOOST_REQUIRE_EQUAL(bstResults(0, i), ballResults(0, i));
BOOST_REQUIRE_EQUAL(bstResults(1, i), ballResults(1, i));
BOOST_REQUIRE_CLOSE(bstResults(2, i), ballResults(2, i), 1e-5);
}
}
BOOST_AUTO_TEST_SUITE_END();
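// To run only this suite from the compiled Boost.Test binary (--run_test is
// the standard Boost.Test option; the binary name below is an assumption):
//
//   ./mlpack_test --run_test=EMSTTest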
[source: src/mlpack/tests/emst_test.cpp (C++), repo vj-ug/Contribution-to-mlpack, license BSD-3-Clause]
Real Function ppk1(srt)
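! Returns a cross section interpolated from the tabulated values in xarray
! against beam lab momentum (earray, presumably GeV/c), using linear
! interpolation in log-log space; below the srt threshold it returns 0 and
! above the table it is clamped to the last tabulated value.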
Real xarray(7), earray(7)
Save
Data xarray/0.013, 0.025, 0.016, 0.012, 0.017, 0.029, 0.025/
Data earray/3.67, 4.95, 5.52, 5.97, 6.05, 6.92, 7.87/
pmass = 0.9383
ppk1 = 0.
If (srt<=2.63) Return
If (srt>4.08) Then
ppk1 = 0.025
Return
End If
plab = sqrt(((srt**2-2.*pmass**2)/(2.*pmass))**2-pmass**2)
If (plab<earray(1)) Then
ppk1 = xarray(1)
Return
End If
Do ie = 1, 7
If (earray(ie)==plab) Then
ppk1 = xarray(ie)
Goto 10
Else If (earray(ie)>plab) Then
ymin = alog(xarray(ie-1))
ymax = alog(xarray(ie))
xmin = alog(earray(ie-1))
xmax = alog(earray(ie))
ppk1 = exp(ymin+(alog(plab)-xmin)*(ymax-ymin)/(xmax-xmin))
Goto 10
End If
End Do
10 Continue
Return
End Function ppk1
[source: src/ppk1.f90 (Fortran), repo xiaohaijin/AMPT, license MIT]
using Indicators
using PyPlot
using Dates
using Random
# Generate some toy sample data
Random.seed!(1)
n = 250
Open = 100.0 .+ cumsum(randn(n))
High = Open .+ rand(n)
Low = Open .- rand(n)
Close = 100.0 .+ cumsum(randn(n))
for i = 1:n
if Close[i] > High[i]
Close[i] = High[i]
elseif Close[i] < Low[i]
Close[i] = Low[i]
end
end
OHLC = [Open High Low Close]
HLC = [High Low Close]
HL = [High Low]
t = collect(today():Day(1):today()+Day(n-1))
# Overlays
subplot(411)
plot(t, Close, lw=2, c="k", label="Random Walk")
grid(ls="-", c=[0.8,0.8,0.8])
plot(t, sma(Close,n=40), c=[1,0.5,0], label="SMA (40)")
plot(t, ema(Close,n=10), c=[0,1,1], label="EMA (10)")
plot(t, wma(Close,n=20), c=[1,0,1], label="WMA (20)")
plot(t, psar(HL), "bo", label="Parabolic SAR")
legend(loc="best", frameon=false)
# MACD
subplot(412)
plot(t, macd(Close)[:,1], label="MACD", c=[1,0.5,1])
plot(t, macd(Close)[:,2], label="Signal", c=[0.5,0.25,0.5])
bar(t, macd(Close)[:,3], align="center", label="Histogram", color=[0,0.5,0.5], alpha=0.25)
plot([t[1],t[end]], [0,0], ls="--", c=[0.5,0.5,0.5])
grid(ls="-", c=[0.8,0.8,0.8])
legend(loc="best", frameon=false)
# RSI
subplot(413)
plot(t, rsi(Close), c=[0.5,0.5,0], label="RSI")
grid(ls="-", c=[0.8,0.8,0.8])
plot([t[1],t[end]], [30,30], c="g")
plot([t[1],t[end]], [70,70], c="r")
legend(loc="best", frameon=false)
# ADX
subplot(414)
plot(t, adx(HLC)[:,1], "g-", label="DI+")
plot(t, adx(HLC)[:,2], "r-", label="DI-")
plot(t, adx(HLC)[:,3], c=[0,0,1], lw=2, label="ADX")
grid(ls="-", c=[0.8,0.8,0.8])
legend(loc="best", frameon=false)
tight_layout()
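# Optionally write the figure to disk (PyPlot's savefig; the filename here is
# just an example):
# savefig("example1.png", dpi=150)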
[source: examples/example1.jl (Julia), repo darthur11/Indicators.jl, license MIT]
# Mirco Ravanelli
# Mila, June 2019
# This script runs a simple emotion recognition experiment on the top of PASE features.
# The results are reported in terms of Frame Error Rate/ Sentence Error Rate over four emotions of the IEMOCAP dataset
# This system is not designed for an extensive evaluation of PASE features, but mainly for quickly monitoring the performance of PASE during the self-supervised training phase.
# The results are printed in standard output and within a text file in $output_folder/res.res
# To run it:
# python run_IEMOCAP_fast.py ../cfg/PASE.cfg ../PASE.ckpt /home/mirco/Dataset/IEMOCAP_processed iemocap_exp.res
import warnings
warnings.filterwarnings('ignore')
import sys
from neural_networks import MLP,context_window
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import os
from pase.models.frontend import wf_builder
# from waveminionet.models.frontend import wf_builder #old models
import soundfile as sf
import json
from pase.models.WorkerScheduler.encoder import *
def get_freer_gpu(trials=10):
for j in range(trials):
os.system('nvidia-smi -q -d Memory |grep -A4 GPU|grep Free >tmp')
memory_available = [int(x.split()[2]) for x in open('tmp', 'r').readlines()]
dev_ = torch.device('cuda:'+str(np.argmax(memory_available)))
        try:
            a = torch.rand(1).cuda(dev_)  # tiny allocation to confirm the device is usable
            return dev_
        except RuntimeError:  # allocation failed on that device; keep trying
            pass
print('NO GPU AVAILABLE!!!')
exit(1)
pase_cfg=sys.argv[1] # e.g, '../cfg/PASE.cfg'
pase_model=sys.argv[2] # e.g, '../PASE.ckpt'
data_folder=sys.argv[3] # eg. '/home/mirco/Dataset/IEMOCAP_ahsn_leave-two-speaker-out'
output_file=sys.argv[4] # e.g., 'iemocap_exp.res'
# Label dict
lab={}
lab['ang']=0
lab['hap']=1
lab['neu']=2
lab['sad']=3
# File list for IEMOCAP
tr_lst_file='tr_lst.txt'
dev_lst_file='te_lst.txt'
tr_lst = [line.rstrip('\n') for line in open(tr_lst_file)]
dev_lst = [line.rstrip('\n') for line in open(dev_lst_file)]
# Training parameters
N_epochs=15
seed=1234
batch_size=128
halving_factor=0.8
lr=0.0001
left=0
right=0
# Neural network parameters
options={}
options['dnn_lay']='256,4'
options['dnn_drop']='0.15,0.0'
options['dnn_use_batchnorm']='False,False'
options['dnn_use_laynorm']='True,False'
options['dnn_use_laynorm_inp']='True'
options['dnn_use_batchnorm_inp']='False'
options['dnn_act']='relu,softmax'
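# i.e. a single 256-unit hidden layer (dropout 0.15, layer norm) followed by a
# 4-way softmax over the emotion classes defined in `lab` above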
device=0 #get_freer_gpu()
dname=os.path.dirname(output_file)
if dname == '':
dname = '.'
if not os.path.exists(dname):
os.makedirs(dname)
# output file creation
text_file=open(output_file, "w")
# Loading pase
pase=wf_builder(pase_cfg)
pase.load_pretrained(pase_model, load_last=True, verbose=False)
pase.to(device)
pase.eval()
# reading the training signals
print("Waveform reading...")
fea={}
for wav_file in tr_lst:
[signal, fs] = sf.read(data_folder+'/'+wav_file)
#signal=signal/np.max(np.abs(signal))
signal = signal.astype(np.float32)
fea_id=wav_file.split('/')[-2]+'_'+wav_file.split('/')[-1]
fea[fea_id]=torch.from_numpy(signal).float().to(device).view(1,1,-1)
# reading the dev signals
fea_dev={}
for wav_file in dev_lst:
[signal, fs] = sf.read(data_folder+'/'+wav_file)
#signal=signal/np.max(np.abs(signal))
fea_id=wav_file.split('/')[-2]+'_'+wav_file.split('/')[-1]
fea_dev[fea_id]=torch.from_numpy(signal).float().to(device).view(1,1,-1)
# Computing pase features for training
print('Computing PASE features...')
fea_pase={}
for snt_id in fea.keys():
pase.eval()
fea_pase[snt_id]=pase(fea[snt_id], device).to('cpu').detach()
fea_pase[snt_id]=fea_pase[snt_id].view(fea_pase[snt_id].shape[1],fea_pase[snt_id].shape[2]).transpose(0,1)
avg_vect=fea_pase[snt_id].mean(0).repeat(fea_pase[snt_id].shape[0],1)
avg_neu=fea_pase[snt_id].mean(1)
std_vect=fea_pase[snt_id].std(0).repeat(fea_pase[snt_id].shape[0],1)
std_neu=fea_pase[snt_id].std(1)
fea_pase[snt_id]=torch.cat([(fea_pase[snt_id]),avg_vect],1)
inp_dim=fea_pase[snt_id].shape[1]*(left+right+1)
# Computing pase features for test
fea_pase_dev={}
for snt_id in fea_dev.keys():
fea_pase_dev[snt_id]=pase(fea_dev[snt_id], device).detach()
fea_pase_dev[snt_id]=fea_pase_dev[snt_id].view(fea_pase_dev[snt_id].shape[1],fea_pase_dev[snt_id].shape[2]).transpose(0,1)
avg_vect=fea_pase_dev[snt_id].mean(0).repeat(fea_pase_dev[snt_id].shape[0],1)
avg_neu=fea_pase_dev[snt_id].mean(1)
std_vect=fea_pase_dev[snt_id].std(0).repeat(fea_pase_dev[snt_id].shape[0],1)
std_neu=fea_pase_dev[snt_id].std(1)
fea_pase_dev[snt_id]=torch.cat([(fea_pase_dev[snt_id]),avg_vect],1)
# Network initialization
nnet=MLP(options,inp_dim)
nnet.to(device)
cost=nn.NLLLoss()
# Optimizer initialization
optimizer = optim.SGD(nnet.parameters(), lr=lr, momentum=0.0)
# Seeds initialization
np.random.seed(seed)
torch.manual_seed(seed)
# Batch creation (train)
fea_lst=[]
lab_lst=[]
print("Data Preparation...")
for snt in fea_pase.keys():
fea_lst.append(fea_pase[snt])
lab_lst.append(np.zeros(fea_pase[snt].shape[0])+lab[snt.split('_')[0]])
# feature matrix (training)
fea_conc=np.concatenate(fea_lst)
fea_conc=context_window(fea_conc,left,right)
# feature normalization
mean=np.mean(fea_conc,axis=0)
std=np.std(fea_conc,axis=0)
# normalization
fea_conc=(fea_conc-mean)/std
mean=torch.from_numpy(mean).float().to(device)
std=torch.from_numpy(std).float().to(device)
# lab matrix
lab_conc=np.concatenate(lab_lst)
if right>0:
lab_conc=lab_conc[left:-right]
else:
lab_conc=lab_conc[left:]
# dataset composition
dataset=np.concatenate([fea_conc,lab_conc.reshape(-1,1)],axis=1)
# shuffling
np.random.shuffle(dataset)
#dataset=torch.from_numpy(dataset).float().to(device)
dataset=torch.from_numpy(dataset).float()
# computing N_batches
N_ex_tr=dataset.shape[0]
N_batches=int(N_ex_tr/batch_size)
err_dev_fr_history=[]
err_dev_snt_history=[]
# Training loop
print("Training...")
for ep in range(N_epochs):
err_batches=0
loss_batches=0
beg_batch=0
# training modality
nnet.train()
# random shuffling
shuffle_index=torch.randperm(dataset.shape[0])
dataset=dataset[shuffle_index]
for batch_id in range(N_batches):
# Batch selection
end_batch=beg_batch+batch_size
batch=dataset[beg_batch:end_batch]
batch=batch.to(device)
fea_batch=batch[:,:-1]
lab_batch=batch[:,-1].long()
# computing the output probabilities
out=nnet(fea_batch)
# computing the loss
loss=cost(out,lab_batch)
# computing the error
pred=torch.max(out,dim=1)[1]
err = torch.mean((pred!=lab_batch).float())
# loss/error accumulation
err_batches=err_batches+err.detach()
loss_batches=loss_batches+loss.detach()
optimizer.zero_grad()
loss.backward()
optimizer.step()
beg_batch=end_batch
# evaluation
nnet.eval()
with torch.no_grad():
err_dev_fr_mean=0
err_dev_snt_mean=0
loss_dev_mean=0
N_dev_snt=len(list(fea_pase_dev.keys()))
for dev_snt in fea_pase_dev.keys():
fea_dev_norm=(fea_pase_dev[dev_snt]-mean)/std
out_dev=nnet(fea_dev_norm)
lab_snt=torch.zeros(fea_pase_dev[dev_snt].shape[0])+lab[dev_snt.split('_')[0]]
lab_snt=lab_snt.long().to(device)
loss_dev=cost(out_dev,lab_snt)
# frame level error
pred_dev=torch.max(out_dev,dim=1)[1]
err_dev = torch.mean((pred_dev!=lab_snt).float())
# sentence error level
prob_sum=torch.sum(out_dev,dim=0)
pred_dev_snt=torch.argmax(prob_sum)
err_snt=(pred_dev_snt!=lab_snt[0]).float()
err_dev_fr_mean=err_dev_fr_mean+err_dev.detach()
loss_dev_mean=loss_dev_mean+loss_dev.detach()
err_dev_snt_mean=err_dev_snt_mean+err_snt.detach()
err_dev_fr_history.append(err_dev_fr_mean/N_dev_snt)
err_dev_snt_history.append(err_dev_snt_mean/N_dev_snt)
print("epoch=%i loss_tr=%f err_tr=%f loss_te=%f err_te_fr=%f err_te_snt=%f lr=%f" %(ep,loss_batches/N_batches,err_batches/N_batches,loss_dev_mean/N_dev_snt,err_dev_fr_mean/N_dev_snt,err_dev_snt_mean/N_dev_snt,lr))
text_file.write("epoch=%i loss_tr=%f err_tr=%f loss_te=%f err_te_fr=%f err_te_snt=%f lr=%f \n" %(ep,loss_batches/N_batches,err_batches/N_batches,loss_dev_mean/N_dev_snt,err_dev_fr_mean/N_dev_snt,err_dev_snt_mean/N_dev_snt,lr))
# learning rate annealing
if ep>0:
if (err_dev_fr_history[-2]-err_dev_fr_history[-1])/err_dev_fr_history[-2]<0.0025:
lr=lr*halving_factor
optimizer.param_groups[0]['lr']=lr
print('BEST ERR=%f' %(min(err_dev_snt_history)))
print('BEST ACC=%f' %(1-min(err_dev_snt_history)))
text_file.write('BEST_ERR=%f\n' %(min(err_dev_snt_history)))
text_file.write('BEST_ACC=%f\n' %(1-min(err_dev_snt_history)))
text_file.close()
[source: emorec/run_IEMOCAP_fast.py (Python), repo ishine/pase, license MIT]
# This is a special test to check that the integration is the same from version to version
# The intended usage is to verify that changes to the code do not affect integration outputs
# The key check is that the compartment sizes and flow rates come out the same as before
#
# The workflow for this function is that the first time it is run, it will write some saved
# result files. The second time it is run, it will re-run the simulations and compare them
# to the saved files. The saved files are not included with Git. When making changes,
# developers should first delete and regenerate the cache files. Then, re-run this script
# before committing the updates to the repository.
#
# The motivation for this workflow is that changes to the integration are expected to happen
# from time to time, and this way the repository size won't irreversibly grow due
# to old results being contained in the repo
import atomica as at
import os
import pytest
import numpy as np
# List available models based on which framework files exist
models = list()
for f in os.listdir(at.LIBRARY_PATH):
if f.endswith("_framework.xlsx") and not f.startswith("~$"):
models.append(f.replace("_framework.xlsx", ""))
def validate(r1, r2):
for p1, p2 in zip(r1.model.pops, r2.model.pops):
for v1 in p1.comps + p1.characs + p1.pars + p1.links: # For every variable in the old one
if isinstance(v1, at.model.Link):
try:
v2 = p2.get_variable("%s:%s:%s" % (v1.source.name, v1.dest.name, v1.parameter.name))[0]
assert np.allclose(v1.vals, v2.vals, equal_nan=True) # Default tolerances are rtol=1e-05, atol=1e-08
except at.system.NotFoundError:
print('Could not find "%s" in saved output, continuing' % (v1.name))
else:
try:
v2 = p2.get_variable(v1.name)[0]
assert np.allclose(v1.vals, v2.vals, equal_nan=True) # Default tolerances are rtol=1e-05, atol=1e-08
except at.system.NotFoundError:
print('Could not find "%s" in saved output, continuing' % (v1.name))
print("Validation passed")
@pytest.mark.parametrize("model", models)
def test_validate_model(model):
testdir = at.parent_dir()
tmpdir = testdir / "temp"
framework_file = at.LIBRARY_PATH / f"{model}_framework.xlsx"
databook_file = at.LIBRARY_PATH / f"{model}_databook.xlsx"
progbook_file = at.LIBRARY_PATH / f"{model}_progbook.xlsx"
# Only check if both parset and progset are present
# Not meant to be exhaustive, just reasonably comprehensive
if not os.path.isfile(databook_file) or not os.path.isfile(progbook_file):
        pytest.skip("requires both a databook and a progbook")
P1 = at.Project(framework=framework_file, databook=databook_file, do_run=False)
P1.load_progbook(progbook_file)
P1.update_settings(sim_end=2025) # Make sure we run until 2025
P1.run_sim(P1.parsets[0], result_name="parset", store_results=True)
P1.run_sim(P1.parsets[0], P1.progsets[0], at.ProgramInstructions(start_year=2018), result_name="progset", store_results=True)
fname = tmpdir / ("validation_" + model + ".prj")
if os.path.isfile(fname):
P2 = at.Project.load(fname)
print("Validating %s parset" % (model))
validate(P1.results["parset"], P2.results["parset"])
validate(P1.results["progset"], P2.results["progset"])
else:
print("Regenerating %s parset" % (model))
P1.save(fname)
if __name__ == "__main__":
np.seterr(all="raise", under="ignore")
test_validate_model("combined")
for m in models:
test_validate_model(m)
[source: tests/validate_integration.py (Python), repo atomicateam/atomica, license MIT]
theory ref_crdt
imports Main
"sorted_list"
fmap_functions
system_model
standard_crdts
"~~/src/HOL/Library/Finite_Map"
"~~/src/HOL/Library/Open_State_Syntax"
"~~/src/HOL/Library/Code_Target_Numeral"
begin
datatype operation =
init ref inref
| assign ref ref
| deref ref
| may_delete inref "ref list"
| reset_inref inref
| reset_ref ref
(* TODO resolve *)
datatype operation_effector =
effector_inref_inuse_enable inref
| effector_inref_rev_refs_add inref ref uid
| effector_inref_rev_refs_rem inref ref uid
| effector_ref_dest_keys_assign ref "inref option" uid "uid set"
datatype operation_result =
deref_result "antidoteKey option"
| no_result
| may_delete_result bool
record ref_state =
object_key :: "antidoteKey"
dest_keys :: "inref option mv_register_state"
record inref_state =
inref_object_key :: "antidoteKey"
rev_refs :: "(ref \<times> uid) two_phase_set_state"
inUse :: bool
record state =
state_refs :: "(ref, ref_state) fmap"
state_inrefs :: "(inref, inref_state) fmap"
definition initialState :: state where
"initialState \<equiv> \<lparr>
state_refs = fmempty,
state_inrefs = fmempty
\<rparr>"
type_synonym generator_function = "(operation \<Rightarrow> uid \<Rightarrow> state \<Rightarrow> operation_result \<times> (operation_effector list))"
type_synonym effector_function = "(operation_effector \<Rightarrow> state \<Rightarrow> state)"
type_synonym execution' = "(operation, operation_result, operation_effector, state) execution"
type_synonym eventInfo' = "(operation, operation_result, operation_effector, state) eventInfo"
definition return :: "'a \<Rightarrow> operation_effector list \<Rightarrow> ('a \<times> operation_effector list)" where
"return r l = (r,l)"
definition skip :: "operation_effector list \<Rightarrow> operation_effector list" where
"skip \<equiv> id"
definition forEach :: "'a list \<Rightarrow> ('a \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list) \<Rightarrow>operation_effector list \<Rightarrow> operation_effector list" where
"forEach list f effs \<equiv> foldl (\<lambda>es x. f x es) effs list"
text {* forEach loop with state: *}
definition forEachS :: "'a list \<Rightarrow> 'b \<Rightarrow> ('a \<Rightarrow> 'b \<Rightarrow> operation_effector list \<Rightarrow> ('b \<times> operation_effector list)) \<Rightarrow>operation_effector list \<Rightarrow> operation_effector list" where
"forEachS list s f effs \<equiv> foldl (\<lambda>(s,es) x. f x s es) (s,effs) list |> snd"
definition set_forEach :: "('a::linorder) set \<Rightarrow> ('a \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list) \<Rightarrow>operation_effector list \<Rightarrow> operation_effector list" where
"set_forEach S \<equiv> forEach (sorted_list_of_set2 S)"
definition set_forEachS :: "('a::linorder) set \<Rightarrow> 'b \<Rightarrow> ('a \<Rightarrow> 'b \<Rightarrow> operation_effector list \<Rightarrow> ('b \<times> operation_effector list)) \<Rightarrow>operation_effector list \<Rightarrow> operation_effector list" where
"set_forEachS S \<equiv> forEachS (sorted_list_of_set2 S)"
definition inref_inuse_enable :: "inref \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list" where
"inref_inuse_enable inref list = list@[effector_inref_inuse_enable inref]"
definition inref_rev_refs_add :: "inref \<Rightarrow> ref \<Rightarrow> uid \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list" where
"inref_rev_refs_add inref elem uid list = list@[effector_inref_rev_refs_add inref elem uid]"
definition inref_rev_refs_remove :: "inref \<Rightarrow> ref \<Rightarrow> uid \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list" where
"inref_rev_refs_remove inref elem uid list = list@[effector_inref_rev_refs_rem inref elem uid]"
definition ref_state :: "state \<Rightarrow> ref \<Rightarrow> ref_state" where
"ref_state state ref \<equiv> case (state_refs state).[ref] of
Some s \<Rightarrow> s
| None \<Rightarrow> \<lparr> object_key = D_antidoteKey (ref_number ref), dest_keys = {}\<rparr>"
definition ref_get_object_key :: "state \<Rightarrow> ref \<Rightarrow> antidoteKey" where
"ref_get_object_key state ref \<equiv> object_key (ref_state state ref)"
definition inref_state :: "state \<Rightarrow> inref \<Rightarrow> inref_state" where
"inref_state state inref \<equiv> case (state_inrefs state).[inref] of
Some s \<Rightarrow> s
| None \<Rightarrow> \<lparr> inref_object_key = D_antidoteKey (inref_number inref), rev_refs = ({},{}), inUse = False\<rparr>"
definition inref_get_object_key :: "state \<Rightarrow> inref \<Rightarrow> antidoteKey" where
"inref_get_object_key state ref \<equiv> inref_object_key (inref_state state ref)"
definition ref_dest_keys_assign :: "ref \<Rightarrow> inref option \<Rightarrow> uid \<Rightarrow> state \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list" where
"ref_dest_keys_assign ref key uid state list \<equiv> list@[effector_ref_dest_keys_assign ref key uid (snd` dest_keys (ref_state state ref))]"
definition s_update_inref :: "inref \<Rightarrow> (inref_state \<Rightarrow> inref_state) \<Rightarrow> state \<Rightarrow> state" where
"s_update_inref inref f S \<equiv> S\<lparr>state_inrefs := fmupd inref (f (inref_state S inref)) (state_inrefs S)\<rparr>"
definition s_update_ref :: "ref \<Rightarrow> (ref_state \<Rightarrow> ref_state) \<Rightarrow> state \<Rightarrow> state" where
"s_update_ref ref f S \<equiv> S\<lparr>state_refs := fmupd ref (f (ref_state S ref)) (state_refs S)\<rparr>"
definition may_delete_check :: "state \<Rightarrow> inref \<Rightarrow> ref set \<Rightarrow> bool" where
"may_delete_check state inref last_refs \<equiv>
(*let last_keypairs :: (ref \<times> uid) set = \<Union> ((\<lambda>r. dest_keys (ref_state state r)) ` last_refs) in *)
(fst ` two_phase_set_get (rev_refs (inref_state state inref))) = last_refs"
subsection {* Implementation *}
text {* We now present the implementation of the reference CRDT: *}
definition precondition_impl :: "operation \<rightharpoonup> state \<Rightarrow> bool" where
"precondition_impl opr \<equiv> case opr of
init ref inref \<Rightarrow> Some (\<lambda>state. \<not> inUse (inref_state state inref))
| assign x y \<Rightarrow> None
| deref ref \<Rightarrow> None
| may_delete inref remaining \<Rightarrow> Some (\<lambda>s. True)
| reset_inref inref \<Rightarrow> Some (\<lambda>state. may_delete_check state inref {})
| reset_ref ref \<Rightarrow> None
"
definition localPrecondition_impl :: "operation \<Rightarrow> state \<Rightarrow> bool" where
"localPrecondition_impl opr S \<equiv> case opr of
init ref inref \<Rightarrow> True
| assign x y \<Rightarrow>
mv_reg_count (dest_keys (ref_state S y)) = 1
\<and> mv_reg_get1' (dest_keys (ref_state S y)) \<noteq> None
| deref ref \<Rightarrow>
mv_reg_count (dest_keys (ref_state S ref)) = 1
\<and> mv_reg_get1' (dest_keys (ref_state S ref)) \<noteq> None
| may_delete inref remaining \<Rightarrow> True
| reset_inref inref \<Rightarrow> True
| reset_ref ref \<Rightarrow> True
"
definition effector_impl :: "effector_function" where
"effector_impl eff S \<equiv> case eff of
effector_inref_inuse_enable inref \<Rightarrow>
s_update_inref inref (\<lambda>s. s\<lparr> inUse := True\<rparr>) S
| effector_inref_rev_refs_add inref antidoteKey uid \<Rightarrow>
s_update_inref inref (\<lambda>s. s\<lparr>
rev_refs := two_phase_set_add (rev_refs s) (antidoteKey, uid ) \<rparr>) S
| effector_inref_rev_refs_rem inref antidoteKey uid \<Rightarrow>
s_update_inref inref (\<lambda>s. s\<lparr>
rev_refs := two_phase_set_remove (rev_refs s) (antidoteKey, uid ) \<rparr>) S
| effector_ref_dest_keys_assign ref antidoteKey uid oldUids \<Rightarrow>
s_update_ref ref (\<lambda>s. s\<lparr>dest_keys := insert (antidoteKey,uid) (Set.filter (\<lambda>(x,u). u\<noteq>uid \<and> u\<notin>oldUids) (dest_keys s)) \<rparr>) S
"
(** broken version
definition ref_reset_targets :: "ref \<Rightarrow> inref option \<Rightarrow> uid \<Rightarrow> state \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list" where
"ref_reset_targets ref ignoredInref uid state \<equiv> exec {
let (outkeys :: (inref option\<times>uid) set) = (dest_keys (ref_state state ref));
set_forEachS outkeys True (\<lambda>(target,uid) first_time.
case target of
None \<Rightarrow> return first_time
| Some target' => exec {
(if \<not> (target = ignoredInref \<and> first_time) then exec {
inref_rev_refs_remove target' ref uid;
return first_time
} else if target = ignoredInref then exec {
return False
} else exec {
return first_time
})
})
}"
**)
definition ref_reset_targets :: "ref \<Rightarrow> uid \<Rightarrow> state \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list" where
"ref_reset_targets ref uid state \<equiv> exec {
let (outkeys :: (inref option\<times>uid) set) = (dest_keys (ref_state state ref));
set_forEach outkeys (\<lambda>(target,uid).
case target of
None \<Rightarrow> skip
| Some target' => inref_rev_refs_remove target' ref uid
)
}"
definition ref_reset :: "ref \<Rightarrow> inref option \<Rightarrow> uid \<Rightarrow> state \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list" where
"ref_reset ref ignoredInref uid state \<equiv> exec {
ref_dest_keys_assign ref None uid state;
ref_reset_targets ref uid state
}"
definition outref_update :: "ref \<Rightarrow> inref option \<Rightarrow> state \<Rightarrow> uid \<Rightarrow> operation_effector list \<Rightarrow> operation_effector list" where
"outref_update ref inref state uid \<equiv> exec {
(* first insert into new target: *)
(case inref of
None \<Rightarrow> skip
| Some inref \<Rightarrow> inref_rev_refs_add inref ref uid);
(* then: assign source *)
ref_dest_keys_assign ref inref uid state;
(* then reset targets: remove from all old targets *)
ref_reset_targets ref uid state
}"
definition generator_impl :: generator_function where
"generator_impl opr uid state \<equiv> [] |> (case opr of
init ref inref \<Rightarrow> exec {
inref_inuse_enable inref;
outref_update ref (Some inref) state uid;
return no_result
}
| assign outTo outVal \<Rightarrow> exec {
let new_key = mv_reg_get1' (dest_keys (ref_state state outVal));
outref_update outTo new_key state uid;
return no_result
}
| deref ref \<Rightarrow> exec {
let inref = mv_reg_get1' (dest_keys (ref_state state ref));
let key = (map_option (inref_get_object_key state) inref);
return (deref_result key)
}
| may_delete inref last_refs \<Rightarrow> exec {
return (may_delete_result (may_delete_check state inref (set last_refs)))
}
| reset_inref inref \<Rightarrow> exec {
return no_result
}
| reset_ref ref \<Rightarrow> exec {
outref_update ref None state uid;
return no_result
}
)
"
definition
"wellFormed_impl execution \<equiv> wellFormed execution initialState generator_impl effector_impl localPrecondition_impl precondition_impl"
(* Unused stub: both clauses currently return None. *)
fun find_smaller :: "'a rel \<Rightarrow> 'a \<Rightarrow> 'a list \<Rightarrow> 'a option" where
"find_smaller R x [] = None"
| "find_smaller R x (y#ys) = None"
lemma find_length[simp]: "find P xs = Some x \<Longrightarrow> length (remove1 x xs) < length xs"
by (induct xs, auto split: if_splits)
lemma find_length2[simp]: "find P xs = Some x \<Longrightarrow> Suc (length (remove1 x xs)) = length xs "
by (induct xs, auto split: if_splits)
definition findMinimal :: "'a rel \<Rightarrow> 'a list \<Rightarrow> 'a" where
"findMinimal R xs \<equiv> case find (\<lambda>y. \<forall>x\<in>set xs. x=y \<or> (x,y)\<notin>R) xs of None \<Rightarrow> hd xs | Some x \<Rightarrow> x"
lemma findMinimalIn: "xs\<noteq>[] \<Longrightarrow> findMinimal R xs \<in> set xs"
apply (auto simp add: findMinimal_def split: option.splits)
by (metis (no_types, lifting) find_Some_iff in_set_conv_nth)
lemma findMinimal_termination[simp]: "xs\<noteq>[] \<Longrightarrow> length (remove1 (findMinimal R xs) xs) < length xs"
by (simp add: findMinimalIn length_remove1)
lemma findMinimal_termination2[simp]: "findMinimal R (v # va) \<noteq> v \<Longrightarrow> length (remove1 (findMinimal R (v # va)) va) < length va"
by (metis One_nat_def Suc_pred findMinimalIn length_pos_if_in_set length_remove1 lessI list.discI set_ConsD)
fun topSort :: "'a rel \<Rightarrow> 'a list \<Rightarrow> 'a list" where
"topSort R [] = []"
| "topSort R xs = (let m = findMinimal R xs in m#topSort R (remove1 m xs))"
definition "numberEffectors E e \<equiv> e |> fmlookup (events E) |> the |> event_effectors |> length"
definition execution_addStep :: "int \<Rightarrow> operation \<Rightarrow> (event,nat) fmap \<Rightarrow> execution' \<Rightarrow> execution'" where
"execution_addStep eId opr preEventsN E \<equiv>
let
e = D_event eId;
preEvents = fmdom' preEventsN;
(* only existing events *)
deps1 :: event set = Set.filter (\<lambda>e. e\<in>fmdom' (events E)) preEvents;
(* include parallel events which need to be stable *)
deps2 = Set.filter (\<lambda>e. e\<in>deps1 \<or> (precondition_impl (event_operation ((events E)![e])) \<noteq> None \<and> (\<exists>e'\<in>deps1. (e',e)\<in>happensBefore E ) )) (fmdom' (events E));
(* include causal dependencies *)
deps3 = downwards_closure deps2 E; (* TODO could be more precise; at the level of effectors instead of events*)
(* include parallel events, if stable precondition check required *)
deps = (if precondition_impl opr = None then deps3 else parallel_closure deps3 (fmdom' (events E)) (happensBefore E));
snapshot = sorted_list_of_set2 deps
|> map (\<lambda>e. (e, case preEventsN.[e] of
None \<Rightarrow> numberEffectors E e
| Some n \<Rightarrow> if precondition_impl opr \<noteq> None \<or> (\<exists>e'\<in>deps. e\<in>snapshot_events (event_snapshot ((events E)![e'])))
then numberEffectors E e
else n
))
|> fmap_of_list
|> Snapshot ;
precond = (precondition_impl opr orElse (\<lambda>x. True));
execOrder = topSort (happensBefore E) (sorted_list_of_set2 deps);
preState :: state = executeEffectors (List.maps (\<lambda>e. take (snapshot_num snapshot e) (event_effectors ((events E)![e]))) execOrder) initialState effector_impl
in if \<not>(localPrecondition_impl opr preState \<and> precond preState) then
E
else
let (res,eff) = generator_impl opr e preState;
postState :: state = executeEffectors eff preState effector_impl
in
\<lparr>
events = fmupd e \<lparr>
event_operation = opr,
event_result = res,
event_effectors = eff,
event_executionOrder = execOrder,
event_state_pre = preState,
event_state_post = postState,
event_snapshot = snapshot
\<rparr> (events E)
\<rparr>
"
definition "emptyExecution \<equiv> \<lparr>events = fmempty\<rparr>"
record trace_event =
t_operation :: operation
t_deps :: "(event,nat) fmap"
definition execution_run :: "trace_event list \<Rightarrow> execution'" where
"execution_run ops \<equiv> snd (fold (\<lambda>e (n,E). (n+1, execution_addStep n (t_operation e) (t_deps e) E)) ops (0, emptyExecution))"
definition forallEvents :: "execution' \<Rightarrow> (event \<Rightarrow> eventInfo' \<Rightarrow> bool) \<Rightarrow> bool" where
"forallEvents E P \<equiv> events E |> fmpred P"
definition forallStates :: "execution' \<Rightarrow> (state \<Rightarrow> bool) \<Rightarrow> bool" where
"forallStates E P \<equiv> forallEvents E (\<lambda>e eInfo. P (event_state_pre eInfo) \<and> P (event_state_post eInfo))"
subsection {* Invariants *}
(* if ref exists, inref exists *)
definition invariant1 :: "execution' \<Rightarrow> bool" where
"invariant1 E \<equiv> forallStates E (\<lambda>s. state_refs s |> fmpred (\<lambda>r rState. \<forall>(k,u)\<in>dest_keys rState. case k of None \<Rightarrow> True | Some inref \<Rightarrow>
(case (state_inrefs s).[inref] of None \<Rightarrow> False | Some inrefState \<Rightarrow> (r,u) \<in> two_phase_set_get (rev_refs inrefState) )))"
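(* sanity check (illustrative): invariant1 holds trivially on the empty execution,
   since forallStates quantifies over an empty event map *)
value "invariant1 emptyExecution"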
(* once an inref is unreachable, it remains unreachable *)
definition invariant2 :: "execution' \<Rightarrow> bool" where
"invariant2 E \<equiv>
(\<forall>(e,eInfo)\<in>events' E.
\<forall>(inref,inrefState)\<in>fmap_entries (state_inrefs (event_state_pre eInfo)).
two_phase_set_get (rev_refs inrefState) = {}
\<and> stable e E
\<longrightarrow> (\<forall>(e', eInfo')\<in>events' E. (e,e')\<in>happensBefore E \<longrightarrow>
(case (state_inrefs (event_state_post eInfo')).[inref] of
Some inrefState' \<Rightarrow> two_phase_set_get (rev_refs inrefState') = {}
| None \<Rightarrow> False
)))"
(* if there is a reverse reference, then there is also a forward reference
   (only true if using transactional semantics) *)
definition invariant3 :: "execution' \<Rightarrow> bool" where
"invariant3 E \<equiv>
forallStates E (\<lambda>S.
\<forall>(inref,inrefState)\<in>fmap_entries (state_inrefs S).
\<forall>(r,u)\<in>two_phase_set_get (rev_refs inrefState).
case state_refs S.[r] of
None \<Rightarrow> False
| Some rstate \<Rightarrow> (Some inref,u)\<in> dest_keys rstate
)
"
(* some simple postconditions for operations*)
definition invariant4 :: "execution' \<Rightarrow> bool" where
"invariant4 E \<equiv>
\<forall>(e,eInfo)\<in>events' E.
case event_operation eInfo of
init x y \<Rightarrow>
(let S = event_state_post eInfo in
case state_refs S.[x] of
None \<Rightarrow> False
| Some rstate \<Rightarrow> Some y\<in>fst`dest_keys rstate)
| _ \<Rightarrow> True
"
(* finally: if there is a reverse reference, then there is also a forward reference
*)
definition invariant5 :: "execution' \<Rightarrow> bool" where
"invariant5 E \<equiv>
let
execution_order = sorted_list_of_set2 (fmdom' (events E));
effectors = execution_order |> List.maps (\<lambda>e'.
case (events E).[e'] of Some eInfo' \<Rightarrow> event_effectors eInfo' | None \<Rightarrow> []);
S = executeEffectors effectors initialState effector_impl
in
\<forall>(inref,inrefState)\<in>fmap_entries (state_inrefs S).
\<forall>(r,u)\<in>two_phase_set_get (rev_refs inrefState).
case state_refs S.[r] of
None \<Rightarrow> False
| Some rstate \<Rightarrow> (Some inref,u)\<in> dest_keys rstate
"
export_code wf_correct_execution_lists in Haskell
export_code execution_run in Haskell
definition "transformOp I \<equiv> let (opr, deps) = I in \<lparr>t_operation = opr, t_deps = deps\<rparr>"
definition "transformOp2 I \<equiv> let (opr, deps,xx) = I in trace_event.extend \<lparr>t_operation = opr, t_deps = deps\<rparr> xx"
instantiation fmap :: (narrowing,narrowing) narrowing begin
definition "narrowing_fmap = Quickcheck_Narrowing.apply (Quickcheck_Narrowing.cons fmap_of_list) narrowing"
instance proof qed
end
instantiation trace_event_ext :: (narrowing) narrowing begin
definition "narrowing_trace_event_ext = Quickcheck_Narrowing.apply (Quickcheck_Narrowing.cons transformOp2) narrowing"
instance proof qed
end
definition "execution_run2 ops \<equiv> execution_run (map transformOp ops)"
definition fmap_key_list where
"fmap_key_list m \<equiv> sorted_list_of_set2 (fmdom' m)"
definition fmap_to_list where
"fmap_to_list m \<equiv> map (\<lambda>k. (k,m![k])) (fmap_key_list m)"
export_code sorted_list_of_set2 execution_run2
invariant1 invariant2 invariant3 invariant4 invariant5
init assign deref may_delete reset_inref reset_ref D_event D_inref D_ref D_antidoteKey
integer_of_nat int_of_integer integer_of_nat nat_of_integer fmap_of_list D_event integer_of_int
events event_operation event_result event_effectors event_executionOrder event_state_pre event_state_post event_snapshot
fmlookup fmap_key_list fmap_to_list Snapshot state_refs state_inrefs
object_key dest_keys inref_object_key rev_refs inUse
effector_inref_inuse_enable effector_inref_rev_refs_add effector_inref_rev_refs_rem effector_ref_dest_keys_assign
in Haskell (*module_name Ref_crdt*) file "refcrdt-quickcheck/srcgen"
typedef operations = "UNIV :: (trace_event list) set"
by force
fun cleanRef where
"cleanRef (D_ref n) = D_ref (n mod 3)"
fun cleanInref where
"cleanInref (D_inref n) = D_inref 0"
fun cleanOperations :: "trace_event list \<Rightarrow> nat \<Rightarrow> trace_event list" where
"cleanOperations [] n = []"
| "cleanOperations (ev#evs) n = (if n > 20 then [] else
let newOp = (case t_operation ev of
init x y \<Rightarrow> init (cleanRef x) (cleanInref y)
| assign x y \<Rightarrow> assign (cleanRef x) (cleanRef y)
| deref x \<Rightarrow> deref (cleanRef x)
| may_delete x xs \<Rightarrow> may_delete (cleanInref x) []
| reset_inref x \<Rightarrow> reset_inref (cleanInref x)
| reset_ref x \<Rightarrow> reset_ref (cleanRef x)
)
in \<lparr>t_operation=newOp, t_deps = fmap_of_list (map (\<lambda>x. case x of (D_event x,i) \<Rightarrow> (D_event (x mod (int n)), i)) (fmap_to_list (t_deps ev)))\<rparr>#cleanOperations evs (Suc n))"
(*
init ref inref
| assign ref ref
| deref ref
| may_delete inref "ref list"
| reset_inref inref
| reset_ref ref
*)
(*
lemma "let E = execution_run (cleanOperations ops 0) in invariant2 E"
quickcheck[random,size=40,timeout=1000,verbose,timeout=1000]
oops
*)
abbreviation "r1 \<equiv> D_ref 1"
abbreviation "r2 \<equiv> D_ref 2"
abbreviation "r3 \<equiv> D_ref 3"
abbreviation "ir1 \<equiv> D_inref 1"
abbreviation "ir2 \<equiv> D_inref 2"
abbreviation "e i \<equiv> D_event i"
value "let ops = [
(* e0 *) (init r1 ir1, fmap_of_list []),
(* e1 *) (reset_ref r1, fmap_of_list [(e 0,1)])
];
E = execution_run (map transformOp ops)
(*ev = e 4;
eInfo = the (fmlookup (events E) ev);
e' = e 6;
eInfo' = the (fmlookup (events E) e');
inv = (
\<forall>(inref,inrefState)\<in>fmap_entries (state_inrefs (event_state_pre eInfo)).
two_phase_set_get (rev_refs inrefState) = {}
\<and> stable ev E
\<longrightarrow> ( (ev,e')\<in>happensBefore E \<longrightarrow>
(case (state_inrefs (event_state_post eInfo')).[inref] of
Some inrefState' \<Rightarrow> two_phase_set_get (rev_refs inrefState') = {}
| None \<Rightarrow> False
)))*)
in (invariant3 E, E)"
end
|
{"author": "peterzeller", "repo": "ref-crdt", "sha": "b5678901b2489d87a7676188d14addf3778e235f", "save_path": "github-repos/isabelle/peterzeller-ref-crdt", "path": "github-repos/isabelle/peterzeller-ref-crdt/ref-crdt-b5678901b2489d87a7676188d14addf3778e235f/ref_crdt.thy"}
|
# Exercise 3 - Simulating conditional distributions
### Julian Ferres - Student ID 101483
## Problem statement:
Let $X \sim N(0,1)$, truncated to the interval $[-1,1]$.
Define $m(x) = E[Y | X=x]$ as:
\begin{equation}
m(x) := \left\{
\begin{array}{ll}
\frac{(x + 2)^2}{2} & \mathrm{if\ } -1\leq x<-0.5 \\
\frac{x}{2}+0.875 & \mathrm{if\ } -0.5 \leq x \leq 0\\
-5(x-0.2)^2 +1.075 & \mathrm{if\ } 0 < x \leq 0.5 \\
x + 0.125 & \mathrm{if\ } 0.5 \leq x < 1
\end{array}
\right.
\end{equation}
Given $x$, the conditional distribution of $Y - m(x)$ is $N(0, \sigma^2(x))$,
with $\sigma(x) = 0.2 - 0.1 \cos(2x)$.
- Simulate $200$ points $(X,Y)$ and plot them in the plane. We will also need
the $200$ ordered pairs for later analysis.
- Reconstruct $m(x)$ from the $200$ points (a short sketch follows the list below). To do so:
partition $[-1,1]$ into intervals of length $h$ and, on each interval, find the polynomial $f$ of degree $M$ that minimizes the mean squared error $$ \frac{1}{n} \sum |f(X_i)-Y_i|^2$$
Use:
1. $h = 0.5$ , $M=1$
2. $h = 0.1$ , $M=1$
3. $h = 0.25$ , $M=2$
4. $h = 0.5$ , $M=2$
## Solution:
#### Import all the libraries and initialize functions
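A minimal sketch of the per-interval fit used throughout the solution (`np.polyfit` minimizes exactly this squared error; the helper name `fit_piece` is illustrative):

```python
import numpy as np

def fit_piece(xs, ys, M):
    # least-squares fit of a degree-M polynomial to the points of one interval
    coeffs = np.polyfit(xs, ys, M)
    return np.poly1d(coeffs)  # callable polynomial
```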
```python
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from math import cos, pi
from scipy.stats import truncnorm
```
```python
m1 = lambda x: (x+2)**2/2
m2 = lambda x: x/2 + 0.875
m3 = lambda x: -5*(x-0.2)**2 + 1.075
m4 = lambda x: x + 0.125
```
```python
def m(x):
if -1 <= x < -0.5:
return m1(x)
if -0.5 <= x < 0:
return m2(x)
if 0 <= x < 0.5:
return m3(x)
if 0.5 <= x < 1:
return m4(x)
m = np.vectorize(m)
```
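A quick spot check of the piecewise definition (both values follow directly from the formulas above: $m(-0.75) = 1.25^2/2 = 0.78125$ and $m(0.25) = -5 \cdot 0.05^2 + 1.075 = 1.0625$):

```python
m(np.array([-0.75, 0.25]))  # expected: array([0.78125, 1.0625])
```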
```python
# Generate 1000 values between -1 and 1 to plot a 'smooth' m(x)
x_0 = np.linspace(-1,1,1000)
y_0 = m(x_0)
```
#### Truncated normal
```python
a, b = -1, 1  # Bounds of the truncated normal
```
```python
# Generate 200 quantiles of the truncated normal
x1 = np.linspace(truncnorm.ppf(0.01, a, b),
truncnorm.ppf(0.99, a, b), 200)
```
```python
plt.plot(x1, truncnorm.pdf(x1, a, b),
         'r-', lw=3, alpha=0.75, label='Truncated normal')
plt.title("Density plot of X", fontsize='15')
plt.legend(loc='best', frameon= True)
plt.grid()
```
```python
x1 = truncnorm.rvs(a, b, size=200)
# Draw the size-200 sample from the distribution of X
```
```python
sigma = np.vectorize(lambda x : 0.2 - 0.1 * cos(2*pi*x))
normal = np.vectorize(np.random.normal)
y1 = normal( m(x1),sigma(x1))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x_0, y_0, 'g-', linewidth = 5, label = 'Function m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.plot(x1, y1, 'ro' ,markersize= 5, alpha = 0.5 ,label = 'Scatter (X,Y)')
plt.legend(loc='best', frameon= True)
plt.title("Scatter plot of (X,Y) and line plot of m(x)", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
#### The sample of $200$ pairs with distribution $(X,Y)$ is stored in the variable output
### Saving the 200 points to the file simulacion.csv
```python
d = {'X': x1 , 'Y': y1 }
output = pd.DataFrame(data=d)
```
```python
output.to_csv("simulacion.csv" , index = False)
```
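The sample can be reloaded from disk for the later analysis steps (the variable name is illustrative):

```python
sample_df = pd.read_csv("simulacion.csv")  # columns X and Y
```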
## Reconstructing the regression
#### With h=0.5 and M=1
```python
partition = [[],[],[],[]]
for i in range(200):
partition[int(2*(x1[i]+1))].append(i)
```
```python
polinomio_a_trozos = []
cuadrado_de_los_errores1 = 0
for i in range(4):
x_aux , y_aux = [x1[j] for j in partition[i]],[y1[j] for j in partition[i]]
z = np.polyfit(x_aux,y_aux,1)
polinomio_a_trozos.append(np.poly1d(z))
# accumulate the squared errors over each piece of the polynomial
for j in range(len(x_aux)):
cuadrado_de_los_errores1 += (polinomio_a_trozos[i](x_aux[j])-y_aux[j])**2
```
```python
xp=[]
xp.append(np.linspace(-1, -0.5, 200))
xp.append(np.linspace(-0.5,0, 200))
xp.append(np.linspace(0, 0.5, 200))
xp.append(np.linspace(0.5,1, 200))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x1, y1, 'ro', linewidth = 5, alpha = 0.5 ,label = 'Scatter (X,Y)')
plt.legend(loc='best', frameon= True)
for i in range(4):
    plt.plot(xp[i], polinomio_a_trozos[i](xp[i]) ,'b-', linewidth = 5 )
plt.plot(x_0, y_0, 'g-', linewidth = 5, alpha = 0.75 ,label = 'Function m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.title("Estimate of m(x) with h=0.5 and M=1", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
The estimate appears to fit the regression function well; however, the mean squared error stays high, since the model is not overfitting the sample.
#### Root mean squared error estimate
```python
(cuadrado_de_los_errores1 / 200)**0.5
```
0.21146615768136706
#### With h=0.1 and M=1
```python
partition = [[] for i in range(20)]
for i in range(200):
partition[int(10*(x1[i]+1))].append(i)
```
```python
polinomio_a_trozos = []
cuadrado_de_los_errores2 = 0
for i in range(20):
x_aux , y_aux = [x1[j] for j in partition[i]],[y1[j] for j in partition[i]]
z = np.polyfit(x_aux,y_aux,1)
polinomio_a_trozos.append(np.poly1d(z))
# accumulate the squared errors over each piece of the polynomial
for j in range(len(x_aux)):
cuadrado_de_los_errores2 += (polinomio_a_trozos[i](x_aux[j])-y_aux[j])**2
```
```python
xp=[]
for i in range(20):
xp.append(np.linspace(-1+i*(1/10), -0.9+i*(1/10), 200))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x1, y1, 'ro', linewidth = 5, alpha = 0.5 ,label = 'Scatter (X,Y)')
plt.legend(loc='best', frameon= True)
for i in range(20):
    plt.plot(xp[i], polinomio_a_trozos[i](xp[i]) ,'b-', linewidth = 5 )
plt.plot(x_0, y_0, 'g-', linewidth = 5, alpha = 0.75,label = 'Function m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.title("Estimate of m(x) with h=0.1 and M=1", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
A clear case of overfitting can be observed: the mean squared error is fairly low, but the regression is not estimated correctly.
#### Root mean squared error estimate
```python
(cuadrado_de_los_errores2 / 200)**0.5
```
0.1885956312812796
#### With h=0.25 and M=2
```python
partition = [[] for i in range(8)]
for i in range(200):
partition[int(4*(x1[i]+1))].append(i)
```
```python
polinomio_a_trozos = []
cuadrado_de_los_errores3 = 0
for i in range(8):
x_aux , y_aux = [x1[j] for j in partition[i]],[y1[j] for j in partition[i]]
z = np.polyfit(x_aux,y_aux,2)
polinomio_a_trozos.append(np.poly1d(z))
# accumulate the squared errors over each piece of the polynomial
for j in range(len(x_aux)):
cuadrado_de_los_errores3 += (polinomio_a_trozos[i](x_aux[j])-y_aux[j])**2
```
```python
xp=[]
for i in range(8):
xp.append(np.linspace(-1+i*(1/4), -1+(i+1)*(1/4), 200))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x1, y1, 'ro', linewidth = 5,alpha = 0.5, label ='Scatter (X,Y)')
plt.legend(loc='best', frameon= True)
for i in range(8):
    plt.plot(xp[i], polinomio_a_trozos[i](xp[i]) ,'b-', linewidth = 5 )
plt.plot(x_0, y_0, 'g-', linewidth = 5,alpha = 0.75 ,label = 'Function m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.title("Estimate of m(x) with h=0.25 and M=2", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
Again, a clear case of overfitting: the mean squared error is fairly low, but the regression is not estimated correctly.
#### Root mean squared error estimate
```python
(cuadrado_de_los_errores3 / 200)**0.5
```
0.20085090075741924
#### With h=0.5 and M=2
```python
partition = [[] for i in range(4)]
for i in range(200):
partition[int(2*(x1[i]+1))].append(i)
```
```python
polinomio_a_trozos = []
cuadrado_de_los_errores4 = 0
for i in range(4):
x_aux , y_aux = [x1[j] for j in partition[i]],[y1[j] for j in partition[i]]
z = np.polyfit(x_aux,y_aux,2)
polinomio_a_trozos.append(np.poly1d(z))
# accumulate the squared errors over each piece of the polynomial
for j in range(len(x_aux)):
cuadrado_de_los_errores4 += (polinomio_a_trozos[i](x_aux[j])-y_aux[j])**2
```
```python
xp=[]
for i in range(4):
xp.append(np.linspace(-1+i*(1/2), -1+(i+1)*(1/2), 200))
```
```python
fig, ax = plt.subplots(figsize=(11,7))
plt.plot(x1, y1, 'ro', linewidth = 5,alpha = 0.5, label = 'Scatter (X,Y)')
plt.legend(loc='best', frameon= True)
for i in range(4):
    plt.plot(xp[i], polinomio_a_trozos[i](xp[i]) ,'b-', linewidth = 5)
plt.plot(x_0, y_0, 'g-', linewidth = 5,alpha = 0.75 ,label = 'Function m(x)=E[Y|X=x]')
plt.legend(loc='best', frameon= True)
plt.title("Estimate of m(x) with h=0.5 and M=2", fontsize='15')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
```
The error here is slightly higher than in the overfitting cases, yet the regression is predicted quite accurately.
#### Root mean squared error estimate
```python
(cuadrado_de_los_errores4 / 200)**0.5
```
0.20321524367619534
```python
(cuadrado_de_los_errores1 / 200)**0.5 ,\
(cuadrado_de_los_errores2 / 200)**0.5 ,\
(cuadrado_de_los_errores3 / 200)**0.5 ,\
(cuadrado_de_los_errores4 / 200)**0.5
```
(0.21146615768136706,
0.1885956312812796,
0.20085090075741924,
0.20321524367619534)
Link to the GitHub repo: https://github.com/julianferres/Aprendizaje-Estadistico.git
|
{"hexsha": "e1c2550a961943090eebc63a7daa73019a726d4f", "size": 305357, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Ejercicios/06-Reconstruir Regresion (Penultima Clase)/EstimacionRegresion.ipynb", "max_stars_repo_name": "julianferres/Aprendizaje-Estadistico", "max_stars_repo_head_hexsha": "897c5389afa2a0aad7ca46125540154b3b764e0d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Ejercicios/06-Reconstruir Regresion (Penultima Clase)/EstimacionRegresion.ipynb", "max_issues_repo_name": "julianferres/Aprendizaje-Estadistico", "max_issues_repo_head_hexsha": "897c5389afa2a0aad7ca46125540154b3b764e0d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Ejercicios/06-Reconstruir Regresion (Penultima Clase)/EstimacionRegresion.ipynb", "max_forks_repo_name": "julianferres/Aprendizaje-Estadistico", "max_forks_repo_head_hexsha": "897c5389afa2a0aad7ca46125540154b3b764e0d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 381.2197253433, "max_line_length": 60216, "alphanum_fraction": 0.9360748239, "converted": true, "num_tokens": 3425}
|
from styx_msgs.msg import TrafficLight
import tensorflow as tf
import rospy
import cv2
import numpy as np
class TLClassifier(object):
def __init__(self):
#TODO load classifier
pass
def get_classification(self, image):
"""Determines the color of the traffic light in the image
Args:
image (cv::Mat): image containing the traffic light
Returns:
int: ID of traffic light color (specified in styx_msgs/TrafficLight)
"""
#TODO implement light color prediction
        img_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        # threshold on a red-ish hue band with high saturation and value
        lower_red = np.array([0,140,120])
        upper_red = np.array([50,255,255])
        mask0 = cv2.inRange(img_hsv, lower_red, upper_red)
        # number of pixels that fall inside the red range
        count = np.count_nonzero(mask0)
        if count >= 200:
return TrafficLight.RED
else:
return TrafficLight.UNKNOWN
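# Minimal usage sketch (the image path and the BGR assumption are illustrative,
# not part of this module):
#
#   classifier = TLClassifier()
#   img = cv2.imread('sample_light.jpg')  # OpenCV loads images as BGR
#   state = classifier.get_classification(img)
#   rospy.loginfo('traffic light state: %s', state)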
|
{"hexsha": "75b9a09b424b724ca4f10ef085ca910cb67360bb", "size": 940, "ext": "py", "lang": "Python", "max_stars_repo_path": "ros/src/tl_detector/light_classification/tl_classifier.py", "max_stars_repo_name": "navinrahim/CarND-Capstone", "max_stars_repo_head_hexsha": "92c3ad7952b9b73fa20af798da4ddef55f570605", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ros/src/tl_detector/light_classification/tl_classifier.py", "max_issues_repo_name": "navinrahim/CarND-Capstone", "max_issues_repo_head_hexsha": "92c3ad7952b9b73fa20af798da4ddef55f570605", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2021-01-26T13:45:06.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-11T23:23:40.000Z", "max_forks_repo_path": "ros/src/tl_detector/light_classification/tl_classifier.py", "max_forks_repo_name": "navinrahim/CarND-Capstone", "max_forks_repo_head_hexsha": "92c3ad7952b9b73fa20af798da4ddef55f570605", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-06-06T11:38:28.000Z", "max_forks_repo_forks_event_max_datetime": "2018-06-12T04:28:32.000Z", "avg_line_length": 26.8571428571, "max_line_length": 80, "alphanum_fraction": 0.6276595745, "include": true, "reason": "import numpy", "num_tokens": 216}
|
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import math
from sklearn.linear_model import Lasso
from sklearn.linear_model import LassoCV
ALPHAS = [0.00016, 0.00018, 0.00020, 0.00022, 0.00024, 0.00026, 0.00028, 0.00030, 0.00032]
if __name__ == "__main__":
X_train = pd.read_csv('../Data/data_cleaned_train_comments_X.csv')
y_train = pd.read_csv('../Data/data_cleaned_train_y.csv')
X_val = pd.read_csv('../Data/data_cleaned_val_comments_X.csv')
y_val = pd.read_csv('../Data/data_cleaned_val_y.csv')
    score_best = 0
    alpha_best = ALPHAS[0]  # fallback so alpha_best is always defined
for alpha in ALPHAS:
print('alpha:', alpha)
        reg = Lasso(alpha=alpha, max_iter=int(1e5))
reg.fit(X_train, y_train)
score = reg.score(X_val, y_val)
print('\t training score:', reg.score(X_train, y_train))
print('\t validation score:', reg.score(X_val, y_val))
if score > score_best:
score_best = score
alpha_best = alpha
print('best alpha:', alpha_best)
    reg = Lasso(alpha=alpha_best, max_iter=int(1e5))
reg.fit(X_train, y_train)
print('training set:', reg.score(X_train, y_train))
y_pred_train = reg.predict(X_train)
y_pred_val = reg.predict(X_val)
print('validation set:', reg.score(X_val, y_val))
coefs = np.array(reg.coef_!=0)
np.save('../Data/selected_coefs.npy', coefs)
print('total number of parameters:', sum(reg.coef_!=0))
plt.figure(1)
plt.scatter(y_pred_train, y_train)
plt.figure(2)
plt.scatter(y_pred_val, y_val)
plt.show()
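# Note: the LassoCV imported above (currently unused) could replace the manual
# grid search by cross-validating alpha internally; a minimal sketch, not a
# drop-in for the explicit train/validation split used here:
#
#   reg_cv = LassoCV(alphas=ALPHAS, max_iter=int(1e5))
#   reg_cv.fit(X_train, y_train.values.ravel())
#   print('best alpha (CV):', reg_cv.alpha_)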
|
{"hexsha": "295a3d8d4391a03dc52d52f853c5dbfc5fafbe2c", "size": 1542, "ext": "py", "lang": "Python", "max_stars_repo_path": "Main/cv.py", "max_stars_repo_name": "PouyaREZ/AirBnbPricePrediction", "max_stars_repo_head_hexsha": "8c4c8ce7fd0871b6bd68e573a7796d2d2dd22276", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 46, "max_stars_repo_stars_event_min_datetime": "2019-07-31T17:21:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-05T21:18:42.000Z", "max_issues_repo_path": "Main/cv.py", "max_issues_repo_name": "PouyaREZ/AirBnbPricePrediction", "max_issues_repo_head_hexsha": "8c4c8ce7fd0871b6bd68e573a7796d2d2dd22276", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Main/cv.py", "max_forks_repo_name": "PouyaREZ/AirBnbPricePrediction", "max_forks_repo_head_hexsha": "8c4c8ce7fd0871b6bd68e573a7796d2d2dd22276", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2019-08-01T13:38:53.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-18T15:12:41.000Z", "avg_line_length": 32.8085106383, "max_line_length": 90, "alphanum_fraction": 0.6660181582, "include": true, "reason": "import numpy", "num_tokens": 440}
|
\chapter{Overview}
\sdr is a software-defined radio based on Microsemi's SmartFusion. Unlike common FPGAs, the SmartFusion
incorporates a hard ARM Cortex-M3 core and a flash-based FPGA fabric. Flash-based FPGA
technology offers low static power, requires no reconfiguration at boot, and retains memory in
power-down mode. These benefits make it a better candidate for low-power wireless research. We present
the \sdr, a true battery-powered SDR platform suitable for research on portable, deployable handheld
devices.
\section{Features}
\begin{enumerate}
\item 2.4 - 2.5~GHz ISM band
\item Dual channel 80~MS/s, 8-bits ADC
\item Dual channel 40~MS/s, 8-bits DAC
\item 16~MB External PSRAM
\item 8~MB External Flash
\item $\pm$3~ppm TCXO
\item Ethernet, USB interfaces
\item 24 user-defined I/Os, 8 LEDs and 4 switches
\item Battery-powered
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\columnwidth]{sdrv2_scale}
\end{figure}
|
{"hexsha": "3c8694db06c3b613594f9e38e362c0d6e48941b9", "size": 983, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuals/tex/overview.tex", "max_stars_repo_name": "lab11/uSDR", "max_stars_repo_head_hexsha": "4eeab36bcbea0e65c81f615975916ffd35d7de0b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-08-23T03:56:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-10T11:51:36.000Z", "max_issues_repo_path": "manuals/tex/overview.tex", "max_issues_repo_name": "lab11/uSDR", "max_issues_repo_head_hexsha": "4eeab36bcbea0e65c81f615975916ffd35d7de0b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manuals/tex/overview.tex", "max_forks_repo_name": "lab11/uSDR", "max_forks_repo_head_hexsha": "4eeab36bcbea0e65c81f615975916ffd35d7de0b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-07-22T12:47:41.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-16T23:18:10.000Z", "avg_line_length": 39.32, "max_line_length": 103, "alphanum_fraction": 0.7772126144, "num_tokens": 285}
|
import os
import sys
import theano
import theano.tensor as T
import numpy
import numpy as np
import mahotas
import partition_comparison
import StringIO
import glob
base_path = os.path.dirname(__file__)
sys.path.insert(1,os.path.join(base_path, '../../common'))
sys.path.insert(2,os.path.join(base_path, '../../database'))
sys.path.insert(3,os.path.join(base_path, '../'))
from db import DB
from project import Project
from performance import Performance
from paths import Paths
from mlp import MLP
from data import Data
print 'base_path:', base_path
if __name__ == '__main__':
# load the model to use for performance evaluation
x = T.matrix('x')
rng = numpy.random.RandomState(1234)
project = DB.getProject('mlpnew') #evalmlp')
model = MLP(
id=project.id,
rng=rng,
input=x,
momentum=0.0,
offline=True,
n_in=project.patchSize**2,
n_hidden=project.hiddenUnits,
n_out=len(project.labels),
train_time=project.trainTime,
#batch_size=project.batchSize,
batch_size=50,
patch_size=project.patchSize,
path=project.path_offline)
data = Data( project, offline=True, n_train_samples=700000, n_valid_samples=5000)
#model.train(offline=True, data=data, mean=project.mean, std=project.std)
#data.load(project )
#print data.get_pixel_count(project)
#exit(1)
n_iterations = 5000
for iteration in xrange(n_iterations):
print 'iteration:', iteration
model.train(data=data, offline=True, mean=project.mean, std=project.std)
|
{"hexsha": "368994607c12ebe042f6f2c43dc71e62c39fb95b", "size": 1637, "ext": "py", "lang": "Python", "max_stars_repo_path": "code/model/mlp/offline.py", "max_stars_repo_name": "fegonda/icon_demo", "max_stars_repo_head_hexsha": "d2d1b0148989187c1433597f9c3ae4357178c082", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "code/model/mlp/offline.py", "max_issues_repo_name": "fegonda/icon_demo", "max_issues_repo_head_hexsha": "d2d1b0148989187c1433597f9c3ae4357178c082", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "code/model/mlp/offline.py", "max_forks_repo_name": "fegonda/icon_demo", "max_forks_repo_head_hexsha": "d2d1b0148989187c1433597f9c3ae4357178c082", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.9841269841, "max_line_length": 85, "alphanum_fraction": 0.6676847892, "include": true, "reason": "import numpy,import theano", "num_tokens": 386}
|
"""
Data handling classes
Author: Jia Geng
Email: gjia0214@gmail.com | jxg570@miami.edu
"""
import os
import torch
import numpy as np
import random
from PIL import Image
from collections import OrderedDict
from torchvision.transforms.transforms import *
from torch.utils.data.dataloader import *
from torch.utils.data.dataset import *
def default_test_processing(mean=(0.49139968, 0.48215841, 0.44653091),
std=(0.24703223, 0.24348513, 0.26158784),
):
"""
    Get the default preprocessing pipeline:
    ToTensor -> Normalize
    :return: list of processing modules
"""
return [ToTensor(), Normalize(mean=mean, std=std)]
def default_train_processing(p=0.5,
mean=(0.49139968, 0.48215841, 0.44653091),
std=(0.24703223, 0.24348513, 0.26158784)):
"""
Get the default augmentation.
Random Horizontal Flip & RandomVerticalFlip & Random Jitter
:param p: probability
:param mean: mean
:param std: std
:return: list of augmentation techniques
"""
# 0.5 chance of applying the flipping
# 0.5 chance of horizontal flip & 0.5 chance of vertical flip
    # overall 0.5 x 1.0 + 0.5 x 0.25 = 0.625 chance of not being flipped
flip = RandomApply([RandomHorizontalFlip(p=0.5), RandomVerticalFlip(p=0.5)], p)
# p/5 chance of get color jittered (disable the jitter on hue as it would require PIL input)
# less probability for jitter in case it affect the model learning
jitter = RandomApply([ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0)], p/5)
return [flip, jitter, ToTensor(), Normalize(mean=mean, std=std)]
class DataPackerAbstract:
"""
An abstract class for packing the dataset.
The abstract will make sure the packed data container class will be compatible with the ImgDataset
"""
def __init__(self):
self.mode = None # mode
self.parent_dir = None # this is for patch packer
self.patch_dir = None # this is for sequence packer only
# below should all be dict format {id: xxx, ...}
self.data_src = None # data src should contains id & image data or image fp
self.labels = None # labels
self.additional_info = None # other information of the data such as bbox locations, etc
def __len__(self):
"""
Get the length of the data
:return: length of the data
"""
if self.labels is None:
raise Exception('Data has not packed yet. Call pack() to pack the data.')
return len(self.labels)
def pack(self, *args):
"""
Pack data together, can be done by either put all data into memory or construct a memo about the data.
Should support 2 modes:
- data in memory
- data in disk
- additional information should be loaded on memory
"""
        raise NotImplementedError()
def get_packed_data(self):
"""
Get the packed data src, should return a dictionary that contains:
- packing mode (so that Dataset object knows what to expect)
- data src (data or fp) should be a dictionary {id: data src}
- annotations should be a dictionary {id: data src} id
"""
# sanity check
assert (isinstance(self.data_src, dict) or isinstance(self.data_src, OrderedDict)), 'Data source should be a dictionary with key: id'
assert (isinstance(self.labels, dict) or isinstance(self.labels, OrderedDict)), 'Labels should be a dictionary with key: id'
if self.additional_info is not None:
assert (isinstance(self.additional_info, dict) or isinstance(self.additional_info, OrderedDict)), 'Annotation should be a dictionary ' \
'with key: id'
assert self.mode in ['disk', 'memory'], 'model must be either disk or memory'
for data_id in self.data_src:
if self.additional_info is not None and data_id not in self.additional_info:
raise Exception('Data id={} from data src can not be found in annotation.'.format(data_id))
output = {'mode': self.mode, 'data': self.data_src, 'labels': self.labels}
if self.additional_info is not None:
output['info'] = self.additional_info
return output
def split(self, *args):
"""
Split the packed data into train, eval, test packer
"""
        raise NotImplementedError()
def to_rgb(self):
"""
Convert the data to RGB 3-channel format
"""
pass
def get_mean_std(self):
"""
Get the mean and std of the data (per channel)
"""
pass
class ImgDataset(Dataset):
"""
PyTorch Dataset + image processing & augmentation
For image processing pipeline & augmentation, ImageDataset will be compatible with the torchvision.transforms.
Pass a list of transform modules and ImgDataset will Compose it.
Use .train() or .eval() to turn on or off the augmentation
"""
def __init__(self, data_packer: DataPackerAbstract, processing: list or None):
"""
Constructor
:param data_packer: A packed data object. The object class should inherit the PackedDataAbstract
:param processing: processing pipeline can be augmentation or pure processing
"""
# data
self.parent_dir = data_packer.parent_dir
self.packed_data = data_packer.get_packed_data()
# idx to img_id to make get_item work
self.idx2id = {i: img_id for i, img_id in enumerate(self.packed_data['data'])}
# processing func from torchvision
self.mapping = {}
self.processing = processing
        # sanity check
if self.processing is not None:
self.__sanity_check(self.processing)
def __getitem__(self, idx: int):
"""
Get item method
:param idx: the data idx
:return: data and annotations
"""
# get the image id
data_id = self.idx2id[idx]
# get the mode
mode = self.packed_data['mode']
        # get the image
if mode == 'disk':
fp = os.path.join(self.parent_dir, self.packed_data['data'][data_id])
img = Image.open(fp=fp)
elif mode == 'memory':
img = self.packed_data['data'][data_id]
else:
raise Exception('Mode must be either disk or memory but was {}'.format(mode))
# process the image
if self.processing is not None:
if not isinstance(img, Image.Image):
img = ToPILImage()(img)
for f in self.processing:
img = f(img)
# check type
if isinstance(img, Image.Image):
img = ToTensor()(img)
# get the labels
label = self.packed_data['labels'][data_id]
output = {'x': img, 'y': label}
# get the additional annotations, if any
if 'info' in self.packed_data:
output['info'] = self.packed_data['info'][data_id]
return output
def __len__(self):
"""
Get the length of the data
:return: length of the data
"""
return len(self.idx2id)
@staticmethod
def __sanity_check(funcs: list):
"""
Sanity check on each input function module.
Put the callable object into pipeline
:param funcs:
:return:
"""
callables = []
for func in funcs:
# sanity check
assert callable(func), 'Function {} is not an callable object.'.format(func)
# collect
callables.append(func)
return callables
class ImgSequenceDataset(Dataset):
"""
PyTorch Dataset + image processing & augmentation
For image processing pipeline & augmentation, ImageDataset will be compatible with the torchvision.transforms.
Pass a list of transform modules and ImgDataset will Compose it.
Only support on disk loading
"""
def __init__(self, data_packer: DataPackerAbstract, processing: list or None):
"""
Constructor
:param data_packer: A packed data object. The object class should inherit the PackedDataAbstract
:param processing: processing pipeline can be augmentation or pure processing
"""
# data
self.patch_dir = data_packer.patch_dir
self.packed_data = data_packer.get_packed_data()
# idx to img_id to make get_item work
self.idx2id = {i: img_id for i, img_id in enumerate(self.packed_data['data'])}
# processing func from torchvision
self.processing = processing
        # sanity check
if self.processing is not None:
self.__sanity_check(self.processing)
def __getitem__(self, idx: int):
"""
Get item method
:param idx: the data idx
:return: data and annotations
"""
# get the image id
sequence_id = self.idx2id[idx]
# read patches in the sequence TODO: check the IO
fps = self.packed_data['data'][sequence_id]
sequence = []
for fp in fps:
# load the image
patch = Image.open(fp=fp)
if self.processing is not None:
for f in self.processing:
patch = f(patch)
if isinstance(patch, Image.Image):
patch = ToTensor()(patch)
sequence.append(patch)
sequence = torch.stack(sequence, dim=0) # (T, C, H, W)
# get the labels
label = int(self.packed_data['labels'][sequence_id])
output = {'x': sequence, 'y': label}
# get the additional annotations, if any
if 'info' in self.packed_data:
output['info'] = self.packed_data['info'][sequence_id]
return output
def __len__(self):
"""
Get the length of the data
:return: length of the data
"""
return len(self.idx2id)
@staticmethod
def __sanity_check(funcs: list):
"""
Sanity check on each input function module.
Put the callable object into pipeline
:param funcs:
:return:
"""
callables = []
for func in funcs:
# sanity check
assert callable(func), 'Function {} is not an callable object.'.format(func)
# collect
callables.append(func)
return callables
class DataHandler:
"""
Wrapper on the dataloaders
"""
def __init__(self, train_dataset: ImgDataset or None, eval_dataset: ImgDataset or None,
batch_size: int):
"""
Constructor
:param train_dataset: training dataset
:param eval_dataset: evaluation dataset or testing dataset
:param batch_size: batch size
"""
self.dataloaders = {'train': None, 'eval': None}
if train_dataset is not None:
self.dataloaders['train'] = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
if eval_dataset is not None:
self.dataloaders['eval'] = DataLoader(dataset=eval_dataset, batch_size=batch_size, shuffle=False)
def __getitem__(self, phase: str):
"""
Get the dataloader by its phase
:param phase: phase
:return: dataloader
"""
return self.dataloaders[phase]
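# Minimal usage sketch (MyPacker and the path are hypothetical; any concrete
# DataPackerAbstract subclass works):
#
#   packer = MyPacker()
#   packer.pack('/data/train')  # fills data_src and labels
#   train_ds = ImgDataset(packer, default_train_processing())
#   eval_ds = ImgDataset(packer, default_test_processing())
#   handler = DataHandler(train_ds, eval_ds, batch_size=32)
#   for batch in handler['train']:
#       x, y = batch['x'], batch['y']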
|
{"hexsha": "359f533757f218838cd629d5dcccea4939c59c0c", "size": 11586, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/datautils/datahandler.py", "max_stars_repo_name": "gengjia0214/jai", "max_stars_repo_head_hexsha": "865ec9fdf432288ecab806cd1ecb8a4c747ee689", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2020-02-20T23:14:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-09T03:28:04.000Z", "max_issues_repo_path": "src/datautils/datahandler.py", "max_issues_repo_name": "gengjia0214/jai", "max_issues_repo_head_hexsha": "865ec9fdf432288ecab806cd1ecb8a4c747ee689", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/datautils/datahandler.py", "max_forks_repo_name": "gengjia0214/jai", "max_forks_repo_head_hexsha": "865ec9fdf432288ecab806cd1ecb8a4c747ee689", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.0055248619, "max_line_length": 148, "alphanum_fraction": 0.6048679441, "include": true, "reason": "import numpy", "num_tokens": 2612}
|
"""
A pure TensorFlow implementation of a fully connected neural network.
"""
# pylint: disable=missing-docstring
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import functools
import tensorflow as tf
from cleverhans import initializers
from cleverhans.model import Model
from PIL import Image
class ModelCls(Model):
def __init__(self, scope, **kwargs):
del kwargs
Model.__init__(self, scope, locals())
self.n_units = 100
# Do a dummy run of fprop to make sure the variables are created from
# the start
self.fprop(tf.placeholder(tf.float32, shape = (128, 28,28, 1)))
#self.fprop(tf.placeholder(tf.float32, shape = (128, 100, 1)))
# Put a reference to the params in self so that the params get pickled
self.params = self.get_params()
def fprop(self, x, **kwargs):
del kwargs
batch_size = tf.shape(x)[0]
height = tf.shape(x)[1]
width = tf.shape(x)[2]
channels = tf.shape(x)[3]
print("channels: ", channels)
#input_size = 28*28
#print("shape of x: ", tf.shape(x)[0]," ", tf.shape(x)[1]," ", tf.shape(x)[2]," ", tf.shape(x)[3])
my_fcc = functools.partial(
tf.layers.dense, activation=tf.nn.relu)
with tf.variable_scope(self.scope, reuse=tf.AUTO_REUSE):
y = my_fcc(tf.reshape(x, [batch_size,28*28*1]), self.n_units, name = 'FCC1')
#y = my_fcc(tf.reshape(z, [batch_size,100]), self.n_units, name = 'FCC1')
y = my_fcc(y, self.n_units, name = 'FCC2')
y = my_fcc(y, self.n_units, name = 'FCC3')
logits = my_fcc(y, 10, activation = tf.nn.sigmoid, name='LOGITS')
'''
pred_label = [tf.argmax(logits[i]) for i in range(0,128)]
print("shape of pred_label: ", np.shape(pred_label))
pred = np.zeros((128,10))
sess = tf.Session()
with sess.as_default():
for i in range(0, 128):
pred[i,pred_label[i].eval()] = 1
pred1 = tf.convert_to_tensor(pred)
'''
return {
self.O_LOGITS: logits,
'LOGITS': logits,
}
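# Minimal instantiation sketch (shapes follow the dummy fprop call in __init__;
# the scope name 'cls' is illustrative):
#
#   model = ModelCls('cls')
#   x = tf.placeholder(tf.float32, shape=(128, 28, 28, 1))
#   logits = model.fprop(x)['LOGITS']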
|
{"hexsha": "793e337b4d5b19b5d97b2b1fa6e743da52a3c816", "size": 2127, "ext": "py", "lang": "Python", "max_stars_repo_path": "cleverhans/model_zoo/basic_cls.py", "max_stars_repo_name": "iirishikaii/cleverhans", "max_stars_repo_head_hexsha": "dda1aa598131c56a64103f478c153e84bc951f7c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-21T15:38:33.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-21T15:38:33.000Z", "max_issues_repo_path": "cleverhans/model_zoo/basic_cls.py", "max_issues_repo_name": "rishika11A/cleverhans", "max_issues_repo_head_hexsha": "dda1aa598131c56a64103f478c153e84bc951f7c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cleverhans/model_zoo/basic_cls.py", "max_forks_repo_name": "rishika11A/cleverhans", "max_forks_repo_head_hexsha": "dda1aa598131c56a64103f478c153e84bc951f7c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-21T15:38:34.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-21T15:38:34.000Z", "avg_line_length": 34.3064516129, "max_line_length": 102, "alphanum_fraction": 0.6464503996, "include": true, "reason": "import numpy", "num_tokens": 591}
|
[STATEMENT]
lemma e2ennreal_ereal [simp]: "e2ennreal (ereal x) = ennreal x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. e2ennreal (ereal x) = ennreal x
[PROOF STEP]
by (metis e2ennreal_def enn2ereal_inverse ennreal.rep_eq sup_ereal_def)
|
{"llama_tokens": 111, "file": null, "length": 1}
|
[STATEMENT]
lemma filter_insort_triv:
"\<not> P x \<Longrightarrow> filter P (insort_key f x xs) = filter P xs"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<not> P x \<Longrightarrow> filter P (insort_key f x xs) = filter P xs
[PROOF STEP]
by (induct xs) simp_all
|
{"llama_tokens": 105, "file": null, "length": 1}
|