\PassOptionsToPackage{unicode=true}{hyperref} % options for packages loaded elsewhere
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provides euro and other symbols
\else % if luatex or xelatex
\usepackage{unicode-math}
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\usepackage{hyperref}
\hypersetup{
pdftitle={STA 326 2.0 R Programming and Data Analysis},
pdfauthor={Thiyanga S Talagala},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{color}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{longtable,booktabs}
% Fix footnotes in tables (requires footnote package)
\IfFileExists{footnote.sty}{\usepackage{footnote}\makesavenoteenv{longtable}}{}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
% set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
\thm@preskip=8pt plus 2pt minus 4pt
\thm@postskip=\thm@preskip
}
\makeatother
\usepackage[]{natbib}
\bibliographystyle{apalike}
\title{STA 326 2.0 R Programming and Data Analysis}
\author{Thiyanga S Talagala}
\date{2020-02-11}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\textbf{Learning outcomes (functions):}
In this tutorial we learned what functions in R programming are, the basic syntax of functions in R programming, in-built functions and how to use them to make our work easier, the syntax of a user-defined function, and different types of user-defined functions. In the next session, we are going to learn how to read files in R programming.
\hypertarget{elements}{%
\chapter{Introduction}\label{elements}}
\hypertarget{r-programming-language}{%
\section{R programming language}\label{r-programming-language}}
\hypertarget{rstudio}{%
\section{RStudio}\label{rstudio}}
RStudio is an integrated development environment (IDE) for R that provides an alternative interface with several advantages over the default R console.
\hypertarget{installation}{%
\section{Installation}\label{installation}}
The first thing you need to do to get started with R is to install it on your computer. R works on pretty much every platform available, including the widely available Windows, Mac OS X, and Linux systems. If you want to watch a step-by-step tutorial on how to install R for Mac or Windows, you can watch these videos:
\begin{itemize}
\item
\href{https://www.youtube.com/watch?v=Ohnk9hcxf9M\&feature=youtu.be}{Installing R on Windows}
\item
\href{https://www.youtube.com/watch?v=uxuuWXU-7UQ\&feature=youtu.be}{Installing R on the Mac}
\end{itemize}
Next you can install RStudio. Remember, you must have R already installed before installing RStudio. If you want a step-by-step tutorial, watch the video \href{https://www.youtube.com/watch?v=bM7Sfz-LADM\&feature=youtu.be}{here}.
\hypertarget{main-windows}{%
\section{Main windows}\label{main-windows}}
\hypertarget{setting-your-working-directory}{%
\section{Setting your working directory}\label{setting-your-working-directory}}
What is a working directory? It is the folder where R reads and writes files by default. To find your current working directory, type \texttt{getwd()} in the console.
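A minimal sketch (the path below is only an illustration; use a folder that exists on your computer):
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{getwd}\NormalTok{()              }\CommentTok{# print the current working directory}
\KeywordTok{setwd}\NormalTok{(}\StringTok{"~/my-project"}\NormalTok{) }\CommentTok{# change the working directory}
\end{Highlighting}
\end{Shaded}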
\hypertarget{working-with-r-scripts-files}{%
\section{Working with R scripts files}\label{working-with-r-scripts-files}}
Rather than typing R commands directly into the Console, you can save them in an R script file. This allows for \textbf{reproducibility} and makes it easy to share your code with someone else.
To create a new R script:
File --\textgreater{} New File --\textgreater{} R Script
To comment in an R script, start the comment with the \texttt{\#} symbol; R ignores everything from \texttt{\#} to the end of the line.
\hypertarget{r-packages}{%
\section{R packages}\label{r-packages}}
\hypertarget{installation-1}{%
\subsection{Installation}\label{installation-1}}
There is a large community of R users who contribute various packages that do useful things. Before you start using an R package, you must first install it into your environment. There are two ways to install a package:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
  Using the \texttt{install.packages()} function in the console.
\item
  Using the RStudio menu: Tools --\textgreater{} Install Packages.
\end{enumerate}
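For example, to install a package from CRAN with \texttt{install.packages()} (the package name \texttt{ggplot2} is used here purely as an illustration):
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{install.packages}\NormalTok{(}\StringTok{"ggplot2"}\NormalTok{) }\CommentTok{# downloads and installs the package from CRAN}
\end{Highlighting}
\end{Shaded}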
\hypertarget{load-a-package}{%
\subsection{Load a package}\label{load-a-package}}
A package needs to be installed only one time; after that, load it with the \texttt{library()} function in every new R session before using it.
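A minimal sketch, again using \texttt{ggplot2} only as an example:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(ggplot2) }\CommentTok{# loads the installed package into the current session}
\end{Highlighting}
\end{Shaded}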
\hypertarget{important-things-to-know-about-r}{%
\section{Important things to know about R}\label{important-things-to-know-about-r}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
R is case-sensitive
\item
R works with numerous data types. Some of the most basic types to get started are:
\begin{enumerate}
\def\labelenumii{\roman{enumii}.}
\item
\textbf{numeric}: decimal values like 8.5
\item
\textbf{integers}: natural numbers like 8
\item
\textbf{logical}: Boolean values (TRUE or FALSE)
\item
\textbf{character}: strings (text) like ``statistics''
\end{enumerate}
\end{enumerate}
\hypertarget{objects}{%
\section{Objects}\label{objects}}
The entities R operates on are technically known as \textbf{objects}. There are two types of objects:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Data structures
\item
Functions
\end{enumerate}
\hypertarget{getting-help}{%
\section{Getting help}\label{getting-help}}
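Two standard ways to open a function's documentation, using \texttt{mean()} as the example:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{?mean        }\CommentTok{# opens the help page for mean()}
\KeywordTok{help}\NormalTok{(}\StringTok{"mean"}\NormalTok{) }\CommentTok{# equivalent to ?mean}
\end{Highlighting}
\end{Shaded}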
\hypertarget{variable-assignment}{%
\section{Variable assignment}\label{variable-assignment}}
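Values are assigned to variables with the assignment operator \texttt{\textless{}-} (\texttt{=} also works, but \texttt{\textless{}-} is the R convention). A minimal example:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{x <- }\DecValTok{5} \CommentTok{# assign the value 5 to x}
\NormalTok{x      }\CommentTok{# typing the name prints the value}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 5
\end{verbatim}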
\hypertarget{data-permanency-and-removing-objects}{%
\section{Data permanency and removing objects}\label{data-permanency-and-removing-objects}}
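A sketch of the standard workflow: \texttt{ls()} lists the objects in the current workspace and \texttt{rm()} removes them.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ls}\NormalTok{()            }\CommentTok{# list the objects in the workspace}
\KeywordTok{rm}\NormalTok{(x)           }\CommentTok{# remove the object x}
\KeywordTok{rm}\NormalTok{(}\DataTypeTok{list=}\KeywordTok{ls}\NormalTok{()) }\CommentTok{# remove everything (use with care)}
\end{Highlighting}
\end{Shaded}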
\hypertarget{intro}{%
\chapter{Data structures in base R}\label{intro}}
There are five main data structures in R:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Atomic vector
\item
Matrix
\item
Array
\item
List
\item
Data frame
\end{enumerate}
\hypertarget{atomic-vectors}{%
\section{Atomic vectors}\label{atomic-vectors}}
\begin{itemize}
\item
An atomic vector is a 1-dimensional data structure.
\item
All elements of an atomic vector must be of the same type; hence it is a \textbf{homogeneous} type of object. Vectors can hold numeric data, character data, or logical data.
\end{itemize}
\hypertarget{creating-vectors}{%
\subsection{Creating vectors}\label{creating-vectors}}
Vectors can be created using the concatenation function \texttt{c()}.
\textbf{Syntax}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{vector_name <-}\StringTok{ }\KeywordTok{c}\NormalTok{(element1, element2, element3)}
\end{Highlighting}
\end{Shaded}
\textbf{Examples}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{first_vec <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{50}\NormalTok{, }\DecValTok{70}\NormalTok{)}
\NormalTok{second_vec <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"Jan"}\NormalTok{, }\StringTok{"Feb"}\NormalTok{, }\StringTok{"March"}\NormalTok{, }\StringTok{"April"}\NormalTok{)}
\NormalTok{third_vec <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\OtherTok{TRUE}\NormalTok{, }\OtherTok{FALSE}\NormalTok{, }\OtherTok{TRUE}\NormalTok{, }\OtherTok{TRUE}\NormalTok{)}
\NormalTok{fourth_vec <-}\StringTok{ }\KeywordTok{c}\NormalTok{(10L, 20L, 50L, 70L)}
\end{Highlighting}
\end{Shaded}
\hypertarget{types-and-tests-with-vectors}{%
\subsection{Types and tests with vectors}\label{types-and-tests-with-vectors}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\texttt{typeof()} returns the type of a vector's elements
\end{enumerate}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{typeof}\NormalTok{(first_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] "double"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{typeof}\NormalTok{(fourth_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] "integer"
\end{verbatim}
Notice that with the suffix \texttt{L} you get integer elements rather than numeric (double) ones.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
To check if it is a
\end{enumerate}
\begin{itemize}
\tightlist
\item
vector: \texttt{is.vector()}
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{is.vector}\NormalTok{(first_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] TRUE
\end{verbatim}
\begin{itemize}
\tightlist
\item
character vector: \texttt{is.character()}
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{is.character}\NormalTok{(first_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] FALSE
\end{verbatim}
\begin{itemize}
\tightlist
\item
double: \texttt{is.double()}
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{is.double}\NormalTok{(first_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] TRUE
\end{verbatim}
\begin{itemize}
\tightlist
\item
integer: \texttt{is.integer()}
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{is.integer}\NormalTok{(first_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] FALSE
\end{verbatim}
\begin{itemize}
\tightlist
\item
logical: \texttt{is.logical()}
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{is.logical}\NormalTok{(first_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] FALSE
\end{verbatim}
\begin{itemize}
\tightlist
\item
atomic: \texttt{is.atomic()}
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{is.atomic}\NormalTok{(first_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] TRUE
\end{verbatim}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
\texttt{length()} returns the number of elements in a vector
\end{enumerate}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{length}\NormalTok{(first_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 4
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{length}\NormalTok{(fourth_vec)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 4
\end{verbatim}
\hypertarget{coercion}{%
\subsection{Coercion}\label{coercion}}
Vectors must be homogeneous. When you attempt to combine different types, they are coerced to the most flexible type so that every element in the vector is of the same type.
Order from least to most flexible:
\texttt{logical} --\textgreater{} \texttt{integer} --\textgreater{} \texttt{double} --\textgreater{} \texttt{character}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{a <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\FloatTok{3.1}\NormalTok{, 2L, }\DecValTok{3}\NormalTok{, }\DecValTok{4}\NormalTok{, }\StringTok{"GPA"}\NormalTok{) }
\KeywordTok{typeof}\NormalTok{(a) }
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] "character"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{anew <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\FloatTok{3.1}\NormalTok{, 2L, }\DecValTok{3}\NormalTok{, }\DecValTok{4}\NormalTok{)}
\KeywordTok{typeof}\NormalTok{(anew) }
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] "double"
\end{verbatim}
\hypertarget{explicit-coercion}{%
\subsection{Explicit coercion}\label{explicit-coercion}}
Vectors can be explicitly coerced from one class to another using the \texttt{as.*} functions, if available. For example, \texttt{as.character}, \texttt{as.numeric}, \texttt{as.integer}, and \texttt{as.logical}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{vec1 <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\OtherTok{TRUE}\NormalTok{, }\OtherTok{FALSE}\NormalTok{, }\OtherTok{TRUE}\NormalTok{, }\OtherTok{TRUE}\NormalTok{)}
\KeywordTok{typeof}\NormalTok{(vec1)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] "logical"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{vec2 <-}\StringTok{ }\KeywordTok{as.integer}\NormalTok{(vec1)}
\KeywordTok{typeof}\NormalTok{(vec2)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] "integer"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{vec2}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 1 0 1 1
\end{verbatim}
\textbf{Question}
Why does the code below produce \texttt{NA}s?
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{x <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"a"}\NormalTok{, }\StringTok{"b"}\NormalTok{, }\StringTok{"c"}\NormalTok{)}
\KeywordTok{as.numeric}\NormalTok{(x)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
Warning: NAs introduced by coercion
\end{verbatim}
\begin{verbatim}
[1] NA NA NA
\end{verbatim}
\hypertarget{combining-vectors}{%
\subsection{Combining vectors}\label{combining-vectors}}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{x1 <-}\StringTok{ }\DecValTok{1}\OperatorTok{:}\DecValTok{3}
\NormalTok{x2 <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{30}\NormalTok{)}
\NormalTok{combinedx1x2 <-}\StringTok{ }\KeywordTok{c}\NormalTok{(x1, x2)}
\NormalTok{combinedx1x2 }
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 1 2 3 10 20 30
\end{verbatim}
Let's check the classes
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{class}\NormalTok{(x1)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "integer"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{class}\NormalTok{(x2)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "numeric"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{class}\NormalTok{(combinedx1x2)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "numeric"
\end{verbatim}
Similarly, if you combine a numeric vector and a character vector, the resulting vector is a character vector.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{y1 <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{)}
\NormalTok{y2 <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"a"}\NormalTok{, }\StringTok{"b"}\NormalTok{, }\StringTok{"c"}\NormalTok{)}
\KeywordTok{c}\NormalTok{(y1, y2)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "1" "2" "3" "a" "b" "c"
\end{verbatim}
\hypertarget{name-elements-in-a-vector}{%
\subsection{Name elements in a vector}\label{name-elements-in-a-vector}}
You can name elements in a vector in different ways. We will learn two of them.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
When creating it
\end{enumerate}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{x1 <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DataTypeTok{a=}\DecValTok{1991}\NormalTok{, }\DataTypeTok{b=}\DecValTok{1992}\NormalTok{, }\DataTypeTok{c=}\DecValTok{1993}\NormalTok{)}
\NormalTok{x1}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## a b c
## 1991 1992 1993
\end{verbatim}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Modifying the names of an existing vector
\end{enumerate}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{x2 <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{5}\NormalTok{, }\DecValTok{10}\NormalTok{)}
\KeywordTok{names}\NormalTok{(x2) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"a"}\NormalTok{, }\StringTok{"b"}\NormalTok{, }\StringTok{"b"}\NormalTok{)}
\NormalTok{x2}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
##  a  b  b 
##  1  5 10 
\end{verbatim}
Note that the names do not have to be unique.
To remove the names of a vector:
Method 1
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{unname}\NormalTok{(x1)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 1991 1992 1993
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{x1}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## a b c
## 1991 1992 1993
\end{verbatim}
Note that \texttt{unname()} returns an unnamed copy; as the output above shows, \texttt{x1} itself still has its names.

Method 2
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{names}\NormalTok{(x2) <-}\StringTok{ }\OtherTok{NULL}
\NormalTok{x2}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] 1 5 10
\end{verbatim}
\textbf{Question}
Can you guess the output of the following code?
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{v <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{)}
\KeywordTok{names}\NormalTok{(v) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"a"}\NormalTok{)}
\KeywordTok{names}\NormalTok{(v)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "a" NA NA
\end{verbatim}
\hypertarget{simplifying-vector-creation}{%
\subsection{Simplifying vector creation}\label{simplifying-vector-creation}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
The colon operator \texttt{:} produces regularly spaced ascending or descending sequences.
\end{enumerate}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{a1 <-}\StringTok{ }\DecValTok{10}\OperatorTok{:}\DecValTok{16}
\NormalTok{a1}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 11 12 13 14 15 16
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{b1 <-}\StringTok{ }\FloatTok{-0.5}\OperatorTok{:}\FloatTok{8.5}
\NormalTok{b1}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] -0.5 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5
\end{verbatim}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
The \texttt{seq()} function. There are three arguments to provide: i) the initial value, ii) the final value, and iii) the increment.
\end{enumerate}
\textbf{Syntax}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{seq}\NormalTok{(initial_value, final_value, increment)}
\end{Highlighting}
\end{Shaded}
\textbf{Example}
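A sequence from 2 to 10 in steps of 2:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{seq}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{10}\NormalTok{, }\DecValTok{2}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1]  2  4  6  8 10
\end{verbatim}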
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
The \texttt{rep()} function repeats values.
\end{enumerate}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rep}\NormalTok{(}\DecValTok{9}\NormalTok{, }\DecValTok{5}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 9 9 9 9 9
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rep}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{4}\NormalTok{, }\DecValTok{2}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 1 2 3 4 1 2 3 4
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rep}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{4}\NormalTok{, }\DataTypeTok{each=}\DecValTok{2}\NormalTok{) }\CommentTok{# each element is repeated twice}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 1 1 2 2 3 3 4 4
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rep}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{4}\NormalTok{, }\DataTypeTok{times=}\DecValTok{2}\NormalTok{) }\CommentTok{# whole sequence is repeated twice}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 1 2 3 4 1 2 3 4
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rep}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{4}\NormalTok{, }\DataTypeTok{each=}\DecValTok{2}\NormalTok{, }\DataTypeTok{times=}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rep}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{4}\NormalTok{, }\DecValTok{1}\OperatorTok{:}\DecValTok{4}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 1 2 2 3 3 3 4 4 4 4
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rep}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{4}\NormalTok{, }\KeywordTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{2}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 1 1 1 1 2 3 3 3 3 4 4
\end{verbatim}
\hypertarget{logical-operations}{%
\subsection{Logical operations}\label{logical-operations}}
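Comparison operators (\texttt{\textless{}}, \texttt{\textgreater{}}, \texttt{==}, \texttt{!=}, \texttt{\textless{}=}, \texttt{\textgreater{}=}) are applied element by element and return logical vectors; \texttt{\&} (and), \texttt{\textbar{}} (or) and \texttt{!} (not) combine them. A minimal sketch:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{v <- }\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{30}\NormalTok{)}
\NormalTok{v }\OperatorTok{>}\NormalTok{ }\DecValTok{15} \CommentTok{# element-wise comparison}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] FALSE  TRUE  TRUE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{v }\OperatorTok{>}\NormalTok{ }\DecValTok{15}\NormalTok{ }\OperatorTok{&}\NormalTok{ v }\OperatorTok{<}\NormalTok{ }\DecValTok{25} \CommentTok{# combine conditions with logical AND}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] FALSE  TRUE FALSE
\end{verbatim}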
\hypertarget{subsetting}{%
\subsection{Subsetting}\label{subsetting}}
There are situations where we want to select only some of the elements of a vector. The following code shows various ways to select parts of a vector object.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{data <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{103}\NormalTok{, }\DecValTok{124}\NormalTok{, }\DecValTok{126}\NormalTok{)}
\NormalTok{data[}\DecValTok{1}\NormalTok{] }\CommentTok{# shows the first element }
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{data[}\OperatorTok{-}\DecValTok{1}\NormalTok{] }\CommentTok{# shows all except the first item}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 20 103 124 126
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{data[}\DecValTok{1}\OperatorTok{:}\DecValTok{3}\NormalTok{] }\CommentTok{# shows first three elements}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 20 103
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{data[}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{4}\NormalTok{)]}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 103 124
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{data[data }\OperatorTok{>}\StringTok{ }\DecValTok{3}\NormalTok{] }\CommentTok{# elements greater than 3 (here, all of them)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 20 103 124 126
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{data[data }\OperatorTok{<}\StringTok{ }\DecValTok{20}\NormalTok{ }\OperatorTok{|}\StringTok{ }\NormalTok{data }\OperatorTok{>}\StringTok{ }\DecValTok{120}\NormalTok{] }\CommentTok{# elements less than 20 or greater than 120}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 124 126
\end{verbatim}
Example: How do you replace the 3rd element of the data vector with 203?
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{data[}\DecValTok{3}\NormalTok{] <-}\StringTok{ }\DecValTok{203}
\NormalTok{data}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 20 203 124 126
\end{verbatim}
\hypertarget{vector-arithmetic}{%
\subsection{Vector arithmetic}\label{vector-arithmetic}}
Vector operations are performed element by element.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{100}\NormalTok{, }\DecValTok{100}\NormalTok{) }\OperatorTok{+}\StringTok{ }\DecValTok{2} \CommentTok{# two is added to every element in the vector}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 12 102 102
\end{verbatim}
Vector operations between two vectors
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{v1 <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{)}
\NormalTok{v2 <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{100}\NormalTok{, }\DecValTok{1000}\NormalTok{)}
\NormalTok{v1 }\OperatorTok{+}\StringTok{ }\NormalTok{v2}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 11 102 1003
\end{verbatim}
Adding two vectors of unequal length: the shorter vector is \textbf{recycled}, i.e., its elements are reused in order until the length of the longer vector is matched.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{longvec <-}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{100}\NormalTok{, }\DataTypeTok{length=}\DecValTok{10}\NormalTok{)}
\NormalTok{shortvec <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{5}\NormalTok{)}
\NormalTok{shortvec }\OperatorTok{+}\StringTok{ }\NormalTok{longvec}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 11 22 33 44 55 61 72 83 94 105
\end{verbatim}
\hypertarget{missing-values}{%
\subsection{Missing values}\label{missing-values}}
Use \texttt{NA} to place a missing value in a vector.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{z <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{101}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{, }\OtherTok{NA}\NormalTok{)}
\KeywordTok{is.na}\NormalTok{(z)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] FALSE FALSE FALSE FALSE TRUE
\end{verbatim}
\hypertarget{factor}{%
\subsection{Factor}\label{factor}}
A factor is a vector that can contain only predefined values, and is used to store categorical data.
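A minimal sketch of creating a factor with \texttt{factor()}; the \texttt{levels} are the set of allowed values:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{blood <- }\KeywordTok{factor}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\StringTok{"A"}\NormalTok{, }\StringTok{"B"}\NormalTok{, }\StringTok{"A"}\NormalTok{, }\StringTok{"O"}\NormalTok{))}
\KeywordTok{levels}\NormalTok{(blood)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] "A" "B" "O"
\end{verbatim}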
\hypertarget{matrix}{%
\section{Matrix}\label{matrix}}
A matrix is a 2-dimensional, homogeneous data structure.
\textbf{Syntax to create a matrix}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{matrix_name <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(vector_of_elements, }
\DataTypeTok{nrow=}\NormalTok{number_of_rows,}
\DataTypeTok{ncol=}\NormalTok{number_of_columns,}
\DataTypeTok{byrow=}\NormalTok{logical_value, }\CommentTok{# If byrow=TRUE, then the matrix is filled in by row.}
\DataTypeTok{dimnames=}\KeywordTok{list}\NormalTok{(rnames, cnames)) }\CommentTok{# To assign row and column names}
\end{Highlighting}
\end{Shaded}
\textbf{Example}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{values <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{30}\NormalTok{, }\DecValTok{40}\NormalTok{)}
\NormalTok{matrix1 <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(values, }\DataTypeTok{nrow=}\DecValTok{2}\NormalTok{) }\CommentTok{# Matrix filled by columns (default option)}
\NormalTok{matrix1}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[,1] [,2]
[1,] 10 30
[2,] 20 40
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{matrix2 <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(values, }\DataTypeTok{nrow=}\DecValTok{2}\NormalTok{, }\DataTypeTok{byrow=}\OtherTok{TRUE}\NormalTok{) }\CommentTok{# Matrix filled by rows}
\NormalTok{matrix2}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[,1] [,2]
[1,] 10 20
[2,] 30 40
\end{verbatim}
\textbf{Naming matrix rows and columns}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{rnames <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"R1"}\NormalTok{, }\StringTok{"R2"}\NormalTok{)}
\NormalTok{cnames <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"C1"}\NormalTok{, }\StringTok{"C2"}\NormalTok{)}
\NormalTok{matrix_with_names <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(values, }\DataTypeTok{nrow=}\DecValTok{2}\NormalTok{, }\DataTypeTok{dimnames=}\KeywordTok{list}\NormalTok{(rnames, cnames))}
\NormalTok{matrix_with_names}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
C1 C2
R1 10 30
R2 20 40
\end{verbatim}
\hypertarget{matrix-subscript}{%
\subsection{Matrix subscript}\label{matrix-subscript}}
\texttt{matrix\_name{[}i,\ {]}} gives the ith row of a matrix
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{matrix1[}\DecValTok{1}\NormalTok{, ]}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 30
\end{verbatim}
\texttt{matrix\_name{[},\ j{]}} gives the jth column of a matrix
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{matrix1[, }\DecValTok{2}\NormalTok{]}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 30 40
\end{verbatim}
\texttt{matrix\_name{[}i,\ j{]}} gives the element in the ith row and jth column
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{matrix1[}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{]}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 30
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{matrix1[}\DecValTok{1}\NormalTok{, }\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{)] }
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 30
\end{verbatim}
\hypertarget{cbind-and-rbind}{%
\subsection{\texorpdfstring{\texttt{cbind} and \texttt{rbind}}{cbind and rbind}}\label{cbind-and-rbind}}
Matrices can also be created by column-binding and row-binding with \texttt{cbind()} and \texttt{rbind()}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{x <-}\StringTok{ }\DecValTok{1}\OperatorTok{:}\DecValTok{3}
\NormalTok{y <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{100}\NormalTok{, }\DecValTok{1000}\NormalTok{)}
\KeywordTok{cbind}\NormalTok{(x, y) }\CommentTok{# binds vectors/matrices horizontally, as columns}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
x y
[1,] 1 10
[2,] 2 100
[3,] 3 1000
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rbind}\NormalTok{(x, y) }\CommentTok{# binds vectors/matrices vertically, as rows}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[,1] [,2] [,3]
x 1 2 3
y 10 100 1000
\end{verbatim}
\hypertarget{matrix-operations}{%
\subsection{Matrix operations}\label{matrix-operations}}
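A sketch of two common operations on the matrices created above: element-wise arithmetic with the usual operators, and matrix multiplication with \texttt{\%*\%}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{matrix1 }\OperatorTok{+}\NormalTok{ matrix2   }\CommentTok{# element-wise addition}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
     [,1] [,2]
[1,]   20   50
[2,]   50   80
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{matrix1 }\OperatorTok{%*%}\NormalTok{ matrix2 }\CommentTok{# matrix multiplication}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
     [,1] [,2]
[1,] 1000 1400
[2,] 1400 2000
\end{verbatim}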
\hypertarget{array}{%
\section{Array}\label{array}}
\begin{itemize}
\item
  An array is a data structure with more than two dimensions (for example, 3-dimensional).
\item
  Like vectors and matrices, arrays are \textbf{homogeneous}: all elements must be of the same type.
\end{itemize}
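A minimal sketch with \texttt{array()}; the \texttt{dim} argument gives the number of rows, columns, and layers:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{arr <- }\KeywordTok{array}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{12}\NormalTok{, }\DataTypeTok{dim=}\KeywordTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{2}\NormalTok{))}
\NormalTok{arr[}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{] }\CommentTok{# row 1, column 2, layer 2}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 9
\end{verbatim}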
\hypertarget{list}{%
\section{List}\label{list}}
\begin{itemize}
\tightlist
\item
Lists are \textbf{heterogeneous}: their elements can be of different types and different lengths.
\end{itemize}
\textbf{Syntax}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{list_name <-}\StringTok{ }\KeywordTok{list}\NormalTok{(entry1, entry2, entry3, ...)}
\end{Highlighting}
\end{Shaded}
\textbf{Example}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{first_list <-}\KeywordTok{list}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{3}\NormalTok{, }\KeywordTok{matrix}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{6}\NormalTok{, }\DataTypeTok{nrow=}\DecValTok{2}\NormalTok{), }\KeywordTok{c}\NormalTok{(}\OtherTok{TRUE}\NormalTok{, }\OtherTok{FALSE}\NormalTok{, }\OtherTok{TRUE}\NormalTok{))}
\NormalTok{first_list}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [[1]]
## [1] 1 2 3
##
## [[2]]
## [,1] [,2] [,3]
## [1,] 1 3 5
## [2,] 2 4 6
##
## [[3]]
## [1] TRUE FALSE TRUE
\end{verbatim}
To see the structure of a list
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{str}\NormalTok{(first_list)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## List of 3
## $ : int [1:3] 1 2 3
## $ : int [1:2, 1:3] 1 2 3 4 5 6
## $ : logi [1:3] TRUE FALSE TRUE
\end{verbatim}
\hypertarget{data-frame}{%
\section{Data frame}\label{data-frame}}
\begin{itemize}
\item
A dataframe is a rectangular arrangement of data with rows corresponding to observational units and columns corresponding to variables.
\item
A data frame is more general than a matrix in that different columns can contain different modes of data.
\item
It's similar to the datasets you'd typically see in SPSS and MINITAB.
\item
Data frames are the most common data structure you'll deal with in R.
\end{itemize}
\begin{figure}
\centering
\includegraphics{tidy-1.png}
\caption{Figure 1: Components of a dataframe.}
\end{figure}
\hypertarget{creating-a-dataframe}{%
\subsection{Creating a dataframe}\label{creating-a-dataframe}}
\textbf{Syntax}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{name_of_the_dataframe <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}
\DataTypeTok{var1_name=}\NormalTok{vector of values of the first variable,}
\DataTypeTok{var2_names=}\NormalTok{vector of values of the second variable)}
\end{Highlighting}
\end{Shaded}
\textbf{Example}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{corona <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{ID=}\KeywordTok{c}\NormalTok{(}\StringTok{"C001"}\NormalTok{, }\StringTok{"C002"}\NormalTok{, }\StringTok{"C003"}\NormalTok{, }\StringTok{"C004"}\NormalTok{),}
\DataTypeTok{Location=}\KeywordTok{c}\NormalTok{(}\StringTok{"Beijing"}\NormalTok{, }\StringTok{"Wuhan"}\NormalTok{, }\StringTok{"Shanghai"}\NormalTok{, }\StringTok{"Beijing"}\NormalTok{),}
\DataTypeTok{Test_Results=}\KeywordTok{c}\NormalTok{(}\OtherTok{FALSE}\NormalTok{, }\OtherTok{TRUE}\NormalTok{, }\OtherTok{FALSE}\NormalTok{, }\OtherTok{FALSE}\NormalTok{))}
\NormalTok{corona}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
ID Location Test_Results
1 C001 Beijing FALSE
2 C002 Wuhan TRUE
3 C003 Shanghai FALSE
4 C004 Beijing FALSE
\end{verbatim}
To check whether an object is a data frame
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{is.data.frame}\NormalTok{(corona)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] TRUE
\end{verbatim}
To convert a matrix to a data frame
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mat <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\DecValTok{10}\OperatorTok{:}\DecValTok{81}\NormalTok{, }\DataTypeTok{ncol=}\DecValTok{4}\NormalTok{)}
\NormalTok{mat}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[,1] [,2] [,3] [,4]
[1,] 10 28 46 64
[2,] 11 29 47 65
[3,] 12 30 48 66
[4,] 13 31 49 67
[5,] 14 32 50 68
[6,] 15 33 51 69
[7,] 16 34 52 70
[8,] 17 35 53 71
[9,] 18 36 54 72
[10,] 19 37 55 73
[11,] 20 38 56 74
[12,] 21 39 57 75
[13,] 22 40 58 76
[14,] 23 41 59 77
[15,] 24 42 60 78
[16,] 25 43 61 79
[17,] 26 44 62 80
[18,] 27 45 63 81
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mat_df <-}\StringTok{ }\KeywordTok{as.data.frame}\NormalTok{(mat)}
\NormalTok{mat_df}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
V1 V2 V3 V4
1 10 28 46 64
2 11 29 47 65
3 12 30 48 66
4 13 31 49 67
5 14 32 50 68
6 15 33 51 69
7 16 34 52 70
8 17 35 53 71
9 18 36 54 72
10 19 37 55 73
11 20 38 56 74
12 21 39 57 75
13 22 40 58 76
14 23 41 59 77
15 24 42 60 78
16 25 43 61 79
17 26 44 62 80
18 27 45 63 81
\end{verbatim}
\hypertarget{subsetting-data-frames}{%
\subsection{Subsetting data frames}\label{subsetting-data-frames}}
\textbf{Select rows}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{head}\NormalTok{(mat_df) }\CommentTok{# by default shows the first six rows}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
V1 V2 V3 V4
1 10 28 46 64
2 11 29 47 65
3 12 30 48 66
4 13 31 49 67
5 14 32 50 68
6 15 33 51 69
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{head}\NormalTok{(mat_df, }\DecValTok{3}\NormalTok{) }\CommentTok{# To extract only the first three rows }
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
V1 V2 V3 V4
1 10 28 46 64
2 11 29 47 65
3 12 30 48 66
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{tail}\NormalTok{(mat_df) }\CommentTok{# by default shows the last six rows}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
V1 V2 V3 V4
13 22 40 58 76
14 23 41 59 77
15 24 42 60 78
16 25 43 61 79
17 26 44 62 80
18 27 45 63 81
\end{verbatim}
To select some specific rows
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{index <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{7}\NormalTok{, }\DecValTok{8}\NormalTok{)}
\NormalTok{mat_df[index, ]}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
V1 V2 V3 V4
1 10 28 46 64
3 12 30 48 66
7 16 34 52 70
8 17 35 53 71
\end{verbatim}
\textbf{Select columns}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Select column(s) by variable names
\end{enumerate}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mat_df}\OperatorTok{$}\NormalTok{V1 }\CommentTok{# Method 1}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mat_df[, }\StringTok{"V1"}\NormalTok{] }\CommentTok{# Method 2}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
\end{verbatim}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Select column(s) by index
\end{enumerate}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mat_df[, }\DecValTok{2}\NormalTok{]}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
\end{verbatim}
\hypertarget{built-in-dataframes}{%
\subsection{Built in dataframes}\label{built-in-dataframes}}
\textbf{Note:} All objects in R have a class.
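R also ships with several built-in data frames that are handy for practice, such as \texttt{iris} and \texttt{mtcars}:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{class}\NormalTok{(iris)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] "data.frame"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{head}\NormalTok{(iris, }\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1          5.1         3.5          1.4         0.2  setosa
2          4.9         3.0          1.4         0.2  setosa
3          4.7         3.2          1.3         0.2  setosa
\end{verbatim}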
\hypertarget{function}{%
\chapter{Functions in R}\label{function}}
A function is a block of organized and reusable code that is used to perform a specific task in a program. There are two types of functions in R:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
In-built functions
\item
User-defined functions
\end{enumerate}
\hypertarget{in-built-functions}{%
\section{In-built functions}\label{in-built-functions}}
These functions are provided by the R environment for direct use, to make our work easier. Some examples of frequently used in-built functions are as follows.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{mean}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{21}\NormalTok{, }\DecValTok{78}\NormalTok{, }\DecValTok{105}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 46.8
\end{verbatim}
\hypertarget{user-defined-functions}{%
\section{User-defined functions}\label{user-defined-functions}}
These functions are declared and defined by the user, according to their requirements, to perform a specific task.
\hypertarget{main-components-of-a-function}{%
\section{Main components of a function}\label{main-components-of-a-function}}
All R functions have three main components:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\textbf{function name}: name of the function that is stored as an R object
\item
\textbf{arguments:} are used to provide specific inputs to a function when it is invoked. A function can have zero, one, multiple, or default arguments.
\item
\textbf{function body:} contains the block of code that performs the specific task assigned to the function and produces its \textbf{return value}.
\end{enumerate}
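Putting the three components together, here is a minimal sketch of a user-defined function (the name \texttt{std\_err} is chosen purely for illustration):
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{std_err <- }\ControlFlowTok{function}\NormalTok{(x) \{       }\CommentTok{# function name: std_err; argument: x}
\NormalTok{  }\KeywordTok{sd}\NormalTok{(x) }\OperatorTok{/}\NormalTok{ }\KeywordTok{sqrt}\NormalTok{(}\KeywordTok{length}\NormalTok{(x))    }\CommentTok{# body; the last evaluated value is returned}
\NormalTok{\}}
\KeywordTok{std_err}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{30}\NormalTok{, }\DecValTok{40}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 6.454972
\end{verbatim}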
\hypertarget{passing-arguments-to-a-function}{%
\section{Passing arguments to a function}\label{passing-arguments-to-a-function}}
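Arguments can be matched by position or by name, and named arguments may be given in any order. All three calls below are equivalent; each returns the sequence \texttt{2\ 4\ 6\ 8\ 10}:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{seq}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{10}\NormalTok{, }\DecValTok{2}\NormalTok{)            }\CommentTok{# matched by position}
\KeywordTok{seq}\NormalTok{(}\DataTypeTok{from=}\DecValTok{2}\NormalTok{, }\DataTypeTok{to=}\DecValTok{10}\NormalTok{, }\DataTypeTok{by=}\DecValTok{2}\NormalTok{) }\CommentTok{# matched by name}
\KeywordTok{seq}\NormalTok{(}\DataTypeTok{by=}\DecValTok{2}\NormalTok{, }\DataTypeTok{from=}\DecValTok{2}\NormalTok{, }\DataTypeTok{to=}\DecValTok{10}\NormalTok{) }\CommentTok{# named arguments in any order}
\end{Highlighting}
\end{Shaded}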
\hypertarget{some-useful-built-in-functions-in-r}{%
\section{Some useful built-in functions in R}\label{some-useful-built-in-functions-in-r}}
\hypertarget{r-can-be-used-as-a-simple-calculator.}{%
\subsection{R can be used as a simple calculator.}\label{r-can-be-used-as-a-simple-calculator.}}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[b]{0.47\columnwidth}\raggedright
Operator\strut
\end{minipage} & \begin{minipage}[b]{0.47\columnwidth}\raggedright
Description\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.47\columnwidth}\raggedright
+\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
addition\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedright
-\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
subtraction\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedright
*\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
multiplication\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedright
\^{}\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
exponentiation (5\^{}2 is 25)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedright
\%\%\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
modulo: the remainder of dividing the number on the left by the number on its right (5\%\%3 is 2)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\hypertarget{some-more-maths-functions}{%
\subsection{Some more maths functions}\label{some-more-maths-functions}}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[b]{0.47\columnwidth}\raggedright
Operator\strut
\end{minipage} & \begin{minipage}[b]{0.47\columnwidth}\raggedright
Description\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.47\columnwidth}\raggedright
abs(x)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
absolute value of x\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedright
log(x, base=y)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
logarithm of x with base y; if base is not specified, returns the natural logarithm\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedright
exp(x)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
exponential of x\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedright
sqrt(x)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
square root of x\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.47\columnwidth}\raggedright
factorial(x)\strut
\end{minipage} & \begin{minipage}[t]{0.47\columnwidth}\raggedright
factorial of x\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\hypertarget{basic-statistic-functions}{%
\subsection{Basic statistic functions}\label{basic-statistic-functions}}
\begin{longtable}[]{@{}ll@{}}
\toprule
Operator & Description\tabularnewline
\midrule
\endhead
mean(x) & mean of x\tabularnewline
median(x) & median of x\tabularnewline
mode(x) & storage mode of x (note: \texttt{mode()} does \emph{not} compute the statistical mode)\tabularnewline
var(x) & variance of x\tabularnewline
scale(x) & z-score of x\tabularnewline
quantile(x) & quantiles of x\tabularnewline
summary(x) & summary of x: mean, minimum, maximum, etc.\tabularnewline
\bottomrule
\end{longtable}
\hypertarget{probability-distribution-functions}{%
\subsection{Probability distribution functions}\label{probability-distribution-functions}}
\begin{itemize}
\item
\textbf{d} prefix for the \textbf{density} function
\item
\textbf{p} prefix for the \textbf{cumulative probability} (distribution function)
\item
\textbf{q} prefix for the \textbf{quantile}
\item
\textbf{r} prefix for the \textbf{random} number generator
\end{itemize}
\hypertarget{illustration-with-standard-normal-distribution}{%
\subsubsection{Illustration with Standard normal distribution}\label{illustration-with-standard-normal-distribution}}
The general formula for the probability density function of the normal distribution with mean \(\mu\) and variance \(\sigma^2\) is given by
\[
f_X(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-(x-\mu)^2/(2\sigma^2)}
\]
If we let the mean \(\mu=0\) and the standard deviation \(\sigma=1\), we get the probability density function for the standard normal distribution.
\[
f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}
\]
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{dnorm}\NormalTok{(}\DecValTok{0}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 0.3989423
\end{verbatim}
\begin{figure}
\centering
\includegraphics{STA-326-2.0-R-programming-and-Data-Analysis_files/figure-latex/unnamed-chunk-50-1.pdf}
\caption{\label{fig:unnamed-chunk-50}Standard normal probability density function: dnorm(0)}
\end{figure}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{pnorm}\NormalTok{(}\DecValTok{0}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 0.5
\end{verbatim}
\begin{figure}
\centering
\includegraphics{STA-326-2.0-R-programming-and-Data-Analysis_files/figure-latex/unnamed-chunk-52-1.pdf}
\caption{\label{fig:unnamed-chunk-52}Standard normal cumulative distribution function: pnorm(0)}
\end{figure}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{qnorm}\NormalTok{(}\FloatTok{0.5}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 0
\end{verbatim}
\begin{figure}
\centering
\includegraphics{STA-326-2.0-R-programming-and-Data-Analysis_files/figure-latex/unnamed-chunk-54-1.pdf}
\caption{\label{fig:unnamed-chunk-54}Standard normal quantile function: qnorm(0.5)}
\end{figure}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{set.seed}\NormalTok{(}\DecValTok{262020}\NormalTok{)}
\NormalTok{random_numbers <-}\StringTok{ }\KeywordTok{rnorm}\NormalTok{(}\DecValTok{10}\NormalTok{)}
\NormalTok{random_numbers}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 0.20078181 0.95873346 1.18369056 1.49513750 1.18109222 -0.57789570
[7] 0.01790671 0.81185245 0.39488199 -0.44337927
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{sort}\NormalTok{(random_numbers) }\CommentTok{# sort the numbers so they are easy to map onto the graph}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] -0.57789570 -0.44337927 0.01790671 0.20078181 0.39488199 0.81185245
[7] 0.95873346 1.18109222 1.18369056 1.49513750
\end{verbatim}
\includegraphics{STA-326-2.0-R-programming-and-Data-Analysis_files/figure-latex/unnamed-chunk-56-1.pdf}
\hypertarget{reproducibility-of-scientific-results}{%
\subsection{Reproducibility of scientific results}\label{reproducibility-of-scientific-results}}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rnorm}\NormalTok{(}\DecValTok{10}\NormalTok{) }\CommentTok{# first attempt}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 1.4701904 -0.2375662 0.1765985 -0.5257483 -1.3674764 -1.4422500
[7] 0.7576607 0.6475122 -1.1543034 0.9066248
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rnorm}\NormalTok{(}\DecValTok{10}\NormalTok{) }\CommentTok{# second attempt}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] -1.7603264 -0.3402939 -1.0335807 1.0645014 -0.3874459 0.5975271
[7] -2.1535707 0.6602928 1.1581404 0.6133446
\end{verbatim}
As you can see above, each call produces different results. Setting the random seed with \texttt{set.seed()} makes the results reproducible:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{set.seed}\NormalTok{(}\DecValTok{1}\NormalTok{)}
\KeywordTok{rnorm}\NormalTok{(}\DecValTok{10}\NormalTok{) }\CommentTok{# First attempt with set.seed}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] -0.6264538 0.1836433 -0.8356286 1.5952808 0.3295078 -0.8204684
[7] 0.4874291 0.7383247 0.5757814 -0.3053884
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{set.seed}\NormalTok{(}\DecValTok{1}\NormalTok{)}
\KeywordTok{rnorm}\NormalTok{(}\DecValTok{10}\NormalTok{) }\CommentTok{# Second attempt with set.seed}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] -0.6264538 0.1836433 -0.8356286 1.5952808 0.3295078 -0.8204684
[7] 0.4874291 0.7383247 0.5757814 -0.3053884
\end{verbatim}
\hypertarget{writing-functions}{%
\chapter{Writing functions}\label{writing-functions}}
\hypertarget{when-should-we-write-functions}{%
\section{When should we write functions?}\label{when-should-we-write-functions}}
\begin{itemize}
\tightlist
\item
when you find yourself repeating the same task many times, e.g., copying and pasting the same block of code with small changes
\end{itemize}
\hypertarget{glogal-variables-vs-local-variables}{%
\section{Global variables vs local variables}\label{glogal-variables-vs-local-variables}}
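A minimal sketch: variables created inside a function are \textbf{local} and disappear when the function returns, while variables created at the console are \textbf{global} and visible inside functions.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{g <- }\DecValTok{10}              \CommentTok{# global variable}
\NormalTok{f <- }\ControlFlowTok{function}\NormalTok{() \{}
\NormalTok{  loc <- }\DecValTok{5}           \CommentTok{# local variable, visible only inside f}
\NormalTok{  g }\OperatorTok{+}\NormalTok{ loc}
\NormalTok{\}}
\KeywordTok{f}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 15
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{exists}\NormalTok{(}\StringTok{"loc"}\NormalTok{) }\CommentTok{# loc was never created in the global environment}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] FALSE
\end{verbatim}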
\hypertarget{control-structures}{%
\section{Control structures}\label{control-structures}}
\begin{itemize}
\tightlist
\item
for loops (see the sketch below)
\end{itemize}
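A minimal \texttt{for} loop that prints the square of each element of a sequence:
\begin{Shaded}
\begin{Highlighting}[]
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in}\NormalTok{ }\DecValTok{1}\OperatorTok{:}\DecValTok{3}\NormalTok{) \{}
\NormalTok{  }\KeywordTok{print}\NormalTok{(i }\OperatorTok{*}\NormalTok{ i) }\CommentTok{# the loop body runs once for each value of i}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 1
[1] 4
[1] 9
\end{verbatim}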
\hypertarget{lapply-apply..}{%
\section{lapply, apply..}\label{lapply-apply..}}
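\texttt{lapply()} applies a function to each element of a list (or vector) and returns a list; \texttt{sapply()} does the same but simplifies the result to a vector where possible. A minimal sketch:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{sapply}\NormalTok{(}\KeywordTok{list}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{3}\NormalTok{, }\DecValTok{1}\OperatorTok{:}\DecValTok{10}\NormalTok{), mean) }\CommentTok{# apply mean() to each list element}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] 2.0 5.5
\end{verbatim}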
\hypertarget{data-analysis-with-tidyverse}{%
\chapter{Data analysis with tidyverse}\label{data-analysis-with-tidyverse}}
Some applications of the \texttt{tidyverse} packages are demonstrated in this chapter.
\hypertarget{tidy-data}{%
\section{Tidy data}\label{tidy-data}}
Two key principles:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Put each dataset in a dataframe
\item
Put each variable in a column
\end{enumerate}
\begin{figure}
\centering
\includegraphics{tidy-1.png}
\caption{Figure 1: Components of a dataframe.}
\end{figure}
Video: \url{https://www.youtube.com/watch?v=K-ss_ag2k9E}
\hypertarget{convert-from-messy-data-to-tidy-data}{%
\subsection{Convert from messy data to tidy data}\label{convert-from-messy-data-to-tidy-data}}
``Tidy datasets are all alike; every messy dataset is messy in its own way.'' -- Hadley Wickham
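As a sketch of what tidying looks like (assuming the \texttt{tidyr} package is installed; the data below are invented purely for illustration), \texttt{pivot\_longer()} gathers columns that are really values of one variable into a pair of name--value columns:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(tidyr)}
\NormalTok{messy <- }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{name=}\KeywordTok{c}\NormalTok{(}\StringTok{"Ann"}\NormalTok{, }\StringTok{"Bob"}\NormalTok{), }\DataTypeTok{math=}\KeywordTok{c}\NormalTok{(}\DecValTok{70}\NormalTok{, }\DecValTok{80}\NormalTok{), }\DataTypeTok{stat=}\KeywordTok{c}\NormalTok{(}\DecValTok{90}\NormalTok{, }\DecValTok{60}\NormalTok{))}
\KeywordTok{pivot_longer}\NormalTok{(messy, }\OperatorTok{-}\NormalTok{name, }\DataTypeTok{names_to=}\StringTok{"subject"}\NormalTok{, }\DataTypeTok{values_to=}\StringTok{"mark"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
After the call, each row holds exactly one name--subject--mark observation, which satisfies the two principles above.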
\hypertarget{data-wrangliing}{%
\chapter{Data wrangling}\label{data-wrangliing}}
\hypertarget{data-visualisation}{%
\chapter{Data visualisation}\label{data-visualisation}}
\hypertarget{modelling}{%
\chapter{Modelling}\label{modelling}}
\hypertarget{simulation-based-inference}{%
\section{Simulation-based Inference}\label{simulation-based-inference}}
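A minimal sketch of the idea, using the bootstrap to approximate the sampling distribution of the mean (the data and the number of resamples are arbitrary choices for illustration):
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{set.seed}\NormalTok{(}\DecValTok{1}\NormalTok{)}
\NormalTok{x <- }\KeywordTok{c}\NormalTok{(}\DecValTok{12}\NormalTok{, }\DecValTok{15}\NormalTok{, }\DecValTok{9}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{14}\NormalTok{)}
\NormalTok{boot_means <- }\KeywordTok{replicate}\NormalTok{(}\DecValTok{1000}\NormalTok{, }\KeywordTok{mean}\NormalTok{(}\KeywordTok{sample}\NormalTok{(x, }\DataTypeTok{replace=}\OtherTok{TRUE}\NormalTok{)))}
\KeywordTok{quantile}\NormalTok{(boot_means, }\KeywordTok{c}\NormalTok{(}\FloatTok{0.025}\NormalTok{, }\FloatTok{0.975}\NormalTok{)) }\CommentTok{# approximate 95% interval for the mean}
\end{Highlighting}
\end{Shaded}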
\bibliography{book.bib,packages.bib}
\end{document}
'''
Created on Jul 19, 2017
@author: Daniel Sela, Arnon Sela
'''
from scipy.io import readsav
from astropy.io import fits
import os
def read_fits_file(file, fits_index=1):
    # Read the data section of a FITS file; fits_index selects the HDU extension.
    try:
        hdus = fits.open(file, memmap=True)
        hdus_ext = hdus[fits_index]
        match = hdus_ext.data
    except Exception as e:
        raise Exception("cannot read fits data from file: %s" % (file,)) from e
    return match, 'ROTSE3'


def read_match_file(file, *args, **kwargs):
    # Read the 'match' record from an IDL save file.
    try:
        match = readsav(file)['match']
    except Exception as e:
        raise Exception("cannot read match data from file: %s" % (file,)) from e
    return match, 'ROTSE1'


def get_data_file_rotse(file):
    # Infer the ROTSE version from the file extension: '.fit' files are
    # ROTSE3 FITS files; anything else is treated as a ROTSE1 IDL match file.
    if not os.path.isfile(file):
        raise Exception("file not found: %s" % (file,))
    file_ext = file.rpartition('.')[2]
    if file_ext == 'fit':
        return 3
    else:
        return 1


def read_data_file(file, fits_index=1, tmpdir='/tmp'):
    ''' Reads fits and match files into record

    Args:
        file: path to match or fits file
        fits_index: HDU extension index used when reading a fits file
        tmpdir: scratch directory (currently unused)
    '''
    if not os.path.isfile(file):
        raise Exception("file not found: %s" % (file,))
    file_ext = file.rpartition('.')[2]
    if file_ext == 'fit':
        match, rotse = read_fits_file(file, fits_index)
    else:
        match, rotse = read_match_file(file)
    return match, rotse
from django.conf import settings
from django.contrib.auth.models import User
from django.db import models
from django.db.models.signals import post_save
from django.dispatch import receiver
from rest_framework.authtoken.models import Token
import numpy as np
import random
class Document(models.Model):
owner = models.ForeignKey(User)
filename = models.CharField(max_length=200)
mp3_filename = models.CharField(max_length=200)
duration = models.IntegerField()
date = models.DateField()
typist = models.CharField(max_length=10)
class Line(models.Model):
document = models.ForeignKey(Document, related_name='lines')
line_num = models.IntegerField()
text = models.CharField(max_length=1000)
class Extract(models.Model):
document = models.ForeignKey(Document, related_name='extracts')
context = models.CharField(max_length=3)
completed = models.BooleanField(default=False)
flag = models.BooleanField(default=False)
tag = models.CharField(max_length=512, blank=True)
class ExtractLines(models.Model):
extract = models.ForeignKey(Extract, related_name='extract_lines')
line = models.ForeignKey(Line)
class ExtractActors(models.Model):
extract = models.ForeignKey(Extract)
app = models.CharField(max_length=256)
context = models.CharField(max_length=15)
actor = models.CharField(max_length=256, default="", blank=True)
conditions = models.CharField(max_length=256, default="", blank=True)
data_type = models.CharField(max_length=256, default="", blank=True)
class IType(models.Model):
PERSONAL = 'PR'
SENSITIVE = 'SN'
BOTH = "BO"
I_TYPE_CHOICES = (
(PERSONAL, 'Personal'),
(SENSITIVE, 'Sensitive'),
(BOTH, 'Both')
)
extract = models.OneToOneField(Extract, related_name='i_type')
type = models.CharField(max_length=2, choices=I_TYPE_CHOICES)
class IMode(models.Model):
AUTOMATICS = "AU"
MANUAL = "MN"
I_MODE_CHOICES = (
(AUTOMATICS, 'Automatic'),
(MANUAL, "Manual")
)
extract = models.OneToOneField(Extract, related_name='i_mode')
mode = models.CharField(max_length=2, choices=I_MODE_CHOICES)
class Purpose(models.Model):
extract = models.ForeignKey(Extract, related_name='i_purpose')
purpose = models.CharField(max_length=300)
class RoleRelationship(models.Model):
extract = models.ForeignKey(Extract, related_name='relationships')
relationship = models.CharField(max_length=300)
class RoleExpectation(models.Model):
extract = models.ForeignKey(Extract, related_name='expectations')
expectation = models.CharField(max_length=300)
class PlaceLocation(models.Model):
extract = models.ForeignKey(Extract, related_name='locations')
location = models.CharField(max_length=300)
class PlaceNorm(models.Model):
extract = models.ForeignKey(Extract, related_name='norms')
norm = models.CharField(max_length=300)
class IAttrRef(models.Model):
name = models.CharField(max_length=100, unique=True)
description = models.CharField(max_length=512)
label = models.CharField(max_length=100)
class IAttr(models.Model):
attr = models.ForeignKey(IAttrRef)
extract = models.ForeignKey(Extract, related_name='i_attrs')
isAttr = models.BooleanField(default=False)
class InformationFlow(models.Model):
SENDER_SUB = 'SS'
SENDER_REC = 'SR'
THIRD_PARTY = 'TP'
FEEDBACK = 'FB'
ALL = 'AL'
NO_FLOW = 'NF'
INFORMATION_FLOW_CHOICES = (
(SENDER_SUB, 'Sender-Subject'),
(SENDER_REC, 'Sender-Receiver'),
(THIRD_PARTY, 'Third-Party'),
(FEEDBACK, 'Feedback'),
(ALL, 'All'),
(NO_FLOW, 'No-Flow')
)
extract = models.OneToOneField(Extract, related_name='info_flow')
flow = models.CharField(max_length=2, choices=INFORMATION_FLOW_CHOICES)
class Recode(models.Model):
recoder = models.ForeignKey(User)
class RecodeExtract(models.Model):
extract = models.ForeignKey(Extract)
recode = models.ForeignKey(Recode)
recode_context = models.CharField(max_length=3)
def random_extracts(user):
queryset = Extract.objects.all()
# Filter only fake extracts
fake_extracts = list(queryset.filter(document__filename='dummy'))
# get 10 fake extracts
rand_fake_extracts = random.sample(fake_extracts, 10)
# exclude fake extracts
queryset = queryset.exclude(document__filename='dummy')
# Get total count
all_extract_count = queryset.count()
# 10% of count
ten_percent_count = int(np.ceil(all_extract_count * 0.1))
# exclude the users own extracts
real_extracts = list(queryset.exclude(document__owner=user))
# select random sample of 10% of real extracts
real_extracts_sample = random.sample(real_extracts, ten_percent_count)
all_extracts = rand_fake_extracts + real_extracts_sample
# randomly shuffle extracts
random.shuffle(all_extracts)
return all_extracts
@receiver(post_save, sender=Recode)
def add_extracts_to_recode(sender, instance=None, created=False, **kwargs):
if created:
extracts = random_extracts(instance.recoder)
for extract in extracts:
RecodeExtract.objects.create(extract=extract, recode=instance,
recode_context="noc")
@receiver(post_save, sender=IAttrRef)
def add_attr_to_extract(sender, instance=None, created=False, **kwargs):
if created:
extracts = Extract.objects.all()
for extract in extracts:
IAttr.objects.create(attr=instance, extract=extract)
@receiver(post_save, sender=Extract)
def create_attrs(sender, instance=None, created=False, **kwargs):
if created:
attrs = IAttrRef.objects.all()
for attr in attrs:
IAttr.objects.create(attr=attr, extract=instance)
@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_auth_token(sender, instance=None, created=False, **kwargs):
if created:
Token.objects.create(user=instance)
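A minimal usage sketch of the signal chain above, assuming a configured Django project with these models migrated and at least ten extracts whose document filename is 'dummy' (all names below are illustrative):

# Hypothetical session in `python manage.py shell`; not part of models.py.
coder = User.objects.create(username="coder1")
recode = Recode.objects.create(recoder=coder)  # post_save fires add_extracts_to_recode
# 10 dummy extracts plus ceil(10% of other users' real extracts), shuffled
print(recode.recodeextract_set.count())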
|
{"hexsha": "03a80f97e0d59f4d7bb93ba24c1d5ca622044d04", "size": 6049, "ext": "py", "lang": "Python", "max_stars_repo_path": "transcript/models.py", "max_stars_repo_name": "ciaranmccormick/mm-transcription-server", "max_stars_repo_head_hexsha": "d7e44756beb703bf24a7a2bfe2cdfeaae8a6b49d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-02T03:51:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-02T03:51:14.000Z", "max_issues_repo_path": "transcript/models.py", "max_issues_repo_name": "ciaranmccormick/mm-transcription-server", "max_issues_repo_head_hexsha": "d7e44756beb703bf24a7a2bfe2cdfeaae8a6b49d", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "transcript/models.py", "max_forks_repo_name": "ciaranmccormick/mm-transcription-server", "max_forks_repo_head_hexsha": "d7e44756beb703bf24a7a2bfe2cdfeaae8a6b49d", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.3969849246, "max_line_length": 75, "alphanum_fraction": 0.7131757315, "include": true, "reason": "import numpy", "num_tokens": 1335}
|
function value = ch_indexi ( s, c )
%*****************************************************************************80
%
%% CH_INDEXI is the (case insensitive) first occurrence of a character in a string.
%
% Licensing:
%
% This code is distributed under the GNU LGPL license.
%
% Modified:
%
% 01 May 2004
%
% Author:
%
% John Burkardt
%
% Parameters:
%
% Input, string S, the string to be searched.
%
% Input, character C, the character to be searched for.
%
% Output, integer VALUE, the location of the first occurrence of C
% (upper or lowercase), or 0 if C does not occur.
%
value = 0;
for i = 1 : length ( s )
if ( ch_eqi ( s(i), c ) )
value = i;
return
end
end
return
end
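For comparison, a short Python rendering of the same contract (a sketch: 1-based index of the first case-insensitive match, 0 if absent; ch_eqi is replaced here by a lowercase comparison):

def ch_indexi(s, c):
    # Location of the first occurrence of c in s, ignoring case; 0 if not found.
    for i, ch in enumerate(s, start=1):
        if ch.lower() == c.lower():
            return i
    return 0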
|
{"author": "johannesgerer", "repo": "jburkardt-m", "sha": "1726deb4a34dd08a49c26359d44ef47253f006c1", "save_path": "github-repos/MATLAB/johannesgerer-jburkardt-m", "path": "github-repos/MATLAB/johannesgerer-jburkardt-m/jburkardt-m-1726deb4a34dd08a49c26359d44ef47253f006c1/chrpak/ch_indexi.m"}
|
# -*- coding: utf-8 -*-
import pickle
import numpy as np
from scipy.io.wavfile import read
from sklearn.mixture import GaussianMixture as GMM
from sklearn import preprocessing
import warnings
warnings.filterwarnings("ignore")
node = True # When True, fit and save a GMM after each processed file
import os
import python_speech_features as mfcc
modelsPath = "models/" # Directory where the fitted GMM model files are written
source = 'trainingData/' # Directory of the audio files used to train the models
def calculate(array): # Compute delta features over a +/-2 frame window
rows,cols = array.shape
deltas = np.zeros((rows,20))
N = 2
for i in range(rows):
index = []
j = 1
while j <= N:
if i-j < 0:
first = 0
else:
first = i-j
if i+j > rows-1:
second = rows-1
else:
second = i+j
index.append((second,first))
j+=1
deltas[i] = ( array[index[0][0]]-array[index[0][1]] + (2 * (array[index[1][0]]-array[index[1][1]])) ) / 10
return deltas
def extract(audio,rate): # Extract 20 MFCCs plus 20 delta features from an audio signal
mfcc_feature = mfcc.mfcc(audio,rate, 0.025, 0.01,20,nfft = 1200, appendEnergy = True) # 20 MFCCs per 25 ms frame with a 10 ms step, via python_speech_features
mfcc_feature = preprocessing.scale(mfcc_feature) # Standardise each coefficient to zero mean and unit variance
delta = calculate(mfcc_feature) # First-order delta features of the MFCCs
combined = np.hstack((mfcc_feature,delta)) # Stack MFCCs and deltas horizontally (as columns)
return combined
# Extraction features for each speaker
features = np.asarray(()) # Accumulator for the feature vectors of the current speaker
sourceFolder = [name for name in os.listdir(source)] # Folders inside the training data directory, one per speaker
print("Source Folders: ",sourceFolder)
sources = [] # Relative paths of the .wav files found in each speaker folder
for x in sourceFolder:
for name in os.listdir(source + x): # Scan trainingData/<x> for each speaker folder x
if name.endswith('.wav'): # Keep only .wav files
nn = "{}/{}".format(x, name) # Path relative to the source directory
sources.append(nn)
for path in sources:
path = path.strip()
print(path)
# Read the audio file (sampling rate and samples)
sr,audio = read(source + path)
print(source + path)
# Extract the 40-dimensional MFCC + delta feature vectors
vector = extract(audio,sr)
if features.size == 0: # No features collected yet for this speaker
features = vector # Start the accumulator with the first file's features
else:
features = np.vstack((features, vector)) # Stack arrays vertically (row-wise) for each subsequent file
if node == True:
gmm = GMM(n_components = 16, max_iter = 200, covariance_type='diag', n_init = 3) # Fit a 16-component diagonal-covariance Gaussian mixture
gmm.fit(features)
# Save the fitted model for this speaker to the models folder
picklefile = path.split("-")[0]+".gmm"
pickle.dump(gmm, open(modelsPath + picklefile, 'wb')) # Pickle requires a binary-mode file handle
print(" >> Modeling complete for file:", picklefile, "| Data points =", features.shape)
features = np.asarray(()) # Reset the accumulator for the next speaker
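A quick numeric check of the delta window used by calculate above (illustrative only): for interior frames the result is (x[i+1]-x[i-1] + 2*(x[i+2]-x[i-2]))/10, with frame indices clamped at the array edges.

# Each column of this 6-frame, 20-coefficient array ramps 0..5, so an
# interior delta is ((3-1) + 2*(4-0))/10 = 1.0.
demo = np.tile(np.arange(6, dtype=float).reshape(-1, 1), (1, 20))
print(calculate(demo)[2, 0])  # -> 1.0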
|
{"hexsha": "ae1cd3fb9a76d2647eea364273d4e005e510c9f0", "size": 3437, "ext": "py", "lang": "Python", "max_stars_repo_path": "train.py", "max_stars_repo_name": "efecanxrd/Speech-Recognition", "max_stars_repo_head_hexsha": "a593b1f455cfe9e098a8300f4e670c07abc2453b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-19T18:07:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-19T18:07:20.000Z", "max_issues_repo_path": "train.py", "max_issues_repo_name": "efecanxrd/Speech-Recognition", "max_issues_repo_head_hexsha": "a593b1f455cfe9e098a8300f4e670c07abc2453b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "train.py", "max_forks_repo_name": "efecanxrd/Speech-Recognition", "max_forks_repo_head_hexsha": "a593b1f455cfe9e098a8300f4e670c07abc2453b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.6363636364, "max_line_length": 240, "alphanum_fraction": 0.6587139948, "include": true, "reason": "import numpy,from scipy", "num_tokens": 854}
|
import logging
from contextlib import contextmanager
import os
from os.path import dirname, join
import socket
from subprocess import Popen
import time
import requests
import pytest
from seldon_core.proto import prediction_pb2
from seldon_core.proto import prediction_pb2_grpc
import seldon_core.microservice as microservice
from seldon_core.flask_utils import SeldonMicroserviceException
import grpc
import numpy as np
import signal
import unittest.mock as mock
from google.protobuf import json_format
@contextmanager
def start_microservice(app_location, tracing=False, grpc=False, envs={}):
p = None
try:
# PYTHONUNBUFFERED=x
# exec python -u microservice.py $MODEL_NAME $API_TYPE --service-type $SERVICE_TYPE --persistence $PERSISTENCE
env_vars = dict(os.environ)
env_vars.update(envs)
env_vars.update(
{
"PYTHONUNBUFFERED": "x",
"PYTHONPATH": app_location,
"APP_HOST": "127.0.0.1",
"PREDICTIVE_UNIT_SERVICE_PORT": "5000",
"PREDICTIVE_UNIT_METRICS_SERVICE_PORT": "6005",
"PREDICTIVE_UNIT_METRICS_ENDPOINT": "/metrics-endpoint",
}
)
with open(join(app_location, ".s2i", "environment")) as fh:
for line in fh.readlines():
line = line.strip()
if line:
key, value = line.split("=", 1)
key, value = key.strip(), value.strip()
if key and value:
env_vars[key] = value
if grpc:
env_vars["API_TYPE"] = "GRPC"
cmd = (
"seldon-core-microservice",
env_vars["MODEL_NAME"],
env_vars["API_TYPE"],
"--service-type",
env_vars["SERVICE_TYPE"],
"--persistence",
env_vars["PERSISTENCE"],
)
if tracing:
cmd = cmd + ("--tracing",)
logging.info("starting: %s", " ".join(cmd))
logging.info("cwd: %s", app_location)
# stdout=PIPE, stderr=PIPE,
p = Popen(cmd, cwd=app_location, env=env_vars, preexec_fn=os.setsid)
time.sleep(1)
for q in range(10):
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
r1 = s1.connect_ex(("127.0.0.1", 5000))
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
r2 = s2.connect_ex(("127.0.0.1", 6005))
if r1 == 0 and r2 == 0:
break
time.sleep(5)
else:
raise RuntimeError("Server did not bind to 127.0.0.1:5000")
yield
finally:
if p:
os.killpg(os.getpgid(p.pid), signal.SIGTERM)
@pytest.mark.parametrize("tracing", [(False), (True)])
def test_model_template_app_rest(tracing):
with start_microservice(
join(dirname(__file__), "model-template-app"), tracing=tracing
):
data = '{"data":{"names":["a","b"],"ndarray":[[1.0,2.0]]}}'
response = requests.get(
"http://127.0.0.1:5000/predict", params="json=%s" % data
)
response.raise_for_status()
assert response.json() == {
"data": {"names": ["t:0", "t:1"], "ndarray": [[1.0, 2.0]]},
"meta": {},
}
data = (
'{"request":{"data":{"names":["a","b"],"ndarray":[[1.0,2.0]]}},'
'"response":{"meta":{"routing":{"router":0}},"data":{"names":["a","b"],'
'"ndarray":[[1.0,2.0]]}},"reward":1}'
)
response = requests.get(
"http://127.0.0.1:5000/send-feedback", params="json=%s" % data
)
response.raise_for_status()
assert response.json() == {"data": {"ndarray": []}, "meta": {}}
@pytest.mark.parametrize("tracing", [(False), (True)])
def test_model_template_app_rest_tags(tracing):
with start_microservice(
join(dirname(__file__), "model-template-app"), tracing=tracing
):
data = '{"meta":{"tags":{"foo":"bar"}},"data":{"names":["a","b"],"ndarray":[[1.0,2.0]]}}'
response = requests.get(
"http://127.0.0.1:5000/predict", params="json=%s" % data
)
response.raise_for_status()
assert response.json() == {
"data": {"names": ["t:0", "t:1"], "ndarray": [[1.0, 2.0]]},
"meta": {"tags": {"foo": "bar"}},
}
@pytest.mark.parametrize("tracing", [(False), (True)])
def test_model_template_app_rest_metrics(tracing):
with start_microservice(
join(dirname(__file__), "model-template-app"), tracing=tracing
):
data = '{"meta":{"metrics":[{"key":"mygauge","type":"GAUGE","value":100}]},"data":{"names":["a","b"],"ndarray":[[1.0,2.0]]}}'
response = requests.get(
"http://127.0.0.1:5000/predict", params="json=%s" % data
)
response.raise_for_status()
assert response.json() == {
"data": {"names": ["t:0", "t:1"], "ndarray": [[1.0, 2.0]]},
"meta": {"metrics": [{"key": "mygauge", "type": "GAUGE", "value": 100}]},
}
@pytest.mark.parametrize("tracing", [(False), (True)])
def test_model_template_app_rest_metrics_endpoint(tracing):
with start_microservice(
join(dirname(__file__), "model-template-app"), tracing=tracing
):
response = requests.get("http://127.0.0.1:6005/metrics-endpoint")
# This just tests if endpoint exists and replies with 200
assert response.status_code == 200
@pytest.mark.parametrize("tracing", [(False), (True)])
def test_model_template_app_rest_submodule(tracing):
with start_microservice(
join(dirname(__file__), "model-template-app2"), tracing=tracing
):
data = '{"data":{"names":["a","b"],"ndarray":[[1.0,2.0]]}}'
response = requests.get(
"http://127.0.0.1:5000/predict", params="json=%s" % data
)
response.raise_for_status()
assert response.json() == {
"data": {"names": ["t:0", "t:1"], "ndarray": [[1.0, 2.0]]},
"meta": {},
}
data = (
'{"request":{"data":{"names":["a","b"],"ndarray":[[1.0,2.0]]}},'
'"response":{"meta":{"routing":{"router":0}},"data":{"names":["a","b"],'
'"ndarray":[[1.0,2.0]]}},"reward":1}'
)
response = requests.get(
"http://127.0.0.1:5000/send-feedback", params="json=%s" % data
)
response.raise_for_status()
assert response.json() == {"data": {"ndarray": []}, "meta": {}}
@pytest.mark.parametrize("tracing", [(False), (True)])
def test_model_template_app_grpc(tracing):
with start_microservice(
join(dirname(__file__), "model-template-app"), tracing=tracing, grpc=True
):
data = np.array([[1, 2]])
datadef = prediction_pb2.DefaultData(
tensor=prediction_pb2.Tensor(shape=data.shape, values=data.flatten())
)
request = prediction_pb2.SeldonMessage(data=datadef)
channel = grpc.insecure_channel("0.0.0.0:5000")
stub = prediction_pb2_grpc.ModelStub(channel)
response = stub.Predict(request=request)
assert response.data.tensor.shape[0] == 1
assert response.data.tensor.shape[1] == 2
assert response.data.tensor.values[0] == 1
assert response.data.tensor.values[1] == 2
arr = np.array([1, 2])
datadef = prediction_pb2.DefaultData(
tensor=prediction_pb2.Tensor(shape=(2, 1), values=arr)
)
request = prediction_pb2.SeldonMessage(data=datadef)
feedback = prediction_pb2.Feedback(request=request, reward=1.0)
response = stub.SendFeedback(request=feedback)  # send the Feedback message built above, not the bare request
@pytest.mark.parametrize("tracing", [(False), (True)])
def test_model_template_app_grpc_tags(tracing):
with start_microservice(
join(dirname(__file__), "model-template-app"), tracing=tracing, grpc=True
):
data = np.array([[1, 2]])
datadef = prediction_pb2.DefaultData(
tensor=prediction_pb2.Tensor(shape=data.shape, values=data.flatten())
)
meta = prediction_pb2.Meta()
json_format.ParseDict({"tags": {"foo": "bar"}}, meta)
request = prediction_pb2.SeldonMessage(data=datadef, meta=meta)
channel = grpc.insecure_channel("0.0.0.0:5000")
stub = prediction_pb2_grpc.ModelStub(channel)
response = stub.Predict(request=request)
assert response.data.tensor.shape[0] == 1
assert response.data.tensor.shape[1] == 2
assert response.data.tensor.values[0] == 1
assert response.data.tensor.values[1] == 2
assert response.meta.tags["foo"].string_value == "bar"
@pytest.mark.parametrize("tracing", [(False), (True)])
def test_model_template_app_grpc_metrics(tracing):
with start_microservice(
join(dirname(__file__), "model-template-app"), tracing=tracing, grpc=True
):
data = np.array([[1, 2]])
datadef = prediction_pb2.DefaultData(
tensor=prediction_pb2.Tensor(shape=data.shape, values=data.flatten())
)
meta = prediction_pb2.Meta()
json_format.ParseDict(
{"metrics": [{"key": "mygauge", "type": "GAUGE", "value": 100}]}, meta
)
request = prediction_pb2.SeldonMessage(data=datadef, meta=meta)
channel = grpc.insecure_channel("0.0.0.0:5000")
stub = prediction_pb2_grpc.ModelStub(channel)
response = stub.Predict(request=request)
assert response.data.tensor.shape[0] == 1
assert response.data.tensor.shape[1] == 2
assert response.data.tensor.values[0] == 1
assert response.data.tensor.values[1] == 2
assert response.meta.metrics[0].key == "mygauge"
assert response.meta.metrics[0].value == 100
def test_model_template_app_tracing_config():
envs = {
"JAEGER_CONFIG_PATH": join(dirname(__file__), "tracing_config/tracing.yaml")
}
with start_microservice(
join(dirname(__file__), "model-template-app"), tracing=True, envs=envs
):
data = '{"data":{"names":["a","b"],"ndarray":[[1.0,2.0]]}}'
response = requests.get(
"http://127.0.0.1:5000/predict", params="json=%s" % data
)
response.raise_for_status()
assert response.json() == {
"data": {"names": ["t:0", "t:1"], "ndarray": [[1.0, 2.0]]},
"meta": {},
}
data = (
'{"request":{"data":{"names":["a","b"],"ndarray":[[1.0,2.0]]}},'
'"response":{"meta":{"routing":{"router":0}},"data":{"names":["a","b"],'
'"ndarray":[[1.0,2.0]]}},"reward":1}'
)
response = requests.get(
"http://127.0.0.1:5000/send-feedback", params="json=%s" % data
)
response.raise_for_status()
assert response.json() == {"data": {"ndarray": []}, "meta": {}}
def test_model_template_bad_params():
params = [
join(dirname(__file__), "model-template-app"),
"seldon-core-microservice",
"REST",
"--parameters",
'[{"type":"FLOAT","name":"foo","value":"abc"}]',
]
with mock.patch("sys.argv", params):
with pytest.raises(SeldonMicroserviceException):
microservice.main()
def test_model_template_bad_params_type():
params = [
join(dirname(__file__), "model-template-app"),
"seldon-core-microservice",
"REST",
"--parameters",
'[{"type":"FOO","name":"foo","value":"abc"}]',
]
with mock.patch("sys.argv", params):
with pytest.raises(SeldonMicroserviceException):
microservice.main()
@mock.patch("seldon_core.microservice.os.path.isfile", return_value=True)
def test_load_annotations(mock_isfile):
from io import StringIO
read_data = [
('foo="bar"', {"foo": "bar"}),
(' foo = "bar" ', {"foo": "bar"}),
('key= "assign==="', {"key": "assign==="}),
]
for data, expected_annotation in read_data:
with mock.patch("seldon_core.microservice.open", return_value=StringIO(data)):
assert microservice.load_annotations() == expected_annotation
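For context, a minimal sketch of the kind of user model these tests drive (the actual model-template-app contents are not shown here; seldon-core's Python wrapper calls a plain class whose predict method returns the transformed payload):

# Hypothetical MyModel.py for a model-template-app-style project.
class MyModel:
    def predict(self, X, features_names=None):
        # Echo the input back; the tests above only assert this round trip.
        return X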
|
{"hexsha": "8a5260016795c0c905637aaa9b1d4e779cee6c9a", "size": 12158, "ext": "py", "lang": "Python", "max_stars_repo_path": "python/tests/test_microservice.py", "max_stars_repo_name": "Ogaday/seldon-core", "max_stars_repo_head_hexsha": "df61cac5fda069e381c0baa1ba4d24d724e8c062", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-31T14:52:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-31T14:52:17.000Z", "max_issues_repo_path": "python/tests/test_microservice.py", "max_issues_repo_name": "Ogaday/seldon-core", "max_issues_repo_head_hexsha": "df61cac5fda069e381c0baa1ba4d24d724e8c062", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 120, "max_issues_repo_issues_event_min_datetime": "2020-04-27T09:48:02.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-26T06:26:10.000Z", "max_forks_repo_path": "python/tests/test_microservice.py", "max_forks_repo_name": "josephglanville/seldon-core", "max_forks_repo_head_hexsha": "34ab0c33c55879ebe3ea3009ca64b0b47d18896c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4092307692, "max_line_length": 133, "alphanum_fraction": 0.5741898339, "include": true, "reason": "import numpy", "num_tokens": 3109}
|
from unittest import TestCase
import numpy as np
import pandas as pd
from pandas.testing import assert_series_equal
from datavalid.column_schema import ColumnSchema
from datavalid.exceptions import ColumnValidationError
class ColumnSchemaTestCase(TestCase):
def test_validate(self):
field = ColumnSchema(
"test_field", unique=True, no_na=True, options=['a', 'b', 'c']
)
field.validate(pd.Series(['b', 'c', 'a']))
with self.assertRaises(ColumnValidationError) as cm:
field.validate(pd.Series(['b', 'a', 'b']))
assert_series_equal(
cm.exception.values, pd.Series(['b']))
self.assertEqual(cm.exception.failed_check, 'unique')
with self.assertRaises(ColumnValidationError) as cm:
field.validate(pd.Series(['b', 'a', np.NaN]))
assert_series_equal(
cm.exception.values, pd.Series([np.NaN]), check_dtype=False
)
self.assertEqual(cm.exception.failed_check, 'no_na')
with self.assertRaises(ColumnValidationError) as cm:
field.validate(pd.Series(['d', 'a', 'c']))
assert_series_equal(cm.exception.values, pd.Series(['d']))
self.assertEqual(cm.exception.failed_check, 'options')
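The three failure modes asserted above reduce to simple pandas predicates; a sketch of the idea (illustrative, not the datavalid implementation):

import pandas as pd
s = pd.Series(['b', 'a', 'b'])
dupes = s[s.duplicated()]              # non-empty -> fails the "unique" check
missing = s[s.isna()]                  # non-empty -> fails the "no_na" check
invalid = s[~s.isin(['a', 'b', 'c'])]  # non-empty -> fails the "options" check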
|
{"hexsha": "aa648b8ad0d3d1a4dbd1e5a8e5e2e6d957e1257a", "size": 1256, "ext": "py", "lang": "Python", "max_stars_repo_path": "datavalid/test_column_schema.py", "max_stars_repo_name": "pckhoi/datavalid", "max_stars_repo_head_hexsha": "fea40936261dcbcfd144a15a498abf0b556c64f1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2021-05-20T03:07:09.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-02T15:59:59.000Z", "max_issues_repo_path": "datavalid/test_column_schema.py", "max_issues_repo_name": "pckhoi/datavalid", "max_issues_repo_head_hexsha": "fea40936261dcbcfd144a15a498abf0b556c64f1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "datavalid/test_column_schema.py", "max_forks_repo_name": "pckhoi/datavalid", "max_forks_repo_head_hexsha": "fea40936261dcbcfd144a15a498abf0b556c64f1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.8888888889, "max_line_length": 74, "alphanum_fraction": 0.6560509554, "include": true, "reason": "import numpy", "num_tokens": 254}
|
/*
* parser.cpp
*
* Created on: Apr 20, 2016
* Author: zmij
*/
#include <wire/idl/parser.hpp>
#include <iostream>
#include <iomanip>
#include <functional>
#include <boost/optional.hpp>
namespace wire {
namespace idl {
namespace parser {
parser::parser(::std::string const& cnt)
: contents{ cnt }, state_{contents}
{
}
ast::global_namespace_ptr
parser::parse()
{
namespace qi = ::boost::spirit::qi;
auto sb = contents.data();
auto se = sb + contents.size();
tokens_type tokens;
token_iterator iter = tokens.begin(sb, se);
token_iterator end = tokens.end();
grammar_type grammar{ tokens, state_ };
bool r = qi::phrase_parse(iter, end, grammar, qi::in_state("WS")[tokens.self]);
if (!r || iter != end) {
auto loc = state_.get_location( ::std::distance(contents.data(), sb) );
throw syntax_error(loc, "Unexpected token");
}
return state_.get_tree();
}
//----------------------------------------------------------------------------
parser_state::parser_state(::std::string const& contents,
include_dir_list const& include_dirs)
: stream_begin(contents.data()),
loc_jumps{ {0, source_location{}} },
global_{ ast::global_namespace::create() },
scopes_{ ::std::make_shared< namespace_scope >( global_ ) },
include_dirs_{ include_dirs }
{
}
parser_scope&
parser_state::current()
{
return *scopes_.back();
}
ast::scope_ptr
parser_state::ast_scope()
{
return current().scope();
}
void
parser_state::update_location(base_iterator p, source_location const& loc)
{
loc_jumps[ ::std::distance(stream_begin, p) ] = loc;
global_->set_current_compilation_unit(loc.file);
}
source_location
parser_state::get_location(::std::size_t pos) const
{
auto f = --loc_jumps.upper_bound( pos );
source_location loc = f->second;
for (auto c = stream_begin + f->first; c != stream_begin + pos; ++c) {
if (*c == '\n') {
loc.character = 0;
++loc.line;
} else {
++loc.character;
}
}
return loc;
}
void
parser_state::start_namespace(::std::size_t pos, ::std::string const& name)
try {
scopes_.push_back(current().start_namespace(pos, name));
attach_annotations(ast_scope());
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot declare namespace '" << name << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::start_structure(::std::size_t pos, ::std::string const& name)
try {
ast::structure_ptr st = ast_scope()->add_type< ast::structure >(pos, name);
scopes_.push_back( ::std::make_shared< structure_scope >(st) );
attach_annotations(st);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot declare structure '" << name << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::start_interface(::std::size_t pos, ::std::string const& name,
optional_type_list const& ancestors_names)
try {
// check the ancestor list
ast::interface_list ancestors;
if (ancestors_names.is_initialized()) {
auto const& names = *ancestors_names;
for (auto const& an : names) {
ast::type_ptr t = ast_scope()->find_type(an, pos);
if (!t) {
::std::ostringstream os;
os << "Parent data type '" << an << "' for interface '"
<< name << "' not found";
throw syntax_error(get_location(pos), os.str());
}
ast::interface_ptr ai = ast::dynamic_type_cast< ast::interface >(t);
if (!ai) {
::std::ostringstream os;
os << "Parent data type '" << an << "' for interface '"
<< name << "' is not an interface";
throw syntax_error(get_location(pos), os.str());
}
ancestors.push_back(ai);
}
}
ast::interface_ptr iface = ast_scope()->add_type< ast::interface >(pos, name, ancestors);
scopes_.push_back(::std::make_shared< interface_scope >(iface));
attach_annotations(iface);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot declare interface '" << name << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::start_class(::std::size_t pos, ::std::string const& name,
optional_type_list const& ancestors_names)
try {
ast::class_ptr parent;
ast::interface_list ancestors;
if (ancestors_names.is_initialized()) {
auto const& names = *ancestors_names;
for (auto const& an : names) {
ast::type_ptr t = ast_scope()->find_type(an, pos);
if (!t) {
::std::ostringstream os;
os << "Parent data type '" << an << "' for class '"
<< name << "' not found";
throw syntax_error(get_location(pos), os.str());
}
ast::class_ptr pnt = ast::dynamic_type_cast< ast::class_ >(t);
if (pnt) {
if (parent) {
::std::ostringstream os;
os << "A class cannot have more than one class ancestor. '"
<< name << "' class has more: '" << parent->get_type_name()
<< "' and '" << pnt->get_type_name() << "'";
throw syntax_error(get_location(pos), os.str());
}
parent = pnt;
} else {
ast::interface_ptr ai = ast::dynamic_type_cast< ast::interface >(t);
if (!ai) {
::std::ostringstream os;
os << "Parent data type '" << an << "' for class '"
<< name << "' is not an interface";
throw syntax_error(get_location(pos), os.str());
}
ancestors.push_back(ai);
}
}
}
ast::class_ptr cl = ast_scope()->add_type< ast::class_ >(pos, name, parent, ancestors);
scopes_.push_back(::std::make_shared< class_scope >(cl));
attach_annotations(cl);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot declare class '" << name << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::start_exception(::std::size_t pos, ::std::string const& name,
optional_type const& parent_name)
try {
// check the ancestor
ast::exception_ptr parent;
if (parent_name.is_initialized()) {
ast::type_ptr t = ast_scope()->find_type( *parent_name, pos );
if (!t) {
::std::ostringstream os;
os << "Parent data type '" << *parent_name << "' for exception '"
<< name << "' not found";
throw syntax_error(get_location(pos), os.str());
}
parent = ast::dynamic_type_cast< ast::exception >( t );
if (!parent) {
::std::ostringstream os;
os << "Parent data type '" << *parent_name << "' for exception '"
<< name << "' is not an exception";
throw syntax_error(get_location(pos), os.str());
}
}
ast::exception_ptr ex = ast_scope()->add_type< ast::exception >( pos, name, parent );
scopes_.push_back(::std::make_shared< exception_scope >(ex));
attach_annotations(ex);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot declare exception '" << name << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::end_scope(::std::size_t pos)
{
if (scopes_.size() > 1)
scopes_.pop_back();
else
throw syntax_error( get_location(pos), "Cannot end scope here" );
}
void
parser_state::declare_enum(::std::size_t pos, grammar::enum_decl const& decl)
try {
ast::enumeration_ptr en = ast_scope()->add_type< ast::enumeration >(pos, decl.name, decl.constrained);
for (auto const& v : decl.enumerators) {
en->add_enumerator(pos, v.name, v.init);
}
attach_annotations(en);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot declare type '" << decl.name << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::add_type_alias(::std::size_t pos, grammar::type_alias_decl const& decl)
try {
ast::type_ptr aliased = current().scope()->find_type(decl.second, pos);
if (!aliased) {
::std::ostringstream os;
os << "Data type '" << decl.second << "' not found in scope "
<< current().scope()->get_qualified_name();
throw grammar_error{pos, os.str()};
}
ast::type_ptr ta = ast_scope()->add_type< ast::type_alias >(pos, decl.first, aliased);
attach_annotations(ta);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot declare type alias '" << decl.first << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::forward_declare(::std::size_t pos, grammar::fwd_decl const& decl)
try {
ast::forward_declaration_ptr fwd =
ast_scope()->add_type< ast::forward_declaration >(pos, decl.second, decl.first);
attach_annotations(fwd);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot forward declare type '" << decl.first << " " << decl.second << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::add_constant(::std::size_t pos, grammar::data_member_decl const& decl)
try {
ast::type_ptr t = ast_scope()->find_type(decl.type, pos);
if (!t) {
::std::ostringstream os;
os << "Data type '" << decl.type << "' not found in scope "
<< ast_scope()->get_qualified_name();
throw syntax_error{get_location(pos), os.str()};
}
if (!decl.init.is_initialized()) {
::std::ostringstream os;
os << "No initializer specified for constant '" << decl.type << " " << decl.name
<< "' in scope " << ast_scope()->get_qualified_name();
throw syntax_error{get_location(pos), os.str()};
}
// TODO Check compatible init
// string - quoted literal
// integral types - integral literals
// boolean types - bool or integral literals
// sequences, maps and arrays - initializer lists
// struct - initializer lists
ast::constant_ptr var = ast_scope()->add_constant(pos, decl.name, t, *decl.init);
attach_annotations(var);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot add constant '" << decl.name << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::add_data_member(::std::size_t pos, grammar::data_member_decl const& decl)
try {
ast::variable_ptr var = current().add_data_member(pos, decl);
attach_annotations(var);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot add data member '" << decl.type << " " << decl.name << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::add_func_member(::std::size_t pos, grammar::function_decl const& decl)
try {
ast::function_ptr func = current().add_func_member(pos, decl);
attach_annotations(func);
} catch (syntax_error const&) {
throw;
} catch (ast::entity_conflict const& e) {
::std::ostringstream os;
os << "Cannot add function member '" << decl.name << "'\n"
<< get_location(e.previous->decl_position()) << ": note: Previously declared here";
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), os.str());
} catch (grammar_error const& e) {
throw syntax_error( get_location(e.pos != 0 ? e.pos : pos), e.what() );
} catch (::std::exception const& e) {
throw syntax_error( get_location(pos), e.what() );
}
void
parser_state::add_annotations(::std::size_t pos, grammar::annotation_list const& annotations)
{
current_annotations_.insert(current_annotations_.end(),
annotations.begin(), annotations.end());
}
void
parser_state::attach_annotations(ast::entity_ptr en)
{
if (!current_annotations_.empty()) {
en->add_annotations(current_annotations_);
current_annotations_.clear();
}
}
//----------------------------------------------------------------------------
parser_scope::scope_ptr
parser_scope::start_namespace_impl(::std::size_t pos, ::std::string const& name)
{
::std::ostringstream os;
os << "Cannot start a namespace in scope " << scope()->get_qualified_name();
throw grammar_error{ pos, os.str() };
}
ast::variable_ptr
parser_scope::add_data_member_impl(::std::size_t pos, grammar::data_member_decl const& decl)
{
::std::ostringstream os;
os << "Cannot add a data member in scope " << scope()->get_qualified_name();
throw grammar_error{ pos, os.str() };
}
ast::function_ptr
parser_scope::add_func_member_impl(::std::size_t pos, grammar::function_decl const& decl)
{
::std::ostringstream os;
os << "Cannot add a function member in scope " << scope()->get_qualified_name();
throw grammar_error{ pos, os.str() };
}
//----------------------------------------------------------------------------
parser_scope::scope_ptr
namespace_scope::start_namespace_impl(::std::size_t pos, ::std::string const& name)
{
ast::namespace_ptr ns = scope< ast::namespace_ >()->add_namespace(pos, name);
return ::std::make_shared< namespace_scope >(ns);
}
//----------------------------------------------------------------------------
ast::variable_ptr
structure_scope::add_data_member_impl(::std::size_t pos, grammar::data_member_decl const& decl)
{
ast::structure_ptr st = scope< ast::structure >();
ast::type_ptr t = st->find_type(decl.type, pos);
if (!t) {
::std::ostringstream os;
os << "Data type '" << decl.type << "' not found in scope " << st->get_qualified_name();
throw grammar_error{pos, os.str()};
}
ast::templated_type_ptr tt = ast::dynamic_type_cast< ast::templated_type >(t);
if (tt) {
::std::ostringstream os;
os << "Cannot use template type '" << decl.type << "' without parameters";
throw grammar_error{pos, os.str()};
}
return st->add_data_member(pos, decl.name, t);
}
//----------------------------------------------------------------------------
ast::function_ptr
interface_scope::add_func_member_impl(::std::size_t pos, grammar::function_decl const& decl)
{
ast::interface_ptr iface = scope< ast::interface >();
ast::type_ptr ret = iface->find_type(decl.return_type, pos);
if (!ret) {
::std::ostringstream os;
os << "Return data type '" << decl.return_type << "' for function '"
<< decl.name << "' not found in scope " << iface->get_qualified_name();
throw grammar_error(pos, os.str());
}
ast::function::function_params params;
ast::exception_list throw_spec;
for (auto const& p : decl.params) {
auto t = iface->find_type(p.first, pos);
if (!t) {
::std::ostringstream os;
os << "Data type '" << p.first << "' for function '"
<< decl.name << "' parameter '" << p.second
<< "' not found in scope " << iface->get_qualified_name();
throw grammar_error(pos, os.str());
}
params.push_back(::std::make_pair( t, p.second ));
}
for (auto const& e : decl.throw_spec) {
auto t = iface->find_type(e, pos);
if (!t) {
::std::ostringstream os;
os << "Data type '" << e << "' from function '" << decl.name
<< "' throw specification not found in scope "
<< iface->get_qualified_name();
throw grammar_error(pos, os.str());
}
auto ex = ast::dynamic_type_cast< ast::exception >(t);
if (!ex) {
::std::ostringstream os;
os << "Data type '" << e << "' from function '" << decl.name
<< "' throw specification is not an exception";
throw grammar_error(pos, os.str());
}
throw_spec.push_back(ex);
}
return iface->add_function(pos, decl.name, ret, decl.const_qualified, params, throw_spec);
}
} // namespace parser
} // namespace idl
} // namespace wire
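The offset-to-location mapping in parser_state::get_location above (binary search over a table of location "jumps", then a forward walk counting newlines) can be sketched in a few lines of Python; all names here are illustrative:

import bisect

def get_location(text, jumps, pos):
    # jumps: sorted list of (stream offset, (file, line, char)) entries,
    # seeded with (0, (main_file, 0, 0)) just like loc_jumps in the parser.
    offsets = [offset for offset, _ in jumps]
    i = bisect.bisect_right(offsets, pos) - 1   # same as --upper_bound(pos)
    start, (fname, line, char) = jumps[i]
    for ch in text[start:pos]:
        if ch == "\n":
            line, char = line + 1, 0
        else:
            char += 1
    return fname, line, char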
|
{"hexsha": "dbeb5b006a0da5eef8206e1b57a6b294640a1c28", "size": 19964, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "src/wire/idl/parser.cpp", "max_stars_repo_name": "zmij/wire", "max_stars_repo_head_hexsha": "9981eb9ea182fc49ef7243eed26b9d37be70a395", "max_stars_repo_licenses": ["Artistic-2.0"], "max_stars_count": 5.0, "max_stars_repo_stars_event_min_datetime": "2016-04-07T19:49:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-03T05:24:11.000Z", "max_issues_repo_path": "src/wire/idl/parser.cpp", "max_issues_repo_name": "zmij/wire", "max_issues_repo_head_hexsha": "9981eb9ea182fc49ef7243eed26b9d37be70a395", "max_issues_repo_licenses": ["Artistic-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/wire/idl/parser.cpp", "max_forks_repo_name": "zmij/wire", "max_forks_repo_head_hexsha": "9981eb9ea182fc49ef7243eed26b9d37be70a395", "max_forks_repo_licenses": ["Artistic-2.0"], "max_forks_count": 1.0, "max_forks_repo_forks_event_min_datetime": "2020-12-27T11:47:31.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-27T11:47:31.000Z", "avg_line_length": 37.3158878505, "max_line_length": 106, "alphanum_fraction": 0.5930174314, "num_tokens": 4754}
|
SUBROUTINE ISCATR
C* ROUTINE FOR THEORETICALLY COMPUTING INCOHERENT POWER SPECTRA
C AND THEIR DERIVATIVES WITH RESPECT TO ION AND ELECTRON TEMPERATURES,
C ELECTRON DENSITY, COLLISION FREQUENCY AND ION COMPOSITION.
C BY WES SWARTZ
C =============================================================
C
C CALL "ISCATR" ONCE FOR SYSTEM INITIALIZATION.
C CALL "ISFITC" ONCE FOR INITIALIZATION OF EACH FITTING TYPE.
C CALL "SPECTR" FOR FULL SPECTRA OR ACF IN ONE CALL AFTER "ISCATR"
C AND "ISFITC".
C
C DEFINITION OF BASIC TERMS:
C IONOSPHERIC PARAMETERS
C "TI"=ION TEMPERATURE (K).
C "TE"=ELECTRON TEMPERATURE (K).
C "EN"=ELECTRON DENSITY (CM-3)
C "CF"=COLLISION FREQENCY (KHZ)
C "FRACTN(J-1)"=FRACTIONAL COMPOSITION OF J'TH ION SPECIES
C "SYSTEM"=SYSTEM SCALING FACTOR
C "TR"="TE/TI"
COMMON /PARM/TI,TE,EN,CF,FRACTN(2),SYSTEM /CTR/TR
C
C GEOGRAPHIC PARAMETERS:
C "GFE"=ELECTRON GYRO FREQUENCY (KHZ)
C "ALPHA"=ANGLE BETWEEN MAGNETIC FIELD AND SCATTERING VECTOR (DEG).
C "NJ"=NUMBER OF ION SPECIES.
C "AM(J)"=MOL MASS OF J'TH ION SPECIES
C = 1. FOR H+
C = 4. FOR HE+
C =16. FOR O+
C =30. FOR NO+
C =32. FOR O2+
C =31. FOR COMBINING NO+ AND O2+
C =55.8 FOR FE+
C =24.3 FOR MG+
C
C "DTAU"=BASIC SAMPLE INTERVAL FOR ACF (MS).
C SYSTEM PARAMETERS:
C "F(K)"=FREQUENCY VALUES FOR SPECTRUM (KHZ).
C "COSINE(L,K)"=ARRAY FOR FOURIER TRANSFORM
C "TF"=TRANSMITTER FREQUENCY (MHZ).
C =49.2 FOR JICAMARCA.
C =430. FOR ARECIBO.
C =1290. FOR CHATANIKA.
C =933. FOR EISCAT.
C "NDF"=NUMBER OF FREQUENCY INTERVALS.
C "TAU(L)"=ACF LAGS (US).
COMMON /PM/DTAU,TF0,TF,GFE,ALPHA,AM(3,9),NDF,NJ,IREGN
C
C CONTROL PARAMETERS:
C "IFIT(I)"=1 FOR FITTING I'TH PARAMETER.
C "IFIT(I)"=4 FOR HOLDING I'TH PARAMETER CONSTANT.
C
C SPECIAL CASES:
C "IFIT(2)"=1 FOR FITTING "TE".
C =2 FOR "TE/TI" FIXED.
C =4 FOR "TE" FIXED.
C "IFIT(5)"=1 FOR FITTING COMPSITION.
C =2 FOR GIVEN NONZERO COMPOSITION WITHOUT FITTING.
COMMON /FITC/IFITR(8,9),IFFIT(7),NP
C
C COMPLEX OPERATIONS HAVE BEEN ELIMINATED.
C CODE FOR SCATTERING PERPENDICULAR TO MAGNETIC FIELD IS NOT IMPLEMENTED
C IN THIS VERSION.
C
REAL SQMI(3),YIR1(100),YII1(100),GP(7),S(100)
DIMENSION COSINE(36,100),FUNC(100,6),DYIRDP(100,2),DYIIDP(100,2),
> IP(7),IFIT(8),DSDTI(100)
C
COMMON /FUNCT/ACF(36,6),REFUNC(36,6),AIFUNC(36,6),TAU(36),NLAGS
> /CSPECT/YII(100),YIR(100),DYIRDT(100),DYIIDT(100),
> DYIRDC(100),DYIIDC(100)
COMMON /PLAS/ZR(100),ZI(100),THETA(100),PSIION,F(120),SCALEF,NDX
> /FITP/ITI,ITE,INE,ICF,IP1,IP2,ISY
EQUIVALENCE (GP(1),TI), (IP(1),ITI), (FUNC(1,1),YII(1),S(1)),
> (DYIRDP(1,2),YIR1(1)), (DYIIDP(1,2),YII1(1)), (DSDTI(1),YIR(1))
C======================================================================
NDX = NDF
TR = 1.
C SET UP COSINE ARRAY FOR FOURIER TRANSFORM:
NDFM = NDF-1
DO 2 K = 2,NDFM
2 COSINE(1,K) = (F(K+1)-F(K-1))*0.5
COSINE(1,1) = 0.5*F(2)
COSINE(1,NDF) = 0.5*(F(NDF)-F(NDFM))
DO 3 L = 2,36
TP2 = TAU(L)*6.2831854
TP2SQ = TP2**2
COSINE(L,1) = (1.-COS(TP2*F(2)))/(F(2)*TP2SQ)
DO 3 K = 2,NDF
COSINE(L,K) = ((COS(TP2*F(K))-COS(TP2*F(K-1)))/
> (F(K)-F(K-1))
> +(COS(TP2*F(K))-COS(TP2*F(K+1)))
> /(F(K+1)-F(K)))/TP2SQ
3 CONTINUE
RETURN
C =====================================================================
ENTRY ISFITC
C
C NUMERICAL CONSTANT FOR FREQUENCY NORMALIZATION IS:
C 27226. = C*SQRT(ME/2./BOLTZK)/2.
FNORMC = 27.226/TF
PHIEL = FNORMC*GFE
C NUMERICAL CONSTANT IS PIE/180.
AL = ALPHA*1.745329E-02
SINFAC = .5*(SIN(AL)/PHIEL)**2
COSAL = COS(AL)
COSAL2 = COSAL**2
C DEBYE LENGTH CONSTANT FACTOR IS:
C 8.36753E-12 = 16.*PIE**2*PERMITIVITY*BOLTZK/COUL**2/C**2
DEBYEC = 8.36753E-06*TF**2
C SET UP FITTING CONTROL PARAMETERS:
NPP = 1
DO 4 I = 1,7
IP(I) = NPP
IFIT(I) = IFITR(I,IREGN)
IF( IFIT(I) .NE. 1 ) GO TO 4
NP = NPP
NPP = NPP+1
IP(I) = NPP
IFFIT(NP) = I
4 CONTINUE
C
NJMF = 0
IF( IFIT(5) .LE. 2 ) NJMF = 1
IF( IFIT(6) .LE. 2 ) NJMF = 2
NJ = IFITR(8,IREGN)
IF( NJ .EQ. 1 ) NJMF = 0
C
C NUMERICAL CONSTANT IN THE FOLLOWING STATEMENT IS:
C 42.86445=SQRT( (M(PROTON)+M(NEUTRON))/M(ELECTRON)/2.0)
SQMIN = 428.6445
DO 5 J = 1,NJ
SQMI(J) = SQRT(AM(J,IREGN))*42.86445
IF( SQMI(J) .LT. SQMIN ) SQMIN = SQMI(J)
5 CONTINUE
RETURN
C ======================================================================
ENTRY SPECTR
C ----------------------------------------------------------------------
C DEFINE VARIOUS FACTORS WITH REPETITIVE USAGE:
IF( IFIT(2) .EQ. 2 ) TE = TI*TR
SQTE = SQRT(TE)
SQTI = SQRT(TI)
R5TI = 0.5/TI
C "DEBYE"=NORMALIZED DEBYE LENGTH CORRECTION FACTOR.
DEBYE = DEBYEC/EN
TWODEB = 2.*DEBYE
TWOBTE = 2./TE
TWOBTI = 2./TI
NJM = NJ-1
C SET UP FRACTIONAL COMPOSITION FACTORS:
FRCTN1 = 1.0
IF( NJ .NE. 1 ) THEN
DO 10 JM = 1,NJM
10 FRCTN1 = FRCTN1-FRACTN(JM)
ENDIF
THETC = FNORMC/SQTI
SCALEF = THETC*SQMI(1)
C "CONST"=SPECTRUM SCALING FACTOR (PROPORTIONAL TO "EN")
CONST = SYSTEM*EN
C "TEX"=APPROXIMATION OF "EXP(-TE*SINFAC)" WHICH BREAKS DOWN FOR
C LARGE "TF" AND "TE".
TEX = 1.-TE*SINFAC
TEFAC = 0.5/TE+SINFAC/TEX
TEXBTE = TEX/TE
DPSIDC = FNORMC*SQMI(1)/SQTI
IF( CF .EQ. 0.0 ) GO TO 50
C ======================================================================
C I O N A D M I T A N C E FUNCTIONS INCLUDING C O L L I S I O N S.
C ASSUME "PSI" IS INDEPENDENT OF SPECIES MASS.
PSIION = CF*DPSIDC
PSII2 = PSIION*2.
DPSIDT =-PSIION*R5TI
C COMPUTE COMPLEX PLASMA DISPERSION FUNCTION FOR FIRST ION:
CALL PLASMA
THEMAX = THETA(NDX)
THEMIN = THEMAX
C COMPUTE COMPLEX ADMITANCE FUNCTION AND ITS DERIVATIVE FOR FIRST ION:
PSIZ = 1.-PSIION*ZI(1)
PSIZM = SQMI(1)/PSIZ
C MAKE "YIR1(1)" THE LIMIT OF "YIR/THETA" AS THETA GOES TO ZERO:
YIR1(1) = PSIZM*ZI(1)
YIR(1) = FRCTN1*YIR1(1)
DYRDPS = FRCTN1*PSIZM*(ZI(1)*(ZI(1)+PSII2)-2.)/PSIZ
DYIRDT(1) = DYRDPS*DPSIDT
DYIRDC(1) = DYRDPS*DPSIDC
FRCFQ = FRCTN1*DPSIDC
DO 39 K = 2,NDX
PSIZM = PSIION*(ZR(K)**2+ZI(K)**2)
YD = 1.-PSIION*(2.*ZI(K)-PSIZM)
YIR1(K) = THETA(K)*(ZI(K)-PSIZM)/YD
YII1(K) = 1.+THETA(K)*ZR(K)/YD
YIR(K) = FRCTN1*YIR1(K)
YII(K) = FRCTN1*YII1(K)
C
TH = 1.0/THETA(K)
C1 = YIR1(K)**2+YII1(K)*(1.-YII1(K))
C2 = TH-2.*THETA(K)
C3 = YIR1(K)*(1.-2.*YII1(K))
C4 = YIR1(K)*C2
C5 = PSII2*C1
DYRDTH = C4-C5
DYIDTH = PSII2*C3-TH+YII1(K)*C2
DYRDPS = TH*C1+DYIDTH
DYIDPS = C5-TH*C3-C4
C1 =-THETA(K)*R5TI
C
DYIRDT(K) = FRCTN1*(C1*DYRDTH+DPSIDT*DYRDPS)
DYIIDT(K) = FRCTN1*(C1*DYIDTH+DPSIDT*DYIDPS)
DYIRDC(K) = FRCFQ*DYRDPS
DYIIDC(K) = FRCFQ*DYIDPS
C
39 CONTINUE
IF( NJ .EQ. 1 ) GO TO 70
C ----------------------------------------------------------------------
DO 49 J = 2,NJ
JM = J-1
C COMPUTE COMPLEX PLASMA DISPERSION FUNCTION FOR OTHER IONS:
SCALEF = THETC*SQMI(J)
CALL PLASMA
IF( THETA(NDX) .GT. THEMAX ) THEMAX = THETA(NDX)
IF( THETA(NDX) .LT. THEMIN ) THEMIN = THETA(NDX)
C COMPUTE COMPLEX ADMITANCE AND DERIVATIVES FUNCTIONS FOR OTHER IONS:
PSIZ = 1.-PSIION*ZI(1)
PSIZM = SQMI(J)/PSIZ
C MAKE "YIRJ(1)" THE LIMIT OF "YIR/THETA" AS THETA GOES TO ZERO:
YR = PSIZM*ZI(1)
YIR(1) = YIR(1)+FRACTN(JM)*YR
DYRDPS = FRACTN(JM)*PSIZM*(ZI(1)*(ZI(1)+PSII2)-2.)/PSIZ
DYIRDT(1) = DYIRDT(1)+DYRDPS*DPSIDT
DYIRDC(1) = DYIRDC(1)+DYRDPS*DPSIDC
DYIRDP(1,JM) = YR-YIR1(1)
FRCFQ = FRACTN(JM)*DPSIDC
DO 49 K = 2,NDX
PSIZM = PSIION*(ZR(K)**2+ZI(K)**2)
YD = 1.-PSIION*(2.*ZI(K)-PSIZM)
YR = THETA(K)*(ZI(K)-PSIZM)/YD
YI = 1.+THETA(K)*ZR(K)/YD
YIR(K) = YIR(K)+FRACTN(JM)*YR
YII(K) = YII(K)+FRACTN(JM)*YI
C
TH = 1.0/THETA(K)
C1 = YR**2+YI*(1.-YI)
C2 = TH-2.*THETA(K)
C3 = YR*(1.-2.*YI)
C4 = YR*C2
C5 = PSII2*C1
DYRDTH = C4-C5
DYIDTH = PSII2*C3-TH+YI*C2
DYRDPS = TH*C1+DYIDTH
DYIDPS = C5-TH*C3-C4
C1 =-THETA(K)*R5TI
C
DYIRDT(K) = DYIRDT(K)+FRACTN(JM)*(C1*DYRDTH+DPSIDT*DYRDPS)
DYIIDT(K) = DYIIDT(K)+FRACTN(JM)*(C1*DYIDTH+DPSIDT*DYIDPS)
DYIRDC(K) = DYIRDC(K)+FRCFQ*DYRDPS
DYIIDC(K) = DYIIDC(K)+FRCFQ*DYIDPS
DYIRDP(K,JM) = YR-YIR1(K)
DYIIDP(K,JM) = YI-YII1(K)
49 CONTINUE
GO TO 70
C ======================================================================
C I O N A D M I T A N C E FUNCTIONS WITHOUT C O L L I S I O N S.
50 CF = 0.0
C COMPUTE COMPLEX PLASMA DISPERSION FUNCTION FOR FIRST ION:
PSIION = 0.0
CALL PLASMA
THEMAX = THETA(NDX)
THEMIN = THEMAX
C COMPUTE COMPLEX ADMITANCE FUNCTION AND ITS DERIVATIVE FOR FIRST ION:
YIR1(1) = ZI(1)*SQMI(1)
YIR(1) = FRCTN1*YIR1(1)
FRCFQ = FRCTN1*DPSIDC
FRCFQ2 = FRCFQ*2.
DYIRDT(1) = 0.0
DYIRDC(1) = FRCFQ*(ZI(1)*YIR1(1)-2.*SQMI(1))
DO 59 K = 2,NDX
YIR1(K) = THETA(K)*ZI(K)
YII1(K) = THETA(K)*ZR(K)+1.0
YIR(K) = FRCTN1*YIR1(K)
YII(K) = FRCTN1*YII1(K)
C
TH = 1.0/THETA(K)
IF( IFIT(4) .EQ. 1 ) THEN
DYIRDC(K) = FRCFQ*((YIR1(K)**2+YII1(K)*(1.-YII1(K)))*TH
> +YII1(K)*(TH-2.*THETA(K))-TH)
DYIIDC(K)=FRCFQ2*YIR1(K)*(THETA(K)+(YII1(K)-1.)*TH)
ENDIF
TEMP = THETA(K)**2/TI-R5TI
DYIRDT(K) = FRCTN1*TEMP*YIR1(K)
DYIIDT(K) = FRCTN1*(TEMP*YII1(K)+R5TI)
59 CONTINUE
IF( NJ .EQ. 1 ) GO TO 70
C ----------------------------------------------------------------------
C COMPUTE COMPLEX PLASMA DISPERSION FUNCTION FOR OTHER IONS:
DO 69 J = 2,NJ
IF( IFIT(J+3) .GT. 2 .AND. FRACTN(J-1) .EQ. 0.0 ) GO TO 69
JM = J-1
SCALEF = THETC*SQMI(J)
CALL PLASMA
IF( THETA(NDX) .GT. THEMAX ) THEMAX = THETA(NDX)
IF( THETA(NDX) .LT. THEMIN ) THEMIN = THETA(NDX)
C COMPUTE COMPLEX ADMITANCE AND DERIVATIVES FUNCTIONS FOR OTHER IONS:
YR = ZI(1)*SQMI(J)
YIR(1) = YIR(1)+FRACTN(JM)*YR
DYIRDP(1,JM) = YR-YIR1(1)
FRCFQ = FRACTN(JM)*DPSIDC
FRCFQ2 = FRCFQ*2.
DYIRDC(1) = DYIRDC(1)+FRCFQ*(ZI(1)*YR-2.*SQMI(J))
DO 69 K = 2,NDX
YR = THETA(K)*ZI(K)
YI = THETA(K)*ZR(K)+1.0
YIR(K) = YIR(K)+FRACTN(JM)*YR
YII(K) = YII(K)+FRACTN(JM)*YI
C
IF( IFIT(4) .EQ. 1 ) THEN
TH = 1.0/THETA(K)
DYIRDC(K) = DYIRDC(K)+FRCFQ*((YR**2+YI*(1.-YI))*TH
> +YI*(TH-2.*THETA(K))-TH)
DYIIDC(K) = DYIIDC(K)+FRCFQ2*YR*(THETA(K)+(YI-1.)*TH)
ENDIF
TEMP = THETA(K)**2/TI-R5TI
DYIRDT(K) = DYIRDT(K)+FRACTN(JM)*TEMP*YR
DYIIDT(K) = DYIIDT(K)+FRACTN(JM)*(TEMP*YI+R5TI)
C COMPUTE PARTIALS W.R.T. FRACTIONAL ION COMPOSITIONS:
DYIRDP(K,JM) = YR-YIR1(K)
DYIIDP(K,JM) = YI-YII1(K)
69 CONTINUE
C
C ======================================================================
C COMPUTE COMPLEX PLASMA DISPERSION FUNCTION FOR ELECTRONS:
70 SCALEF = FNORMC/SQTE/COSAL
CALL PLASMA
C ======================================================================
C CENTER FREQUENCY SPECTRUM AND PARTIAL DERIVATIVES.
SQTIN = SQTI*EN**2
QTINE = SQTIN*YIR(1)
TINEQ = QTINE*TI
TICD = TI*DEBYEC
CDTINE = TICD+EN
SQTEC = CDTINE*SQTE*ZI(1)/COSAL2
STECX = SQTEC*TEX*TE
YN = TINEQ+STECX*CDTINE
YM = TI*EN+TE*CDTINE
YN2YM = 2.*YN/YM
TEMP = FNORMC*CONST/YM**2
C ----------------------------------------------------------------------
C THEORETICAL SPECTRUM AT "F=0.0":
S(1) = TEMP*YN
C PARTIAL DERIVATIVES W.R.T. T E M P E R A T U R E S AT "F=0.0":
DSDTE = TEMP*CDTINE*(SQTEC*(1.5-2.5*SINFAC*TE)-YN2YM)
DSDTI(1) = TEMP*(1.5*QTINE+2.*DEBYEC*STECX-YN2YM*(EN+TE*DEBYEC)+
> SQTIN*TI*DYIRDT(1))
IF( IFIT(2) .EQ. 1 ) FUNC(1,ITE) = DSDTE
IF( IFIT(2) .EQ. 2 ) FUNC(1,2) = FUNC(1,2)+DSDTE*TR
C PARTIAL DERIVATIVES W.R.T. ELECTRON D E N S I T Y AT "F=0.0":
IF( IFIT(3) .EQ. 1 ) FUNC(1,INE) =
> TEMP*(3.*TINEQ+STECX*(TICD+3.*EN)-YN2YM*EN*(TI+TE))/EN
TEMP = TEMP*SQTIN*TI
C PARTIAL DERIVATIVES W.R.T. C O L L I S I O N FREQ. AT "F=0.0":
IF( IFIT(4) .EQ. 1 ) FUNC(1,ICF) = TEMP*DYIRDC(1)
NI = ICF
IF( NJMF .NE. 0 ) THEN
C PARTIAL DERIVATIVES W.R.T. ION C O M P O S I T I O N AT "F=0.0":
DO 75 J = 1,NJMF
NI = NI+1
75 FUNC(1,NI) = TEMP*DYIRDP(1,J)
ENDIF
DO 85 I = 1,NI
DO 85 L = 1,NLAGS
85 ACF(L,I) = FUNC(1,I)*COSINE(L,1)
C ======================================================================
KSMAX = 1
C LOOP OVER ALL FREQUENCIES.
DO 200 K = 2,NDX
C COMPUTE COMPLEX ADMITANCE FUNCTION AND ITS DERIVATIVE FOR ELECTRONS:
THE2 = THETA(K)**2
TEMP = TEX*THETA(K)
YER = TEMP*ZI(K)
YEI = 1.+TEMP*ZR(K)
C
TEMP = THE2/TE-TEFAC
DYERDT = TEMP*YER
DYEIDT = TEMP*(YEI-1.)+THE2*TEXBTE
C ----------------------------------------------------------------------
YERT = YER/TE
YEIT = YEI/TE
YEM = YERT**2+YEIT**2
YIRT = YIR(K)/TI
YMRT = YERT+YIRT
YIIT = YII(K)/TI
YIIH = YIIT+DEBYE
YIM = YIRT**2+YIIH**2
YMIT = YEIT+YIIH
YM = YMRT**2+YMIT**2
YN = YEM*YIR(K)+YIM*YER
YNBYM = YN/YM
TEMP = CONST/F(K)
C ----------------------------------------------------------------------
C THEORETICAL POWER SPECTRUM:
S(K) = TEMP*YNBYM
IF( S(K) .GT. S(KSMAX) ) KSMAX = K
C INTERMEDIATE DERIVATIVES W.R.T. TEMPERATURES:
DMYIR = DYIRDT(K)-YIRT
DMYII = DYIIDT(K)-YIIT
DYIMDT = TWOBTI*(YIRT*DMYIR+YIIH*DMYII)
DYMDTI = TWOBTI*(YMRT*DMYIR+YMIT*DMYII)
DMYER = DYERDT-YERT
DMYEI = DYEIDT-YEIT
DYEMDT = TWOBTE*(YERT*DMYER+YEIT*DMYEI)
DYMDTE = TWOBTE*(YMRT*DMYER+YMIT*DMYEI)
C PARTIAL DERIVATIVES OF SPECTRUM W.R.T. T E M P E R A T U R E S :
DSDTE = (TEMP*(YIM*DYERDT+YIR(K)*DYEMDT)-S(K)*DYMDTE)/YM
DSDTI(K) = (TEMP*(YEM*DYIRDT(K)+YER*DYIMDT)-S(K)*DYMDTI)/YM
C REDEFINE TEMPERATURE DERIVATIVES DEPENDING ON FIT CODE:
IF( IFIT(2) .EQ. 1 ) FUNC(K,ITE) = DSDTE
IF( IFIT(2) .EQ. 2 ) FUNC(K,2) = FUNC(K,2)+DSDTE*TR
C PARTIAL DERIVATIVES OF SPECTRUM W.R.T. ELECTRON D E N S I T Y :
IF( IFIT(3) .EQ. 1 ) FUNC(K,INE) = SYSTEM*(YNBYM+(YMIT*YNBYM-
> YER*YIIH)*TWODEB/YM)/F(K)
C PARTIAL DERIVATIVES OF SPECTRUM W.R.T. C O L L I S I O N FREQUENCY:
IF( IFIT(4) .EQ. 1 ) FUNC(K,ICF) = (TEMP*(YEM*DYIRDC(K)+
> YER*TWOBTI*(YIRT*DYIRDC(K)+YIIH*DYIIDC(K)))-
> S(K)*TWOBTI*(YMRT*DYIRDC(K)+YMIT*DYIIDC(K)))/YM
NI = ICF
IF( NJMF .NE. 0 ) THEN
C PARTIAL DERIVATIVES OF SPECTRUM W.R.T. ION C O M P O S I T I O N :
DO 60 J = 1,NJMF
NI = NI+1
60 FUNC(K,NI) = TEMP*(DYIRDP(K,J)*(YEM+TWOBTI*(YER*YIRT-YMRT*YNBYM)
> )+DYIIDP(K,J)*TWOBTI*(YER*YIIH-YMIT*YNBYM))/YM
ENDIF
DO 180 I = 1,NI
ACF(1,I) = ACF(1,I)+FUNC(K,I)*COSINE(1,K)
DO 180 L = 2,NLAGS
180 ACF(L,I) = ACF(L,I)+FUNC(K,I)*COSINE(L,K)
200 CONTINUE
IF( IFIT(7) .NE. 1 ) RETURN
DO 210 K = 1,NLAGS
210 ACF(K,NPP) = ACF(K,1)/SYSTEM
RETURN
END
C
C
C
SUBROUTINE PLASMA
C*THIS ROUTINE COMPUTES THE COMPLEX PLASMA DISPERSION FUNCTION
C GIVEN BY:
C Z(S)=I*SQRT(PIE)*EXPC(-S**2)*(1.+ERF(I*S))
C WHERE:
C I=SQRT(-1.) ; S=X+I*Y=COMPLEX ARGUMENT
C FOR ABS(Y).GT.1.0, THE CONTINUED FRACTION EXPANSION GIVEN BY FRIED
C AND CONTE (1961) IS USED; WHILE FOR ABS(Y).LE.1.0, THE FOLLOWING
C DIFFERENTIAL EQUATION IS SOLVED:
C D Z(S)
C ------ = -2.*(1.+S*Z(S))
C D S
C SUBJECT TO Z(0)=I*SQRT(PIE)
C
C "F(K)"=TRUE FREQUENCY.
C "X(K)"=NORMALIZED FREQUENCY.
C "SCALEF"=FREQUENCY SCALING FACTOR FOR NORMALIZATION.
C ----------------------------------------------------------------------
C BY WES SWARTZ
C WHEN "Y" IS ZERO OR VERY SMALL, AND "X"IS GREATER THAN 7., THEN
C "ZI(K)" IS SET ZERO AND "ZR(K)" IS COMPUTED FROM THE ASYMPTOTIC
C EXPANSION FOR LARGE REAL ARGUMENTS.
C WHEN "Y.GT.0.0 .AND. Y.LE.1.0" THEN CODE IS GOOD FOR "ABS(X).LE.15."
C WHILE FOR "ABS(X).GT.15." RESULTS STILL LOOK GOOD WHEN ZI IS ZEROED,
C BUT NO DEFINITIVE CHECKS HAVE BEEN MADE.
C ======================================================================
COMMON /PLAS/ZR(100),ZI(100),X(100),Y,F(120),SCALEF,NX
SR=0.0
SI=ABS(Y)
IF (SI.GE.1.0) GO TO 8
C CODE BELOW USES SOLUTION TO DIFFERENTIAL EQUATION WHERE
C "(CR,CI)=FUNF(H,S,Z)=H*(1.+S*Z)"
C IS THE MAIN FUNCTION FOR INTEGRATING THE DIFFERENTIAL EQUATION FOR Z.
C CASE FOR "0.0.LT.ABS(Y).AND.ABS(Y).LE.1.0":
C "Z=(0.,1.772454)". SQRT(PI)=1.7724539
ZZR=0.0
ZZI=1.772454
C IS ARGUMENT PURELY REAL? IF SO, SKIP "Y" INTEGRATION.
IF (Y.EQ.0.0) GO TO 4
SI=0.0
NY=INT(Y*50.+1.1)
DY=Y/FLOAT(NY)
H=2.*DY
H2=.5*DY
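C THE LOOP BELOW TAKES CLASSICAL FOURTH-ORDER RUNGE-KUTTA STEPS OF
C SIZE "DY"; THE FACTOR -2. OF THE DIFFERENTIAL EQUATION IS FOLDED
C INTO THE WEIGHTED COMBINATION (2*A0+4*A1+2*B0+B1)/6.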
DO 2 K=1,NY
A0I=DY*(1.-SI*ZZI)
SI=SI+H2
A1I=DY*(1.-SI*(ZZI-A0I))
B0I=H*(1.-SI*(ZZI-A1I))
SI=SI+H2
B1I=H*(1.-SI*(ZZI-B0I))
2 ZZI=ZZI-(2.*(A0I+A1I+A1I+B0I)+B1I)/6.
C GENERAL INTEGRATION OVER "X":
4 ZR(1)=ZZR
ZI(1)=ZZI
X(1)=SR
XMAX=SCALEF*F(NX)
XLIMIT=XMAX
IF (XLIMIT.GT.7.2 .AND. SI.LT.1.E-05) XLIMIT=7.2
IF (XLIMIT.GT.15.0 .AND. SI.GE.1.E-05) XLIMIT=15.
K=2
X(K)=SCALEF*F(K)
5 DX=X(K)-SR
NSKIP=INT(DX*9.1+1.1)
DX=DX/FLOAT(NSKIP)
H=2.*DX
H2=.5*DX
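C SAME FOURTH-ORDER RUNGE-KUTTA SCHEME AS ABOVE, NOW FOR THE COMPLEX
C EQUATION ALONG THE REAL AXIS IN STEPS OF SIZE "DX".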
DO 6 KSKIP=1,NSKIP
A0R=DX*(1.+SR*ZZR-SI*ZZI)
A0I=DX*(SR*ZZI+SI*ZZR)
AR=ZZR-A0R
AI=ZZI-A0I
SR=SR+H2
A1R=DX*(1.+SR*AR-SI*AI)
A1I=DX*(SR*AI+SI*AR)
AR=ZZR-A1R
AI=ZZI-A1I
B0R=H*(1.+SR*AR-SI*AI)
B0I=H*(SR*AI+SI*AR)
AR=ZZR-B0R
AI=ZZI-B0I
SR=SR+H2
B1R=H*(1.+SR*AR-SI*AI)
B1I=H*(SR*AI+SI*AR)
ZZR=ZZR-(2.*(A0R+A1R+A1R+B0R)+B1R)/6.
6 ZZI=ZZI-(2.*(A0I+A1I+A1I+B0I)+B1I)/6.
ZR(K)=ZZR
ZI(K)=ZZI
SR=X(K)
K=K+1
IF (K.GT.NX) RETURN
X(K)=SCALEF*F(K)
IF (X(K).LE.XLIMIT) GO TO 5
C USE ASYMPTOTIC EXPANSION FOR LARGE REAL ARGUMENTS:
IF (SI.GT.1.E-05) WRITE(6,22)
22 FORMAT(' ** *** WARNING, X OUT OF RANGE IN Z(X,Y) *** **')
KP=K
DO 7 K=KP,NX
X(K)=SCALEF*F(K)
X2=X(K)**(-2)
ZR(K)=-(1.+X2*(.5+X2*(.75+X2*(.625+X2*.4375))))/X(K)
ZI(K)=0.0
7 CONTINUE
RETURN
C CODE BELOW IS FOR "ABS(Y).GT.1" USING CONTINUED FRACTION EXPANSION:
8 DO 30 K=1,NX
SR=SCALEF*F(K)
X(K)=SR
A0R=0.0
A0I=0.0
B0R=1.0
B0I=0.0
RR=SI**2-SR**2+0.5
RI=-2.*SR*SI
A1R=SR
A1I=SI
B1R=RR
B1I=RI
BM=B1R**2+B1I**2
FR=(A1R*B1R+A1I*B1I)/BM
FI=(A1I*B1R-A1R*B1I)/BM
DO 10 N=1,23
AN=FLOAT(N)
C=AN*(.5-AN)
DR=RR+2.*AN
AR=DR*A1R+C*A0R-RI*A1I
AI=DR*A1I+C*A0I+RI*A1R
BR=DR*B1R+C*B0R-RI*B1I
BI=DR*B1I+C*B0I+RI*B1R
BM=BR**2+BI**2
GR=(AR*BR+AI*BI)/BM
GI=(AI*BR-AR*BI)/BM
IF (ABS(GI/FI-1.).LE.1.E-05) GO TO 20
FR=GR
FI=GI
A0R=A1R
A0I=A1I
A1R=AR
A1I=AI
B0R=B1R
B0I=B1I
B1R=BR
10 B1I=BI
20 ZR(K)=GR
ZI(K)=GI
30 CONTINUE
IF (Y.GE.0.0) RETURN
C IF "Y.LT.1.0", THEN CHANGE QUADRANT:
DO 40 K=1,NX
SR=X(K)
SI=Y
C FOR "Y.LT.-1.0" THE FOLLOWING CARD AND COMPLEX EXP IS REQUIRED.
C S=(0.,3.544908)*EXPC(-S**2)
ZR(K)=SR
40 ZI(K)=SI
RETURN
END
|
{"hexsha": "c157c46a0a368ec91c58928cd449f12c7607577a", "size": 21158, "ext": "for", "lang": "FORTRAN", "max_stars_repo_path": "src/iscatspe.for", "max_stars_repo_name": "stephancb/IScatterSpectrum.jl", "max_stars_repo_head_hexsha": "b4512871b34ba27d852d6a19302115e617640529", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/iscatspe.for", "max_issues_repo_name": "stephancb/IScatterSpectrum.jl", "max_issues_repo_head_hexsha": "b4512871b34ba27d852d6a19302115e617640529", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/iscatspe.for", "max_forks_repo_name": "stephancb/IScatterSpectrum.jl", "max_forks_repo_head_hexsha": "b4512871b34ba27d852d6a19302115e617640529", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.9719008264, "max_line_length": 73, "alphanum_fraction": 0.4994328386, "num_tokens": 8776}
|
\documentclass[class=report, float=false, crop=false]{standalone}
\usepackage[subpreambles=true]{standalone}
\input{preamble}
\graphicspath{{figures/images/}}
% \begin{cbunit}
\begin{document}
\chapter{Ellipsoids}
\label{appendix:ellipsoids}
\section{Definition}
An ellipsoid is a surface that may be obtained from a sphere by deforming it by means of directional scalings, or more generally, of an affine transformation \cite{wiki:Ellipsoid}.\\
Within a Cartesian coordinate system in which the origin is the center of the ellipsoid and the coordinate axes are the axes of the ellipsoid, a vector \(\vec{r} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \in \mathbb{R}^3\) belongs to the surface of the ellipsoid if and only if
\begin{equation}
\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1
\label{ellipsoid_cartesian}
\end{equation}
where $a$, $b$ and $c \in \mathbb{R}$ are the semi-axes of the ellipsoid.\\
Equivalently to equation \ref{ellipsoid_cartesian}, an ellipsoid $\mathcal{A}$ can be defined by a positive definite matrix $A \in \mathcal{M}_3(\mathbb{R})$ and a vector $\vec{v} \in \mathbb{R}^3$ such that
\begin{equation}
\boxed{\forall \vec{r} \in \mathbb{R}^3, \vec{r} \in \bar{\mathcal{A}} \Leftrightarrow (\vec{r}-\vec{v})^TA(\vec{r}-\vec{v}) = 1}
\label{ellipsoid_matrix}
\end{equation}
The eigenvectors of $A$ then define the principal axes of $\mathcal{A}$ and the associated eigenvalues are the reciprocals of the squares of the semi-axes. The vector $\vec{v}$ defines the center of the ellipsoid.
\section{Homogeneous coordinates}
\subsection{Quick reminder}
A vector \(\vec{r} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \in \mathbb{R}^3\) is equivalent to its homogeneous form \(\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} \in E^3\) where $E^3$ is the Euclidean 3D projective space.\\
Moreover, we have that in $E^3$, $\forall \lambda \in \mathbb{R}^*$ and $\forall (x,y,z) \in \mathbb{R}^3$, \(\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}\) and \(\begin{pmatrix} \frac{x}{\lambda} \\ \frac{y}{\lambda} \\ \frac{z}{\lambda} \\ \frac{1}{\lambda} \end{pmatrix}\) represent the exact same point. Therefore, we will always assume that the last coordinate of our vectors in $E^3$ is 1.\\
From the property above, it naturally follows that any point in $E^3$ whose last coordinate is $0$ is infinitely far from the origin \(\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}\).
\subsection{Useful affine transformations}
\subsubsection{Translation}
We define in $\mathbb{R}^3$ the translation of vector $\vec{u} \in \mathbb{R}^3$ as
\begin{align*}
\mathcal{T}_{\vec{u}} \colon &\mathbb{R}^3 \to \mathbb{R}^3\\ &\phantomarrow{\mathbb{R}^3}{\vec{v}} \vec{v} + \vec{u}
\end{align*}
In $E^3$, this function $\mathcal{T}_{\vec{u}}$ is represented by the matrix $T_{\vec{u}}$:
\begin{equation}
\boxed{T_{\vec{u}} = \begin{pmatrix} \mathbbm{1}_3 & \vec{u} \\ \vec{0}^T & 1 \end{pmatrix}}
\label{translation_matrix}
\end{equation}
\subsubsection{Dilatation}
We define in $\mathbb{R}^3$ the dilatation of factor $a \in \mathbb{R}$ as
\begin{align*}
\mathcal{X}_{a} \colon &\mathbb{R}^3 \to \mathbb{R}^3\\ &\phantomarrow{\mathbb{R}^3}{\vec{v}} a\vec{v}
\end{align*}
In $E^3$, this function $\mathcal{X}_{a}$ is represented by the matrix $X_{a}$:
\begin{equation}
\boxed{X_{a} = \begin{pmatrix} a\mathbbm{1}_3 & \vec{0} \\ \vec{0}^T & 1 \end{pmatrix}}
\label{dilatation_matrix}
\end{equation}
\subsubsection{Rotation}
We showed in part \ref{action_rotation} that every 3D rotation can be expressed as the action of some unit quaternion.\\
The product of quaternions being bilinear, we can associate the action of a quaternion to a linear map on a vector space, and therefore to a matrix \cite{shoemake}.\\
Consider $q = [\vec{q},q_0],p = [\vec{p},p_0] \in \mathbb{H}$. According to part \ref{quat_properties}, we have $qp = [\underbrace{\vec{q}\times\vec{p}}_{Ap} + \underbrace{q_0\vec{p}}_{Bp} + \underbrace{p_0\vec{q}}_{Cp},\underbrace{q_0p_0}_{Dp}~\underbrace{-\vec{q}\cdot\vec{p}}_{Ep}]$, with
\begin{align*}
A = \begin{pmatrix} \vec{q} \times \bigcdot & \vec{0} \\ \vec{0}^T & 0 \end{pmatrix}, B = \begin{pmatrix} q_0 \mathbbm{1}_3 & \vec{0} \\ \vec{0}^T & 0 \end{pmatrix}, C = \begin{pmatrix} 0_3 & \vec{q} \\ \vec{0}^T & 0 \end{pmatrix}, D = \begin{pmatrix} 0_3 & \vec{0} \\ \vec{0}^T & q_0 \end{pmatrix}, E = \begin{pmatrix} 0_3 & \vec{0} \\ -\vec{q}^T & 0 \end{pmatrix}
\end{align*}
Therefore the left multiplication by $q$ in $\mathbb{H}$ can be represented by the matrix
\begin{align*}
L_q &= A + B + C + D + E\\
&= \begin{pmatrix} \vec{q} \times \bigcdot + q_0\mathbbm{1}_3 & \vec{q} \\ -\vec{q}^T & q_0 \end{pmatrix}
\end{align*}
Similarly, we have $pq^* = [\underbrace{\vec{q}\times\vec{p}}_{Ap} + \underbrace{q_0\vec{p}}_{Bp}~\underbrace{-p_0\vec{q}}_{-Cp},\underbrace{q_0p_0}_{Dp} + \underbrace{\vec{q}\cdot\vec{p}}_{-Ep}]$, therefore the right multiplication by $q^*$ can be represented by the matrix
\begin{align*}
R_{q^*} &= A + B - C + D - E\\
&= \begin{pmatrix} \vec{q} \times \bigcdot + q_0\mathbbm{1}_3 & -\vec{q} \\ \vec{q}^T & q_0 \end{pmatrix}
\end{align*}
Consequently, the matrix representing the action of the unit quaternion $q$ in $\mathbb{H}$ is
\begin{align*}
Q_q = L_qR_{q^*} = R_{q^*}L_q = \begin{pmatrix} \mathcal{Q}_q & \vec{0} \\ \vec{0}^T & \underbrace{N^2(q)}_{=1} \end{pmatrix}
\end{align*}
where
\begin{equation}
\boxed{\mathcal{Q}_q = (\vec{q}\times\bigcdot + q_0\mathbbm{1}_3)^2 + \vec{q}\vec{q}^T}
\label{rotation_matrix_R3}
\end{equation}
is the matrix representing the rotation associated to the unit quaternion $q$ in $\mathbb{R}^3$.\\
Therefore, the matrix representing the action of $q$ in $\mathbb{H}$ and the rotation associated to $q$ in $E^3$ are the same, and we will denote the latter
\begin{equation}
\boxed{Q_q = \begin{pmatrix} \mathcal{Q}_q & \vec{0} \\ \vec{0}^T & 1 \end{pmatrix}}
\label{rotation_matrix}\\
\end{equation}
We can expand equation \ref{rotation_matrix_R3} to get an explicit expression of the rotation matrix $\mathcal{Q}_q$ in $\mathbb{R}^3$. With
\begin{align*}
\left(\vec{e}_1 \equiv (1, 0, 0)^T, \vec{e}_2 \equiv (0, 1, 0)^T, \vec{e}_3 \equiv (0, 0, 1)^T\right)
\end{align*}
the canonical basis of $\mathbb{R}^3$, we have
\begin{align*}
\vec{q} \times \vec{e}_1 = \begin{pmatrix} 0 \\ q_3 \\ -q_2 \end{pmatrix}, \vec{q} \times \vec{e}_2 = \begin{pmatrix} -q_3 \\ 0 \\ q_1 \end{pmatrix}, \vec{q} \times \vec{e}_3 = \begin{pmatrix} q_2 \\ -q_1 \\ 0 \end{pmatrix}
\end{align*}
such that
\begin{align*}
\vec{q} \times \bigcdot = \begin{pmatrix} 0 & -q_3 & q_2 \\ q_3 & 0 & -q_1 \\ -q_2 & q_1 & 0 \end{pmatrix} \text{ and } \vec{q} \times \bigcdot + q_0 \mathbbm{1}_3 = \begin{pmatrix} q_0 & -q_3 & q_2 \\ q_3 & q_0 & -q_1 \\ -q_2 & q_1 & q_0 \end{pmatrix}
\end{align*}
and then
\begin{align*}
(\vec{q} \times \bigcdot + q_0 \mathbbm{1}_3)^2 =
\begin{pmatrix}
q_0^2 - q_3^2 - q_2^2 & q_1 q_2 - 2 q_0 q_3 & q_1 q_3 + 2 q_0 q_2 \\
q_1 q_2 + 2 q_0 q_3 & q_0^2 - q_3^2 - q_1^2 & q_2 q_3 - 2 q_0 q_1 \\
q_1 q_3 - 2 q_0 q_2 & q_2 q_3 + 2 q_0 q_1 & q_0^2 - q_1^2 - q_2^2
\end{pmatrix}
\end{align*}
thus leading to
\begin{align*}
\mathcal{Q}_q = (\vec{q} \times \bigcdot + q_0 \mathbbm{1}_3)^2 + \vec{q} \vec{q}^T =
\begin{pmatrix}
q_0^2 + q_1^2 - q_3^2 - q_2^2 & 2 q_1 q_2 - 2 q_0 q_3 & 2 q_1 q_3 + 2 q_0 q_2 \\
2 q_1 q_2 + 2 q_0 q_3 & q_0^2 + q_2^2 - q_3^2 - q_1^2 & 2 q_2 q_3 - 2 q_0 q_1 \\
2 q_1 q_3 - 2 q_0 q_2 & 2 q_2 q_3 + 2 q_0 q_1 & q_0^2 + q_3^2 - q_1^2 - q_2^2
\end{pmatrix}
\end{align*}
where we can use that $q$ is a unit quaternion, \textit{i.e.} $N^2(q) = \sum_{i=0}^{3} q_i^2 = 1$, and therefore
\begin{equation}
\boxed{\mathcal{Q}_q =
\begin{pmatrix}
1 - 2 (q_2^2 + q_3^2) & 2 (q_1 q_2 - q_0 q_3) & 2 (q_1 q_3 + q_0 q_2) \\
2 (q_1 q_2 + q_0 q_3) & 1 - 2 (q_1^2 + q_3^2) & 2 (q_2 q_3 - q_0 q_1) \\
2 (q_1 q_3 - q_0 q_2) & 2 (q_2 q_3 + q_0 q_1) & 1 - 2 (q_1^2 + q_2^2)
\end{pmatrix}
}
\label{rotation_matrix_R3_expression}
\end{equation}
in accordance with \cite{wiki:quaternions}.
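As a quick sanity check of equation \ref{rotation_matrix_R3_expression}, consider the unit quaternion associated to a rotation of angle $\theta$ about $\vec{e}_3$, \textit{i.e.} $q = [\sin(\theta/2)\vec{e}_3, \cos(\theta/2)]$, so that $q_0 = \cos(\theta/2)$, $q_1 = q_2 = 0$ and $q_3 = \sin(\theta/2)$. Using $1 - 2\sin^2(\theta/2) = \cos\theta$ and $2\sin(\theta/2)\cos(\theta/2) = \sin\theta$, equation \ref{rotation_matrix_R3_expression} reduces to
\begin{align*}
\mathcal{Q}_q = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\end{align*}
which is indeed the usual matrix of a rotation of angle $\theta$ about $\vec{e}_3$.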
\section{Belonging matrix}
\subsection{Definition}
We want to find for any ellipsoid $\mathcal{A}$ a matrix $B$ acting on homogeneous coordinates with the following properties
\begin{equation}
\forall \vec{r} \in E^3, \begin{cases} \vec{r}^T B \vec{r} < 0 &\text{ if } \vec{r} \in \mathcal{A} \setminus \bar{\mathcal{A}} \\ \vec{r}^T B \vec{r} = 0 &\text{ if } \vec{r} \in \bar{\mathcal{A}} \\ \vec{r}^T B \vec{r} > 0 &\text{ if } \vec{r} \notin \mathcal{A} \end{cases}
\label{belonging_definition}
\end{equation}
which we will call the \textit{belonging matrix} of $\mathcal{A}$.
\subsection{Expression}
\label{belonging_exp}
We have seen with equation \ref{ellipsoid_matrix} that the belonging to the surface of an ellipsoid can be expressed with a matrix whose eigenvectors define the principal axes of the ellipsoid and whose eigenvalues define the reciprocals of the squares of the semi-axes. Therefore, if we denote $(R_i)_{i=1:3}$ the semi-axes of the ellipsoid, this matrix can be written as $A = P^{-1}\text{diag}(R_1^{-2},R_2^{-2},R_3^{-2})P$.\\
The semi-axes of an ellipsoid define an orthogonal basis of $\mathbb{R}^3$. Therefore $P$ is the transition matrix from the orthogonal basis formed by the principal axes of the ellipsoid to the original Euclidean basis $(\vec{e_i})_{i=1:3}$, and inversely for $P^{-1}$.\\
Consequently, we have that $\forall i \in \llbracket1,3\rrbracket, \vec{v} + R_iP^{-1}\vec{e_i} \in \bar{\mathcal{A}}$ and so
\begin{align*}
\forall i \in \llbracket1,3\rrbracket,&(P^{-1}R_i\vec{e_i})^TA(P^{-1}R_i\vec{e_i}) = 1\\ \Leftrightarrow&(P^{-1}R_i\vec{e_i})^TP^{-1}\text{diag}(R_j^{-2})_{j=1:3}P(P^{-1}R_i\vec{e_i}) = 1\\
\Leftrightarrow &R_i\vec{e_i}^T(P^{-1})^TP^{-1}\text{diag}(R_j^{-2})_{j=1:3}\underbrace{PP^{-1}}_{\mathbbm{1}_3}R_i\vec{e_i} = 1\\
\Leftrightarrow &\vec{e_i}^T(P^{-1})^TP^{-1}\underbrace{\text{diag}(R_j^{-2})_{j=1:3}R_i^2\vec{e_i}}_{\vec{e_i}} = 1\\
\Leftrightarrow &\vec{e_i}^T(P^{-1})^TP^{-1}\vec{e_i} = 1
\end{align*}
Thus, the norm of every column vector of $P^{-1}$ equals 1. Since these columns, which are the unit vectors along the mutually orthogonal principal axes, are also orthogonal to one another, $P^{-1}$, and equivalently $P$, is an orthogonal matrix.\\
Since $P$ is an orthogonal matrix, the principal axes of the ellipsoid are derived from the original Euclidean basis through rotations and permutations of axes. Without loss of generality, we can consider that there are no permutations of axes, leading to $P = \mathcal{Q}_q^{-1} = \mathcal{Q}_q^T$ with $q$ the quaternion describing the orientation of the ellipsoid. Equation \ref{ellipsoid_matrix} thus becomes:
\begin{align*}
\forall \vec{r} \in \mathbb{R}^3, \vec{r} \in \bar{\mathcal{A}} \Leftrightarrow (\vec{r}-\vec{v})^T\mathcal{Q}_q\text{diag}(R_i^{-2})_{i=1:3}\mathcal{Q}_q^T(\vec{r}-\vec{v}) = 1
\end{align*}
We can absorb $\vec{v}$ by using homogeneous coordinates. In $E^3$, $\vec{r} - \vec{v}$ becomes $T_{-\vec{v}}\vec{r}$ and, to preserve the scalar product, $\text{diag}(R_i^{-2})_{i=1:3}$ becomes $\text{diag}(R_i^{-2},0)_{i=1:3}$.\\
We always assume that the last coordinate of our vectors in $E^3$ is 1, therefore we can notice that $\forall \vec{u} \in E^3, \vec{u}^T\text{diag}(0,0,0,-1)\vec{u} = -1$. We can then rewrite equation \ref{ellipsoid_matrix}:
\begin{align*}
\forall \vec{r} \in E^3, \vec{r} \in \bar{\mathcal{A}} \Leftrightarrow &\vec{r}^TT_{-\vec{v}}^TQ_q\text{diag}(R_i^{-2},0)_{i=1:3}Q_q^TT_{-\vec{v}}\vec{r} = 1\\
\Leftrightarrow&\vec{r}^T\underbrace{T_{-\vec{v}}^TQ_q\text{diag}(R_i^{-2},-1)_{i=1:3}Q_q^TT_{-\vec{v}}}_{\mathcal{C}}\vec{r} = 0
\end{align*}
The matrix $\mathcal{C}$ is a good candidate for the belonging matrix of $\mathcal{A}$; we now have to understand how it works.\\
The $T_{-\vec{v}}$ translation and the $Q_q^T$ rotation bring back the principal axes of the ellipsoid to the original Euclidean basis and origin. Without loss of generality, we can then consider that the principal axes of the ellipsoid are along $(\vec{e_1},\vec{e_2},\vec{e_3})$.\\
For a vector \(\vec{r} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \in \mathbb{R}^3\), let $\mu \geq 0$ be such that
\begin{align*}
&\frac{x^2}{R_1^2} + \frac{y^2}{R_2^2} + \frac{z^2}{R_3^2} = \mu^2\\
\Leftrightarrow & \frac{x^2}{(\mu R_1)^2} + \frac{y^2}{(\mu R_2)^2} + \frac{z^2}{(\mu R_3)^2} = 1
\end{align*}
therefore, $\mu$ can be construed as the rescaling factor that has to be applied to the ellipsoid for $\vec{r}$ to be on its surface. Then, $\mu < 1$ if $\vec{r} \in \mathcal{A}\setminus\bar{\mathcal{A}}$, $\mu = 1$ if $\vec{r} \in \bar{\mathcal{A}}$, and $\mu > 1$ otherwise.\\
Since we have that $\forall \vec{r} \in E^3, \vec{r}^T\mathcal{C}\vec{r} = \mu^2 - 1$, $\mathcal{C}$ is the matrix we have been looking for, and we can define
\begin{equation}
\boxed{B(\vec{v},q,(R_i)_{i=1:3}) \equiv T_{-\vec{v}}^TQ_q\text{diag}(R_i^{-2},-1)_{i=1:3}Q_q^TT_{-\vec{v}}}
\label{belonging_matrix}
\end{equation}
Note that such a matrix is symmetric. We can also define the \textit{belonging function} of the ellipsoid $\mathcal{A}$
\begin{equation}
\boxed{\begin{aligned}\mathcal{F}_{\mathcal{A}} \colon &E^3 \to \mathbb{R}\\ &\phantomarrow{E^3}{\vec{r}} \vec{r}^TB(\vec{v},q,(R_i)_{i=1:3})\vec{r}\end{aligned}}
\label{belonging_function}
\end{equation}
such that $\forall \vec{r} \in E^3$, $\vec{r} \in \mathcal{A} \Leftrightarrow \mathcal{F}_{\mathcal{A}}(\vec{r}) \leq 0$ and
\begin{equation}
\boxed{\mathcal{F}_{\mathcal{A}}(\vec{r}) = \mu^2(\vec{r}) - 1}
\end{equation}
with $\mu(\vec{r})$ the rescaling factor that has to be applied to the ellipsoid for $\vec{r}$ to be on its surface.
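As an elementary example, consider a sphere of radius $R$ centred at $\vec{v}$, \textit{i.e.} $R_1 = R_2 = R_3 = R$. Then $Q_q\text{diag}(R_i^{-2},-1)_{i=1:3}Q_q^T = \text{diag}(R^{-2},R^{-2},R^{-2},-1)$ whatever the orientation $q$, and the belonging function of equation \ref{belonging_function} reduces to
\begin{align*}
\mathcal{F}_{\mathcal{A}}(\vec{r}) = \frac{|\vec{r}-\vec{v}|^2}{R^2} - 1
\end{align*}
(identifying $\vec{r} \in E^3$ with its $\mathbb{R}^3$ counterpart), which is indeed negative inside the sphere, zero on its surface and positive outside.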
\subsection{Reduced belonging matrix}
Equation \ref{belonging_matrix} is theoretically relevant; however, it is not computationally efficient.\\
On the one hand, resorting to homogeneous coordinates and expressing the translation with a translation matrix (equation \ref{translation_matrix}) leads to more operations than performing the translation in $\mathbb{R}^3$ directly. On the other hand, we have seen in part \ref{quaternions_interest} that the rotation of a vector is performed more quickly by converting the quaternion associated with the rotation to a rotation matrix.\\
We then introduce, for an ellipsoid $\mathcal{A}$ whose centre is located at $\vec{v}$, whose orientation is described by the unit quaternion $q$ and whose semi-axes are $(R_i)_{i=1:3}$, the associated \textit{reduced belonging matrix}
\begin{equation}
\boxed{\bar{B}(q,(R_i)_{i=1:3}) \equiv \mathcal{Q}_q\text{diag}(R_i^{-2})_{i=1:3}\mathcal{Q}_q^T}
\label{reduced_belonging_matrix}
\end{equation}
with $\mathcal{Q}_q$ the $\mathbb{R}^3$ rotation matrix associated to $q$ (equation \ref{rotation_matrix_R3}). Note that the reduced belonging matrix is also symmetric. Furthermore, this reduced belonging matrix has the following properties
\begin{equation}
\forall \vec{r} \in \mathbb{R}^3, \begin{cases} (\vec{r} - \vec{v})^T \bar{B} (\vec{r} - \vec{v}) < 1 &\text{ if } \vec{r} \in \mathcal{A} \setminus \bar{\mathcal{A}} \\ (\vec{r} - \vec{v})^T \bar{B} (\vec{r} - \vec{v}) = 1 &\text{ if } \vec{r} \in \bar{\mathcal{A}} \\ (\vec{r} - \vec{v})^T \bar{B} (\vec{r} - \vec{v}) > 1 &\text{ if } \vec{r} \notin \mathcal{A} \end{cases}
\label{reduced_belonging_definition}
\end{equation}
according to equation \ref{belonging_definition}, which leads to the following expression for the belonging function:
\begin{equation}
\boxed{\begin{aligned}\mathcal{F}_{\mathcal{A}} \colon &\mathbb{R}^3 \to \mathbb{R}\\ &\phantomarrow{\mathbb{R}^3}{\vec{r}} (\vec{r} - \vec{v})^T\bar{B}(q,(R_i)_{i=1:3})(\vec{r}-\vec{v}) - 1\end{aligned}}
\label{belonging_function_reduced}
\end{equation}
\section{Unit normal vectors}
Since the surface of the ellipsoid -- the locus of points where $\mathcal{F}_{\mathcal{A}}(\vec{r}) = 0$ -- is an isosurface of the function $\mathcal{F}_{\mathcal{A}}$, we have that $\forall \vec{r_S} \in \bar{\mathcal{A}}, \vec{\nabla}\mathcal{F}_{\mathcal{A}}(\vec{r_S})$ is a non-unit outward-facing normal vector to the ellipsoid in $\vec{r_S}$.\\
Let us denote, $\forall \vec{r_S} \in \bar{\mathcal{A}}$, by $\vec{n}(\vec{r_S})$ the unit outward-facing normal vector to the ellipsoid at $\vec{r_S}$. Then $\vec{n}(\vec{r_S}) \parallel \vec{\nabla}\mathcal{F}_{\mathcal{A}}(\vec{r_S})$; we thus have to find the direction of this gradient.
\subsection{Expression with the belonging matrix}
\label{normal_vector_belonging}
We have that
\begin{align*}
\forall \vec{r} \in E^3, \mathcal{F}_{\mathcal{A}}(\vec{r} + \vec{dr}) &= (\vec{r} + \vec{dr})^TB(\vec{r} + \vec{dr})\\
&= \underbrace{\vec{r}^TB\vec{r}}_{\mathcal{F}_{\mathcal{A}}(\vec{r})} + \underbrace{\vec{r}^TB\vec{dr} + \vec{dr}^TB\vec{r}}_{2\vec{dr}^TB\vec{r}} + \vec{dr}^TB\vec{dr}
\end{align*}
therefore we can show that
\begin{align*}
\forall \vec{r} \in E^3, \forall i \in \llbracket1,3\rrbracket,~ &\mathcal{F}_{\mathcal{A}}(\vec{r} + dr_i\vec{e_i}) - \mathcal{F}_{\mathcal{A}}(\vec{r}) = 2dr_i\vec{e_i}^TB\vec{r} + dr_i^2\vec{e_i}^TB\vec{e_i}\\
\Rightarrow&\frac{\partial}{\partial e_i}\mathcal{F}_{\mathcal{A}}(\vec{r}) = \lim_{dr_i \to 0} \frac{\mathcal{F}_{\mathcal{A}}(\vec{r} + dr_i\vec{e_i}) - \mathcal{F}_{\mathcal{A}}(\vec{r})}{dr_i} = 2\vec{e_i}^TB\vec{r}\\
\Rightarrow&\vec{\nabla}\mathcal{F}_{\mathcal{A}}(\vec{r}) = \sum_{i=1}^3\left(\frac{\partial}{\partial e_i}\mathcal{F}_{\mathcal{A}}(\vec{r})\right)\vec{e_i} = 2\underbrace{\sum_{i=1}^3(\vec{e_i}^TB\vec{r})\vec{e_i}}_{B\vec{r}}
\end{align*}
\textit{i.e.} $\vec{\nabla}\mathcal{F}_{\mathcal{A}}(\vec{r}) = 2B\vec{r}$. Consequently, we finally have that
\begin{equation}
\boxed{\forall \vec{r_S}\in \bar{\mathcal{A}}, \vec{n}(\vec{r_S}) = \frac{B(\vec{v},q,(R_i)_{i=1:3})\vec{r_S}}{|B(\vec{v},q,(R_i)_{i=1:3})\vec{r_S}|}}
\end{equation}
\subsection{Expression with the reduced belonging matrix}
With the same notations, assuming that the centre of the ellipsoid is located at $\vec{v}$, we can notice that replacing $\vec{r}$ by $(\vec{r} - \vec{v})$ in the proof of part \ref{normal_vector_belonging} leads to the following expression
\begin{align*}
\forall \vec{r} \in E^3, \forall i \in \llbracket1,3\rrbracket,~ &\mathcal{F}_{\mathcal{A}}(\vec{r} + dr_i\vec{e_i}) - \mathcal{F}_{\mathcal{A}}(\vec{r}) = 2dr_i\vec{e_i}^T\bar{B}(\vec{r} - \vec{v}) + dr_i^2\vec{e_i}^T\bar{B}\vec{e_i}
\end{align*}
and thus, by analogy, the following result
\begin{equation}
\boxed{\forall \vec{r_S}\in \bar{\mathcal{A}}, \vec{n}(\vec{r_S}) = \frac{\bar{B}(q,(R_i)_{i=1:3})(\vec{r_S} - \vec{v})}{|\bar{B}(q,(R_i)_{i=1:3})(\vec{r_S} - \vec{v})|}}
\label{surface_vec_reduced}
\end{equation}
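For instance, for a sphere of radius $R$ centred at $\vec{v}$, $\bar{B} = R^{-2}\mathbbm{1}_3$ for any orientation $q$, and equation \ref{surface_vec_reduced} gives, $\forall \vec{r_S} \in \bar{\mathcal{A}}$,
\begin{align*}
\vec{n}(\vec{r_S}) = \frac{\vec{r_S}-\vec{v}}{|\vec{r_S}-\vec{v}|} = \frac{\vec{r_S}-\vec{v}}{R}
\end{align*}
the expected radial unit vector.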
% \input{references/biblio}
\end{document}
% \end{cbunit}
|
{"hexsha": "dafa53d90e4e834b222289ab07db4086e9777200", "size": 18670, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/appendices/app_ellipsoids.tex", "max_stars_repo_name": "yketa/Umea_2017_Notes", "max_stars_repo_head_hexsha": "3b0e564e9054383bd91ff46930afe5543e9845ca", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notes/appendices/app_ellipsoids.tex", "max_issues_repo_name": "yketa/Umea_2017_Notes", "max_issues_repo_head_hexsha": "3b0e564e9054383bd91ff46930afe5543e9845ca", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/appendices/app_ellipsoids.tex", "max_forks_repo_name": "yketa/Umea_2017_Notes", "max_forks_repo_head_hexsha": "3b0e564e9054383bd91ff46930afe5543e9845ca", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.7394366197, "max_line_length": 430, "alphanum_fraction": 0.6638993037, "num_tokens": 7509}
|
# pylint: disable=missing-docstring
import unittest
import random
import numpy as np
import tensorflow as tf
import tf_encrypted as tfe
from tf_encrypted.tensor import int100factory
class TestLSB(unittest.TestCase):
def setUp(self):
tf.reset_default_graph()
def _core_lsb(self, tensor_factory, prime_factory):
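        # Compute the expected LSBs in the clear: vectorized binary representation
        # of each value, then take the last character of each bit string.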
f_bin = np.vectorize(np.binary_repr)
f_get = np.vectorize(lambda x, ix: x[ix])
raw = np.array([random.randrange(0, 10000000000)
for _ in range(20)]).reshape(2, 2, 5)
expected_lsb = f_get(f_bin(raw), -1).astype(np.int32)
with tfe.protocol.SecureNN(
tensor_factory=tensor_factory,
prime_factory=prime_factory,
) as prot:
x_in = prot.define_private_variable(
raw, apply_scaling=False, name='test_lsb_input')
x_lsb = prot.lsb(x_in)
with tfe.Session() as sess:
sess.run(tf.global_variables_initializer())
actual_lsb = sess.run(x_lsb.reveal(), tag='lsb')
np.testing.assert_array_equal(actual_lsb, expected_lsb)
def test_lsb_int100(self):
self._core_lsb(
int100factory,
None
)
if __name__ == '__main__':
unittest.main()
|
{"hexsha": "227476035f6c87b2b997efdbbb901ae5d6b7e738", "size": 1186, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/test_lsb.py", "max_stars_repo_name": "gavinuhma/tf-encrypted", "max_stars_repo_head_hexsha": "4e18d78a151bbe91489a1773fb839b889ff5b460", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-10-18T19:36:02.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-05T19:46:23.000Z", "max_issues_repo_path": "tests/test_lsb.py", "max_issues_repo_name": "dropoutlabs/tf-encrypted", "max_issues_repo_head_hexsha": "48c9dc7419163425e736ad05bb19980d134fc851", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tests/test_lsb.py", "max_forks_repo_name": "dropoutlabs/tf-encrypted", "max_forks_repo_head_hexsha": "48c9dc7419163425e736ad05bb19980d134fc851", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.72, "max_line_length": 63, "alphanum_fraction": 0.6787521079, "include": true, "reason": "import numpy", "num_tokens": 305}
|
# Copyright 2022, Lefebvre Dalloz Services
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This module is copy-pasted in generated Triton configuration folder to perform the tokenization step.
"""
# noinspection DuplicatedCode
import os
from typing import Callable, Dict, List
import numpy as np
import torch
from torch.nn import Module
from transformers.generation_utils import GenerationMixin
from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions
try:
# noinspection PyUnresolvedReferences
import triton_python_backend_utils as pb_utils
except ImportError:
pass # triton_python_backend_utils exists only inside Triton Python backend.
from transformers import AutoConfig, AutoTokenizer, BatchEncoding, PretrainedConfig, PreTrainedTokenizer, TensorType
# IMPORTANT
# Some parameters are hard-coded below, like the sequence length to generate.
# If you want to provide some of those parameters at run time, the easiest way to proceed
# is to send the Triton server a JSON string and just parse it in this module.
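# A minimal sketch of that approach (not part of this file): assuming the Triton
# config declared an extra string input named "PARAMS" carrying a JSON payload,
# it could be parsed inside `execute` along these lines:
#
#   import json
#   raw = pb_utils.get_input_tensor_by_name(request, "PARAMS").as_numpy()[0]
#   params = json.loads(raw.decode("UTF-8"))
#   max_length = params.get("max_length", 32)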
# Build
class GPTModelWrapper(Module, GenerationMixin):
def __init__(
self, config: PretrainedConfig, device: torch.device, inference: Callable[[torch.Tensor], torch.Tensor]
):
super().__init__()
self.config: PretrainedConfig = config
self.device: torch.device = device
self.inference: Callable[[torch.Tensor], torch.Tensor] = inference
self.main_input_name = "input_ids" # https://github.com/huggingface/transformers/pull/14803
def prepare_inputs_for_generation(self, input_ids, **kwargs):
return {
self.main_input_name: input_ids,
}
def forward(self, input_ids, **_):
logits = self.inference(input_ids)
return CausalLMOutputWithCrossAttentions(logits=logits)
class TritonPythonModel:
tokenizer: PreTrainedTokenizer
device: str
def initialize(self, args: Dict[str, str]) -> None:
"""
Initialize the tokenization process
:param args: arguments from Triton config file
"""
current_path: str = os.path.join(args["model_repository"], args["model_version"])
self.device = "cpu" if args["model_instance_kind"] == "CPU" else "cuda"
# more variables in https://github.com/triton-inference-server/python_backend/blob/main/src/python.cc
model_config = AutoConfig.from_pretrained(current_path)
target_model = args["model_name"].replace("_generate", "_model")
def inference_triton(input_ids: torch.Tensor) -> torch.Tensor:
input_ids = input_ids.type(dtype=torch.int32)
inputs = [pb_utils.Tensor.from_dlpack("input_ids", torch.to_dlpack(input_ids))]
inference_request = pb_utils.InferenceRequest(
model_name=target_model, requested_output_names=["output"], inputs=inputs
)
inference_response = inference_request.exec()
if inference_response.has_error():
raise pb_utils.TritonModelException(inference_response.error().message())
else:
output = pb_utils.get_output_tensor_by_name(inference_response, "output")
tensor: torch.Tensor = torch.from_dlpack(output.to_dlpack())
tensor = tensor.cuda()
return tensor
self.model = GPTModelWrapper(config=model_config, device=self.device, inference=inference_triton)
if self.device == "cuda":
self.model = self.model.cuda()
self.tokenizer = AutoTokenizer.from_pretrained(current_path)
# to silent a warning during seq generation
self.model.config.pad_token_id = self.tokenizer.eos_token_id
def execute(self, requests) -> "List[List[pb_utils.Tensor]]":
"""
Parse and tokenize each request
:param requests: 1 or more requests received by Triton server.
:return: text as input tensors
"""
responses = []
# for loop for batch requests (disabled in our case)
for request in requests:
# binary data typed back to string
query = [t.decode("UTF-8") for t in pb_utils.get_input_tensor_by_name(request, "TEXT").as_numpy().tolist()]
tokens: BatchEncoding = self.tokenizer(
text=query[0], return_tensors=TensorType.PYTORCH, return_attention_mask=False
)
# tensorrt uses int32 as input type, ort also because we force the format
input_ids = tokens.input_ids.type(dtype=torch.int32)
if self.device == "cuda":
input_ids = input_ids.to("cuda")
output_seq: torch.Tensor = self.model.generate(input_ids, max_length=32)
decoded_texts: List[str] = [self.tokenizer.decode(seq, skip_special_tokens=True) for seq in output_seq]
tensor_output = [pb_utils.Tensor("output", np.array(t, dtype=object)) for t in decoded_texts]
responses.append(pb_utils.InferenceResponse(tensor_output))
return responses
|
{"hexsha": "5cb1111b30a07f82f2936d23b39a64dd1b67f2da", "size": 5552, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/transformer_deploy/utils/generative_model.py", "max_stars_repo_name": "dumpmemory/transformer-deploy", "max_stars_repo_head_hexsha": "36993d8dd53c7440e49dce36c332fa4cc08cf9fb", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 698, "max_stars_repo_stars_event_min_datetime": "2021-11-22T17:42:40.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T11:16:08.000Z", "max_issues_repo_path": "src/transformer_deploy/utils/generative_model.py", "max_issues_repo_name": "dumpmemory/transformer-deploy", "max_issues_repo_head_hexsha": "36993d8dd53c7440e49dce36c332fa4cc08cf9fb", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 38, "max_issues_repo_issues_event_min_datetime": "2021-11-23T13:45:04.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T10:36:45.000Z", "max_forks_repo_path": "src/transformer_deploy/utils/generative_model.py", "max_forks_repo_name": "dumpmemory/transformer-deploy", "max_forks_repo_head_hexsha": "36993d8dd53c7440e49dce36c332fa4cc08cf9fb", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 58, "max_forks_repo_forks_event_min_datetime": "2021-11-24T11:46:21.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T08:45:16.000Z", "avg_line_length": 44.7741935484, "max_line_length": 119, "alphanum_fraction": 0.6938040346, "include": true, "reason": "import numpy", "num_tokens": 1182}
|
import numpy as np
import networkx as nx
import random
PORT_NODE_THRESHOLD = 80000
class Graph():
def __init__(self, nx_G, is_directed, p, q):
self.G = nx_G
self.is_directed = is_directed
self.p = p
self.q = q
def node2vec_walk(self, walk_length, start_node):
'''
Simulate a random walk starting from start node.
'''
G = self.G
walk = [start_node]
while len(walk) < walk_length * 2 - 1:
cur = walk[-1]
            # materialize the edge view so it can be shuffled in place
            out_edges = list(G.out_edges(cur, keys=True))
            random.shuffle(out_edges)
if len(out_edges) > 0:
walk += [out_edges[0][1], out_edges[0][2]]
else:
break
return walk
def simulate_walks(self, num_walks, walk_length):
'''
Repeatedly simulate random walks from each node.
'''
G = self.G
walks = []
nodes = [x for x in list(G.nodes()) if x < PORT_NODE_THRESHOLD]
        print('Walk iteration:')
for walk_iter in range(num_walks):
            print(str(walk_iter + 1), '/', str(num_walks))
random.shuffle(nodes)
for node in nodes:
walks.append(self.node2vec_walk(walk_length=walk_length, start_node=node))
return walks
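# Minimal usage sketch (illustrative only; the graph `g` and all parameters below
# are assumptions, not part of this module):
#
#   g = nx.MultiDiGraph()
#   g.add_edge(0, 1, key=100000)  # edge keys play the role of port nodes
#   g.add_edge(1, 0, key=100001)
#   walker = Graph(g, is_directed=True, p=1, q=1)
#   walks = walker.simulate_walks(num_walks=2, walk_length=3)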
|
{"hexsha": "e41244a5a3859ceba86f4810f9c7093901371101", "size": 1069, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/node2vec.py", "max_stars_repo_name": "amitzohar/node2vec", "max_stars_repo_head_hexsha": "c1ff2151789593f02af2f5eff6b3d15cfe360c22", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/node2vec.py", "max_issues_repo_name": "amitzohar/node2vec", "max_issues_repo_head_hexsha": "c1ff2151789593f02af2f5eff6b3d15cfe360c22", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/node2vec.py", "max_forks_repo_name": "amitzohar/node2vec", "max_forks_repo_head_hexsha": "c1ff2151789593f02af2f5eff6b3d15cfe360c22", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.7446808511, "max_line_length": 78, "alphanum_fraction": 0.6772684752, "include": true, "reason": "import numpy,import networkx", "num_tokens": 323}
|
#==============================================================================#
# EC2/test/runtests.jl
#
# Copyright OC Technology Pty Ltd 2014 - All rights reserved
#==============================================================================#
using AWSEC2
using AWSCore
using Test
AWSCore.set_debug_level(1)
#-------------------------------------------------------------------------------
# Load credentials...
#-------------------------------------------------------------------------------
aws = AWSCore.aws_config()
#-------------------------------------------------------------------------------
# EC2 tests
#-------------------------------------------------------------------------------
@test ec2_id(aws, "Not a real server name!!") == nothing
r = ec2(aws, Dict("Action" => "DescribeImages",
"Filter.1.Name" => "owner-alias",
"Filter.1.Value" => "amazon",
"Filter.2.Name" => "name",
"Filter.2.Value" => "amzn-ami-hvm-2015.09.1.x86_64-gp2"))
@test r["imagesSet"]["item"]["description"] ==
"Amazon Linux AMI 2015.09.1 x86_64 HVM GP2"
#==============================================================================#
# End of file.
#==============================================================================#
|
{"hexsha": "37832794b8a716e43700eb08362e16858ce23845", "size": 1315, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/runtests.jl", "max_stars_repo_name": "daisy12321/AWSEC2.jl", "max_stars_repo_head_hexsha": "24b6c7706c40f92339516560d4d875ab7510fd28", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "test/runtests.jl", "max_issues_repo_name": "daisy12321/AWSEC2.jl", "max_issues_repo_head_hexsha": "24b6c7706c40f92339516560d4d875ab7510fd28", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "test/runtests.jl", "max_forks_repo_name": "daisy12321/AWSEC2.jl", "max_forks_repo_head_hexsha": "24b6c7706c40f92339516560d4d875ab7510fd28", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.8863636364, "max_line_length": 80, "alphanum_fraction": 0.3003802281, "num_tokens": 222}
|
///////////////////////////////////////////////////////////////////////////////
/// \file expr.hpp
/// Contains definition of expr\<\> class template.
//
// Copyright 2008 Eric Niebler. Distributed under the Boost
// Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_PROTO_EXPR_HPP_EAN_04_01_2005
#define BOOST_PROTO_EXPR_HPP_EAN_04_01_2005
#include <boost/preprocessor/cat.hpp>
#include <boost/preprocessor/arithmetic/dec.hpp>
#include <boost/preprocessor/selection/max.hpp>
#include <boost/preprocessor/iteration/iterate.hpp>
#include <boost/preprocessor/facilities/intercept.hpp>
#include <boost/preprocessor/repetition/repeat.hpp>
#include <boost/preprocessor/repetition/repeat_from_to.hpp>
#include <boost/preprocessor/repetition/enum_trailing.hpp>
#include <boost/preprocessor/repetition/enum_params.hpp>
#include <boost/preprocessor/repetition/enum_binary_params.hpp>
#include <boost/preprocessor/repetition/enum_trailing_params.hpp>
#include <boost/preprocessor/repetition/enum_trailing_binary_params.hpp>
#include <boost/utility/addressof.hpp>
#include <boost/proto/proto_fwd.hpp>
#include <boost/proto/args.hpp>
#include <boost/proto/traits.hpp>
#if defined(_MSC_VER)
# pragma warning(push)
# pragma warning(disable : 4510) // default constructor could not be generated
# pragma warning(disable : 4512) // assignment operator could not be generated
# pragma warning(disable : 4610) // user defined constructor required
# pragma warning(disable : 4714) // function 'xxx' marked as __forceinline not inlined
#endif
namespace boost { namespace proto
{
namespace detail
{
struct not_a_valid_type
{
private:
not_a_valid_type()
{}
};
template<typename Tag, typename Arg>
struct address_of_hack
{
typedef not_a_valid_type type;
};
template<typename Expr>
struct address_of_hack<proto::tag::address_of, Expr &>
{
typedef Expr *type;
};
template<typename T, typename Expr, typename Arg0>
BOOST_FORCEINLINE
Expr make_terminal(T &t, Expr *, proto::term<Arg0> *)
{
Expr that = {t};
return that;
}
template<typename T, typename Expr, typename Arg0, std::size_t N>
BOOST_FORCEINLINE
Expr make_terminal(T (&t)[N], Expr *, proto::term<Arg0[N]> *)
{
Expr that;
for(std::size_t i = 0; i < N; ++i)
{
that.child0[i] = t[i];
}
return that;
}
template<typename T, typename Expr, typename Arg0, std::size_t N>
BOOST_FORCEINLINE
Expr make_terminal(T const(&t)[N], Expr *, proto::term<Arg0[N]> *)
{
Expr that;
for(std::size_t i = 0; i < N; ++i)
{
that.child0[i] = t[i];
}
return that;
}
// Work-around for:
// https://connect.microsoft.com/VisualStudio/feedback/details/765449/codegen-stack-corruption-using-runtime-checks-when-aggregate-initializing-struct
#if BOOST_WORKAROUND(BOOST_MSVC, BOOST_TESTED_AT(1700))
template<typename T, typename Expr, typename C, typename U>
BOOST_FORCEINLINE
Expr make_terminal(T &t, Expr *, proto::term<U C::*> *)
{
Expr that;
that.child0 = t;
return that;
}
#endif
template<typename T, typename U>
struct same_cv
{
typedef U type;
};
template<typename T, typename U>
struct same_cv<T const, U>
{
typedef U const type;
};
}
namespace result_of
{
/// \brief A helper metafunction for computing the
/// return type of \c proto::expr\<\>::operator().
template<typename Sig, typename This, typename Domain>
struct funop;
#include <boost/proto/detail/funop.hpp>
}
namespace exprns_
{
// This is where the basic_expr specializations are
// actually defined:
#include <boost/proto/detail/basic_expr.hpp>
// This is where the expr specialization are
// actually defined:
#include <boost/proto/detail/expr.hpp>
}
/// \brief Lets you inherit the interface of an expression
/// while hiding from Proto the fact that the type is a Proto
/// expression.
template<typename Expr>
struct unexpr
: Expr
{
BOOST_PROTO_UNEXPR()
BOOST_FORCEINLINE
explicit unexpr(Expr const &e)
: Expr(e)
{}
using Expr::operator =;
};
}}
#if defined(_MSC_VER)
# pragma warning(pop)
#endif
#endif // BOOST_PROTO_EXPR_HPP_EAN_04_01_2005
|
{"hexsha": "0f06b10ec3e3b2e38f82ee3549b79c8494773711", "size": 4865, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "deps/src/boost_1_65_1/boost/proto/expr.hpp", "max_stars_repo_name": "shreyasvj25/turicreate", "max_stars_repo_head_hexsha": "32e84ca16aef8d04aff3d49ae9984bd49326bffd", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11356.0, "max_stars_repo_stars_event_min_datetime": "2017-12-08T19:42:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T16:55:25.000Z", "max_issues_repo_path": "deps/src/boost_1_65_1/boost/proto/expr.hpp", "max_issues_repo_name": "shreyasvj25/turicreate", "max_issues_repo_head_hexsha": "32e84ca16aef8d04aff3d49ae9984bd49326bffd", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2402.0, "max_issues_repo_issues_event_min_datetime": "2017-12-08T22:31:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-28T19:25:52.000Z", "max_forks_repo_path": "deps/src/boost_1_65_1/boost/proto/expr.hpp", "max_forks_repo_name": "shreyasvj25/turicreate", "max_forks_repo_head_hexsha": "32e84ca16aef8d04aff3d49ae9984bd49326bffd", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1343.0, "max_forks_repo_forks_event_min_datetime": "2017-12-08T19:47:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T11:31:36.000Z", "avg_line_length": 29.6646341463, "max_line_length": 158, "alphanum_fraction": 0.6150051387, "num_tokens": 1102}
|
"""
Setting a parameter by cross-validation
=======================================================
Here we set the number of features selected in an Anova-SVC approach to
maximize the cross-validation score.
After separating 2 sessions for validation, we vary that parameter and
measure the cross-validation score. We also measure the prediction score
on the left-out validation data. As we can see, the two scores vary by a
significant amount: this is due to sampling noise in cross-validation,
and choosing the parameter k to maximize the cross-validation score
might not maximize the score on left-out data.
Thus using data to maximize a cross-validation score computed on that
same data is likely to be optimistic and lead to overfitting.
The proper approach is known as "nested cross-validation". It consists
in doing cross-validation loops to set the model parameters inside the
cross-validation loop used to judge the prediction performance: the
parameters are set separately on each fold, never using the data used to
measure performance.
In scikit-learn, this can be done using the GridSearchCV object, which
will automatically select the best parameters of an estimator from a
grid of parameter values.
One difficulty here is that we are working with a composite estimator: a
pipeline of feature selection followed by SVC. Thus to give the name
of the parameter that we want to tune we need to give the name of the
step in the pipeline, followed by the name of the parameter, with '__' as
a separator.
"""
### Load Haxby dataset ########################################################
from nilearn import datasets
import numpy as np
haxby_dataset = datasets.fetch_haxby_simple()
# print basic information on the dataset
print('Mask nifti image (3D) is located at: %s' % haxby_dataset.mask)
print('Functional nifti image (4D) are located at: %s' % haxby_dataset.func)
y, session = np.loadtxt(haxby_dataset.session_target).astype('int').T
conditions = np.recfromtxt(haxby_dataset.conditions_target)['f0']
### Preprocess data ###########################################################
# Keep only data corresponding to shoes or bottles
condition_mask = np.logical_or(conditions == b'shoe', conditions == b'bottle')
y = y[condition_mask]
conditions = conditions[condition_mask]
### Loading step ##############################################################
from nilearn.input_data import NiftiMasker
mask_filename = haxby_dataset.mask
# For decoding, standardizing is often very important
nifti_masker = NiftiMasker(mask_img=mask_filename, sessions=session,
smoothing_fwhm=4, standardize=True,
memory="nilearn_cache", memory_level=1)
func_filename = haxby_dataset.func
X = nifti_masker.fit_transform(func_filename)
# Restrict to non-rest data
X = X[condition_mask]
session = session[condition_mask]
### Prediction function #######################################################
### Define the prediction function to be used.
# Here we use a Support Vector Classification, with a linear kernel
from sklearn.svm import SVC
svc = SVC(kernel='linear')
### Dimension reduction #######################################################
from sklearn.feature_selection import SelectKBest, f_classif
### Define the dimension reduction to be used.
# Here we use a classical univariate feature selection based on F-test,
# namely Anova. We set the number of features to be selected to 500
feature_selection = SelectKBest(f_classif, k=500)
# We have our classifier (SVC), our feature selection (SelectKBest), and now,
# we can plug them together in a *pipeline* that performs the two operations
# successively:
from sklearn.pipeline import Pipeline
anova_svc = Pipeline([('anova', feature_selection), ('svc', svc)])
### Cross validation ##########################################################
anova_svc.fit(X, y)
y_pred = anova_svc.predict(X)
from sklearn.cross_validation import LeaveOneLabelOut, cross_val_score
cv = LeaveOneLabelOut(session[session < 10])
k_range = [10, 15, 30, 50, 150, 300, 500, 1000, 1500, 3000, 5000]
cv_scores = []
scores_validation = []
for k in k_range:
feature_selection.k = k
cv_scores.append(np.mean(
cross_val_score(anova_svc, X[session < 10], y[session < 10])))
print("CV score: %.4f" % cv_scores[-1])
anova_svc.fit(X[session < 10], y[session < 10])
y_pred = anova_svc.predict(X[session == 10])
scores_validation.append(np.mean(y_pred == y[session == 10]))
print("score validation: %.4f" % scores_validation[-1])
from matplotlib import pyplot as plt
plt.figure(figsize=(6, 4))
plt.plot(cv_scores, label='Cross validation scores')
plt.plot(scores_validation, label='Left-out validation data scores')
plt.xticks(np.arange(len(k_range)), k_range)
plt.axis('tight')
plt.xlabel('k')
### Nested cross-validation ###################################################
from sklearn.grid_search import GridSearchCV
# We are going to tune the parameter 'k' of the step called 'anova' in
# the pipeline. Thus we need to address it as 'anova__k'.
# Note that GridSearchCV takes an n_jobs argument that can make it go
# much faster
grid = GridSearchCV(anova_svc, param_grid={'anova__k': k_range}, verbose=1)
nested_cv_scores = cross_val_score(grid, X, y)
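# A minimal sketch (not executed above): after fitting the grid-search object on a
# training set, the parameter it retained can be inspected, e.g.:
# grid.fit(X[session < 10], y[session < 10])
# print("Chosen k: %d" % grid.best_params_['anova__k'])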
plt.axhline(np.mean(nested_cv_scores),
label='Nested cross-validation',
color='r')
plt.legend(loc='best', frameon=False)
plt.show()
|
{"hexsha": "5359d4a3d3ad1c962a16b6e59f04394f9752c869", "size": 5447, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/decoding/plot_haxby_grid_search.py", "max_stars_repo_name": "agramfort/nilearn", "max_stars_repo_head_hexsha": "f075440e6d97b5bf359bb25e9197dbcbbc26e5f2", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/decoding/plot_haxby_grid_search.py", "max_issues_repo_name": "agramfort/nilearn", "max_issues_repo_head_hexsha": "f075440e6d97b5bf359bb25e9197dbcbbc26e5f2", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/decoding/plot_haxby_grid_search.py", "max_forks_repo_name": "agramfort/nilearn", "max_forks_repo_head_hexsha": "f075440e6d97b5bf359bb25e9197dbcbbc26e5f2", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.1870503597, "max_line_length": 79, "alphanum_fraction": 0.6930420415, "include": true, "reason": "import numpy", "num_tokens": 1215}
|
/-
Copyright (c) 2021 Yaël Dillies. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Yaël Dillies
! This file was ported from Lean 3 source module order.circular
! leanprover-community/mathlib commit f2f413b9d4be3a02840d0663dace76e8fe3da053
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Data.Set.Basic
/-!
# Circular order hierarchy
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
This file defines circular preorders, circular partial orders and circular orders.
## Hierarchy
* A ternary "betweenness" relation `btw : α → α → α → Prop` forms a `circular_order` if it is
- reflexive: `btw a a a`
- cyclic: `btw a b c → btw b c a`
- antisymmetric: `btw a b c → btw c b a → a = b ∨ b = c ∨ c = a`
- total: `btw a b c ∨ btw c b a`
along with a strict betweenness relation `sbtw : α → α → α → Prop` which respects
`sbtw a b c ↔ btw a b c ∧ ¬ btw c b a`, analogously to how `<` and `≤` are related, and is
- transitive: `sbtw a b c → sbtw b d c → sbtw a d c`.
* A `circular_partial_order` drops totality.
* A `circular_preorder` further drops antisymmetry.
The intuition is that a circular order is a circle and `btw a b c` means that going around
clockwise from `a` you reach `b` before `c` (`b` is between `a` and `c` is meaningless on an
unoriented circle). A circular partial order is several, potentially intersecting, circles. A
circular preorder is like a circular partial order, but several points can coexist.
Note that the relations between `circular_preorder`, `circular_partial_order` and `circular_order`
are subtler than between `preorder`, `partial_order`, `linear_order`. In particular, one cannot
simply extend the `btw` of a `circular_partial_order` to make it a `circular_order`.
One can translate from usual orders to circular ones by "closing the necklace at infinity". See
`has_le.to_has_btw` and `has_lt.to_has_sbtw`. Going the other way involves "cutting the necklace" or
"rolling the necklace open".
## Examples
Some concrete circular orders one encounters in the wild are `zmod n` for `0 < n`, `circle`,
`real.angle`...
## Main definitions
* `set.cIcc`: Closed-closed circular interval.
* `set.cIoo`: Open-open circular interval.
## Notes
There's an unsolved diamond on `order_dual α` here. The instances `has_le α → has_btw αᵒᵈ` and
`has_lt α → has_sbtw αᵒᵈ` can each be inferred in two ways:
* `has_le α` → `has_btw α` → `has_btw αᵒᵈ` vs
`has_le α` → `has_le αᵒᵈ` → `has_btw αᵒᵈ`
* `has_lt α` → `has_sbtw α` → `has_sbtw αᵒᵈ` vs
`has_lt α` → `has_lt αᵒᵈ` → `has_sbtw αᵒᵈ`
The fields are propeq, but not defeq. It is temporarily fixed by turning the circularizing instances
into definitions.
## TODO
Antisymmetry is quite weak in the sense that there's no way to discriminate which two points are
equal. This prevents defining closed-open intervals `cIco` and `cIoc` in the neat `=`-less way. We
currently haven't defined them at all.
What is the correct generality of "rolling the necklace" open? At least, this works for `α × β` and
`β × α` where `α` is a circular order and `β` is a linear order.
What's next is to define circular groups and provide instances for `zmod n`, the usual circle group
`circle`, `real.angle`, and `roots_of_unity M`. What conditions do we need on `M` for this last one
to work?
We should have circular order homomorphisms. The typical example is
`days_to_month : days_of_the_year →c months_of_the_year` which relates the circular order of days
and the circular order of months. Is `α →c β` a good notation?
## References
* https://en.wikipedia.org/wiki/Cyclic_order
* https://en.wikipedia.org/wiki/Partial_cyclic_order
## Tags
circular order, cyclic order, circularly ordered set, cyclically ordered set
-/
#print Btw /-
/-- Syntax typeclass for a betweenness relation. -/
class Btw (α : Type _) where
Btw : α → α → α → Prop
#align has_btw Btw
-/
export Btw (Btw)
#print SBtw /-
/-- Syntax typeclass for a strict betweenness relation. -/
class SBtw (α : Type _) where
Sbtw : α → α → α → Prop
#align has_sbtw SBtw
-/
export SBtw (Sbtw)
#print CircularPreorder /-
/- ./././Mathport/Syntax/Translate/Tactic/Builtin.lean:69:18: unsupported non-interactive tactic order_laws_tac -/
/-- A circular preorder is the analogue of a preorder where you can loop around. `≤` and `<` are
replaced by ternary relations `btw` and `sbtw`. `btw` is reflexive and cyclic. `sbtw` is transitive.
-/
class CircularPreorder (α : Type _) extends Btw α, SBtw α where
btw_refl (a : α) : btw a a a
btw_cyclic_left {a b c : α} : btw a b c → btw b c a
Sbtw := fun a b c => btw a b c ∧ ¬btw c b a
sbtw_iff_btw_not_btw {a b c : α} : sbtw a b c ↔ btw a b c ∧ ¬btw c b a := by
run_tac
order_laws_tac
sbtw_trans_left {a b c d : α} : sbtw a b c → sbtw b d c → sbtw a d c
#align circular_preorder CircularPreorder
-/
export CircularPreorder (btw_refl btw_cyclic_left sbtw_trans_left)
#print CircularPartialOrder /-
/-- A circular partial order is the analogue of a partial order where you can loop around. `≤` and
`<` are replaced by ternary relations `btw` and `sbtw`. `btw` is reflexive, cyclic and
antisymmetric. `sbtw` is transitive. -/
class CircularPartialOrder (α : Type _) extends CircularPreorder α where
btw_antisymm {a b c : α} : btw a b c → btw c b a → a = b ∨ b = c ∨ c = a
#align circular_partial_order CircularPartialOrder
-/
export CircularPartialOrder (btw_antisymm)
#print CircularOrder /-
/-- A circular order is the analogue of a linear order where you can loop around. `≤` and `<` are
replaced by ternary relations `btw` and `sbtw`. `btw` is reflexive, cyclic, antisymmetric and total.
`sbtw` is transitive. -/
class CircularOrder (α : Type _) extends CircularPartialOrder α where
btw_total : ∀ a b c : α, btw a b c ∨ btw c b a
#align circular_order CircularOrder
-/
export CircularOrder (btw_total)
/-! ### Circular preorders -/
section CircularPreorder
variable {α : Type _} [CircularPreorder α]
#print btw_rfl /-
theorem btw_rfl {a : α} : Btw a a a :=
btw_refl _
#align btw_rfl btw_rfl
-/
#print Btw.btw.cyclic_left /-
-- TODO: `alias` creates a def instead of a lemma.
-- alias btw_cyclic_left ← has_btw.btw.cyclic_left
theorem Btw.btw.cyclic_left {a b c : α} (h : Btw a b c) : Btw b c a :=
btw_cyclic_left h
#align has_btw.btw.cyclic_left Btw.btw.cyclic_left
-/
#print btw_cyclic_right /-
theorem btw_cyclic_right {a b c : α} (h : Btw a b c) : Btw c a b :=
h.cyclic_left.cyclic_left
#align btw_cyclic_right btw_cyclic_right
-/
alias btw_cyclic_right ← Btw.btw.cyclic_right
#align has_btw.btw.cyclic_right Btw.btw.cyclic_right
#print btw_cyclic /-
/-- The order of the `↔` has been chosen so that `rw btw_cyclic` cycles to the right while
`rw ←btw_cyclic` cycles to the left (thus following the prepended arrow). -/
theorem btw_cyclic {a b c : α} : Btw a b c ↔ Btw c a b :=
⟨btw_cyclic_right, btw_cyclic_left⟩
#align btw_cyclic btw_cyclic
-/
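-- An illustrative example: applying `cyclic_left` three times returns to the
-- original betweenness fact.
example {a b c : α} (h : Btw a b c) : Btw a b c :=
  h.cyclic_left.cyclic_left.cyclic_left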
#print sbtw_iff_btw_not_btw /-
theorem sbtw_iff_btw_not_btw {a b c : α} : Sbtw a b c ↔ Btw a b c ∧ ¬Btw c b a :=
CircularPreorder.sbtw_iff_btw_not_btw
#align sbtw_iff_btw_not_btw sbtw_iff_btw_not_btw
-/
#print btw_of_sbtw /-
theorem btw_of_sbtw {a b c : α} (h : Sbtw a b c) : Btw a b c :=
(sbtw_iff_btw_not_btw.1 h).1
#align btw_of_sbtw btw_of_sbtw
-/
alias btw_of_sbtw ← SBtw.sbtw.btw
#align has_sbtw.sbtw.btw SBtw.sbtw.btw
#print not_btw_of_sbtw /-
theorem not_btw_of_sbtw {a b c : α} (h : Sbtw a b c) : ¬Btw c b a :=
(sbtw_iff_btw_not_btw.1 h).2
#align not_btw_of_sbtw not_btw_of_sbtw
-/
alias not_btw_of_sbtw ← SBtw.sbtw.not_btw
#align has_sbtw.sbtw.not_btw SBtw.sbtw.not_btw
#print not_sbtw_of_btw /-
theorem not_sbtw_of_btw {a b c : α} (h : Btw a b c) : ¬Sbtw c b a := fun h' => h'.not_btw h
#align not_sbtw_of_btw not_sbtw_of_btw
-/
alias not_sbtw_of_btw ← Btw.btw.not_sbtw
#align has_btw.btw.not_sbtw Btw.btw.not_sbtw
#print sbtw_of_btw_not_btw /-
theorem sbtw_of_btw_not_btw {a b c : α} (habc : Btw a b c) (hcba : ¬Btw c b a) : Sbtw a b c :=
sbtw_iff_btw_not_btw.2 ⟨habc, hcba⟩
#align sbtw_of_btw_not_btw sbtw_of_btw_not_btw
-/
alias sbtw_of_btw_not_btw ← Btw.btw.sbtw_of_not_btw
#align has_btw.btw.sbtw_of_not_btw Btw.btw.sbtw_of_not_btw
#print sbtw_cyclic_left /-
theorem sbtw_cyclic_left {a b c : α} (h : Sbtw a b c) : Sbtw b c a :=
h.Btw.cyclic_left.sbtw_of_not_btw fun h' => h.not_btw h'.cyclic_left
#align sbtw_cyclic_left sbtw_cyclic_left
-/
alias sbtw_cyclic_left ← SBtw.sbtw.cyclic_left
#align has_sbtw.sbtw.cyclic_left SBtw.sbtw.cyclic_left
#print sbtw_cyclic_right /-
theorem sbtw_cyclic_right {a b c : α} (h : Sbtw a b c) : Sbtw c a b :=
h.cyclic_left.cyclic_left
#align sbtw_cyclic_right sbtw_cyclic_right
-/
alias sbtw_cyclic_right ← SBtw.sbtw.cyclic_right
#align has_sbtw.sbtw.cyclic_right SBtw.sbtw.cyclic_right
#print sbtw_cyclic /-
/-- The order of the `↔` has been chosen so that `rw sbtw_cyclic` cycles to the right while
`rw ←sbtw_cyclic` cycles to the left (thus following the prepended arrow). -/
theorem sbtw_cyclic {a b c : α} : Sbtw a b c ↔ Sbtw c a b :=
⟨sbtw_cyclic_right, sbtw_cyclic_left⟩
#align sbtw_cyclic sbtw_cyclic
-/
#print SBtw.sbtw.trans_left /-
-- TODO: `alias` creates a def instead of a lemma.
-- alias btw_trans_left ← has_btw.btw.trans_left
theorem SBtw.sbtw.trans_left {a b c d : α} (h : Sbtw a b c) : Sbtw b d c → Sbtw a d c :=
sbtw_trans_left h
#align has_sbtw.sbtw.trans_left SBtw.sbtw.trans_left
-/
#print sbtw_trans_right /-
theorem sbtw_trans_right {a b c d : α} (hbc : Sbtw a b c) (hcd : Sbtw a c d) : Sbtw a b d :=
(hbc.cyclic_left.trans_left hcd.cyclic_left).cyclic_right
#align sbtw_trans_right sbtw_trans_right
-/
alias sbtw_trans_right ← SBtw.sbtw.trans_right
#align has_sbtw.sbtw.trans_right SBtw.sbtw.trans_right
#print sbtw_asymm /-
theorem sbtw_asymm {a b c : α} (h : Sbtw a b c) : ¬Sbtw c b a :=
h.Btw.not_sbtw
#align sbtw_asymm sbtw_asymm
-/
alias sbtw_asymm ← SBtw.sbtw.not_sbtw
#align has_sbtw.sbtw.not_sbtw SBtw.sbtw.not_sbtw
#print sbtw_irrefl_left_right /-
theorem sbtw_irrefl_left_right {a b : α} : ¬Sbtw a b a := fun h => h.not_btw h.Btw
#align sbtw_irrefl_left_right sbtw_irrefl_left_right
-/
#print sbtw_irrefl_left /-
theorem sbtw_irrefl_left {a b : α} : ¬Sbtw a a b := fun h => sbtw_irrefl_left_right h.cyclic_left
#align sbtw_irrefl_left sbtw_irrefl_left
-/
#print sbtw_irrefl_right /-
theorem sbtw_irrefl_right {a b : α} : ¬Sbtw a b b := fun h => sbtw_irrefl_left_right h.cyclic_right
#align sbtw_irrefl_right sbtw_irrefl_right
-/
#print sbtw_irrefl /-
theorem sbtw_irrefl (a : α) : ¬Sbtw a a a :=
sbtw_irrefl_left_right
#align sbtw_irrefl sbtw_irrefl
-/
end CircularPreorder
/-! ### Circular partial orders -/
section CircularPartialOrder
variable {α : Type _} [CircularPartialOrder α]
#print Btw.btw.antisymm /-
-- TODO: `alias` creates a def instead of a lemma.
-- alias btw_antisymm ← has_btw.btw.antisymm
theorem Btw.btw.antisymm {a b c : α} (h : Btw a b c) : Btw c b a → a = b ∨ b = c ∨ c = a :=
btw_antisymm h
#align has_btw.btw.antisymm Btw.btw.antisymm
-/
end CircularPartialOrder
/-! ### Circular orders -/
section CircularOrder
variable {α : Type _} [CircularOrder α]
#print btw_refl_left_right /-
theorem btw_refl_left_right (a b : α) : Btw a b a :=
(or_self_iff _).1 (btw_total a b a)
#align btw_refl_left_right btw_refl_left_right
-/
#print btw_rfl_left_right /-
theorem btw_rfl_left_right {a b : α} : Btw a b a :=
btw_refl_left_right _ _
#align btw_rfl_left_right btw_rfl_left_right
-/
#print btw_refl_left /-
theorem btw_refl_left (a b : α) : Btw a a b :=
btw_rfl_left_right.cyclic_right
#align btw_refl_left btw_refl_left
-/
#print btw_rfl_left /-
theorem btw_rfl_left {a b : α} : Btw a a b :=
btw_refl_left _ _
#align btw_rfl_left btw_rfl_left
-/
#print btw_refl_right /-
theorem btw_refl_right (a b : α) : Btw a b b :=
btw_rfl_left_right.cyclic_left
#align btw_refl_right btw_refl_right
-/
#print btw_rfl_right /-
theorem btw_rfl_right {a b : α} : Btw a b b :=
btw_refl_right _ _
#align btw_rfl_right btw_rfl_right
-/
#print sbtw_iff_not_btw /-
theorem sbtw_iff_not_btw {a b c : α} : Sbtw a b c ↔ ¬Btw c b a :=
by
rw [sbtw_iff_btw_not_btw]
exact and_iff_right_of_imp (btw_total _ _ _).resolve_left
#align sbtw_iff_not_btw sbtw_iff_not_btw
-/
#print btw_iff_not_sbtw /-
theorem btw_iff_not_sbtw {a b c : α} : Btw a b c ↔ ¬Sbtw c b a :=
iff_not_comm.1 sbtw_iff_not_btw
#align btw_iff_not_sbtw btw_iff_not_sbtw
-/
end CircularOrder
/-! ### Circular intervals -/
namespace Set
section CircularPreorder
variable {α : Type _} [CircularPreorder α]
#print Set.cIcc /-
/-- Closed-closed circular interval -/
def cIcc (a b : α) : Set α :=
{ x | Btw a x b }
#align set.cIcc Set.cIcc
-/
#print Set.cIoo /-
/-- Open-open circular interval -/
def cIoo (a b : α) : Set α :=
{ x | Sbtw a x b }
#align set.cIoo Set.cIoo
-/
#print Set.mem_cIcc /-
@[simp]
theorem mem_cIcc {a b x : α} : x ∈ cIcc a b ↔ Btw a x b :=
Iff.rfl
#align set.mem_cIcc Set.mem_cIcc
-/
#print Set.mem_cIoo /-
@[simp]
theorem mem_cIoo {a b x : α} : x ∈ cIoo a b ↔ Sbtw a x b :=
Iff.rfl
#align set.mem_cIoo Set.mem_cIoo
-/
end CircularPreorder
section CircularOrder
variable {α : Type _} [CircularOrder α]
#print Set.left_mem_cIcc /-
theorem left_mem_cIcc (a b : α) : a ∈ cIcc a b :=
btw_rfl_left
#align set.left_mem_cIcc Set.left_mem_cIcc
-/
#print Set.right_mem_cIcc /-
theorem right_mem_cIcc (a b : α) : b ∈ cIcc a b :=
btw_rfl_right
#align set.right_mem_cIcc Set.right_mem_cIcc
-/
/- warning: set.compl_cIcc -> Set.compl_cIcc is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} [_inst_1 : CircularOrder.{u1} α] {a : α} {b : α}, Eq.{succ u1} (Set.{u1} α) (HasCompl.compl.{u1} (Set.{u1} α) (BooleanAlgebra.toHasCompl.{u1} (Set.{u1} α) (Set.booleanAlgebra.{u1} α)) (Set.cIcc.{u1} α (CircularPartialOrder.toCircularPreorder.{u1} α (CircularOrder.toCircularPartialOrder.{u1} α _inst_1)) a b)) (Set.cIoo.{u1} α (CircularPartialOrder.toCircularPreorder.{u1} α (CircularOrder.toCircularPartialOrder.{u1} α _inst_1)) b a)
but is expected to have type
forall {α : Type.{u1}} [_inst_1 : CircularOrder.{u1} α] {a : α} {b : α}, Eq.{succ u1} (Set.{u1} α) (HasCompl.compl.{u1} (Set.{u1} α) (BooleanAlgebra.toHasCompl.{u1} (Set.{u1} α) (Set.instBooleanAlgebraSet.{u1} α)) (Set.cIcc.{u1} α (CircularPartialOrder.toCircularPreorder.{u1} α (CircularOrder.toCircularPartialOrder.{u1} α _inst_1)) a b)) (Set.cIoo.{u1} α (CircularPartialOrder.toCircularPreorder.{u1} α (CircularOrder.toCircularPartialOrder.{u1} α _inst_1)) b a)
Case conversion may be inaccurate. Consider using '#align set.compl_cIcc Set.compl_cIccₓ'. -/
theorem compl_cIcc {a b : α} : (cIcc a b)ᶜ = cIoo b a :=
by
ext
rw [Set.mem_cIoo, sbtw_iff_not_btw]
rfl
#align set.compl_cIcc Set.compl_cIcc
/- warning: set.compl_cIoo -> Set.compl_cIoo is a dubious translation:
lean 3 declaration is
forall {α : Type.{u1}} [_inst_1 : CircularOrder.{u1} α] {a : α} {b : α}, Eq.{succ u1} (Set.{u1} α) (HasCompl.compl.{u1} (Set.{u1} α) (BooleanAlgebra.toHasCompl.{u1} (Set.{u1} α) (Set.booleanAlgebra.{u1} α)) (Set.cIoo.{u1} α (CircularPartialOrder.toCircularPreorder.{u1} α (CircularOrder.toCircularPartialOrder.{u1} α _inst_1)) a b)) (Set.cIcc.{u1} α (CircularPartialOrder.toCircularPreorder.{u1} α (CircularOrder.toCircularPartialOrder.{u1} α _inst_1)) b a)
but is expected to have type
forall {α : Type.{u1}} [_inst_1 : CircularOrder.{u1} α] {a : α} {b : α}, Eq.{succ u1} (Set.{u1} α) (HasCompl.compl.{u1} (Set.{u1} α) (BooleanAlgebra.toHasCompl.{u1} (Set.{u1} α) (Set.instBooleanAlgebraSet.{u1} α)) (Set.cIoo.{u1} α (CircularPartialOrder.toCircularPreorder.{u1} α (CircularOrder.toCircularPartialOrder.{u1} α _inst_1)) a b)) (Set.cIcc.{u1} α (CircularPartialOrder.toCircularPreorder.{u1} α (CircularOrder.toCircularPartialOrder.{u1} α _inst_1)) b a)
Case conversion may be inaccurate. Consider using '#align set.compl_cIoo Set.compl_cIooₓ'. -/
theorem compl_cIoo {a b : α} : (cIoo a b)ᶜ = cIcc b a :=
by
ext
rw [Set.mem_cIcc, btw_iff_not_sbtw]
rfl
#align set.compl_cIoo Set.compl_cIoo
end CircularOrder
end Set
/-! ### Circularizing instances -/
#print LE.toBtw /-
/-- The betweenness relation obtained from "looping around" `≤`.
See note [reducible non-instances]. -/
@[reducible]
def LE.toBtw (α : Type _) [LE α] : Btw α
where Btw a b c := a ≤ b ∧ b ≤ c ∨ b ≤ c ∧ c ≤ a ∨ c ≤ a ∧ a ≤ b
#align has_le.to_has_btw LE.toBtw
-/
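/- A worked instance of the circularized `≤` (an illustrative sketch, not from the
source): on a 12-hour clock modelled as `Fin 12`, `Btw 10 1 4` holds via the middle
disjunct `b ≤ c ∧ c ≤ a`, since `1 ≤ 4` and `4 ≤ 10`: going from 10 around through
1 to 4 wraps past the top of the clock exactly once. -/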
#print LT.toSBtw /-
/-- The strict betweenness relation obtained from "looping around" `<`.
See note [reducible non-instances]. -/
@[reducible]
def LT.toSBtw (α : Type _) [LT α] : SBtw α
where Sbtw a b c := a < b ∧ b < c ∨ b < c ∧ c < a ∨ c < a ∧ a < b
#align has_lt.to_has_sbtw LT.toSBtw
-/
#print Preorder.toCircularPreorder /-
/-- The circular preorder obtained from "looping around" a preorder.
See note [reducible non-instances]. -/
@[reducible]
def Preorder.toCircularPreorder (α : Type _) [Preorder α] : CircularPreorder α
where
Btw a b c := a ≤ b ∧ b ≤ c ∨ b ≤ c ∧ c ≤ a ∨ c ≤ a ∧ a ≤ b
Sbtw a b c := a < b ∧ b < c ∨ b < c ∧ c < a ∨ c < a ∧ a < b
btw_refl a := Or.inl ⟨le_rfl, le_rfl⟩
btw_cyclic_left a b c h := by
unfold btw at h⊢
rwa [← or_assoc, or_comm']
sbtw_trans_left a b c d :=
by
rintro (⟨hab, hbc⟩ | ⟨hbc, hca⟩ | ⟨hca, hab⟩) (⟨hbd, hdc⟩ | ⟨hdc, hcb⟩ | ⟨hcb, hbd⟩)
· exact Or.inl ⟨hab.trans hbd, hdc⟩
· exact (hbc.not_lt hcb).elim
· exact (hbc.not_lt hcb).elim
· exact Or.inr (Or.inl ⟨hdc, hca⟩)
· exact Or.inr (Or.inl ⟨hdc, hca⟩)
· exact (hbc.not_lt hcb).elim
· exact Or.inr (Or.inl ⟨hdc, hca⟩)
· exact Or.inr (Or.inl ⟨hdc, hca⟩)
· exact Or.inr (Or.inr ⟨hca, hab.trans hbd⟩)
sbtw_iff_btw_not_btw a b c := by
simp_rw [lt_iff_le_not_le]
set x₀ := a ≤ b
set x₁ := b ≤ c
set x₂ := c ≤ a
have : x₀ → x₁ → a ≤ c := le_trans
have : x₁ → x₂ → b ≤ a := le_trans
have : x₂ → x₀ → c ≤ b := le_trans
clear_value x₀ x₁ x₂
tauto
#align preorder.to_circular_preorder Preorder.toCircularPreorder
-/
#print PartialOrder.toCircularPartialOrder /-
/-- The circular partial order obtained from "looping around" a partial order.
See note [reducible non-instances]. -/
@[reducible]
def PartialOrder.toCircularPartialOrder (α : Type _) [PartialOrder α] : CircularPartialOrder α :=
{ Preorder.toCircularPreorder α with
btw_antisymm := fun a b c =>
by
rintro (⟨hab, hbc⟩ | ⟨hbc, hca⟩ | ⟨hca, hab⟩) (⟨hcb, hba⟩ | ⟨hba, hac⟩ | ⟨hac, hcb⟩)
· exact Or.inl (hab.antisymm hba)
· exact Or.inl (hab.antisymm hba)
· exact Or.inr (Or.inl <| hbc.antisymm hcb)
· exact Or.inr (Or.inl <| hbc.antisymm hcb)
· exact Or.inr (Or.inr <| hca.antisymm hac)
· exact Or.inr (Or.inl <| hbc.antisymm hcb)
· exact Or.inl (hab.antisymm hba)
· exact Or.inl (hab.antisymm hba)
· exact Or.inr (Or.inr <| hca.antisymm hac) }
#align partial_order.to_circular_partial_order PartialOrder.toCircularPartialOrder
-/
#print LinearOrder.toCircularOrder /-
/-- The circular order obtained from "looping around" a linear order.
See note [reducible non-instances]. -/
@[reducible]
def LinearOrder.toCircularOrder (α : Type _) [LinearOrder α] : CircularOrder α :=
{ PartialOrder.toCircularPartialOrder α with
btw_total := fun a b c =>
by
cases' le_total a b with hab hba <;> cases' le_total b c with hbc hcb <;>
cases' le_total c a with hca hac
· exact Or.inl (Or.inl ⟨hab, hbc⟩)
· exact Or.inl (Or.inl ⟨hab, hbc⟩)
· exact Or.inl (Or.inr <| Or.inr ⟨hca, hab⟩)
· exact Or.inr (Or.inr <| Or.inr ⟨hac, hcb⟩)
· exact Or.inl (Or.inr <| Or.inl ⟨hbc, hca⟩)
· exact Or.inr (Or.inr <| Or.inl ⟨hba, hac⟩)
· exact Or.inr (Or.inl ⟨hcb, hba⟩)
· exact Or.inr (Or.inr <| Or.inl ⟨hba, hac⟩) }
#align linear_order.to_circular_order LinearOrder.toCircularOrder
-/
/-! ### Dual constructions -/
section OrderDual
instance (α : Type _) [Btw α] : Btw αᵒᵈ :=
⟨fun a b c : α => Btw c b a⟩
instance (α : Type _) [SBtw α] : SBtw αᵒᵈ :=
⟨fun a b c : α => Sbtw c b a⟩
instance (α : Type _) [h : CircularPreorder α] : CircularPreorder αᵒᵈ :=
{ OrderDual.hasBtw α,
OrderDual.hasSbtw α with
btw_refl := btw_refl
btw_cyclic_left := fun a b c => btw_cyclic_right
sbtw_trans_left := fun a b c d habc hbdc => hbdc.trans_right habc
sbtw_iff_btw_not_btw := fun a b c => @sbtw_iff_btw_not_btw α _ c b a }
instance (α : Type _) [CircularPartialOrder α] : CircularPartialOrder αᵒᵈ :=
{ OrderDual.circularPreorder α with
btw_antisymm := fun a b c habc hcba => @btw_antisymm α _ _ _ _ hcba habc }
instance (α : Type _) [CircularOrder α] : CircularOrder αᵒᵈ :=
{ OrderDual.circularPartialOrder α with btw_total := fun a b c => btw_total c b a }
end OrderDual
|
{"author": "leanprover-community", "repo": "mathlib3port", "sha": "62505aa236c58c8559783b16d33e30df3daa54f4", "save_path": "github-repos/lean/leanprover-community-mathlib3port", "path": "github-repos/lean/leanprover-community-mathlib3port/mathlib3port-62505aa236c58c8559783b16d33e30df3daa54f4/Mathbin/Order/Circular.lean"}
|
from enum import Enum
import cv2
import numpy as np
class FrameImageType(Enum):
COLOR = 1
DEPTH = 2
MASK = 3
def generate_frame_image_path(index: int, frame_image_type: FrameImageType, input_folder: str) -> str:
filename_prefix_by_frame_image_type = {
FrameImageType.COLOR: "color",
FrameImageType.DEPTH: "depth",
FrameImageType.MASK: "sod"
}
extension_by_frame_image_type = {
FrameImageType.COLOR: "jpg",
FrameImageType.DEPTH: "png",
FrameImageType.MASK: "png"
}
return f"{input_folder:s}/{filename_prefix_by_frame_image_type[frame_image_type]:s}/{index:06d}.{extension_by_frame_image_type[frame_image_type]:s}"
def load_frame_numpy_raw_image(index: int, frame_image_type: FrameImageType, input_folder: str) -> np.ndarray:
return cv2.imread(generate_frame_image_path(index, frame_image_type, input_folder), cv2.IMREAD_UNCHANGED)
def load_mask_numpy_image(index: int, input_folder: str) -> np.ndarray:
return load_frame_numpy_raw_image(index, FrameImageType.MASK, input_folder)
def load_depth_numpy_image(index: int, input_folder: str, conversion_factor=0.001) -> np.ndarray:
return load_frame_numpy_raw_image(index, FrameImageType.DEPTH, input_folder).astype(np.float32) * conversion_factor
def load_color_numpy_image(index: int, input_folder: str) -> np.ndarray:
return load_frame_numpy_raw_image(index, FrameImageType.COLOR, input_folder)
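# Illustrative usage (a sketch; "datasets/seq01" is a placeholder for a capture
# folder laid out as <input_folder>/color/000000.jpg, <input_folder>/depth/000000.png
# and <input_folder>/sod/000000.png, matching generate_frame_image_path above):
#
#   color = load_color_numpy_image(0, "datasets/seq01")  # BGR uint8
#   depth = load_depth_numpy_image(0, "datasets/seq01")  # float32 metres, if stored in mm
#   mask = load_mask_numpy_image(0, "datasets/seq01")
#
# Note that cv2.imread returns None rather than raising when a file is missing.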
|
{"hexsha": "ec230264d8be6a7e1c909f20fdde7a6106138768", "size": 1450, "ext": "py", "lang": "Python", "max_stars_repo_path": "apps/frameviewer/frameloading.py", "max_stars_repo_name": "Algomorph/NeuralTracking", "max_stars_repo_head_hexsha": "6312be8e18828344c65e25a423c239efcd3428dd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-04-18T04:23:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-01T08:37:51.000Z", "max_issues_repo_path": "apps/frameviewer/frameloading.py", "max_issues_repo_name": "Algomorph/NeuralTracking", "max_issues_repo_head_hexsha": "6312be8e18828344c65e25a423c239efcd3428dd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 24, "max_issues_repo_issues_event_min_datetime": "2021-05-28T21:59:11.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-03T16:09:41.000Z", "max_forks_repo_path": "apps/frameviewer/frameloading.py", "max_forks_repo_name": "Algomorph/NeuralTracking", "max_forks_repo_head_hexsha": "6312be8e18828344c65e25a423c239efcd3428dd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2021-03-10T02:56:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-14T06:04:50.000Z", "avg_line_length": 36.25, "max_line_length": 152, "alphanum_fraction": 0.7579310345, "include": true, "reason": "import numpy", "num_tokens": 358}
|
#include "rotation.h"
#include "himan_common.h"
#include <Eigen/Geometry>
using namespace Eigen;
template <typename T>
void himan::geoutil::rotate(himan::geoutil::position<T>& p, const himan::geoutil::rotation<T>& r)
{
// Map data structures to Eigen library objects
Map<Matrix<T, 3, 1>> P(p.Data());
Map<const Quaternion<T>> QR(r.Data());
// Create a corresponding quaternion of the input position vector
Quaternion<T> QP;
QP.w() = 0;
QP.vec() = P;
// Apply spatial rotation through quaternion products
Quaternion<T> rotatedP = QR * QP * QR.inverse();
P = rotatedP.vec();
}
template void himan::geoutil::rotate<float>(himan::geoutil::position<float>&, const himan::geoutil::rotation<float>&);
template void himan::geoutil::rotate<double>(himan::geoutil::position<double>&,
const himan::geoutil::rotation<double>&);
template <typename T>
himan::geoutil::position<T> himan::geoutil::rotate(const himan::geoutil::position<T>& p,
const himan::geoutil::rotation<T>& r)
{
position<T> ret(p);
rotate(ret, r);
return ret;
}
template himan::geoutil::position<float> himan::geoutil::rotate<float>(const himan::geoutil::position<float>&,
const himan::geoutil::rotation<float>&);
template himan::geoutil::position<double> himan::geoutil::rotate<double>(const himan::geoutil::position<double>&,
const himan::geoutil::rotation<double>&);
template <typename T>
himan::geoutil::rotation<T> himan::geoutil::rotation<T>::FromRotLatLon(const T& latOfSouthPole, const T& lonOfSouthPole,
const T& angleOfRot)
{
himan::geoutil::rotation<T> ret;
// Map data structures to Eigen library objects
Map<Quaternion<T>> QRot(ret.Data());
// Create a rotation quaternion as a product of a series of rotations about the principal axes
QRot = AngleAxis<T>(lonOfSouthPole, Matrix<T, 3, 1>::UnitZ()) *
AngleAxis<T>(-(T(M_PI / 2.0) + latOfSouthPole), Matrix<T, 3, 1>::UnitY()) *
AngleAxis<T>(-angleOfRot, Matrix<T, 3, 1>::UnitZ());
return ret;
}
template himan::geoutil::rotation<float> himan::geoutil::rotation<float>::FromRotLatLon(const float& latOfSouthPole,
const float& lonOfSouthPole,
const float& angleOfRot);
template himan::geoutil::rotation<double> himan::geoutil::rotation<double>::FromRotLatLon(const double& latOfSouthPole,
const double& lonOfSouthPole,
const double& angleOfRot);
template <typename T>
himan::geoutil::rotation<T> himan::geoutil::rotation<T>::ToRotLatLon(const T& latOfSouthPole, const T& lonOfSouthPole,
const T& angleOfRot)
{
himan::geoutil::rotation<T> ret;
// Map data structures to Eigen library objects
Map<Quaternion<T>> QRot(ret.Data());
// Create a rotation quaternion as a product of a series of rotations about the principal axes
QRot = AngleAxis<T>(angleOfRot, Matrix<T, 3, 1>::UnitZ()) *
AngleAxis<T>(T(M_PI / 2.0) + latOfSouthPole, Matrix<T, 3, 1>::UnitY()) *
AngleAxis<T>(-lonOfSouthPole, Matrix<T, 3, 1>::UnitZ());
return ret;
}
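// Note: with identical arguments, ToRotLatLon yields the inverse of FromRotLatLon,
// since (Rz(a) * Ry(b) * Rz(c))^-1 = Rz(-c) * Ry(-b) * Rz(-a); the two therefore
// convert positions in opposite directions between the regular and rotated grids.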
template himan::geoutil::rotation<float> himan::geoutil::rotation<float>::ToRotLatLon(const float& latOfSouthPole,
const float& lonOfSouthPole,
const float& angleOfRot);
template himan::geoutil::rotation<double> himan::geoutil::rotation<double>::ToRotLatLon(const double& latOfSouthPole,
const double& lonOfSouthPole,
const double& angleOfRot);
|
{"hexsha": "be174d641f33634657deb856aaf02cde35afea5c", "size": 4331, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "himan-lib/source/rotation.cpp", "max_stars_repo_name": "fox91/himan", "max_stars_repo_head_hexsha": "4bb0ba4b034675edb21a1b468c0104f00f78784b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 18.0, "max_stars_repo_stars_event_min_datetime": "2017-04-20T18:51:41.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T21:12:49.000Z", "max_issues_repo_path": "himan-lib/source/rotation.cpp", "max_issues_repo_name": "fox91/himan", "max_issues_repo_head_hexsha": "4bb0ba4b034675edb21a1b468c0104f00f78784b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5.0, "max_issues_repo_issues_event_min_datetime": "2018-07-05T02:15:56.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-01T09:36:51.000Z", "max_forks_repo_path": "himan-lib/source/rotation.cpp", "max_forks_repo_name": "fox91/himan", "max_forks_repo_head_hexsha": "4bb0ba4b034675edb21a1b468c0104f00f78784b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2.0, "max_forks_repo_forks_event_min_datetime": "2020-02-18T06:32:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-29T15:17:09.000Z", "avg_line_length": 51.5595238095, "max_line_length": 120, "alphanum_fraction": 0.5472177326, "num_tokens": 951}
|
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from time import time
from sklearn.preprocessing import StandardScaler
import numpy as np
from sklearn.preprocessing import OneHotEncoder
def run_decomp_nn(k, estimator, f, x_train, x_test, y_train, y_test):
ica_x_train = estimator.fit_transform(x_train)
ica_x_test = estimator.transform(x_test)
scaler = StandardScaler()
scaler.fit(ica_x_train)
ica_x_train = scaler.transform(ica_x_train)
ica_x_test = scaler.transform(ica_x_test)
start = time()
model_1 = MLPClassifier(solver='sgd', learning_rate_init=0.001, validation_fraction=0.1, alpha=1e-6, hidden_layer_sizes=(5, 5), max_iter=5000, random_state=1)
model_1.fit(ica_x_train, y_train)
end = time() - start
results = model_1.predict(ica_x_test)
acc = accuracy_score(y_test, results)
    f.write('%.3f\t%.4f\t%.3f\t%.3f\n' % (k, end, acc, 0.0))
def run_cluster_nn(k, estimator, f, x_train, x_test, y_train, y_test):
print('running...')
estimator.fit(x_train)
predictions = estimator.predict(x_train)
predictions = np.reshape(predictions, (-1, 1))
test_predictions = estimator.predict(x_test)
test_predictions = np.reshape(test_predictions, (-1, 1))
enc = OneHotEncoder()
enc.fit(predictions)
train = enc.transform(predictions).toarray()
test = enc.transform(test_predictions).toarray()
# scaler = StandardScaler()
# scaler.fit(predictions)
# ica_x_train = scaler.transform(predictions)
# ica_x_test = scaler.transform(test_predictions)
start = time()
model_1 = MLPClassifier(solver='sgd', learning_rate_init=0.001, validation_fraction=0.1, alpha=1e-6, hidden_layer_sizes=(5,5), max_iter=5000, random_state=1)
model_1.fit(train, y_train)
end = time() - start
results = model_1.predict(test)
acc = accuracy_score(y_test, results)
    f.write('%.3f\t%.4f\t%.3f\t%.3f\n' % (k, end, acc, 0.0))
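# Illustrative usage (a sketch; the estimators, file name, and the data splits
# x_train/x_test/y_train/y_test are assumptions, not part of this module):
#
#   from sklearn.decomposition import FastICA
#   from sklearn.cluster import KMeans
#
#   with open('nn_results.tsv', 'w') as f:
#       for k in range(2, 11):
#           run_decomp_nn(k, FastICA(n_components=k, random_state=1), f,
#                         x_train, x_test, y_train, y_test)
#           run_cluster_nn(k, KMeans(n_clusters=k, random_state=1), f,
#                          x_train, x_test, y_train, y_test)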
|
{"hexsha": "d1d7ca87b8b373d071adfb85b6aeccafc6653a7c", "size": 1964, "ext": "py", "lang": "Python", "max_stars_repo_path": "supervised/nn_util.py", "max_stars_repo_name": "travisMichael/unsupervisedLearning", "max_stars_repo_head_hexsha": "f01bd4e36833de4917811e51042e3937510e2701", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "supervised/nn_util.py", "max_issues_repo_name": "travisMichael/unsupervisedLearning", "max_issues_repo_head_hexsha": "f01bd4e36833de4917811e51042e3937510e2701", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "supervised/nn_util.py", "max_forks_repo_name": "travisMichael/unsupervisedLearning", "max_forks_repo_head_hexsha": "f01bd4e36833de4917811e51042e3937510e2701", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.0816326531, "max_line_length": 162, "alphanum_fraction": 0.7184317719, "include": true, "reason": "import numpy", "num_tokens": 537}
|
# -*- coding: utf-8 -*-
"""
Created on Thu Sep 23 10:27:40 2021
@author: Tom
"""
import liionpack as lp
import matplotlib.pyplot as plt
import numpy as np
import pybamm
plt.close('all')
# Circuit parameters
R_bus = 1e-4       # busbar resistance (Rb, Rl in the netlist)
R_series = 1e-2    # series connection resistance (Rc)
R_int = 5e-2       # cell internal resistance (Ri)
I_app = 80.0       # applied pack current (I)
ref_voltage = 3.2  # reference voltage (V)
# Load the netlist
netlist = lp.read_netlist("AMMBa", Ri=R_int, Rc=R_series, Rb=R_bus, Rl=R_bus, I=I_app, V=ref_voltage)
# One voltage source per cell in the netlist description, so this counts the cells
Nspm = np.sum(netlist['desc'].str.find('V') > -1)
output_variables = [
'X-averaged total heating [W.m-3]',
'Volume-averaged cell temperature [K]',
'X-averaged negative particle surface concentration [mol.m-3]',
'X-averaged positive particle surface concentration [mol.m-3]',
]
# Heat transfer coefficients
htc = np.ones(Nspm) * 10
# Cycling protocol
protocol = lp.generate_protocol()
# PyBaMM parameters
chemistry = pybamm.parameter_sets.Chen2020
parameter_values = pybamm.ParameterValues(chemistry=chemistry)
# Solve pack
output = lp.solve(netlist=netlist,
parameter_values=parameter_values,
protocol=protocol,
output_variables=output_variables,
htc=htc)
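# XY coordinates of the 32 cells in the pack layout, used below to colour the
# scatter plot by per-cell current at solution step 400.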
X_pos = [0.080052414,0.057192637,0.080052401,0.057192662,0.080052171,0.057192208,0.080052285,0.057192264,
-0.034260006,-0.011396764,-0.034259762,-0.011396799,-0.034259656,-0.011397055,-0.034259716,-0.01139668,
0.034329391,0.01146636,0.034329389,0.011466487,0.034329301,0.011466305,0.034329448,0.011465906,
-0.079983086,-0.057122698,-0.079983176,-0.057123076,-0.079982958,-0.057122401,-0.079982995,-0.057122961]
Y_pos = [-0.046199913,-0.033000108,-0.019799939,-0.0066001454,0.0066000483,0.019799888,0.033000056,0.046200369,
0.046200056,0.033000127,0.019800097,0.0065999294,-0.0065998979,-0.019800061,-0.032999967,-0.046200222,
-0.04620005,-0.032999882,-0.019800016,-0.0065999624,0.0065997543,0.019799885,0.033000077,0.046199929,
0.0462001,0.033000148,0.019800099,0.0066000627,-0.0065999586,-0.019800142,-0.032999927,-0.046199973]
lp.plot_output(output)
fig, ax = plt.subplots()
lp.cell_scatter_plot(ax, X_pos, Y_pos, c=output['Cell current [A]'][400, :])
plt.show()
|
{"hexsha": "76645ab3d5f64cde43c9b9c8d4a056f1306123ad", "size": 2177, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/scripts/load_netlist.py", "max_stars_repo_name": "tinosulzer/liionpack", "max_stars_repo_head_hexsha": "ed1c8e61d6e81c28d73eb0c39fc77e2ac39b6258", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/scripts/load_netlist.py", "max_issues_repo_name": "tinosulzer/liionpack", "max_issues_repo_head_hexsha": "ed1c8e61d6e81c28d73eb0c39fc77e2ac39b6258", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/scripts/load_netlist.py", "max_forks_repo_name": "tinosulzer/liionpack", "max_forks_repo_head_hexsha": "ed1c8e61d6e81c28d73eb0c39fc77e2ac39b6258", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.015625, "max_line_length": 113, "alphanum_fraction": 0.7083141938, "include": true, "reason": "import numpy", "num_tokens": 782}
|
# -*- coding: utf-8 -*-
"""
Created on Fri Jan 12 16:54:41 2018
@author: Aake
"""
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as pl
import EOS
from scaling import scaling,cgs
#%% Void object
class void():
pass
evol=void()
#%% Soft gravity
sc=scaling()
m_planet=5.0
a_planet=1.0
def force(r,rsm):
if r>rsm:
f=cgs.grav*cgs.m_earth*m_planet/r**2
else:
f=cgs.grav*cgs.m_earth*m_planet/rsm**2*(4.*(r/rsm)-3.*(r/rsm)**2)
return f
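# Note: the softened branch above matches the Newtonian force GM/rsm^2 and its
# first derivative -2*GM/rsm^3 at r = rsm, and vanishes at r = 0, so the force
# law is C^1-continuous across the softening radius.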
#%%
title='T(P) for disk density = 1e-11, 1e-10, 1e-9'
#title='EOS = ideal gas gamma=1.4'
pl.figure(1); pl.clf(); pl.title(title)
a_planet=1.0 # orbital radius
r_start=1.00 # start of integration, in units of R_Hill
T_start=200 # disk temperature
d_start=1e-10 # disk density
dlnd=0.02
dlnd=0.05
dlnd=0.1
T=None
rsm=0.0
#masses=(0.1,0.2,0.4,0.6,0.8,1.0)
#masses=(0.4,1.0)
masses=[1.]
evol.mass=[0.0]
evol.temp=[0.0]
for m_planet in masses:
r_p=cgs.r_earth*m_planet**(1./3.)
r_n=r_p
r_H=a_planet*cgs.au*(cgs.m_earth*m_planet/(3.*cgs.m_sun))**(1./3.)
xlabel='r/r_H'
for i in range(1,2):
dd=[]
TT=[]
for d_start in (1e-11,1e-10,1e-9):
root='m={:3.1f}'.format(m_planet)
if i==0:
eos=EOS.eos_i(mu=2.35)
file=open(root+"_i.atm","w")
else:
eos=EOS.eos_t()
file=open(root+"_t.atm","w")
file.write('Pressure Temperature\n')
d1=d_start
T1=T_start
P1=eos.pressure(T1,d1)
gamma1=eos.gamma(T1,d1)
r1=r_start*r_H
#r1=243*cgs.r_earth
if T is not None:
Tp=T
rp=r
d=[d1]; P=[P1]; T=[T1]; r=[r1/r_n]; gamma=[gamma1]; dm=[0.]
n=0
tau=0.0
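            # Integrate inward in hydrostatic equilibrium, stepping ln(rho) by dlnd:
            # the adiabatic relation gives dlnT = (gamma - 1) dlnd, and with the
            # dimensionless gravity g = F*r*rho/P the radius update is dlnr = -dlnP/g.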
while (r1>r_p):
r0=r1
d0=d1
P0=P1
T0=T1
gamma0=eos.gamma(T0,d0)
g0=force(r0,rsm)*r0*d0/P0
d1=d0*np.exp(dlnd)
for iter in range(5):
gamma1=eos.gamma(T1,d1)
gam=0.5*(gamma0+gamma1)
dlnT=dlnd*(gam-1.0)
T1=T0*np.exp(dlnT)
P1=eos.pressure(T1,d1)
f1=force(r1,rsm)
g1=f1*r1*d1/P1
g=0.5*(g0+g1)
dlnP=np.log(P1/P0)
dlnr=-dlnP/g
r1=r0*np.exp(dlnr)
r.append(r1/r_n); d.append(d1); P.append(P1); T.append(T1); gamma.append(gamma1)
dm.append(0.5*(d1+d0)*4.*np.pi*(0.5*(r0+r1))**2*(r0-r1))
n+=1
tau+=(r0-r1)*(d1+d0)/2.0
#print(n,r1/cgs.r_earth,d1,P1,T1,gamma1,vd[-1])
print('{:4d} {:12.3e} {:15.5e} {:13.2e}'.format(n,r1/cgs.r_earth,T1,f1))
file.write('{:15.5e} {:15.5e}\n'.format(P1,T1))
file.close()
dm[0]=dm[1]
pl.loglog(P,T)
pl.xlabel('P')
pl.ylabel('T')
pl.tight_layout()
pl.draw()
pl.pause(0.001)
dd.append(d_start)
TT.append(T1)
#%%
pl.title('T(P) for disk density = 1e-11, 1e-10, 1e-9')
pl.figure(1)
pl.xlabel('P [cgs]')
pl.savefig('T-P density dependence')
#%%
pl.figure(2)
pl.semilogx(dd,TT,'-o')
pl.xlabel('disk density')
pl.ylabel('surface temperature')
pl.ylim(2600,3600)
pl.title('M = 1.0, Tomida & Hori EOS');
pl.savefig('density dependence of final T')
|
{"hexsha": "c4cacff4a0af176c9733bd871a522d87532dfa5b", "size": 3629, "ext": "py", "lang": "Python", "max_stars_repo_path": "data/eos/Tomida+Hori_2016/hydrostatic3.py", "max_stars_repo_name": "applejwjcat/dispatch", "max_stars_repo_head_hexsha": "4fad06ee952de181f6c51b91f179d6396bdfb333", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "data/eos/Tomida+Hori_2016/hydrostatic3.py", "max_issues_repo_name": "applejwjcat/dispatch", "max_issues_repo_head_hexsha": "4fad06ee952de181f6c51b91f179d6396bdfb333", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "data/eos/Tomida+Hori_2016/hydrostatic3.py", "max_forks_repo_name": "applejwjcat/dispatch", "max_forks_repo_head_hexsha": "4fad06ee952de181f6c51b91f179d6396bdfb333", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.2857142857, "max_line_length": 96, "alphanum_fraction": 0.4860843207, "include": true, "reason": "import numpy", "num_tokens": 1226}
|
# https://github.com/chatflip/ImageRecognitionDataset
#
import gzip
import os
import pickle
import shutil
import sys
import tarfile
import urllib.request
import zipfile
import numpy as np
from PIL import Image
import argparse
class ExpansionDataset(object):
    '''Downloads, decompresses, and lays out an image-recognition dataset.'''
def __init__(self, args):
print(args.dataset)
self.dataset_name = args.dataset
self.raw_path = os.path.expanduser(args.raw_file_path)
self.data_path = os.path.expanduser(args.data_file_path)
self.validation_ratio = args.validation_ratio
self.download_dict = get_url(self.dataset_name)
def download(self):
for filename in ([*self.download_dict['filenames']]):
download_file(self.download_dict['baseurl'],
filename, self.raw_path)
def decompress(self):
print('Decompress: {}'.format(self.dataset_name))
for filename in ([*self.download_dict['filenames']]):
decompress_file(filename, self.raw_path)
def setup(self):
setup_file(self.dataset_name, self.raw_path, self.data_path, self.validation_ratio)
def get_url(dataset_name):
data = {'CIFAR10': {'baseurl': 'https://www.cs.toronto.edu/~kriz/',
'filenames': ['cifar-10-python.tar.gz']},
'CIFAR100': {'baseurl': 'https://www.cs.toronto.edu/~kriz/',
'filenames': ['cifar-100-python.tar.gz']},
'MNIST': {'baseurl': 'http://yann.lecun.com/exdb/mnist/',
'filenames': ['train-images-idx3-ubyte.gz',
'train-labels-idx1-ubyte.gz',
't10k-images-idx3-ubyte.gz',
't10k-labels-idx1-ubyte.gz'], },
'fashionMNIST': {'baseurl': 'http://fashion-mnist.s3-website.'
'eu-central-1.amazonaws.com/',
'filenames': ['train-images-idx3-ubyte.gz',
'train-labels-idx1-ubyte.gz',
't10k-images-idx3-ubyte.gz',
't10k-labels-idx1-ubyte.gz'], },
'caltech101': {'baseurl': 'http://www.vision.caltech.edu/'
'Image_Datasets/Caltech101/',
'filenames': ['101_ObjectCategories.tar.gz']},
'caltech256': {'baseurl': 'http://www.vision.caltech.edu/'
'Image_Datasets/Caltech256/',
'filenames': ['256_ObjectCategories.tar']},
'omniglot': {'baseurl': 'https://raw.githubusercontent.com/'
'brendenlake/omniglot/master/python/',
'filenames': ['images_background.zip',
'images_evaluation.zip']},
            'animal': {'baseurl': 'http://xiang.bai.googlepages.com/',
                       'filenames': ['non_rigid_shape_A.zip',
                                     'non_rigid_shape_B.zip']},
}
return data[dataset_name]
def progress(block_count, block_size, total_size):
percentage = min(int(100.0 * block_count * block_size / total_size), 100)
bar = '[{}>{}]'.format('='*(percentage//4), ' '*(25-percentage//4))
sys.stdout.write('{} {:3d}%\r'.format(bar, percentage))
sys.stdout.flush()
def download_file(baseurl, filename, raw_path):
if os.path.exists(os.path.join(raw_path, filename)):
print('File exists: {}'.format(filename))
else:
exist_mkdir(raw_path)
print('Downloading: {}'.format(filename))
try:
urllib.request.urlretrieve(
url=baseurl+filename,
filename=os.path.join(raw_path, filename),
reporthook=progress)
print('')
        except (OSError, urllib.error.HTTPError) as err:
            # HTTPError is an OSError subclass; plain OSErrors have no .code/.reason
            print('ERROR: {}'.format(getattr(err, 'code', err)))
            print(getattr(err, 'reason', ''))
def decompress_file(filename, raw_path):
if '.tar.gz' in filename:
with tarfile.open(os.path.join(raw_path, filename), 'r:gz') as tr:
tr.extractall(os.path.join(raw_path, ''))
elif '.tar' in filename:
with tarfile.open(os.path.join(raw_path, filename), 'r:') as tr:
tr.extractall(os.path.join(raw_path, ''))
elif '.zip' in filename:
with zipfile.ZipFile(os.path.join(raw_path, filename), 'r') as z:
z.extractall(os.path.join(raw_path, ''))
def setup_file(dataset_name, raw_path, data_path, validation_ratio):
exist_mkdir(os.path.join(data_path, dataset_name))
if dataset_name == 'CIFAR10':
setup_cifar10(dataset_name, raw_path, data_path)
elif dataset_name == 'CIFAR100':
setup_cifar100(dataset_name, raw_path, data_path)
elif dataset_name == 'MNIST':
setup_mnist(dataset_name, raw_path, data_path)
elif dataset_name == 'fashionMNIST':
setup_fashionmnist(dataset_name, raw_path, data_path)
elif dataset_name == 'caltech101':
setup_caltech101(dataset_name, raw_path, data_path)
elif dataset_name == 'caltech256':
setup_caltech256(dataset_name, raw_path, data_path)
elif dataset_name == 'omniglot':
setup_omniglot(dataset_name, raw_path, data_path)
elif dataset_name == 'animal':
setup_animal(dataset_name, raw_path, data_path, validation_ratio)
def exist_mkdir(path):
if not os.path.exists(path):
os.makedirs(path)
def unpickle(file):
with open(file, 'rb') as f:
pic = pickle.load(f, encoding='latin1')
return pic
def data2img_cifar(dataset_name, src, dst, class_names):
pickles = unpickle(src)
datas = pickles['data']
if dataset_name == 'CIFAR10':
labels = pickles['labels']
elif dataset_name == 'CIFAR100':
labels = pickles['fine_labels']
filenames = pickles['filenames']
for data, label, filename in zip(datas, labels, filenames):
        img = np.rollaxis(data.reshape((3, 32, 32)), 0, 3)  # CHW -> HWC for PIL
pilimg = Image.fromarray(np.uint8(img))
pilimg.save(os.path.join(dst, class_names[label], filename))
def setup_cifar10(dataset_name, raw_path, data_path):
folder_name = 'cifar-10-batches-py'
src_root = os.path.join(raw_path, folder_name)
dst_root = os.path.join(data_path, dataset_name)
meta_data = unpickle(os.path.join(src_root, 'batches.meta'))
class_names = meta_data['label_names']
exist_mkdir(os.path.join(dst_root, 'train'))
exist_mkdir(os.path.join(dst_root, 'test'))
for class_name in class_names:
exist_mkdir(os.path.join(dst_root, 'train', class_name))
exist_mkdir(os.path.join(dst_root, 'test', class_name))
# Extract train files
print('Extract train files')
for num_subset in range(1, 6):
src_path = '{}/data_batch_{}'.format(src_root, num_subset)
dst_path = os.path.join(dst_root, 'train')
data2img_cifar(dataset_name, src_path, dst_path, class_names)
# Extract test files
print('Extract test files')
src_path = '{}/test_batch'.format(src_root)
dst_path = os.path.join(dst_root, 'test')
data2img_cifar(dataset_name, src_path, dst_path, class_names)
def setup_cifar100(dataset_name, raw_path, data_path):
folder_name = 'cifar-100-python'
src_root = os.path.join(raw_path, folder_name)
dst_root = os.path.join(data_path, dataset_name)
meta_data = unpickle(os.path.join(src_root, 'meta'))
class_names = meta_data['fine_label_names']
exist_mkdir(os.path.join(data_path, dataset_name, 'train'))
exist_mkdir(os.path.join(data_path, dataset_name, 'test'))
for class_name in class_names:
exist_mkdir(os.path.join(data_path, dataset_name, 'train', class_name))
exist_mkdir(os.path.join(data_path, dataset_name, 'test', class_name))
# Extract train files
print('Extract train files')
src_path = '{}/train'.format(src_root)
dst_path = os.path.join(dst_root, 'train')
data2img_cifar(dataset_name, src_path, dst_path, class_names)
# Extract test files
print('Extract test files')
src_path = '{}/test'.format(src_root)
dst_path = os.path.join(dst_root, 'test')
data2img_cifar(dataset_name, src_path, dst_path, class_names)
def data2img_mnist(src, dst, phase):
if phase == 'train':
prefix = 'train'
elif phase == 'test':
prefix = 't10k'
imgs_path = os.path.join(src, '{}-images-idx3-ubyte.gz'.format(prefix))
labels_path = os.path.join(src, '{}-labels-idx1-ubyte.gz'.format(prefix))
    with gzip.open(imgs_path, 'rb') as img, gzip.open(labels_path, 'rb') as lb:
        # idx format: labels carry an 8-byte header, images a 16-byte header
        labels = np.frombuffer(lb.read(), dtype=np.uint8, offset=8)
        imgs = np.frombuffer(img.read(), dtype=np.uint8,
                             offset=16).reshape(len(labels), 784)
count = 0
for img, label in zip(imgs, labels):
exist_mkdir(os.path.join(dst, phase, str(label)))
img = np.reshape(img, (28, 28)).astype(np.uint8)
pilimg = Image.fromarray(img)
pilimg.save('{}/{}/{}/{:05d}.png'.format(dst, phase, label, count))
count += 1
def setup_mnist(dataset_name, raw_path, data_path):
# Extract train files
print('Extract train files')
dst_root = os.path.join(data_path, dataset_name)
data2img_mnist(os.path.join(raw_path, ''), dst_root, 'train')
# Extract test files
print('Extract test files')
data2img_mnist(os.path.join(raw_path, ''), dst_root, 'test')
def setup_fashionmnist(dataset_name, raw_path, data_path):
# Extract train files
print('Extract train files')
dst_root = os.path.join(data_path, dataset_name)
data2img_mnist(os.path.join(raw_path, ''), dst_root, 'train')
# Extract test files
print('Extract test files')
data2img_mnist(os.path.join(raw_path, ''), dst_root, 'test')
def symlink_caltech(dataset_name, data_path, folder_name):
data_path = os.path.abspath(data_path)
if dataset_name == 'caltech101':
ignore_class = 'BACKGROUND_Google'
elif dataset_name == 'caltech256':
ignore_class = '257.clutter'
sym_root = '{}/{}'.format(data_path, dataset_name)
for num_subset in range(10):
subset = 'subset{}'.format(num_subset)
subset_root = os.path.join(sym_root, subset)
exist_mkdir(subset_root)
exist_mkdir(os.path.join(subset_root, 'train'))
exist_mkdir(os.path.join(subset_root, 'test'))
class_names = os.listdir(os.path.join(sym_root, folder_name))
class_names.sort()
for class_name in class_names:
if ignore_class in class_name:
continue
exist_mkdir(os.path.join(subset_root, 'train', class_name))
exist_mkdir(os.path.join(subset_root, 'test', class_name))
for phase in ('train', 'test'):
            filenames = np.genfromtxt('{0}/csv/{0}_{1}_{2}.csv'.format(
                dataset_name, phase, subset),
                dtype=str)  # np.str was removed in NumPy 1.24+; plain str is equivalent
for fname in filenames:
if not os.path.exists(os.path.join(subset_root, phase, fname)):
os.symlink(os.path.join(sym_root, folder_name, fname),
os.path.join(subset_root, phase, fname))
def setup_caltech101(dataset_name, raw_path, data_path):
folder_name = '101_ObjectCategories'
# copy 101_ObjectCategories
cp_src = os.path.join(raw_path, folder_name)
cp_dst = os.path.join(data_path, dataset_name, folder_name)
if not os.path.exists(cp_dst):
shutil.copytree(cp_src, cp_dst)
symlink_caltech(dataset_name, data_path, folder_name)
def setup_caltech256(dataset_name, raw_path, data_path):
folder_name = '256_ObjectCategories'
# copy 256_ObjectCategories
cp_src = os.path.join(raw_path, folder_name)
cp_dst = os.path.join(data_path, dataset_name, folder_name)
if not os.path.exists(cp_dst):
shutil.copytree(cp_src, cp_dst)
symlink_caltech(dataset_name, data_path, folder_name)
def convert_omniglot(src, dst):
num2class = {}
exist_mkdir(dst)
for root, dirs, file_names in os.walk(src):
if len(dirs) == 0:
tmp = file_names[0]
class_num = int(tmp[:int(tmp.find('_'))])
num2class.setdefault(class_num, root)
for key, value in num2class.items():
_, _, class_name, subclass_name = value.replace('\\','/').split('/')
dst_path = '{}/{:04d}_{}_{}'.format(
dst, key, class_name, subclass_name)
exist_mkdir(dst_path)
for file_name in os.listdir(value):
shutil.copy(os.path.join(value, file_name),
os.path.join(dst_path, file_name))
def symlink_omniglot(dst_path, folder_name):
dst_path = os.path.abspath(dst_path)
src_root = '{}/{}'.format(dst_path, folder_name)
dst_root = '{}/'.format(dst_path)
for num_subset in range(20):
subset_root = '{}/subset{}/{}'.format(
dst_root, num_subset, folder_name)
exist_mkdir(subset_root)
exist_mkdir(os.path.join(subset_root, 'train'))
exist_mkdir(os.path.join(subset_root, 'test'))
class_names = os.listdir(src_root)
class_names.sort()
for class_name in class_names:
exist_mkdir(os.path.join(subset_root, 'train', class_name))
exist_mkdir(os.path.join(subset_root, 'test', class_name))
file_names = os.listdir(os.path.join(src_root, class_name))
for file_name in file_names:
if '{:02d}.png'.format(num_subset+1) in file_name:
os.symlink(os.path.join(src_root, class_name, file_name),
os.path.join(subset_root, 'train',
class_name, file_name))
else:
os.symlink(os.path.join(src_root, class_name, file_name),
os.path.join(subset_root, 'test',
class_name, file_name))
def setup_omniglot(dataset_name, raw_path, data_path):
for folder_name in ('images_background', 'images_evaluation'):
src_path = os.path.join(raw_path, folder_name)
dst_path = os.path.join(data_path, dataset_name)
convert_omniglot(src_path, os.path.join(dst_path, folder_name))
symlink_omniglot(dst_path, folder_name)
def caltech101_list():
import random
random.seed(0)
src_root = 'caltech101/101_ObjectCategories'
class_names = os.listdir(src_root)
class_names.sort()
for num_subset in range(0, 10):
train_list = []
test_list = []
for class_name in class_names:
if 'BACKGROUND' in class_name:
continue
file_names = os.listdir(os.path.join(src_root, class_name))
file_names.sort()
random.shuffle(file_names)
train_count = 0
for file_name in file_names:
if not file_name.endswith('.jpg'):
continue
if train_count < 30:
train_list.append(os.path.join(src_root,
class_name, file_name))
elif train_count < 60:
test_list.append(os.path.join(src_root,
class_name, file_name))
train_count += 1
np.savetxt('{0}/csv/{0}_train_subset{1}.csv'.format(
'caltech101', num_subset), train_list, fmt='%s')
np.savetxt('{0}/csv/{0}_test_subset{1}.csv'.format(
'caltech101', num_subset), test_list, fmt='%s')
def caltech256_list():
import random
random.seed(0)
src_root = 'caltech256/256_ObjectCategories'
class_names = os.listdir(src_root)
class_names.sort()
for num_subset in range(0, 10):
train_list = []
test_list = []
for class_name in class_names:
if '257.clutter' in class_name:
continue
file_names = os.listdir(os.path.join(src_root, class_name))
file_names.sort()
random.shuffle(file_names)
train_count = 0
for file_name in file_names:
if not file_name.endswith('.jpg'):
continue
if train_count < 30:
train_list.append(class_name+'/'+file_name)
elif train_count < 60:
test_list.append(class_name+'/'+file_name)
train_count += 1
np.savetxt('{0}/csv/{0}_train_subset{1}.csv'.format(
'caltech256', num_subset), train_list, fmt='%s')
np.savetxt('{0}/csv/{0}_test_subset{1}.csv'.format(
'caltech256', num_subset), test_list, fmt='%s')
def conf():
parser = argparse.ArgumentParser(description='Image Recognition Dataset')
parser.add_argument('--dataset', '-d', default='',
type=str, help='select dataset')
parser.add_argument('--raw_file_path', default='',
type=str, help='temporary path')
parser.add_argument('--data_file_path', '-o', default='data',
type=str, help='output path')
parser.add_argument('--validation_ratio', '-r', default=0.2,
type=float, help='ratio for the validation split for animal dataset')
args = parser.parse_args()
return args
###
def setup_animal(dataset_name, raw_path, data_path, val_ratio=0.2):
dst_root = os.path.join(data_path, dataset_name)
exist_mkdir(os.path.join(dst_root, 'train'))
exist_mkdir(os.path.join(dst_root, 'test'))
for mode in ['non_rigid_shape_A','non_rigid_shape_B']:
for dn in os.listdir(os.path.join(raw_path,mode)):
src = os.path.join(raw_path,mode,dn)
if os.path.isdir(src):
exist_mkdir(os.path.join(dst_root, 'train', dn))
exist_mkdir(os.path.join(dst_root, 'test', dn))
fns = sorted(os.listdir(src))
for i,fn in enumerate(fns):
if i < (1-val_ratio)*len(fns):
shutil.copy(os.path.join(src,fn),os.path.join(dst_root, 'train', dn))
else:
shutil.copy(os.path.join(src,fn),os.path.join(dst_root, 'test', dn))
if __name__ == '__main__':
args = conf()
if not args.raw_file_path:
args.raw_file_path = args.dataset
worker = ExpansionDataset(args)
# Download files
worker.download()
# Extract unzip files
worker.decompress()
# Setup
worker.setup()
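# Illustrative command lines (a sketch; the output paths are arbitrary examples):
#
#   python ImageDatasetsDownloader.py --dataset CIFAR10 -o data
#   python ImageDatasetsDownloader.py -d caltech101 --raw_file_path /tmp/raw -o data
#   python ImageDatasetsDownloader.py -d animal -r 0.25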
|
{"hexsha": "debf185e09b63976af382fcb22bc01fa4a62f5b8", "size": 18834, "ext": "py", "lang": "Python", "max_stars_repo_path": "util/ImageDatasetsDownloader.py", "max_stars_repo_name": "shizuo-kaji/PretrainCNNwithNoData", "max_stars_repo_head_hexsha": "6d076e4bc2effcd91e9275470db79e0125704087", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-18T07:18:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-18T07:18:44.000Z", "max_issues_repo_path": "util/ImageDatasetsDownloader.py", "max_issues_repo_name": "shizuo-kaji/PretrainCNNwithNoData", "max_issues_repo_head_hexsha": "6d076e4bc2effcd91e9275470db79e0125704087", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "util/ImageDatasetsDownloader.py", "max_forks_repo_name": "shizuo-kaji/PretrainCNNwithNoData", "max_forks_repo_head_hexsha": "6d076e4bc2effcd91e9275470db79e0125704087", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.5761589404, "max_line_length": 97, "alphanum_fraction": 0.5999787618, "include": true, "reason": "import numpy", "num_tokens": 4343}
|
import sys
import numpy as np
MOD = 10**9 + 7
U = 10**6
def mod_cumprod(a, p=MOD):
l = len(a)
sql = int(np.sqrt(l) + 1)
a = np.resize(a, sql**2).reshape(sql, sql)
for i in range(sql - 1):
a[:, i + 1] *= a[:, i]
a[:, i + 1] %= p
for i in range(sql - 1):
a[i + 1] *= a[i, -1]
a[i + 1] %= p
return np.ravel(a)[:l]
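# How mod_cumprod works (explanatory note): the array is padded out to a
# sqrt(l) x sqrt(l) square, each row is turned into its own running product
# mod p, and then every row is scaled by the accumulated product of the rows
# above it, giving the full prefix product in O(sqrt(l)) vectorised passes.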
def make_fac_ifac(n=U, p=MOD):
fac = np.arange(n + 1)
fac[0] = 1
fac = mod_cumprod(fac)
ifac = np.arange(n + 1, 0, -1)
ifac[0] = pow(int(fac[-1]), p - 2, p)
ifac = mod_cumprod(ifac)[n::-1]
return fac, ifac
fac, ifac = make_fac_ifac()
def mod_choose(n, r, p=MOD):
if r > n or r < 0:
return 0
return fac[n] * ifac[r] % p * ifac[n - r] % p
def make_choose_n_table(n=10**9, r=U, p=MOD):
    # Table of C(n, i) mod p for i = 0..r with a fixed (possibly huge) n,
    # via C(n, i) = n * (n-1) * ... * (n-i+1) / i!.
    table = np.arange(n + 1, n - r, -1)
    table[0] = 1
    table[1:] = mod_cumprod(table[1:]) * ifac[1 : r + 1] % p
    return table
mod_choose_n = make_choose_n_table()
n, m = map(int, sys.stdin.readline().split())
def main():
    # Count arrangements of n items of one kind and m of another in a row with
    # no two of the same kind adjacent: impossible if |n - m| >= 2, n!*m! if
    # |n - m| == 1, and 2 * n!*m! if n == m (either kind may start).
    d = abs(n - m)
    if d >= 2:
res = 0
elif d == 1:
res = fac[n] * fac[m] % MOD
else:
res = fac[n] * fac[m] % MOD * 2 % MOD
print(res)
if __name__ == "__main__":
main()
|
{"hexsha": "bf2fa14280aa96aee8cf9ac855625980e9f268f7", "size": 1350, "ext": "py", "lang": "Python", "max_stars_repo_path": "jp.atcoder/abc065/arc076_a/11896928.py", "max_stars_repo_name": "kagemeka/atcoder-submissions", "max_stars_repo_head_hexsha": "91d8ad37411ea2ec582b10ba41b1e3cae01d4d6e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-09T03:06:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-09T03:06:25.000Z", "max_issues_repo_path": "jp.atcoder/abc065/arc076_a/11896928.py", "max_issues_repo_name": "kagemeka/atcoder-submissions", "max_issues_repo_head_hexsha": "91d8ad37411ea2ec582b10ba41b1e3cae01d4d6e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-05T22:53:18.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-09T01:29:30.000Z", "max_forks_repo_path": "jp.atcoder/abc065/arc076_a/11896928.py", "max_forks_repo_name": "kagemeka/atcoder-submissions", "max_forks_repo_head_hexsha": "91d8ad37411ea2ec582b10ba41b1e3cae01d4d6e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.1492537313, "max_line_length": 61, "alphanum_fraction": 0.4659259259, "include": true, "reason": "import numpy", "num_tokens": 498}
|
"""Tests the DNC class implementation."""
import sonnet as snt
import tensorflow as tf
import unittest
from numpy.testing import assert_array_equal
from .. dnc import dnc
def suite():
"""Create testing suite for all tests in this module."""
suite = unittest.TestSuite()
suite.addTest(DNCTest('test_construction'))
return suite
class DNCTest(unittest.TestCase):
"""Tests for the DNC class."""
def test_construction(self):
"""Test the construction of a DNC."""
output_size = 10
d = dnc.DNC(output_size)
self.assertIsInstance(d, dnc.DNC)
def test_build(self):
"""Test the build of the DNC."""
graph = tf.Graph()
with graph.as_default():
with tf.Session(graph=graph) as sess:
output_size = 10
memory_size = 20
word_size = 8
num_read_heads = 3
hidden_size = 1
tests = [{ # batch_size = 1
'input': [[1, 2, 3]],
'batch_size': 1
}, { # batch_size > 1
'input': [[1, 2, 3], [4, 5, 6]],
'batch_size': 2,
}, { # can handle 2D input with batch_size > 1
'input': [[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]],
[[9, 8, 7],
[6, 5, 4],
[3, 2, 1]]],
'batch_size': 2,
}, { # 3D input with batch_size > 1
'input': [[[[1], [2]], [[3], [4]]],
[[[5], [6]], [[7], [8]]]],
'batch_size': 2,
}]
for test in tests:
i = tf.constant(test['input'], dtype=tf.float32)
batch_size = test['batch_size']
d = dnc.DNC(
output_size,
memory_size=memory_size,
word_size=word_size,
num_read_heads=num_read_heads,
hidden_size=hidden_size)
prev_state = d.initial_state(batch_size, dtype=tf.float32)
output_vector, dnc_state = d(i, prev_state)
assert_array_equal([batch_size, output_size],
sess.run(tf.shape(output_vector)))
assert_array_equal(
[batch_size, num_read_heads, word_size],
sess.run(tf.shape(dnc_state.read_vectors)))
if __name__ == '__main__':
unittest.main(verbosity=2)
|
{"hexsha": "ad37eeb2cebc0cdc3f7948458df4ee350c274e32", "size": 2711, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/testing/dnc_test.py", "max_stars_repo_name": "derrowap/DNC-TensorFlow", "max_stars_repo_head_hexsha": "3e9ad109f8101265ae422ba9c20e058aa70ef7df", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-10-29T18:42:37.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-02T16:39:32.000Z", "max_issues_repo_path": "src/testing/dnc_test.py", "max_issues_repo_name": "derrowap/DNC-TensorFlow", "max_issues_repo_head_hexsha": "3e9ad109f8101265ae422ba9c20e058aa70ef7df", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-10-11T00:28:05.000Z", "max_issues_repo_issues_event_max_datetime": "2017-10-11T00:30:11.000Z", "max_forks_repo_path": "src/testing/dnc_test.py", "max_forks_repo_name": "derrowap/DNC-TensorFlow", "max_forks_repo_head_hexsha": "3e9ad109f8101265ae422ba9c20e058aa70ef7df", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.7564102564, "max_line_length": 78, "alphanum_fraction": 0.443378827, "include": true, "reason": "from numpy", "num_tokens": 591}
|
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from extensions.front.mxnet.eltwise_scalar_replacers import MulScalarFrontReplacer
from extensions.front.mxnet.ssd_detection_output_replacer import SsdPatternDetectionOutputReplacer
from extensions.front.split_normalizer import AttributedSplitToSplit
from extensions.ops.slice_like import SliceLike
from mo.front.common.replacement import FrontReplacementSubgraph
from mo.graph.graph import Graph, Node
from mo.middle.pattern_match import find_pattern_matches
from mo.ops.const import Const
class SsdPatternAnchorReshape(FrontReplacementSubgraph):
"""
Find ssd anchors and setup variants values.
Need to provide compatibility with IE DetectionOutput layer.
"""
enabled = True
graph_condition = [lambda graph: graph.graph['fw'] == 'mxnet' and graph.graph['cmd_params'].enable_ssd_gluoncv]
variants_pattern = dict(
nodes=[
('concat', dict(op='Concat')),
('reshape', dict(op='Reshape')),
('slice_channel', dict(op='Split')),
('mul_scalar1x', dict(op='Mul')),
('mul_scalar1y', dict(op='Mul')),
('mul_scalar2x', dict(op='Mul')),
('mul_scalar2y', dict(op='Mul')),
],
edges=[
('concat', 'reshape'),
('reshape', 'slice_channel'),
('slice_channel', 'mul_scalar1x', {'out': 0}),
('slice_channel', 'mul_scalar1y', {'out': 1}),
('slice_channel', 'mul_scalar2x', {'out': 2}),
('slice_channel', 'mul_scalar2y', {'out': 3}),
]
)
def run_after(self):
return [MulScalarFrontReplacer, AttributedSplitToSplit]
def run_before(self):
return [SsdPatternDetectionOutputReplacer]
def pattern(self):
return dict(
nodes=[
('power', dict(op='Mul')),
('anchor', dict(op='Const')),
('slice_like', dict(op='slice_like')),
('reshape1', dict(op='Reshape')),
('reshape2', dict(op='Reshape')),
('reshape3', dict(op='Reshape'))
],
edges=[
('anchor', 'slice_like', {'in': 0}),
('power', 'slice_like', {'in': 1}),
('slice_like', 'reshape1', {'in': 0}),
('reshape1', 'reshape2', {'in': 0}),
('reshape2', 'reshape3', {'in': 0}),
]
)
def replace_sub_graph(self, graph: Graph, match: dict):
slice_like = match['slice_like']
const = slice_like.in_nodes()[0]
crop_shape = slice_like.in_nodes()[1]
        variants_dict = {'mul_scalar1x': 0.1, 'mul_scalar2x': 0.2, 'mul_scalar1y': 0.1,
                         'mul_scalar2y': 0.2}  # default SSD prior-box variances
for matches in find_pattern_matches(graph, self.variants_pattern['nodes'], self.variants_pattern['edges'], None, None):
for k, v in matches.items():
if v in variants_dict.keys():
variants_dict[v] = Node(graph, k).in_nodes()[1].value[0]
variants = np.array([variants_dict['mul_scalar1x'], variants_dict['mul_scalar1y'],
variants_dict['mul_scalar2x'], variants_dict['mul_scalar2y']] * int(const.value.size / 4)).reshape(const.value.shape)
priorbox_variants = Const(graph, dict(value=variants, name=const.id + '/priorbox_variants')).create_node()
variants_slice_like = SliceLike(graph, dict(axes=slice_like.axes,
name=slice_like.id + '/variants_slice_like')).create_node()
variants_slice_like.in_port(0).connect(priorbox_variants.out_port(0))
variants_slice_like.in_port(1).connect(crop_shape.out_port(0))
concat = match['reshape3'].out_port(0).get_destination().node
assert concat.op == 'Concat'
concat_nodes_count = len(concat.in_nodes())
concat.add_input_port(concat_nodes_count)
concat.in_port(concat_nodes_count).get_connection().set_source(variants_slice_like.out_port(0))
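# Net effect of the replacement: a parallel slice_like over a constant tensor of
# per-coordinate variances is built and appended to the priorbox concat, which
# appears to be the prior-box layout (boxes followed by variances) that the
# Inference Engine DetectionOutput layer consumes.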
|
{"hexsha": "fe0d62ddacc7400566a7b32b527a2a6f49991533", "size": 4156, "ext": "py", "lang": "Python", "max_stars_repo_path": "model-optimizer/extensions/front/mxnet/ssd_anchor_reshape.py", "max_stars_repo_name": "monroid/openvino", "max_stars_repo_head_hexsha": "8272b3857ef5be0aaa8abbf7bd0d5d5615dc40b6", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2406, "max_stars_repo_stars_event_min_datetime": "2020-04-22T15:47:54.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T10:27:37.000Z", "max_issues_repo_path": "model-optimizer/extensions/front/mxnet/ssd_anchor_reshape.py", "max_issues_repo_name": "thomas-yanxin/openvino", "max_issues_repo_head_hexsha": "031e998a15ec738c64cc2379d7f30fb73087c272", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 4948, "max_issues_repo_issues_event_min_datetime": "2020-04-22T15:12:39.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T18:45:42.000Z", "max_forks_repo_path": "model-optimizer/extensions/front/mxnet/ssd_anchor_reshape.py", "max_forks_repo_name": "thomas-yanxin/openvino", "max_forks_repo_head_hexsha": "031e998a15ec738c64cc2379d7f30fb73087c272", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 991, "max_forks_repo_forks_event_min_datetime": "2020-04-23T18:21:09.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T18:40:57.000Z", "avg_line_length": 45.1739130435, "max_line_length": 146, "alphanum_fraction": 0.5986525505, "include": true, "reason": "import numpy", "num_tokens": 954}
|
import numpy as np
import random, operator, os, json, sys
import folium
import pandas as pd
import osmnx as ox
import networkx as nx
import matplotlib.pyplot as plt
from deliveryrouting.generate_input import progressbar
class Fitness:
def __init__(self, route, origins_file, destinations_file):
self.route = route
self.origins_file = origins_file
self.destinations_file = destinations_file
self.distance = 0
self.fitness= 0.0
def routeDistance(self):
if self.distance ==0:
pathDistance = 0
for i in range(0, len(self.route)):
fromCity = self.route[i]
toCity = None
if i + 1 < len(self.route):
toCity = self.route[i + 1]
else:
toCity = self.route[0]
if fromCity in self.origins_file.keys():
if toCity not in self.origins_file[fromCity]['r_dist'].keys():
pathDistance += np.inf
else:
pathDistance += self.origins_file[fromCity]['r_dist'][toCity]
else:
if toCity not in self.destinations_file[fromCity]['r_dist'].keys():
pathDistance += np.inf
else:
pathDistance += self.destinations_file[fromCity]['r_dist'][toCity]
self.distance = pathDistance
return self.distance
def routeFitness(self):
if self.fitness == 0:
self.fitness = 1 / float(self.routeDistance())
return self.fitness
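# Hedged note on the class above: fitness is the reciprocal of the total route
# distance, so any unreachable leg (pathDistance += np.inf) drives a route's
# fitness to zero and effectively removes it from selection.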
class delivery_routing:
'''
Computes an optimized delivery route using a genetic algorithm.
'''
def __init__(self, origins_file, destinations_file, pop_size, elite_size, mutation_rate, generations):
self.origins_file = origins_file
self.destinations_file = destinations_file
self.pop_size = pop_size
self.elite_size = elite_size
self.mutation_rate = mutation_rate
self.generations = generations
def create_route(self, city_list):
route = random.sample(city_list, len(city_list))
return route
def initial_population(self, city_list):
population = []
for i in range(0, self.pop_size):
population.append(self.create_route(city_list))
return population
def rank_routes(self, population):
fitness_results = {}
for i in range(0,len(population)):
fitness_results[i] = Fitness(population[i], self.origins_file, self.destinations_file).routeFitness()
return sorted(fitness_results.items(), key = operator.itemgetter(1), reverse = True)
def selection(self, pop_ranked):
selection_results = []
df = pd.DataFrame(np.array(pop_ranked), columns=["Index","Fitness"])
df['cum_sum'] = df.Fitness.cumsum()
df['cum_perc'] = 100*df.cum_sum/df.Fitness.sum()
for i in range(0, self.elite_size):
selection_results.append(pop_ranked[i][0])
for i in range(0, len(pop_ranked) - self.elite_size):
pick = 100*random.random()
for j in range(0, len(pop_ranked)):  # 'j' avoids shadowing the outer loop index
if pick <= df.iat[j,3]:
selection_results.append(pop_ranked[j][0])
break
return selection_results
def mating_pool(self, population, selection_results):
matingpool = []
for i in range(0, len(selection_results)):
index = selection_results[i]
matingpool.append(population[index])
return matingpool
def breed(self, parent1, parent2):
child = []
childP1 = []
childP2 = []
geneA = int(random.random() * len(parent1))
geneB = int(random.random() * len(parent1))
startGene = min(geneA, geneB)
endGene = max(geneA, geneB)
for i in range(startGene, endGene):
childP1.append(parent1[i])
childP2 = [item for item in parent2 if item not in childP1]
child = childP1 + childP2
return child
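# Hedged worked example of the ordered crossover above: with
# parent1 = ['A','B','C','D'], parent2 = ['D','C','B','A'] and a random slice
# of [1, 3), childP1 = ['B','C'] and childP2 = ['D','A'], so the child is
# ['B','C','D','A'] -- every city appears exactly once.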
def breed_population(self, matingpool):
children = []
length = len(matingpool) - self.elite_size
pool = random.sample(matingpool, len(matingpool))
for i in range(0,self.elite_size):
children.append(matingpool[i])
for i in range(0, length):
child = self.breed(pool[i], pool[len(matingpool)-i-1])
children.append(child)
return children
def mutate(self, individual):
for swapped in range(len(individual)):
if(random.random() < self.mutation_rate):
swapWith = int(random.random() * len(individual))
city1 = individual[swapped]
city2 = individual[swapWith]
individual[swapped] = city2
individual[swapWith] = city1
return individual
def mutate_population(self, population):
mutated_pop = []
for ind in range(0, len(population)):
mutated_ind = self.mutate(population[ind])
mutated_pop.append(mutated_ind)
return mutated_pop
def next_generation(self, current_gen):
pop_ranked = self.rank_routes(current_gen)
selectionResults = self.selection(pop_ranked)
matingpool = self.mating_pool(current_gen, selectionResults)
children = self.breed_population(matingpool)
next_generation = self.mutate_population(children)
return next_generation
def genetic_algorithm(self, population):
pop = self.initial_population(population)
for i in range(0, self.generations):
@progressbar
def progress_func():
progress = i/self.generations
text = 'calculating best route'
return progress, text
pop = self.next_generation(pop)
best_route_index = self.rank_routes(pop)[0][0]
best_route = pop[best_route_index]
for o_key in self.origins_file.keys():
pivot = best_route.index(o_key)
list1 = best_route[pivot:]
list2 = best_route[:pivot]
ordered_best_route = list1+list2
return ordered_best_route
def exclusion():
x = input('Exclude Locations:')
excluded_list = [j.strip() for j in x.split(',')]
return excluded_list
def route_formatted(best_route_list):
best_route_string = ''
for item in best_route_list:
if best_route_list.index(item) != len(best_route_list)-1:
best_route_string += str(item) + ' -> '
else:
best_route_string += str(item)
return best_route_string
def interactive_mapping(graph_path, origins_file, destinations_file, best_route_list, nearest_node_file):
print('\nloading graph...\n')
graph = ox.io.load_graphml(graph_path)
print('\nGenerating Interactive Map...\n')
o_lat = [origins_file[key]['lat'] for key in origins_file.keys()][0]
o_lon = [origins_file[key]['lon'] for key in origins_file.keys()][0]
o_pop = [key for key in origins_file.keys()][0]
m = folium.Map(location= [o_lat, o_lon],tiles= 'OpenStreetMap', zoom_start=10)
folium.Marker(location=[o_lat, o_lon],popup=o_pop).add_to(m)
for key in destinations_file.keys():
d_lat = destinations_file[key]['lat']
d_lon = destinations_file[key]['lon']
folium.CircleMarker(location=[d_lat, d_lon], radius=5, color='red', fill_color='red', fill_opacity=1, popup=key).add_to(m)
for i in range(0, len(best_route_list)):
fromCity = best_route_list[i]
toCity = None
if i + 1 < len(best_route_list):
toCity = best_route_list[i + 1]
else:
toCity = best_route_list[0]
o_n = nearest_node_file[fromCity]
d_n = nearest_node_file[toCity]
route = nx.shortest_path(G=graph, source=o_n, target=d_n, weight='length')
ox.folium.plot_route_folium(graph, route, route_map=m)
return m
def main():
excluded_list = exclusion()
origins_json = os.path.join('.', 'database', 'origins.json')
destinations_json = os.path.join('.', 'database', 'destinations.json')
nearest_node_json = os.path.join('.', 'database', 'nearest_node.json')
graph_path = os.path.join('.', 'database', 'graph.graphml')
with open(origins_json) as o_file:
origins_file= json.load(o_file)
with open(destinations_json) as d_file:
destinations_file= json.load(d_file)
with open(nearest_node_json) as n_file:
nearest_node_file= json.load(n_file)
city_list = []
for i in origins_file.keys():
city_list.append(i)
for j in destinations_file.keys():
if j not in excluded_list:
city_list.append(j)
pop_size = 200
elite_size=40
mutation_rate=0.01
generations=1000
routing = delivery_routing(origins_file, destinations_file, pop_size, elite_size, mutation_rate, generations)
best_route_list = routing.genetic_algorithm(city_list)
best_route = route_formatted(best_route_list)
interactive_map = interactive_mapping(graph_path, origins_file, destinations_file, best_route_list, nearest_node_file)
return best_route, interactive_map
if __name__ == '__main__':
main()
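# --- Hedged usage sketch (not part of the original module) ---
# A minimal illustration of the pipeline above using a hypothetical toy
# distance table; the location names and distances are invented for
# demonstration only.
# origins = {'depot': {'r_dist': {'a': 1.0, 'b': 2.0}}}
# destinations = {'a': {'r_dist': {'b': 1.5, 'depot': 1.0}},
#                 'b': {'r_dist': {'depot': 2.0, 'a': 1.5}}}
# routing = delivery_routing(origins, destinations, pop_size=10,
#                            elite_size=2, mutation_rate=0.01, generations=50)
# print(route_formatted(routing.genetic_algorithm(['depot', 'a', 'b'])))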
|
{"hexsha": "7586a4d4c3a665fba3ff36df4b47a01430e83a6b", "size": 9363, "ext": "py", "lang": "Python", "max_stars_repo_path": "deliveryrouting/delivery_routing.py", "max_stars_repo_name": "balakumaran247/delivery_routing", "max_stars_repo_head_hexsha": "be1dbc19d567d917f2b9991b608a732d2e77ab3c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "deliveryrouting/delivery_routing.py", "max_issues_repo_name": "balakumaran247/delivery_routing", "max_issues_repo_head_hexsha": "be1dbc19d567d917f2b9991b608a732d2e77ab3c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "deliveryrouting/delivery_routing.py", "max_forks_repo_name": "balakumaran247/delivery_routing", "max_forks_repo_head_hexsha": "be1dbc19d567d917f2b9991b608a732d2e77ab3c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.6900826446, "max_line_length": 130, "alphanum_fraction": 0.6191391648, "include": true, "reason": "import numpy,import networkx", "num_tokens": 2094}
|
# ~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
# MIT License
#
# Copyright (c) 2022 Nathan Juraj Michlo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
from typing import Optional
from typing import Sequence
from typing import Tuple
import numpy as np
from disent.dataset.data import GroundTruthData
from disent.dataset.sampling import BaseDisentSampler
from disent.dataset.util.state_space import StateSpace
from disent.util.jit import try_njit
# ========================================================================= #
# Pretend We Are Walking Ground-Truth Factors Randomly #
# ========================================================================= #
class GroundTruthRandomWalkSampler(BaseDisentSampler):
def uninit_copy(self) -> 'GroundTruthRandomWalkSampler':
return GroundTruthRandomWalkSampler(
num_samples=self._num_samples,
p_dist_max=self._p_dist_max,
n_dist_max=self._n_dist_max,
)
def __init__(
self,
num_samples: int = 3,
p_dist_max: int = 8,
n_dist_max: int = 32,
):
super().__init__(num_samples=num_samples)
# checks
assert num_samples in {1, 2, 3}, f'num_samples ({repr(num_samples)}) must be 1, 2 or 3'
# save hparams
self._num_samples = num_samples
self._p_dist_max = p_dist_max
self._n_dist_max = n_dist_max
# dataset variable
self._state_space: Optional[StateSpace] = None
def _init(self, dataset: GroundTruthData):
assert isinstance(dataset, GroundTruthData), f'dataset must be an instance of {repr(GroundTruthData.__name__)}, got: {repr(dataset)}'
self._state_space = dataset.state_space_copy()
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - #
# Sampling #
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - #
def _sample_idx(self, idx) -> Tuple[int, ...]:
if self._num_samples == 1:
return (idx,)
elif self._num_samples == 2:
p_dist = np.random.randint(1, self._p_dist_max + 1)
pos = _random_walk(idx, p_dist, self._state_space.factor_sizes)
return (idx, pos)
elif self._num_samples == 3:
p_dist = np.random.randint(1, self._p_dist_max + 1)
n_dist = np.random.randint(1, self._n_dist_max + 1)
pos = _random_walk(idx, p_dist, self._state_space.factor_sizes)
neg = _random_walk(pos, n_dist, self._state_space.factor_sizes)
return (idx, pos, neg)
else:
raise RuntimeError
# ========================================================================= #
# Helper #
# ========================================================================= #
def _random_walk(idx: int, dist: int, factor_sizes: np.ndarray) -> int:
# random walk
pos = np.array(np.unravel_index(idx, factor_sizes), dtype=int) # much faster than StateSpace.idx_to_pos, we don't need checks!
for _ in range(dist):
_walk_nearby_inplace(pos, factor_sizes)
idx = np.ravel_multi_index(pos, factor_sizes) # much faster than StateSpace.pos_to_idx, we don't need checks!
# done!
return int(idx)
@try_njit()
def _walk_nearby_inplace(pos: np.ndarray, factor_sizes: Sequence[int]) -> None:
# try to shift any single factor by 1 or -1
while True:
f_idx = np.random.randint(0, len(factor_sizes))
cur = pos[f_idx]
# walk random factor value
if np.random.random() < 0.5:
nxt = max(cur - 1, 0)
else:
nxt = min(cur + 1, factor_sizes[f_idx] - 1)
# exit if different
if cur != nxt:
break
# update the position
pos[f_idx] = nxt
# ========================================================================= #
# END #
# ========================================================================= #
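# Hedged standalone example of the helper above (factor sizes are invented):
# with factor_sizes = np.array([3, 4]), index 5 unravels to pos (1, 1); each
# step of _random_walk nudges one factor by +/-1 within bounds, so
# _random_walk(5, dist=2, factor_sizes=np.array([3, 4])) returns a valid index
# in [0, 12) that is a 2-step neighbour of index 5.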
|
{"hexsha": "37f5c29786319c8796decafd72ba2fb5aa3bcd78", "size": 5325, "ext": "py", "lang": "Python", "max_stars_repo_path": "disent/dataset/sampling/_groundtruth__walk.py", "max_stars_repo_name": "nmichlo/msc-research", "max_stars_repo_head_hexsha": "625e57eca77bbfbc4728ccebdb0733e1613bd258", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-31T21:20:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T21:20:30.000Z", "max_issues_repo_path": "disent/dataset/sampling/_groundtruth__walk.py", "max_issues_repo_name": "nmichlo/msc-research", "max_issues_repo_head_hexsha": "625e57eca77bbfbc4728ccebdb0733e1613bd258", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "disent/dataset/sampling/_groundtruth__walk.py", "max_forks_repo_name": "nmichlo/msc-research", "max_forks_repo_head_hexsha": "625e57eca77bbfbc4728ccebdb0733e1613bd258", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.2790697674, "max_line_length": 151, "alphanum_fraction": 0.5502347418, "include": true, "reason": "import numpy", "num_tokens": 1247}
|
import numpy as np
import pandas as pd
import requests
import sys
import csv
from scipy.spatial.distance import cdist
from scipy.spatial import distance
from pandas.core.frame import DataFrame
import matplotlib.pyplot as plt
LIVE_URL = "http://environment.data.gov.uk/flood-monitoring/id/stations?parameter=rainfall"
ARCHIVE_URL = "http://environment.data.gov.uk/flood-monitoring/archive/"
RAINFALL_URL = "https://environment.data.gov.uk/flood-monitoring/id/measures?parameter=rainfall"
l = requests.get(LIVE_URL)
location = l.json()
location_csv = pd.read_csv(location['meta']['hasFormat'][0])
location_1 = location_csv.loc[:,['stationReference','long','lat','northing','easting']]
location_1 = location_1.drop_duplicates(subset='stationReference')
# find the station nearest to the given easting/northing coordinates
def get_stationReference(easting,northing):
pc = np.array([[easting,northing]])
x_easting = location_1['easting'].values
y_northing = location_1['northing'].values
xy_ndarray = np.stack([x_easting, y_northing], axis=1)
dist = cdist(pc,xy_ndarray,'euclidean').reshape(len(xy_ndarray),)
location_1['dist'] = dist
#location_1['threshold'] = 10000
stations = location_1['stationReference'][location_1['dist'].idxmin()]
return stations
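# Hedged usage example (coordinates are invented): with easting/northing
# (530000, 180000), cdist computes the euclidean distance from that point to
# every station and idxmin picks the closest one, e.g.
#   nearest = get_stationReference(530000, 180000)
# Note that the function also stores the distances in the module-level
# location_1 frame as a side effect.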
Need = pd.read_csv('./FloodHistory.csv', usecols=['DATE', 'easting', 'northing', 'FloodProb'])
Date = list(Need['DATE'])
Easting = list(Need['easting'])
Northing = list(Need['northing'])
FloodProb = list(Need['FloodProb'])
Station = []
Rainfall = []
for i in range(len(Easting)):
stations = get_stationReference(Easting[i], Northing[i])
Station.append(stations)
ARCHIVE_CSV = 'http://environment.data.gov.uk/flood-monitoring/archive/readings-'+str(Date[i])+'.csv'
archive_csv = pd.read_csv(ARCHIVE_CSV,low_memory=False)
archive = archive_csv.loc[(archive_csv['measure'].str.contains('rainfall'))&(archive_csv['measure'].str.contains('t-15_min-mm'))&(archive_csv['measure'].str.contains(str(Station[i])))]
#archive_station = archive.loc[archive['measure'].str.startswith('http://environment.data.gov.uk/flood-monitoring/id/measures/'+str(Station[i]), na=False)]
sum_value = archive['value'].astype(np.float64).sum()
Rainfall.append(sum_value)
plt.xlabel('Rainfall')
plt.ylabel('FloodProb')
plt.scatter(Rainfall, FloodProb)
plt.show()
|
{"hexsha": "16d1321430ca0771cc96bdc69c813656f704a9cd", "size": 2403, "ext": "py", "lang": "Python", "max_stars_repo_path": "flood_tool/try_plot_rainfall_and_flood.py", "max_stars_repo_name": "oahul14/FloodRisk", "max_stars_repo_head_hexsha": "286fa1b183258befaffe81c6a4edca7ff490d04d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "flood_tool/try_plot_rainfall_and_flood.py", "max_issues_repo_name": "oahul14/FloodRisk", "max_issues_repo_head_hexsha": "286fa1b183258befaffe81c6a4edca7ff490d04d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "flood_tool/try_plot_rainfall_and_flood.py", "max_forks_repo_name": "oahul14/FloodRisk", "max_forks_repo_head_hexsha": "286fa1b183258befaffe81c6a4edca7ff490d04d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.9692307692, "max_line_length": 189, "alphanum_fraction": 0.7128589263, "include": true, "reason": "import numpy,from scipy", "num_tokens": 614}
|
import numpy
import os
import setuptools
from setuptools import setup, find_packages
import setuptools.command.develop
import setuptools.command.build_py
from tools import gitsemver
with open('README.md') as f:
longDescription = f.read()
with open('requirements.txt') as f:
required = f.read().splitlines()
version = gitsemver.getVersion()
with open('hardware_tools/version.py', 'w') as file:
file.write(f'version = \'{version}\'\n')
file.write(f'versionFull = \'{version.fullStr()}\'\n')
cwd = os.path.dirname(os.path.abspath(__file__))
try:
from Cython.Build import cythonize
except ImportError:
def cythonize(*args, **kwargs):
from Cython.Build import cythonize
return cythonize(*args, **kwargs)
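# Hedged note on the fallback above: if Cython is not importable at setup time,
# the stub defers the import until the extension list is actually built, by
# which point build dependencies may have been installed.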
def findPyx(path='.'):
pyxFiles = []
for root, _, filenames in os.walk(path):
for file in filenames:
if file.endswith('.pyx'):
pyxFiles.append(os.path.join(root, file))
return pyxFiles
def findCythonExtensions(path='.'):
extensions = cythonize(findPyx(path), language_level=3)
for ext in extensions:
ext.include_dirs = [numpy.get_include()]
return extensions
class BuildPy(setuptools.command.build_py.build_py):
def run(self):
setuptools.command.build_py.build_py.run(self)
class Develop(setuptools.command.develop.develop):
def run(self):
setuptools.command.develop.develop.run(self)
setup(
name='hardware-tools',
version=version,
description='A library for automating hardware development and testing',
long_description=longDescription,
long_description_content_type='text/markdown',
license='MIT',
ext_modules=findCythonExtensions(),
packages=find_packages(),
package_data={'hardware_tools': []},
install_requires=required,
tests_require=[],  # json is in the standard library, not a pip dependency
test_suite='tests',
scripts=[],
author='Bradley Davis',
author_email='me@bradleydavis.tech',
url='https://github.com/WattsUp/hardware-tools',
classifiers=[
'Programming Language :: Python :: 3',
'Operating System :: OS Independent',
'Development Status :: 2 - Pre-Alpha',
'License :: OSI Approved :: MIT License',
'Intended Audience :: Developers',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)',
'Topic :: Scientific/Engineering :: Information Analysis',
'Topic :: Scientific/Engineering :: Visualization',
],
python_requires='>=3.6',
include_package_data=True,
cmdclass={
'build_py': BuildPy,
'develop': Develop,
},
zip_safe=False,
)
|
{"hexsha": "95b1212ba2018d57bf0440b64465521c4805c1e3", "size": 2758, "ext": "py", "lang": "Python", "max_stars_repo_path": "setup.py", "max_stars_repo_name": "WattsUp/hardware-tools", "max_stars_repo_head_hexsha": "d9dc01429369bc071381cb25af7b984195aff8e5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "setup.py", "max_issues_repo_name": "WattsUp/hardware-tools", "max_issues_repo_head_hexsha": "d9dc01429369bc071381cb25af7b984195aff8e5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "setup.py", "max_forks_repo_name": "WattsUp/hardware-tools", "max_forks_repo_head_hexsha": "d9dc01429369bc071381cb25af7b984195aff8e5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.6559139785, "max_line_length": 80, "alphanum_fraction": 0.6809282088, "include": true, "reason": "import numpy", "num_tokens": 638}
|
// A multi-threaded inclusive_scan with std::accumulate "lookahead"
// Copyright 2019 Jeff Trull <edaskel@att.net>
#ifndef INCLUSIVE_SCAN_MT_HPP
#define INCLUSIVE_SCAN_MT_HPP
#include "benchmark_scan.hpp"
#include "serial_scan.hpp"
#include <boost/asio.hpp>
#include <iterator>
#include <future>
template <typename InputIt, typename OutputIt, typename T = typename std::iterator_traits<InputIt>::value_type>
std::pair<OutputIt, T>
inclusive_scan_mt_impl(InputIt start, InputIt end, OutputIt d_start, T init = T{})
{
// use n_threads global (set by benchmarking code) to spawn partitions
std::size_t sz = std::distance(start, end);
/*
if (sz < 40000) // arbitrary heuristic based on experiment
// faster just to run sequentially
return inclusive_scan_seq_impl<InputIt, OutputIt, T>(start, end, d_start, init);
*/
std::size_t psize = sz / n_threads;
std::vector<std::promise<T>> part_sum_prom(n_threads - 1);
boost::asio::thread_pool tpool(n_threads - 1);
for (int p = 0; p < n_threads-1; ++p)
{
InputIt p_end = start;
std::advance(p_end, psize);
boost::asio::post(
tpool,
[p, start, p_end, d_start, &part_sum_prom, &init](){
T p_result = std::accumulate(start, p_end, T{});
// wait for result of previous partition's accumulate
T acc_so_far;
if (p == 0)
{
acc_so_far = init;
} else {
acc_so_far = part_sum_prom[p-1].get_future().get();
}
// store for use by next higher partition
part_sum_prom[p].set_value(acc_so_far + p_result);
// lastly, store the local intermediate results
inclusive_scan_seq<InputIt, OutputIt, T>(start, p_end, d_start, acc_so_far);
});
start = p_end;
std::advance(d_start, psize);
}
// the last partition is special:
// - it may not have exactly the same size as the others due to rounding
// - there is no need to do the "accumulate" part
// - we do it directly in this thread
T acc_so_far = part_sum_prom.back().get_future().get();
auto result = inclusive_scan_seq_impl<InputIt, OutputIt, T>(start, end, d_start, acc_so_far);
// a possible future improvement: return a future holding the final accumulate
// result so calls can be chained in the chunked case
tpool.join();
return result;
}
template <typename InputIt, typename OutputIt, typename T = typename std::iterator_traits<InputIt>::value_type>
OutputIt
inclusive_scan_mt(InputIt start, InputIt end, OutputIt d_start, T init = T{})
{
return inclusive_scan_mt_impl(start, end, d_start, init).first;
}
#endif // INCLUSIVE_SCAN_MT_HPP
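// Hedged usage sketch (assumes the benchmarking code has set the global
// n_threads; values below are invented):
//   std::vector<int> in{1, 2, 3, 4}, out(4);
//   n_threads = 2;
//   inclusive_scan_mt(in.begin(), in.end(), out.begin());
//   // out now holds {1, 3, 6, 10}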
|
{"hexsha": "f0e200e0cf55eb24033d6dcd18be5de356bc1f20", "size": 2816, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "inclusive_scan_mt.hpp", "max_stars_repo_name": "jefftrull/MyParallelAlg", "max_stars_repo_head_hexsha": "87ca7ce89152c70c46ffabad897f4d8ed82a6e24", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "inclusive_scan_mt.hpp", "max_issues_repo_name": "jefftrull/MyParallelAlg", "max_issues_repo_head_hexsha": "87ca7ce89152c70c46ffabad897f4d8ed82a6e24", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "inclusive_scan_mt.hpp", "max_forks_repo_name": "jefftrull/MyParallelAlg", "max_forks_repo_head_hexsha": "87ca7ce89152c70c46ffabad897f4d8ed82a6e24", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.2, "max_line_length": 111, "alphanum_fraction": 0.6392045455, "num_tokens": 688}
|
"""
This figure is meant to represent the neuronal event-related model
with a coefficient of +1 for Faces and -2 for Objects.
"""
import pylab
import numpy as np
from sympy import Symbol, Heaviside, lambdify
ta = [0,4,8,12,16]; tb = [2,6,10,14,18]
ba = Symbol('ba'); bb = Symbol('bb'); t = Symbol('t')
fa = sum([Heaviside(t-_t) for _t in ta]) * ba
fb = sum([Heaviside(t-_t) for _t in tb]) * bb
N = fa+fb
Nn = N.subs(ba,1)
Nn = Nn.subs(bb,-2)
Nn = lambdify(t, Nn)
tt = np.linspace(-1,21,1201)
pylab.step(tt, [Nn(_t) for _t in tt])
a = pylab.gca()
a.set_ylim([-5.5,1.5])
a.set_ylabel('Neuronal (cumulative)')
a.set_xlabel('Time')
pylab.show()
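# Hedged worked check of the model above: at t = 3 only the Face onset at
# t = 0 (coefficient +1) and the Object onset at t = 2 (coefficient -2) have
# fired, so the cumulative neuronal value is 1 - 2 = -1.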
|
{"hexsha": "e14c7562ddddf92681ee9f898ae4b8297675bc14", "size": 647, "ext": "py", "lang": "Python", "max_stars_repo_path": "doc/users/plots/neuronal_event.py", "max_stars_repo_name": "yarikoptic/NiPy-OLD", "max_stars_repo_head_hexsha": "8759b598ac72d3b9df7414642c7a662ad9c55ece", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-08-22T16:14:45.000Z", "max_stars_repo_stars_event_max_datetime": "2015-08-22T16:14:45.000Z", "max_issues_repo_path": "doc/users/plots/neuronal_event.py", "max_issues_repo_name": "yarikoptic/NiPy-OLD", "max_issues_repo_head_hexsha": "8759b598ac72d3b9df7414642c7a662ad9c55ece", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/users/plots/neuronal_event.py", "max_forks_repo_name": "yarikoptic/NiPy-OLD", "max_forks_repo_head_hexsha": "8759b598ac72d3b9df7414642c7a662ad9c55ece", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.6060606061, "max_line_length": 66, "alphanum_fraction": 0.6445131376, "include": true, "reason": "import numpy,from sympy", "num_tokens": 240}
|
The Looking Glass is a custom picture framing shop located inside the Pacific Auction Company facility.
Get 25% off right now at http://greenmachinedavis.com/coupon_lookingglass.html Little Green Coupon Machine
Follow us on http://www.facebook.com/home.php#!/pages/LookingGlassCustomFraming/199585876730294?skwall Facebook
|
{"hexsha": "747efcd516367369c586ad57d5f2e52adddbd9b4", "size": 327, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "lab/davisWiki/The_Looking_Glass.f", "max_stars_repo_name": "voflo/Search", "max_stars_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lab/davisWiki/The_Looking_Glass.f", "max_issues_repo_name": "voflo/Search", "max_issues_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lab/davisWiki/The_Looking_Glass.f", "max_forks_repo_name": "voflo/Search", "max_forks_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.3333333333, "max_line_length": 110, "alphanum_fraction": 0.8256880734, "num_tokens": 75}
|
\section{Abstract}
\begin{frame}
\begin{abstract}
Thermodynamics started with steam engines. At the end of the 19th century, Ludwig Boltzmann introduced the idea of statistical mechanics, i.e. the idea that macroscopic behavior would \emph{emerge} from complex microscopic interactions by averaging (coarse graining) over defined ensembles, recovering the previously discovered equations of state of the earlier thermodynamics. In the last three quarters of a century, the development of information theory produced a different description of entropy, which, up to a constant, \emph{seems to provide a connection} to thermodynamic entropy.
\end{abstract}
\end{frame}
\begin{frame}
\begin{abstract}
A different problem, Maxwell’s demon, has also stymied physicists for about the same time. The solution of this problem has, in the last five years or so, shown that the correspondence is not merely apparent: it rests on a completely different approach than Boltzmann’s, one that uses Maxwell’s demon as the tool to unite physical entropy and informational entropy.\vskip1ex
In this set of lectures we will describe these two different approaches and, hopefully, show how physical entropy is actually informational entropy.
\end{abstract}
\end{frame}
|
{"hexsha": "cc9f95f1c5300bbb66af628d6c6d3e691e261486", "size": 1206, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "0) CommonTeX_Code/abstract.tex", "max_stars_repo_name": "bobksgithub/Info2Thermo_Lectures", "max_stars_repo_head_hexsha": "7766dda1f32dd322962397285f6a47cc46be13b2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "0) CommonTeX_Code/abstract.tex", "max_issues_repo_name": "bobksgithub/Info2Thermo_Lectures", "max_issues_repo_head_hexsha": "7766dda1f32dd322962397285f6a47cc46be13b2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "0) CommonTeX_Code/abstract.tex", "max_forks_repo_name": "bobksgithub/Info2Thermo_Lectures", "max_forks_repo_head_hexsha": "7766dda1f32dd322962397285f6a47cc46be13b2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 75.375, "max_line_length": 560, "alphanum_fraction": 0.8067993367, "num_tokens": 260}
|
[STATEMENT]
lemma real_neg_pp_np_help: "\<And>x. f x \<le> (0::real) \<Longrightarrow> np f x = -f x \<and> pp f x = 0"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>x. f x \<le> 0 \<Longrightarrow> np f x = - f x \<and> pp f x = 0
[PROOF STEP]
(*<*)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>x. f x \<le> 0 \<Longrightarrow> np f x = - f x \<and> pp f x = 0
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>x. f x \<le> 0 \<Longrightarrow> np f x = - f x \<and> pp f x = 0
[PROOF STEP]
fix x
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>x. f x \<le> 0 \<Longrightarrow> np f x = - f x \<and> pp f x = 0
[PROOF STEP]
assume le: "f x \<le> 0"
[PROOF STATE]
proof (state)
this:
f x \<le> 0
goal (1 subgoal):
1. \<And>x. f x \<le> 0 \<Longrightarrow> np f x = - f x \<and> pp f x = 0
[PROOF STEP]
hence "pp f x = 0 "
[PROOF STATE]
proof (prove)
using this:
f x \<le> 0
goal (1 subgoal):
1. pp f x = 0
[PROOF STEP]
by (cases "f x < 0") (auto simp add: positive_part_def)
[PROOF STATE]
proof (state)
this:
pp f x = 0
goal (1 subgoal):
1. \<And>x. f x \<le> 0 \<Longrightarrow> np f x = - f x \<and> pp f x = 0
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
pp f x = 0
goal (1 subgoal):
1. \<And>x. f x \<le> 0 \<Longrightarrow> np f x = - f x \<and> pp f x = 0
[PROOF STEP]
from le
[PROOF STATE]
proof (chain)
picking this:
f x \<le> 0
[PROOF STEP]
have "np f x = -f x"
[PROOF STATE]
proof (prove)
using this:
f x \<le> 0
goal (1 subgoal):
1. np f x = - f x
[PROOF STEP]
by (cases "f x < 0") (auto simp add: negative_part_def)
[PROOF STATE]
proof (state)
this:
np f x = - f x
goal (1 subgoal):
1. \<And>x. f x \<le> 0 \<Longrightarrow> np f x = - f x \<and> pp f x = 0
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
pp f x = 0
np f x = - f x
[PROOF STEP]
show "np f x = -f x \<and> pp f x = 0"
[PROOF STATE]
proof (prove)
using this:
pp f x = 0
np f x = - f x
goal (1 subgoal):
1. np f x = - f x \<and> pp f x = 0
[PROOF STEP]
by fast
[PROOF STATE]
proof (state)
this:
np f x = - f x \<and> pp f x = 0
goal:
No subgoals!
[PROOF STEP]
qed
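(* Hedged paraphrase: assuming the usual definitions pp f x = max (f x) 0 and
   np f x = max (- f x) 0, the hypothesis f x <= 0 forces the positive part to
   vanish and the negative part to equal - f x, which is exactly the goal. *)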
|
{"llama_tokens": 1013, "file": "Integration_RealRandVar", "length": 14}
|
import os
from gensim import models
import json
import numpy as np
from keras.models import *
from keras.layers import *
import keras
from sklearn.metrics import *
import keras.backend as K
import pandas as pd
import argparse
pd.options.mode.chained_assignment = None
def readJson(filename):
print "Reading [%s]..." % (filename)
with open(filename) as inputFile:
jsonData = json.load(inputFile)
print "Finished reading [%s]." % (filename)
return jsonData
def filterWords(questionRow, caption):
labels = [int(l) for l in questionRow['wordLabels'].split(' ')]
questionWords = questionRow['question'].split(' ')
captionWords = caption.split(' ')
newLabels = []
newQuestionWords = []
newCaptionWords = []
for wi, w in enumerate(questionWords):
if w in w2v:
if w.lower() not in excludeWordList:
newQuestionWords.append(w)
newLabels.append(labels[wi])
for w in captionWords:
if w in w2v:
if w.lower() not in excludeWordList:
newCaptionWords.append(w)
return newLabels, newQuestionWords, newCaptionWords
def extractBowFeatures(questionRows, totalLength, maxLength):
X = []
y = []
for i,questionRow in questionRows.iterrows():
imageFilename = questionRow['image']
caption = imageCaptions[imageFilename]
labels, questionWords, captionWords = filterWords(questionRow, caption)
feature = np.zeros(len(word_index))
feature1 = np.zeros(len(word_index))
labelFeature = np.zeros(len(word_index))
relevant = True
for li,l in enumerate(labels):
if (l == 0):
labelFeature[word_index[questionWords[li]]] = 1
relevant = False
if relevant:
labelFeature[0] = 1
for ci,c in enumerate(captionWords):
feature[word_index[c]] += 1
# feature[word_index[c]] = max(feature[word_index[c]], 1)
for ci,c in enumerate(questionWords):
feature[word_index[c]] += 1
# feature1[word_index[c]] = max(feature1[word_index[c]], 1)
# feature = feature1 + feature
X.append(feature)
y.append(labelFeature)
return np.asarray(X),np.asarray(y)
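# Hedged worked example (hypothetical vocabulary): with word_index =
# {'RELEVANT': 0, 'dog': 1, 'park': 2}, caption "dog park" and question "dog"
# accumulate into the count vector [0, 2, 1]; if the question word 'dog' were
# labelled irrelevant (label 0), labelFeature[1] would be set to 1 instead of
# labelFeature[0].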
def extractFeatures(questionRows, totalLength, maxLength):
# print '\tTotal Question Rows: [%d]' % (len(questionRows))
X_captions = []
x_questions = []
y = []
for i,questionRow in questionRows.iterrows():
imageFilename = questionRow['image']
caption = imageCaptions[imageFilename]
labels, questionWords, captionWords = filterWords(questionRow, caption)
captionFeature = np.zeros((maxLength, wordVectorSize))
questionFeature = np.zeros((maxLength, wordVectorSize))
labelFeature = np.zeros(len(word_index))
relevant = True
for li,l in enumerate(labels):
if (l == 0):
labelFeature[word_index[questionWords[li]]] = 1
relevant = False
if relevant:
labelFeature[0] = 1
for ci,c in enumerate(captionWords):
captionFeature[ci] = w2v[c]
for ci,c in enumerate(questionWords):
questionFeature[ci] = w2v[c]
X_captions.append(captionFeature)
x_questions.append(questionFeature)
y.append(labelFeature)
return np.asarray(x_questions),np.asarray(X_captions),np.asarray(y)
def extractAvgFeatures(questionRows, totalLength, maxLength):
# print '\tTotal Question Rows: [%d]' % (len(questionRows))
X = []
y = []
for i,questionRow in questionRows.iterrows():
imageFilename = questionRow['image']
caption = imageCaptions[imageFilename]
labels, questionWords, captionWords = filterWords(questionRow, caption)
captionFeature = []
questionFeature = []
labelFeature = np.zeros(len(word_index))
relevant = True
for li,l in enumerate(labels):
if (l == 0):
labelFeature[word_index[questionWords[li]]] = 1
relevant = False
if relevant:
labelFeature[0] = 1
for ci,c in enumerate(captionWords):
captionFeature.append(w2v[c])
for ci,c in enumerate(questionWords):
questionFeature.append(w2v[c])
captionFeature=sum(captionFeature)/float(len(captionFeature))
questionFeature=sum(questionFeature)/float(len(questionFeature))
X.append(np.concatenate((questionFeature,captionFeature),0))
y.append(labelFeature)
return np.asarray(X),np.asarray(y)
def extractVocab(rowSet):
word_index = {'RELEVANT':0}
index_word = {0:'RELEVANT'}
for r in rowSet:
for i,questionRow in r.iterrows():
imageFilename = questionRow['image']
caption = imageCaptions[imageFilename]
questionWords = questionRow['question'].split(' ')
for w in questionWords:
if (w in w2v) and (w not in word_index) and (w not in excludeWordList):
word_index[w] = len(word_index)
index_word[word_index[w]] = w
captionWords = caption.split(' ')
for w in captionWords:
if (w in w2v) and (w not in word_index) and (w not in excludeWordList):
word_index[w] = len(word_index)
index_word[word_index[w]] = w
return word_index, index_word
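# Hedged note on extractVocab: it builds a bidirectional word/index mapping
# over all question and caption words that exist in w2v and are not excluded,
# reserving index 0 for the synthetic RELEVANT class.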
parser = argparse.ArgumentParser()
parser.add_argument('-d', action='store', dest='dataFile', help='Data file')
parser.add_argument('-o', action='store', dest='outputPath', help='Output path')
parser.add_argument('-e', action='store', dest='numberOfEpochs', help='Epochs', type=int)
parser.add_argument('-f', action='store', dest='numberOfFolds', help='Folds', type=int)
parser.add_argument('-b', action='store', dest='baseDataDirectory', help='Base directory')
results = parser.parse_args()
# modelTypes = {'all-bidirect':50, 'all':50, 'bow':50,'capOnly':5,'quesOnly':5}
# modelTypes = ['all-bidirect','all','avg','bow']
modelTypes = ['capOnly', 'quesOnly']
dataFile = results.dataFile
numberOfEpochs = results.numberOfEpochs
numberOfFolds = results.numberOfFolds
baseDataDirectory = results.baseDataDirectory
word2VecPath = os.path.join(baseDataDirectory, 'word2vec/google-news/GoogleNews-vectors-negative300.bin')
captionFile = os.path.join(baseDataDirectory, 'cvqa/imagecaptions.json')
wordVectorSize = 300
maxLength = 20
totalLength = maxLength * 2
excludeWordList = ['is','a','the','what','that','to','who','why']
print "Loading Word2Vec Dictionary. This may take a long time..."
w2v = models.Word2Vec.load_word2vec_format(word2VecPath, binary=True)
print "Loading Captions generated by a Pre-Trained Captioning Model for Images..."
imageCaptions = readJson(captionFile)
print "Loading Questions..."
allRows = pd.read_csv(dataFile)
print '\tAll rows: [%d]' % (len(allRows))
print "Removing Questions Without Matching Captions..."
allRows = allRows[allRows['image'].isin(imageCaptions)]
print '\tAll rows: [%d]' % (len(allRows))
print "Extracting Vocab..."
word_index, index_word = extractVocab([allRows])
print 'Vocab size: [%d]' % (len(word_index))
print 'Max Sequence Length: [%d]' % (maxLength)
print 'Total Sequence Length: [%d]' % (totalLength)
for fold in range(0,numberOfFolds):
allRows = allRows.sample(frac=1)
split = len(allRows)/2
trainRows = allRows[:split]
testRows = allRows[split:]
print '\tTraining rows: [%d]' % (len(trainRows))
print '\tTest rows: [%d]' % (len(testRows))
for modelType in modelTypes:
print "Running fold: [%d] model: [%s]" % (fold, modelType)
if (modelType == "bow"):
print '\tExtracting BOW Training Features...'
X_train, y_train = extractBowFeatures(trainRows, totalLength, maxLength)
print '\tExtracting BOW Test Features...'
X_test, y_test = extractBowFeatures(testRows, totalLength, maxLength)
elif (modelType == "avg"):
print '\tExtracting Avg Training Features...'
X_train, y_train = extractAvgFeatures(trainRows, totalLength, maxLength)
print '\tExtracting Avg Test Features...'
X_test, y_test = extractAvgFeatures(testRows, totalLength, maxLength)
else:
print '\tExtracting Training Features...'
X_questions_train, X_captions_train, y_train = extractFeatures(trainRows, totalLength, maxLength)
print '\tExtracting Test Features...'
X_questions_test, X_captions_test, y_test = extractFeatures(testRows, totalLength, maxLength)
# print 'Total data samples: [%d]' % (len(y_train) + len(y_test))
# print '\tTraining data size: [%d]' % (len(y_train))
# print '\tTest data size: [%d]' % (len(y_test))
outputResultsFile = os.path.join(results.outputPath, "%s-%d-outputTestResults.csv" % (modelType, fold))
outputStatsFile = os.path.join(results.outputPath, "%s-%d-outputStats.csv" % (modelType, fold))
outputModelFile = os.path.join(results.outputPath, "%s-%d-model-weights.hd5" % (modelType, fold))
metrics = ['accuracy', 'precision','recall','fmeasure']
# bow: 200,150,100
# all: 150,150,100
if (modelType == "bow"):
decoder = Sequential()
decoder.add(Dense(300, input_dim=len(word_index), activation='relu'))
decoder.add(Dense(200, activation='relu'))
decoder.add(Dense(150, activation='relu'))
decoder.add(Dense(len(word_index), activation='softmax'))
decoder.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=metrics)
elif (modelType == "avg"):
decoder = Sequential()
decoder.add(Dense(200, input_dim=wordVectorSize*2, activation='relu'))
decoder.add(Dense(150, activation='relu'))
decoder.add(Dense(100, activation='relu'))
decoder.add(Dense(len(word_index), activation='softmax'))
decoder.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=metrics)
elif(modelType in ['capOnly','quesOnly']):
encoder_a = Sequential()
encoder_a.add(LSTM(200, input_shape=(maxLength,wordVectorSize)))
decoder = Sequential()
decoder.add(encoder_a)
decoder.add(Dense(150, activation='relu'))
decoder.add(Dense(100, activation='relu'))
decoder.add(Dense(len(word_index), activation='softmax'))
decoder.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=metrics)
elif (modelType == "all-bidirect"):
encoder_a = Sequential()
encoder_a.add(Bidirectional(LSTM(200), input_shape=(maxLength,wordVectorSize)))
encoder_b = Sequential()
encoder_b.add(Bidirectional(LSTM(200), input_shape=(maxLength,wordVectorSize)))
decoder = Sequential()
decoder.add(Merge([encoder_a, encoder_b], mode='concat'))
decoder.add(Dense(150, activation='relu'))
decoder.add(Dense(100, activation='relu'))
decoder.add(Dense(len(word_index), activation='softmax'))
decoder.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=metrics)
elif (modelType == "all"):
encoder_a = Sequential()
encoder_a.add(LSTM(200, input_shape=(maxLength,wordVectorSize)))
encoder_b = Sequential()
encoder_b.add(LSTM(150, input_shape=(maxLength,wordVectorSize)))
decoder = Sequential()
decoder.add(Merge([encoder_a, encoder_b], mode='concat'))
decoder.add(Dense(150, activation='relu'))
decoder.add(Dense(100, activation='relu'))
decoder.add(Dense(len(word_index), activation='softmax'))
decoder.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=metrics)
# if fold == 1:
# print(decoder.summary())
# print(decoder.get_config())
finalScores = []
def test():
names = ['loss','acc', 'precision', 'recall', 'fmeasure']
if (modelType in ['all','all-bidirect']):
scores = decoder.test_on_batch([X_questions_test, X_captions_test], y_test)
elif(modelType == 'quesOnly'):
scores = decoder.test_on_batch(X_questions_test, y_test)
elif(modelType == 'capOnly'):
scores = decoder.test_on_batch(X_captions_test, y_test)
elif(modelType in ["bow","avg"]):
scores = decoder.test_on_batch(X_test, y_test)
totalScores = dict(zip(decoder.metrics_names, scores))
global finalScores
finalScores.append(totalScores)
print '\nTest: ' + ' - '.join([n + ": " + str(float(totalScores[n])) for n in names])
class TestCallback(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
# print ''
test()
testCallback = TestCallback()
if (modelType in ['all','all-bidirect']):
decoder.fit([X_questions_train, X_captions_train],y_train, nb_epoch=numberOfEpochs, verbose=1, callbacks=[testCallback], batch_size=200)
elif(modelType == 'quesOnly'):
decoder.fit(X_questions_train,y_train, nb_epoch=numberOfEpochs, verbose=1, callbacks=[testCallback], batch_size=200)
elif(modelType == 'capOnly'):
decoder.fit(X_captions_train,y_train, nb_epoch=numberOfEpochs, verbose=1, callbacks=[testCallback], batch_size=200)
elif(modelType in ["bow","avg"]):
decoder.fit(X_train,y_train, nb_epoch=numberOfEpochs, verbose=1, callbacks=[testCallback], batch_size=200)
test()
if (modelType in ['all','all-bidirect']):
y_predict = decoder.predict_proba([X_questions_test, X_captions_test], verbose=0)
elif(modelType == 'quesOnly'):
y_predict = decoder.predict_proba(X_questions_test, verbose=0)
elif(modelType == 'capOnly'):
y_predict = decoder.predict_proba(X_captions_test, verbose=0)
elif(modelType in ["bow","avg"]):
y_predict = decoder.predict_proba(X_test, verbose=0)
y_predict_words = []
test_captions = []
index = 0
for _,t in testRows.iterrows():
best = np.argmax(y_predict[index])
imageFilename = t['image']
test_captions.append(imageCaptions[imageFilename])
y_predict_words.append(index_word[best])
index+=1
testRows['caption'] = pd.Series(test_captions, index=testRows.index)
testRows['predict'] = pd.Series(y_predict_words, index=testRows.index)
testRows.to_csv(outputResultsFile)
pd.DataFrame(finalScores).to_csv(outputStatsFile)
decoder.save_weights(outputModelFile)
|
{"hexsha": "b8507b6eee1b3ad59e75b6539c24cacb44eb0766", "size": 13258, "ext": "py", "lang": "Python", "max_stars_repo_path": "cvqa-r-nouns/task2-lstm-mlp-folds.py", "max_stars_repo_name": "andeeptoor/qpr-qe-datasets", "max_stars_repo_head_hexsha": "4359af17e7df335abe38a18d046f94f9cef57277", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "cvqa-r-nouns/task2-lstm-mlp-folds.py", "max_issues_repo_name": "andeeptoor/qpr-qe-datasets", "max_issues_repo_head_hexsha": "4359af17e7df335abe38a18d046f94f9cef57277", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cvqa-r-nouns/task2-lstm-mlp-folds.py", "max_forks_repo_name": "andeeptoor/qpr-qe-datasets", "max_forks_repo_head_hexsha": "4359af17e7df335abe38a18d046f94f9cef57277", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-06-09T01:03:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-05T11:45:06.000Z", "avg_line_length": 33.2280701754, "max_line_length": 139, "alphanum_fraction": 0.7182078745, "include": true, "reason": "import numpy", "num_tokens": 3486}
|
%!TEX root = main.tex
\section{Related Work}
\label{sec:related_work}
To the best of our knowledge, we are the first to formally address the problem of optimizing
the driver's strategy in ride-hailing platforms like Uber and Lyft.
Apart from
some recent popular-press articles that
offer, often contradictory, advice to ride-hailing drivers on how to maximize earnings, mostly via chasing surge~\cite{dont,tips},
the only relevant existing technical work
studies other aspects of ride-hailing.
Next, we discuss these works as well as some work related to optimization problems
for taxi fleets.
%Also related can be considered existing work on optimization problems related to taxi fleets. We discuss existing works along these lines next.
\spara{Studies of ride-hailing platforms:}
Recent work has investigated the supply-side effects of specific incentives (e.g., surge pricing) that Uber and Lyft provide to drivers~\cite{slaves}. For example, Chen and Sheldon~\cite{chen2016dynamic} showed a causal relationship that drivers on Uber respond to surges by driving more during high surge times, differentiating from previous work that suggests taxi drivers primarily focus on achieving earnings goals~\cite{camerer1997labor}.
%Other work has viewed ride-hailing platforms more holistically, at a macro level.
In another line of research,
Chen {\etal}~\cite{chen2015peeking} measured many facets of Uber in NYC, including the prevalence and extent
of surge pricing.
Hall and Krueger~\cite{hall2016analysis} showed that drivers were attracted to the Uber platform due to the flexibility it offers,
and the level of compensation, but that earnings per hour do not vary much with the number of hours worked.
Finally, Castillo {\etal}~\cite{castillo2017surge} recently showed that surge pricing is responsible for effectively
relocating drivers during periods of high-demand thereby preventing them from engaging in `wild goose chases'
to pick up distant customers, which only exacerbates the problem of low driver availability.
These studies perform an {\em a posteriori} analysis of the data,
but they do not focus on devising specific recommendations for drivers as we do.
In another line of work, Banerjee {\etal}~\cite{banerjee2015pricing} studied
dynamic
pricing strategies for ride-hailing platforms (such as Lyft) using a
queuing-theoretic economic model.
They showed that dynamic pricing is robust to changes in system parameters, even if it does not
achieve higher performance than static pricing.
More recently, Ozkan and Ward~\cite{ozkan2016dynamic} looked at strategic matching between supply (individual drivers)
and demand (requested rides) for Uber and Lyft.
They showed that matching based on time-varying parameters like driver and customer arrival rates, and the
willingness of customers to wait can achieve better performance than naively matching
passengers with the closest driver.
Although these works build interesting models for ride-hailing economies, they are
orthogonal to ours, as they take a holistic view of such economies, while we focus
on earnings of individual, self-interested drivers.
\spara{Optimization problems for taxi fleets:}
A considerable body of work has focused on the optimization of taxi fleets, for
example building economic network models to describe demand and supply equilibria of taxi
services under various tariff structures, fleet size regulations, and other policy
alternatives~\cite{bailey1987simulation,yang2002demand}. Other work seeks to
optimize the allocation of taxi market resources~\cite{shi2016optimization}.
Another direction focuses on route optimization by a centralized administrator (e.g., taxi dispatching services)~\cite{maciejewski2013simulation,nunes2011taxi}
or on maximizing occupancy and minimizing travel times
%(by minimizing passenger detours)
in a shared-ride setting~\cite{jung2013design}.
Other work has studied the supply side of the driving market from the viewpoint of behavioral economics.
A seminal paper by Camerer {\etal}~\cite{camerer1997labor} studied cab drivers and found that inexperienced
cab drivers (1) make labor supply decisions ``one day at a time'' instead of substituting labor and leisure across multiple days, and (2) set a loose daily income target and quit working once they reach that target.
These works, however, do not focus on the design of a specific gain-optimizing
strategy for drivers, as we do.
\begin{comment}
\textbf{Works on general taxi cabs:}
2. \cite{yang1998network} - first in the series of works modeling taxi utilization and movement to find that higher utilization leads to longer waiting times for customers.
3. \cite{wong2001modeling} - second paper, extends previous paper to incorporate effects of congestion and customer demand elasticity.
4. \cite{yang2002demand} - third paper, uses the network model to describe demand and supply equilibrium of taxi services under fare structure and fleet size regulation in competitive / monopoly market.
5. \cite{bailey1987simulation} - directed at understanding the dynamic interaction between demand, service rates, and policy alternatives. Finds that customer waiting time is insensitive to changes in demand but highly sensitive to changes in taxi fleet size.
6. \cite{qin2017mining} - explore the factors affecting driver incomes with quantitative estimates using GPS traces of over 167 million trips in Shanghai.
7. \cite{rong2016rich} - MDP to increase the revenue per unit time of taxi drivers. They study the relocate action from our strategy.
\textbf{Works on taxi routing:}
8. \cite{maciejewski2013simulation} - optimizes taxi routing by generating demand and congested network simulation. Defines online and offline taxi dispatching strategies and evaluate them. `No-scheduling' strategy works well under low demand but deteriorates under heavy load.
9. \cite{nunes2011taxi} - formulates a TSP problem to find the best route for a taxi company to satisfy demand.
\textbf{Works on ride-hailing:}
10. \cite{agatz2012optimization} - outline optimization challenges in developing technology to support ride-hailing services. Survey of operations research papers in the domain.
11. \cite{santos2013dynamic} - Prove that problem of maximizing shared trips within a fixed time window to minimize shared expenses is NP-Hard and a propose heuristic solution.
12. \cite{jung2013design} - Simulated Annealing algorithm to maximize occupancy and minimize travel times (by minimizing passenger detours) in shared-ride concept.
\textbf{Works on ride-hailing vehicle routing:}
13. \cite{lin2012research} - simulated annealing algorithm to optimize routing of ride-hailing taxi to minimize operating costs while maximizing customer satisfaction.
\textbf{Strategic behavior:}
14. \cite{shi2016optimization} - maximizes social welfare and optimizes allocation of taxi market resources. Also analyzes strategic behavior of passengers who may join or drop out of system based on their social welfare threshold.
\textbf{Uber related works:}
15. \cite{hall2016analysis} - Drivers attracted to Uber platform due to flexibility it offers, level of compensation, earnings per hour do not vary much with number of hours worked.
16. \cite{chen2016dynamic} - Show a causal relationship that drivers on Uber respond to `surges' by driving more during high surge times, in contrast to \cite{camerer1997labor} which says that drivers driver until they achieve earnings goals.
17. \cite{banerjee2015pricing} - Study complex dynamic pricing strategies for ride-hailing platforms (Lyft) using a queuing-theoretic economic model. They show the dynamic pricing is robust to changes in system parameters, even if it does not achieve higher performance than static pricing.
18. \cite{ozkan2016dynamic} - Strategic matching between Uber or Lyft's supply and demand. Matching based on parameters like customer/driver arrival rates, willingness of customers to wait and time-variance can achieve better performance than naively matching passenger with closest driver.
\textbf{Media and Press articles:}
%%3.
\end{comment}
|
{"hexsha": "a6701185f225620f61fb8d412429045983f75de6", "size": 8156, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/WSDM-2018/related.tex", "max_stars_repo_name": "chdhr-harshal/uber-driver-strategy", "max_stars_repo_head_hexsha": "f21f968e7aa04d8105bf42e046ab120f813aa12f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-04-14T22:30:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-05T17:54:25.000Z", "max_issues_repo_path": "paper/WSDM-2018/related.tex", "max_issues_repo_name": "chdhr-harshal/uber-driver-strategy", "max_issues_repo_head_hexsha": "f21f968e7aa04d8105bf42e046ab120f813aa12f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-02-17T10:36:43.000Z", "max_issues_repo_issues_event_max_datetime": "2018-02-17T10:46:33.000Z", "max_forks_repo_path": "paper/WSDM-2018/related.tex", "max_forks_repo_name": "chdhr-harshal/uber_driver_strategy", "max_forks_repo_head_hexsha": "f21f968e7aa04d8105bf42e046ab120f813aa12f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.8524590164, "max_line_length": 446, "alphanum_fraction": 0.8071358509, "num_tokens": 1775}
|
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
plt.rcParams['font.sans-serif']=['SimHei']
plt.rcParams['savefig.dpi']=300
# death counts for the top-10 countries (US, Brazil, India, Mexico, Peru,
# Russia, Indonesia, UK, Italy, Colombia), converted below to units of 10,000
a=[666618,588597,443497,269913,197146,195835,139682,134647,130100,125753]
for i in range(len(a)):
    a[i]=a[i]/10000
data={'country':['美国','巴西','印度','墨西哥','秘鲁','俄罗斯','印度尼西亚','英国','意大利','哥伦比亚'],
'deadnum':a}
pdat=pd.DataFrame(data)
l=pdat['deadnum']
N=pdat.shape[0]
width=2*np.pi/N
rad=np.cumsum([width]*N)
colors=['red','darkred','maroon','firebrick','brown','indianred','lightcoral','salmon','rosybrown','mistyrose']
plt.figure(figsize=(15,20))  # create the figure canvas
ax=plt.subplot(projection='polar')  # create polar axes
ax.set_ylim(0,np.ceil(l).max()+1)
ax.set_theta_zero_location('N')  # put the polar zero angle at North
ax.grid(False)  # hide the polar grid
ax.spines['polar'].set_visible(False)  # hide the outermost circle
ax.set_yticks([])  # hide the radial tick labels
ax.set_thetagrids([])  # hide the angular tick labels
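# the colored wedges below carry the data; the two white bars overlay the
# center, carving an opaque hole of radius 5 plus a translucent ring up to 10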
ax.bar(rad,l,width=width,color=colors,alpha=1)
ax.bar(rad,5,width=width,color='white',alpha=1)
ax.bar(rad,10,width=width,color='white',alpha=0.2)
for i in np.arange(N):
ax.text(rad[i],l[i]-4,data['country'][i],rotation=rad[i]*180/np.pi,
rotation_mode='anchor',alpha=1,fontweight='bold',size=12,color='white')
ax.text(rad[i], l[i] - 7, data['deadnum'][i], rotation=rad[i] * 180 / np.pi,
rotation_mode='anchor', alpha=1, fontweight='bold', size=12,color='white')
plt.savefig('2.png',bbox_inches='tight')
plt.show()
|
{"hexsha": "eea6444ac8dfd56fd652bb337615c256ac830fbf", "size": 1462, "ext": "py", "lang": "Python", "max_stars_repo_path": "1.py", "max_stars_repo_name": "fita23689/MachineLearning", "max_stars_repo_head_hexsha": "6da571ab1e7067392b67df566a6d040b68530f31", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "1.py", "max_issues_repo_name": "fita23689/MachineLearning", "max_issues_repo_head_hexsha": "6da571ab1e7067392b67df566a6d040b68530f31", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1.py", "max_forks_repo_name": "fita23689/MachineLearning", "max_forks_repo_head_hexsha": "6da571ab1e7067392b67df566a6d040b68530f31", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.1071428571, "max_line_length": 112, "alphanum_fraction": 0.6532147743, "include": true, "reason": "import numpy", "num_tokens": 521}
|
# -*- coding: utf-8 -*-
"""
Created on Thu Nov 29 09:22:10 2018
@author: gregoire
"""
#srcfolder=r'L:\processes\analysis\rams\temp_to_add_to_4076'
#anafolder=r'L:\processes\analysis\rams\20181130.142222.run'
#srcfolder=r'L:\processes\analysis\rams\temp_to_add_to_4832'
#anafolder=r'L:\processes\analysis\rams\20181130.141414.run'
#fom_segment_min_index_spacing=4
import os,shutil
import pandas as pd
import numpy as np
def import_CU_Multi_Bcknd_as_ana_block(srcfolder,anafolder,fom_segment_min_index_spacing=6,anak='ana__2'):
def get_num_segments(arr):
indsarr=np.where((arr[:-1]<=0.5)&(arr[1:]>0.5))[0]
if len(indsarr)==0:
return 0
return ((indsarr[1:]-indsarr[:-1])>fom_segment_min_index_spacing).sum()+int(indsarr[0]>fom_segment_min_index_spacing)
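    # Worked example (hypothetical): with fom_segment_min_index_spacing=2 and
    # arr=[0,1,1,0,0,0,1,1], the rises above 0.5 are at indices 0 and 5; the
    # gap 5-0>2 contributes 1, and the first rise at index 0 (not >2 from the
    # start) contributes 0, so get_num_segments returns 1.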
pid=int(srcfolder.rpartition('_')[2])
keystr='sample_no,runint,plate_id,num_pts_above_bcknd,smooth_num_pts_above_bcknd,num_segments_above_bcknd,smooth_num_segments_above_bcknd,max_signal_prob,max_smooth_signal_prob'
numk=keystr.count(',')+1
indent=' '
paramsfromfile=''
tups=[]
filelists=[[],[],[]]
for fn in os.listdir(srcfolder):
pr=os.path.join(srcfolder,fn)
nfn=anak+'_'+fn
pn=os.path.join(anafolder,nfn)
if fn=='Bcknd_Summary.csv':
with open(pr,mode='r') as f: lines=f.readlines()
orig_summ_keys=lines[0].strip().split(',')
inds=[count for count,k in enumerate(orig_summ_keys) if 'bcknd_weight__' in k]
i0=inds[0]
i1=inds[-1]
            if inds!=list(range(i0,i1+1)):  # list(...) needed under Python 3
                print('WARNING NON CONSEC KEYS: ',inds,orig_summ_keys)
keep_summ_keys=orig_summ_keys[i0:i1+1]
new_key_str=','.join([keystr]+keep_summ_keys)
filelists[0].append('%s: csv_fom_file;%s;19;%d' %(nfn,new_key_str,len(lines)-1))
csvstartstr=('1\t%d\t%d\t17\ncsv_version: 1\nplot_parameters:' %(numk+len(keep_summ_keys),len(lines)-1))+\
'\n plot__1:\n colormap: jet\n colormap_over_color: (0.5,0.,0.)\n colormap_under_color: (0.,0.,0.)\n fom_name: max_smooth_signal_prob' +\
'\n plot__2:\n colormap: jet\n colormap_over_color: (0.5,0.,0.)\n colormap_under_color: (0.,0.,0.)\n fom_name: smooth_num_pts_above_bcknd' +\
'\n plot__3:\n colormap: jet\n colormap_over_color: (0.5,0.,0.)\n colormap_under_color: (0.,0.,0.)\n fom_name: smooth_num_segments_above_bcknd'
summ_smps=[int(s.partition(',')[0]) for s in lines[1:]]
summ_keepstrs=[','.join(s.split(',')[i0:i1+1]) for s in lines[1:]]
p_summ=pn
elif 'Bcknd_Factors' in fn:
shutil.copy(pr,pn)
with open(pn,mode='r') as f: lines=f.readlines()
filelists[1].append('%s: rams_misc_file;%s;1;%d' %(nfn,lines[0].strip(),len(lines)-1))
elif 'Bcknd_Sample_' in fn:
shutil.copy(pr,pn)
d=pd.read_csv(pn)
            x=d.to_numpy()  # pandas removed .as_matrix(); to_numpy() is the replacement
smp=int(fn.rpartition('_')[2].partition('.')[0])
tups.append((smp,1,pid,(x[:,2]>0.5).sum(),(x[:,3]>0.5).sum(),get_num_segments(x[:,2]),get_num_segments(x[:,3]),x[:,2].max(),x[:,3].max()))
filelists[2].append('%s: rams_inter_rawlen_file;%s;1;%d;%d' %(nfn,','.join(d.keys()),len(x),smp))
elif fn=='Bcknd_Init.txt':
with open(pr,mode='r') as f: lines=f.readlines()
lines=[indent*2+l.strip() for l in lines]
paramsfromfile='\n'.join(lines)
new_summ_lines=[csvstartstr,new_key_str]
for t in sorted(tups):#this will only keep lines of summary for sample_no with individual sample files, and if there is an individual file not in the summary there will be an error
i=summ_smps.index(t[0])
new_summ_lines.append(','.join(['%d,%d,%d,%d,%d,%d,%d,%.5f,%.5f' %t]+[summ_keepstrs[i]]))
filestr='\n'.join(new_summ_lines)
with open(p_summ,mode='w') as f: f.write(filestr)
s=anak
s+=':\n plate_ids: %d\n analysis_fcn_version: 1\n technique: rams\n analysis_general_type: analysis_of_ana\n description: multi-rank background identification and subtraction\n name: Analysis__CU_Multi_Bcknd\n parameters:\n select_ana: ana__1\n%s\n fom_segment_min_index_spacing: %d\n plot_parameters:\n plot__1:\n x_axis: wavenumber._cm\n series__1: smooth_signal_probability_pattern' \
%(pid,paramsfromfile,fom_segment_min_index_spacing)
analines=[s]
analines.append(' files_multi_run:\n fom_files:\n'+'\n'.join([indent*3+filedesc for filedesc in filelists[0]]))
analines.append(' misc_files:\n'+'\n'.join([indent*3+filedesc for filedesc in filelists[1]]))
analines.append(' files_run__1:\n inter_rawlen_files:\n'+'\n'.join([indent*3+filedesc for filedesc in filelists[2]]))
pana=os.path.join(anafolder,[fn for fn in os.listdir(anafolder) if fn.endswith('.ana')][0])
with open(pana,mode='r') as f: fs=f.read()
anafilestr='\n'.join([fs.strip()]+analines)
with open(pana,mode='w') as f: f.write(anafilestr)
with open(os.path.join(srcfolder,'anablock.txt'),mode='w') as f: f.write('\n'.join(analines))
#anafolder=r'L:\processes\analysis\rams\20181205.140000.run'
#
#for anaint,rank in [(2,1),(3,2),(4,4),(5,8)]:
# foldname='rank%d_4832' %rank
# anak='ana__%d' %(anaint)
# srcfolder=os.path.join(r'D:\data\201812_MultiBcknd_4832',foldname)
# import_CU_Multi_Bcknd_as_ana_block(srcfolder,anafolder,fom_segment_min_index_spacing=6,anak=anak)
|
{"hexsha": "ee73009d16f0f11f3feb1d430c835538dd8e612a", "size": 5723, "ext": "py", "lang": "Python", "max_stars_repo_path": "one_off_routines/20181129_ingest_CU_Bcknd.py", "max_stars_repo_name": "johnmgregoire/JCAPDataProcess", "max_stars_repo_head_hexsha": "c8120e5b2f8fc840a6307b40293dccaf94bd8c2c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-03-24T21:05:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-15T18:18:05.000Z", "max_issues_repo_path": "one_off_routines/20181129_ingest_CU_Bcknd.py", "max_issues_repo_name": "johnmgregoire/JCAPDataProcess", "max_issues_repo_head_hexsha": "c8120e5b2f8fc840a6307b40293dccaf94bd8c2c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "one_off_routines/20181129_ingest_CU_Bcknd.py", "max_forks_repo_name": "johnmgregoire/JCAPDataProcess", "max_forks_repo_head_hexsha": "c8120e5b2f8fc840a6307b40293dccaf94bd8c2c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.504587156, "max_line_length": 458, "alphanum_fraction": 0.6279923117, "include": true, "reason": "import numpy", "num_tokens": 1762}
|
!##############################################################################
!# ****************************************************************************
!# <name> CahnHilliard_partridiscr </name>
!# ****************************************************************************
!#
!# <purpose>
!# This module contains routines to read the parametrisation, create the
!# triangulation and set up the discretisation for the heat conduction problem.
!# The following routines can be found here:
!#
!# 0.) CH_initSolution
!# -> Give the initial solution of Cahn-Hilliard equation
!#
!# 1.) CH_initParamTriang
!# -> Read .PRM/.TRI files. Generate meshes on all levels.
!#
!# 2.) CH_doneParamTriang
!#     -> Clean up parametrisation/triangulation, release memory.
!#
!# 3.) CH_initDiscretisation
!# -> Initialise the spatial discretisation.
!#
!# 4.) CH_doneDiscretisation
!# -> Clean up the discretisation, release memory.
!# </purpose>
!##############################################################################
module CahnHilliard_partridiscr
use fsystem
use storage
use linearsolver
use boundary
use bilinearformevaluation
use linearformevaluation
use cubature
use matrixfilters
use vectorfilters
use bcassembly
use triangulation
use spatialdiscretisation
use sortstrategy
use coarsegridcorrection
use ucd
use timestepping
use genoutput
use element
use collection
use paramlist
use CahnHilliard_callback
use CahnHilliard_basic
IMPLICIT NONE
CONTAINS
! ***************************************************************************
!<subroutine>
subroutine CH_initSolution(rCHproblem, rCHvector)
!<description>
  ! Initialises the solution vector rCHvector. The initial values of the
  ! phase variable and the chemical potential are evaluated at the mesh
  ! vertices via CH_iniconPhi and CH_iniconChemP.
!
! The routine assumes that basic mass matrices have already been constructed.
!</description>
!<input>
! A problem structure saving problem-dependent information.
type(t_CHproblem), intent(INOUT) :: rCHproblem
!</input>
!<inputoutput>
! The solution vector to be initialised. Must be set up according to the
! maximum level NLMAX in rproblem!
type(t_vectorBlock), intent(INOUT) :: rCHvector
!</inputoutput>
! local variables
integer :: i
real(DP), dimension(:,:), pointer :: p_DvertexCoords
real(DP), dimension(:), pointer :: p_vectordata
real(DP), dimension(:), pointer :: p_data
call lsyssc_getbase_double(rCHvector%rvectorBlock(1), p_vectordata)
call storage_getbase_double2D(rCHproblem%RlevelInfo(&
rCHproblem%NLMAX)%rtriangulation%h_DvertexCoords,p_DvertexCoords)
do i=1,rCHproblem%RlevelInfo(rCHproblem%NLMAX)%rtriangulation%NVT
call CH_iniconPhi(p_DvertexCoords(1,i),p_DvertexCoords(2,i), p_vectordata(i))
end do
! for initial solution of chemical potential
call lsyssc_getbase_double(rCHvector%rvectorBlock(2), p_vectordata)
call storage_getbase_double2D(rCHproblem%RlevelInfo(&
rCHproblem%NLMAX)%rtriangulation%h_DvertexCoords,p_DvertexCoords)
do i=1,rCHproblem%RlevelInfo(rCHproblem%NLMAX)%rtriangulation%NVT
call CH_iniconChemP(p_DvertexCoords(1,i),p_DvertexCoords(2,i), p_vectordata(i))
end do
end subroutine
! ***************************************************************************
!<subroutine>
subroutine CH_initParamTriang (NLMIN,NLMAX,rCHproblem)
!<description>
! This routine initialises the parametrisation and triangulation of the
! domain. The corresponding .prm/.tri files are read from disc and
! the triangulation is refined as described by the parameter ilv.
!</description>
!<input>
! Minimum refinement level of the mesh; = coarse grid = level 1
integer, intent(IN) :: NLMIN
! Maximum refinement level
integer, intent(IN) :: NLMAX
!</input>
!<inputoutput>
! A problem structure saving problem-dependent information.
type(t_CHproblem), intent(INOUT) :: rCHproblem
!</inputoutput>
!</subroutine>
! local variables
integer :: i
! Initialise the level in the problem structure
rCHproblem%NLMIN = NLMIN
rCHproblem%NLMAX = NLMAX
! At first, read in the parametrisation of the boundary and save
! it to rboundary.
call boundary_read_prm(rCHproblem%rboundary, './pre/QUAD.prm')
! Now read in the basic triangulation.
call tria_readTriFile2D (rCHproblem%RlevelInfo(rCHproblem%NLMIN)%rtriangulation, &
'./pre/QUAD.tri', rCHproblem%rboundary)
! Refine the mesh up to the minimum level
call tria_quickRefine2LevelOrdering(rCHproblem%NLMIN-1,&
rCHproblem%RlevelInfo(rCHproblem%NLMIN)%rtriangulation,rCHproblem%rboundary)
! Create information about adjacencies and everything one needs from
! a triangulation. Afterwards, we have the coarse mesh.
call tria_initStandardMeshFromRaw (&
rCHproblem%RlevelInfo(rCHproblem%NLMIN)%rtriangulation,rCHproblem%rboundary)
! Now, refine to level up to nlmax.
do i=rCHproblem%NLMIN+1,rCHproblem%NLMAX
call tria_refine2LevelOrdering (rCHproblem%RlevelInfo(i-1)%rtriangulation,&
rCHproblem%RlevelInfo(i)%rtriangulation, rCHproblem%rboundary)
call tria_initStandardMeshFromRaw (rCHproblem%RlevelInfo(i)%rtriangulation,&
rCHproblem%rboundary)
end do
end subroutine
! ***************************************************************************
!<subroutine>
subroutine CH_initDiscretisation (rCHproblem)
!<description>
! This routine initialises the discretisation structure of the underlying
! problem and saves it to the problem structure.
!</description>
!<inputoutput>
! A problem structure saving problem-dependent information.
type(t_CHproblem), intent(INOUT), TARGET :: rCHproblem
!</inputoutput>
!</subroutine>
! local variables
! An object for saving the domain:
type(t_boundary), POINTER :: p_rboundary
! An object for saving the triangulation on the domain
type(t_triangulation), POINTER :: p_rtriangulation
    ! MCai:
    ! In the CH problem we have two blocks. If we apply a Ciarlet-Raviart type
    ! mixed finite element, the two blocks have the same discretisation, but it
    ! is clearer to use two pointers to denote the two blocks.
    ! An object for the block discretisation on one level
    type(t_blockDiscretisation), pointer :: p_rdiscretisation
! Specifically, p_rdiscretisation(1)=p_rdiscretisation_phase
! p_rdiscretisation(2)=p_rdiscretisation_chemPoten
type(t_spatialDiscretisation), pointer :: p_rdiscretisationLaplace, p_rdiscretisationMass
character(LEN=SYS_NAMELEN) :: sstr
integer :: i, k, ielementType,icubtemp
integer(i32) :: icubA,icubB,icubF
integer(i32) :: ieltype_A, ieltype_B
!~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
! Which discretisation is to use?
! Which cubature formula should be used?
call parlst_getvalue_int (rCHproblem%rparamList,'CH-DISCRETISATION',&
'iElementType',ielementType,0) ! Default Q1 element
call parlst_getvalue_string (rCHproblem%rparamList,'CH-DISCRETISATION',&
'scubA',sstr,'')
if (sstr .eq. '') then
icubtemp = CUB_G2X2
call parlst_getvalue_int (rCHproblem%rparamList,'CH-DISCRETISATION',&
'icubA',icubtemp,icubtemp)
icubA = icubtemp
else
icubA = cub_igetID(sstr)
end if
call parlst_getvalue_string (rCHproblem%rparamList,'CH-DISCRETISATION',&
'scubB',sstr,'')
if (sstr .eq. '') then
icubtemp = CUB_G2X2
call parlst_getvalue_int (rCHproblem%rparamList,'CH-DISCRETISATION',&
'icubB',icubtemp,icubtemp)
icubB = icubtemp
else
icubB = cub_igetID(sstr)
end if
call parlst_getvalue_string (rCHproblem%rparamList,'CH-DISCRETISATION',&
'scubF',sstr,'')
if (sstr .eq. '') then
icubtemp = CUB_G2X2
call parlst_getvalue_int (rCHproblem%rparamList,'CH-DISCRETISATION',&
'icubF',icubtemp,icubtemp)
icubF = icubtemp
else
icubF = cub_igetID(sstr)
end if
select case (ielementType)
case (0)
ieltype_A = EL_Q1
ieltype_B = EL_Q1
case (1)
ieltype_A = EL_Q2
ieltype_B = EL_Q2
case default
call output_line (&
'Unknown discretisation: iElementType = '//sys_siL(ielementType,10), &
OU_CLASS_ERROR,OU_MODE_STD,'CH_initDiscretisation')
call sys_halt()
end select
  ! Now set up discretisation structures on all levels:
do i=rCHproblem%NLMIN,rCHproblem%NLMAX
! Ask the problem structure to give us the boundary and triangulation.
! We need it for the discretisation.
p_rboundary => rCHproblem%rboundary
p_rtriangulation => rCHproblem%RlevelInfo(i)%rtriangulation
      ! Note: the pointers below are associated with structures stored inside
      ! rCHproblem, so no local allocation is needed (allocating here would
      ! only leak the memory once the pointers are re-associated).
      ! Now we can start to initialise the discretisation. At first, set up
      ! a block discretisation structure that specifies the two blocks in the
      ! solution vector.
      p_rdiscretisation => rCHproblem%RlevelInfo(i)%rdiscretisation
call spdiscr_initBlockDiscr(&
p_rdiscretisation,2,p_rtriangulation,p_rboundary)
! rdiscretisation%RspatialDiscr is a list of scalar
! discretisation structures for every component of the solution vector.
! We have a solution vector with two components:
! Component 1 = Phase variable
! Component 2 = Chemical potential
! For simplicity, we set up one discretisation structure for the phase var
! then copy the discretisation to chemical potential
call spdiscr_initDiscr_simple ( &
p_rdiscretisation%RspatialDiscr(1), &
ieltype_A,icubA,p_rtriangulation, p_rboundary)
! Manually set the cubature formula for the RHS as the above routine
! uses the same for matrix and vectors.
p_rdiscretisation%RspatialDiscr(1)% &
RelementDistr(1)%ccubTypeLinForm = icubF
      ! Second discretisation; icubB should equal icubA, so it is not strictly
      ! needed here.
call spdiscr_initDiscr_simple ( &
p_rdiscretisation%RspatialDiscr(2), &
ieltype_B,icubB,p_rtriangulation, p_rboundary)
! Manually set the cubature formula for the RHS as the above routine
! uses the same for matrix and vectors.
p_rdiscretisation%RspatialDiscr(2)% &
RelementDistr(1)%ccubTypeLinForm = icubF
! Save the discretisation structure to our local LevelInfo structure
! for later use.
rCHproblem%RlevelInfo(i)%p_rdiscretisation => p_rdiscretisation
call spdiscr_duplicateDiscrSc (p_rdiscretisation%RspatialDiscr(1), &
rCHproblem%RlevelInfo(i)%rdiscretisationLaplace,.true.)
call spdiscr_duplicateDiscrSc (p_rdiscretisation%RspatialDiscr(1), &
rCHproblem%RlevelInfo(i)%rdiscretisationMass,.true.)
p_rdiscretisationLaplace => rCHproblem%RlevelInfo(i)%rdiscretisationLaplace
p_rdiscretisationMass => rCHproblem%RlevelInfo(i)%rdiscretisationMass
! Initialise the cubature formula appropriately.
do k = 1,p_rdiscretisationMass%inumFESpaces
p_rdiscretisationLaplace%RelementDistr(k)%ccubTypeBilForm = icubA
p_rdiscretisationMass%RelementDistr(k)%ccubTypeBilForm = icubA
end do
end do
end subroutine
! ***************************************************************************
!<subroutine>
subroutine CH_doneDiscretisation (rCHproblem)
!<description>
! Releases the discretisation from the heap.
!</description>
!<inputoutput>
! A problem structure saving problem-dependent information.
type(t_CHproblem), intent(INOUT), TARGET :: rCHproblem
!</inputoutput>
!</subroutine>
! local variables
integer :: i
do i=rCHproblem%NLMAX,rCHproblem%NLMIN,-1
! Delete the block discretisation together with the associated
! scalar spatial discretisations....
call spdiscr_releaseBlockDiscr(rCHproblem%RlevelInfo(i)%p_rdiscretisation)
      ! and remove the allocated block discretisation structure from the heap.
      ! (open question: why is the structure not deallocated here?)
      ! deallocate(rCHproblem%RlevelInfo(i)%p_rdiscretisation)
end do
end subroutine
! ***************************************************************************
!<subroutine>
subroutine CH_doneParamTriang (rCHproblem)
!<description>
! Releases the triangulation and parametrisation from the heap.
!</description>
!<inputoutput>
! A problem structure saving problem-dependent information.
type(t_CHproblem), intent(INOUT), TARGET :: rCHproblem
!</inputoutput>
!</subroutine>
! local variables
integer :: i
do i=rCHproblem%NLMAX,rCHproblem%NLMIN,-1
! Release the triangulation
call tria_done (rCHproblem%RlevelInfo(i)%rtriangulation)
end do
! Finally release the domain.
call boundary_release (rCHproblem%rboundary)
end subroutine
end module
|
{"hexsha": "5dee01e65771373808b8f8404343d565ff161a59", "size": 13731, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "area51/Archive/CHNS_Kay/src/CahnHilliard_partridiscr.f90", "max_stars_repo_name": "tudo-math-ls3/FeatFlow2", "max_stars_repo_head_hexsha": "56159aff28f161aca513bc7c5e2014a2d11ff1b3", "max_stars_repo_licenses": ["Intel", "Unlicense"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-09T15:48:37.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-09T15:48:37.000Z", "max_issues_repo_path": "area51/Archive/CHNS_Kay/src/CahnHilliard_partridiscr.f90", "max_issues_repo_name": "tudo-math-ls3/FeatFlow2", "max_issues_repo_head_hexsha": "56159aff28f161aca513bc7c5e2014a2d11ff1b3", "max_issues_repo_licenses": ["Intel", "Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "area51/Archive/CHNS_Kay/src/CahnHilliard_partridiscr.f90", "max_forks_repo_name": "tudo-math-ls3/FeatFlow2", "max_forks_repo_head_hexsha": "56159aff28f161aca513bc7c5e2014a2d11ff1b3", "max_forks_repo_licenses": ["Intel", "Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.4087591241, "max_line_length": 93, "alphanum_fraction": 0.6683417085, "num_tokens": 3626}
|
[STATEMENT]
lemma cong_exp_trans[trans]:
"[a ^ b = c] (mod n) \<Longrightarrow> [a = d] (mod n) \<Longrightarrow> [d ^ b = c] (mod n)"
"[c = a ^ b] (mod n) \<Longrightarrow> [a = d] (mod n) \<Longrightarrow> [c = d ^ b] (mod n)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<lbrakk>[a ^ b = c] (mod n); [a = d] (mod n)\<rbrakk> \<Longrightarrow> [d ^ b = c] (mod n)) &&& (\<lbrakk>[c = a ^ b] (mod n); [a = d] (mod n)\<rbrakk> \<Longrightarrow> [c = d ^ b] (mod n))
[PROOF STEP]
using cong_pow cong_sym cong_trans
[PROOF STATE]
proof (prove)
using this:
[?b = ?c] (mod ?a) \<Longrightarrow> [?b ^ ?n = ?c ^ ?n] (mod ?a)
[?b = ?c] (mod ?a) \<Longrightarrow> [?c = ?b] (mod ?a)
\<lbrakk>[?b = ?c] (mod ?a); [?c = ?d] (mod ?a)\<rbrakk> \<Longrightarrow> [?b = ?d] (mod ?a)
goal (1 subgoal):
1. (\<lbrakk>[a ^ b = c] (mod n); [a = d] (mod n)\<rbrakk> \<Longrightarrow> [d ^ b = c] (mod n)) &&& (\<lbrakk>[c = a ^ b] (mod n); [a = d] (mod n)\<rbrakk> \<Longrightarrow> [c = d ^ b] (mod n))
[PROOF STEP]
by blast+
|
{"llama_tokens": 469, "file": "Probabilistic_Prime_Tests_Algebraic_Auxiliaries", "length": 2}
|
module parcel_netcdf
use constants, only : one
use netcdf_utils
use netcdf_writer
use netcdf_reader
use parcel_container, only : parcels, n_parcels
use parameters, only : nx, ny, nz, extent, lower, max_num_parcels
use config, only : package_version, cf_version
use timer, only : start_timer, stop_timer
use iomanip, only : zfill
use options, only : write_netcdf_options
use physics, only : write_physical_quantities
implicit none
integer :: parcel_io_timer
integer :: n_writes = 1
character(len=512) :: ncbasename
character(len=512) :: ncfname
integer :: ncid
integer :: npar_dim_id, vol_id, buo_id, &
x_pos_id, y_pos_id, z_pos_id, &
x_vor_id, y_vor_id, z_vor_id, &
b11_id, b12_id, b13_id, &
b22_id, b23_id, &
t_axis_id, t_dim_id
double precision :: restart_time
#ifndef ENABLE_DRY_MODE
integer :: hum_id
#endif
private :: ncid, ncfname, n_writes, npar_dim_id, &
x_pos_id, y_pos_id, z_pos_id, &
x_vor_id, y_vor_id, z_vor_id, &
b11_id, b12_id, b13_id, b22_id, b23_id, &
vol_id, buo_id, t_dim_id, t_axis_id, &
restart_time
#ifndef ENABLE_DRY_MODE
private :: hum_id
#endif
private :: ncbasename
contains
    ! Create the parcel file.
    ! @param[in] basename of the file
    ! @param[in] overwrite the file
    ! @param[in] l_restart resume file numbering from an existing run
subroutine create_netcdf_parcel_file(basename, overwrite, l_restart)
character(*), intent(in) :: basename
logical, intent(in) :: overwrite
logical, intent(in) :: l_restart
logical :: l_exist
integer :: dimids(2)
ncfname = basename // '_' // zfill(n_writes) // '_parcels.nc'
ncbasename = basename
restart_time = -one
if (l_restart) then
! find the last parcel file in order to set "n_writes" properly
call exist_netcdf_file(ncfname, l_exist)
do while (l_exist)
n_writes = n_writes + 1
ncfname = basename // '_' // zfill(n_writes) // '_parcels.nc'
call exist_netcdf_file(ncfname, l_exist)
if (l_exist) then
call open_netcdf_file(ncfname, NF90_NOWRITE, ncid)
call get_time(ncid, restart_time)
call close_netcdf_file(ncid)
endif
enddo
return
endif
call create_netcdf_file(ncfname, overwrite, ncid)
! define global attributes
call write_netcdf_info(ncid=ncid, &
epic_version=package_version, &
file_type='parcels', &
cf_version=cf_version)
call write_netcdf_box(ncid, lower, extent, (/nx, ny, nz/))
call write_physical_quantities(ncid)
call write_netcdf_options(ncid)
! define dimensions
call define_netcdf_dimension(ncid=ncid, &
name='n_parcels', &
dimsize=n_parcels, &
dimid=npar_dim_id)
call define_netcdf_temporal_dimension(ncid, t_dim_id, t_axis_id)
dimids = (/npar_dim_id, t_dim_id/)
call define_netcdf_dataset(ncid=ncid, &
name='x_position', &
long_name='x position component', &
std_name='', &
unit='m', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=x_pos_id)
call define_netcdf_dataset(ncid=ncid, &
name='y_position', &
long_name='y position component', &
std_name='', &
unit='m', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=y_pos_id)
call define_netcdf_dataset(ncid=ncid, &
name='z_position', &
long_name='z position component', &
std_name='', &
unit='m', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=z_pos_id)
call define_netcdf_dataset(ncid=ncid, &
name='B11', &
long_name='B11 element of shape matrix', &
std_name='', &
unit='m^2', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=b11_id)
call define_netcdf_dataset(ncid=ncid, &
name='B12', &
long_name='B12 element of shape matrix', &
std_name='', &
unit='m^2', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=b12_id)
call define_netcdf_dataset(ncid=ncid, &
name='B13', &
long_name='B13 element of shape matrix', &
std_name='', &
unit='m^2', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=b13_id)
call define_netcdf_dataset(ncid=ncid, &
name='B22', &
long_name='B22 element of shape matrix', &
std_name='', &
unit='m^2', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=b22_id)
call define_netcdf_dataset(ncid=ncid, &
name='B23', &
long_name='B23 element of shape matrix', &
std_name='', &
unit='m^2', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=b23_id)
call define_netcdf_dataset(ncid=ncid, &
name='volume', &
long_name='parcel volume', &
std_name='', &
unit='m^3', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=vol_id)
call define_netcdf_dataset(ncid=ncid, &
name='x_vorticity', &
long_name='x vorticity component', &
std_name='', &
unit='1/s', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=x_vor_id)
call define_netcdf_dataset(ncid=ncid, &
name='y_vorticity', &
long_name='y vorticity component', &
std_name='', &
unit='1/s', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=y_vor_id)
call define_netcdf_dataset(ncid=ncid, &
name='z_vorticity', &
long_name='z vorticity component', &
std_name='', &
unit='1/s', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=z_vor_id)
call define_netcdf_dataset(ncid=ncid, &
name='buoyancy', &
long_name='parcel buoyancy', &
std_name='', &
unit='m/s^2', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=buo_id)
#ifndef ENABLE_DRY_MODE
call define_netcdf_dataset(ncid=ncid, &
name='humidity', &
long_name='parcel humidity', &
std_name='', &
unit='1', &
dtype=NF90_DOUBLE, &
dimids=dimids, &
varid=hum_id)
#endif
call close_definition(ncid)
end subroutine create_netcdf_parcel_file
! Write parcels of the current time step into the parcel file.
! @param[in] t is the time
subroutine write_netcdf_parcels(t)
double precision, intent(in) :: t
integer :: cnt(2), start(2)
call start_timer(parcel_io_timer)
if (t <= restart_time) then
call stop_timer(parcel_io_timer)
return
endif
call create_netcdf_parcel_file(trim(ncbasename), .true., .false.)
call open_netcdf_file(ncfname, NF90_WRITE, ncid)
! write time
call write_netcdf_scalar(ncid, t_axis_id, t, 1)
            ! time step to write [start(2) is the time index]
cnt = (/ n_parcels, 1 /)
start = (/ 1, 1 /)
call write_netcdf_dataset(ncid, x_pos_id, parcels%position(1, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, y_pos_id, parcels%position(2, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, z_pos_id, parcels%position(3, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, b11_id, parcels%B(1, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, b12_id, parcels%B(2, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, b13_id, parcels%B(3, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, b22_id, parcels%B(4, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, b23_id, parcels%B(5, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, vol_id, parcels%volume(1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, x_vor_id, parcels%vorticity(1, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, y_vor_id, parcels%vorticity(2, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, z_vor_id, parcels%vorticity(3, 1:n_parcels), start, cnt)
call write_netcdf_dataset(ncid, buo_id, parcels%buoyancy(1:n_parcels), start, cnt)
#ifndef ENABLE_DRY_MODE
call write_netcdf_dataset(ncid, hum_id, parcels%humidity(1:n_parcels), start, cnt)
#endif
! increment counter
n_writes = n_writes + 1
call close_netcdf_file(ncid)
call stop_timer(parcel_io_timer)
end subroutine write_netcdf_parcels
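    ! Usage sketch: each call to write_netcdf_parcels(t) creates a fresh file
    ! "<basename>_<n_writes>_parcels.nc" and increments n_writes; on a restart,
    ! create_netcdf_parcel_file(..., l_restart=.true.) only scans the existing
    ! files to resume the numbering and to recover the last written time.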
subroutine read_netcdf_parcels(fname)
character(*), intent(in) :: fname
            logical                       :: l_valid
            integer                       :: cnt(2), start(2)

            call start_timer(parcel_io_timer)

            ! initialising l_valid in its declaration would give it the SAVE
            ! attribute, so it is reset explicitly on every call instead
            l_valid = .false.
call open_netcdf_file(fname, NF90_NOWRITE, ncid)
call get_num_parcels(ncid, n_parcels)
if (n_parcels > max_num_parcels) then
print *, "Number of parcels exceeds limit of", &
max_num_parcels, ". Exiting."
stop
endif
            ! time step to read [start(2) is the time index]
cnt = (/ n_parcels, 1 /)
start = (/ 1, 1 /)
            ! The parcel container is 1-based, so all datasets are read into
            ! the index range 1:n_parcels.
if (has_dataset(ncid, 'B11')) then
call read_netcdf_dataset(ncid, 'B11', parcels%B(1, 1:n_parcels), start, cnt)
else
print *, "The parcel shape component B11 must be present! Exiting."
stop
endif
if (has_dataset(ncid, 'B12')) then
call read_netcdf_dataset(ncid, 'B12', parcels%B(2, 1:n_parcels), start, cnt)
else
print *, "The parcel shape component B12 must be present! Exiting."
stop
endif
if (has_dataset(ncid, 'B13')) then
call read_netcdf_dataset(ncid, 'B13', parcels%B(3, 1:n_parcels), start, cnt)
else
print *, "The parcel shape component B13 must be present! Exiting."
stop
endif
if (has_dataset(ncid, 'B22')) then
call read_netcdf_dataset(ncid, 'B22', parcels%B(4, 1:n_parcels), start, cnt)
else
print *, "The parcel shape component B22 must be present! Exiting."
stop
endif
if (has_dataset(ncid, 'B23')) then
call read_netcdf_dataset(ncid, 'B23', parcels%B(5, 1:n_parcels), start, cnt)
else
print *, "The parcel shape component B23 must be present! Exiting."
stop
endif
if (has_dataset(ncid, 'x_position')) then
call read_netcdf_dataset(ncid, 'x_position', &
parcels%position(1, 1:n_parcels), start, cnt)
else
print *, "The parcel x position must be present! Exiting."
stop
endif
if (has_dataset(ncid, 'y_position')) then
call read_netcdf_dataset(ncid, 'y_position', &
parcels%position(2, 1:n_parcels), start, cnt)
else
print *, "The parcel y position must be present! Exiting."
stop
endif
if (has_dataset(ncid, 'z_position')) then
call read_netcdf_dataset(ncid, 'z_position', &
parcels%position(3, 1:n_parcels), start, cnt)
else
print *, "The parcel z position must be present! Exiting."
stop
endif
if (has_dataset(ncid, 'volume')) then
call read_netcdf_dataset(ncid, 'volume', &
parcels%volume(1:n_parcels), start, cnt)
else
print *, "The parcel volume must be present! Exiting."
stop
endif
if (has_dataset(ncid, 'x_vorticity')) then
l_valid = .true.
call read_netcdf_dataset(ncid, 'x_vorticity', &
parcels%vorticity(1, 1:n_parcels), start, cnt)
endif
if (has_dataset(ncid, 'y_vorticity')) then
call read_netcdf_dataset(ncid, 'y_vorticity', &
parcels%vorticity(2, 1:n_parcels), start, cnt)
endif
if (has_dataset(ncid, 'z_vorticity')) then
l_valid = .true.
call read_netcdf_dataset(ncid, 'z_vorticity', &
parcels%vorticity(3, 1:n_parcels), start, cnt)
endif
if (has_dataset(ncid, 'buoyancy')) then
l_valid = .true.
call read_netcdf_dataset(ncid, 'buoyancy', &
parcels%buoyancy(1:n_parcels), start, cnt)
endif
#ifndef ENABLE_DRY_MODE
if (has_dataset(ncid, 'humidity')) then
l_valid = .true.
call read_netcdf_dataset(ncid, 'humidity', &
parcels%humidity(1:n_parcels), start, cnt)
endif
#endif
if (.not. l_valid) then
print *, "Either the parcel buoyancy or vorticity must be present! Exiting."
stop
endif
call close_netcdf_file(ncid)
call stop_timer(parcel_io_timer)
end subroutine read_netcdf_parcels
end module parcel_netcdf
|
{"hexsha": "a48a96c4bf4a8a941b46584a8bfca41243bf22fc", "size": 20241, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "src/3d/parcels/parcel_netcdf.f90", "max_stars_repo_name": "matt-frey/epic", "max_stars_repo_head_hexsha": "954ebc44f2c041eee98bd14e22a85540a0c6c4bb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2021-11-11T10:50:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T00:11:41.000Z", "max_issues_repo_path": "src/3d/parcels/parcel_netcdf.f90", "max_issues_repo_name": "matt-frey/epic", "max_issues_repo_head_hexsha": "954ebc44f2c041eee98bd14e22a85540a0c6c4bb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 47, "max_issues_repo_issues_event_min_datetime": "2021-11-12T17:09:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-24T16:50:58.000Z", "max_forks_repo_path": "src/3d/parcels/parcel_netcdf.f90", "max_forks_repo_name": "matt-frey/epic", "max_forks_repo_head_hexsha": "954ebc44f2c041eee98bd14e22a85540a0c6c4bb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.4028103044, "max_line_length": 100, "alphanum_fraction": 0.3887159725, "num_tokens": 3825}
|
import numpy as np
import torch.nn as nn
import random
import pytest
from test.utils import convert_and_test
class LayerTest(nn.Module):
def __init__(self, out, eps, momentum):
super(LayerTest, self).__init__()
self.bn = nn.BatchNorm2d(out, eps=eps, momentum=momentum)
def forward(self, x):
x = self.bn(x)
return x
@pytest.mark.repeat(10)
@pytest.mark.parametrize('change_ordering', [True, False])
def test_bn2d(change_ordering):
inp_size = np.random.randint(10, 100)
model = LayerTest(inp_size, random.random(), random.random())
model.eval()
input_np = np.random.uniform(0, 1, (1, inp_size, 224, 224))
error = convert_and_test(model, input_np, verbose=False, change_ordering=change_ordering)
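# Note (assumption): `convert_and_test` from this repo's test.utils converts the
# PyTorch model (via ONNX) to Keras and compares outputs within a tolerance;
# `change_ordering` presumably toggles NCHW -> NHWC channel ordering.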
|
{"hexsha": "73459aa2b7218bfb4a90e677c73140a8f970c648", "size": 761, "ext": "py", "lang": "Python", "max_stars_repo_path": "test/layers/normalizations/test_bn2d.py", "max_stars_repo_name": "dawnclaude/onnx2keras", "max_stars_repo_head_hexsha": "3d2a47c0a228b91fd434232274e216e491da36e3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 115, "max_stars_repo_stars_event_min_datetime": "2019-07-03T21:01:26.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-05T16:50:49.000Z", "max_issues_repo_path": "test/layers/normalizations/test_bn2d.py", "max_issues_repo_name": "dawnclaude/onnx2keras", "max_issues_repo_head_hexsha": "3d2a47c0a228b91fd434232274e216e491da36e3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 106, "max_issues_repo_issues_event_min_datetime": "2019-06-27T09:08:58.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-03T09:42:44.000Z", "max_forks_repo_path": "test/layers/normalizations/test_bn2d.py", "max_forks_repo_name": "dawnclaude/onnx2keras", "max_forks_repo_head_hexsha": "3d2a47c0a228b91fd434232274e216e491da36e3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 68, "max_forks_repo_forks_event_min_datetime": "2019-07-04T22:36:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-25T13:54:11.000Z", "avg_line_length": 26.2413793103, "max_line_length": 93, "alphanum_fraction": 0.6977660972, "include": true, "reason": "import numpy", "num_tokens": 196}
|
"""
Combine the contours estimated:
* directly with the classification CNN
* computing normal curvature on dmap estimated with the regression CNN
Extract cells using watershed.
"""
"""
This file is part of Cytometer
Copyright 2021 Medical Research Council
SPDX-License-Identifier: Apache-2.0
Author: Ramon Casero <rcasero@gmail.com>
"""
# cross-platform home directory
from pathlib import Path
home = str(Path.home())
# PyCharm automatically adds cytometer to the python path, but this doesn't happen if the script is run
# with "python scriptname.py"
import os
import sys
sys.path.extend([os.path.join(home, 'Software/cytometer')])
import pickle
import glob
import numpy as np
# limit number of GPUs
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
# limit GPU memory used
os.environ['KERAS_BACKEND'] = 'tensorflow'
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9
set_session(tf.Session(config=config))
# Note: you need to use my branch of keras with the new functionality, that allows element-wise weights of the loss
# function
import keras
import keras.backend as K
import cytometer.data
import cytometer.models
from cytometer.utils import principal_curvatures_range_image
import matplotlib.pyplot as plt
from skimage import measure
from skimage.morphology import watershed
from mahotas.labeled import borders
import cv2
# specify data format as (n, row, col, channel)
K.set_image_data_format('channels_last')
DEBUG = True
'''Load model
'''
# data paths
root_data_dir = os.path.join(home, 'Data/cytometer_data/klf14')
training_dir = os.path.join(root_data_dir, 'klf14_b6ntac_training')
training_non_overlap_data_dir = os.path.join(root_data_dir, 'klf14_b6ntac_training_non_overlap')
training_augmented_dir = os.path.join(root_data_dir, 'klf14_b6ntac_training_augmented')
saved_models_dir = os.path.join(root_data_dir, 'saved_models')
saved_contour_model_basename = 'klf14_b6ntac_exp_0006_cnn_contour' # contour
saved_dmap_model_basename = 'klf14_b6ntac_exp_0007_cnn_dmap' # dmap
contour_model_name = saved_contour_model_basename + '*.h5'
dmap_model_name = saved_dmap_model_basename + '*.h5'
# load model weights for each fold
contour_model_files = glob.glob(os.path.join(saved_models_dir, contour_model_name))
dmap_model_files = glob.glob(os.path.join(saved_models_dir, dmap_model_name))
contour_n_folds = len(contour_model_files)
dmap_n_folds = len(dmap_model_files)
# load k-fold sets that were used to train the models (we assume they are the same for contours and dmaps)
saved_contour_model_kfold_filename = os.path.join(saved_models_dir, saved_contour_model_basename + '_info.pickle')
with open(saved_contour_model_kfold_filename, 'rb') as f:
aux = pickle.load(f)
im_file_list = aux['file_list']
idx_test_all = aux['idx_test_all']
# correct home directory if we are in a different system than what was used to train the models
im_file_list = cytometer.data.change_home_directory(im_file_list, '/users/rittscher/rcasero', home, check_isfile=True)
'''Load data and visualise results
'''
fold_i = 0
# split the data into training and testing datasets
im_test_file_list, _ = cytometer.data.split_list(im_file_list, idx_test_all[fold_i])
# load im, seg and mask datasets
test_datasets, _, _ = cytometer.data.load_datasets(im_test_file_list, prefix_from='im',
prefix_to=['im', 'seg', 'mask'], nblocks=2)
im_test = test_datasets['im']
seg_test = test_datasets['seg']
mask_test = test_datasets['mask']
del test_datasets
# list of model files to inspect
contour_model_files = glob.glob(os.path.join(saved_models_dir, contour_model_name))
dmap_model_files = glob.glob(os.path.join(saved_models_dir, dmap_model_name))
contour_model_file = contour_model_files[fold_i]
dmap_model_file = dmap_model_files[fold_i]
# load models
contour_model = keras.models.load_model(contour_model_file)
dmap_model = keras.models.load_model(dmap_model_file)
# set input layer to size of test images
contour_model = cytometer.models.change_input_size(contour_model, batch_shape=(None,) + im_test.shape[1:])
dmap_model = cytometer.models.change_input_size(dmap_model, batch_shape=(None,) + im_test.shape[1:])
# visualise results
i = 0
# i = 18
# run image through network
contour_test_pred = contour_model.predict(im_test[i, :, :, :].reshape((1,) + im_test.shape[1:]))
dmap_test_pred = dmap_model.predict(im_test[i, :, :, :].reshape((1,) + im_test.shape[1:]))
# compute mean curvature from dmap
_, mean_curvature, _, _ = principal_curvatures_range_image(dmap_test_pred[0, :, :, 0], sigma=10)
# multiply mean curvature by estimated contours
contour_weighted = contour_test_pred[0, :, :, 1] * mean_curvature
# rough segmentation of inner areas
labels = (contour_weighted <= 0).astype('uint8')
# label areas with a different label per connected area
labels = measure.label(labels)
# remove very small labels (noise)
labels_prop = measure.regionprops(labels)
# iterate over every labelled region (regionprops is 0-indexed over labels 1..N)
for j in range(len(labels_prop)):
    # label of region under consideration is not the same as index j
    lab = labels_prop[j]['label']
    if labels_prop[j]['area'] < 50:
        labels[labels == lab] = 0
# extend labels using watershed
# labels_ext = watershed(-dmap_test_pred[0, :, :, 0], labels)
labels_ext = watershed(mean_curvature, labels)
# extract borders of watershed regions for plots
labels_borders = borders(labels_ext)
# dilate borders for easier visualization
kernel = np.ones((3, 3), np.uint8)
labels_borders = cv2.dilate(labels_borders.astype(np.uint8), kernel=kernel) > 0
# add borders as coloured curves
im_test_r = im_test[i, :, :, 0].copy()
im_test_g = im_test[i, :, :, 1].copy()
im_test_b = im_test[i, :, :, 2].copy()
im_test_r[labels_borders] = 0.0
im_test_g[labels_borders] = 1.0
im_test_b[labels_borders] = 0.0
im_borders = np.concatenate((np.expand_dims(im_test_r, axis=2),
np.expand_dims(im_test_g, axis=2),
np.expand_dims(im_test_b, axis=2)), axis=2)
# plot results
plt.clf()
plt.subplot(331)
plt.imshow(im_test[i, :, :, :])
plt.title('histology, i = ' + str(i))
plt.subplot(332)
plt.imshow(contour_test_pred[0, :, :, 1])
plt.title('predicted contours')
plt.subplot(333)
plt.imshow(dmap_test_pred[0, :, :, 0])
plt.title('predicted dmap')
plt.subplot(334)
plt.imshow(mean_curvature)
plt.title('dmap\'s mean curvature')
plt.subplot(335)
plt.imshow(contour_weighted)
plt.title('contour * curvature')
plt.subplot(336)
plt.imshow(labels)
plt.title('labels')
plt.subplot(337)
plt.imshow(labels_ext)
plt.title('watershed on labels')
plt.subplot(338)
plt.imshow(im_borders)
plt.title('histology with watershed borders')
|
{"hexsha": "43c98ef0054814722ea54b865ebf22c12d8abb62", "size": 6720, "ext": "py", "lang": "Python", "max_stars_repo_path": "scripts/klf14_b6ntac_exp_0009_combine_dmap_contour_estimates.py", "max_stars_repo_name": "rcasero/cytometer", "max_stars_repo_head_hexsha": "d76e58fa37f83f6a666d556ba061530d787fcfb2", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-09T10:18:26.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-09T10:18:26.000Z", "max_issues_repo_path": "scripts/klf14_b6ntac_exp_0009_combine_dmap_contour_estimates.py", "max_issues_repo_name": "rcasero/cytometer", "max_issues_repo_head_hexsha": "d76e58fa37f83f6a666d556ba061530d787fcfb2", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scripts/klf14_b6ntac_exp_0009_combine_dmap_contour_estimates.py", "max_forks_repo_name": "rcasero/cytometer", "max_forks_repo_head_hexsha": "d76e58fa37f83f6a666d556ba061530d787fcfb2", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.9393939394, "max_line_length": 118, "alphanum_fraction": 0.7605654762, "include": true, "reason": "import numpy", "num_tokens": 1722}
|
[STATEMENT]
lemma fMin_finsert[simp]: "fMin (finsert x A) = (if A = {||} then x else min x (fMin A))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. fMin (finsert x A) = (if A = {||} then x else min x (fMin A))
[PROOF STEP]
by transfer simp
|
{"llama_tokens": 104, "file": null, "length": 1}
|
///////////////////////////////////////////////////////////////////////////////
// Copyright Christopher Kormanyos 2015.
// Copyright Paul Bristow 2015.
// Distributed under the Boost Software License,
// Version 1.0. (See accompanying file LICENSE_1_0.txt
// or copy at http://www.boost.org/LICENSE_1_0.txt)
//! \file
//!\brief Tests for the fixed_point basic narrowing constructors.
#define BOOST_TEST_MODULE test_negatable_basic_narrowing_constructors
#define BOOST_LIB_DIAGNOSTIC
#include <boost/fixed_point/fixed_point.hpp>
#include <boost/test/included/unit_test.hpp>
BOOST_AUTO_TEST_CASE(test_negatable_basic_narrowing_constructors)
{
bool result = true;
{
// This fixed-point negatable type has only 2 IntegralRange digits.
// For all practical purposes, all constructions and assignments
// must be explicit.
//
// For example, the following should fail.
// fixed_point_type x = std::uint8_t(1U);
// But the following should succeed.
// fixed_point_type x(std::uint8_t(1U));
typedef boost::fixed_point::negatable<2, -5> fixed_point_type;
fixed_point_type x(std::uint8_t(1U));
x = fixed_point_type(1U);
x = static_cast<fixed_point_type>(1U);
result &= (x == 1U);
}
{
typedef boost::fixed_point::negatable<8, -7> fixed_point_type;
// This is OK because 8 range digits in fixed_point_type
// allow for non-narrowing conversion to boost::uint8_t.
fixed_point_type x = boost::uint8_t(1U);
// Here we require explicit construction from 16-bit
    // unsigned integer boost::uint16_t.
// In other words,
// fixed_point_type z = boost::uint16_t(1U);
// will not work because uint16_t has more
// digits than the IntegralRange digits of y.
// But using explicit construction, this code
// sequence is OK.
fixed_point_type y(boost::uint16_t(1U));
result &= (x == 1U);
result &= (y == 1U);
}
{
typedef boost::fixed_point::negatable<16, -15> fixed_point_type;
// This is OK because 16 range digits in fixed_point_type
// allow for non-narrowing conversion to boost::uint16_t.
fixed_point_type x = boost::uint16_t(1U);
// This is OK because conversion from 8-bit unsigned
// integer to fixed_point_type is non-narrowing anyway.
fixed_point_type y = boost::uint8_t(1U);
// Here we require explicit construction from 32-bit
    // unsigned integer boost::uint32_t.
// In other words,
// fixed_point_type z = boost::uint32_t(1U);
// will not work because uint32_t has more
// digits than the IntegralRange digits of z.
// But using explicit construction, this code
// sequence is OK.
fixed_point_type z(boost::uint32_t(1U));
result &= (x == 1U);
result &= (y == 1U);
result &= (z == 1U);
}
{
typedef boost::fixed_point::negatable<64, -63> fixed_point_type;
fixed_point_type x;
// These are all OK because x has 64 IntegralRange digits.
x = boost::uint8_t(1U);
x = boost::uint16_t(1U);
x = boost::uint32_t(1U);
x = boost::uint64_t(1U);
result &= (x == 1U);
}
// If the code compiles, then the test passes.
BOOST_CHECK(result);
}
|
{"hexsha": "39c4fb9c432d603ea7c36beaa3d13b79e9e37564", "size": 3186, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "test/test_negatable_basic_narrowing_constructors.cpp", "max_stars_repo_name": "BoostGSoC15/fixed-point", "max_stars_repo_head_hexsha": "d71b4a622ded821a2429d8d857097441c2a10246", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "test/test_negatable_basic_narrowing_constructors.cpp", "max_issues_repo_name": "BoostGSoC15/fixed-point", "max_issues_repo_head_hexsha": "d71b4a622ded821a2429d8d857097441c2a10246", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "test/test_negatable_basic_narrowing_constructors.cpp", "max_forks_repo_name": "BoostGSoC15/fixed-point", "max_forks_repo_head_hexsha": "d71b4a622ded821a2429d8d857097441c2a10246", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.5, "max_line_length": 79, "alphanum_fraction": 0.6641556811, "num_tokens": 855}
|
'''Core functionality'''
from __future__ import print_function, division
import os, sys, glob, numpy as np, matplotlib, scipy, time
from scipy import stats, interpolate, optimize
from math import pi
import numpy.lib.recfunctions as rf
import mla
from mla.spectral import *
from mla.tools import *
from mla.timing import *
import scipy.stats
from copy import deepcopy
try:
import cPickle as pickle
except ImportError:
import pickle
def build_bkg_spline(data , bins=np.linspace(-1.0, 1.0, 501) , file_name = None):
    ''' Build the declination background spline.
    args:
        data: background-like data set with a 'dec' field (in radians).
        bins: the sin(dec) bins used to build the histogram.
        file_name (optional): file name prefix for saving the spline. Default is not saving the spline.
    return:
        spline of the log background pdf as a function of sin(dec).
    '''
sin_dec = np.sin(data['dec'])
hist, bins = np.histogram(sin_dec,
bins=bins,
density=True
)
bg_p_dec = interpolate.InterpolatedUnivariateSpline(bins[:-1]+np.diff(bins)/2.,
np.log(hist),
k=2)
if file_name is not None:
with open(file_name+"1d.pickle", 'wb') as f:
pickle.dump(bg_p_dec, f)
return bg_p_dec
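# Minimal usage sketch (hypothetical `exp_data` with a 'dec' field in radians):
#   bkg_spline = build_bkg_spline(exp_data)
#   log_pdf = bkg_spline(np.sin(exp_data['dec']))  # log of the sin(dec) bkg pdf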
def scale_and_weight_trueDec(sim , source_dec , sampling_width = np.radians(1)):
    ''' Scale the Monte Carlo using trueDec.
    This is for calculating the expected signal given a spectrum.
    args:
        sim: Monte Carlo dataset
        source_dec: declination in radians
        sampling_width: the sampling width in rad
    returns:
        reduced_sim: scaled simulation set with only events within the sampling width
    '''
sindec_dist = np.abs(source_dec-sim['trueDec'])
close = sindec_dist < sampling_width
reduced_sim = sim[close].copy()
omega = 2*np.pi * (np.min([np.sin(source_dec+sampling_width), 1]) -\
np.max([np.sin(source_dec-sampling_width), -1]))
reduced_sim['ow'] /= omega
return reduced_sim
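# Quick check of the solid-angle normalisation (hypothetical numbers): for
# source_dec = 0 and sampling_width = 1 deg, omega = 2*pi*(sin(1deg)-sin(-1deg))
# ≈ 0.219 sr, and each event's 'ow' is divided by this band's solid angle.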
def scale_and_weight_dec(sim , source_dec , sampling_width = np.radians(1)):
    ''' Scale the Monte Carlo using the reconstructed dec.
    This is for calculating the energy S/B.
    Note that 'ow' is not changed here; it is unnecessary.
    args:
        sim: Monte Carlo dataset
        source_dec: declination in radians
        sampling_width: the sampling width in rad
    returns:
        reduced_sim: simulation set with only events within the sampling width ('ow' unchanged)
    '''
sindec_dist = np.abs(source_dec-sim['dec'])
close = sindec_dist < sampling_width
reduced_sim = sim[close].copy()
return reduced_sim
def build_bkg_2dhistogram(data , bins=[np.linspace(-1,1,100),np.linspace(1,8,100)] , file_name = None):
    ''' Build the background 2d (sin(dec), logE) histogram. This is a preparation
    step for building the energy S/B for a custom spectrum.
    args:
        data: background data set
        bins: bin definition; the first entry is the sin(dec) binning and the second the logE binning.
        file_name (optional): save the background 2d histogram to file. Default is not saving.
    returns:
        bg_h, bins: the background histogram and the binning.
    '''
bg_w=np.ones(len(data),dtype=float)
bg_w/=np.sum(bg_w)
bg_h,xedges,yedges=np.histogram2d(np.sin(data['dec']),data['logE'],bins=bins
,weights=bg_w)
if file_name is not None:
np.save(file_name+"bkg2d.npy",bg_h)
return bg_h,bins
def create_interpolated_ratio( data, sim, gamma, bins=[np.linspace(-1,1,100),np.linspace(1,8,100)]):
    r'''Create the S/B ratio 2d histogram for a given gamma.
    args:
        data: background data
        sim: Monte Carlo simulation dataset
        gamma: spectral index
        bins: bin definition; the first entry is the sin(dec) binning and the second the logE binning.
    returns:
        ratio, bins: the S/B energy histogram and the binning.
    '''
# background
bins = np.array(bins)
bg_w = np.ones(len(data), dtype=float)
bg_w /= np.sum(bg_w)
bg_h, xedges, yedges = np.histogram2d(np.sin(data['dec']),
data['logE'],
bins=bins,
weights = bg_w)
# signal
sig_w = sim['ow'] * sim['trueE']**gamma
sig_w /= np.sum(sig_w)
sig_h, xedges, yedges = np.histogram2d(np.sin(sim['dec']),
sim['logE'],
bins=bins,
weights = sig_w)
ratio = sig_h / bg_h
for i in range(ratio.shape[0]):
# Pick out the values we want to use.
# We explicitly want to avoid NaNs and infinities
values = ratio[i]
good = np.isfinite(values) & (values>0)
x, y = bins[1][:-1][good], values[good]
# Do a linear interpolation across the energy range
spline = scipy.interpolate.UnivariateSpline(x, y,
k = 1,
s = 0,
ext = 3)
# And store the interpolated values
ratio[i] = spline(bins[1][:-1])
return ratio, bins
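# Sketch of how the interpolated ratio would be consumed (hypothetical event
# values; bins hold the edges, so searchsorted-1 gives the bin index):
#   i = np.searchsorted(bins[0], np.sin(event_dec)) - 1   # sin(dec) bin
#   j = np.searchsorted(bins[1], event_logE) - 1          # logE bin
#   sob = ratio[i, j]   # energy S/B weight for this event at the given gamma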
def build_energy_2dhistogram(data, sim, bins=[np.linspace(-1,1,100),np.linspace(1,8,100)], gamma_points = np.arange(-4, -1, 0.25), file_name = None):
    ''' Build the energy S/B 2d histograms for a power-law spectrum over a set of gammas.
    args:
        data: background data
        sim: Monte Carlo simulation dataset
        bins: bin definition; the first entry is the sin(dec) binning and the second the logE binning.
        gamma_points: array of spectral indices
    returns:
        sob_maps, gamma_points: 3d array whose first two axes are the S/B energy
        histogram and whose third axis runs over gamma_points, plus the gamma grid.
    '''
sob_maps = np.zeros((len(bins[0])-1,
len(bins[1])-1,
len(gamma_points)),
dtype = float)
for i, g in enumerate(gamma_points):
sob_maps[:,:,i], _ = create_interpolated_ratio(data, sim,g, bins )
if file_name is not None:
np.save(file_name+"2d.npy",sob_maps)
np.save("gamma_point_"+file_name,gamma_points)
return sob_maps, gamma_points
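# Minimal usage sketch (hypothetical names): precompute the maps once, then
# pick the slice closest to a trial gamma during a likelihood scan:
#   maps, gammas = build_energy_2dhistogram(data, sim)
#   k = np.argmin(np.abs(gammas - (-2.0)))
#   sob_map = maps[:, :, k]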
class LLH_point_source(object):
'''The class for point source'''
def __init__(self , ra , dec , data , sim , spectrum , signal_time_profile = None , background_time_profile = (0,1) , background = None, fit_position=False , bkg_bins=np.linspace(-1.0, 1.0, 501) , sampling_width = np.radians(1) , bkg_2dbins=[np.linspace(-1,1,100),np.linspace(1,8,100)] , sob_maps = None , gamma_points = np.arange(-4, -1, 0.25) ,bkg_dec_spline = None ,bkg_maps = None,file_name=None):
        ''' Constructor of the class
        args:
            ra: RA of the source in rad
            dec: declination of the source in rad
            data: the data (if no background/background histogram is supplied, it is also used to generate the background pdf)
            sim: Monte Carlo simulation
            spectrum: spectrum; either a BaseSpectrum object or the string name "PowerLaw"
            signal_time_profile: generic_profile object giving the signal time profile. Default is the same as background_time_profile.
            background_time_profile: generic_profile object, or a (start, end) tuple of times, giving the background time profile. Default is (0,1), which creates a uniform_profile from 0 to 1.
            background: background data used to build the background dec pdf and the energy S/B histogram. Default is None, in which case the data is used as background.
            fit_position: whether the position is a fit parameter. If True, all data is kept. Default is False.
            bkg_bins: the sin(dec) bins for the background pdf (as a function of sin(dec)).
            sampling_width: the sampling width (in rad) for the Monte Carlo simulation. Only simulation events within the sampling width are used. Default is 1 degree.
            bkg_2dbins: the sin(dec) and logE binning for the energy S/B histogram.
            sob_maps: if the spectrum is a PowerLaw, the user can supply a 3d array of (sin(dec), logE) histograms generated for different gammas. Default is None.
            gamma_points: the set of gammas for the PowerLaw energy weighting.
            bkg_dec_spline: the background pdf as a function of sin(dec), if the spline has already been built beforehand.
            bkg_maps: the background histogram, if it has already been built (only needed when the spectrum is user-defined).
        '''
if background is None:
self.background = data
else:
self.background = background
        try:
            self.data = rf.append_fields(data, 'sindec', np.sin(data['dec']), usemask=False)  # add a sin(dec) field for binned lookups
        except ValueError:  # sindec already exists
            self.data = data
self.energybins = bkg_2dbins
self.N = len(data) #The len of the data
self.fit_position = fit_position
        try:
            self.fullsim = rf.append_fields(sim, 'sindec', np.sin(sim['dec']), usemask=False)  # the full simulation set, used for the overall normalization of the energy S/B ratio
        except ValueError:  # sindec already exists
            self.fullsim = sim
if isinstance(background_time_profile,generic_profile):
self.background_time_profile = background_time_profile
else:
self.background_time_profile = uniform_profile(background_time_profile[0],background_time_profile[1])
if signal_time_profile is None:
self.signal_time_profile = deepcopy(self.background_time_profile)
else:
self.signal_time_profile = signal_time_profile
self.sample_size = 0
self.sampling_width = sampling_width
if bkg_dec_spline is None:
self.bkg_spline = build_bkg_spline(self.background , bins = bkg_bins,file_name=file_name)
        elif isinstance(bkg_dec_spline, str):
with open(bkg_dec_spline, 'rb') as f:
self.bkg_spline = pickle.load(f)
else:
self.bkg_spline = bkg_dec_spline
if spectrum == "PowerLaw":
self.gamma_point = gamma_points
self.gamma_point_prc = np.abs(gamma_points[1] - gamma_points[0])
if sob_maps is None:
self.ratio,self.gamma_point = build_energy_2dhistogram(self.background, sim ,bkg_2dbins ,gamma_points,file_name=file_name)
            elif isinstance(sob_maps, str):
self.ratio = np.load(sob_maps)
else:
self.ratio = sob_maps
self.update_position(ra,dec)
else:
self.spectrum = spectrum
if bkg_maps is None:
self.bg_h,self.energybins = build_bkg_2dhistogram(self.background , bins = bkg_2dbins,file_name=file_name)
            elif isinstance(bkg_maps, str):
self.bg_h = np.load(bkg_maps)
else:
self.bg_h = bkg_maps
self.update_position(ra,dec)
self.update_energy_histogram()
self.update_time_weight()
self.update_energy_weight()
return
def update_position(self, ra, dec):
r'''update the position of the point source
args:
ra: RA of the source in rad
dec: Declination of the source in rad
'''
self.ra = ra
self.dec = dec
self.edge_point = (np.searchsorted(self.energybins[0],np.sin(dec-self.sampling_width))-1,np.searchsorted(self.energybins[0],np.sin(dec+self.sampling_width))-1)
self.sim = scale_and_weight_trueDec(self.fullsim , dec , sampling_width = self.sampling_width)# Notice that this is for expected signal calculation
self.sim_dec = scale_and_weight_dec(self.fullsim , dec , sampling_width = self.sampling_width)# This is for Energy S/B ratio calculation
self.update_spatial()
return
def update_spatial(self):
r'''Calculating the spatial llh and drop data with zero spatial llh'''
signal = self.signal_pdf()
mask = signal!=0
        if not self.fit_position:
self.data = self.data[mask]
signal = signal[mask]
self.drop = self.N - mask.sum()
else:
self.drop = 0
self.spatial = signal/self.background_pdf()
return
def update_spectrum(self,spectrum):
r''' update the spectrum'''
self.spectrum = spectrum
self.update_energy_histogram()
self.update_energy_weight()
return
def cut_data(self , time_range ):
r'''Cut out data outside some time range
args:
time_range: array of len 2
'''
mask = (self.data['time']>time_range[0]) & (self.data['time']<=time_range[1])
self.data = self.data[mask]
self.N = len(self.data)
self.update_spatial()
self.update_time_weight()
self.update_energy_weight()
return
def update_data(self , data ):
r'''Change the data
args:
data: new data
'''
        try:
            self.data = rf.append_fields(data, 'sindec', np.sin(data['dec']), usemask=False)  # add a sin(dec) field for binned lookups
        except ValueError:  # sindec already exists
            self.data = data
self.N = len(data)
self.sample_size = 0
self.update_spatial()
self.update_time_weight()
self.update_energy_weight()
return
def signal_pdf(self):
        r'''Compute the signal spatial pdf
return:
Signal spatial pdf
'''
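        # A minimal note on the formula below: this is a circular 2D Gaussian
        # point-spread function,
        #   S_i = exp(-d_i^2 / (2*sigma_i^2)) / (2*pi*sigma_i^2),
        # with d_i the angular distance of event i from the source and
        # sigma_i the per-event angular error.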
distance = mla.tools.angular_distance(self.data['ra'],
self.data['dec'],
self.ra,
self.dec)
sigma = self.data['angErr']
return (1.0)/(2*np.pi*sigma**2) * np.exp(-(distance)**2/(2*sigma**2))
def background_pdf(self):
        r'''Compute the background spatial pdf
return:
background spatial pdf
'''
background_likelihood = (1/(2*np.pi))*np.exp(self.bkg_spline(np.sin(self.data['dec'])))
return background_likelihood
def update_background_time_profile(self,profile):
r'''Update the background time profile
args:
profile: The background time profile(generic_profile object)
'''
self.background_time_profile = profile
return
def update_signal_time_profile(self,profile):
r'''Update the signal time profile
args:
profile: The signal time profile(generic_profile object)
'''
self.signal_time_profile = profile
return
def update_time_weight(self):
r'''Update the time weighting'''
signal_lh_ratio = self.signal_time_profile.pdf(self.data['time'])
background_lh_ratio = self.background_time_profile.pdf(self.data['time'])
self.t_lh_ratio = np.nan_to_num(signal_lh_ratio/background_lh_ratio) #replace nan with zero
return
def update_energy_histogram(self):
        '''Energy weight calculation. This is slow if you choose a large sampling width.'''
sig_w=self.sim_dec['ow'] * self.spectrum(self.sim_dec['trueE'])
sig_w/=np.sum(self.fullsim['ow'] * self.spectrum(self.fullsim['trueE']))
sig_h,xedges,yedges=np.histogram2d(self.sim_dec['sindec'],self.sim_dec['logE'],bins=self.energybins,weights=sig_w)
with np.errstate(divide='ignore'):
ratio=sig_h/self.bg_h
for k in range(ratio.shape[0]):
values=ratio[k]
good=np.isfinite(values)&(values>0)
x,y=self.energybins[1][:-1][good],values[good]
if len(x) > 1:
spline=scipy.interpolate.UnivariateSpline(x,y,k=1,s=0,ext=3)
ratio[k]=spline(self.energybins[1][:-1])
elif len(x)==1:
ratio[k]=y
else:
ratio[k]=0
self.ratio=ratio
return
def update_energy_weight(self, gamma = None):
r'''
        Update the energy weight of the events. If the spectrum is a user-defined one, the first branch of the code runs and looks up the S/B histogram. If the spectrum is a PowerLaw object, the second branch runs.
        args:
            gamma: Only needed if the spectrum is a PowerLaw object and you want to evaluate the weight at a specific gamma instead of optimizing the weight over gamma.
'''
        if self.N == 0:  # if there is no data, do nothing
            return
        # First branch: runs if the spectrum is user-defined
        if self.ratio.ndim == 2:  # ThreeML style
            i = np.searchsorted(self.energybins[0], np.sin(self.data['dec'])) - 1
            j = np.searchsorted(self.energybins[1], self.data['logE']) - 1
            i[i < self.edge_point[0]] = self.edge_point[0]  # events outside the sampling width get the weight of the nearest non-zero sin(dec) bin
            i[i > self.edge_point[1]] = self.edge_point[1]
            self.energy = self.ratio[i, j]
        # Second branch: runs if the spectrum is a PowerLaw object
        elif self.ratio.ndim == 3:  # traditional style with PowerLaw spectrum and spline
sob_ratios = self.evaluate_interpolated_ratio()
sob_spline = np.zeros(len(self.data), dtype=object)
for i in range(len(self.data)):
spline = scipy.interpolate.UnivariateSpline(self.gamma_point,
np.log(sob_ratios[i]),
k = 3,
s = 0,
ext = 'raise')
sob_spline[i] = spline
with np.errstate(divide='ignore', invalid='ignore'):
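                # A short note on the quantity minimized below (sign-flipped):
                #   TS(gamma, ns) = 2 * [ sum_i ln( ns/N * (E_i*S_i*T_i - 1) + 1 )
                #                         + n_drop * ln(1 - ns/N) ],
                # where E_i, S_i, T_i are the energy, spatial and time S/B weights
                # and n_drop counts events dropped for zero spatial pdf.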
def inner_ts(parameter):
gamma = parameter[0]
ns = parameter[1]
e_lh_ratio = self.get_energy_sob(gamma, sob_spline)
ts = ( ns/self.N * (e_lh_ratio*self.spatial*self.t_lh_ratio - 1))+1
return -2*(np.sum(np.log(ts))+self.drop*np.log(1-ns/self.N))
if gamma is not None:
bounds= [[gamma, gamma],[0,self.N]]
self.gamma_best_fit = gamma
else:
bounds= [[self.gamma_point[0], self.gamma_point[-1]],[0,self.N]]
bf_params = scipy.optimize.minimize(inner_ts,
x0 = [-2,1],
bounds = bounds,
method = 'SLSQP',
)
self.energy = self.get_energy_sob(bf_params.x[0],sob_spline)
self.gamma_best_fit = bf_params.x[0]
self.ns_best_fit = bf_params.x[1]
return
def get_energy_sob(self, gamma, splines):
        r'''Only used if the spectrum is a PowerLaw object.
        args:
            gamma: The spectral index
            splines: The S/B spline for each event
        return:
            final_sob_ratios: Array of len(data); the energy weight of each event.
        '''
final_sob_ratios = np.ones_like(self.data, dtype=float)
for i, spline in enumerate(splines):
final_sob_ratios[i] = np.exp(spline(gamma))
return final_sob_ratios
def evaluate_interpolated_ratio(self):
        r'''Only used if the spectrum is a PowerLaw object; used to create the splines. Note that self.ratio here is a 3D array whose third dimension runs over gamma_points.
        return:
            2D array; the energy weight of each event at each gamma point.
        '''
i = np.searchsorted(self.energybins[0], np.sin(self.data['dec'])) - 1
j = np.searchsorted(self.energybins[1], self.data['logE']) - 1
return self.ratio[i,j]
def eval_llh(self):
r'''Calculating the llh using the spectrum'''
if self.N == 0:
return 0,0
ns = (self.sim['ow'] * self.spectrum(self.sim['trueE']) * self.signal_time_profile.effective_exposure() *24*3600).sum()
ts =( ns/self.N * (self.energy*self.spatial*self.t_lh_ratio - 1))+1
ts_value = 2*(np.sum(np.log(ts))+self.drop*np.log(1-ns/self.N))
#if ts_value < 0 or np.isnan(ts_value):
if np.isnan(ts_value) :
ns = 0
ts_value = 0
return ns,ts_value
def eval_llh_ns(self,ns):
r'''Calculating the llh with user-input ns'''
if self.N == 0:
return 0,0
ts =( ns/self.N * (self.energy*self.spatial*self.t_lh_ratio - 1))+1
ts_value = 2*(np.sum(np.log(ts))+self.drop*np.log(1-ns/self.N))
if np.isnan(ts_value):
ns = 0
ts_value = 0
return ns,ts_value
def eval_llh_fit_ns(self):
        r'''Calculating the llh with ns floating (deprecated)'''
if self.N == 0:
return 0,0
bounds= [[0, self.N ],]
def get_ts(ns):
ts =( ns/self.N * (self.energy*self.spatial*self.t_lh_ratio - 1))+1
return -2*(np.sum(np.log(ts))+self.drop*np.log(1-ns/self.N))
result = scipy.optimize.minimize(get_ts,
x0 = [1,],
bounds = bounds,
method = 'SLSQP',
)
self.fit_ns = result.x[0]
self.fit_ts = -1*result.fun
if np.isnan(self.fit_ts):
self.fit_ts = 0
self.fit_ns = 0
return self.fit_ns,self.fit_ts
def eval_llh_fit(self):
r'''Calculating the llh with scipy optimize result'''
ts =( self.ns_best_fit/self.N * (self.energy*self.spatial*self.t_lh_ratio - 1))+1
self.fit_ts = -2*(np.sum(np.log(ts))+self.drop*np.log(1-self.ns_best_fit/self.N))
return self.ns_best_fit,self.fit_ts
def get_fit_result(self):
        r'''Return the fit result. Only meaningful when the spectrum is a PowerLaw object.'''
return self.gamma_best_fit,self.ns_best_fit,self.fit_ts
def add_injection(self,sample):
r'''Add injected sample
args:
sample: The injection sample
'''
self.sample_size = len(sample)+self.sample_size
try:
sample = rf.append_fields(sample,'sindec',np.sin(sample['dec']),usemask=False)
        except ValueError:  # sindec already exists
pass
sample = rf.drop_fields(sample, [n for n in sample.dtype.names \
if not n in self.data.dtype.names])
self.data = np.concatenate([self.data,sample])
self.N = self.N+len(sample)
self.update_spatial()
self.update_time_weight()
self.update_energy_weight()
return
def remove_injection(self,update=True):
r'''remove injected sample
args:
            update: Whether to update all the weightings. Default is True.
'''
self.data = self.data[:len(self.data)-self.sample_size]
self.N = self.N-self.sample_size
self.sample_size = 0
if update:
self.update_spatial()
self.update_time_weight()
self.update_energy_weight()
return
def modify_injection(self,sample):
r'''modify injected sample
args:
sample:New sample
'''
self.remove_injection(update=False)
self.add_injection(sample)
return
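# A minimal usage sketch (not part of the module; variable names are
# hypothetical). `data` and `sim` are structured arrays with the fields used
# above ('ra', 'dec', 'logE', 'time', 'angErr'; the simulation additionally
# needs 'trueE' and 'ow'):
#
#     llh = LLH_point_source(ra=np.radians(77.36), dec=np.radians(5.69),
#                            data=data, sim=sim, spectrum="PowerLaw",
#                            background_time_profile=(55000, 55100))
#     ns_bf, ts = llh.eval_llh_fit()                # TS at the internally fitted (gamma, ns)
#     gamma_bf, ns_bf, ts = llh.get_fit_result()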
|
{"hexsha": "5e4a905511552a0d85c7c2290e86f7ee2a6f3cd5", "size": 24862, "ext": "py", "lang": "Python", "max_stars_repo_path": "mla/mla/core.py", "max_stars_repo_name": "jasonfan1997/umd_icecube_analysis_tutorial", "max_stars_repo_head_hexsha": "50bf3af27f81d719953ac225f199e733b5c0bddf", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mla/mla/core.py", "max_issues_repo_name": "jasonfan1997/umd_icecube_analysis_tutorial", "max_issues_repo_head_hexsha": "50bf3af27f81d719953ac225f199e733b5c0bddf", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mla/mla/core.py", "max_forks_repo_name": "jasonfan1997/umd_icecube_analysis_tutorial", "max_forks_repo_head_hexsha": "50bf3af27f81d719953ac225f199e733b5c0bddf", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.4266211604, "max_line_length": 407, "alphanum_fraction": 0.5759391843, "include": true, "reason": "import numpy,import scipy,from scipy", "num_tokens": 5541}
|
import numpy as np
import pytest
import pdffitx.modeling.fitobjs as fitobjs
from pdffitx.modeling.fitobjs import MyParser, GenConfig, ConConfig
from pdffitx.modeling.running import multi_phase
@pytest.mark.parametrize(
"meta",
[
None,
{'qmin': 1, 'qmax': 24, 'qdamp': 0.04, 'qbroad': 0.02}
]
)
def test_MyParser_parseDict(db, meta):
parser = MyParser()
parser.parseDict(db['Ni_gr'], meta=meta)
recipe = multi_phase([db['Ni_stru']], parser, fit_range=(0., 8., .1))
con = next(iter(recipe.contributions.values()))
gen = next(iter(con.generators.values()))
    # if meta is None, the generator will use the default values
assert gen.getQmin() == parser._meta.get('qmin', 0.0)
assert gen.getQmax() == parser._meta.get('qmax', 100. * np.pi)
assert gen.qdamp.value == parser._meta.get('qdamp', 0.0)
assert gen.qbroad.value == parser._meta.get('qbroad', 0.0)
@pytest.mark.parametrize(
"data",
[
np.zeros((1, 5)),
np.zeros((5, 5))
]
)
def test_MyParser_parseDict_error(data):
parser = MyParser()
with pytest.raises(ValueError):
parser.parseDict(data)
@pytest.mark.parametrize(
"data_key",
['Ni_pdfgetter']
)
@pytest.mark.parametrize(
"meta",
[None, {'qmax': 19}]
)
def test_MyParser_parsePDFGetter(db, data_key, meta):
pdfgetter = db[data_key]
parser = MyParser()
parser.parsePDFGetter(pdfgetter, meta=meta)
if meta:
for key, value in meta.items():
assert parser._meta[key] == value
@pytest.mark.parametrize(
"data_key",
['Ni_gr_file']
)
@pytest.mark.parametrize(
"meta",
[None, {'qmax': 19}]
)
def test_MyParser_parseFile(db, data_key, meta):
data_file = db[data_key]
parser = MyParser()
parser.parseFile(data_file, meta=meta)
if meta:
for key, value in meta.items():
assert parser._meta[key] == value
@pytest.mark.parametrize(
"mode,stype",
[
("xray", "X"),
("neutron", "N"),
("sas", "X")
]
)
def test_map_stype(mode, stype):
assert fitobjs.map_stype(mode) == stype
def test_map_stype_error():
with pytest.raises(ValueError):
fitobjs.map_stype("nray")
@pytest.mark.parametrize(
"stru_key,expect",
[
("Ni_stru_molecule", (False, True)),
("Ni_stru", (True, False)),
("Ni_stru_diffpy", (True, False))
]
)
def test_GenConfig(db, stru_key, expect):
# noinspection PyArgumentList
gen_config = GenConfig("G0", db[stru_key])
assert gen_config.periodic == expect[0]
assert gen_config.debye == expect[1]
@pytest.mark.parametrize(
"kwargs,expect",
[
({'res_eq': 'chiv'}, {'res_eq': 'chiv'})
]
)
def test_ConConfig(db, kwargs, expect):
parser = MyParser()
parser.parseFile(db['Ni_gr_file'])
stru = db['Ni_stru']
con_config = ConConfig(
name="con",
parser=parser,
fit_range=(0., 8., .1),
genconfigs=[GenConfig('G0', stru)],
eq="G0",
**kwargs
)
for key, value in expect.items():
assert getattr(con_config, key) == value
|
{"hexsha": "13518e89850f6b5bc35d048459a18b66902b7b26", "size": 3128, "ext": "py", "lang": "Python", "max_stars_repo_path": "pdffitx/tests/modeling/test_fitobjs.py", "max_stars_repo_name": "st3107/pdffitx", "max_stars_repo_head_hexsha": "c746f6dfaf5656e9bb62508a9847c00567b34bbe", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-10T11:59:34.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T11:59:34.000Z", "max_issues_repo_path": "pdffitx/tests/modeling/test_fitobjs.py", "max_issues_repo_name": "st3107/pdffitx", "max_issues_repo_head_hexsha": "c746f6dfaf5656e9bb62508a9847c00567b34bbe", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pdffitx/tests/modeling/test_fitobjs.py", "max_forks_repo_name": "st3107/pdffitx", "max_forks_repo_head_hexsha": "c746f6dfaf5656e9bb62508a9847c00567b34bbe", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-12-14T18:38:43.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T00:25:35.000Z", "avg_line_length": 24.4375, "max_line_length": 73, "alphanum_fraction": 0.6144501279, "include": true, "reason": "import numpy", "num_tokens": 873}
|
[STATEMENT]
lemma hm_update_op_refine: "(hm_update_op, h.update_op) \<in> hmr_rel \<rightarrow> nat_rel \<rightarrow> Id \<rightarrow> \<langle>hmr_rel\<rangle>nres_rel"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (hm_update_op, h.update_op) \<in> hmr_rel \<rightarrow> nat_rel \<rightarrow> Id \<rightarrow> \<langle>hmr_rel\<rangle>nres_rel
[PROOF STEP]
apply (intro fun_relI nres_relI)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>a a' aa a'a ab a'b. \<lbrakk>(a, a') \<in> hmr_rel; (aa, a'a) \<in> nat_rel; (ab, a'b) \<in> Id\<rbrakk> \<Longrightarrow> hm_update_op a aa ab \<le> \<Down> hmr_rel (h.update_op a' a'a a'b)
[PROOF STEP]
unfolding hm_update_op_def h.update_op_def mop_list_get_alt mop_list_set_alt
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>a a' aa a'a ab a'b. \<lbrakk>(a, a') \<in> hmr_rel; (aa, a'a) \<in> nat_rel; (ab, a'b) \<in> Id\<rbrakk> \<Longrightarrow> (case a of (pq, m) \<Rightarrow> \<lambda>i v. ASSERT (hm_valid (pq, m) i \<and> hmr_invar (pq, m)) \<bind> (\<lambda>_. ASSERT (pre_list_get (pq, i - 1)) \<bind> (\<lambda>_. RETURN (op_list_get pq (i - 1))) \<bind> (\<lambda>k. RETURN (pq, m(k \<mapsto> v))))) aa ab \<le> \<Down> hmr_rel (ASSERT (0 < a'a) \<bind> (\<lambda>_. ASSERT (pre_list_set ((a', a'a - 1), a'b)) \<bind> (\<lambda>_. RETURN (op_list_set a' (a'a - 1) a'b))))
[PROOF STEP]
apply refine_vcg
[PROOF STATE]
proof (prove)
goal (4 subgoals):
1. \<And>a a' aa a'a ab a'b x1 x2. \<lbrakk>(a, a') \<in> hmr_rel; (aa, a'a) \<in> nat_rel; (ab, a'b) \<in> Id; 0 < a'a; pre_list_set ((a', a'a - 1), a'b); a = (x1, x2)\<rbrakk> \<Longrightarrow> hm_valid (x1, x2) aa
2. \<And>a a' aa a'a ab a'b x1 x2. \<lbrakk>(a, a') \<in> hmr_rel; (aa, a'a) \<in> nat_rel; (ab, a'b) \<in> Id; 0 < a'a; pre_list_set ((a', a'a - 1), a'b); a = (x1, x2)\<rbrakk> \<Longrightarrow> hmr_invar (x1, x2)
3. \<And>a a' aa a'a ab a'b x1 x2. \<lbrakk>(a, a') \<in> hmr_rel; (aa, a'a) \<in> nat_rel; (ab, a'b) \<in> Id; 0 < a'a; pre_list_set ((a', a'a - 1), a'b); a = (x1, x2); hm_valid (x1, x2) aa \<and> hmr_invar (x1, x2)\<rbrakk> \<Longrightarrow> pre_list_get (x1, aa - 1)
4. \<And>a a' aa a'a ab a'b x1 x2. \<lbrakk>(a, a') \<in> hmr_rel; (aa, a'a) \<in> nat_rel; (ab, a'b) \<in> Id; 0 < a'a; pre_list_set ((a', a'a - 1), a'b); a = (x1, x2); hm_valid (x1, x2) aa \<and> hmr_invar (x1, x2); pre_list_get (x1, aa - 1)\<rbrakk> \<Longrightarrow> ((x1, x2(op_list_get x1 (aa - 1) \<mapsto> ab)), op_list_set a' (a'a - 1) a'b) \<in> hmr_rel
[PROOF STEP]
apply (auto simp: hmr_rel_defs map_distinct_upd_conv hm_valid_def hm_length_def)
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done
|
{"llama_tokens": 1331, "file": "Refine_Imperative_HOL_IICF_Impl_Heaps_IICF_Abs_Heapmap", "length": 5}
|
from multiprocessing.dummy import Pool as ThreadPool
from bilisupport import BANGUMIINFO, EXTERNAL_BANGUMI, RECOMMENDINFO
import numpy as np
from bert_serving.client import BertClient
import pickle
GPU_SERVER = '10.113.63.16'
bc = BertClient(ip=GPU_SERVER)
similarity_matrix = []
def cosin_similarity(x,y):
return np.dot(x,y)/(np.linalg.norm(x)*np.linalg.norm(y))
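# Note: cosin_similarity above is the standard cos(theta) = x.y / (|x|*|y|),
# so values near 1 mean nearly parallel tag vectors; get_tag_vector below
# mean-pools the 768-dim BERT embeddings of the individual tags into a single
# vector representing the whole tag list.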
def get_tag_vector(tag_str):
emd_sum = np.zeros(768)
embed = bc.encode(tag_str)
for i in embed:
emd_sum += i
mean_embed = emd_sum/len(tag_str)
return mean_embed
def compute_similarity(sid, ex_tag_dict):
    similarity_dict = {}
    sid_vector = get_tag_vector(kyoani_tag_dict[sid])  # encode the source anime's tags once instead of once per comparison
    for ex_anime in ex_tag_dict:
        if ex_anime == int(sid) or ex_tag_dict[ex_anime] == []:
            continue
        similarity_dict[ex_anime] = cosin_similarity(sid_vector, get_tag_vector(ex_tag_dict[ex_anime]))
similarity_matrix.append(similarity_dict)
top_recommend = sorted(similarity_dict.items(), key=lambda item: item[1], reverse=True)[:10]
print(len(top_recommend))
recommend_data = [{
'sid': int(sid),
'ref_sid': x[0],
'cos_sim': x[1]
} for x in top_recommend]
RECOMMENDINFO.insert_many(recommend_data)
if __name__ == "__main__":
kyoani_query = BANGUMIINFO.find({},{"sid":1,"tag_name":1})
ex_query = EXTERNAL_BANGUMI.find({},{"sid":1,"tag_name":1})
kyoani_tag_dict = {}
ex_tag_dict = {}
for i in kyoani_query:
kyoani_tag_dict[i['sid']] = i['tag_name']
for i in ex_query:
ex_tag_dict[i['sid']] = i['tag_name']
for kyoani in kyoani_tag_dict:
print('Computing',kyoani)
compute_similarity(kyoani,ex_tag_dict)
f = open('similarity_matrix.pkl', 'wb')
pickle.dump(similarity_matrix, f)
f.close()
|
{"hexsha": "7298f46481e614d536f772a294023096e21b341f", "size": 1803, "ext": "py", "lang": "Python", "max_stars_repo_path": "local/calculate_similarity.py", "max_stars_repo_name": "Ririkoo/DanmakuAnime", "max_stars_repo_head_hexsha": "8c8b93d80bc777f4789631526b04214564ab15d6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "local/calculate_similarity.py", "max_issues_repo_name": "Ririkoo/DanmakuAnime", "max_issues_repo_head_hexsha": "8c8b93d80bc777f4789631526b04214564ab15d6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "local/calculate_similarity.py", "max_forks_repo_name": "Ririkoo/DanmakuAnime", "max_forks_repo_head_hexsha": "8c8b93d80bc777f4789631526b04214564ab15d6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.7818181818, "max_line_length": 128, "alphanum_fraction": 0.6827509706, "include": true, "reason": "import numpy", "num_tokens": 500}
|
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize
class SIRPredict:
    '''
    Represents the SIR model: facilities to compute and predict the spread of
    a virus given initial conditions (S, I, R) and the rates beta and gamma.
    '''
def __init__(self, population: int, beta: float, gamma: float, t: int = 0, initsir: tuple = None):
self.beta = beta
self.gamma = gamma
        self.t = t  # initial simulation time
self.n = population
        if initsir is not None:
self.s, self.i, self.r = initsir
else:
self.reset_model()
def reset_model(self):
'''
Initialize and resets model to t=0.
Susceptible = population - 1
Infected = 1 (index case)
'''
self.s = self.n - 1
self.i = 1
self.r = 0
def solve(self):
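        # One forward-Euler step with dt = 1 (e.g. one day) of the classic SIR
        # system: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I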
dsdt = -(self.beta * self.s * self.i / self.n)
didt = (self.beta * self.s * self.i / self.n) - self.gamma * self.i
drdt = self.gamma * self.i
self.s += dsdt
self.i += didt
self.r += drdt
return self.s, self.i, self.r
def predict(self, day: int):
'''
Predict epidemic spread given the day, in the discrete domain.
:param day: Day from the beginning of spread
:return: Tuple with SIR values in day t
'''
        for _ in range(day - self.t):
            self.solve()
        self.t = day  # keep the internal clock consistent across repeated calls
        return self.s, self.i, self.r
def spread_predict(self, final_day: int):
'''
Predict epidemic spread for a timespan, in the discrete domain.
        :param final_day: Final day of prediction
:return: Numpy array with SIR values of predicted spread
'''
predicted = np.ndarray(shape=(final_day, 3))
for i in range(final_day):
predicted[i] = self.solve()
return predicted
class SIRInterpolation:
def __init__(self, sir_values: np.ndarray):
'''
Initializes the SIR model given an array of SIR functions first discrete derivative.
:param sir_values: Array with SIR values.
'''
self.data = sir_values
self.n = sir_values[0][0] + 1
def loss_rms(self, point, data):
        '''
        RMS error between the SIR curves integrated with a candidate
        (beta, gamma) point and the observed data.
        :param point: Tuple (beta, gamma) to evaluate
        :param data: Observed SIR values
        :return: float
        '''
size = len(data)
beta, gamma = point
def next_dt_SIR(t, sir_data):
s, i, r = sir_data
return (-beta*s*i/self.n, beta*s*i/self.n-gamma*i, gamma*i)
solution = solve_ivp(next_dt_SIR,
(0, size), #Integration interval
self.data[0], #Initial state, float array of 3
t_eval=np.arange(0, size, 1), #Discrete time interval
vectorized=True #Functions supports vectors
)
return np.sqrt( np.mean( (np.transpose(solution.y) - data) ** 2 ) )
def fit(self):
'''
Estimate β and ɣ given integrated values from SIR model (cumulative sums).
To fit the curve (thus getting β and ɣ) we must minimize the error using RMS.
:return: Estimated beta and gamma
'''
optimal = minimize(
self.loss_rms, #Loss function
[0.001, 0.001], #Initial guess
args = (self.data),
method = 'L-BFGS-B',
bounds = [(0.0001, 1.0), (0.0001, 1.0)] #β, ɣ bounds
)
self.beta, self.gamma = optimal.x
return self.beta, self.gamma
def main():
italian_population = 500000
import data_fetcher as df
raw_data = df.fetch_data()
data = df.process_data(raw_data, population=italian_population)
inter = SIRInterpolation(data)
beta, gamma = inter.fit()
print(beta, gamma)
model = SIRPredict(italian_population, beta, gamma, len(data), data[-1])
predicted_data = np.ndarray(shape=(100, 3))
for i, _ in enumerate(predicted_data):
predicted_data[i] = model.solve()
plt.plot(data)
plt.show()
plt.plot(np.vstack((data, predicted_data)))
plt.show()
if __name__ == "__main__":
main()
|
{"hexsha": "1f8c48da0281c53029338806ff0493eeafe4b0aa", "size": 4297, "ext": "py", "lang": "Python", "max_stars_repo_path": "sir.py", "max_stars_repo_name": "giuliocorradini/SIRVisualizer", "max_stars_repo_head_hexsha": "c6f19c8040defce4d163e16f2b7c7dc69e06e7f1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sir.py", "max_issues_repo_name": "giuliocorradini/SIRVisualizer", "max_issues_repo_head_hexsha": "c6f19c8040defce4d163e16f2b7c7dc69e06e7f1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sir.py", "max_forks_repo_name": "giuliocorradini/SIRVisualizer", "max_forks_repo_head_hexsha": "c6f19c8040defce4d163e16f2b7c7dc69e06e7f1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.6466666667, "max_line_length": 102, "alphanum_fraction": 0.5489876658, "include": true, "reason": "import numpy,from scipy", "num_tokens": 1043}
|
/**
* Copyright (C) 2012 ciere consulting, ciere.com
* Copyright (C) 2012 Jeroen Habraken
* Copyright (c) 2011 Joel de Guzman
* Copyright (C) 2011, 2012 Object Modeling Designs
*
* Distributed under the Boost Software License, Version 1.0. (See accompanying
* file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
*/
#ifndef CIERE_JSON_VALUE_IMPL_HPP
#define CIERE_JSON_VALUE_IMPL_HPP
#include "../value.hpp"
#include <boost/lexical_cast.hpp>
#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_convertible.hpp>
#include <boost/type_traits/is_same.hpp>
#include <boost/mpl/assert.hpp>
namespace ciere { namespace json
{
namespace detail
{
template< typename T >
struct extract
{
template<typename A>
static T get(A & v) { return boost::get<T>(v); }
template<typename A>
static const T get(const A & v) { return boost::get<T>(v); }
};
template<typename R>
struct convert
{
BOOST_MPL_ASSERT_MSG(
!(boost::is_same<R,null_t>::value)
, CANNOT_GET_AS_WITH_NULL_T
, (R)
);
template<typename T>
static R apply( T const & v
, typename boost::enable_if<boost::is_convertible<R,T> >::type* dummy=0 )
{
return v;
}
static R apply( string_t const & v )
{
return boost::lexical_cast<R>(v);
}
template<typename T>
static R apply( T const & v
, typename boost::disable_if<boost::is_convertible<R,T> >::type* dummy=0 )
{
throw get_as_error();
return R();
}
};
template<>
struct convert<string_t>
{
static string_t apply( string_t const & v)
{
return v;
}
static string_t apply( float_t const & v )
{
return boost::lexical_cast<std::string>(v);
}
static string_t apply( int_t const & v )
{
return boost::lexical_cast<std::string>(v);
}
static string_t apply( bool_t const & v)
{
return (v ? "true" : "false");
}
static string_t apply( null_t const & )
{
return "null";
}
template<typename T>
static string_t apply( T const & v )
{
throw get_as_error();
return "";
}
};
template<>
struct convert<bool_t>
{
template<typename T>
static bool_t apply( T const & v
, typename boost::enable_if<boost::is_convertible<bool_t,T> >::type* dummy=0 )
{
return v;
}
static bool_t apply( string_t const & v )
{
if( v == "true" ) return true;
else return false;
}
template<typename T>
static bool_t apply( T const & v
, typename boost::disable_if<boost::is_convertible<bool_t,T> >::type* dummy=0 )
{
throw get_as_error();
return false;
}
};
template<typename T>
struct convert_to : public boost::static_visitor<T>
{
template<typename V>
T operator()(V const & v) const
{
try
{
return convert<T>::apply(v);
}
catch(...)
{
throw get_as_error();
}
}
// if the types are the same, no conversion required
T operator()(T const & v) const
{
return v;
}
};
}
struct value::make_json_value
{
json::value& operator()(json::value& v) const { return v; }
const json::value& operator()(const json::value& v) const { return v; }
};
struct value::make_json_member
{
value::member operator()(object_t::value_type & v) const { return value::member(v); }
value::const_member operator()(const object_t::value_type & v) const { return value::const_member(v); }
};
// -------------------------------------------------------------------------------
// array handling
// -------------------------------------------------------------------------------
/**
* Add compatible type to the end of the array
*/
template< typename T >
value& value::add( T v )
{
push_back(v);
return *this;
}
/**
* Add a compatible type to the end of the array, functor style.
*/
template< typename T >
value& value::operator()( T v )
{
return add(v);
}
/**
* Add compatible type to the end of the array, stl style-ish
* Actually returns a reference the newly added value.
*/
template< typename T >
value& value::push_back( T v )
{
array_t* p_array = boost::get<array_t>(&base_type::get());
// if we aren't an array, we need to be an array
if( !p_array )
{
base_type::get() = array_t();
p_array = boost::get<array_t>(&base_type::get());
}
p_array->push_back( (json::value(v)) );
return p_array->back();
}
// -------------------------------------------------------------------------------
// object handling
// -------------------------------------------------------------------------------
template< typename T >
value& value::set( string_t const & name, T v )
{
object_t* p_object = boost::get<object_t>(&base_type::get());
// if this isn't an object type ... it needs to be
if( !p_object )
{
base_type::get() = object_t();
p_object = boost::get<object_t>(&base_type::get());
}
(*p_object)[name] = v;
return *this;
}
template< typename T >
value& value::operator()( string_t const & name, T v )
{
return set(name,v);
}
// -------------------------------------------------------------------------------
// Extract based on type
// -------------------------------------------------------------------------------
template< typename T >
T value::get()
{
return detail::extract<T>::get(base_type::get());
}
template< typename T >
const T value::get() const
{
return detail::extract<T>::get(base_type::get());
}
// -------------------------------------------------------------------------------
// -------------------------------------------------------------------------------
// -------------------------------------------------------------------------------
// Extract based on type and convert to requested type
// -------------------------------------------------------------------------------
template< typename T >
T value::get_as()
{
return boost::apply_visitor(detail::convert_to<T>(),base_type::get());
}
template< typename T >
const T value::get_as() const
{
return boost::apply_visitor(detail::convert_to<T>(),base_type::get());
}
// -------------------------------------------------------------------------------
// -------------------------------------------------------------------------------
}}
#endif // CIERE_JSON_VALUE_IMPL_HPP
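// A minimal usage sketch (not part of the header; assuming only the interface
// defined above -- set()/operator() for objects, add()/operator() for arrays,
// and get_as<> for conversion via convert<>):
//
//   ciere::json::value v;
//   v.set("answer", 42)("pi", 3.14);   // object handling: {"answer":42, "pi":3.14}
//   ciere::json::value a;
//   a(1)(2)(3);                        // array handling: [1, 2, 3]
//   // convert<string_t> turns scalars into text, e.g. an int_t 42 becomes "42"
//   // when requested through get_as<ciere::json::string_t>().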
|
{"hexsha": "3338313d48baae5c75233e81ad952ae54573bf77", "size": 7736, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "libs/spirit/example/qi/json/json/detail/value_impl.hpp", "max_stars_repo_name": "Abce/boost", "max_stars_repo_head_hexsha": "2d7491a27211aa5defab113f8e2d657c3d85ca93", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 85.0, "max_stars_repo_stars_event_min_datetime": "2015-02-08T20:36:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-14T20:38:31.000Z", "max_issues_repo_path": "libs/boost/libs/spirit/example/qi/json/json/detail/value_impl.hpp", "max_issues_repo_name": "flingone/frameworks_base_cmds_remoted", "max_issues_repo_head_hexsha": "4509d9f0468137ed7fd8d100179160d167e7d943", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 9.0, "max_issues_repo_issues_event_min_datetime": "2015-01-28T16:33:19.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T23:03:28.000Z", "max_forks_repo_path": "libs/boost/libs/spirit/example/qi/json/json/detail/value_impl.hpp", "max_forks_repo_name": "flingone/frameworks_base_cmds_remoted", "max_forks_repo_head_hexsha": "4509d9f0468137ed7fd8d100179160d167e7d943", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 27.0, "max_forks_repo_forks_event_min_datetime": "2015-01-28T16:33:30.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-12T05:04:39.000Z", "avg_line_length": 27.5302491103, "max_line_length": 111, "alphanum_fraction": 0.4383402275, "num_tokens": 1591}
|
file = matopen("temp.mat","w")
#title:"Lock Prediction"
#write(file,"title",title)
#length:6 -> {"Actual", "Our contention model", "LR+class", "quad+class", "Dec. tree regression", "Orig. Thomasian"};
#length:5 -> {"Our contention model", "LR+class", "quad+class", "Dec. tree regression", "Orig. Thomasian"};
lenOfLegends=length(legends)
write(file,"lenOfLegends",lenOfLegends)
#=
for i=1:lenOfLegends
write(file,string("legends",i),legends[i])
end
=#
lenOfXdata=length(Xdata)
write(file,"lenOfXdata",lenOfXdata)
for i=1:lenOfXdata
write(file,string("Xdata",i),float(Xdata[i]))
end
lenOfYdata=length(Ydata)
write(file,"lenOfYdata",lenOfYdata)
for i=1:lenOfYdata
write(file,string("Ydata",i),float(Ydata[i]))
end
#Xlabel = "TPS";
#Ylabel = "Total time spent acquiring row locks (seconds)";
#=
write(file,"Xlabel",Xlabel)
write(file,"Ylabel",Ylabel)
=#
lenOfMeanAbsError=length(meanAbsError)
write(file,"lenOfMeanAbsError",lenOfMeanAbsError)
for i=1:lenOfMeanAbsError
write(file,string("meanAbsError",i),float(meanAbsError[i]))
end
lenOfMeanRelError=length(meanRelError)
write(file,"lenOfMeanRelError",lenOfMeanRelError)
for i=1:lenOfMeanRelError
write(file,string("meanRelError",i),float(meanRelError[i]))
end
#length>0 -> {"Our contention model", "LR+class", "quad+class", "Dec. tree regression", "Orig. Thomasian"};
lenOfErrorHeader=length(errorHeader)
write(file,"lenOfErrorHeader",lenOfErrorHeader)
#=
for i=1:lenOfErrorHeader
write(file,string("errorHeader",i),errorHeader[i])
end
=#
lenOfExtra=length(extra)
write(file,"lenOfExtra",lenOfExtra)
for i=1:lenOfExtra
write(file,string("extra",i),extra[i])
end
close(file)
#=
file = matopen("lockP2.mat","w")
write(file,"title",title)
write(file,"legends",legends)
write(file,"Xdata",Xdata)
write(file,"Ydata",Ydata)
write(file,"Xlabel",Xlabel)
write(file,"Ylabel",Ylabel)
write(file,"meanAbsError",meanAbsError)
write(file,"meanRelError",meanRelError)
write(file,"errorHeader",errorHeader)
write(file,"extra",extra)
close(file)
=#
#=
matwrite("lockP.mat",{
"title" => title,
"legends" => legends,
"Xdata" => Xdata,
"Xlabel" => Xlabel,
"Ylabel" => Ylabel,
"meanAbsError" => meanAbsError,
"meanRelError" => meanRelError,
"errorHeader" => errorHeader,
"extra" => extra
})
=#
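#=
A minimal read-back sketch (assuming the MAT.jl package, which provides the
matopen/write used in this file as well as matread):
    vars = matread("temp.mat")
    n = Int(vars["lenOfXdata"])
    X = [vars[string("Xdata", i)] for i in 1:n]
=#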
|
{"hexsha": "33dfc7a47f0adad3217f99b9e8edc0285da867ab", "size": 2250, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "predict_mat/julia/write_mat_lockP.jl", "max_stars_repo_name": "barzan/dbseer", "max_stars_repo_head_hexsha": "3ad0718f665a5beffa65df7be8998986dcdbf3db", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 115, "max_stars_repo_stars_event_min_datetime": "2015-06-27T15:09:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-06T04:09:35.000Z", "max_issues_repo_path": "predict_mat/julia/write_mat_lockP.jl", "max_issues_repo_name": "barzan/dbseer", "max_issues_repo_head_hexsha": "3ad0718f665a5beffa65df7be8998986dcdbf3db", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2015-06-27T15:10:49.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-14T12:13:24.000Z", "max_forks_repo_path": "predict_mat/julia/write_mat_lockP.jl", "max_forks_repo_name": "barzan/dbseer", "max_forks_repo_head_hexsha": "3ad0718f665a5beffa65df7be8998986dcdbf3db", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 28, "max_forks_repo_forks_event_min_datetime": "2015-09-23T05:49:32.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-09T09:13:46.000Z", "avg_line_length": 24.7252747253, "max_line_length": 117, "alphanum_fraction": 0.7293333333, "num_tokens": 706}
|
#############################################################################
##
#W example.gi
##
## This file contains a sample of a GAP implementation file.
##
#############################################################################
##
#M SomeOperation( <val> )
##
## performs some operation on <val>
##
InstallMethod( SomeProperty,
"for left modules",
[ IsLeftModule ], 0,
function( M )
if IsFreeLeftModule( M ) and not IsTrivial( M ) then
return true;
fi;
TryNextMethod();
end );
#############################################################################
##
#F SomeGlobalFunction( )
##
## A global variadic function.
##
InstallGlobalFunction( SomeGlobalFunction, function( arg )
if Length( arg ) = 3 then
return arg[1] + arg[2] * arg[3];
elif Length( arg ) = 2 then
        return arg[1] - arg[2];
else
Error( "usage: SomeGlobalFunction( <x>, <y>[, <z>] )" );
fi;
end );
#
# A plain function.
#
SomeFunc := function(x, y)
local z, func, tmp, j;
z := x * 1.0;
y := 17^17 - y;
func := a -> a mod 5;
tmp := List( [1..50], func );
while y > 0 do
for j in tmp do
Print(j, "\n");
od;
repeat
y := y - 1;
until 0 < 1;
y := y -1;
od;
return z;
end;
|
{"hexsha": "941f7c586f9c8183d87b2bfe614d1b5dc2924bc7", "size": 1323, "ext": "gi", "lang": "GAP", "max_stars_repo_path": "analyzer/libs/pygments/tests/examplefiles/example.gi", "max_stars_repo_name": "oslab-swrc/juxta", "max_stars_repo_head_hexsha": "481cd6f01e87790041a07379805968bcf57d75f4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2016-01-06T07:01:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-12T15:53:20.000Z", "max_issues_repo_path": "analyzer/libs/pygments/tests/examplefiles/example.gi", "max_issues_repo_name": "oslab-swrc/juxta", "max_issues_repo_head_hexsha": "481cd6f01e87790041a07379805968bcf57d75f4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-04-02T00:42:29.000Z", "max_issues_repo_issues_event_max_datetime": "2019-04-02T00:42:29.000Z", "max_forks_repo_path": "analyzer/libs/pygments/tests/examplefiles/example.gi", "max_forks_repo_name": "oslab-swrc/juxta", "max_forks_repo_head_hexsha": "481cd6f01e87790041a07379805968bcf57d75f4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2016-01-06T07:01:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-29T11:43:16.000Z", "avg_line_length": 20.671875, "max_line_length": 77, "alphanum_fraction": 0.4270597128, "num_tokens": 353}
|
#!/usr/bin/env python
"""Generate temperatures along given order parameter."""
import argparse
import numpy as np
import pandas as pd
from scipy import interpolate
from scipy.optimize import minimize
def main():
args = parse_args()
aves = pd.read_csv(args.inp_filename, sep=" ")
if args.rtag:
aves = aves[aves[args.rtag] == args.rvalue].reset_index()
old_temps = aves["temp"]
old_ops = aves[args.tag]
# Prevent instabilities in minimization (need monotonically decreasing)
old_ops = old_ops.sort_values()[::-1]
interpolated_ops_f = interpolate.interp1d(
old_temps, old_ops, kind="linear", fill_value="extrapolate"
)
guess_temps = np.linspace(
old_temps[1], old_temps[len(old_temps) - 1], num=args.threads - 6
)
desired_ops = np.linspace(args.max_op - 1, 1, num=args.threads - 6)
new_temps = minimize(
sum_of_squared_errors, guess_temps, args=(desired_ops, interpolated_ops_f)
).x
new_temps.sort()
low_temps = [new_temps[0] - 3, new_temps[0] - 1, new_temps[0] - 0.3]
high_temp = new_temps[len(new_temps) - 1]
high_temps = [high_temp + 0.3, high_temp + 1, high_temp + 3]
new_temps = np.concatenate([low_temps, new_temps, high_temps])
np.set_printoptions(formatter={"float": "{:0.3f}".format}, linewidth=200)
new_temps = np.around(new_temps, decimals=3)
temps_string = ""
for temp in new_temps:
temps_string = temps_string + "{:.3f} ".format(temp)
print(temps_string)
def sum_of_squared_errors(temps, desired_ops, interpolated_ops_f):
squared_error = 0
for temp, op in zip(temps, desired_ops):
new_op = interpolated_ops_f(temp)
squared_error += (new_op - op) ** 2
return squared_error
def parse_args():
parser = argparse.ArgumentParser(
description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
)
parser.add_argument("inp_filename", type=str, help="Expectations filename")
parser.add_argument("tag", type=str, help="Order parameter tag")
parser.add_argument("max_op", type=float, help="Maximum value of order parameter")
parser.add_argument("threads", type=int, help="Number of threads/replicas")
parser.add_argument("--rtag", type=str, help="Tag to slice on")
parser.add_argument("--rvalue", type=float, help="Slice value")
return parser.parse_args()
if __name__ == "__main__":
main()
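# Example invocation (hypothetical file and tag names):
#   ./generate_ptmc_temps.py aves.csv numfullyboundstaples 12 30
# With threads=30 this fits 24 temperatures so the interpolated order parameter
# is evenly spaced between max_op-1 and 1, then pads 3 temperatures below and
# 3 above the fitted range.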
|
{"hexsha": "4a6cb8e4d4be91843f4059b0cfbd3f9dca5d5744", "size": 2426, "ext": "py", "lang": "Python", "max_stars_repo_path": "scripts/simutils/generate_ptmc_temps.py", "max_stars_repo_name": "cumberworth/origamipy", "max_stars_repo_head_hexsha": "cd005b140342dbe4c7708ab234102be54328648a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "scripts/simutils/generate_ptmc_temps.py", "max_issues_repo_name": "cumberworth/origamipy", "max_issues_repo_head_hexsha": "cd005b140342dbe4c7708ab234102be54328648a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scripts/simutils/generate_ptmc_temps.py", "max_forks_repo_name": "cumberworth/origamipy", "max_forks_repo_head_hexsha": "cd005b140342dbe4c7708ab234102be54328648a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.6944444444, "max_line_length": 86, "alphanum_fraction": 0.6883759275, "include": true, "reason": "import numpy,from scipy", "num_tokens": 648}
|
from PIL import Image
from yuv_reader import YUVReader
from reprojection import Reprojection
from optparse import OptionParser
from six.moves import cPickle
import numpy as np
import os
from dataset_preparation.ballet_camera import BalletCamera
def main():
parser = OptionParser()
parser.add_option("-o", "--camera-original", dest="cameraOriginalFile",
help="Original camera file to load intrinsic matrix from", metavar="FILE")
parser.add_option("-v", "--camera-virtual", dest="cameraVirtualFile",
help="Virtual camera file to load intrinsic matrix from", metavar="FILE")
parser.add_option("-i", "--input", dest="video",
help="video file to process", metavar="FILE")
parser.add_option("-r", "--result", dest="output",
help="Save output images on that path")
parser.add_option("-p", "--prefix", dest="prefix", default="depth",
help="Output prefix")
parser.add_option("-g", "--generate-images", dest="generate_images", default=False, action="store_true",
help="Generate PNG images for each depth plane")
# Depth sweeping options
parser.add_option("--depth_start", dest="depth_start", default=1,
help="Depth sweep start")
parser.add_option("--depth_stop", dest="depth_stop", default=40,
help="Depth sweep stop")
parser.add_option("--depth_step", dest="depth_step", default=0.5,
help="Depth sweep steep increment", metavar="FILE")
(options, args) = parser.parse_args()
# Create output dir if it does not exist
if not os.path.exists(options.output):
os.makedirs(options.output)
# Read camera parameters from cameraFile argument
print "Original camera file: %s" % options.cameraOriginalFile
print "Virtual camera file: %s" % options.cameraVirtualFile
cam_original = BalletCamera(options.cameraOriginalFile)
cam_virtual = BalletCamera(options.cameraVirtualFile)
# Get a frame for YUV video
print "Load YUV (i480) video file: %s" % options.video
video = YUVReader(options.video, (1024, 768))
frame = video.getRGBFrame(0)
# Reproject at all needed depths
print "Saving plane sweep volume: %s" % options.output
r = Reprojection(cam_original, cam_virtual, frame)
# Reproject at depths 1m to 200m in steps of 2 meters (total 100 images)
volume_array = []
i = 0
depth_from = options.depth_start
depth_to = options.depth_stop
depth_step = options.depth_step
print "Sweeping from (%s meters)->(%s meters) in steps of (%s meters)" % (depth_from, depth_to, depth_step)
for depth in np.arange(depth_from, depth_to, depth_step):
print "Calculate projection at depth: (%s meters)" % depth
result_prj = r.reproject(depth=depth)
volume_array.append(result_prj)
# if the generate image is selected then output an image for each depth plane
if options.generate_images:
im = Image.fromarray(result_prj, 'RGBA')
path = os.path.join(options.output, "%s_%03d_%03d.png" % (options.prefix, depth, i))
print "Saving image: %s" % path
im.save(path)
i += 1
print "Depth sweep plane calculation done."
# Save plane sweep volume in a cPickle file with stacked images
if not options.generate_images:
print "Saving pickle file..."
psw_volume = np.dstack(volume_array)
vol_name = os.path.join(options.output, "psw_%s_%s.pkl" % (depth_from, depth_to))
cPickle.dump(psw_volume, open(vol_name, "wb"))
print "Plane sweep volume file saved in %s" % vol_name
if __name__ == "__main__": main()
|
{"hexsha": "999cebca4d646e21095081c45d2a460c7f9f1e3c", "size": 3792, "ext": "py", "lang": "Python", "max_stars_repo_path": "reprojection/process_video.py", "max_stars_repo_name": "aTeK7/deep-stereo1.4", "max_stars_repo_head_hexsha": "dd2150097d0ed1c05791e4d80cf9b404f98a6880", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-07-25T15:05:21.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-19T16:45:51.000Z", "max_issues_repo_path": "reprojection/process_video.py", "max_issues_repo_name": "aTeK7/deep-stereo1.4", "max_issues_repo_head_hexsha": "dd2150097d0ed1c05791e4d80cf9b404f98a6880", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-04-16T21:27:29.000Z", "max_issues_repo_issues_event_max_datetime": "2019-08-01T04:24:26.000Z", "max_forks_repo_path": "reprojection/process_video.py", "max_forks_repo_name": "aTeK7/deep-stereo1.4", "max_forks_repo_head_hexsha": "dd2150097d0ed1c05791e4d80cf9b404f98a6880", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2016-08-14T10:57:31.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-29T11:29:07.000Z", "avg_line_length": 39.9157894737, "max_line_length": 111, "alphanum_fraction": 0.6584915612, "include": true, "reason": "import numpy", "num_tokens": 879}
|
// Copyright (c) 2001-2009 Hartmut Kaiser
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#if !defined(BOOST_SPIRIT_FORMAT_MANIP_MAY_05_2007_1202PM)
#define BOOST_SPIRIT_FORMAT_MANIP_MAY_05_2007_1202PM
#include <boost/spirit/home/qi/parse.hpp>
#include <boost/spirit/home/support/unused.hpp>
#include <boost/spirit/home/qi/stream/detail/match_manip.hpp>
#include <boost/mpl/assert.hpp>
#include <boost/utility/enable_if.hpp>
///////////////////////////////////////////////////////////////////////////////
namespace boost { namespace spirit { namespace qi
{
///////////////////////////////////////////////////////////////////////////
template <typename Expr>
inline detail::match_manip<Expr>
match(Expr const& xpr)
{
typedef spirit::traits::is_component<qi::domain, Expr> is_component;
// report invalid expression error as early as possible
BOOST_MPL_ASSERT_MSG(is_component::value,
xpr_is_not_convertible_to_a_parser, (Expr));
return qi::detail::match_manip<Expr>(xpr, unused, unused);
}
template <typename Expr, typename Attribute>
inline detail::match_manip<Expr, Attribute>
match(Expr const& xpr, Attribute& p)
{
typedef spirit::traits::is_component<qi::domain, Expr> is_component;
// report invalid expression error as early as possible
BOOST_MPL_ASSERT_MSG(is_component::value,
xpr_is_not_convertible_to_a_parser, (Expr, Attribute));
return qi::detail::match_manip<Expr, Attribute>(xpr, p, unused);
}
///////////////////////////////////////////////////////////////////////////
template <typename Expr, typename Skipper>
inline detail::match_manip<Expr, unused_type const, Skipper>
phrase_match(Expr const& xpr, Skipper const& s)
{
typedef
spirit::traits::is_component<qi::domain, Expr>
expr_is_component;
typedef
spirit::traits::is_component<qi::domain, Skipper>
skipper_is_component;
// report invalid expression errors as early as possible
BOOST_MPL_ASSERT_MSG(expr_is_component::value,
xpr_is_not_convertible_to_a_parser, (Expr, Skipper));
BOOST_MPL_ASSERT_MSG(skipper_is_component::value,
skipper_is_not_convertible_to_a_parser, (Expr, Skipper));
return qi::detail::match_manip<Expr, unused_type const, Skipper>(
xpr, unused, s);
}
template <typename Expr, typename Attribute, typename Skipper>
inline detail::match_manip<Expr, Attribute, Skipper>
phrase_match(Expr const& xpr, Attribute& p, Skipper const& s)
{
typedef
spirit::traits::is_component<qi::domain, Expr>
expr_is_component;
typedef
spirit::traits::is_component<qi::domain, Skipper>
skipper_is_component;
// report invalid expression errors as early as possible
BOOST_MPL_ASSERT_MSG(expr_is_component::value,
xpr_is_not_convertible_to_a_parser, (Expr, Attribute, Skipper));
BOOST_MPL_ASSERT_MSG(skipper_is_component::value,
skipper_is_not_convertible_to_a_parser, (Expr, Attribute, Skipper));
return qi::detail::match_manip<Expr, Attribute, Skipper>(xpr, p, s);
}
///////////////////////////////////////////////////////////////////////////
template<typename Char, typename Traits, typename Expr>
inline typename
enable_if<
spirit::traits::is_component<qi::domain, Expr>,
std::basic_istream<Char, Traits> &
>::type
operator>> (std::basic_istream<Char, Traits> &is, Expr& xpr)
{
typedef std::istream_iterator<Char, Char, Traits> input_iterator;
input_iterator f(is);
input_iterator l;
if (!qi::parse (f, l, xpr))
{
is.setstate(std::ios_base::failbit);
}
return is;
}
}}}
#endif
|
{"hexsha": "32c2bc5c13f6c47d8f606f5097dfce437e5f5369", "size": 4023, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "boost/spirit/home/qi/stream/match_manip.hpp", "max_stars_repo_name": "mike-code/boost_1_38_0", "max_stars_repo_head_hexsha": "7ff8b2069344ea6b0b757aa1f0778dfb8526df3c", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 14.0, "max_stars_repo_stars_event_min_datetime": "2017-03-07T00:14:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-09T00:59:22.000Z", "max_issues_repo_path": "boost/spirit/home/qi/stream/match_manip.hpp", "max_issues_repo_name": "xin3liang/platform_external_boost", "max_issues_repo_head_hexsha": "ac861f8c0f33538060790a8e50701464ca9982d3", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": 11.0, "max_issues_repo_issues_event_min_datetime": "2016-11-22T13:14:55.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-14T00:56:51.000Z", "max_forks_repo_path": "boost/spirit/home/qi/stream/match_manip.hpp", "max_forks_repo_name": "xin3liang/platform_external_boost", "max_forks_repo_head_hexsha": "ac861f8c0f33538060790a8e50701464ca9982d3", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": 6.0, "max_forks_repo_forks_event_min_datetime": "2016-11-07T13:38:45.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-04T12:13:31.000Z", "avg_line_length": 35.6017699115, "max_line_length": 80, "alphanum_fraction": 0.6216753666, "num_tokens": 920}
|
#!/usr/bin/env python
"""
Make plots showing how to calculate the p-value
"""
import matplotlib.pyplot as pl
from scipy.stats import norm
from scipy.special import erf
import numpy as np
mu = 0. # the mean, mu
sigma = 1. # standard deviation
x = np.linspace(-4, 4, 1000) # x
# set plot to render labels using latex
pl.rc('text', usetex=True)
pl.rc('font', family='serif')
pl.rc('font', size=14)
fig = pl.figure(figsize=(7,4), dpi=100)
# value of x for calculating p-value
Z = 1.233
y = norm.pdf(x, mu, sigma)
# plot pdfs
pl.plot(x, y, 'r')
pl.plot([-Z, -Z], [0., np.max(y)], 'k--')
pl.plot([Z, Z], [0., np.max(y)], 'k--')
pl.fill_between(x, np.zeros(len(x)), y, where=x<=-Z, facecolor='green', interpolate=True, alpha=0.6)
pl.fill_between(x, np.zeros(len(x)), y, where=x>=Z, facecolor='green', interpolate=True, alpha=0.6)
pvalue = 1.-erf(Z/np.sqrt(2.))
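# two-sided p-value: p = P(|Z| >= Z_obs) = 2*(1 - Phi(Z_obs)) = erfc(Z_obs/sqrt(2)),
# which is identical to the 1 - erf(Z_obs/sqrt(2)) used above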
ax = pl.gca()
ax.set_xlabel('$Z$', fontsize=14)
ax.set_ylabel('$p(Z)$', fontsize=14)
ax.set_xlim(-4, 4)
ax.grid(True)
ax.text(Z+0.1, 0.3, '$Z_{\\textrm{obs}} = 1.233$', fontsize=16)
ax.text(-3.6, 0.31, '$p$-value$= %.2f$' % pvalue, fontsize=18,
bbox={'facecolor': 'none', 'pad':12, 'ec': 'r'})
fig.subplots_adjust(bottom=0.15)
pl.savefig('../pvalue.pdf')
pl.show()
|
{"hexsha": "3a2824e653c6bf59617658c15069f733d19ba0fd", "size": 1242, "ext": "py", "lang": "Python", "max_stars_repo_path": "figures/scripts/pvalue.py", "max_stars_repo_name": "mattpitkin/GraWIToNStatisticsLectures", "max_stars_repo_head_hexsha": "09175a3a8cb3c9f0f15535d64deaef1275eac870", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-02-09T21:01:54.000Z", "max_stars_repo_stars_event_max_datetime": "2018-02-09T21:01:54.000Z", "max_issues_repo_path": "figures/scripts/pvalue.py", "max_issues_repo_name": "mattpitkin/GraWIToNStatisticsLectures", "max_issues_repo_head_hexsha": "09175a3a8cb3c9f0f15535d64deaef1275eac870", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "figures/scripts/pvalue.py", "max_forks_repo_name": "mattpitkin/GraWIToNStatisticsLectures", "max_forks_repo_head_hexsha": "09175a3a8cb3c9f0f15535d64deaef1275eac870", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.4339622642, "max_line_length": 100, "alphanum_fraction": 0.6312399356, "include": true, "reason": "import numpy,from scipy", "num_tokens": 430}
|
'''
Created on Jan 14, 2017
@author: safdar
'''
from operations.baseoperation import Operation
from sklearn.externals import joblib
from statistics import mean
import numpy as np
import cv2
from extractors.helper import buildextractor
import os
from utils.plotter import Image
import PIL
import time
from operations.vehicledetection.entities import Candidate, Box
class VehicleFinder(Operation):
ClassifierFile = 'ClassifierFile'
FeatureExtractors = 'FeatureExtractors'
SlidingWindow = 'SlidingWindow'
class Logging(object):
LogHits = 'LogHits'
LogMisses = 'LogMisses'
FrameRange = 'FrameRange'
Frames = 'Frames'
LogFolder = 'LogFolder'
HitsXRange = 'HitsXRange'
MissesXRange = 'MissesXRange'
class SlidingWindow(object):
DepthRangeRatio = 'DepthRangeRatio'
CenterShiftRatio = 'CenterShiftRatio'
SizeVariations = 'SizeVariations'
WindowRangeRatio = 'WindowRangeRatio'
StepRatio = 'StepRatio'
ConfidenceThreshold = 'ConfidenceThreshold'
# Constants:
AllWindowColor = [152, 0, 0]
WeakWindowColor = [200, 0, 0]
StrongWindowColor = [255, 0, 0]
# Outputs
FrameCandidates = "FrameCandidates"
def __init__(self, params):
Operation.__init__(self, params)
self.__classifier__ = joblib.load(params[self.ClassifierFile])
self.__windows__ = None
loggingcfg = params[self.Logging.__name__]
self.__is_logging_hits__ = loggingcfg[self.Logging.LogHits]
self.__is_logging_misses__ = loggingcfg[self.Logging.LogMisses]
self.__log_folder__ = os.path.join(loggingcfg[self.Logging.LogFolder], time.strftime('%m-%d-%H-%M-%S'))
self.__hits_folder__ = os.path.join(self.__log_folder__, 'Hits')
self.__misses_folder__ = os.path.join(self.__log_folder__, 'Misses')
self.__frames_to_log__ = loggingcfg[self.Logging.Frames]
self.__frame_range_to_log__ = loggingcfg[self.Logging.FrameRange]
self.__hits_x_range_to_log__ = loggingcfg[self.Logging.HitsXRange]
assert self.__hits_x_range_to_log__ is None or len(self.__hits_x_range_to_log__) == 2
self.__misses_x_range_to_log__ = loggingcfg[self.Logging.MissesXRange]
assert self.__misses_x_range_to_log__ is None or len(self.__misses_x_range_to_log__) == 2
if self.__is_logging_hits__:
if not os.path.isdir(self.__hits_folder__):
os.makedirs(self.__hits_folder__, exist_ok=True)
if self.__is_logging_misses__:
if not os.path.isdir(self.__misses_folder__):
os.makedirs(self.__misses_folder__, exist_ok=True)
# Feature Extractors
extractorsequence = params[self.FeatureExtractors]
self.__feature_extractor__ = buildextractor(extractorsequence)
self.__frame_candidates__ = None
def islogginghits(self):
return self.__is_logging_hits__==1
def isloggingmisses(self):
return self.__is_logging_misses__==1
def isframewithrange(self, frame):
return (self.__frame_range_to_log__ is None or frame.framenumber() in range(*self.__frame_range_to_log__)) and \
(self.__frames_to_log__ is None or frame.framenumber() in self.__frames_to_log__)
def log(self, folder, window, i, j, frame):
if self.isframewithrange(frame):
windowdumpfile = os.path.join(folder, "{:04d}_{:02d}_{:02d}.png".format(frame.framenumber(), i, j))
towrite = PIL.Image.fromarray(window)
towrite.save(windowdumpfile)
def isWindowMissInRange(self, boundary):
(x1, x2, _, _) = boundary
if self.__misses_x_range_to_log__ is not None:
return x1 in range(*self.__misses_x_range_to_log__) and x2 in range(*self.__misses_x_range_to_log__)
return True
def isWindowHitInRange(self, boundary):
(x1, x2, _, _) = boundary
if self.__hits_x_range_to_log__ is not None:
return x1 in range(*self.__hits_x_range_to_log__) and x2 in range(*self.__hits_x_range_to_log__)
return True
def __processupstream__(self, original, latest, data, frame):
x_dim, y_dim, xy_avg = latest.shape[1], latest.shape[0], int(mean(latest.shape[0:2]))
slidingwindowconfig = self.getparam(self.SlidingWindow.__name__)
if self.__windows__ is None:
self.__continuity_threshold__ = slidingwindowconfig[self.SlidingWindow.ConfidenceThreshold]
self.__windows__ = self.generatewindows(slidingwindowconfig, x_dim, y_dim, xy_avg)
self.__window_range_ratio__ = slidingwindowconfig[self.SlidingWindow.WindowRangeRatio]
self.__window_range__ = [int(xy_avg * r) for r in self.__window_range_ratio__]
# Perform search:
image = np.copy(latest)
weak_candidates = []
strong_candidates = []
for i, scan in enumerate(self.__windows__):
for j, box in enumerate(scan):
(x1, x2, y1, y2) = box.boundary()
snapshot = image[y1:y2,x1:x2,:]
if np.min(snapshot) == 0 and np.max(snapshot)==0:
continue
window = snapshot.astype(np.float32)
                if np.max(window) == 0:
                    print("Warning: all-zero window at scan {}, box {}".format(i, j))
                features = self.__feature_extractor__.extract(window)
                try:
                    label = self.__classifier__.predict([features])
                except ValueError:
                    print("Warning: classifier failed on window at scan {}, box {}".format(i, j))
                    continue
                score = self.__classifier__.decision_function([features])[0] if hasattr(self.__classifier__, "decision_function") else None
if label == 1 or label == [1]:
if score is None or score > self.__continuity_threshold__:
strong_candidates.append(Candidate(box.center(), box.diagonal(), score))
if self.islogginghits() and self.isWindowHitInRange(box.boundary()):
self.log(self.__hits_folder__, snapshot, i, j, frame)
else:
weak_candidates.append(Candidate(box.center(), box.diagonal(), score))
else:
if self.isloggingmisses() and self.isWindowMissInRange(box.boundary()):
self.log(self.__misses_folder__, snapshot, i, j, frame)
self.__frame_candidates__ = strong_candidates
self.setdata(data, self.FrameCandidates, self.__frame_candidates__)
if (self.islogginghits() or self.isloggingmisses()) and self.isframewithrange(frame):
imagedumpfile = os.path.join(self.__log_folder__, "{:04d}.png".format(frame.framenumber()))
towrite = PIL.Image.fromarray(latest)
towrite.save(imagedumpfile)
if self.isplotting():
all_windows = [x for sublist in self.__windows__ for x in sublist]
# First show all windows being searched:
image_all = np.zeros_like(latest)
for scan in self.__windows__:
for box in scan:
(x1,x2,y1,y2) = box.boundary()
cv2.rectangle(image_all, (x1,y1), (x2,y2), self.AllWindowColor, 2)
if box.fitted():
cv2.putText(image_all,"~x~x~", (x1,y1), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, self.WeakWindowColor, 1)
# Then show all windows that were weak candidates:
image_weak = np.zeros_like(latest)
for candidate in weak_candidates:
(x1,x2,y1,y2) = candidate.boundary()
cv2.rectangle(image_weak, (x1,y1), (x2,y2), self.WeakWindowColor, 2)
if candidate.score() is not None:
cv2.putText(image_weak,"{:.2f}".format(candidate.score()), (x1,y1), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, self.WeakWindowColor, 1)
# Then show all windows that were strong candidates:
image_strong = np.copy(latest)
for candidate in strong_candidates:
(x1,x2,y1,y2) = candidate.boundary()
cv2.rectangle(image_strong, (x1,y1), (x2,y2), self.StrongWindowColor, 4)
if candidate.score() is not None:
cv2.putText(image_strong,"{:.2f}".format(candidate.score()), (x1,y1), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, self.StrongWindowColor, 1)
# Now superimpose all 3 frames onto the latest color frame:
todraw = cv2.addWeighted(image_strong, 1, image_all, 0.2, 0)
todraw = cv2.addWeighted(todraw, 1, image_weak, 0.5, 0)
# todraw = cv2.addWeighted(todraw, 1, image_strong, 1, 0)
self.__plot__(frame, Image("Vehicle Search & Hits (All/Weak/Strong = {}/{}/{})".format(
len(all_windows),
len(weak_candidates),
len(strong_candidates)),
todraw, None))
# Try group rectangles:
# cons = np.copy(latest)
# if len(self.__frame_candidates__)>0:
# grouped, weights = cv2.groupRectangles(list(zip(*self.__frame_candidates__))[0], 1, .2)
# for ((x1,x2,y1,y2), weight) in zip(grouped, weights):
# cv2.rectangle(cons, (x1,y1), (x2,y2), self.StrongWindowColor, 3)
# cv2.putText(cons,"{}".format(weight), (x1,y1), cv2.FONT_HERSHEY_COMPLEX_SMALL, 2, self.StrongWindowColor, 1)
# self.__plot__(frame, Image("Grouped", cons, None))
return latest
def generatewindows(self, cfg, x_dim, y_dim, xy_avg):
shift = cfg[self.SlidingWindow.CenterShiftRatio] * x_dim
depth_range_ratio = sorted(cfg[self.SlidingWindow.DepthRangeRatio], reverse=True)
horizon = min([int(y_dim * r) for r in depth_range_ratio])
print ("Horizon is at depth: {}".format(horizon))
window_range_ratio = cfg[self.SlidingWindow.WindowRangeRatio]
window_range = [int(xy_avg * r) for r in window_range_ratio]
print ("Window range: {}".format(window_range))
size_variations = cfg[self.SlidingWindow.SizeVariations]
grow_rate = int(np.absolute(window_range[1]-window_range[0])/(size_variations))
print ("Window grow rate (each side): {}".format(grow_rate))
slide_ratio = cfg[self.SlidingWindow.StepRatio]
center = (int((x_dim / 2) + shift), horizon)
print ("Center of vision: {}".format(center))
windows = []
for i in range(size_variations):
print ("Scan # {}".format(i))
windows.append([])
scanwidth = int(x_dim/2)
print ("\tScan width: {}".format(scanwidth))
boxwidth = window_range[0] + (i * grow_rate)
print ("\tBox width: {}".format(boxwidth))
center_box = Box(center, boxwidth)
if center_box is None:
print ("\tCenter box (OUTSIDE BOUNDS): {}".format(center_box))
continue
print ("\tCenter box: {}".format(center_box))
windows[i].append(center_box)
shifts_per_box = int(1 / slide_ratio)
boxshift = int(boxwidth * slide_ratio)
print ("\t\tBox shift: {}".format(boxshift))
numboxes = int(scanwidth / boxwidth) # Boxes each side of the center box
print ("\t\tNum boxes: {}".format(numboxes))
# Each box on left + right sides of center:
print ("\t\t\tShifts Per Box: {}".format(shifts_per_box))
for j in range(1, numboxes + 1):
print ("\t\t\tShifted Boxes # ({})".format('~'*j))
for k in range(0, shifts_per_box):
leftcenter = (center[0] - (j * boxwidth) - (k * boxshift), center[1])
if not leftcenter[0] in range(0,x_dim) or not leftcenter[1] in range(0,y_dim):
continue
left_box = Box(leftcenter, boxwidth, bounds=((0,x_dim),(0,y_dim)))
rightcenter = (center[0] + (j * boxwidth) + (k * boxshift), center[1])
if not rightcenter[0] in range(0,x_dim) or not rightcenter[1] in range(0,y_dim):
continue
                    right_box = Box(rightcenter, boxwidth, bounds=((0,x_dim),(0,y_dim)))
windows[i].append(left_box)
windows[i].append(right_box)
print ("\t\t\t\tShift # {}: <--{} {} {}-->".format(k, left_box, '~'*(k+1), right_box))
print ("\tTotal boxes in scan: {}".format(len(windows[i])))
return windows
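# Minimal construction sketch (all values below are illustrative; the real
# configuration dictionary is supplied by the surrounding pipeline):
# params = {
#     VehicleFinder.ClassifierFile: 'models/classifier.pkl',
#     VehicleFinder.FeatureExtractors: [...],
#     VehicleFinder.Logging.__name__: {
#         VehicleFinder.Logging.LogHits: 0, VehicleFinder.Logging.LogMisses: 0,
#         VehicleFinder.Logging.LogFolder: 'logs', VehicleFinder.Logging.Frames: None,
#         VehicleFinder.Logging.FrameRange: None, VehicleFinder.Logging.HitsXRange: None,
#         VehicleFinder.Logging.MissesXRange: None},
#     VehicleFinder.SlidingWindow.__name__: {...},
# }
# finder = VehicleFinder(params)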
|
{"hexsha": "b2f6dcd4d9731fb083e3202b305a212afc16001b", "size": 12753, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/operations/vehicledetection/vehiclefinder.py", "max_stars_repo_name": "safdark/advanced-lane-lines", "max_stars_repo_head_hexsha": "27edcc444ac532e84749d667fc579970d2059aff", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/operations/vehicledetection/vehiclefinder.py", "max_issues_repo_name": "safdark/advanced-lane-lines", "max_issues_repo_head_hexsha": "27edcc444ac532e84749d667fc579970d2059aff", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2017-01-01T12:12:57.000Z", "max_issues_repo_issues_event_max_datetime": "2017-01-06T03:40:49.000Z", "max_forks_repo_path": "src/operations/vehicledetection/vehiclefinder.py", "max_forks_repo_name": "safdark/advanced-lane-lines", "max_forks_repo_head_hexsha": "27edcc444ac532e84749d667fc579970d2059aff", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.4071146245, "max_line_length": 151, "alphanum_fraction": 0.6051909355, "include": true, "reason": "import numpy", "num_tokens": 3015}
|
# ==============================================================
# Author: Rodolfo Ferro
# Twitter: @FerroRodolfo
#
# ABOUT COPYING OR USING PARTIAL INFORMATION:
# This script has been originally created by Rodolfo Ferro.
# Any explicit usage of this script or its contents is granted
# according to the license provided and its conditions.
# ==============================================================
# -*- coding: utf-8 -*-
from sklearn.datasets import load_iris
import streamlit as st
import pandas as pd
import numpy as np
from iris import decision_tree
from figure import plotly_figure_2
data = load_iris()
iris = pd.DataFrame(data.data, columns=data.feature_names)
classes = data.target_names
models = {
    "Decision tree": decision_tree()
}
# Introduction section
st.title("Iris species prediction using scikit-learn and Streamlit")
st.write(
    """
    Welcome to this simple example that runs a trained scikit-learn
    model directly in Streamlit.
    """
)
# Data section
st.write(
    """
    Below are the data used.
    """
)
st.dataframe(iris)
# Plot section
st.write(
    """
    And a small chart generated with Plotly.
    """
)
fig = plotly_figure_2(iris)
st.plotly_chart(fig)
# Model selection
model_selector = st.sidebar.selectbox(
    "Select the model to use:",
    list(models.keys())
)
model = models[model_selector]
# Feature specification
st.write(
    """
    We specify the features:
    """
)
sepal_length = st.slider(
    "Sepal length", 0.0, 10.0, 5.0, 0.05
)
sepal_width = st.slider(
    "Sepal width", 0.0, 5.0, 2.5, 0.05
)
petal_length = st.slider(
    "Petal length", 0.0, 8.0, 4.0, 0.05
)
petal_width = st.slider(
    "Petal width", 0.0, 3.0, 1.5, 0.05
)
# Prediction
features = np.array([[sepal_length, sepal_width, petal_length, petal_width]])
prediction = model.predict(features)[0]
st.markdown(
    f"""
    According to the model's prediction, the corresponding class is:
    ### {classes[prediction]}
    """
)
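# To try this app locally (assuming streamlit is installed):
#     streamlit run app.py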
|
{"hexsha": "8cd3a133b2a8017ca73bfc1f948fde7c4b5b39da", "size": 2110, "ext": "py", "lang": "Python", "max_stars_repo_path": "app.py", "max_stars_repo_name": "RodolfoFerro/streamlit-example", "max_stars_repo_head_hexsha": "fd1e921a447fa8c25fd3f862456905b11bf4a3c6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2020-08-27T01:16:42.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T02:32:18.000Z", "max_issues_repo_path": "app.py", "max_issues_repo_name": "RodolfoFerro/streamlit-example", "max_issues_repo_head_hexsha": "fd1e921a447fa8c25fd3f862456905b11bf4a3c6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "app.py", "max_forks_repo_name": "RodolfoFerro/streamlit-example", "max_forks_repo_head_hexsha": "fd1e921a447fa8c25fd3f862456905b11bf4a3c6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-04-23T15:42:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-28T03:15:02.000Z", "avg_line_length": 22.4468085106, "max_line_length": 77, "alphanum_fraction": 0.6587677725, "include": true, "reason": "import numpy", "num_tokens": 585}
|
import os
import argparse
import cv2
import numpy as np
import sys
import time
from threading import Thread
import importlib.util
class VideoStream:
    """Camera object that controls video streaming from a webcam
    (or Picamera) in a background thread via OpenCV"""
    def __init__(self, resolution=(640, 480), framerate=30):
        # Initialize the camera image stream (device 0)
self.stream = cv2.VideoCapture(0)
ret = self.stream.set(cv2.CAP_PROP_FOURCC,
cv2.VideoWriter_fourcc(*'MJPG'))
ret = self.stream.set(3, resolution[0])
ret = self.stream.set(4, resolution[1])
# Read first frame from the stream
(self.grabbed, self.frame) = self.stream.read()
# Variable to control when the camera is stopped
self.stopped = False
def start(self):
# Start the thread that reads frames from the video stream
Thread(target=self.update, args=()).start()
return self
def update(self):
# Keep looping indefinitely until the thread is stopped
while True:
# If the camera is stopped, stop the thread
if self.stopped:
# Close camera resources
self.stream.release()
return
# Otherwise, grab the next frame from the stream
(self.grabbed, self.frame) = self.stream.read()
def read(self):
# Return the most recent frame
return self.frame
def stop(self):
# Indicate that the camera and thread should be stopped
self.stopped = True
# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
default='detect.tflite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
default='labelmap.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
default=0.5)
parser.add_argument('--resolution', help='Desired webcam resolution in WxH. If the webcam does not support the resolution entered, errors may occur.',
default='1280x720')
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
action='store_true')
args = parser.parse_args()
MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
min_conf_threshold = float(args.threshold)
resW, resH = args.resolution.split('x')
imW, imH = int(resW), int(resH)
use_TPU = args.edgetpu
pkg = importlib.util.find_spec('tflite_runtime')
if pkg:
from tflite_runtime.interpreter import Interpreter
if use_TPU:
from tflite_runtime.interpreter import load_delegate
else:
from tensorflow.lite.python.interpreter import Interpreter
if use_TPU:
from tensorflow.lite.python.interpreter import load_delegate
# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
# If user has specified the name of the .tflite file, use that name, otherwise use default 'edgetpu.tflite'
if (GRAPH_NAME == 'detect.tflite'):
GRAPH_NAME = 'edgetpu.tflite'
# Get path to current working directory
CWD_PATH = os.getcwd()
# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH, MODEL_NAME, GRAPH_NAME)
# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH, MODEL_NAME, LABELMAP_NAME)
# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
labels = [line.strip() for line in f.readlines()]
# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
del(labels[0])
# Load the Tensorflow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
interpreter = Interpreter(model_path=PATH_TO_CKPT,
experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
print(PATH_TO_CKPT)
else:
interpreter = Interpreter(model_path=PATH_TO_CKPT)
interpreter.allocate_tensors()
# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]
floating_model = (input_details[0]['dtype'] == np.float32)
input_mean = 127.5
input_std = 127.5
# Initialize frame rate calculation
frame_rate_calc = 1
freq = cv2.getTickFrequency()
# Initialize video stream
videostream = VideoStream(resolution=(imW, imH), framerate=30).start()
time.sleep(1)
def new_cy2(frame_og, contours, area_s):
    # Find the largest contour, record its bounding-box area, and draw the
    # box and its centroid on the frame. Inputs are passed explicitly
    # rather than read from globals, so the function is self-contained.
    if not contours:
        return
    areas = [cv2.contourArea(c) for c in contours]
    max_index = np.argmax(areas)
    cnt = contours[max_index]
    (x, y, w, h) = cv2.boundingRect(cnt)
    area_s.append(w * h)
    cx = int((w / 2) + x)
    cy = int((h / 2) + y)
    if w > 10 and h > 10:
        cv2.rectangle(frame_og, (x - 10, y - 10),
                      (x + w, y + h), (0, 255, 0), 2)
        cv2.circle(frame_og, (cx, cy), 10, (0, 0, 255), -1)
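# The per-frame detection loop is not present in this file. A minimal
# sketch of what it typically looks like with the objects set up above
# (the output tensor ordering follows the common SSD detect.tflite layout
# and should be treated as an assumption for any particular model):
# while True:
#     frame = videostream.read()
#     frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
#     frame_resized = cv2.resize(frame_rgb, (width, height))
#     input_data = np.expand_dims(frame_resized, axis=0)
#     if floating_model:
#         input_data = (np.float32(input_data) - input_mean) / input_std
#     interpreter.set_tensor(input_details[0]['index'], input_data)
#     interpreter.invoke()
#     boxes = interpreter.get_tensor(output_details[0]['index'])[0]
#     classes = interpreter.get_tensor(output_details[1]['index'])[0]
#     scores = interpreter.get_tensor(output_details[2]['index'])[0]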
|
{"hexsha": "b682c38c1dcf6ec765f7ff7a75100ac1c7e99dd4", "size": 5423, "ext": "py", "lang": "Python", "max_stars_repo_path": "1.night traffic/final.py", "max_stars_repo_name": "alex-coch/TDI-Machine-Learning", "max_stars_repo_head_hexsha": "975c87339a21038bcf1c811e382d3dbf52fd9995", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "1.night traffic/final.py", "max_issues_repo_name": "alex-coch/TDI-Machine-Learning", "max_issues_repo_head_hexsha": "975c87339a21038bcf1c811e382d3dbf52fd9995", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-12-07T08:13:40.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-08T18:51:16.000Z", "max_forks_repo_path": "1.night traffic/final.py", "max_forks_repo_name": "alex-coch/TDI-Machine-Learning", "max_forks_repo_head_hexsha": "975c87339a21038bcf1c811e382d3dbf52fd9995", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.6832298137, "max_line_length": 150, "alphanum_fraction": 0.665867601, "include": true, "reason": "import numpy", "num_tokens": 1329}
|
# Copyright 2021 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A deep MNIST classifier using DNNClassifier in tf.Estimator """
import numpy as np
import tensorflow as tf
from absl import flags, app, logging
def build_mnist():
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train.astype(np.int64)
    y_train = y_train.astype(np.int64)
    x_test = x_test.astype(np.int64)
    y_test = y_test.astype(np.int64)
return x_train, y_train, x_test, y_test
def build_config():
return tf.estimator.RunConfig(
keep_checkpoint_max=3,
save_checkpoints_steps=1000,
save_summary_steps=500,
log_step_count_steps=500
)
def build_feature_columns():
return [tf.feature_column.numeric_column("feature", shape=[28, 28])]
def build_classifier(feature_columns, run_config):
return tf.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[256, 128],
optimizer=tf.keras.optimizers.Adagrad(learning_rate=FLAGS.learning_rate, epsilon=1e-5),
n_classes=10,
dropout=FLAGS.dropout,
model_dir=FLAGS.model_dir,
config=run_config
)
def build_input_fn(x_train, y_train, x_test, y_test):
train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
{"feature": x_train},
y_train,
batch_size=FLAGS.batch_size,
shuffle=True, num_epochs=FLAGS.train_epoch)
eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
{"feature": x_test},
y_test,
batch_size=FLAGS.batch_size,
shuffle=False,
num_epochs=1)
return train_input_fn, eval_input_fn
def build_spec(train_input_fn, eval_input_fn):
train_spec = tf.estimator.TrainSpec(
input_fn=train_input_fn,
max_steps=FLAGS.max_steps,
hooks=[])
eval_spec = tf.estimator.EvalSpec(
input_fn=eval_input_fn,
steps=500,
throttle_secs=60)
return train_spec, eval_spec
def main(argv):
del argv
logging.set_verbosity("INFO")
run_config = build_config()
x_train, y_train, x_test, y_test = build_mnist()
feature_columns = build_feature_columns()
classifier = build_classifier(feature_columns, run_config)
train_input_fn, eval_input_fn = build_input_fn(x_train, y_train, x_test, y_test)
train_spec, eval_spec = build_spec(train_input_fn, eval_input_fn)
tf.estimator.train_and_evaluate(classifier, train_spec, eval_spec)
if run_config.task_type == "chief":
classifier.evaluate(eval_input_fn)
if __name__ == '__main__':
flags.DEFINE_string('model_dir', 'output', 'path of model output')
flags.DEFINE_float('dropout', 0.5, 'dropout of this model')
flags.DEFINE_float('learning_rate', 0.001, 'learning rate of this model')
flags.DEFINE_integer('train_epoch', 500, 'train epoch of this model')
flags.DEFINE_integer('batch_size', 64, 'batch size of this model')
    flags.DEFINE_integer('max_steps', 300000, 'max training steps of this model')
FLAGS = flags.FLAGS
app.run(main)
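# Example invocation (flag values are illustrative):
#     python mnist_estimator_distributed.py --model_dir=output --batch_size=64 --max_steps=10000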
|
{"hexsha": "f891215f25853666ba35a7c86bc68d34b358fe66", "size": 3703, "ext": "py", "lang": "Python", "max_stars_repo_path": "tony-examples/mnist-tensorflow/mnist_estimator_distributed.py", "max_stars_repo_name": "ashahab/TonY", "max_stars_repo_head_hexsha": "9bf6eec72e36ee8d8db295fa1be729cf7a780a97", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 645, "max_stars_repo_stars_event_min_datetime": "2018-09-13T03:51:08.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-28T14:46:38.000Z", "max_issues_repo_path": "tony-examples/mnist-tensorflow/mnist_estimator_distributed.py", "max_issues_repo_name": "ashahab/TonY", "max_issues_repo_head_hexsha": "9bf6eec72e36ee8d8db295fa1be729cf7a780a97", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 365, "max_issues_repo_issues_event_min_datetime": "2018-09-17T19:58:03.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-13T06:02:12.000Z", "max_forks_repo_path": "tony-examples/mnist-tensorflow/mnist_estimator_distributed.py", "max_forks_repo_name": "ashahab/TonY", "max_forks_repo_head_hexsha": "9bf6eec72e36ee8d8db295fa1be729cf7a780a97", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 166, "max_forks_repo_forks_event_min_datetime": "2018-09-13T14:51:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-12T13:25:19.000Z", "avg_line_length": 32.7699115044, "max_line_length": 95, "alphanum_fraction": 0.6959222252, "include": true, "reason": "import numpy", "num_tokens": 869}
|
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
#%matplotlib inline
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
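# A sketch of the averaging/extrapolation idea described in the docstring
# above (the function name and thresholds here are illustrative, not part
# of the original project): split segments by slope sign, fit one line per
# side, and draw each from the bottom of the image up to a fixed horizon.
def draw_lines_extrapolated(img, lines, color=[255, 0, 0], thickness=8, horizon_ratio=0.6):
    left, right = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # skip vertical segments
            slope = (y2 - y1) / (x2 - x1)
            if slope < -0.3:        # left lane rises right-to-left in image coords
                left.extend([(x1, y1), (x2, y2)])
            elif slope > 0.3:       # right lane
                right.extend([(x1, y1), (x2, y2)])
    y_bottom, y_top = img.shape[0], int(img.shape[0] * horizon_ratio)
    for pts in (left, right):
        if len(pts) < 2:
            continue
        xs, ys = zip(*pts)
        fit = np.polyfit(ys, xs, 1)  # fit x = m*y + b so x can be evaluated at chosen y
        cv2.line(img, (int(np.polyval(fit, y_bottom)), y_bottom),
                 (int(np.polyval(fit, y_top)), y_top), color, thickness)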
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    if lines is not None:  # HoughLinesP returns None when no lines are found
        draw_lines(line_img, lines)
    return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
import os
os.listdir("test_images/")
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # Minimal working pipeline (parameter values are illustrative):
    # grayscale -> blur -> Canny -> region mask -> Hough lines -> overlay.
    edges = canny(gaussian_blur(grayscale(image), 5), 50, 150)
    h, w = image.shape[0], image.shape[1]
    vertices = np.array([[(0, h), (w // 2, h // 2 + 50), (w, h)]], dtype=np.int32)
    line_img = hough_lines(region_of_interest(edges, vertices), 2, np.pi / 180, 15, 40, 20)
    return weighted_img(line_img, image)
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
white_clip.write_videofile(white_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
|
{"hexsha": "df96165e6553c4985203846157590d10d3959135", "size": 7047, "ext": "py", "lang": "Python", "max_stars_repo_path": "lane_line_tracker.py", "max_stars_repo_name": "nialldevlin/Lane-Line-Tracker", "max_stars_repo_head_hexsha": "fe1127d895790ab0c0a8292fdf1fab0551ef6bbb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lane_line_tracker.py", "max_issues_repo_name": "nialldevlin/Lane-Line-Tracker", "max_issues_repo_head_hexsha": "fe1127d895790ab0c0a8292fdf1fab0551ef6bbb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lane_line_tracker.py", "max_forks_repo_name": "nialldevlin/Lane-Line-Tracker", "max_forks_repo_head_hexsha": "fe1127d895790ab0c0a8292fdf1fab0551ef6bbb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.5081967213, "max_line_length": 137, "alphanum_fraction": 0.7268341138, "include": true, "reason": "import numpy", "num_tokens": 1748}
|
import torch
from tqdm import tqdm
import torch.nn as nn
import numpy as np
class DenseRetriever(nn.Module):
def __init__(self, unpaired_sents, vocab, add_sos=True, add_eos=True):
super().__init__()
self.pad_id = vocab.pad
self.eos_id = vocab.eos
self.sos_id = vocab.sos
self.unpaired_sents, self.unpaired_ids = self.build_data(unpaired_sents, vocab, add_sos=add_sos, add_eos=add_eos)
self.representations = None
self.style_classes = len(self.unpaired_sents)
def build_data(self, sents, vocab, add_sos, add_eos):
print("Building Dense Retriever...")
unpaired_ids = []
unpaired_sents = []
record_ids = set()
for style_sents in sents:
style_ids = []
style_tokens = []
for i in range(len(style_sents)):
cur_sent = [vocab.word2id.get(word, vocab.unk) for word in style_sents[i]]
if add_sos:
cur_sent = [self.sos_id] + cur_sent
if add_eos:
cur_sent = cur_sent + [self.eos_id]
cur_id = " ".join([str(s) for s in cur_sent])
if cur_id not in record_ids:
style_ids.append(cur_id)
style_tokens.append(cur_sent)
record_ids.add(cur_id)
unpaired_ids.append(style_ids)
unpaired_sents.append(style_tokens)
print("Total unique sentence: {}, {}".format(len(unpaired_sents[0]), len(unpaired_sents[1])))
return unpaired_sents, unpaired_ids
def update_representation(self, encoder, batch_size=32, device="cuda"):
representations = []
with torch.no_grad():
for style_sents in self.unpaired_sents:
style_reps = []
for i in tqdm(range(0, len(style_sents), batch_size)):
batch = self.process_batch(style_sents[i:i+batch_size]).to(device)
batch_reps = encoder(batch).detach()
style_reps.append(batch_reps)
representations.append(torch.cat(style_reps, dim=0))
self.representations = [rep / rep.norm(p=2, dim=-1, keepdim=True) for rep in representations]
def process_batch(self, sents):
max_len = max([len(sent) for sent in sents])
batch = []
for sent in sents:
batch.append(sent + [self.pad_id] * (max_len - len(sent)))
return torch.LongTensor(batch)
def process_retrieve_outs(self, batch_samples, batch_style):
batch_sample_tokens = []
for i in range(len(batch_samples)):
for k in batch_samples[i]:
sent = self.unpaired_sents[batch_style[i]][k]
batch_sample_tokens.append(sent)
batch_samples = self.process_batch(batch_sample_tokens).to(batch_style.device)
# B, K, seq_length
return batch_samples.view(batch_style.size(0), -1, batch_samples.size(-1))
def retrieve(self, batch_inputs, batch_query, batch_style, topk=5):
batch_size = batch_inputs.size(0)
batch_ids = [" ".join([str(s) for s in seq if s != self.pad_id]) for seq in batch_inputs.cpu().numpy()]
batch_query = batch_query / batch_query.norm(p=2, dim=-1, keepdim=True)
        batch_samples = np.zeros((batch_size, topk), dtype=np.int64)
for style in range(self.style_classes):
style_index = (batch_style == style).nonzero().squeeze(dim=-1)
if not style_index.nelement():
continue
sub_query = torch.index_select(batch_query, 0, style_index)
scores = torch.matmul(sub_query, self.representations[style].t().contiguous())
# (sub_batch, k+1)
scores, index = torch.sort(scores, dim=-1, descending=True)
            # skip trivial matches (the query sentence itself)
for sub_cursor, batch_cursor in enumerate(style_index):
z = 0
for j in range(topk):
while(self.unpaired_ids[style][index[sub_cursor][z]] == batch_ids[batch_cursor]):
z += 1
batch_samples[batch_cursor][j] = index[sub_cursor][z]
z += 1
return self.process_retrieve_outs(batch_samples, batch_style)
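# Sketch of the intended call pattern (the vocab and encoder objects are
# assumed to be provided by the surrounding training code):
# retriever = DenseRetriever(unpaired_sents, vocab)
# retriever.update_representation(encoder, batch_size=32, device="cuda")
# samples = retriever.retrieve(batch_inputs, batch_query, batch_style, topk=5)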
|
{"hexsha": "741711ae630639202188e04cc67fe44611e96be0", "size": 4463, "ext": "py", "lang": "Python", "max_stars_repo_path": "model/DenseRetriever.py", "max_stars_repo_name": "xiaofei05/TSST", "max_stars_repo_head_hexsha": "450d0d8c18002b50a50b4b642ace7769d476e889", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-09-24T11:44:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T08:29:35.000Z", "max_issues_repo_path": "model/DenseRetriever.py", "max_issues_repo_name": "xiaofei05/TSST", "max_issues_repo_head_hexsha": "450d0d8c18002b50a50b4b642ace7769d476e889", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-10-06T05:40:33.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-17T15:18:54.000Z", "max_forks_repo_path": "model/DenseRetriever.py", "max_forks_repo_name": "xiaofei05/TSST", "max_forks_repo_head_hexsha": "450d0d8c18002b50a50b4b642ace7769d476e889", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.5727272727, "max_line_length": 121, "alphanum_fraction": 0.5762939727, "include": true, "reason": "import numpy", "num_tokens": 999}
|
########################################
## File Name: belief_types.jl
## Author: Haruki Nishimura (hnishimura@stanford.edu)
## Date Created: 2020/05/12
## Description: Belief Type Definitions for SACBP
########################################
import Distributions: Normal, MvNormal
using AutoHashEquals  # provides the @auto_hash_equals macro used below
using Plots           # provides plot! and the Plots.Plot type
using LinearAlgebra
abstract type Belief end
BelState = Union{Belief,Array{<:Belief}};
# MultiVariate Gaussian Belief.
@auto_hash_equals struct BelMvNormal{T<:Real} <:Belief
t::Float64
params::Vector{T} #[μ;vcat(Σ...)];
dim::Int64
function BelMvNormal{T}(t,params,dim) where {T<:Real}
if length(params) != dim*(dim+1)
error(ArgumentError("Invalid parameter vector length."))
# elseif !isposdef(reshape(params[dim+1:end],dim,dim))
# error(ArgumentError("Σ must be positive definite."))
else
return new(t,params,dim)
end
end
end
BelMvNormal(t::Real,params::Vector{T},dim::Int64) where {T<:Real} = BelMvNormal{T}(t,params,dim);
function BelMvNormal(t::Real,params::Vector{T}) where {T<:Real}
l = length(params)
try
        dim = Int64((-1+sqrt(1+4l))/2) # Infer dim from the length: l = dim + dim^2.
return BelMvNormal(t,params,dim)
catch
error(ArgumentError("Invalid parameter vector length."))
end
end
function BelMvNormal(t::Real,μ::Vector{<:Real},Σ::Matrix{<:Real})
if size(Σ,1) != size(Σ,2) || size(Σ,2) != length(μ)
error(ArgumentError("Invalid parameter vector length."))
else
return BelMvNormal(t,[μ;vcat(Σ...)],length(μ));
end
end
function Distributions.MvNormal(b::BelMvNormal)
μ = b.params[1:b.dim];
Σ = reshape(b.params[b.dim+1:end],b.dim,b.dim);
return Distributions.MvNormal(μ,Σ)
end
function plot_e_ellipse!(b::BelMvNormal,probability::Float64,plt::Plots.Plot)
d = Distributions.MvNormal(b);
μ = d.μ;
Σ = Matrix(d.Σ);
ϵ = (1. - probability)/(2*pi*sqrt(det(Σ)));
theta = range(0.,stop=2.0*pi, length=100);
radius = sqrt(-2.0*log(2.0*pi) - logdet(Σ) - 2*log(ϵ));
x = zeros(length(theta));
y = zeros(length(theta));
for jj = 1:length(theta)
pos = sqrt(Σ)*[radius*cos(theta[jj]);radius*sin(theta[jj])] + μ;
x[jj] = pos[1];
y[jj] = pos[2];
end
    plot!(plt, x, y, color=:aquamarine, fill=true, fillalpha=0.3, label="")
end;
VecBelMvNormal{T<:Real} = Vector{BelMvNormal{T}};
function VecBelMvNormal(bvec::VecBelMvNormal)
if !all([isequal(bvec[1].t,b.t) for b in bvec])
error(ArgumentError("Beliefs have inconsistent time parameters."))
else
return bvec
end
end
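# Minimal construction example (values are illustrative):
# b = BelMvNormal(0.0, [0.0, 0.0], Matrix{Float64}(I, 2, 2))
# d = MvNormal(b)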
|
{"hexsha": "9517e5b2eaf210be6f2d001f6bb97ef2808e6fc9", "size": 2631, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/belief_types.jl", "max_stars_repo_name": "StanfordMSL/SACBP.jl", "max_stars_repo_head_hexsha": "f759dcdaa3a39f5195d0ae2939ac5a40fc5d1a8a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-05-22T15:34:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-19T18:03:01.000Z", "max_issues_repo_path": "src/belief_types.jl", "max_issues_repo_name": "StanfordMSL/SACBP.jl", "max_issues_repo_head_hexsha": "f759dcdaa3a39f5195d0ae2939ac5a40fc5d1a8a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/belief_types.jl", "max_forks_repo_name": "StanfordMSL/SACBP.jl", "max_forks_repo_head_hexsha": "f759dcdaa3a39f5195d0ae2939ac5a40fc5d1a8a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-02-27T06:52:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-20T01:00:26.000Z", "avg_line_length": 34.1688311688, "max_line_length": 97, "alphanum_fraction": 0.6252375523, "num_tokens": 814}
|
!! Copyright (C) Stichting Deltares, 2012-2016.
!!
!! This program is free software: you can redistribute it and/or modify
!! it under the terms of the GNU General Public License version 3,
!! as published by the Free Software Foundation.
!!
!! This program is distributed in the hope that it will be useful,
!! but WITHOUT ANY WARRANTY; without even the implied warranty of
!! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
!! GNU General Public License for more details.
!!
!! You should have received a copy of the GNU General Public License
!! along with this program. If not, see <http://www.gnu.org/licenses/>.
!!
!! contact: delft3d.support@deltares.nl
!! Stichting Deltares
!! P.O. Box 177
!! 2600 MH Delft, The Netherlands
!!
!! All indications and logos of, and references to registered trademarks
!! of Stichting Deltares remain the property of Stichting Deltares. All
!! rights reserved.
subroutine tfalg ( pmsa , fl , ipoint , increm , noseg ,
& noflux , iexpnt , iknmrk , noq1 , noq2 ,
& noq3 , noq4 )
!>\file
!> Temperature functions for algae growth and mortality
!
! Description of the module :
!
! Name T L I/O Description Unit
! ---- --- - - ------------------- ---
! TEMP R*4 1 I ambient temperature [x
! TEMP20 R*4 1 L ambient temperature - stand. temp (20) [x
! TCG1 R*4 1 I temp. coeff. for growth processes diatoms [
! TCM1 R*4 1 I temp. coeff. for mortality processes green s [
! TFUNG1 R*4 1 L temp. function for growth processes green [
! TFUNM1 R*4 1 L temp. function for mortality processes green [
! Logical Units : -
! Modules called : -
! Name Type Library
! ------ ----- ------------
IMPLICIT REAL (A-H,J-Z)
REAL PMSA ( * ) , FL (*)
INTEGER IPOINT( * ) , INCREM(*) , NOSEG , NOFLUX,
+ IEXPNT(4,*) , IKNMRK(*) , NOQ1, NOQ2, NOQ3, NOQ4
LOGICAL TMPOPT
!
IN1 = INCREM( 1)
IN2 = INCREM( 2)
IN3 = INCREM( 3)
IN4 = INCREM( 4)
IN5 = INCREM( 5)
!
IP1 = IPOINT( 1)
IP2 = IPOINT( 2)
IP3 = IPOINT( 3)
IP4 = IPOINT( 4)
IP5 = IPOINT( 5)
!
IF ( IN1 .EQ. 0 .AND. IN2 .EQ. 0 .AND. IN3 .EQ. 0 ) THEN
TEMP = PMSA(IP1 )
TCG = PMSA(IP2 )
TCM = PMSA(IP3 )
TEMP20 = TEMP - 20.
TFG = TCG**TEMP20
TFM = TCM**TEMP20
TMPOPT = .FALSE.
ELSE
TMPOPT = .TRUE.
ENDIF
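!     Example (illustrative values): with TCG = 1.07 and TEMP = 30.,
!     TFG = 1.07**10 = 1.97, i.e. rates roughly double per 10 degrees
!     above the 20 degree reference.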
!
DO 9000 ISEG = 1 , NOSEG
!! CALL DHKMRK(1,IKNMRK(ISEG),IKMRK1)
!! IF (IKMRK1.EQ.1) THEN
IF (BTEST(IKNMRK(ISEG),0)) THEN
!
IF ( TMPOPT ) THEN
TEMP = PMSA(IP1 )
TCG = PMSA(IP2 )
TCM = PMSA(IP3 )
TEMP20 = TEMP - 20.
! Algal temp. functions for growth (G) and mortality (M) processes
TFG = TCG**TEMP20
TFM = TCM**TEMP20
ENDIF
! Output of the limiting factors
PMSA(IP4 ) = TFG
PMSA(IP5 ) = TFM
!
ENDIF
!
IP1 = IP1 + IN1
IP2 = IP2 + IN2
IP3 = IP3 + IN3
IP4 = IP4 + IN4
IP5 = IP5 + IN5
!
9000 CONTINUE
!
RETURN
END
|
{"hexsha": "a19ab777b2ae65499b945d6cca67b4cce1447425", "size": 3407, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "docker/water/delft3d/tags/v6686/src/engines_gpl/waq/packages/waq_kernel/src/waq_process/tfalg.f", "max_stars_repo_name": "liujiamingustc/phd", "max_stars_repo_head_hexsha": "4f815a738abad43531d02ac66f5bd0d9a1def52a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-01-06T03:01:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T03:02:55.000Z", "max_issues_repo_path": "docker/water/delft3d/tags/v6686/src/engines_gpl/waq/packages/waq_kernel/src/waq_process/tfalg.f", "max_issues_repo_name": "liujiamingustc/phd", "max_issues_repo_head_hexsha": "4f815a738abad43531d02ac66f5bd0d9a1def52a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docker/water/delft3d/tags/v6686/src/engines_gpl/waq/packages/waq_kernel/src/waq_process/tfalg.f", "max_forks_repo_name": "liujiamingustc/phd", "max_forks_repo_head_hexsha": "4f815a738abad43531d02ac66f5bd0d9a1def52a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.4196428571, "max_line_length": 73, "alphanum_fraction": 0.526856472, "num_tokens": 1098}
|
from pandas import (
    DataFrame,
    Series,
)


def test_agg_dict_with_lambda_returns_series():
    # GH#41672: .agg with a dict of lambdas should return a Series,
    # for both an empty and a populated frame.
    result = DataFrame([], columns=['lang', 'name'])
    result = result.agg({'name': lambda y: y.values})
    assert type(result) == Series

    result = DataFrame([['a', 'boof']], columns=['lang', 'name'])
    result = result.agg({'name': lambda y: y.values})
    assert type(result) == Series
|
{"hexsha": "16d769afb93f1944e5ac1df23d954ebcfea17ad5", "size": 507, "ext": "py", "lang": "Python", "max_stars_repo_path": "pandas/tests/frame/test_aggregate.py", "max_stars_repo_name": "weikhor/pandas", "max_stars_repo_head_hexsha": "ae6538f5df987aa382ec1499679982aaff1bfd86", "max_stars_repo_licenses": ["PSF-2.0", "Apache-2.0", "BSD-3-Clause-No-Nuclear-License-2014", "MIT", "MIT-0", "ECL-2.0", "BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-13T02:50:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T02:50:57.000Z", "max_issues_repo_path": "pandas/tests/frame/test_aggregate.py", "max_issues_repo_name": "weikhor/pandas", "max_issues_repo_head_hexsha": "ae6538f5df987aa382ec1499679982aaff1bfd86", "max_issues_repo_licenses": ["PSF-2.0", "Apache-2.0", "BSD-3-Clause-No-Nuclear-License-2014", "MIT", "MIT-0", "ECL-2.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pandas/tests/frame/test_aggregate.py", "max_forks_repo_name": "weikhor/pandas", "max_forks_repo_head_hexsha": "ae6538f5df987aa382ec1499679982aaff1bfd86", "max_forks_repo_licenses": ["PSF-2.0", "Apache-2.0", "BSD-3-Clause-No-Nuclear-License-2014", "MIT", "MIT-0", "ECL-2.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.0454545455, "max_line_length": 65, "alphanum_fraction": 0.6370808679, "include": true, "reason": "import numpy", "num_tokens": 131}
|
module Verifier where
open import Definitions
open import NatEquality using (_≟_ ; equality-disjoint)
check1 : (m n : ℕ) → Equal? m n
check1 = _≟_
check2 : (m n : ℕ) → m ≡ n → m ≢ n → ⊥
check2 = equality-disjoint
|
{"hexsha": "dd5f327ee64ca7ee2db177706147e01c06629842", "size": 217, "ext": "agda", "lang": "Agda", "max_stars_repo_path": "problems/NatEquality/Verifier.agda", "max_stars_repo_name": "danr/agder", "max_stars_repo_head_hexsha": "ece25bed081a24f02e9f85056d05933eae2afabf", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-17T12:07:03.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-17T12:07:03.000Z", "max_issues_repo_path": "problems/NatEquality/Verifier.agda", "max_issues_repo_name": "danr/agder", "max_issues_repo_head_hexsha": "ece25bed081a24f02e9f85056d05933eae2afabf", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problems/NatEquality/Verifier.agda", "max_forks_repo_name": "danr/agder", "max_forks_repo_head_hexsha": "ece25bed081a24f02e9f85056d05933eae2afabf", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.0833333333, "max_line_length": 55, "alphanum_fraction": 0.6589861751, "num_tokens": 75}
|
//==============================================================================
// Copyright 2003 - 2012 LASMEA UMR 6602 CNRS/Univ. Clermont II
// Copyright 2009 - 2012 LRI UMR 8623 CNRS/Univ Paris Sud XI
//
// Distributed under the Boost Software License, Version 1.0.
// See accompanying file LICENSE.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt
//==============================================================================
#ifndef BOOST_SIMD_SDK_MEMORY_IS_ALIGNED_HPP_INCLUDED
#define BOOST_SIMD_SDK_MEMORY_IS_ALIGNED_HPP_INCLUDED
#include <boost/simd/sdk/memory/parameters.hpp>
#include <boost/simd/sdk/memory/is_power_of_2.hpp>
#include <boost/dispatch/attributes.hpp>
#include <boost/assert.hpp>
#include <cstddef>
namespace boost { namespace simd
{
/*!
@brief Check a value or address is aligned on an arbitrary alignment boundary
@param value Value to check
@param align Alignment boundary to check for.
**/
BOOST_FORCEINLINE bool is_aligned(std::size_t value, std::size_t align)
{
BOOST_ASSERT_MSG
( boost::simd::is_power_of_2(align)
, "Invalid alignment boundary. You tried to check if "
"an address or a value is aligned on a non-power of 2 boundary."
);
return !(value & (align-1) );
}
/*! @overload **/
template<class T> BOOST_FORCEINLINE bool is_aligned(T* ptr, std::size_t align)
{
return boost::simd::is_aligned(reinterpret_cast<std::size_t>(ptr),align);
}
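  // Usage sketch (illustrative): boost::simd::is_aligned(ptr, 16) is true
  // iff the address stored in ptr is a multiple of 16.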
/*!
@brief Check a value or address is aligned on default alignment boundary
@param value Value to check
**/
template<class T> BOOST_FORCEINLINE bool is_aligned(T value)
{
return boost::simd::is_aligned(value,BOOST_SIMD_CONFIG_ALIGNMENT);
}
} }
#endif
|
{"hexsha": "d1433a1e368abd60ad2e0ddbb0eefe7b0584faf1", "size": 1797, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "modules/boost/simd/sdk/include/boost/simd/sdk/memory/is_aligned.hpp", "max_stars_repo_name": "pbrunet/nt2", "max_stars_repo_head_hexsha": "2aeca0f6a315725b335efd5d9dc95d72e10a7fb7", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 2.0, "max_stars_repo_stars_event_min_datetime": "2016-09-14T00:23:53.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-14T12:51:18.000Z", "max_issues_repo_path": "modules/boost/simd/sdk/include/boost/simd/sdk/memory/is_aligned.hpp", "max_issues_repo_name": "pbrunet/nt2", "max_issues_repo_head_hexsha": "2aeca0f6a315725b335efd5d9dc95d72e10a7fb7", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "modules/boost/simd/sdk/include/boost/simd/sdk/memory/is_aligned.hpp", "max_forks_repo_name": "pbrunet/nt2", "max_forks_repo_head_hexsha": "2aeca0f6a315725b335efd5d9dc95d72e10a7fb7", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.6727272727, "max_line_length": 81, "alphanum_fraction": 0.633277685, "num_tokens": 416}
|
import kagglegym
import numpy as np
import pandas as pd
import bz2
import base64
import pickle as pk
import warnings
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
# The "environment" is our interface.
env = kagglegym.make()
# We get our initial observation by calling "reset".
o = env.reset()
excl = [env.ID_COL_NAME, env.SAMPLE_COL_NAME, env.TARGET_COL_NAME, env.TIME_COL_NAME]
col = [c for c in o.train.columns if c not in excl]
train = o.train.loc[:, col]
# Total number of NA values per observation.
train_NA_values = train.isnull().sum(axis=1)
# Record NA values and then fill them with the median.
d_mean = train.median(axis=0)
for c in col:
train.loc[:, c + "_nan"] = pd.isnull(train[c])
d_mean[c + "_nan"] = 0
train = train.fillna(d_mean)
train.loc[:, "is_null"] = train_NA_values
# Precomputed mask selecting the best features
model_2_mask = [ True, False, False, True, False, True, False, True, True,
False, False, True, False, False, True, True, True, True,
False, True, False, True, True, True, True, False, True,
True, False, True, False, True, False, False, True, True,
False, True, True, False, True, False, False, True, True,
False, True, True, False, True, True, True, True, True,
False, True, True, True, True, True, True, True, True,
True, True, False, True, True, False, False, True, False,
True, True, False, False, False, True, True, False, True,
True, True, False, True, True, True, True, True, True,
True, True, True, True, True, False, True, True, True,
True, False, False, False, True, True, True, True, False,
False, False, False, False, False, False, False, False, True,
True, False, False, True, True, False, False, False, True,
False, False, True, True, False, False, False, False, True,
True, False, True, False, True, True, True, False, True,
False, True, False, True, False, True, False, True, False,
True, False, True, True, False, True, True, False, False,
False, False, False, False, False, False, False, False, False,
False, False, False, False, False, True, False, False, False,
False, False, False, False, True, False, False, False, False,
True, False, True, False, False, False, False, False, False,
False, False, False, False, False, True, False, False, False,
False, False, True, False, False, True, False, False, False, True]
train = train.loc[:, train.columns[model_2_mask]]
low_y_cut = -0.075
high_y_cut = 0.075
y_is_above_cut = (o.train.y > high_y_cut)
y_is_below_cut = (o.train.y < low_y_cut)
y_is_within_cut = (~y_is_above_cut & ~y_is_below_cut)
model_1 = LinearRegression(n_jobs=-1)
model_1.fit(np.array(train.loc[y_is_within_cut, "technical_20"].values).reshape(-1, 1),
o.train.loc[y_is_within_cut, "y"])
# Fit an ExtraTreesRegressor
extra_trees = ExtraTreesRegressor(n_estimators=100, max_depth=4, n_jobs=-1,
random_state=17, verbose=0)
model_2 = extra_trees.fit(train, o.train["y"])
# Load saved pickle model.
#model_2_str = """
#"""
#warnings.simplefilter("ignore", UserWarning)
#model_2 = pk.loads(bz2.decompress(base64.standard_b64decode(model_2_str)), encoding="latin1")
train = []  # release the training frame to free memory
ymean_dict = dict(o.train.groupby(["id"])["y"].median())
while True:
test = o.features.loc[:, col]
# Total number of NA values per observation.
test_NA_values = test.isnull().sum(axis=1)
# Fill NA values.
for c in col:
test.loc[:, c + "_nan"] = pd.isnull(test[c])
test = test.fillna(d_mean)
test.loc[:, "is_null"] = test_NA_values
    # Apply the precomputed best-features mask
test = test.loc[:, test.columns[model_2_mask]]
pred = o.target
test_technical_20 = np.array(test["technical_20"].values).reshape(-1, 1)
    # Weighted average of the two models.
pred["y"] = ((model_1.predict(test_technical_20).clip(low_y_cut, high_y_cut) * 0.35)
+ (model_2.predict(test).clip(low_y_cut, high_y_cut) * 0.65))
# Add the median of the target value by ID.
pred["y"] = pred.apply(lambda x: 0.95 * x["y"] + 0.05 * ymean_dict[x["id"]] if x["id"] in ymean_dict else x["y"], axis=1)
# The target values have 6 decimals in the training set.
pred["y"] = [float(format(x, ".6f")) for x in pred["y"]]
o, reward, done, info = env.step(pred)
if done:
print("Finished", info["public_score"])
break
if o.features.timestamp[0] % 100 == 0:
print(reward)
|
{"hexsha": "57bdc28cecf06a7f9863949afb132253dd7118cb", "size": 4699, "ext": "py", "lang": "Python", "max_stars_repo_path": "main.py", "max_stars_repo_name": "anthonyhu/twosigma-LETH-Al", "max_stars_repo_head_hexsha": "10306e35fdd7e0d2b7f3d19dce27674ad07fcddb", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "main.py", "max_issues_repo_name": "anthonyhu/twosigma-LETH-Al", "max_issues_repo_head_hexsha": "10306e35fdd7e0d2b7f3d19dce27674ad07fcddb", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main.py", "max_forks_repo_name": "anthonyhu/twosigma-LETH-Al", "max_forks_repo_head_hexsha": "10306e35fdd7e0d2b7f3d19dce27674ad07fcddb", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.5163934426, "max_line_length": 125, "alphanum_fraction": 0.6397105767, "include": true, "reason": "import numpy", "num_tokens": 1407}
|
'''
Calculates PV from OpenMARS data.
'''
import numpy as np
import xarray as xr
import os, sys
import glob
import analysis_functions as funcs
import PVmodule as PV
def calculate_pfull(psurf, siglev):
r"""Calculates full pressures using surface pressures and sigma coordinates
psurf : array-like
Surface pressures
siglev : array-like
Sigma-levels
"""
return psurf*siglev
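# Hedged usage sketch (fabricated values): calculate_pfull(610., 0.5) returns
# 305.0, the full pressure in Pa at sigma-level 0.5 for psurf = 610 Pa; array
# inputs broadcast elementwise under the usual numpy rules.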
if __name__ == "__main__":
### choose your desired isobaric levels, in Pascals
plev1 = [float(i/10) for i in range(1,100,5)]
plev2 = [float(i) for i in range(10,100,10)]
plev3 = [float(i) for i in range(100,650,50)]
### choose desired isentropic levels, in Kelvins
thetalevs=[200., 250., 300., 350., 400., 450., 500., 550., 600., 650., 700., 750., 800., 850., 900., 950.]
save_PV_isobaric=True
save_PV_isentropic=True
interpolate_isentropic=True
Lsmin = 255
Lsmax = 285
theta0 = 200.
kappa = 1/4.0
p0 = 610.
### planetary constants used by the PV call below; omega, g and rsphere are
### otherwise undefined in this script, so assumed Mars values are set here
omega = 7.08822e-05   # rotation rate / rad s^-1
g = 3.72076           # surface gravity / m s^-2
rsphere = 3.3895e6    # mean planetary radius / m
inpath = '/export/anthropocene/array-01/xz19136/OpenMARS/MY28-32/'
#infiles = os.listdir(inpath)
home = os.getenv("HOME")
os.chdir(inpath)
infiles = glob.glob('*open*')
for f in infiles:
print(f)
os.chdir(home)
isenpath = '/export/anthropocene/array-01/xz19136/OpenMARS/Isentropic/'
isopath = '/export/anthropocene/array-01/xz19136/OpenMARS/Isobaric/'
#inpath = ''
#outpath = 'MACDA_data/'
figpath = 'OpenMARS_figs/'
plevs = plev1+plev2+plev3
for f in infiles:
ds = xr.open_mfdataset(inpath+f, decode_times=False, concat_dim='time',
combine='nested',chunks={'time':'auto'})
ens_list = []
# duplicate the lon = -180 slice at lon ~ +180 so the longitude grid wraps
tmp1 = ds.sel(lon=-180.)
tmp1 = tmp1.assign_coords({'lon':179.9999})
ens_list.append(ds)
ens_list.append(tmp1)
d = xr.concat(ens_list, dim='lon')
d = d.astype('float32')
d = d[['Ls','MY','ps','temp','u','v']]
prs = calculate_pfull(d.ps, d.lev)
prs = prs.transpose('time','lev','lat','lon')
temp = d[["temp"]].to_array().squeeze()
uwind = d[["u"]].to_array().squeeze()
vwind = d[["v"]].to_array().squeeze()
print('Calculating potential temperature...')
# use the full pressures computed above (d.pfull no longer exists after the
# variable subsetting)
thta = PV.potential_temperature(d.temp, prs,
                                kappa = kappa, p0 = p0)
print('Interpolating variables onto isobaric levels...')
tmp, uwnd, vwnd, theta = PV.log_interpolate_1d(plevs, prs.compute(),
temp, uwind, vwind, thta,
axis = 1)
d_iso = xr.Dataset({"tmp" : (("time", "plev", "lat", "lon"), tmp),
"uwnd" : (("time", "plev", "lat", "lon"), uwnd),
"vwnd" : (("time", "plev", "lat", "lon"), vwnd),
"theta": (("time", "plev", "lat", "lon"), theta)},
coords = {"time": d.time,
"plev": plevs,
"lat" : d.lat,
"lon" : d.lon})
uwnd_trans = d_iso.uwnd.transpose('lat','lon','plev','time')
vwnd_trans = d_iso.vwnd.transpose('lat','lon','plev','time')
tmp_trans = d_iso.tmp.transpose('lat','lon','plev','time')
print('Calculating potential vorticity on isobaric levels...')
PV_iso = PV.potential_vorticity_baroclinic(uwnd_trans, vwnd_trans,
d_iso.theta, 'plev', omega = omega, g = g, rsphere = rsphere)
PV_iso = PV_iso.transpose('time','plev','lat','lon')
d_iso["PV"] = PV_iso
if save_PV_isobaric == True:
print('Saving PV on isobaric levels to '+isopath)
d_iso["Ls"]=d.Ls
d_iso["MY"]=d.MY
path = isopath+'isobaric_'+f
d_iso.to_netcdf(path)
isentlevs = np.array(thetalevs)
if interpolate_isentropic==True:
print('Interpolating variables onto isentropic levels...')
isent_prs, isent_PV, isent_u, isent_tmp = PV.isent_interp(isentlevs, d_iso.plev,
d_iso.tmp, PV_iso, d_iso.uwnd,
axis = 1,temperature_out=True)
d_isent = xr.Dataset({"prs" : (("time","ilev","lat","lon"), isent_prs),
"PV" : (("time","ilev","lat","lon"), isent_PV),
"uwnd": (("time","ilev","lat","lon"), isent_u),
"tmp" : (("time","ilev","lat","lon"), isent_tmp)},
coords = {"time": d_iso.time,
"ilev": isentlevs,
"lat" : d_iso.lat,
"lon" : d_iso.lon})
if save_PV_isentropic == True:
print('Saving PV on isentropic levels to '+isenpath)
d_isent["Ls"]=d.Ls
d_isent["MY"]=d.MY
d_isent.to_netcdf(isenpath+'isentropic_'+f)
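### --- Hedged sketch (an assumption, not the PVmodule implementation) ---
### PV.potential_temperature above is presumed to compute the standard
### formula theta = T * (p0 / p)**kappa with the kappa and p0 set earlier,
### e.g. T = 150 K at p = 10 Pa gives theta = 150 * 61**0.25 ~ 419 K.
def potential_temperature_demo(T, p, kappa=1/4.0, p0=610.):
    return T * (p0 / p) ** kappa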
|
{"hexsha": "f4966e9351a7af0dcbccc21be78dacede0424733", "size": 5307, "ext": "py", "lang": "Python", "max_stars_repo_path": "calculate_PV_OpenMARS.py", "max_stars_repo_name": "BrisClimate/Roles_of_latent_heat_and_dust_on_the_Martian_polar_vortex", "max_stars_repo_head_hexsha": "11b4e1ba958eefbc03f9491cb68485637b696346", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-14T09:02:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-14T09:02:57.000Z", "max_issues_repo_path": "calculate_PV_OpenMARS.py", "max_issues_repo_name": "BrisClimate/Roles_of_latent_heat_and_dust_on_the_Martian_polar_vortex", "max_issues_repo_head_hexsha": "11b4e1ba958eefbc03f9491cb68485637b696346", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "calculate_PV_OpenMARS.py", "max_forks_repo_name": "BrisClimate/Roles_of_latent_heat_and_dust_on_the_Martian_polar_vortex", "max_forks_repo_head_hexsha": "11b4e1ba958eefbc03f9491cb68485637b696346", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-16T18:03:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-16T18:03:49.000Z", "avg_line_length": 37.6382978723, "max_line_length": 110, "alphanum_fraction": 0.4940644432, "include": true, "reason": "import numpy", "num_tokens": 1415}
|
[STATEMENT]
lemma C_eq_normalizeQ:
"DenyAll \<in> set (policy2list p) \<Longrightarrow> allNetsDistinct (policy2list p) \<Longrightarrow>
all_in_list (policy2list p) (Nets_List p) \<Longrightarrow>
C (list2FWpolicy (normalizeQ p)) = C p"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>DenyAll \<in> set (policy2list p); allNetsDistinct (policy2list p); all_in_list (policy2list p) (Nets_List p)\<rbrakk> \<Longrightarrow> C (list2FWpolicy (normalizeQ p)) = C p
[PROOF STEP]
by (simp add: normalizeQ_def C_eq_compileQ)
|
{"llama_tokens": 213, "file": "UPF_Firewall_FWNormalisation_NormalisationIntegerPortProof", "length": 1}
|
import time
import pyro
import torch
import networkx as nx
import pyro.distributions as dist
torch.set_default_tensor_type(torch.DoubleTensor)
class ApproximateCounterfactual():
"""A class for performing Pyro-based approximate inference with a Structural Causal Model.
Has the ability to handle and operate on Twin Networks.
Currently implements only Importance Sampling, but would be simple to extend to other
sampling-based methods and variational inference.
Example:
```
from scm import CausalModel
scm = CausalModel(continuous=True)
scm.create(method="backward", n_nodes=10, max_parents=5)
app = ApproximateCounterfactual()
app.construct(scm)
evidence = scm.sample_observable_dict(n_samples=1, n_vars=4)
intervention = {"N5": 1.25}
node_of_interest = "N9"
posterior = app.counterfactual_query(node_of_interest, evidence, intervention, 1000, 1000)
```
"""
def __init__(self, prior_mean=None, prior_variance=None, verbose=False):
"""
Initialize an instance of the class.
Args:
prior_mean (float): mean of the exogenous Normal prior (defaults to 0.).
prior_variance (float): spread of the exogenous Normal prior; passed to dist.Normal as its scale (defaults to 1.).
verbose (bool): if True, prints out timings during the inference stage.
"""
self.verbose = verbose
self._intv_nodes = []
self._noi_nodes = []
self.prior_mean = 0. if prior_mean is None else prior_mean
self.prior_variance = 1. if prior_variance is None else prior_variance
def construct(self, scm):
"""Parameterize the Approximate class via a specific SCM.
Args:
scm (CausalModel): a CausalModel instance.
"""
self.continuous = scm.continuous
if self.continuous:
self.exog_fn = dist.Normal(torch.tensor([self.prior_mean]), self.prior_variance)
self.invert_fn = self.continuous_invert
else:
self.exog_fn = dist.Bernoulli(torch.tensor([0.5]))
self.invert_fn = self.binary_invert
self.scm = scm
self.G_inference = self.scm.G
def model(self, evidence={}, noise=None):
"""A generative Pyro model function for a Structural Causal Model.
Args:
evidence (dict): a dictionary of {node_name: value} evidence data.
noise: an optional empirical posterior over the exogenous nodes (e.g. as returned by abduction()); if given, its samples replace fresh exogenous draws.
Returns:
model_dict: a sample from the model in {node_name: value} format.
TODO: Make all endogenous nodes deterministic rather than delta variables
"""
model_dict = {}
if noise is not None: # `noise` would not be None if you've already performed abduction.
noise_sample = [z for z in zip(self.scm._get_exog_nodes(), noise.sample())]
model_dict = {z[0]: pyro.sample(z[0], dist.Delta(z[1])) for z in noise_sample} # sample from posterior.
for node in nx.topological_sort(self.G_inference):
parents = sorted(list(self.G_inference.predecessors(node)))
if self.scm._is_exog(node, self.G_inference):
if noise is None: # only create a noise random variable if a noise posterior is not passed
model_dict[node] = pyro.sample(node, self.exog_fn)
else: # all remaining endogenous nodes are deterministic functions of their parents
exog_parent = [pa for pa in parents if pa[0] == "U"][0]
parent_values = [model_dict[n] for n in parents if not self.scm._is_exog(n, self.G_inference)]
predicted_data = self._scm_function(node, parent_values, model_dict[exog_parent])
if node in evidence:
model_dict[node] = pyro.sample(node, dist.Delta(predicted_data), obs=evidence[node]) # TODO: Choose
# model_dict[node] = self._assign_delta_node(node, predicted_data, obs=evidence[node])
else:
model_dict[node] = pyro.sample(node, dist.Delta(predicted_data)) # TODO: Choose
# model_dict[node] = self._assign_delta_node(node, predicted_data)
return model_dict
def _assign_delta_node(self, node, value, obs=None):
"""Helper for assigning delta nodes. If node is intervened on, make pyro RV so pyro.do works. Else, float."""
if node not in self._intv_nodes + self._noi_nodes:
return value
else:
return pyro.sample(node, dist.Delta(value), obs=obs)
def guide(self, evidence={}, noise=None):
"""A "smart" guide function for the SCM model above which propagates the information
from a deterministic node being observed to the noise node, so that you don't end up with many rejected samples.
This is slightly different from the model schema for the sake of sampling efficiency.
Args:
evidence (dict): a dictionary of {node_name: value} evidence data.
noise (None): a useless parameter that exists because in Pyro, the guide fn has the same inputs as the model fn.
Returns:
model_dict (dict): a sample from the guide in {node_name: value} format.
TODO: Make all endogenous nodes deterministic rather than delta variables
"""
guide_dict = {}
# the order is a little complex. Any observed nodes have to go first, then the non-twin endog, then twin.
for node in self._get_guide_order(evidence):
exog_parent = [n for n in self.G_inference.predecessors(node) if self.scm._is_exog(n, self.G_inference)][0]
endog_parents = sorted([n for n in self.G_inference.predecessors(node)
if not self.scm._is_exog(n, self.G_inference)])
if endog_parents:
parent_values = [guide_dict[n] for n in endog_parents]
else:
parent_values = []
if node not in evidence:
if exog_parent not in guide_dict: # if you haven't already sampled the exog_parent
guide_dict[exog_parent] = pyro.sample(exog_parent, self.exog_fn)
else:
if not endog_parents: # if node only has an exogenous parent
if exog_parent not in guide_dict:
guide_dict[exog_parent] = pyro.sample(exog_parent, dist.Delta(evidence[node])) # TODO: Choose
# guide_dict[exog_parent] = self._assign_delta_node(exog_parent, evidence[node])
else: # if a node has exog & endog parents
if exog_parent not in guide_dict:
predicted_val = self._scm_function(node, parent_values)
exog_val = self.invert_fn(evidence[node], predicted_val)
guide_dict[exog_parent] = pyro.sample(exog_parent, dist.Delta(exog_val)) # TODO: Choose
# guide_dict[exog_parent] = self._assign_delta_node(exog_parent, exog_val)
val = self._scm_function(node, parent_values, guide_dict[exog_parent])
guide_dict[node] = pyro.sample(node, dist.Delta(val)) # TODO: Choose
# guide_dict[node] = self._assign_delta_node(node, val)
return guide_dict
def _get_guide_order(self, evidence):
"""A helper function that finds the correct model order if the twin graph is the graph of inference.
Args:
evidence (dict): a dictionary of {node_name: value} evidence data.
"""
if self.G_inference.is_twin:
return self.scm._weave_sort_endog(evidence)
else:
return self.scm._get_endog_nodes()
def binary_invert(self, obs, pred):
"""The smart guide inversion function for recovering the exogenous value in binary SCMs.
Write your own implementation based on how exogenous variables act in your SCM model.
In the vanilla implementation, if pred != obs, the exogenous variable value must be 1 (it is active, or "flips")
Args:
obs (tensor): the observed evidence value tensor
pred (tensor): the predicted value tensor
"""
return (pred != obs).double()
def continuous_invert(self, obs, pred):
"""The smart guide inversion function for recovering the exogenous value in cntinuous SCMs.
Write your own implementation based on how exogenous variables act in your SCM model.
In the vanilla implementation, noise is additive.
Thus, it must make up the difference between the observed and predicted values.
Args:
obs (tensor): the observed evidence value tensor
pred (tensor): the predicted value tensor
"""
return obs - pred
def _binary_exog_scm_function(self, val, flippers):
"""The binary SCM noise-flipping function.
Wherever flippers is 1, the corresponding entry of val is flipped (i.e. 0=>1, 1=>0)
Write your own implementation based on how you want noise to enter.
Args:
val (tensor): the predicted value
flippers (tensor): the values of the flipper variables (sampled or inferred).
"""
## TODO: Move to SCM class
val[flippers.byte()] = 1. - val[flippers.byte()]
return val
def _continuous_exog_scm_function(self, val, noise=None):
"""The continuous SCM additive noise function.
Noise adds to the predicted value.
Write your own implementation based on how you want noise to enter.
Args:
val (tensor): the predicted value
noise (tensor or None): the noise value.
"""
return val if noise is None else val + noise
def _scm_function(self, node, parents, exog=None):
"""A handler for using each node's generating function to generate the value for that node.
Args:
node (str): the name of the node (used for indexing in self._deterministic_function(...))
parents (list): a list of tensors of the values of the parents.
exog (None or tensor): optional value if the only parent of `node` is its exogenous node.
"""
## TODO: Move to SCM class
if not parents:
return exog
else:
parents = torch.cat([pa.reshape(-1, 1) for pa in parents], dim=1)
val = self._deterministic_function(node, parents, exog)
return val.flatten()
def _deterministic_function(self, node, parents, exog=None):
"""
Calls the relevant deterministic functions and applies the influence of the noise function.
Args:
node (str): the name of the node (used for indexing in self._deterministic_function(...))
parents (list): a list of tensors of the values of the parents.
exog (None or tensor): optional value if the only parent of `node` is its exogenous node.
"""
## TODO: Move to SCM class
if self.continuous:
val = self.G_inference.nodes[node]['fn'](parents).flatten()
val = self._continuous_exog_scm_function(val, exog)
else:
val = self._binary_fn(node, parents)
val = self._binary_exog_scm_function(val, exog)
return val
def _binary_fn(self, node, parent_values):
"""The functional form for generating a predicted value, without the effect of noise.
This is just a threshold-based classification function with linear form.
Args:
node (str): the name of the node
parent_values (list): a list of tensors of the values of `node`'s parents.
"""
thetas = torch.from_numpy(self.G_inference.nodes[node]['parameters']).double()
v = ((thetas * parent_values).sum(dim=-1) > 0.5).double()
return v
def get_posterior(self, nodes=None, evidence=None, n_samples=1000, custom_model=None, custom_guide=None, twin=False):
"""Run importance sampling for the defined model and guide to form a joint posterior.
Args:
nodes (list): list of nodes of the desired joint posterior
evidence (dict): a dictionary of observed values formatted as {node_name: val}
n_samples (int): the number of samples to take for inference.
custom_model (fn): if desired, a custom model function.
custom_guide (fn): if desired, a custom guide function.
twin (bool): whether or not to run posterior inference on the twin network.
"""
if evidence is not None:
evidence = {d: torch.tensor([evidence[d]]).double() for d in evidence}
self.G_inference = self.scm.G if not twin else self.scm.twin_G
model = custom_model if custom_model is not None else self.model
guide = custom_guide if custom_guide is not None else self.guide
posterior = pyro.infer.Importance(model, guide, n_samples)
posterior.run(evidence)
posterior = pyro.infer.EmpiricalMarginal(posterior, sites=nodes)
return posterior
def abduction(self, evidence=None, n_samples=1000):
"""Run importance sampling for the above model and guide to form the joint posterior over *exog. variables*.
Args:
evidence (dict): a dictionary of observed values formatted as {node_name: val}
n_samples (int): the number of samples to take for inference.
"""
return self.get_posterior(nodes=self.scm._get_exog_nodes(), evidence=evidence, n_samples=n_samples)
def intervention_prediction(self, node_of_interest, intervention, posterior, n_samples):
"""Given an exogenous posterior, sample then return the mean of the node of interest.
Args:
node_of_interest (str): the name of the node of interest
intervention (dict): a dictionary of {node_name: value} interventions
posterior: the joint posterior over the exogenous nodes, as returned by abduction()
n_samples (int): the number of samples to take from the posterior.
"""
intervention = {k: torch.tensor([intervention[k]]).double().flatten() for k in intervention}
self._intv_nodes = [k for k in intervention]
intervened_model = pyro.do(self.model, data=intervention)
estimate = []
for s in range(n_samples):
estimate.append(intervened_model(noise=posterior)[node_of_interest])
self._intv_nodes = []
return estimate
def counterfactual_query(self,
node_of_interest,
evidence,
intervention,
n_abduction_samples,
n_posterior_samples,
distribution=False):
"""Run the standard 3-step counterfactual inference procedure.
Args:
node_of_interest (str): the name of the node of interest
intervention (dict): a dictionary of {node_name: value} interventions
evidence (dict): a dictionary of {node_name: value} evidence
n_abduction_samples (int): the number of samples to take during importance sampling.
n_posterior_samples (int): the number of samples to take from the posterior.
distribution (bool): if True, return samples. If False, returns mean and sd. of distribution.
"""
# Abduction step
self.G_inference = self.scm.G_original
t_abduction = time.time()
if self.verbose:
print("Performing Abduction... ", end="", flush=True)
posterior = self.abduction(evidence, n_abduction_samples)
t_abduction = time.time() - t_abduction
if self.verbose:
print("✓ ({}s)".format(round(t_abduction, 3)))
# Prediction step
t_prediction = time.time()
if self.verbose:
print("Performing Intervention and Prediction... ", end="", flush=True)
samples = self.intervention_prediction(node_of_interest, intervention, posterior, n_posterior_samples)
t_prediction = time.time() - t_prediction
if self.verbose:
print("✓ ({}s)".format(round(t_prediction, 3)))
if not distribution:
return torch.cat(samples).mean().numpy(), torch.cat(samples).std().numpy()
else:
return torch.cat(samples).numpy()
def twin_query(self, node_of_interest, evidence, intervention, n_samples, distribution=False, merge=False):
"""Run the twin network counterfactual inference procedure.
Args:
node_of_interest (str): the name of the node of interest
intervention (dict): a dictionary of {node_name: value} interventions
evidence (dict): a dictionary of {node_name: value} evidence
n_samples (int): the number of samples to take during importance sampling and from the posterior.
distribution (bool): if True, return samples. If False, returns mean and sd. of distribution.
"""
if not self.scm.twin_exists:
self.scm.create_twin_network()
if merge:
self.scm.merge_in_twin(node_of_interest, intervention)
self.G_inference = self.scm.twin_G
intervention = {"{}tn".format(k): torch.tensor([intervention[k]]).double().flatten() for k in intervention}
node_of_interest = "{}tn".format(node_of_interest) if "tn" not in node_of_interest else node_of_interest
self._intv_nodes = [k for k in intervention]
self._noi_nodes = [node_of_interest]
intervened_model = pyro.do(self.model, data=intervention)
intervened_guide = pyro.do(self.guide, data=intervention)
if self.verbose:
print("Performing Twin Network inference... ", end="", flush=True)
t_twin = time.time()
posterior = self.get_posterior(self._noi_nodes,
evidence,
n_samples,
custom_model=intervened_model,
custom_guide=intervened_guide,
twin=True)
t_twin = time.time() - t_twin
if self.verbose:
print("✓ ({}s)".format(round(t_twin, 3)))
self._intv_nodes = []
self._noi_nodes = []
samples = posterior.sample_n(n_samples)
if not distribution:
return samples.mean().numpy(), samples.std().numpy()
else:
return samples
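# --- Hedged, self-contained sketch (not part of the class above) ---
# The same abduction / intervention / prediction recipe on a toy additive-noise
# SCM (X = U1, Y = X + U2), using only calls already used in this file.
if __name__ == "__main__":
    def toy_scm(noise=None):
        u1 = noise["U1"] if noise is not None else pyro.sample("U1", dist.Normal(0., 1.))
        u2 = noise["U2"] if noise is not None else pyro.sample("U2", dist.Normal(0., 1.))
        x = pyro.sample("X", dist.Delta(u1))
        return pyro.sample("Y", dist.Delta(x + u2))
    # 1) Abduction: with additive noise, evidence X=1, Y=2 pins U1=1, U2=1 exactly.
    abducted = {"U1": torch.tensor(1.), "U2": torch.tensor(1.)}
    # 2) Intervention: do(X = 0), mirroring the pyro.do calls above.
    intervened = pyro.do(toy_scm, data={"X": torch.tensor(0.)})
    # 3) Prediction: replay with the abducted noise -> counterfactual Y = 1.
    print(intervened(noise=abducted))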
|
{"hexsha": "baaeb8e23c69ae09f19b66d806ca0c2c88350e0f", "size": 18584, "ext": "py", "lang": "Python", "max_stars_repo_path": "approximate.py", "max_stars_repo_name": "Jude188/TwinNetworks", "max_stars_repo_head_hexsha": "3213358c7ac869e1c72c82554b1a3724ff368bad", "max_stars_repo_licenses": ["MIT", "Unlicense"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-09-06T05:58:28.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-25T12:47:02.000Z", "max_issues_repo_path": "approximate.py", "max_issues_repo_name": "Jude188/TwinNetworks", "max_issues_repo_head_hexsha": "3213358c7ac869e1c72c82554b1a3724ff368bad", "max_issues_repo_licenses": ["MIT", "Unlicense"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-07-15T08:16:18.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-19T11:10:04.000Z", "max_forks_repo_path": "approximate.py", "max_forks_repo_name": "Jude188/TwinNetworks", "max_forks_repo_head_hexsha": "3213358c7ac869e1c72c82554b1a3724ff368bad", "max_forks_repo_licenses": ["MIT", "Unlicense"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-07-03T15:50:39.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-26T07:58:16.000Z", "avg_line_length": 50.0916442049, "max_line_length": 125, "alphanum_fraction": 0.6248923805, "include": true, "reason": "import networkx", "num_tokens": 4026}
|
import re
from io import StringIO
import numpy as np
import pandas
from pycosmosac.molecule import mole, cavity
from pycosmosac.utils import elements
def get_molecule(data):
sdata = re.search(r"!DATE[a-zA-Z0-9:\s]+\n(.+)end\s*", data, re.DOTALL).group(1)
df = pandas.read_csv(StringIO(sdata), names=['atomidentifier','x / A','y / A','z / A','?1','?2','?3','atom','?4'],sep=r'\s+',engine = 'python')
geometry = {}
x = np.asarray(df['x / A'].tolist(), dtype=float)
y = np.asarray(df['y / A'].tolist(), dtype=float)
z = np.asarray(df['z / A'].tolist(), dtype=float)
geometry["xyz"] = np.column_stack((x,y,z))
geometry["atom"] = []
atoms = df['atom'].tolist()
for atom in atoms:
geometry["atom"].append(elements.std_symb(atom))
return geometry
def get_cavity(data):
cav = cavity.Cavity()
cav.area = float(re.search(r"Total surface area of cavity \(A\*\*2\) =(.+)\n", data).group(1).strip())
cav.volume = float(re.search(r"Total volume of cavity \(A\*\*3\) =(.+)\n", data).group(1).strip())
sdata = re.search(r"\(X, Y, Z\)[\sa-zA-Z0-9\[\]\^\./]+\n(.+)(\n\n|$)", data, re.DOTALL).group(1).rstrip()
df = pandas.read_csv(StringIO(sdata), names=['n','atom','x / a.u.','y / a.u.','z / a.u.','charge / e','area / A^2','charge/area / e/A^2','potential'],sep=r'\s+',engine= 'python')
cav.atom_map = np.asarray(df['atom'].tolist(), dtype=int)
x = np.asarray(df['x / a.u.'].tolist(), dtype=float)
y = np.asarray(df['y / a.u.'].tolist(), dtype=float)
z = np.asarray(df['z / a.u.'].tolist(), dtype=float)
cav.segments["xyz"] = np.column_stack((x,y,z))
cav.segments["charge"] = np.asarray(df['charge / e'].tolist(), dtype=float)
cav.segments["area"] = np.asarray(df['area / A^2'].tolist(), dtype=float)
cav.segments["sigma"] = np.asarray(df['charge/area / e/A^2'].tolist(), dtype=float)
return cav
def load(name):
try:
with open(name, 'r') as f:
data = f.read()
geometry = get_molecule(data)
cavity = get_cavity(data)
mol = mole.Mole()
mol.build(geometry = geometry, cavity = cavity)
return mol
except Exception as e:
    raise RuntimeError("failed reading cosmo file %s." % name) from e
if __name__ == "__main__":
from pycosmosac.utils import misc
mol = load("./test/h2o.cosmo")
print(misc.fp(mol.geometry["xyz"]) - -1.3049969366762706)
print(mol.geometry["atom"] == ['O', 'H', 'H'])
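# --- Hedged sketch (fabricated data) of the StringIO + read_csv pattern ---
# used by get_molecule/get_cavity above; the column names are illustrative.
def _whitespace_block_demo():
    sdata = "1 O 0.0 0.0 0.1\n2 H 0.0 0.7 -0.4\n3 H 0.0 -0.7 -0.4\n"
    df = pandas.read_csv(StringIO(sdata), names=['n', 'atom', 'x', 'y', 'z'],
                         sep=r'\s+', engine='python')
    return df['atom'].tolist()  # ['O', 'H', 'H']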
|
{"hexsha": "be2abddf732d1e623a2923f1814c0fe2376273a3", "size": 2486, "ext": "py", "lang": "Python", "max_stars_repo_path": "pycosmosac/cosmo/read_cosmo_bagel.py", "max_stars_repo_name": "fishjojo/pycosmosac", "max_stars_repo_head_hexsha": "9984a0ca2c9093142de60112f4c9a7fe33865946", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-07-28T02:07:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-26T06:25:39.000Z", "max_issues_repo_path": "pycosmosac/cosmo/read_cosmo_bagel.py", "max_issues_repo_name": "fishjojo/pycosmosac", "max_issues_repo_head_hexsha": "9984a0ca2c9093142de60112f4c9a7fe33865946", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pycosmosac/cosmo/read_cosmo_bagel.py", "max_forks_repo_name": "fishjojo/pycosmosac", "max_forks_repo_head_hexsha": "9984a0ca2c9093142de60112f4c9a7fe33865946", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.1355932203, "max_line_length": 182, "alphanum_fraction": 0.5820595334, "include": true, "reason": "import numpy", "num_tokens": 741}
|
[STATEMENT]
lemma lindemann_weierstrass_integral:
fixes u :: complex and f :: "complex poly"
defines "df \<equiv> \<lambda>n. (pderiv ^^ n) f"
defines "m \<equiv> degree f"
defines "I \<equiv> \<lambda>f u. exp u * (\<Sum>j\<le>degree f. poly ((pderiv ^^ j) f) 0) -
(\<Sum>j\<le>degree f. poly ((pderiv ^^ j) f) u)"
shows "((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)
[PROOF STEP]
note [derivative_intros] =
exp_scaleR_has_vector_derivative_right vector_diff_chain_within
[PROOF STATE]
proof (state)
this:
((\<lambda>t. exp (t *\<^sub>R ?A)) has_vector_derivative exp (?t *\<^sub>R ?A) * ?A) (at ?t within ?T)
\<lbrakk>(?f has_vector_derivative ?f') (at ?x within ?s); (?g has_vector_derivative ?g') (at (?f ?x) within ?f ` ?s)\<rbrakk> \<Longrightarrow> (?g \<circ> ?f has_vector_derivative ?f' *\<^sub>R ?g') (at ?x within ?s)
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)
[PROOF STEP]
let ?g = "\<lambda>t. 1 - t" and ?f = "\<lambda>t. -exp (t *\<^sub>R u)"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)
[PROOF STEP]
have "((\<lambda>t. exp ((1 - t) *\<^sub>R u) * u) has_integral
(?f \<circ> ?g) 1 - (?f \<circ> ?g) 0) {0..1}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>t. exp ((1 - t) *\<^sub>R u) * u) has_integral ((\<lambda>t. - exp (t *\<^sub>R u)) \<circ> (-) 1) 1 - ((\<lambda>t. - exp (t *\<^sub>R u)) \<circ> (-) 1) 0) {0..1}
[PROOF STEP]
by (rule fundamental_theorem_of_calculus)
(auto intro!: derivative_eq_intros simp del: o_apply)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. exp ((1 - t) *\<^sub>R u) * u) has_integral ((\<lambda>t. - exp (t *\<^sub>R u)) \<circ> (-) 1) 1 - ((\<lambda>t. - exp (t *\<^sub>R u)) \<circ> (-) 1) 0) {0..1}
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)
[PROOF STEP]
hence aux_integral: "((\<lambda>t. exp (u - t *\<^sub>R u) * u) has_integral exp u - 1) {0..1}"
[PROOF STATE]
proof (prove)
using this:
((\<lambda>t. exp ((1 - t) *\<^sub>R u) * u) has_integral ((\<lambda>t. - exp (t *\<^sub>R u)) \<circ> (-) 1) 1 - ((\<lambda>t. - exp (t *\<^sub>R u)) \<circ> (-) 1) 0) {0..1}
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t *\<^sub>R u) * u) has_integral exp u - 1) {0..1}
[PROOF STEP]
by (simp add: algebra_simps)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u) has_integral exp u - 1) {0..1}
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)
[PROOF STEP]
have "((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
unfolding df_def m_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
proof (induction "degree f" arbitrary: f)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. \<And>f. 0 = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
2. \<And>x f. \<lbrakk>\<And>f. x = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}; Suc x = degree f\<rbrakk> \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
case 0
[PROOF STATE]
proof (state)
this:
0 = degree f
goal (2 subgoals):
1. \<And>f. 0 = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
2. \<And>x f. \<lbrakk>\<And>f. x = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}; Suc x = degree f\<rbrakk> \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
0 = degree f
[PROOF STEP]
obtain c where c: "f = [:c:]"
[PROOF STATE]
proof (prove)
using this:
0 = degree f
goal (1 subgoal):
1. (\<And>c. f = [:c:] \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by (auto elim: degree_eq_zeroE)
[PROOF STATE]
proof (state)
this:
f = [:c:]
goal (2 subgoals):
1. \<And>f. 0 = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
2. \<And>x f. \<lbrakk>\<And>f. x = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}; Suc x = degree f\<rbrakk> \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
have "((\<lambda>t. c * (exp (u - t *\<^sub>R u) * u)) has_integral c * (exp u - 1)) {0..1}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>t. c * (exp (u - t *\<^sub>R u) * u)) has_integral c * (exp u - 1)) {0..1}
[PROOF STEP]
using aux_integral
[PROOF STATE]
proof (prove)
using this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u) has_integral exp u - 1) {0..1}
goal (1 subgoal):
1. ((\<lambda>t. c * (exp (u - t *\<^sub>R u) * u)) has_integral c * (exp u - 1)) {0..1}
[PROOF STEP]
by (rule has_integral_mult_right)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. c * (exp (u - t *\<^sub>R u) * u)) has_integral c * (exp u - 1)) {0..1}
goal (2 subgoals):
1. \<And>f. 0 = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
2. \<And>x f. \<lbrakk>\<And>f. x = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}; Suc x = degree f\<rbrakk> \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
with c
[PROOF STATE]
proof (chain)
picking this:
f = [:c:]
((\<lambda>t. c * (exp (u - t *\<^sub>R u) * u)) has_integral c * (exp u - 1)) {0..1}
[PROOF STEP]
show ?case
[PROOF STATE]
proof (prove)
using this:
f = [:c:]
((\<lambda>t. c * (exp (u - t *\<^sub>R u) * u)) has_integral c * (exp u - 1)) {0..1}
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
by (simp add: algebra_simps I_def)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
goal (1 subgoal):
1. \<And>x f. \<lbrakk>\<And>f. x = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}; Suc x = degree f\<rbrakk> \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>x f. \<lbrakk>\<And>f. x = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}; Suc x = degree f\<rbrakk> \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
case (Suc m)
[PROOF STATE]
proof (state)
this:
m = degree ?f1 \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly ?f1 (t *\<^sub>R u)) has_integral I ?f1 u) {0..1}
Suc m = degree f
goal (1 subgoal):
1. \<And>x f. \<lbrakk>\<And>f. x = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}; Suc x = degree f\<rbrakk> \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
define df where "df = (\<lambda>j. (pderiv ^^ j) f)"
[PROOF STATE]
proof (state)
this:
df = (\<lambda>j. (pderiv ^^ j) f)
goal (1 subgoal):
1. \<And>x f. \<lbrakk>\<And>f. x = degree f \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}; Suc x = degree f\<rbrakk> \<Longrightarrow> ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
show ?case
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
[PROOF STEP]
proof (rule integration_by_parts[OF bounded_bilinear_mult])
[PROOF STATE]
proof (state)
goal (6 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} ?f
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. \<And>t. t \<in> {0..1} \<Longrightarrow> (?f has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)
5. \<And>t. t \<in> {0..1} \<Longrightarrow> ((\<lambda>x. poly f (x *\<^sub>R u)) has_vector_derivative ?g' t) (at t)
6. ((\<lambda>t. ?f t * ?g' t) has_integral ?f 1 * poly f (1 *\<^sub>R u) - ?f 0 * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
fix t :: real
[PROOF STATE]
proof (state)
goal (6 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} ?f
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. \<And>t. t \<in> {0..1} \<Longrightarrow> (?f has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)
5. \<And>t. t \<in> {0..1} \<Longrightarrow> ((\<lambda>x. poly f (x *\<^sub>R u)) has_vector_derivative ?g' t) (at t)
6. ((\<lambda>t. ?f t * ?g' t) has_integral ?f 1 * poly f (1 *\<^sub>R u) - ?f 0 * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
assume "t \<in> {0..1}"
[PROOF STATE]
proof (state)
this:
t \<in> {0..1}
goal (6 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} ?f
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. \<And>t. t \<in> {0..1} \<Longrightarrow> (?f has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)
5. \<And>t. t \<in> {0..1} \<Longrightarrow> ((\<lambda>x. poly f (x *\<^sub>R u)) has_vector_derivative ?g' t) (at t)
6. ((\<lambda>t. ?f t * ?g' t) has_integral ?f 1 * poly f (1 *\<^sub>R u) - ?f 0 * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
have "((?f \<circ> ?g) has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>t. - exp (t *\<^sub>R u)) \<circ> (-) 1 has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)
[PROOF STEP]
by (auto intro!: derivative_eq_intros simp: algebra_simps simp del: o_apply)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. - exp (t *\<^sub>R u)) \<circ> (-) 1 has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)
goal (6 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} ?f
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. \<And>t. t \<in> {0..1} \<Longrightarrow> (?f has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)
5. \<And>t. t \<in> {0..1} \<Longrightarrow> ((\<lambda>x. poly f (x *\<^sub>R u)) has_vector_derivative ?g' t) (at t)
6. ((\<lambda>t. ?f t * ?g' t) has_integral ?f 1 * poly f (1 *\<^sub>R u) - ?f 0 * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
thus "((\<lambda>t. -exp (u - t *\<^sub>R u)) has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)"
[PROOF STATE]
proof (prove)
using this:
((\<lambda>t. - exp (t *\<^sub>R u)) \<circ> (-) 1 has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)
goal (1 subgoal):
1. ((\<lambda>t. - exp (u - t *\<^sub>R u)) has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)
[PROOF STEP]
by (simp add: algebra_simps o_def)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. - exp (u - t *\<^sub>R u)) has_vector_derivative exp (u - t *\<^sub>R u) * u) (at t)
goal (5 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. \<And>t. t \<in> {0..1} \<Longrightarrow> ((\<lambda>x. poly f (x *\<^sub>R u)) has_vector_derivative ?g' t) (at t)
5. ((\<lambda>t. - exp (u - t *\<^sub>R u) * ?g' t) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (5 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. \<And>t. t \<in> {0..1} \<Longrightarrow> ((\<lambda>x. poly f (x *\<^sub>R u)) has_vector_derivative ?g' t) (at t)
5. ((\<lambda>t. - exp (u - t *\<^sub>R u) * ?g' t) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
fix t :: real
[PROOF STATE]
proof (state)
goal (5 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. \<And>t. t \<in> {0..1} \<Longrightarrow> ((\<lambda>x. poly f (x *\<^sub>R u)) has_vector_derivative ?g' t) (at t)
5. ((\<lambda>t. - exp (u - t *\<^sub>R u) * ?g' t) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
assume "t \<in> {0..1}"
[PROOF STATE]
proof (state)
this:
t \<in> {0..1}
goal (5 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. \<And>t. t \<in> {0..1} \<Longrightarrow> ((\<lambda>x. poly f (x *\<^sub>R u)) has_vector_derivative ?g' t) (at t)
5. ((\<lambda>t. - exp (u - t *\<^sub>R u) * ?g' t) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
have "(poly f \<circ> (\<lambda>t. t *\<^sub>R u) has_vector_derivative u * poly (pderiv f) (t *\<^sub>R u)) (at t)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (poly f \<circ> (\<lambda>t. t *\<^sub>R u) has_vector_derivative u * poly (pderiv f) (t *\<^sub>R u)) (at t)
[PROOF STEP]
by (rule field_vector_diff_chain_at) (auto intro!: derivative_eq_intros)
[PROOF STATE]
proof (state)
this:
(poly f \<circ> (\<lambda>t. t *\<^sub>R u) has_vector_derivative u * poly (pderiv f) (t *\<^sub>R u)) (at t)
goal (5 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. \<And>t. t \<in> {0..1} \<Longrightarrow> ((\<lambda>x. poly f (x *\<^sub>R u)) has_vector_derivative ?g' t) (at t)
5. ((\<lambda>t. - exp (u - t *\<^sub>R u) * ?g' t) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
thus "((\<lambda>t. poly f (t *\<^sub>R u)) has_vector_derivative u * poly (pderiv f) (t *\<^sub>R u)) (at t)"
[PROOF STATE]
proof (prove)
using this:
(poly f \<circ> (\<lambda>t. t *\<^sub>R u) has_vector_derivative u * poly (pderiv f) (t *\<^sub>R u)) (at t)
goal (1 subgoal):
1. ((\<lambda>t. poly f (t *\<^sub>R u)) has_vector_derivative u * poly (pderiv f) (t *\<^sub>R u)) (at t)
[PROOF STEP]
by (simp add: o_def)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. poly f (t *\<^sub>R u)) has_vector_derivative u * poly (pderiv f) (t *\<^sub>R u)) (at t)
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
from Suc(2)
[PROOF STATE]
proof (chain)
picking this:
Suc m = degree f
[PROOF STEP]
have m: "m = degree (pderiv f)"
[PROOF STATE]
proof (prove)
using this:
Suc m = degree f
goal (1 subgoal):
1. m = degree (pderiv f)
[PROOF STEP]
by (simp add: degree_pderiv)
[PROOF STATE]
proof (state)
this:
m = degree (pderiv f)
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
from Suc(1)[OF this] this
[PROOF STATE]
proof (chain)
picking this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u)) has_integral I (pderiv f) u) {0..1}
m = degree (pderiv f)
[PROOF STEP]
have "((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u)) has_integral
exp u * (\<Sum>j=0..m. poly (df (Suc j)) 0) - (\<Sum>j=0..m. poly (df (Suc j)) u)) {0..1}"
[PROOF STATE]
proof (prove)
using this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u)) has_integral I (pderiv f) u) {0..1}
m = degree (pderiv f)
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u)) has_integral exp u * (\<Sum>j = 0..m. poly (df (Suc j)) 0) - (\<Sum>j = 0..m. poly (df (Suc j)) u)) {0..1}
[PROOF STEP]
by (simp add: df_def funpow_swap1 atMost_atLeast0 I_def)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u)) has_integral exp u * (\<Sum>j = 0..m. poly (df (Suc j)) 0) - (\<Sum>j = 0..m. poly (df (Suc j)) u)) {0..1}
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u)) has_integral exp u * (\<Sum>j = 0..m. poly (df (Suc j)) 0) - (\<Sum>j = 0..m. poly (df (Suc j)) u)) {0..1}
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
have "(\<Sum>j=0..m. poly (df (Suc j)) 0) = (\<Sum>j=Suc 0..Suc m. poly (df j) 0)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<Sum>j = 0..m. poly (df (Suc j)) 0) = (\<Sum>j = Suc 0..Suc m. poly (df j) 0)
[PROOF STEP]
by (rule sum.shift_bounds_cl_Suc_ivl [symmetric])
[PROOF STATE]
proof (state)
this:
(\<Sum>j = 0..m. poly (df (Suc j)) 0) = (\<Sum>j = Suc 0..Suc m. poly (df j) 0)
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
(\<Sum>j = 0..m. poly (df (Suc j)) 0) = (\<Sum>j = Suc 0..Suc m. poly (df j) 0)
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
have "\<dots> = (\<Sum>j=0..Suc m. poly (df j) 0) - poly f 0"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<Sum>j = Suc 0..Suc m. poly (df j) 0) = (\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0
[PROOF STEP]
by (subst (2) sum.atLeast_Suc_atMost) (simp_all add: df_def)
[PROOF STATE]
proof (state)
this:
(\<Sum>j = Suc 0..Suc m. poly (df j) 0) = (\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
(\<Sum>j = Suc 0..Suc m. poly (df j) 0) = (\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
have "(\<Sum>j=0..m. poly (df (Suc j)) u) = (\<Sum>j=Suc 0..Suc m. poly (df j) u)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<Sum>j = 0..m. poly (df (Suc j)) u) = (\<Sum>j = Suc 0..Suc m. poly (df j) u)
[PROOF STEP]
by (rule sum.shift_bounds_cl_Suc_ivl [symmetric])
[PROOF STATE]
proof (state)
this:
(\<Sum>j = 0..m. poly (df (Suc j)) u) = (\<Sum>j = Suc 0..Suc m. poly (df j) u)
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
(\<Sum>j = 0..m. poly (df (Suc j)) u) = (\<Sum>j = Suc 0..Suc m. poly (df j) u)
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
have "\<dots> = (\<Sum>j=0..Suc m. poly (df j) u) - poly f u"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<Sum>j = Suc 0..Suc m. poly (df j) u) = (\<Sum>j = 0..Suc m. poly (df j) u) - poly f u
[PROOF STEP]
by (subst (2) sum.atLeast_Suc_atMost) (simp_all add: df_def)
[PROOF STATE]
proof (state)
this:
(\<Sum>j = Suc 0..Suc m. poly (df j) u) = (\<Sum>j = 0..Suc m. poly (df j) u) - poly f u
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u)) has_integral exp u * ((\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0) - ((\<Sum>j = 0..Suc m. poly (df j) u) - poly f u)) {0..1}
[PROOF STEP]
have "((\<lambda>t. - (exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u))) has_integral
-(exp u * ((\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0) -
((\<Sum>j = 0..Suc m. poly (df j) u) - poly f u))) {0..1}"
(is "(_ has_integral ?I) _")
[PROOF STATE]
proof (prove)
using this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u)) has_integral exp u * ((\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0) - ((\<Sum>j = 0..Suc m. poly (df j) u) - poly f u)) {0..1}
goal (1 subgoal):
1. ((\<lambda>t. - (exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u))) has_integral - (exp u * ((\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0) - ((\<Sum>j = 0..Suc m. poly (df j) u) - poly f u))) {0..1}
[PROOF STEP]
by (rule has_integral_neg)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. - (exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u))) has_integral - (exp u * ((\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0) - ((\<Sum>j = 0..Suc m. poly (df j) u) - poly f u))) {0..1}
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
((\<lambda>t. - (exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u))) has_integral - (exp u * ((\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0) - ((\<Sum>j = 0..Suc m. poly (df j) u) - poly f u))) {0..1}
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
have "?I = - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) -
- exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. - (exp u * ((\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0) - ((\<Sum>j = 0..Suc m. poly (df j) u) - poly f u)) = - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u
[PROOF STEP]
by (simp add: df_def algebra_simps Suc(2) atMost_atLeast0 I_def)
[PROOF STATE]
proof (state)
this:
- (exp u * ((\<Sum>j = 0..Suc m. poly (df j) 0) - poly f 0) - ((\<Sum>j = 0..Suc m. poly (df j) u) - poly f u)) = - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u
goal (4 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
4. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
((\<lambda>t. - (exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
show "((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u)))
has_integral \<dots>) {0..1}"
[PROOF STATE]
proof (prove)
using this:
((\<lambda>t. - (exp (u - t *\<^sub>R u) * u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
goal (1 subgoal):
1. ((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
[PROOF STEP]
by (simp add: algebra_simps)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. - exp (u - t *\<^sub>R u) * (u * poly (pderiv f) (t *\<^sub>R u))) has_integral - exp (u - 1 *\<^sub>R u) * poly f (1 *\<^sub>R u) - - exp (u - 0 *\<^sub>R u) * poly f (0 *\<^sub>R u) - I f u) {0..1}
goal (3 subgoals):
1. 0 \<le> 1
2. continuous_on {0..1} (\<lambda>t. - exp (u - t *\<^sub>R u))
3. continuous_on {0..1} (\<lambda>x. poly f (x *\<^sub>R u))
[PROOF STEP]
qed (auto intro!: continuous_intros)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)
[PROOF STEP]
thus ?thesis
[PROOF STATE]
proof (prove)
using this:
((\<lambda>t. exp (u - t *\<^sub>R u) * u * poly f (t *\<^sub>R u)) has_integral I f u) {0..1}
goal (1 subgoal):
1. ((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)
[PROOF STEP]
by (simp add: has_contour_integral_linepath algebra_simps)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. exp (u - t) * poly f t) has_contour_integral I f u) (linepath 0 u)
goal:
No subgoals!
[PROOF STEP]
qed
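(* In conventional notation, the lemma just proved is the classical Hermite
   integral identity used in transcendence proofs:
     \int_0^u e^(u-t) f(t) dt
       = e^u * (\Sum_{j <= deg f} f^(j)(0)) - (\Sum_{j <= deg f} f^(j)(u)),
   i.e. exactly the I from the defines clause, obtained by repeatedly
   integrating by parts along the line path from 0 to u. *)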
|
{"llama_tokens": 14321, "file": "E_Transcendental_E_Transcendental", "length": 71}
|
import numpy as np
from pySDC.core.Errors import ParameterError
from pySDC.core.Problem import ptype
from pySDC.implementations.datatype_classes.mesh import mesh, imex_mesh
class buck_converter(ptype):
"""
Example implementing the buck converter model, as described in the PinTSimE project
Attributes:
A: system matrix, representing the 3 ODEs
"""
def __init__(self, problem_params, dtype_u=mesh, dtype_f=imex_mesh):
"""
Initialization routine
Args:
problem_params (dict): custom parameters for the example
dtype_u: mesh data type for solution
dtype_f: mesh data type for RHS
"""
problem_params['nvars'] = 3
# these parameters will be used later, so assert their existence
essential_keys = ['duty', 'fsw', 'Vs', 'Rs', 'C1', 'Rp', 'L1', 'C2', 'Rl']
for key in essential_keys:
if key not in problem_params:
msg = 'need %s to instantiate problem, only got %s' % (key, str(problem_params.keys()))
raise ParameterError(msg)
# invoke super init, passing number of dofs, dtype_u and dtype_f
super(buck_converter, self).__init__(init=(problem_params['nvars'], None, np.dtype('float64')),
dtype_u=dtype_u, dtype_f=dtype_f, params=problem_params)
self.A = np.zeros((3, 3))
def eval_f(self, u, t):
"""
Routine to evaluate the RHS
Args:
u (dtype_u): current values
t (float): current time
Returns:
dtype_f: the RHS
"""
Tsw = 1 / self.params.fsw
f = self.dtype_f(self.init, val=0.0)
f.impl[:] = self.A.dot(u)
if 0 <= ((t / Tsw) % 1) <= self.params.duty:
f.expl[0] = self.params.Vs / (self.params.Rs * self.params.C1)
f.expl[2] = 0
else:
f.expl[0] = self.params.Vs / (self.params.Rs * self.params.C1)
f.expl[2] = -(self.params.Rp * self.params.Vs) / (self.params.L1 * self.params.Rs)
return f
def solve_system(self, rhs, factor, u0, t):
"""
Simple linear solver for (I-factor*A)u = rhs
Args:
rhs (dtype_f): right-hand side for the linear system
factor (float): abbrev. for the local stepsize (or any other factor required)
u0 (dtype_u): initial guess for the iterative solver
t (float): current time (e.g. for time-dependent BCs)
Returns:
dtype_u: solution as mesh
"""
Tsw = 1 / self.params.fsw
self.A = np.zeros((3, 3))
if 0 <= ((t / Tsw) % 1) <= self.params.duty:
self.A[0, 0] = -1 / (self.params.C1 * self.params.Rs)
self.A[0, 2] = -1 / self.params.C1
self.A[1, 1] = -1 / (self.params.C2 * self.params.Rl)
self.A[1, 2] = 1 / self.params.C2
self.A[2, 0] = 1 / self.params.L1
self.A[2, 1] = -1 / self.params.L1
self.A[2, 2] = -self.params.Rp / self.params.L1
else:
self.A[0, 0] = -1 / (self.params.C1 * self.params.Rs)
self.A[1, 1] = -1 / (self.params.C2 * self.params.Rl)
self.A[1, 2] = 1 / self.params.C2
self.A[2, 0] = self.params.Rp / (self.params.L1 * self.params.Rs)
self.A[2, 1] = -1 / self.params.L1
me = self.dtype_u(self.init)
me[:] = np.linalg.solve(np.eye(self.params.nvars) - factor * self.A, rhs)
return me
def u_exact(self, t):
"""
Routine to compute the exact solution at time t
Args:
t (float): current time
Returns:
dtype_u: exact solution
"""
me = self.dtype_u(self.init)
me[0] = 0.0 # v1
me[1] = 0.0 # v2
me[2] = 0.0 # p3
return me
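# --- Minimal usage sketch (not part of the original module) ---
# The parameter values below are hypothetical placeholders, chosen only to
# illustrate the expected call pattern; this assumes pySDC and its mesh
# datatypes are importable as above.
if __name__ == '__main__':
    demo_params = {'duty': 0.5, 'fsw': 1e3, 'Vs': 10.0, 'Rs': 0.5,
                   'C1': 1e-3, 'Rp': 0.01, 'L1': 1e-3, 'C2': 1e-3, 'Rl': 10.0}
    prob = buck_converter(problem_params=demo_params)
    u0 = prob.u_exact(0.0)                       # zero initial state
    f0 = prob.eval_f(u0, 0.0)                    # IMEX-split RHS: f0.impl + f0.expl
    u1 = prob.solve_system(u0, 1e-5, u0, 1e-5)   # one (I - dt*A)u = rhs solve
    print(u1)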
|
{"hexsha": "d77f850ab187d7bfdc85e2f6559dd5f5964ffd4c", "size": 3906, "ext": "py", "lang": "Python", "max_stars_repo_path": "pySDC/implementations/problem_classes/BuckConverter.py", "max_stars_repo_name": "brownbaerchen/pySDC", "max_stars_repo_head_hexsha": "31293859d731646aa09cef4345669eac65501550", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "pySDC/implementations/problem_classes/BuckConverter.py", "max_issues_repo_name": "brownbaerchen/pySDC", "max_issues_repo_head_hexsha": "31293859d731646aa09cef4345669eac65501550", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pySDC/implementations/problem_classes/BuckConverter.py", "max_forks_repo_name": "brownbaerchen/pySDC", "max_forks_repo_head_hexsha": "31293859d731646aa09cef4345669eac65501550", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.8235294118, "max_line_length": 103, "alphanum_fraction": 0.5417306708, "include": true, "reason": "import numpy", "num_tokens": 1090}
|
import numpy as np
import cv2
from keras.models import load_model
from mtcnn.mtcnn import MTCNN
from PIL import Image
from sklearn.svm import SVC
from SVMclassifier import model as svm
from SVMclassifier import out_encoder
model = load_model('../models/facenet_keras.h5')
# get the face embedding for one face
def get_embedding(model, face_pixels):
# scale pixel values
face_pixels = face_pixels.astype('float32')
# standardize pixel values across channels (global)
mean, std = face_pixels.mean(), face_pixels.std()
face_pixels = (face_pixels - mean) / std
print(face_pixels.shape)
# transform face into one sample
#expand dims adds a new dimension to the tensor
samples = np.expand_dims(face_pixels, axis=0)
print(samples.shape)
# make prediction to get embedding
yhat = model.predict(samples)
return yhat[0]
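# Shape walk-through (illustrative; assumes the widely used facenet_keras.h5
# weights, whose output layer is 128-dimensional):
#   face_pixels (160, 160, 3) -> samples (1, 160, 160, 3) -> yhat (1, 128),
# so get_embedding returns one 128-d embedding vector per face.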
faceCascade = cv2.CascadeClassifier('C:/Users/alisy/OneDrive/Documents/AIandMachineLearning/FaceNet-FriendScan/Real-Time_Face-Recognition-System/haarcascades/haarcascade_frontalface_default.xml')
video_capture = cv2.VideoCapture(0)
while True:
# Capture frame-by-frame
    ret, frame = video_capture.read()
    if not ret:
        # camera frame not available; stop cleanly instead of crashing in cvtColor
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(
gray,
scaleFactor=1.1,
minNeighbors=5,
minSize=(30, 30),
flags= cv2.CASCADE_SCALE_IMAGE
)
# Draw a rectangle around the faces and predict the face name
for (x, y, w, h) in faces:
cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
#take the face pixels from the frame
crop_frame = frame[y:y+h, x:x+w]
#turn the face pixels back into an image
new_crop = Image.fromarray(crop_frame)
        #resize the image to meet the size requirement of facenet
new_crop = new_crop.resize((160, 160))
#turn the image back into a tensor
crop_frame = np.asarray(new_crop)
#get the face embedding using the face net model
face_embed = get_embedding(model, crop_frame)
        #it is a 1d array; reshape it into a 2d tensor for the svm
face_embed = face_embed.reshape(-1, face_embed.shape[0])
print(face_embed.shape)
#predict using our SVM model
pred = svm.predict(face_embed)
        #get the prediction probability
pred_prob = svm.predict_proba(face_embed)
#pred_prob has probabilities of each class
print(pred_prob)
# get name
class_index = pred[0]
class_probability = pred_prob[0,class_index] * 100
predict_names = out_encoder.inverse_transform(pred)
text = 'Predicted: %s (%.3f%%)' % (predict_names[0], class_probability)
#add the name to frame but only if the pred is above a certain threshold
if (class_probability > 60):
cv2.putText(frame, text, (x, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
# Display the resulting frame
cv2.imshow('Video', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
|
{"hexsha": "1fe97dde192c7abb68781329d90b28256715e635", "size": 3139, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/real_time_face_rec.py", "max_stars_repo_name": "esgyu/AI_PROJECT_SERVER", "max_stars_repo_head_hexsha": "b0c1d1ac44bd88d5a32920065bfbc3844c649fbc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/real_time_face_rec.py", "max_issues_repo_name": "esgyu/AI_PROJECT_SERVER", "max_issues_repo_head_hexsha": "b0c1d1ac44bd88d5a32920065bfbc3844c649fbc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/real_time_face_rec.py", "max_forks_repo_name": "esgyu/AI_PROJECT_SERVER", "max_forks_repo_head_hexsha": "b0c1d1ac44bd88d5a32920065bfbc3844c649fbc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.6704545455, "max_line_length": 195, "alphanum_fraction": 0.6830200701, "include": true, "reason": "import numpy", "num_tokens": 795}
|
from __future__ import division
import numpy as np
from ..utils.importing import import_file
class ObjectDetector(object):
"""
Object detection workflow.
    This workflow is used for image object detection tasks, typically
    when the dataset cannot be stored in memory.
    Submissions need to contain one file, which by default is named
    object_detector.py (the name can be modified by changing
    `workflow_element_names`). It needs an `ObjectDetector` class,
    which implements `fit` and `predict`.
    Parameters
    ==========
    workflow_element_names : list of str
        names (without the .py extension) of the files that make up a
        submission; defaults to ['object_detector'].
    """
def __init__(self, workflow_element_names=['object_detector']):
self.element_names = workflow_element_names
def train_submission(self, module_path, X, y, train_is=None):
"""Train a ObjectDetector.
module_path : str
            path to the module where the submission is; the folder of the
            module has to contain object_detector.py.
X : ArrayContainer vector of int
vector of image data to train on
y : vector of lists
vector of object labels corresponding to X
train_is : vector of int
indices from X_array to train on
"""
if train_is is None:
train_is = slice(None, None, None)
# object detector model
detector = import_file(module_path, self.element_names[0])
clf = detector.ObjectDetector()
# train and return fitted model
clf.fit(X[train_is], y[train_is])
return clf
def test_submission(self, trained_model, X):
"""Test an ObjectDetector.
trained_model
Trained model returned by `train_submission`.
X : ArrayContainer of int
Vector of image data to test on.
"""
clf = trained_model
y_pred = clf.predict(X)
return y_pred
class BatchGeneratorBuilder(object):
"""A batch generator builder for generating batches of images on the fly.
    This class is a way to build training and validation generators
    that yield, at each iteration, a tuple (X, y) of mini-batches.
The generators are built in a way to fit into keras API of `fit_generator`
(see https://keras.io/models/model/).
The fit function from `Classifier` should then use the instance
to build train and validation generators, using the method
`get_train_valid_generators`
Parameters
==========
X_array : ArrayContainer of int
vector of image data to train on
y_array : vector of int
vector of object labels corresponding to `X_array`
"""
def __init__(self, X_array, y_array):
self.X_array = X_array
self.y_array = y_array
self.nb_examples = len(X_array)
def get_train_valid_generators(self, batch_size=256, valid_ratio=0.1):
"""Build train and valid generators for keras.
        This method is used by the user-defined `Classifier` to build train
and valid generators that will be used in keras `fit_generator`.
Parameters
==========
batch_size : int
size of mini-batches
valid_ratio : float between 0 and 1
ratio of validation data
Returns
=======
a 4-tuple (gen_train, gen_valid, nb_train, nb_valid) where:
- gen_train is a generator function for training data
- gen_valid is a generator function for valid data
- nb_train is the number of training examples
- nb_valid is the number of validation examples
        The numbers of training and validation examples are necessary
so that we can use the keras method `fit_generator`.
"""
nb_valid = int(valid_ratio * self.nb_examples)
nb_train = self.nb_examples - nb_valid
indices = np.arange(self.nb_examples)
train_indices = indices[0:nb_train]
valid_indices = indices[nb_train:]
gen_train = self._get_generator(
indices=train_indices, batch_size=batch_size)
gen_valid = self._get_generator(
indices=valid_indices, batch_size=batch_size)
return gen_train, gen_valid, nb_train, nb_valid
def _get_generator(self, indices=None, batch_size=256):
if indices is None:
indices = np.arange(self.nb_examples)
# Infinite loop, as required by keras `fit_generator`.
            # However, since we provide the number of examples per epoch
            # and the user specifies the total number of epochs, training
            # can still terminate.
while True:
X = self.X_array[indices]
y = self.y_array[indices]
# converting to float needed?
# X = np.array(X, dtype='float32')
# Yielding mini-batches
for i in range(0, len(X), batch_size):
yield X[i:i + batch_size], y[i:i + batch_size]
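if __name__ == '__main__':
    # Smoke test (not part of the original module): exercise the generator
    # builder on small synthetic arrays; the shapes here are arbitrary.
    X_demo = np.arange(20 * 4).reshape(20, 4)
    y_demo = np.arange(20)
    builder = BatchGeneratorBuilder(X_demo, y_demo)
    gen_train, gen_valid, nb_train, nb_valid = \
        builder.get_train_valid_generators(batch_size=8, valid_ratio=0.1)
    X_batch, y_batch = next(gen_train)  # first mini-batch of the infinite stream
    assert len(X_batch) == len(y_batch) <= 8
    print(nb_train, nb_valid, X_batch.shape)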
|
{"hexsha": "fe3356435421c4b30c7cc10371472fb392611ae8", "size": 5706, "ext": "py", "lang": "Python", "max_stars_repo_path": "rampwf/workflows/object_detector.py", "max_stars_repo_name": "mehdidc/ramp-workflow", "max_stars_repo_head_hexsha": "68146005369b31c1c855c2372172d355440994a1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "rampwf/workflows/object_detector.py", "max_issues_repo_name": "mehdidc/ramp-workflow", "max_issues_repo_head_hexsha": "68146005369b31c1c855c2372172d355440994a1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-01-18T09:47:03.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-20T15:33:11.000Z", "max_forks_repo_path": "rampwf/workflows/object_detector.py", "max_forks_repo_name": "mehdidc/ramp-workflow", "max_forks_repo_head_hexsha": "68146005369b31c1c855c2372172d355440994a1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.7926829268, "max_line_length": 79, "alphanum_fraction": 0.6521205748, "include": true, "reason": "import numpy", "num_tokens": 1231}
|
# -*- coding: utf-8 -*-
"""
Created on Tue Oct 24 15:03:29 2017
@author: r.dewinter
"""
from predictorEGO import predictorEGO
from hypervolume import hypervolume
from paretofrontFeasible import paretofrontFeasible
import copy
import numpy as np
import time
import pygmo as pg
def optimizeSMSEGOcriterion(x, model, ref, paretoFront, currentHV, epsilon, gain):
'''
Slightly modified infill Criterion of the SMSEGO
Ponweiser, Wagner et al. (Proc. 2008 PPSN, pp. 784-794)
***********************************************************************
    call: optimizeSMSEGOcriterion(x, model, ref, paretoFront, currentHV, epsilon, gain)
arguments
x: decision vector to be evaluated
model: d-dimensional cell array of models for each objective
ref: d-dimensional anti-ideal reference point
paretoFront: current Pareto front approximation
currentHV: hypervolume of current front with respect to ref
epsilon: epsilon to use in additive epsilon dominance
gain: gain factor for sigma (optional)
'''
# print(x,'objectiv')
nObj = len(model)
mu = nObj*[None]
mse = nObj*[None]
for i in range(nObj):
[mu[i], _, mse[i]] = predictorEGO(x, model[i])
sigma = np.sqrt(mse)
potentialSolution = np.ndarray.flatten(mu - gain*sigma)
penalty = 0
logicBool = np.all(paretoFront<= potentialSolution+epsilon, axis=1)
for j in range(paretoFront.shape[0]):
if logicBool[j]:
p = - 1 + np.prod(1 + (potentialSolution-paretoFront[j,:]))
penalty = max(penalty, p)
if penalty == 0: #non-dominated solutions
potentialFront = np.append(paretoFront, [potentialSolution], axis=0)
myhv = hypervolume(potentialFront, ref)
f = currentHV - myhv
else:
f = penalty
# print(f,'objectiv')
return f
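# --- Worked toy example of the epsilon-dominance penalty (not part of the
# original file; the numbers are hypothetical and the snippet is independent
# of predictorEGO and hypervolume) ---
if __name__ == '__main__':
    pf = np.array([[1.0, 2.0], [2.0, 1.0]])  # toy Pareto front (minimisation)
    cand = np.array([1.5, 2.5])              # candidate eps-dominated by pf[0]
    eps = 0.1
    penalty = 0.0
    for point in pf:
        if np.all(point <= cand + eps):
            penalty = max(penalty, -1 + np.prod(1 + (cand - point)))
    print(penalty)  # 1.25 > 0: dominated candidates are penalised instead of
                    # being scored by their hypervolume gain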
|
{"hexsha": "94b2c8f44558a2ab38d8b689dcc3912ad4b8bcb2", "size": 1970, "ext": "py", "lang": "Python", "max_stars_repo_path": "CEGO/optimizeSMSEGOcriterion.py", "max_stars_repo_name": "napa-jmm/CEGO", "max_stars_repo_head_hexsha": "172d511133a608ca5bf265d9ebd2937b8a171b3e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-07-18T06:38:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-17T21:01:40.000Z", "max_issues_repo_path": "CEGO/optimizeSMSEGOcriterion.py", "max_issues_repo_name": "napa-jmm/CEGO", "max_issues_repo_head_hexsha": "172d511133a608ca5bf265d9ebd2937b8a171b3e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CEGO/optimizeSMSEGOcriterion.py", "max_forks_repo_name": "napa-jmm/CEGO", "max_forks_repo_head_hexsha": "172d511133a608ca5bf265d9ebd2937b8a171b3e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-10-15T09:35:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-08T13:40:19.000Z", "avg_line_length": 35.1785714286, "max_line_length": 83, "alphanum_fraction": 0.5974619289, "include": true, "reason": "import numpy", "num_tokens": 497}
|
"""Reconstruct current density distribution of Maryland multigate device.
Device ID: JS311_2HB-2JJ-5MGJJ-MD-001_MG2.
Scan ID: JS311-BHENL001-2JJ-2HB-5MGJJ-MG2-060.
Fridge: vector9
This scan contains Fraunhofer data for a linear multigate -1-2-3-4-5-
Gates 1 and 5 are grounded; gates 2 and 4 are shorted.
Both Vg3 and Vg2(=Vg4) are swept independently.
"""
from pathlib import Path
import numpy as np
from matplotlib import pyplot as plt
from scipy import constants as cs
from shabanipy.jj.fraunhofer.deterministic_reconstruction import (
extract_current_distribution,
)
from shabanipy.jj.fraunhofer.utils import find_fraunhofer_center, symmetrize_fraunhofer
from shabanipy.jj.utils import extract_switching_current
from shabanipy.labber import LabberData, get_data_dir
LABBER_DATA_DIR = get_data_dir()
DATA_FILE_PATH = (
Path(LABBER_DATA_DIR)
/ "2020/12/Data_1202/JS311-BHENL001-2JJ-2HB-5MGJJ-MG2-060.hdf5"
)
# channel names
CH_GATE_3 = "SM1 - Source voltage"
CH_GATE_2_4 = "SM2 - Source voltage"
CH_MAGNET = "Magnet Source - Source current"
CH_RESIST = "VITracer - VI curve"
# Coil current to B-field conversion factor.
# The new sample holder is perpendicular to the old one;
# the conversion factor along the new axis is 30mA to 1mT.
CURR_TO_FIELD = 1 / 30
# constants
PHI0 = cs.h / (2 * cs.e) # magnetic flux quantum
JJ_WIDTH = 4e-6
# The effective junction length is largely unknown due to thin-film penetration depth
# and flux focusing effects; nominally 100nm.
JJ_LENGTH = 1200e-9
FIELD_TO_WAVENUM = 2 * np.pi * JJ_LENGTH / PHI0 # B-field to beta wavenumber
PERIOD = 2 * np.pi / (FIELD_TO_WAVENUM * JJ_WIDTH)
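# Quick sanity check: with JJ_WIDTH = 4 um and effective JJ_LENGTH = 1.2 um,
# PERIOD = PHI0 / (JJ_LENGTH * JJ_WIDTH) ~ 4.3e-4 T, i.e. one Fraunhofer lobe
# roughly every 0.43 mT of applied field.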
with LabberData(DATA_FILE_PATH) as f:
# NOTE: The use of np.unique assumes the gate, field, and bias values are identical
# for each sweep. This is true for the current datafile but may not hold in general.
# NOTE: Also, in this case we have to manually correct some Labber shenanigans by
# flipping some data.
gate_3 = np.flip(np.unique(f.get_data(CH_GATE_3)))
gate_2_4 = np.flip(np.unique(f.get_data(CH_GATE_2_4)))
field = np.unique(f.get_data(CH_MAGNET)) * CURR_TO_FIELD
# Bias current from the custom Labber driver VICurveTracer isn't available via
# LabberData methods.
bias = np.unique(f._file["/Traces/VITracer - VI curve"][:, 1, :])
resist = f.get_data(CH_RESIST)
# extract_switching_current chokes on 1D arrays. Construct the ndarray of bias sweeps
# for each (gate, field) to match the shape of the resistance ndarray
ic = extract_switching_current(
np.tile(bias, resist.shape[:-1] + (1,)), resist, threshold=2.96e-3,
)
# NOTE: Here, every other fraunhofer appears flipped horizontally (i.e. field B -> -B)
# when compared to Labber's Log Viewer. However, Labber's Log Viewer shows a field
# offset that systematically changes sign on every other fraunhofer. This suggests that
# the Log Viewer incorrectly flips every other fraunhofer. To recover the data as
# viewed in Log Viewer, uncomment this line. The fraunhofers are centered and
# symmetrized before current reconstruction, so it shouldn't matter.
# ic[:, 1::2, :] = np.flip(ic[:, 1::2, :], axis=-1)
# 183 is the largest number of points returned by symmetrize_fraunhofer
# extract_current_distribution then returns max 183*2 = 366 points
POINTS = 366
x = np.empty(shape=ic.shape[:-1] + (POINTS,))
jx = np.empty(shape=ic.shape[:-1] + (POINTS,))
for i, g3 in enumerate(gate_3):
for j, g24 in enumerate(gate_2_4):
ic_ = ic[i, j]
field_ = field - find_fraunhofer_center(field, ic_)
field_, ic_ = symmetrize_fraunhofer(field_, ic_)
x_, jx_ = extract_current_distribution(
field_, ic_, FIELD_TO_WAVENUM, JJ_WIDTH, len(field_)
)
x[i, j] = np.pad(x_, (POINTS - len(x_)) // 2, mode="edge")
jx[i, j] = np.pad(jx_, (POINTS - len(jx_)) // 2, mode="edge")
# There are 11x10 fraunhofers, 1 for each (Vg3, Vg2=Vg4) combination.
# Make 21 plots by fixing Vg3 and sweeping over Vg2=Vg4, and vice versa.
cmap = plt.get_cmap("inferno")
for i, g3 in enumerate(gate_3):
fig, ax = plt.subplots(constrained_layout=True)
ax.set_title(r"$V_\mathrm{g1,g5} = 0$, $V_\mathrm{g3} = $" + f"{g3} V")
ax.set_xlabel(r"$B_\perp$ (mT)")
ax.set_ylabel(r"$I_c$ (μA)")
lines = ax.plot(field * 1e3, np.transpose(ic[i]) * 1e6)
for l, line in enumerate(lines):
line.set_color(cmap(l / len(lines)))
lines[0].set_label(gate_2_4[0])
lines[-1].set_label(gate_2_4[-1])
ax.legend(title=r"$V_\mathrm{g2,g4}$ (V)")
fig.savefig(f"plots/060_fraunhofer_Vg3={g3}.pdf", format="pdf")
plt.close(fig=fig)
fig, ax = plt.subplots(constrained_layout=True)
ax.set_title(r"$V_\mathrm{g1,g5} = 0$, $V_\mathrm{g3} = $" + f"{g3} V")
ax.set_xlabel(r"$x$ (μm)")
ax.set_ylabel(r"$J(x)$ (μA/μm)")
for j, g24 in enumerate(gate_2_4):
ax.plot(x[i, j] * 1e6, jx[i, j], color=cmap(j / len(gate_2_4)))
lines = ax.get_lines()
lines[0].set_label(gate_2_4[0])
lines[-1].set_label(gate_2_4[-1])
ax.legend(title=r"$V_\mathrm{g2,g4}$ (V)")
fig.savefig(f"plots/060_current-density_Vg3={g3}.pdf", format="pdf")
plt.close(fig=fig)
for j, g24 in enumerate(gate_2_4):
fig, ax = plt.subplots(constrained_layout=True)
ax.set_title(r"$V_\mathrm{g1,g5} = 0$, $V_\mathrm{g2,g4} = $" + f"{g24} V")
ax.set_xlabel(r"$B_\perp$ (mT)")
ax.set_ylabel(r"$I_c$ (μA)")
lines = ax.plot(field * 1e3, np.transpose(ic[:, j]) * 1e6)
for l, line in enumerate(lines):
line.set_color(cmap(l / len(lines)))
lines[0].set_label(gate_3[0])
lines[-1].set_label(gate_3[-1])
ax.legend(title=r"$V_\mathrm{g3}$ (V)")
fig.savefig(f"plots/060_fraunhofer_Vg24={g24}.pdf", format="pdf")
plt.close(fig=fig)
fig, ax = plt.subplots(constrained_layout=True)
ax.set_title(r"$V_\mathrm{g1,g5} = 0$, $V_\mathrm{g2,g4} = $" + f"{g24} V")
ax.set_xlabel(r"$x$ (μm)")
ax.set_ylabel(r"$J(x)$ (μA/μm)")
for i, g3 in enumerate(gate_3):
ax.plot(x[i, j] * 1e6, jx[i, j], color=cmap(i / len(gate_3)))
lines = ax.get_lines()
lines[0].set_label(gate_3[0])
lines[-1].set_label(gate_3[-1])
ax.legend(title=r"$V_\mathrm{g3}$ (V)")
fig.savefig(f"plots/060_current-density_Vg24={g24}.pdf", format="pdf")
plt.close(fig=fig)
|
{"hexsha": "afa41bbf2fb9fdb9feddcb9472e2bebe0fe8d408", "size": 6331, "ext": "py", "lang": "Python", "max_stars_repo_path": "scripts/jj/JS311_2HB-2JJ-5MGJJ-MD-001/analyze_md_multigate_060.py", "max_stars_repo_name": "ShabaniLab/DataAnalysis", "max_stars_repo_head_hexsha": "e234b7d0e4ff8ecc11e58134e6309a095abcd2c0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "scripts/jj/JS311_2HB-2JJ-5MGJJ-MD-001/analyze_md_multigate_060.py", "max_issues_repo_name": "ShabaniLab/DataAnalysis", "max_issues_repo_head_hexsha": "e234b7d0e4ff8ecc11e58134e6309a095abcd2c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scripts/jj/JS311_2HB-2JJ-5MGJJ-MD-001/analyze_md_multigate_060.py", "max_forks_repo_name": "ShabaniLab/DataAnalysis", "max_forks_repo_head_hexsha": "e234b7d0e4ff8ecc11e58134e6309a095abcd2c0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.6513157895, "max_line_length": 88, "alphanum_fraction": 0.686147528, "include": true, "reason": "import numpy,from scipy", "num_tokens": 2073}
|
# -*- coding: utf-8 -*-
from kaplot import *
import scipy
class NFWTri(object):
def __init__(self, p, q, nfw_xaxis):
self.p = p
self.q = q
self.nfw_xaxis = nfw_xaxis
def run(self, args, opts, scope):
global logrho_model
N = 8
#sigmas = zeros(N) + arange(1,N)
logsigmas = zeros(N) #(arange(N)-N/2.)*0.3
logweights = zeros(N)
#logweights = [0, 0]
#logsigmas = [-2, 2]
logxmin = -2.
logxmax = 2.
Nx = 100
deltax = (logxmax-logxmin)/Nx
logx = arange(logxmin, logxmax, deltax)
x = 10**logx
logrho_target = log(self.nfw_xaxis.densityr(x))
logsigmas = arange(logxmin, logxmax, (logxmax-logxmin)/N)
#logsigmas = log10(sigmas)
#print self.nfw_xaxis.densityr(x)
#dsa
distance = 79/1000.
parsec_km = 1.4959787068e8*648e3/pi
conversion_factor = distance*1.0e6*tan(pi/(648e3))*parsec_km
def f(params):
global logrho_model
sigmas = array([10**params[:N]]).T
weights = array([10**params[N:]]).T
#S = (-1/(2*sigmas**2)*x**2) + log(weights/(sigmas*sqrt(2*pi))**3)
rho_model = zeros(Nx)
for i in range(N):
rho_model += exp((-x**2/(2*sigmas[i]**2)) + log(weights[i]/(sigmas[i]*sqrt(2*pi))**3))
logrho_model = log(rho_model)
logrho_model[isinf(logrho_model)] = -100
#logrho_model = sum( S, axis=0 )
#print S
#print S.shape
print "target ", ("% 8.3f" * Nx) % tuple(logrho_target)
print "model ", ("% 8.3f" * Nx) % tuple(logrho_model)
print "params ", ("% 5.2f" * (N*2)) % tuple(params)
print "sigmas ", ("% 5.2f" * N) % tuple(sigmas.T[0])
print "weights ", ("% 5.2f" * N) % tuple(weights.T[0])
#print "weights ", weights.T
value = log10(sum((logrho_model-logrho_target)**2))
print "difference", value
#raw_input()
return value
params = concatenate([logsigmas, logweights])
print params
bounds = None #[None] * N
u = scipy.optimize.fmin_l_bfgs_b(f, params, None, bounds=bounds, approx_grad=True, iprint=1,factr=1e-10,maxfun=200000)[0]
#u = params
#u = scipy.optimize.fmin(f, params, maxiter=10000000, maxfun=1000000)
print u
sigmas = array(10.**u[:N])
weights = array(10.**u[N:])
logxmin = -3.
logxmax = 3.
Nx = 200
deltax = (logxmax-logxmin)/Nx
logx = arange(logxmin, logxmax, deltax)
x = 10**logx
logrho_target = log(self.nfw_xaxis.densityr(x))
#logsigmas = arange(logxmin, logxmax, (logxmax-logxmin)/N)
if 1:
print sigmas
rho_model = zeros(Nx)
for i in range(N):
rho_model += exp((-x**2/(2*sigmas[i]**2)) + log(weights[i]/(sigmas[i]*sqrt(2*pi))**3))
logrho_model = log(rho_model)
#logrho_model[isinf(logrho_model)] = -2
#logrho_model[logrho_model<10] = -2
print "target ", ("% 8.3f" * Nx) % tuple(logrho_target)
print "model ", ("% 8.3f" * Nx) % tuple(logrho_model)
print "params ", ("% 5.2f" * (N*2)) % tuple(u)
print "sigmas ", ("% 5.2f" * N) % tuple(sigmas)
print "weights ", ("% 5.2f" * N) % tuple(weights)
#print "weights ", weights.T
value = log10(sum((logrho_model-logrho_target)**2))
print "difference", value
logrho_model_total = logrho_model
import pdb
#pdb.set_trace()
print logrho_model
#box()
mozaic(2,2,box)
graph(logx, logrho_target)
graph(logx, logrho_model, color="red", linestyle="dash")
for i in range(N):
logrho_model = (-1/(2*sigmas[i]**2)*x**2) + log(weights[i]/(sigmas[i]*sqrt(2*pi))**3)
logrho_model[isinf(logrho_model)] = -10
logrho_model[logrho_model<-10] = -10
graph(logx, logrho_model, color="blue", linestyle="dot")
#sigma, weight = 10**5, 1e20
#logrho_model = (-1/(2*sigma**2)*x**2) + log(weight/(sigma*sqrt(2*pi))**3)
#graph(logx, logrho_model, color="orange", linestyle="dot")
ylim(min(logrho_target), max(logrho_target))
select(1,0)
graph(logx, 10**logrho_target)
graph(logx, 10**logrho_model_total, color="red", linestyle="dash")
for i in range(N):
logrho_model = (-1/(2*sigmas[i]**2)*x**2) + log(weights[i]/(sigmas[i]*sqrt(2*pi))**3)
logrho_model[isinf(logrho_model)] = -10
logrho_model[logrho_model<-10] = -10
graph(logx, 10**logrho_model, color="blue", linestyle="dot")
#sigma, weight = 10**5, 1e20
#logrho_model = (-1/(2*sigma**2)*x**2) + log(weight/(sigma*sqrt(2*pi))**3)
#graph(logx, logrho_model, color="orange", linestyle="dot")
#ylim(min(logrho_target), max(logrho_target))
select(1,1)
graph(10**logx, 10**logrho_target)
graph(10**logx, 10**logrho_model_total, color="red", linestyle="dash")
for i in range(N):
logrho_model = (-1/(2*sigmas[i]**2)*x**2) + log(weights[i]/(sigmas[i]*sqrt(2*pi))**3)
logrho_model[isinf(logrho_model)] = -10
logrho_model[logrho_model<-10] = -10
graph(10**logx, 10**logrho_model, color="blue", linestyle="dot")
xlim(0, 10)
select(0,1)
graph(10**logx, logrho_target)
graph(10**logx, logrho_model_total, color="red", linestyle="dash")
for i in range(N):
logrho_model = (-1/(2*sigmas[i]**2)*x**2) + log(weights[i]/(sigmas[i]*sqrt(2*pi))**3)
logrho_model[isinf(logrho_model)] = -10
logrho_model[logrho_model<-10] = -10
graph(10**logx, logrho_model, color="blue", linestyle="dot")
xlim(0, 10)
ylim(min(logrho_target), max(logrho_target))
print conversion_factor
densities = []
for i in range(N):
# Msol/kpc**3
mass = weights[i]
#print mass
#print "weight", w, "Msol/kpc^3"
#mass = mass# /1000**3
#print "weight", w, "Msol/pc^3"
sigma = sigmas[i] # kpc
density_central = mass / ( (sigma*1000*sqrt(2*pi))**3)
#print "sigma", sigma, "kpc"
#print "density", density_central, "Msol/kpc^3"
#print "log", (-1/(2*sigmas[i]**2)*0**2) + log(weights[i]/(sigmas[i]*sqrt(2*pi))**3)
sigma = sigma*1000*parsec_km # kpc to km
sigma = sigma / conversion_factor # km to arcsec
#print "sigma", sigma, "km"
densities.append(density_central)
# Msol/km^3
#density = mass / ( (sigma*sqrt(2*pi))**2)
#print "dens: ", mass / ( (sigma*sqrt(2*pi))**3), "Msol/km^3",
#print "sigma", sigma, "arcsec" # km to arcsec
print density_central, sigma, "1.0 1.0"
print "tot mass %e" % ( sum(weights))
densities = array(densities) /parsec_km**3
sigmas = sigmas*1000*parsec_km/conversion_factor * conversion_factor
print densities, sigmas
print "tot mass", 2*pi*(2*pi)**(1./2)*sum(densities*sigmas**3)
print "m enc %e" % self.nfw_xaxis.enclosed_mass(10**2)
print parsec_km, conversion_factor
rmin, rmax = 10**-2, 10
print "rlogmin, rlogmax:", log10(rmin*1000*parsec_km/conversion_factor), log10(rmax*1000*parsec_km/conversion_factor)
draw()
|
{"hexsha": "23583ab4aa5f7d90a9cb667cab259b36bc1e46eb", "size": 6551, "ext": "py", "lang": "Python", "max_stars_repo_path": "mab/gd/tri/potential.py", "max_stars_repo_name": "maartenbreddels/mab", "max_stars_repo_head_hexsha": "112dcfbc4a74b07aff13d489b3776bca58fe9bdf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-12-01T04:10:34.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-01T04:10:34.000Z", "max_issues_repo_path": "mab/gd/tri/potential.py", "max_issues_repo_name": "maartenbreddels/mab", "max_issues_repo_head_hexsha": "112dcfbc4a74b07aff13d489b3776bca58fe9bdf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mab/gd/tri/potential.py", "max_forks_repo_name": "maartenbreddels/mab", "max_forks_repo_head_hexsha": "112dcfbc4a74b07aff13d489b3776bca58fe9bdf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.4234693878, "max_line_length": 123, "alphanum_fraction": 0.632117234, "include": true, "reason": "import scipy", "num_tokens": 2449}
|
# Phase 4: Data analysis
# Import the function for loading the web page.
# source("lib/xml.r")
# Read the web page into a data frame.
# cat("Importing web page...\n")
# tabela <- preuredi(uvozi.obcine(), obcine)
# Draw the graph to a PDF file.
# cat("Drawing graph...\n")
# pdf("slike/naselja.pdf", width=6, height=4)
# plot(tabela[[1]], tabela[[4]],
#      main = "Number of settlements by municipality area",
#      xlab = "Area (km^2)",
#      ylab = "No. of settlements")
# dev.off()
# Import the map
cat("Importing map...\n")
svet <- uvozi.zemljevid("http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/ne_110m_admin_0_map_units.zip",
"svet", "ne_110m_admin_0_map_units.shp", mapa = "zemljevid",
encoding = "Windows-1252")
svet1 <- svet[svet$continent %in% c("Europe", "Africa", "South America", "North America", "Asia", "Oceania"),]
library(ggplot2)
library(scales)
######top10
# bar chart for 2010
zanr.imena <- colnames(top10)[5:24]
zanr.vrednosti <- apply(top10[zanr.imena], 2, sum)
zanr.vrednosti <- sort(zanr.vrednosti)
zanr.imena <- names(sort(zanr.vrednosti))
cairo_pdf("slike/stolpicni10.pdf",family="Arial")
barplot(zanr.vrednosti, names.arg = zanr.imena, xlab="ŽANRI", ylab="ŠTEVILO FILMOV", main="DISTRIBUCIJA ŽANROV ZA LETO 2010", las=2, cex.names=0.5, col="red")
dev.off()
# map for 2010
kraj <- table(top10$KRAJ)
stevilo <- unique(kraj)
stevilo <- stevilo[order(stevilo)]
barve <- rainbow(length(stevilo))[match(kraj, stevilo)]
names(barve) <- names(kraj)
barve.zemljevid <- barve[as.character(svet1$name_long)]
barve.zemljevid[is.na(barve.zemljevid)] <- "white"
imena <- names(kraj)
aa <- svet1[svet1$name_long %in% imena,]
koordinate <- coordinates(aa)
imena.krajev <- as.character(aa$name_long)
rownames(koordinate) <- imena.krajev
koordinate["Canada",2] <- koordinate["Canada",2]+5
koordinate["Canada",1] <- koordinate["Canada",1]-4
koordinate["United States",2] <- koordinate["United States",2]+4.5
koordinate["Jamaica",2] <- koordinate["Jamaica",2]+5
koordinate["Jamaica",1] <- koordinate["Jamaica",1]-2
koordinate["Puerto Rico",2] <- koordinate["Puerto Rico",2]+5
koordinate["Puerto Rico",1] <- koordinate["Puerto Rico",1]+8
koordinate["Colombia",2] <- koordinate["Colombia",2]+5
koordinate["Morocco",2] <- koordinate["Morocco",2]+5
koordinate["Spain",2] <- koordinate["Spain",2]+6
koordinate["Ireland",2] <- koordinate["Ireland",2]+7
koordinate["Ireland",1] <- koordinate["Ireland",1]-5
koordinate["England",2] <- koordinate["England",2]+8
koordinate["England",1] <- koordinate["England",1]+7
koordinate["Wales",2] <- koordinate["Wales",2]+4
koordinate["Germany",2] <- koordinate["Germany",2]+6
koordinate["Germany",1] <- koordinate["Germany",1]+6
koordinate["Italy",2] <- koordinate["Italy",2]+5
koordinate["Serbia",2] <- koordinate["Serbia",2]+5
koordinate["Serbia",1] <- koordinate["Serbia",1]+2
koordinate["Ethiopia",2] <- koordinate["Ethiopia",2]+5
koordinate["Russian Federation",2] <- koordinate["Russian Federation",2]+5
koordinate["China",2] <- koordinate["China",2]+5
koordinate["Indonesia",2] <- koordinate["Indonesia",2]+3
koordinate["Australia",2] <- koordinate["Australia",2]+5
cairo_pdf("slike/zemljevid10.pdf", family="Arial")
plot(svet1, col=barve.zemljevid, bg="lightyellow", border="orange", main="ZEMLJEVID ZA LETO 2010")
text(koordinate, labels=imena.krajev, pos=1, cex=0.4)
legend("bottomright", title="ŠTEVILO POSNETIH FILMOV V POSAMEZNI DRŽAVI:", bg="white", text.font=10, legend=stevilo, fill=rainbow(length(stevilo)))
dev.off()
# min/max + average box office
a <- top10$BOX.OFFICE
b <- a[a>0]
min10 <- min(b)
max10 <- max(top10$BOX.OFFICE)
povprecje10 <- (sum(as.numeric(top10$BOX.OFFICE)))/100
# min/max + average budget
aa <- top10$BUDGET
bb <- aa[aa>0]
minb10 <- min(bb)
maxb10 <- max(top10$BUDGET)
povprecjeb10 <- (sum(as.numeric(top10$BUDGET)))/100
######top11
# bar chart for 2011
zanr.imena <- colnames(top11)[5:21]
zanr.vrednosti <- apply(top11[zanr.imena], 2, sum)
zanr.vrednosti <- sort(zanr.vrednosti)
zanr.imena <- names(sort(zanr.vrednosti))
cairo_pdf("slike/stolpicni11.pdf",family="Arial")
barplot(zanr.vrednosti, names.arg = zanr.imena, xlab="ŽANRI", ylab="ŠTEVILO FILMOV", main="DISTRIBUCIJA ŽANROV ZA LETO 2011", las=2, cex.names=0.5, col="green")
dev.off()
# map for 2011
kraj <- table(top11$KRAJ)
stevilo <- unique(kraj)
stevilo <- stevilo[order(stevilo)]
barve <- rainbow(length(stevilo))[match(kraj, stevilo)]
names(barve) <- names(kraj)
barve.zemljevid <- barve[as.character(svet1$name_long)]
barve.zemljevid[is.na(barve.zemljevid)] <- "white"
imena <- names(kraj)
aa <- svet1[svet1$name_long %in% imena,]
koordinate <- coordinates(aa)
imena.krajev <- as.character(aa$name_long)
rownames(koordinate) <- imena.krajev
koordinate["Canada",2] <- koordinate["Canada",2]+5
koordinate["Canada",1] <- koordinate["Canada",1]-4
koordinate["United States",2] <- koordinate["United States",2]+4.5
koordinate["Puerto Rico",2] <- koordinate["Puerto Rico",2]+5
koordinate["Puerto Rico",1] <- koordinate["Puerto Rico",1]+8
koordinate["Brazil",2] <- koordinate["Brazil",2]+6
koordinate["Spain",2] <- koordinate["Spain",2]+6
koordinate["France",2] <- koordinate["France",2]+6
koordinate["France",1] <- koordinate["France",1]-3
koordinate["Switzerland",2] <- koordinate["Switzerland",2]+5
koordinate["Switzerland",1] <- koordinate["Switzerland",1]+8
koordinate["England",2] <- koordinate["England",2]+8.5
koordinate["Norway",2] <- koordinate["Norway",2]+4
koordinate["Norway",1] <- koordinate["Norway",1]-8
koordinate["Sweden",2] <- koordinate["Sweden",2]+4
koordinate["Sweden",1] <- koordinate["Sweden",1]+4
koordinate["Germany",2] <- koordinate["Germany",2]+7
koordinate["Austria",2] <- koordinate["Austria",2]+7
koordinate["Austria",1] <- koordinate["Austria",1]+6
koordinate["Bulgaria",2] <- koordinate["Bulgaria",2]+6
koordinate["India",2] <- koordinate["India",2]+5
koordinate["Thailand",2] <- koordinate["Thailand",2]+6
koordinate["Indonesia",2] <- koordinate["Indonesia",2]+3
koordinate["Australia",2] <- koordinate["Australia",2]+5
koordinate["New Zealand",2] <- koordinate["New Zealand",2]+7.5
cairo_pdf("slike/zemljevid11.pdf", family="Arial")
plot(svet1, col=barve.zemljevid, bg="lightyellow", border="orange", main="ZEMLJEVID ZA LETO 2011")
text(koordinate, labels=imena.krajev, pos=1, cex=0.4)
legend("bottomright", title="ŠTEVILO POSNETIH FILMOV V POSAMEZNI DRŽAVI:", bg="white", text.font=10, legend=stevilo, fill=rainbow(length(stevilo)))
dev.off()
# min/max + average box office
c <- top11$BOX.OFFICE
d <- c[c>0]
min11 <- min(d)
max11 <- max(top11$BOX.OFFICE)
povprecje11 <- (sum(as.numeric(top11$BOX.OFFICE)))/100
# min/max + average budget
cc <- top11$BUDGET
dd <- cc[cc>0]
minb11 <- min(dd)
maxb11 <- max(top11$BUDGET)
povprecjeb11 <- (sum(as.numeric(top11$BUDGET)))/100
######top12
# bar chart for 2012
zanr.imena <- colnames(top12)[5:24]
zanr.vrednosti <- apply(top12[zanr.imena], 2, sum)
zanr.vrednosti <- sort(zanr.vrednosti)
zanr.imena <- names(sort(zanr.vrednosti))
cairo_pdf("slike/stolpicni12.pdf",family="Arial")
barplot(zanr.vrednosti, names.arg = zanr.imena, xlab="ŽANRI", ylab="ŠTEVILO FILMOV", main="DISTRIBUCIJA ŽANROV ZA LETO 2012", las=2, cex.names=0.5, col="orange")
dev.off()
# map for 2012
kraj <- table(top12$KRAJ)
stevilo <- unique(kraj)
stevilo <- stevilo[order(stevilo)]
barve <- rainbow(length(stevilo))[match(kraj, stevilo)]
names(barve) <- names(kraj)
barve.zemljevid <- barve[as.character(svet1$name_long)]
barve.zemljevid[is.na(barve.zemljevid)] <- "white"
imena <- names(kraj)
aa <- svet1[svet1$name_long %in% imena,]
koordinate <- coordinates(aa)
imena.krajev <- as.character(aa$name_long)
rownames(koordinate) <- imena.krajev
koordinate["Canada",2] <- koordinate["Canada",2]+5
koordinate["Canada",1] <- koordinate["Canada",1]-4
koordinate["United States",2] <- koordinate["United States",2]+4.5
koordinate["Brazil",2] <- koordinate["Brazil",2]+6
koordinate["Spain",2] <- koordinate["Spain",2]+6
koordinate["France",2] <- koordinate["France",2]+6
koordinate["Ireland",2] <- koordinate["Ireland",2]+7
koordinate["Ireland",1] <- koordinate["Ireland",1]-5
koordinate["England",2] <- koordinate["England",2]+7.5
koordinate["England",1] <- koordinate["England",1]+7
koordinate["Scotland",2] <- koordinate["Scotland",2]+7.5
koordinate["Denmark",2] <- koordinate["Denmark",2]+7
koordinate["Denmark",1] <- koordinate["Denmark",1]+5
koordinate["Georgia",2] <- koordinate["Georgia",2]+6.5
koordinate["Georgia",1] <- koordinate["Georgia",1]+6.5
koordinate["Turkey",2] <- koordinate["Turkey",2]+6.5
koordinate["India",2] <- koordinate["India",2]+5
koordinate["Thailand",2] <- koordinate["Thailand",2]+6
koordinate["China",2] <- koordinate["China",2]+5
koordinate["New Zealand",2] <- koordinate["New Zealand",2]+7.5
cairo_pdf("slike/zemljevid12.pdf", family="Arial")
plot(svet1, col=barve.zemljevid, bg="lightyellow", border="orange", main="ZEMLJEVID ZA LETO 2012")
text(koordinate, labels=imena.krajev, pos=1, cex=0.4)
legend("bottomright", title="ŠTEVILO POSNETIH FILMOV V POSAMEZNI DRŽAVI:", bg="white", text.font=10, legend=stevilo, fill=rainbow(length(stevilo)))
dev.off()
# min/max + average box office
e <- top12$BOX.OFFICE
f <- e[e>0]
min12 <- min(f)
max12 <- max(top12$BOX.OFFICE)
povprecje12 <- (sum(as.numeric(top12$BOX.OFFICE)))/100
# min/max + average budget
ee <- top12$BUDGET
ff <- ee[ee>0]
minb12 <- min(ff)
maxb12 <- max(top12$BUDGET)
povprecjeb12 <- (sum(as.numeric(top12$BUDGET)))/100
######top13
# bar chart for 2013
zanr.imena <- colnames(top13)[5:22]
zanr.vrednosti <- apply(top13[zanr.imena], 2, sum)
zanr.vrednosti <- sort(zanr.vrednosti)
zanr.imena <- names(sort(zanr.vrednosti))
cairo_pdf("slike/stolpicni13.pdf",family="Arial")
barplot(zanr.vrednosti, names.arg = zanr.imena, xlab="ŽANRI", ylab="ŠTEVILO FILMOV", main="DISTRIBUCIJA ŽANROV ZA LETO 2013", las=2, cex.names=0.5, col="yellow")
dev.off()
# map for 2013
kraj <- table(top13$KRAJ)
stevilo <- unique(kraj)
stevilo <- stevilo[order(stevilo)]
barve <- rainbow(length(stevilo))[match(kraj, stevilo)]
names(barve) <- names(kraj)
barve.zemljevid <- barve[as.character(svet1$name_long)]
barve.zemljevid[is.na(barve.zemljevid)] <- "white"
imena <- names(kraj)
aa <- svet1[svet1$name_long %in% imena,]
koordinate <- coordinates(aa)
imena.krajev <- as.character(aa$name_long)
rownames(koordinate) <- imena.krajev
koordinate["Canada",2] <- koordinate["Canada",2]+5
koordinate["Canada",1] <- koordinate["Canada",1]-4
koordinate["United States",2] <- koordinate["United States",2]+4.5
koordinate["Argentina",2] <- koordinate["Argentina",2]+5
koordinate["Bahamas",2] <- koordinate["Bahamas",2]+6
koordinate["Bahamas",1] <- koordinate["Bahamas",1]+9
koordinate["France",2] <- koordinate["France",2]+7
koordinate["France",1] <- koordinate["France",1]-4
koordinate["Northern Ireland",2] <- koordinate["Northern Ireland",2]+7.5
koordinate["Northern Ireland",1] <- koordinate["Northern Ireland",1]-14
koordinate["England",2] <- koordinate["England",2]+8.5
koordinate["England",1] <- koordinate["England",1]+6
koordinate["Scotland",2] <- koordinate["Scotland",2]+8
koordinate["Germany",2] <- koordinate["Germany",2]+6.5
koordinate["Germany",1] <- koordinate["Germany",1]-4.5
koordinate["Poland",2] <- koordinate["Poland",2]+7.5
koordinate["Poland",1] <- koordinate["Poland",1]+2
koordinate["Czech Republic",2] <- koordinate["Czech Republic",2]+6.5
koordinate["Czech Republic",1] <- koordinate["Czech Republic",1]+12
koordinate["Hungary",2] <- koordinate["Hungary",2]+6.5
koordinate["Hungary",1] <- koordinate["Hungary",1]-6
koordinate["Romania",2] <- koordinate["Romania",2]+6
koordinate["Romania",1] <- koordinate["Romania",1]+6
koordinate["Italy",2] <- koordinate["Italy",2]+6
koordinate["Australia",2] <- koordinate["Australia",2]+5
koordinate["New Zealand",2] <- koordinate["New Zealand",2]+7.5
cairo_pdf("slike/zemljevid13.pdf",family="Arial")
plot(svet1, col=barve.zemljevid, bg="lightyellow", border="orange", main="ZEMLJEVID ZA LETO 2013")
text(koordinate, labels=imena.krajev, pos=1, cex=0.4)
legend("bottomright", title="ŠTEVILO POSNETIH FILMOV V POSAMEZNI DRŽAVI:", bg="white", text.font=10, legend=stevilo, fill=rainbow(length(stevilo)))
dev.off()
# min/max + average box office
g <- top13$BOX.OFFICE
h <- g[g>0]
min13 <- min(h)
max13 <- max(top13$BOX.OFFICE)
povprecje13 <- (sum(as.numeric(top13$BOX.OFFICE)))/100
# min/max + average budget
gg <- top13$BUDGET
hh <- gg[gg>0]
minb13 <- min(hh)
maxb13 <- max(top13$BUDGET)
povprecjeb13 <- (sum(as.numeric(top13$BUDGET)))/100
######top14
# bar chart for 2014
zanr.imena <- colnames(top14)[5:24]
zanr.vrednosti <- apply(top14[zanr.imena], 2, sum)
zanr.vrednosti <- sort(zanr.vrednosti)
zanr.imena <- names(sort(zanr.vrednosti))
cairo_pdf("slike/stolpicni14.pdf",family="Arial")
barplot(zanr.vrednosti, names.arg = zanr.imena, xlab="ŽANRI", ylab="ŠTEVILO FILMOV", main="DISTRIBUCIJA ŽANROV ZA LETO 2014", las=2, cex.names=0.5, col="pink")
dev.off()
# map for 2014
kraj <- table(top14$KRAJ)
stevilo <- unique(kraj)
stevilo <- stevilo[order(stevilo)]
barve <- rainbow(length(stevilo))[match(kraj, stevilo)]
names(barve) <- names(kraj)
barve.zemljevid <- barve[as.character(svet1$name_long)]
barve.zemljevid[is.na(barve.zemljevid)] <- "white"
imena <- names(kraj)
aa <- svet1[svet1$name_long %in% imena,]
koordinate <- coordinates(aa)
imena.krajev <- as.character(aa$name_long)
rownames(koordinate) <- imena.krajev
koordinate["Canada",2] <- koordinate["Canada",2]+5
koordinate["Canada",1] <- koordinate["Canada",1]-4
koordinate["United States",2] <- koordinate["United States",2]+4.5
koordinate["Morocco",2] <- koordinate["Morocco",2]+5
koordinate["South Africa",2] <- koordinate["South Africa",2]+5
koordinate["Northern Ireland",2] <- koordinate["Northern Ireland",2]+7.5
koordinate["Northern Ireland",1] <- koordinate["Northern Ireland",1]-14.5
koordinate["England",2] <- koordinate["England",2]+6.6
koordinate["England",1] <- koordinate["England",1]-1
koordinate["Denmark",2] <- koordinate["Denmark",2]+7
koordinate["Denmark",1] <- koordinate["Denmark",1]+5
koordinate["Netherlands",2] <- koordinate["Netherlands",2]+7.6
koordinate["Netherlands",1] <- koordinate["Netherlands",1]+10.5
koordinate["Belgium",2] <- koordinate["Belgium",2]+5
koordinate["Belgium",1] <- koordinate["Belgium",1]+-5
koordinate["France",2] <- koordinate["France",2]+5.5
koordinate["Germany",2] <- koordinate["Germany",2]+5.5
koordinate["Germany",1] <- koordinate["Germany",1]+6
koordinate["Serbia",2] <- koordinate["Serbia",2]+8
koordinate["Bulgaria",2] <- koordinate["Bulgaria",2]+6
koordinate["Bulgaria",1] <- koordinate["Bulgaria",1]+5
koordinate["China",2] <- koordinate["China",2]+5
koordinate["Australia",2] <- koordinate["Australia",2]+5
koordinate["New Zealand",2] <- koordinate["New Zealand",2]+7.5
cairo_pdf("slike/zemljevid14.pdf", family="Arial")
plot(svet1, col=barve.zemljevid, bg="lightyellow", border="orange", main = "ZEMLJEVID ZA LETO 2014")
text(koordinate, labels=imena.krajev, pos=1, cex=0.4)
legend("bottomright", title="ŠTEVILO POSNETIH FILMOV V POSAMEZNI DRŽAVI:", bg="white", , text.font=8, legend=stevilo, fill=rainbow(length(stevilo)))
dev.off()
# min/max + average box office
i <- top14$BOX.OFFICE
j <- i[i>0]
min14 <- min(j)
max14 <- max(top14$BOX.OFFICE)
povprecje14 <- (sum(as.numeric(top14$BOX.OFFICE)))/100
# min/max + average budget
ii <- top14$BUDGET
jj <- ii[ii>0]
minb14 <- min(jj)
maxb14 <- max(top14$BUDGET)
povprecjeb14 <- (sum(as.numeric(top14$BUDGET)))/100
###### filmi (top 100 of all time)
# min/max + average box office
k <- filmi$BOX.OFFICE
l <- k[k>0]
min100 <- min(l)
max100 <- max(filmi$BOX.OFFICE)
povprecje100 <- (sum(as.numeric(filmi$BOX.OFFICE)))/100
# min/max + average budget
kk <- filmi$BUDGET
ll <- kk[kk>0]
minb100 <- min(ll)
maxb100 <- max(filmi$BUDGET)
povprecjeb100 <- (sum(as.numeric(filmi$BUDGET)))/100
###### overall comparisons
#min/max box office
min <- min(min100, min10, min11, min12, min13, min14)
max <- max(max100, max10, max11, max12, max13, max14)
# ratio of box office between maximum and minimum, by year
r100 <- (max100/min100)
r10 <- (max10/min10)
r11 <- (max11/min11)
r12 <- (max12/min12)
r13 <- (max13/min13)
r14 <- (max14/min14)
#min/max budget
minb <- min(minb100, minb10, minb11, minb12, minb13, minb14)
maxb <- max(maxb100, maxb10, maxb11, maxb12, maxb13, maxb14)
# ratio of budget between maximum and minimum, by year
rb100 <- (maxb100/minb100)
rb10 <- (maxb10/minb10)
rb11 <- (maxb11/minb11)
rb12 <- (maxb12/minb12)
rb13 <- (maxb13/minb13)
rb14 <- (maxb14/minb14)
## graphs
# maximum + minimum + average box office, by year
Letnice <- c(2010, 2011, 2012, 2013, 2014)
y1 <- c(max10, max11, max12, max13, max14)
y2 <- c(min10, min11, min12, min13, min14)
y3 <- c(povprecje10, povprecje11, povprecje12, povprecje13, povprecje14)
cairo_pdf("slike/box-office.pdf",family="Arial")
df <- data.frame(Letnice, y1, y2, y3)
# ggplot objects must be print()ed for the plot to reach the PDF device
# when the script is source()d:
print(ggplot(df, aes(Letnice, y = Vrednosti, color = Legenda)) +
  geom_line(aes(y = y1, col = "maksimumi")) +
  geom_line(aes(y = y2, col = "minimumi")) +
  geom_line(aes(y = y3, col = "povprečje")) +
  scale_y_log10(labels = trans_format('log10', math_format(10^.x))))
dev.off()
# maximum + minimum + average budget, by year
Letnice <- c(2010, 2011, 2012, 2013, 2014)
y1 <- c(maxb10, maxb11, maxb12, maxb13, maxb14)
y2 <- c(minb10, minb11, minb12, minb13, minb14)
y3 <- c(povprecjeb10, povprecjeb11, povprecjeb12, povprecjeb13, povprecjeb14)
cairo_pdf("slike/budget.pdf",family="Arial")
df <- data.frame(Letnice, y1, y2, y3)
print(ggplot(df, aes(Letnice, y = Vrednosti, color = Legenda)) +
geom_line(aes(y = y1, col = "maksimumi")) +
geom_line(aes(y = y2, col = "minimumi")) +
geom_line(aes(y = y3, col = "povprečje")) +
scale_y_log10(labels = trans_format('log10', math_format(10^.x))))
dev.off()
# ratio between minima and maxima - box office
razmerje.ime <- c("vseh casov", 2010, 2011, 2012, 2013, 2014)
razmerje.vrednosti <- c(r100, r10, r11, r12, r13, r14)
cairo_pdf("slike/razmerje-boxoffice.pdf",family="Arial")
barplot(razmerje.vrednosti, names.arg = razmerje.ime, xlab="LETNICE", ylab="RAZMERJE", main="RAZMERJE MED MAX IN MIN ZASLUŽKOM PO LETNICAH", cex.names=0.5, cex.axis=0.5, col=rainbow(6))
dev.off()
# ratio between minima and maxima - budget
razmerje.ime <- c("vseh casov", 2010, 2011, 2012, 2013, 2014)
razmerje.vrednosti <- c(rb100, rb10, rb11, rb12, rb13, rb14)
cairo_pdf("slike/razmerje-budget.pdf", family="Arial")
barplot(razmerje.vrednosti, names.arg = razmerje.ime, xlab="LETNICE", ylab="RAZMERJE", main="RAZMERJE MED MAX IN MIN PRORAČUNOM PO LETNICAH", cex.names=0.5, cex.axis=0.5, col=rainbow(6))
dev.off()
|
{"hexsha": "aea27f7526119237af53fd5f0ed1d3e31d60b07b", "size": 18669, "ext": "r", "lang": "R", "max_stars_repo_path": "analiza/analiza.r", "max_stars_repo_name": "statijana/APPR-2014-15", "max_stars_repo_head_hexsha": "4117ab91075a70b2dd0eb3ed23d39b2b7629b65c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "analiza/analiza.r", "max_issues_repo_name": "statijana/APPR-2014-15", "max_issues_repo_head_hexsha": "4117ab91075a70b2dd0eb3ed23d39b2b7629b65c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-01-11T20:15:17.000Z", "max_issues_repo_issues_event_max_datetime": "2015-01-18T20:35:42.000Z", "max_forks_repo_path": "analiza/analiza.r", "max_forks_repo_name": "statijana/APPR-2014-15", "max_forks_repo_head_hexsha": "4117ab91075a70b2dd0eb3ed23d39b2b7629b65c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.4162790698, "max_line_length": 186, "alphanum_fraction": 0.7080722053, "num_tokens": 6895}
|
# Copyright (c) 2012, GPy authors (see AUTHORS.txt).
# Licensed under the BSD 3-clause license (see LICENSE.txt)
import numpy as np
from kern import CombinationKernel
from ...util.caching import Cache_this
import itertools
def numpy_invalid_op_as_exception(func):
"""
A decorator that allows catching numpy invalid operations
as exceptions (the default behaviour is raising warnings).
"""
def func_wrapper(*args, **kwargs):
np.seterr(invalid='raise')
result = func(*args, **kwargs)
np.seterr(invalid='warn')
return result
return func_wrapper
class Prod(CombinationKernel):
"""
    Computes the product of the given kernels

    :param kernels: the kernels to multiply together
    :type kernels: list of Kern
    :param name: name given to the resulting combination kernel
    :type name: str
    :rtype: kernel object
"""
def __init__(self, kernels, name='mul'):
for i, kern in enumerate(kernels[:]):
if isinstance(kern, Prod):
del kernels[i]
for part in kern.parts[::-1]:
kern.unlink_parameter(part)
kernels.insert(i, part)
super(Prod, self).__init__(kernels, name)
@Cache_this(limit=2, force_kwargs=['which_parts'])
def K(self, X, X2=None, which_parts=None):
if which_parts is None:
which_parts = self.parts
elif not isinstance(which_parts, (list, tuple)):
# if only one part is given
which_parts = [which_parts]
return reduce(np.multiply, (p.K(X, X2) for p in which_parts))
@Cache_this(limit=2, force_kwargs=['which_parts'])
def Kdiag(self, X, which_parts=None):
if which_parts is None:
which_parts = self.parts
return reduce(np.multiply, (p.Kdiag(X) for p in which_parts))
@numpy_invalid_op_as_exception
def update_gradients_full(self, dL_dK, X, X2=None):
k = self.K(X,X2)*dL_dK
try:
for p in self.parts:
p.update_gradients_full(k/p.K(X,X2),X,X2)
except FloatingPointError:
for combination in itertools.combinations(self.parts, len(self.parts) - 1):
prod = reduce(np.multiply, [p.K(X, X2) for p in combination])
to_update = list(set(self.parts) - set(combination))[0]
to_update.update_gradients_full(dL_dK * prod, X, X2)
def update_gradients_diag(self, dL_dKdiag, X):
k = self.Kdiag(X)*dL_dKdiag
for p in self.parts:
p.update_gradients_diag(k/p.Kdiag(X),X)
@numpy_invalid_op_as_exception
def gradients_X(self, dL_dK, X, X2=None):
target = np.zeros(X.shape)
k = self.K(X,X2)*dL_dK
try:
for p in self.parts:
target += p.gradients_X(k/p.K(X,X2),X,X2)
except FloatingPointError:
for combination in itertools.combinations(self.parts, len(self.parts) - 1):
prod = reduce(np.multiply, [p.K(X, X2) for p in combination])
to_update = list(set(self.parts) - set(combination))[0]
target += to_update.gradients_X(dL_dK * prod, X, X2)
return target
def gradients_X_diag(self, dL_dKdiag, X):
target = np.zeros(X.shape)
k = self.Kdiag(X)*dL_dKdiag
for p in self.parts:
target += p.gradients_X_diag(k/p.Kdiag(X),X)
return target
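# Minimal usage sketch (assumes a working GPy installation; not part of the
# original module). Multiplying two kernels with the overloaded `*` operator
# constructs a Prod whose K is the elementwise product of its parts:
#
#   import GPy
#   k = GPy.kern.RBF(1) * GPy.kern.Linear(1)   # -> Prod instance
#   K = k.K(np.random.randn(5, 1))             # 5x5 product covariance matrix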
|
{"hexsha": "a3b4997332109fe2f65f659336043e692db0ef56", "size": 3511, "ext": "py", "lang": "Python", "max_stars_repo_path": "GPy/kern/_src/prod.py", "max_stars_repo_name": "strongh/GPy", "max_stars_repo_head_hexsha": "775ce9e64c1e8f472083b8f2430134047d97b2fa", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-08-06T13:47:10.000Z", "max_stars_repo_stars_event_max_datetime": "2015-08-06T13:47:10.000Z", "max_issues_repo_path": "GPy/kern/_src/prod.py", "max_issues_repo_name": "strongh/GPy", "max_issues_repo_head_hexsha": "775ce9e64c1e8f472083b8f2430134047d97b2fa", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "GPy/kern/_src/prod.py", "max_forks_repo_name": "strongh/GPy", "max_forks_repo_head_hexsha": "775ce9e64c1e8f472083b8f2430134047d97b2fa", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-09T01:31:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-09T01:31:17.000Z", "avg_line_length": 36.1958762887, "max_line_length": 143, "alphanum_fraction": 0.6129307889, "include": true, "reason": "import numpy", "num_tokens": 863}
|
[STATEMENT]
lemma delete_Linorder:
assumes "k > 0" "root_order k t" "sorted_less (leaves t)" "Laligned t u" "bal t" "x \<le> u"
shows "leaves (delete k x t) = del_list x (leaves t)"
and "Laligned (delete k x t) u"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. leaves (delete k x t) = del_list x (leaves t) &&& Laligned (delete k x t) u
[PROOF STEP]
using reduce_root_Laligned[of "del k x t" u] reduce_root_inorder[of "del k x t"]
[PROOF STATE]
proof (prove)
using this:
Laligned (reduce_root (del k x t)) u = Laligned (del k x t) u
leaves (reduce_root (del k x t)) = leaves (del k x t)
goal (1 subgoal):
1. leaves (delete k x t) = del_list x (leaves t) &&& Laligned (delete k x t) u
[PROOF STEP]
using del_Linorder[of k t u x]
[PROOF STATE]
proof (prove)
using this:
Laligned (reduce_root (del k x t)) u = Laligned (del k x t) u
leaves (reduce_root (del k x t)) = leaves (del k x t)
\<lbrakk>0 < k; root_order k t; bal t; sorted_less (leaves t); Laligned t u; x \<le> u\<rbrakk> \<Longrightarrow> leaves (del k x t) = del_list x (leaves t) \<and> Laligned (del k x t) u
goal (1 subgoal):
1. leaves (delete k x t) = del_list x (leaves t) &&& Laligned (delete k x t) u
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
Laligned (reduce_root (del k x t)) u = Laligned (del k x t) u
leaves (reduce_root (del k x t)) = leaves (del k x t)
\<lbrakk>0 < k; root_order k t; bal t; sorted_less (leaves t); Laligned t u; x \<le> u\<rbrakk> \<Longrightarrow> leaves (del k x t) = del_list x (leaves t) \<and> Laligned (del k x t) u
0 < k
root_order k t
sorted_less (leaves t)
Laligned t u
bal t
x \<le> u
goal (1 subgoal):
1. leaves (delete k x t) = del_list x (leaves t) &&& Laligned (delete k x t) u
[PROOF STEP]
by simp_all
|
{"llama_tokens": 714, "file": "BTree_BPlusTree_Set", "length": 4}
|
function [exp,waschanged] = exp_replacerepeated(exp,rules)
% Apply substitution rules to some expression until it no longer changes.
% [Out-Expression,Changed] = exp_replacerepeated(In-Expression,Rule-List)
%
% Rules may contain blanks and placeholders.
%
% In:
% In-Expression : some expression in which substitution shall take place
% Rule-List : cell array of substutition rules; may also be a struct, where field names are
% taken as symbol names
%
% Out:
% Out-Expression : In-Expression with substitutions performed
% Changed : whether a replacement has taken place
%
% Notes:
% Impure expressions degrade into pure expressions when substitutions are performed on them (i.e.
% they lose their value).
%
% See also:
% exp_replaceall, exp_match
%
% Christian Kothe, Swartz Center for Computational Neuroscience, UCSD
% 2010-04-19
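%
% Example:
%   % a hypothetical call; the exact rule syntax (blanks, placeholders) is defined by exp_match
%   [out,waschanged] = exp_replacerepeated(in_expression, {rule1, rule2});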
if ~exp_beginfun('symbolic'), return; end
[exp,waschanged] = utl_replacerepeated(exp,rules);
exp_endfun;
|
{"author": "goodshawn12", "repo": "REST", "sha": "e34ce521fcb36e7813357a9720072dd111edf797", "save_path": "github-repos/MATLAB/goodshawn12-REST", "path": "github-repos/MATLAB/goodshawn12-REST/REST-e34ce521fcb36e7813357a9720072dd111edf797/dependencies/BCILAB/code/expressions/exp_replacerepeated.m"}
|
SUBROUTINE chint(a,b,c,cint,n)
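C Summary (added): given the Chebyshev coefficients c(1:n) of a function on
C [a,b] (e.g. as produced by the Numerical Recipes routine chebft), return in
C cint(1:n) the Chebyshev coefficients of its integral; the constant of
C integration is chosen so that the integral vanishes at x = a.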
INTEGER n
REAL a,b,c(n),cint(n)
INTEGER j
REAL con,fac,sum
con=0.25*(b-a)
sum=0.
fac=1.
do 11 j=2,n-1
cint(j)=con*(c(j-1)-c(j+1))/(j-1)
sum=sum+fac*cint(j)
fac=-fac
11 continue
cint(n)=con*c(n-1)/(n-1)
sum=sum+fac*cint(n)
cint(1)=2.*sum
return
END
|
{"hexsha": "f98cf05c71ac0664e1930289a6410e50f8018fde", "size": 408, "ext": "for", "lang": "FORTRAN", "max_stars_repo_path": "NR-Functions/Numerical Recipes- Example & Functions/Functions/chint.for", "max_stars_repo_name": "DingdingLuan/nrfunctions_fortran", "max_stars_repo_head_hexsha": "37e376dab8d6b99e63f6f1398d0c33d5d6ad3f8c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "NR-Functions/Numerical Recipes- Example & Functions/Functions/chint.for", "max_issues_repo_name": "DingdingLuan/nrfunctions_fortran", "max_issues_repo_head_hexsha": "37e376dab8d6b99e63f6f1398d0c33d5d6ad3f8c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "NR-Functions/Numerical Recipes- Example & Functions/Functions/chint.for", "max_forks_repo_name": "DingdingLuan/nrfunctions_fortran", "max_forks_repo_head_hexsha": "37e376dab8d6b99e63f6f1398d0c33d5d6ad3f8c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.4736842105, "max_line_length": 42, "alphanum_fraction": 0.4411764706, "num_tokens": 149}
|
import os
import re
import matplotlib.pyplot as plt
from skimage.transform import resize, rescale
import numpy as np
from tensorflow.keras.layers import Dense, Flatten, Conv2D, Input, MaxPooling2D, Dropout, UpSampling2D,add
from tensorflow.keras import regularizers
from tensorflow.keras.models import Model
#----------PREPARE DATASET----------
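# GET_DATASET (below) builds paired training arrays: the high-res targets are images
# resized to 256x256, and the low-res inputs are made by rescaling down by 0.5 and
# back up by 2.0, which keeps the size but discards high-frequency detail.
# Note that the dataset path below is machine-specific and will need changing.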
def GET_DATASET(number_of_batches = 27):
n_batch = number_of_batches
bn = 0
low_batch_array = []
high_batch_array = []
img_high_ds = []
img_low_ds = []
for r, n, f in os.walk('/home/sachin/MY_FOLDER/PYCHARM/Tensorflow/Datasets/Image Super Resolution/bmw10'):
for file in f:
if re.search(r'\.(jpg|jpeg|png|bmp|tiff)\Z', file, re.I):  # raw string, escaped dot, and 'bmp' (was an unescaped '.' and 'bmf')
img_path = os.path.join(r, file)
img = plt.imread(img_path)/255
if len(img.shape) > 2:
img_resize = resize(img, (256, 256))
high_batch_array.append(img_resize)
low_batch_array.append(rescale(rescale(img_resize,0.5,multichannel=True), 2.0,multichannel=True))
bn += 1
if bn == n_batch:
img_high_ds = np.array(high_batch_array)
img_low_ds = np.array(low_batch_array)
bn = 0
high_batch_array = []
low_batch_array = []
else:
print("Invalid Image Extension Found:", file)
return img_low_ds, img_high_ds
#----------ENCODER PART----------
def ENCODER():
input = Input(shape=(256,256,3))
layer1 = Conv2D(64, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(input)
layer2 = Conv2D(64, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer1)
layer3 = MaxPooling2D(padding='same')(layer2)
layer4 = Dropout(0.3)(layer3)
layer5 = Conv2D(128, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer4)
layer6 = Conv2D(128, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer5)
layer7 = MaxPooling2D(padding='same')(layer6)
output = Conv2D(256, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer7)
encoder = Model(input, output)
return encoder
def AUTO_ENCODER():
ip = Input(shape=(256, 256, 3))
layer1 = Conv2D(64, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(ip)
layer2 = Conv2D(64, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer1)
layer3 = MaxPooling2D(padding='same')(layer2)
layer4 = Dropout(0.3)(layer3)
layer5 = Conv2D(128, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer4)
layer6 = Conv2D(128, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer5)
layer7 = MaxPooling2D(padding='same')(layer6)
layer8 = Conv2D(256, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer7)
layer9 = UpSampling2D()(layer8)
layer10 = Conv2D(128, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer9)
layer11 = Conv2D(128, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer10)
layer12 = add([layer6, layer11])
layer13 = UpSampling2D()(layer12)
layer14 = Conv2D(64, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer13)
layer15 = Conv2D(64, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer14)
layer16 = add([layer15, layer2])
op = Conv2D(3, (3,3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(layer16)
auto_enc = Model(ip, op)
return auto_enc
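# Note (added): the decoder mirrors the encoder and merges encoder feature maps
# back in with add() (layer6 -> layer12, layer2 -> layer16), i.e. residual skip
# connections that help recover spatial detail lost in the two pooling stages.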
low_ds, high_ds = GET_DATASET()
auto_encoder = AUTO_ENCODER()
#auto_encoder.summary()
auto_encoder.compile(optimizer='adadelta', loss='mean_squared_error')
#auto_encoder.load_weights('/home/sachin/MY_FOLDER/PYCHARM/Tensorflow/Datasets/Image Super Resolution/sr.img_net.mse.final_model5.no_patch.weights.best.hdf5')
auto_encoder.fit(low_ds,high_ds, epochs=20)
res = auto_encoder.predict(low_ds)
print("result=",res[0].shape, np.max(res[0]), np.min(res[0]))
print(low_ds[0].shape, np.max(low_ds[0]), np.min(low_ds[0]))
i = np.random.randint(0,27)
plt.figure(figsize=(20,20))
plt.subplot(1,3,1)
plt.imshow(low_ds[i])
plt.title("low res")
plt.subplot(1,3,2)
plt.imshow(res[i])
plt.title("Predicted")
plt.subplot(1,3,3)
plt.imshow(high_ds[i])
plt.title("High Res")
plt.show()
|
{"hexsha": "57445f1b72561bd6d3caccae2dd31118a1bcf942", "size": 4777, "ext": "py", "lang": "Python", "max_stars_repo_path": "isr.ipynb.py", "max_stars_repo_name": "SACHIN446/Image-Super-Resolution-Using-AutoEncoder", "max_stars_repo_head_hexsha": "64c295214d881529fd3844e9931f7c7b9391e580", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "isr.ipynb.py", "max_issues_repo_name": "SACHIN446/Image-Super-Resolution-Using-AutoEncoder", "max_issues_repo_head_hexsha": "64c295214d881529fd3844e9931f7c7b9391e580", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "isr.ipynb.py", "max_forks_repo_name": "SACHIN446/Image-Super-Resolution-Using-AutoEncoder", "max_forks_repo_head_hexsha": "64c295214d881529fd3844e9931f7c7b9391e580", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.8256880734, "max_line_length": 158, "alphanum_fraction": 0.6673644547, "include": true, "reason": "import numpy", "num_tokens": 1363}
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Aug 5 15:22:56 2020
@author: matthew
"""
#%%
def deformation_wrapper(lons_mg, lats_mg, deformation_ll, source, dem = None,
asc_or_desc = 'asc', incidence = 23, **kwargs):
""" A function to prepare grids of pixels and deformation sources specified in lon lat for use with
deformation generating functions that work in metres. Note that different sources require different arguments
(e.g. opening makes sense for a dyke or sill but not for a Mogi source, whereas volume change makes sense for a Mogi
source and not for a dyke or sill). Therefore, after setting "source" a selection of kwargs must be passed
to the function.
E.g. for a Mogi source:
mogi_kwargs = {'volume_change' : 1e6,
'depth' : 2000} # both in metres
Or a dyke:
dyke_kwargs = {'strike' : 0, # degrees
'top_depth' : 1000, # metres.
'bottom_depth' : 3000,
'length' : 5000,
'dip' : 80,
'opening' : 0.5}
Or a sill:
sill_kwargs = {'strike' : 0,
'depth' : 3000,
'width' : 5000,
'length' : 5000,
'dip' : 1,
'opening' : 0.5}
Or an earthquake:
quake_ss_kwargs = {'strike' : 0,
'dip' : 80,
'length' : 5000,
'rake' : 0,
'slip' : 1,
'top_depth' : 4000,
'bottom_depth' : 8000}
deformation_eq_dyke_sill and deformation_mogi have more information on these arguments, too.
Inputs:
lons_mg | rank 2 array | longitudes of the bottom left corner of each pixel.
lats_mg | rank 2 array | latitudes of the bottom left corner of each pixel.
deformation_ll | tuple | (lon, lat) of centre of deformation source.
source | string | mogi or quake or dyke or sill
dem | rank 2 masked array or None | The dem, with water masked. If not supplied (None),
then an array (and not a masked array) of deformation is returned.
asc_or_desc | string | 'asc' or 'desc' or 'random'. If set to 'random', 50% chance of each.
incidence | float | satellite incidence angle.
**kwargs | various parameters required for each type of source. E.g. opening, as opposed to slip or volume change etc.
Returns:
los_grid | rank 2 masked array | displacement in satellite LOS at each location, with the same mask as the dem.
x_grid | rank 2 array | x displacement at each location
y_grid | rank 2 array | y displacement at each location
z_grid | rank 2 array | z displacement at each location
History:
2020/08/07 | MEG | Written.
2020/10/09 | MEG | Update so that the DEM is optional.
2021_05_18 | MEG | Fix bug in number of pixels in a degree (hard coded as 1201, now calculated from lon and lat grids)
"""
import numpy as np
import numpy.ma as ma
from syinterferopy.aux import ll2xy, lon_lat_to_ijk
# 1: deal with lons and lats, and get pixels in metres from origin at lower left.
pixs2deg_x = 1 / (lons_mg[0,1] - lons_mg[0,0]) # the number of pixels in 1 deg of longitude
pixs2deg_y = 1 / (lats_mg[0,0] - lats_mg[1,0]) # the number of pixels in 1 deg of latitude
pixs2deg = np.mean([pixs2deg_x, pixs2deg_y])
dem_ll_extent = [(lons_mg[-1,0], lats_mg[-1,-1]), (lons_mg[1,-1], lats_mg[1,0])] # [lon lat tuple of lower left corner, lon lat tuple of upper right corner]
xyz_m, pixel_spacing = lon_lat_to_ijk(lons_mg, lats_mg) # get pixel positions in metres from origin in lower left corner (and also their size in x and y direction)
# import sys
# sys.path.append("/home/matthew/university_work/python_stuff/python_scripts")
# from small_plot_functions import matrix_show
# import matplotlib.pyplot as plt
# f, axes = plt.subplots(1,2)
# matrix_show(np.reshape(xyz_m[0,:], (lons_mg.shape[0], lons_mg.shape[1])), ax = axes[0], fig = f)
# matrix_show(np.reshape(xyz_m[1,:], (lons_mg.shape[0], lons_mg.shape[1])), ax = axes[1], fig = f)
# 1: Make a satellite look vector.
if asc_or_desc == 'asc':
heading = 192.04
elif asc_or_desc == 'desc':
heading = 348.04
elif asc_or_desc == 'random':
if (-0.5+np.random.rand()) < 0:    # uniform on [-0.5, 0.5), so true 50% of the time (was '< 0.5', which was always true)
heading = 192.04 # Heading (azimuth) of satellite measured clockwise from North, in degrees, half are descending
else:
heading = 348.04 # Heading (azimuth) of satellite measured clockwise from North, in degrees, half are ascending
else:
raise Exception(f"'asc_or_desc' must be either 'asc' or 'desc' or 'random', but is currently {asc_or_desc}. Exiting. ")
# matlab implementation
deg2rad = np.pi/180
sat_inc = 90 - incidence
sat_az = 360 - heading
# sat_inc=Incidence   # hmmm - why did TJW use a 2nd, different definition?
# sat_az=Heading;
los_x=-np.cos(sat_az*deg2rad)*np.cos(sat_inc*deg2rad);
los_y=-np.sin(sat_az*deg2rad)*np.cos(sat_inc*deg2rad);
los_z=np.sin(sat_inc*deg2rad);
los_vector = np.array([[los_x],
[los_y],
[los_z]]) # Unit vector in satellite line of site
# my Python implementation
# look = np.array([[np.sin(heading)*np.sin(incidence)], # Looks to be a radians / vs degrees error here.
# [np.cos(heading)*np.sin(incidence)],
# [np.cos(incidence)]]) # ground to satellite unit vector
# 2: calculate deformation location in the new metres from lower left coordinate system.
deformation_xy = ll2xy(np.asarray(dem_ll_extent[0])[np.newaxis,:], pixs2deg, # lon lat of lower left corner, number of pixels in 1 degree
np.asarray(deformation_ll)[np.newaxis,:]) # lon lat of point of interest (deformation centre)
deformation_m = np.array([[deformation_xy[0,0] * pixel_spacing['x'], deformation_xy[0,1] * pixel_spacing['y']]]) # convert from number of pixels from lower left corner to number of metres from lower left corner, 1x2 array.
#import pdb; pdb.set_trace()
# 3: Calculate the deformation:
if source == 'mogi':
model_params = np.array([deformation_m[0,0], deformation_m[0,1], kwargs['depth'], kwargs['volume_change']])[:,np.newaxis]
U = deformation_Mogi(model_params, xyz_m, 0.25,30e9) # U = 3d displacement, xyz are rows, each point is a column.
elif (source == 'quake') or (source == 'dyke') or (source == 'sill'):
U = deformation_eq_dyke_sill(source, (deformation_m[0,0], deformation_m[0,1]), xyz_m, **kwargs) # U = 3d displacement, xyz are rows, each point is a column.
else:
raise Exception(f"'source' can be eitehr 'mogi', 'quake', 'dyke', or 'sill', but not {source}. Exiting. ")
# 4: convert the xyz deformation into movement in the direction of the satellite LOS (ie. upwards = positive, despite being range shortening)
x_grid = np.reshape(U[0,], (lons_mg.shape[0], lons_mg.shape[1]))
y_grid = np.reshape(U[1,], (lons_mg.shape[0], lons_mg.shape[1]))
z_grid = np.reshape(U[2,], (lons_mg.shape[0], lons_mg.shape[1]))
los_grid = x_grid*los_vector[0,0] + y_grid*los_vector[1,0] + z_grid*los_vector[2,0]
if dem is not None:
los_grid = ma.array(los_grid, mask = ma.getmask(dem)) # mask the water parts of the scene. Note that this can reduce the max of defo_m as parts of the signal may then be masked out.
return los_grid, x_grid, y_grid, z_grid
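# A minimal usage sketch (hypothetical grids and source location; the mogi kwargs follow the docstring above):
# mogi_kwargs = {'volume_change' : 1e6, 'depth' : 2000}                                 # both in metres
# los, ux, uy, uz = deformation_wrapper(lons_mg, lats_mg, (14.14, 40.84), 'mogi',
#                                       dem = None, **mogi_kwargs)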
#%%
def deformation_eq_dyke_sill(source, source_xy_m, xyz_m, **kwargs):
"""
A function to create deformation patterns for either an earthquake, dyke or sill. Uses the Okada function from PyInSAR: https://github.com/MITeaps/pyinsar
To aid readability, the parameters that differ between sources (e.g. slip for a quake, opening for a dyke)
are passed separately as kwargs, even if they ultimately go into the same field in the model parameters.
A quick recap on definitions:
strike - measured clockwise from 0 at north, 180 at south. The fault dips to the right of this direction (i.e. the hanging wall is on that side).
dip - measured from horizontal, 0 for horizontal, 90 for vertical.
rake - direction the hanging wall moves during rupture, measured relative to strike, anticlockwise is positive, so:
0 for left lateral ss
180 (or -180) for right lateral ss
-90 for normal
90 for thrust
Inputs:
source | string | quake or dyke or sill
source_xy_m | tuple | x and y location of centre of source, in metres.
xyz_m | rank2 array | x and y locations of all points in metres. 0,0 is top left?
examples of kwargs:
quake_normal = {'strike' : 0,
'dip' : 70,
'length' : 5000,
'rake' : -90,
'slip' : 1,
'top_depth' : 4000,
'bottom_depth' : 8000}
quake_thrust = {'strike' : 0,
'dip' : 30,
'length' : 5000,
'rake' : 90,
'slip' : 1,
'top_depth' : 4000,
'bottom_depth' : 8000}
quake_ss = {'strike' : 0,
'dip' : 80,
'length' : 5000,
'rake' : 0,
'slip' : 1,
'top_depth' : 4000,
'bottom_depth' : 8000}
dyke = {'strike' : 0,
'top_depth' : 1000,
'bottom_depth' : 3000,
'length' : 5000,
'dip' : 80,
'opening' : 0.5}
sill = {'strike' : 0,
'depth' : 3000,
'width' : 5000,
'length' : 5000,
'dip' : 1,
'opening' : 0.5}
Returns:
x_grid | rank 2 array | displacement in x direction for each point (pixel on Earth's surface)
y_grid | rank 2 array | displacement in y direction for each point (pixel on Earth's surface)
z_grid | rank 2 array | displacement in z direction for each point (pixel on Earth's surface)
los_grid | rank 2 array | change in satellite - ground distance, in satellite look angle direction. Need to confirm if +ve is up or down.
History:
2020/08/05 | MEG | Written
2020/08/21 | MEG | Switch from disloc3d.m function to compute_okada_displacement.py functions.
"""
import numpy as np
from syinterferopy.pyinsar_okada import compute_okada_displacement
# 1: Setting for elastic parameters.
lame = {'lambda' : 2.3e10, # elastic modulus (Lame parameter, units are pascals)
'mu' : 2.3e10} # shear modulus (Lame parameter, units are pascals)
v = lame['lambda'] / (2*(lame['lambda'] + lame['mu'])) # calculate Poisson's ratio
# import matplotlib.pyplot as plt
# both_arrays = np.hstack((np.ravel(coords), np.ravel(xyz_m)))
# f, axes = plt.subplots(1,2)
# axes[0].imshow(coords, aspect = 'auto', vmin = np.min(both_arrays), vmax = np.max(both_arrays)) # goes from -1e4 to 1e4
# axes[1].imshow(xyz_m, aspect = 'auto', vmin = np.min(both_arrays), vmax = np.max(both_arrays)) # goes from 0 to 2e4
if source == 'quake':
opening = 0
slip = kwargs['slip']
rake = kwargs['rake']
width = kwargs['bottom_depth'] - kwargs['top_depth']
# centroid_depth = np.mean((kwargs['bottom_depth'] - kwargs['top_depth']))
centroid_depth = np.mean((kwargs['bottom_depth'], kwargs['top_depth']))
elif source == 'dyke': # ie dyke or sill
opening = kwargs['opening']
slip = 0
rake = 0
width = kwargs['bottom_depth'] - kwargs['top_depth']
centroid_depth = np.mean((kwargs['bottom_depth'], kwargs['top_depth']))
elif source == 'sill': # ie dyke or sill
opening = kwargs['opening']
slip = 0
rake = 0
centroid_depth = kwargs['depth']
width = kwargs['width']
else:
raise Exception(f"'Source' must be either 'quake', 'dyke', or 'sill', but is set to {source}. Exiting.")
# 3: compute deformation using Okada function
U = compute_okada_displacement(source_xy_m[0], source_xy_m[1], # x y location, in metres
centroid_depth, # fault_centroid_depth, guess metres?
np.deg2rad(kwargs['strike']),
np.deg2rad(kwargs['dip']),
kwargs['length'], width, # length and width, in metres
np.deg2rad(rake), # rake, in rads
slip, opening, # slip (if quake) or opening (if dyke or sill)
v, xyz_m[0,:], xyz_m[1,:]) # Poisson's ratio, x and y coords of surface locations.
return U
#%%
def deformation_Mogi(m,xloc,nu,mu):
"""
Computes displacements, strains and stresses from a point (Mogi) source.
Inputs m and xloc can be matrices; for multiple models, the deformation fields from each are summed.
Inputs:
m = 4xn volume source geometry (length; length; length; length^3)
(x-coord, y-coord, depth(+), volume change)
xloc = 3xs matrix of observation coordinates (length)
nu = Poisson's ratio
mu = shear modulus (if omitted, default value is unity)
Outputs:
U = 3xs matrix of displacements (length)
(Ux,Uy,Uz)
D = 9xn matrix of displacement derivatives
(Dxx,Dxy,Dxz,Dyx,Dyy,Dyz,Dzx,Dzy,Dzz)
S = 6xn matrix of stresses
(Sxx,Sxy,Sxz,Syy,Syz,Szz)
History:
For information on the basis for this code see:
Okada, Y. Internal deformation due to shear and tensile faults in a half-space, Bull. Seismol. Soc. Am., 82, 1018-1049, 1992.
1998/06/17 | Peter Cervelli.
2000/11/03 | Peter Cervelli, Revised
2001/08/21 | Kaj Johnson | Fixed a bug ('*' multiplication should have been '.*').
2018/03/31 | Matthew Gaddes | Converted to Python3, but only for U (displacements).
"""
#import scipy.io
import numpy as np
_, n_data = xloc.shape
_, models = m.shape
#Lambda=2*mu*nu/(1-2*nu)
U=np.zeros((3,n_data)) # set up the array to store displacements
for i in range(models): # loop through each of the defo sources
C=m[3,i]/(4*np.pi) # volume change of the i'th source (was m[3], which indexed a whole row and only worked for a single source)
x=xloc[0,:]-float(m[0,i]) # difference in distance from centre of source (x)
y=xloc[1,:]-float(m[1,i]) # difference in distance from centre of source (y)
z=xloc[2,:]
d1=m[2,i]-z
d2=m[2,i]+z
R12=x**2+y**2+d1**2
R22=x**2+y**2+d2**2
R13=R12**1.5
R23=R22**1.5
R15=R12**2.5
R25=R22**2.5
R17=R12**3.5
R27=R22**3.5 # was R12**3.5, a typo (unused below, but fixed for symmetry with R23/R25)
#Calculate displacements
U[0,:] = U[0,:] + C*( (3 - 4*nu)*x/R13 + x/R23 + 6*d1*x*z/R15 )
U[1,:] = U[1,:] + C*( (3 - 4*nu)*y/R13 + y/R23 + 6*d1*y*z/R15 )
U[2,:] = U[2,:] + C*( (3 - 4*nu)*d1/R13 + d2/R23 - 2*(3*d1**2 - R12)*z/R15)
return U
#%%
def atmosphere_topo(dem_m, strength_mean = 56.0, strength_var = 2.0, difference = False):
""" Given a dem, return a topographically correlated APS, either for a single acquistion
or for an interferometric pair.
Inputs:
dem_m | r2 ma | rank 2 masked array, with water masked. Units = metres!
strength_mean | float | rad/km of delay. default is 56.0, taken from Fig5 Pinel 2011 (Statovolcanoes...)
strength_var | float | variance of rad/km delay. Default is 2.0, which gives values similar to Fig 5 of above.
difference | boolean | if False, returns for one acquisition. If true, returns for an interferometric pair (ie difference of two acquisitions)
Outputs:
ph_topo | r2 ma | topo correlated delay in m. UNITS ARE M
2019/09/11 | MEG | written.
"""
import numpy as np
import numpy.ma as ma
envisat_lambda = 0.056 #envisat/S1 wavelength in m
dem = 0.001 * dem_m # convert from metres to km
if difference is False:
ph_topo = (strength_mean + strength_var * np.random.randn(1)) * dem
elif difference is True:
ph_topo_aq1 = (strength_mean + strength_var * np.random.randn(1)) * dem # this is the delay for one acquisition
ph_topo_aq2 = (strength_mean + strength_var * np.random.randn(1)) * dem # and for another
ph_topo = ph_topo_aq1 - ph_topo_aq2 # interferogram is the difference, still in rad
else:
raise Exception("'difference' must be either True or False. Exiting...")
# convert from rad to m
ph_topo_m = (ph_topo / (4*np.pi)) * envisat_lambda # delay/elevation ratio is taken from a paper (pinel 2011) using Envisat data
if np.max(ph_topo_m) < 0: # ensure that it always starts from 0, either increasing or decreasing
ph_topo_m -= np.max(ph_topo_m)
else:
ph_topo_m -= np.min(ph_topo_m)
ph_topo_m = ma.array(ph_topo_m, mask = ma.getmask(dem_m))
ph_topo_m -= ma.mean(ph_topo_m) # mean centre the signal
return ph_topo_m
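# A minimal usage sketch (assumes dem_m is a rank 2 masked array of heights in metres):
# ph_topo_m = atmosphere_topo(dem_m, difference = True)    # one interferogram's topographically correlated delay, in metres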
#%%
def atmosphere_turb(n_atms, lons_mg, lats_mg, method = 'fft', mean_m = 0.02,
water_mask = None, difference = False, verbose = False,
cov_interpolate_threshold = 1e4, cov_Lc = 2000):
""" A function to create synthetic turbulent atmospheres based on the methods in Lohman Simons 2005, or using Andy Hooper and Lin Shen's fft method.
Note that due to memory issues, when using the covariance (Lohman) method, larger ones are made by interpolating smaller ones.
Can return atmospheres for an individual acquisition, or as the difference of two (as per an interferogram). Units are in metres.
Inputs:
n_atms | int | number of atmospheres to generate
lons_mg | rank 2 array | longitudes of the bottom left corner of each pixel.
lats_mg | rank 2 array | latitudes of the bottom left corner of each pixel.
method | string | 'fft' or 'cov'. Cov for the Lohman & Simons method, fft for Andy Hooper/Lin Shen's fft method (which is much faster). Currently no way to set length scale using fft method.
mean_m | float | average max or min value of atmospheres that are created. e.g. if 3 atmospheres have max values of 0.02m, 0.03m, and 0.04m, their mean would be 0.03m.
water_mask | rank 2 array | If supplied, this is applied to the atmospheres generated, converting them to masked arrays.
difference | boolean | If difference, two atmospheres are generated and subtracted from each other to make a single atmosphere.
verbose | boolean | Controls info printed to screen when running.
cov_Lc | float | length scale of correlation, in metres. If smaller, noise is patchier, and if larger, smoother.
cov_interpolate_threshold | int | if n_pixs is greater than this, images will be generated at a size such that the total number of pixels doesn't exceed this.
e.g. if set to 1e4 (10000, the default) and images are 120*120, they will be generated at 100*100 then upsampled to 120*120.
Outputs:
ph_turbs | r3 array | n_atms x n_pixs x n_pixs, UNITS ARE M. Note that if a water_mask is provided, this is applied and a masked array is returned.
2019/09/13 | MEG | adapted extensively from a simple script
2020/10/02 | MEG | Change so that a water mask is optional.
2020/10/05 | MEG | Change so that meshgrids of the longitudes and latitudes of each pixel are used to set resolution.
Also fix a bug in how cov_Lc is handled, so this is now in meters.
2020/10/06 | MEG | Add support for rectangular atmospheres, fix some bugs.
2020_03_01 | MEG | Add option to use Lin Shen/Andy Hooper's fft method which is quicker than the covariance method.
"""
import numpy as np
import numpy.ma as ma
from scipy.spatial import distance as sp_distance # geopy also has a distance function. Rename for safety.
from scipy import interpolate as scipy_interpolate
from syinterferopy.aux import lon_lat_to_ijk
def generate_correlated_noise_cov(pixel_distances, cov_Lc, shape):
""" given a matrix of pixel distances (in meters) and a length scale for the noise (also in meters),
generate some 2d spatially correlated noise.
Inputs:
pixel_distances | rank 2 array | pixels x pixels, distance between each on in metres.
cov_Lc | float | Length scale over which the noise is correlated. units are metres.
shape | tuple | (nx, ny) NOTE X FIRST!
Returns:
y_2d | rank 2 array | spatially correlated noise.
History:
2019/06/?? | MEG | Written
2020/10/05 | MEG | Overhauled to be in metres and use scipy cholesky
2020/10/06 | MEG | Add support for rectangular atmospheres.
"""
import scipy
nx = shape[0]
ny = shape[1]
Cd = np.exp((-1 * pixel_distances)/cov_Lc) # from the matrix of distances, convert to covariances using exponential equation
Cd_L = np.linalg.cholesky(Cd) # ie Cd = CD_L @ CD_L.T Worse error messages, so best called in a try/except form.
#Cd_L = scipy.linalg.cholesky(Cd, lower=True) # better error messages than the numpy version, but can cause crashes on some machines
x = np.random.randn((ny*nx)) # Parsons 2007 syntax - x for uncorrelated noise
y = Cd_L @ x # y for correlated noise
y_2d = np.reshape(y, (ny, nx)) # turn back to rank 2
return y_2d
def generate_correlated_noise_fft(nx, ny, std_long, sp):
""" A function to create synthetic turbulent troposphere delay using an FFT approach.
The power of the turbulence is tuned by the weather model at the longer wavelengths.
Inputs:
nx (int) -- width of troposphere
ny (int) -- length of troposphere
std_long (float) -- standard deviation of the weather model at the longer wavelengths. Default = ?
sp | int | pixel spacing in km
Outputs:
APS (float): 2D array, ny * nx, units are m.
History:
2020_??_?? | LS | Adapted from code by Andy Hooper.
2021_03_01 | MEG | Small change to docs and inputs to work with SyInterferoPy
"""
import numpy as np
import numpy.matlib as npm
import math
np.seterr(divide='ignore')
cut_off_freq=1/50 # drop wavelengths above 50 km
x=np.arange(0,int(nx/2)) # positive frequencies only
y=np.arange(0,int(ny/2)) # positive frequencies only
freq_x=np.divide(x,nx*sp)
freq_y=np.divide(y,ny*sp)
Y,X=npm.meshgrid(freq_x,freq_y)
freq=np.sqrt((X*X+Y*Y)/2) # 2D positive frequencies
log_power=np.log10(freq)*-11/3 # -11/3 in 2D gives -8/3 in 1D
ix=np.where(freq<2/3)
log_power[ix]=np.log10(freq[ix])*-8/3-math.log10(2/3) # change slope at 1.5 km (2/3 cycles per km)
bin_power=np.power(10,log_power)
ix=np.where(freq<cut_off_freq)
bin_power[ix]=0
APS_power=np.zeros((ny,nx)) # mirror positive frequencies into other quadrants
APS_power[0:int(ny/2), 0:int(nx/2)]=bin_power
# APS_power[0:int(ny/2), int(nx/2):nx]=npm.fliplr(bin_power)
# APS_power[int(ny/2):ny, 0:int(nx/2)]=npm.flipud(bin_power)
# APS_power[int(ny/2):ny, int(nx/2):nx]=npm.fliplr(npm.flipud(bin_power))
APS_power[0:int(ny/2), int(np.ceil(nx/2)):]=npm.fliplr(bin_power)
APS_power[int(np.ceil(ny/2)):, 0:int(nx/2)]=npm.flipud(bin_power)
APS_power[int(np.ceil(ny/2)):, int(np.ceil(nx/2)):]=npm.fliplr(npm.flipud(bin_power))
APS_filt=np.sqrt(APS_power)
x=np.random.randn(ny,nx) # white noise
y_tmp=np.fft.fft2(x)
y_tmp2=np.multiply(y_tmp,APS_filt) # convolve with filter
y=np.fft.ifft2(y_tmp2)
APS=np.real(y)
APS=APS/np.std(APS)*std_long # adjust the turbulence by the weather model at the longer wavelengths.
APS=APS*0.01 # convert from cm to m
return APS
def rescale_atmosphere(atm, atm_mean = 0.02, atm_sigma = 0.005):
""" a function to rescale a 2d atmosphere with any scale to a mean centered
one with a min and max value drawn from a normal distribution.
Inputs:
atm | rank 2 array | a single atmosphere.
atm_mean | float | average max or min value of atmospheres that are created, in metres. e.g. if 3 atmospheres have max values of 0.02m, 0.03m, and 0.04m, their mean would be 0.03m
atm_sigma | float | standard deviation of Gaussian distribution used to generate atmosphere strengths.
Returns:
atm | rank 2 array | a single atmosphere, rescaled so that its maximum (or minimum) signal is approximately the value set by atm_mean
History:
20YY/MM/DD | MEG | Written
2020/10/02 | MEG | Standardise throughout to use metres for units.
"""
atm -= np.mean(atm) # mean centre
atm_strength = (atm_sigma * np.random.randn(1)) + atm_mean # maximum strength of signal is drawn from a gaussian distribution, mean and sigma set in metres.
if np.abs(np.min(atm)) > np.abs(np.max(atm)): # if range of negative numbers is larger
atm *= (atm_strength / np.abs(np.min(atm))) # strength is drawn from a normal distribution with a mean set by mean_m (e.g. 0.02)
else:
atm *= (atm_strength / np.max(atm)) # but if positive part is larger, rescale in the same way as above.
return atm
# 0: Check inputs
if method not in ['fft', 'cov']:
raise Exception(f"'method' must be either 'fft' (for the fourier transform based method), "
f" or 'cov' (for the covariance based method). {method} was supplied, so exiting. ")
#1: determine if linear interpolation is required
ny, nx = lons_mg.shape
n_pixs = nx * ny
if (n_pixs > cov_interpolate_threshold) and (method == 'cov'):
if verbose:
print(f"The number of pixels ({n_pixs}) is larger than 'cov_interpolate_threshold' ({int(cov_interpolate_threshold)}) so images will be created "
f"with {int(cov_interpolate_threshold)} pixels and interpolated to the full resolution. ")
interpolate = True # set boolean flag
oversize_factor = n_pixs / cov_interpolate_threshold # determine how many times too many pixels we have.
lons_ds = np.linspace(lons_mg[-1,0], lons_mg[-1,-1], int(nx * (1/np.sqrt(oversize_factor)))) # make a downsampled vector of just the longitudes (square root as number of pixels is a measure of area, and this is length)
lats_ds = np.linspace(lats_mg[0,0], lats_mg[-1,0], int(ny * (1/np.sqrt(oversize_factor)))) # and for latitudes
lons_mg_ds = np.repeat(lons_ds[np.newaxis, :], lats_ds.shape, axis = 0) # make rank 2 again
lats_mg_ds = np.repeat(lats_ds[:, np.newaxis], lons_ds.shape, axis = 1) # and for latitudes
ny_generate, nx_generate = lons_mg_ds.shape # get the size of the downsampled grid we'll be generating at
else:
interpolate = False # set boolean flag
nx_generate = nx # if not interpolating, these don't change.
ny_generate = ny
lons_mg_ds = lons_mg # if not interpolating, don't need to downsample.
lats_mg_ds = lats_mg
#2: calculate distance between points
ph_turbs = np.zeros((n_atms, ny_generate, nx_generate)) # initiate output as a rank 3 (ie n_images x ny x nx)
xyz_m, pixel_spacing = lon_lat_to_ijk(lons_mg_ds, lats_mg_ds) # get pixel positions in metres from origin in lower left corner (and also their size in x and y direction)
xy = xyz_m[0:2].T # just get the x and y positions (ie discard z), and make lots x 2 (ie two columns)
#3: generate atmospheres, using either of the two methods.
if difference == True:
n_atms += 1 # if differencing atmospheres, create one extra so that when differencing we are left with the correct number
if method == 'fft':
for i in range(n_atms):
ph_turbs[i,:,:] = generate_correlated_noise_fft(nx_generate, ny_generate, std_long=1,
sp = 0.001 * np.mean((pixel_spacing['x'], pixel_spacing['y'])) ) # generate noise using fft method. pixel spacing is average in x and y direction (and m converted to km)
if verbose:
print(f'Generated {i+1} of {n_atms} single acquisition atmospheres. ')
else:
pixel_distances = sp_distance.cdist(xy,xy, 'euclidean') # calculate all pixelwise pairs - slow as (pixels x pixels)
Cd = np.exp((-1 * pixel_distances)/cov_Lc) # from the matrix of distances, convert to covariances using exponential equation
try:
Cd_L = np.linalg.cholesky(Cd) # ie Cd = CD_L @ CD_L.T Worse error messages, so best called in a try/except form.
#Cd_L = scipy.linalg.cholesky(Cd, lower=True) # better error messages than the numpy version, but can cause crashes on some machines
except np.linalg.LinAlgError as e: # the original retried in a while loop, but Cd never changes so a failure would loop forever
raise Exception("The Cholesky decomposition of the covariance matrix failed, so correlated noise cannot be generated. ") from e
for n_atm in range(n_atms):
x = np.random.randn((ny_generate*nx_generate)) # Parsons 2007 syntax - x for uncorrelated noise
y = Cd_L @ x # y for correlated noise
ph_turb = np.reshape(y, (ny_generate, nx_generate)) # turn back to rank 2
ph_turbs[n_atm,:,:] = ph_turb
if verbose:
print(f'Generated {n_atm+1} of {n_atms} single acquisition atmospheres. ')
# success = 0
# fail = 0
# while success < n_atms:
# #for i in range(n_atms):
# try:
# ph_turb = generate_correlated_noise_cov(pixel_distances, cov_Lc, (nx_generate,ny_generate)) # generate noise
# ph_turbs[success,:,:] = ph_turb
# success += 1
# if verbose:
# print(f'Generated {success} of {n_atms} single acquisition atmospheres (with {fail} failures). ')
# except:
# fail += 0
# if verbose:
# print(f"'generate_correlated_noise_cov' failed, which is usually due to errors in the cholesky decomposition that Numpy is performing. The odd failure is normal. ")
# # ph_turbs[i,:,:] = generate_correlated_noise_cov(pixel_distances, cov_Lc, (nx_generate,ny_generate)) # generate noise
# # if verbose:
#3: possibly interpolate to bigger size
if interpolate:
if verbose:
print('Interpolating to the larger size...', end = '')
ph_turbs_output = np.zeros((n_atms, ny, nx)) # initiate output at the upscaled size (ie the same as the original lons_mg shape)
for atm_n, atm in enumerate(ph_turbs): # loop through the 1st dimension of the rank 3 atmospheres.
f = scipy_interpolate.interp2d(np.arange(0,nx_generate), np.arange(0,ny_generate), atm, kind='linear') # and interpolate them to a larger size. First we give it meshgrids and values for each point
ph_turbs_output[atm_n,:,:] = f(np.linspace(0, nx_generate, nx), np.linspace(0, ny_generate, ny)) # then new meshgrids at the original (full) resolution.
if verbose:
print('Done!')
else:
ph_turbs_output = ph_turbs # if we're not interpolating, no change needed
# 4: rescale to correct range (i.e. a couple of cm)
ph_turbs_m = np.zeros(ph_turbs_output.shape)
for atm_n, atm in enumerate(ph_turbs_output):
ph_turbs_m[atm_n,] = rescale_atmosphere(atm, mean_m)
# 5: return back to the shape given, which can be a rectangle:
ph_turbs_m = ph_turbs_m[:,:lons_mg.shape[0],:lons_mg.shape[1]]
if water_mask is not None:
water_mask_r3 = ma.repeat(water_mask[np.newaxis,], ph_turbs_m.shape[0], axis = 0)
ph_turbs_m = ma.array(ph_turbs_m, mask = water_mask_r3)
return ph_turbs_m
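# A minimal usage sketch (hypothetical meshgrids of pixel lons/lats):
# ph_turbs_m = atmosphere_turb(3, lons_mg, lats_mg, method = 'fft', mean_m = 0.02)
# ph_turbs_m has shape (3, ny, nx) and units of metres (a masked array if water_mask is given).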
#%%
def coherence_mask(lons_mg, lats_mg, threshold=0.8, turb_method = 'fft',
cov_Lc = 5000, cov_interpolate_threshold = 1e4, verbose = False):
"""A function to synthesis a mask of incoherent pixels
Inputs:
lons_mg | rank 2 array | longitudes of the bottom left corner of each pixel.
lats_mg | rank 2 array | latitudes of the bottom left corner of each pixel.
threshold | decimal | value at which deemed incoherent. Bigger value = less is incoherent
turb_method | string | 'fft' or 'cov'. Controls the method used to generate the spatially correlated noise which is used here. fft is normally ~100x faster.
cov_Lc | float | length scale of correlation, in metres. If smaller, noise is patchier (ie lots of small masked areas), and if larger, smoother (ie a few large masked areas).
cov_interpolate_threshold | int | if there are more pixels than this value (ie the number of entries in lons_mg), interpolation is used to create the extra resolution (as generating spatially correlated noise is slow for large images)
verbose | boolean | True if information is required on the terminal.
Returns:
mask_coh | rank 2 array |
2019_03_06 | MEG | Written.
2020/08/10 | MEG | Update and add to SyInterferoPy.
2020/08/12 | MEG | Remove need for water_mask to be passed to function.
2020/10/02 | MEG | Update to work with atmosphere_turb after this switched from cm to m.
2020/10/07 | MEG | Update to use new atmosphere_turb function
2020/10/19 | MEG | Add option to pass the interpolation threshold to the atmosphere_turb function.
2020/03/01 | MEG | Add option to select which method is used to generate the spatialy correlated noise.
"""
import numpy as np
if verbose:
print(f"Starting to generate a coherence mask... ", end = '')
if turb_method == 'fft':
mask_coh_values_r3 = atmosphere_turb(1, lons_mg, lats_mg, method='fft', mean_m = 0.01) # generate a single turbulent atmosphere (though it still comes out as rank 3 with first dimension = 1)
elif turb_method == 'cov':
mask_coh_values_r3 = atmosphere_turb(1, lons_mg, lats_mg, mean_m = 0.01,
method='cov', cov_Lc=cov_Lc, cov_interpolate_threshold=cov_interpolate_threshold) # generate a single turbulent atmosphere (though it still comes out as rank 3 with first dimension = 1)
else:
print(f"'turb_method' should be either 'fft' or 'cov'. {turb_method} was supplied, so defaulting to 'fft'. ")
mask_coh_values_r3 = atmosphere_turb(1, lons_mg, lats_mg, method='fft', mean_m = 0.01) # generate a single turbulent atmosphere (though it still comes out as rank 3 with first dimension = 1)
mask_coh_values = mask_coh_values_r3[0,] # convert to rank 2
mask_coh_values = (mask_coh_values - np.min(mask_coh_values)) / np.max(mask_coh_values - np.min(mask_coh_values)) # rescale to range [0, 1]
mask_coh = np.where(mask_coh_values > threshold, np.ones(lons_mg.shape), np.zeros(lons_mg.shape)) # anything above the threshold is masked, creating blotchy areas of incoherence.
if verbose:
print("Done. ")
return mask_coh
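# A minimal usage sketch (hypothetical meshgrids; a bigger threshold masks less):
# mask_coh = coherence_mask(lons_mg, lats_mg, threshold = 0.8)    # rank 2 array, 1 = incoherent, 0 = coherent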
#%%
|
{"hexsha": "675b96d5167b39ec36aa1ae541a9f8390170123a", "size": 40871, "ext": "py", "lang": "Python", "max_stars_repo_path": "syinterferopy/syinterferopy.py", "max_stars_repo_name": "matthew-gaddes/Synthetic-interferograms", "max_stars_repo_head_hexsha": "3cbc553c7a687dd9f94a984231064861ee8363be", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-08T06:30:36.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-08T06:30:36.000Z", "max_issues_repo_path": "syinterferopy/syinterferopy.py", "max_issues_repo_name": "matthew-gaddes/Synthetic-interferograms", "max_issues_repo_head_hexsha": "3cbc553c7a687dd9f94a984231064861ee8363be", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "syinterferopy/syinterferopy.py", "max_forks_repo_name": "matthew-gaddes/Synthetic-interferograms", "max_forks_repo_head_hexsha": "3cbc553c7a687dd9f94a984231064861ee8363be", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.1622377622, "max_line_length": 244, "alphanum_fraction": 0.5506593917, "include": true, "reason": "import numpy,import scipy,from scipy", "num_tokens": 10035}
|
"""
I/O module for BRAIN files (Matlab NDT library of University of Bristol).
Implemented as of 20/6/2016:
- dtype of variables is according to settings.py
- get element dimensions from el_x1, el_y1, el_z1, el_x2, el_y2, el_z2:
Information calculated is probe orientation dependent.
"""
import numpy as np
from .. import geometry as g
from .. import settings as s
from ..core import Probe, Time, ExaminationObject, Material, Frame
__all__ = ["load_expdata"]
class NotHandledByScipy(Exception):
pass
class InvalidExpData(IOError):
pass
def _import_h5py():
try:
import h5py
except ImportError:
h5py = None
return h5py
def load_expdata(file):
"""
Load exp_data file.
Parameters
----------
file: str or file object
Returns
-------
arim.core.Frame
Raises
------
InvalidExpData, OSError (HDF5 fail)
"""
try:
(exp_data, array, filename) = _load_from_scipy(file)
except NotHandledByScipy:
# It seems the file is HDF5 (matlab 7.3)
h5py = _import_h5py()
if h5py is None:
raise Exception(
"Unable to import Matlab file because its file format version is unsupported. "
"Try importing the file in Matlab and exporting it with the "
"command 'save' and the flag '-v7'. Alternatively, try to install the Python library 'h5py'."
)
(exp_data, array, filename) = _load_from_hdf5(file)
# As this point exp_data and array are populated either by scipy.io or hdf5:
try:
probe = _load_probe(array)
except Exception as e:
raise InvalidExpData(e) from e
try:
frame = _load_frame(exp_data, probe)
except Exception as e:
raise InvalidExpData(e) from e
frame.metadata["from_brain"] = filename
frame.probe.metadata["from_brain"] = filename
frame.examination_object.metadata["from_brain"] = filename
return frame
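# A minimal usage sketch (hypothetical filename; behaviour follows the code above):
# frame = load_expdata('exp_data.mat')    # handles both pre-v7.3 (scipy) and v7.3 (h5py) Matlab files
# frame.probe                             # Probe built from exp_data.array
# frame.examination_object                # ExaminationObject carrying the material velocity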
def _load_probe(array):
"""
:param array: dict-like object corresponding to Matlab struct exp_data.array.
:return: Probe
"""
frequency = array["centre_freq"][0, 0]
# dtype = np.result_type(array['el_xc'], array['el_yc'], array['el_zc'])
dtype = s.FLOAT
# Get locations
locations_x = np.squeeze(array["el_xc"]).astype(dtype)
locations_y = np.squeeze(array["el_yc"]).astype(dtype)
locations_z = np.squeeze(array["el_zc"]).astype(dtype)
locations = g.Points.from_xyz(locations_x, locations_y, locations_z)
# Calculate Probe Dimensions (using el_x1, el_x2 and el_xc etc for each dimension)
dimensions_x = 2 * np.maximum(
np.absolute(np.squeeze(array["el_x1"]).astype(dtype) - locations_x),
np.absolute(np.squeeze(array["el_x2"]).astype(dtype) - locations_x),
)
dimensions_y = 2 * np.maximum(
np.absolute(np.squeeze(array["el_y1"]).astype(dtype) - locations_y),
np.absolute(np.squeeze(array["el_y2"]).astype(dtype) - locations_y),
)
dimensions_z = 2 * np.maximum(
np.absolute(np.squeeze(array["el_z1"]).astype(dtype) - locations_z),
np.absolute(np.squeeze(array["el_z2"]).astype(dtype) - locations_z),
)
dimensions = g.Points.from_xyz(dimensions_x, dimensions_y, dimensions_z)
return Probe(locations, frequency, dimensions=dimensions)
def _load_frame(exp_data, probe):
# NB: Matlab is 1-indexed, Python is 0-indexed
tx = np.squeeze(exp_data["tx"])
rx = np.squeeze(exp_data["rx"])
tx = tx.astype(s.UINT) - 1
rx = rx.astype(s.UINT) - 1
# Remark: [...] is required to read in the case of HDF5 file
# (and does nothing if we have a regular array)
timetraces = np.squeeze(exp_data["time_data"][...])
timetraces = timetraces.astype(s.FLOAT)
# exp_data.time_data is such that two consecutive time samples are stored contiguously, which
# is what we want. However Matlab saves either in Fortran order (shape: numtimetraces x numsamples)
# or C order (shape: numsamples x numtimetraces). We force using the latter case.
if timetraces.flags.f_contiguous:
timetraces = timetraces.T
timevect = np.squeeze(exp_data["time"])
timevect = timevect.astype(s.FLOAT)
time = Time.from_vect(timevect)
velocity = np.squeeze(exp_data["ph_velocity"])
velocity = velocity.astype(s.FLOAT)
material = Material(velocity)
examination_object = ExaminationObject(material)
return Frame(timetraces, time, tx, rx, probe, examination_object)
def _load_from_scipy(file):
"""
:param file:
:return:
:raises: NotHandledByScipy
"""
import scipy.io as sio
try:
data = sio.loadmat(file)
except NotImplementedError as e:
raise NotHandledByScipy(e)
# Get data:
try:
exp_data = data["exp_data"][0, 0]
array = exp_data["array"][0, 0]
except (KeyError, IndexError) as e:  # a missing key raises KeyError, bad shapes raise IndexError
raise InvalidExpData(e) from e
# Get filename (works whether 'file' is a file object or a (str) filename)
try:
filename = file.name
except AttributeError:
filename = str(file)
return exp_data, array, filename
def _load_from_hdf5(file):
import h5py
# This line might raise an OSError:
f = h5py.File(file, mode="r")
try:
# File successfully loaded by HDF5:
exp_data = f["exp_data"]
array = exp_data["array"]
except KeyError as e:  # h5py raises KeyError (not IndexError) for a missing dataset
raise InvalidExpData(e) from e
filename = f.filename
return exp_data, array, filename
|
{"hexsha": "a8738520e3d3b1a67f1e4958aa4e1a2e71e664c1", "size": 5527, "ext": "py", "lang": "Python", "max_stars_repo_path": "arim/io/brain.py", "max_stars_repo_name": "will-jj/arim", "max_stars_repo_head_hexsha": "fc15efe171a41355090123fcea10406ee75efe31", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2019-04-05T13:43:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-01T21:38:19.000Z", "max_issues_repo_path": "arim/io/brain.py", "max_issues_repo_name": "will-jj/arim", "max_issues_repo_head_hexsha": "fc15efe171a41355090123fcea10406ee75efe31", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-04-09T10:38:26.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-17T16:23:16.000Z", "max_forks_repo_path": "arim/io/brain.py", "max_forks_repo_name": "will-jj/arim", "max_forks_repo_head_hexsha": "fc15efe171a41355090123fcea10406ee75efe31", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-04-04T17:02:20.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-30T15:36:03.000Z", "avg_line_length": 28.4896907216, "max_line_length": 109, "alphanum_fraction": 0.6560521078, "include": true, "reason": "import numpy,import scipy", "num_tokens": 1403}
|
### Author Douwe Spaanderman - 16 June 2020 ###
# This script reads all the maf files and creates a master list of mutations
import pandas as pd
from pathlib import Path
import warnings
import numpy as np
import argparse
import time
import json
# Currently use sys to get other script - in Future use package
import os
import sys
path_main = ("/".join(os.path.realpath(__file__).split("/")[:-2]))
sys.path.append(path_main + '/Classes/')
sys.path.append(path_main + '/Utils/')
from gene_one_hot import one_hot
from help_functions import mean, str_to_bool, str_none_check
def maf_extract(maf):
'''
Read a single .maf file into a pandas DataFrame, skipping comment lines
that start with '#'. Returns the DataFrame and the file name (stem).
'''
id_ = []
data_dict= {}
file_name = maf.stem.split('.')[0]
i = 0
with open(maf, 'r', encoding="latin-1") as f:
try:
for line in f:
if line.startswith("#"):
continue
elif not id_:
id_ = line.replace('\n', '').split('\t')
else:
data_dict[i] = line.replace('\n', '').split('\t')
i += 1
except:
warnings.warn(f"File: {file_name}, had problems with unrecognizable symbols", DeprecationWarning)
maf_frame = pd.DataFrame.from_dict(data_dict, orient="index", columns=id_)
return maf_frame, file_name
#Filtering and frame transformation
def filter_frame(maf_row, cutoff=0, filter_protein_coding=False):
'''
Average the (possibly '|'-separated) tumor fractions for a row and return the row
only if it passes the tumor-fraction cutoff and, optionally, has a protein change;
rows that fail return None and are dropped later by dropna.
'''
#Change tumor_f to float and avg
tumor_f = list(map(float, str(maf_row["tumor_f"]).split('|')))
tumor_f = mean(tumor_f)
maf_row["tumor_f"] = tumor_f
makes_filters = True
if tumor_f <= cutoff:
makes_filters= False
if filter_protein_coding == True:
if maf_row["Protein_Change"] == '':
makes_filters= False
if makes_filters == True:
return maf_row
def clean_frame(maf_frame):
'''
Set Tumor_Allele from Tumor_Seq_Allele2, drop rows rejected by filter_frame,
and keep a fixed set of annotation columns.
'''
#CURRENTLY DANGEROUS AS I JUST TAKE TUMOR_SEQ_Allele2
#Check allele, it is not actually phased so don't really know why it is called like this.
#Also if tumor_f is 100 both alleles should have this mut which is not the case
maf_frame["Tumor_Allele"] = maf_frame["Tumor_Seq_Allele2"]
#dropna from filter_frame
maf_frame = maf_frame.dropna()
maf_frame = maf_frame[["Hugo_Symbol",
"Entrez_Gene_Id",
"Chromosome",
"Start_position",
"End_position",
"Variant_Classification",
"Variant_Type",
"Reference_Allele",
"Tumor_Allele",
"Matched_Norm_Sample_Barcode",
"Genome_Change",
"Annotation_Transcript",
"Transcript_Strand",
"Transcript_Exon",
"Transcript_Position",
"cDNA_Change",
"Codon_Change",
"Protein_Change",
"Ensembl_so_term",
"tumor_f"]]
return maf_frame
def mutation_filter(data, classification_filter=True, ensembl_filter=False):
'''
Filter mutations based on the classification given in the maf file. Both filters generally
remove the same classes, but I would advise using the default (classification_filter), as it is more standardized.
input:
data | DataFrame | combined maf data
classification_filter | bool | filter on Variant_Classification
ensembl_filter | bool | filter on Ensembl_so_term
'''
if classification_filter == True:
data = data[~data["Variant_Classification"].isin(["Intron", "lincRNA", "IGR", "5'Flank", "5'UTR", "Silent", "3'UTR", "RNA"])]
if ensembl_filter == True:
data = data[~data["Ensembl_so_term"].isin(["intron_variant", "intergenic_variant", "upstream_gene_variant", "5_prime_UTR_variant", "synonymous_variant", "3_prime_UTR_variant", ""])]
return data
#Create one-hot encoding for genes
def one_hot_encoder(data, all_genes:list, all_alterations:list):
'''
One-hot encode which genes (and gene:::alteration combinations) are mutated in a
single sample, returning the flat gene encoding, the gene-alteration encoding,
and the latter reshaped to 2D.
'''
# Initialize empty arrays
gene_array = np.zeros(shape=(len(all_genes)))
gene_alteration_1D_array = np.zeros(shape=(len(all_alterations)))
# Get all keys
gene = set(data["Hugo_Symbol"])
gene_alt_1D = set(data["Hugo_Symbol"] + ":::" + data["Variant_Classification"])
#Get index
index_gene = [i for i, item in enumerate(all_genes) if item in gene]
index_gene_alt_1D = [i for i, item in enumerate(all_alterations) if item in gene_alt_1D]
#Change 0 -> 1 for index
np.put(gene_array, index_gene, 1)
np.put(gene_alteration_1D_array, index_gene_alt_1D, 1)
#Create class
flat = one_hot(gene_array, all_genes)
all_alt = one_hot(gene_alteration_1D_array, all_alterations)
#2D array
all_alt_2D = all_alt.make_2D(int(len(all_alterations)/len(all_genes)))
# Sanity checks
flat.sanity()
all_alt.sanity()
all_alt_2D.sanity()
#Create classes
return flat, all_alt, all_alt_2D
#Create one-hot encoding for empty failed mutation files
def one_hot_empty(all_genes:list, all_alterations:list):
'''
Build all-zero one-hot encodings (same shapes as one_hot_encoder returns) for
samples whose mutation files failed the filters.
'''
# Initialize empty arrays
gene_array = np.zeros(shape=(len(all_genes)))
gene_alteration_1D_array = np.zeros(shape=(len(all_alterations)))
#Create class
flat = one_hot(gene_array, all_genes)
all_alt = one_hot(gene_alteration_1D_array, all_alterations)
#2D array
all_alt_2D = all_alt.make_2D(int(len(all_alterations)/len(all_genes)))
# Sanity checks
flat.sanity()
all_alt.sanity()
all_alt_2D.sanity()
#Create classes
return flat, all_alt, all_alt_2D
def main_maf(directory, filter_protein_coding=False, classification_filter=True, ensembl_filter=False, Save=False, Show=True):
'''
Read every maf file in 'directory', filter and clean the mutations, one-hot
encode them per sample, and optionally pickle the results.
'''
# Path handles a trailing slash either way, and pathlib rejects patterns starting with '/',
# so a single relative glob suffices; listing also avoids exhausting the generator when counting.
pathlist = list(Path(directory).glob('*.maf*'))
number_of_files = len(pathlist)
data = []
cache_failed = []
for idx, path in enumerate(pathlist):
maf_frame, file_name = maf_extract(path)
#print("now doing: {}".format(file_name))
maf_frame = maf_frame.apply(filter_frame, cutoff=0, filter_protein_coding=filter_protein_coding, axis=1)
if isinstance(maf_frame, pd.Series) or maf_frame.empty:
print(f"{file_name} has no mutations that made tumor fraction cutoff or in protein coding")
cache_failed.append(file_name)
continue
maf_frame = clean_frame(maf_frame)
maf_frame["file"] = file_name
data.append(maf_frame)
if idx % 10 == 0:
print(f'done {idx+1} out of {number_of_files}')
data = pd.concat(data)
data = mutation_filter(data, classification_filter=classification_filter, ensembl_filter=ensembl_filter) # was ensembl_filter=classification_filter, which ignored the ensembl_filter argument
# Now create one-hot
data_summary = []
unique_names = data["file"].unique()
all_genes = data["Hugo_Symbol"].unique()
#Remove Unknown
all_genes = all_genes[all_genes != "Unknown"]
all_alterations = data["Variant_Classification"].unique()
all_alterations = [str(x) + ":::" + str(y) for x in all_genes for y in all_alterations]
for i, name in enumerate(unique_names):
tmp_data = data[data["file"] == name]
flat, all_alt, all_alt_2D = one_hot_encoder(tmp_data, all_genes, all_alterations)
# Create dataframe
tmp_data = pd.DataFrame({
"File": name,
"Flat_one_hot": flat,
"Alt_one_hot": all_alt,
"Alt_2D": all_alt_2D
}, index=[0])
data_summary.append(tmp_data)
if i % 10 == 0:
print(f'done {i+1} out of {len(unique_names)}')
if not cache_failed:
print("No failed mutation files")
else:
for i, name in enumerate(cache_failed):
flat, all_alt, all_alt_2D = one_hot_empty(all_genes, all_alterations)
# Create dataframe
tmp_data = pd.DataFrame({
"File": name,
"Flat_one_hot": flat,
"Alt_one_hot": all_alt,
"Alt_2D": all_alt_2D
}, index=[0])
data_summary.append(tmp_data)
if i % 10 == 0:
print(f'done {i+1} out of {len(cache_failed)}')
data_summary = pd.concat(data_summary)
print(data_summary)
if Save != False:
Save_cache = Save
if not Save.endswith(".pkl"):
Save = Save + "maf_extract.pkl"
Save_summary = "/".join(["_summary.".join(x.split(".")) if i+1 == len(Save.split("/")) else x for i, x in enumerate(Save.split("/"))])
data.to_pickle(Save)
data_summary.to_pickle(Save_summary)
with open(Save_cache + "Cache/" + "cache_failed_maf.json", 'w') as f:
json.dump(cache_failed, f, indent=2)
if Show == True:
print(data)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Read and combine all maf data with the data")
parser.add_argument("Path", help="path to directory with maf files")
parser.add_argument("-s", dest="Save", nargs='?', default=False, help="location of file")
parser.add_argument("-d", dest="Show", nargs='?', default=True, help="Do you want to show the plot?")
parser.add_argument("-p", dest="filter_protein_coding", nargs='?', default=True, help="Do you want to show the plot?")
parser.add_argument("-c", dest="classification_filter", nargs='?', default=False, help="Do you want to show the plot?")
parser.add_argument("-e", dest="ensembl_filter", nargs='?', default=False, help="Do you want to show the plot?")
args = parser.parse_args()
start = time.time()
main_maf(directory=args.Path, filter_protein_coding=str_to_bool(args.filter_protein_coding), classification_filter=str_to_bool(args.classification_filter), ensembl_filter=str_to_bool(args.ensembl_filter), Save=args.Save, Show=str_to_bool(args.Show))
end = time.time()
print('completed in {} seconds'.format(end-start))
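# Example invocation (hypothetical paths):
#   python maf.py /path/to/maf_dir/ -s /path/to/output/ -p True -c True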
|
{"hexsha": "8ecbe4547a38a6e09ca382b733ec47a3bc6ebc66", "size": 10487, "ext": "py", "lang": "Python", "max_stars_repo_path": "CellCulturePy/Panel/maf.py", "max_stars_repo_name": "Douwe-Spaanderman/Broad_DJ_AI", "max_stars_repo_head_hexsha": "d151b35d2c05b7ca12653abca4f73cf438399b0f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CellCulturePy/Panel/maf.py", "max_issues_repo_name": "Douwe-Spaanderman/Broad_DJ_AI", "max_issues_repo_head_hexsha": "d151b35d2c05b7ca12653abca4f73cf438399b0f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CellCulturePy/Panel/maf.py", "max_forks_repo_name": "Douwe-Spaanderman/Broad_DJ_AI", "max_forks_repo_head_hexsha": "d151b35d2c05b7ca12653abca4f73cf438399b0f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-14T20:07:15.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-14T20:07:15.000Z", "avg_line_length": 36.1620689655, "max_line_length": 253, "alphanum_fraction": 0.6038905311, "include": true, "reason": "import numpy", "num_tokens": 2547}
|
#!/usr/local/sci/bin/python2.7
#*****************************
#
# general Python gridding script
#
#
#************************************************************************
'''
Author: Robert Dunn
Created: March 2016
Last update: 12 April 2016
Location: /project/hadobs2/hadisdh/marine/PROGS/Build
-----------------------
CODE PURPOSE AND OUTPUT
-----------------------
Converts raw ASCII hourly observations extracted from IMMA ICOADS format to 3 hrly 1x1 grids. Then further consolidation to:
1x1 daily
# 1x1 monthly - no longer goes through this step
5x5 monthly
and also 5x5 monthly calculated directly from the 1x1 daily data.
These latter grids are available for all, day- and night-time periods (any mixtures are assigned to daytime) and also using strict and relaxed completeness criteria.
This can work with raw, QC only, bias corrected, bias corrected height only, bias corrected instrument only, ship only, and each individual uncertainty field.
Gridded uncertainties will account for correlation over the gridbox in the individual quantities only. This will not be possible for uncTOT. Total uncertainty (with correlation)
will have to be calculated by combining all gridded individual uncertainties.
-----------------------
LIST OF MODULES
-----------------------
utils.py
set_paths_and_vars.py - set file paths and some universal variables.
plot_qc_diagnostics.py - to output plots of clean obs vs all
MDS_RWtools.py - for the file format
-----------------------
DATA
-----------------------
Input data stored in:
/project/hadobs2/hadisdh/marine/ICOADS.3.0.0/
Exact folder set by "OUTROOT" - as depends on bias correction.
-----------------------
HOW TO RUN THE CODE
-----------------------
# for all data
python2.7 gridding_cam.py --suffix relax --period day --start_year YYYY --end_year YYYY --start_month MM --end_month MM
# for QC data only
python2.7 gridding_cam.py --suffix relax --period day --start_year YYYY --end_year YYYY --start_month MM --end_month MM --doQC (--ShipOnly)
# for BC data only
python2.7 gridding_cam.py --suffix relax --period day --start_year YYYY --end_year YYYY --start_month MM --end_month MM --doBCtotal (--doBCscn, --doBChgt, --doNOWHOLE) (--ShipOnly)
# for uncertainty data only
python2.7 gridding_cam.py --suffix relax --period day --start_year YYYY --end_year YYYY --start_month MM --end_month MM --doBCtotal --doUSLR (--doUSCN, --doUHGT, --doUR, --doUM, --doUC, --doUTOT) (--ShipOnly)
python2.7 gridding_cam.py --help
will show all options
-----------------------
OUTPUT
-----------------------
# *** KATE ADDED:
First iteration reads in ERAclimNBC, does NOT include buddy QC flag, and outputs to GRIDSERAclimNBC:
/project/hadobs2/hadisdh/marine/ICOADS.2.5.1/GRIDSERAclimNBC/
Second iteration reads in OBSclim1NBC, does NOT include buddy QC flag, and outputs to GRIDSOBSclim1NBC:
/project/hadobs2/hadisdh/marine/ICOADS.2.5.1/GRIDSOBSclim1NBC/
Third iteration reads in OBSclim2NBC, DOES include buddy QC flag, and outputs to GRIDSOBSclim2NBC:
/project/hadobs2/hadisdh/marine/ICOADS.2.5.1/GRIDSOBSclim2NBC/
We then grid the bias corrected version of OBSclim2NBC so reads in from OBSclim2BC and outputs to GRIDSOBSclim2BC:
/project/hadobs2/hadisdh/marine/ICOADS.2.5.1/GRIDSOBSclim2BC/
THIS NOW INCLUDES OBS UNCERTAINTY!!!
We also grid up a noQC version for comparison but one that uses the OBSclim2 base data (obs based climatology)
so reads in from OBSclim2NBC and outputs to GRIDSOBSclim2noQC:
/project/hadobs2/hadisdh/marine/ICOADS.2.5.1/GRIDSOBSclim2noQC/
and the plots:
First iteration reads in ERAclimNBC, does NOT include buddy QC flag, and outputs to GRIDSERAclimNBC:
/project/hadobs2/hadisdh/marine/PLOTSERAclimNBC/
Second iteration reads in OBSclim1NBC, does NOT include buddy QC flag, and outputs to GRIDSOBSclim1NBC:
/project/hadobs2/hadisdh/marine/PLOTSOBSclim1NBC/
Third iteration reads in OBSclim2NBC, DOES include buddy QC flag, and outputs to GRIDSOBSclim2NBC:
/project/hadobs2/hadisdh/marine/PLOTSOBSclim2NBC/
We then grid the bias corrected version of OBSclim2NBC so reads in from OBSclim2BC and outputs to GRIDSOBSclim2BC:
/project/hadobs2/hadisdh/marine/PLOTSOBSclim2BC/
We also grid up a noQC version for comparison but one that uses the OBSclim2 base data (obs based climatology)
so reads in from OBSclim2NBC and outputs to GRIDSOBSclim2noQC:
/project/hadobs2/hadisdh/marine/PLOTSOBSclim2noQC/
#*** end
/project/hadobs2/hadisdh/marine/ICOADS.2.5.1/GRIDS2/
Plots to appear in
/project/hadobs2/hadisdh/marine/ICOADS.2.5.1/PLOTS2/
# And everything linked to uncertainty (# UNC NEW)
-----------------------
VERSION/RELEASE NOTES
-----------------------
Version 4 (7 May 2020) Kate Willett
---------
Enhancements
This now does a NOWHOLE version which grids the BCtotal ShipOnly (optional) values that have not been given a whole number flag
Changes
Bug fixes
Version 3 (24 Sep 2018) Kate Willett
---------
Enhancements
This now reads in obs uncertainty for the BC versions and propagates these through the gridding.
This has to be done individually for each source though because it takes up too much memory.
Changes
Bug fixes
Version 2 (26 Sep 2016) Kate Willett
---------
Enhancements
This can now cope with three different types of QC in addition to existing:
doQC1it, doQC2it and doQC3it - for working with ERA, then OBS clims versions
It can also work with:
the full BC version - now doBCtotal,
the height correction only - now doBChgt
the screen correction only - now doBCscn
I have also set this up to work with ship only data which required changes to utils.py and set_paths_and_vars.py
Look for # KATE modified
...
# end
Changes
STREAMLINED OUTPUTS
I have commented out the monthly 1x1s and monthly 5x5s from monthly 1x1s as these are no longer needed
MEAN OVER MEDIAN
I have hard coded in the MEAN for creating daily 1x1s and monthly 5x5s because I think this is more sensible
Look for # KATE MEDIAN WATCH
SWITCH OFF UNESSENTIAL OUTPUTS
I have added an internally operated switch - SwitchOutput - 1 = output all interim grids, 0 = only output 5x5 monthlies
Bug fixes
Version 1 (release date)
---------
Enhancements
Changes
Bug fixes
-----------------------
OTHER INFORMATION
-----------------------
'''
import os
import datetime as dt
import numpy as np
import sys
import matplotlib
matplotlib.use('Agg')
import calendar
import gc
import copy
import utils
import plot_qc_diagnostics
import MDS_RWtools as mds
import set_paths_and_vars
defaults = set_paths_and_vars.set()
# Kate MODIFIED
import pdb
# end
# KW #
# Use of median vs mean #
# Essentially we're using the average as a way of smoothing in time and space so ideally it would have influence from all viable values
# within that time/space period.
# The median might be better when we're first using the raw obs to create the 1x1 3 hrlies because we know that there may be some shockers in there.
# There is NO expectation that the values would be very similar or very different (not necessarily normally distributed)
# After that, we're averaging already smoothed values but missing data may make our resulting average skewed.
# There IS an expectation that the values would be quite different across the diurnal cycle (quite possibly normally distributed)
# For dailies we could set up specific averaging routines depending on the sampling pattern
# e.g.,
# All 8 3hrly 1x1s present = mean(0,3,6,9,12,15,18,21)
# 6 to 7 3hrly 1x1s present = interpolate between missing vals (if 3 to 18hrs missing), repeat 0=3 or 21=18 (if 0 or 21 hrs missing), mean(0,3,6,9,12,15,18,21)
# 5 or fewer 3hrly 1x1s present = mean(mean(0 to 9hrs),mean(12 to 21hrs)) or just mean(0 to 9hrs) or mean(12 to 21hrs) if either one of those results in 0/missing.
# a median of 5 values might give you 3 cool values and 2 warm, the 'average' would then be the cool value with no influence from the warmer daytime value (or vice versa)
# For pentad or monthlies I think the median or mean would be ok - and median might be safer.
# There is NO expectation that the values would be very similar or very different (not necessarily normally distributed)
# For monthly 5x5s I think we should use the mean to make sure the influence of sparse obs is included.
# There IS an expectation that the values could be quite different across a 500km2 area and 1 month (quite possibly, but not necessarily normally distributed)
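# The sampling-pattern idea above, sketched as a small (uncalled) illustrative
# routine. This is one assumed reading of the rules listed - not part of the
# operational code path below:
def _example_daily_average(values_3hrly, mdi):
    # values_3hrly: the eight 3-hrly values for one gridbox-day, missing entries set to mdi
    vals = np.ma.masked_equal(np.asarray(values_3hrly, dtype=float), mdi)
    n_present = vals.count()
    if n_present == 8:
        # all slots present: plain mean
        return vals.mean()
    elif n_present >= 6:
        # interpolate interior gaps; np.interp repeats the end values for edge gaps
        idx = np.arange(8)
        good = ~np.ma.getmaskarray(vals)
        return np.interp(idx, idx[good], vals.compressed()).mean()
    else:
        # mean of the two half-day means, or whichever half-day has data
        halves = [h.mean() for h in (vals[:4], vals[4:]) if h.count() > 0]
        return np.ma.mean(halves) if halves else mdi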
# what size grid (lat/lon/hour)
DELTA_LAT = 1
DELTA_LON = 1
DELTA_HOUR = 3
# set up the grid
grid_lats = np.arange(-90 + DELTA_LAT, 90 + DELTA_LAT, DELTA_LAT)
grid_lons = np.arange(-180 + DELTA_LON, 180 + DELTA_LON, DELTA_LON)
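# (grid_lats thus runs -89..90 and grid_lons -179..180: each value labels the
#  top/right edge of its gridbox - see the indexing note further down)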
# KATE modified
# Make this 1 if you want to run in test mode - outputting all interim files and plots
# Make this 0 if you want to run in operational mode - only output 5x5 monthly files and plots
SwitchOutput = 0
# end
#************************************************************************
# KATE modified
def do_gridding(suffix = "relax", start_year = defaults.START_YEAR, end_year = defaults.END_YEAR, start_month = 1, end_month = 12,
doQC = False, doQC1it = False, doQC2it = False, doQC3it = False, doSST_SLP = False,
doBC = False, doBCtotal = False, doBChgt = False, doBCscn = False, doNOWHOLE = False,
doUSLR = False, doUSCN = False, doUHGT = False, doUR = False, doUM = False, doUC = False, doUTOT = False,
ShipOnly = False):
#def do_gridding(suffix = "relax", start_year = defaults.START_YEAR, end_year = defaults.END_YEAR, start_month = 1, end_month = 12, doQC = False, doSST_SLP = False, doBC = False, doUncert = False):
# end
'''
Do the gridding, first to 3hrly 1x1, then to daily 1x1 and finally monthly 5x5
:param str suffix: "relax" or "strict" criteria
:param int start_year: start year to process
:param int end_year: end year to process
:param int start_month: start month to process
:param int end_month: end month to process
:param bool doQC: incorporate the QC flags or not
# KATE modified
:param bool doQC1it: incorporate the first iteration (no buddy) QC flags or not
:param bool doQC2it: incorporate the second iteration (no buddy) QC flags or not
:param bool doQC3it: incorporate the third iteration (buddy) QC flags or not
# end
:param bool doSST_SLP: process additional variables or not
:param bool doBC: work on the bias corrected data
# KATE modified
:param bool doBCtotal: work on the full bias corrected data and maybe uncertainty
:param bool doBChgt: work on the height only bias corrected data
:param bool doBCscn: work on the screen only bias corrected data
# end
:param bool doNOWHOLE: work on the bias corrected data that doesn't have any whole number flags set
# UNC NEW
:param bool doUSLR: work on BC and solar adj uncertainty with correlation
:param bool doUSCN: work on BC and instrument adj uncertainty with correlation
:param bool doUHGT: work on BC and height adj uncertainty with correlation
:param bool doUR: work on BC and rounding uncertainty with no correlation
:param bool doUM: work on BC and measurement uncertainty with no correlation
:param bool doUC: work on BC and climatological uncertainty with no correlation
:param bool doUTOT: work on BC and total uncertainty with no correlation
# KATE modified
:param bool ShipOnly: work on the ship platform type only data
# end
:returns:
'''
# KATE modified
settings = set_paths_and_vars.set(doBC = doBC, doBCtotal = doBCtotal, doBChgt = doBChgt, doBCscn = doBCscn, doNOWHOLE = doNOWHOLE, doQC = doQC, doQC1it = doQC1it, doQC2it = doQC2it, doQC3it = doQC3it,
doUSLR = doUSLR, doUSCN = doUSCN, doUHGT = doUHGT, doUR = doUR, doUM = doUM, doUC = doUC, doUTOT = doUTOT, ShipOnly = ShipOnly)
#settings = set_paths_and_vars.set(doBC = doBC, doQC = doQC)
# end
# KATE modified - added other BC options
# if doBC:
if doBC | doBCtotal | doBChgt | doBCscn | doNOWHOLE:
# end
fields = mds.TheDelimitersExt # extended (BC)
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT:
uncfields = mds.TheDelimitersUnc # uncertainty fields (BC)
else:
fields = mds.TheDelimitersStd # Standard
# KATE modified - added other BC options
# OBS_ORDER = utils.make_MetVars(settings.mdi, doSST_SLP = doSST_SLP, multiplier = True, doBC = doBC) # ensure that convert from raw format at writing stage with multiplier
OBS_ORDER = utils.make_MetVars(settings.mdi, doSST_SLP = doSST_SLP, multiplier = True, doBC = doBC, doBCtotal = doBCtotal, doBChgt = doBChgt, doBCscn = doBCscn, doNOWHOLE = doNOWHOLE) # ensure that convert from raw format at writing stage with multiplier
# end
# KW switching between 4 ('_strict') for climatology build and 2 for anomaly build ('_relax') - added subscripts to files
if suffix == "relax":
N_OBS_DAY = 2 # KW ok for anomalies but this was meant to be 4 for dailies_all? and 2 for dailies_night/day?
N_OBS_FRAC_MONTH = 0.3
elif suffix == "strict":
N_OBS_DAY = 4
N_OBS_FRAC_MONTH = 0.3
# flags to check on and values to allow through
# KATE modified
if doQC1it | doQC2it:
these_flags = {"ATclim":0,"ATrep":0,"DPTclim":0,"DPTssat":0,"DPTrep":0,"DPTrepsat":0}
elif doNOWHOLE: # this should now pull through only those without rounding / whole number flags set
these_flags = {"ATbud":0, "ATclim":0,"ATround":0,"ATrep":0,"DPTbud":0,"DPTclim":0,"DPTround":0,"DPTssat":0,"DPTrep":0,"DPTrepsat":0}
else:
these_flags = {"ATbud":0, "ATclim":0,"ATrep":0,"DPTbud":0,"DPTclim":0,"DPTssat":0,"DPTrep":0,"DPTrepsat":0}
#these_flags = {"ATbud":0, "ATclim":0,"ATrep":0,"DPTbud":0,"DPTclim":0,"DPTssat":0,"DPTrep":0,"DPTrepsat":0}
# end
# spin through years and months to read files
for year in np.arange(start_year, end_year + 1):
for month in np.arange(start_month, end_month + 1):
times = utils.TimeVar("time", "time since 1/{}/{} in hours".format(month, year), "hours", "time")
grid_hours = np.arange(0, 24 * calendar.monthrange(year, month)[1], DELTA_HOUR)
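# e.g. for a 31-day month: np.arange(0, 744, 3) -> 0, 3, ..., 741 (248 3-hrly slots)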
times.data = grid_hours
# process the monthly file
# KATE modified - added other BC options
# if doBC:
if doBC | doBCtotal | doBChgt | doBCscn | doNOWHOLE:
# end
filename = "new_suite_{}{:02d}_{}_extended.txt".format(year, month, settings.OUTROOT)
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT:
uncfilename = "new_suite_{}{:02d}_{}_uncertainty.txt".format(year, month, settings.OUTROOT)
else:
filename = "new_suite_{}{:02d}_{}.txt".format(year, month, settings.OUTROOT)
# pdb.set_trace()
# KATE modified - added other BC options
# raw_platform_data, raw_obs, raw_meta, raw_qc = utils.read_qc_data(filename, settings.ICOADS_LOCATION, fields, doBC = doBC)
raw_platform_data, raw_obs, raw_meta, raw_qc = utils.read_qc_data(filename, settings.ICOADS_LOCATION, fields, doBC = doBC,
doBCtotal = doBCtotal, doBChgt = doBChgt, doBCscn = doBCscn, doNOWHOLE = doNOWHOLE, ShipOnly = ShipOnly)
# end
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: # Read in the uncertainty info but only if we're doing a full BC run
unc_data = utils.read_unc_data(uncfilename, settings.ICOADS_LOCATION, uncfields,
doUSLR = doUSLR, doUSCN = doUSCN, doUHGT = doUHGT, doUR = doUR, doUM = doUM, doUC = doUC, doUTOT = doUTOT,
ShipOnly = ShipOnly)
# extract observation details
lats, lons, years, months, days, hours = utils.process_platform_obs(raw_platform_data)
# test dates *KW - SHOULDN'T NEED THIS - ONLY OBS PASSING DATE CHECK ARE INCLUDED*
# *RD* - hasn't run yet but will leave it in just in case of future use.
if not utils.check_date(years, year, "years", filename):
sys.exit(1)
if not utils.check_date(months, month, "months", filename):
sys.exit(1)
# KATE modified - seems to be an error with missing global name plots so have changed to settings.plots
# Choose this one to only output once per decade
#if settings.plots and (year in [1973, 1983, 1993, 2003, 2013]):
# Choose this one to output a plot for each month
if settings.plots:
#if plots and (year in [1973, 1983, 1993, 2003, 2013]):
# end
# plot the distribution of hours
import matplotlib.pyplot as plt
plt.clf()
plt.hist(hours, np.arange(-100,2500,100))
plt.ylabel("Number of observations")
plt.xlabel("Hours")
plt.xticks(np.arange(-300, 2700, 300))
plt.savefig(settings.PLOT_LOCATION + "obs_distribution_{}{:02d}_{}.png".format(year, month, suffix))
# only for a few of the variables
for variable in OBS_ORDER:
if variable.name in ["marine_air_temperature", "dew_point_temperature", "specific_humidity", "relative_humidity", "marine_air_temperature_anomalies", "dew_point_temperature_anomalies", "specific_humidity_anomalies", "relative_humidity_anomalies"]:
#plot_qc_diagnostics.values_vs_lat(variable, lats, raw_obs[:, variable.column], raw_qc, these_flags, settings.PLOT_LOCATION + "qc_actuals_{}_{}{:02d}_{}.png".format(variable.name, year, month, suffix), multiplier = variable.multiplier, doBC = doBC)
plot_qc_diagnostics.values_vs_lat_dist(variable, lats, raw_obs[:, variable.column], raw_qc, these_flags, \
settings.PLOT_LOCATION + "qc_actuals_{}_{}{:02d}_{}.png".format(variable.name, year, month, suffix), multiplier = variable.multiplier, \
# KATE modified - added other BC options
doBC = doBC, doBCtotal = doBCtotal, doBChgt = doBChgt, doBCscn = doBCscn, doNOWHOLE = doNOWHOLE)
# end
# QC sub-selection
# KATE modified - added QC iterations but also think this needs to include the bias corrected versions because the QC flags need to be applied to those too.
# Not sure what was happening previously with the doBC run - any masking to QC'd obs?
if doQC | doQC1it | doQC2it | doQC3it | doBC | doBCtotal | doBChgt | doBCscn | doNOWHOLE:
#if doQC:
# end
print "Using {} as flags".format(these_flags)
# KATE modified - BC options
# mask = utils.process_qc_flags(raw_qc, these_flags, doBC = doBC)
mask = utils.process_qc_flags(raw_qc, these_flags, doBC = doBC, doBCtotal = doBCtotal, doBChgt = doBChgt, doBCscn = doBCscn, doNOWHOLE = doNOWHOLE)
# end
print "All Obs: ",len(mask)
print "Good Obs: ",len(mask[np.where(mask == 0)])
print "Bad Obs: ",len(mask[np.where(mask == 1)])
#pdb.set_trace()
complete_mask = np.zeros(raw_obs.shape)
for i in range(raw_obs.shape[1]):
complete_mask[:,i] = mask
clean_data = np.ma.masked_array(raw_obs, mask = complete_mask)
del raw_obs
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: #
unc_complete_mask = np.zeros(unc_data.shape)
for i in range(unc_data.shape[1]):
unc_complete_mask[:,i] = mask
unc_clean_data = np.ma.masked_array(unc_data, mask = unc_complete_mask)
del unc_data
gc.collect()
# end
else:
print "No QC flags selected"
clean_data = np.ma.masked_array(raw_obs, mask = np.zeros(raw_obs.shape))
del raw_obs
gc.collect()
# discretise hours
hours = utils.make_index(hours, DELTA_HOUR, multiplier = 100)
# get the hours since start of month
hours_since = ((days - 1) * 24) + (hours * DELTA_HOUR)
# discretise lats/lons
lat_index = utils.make_index(lats, DELTA_LAT, multiplier = 100)
lon_index = utils.make_index(lons, DELTA_LON, multiplier = 100)
lat_index += ((len(grid_lats)-1)/2) # and as -ve indices are unhelpful, roll by offsetting by most westward
lon_index += ((len(grid_lons)-1)/2) # or most southerly so that (0,0) is (-90,-180)
# NOTE - ALWAYS GIVING TOP-RIGHT OF BOX TO GIVE < HARD LIMIT (as opposed to <=)
# do the gridding
# extract the full grid, number of obs, and day/night flag
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: #
raw_month_grid, unc_grid, raw_month_n_obs, this_month_period = utils.grid_1by1_cam_unc(clean_data, unc_clean_data, \
raw_qc, hours_since, lat_index, lon_index, grid_hours, grid_lats, grid_lons, OBS_ORDER, settings.mdi, doMedian = settings.doMedian, \
doBC = doBC, doBCtotal = doBCtotal, doBChgt = doBChgt, doBCscn = doBCscn, \
doUSLR = doUSLR, doUSCN = doUSCN, doUHGT = doUHGT, doUR = doUR, doUM = doUM, doUC = doUC, doUTOT = doUTOT)
del clean_data
del unc_clean_data
gc.collect()
else:
# KATE MEDIAN WATCH This is hard coded to doMedian (rather than settings.doMedian) - OK WITH MEDIAN HERE!!!
# KATE modified - to add settings.doMedian instead of just doMedian which seems to be consistent with the other bits and BC options
raw_month_grid, raw_month_n_obs, this_month_period = utils.grid_1by1_cam(clean_data, raw_qc, hours_since, lat_index, lon_index, \
grid_hours, grid_lats, grid_lons, OBS_ORDER, settings.mdi, doMedian = settings.doMedian, \
doBC = doBC, doBCtotal = doBCtotal, doBChgt = doBChgt, doBCscn = doBCscn, doNOWHOLE = doNOWHOLE)
#raw_month_grid, raw_month_n_obs, this_month_period = utils.grid_1by1_cam(clean_data, raw_qc, hours_since, lat_index, lon_index, grid_hours, grid_lats, grid_lons, OBS_ORDER, settings.mdi, doMedian = True, doBC = doBC)
# end
del clean_data
gc.collect()
print "successfully read data into 1x1 3hrly grids"
# create matching array size
this_month_period = np.tile(this_month_period, (len(OBS_ORDER),1,1,1))
for period in ["all", "day", "night"]:
if period == "day":
this_month_grid = np.ma.masked_where(this_month_period == 1, raw_month_grid)
this_month_obs = np.ma.masked_where(this_month_period[0] == 1, raw_month_n_obs) # and take first slice to re-match the array size
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: #
unc_this_month_grid = np.ma.masked_where(this_month_period == 1, unc_grid)
elif period == "night":
this_month_grid = np.ma.masked_where(this_month_period == 0, raw_month_grid)
this_month_obs = np.ma.masked_where(this_month_period[0] == 0, raw_month_n_obs) # and take first slice to re-match the array size
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: #
unc_this_month_grid = np.ma.masked_where(this_month_period == 0, unc_grid)
else:
this_month_grid = copy.deepcopy(raw_month_grid)
this_month_obs = copy.deepcopy(raw_month_n_obs)
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: #
unc_this_month_grid = copy.deepcopy(unc_grid)
print('Set up period: ',period)
# KATE modified
# If SwitchOutput == 1 then we're in test mode - output interim files!!!
if (SwitchOutput == 1):
# have one month of gridded data.
out_filename = settings.DATA_LOCATION + settings.OUTROOT + "_1x1_3hr_{}{:02d}_{}_{}.nc".format(year, month, period, suffix)
utils.netcdf_write(out_filename, this_month_grid, np.zeros(this_month_obs.shape), this_month_obs, OBS_ORDER, grid_lats, grid_lons, times, frequency = "H")
## have one month of gridded data.
#out_filename = settings.DATA_LOCATION + settings.OUTROOT + "_1x1_3hr_{}{:02d}_{}_{}.nc".format(year, month, period, suffix)
#utils.netcdf_write(out_filename, this_month_grid, np.zeros(this_month_obs.shape), this_month_obs, OBS_ORDER, grid_lats, grid_lons, times, frequency = "H")
# end
# now average over time
# Dailies
daily_hours = grid_hours.reshape(-1, 24/DELTA_HOUR)
shape = this_month_grid.shape
this_month_grid = this_month_grid.reshape(shape[0], -1, 24/DELTA_HOUR, shape[2], shape[3])
this_month_obs = this_month_obs.reshape(-1, 24/DELTA_HOUR, shape[2], shape[3])
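# i.e. the time axis (e.g. 248 3-hrly slots for a 31-day month) is split into
# (days, 24/DELTA_HOUR) = (e.g. 31, 8), so the 8-slot axis can be collapsed per
# day below (axis 2 of the data grids, axis 1 of the obs counts)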
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: #
unc_this_month_grid = unc_this_month_grid.reshape(shape[0], -1, 24/DELTA_HOUR, shape[2], shape[3])
print('Reshaped daily grids')
# KATE MEDIAN WATCH - settings.doMedian is generally set to True - I think we may want the MEAN HERE!!!
# KATE modified - to hard wire in MEAN here
daily_grid = np.ma.mean(this_month_grid, axis = 2)
#if settings.doMedian:
# daily_grid = np.ma.median(this_month_grid, axis = 2)
#else:
# daily_grid = np.ma.mean(this_month_grid, axis = 2)
# end
daily_grid.fill_value = settings.mdi
# filter on number of observations/day
n_hrs_per_day = np.ma.count(this_month_grid, axis = 2)
n_obs_per_day = np.ma.sum(this_month_obs, axis = 1)
# UNC NEW
# PROPAGATE UNCERTAINTY IN THE MEAN np.sqrt(np.sum(np.power(arr,2.))) np.sqrt(np.ma.sum(np.ma.power(uncTOT_clean_data[locs, :][:, cols],2.), axis = 0))
# Use n_hrs_per_day because we're combining the already propagated uncertainties at the 1by1 3hr level, not from all individual obs
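# In formulas, for the N = n_hrs_per_day slot uncertainties u_i in a day:
#   correlated (r=1):   u_day = (sum_i u_i) / N   (the sqrt of the squared sum below is just |sum_i u_i|)
#   uncorrelated (r=0): u_day = sqrt(sum_i u_i^2) / N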
# if it's correlated (r=1) do it like this
if doUSLR | doUSCN | doUHGT | doUC: #
# John K thinks it should be divided by N, not SQRT(N)
# unc_daily_grid = np.sqrt(np.ma.power(np.ma.sum(unc_this_month_grid, axis = 2),2.)) / np.sqrt(n_hrs_per_day)
unc_daily_grid = np.sqrt(np.ma.power(np.ma.sum(unc_this_month_grid, axis = 2),2.)) / n_hrs_per_day
unc_daily_grid.fill_value = settings.mdi
# if it's NOT correlated (r=0) do it like this
if doUR | doUM | doUTOT: #
# John K thinks it should be divided by N, not SQRT(N)
# unc_daily_grid = np.sqrt(np.ma.sum(np.ma.power(unc_this_month_grid, 2.), axis = 2)) / np.sqrt(n_hrs_per_day)
unc_daily_grid = np.sqrt(np.ma.sum(np.ma.power(unc_this_month_grid, 2.), axis = 2)) / n_hrs_per_day
unc_daily_grid.fill_value = settings.mdi
print('Built daily grids')
if period == "all":
bad_locs = np.where(n_hrs_per_day < N_OBS_DAY) # at least 2 of possible 8 3-hourly values (6hrly data *KW OR AT LEAST 4 3HRLY OBS PRESENT*)
else:
bad_locs = np.where(n_hrs_per_day < np.floor(N_OBS_DAY / 2.)) # at least 1 of possible 8 3-hourly values (6hrly data *KW OR AT LEAST 4 3HRLY OBS PRESENT*)
daily_grid.mask[bad_locs] = True
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: #
unc_daily_grid.mask[bad_locs] = True
print('Masked daily grids where few obs')
# KATE modified - added SwitchOutput to if loop
if (SwitchOutput == 1) and settings.plots and (year in [1973, 1983, 1993, 2003, 2013]):
#if settings.plots and (year in [1973, 1983, 1993, 2003, 2013]):
# end
# plot the distribution of hours
plt.clf()
plt.hist(n_hrs_per_day.reshape(-1), bins = np.arange(-1,10), align = "left", log = True, rwidth=0.5)
if period == "all":
plt.axvline(x = N_OBS_DAY-0.5, color = "r")
else:
plt.axvline(x = np.floor(N_OBS_DAY / 2.)-0.5, color = "r")
plt.title("Number of 1x1-3hrly in each 1x1-daily grid box")
plt.xlabel("Number of 3-hrly observations (max = 8)")
plt.ylabel("Frequency (log scale)")
plt.savefig(settings.PLOT_LOCATION + "n_grids_1x1_daily_{}{:02d}_{}_{}.png".format(year, month, period, suffix))
plt.clf()
plt.hist(n_obs_per_day.reshape(-1), bins = np.arange(-5,100,5), log = True, rwidth=0.5)
plt.title("Total number of raw observations in each 1x1 daily grid box")
plt.xlabel("Number of raw observations")
plt.ylabel("Frequency (log scale)")
plt.savefig(settings.PLOT_LOCATION + "n_obs_1x1_daily_{}{:02d}_{}_{}.png".format(year, month, period, suffix))
# clear up memory
del this_month_grid
del this_month_obs
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: #
del unc_this_month_grid
gc.collect()
# KATE modified
# If SwitchOutput == 1 then we're in test mode - output interim files!!!
if (SwitchOutput == 1):
# write dailies file
times.data = daily_hours[:,0]
out_filename = settings.DATA_LOCATION + settings.OUTROOT + "_1x1_daily_{}{:02d}_{}_{}.nc".format(year, month, period, suffix)
utils.netcdf_write(out_filename, daily_grid, n_hrs_per_day[0], n_obs_per_day, OBS_ORDER, grid_lats, grid_lons, times, frequency = "D")
#times.data = daily_hours[:,0]
#out_filename = settings.DATA_LOCATION + settings.OUTROOT + "_1x1_daily_{}{:02d}_{}_{}.nc".format(year, month, period, suffix)
#utils.netcdf_write(out_filename, daily_grid, n_hrs_per_day[0], n_obs_per_day, OBS_ORDER, grid_lats, grid_lons, times, frequency = "D")
# end
# Monthlies
times.data = daily_hours[0,0]
# KATE modified - commenting out as we don't need this anymore
# if settings.doMedian:
# monthly_grid = np.ma.median(daily_grid, axis = 1)
# else:
# monthly_grid = np.ma.mean(daily_grid, axis = 1)
#
# monthly_grid.fill_value = settings.mdi
#
# # filter on number of observations/month
# n_grids_per_month = np.ma.count(daily_grid, axis = 1)
# bad_locs = np.where(n_grids_per_month < calendar.monthrange(year, month)[1] * N_OBS_FRAC_MONTH) # 30% of possible daily values
# monthly_grid.mask[bad_locs] = True
#
# # number of raw observations
# n_obs_per_month = np.ma.sum(n_obs_per_day, axis = 0)
#
# if settings.plots and (year in [1973, 1983, 1993, 2003, 2013]):
# # plot the distribution of days
#
# plt.clf()
# plt.hist(n_obs_per_month.reshape(-1), bins = np.arange(-10,500,10), log = True, rwidth=0.5)
# plt.title("Total number of raw observations in each 1x1 monthly grid box")
# plt.xlabel("Number of raw observations")
# plt.ylabel("Frequency (log scale)")
# plt.savefig(settings.PLOT_LOCATION + "n_obs_1x1_monthly_{}{:02d}_{}_{}.png".format(year, month, period, suffix))
#
# plt.clf()
# plt.hist(n_grids_per_month[0].reshape(-1), bins = np.arange(-2,40,2), align = "left", log = True, rwidth=0.5)
# plt.axvline(x = calendar.monthrange(year, month)[1] * N_OBS_FRAC_MONTH, color="r")
# plt.title("Total number of 1x1 daily grids in each 1x1 monthly grid")
# plt.xlabel("Number of 1x1 daily grids")
# plt.ylabel("Frequency (log scale)")
# plt.savefig(settings.PLOT_LOCATION + "n_grids_1x1_monthly_{}{:02d}_{}_{}.png".format(year, month, period, suffix))
#
# # write monthly 1x1 file
# out_filename = settings.DATA_LOCATION + settings.OUTROOT + "_1x1_monthly_{}{:02d}_{}_{}.nc".format(year, month, period, suffix)
# utils.netcdf_write(out_filename, monthly_grid, n_grids_per_month[0], n_obs_per_month, OBS_ORDER, grid_lats, grid_lons, times, frequency = "M")
#
# # now to re-grid to coarser resolution
# # KW # Here we may want to use the mean because its a large area but could be sparsely
# # populated with quite different climatologies so we want
# # the influence of the outliers (we've done our best to ensure these are good values)
#
# # go from monthly 1x1 to monthly 5x5 - retained as limited overhead
# monthly_5by5, monthly_5by5_n_grids, monthly_5by5_n_obs, grid5_lats, grid5_lons = utils.grid_5by5(monthly_grid, n_obs_per_month, grid_lats, grid_lons, doMedian = settings.doMedian, daily = False)
# out_filename = settings.DATA_LOCATION + settings.OUTROOT + "_5x5_monthly_{}{:02d}_{}_{}.nc".format(year, month, period, suffix)
#
# utils.netcdf_write(out_filename, monthly_5by5, monthly_5by5_n_grids, monthly_5by5_n_obs, OBS_ORDER, grid5_lats, grid5_lons, times, frequency = "M")
#
# if settings.plots and (year in [1973, 1983, 1993, 2003, 2013]):
# # plot the distribution of days
#
# plt.clf()
# plt.hist(monthly_5by5_n_obs.reshape(-1), bins = np.arange(0,100,5), log = True, rwidth=0.5)
# plt.title("Total number of raw observations in each 5x5 monthly grid box")
# plt.xlabel("Number of raw observations")
# plt.ylabel("Frequency (log scale)")
# plt.savefig(settings.PLOT_LOCATION + "n_obs_5x5_monthly_{}{:02d}_{}_{}.png".format(year, month, period, suffix))
#
# plt.clf()
# plt.hist(monthly_5by5_n_grids.reshape(-1), bins = np.arange(-2,30,2), align = "left", log = True, rwidth=0.5)
# plt.axvline(x = 1, color="r")
# plt.title("Total number of 1x1 monthly grids in each 5x5 monthly grid")
# plt.xlabel("Number of 1x1 monthly grids")
# plt.ylabel("Frequency (log scale)")
# plt.savefig(settings.PLOT_LOCATION + "n_grids_5x5_monthly_{}{:02d}_{}_{}.png".format(year, month, period, suffix))
#
# # clear up memory
# del monthly_grid
# del monthly_5by5
# del monthly_5by5_n_grids
# del monthly_5by5_n_obs
# del n_grids_per_month
# del n_obs_per_month
# del n_hrs_per_day
# gc.collect()
# end
# go direct from daily 1x1 to monthly 5x5
# KATE MEDIAN WATCH - settings.doMedian is generally set to True - I think we may want the MEAN HERE!!!
# KATE modified - to hard wire in MEAN here
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT: #
monthly_5by5, unc_monthly_5by5, monthly_5by5_n_grids, monthly_5by5_n_obs, grid5_lats, grid5_lons = utils.grid_5by5_unc(daily_grid, unc_daily_grid, n_obs_per_day, grid_lats, grid_lons, doMedian = False, daily = True,
doUSLR = doUSLR, doUSCN = doUSCN, doUHGT = doUHGT, doUR = doUR, doUM = doUM, doUC = doUC, doUTOT = doUTOT)
else:
monthly_5by5, monthly_5by5_n_grids, monthly_5by5_n_obs, grid5_lats, grid5_lons = utils.grid_5by5(daily_grid, n_obs_per_day, grid_lats, grid_lons, doMedian = False, daily = True)
#monthly_5by5, monthly_5by5_n_grids, monthly_5by5_n_obs, grid5_lats, grid5_lons = utils.grid_5by5(daily_grid, n_obs_per_day, grid_lats, grid_lons, doMedian = settings.doMedian, daily = True)
# end
print('Done Monthly grids')
out_filename = settings.DATA_LOCATION + settings.OUTROOT + "_5x5_monthly_from_daily_{}{:02d}_{}_{}.nc".format(year, month, period, suffix)
# UNC NEW
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT:
if doUSLR:
uncS = 'uSLR'
elif doUSCN:
uncS = 'uSCN'
elif doUHGT:
uncS = 'uHGT'
elif doUR:
uncS = 'uR'
elif doUM:
uncS = 'uM'
elif doUC:
uncS = 'uC'
elif doUTOT:
uncS = 'uTOT'
out_filename = settings.DATA_LOCATION + settings.OUTROOT + "_{}_5x5_monthly_from_daily_{}{:02d}_{}_{}.nc".format(uncS, year, month, period, suffix)
utils.netcdf_write_unc(uncS, out_filename, unc_monthly_5by5, monthly_5by5_n_grids, monthly_5by5_n_obs, OBS_ORDER, grid5_lats, grid5_lons, times, frequency = "M", \
doUSLR = doUSLR, doUSCN = doUSCN, doUHGT = doUHGT, doUR = doUR, doUM = doUM, doUC = doUC, doUTOT = doUTOT)
else:
utils.netcdf_write(out_filename, monthly_5by5, monthly_5by5_n_grids, monthly_5by5_n_obs, OBS_ORDER, grid5_lats, grid5_lons, times, frequency = "M")
if settings.plots and (year in [1973, 1983, 1993, 2003, 2013]):
# plot the distribution of days
plt.clf()
plt.hist(monthly_5by5_n_obs.reshape(-1), bins = np.arange(-10,1000,10), log = True, rwidth=0.5)
plt.title("Total number of raw observations in each 5x5 monthly grid box")
plt.xlabel("Number of raw observations")
plt.ylabel("Frequency (log scale)")
plt.savefig(settings.PLOT_LOCATION + "n_obs_5x5_monthly_from_daily_{}{:02d}_{}_{}.png".format(year, month, period, suffix))
plt.clf()
plt.hist(monthly_5by5_n_grids.reshape(-1), bins = np.arange(-5,100,5), align = "left", log = True, rwidth=0.5)
plt.axvline(x = (0.3 * daily_grid.shape[0]), color="r")
plt.title("Total number of 1x1 daily grids in each 5x5 monthly grid")
plt.xlabel("Number of 1x1 daily grids")
plt.ylabel("Frequency (log scale)")
plt.savefig(settings.PLOT_LOCATION + "n_grids_5x5_monthly_from_daily_{}{:02d}_{}_{}.png".format(year, month, period, suffix))
del daily_grid
del monthly_5by5
del n_obs_per_day
del monthly_5by5_n_grids
del monthly_5by5_n_obs
# UNC NEW
# if doBCtotal:
if doUSLR | doUSCN | doUHGT | doUR | doUM | doUC | doUTOT:
del unc_daily_grid
del unc_monthly_5by5
gc.collect()
return # do_gridding
#************************************************************************
if __name__=="__main__":
import argparse
# set up keyword arguments
parser = argparse.ArgumentParser()
parser.add_argument('--suffix', dest='suffix', action='store', default = "relax",
help='"relax" or "strict" completeness, default = relax')
parser.add_argument('--start_year', dest='start_year', action='store', default = defaults.START_YEAR,
help='which year to start run, default = 1973')
parser.add_argument('--end_year', dest='end_year', action='store', default = defaults.END_YEAR,
help='which year to end run, default = present')
parser.add_argument('--start_month', dest='start_month', action='store', default = 1,
help='which month to start run, default = 1')
parser.add_argument('--end_month', dest='end_month', action='store', default = 12,
help='which month to end run, default = 12')
# CANNOT BE MORE THAN ONE OF THE BELOW:
parser.add_argument('--doQC', dest='doQC', action='store_true', default = False,
help='process the QC information, default = False')
# KATE modified
parser.add_argument('--doQC1it', dest='doQC1it', action='store_true', default = False,
help='process the first iteration QC information without buddy check, default = False')
parser.add_argument('--doQC2it', dest='doQC2it', action='store_true', default = False,
help='process the second iteration QC information without buddy check, default = False')
parser.add_argument('--doQC3it', dest='doQC3it', action='store_true', default = False,
help='process the third iteration QC information with buddy check, default = False')
# end
parser.add_argument('--doBC', dest='doBC', action='store_true', default = False,
help='process the bias corrected data, default = False')
# KATE modified
parser.add_argument('--doBCtotal', dest='doBCtotal', action='store_true', default = False,
help='process the full bias corrected data, default = False')
parser.add_argument('--doBChgt', dest='doBChgt', action='store_true', default = False,
help='process the height bias corrected data only, default = False')
parser.add_argument('--doBCscn', dest='doBCscn', action='store_true', default = False,
help='process the screen bias corrected data only, default = False')
# end
parser.add_argument('--doNOWHOLE', dest='doNOWHOLE', action='store_true', default = False,
help='process the total bias corrected data that has no whole number flag set, default = False')
# UNC NEW
# MUST SET doBCtotal for these to work:
parser.add_argument('--doUSLR', dest='doUSLR', action='store_true', default = False,
help='process the solar adjustment uncertainty only with correlation, default = False')
parser.add_argument('--doUSCN', dest='doUSCN', action='store_true', default = False,
help='process the instrument adjustment uncertainty only with correlation, default = False')
parser.add_argument('--doUHGT', dest='doUHGT', action='store_true', default = False,
help='process the height adjustment uncertainty only with correlation, default = False')
parser.add_argument('--doUR', dest='doUR', action='store_true', default = False,
help='process the rounding uncertainty only with no correlation, default = False')
parser.add_argument('--doUM', dest='doUM', action='store_true', default = False,
help='process the measurement uncertainty only with no correlation, default = False')
parser.add_argument('--doUC', dest='doUC', action='store_true', default = False,
help='process the climatological uncertainty only with correlation, default = False')
parser.add_argument('--doUTOT', dest='doUTOT', action='store_true', default = False,
help='process the total uncertainty only - no correlation possible, default = False')
# end
# KATE modified
# THIS CAN RUN WITH ANY OF THE do???? arguments:
parser.add_argument('--ShipOnly', dest='ShipOnly', action='store_true', default = False,
help='process the ship only platform type data, default = False')
# end
args = parser.parse_args()
do_gridding(suffix = str(args.suffix), start_year = int(args.start_year), end_year = int(args.end_year), \
start_month = int(args.start_month), end_month = int(args.end_month), \
# KATE modified
doQC = args.doQC, doQC1it = args.doQC1it, doQC2it = args.doQC2it, doQC3it = args.doQC3it, \
doBC = args.doBC, doBCtotal = args.doBCtotal, doBChgt = args.doBChgt, doBCscn = args.doBCscn, doNOWHOLE = args.doNOWHOLE, \
doUSLR = args.doUSLR, doUSCN = args.doUSCN, doUHGT = args.doUHGT, doUR = args.doUR, doUM = args.doUM, doUC = args.doUC, doUTOT = args.doUTOT, \
ShipOnly = args.ShipOnly)
#doQC = args.doQC, doBC = args.doBC)
# end
# END
# ************************************************************************
|
{"hexsha": "4f942cab1ceb5060ca2034a7b717095c4ea2d9e4", "size": 45215, "ext": "py", "lang": "Python", "max_stars_repo_path": "EUSTACE_SST_MAT/gridding_cam.py", "max_stars_repo_name": "Kate-Willett/HadISDH_Marine_Build", "max_stars_repo_head_hexsha": "293b4c89dc6e04e47d3f6e3645cf0f610beca2f2", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "EUSTACE_SST_MAT/gridding_cam.py", "max_issues_repo_name": "Kate-Willett/HadISDH_Marine_Build", "max_issues_repo_head_hexsha": "293b4c89dc6e04e47d3f6e3645cf0f610beca2f2", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EUSTACE_SST_MAT/gridding_cam.py", "max_forks_repo_name": "Kate-Willett/HadISDH_Marine_Build", "max_forks_repo_head_hexsha": "293b4c89dc6e04e47d3f6e3645cf0f610beca2f2", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.1941176471, "max_line_length": 272, "alphanum_fraction": 0.6315160898, "include": true, "reason": "import numpy", "num_tokens": 12025}
|
[STATEMENT]
lemma LeftDerivationFix_grow_prefix:
assumes LDF: "LeftDerivationFix (b1@[X]@b2) (length b1) D j c"
assumes prefix_b1: "LeftDerives1 prefix e r b1"
shows "LeftDerivationFix (prefix@[X]@b2) (length prefix) ((e, r)#D) j c"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. LeftDerivationFix (prefix @ [X] @ b2) (length prefix) ((e, r) # D) j c
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. LeftDerivationFix (prefix @ [X] @ b2) (length prefix) ((e, r) # D) j c
[PROOF STEP]
from LDF
[PROOF STATE]
proof (chain)
picking this:
LeftDerivationFix (b1 @ [X] @ b2) (length b1) D j c
[PROOF STEP]
have LDF': "LeftDerivation (b1 @ [X] @ b2) D c \<and>
length b1 < length (b1 @ [X] @ b2) \<and>
j < length c \<and>
(b1 @ [X] @ b2) ! length b1 = c ! j \<and>
(\<exists>E F. D = E @ derivation_shift F 0 (Suc j) \<and>
LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and>
LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c))"
[PROOF STATE]
proof (prove)
using this:
LeftDerivationFix (b1 @ [X] @ b2) (length b1) D j c
goal (1 subgoal):
1. LeftDerivation (b1 @ [X] @ b2) D c \<and> length b1 < length (b1 @ [X] @ b2) \<and> j < length c \<and> (b1 @ [X] @ b2) ! length b1 = c ! j \<and> (\<exists>E F. D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c))
[PROOF STEP]
using LeftDerivationFix_def
[PROOF STATE]
proof (prove)
using this:
LeftDerivationFix (b1 @ [X] @ b2) (length b1) D j c
LeftDerivationFix ?\<alpha> ?i ?D ?j ?\<beta> = (is_sentence ?\<alpha> \<and> is_sentence ?\<beta> \<and> LeftDerivation ?\<alpha> ?D ?\<beta> \<and> ?i < length ?\<alpha> \<and> ?j < length ?\<beta> \<and> ?\<alpha> ! ?i = ?\<beta> ! ?j \<and> (\<exists>E F. ?D = E @ derivation_shift F 0 (Suc ?j) \<and> LeftDerivation (take ?i ?\<alpha>) E (take ?j ?\<beta>) \<and> LeftDerivation (drop (Suc ?i) ?\<alpha>) F (drop (Suc ?j) ?\<beta>)))
goal (1 subgoal):
1. LeftDerivation (b1 @ [X] @ b2) D c \<and> length b1 < length (b1 @ [X] @ b2) \<and> j < length c \<and> (b1 @ [X] @ b2) ! length b1 = c ! j \<and> (\<exists>E F. D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c))
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
LeftDerivation (b1 @ [X] @ b2) D c \<and> length b1 < length (b1 @ [X] @ b2) \<and> j < length c \<and> (b1 @ [X] @ b2) ! length b1 = c ! j \<and> (\<exists>E F. D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c))
goal (1 subgoal):
1. LeftDerivationFix (prefix @ [X] @ b2) (length prefix) ((e, r) # D) j c
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
LeftDerivation (b1 @ [X] @ b2) D c \<and> length b1 < length (b1 @ [X] @ b2) \<and> j < length c \<and> (b1 @ [X] @ b2) ! length b1 = c ! j \<and> (\<exists>E F. D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c))
[PROOF STEP]
obtain E F where EF: "D = E @ derivation_shift F 0 (Suc j) \<and>
LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and>
LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c)"
[PROOF STATE]
proof (prove)
using this:
LeftDerivation (b1 @ [X] @ b2) D c \<and> length b1 < length (b1 @ [X] @ b2) \<and> j < length c \<and> (b1 @ [X] @ b2) ! length b1 = c ! j \<and> (\<exists>E F. D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c))
goal (1 subgoal):
1. (\<And>E F. D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c) \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c)
goal (1 subgoal):
1. LeftDerivationFix (prefix @ [X] @ b2) (length prefix) ((e, r) # D) j c
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c)
[PROOF STEP]
have E_b1_c: "LeftDerivation b1 E (take j c)"
[PROOF STATE]
proof (prove)
using this:
D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c)
goal (1 subgoal):
1. LeftDerivation b1 E (take j c)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
LeftDerivation b1 E (take j c)
goal (1 subgoal):
1. LeftDerivationFix (prefix @ [X] @ b2) (length prefix) ((e, r) # D) j c
[PROOF STEP]
with EF
[PROOF STATE]
proof (chain)
picking this:
D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c)
LeftDerivation b1 E (take j c)
[PROOF STEP]
have F_b2_c: "LeftDerivation b2 F (drop (Suc j) c)"
[PROOF STATE]
proof (prove)
using this:
D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c)
LeftDerivation b1 E (take j c)
goal (1 subgoal):
1. LeftDerivation b2 F (drop (Suc j) c)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
LeftDerivation b2 F (drop (Suc j) c)
goal (1 subgoal):
1. LeftDerivationFix (prefix @ [X] @ b2) (length prefix) ((e, r) # D) j c
[PROOF STEP]
have step: "LeftDerives1 (prefix @ [X] @ b2) e r (b1 @ [X] @ b2)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. LeftDerives1 (prefix @ [X] @ b2) e r (b1 @ [X] @ b2)
[PROOF STEP]
using LDF LeftDerivationFix_is_sentence LeftDerives1_append_suffix
is_sentence_concat prefix_b1
[PROOF STATE]
proof (prove)
using this:
LeftDerivationFix (b1 @ [X] @ b2) (length b1) D j c
LeftDerivationFix ?a ?i ?D ?j ?b \<Longrightarrow> is_sentence ?a \<and> is_sentence ?b
\<lbrakk>LeftDerives1 ?v ?i ?r ?w; is_sentence ?u\<rbrakk> \<Longrightarrow> LeftDerives1 (?v @ ?u) ?i ?r (?w @ ?u)
is_sentence (?x @ ?y) = (is_sentence ?x \<and> is_sentence ?y)
LeftDerives1 prefix e r b1
goal (1 subgoal):
1. LeftDerives1 (prefix @ [X] @ b2) e r (b1 @ [X] @ b2)
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
LeftDerives1 (prefix @ [X] @ b2) e r (b1 @ [X] @ b2)
goal (1 subgoal):
1. LeftDerivationFix (prefix @ [X] @ b2) (length prefix) ((e, r) # D) j c
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. LeftDerivationFix (prefix @ [X] @ b2) (length prefix) ((e, r) # D) j c
[PROOF STEP]
apply (simp add: LeftDerivationFix_def)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. is_sentence (prefix @ X # b2) \<and> is_sentence c \<and> (\<exists>x. LeftDerives1 (prefix @ X # b2) e r x \<and> LeftDerivation x D c) \<and> j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply (rule conjI)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. is_sentence (prefix @ X # b2)
2. is_sentence c \<and> (\<exists>x. LeftDerives1 (prefix @ X # b2) e r x \<and> LeftDerivation x D c) \<and> j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply (metis Derives1_sentence1 LDF LeftDerivationFix_def LeftDerives1_implies_Derives1
is_sentence_concat is_sentence_cons prefix_b1)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. is_sentence c \<and> (\<exists>x. LeftDerives1 (prefix @ X # b2) e r x \<and> LeftDerivation x D c) \<and> j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply (rule conjI)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. is_sentence c
2. (\<exists>x. LeftDerives1 (prefix @ X # b2) e r x \<and> LeftDerivation x D c) \<and> j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
using LDF LeftDerivationFix_is_sentence
[PROOF STATE]
proof (prove)
using this:
LeftDerivationFix (b1 @ [X] @ b2) (length b1) D j c
LeftDerivationFix ?a ?i ?D ?j ?b \<Longrightarrow> is_sentence ?a \<and> is_sentence ?b
goal (2 subgoals):
1. is_sentence c
2. (\<exists>x. LeftDerives1 (prefix @ X # b2) e r x \<and> LeftDerivation x D c) \<and> j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply blast
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<exists>x. LeftDerives1 (prefix @ X # b2) e r x \<and> LeftDerivation x D c) \<and> j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply (rule conjI)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. \<exists>x. LeftDerives1 (prefix @ X # b2) e r x \<and> LeftDerivation x D c
2. j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply (rule_tac x="b1@[X]@b2" in exI)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. LeftDerives1 (prefix @ X # b2) e r (b1 @ [X] @ b2) \<and> LeftDerivation (b1 @ [X] @ b2) D c
2. j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
using step
[PROOF STATE]
proof (prove)
using this:
LeftDerives1 (prefix @ [X] @ b2) e r (b1 @ [X] @ b2)
goal (2 subgoals):
1. LeftDerives1 (prefix @ X # b2) e r (b1 @ [X] @ b2) \<and> LeftDerivation (b1 @ [X] @ b2) D c
2. j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply simp
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. LeftDerives1 (prefix @ X # b2) e r (b1 @ X # b2) \<Longrightarrow> LeftDerivation (b1 @ X # b2) D c
2. j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
using LDF'
[PROOF STATE]
proof (prove)
using this:
LeftDerivation (b1 @ [X] @ b2) D c \<and> length b1 < length (b1 @ [X] @ b2) \<and> j < length c \<and> (b1 @ [X] @ b2) ! length b1 = c ! j \<and> (\<exists>E F. D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c))
goal (2 subgoals):
1. LeftDerives1 (prefix @ X # b2) e r (b1 @ X # b2) \<Longrightarrow> LeftDerivation (b1 @ X # b2) D c
2. j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply auto[1]
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. j < length c \<and> X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply (rule conjI)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. j < length c
2. X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
using LDF'
[PROOF STATE]
proof (prove)
using this:
LeftDerivation (b1 @ [X] @ b2) D c \<and> length b1 < length (b1 @ [X] @ b2) \<and> j < length c \<and> (b1 @ [X] @ b2) ! length b1 = c ! j \<and> (\<exists>E F. D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c))
goal (2 subgoals):
1. j < length c
2. X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply simp
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. X = c ! j \<and> (\<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c))
[PROOF STEP]
apply (rule conjI)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. X = c ! j
2. \<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c)
[PROOF STEP]
using LDF'
[PROOF STATE]
proof (prove)
using this:
LeftDerivation (b1 @ [X] @ b2) D c \<and> length b1 < length (b1 @ [X] @ b2) \<and> j < length c \<and> (b1 @ [X] @ b2) ! length b1 = c ! j \<and> (\<exists>E F. D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation (take (length b1) (b1 @ [X] @ b2)) E (take j c) \<and> LeftDerivation (drop (Suc (length b1)) (b1 @ [X] @ b2)) F (drop (Suc j) c))
goal (2 subgoals):
1. X = c ! j
2. \<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c)
[PROOF STEP]
apply auto[1]
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>E F. (e, r) # D = E @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix E (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c)
[PROOF STEP]
apply (rule_tac x="(e,r)#E" in exI)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>F. (e, r) # D = ((e, r) # E) @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix ((e, r) # E) (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c)
[PROOF STEP]
apply (rule_tac x="F" in exI)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (e, r) # D = ((e, r) # E) @ derivation_shift F 0 (Suc j) \<and> LeftDerivation prefix ((e, r) # E) (take j c) \<and> LeftDerivation b2 F (drop (Suc j) c)
[PROOF STEP]
apply (auto simp add: EF F_b2_c)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>x. LeftDerives1 prefix e r x \<and> LeftDerivation x E (take j c)
[PROOF STEP]
apply (rule_tac x="b1" in exI)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. LeftDerives1 prefix e r b1 \<and> LeftDerivation b1 E (take j c)
[PROOF STEP]
apply (simp add: prefix_b1 E_b1_c)
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done
[PROOF STATE]
proof (state)
this:
LeftDerivationFix (prefix @ [X] @ b2) (length prefix) ((e, r) # D) j c
goal:
No subgoals!
[PROOF STEP]
qed
|
{"llama_tokens": 7017, "file": "LocalLexing_Ladder", "length": 43}
|
import sys
import os
import re
from typing import Union
import shutil
import io
import threading
import concurrent.futures
import queue
import lzma
import configparser
import logging
from collections import OrderedDict
import multiprocessing
import numpy as np
from . import converter
# =============================== set up logging ==============================
logger = logging.getLogger(__name__)
# =============================== set up config ===============================
THISCONF = 'cavitylearn-data'
config = configparser.ConfigParser(interpolation=None)
# default config values
config[THISCONF] = {
"queue_maxsize": 1000,
"queue_timeout": 1,
"queue_workers": 0
}
# Look for the config file
for p in sys.path:
cfg_filepath = os.path.join(p, 'config.ini')
if os.path.exists(cfg_filepath):
logger.debug('Found config file in: ' + cfg_filepath)
config.read(cfg_filepath)
break
else:
logger.debug("config.ini not found!")
class DataConfig:
"""Data configuration class
:type classes: list[string]
:type num_props: int
:type boxshape: list[int]
:type dtype: np.dtype
"""
def __init__(self, classes: list, num_props: int, boxshape: list, dtype: np.dtype):
"""DataConfig object constructor.
:param classes: List of classes in the dataset
:param num_props: Number of properties or "colors" per box pixel.
:param boxshape: List with 3 integers with the shape of the box.
:param dtype: Datatype of the box pixels.
"""
self.num_classes = len(classes)
self.classes = list(classes)
self.num_props = num_props
self.boxshape = list(boxshape)
self.dtype = dtype
DATACONFIG_SECTION = 'dataconfig'
def read_dataconfig(configfile):
"""Read dataconfig from .ini file.
:param str configfile: File path or file object to the data configuration .ini file
:return: A DataConfig object
:rtype: DataConfig
"""
conf = configparser.ConfigParser()
if isinstance(configfile, str):
configfile = open(configfile, "rt")
try:
conf.read_file(configfile)
classes = [cl.strip() for cl in conf[DATACONFIG_SECTION]["classes"].split(',')]
properties = [prop.strip() for prop in conf[DATACONFIG_SECTION]["proplist"].split(',')]
shape = [int(s) for s in conf[DATACONFIG_SECTION]["shape"].split(',')]
dtype_str = conf[DATACONFIG_SECTION]["dtype"]
if dtype_str == "float32":
dtype = np.float32
else:
raise ValueError("Unkown data type `{}` in dataconfig file".format(dtype_str))
return DataConfig(
classes=classes,
num_props=len(properties),
boxshape=shape,
dtype=dtype
)
finally:
configfile.close()
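# Usage sketch (not part of the module API; file name and values below are
# hypothetical placeholders): a minimal dataconfig .ini that read_dataconfig()
# can parse might look like
#
#   [dataconfig]
#   classes = active, inactive
#   proplist = charge, hydrophobicity, occupancy
#   shape = 20, 20, 20
#   dtype = float32
#
# Reading it yields a DataConfig with num_classes == 2, num_props == 3 and
# boxshape == [20, 20, 20]:
#
#   dc = read_dataconfig("dataconfig.ini")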
BOX_SUFFIX = ".box"
RE_BOXFILE = re.compile(r'^(.*?)(\.r\d\d)?\.box$')
RE_BOXXZFILE = re.compile(r'^(.*?)(\.r\d\d)?\.box\.xz$')
def load_boxfile(f: str, dataconfig: DataConfig) -> np.ndarray:
"""Load a box file.
This reads the input file depending on its ending. If it ends in .box, the file is read as-is. If it ends in
.box.xz, the data is first decompressed using the LZMA algorithm. The input file data is read as a numpy array and
reshaped to match the info in the dataconfig.
:param f: Filename of the box file. Has to end either in .box or .box.xz .
:param dataconfig: Data configuration
:return: data file as array, reshaped to match the data configuration
"""
if RE_BOXXZFILE.match(f):
with lzma.open(f) as xzfile:
file_array = np.frombuffer(xzfile.read(), dtype=dataconfig.dtype)
elif RE_BOXFILE.match(f):
with open(f, "rb") as infile:
file_array = np.frombuffer(infile.read(), dtype=dataconfig.dtype)
else:
raise ValueError("Unknown file suffix for box file `{}`".format(f))
return file_array.reshape([
dataconfig.boxshape[0],
dataconfig.boxshape[1],
dataconfig.boxshape[2],
dataconfig.num_props])
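# Usage sketch: assuming `dc` is a DataConfig loaded as above, compressed and
# uncompressed box files are read into the same array shape. The file name is
# a hypothetical placeholder.
#
#   box = load_boxfile("cavity01.box.xz", dc)
#   # box.shape == (dc.boxshape[0], dc.boxshape[1], dc.boxshape[2], dc.num_props)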
class DataSet:
"""Data set handle class.
This class represents a set of input box arrays along with their labels. It is created from a list of files ending
in .box (uncompressed) or .box.xz.
The data files are continuously read in the background, and the resulting arrays and labels are buffered until they
are retrieved via next_batch.
"""
def __init__(self, labelfile: Union[io.IOBase, str], boxfiles: list, dataconfig: DataConfig, shuffle=True, verify=True,
start_worker=True):
"""Create a new DataSet from a list of box files, a label file and data configuration.
The label file is a tab separated file with two columns. The first column is the UUID of the box file
(basename of the box file without .box or .box.xz extension), and the second column is the name of the class.
:param labelfile: Filepath or file object of the label file.
:param boxfiles: List of box file paths. All filenames have to end in .box or .box.xz .
:param dataconfig: Data configuration object
:param shuffle: If true, randomize the order upon construction
:param verify: If true, opens each file and verifies that it is readable and an LZMA-compressed file
(if it ends in .box.xz).
:param start_worker: If true, start the worker threads loading the files immediately. If false, workers are not
started, but have to be started manually by calling start_worker()
"""
self._dataconfig = dataconfig
if isinstance(labelfile, str):
with open(labelfile, "rt") as labelfile:
label_list = [row.strip().split('\t') for row in labelfile]
else:
label_list = [row.strip().split('\t') for row in labelfile]
label_dict = {
entry[0]: entry[1]
for entry in label_list
}
boxfiles_labels = OrderedDict()
# loop through all box files, verify that they are there, an XZ file and have an entry in the label file
for boxfile in boxfiles:
try:
if RE_BOXXZFILE.match(boxfile):
if verify:
with lzma.open(boxfile):
pass
# get the name of the box: get basename, delete box suffix and look it up in the label list
boxfile_name = os.path.basename(boxfile)
boxfile_name = RE_BOXXZFILE.match(boxfile_name).group(1)
elif RE_BOXFILE.match(boxfile):
if verify:
with open(boxfile, "rb"):
pass
# get the name of the box: get basename, delete box suffix and look it up in the label list
boxfile_name = os.path.basename(boxfile)
boxfile_name = RE_BOXFILE.match(boxfile_name).group(1)
else:
logger.warning("File %d does not end in .box or .box.xz. I'm not quite sure what to do with it.")
continue
if boxfile_name not in label_dict:
logger.warning("Box file `{}` not found in label file.".format(boxfile))
continue
boxfiles_labels[boxfile] = label_dict[boxfile_name]
except FileNotFoundError:
logger.warning("Box file not found: {}".format(boxfile))
self.N = len(boxfiles_labels)
logger.debug("{:d} box files found from {:d} labels in list".format(self.N, len(label_dict)))
self._labels = converter.labels_to_classindex(list(boxfiles_labels.values()), dataconfig.classes)
self._boxfiles = list(boxfiles_labels.keys())
if shuffle:
self.shuffle(norestart=True)
self._last_batch_index = 0
self._queue_shutdown_flag = False
maxsize = int(config[THISCONF]['queue_maxsize'])
# logger.debug("Creating new queue object with maxsize %d.", maxsize)
self._box_future_queue = queue.Queue(maxsize=maxsize)
self._workthread = None
self._restart_worker()
pass
def _boxfile_read_worker(self):
"""Boxfile read worker.
Sequentially reads all files currently in the files list, and pushes them into the box array queue.
If the maximum queue size has been reached, the function blocks and waits until there is still space in the
queue.
To stop this worker, self._queue_shutdown_flag has to be set to true.
This function should be started from a separate thread.
:return: None
"""
num_workers = int(config[THISCONF]['queue_workers'])
if num_workers == 0:
num_workers = multiprocessing.cpu_count()
timeout = int(config[THISCONF]['queue_timeout'])
# create thread pool
with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as executor:
logger.debug("Started ")
# iterate over input files, submit one future for each
for i in range(self._last_batch_index, len(self._boxfiles)):
fut = executor.submit(load_boxfile, self._boxfiles[i], self._dataconfig)
# repeatedly try to insert the future into the queue
while True:
try:
# try to put the future into the queue, with timeout
self._box_future_queue.put(fut, timeout=timeout)
# if no exception was raised, break inner loop and continue with loading files
break
except queue.Full:
# no problem, that was just the timeout. Continue trying to insert the result into the queue
pass
finally:
# shut down queuing operations immediately, if shutdown_queue is set.
if self._queue_shutdown_flag:
executor.shutdown()
return
pass
def _restart_worker(self):
"""Start or restart the boxfile read worker.
If a boxfile read worker is currently running, it is shut down.
A new thread for the boxfile read worker is started.
:return: None
"""
# Signal that we want to quit the loading business
self._queue_shutdown_flag = True
# Eat all remaining boxes in the queue and drop them
try:
while True:
self._box_future_queue.get_nowait()
except queue.Empty:
pass
# join the worker thread
if self._workthread:
self._workthread.join()
# restart the loading business
self._queue_shutdown_flag = False
self._workthread = threading.Thread(target=self._boxfile_read_worker, daemon=True)
self._workthread.start()
def start_worker(self):
if self._workthread is None:
self._restart_worker()
def shuffle(self, norestart=False):
"""Shuffle the order of the dataset. Restarts the boxfile read worker unless norerstart is specified.
:param norestart: Do not restart the boxfile read worker
:return: None
"""
rand_order = np.random.permutation(self.N)
self._labels = self._labels[rand_order]
self._boxfiles = [self._boxfiles[i] for i in rand_order]
if not norestart:
self._restart_worker()
def rewind_batches(self, last_index=0, norestart=False):
"""Rewind the batch index pointer. This resets the DataSet to a fresh state.
The next call to next_batch after invoking this function will return the same data as the first call to
next_batch.
Unless norestart is specified, the boxfile read worker will be restarted.
This function should be called when the dataset has been exhausted and new batches are still desired.
:param last_index:
:param norestart: Do not restart the boxfile read worker
:return: None
"""
self._last_batch_index = last_index
if not norestart:
self._restart_worker()
def next_batch(self, batch_size: int) -> Tuple[np.ndarray, np.ndarray]:
"""Retrieve the next batch of box arrays.
:param batch_size: Number of labels/boxes to return at most.
:return: A tuple (labels, boxes) containing at most batch_size label indices and box arrays.
"""
next_index = self._last_batch_index + batch_size
if next_index > self.N:
batch_size = self.N - self._last_batch_index
next_index = self.N
label_slice = self._labels[self._last_batch_index:next_index]
boxes_slice = np.zeros([batch_size,
self._dataconfig.boxshape[0],
self._dataconfig.boxshape[1],
self._dataconfig.boxshape[2],
self._dataconfig.num_props], dtype=self._dataconfig.dtype)
# logger.debug("boxqueue size before batch retrieval: %d", self._box_future_queue.qsize())
for i in range(batch_size):
# get future, retrieve result
fut = self._box_future_queue.get()
# store output data
boxes_slice[i, :, :, :] = fut.result()
# signal that we are done with this item
self._box_future_queue.task_done()
self._last_batch_index = next_index
return label_slice, boxes_slice
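# Consumption sketch (hypothetical batch size): iterate over one full epoch,
# then rewind for the next. Note that next_batch may return fewer than
# batch_size items on the last call before the data set is exhausted.
#
#   num_batches = (dataset.N + 63) // 64
#   for _ in range(num_batches):
#       labels, boxes = dataset.next_batch(64)
#   dataset.rewind_batches()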
@property
def labels(self) -> np.ndarray:
"""labels property
:return: A numpy array with the label indices, in the order in which they are returned by next_batch.
"""
return self._labels.copy()
@property
def files(self) -> list:
"""files property
:return: A list of all the boxfile paths, in the order in which they are returned by next_batch.
"""
return list(self._boxfiles)
@property
def dataconfig(self) -> DataConfig:
"""Dataconfig property
:return: A dataconfig object
"""
return self._dataconfig
def load_datasets(labelfile: Union[io.IOBase, str], boxdir: str, dataconfig: DataConfig, datasets=None,
recursive=False, shuffle=True, verify=True, start_workers=True):
"""Load datases from a dataset directory.
Traverses the given dataset directory, and creates a DataSet for each directory which directly contains
.box or .box.xz files. Additionally, a DataSet is created for ALL .box or .box.xz files recursively found in
top-level directory.
:param datasets: Create datasets only from direcotories in this list. The root dataset is represented as an empty
string. If None, creates all datasets available in the directory.
:param labelfile: Filepath or file object of the label file.
:param boxdir: Input directory that will be recursively searched for .box or .box.xz files.
:param dataconfig: Data configuration object
:param recursive: Scan directories recursively. If false, only recurses into top-level directories
:param shuffle: If true, randomize the order upon construction
:param verify: If true, opens each file and verifies that it is readable and an LZMA-compressed file
(if it ends in .box.xz).
:param start_workers: If true, start the worker threads loading the files immediately. If false, workers are not
started, but have to be started manually by calling start_worker()
:return: A dictionary with the names of the directories containing .box/.box.xz files directly and "" as keys, and
the datasets for the respective directories and the root directory as values.
"""
out_datasets = {
}
if recursive:
# walk the box directory. Create dataset for each directory that contains '.box.xz' files.
for root, dirs, files in os.walk(boxdir):
dirname = os.path.basename(root)
if datasets is not None and dirname not in datasets:
continue
# accumulate all boxfiles
boxfiles = [os.path.join(root, boxfile) for boxfile in files if
RE_BOXXZFILE.search(boxfile) or RE_BOXFILE.search(boxfile)]
if not len(boxfiles):
continue
# add files to current dataset, but only if the current root dir is not the top level box directory
if not os.path.abspath(root) == os.path.abspath(boxdir):
out_datasets[dirname] = DataSet(labelfile, boxfiles, dataconfig, shuffle=shuffle, verify=verify,
start_worker=start_workers)
if isinstance(labelfile, io.IOBase):
labelfile.seek(io.SEEK_SET)
else:
# recurse into top level directories
for dirname in (d.name for d in os.scandir(boxdir) if d.is_dir()):
if datasets is not None and dirname not in datasets:
continue
files = (f.name for f in os.scandir(os.path.join(boxdir, dirname)))
boxfiles = [os.path.join(boxdir, dirname, boxfile) for boxfile in files if
RE_BOXXZFILE.search(boxfile) or RE_BOXFILE.search(boxfile)]
if not len(boxfiles):
continue
out_datasets[dirname] = DataSet(labelfile, boxfiles, dataconfig, shuffle=shuffle, verify=verify,
start_worker=start_workers)
if isinstance(labelfile, io.IOBase):
labelfile.seek(io.SEEK_SET)
# create the root dataset ("") when all datasets were requested or it was asked for explicitly
if datasets is None or "" in datasets:
rootfiles = list()
for ds in out_datasets.values():
# add files to root dataset
rootfiles.extend(ds.files)
out_datasets[""] = DataSet(labelfile, rootfiles, dataconfig, shuffle=shuffle, verify=False,
start_worker=start_workers)
return out_datasets
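# Usage sketch: paths and file names are hypothetical placeholders. After
# split_datasets() has produced train/test/cv subdirectories, each becomes an
# entry in the returned dictionary, plus "" for the root dataset.
#
#   dc = read_dataconfig("dataconfig.ini")
#   sets = load_datasets("labels.tsv", "boxes/", dc, recursive=True)
#   train_labels, train_boxes = sets["train"].next_batch(32)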
def unpack_datasets(sourcedir: str, outdir: str, progress_tracker=None):
"""Uncompress compressed .box files.
This traverses a directory recursively, unpacking each .box.xz file to a .box in the output directory with the same
relative path.
:param sourcedir: Source directory containing .box.xz files
:param outdir: Output directory for the .box files. This can be the source directory.
:param progress_tracker: An object with an update() function, that will be called once for each file.
:return:
"""
for root, dirs, files in os.walk(sourcedir):
current_outdir = os.path.join(outdir, os.path.relpath(root, sourcedir))
if not os.path.isdir(current_outdir):
os.makedirs(current_outdir)
for file in files:
# copy already uncompressed files
if RE_BOXFILE.search(file):
shutil.copy(os.path.join(root, file), os.path.join(current_outdir, file))
elif RE_BOXXZFILE.search(file):
outfilename = RE_BOXXZFILE.match(file).group(1) + BOX_SUFFIX
with lzma.open(os.path.join(root, file)) as infile, \
open(os.path.join(current_outdir, outfilename), 'wb') as outfile:
outfile.write(infile.read())
if progress_tracker:
progress_tracker.update()
RE_BOXFILE_ROT = re.compile(r'^(.*?)(\.r\d\d)?\.box(\.xz)?$')
def split_datasets(rootdir: str, test_part: float, validation_part=0.0, shuffle=True):
"""Split dataset into train, test and cv partitions.
Recursively collects .box and .box.xz files in the root directory, then distributes those files to test, train and
cv subdirectories according to the fractions specified by test_part and validation_part.
test_part + validation_part < 1
:param rootdir: Root directory that contains the .box/.box.xz files
:param test_part: Fraction of data that will be the test partition, must be between 0 and 1.
:param validation_part: Fraction of data that will be the cv partition, must be between 0 and 1.
:param shuffle: Shuffle original datasets.
"""
if not (isinstance(validation_part, float) and validation_part >= 0) or \
not (isinstance(test_part, float) and test_part >= 0):
raise ValueError("validation_part and test_part must be positive floating point numbers between 0 and 1")
if validation_part + test_part >= 1.0:
raise ValueError("Validation and Test partitions cannot make up more than 100% of the data set")
# Collect all box files in this directory recursively
uuid_files_dict = dict()
for root, dirs, files in os.walk(rootdir):
for file in files:
m = RE_BOXFILE_ROT.match(file)
if not m:
continue
filepath = os.path.join(root, file)
uuid = m.group(1)
if uuid not in uuid_files_dict:
uuid_files_dict[uuid] = [filepath]
else:
uuid_files_dict[uuid].append(filepath)
number_of_uuids = len(uuid_files_dict)
uuids = list(uuid_files_dict.keys())
# Randomize order if requested, otherwise order lexicographically
if shuffle:
order = np.random.permutation(number_of_uuids)
uuids = [uuids[idx] for idx in order]
else:
uuids.sort()
# calculate number of examples in test partition and cv-partition
num_test = int(number_of_uuids * test_part)
num_val = int(number_of_uuids * validation_part)
# move training, test and cv files to their places
ds = "train"
if not os.path.isdir(os.path.join(rootdir, ds)):
os.makedirs(os.path.join(rootdir, ds))
for idx in range(0, number_of_uuids - num_test - num_val):
for file in uuid_files_dict[uuids[idx]]:
shutil.move(file, os.path.join(rootdir, ds, os.path.basename(file)))
ds = "test"
if not os.path.isdir(os.path.join(rootdir, ds)):
os.makedirs(os.path.join(rootdir, ds))
for idx in range(number_of_uuids - num_test - num_val, number_of_uuids - num_val):
for file in uuid_files_dict[uuids[idx]]:
shutil.move(file, os.path.join(rootdir, ds, os.path.basename(file)))
ds = "cv"
if not os.path.isdir(os.path.join(rootdir, ds)):
os.makedirs(os.path.join(rootdir, ds))
for idx in range(number_of_uuids - num_val, number_of_uuids):
for file in uuid_files_dict[uuids[idx]]:
shutil.move(file, os.path.join(rootdir, ds, os.path.basename(file)))
return
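# Usage sketch (hypothetical directory): reserve 20% of the UUIDs for the test
# partition and 10% for cross-validation; the remaining 70% stay in train.
#
#   split_datasets("boxes/", test_part=0.2, validation_part=0.1)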
|
{"hexsha": "387622497f2d3ef0982378355b8ba5054fbc0901", "size": 22655, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/py/cavitylearn/data.py", "max_stars_repo_name": "akors/cavitylearn", "max_stars_repo_head_hexsha": "a03d159cbefce83d4c4c731a9c2573e7261faf91", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/py/cavitylearn/data.py", "max_issues_repo_name": "akors/cavitylearn", "max_issues_repo_head_hexsha": "a03d159cbefce83d4c4c731a9c2573e7261faf91", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/py/cavitylearn/data.py", "max_forks_repo_name": "akors/cavitylearn", "max_forks_repo_head_hexsha": "a03d159cbefce83d4c4c731a9c2573e7261faf91", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.3844884488, "max_line_length": 119, "alphanum_fraction": 0.6288677996, "include": true, "reason": "import numpy", "num_tokens": 4977}
|
import gzip
import os
import pickle
import torch
from torch import nn
import torch.utils.data as data_utils
import numpy as np
import torchvision
from torchvision import transforms
'''
spherical MNIST related
'''
def load_spherical_data(path='/workspace/tasks/spherical', batch_size=32):
data_file = os.path.join(path, 's2_mnist.gz')
with gzip.open(data_file, 'rb') as f:
dataset = pickle.load(f)
train_data = torch.from_numpy(
dataset["train"]["images"][:, None, :, :].astype(np.float32))
train_labels = torch.from_numpy(
dataset["train"]["labels"].astype(np.int64))
# TODO normalize dataset
# mean = train_data.mean()
# stdv = train_data.std()
train_dataset = data_utils.TensorDataset(train_data, train_labels)
#train_loader = data_utils.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_data = torch.from_numpy(
dataset["test"]["images"][:, None, :, :].astype(np.float32))
test_labels = torch.from_numpy(
dataset["test"]["labels"].astype(np.int64))
test_dataset = data_utils.TensorDataset(test_data, test_labels)
#test_loader = data_utils.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
return train_dataset, test_dataset
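# Usage sketch: wrapping the returned TensorDatasets in DataLoaders; the batch
# size and path are placeholders.
#
#   train_ds, test_ds = load_spherical_data(path='/workspace/tasks/spherical')
#   train_loader = data_utils.DataLoader(train_ds, batch_size=32, shuffle=True)
#   test_loader = data_utils.DataLoader(test_ds, batch_size=32, shuffle=False)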
'''
CIFAR related
'''
class RowColPermute(nn.Module):
def __init__(self, row, col):
super().__init__()
self.rowperm = torch.randperm(row) if isinstance(row, int) else row
self.colperm = torch.randperm(col) if isinstance(col, int) else col
def forward(self, tensor):
return tensor[:, self.rowperm][:, :, self.colperm]
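# Worked example: RowColPermute applies one fixed row permutation and one fixed
# column permutation to every image it sees, so the spatial structure is
# scrambled consistently across the whole dataset.
#
#   perm = RowColPermute(32, 32)
#   img = torch.randn(3, 32, 32)   # CHW tensor, as produced by ToTensor()
#   out = perm(img)                # rows, then columns, permuted
#   assert out.shape == img.shape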
def load_cifar_train_data(path, permute):
CIFAR_MEAN = [0.49139968, 0.48215827, 0.44653124]
CIFAR_STD = [0.24703233, 0.24348505, 0.26158768]
normalize = transforms.Normalize(CIFAR_MEAN,
CIFAR_STD)
if permute:
permute_op = RowColPermute(32, 32)
transform = transforms.Compose([transforms.ToTensor(), permute_op, normalize])
else:
transform = transforms.Compose(
[transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(),
normalize]
)
trainset = torchvision.datasets.CIFAR10(
root=path, train=True, download=True, transform=transform
)
return trainset
def load_cifar_val_data(path, permute):
CIFAR_MEAN = [0.49139968, 0.48215827, 0.44653124]
CIFAR_STD = [0.24703233, 0.24348505, 0.26158768]
normalize = transforms.Normalize(CIFAR_MEAN,
CIFAR_STD)
if permute:
permute_op = RowColPermute(32, 32)
transform = transforms.Compose([transforms.ToTensor(), permute_op, normalize])
else:
transform = transforms.Compose(
[transforms.ToTensor(), normalize]
)
valset = torchvision.datasets.CIFAR10(
root=path, train=False, download=True, transform=transform
)
return valset
'''
sEMG related
'''
def scramble(examples, labels, second_labels=None):
random_vec = np.arange(len(labels))
np.random.shuffle(random_vec)
new_labels = []
new_examples = []
if second_labels is not None and len(second_labels) == len(labels):
new_second_labels = []
for i in random_vec:
new_labels.append(labels[i])
new_examples.append(examples[i])
new_second_labels.append(second_labels[i])
return new_examples, new_labels, new_second_labels
else:
for i in random_vec:
new_labels.append(labels[i])
new_examples.append(examples[i])
return new_examples, new_labels
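# Worked example: scramble() shuffles examples and labels with the same random
# permutation, so the pairing is preserved.
#
#   ex, lb = scramble(['a', 'b', 'c'], [1, 2, 3])
#   # ex and lb are reordered identically, e.g. ['c', 'a', 'b'] and [3, 1, 2]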
def load_sEMG_train_data(path='/workspace/tasks/MyoArmbandDataset/PyTorchImplementation/sEMG'):
datasets_training = np.load(os.path.join(path, "saved_evaluation_dataset_training.npy"),
encoding="bytes", allow_pickle=True)
examples_training, labels_training = datasets_training
#examples_training = examples_training.reshape(-1, *examples_training.shape[2:])
#labels_training = labels_training.reshape(-1, *labels_training.shape[2:])
for j in range(17):
print("CURRENT DATASET : ", j)
examples_personne_training = []
labels_gesture_personne_training = []
for k in range(len(examples_training[j])):
examples_personne_training.extend(examples_training[j][k])
labels_gesture_personne_training.extend(labels_training[j][k])
examples_personne_scrambled, labels_gesture_personne_scrambled = scramble(examples_personne_training,
labels_gesture_personne_training)
train = data_utils.TensorDataset(torch.from_numpy(np.array(examples_personne_scrambled, dtype=np.float32)),
torch.from_numpy(np.array(labels_gesture_personne_scrambled, dtype=np.int64)))
return train
def load_sEMG_val_data(path='/workspace/tasks/MyoArmbandDataset/PyTorchImplementation/sEMG'):
datasets_test0 = np.load(os.path.join(path, "saved_evaluation_dataset_test0.npy"),
encoding="bytes", allow_pickle=True)
examples_test0, labels_test0 = datasets_test0
datasets_test1 = np.load(os.path.join(path, "saved_evaluation_dataset_test1.npy"),
encoding="bytes", allow_pickle=True)
examples_test1, labels_test1 = datasets_test1
#x_val = np.concatenate((examples_test0.reshape(-1), examples_test1.reshape(-1)))
#y_val = np.concatenate((labels_test0.reshape(-1), labels_test1.reshape(-1)))
for j in range(17):
X_test_0, Y_test_0 = [], []
for k in range(len(examples_test0[j])):  # iterate the cycles of subject j, mirroring the training loader
X_test_0.extend(examples_test0[j][k])
Y_test_0.extend(labels_test0[j][k])
X_test_1, Y_test_1 = [], []
for k in range(len(examples_test1[j])):  # iterate the cycles of subject j, mirroring the training loader
X_test_1.extend(examples_test1[j][k])
Y_test_1.extend(labels_test1[j][k])
X_test_0, Y_test_0 = np.array(X_test_0, dtype=np.float32), np.array(Y_test_0, dtype=np.int64)
X_test_1, Y_test_1 = np.array(X_test_1, dtype=np.float32), np.array(Y_test_1, dtype=np.int64)
X_test = np.concatenate((X_test_0, X_test_1))
Y_test = np.concatenate((Y_test_0, Y_test_1))
val = data_utils.TensorDataset(torch.from_numpy(X_test),
torch.from_numpy(Y_test))
return val
|
{"hexsha": "391f88e24cd10127fbab3ba7b69ef0868d35b187", "size": 6413, "ext": "py", "lang": "Python", "max_stars_repo_path": "old/bananas/darts/cnn/utils_data.py", "max_stars_repo_name": "rtu715/NAS-Bench-360", "max_stars_repo_head_hexsha": "d075006848c664371855c34082b0a00cda62be67", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2021-06-15T17:48:34.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-23T18:34:28.000Z", "max_issues_repo_path": "old/bananas/darts/cnn/utils_data.py", "max_issues_repo_name": "rtu715/NAS-Bench-360", "max_issues_repo_head_hexsha": "d075006848c664371855c34082b0a00cda62be67", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-11-12T15:12:38.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-12T19:38:00.000Z", "max_forks_repo_path": "old/bananas/darts/cnn/utils_data.py", "max_forks_repo_name": "rtu715/NAS-Bench-360", "max_forks_repo_head_hexsha": "d075006848c664371855c34082b0a00cda62be67", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-15T04:07:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-15T04:07:17.000Z", "avg_line_length": 33.7526315789, "max_line_length": 115, "alphanum_fraction": 0.6639638235, "include": true, "reason": "import numpy", "num_tokens": 1516}
|
[STATEMENT]
lemma emb_step_arg: "is_App t \<Longrightarrow> t \<rightarrow>\<^sub>e\<^sub>m\<^sub>b (arg t)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. is_App t \<Longrightarrow> t \<rightarrow>\<^sub>e\<^sub>m\<^sub>b arg t
[PROOF STEP]
by (metis emb_step.intros(2) tm.collapse(2))
|
{"llama_tokens": 115, "file": "Lambda_Free_EPO_Embeddings", "length": 1}
|
# This code is part of Qiskit.
#
# (C) Copyright IBM 2021.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""The triangular lattice"""
from dataclasses import asdict
from itertools import product
from math import pi
from typing import Dict, List, Optional, Tuple, Union
import numpy as np
from retworkx import PyGraph
from .lattice import LatticeDrawStyle, Lattice
from .boundary_condition import BoundaryCondition
class TriangularLattice(Lattice):
"""Triangular lattice."""
def _coordinate_to_index(self, coord: np.ndarray) -> int:
"""Convert the coordinate of a lattice point to an integer for labeling.
When self.size=(l0, l1), then a coordinate (x0, x1) is converted as
x0 + x1*l0.
Args:
coord: Input coordinate to be converted.
Returns:
int: Return x0 + x1*l0 when coord=np.array([x0, x1]) and self.size=(l0, l1).
"""
dim = 2
size = self.size
base = np.array([np.prod(size[:i]) for i in range(dim)], dtype=int)
return np.dot(coord, base).item()
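# Worked example: with self.size == (3, 4), base == [1, 3], so the
# coordinate (1, 2) maps to index 1 + 2*3 == 7.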
def _self_loops(self) -> List[Tuple[int, int, complex]]:
"""Return a list consisting of the self-loops on all the nodes.
Returns:
List[Tuple[int, int, complex]] : List of the self-loops.
"""
size = self.size
onsite_parameter = self.onsite_parameter
num_nodes = np.prod(size)
return [(node_a, node_a, onsite_parameter) for node_a in range(num_nodes)]
def _bulk_edges(self) -> List[Tuple[int, int, complex]]:
"""Return a list consisting of the edges in th bulk, which don't cross the boundaries.
Returns:
List[Tuple[int, int, complex]] : List of weighted edges that don't cross the boundaries.
"""
size = self.size
edge_parameter = self.edge_parameter
list_of_edges = []
rows, cols = size
coordinates = list(product(*map(range, size)))
for x, y in coordinates:
node_a = self._coordinate_to_index(np.array([x, y]))
for i in range(3):
# x direction
if i == 0 and x != rows - 1:
node_b = self._coordinate_to_index(np.array([x, y]) + np.array([1, 0]))
# y direction
elif i == 1 and y != cols - 1:
node_b = self._coordinate_to_index(np.array([x, y]) + np.array([0, 1]))
# diagonal direction
elif i == 2 and x != rows - 1 and y != cols - 1:
node_b = self._coordinate_to_index(np.array([x, y]) + np.array([1, 1]))
else:
continue
list_of_edges.append((node_a, node_b, edge_parameter[i]))
return list_of_edges
def _boundary_edges(self) -> List[Tuple[int, int, complex]]:
"""Return a list consisting of the edges that cross the boundaries
depending on the boundary conditions.
Raises:
ValueError: If the given boundary condition is invalid.
Returns:
List[Tuple[int, int, complex]]: List of weighted edges that cross the boundaries.
"""
list_of_edges = []
size = self.size
edge_parameter = self.edge_parameter
boundary_condition = self.boundary_condition
rows, cols = size
# add edges when the boundary condition is periodic.
if boundary_condition == BoundaryCondition.PERIODIC:
# The periodic boundary condition in the x direction.
# It makes sense only when rows is greater than 2.
if rows > 2:
for y in range(cols):
node_a = (y + 1) * rows - 1
node_b = node_a - (rows - 1) # node_b < node_a
list_of_edges.append((node_b, node_a, edge_parameter[0].conjugate()))
# The periodic boundary condition in the y direction.
# It makes sense only when cols is greater than 2.
if cols > 2:
for x in range(rows):
node_a = rows * (cols - 1) + x
node_b = x # node_b < node_a
list_of_edges.append((node_b, node_a, edge_parameter[1].conjugate()))
# The periodic boundary condition in the diagonal direction.
for y in range(cols - 1):
node_a = (y + 1) * rows - 1
node_b = node_a + 1 # node_b > node_a
list_of_edges.append((node_a, node_b, edge_parameter[2]))
for x in range(rows - 1):
node_a = rows * (cols - 1) + x
node_b = x + 1 # node_b < node_a
list_of_edges.append((node_b, node_a, edge_parameter[2].conjugate()))
node_a = rows * cols - 1
node_b = 0 # node_b < node_a
list_of_edges.append((node_b, node_a, edge_parameter[2].conjugate()))
elif boundary_condition == BoundaryCondition.OPEN:
pass
else:
raise ValueError(
f"Invalid `boundary condition` {boundary_condition} is given."
"`boundary condition` must be " + " or ".join(str(bc) for bc in BoundaryCondition)
)
return list_of_edges
def _default_position(self) -> Dict[int, List[float]]:
"""Return a dictionary of default positions for visualization of a two-dimensional lattice.
Returns:
Dict[int, List[float]] : The keys are the labels of lattice points,
and the values are two-dimensional coordinates.
"""
size = self.size
boundary_condition = self.boundary_condition
pos = {}
width = 0.0
if boundary_condition == BoundaryCondition.PERIODIC:
# the positions are shifted along the x- and y-direction
# when the boundary condition is periodic.
# The width of the shift is fixed to 0.2.
width = 0.2
for index in range(np.prod(size)):
# maps an index to two-dimensional coordinate
# the positions are shifted so that the edges between boundaries can be seen
# for the periodic cases.
coord = np.array(divmod(index, size[0]))[::-1] + width * np.sin(
pi * np.array(divmod(index, size[0])) / (np.array(size)[::-1] - 1)
)
pos[index] = coord.tolist()
return pos
def __init__(
self,
rows: int,
cols: int,
edge_parameter: Union[complex, Tuple[complex, complex, complex]] = 1.0,
onsite_parameter: complex = 0.0,
boundary_condition: BoundaryCondition = BoundaryCondition.OPEN,
) -> None:
"""
Args:
rows: Length of the x direction.
cols: Length of the y direction.
edge_parameter: Weights on the edges in x, y and diagonal directions.
This is specified as a tuple of length 3 or a single value.
When it is a single value, it is interpreted as a tuple of length 3
consisting of the same values.
Defaults to 1.0.
onsite_parameter: Weight on the self-loops, which are edges connecting a node to itself.
Defaults to 0.0.
boundary_condition: Boundary condition for the lattice.
The available boundary conditions are:
BoundaryCondition.OPEN, BoundaryCondition.PERIODIC.
Defaults to BoundaryCondition.OPEN.
Raises:
ValueError: If the given size, edge parameter, or boundary condition is invalid.
"""
self.rows = rows
self.cols = cols
self.size = (rows, cols)
self.dim = 2
self.boundary_condition = boundary_condition
if rows < 2 or cols < 2 or (rows, cols) == (2, 2):
# If this condition holds, the triangular lattice is not well defined.
raise ValueError(
"`rows` and `cols` must both be greater than or equal to 2, "
"and (rows, cols) must not be (2, 2)."
)
if isinstance(edge_parameter, (int, float, complex)):
edge_parameter = (edge_parameter, edge_parameter, edge_parameter)
elif isinstance(edge_parameter, tuple):
if len(edge_parameter) != 3:
raise ValueError(
f"The length of `edge_parameter` must be 3, not {len(edge_parameter)}."
)
self.edge_parameter = edge_parameter
self.onsite_parameter = onsite_parameter
graph = PyGraph(multigraph=False)
graph.add_nodes_from(range(np.prod(self.size)))
# add edges excluding the boundary edges
bulk_edges = self._bulk_edges()
graph.add_edges_from(bulk_edges)
# add self-loops
self_loop_list = self._self_loops()
graph.add_edges_from(self_loop_list)
# add edges that cross the boundaries
boundary_edge_list = self._boundary_edges()
graph.add_edges_from(boundary_edge_list)
# a list of edges that depend on the boundary condition
self.boundary_edges = [(edge[0], edge[1]) for edge in boundary_edge_list]
super().__init__(graph)
# default position
self.pos = self._default_position()
def draw_without_boundary(
self,
self_loop: bool = False,
style: Optional[LatticeDrawStyle] = None,
):
r"""Draw the lattice with no edges between the boundaries.
Args:
self_loop: Draw self-loops in the lattice. Defaults to False.
style : Styles for retworkx.visualization.mpl_draw.
Please see
https://qiskit.org/documentation/retworkx/stubs/retworkx.visualization.mpl_draw.html#retworkx.visualization.mpl_draw
for details.
"""
graph = self.graph
if style is None:
style = LatticeDrawStyle()
elif not isinstance(style, LatticeDrawStyle):
style = LatticeDrawStyle(**style)
if style.pos is None:
if self.dim == 1:
style.pos = {i: [i, 0] for i in range(self.size[0])}
elif self.dim == 2:
style.pos = {
i: [i % self.size[0], i // self.size[0]] for i in range(np.prod(self.size))
}
graph.remove_edges_from(self.boundary_edges)
self._mpl(
graph=graph,
self_loop=self_loop,
**asdict(style),
)
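# Usage sketch (hypothetical parameters): a 4x4 open-boundary triangular
# lattice with uniform edge weight 1.0 can be built and drawn as follows.
#
#   lattice = TriangularLattice(rows=4, cols=4, edge_parameter=1.0,
#                               boundary_condition=BoundaryCondition.OPEN)
#   lattice.draw_without_boundary(self_loop=False)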
|
{"hexsha": "e1a532f7b66f1cf23c4ff4f9a7f8d6a6801b0e2c", "size": 10917, "ext": "py", "lang": "Python", "max_stars_repo_path": "qiskit_nature/problems/second_quantization/lattice/lattices/triangular_lattice.py", "max_stars_repo_name": "jschuhmac/qiskit-nature", "max_stars_repo_head_hexsha": "b8b1181d951cf8fa76fe0db9e5ea192dad5fb186", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 132, "max_stars_repo_stars_event_min_datetime": "2021-01-28T14:51:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T21:10:47.000Z", "max_issues_repo_path": "qiskit_nature/problems/second_quantization/lattice/lattices/triangular_lattice.py", "max_issues_repo_name": "jschuhmac/qiskit-nature", "max_issues_repo_head_hexsha": "b8b1181d951cf8fa76fe0db9e5ea192dad5fb186", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 449, "max_issues_repo_issues_event_min_datetime": "2021-01-28T19:57:43.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T17:01:50.000Z", "max_forks_repo_path": "qiskit_nature/problems/second_quantization/lattice/lattices/triangular_lattice.py", "max_forks_repo_name": "jschuhmac/qiskit-nature", "max_forks_repo_head_hexsha": "b8b1181d951cf8fa76fe0db9e5ea192dad5fb186", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 109, "max_forks_repo_forks_event_min_datetime": "2021-01-28T13:17:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T23:53:39.000Z", "avg_line_length": 40.5836431227, "max_line_length": 132, "alphanum_fraction": 0.5843180361, "include": true, "reason": "import numpy", "num_tokens": 2446}
|
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 6 07:50:51 2018
@author: markditsworth
"""
import zen
import numpy as np
import traceback
def out_degree_dist(G,output_file):
try:
ddist = zen.degree.ddist(G,normalize=False,direction='out_dir')
n = len(ddist)
k = np.arange(n)
ddist = ddist.reshape((n,1))
k = k.reshape((n,1))
deg_dist = np.concatenate((k,ddist),axis=1)
np.savetxt(output_file,deg_dist,delimiter=',',fmt='%d')
end_mes = 'Successfully written to: %s\n'%output_file
print 'Out-Degree Distribution Saved...'
except Exception:
end_mes = traceback.format_exc()
print 'Error Occured: See Log.'
return end_mes
def main(argv):
p = 0
network_file = ''
bot_file = ''
while argv:
if argv[0] == '-p':
p = float(argv[1])
elif argv[0] == '-N':
network_file = argv[1]
elif argv[0] == '-B':
bot_file = argv[1]
argv = argv[1:]
if network_file == '':
print 'Error: Network File Required! Use flag -N'
else:
file_error_flag = 0
try:
G = zen.io.gml.read(network_file,weight_fxn= lambda x:x['weight'])
#G = zen.generating.erdos_renyi(20,0.1,directed=True)
except IOError:
print 'Error: Invalid Network File Name.\n'
file_error_flag = 1
# Remove p percentage of bots (should be incremental or random?) [incremental right now]
if p>0:
try:
with open(bot_file,'rb') as fObj:
bots = fObj.readlines()
n = int(len(bots)*p)
bots = bots[0:n]
for bot in bots:
bot = bot.strip().split('/')[-1]
G.rm_node(bot)
except IOError:
print 'Error: Invalid Bot File Name.\n'
file_error_flag = 1
if not file_error_flag:
suff = '_'.join(str(p).split('.')) # Suffix denotes bot removal percentage
log = './Logs/Out_Degree_Dist_Log_'+suff+'.txt' # Create Log File Name
# Out Degree
result = out_degree_dist(G,'./Stats/out_degree_dist_'+suff+'.csv')
with open(log,'wb') as fObj:
fObj.write(result)
if __name__ == '__main__':
from sys import argv
main(argv)
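# Example invocation (file names are placeholders): compute the out-degree
# distribution after removing the first 10% of listed bots:
#
#   python2 Out_Degree_Dist_Calc.py -N network.gml -B bots.txt -p 0.1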
|
{"hexsha": "03d8c50b0503144d5baa8d8f74ac25bfd47f1487", "size": 2579, "ext": "py", "lang": "Python", "max_stars_repo_path": "Scripts/Out_Degree_Dist_Calc.py", "max_stars_repo_name": "markditsworth/RedditCommentAnalysis", "max_stars_repo_head_hexsha": "4db34accdda2e8c13747acc66e67aceeb6bdfbbc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-04-18T08:06:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-20T15:11:52.000Z", "max_issues_repo_path": "Scripts/Out_Degree_Dist_Calc.py", "max_issues_repo_name": "markditsworth/RedditCommentAnalysis", "max_issues_repo_head_hexsha": "4db34accdda2e8c13747acc66e67aceeb6bdfbbc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Scripts/Out_Degree_Dist_Calc.py", "max_forks_repo_name": "markditsworth/RedditCommentAnalysis", "max_forks_repo_head_hexsha": "4db34accdda2e8c13747acc66e67aceeb6bdfbbc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-04-18T08:06:35.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-06T09:32:03.000Z", "avg_line_length": 28.9775280899, "max_line_length": 96, "alphanum_fraction": 0.5025203567, "include": true, "reason": "import numpy", "num_tokens": 626}
|
# %% Imports
import argparse
from collections import namedtuple
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %% Flags
parser = argparse.ArgumentParser(description='Export Wandb results')
parser.add_argument('--tag', type=str)
flags = parser.parse_args()
tag = flags.tag
# %% Helper functions
def get_model_results(results):
env_split_mode = {}
for split, split_results in results.groupby("env_split"):
env_split_mode[split] = split_results[["actual_measure", "risk_max"]]
return env_split_mode
def get_best_by_measure(data):
data = pd.DataFrame(data).reset_index()
measures = [c for c in data.actual_measure.unique() if len(c.split(".")) == 2]
data["measure"] = data.actual_measure.apply(lambda x: ".".join(x.split(".")[:2]))
data = data.iloc[data.groupby("measure").risk_max.idxmin()]
return data
def preprocess_columns(data):
data["mae_max"] = np.sqrt(data.risk_max)
data["pretty_measure"] = [c.replace("complexity.", "").replace("_adjusted1", "").replace("_", "-") for c in data.actual_measure]
return data
def subtract_baseline(data, baseline_mae):
data["mae_max_vs_baseline"] = baseline_mae - data["mae_max"]
return data
# %% Load data
print(tag)
resultspath = Path('temp/single_network/')
resultspath.mkdir(parents=True, exist_ok=True)
df = pd.read_csv(resultspath / f'{tag}_export.csv')[['lr', 'bias', 'datafile', 'env_split', 'actual_measure', 'only_bias__ignore_input', 'selected_single_measure', 'bias.1', 'loss', 'weight', '_runtime', 'risk_max', 'risk_min', 'train_mse', 'risk_range', 'robustness_penalty']]
affine = get_model_results(df[(df['bias']==True) & (df['only_bias__ignore_input']==False)].copy())
weight_only = get_model_results(df[(df['bias']==False) & (df['only_bias__ignore_input']==False)].copy())
bias_only = get_model_results(df[(df['bias']==True) & (df['only_bias__ignore_input']==True)].copy())
# %% Plot regression results
sns.set_style("darkgrid", {'xtick.bottom': True})
plotpath = Path(f'temp/single_network/{tag}/')
plotpath.mkdir(parents=True, exist_ok=True)
order = None
for idx, split in enumerate(sorted(df.env_split.unique())):
plt.figure(figsize=(8,1.5))
baselines_bias = {split: preprocess_columns(bias_only[split]).mae_max.values[0] for split in bias_only}
plot_results = preprocess_columns(get_best_by_measure(affine[split]))
plot_results = subtract_baseline(plot_results, baseline_mae=baselines_bias[split])
order = plot_results.sort_values("mae_max_vs_baseline", ascending=False).pretty_measure if order is None else order
sns.barplot(data=plot_results, order=order, x="pretty_measure", y="mae_max", palette="deep")
plt.axhline(baselines_bias[split], label='bias-only baseline')
plt.xticks(rotation=90)
#plt.title(f'split {split}')
plt.legend(loc='lower right')
plt.xlabel('Generalization Measure')
plt.ylabel('Robust RMSE')
plt.tight_layout()
plt.xticks(rotation=45,ha='right')
plt.yticks(fontsize=8)
plt.xticks(fontsize=8)
plt.savefig(plotpath / f'{split}_mae_all_vs_baseline.pdf', bbox_inches='tight')
plt.close()
# %% Plot regression cdfs
D = namedtuple('D', ['measure', 'env_split', 'exp_type', 'bias_only'])
sns.set()
exp_type = tag.split('_')[-1]
rows = 1 if exp_type=='v1' else 5
plt.figure(figsize=(10,2 * rows))
sorting = None
for row, env_split in enumerate(['all', 'lr', 'depth', 'width', 'train_size']):
if exp_type=='v1' and env_split != 'all':
continue
data = [(D(*x.name.split('__')), np.sqrt(np.load(x))) for x in Path('temp/single_network/risks').glob(f'*__{env_split}__{exp_type}__False.npy')]
baseline = [(D(*x.name.split('__')), np.sqrt(np.load(x))) for x in Path('temp/single_network/risks').glob(f'*__{env_split}__{exp_type}__True.npy')]
if sorting is None:
data = sorted(data, key=lambda x: x[1].max())
sorting = [x[0] for x in data]
else:
data_dict = dict(data)
data = [(x._replace(env_split=env_split), data_dict[x._replace(env_split=env_split)]) for x in sorting]
maxx = baseline[0][1].max()
points = 100
for i in range(len(data)):
maxx = max(maxx, data[i][1].max())
for i in range(len(data)):
plt.subplot(rows,24,24*row + i+1)
x = np.cumsum(np.histogram(data[i][1], points, (0,maxx))[0])
x = x / x[-1]
ax = sns.heatmap(x[..., np.newaxis], cmap="Blues_r", cbar=(i+1)==len(data), cbar_kws={"aspect":35}, rasterized=True)
ax.invert_yaxis()
plt.axhline(baseline[0][1].max()*points/maxx, color='red', label='baseline')
plt.axhline(np.max(data[i][1])*points/maxx, color="limegreen", zorder=1, linewidth=1.5, label='max')
plt.axhline(np.percentile(data[i][1], q=90)*points/maxx, color="magenta", zorder=2, linewidth=1.5, linestyle="--", label='90th percentile')
plt.axhline(np.mean(data[i][1])*points/maxx, color='orange', zorder=2, linewidth=1.5, linestyle=":", label='mean')
plt.ylabel('')
if i==0:
plt.ylabel(f"RMSE ({env_split.replace('_', ' ')})")
plt.yticks([0, points//2, points], labels=[0, str(maxx/2)[:5], str(maxx)[:5]], fontsize=8)
else:
plt.yticks([])
plt.xticks([])
if row+1 == rows:
plt.xlabel(data[i][0].measure.replace('_','.'), rotation=45, fontsize=8, ha="right")
else:
plt.xlabel('')
plt.legend(loc='upper left', ncol=4, bbox_to_anchor=(-25.5,-0.9), fontsize=8)
plt.savefig(plotpath / f'cdf_{exp_type}.pdf', bbox_inches='tight')
plt.close()
|
{"hexsha": "9d477af1235821c1c2c1c5a9f086d59ebb1ba85b", "size": 5640, "ext": "py", "lang": "Python", "max_stars_repo_path": "experiments/single_network/plot_results.py", "max_stars_repo_name": "nitarshan/robust-generalization-measures", "max_stars_repo_head_hexsha": "8e9012991ddef1603bab5b6ab31ace6fbfc67ac6", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2020-10-22T21:17:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T08:57:18.000Z", "max_issues_repo_path": "experiments/single_network/plot_results.py", "max_issues_repo_name": "nitarshan/robust-generalization-measures", "max_issues_repo_head_hexsha": "8e9012991ddef1603bab5b6ab31ace6fbfc67ac6", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "experiments/single_network/plot_results.py", "max_forks_repo_name": "nitarshan/robust-generalization-measures", "max_forks_repo_head_hexsha": "8e9012991ddef1603bab5b6ab31ace6fbfc67ac6", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2021-02-15T16:57:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-04T22:39:28.000Z", "avg_line_length": 44.7619047619, "max_line_length": 277, "alphanum_fraction": 0.664893617, "include": true, "reason": "import numpy", "num_tokens": 1540}
|
module BoundaryCondition
export set_outlet_nonreflect_boundary!,set_outlet_costant_p_boundary!,
set_inlet_constant_h!,set_outlet_costant_h_boundary!,set_inlet_interface!,set_outlet_interface!,
get_L_from_interface,get_L_from_nonreflect,get_d_from_L_inflow,get_d_from_L_outflow
using ..Systems
function get_L_from_interface(uu1::Array,uu2::Array,pipesystem1,pipesystem2)
Δx1 = pipesystem1.Δx
uueverything1 = UUtoEverything(uu1,pipesystem1)
u1 = uueverything1.u
ρ1 = uueverything1.ρ
c1 = uueverything1.c
p1 = uueverything1.p
# get λ1,λ2,λ3,λ4,λ5
λ1 = Array{Float64,1}(UndefInitializer(), 5)
λ1[1] = u1[end]-c1[end]
λ1[2] = u1[end]
λ1[3] = u1[end]
λ1[4] = u1[end]
λ1[5] = u1[end]+c1[end]
# get L2,L3,L4,L5 from upstream
L = Array{Float64,1}(UndefInitializer(), 5)
L[2]=λ1[2].*(c1[end].^2 .* (ρ1[end]-ρ1[end-1])./Δx1-(p1[end]-p1[end-1])./Δx1)
L[3]=λ1[3].*0
L[4]=λ1[4].*0
L[5]=λ1[5].*((p1[end]-p1[end-1])./Δx1+ρ1[end].*c1[end].*(u1[end]-u1[end-1])./Δx1)
Δx2 = pipesystem2.Δx
uueverything2 = UUtoEverything(uu2,pipesystem2)
u2 = uueverything2.u
ρ2 = uueverything2.ρ
c2 = uueverything2.c
p2 = uueverything2.p
# get λ1,λ2,λ3,λ4,λ5
λ2 = Array{Float64,1}(UndefInitializer(), 5)
λ2[1] = u2[1]-c2[1]
λ2[2] = u2[1]
λ2[3] = u2[1]
λ2[4] = u2[1]
λ2[5] = u2[1]+c2[1]
# get L1 from downstream
L[1] = λ2[1].*((p2[2]-p2[1])./Δx2-ρ2[1].*c2[1].*(u2[2]-u2[1])./Δx2)
return L
end
"""
get the characteristic wave amplitudes L from constant enthalpy conditions
"""
function get_L_from_constant_h(uu::Array,pipesystem)
gamma=pipesystem.gamma
uueverything = UUtoEverything(uu,pipesystem)
u = uueverything.u
ρ = uueverything.ρ
c = uueverything.c
p = uueverything.p
h = uueverything.h
L = get_L_from_nonreflect(uu,pipesystem)
dhdt = 0
L[1] = (-dhdt .* ρ[1] .* c[1].^2 ./h[1] + L[2]) .*2 ./(gamma-1) - L[5]
return L
end
"""
get the characteristic wave amplitudes L from constant pressure conditions
"""
function get_L_from_constant_p(uu::Array,pipesystem)
L = get_L_from_nonreflect(uu,pipesystem)
dpdt = 0
L[1] = -L[5] - 2 .* dpdt
return L
end
"""
get the characteristic wave amplitudes L from nonreflect conditions
"""
function get_L_from_nonreflect(uu::Array,pipesystem)
# import variables
gamma = pipesystem.gamma
Δx = pipesystem.Δx
uueverything = UUtoEverything(uu,pipesystem)
u = uueverything.u
ρ = uueverything.ρ
c = uueverything.c
p = uueverything.p
h = uueverything.h
# get λ1,λ2,λ3,λ4,λ5
λ = Array{Float64,1}(UndefInitializer(), 5)
λ[1] = u[end]-c[end]
λ[2] = u[end]
λ[3] = u[end]
λ[4] = u[end]
λ[5] = u[end]+c[end]
# get L1,L2,L3,L4,L5
L = Array{Float64,1}(UndefInitializer(), 5)
L[1]=λ[1].*0
L[2]=λ[2].*(c[end].^2 .* (ρ[end]-ρ[end-1])./Δx-(p[end]-p[end-1])./Δx)
L[3]=λ[3].*0
L[4]=λ[4].*0
L[5]=λ[5].*((p[end]-p[end-1])./Δx+ρ[end].*c[end].*(u[end]-u[end-1])./Δx)
# println("L=",L)
return L
end
"""
get the characteristic wave amplitudes L from constant enthalpy inflow
"""
function get_L_from_inflow_h(uu::Array,pipesystem)
# import variables
gamma = pipesystem.gamma
Δx = pipesystem.Δx
uueverything = UUtoEverything(uu,pipesystem)
u = uueverything.u
ρ = uueverything.ρ
c = uueverything.c
p = uueverything.p
h = uueverything.h
dudt=0
dhdt=0
# get λ1,λ2,λ3,λ4,λ5
λ = Array{Float64,1}(UndefInitializer(), 5)
λ[1] = u[end]-c[end]
λ[2] = u[end]
λ[3] = u[end]
λ[4] = u[end]
λ[5] = u[end]+c[end]
# get L1,L2,L3,L4,L5
L = Array{Float64,1}(UndefInitializer(), 5)
L[1]=λ[1].*((p[2]-p[1])./Δx-ρ[1].*c[1].*(u[2]-u[1])./Δx)
L[5]=L[1] - 2 .* ρ[1] .* c[1] .* dudt
L[2]=0.5.*(gamma-1).*(L[5]+L[1]) +ρ[1].*c[1].*c[1]./h[1].*dhdt
L[3]=0
L[4]=0
return L
end
"""
get the d from L for inflow
"""
function get_d_from_L_inflow(uu::Array,pipesystem,L::Array)
uueverything = UUtoEverything(uu,pipesystem)
u = uueverything.u
ρ = uueverything.ρ
c = uueverything.c
p = uueverything.p
h = uueverything.h
dhdt = 0
# get d1,d2,d3,d4,d5
d = Array{Float64}(UndefInitializer(), 5)
d[1] = 1 ./ (c[1].^2).*(L[2]+0.5.*(L[5]+L[1]))
d[2] = 0.5 .* (L[5]+L[1])
d[3] = 0.5 ./ρ[1]./c[1] .* (L[5]-L[1])
d[4] = 0
d[5] = 0
return d
end
"""
get the d from L for outflow
"""
function get_d_from_L_outflow(uu::Array,pipesystem,L::Array)
# import variables
uueverything = UUtoEverything(uu,pipesystem)
u = uueverything.u
ρ = uueverything.ρ
c = uueverything.c
p = uueverything.p
# get d1,d2,d3,d4,d5
d = Array{Float64}(UndefInitializer(), 5)
d[1] = 1 ./ (c[end].^2).*(L[2]+0.5.*(L[5]+L[1]))
d[2] = 0.5 .* (L[5]+L[1])
d[3] = 0.5 ./ρ[end]./c[end] .* (L[5]-L[1])
d[4] = 0
d[5] = 0
return d
end
"""
still working on this
"""
function set_outlet_nonreflect_boundary!(uu::Array,pipesystem,Δt::Float64)
L = get_L_from_nonreflect(uu,pipesystem)
d = get_d_from_L_outflow(uu,pipesystem,L)
gamma = pipesystem.gamma
uueverything = UUtoEverything(uu,pipesystem)
u = uueverything.u
ρ = uueverything.ρ
uuend=Array{Float64,1}(UndefInitializer(), 3)
uuend[1]=uu[1,end] + (-d[1]).*Δt
uuend[2]=uu[2,end] + (-u[end].*d[1]-ρ[end].*d[3]+0).*Δt
uuend[3]=uu[3,end] + (-0.5 .* u[end].*u[end].*d[1]-d[2]./(gamma-1) - ρ[end].*u[end].*d[3] + 0).*Δt
return uuend
end
"""
still working on this
"""
function set_outlet_costant_p_boundary!(uu::Array,pipesystem,Δt::Float64)
L = get_L_from_constant_p(uu,pipesystem)
d = get_d_from_L_outflow(uu,pipesystem,L)
gamma = pipesystem.gamma
uueverything = UUtoEverything(uu,pipesystem)
u = uueverything.u
ρ = uueverything.ρ
uuend=Array{Float64,1}(UndefInitializer(), 3)
uuend[1]=uu[1,end] + (-d[1]).*Δt
uuend[2]=uu[2,end] + (-u[end].*d[1]-ρ[end].*d[3]+0).*Δt
uuend[3]=uu[3,end] + (-0.5 .* u[end].*u[end].*d[1]-d[2]./(gamma-1) - ρ[end].*u[end].*d[3] + 0).*Δt
return uuend
end
"""
still working on this
"""
function set_outlet_costant_h_boundary!(uu::Array,pipesystem,Δt::Float64)
L = get_L_from_constant_h(uu,pipesystem)
d = get_d_from_L_outflow(uu,pipesystem,L)
gamma = pipesystem.gamma
uueverything = UUtoEverything(uu,pipesystem)
u = uueverything.u
ρ = uueverything.ρ
uuend=Array{Float64,1}(UndefInitializer(), 3)
uuend[1]=uu[1,end] + (-d[1]).*Δt
uuend[2]=uu[2,end] + (-u[end].*d[1]-ρ[end].*d[3]+0).*Δt
uuend[3]=uu[3,end] + (-0.5 .* u[end].*u[end].*d[1]-d[2]./(gamma-1) - ρ[end].*u[end].*d[3] + 0).*Δt
return uuend
end
"""
still working on this
"""
function set_inlet_constant_h!(uu::Array,pipesystem,Δt::Float64)
L = get_L_from_inflow_h(uu,pipesystem)
d = get_d_from_L_inflow(uu,pipesystem,L)
gamma = pipesystem.gamma
uueverything = UUtoEverything(uu,pipesystem)
u = uueverything.u
ρ = uueverything.ρ
uubegin=Array{Float64,1}(UndefInitializer(), 3)
uubegin[1]=uu[1,1] + (-d[1]).*Δt
uubegin[2]=uu[2,1] + (-u[1].*d[1]-ρ[1].*d[3]+0).*Δt
uubegin[3]=uu[3,1] + (-0.5 .* u[1].*u[1].*d[1]-d[2]./(gamma-1) - ρ[1].*u[1].*d[3] + 0).*Δt
return uubegin
end
function set_inlet_interface!(uu1::Array,uu2::Array,pipesystem1,pipesystem2,Δt::Float64)
L = get_L_from_interface(uu1,uu2,pipesystem1,pipesystem2)
d = get_d_from_L_inflow(uu2,pipesystem2,L)
gamma = pipesystem2.gamma
uueverything = UUtoEverything(uu2,pipesystem2)
u = uueverything.u
ρ = uueverything.ρ
uubegin=Array{Float64,1}(UndefInitializer(), 3)
uubegin[1]=uu2[1,1] + (-d[1]).*Δt
uubegin[2]=uu2[2,1] + (-u[1].*d[1]-ρ[1].*d[3]+0).*Δt
uubegin[3]=uu2[3,1] + (-0.5 .* u[1].*u[1].*d[1]-d[2]./(gamma-1) - ρ[1].*u[1].*d[3] + 0).*Δt
return uubegin
end
function set_outlet_interface!(uu1::Array,uu2::Array,pipesystem1,pipesystem2,Δt::Float64)
L = get_L_from_interface(uu1,uu2,pipesystem1,pipesystem2)
d = get_d_from_L_outflow(uu1,pipesystem1,L)
gamma = pipesystem1.gamma
uueverything = UUtoEverything(uu1,pipesystem1)
u = uueverything.u
ρ = uueverything.ρ
uuend=Array{Float64,1}(UndefInitializer(), 3)
uuend[1]=uu1[1,end] + (-d[1]).*Δt
uuend[2]=uu1[2,end] + (-u[end].*d[1]-ρ[end].*d[3]+0).*Δt
uuend[3]=uu1[3,end] + (-0.5 .* u[end].*u[end].*d[1]-d[2]./(gamma-1) - ρ[end].*u[end].*d[3] + 0).*Δt
return uuend
end
# """
# still working on this
# """
# """
#
# function set_h_boundary!(uu::Array,everythinginitial)
#
# gamma = everythinginitial.gamma
# h = everythinginitial.h
#
#
# uueverything = UUtoEverything(uu,gamma)
#
# u = uueverything.u
# ρ = uueverything.ρ
#
# ϵ = h./gamma
# Ehat = ρ.*ϵ + 0.5.*ρ.*u.*u
#
# uunew = Array{Float64,2}(UndefInitializer(), 3,size(uu)[2])
#
# uunew[1,:]=uu[1,:]
# uunew[2,:]=uu[2,:]
# uunew[3,:]=Ehat
#
# return uunew[:,1]
# end
# """
#
#
# """
# function setuuboundary!(uu::Array,everythinginitial::UUtoEverything)
#
# gamma = everythinginitial.gamma
# h = everythinginitial.h
#
#
# uueverything = UUtoEverything(uu,gamma)
#
# u = uueverything.u
# ρ = uueverything.ρ
#
# ϵ = h./gamma
# Ehat = ρ.*ϵ + 0.5.*ρ.*u.*u
#
# uunew = Array{Float64,2}(UndefInitializer(), 3,size(uu)[2])
#
# uunew[1,:]=everythinginitial.ρ
# uunew[2,:]=everythinginitial.m
# uunew[3,:]=Ehat
#
# return uunew[:,1]
# end
# """
end
|
{"hexsha": "b46181cf459c6f9f857da268035284e950ad5343", "size": 11395, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/BoundaryCondition.jl", "max_stars_repo_name": "liyuxuan48/thermo-network", "max_stars_repo_head_hexsha": "92bbcc909a74232e8caa18c3f99d5f96b746de12", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-03-26T23:40:19.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-07T14:09:27.000Z", "max_issues_repo_path": "src/BoundaryCondition.jl", "max_issues_repo_name": "liyuxuan48/thermo-network", "max_issues_repo_head_hexsha": "92bbcc909a74232e8caa18c3f99d5f96b746de12", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/BoundaryCondition.jl", "max_forks_repo_name": "liyuxuan48/thermo-network", "max_forks_repo_head_hexsha": "92bbcc909a74232e8caa18c3f99d5f96b746de12", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.6577669903, "max_line_length": 111, "alphanum_fraction": 0.5063624397, "num_tokens": 4132}
|
#!/usr/bin/env python3
# Author: Octavio Castillo Reyes
# Contact: octavio.castillo@bsc.es
"""Define functions a 3D CSEM/MT solver using high-order vector finite element method (HEFEM)."""
# ---------------------------------------------------------------
# Load python modules
# ---------------------------------------------------------------
import numpy as np
from petsc4py import PETSc
from mpi4py import MPI
# ---------------------------------------------------------------
# Load petgem modules (BSC)
# ---------------------------------------------------------------
from .common import Print, Timers, measure_all_class_methods
from .parallel import readPetscMatrix, readPetscVector, createParallelMatrix, createParallelVector
from .parallel import MPIEnvironment
from .parallel import writePetscVector
from .hvfem import computeJacobian, computeElementOrientation, computeElementalMatrices, computeSourceVectorRotation
from .hvfem import tetrahedronXYZToXiEtaZeta, computeBasisFunctions
from .hvfem import getNormalVector, get2DJacobDet, compute2DGaussPoints
from .hvfem import transform2Dto3DInReferenceElement, getRealFromReference, computeBasisFunctionsReferenceElement
from .hvfem import getFaceByLocalNodes, getNeumannBCface
from .mt1d import eval_MT1D
# ###############################################################
# ################ CLASSES DEFINITION ##################
# ###############################################################
@measure_all_class_methods
class Solver():
"""Class for solver."""
def __init__(self):
"""Initialization of a solver class."""
return
def setup(self, inputSetup):
"""Setup of a solver class.
:param object inputSetup: user input setup.
"""
# ---------------------------------------------------------------
# Initialization
# ---------------------------------------------------------------
# Start timer
Timers()["Setup"].start()
# Parameters shortcut (for code legibility)
model = inputSetup.model
output = inputSetup.output
out_dir = output.get('directory_scratch')
Print.master(' Importing files')
# ---------------------------------------------------------------
# Obtain the MPI environment
# ---------------------------------------------------------------
parEnv = MPIEnvironment()
# ---------------------------------------------------------------
# Import files
# ---------------------------------------------------------------
# Read nodes coordinates
input_file = out_dir + '/nodes.dat'
self.nodes = readPetscMatrix(input_file, communicator=None)
# elements-nodes connectivity
input_file = out_dir + '/meshConnectivity.dat'
self.elemsN = readPetscMatrix(input_file, communicator=None)
# elements-edges connectivity
input_file = out_dir + '/edges.dat'
self.elemsE = readPetscMatrix(input_file, communicator=None)
# edges-nodes connectivity
input_file = out_dir + '/edgesNodes.dat'
self.edgesNodes = readPetscMatrix(input_file, communicator=None)
# elements-faces connectivity
input_file = out_dir + '/faces.dat'
self.elemsF = readPetscMatrix(input_file, communicator=None)
# faces-edges connectivity
input_file = out_dir + '/facesEdges.dat'
self.facesEdges = readPetscMatrix(input_file, communicator=None)
# Dofs connectivity
input_file = out_dir + '/dofs.dat'
self.dofs = readPetscMatrix(input_file, communicator=None)
# Conductivity model
input_file = out_dir + '/conductivityModel.dat'
self.sigmaModel = readPetscMatrix(input_file, communicator=None)
# # Receivers
# input_file = out_dir + '/receivers.dat'
# self.receivers = readPetscMatrix(input_file, communicator=None)
# Sparsity pattern (NNZ) for matrix allocation
input_file = out_dir + '/nnz.dat'
tmp = readPetscVector(input_file, communicator=None)
self.nnz = (tmp.getArray().real).astype(PETSc.IntType)
        # Number of dofs (the length of nnz corresponds to the total number of dofs)
self.total_num_dofs = tmp.getSizes()[1] # Get global sizes
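        # A minimal sketch (comments only; toy sizes are assumptions, not petgem
        # data) of how this nnz vector drives PETSc preallocation: for a 4-dof
        # system where row i couples with nnz[i] columns,
        #
        #     nnz = np.array([2, 3, 3, 2], dtype=PETSc.IntType)
        #     A = PETSc.Mat().createAIJ([4, 4], nnz=nnz)
        #
        # Accurate per-row counts avoid costly reallocations during setValues().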
# Depending on modeling mode, load data for source, boundary faces or boundary dofs
if (model.get('mode') == 'csem'):
# Boundary dofs for csem mode
input_file = out_dir + '/boundaries.dat'
self.boundaries = readPetscVector(input_file, communicator=None)
# Load source data (master task)
if parEnv.rank == 0:
# Read source file
input_file = out_dir + '/source.dat'
self.source_data = readPetscVector(input_file, communicator=PETSc.COMM_SELF)
elif (model.get('mode') == 'mt'):
# Boundary faces for mt mode
input_file = out_dir + '/boundaryElements.dat'
self.boundaries = readPetscMatrix(input_file, communicator=None)
# Stop timer
Timers()["Setup"].stop()
return
def assembly(self, inputSetup):
"""Assembly a linear system for 3D CSEM/MT based on HEFEM.
:param object inputSetup: user input setup.
"""
# ---------------------------------------------------------------
# Initialization
# ---------------------------------------------------------------
# Start timer
Timers()["Assembly"].start()
# Parameters shortcut (for code legibility)
model = inputSetup.model
run = inputSetup.run
Print.master(' Assembling linear system')
# ---------------------------------------------------------------
# Obtain the MPI environment
# ---------------------------------------------------------------
parEnv = MPIEnvironment()
# ---------------------------------------------------------------
# Define constants
# ---------------------------------------------------------------
num_nodes_per_element = 4
num_edges_per_element = 6
num_faces_per_element = 4
#num_nodes_per_face = 3
num_edges_per_face = 3
num_nodes_per_edge = 2
num_dimensions = 3
basis_order = run.get('nord')
num_polarizations = run.get('num_polarizations')
        num_dof_in_element = int(basis_order*(basis_order+2)*(basis_order+3)/2)
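        # The count above, p*(p+2)*(p+3)/2, matches the dimension of a
        # first-kind Nedelec space on a tetrahedron; the first few orders:
        #   p = 1 ->  6 dofs (one per edge)
        #   p = 2 -> 20 dofs
        #   p = 3 -> 45 dofs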
if (model.get('mode') == 'csem'):
mode = 'csem'
data_model = model.get(mode) # Get data model
frequency = data_model.get('source').get('frequency')
elif (model.get('mode') == 'mt'):
mode = 'mt'
data_model = model.get(mode) # Get data model
frequency = data_model.get('frequency')
omega = frequency*2.*np.pi
mu = 4.*np.pi*1e-7
        Const = 1j*omega*mu  # i*omega*mu; np.sqrt(-1. + 0.j) == 1j
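        # For example, at f = 1 Hz: Const = 1j * 2*pi * 4*pi*1e-7 ~ 7.8957e-06j,
        # i.e. the i*omega*mu factor of the E-field curl-curl equation.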
# ---------------------------------------------------------------
# Get global ranges
# ---------------------------------------------------------------
# Ranges over elements
Istart_elemsE, Iend_elemsE = self.elemsE.getOwnershipRange()
# ---------------------------------------------------------------
        # Assemble linear system (Left-Hand Side, LHS)
# ---------------------------------------------------------------
# Left-hand side
self.A = createParallelMatrix(self.total_num_dofs, self.total_num_dofs, self.nnz, run.get('cuda'), communicator=None)
# Compute contributions for all local elements
for i in np.arange(Istart_elemsE, Iend_elemsE):
# Get indexes of nodes for i
nodesEle = (self.elemsN.getRow(i)[1].real).astype(PETSc.IntType)
# Get coordinates of i
coordEle = self.nodes.getRow(i)[1].real
coordEle = np.reshape(coordEle, (num_nodes_per_element, num_dimensions))
# Get edges indexes for faces in i
edgesFace = self.facesEdges.getRow(i)[1].real
edgesFace = np.reshape(edgesFace, (num_faces_per_element, num_edges_per_face))
# Get indexes of edges for i
edgesEle = (self.elemsE.getRow(i)[1].real).astype(PETSc.IntType)
# Get node indexes for edges in i
edgesNodesEle = self.edgesNodes.getRow(i)[1].real
edgesNodesEle = np.reshape(edgesNodesEle, (num_edges_per_element, num_nodes_per_edge))
# Get conductivity values for i (horizontal and vertical conductivity)
sigmaEle = self.sigmaModel.getRow(i)[1].real
# Compute jacobian for i
jacobian, invjacobian = computeJacobian(coordEle)
# Compute global orientation for i
edge_orientation, face_orientation = computeElementOrientation(edgesEle,nodesEle,edgesNodesEle,edgesFace)
# Compute elemental matrices (stiffness and mass matrices)
M, K = computeElementalMatrices(edge_orientation, face_orientation, jacobian, invjacobian, basis_order, sigmaEle)
# Compute elemental matrix
Ae = K - Const*M
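            # This is the per-element curl-curl weak form: K holds the stiffness
            # entries (curl v_i, curl v_j) and M the sigma-weighted mass term
            # (sigma v_i, v_j), so Ae = K - i*omega*mu*M.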
Ae = Ae.flatten()
# Get dofs indexes for i
dofsEle = (self.dofs.getRow(i)[1].real).astype(PETSc.IntType)
# Add local contributions to global matrix
self.A.setValues(dofsEle, dofsEle, Ae, addv=PETSc.InsertMode.ADD_VALUES)
# Start global LHS assembly
self.A.assemblyBegin()
# End global LHS assembly
self.A.assemblyEnd()
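        # ADD_VALUES accumulates shared contributions: a dof on an edge or face
        # shared by several tetrahedra receives the sum of all elemental
        # entries. A toy sketch (comments only, standalone PETSc, not petgem):
        #
        #     A = PETSc.Mat().createAIJ([2, 2], nnz=2)
        #     A.setValues([0, 1], [0, 1], [1., 0., 0., 1.], addv=PETSc.InsertMode.ADD_VALUES)
        #     A.setValues([0, 1], [0, 1], [1., 0., 0., 1.], addv=PETSc.InsertMode.ADD_VALUES)
        #     A.assemblyBegin(); A.assemblyEnd()   # diagonal is now [2., 2.]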
# ---------------------------------------------------------------
        # Assemble linear system (Right-Hand Side, RHS)
# ---------------------------------------------------------------
self.b = []
self.x = []
for i in np.arange(num_polarizations):
self.b.append(createParallelVector(self.total_num_dofs, run.get('cuda'), communicator=None))
self.x.append(createParallelVector(self.total_num_dofs, run.get('cuda'), communicator=None))
# Assembly RHS for csem mode
if (mode == 'csem'):
# Get source parameters
            position = np.asarray(data_model.get('source').get('position'), dtype=float)
azimuth = data_model.get('source').get('azimuth')
dip = data_model.get('source').get('dip')
current = data_model.get('source').get('current')
length = data_model.get('source').get('length')
# Compute matrices for source rotation
sourceRotationVector = computeSourceVectorRotation(azimuth, dip)
# Total electric field formulation. Set dipole definition
# x-directed dipole
            Dx = np.array([current*length*1., 0., 0.], dtype=float)
            # y-directed dipole
            Dy = np.array([0., current*length*1., 0.], dtype=float)
            # z-directed dipole
            Dz = np.array([0., 0., current*length*1.], dtype=float)
# Rotate source and setup electric field
field = sourceRotationVector[0]*Dx + sourceRotationVector[1]*Dy + sourceRotationVector[2]*Dz
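            # Presumably (this depends on the convention inside
            # computeSourceVectorRotation), azimuth = dip = 0 yields a rotation
            # vector of (1, 0, 0), so field reduces to the x-directed dipole
            # moment [current*length, 0, 0].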
# Insert source (only master)
if parEnv.rank == 0:
# Get source data
source_data = self.source_data.getArray().real
# Get indexes of nodes for srcElem
                nodesEle = source_data[0:4].astype(int)
                # Get nodes coordinates for srcElem
                coordEle = source_data[4:16]
                coordEle = np.reshape(coordEle, (num_nodes_per_element, num_dimensions))
                # Get faces indexes for srcElem
                #facesEle = source_data[16:20].astype(int)
                # Get edges indexes for faces in srcElem
                edgesFace = source_data[20:32].astype(int)
                edgesFace = np.reshape(edgesFace, (num_faces_per_element, num_edges_per_face))
                # Get indexes of edges for srcElem
                edgesEle = source_data[32:38].astype(int)
                # Get node indexes for edges in srcElem
                edgesNodesEle = source_data[38:50].astype(int)
                edgesNodesEle = np.reshape(edgesNodesEle, (num_edges_per_element, num_nodes_per_edge))
                # Get dofs for srcElem
                dofsSource = source_data[50::].astype(PETSc.IntType)
# Compute jacobian for srcElem
jacobian, invjacobian = computeJacobian(coordEle)
# Compute global orientation for srcElem
edge_orientation, face_orientation = computeElementOrientation(edgesEle,nodesEle,edgesNodesEle,edgesFace)
# Transform xyz source position to XiEtaZeta coordinates (reference tetrahedral element)
XiEtaZeta = tetrahedronXYZToXiEtaZeta(coordEle, position)
# Compute basis for srcElem
basis, _ = computeBasisFunctions(edge_orientation, face_orientation, jacobian, invjacobian, basis_order, XiEtaZeta)
# Compute integral
rhs_contribution = np.matmul(field, basis[:,:,0])
# Multiplication by constant value
rhs_contribution = rhs_contribution * Const
# Add local contributions to global matrix
self.b[0].setValues(dofsSource, rhs_contribution, addv=PETSc.InsertMode.ADD_VALUES)
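                # This implements the weak form of a point dipole: the source
                # integral of J . v with J = I*L*delta(x - x_s)*d collapses to
                # evaluating each basis function v at the source position, so no
                # quadrature is needed for the CSEM right-hand side.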
elif (mode == 'mt'):
# Ranges over boundary faces or boundary elements
Istart_boundaryF, Iend_boundaryF = self.boundaries.getOwnershipRange()
# Compute the two-dimensional gauss points.
            gauss_order = 2*basis_order
gaussPoints2D, Wi = compute2DGaussPoints(gauss_order)
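            # A rule of order 2*p is requested so that products of two order-p
            # basis traces are integrated exactly on each boundary face; e.g.
            # basis_order = 2 yields a 4th-order triangle rule.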
ngaussP = gaussPoints2D.shape[0]
# Allocate array for interpolation points
num_local_boundaries = self.boundaries.getLocalSize()
            interpolationPoints = np.zeros([num_local_boundaries[0], ngaussP], dtype=float)
centroid_z_face4 = []
sigma_face4 = []
            indx_local_face = 0  # Initialize index of local boundary face
# Compute local contributions for each boundary face
for i in np.arange(Istart_boundaryF, Iend_boundaryF):
boundary_data = self.boundaries.getRow(i)[1].real
# Get face plane for boundary element
                faceType = boundary_data[50].astype(int)
# Get nodes coordinates for boundary element
coordEle = boundary_data[4:16]
coordEle = np.reshape(coordEle, (num_nodes_per_element, num_dimensions))
# Get faces indexes for boundary element
                facesEle = boundary_data[16:20].astype(int)
                # Get global face index
                faceGlobalIndex = boundary_data[51].astype(int)
                # Get sigma for element with boundary face
                sigmaBoundaryElement = boundary_data[52].astype(float)
# Get local index of boundary face
faceLocalIndex = np.where(facesEle==faceGlobalIndex)[0][0]
for j in np.arange(ngaussP):
# Transform 2D gauss points to 3D in the reference element.
gaussPoint3D = transform2Dto3DInReferenceElement(gaussPoints2D[j,:], faceLocalIndex)
# This is the real point where the excitation is evaluated.
realPoint = getRealFromReference(gaussPoint3D, coordEle)
# Save z-component of gauss point
interpolationPoints[indx_local_face, j] = realPoint[2]
# Save centroid only for face 3
if faceType == 3:
nodesInFace = getFaceByLocalNodes(faceLocalIndex)
centroid_face4 = np.sum(coordEle[nodesInFace], axis=0)/3.
centroid_z_face4.append(centroid_face4[2])
sigma_face4.append(sigmaBoundaryElement)
# Increment index of local boundary face
                indx_local_face += 1
# List to numpy arrays
            centroid_z_face4 = np.asarray(centroid_z_face4, dtype=float)
            sigma_face4 = np.asarray(sigma_face4, dtype=float)
# Compute the max/min z-coordinate in the domain
coord_z = []
for i in np.arange(Istart_elemsE, Iend_elemsE):
# Get indexes of nodes for i
nodesEle = (self.elemsN.getRow(i)[1].real).astype(PETSc.IntType)
# Get coordinates of i
coordEle = self.nodes.getRow(i)[1].real
coordEle = np.reshape(coordEle, (num_nodes_per_element, num_dimensions))
coord_z.append(coordEle[:,2])
# Get local max/min
            coord_z = np.asarray(coord_z, dtype=float)
coord_z = coord_z.flatten()
z_max_local = np.max(coord_z)
z_min_local = np.min(coord_z)
# Get global max/min
za = parEnv.comm.allreduce(z_max_local, op=MPI.MAX)
zb = parEnv.comm.allreduce(z_min_local, op=MPI.MIN)
            u = eval_MT1D(za, zb, 1.0, 0.0, sigma_face4, centroid_z_face4,
                          omega, mu, int(1e6), 1, interpolationPoints)
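            # eval_MT1D is assumed to return the 1D MT field sampled at the
            # z-coordinate of each boundary Gauss point. As a plausibility check
            # (standard MT relation, not derived from eval_MT1D itself): in a
            # half-space of sigma = 0.01 S/m at f = 1 Hz the field decays with
            # skin depth delta = sqrt(2/(omega*mu*sigma)) ~ 5.0e3 m.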
# For each polarization mode
for i in np.arange(num_polarizations):
# Get polarization mode
tmp = data_model.get('polarization')
if (tmp[i] == 'x'):
                    polarization_mode = 1
elif (tmp[i] == 'y'):
                    polarization_mode = 2
else:
Print.master(' MT polarization mode not supported.')
exit(-1)
# Compute local contributions for each boundary face
                indx_local_face = 0  # Initialize index of local boundary face
for j in np.arange(Istart_boundaryF, Iend_boundaryF):
boundary_data = self.boundaries.getRow(j)[1].real
# Get indexes of nodes for boundary element
                    nodesEle = boundary_data[0:4].astype(int)
                    # Get nodes coordinates for boundary element
                    coordEle = boundary_data[4:16]
                    coordEle = np.reshape(coordEle, (num_nodes_per_element, num_dimensions))
                    # Get faces indexes for boundary element
                    facesEle = boundary_data[16:20].astype(int)
                    # Get edges indexes for faces in boundary element
                    edgesFace = boundary_data[20:32].astype(int)
                    edgesFace = np.reshape(edgesFace, (num_faces_per_element, num_edges_per_face))
                    # Get indexes of edges for boundary element
                    edgesEle = boundary_data[32:38].astype(int)
                    # Get node indexes for edges in boundary element
                    edgesNodesEle = boundary_data[38:50].astype(int)
                    edgesNodesEle = np.reshape(edgesNodesEle, (num_edges_per_element, num_nodes_per_edge))
                    # Get face plane for boundary element
                    faceType = boundary_data[50].astype(int)
                    # Get global face index
                    faceGlobalIndex = boundary_data[51].astype(int)
                    # Get sigma for element with boundary face
                    #sigmaBoundaryElement = boundary_data[52].astype(float)
                    # Get dofs for boundary element
                    dofsBoundaryElement = boundary_data[53::].astype(PETSc.IntType)
# Compute jacobian for boundary element
_, invjacobian = computeJacobian(coordEle)
# Compute global orientation for boundary element
edge_orientation, face_orientation = computeElementOrientation(edgesEle,nodesEle,edgesNodesEle,edgesFace)
# Get local index of boundary face
faceLocalIndex = np.where(facesEle==faceGlobalIndex)[0][0]
# Compute normal
normalVector = getNormalVector(faceLocalIndex, invjacobian)
# Compute normal unit vector
normalUnitVector = normalVector/np.linalg.norm(normalVector)
# Compute 2D Jacobian
detJacob2D = get2DJacobDet(coordEle, faceLocalIndex)
# Allocate array for local contribution
                    rhs_contribution = np.zeros(num_dof_in_element, dtype=complex)
# Get excitation for boundary face
ex, ey, ez = getNeumannBCface(faceType, polarization_mode, u)
for k in np.arange(ngaussP):
# Transform 2D gauss points to 3D in the reference element.
gaussPoint3D = transform2Dto3DInReferenceElement(gaussPoints2D[k,:], faceLocalIndex)
# 3D basis functions evaluated on reference element
allBasesEvaluated = computeBasisFunctionsReferenceElement(edge_orientation, face_orientation, basis_order, gaussPoint3D)
# Same mapping as in mass matrix.
allBasesReal = np.matmul(invjacobian,allBasesEvaluated[:,:,0])
# Add excitation field
ex_g = ex[indx_local_face, k]
ey_g = ey[indx_local_face, k]
ez_g = ez[indx_local_face, k]
                        excitation_value = np.array([ex_g, ey_g, ez_g], dtype=complex)
# Allocate
                        integrandTangential = np.zeros(num_dof_in_element, dtype=complex)
for l in np.arange(num_dof_in_element):
iBaseTangential = np.cross(np.cross(normalUnitVector, allBasesReal[:,l]), normalUnitVector)
integrandTangential[l] = np.dot(iBaseTangential, excitation_value)
rhs_contribution += Wi[k]*integrandTangential*detJacob2D
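                        # n x (v x n) = v - (v.n) n strips the normal component,
                        # so only the tangential trace of each basis function is
                        # tested against the excitation; e.g. with n = (0, 0, 1)
                        # and v = (1, 2, 3) the projection is (1, 2, 0).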
# Multiplication by constant value
rhs_contribution = rhs_contribution * Const
# Add local contributions to global matrix
self.b[i].setValues(dofsBoundaryElement, rhs_contribution, addv=PETSc.InsertMode.ADD_VALUES)
# Increment index of local boundary face
                    indx_local_face += 1
# Global assembly for each RHS
for i in np.arange(num_polarizations):
# Start global RHS assembly
self.b[i].assemblyBegin()
# End global RHS assembly
self.b[i].assemblyEnd()
# Stop timer
Timers()["Assembly"].stop()
return
def run(self, inputSetup):
"""Run solver for linear systems generated by the HEFEM for a 3D CSEM/MT problem.
:param object inputSetup: user input setup.
"""
# ---------------------------------------------------------------
# Initialization
# ---------------------------------------------------------------
# Parameters shortcut (for code legibility)
model = inputSetup.model
run = inputSetup.run
output = inputSetup.output
out_dir = output.get('directory_scratch')
# ---------------------------------------------------------------
# Define constants
# ---------------------------------------------------------------
num_polarizations = run.get('num_polarizations')
if (model.get('mode') == 'csem'):
mode = 'csem'
elif (model.get('mode') == 'mt'):
mode = 'mt'
Print.master(' Solving linear system')
if (mode == 'csem'):
# Start timer
Timers()["SetBoundaries"].start()
# ---------------------------------------------------------------
# Set dirichlet boundary conditions
# ---------------------------------------------------------------
# Ranges over boundaries
Istart_boundaries, Iend_boundaries = self.boundaries.getOwnershipRange()
# Boundaries for LHS
self.A.zeroRowsColumns(np.real(self.boundaries).astype(PETSc.IntType))
# Boundaries for RHS
numLocalBoundaries = Iend_boundaries - Istart_boundaries
            self.b[0].setValues(np.real(self.boundaries).astype(PETSc.IntType),
                                np.zeros(numLocalBoundaries, dtype=complex),
                                addv=PETSc.InsertMode.INSERT_VALUES)
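            # zeroRowsColumns zeroes the boundary rows and columns and, by
            # default, places 1.0 on the diagonal, so together with the zeroed
            # RHS entries this enforces homogeneous Dirichlet conditions while
            # keeping the system symmetric.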
# Start global system assembly
self.A.assemblyBegin()
self.b[0].assemblyBegin()
# End global system assembly
self.A.assemblyEnd()
self.b[0].assemblyEnd()
# Stop timer
Timers()["SetBoundaries"].stop()
# ---------------------------------------------------------------
# Solve system
# ---------------------------------------------------------------
Timers()["Solver"].start()
for i in np.arange(num_polarizations):
# Create KSP: linear equation solver
ksp = PETSc.KSP().create(comm=PETSc.COMM_WORLD)
ksp.setOperators(self.A)
ksp.setFromOptions()
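            # setFromOptions defers the choice of method to the PETSc options
            # database, so the solver can be tuned at runtime; illustrative
            # flags (any valid PETSc options work), passed on the command line:
            #     -ksp_type gmres -pc_type sor -ksp_rtol 1e-8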
ksp.solve(self.b[i], self.x[i])
ksp.destroy()
# Write vector solution
out_path = out_dir + '/x' + str(i) + '.dat'
writePetscVector(out_path, self.x[i], communicator=None)
Timers()["Solver"].stop()
return
def unitary_test():
"""Unitary test for solver.py script."""
if __name__ == '__main__':
unitary_test()
|
{"hexsha": "892b375da211ecea69f5d5889c81a48be8dc6344", "size": 26261, "ext": "py", "lang": "Python", "max_stars_repo_path": "petgem/solver.py", "max_stars_repo_name": "MTA09/petgem", "max_stars_repo_head_hexsha": "eb9ad46b3c88d3fd13fb0270eb00d2a147dbb798", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2018-11-08T19:04:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T22:49:54.000Z", "max_issues_repo_path": "petgem/solver.py", "max_issues_repo_name": "MTA09/petgem", "max_issues_repo_head_hexsha": "eb9ad46b3c88d3fd13fb0270eb00d2a147dbb798", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-08-17T08:18:16.000Z", "max_issues_repo_issues_event_max_datetime": "2019-08-17T11:46:22.000Z", "max_forks_repo_path": "petgem/solver.py", "max_forks_repo_name": "MTA09/petgem", "max_forks_repo_head_hexsha": "eb9ad46b3c88d3fd13fb0270eb00d2a147dbb798", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2018-07-18T14:59:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-15T08:58:13.000Z", "avg_line_length": 43.2635914333, "max_line_length": 144, "alphanum_fraction": 0.5432009444, "include": true, "reason": "import numpy", "num_tokens": 5334}
|