% % begin in Jan. 19th, 2010
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Energy Derivatives in Quantum Chemistry}
%
In quantum chemistry, the evaluation of energy derivatives with respect to changes in the system is of central importance: potential energy surface studies, vibrational analyses, and many related applications all rely on such techniques. In this chapter we therefore collect these techniques systematically. The content is drawn mainly from the book by Yukio Yamaguchi, John D. Goddard, Yoshihiro Osamura and Henry Schaefer\cite{New_Dimension_for_Derivatives_Calculation}. Many people have made great contributions to this area, especially Pulay\cite{Pulay1, Pulay2, Pulay3, Pulay4, Pulay5, Pulay6, pulay:5043}, among others\cite{bishop:3515, RevModPhys.45.22, jorgensen:334, king:5645, meyer:2109}. The book by Schaefer and coworkers, however, generalizes nearly all of these views and ideas, and the reader is referred to it for further detail on each topic.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
In quantum chemistry, the core idea for approximating a general wave function is the ``single electron approximation'' (in DFT, the mapping to a non-interacting system). This idea introduces the concept of the molecular orbital (MO), from which the wave function for the whole system is built. The relation between the MOs and the general wave function is depicted in Figure~\ref{derivatives_fig:1}.
\begin{figure}[bhtp]
\centering
\includegraphics[scale=0.7]{mo_in_energy_drv.eps}
\caption{MO general description}
\label{derivatives_fig:1}
\end{figure}
As illustrated in Figure~\ref{derivatives_fig:1}, an MO is composed of two parts: the basis set functions (in practice almost always of GTO or STO form; other forms such as plane waves are rarely used here) and the MO coefficients multiplying each basis function. Hence, all energy derivatives will ultimately be expressed in terms of derivatives of the basis function integrals and derivatives of the MO coefficients. In this section we analyze this decomposition.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Derivative expressions for the integrals and the MO coefficients}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{General Perturbation method}
%
The change to the system (a geometrical displacement or an applied electric field) can be treated as a small perturbation of the Hamiltonian. Taking a geometrical change as an example, the Hamiltonian can be written as a Taylor expansion:
\begin{equation} \label{General_Perturbation_method_eq:1} \hat{H} = \hat{H}_{0}+\lambda_{a}\hat{H}_{a}^{'}+\lambda_{b}\hat{H}_{b}^{'} + \frac{1}{2}\lambda_{a}^{2}\hat{H}_{a}^{''} + \frac{1}{2}\lambda_{b}^{2}\hat{H}_{b}^{''} + \lambda_{a}\lambda_{b}\hat{H}_{ab}^{''} + \cdots \end{equation}
Here the $\lambda$'s characterize the order of the perturbation (similar to the $\lambda$ in (\ref{PTIQMeq:1}), but extended to several perturbations), $\hat{H}^{'}$ and $\hat{H}^{''}$ are the perturbed Hamiltonian operators, and $a$ and $b$ label different directions of the geometrical perturbation.
Then, according to perturbation theory, the integrals as well as the MO coefficients can all be expressed as perturbation series:
\begin{equation} \label{General_Perturbation_method_eq:2} S_{\mu\nu}^{perturbed} = S_{\mu\nu}+ \lambda_{a}\frac{\partial S_{\mu\nu}}{\partial a} + \lambda_{b}\frac{\partial S_{\mu\nu}}{\partial b} + \frac{1}{2}\lambda_{a}^{2} \frac{\partial^{2} S_{\mu\nu}}{\partial a^{2}} + \frac{1}{2}\lambda_{b}^{2} \frac{\partial^{2} S_{\mu\nu}}{\partial b^{2}} + \lambda_{a}\lambda_{b} \frac{\partial^{2} S_{\mu\nu}}{\partial a\partial b} + \cdots \end{equation}
This holds for the overlap integrals in the AO basis. For the MO coefficients, the expression has the same form:
\begin{equation} \label{General_Perturbation_method_eq:3} C_{i}^{perturbed} = C_{i}+ \lambda_{a}\frac{\partial C_{i}}{\partial a} + \lambda_{b}\frac{\partial C_{i}}{\partial b} + \frac{1}{2}\lambda_{a}^{2} \frac{\partial^{2} C_{i}}{\partial a^{2}} + \frac{1}{2}\lambda_{b}^{2} \frac{\partial^{2} C_{i}}{\partial b^{2}} + \lambda_{a}\lambda_{b} \frac{\partial^{2} C_{i}}{\partial a\partial b} + \cdots \end{equation}
This is the starting point for the further study.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../../main"
%%% End:
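As a toy numerical check of the expansion coefficients in (\ref{General_Perturbation_method_eq:2}) (an illustration added here, not taken from the cited book): for two normalized $s$-type Gaussians with a common exponent $\alpha$ separated by a distance $R$, the overlap is $S(R) = e^{-\alpha R^{2}/2}$ in closed form, so central finite differences recover the first- and second-order coefficients of the perturbation series.

```python
import math

def overlap(R, alpha=1.0):
    # Overlap of two normalized s-type Gaussians with the same exponent
    # alpha, whose centers are separated by R: S(R) = exp(-alpha*R**2/2).
    return math.exp(-alpha * R**2 / 2.0)

def fd_first(f, x, h=1e-5):
    # Central finite difference for the first derivative.
    return (f(x + h) - f(x - h)) / (2.0 * h)

def fd_second(f, x, h=1e-4):
    # Central finite difference for the second derivative.
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

R0 = 1.5
dS = fd_first(overlap, R0)    # coefficient of lambda_a in the series
d2S = fd_second(overlap, R0)  # coefficient of (1/2) * lambda_a**2

# Analytic derivatives of S(R) = exp(-R**2/2) for comparison:
dS_exact = -R0 * overlap(R0)
d2S_exact = (R0**2 - 1.0) * overlap(R0)
```

For a single perturbation direction $a$, truncating the series at second order then gives $S(R_0+\lambda_a) \approx S(R_0) + \lambda_a S' + \tfrac{1}{2}\lambda_a^2 S''$ with the coefficients computed above.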
\documentclass[]{BasiliskReportMemo} \usepackage{AVS} \newcommand{\submiterInstitute}{Autonomous Vehicle Simulation (AVS) Laboratory,\\ University of Colorado} \newcommand{\ModuleName}{test\textunderscore sunlineSuKF} \newcommand{\subject}{Sunline Switch-uKF Module and Test} \newcommand{\status}{Initial document} \newcommand{\preparer}{T. Teil} \newcommand{\summary}{This module implements and tests a Switch Unscented Kalman Filter in order to estimate the sunline direction.} \begin{document} \makeCover % % enter the revision documentation here % to add more lines, copy the table entry and the \hline, and paste after the current entry. % \pagestyle{empty} {\renewcommand{\arraystretch}{2} \noindent \begin{longtable}{|p{0.5in}|p{4.5in}|p{1.14in}|} \hline {\bfseries Rev}: & {\bfseries Change Description} & {\bfseries By} \\ \hline Draft & Initial Revision & T. Teil \\ \hline \end{longtable} } \newpage \setcounter{page}{1} \pagestyle{fancy} \tableofcontents ~\\ \hrule ~\\ %\begin{figure}[htb] % \centerline{ % \includegraphics[]{Figures/Fig1} % } % \caption{Sample Figure Inclusion.} % \label{fig:Fig1} %\end{figure} \section{Introduction} The Switch Unscented Kalman filter (SuKF) in the AVS Basilisk simulation is a sequential filter implemented to give the best estimate of the desired states. In this method we estimate the sun heading as well as the spacecraft rotation rate along the observable axes. The SuKF reads in the message written by the coarse sun sensor, and writes a message containing the sun estimate. This document summarizes the content of the module, how to use it, and the test that was implemented for it. More information on the filter derivation can be found in Reference [\citenum{Teil:2018fe}], and more information on the square root unscented filter can be found in Reference [\citenum{Wan2001}] (attached alongside this document). 
\section{Filter kinematics} \subsection{Filter Derivation} %%% The Switch-uKF attempts to avoid subtracting any terms from the state, while still removing the unobservable component of the rate. In order to do this, an appropriate frame must be defined. In order not to track the rate component alongside the sunline direction, that vector needs to be one of the basis vectors of the frame. It is chosen as the first vector of the frame; therefore, in that frame, the rotation rate component $\omega_1$ can be removed from the states. This frame is called $\mathcal{S}_1 = \{\hat{\bm s}_1 = \frac{\bm d}{|\bm d|}, \hat{\bm s}_2, \hat{\bm s}_3 \}$. This is seen in Figure \ref{fig:Switches}, where the dotted line represents the $30 \dg$ threshold cone before switching frames. \begin{figure}[t] \centering \includegraphics[]{./Figures/Switches} \caption{Frame built off the body frame for Switch filters} \label{fig:Switches} \end{figure} The second vector of the frame must be created using only $\bm d$ and the body frame vectors. The first intuitive choice is to use $\hat{\bm b}_1$ of the body frame and define $\hat{\bm s}_2$ as in Equation \eqref{eq:s2}. The third vector $\hat{\bm s}_3$ of the $\mathcal{S}_1$ frame is naturally created from the first two. \begin{equation}\label{eq:s2} \hat{\bm s}_2 = \frac{\hat{\bm s}_1 \times \hat{\bm b}_1}{|\hat{\bm s}_1 \times \hat{\bm b}_1|} \hspace{2cm} \hat{\bm s}_3 = \frac{\hat{\bm s}_1 \times \hat{\bm s}_2}{|\hat{\bm s}_1 \times \hat{\bm s}_2|} \end{equation} The problem that arises is the singularity that occurs when $\hat{\bm b}_1$ and $\bm d$ become aligned: the frame becomes undefined. To counteract this, following a process similar to the shadow set used for Modified Rodrigues Parameters [\citenum{schaub}], a second frame is created.
This frame $\mathcal{S}_2 = \{\hat{\bar{\bm s}}_1 = \hat{\bm s}_1, \hat{\bar{\bm s}}_2 , \hat{\bar{\bm s}}_3 \}$ is created with the same first vector, but constructs $\hat{ \bar{\bm s}}_2$ using $\hat{\bm b}_2$ of the body frame as in Equation \eqref{eq:s2bar}. The last vector, once again, completes the orthonormal frame. \begin{equation}\label{eq:s2bar} \hat{\bar{\bm s}}_2 = \frac{\hat{\bar{\bm s}}_1 \times \hat{\bm b}_2}{|\hat{\bar{\bm s}}_1 \times \hat{\bm b}_2|} \end{equation} With both these frames, $\mathcal{S}_1$ and $\mathcal{S}_2$, the singularities can always be avoided. Indeed, $\mathcal{S}_1$ becomes singular when $\bm d$ approaches $\hat{\bm b}_1$, while $\mathcal{S}_2$ becomes singular when the sun heading approaches $\hat{\bm b}_2$. Whenever the sunline gets within a safe cone of $30 \dg$ (a modifiable value) of $\hat{\bm b}_1$, the frame is rotated into $\mathcal{S}_2$, which is not singular there. Similarly, when $\bm d$ approaches $\hat{\bm b}_2$, the frame is switched back to $\mathcal{S}_1$. Because the two frames share the sunline vector $\bm d$, this vector is the same in both frames. This is a clear advantage, as this is the vector we desire to estimate, and not having to rotate it avoids numerical issues. The rotation of the rates is done by computing the following DCMs, of which all the vectors are known.
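The two-frame construction above can be sketched numerically. The following is a minimal Python/numpy illustration of Equations \eqref{eq:s2} and \eqref{eq:s2bar} with the $30 \dg$ threshold test, not the module's actual implementation; the function names are hypothetical, and the full module additionally rotates the rate states when switching.

```python
import numpy as np

def build_frame(d, b):
    # Orthonormal frame {s1, s2, s3} whose first axis is the sunline
    # direction d; body axis b completes the triad:
    #   s2 = s1 x b / |s1 x b|,  s3 = s1 x s2 / |s1 x s2|.
    s1 = d / np.linalg.norm(d)
    s2 = np.cross(s1, b)
    s2 /= np.linalg.norm(s2)
    s3 = np.cross(s1, s2)
    s3 /= np.linalg.norm(s3)
    return s1, s2, s3

def switch_needed(d, b, threshold_deg=30.0):
    # Switch frames when the sunline comes within the threshold cone
    # of the body axis currently used to build the frame.
    d_hat = d / np.linalg.norm(d)
    cos_ang = np.clip(abs(np.dot(d_hat, b)), -1.0, 1.0)
    return np.degrees(np.arccos(cos_ang)) < threshold_deg

b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])
d  = np.array([0.3, 0.4, 0.9])   # current sunline estimate (body frame)

# Use S1 unless d is within 30 deg of b1, in which case build S2 from b2.
basis = b2 if switch_needed(d, b1) else b1
s1, s2, s3 = build_frame(d, basis)
BS = np.column_stack([s1, s2, s3])  # DCM [BS], frame axes as columns
```

By construction $[\mathcal{BS}]$ is orthonormal, and the first column is the (normalized) sunline vector, so the estimated heading is never rotated during a switch.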
\begin{equation}\label{eq:DCMs} [\mathcal{B}\mathcal{S}_1] = \begin{bmatrix} \leftexp{B}{\hat{\bm s}_1} & \leftexp{B}{\hat{\bm s}_2} & \leftexp{B}{\hat{\bm s}_3}\end{bmatrix} \hspace{1cm} [\mathcal{B}\mathcal{S}_2] = \begin{bmatrix} \leftexp{B}{\hat{\bar{\bm s}}_1} & \leftexp{B}{\hat{\bar{\bm s}}_2} & \leftexp{B}{\hat{\bar{\bm s}}_3}\end{bmatrix} \hspace{1cm} [\mathcal{S}_2\mathcal{S}_1] = [\mathcal{B}\mathcal{S}_2] ^T [\mathcal{B}\mathcal{S}_1] \end{equation} \subsection{Filter Dynamics} %%% The filter is therefore derived with the states being $\bm X =\begin{bmatrix}\leftexp{B}{\bm d} & \omega_2 & \omega_3 \end{bmatrix}^{T}$, given that $\bm \omega_{\mathcal{S}/\mathcal{B}} = \leftexp{S}{\begin{bmatrix} \omega_1 & \omega_2 &\omega_3\end{bmatrix}}^T$. The rates of $\mathcal{S}$ relative to the body and inertial frames are related as follows: $\bm \omega_{\mathcal{S}/\mathcal{N}} - \bm \omega_{\mathcal{S}/\mathcal{B}} = \bm \omega_{\mathcal{B}/\mathcal{N}}$. Since $\omega_1$ is unknown, it is set to zero. Furthermore, since the sun heading is considered to be constant in the inertial frame over the period of time required for attitude determination and control, the equation becomes $- \bar{\bm \omega}_{\mathcal{S}/\mathcal{B}} = \bar{\bm \omega}_{\mathcal{B}/\mathcal{N}}$. ${\bm \omega}_{\mathcal{S}/\mathcal{B}}$ is estimated directly by the filter, and its skew matrix can be computed by setting $\omega_1$ to zero (in the absence of information). This defines $\tilde{\bm \omega}_{\mathcal{B}/\mathcal{N}}$ as a function of known parameters. The dynamics are therefore given by Equations \eqref{eq:dynSwitch} and \eqref{eq:dynmatSwitch}, where $ \tilde{[\bm d]}(2,3)$ corresponds to the $2^{\text{nd}}$ and $3^{\text{rd}}$ columns of the $ \tilde{[\bm d]}$ matrix.
\begin{align} \label{eq:dynSwitch} \bm X' = \bm F(\bm X) &= \begin{bmatrix} \leftexp{B}{ \bm d'} \\ \omega_2' \\ \omega_3' \end{bmatrix} = \begin{bmatrix} -\leftexp{B}{\bar{\bm \omega}_{\mathcal{B}/\mathcal{N}}} \times \leftexp{B}{\bm d}\\ 0 \\ 0\end{bmatrix} = \begin{bmatrix} [\mathcal{B}\mathcal{S}]\leftexp{S}{\begin{bmatrix} 0 \\ \omega_2 \\ \omega_3\end{bmatrix}} \times \leftexp{B}{\bm d}\\ 0 \\ 0\end{bmatrix} \\\label{eq:dynmatSwitch} [A]&= \begin{bmatrix} \frac{\partial \bm F (\bm d, t_i)}{\partial \bm X} \end{bmatrix} = \begin{bmatrix} [ \leftexp{B}{\tilde{\bar{\bm \omega}}_{\mathcal{S}/\mathcal{B}}}] & - \tilde{[\bm d]}[\mathcal{B}\mathcal{S}](2,3) \\ [0]_{2\times 3} & [0]_{2\times 2} \end{bmatrix} \end{align} This formulation leads to simple dynamics, much simpler than those of the filter which subtracts the unobservable states, yet it can actually estimate the observable components of the rate, instead of using past estimates of $\bm d$. In regard to the SR-uKF version of this filter, the same coefficients are used: $\alpha = 0.02$, and $\beta = 2$. \subsection{Switching Frames} %%% When switching occurs, the switch matrix $[W]$ can be computed as in Equation \eqref{eq:switchMat} using the previously computed DCMs. This equation assumes the switch is going from frame 1 to frame 2 (the reciprocal is equivalent), and $[\mathcal{S}_2 \mathcal{S}_1](2,3)$ corresponds to the $2^{\text{nd}}$ and $3^{\text{rd}}$ columns of the $[\mathcal{S}_2 \mathcal{S}_1]$ matrix.
\begin{equation}\label{eq:switchMat} [W] = \begin{bmatrix} [I]_{3\times 3} & [0]_{3 \times 2} \\ [0]_{2 \times 3} & [\mathcal{S}_2 \mathcal{S}_1](2,3)\end{bmatrix} \end{equation} The new state $\bm X$ and covariance $[P]$ after the switch are therefore given in Equation \eqref{eq:switchEq}. \begin{equation}\label{eq:switchEq} \bar{\bm X} = [W] \bm X \hspace{2cm} [\bar{P}] = [W] [P] [W]^T \end{equation} When writing out the values of the state and covariance, it is necessary to bring them back into the body frame, using the $[\mathcal{B}\mathcal{S}]$ DCM ($\mathcal{S}$ representing the current frame in use). \subsection{Measurements} The measurement model is given in Equation \eqref{eq:meas}, and the $H$ matrix, defined as $H = \left[\frac{\partial \bm G (\bm X, t_i)}{\partial \bm X}\right]^{*}$, is given in Equation \eqref{eq:Hmat}. In this filter, the only measurements used are from the coarse sun sensors. For the $i^\mathrm{th}$ sensor, the measurement is simply given by the dot product of the sunline heading and the normal to the sensor. This yields easy partial derivatives for the $H$ matrix, which is formed of rows of transposed normal vectors (only for those sensors which received a measurement). Hence the $H$ matrix has a changing size depending on the number of measurements.
\begin{equation}\label{eq:meas} \bm G_i(\bm X) = \bm n_i \cdot \bm d \end{equation} \begin{equation}\label{eq:Hmat} \bm H(\bm X) = \begin{bmatrix} \bm n_1^T \\ \vdots \\ \bm n_i^T \end{bmatrix} \end{equation} \section{Filter Set-up, initialization, and I/O} \subsection{User initialization} In order for the filter to run, the user must set a few parameters: \begin{itemize} \item The unscented filter has 3 parameters that need to be set, and are best set as: \\ \texttt{ filterObject.alpha = 0.02} \\ \texttt{ filterObject.beta = 2.0} \\ \texttt{ filterObject.kappa = 0.0} \item The angle threshold under which the coarse sun sensors do not read the measurement: \\ \texttt{FilterContainer.sensorUseThresh = 0.} \item The process noise matrix: \\ \texttt{qNoiseIn = numpy.identity(5)} \\ \texttt{ qNoiseIn[0:3, 0:3] = qNoiseIn[0:3, 0:3]*0.01*0.01} \\ \texttt{ qNoiseIn[3:5, 3:5] = qNoiseIn[3:5, 3:5]*0.001*0.001} \\ \texttt{filterObject.qNoise = qNoiseIn.reshape(25).tolist()} \item The measurement noise value, for instance: \\ \texttt{FilterContainer.qObsVal = 0.001} \item The initial covariance: \\ \texttt{Filter.covar =} \\ \texttt{ [1., 0.0, 0.0, 0.0, 0.0, \\ 0.0, 1., 0.0, 0.0, 0.0,\\ 0.0, 0.0, 1., 0.0, 0.0,\\ 0.0, 0.0, 0.0, 0.02, 0.0,\\ 0.0, 0.0, 0.0, 0.0, 0.02]} \item The initial state:\\ \texttt{Filter.state =[0.0, 0.0, 1.0, 0.0, 0.0]} \end{itemize} The messages must also be set as such: \begin{itemize} \item \texttt{ filterObject.navStateOutMsgName = "sunline$\_$state$\_$estimate"} \item \texttt{ filterObject.filtDataOutMsgName = "sunline$\_$filter$\_$data"} \item \texttt{ filterObject.cssDataInMsgName = "css$\_$sensors$\_$data"} \item \texttt{ filterObject.cssConfInMsgName = "css$\_$config$\_$data"} \end{itemize} \subsection{Inputs and Outputs} The SuKF reads in the measurements from the coarse sun sensors. These take the form of a list of cosine values. Knowing the normals to each of the sensors, we can therefore use them to estimate the sun heading.
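How the filter assembles measurements from the CSS cosine values can be sketched as follows. This is an illustrative Python snippet, not the module's C implementation; \texttt{css\_measurement\_model} is a hypothetical name, and only the sunline block of the $H$ matrix is shown (the partials with respect to $\omega_2$ and $\omega_3$ are zero).

```python
import numpy as np

def css_measurement_model(d, normals, use_thresh=0.0):
    # Each coarse sun sensor i reports G_i = n_i . d (a cosine value).
    # Only sensors whose reading exceeds the threshold contribute a row
    # n_i^T to H, so H changes size with the number of active sensors.
    y, rows = [], []
    for n in normals:
        g = float(np.dot(n, d))
        if g > use_thresh:
            y.append(g)
            rows.append(n)
    H = np.array(rows).reshape(len(rows), 3)
    return np.array(y), H

# Three illustrative sensor normals and a sunline along +x/+z.
normals = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([0.0, 0.0, -1.0])]
d = np.array([0.6, 0.0, 0.8])

y, H = css_measurement_model(d, normals, use_thresh=0.0)
# Only the first sensor faces the sun (positive cosine), so H has one row.
```

This mirrors the threshold behavior configured through \texttt{sensorUseThresh} above: sensors shadowed from the sun contribute no row to $H$ and no residual to the update.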
\section{Test Design} The unit test for the sunlineSuKF module is located in:\\ \noindent {\tt fswAlgorithms/attDetermination/sunlineSuKF/$\_$UnitTest/test$\_$SunlineSuKF.py} \\ as well as in another Python file containing plotting functions: \noindent {\tt fswAlgorithms/attDetermination/sunlineSuKF/$\_$UnitTest/SunlineSuKF$\_$test$\_$utilities.py} \\ The test is split up into 3 subtests. The first test breaks up all of the individual filter methods and tests them individually. These notably cover the square-root unscented filter specific functions. The second test verifies that in the case where the state is zeroed out from the start of the simulation, it remains at zero. The third test verifies the behavior of the time update with a measurement modification in the middle of the run. \subsection{Individual tests} In each of these individual tests, random inputs are fed to the methods and their values are computed in parallel in Python. These two values are then compared to ensure that the correct computations are taking place. \begin{itemize} \item \underline{QR Decomposition}: This tests the QR decomposition function which returns just the R matrix. Tolerance to absolute error $\epsilon = 10^{-15}$. \textcolor{ForestGreen}{Passed} \item \underline{LU Decomposition}: This tests the LU Decomposition accuracy. Tolerance to absolute error $\epsilon = 10^{-14}$. \textcolor{ForestGreen}{Passed} \item \underline{LU backsolve}: This tests the LU Back-Solve accuracy. Tolerance to absolute error $\epsilon = 10^{-14}$. \textcolor{ForestGreen}{Passed} \item \underline{LU matrix inverse}: This tests the LU Matrix Inverse accuracy. Tolerance to absolute error $\epsilon = 10^{-14}$. \textcolor{ForestGreen}{Passed} \item \underline{Cholesky decomposition}: This tests the Cholesky Matrix Decomposition accuracy. Tolerance to absolute error $\epsilon = 10^{-14}$. \textcolor{ForestGreen}{Passed} \item \underline{L matrix inverse}: This tests the L Matrix Inverse accuracy.
Tolerance to absolute error $\epsilon = 10^{-14}$. \textcolor{ForestGreen}{Passed} \item \underline{U matrix inverse}: This tests the U Matrix Inverse accuracy. Tolerance to absolute error $\epsilon = 10^{-12}$. \textcolor{ForestGreen}{Passed} \end{itemize} \subsection{Static Propagation} \input{AutoTeX/StatesPlotprop.tex} This test also takes no measurements in, and propagates with the expectation of no change. It then tests that the states and covariance are as expected throughout the simulation time. Plotted results are seen in Figure \ref{fig:StatesPlotprop}. We indeed see that the state and covariance evolve nominally and without bias. Tolerance to absolute error: $\epsilon = 10^{-10}$ \subsection{Full Filter test} This tests the filter working from start to finish. No measurements are taken in for the first 20 time steps. Then a heading is given through the CSS message. Halfway through the simulation, measurements stop, and 20 time steps later a different heading is read. The filter must be robust and detect this change. This test is parametrized for different test lengths, different initial conditions, different measured headings, and with or without measurement noise. All of these are successful. \vspace{0.2cm} Tolerance to absolute error without measurement noise: $\epsilon = 10^{-10}$ \textcolor{ForestGreen}{Passed} Plotted results are seen in Figures \ref{fig:StatesPlotupdate} and \ref{fig:PostFitupdate}. Figure \ref{fig:StatesPlotupdate} shows the state error and covariance over the run. We see the covariance initially grow, then come down quickly as measurements are used. It grows once again as the measurements stop, before the state error is brought back to zero with a change in sun heading. Figure \ref{fig:PostFitupdate} shows the post-fit residuals for the filter, with no measurement noise. We see that the observations are read in well and that the residuals are brought back down to zero.
\input{AutoTeX/StatesPlotupdate.tex} \input{AutoTeX/PostFitupdate.tex} \bibliographystyle{AAS_publication} % Number the references. \bibliography{references} % Use references.bib to resolve the labels. \end{document}
%%% PLEASE RUN A SPELL CHECKER BEFORE COMMITTING YOUR CHANGES! %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{\label{sec:History-8-1}Development Release Series 8.1} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% This is the development release series of HTCondor. The details of each version are described below. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{\label{sec:New-8-1-6}Version 8.1.6} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \noindent Release Notes: \begin{itemize} \item HTCondor version 8.1.6 released on May 22, 2014. \end{itemize} \noindent New Features: \begin{itemize} \item HTCondor can discover, schedule, and manage GPUs simply by inserting \begin{verbatim} use feature : GPUs \end{verbatim} in the configuration file. The HTCondor wiki page, \URL{https://htcondor-wiki.cs.wisc.edu/index.cgi/wiki?p=HowToManageGpus}, describes the capabilities. \item The grid universe can now be used to submit and manage jobs on a BOINC server, using the new grid type \SubmitCmdNI{boinc}. \Ticket{3540} \item Configuration has been enhanced in structure and with newly implemented semantics. As part of this effort, almost all configuration variables have compile-time defaults specified and incorporated into the code. Therefore, they no longer appear in the distributed example configuration file. Only when values change from the defaults will these variables be placed into a configuration file.
For current installations wishing to transition to the new, stripped-down configuration files, the new \Opt{-writeconfig} option to \Condor{config\_val} will help to identify values that differ from the defaults. New configuration semantics permit \begin{itemize} \item the inclusion of configuration defined elsewhere. See section~\ref{sec:Config-Include} for a description. \item metaknobs, which incorporate predefined sets of commonly used configuration. See section~\ref{sec:Config-Templates} for a description. \item a simple if/else syntax for conditional specification of configuration. See section~\ref{sec:Config-IfElse} for a description. \end{itemize} \Ticket{4325} \Ticket{3894} \Ticket{4319} \Ticket{4031} \Ticket{4211} \item When hierarchical group quotas are used and surplus sharing is enabled, the quotas are now correctly computed if slot weights are also enabled. \Ticket{4324} \item The default priority factor set for new users is now 1000. This was changed from a default value of 1, because a value of 1 leaves no room to boost the priority factor. \Ticket{4282} \item The \Condor{schedd} may now keep open a configurable number of job event log files. This improves performance over the previous behavior of opening, writing, and closing the file for each event. New configuration variables \Macro{USERLOG\_FILE\_CACHE\_MAX} and \Macro{USERLOG\_FILE\_CACHE\_CLEAR\_INTERVAL} specify the number of job event log files that may be kept open at the same time and the periodic interval of time that passes before the set of open files is closed. \Ticket{4040} \item The curl file transfer plug-in can now be used to transfer output files in addition to input files. \Ticket{4190} \item New python bindings allow the user access to the same file locking protocol as HTCondor daemons. \Ticket{4315} \item The DAGMan node status file formatting has changed. The format of the DAG node status file is now New ClassAds, and the amount of information in the file has increased.
Section~\ref{sec:DAG-node-status} has details on node status files. \Ticket{4115} \item The new configuration variable \Macro{STARTER\_LOG\_NAME\_APPEND} controls the file name extension of the log used by the \Condor{starter}. \Ticket{4244} \item The new configuration variable \Macro{ENVIRONMENT\_VALUE\_FOR\_UnAssigned<name>} is intended for use with GPUs, where \texttt{<name>} is \texttt{GPUs}. It defines what GPU ID to assign to slots that have no assigned GPU. Without this, the CUDA runtime would allow slots with no assigned GPU to use all of the GPUs. \Ticket{4320} \item The batch system name \texttt{HTCondor} is now published in each job's environment. \Ticket{4233} \item New configuration variables \Macro{UDP\_NETWORK\_FRAGMENT\_SIZE} and \Macro{UDP\_LOOPBACK\_FRAGMENT\_SIZE} added to control UDP message fragmentation size over the network and loopback interface, respectively. \Ticket{4321} \item The new \Condor{pool\_job\_report} tool for Linux platforms composes and mails a report about all jobs run in the previous 24 hours on all execute machines within the pool. \Ticket{4267} \item HTCondor now publishes more I/O statistics as job ClassAd attributes. The new attributes are \Attr{BlockReads}, \Attr{BlockWrites}, \Attr{RecentBlockReads}, \Attr{RecentBlockWrites}, \Attr{RecentBlockReadKbytes}, and \Attr{RecentBlockWriteKbytes}. \Ticket{3850} \item The new job ClassAd attribute \Attr{SpoolOnEvict} facilitates the debugging of failed jobs. \Ticket{4292} \item Memory corruption mitigation is enabled by additional linker flags, when building HTCondor from source against system-shared libraries installed by the distribution. \Ticket{4153} \item An experimental new feature to overlap the transfer of job output with the execution of a subsequent job is documented with a link from the HTCondor wiki page, \URL{https://htcondor-wiki.cs.wisc.edu/index.cgi/wiki?p=ExperimentalFeatures}. 
\Ticket{4291} \item An experimental new feature to provide custom output formatting for \Condor{q} and \Condor{status} is documented with a link from the HTCondor wiki page, \URL{https://htcondor-wiki.cs.wisc.edu/index.cgi/wiki?p=ExperimentalFeatures}. \Ticket{4241} \end{itemize} \noindent Bugs Fixed: \begin{itemize} \item The \Condor{shared\_port} daemon no longer blocks on a very unresponsive daemon. \Ticket{4314} \item vm universe jobs now report attribute \Attr{RemoteUserCPU} when run on a KVM hypervisor. CPU usage remains unreported by VMware hypervisors. \Ticket{4337} \item The \Condor{gridmanager} no longer assumes that a NorduGrid ARC job with a reported exit code greater than 128 exited abnormally via a signal. \Ticket{4342} \item Many tools, including \Condor{off} and \Condor{restart} interpreted the command line argument \Opt{-defrag} incorrectly as \Opt{-debug}, since both words start with the string \AdStr{de}. The confusion has been fixed. Use of \Opt{-defrag} will now produce an error message, since it is not a valid option for these tools. \Ticket{3717} \item Fixed a crash by the \Condor{gpu\_discovery} tool, when running on a 32-bit platform or on Windows and detecting via OpenCL. \Ticket{4339} \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{\label{sec:New-8-1-5}Version 8.1.5} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \noindent Release Notes: \begin{itemize} \item HTCondor version 8.1.5 released on April 15, 2014. \end{itemize} \noindent New Features: \begin{itemize} \item The default configuration now implements a policy that disables preemption. \Ticket{4281} \item The protocol for interaction between \Condor{q} and the \Condor{schedd} daemon has been rewritten. 
The new protocol does not require the \Condor{schedd} to fork a child process and does not cause blocking; the result is that the \Condor{schedd} should be able to handle many concurrent \Condor{q} requests with minimal resource usage. \Ticket{4111} \item The specification in configuration for the size or amount of time that a log file may grow has changed. An explicit size or amount of time may still be specified for any individual log file. However, any log files not explicitly specified have a default maximum size specified by the new configuration variable \Macro{MAX\_DEFAULT\_LOG}. \Ticket{4246} \item The new \Condor{urlfetch} tool enables the acquisition of configuration with a query to a URL. \Ticket{4018} \item The \Prog{cream\_gahp} and \Prog{nordugrid\_gahp} can now talk to servers over IPv6. \Ticket{4243} \item The python bindings can now accept a list of \Condor{collector} hosts in the constructor of the \texttt{Collector} object. This eases use of the bindings for high availability setups. \Ticket{4245} \item The new python binding \texttt{transaction} creates a transaction with the \Condor{schedd}, providing a way to submit multiple clusters of jobs or edit multiple attributes atomically. \Ticket{4225} \item New configuration variable \Macro{NEGOTIATOR\_MAX\_TIME\_PER\_CYCLE} places an upper limit on the time spent in each negotiation cycle. \Ticket{4271} \item The configuration variable \Macro{VALID\_SPOOL\_FILES} has been redefined to list only files that the system administrator determines must not be removed by \Condor{preen}. The new configuration variable \Macro{SYSTEM\_VALID\_SPOOL\_FILES} contains a predetermined list of files that are known to be valid at the time HTCondor was built. \Condor{preen} will use the union of these two configuration variables as the set of valid files that should not be removed from the \MacroNI{SPOOL} directory.
\Ticket{4257} \item The new configuration variable \Macro{OFFLINE\_MACHINE\_RESOURCE\_<name>} is used to identify a custom machine resource as offline, so that the resource will not be allocated to any slot. \Ticket{4177} \item The default value of configuration variable \Macro{NEGOTIATOR\_USE\_WEIGHTED\_DEMAND} has been changed from \Expr{False} to \Expr{True}. \Ticket{4238} \item The new configuration variable \Macro{NEGOTIATOR\_TRIM\_SHUTDOWN\_THRESHOLD} can be used to avoid making matches to resources that are about to go away. It is primarily of interest to glidein pools. Section~\ref{param:NegotiatorTrimShutdownThreshold} details the new configuration variable. \Ticket{4266} \item No user-visible changes result from reductions in the quantity of unused memory within DaemonCore data structures. \Ticket{4206} \item The \Condor{negotiator} logs more information about its round robin iteration to ease debugging. \Ticket{3871} \item Some communications between daemons will cause fewer network timeouts, as the reading of commands no longer blocks while waiting for completion of the command. \Ticket{4237} \end{itemize} \noindent Bugs Fixed: \begin{itemize} \item Fixed a bug that affected \Condor{on}, \Condor{off}, \Condor{restart}, \Condor{reconfig}, and \Condor{set\_shutdown}. When multiple machines were named on the command line, these tools could report \begin{verbatim} Can't find address for master XXXX \end{verbatim} for some daemons, even though the daemons were properly advertised to the \Condor{collector}. \Ticket{4207} \item Fixed a bug that could have caused the \Condor{startd} to become unresponsive when starting a job obtained via the Work Fetch Hook. \Ticket{4210} \item Fixed a bug that could have caused the \Condor{schedd} to advertise a stale address in the \Attr{ScheddIpAddr} attribute of its submitter ClassAds, resulting in other daemons being unable to contact it. 
The problem occurred when using both the \Condor{shared\_port} daemon and CCB, and the value of configuration variable \Macro{CCB\_ADDRESS} was changed. \Ticket{4250} \item Fixed a bug introduced earlier in the 8.1 developer series that could cause \Condor{submit} to crash when reading large submit description files. \Ticket{4260} \item Fixed a bug that prevented a configuration variable from referring to itself, when the previous value was defined by the code, rather than within a configuration file. \Ticket{4256} \item The temperature attributes output by the \Condor{gpu\_discovery} tool contained values represented in Celsius, while the names of these attributes ended in the letter 'F,' implying Fahrenheit. The names of these attributes have been changed to end with the letter 'C.' For instance \Attr{<name>DieTempF} has been changed to \Attr{<name>DieTempC}. \Ticket{4294} \item The \Condor{startd} no longer generates this erroneous message when a plugin can not be run: \begin{verbatim} Warning: Starter pid XXX is not associated with a claim. A slot may fail to transition to Idle. \end{verbatim} \Ticket{4026} \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{\label{sec:New-8-1-4}Version 8.1.4} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \noindent Release Notes: \begin{itemize} \item HTCondor version 8.1.4 released on February 27, 2014. \item This version of HTCondor includes all bug fixes from version 8.0.6, as well as the new full port for the Red Hat Enterprise Linux 7.0 \emph{Beta} release on the x86\_64 architecture. A full port includes support for the standard universe. \end{itemize} \noindent New Features: \begin{itemize} \item When configured to use partitionable slots, those slots running jobs can now be preempted by the \Condor{negotiator} daemon based on the value of the machine's configuration of \MacroNI{RANK}. 
\Ticket{3667} \item Improved support for publishing monitoring information about an HTCondor pool to \TM{Ganglia}. Added Ganglia statistics for total job starts and total job preemptions within a \Condor{startd}. This allows Ganglia to graph the total job preemptions across all \Condor{startd} daemons in a pool. See section~\ref{sec:Config-gangliad} for configuration variable definitions, and section~\ref{sec:monitor-ganglia} for details about monitoring with Ganglia. \Ticket{4151} \Ticket{3965} \item The grid universe can now be used to create and manage VM instances in Google Compute Engine (GCE), using the new grid type \SubmitCmdNI{gce}. \Ticket{3833} \item As a scalability improvement for Unix platforms, the \Condor{shared\_port} daemon no longer forks on incoming connections. \Ticket{4094} \item \Condor{ssh\_to\_job} and interactive jobs no longer try to connect to held jobs. They instead report the hold and the reason why the job is being held. \Ticket{3867} \item Improved the restart time of the \Condor{schedd} after it has crashed. \Ticket{4169} \item The new configuration variable \Macro{EC2\_RESOURCE\_TIMEOUT} sets the amount of time that HTCondor will wait for an unresponsive EC2 service before placing the corresponding jobs on hold. \Ticket{4113} \item The new python binding \Procedure{refreshGSIProxy} can refresh a remote job's GSI proxy as a part of the \texttt{Schedd} object. \Ticket{4116} \item By default, the TCP keep alive interval is automatically tuned to 5 minutes. This causes at least one packet to be sent on established, but idle, TCP connections once every 5 minutes, and it speeds up the detection of connections that were silently dropped by NAT or firewall devices. Without this, the \Condor{shadow} may not reliably recover from transient network failures. This behavior is controlled by the new configuration variable \Macro{TCP\_KEEPALIVE\_INTERVAL}. Setting this variable to 0 restores the prior behavior. 
In addition, the default value of configuration variable \Macro{CCB\_HEARTBEAT\_INTERVAL} has been reduced to 5 minutes. \Ticket{4122}
\item New python \Code{ClassAd} module function calls \Procedure{Attribute}, \Procedure{Function}, \Procedure{Literal}, \Procedure{flatten}, \Procedure{matches}, and \Procedure{symmetricMatch} aid the composition of ClassAd expressions. It should now be possible to build expressions directly in python, without having to resort to string manipulation. \Ticket{4154}
\item For those who use the Python bindings, the \Env{LD\_LIBRARY\_PATH} environment variable no longer needs to be set. \Ticket{4128}
\item The Python bindings are now compatible with Python 3. \Ticket{4146}
\item Setting configuration variable \Macro{DAGMAN\_ALWAYS\_USE\_NODE\_LOG} to \Expr{False}, or using the corresponding \Opt{-dont\_use\_default\_node\_log} option to \Condor{submit\_dag}, is no longer recommended, as it causes \Condor{dagman} to read the log files specified in the node job submit description files. \Ticket{4091}
\item Invoking \Condor{fetchlog} with the \Arg{STARTD\_HISTORY} argument now fetches all \Condor{startd} history by concatenating all instances of log files resulting from rotation to the current history log. \Ticket{4152}
\item Several general mechanisms for specifying user-defined \Condor{startd} resources have been enhanced, so that GPUs can be easily defined and used. New to this 8.1.4 version of HTCondor is the allocation of user-defined resources (especially GPUs) with partitionable and dynamic slots. This includes having HTCondor automatically set the environment variable \Env{CUDA\_VISIBLE\_DEVICES} for jobs that use CUDA GPUs and \Env{GPU\_DEVICE\_ORDINAL} for jobs that use OpenCL GPUs. The mechanism defines configuration variables \Macro{MACHINE\_RESOURCE\_<name>} and \Macro{MACHINE\_RESOURCE\_INVENTORY\_<name>} to specify the definition of user-defined resources with a list of resource identifiers.
When HTCondor allocates one of these user-defined resources to a slot, it will also publish this assignment within the slot's ClassAd using the new job ClassAd attribute \Attr{Assigned<name>}. In addition, it will define the variable \Env{\_CONDOR\_Assigned<name>} in the job's environment. The new configuration variable \Macro{ENVIRONMENT\_FOR\_Assigned<name>} also sets further environment variables. \Ticket{4141} \Ticket{4148}
\item The new \Condor{gpu\_discovery} tool detects CUDA and OpenCL GPUs, reporting them in the format needed to configure GPU resources using the configuration variable \Macro{MACHINE\_RESOURCE\_INVENTORY\_GPUs}. \Ticket{3386}
\item Two new pre-defined configuration variables are referenced with \MacroU{DETECTED\_PHYSICAL\_CPUS} and \MacroU{DETECTED\_CPUS}. \MacroUNI{DETECTED\_PHYSICAL\_CPUS} contains the number of physical (non-hyperthreaded) CPUs. \MacroUNI{DETECTED\_CPUS} will match the value of either \MacroNI{DETECTED\_CORES} or \MacroNI{DETECTED\_PHYSICAL\_CPUS}, depending on the state of \Macro{COUNT\_HYPERTHREAD\_CPUS}. \Macro{NUM\_CPUS} now defaults to the value of \MacroNI{DETECTED\_CPUS}. \Ticket{4197}
\item \Condor{q} will now show the macro-expanded job description from the attribute \Attr{MATCH\_EXP\_JobDescription} instead of \Attr{JobDescription} if it is available. \Ticket{4110}
\end{itemize}

\noindent Bugs Fixed:
\begin{itemize}
\item Fixed a small memory leak that was triggered by failed file transfer attempts. \Ticket{4134}
\item Fixed a bug that would leak one socket in each daemon, when \Expr{NO\_DNS = True}. \Ticket{4140}
\item Changed the way the \Condor{startd} allocates CPUs to slots in configurations where there are more slots than CPUs. CPUs are now distributed equally between slots that are not configured to receive a specific number (using configuration variable \Macro{SLOT\_TYPE\_<N>}). Before this change, these slots received 1 CPU each.
The new behavior matches how other slot resources are distributed. \Ticket{3249} \item The failure to terminate an EC2 grid universe job instance, because the instance no longer exists at the service, is now considered a successful termination. This allows EC2 grid universe jobs to exit the queue, if the service purges termination records quickly. \Ticket{4133} \item HTCondor now interacts with EC2 services by using \Code{POST} instead of \Code{GET}, which permits more services to accept user data with size greater than 8Kbytes. \Ticket{4004} \item Improved the handling of the \SubmitCmd{coresize} submit description file command, by allowing values larger than 4Gbytes. \Ticket{4155} \item Fixed a bug that caused job arguments to not be displayed in the default output of \Condor{q} when the submit description file used the new syntax for job arguments. \Ticket{2875} \item The \Condor{startd} daemon will no longer abort when it exhausts the supply of user-defined resources such as GPUs while assigning automatic resource shares to slots. \Ticket{4176} \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{\label{sec:New-8-1-3}Version 8.1.3} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \noindent Release Notes: \begin{itemize} \item HTCondor version 8.1.3 released on December 23, 2013. This developer release contains all bug fixes from HTCondor version 8.0.5. \end{itemize} \noindent New Features: \begin{itemize} \item The parsing of configuration has changed with respect to how line continuation characters and comments interact. The line continuation character no longer takes precedence over the comment character. 
\Ticket{4027}
\index{SUBSYS\_SUPER\_ADDRESS\_FILE macro@\texttt{<SUBSYS>\_SUPER\_ADDRESS\_FILE} macro}
\index{configuration macro!\texttt{SUBSYS\_SUPER\_ADDRESS\_FILE}}
\item When the super user issues a command or when the new \Condor{sos} tool invokes another tool, the command can be serviced with a higher priority. This should be useful when attempting to get information from an overloaded daemon, in order to diagnose or fix a problem. Commands directed at the \Condor{schedd} or \Condor{collector} daemons have this ability by default. Other DaemonCore daemons require configuration using the new configuration variable \MacroB{<SUBSYS>\_SUPER\_ADDRESS\_FILE}. \Ticket{4029}
\item The dedicated scheduler CPU usage within the \Condor{schedd} is now throttled, so that it cannot consume all of the CPU while starving the vanilla scheduler. This throttle can be adjusted by the new configuration variable \Macro{DEDICATED\_SCHEDULER\_DELAY\_FACTOR}. This variable, which defaults to five, sets the ratio of time spent not in the dedicated scheduler to the time scheduling parallel jobs. With this default of five, a maximum of 20\% of the scheduler's time will go to scheduling parallel jobs. \Ticket{4048}
\item The new \Condor{defrag} daemon ClassAd attribute \Attr{MeanDrainedArrived} measures the mean time between arrivals of fully drained machines, and the new attribute \Attr{DrainedMachines} measures the total number of fully drained machines which have arrived during the run time of this \Condor{defrag} daemon. \Ticket{4055}
\item The new \Opt{-defrag} option for \Condor{status} queries ClassAds of the \Condor{defrag} daemon. \Ticket{4039}
\item Machine ClassAd attributes \Attr{ExpectedMachineQuickDrainingCompletion} and \Attr{ExpectedMachineGracefulDrainingCompletion} are updated with their completion times if there are no active claims, making these attributes more useful in setting policy for partitionable slots.
\Ticket{3481}
\item In a DAG, the node retry number is now available as a VARS macro (see section~\ref{dagman:VARS}). \Ticket{4032}
\item Macro substitution both within configuration and within submit description files has been extended to specify and use an optional default value if a value is not defined. Section~\ref{sec:Config-File-Macros} has details for configuration. \Ticket{4033}
\item The Python bindings \Code{htcondor} module has a new \Procedure{read\_events} method to acquire an iterator over an HTCondor event log file. \Ticket{4071}
\item The new \Opt{-daemons} option to \Condor{who} prints information about the HTCondor daemons running on the specified machine, including the daemon's PID, IP address, and command port. \Ticket{4007}
\end{itemize}

\noindent Configuration Variable and ClassAd Attribute Additions and Changes:
\begin{itemize}
\item Configuration variable \Macro{DAGMAN\_DEFAULT\_NODE\_LOG} has been made more powerful, so that it can be defined in HTCondor configuration files, instead of being useful only when defined in a per-DAG configuration file. See section~\ref{param:DAGManDefaultNodeLog} for details. \Ticket{3930}
\item The new configuration variable \Macro{CORE\_FILE\_NAME} is used to set the name that DaemonCore uses to create a core file, in the event of a daemon crash. The default value for this configuration variable appends the daemon name, so a crash of the \Condor{schedd} would create a core file named \File{core.SCHEDD}. \Ticket{4100}
\item The new configuration variable \Macro{JOB\_EXECDIR\_PERMISSIONS} defines the permissions on a job's scratch directory. It defaults to setting permissions as \emph{0700}. \Ticket{4016}
\item The following recently added machine ClassAd attributes have been renamed.
\begin{description}
\item \Attr{TotalJobStarts} became \Attr{JobStarts}.
\item \Attr{RecentTotalJobStarts} became \Attr{RecentJobStarts}.
\item \Attr{TotalPreemptions} became \Attr{JobPreemptions}.
\item \Attr{RecentPreemptions} became \Attr{RecentJobPreemptions}.
\item \Attr{TotalRankPreemptions} became \Attr{JobRankPreemptions}.
\item \Attr{RecentTotalRankPreemptions} became \Attr{RecentJobRankPreemptions}.
\item \Attr{TotalUserPrioPreemptions} became \Attr{JobUserPrioPreemptions}.
\item \Attr{RecentTotalUserPrioPreemptions} became \Attr{RecentJobUserPrioPreemptions}.
\end{description}
\Ticket{4101}
\item The new \Condor{schedd} statistics ClassAd attribute \Attr{Autoclusters} gives the number of active autoclusters. \Ticket{4020}
\end{itemize}

\noindent Bugs Fixed:
\begin{itemize} \item None. \end{itemize}

\noindent Known Bugs:
\begin{itemize} \item None. \end{itemize}

\noindent Additions and Changes to the Manual:
\begin{itemize} \item None. \end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-8-1-2}Version 8.1.2}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent Release Notes:
\begin{itemize}
\item HTCondor version 8.1.2 released on October 31, 2013. This 8.1.2 release contains all bug fixes from HTCondor version 8.0.4.
\end{itemize}

\noindent New Features:
\begin{itemize}
\item \Condor{config\_val} now supports \Opt{-dump} and \Opt{-verbose} options to query configuration remotely from daemons. \Ticket{3894}
\item The \Condor{chirp} protocol and command line tool have been enhanced to support lower-cost, delayed updates to the job ClassAd residing in the \Condor{schedd}; updates occur as other communications take place, eliminating the overhead of a separate update. These two new Chirp commands, \Opt{set\_job\_attr\_delayed} and \Opt{get\_job\_attr\_delayed}, allow the job to send lightweight notification for events such as progress monitoring, which need not be durable. \Ticket{3353}
\item \Condor{history} has been enhanced to support remote history using new \Opt{-pool} and \Opt{-name} options.
\Ticket{3897} \item Matchmaking in the \Condor{negotiator} may be made aware of resources available for partitionable slots. This permits multiple jobs to be matched against a partitionable slot during a single negotiation cycle. The new policies discussed in Section~\ref{sec:consumption-policy} are set using new configuration variables and are known as consumption policies. \Ticket{3435} \item Definition syntax for the authorization configuration variables \Macro{ALLOW\_*} and \Macro{DENY\_*} has been expanded to permit the specification of Unix netgroups. See section~\ref{sec:Security-Authorization} for the syntax. \Ticket{3859} \item Definition syntax for the configuration variable \Macro{QUEUE\_SUPER\_USERS} has been expanded to accept a specification of Unix user groups. See section~\ref{param:QueueSuperUsers} for the syntax. \Ticket{3859} \item To ensure that a grid universe job running at an EC2 service terminates, HTCondor now checks after a fixed time interval that the job actually has terminated, instead of relying on the service's potentially unreliable job shut down indication. If the job has not terminated after a total of four checks, the job is placed on hold; it does not leave the queue marked as completed. \Ticket{3438} \item Email alerts about file transfers taking longer than \Macro{MAX\_TRANSFER\_QUEUE\_AGE} are now grouped together to reduce the number of email messages that are sent. \item Floating point values in Old ClassAds are now printed in a more human-readable format, while retaining 64-bit double precision. In previous versions, these values were always printed in scientific notation. \Ticket{3928} \item \Condor{ssh\_to\_job} now works with grid universe jobs which use EC2 resources. \Ticket{1548} \item Machine ClassAd attributes \Attr{Disk} and \Attr{TotalDisk} are now published as 64-bit integers, rather than being capped at the maximum value of a 32-bit integer. 
\Ticket{1784}
\item In an effort to improve scalability under heavy load, the tuning configuration variable \Macro{MAX\_REAPS\_PER\_CYCLE} is exposed, as defined in section~\ref{param:MaxReapsPerCycle}. The default for this variable changed from 1 to 0. \Ticket{3992}
\item To reduce the overwhelming quantity of per-user \Condor{schedd} statistics that are generated when configuration variables \MacroNI{SCHEDD\_COLLECT\_STATS\_FOR\_<Name>} or \MacroNI{SCHEDD\_COLLECT\_STATS\_BY\_<Name>} are used, the statistics are now published at verbosity level 2, instead of verbosity level 1. \Ticket{3980}
\item The Python bindings now include the \Code{Negotiator} class to manage users and their priorities. \Ticket{3893}
\item The Python bindings now provide automatic conversions from dictionaries to ClassAds, so they can accept a dictionary directly as an argument, rather than constructing a ClassAd from the dictionary. \Ticket{3892}
\item The Python bindings \Code{ClassAd} module has \Procedure{quote} and \Procedure{unquote} methods to help create string literals. \Ticket{3900}
\item The Python bindings \Code{ClassAd} module has new methods \Procedure{parseAds} and \Procedure{parseOldAds} that implement an iterator over ClassAds, in the New ClassAd and Old ClassAd formats, respectively. \Ticket{3918}
\item The ordering of adding attributes to the machine ClassAd has been changed, such that the attributes \Attr{Draining}, \Attr{DrainingRequestId}, and \Attr{LastDrainStartTime} are now added before the job retirement is calculated. This allows a decision about preemption to be made based on whether a machine is currently draining. \Ticket{3901}
\end{itemize}

\noindent Bugs Fixed:
\begin{itemize}
\item When \Macro{USE\_PID\_NAMESPACES} is \Expr{True}, the soft kill signal is now successfully sent to the job. Previously, a \Condor{rm} command of such a job would not remove the job until the killing timeout had expired.
\Ticket{3981} \item If a standard universe job exited without producing any checkpoints and no checkpoint server was used, two spurious error messages would be logged to the \File{SchedLog}, as it tried to remove the old checkpoint images from the non-existent checkpoint server. These error messages are no longer logged. \Ticket{3919} \item When configuration variable \Macro{STARTER\_RLIMIT\_AS} is set to its default value of 0, it means that there is no limit. This value was logged as a limit of 0Mb, leading to confusion. Now, no message is logged in this default case. \Ticket{3914} \item Improved how the \Condor{schedd} notifies the \Condor{shadow} and \Condor{gridmanager} about modifications to job ClassAds made using \Condor{qedit}. \Ticket{3909} \item Grid universe jobs now use the correct executable file when \SubmitCmd{copy\_to\_spool} is set to \Expr{True}. Previously, the executable file named in the submit description file would be copied to the remote server, rather than the copy of the executable file stored in the spool directory. \Ticket{3589} \item The example configuration provided within files \File{condor\_config.generic} and \File{condor\_config.generic.redhat} has been updated to fix an inadequate expression defining \MacroNI{NEGOTIATOR\_POST\_JOB\_RANK} when the \Condor{startd} is configured to not run benchmarks, as \Attr{Kflops} would not be defined. \Ticket{3589} \item Fixed a Python binding crash due to a segmentation fault, when evaluating an expression tree with an undefined reference. The fix allows the user to define the \Code{ClassAd} scope within which an expression tree is evaluated. \Ticket{3910} \item The Python bindings now include a correct conversion of \Code{absTime} and \Code{relTime} ClassAd literals to the corresponding Python types. 
\Ticket{3911}
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-8-1-1}Version 8.1.1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent Release Notes:
\begin{itemize}
\item HTCondor version 8.1.1 released on September 17, 2013. This release contains all bug fixes from the stable release version 8.0.2.
\end{itemize}

\noindent New Features:
\begin{itemize}
\item Reduced the number of calls to the service when managing EC2 jobs. This should increase the number of EC2 jobs HTCondor can manage on a given service without overloading it. \Ticket{3683}
\item When configuration variable \Macro{USE\_SHARED\_PORT} is \Expr{True}, \Macro{SHARED\_PORT} will now be automatically added to \Macro{DAEMON\_LIST}. To disable this new behavior, set the new configuration variable:
\begin{verbatim}
AUTO_INSERT_SHARED_PORT_IN_DAEMON_LIST = False
\end{verbatim}
\Ticket{3799}
\item Floating point values in ClassAds are now printed as 64-bit double precision values when sent over the network, written to disk, and displayed using the \Opt{-long} or \Opt{-autoformat} options of \Condor{status} and \Condor{q}. \Ticket{3363}
\item In the Pegasus/DAGMan workflow metrics, as documented in section~\ref{sec:DAGMetrics}, the two metrics \Expr{dagman\_id} and \Expr{parent\_dagman\_id} are now reported as the \Attr{ClusterId} of the \Condor{dagman} job. This eliminates any privacy concerns with reporting the \Condor{schedd} daemon's address, which includes the submit machine's IP address.
\item The Python bindings can now perform the equivalent of \Condor{ping} as a part of the \texttt{SecMan} object. \Ticket{3857}
\item The \Condor{gridmanager} and \Condor{ft-gahp} now create dynamic security sessions for performing file transfers. Previously, the security configuration had to be set in a special way for file transfers with the \Condor{ft-gahp} to work.
\Ticket{3536}
\end{itemize}

\noindent Configuration Variable and ClassAd Attribute Additions and Changes:
\begin{itemize}
\item The new configuration variable \Macro{USE\_RESOURCE\_REQUEST\_COUNTS} is a boolean value that defaults to \Expr{True}, reducing the latency of negotiation when there are many jobs next to each other in the queue with the same auto cluster, and many matches are being made. \Ticket{3585}
\item Four new machine ClassAd attributes are advertised. \Attr{TotalJobStarts} is the total number of jobs started by this \Condor{startd} daemon since it booted. \Attr{RecentTotalJobStarts} is the number of jobs started in the last twenty minutes. Similarly, \Attr{TotalPreemptions} is the number of jobs preempted since the \Condor{startd} daemon started, and \Attr{RecentTotalPreemptions} is the number in the last twenty minutes. \Ticket{3712}
\item \Macro{FILE\_TRANSFER\_DISK\_LOAD\_THROTTLE} now accepts tabs in addition to spaces as delimiters. \Ticket{3798}
\item Configuration variable \Macro{VALID\_SPOOL\_FILES} has been expanded to accept a single asterisk wild card character in each listed file name. \Ticket{3764}
\item The new configuration variable \Macro{GAHP\_DEBUG\_HIDE\_SENSITIVE\_DATA} is a boolean value that, by default, hides sensitive data such as security keys and passwords when communication with a GAHP server is written to a daemon log. \Ticket{3536}
\item The default value of configuration variable \Macro{ENABLE\_CLASSAD\_CACHING} has changed to \Expr{True} for all daemons other than the \Condor{shadow}, \Condor{starter}, and \Condor{master}. \Ticket{3441}
\end{itemize}

\noindent Bugs Fixed:
\begin{itemize}
\item The \Condor{gridmanager} now does proper failure recovery when submitting EC2 grid universe jobs to services that do not support the EC2 ClientToken parameter. Previously, if there was a failure when submitting jobs to OpenStack or Eucalyptus, the jobs could be submitted twice.
\Ticket{3682} \item Fixed the printing of nested ClassAds, so that the nested ClassAds can be read back properly. \Ticket{3772} \item Fixed a bug between the \Condor{gridmanager} and \Condor{ft-gahp} that caused file transfers to fail if one of the two daemons was older than version 8.1.0. \Ticket{3856} \item Fixed a bug that caused substitution in configuration variable evaluation to ignore per-daemon overrides. This is a long standing bug that may result in subtle changes to the way your configuration files are processed. An example of how substitution works with the per-daemon overrides is in section \ref{sec:Config-File-Macros}. \Ticket{3822} \item Fixed a bug that caused the command \begin{verbatim} condor_submit - \end{verbatim} to be interpreted as an interactive submit, rather than a request to read input from \File{stdin}. \Condor{qsub} was also modified to be immune to this bug, such that it will still work with other versions of HTCondor containing the bug. \Ticket{3902} \end{itemize} \noindent Known Bugs: \begin{itemize} \item DAGMan recovery mode does not work for Pegasus-generated sub-DAGs. For sub-DAGs, doing \Condor{hold} or \Condor{release} on the \Condor{dagman} job, or stopping and re-starting the \Condor{schedd} with the DAGMan job in the queue will result in failure of the DAG. This can be avoided by doing a \Condor{rm} of the DAGMan job, which produces a Rescue DAG, and re-submitting the DAG; the Rescue DAG is automatically run. This bug was introduced in HTCondor version 8.0.1, and it also appears in versions 8.0.2, 8.1.0, and 8.1.1. \Ticket{3882} \end{itemize} \noindent Additions and Changes to the Manual: \begin{itemize} \item None. \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{\label{sec:New-8-1-0}Version 8.1.0} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \noindent Release Notes: \begin{itemize} \item HTCondor version 8.1.0 released on August 5, 2013. 
This release contains all bug fixes from the stable release version 8.0.1.
\end{itemize}

\noindent New Features:
\begin{itemize}
\item Added support for publishing information about an HTCondor pool to \TM{Ganglia}. See section~\ref{sec:Config-gangliad} on page~\pageref{sec:Config-gangliad} for configuration variable details. \Ticket{3515}
\item Improved the performance of the \Condor{collector} daemon when running at sites that do not observe daylight saving time. \Ticket{2898}
\item \Condor{q}, \Condor{rm}, \Condor{status}, and \Condor{qedit} are now more consistent in the way they handle the \Opt{-constraint} option. \Ticket{1156}
\item The new \Condor{dagman\_metrics\_reporter} executable, with manual page at page~\pageref{man-condor-dagman-metrics-reporter}, reports metrics for DAGMan workflows running under Pegasus. \Condor{dagman} now generates an output file of the relevant metrics, as described on page~\pageref{sec:DAGMetrics}. \Ticket{3532}
\end{itemize}

\noindent Configuration Variable and ClassAd Attribute Additions and Changes:
\begin{itemize}
\item The default value of configuration variable \Macro{COLLECTOR\_MAX\_FILE\_DESCRIPTORS} has changed to 10240, and the default value of configuration variable \Macro{SCHEDD\_MAX\_FILE\_DESCRIPTORS} has changed to 4096. This increases the scalability of the default configuration. \Ticket{3626}
\item The new configuration variable \Macro{FILE\_TRANSFER\_DISK\_LOAD\_THROTTLE} enables dynamic adjustment of the level of file transfer concurrency in order to keep the disk load generated by transfers below a specified level. Supporting this new feature are configuration variables \Macro{FILE\_TRANSFER\_DISK\_LOAD\_THROTTLE\_WAIT\_BETWEEN\_INCREMENTS}, \Macro{FILE\_TRANSFER\_DISK\_LOAD\_THROTTLE\_SHORT\_HORIZON}, and \Macro{FILE\_TRANSFER\_DISK\_LOAD\_THROTTLE\_LONG\_HORIZON}.
\Ticket{3613} \item The following new \Condor{schedd} ClassAd attributes are for monitoring file transfer activity: \AdAttr{TransferQueueMBWaitingToDownload}, \AdAttr{TransferQueueMBWaitingToUpload}, \AdAttr{FileTransferDiskThrottleLevel}, \AdAttr{FileTransferDiskThrottleHigh}, and \AdAttr{FileTransferDiskThrottleLow}. \Ticket{3613} \item The default value for the configuration variable \Macro{PASSWD\_CACHE\_REFRESH} has been changed from 300 seconds to 72000 seconds (20 hours). \Ticket{3723} \item The new configuration variables \Macro{DAGMAN\_PEGASUS\_REPORT\_METRICS} and \Macro{DAGMAN\_PEGASUS\_REPORT\_TIMEOUT} set defaults used by the new \Condor{dagman\_metrics\_reporter} executable, which reports metrics for DAGMan jobs running under Pegasus. \Ticket{3532} \end{itemize} \noindent Bugs Fixed: \begin{itemize} \item HTCondor version 8.0.0 had an unintended change in the Chirp wire protocol. This change caused \Condor{chirp} with the \Opt{put} option to fail when the execute node was running HTCondor version 7.8.x or earlier versions. HTCondor 8.0.1 and later versions will now send the original wire protocol, and accept either the original protocol, or the variant that HTCondor version 8.0.0 sends. \Ticket{3735} \item Fixed a bug that could cause the daemons to crash on Unix platforms, if the operating system reported that a job owner's account did not exist, for example due to a temporary NIS or LDAP failure. \Ticket{3723} \item Fixed a bug that resulted in a misleading error message when \Condor{status} with the \Opt{-constraint} option specified a constraint that could not be parsed. \Ticket{1319} \item Fixed a typo in the output of \Condor{q}, where a period was erroneously present within a heading. \Ticket{3703} \end{itemize} \noindent Known Bugs: \begin{itemize} \item None. \end{itemize} \noindent Additions and Changes to the Manual: \begin{itemize} \item None. \end{itemize}
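The disk-load throttle introduced in version 8.1.0 above might be enabled with a configuration fragment along these lines; the values shown here are illustrative assumptions for a sketch, not shipped defaults:

```
# condor_config fragment (illustrative values, not defaults)
# Keep the disk load generated by file transfers below this level:
FILE_TRANSFER_DISK_LOAD_THROTTLE = 2.0
# How long to wait before raising the concurrency limit again
# (assumed value):
FILE_TRANSFER_DISK_LOAD_THROTTLE_WAIT_BETWEEN_INCREMENTS = 60
# Short- and long-term windows over which disk load is considered
# (assumed values):
FILE_TRANSFER_DISK_LOAD_THROTTLE_SHORT_HORIZON = 60
FILE_TRANSFER_DISK_LOAD_THROTTLE_LONG_HORIZON = 3600
```

Consult section~\ref{param:NegotiatorTrimShutdownThreshold} style parameter pages in the manual for the authoritative value types and defaults of these variables.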
% ============================================================================= \subsection*{General} \cite{SCARV:Gutmann:00} \cite{SCARV:WuWeaAus:01} \cite{SCARV:MTRGS:99} \cite{SCARV:GGHJPTW:11} \cite{SCARV:CosLebDev:16} \subsection*{Cryptography- or security-specific: design and implementation} \cite{SCARV:FouMoo:05,SCARV:Fournier:07,SCARV:KocSavGro:08,SCARV:TheSisPne:09,SCARV:TilKirSze:10,SCARV:NREAMM:12,SCARV:YumSav:15,SCARV:RagAmbPar:15,SCARV:AweAus:17,SCARV:YHEF:18,SCARV:WJWDGSN:18,SCARV:ZHCPH:18} \subsection*{Cryptography- or security-specific: verification} \cite{SCARV:KZDN:18} \subsection*{ISEs: general/codesign/security/scheduling and code generation} \cite{SCARV:Fiskiran:05} \cite{SCARV:BarGioMar:09} \cite{SCARV:RegIen:16} \cite{SCARV:FazLopOli:18} \cite{SCARV:KLWGSTW:06,SCARV:GIPTV:06} \cite{SCARV:RCSBKBLI:09} \cite{SCARV:ManGre:10,SCARV:ManMagGre:10,SCARV:Manley:11} \subsection*{ISEs for arithmetic: $\B{Z}_{N}$ and general multi-precision} \cite{SCARV:Gro:02,SCARV:Gro:03,SCARV:GroKam:03:a,SCARV:GAST:05,SCARV:GroTilSze:07} \subsection*{ISEs for arithmetic: $\B{F}_{2^m}$ and other fields (e.g., for ECC)} \cite{SCARV:GroKam:03:b,SCARV:FisLee:04,SCARV:GroKumPaa:04,SCARV:KumPaa:04,SCARV:BBGM:08} \subsection*{ISEs for arithmetic: misc} \cite{SCARV:GroKam:03,SCARV:GroSav:04,SCARV:VejPagGro:07} \subsection*{ISEs for symmetric} \cite{SCARV:BurMcDAus:00,SCARV:MelElb:08,SCARV:MelElb:10,SCARV:Saarinen:19} \subsection*{ISEs for AES: security-focused} \cite{SCARV:TilGro:07:a} \subsection*{ISEs for AES: efficiency-focused} \cite{SCARV:TilGroSze:05,SCARV:TilGro:06,SCARV:APRJ:11} \subsection*{ISEs for AES: mixed use (e.g., repurposed)} \cite{SCARV:TilGro:05,SCARV:TilGro:07:b,SCARV:BBGR:09,SCARV:BosOzeSta:11} \subsection*{ISEs for table look-up and memory access} \cite{SCARV:FisLee:01,SCARV:FisLee:05:a,SCARV:FisLee:05:b,SCARV:HilYinLee:08} \subsection*{ISEs for bit-manipulation (inc. 
permutation)} \cite{SCARV:ShiLee:00,SCARV:YanLee:00,SCARV:McGLee:01,SCARV:LeeShiYan:01,SCARV:ShiLee:02,SCARV:ShiYanLee:03,SCARV:LSYRR:04,SCARV:Shi:04,MASCAB:LeeYanShi:05,SCARV:HilYinLee:08,SCARV:HilLee:08,SCARV:ShiYanLee:08,SCARV:Hilewitz:08} \subsection*{ISEs for bit-slicing} As introduced by Biham~\cite{SCARV:Biham:97}, bit-slicing is based on a) a non-standard {\em representation} of data, and b) a non-standard {\em implementation} of functions, which operate on said representations: it essentially describes a given cryptographic primitive as a ``software circuit'' comprising a sequence of bit-wise instructions (e.g., NOT, AND, and OR). Although not a general-purpose technique, when applicable, use of bit-slicing can offer advantages that include constant-time execution and hence immunity from cache-based side-channel attacks (see, e.g.,~\cite{SCARV:KasSch:09}). In the design of Serpent~\cite[Page 232]{SCARV:BihAndKnu:98}, there is a suggestion for accelerating bit-sliced implementations via a ``BITSLICE instruction'' or ISE; the suggestion was later investigated in detail by Grabher et al.~\cite{SCARV:GraGroPag:08}. In both cases, the idea is to ``compress'' a sub-circuit, i.e., the sequence of bit-wise instructions representing an $n$-input Boolean function, into a Look-Up Table (LUT): the LUT is first configured with a truth table for the function, then accessed to apply said function. % =============================================================================
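To make the bit-slicing idea concrete, the following minimal sketch (not drawn from the cited works) evaluates a one-bit full adder as a ``software circuit'' of bit-wise instructions, processing 8 independent lanes in parallel by packing one bit per lane into an ordinary integer:

```python
# Bit-sliced 1-bit full adder: each integer packs 8 independent
# "lanes", one bit per lane, so a single sequence of bit-wise
# instructions evaluates the circuit for all 8 inputs at once.
def full_adder_sliced(a, b, cin):
    p = a ^ b                   # propagate signal, all lanes at once
    s = p ^ cin                 # sum bit of every lane
    cout = (a & b) | (p & cin)  # carry-out of every lane
    return s, cout

# Enumerate all 8 input combinations, one per lane, and pack the
# per-lane scalar bits into three "slices".
lanes = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
A = sum(a << i for i, (a, _, _) in enumerate(lanes))
B = sum(b << i for i, (_, b, _) in enumerate(lanes))
C = sum(c << i for i, (_, _, c) in enumerate(lanes))

S, COUT = full_adder_sliced(A, B, C)

# Check every lane against the scalar full adder.
for i, (a, b, c) in enumerate(lanes):
    total = a + b + c
    assert (S >> i) & 1 == total & 1
    assert (COUT >> i) & 1 == total >> 1
```

A cipher implemented this way runs the same instruction sequence regardless of the data, which is the source of the constant-time property noted above; the LUT-based ISEs compress such instruction sequences into single table-lookup operations.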
\subsection{imp -- Access the import internals} To be done .... %
\begin{savequote}[8cm] \textit{S'il se produit là quatre spores et pas deux, c'est qu'elles doivent avoir chacune quelque chose de particulier.} If four spores occur and not two, then each must have something special. \qauthor{--- \cite{Janssens1909theorie}} \end{savequote} \chapter{\label{ch:4-discuss}Perspectives} \minitoc \section{Other Single Cell RNAseq studies} Whilst there were no single cell RNAseq studies of spermatogenesis when this research was initiated, it was clearly a popular idea, as there are now an increasing number of such studies (22 as of December 2019), briefly reviewed and compared with our results below. Of note, only two other studies examine mutant phenotypes, and one other study examines transcription factor binding: we discuss these further below. Finally, no other study has used an SDA-based approach.
%pre-meiotic foetal development~\cite{Li2017SingleCell}.
\begin{table}[htbp] \centering \begin{tabular}{@{}llll@{}} \toprule Study & Cells & Organism & Method \\ \midrule \cite{Hermann2018Mammalian} & 62,141 & Human \& Mouse & Chromium \& SMARTer (C1) \\ \cite{Ernst2019Staged} & 53,510 & Mouse & Chromium \\ \cite{Green2018Comprehensive} & 34,633 & Mouse & Drop-seq \\ \cite{Sohni2019Neonatal} & 33,585 & Human & Chromium \\ \cite{Jung2019Unified} & 20,322 & Mouse & Drop-seq \\ \cite{Han2018Mapping} & 19,659 & Mouse & Microwell-seq \\ \cite{Grive2019Dynamic} & 15,882 & Mouse & Chromium \\ \cite{Law2019Developmental} & 10,140 & Mouse & Chromium \\ \cite{La2018Identification} & 9,424 & Mouse & Chromium \\ \cite{Fang2019Proteomics} & 6,804 & Mouse & Chromium \\ \cite{Guo2018adult} & 6,490 & Human & Chromium \\ \cite{Xia2019Widespread} & 4,147 & Human \& Mouse & inDrop \\ \cite{Wang2018SingleCell} & 3,028 & Human & Smart-seq2 \\ \cite{Lukassen2018Characterization} & 2,550 & Mouse & Chromium \\ \cite{Vertesy2019Dynamics} & 1,274 & Mouse & SORT-seq \\ \cite{Chen2018Singlecell} & 1,174 & Mouse & Smart-seq2 \\ \cite{Stevant2018Deciphering} & 400 & Mouse & SMARTer (C1)
\\ \cite{Song2016Homeobox} & 201 & Mouse & SMARTer (C1) \\ \cite{Makino2019Single} & 175 & Mouse & SMARTer (C1) \\ \cite{Neuhaus2017Singlecell} & 105 & Human & Tang \\ \cite{Guo2017Chromatin} & 92 & Human & SMARTer (C1) \\ \cite{Liao2019Revealing} & 71 & Mouse & SMARTer (C1) \\ Total & 285,807 & & \\ \bottomrule \end{tabular} \caption{Single cell RNAseq studies of the testis published as of December 2019.} \end{table} Building on a previous study \parencite{Guo2017Chromatin} of scRNAseq of 92 cells from human SSEA4+ hSSCs and c-KIT+ spermatogonia, \cite{Guo2018adult} generated a dataset of 6,490 cells from three human donors using the 10X Chromium platform. This study performed RNA velocity analysis on early germ cells, which revealed two sub-populations of the earliest stem cells (one steady, one committing). Interestingly, it also revealed later stem cells whose velocity vectors were pointing ``backwards'' towards a less differentiated state \parencite{Guo2018adult}, consistent with previous reports of plasticity in spermatogonia \parencite{Brawley2004Regeneration, Nakagawa2010Functional, Hara2014Mouse}. They highlight that \textit{Csf1r} was reported to be expressed only in spermatogonia in mice, but in their human data it is specifically expressed in macrophages \parencite{Guo2018adult}. However, this is not quite what the original study claimed; rather, it reported that there exists a very rare population of THY1+ spermatogonial stem cells that also express low levels of \textit{Csf1r}, in addition to the macrophage expression \parencite{Oatley2009Colony}. Indeed, our mouse data shows specific \textit{Csf1r} expression in macrophages, and in our dataset very few cells have detectable \textit{Thy1}. They also highlight that in their human data \textit{Cxcl12} is detected in Leydig cells, but was previously reported as expressed in Sertoli cells in mice \parencite{Yang2013CXCL12}. However, the cited study only claims that CXCL12 is expressed in Sertoli cells \emph{within} the adult tubule, and it also shows that CXCL12 is detected outside the tubules in Leydig cells.
In addition, the expression in Sertoli cells was weakly co-localised, confined to the basement membrane, and shown using a poor marker of Sertoli cells (GATA4). \cite{Stevant2018Deciphering} isolated a total of 400 NR5A1-GFP+ cells by FACS from E10.5, E11.5, E12.5, E13.5, and E16.5 testes, revealing that both Sertoli and Leydig cells originate from a single common progenitor population, with Sertoli cell differentiation potentially driven by a pulse of expression of \textit{Sry}, \textit{Kdm3a}, and \textit{Nr0b1} among others. \cite{Chen2018Singlecell} generated a dataset of 1,136 cells over 20 time points from mice with Vasa-dTomato and Lin28-YFP which were treated with retinoic acid to synchronise spermatogenesis, in addition to 38 cells from \textit{Spo11\textsuperscript{-/-}} mice. In agreement with our results, they found enrichment of \textit{Prdm9}, \textit{Spo11}, \textit{Gm960}, \textit{Meiob}, \textit{Dmc1}, and \textit{Mcm8} in their leptotene \& zygotene cluster C3. They also detected novel enriched genes in this cluster: \textit{Fbxo47}, \textit{Pparg}, and \textit{Ccnb3}. \textit{Fbxo47\textsuperscript{-/-}} mice were completely infertile, similar to the previously studied \textit{C. elegans} homologue \textit{prom-1} \parencite{Jantsch2007Caenorhabditis}. A follow-up study found \textit{Fbxo47} to be required for DSB repair and synapsis at telomeres, potentially due to inhibition of TRF2 ubiquitination \parencite{Hua2019FBXO47}. To my knowledge, this represents the only other meiotic study (apart from our own) so far where a gene discovery from single cell data resulted in a new publication.
\cite{Chen2018Singlecell} also identified \textit{Sox30} as highly expressed in pachytene and generated \textit{Sox30\textsuperscript{-/-}} mice, which were infertile, in agreement with an earlier study \parencite{Feng2017SOX30}, and SOX30 ChIPseq detected some peaks in promoter regions for genes that were differentially regulated in \textit{Sox30\textsuperscript{-/-}} mice. \cite{Chen2018Singlecell} were also able to profile alternative splicing events, enabling characterisation of \textit{Spo11} isoform expression ($\beta$ in leptotene and $\alpha$ in pachytene). They also investigated MSCI and identified 150 genes as ``escaping'' MSCI (defined by them as expression in the diplotene stage). However, none of these genes show convincing evidence of MSCI escape in our dataset. \cite{Vertesy2019Dynamics} also find genes escaping MSCI in a dataset of 1,274 cells from Dazl-GFP mice (up to the start of pachytene, median transcript count 20,488/cell), particularly \textit{Slitrk2}. However, this gene was below the detection threshold in our dataset. \cite{Green2018Comprehensive} used Drop-seq to generate a dataset of 34,633 cells from mouse testes (average 6,205 UMIs/cell). They were able to identify two new somatic populations, which they describe as an innate lymphoid type II immune cell and a mesenchymal cell. Whilst it is unclear if our dataset also contains the immune cell based on the marker genes they identified, it most certainly contains the mesenchymal cell, which they validated using a Tcf21-creERT2; tdTomato mouse. We identified these cells as telocytes based on previous work defining the markers of these cells \parencite{Marini2018Reappraising}. By using Sox9-EGFP and Amh-cre;mTmG transgenic lines, \cite{Green2018Comprehensive} were able to enrich for Sertoli cells, resulting in 9 sub-clusters.
Interestingly, they also found \textit{Prm2} as a marker of some Sertoli cells, and with intronic/UTR probes were able to determine that these transcripts were transcribed in round spermatids but persist in Sertoli cells after phagocytosis. \cite{Green2018Comprehensive} is the only other study to perform transcription factor motif analysis in addition to our own, and they reach some different conclusions worth discussing. Compared to our motif analysis, one major difference is that they are using MEME-ChIP (which is a combination of MEME, DREME [a non-probabilistic, regular-expression-based motif finder], and CentriMo [which looks for central enrichment of known motifs]). The CentriMo manual cautions that when the number of sequences is large, motifs that are only slightly similar can show significant enrichment. It appears that many of their \emph{unknown} motifs are from the DREME method. Our most commonly found motif is the Sp1 family motif, and while they don't find this, they do find a reverse-complemented version, which they assign to Bcl6b. However, the Bcl6b motif in the HOCOMOCO database does not appear greatly similar to the motif they find. Some motifs which they classify as unknown clearly match motifs we found with known transcription factors, such as ZNF143 in their gene group 1. They also find a motif which matches well to our CREM-t motif in their group 6, but they do not link it to CREM, likely due to the short motifs that DREME typically produces. Many of their other unknown motifs are, however, very long with limited information score diversity. In fact, they are close to the maximum length possible for DREME. It is plausible these ``motifs'' are actually just the promoter sequence of ampliconic genes, such as the takusan family of genes on chromosome 14, and so their enrichment does not imply specific transcription factor binding.
\cite{Grive2019Dynamic} sampled a total of 15,882 cells from five different postnatal timepoints during the initial wave of spermatogenesis (PND 6, 14, 18, 25, and 30) and so were able to profile how gene expression changes during this testis maturation process, as well as assess the proportions of different cell types at each stage. For example, DNA repair genes such as \textit{FancJ} (\textit{Brip1}), \textit{Brca1}, \textit{Rad51}, and \textit{Atm} apparently have increased expression in adult testis compared to the equivalent stage during the first wave of spermatogenesis. \cite{Ernst2019Staged} generated a dataset of 53,510 single cells from both adult (8-9 weeks) and juvenile (PND 5–35, in 5 day steps) mice. These results agree with our own, including, for example, the identification of \textit{Pou5f2} as a marker of late prophase I. In addition, they profiled mice with an additional chromosome, human chromosome 21 (Tc1 mice), but detected minimal differences in transcription other than a relative lack of post-meiotic cell types due to metaphase arrest, reminiscent of our \textit{Mlh3} mutants. \cite{Ernst2019Staged} also looked at MSCI dynamics and, in agreement with our results, also found a sharp drop in the sex-to-autosome expression ratio at the zygotene-pachytene transition followed by a gradual reactivation. They also highlighted the Ssxb family as one of the first to be expressed post MSCI, and confirmed expression using ISH. This study also performed CUT\&RUN to assay H3K4me3, H3K9me3, and H3K27ac, confirming an enrichment of H3K9me3 on the X chromosome of spermatids, and at spermatid-specific genes in meiosis, likely deposited by SETDB1 \parencite{Hirota2018SETDB1}. \cite{Xia2019Widespread} generated a dataset of 4,147 cells from both human and mouse testis (average UMI 7,459) and note that >90\% of all protein coding genes are expressed in germ cells (cf.
62\% in somatic cells), in line with previous observations \parencite{Soumillon2013Cellular, Schmidt1996Transcriptional}. They propose a model of pervasive transcriptional scanning in the testis germ cells in order to promote transcription-coupled repair and hence reduced germline mutations. Consistently, they found reduced mutation rates in expressed vs unexpressed genes within the testis, and in addition this effect was higher on the template strand (and on the coding strand upstream). They also found human cells had much higher expression of some genes, such as \textit{CXCL6} and \textit{GAPDH}, whereas mouse had higher expression of \textit{Fabp9} and \textit{Sord} \parencite{Xia2019Widespread}. \cite{Law2019Developmental} generated a dataset of 10,140 cells from E16.5, P0, P3, and P6 ID4-eGFP mice (median 20,546 UMI/cell), and showed that ID4-eGFP+ prospermatogonia from E16.5 mice were able to establish colonies in adult germ-cell-depleted recipient testes. \cite{Fang2019Proteomics} generated a dataset of 3,659 wild-type and 3,145 \textit{Akap4\textsuperscript{-/-}} cells; AKAP4 is a major component of the fibrous sheath of spermatids \parencite{Eddy2003Fibrous}. Apart from our own and the human chromosome 21 knock-in by \cite{Ernst2019Staged}, this is the only study to have performed single cell RNAseq on mutant mice. \cite{Lukassen2018Characterization} generated a dataset of 2,550 cells from adult mice. They claim high expression of \textit{Pou5f1} in round spermatids, but no expression was seen at this stage in our study. They also detect expression of \textit{Kit} in round spermatids, and indeed in our study we detect the occasional transcript at this stage.
%\cite{Hermann2018Mammalian}, generated the largest dataset of 62,141 cells from both mice and humans.
%As part of a multi tissue whole organism atlas (>400,000 cells from >50 mouse tissues and cell cultures) \cite{Han2018Mapping} generated a dataset of 19,659 cells from testis using Microwell-Seq.
\cite{Wang2018SingleCell} sequenced 2,854 cells from normal humans, and 174 from a nonobstructive azoospermia patient. Many of the marker genes highlighted and validated by ISH agree with our assignments in mice, including \textit{Hmga1} in spermatogonia, \textit{Ovol2} in pachytene and diplotene, and \textit{Tex29} in early spermatids. \cite{Sohni2019Neonatal} generated a dataset of 18,723 and 14,862 cells from human adult and newborn testis respectively, and so were able to compare and investigate neonatal germ cell development. In comparison to other datasets, our dataset has the lowest UMI count per cell (1,312), roughly a fifth of the other Drop-seq dataset and a tenth of the 10X Chromium datasets. Despite this, by using matrix factorisation rather than hard clustering as in the other studies, it appears we are able to gain as many, and often more, biological insights from our data. \section{Observational Extensions}
% ATAC, proteome, chipseq - single cell, prior info (motifs), lineage tracing?, spatial seq,
% Other potential target genes
In total these studies represent over 275,000 cells, and so there may be significant advantages in combining them. One challenge of this would be dealing with batch effects from different species, genetic backgrounds, developmental time points, cell capture technology, and sequencing platforms. We have shown SDA can capture batch effects and so could provide one possible solution to this problem, although we have not tested, for example, combining 10X and Drop-seq or human and mouse. Even within one of these studies there could be significant advantages to using a matrix factorisation approach. For example, with multiple developmental time points one might expect to find a main component for each cell type, and then ``modifier'' components representing how the transcription of that cell type changes before, during, and after the first wave of spermatogenesis.
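To illustrate why a factorisation can separate cell identity from such ``modifier'' effects where hard clustering cannot (a toy numerical construction, not the SDA model itself), a synthetic cells-by-genes matrix built from one cell-type component plus one continuous modifier component is exactly rank two, and a simple SVD recovers that structure:

```python
# Toy illustration (not the SDA model): a cells x genes matrix generated as
# cell-type component + continuous "modifier" component is rank 2, and SVD
# separates the two axes of variation that hard clustering would conflate.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_genes = 200, 50

# Component 1: a cell-type programme shared by all cells of that type.
type_score = rng.choice([0.0, 1.0], size=n_cells)   # which cells are this type
type_loading = rng.normal(size=n_genes)             # its gene loadings

# Component 2: a developmental "modifier" varying continuously across cells.
time_score = rng.uniform(0, 1, size=n_cells)        # per-cell stage
mod_loading = rng.normal(size=n_genes)

X = np.outer(type_score, type_loading) + 0.3 * np.outer(time_score, mod_loading)

# Singular values beyond the first two are numerically zero: two components
# fully explain the data, even though cells vary continuously along the modifier.
s = np.linalg.svd(X, compute_uv=False)
assert s[2] < 1e-8 * s[0]
```

A hard clustering of the rows would have to discretise the continuous modifier axis into arbitrary groups, whereas the factorisation represents it as a single additional component.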
Whilst some studies \parencite{Ernst2019Staged} have used other (bulk) assays alongside scRNAseq, there is much to be gained from single cell resolution profiling of protein abundance, protein-DNA binding, chromatin accessibility, methylation/DNA modifications, nuclear structure, and spatial aspects; even more so when these modes are profiled simultaneously in single cells. The main challenge for this has been experimental limitations, but now many methods are available to profile one or multiple of these aspects \parencite[reviewed in][]{Chappell2018SingleCell, Hu2018Single, Stuart2019Integrative, Heriche2019Integrating}. SDA is also able to perform group factor analysis \parencite{Hore2015Latent}, and a similar analysis package has been applied to single cell multiomic data \parencite{Argelaguet2019Multiomics}. Chromatin accessibility assays would be particularly interesting given the dramatic chromatin remodelling that occurs during spermatogenesis. The genome also undergoes dramatic de-methylation and re-methylation during gametogenesis, and so methylation assays would also be intriguing. Protein-DNA assays such as single cell ChIP-seq could help to directly elucidate the gene regulatory programme in much more detail than we have been able to achieve here by inferring motifs. Due to the spatial layout of the different stages of spermatogenesis, in addition to the cycle of the seminiferous epithelium, spatial sequencing seems particularly apt for the investigation of the transcriptome of the testis. The spatial structure would help to more precisely determine the cell identity of ambiguous transcriptomes, and would help to reveal additional structure, for example how the supporting Sertoli cells differ at different stages of the seminiferous cycle. Many single cell (and even bulk) studies of the testis have focused on the spermatogonia in an effort to identify the true spermatogonial stem cell.
Single cell genetic lineage tracing could help to answer this question by providing a molecular readout of the cell division history for each cell \parencite[reviewed in][]{Baron2019Unravelling, McKenna2019Recording}. For many of these aspects some information is already known, for example which motifs are present at which promoters, and which genes cause infertility or are involved in meiotic processes. Incorporating this prior and multi-omic information into the decomposition would likely help to disambiguate co-expressed gene sets into finer, more distinct functional groupings, in addition to providing insight into the molecular processes that underlie the functional changes that occur during the hugely complex and intricate performance that is spermatogenesis. \section{Other Zcwpw1 Papers \& Extensions} Prior to our study, the only work linking \textit{Zcwpw1} to meiosis was the study by \cite{Soh2015Gene}, who identified that \textit{Zcwpw1} was specifically expressed in foetal prophase I in female mice. This led to the work by \cite{Li2019histone} revealing that \textit{Zcwpw1} is required for fertility in males, and this work was compared to our results in Chapter~\ref{ch:3-Zcw}. Since finishing our study, two other reports have been made public investigating the role of \textit{Zcwpw1} in meiotic recombination. One of these studies, by \cite{Mahgoub2019Dual}, was motivated by a reanalysis of the data from \cite{Chen2018Singlecell}, showing high co-expression of \textit{Zcwpw1} with \textit{Prdm9}. They also show co-evolution of \textit{Zcwpw1} with \textit{Prdm9}, but our work specifically highlights the association with the SET and SSXRD domains of \textit{Prdm9} (required for methylation) compared to the other domains. Both studies used the same \textit{Zcwpw1\textsuperscript{-/-}} mouse, and both studies found that DSB positioning is unchanged.
They differ in that Mahgoub and colleagues used END-seq, showing altered DSB repair post homologue invasion, while we used DMC1 SSDS, showing altered repair timing at individual hotspots and its association with PRDM9 binding. The two studies also investigate different model organisms. We used an \textit{in vitro} system to study the human ZCWPW1 and PRDM9 proteins, identifying >800,000 ZCWPW1 binding sites ($p<10^{-6}$), the top \textasciitilde10,000 of which are almost all PRDM9-bound sites. Mahgoub and colleagues used mice, identifying 4,300 ZCWPW1 binding sites ($p<10^{-3}$), again mainly PRDM9-bound sites. The additional weaker peaks we found revealed a more subtle CpG influence on ZCWPW1 binding, which also impacts stronger peaks. While we used chimp PRDM9 to show allele specificity, Mahgoub and colleagues used hybrid mice. Some analyses were unique to each study. For example, we counted stage-specific DMC1 foci by immunofluorescence, revealing DSB repair delay. We also investigated the subnuclear positioning of ZCWPW1 using immunofluorescence, discovering an interesting new telomeric localisation. Using biotin-streptavidin pulldown assays, Mahgoub and colleagues show that \emph{dual} modified H3K4me3-H3K36me3 peptides had the highest binding affinity for ZCWPW1, while we showed that PRDM9-bound sites (with the dual mark) are stronger recruiters of ZCWPW1 than H3K4me3 alone. Another group, having previously generated a different \textit{Zcwpw1\textsuperscript{-/-}} mouse \parencite{Li2019histone}, generated a new mutant with three point mutations in the zf-CW domain, \textit{Zcwpw1\textsuperscript{W247I/E292R/W294P}}, rendering the H3K4me3 recognition capacity non-functional \parencite{Huang2019histone}. This mouse was also infertile, with complete testicular azoospermia and incomplete synapsis observed by SYCP1/3 staining.
ChIPseq against ZCWPW1 revealed the expected high overlap with H3K4me3 and DMC1 marks in WT testis, and a lack of peaks in \textit{Zcwpw1\textsuperscript{-/-}} and \textit{Zcwpw1\textsuperscript{W247I/E292R/W294P}} mice. ChIPseq against ZCWPW1 in \textit{Prdm9\textsuperscript{-/-}} mice resulted in very few peaks, of which >80\% were within 5~kb of a TSS. While these papers clearly agree that ZCWPW1 aids DSB repair, exactly how this is achieved remains unclear. In addition, the missing link between PRDM9 and SPO11 recruitment remains elusive. As previously discussed, \textit{Zcwpw2} is a promising candidate for this role, just as \textit{Zcwpw1} was, and further work will reveal what involvement (if any) \textit{Zcwpw2} has in meiotic recombination. It is likely that many other genes revealed by single cell RNA sequencing will be key players in meiosis and spermatogenesis, and hopefully this work will enable some of the many mysteries to be unveiled.
\SetAPI{J-C} \section{network.offlinemode.supported} \label{configuration:NetworkOfflinemodeSupported} \ClearAPI \TODO
%% GENERATED USAGE REFERENCE - DO NOT EDIT
\begin{longtable}{ l l } \hline \textbf{Used in bean} & \textbf{Module} \\ \endhead \hline \type{com.koch.ambeth.service.ioc.ServiceModule} & \\ \hline \type{com.koch.ambeth.service.ioc.ServiceModule} & \\ \hline \end{longtable}
%% GENERATED USAGE REFERENCE END
\type{com.koch.ambeth.config.ServiceConfigurationConstants.OfflineModeSupported}
\begin{lstlisting}[style=Props,caption={Usage example for \textit{network.offlinemode.supported}}]
network.offlinemode.supported=false
\end{lstlisting}
\subsection{Variation of Parameters} \noindent Although the method of undetermined coefficients is useful and relatively quick because it is algebra-based, it cannot solve many equations, even simple-looking second order equations, like \begin{equation*} y'' + y = \csc{x} \end{equation*} The method also requires guessing, meaning for very complicated forms of $b(x)$, things can get very messy.\\ \noindent Instead, we'll look at a more rigorous, calculus-based approach developed by Lagrange called ``variation of parameters''. We'll first see how to apply the method to second order linear ODEs with constant coefficients, like forced vibrations, and then we'll extend the method to order $n$. \input{./higherOrder/nonHomeg/variationParameters_secondOrder.tex} \input{./higherOrder/nonHomeg/variationParameters_higherOrder.tex}
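For orientation, here is a worked sketch of what the method yields for the example above, using the standard second order formulas $u_1' = -y_2\,b(x)/W$ and $u_2' = y_1\,b(x)/W$, where $W$ is the Wronskian of the homogeneous solutions:

```latex
% Homogeneous solutions of y'' + y = 0: y_1 = cos x, y_2 = sin x,
% with Wronskian W = y_1 y_2' - y_2 y_1' = cos^2 x + sin^2 x = 1.
\begin{align*}
u_1' &= -\frac{y_2\, b(x)}{W} = -\sin{x}\,\csc{x} = -1
  &&\implies u_1 = -x, \\
u_2' &= \frac{y_1\, b(x)}{W} = \cos{x}\,\csc{x} = \cot{x}
  &&\implies u_2 = \ln\left|\sin{x}\right|, \\
y_p &= u_1 y_1 + u_2 y_2
  = -x\cos{x} + \sin{x}\,\ln\left|\sin{x}\right|.
\end{align*}
```

Note the $-x\cos{x}$ term: no guess of the undetermined-coefficients form would produce it from $\csc{x}$, which is why the algebraic method fails here.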
\documentclass[11pt]{article}
\usepackage[left=25mm, right=25mm, top=25mm, bottom=25mm, includehead=true, includefoot=true]{geometry}
\usepackage{graphicx}
\usepackage{url}
\usepackage{natbib} % For referencing
\usepackage{authblk} % For author lists
\usepackage[parfill]{parskip} % Line between paragraphs
\pagenumbering{gobble} % Turn off page numbers
% Make all headings the same size (11pt):
\usepackage{sectsty}
\sectionfont{\normalsize} \subsectionfont{\normalsize} \subsubsectionfont{\normalsize} \paragraphfont{\normalsize}
\renewcommand{\abstractname}{Summary} % Make 'abstract' be called 'Summary'
% This makes links and bookmarks in the pdf output (should be last usepackage command because it overrides lots of other commands)
\usepackage[pdftex]{hyperref}
\hypersetup{pdfborder={0 0 0} } % This turns off the stupid colourful border around links
% From pandoc:
% https://github.com/jgm/pandoc-templates/blob/master/default.latex
$if(pagestyle)$
\pagestyle{$pagestyle$}
$endif$
$if(csl-refs)$
\newlength{\cslhangindent}
\setlength{\cslhangindent}{1.5em}
\newlength{\csllabelwidth}
\setlength{\csllabelwidth}{3em}
\newlength{\cslentryspacingunit} % times entry-spacing
\setlength{\cslentryspacingunit}{\parskip}
\newenvironment{CSLReferences}[2] % #1 hanging-ident, #2 entry spacing
 {% don't indent paragraphs
  \setlength{\parindent}{0pt}
  % turn on hanging indent if param 1 is 1
  \ifodd #1
  \let\oldpar\par
  \def\par{\hangindent=\cslhangindent\oldpar}
  \fi
  % set entry spacing
  \setlength{\parskip}{#2\cslentryspacingunit}
 }%
 {}
\usepackage{calc}
\newcommand{\CSLBlock}[1]{#1\hfill\break}
\newcommand{\CSLLeftMargin}[1]{\parbox[t]{\csllabelwidth}{#1}}
\newcommand{\CSLRightInline}[1]{\parbox[t]{\linewidth - \csllabelwidth}{#1}\break}
\newcommand{\CSLIndent}[1]{\hspace{\cslhangindent}#1}
$endif$
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
%
% https://stackoverflow.com/questions/41052687/rstudio-pdf-knit-fails-with-environment-shaded-undefined-error
$if(highlighting-macros)$
$highlighting-macros$
$endif$
% ************** TITLE AND AUTHOR INFORMATION **************
\title{Disaggregating origin-destination data: methods, implementations, and optimal parameters for generating accurate route networks for sustainable transport planning}
\author[1]{Author A\thanks{author.a@university.edu}}
\author[1]{Author B\thanks{author.b@university.edu}}
\author[2]{Author C\thanks{author.c@university.edu}}
\affil[1]{Department of Computer Science, \LaTeX\ University}
\affil[2]{Department of Mechanical Engineering, \LaTeX\ University}
\renewcommand\Authands{ and } % correct last comma in author list
\begin{document}
\maketitle
% ************** ABSTRACT/SUMMARY **************
\begin{abstract} \centering Summary of no more than 100 words \textit{(this needs to be pasted into the `Abstract' box on EasyChair)}. {\bf KEYWORDS:} 5 keywords or short phrases relevant to the work \textit{(these need to be pasted into the `Keywords' box on EasyChair)}. \end{abstract}
% ************** MAIN BODY OF THE PAPER **************
$body$
% \section{Introduction to guidelines} % % The purpose of providing these notes is to standardise the format of the short papers submitted to GISRUK 2015. These notes are based on author guidelines previously produced for the GISRUK conference series which in turn were based on other guidelines. % % The pages should have margins of 2.5 cm all round. The base font should be Times New Roman 11pt, or closest equivalent and text should be single spaced. Each section of the paper should be numbered. Section headings should be left-justified and given in bold type. A slightly larger font should be used for the title of the paper and the authors (16pt and 14pt respectively). The first line of each paragraph in each section should \textbf{NOT} be indented.
% % \subsection{Sub-sections} % % Sub-sections should also be numbered as shown here. The sub-section heading should be left-justified and given in bold type (11pt). % % \section{Figures, Tables and Equations,} % % % Tables should be as shown below (or as close as possible) and should be referenced as Table~\ref{first_table} in the text. % % % % \begin{table}[htdp] % % \caption{GISRUK Conferences} % % \begin{center} % % \begin{tabular}{c|c} % % \hline % % Year & City \\ % % \hline % % 2007 & Maynooth \\ % % 2008 & Manchester \\ % % 2009 & Durham \\ % % 2010 & UCL \\ % % 2011 & Portsmouth \\ % % 2012 & Lancaster\\ % % \hline % % \end{tabular} % % \end{center} % % \label{first_table} % % \end{table}% % % Equations should be centred on the page and numbered consecutively in the right-hand margin, as below. They should be referred to in the text as Equation~\ref{first_equation}. % % \begin{equation} % E=mc^2 % \label{first_equation} % \end{equation} % % Figures should be presented as an integral part of the paper and should be referred to as Figure~\ref{first_figure} in the text. % % \begin{figure}[htbp] \begin{center} % \resizebox{0.3\textwidth}{!}{ % \includegraphics{lancaster.png} % } \caption{Location of Lancaster University} \label{first_figure} \end{center} \end{figure} % % % % \section{References and Citations} % % A list of references cited should be provided at the end of the paper using the Harvard format as shown below. Citations of these within the text should be given as follows: papers such as an interesting one by \citet{HARVEY:2006} and also interesting books \citep{DAY:2006}. % % \section{File format} % % Papers should be submitted in unrestricted \textbf{pdf} format. Authors are requested to keep to the word limit of 1500 words. 
% % \section{Acknowledgements} % % Acknowledgement should be made of any funding bodies who have supported the work reported in the paper, of those who have given permission for their work to be reproduced or of individuals whose particular assistance is due recognition. Acknowledge data providers here where appropriate. % % \section{Biography} % All contributing authors should include a biography of no more than 50 words each outlining their career stage and research interests. % % % % ************** REFERENCES ************** % % % \bibliographystyle{apa} % % \bibliography{references.bib}
\end{document}
%!Mode:: "TeX:UTF-8"
\section{Rack-Mountable Equipment} A \textbf{rack} is the whole cabinet, usually 42U tall. A \textbf{rack rail} is a vertically stretching metal strip on each side of a server rack, with holes for fastening equipment. A \textbf{chassis} might refer to a support board on top of each shelf. 1U (1 rack unit) = 1.75 inches (44.45 mm).
\section{awards}
\begin{entrylist}
%------------------------------------------------
\entry
{2011}
{Postgraduate Scholarship}
{School of Business, The University of California}
{Awarded to the top student in their final year of a Bachelor's degree.}
%------------------------------------------------
\end{entrylist}
\documentclass[a4paper,11pt]{article}
\input{packages.tex} \input{tikz.tex} \input{thmstyle.tex} \input{macros.tex}
% opening
\title{Notes on compatible composita} \author{}
\begin{document}
\maketitle
%\begin{abstract}
%\end{abstract}
%\tableofcontents
%\clearpage
In this document, we investigate compatibility questions in the case of the compositum of two finite fields $\mathbb{F}_{p^m}$ and $\mathbb{F}_{p^n}$, where the integers $m$ and $n$ are coprime. In that case, the compositum is $\mathbb{F}_{p^{mn}}$. We denote by $\zeta_m$ (resp. $\zeta_n$) a primitive $m$-th root of unity (resp. $n$-th). We let $\alpha_m$ be a solution of Hilbert 90 in $\mathbb{F}_{p^m}\otimes\mathbb{F}_{p}(\zeta_m)$ for the root $1\otimes\zeta_m$, and we define $\alpha_n$ similarly. Now let $u, v$ be integers such that \[ un+vm = 1. \] We now set \[ \zeta=\zeta_m^u\zeta_n^v \] and \[ \alpha=\alpha_m^u\alpha_n^v, \] such that $\zeta$ is a primitive $mn$-th root of unity, $\zeta^m=\zeta_n$, $\zeta^n=\zeta_m$, and $\alpha$ is a solution of Hilbert 90 in $\mathbb{F}_{p^{mn}}\otimes\mathbb{F}_{p}(\zeta)$ for the root $1\otimes\zeta$. We also have that \[ (\sigma\otimes\Id)(\alpha^n) = (1\otimes\zeta_m)\alpha^n \] and \[ (\sigma\otimes\Id)(\alpha^m) = (1\otimes\zeta_n)\alpha^m, \] but not necessarily $\alpha^n=\alpha_m$ and $\alpha^m=\alpha_n$. In fact, we have \begin{align*} \alpha^n &= \alpha_m^{un}\alpha_n^{vn} \\ &= \alpha_m^{1-vm}a_n^v \\ &= \cfrac{a_n^v}{a_m^v}\alpha_m \end{align*} where $a_m=\alpha_m^m\in\mathbb{F}_p(\zeta_m)$ and $a_n = \alpha_n^n\in\mathbb{F}_p(\zeta_n)$. We also get \[ \alpha^m = \cfrac{a_m^u}{a_n^u}\alpha_n \] in the same way. \section{Example with $p=7$, $m=2$, $n=3$} Let us start with an example where we do not have to worry about tensor products, because all the needed roots are in the prime field. We take $p=7$, $m=2$ and $n=3$. Since $p-1=6$ is divisible by both $2$ and $3$, we have primitive square and cube roots of unity in $\mathbb{F}_7$.
We investigate the possible embeddings $\mathbb{F}_{7^2}\emb\mathbb{F}_{7^6}$ and $\mathbb{F}_{7^3}\emb\mathbb{F}_{7^6}$, as shown in Figure~\ref{fig:p7}. \begin{figure} \centering \begin{tikzpicture} \node (1) at (0,0) {$\mathbb{F}_{7}$}; \node (2) at (-1,1) {$\mathbb{F}_{7^2}$}; \node (3) at (1,1) {$\mathbb{F}_{7^3}$}; \node (6) at (0,2) {$\mathbb{F}_{7^6}$}; \draw (1) -- (2); \draw (1) -- (3); \draw (2) -- (6); \draw (3) -- (6); \end{tikzpicture} \phantom{and} \begin{tikzpicture} \node (1) at (0,0) {$1$}; \node (-1) at (-1,1) {$-1$}; \node (2) at (0.8,1) {$2$}; \node (4) at (1.2,1) {$4$}; \node (3) at (-0.4,2) {$3$}; \node (5) at (0.4,2) {$5$}; \draw (1) -- (2); \draw (1) -- (-1); \draw (1) -- (4); \draw (-1) -- (3); \draw (-1) -- (5); \draw (2) -- (3); \draw (4) -- (5); \end{tikzpicture} \caption{Compositum with $p=7$, $m=2$, $n=3$, and the associated roots.} \label{fig:p7} \end{figure} In the same figure, we see that we have one primitive $2$nd root of unity, namely $-1$; two primitive $3$rd roots of unity, namely $2$ and $4$; and two primitive $6$th roots of unity, namely $3$ and $5$. We have the compatibility relations $3^3=5^3=-1$, $3^2=2$ and $5^2=4$. We define \[ \mathbb{F}_{7^2}\cong \mathbb{F}_7[y]/(y^2-3)\cong \mathbb{F}_7(u), \] \[ \mathbb{F}_{7^3}\cong \mathbb{F}_7[x]/(x^3-2)\cong \mathbb{F}_7(t), \] and \[ \mathbb{F}_{7^6}\cong \mathbb{F}_7[z]/(z^6-5)\cong \mathbb{F}_7(w). \] We embed $\mathbb{F}_{7^2}$ in $\mathbb{F}_{7^6}$ by sending $u$ to $w^3$ and we embed $\mathbb{F}_{7^3}$ in $\mathbb{F}_{7^6}$ by sending $t$ to $3w^4$. \end{document}
\chapter{Discrete two-body reactions} \label{Sec:2-body} This section describes how the contribution to the transfer matrix is calculated for data consisting of probability densities for the cosine of the angle of deflection in discrete 2-body reactions. In this case, the probability densities are always given in the center-of-mass frame. Because the transfer matrices are defined in terms of laboratory coordinates, the computations involve a boost. For all except very light-weight targets, the mapping from center-of-mass to laboratory coordinates is usually done using Newtonian mechanics. The discussion given here is therefore Newtonian. A relativistic treatment is presented in Appendix~\ref{Appendix-relativity}. The choice of Newtonian or relativistic mechanics is determined by the value of the \textsf{kinetics} input parameter to \gettransfer\ as explained in Section~\ref{Sec:relativistic}. Of course, relativistic mechanics must be used if either the incident particle or the outgoing particle is a photon. For discrete 2-body reactions, the center-of-mass energy of the emitted particle is determined by the energy $E$ of the incident particle. Consequently, the energy-angle probability density $\picm(\Ecm', \mucm \mid E)$ in the center-of-mass frame is given by \begin{equation} \picm(\Ecm', \mucm \mid E) = g( \mucm \mid E) \, \delta( \Ecm' - \Psi( E ) ) \label{prob_cm} \end{equation} for the function~$\Psi$ given below in Eq.~(\ref{E_cm}). From here on, the energy~$E$ and direction cosine~$\mu$ of the outgoing particle will be marked with the subscript ``lab'' or ``cm'' to indicate that the variable is in the laboratory or center-of-mass frame. 
Because of Eq.~(\ref{prob_cm}), the data for discrete 2-body reactions consist of angular probability densities $g( \mucm \mid E)$ given in the center-of-mass frame, either as a 2-dimensional table for given incident energy~$E$ and direction cosine~$\mucm$ or as Legendre coefficients $c_\ell(E)$ for \begin{equation} g( \mucm \mid E) = \sum_\ell \left( \ell + \frac{1}{2} \right) c_\ell(E)P_\ell( \mucm ). \label{cmLegendre} \end{equation} This section begins with an overview of Newtonian mechanics for discrete 2-body problems. In particular, the form of the function $\Psi$ in Eq.~(\ref{prob_cm}) is derived, as is the boost from the center-of-mass to the laboratory frame. The section closes with an examination of the use of angular probability data $g( \mucm \mid E)$ in the computation of the integrals Eqs.~(\ref{Inum}) and~(\ref{Ien}) used in the calculation of the transfer matrix. \section{Newtonian mechanics of discrete 2-body reactions} \label{Sec:2-body-Newton} Only a summary of the results is given here; for more information, see the reference~\cite{endep}. A relativistic treatment is developed in Appendix~\ref{Appendix-relativity}. It is assumed that the target is at rest and that the incident particle has energy $E$ in laboratory coordinates. The following notations are used for the masses of the particles involved:\\ \Input{$\myi$,}{ the mass of the incident particle,}\\ \Input{$\mtarg$,}{ the mass of the target,}\\ \Input{$\myo$,}{ the mass of the emitted particle,}\\ \Input{$\mres$,}{ the mass of the residual.}\\ For the conversion between center-of-mass and laboratory coordinates, define the mass ratios $$ \gamma = \frac { \myi \myo } { ( \myi + \mtarg )^2 }, $$ $$ \beta = \frac { \mres } { \myo + \mres }, $$ and $$ \alpha = \frac {\beta \mtarg} { \myi + \mtarg }. $$ Velocity vectors are printed in bold face $\textbf{V}$ with magnitude (speed) in math italics $$ V = | \textbf{V} |. 
$$ For a target at rest and an incident particle with energy $E$ in laboratory coordinates, the center of mass moves in the direction of motion of the incident particle with velocity $\Vtrans$ having magnitude squared \begin{equation} \vtrans^2 = \Vtrans^2 = \frac{ 2\myi E} { ( \myi + \mtarg )^2 }. \label{Vtrans-length} \end{equation} The reaction may have a nonzero energy value $Q$, arising for example from the excitation level of the target and/or residual nucleus in inelastic scattering. A nonzero $Q$ value may also arise from the mass difference in a knock-on reaction. It follows from conservation of energy and momentum that in center-of-mass coordinates the energy of the emitted particle is given by \begin{equation} \Ecm' = \Psi( E ) = \alpha E + \beta Q. \label{E_cm} \end{equation} This defines the function $\Psi$ appearing in Eq.~(\ref{prob_cm}). The speed of the outgoing particle in the center-of-mass frame is \begin{equation} \vcm' = |\Vcm'| = \sqrt{ \frac{ 2\Ecm' }{\myo} }. \label{V-cm-outgoing} \end{equation} It follows from Eq.~(\ref{E_cm}) that for an endothermic reaction ($Q < 0)$, the threshold is at $$ E = \frac{-\beta Q}{\alpha}. $$ \subsection{The boost to the laboratory frame} As illustrated in Figure~\ref{Fig:2-body-boost}, the boost from center-of-mass to laboratory coordinates is obtained by adding the velocities \begin{equation} \Vlab' = \Vtrans + \Vcm'. \label{V-lab-2-body} \end{equation} Consequently, the energy of the outgoing particle in the laboratory frame is $$ \Elab' = \frac{ \myo {\Vlab'}^2 }{2} = \frac{ \myo }{2} ( \vtrans^2 + {\vcm'}^2 + 2\Vtrans \cdot \Vcm' ). $$ In terms of the notation Eq.~(\ref{E_cm}) and \begin{equation} \Etrans' = \frac{ \myo \vtrans^2 }{2} = \gamma E, \label{E_trans} \end{equation} this equation takes the form \begin{equation} \Elab' = \Etrans' + \Ecm' + 2 \mucm \sqrt{ \Etrans' \Ecm' }. 
\label{E_lab} \end{equation} Here, $\mucm$ is the direction cosine defined by the relation $$ \Vtrans \cdot \Vcm' = \mucm \vtrans \vcm'. $$ \begin{figure} \input{fig4-1} \end{figure} It is also necessary to determine the direction cosine $\mulab$ in the laboratory frame for $$ \Vtrans \cdot \Vlab' = \mulab \vtrans \vlab'. $$ This is most easily derived from the trigonometry in Figure~\ref{Fig:2-body-boost} $$ \mulab \vlab' = \vtrans + \mucm \vcm'. $$ In terms of the energies defined in Eqs.~(\ref{E_cm}), (\ref{E_trans}), and~(\ref{E_lab}), this relation takes the form \begin{equation} \mulab = \frac { \sqrt{ \Etrans' } + \mucm \sqrt{ \Ecm'} } {\sqrt{\Elab'}} \quad \text{if $\Elab' > 0$.} \label{get_mu} \end{equation} It is clear from Eq.~(\ref{V-lab-2-body}) that $$ \Elab' = \frac{\myo {\vlab'}^2}{2} = 0, $$ if and only if $$ \Vcm' = - \Vtrans. $$ In this case, the value of $\mulab$ is undefined. \section{Computation of the transfer matrix from data for discrete 2-body reactions} Consider the use of data $g(\mucm \mid E)$ in Eq.~(\ref{prob_cm}) in the computation of integrals for the transfer matrix Eqs.~(\ref{Inum}) and~(\ref{Ien}), either as tables or as Legendre coefficients in Eq.~(\ref{cmLegendre}). In these integrals the multiplicity is always $M(E) = 1$ for discrete 2-body reactions. The discussion given here concentrates on the evaluation of the integral in Eq.~(\ref{Inum}). The integral in Eq.~(\ref{Ien}) differs only in that its integrand contains an extra factor $\Elab'$, the energy of the outgoing particle in the laboratory frame. Because the probability density data $g(\mucm \mid E)$ in Eq.~(\ref{prob_cm}) is given in center-of-mass coordinates, it is desirable to transform the integrals Eqs.~(\ref{Inum}) to the center-of-mass frame. 
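Before carrying out that transformation, note that the Newtonian relations of Section~\ref{Sec:2-body-Newton}, Eqs.~(\ref{E_cm})--(\ref{get_mu}), are straightforward to sketch in code. The following Python function is an illustrative stand-alone implementation, not code from \gettransfer; as a consistency check, forward elastic scattering ($\mucm = 1$, $Q = 0$, equal incident and outgoing masses) must leave the laboratory energy unchanged.

```python
from math import sqrt

def boost_to_lab(E, mu_cm, m_i, m_targ, m_o, m_res, Q):
    """Illustrative Newtonian 2-body kinematics; returns (E_lab', mu_lab).

    Implements E'_cm = alpha*E + beta*Q, E'_trans = gamma*E,
    E'_lab = E'_trans + E'_cm + 2*mu_cm*sqrt(E'_trans*E'_cm), and
    mu_lab = (sqrt(E'_trans) + mu_cm*sqrt(E'_cm)) / sqrt(E'_lab),
    assuming E'_lab > 0 (mu_lab is undefined otherwise).
    """
    gamma = m_i * m_o / (m_i + m_targ) ** 2
    beta = m_res / (m_o + m_res)
    alpha = beta * m_targ / (m_i + m_targ)
    E_cm = alpha * E + beta * Q        # center-of-mass energy of the product
    E_trans = gamma * E                # energy of the center-of-mass motion
    E_lab = E_trans + E_cm + 2.0 * mu_cm * sqrt(E_trans * E_cm)
    mu_lab = (sqrt(E_trans) + mu_cm * sqrt(E_cm)) / sqrt(E_lab)
    return E_lab, mu_lab

# Check: forward elastic scattering of a neutron (mass 1) on carbon (mass 12)
# must return the incident energy and mu_lab = 1.
E_lab, mu_lab = boost_to_lab(E=2.0, mu_cm=1.0, m_i=1.0, m_targ=12.0,
                             m_o=1.0, m_res=12.0, Q=0.0)
assert abs(E_lab - 2.0) < 1e-12
assert abs(mu_lab - 1.0) < 1e-12
```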
The center-of-mass form of the integral Eq.~(\ref{Inum}) is \begin{equation} \Inum_{g,h,\ell} = \int_{\calE_g}dE \, \sigma ( E ) w(E) \widetilde \phi_\ell(E) \int_{\mucm} d\mucm \, g(\mucm \mid E) \int_{\Ecm'} d\Ecm' \, P_\ell( \mulab ) \, \delta(\Ecm' - \Psi(E) ) \label{cmint} \end{equation} with $\Psi(E)$ as given by Eq.~(\ref{E_cm}). The range of integration over $\mucm$ and $\Ecm'$ in Eq.~(\ref{cmint}) is such that for fixed incident energy $E$ in $\calE_g$, the energy $\Elab'$ of the outgoing particle given by Eq.~(\ref{E_lab}) lies in~$\calE_h'$. Integration of Eq.~(\ref{cmint}) with respect to $\Ecm'$ yields the result that \begin{equation} \Inum_{g,h,\ell} = \int_{\calE_g} dE \, \sigma ( E ) w(E) \widetilde \phi_\ell(E) \int_{\mucm} d\mucm \, P_\ell( \mulab ) g(\mucm \mid E), \label{muEint} \end{equation} where it is understood that the direction cosine $\mulab$ in the laboratory frame is calculated from Eq.~(\ref{get_mu}) and that the range of integration over $\mucm$ is such that $\Elab'$ is in~$\calE_h'$. The \gettransfer\ code steps through the data $g(\mucm \mid E)$ to compute contributions to the entries of the transfer matrix in Eq.~(\ref{muEint}). The case of tabular data with direct interpolation (Section~\ref{Sec:direct-interp}) is illustrated in the laboratory frame in Figure~\ref{Fig:2-body-region-lab}. This figure shows an integration region identified by an incident energy bin~$\calE_g$ and an outgoing energy bin~$\calE_h'$. The data $g(\mucm \mid E)$ are given at incident energies $E_{k-1}$ and $E_k$, such that the interval $E_{k-1} < E < E_k$ overlaps the energy bin~$\calE_g$. Furthermore, it is assumed that data entries $g(\mucm \mid E)$ for $\mucm = \mucmjm$ and $\mucm = \mucmj$ are given at $E = E_{k-1} $ or at $E = E_k$ and that the table contains no entries $g(\mucm \mid E_{k-1})$ or $g(\mucm \mid E_k)$ for $\mucmjm < \mucm < \mucmj$. 
Any missing data values $g(\mucmjm \mid E_{k-1})$ or $g(\mucmj \mid E_{k-1})$ or $g(\mucmjm \mid E_k)$ or~$g(\mucmj \mid E_k)$ are computed by interpolation with respect to~$\mucm$. The integration region in the laboratory frame for the contribution of such a set of data to the integral $\Inum_{g,h,\ell}$ in Eq.~(\ref{cmint}) is the shaded area of Figure~\ref{Fig:2-body-region-lab}. This region is mapped to center-of-mass coordinates in Figure~\ref{Fig:2-body-region-cm}. \begin{figure} \input{fig4-2} \end{figure} \begin{figure} \input{fig4-3} \end{figure} When the tabular data are interpolated by the method of cumulative points of Section~\ref{Sec:cumProb}, the geometry is complicated by the local unit-base transformations, but the basic ideas are the same. Finally, for probability density data $g( \mucm \mid E)$ given as Legendre coefficients in Eq.~(\ref{cmLegendre}), the only significant difference is that the range of direction cosines becomes $-1 \le \mucm \le 1$ with the limitation that the energy $\Elab'$ of the outgoing particle lies in the energy bin~$\calE_h'$. \section{Format of data in the input file} For tabulated probability density data $g( \mucm \mid E)$, the data identifier, as in Section~\ref{data-model}, is\\ \Input{Process: two body transfer matrix}{}\\ and for the Legendre coefficients it is\\ \Input{Process: Legendre two body transfer matrix}{} \subsection{Data for both forms of probability density} \label{Sec:2-body-data} Because the boost from the center-of-mass frame to the laboratory frame depends on the rest masses of the particles, these must be included in the input file as described in Section~\ref{model-info}. For most reactions, the format for doing so is\\ \Input{Projectile's mass:}{$\myi$} \\ \Input{Target's mass:}{$\mtarg$} \\ \Input{Product's mass:}{$\myo$} \\ \Input{Reaction's Q value:}{$Q$} \\ The values of these quantities must be in the same units as the energy bin boundaries. 
The mass of the residual $\mres$ is then computed using \begin{equation} \mres = \myi + \mtarg - \myo - Q. \label{2-body-mres} \end{equation} The mass of the residual may be given using the command\\ \Input{Residual's mass:}{$\mres$}\\ but this is overridden by the result of Eq.~(\ref{2-body-mres}) unless the residual is a photon, $\mres = 0$. In that case, the value of $\myo$ is modified to enforce the validity of Eq.~(\ref{2-body-mres}). The code may use either Newtonian or relativistic mechanics in its computations as specified in Section~\ref{Sec:relativistic}. Relativistic mechanics is used, however, if any of the particles involved in the reaction is a photon. The specifications that the energy $E$ of the incident particle is given in the laboratory frame and the direction cosine $\mucm$ in the center-of-mass frame are, as in Section~\ref{Reference-frame},\\ \Input{Projectile Frame: lab}{}\\ \Input{Product Frame: CenterOfMass}{} \subsection{Angular probability density tables} \label{Sec:angular-table} The identification line for tabulated angular probability densities is\\ \Input{Angular data:}{$n = K$}\\ where $K$ is the number of incident energies~$E$. This is followed by the interpolation rules for probability densities from Section~\ref{interp-flags-probability}\\ \Input{Incident energy interpolation:}{probability interpolation flag}\\ \Input{Outgoing cosine interpolation:}{list interpolation flag} There are then $K$ blocks, one for each incident energy $E_k$,\\ \Input{Ein: $E_k$:}{$n = J_k$}\\ with $J_k$ pairs of values $\mucmj$ and $g(\mucmj \mid E_k)$. 
Thus, with incident energy in MeV a table of angular probability densities $g( \mucm \mid E)$ may look like\\ \Input{Angular data:}{$n = 22$}\\ \Input{Incident energy interpolation:}{lin-lin direct}\\ \Input{Outgoing cosine interpolation:}{lin-lin}\\ \Input{ Ein: 1.500000000000e-01 : n = 2}{}\\ \Input{\indent -1.000000000000e+00 5.000000000000e-01}{}\\ \Input{\indent 1.000000000000e+00 5.000000000000e-01}{}\\ \Input{ Ein: 2.000000000000e-01 : n = 2}{}\\ \Input{\indent -1.000000000000e+00 4.550000000000e-01}{}\\ \Input{\indent 1.000000000000e+00 5.450000000000e-01}{}\\ \Input{\indent } {$\cdots$}\\ \Input{ Ein: 2.000000000000e+01 : n = 29}{}\\ \Input{\indent -1.000000000000e+00 3.873180000000e-02}{}\\ \Input{\indent -9.500000000000e-01 2.943580000000e-02}{}\\ \Input{\indent -9.000000000000e-01 2.582090000000e-02}{}\\ \Input{\indent } {$\cdots$}\\ \Input{\indent 9.000000000000e-01 2.530490000000e+00}{}\\ \Input{\indent 9.500000000000e-01 3.873180000000e+00}{}\\ \Input{\indent 1.000000000000e+00 8.262750000000e+00}{} \subsection{Legendre coefficients of angular probability density} \label{Sec:2-bodyLegendreData} Legendre coefficient data of the form Eq.~(\ref{cmLegendre}) for discrete 2-body reactions are given as\\ \Input{Legendre coefficients:}{$n = K$}\\ where $K$ is the number of incident energies~$E$. This is followed by the interpolation rule for simple lists from Section~\ref{interp-flags-list}\\ \Input{Interpolation:}{list interpolation flag} The file closes with $K$ sets of data\\ \Input{Ein: $E_k$:}{$n = L_k$}\\ with $L_k$ Legendre coefficients $c_\ell(E_k)$ for $\ell = 0$, 1, \ldots\ , $L_k - 1$ in Eq.~(\ref{cmLegendre}). 
With incident energy in units of MeV, an example of this portion of the input file is\\ \Input{Legendre coefficients: n = 17}\\ \Input{Interpolation:}{lin-lin}\\ \Input{ Ein: 1.843100e+00: n = 3}{}\\ \Input{\indent 1.000000e+00}{}\\ \Input{\indent 0.000000e+00}{}\\ \Input{\indent 0.000000e+00}{}\\ \Input{\indent } {$\cdots$}\\ \Input{ Ein: 2.000000e+01: n = 12}{}\\ \Input{\indent 1.000000e+00}{}\\ \Input{\indent 4.640500e-01}{}\\ \Input{\indent 2.320700e-01}{}\\ \Input{\indent 8.593700e-02}{}\\ \Input{\indent 5.338700e-02}{}\\ \Input{\indent 2.465600e-02}{}\\ \Input{\indent -1.500600e-03}{}\\ \Input{\indent -1.756300e-02}{}\\ \Input{\indent -1.108000e-02}{}\\ \Input{\indent 1.931100e-02}{}\\ \Input{\indent 1.150900e-02}{}\\ \Input{\indent 5.643500e-03}{} { \newcommand{\Vacm}{\textbf{V}_{\text{1,cm}}} \newcommand{\vacm}{V_{\text{1,cm}}} \newcommand{\muacm}{\mu_{\text{1,cm}}} \newcommand{\Valab}{\textbf{V}_{\text{1,lab}}} \newcommand{\valab}{V_{\text{1,lab}}} \newcommand{\Vbcm}{\textbf{V}_{\text{2,cm}}} \newcommand{\Vblab}{\textbf{V}_{\text{2,lab}}} \newcommand{\vblab}{V_{\text{2,lab}}} \newcommand{\mualab}{\mu_{\text{1,lab}}} \newcommand{\vbcm}{V_{\text{2,cm}}} \newcommand{\mubcm}{\mu_{\text{2,cm}}} \newcommand{\mayo}{m_{1,e}} \newcommand{\mares}{m_{1,r}} \newcommand{\mbyo}{m_{2,e}} \newcommand{\mbres}{m_{2,r}} \section{Two consecutive discrete 2-body reactions} \label{Sec:2-step-2-body} The \ENDFdata\ library~\cite{ENDFdata} contains data for one reaction in the form of a sequence of two discrete 2-body reactions. In this reaction, an incident deuteron hits a triton, with an outgoing excited ${}_2^4\text{He}$ nucleus and a neutron residual. The excited ${}_2^4\text{He}$ nucleus then decays into a proton plus a triton. 
\begin{figure} \input{fig4-4} \end{figure} In the current version of \gettransfer, the probability density $g( \muacm \mid E)$ for the outgoing excited particle from the first step must be represented as a table of Legendre coefficients $c_\ell(E)$ for the expansion Eq.~(\ref{cmLegendre}). The breakup second step is assumed to be isotropic in the frame of the excited outgoing particle from the first step. A Newtonian analysis is given here; see Appendix~\ref{Sec:2-step-2-body-rel} for a relativistic version. For the first step of the reaction the masses are $\myi$ for the incident particle, $\mtarg$ for the target, $\mayo$ for the outgoing particle which breaks up, and $\mares$ for the residual. The $Q$-value of the first step is denoted by~$Q_1$. Figure~\ref{Fig:2-step-2-body-step1} illustrates the notation used in analysis of the first step of this reaction. This figure is basically a copy of Figure~\ref{Fig:2-body-boost}. In the discussion of this figure, velocity vectors are denoted with bold face~$\textbf{V}$ and their lengths with math italics~$V$. In Figure~\ref{Fig:2-step-2-body-step1}, the vector $\Vtrans$ is the velocity of the center of mass for the first step of the reaction, and its magnitude is as in Eq.~(\ref{Vtrans-length}). For determination of the center-of-mass velocity $\vacm'$ of the excited outgoing particle from the first step, Equation~(\ref{E_cm}) is modified to the form $$ \frac{\mayo ({\vacm'})^2}{2} = \frac{ \mtarg \mares E} { ( \myi + \mtarg ) ( \mayo + \mares ) } + \frac{ \mares Q_1} { ( \mayo + \mares ) }, $$ where $E$ is the energy of the incident particle and the target is at rest. If $\muacm$ is the direction cosine for~$\Vacm'$, then the velocity $\Valab'$ in the laboratory frame of the excited outgoing particle from the first step satisfies the equation $$ ({\valab'})^2 = \vtrans^2 + ({\vacm'})^2 + 2 \muacm \vtrans \vacm'. 
$$ According to Eq.~(\ref{get_mu}), if $\valab' > 0$, then the direction cosine~$\mualab$ is given by \begin{equation*} \mualab = \frac { \vtrans + \muacm \vacm' } {\valab'}. \end{equation*} For $\valab' = 0$, one may set $\mualab = 1$. For the second (breakup) step, $\mbyo$ denotes the mass of the outgoing particle and $\mbres$ the mass of the residual, and $Q_2$ is the $Q$-value. Figure~\ref{Fig:2-step-2-body-step2} shows this second step projected onto the plane determined by the vectors of the first step. In this figure the full 3-dimensional geometry must be taken into account because the emission is isotropic. The point~$O'$ in the figure identifies the center of mass of the breakup step. An orthonormal $(\xi, \eta, \zeta)$-coordinate system is introduced with origin at~$O'$. If $\valab' > 0$, the $\xi$-axis is chosen parallel to the vector~$\Valab'$; otherwise, it is taken parallel to~$\Vtrans$. If the vectors $\Vtrans$ and~$\Valab'$ generate a plane, then the $\eta$-axis is selected to lie in this plane. For collinear $\Vtrans$ and~$\Valab'$, the $\eta$-axis may be in any direction perpendicular to the $\xi$-axis. The $\zeta$-axis is chosen perpendicular to the $(\xi, \eta)$-plane. In this reference frame the magnitude of the velocity~$\Vbcm'$ of the outgoing particle from the breakup step is obtained from Equation~(\ref{E_cm}) as $$ \frac{\mbyo (\vbcm')^2}{2} = \frac{ \mbres Q_2} { ( \mbyo + \mbres ) }. $$ Because the breakup is isotropic, the vector~$\Vbcm'$ in Figure~\ref{Fig:2-step-2-body-step2} with tail at~$O'$ has its head uniformly distributed on the sphere~$\Sigma_0$. For a fixed $\Valab'$ and angle~$\theta_2$ between $\Vbcm'$ and the $\xi$-axis, the head of~$\Vbcm'$ lies on a circle on~$\Sigma_0$, which is projected as the line segment from $A$ to~$B$ in Figure~\ref{Fig:2-step-2-body-step2}. 
The magnitude~$\vblab'$ of the velocity of the final emitted particle is the same for all vectors~$\Vbcm'$ with heads on the segment from $A$ to~$B$, and in the case that the head is at~$B$ it is clear that \begin{equation} ({\vblab'})^2 = (\valab')^2 + (\vbcm')^2 + 2 \mubcm \valab' \vbcm', \label{2-step-2-body-V} \end{equation} where $\mubcm = \cos \theta_2$. Furthermore, the energy of the final outgoing particle in the laboratory frame is \begin{equation} \Elab' = \frac{ \mbyo (\vblab')^2 }{2}. \label{2-step-2-body-E} \end{equation} For this 2-step reaction, in the computation of the elements of the transfer matrix, Eq.~(\ref{muEint}) is replaced by \begin{equation} \Inum_{g,h,\ell} = \int_{\calE_g} dE \, \sigma ( E ) w(E) \widetilde \phi_\ell(E) \int_{\muacm} d\muacm \, g(\muacm \mid E) \int_{\Sigma_{0,h}} d\sigma_0 \, P_\ell( \mulab ). \label{2-step-muEint} \end{equation} In this integral $\Sigma_{0,h}$ is the subset of $\Sigma_0$ on which $\Elab'$ from Eqs.~(\ref{2-step-2-body-V}) and~(\ref{2-step-2-body-E}) lies in the outgoing energy bin~$\calE_h'$, and $d\sigma_0$ is the differential surface area on the sphere~$\Sigma_0$ normalized so that $$ \int_{\Sigma_0} d\sigma_0 = 1. $$ The direction cosine~$\mulab$ in Eq.~(\ref{2-step-muEint}) is obtained from \begin{equation} \Vblab' \cdot \Vtrans = \mulab \vblab' \vtrans. \label{2-step-mulab} \end{equation} The geometry used in the computation of $\mulab$ is illustrated in Figure~\ref{Fig:2-step-2-body-step2}. This figure shows a case in which $\Vbcm'$ lies outside of the $(\xi, \eta)$-plane. The $\zeta$-component of~$\Vbcm'$ is orthogonal to~$\Vtrans$, so it suffices to work with the projection of $\Vbcm'$ onto the $(\xi, \eta)$-plane in the computation of~$\mulab$ in Eq.~(\ref{2-step-mulab}). Consequently, if $\vblab' > 0$, it is easily seen that \begin{equation} \mulab = \frac{ \mualab ( \valab' + \xi ) - \eta \sqrt{ 1 - \mualab^2 } } { \vblab'}. 
\label{2-step-mulab-xi} \end{equation} For $\vblab' = 0$, the value of~$\mulab$ is taken as~$\mulab = 1$. \begin{figure} \input{fig4-5} \end{figure} It remains to identify the set~$\Sigma_{0, h}$ in the integral Eq.~(\ref{2-step-muEint}). In terms of the coordinate~$\zeta$, the surface of the sphere~$\Sigma_0$ may be written as \begin{equation} \zeta = \pm \sqrt{ (\vbcm')^2 - \xi^2 - \eta^2 } \quad \text{for $\xi^2 + \eta^2 \le (\vbcm')^2.$} \label{2-step-Sigma-xi} \end{equation} Because of the mirror symmetry in the $(\xi, \eta)$-plane, it suffices to work with the positive square root \begin{equation} \zeta = \sqrt{ (\vbcm')^2 - \xi^2 - \eta^2 } \quad \text{for $\xi^2 + \eta^2 \le (\vbcm')^2.$} \label{2-step-def-zeta} \end{equation} The energy $\Elab'$ in Eq.~(\ref{2-step-2-body-E}) may lie in the outgoing energy bin~$\calE_h'$ for fixed incident energy~$E$ and for step~1 direction cosine~$\muacm$. If it does so for at least 2 values of the step~2 direction cosine~$\mubcm$, then $\Elab'$ is in~$\calE_h'$ for $\mubcm$ on an interval \begin{equation} a_h \le \mubcm \le b_h \label{2-step-ab-range} \end{equation} with $$ -1 \le a_h < b_h \le 1. $$ Consequently, for the hemisphere in Eq.~(\ref{2-step-def-zeta}) the integral over $\Sigma_{0,h}$ in Eq.~(\ref{2-step-muEint}) may be written as \begin{equation} \int_{\Sigma_{0,h}} d\sigma_0 \, P_\ell( \mulab ) = \frac{1}{2 \pi \vbcm' } \int_{ a_h \vbcm'}^{b_h \vbcm'} d\xi \int_{- \sqrt{ (\vbcm')^2 - \xi^2 }} ^{\sqrt{ (\vbcm')^2 - \xi^2 }} d\eta \, \frac {P_\ell( \mulab ) } { \sqrt{ (\vbcm')^2 - \xi^2 - \eta^2}}. 
\label{2-step-Sigma-int} \end{equation} In this integral the change of variables \begin{alignat}{2} \xi &= \vbcm' \, \mubcm \quad &\text{for $a_h \le \mubcm \le b_h$,} \label{2-step-def-xi} \\ \eta &= \vbcm' \sqrt{ 1 - \mubcm^2 } \, \sin w \quad &\text{for $-\pi/2 \le w \le \pi/2$} \label{2-step-def-eta} \end{alignat} leads to the relation \begin{equation} \int_{\Sigma_{0,h}} d\sigma_0 \, P_\ell( \mulab ) = \frac{1}{2 \pi } \int_{ a_h}^{b_h} d\mubcm \int_{- \pi/2} ^{\pi/2} dw \, P_\ell( \mulab ) . \label{2-step-int-sigma} \end{equation} This representation is used in the calculation of the subintegral over $\Sigma_{0,h}$ in Eq.~(\ref{2-step-muEint}). \subsection{The input file for a 2-step 2-body reaction} In the input file to \gettransfer, the identifier for this reaction is\\ \Input{Process: two step two body reaction}{}\\ The particle masses and the $Q$-values for this reaction are given by\\ \Input{Target's mass:}{$\mtarg$}\\ \Input{Projectile's mass:}{$\myi$}\\ \Input{First residual's mass:}{$\mares$}\\ \Input{First product's mass:}{$\mayo$}\\ \Input{First step's Q value:}{$Q_1$}\\ \Input{Second product's mass:}{$\mbyo$}\\ \Input{Second residual's mass:}{$\mbres$}\\ \Input{Second step's Q value:}{$Q_2$}\\ The mass $\mayo$ of the excited outgoing particle from the first step is recalculated using \begin{equation} \mayo = \myi + \mtarg - \mares - Q_1. \label{2-step-mayo} \end{equation} This is because the values on the right-hand side of Eq.~(\ref{2-step-mayo}) are usually known to high accuracy. In addition, the value of $Q_2$ is computed using $$ Q_2 = \mayo - \mbyo - \mbres. $$ For the first step of the reaction, the Legendre coefficients $c_\ell(E)$ for the expansion Eq.~(\ref{cmLegendre}) of the probability density $g( \muacm \mid E)$ for the outgoing excited particle are given as in Section~\ref{Sec:2-bodyLegendreData}. 
An example of the model-dependent portion of the input file, Section~\ref{model-info}, is as follows.\\ \Input{Target's mass: 2.808921000497e+03}{}\\ \Input{Projectile's mass: 1.876124078321e+03}{}\\ \Input{First residual's mass: 939.565413016980301}{}\\ \Input{First product's mass: 3747.70426580102}{}\\ \Input{First step's Q value: -2.2246}{}\\ \Input{Second product's mass: 938.782992507523659}{}\\ \Input{Second residual's mass: 2.808921000497e+03}{}\\ \Input{Second step's Q value: 0.0002728}{}\\ \vskip 5pt \Input{Legendre coefficients: n = 30}{}\\ \Input{Interpolation: lin-lin}{}\\ \Input{ Ein: 3.71: n = 3}{}\\ \Input{ \indent 1.0}{}\\ \Input{ \indent 0.0}{}\\ \Input{ \indent 0.0}{}\\ \Input{ Ein: 3.9: n = 3}{}\\ \Input{ \indent 1.0}{}\\ \Input{ \indent -0.14079}{}\\ \Input{ \indent 0.026804}{}\\ \Input{ $\cdots$}{}\\ \Input{ Ein: 10.0: n = 3}{}\\ \Input{ \indent 1.0}{}\\ \Input{ \indent 0.24417}{}\\ \Input{ \indent 0.011232}{}\\ }
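As a sanity check on Eq.~(\ref{2-step-mayo}), the masses in the example above are mutually consistent: recomputing the first product's mass from the projectile, target, first residual, and $Q_1$ values reproduces the tabulated number. A short illustrative check in Python:

```python
# Check Eq. (2-step-mayo) against the example input values above
# (all masses and the Q value in MeV).
m_i    = 1.876124078321e+03    # projectile's mass (deuteron)
m_targ = 2.808921000497e+03    # target's mass (triton)
m_1r   = 939.565413016980301   # first residual's mass (neutron)
Q1     = -2.2246               # first step's Q value

# m_{1,e} = m_i + m_targ - m_{1,r} - Q_1
m_1e = m_i + m_targ - m_1r - Q1
assert abs(m_1e - 3747.70426580102) < 1e-9   # matches "First product's mass"
print(m_1e)
```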
\chapter{Interactive Assumptions} This chapter covers the implementation of our approach for analyzing interactive assumptions. % We get an input of the following form: \begin{verbatim} emaps G1 * G2 -> GT. isom G1 -> G2. input [ X, Y ] in G1. oracle O(m : Fq) = sample A:G1, (A, A*Y, A*X + m*A*X*Y). win(U:G1, V:G1, W:G1, mm) = U <> 0 /\ mm <> m_i /\ V = U*X /\ W = U*X + m*U*X*Y. \end{verbatim} % For now, we make the following assumptions: \begin{enumerate} \item Either the group setting is a generic group or the input, oracle arguments and return values, and winning condition input are all in one group. In the latter case, we exploit that the problem is computational. \item All oracle inputs are of type \verb!Fq!. Allowing for group elements complicates the definition. \end{enumerate} % We first compute a formal sum for each \verb!win! input of type $\group$ as follows: \begin{enumerate} \item Assume that the adversary is given inputs $\vec{f}$ where $f_j$ defines an element in $\group$ over random variables~$\vec{X}$. \item Assume that there is one oracle taking field elements $\vec{m}$ and returning $\vec{g}$ where $g_j$ defines an element in $\group$ over the variables $\vec{X}$ and the variables $\vec{A}$ sampled in the oracle call. \item We assume there are $q$ oracle queries. \item As a first step, we introduce indexed parameters $m_{1,j},\ldots,m_{l,j}$ ($j \in [q]$) and indexed random variables $A_{1,j},\ldots,A_{r,j}$ ($j \in [q]$). \item Then all computable elements can be expressed as linear combinations as follows: \[ \alpha_1 f_1 + \ldots + \alpha_k f_k + \sum_{i=1}^q \beta_{1,i}\, g_1(\vec{m_i},\vec{A_i},\vec{X}) + \ldots + \sum_{i=1}^q \beta_{n,i}\, g_n(\vec{m_i},\vec{A_i},\vec{X}) \] \item We assume the winning condition takes elements $\vec{U}$ in $\group$. Then we use $\alpha_i^{(j)}$ and $\beta_i^{(j)}$ (the latter a vector) for the coefficients of $U_j$. \end{enumerate} We represent such linear combinations as formal sums.
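One natural way to represent such a formal sum, sketched below purely for illustration (this is not the actual implementation, and integer coefficients stand in for the symbolic parameters), is as a mapping from basis terms, $f_j$ or $g_j$ instantiated at query index $i$, to their coefficients; addition and scaling then act coefficient-wise:

```python
# Illustrative formal-sum representation: a dict mapping a hashable basis
# term, e.g. ("f", 1) for f_1 or ("g", j, i) for the j-th oracle output of
# the i-th query, to its coefficient.
def add(s, t):
    """Coefficient-wise sum of two formal sums; zero terms are dropped."""
    out = dict(s)
    for term, coeff in t.items():
        out[term] = out.get(term, 0) + coeff
    return {term: c for term, c in out.items() if c != 0}

def scale(c, s):
    """Scalar multiple c * s of a formal sum."""
    return {term: c * coeff for term, coeff in s.items()} if c != 0 else {}

# alpha_1 * f_1 + beta_{1,1} * g_1(m_1, A_1, X) + beta_{1,2} * g_1(m_2, A_2, X)
s = {("f", 1): 3, ("g", 1, 1): 1, ("g", 1, 2): 2}
t = {("f", 1): -3, ("g", 1, 1): 4}
assert add(s, t) == {("g", 1, 1): 5, ("g", 1, 2): 2}
assert scale(2, t) == {("f", 1): -6, ("g", 1, 1): 8}
```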
% Created 2020-03-10 Tue 10:06 % Intended LaTeX compiler: pdflatex \documentclass[11pt]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{grffile} \usepackage{longtable} \usepackage{wrapfig} \usepackage{rotating} \usepackage[normalem]{ulem} \usepackage{amsmath} \usepackage{textcomp} \usepackage{amssymb} \usepackage{capt-of} \usepackage{hyperref} \author{Abram Hindle} \date{\today} \title{CMPUT201W20B2 Week 4} \hypersetup{ pdfauthor={Abram Hindle}, pdftitle={CMPUT201W20B2 Week 4}, pdfkeywords={}, pdfsubject={}, pdfcreator={Emacs 25.2.2 (Org mode 9.1.6)}, pdflang={English}} \begin{document} \maketitle \tableofcontents \section{Week4} \label{sec:orgf0f9dc3} \subsection{Copyright Statement} \label{sec:org6309117} If you are in CMPUT201 at UAlberta this code is released in the public domain to you. Otherwise it is (c) 2020 Abram Hindle, Hazel Campbell AGPL3.0+ \subsubsection{License} \label{sec:org3ec3419} Week 3 notes Copyright (C) 2020 Abram Hindle, Hazel Campbell This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see \url{https://www.gnu.org/licenses/}. \subsubsection{Hazel Code is licensed under AGPL3.0+} \label{sec:orgedf10b4} Hazel's code is also found here \url{https://github.com/hazelybell/examples/tree/C-2020-01} Hazel code is licensed: The example code is licensed under the AGPL3+ license, unless otherwise noted. 
\subsection{Init ORG-MODE} \label{sec:org63caef3} \begin{verbatim} ;; I need this for org-mode to work well (require 'ob-sh) ;(require 'ob-shell) (org-babel-do-load-languages 'org-babel-load-languages '((sh . t))) (org-babel-do-load-languages 'org-babel-load-languages '((C . t))) (org-babel-do-load-languages 'org-babel-load-languages '((python . t))) (setq org-src-fontify-natively t) \end{verbatim} \subsubsection{Org export} \label{sec:orgcabf44d} \begin{verbatim} (org-html-export-to-html) (org-latex-export-to-pdf) (org-ascii-export-to-ascii) \end{verbatim} \subsection{Org Template} \label{sec:org09a5ccc} Copy and paste this to demo C \begin{verbatim} #include <stdio.h> int main(int argc, char**argv) { return 0; } \end{verbatim} \subsection{Remember how to compile?} \label{sec:org3050ba6} gcc -std=c99 -Wall -pedantic -Werror -o programname programname.c \subsection{Functions} \label{sec:org90f3e3d} Functions replicate functions in mathematics. They allocate space on the stack and have local variables. Very similar to Python functions. Define a function: return\(_{\text{type}}\) functionName(ArgType1 arg1, ArgType2 arg2, ArgType3 arg3 ) \{ \ldots{} \} Call a function: functionName( arg1, arg2, arg3 ); return\(_{\text{type}}\) returnValue = functionName( arg1, arg2, arg3) ; In C89 all variable declarations are at the top of the function. 
\subsubsection{return\(_{\text{types}}\)}
\label{sec:org576aa6d}
\begin{itemize}
\item void -- nothing
\item int
\item char
\item float
\item double
\item \ldots{}
\item pointer (array or string)
\end{itemize}
\subsubsection{Example}
\label{sec:org8f54b94}
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

void example() {
  printf("I have been made an example of\n");
  // return; // void return
}

int main() {
  example();
  return 0;
}
\end{verbatim}
\begin{verbatim}
I have been made an example of
\end{verbatim}
\subsubsection{Pass by Value}
\label{sec:org1c2102f}
The values of the parameters are COPIED into registers and sometimes onto
the stack. Thus the original variables that the parameters came from are
safe. The exception is pointers: given a pointer, the called function can
manipulate the data the pointer points to, but it cannot modify the
original pointer variable.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

int example(int x) {
  x++;
  printf("example x:\t%p\n", (void*)&x);
  return x;
}

int main() {
  int x = 10;
  printf("main x :\t%p\n", (void*)&x);
  printf("x: %d\n", x);
  int rx = example(x);
  printf("x: %d\n", x);
  printf("returned x vs x: %d vs %d\n", rx, x);
}
\end{verbatim}
\begin{verbatim}
main x :	0x7ffe19cd7700
x: 10
example x:	0x7ffe19cd76ec
x: 10
returned x vs x: 11 vs 10
\end{verbatim}
\subsubsection{Arrays again}
\label{sec:org3a5478a}
\begin{itemize}
\item void initArray(int cols, int values[cols]) \{
\item void initArray(int cols, int values[]) \{
\end{itemize}
You can specify array sizes in C99 but the size has to come earlier in the
parameter list:
\begin{itemize}
\item void init2D(int rows, int cols, int values[rows][cols]) \{
\item void init2D(int rows, int cols, int values[][cols])\{
\item void init3D(int planes, int rows, int cols, int values[planes][rows][cols]) \{
\item void init3D(int planes, int rows, int cols, int values[][rows][cols]) \{
\end{itemize}
\subsubsection{Don't trust sizeof inside of functions!}
\label{sec:org222cf4c}
sizeof is only trustworthy if you
declared the variable in your scope.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void init2D(int rows, int cols, int values[][cols]) {
  int i = 0;
  printf("init2D: sizeof(values)=%lu\n", sizeof(values));
  printf("init2D: sizeof(values[0])=%lu\n", sizeof(values[0]));
  for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
      values[row][col] = i++;
    }
  }
}

void example() {
  unsigned int n = 1 + rand() % 10;
  unsigned int m = 1 + rand() % 10;
  printf("%d X %d was chosen!\n", m, n);
  int values[m][n]; // SO the compiler can't predict this allocation ahead of time
  printf("sizeof(values) = %ld\n", sizeof(values));
  printf("sizeof(&values) = %ld\n", sizeof(&values));
  printf("sizeof(values[0]) = %ld\n", sizeof(values[0]));
  init2D( m, n, values );
}

int main() {
  srand(time(NULL)); // initialize based on the clock
  example();
  example();
  example();
}
\end{verbatim}
\begin{verbatim}
10 X 7 was chosen!
sizeof(values) = 280
sizeof(&values) = 8
sizeof(values[0]) = 28
init2D: sizeof(values)=8
init2D: sizeof(values[0])=28
5 X 8 was chosen!
sizeof(values) = 160
sizeof(&values) = 8
sizeof(values[0]) = 32
init2D: sizeof(values)=8
init2D: sizeof(values[0])=32
5 X 2 was chosen!
sizeof(values) = 40
sizeof(&values) = 8
sizeof(values[0]) = 8
init2D: sizeof(values)=8
init2D: sizeof(values[0])=8
\end{verbatim}
\subsubsection{Returns}
\label{sec:org859313e}
Don't return arrays in general.
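The usual alternative to returning an array: the caller owns the array and
passes it in, and the function just fills it through the parameter. This is
only a sketch; fillSquares is an invented example name.
\begin{verbatim}
#include <stdio.h>

/* The caller allocates the array; this function only writes into it.
   Nothing allocated here outlives the call, so no dangling pointer. */
void fillSquares(int n, int out[]) {
  for (int i = 0; i < n; i++) {
    out[i] = i * i;
  }
}

int main() {
  int squares[5];          /* this memory lives in main's stack frame */
  fillSquares(5, squares); /* fills 0 1 4 9 16 */
  for (int i = 0; i < 5; i++) {
    printf("%d ", squares[i]);
  }
  printf("\n");
  return 0;
}
\end{verbatim}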
To return a value and exit the function immediately run: return expr \begin{verbatim} #include <stdio.h> #include <stdlib.h> int squareInt(int x) { return x*x; } float squareFloat(float x) { return x*x; } int intDiv(int x, int y) { return x/y; } float floatDiv(float x, float y) { return x/y; } char returnChar( int i ) { return i; } int main() { printf("squareInt\t %d\n", squareInt(25)); printf("squareInt\t %d\n", squareInt(1.47)); printf("squareFloat\t %f\n", squareFloat(1.47)); printf("squareFloat\t %f\n", squareFloat(25)); printf("intDiv\t %d\n", intDiv(64,31)); printf("intDiv\t %d\n", intDiv(64.2,31)); printf("floatDiv\t %f\n", floatDiv(64,31)); printf("floatDiv\t %f\n", floatDiv(64.2,31)); printf("returnChar\t %hhu\n", returnChar( 578 ) ); printf("returnChar\t %hhu\n", returnChar( 'a' ) ); printf("returnChar\t %hhu\n", returnChar( 66.1 ) ); printf("returnChar\t %c\n", returnChar( 578 ) ); printf("returnChar\t %c\n", returnChar( 'a' ) ); printf("returnChar\t %c\n", returnChar( 66.1 ) ); } \end{verbatim} \begin{verbatim} squareInt 625 squareInt 1 squareFloat 2.160900 squareFloat 625.000000 intDiv 2 intDiv 2 floatDiv 2.064516 floatDiv 2.070968 returnChar 66 returnChar 97 returnChar 66 returnChar B returnChar a returnChar B \end{verbatim} \subsubsection{Recursion} \label{sec:orgd3ea940} \begin{enumerate} \item Recursion \label{sec:org73fda68} \begin{enumerate} \item Recursion \label{sec:org1d7db1a} \begin{enumerate} \item Recursion \label{sec:org643393a} \begin{verbatim} #include <stdio.h> #include <stdlib.h> int divisibleBy(int x, int y); int main() { printf("%d\n",divisibleBy(33,32)); } int divisibleBy(int x, int y) { printf("%d %d\n", x,y); if (x == 0) { return 0; } if (y <= 0) { return 0; } if (x % y == 0) { return y; } return divisibleBy(x, y - 1); } \end{verbatim} \begin{verbatim} 33 32 33 31 33 30 33 29 33 28 33 27 33 26 33 25 33 24 33 23 33 22 33 21 33 20 33 19 33 18 33 17 33 16 33 15 33 14 33 13 33 12 33 11 11 \end{verbatim} \end{enumerate} \end{enumerate} 
\end{enumerate} \subsubsection{Prototypes} \label{sec:org7869f02} \begin{verbatim} #include <stdio.h> #include <stdlib.h> /* this is a prototype it predeclares that a function with this name will be available. */ // This program will not compile in C99 without this line: // int divisibleBy(int x, int y); int main() { printf("%d\n",divisibleBy(16,15)); } int divisibleBy(int x, int y) { printf("%d %d\n", x,y); if (x == 0) { return 0; } if (y <= 0) { return 0; } if (x % y == 0) { return y; } return divisibleBy(x, y - 1); } \end{verbatim} \begin{verbatim} 16 15 16 14 16 13 16 12 16 11 16 10 16 9 16 8 8 \end{verbatim} \begin{enumerate} \item Prototypes and corecursive routines \label{sec:orgd77611f} \begin{verbatim} #include <stdio.h> #include <stdlib.h> /* this is a prototype it predeclares that a function with this name will be available. This is useful for co-recursive functions. */ // This program will not compile in C99 without this line: // int aReliesOnB(int x, int y); int bReliesOnA(int x, int y); // int main() { printf("%d\n",aReliesOnB(0,100)); } int aReliesOnB(int x, int y) { printf("> aReliesOnB( %d, %d)\n", x, y); if (x >= y) { return y; } return bReliesOnA(x+x+1, y); } int bReliesOnA(int x, int y) { printf("> bReliesOnA( %d, %d)\n", x, y); if (x >= y) { return y; } return aReliesOnB(x * x + 1, y); } \end{verbatim} \begin{verbatim} > aReliesOnB( 0, 100) > bReliesOnA( 1, 100) > aReliesOnB( 2, 100) > bReliesOnA( 5, 100) > aReliesOnB( 26, 100) > bReliesOnA( 53, 100) > aReliesOnB( 2810, 100) 100 \end{verbatim} \end{enumerate} \subsubsection{Exercise} \label{sec:org70085cf} \begin{enumerate} \item - make a recursive countdown function, printing each number until 0 is met. 
\label{sec:org1c7e9d4}
\begin{verbatim}
#include <stdio.h>

void countDown(int n) {
  printf("%d\n",n);
  if (n > 0) {
    countDown(n-1);
  }
}

int main() {
  countDown(10);
  return 0;
}
\end{verbatim}
\begin{verbatim}
10
9
8
7
6
5
4
3
2
1
0
\end{verbatim}
\item - make a recursive fibonacci
\label{sec:org92f1873}
fib(0) = 1

fib(1) = 1

fib(n) = fib(n-1) + fib(n-2)
\begin{verbatim}
#include <stdio.h>

int fibonacci(int n) {
  if (n == 0 || n == 1) {
    return 1;
  } else {
    return fibonacci(n-1) + fibonacci(n-2);
  }
}

int main() {
  printf("%d\n",fibonacci(45));
  return 0;
}
\end{verbatim}
\begin{verbatim}
1836311903
\end{verbatim}
\end{enumerate}
\subsection{Scope}
\label{sec:orge9a2ef1}
\subsubsection{const}
\label{sec:org0fcc271}
Instead of \#define you can use const for constants.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

const int nine = 9;

int catLives(int ncats) {
  return nine * ncats;
}

int main() {
  printf("10 cats %d lives\n", catLives( 10 ));
  // you can't modify nine
  // nine++;
  // *(&nine) = 10;
  void * totally_not_nine = (void*)&nine;
  int * not_nine = (int *)totally_not_nine;
  *not_nine = 10;
  printf("%d\n",*not_nine);
}
\end{verbatim}
\subsubsection{Local variables}
\label{sec:org46022dd}
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

// no x here
int example(int x) { // < this x is visible -- main's x is NOT visible here
  x++;               // < within
  return x;          // < this scope
}
// no x here

int main() {
  int x = 10; // < this x is visible within all of main
  printf("x: %d\n", x);
  int rx = example(x);
  printf("x: %d\n", x);
  printf("returned x vs x: %d vs %d\n", rx, x);
}
\end{verbatim}
\begin{verbatim}
x: 10
x: 10
returned x vs x: 11 vs 10
\end{verbatim}
\subsection{Global Variables (BAD) / External Variables / File-level variables}
\label{sec:org11aa2aa}
Too common. Too error prone. You will usually cause lots of bugs by making
top-level variables. They will only be available within the file you
declare them in. Global constants are fine. They are safe.
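A minimal sketch of the kind of bug mutable globals cause (the function
names are invented for illustration): two unrelated functions share one
global total, so each call silently corrupts the other's count.
\begin{verbatim}
#include <stdio.h>

/* BAD: mutable file-level state shared by unrelated functions */
static int total = 0;

int addToReport(int x) {
  total += x;
  return total;
}

int addToInvoice(int x) {
  total += x; /* oops: this also changes the "report" total */
  return total;
}

int main() {
  printf("report:  %d\n", addToReport(10)); /* expect 10, get 10 */
  printf("invoice: %d\n", addToInvoice(5)); /* expect 5, get 15! */
  printf("report:  %d\n", addToReport(1));  /* expect 11, get 16! */
  return 0;
}
\end{verbatim}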
If you make a global in a file, explicitly limit it to the current file
with the static keyword. If static is not used then the variable has
external linkage, so other files (including anything that includes this
file) can see it too.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

// BAD
// int x = 111; // visible in all lines below unless occluded by local definitions
// BADISH
// const int x = 111; // visible in all lines below unless occluded by local definitions
// BETTER but still not OK
//static int x = 111;
// BEST and allowed
static const int x = 111;

int globalX() {
  return x; // returns the static global x
}

int example(int x) { // <x_2 this x, x_2 is visible -- main's x is NOT visible here nor is the global
  x++;               // <x_2 within
  return x;          // <x_2 this scope
}

int main() {
  printf("Global x %d\n", globalX());
  int x = 10; // < this x, x_3 is visible within all of main
  const int y = globalX() * globalX();
  printf("y: %d\n", y); // x_3
  printf("x: %d\n", x); // x_3
  int rx = example(x); // x_3
  printf("x: %d\n", x); // x_3
  printf("returned x vs x: %d vs %d\n", rx, x); // x_3
}
\end{verbatim}
\subsection{Static Function Scope}
\label{sec:org18c383b}
Static function local variables keep their old values.
It is similar to defining a global per function.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

unsigned int counter() {
  static unsigned int counter = 0; // this keeps its value
  printf("%u\n", counter);
  return ++counter;
}

static unsigned int __worseCounter__ = 0; // whoo don't touch this AKA DONT DO IT

unsigned int worseCounter() {
  return ++__worseCounter__;
}

#define N 10

int main() {
  srand(time(NULL));
  unsigned int count = 0;
  unsigned int wCount = 0;
  for (int i = 0 ; i < N; i++) {
    if (rand() % 3 == 0) {
      count = counter();
      wCount = worseCounter();
    }
  }
  printf("Counted %u / %u numbers divisible by 3 generated by rand\n", count, N);
  printf("Worse: Counted %u / %u numbers divisible by 3 generated by rand\n", wCount, N);
}
\end{verbatim}
\begin{verbatim}
0
1
2
Counted 3 / 10 numbers divisible by 3 generated by rand
Worse: Counted 3 / 10 numbers divisible by 3 generated by rand
\end{verbatim}
\subsection{Pointers!}
\label{sec:org99b7042}
\begin{itemize}
\item What is a pointer? A number that is a memory address.
\item What's at that memory address? The type of the pointer.
\begin{itemize}
\item char * str;
\end{itemize}
\item Why?
\begin{itemize}
\item you want to know the address so you can manipulate a value or manipulate a shared value.
\item you want to return multiple values from a function.
\item your computer deals with memory as locations and offsets the entire time
\item a local variable is at the current base pointer + an offset
\end{itemize}
\item What is str? An integer that is a memory address.
\item What does str point to? A character, but maybe an array of characters!
\item Can I tell if it is an array of characters? No.
\item How can I get the first element of a character array at str?
\begin{itemize}
\item str[0]
\item *str
\end{itemize}
\item How can I make a pointer to:
\begin{itemize}
\item char myChar = 'a';
\item char * ptrToMyChar = \&myChar;
\end{itemize}
\item Can I manipulate pointers?
\begin{itemize}
\item char * ptrToChar = \&myChar;
\item ptrToChar++; // <--- goes to the following character in a character array
\item *ptrToChar = 'b'; // Dereference ptrToChar and change myChar to the value of 'b'
\end{itemize}
\end{itemize}
\subsubsection{Operators}
\label{sec:org8cbbc68}
\begin{itemize}
\item \& unary operator means "address of"
\item * unary operator means "dereference pointer" -- that is return the value it points to
\item don't confuse the declaration of a variable int * x with dereferencing a variable in an expression: *x
\end{itemize}
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

// These are macros they cover up syntax
// Return the address of X
#define ADDRESSOF(X) (&X)
// Dereference X
#define DEREF(X) (*X)

typedef int * intptr_t;

int main() {
  int i = 99;
  intptr_t ptrToI1 = ADDRESSOF(i); // these 2 lines
  int * ptrToI2 = &i;              // are the same
  printf("i: %4d,\naddress of i: %p\n\tptrToI1: %p, *ptrToI1: %d\n\tptrToI2: %p, *ptrToI2: %d\n",
         i, (void*)&i,
         (void*)ptrToI1, DEREF(ptrToI1),
         (void*)ptrToI2, *ptrToI2 );
  printf("addressof i: %p,\naddress of ptrToI1: %p\n\tptrToI2: %p\n",
         (void*)&i, (void*)&ptrToI1, (void*)&ptrToI2 );
  return 0;
}
\end{verbatim}
\begin{verbatim}
i:   99,
address of i: 0x7ffef7f5a274
	ptrToI1: 0x7ffef7f5a274, *ptrToI1: 99
	ptrToI2: 0x7ffef7f5a274, *ptrToI2: 99
addressof i: 0x7ffef7f5a274,
address of ptrToI1: 0x7ffef7f5a278
	ptrToI2: 0x7ffef7f5a280
\end{verbatim}
\subsubsection{Character Arrays and Pointers}
\label{sec:orgabd2b67}
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main() {
  char myChars[] = "Abram believes he is a benevolent professor";
  // char * strnstr(const char *big, const char *little, size_t len); from string.h
  char * professor = strstr(myChars, "professor");
  char * believes = strstr(myChars, "believes");
  printf("Size of a pointer %lu\n", sizeof(professor));
  printf("Location pointed to %p\n", professor);
  printf("full representation %016lX\n", (long unsigned
int)professor); // look how many bits are used printf("myChars: %s\n", myChars); printf("myChars location: %p\n", myChars); printf("professor: %s\n", professor); printf("professor location: %p\n", professor); printf("believes: %s\n", believes); printf("believes location: %p\n", believes); printf("believes - myChars location: %llu\n", (long long unsigned int)believes - (long long unsigned int)myChars); printf("professor - myChars location: %llu\n", (long long unsigned int)professor - (long long unsigned int)myChars); printf("\nBut where are myChars and professor and believes?\n"); printf("myChars location: %p\t ptr address: %p \t*ptr %c\n", (void*)&myChars, myChars, *myChars); printf("professor location: %p\t ptr address: %p \t*ptr %c\n", (void*)&professor, professor, *professor); printf("believes location: %p\t ptr address: %p \t*ptr %c\n", (void*)&believes, believes, *believes); } \end{verbatim} \begin{verbatim} Size of a pointer 8 Location pointed to 0x7ffc5301a2d2 full representation 00007FFC5301A2D2 myChars: Abram believes he is a benevolent professor myChars location: 0x7ffc5301a2b0 professor: professor professor location: 0x7ffc5301a2d2 believes: believes he is a benevolent professor believes location: 0x7ffc5301a2b6 believes - myChars location: 6 professor - myChars location: 34 But where are myChars and professor and believes? myChars location: 0x7ffc5301a2b0 ptr address: 0x7ffc5301a2b0 *ptr A professor location: 0x7ffc5301a2a0 ptr address: 0x7ffc5301a2d2 *ptr p believes location: 0x7ffc5301a2a8 ptr address: 0x7ffc5301a2b6 *ptr b \end{verbatim} \subsubsection{Int arrays} \label{sec:org5435415} Now character arrays are easy because the size is 1 for a character but what about arrays of larger size datatypes? 
\begin{verbatim} #include <stdio.h> #include <stdlib.h> #include <string.h> #define N 1000 int main() { int myInts[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }; // char * strnstr(const char *big, const char *little, size_t len); from string.h int * ptrToMyInts = &myInts[0]; int * five = &myInts[5]; int * fiveAgain = myInts + 5; printf("myInts: %p\n", (void*)myInts); printf("ptrToMyInts: %p\n", (void*)ptrToMyInts); printf("five location: %p five value: %d\n", (void*)five, *five); printf("fiveAgain location: %p fiveAgain value: %d\n", (void*)fiveAgain, *fiveAgain); printf("five - myInts location: %llu\n", (long long unsigned int)five - (long long unsigned int)myInts); printf("five - myInts location / sizeof(int): %llu\n", ((long long unsigned int)five - (long long unsigned int)myInts)/(sizeof(int))); printf("\n OK... Where are they?\n"); printf("myInts Location: %p\t ptr address: %p \t*ptr %d\n", (void*)&myInts, (void*)myInts, *myInts); printf("ptrToMyIntsLocation: %p\t ptr address: %p \t*ptr %d\n", (void*)&ptrToMyInts, (void*)ptrToMyInts, *ptrToMyInts); printf("five Location: %p\t ptr address: %p \t*ptr %d\n", (void*)&five, (void*)five, *five); printf("fiveAgain Location: %p\t ptr address: %p \t*ptr %d\n", (void*)&fiveAgain, (void*)fiveAgain, *fiveAgain); printf("\nLet's add 1 to five\n"); int * six = five + 1; printf("five Location: %p\t ptr address: %p \t*ptr %d\n", (void*)&five, (void*)five, *five); printf("six Location: %p\t ptr address: %p \t*ptr %d\n", (void*)&six, (void*)six, *six); } \end{verbatim} \begin{verbatim} myInts: 0x7ffe80918f50 ptrToMyInts: 0x7ffe80918f50 five location: 0x7ffe80918f64 five value: 5 fiveAgain location: 0x7ffe80918f64 fiveAgain value: 5 five - myInts location: 20 five - myInts location / sizeof(int): 5 OK... Where are they? 
myInts     Location: 0x7ffe80918f50	 ptr address: 0x7ffe80918f50 	*ptr 0
ptrToMyIntsLocation: 0x7ffe80918f30	 ptr address: 0x7ffe80918f50 	*ptr 0
five       Location: 0x7ffe80918f38	 ptr address: 0x7ffe80918f64 	*ptr 5
fiveAgain  Location: 0x7ffe80918f40	 ptr address: 0x7ffe80918f64 	*ptr 5

Let's add 1 to five
five Location: 0x7ffe80918f38	 ptr address: 0x7ffe80918f64 	*ptr 5
six  Location: 0x7ffe80918f48	 ptr address: 0x7ffe80918f68 	*ptr 6
\end{verbatim}
\subsubsection{Arrays as pointers}
\label{sec:orge6cceb4}
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define N 1000

int main() {
  int myInts[] = { 99, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
  int * ptrToMyInts = myInts;
  int * ptrToMyInts2 = &myInts[0];
  printf("myInts:\t%p\n", (void*)myInts);
  printf("ptrToMyInts:\t%p\n", (void*)ptrToMyInts);
  printf("ptrToMyInts2:\t%p\n", (void*)ptrToMyInts2);
  printf("deref myInts:\t%d\n", *myInts);
  printf("deref ptrToMyInts:\t%d\n", *ptrToMyInts);
  printf("deref ptrToMyInts2:\t%d\n", *ptrToMyInts2);
  return 0;
}
\end{verbatim}
\begin{verbatim}
myInts:	0x7ffe24475770
ptrToMyInts:	0x7ffe24475770
ptrToMyInts2:	0x7ffe24475770
deref myInts:	99
deref ptrToMyInts:	99
deref ptrToMyInts2:	99
\end{verbatim}
\subsubsection{Pointer arithmetic again}
\label{sec:org888f4f8}
When you add an integer n to a pointer p the resulting address is
n*sizeof(*p) + p where p is a pointer.

*ptr++ is a common idiom: it means give me the current value, then step to
the next memory location.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define N 1000

int main() {
  long int myInts[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
  long int * ptr = &myInts[0];
  size_t count = sizeof(myInts) / sizeof(myInts[0]);
  while(count > 0) {
    printf("%ld \t %p\n", *ptr, (void*)ptr);
    ptr++;
    count--;
  }
  ptr = &myInts[10];
  count = sizeof(myInts) / sizeof(myInts[0]);
  while( count-- > 0) {
    void * oldptr = (void*) ptr;
    printf("%ld \t %p\t", *ptr--, oldptr); // this *ptr-- is
                                           // idiomatic in C and
                                           // confusing but you must
                                           // learn it
    printf("ptr - oldptr %ld\n", (unsigned long int)ptr - (unsigned long int)oldptr);
  }
  printf("%p %ld\n", (void*)ptr, *ptr);
  return 0;
}
\end{verbatim}
\begin{verbatim}
0 	 0x7fff0c5ec000
1 	 0x7fff0c5ec008
2 	 0x7fff0c5ec010
3 	 0x7fff0c5ec018
4 	 0x7fff0c5ec020
5 	 0x7fff0c5ec028
6 	 0x7fff0c5ec030
7 	 0x7fff0c5ec038
8 	 0x7fff0c5ec040
9 	 0x7fff0c5ec048
10 	 0x7fff0c5ec050
10 	 0x7fff0c5ec050	ptr - oldptr -8
9 	 0x7fff0c5ec048	ptr - oldptr -8
8 	 0x7fff0c5ec040	ptr - oldptr -8
7 	 0x7fff0c5ec038	ptr - oldptr -8
6 	 0x7fff0c5ec030	ptr - oldptr -8
5 	 0x7fff0c5ec028	ptr - oldptr -8
4 	 0x7fff0c5ec020	ptr - oldptr -8
3 	 0x7fff0c5ec018	ptr - oldptr -8
2 	 0x7fff0c5ec010	ptr - oldptr -8
1 	 0x7fff0c5ec008	ptr - oldptr -8
0 	 0x7fff0c5ec000	ptr - oldptr -8
0x7fff0c5ebff8 140733400924160
\end{verbatim}
\begin{enumerate}
\item Now with Chars
\label{sec:orga1d8d86}
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define N 1000

int main() {
  char str[] = "Polar bears are cool bears";
  char * strLiteral = "Polar bears are cool bears";
  char * ptr = str;
  char tmp = 0;
  while( (tmp = *ptr++) ) {
    putchar(tmp);
  }
  putchar('\n');
  ptr = str;
  tmp = 0;
  while( (tmp = *ptr++) ) {
    printf("%c %p %20lu\n", tmp, (void*)ptr, (unsigned long int)ptr);
  }
  // now watch the addresses
  ptr = strLiteral;
  printf("The start of this
function's stack frame is pretty close to %p\n", (void*)&str); while( (tmp = *ptr++) ) { printf("%c %p %20lu\n", tmp, (void*)ptr, (unsigned long int)ptr); } // wow that's super far away in memory printf("str - strLiteral in bytes: %lu\n", (unsigned long int)str - (unsigned long int)strLiteral); printf("&str - &strLiteral in bytes: %lu\n", (unsigned long int)&str - (unsigned long int)&strLiteral); return 0; } \end{verbatim} \begin{verbatim} Polar bears are cool bears P 0x7ffc89491f51 140722611756881 o 0x7ffc89491f52 140722611756882 l 0x7ffc89491f53 140722611756883 a 0x7ffc89491f54 140722611756884 r 0x7ffc89491f55 140722611756885 0x7ffc89491f56 140722611756886 b 0x7ffc89491f57 140722611756887 e 0x7ffc89491f58 140722611756888 a 0x7ffc89491f59 140722611756889 r 0x7ffc89491f5a 140722611756890 s 0x7ffc89491f5b 140722611756891 0x7ffc89491f5c 140722611756892 a 0x7ffc89491f5d 140722611756893 r 0x7ffc89491f5e 140722611756894 e 0x7ffc89491f5f 140722611756895 0x7ffc89491f60 140722611756896 c 0x7ffc89491f61 140722611756897 o 0x7ffc89491f62 140722611756898 o 0x7ffc89491f63 140722611756899 l 0x7ffc89491f64 140722611756900 0x7ffc89491f65 140722611756901 b 0x7ffc89491f66 140722611756902 e 0x7ffc89491f67 140722611756903 a 0x7ffc89491f68 140722611756904 r 0x7ffc89491f69 140722611756905 s 0x7ffc89491f6a 140722611756906 The start of this function's stack frame is pretty close to 0x7ffc89491f50 P 0x563ee77aa919 94828171536665 o 0x563ee77aa91a 94828171536666 l 0x563ee77aa91b 94828171536667 a 0x563ee77aa91c 94828171536668 r 0x563ee77aa91d 94828171536669 0x563ee77aa91e 94828171536670 b 0x563ee77aa91f 94828171536671 e 0x563ee77aa920 94828171536672 a 0x563ee77aa921 94828171536673 r 0x563ee77aa922 94828171536674 s 0x563ee77aa923 94828171536675 0x563ee77aa924 94828171536676 a 0x563ee77aa925 94828171536677 r 0x563ee77aa926 94828171536678 e 0x563ee77aa927 94828171536679 0x563ee77aa928 94828171536680 c 0x563ee77aa929 94828171536681 o 0x563ee77aa92a 94828171536682 o 0x563ee77aa92b 94828171536683 l 
0x563ee77aa92c 94828171536684 0x563ee77aa92d 94828171536685 b 0x563ee77aa92e 94828171536686 e 0x563ee77aa92f 94828171536687 a 0x563ee77aa930 94828171536688 r 0x563ee77aa931 94828171536689 s 0x563ee77aa932 94828171536690 str - strLiteral in bytes: 45894440220216 &str - &strLiteral in bytes: 16 \end{verbatim} \end{enumerate} \subsubsection{Hazel's ptrs.c} \label{sec:org8cfd7c1} The intent here is to demonstrate the use and features of pointers and how to manipulate values via pointers within functions. \begin{verbatim} #include <stdio.h> int pbv(int passed) { passed++; printf(" passed = %d\n", passed); printf(" &passed = %p\n", (void *) &passed); return passed; } void pbr(int *passed) { printf(" passed = %p\n", (void *) passed); printf(" *passed = %d\n", *passed); printf(" &passed = %p\n", (void *) &passed); (*passed)++; } /* * 4 byte integer (32-bit PC) * Example: our integer uses these 4 bytes * byte 4287409512 (0xff8cad68) * byte 4287409513 (0xff8cad69) * byte 4287409514 (0xff8cad6a) * byte 4287409515 (0xff8cad6b) */ int main() { int thing_1 = 100; int thing_2 = 200; // type: define a_pointer as a pointer to an int int *a_pointer = NULL; // type of a_pointer is "int *" // NULL: the NULL pointer, gives the pointer the value 0 // used to indicate that the pointer doesn't point to anything printf("thing_1 = %d\n", thing_1); printf("thing_2 = %d\n", thing_2); // error: ‘a_pointer’ is used uninitialized in this function [-Werror=uninitialized] //printf("a_pointer = %p\n", (void *) a_pointer); //printf("a_pointer = %zu\n", (size_t) a_pointer); printf("\nsizes:\n"); printf("sizeof(thing_1) = %zu\n", sizeof(thing_1)); printf("sizeof(thing_2) = %zu\n", sizeof(thing_2)); printf("sizeof(a_pointer) = %zu (%zu bits)\n", sizeof(a_pointer), sizeof(a_pointer) * 8); // unary & operator: get address of (reference) a_pointer = &thing_1; printf("\na_pointer = &thing_1;\n"); printf(" &thing_1 = %p\n", (void *) &thing_1); printf(" &thing_2 = %p\n", (void *) &thing_2); printf("a_pointer 
= %p\n", (void *) a_pointer); printf("a_pointer = %zu\n", (size_t) a_pointer); // unary * operator: get value at (dereference) printf("*a_pointer = %d\n", *a_pointer); a_pointer = &thing_2; printf("\na_pointer = &thing_2;\n"); printf("a_pointer = %p\n", (void *) a_pointer); // unary * operator: get value at (dereference) printf("*a_pointer = %d\n", *a_pointer); // We're going to copy thing_1 and take a look printf("\ncopy value:\n"); printf("\nint value = thing_1;\n"); int value = thing_1; printf("thing_1 = %d\n", thing_1); printf(" value = %d\n", value); printf(" &thing_1 = %p\n", (void *) &thing_1); printf(" &value = %p\n", (void *) &value); printf("\ncopy value using pointer:\n"); printf("\nvalue = *(&thing_2);\n"); value = *(&thing_2); printf("thing_2 = %d\n", thing_2); printf(" value = %d\n", value); printf(" &thing_2 = %p\n", (void *) &thing_2); printf(" &value = %p\n", (void *) &value); printf("\ncopy value using pointer:\n"); a_pointer = &thing_2; printf("\na_pointer = &thing_2;\n"); printf("a_pointer = %p\n", (void *) a_pointer); // unary * operator: get value at (dereference) printf("*a_pointer = %d\n", *a_pointer); printf("value = *a_pointer;\n"); value = *a_pointer; printf("thing_2 = %d\n", thing_2); printf(" value = %d\n", value); printf(" &thing_2 = %p\n", (void *) &thing_2); printf(" &value = %p\n", (void *) &value); printf("\npass-by-value (copy):\n"); printf("\npbv(thing_1);\n"); printf(" thing_1 = %d\n", thing_1); printf(" &thing_1 = %p\n", (void *) &thing_1); pbv(thing_1); printf(" thing_1 = %d\n", thing_1); printf(" &thing_1 = %p\n", (void *) &thing_1); printf("\npass-by-reference (no copy):\n"); printf("\npbr(&thing_1);\n"); printf(" thing_1 = %d\n", thing_1); printf(" &thing_1 = %p\n", (void *) &thing_1); pbr(&thing_1); printf(" thing_1 = %d\n", thing_1); printf(" &thing_1 = %p\n", (void *) &thing_1); return 0; } \end{verbatim} \begin{verbatim} thing_1 = 100 thing_2 = 200 sizes: sizeof(thing_1) = 4 sizeof(thing_2) = 4 sizeof(a_pointer) = 8 (64 
bits) a_pointer = &thing_1; &thing_1 = 0x7ffe8deb9864 &thing_2 = 0x7ffe8deb9868 a_pointer = 0x7ffe8deb9864 a_pointer = 140731279448164 *a_pointer = 100 a_pointer = &thing_2; a_pointer = 0x7ffe8deb9868 *a_pointer = 200 copy value: int value = thing_1; thing_1 = 100 value = 100 &thing_1 = 0x7ffe8deb9864 &value = 0x7ffe8deb986c copy value using pointer: value = *(&thing_2); thing_2 = 200 value = 200 &thing_2 = 0x7ffe8deb9868 &value = 0x7ffe8deb986c copy value using pointer: a_pointer = &thing_2; a_pointer = 0x7ffe8deb9868 *a_pointer = 200 value = *a_pointer; thing_2 = 200 value = 200 &thing_2 = 0x7ffe8deb9868 &value = 0x7ffe8deb986c pass-by-value (copy): pbv(thing_1); thing_1 = 100 &thing_1 = 0x7ffe8deb9864 passed = 101 &passed = 0x7ffe8deb984c thing_1 = 100 &thing_1 = 0x7ffe8deb9864 pass-by-reference (no copy): pbr(&thing_1); thing_1 = 100 &thing_1 = 0x7ffe8deb9864 passed = 0x7ffe8deb9864 *passed = 100 &passed = 0x7ffe8deb9848 thing_1 = 101 &thing_1 = 0x7ffe8deb9864 \end{verbatim} \subsubsection{Hazel's ptr\(_{\text{const.c}}\)} \label{sec:org1e71eb3} The intent here is to show that you shouldn't mess with const vars but you can eventually mutate them with pointers. 
\begin{verbatim} #include <stdio.h> int main() { int mut_i = 100; // mutable integer printf("mut_i = %d\n", mut_i); const int const_i = 200; // constant integer printf("const_i = %d\n", const_i); // mutable pointer to mutable integer int * mut_p = &mut_i; printf("mut_p = %p\n", (void *) mut_p); printf("*mut_p = %d\n", *mut_p); // constant pointer to mutable integer int * const const_p = &mut_i; printf("const_p = %p\n", (void *) const_p); printf("*const_p = %d\n", *const_p); // mutable pointer to constant integer const int * p_to_const = &const_i; printf("p_to_const = %p\n", (void *) p_to_const); printf("*p_to_const = %d\n", *p_to_const); // constant pointer to constant integer const int * const const_p_to_const = &const_i; printf("const_p_to_const = %p\n", (void *) const_p_to_const); printf("*const_p_to_const = %d\n", *const_p_to_const); /* // Don't do this! // "warning: assignment discards ‘const’ qualifier from pointer target type" mut_p = &const_i; const char *str_lit = "String literals are const char *"; printf("%s\n", str_lit); // but remember this means we can change str_lit to point to a different string! str_lit = "String literal #2"; printf("%s\n", str_lit); // This protects us from: // str_lit[0] = 'D'; // this is wrong: char *wrong = "We will try to change this string literal"; printf("%s\n", wrong); // Because it doesn't protect us from: // wrong[0] = 'D'; // what happens if you uncomment the above line? // This might be better: const char * const RIGHT = "Don't go changing on me!"; printf("%s\n", RIGHT); // Because it protects us from: // RIGHT[0] = 'L'; // and // RIGHT = wrong; */ } \end{verbatim} \begin{verbatim} mut_i = 100 const_i = 200 mut_p = 0x7ffd165d3e50 *mut_p = 100 const_p = 0x7ffd165d3e50 *const_p = 100 p_to_const = 0x7ffd165d3e54 *p_to_const = 200 const_p_to_const = 0x7ffd165d3e54 *const_p_to_const = 200 \end{verbatim} \subsubsection{Hazel's Pointer No No's} \label{sec:org8b96fb5} \url{ptr\_nonos.c} Note the lack of flags below. 
\begin{verbatim}
gcc -std=c99 -Wall -pedantic -o ptr_nonos ptr_nonos.c && \
./ptr_nonos
\end{verbatim}
\begin{verbatim}
*pointer = 100
 Three fives is 15
*pointer = 22089
 Three fives is 15
 result = 15
 &result = 0x7ffee57fce84
&result_p = 0x7ffee57fce88
\end{verbatim}
\begin{verbatim}
#include <stdio.h>

#define SIZE 10

// This function tries to print out the int which is at address 0 in memory...
// Don't do this!
void dereference_null() {
  printf("\ndereference null\n");
  int *a_pointer = NULL;
  printf(" a_pointer = %p\n", (void *) a_pointer);
  printf("*a_pointer = %d\n", *a_pointer);
}

// This function tries to print out the int which is at some address we don't know in memory...
// Don't do this!
void dereference_uninit() {
  printf("\ndereference uninitialized pointer\n");
  int *a_pointer;
  printf(" a_pointer = %p\n", (void *) a_pointer);
  printf("*a_pointer = %d\n", *a_pointer);
}

// This function returns a pointer to an "automatic" local variable...
// Don't do this!
int *return_pointer_to_local() {
  int local_int = 100;
  int *pointer = &local_int;
  // when we return we give up the memory we allocated for "local_int"!
  return pointer;
}

// This function just does some things...
int do_things() {
  int three = 3;
  int five = 5;
  int three_fives = three * five;
  printf(" Three fives is %d\n", three_fives);
  return three_fives;
}

int main() {
  // dereference_null();
  int * pointer = return_pointer_to_local();
  printf("*pointer = %d\n", *pointer);
  do_things();
  printf("*pointer = %d\n", *pointer);
  // You can't get a pointer to some things...
  // This won't compile:
  // &(do_things());
  // We can't do this for the same reason...
  // &10;
  // This one is actually exactly the same as the one above...
  // &SIZE;
  // You have to make memory to store the value to get a pointer to it!
  int result = do_things();
  printf(" result = %d\n", result);
  printf(" &result = %p\n", (void *) &result);
  // This won't compile either. Same reason.
  // &(&result);

  // You have to make memory to store the pointer to get a pointer to it!
  int * result_p = &result;
  printf("&result_p = %p\n", (void *) &result_p);

  int **result_pp = &result_p;
  int ***result_ppp = &result_pp;
  printf("result_ppp = %p\n", (void *) result_ppp);
  printf("&result_ppp = %p\n", (void *) &result_ppp);
  printf("***result_ppp = %d\n", ***result_ppp);

  return 0;
}
\end{verbatim}

\begin{verbatim}
*pointer = 100
 Three fives is 15
*pointer = 21920
 Three fives is 15
 result = 15
 &result = 0x7ffd1638f9f4
&result_p = 0x7ffd1638f9f8
result_ppp = 0x7ffd1638fa00
&result_ppp = 0x7ffd1638fa08
***result_ppp = 15
\end{verbatim}

\subsubsection{Multidimensional Arrays and Pointers}
\label{sec:org9b694c7}

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 10

void init2D(int rows, int cols, int values[][cols]) {
  int i = 0;
  for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
      values[row][col] = i++;
    }
  }
}

int main() {
  int myInts[N][N];
  init2D(N, N, myInts);

  // int * ptrToMyInts = myInts; // THIS WILL NOT WORK
  int (* ptrToMyInts)[N][N] = &myInts;
  int (* secondRow)[N] = &myInts[1];

  printf("myInts:\t%p\n", (void*)myInts);
  printf("ptrToMyInts:\t%p\n", (void*)ptrToMyInts);
  printf("deref myInts:\t%d\n", **myInts);
  printf("deref myInts + 1:\t%d\n", **(myInts + 1) ); // this hops a row!
  printf("deref secondRow:\t%d\n", *secondRow[0]);
  printf("deref *myInts + 1:\t%d\n", *(*myInts + 1) ); // this hops a col!
  //printf("deref ptrToMyInts:\t%d\n", *ptrToMyInts);

  return 0;
}
\end{verbatim}

\begin{verbatim}
myInts: 0x7ffc32f5f8f0
ptrToMyInts: 0x7ffc32f5f8f0
deref myInts: 0
deref myInts + 1: 10
deref secondRow: 10
deref *myInts + 1: 1
\end{verbatim}

\subsubsection{Arrays of Pointers or Pointers of Pointers}
\label{sec:orga889717}

Be aware that when declaring arrays there are arrays of pointers and pointers to arrays. They are different.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 4

int main() {
  char * ptrs[4]; // an array of character pointers!
  char stringOnStack[] = "ON STACK"; // these literals will not be on the stack
  ptrs[0] = "Anaxagoras";
  ptrs[1] = "mummifies";
  ptrs[2] = "shackles";
  ptrs[3] = stringOnStack;

  printf("sizeof(ptrs)=%lu sizeof(ptrs[0])=%lu\n", sizeof(ptrs), sizeof(ptrs[0]));
  printf("sizeof(stringOnStack)=%lu sizeof(stringOnStack[0])=%lu\n",
         sizeof(stringOnStack), sizeof(stringOnStack[0]));
  printf("sizeof(&stringOnStack)=%lu sizeof(&stringOnStack[0])=%lu\n",
         sizeof(&stringOnStack), sizeof(&stringOnStack[0]));

  for (int i = 0; i < N; i++) {
    printf("S:%s\t", ptrs[i]);
    printf("P:%p\t", (void*)ptrs[i]);
    printf("L:%p\n", (void*)&ptrs[i]);
  }

  char ** pointsToPointers = ptrs; // it is pointers to pointers (like an array!)
  printf("sizeof(pointsToPointers)=%lu sizeof(pointsToPointers[0])=%lu\n",
         sizeof(pointsToPointers), sizeof(pointsToPointers[0]));

  puts(*(pointsToPointers + 0));
  puts(pointsToPointers[0]);
  putchar('\n');
  puts(*(pointsToPointers + 2));
  puts(pointsToPointers[2]);
  putchar('\n');

  return 0;
}
\end{verbatim}

\begin{verbatim}
sizeof(ptrs)=32 sizeof(ptrs[0])=8
sizeof(stringOnStack)=9 sizeof(stringOnStack[0])=1
sizeof(&stringOnStack)=8 sizeof(&stringOnStack[0])=8
S:Anaxagoras P:0x55cb7601d978 L:0x7fffba6e7380
S:mummifies P:0x55cb7601d983 L:0x7fffba6e7388
S:shackles P:0x55cb7601d98d L:0x7fffba6e7390
S:ON STACK P:0x7fffba6e73af L:0x7fffba6e7398
sizeof(pointsToPointers)=8 sizeof(pointsToPointers[0])=8
Anaxagoras
Anaxagoras

shackles
shackles

\end{verbatim}

\subsubsection{Confusing Array Pointer interactions and syntax}
\label{sec:orgf76931c}

\begin{itemize}
\item int * myInts != int (* myInts)[]
\end{itemize}

\begin{enumerate}
\item Make a pointer to the first element
\label{sec:org2465e2f}

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 5

void init2D(int rows, int cols, int values[][cols]) {
  int i =
0;
  for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
      values[row][col] = i++;
    }
  }
}

int main() {
  int matrix[N][N];
  init2D( N, N, matrix );

  int * pointToMatrix = &matrix[0][0];

  for (int i = 0; i < N*N; i++) {
    printf("%c", (i%N==0)?'\n':'\t');
    printf("%d", pointToMatrix[i]);
  }

  return 0;
}
\end{verbatim}

\begin{verbatim}
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
15 16 17 18 19
20 21 22 23 24
\end{verbatim}

\item Make a pointer to the first row
\label{sec:org9990005}

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 5
#define M 3

void init2D(int rows, int cols, int values[][cols]) {
  int i = 0;
  for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
      values[row][col] = i++;
    }
  }
}

int main() {
  int matrix[M][N];
  init2D( M, N, matrix );

  // a pointer to an int array of size [N]
  int (* pointToRow)[N] = &matrix[0];
  printf("sizeof(pointToRow)=%lu\n", sizeof(pointToRow));
  printf("sizeof(pointToRow[0])=%lu\n", sizeof(pointToRow[0]));

  printf("Take a ref to row\n");
  for (int i = 0; i < M; i++) {
    int * row = pointToRow[i];
    for (int j = 0 ; j < N; j++) {
      printf("%d\t", row[j]);
    }
    printf("\n");
  }

  printf("Take a ref to row w/ pointer arithmetic\n");
  pointToRow = &matrix[0];
  for (int i = 0; i < M; i++) {
    int * row = *pointToRow; //deref that row
    pointToRow++; // go to next row
    for (int j = 0 ; j < N; j++) {
      printf("%d\t", row[j]);
    }
    printf("\n");
  }

  printf("Direct index\n");
  pointToRow = &matrix[0];
  // direct index
  for (int i = 0; i < M; i++) {
    for (int j = 0 ; j < N; j++) {
      printf("%d\t", pointToRow[i][j]);
    }
    printf("\n");
  }

  printf("Skip a row\n");
  // skip a row
  pointToRow = &matrix[1];
  for (int i = 1; i < M; i++) { // try not to go over our bounds
    int * row = *pointToRow; //deref that row
    pointToRow++; // go to next row
    for (int j = 0 ; j < N; j++) {
      printf("%d\t", row[j]);
    }
    printf("\n");
  }

  return 0;
}
\end{verbatim}

\begin{verbatim}
sizeof(pointToRow)=8
sizeof(pointToRow[0])=20
Take a ref to row
0 1 2 3 4
5
6 7 8 9
10 11 12 13 14
Take a ref to row w/ pointer arithmetic
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
Direct index
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
Skip a row
5 6 7 8 9
10 11 12 13 14
\end{verbatim}
\end{enumerate}
\end{document}
\chapter{Implementation}
\label{chap:impl}

The implementation is divided into several modules: \textit{core} (code for computing different equilibria), \textit{web} (a backend engine that allows playing games and stores the results in a database), \textit{console} (a console client for the web API), \textit{structures} (common structures for \textit{web} and \textit{console}), and \textit{analysis} (Spark code to analyze large datasets of games). Each module is described in a separate section below.

Several services are ready to be used via Docker Compose and can simply be run without installing any additional software: \textit{web} (the web API server), \textit{postgres} (a PostgreSQL database; started automatically when running \textit{web}), \textit{console} (the console interface), \textit{analysis} (Apache Spark with commands from the \textit{analysis} module made available), and \textit{sbt} (the build tool; it can be used to compile modules and to run tests).

\section{Core Module}

While the core module does not contain much code, it is the heart of this project. It defines the following entities:
\begin{itemize}
\item \textsc{Game}: a case class that represents a game in normal form for two players.
\item \textsc{BestResponse}: a trait that describes a best response, given a game and the opponent's strategy. It provides a method \textit{equilibria} which automatically computes all equilibria induced by the respective best response. It is implemented by the following objects:
\begin{itemize}
\item \textsc{NashianBestResponse.Weak}
\item \textsc{NashianBestResponse.Strict}
\item \textsc{PerfectlyTransparentBestResponse.Weak}
\item \textsc{PerfectlyTransparentBestResponse.Strict}
\end{itemize}
\item \textsc{Eliminator}: a trait that describes an elimination process used for computing equilibria. It defines a method \textit{eliminate} which represents one round of elimination.
It implements a method \textit{all} which repeats the elimination round for as long as the set of non-eliminated profiles keeps changing. It is implemented by the following objects:
\begin{itemize}
\item \textsc{IndividualRationality}
\item \textsc{MinimaxRationalizability}
\end{itemize}
\item In the theoretical part of this work, we also define some other equilibria: PTBPE and PTOPE. These are not induced by best response definitions. Instead, they are defined as strategy profiles where the \textit{perfectly transparent $i$-best profiles} and \textit{perfectly transparent $i$-optimal profiles} coincide for all players $i$. These are implemented by the following objects:
\begin{itemize}
\item \textsc{PerfectlyTransparentRowBestProfile.Weak}
\item \textsc{PerfectlyTransparentRowBestProfile.Strict}
\item \textsc{PerfectlyTransparentColOptimalProfile.Weak}
\item \textsc{PerfectlyTransparentColOptimalProfile.Strict}
\end{itemize}
\item \textsc{GameGenerator}: an object for generating random games.
\end{itemize}

\section{Web Module}
\label{sec:impl-web-module}

The web module is a web server that provides a REST API for playing games in normal form against a computer. Every time a game is played, the result is stored in a PostgreSQL database (see \autoref{fig:db-schema}). The API exposes the following endpoints:
\begin{itemize}
\item \textsc{NewUser}: Creates a new user and returns the user's ID.
\item \textsc{NewGame}: Starts a new game; returns the game matrix and ID.
\item \textsc{Play}: For a game with a given ID, accepts the human player's (row) strategy and returns the computer's (column) strategy.
\item \textsc{Stats}: Returns statistics for a user with a given ID: the number of games played and the average payoff.
\end{itemize}
The Play Framework is used to implement the server. For communicating with the database, we use the Slick library.
\begin{figure}
\hspace{-1cm}
\includegraphics[width=14cm]{fig/schema.pdf}
\caption{The database schema.}
\label{fig:db-schema}
\end{figure}

\section{Console Module}

The console module is a sample client for the REST API described in \autoref{sec:impl-web-module}. It allows the user to play a series of games of chosen dimensions. For each game, the game matrix is displayed, and the user is asked to choose a strategy. After choosing a strategy, the computer's strategy and the resulting payoff are displayed. Then, the user is asked whether she wants to play another game. At the end, statistics over all games are displayed.

\begin{figure}
\centering
\includegraphics[width=12cm]{fig/console.png}
\caption{The console interface.}
\label{fig:console-screen}
\end{figure}

\section{Analysis Module}

The analysis module is used for computing statistics about large datasets of games using Apache Spark. It loads the datasets; computes the PTBPE, PTBRE, individually rational, and minimax rationalizable strategy profiles; and stores them as a new dataset. Searching through this dataset was useful for finding counterexamples and forming conjectures about inclusions of these equilibria (see \autoref{chap:game-theory}).
%!TEX root = ../copatterns-thesis.tex

\chapter{Idris}
\label{cha:idris}

Idris is a general-purpose functional programming language with full dependent types. Having full dependent types means that the type-level and term-level languages are one and the same, such that types \emph{are} in fact terms, making computations on types as easily definable as computations on other kinds of terms. Idris has native support for dependent product and sum types, (co)inductive families, and dependent pattern matching. Additionally, Idris allows the definition of provably total functions, which is imperative when exploiting the principle of programs-as-proofs to ensure program correctness.

In order to show which parts of the language implementation must be manipulated when implementing copatterns and inference of guarded recursion, respectively, this chapter discusses the internal structure of the Idris compiler. Providing a comprehensive description of the compiler is not within the scope of this presentation, but many of the details not covered here have been described thoroughly by Brady\,\citep{BradyIdrisImpl13}.

\section{Overview}

\begin{figure}
\includegraphics[scale=0.9]{figures/Idris-overview}
\caption{The phases of the Idris compiler. Phases are shown as rectangles, and each transition (arrow) is annotated with the input or output representation of a given phase.
Ovals designate endpoints.} \label{fig:idris-overview} \end{figure} An overview of the different phases of the Idris compiler is shown in Figure~\ref{fig:idris-overview}. Starting with concrete Idris source code and ending with a binary executable, each rectangle represents one phase of compilation. During compilation, the input program is represented in several different internal languages. Each arrow in Figure~\ref{fig:idris-overview} is ascribed with the language in which the input program is represented when entering or leaving a phase, respectively. Omitting a description of the machine code, these are: \begin{itemize} \item \textbf{(concrete) Idris} The high-level language in which Idris programs are written. \item \textbf{(abstract) Idris} The abstract representation of the high-level Idris language generated by the parser. \item \textbf{\IdrisM} A (strict) subset of abstract Idris without any syntactic sugar. Do-notation and infix operators are desugared, and implicit arguments are bound explicitly. Note that \IdrisM{} and abstract Idris are essentially the same language, where the syntactic sugar from abstract Idris is reduced to desugared terms in \IdrisM. \item \textbf{TT} The core type theory, TT, is a dependently typed lambda calculus with inductive families and pattern matching. TT only allows pattern matching on top-level values, so all \texttt{case}-expressions are converted to top-level pattern matching during elaboration. In TT, all terms are fully annotated with their types and all implicit arguments are explicit. \item \textbf{Raw} A (raw) representation of TT terms without any type information. This representation is used for type reconstruction during type checking. As Raw is internal to the type checking phase, it is not shown in Figure~\ref{fig:idris-overview}, but has been included here for completeness and later reference. \item \textbf{IBC} Idris Byte Code (IBC) is the bytecode representation of an Idris program. 
\end{itemize}

Each of the language representations (except concrete Idris) is generated by a specific phase of compilation, usually to reduce a complex representation to a simpler one which is easier to reason about and compile. Including the input and output of the compiler, namely Source and Executable, the phases are:
\begin{itemize}
\item \textbf{Source} The source code of the program, given in concrete Idris syntax.
\item \textbf{Parsing} The parser generates an abstract syntax tree (abstract Idris) from the source code.
\item \textbf{Desugaring} In the desugaring phase, abstract Idris is reduced to \IdrisM{} by desugaring do-notation, implicit arguments, etc.
\item \textbf{Elaboration} Elaboration reduces \IdrisM{} terms to terms in the core language, TT. The elaboration phase consists of several notable sub-phases:
\begin{itemize}
\item \textit{Unification} Unification is the process of finding a substitution (also called a \texttt{unifier}) that identifies two terms. In Idris, unification enables elaboration to progress gradually by continual unification of holes with terms, until a complete TT term has been built. Also, unification is used for instantiation of implicit arguments. Further details will be provided in Section~\ref{sec:elaboration}.
\item \textit{Case Tree Generation} A case tree\,\citep{Augustsson:1985} is generated for each function definition, describing the structure of the top-level pattern matching on the left-hand side of a definition. These case trees are used for coverage checking by the totality checker.
\item \textit{Type Checking} All TT terms resulting from the previous steps of elaboration are type checked at the end of elaboration to ensure that no ill-typed terms are constructed. Type checking proceeds by mapping TT terms to Raw terms, and then reconstructing the type of each Raw term according to the typing environment.
If the reconstructed type is convertible with the annotated type of the original TT term, type checking succeeds; otherwise, it fails. As the last stage of type checking, universe levels are checked by looking for cycles in a graph of universe constraints.
\item \textit{Totality Checking} During the totality checking phase, a totality analysis is performed on all function definitions. First, a coverage analysis determines whether the function in question is covering, using the previously generated case trees. Next, a termination analysis based on the size-change principle is performed on functions with an inductive result type, while a productivity analysis is performed on functions with a coinductive result type via the syntactic guardedness principle.
\end{itemize}
\item \textbf{IBC Generation} After successful elaboration, an Idris Byte Code representation is generated by a script which is built up gradually during elaboration.
\item \textbf{IBC Compilation} During compilation, the IBC representation is reduced to machine code.
\item \textbf{Executable} The final executable generated by the compiler.
\end{itemize}

The most interesting parts of the compiler are the core type theory, TT, and the elaboration phase, in which TT terms are built. These will now be explained in greater detail. Also, a brief explanation of the implementation of coinductive data types in Idris will be provided.

\section{TT, the Core Type Theory}
\label{sec:tt-core-type}
\todo{Write a description of how we write TT programs}

TT is a dependently typed lambda calculus extended with top-level pattern matching definitions and inductive families.
It is deliberately kept small in order to provide increased confidence in its correctness. A simple core type theory is also easier to type check and optimise, and as will be shown in Chapter~\ref{cha:infer-guard-recurs}, greatly simplifies the implementation of our inference system for guarded recursion. \begin{figure}[h] \centering \AxiomC{$\Gamma \vdash$ \underline{valid}} \LeftLabel{Type} \UnaryInfC{$\Gamma \vdash Type_n : Type_{n+1}$} \DisplayProof \vspace{1em} \AxiomC{$\Gamma \vdash$ \underline{valid}} \LeftLabel{Const$_1$} \UnaryInfC{$\Gamma \vdash i : Int$} \DisplayProof \quad \AxiomC{$\Gamma \vdash$ \underline{valid}} \LeftLabel{Const$_2$} \UnaryInfC{$\Gamma \vdash str : String$} \DisplayProof \vspace{1em} \AxiomC{$\Gamma \vdash$ \underline{valid}} \LeftLabel{Const$_3$} \UnaryInfC{$\Gamma \vdash Int : Type_0$} \DisplayProof \quad \AxiomC{$\Gamma \vdash$ \underline{valid}} \LeftLabel{Const$_4$} \UnaryInfC{$\Gamma \vdash String : Type_0$} \DisplayProof \vspace{1em} \AxiomC{$(\lambda x:S) \in \Gamma$} \LeftLabel{Var$_1$} \UnaryInfC{$\Gamma \vdash x : S$} \DisplayProof \quad \AxiomC{$(\forall x:S) \in \Gamma$} \LeftLabel{Var$_2$} \UnaryInfC{$\Gamma \vdash x : S$} \DisplayProof \quad \AxiomC{$(\underline{let} \mapsto s:S) \in \Gamma$} \LeftLabel{Val} \UnaryInfC{$\Gamma \vdash x : S$} \DisplayProof \vspace{1em} \AxiomC{$\Gamma \vdash f : (x : S) \to T$} \AxiomC{$\Gamma \vdash s : S$} \LeftLabel{App} \BinaryInfC{$\Gamma \vdash f\ s : T[{s} / {x}]$} \DisplayProof \vspace{1em} \AxiomC{$\Gamma; \lambda x : S \vdash e : T $} \AxiomC{$\Gamma \vdash (x : S) \to T : Type_n$} \LeftLabel{Lam} \BinaryInfC{$\Gamma \vdash \lambda x : S . 
e : (x : S) \to T$} \DisplayProof \vspace{1em} \AxiomC{$\Gamma; \forall x : S \vdash T : Type_m$} \AxiomC{$\Gamma \vdash S : Type_n$} \LeftLabel{Forall} \RightLabel{$\exists p.m \leq p, n \leq p$} \BinaryInfC{$\Gamma \vdash (s : S) \to T : Type_p$} \DisplayProof \vspace{1em} \AxiomC{$\begin{matrix} \Gamma \vdash e_1 : S \\ \Gamma \vdash S : Type_n \end{matrix}$} \AxiomC{$\begin{matrix} \Gamma; \underline{let} \x \mapsto e_1 : S \vdash e_2 : T \\ \Gamma; \underline{let} \x \mapsto e_1 : S \vdash T : Type_n \end{matrix}$} \LeftLabel{Let} \BinaryInfC{$\Gamma \vdash \underline{let}\ x \mapsto e_1 : S.\ e_2 : T[{e_1}/{x}]$} \DisplayProof \vspace{1em} \AxiomC{$\Gamma \vdash x : A$} \AxiomC{$\Gamma \vdash A' : Type_n$} \AxiomC{$\Gamma \vdash A \preceq A'$} \LeftLabel{Conv} \TrinaryInfC{$\Gamma \vdash x : A'$} \DisplayProof \caption{The typing rules for the core type theory TT, borrowed from Brady\,\citep{BradyIdrisImpl13}.} \label{fig:TT_typing_rules} \end{figure} The typing rules for TT are shown in Figure~\ref{fig:TT_typing_rules}. Most of these rules are standard, keeping in mind that types may depend on values. In order to avoid Girard's paradox, i.e. that the type of \texttt{Type} is \texttt{Type} (which is a logical inconsistency), a cumulativity relation ($\preceq$) on universes is used, defined by the rules in Figure~\ref{fig:TT_cumulativity_relation}. 
\begin{figure}
\centering
\AxiomC{$\Gamma \vdash S \simeq T$}
\UnaryInfC{$\Gamma \vdash S \preceq T$}
\DisplayProof
\quad
\AxiomC{}
\UnaryInfC{$\Gamma \vdash Type_n \preceq Type_{n+1}$}
\DisplayProof
\vspace{1em}

\AxiomC{$\Gamma \vdash R \preceq S$}
\AxiomC{$\Gamma \vdash S \preceq T$}
\BinaryInfC{$\Gamma \vdash R \preceq T$}
\DisplayProof
\vspace{1em}

\AxiomC{$\Gamma \vdash S_1 \simeq S_2$}
\AxiomC{$\Gamma; x : S_1 \vdash T_1 \preceq T_2$}
\BinaryInfC{$\Gamma \vdash \forall x:S_1.T_1 \preceq \forall x:S_2.T_2$}
\DisplayProof
\caption{The rules for the cumulativity relation.}
\label{fig:TT_cumulativity_relation}
\end{figure}

Notice that some of the rules for the cumulativity relation require terms to be convertible ($\simeq$), e.g. $S\simeq T$. Convertibility will be explained as part of the type checking phase in Section~\ref{sec:type-checking}.

The rest of this report will contain both \texttt{TT} and Idris code examples. Each block of code will be marked to indicate which language it contains. As some details of \texttt{TT} are not necessary for understanding our implementation, we use a different notation for \texttt{TT} than the one used by Brady\,\citep{BradyIdrisImpl13}. Figure~\ref{fig:tt_notation} shows a \texttt{TT} function \texttt{vAdd}, written first in Brady's notation and then in ours. Firstly, we treat type class parameters as they are treated in high-level Idris (similar to Haskell). Secondly, we do not annotate the types of pattern variables. Although not shown in the example, we also do not annotate the types of lambda and let-bindings. Furthermore, we use certain high-level Idris syntactic shorthands, such as \texttt{[]} instead of \texttt{Nil}. Lastly, note that we use \texttt{=} rather than $\mapsto$.

\newcommand{\lstul}[1]{\ensuremath{\underline{\mbox{ #1}}}}

\begin{figure}[H]
\begin{lstlisting}[mathescape]
vAdd : ($a$ : Type) $\to$ ($n$ : Nat) $\to$ Num $a$ $\to$ Vect $n$ $a$ $\to$ Vect $n$ $a$ $\to$ Vect $n$ $a$
$\lstul{var}$ $a$ : Type, $c$ : Type.
  vAdd $a$ Z $c$ (Nil $a$) (Nil $a$) $\mapsto$ Nil $a$
$\lstul{var}$ $a$ : Type, $k$ : Nat, $c$ : Type, $x$ : $a$, $xs$ : Vect $k$ $a$, $y$ : $a$, $ys$ : Vect $k$ $a$.
  vAdd $a$ (S $k$) $c$ ((::) $a$ $k$ $x$ $xs$) ((::) $a$ $k$ $y$ $ys$) $\mapsto$ ((::) $a$ $k$ ((+) $c$ $x$ $y$) $xs$) (vAdd $a$ $k$ $c$ $xs$ $ys$)

vAdd : Num a $\Rightarrow$ (a : Type) $\to$ (n : Nat) $\to$ Vect n a $\to$ Vect n a $\to$ Vect n a
vAdd a Z [] [] = []
vAdd a (S k) (x :: xs) (y :: ys) = (::) a k (x + y) (vAdd a k xs ys)
\end{lstlisting}
\caption{Edwin Brady's\,\citep{BradyIdrisImpl13} notation for \texttt{TT} compared to ours.}
\label{fig:tt_notation}
\end{figure}

\section{Coinductive Data Types in Idris}
\label{sec:coind-data-types}

Idris supports inductive as well as coinductive type families. The latter are modeled as lazily evaluated inductive families, where the types of all recursive constructor arguments are tagged automatically with a special type constructor, \texttt{Inf}. Values with an \texttt{Inf} type are possibly infinite, and thus cannot be safely evaluated using a call-by-value strategy. In the vein of Abelson and Sussman\,\citep{Abelson96SICP}, evaluation of such values is handled by two special data constructors, \texttt{Delay} and \texttt{Force}. These provide call-by-name evaluation, in the sense that all \texttt{Inf} values are initially delayed, and then forced when needed. The compiler cannot always infer all possibly infinite arguments, e.g. when using mixed induction-coinduction, so in these cases the user must manually tag the correct argument types with \texttt{Inf}. Aside from the use of \texttt{Inf}, Idris also supports lazy evaluation of inductively defined values.

\section{Elaboration}
\label{sec:elaboration}

The elaborator is similar to tactic-based theorem provers such as Coq\,\citep{Coq:manual}; it takes terms from \IdrisM{} to TT in a step-by-step manner.
The idea is to construct a TT term from a corresponding \IdrisM{} term by gradual refinement, until a complete and well-typed TT term has been built. Before describing the process of elaboration, the motivation behind the elaborator will be provided, along with the intuitions underlying type checking and totality checking TT terms.

\subsection{Motivation}

Instead of compiling \IdrisM{} directly to byte code, increased confidence in the correctness of compilation can be obtained by first transforming a possibly complex term in the high-level language to a simpler term in the core type theory. This leads to a smaller trusted core: if we are confident that TT can be correctly translated into byte code, and we can show that any \IdrisM{} term can be correctly transformed to a corresponding TT term, then we can be more confident in the correctness of the high-level \IdrisM{} language than if it were translated directly to byte code.

\subsection{Type Checking}
\label{sec:type-checking}
In the elaboration phase, type checking plays a role both during and after the construction of a TT term. Type checking a TT term $e$ against a type $T$ is a twofold process, involving (1) type reconstruction and (2) conversion checking. To determine whether $e : T$ holds, the type of $e$ is first reconstructed as $S$, and then a conversion check between $S$ and $T$ is performed.

\subsubsection{Type Reconstruction}

To reconstruct the type of a TT term $e$, all type information is first erased from the term using a forgetful mapping from TT to Raw, producing $e_{raw}$. The type of $e_{raw}$ is then reconstructed such that it conforms to the rules presented in Figure~\ref{fig:TT_typing_rules}. All type information is available at this point, so no type inference is performed (no new type information is derived). Instead, type information is recovered from either the global context, where the types of top-level definitions are stored, or the local context, where the types of variables are stored.

\subsubsection{Conversion Checking}

Conversion checking happens by comparing the normal forms of two TT terms. Finding these normal forms may require evaluation, since either of the two terms can involve arbitrary expressions. Compile-time evaluation of TT terms is defined by two contraction schemes, as explained by Brady\,\citep{BradyIdrisImpl13}. When two TT terms $e_{1}$ and $e_{2}$ are convertible ($\simeq$), such that $\Gamma\vdash e_{1} \simeq e_{2}$ holds, $e_{2}$ can be obtained from $e_{1}$ by a finite number of applications of the contraction schemes, where reversed applications are allowed.
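As a small illustration (our own example, not taken from Brady), a conversion check would identify two types that differ only up to evaluation:
\[
\Gamma \vdash \mathtt{Vect}\ (2 + 2)\ \mathtt{Int} \simeq \mathtt{Vect}\ 4\ \mathtt{Int}
\]
since $2 + 2$ reduces to $4$, both sides share the normal form $\mathtt{Vect}\ 4\ \mathtt{Int}$, so the check succeeds.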
\subsection{Totality Checking}

Totality checking is a prerequisite for type checking, since the conversion checker will not attempt to find normal forms for definitions which have not been proven total. In practice, this means that partial functions cannot be part of a type declaration. The totality checking procedure in Idris has three parts: coverage checking, termination checking, and productivity checking.

The coverage checker analyzes the left-hand side pattern matching structure using case trees\,\citep{Augustsson:1985}. These case trees are constructed from TT terms by a special case tree elaborator, which in particular identifies unmatched and default cases. If a definition is not covering, the totality checker performs no further analysis.

The termination checker is a quite straightforward implementation of the size-change termination principle\,\citep{LeeJones01SizeChange}. Since Idris supports the definition of mutually recursive functions in a special block designated by the keyword \texttt{mutual}, the termination checker is invoked after the elaboration of each such block.

The productivity checker is an implementation of the principle of syntactic guardedness\,\citep{Coquand94}. The current implementation is part of the termination checker, since it merely checks that all corecursive invocations happen under a special data constructor ``Delay'' (after normalization).
The ``Delay'' constructor is used to indicate lazy evaluation, and is eliminated by its counterpart, ``Force''. After normalization, a corecursive reference must therefore occur under at least one ``Delay'' constructor in order to be productive.

\subsection{The Elaboration Process}

To enable the construction of TT terms by gradual refinement, incomplete terms must be supported by the elaboration process. Concretely, this happens by having ``holes'' (subgoals of incomplete terms) and ``guesses'' (possible instantiations for a hole) as binders in the term language, inspired by McBride's approach\,\citep{McBrideThesis:1999}.

Elaboration happens within a proof state, consisting of the (incomplete) proof term currently being constructed, a queue of holes, a collection of unsolved unification problems, and a typing context. At the head of the hole queue is the hole which is currently being solved. Initially, the proof state contains only one hole, but more holes can be added to and removed from the hole queue as elaboration progresses.

As shown in Figure~\ref{fig:idris-overview}, elaboration is a quite complex process where unification, case tree generation, and type checking are all intertwined. The invocation of each of these phases is directed by tactics, since the transformation of different terms requires them at different stages during elaboration. Elaboration should therefore not be understood as a linear process from unification to type checking, but rather as a process where each of these phases is performed as needed. In practice, elaboration proceeds within a monad, \texttt{Elab}, which is essentially a state monad that also offers error handling and additional support for specific meta-operations on the state. These meta-operations are applied to the proof state and the current proof term until all holes have been resolved.
According to Brady\,\citep{BradyIdrisImpl13}, four types of meta-operations are used:
\begin{itemize}
\item \textbf{Queries}, which are the tactics that do not modify the current proof term or the proof state. These include retrieving the current proof term, retrieving the type of a term, and retrieving the local context of a hole.
\item \textbf{Unification}, which solves unification problems relative to a context.
\item \textbf{Tactics}, which modify the current proof term and may modify the proof state in the process.
\item \textbf{Focusing}, which moves a hole to the head of the hole queue.
\end{itemize}
Due to their simplicity, we will forgo explanations of queries and focusing, and instead elaborate further on unification and tactics.

\subsubsection{Unification}
The goal of unifying two terms $t_{1}$ and $t_{2}$ is to find a substitution such that the two terms are convertible with respect to a given typing context, $\Gamma$ (i.e. $\Gamma\vdash t_{1} \simeq t_{2}$). Hence, a unification problem forms a triple ($\Gamma$, $t_{1}$, $t_{2}$). Unification of two terms may fail if unification of subterms fails, or if solving one unification problem makes related unification problems impossible to solve. Solving a unification problem may lead to solutions of existing problems and introduce new ones.

Unification is used throughout the elaboration process, for example for type checking and for the gradual construction of TT terms. It is also instrumental in the process of inferring implicit arguments. Consider the implementation of a \texttt{map} function on indexed lists (\texttt{Vect}) given in Figure~\ref{fig:vect_map}. Here, \texttt{a}, \texttt{b}, and \texttt{n} are implicit arguments which must be resolved. First, the implicit arguments are made explicit through desugaring, as shown in Figure~\ref{fig:vect_map_desugared}. The types of the implicit arguments cannot be readily inferred, however, and must be solved by unification.
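To make the notion of a unification problem concrete, here is a toy first-order syntactic unifier. Idris's actual unifier works on TT terms with binders and handles higher-order patterns; this sketch uses an invented term encoding and omits the occurs check, and only illustrates how a problem ($t_1$, $t_2$) either yields a substitution or fails on a rigid-rigid mismatch.

```python
# Terms are metavariables ("?a") or applications as tuples (head, arg1, ...).

def walk(t, subst):
    """Follow metavariable bindings to the representative term."""
    while isinstance(t, str) and t.startswith('?') and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return an extended substitution, or None on failure.
    (Occurs check omitted for brevity.)"""
    subst = dict(subst or {})
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.startswith('?'):
        subst[t1] = t2
        return subst
    if isinstance(t2, str) and t2.startswith('?'):
        subst[t2] = t1
        return subst
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # rigid-rigid mismatch: unification fails

# Solving  Vect ?n ?a  ~  Vect 3 Nat  instantiates both metavariables.
s = unify(('Vect', '?n', '?a'), ('Vect', '3', 'Nat'))
```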
Therefore, the program in Figure~\ref{fig:vect_map_desugared} gives rise to the following unification problems:
\begin{itemize}
\item ($\,\cdot\,$, \texttt{a : \_}, \texttt{a : Type})
\item ([a : Type], \texttt{b : \_}, \texttt{b : Type})
\item ([a : Type, b : Type], \texttt{n : \_}, \texttt{n : Nat})
\end{itemize}
The third part of each problem arises from the use of the names: the arguments \texttt{a}, \texttt{b}, and \texttt{n} are used in positions where \texttt{Type}, \texttt{Type}, and \texttt{Nat} are expected, respectively. To solve each problem, the unification algorithm must be able to put a convertible type at each binding site. In this case, the solutions are all straightforward, and the resulting program is shown in Figure~\ref{fig:vect_map_resolved}. The important thing to note is that each of these problems has a unique solution, making unambiguous inference possible.

\begin{figure}
\begin{lstlisting}[mathescape]
map : (a $\to$ b) $\to$ Vect n a $\to$ Vect n b
map f (x :: xs) = f x :: map f xs
\end{lstlisting}
\caption{A map function for an indexed list type \texttt{Vect}.}
\label{fig:vect_map}
\end{figure}

\begin{figure}
\begin{lstlisting}[mathescape]
map : (a : _) $\to$ (b : _) $\to$ (n : _) $\to$ (a $\to$ b) $\to$ Vect n a $\to$ Vect n b
map _ _ _ f (x :: xs) = ((::) _ _ (f _ _ x) (map _ _ _ f xs))
\end{lstlisting}
\caption{A desugared map function for a type \texttt{Vect} with implicit arguments made explicit.}
\label{fig:vect_map_desugared}
\end{figure}

\begin{figure}
\begin{lstlisting}[mathescape]
map : (a : Type) $\to$ (b : Type) $\to$ (n : Nat) $\to$ (a $\to$ b) $\to$ Vect n a $\to$ Vect n b
map a b n f (x :: xs) = ((::) n b (f a b x) (map a b n f xs))
\end{lstlisting}
\caption{A desugared map function for a type \texttt{Vect} with resolved implicit arguments.}
\label{fig:vect_map_resolved}
\end{figure}

\subsubsection{Tactics}
Although unification is an important part of elaboration, the core of the elaborator is tactics.
Tactics modify the current proof term, each describing a possible step in the transformation of a term from \IdrisM{} to TT.
%The following are a subset of the tactics which may be part of such a transformation:
% \begin{itemize}
% \item \textsc{Lambda($\Gamma$, $n$, $?x:T.x$)} creates a lambda binding with
% respect to a context $\Gamma$ with name $n$, from a proof term which
% expects a binder ($?x:T.x$).
% \item \textsc{Pi($\Gamma$, $n:S$, $?x:Type.x$)} creates a pi-binding (dependent
% product type) with name $n$ with respect to a context $\Gamma$, from a
% proof term which expects a type binder.
% \item \textsc{Let($\Gamma$, $(n:S) \mapsto v$, $?x:T.x$)} creates a let-binding
% mapping $n$ to $v$ with respect to a context $\Gamma$ from a
% proof term which expects a binder.
% \item \textsc{Subst($x$, $e$)} which instantiates a hole $x$ with a term
% $e$, often as a result of unification.
% \item \textsc{PrimUnify($\Gamma$, $e_{1}$, $e_{2}$)} which attempts to unify terms
% $e_{1}$ and $e_{2}$ with respect to a context $\Gamma$.
% \item \textsc{Check($\Gamma$, $e$)} which type checks a term $e$ with
% respect to a context $\Gamma$, returning the type of $e$.
% \item \textsc{Convert($\Gamma$, $e_{1}$, $e_{2}$)} which checks that the
% cumulativity relation holds between $e_{1}$ and $e_{2}$ by performing a
% conversion with respect to a context $\Gamma$.
% \item \textsc{Normalise($\Gamma$, $e$)} which returns the normal form of a term
% $e$ with respect to a context $\Gamma$.
% \end{itemize}
Tactics describe concrete operations on the proof term, such as binder creation; meta-operations on the proof state, such as hole substitution; and meta-operations ensuring consistency, such as type checking. Complex operations which require evaluation, e.g. \textsc{Check} and \textsc{Convert}, may invoke operations external to the tactic system, such as type checking.
\subsection{Delaboration} Mainly to support error reporting, a \emph{de}laborator is also a part of the compiler. The delaborator builds an \IdrisM{} term from a TT term, striving to build a term which resembles the user-written term as closely as possible. However, the delaboration process is not always able to correctly reconstruct the user-written program, and should therefore not be relied upon for program analysis. \section{A Short Recapitulation} %\todo{Er lidt usikker på denne afslutning, men ellers slutter det næsten for %brat?} From the Idris input provided by the user, an abstract syntax tree in abstract Idris is created. Through desugaring of do-notation and implicit arguments, an \IdrisM{} representation is built. Elaboration builds a TT term from an \IdrisM{} term by constructing a proof of a transformation from \IdrisM{} to TT. This proof is provided as a series of tactics operating on a proof state, each of which may require unification and type checking. Unification is used for type checking, term construction, and inference of implicit arguments. Type checking ensures consistency through type reconstruction and conversion checks. After elaboration, the totality checker provides the guarantees necessary for avoiding the reduction of partial definitions during following invocations of the type checker. During elaboration, a script is accumulated which describes the resulting IBC file. After the previous phases have been completed, an IBC file is written, and from this, machine code is generated. %incomplete terms must be supported by the elaboration process. %######## %% TT % Alt er eksplicit % Typeregler %%% Type checking %% Idris- / Desugaring % Desugaring er en transformation fra Idris- til Idris- %% Elaboration % Hvorfor elaboration? 
% Faser "smelter sammen" % Teknisk forklaring (tactic prover) %% Totality checking % Size-change termination % Nuværende implementation af produktivitetschecker % Totality er en forudsætning for type checking % Erasure? (måske) %######## %%% Local Variables: %%% mode: latex %%% TeX-master: "../copatterns-thesis" %%% End:
\documentclass[preprint,showkeys,nofootinbib]{revtex4-1} % linking references \usepackage{hyperref} \hypersetup{ breaklinks=true, colorlinks=true, linkcolor=blue, urlcolor=cyan, } % general physics / math packages and commands \usepackage{physics,amsmath,amssymb,braket,dsfont} \renewcommand{\t}{\text} % text in math mode \newcommand{\f}{\dfrac} % shorthand for fractions \newcommand{\p}[1]{\left(#1\right)} % parenthesis \renewcommand{\sp}[1]{\left[#1\right]} % square parenthesis \renewcommand{\set}[1]{\left\{#1\right\}} % curly parenthesis \newcommand{\bk}{\braket} % shorthand for braket \renewcommand{\d}{\text{d}} \newcommand{\g}{\text{g}} \newcommand{\e}{\text{e}} \newcommand{\x}{\text{x}} \newcommand{\y}{\text{y}} \newcommand{\z}{\text{z}} \renewcommand{\c}{\hat{c}} \newcommand{\n}{\hat{n}} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \newcommand{\D}{\mathcal{D}} \newcommand{\E}{\mathcal{E}} \newcommand{\G}{\mathcal{G}} \renewcommand{\H}{\mathcal{H}} \newcommand{\I}{\mathcal{I}} \newcommand{\K}{\mathcal{K}} \renewcommand{\L}{\mathcal{L}} \newcommand{\M}{\mathcal{M}} \newcommand{\N}{\mathcal{N}} \renewcommand{\O}{\mathcal{O}} \renewcommand{\P}{\mathcal{P}} \newcommand{\Q}{\mathcal{Q}} \renewcommand{\S}{\mathcal{S}} \newcommand{\U}{\mathcal{U}} \newcommand{\1}{\mathds{1}} \newcommand{\mA}{m_{\text{A}}} % symbol for the mass of an atom % "left vector" arrow; requires tikz package \newcommand{\lvec}[1] {\reflectbox{\ensuremath{\vec{\reflectbox{\ensuremath{#1}}}}}} % figures \usepackage{graphicx} % for figures \usepackage{grffile} % help latex properly identify figure extensions \graphicspath{{./figures/}} % set path for all figures \usepackage[caption=false]{subfig} % subfigures (via \subfloat[]{}) % inline lists \usepackage[inline]{enumitem} % for feynman diagrams \usepackage{tikz,tikz-feynman} \tikzset{ baseline = (current bounding box.center) } \tikzfeynmanset{ compat = 1.1.0, every feynman = {/tikzfeynman/small} } 
\newcommand{\shrink}[1]{\scalebox{0.8}{#1}} % for smaller diagrams % color definitions (used in a figure) \usepackage{xcolor} \definecolor{lightblue}{RGB}{31,119,180} \definecolor{orange}{RGB}{255,127,14} \definecolor{green}{RGB}{44,160,44} \definecolor{lightred}{RGB}{214,39,40} % proper coloring inside math environment \makeatletter \def\mathcolor#1#{\@mathcolor{#1}} \def\@mathcolor#1#2#3{ \protect\leavevmode \begingroup \color#1{#2}#3 \endgroup } \makeatother \newcommand{\bmu}{\mathcolor{lightblue}{\mu}} \newcommand{\onu}{\mathcolor{orange}{\nu}} \newcommand{\grho}{\mathcolor{green}{\rho}} \newcommand{\re}{\mathcolor{lightred}{\text{e}}} % leave a note in the text, visible in the compiled document \newcommand{\note}[1]{\textcolor{red}{#1}} % for strikeout text % normalem included to prevent underlining titles in the bibliography \usepackage[normalem]{ulem} \newcommand{\blue}[1]{\textcolor{blue}{#1}} \newcommand{\red}[1]{\textcolor{red}{#1}} \newcommand{\green}[1]{\textcolor{green}{#1}} \begin{document} \section*{Referee 2} We thank the referee for taking the time to read and reconsider our revised manuscript. Concerning the comments: \blue{I read the replies from the authors and other report and recommend the publication.} \blue{There are a few discussions on ``renormalization'' schemes in the article and in Referee 1's report. Let me comment on the scheme in the manuscript. The scheme used here (more precisely ``regularization'' scheme) is a quite standard and has been commonly used in many-body studies since the beginning of many-body diagrammatic calculations. It is based on a k-space ``pseudo'' potential. From the point of view of low energy few-body physics studied here, it is indeed equivalent to the other well known standard real space ``differential pseudo'' potential Referee A might have in mind (I speculate that what he/she meant by ``pseudo potential''). 
But more importantly, it can be easily and straightforwardly generalized to arbitrary dimensions and can be integrated into studies of complex/sophisticated collective many-body effects associated with Fermi surfaces and/or sound waves in symmetry breaking states.} We thank the referee for their recommendation to publish our manuscript, and for their input on the regularization scheme in our manuscript. In particular, we thank the referee for pointing out that our approach to regularizing the two-body delta-function interaction is a standard technique in many-body studies, equivalent to another common technique based on a regularized real-space ``differential pseudo-potential''. \end{document}
%---------------------------Taper--------------------------- \section{Taper\label{s:hex-taper}} Taper measures the maximum ratio of a cross-derivative to its shortest associated principal axis. Given a pair of principal axes $f$ and $g$, the taper is \[ T_{fg} = \frac{\normvec{ X_{fg}}}{\min\left\{\normvec{ X_f},\normvec{X_g}\right\}} \] The metric is then the maximum taper of any cross-derivative \[ q = \max\left\{ T_{12}, T_{13}, T_{23} \right\} \] Note that if $\normvec{X_1}$ or $\normvec{X_2}$ or $\normvec{X_3} < DBL\_MIN$, we set $q = DBL\_MAX$. \hexmetrictable{taper}% {$1$}% Dimension {$[0,0.5]$}% Acceptable range {$[0,DBL\_MAX]$}% Normal range {$[0,DBL\_MAX]$}% Full range {$0$}% Cube {Adapted from \cite{tf:89}}% Citation {v\_hex\_taper}% Verdict function name
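Given the principal axes and cross-derivatives as vectors, the metric can be sketched as below. The function name and dictionary encoding are illustrative (the Verdict implementation is \texttt{v\_hex\_taper}, which also constructs $X_f$ and $X_{fg}$ from the hexahedron's corner coordinates, a step omitted here).

```python
import numpy as np

DBL_MIN, DBL_MAX = np.finfo(float).tiny, np.finfo(float).max

def hex_taper(X, Xc):
    """X: principal axes {1: X1, 2: X2, 3: X3};
    Xc: cross-derivatives keyed by axis pairs (1,2), (1,3), (2,3).
    Returns q = max taper ratio, or DBL_MAX for a degenerate element."""
    norms = {i: np.linalg.norm(a) for i, a in X.items()}
    if min(norms.values()) < DBL_MIN:   # degenerate: some axis has ~zero length
        return DBL_MAX
    return max(np.linalg.norm(Xc[(f, g)]) / min(norms[f], norms[g])
               for (f, g) in Xc)

# For a perfect cube all cross-derivatives vanish, so q = 0.
axes = {1: np.array([1.0, 0.0, 0.0]),
        2: np.array([0.0, 1.0, 0.0]),
        3: np.array([0.0, 0.0, 1.0])}
cross = {(1, 2): np.zeros(3), (1, 3): np.zeros(3), (2, 3): np.zeros(3)}
q = hex_taper(axes, cross)   # → 0.0, matching the table's cube value
```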
% !TEX root = ../../main.tex
\subsection{Fokker-Planck equation in population genetics}
In the last section, after performing a Taylor expansion of the master equation, we ended up with an equally complicated partial differential equation of infinite order. So in order to make any progress towards simplifying the treatment of the equation, we will trade accuracy for simplicity. More concretely, if our assumption that the transition probability $\phi_t(x; r)$ is sharply peaked is to be taken seriously, we can truncate the Kramers-Moyal expansion to include only up to second order derivatives. This truncated equation locally approximates the time evolution of the probability distribution $P(x, t)$. \eref{eq_km_expansion} then takes the form
\begin{equation}
\ddt{P(x, t)} = - {\partial \over \partial x} \left[ a^{(1)}(x, t) P(x, t) \right] + {1 \over 2}{\partial^2 \over \partial x^2} \left[ a^{(2)}(x, t) P(x, t) \right].
\label{eq_fokker_planck}
\end{equation}
In the physics literature this is known as the Fokker-Planck equation, while in the mathematics literature it is known as the Kolmogorov forward equation. One intriguing aspect of how we arrive at this equation is the seemingly arbitrary choice of truncating at the second moment. The argument that is often thrown around is that the ``art'' of these truncations is to stop at the first non-vanishing moment. But for the specific case of the Kramers-Moyal expansion of the master equation there is a theorem - the Pawula theorem - showing that for the solutions of the Kramers-Moyal expansion to be interpreted as probability densities, the expansion must contain either one, two, or an infinite number of moments. So two sounds much better than infinite, doesn't it?
\mrm{Need to include appendix with the proof of Pawula theorem.}

\subsubsection{Determination of the Fokker-Planck coefficients}
The two jump moments in \eref{eq_fokker_planck}, $a^{(1)}$ and $a^{(2)}$, defined by \eref{eq_jump_mom}, have a specific interpretation in population genetics. Here is where a bit of a terminology conflict between physics and evolutionary biology comes into play. The directional term, i.e. the one with the first order derivative in \eref{eq_fokker_planck}, is known in the physics literature as the drift term of a diffusion-like equation. This is confusing because in evolutionary theory the diffusive term, i.e. the one with the second derivative in \eref{eq_fokker_planck}, is known as the genetic drift term. I will try to be as consistent as possible in these notes, using the terms directional and diffusive to avoid confusion.

Coming back to the directional term, we define $M(x, t)$ to be
\begin{equation}
M(x, t) \equiv \ee{r(t)} = \int_{-\infty}^{\infty} dr \; \phi_t(x; r) r,
\end{equation}
i.e. the mean of the jump distribution. As we will see in coming sections, this term captures the effect of directional evolutionary forces such as selection, mutation and migration. For the diffusive term we define $V(x, t)$ as the second moment of the jump distribution
\begin{equation}
V(x, t) \equiv \ee{r^2(t)} = \int_{-\infty}^{\infty} dr \; \phi_t(x; r) r^2.
\end{equation}
This term captures the random sampling of alleles, also known as genetic drift. In all of the population genetics literature I have encountered so far, this term $V(x, t)$ is treated as the \textbf{variance} rather than the second moment. This is partly because computing the specific functional form of the variance for different models of reproduction is much simpler. The variance $\sigma_r^2(t)$ is defined as
\begin{equation}
\sigma_r^2(t) = \ee{\left( r - \ee{r} \right)^2} = \ee{r^2} - \ee{r}^2.
\end{equation}
We can therefore work with this more convenient quantity if we assume that $\ee{r}^2 \approx 0$. This is a reasonable assumption given that, for the Fokker-Planck equation to be accurate, we assumed a tight distribution for the jump size $\phi_t(x; r)$. The peaked nature of this distribution must imply that $\ee{r} \ll 1$; but more importantly, we will assume that $\ee{r}^2 \ll \ee{r^2}$. Upon using these two definitions we arrive at one of the main results in population genetics, the Kimura diffusion equation
\begin{equation}
\ddt{P(x, t)} = - {\partial \over \partial x} \left[ M(x, t) P(x, t) \right] + {1 \over 2}{\partial^2 \over \partial x^2} \left[ V(x, t) P(x, t) \right].
\label{eq_kimura_diffusion}
\end{equation}
The power of diffusion theory is that in these two terms $M(x, t)$ and $V(x, t)$ we can include all evolutionary forces acting simultaneously. The functional forms of these specific terms depend on the reproduction model used. We will explore that more specifically later on.

\subsubsection{Equilibrium distribution}
In the limit $t \rightarrow \infty$ we expect the distribution of allele frequencies to reach a steady state $P_{ss}(x)$. For the 1D case we have studied so far, i.e. two alleles with frequencies $x$ and $1 - x$, this steady state is equivalent to an equilibrium distribution, since detailed balance has to be satisfied. To emphasize this point let us rewrite \eref{eq_kimura_diffusion} as a statement of conservation of probability. This is
\begin{equation}
\ddt{P(x, t)} = - {\partial J(x, t) \over \partial x},
\end{equation}
where $J(x, t)$ is the probability flux at point $x$. If we set the time derivative to zero there are only two options (in reality there is only one option for 1D systems):
\begin{enumerate}
\item ${\partial J \over \partial x} = 0; \; J \neq 0 \Rightarrow$ Steady state on a rotating or non-conservative field.
\item ${\partial J \over \partial x} = 0; \; J = 0 \Rightarrow$ Equilibrium distribution that satisfies detailed balance.
\end{enumerate}
For our one-locus two-alleles case the second of these cases must be true. This implies that at steady state the flux $J_{ss}(x)$ takes the form
\begin{equation}
J_{ss}(x) = - M(x) P_{eq}(x) + {\partial \over \partial x} \left[ V(x) P_{eq}(x) \right] = 0,
\label{eq_flux_eq}
\end{equation}
where we use $P_{eq}(x)$ to denote that this is not only a steady-state distribution, but an equilibrium distribution satisfying detailed balance. \eref{eq_flux_eq} is a first order homogeneous ordinary differential equation. We can solve it using the integrating factor method. For this we define $G(x) \equiv V(x)P_{eq}(x)$. Substituting this into \eref{eq_flux_eq} gives
\begin{equation}
{- M(x) \over V(x)} G(x) + {\partial \over \partial x} G(x) = 0.
\label{eq_ode_ss}
\end{equation}
In this form we define the integrating factor to be
\begin{equation}
h(x) = \exp \left( \int_0^x -{M(x') \over V(x')} \; dx' \right).
\end{equation}
Notice that we chose the limits of integration to be $[0, x]$. This is because the fundamental theorem of calculus states that for any function $f(x)$ defined on $[a, b]$, an antiderivative $F(x)$ is given by
\begin{equation}
F(x) = \int_a^x f(x') \; dx',
\end{equation}
for any choice of the lower limit of integration (different choices change $F$ only by an additive constant). Multiplying both sides of \eref{eq_ode_ss} by the integrating factor $h(x)$ results in
\begin{equation}
{- M(x) \over V(x)} G(x) \exp \left( \int_0^x -{M(x') \over V(x')} \; dx' \right) + \left[ {\partial \over \partial x} G(x) \right] \exp \left( \int_0^x -{M(x') \over V(x')} \; dx' \right) = 0.
\label{eq_ode_int_fact}
\end{equation}
The specific form of the integrating factor was chosen such that we can rewrite \eref{eq_ode_int_fact} as
\begin{equation}
{d \over dx} \left[ \exp \left( - \int_0^x {M(x') \over V(x')} \; dx' \right) G(x) \right] = 0.
\end{equation}
Written in this form we can simply integrate both sides with respect to $x$:
\begin{equation}
\int_0^x {d \over dx''}\left[ \exp \left( - \int_0^{x''} {M(x') \over V(x')} \; dx' \right) G(x'') \right] dx'' = \int_0^x 0 \; dx''.
\end{equation}
Evaluating these integrals results in
\begin{equation}
\exp \left( - \int_0^{x} {M(x') \over V(x')} \; dx' \right) G(x) = C,
\end{equation}
where $C$ is an integration constant. Notice that for a specific interval $[a, b] \subset \mathbb{R}$ the definite integral of zero is
\begin{equation}
\int_a^b 0 \; dt = 0,
\end{equation}
but when we set the upper integration limit as an independent variable, what we are asking for is the antiderivative of zero, which is a constant $C$. Substituting the definition of $G(x) = V(x) P_{eq}(x)$ gives
\begin{equation}
\exp \left( - \int_0^x dx' \; {M(x') \over V(x')}\right) V(x) P_{eq}(x) = C.
\end{equation}
We can then solve for the equilibrium allele distribution $P_{eq}(x)$, obtaining the result we were aiming for:
\begin{equation}
P_{eq}(x) = {C \over V(x)} \exp \left( \int_0^x dx' \; {M(x') \over V(x')} \right).
\end{equation}
This is a Boltzmann-like distribution! The analogy with statistical mechanics becomes even clearer when we substitute specific functional forms for the directional term $M(x)$ and the diffusive term $V(x)$. In the next section we will explore how to obtain the coefficients for our Fokker-Planck equation given the Langevin dynamics that we defined in \secref{sec_langevin_intro}.
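This final result can be sanity-checked numerically: building $P_{eq}$ from the formula, the steady-state flux $J = -M P_{eq} + \partial_x [V P_{eq}]$ should vanish. The functional forms of $M$ and $V$ below are arbitrary smooth choices for illustration, not a specific population-genetics model.

```python
# Numerical check that P_eq(x) ∝ (1/V(x)) exp( ∫ M(x')/V(x') dx' )
# makes the flux J = -M P_eq + d/dx [ V P_eq ] vanish.
import numpy as np

x = np.linspace(0.05, 0.95, 2001)
M = 0.1 * (0.5 - x)          # directional term pushing toward x = 1/2
V = 0.02 * x * (1.0 - x)     # diffusive (genetic-drift-like) term

# Cumulative trapezoidal integral of M/V from x = 0.05; the choice of
# lower limit only rescales the constant C.
integrand = M / V
cumint = np.concatenate(([0.0], np.cumsum(
    0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))

P_eq = np.exp(cumint) / V
P_eq /= np.sum(0.5 * (P_eq[1:] + P_eq[:-1]) * np.diff(x))  # normalize

flux = -M * P_eq + np.gradient(V * P_eq, x)
# Away from the boundaries the flux is numerically ~0.
```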
\chapter{Distribute Clients and Servers}
\label{chap:distributed}
httest 2.2.12 and higher can distribute clients and servers to remote hosts. Distributed clients are typically used for performance and load tests, while distributed servers are typically used for sophisticated integration tests.

\section{Distribute Clients}
\label{chap:distributeClients}
If you want to run load tests and your machine is not fast enough, you can increase the load with additional machines. Until now you had to do this manually, by copying the test script to different machines and running it separately on each. Now it is possible to do this in one single script. With the command
\begin{usplisting}
PERF:DISTRIBUTED <host>:<port>
\end{usplisting}
you can add additional hosts to which your clients will be distributed. The local host is automatically included. The clients are distributed round-robin, starting with your local host. If a remote host is not accessible, it is skipped. You then need clients to distribute, normally defined this way:
\begin{usplisting}
CLIENT <n>
<body>
END
\end{usplisting}
See also the global command section. Of course you can have many different clients as well. With htremote, a remote acceptor for the serialized httest clients must be started. This could even be done in your httest script. At the moment htremote does not have a daemon mode, but one is coming soon.
\begin{usplisting}
htremote -p <port> -e "httest -Ss"
\end{usplisting}
The httest option -S starts httest in shell mode, so that the script can be fed over standard input. The second option, -s, makes httest silent; for debugging purposes you can omit it.

\section{Distribute Servers}
\label{chap:distributeServers}
If your integration test case needs a mock on a remote host, you will use this feature. You just have to define your server this way:
\begin{usplisting}
SERVER [SSL:]<port> [<n>] -> <remote-host>:<remote-port>
<body>
END
\end{usplisting}
See also the global command section.
This server will now be serialized to remote-host:remote-port. With htremote, a remote acceptor for the serialized httest servers must be started. This could even be done in your httest script. At the moment htremote does not have a daemon mode, but one is coming soon.
\begin{usplisting}
htremote -p <port> -e "httest -S"
\end{usplisting}
The httest option -S starts httest in shell mode, so that the script can be fed over standard input.
\newpage
\section{Deep Boltzmann Machines \cite{salakhutdinov2013learning, salakhutdinov2009deep, goodfellow2016deep}}
\subsection{More efficient learning algorithm for general binary-binary BMs}
\subsubsection{PCD-k}
To recap:
\bg
\frac{1}{N}\sum_{n=1}^N\nabla_{\mb{W}}\log p(\mb{x}_n;\bs{\psi}) = \E_{\mb{v},\mb{h}\sim P_{\text{data}}(\mb{v},\mb{h};\bs{\psi})}\l[\mb{v}\mb{h}^T\r]-\E_{\mb{v},\mb{h}\sim P_{\text{model}}(\mb{v},\mb{h};\bs{\psi})}\l[\mb{v}\mb{h}^T\r]
\eg
and similar formulae for the log-likelihood gradients w.r.t. $\mb{L},\mb{J},\mb{b},\mb{c}$, see (17).
\\[0.5em]
Now, instead of CD-k we use PCD-k (i.e. we keep one Markov chain without restarting its state between the updates), which falls into the class of so-called \emph{stochastic approximation procedures} (SAP), to estimate the \tb{model's} expectations.
\\[1em]
Let $\bs{\psi}^t$ and $\mb{x}^t$ be the current model parameters and the current state. Then $\mb{x}^t$ and $\bs{\psi}^t$ are updated sequentially as follows:
\begin{itemize}
\item given $\mb{x}^t$, a new state $\mb{x}^{t+1}$ is sampled from a transition operator $T_{\bs{\psi}^t}(\mb{x}^{t+1}\leftarrow \mb{x}^t)$ that leaves $p(\cdot,\cdot;\bs{\psi}^t)$ invariant (which in our case means performing Gibbs sampling using equations (11) and (12) for $k$ steps);
\item a new parameter $\bs{\psi}^{t+1}$ is then obtained by replacing the intractable model expectation by the point estimate at $\mb{x}^{t+1}$ (see also subsubsection 2.2.2 for whether to sample or use probabilities/means instead of sampling, and for which type of states).
\end{itemize}
In practice, we typically maintain a set of $P$ ``persistent'' sample ``particles'' $X^t=\{\mb{x}_1^t\ldots \mb{x}_{P}^t\}$ and use the average over those particles.
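A minimal sketch of one SAP update, written for a binary RBM rather than the general BM (the $\mb{L}$ and $\mb{J}$ terms follow the same pattern); all sizes and names are illustrative. The only difference from CD-k is that the chain state \texttt{V\_pers} persists across parameter updates instead of being re-initialized at the data.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

D, Hn, P = 6, 4, 8           # visible units, hidden units, persistent particles
W = 0.01 * rng.standard_normal((D, Hn))
b, c = np.zeros(D), np.zeros(Hn)
V_pers = rng.integers(0, 2, size=(P, D)).astype(float)  # persistent chain state

def gibbs_step(V):
    """One block-Gibbs sweep: sample h | v, then v | h."""
    H_prob = sigmoid(V @ W + c)
    Hs = (rng.random(H_prob.shape) < H_prob).astype(float)
    V_prob = sigmoid(Hs @ W.T + b)
    return (rng.random(V_prob.shape) < V_prob).astype(float)

def pcd_gradient(V_data, k=1):
    """Data term minus model term; the model term comes from the persistent chain."""
    global V_pers
    for _ in range(k):                      # k Gibbs sweeps on the chain
        V_pers = gibbs_step(V_pers)
    H_data = sigmoid(V_data @ W + c)        # data-dependent statistics
    H_model = sigmoid(V_pers @ W + c)
    return V_data.T @ H_data / len(V_data) - V_pers.T @ H_model / P

V_data = rng.integers(0, 2, size=(16, D)).astype(float)
W += 0.05 * pcd_gradient(V_data, k=1)       # one SAP parameter update
```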
\\[1em]
The intuition behind why this procedure works is the following: as the learning rate becomes sufficiently small compared with the mixing rate of the Markov chain, this ``persistent'' chain will always stay very close to the stationary distribution, even if it is only run for a few MCMC updates per parameter update.
\\[1em]
Provided that $\|\bs{\psi}^t\|$ is bounded, the Markov chain governed by the transition kernel $T_{\bs{\psi}^t}$ is ergodic (which is typically true in practice), and the sequence of learning rates $\alpha_t$ satisfies $\sum_{t}\alpha_t=\infty$, $\sum_{t}\alpha_t^2<\infty$, this stochastic approximation procedure converges almost surely to an asymptotically stable point.
\\
Note that in practice, $\alpha_t$ is not decreased to zero, but rather to some small positive constant $\varepsilon$ (e.g. $10^{-6}, 10^{-5}$).
\subsubsection{Variational learning}
Another approach is used to approximate the \tb{data}-dependent expectations. We approximate the true posterior over latent variables $p(\mb{h}|\mb{v};\bs{\psi})$ (which is intractable in a general BM, tractable in an RBM, but will again be intractable in a DBM) by an approximate posterior $q(\mb{h};\bs{\mu})$, and the variational parameters $\bs{\mu}$ are updated to follow the gradient of a \emph{lower bound on the log-likelihood}:
\\[1em]
In general:
\begin{empheq}[box={\mybox[1em][1em]}]{gather*}
\log p(\mb{v};\bs{\psi})=\int q(\mb{h};\bs{\mu})\log p(\mb{v};\bs{\psi})\mathrm{d}\mb{h}
=\int q(\mb{h};\bs{\mu})\log \frac{p(\mb{v},\mb{h};\bs{\psi})}{p(\mb{h}|\mb{v};\bs{\psi})}\mathrm{d}\mb{h}=\\
=\int q(\mb{h};\bs{\mu})\log \l(\frac{p(\mb{v},\mb{h};\bs{\psi})}{q(\mb{h};\bs{\mu})}\cdot \frac{q(\mb{h};\bs{\mu})}{p(\mb{h}|\mb{v};\bs{\psi})}\r)\mathrm{d}\mb{h}=\\
=\int q(\mb{h};\bs{\mu})\log p(\mb{v},\mb{h};\bs{\psi})\mathrm{d}\mb{h}
\underbrace{-\int q(\mb{h};\bs{\mu})\log q(\mb{h};\bs{\mu})\mathrm{d}\mb{h}}_{\mc{H}(q)}
+\underbrace{\int q(\mb{h};\bs{\mu})\log
\frac{q(\mb{h};\bs{\mu})}{p(\mb{h}|\mb{v};\bs{\psi})}\mathrm{d}\mb{h}}_{D_{\text{KL}}(q(\mb{h};\bs{\mu}) \;\|\; p(\mb{h}|\mb{v};\bs{\psi}))\geq 0}\geq\\
\geq \int q(\mb{h};\bs{\mu})\log p(\mb{v},\mb{h};\bs{\psi})\mathrm{d}\mb{h} + \mc{H}(q) =: \mc{L}_{\text{ELBO}}(\bs{\mu}; \bs{\psi})
\end{empheq}
For a Boltzmann Machine:
\begin{empheq}[box={\mybox[1em][1em]}]{gather*}
\text{\textbullet{} }[1]=\sum_{\mb{h}} q(\mb{h};\bs{\mu})\l[-E(\mb{v},\mb{h};\bs{\psi})-\log Z(\bs{\psi}) \r]= -\underbrace{\log Z(\bs{\psi})}_{=\text{ const w.r.t. }\bs{\mu}}\cdot\underbrace{\sum_{\mb{h}} q(\mb{h};\bs{\mu})}_{=1}+\\
+\sum_{\mb{h}} q(\mb{h};\bs{\mu})\l[\sum_{j<k}L_{jk}v_jv_k +\sum_{l<m}J_{lm}h_lh_m +\sum_{j,l}W_{jl}v_jh_l+\sum_jb_jv_j+\sum_lc_lh_l \r]=\\
=\sum_{l<m}J_{lm}\E_{\mb{h}\sim q(\mb{h};\bs{\mu})}[h_lh_m]+\sum_{j,l}W_{jl}v_j\E_{\mb{h}\sim q(\mb{h};\bs{\mu})}[h_l]+\sum_lc_l\E_{\mb{h}\sim q(\mb{h};\bs{\mu})}[h_l]+\text{const}
\end{empheq}
For a Boltzmann Machine and a fully-factorizable $q(\mb{h};\bs{\mu})=\prod_l q(h_l;\mu_l), q(h_l=1;\mu_l)=\mu_l$ (\emph{mean-field} approach):
\begin{empheq}[box={\mybox[1em][1em]}]{gather*}
\text{\textbullet{} }[1]=\sum_{l<m}J_{lm}\mu_l\mu_m+\sum_{j,l}W_{jl}v_j\mu_l+\sum_lc_l\mu_l+\text{const} \\
\text{\textbullet{} }[2]=\mc{H}(q)=-\sum_{\mb{h}}q(\mb{h};\bs{\mu})\log q(\mb{h};\bs{\mu})=-\sum_{\mb{h}}q(\mb{h};\bs{\mu})\sum_{j}\log q(h_j;\mu_j)=\\
=-\sum_j \sum_{h_j\in\{0,1\}}q(h_j;\mu_j)\log q(h_j;\mu_j) \underbrace{\sum_{\mb{h}_{-j}}q(\mb{h}_{-j};\bs{\mu}_{-j})}_{=1}=-\sum_j \l[\mu_j\log \mu_j+(1-\mu_j)\log(1-\mu_j)\r]
\end{empheq}
\bg
\boxed{\mc{L}_{\text{ELBO}}(\bs{\mu}; \bs{\psi})=\sum_{l<m}J_{lm}\mu_l\mu_m+\sum_{j,l}W_{jl}v_j\mu_l+\sum_lc_l\mu_l-\sum_j \l[\mu_j\log \mu_j+(1-\mu_j)\log(1-\mu_j)\r]+\text{C}}
\eg
Let us maximize (55) over $\bs{\mu}$ for fixed $\bs{\psi}$:
\begin{empheq}[box={\mybox[1em][1em]}]{gather*}
0 \doteq
\frac{\partial}{\partial\mu_{i}}\mc{L}=\underbrace{\comment{$l=i$}\sum_{m>i}J_{im}\mu_m+\comment{$m=i$}\sum_{l<i}J_{li}\mu_l}_{=\comment{$J_{ij}=J_{ji},J_{ii}=0$}\sum_l J_{il}\mu_l}+\sum_{j}W_{ji}v_j+c_i -\log \mu_i-1+\log(1-\mu_i)+1 \\
\Leftrightarrow \\
\text{sigm}^{-1}(\mu_i)=\log\frac{\mu_i}{1-\mu_i}=\sum_{l}J_{il}\mu_l+\sum_{j}W_{ji}v_j+c_i
\end{empheq}
\bg
\boxed{\mu_i\leftarrow \text{sigm}\l(\sum_{l}J_{il}\mu_l+\sum_jW_{ji}v_j+c_i\r)}
\eg
Note that this is exactly formula (12) for computing $p(h_i=1|\mb{v},\mb{h}_{-i})$ in a BM! So updates of the variational parameters can be computed using the Gibbs sampler. This is not a coincidence; the same holds if we replace the types of units, or use an RBM or even a DBM (see below)!
\\[1em]
Note that this variational approach cannot be used to approximate model expectations because of the minus sign in formulae (54),(17). This would cause variational learning to change the parameters so as to \emph{maximize} $D_{\text{KL}}(q(\mb{h};\bs{\mu}) \;\|\; p(\mb{h}|\mb{v};\bs{\psi}))$.
\\[0.5em]
The naive mean-field approach was chosen because:
\begin{itemize}
\gooditem its convergence is usually fast;
\gooditem it is unimodal.
\end{itemize}
Note that in general, we don't have to provide a parametric form of the approximating distribution beyond enforcing the independence assumptions. The variational approximation procedure is generally able to recover the functional form of the approximate distribution \cite{goodfellow2016deep}.
\subsection{Deep Boltzmann Machine}
Again assume, unless specifically mentioned otherwise, that the DBM contains only binary units.
\subsubsection{High-level overview}
\textbullet{} DBM is a deep generative model that consists of a layer of visible units and a series of layers of hidden units.
\begin{figure}[h] \begin{mdframed} \includegraphics[scale=0.4]{img/dbn_dbm.png} \centering \caption{A three-layer Deep Belief Network and a three-layer Deep Boltzmann Machine.} \label{fig:dbn_dbm} \end{mdframed} \end{figure} In comparison to another deep generative model, the DBN (which is a hybrid model with several directed layers and one undirected layer), the DBM is an entirely undirected model, see Fig. \ref{fig:dbn_dbm}. A DBN is trained greedily, layer by layer, as a stack of corresponding RBMs (one bottom-up pass). In contrast, all parameters of a DBM are learned \tb{jointly}, which greatly facilitates learning better generative models. Even though both models have the potential to learn a series of internal representations that become increasingly complex, the DBM's approximate bottom-up and top-down inference better propagates uncertainty $\Rightarrow$ deals with ambiguous inputs more robustly than a DBN. \\ \textbullet{} Formally, suppose the number of (hidden) layers is $L=3$. $$ \mb{v}\in\R^D, \mb{h}=\{\mb{h^{(1)}}, \mb{h^{(2)}}, \mb{h^{(3)}}\}, \mb{h^{(s)}}\in\R^{H_s}, s\in\{1,2,3\}; $$ Energy function: \bg E(\mb{v},\mb{h};\bs{\psi})=-\mb{v}^T\mb{W^{(1)}}\mb{h^{(1)}}-\mb{h^{(1)}}^T\mb{W^{(2)}}\mb{h^{(2)}}-\mb{h^{(2)}}^T\mb{W^{(3)}}\mb{h^{(3)}}-\mb{b}\cdot\mb{v}-\mb{c^{(1)}}\cdot\mb{h^{(1)}}-\mb{c^{(2)}}\cdot\mb{h^{(2)}}-\mb{c^{(3)}}\cdot\mb{h^{(3)}}, \eg where $\bs{\psi}=\{\mb{W^{(1)}}, \mb{W^{(2)}}, \mb{W^{(3)}}, \mb{b}, \mb{c^{(1)}}, \mb{c^{(2)}}, \mb{c^{(3)}}\}$. The probability that the model assigns to a configuration $(\mb{v},\mb{h})$: \bg p(\mb{v},\mb{h};\bs{\psi})\;\propto\;\exp(-E(\mb{v},\mb{h};\bs{\psi})) \eg \\ \textbullet{} Now observe that connections between units in the DBM are restricted in such a way that a unit in a layer depends only on units in the \emph{neighboring} layers, and does not depend on other units in the same layer or in layers beyond.
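As a sanity check, the energy function just defined is easy to evaluate numerically. The NumPy sketch below (sizes and values are hypothetical) works for any number of hidden layers and makes explicit that each interaction term couples only neighboring layers:

```python
import numpy as np

def dbm_energy(v, hs, Ws, b, cs):
    """E(v, h) = -v^T W1 h1 - h1^T W2 h2 - ... - b.v - sum_s c_s . h_s
    for a DBM with hidden layers hs = [h1, h2, ...], weights Ws = [W1, W2, ...]
    (Ws[k] connects layer k to layer k+1) and biases b, cs = [c1, c2, ...]."""
    layers = [v] + hs                      # layer 0 is the visible layer
    e = -b @ v - sum(c @ h for c, h in zip(cs, hs))
    for k, Wk in enumerate(Ws):            # couplings between neighbors only
        e -= layers[k] @ Wk @ layers[k + 1]
    return e
```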
This restricted connectivity is a multi-layer generalization of the RBM, and it allows the probability of a unit being on given the others to be computed efficiently. For instance, \bg p(h^{(1)}_j=1|\mb{v},\mb{h^{(2)}})=\text{sigm}\l(\sum_iW^{(1)}_{ij}v_i+\sum_lW^{(2)}_{jl}h_l^{(2)}+c_j^{(1)}\r) \eg Observe how this formula resembles formulae (21),(22). It also generalizes easily to other layers and other types of layers: \\[1em] \noindent\fbox{% \parbox{\textwidth}{% To compute the probability of a unit being on given all the others, \tb{add} the linear combinations of the states of the units from the neighboring layers + bias and apply the \tb{activation function of the unit} (e.g. sigmoid for binary, softmax for softmax/multinomial, affine for gaussian etc.). }% } \\[1em] Note, however, that the distribution over \tb{all} hidden layers generally does not factorize because of interactions between layers. For instance, for $L=2$, $p(\mb{h}^{(1)},\mb{h}^{(2)}|\mb{v};\bs{\psi})$ does not factorize due to the interaction weights $\mb{W}^{(2)}$ between $\mb{h}^{(1)}$ and $\mb{h}^{(2)}$, which render those variables mutually dependent. \\ \textbullet{} Formulae for the log-likelihood gradients are derived in the same way as for the RBM and have a similar form.
For instance: \bg \frac{1}{N}\sum_{n=1}^N\nabla_{\mb{W^{(2)}}}\log p(\mb{x}_n;\bs{\psi}) = \E_{\mb{h^{(1)}},\mb{h^{(2)}}\sim P_{\text{data}}(\mb{h^{(1)}},\mb{h^{(2)}};\bs{\psi})}\l[\mb{h^{(1)}}\mb{h^{(2)}}^T\r]-\E_{\mb{h^{(1)}},\mb{h^{(2)}}\sim P_{\text{model}}(\mb{h^{(1)}},\mb{h^{(2)}};\bs{\psi})}\l[\mb{h^{(1)}}\mb{h^{(2)}}^T\r] \eg \textbullet{} Finally, we apply the new learning algorithms for BMs described in the previous subsection with the fully factorizable mean-field approach: \bg q(\mb{h};\bs{\mu})=\prod_j q(h_j^{(1)};\mu_j^{(1)}) \cdot \prod_l q(h_l^{(2)};\mu_l^{(2)}) \cdot \prod_m q(h_m^{(3)};\mu_m^{(3)}) \eg The lack of intra-layer interactions makes it possible to use fixed-point equations (just like in the general BM algorithm) to actually optimize the variational lower bound and find the true optimal mean-field expectations. \\ \textbullet{} Further on we will use a DBM with Gaussian visible units, multinomial hidden units in the top-most layer, and Bernoulli hidden units in the intermediate layers. In this setting the learning algorithm again remains the same; the difference is only in the way probabilities are computed and samples are drawn. \\ \textbullet{} One unfortunate property of DBMs is that sampling from them is relatively difficult. DBNs only need to use MCMC sampling in their top pair of layers. The other layers are used only at the end of the sampling process, in one efficient ancestral sampling pass. To generate a sample from a DBM, it is necessary to use MCMC across all layers, with every layer of the model participating in every Markov chain transition. \subsubsection{Gibbs sampling in DBMs} \textbullet{} Similar to the RBM, Gibbs sampling using equations (59) can be done in parallel, thus allowing block Gibbs sampling for each layer of units. In addition to that, as illustrated in Fig. \ref{fig:dbm_gibbs}, the DBM layers can be organized into a bipartite graph, with odd layers on one side and even layers on the other.
This immediately implies that when we condition on the variables in the even layers, the variables in the odd layers become conditionally independent. In conjunction with block Gibbs sampling for each layer, this allows Gibbs sampling over the whole DBM to be performed in \tb{only 2 iterations}, instead of $L + 1$ as one might naively think at first. \\ \textbullet{} The good news is that in TF no additional work needs to be done beyond implementing block Gibbs sampling for each layer. Each independent branch in the computational graph should be executed in parallel. \begin{figure}[h] \begin{mdframed} \includegraphics[scale=0.4]{img/dbm_gibbs.png} \centering \caption{A deep Boltzmann machine, re-arranged to reveal its bipartite graph structure.} \label{fig:dbm_gibbs} \end{mdframed} \end{figure} \textbullet{} Note that the Contrastive Divergence algorithm is slow for DBMs because they do not allow efficient sampling of the hidden states given the visible units -- instead, CD would require burning in a Markov chain every time a new negative phase sample is needed. \subsubsection{Greedy layerwise pretraining of DBMs} A DBM can be trained using the aforementioned learning algorithm from random initialization (typically the results are quite bad even on MNIST, see \cite{goodfellow2012joint, goodfellow2016deep}), but it works much better if the weights are initialized sensibly. Greedy layerwise pretraining = a learning procedure that consists of learning a stack of RBMs one layer at a time. After the stack is learned, the whole stack can be viewed as a single probabilistic model, called a Deep Belief Net. Thus, pre-training for a DBN is straightforward. In the case of a DBM, though, a layer in the middle of the stack of RBMs is trained with only bottom-up input, but after the stack is combined to form the DBM, the layer will have both bottom-up and top-down input. To account for this so-called \emph{evidence double counting problem} \cite{salakhutdinov2009deep, goodfellow2016deep}, Fig.
\ref{fig:dbm_pretraining}, two modifications are required: \begin{figure}[h] \begin{mdframed} \includegraphics[width=1.5in]{img/dbm_pretraining2.png} \quad \quad \includegraphics[width=3.5in]{img/dbm_pretraining.png} \centering \caption{Pre-training consists of learning a stack of modified RBMs that are then composed to create a DBM.} \label{fig:dbm_pretraining} \end{mdframed} \end{figure} \begin{itemize} \item the bottom RBM should be trained using two ``copies'' of each visible unit, with the weights tied to be equal between these two copies ($\cong$ simply double the total input to the hidden layer during the upward pass); similarly, the top RBM should be trained with two copies of the topmost layer. Training of the intermediate RBMs, if there are any, is not modified. \item the weights of all intermediate RBMs, though, should be divided by 2 before being inserted into the DBM \end{itemize} \begin{figure}[h] \begin{mdframed} \includegraphics[scale=0.08]{dbm/dbm_init.jpg} \centering \caption{A more detailed scheme of how to initialize a 3-layer DBM from a learned stack of RBMs, including biases. Black circles -- visible units, blue -- hidden units, red circle -- visible bias, black square -- hidden bias. Biases can be summed or averaged.} \label{fig:dbm_init} \end{mdframed} \end{figure} \subsubsection{Joint training of DBMs} Classic DBMs require greedy unsupervised pretraining and, to perform classification well, require a separate MLP-based classifier on top of the hidden features they extract. It is hard to track performance during training because we cannot evaluate properties of the full DBM while training the first RBM. Software implementations of DBMs need to have many different components for CD training of the individual RBMs, PCD training of the full DBM, and training based on back-propagation through the MLP.
Finally, the MLP on top of the Boltzmann machine loses many of the advantages of the Boltzmann machine probabilistic model, such as being able to perform inference when some input values are missing. \\ There are two main ways to resolve the joint training problem of the deep Boltzmann machine: \tb{multi-prediction DBMs} \cite{goodfellow2013multi}, which is currently beyond the scope of this project, and the \tb{centering trick} \cite{montavon2012deep}, which reparametrizes the model in order to make the Hessian of the cost function better-conditioned at the beginning of the learning process. More specifically, consider the energy function of a generalized Boltzmann Machine (BM, RBM, DBM can all be represented by an appropriate choice of $\mb{x}$ -- states, $\mb{U}$ -- weights, $\mb{a}$ -- biases): \bg E(\mb{x};\bs{\psi})=-\mb{x}^T \mb{U}\mb{x}-\mb{a}\cdot\mb{x} \eg The idea of the centering trick is then simply to reparameterize this energy function as \bg E(\mb{x};\bs{\psi})=-(\mb{x}-\bs{\beta})^T \mb{U}(\mb{x}-\bs{\beta})-\mb{a}\cdot(\mb{x}-\bs{\beta}) \eg where the new hyperparameter vector $\bs{\beta}$ is chosen so that $\mb{x}-\bs{\beta}\approx \mb{0}$ at the beginning of training. This does not change the set of probability distributions that the model can represent, but it changes the learning dynamics enough that it is actually possible to train a DBM from random initialization w/o pre-training and achieve sensible results. However, in \cite{goodfellow2013multi} it is noted that DBMs trained using the centering trick have never been shown to achieve good classification performance, if that is the primary goal. \subsubsection{Annealed importance sampling \cite{salakhutdinov2008, salakhutdinov2009deep, hinton2012better, upadhya2015empirical}} Let $p_A(\mb{x})=\frac{p^*_A(\mb{x})}{\mc{Z}_A}$ be a simple proposal distribution from which we can sample easily, and $p_{B}(\mb{x})=\frac{p^*_B(\mb{x})}{\mc{Z}_B}$ our complex target distribution.
We also have to make sure $p_B \ll p_A$, which is easy in our case, since we can always choose $p_A$ to be the uniform pmf, which dominates every other probability mass function on discrete units (of finite cardinality). \\[0.5em] \u{(Classical) Importance Sampling} \\ The ratio of partition functions can be estimated as follows: \bg \frac{\mc{Z}_B}{\mc{Z}_A}=\frac{\sum_{\mb{x}}p^*_B(\mb{x})}{\mc{Z}_A}=\sum_{\mb{x}}\frac{p^*_B(\mb{x})}{p^*_A(\mb{x})} p_A(\mb{x})=\E_{\mb{x}\sim p_A}\l[\frac{p^*_B(\mb{x})}{p^*_A(\mb{x})} \r] \approx \frac{1}{N}\sum_{i=1}^N \frac{p^*_B(\mb{x}_i)}{p^*_A(\mb{x}_i)} \eg The problem is that when $p_A$ and $p_B$ are very different, as in our case, this estimator is very poor: its variance is very large, possibly infinite. \\[0.5em] \u{Annealed Importance Sampling} \\ To handle this issue, we define a sequence of probability mass functions $\l(p_m\r)_{m=0:M}$ such that $p_0 = p_A$ and $p_M = p_B$, and for which we know the unnormalized probabilities $p_m^*$, which are typically geometric averages of the target and the proposal: \bg p^*_m(\mb{x})=p^*_B(\mb{x})^{\beta_m}\cdot p^*_A(\mb{x})^{1-\beta_m}, \quad \beta_m=\frac{m}{M} \eg Also, since we cannot sample from the intermediate $p_m$ directly, we need a sequence of transition operators $\l(T_m(\mb{x}_{m + 1} \leftarrow \mb{x}_m)\r)_{m=1:M-1}$, each of which leaves the corresponding $p_m$ invariant. The importance weight can then be computed as \bg \omega_{\text{AIS}} \leftarrow \prod_{m=1}^M \frac{p_m^*(\mb{x}_m)}{p_{m-1}^*(\mb{x}_m)} \eg where $\mb{x}_1 \sim p_0=p_A;\; \mb{x}_2 \sim T_1(\mb{x}_2 \leftarrow \mb{x}_1);\; \ldots \; \mb{x}_M \sim T_{M-1}(\mb{x}_M \leftarrow \mb{x}_{M-1})$. The ratio of partition functions can then be estimated as an average over many AIS runs: \bg \frac{\mc{Z}_B}{\mc{Z}_A}=\frac{\mc{Z}_M}{\mc{Z}_0}\approx \frac{1}{L}\sum_{l=1}^L \omega_{\text{AIS}}^{(l)} \eg Notice also that we don't need to compute the partition functions of any of the intermediate distributions.
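For a toy discrete example, the whole AIS estimator fits in a few lines. In the sketch below (all numbers are hypothetical, chosen only for illustration) the state space has 4 points, so the transition operator can sample each intermediate $p_m$ exactly, which trivially leaves it invariant:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: proposal p_A uniform on 4 states, unnormalized target p_B*.
pA_star = np.ones(4)                   # Z_A = 4
pB_star = np.array([1., 2., 3., 4.])   # Z_B = 10, so Z_B / Z_A = 2.5

def p_m_star(beta):
    # geometric interpolation p_m* = p_B*^beta * p_A*^(1-beta)
    return pB_star**beta * pA_star**(1 - beta)

def ais_weight(M=10):
    """One AIS run; here T_m samples p_m exactly (feasible only for tiny spaces)."""
    betas = np.linspace(0.0, 1.0, M + 1)
    log_w = 0.0
    x = rng.choice(4, p=pA_star / pA_star.sum())      # x_1 ~ p_0
    for m in range(1, M + 1):
        log_w += np.log(p_m_star(betas[m])[x]) - np.log(p_m_star(betas[m - 1])[x])
        if m < M:                                     # apply T_m
            pm = p_m_star(betas[m])
            x = rng.choice(4, p=pm / pm.sum())
    return np.exp(log_w)

estimate = np.mean([ais_weight() for _ in range(2000)])
```

Here the true ratio $\mc{Z}_B/\mc{Z}_A = 10/4 = 2.5$ can be checked by enumeration, and `estimate` lands close to it; in real BMs the exact transitions must of course be replaced by Gibbs sweeps, and everything is carried out in the log-domain.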
\\ \tb{Note}: to avoid numerical problems and overflow errors (partition functions are very large numbers even for moderately sized BMs), all computations are performed in the $\log$-domain, as usual. \\[0.5em] \u{Annealed Importance Sampling for 2-layer Bernoulli BM} \\ It turns out that we can reduce the state space of AIS to only the hidden units in the first layer, $\mb{x}=\{\mb{h}^{(1)}\}$, by explicitly summing out the visible and top-most layer hidden units: \begin{empheq}[box={\mybox[1em][1em]}]{align*} \log p^*\l(\mb{h}^{(1)}\r) &= \log \sum_{\mb{v},\mb{h}^{(2)}} p^*\l(\mb{v}, \mb{h}^{(1)}, \mb{h}^{(2)}\r)= \\ &= \log \sum_{\mb{v},\mb{h}^{(2)}} \exp \l( \mb{v}^T\mb{W}^{(1)}\mb{h}^{(1)} + \mb{h}^{(1)^T}\mb{W}^{(2)}\mb{h}^{(2)} + \mb{b}\cdot\mb{v} + \mb{c}^{(1)}\cdot\mb{h}^{(1)} + \mb{c}^{(2)}\cdot\mb{h}^{(2)} \r) \\ &= \mb{c}^{(1)}\cdot\mb{h}^{(1)} + \log\l[ \sum_{\mb{v}}\exp\l( \mb{v}^T\mb{W}^{(1)}\mb{h}^{(1)} + \mb{b}\cdot\mb{v} \r)\sum_{\mb{h}^{(2)}} \exp\l( \mb{h}^{(1)^T}\mb{W}^{(2)}\mb{h}^{(2)} + \mb{c}^{(2)}\cdot\mb{h}^{(2)} \r) \r] \\ &= \mb{c}^{(1)}\cdot\mb{h}^{(1)} + \sum_i^V \text{softplus}\l(\sum_j^{H_1} W_{ij}^{(1)}h_j^{(1)}+b_i \r) + \sum_k^{H_2} \text{softplus}\l(\sum_j^{H_1} W_{jk}^{(2)}h_j^{(1)}+c_k^{(2)} \r) \end{empheq} \bg \boxed{\log p^*\l(\mb{h}^{(1)}\r)=\mb{c}^{(1)}\cdot\mb{h}^{(1)} + \sum_i^V \text{softplus}\l(\sum_j^{H_1} W_{ij}^{(1)}h_j^{(1)}+b_i \r) + \sum_k^{H_2} \text{softplus}\l(\sum_j^{H_1} W_{jk}^{(2)}h_j^{(1)}+c_k^{(2)} \r)} \eg From this we can easily derive the equation for $\log p_{\textcolor{red}{m}}^*$ by simply scaling all weights by $\beta_m$: \begin{equation} \begin{aligned} \log p_{\textcolor{red}{m}}^*\l(\mb{h}^{(1)}\r) = \textcolor{red}{\beta_m}\mb{c}^{(1)}\cdot\mb{h}^{(1)} + \sum_i^V \text{softplus}\l(\textcolor{red}{\beta_m}\cdot\l(\sum_j^{H_1} W_{ij}^{(1)}h_j^{(1)}+b_i \r)\r) + \\ + \sum_k^{H_2} \text{softplus}\l(\textcolor{red}{\beta_m}\cdot\l(\sum_j^{H_1} W_{jk}^{(2)}h_j^{(1)}+c_k^{(2)} \r)\r) \end{aligned} \end{equation}
When $\beta_m=1$ we obtain the target distribution; when $\beta_m=0$ we obtain the uniform distribution: \bg \log p_0^*\l(\mb{h}^{(1)}\r) \equiv 0+\sum_i^V \text{softplus}(0) + \sum_k^{H_2} \text{softplus}(0)=(V+H_2)\log 2 \eg and thus $ \log \mc{Z}_0=(V+H_1+H_2)\log 2$. \\[1em] Thus we gradually increase the ``inverse temperature'' $\beta$ from 0 to 1 and can estimate the partition function using the procedure described above. Starting from a randomly initialized $\mb{h}^{(1)}$, we apply a sequence of transition operators $T_i$, which are simply the alternating Gibbs sampler with weights scaled by $\beta_i$. \\[1em] We can do the same for different types of units and a larger number of layers. In the latter case we can again analytically sum out the visible and top-most hidden units. \\[0.5em] \u{Variational lower-bound} \\ Having an estimate of the partition function $\t{\mc{Z}}$, we can estimate the variational lower bound for a test vector $\mb{v}^*$ as follows \begin{equation} \begin{aligned} \log p(\mathbf{v}^{*};\boldsymbol{\psi})\geq\;&-\sum_{\mathbf{h}} q(\mathbf{h};\boldsymbol{\mu})E(\mathbf{v}^{*}, \mathbf{h};\boldsymbol{\psi})+\mathcal{H}(\boldsymbol{\mu})-\log\mathcal{Z}(\boldsymbol{\psi}) \\ =\;& \mathbf{v}^{*^{T}}\mathbf{W}^{(1)}\boldsymbol{\mu}_{\mathbf{v}^*}^{(1)}+\boldsymbol{\mu}_{\mathbf{v}^*}^{(1)^{T}}\mathbf{W}^{(2)}\boldsymbol{\mu}_{\mathbf{v}^*}^{(2)}+\mathbf{b}\cdot\mathbf{v}^{*}+\mathbf{c}^{(1)}\cdot\boldsymbol{\mu}_{\mathbf{v}^*}^{(1)}+\mathbf{c}^{(2)}\cdot\boldsymbol{\mu}_{\mathbf{v}^*}^{(2)}+\mathcal{H}(\boldsymbol{\mu}_{\mathbf{v}^*})-\log\mathcal{Z}(\boldsymbol{\psi}) \\ \approx\;& \mathbf{v}^{*^{T}}\mathbf{W}^{(1)}\boldsymbol{\mu}_{\mathbf{v}^*}^{(1)}+\boldsymbol{\mu}_{\mathbf{v}^*}^{(1)^{T}}\mathbf{W}^{(2)}\boldsymbol{\mu}_{\mathbf{v}^*}^{(2)}+\mathbf{b}\cdot\mathbf{v}^{*}+\mathbf{c}^{(1)}\cdot\boldsymbol{\mu}_{\mathbf{v}^*}^{(1)}+\mathbf{c}^{(2)}\cdot\boldsymbol{\mu}_{\mathbf{v}^*}^{(2)}+\mathcal{H}(\boldsymbol{\mu}_{\mathbf{v}^*})-\log\widehat{\mathcal{Z}} \end{aligned} \end{equation}
where $\bs{\mu}_{\mathbf{v}^*}$ are the variational parameters obtained by running the fixed-point equations using the Gibbs sampler until convergence with the visible units clamped to $\mathbf{v}^*$. \\[1em] One can also estimate the true log-probability using AIS by clamping the visible units to the test example (estimating the log-probability of one test example is computationally equivalent to estimating a partition function). \subsubsection{Additional facts} \textbullet{} In \cite{goodfellow2016deep} they say that obtaining state-of-the-art results with DBMs requires an additional partial mean field in the negative phase; more details in \cite{goodfellow2013multi}. \\ \textbullet{} The inference can be further accelerated using a separate \emph{recognition model}, see \cite{salakhutdinov2010efficient} for details. \\ \textbullet{} DBMs were developed after DBNs. Compared to DBNs, the posterior distribution $p(\mb{h}|\mb{v})$ is simpler for DBMs. Somewhat counterintuitively, the simplicity of this posterior distribution allows richer approximations of the posterior \cite{goodfellow2016deep}. \\ \textbullet{} The use of a proper mean field allows the approximate inference procedure for DBMs to capture the influence of top-down feedback interactions. This makes DBMs interesting from the point of view of neuroscience, because the human brain is known to use many top-down feedback connections \cite{goodfellow2016deep}. \\ \textbullet{} In \cite{goodfellow2013joint} they observe that the energy function $E(\mb{v},\mb{h};\bs{\psi})$ inevitably induces some prior $p(\mb{h};\bs{\psi})$ that is not motivated by the structure of any kind of data. The role of the deeper layers in a DBM is simply to provide a better prior on the first-layer hidden units. \clearpage \begin{figure}[h] \begin{mdframed} \centering \includegraphics[width=6.4in]{dbm/tf_graph.png} \caption{High-level computational graph for the DBM model.} \end{mdframed} \end{figure}
\chapter{Introduction}\label{chap_introduction} \setlength{\parskip}{12pt} The Common Community Physics Package (CCPP) is designed to facilitate the implementation of physics innovations in state-of-the-art atmospheric models, the use of various models to develop physics, and the acceleration of transition of physics innovations to operational NOAA models. The CCPP consists of two separate software packages, the pool of CCPP-compliant physics schemes (\execout{ccpp-physics}) and the framework (driver) that connects the physics schemes with a host model (\execout{ccpp-framework}). The connection between the host model and the physics schemes through the CCPP framework is realized with caps on both sides as illustrated in Fig.~\ref{fig_ccpp_design_with_ccpp_prebuild} in Chapter~\ref{chap_hostmodel}. While the caps to the individual physics schemes are auto-generated, the cap that connects the framework (Physics Driver) to the host model must be created manually. For more information about the CCPP design and implementation, please see the CCPP Design Overview at {\url{https://dtcenter.org/gmtb/users/ccpp/docs/}}. This document serves two purposes, namely to describe the technical work of writing a CCPP-compliant physics scheme and adding it to the pool of CCPP physics schemes (Chapter~\ref{chap_schemes}), and to explain in detail the process of connecting an atmospheric model (host model) with the CCPP (Chapter~\ref{chap_hostmodel}). For further information and an example for integrating CCPP with a host model, the reader is referred to the GMTB Single Column Model (SCM) User and Technical Guide v1.0 available at {\url{https://dtcenter.org/gmtb/users/ccpp/docs}}. At the time of writing, the CCPP is supported for use with the GMTB Single Column Model (SCM). Support for use of CCPP with the experimental version of NCEP's Global Forecast System (GFS) that employs the Finite-Volume Cubed-Sphere dynamical core (FV3GFS) is expected in future releases. 
The GMTB welcomes contributions to the CCPP, whether those are bug fixes, improvements to existing parameterizations, or new parameterizations. There are two aspects of adding innovations to the CCPP: technical and programmatic. This Developer's Guide explains how to make parameterizations technically compliant with the CCPP. Acceptance into the master branch of the CCPP repositories, and elevation of a parameterization to supported status, depend on a set of scientific and technical criteria that are under development as part of the incipient CCPP Governance. Contributions can be made in the form of git pull requests to the development repositories, but before initiating a major development for the CCPP, please contact GMTB at \url{gmtb-help@ucar.edu} to create an integration and transition plan. For further information, see the Developer's Corner for CCPP at \url{https://dtcenter.org/gmtb/users/ccpp/developers/index.php}. Note that while the pool of CCPP physics and the CCPP framework are managed by the Global Model Test Bed (GMTB) and governed jointly with partners, the code governance for the host models lies with their respective organizations. Therefore, inclusion of the CCPP within those models should be brought up with their governing bodies.
\providecommand{\classoptions}{keys} %% The next two lines are suggested at %% to work around the following error: %% %% ---------------------------- %% /usr/local/texlive/2018/texmf-dist/tex/latex/chngcntr/chngcntr.sty:42: LaTeX Error: Command \counterwithout already defined. %% Or name \end... illegal, see p.192 of the manual. %% %% See the LaTeX manual or LaTeX Companion for explanation. %% Type H <return> for immediate help. %% ... %% %% l.42 ...thout}{\@ifstar{\c@t@soutstar}{\c@t@sout}} %% ---------------------------- %% %% The two lines: \let\counterwithout\relax \let\counterwithin\relax %% Suggested fix above taken from %% https://tex.stackexchange.com/questions/425600/latex-error-command-counterwithout-already-defined %% \documentclass[ 11pt, deliverables, longtasklabels, numericcites, noworkareas, svgnames, \classoptions ]{euproposal} % for writing %\documentclass[submit,noworkareas,deliverables]{euproposal} % for submission %\documentclass[submit,public,noworkareas,deliverables]{euproposal} % for public version \usepackage[utf8]{inputenc} \usepackage{hyperref} \usepackage{enumitem} \usepackage{booktabs} % \usepackage{minitoc} %\usepackage{varioref} \usepackage{float} % used to suppress floating of tables in Resources section. \usetikzlibrary{calc,fit,positioning,shapes,arrows,snakes} \graphicspath{{tasks/}} \addbibresource{bibliography.bib} % temporary fix due to http://tex.stackexchange.com/questions/311426/bibliography-error-use-of-blxbblverbaddi-doesnt-match-its-definition-ve \makeatletter\def\blx@maxline{77}\makeatother % \input{WApersons} % Some sections of the included files depend on this. 
\input{preamble} \usepackage{framed} \usepackage{multicol} \usepackage{lipsum} \newcommand{\allparticipants}{{SRL,MP,QS,UIO,IFR}} \newcommand{\softwarename}[1]{\texttt{#1}} \newcommand{\repotodocker}{\softwarename{repo2docker}} \newcommand{\binderhub}{\softwarename{BinderHub}} % \newcommand{\mybinder}{\softwarename{mybinder.org}} % mybinder.org in monospaced font \newcommand{\mybinder}{mybinder.org} % mybinder.org in normal proportional % font % \newcommand{\myemph}[1]{\emph{#1}}% to try bf or emph in ambition section \newcommand{\myemph}[1]{\textbf{#1}}% to try bf or emph in ambition section \newcommand{\noemph}[1]{#1}% to switch off emphasis but keep the option to % de-activate it again later %\newcommand*{\fullref}[1]{\ref{#1} \nameref*{#1}} % One \newcommand*{\fullref}[1]{\hyperref[{#1}]{\ref{#1} \nameref*{#1}}} % single clickable link of the style: 1.1 Concept % source: https://tex.stackexchange.com/questions/121865/nameref-how-to-display-section-name-and-its-number % longtaskref: T1.2: Title of task \newcommand\longtaskref[2]{\csname task@#1@#2@label\endcsname: ``\csname task@#1@#2@title\endcsname''} \begin{document} % satisfy fancyhdr with 11pt \setlength{\headheight}{13.6pt} \begin{draft} \section*{Guidelines for proposal co-authors} \begin{verbatim} - Consistency. - [ ] Section or section or Sect or Sec? Use 'Section' - [ ] Figure or figure or Fig or Fig.? Use 'Figure' - [ ] Binder or binder - Binder? Use 'Binder' - [ ] MyBinder or Mybinder or mybinder? Use `mybinder.org` for the public instance of the service -> \mybinder - we have a command \repotodocker to insert 'repo2docker' in \softwarename{} font. - we have a command \binderhub to insert 'BinderHub'. - we have a command \mybinder to insert 'mybinder.org'. - Best use {} after the commands to enforce a space, i.e. ``\repotodocker{} is the focus'' - to discuss - ONHOLD [ ] names of work packages. Section 3.1.1 - [X] Management - [ ] Core -> enhancement? Robustness? - [ ] Impact -> new features? 
Increase impact? - [ ] Applications -> Use cases? - [ ] Education -> Dissemination? Education and Outreach? Hm, the long names see okay. Need to investigate where I got the short names from. - This nice section could be included in our approach (1.2.10): Facilitating open and reproducible science by automating existing practices & A key to our philosophy and success thus far has been automating what scientists already are (or should be) doing. The approaches and environment specifications used in Binder are not specific to Binder, and are already widely adopted. We only seek to automate this process, and implement and document as many standards as we can find in use by the community. By implementing what is already in use, we minimise "lock-in" and meet users, lowering the barrier to adoption relative to "bespoke" tools, which require a large change in tooling, and significant disruption to researchers' work. - [ ] Check size limit for pdf to upload (was it 10MB? Can be seen at upload menu in portal) - [ ] check size of final.pdf - [ ] if size is a problem, we can downsample the 'spectrogram.png' \end{verbatim} \section*{Todo items} - [ ] describe our relation to the Binder team \end{draft} \draftpage \begin{proposal}[ % participants PI=mrk, mrkname=Benjamin Ragan-Kelley, mrkaffiliation=Simula Research Laboratory, mrkdept=Numerical Analysis and Scientific Computing, mrktitle=Dr., % site descriptions site=SRL, % Simula SRLacronym=Simula, SRLshortname=Simula Research Laboratory, SRLcountryshort=NO, SRLcountry=Norway, site=MP, % Max Planck MPacronym=MPG, MPshortname=Max Planck Gesellschaft, MPcountryshort=DE, MPcountry=Germany, site=QS, % QuantStack QSacronym=QuantStack, QSshortname=QuantStack, QScountryshort=FR, QScountry=France, site=IFR, % Ifremer IFRacronym=Ifremer, IFRshortname=Ifremer, IFRcountryshort=FR, IFRcountry=France, site=UIO, % U Oslo UIOacronym=UiO, UIOshortname=University of Oslo, UIOcountryshort=NO, UIOcountry=Norway, % site=XXX, % template example % 
alternative: (can be combined) coordinator=Simula Research Laboratory, % Cemail=benjaminrk@simula.no, % Ctelfax=(47) XXX-XX-XXX, %coordinatorsite=SRL, acronym={SOURCE}, acrolong={SOURCE}, proposalnumber={SEP-210850361}, title={Supporting Open, Useful, and Reproducible Computational Environments}, % callname=Increasing the reproducibility of scientific results, % callid=WIDERA-2022-ERA-01-41, % TODO: consistency with provided template % CALL: H2020-EINFRA-2015-1 % TOPIC: e-Infrastructures for Virtual Research Environments (VRE) % Instrument: e-Infrastructures keywords={ Open Science, reproducibility, reusability, education, accessibility, Jupyter, Binder, notebooks, cloud, HPC, EOSC, FAIR data, physics, chemistry, biology, materials, geosciences }, % computational mathematics, % GAP, Linbox, PARI, Sage, Singular, IPython, Jupyter, SageMathCloud, LMFDB, MathHub % Virtual research environments, MPIR, /GP % open source, free software, number theory, abstract algebra, notebooks % instrument= Call: HORIZON-WIDERA-2022-ERA-01-41, %Call: H2020-EINFRA-2015-1, 3 Topic 9-2015 % challengeid = TODO, %challenge = {N/A}, %objectiveid={N/A}, %objective = TODO, %outcomeid = N/A, %outcomet = N/A, months=36, compactht] \newcommand{\TheProject}{\pn}% \pn is defined automatically % \input{grantagreement-history} \ifsubmit \else % only abstract in draft \draftpage \input{abstract} % detailed toc in draft \setcounter{tocdepth}{4} \fi \tableofcontents % --------------------------------------------------------------------------- % Section 1: Excellence % --------------------------------------------------------------------------- \section{Excellence} \eucommentary{4 pages} \eucommentary{\emph{ \begin{itemize} \item Briefly describe the objectives of your proposed work. Why are they pertinent to the work programme topic? Are they measurable and verifiable? Are they realistically achievable? 
\item Describe how your project goes beyond the state-of-the-art, and the extent the proposed work is ambitious. Indicate any exceptional ground-breaking R-and-I, novel concepts and approaches, new products, services or business and organisational models. Where relevant, illustrate the advance by referring to products and services already available on the market. Refer to any patent or publication search carried out. \item Describe where the proposed work is positioned in terms of R-and-I maturity (i.e. where it is situated in the spectrum from idea to application, or from lab to market). Where applicable, provide an indication of the Technology Readiness Level, if possible distinguishing the start and by the end of the project. \end{itemize} Please bear in mind that advances beyond the state of the art must be interpreted in the light of the positioning of the project. Expectations will not be the same for RIAs at lower TRL, compared with Innovation Actions at high TRLs. } } \medskip \input{objectives-and-ambition.tex} % \input{excellence.tex} % \subsection{Objectives and ambition} % \input{ambition.tex} % \draftpage % \input{objectives} % \draftpage % \input{relation_to_the_work_programme.tex} % --------------------------------------------------------------------------- % Section 1.2: Methodology % --------------------------------------------------------------------------- \draftpage \input{concept.tex} % --------------------------------------------------------------------------- % Section 2: Impact % --------------------------------------------------------------------------- \draftpage \input{impact.tex} \draftpage % --------------------------------------------------------------------------- % Section 3: Implementation % --------------------------------------------------------------------------- \section{Quality and efficiency of the implementation} \COMMENT{Typical granularity: 5-8 work packages with 3-5 tasks and one deliverable per task; 10 milestones} 
\subsection{Work plan and resources} \label{sect:workplan} \input{workplan} \draftpage \subsection{Capacity of participants and consortium as a whole} \input{consortium.tex} \draftpage %AF\input{appendix.tex} \end{proposal} \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End: % LocalWords: sud logilab urich Simula thiery acrolong igital esearch nvironments pn wp % LocalWords: athematics pnlong callname callid challengeid objectiveid outcomeid emph % LocalWords: compactht newcommand tableofcontents Linbox IPython textbf eucommentary % LocalWords: vre TOWRITE citability Cython Laboratoire Recherche Informatique devs WPs % LocalWords: clearpage draftpage programme workplan subsubsection pdatacount wplist sc % LocalWords: WPref dissem pageref newpage sssec hline ganttchart xscale makeatletter % LocalWords: makeatother wpfigstyle footnotesize tabcolsep wpfig inputdelivs mgt smc % LocalWords: mathsoftware mathdb mathknowledge Jupyter silesia pythran Pythran ldots % LocalWords: Simulagora stigmatisation compactenum planetmath.org Univ botupPM Gnuplot % LocalWords: boxedminipage textwidth compactitem fangohr providecommand classoptions % LocalWords: ifsubmit setcounter tocdepth neighbouring incentivesed Gowers analyse hpc % LocalWords: incentivised Ebay taskref structdocs taskref minimising parallelisation % LocalWords: dksbases decisionmaking oommf-nb-evaluation gantttaskchart yscale Belabas % LocalWords: Boussicault endeavours GitHub isocial-decisionmaking enlargethispage
% !TEX root = ./main.tex \section{Implementing the evolutionary forces} As hinted at in Fig.~\ref{fig01:moran}(A), the Moran process gives us a simple recipe for how to encode the different evolutionary forces on the transition rates $W^\pm(x)$. Recall that since the transition rates in allele frequency space are the same as the transition rates in number of organisms space, we can more easily conceptualize the forces in the latter. In other words, the rate to transition from $x$ to $x + \Delta x$ is by construction equal to the rate to transition from $n$ to $n + 1$; therefore we can define the transition rates in the more convenient space of organism number. In the Moran process, changes in population composition happen when a randomly chosen organism dies and is immediately replaced by an organism with a different allele (since replacement with the same allele does not change the population composition). This means that the general form in which the number of organisms with allele $A$ can change looks like \begin{equation} W^+(x) = w^+(n) = \text{Rate of $a$ dying} \times \text{Probability of replacement by $A$}, \end{equation} and \begin{equation} W^-(x) = w^-(n) = \text{Rate of $A$ dying} \times \text{Probability of replacement by $a$}. \end{equation} Having these general forms, let us define the rates for the different evolutionary forces. \subsection{Genetic drift} \mrm{Further discussion of what genetic drift is?} Given the intrinsic stochasticity of the Moran process, the easiest evolutionary force to implement is genetic drift. For simplicity, we assume that both allele types have the same death rate per organism, $\gamma$.
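The recipe above, one death rate multiplied by one replacement probability per event, can be sketched as a minimal simulation. This is an illustrative sketch we add here, not code from the text; the function name \texttt{moran\_step} and the drift-only choice of equal fitness are our own assumptions.

```python
import random

def moran_step(n, N, rng):
    """One drift-only Moran event: a uniformly chosen organism dies and is
    replaced by the offspring of a uniformly chosen organism, so the
    replacement carries allele A with probability n/N."""
    dies_A = rng.random() < n / N   # the dying organism carries allele A
    born_A = rng.random() < n / N   # the replacing offspring carries allele A
    # n changes only when the dying and replacing alleles differ
    return n + int(born_A) - int(dies_A)

# A short trajectory: allele counts stay in [0, N], and 0 and N are absorbing,
# since at the boundaries the death and birth draws always agree.
rng = random.Random(42)
n, N = 10, 20
for _ in range(1000):
    n = moran_step(n, N, rng)
    assert 0 <= n <= N
```

Note the absorbing boundaries: once one allele is lost, drift alone can never bring it back, which is why mutation is needed later to restore variation.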
For the case where only genetic drift is changing the population composition, we assume that the reproduction probability per organism is the same (equivalent to saying both genotypes have the same fitness), so the probability of an organism that died being replaced by an organism with the opposite allele is given by the relative frequency of that allele. Mathematically this means that we can express the rate with which the number of organisms with allele $A$ increases as \begin{equation} W^+(x) = w^+(n) = \overbrace{\gamma \times (N-n)}^{\text{rate of $a$ dying}} \times \overbrace{\frac{n}{N}}^ {\substack{\text{prob. of $A$} \\ \text{replacing}}}, \end{equation} where the term $\gamma \times (N-n)$ defines the rate at which an organism with an allele $a$ dies, and the term $n/N$ defines the probability of an organism with allele $A$ reproducing. Equivalently, the rate with which the number of organisms with allele $A$ decreases takes the form \begin{equation} W^-(x) = w^-(n) = \overbrace{\gamma n}^{\substack{\text{rate of $A$}\\ \text{dying}}} \overbrace{\frac{(N-n)}{N}}^ {\substack{\text{prob. of $a$} \\ \text{replacing}}}. \end{equation} Given that both rates are equal, we can see that the first term in Eq.~\ref{eq:pde_x_general} involving $W^+(x) - W^-(x)$ is zero. This means that there are no deterministic (directional) forces when only genetic drift is considered, just pure randomness. Substituting the sum of the rates into the second term of Eq.~\ref{eq:pde_x_general} results in \begin{equation} \begin{aligned} \frac{\partial P(x, t)}{\partial t} &=\frac{1}{2 N^{2}} \frac{\partial^{2}}{\partial x^{2}}\left[2 \gamma \frac{(N-n) n}{N} P(x, t)\right], \\ &=\frac{\partial^{2}}{\partial x^{2}} \left[\frac{1}{2 N^{2}} 2 \gamma\left(\frac{(N-n) n}{N} \right) P(x, t)\right],\\ &=\frac{\gamma}{N} \frac{\partial^{2}}{\partial x^{2}}[x(1-x) P(x, t)], \end{aligned} \end{equation} where for the last step we substitute the definition of $x\equiv n/N$.
We redefine the time scale to be in units of $\gamma^{-1}$, meaning that time is measured in terms of the mean life expectancy of an organism. This allows us to write \begin{equation} \frac{\partial P(x, t)}{\partial t} = \frac{1}{N} \frac{\partial^{2}}{\partial x^{2}}[x(1-x) P(x, t)], \end{equation} the classic Kimura diffusion equation for genetic drift only. \subsection{Genetic drift plus selection} Natural selection is intrinsically associated with the concept of fitness. The phrase ``survival of the fittest,'' coined by Herbert Spencer and adopted by Darwin, guided and still guides the way that biologists think about the evolution of many organisms. But despite the fact that fitness is part of the daily jargon of many biologists, it is a subtle and highly debated concept. After all, what defines the ability of an organism to survive the challenges that surround it is completely context dependent. Roughly speaking, we can think of fitness as the ability of an organism, or a population of organisms, to survive and reproduce in the ecological niche they occupy. The concept of fitness only makes sense when there is competition between organisms. The intrinsic growth rate of a completely homogeneous population is irrelevant since, as we mentioned before, a growing population with a single allele is not evolving: there are no changes to the population composition. The term ecology has to be included because fitness is a result of the interplay of organisms with their environment, including all biotic and abiotic interactions. It is common both in theory and in experiments to use the relative growth rates of organisms, i.e.\ the speed at which they can reproduce and generate offspring, as a proxy for fitness. This is a convenient approximation both for experiments and for theory, but one should not lose track of the context dependence of fitness.
Just because redwoods have an average life span of 500--700 years and a very low growth rate does not mean they are not fit. Having said that, we will first begin with the simplest form of fitness, i.e.\ frequency-independent selection. The term frequency independence simply refers to the assumption that the fitness of a particular allele does not depend on the relative abundance of that allele. This assumption could break down for cases such as some pathogenic bacteria that coordinate their attack via cell-to-cell communication known as quorum sensing. To implement the effect of different reproductive success for different alleles we introduce parameters $f_A$ and $f_a$ as the fitness values for alleles $A$ and $a$, respectively. With these parameters in hand we must redefine the probability of an organism reproducing to replace the one that dies in the Moran process. The replacement probability for allele $A$ is now given by \begin{equation} \text{Prob. of $A$ replacing} = \frac{f_A n}{f_A n + f_a (N - n)}. \end{equation} Likewise for allele $a$ we have \begin{equation} \text{Prob. of $a$ replacing} = \frac{f_a (N - n)}{f_A n + f_a (N - n)}. \end{equation} Let us now assume that $f_A \approx (1 + s) f_a$ for a small $s$. This parameter $s$ is the so-called selection coefficient, which in nature can be of the order of $10^{-3}$ or less. With these assumptions we can simplify the replacement probabilities to \begin{equation} \text{Prob. of $A$ replacing} \approx \frac{n}{N}(1 + s), \end{equation} and \begin{equation} \text{Prob. of $a$ replacing} \approx \frac{(N - n)}{N}, \end{equation} where, after canceling $f_a$ from the numerator and denominator, we assume that $(1 + s)n + (N - n) \approx N$ since $s \ll 1$. With these updated replacement probabilities we again compute the population change rates $W^\pm(x)$ as the product of the rate of a certain type of organism dying times the probability of being replaced by the opposite allele.
These rates take the form \begin{equation} W^+(x) = \overbrace{\gamma \times (N - n)}^{\text{rate of $a$ dying}} \times \overbrace{\frac{n}{N} (1 + s)}^ {\substack{\text{prob. of $A$ replacing}\\ \text{with fitness difference}}}, \end{equation} and \begin{equation} W^-(x) = \gamma n \frac{(N - n)}{N}. \end{equation} With these rates we can now compute the sum and difference required by Eq.~\ref{eq:pde_x_general}. The difference of these two rates takes the form \begin{equation} W^+(x) - W^-(x) = \gamma s \frac{n(N-n)}{N}. \end{equation} The sum results in \begin{equation} W^+(x) + W^-(x) = \gamma (2 + s) \frac{n(N-n)}{N}. \end{equation} Substituting these into Eq.~\ref{eq:pde_x_general} results in \begin{equation} \frac{\partial}{\partial t} P(x, t) = -\frac{1}{N} \frac{\partial}{\partial x} \left[\gamma s \left(\frac{(N-n) n}{N}\right) P(x, t)\right] +\frac{1}{2 N^{2}} \frac{\partial^{2}}{\partial x^{2}} \left[\gamma (2+s)\left(\frac{(N-n) n}{N}\right) P(x, t)\right]. \end{equation} Simplifying terms and substituting the definition of the allele frequency gives \begin{equation} \frac{\partial}{\partial t} P(x, t) = -\gamma s \frac{\partial}{\partial x}[x(1-x) P(x, t)] +\frac{\gamma\left(1+\frac{s}{2}\right)}{N} \frac{\partial^{2}}{\partial x^{2}}[x(1-x) P(x, t)]. \end{equation} To obtain the final form we again write the time scale in units of $\gamma^{-1}$. Furthermore, we use the simplification that $s \ll 1$, obtaining the classic Kimura diffusion equation for selection and drift \begin{equation} \frac{\partial}{\partial t} P(x, t) = -\frac{\partial}{\partial x}[s x(1-x) P(x, t)] +\frac{1}{N} \frac{\partial^{2}}{\partial x^{2}}[x(1-x) P(x, t)]. \end{equation} \subsection{Genetic drift plus selection plus mutation} One of the ingredients for evolution to take place is the constant appearance of genetic variability. After all, the raw material for evolution to act on is the appearance of new mutations in the population.
While both genetic drift and selection reduce population diversity, mutation creates more diversity. In our one-locus two-allele case it can even resurrect alleles that went extinct, as one organism changes its genetic content. Implementing this third force changes the ways in which the composition of the population can change. As depicted in Fig.~\ref{fig01:moran}(A), if mutation is taken into account, there are two possible substitutions that would modify the allele frequency: (1) the usual path in which an organism with the opposite allele to the one that died reproduces and does not mutate when doing so, and (2) the possibility of an organism with the same allele as the one that died reproducing, but mutating to the opposite allele when doing so. For simplicity we will assume that the mutation probability from $A$ to $a$, $\mu_{A\rightarrow a}$, is the same as from $a$ to $A$, $\mu_{a\rightarrow A}$, and denote it simply by $\mu$. Let us then define the transition rate $W^+(x)$ as \begin{equation} W^+(x) = W^+_{(1)}(x) + W^+_{(2)}(x), \end{equation} where we break the rate into the two possible paths. The first path, in which an organism of the opposite allele to the one that died replaces it and does not mutate when doing so, takes the form \begin{equation} W^{+}_{(1)}(x) = \overbrace{\gamma(N-n)}^{\text{rate of $a$ dying}}\times \overbrace{\frac{n}{N}(1+s)}^ {\substack{\text{prob. of $A$ replacing}\\ \text{with fitness diff.}}}\times \overbrace{(1-\mu)}^ {\substack{\text{prob. of not} \\ \text{mutating}}}, \end{equation} where the evolutionary forces appear as a product of the rates and probabilities of each of the steps taking place. For the second path, in which the organism that replaces the one that dies is of the same type, but when it reproduces there is a mutation to the opposite allele, we have a rate of the form \begin{equation} W^{+}_{(2)}(x) = \overbrace{\gamma(N-n)}^{\text{rate of $a$ dying}}\times \overbrace{\frac{(N - n)}{N}}^ {\text{prob.
of $a$ replacing}}\times \overbrace{\mu}^{\text{prob. of mutating}}. \end{equation} Putting these two rates together results in a transition rate $W^{+}(x)$ of the form \begin{equation} W^{+}(x) = \overbrace{\gamma(N-n) \frac{n}{N}(1+s) (1-\mu)}^{\text{path (1)}} + \overbrace{\gamma(N-n) \frac{(N-n)}{N} \mu}^{\text{path (2)}}. \end{equation} Equivalently, we can write the rate $W^{-}(x)$ as a decomposition of the two possible paths that change the population composition. Putting these two paths together results in \begin{equation} W^{-}(x)= \overbrace{\gamma n \frac{(N-n)}{N}(1-\mu)}^{\text{path (1)}} +\overbrace{\gamma n \frac{n}{N}(1+s) \mu}^{\text{path (2)}}. \end{equation} Again we follow Eq.~\ref{eq:pde_x_general} and compute the sum and the difference between these rates. After some algebra, we find that the difference between these rates is of the form \begin{equation} W^+(x) - W^-(x) = \frac{\gamma}{N}\left[n s\left(N-n-N\mu\right)+N{\mu}(N-2 n)\right]. \end{equation} For the sum of the rates we find \begin{equation} W^+(x) + W^-(x) = \frac{\gamma}{N}\left[N n(2-4 \mu+s-\mu s) + n^{2}(-2+4 \mu-s+2 \mu s) + N^{2} \mu\right]. \end{equation} Substituting these rates into Eq.~\ref{eq:pde_x_general} gives \begin{equation} \begin{split} \frac{\partial}{\partial t} P(x, t)= &-\frac{1}{N} \frac{\partial}{\partial x} \left[ \frac{\gamma}{N} \left(n s \left(N - n - N \mu \right)+ N \mu (N - 2 n) \right) P(x, t)\right] \\ & + \frac{1}{2 N^{2}} \frac{\partial^2}{\partial x^{2}} \left[\frac{\gamma}{N}\left(N n (2 - 4 \mu+s-\mu s) -n^{2}\left(2-4 \mu+s-2\mu s\right) + N^{2} \mu\right) P(x, t)\right]. \end{split} \end{equation} Substituting the definition of the allele frequency results in \begin{equation} \begin{split} \frac{\partial}{\partial t} P(x, t)= &-\gamma \frac{\partial}{\partial x}\left[\left(x s(1-x-\mu)+\mu(1-2 x)\right) P(x, t)\right] \\ &+\frac{\gamma}{2 N} \frac{\partial^{2}}{\partial x^{2}} \left[\left(x(2-4 \mu+s-\mu s)-x^{2}(2-4 \mu+s-2 \mu s)+\mu\right) P(x, t)\right]. 
\end{split} \end{equation} To get to the final equation we simply make use of the approximation that both $s, \mu \ll 1$. Implementing this, and writing the time scale in units of $\gamma^{-1}$, results in the classic diffusion theory equation with all three forces implemented \begin{equation} \frac{\partial}{\partial t} P(x, t) = -\frac{\partial}{\partial x}\left[\left(s x(1-x) + \mu (1 - 2x)\right) P(x, t)\right] +\frac{1}{N} \frac{\partial^{2}}{\partial x^{2}}[x(1-x) P(x, t)]. \end{equation}
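The two-path rates with all three forces can be checked numerically against the closed form quoted for $W^+ - W^-$. This is our own sketch; the parameter values are arbitrary and the function names are not from the text.

```python
N, gamma, s, mu = 50, 1.0, 0.02, 0.001

def w_plus(n):
    # path (1): a dies, A reproduces without mutating
    # path (2): a dies, a reproduces but mutates to A
    return (gamma * (N - n) * (n / N) * (1 + s) * (1 - mu)
            + gamma * (N - n) * ((N - n) / N) * mu)

def w_minus(n):
    # path (1): A dies, a reproduces without mutating
    # path (2): A dies, A reproduces but mutates to a
    return (gamma * n * ((N - n) / N) * (1 - mu)
            + gamma * n * (n / N) * (1 + s) * mu)

def difference(n):
    # closed form for W+ - W- derived in the text
    return gamma / N * (n * s * (N - n - N * mu) + N * mu * (N - 2 * n))

for n in range(N + 1):
    assert abs(w_plus(n) - w_minus(n) - difference(n)) < 1e-9
```

Note that at the boundaries $n = 0$ and $n = N$ the difference no longer vanishes: mutation pushes the population away from fixation, which is exactly the $\mu(1-2x)$ term in the final equation.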
\documentclass[10pt,reqno]{beamer} \usepackage[utf8]{inputenc} \usetheme{Dresden} \usecolortheme{beaver} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{stmaryrd} \usepackage{siunitx} \usepackage{subcaption} \usepackage[backend=biber, style=chem-acs]{biblatex} \setbeamertemplate{navigation symbols}{} \title{Bond Graph Clinic: Part 3} \subtitle{Biomolecular Systems} \author{Peter Cudmore} \institute{Systems Biology Lab, The University of Melbourne} \newcommand{\D}[2]{\frac{\mathrm{d} #1}{\mathrm{d} #2}} \newcommand{\e}{\mathrm{e}} \newcommand{\I}{\mathrm{i}} \renewcommand{\mod}[1]{\left|#1\right|} \newcommand{\DD}[2]{\frac{\mathrm{d}^2 #1}{\mathrm{d} #2^2}} \newcommand{\bigO}[1]{\text{O}\left(#1\right)} \renewcommand{\P}[2]{\frac{\partial #1}{\partial #2}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \newcommand{\EX}{\mathbb{E}} \newcommand{\df}[1]{\mspace{2mu} \mathrm{d}#1} \newcommand{\reals}{\mathbb{R}} \newcommand{\complex}{\mathbb{C}} \newcommand{\conj}[1]{\overline{#1}} \bibliography{references} \begin{document} \begin{frame} \titlepage \addtocounter{framenumber}{-1} \end{frame} \begin{frame} \tableofcontents[hideallsubsections] \end{frame} \section{Introduction} \subsection{Previously...} \begin{frame} \frametitle{Network models of energetic systems} \begin{figure} \includegraphics{images/bondgraph.pdf} \end{figure} Bond Graphs capture: \begin{itemize} \item Energy transferred between $B,C,D$ without loss via bonds. \item Power transfer represented by conjugate variables $P_i=e_if_i$. \item Subsystem dynamics via constitutive relations $\Phi_B(e,f) = 0$. \end{itemize} We've also looked at some examples. \end{frame} \begin{frame} \frametitle{A Mathematician's Model of a Chemical System} A naive description of molecular systems is as follows: \begin{enumerate} \item A set of distinct quanta (molecules, complexes, atoms and free electrons) $A, B, \ldots$ move stochastically through a volume. 
\item When $A, B$ are sufficiently close they may bind to form complex $C$. \item After some time $\tau$, $C$ may dissociate into a number of quanta $E, \ldots$. \end{enumerate} All basic reaction types (synthesis, decomposition and replacement) can be represented in this way.\\ \vspace*{10pt} Clearly this is an example of a \emph{reaction-diffusion} process. \end{frame} \subsection{Today} \begin{frame} \frametitle{This Clinic} Today we shall consider \emph{reactions}. \vfill The next clinic will consider \emph{diffusion}. \vfill %For more details to Oster, Perelson and Auslander \cite{Oster:1971aa, Oster:1971ab, Oster:1973aa, Oster:1974aa, Perelson:1974, Auslander:1972aa} \end{frame} \begin{frame} \frametitle{Spatial Assumptions} Consider chemical reactions inside some vessel of fixed volume.\\ We assume that the chemical solution is well mixed, i.e.: \begin{itemize} \item the solution is actively stirred, or \item the diffusion rate across the volume is orders of magnitude faster than the fastest reaction rate. \end{itemize} \vfill This basically means we can work with average concentrations and ignore spatial effects inside the vessel. \vfill Later, we will couple many vessels together to represent diffusion. \end{frame} \section{Chemical Reactions} \subsection{Chemical Reaction Networks} \begin{frame} \frametitle{Chemical Reaction Network: Petri Net} \begin{figure} \includegraphics{images/petrinet_abc} \caption{Petri net of $A+B \rightleftharpoons C$} \end{figure} \end{frame} \begin{frame} \frametitle{Chemical Reaction Network: Bond Graph} \begin{figure} \includegraphics{images/bondgraph_abc} \caption{Bond Graph of $A+B \rightleftharpoons C$} \end{figure} \end{frame} \begin{frame} \frametitle{Thermodynamic Assumptions} Consider chemical reactions inside some vessel of fixed volume.\\ We assume that the solution is \emph{isobaric} and \emph{isothermal}. 
These assumptions allow us to define the chemical potential \[ \mu_A = \mu_A^\varoslash + RT\ln \frac{x_A}{V_m} \] \only<1>{ \begin{itemize} \item $R=N_Ak_B\approx 8.314$ is the gas constant in \si{\joule\per\kelvin\per\mole} \item $T\approx 300$ is temperature (in \si{\kelvin}) \item $\mu_A^\varoslash$ is the chemical potential of a pure solution of $A$ in \si{\joule} \item $x_A$ is the amount of species $A$ in \si{\mole}. \item $V_m$ is the total amount of solution in \si{\mole} \end{itemize} This follows from the \emph{Ideal Gas Law} and the \emph{Fundamental Equation of Thermodynamics}. } \only<2>{ \begin{align*} \text{Energy}&=x_A \cdot \mu_A\ \si{\joule} \\ \text{Power}&=\dot{x}_A\cdot \mu_A\ \si{\joule\per\second} \end{align*} \begin{center} $\mu_A$ is \emph{effort} or force-like \\ \vspace{10pt} $\dot{x}_A$ is \emph{flow} or flux-like. \end{center} \vfill } \only<3>{ Recall $x_A$ is the molar amount of species $A$.\\ \vfill Question:\\ What happens to $\mu_A$ when $x_A \rightarrow 0$ but $V_m$ is constant?\\ When might this occur and is this physical? } \end{frame} \subsection{Components} \begin{frame} \frametitle{Ce Constitutive Relation} \begin{figure} \includegraphics{images/oneport-Ce.pdf} \end{figure} Constitutive Relation for a Chemical Species: \[ \Phi_{Ce}(e,f) = e - \beta \ln(kq) = 0 \] \vfill \only<2>{ This follows from substituting the parameters \[ k = \exp(\mu_A^\varoslash/RT)/V_m,\qquad \beta = RT \] and the state variables $\mu_A = e$, $\dot{x}_A = f$ and $x_A = q = q_0 + \int_0^t f\df{t} $ into \[ \mu_A = \mu_A^\varoslash + RT\ln \frac{x_A}{V_m}. 
\]} \only<3>{ \[ \beta = RT, \qquad k = \exp(\mu_A^\varoslash/\beta)/V_m \] \vfill \begin{center} \emph{It will often be convenient for us to take $\beta =1$} \end{center} } \end{frame} \begin{frame} \frametitle{Chemical Kinetics} Reactions proceed according to the \emph{Marcelin-de Donder} formula \[ v = \kappa\left(\e^{A^f/RT} - \e^{A^r/RT}\right) \] \only<1>{Here: \begin{itemize} \item $v$ is the reaction velocity or molar flow \item $A^f, A^r$ are the forward and reverse chemical affinities \item $\kappa$ is the reaction rate constant. \item $R, T$ are the gas constant and temperature, respectively. \end{itemize}} \only<2>{ The mass flow in is $v$, hence the mass flow out is $-v$.\\ \vspace{10pt} So \[ f_1 = v,\qquad f_2 = -v \] are natural \emph{flow} variables. } \only<3>{ Consider a chemical reaction $\nu^f_AA + \nu^f_BB +\ldots \rightleftharpoons \nu^r_AA + \nu^r_BB +\ldots$ with forward and reverse stoichiometric coefficients $\nu^f$ and $\nu^r$.\\ \vfill The forward (and similarly reverse) affinity is defined as \[ A^f = \nu^f_A\mu_A + \nu^f_B\mu_B +\ldots \] So the natural effort variables are \[ e_1 = A^f, \qquad e_2 = A^r. \] } \end{frame} \begin{frame} \frametitle{Re Constitutive Relation} \begin{figure} \includegraphics{images/twoport-Re.pdf} \end{figure} Constitutive Relation for a reaction component: \[ \Phi_{Re}(\mathbf{e},\mathbf{f}) = \left( \begin{matrix} \kappa[\exp(e_1/\beta) - \exp(e_2/\beta)] - f_1\\ f_1 + f_2 \end{matrix} \right) = 0 \] \vfill \only<1>{ Where again $\beta =RT$ is often taken as $\beta =1$. } \only<2>{ This follows directly from the Marcelin-de Donder formula. } \end{frame} \begin{frame} \frametitle{Putting it together} \begin{figure} \includegraphics{images/bondgraph_ab} \end{figure} {\scriptsize The above bond graph describes the reaction $A\rightleftharpoons B$. 
\begin{minipage}{0.475\linewidth} \begin{align} \Phi_{Ce:A} &= e_1 - \ln k_A\left[q_A(0) + \int(- f_1)\df{t}\right]\\ \Phi_{Ce:B} &= e_2 - \ln k_B\left[q_B(0) + \int f_2\df{t}\right] \end{align} \end{minipage}\hfill \begin{minipage}{0.475\linewidth} \begin{align} \Phi_{Re}(\mathbf{e},\mathbf{f}) &= \left( \begin{matrix} \kappa\left[\e^{e_1} - \e^{e_2}\right] - f_1\\ f_1 +(- f_2) \end{matrix} \right) \end{align} \end{minipage} \vspace{11pt} Recall that $q(t) = q_0 + \int_0^t f(t)\df{t}$. So define \begin{align*} \dot{x}_A = \dot{q_1} &= -f_1 &\implies&& q_1 &= q_A(0) -\int_0^t f_1 \df{t} &\implies &&\e^{e_1} &= k_Aq_1, \\ \dot{x}_B = \dot{q_2} &= f_2 &\implies&& q_2 &= q_B(0) +\int_0^t f_2 \df{t} &\implies &&\e^{e_2} &= k_Bq_2. \end{align*} The second line of (3) gives $f_1 = f_2$, which implies $-\dot{q_1} = \dot{q_2}$, and the first gives the result \[ \dot{q_1} = - \kappa\left(k_Aq_1 - k_Bq_2\right) = k_-q_2 - k_+q_1 \quad \text{where}\quad k_+ = \kappa k_A,\ k_- =\kappa k_B. \] } \end{frame} \section{Stoichiometry} \begin{frame} \frametitle{Complexes} The reaction $A + B \rightleftharpoons C$ can be thought of as follows: \begin{enumerate} \item $A$ and $B$ collide forming a complex $AB$. \item $AB$ reacts to form $C$. \end{enumerate} \vfill Clearly, the flow of $A$ and the flow of $B$ into this reaction are equal. \vfill Hence, we \emph{should} be able to represent the $AB$ complex as an equal-flow junction, and allow the reaction to drive what that flow is. 
\end{frame} \begin{frame} \frametitle{Revisiting the '1' junction} \begin{figure} \includegraphics{images/nport-1} \end{figure} \only<1>{ In clinic 2 we introduced the '1' junction and argued that its constitutive relation is given by \[ \Phi_\text{1} = \left(\begin{matrix} f_1 -f_2\\ \ldots\\ f_{j-1} - f_j\\ e_1 + e_2 + \ldots + e_j \end{matrix}\right) = 0 \]} \only<2-3>{ Alternatively \[ f_i =f_j \qquad \forall i,j \] and \[ \sum_i e_i = 0 \]} \only<3>{This deviates from the literature and may cause problems\\ (thanks to Peter Gawthrop for pointing this out).} \end{frame} \begin{frame} \frametitle{Revisiting the '1' junction} \begin{figure} \includegraphics{images/nport-1a} \end{figure} \only<1>{ To fix this, we must instead think of the 1 junction as having two ends (call the sets $\iota$ and $\omega$) so that all flows at $\iota$ are equal, all flows at $\omega$ are equal, and any flow at $\omega$ is in the opposite direction to any other at $\iota$. \vfill We can then define: \[ \sigma_k = \begin{cases} 1 &\text{if}\quad k \in \iota,\\ -1 &\text{if}\quad k \in \omega. \end{cases} \]} \only<2>{ The redefined constitutive relation then becomes \[ \Phi_\text{1} = \left(\begin{matrix} \sigma_1f_1 -\sigma_2f_2\\ \ldots\\ \sigma_{j-1}f_{j-1} - \sigma_{j}f_j\\ \sum_{k=1}^j \sigma_ke_k \end{matrix}\right) = 0, \qquad \sigma_k = \begin{cases} 1 &\text{if}\quad k \in \iota,\\ -1 &\text{if}\quad k \in \omega. \end{cases} \]} \only<3>{ This is equivalent to the following bond graph \begin{figure} \includegraphics[scale=0.5]{images/nport-1b} \end{figure} \[ \text{Where} \qquad \Phi_\text{1*} = \left(\begin{matrix} f_1 -f_2\\ \ldots\\ f_{j-1} - f_j\\ e_1 + e_2 + \ldots + e_j \end{matrix}\right)\qquad \text{as before.} \]} \only<4>{ The relations can be simplified by associating the direction of a connecting bond with $\sigma$, i.e.\ a bond pointing out implies $\sigma = -1$. 
\[ \Phi_\text{1} = \left(\begin{matrix} \sigma_1f_1 -\sigma_2f_2\\ \ldots\\ \sigma_{j-1}f_{j-1} - \sigma_{j}f_j\\ \sum_{k=1}^j \sigma_ke_k \end{matrix}\right) = 0, \qquad \sigma_k = \begin{cases} 1 &\text{if connecting bond points in},\\ -1 &\text{if connecting bond points out}. \end{cases} \] This is standard practice, but this means that the $1$ component needs to know what it's connected to. } \end{frame} \subsection{Reaction Bond Graphs} \begin{frame} \frametitle{$A+B\rightleftharpoons C$} {\scriptsize \begin{figure} \includegraphics[scale=0.5]{images/bondgraph_abc_naive} \caption{Bond Graph of $A+B \rightleftharpoons C$} \end{figure} \begin{minipage}{0.475\textwidth} \begin{align} \Phi_{Ce:A} &= e_1 - \ln k_A(q_1),\\ \Phi_{Ce:B} &= e_2 - \ln k_B(q_2),\\ \Phi_{Ce:C} &= e_4 - \ln k_C(q_4) \end{align} \end{minipage} \begin{minipage}{0.475\textwidth} \begin{align}\Phi_{Re} &= \left( \begin{matrix} \kappa\left[\e^{e_3} - \e^{e_4}\right] - f_3\\ f_3 + (- f_4) \end{matrix} \right),\\ \Phi_{1} &= \left(\begin{matrix} e_1 + e_2 - e_3 \\ f_1 - f_3\\ f_2 - f_3\\ \end{matrix}\right) \end{align} \end{minipage} \vfill Clearly \[ e_3 = e_1 + e_2 \implies \exp(e_3) = \exp(\ln k_A q_1 + \ln k_Bq_2) = k_Ak_B q_1q_2. \] Since $f_4 = \dot{q_4}$, it follows from (12) that \[ \dot{q_4} = \kappa k_Ak_B q_1q_2 - \kappa k_C q_4 = k_+q_1q_2 - k_-q_4. \] From (12) and (13) we have $\dot{q_4} = f_4 =f_3= f_2 = f_1$.\\ Combining this with $f_1 = -\dot{q_1} $ and $f_2 = -\dot{q_2}$ completes the picture. 
} \end{frame} \begin{frame} \frametitle{Stoichiometric Coefficients} Transformers can be used to introduce stoichiometry (see \cite{Oster:1973aa} and \cite{Gawthrop:2014aa}) \begin{figure} \includegraphics[scale=0.5]{images/stoic_oster} \end{figure} Here $n_1$ and $n_2$ represent the stoichiometric coefficients, and the $\circ$ identifies the transformer primary winding.\\ Hence, the reaction forward affinity is given by \[ e_o = n_1\mu_1 + n_2 \mu_2 \] and the flow rates \[ \nu_1 = n_1f_0,\qquad \nu_2 = n_2f_0 \] are in terms of the reaction flow $f_0$ as expected. \end{frame} \begin{frame}{Complexes} \begin{center} \begin{figure} \includegraphics[scale=0.6]{images/stoic_oster_Y} \end{figure} can be replaced by \end{center} \begin{figure} \only<1>{ \includegraphics[scale=0.6]{images/stoic_Ya}} \only<2>{ \includegraphics[scale=0.6]{images/stoic_Ya}} \end{figure} \only<1>{Here $[c]$ identifies the port associated with the complex. The ports $[n_1]$ and $[n_2]$ capture the stoichiometric coefficients.} \only<2>{ Instead of defining a port $[c]$, we could require the `complex' port always be pointing `out', and the rest inwards. We would however still need some way to specify $n_1$ and $n_2$.} \end{frame} \begin{frame} \frametitle{Complexes II} \begin{minipage}{0.58\textwidth} \begin{figure} \includegraphics{images/stoic_Ya} \end{figure} \end{minipage} \begin{minipage}{0.38\textwidth} Constitutive Relation: \[ \Phi_{Y} = \left(\begin{matrix} e_r - \sum n_ie_i \\ f_r + \frac{f_1}{n_1}\\ \vdots \end{matrix}\right) \] \end{minipage} \vspace{20pt} $\text{Y}$ is power conserving. Multiplying the top line by $f_r$ gives \[ 0 = e_r f_r - f_r\sum e_i n_i =e_r f_r - \sum e_i (n_if_r) \] Since $n_if_r = -f_i$, the result follows from \[ 0 = e_r f_r - \sum e_i (n_if_r) = e_rf_r + \sum e_if_i. \] This is no surprise as $\text{Y}$ is built from power-conserving components. 
\end{frame} \begin{frame} \frametitle{$A+B\rightleftharpoons C$: final form!} \begin{figure} \includegraphics[scale=0.75]{images/bondgraph_abc} \caption{Bond Graph of $A+B \rightleftharpoons C$} \end{figure} In the above figure we have added standard 0 junctions. While this is redundant for the reaction above, it allows us to use the same species for many reactions. Also, note that $\text{Y:C}$ is the identity (\emph{prove it!}) and may be omitted. \end{frame} \section{Conclusion} \begin{frame} \frametitle{In Review} \begin{figure} \includegraphics[scale=0.75]{images/bondgraph_abc} \caption{Bond Graph of $A+B \rightleftharpoons C$} \end{figure} \only<1>{ The $\text{Ce}$ is a one-port component representing a store of a particular chemical species and has a constitutive relation that captures the usual definition of \emph{chemical potential energy}. } \only<2>{ The zero junctions allow species $A$, $B$ and $C$ to be shared across reactions other than the one modelled in this graph. } \only<3>{ The $\text{Y}$ component is a power-conserving flow junction, which captures the forward (in the case of $\text{Y:AB}$) and reverse stoichiometry of this reaction. } \only<4>{ The $\text{Re}$ component models how the complex $A+B$ transmutes into $C$. The reaction proceeds according to the \emph{Marcelin-de Donder} formula. 
} \end{frame} \begin{frame} \frametitle{Try for yourself} Try drawing bond graphs of these common reactions: \begin{itemize} \item $A+2B \rightleftharpoons 2D+C$ \item $A+B \rightleftharpoons B+C$ \item $E + S \rightleftharpoons ES \rightleftharpoons E+P$ \end{itemize} \vfill {\scriptsize \begin{minipage}{0.475\textwidth} \begin{align*} \Phi_{Ce}(e,f) &= e - \ln k(q),\\ \Phi_{Re}(e_f,f_f,e_r,f_r) &= \left( \begin{matrix} \kappa\left[\e^{e_f} - \e^{e_r}\right] - f_f\\ f_f + f_r \end{matrix} \right)\\ \Phi_{Y}(e_0,f_0, e_1,f_1,\ldots) &= \left(\begin{matrix} e_0 - \sum n_ie_i \\ f_0 + f_i/n_i\\ \vdots \end{matrix}\right) \end{align*} \end{minipage} \begin{minipage}{0.475\textwidth} \raggedright \begin{figure} \includegraphics[width=0.75\linewidth]{images/bondgraph_abc} \end{figure} \end{minipage}} \end{frame} \begin{frame} \frametitle{Points For Future Discussion} \begin{itemize} \item Bond graphic PDEs. \item Component/Port centric vs bond centric modelling. \item Notation conventions for wider adoption. \item Parameters as random variables. \item Stochastic bond graphs: Johnson–Nyquist noise and probabilistic reactions. \item Bond graph software. \item Applications in cardiac physiology, bionics, synthetic biology (particularly metabolic engineering) and neuroscience. \end{itemize} Please suggest more! \end{frame} \begin{frame} \frametitle{References} \printbibliography \end{frame} \end{document}
\documentclass[11pt]{article} % preamble \newcommand{\bu}{\vdash_\mathrm{BU}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cC}{\mathcal{C}} \usepackage{bussproofs} \usepackage{geometry} \geometry{ letterpaper, lmargin=1in, rmargin=1in, tmargin=1in, bmargin=1in } \usepackage{amssymb} \let\oldemptyset\emptyset \let\emptyset\varnothing \begin{document} % top matter \title{Bottom-up Type Inference for Realistic Functional Languages\\ \Large{WORKING DRAFT}} \author{Kei Davis and David Ringo} %\date{January 2015} \maketitle % \renewcommand{\abstractname}{Executive Summary} \begin{abstract} The great majority of papers on Hindley-Milner type inference provide rules for only the simplest of type systems, for example lacking sum or product types, and the simplest of language constructs, for example lacking mutually recursive definitions, sum and product construction and deconstruction, and more generally arbitrary algebraic data types. Type inference for more realistic languages is left as a ``straightforward'' generalization or extension. Anecdotal evidence suggests that practitioners in fact find generalization or extension less than straightforward. Indeed, prior to Heeren et al.'s landmark work~\cite{HHS02}, which post-dates Jones' notable exception to the treatment of simplistic languages and type systems~\cite{Jones2000}, obtaining a sound ordering of the various steps (specialization, generalization, etc.) was arguably a black art. Using the \emph{bottom-up} approach of Heeren et al.\ we give type inference rules for two languages: the APPFL compiler \cite{us} intermediate language modeled after GHC's STG intermediate language~\cite{PJ??}, and a simple, generic, higher-order polymorphic functional language designed to facilitate the explication and implementation of a family of demand analysis techniques. 
Both use a common sub-language for defining algebraic data types, including user-defined \emph{unboxed types}, for the latter of which typing rules are given as suggested by Peyton Jones and Launchbury~\cite{PJL??}. For comparison and as a starting point we first present Heeren et al.'s language and inference rules. \end{abstract} % main body \section{Introduction} We follow the approach of Heeren et al.~\cite{HHS02}. \section{Language---Heeren, Hage, Swierstra} Variable, application, lambda abstraction, non-recursive let. \section{Language---Datatype Declarations} The language of datatypes is shared among all languages and is given in Figure~\ref{fig:typedecsyntax}. \setlength{\tabcolsep}{5pt} \begin{figure}[t] \centering \footnotesize % tiny scriptsize footnotesize small \begin{tabular}{r r c l l} Type constr.\ defn. & $\mathit{tdef}$ & ::= & \texttt{data} [\texttt{unboxed}]\ $T\ \beta_1 \dots \beta_m\ \mbox{\texttt{=}} $ & $m \ge 0,\ n \ge 1$\\ & & & \quad $C_1\ \tau_{1,1} \dots \tau_{1,a_1}\ \mbox{\texttt{|}}\ \dots\ \mbox{\texttt{|}}\ C_n\ \tau_{n,1} \dots \tau_{n,a_n} $ \\ \\ Type & $\tau$ & ::= & \emph{Primtype} & Primitive type\\ & & $|$ & $\beta$ & Type variable \\ & & $|$ & $\tau$ \texttt{->} $\tau$ & Function type \\ & & $|$ & $T\ \tau\ \dots\ \tau$ & Type constructor\\ \end{tabular} \caption{Type Declaration Syntax} \label{fig:typedecsyntax} \end{figure} \section{Language---STG} Our STG language is modeled after Peyton Jones and Marlow's variant of STG as described in the ``fast curry'' paper~\cite{PJM??}.
\setlength{\tabcolsep}{5pt} \begin{figure} \centering \footnotesize % tiny scriptsize footnotesize small \begin{tabular}{r r c l l} Variable & $f,\ x$ & & & Initial lower-case letter \\ \\ Constructor & $C$ & & & Initial upper-case letter \\ \\ Atom & $a$ & ::= & $i\ |\ x$ & Variable or integer literal\\ \\ Expression & $e$ & ::= & $a$ & Atom \\ & & $|$ & $f\ a_1\dots a_n$ & Application, $n\ge 1$ \\ & & $|$ & $\oplus\ a_1\dots a_n$ & Saturated primitive operation, $n\ge 1$ \\ & & $|$ & \texttt{let} & Recursive let, $n\ge 1$ \\ & & & \texttt{ } $\mathit{odecl}_1$ \\ & & & \hspace{0.2in} $\dots$ \\ & & & \texttt{ } $\mathit{odecl}_n$ \\ % & & & \texttt{ } $x_1$ \texttt{=} $\mathit{obj}_1$ \\ % & & & \hspace{0.2in} $\dots$ \\ % & & & \texttt{ } $x_n$ \texttt{=} $\mathit{obj}_n$ \\ & & & \texttt{in} $e$ \\ % & & $|$ & \texttt{case} $e$ \texttt{of} $\mathit{alts}$ & Case expression (as implemented) \\ % & & $|$ & \texttt{case} $e$ \texttt{as} $x$ \texttt{of} $\mathit{alts}$ & Case expression (proposed) \\ \\ Alternatives & $\mathit{alts}$ & ::= & \texttt{ } $\mathit{alt}_1$ & Case alternatives, $n \ge 1$\\ & & & \hspace{0.2in} $\dots$ \\ & & & \texttt{ } $\mathit{alt}_n$ \\ \\ Alternative & $\mathit{alt}$ & ::= & $C\ x_1\dots x_n$ \texttt{->} $e$ & Pattern match, $n \ge 0$ \\ & & $|$ & $i$ \texttt{->} $e$ & Integer literal \\ & & $|$ & $x$ \texttt{->} $e$ & Default (as implemented)\\ & & $|$ & \texttt{\char`_\ ->} $e$ & Default (proposed)\\ \\ Object & $\mathit{obj}$ & ::= &\texttt{FUN} $f\ x_1\dots x_n$ \texttt{->} $e$ & Function definition, arity $=n\ge 1$ \\ & & $|$ &\texttt{CON} $C\ a_1\dots a_n$ & Saturated constructor, $n \ge 0$ \\ & & $|$ &\texttt{THUNK} $e$ & Thunk---explicit deferred evaluation \\ % & & $|$ &\texttt{PAP} $f\ a_1\dots a_n$ & Partial application \\ % & & $|$ & $\mathit{BLACKHOLE}$ & Evaluation-time black hole \\ \\ Object decl. & $\mathit{odecl}$ & ::= & $x = \mathit{obj}$ & Simple binding \\ \\ %%%Constructor defn. 
& $\mathit{con}$ & ::= & $C\ \mathit{type}_i$ & $i \ge 0$ \\ %%%\\ %%%Datatype defn. & $\mathit{ddecl}$ & ::= & \texttt{data} [\texttt{unboxed}] & User-defined data type \\ %%% & & & $C\ x_i =$ & $i \ge 0, n > 0$ \\ %%% & & & $\mathit{con}_1 | \dots | \mathit{con}_n$ \\ %%%\\ %%%Program & $\mathit{prog}$ & ::= & $\mathit{(o|d)decl}_1$ \texttt{;} & Object and data defns, \\ %%% & & & \texttt{ } $\dots$ \texttt{;} & distinguished \texttt{main}\\ %%% & & & $\mathit{(o|d)decl}_n$ & Program & $\mathit{prog}$ & ::= & $\mathit{odecl}_1$ \texttt{;} & Object and data defns, \\ & & & \texttt{ } $\dots$ \texttt{;} & distinguished \texttt{main}\\ & & & $\mathit{odecl}_n$ & %Program & $\mathit{prog}$& ::= & $f_1\ =\ \mathit{obj}_1$ \texttt{;} & $n \ge 1$, distinguished \texttt{main}\\ % & & & \texttt{ } $\dots$ \texttt{;} \\ % & & & $f_n\ =\ \mathit{obj}_n$ \end{tabular} \caption{STG syntax} \label{fig:STGsyntax} \end{figure} \clearpage \section{Type inference rules} Judgements are of the form $\cA,\ \cC\ \vdash_{\cdot}\ e:\tau$, where $\cdot$ may be empty (the default), or to facilitate the factoring of the rule for \texttt{case} one of $x$ (passing the name of the scrutinee-bound variable) or $\tau$ (passing the type of the scrutinee). Subscript index ranges are implicitly universally quantified over the relevant range except where needed to avoid ambiguity. Starting with the base cases, each leaf (Expression/Atom) variable is associated with a fresh type variable, and each literal is associated with its (unambiguous) type, e.g., $i:\mathit{Int}$ or $i:\mathit{Int\#}$. Primitive operators are assumed to be monomorphic. For recursive \texttt{let} it is convenient to separate the \texttt{let} and \texttt{in} clauses. TODO: describe additions to the Heeren et al.\ scheme for unboxed data types.
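The bottom-up flavor of the base cases can be made concrete. A minimal, illustrative Python sketch (not the APPFL implementation) of [Variable/Atom] and [Expression/Application]: each variable occurrence contributes an assumption and a fresh type variable, and an application assumes an arrow type for the applied function ending in a fresh result variable. Types are plain strings here; names `t0, t1, ...` play the role of fresh type variables.

```python
import itertools

# Generator of fresh type-variable names, shared across rules.
_fresh = (f"t{n}" for n in itertools.count())

def infer(expr):
    """expr is ('var', x) or ('app', f, [atom, ...]).  Returns
    (assumptions, constraints, type), where assumptions map a name to
    the list of types assumed for its occurrences (a variable may
    occur several times before being bound)."""
    if expr[0] == 'var':
        # [Variable/Atom]: one assumption, no constraints, fresh beta.
        beta = next(_fresh)
        return {expr[1]: [beta]}, [], beta
    if expr[0] == 'app':
        # [Expression/Application]: gather atom results, then assume
        # f : tau_1 -> ... -> tau_n -> beta with beta fresh.
        f, atoms = expr[1], expr[2]
        assums, constraints, arg_types = {}, [], []
        for a in atoms:
            a_assums, a_cons, t = infer(a)
            for name, ts in a_assums.items():
                assums.setdefault(name, []).extend(ts)
            constraints += a_cons
            arg_types.append(t)
        beta = next(_fresh)
        assums.setdefault(f, []).append(" -> ".join(arg_types + [beta]))
        return assums, constraints, beta
    raise ValueError(expr[0])
```

Note that, as in the rules, no constraints are generated at the leaves; all equalities are introduced later by binding constructs such as \texttt{let} and \texttt{FUN}.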
\begin{figure} \small % Variable/Atom % \begin{prooftree} \AxiomC{} \LeftLabel{[Variable/Atom]} \UnaryInfC{$\{x:\beta\},\ \emptyset\ \vdash\ x:\beta\ (\beta\ \mathrm{fresh})$} \end{prooftree} % % Literal/Atom % \begin{prooftree} \AxiomC{} \LeftLabel{[Literal/Atom]\quad} \UnaryInfC{$\emptyset,\ \emptyset\ \vdash\ i:\mathrm{Int}$} \end{prooftree} % % Expression/Application % \begin{prooftree} \AxiomC{$\cA_i,\ \cC_i\ (=\emptyset)\ \vdash\ a_i:\tau_i$} \LeftLabel{[Expression/Application]\quad} \UnaryInfC{$\bigcup \cA_i \cup \{f:\tau_1 \rightarrow \ldots \rightarrow \tau_n \rightarrow \beta \},\ \emptyset \ \vdash f\ a_1 \ldots a_n : \beta\ (\beta\ \mathrm{fresh})$ } \end{prooftree} % % Primitive Operation % \begin{prooftree} \AxiomC{$\cA_i,\ \cC_i\ \vdash\ a_i:\tau_i$} \AxiomC{$\oplus : \tau_1' \rightarrow \ldots \rightarrow \tau_n' \rightarrow \tau'$} \LeftLabel{[Primitive Operation]\quad} \BinaryInfC{$\bigcup \cA_i,\ \{ \tau_i \equiv \tau_i' \}\ \vdash\ \oplus\ a_1 \ldots a_n : \tau'$} \end{prooftree} % % let % \begin{prooftree} \AxiomC{$\cA_i,\ \cC_i\ \vdash\ e_i:\tau_i,\ 1\le i \le n$} \LeftLabel{[\texttt{let}]\quad} \UnaryInfC{$(\bigcup \cA_i)\backslash \{x_i\},\ \bigcup \cC_i \cup \{\tau' \equiv \tau_i\ |\ x_i : \tau' \in \cA_j,\ 1 \le i \le n,\ 1 \le j \le n \}\ \vdash\ \mbox{\texttt{let}}\ x_i = e_i : \tau_i$} \end{prooftree} % % in % \begin{prooftree} \AxiomC{$\cA,\ \cC\ \vdash\ \mbox{\texttt{let}}\ x_i = e_i : \tau_i$} \AxiomC{$\cA_0,\ \cC_0\ \vdash\ e_0:\tau_0$} \LeftLabel{[\texttt{in}]\quad} \BinaryInfC{$\cA \cup (\cA_0 \backslash \{x_i\}),\ \cC \cup \cC_0 \cup \{ \tau' \le_{M} \tau_i\ |\ x_i:\tau' \in \cA_0 \} \ \vdash\ \mbox{\texttt{let}}\ x_i = e_i\ \mbox{\texttt{in}}\ e_0:\tau_0$} \end{prooftree} % % case % \begin{prooftree} \AxiomC{$\cA_0,\ \cC_0\ \vdash\ e_0 : \tau_0$} \AxiomC{$\cA_\mathit{alts},\ \cC_\mathit{alts}\ \vdash_{\tau_0}\ \mathit{alts} : \tau_\mathit{alts}$} \LeftLabel{[\texttt{case}]\quad} \BinaryInfC{$\cA_0 \cup \cA_\mathit{alts},\ \cC_0 \cup
\cC_\mathit{alts} \vdash\ \mbox{\texttt{case}}\ e_0\ \mbox{\texttt{of}}\ \mathit{alts}:\tau_\mathit{alts}$} \end{prooftree} % % alts % \begin{prooftree} \AxiomC{$\cA_i,\ \cC_i\ \vdash_{\tau_0}\ \mathit{alt}_i : \tau_i$} \LeftLabel{[\texttt{alts}]\quad} \UnaryInfC{$\bigcup \cA_i,\ \bigcup \cC_i \cup \{ \tau_1 \equiv \tau_i\ |\ 2 \le i \le n \}\ \vdash_{\tau_0}\ \mathit{alts} : \tau_1 $} \end{prooftree} % % alt/constructor % % % alt/integer literal % % % alt/default var % % % alt/default anon % % % FUN % \begin{prooftree} \AxiomC{$\cA,\ \cC\ \vdash\ e : \tau$} \LeftLabel{[FUN]\quad} \UnaryInfC{$\cA \backslash \{x_i\} \cup \{f : \beta_1 \rightarrow \dots \rightarrow \beta_n \rightarrow \tau\} \ (\beta_i\ \mathrm{fresh}),$} \noLine \UnaryInfC{$\cC \cup \{\tau' \equiv \beta_i\ |\ x_i : \tau' \in \cA,\ 1\le i \le n\}$} \noLine \UnaryInfC{$\vdash\ \mbox{\texttt{FUN}}\ f\ x_1 \dots x_n\ \mbox{\texttt{->}}\ e : \beta_1 \rightarrow \dots \rightarrow \beta_n \rightarrow \tau$} \end{prooftree} % % CON % \begin{prooftree} \AxiomC{$\cA_i,\ \cC_i\ (=\emptyset)\ \vdash\ a_i : \tau_i$} \AxiomC{$T\ \beta_1 \dots \beta_j \rightarrow C\ m_1 \dots m_n\ (\beta_i\ \mbox{fresh})$} \LeftLabel{[CON]\quad} \BinaryInfC{$\bigcup \cA_i,\ \{\tau_i \equiv m_i\}\ \vdash\ \mbox{\texttt{CON}}\ C\ a_1 \dots a_n : T\ \beta_1 \dots \beta_j$} \end{prooftree} % % THUNK % \begin{prooftree} \AxiomC{$\cA,\ \cC\ \vdash\ e : \tau$} \LeftLabel{[THUNK]\quad} \UnaryInfC{$\cA,\ \cC\ \vdash\ \mbox{\texttt{THUNK}}\ e : \tau$} \end{prooftree} \caption{STG Bottom-up Type Inference Rules (as implemented)} \label{fig:BUSTG} \end{figure} \begin{figure} \small % % case % \begin{prooftree} \AxiomC{$\cA_0,\ \cC_0\ \vdash\ e_0:\tau_0$} \AxiomC{$\cA_\mathit{alts},\ \cC_\mathit{alts}\ \vdash_{x}\ \mathit{alts} : \tau_\mathit{alts}$} \LeftLabel{[\texttt{case}]\quad} \BinaryInfC{$\cA_0 \cup (\cA_\mathit{alts}\backslash\{x\}),\ \cC_0 \cup \cC_\mathit{alts} \cup \{\tau_0 \equiv \tau'\ |\ x:\tau' \in \cA_\mathit{alts}
\}\ \vdash\ \mbox{\texttt{case}}\ e_0\ \mbox{\texttt{as}}\ x\ \mbox{\texttt{of}}\ \mathit{alts}:\tau_\mathit{alts}$} \end{prooftree} % % alts % \begin{prooftree} \AxiomC{$\cA_i,\ \cC_i\ \vdash_{x}\ \mathit{alt}_i : \tau_i$} \LeftLabel{[\texttt{alts}]\quad} \UnaryInfC{$\bigcup \cA_i,\ \bigcup \cC_i \cup \{ \tau_1 \equiv \tau_i\ |\ 2 \le i \le n \}\ \vdash_{x}\ \mathit{alts} : \tau_1 $} \end{prooftree} % % alt/constructor % \begin{prooftree} \AxiomC{$\cA,\ \cC\ \vdash\ e : \tau$} \AxiomC{$T\ \beta_1 \dots \beta_j \rightarrow C\ \tau_1 \dots \tau_n\ (\beta_i\ \mbox{fresh})$} \LeftLabel{[\texttt{alt constructor}]\quad} \BinaryInfC{$\cA\backslash\{x_i\} \cup \{ x : T\ \beta_1 \dots \beta_j \},\ \cC \cup \{\tau_i \equiv \tau'\ |\ x_i : \tau' \in \cA \}\ \vdash_{x}\ C\ x_1 \dots x_n\ \mbox{\texttt{->}}\ e : \tau$} \end{prooftree} % % alt/integer literal % \begin{prooftree} \AxiomC{$\cA,\ \cC\ \vdash\ e : \tau$} \LeftLabel{[\texttt{alt literal int}]\quad} \UnaryInfC{$\cA \cup \{x : \mathit{Int}\},\ \cC\ \vdash_{x}\ i\ \mbox{\texttt{->}}\ e : \tau$} \end{prooftree} % % alt/default var % % % alt/default anon % \begin{prooftree} \AxiomC{$\cA,\ \cC\ \vdash\ e : \tau$} \LeftLabel{[\texttt{alt anon}]\quad} \UnaryInfC{$\cA,\ \cC\ \vdash_{x}\ \mbox{\texttt{\char`_\ ->}}\ e : \tau$} \end{prooftree} % \caption{STG Proposed Inference Rules for \texttt{case}} \label{fig:proposed} \end{figure} \section{Language---Realistic Higher-order, Polymorphic, Pure Functional} David's stuff goes here. \end{document}
\section{Theoretical Justifications of AMP} \begin{frame}{AMP Finds Flatter Local Minima} \begin{columns} \column{0.68\textwidth} \begin{block}{Locally Gaussian Assumption of Empirical Risk} \vspace{-0.5em} \begin{equation*} \mathcal{L}_\mathrm{ERM}\approx\gamma(\boldsymbol{\theta};\boldsymbol{\mu},\boldsymbol{\kappa},A,C) \end{equation*} \vspace{-1.5em} \textit{where $\gamma(\boldsymbol{\theta};\boldsymbol{\mu},\boldsymbol{\kappa},A,C)$ is minimized when $\boldsymbol{\theta}=\boldsymbol{\mu}$ and the minimum value is $\gamma^\ast(\boldsymbol{\mu},\boldsymbol{\kappa},A,C)=C-A$.} \end{block} \begin{theorem}[stated informally] The minimum value of the AMP loss is \vspace{-0.5em} \begin{equation*} \gamma_\mathrm{AMP}^\ast(\boldsymbol{\mu},\boldsymbol{\kappa},A,C)=C-A\exp\left(-\frac{\epsilon^2}{2\sigma^2}\right) \end{equation*} \vspace{-1.5em} where $\sigma^2$ is the smallest eigenvalue of $\boldsymbol{\kappa}$. \end{theorem} \column{0.32\textwidth} \begin{figure} \includegraphics[width=.7\textwidth]{figs/surface.png} \end{figure} \vspace{-0.5em} \begin{figure} \includegraphics[width=.7\textwidth]{figs/gaussian.pdf} \caption{The minimum values of $\gamma$ and $\gamma_\mathrm{AMP}$.} \end{figure} \end{columns} \vspace{1em} \end{frame} \begin{frame}{AMP Regularizes Gradient Norm} \begin{theorem}[stated informally] Let $N=1$. 
The AMP training is equivalent to ERM training with an additional term: \begin{equation*} \widetilde{\mathcal{J}}_\mathrm{ERM}(\boldsymbol{\theta}):=\mathcal{J}_\mathrm{ERM}(\boldsymbol{\theta})+\Omega(\boldsymbol{\theta}) \end{equation*} where \begin{equation*} \Omega(\boldsymbol{\theta}):=\begin{cases} \zeta\Vert\nabla_{\boldsymbol{\theta}}\mathcal{J}_\mathrm{ERM}(\boldsymbol{\theta})\Vert_2^2,&\Vert\zeta\nabla_{\boldsymbol{\theta}}\mathcal{J}_\mathrm{ERM}(\boldsymbol{\theta})\Vert_2\le\epsilon\\ \epsilon\Vert\nabla_{\boldsymbol{\theta}}\mathcal{J}_\mathrm{ERM}(\boldsymbol{\theta})\Vert_2,&\Vert\zeta\nabla_{\boldsymbol{\theta}}\mathcal{J}_\mathrm{ERM}(\boldsymbol{\theta})\Vert_2>\epsilon \end{cases} \end{equation*} \end{theorem} % Thus, the AMP training algorithm effectively tries to find the local minima of empirical risk that not only have low values, but also have small gradient norm near the minima. Note that a minimum with smaller gradient norms around it is a flatter minimum. \end{frame}
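The piecewise regularizer $\Omega$ above is a direct function of the gradient norm. A minimal sketch, assuming the norm $\Vert\nabla_{\boldsymbol{\theta}}\mathcal{J}_\mathrm{ERM}\Vert_2$ has already been computed (illustrative only, not the AMP training code):

```python
def amp_penalty(grad_norm, zeta, epsilon):
    """Omega from the theorem (N = 1 case): quadratic in the gradient
    norm while the perturbation step zeta * ||g|| stays inside the
    epsilon-ball, linear once the step is clipped to the ball's
    surface."""
    if zeta * grad_norm <= epsilon:
        return zeta * grad_norm ** 2
    return epsilon * grad_norm
```

The two branches agree at the boundary $\zeta\Vert g\Vert_2=\epsilon$ (both equal $\epsilon\Vert g\Vert_2$), so the penalty is continuous in the gradient norm.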
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage{graphicx} \usepackage{titlepic} \usepackage{caption} \usepackage{subcaption} % \documentclass{beamer} \newcommand{\namesigdate}[2][5cm]{% \begin{tabular}{@{}p{#1}@{}} #2 \\[0.4\normalbaselineskip] \hrule \\[0pt] {\small } \\[2\normalbaselineskip] \end{tabular} } \title{\vspace*{\fill} \textbf{Video Description using Deep Learning} \\ {\large \textbf{Summer Undergraduate Research Award}} \\ \vspace{3mm} \includegraphics[width=5cm]{logo.png}} \author{ \textbf{Suyash Agrawal}\\ 2015CS10262\\ Computer Science\\ CGPA: 9.91 \\ Mob: 9717060183\\ cs1150262@iitd.ac.in \and \textbf{Madhur Singhal}\\ 2015CS10235\\ Computer Science\\ CGPA: 8.66\\ Mob: 9540972599\\ cs1150235@iitd.ac.in } \date{\textbf{Supervisor:-} \\ \textbf{Subhashis Banerjee} \\ Professor \\ Department of CSE \\ suban@cse.iitd.ac.in\\ IIT Delhi\\ \vspace*{\fill}} \begin{document} % \includegraphics{logo.png} \maketitle % \noindent \namesigdate{} \hfill \namesigdate[3cm]{Saroj Kaushik \\ HOD CSE } % \begin{flushleft} % \noindent \namesigdate{} % \end{flushleft} \begin{center} \noindent\rule{3.2cm}{0.4pt} \end{center} \begin{flushright} \noindent\rule{3.2cm}{0.4pt} \\ \textbf{Prof. S. Arun Kumar} \\ Head of Department \\ Department of CSE \\ sak@cse.iitd.ernet.in \end{flushright} \newpage % \begin{figure} % \end{figure} \section{Introduction} \textit{\textbf{Video Description}} is the process of discovering knowledge, structures, patterns and events of interest in video data and describing them in natural language. Video Description is an incredibly hard problem in computer vision, and currently the only source of video description is manual labour. \newline Video Description has a wide variety of applications. It can help visually impaired people ``see'' the world by describing the scene around them. It is also of use in automated surveillance, analysing videos in real time and reporting criminal activities.
Also, it can be used to efficiently index large video databases based upon their content for ease of accessibility. \begin{figure}[ht!] \centering \includegraphics[width=12.5cm]{description.png} \caption{Sample Video Description\label{fig}} \end{figure} % \begin{figure}[ht!] % \centering % \begin{subfigure}{.5\textwidth} % \centering % \includegraphics[width=1.0\linewidth]{sparse_chariot.png} % \caption{Sparse reconstruction} % \label{fig:sub1} % \end{subfigure}% % \begin{subfigure}{.5\textwidth} % \centering % \includegraphics[width=1.0\linewidth]{dense_chariot.png} % \caption{Dense reconstruction} % \label{fig:sub2} % \end{subfigure} % \caption{3D reconstruction} % \label{figstart} % \end{figure} Figure~\ref{fig} shows a possible description of a sample video. The \textit{traditional pipeline} is shown in Figure~\ref{fig3}. \begin{figure}[ht!] \centering \includegraphics[width=14cm]{traditional_pipeline2.png} \caption{Traditional pipeline\label{fig3}} \end{figure} The red-highlighted part of the pipeline is \textit{\textbf{computationally expensive}}. Thus, our project aims to reduce this computation and perform 3D reconstruction in near real time. The processing parts are: \begin{itemize} \item \textit{Intrinsic and extrinsic parameters}: The camera projection matrix is a $3 \times 4$ matrix which represents the pinhole geometry of a camera for mapping 3D points in world coordinates to 2D points on images. This matrix depends on extrinsic and intrinsic parameters. The intrinsic parameters mainly comprise the focal length, image sensor format, and principal point. The extrinsic parameters define the position of the camera center and the camera's heading in world coordinates in terms of a rigid rotation and translation.
\item \textit{Stereo correspondence generation}: Given two or more images of the same 3D scene, taken from different points of view, the correspondence problem refers to the task of finding a set of points in one image which can be identified as the same points in another image. To do this, points or features in one image are matched with the corresponding points or features in another image. The images can be taken from a different point of view, at different times, or with objects in the scene in general motion relative to the camera(s). \item \textit{Triangulation}: Triangulation refers to the process of determining a point in 3D space given its projections onto two or more images and their corresponding camera projection matrices. This point is found as the intersection of the two or more projection rays formed from the inverse projection of the 2D image points representing that 3D point in space. \item \textit{Initial point cloud and 3D sparse reconstruction}: As the name suggests, \textit{3D sparse reconstruction} is done for only a subset of data points in the given coordinate system, called the \textit{initial point cloud}. Figure~\ref{fig:sub1} illustrates a 3D sparse reconstruction of a chariot. Figure~\ref{fig:sub2} illustrates a 3D dense reconstruction of the same initial point cloud. \end{itemize} \section{Objectives} Our main objective is to perform 3D reconstruction in near real time using a mobile device. This can further be subdivided into the following points: \begin{enumerate} \item To get accurate position and orientation estimates based on readings of IMU sensors in smart-phones. \item To use the camera feed in smart-phones to enhance the position estimate based on visual tracking of objects. \item To do sparse 3D reconstruction based on sensor fusion data and computer vision techniques. \item To enhance the quality and efficiency of 3D reconstruction by adding more details and moving towards dense 3D reconstruction.
\item We will ultimately be fusing digital signal processing and computer vision based techniques that will enable us to perform near real time 3D reconstructions on mobile or hand-held devices. \end{enumerate} \section{Basic Concepts} %\begin{itemize} \subsection{Convolutional Neural Networks} \subsection{Long Short Term Memory Networks} \subsection{Training Data} \subsection{Finetuning} \subsection{Camera calibration} The camera parameters can further be subdivided into intrinsic and extrinsic parameters. The \textbf{camera intrinsic parameter} $K$ depends on the focal length of the camera and the principal point (which in most cases is the center of the image). The \textbf{camera extrinsic parameter} is composed of the rotation $R$ and translation $t$ between the camera coordinate system and the world coordinate system. Together they form the camera projection matrix $P$, a $3 \times 4$ matrix which describes the mapping of a pinhole camera from 3D points in the world to 2D points in an image. \begin{equation} P = K[R|t] \end{equation} \subsection{Sparse 3D reconstruction} Given two different images of the same scene from different angles, the position of a 3D point can be found as the intersection of the two projection rays, which is commonly referred to as \textbf{triangulation}. For this, point correspondences first have to be established. Then, using these point correspondences, a Random Sample Consensus (RANSAC) based voting framework is used to estimate the camera intrinsic and extrinsic parameters. Finally, a joint non-linear optimization is used to further refine the camera parameters and the 3D points in a \textbf{bundle adjustment} framework. This method is computationally very expensive and hence done only for a very sparse set of points. This is known as sparse 3D reconstruction. \section{Mobile IMU sensors} IMU (Inertial Measurement Unit) sensors are on-chip devices embedded in most smart phones and hand-held devices today.
They mainly consist of a series of motion sensors: accelerometer, gyroscope, magnetometer and gravitation sensor. The data from these sensors can be fused to obtain the orientation and the position of the device in the world coordinate system. \section{Conventional versus mobile 3D reconstruction} In the case of a smart-phone or any hand-held device having a camera and IMU sensors, we wish to use the IMU sensors to obtain extrinsic camera parameters in real time. This will help in reducing the load on conventional 3D reconstruction methods and bring it to near real time. %\end{itemize} \section{Approach to the project} First, we shall be using sensor fusion to obtain accurate estimates for camera position and orientation of the mobile device. Then we will move on to 3D reconstruction, which further has two parts: sparse 3D reconstruction, and then tracking to obtain dense correspondence of points for dense 3D reconstruction. \begin{enumerate} \item Position and orientation estimation \begin{enumerate} \item Get accelerometer data and orientation data in real time using the IMU sensors like the accelerometer, gyroscope, gravity sensor and magnetometer present on the smart phone. This data is highly noisy. Figure~\ref{fig1} shows the position estimate from accelerometer data across various devices. \begin{figure}[ht!] \centering \includegraphics[width=10cm]{graph.jpg} \caption{Accuracy of accelerometer data across different devices (scale cm)\label{fig1}} \end{figure} As evident from the graph, this data cannot be directly used for calculation of position and orientation. Figure~\ref{fig2} shows the integration of static accelerometer data to obtain velocity and displacement. \begin{figure}[ht!] \centering \includegraphics[width=10cm]{integration.jpg} \caption{Obtaining velocity and displacement from static accelerometer data\label{fig2}} \end{figure} The graph of both velocity and displacement shows significant deviation from the actual value, which is zero.
Thus, signal processing and smoothing are required to get a better estimate. \item Making the orientation data more accurate by infusing the higher frequency components from the gyroscope orientation after drift correction. \item Obtaining the displacement and orientation data from the camera feed on the device using visual tracking methods. \item A comparative study is to be done between the position estimates obtained by the two methods and ground truth, and the results fused to obtain an enhanced position and orientation estimate. \end{enumerate} % • All these data along with the clicking of pictures are synchronised to a single system clock. % \item We will use the IMU sensors present in smart-phones to get a position estimation. \item 3D reconstruction \begin{enumerate} \item Obtain sparse 3D reconstruction based on camera rotation and position parameters obtained previously. \item Use tracking data from different tracking methods like ``Good features to track'' or the ``KLT tracker'' for obtaining dense correspondence of points. \item Use guided matching by indirect computation of the fundamental matrix from estimated camera motion from sensors to further enrich the correspondences. \item Triangulate the dense correspondences and do a final global refinement. \end{enumerate} \item Further possibilities \begin{itemize} \item Getting a more detailed texture mapping of the object. \item Building object recognition software on the basis of this 3D reconstruction. \item Improving the algorithm for a quicker and more efficient 3D reconstruction. \item Releasing applications for Apple, Android and Windows platforms for near real time 3D reconstruction on the device itself. \end{itemize} \end{enumerate} \section{Uses and applications} % \item Uses and Applications \begin{itemize} \item Using the device as an accurate measuring device. This can be of particular interest to the blind, as they will be able to measure distances and angles accurately with great ease.
\item Doing real time dense 3D reconstructions on mobile phones and other hand-held devices. \item Allowing the user to generate a 3D printable file on his mobile device. As 3D printers are becoming cheaper and more common, this feature will reduce the need to use a 3D scanner to generate prototypes of objects. This will allow engineers and students to work more efficiently as they can generate copies of 3D objects easily. \item This project can have applications in the field of archeology. It can be used to generate replicas of artifacts and fragile objects for further study, without harming their integrity. \item Our approach can also be applied in the field of medical sciences, especially for orthopedics and joint replacement surgery. The part to be replaced can be made with high accuracy using this project. \item The method can also be used by astronauts in space. With the help of our approach, the parts to be changed can be easily made using a 3D printer. \item Localization at tourist sites and providing real time directions to landmark locations. This will involve the use of GPS (Global Positioning System) as well, to get a rough location of the user. \end{itemize} \newpage \section{Budget, duration and facilities} \subsection{Budget} Rs. 25,000 will be needed to purchase an android smart phone having high quality sensors and a high resolution camera. \subsection{Duration} We will try to complete this project by the end of the summer break, i.e. the end of July, 2015. \subsection{Facilities} \begin{itemize} \item Access to the vision lab. % \item Access to a computer in Vision Lab. \end{itemize} \end{document}
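The composition $P = K[R|t]$ and the projection of a homogeneous 3D point described in the proposal above can be sketched in pure Python. The intrinsic values below are hypothetical, chosen only to illustrate the mapping:

```python
def projection_matrix(K, R, t):
    """P = K [R | t]: compose the 3x4 projection matrix from the 3x3
    intrinsics K, 3x3 rotation R and translation t (plain lists)."""
    Rt = [R[i] + [t[i]] for i in range(3)]          # [R | t], 3x4
    return [[sum(K[i][k] * Rt[k][j] for k in range(3)) for j in range(4)]
            for i in range(3)]

def project(P, X):
    """Map a homogeneous 3D point X = (x, y, z, 1) to pixel coordinates."""
    u, v, w = (sum(P[i][j] * X[j] for j in range(4)) for i in range(3))
    return u / w, v / w

# Identity rotation, zero translation, focal length 500, principal point (320, 240)
K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0, 0, 0]
P = projection_matrix(K, R, t)
```

With this camera, a point on the optical axis projects onto the principal point, and a point twice as far away moves half as far from it, which is the pinhole behaviour the proposal describes.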
\subsection{LED and Photoresistor} \begin{figure}[htbp] \centerline{\epsfysize = 2.0in\epsffile{sensor/presistor.eps}} \caption{LED and Photoresistor} \label{ledpresistor} \end{figure} Measuring the color of the table is a common task that needs to be accomplished by the robot. In order to do this, a light sensor and a source of light to illuminate the table are needed. The source may be ambient light that comes from above the table and around the room, but this may not be enough to guarantee consistent readings, because the light source is dependent upon a varying table environment. So, it is better for the robot to have its own light source. An LED, mounted next to a well-shielded photoresistor, can make a spot of light on the table that is significantly brighter than the ambient light. Consequently, the brightness of the light will be fairly constant across the table, and discerning colors will be easier. Be sure to hook the LED to the connector correctly, as shown in Figure \ref{ledpresistor}. The longer lead on the LED is the anode and should be connected to power through a 330\ohm resistor. The shorter lead goes to ground. Because the analog sensor ports are powered continuously when the robot is on, the LED will also be on during the entire 60-second match. This is not necessary. In fact, the LED can be plugged into a motor output to conserve on-board battery power. Any number of LEDs can be plugged into a single motor port. A unique use of this sensor with the LED plugged into the motor port is to measure the color of the table by taking the {\it difference} of two light measurements, one with the LED on and one with it off. In this case there are two numbers instead of one, and a more reliable reading of the surface color can be expected. By computing the difference of these two values, the approximate amount of LED light that was reflected from the surface is being measured.
By comparing the difference to a threshold, the robot can discern between different colors at more than six inches away from the table. The digital outputs can also be used for light measurements, but if you wish to try doing this, be sure to talk to Paul Grayson ({\tt pdg@mit.edu}).
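The difference-of-two-readings scheme can be sketched as follows. Here `read_sensor` and `set_led` are hypothetical hooks standing in for the actual analog-port read and motor-port LED control, and the threshold is a made-up calibration value:

```python
def surface_reading(read_sensor, set_led):
    """Sample with the LED off, then on; the difference approximates
    the amount of LED light reflected back from the table, with the
    ambient contribution subtracted out."""
    set_led(False)
    ambient = read_sensor()
    set_led(True)
    lit = read_sensor()
    set_led(False)          # leave the LED off to conserve battery power
    return lit - ambient

def is_dark_surface(diff, threshold=40):
    """Dark surfaces reflect little LED light back to the photoresistor."""
    return diff < threshold
```

Because the ambient term is subtracted, the comparison against the threshold is largely insensitive to room lighting, which is the point of the two-sample measurement.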
\section*{Agents (\textit{agent})} Used to access all functions around agents. Please note that the user must have access to the groups of which an agent is a member in order to retrieve it and to apply changes. \subsection*{\textit{listAgents}} List all agents with some basic information. { \color{blue} \begin{verbatim} { "section": "agent", "request": "listAgents", "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "listAgents", "response": "OK", "agents": [ { "agentId": "2", "name": "cracker1", "devices": [ "Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz", "NVIDIA Quadro 600" ] } ] } \end{verbatim} } \subsection*{\textit{get}} Retrieve all the information about a specific agent by providing its ID. The last action time is a UNIX timestamp; if the configuration on the server is set to hide the IP of the agents, the value will just be \textit{Hidden} instead of the IP. { \color{blue} \begin{verbatim} { "section": "agent", "request": "get", "agentId": 2, "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "get", "response": "OK", "name": "cracker1", "devices": [ "Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz", "NVIDIA Quadro 600" ], "owner": { "userId": 1, "username": "htp" }, "isCpuOnly": false, "isTrusted": true, "isActive": true, "token": "0lBfAp7YQh", "extraParameters": "--force", "errorFlag": 2, "lastActivity": { "action": "getTask", "time": 1531316240, "ip": "127.0.0.1" } } \end{verbatim} } \subsection*{\textit{setActive}} Set an agent active/inactive. { \color{blue} \begin{verbatim} { "section": "agent", "request": "setActive", "active": false, "agentId": 2, "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "setActive", "response": "OK" } \end{verbatim} } \subsection*{\textit{changeOwner}} Either set an owner for an agent or remove the owner from it.
The user can be specified by providing either the user ID or the username. To remove the owner, the user value must be \textit{null}. { \color{blue} \begin{verbatim} { "section": "agent", "request": "changeOwner", "user": 1, "agentId": 2, "accessKey": "mykey" } \end{verbatim} } { \color{blue} \begin{verbatim} { "section": "agent", "request": "changeOwner", "user": "testuser", "agentId": 2, "accessKey": "mykey" } \end{verbatim} } { \color{blue} \begin{verbatim} { "section": "agent", "request": "changeOwner", "user": null, "agentId": 2, "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "changeOwner", "response": "OK" } \end{verbatim} } \subsection*{\textit{setName}} Set the name of the agent. { \color{blue} \begin{verbatim} { "section": "agent", "request": "setName", "name": "cracker1", "agentId": 2, "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "setName", "response": "OK" } \end{verbatim} } \subsection*{\textit{setCpuOnly}} Set whether an agent is CPU only or not. { \color{blue} \begin{verbatim} { "section": "agent", "request": "setCpuOnly", "cpuOnly": true, "agentId": 2, "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "setCpuOnly", "response": "OK" } \end{verbatim} } \subsection*{\textit{setExtraParams}} Set agent-specific command line parameters which are included in the cracker command line call on the agent. { \color{blue} \begin{verbatim} { "section": "agent", "request": "setExtraParams", "extraParameters": "-d 1,2", "agentId": 2, "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "setExtraParams", "response": "OK" } \end{verbatim} } \subsection*{\textit{setErrorFlag}} Set how errors on the agent should be handled on the server.
The following values can be given as the \textit{ignoreErrors} value:
\begin{description}
\item[0] In case of an error, the error message is saved on the server and the agent is put into an inactive state.
\item[1] In case of an error, the error message is saved on the server, but the agent will be given further chunks to work on if it requests them.
\item[2] In case of an error, nothing is saved on the server; the agent can continue to work and will not be put into an inactive state.
\end{description}
{ \color{blue}
\begin{verbatim}
{
  "section": "agent",
  "request": "setErrorFlag",
  "ignoreErrors": 0,
  "agentId": 2,
  "accessKey": "mykey"
}
\end{verbatim}
}
{ \color{OliveGreen}
\begin{verbatim}
{
  "section": "agent",
  "request": "setErrorFlag",
  "response": "OK"
}
\end{verbatim}
}

\subsection*{\textit{setTrusted}}
Set whether an agent is trusted or not.
{ \color{blue}
\begin{verbatim}
{
  "section": "agent",
  "request": "setTrusted",
  "trusted": false,
  "agentId": 2,
  "accessKey": "mykey"
}
\end{verbatim}
}
{ \color{OliveGreen}
\begin{verbatim}
{
  "section": "agent",
  "request": "setTrusted",
  "response": "OK"
}
\end{verbatim}
}

\subsection*{\textit{listVouchers}}
Lists all currently existing vouchers on the server, which can be used to register new agents.
{ \color{blue}
\begin{verbatim}
{
  "section": "agent",
  "request": "listVouchers",
  "accessKey": "mykey"
}
\end{verbatim}
}
{ \color{OliveGreen}
\begin{verbatim}
{
  "section": "agent",
  "request": "listVouchers",
  "response": "OK",
  "vouchers": [
    "sM2q6CwiPY",
    "xkw782a3x9",
    "2drg6Vsqor",
    "AZyY8dK1ao"
  ]
}
\end{verbatim}
}

\subsection*{\textit{createVoucher}}
Create a new voucher on the server. Specifying a voucher code is optional; otherwise the server generates a random one. The server always sends back the created voucher.
{ \color{blue} \begin{verbatim} { "section": "agent", "request": "createVoucher", "voucher": "mySpecificVoucher", "accessKey": "mykey" } \end{verbatim} } { \color{blue} \begin{verbatim} { "section": "agent", "request": "createVoucher", "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "createVoucher", "response": "OK", "voucher": "Gjawgidkr4" } \end{verbatim} } \subsection*{\textit{deleteVoucher}} Delete a voucher from the server. { \color{blue} \begin{verbatim} { "section": "agent", "request": "deleteVoucher", "voucher": "Gjawgidkr4", "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "deleteVoucher", "response": "OK" } \end{verbatim} } \subsection*{\textit{getBinaries}} Lists which agent binaries are available on the server to be used for agents. { \color{blue} \begin{verbatim} { "section": "agent", "request": "getBinaries", "accessKey": "mykey" } \end{verbatim} } { \color{OliveGreen} \begin{verbatim} { "section": "agent", "request": "getBinaries", "response": "OK", "apiUrl": "http:\/\/localhost\/hashtopolis\/src\/api\/api\/server.php", "binaries": [ { "name": "csharp", "os": "Windows, Linux(mono), OS X(mono)", "url": "http:\/\/localhost\/hashtopolis\/src\/api\/agents.php?download=1", "version": "0.52.2", "filename": "hashtopolis.exe" }, { "name": "python", "os": "Windows, Linux, OS X", "url": "http:\/\/localhost\/hashtopolis\/src\/api\/agents.php?download=2", "version": "0.1.4", "filename": "hashtopolis.zip" } ] } \end{verbatim} }
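Every call in this section follows the same pattern: a JSON object carrying \textit{section}, \textit{request}, and \textit{accessKey} (plus request-specific parameters) is POSTed to the server's user API endpoint, and the reply contains a \textit{response} field. A minimal Python sketch of such a client; the endpoint URL and helper names here are assumptions to be adapted to your installation:

```python
import json
import urllib.request

# Assumed endpoint; adjust to your Hashtopolis installation.
API_URL = "http://localhost/hashtopolis/src/api/user.php"

def build_payload(section, request, access_key, **params):
    """Assemble the JSON body shared by all user-API requests."""
    return {"section": section, "request": request, "accessKey": access_key, **params}

def api_call(section, request, access_key, **params):
    """POST one request and return the parsed JSON reply."""
    body = json.dumps(build_payload(section, request, access_key, **params)).encode()
    req = urllib.request.Request(API_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server):
# reply = api_call("agent", "setActive", "mykey", agentId=2, active=False)
# assert reply["response"] == "OK"
```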
\documentclass{beamer}
\usepackage[utf8]{inputenc}
% \usetheme{Warsaw} %% Themenwahl
\input{../../shared_slides.tex}
\title{Minimum enclosing ball problem}
\date{\today}
\begin{document}
\maketitle
\frame{\tableofcontents}
\section{Intro}
\begin{frame}
\frametitle{Intro} %%Folientitel
\begin{definition} %%Definition
Given a set of vectors $A = \{a_1, \dots, a_m\} \subset \R^d$, we want to find the smallest ball containing all points of $A$.
% A \subseteq B_{c,\rho} := \{x \in R^d : \Vert x - c \Vert \le \rho \}
I.e.
\begin{align}
\min_{c,\rho}\ & \rho \\
\text{s.t. } & \Vert a_i - c \Vert \le \rho \quad \forall i=1, \dots, m
\end{align}
\end{definition}
Used in clustering, data classification, facility location, and computer graphics.
\end{frame}
\begin{frame}
\frametitle{Rewriting the problem}
Square the constraints for smoothness (with $\rho$ now denoting the squared radius):
\begin{align}
\min_{c,\rho}\ & \rho \\
\text{s.t. } & \Vert a_i \Vert^2 - 2 a_i^T c + c^T c \le \rho \quad \forall i=1, \dots, m
\end{align}
Build the Lagrangian dual:
\begin{equation}
L(c, \rho, u) = \rho + \sum_{i=1}^m u_i \left(\Vert a_i - c \Vert^2 - \rho\right)
\end{equation}
and the dual function (minimizing over $c$ gives $c = \sum_{i=1}^m u_i a_i$):
\begin{equation}
\Phi(u) = \inf_{c, \rho} L(c, \rho, u) = \sum_{i=1}^m u_i \Vert a_i \Vert^2 - \Big\Vert \sum_{i=1}^{m} u_i a_i \Big\Vert^2
\end{equation}
provided $\sum_{i=1}^{m} u_i = 1$ with $u_i \ge 0$ (otherwise $\Phi(u) = -\infty$). This yields the unit simplex as the feasible set.

Sparsity: the dual solution is typically sparse, because the optimal center is a combination of only a few support points.
\end{frame}
\begin{frame}
\frametitle{Solving via Frank-Wolfe}
\end{frame}
\end{document}
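One way to make the Frank--Wolfe slide concrete: at each iteration, form the current center $c = \sum_i u_i a_i$, move the simplex weight $u$ toward the vertex of the point farthest from $c$ (the linear maximizer of the dual gradient), and repeat. A pure-Python sketch; the function name, step-size rule $\gamma_k = 2/(k+3)$, and iteration count are illustrative choices, not part of the slides:

```python
import math

def meb_frank_wolfe(points, iters=2000):
    """Frank-Wolfe (Badoiu-Clarkson style) iteration on the MEB dual (a sketch)."""
    m, d = len(points), len(points[0])
    u = [1.0 / m] * m                      # start at the simplex barycenter
    for k in range(iters):
        # current center c = sum_i u_i * a_i
        c = [sum(u[i] * points[i][j] for i in range(m)) for j in range(d)]
        # linear maximization over the simplex: vertex of the farthest point
        far = max(range(m),
                  key=lambda i: sum((points[i][j] - c[j]) ** 2 for j in range(d)))
        gamma = 2.0 / (k + 3)              # diminishing step size
        u = [(1 - gamma) * ui for ui in u]
        u[far] += gamma
    c = [sum(u[i] * points[i][j] for i in range(m)) for j in range(d)]
    rho = max(math.dist(p, c) for p in points)
    return c, rho

# The MEB of (0,0), (2,0), (1,1) is centered at (1,0) with radius 1.
center, radius = meb_frank_wolfe([(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)])
```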
\documentclass[11pt,pdflatex,makeidx]{scrbook} % Book class in 11 points
\usepackage[margin=0.5in]{geometry}
\usepackage{color}
\usepackage{makeidx}
<% if erudite::*latex-highlight-syntax* %>
\usepackage{minted}
\usepackage{mdframed}
<% else %>
\usepackage{listings}
<% end %>
\usepackage{hyperref}
\usepackage{courier}
\hypersetup{colorlinks=true,linkcolor=blue}
<% if (not erudite::*latex-highlight-syntax*) %>
\lstloadlanguages{Lisp}
\lstset{frame=none,language=Lisp,
  basicstyle=\ttfamily\small,
  keywordstyle=\color{black}\bfseries,
  stringstyle=\ttfamily,
  showstringspaces=false,breaklines}
\lstnewenvironment{code}{}{}
<% else %>
% \surroundwithmdframed{minted}
\definecolor{bg}{rgb}{0.95,0.95,0.95}
<% end %>
\parindent0pt \parskip10pt % make block paragraphs
\raggedright % do not right justify
% Note that book class by default is formatted to be printed back-to-back.
\makeindex
\begin{document} % End of preamble, start of text.
\title{\bf <%= (@ title)%>}
<% if (@ subtitle) %>
\subtitle{<%= (@ subtitle)%>}
<% end %>
<% if (@ author) %>
\author{<%= (@ author) %>}
<% end %>
\date{\today} % Use current date.
\frontmatter % only in book class (roman page #s)
\maketitle % Print title page.
\tableofcontents % Print table of contents
\mainmatter % only in book class (arabic page #s)
\long\def\ignore#1{} % ignore macro
<%= (@ body) %>
\chapter{Index}
\printindex
\end{document}
\subsubsection{Method of Lagrange Multipliers} \noindent Given an objective function, $f(x,y)$, and a constraint equation $g(x,y) = k$, define $F(x,y,\lambda) = f(x,y) + \lambda(k-g(x,y))$. The solutions, $(x, y, \lambda)$, to $\nabla F = \vec{0}$ are the candidates for the constrained optimization problem.\\ \noindent For example, let's maximize $f(x,y)=xy$ subject to $(x-1)^2 + (y-1)^2 = 1$.
\begin{equation*} F(x,y,\lambda) = xy + \lambda(1 - (x-1)^2 - (y-1)^2) \end{equation*}
\begin{equation*} \nabla F = \langle y - 2\lambda(x-1),\ x - 2\lambda(y-1),\ 1 - (x-1)^2 - (y-1)^2 \rangle = \vec{0} \end{equation*}
\begin{equation*} \begin{cases} y - 2\lambda(x-1) = 0 \\ x - 2\lambda(y-1) = 0 \\ 1 - (x-1)^2 - (y-1)^2 = 0 \end{cases} \implies \begin{cases} y = 2\lambda(x-1) \\ x = 2\lambda(y-1) \\ (x-1)^2 + (y-1)^2 = 1 \end{cases} \end{equation*}
Substituting $x = 2\lambda(y-1)$ into the first equation (assuming $\lambda \neq \pm\tfrac{1}{2}$):
\begin{equation*} y = 2\lambda(2\lambda(y-1) - 1) = 4\lambda^2 y - 4\lambda^2 - 2\lambda \implies y = \frac{2\lambda}{2\lambda-1} \end{equation*}
\begin{equation*} x = 2\lambda\left(\frac{2\lambda}{2\lambda-1} - 1\right) = \frac{2\lambda}{2\lambda-1} \implies x = y \end{equation*}
\begin{equation*} 2(x-1)^2 = 1 \implies x = y = 1 \pm \frac{1}{\sqrt{2}} \implies f = x^2 = \frac{3 \pm 2\sqrt{2}}{2} \end{equation*}
The excluded case $\lambda = \tfrac{1}{2}$ gives no solution, while $\lambda = -\tfrac{1}{2}$ gives the additional critical points $(0,1)$ and $(1,0)$, where $f = 0$. Since the constraint circle lies in the first quadrant, $f = xy \ge 0$ on it, so the constrained maximum is $\frac{3+2\sqrt{2}}{2}$ and the constrained minimum is $0$ (the value $\frac{3-2\sqrt{2}}{2}$ is only a local extremum along the constraint).
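The extrema can be checked numerically by parametrizing the constraint circle as $(x,y) = (1+\cos t,\, 1+\sin t)$ and scanning $t$; a short Python sketch (the grid size is an arbitrary choice):

```python
import math

# f(x, y) = x*y evaluated along the constraint circle (x-1)^2 + (y-1)^2 = 1,
# parametrized as x = 1 + cos(t), y = 1 + sin(t).
def f_on_circle(t: float) -> float:
    return (1 + math.cos(t)) * (1 + math.sin(t))

ts = [2 * math.pi * k / 100_000 for k in range(100_000)]
values = [f_on_circle(t) for t in ts]

print(max(values))  # close to (3 + 2*sqrt(2)) / 2, about 2.9142
print(min(values))  # close to 0, attained near (0, 1) and (1, 0)
```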
\paragraph{Simplicial complexes.} A simplicial complex is a collection of finite sets closed under taking subsets. We call a member of a simplicial complex $K$ a \emph{simplex} of \emph{dimension $p$} if it has cardinality $p+1$, and denote the set of all such $p$-simplices $K_p$. A $p$-simplex has $p+1$ \emph{faces} of dimension $p-1$, namely the subsets omitting one element. We denote these $[v_0,\dotsc,\hat{v}_i,\dotsc, v_p]$ when omitting the $i$'th element. If a simplex $\sigma$ is a face of $\tau$, we say that $\tau$ is a \emph{coface} of $\sigma$. While this definition is entirely combinatorial, there is a geometric interpretation, and it will make sense to refer to and think of $0$-simplices as \emph{vertices}, $1$-simplices as \emph{edges}, $2$-simplices as \emph{triangles}, $3$-simplices as \emph{tetrahedra}, and so forth (see Figure~\ref{fig:data2complex}, (b)). Let $C^p(K)$ be the set of functions $K_p\to\RR$, with the obvious vector space structure. These \emph{$p$-cochains} will encode our data. Define the linear \emph{coboundary} maps $\delta^p:C^p(K)\to C^{p+1}(K)$ by \begin{equation*} \delta^p(f)([v_0,\dotsc,v_{p+1}]) = \sum_{i=0}^{p+1} (-1)^i f([v_0,\dotsc,\hat{v}_i,\dotsc,v_{p+1}]). \end{equation*} Observe that this definition can be thought of in geometric terms: The support of $\delta^p(f)$ is contained in the set of $(p+1)$-simplices that are cofaces of the $p$-simplices that make up the support of $f$. 
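In the indicator bases, $\delta^p$ acts as a signed incidence matrix: the entry for a $(p+1)$-simplex $\tau$ and its $i$'th face is $(-1)^i$. A small Python sketch of this definition (the function and variable names are our own, and simplices are represented as sorted tuples of vertices):

```python
import numpy as np

def coboundary(p_simplices, p1_simplices):
    """Signed incidence matrix of delta^p in the indicator bases (a sketch)."""
    index = {s: k for k, s in enumerate(p_simplices)}
    B = np.zeros((len(p1_simplices), len(p_simplices)))
    for row, tau in enumerate(p1_simplices):
        for i in range(len(tau)):
            face = tau[:i] + tau[i + 1:]      # omit the i-th vertex
            B[row, index[face]] = (-1) ** i
    return B

# The filled triangle on vertices 0, 1, 2:
verts = [(0,), (1,), (2,)]
edges = [(0, 1), (0, 2), (1, 2)]
tris = [(0, 1, 2)]
B0 = coboundary(verts, edges)   # delta^0: vertex cochains -> edge cochains
B1 = coboundary(edges, tris)    # delta^1: edge cochains -> triangle cochains
assert not (B1 @ B0).any()      # delta^{p+1} composed with delta^p vanishes
```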
\begin{figure}[htpb] %\begin{table*}[!t] \savebox{\tempbox}{% compute size of tabulat \scriptsize{ \begin{tabular}{llr} \toprule Papers & Authors & Citations \\ \midrule Paper I & A, B, C & 100 \\ Paper II & A, B & 50\\ Paper III & A, D & 10\\ Paper IV & C, D & 4\\ \bottomrule \end{tabular} }}% \settowidth{\tempwidth}{\usebox{\tempbox}}% \hfil\begin{minipage}[b]{\tempwidth}% \raisebox{-\height}{\usebox{\tempbox}}% %\vspace{-7pt} \scriptsize{\caption*{(a)}}% \label{table:data}% \end{minipage}% \savebox{\tempbox}{ \input{figures/coauthorship_complex.tex} }% \settowidth{\tempwidth}{\usebox{\tempbox}}% \hfil\begin{minipage}[b]{\tempwidth}% \raisebox{-\height}{\usebox{\tempbox}}% \vspace{-3pt} \scriptsize{\captionof*{figure}{(b)}}% \label{fig:co-authoship-complex}% \end{minipage}% %\vspace{5pt} %\end{table*} \savebox{\tempbox}{\scriptsize{ \begin{blockarray}{cccccc} \tiny{AB} & \tiny{AC} & \tiny{AD} & \tiny{BC} & \tiny{CD} \\ \begin{block}{(ccccc)c} 3 & 0 & 1 & 0 & 0 & \tiny{AB} \\ 0 & 3 & 1 & 0 & -1 & \tiny{AC} \\ 1 & 1 & 2 & 0 & 1 & \tiny{AD} \\ 0 & 0 & 0 & 3 & -1 & \tiny{BC}\\ 0 & -1 & 1 & -1 & 2 & \tiny{CD}\\ \end{block} \end{blockarray}}}% \settowidth{\tempwidth}{\usebox{\tempbox}}% \hfil\begin{minipage}[b]{\tempwidth}% \raisebox{-\height}{\usebox{\tempbox}}% \vspace{-7pt} \scriptsize{\captionof*{figure}{(c)}}% \end{minipage}% %\end{table*} \caption{Constructing a simplicial complex from data. (a)~Coauthorship data. (b)~Coauthorship complex with corresponding cochains from the data. (c)~Degree-$1$ Laplacian $L_1$ of the coauthorship complex.}\label{fig:data2complex} \end{figure} \paragraph{Simplicial Laplacians.} We are in this paper concerned with finite abstract simplicial complexes, although our method is applicable to a much broader setting, e.g.\ CW-complexes. 
In analogy with Hodge--de Rham theory~\cite{madsen1997calculus}, we define the \emph{degree-$i$ simplicial Laplacian} of a simplicial complex $K$ as the linear map \begin{align*} &\lap_i:C^i(K)\to C^i(K) \\ &\lap_i = \lapu_i + \lapd_i = \delta^{i\ast}\circ\delta^{i} + \delta^{i-1}\circ\delta^{i-1\ast}, \end{align*} where $\delta^{i\ast}$ is the adjoint of the coboundary with respect to the inner product (typically the one making the indicator function basis orthonormal). In most practical applications, the coboundary can be represented as a sparse matrix $B_i$ and the Laplacians can be efficiently computed as $L_i=B_i\transpose B_{i}+B_{i-1}B_{i-1}\transpose$. The matrices $L_0$ and $B_0$ are the classic graph Laplacian and incidence matrix. Note that the Laplacians carry valuable topological information about the complex: The kernel of the $k$-Laplacian is isomorphic to the $k$-(co)homology of its associated simplicial complex~\cite{eckmann1944,horak2013spectra}\footnote{In other words, the number of zero-eigenvalues of the $k$-Laplacian corresponds to the number of $k$-dimensional holes in the simplicial complex.}.
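As a concrete sketch of $L_i=B_i\transpose B_{i}+B_{i-1}B_{i-1}\transpose$ and of the kernel--homology correspondence, consider the hollow triangle (three vertices, three edges, no $2$-simplex); the incidence matrix below is written out by hand:

```python
import numpy as np

# Coboundary/incidence matrix B0 of the hollow triangle:
# rows = edges [0,1], [0,2], [1,2]; columns = vertices 0, 1, 2.
B0 = np.array([[-1.0, 1.0, 0.0],
               [-1.0, 0.0, 1.0],
               [0.0, -1.0, 1.0]])

L0 = B0.T @ B0   # degree-0 Laplacian: the classic graph Laplacian
L1 = B0 @ B0.T   # degree-1 Laplacian; no B1 term, since there are no triangles

# Zero eigenvalues count connected components (k = 0) and 1-dim holes (k = 1).
betti0 = int(np.sum(np.abs(np.linalg.eigvalsh(L0)) < 1e-9))
betti1 = int(np.sum(np.abs(np.linalg.eigvalsh(L1)) < 1e-9))
print(betti0, betti1)  # 1 1: one component, one hole
```

The kernel of $L_1$ here is spanned by the cochain assigning $\pm 1$ to the edges of the boundary cycle, matching the single $1$-dimensional hole.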
\documentclass[letterpaper,12pt,twoside,]{pinp}

%% Some pieces required from the pandoc template
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}

% Use the lineno option to display guide line numbers if required.
% Note that the use of elements such as single-column equations
% may affect the guide line number alignment.

\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}

% pinp change: the geometry package layout settings need to be set here, not in pinp.cls
\geometry{layoutsize={0.95588\paperwidth,0.98864\paperheight},%
layouthoffset=0.02206\paperwidth, layoutvoffset=0.00568\paperheight}

\definecolor{pinpblue}{HTML}{185FAF} % imagecolorpicker on blue for new R logo
\definecolor{pnasbluetext}{RGB}{101,0,0} %

\title{DALITE Q1 - Parameters, Sampling Distributions and the Central Limit Theorem. Due September 23, 2020 by 10am.}

\author[a]{EPIB607 - Inferential Statistics}

\affil[a]{Fall 2020, McGill University}

\setcounter{secnumdepth}{5}

% Please give the surname of the lead author for the running footer
\leadauthor{Bhatnagar}

% Keywords are not mandatory, but authors are strongly encouraged to provide them. If provided, please include two to five keywords, separated by the pipe symbol, e.g:
\keywords{ Parameters and statistics | Sampling distributions | Central Limit Theorem (CLT) }

\begin{abstract}
This DALITE quiz will cover the building blocks of statistical inference.
\end{abstract}

\dates{This version was compiled on \today}

% initially we use doi so keep for backwards compatibility
% new name is doi_footer
\pinpfootercontents{DALITE Q2 due Wednesday September 23, 2020 by 10am}

\begin{document}

% Optional adjustment to line up main text (after abstract) of first page with line numbers, when using both lineno and twocolumn options.
% You should only change this length when you've finalised the article contents.
\verticaladjustment{-2pt}

\maketitle
\thispagestyle{firststyle}
\ifthenelse{\boolean{shortarticle}}{\ifthenelse{\boolean{singlecolumn}}{\abscontentformatted}{\abscontent}}{}

% If your first paragraph (i.e. with the \dropcap) contains a list environment (quote, quotation, theorem, definition, enumerate, itemize...), the line after the list may have some extra indentation. If this is the case, add \parshape=0 to the end of the list environment.

\hypertarget{marking}{%
\section*{Marking}\label{marking}}
\addcontentsline{toc}{section}{Marking}

Completion of this DALITE exercise will be available to us automatically through the DALITE website. Therefore \textbf{you do not need to hand anything in}. Marks will be based on the number of correct answers. For each question you will receive 0.5 marks for getting the correct answer on the first attempt and an additional 0.5 marks if you stick with the right answer or switch to the correct answer after seeing someone else's rationale.

\hypertarget{parameters-and-statistics}{%
\section{Parameters and statistics}\label{parameters-and-statistics}}

\hypertarget{learning-objectives}{%
\subsection{Learning Objectives}\label{learning-objectives}}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item Understand the difference between a parameter and a statistic.
\item A parameter is related to the population.
\item A statistic is related to the sample.
\end{enumerate}

\hypertarget{required-readings}{%
\subsection{Required Readings}\label{required-readings}}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item \href{https://www.dropbox.com/s/kr293cablb11nrm/Ch13SamplingDistributionsJH2018.pdf?dl=0}{JH section 1}
\end{enumerate}

\vspace*{0.25cm}

\hypertarget{sampling-distributions-and-central-limit-theorem}{%
\section{Sampling Distributions and Central Limit Theorem}\label{sampling-distributions-and-central-limit-theorem}}

\hypertarget{learning-objectives-1}{%
\subsection{Learning Objectives}\label{learning-objectives-1}}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item Recognize that there is variability due to sampling. Repeated random samples from the same population will give variable results.
\item Understand the concept of a sampling distribution of a statistic such as a sample mean, sample median, or sample proportion.
\item Know that the sampling distributions of some common statistics are approximately normally distributed; in particular, the sample mean \(\bar{x}\) of a simple random sample drawn from a normal population has a normal distribution.
\item Know that the standard deviation of the sampling distribution of \(\bar{x}\) depends on both the standard deviation of the population from which the sample was drawn and the sample size \(n\).
\item Grasp a key concept of statistical process control: Monitor the process rather than examine all of the products; all processes have variation; we want to distinguish the natural variation of the process from the added variation that shows that the process has been disturbed.
\item Make an \(\bar{x}\) control chart. Use the 68-95-99.7\% rule and the sampling distribution of \(\bar{x}\) to help identify if a process is out of control.
\item Be familiar with the Central Limit Theorem: the sample mean \(\bar{x}\) of a large number of observations has an approximately normal distribution even when the distribution of individual observations is not normal. \end{enumerate} \hypertarget{videos}{% \subsection{Videos}\label{videos}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item \href{https://www.learner.org/series/against-all-odds-inside-statistics/sampling-distributions/}{Against All Odds Unit 22} \end{enumerate} \hypertarget{required-readings-1}{% \subsection{Required Readings}\label{required-readings-1}} \begin{enumerate} \item \href{https://www.learner.org/wp-content/uploads/2019/03/AgainstAllOdds_StudentGuide_Unit22-Sampling-Distributions.pdf}{Against All Odds Unit 22} \item De Veaux, Velleman and Bock (DVB), Chapter 18 \end{enumerate} %\showmatmethods \bibliography{pinp} \bibliographystyle{jss} \end{document}
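The sampling-distribution facts in the learning objectives can be illustrated with a small simulation (Python standard library only; the exponential population, sample size, and number of repetitions are our own choices of a skewed example):

```python
import random
import statistics

random.seed(1)

n, reps = 50, 2000   # sample size and number of repeated samples

# Population: exponential with mean 1 (skewed, clearly non-normal).
# By the CLT, the sampling distribution of the mean is still roughly normal.
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

print(statistics.fmean(means))   # close to the population mean, 1
print(statistics.stdev(means))   # close to sigma / sqrt(n) = 1 / sqrt(50), about 0.14
```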
\chapter{Conclusions}
\label{chpr:conclusion}
The aim of this thesis is to give an introduction to the Schnorr signature algorithm, starting from the mathematics and the cryptography behind the scheme, and to present some of its remarkable applications to Bitcoin, detailing the benefits and improvements that would arise from its deployment.
We started with a brief but thorough description of the mathematical structures (Chapter \ref{chpr:math}) and cryptographic primitives (Chapter \ref{chpr:ecc}) that underpin digital signature schemes based on elliptic curve cryptography. In Chapter \ref{chpr:dss} we presented both the ECDSA and Schnorr algorithms, respectively the one currently implemented in Bitcoin and the one under development. We compared the two schemes, investigating ECDSA's shortcomings and Schnorr's benefits, which range from security to efficiency. In particular, we focused on the linearity property, which turned out to be the key to the higher-level constructions presented in Chapter \ref{chpr:application}. \\
We have seen how to translate functionality already implemented in Bitcoin into terms of Schnorr signatures: multi-signature schemes are implemented through MuSig (Section \ref{musig}), whose main advantage is to recover key aggregation; threshold signatures can be deployed through the protocols presented in Section \ref{threshold}, which make them indistinguishable from a single signature; the last application we studied was adaptor signatures and their benefits for cross-chain atomic swaps and the Lightning Network.
\bigskip

\noindent The immediate benefits that Schnorr would bring to Bitcoin are improved efficiency (smaller signatures, batch validation, cross-input aggregation) and privacy (multi-signatures and threshold signatures would be indistinguishable from a single signature), leading also to an enhancement in fungibility.
All these applications would become possible in a straightforward way after the introduction of Schnorr, which could be brought to Bitcoin through a soft fork\footnote{Improvements in the protocol have to be made without a consensus split.}: the fact that Schnorr is superior to ECDSA in every aspect will hopefully ease the process.
\bigskip

\noindent The last thing we would like to point out is that the applications presented in this work are by no means the only benefits that Schnorr could bring to Bitcoin. More complex ideas go by the names of Taproot \cite{Taproot} and Graftroot \cite{Graftroot}, and are built on top of the concepts of MAST and Pay-to-Contract: through these constructions it would be possible, in the cooperative case, to hide the redeem script completely, presenting a single signature (no matter how complex the script is). Because of how soft forks need to be implemented after SegWit (i.e., with an upgrade of the version number), there is an incentive to bundle as many innovations as possible together (the presence of too many version numbers with small differences would constitute a loss of privacy): for this reason, it is probable that Schnorr will come to life accompanied by Taproot. \\
Hopefully, we have convinced the reader that Schnorr (and Bitcoin!) is worth studying, and we have provided the tools to properly understand further features and innovations beyond the ones presented. Moreover, we hope that you are now motivated not only to delve deeper into the technical side of Bitcoin, but also to approach it from other angles, to fully appreciate its disruptiveness and to form your own idea of what Bitcoin is and of the possibilities it holds.
\ignore{ \documentstyle[11pt]{report} \textwidth 13.7cm \textheight 21.5cm \newcommand{\myimp}{\verb+ :- +} \newcommand{\ignore}[1]{} \def\definitionname{Definition} \makeindex \begin{document} } \chapter{\label{chapter:datatypes}Data Types, Operators, and Built-ins} Picat is a dynamically-typed language, in which type checking occurs at runtime. A variable gets a type once it is bound to a value. In Picat, variables and values are terms. A value can be \emph{primitive}\index{primitive value} or \emph{compound}\index{compound value}. A primitive value\index{primitive value} can be an \emph{integer}\index{integer}, a \emph{real number}\index{number}, or an \emph{atom}\index{atom}. A compound value\index{compound value} can be a \emph{list}\index{list} or a \emph{structure}\index{structure}. \emph{Strings}\index{string}, \emph{arrays}\index{array}, \emph{maps}\index{map}, \emph{sets}\index{set}, and \emph{heaps}\index{heap} are special compound values\index{compound value}. This chapter describes the data types and the built-ins for each data type that are provided by the \texttt{basic} module. Many of the built-ins are given as operators. Table \ref{tab:ops} shows all of the operators that are provided by Picat. Unless the table specifies otherwise, the operators are left-associative. The as-pattern operator (\verb+@+) and the operators for composing goals, including \texttt{not}\index{\texttt{not}}, \texttt{once}\index{\texttt{once}}, conjunction (\verb+,+ and \verb+&&+), and disjunction (\verb+;+ and \verb+||+), will be described in Chapter \ref{chapter:predicates} on Predicates and Functions. The constraint operators (the ones that begin with \verb+#+) will be described in Chapter \ref{ch:constraints} on Constraints. In Picat, no new operators can be defined, and none of the existing operators can be redefined. The dot operator (\verb+.+) is used in OOP notations for calling predicates and functions. It is also used to qualify calls with a module name. 
The notation \texttt{$A_1.f(A_2,\ldots,A_k)$} is the same as \texttt{$f(A_1,A_2,\ldots,A_k)$}, unless $A_1$ is an atom, in which case $A_1$ must be a module qualifier for $f$. If an atom\index{atom} needs to be passed as the first argument to a function or a predicate, then this notation cannot be used. The notation $A.Attr$, where $Attr$ does not have the form \texttt{f($\ldots$)}, is the same as the function call \texttt{get$(A,Attr)$}\index{\texttt{get/2}}. For example, the expression \texttt{$S$.name}\index{\texttt{name/1}} returns the name, and the expression \texttt{$S$.arity}\index{\texttt{arity/1}} returns the arity\index{arity} of $S$ if $S$ is a structure\index{structure}. Note that the dot operator is left-associative. For example, the expression \texttt{X.f().g()} is the same as \texttt{g(f(X))}. \begin{table} \caption{\label{tab:ops}Operators in Picat} \input{operators.tex} \end{table} The following functions are provided for all terms: \begin{itemize} \item \texttt{copy\_term($Term_1$) = $Term_2$}\index{\texttt{copy\_term/1}}: This function copies $Term_1$ into $Term_2$. If $Term_1$ is an attributed variable\index{attributed variable}, then $Term_2$ will not contain any of the attributes. \item \texttt{copy\_term\_shallow($Term_1$) = $Term_2$}\index{\texttt{copy\_term\_shallow/1}}: This function copies the skeleton of $Term_1$ into $Term_2$. If $Term_1$ is a variable or an atomic value, then it returns a complete copy of $Term_1$, the same as \texttt{copy\_term($Term_1$)}; if $Term_1$ is a list, then it returns a cons \texttt{[$H$$|$$T$]} where both the car $H$ and the cdr $T$ are free variables; otherwise, it is the same as \texttt{new\_struct(name($Term_1$),arity($Term_1$))}. \item \texttt{hash\_code($Term$) = $Code$}\index{\texttt{hash\_code/1}}: This function returns the hash code of $Term$. If $Term$ is a variable, then the returned hash code is always 0. 
\item \texttt{to\_codes($Term$) = $Codes$}\index{\texttt{to\_codes/1}}: This function returns a list of character codes of $Term$. \item \texttt{to\_fstring($Format$, $Args\ldots$)}\index{\texttt{to\_fstring}}: This function converts the arguments in the $Args\ldots$ parameter into a string, according to the format string $Format$, and returns the string. The number of arguments in $Args\ldots$ cannot exceed 10. Format characters are described in Chapter \ref{chapter:io}. \item \texttt{to\_string($Term$) = $String$}\index{\texttt{to\_string/1}}: This function returns a string representation of $Term$. \end{itemize} Other built-ins on terms are given in Sections \ref{sec:unification} and \ref{sec:otherbuiltins}. \section{Variables} Variables in Picat, like variables in mathematics, are value holders. Unlike variables in imperative languages, Picat variables are not symbolic addresses of memory locations. A variable is said to be \emph{free}\index{free variable} if it does not hold any value. A variable is \emph{instantiated}\index{instantiated variable} when it is bound to a value. Picat variables are \emph{single-assignment}\index{single-assignment}, which means that after a variable is instantiated\index{instantiated variable} to a value, the variable will have the same identity as the value. After execution backtracks over a point where a binding took place, the value that was assigned to a variable will be dropped, and the variable will be turned back into a free variable\index{free variable}. A variable name is an identifier that begins with a capital letter or the underscore. For example, the following are valid variable names: \begin{verbatim} X1 _ _ab \end{verbatim} The name \verb+_+ is used for \emph{anonymous variables}\index{anonymous variable}. In a program, different occurrences of \verb+_+ are treated as different variables. So the test \verb+ _ == _+ is always false. 
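The single-assignment behavior can be illustrated at the Picat top level (a sketch; \verb+=+ performs unification, and the exact top-level output may differ between versions):

```
Picat> X = 1, X = 2.    % the second unification fails: X already holds 1
no

Picat> X = 1, Y = X.    % sharing is fine: Y is bound to the same value
X = 1
Y = 1
```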
The following two built-ins are provided to test whether a term is a free variable\index{free variable}: \begin{itemize} \item \texttt{var($Term$)}\index{\texttt{var/1}}: This predicate is true if $Term$ is a free variable\index{free variable}. \item \texttt{nonvar($Term$)}\index{\texttt{nonvar/1}}: This predicate is true if $Term$ is not a free variable\index{free variable}. \end{itemize} An \emph{attributed variable}\index{attributed variable} is a variable that has a map\index{map} of attribute-value pairs attached to it. The following built-ins are provided for attributed variables\index{attributed variable}: \begin{itemize} \item \texttt{attr\_var($Term$)}\index{\texttt{attr\_var/1}}: This predicate is true if $Term$ is an attributed variable\index{attributed variable}. \item \texttt{dvar($Term$)}\index{\texttt{dvar/1}}: This predicate is true if $Term$ is an attributed domain variable. \item \texttt{bool\_dvar($Term$)}\index{\texttt{bool\_dvar/1}}: This predicate is true if $Term$ is an attributed domain variable whose lower bound is 0 and whose upper bound is 1. \item \texttt{dvar\_or\_int($Term$)}\index{\texttt{dvar\_or\_int/1}}: This predicate is true if $Term$ is an attributed domain variable or an integer. \item \texttt{get\_attr($X$, $Key$) = $Val$}\index{\texttt{get\_attr/2}}: This function returns the \texttt{$Val$} of the key-value pair \texttt{$Key$$=$$Val$} that is attached to \texttt{$X$}. It throws an error if \texttt{$X$} has no attribute named $Key$. \item \texttt{get\_attr($X$, $Key$, $DefaultVal$) = $Val$}\index{\texttt{get\_attr/3}}: This function returns \texttt{$Val$} of the key-value pair \texttt{$Key$$=$$Val$} that is attached to \texttt{$X$}. It returns $DefaultVal$ if $X$ does not have the attribute named $Key$. 
\item \texttt{put\_attr($X$, $Key$, $Val$)}\index{\texttt{put\_attr/3}}: This predicate attaches the key-value pair \texttt{$Key$$=$$Val$} to \texttt{$X$}, where \texttt{$Key$} is a non-variable term, and \texttt{$Val$} is any term. \item \texttt{put\_attr($X$, $Key$)}\index{\texttt{put/2}}: This predicate call is the same as \texttt{put\_attr($X$, $Key$,not\_a\_value)}. \end{itemize} \section{Atoms} An atom\index{atom} is a symbolic constant. An atom\index{atom} name can either be quoted or unquoted. An unquoted name is an identifier that begins with a lower-case letter, followed by an optional string\index{string} of letters, digits, and underscores. A quoted name is a single-quoted sequence of arbitrary characters. A character can be represented as a single-character atom\index{atom}. For example, the following are valid atom\index{atom} names: \begin{verbatim} x x_1 '_' '\\' 'a\'b\n' '_ab' '$%' \end{verbatim} No atom\index{atom} name can last more than one line. An atom\index{atom} name cannot contain more than 1000 characters. The backslash character \verb+'\'+ is used as the escape character. So, the name \verb+'a\'b\n'+ contains four characters: \texttt{a}, \texttt{'}, \texttt{b}, and \verb+\n+. The following built-ins are provided for atoms\index{atom}: \begin{itemize} \item \texttt{ascii\_alpha($Term$)}\index{\texttt{ascii\_alpha/1}}: This predicate is true if $Term$ is an atom and the atom is made of one English letter. \item \texttt{ascii\_alpha\_digit($Term$)}\index{\texttt{ascii\_alpha\_digit/1}}: This predicate is true if $Term$ is an atom and the atom is made of one English letter or one digit. \item \texttt{ascii\_digit($Term$)}\index{\texttt{ascii\_digit/1}}: This predicate is true if $Term$ is an atom and the atom is made of one digit. \item \texttt{ascii\_lowercase($Term$)}\index{\texttt{ascii\_lowercase/1}}: This predicate is true if $Term$ is an atom and the atom is made of one English lowercase letter. 
\item \texttt{ascii\_uppercase($Term$)}\index{\texttt{ascii\_uppercase/1}}: This predicate is true if $Term$ is an atom and the atom is made of one English uppercase letter. \item \texttt{atom($Term$)}\index{\texttt{atom/1}}: This predicate is true if $Term$ is an atom\index{atom}. \item \texttt{atom\_chars($Atm$) = $String$}\index{\texttt{atom\_chars/1}}: This function returns string\index{string} that contains the characters of the atom\index{atom} $Atm$. It throws an error if $Atm$ is not an atom\index{atom}. \item \texttt{atom\_codes($Atm$) = $List$}\index{\texttt{atom\_codes/1}}: This function returns the list\index{list} of codes of the characters of the atom\index{atom} $Atm$. It throws an error if $Atm$ is not an atom\index{atom}. \item \texttt{atomic($Term$)}\index{\texttt{atomic/1}}: This predicate is true if $Term$ is an atom\index{atom} or a number\index{number}. \item \texttt{char($Term$)}\index{\texttt{char/1}}: This predicate is true if $Term$ is an atom and the atom is made of one character. \item \texttt{chr($Code$) = $Char$}\index{\texttt{chr/1}}: This function returns the UTF-8 character of the code point $Code$. \item \texttt{digit($Term$)}\index{\texttt{digit/1}}: This predicate is true if $Term$ is an atom and the atom is made of one digit. \item \texttt{len($Atom$) = $Len$}\index{\texttt{len/1}}: This function returns the number of characters in $Atom$. Note that this function is overloaded in such a way that the argument can also be an array, a list, or a structure. \item \texttt{length($Atom$) = $Len$}\index{\texttt{length/1}}: This function is the same as \texttt{len($Atom$)}. \item \texttt{ord($Char$) = $Int$}\index{\texttt{ord/1}}: This function returns the code point of the UTF-8 character $Char$. It throws an error if $Char$ is not a single-character atom. \end{itemize} \section{Numbers} A number\index{number} can be an integer\index{integer} or a real number\index{number}. 
An integer\index{integer} can be a decimal numeral, a binary numeral, an octal numeral, or a hexadecimal numeral. In a numeral, digits can be separated by underscores, but underscore separators are ignored by the tokenizer. For example, the following are valid integers\index{integer}: \begin{tabbing} aa \= aaa \= aaa \= aaa \= aaa \= aaa \= aaa \kill \> \texttt{12\_345} \> \> \> a decimal numeral \\ \> \texttt{0b100} \> \> \> 4 in binary notation \\ \> \texttt{0o73} \> \> \> 59 in octal notation \\ \> \texttt{0xf7} \> \> \> 247 in hexadecimal notation \end{tabbing} A real number\index{number} consists of an optional integer part\index{integer}, an optional decimal fraction preceded by a decimal point, and an optional exponent. If an integer part\index{integer} exists, then it must be followed by either a fraction or an exponent in order to distinguish the real number\index{number} from an integer literal\index{integer}. For example, the following are valid real numbers\index{number}. \begin{verbatim} 12.345 0.123 12e-10 0.12E10 \end{verbatim} Table \ref{tab:arithdef} gives the meaning of each of the numeric operators in Picat, from the operator with the highest precedence (\verb+**+) to the one with the lowest precedence (\verb+..+). Except for the power operator \verb+**+, which is right-associative, all of the arithmetic operators are left-associative.
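For example, the truncated integer division \verb+//+ and the floored integer division \texttt{div} differ when the operands have different signs, and the values of \texttt{mod} and \texttt{rem} follow from their defining formulas in Table \ref{tab:arithdef}:

\begin{verbatim}
Picat> X = -7 // 2
X = -3
Picat> X = -7 div 2
X = -4
Picat> X = -7 mod 2
X = 1
Picat> X = -7 rem 2
X = -1
\end{verbatim}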
\begin{table}
\caption{\label{tab:arithdef}Arithmetic Operators}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
\texttt{$X$ ** $Y$} & power \\ \hline
\texttt{+$X$} & same as $X$ \\ \hline
\texttt{-$X$} & sign reversal \\ \hline
{\tt \verb+~+$X$ } & bitwise complement \\ \hline
\texttt{$X$ * $Y$} & multiplication \\ \hline
\texttt{$X$ / $Y$} & division \\ \hline
\texttt{$X$ // $Y$} & integer division, truncated \\ \hline
\texttt{$X$ /> $Y$} & integer division (ceiling($X$ / $Y$)) \\ \hline
\texttt{$X$ /< $Y$} & integer division (floor($X$ / $Y$)) \\ \hline
\texttt{$X$ div $Y$} & integer division, floored \\ \hline
\texttt{$X$ mod $Y$} & modulo, same as $X$ - floor($X$ div $Y$) * $Y$ \\ \hline
\texttt{$X$ rem $Y$} & remainder ($X$ - ($X$ // $Y$) * $Y$) \\ \hline
\texttt{$X$ + $Y$} & addition \\ \hline
\texttt{$X$ - $Y$} & subtraction \\ \hline
\texttt{$X$ >> $Y$} & right shift \\ \hline
% \texttt{$X$ >>> $Y$} & unsigned right shift \\ \hline
\texttt{$X$ << $Y$} & left shift \\ \hline
{\tt $X$ \verb+/\+ $Y$} & bitwise and \\ \hline
{\tt $X$ \verb+^+ $Y$} & bitwise xor \\ \hline
{\tt $X$ \verb+\/+ $Y$} & bitwise or \\ \hline
{\tt $From$ \verb+..+ $Step$ \verb+..+ $To$} & A range (list) of numbers with a step \\ \hline
{\tt $From$ \verb+..+ $To$} & A range (list) of numbers with step 1 \\ \hline
{\tt $X$ \verb+=:=+ $Y$} & numerically equal \\ \hline
\end{tabular}
\end{center}
\end{table}
In addition to the numeric operators, the \texttt{basic} module also provides the following built-ins for numbers\index{number}: \begin{itemize} \item \texttt{between($From$, $To$, $X$)}\index{\texttt{between/3}} (nondet): If $X$ is bound to an integer, then this predicate determines whether $X$ is between $From$ and $To$. Otherwise, if $X$ is unbound, then this predicate nondeterministically selects $X$ from the integers that are between $From$ and $To$. It is the same as \texttt{member($X$, $From$..$To$)}\index{\texttt{member/2}}.
\item \texttt{bigint($Term$)}\index{\texttt{bigint/1}}: This predicate is true if $Term$ is a big integer. \item \texttt{float($Term$)}\index{\texttt{float/1}}: This predicate is true if $Term$ is a real number\index{number}. \item \texttt{int($Term$)}\index{\texttt{int/1}}: This predicate is true if $Term$ is an integer. \item \texttt{integer($Term$)}\index{\texttt{integer/1}}: The same as \texttt{int($Term$)}. \item \texttt{max($X$, $Y$) = $Val$}\index{\texttt{max/2}}: This function returns the maximum of $X$ and $Y$, where $X$ and $Y$ are terms. \item \texttt{maxint\_small() = $Int$}\index{\texttt{maxint\_small/0}}: This function returns the maximum integer that is represented in one word. All integers that are greater than this integer are represented as \textit{big integers}. \item \texttt{min($X$, $Y$) = $Val$}\index{\texttt{min/2}}: This function returns the minimum of $X$ and $Y$, where $X$ and $Y$ are terms. \item \texttt{minint\_small() = $Int$}\index{\texttt{minint\_small/0}}: This function returns the minimum integer that is represented in one word. All integers that are smaller than this integer are represented as \textit{big integers}. \item \texttt{number($Term$)}\index{\texttt{number/1}}: This predicate is true if $Term$ is a number\index{number}. \item \texttt{number\_chars($Num$) = $String$}\index{\texttt{number\_chars/1}}: This function returns a list\index{list} of characters of $Num$. This function is the same as \texttt{to\_fstring("\%d",$Num$)}\index{\texttt{to\_fstring/2}} if $Num$ is an integer\index{integer}, and the same as \texttt{to\_fstring("\%f",$Num$)}\index{\texttt{to\_fstring/2}} if $Num$ is a real number\index{number}. \item \texttt{number\_codes($Num$) = $List$}\index{\texttt{number\_codes/1}}: This function returns a list\index{list} of codes of the characters of $Num$. It is the same as \texttt{number\_chars($Num$).to\_codes()}\index{\texttt{number\_chars/1}}\index{\texttt{to\_codes/1}}. 
\item \texttt{real($Term$)}\index{\texttt{real/1}}: This predicate is the same as \texttt{float($Term$)}\index{\texttt{float/1}}. \item \texttt{to\_binary\_string($Int$) = $String$}\index{\texttt{to\_binary\_string/1}}: This function returns the binary representation of the integer\index{integer} $Int$ as a string\index{string}. \item \texttt{to\_float($NS$) = $Real$}\index{\texttt{to\_float/1}}: This function is the same as \texttt{$NS$*1.0} if $NS$ is a number, and the same as \texttt{parse\_term($NS$)} if $NS$ is a string of digits. \item \texttt{to\_hex\_string($Int$) = $String$}\index{\texttt{to\_hex\_string/1}}: This function returns the hexadecimal representation of the integer\index{integer} $Int$ as a string\index{string}. \item \texttt{to\_int($ANS$) = $Int$}\index{\texttt{to\_int/1}}: This function is the same as \texttt{truncate($ANS$)}\index{\texttt{truncate/1}} in the \texttt{math} module if $ANS$ is a number, the same as \texttt{ord($ANS$)-ord('0')} if $ANS$ is a digit character, and the same as \texttt{parse\_term($ANS$)} if $ANS$ is a string. \item \texttt{to\_integer($ANS$) = $Int$}\index{\texttt{to\_integer/1}}: This function is the same as \texttt{to\_int($ANS$)}. \item \texttt{to\_number($ANS$) = $Num$}\index{\texttt{to\_number/1}}: This function is the same as $ANS$ if $ANS$ is a number, the same as \texttt{ord($ANS$)-ord('0')} if $ANS$ is a digit character, and the same as \texttt{parse\_term($ANS$)} if $ANS$ is a string. \item \texttt{to\_oct\_string($Int$) = $String$}\index{\texttt{to\_oct\_string/1}}: This function returns the octal representation of the integer $Int$ as a string\index{string}. \item \texttt{to\_radix\_string($Int$,$Base$) = $String$}\index{\texttt{to\_radix\_string/2}}: This function returns the representation of the integer $Int$ in radix $Base$ as a string\index{string}, where $Base$ must be greater than 1 and less than 37.
The call \texttt{to\_oct\_string($Int$)} is the same as \texttt{to\_radix\_string($Int$,8)}. \item \texttt{to\_real($NS$) = $Real$}\index{\texttt{to\_real/1}}: This function is the same as \texttt{to\_float($NS$)}. \end{itemize} The \texttt{math} module provides more numeric functions. See Appendix \ref{chapter:math}. \section{Compound Terms} A compound term\index{compound value} can be a \emph{list}\index{list} or a \emph{structure}\index{structure}. Components of compound terms\index{compound value} can be accessed with subscripts. Let $X$ be a variable that references a compound value\index{compound value}, and let $I$ be an integer expression that represents a subscript. The index notation \texttt{$X$[$I$]} is a special function that returns the $I$th component of $X$ if $I$ is an integer or a list of components if $I$ is a range in the form of $l..u$, counting from the beginning. Subscripts begin at $1$, meaning that $X$[$1$] is the first component of $X$. An index notation can take multiple subscripts. For example, the expression \texttt{X[1,2]} is the same as \texttt{T[2]}, where \texttt{T} is a temporary variable that references the component that is returned by \texttt{X[1]}. The predicate \texttt{compound($Term$)}\index{\texttt{compound/1}} is true if $Term$ is a compound term\index{compound value}. \subsection{\label{subsec:lists}Lists} A list\index{list} takes the form \texttt{[$t_1$,$\ldots$,$t_{n}$]}, where each $t_i$ ($1\le i \le n$) is a term. Let $L$ be a list\index{list}. The expression \texttt{$L$.length}\index{\texttt{length/1}}, which is the same as the functions \texttt{get($L$,length)}\index{\texttt{get/2}} and \texttt{length($L$)}\index{\texttt{length/1}}, returns the length of $L$. Note that a list is represented internally as a singly-linked list. Also note that the length of a list is not stored in memory; instead, it is recomputed each time that the function \texttt{length} is called. 
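For example, the length of a list can be obtained through any of the three equivalent forms above; because the length is recomputed on every call, it is worth saving the result in a variable when it is needed repeatedly:

\begin{verbatim}
Picat> L = [a,b,c], N = L.length
L = [a,b,c]
N = 3
\end{verbatim}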
The symbol \verb+'|'+ is not an operator, but a separator that separates the first element (so-called \emph{car}\index{car}) from the rest of the list\index{list} (so-called \emph{cdr}\index{cdr}). The \emph{cons}\index{cons} notation {\tt [$H$\verb+|+$T$]} can occur in a pattern or in an expression. When it occurs in a pattern, it matches any list\index{list} in which $H$ matches the car\index{car} and $T$ matches the cdr\index{cdr}. When it occurs in an expression, it builds a list\index{list} from $H$ and $T$. The notation {\tt [$A_1$,$A_2$,$\ldots$,$A_n$\verb+|+$T$]} is a shorthand for {\tt [$A_1$\verb+|+[$A_2$\verb+|+$\ldots$[$A_n$\verb+|+$T$]$\ldots$]]}. So \texttt{[a,b,c]} is the same as \texttt{[a|[b|[c|[]]]]}. The \texttt{basic} module provides the following built-ins on lists, most of which are overloaded for strings (see \ref{subsec:strings}) and arrays (see \ref{subsec:arrays}). \begin{itemize} \item \texttt{$List_1$ ++ $List_2$ = $List$}: This function returns the concatenated list of $List_1$ and $List_2$. \item \texttt{append($X$, $Y$, $Z$)}\index{\texttt{append/3}} (nondet): This predicate is true if appending $Y$ to $X$ can create $Z$. This predicate may backtrack if $X$ is not a complete list.\footnote{A list is \emph{complete} \index{complete list} if it is empty, or if its tail is complete. For example, \texttt{[a,b,c]} and \texttt{[X,Y,Z]} are complete, but \texttt{[a,b|T]} is not complete if \texttt{T} is a variable.} \item \texttt{append($W$, $X$, $Y$, $Z$)}\index{\texttt{append/4}} (nondet): This predicate is defined as: \begin{verbatim} append(W,X,Y,Z) => append(W,X,WX), append(WX,Y,Z). \end{verbatim} \item \texttt{avg($List$) = $Val$}\index{\texttt{avg/1}}: This function returns the average of all the elements in $List$. This function throws an exception if $List$ is not a list or any of the elements is not a number.
\item \texttt{delete($List$, $X$) = $ResList$}\index{\texttt{delete/2}}: This function deletes the first occurrence of $X$ from $List$, returning the result in $ResList$. The built-in \verb+!=/2+ is used to test if two terms are different. No variables in $List$ or $X$ will be bound after this function call. \item \texttt{delete\_all($List$, $X$) = $ResList$}\index{\texttt{delete\_all/2}}: This function deletes all occurrences of $X$ from $List$, returning the result in $ResList$. The built-in \verb+!=/2+ is used to test if two terms are different. \item \texttt{first($List$) = $Term$}\index{\texttt{first/1}}: This function returns the first element of $List$. \item \texttt{flatten($List$) = $ResList$}\index{\texttt{flatten/1}}: This function flattens a list of nested lists into a list. For example, \texttt{flatten([[1],[2,[3]]])} returns \texttt{[1,2,3]}. \item \texttt{head($List$) = $Term$}\index{\texttt{head/1}}: This function returns the head of the list $List$. For example, \texttt{head([1,2,3])} returns \texttt{1}. \item \texttt{insert($List$, $Index$, $Elm$) = $ResList$}\index{\texttt{insert/3}}: This function inserts $Elm$ into $List$ at the index $Index$, returning the result in $ResList$. After insertion, the original $List$ is not changed, and $ResList$ is the same as \\ \texttt{$List$.slice(1, $Index$-1)++[$Elm$|$List$.slice($Index$, $List$.length)]}. \item \texttt{insert\_all($List$, $Index$, $AList$) = $ResList$}\index{\texttt{insert\_all/3}}: This function inserts all of the elements in $AList$ into $List$ at the index $Index$, returning the result in $ResList$. After insertion, the original $List$ is not changed, and $ResList$ is the same as \\ \texttt{$List$.slice(1, $Index$-1)++$AList$++$List$.slice($Index$, $List$.length)}. \item \texttt{insert\_ordered($List$,$Term$) = $ResList$}\index{\texttt{insert\_ordered/2}}: This function inserts $Term$ into the ordered list $List$, returning the result in $ResList$, such that the resulting list remains sorted.
\item \texttt{insert\_ordered\_down($List$,$Term$) = $ResList$}\index{\texttt{insert\_ordered\_down/2}}: This function inserts $Term$ into the list $List$, which is sorted in descending order, returning the result in $ResList$, such that the resulting list remains sorted in descending order. \item \texttt{last($List$) = $Term$}\index{\texttt{last/1}}: This function returns the last element of $List$. \item \texttt{len($List$) = $Len$}\index{\texttt{len/1}}: This function returns the number of elements in $List$. Note that this function is overloaded in such a way that the argument can also be an atom, an array, or a structure. \item \texttt{length($List$) = $Len$}\index{\texttt{length/1}}: This function is the same as \texttt{len($List$)}. \item \texttt{list($Term$)}\index{\texttt{list/1}}: This predicate is true if $Term$ is a list\index{list}. \item \texttt{max($List$) = $Val$}\index{\texttt{max/1}}: This function returns the maximum value that is in $List$, where $List$ is a list of terms. \item \texttt{membchk($Term$, $List$)}\index{\texttt{membchk/2}}: This predicate is true if $Term$ is an element of $List$. \item \texttt{member($Term$, $List$)}\index{\texttt{member/2}} (nondet): This predicate is true if $Term$ is an element of $List$. When $Term$ is a variable, this predicate may backtrack, instantiating\index{instantiated variable} $Term$ to different elements of $List$. \item \texttt{min($List$) = $Val$}\index{\texttt{min/1}}: This function returns the minimum value that is in $List$, where $List$ is a list or an array of terms. \item \texttt{new\_list($N$) = $List$}\index{\texttt{new\_list/1}}: This function creates a new list that has $N$ free variable\index{free variable} arguments. \item \texttt{new\_list($N$,$InitVal$) = $List$}\index{\texttt{new\_list/2}}: This function creates a new list that has $N$ arguments all initialized to $InitVal$. \item \texttt{nth($Index$, $List$, $Elem$)}\index{\texttt{nth/3}} (nondet): This predicate is true when $Elem$ is the $Index$'th element of $List$. Counting starts at 1.
When $Index$ is a variable, this predicate may backtrack, instantiating $Index$ to a different integer between 1 and \texttt{len($List$)}. \item \texttt{prod($List$) = $Val$}\index{\texttt{prod/1}}: This function returns the product of all of the values in $List$. \item \texttt{remove\_dups($List$) = $ResList$}\index{\texttt{remove\_dups/1}}: This function removes all duplicate values from $List$, retaining only the first occurrence of each value. The result is returned in $ResList$. Note that an $O(n^2)$ algorithm is used in the implementation. If $List$ is large, then \texttt{sort\_remove\_dups($List$)} may be faster than this function. \item \texttt{reverse($List$) = $ResList$}\index{\texttt{reverse/1}}: This function reverses the order of the elements in $List$, returning the result in $ResList$. \item \texttt{select($X$, $List$, $ResList$)}\index{\texttt{select/3}} (nondet): This predicate nondeterministically selects an element $X$ from $List$, and binds $ResList$ to the list after $X$ is removed. On backtracking, it selects the next element. \item \texttt{sort($List$) = $SList$}\index{\texttt{sort/1}}: This function sorts the elements of $List$ in ascending order, returning the result in $SList$. \item \texttt{sort($List$,$KeyIndex$) = $SList$}\index{\texttt{sort/2}}: This function sorts the elements of $List$ by the key index $KeyIndex$ in ascending order, returning the result in $SList$. The elements of $List$ must be compound values and $KeyIndex$ must be a positive integer that does not exceed the length of any of the elements of $List$. This function is defined as follows: \begin{verbatim} sort(List,KeyIndex) = SList => List1 = [(E[KeyIndex],E) : E in List], List2 = sort(List1), SList = [E : (_,E) in List2]. \end{verbatim} \item \texttt{sort\_remove\_dups($List$) = $SList$}\index{\texttt{sort\_remove\_dups/1}}: This function is the same as the following, but is faster. 
\begin{tabbing} aa \= aaa \= aaa \= aaa \=aaa \= aaa \= aaa \= aaa \kill \> \texttt{sort($List$).remove\_dups()} \end{tabbing} \item \texttt{sort\_remove\_dups($List$,$KeyIndex$) = $SList$}\index{\texttt{sort\_remove\_dups/2}}: This function is the same as the following, but is faster. \begin{tabbing} aa \= aaa \= aaa \= aaa \=aaa \= aaa \= aaa \= aaa \kill \> \texttt{sort($List$,$KeyIndex$).remove\_dups()} \end{tabbing} \item \texttt{sort\_down($List$) = $SList$}\index{\texttt{sort\_down/1}}: This function sorts the elements of $List$ in descending order, returning the result in $SList$. \item \texttt{sort\_down($List$,$KeyIndex$) = $SList$}\index{\texttt{sort\_down/2}}: This function sorts the elements of $List$ by the key index $KeyIndex$ in descending order, returning the result in $SList$. \item \texttt{sort\_down\_remove\_dups($List$) = $SList$}\index{\texttt{sort\_down\_remove\_dups/1}}: This function is the same as the following, but is faster. \begin{tabbing} aa \= aaa \= aaa \= aaa \=aaa \= aaa \= aaa \= aaa \kill \> \texttt{sort\_down($List$).remove\_dups()} \end{tabbing} \item \texttt{sort\_down\_remove\_dups($List$,$KeyIndex$) = $SList$}\index{\texttt{sort\_down\_remove\_dups/2}}: This function is the same as the following, but is faster. \begin{tabbing} aa \= aaa \= aaa \= aaa \=aaa \= aaa \= aaa \= aaa \kill \> \texttt{sort\_down($List$,$KeyIndex$).remove\_dups()} \end{tabbing} \item \texttt{slice($List$,$From$,$To$) = $SList$}\index{\texttt{slice/3}}: This function returns the sliced list of $List$ from index $From$ through index $To$. $From$ must not be less than 1. It is the same as the index notation $List$[$From$..$To$]. \item \texttt{slice($List$,$From$) = $SList$}\index{\texttt{slice/2}}: This function is the same as the following.
\begin{tabbing} aa \= aaa \= aaa \= aaa \=aaa \= aaa \= aaa \= aaa \kill \> \texttt{slice($List$,$From$,$List$.length)} \end{tabbing} \item \texttt{sum($List$) = $Val$}\index{\texttt{sum/1}}: This function returns the sum of all of the values in $List$. \item \texttt{tail($List$) = $Term$}\index{\texttt{tail/1}}: This function returns the tail of the list $List$. For example, the call \texttt{tail([1,2,3])} returns \texttt{[2,3]}. \item \texttt{to\_array($List$) = $Array$}\index{\texttt{to\_array/1}}: This function converts the list\index{list} $List$ to an array\index{array}. The elements of the array\index{array} are in the same order as the elements of the list. \item \texttt{zip($List_1$, $List_2$, $\ldots$, $List_n$) = $List$}\index{\texttt{zip}}: This function makes a list\index{list} of array tuples. The $j$th tuple in the list takes the form \texttt{\{$E_{1j},\ldots,E_{nj}$\}}, where $E_{ij}$ is the $j$th element in $List_i$. In the current implementation, $n$ can be 2, 3, or 4. \end{itemize} \subsection{\label{subsec:strings}Strings} A \emph{string}\index{string} is represented as a list\index{list} of single-character atoms\index{atom}. For example, the string\index{string} \texttt{"hello"} is the same as the list\index{list} \texttt{[h,e,l,l,o]}. In addition to the built-ins on lists\index{list}, the following built-ins are provided for strings\index{string}: \begin{itemize} \item \texttt{string($Term$)}\index{\texttt{string/1}}: This predicate is true if $Term$ is a string\index{string}. \item \texttt{to\_lowercase($String$) = $LString$}\index{\texttt{to\_lowercase/1}}: This function converts all uppercase alphabetic characters into lowercase characters, returning the result in $LString$. \item \texttt{to\_uppercase($String$) = $UString$}\index{\texttt{to\_uppercase/1}}: This function converts all lowercase alphabetic characters into uppercase characters, returning the result in $UString$.
\end{itemize} \subsection{Structures} A structure\index{structure} takes the form \texttt{\$$s$($t_1$,$\ldots$,$t_{n}$)}, where $s$ is an atom, and $n$ is called the \emph{arity}\index{arity} of the structure\index{structure}. The dollar symbol is used to distinguish a structure\index{structure} from a function call. The \emph{functor}\index{functor} of a structure\index{structure} comprises the name and the arity\index{arity} of the structure\index{structure}. The following types of structures\index{structure} can never denote functions, meaning that they do not need to be preceded by a \$ symbol. \begin{tabbing} aa \= aaa \= aaa \= aaa \=aaa \= aaa \= aaa \= aaa \kill \> Goals: \> \> \> \> \> \> \texttt{(a,b)},\ \texttt{(a;b)},\ \texttt{not a},\ \texttt{X = Y},\ \verb-X != 100-,\ \verb+X > 1+ \\ \> Constraints: \> \> \> \> \> \> \verb-X+Y #= 100-,\ \verb+X #!= 1+ \\ \> Arrays: \> \> \> \> \> \> \verb+{2,3,4}+,\ \verb+{P1,P2,P3}+ \\
% \> Lambda: \> \> \> \> \> \> \texttt{lambda([X, Y], X + Y)}
\end{tabbing} Picat disallows creation of the following types of structures\index{structure}: \begin{tabbing} aa \= aaa \= aaa \= aaa \= aaa \= aaa \= aaa \= aaa \kill \> Dot notations: \> \> \> \> \> \> \texttt{math.pi},\ \texttt{my\_module.f(a)} \\ \> Index notations: \> \> \> \> \> \> \texttt{X[1]+2},\ \texttt{X[Y[I]]} \\ \> Assignments: \> \> \> \> \> \> \texttt{X:=Y+Z},\ \texttt{X:=X+1} \\ \> Ranges: \> \> \> \> \> \> \texttt{1..10},\ \texttt{1..2..10} \\ \> List comprehensions: \> \> \> \> \> \> \texttt{[X : X in 1..5]} \\ \> Array comprehensions: \> \> \> \> \> \> \texttt{\{X : X in 1..5\}} \\ \> If-then: \> \> \> \> \> \> \texttt{if X>Y then Z=X else Z=Y end} \\ \> Loops: \> \> \> \> \> \> \texttt{foreach (X in L) writeln(X) end } \end{tabbing} The compiler will report a syntax error when it encounters any of these expressions within a term constructor.
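For example, the following queries create a \texttt{point} structure (the name \texttt{point} is arbitrary) and access its second component with the index notation:

\begin{verbatim}
Picat> S = $point(1,2)
S = point(1,2)
Picat> S = $point(1,2), X = S[2]
S = point(1,2)
X = 2
\end{verbatim}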
The following built-ins are provided for structures\index{structure}: \begin{itemize} \item \texttt{arity($Struct$) = $Arity$}\index{\texttt{arity/1}}: This function returns the arity of $Struct$, which must be a structure. \item \texttt{len($Struct$) = $Arity$}\index{\texttt{len/1}}: This function is the same as \texttt{arity($Struct$)}. \item \texttt{name($Struct$) = $Name$}\index{\texttt{name/1}}: This function returns the name of $Struct$. \item \texttt{new\_struct($Name$, $IntOrList$) = $Struct$}\index{\texttt{new\_struct/2}}: This function creates a structure\index{structure} that has the name $Name$. If $IntOrList$ is an integer, $N$, then the structure\index{structure} has $N$ free variable\index{free variable} arguments. Otherwise, if $IntOrList$ is a list\index{list}, then the structure\index{structure} contains the elements in the list\index{list}. \item \texttt{struct($Term$)}\index{\texttt{struct/1}}: This predicate is true if $Term$ is a structure\index{structure}. \item \texttt{to\_list($Struct$) = $List$}\index{\texttt{to\_list/1}}: This function returns a list\index{list} of the components of the structure\index{structure} $Struct$. \end{itemize} \subsection{\label{subsec:arrays}Arrays} An \emph{array}\index{array} takes the form \texttt{\{$t_1$,$\ldots$,$t_{n}$\}}, which is a special structure\index{structure} with the name \texttt{'\{\}'} and arity\index{arity} $n$. Note that, unlike a list, an array always has its length stored in memory, so the function \texttt{length($Array$)} always takes constant time. Also note that Picat supports constant-time access of array elements, so the index notation \texttt{A[I]} takes constant time when $I$ is an integer. In addition to the built-ins for structures\index{structure}, the following built-ins are provided for arrays\index{array}: \begin{itemize} \item \texttt{array($Term$)}\index{\texttt{array/1}}: This predicate is true if $Term$ is an array. 
\item \texttt{new\_array($D_1$, $\ldots$, $D_n$) = $Arr$}\index{\texttt{new\_array}}: This function creates an n-dimensional array, where each $D_i$ is an integer expression that specifies the size of a dimension. In the current implementation, $n$ cannot exceed 10. \end{itemize} The following built-ins, which are originally provided for lists (see \ref{subsec:lists}), are overloaded for arrays: \begin{itemize} \item \texttt{$Array_1$ ++ $Array_2$ = $Array$} \item \texttt{avg($Array$) = $Val$}\index{\texttt{avg/1}} \item \texttt{first($Array$) = $Term$}\index{\texttt{first/1}} \item \texttt{last($Array$) = $Term$}\index{\texttt{last/1}} \item \texttt{len($Array$) = $Len$}\index{\texttt{len/1}} \item \texttt{length($Array$) = $Len$}\index{\texttt{length/1}} \item \texttt{max($Array$) = $Val$}\index{\texttt{max/1}} \item \texttt{min($Array$) = $Val$}\index{\texttt{min/1}} \item \texttt{nth($Index$, $Array$, $Elem$)}\index{\texttt{nth/3}} (nondet) \item \texttt{reverse($Array$) = $ResArray$}\index{\texttt{reverse/1}} \item \texttt{slice($Array$,$From$,$To$) = $SArray$}\index{\texttt{slice/3}} \item \texttt{slice($Array$,$From$) = $SArray$}\index{\texttt{slice/2}} \item \texttt{sum($Array$) = $Val$}\index{\texttt{sum/1}} \item \texttt{sort($Array$) = $SArray$}\index{\texttt{sort/1}} \item \texttt{sort($Array$,$KeyIndex$) = $SArray$}\index{\texttt{sort/2}} \item \texttt{sort\_remove\_dups($Array$) = $SArray$}\index{\texttt{sort\_remove\_dups/1}} \item \texttt{sort\_remove\_dups($Array$,$KeyIndex$) = $SArray$}\index{\texttt{sort\_remove\_dups/2}} \item \texttt{sort\_down($Array$) = $SArray$}\index{\texttt{sort\_down/1}} \item \texttt{sort\_down($Array$,$KeyIndex$) = $SArray$}\index{\texttt{sort\_down/2}} \item \texttt{sort\_down\_remove\_dups($Array$) = $SArray$}\index{\texttt{sort\_down\_remove\_dups/1}} \item \texttt{sort\_down\_remove\_dups($Array$,$KeyIndex$) = $SArray$}\index{\texttt{sort\_down\_remove\_dups/2}} \end{itemize} Note that many of the overloaded
built-ins for arrays are not implemented efficiently, but are provided for convenience. For example, \texttt{sort(Array)} is implemented as follows: \begin{verbatim} sort(Array) = Array.to_list().sort().to_array(). \end{verbatim} \subsection{Maps} A \emph{map}\index{map} is a hash table that is represented as a structure\index{structure} that contains a set of key-value pairs. The functor\index{functor} of the structure\index{structure} that is used for a map\index{map} is not important. An implementation may ban access to the name and the arity\index{arity} of the structure\index{structure} of a map\index{map}. Maps\index{map} must be created with the built-in function \texttt{new\_map}\index{\texttt{new\_map/1}}, unless they are prebuilt (see Section \ref{prebuiltmaps}). In addition to the built-ins for structures\index{structure}, the following built-ins are provided for maps\index{map}: \begin{itemize} \item \texttt{clear($Map$)}\index{\texttt{clear/1}}: This predicate clears the map $Map$. It throws an error if $Map$ is not a map. \item \texttt{get($Map$, $Key$) = $Val$}\index{\texttt{get/2}}: This function returns \texttt{$Val$} of the key-value pair \texttt{$Key$$=$$Val$} in \texttt{$Map$}. It throws an error if $Map$ does not contain the key $Key$. \item \texttt{get($Map$, $Key$, $DefaultVal$) = $Val$}\index{\texttt{get/3}}: This function returns \texttt{$Val$} of the key-value pair \texttt{$Key$$=$$Val$} in \texttt{$Map$}. It returns $DefaultVal$ if $Map$ does not contain $Key$. \item \texttt{has\_key($Map$, $Key$)}\index{\texttt{has\_key/2}}: This predicate is true if $Map$ contains a pair with $Key$. \item \texttt{keys($Map$) = $List$}\index{\texttt{keys/1}}: This function returns the list of keys of the pairs in $Map$. \item \texttt{map($Term$)}\index{\texttt{map/1}}: This predicate is true if $Term$ is a map\index{map}.
\item \texttt{map\_to\_list($Map$) = $PairsList$}\index{\texttt{map\_to\_list/1}}: This function returns a list\index{list} of \texttt{$Key$$=$$Val$} pairs that constitute $Map$. \item \texttt{new\_map($IntOrPairsList$) = $Map$}\index{\texttt{new\_map/1}}: This function creates a map\index{map} with an initial capacity or an initial list of pairs. \item \texttt{new\_map($N$, $PairsList$) = $Map$}\index{\texttt{new\_map/2}}: This function creates a map\index{map} with the initial capacity $N$ and the initial list of pairs $PairsList$, where each pair has the form \texttt{$Key$$=$$Val$}. \item \texttt{put($Map$, $Key$, $Val$)}\index{\texttt{put/3}}: This predicate attaches the key-value pair \texttt{$Key$$=$$Val$} to \texttt{$Map$}, where \texttt{$Key$} is a non-variable term, and \texttt{$Val$} is any term. \item \texttt{put($Map$, $Key$)}\index{\texttt{put/2}}: This predicate is the same as \texttt{put($Map$, $Key$, not\_a\_value)}. \item \texttt{values($Map$) = $List$}\index{\texttt{values/1}}: This function returns the list\index{list} of values of the pairs in $Map$. \item \texttt{size($Map$) = $Size$}\index{\texttt{size/1}}: This function returns the number of pairs in $Map$. \end{itemize} Most of the built-ins are overloaded for attributed variables\index{attributed variable}. \subsection{Sets} A set\index{set} is a map where every key is associated with the atom \texttt{not\_a\_value}. All of the built-ins for maps can be applied to sets. For example, the built-in predicate \texttt{has\_key($Set$,$Elm$)} tests if $Elm$ is in $Set$. In addition to the built-ins on maps, the following built-ins are provided for sets: \begin{itemize} \item \texttt{new\_set($IntOrKeysList$) = $Set$}\index{\texttt{new\_set/1}}: This function creates a set with an initial capacity or an initial list of keys. \item \texttt{new\_set($N$,$KeysList$) = $Set$}\index{\texttt{new\_set/2}}: This function creates a set with the initial capacity $N$ and the initial list of keys $KeysList$.
\end{itemize} \subsection{Heaps} A heap\index{heap}\footnote{Note that a heap, as a data structure, is different from the heap area, in which data, including heaps and maps, are stored.} is a complete binary tree represented as an array. A heap can be a \emph{min-heap}\index{min-heap} or a \emph{max-heap}\index{max-heap}. In a min-heap, the value at the root of each subtree is the minimum among all the values in the subtree. In a max-heap, the value at the root of each subtree is the maximum among all the values in the subtree. The following built-ins are provided for heaps\index{heap}: \begin{itemize} \item \texttt{heap\_is\_empty($Heap$)}\index{\texttt{heap\_is\_empty/1}}: This predicate is true if $Heap$ is empty. \item \texttt{heap\_pop($Heap$) = $Elm$}\index{\texttt{heap\_pop/1}}: This function removes the root element from the heap, and returns the element. As the function updates the heap, it is not pure. The update will be undone when execution backtracks over the call. \item \texttt{heap\_push($Heap$, $Elm$)}\index{\texttt{heap\_push/2}}: This predicate pushes $Elm$ into $Heap$ in a way that maintains the heap property. The update to $Heap$ will be undone when execution backtracks over the call. \item \texttt{heap\_size($Heap$) = $Size$}\index{\texttt{heap\_size/1}}: This function returns the size of $Heap$. \item \texttt{heap\_to\_list($Heap$) = $List$}\index{\texttt{heap\_to\_list/1}}: This function returns a list of the elements in $Heap$. \item \texttt{heap\_top($Heap$) = $Elm$}\index{\texttt{heap\_top/1}}: This function returns the element at the root of the heap. If $Heap$ is a min-heap, then the element is guaranteed to be the minimum, and if $Heap$ is a max-heap, then the element is guaranteed to be the maximum. \item \texttt{new\_max\_heap($IntOrList$) = $Heap$}\index{\texttt{new\_max\_heap/1}}: This function creates a max-heap. If $IntOrList$ is an integer, then it indicates the capacity.
Otherwise, if $IntOrList$ is a list, then the max-heap contains the elements in the list in an order that maintains the heap property. \item \texttt{new\_min\_heap($IntOrList$) = $Heap$}\index{\texttt{new\_min\_heap/1}}: This function creates a min-heap. If $IntOrList$ is an integer, then it indicates the capacity. Otherwise, if $IntOrList$ is a list, then the min-heap contains the elements in the list in an order that maintains the heap property. \end{itemize} \subsection*{Example} \begin{verbatim} main => L = [1,3,2,4,5,3,6], H = new_min_heap(L), N = H.heap_size(), S = [H.heap_pop() : _ in 1..N], println(S). \end{verbatim} \section{\label{sec:unification}Equality Testing, Unification, and Term Comparison} The equality test \texttt{$T_1$ == $T_2$} \index{{\verb+==/2+}} is true if term $T_1$ and term $T_2$ are identical. Two variables are identical if they are aliases. Two primitive values\index{primitive value} are identical if they have the same type and the same internal representation. Two lists\index{list} are identical if the cars\index{car} are identical and the cdrs\index{cdr} are identical. Two structures\index{structure} are identical if their functors\index{functor} are the same and their components are pairwise identical. The inequality test \texttt{$T_1$ !== $T_2$} is the same as \texttt{not $T_1$ == $T_2$}. Note that two terms can be identical even if they are stored in different memory locations. Also note that it takes linear time in the worst case to test whether two terms are identical, unlike in C-family languages, in which the equality test operator \texttt{==} only compares addresses. The unification \texttt{$T_1$ = $T_2$} \index{{\verb+=/2+}} is true if term $T_1$ and term $T_2$ are already identical, or if they can be made identical by instantiating\index{instantiated variable} the variables in the terms. The built-in \texttt{$T_1$ != $T_2$} is true if term $T_1$ and term $T_2$ are not unifiable. 
The predicate \texttt{bind\_vars($Term$,$Val$)}\index{\texttt{bind\_vars/2}} binds all of the variables in $Term$ to $Val$. \subsection*{Example}
\begin{verbatim}
Picat> X = 1
X = 1
Picat> $f(a,b) = $f(a,b)
yes
Picat> [H|T] = [a,b,c]
H = a
T = [b,c]
Picat> $f(X,b) = $f(a,Y)
X = a
Y = b
Picat> bind_vars({X,Y,Z},a)
Picat> X = $f(X)
\end{verbatim}
The last query illustrates the \emph{occurs-check problem}\index{occurs-check problem}. When binding \texttt{X} to \texttt{f(X)}, Picat does not check if \texttt{X} occurs in \texttt{f(X)} for the sake of efficiency. This unification creates a cyclic term, which can never be printed. When a unification's operands contain attributed variables\index{attributed variable}, the implementation is more complex. When a plain variable is unified with an attributed variable\index{attributed variable}, the plain variable is bound to the attributed variable\index{attributed variable}. When two attributed variables\index{attributed variable}, say $Y$ and $O$, where $Y$ is younger than $O$, are unified, $Y$ is bound to $O$, but $Y$'s attributes are not copied to $O$. Since garbage collection does not preserve the seniority of terms, the result of the unification of two attributed variables\index{attributed variable} is normally unpredictable. \subsection{Numerical Equality} The numerical equality test \texttt{$T_1$ =:= $T_2$} \index{{\verb+=:=/2+}} is true if term $T_1$ and term $T_2$ have the same numerical value. This means that $T_1$ and $T_2$ must both be numbers. Whereas the test \texttt{$T_1$ == $T_2$} fails if one number is an integer and one number is a real number, the test \texttt{$T_1$ =:= $T_2$} may succeed. Consider the following examples. \subsection*{Example}
\begin{verbatim}
Picat> 1 == 1.0
no
Picat> 1 =:= 1.0
yes
\end{verbatim}
In the first query, $1$ is an integer, while $1.0$ is a real number, so the equality test fails. However, the second query, which is a numerical equality test, succeeds.
\subsection{Ordering of Terms} Picat orders terms in the following way: \begin{tabbing} aa \= aaa \= aaa \= aaa \= aaa \= aaa \= aaa \= aaa \kill \> \texttt{var}\ $<$\ \texttt{number} \ $<$\ \texttt{atom} \ $<$\ \texttt{structure} and \texttt{array} \ $<$\ \texttt{list} and \texttt{string} \end{tabbing} Variables are ordered by their addresses. Note that the ordering of variables may change after garbage collection. Numbers are ordered by their numerical values. Atoms are ordered lexicographically. Structures are first ordered lexicographically by their names; if their names are the same, then they are ordered by their components. Arrays are ordered as structures with the special name '\{\}'. Lists and strings are ordered by their elements. \begin{itemize} \item \texttt{$Term1$ @< $Term2$}: The term $Term1$ precedes the term $Term2$ in the standard order. For example, \texttt{a @< b} succeeds. \item \texttt{$Term1$ @=< $Term2$}: The term $Term1$ either precedes, or is identical to, the term $Term2$ in the standard order. For example, \texttt{a @=< b} succeeds. \item \texttt{$Term1$ @<= $Term2$}: This is the same as \texttt{$Term1$ @=< $Term2$}. \item \texttt{$Term1$ @> $Term2$}: The term $Term1$ follows the term $Term2$ in the standard order. \item \texttt{$Term1$ @>= $Term2$}: The term $Term1$ either follows, or is identical to, the term $Term2$ in the standard order. \end{itemize} \section{Expressions} Expressions are made from variables, values, operators, and function calls. Expressions differ from terms in the following ways: \begin{itemize} \item An expression can contain dot notations, such as \texttt{math.pi}\index{\texttt{pi}}. \item An expression can contain index notations, such as \texttt{X[I]}. \item An expression can contain ranges, such as \texttt{1..2..100}. \item An expression can contain list comprehensions, such as \texttt{[X : X in 1..100]}. \item An expression can contain array comprehensions, such as \texttt{\{X : X in 1..100\}}. 
\end{itemize} A conditional expression, which takes the form \texttt{cond($Cond$,$Exp_1$,$Exp_2$)}, is a special kind of function call that returns the value of $Exp_1$ if the condition $Cond$ is true and the value of $Exp_2$ if $Cond$ is false. Note that, except for conditional expressions in which the conditions are made of predicates, no expressions can contain predicates. A predicate is true or false, but never returns any value. \section{Higher-order Predicates and Functions} A predicate\index{predicate} or function\index{function} is said to be \emph{higher-order}\index{higher-order call} if it takes calls as arguments. The \texttt{basic} module has the following higher-order predicates and functions. \begin{itemize} \item \texttt{apply($S$, $Arg_1$, $\ldots$, $Arg_n$) = $Val$}\index{\texttt{apply}}: $S$ is an atom or a structure. This function calls the function that is named by $S$ with the arguments that are specified in $S$, together with extra arguments $Arg_1$, \ldots, $Arg_n$. This function returns the value that $S$ returns. \item \texttt{call($S$, $Arg_1$, $\ldots$, $Arg_n$)}\index{\texttt{call}}: $S$ is an atom or a structure. This predicate calls the predicate that is named by $S$ with the arguments that are specified in $S$, together with extra arguments $Arg_1$, \ldots, $Arg_n$. \item \texttt{call\_cleanup($Call$, $Cleanup$)}\index{\texttt{call\_cleanup/2}}: This predicate is the same as \texttt{call($Call$)}, except that \texttt{$Cleanup$} is called when \texttt{$Call$} succeeds determinately (i.e., with no remaining choice point), when \texttt{$Call$} fails, or when \texttt{$Call$} raises an exception. \item \texttt{catch($Call$, $Exception$, $Handler$)}\index{\texttt{catch/3}}: This predicate is the same as $Call$, except when an exception that matches $Exception$ is raised during the execution of $Call$. 
When such an exception is raised, all of the bindings that have been performed on variables in \texttt{$Call$} will be undone, and \texttt{$Handler$} will be executed to handle the exception. \item \texttt{count\_all($Call$) = $Count$}\index{\texttt{count\_all/2}}: This function returns the number of all possible instances of \texttt{call($Call$)} that are true. For example, \texttt{count\_all(member(X,[1,2,3]))} returns 3. \item \texttt{findall($Template$, $Call$) = $Answers$}\index{\texttt{findall/2}}: This function returns a list of all possible instances of \texttt{call($Call$)} that are true in the form of $Template$. Note that $Template$ is assumed to be a term without function calls, and that $Call$ is assumed to be a predicate call whose arguments can contain function calls. Also note that, like a loop, \texttt{findall} forms a name scope. For example, in \texttt{findall(f(X),p(X,g(Y)))}, \texttt{f(X)} is a term even though it is not preceded with \verb+$+; \texttt{g(Y)} is a function call; the variables \texttt{X} and \texttt{Y} are assumed to be local to \texttt{findall} if they do not occur before in the outer scope. \item \texttt{find\_all($Template$, $Call$) = $Answers$}\index{\texttt{find\_all/2}}: This function is the same as the above function. \item \texttt{freeze($X$, $Call$)}\index{\texttt{freeze/2}}: This predicate delays the evaluation of $Call$ until $X$ becomes a non-variable term. \item \texttt{map($FuncOrList$, $ListOrFunc$) = $ResList$}\index{\texttt{map/2}}: This function applies a given function to every element of a given list and returns a list of the results. One of the arguments is a function, and the other is a list. The order of the arguments is not important. \item \texttt{map($Func$, $List1$, $List2$) = $ResList$}\index{\texttt{map/3}}: Let $List1$ be \texttt{[$A_1$,$\ldots$,$A_n$]} and $List2$ be \texttt{[$B_1$,$\ldots$,$B_n$]}. 
This function applies the function $Func$ to every pair of elements $(A_i,B_i)$ by calling \texttt{apply($Func$,$A_i$,$B_i$)}, and returns a list of the results. \item \texttt{maxof($Call$, $Objective$)}\index{\texttt{maxof/2}}: This predicate finds a satisfiable instance of $Call$, such that $Objective$ has the maximum value. Here, $Call$ is used as a generator, and $Objective$ is an expression to be maximized. For every satisfiable instance of $Call$, $Objective$ must be a ground expression. For \texttt{maxof}, search is restarted with a new bound each time that a better answer is found. \item \texttt{maxof($Call$, $Objective$, $ReportCall$)}\index{\texttt{maxof/3}}: This is the same as \texttt{maxof($Call$,$Objective$)}, except that \texttt{call($ReportCall$)} is executed each time that an answer is found. \item \texttt{maxof\_inc($Call$, $Objective$)}\index{\texttt{maxof\_inc/2}}: This is the same as \texttt{maxof($Call$,$Objective$)}, except that search continues rather than being restarted each time that a better solution is found. \item \texttt{maxof\_inc($Call$, $Objective$, $ReportCall$)}\index{\texttt{maxof\_inc/3}}: This is the same as the previous predicate, except that \texttt{call($ReportCall$)} is executed each time that an answer is found. \item \texttt{minof($Call$, $Objective$)}\index{\texttt{minof/2}}: This predicate finds a satisfiable instance of $Call$, such that $Objective$ has the minimum value. \item \texttt{minof($Call$, $Objective$, $ReportCall$)}\index{\texttt{minof/3}}: This is the same as \texttt{minof($Call$,$Objective$)}, except that \texttt{call($ReportCall$)} is executed each time that an answer is found. \item \texttt{minof\_inc($Call$, $Objective$)}\index{\texttt{minof\_inc/2}}: This predicate is the same as \texttt{minof($Call$,$Objective$)}, except that search continues rather than being restarted each time that a better solution is found. 
\item \texttt{minof\_inc($Call$, $Objective$, $ReportCall$)}\index{\texttt{minof\_inc/3}}: This predicate is the same as the previous one, except that \texttt{call($ReportCall$)} is executed each time that an answer is found. \item \texttt{reduce($Func$, $List$) = $Res$}\index{\texttt{reduce/2}}: If $List$ is a list that contains only one element, this function returns the element. If $List$ contains at least two elements, then the first two elements $A_1$ and $A_2$ are replaced with \texttt{apply($Func$,$A_1$,$A_2$)}. This step is repeatedly applied to the list until the list contains a single element, which is the final value to be returned. The order of the arguments is not important, meaning that the first argument can be a list and the second one can be a function. \item \texttt{reduce($Func$, $List$, $InitVal$) = $Res$}\index{\texttt{reduce/3}}: This function is the same as\\ \texttt{reduce($Func$,[$InitVal$$|$$List$])}. \end{itemize} \section{\label{sec:otherbuiltins}Other Built-ins in the \texttt{basic} Module} \begin{itemize} \item \texttt{acyclic\_term($Term$)}\index{\texttt{acyclic\_term/1}}: This predicate is true if $Term$ is acyclic, meaning that $Term$ does not contain itself. \item \texttt{and\_to\_list($Conj$) = $List$}\index{\texttt{and\_to\_list/1}}: This function converts $Conj$ in the form \texttt{($a_1$,$\ldots$,$a_n$)} into a list in the form \texttt{[$a_1$,$\ldots$,$a_n$]}. \item \texttt{compare\_terms($Term_1$, $Term_2$) = $Res$}\index{\texttt{compare\_terms/2}}: This function compares $Term_1$ and $Term_2$. If $Term_1 < Term_2$, then this function returns $-1$. If $Term_1 == Term_2$, then this function returns $0$. Otherwise, $Term_1 > Term_2$, and this function returns $1$. \item \texttt{different\_terms($Term_1$, $Term_2$)}\index{\texttt{different\_terms/2}}: This constraint ensures that $Term_1$ and $Term_2$ are different. This constraint is suspended when the arguments are not sufficiently instantiated\index{instantiated variable}. 
\item \texttt{get\_global\_map() = $Map$}\index{\texttt{get\_global\_map/0}}: This function returns the global map\index{map}, which is shared by all threads. \item \texttt{get\_heap\_map() = $Map$}\index{\texttt{get\_heap\_map/0}}: This function returns the current thread's heap map\index{map}. Each thread has its own heap map\index{map}. \item \texttt{get\_table\_map() = $Map$}\index{\texttt{get\_table\_map/0}}: This function returns the current thread's table map\index{map}. Each thread has its own table map\index{map}. The table map is stored in the table area and both keys and values are hash-consed (i.e., common sub-terms are shared). \item \texttt{ground($Term$)}\index{\texttt{ground/1}}: This predicate is true if $Term$ is ground\index{ground}. A \emph{ground}\index{ground} term does not contain any variables. \item \texttt{list\_to\_and($List$) = $Conj$}\index{\texttt{list\_to\_and/1}}: This function converts $List$ in the form \texttt{[$a_1$,$\ldots$,$a_n$]} into a term in the form \texttt{($a_1$,$\ldots$,$a_n$)}. \item \texttt{number\_vars($Term$, $N_0$) = $N_1$}\index{\texttt{number\_vars/2}}: This function numbers the variables in $Term$ by using the integers starting from $N_0$. $N_1$ is the next integer that is available after $Term$ is numbered. Different variables receive different numberings, and the occurrences of the same variable all receive the same numbering. \item \texttt{parse\_radix\_string($String$, $Base$) = $Int$}\index{\texttt{parse\_radix\_string/2}}: This function converts a radix $String$ of $Base$ into a decimal integer $Int$, where $Base$ must be greater than 1 and less than 37. For example, \texttt{parse\_radix\_string("101",2)} returns 5, which is the same as \texttt{parse\_term("0b101")}. \item \texttt{parse\_term($String$, $Term$, $Vars$)}\index{\texttt{parse\_term/3}}: This predicate uses the Picat parser to extract a term $Term$ from $String$. $Vars$ is a list of pairs, where each pair has the form $Name$=$Var$. 
\item \texttt{parse\_term($String$) = $Term$}\index{\texttt{parse\_term/1}}: This function converts $String$ to a term. \item \texttt{second($Compound$) = $Term$}\index{\texttt{second/1}}: This function returns the second argument of the compound term $Compound$. \item \texttt{subsumes($Term_1$, $Term_2$)}\index{\texttt{subsumes/2}}: This predicate is true if $Term_1$ subsumes $Term_2$. \ignore{ \item \texttt{unnumber\_vars($Term_1$) = $Term_2$}\index{\texttt{unnumber\_vars/1}}: $Term_2$ is a copy of $Term_1$, with all numbered variables being replaced by Picat variables. Different numbered variables are replaced by different Picat variables. } \item \texttt{variant($Term_1$, $Term_2$)}\index{\texttt{variant/2}}: This predicate is true if $Term_2$ is a variant of $Term_1$. \item \texttt{vars($Term$) = $Vars$}\index{\texttt{vars/1}}: This function returns a list\index{list} of variables that occur in $Term$. \end{itemize} \ignore{ \end{document} }
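The following short session illustrates several of these built-ins; the answers follow from the descriptions above, though the exact printed form of answers may differ slightly between Picat versions.

\subsection*{Example}
\begin{verbatim}
Picat> L = and_to_list($(a,b,c))
L = [a,b,c]
Picat> R = compare_terms(a,b)
R = -1
Picat> ground($f(a,X))
no
Picat> parse_term("f(a,1)") == $f(a,1)
yes
\end{verbatim}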
\section{Delivery and Release Mechanism} The code will be delivered via a Git repository that can be cloned. The location of this repository is still being decided. The repository name will be \verb|Micromorphic_UEL|.
\section{Introduction} The pandemic revealed many shortcomings of the current healthcare systems in many countries. It also introduced many new problems that the existing systems found difficult to accommodate and tackle. One such issue was the effective monitoring of vaccines during transportation. This is especially important when vaccines need to be transported safely across the nation and even internationally. In this project, we aim to address some of these issues by creating a cold chain logistics system for the monitored transportation of important products such as vaccines. \section{Literature Survey} \begin{itemize} \item Cold Chain Logistics (CCL) management, in general, is the management of the refrigeration level necessary for temperature-sensitive products [1]. \item In [2], an analysis of cold chain logistics using ISM is presented. India currently has very limited development in such logistics systems. \item In [3-6], applications of wireless sensor networks and the Internet of Things (IoT) in CCL have been investigated. \item In [7], a system called SensIC for monitoring the refrigerated storage of drugs and vaccines was proposed, offering alarm tools in case of system malfunction. \item In [8-9], a cold chain logistics system was developed to study the effects of temperature using IoT and blockchain, monitoring the temperature continuously. \end{itemize} \section{Research Gap} The following are limitations of the current state of monitoring systems: \begin{itemize} \item Cold chain logistics is limited in countries such as India. \item Only temperature is considered in the monitoring systems; other factors that affect the product are neglected. \item Data security, especially in the case of important products such as medical drugs and vaccines, is a major concern. \end{itemize} \section{Objectives} The following are the objectives of our project: \begin{itemize} \item Create an IoT-enabled monitoring device/container for vaccine vials.
\item Record the sensor data collected via the IoT network. \item Create a logistics system for quality control. \item Create an alert system in case of an emergency (vaccine under non-optimum conditions). \end{itemize} \section{Methodology} The system comprises: \begin{itemize} \item Data collection modules, each consisting of a mini compute unit (Raspberry Pi) that is connected to the Internet via Wi-Fi and collects data from the following sensors: \begin{itemize} \item DHT11 - Temperature and humidity sensor \item BMP180 - Air pressure sensor \item BH1750 - Light intensity sensor \end{itemize} \item Four data collection and transmission modules that share the vaccine's vital statistics \item An Elasticsearch database that stores and indexes all of the data from the modules \item A frontend Kibana dashboard that allows the user to monitor the vaccine's statistics \item A Python client that pushes the collected data to the Elasticsearch database \end{itemize} \begin{figure}[ht!] \centering \includegraphics[scale=0.3]{assests/methodology.png} \caption{Methodology} \label{fig:world} \end{figure}
%\section{Introduction}
%The spread of COVID19, from the sars-cov2 virus
%occurred in Wuhan, China, is on the rise and has shaken the world. The World
%Health Organization christened the illness as COVID-19 when the first case of this
%virus was reported.
%The Global spread of COVID19 affected every major nation and was defined as a
%pandemic by the WHO in March 2020.
%This paper tracks the spread of the novel coronavirus, also known as the
%COVID-19. COVID-19 is a contagious respiratory virus that first started in
%Wuhan December 2019. \cite{data_world}
%The two types of coronaviruses, named as, "severe acute respiratory syndrome
%coronavirus" and "Middle East respiratory syndrome" have affected more than
%20,000 individuals in last ten years \cite{huang2020clinical}.
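The Python client step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the index name \texttt{vaccine\_readings}, the field names, and the endpoint are assumptions, and the network call (which needs a running Elasticsearch instance and the \texttt{elasticsearch} package) is left commented out.

```python
# Sketch of a module-side client that assembles one sensor reading
# and pushes it to Elasticsearch. Index and field names are assumed,
# not taken from the actual project code.
import json
import time

def make_reading(module_id, temperature_c, humidity_pct, pressure_hpa, light_lux):
    """Assemble one sensor reading as a document for Elasticsearch."""
    return {
        "module_id": module_id,          # which of the four modules sent this
        "timestamp": int(time.time()),   # epoch seconds
        "temperature_c": temperature_c,  # DHT11
        "humidity_pct": humidity_pct,    # DHT11
        "pressure_hpa": pressure_hpa,    # BMP180
        "light_lux": light_lux,          # BH1750
    }

def push_reading(es, reading, index="vaccine_readings"):
    """Index one reading; `es` is an elasticsearch.Elasticsearch client."""
    return es.index(index=index, document=reading)

if __name__ == "__main__":
    reading = make_reading("module-1", 4.2, 55.0, 1012.8, 120.0)
    print(json.dumps(reading))
    # On the Raspberry Pi this would continue with, e.g.:
    #   from elasticsearch import Elasticsearch
    #   es = Elasticsearch("http://localhost:9200")
    #   push_reading(es, reading)
```

Kibana can then visualize the indexed readings directly, and the alert system can query the same index for out-of-range values.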
%The coronavirus can spread by various means.However some of the common means through which the infection can occur are: %\begin{enumerate} % \item airborne or aerosol transmission % \item direct or indirect contact with another human % \item and lastly through droplet spray transmission %\end{enumerate} %However a person can protect himself from these transmission modes.Close contact can be avoided and a minimum distance of 1.8 metres should be maintained to avoid contact with a person as well as respiratory droplets.However for airborne transmission a minimum of 4 metre should be maintained to avoid contact.Symptoms of COVID 19 are coughing ,extreme fever,tiredness or weakness and pain in some joints of the body. %%% Respiratory infections can be transmitted through droplets of different sizes: %%when the droplet particles are $>5-10 \mu m$ in diameter they are referred to as respiratory droplets, and when then %%are $<5 \mu m$ in diameter, they are referred to as droplet nuclei. According to current evidence, COVID-19 virus %%is primarily transmitted between people through respiratory droplets and contact routes. In an analysis of %%75,465 COVID-19 cases in China, airborne transmission was not reported. Droplet transmission occurs when %%a person is in in close contact (within 1 m) with someone who has respiratory symptoms (e.g., coughing or %%sneezing) and is therefore at risk of having his/her mucosae (mouth and nose) or conjunctiva (eyes) exposed %%to potentially infective respiratory droplets. Symptoms as fever, cough, and shortness of breath after a period %%ranging from 2 to 14 days are observed as the outcomes of the disease. Detailed investigations found that %%SARS-CoV was transmitted from civet cats to humans in China in 2002 and MERS-CoV from dromedary %%camels to humans in Saudi Arabia in 2012. Several known coronaviruses are circulating in animals that have %%not yet infected humans. 
%So for helping combat coronavirus, the use of artificial intelligence %techniques such as machine learning and deep learning models were studied and %implemented in this paper.These model %will gives us a rough estimate as to how the disease will spread in the upcoming days how many more people %will be effected.It will a rough estimate to the government of various countries about how the spread and will %enable them to be prepared well in advance for the epidemic. %Most of the data driven approaches used in previous studies %\cite{knight2016bridging} have been linear models and often neglects the %temporal components of the data. %In this report data preprocessing techniques are applied on the confirmed cases data and then the preprocessed %data is applied to two models i.e. LSTM and Linear Regression .The actual and %forecast values of cases are compared on %a predefined metrics. A comparison is made between the performance of %LSTM and Linear regression model to see which model best for the data. %The section \textbf{Literature Review} talks about similar work done by %other researchers on this topic and talk about the model and approach used by %them. %The methodology used in the paper and the approach on how to handle this %problem is also discussed. %The section \textbf{Methods and models} talks about the dataset used and and its %features. Since classification is done worldwide, so the data was processed to %suite the needs of the models in use and a brief description of the processed %dataset was also provided. %Next, Evaluation metrics are discussed to understand and compare the result %between the two models used. MAPE and Accuracy were used to compare the result %and were used to draw conclusions. %Also the models of Linear regression and LSTM network are explained %demonstrating our approach. %In the end \textbf{Experiment Result} are shown. Evaluation metrics are used %to compare the result. 
%%\pagebreak %\section{Literature Review} %In \cite{hu2020artificial},an machine learning based alternative to %transmission dynamics for Covid-19 is used. This AI based approach is executed by implementing modified stacked %auto-encoder model. %In \cite{bandyopadhyay2020machine}, an deep learning based approach is %proposed to compared the predicted forecasting value of LSTM and GRU model. The %Model was prepared and tested on the data and a comparison was made using the %predefined metrics. %In \cite{ayyoubzadeh2020predicting}, LSTM and Linear regression model was used %to predict the COVID-19 incidence through Analysis of Google Trends data in %Iran. The Model were compared on the Basis of RMSE metrics. %In \cite{chimmula2020time}, an LSTM networks based approach is proposed for %forecasting time series data of COVID\-19. %This paper uses Linear short Term memory network to overcome problems faced by linear model where %algorithms assigns high probability and neglects temporal information leading to %biased predictions. %In \cite{fanelli2020analysis}, temporal dynamics of the corona virus outbreak %in China, Italy, and France in the span of three months are analyzed. %In \cite{bouktif2018optimal}, a variety of linear and non-linear machine %learning algorithms approaches were studied and the best one as baseline, after %that the best features were chosen, using wrapper and %embedded feature selection methods and genetic algorithm (GA) was used to %determine optimal time lags and number of layers for LSTM model predictive performance %optimization. %In \cite{yang2020modified}, temporal dynamics of the corona virus outbreak in China, Italy, and France in %the span of three months are analysed. %%In \cite{anastassopoulou2020data}, a computation and analysis based on Suspected-Infected-Recovered-Dead %%(SIRD) model is provided. Based on the dataset, it estimates the parameters, i.e. 
the %%basic reproduction number (R0) and the infection, recovery and mortality rates, %In \cite{rainisch2020dynamic},a modeling tool was constructed to aid active public health officials to estimate %healthcare demand from the pandemic.The model used was SEIR compartmental model %to project the pandemic’s local spread. %In \cite{singh2020connecting},a transmission network based visualization of COVID-19 in India was created and %analyzed. The transmission networks obtained were used to find the possible Super Spreader Individual and Super Spreader Events %(SSE). %In \cite{elmousalami2020day}, comparison of day level forecasting models on COVID-19 affected %cases using time series models and mathematical formulation. The study %concluded exponential growth in countries that do not follow quarantine rules. %In \cite{roosa2020real},phenomenological models that have been validated during previous outbreaks %were used to generate and assess short-term forecasts of the cumulative number of %confirmed reported cases in Hubei province. %\subsection{Our Work} %In our report, the confirmed cases of corona virus are studied from the start %of the epidemic and the two approaches of Linear Regression and LSTM networks %are used, and an report is presented stating which of the above stated model %works best these type of data on the basis of Mean Absolute Error. %\begin{figure}[h!] % \centering % \includegraphics{images/world_wide.png} % \caption{Number of cases around the world} % \label{fig:world} %\end{figure} %%\section{Theory} %% %% %%\pagebreak %\section{Methods and models} %\subsection{Data} %The dataset used was the Johns Hopkins University Center for Systems Science and Engineering %(JHU CSSE) for COVID-19. %It consist of 3 dataset each of Death, Confirmed, Recovered cases of 188 %countries datewise. 
The number of date columns are 138 starting from 22 %Jan,2020 to 8 June,2020.Out of this about 85\% are used as training data %and the rest used as testing and validating data. So the model would be %predicting next 15\% data value. %The prediction would not be made on a specific country rather it will be %worldwide. %\begin{table}[!ht] % \caption{World Dataset of Corona virus spread with confirmed, death, % and recovery rates} % \centering % \resizebox{\columnwidth}{1.5cm}{% % \begin{tabular}{lrrrrrrrr} % \toprule % {} & Confirmed & Recoveries & Deaths & % Confirmed Change & Recovery Rate & Growth Rate & \\ % \midrule % count & 1.390000e+02 & 1.390000e+02 & 139.000000 & 138.000000 & 139.000000 & 138.000000 \\ % mean & 1.918547e+06 & 6.817390e+05 & 123264.726619 & 50666.268116 & 0.286331 & 0.076081 \\ % std & 2.170725e+06 & 8.911273e+05 & 138597.907312 & 42526.463980 & 0.143922 & 0.117824 \\ % min & 5.400000e+02 & 2.800000e+01 & 17.000000 & 89.000000 & 0.017598 & 0.005032 \\ % 25\% & 7.862450e+04 & 2.747150e+04 & 2703.000000 & 2957.500000 & 0.207790 & 0.021193 \\ % 50\% & 8.430870e+05 & 1.738930e+05 & 44056.000000 & 67738.000000 & 0.288055 & 0.032183 \\ % 75\% & 3.546736e+06 & 1.142438e+06 & 249918.000000 & 84446.500000 & 0.395898 & 0.085793 \\ % max & 6.992485e+06 & 3.220219e+06 & 397840.000000 & 130518.000000 & 0.544809 & 0.951446 \\ % \bottomrule % \end{tabular}} % \label{table:world_df} %\end{table} %Table [\ref{table:world_df}] show the world data of Corona virus spread with %confirmed, death and recovery rates. %%\pagebreak %\subsection{Evaluation Metrics} %For the selection of better performing model, it is necessary to use some kind %of performance/evaluation metrics to evaluate the algorithm’s performance. 
%In this paper, MAPE and Accuracy are used to %measure model's performance: %\begin{enumerate} % \item \textbf{Mean Absolute Percentage Error}: It is defined by % the following formula: % \begin{equation}\label{eqn1} % %E = {mc^2} % MAPE = \frac{100\%}{n} \sum \left \vert \frac{y-y\prime}{y} % \right \vert % \end{equation} % Where \emph{y}' is true value and \emph{y'} is predicted value. % \item \textbf{Accuracy}: It is defined by the following formula: % \begin{equation}\label{eqn2} % Accuracy = (100 - MAPE)\% % \end{equation} %\end{enumerate} %\begin{figure*}[!ht] % \centering % \includegraphics[height=14cm]{images/method.jpg} % \caption{Flowchart for proposed methodology} % \label{fig:method_flow} %\end{figure*} %\subsection{Method} %The prediction of confirmed cases due to COVID-19 are evaluated using %Recurrent Neural Network method(LSTM) and Linear Regression. %Linear regression is a statistical model, that works with values where the %input variable (x) and output variable (y) have a linear relationship, for %single input the model is known as simple linear regression. %A recurrent neural network is a special kind of Artificial neural network which %has memory of the previous inputs i.e it remembers the previous inputs. In these neural networks the output of previous neuron is fed as input to the next neuron.It is generally used in problems like when it is required to predict the following word in a sentence or in time-series data.However a main problem associated with RNN is gradient vanishing and exploding.In this the gradient starts vanishing as we go deeper into the layers due to which the model stops updating weights.This problems can be solved using special RNN like Long Short Term Memory(LSTM) RNN and Gated Recurrent Unit(GRU).These have a much better gradient flow and perform better than traditional RNN and are generally used. \cite{bandyopadhyay2020machine}. 
%The dataset used for predicting the value is taken from John Hopkin University which %contains cases form 21 Jan 2020 to 8 June,2020. The training and testing of both the models %is done on this dataset.It contains 138 date columns out of which 120 are used for training %and the rest 18days are used for testing data or for forecasting it.At first the data is preprocessed by converting the date columns into datetime object and also eliminate the %missing values. The preprocessed data is then transformed in the required shape to be put %into the model.The models are trained and the test data is predicted and prediction result %are quantified using performance measures metrics such as MAPE and %accuracy.The methodology performed for each of the step is shown in the figure %\ref{fig:method_flow} as show. %\subsubsection{Linear Regression} %Linear regression based models are generally used for prediction tasks. The %technique is used which tries to best fit the value to a linear line.This line can be used to %relate both the predicting and predicted value.When there is more than one value then the %In case of exponential relations, linear regression can not be directly used. %But after transformation to a linear expression, even exponential relations can %be predicted using linear regression. 
For example, %\begin{equation} % y = \alpha e^{\beta x} %\end{equation} %Taking the log on both sides of the equation, we get: %\begin{equation} % \ln y = \ln \alpha + \beta x %\end{equation} %This expression is of the form of a linear regression model: %\begin{equation} % y\prime = \alpha \prime + \beta x %\end{equation} %%\pagebreak %\subsubsection{LSTM Model} %Long Short term memory (LSTM) is an recurrent neural network which is most effective for time %series prediction.The model used in this case is sequential.As the data was time series and %we needed to predict the best positive corona cases so this model was best for our study.The %model was build using tensorflow keras framework and the models performance was %evaluated on the mean absolute error percentage (MAPE). %The proposed architecture of LSTM model is depicted in the figure %\ref{fig:lstm_arch} as: %\begin{figure}[ht!] % \centering % \includegraphics[scale=0.5]{images/lstm_mod.jpg} % \caption{Architecture of LSTM model} % \label{fig:lstm_arch} %\end{figure} %\newpage %\section{Experiment Result} %In LSTM prediction, LSTM layers use sequence of 180 nodes. Single layered structure followed by %2 Dense Layers with 60 nodes in the first layer and single node in the output layer is used as LSTM model for verifying prediction result. The best %hyperparameters used is a batch size of 1. The result of the model %is as shown \ref{table:lstm} %\begin{table}[ht!] % \centering % \caption{Accuracy and MAPE of LSTM model} % \begin{tabular}{c c c} % Model & Accuracy & MAPE \\ % LSTM model & 96.90\% & 3.092\% % \end{tabular} % \label{table:lstm}Middle East respiratory syndrome %\end{table} %%\pagebreak %The prediction result is shown in figure below: %\begin{figure}[ht!] 
% \centering % \includegraphics{images/lstm_graph.png} % \caption{Comparison of predicted and true value using LSTM model} % \label{fig:lstm_graph} %\end{figure} %%\pagebreak %Linear regression model was used on the time series data and the date columns were taken %as input and the 18 days data was predicted. The exponential fit of the model %was fit and the result of the model is as shown %\ref{table:linear} %\begin{table}[ht!] % \centering % \caption{Accuracy and MAPE of regression model} % \begin{tabular}{c c c} % Model & Accuracy & MAPE \\ % Linear model & 93.57\% & 6.421\% % \end{tabular} % \label{table:linear} %\end{table} %The prediction result of comparing the test data predicted data is show below: %\begin{figure}[ht!] % \centering % \includegraphics{images/linear_graph.png} % \caption{Comparison of predicted and true value using Linear Regression model} % \label{fig:linear_graph} %\end{figure} %\pagebreak %\subsection{Comparing with other studies} %% %In \cite{hu2020artificial}, they used an multi-step forecasting system on %the population of china, and the estimated average errors are as show in %\ref{table:three} %\begin{table}[ht!] % \centering % \caption{Result \cite{hu2020artificial}: Method and Average Errors} % \begin{tabular}{c c } % Model & Error \\ % 6-Step & 1.64\% \\ % 7-Step & 2.27\% \\ % 8-Step & 2.14\% \\ % 9-Step & 2.08\% \\ % 10-Step & 0.73\% % \end{tabular} % \label{table:three} %\end{table} %In \cite{chimmula2020time}, LSTM netwoks are used to on Canadian population, %the reuslt are show is table \ref{table:four} %\begin{table}[ht!] 
% \centering % \caption{Results \cite{chimmula2020time}: Canadian Datasets} % \begin{tabular}{c c c} % Model & RMSE & Accuracy \\ % LSTM & 34.63 & 93.4\% % \end{tabular} % \label{table:four} %\end{table} %In \cite{bandyopadhyay2020machine}, an deep learning based approach is %proposed to compared the predicted forcasting value of LSTM and GRU model is %used the result are as show in table \ref{table:seven}: %\begin{table}[ht!] % \centering % \caption{Results \cite{chimmula2020time}: Canadian Datasets} % \begin{tabular}{c c c} % Model & RMSE & Accuracy \\ % LSTM & 53.35 & 76.6\% \\ % GRU & 30.95 & 76.9\% \\ % LSTM and GRU & 30.15 & 87\% % \end{tabular} % \label{table:seven} %\end{table} %\section{Conclusion and Future Scope} %The comparison between Regression and LSTM model signifies that using LSTM %yields better results for the forecasting the spread of confirmed cases. %showcases a method that checks occurred cases of %COVID-19. However it could be made automated to train on the updated data %every week and see the predicted value. Also the model is trained only on confirmed cases %same could be done for both recovered and death cases and predicted values could be found. %The model shows only the worldwide cases however the dataset also provides country wise %statistics so it can be used by different country to forecast the future outcome of the %pandemic and take necessary preventive measures to be safe from this worldwide %pandemic. %A conclusion is drawn that shows forecasting models could be used by medical %and government agencies to make better policies for controlling the spread of %pandemic. The comparison between the 2 models allows them to choose the better %suited model for the required task. %The availability of high- quality and timely data in the early stages of the outbreak collaboration of %the researchers to analyze the data could have positive effects on health care %resource planning. %%\newpage
% This LaTeX was auto-generated from MATLAB code.
% To make changes, update the MATLAB code and republish this document.
\documentclass{article}
\usepackage{graphicx}
\usepackage{color}

\sloppy
\definecolor{lightgray}{gray}{0.5}
\setlength{\parindent}{0pt}

\begin{document}
\section*{Accelerated Gradient Descent}
\subsection*{Contents}
\begin{itemize}
\setlength{\itemsep}{-1ex}
\item Gradient
\item Gradient Descent
\item Accelerated Gradient Descent
\item Plot contour
\item Reference
\end{itemize}
\subsection*{Gradient}
\begin{par}
The gradient descent method is based on the gradient
\end{par} \vspace{1em}
\begin{par}
$$ \nabla f = \frac{\partial f}{\partial x_1 }\mathbf{e}_1 + \cdots + \frac{\partial f}{\partial x_n }\mathbf{e}_n $$
\end{par} \vspace{1em}
\begin{par}
The gradient always points in the direction of steepest ascent.
\end{par} \vspace{1em}
\subsection*{Gradient Descent}
\begin{par}
$f$ is the objective function, and the problem is unconstrained:
\end{par} \vspace{1em}
\begin{par}
$$\min_{x} f $$
\end{par} \vspace{1em}
\subsection*{Accelerated Gradient Descent}
\begin{par}
For $t = 1, 2, \ldots$
\end{par} \vspace{1em}
\begin{par}
$$x^{(t)} = y^{(t-1)} - \alpha \nabla f(y^{(t-1)})$$
\end{par} \vspace{1em}
\begin{par}
$$y^{(t)} = x^{(t)} + \frac{t-1}{t+2}\left(x^{(t)} - x^{(t-1)}\right)$$
\end{par} \vspace{1em}
\begin{verbatim}
f = (@(X) (exp(X(1,:)-1) + exp(1-X(2,:)) + (X(1,:) - X(2,:)).^2));
%f = (@(X) (sin(0.5*X(1,:).^2 - 0.25 * X(2,:).^2 + 3) .* cos(2*X(1,:) + 1 - exp(X(2,:))) ))
\end{verbatim}
\subsection*{Plot contour}
\begin{verbatim}
[X, Y] = meshgrid(-2:0.1:2);
XX = [reshape(X, 1, numel(X)); reshape(Y, 1, numel(Y))];
%surf(X, Y, reshape(f(XX), length(X), length(X)))
contour(X, Y, reshape(f(XX), length(X), length(X)), 50);
hold on;
\end{verbatim}
\includegraphics [width=4in]{test_01.eps}
\begin{par}
Plot the gradient of the function
\end{par} \vspace{1em}
\begin{verbatim}
for i=1:5:length(XX)
    tmp = XX(:,i);
    g = gradient_of_function(f, tmp);
    %plot([tmp(1),tmp(1)+g(1)*0.02],[tmp(1),tmp(2)+g(1)*0.02]);
    quiver(tmp(1),tmp(2),g(1)*0.02,g(2)*0.02);
end
\end{verbatim}
\includegraphics [width=4in]{test_02.eps}
\begin{par}
Calculation
\end{par} \vspace{1em}
\begin{verbatim}
x0 = [-1; -1];
\end{verbatim}
\begin{par}
Without a Wolfe line search, fix the step size at $\alpha = 0.01$, so that $x_k = x_{k-1} - \alpha \nabla f(x_{k-1})$
\end{par} \vspace{1em}
\begin{verbatim}
[x_gf, v_gf, h_gf] = gradient_fix_step(f, x0)
[x_af, v_af, h_af] = accelerated_gradient_fix_step(f, x0)
\end{verbatim}
\color{lightgray}
\begin{verbatim}
x_gf =
   -0.0287
    0.5194
v_gf =
    2.2750
h_gf =
  Columns 1 through 7
   -1.0000   -1.0014   -1.0012   -0.9997   -0.9970   -0.9933   -0.9886
   -1.0000   -0.9261   -0.8590   -0.7977   -0.7413   -0.6894   -0.6413
  Columns 8 through 14
   -0.9830   -0.9766   -0.9696   -0.9619   -0.9537   -0.9449   -0.9357
   -0.5966   -0.5550   -0.5161   -0.4796   -0.4454   -0.4131   -0.3826
  Columns 15 through 21
   -0.9261   -0.9161   -0.9058   -0.8952   -0.8843   -0.8732   -0.8619
   -0.3538   -0.3266   -0.3007   -0.2761   -0.2526   -0.2303   -0.2089
  Columns 22 through 28
   -0.8503   -0.8387   -0.8269   -0.8150   -0.8029   -0.7908   -0.7786
   -0.1885   -0.1689   -0.1501   -0.1320   -0.1147   -0.0980   -0.0818
  Columns 29 through 35
-0.7664 -0.7541 -0.7418 -0.7294 -0.7170 -0.7047 -0.6923 -0.0663 -0.0512 -0.0367 -0.0226 -0.0089 0.0044 0.0172 Columns 36 through 42 -0.6800 -0.6676 -0.6553 -0.6431 -0.6308 -0.6186 -0.6065 0.0298 0.0420 0.0538 0.0654 0.0767 0.0877 0.0985 Columns 43 through 49 -0.5944 -0.5823 -0.5704 -0.5585 -0.5466 -0.5348 -0.5231 0.1090 0.1193 0.1294 0.1393 0.1490 0.1585 0.1678 Columns 50 through 56 -0.5115 -0.4999 -0.4884 -0.4770 -0.4657 -0.4544 -0.4433 0.1770 0.1860 0.1949 0.2036 0.2121 0.2206 0.2289 Columns 57 through 63 -0.4322 -0.4212 -0.4103 -0.3995 -0.3887 -0.3781 -0.3675 0.2370 0.2451 0.2531 0.2609 0.2686 0.2763 0.2838 Columns 64 through 70 -0.3570 -0.3466 -0.3363 -0.3261 -0.3160 -0.3059 -0.2960 0.2912 0.2986 0.3058 0.3130 0.3201 0.3271 0.3341 Columns 71 through 77 -0.2861 -0.2764 -0.2667 -0.2571 -0.2475 -0.2381 -0.2288 0.3409 0.3477 0.3544 0.3611 0.3677 0.3742 0.3806 Columns 78 through 84 -0.2195 -0.2103 -0.2012 -0.1922 -0.1833 -0.1745 -0.1657 0.3870 0.3933 0.3996 0.4058 0.4120 0.4181 0.4241 Columns 85 through 91 -0.1570 -0.1484 -0.1399 -0.1315 -0.1231 -0.1148 -0.1066 0.4301 0.4361 0.4419 0.4478 0.4536 0.4593 0.4650 Columns 92 through 98 -0.0985 -0.0904 -0.0825 -0.0746 -0.0668 -0.0590 -0.0513 0.4706 0.4762 0.4818 0.4873 0.4927 0.4982 0.5035 Columns 99 through 101 -0.0437 -0.0362 -0.0287 0.5089 0.5142 0.5194 x_af = 0.7415 1.1499 v_af = 1.7998 h_af = Columns 1 through 7 -1.0000 -1.0014 -1.0012 -0.9993 -0.9950 -0.9875 -0.9761 -1.0000 -0.9261 -0.8590 -0.7823 -0.6989 -0.6115 -0.5223 Columns 8 through 14 -0.9604 -0.9398 -0.9140 -0.8829 -0.8463 -0.8043 -0.7571 -0.4333 -0.3459 -0.2615 -0.1807 -0.1043 -0.0325 0.0345 Columns 15 through 21 -0.7050 -0.6484 -0.5877 -0.5234 -0.4562 -0.3866 -0.3154 0.0969 0.1548 0.2086 0.2587 0.3057 0.3499 0.3920 Columns 22 through 28 -0.2430 -0.1703 -0.0976 -0.0257 0.0449 0.1139 0.1807 0.4324 0.4716 0.5100 0.5481 0.5861 0.6242 0.6627 Columns 29 through 35 0.2451 0.3067 0.3654 0.4209 0.4732 0.5223 0.5680 0.7016 0.7410 0.7807 0.8208 0.8611 0.9013 0.9413 
Columns 36 through 42 0.6106 0.6500 0.6864 0.7198 0.7505 0.7786 0.8042 0.9809 1.0197 1.0575 1.0941 1.1292 1.1626 1.1940 Columns 43 through 49 0.8276 0.8487 0.8679 0.8851 0.9005 0.9142 0.9262 1.2234 1.2505 1.2752 1.2975 1.3174 1.3346 1.3494 Columns 50 through 56 0.9366 0.9455 0.9528 0.9587 0.9631 0.9661 0.9677 1.3618 1.3718 1.3794 1.3849 1.3884 1.3899 1.3897 Columns 57 through 63 0.9679 0.9667 0.9643 0.9606 0.9558 0.9499 0.9429 1.3878 1.3845 1.3798 1.3740 1.3672 1.3595 1.3510 Columns 64 through 70 0.9350 0.9263 0.9169 0.9069 0.8964 0.8855 0.8743 1.3419 1.3322 1.3222 1.3118 1.3012 1.2904 1.2795 Columns 71 through 77 0.8630 0.8517 0.8404 0.8293 0.8185 0.8080 0.7979 1.2686 1.2578 1.2471 1.2366 1.2263 1.2163 1.2067 Columns 78 through 84 0.7883 0.7793 0.7708 0.7629 0.7557 0.7492 0.7433 1.1974 1.1886 1.1803 1.1725 1.1653 1.1587 1.1527 Columns 85 through 91 0.7382 0.7337 0.7300 0.7269 0.7246 0.7229 0.7218 1.1474 1.1428 1.1388 1.1356 1.1330 1.1312 1.1300 Columns 92 through 98 0.7214 0.7216 0.7224 0.7237 0.7256 0.7279 0.7307 1.1295 1.1296 1.1304 1.1317 1.1336 1.1361 1.1389 Columns 99 through 101 0.7339 0.7375 0.7415 1.1422 1.1459 1.1499 \end{verbatim} \color{black} \begin{par} find suitable step size \end{par} \vspace{1em} \begin{verbatim} [x_g, v_g, h_g] = gradient(f, x0) [x_a, v_a, h_ax, h_ay] = accelerated_gradient(f, x0) % built-in method [x_in, v_in] = fminunc(f, x0) \end{verbatim} \color{lightgray} \begin{verbatim} x_g = 0.7960 1.2038 v_g = 1.7974 h_g = Columns 1 through 7 -1.0000 -1.0271 -0.4515 0.6432 0.8185 0.7755 0.7859 -1.0000 0.4778 0.2130 1.0809 1.1279 1.1801 1.2059 Columns 8 through 12 0.7925 0.7963 0.7956 0.7959 0.7960 1.2007 1.2024 1.2033 1.2039 1.2038 x_a = 0.7960 1.2038 v_a = 1.7974 h_ax = Columns 1 through 7 -1.0000 -1.0169 0.7311 1.0381 1.0156 1.0217 0.9513 -1.0000 -0.0764 0.9765 1.3202 1.4675 1.4183 1.3769 Columns 8 through 14 0.9013 0.8435 0.8050 0.7829 0.7738 0.7789 0.7808 1.3051 1.2559 1.2170 1.1899 1.1882 1.1842 1.1906 Columns 15 through 21 0.7879 
0.7930 0.7966 0.7966 0.7968 0.7971 0.7968 1.1945 1.1993 1.2037 1.2046 1.2050 1.2048 1.2047 Columns 22 through 25 0.7966 0.7963 0.7961 0.7960 1.2043 1.2040 1.2038 1.2038 h_ay = Columns 1 through 7 -1.0000 -1.0169 1.1681 1.1609 1.0044 1.0251 0.9073 -1.0000 -0.0764 1.2398 1.4577 1.5411 1.3902 1.3510 Columns 8 through 14 0.8679 0.8031 0.7770 0.7662 0.7668 0.7829 0.7823 1.2573 1.2214 1.1888 1.1695 1.1870 1.1811 1.1956 Columns 15 through 21 0.7936 0.7972 0.7997 0.7966 0.7970 0.7973 0.7965 1.1978 1.2032 1.2074 1.2054 1.2053 1.2045 1.2046 Columns 22 through 25 0.7964 0.7961 0.7959 0.7960 1.2040 1.2038 1.2036 1.2039 Warning: Gradient must be provided for trust-region algorithm; using line-search algorithm instead. Local minimum found. Optimization completed because the size of the gradient is less than the default value of the function tolerance. x_in = 0.7961 1.2039 v_in = 1.7974 \end{verbatim} \color{black} \begin{par} plot descent steps \end{par} \vspace{1em} \begin{verbatim} for i=2:length(h_gf) tmp1 = h_gf(:,i-1); tmp2 = h_gf(:,i); quiver(tmp1(1),tmp1(2),tmp2(1)-tmp1(1),tmp2(2)-tmp1(2), 0, 'g','LineWidth',3) end for i=2:length(h_af) tmp1 = h_af(:,i-1); tmp2 = h_af(:,i); quiver(tmp1(1),tmp1(2),tmp2(1)-tmp1(1),tmp2(2)-tmp1(2), 0, 'b','LineWidth',3) end for i=2:length(h_g) tmp1 = h_g(:,i-1); tmp2 = h_g(:,i); quiver(tmp1(1),tmp1(2),tmp2(1)-tmp1(1),tmp2(2)-tmp1(2), 0, 'r','LineWidth',2) end for i=2:length(h_ax) tmp1 = h_ax(:,i-1); tmp2 = h_ax(:,i); quiver(tmp1(1),tmp1(2),tmp2(1)-tmp1(1),tmp2(2)-tmp1(2), 0, 'c','LineWidth',2) end for i=2:length(h_ay) tmp1 = h_ay(:,i-1); tmp2 = h_ay(:,i); quiver(tmp1(1),tmp1(2),tmp2(1)-tmp1(1),tmp2(2)-tmp1(2), 0, 'm','LineWidth',2) end \end{verbatim} \includegraphics [width=4in]{test_03.eps} \subsection*{Reference} \begin{enumerate} \setlength{\itemsep}{-1ex} \item \begin{verbatim}http://stronglyconvex.com/blog/accelerated-gradient-descent.html\end{verbatim} \end{enumerate} \end{document}
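For readers without MATLAB, the fixed-step accelerated scheme published above can be mirrored in a short Python sketch. This is a rough port under my own naming (`accelerated_gradient`, `grad_f` are not from the original script); it applies the two update equations from the text to the same objective $f(x) = e^{x_1-1} + e^{1-x_2} + (x_1-x_2)^2$ with $x_0 = (-1,-1)$ and $\alpha = 0.01$.

```python
import math

def f(x):
    # Same objective as the MATLAB anonymous function above
    return math.exp(x[0] - 1) + math.exp(1 - x[1]) + (x[0] - x[1]) ** 2

def grad_f(x):
    # Analytic gradient of f
    d = x[0] - x[1]
    return (math.exp(x[0] - 1) + 2 * d, -math.exp(1 - x[1]) - 2 * d)

def accelerated_gradient(grad, x0, alpha=0.01, iters=2000):
    # x^{(t)} = y^{(t-1)} - alpha * grad f(y^{(t-1)})
    # y^{(t)} = x^{(t)} + (t-1)/(t+2) * (x^{(t)} - x^{(t-1)})
    x_prev = list(x0)
    y = list(x0)
    for t in range(1, iters + 1):
        g = grad(y)
        x = [y[i] - alpha * g[i] for i in range(len(y))]
        y = [x[i] + (t - 1) / (t + 2) * (x[i] - x_prev[i]) for i in range(len(y))]
        x_prev = x
    return x_prev

x_star = accelerated_gradient(grad_f, [-1.0, -1.0])
```

With enough iterations the iterates approach the minimizer near $(0.796, 1.204)$ with value $\approx 1.7974$ that `fminunc` reports above; with only 100 fixed steps they match the intermediate point $(0.74, 1.15)$ shown in the published output.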
\chapter{Project closure process}
\label{chap:apendiceB}
\section{Learned lessons}
In a research project it is vital to take note of all the work carried out and the results obtained. Keeping tracking codes for the different deliverables simplifies development and makes the history of each one easy to trace. Keeping the project monitoring files updated day by day improves the quantity and quality of the information that is released. Regarding the project carried out, the following points should be taken into account for future work:
\begin{itemize}
\item Before starting the printer, check that the printing area is free of obstacles.
\item Keep the cartridges upright, with the head down, to prevent air from entering the reservoir channel.
\item If the substrate is attached to the platen with adhesive tape, the thickness of the tape must be added to the printing configuration.
\item Before printing, check that the ejectors are working correctly and that the ink drops have an aligned flight.
\item Check the condition of the cleaning pad periodically; if the head begins to show ink drops on its base, the drying material may be saturated.
\end{itemize}
\section{Language Modeling}
\label{chap:prior:sec:lm}
In Section \ref{chap:prior:sec:lm:overview}, we briefly review various language modeling methodologies and why language modeling is so useful for NLP. Then, in Section \ref{chap:prior:sec:lm:elmo}, we introduce ELMo, followed by BERT in Section \ref{chap:prior:sec:lm:bert}. Then, in Section \ref{chap:prior:sec:lm:otherlm}, we discuss other relevant LMs and how the broader NLP community is using them. Finally, in Section \ref{chap:prior:sec:lm:effects}, we discuss some of the effects of training and using large LMs to ground our research's motivation.

\subsection{What is Language Modeling}
\label{chap:prior:sec:lm:overview}
Language modeling is a way to assign a probability distribution over some textual representation. The probability of a sequence is factorized into per-token conditionals: the probability of a token $w_i$ given the $i-1$ tokens before it. Under an $n$-gram assumption, each token depends only on the previous $n-1$ tokens. This is commonly written as \fullref{equation:langmodel}:
\begin{equation}
P(w_{1},\ldots ,w_{m})=\prod _{i=1}^{m}P(w_{i}\mid w_{1},\ldots ,w_{i-1})\approx \prod _{i=1}^{m}P(w_{i}\mid w_{i-(n-1)},\ldots ,w_{i-1})
\label{equation:langmodel}
\end{equation}
Language models are useful representations of natural language because they allow models to differentiate the meanings of sentences based on context. In other words, a model is able to understand that the word `fly' can mean different things in the sentences: `You look fly', `Let's fly away!', `That is a fly'. \\
While language modeling is by no means a new concept, it was not until the introduction of Neural-Network-based LMs that these representations were able to serve as general understanding frameworks. Before these Neural Network Language Models (NNLM), most language modeling focused on some form of $n$-gram model, where the probability of a word depends only on the previous $n-1$ words.
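As a concrete (toy) illustration of this factorization, the sketch below estimates bigram ($n=2$) probabilities by maximum likelihood and scores a sentence with the chain-rule product; the corpus, sentence markers, and function names are invented for illustration only.

```python
from collections import Counter

# Toy corpus; <s> and </s> mark sentence boundaries (illustrative only)
corpus = [["<s>", "that", "is", "a", "fly", "</s>"],
          ["<s>", "you", "look", "fly", "</s>"],
          ["<s>", "a", "fly", "is", "a", "fly", "</s>"]]

# Count bigrams and the contexts (previous tokens) they condition on
bigrams = Counter()
contexts = Counter()
for sent in corpus:
    for prev, cur in zip(sent, sent[1:]):
        bigrams[(prev, cur)] += 1
        contexts[prev] += 1

def p(cur, prev):
    # Maximum-likelihood estimate of P(w_i | w_{i-1})
    return bigrams[(prev, cur)] / contexts[prev] if contexts[prev] else 0.0

def sentence_prob(sent):
    # Chain-rule product under the bigram (n = 2) Markov assumption
    prob = 1.0
    for prev, cur in zip(sent, sent[1:]):
        prob *= p(cur, prev)
    return prob
```

For example, `sentence_prob(["<s>", "you", "look", "fly", "</s>"])` multiplies $P(\text{you}\mid\text{<s>}) \cdot P(\text{look}\mid\text{you}) \cdot P(\text{fly}\mid\text{look}) \cdot P(\text{</s>}\mid\text{fly})$, exactly the right-hand side of the factorization above with $n=2$.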
Large Neural-Network-based LMs are the first step in an NLP application as a way of turning some form of textual input into a representation in a vector space. \\
Language models are created using many training objectives, but general models tend to be either auto-encoding (AE), auto-regressive (AR), or some combination of the two. AR models like ELMo \cite{Peters2018DeepCW} or GPT-2 \cite{Radford2019LanguageMA} learn a language representation (LR) by predicting the next token in a sequence. AE models like BERT \cite{Devlin2019BERTPO} and ELECTRA \cite{Clark2020ELECTRAPT} learn an LR by reconstructing some portion of a sequence.

\subsection{ELMo}
\label{chap:prior:sec:lm:elmo}
ELMo is an AR LM that was introduced by Peters et al., 2018 \cite{Peters2018DeepCW} and, in many ways, became the first contextual word representation to see widespread usage. The name ELMo stands for Embeddings from Language Models and refers to how language modeling can be used to train contextual word embeddings. ELMo builds on the success of GloVe \cite{Pennington2014GloveGV} and Word2Vec \cite{Mikolov2013EfficientEO} by seeking to be the first stage of textual processing for a variety of NLP tasks. ELMo consists of a character-level convolutional neural network (CNN) followed by two layers of bidirectional LSTMs. The CNN converts words from a text string into raw word vectors, which are then passed to the biLSTMs to model the whole input. Using a character-level CNN, ELMo can capture the inner morphological structure of words; e.g., words like beauty and beautiful are similar when character-level convolutions are used.\\
Each layer receives a forward pass and a backward pass over the textual input, which allows the model to read the sentence left-to-right and right-to-left and form representations that take in the whole context of a sentence.
The forward pass of the text (reading left to right) allows the model to build context for a word from the words preceding it, while the backward pass (reading right to left) allows the model to build context from the end of the input back to the word being modeled. The forward and backward passes are concatenated. The output of the first biLSTM is passed into the second layer, and the final ELMo representation is then the weighted sum of the raw word vectors and the two intermediate word vectors (the outputs of each biLSTM).\\
ELMo was trained on the Billion Word Corpus \cite{Chelba2014OneBW}, using the unprocessed input as the target for ELMo's language modeling task. The model is trained for ten epochs (complete passes over the corpus), which takes approximately three weeks using three 1080ti GPUs. On average, the authors find that adding ELMo as a text representation layer provides a 20\% improvement across a diverse set of NLP tasks.

\subsection{BERT}
\label{chap:prior:sec:lm:bert}
Building on the success of ELMo, leveraging the transformer architecture \cite{Vaswani2017AttentionIA}, and taking the learnings from other contextual word embeddings \cite{Howard2018UniversalLM, Radford2018ImprovingLU}, Devlin et al., 2018 introduced BERT, which stands for Bidirectional Encoder Representations from Transformers. BERT is an AE LM that uses modified stacked Transformer encoders (12 layers for the small model and 24 for the large one) to build a contextual language representation. Instead of using character-level convolutions or fixed word vectors as a starting point, BERT leverages WordPiece tokenization \cite{Wu2016GooglesNM} with a vocabulary size of 30,000. \\
Just like other language models before it, BERT trains using unsupervised pre-training on a large text corpus. Unlike previous models, BERT introduces two new training objectives as a way to steer the model: Masked Language Modeling (MLM) and next sentence prediction (NSP).
\\
MLM reformulates language understanding as a cloze task \cite{Taylor1953ClozePA}, where the model's goal is to predict what a hidden word in a sentence may be. To train using MLM, BERT introduces a new token $[MASK]$ to represent the hidden word. 15\% of the corpus tokens are selected for corruption: of those, 80\% (12\% of the corpus) are replaced with $[MASK]$, 10\% (1.5\% of the corpus) are replaced with a random token, and the remaining 10\% are left unchanged. When the model encounters a $[MASK]$ token, it predicts what the word should be. NSP is a training method inspired by QA systems, which tend to have two sets of sentences to reason on: a query and a context passage. In NSP, the model is fed text that combines two sentences, A and B, with the special separation token [SEP]. In 50\% of the NSP samples, sentence B directly follows A, while in the remaining 50\%, A and B are selected at random. The model's training goal is the binary prediction of whether the two sentences are adjacent in the original text.\\
When the BERT architecture and training regime are applied to the Toronto Book Corpus \cite{Zhu_2015_ICCV} (800 million words) plus English Wikipedia (2.5 billion words), the authors obtain a generalizable contextual word embedding, which since the model's release has been fine-tuned on countless transfer tasks to produce new SOTA models.

\subsection{Beyond BERT}
\label{chap:prior:sec:lm:otherlm}
Besides BERT and ELMo, there has been considerable research into additional language models. RoBERTa \cite{Liu2019RoBERTaAR} improves on BERT by training on a larger corpus for a longer time. XLNET \cite{Yang2019XLNetGA} combines AE and AR while avoiding some of the pitfalls of each method by modifying AR to maximize the expected log-likelihood of a sequence over all permutations of the factorization order.
XLNET also removes the notion of a $[MASK]$ token, to avoid training the model with a token that never occurs in real text, and implements the whole architecture using the Transformer-XL \cite{Dai2019TransformerXLAL}. ALBERT \cite{Lan2019ALBERTAL} explores the role of size in LMs, finding that parameter weights can be shared across layers, yielding a model with 18 times fewer parameters that trains 1.7x faster than regular BERT while producing language representations similar to BERT's. DistilBERT \cite{Sanh2019DistilBERTAD} produces a smaller LM using knowledge distillation, resulting in performance similar to BERT's with a 40\% smaller model. GPT \cite{Radford2018ImprovingLU}, GPT-2 \cite{Radford2019LanguageMA}, and GPT-3 \cite{Brown2020LanguageMA} build AR LMs better suited to language generation by using progressively larger models and a modified transformer decoder architecture. ELECTRA \cite{Clark2020ELECTRAPT} produces a model with performance comparable to BERT's with substantially shorter training by having the model predict all tokens in a sentence instead of only the $[MASK]$ tokens and by corrupting the input using a generator similar to that of a GAN. Beyond the few models we mention, there are countless other optimizations and applications of these large-scale NNLMs.

\subsection{Language Model's Impact}
\label{chap:prior:sec:lm:effects}
In studying the performance of the rapidly growing NNLMs, researchers have found that larger models are more sample-efficient and reach a higher level of performance in fewer steps \cite{Kaplan2020ScalingLF}. Kaplan et al., 2020 find that dataset size, model size, and the compute used for training all have a power-law relationship with performance, as long as the factors grow proportionally.
The authors estimate that the best model would have about a trillion parameters, trained on a trillion-word corpus using over 100 petaflops of compute.\\
While there is no debate about the positive impact these large LMs have had on NLP, the research community has begun discussing the broader effects of these continually growing language models. A decade ago, most NLP research could be developed and trained on commodity laptops or servers; competitive research now usually requires multiple instances of specialized hardware like GPUs and TPUs \cite{Strubell2019EnergyAP}. Strubell et al., 2019 study the energy implications of training these NNLMs and estimate that a single training run of a model like GPT-2 can cost upward of \$40,000, that the architecture search and hyperparameter tuning can be upwards of \$3,000,000, and that the CO$_2$ released by training one of these models can be comparable to the CO$_2$ released in the entire life-cycle of a car. Zhou et al., 2020 \cite{Zhou2020HULKAE} introduce HULK to encourage researchers to think about efficiency in every stage of model creation. Looking at the impact of large language models, researchers can infer that some of the most interesting research in NLP will focus on how to scale model size while balancing the increased cost of doing so. \\
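Returning to the MLM objective from the BERT subsection, the 80/10/10 corruption scheme can be made concrete with a short Python sketch. The vocabulary, token strings, and helper name `mlm_corrupt` are invented for illustration; real implementations operate on WordPiece token ids rather than words.

```python
import random

MASK = "[MASK]"
vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran"]  # toy vocabulary

def mlm_corrupt(tokens, rng, select_rate=0.15):
    # BERT-style MLM corruption: select ~15% of tokens; of those,
    # 80% -> [MASK], 10% -> a random token, 10% left unchanged.
    corrupted = list(tokens)
    targets = {}  # position -> original token the model must predict
    for i, tok in enumerate(tokens):
        if rng.random() < select_rate:
            targets[i] = tok
            r = rng.random()
            if r < 0.8:
                corrupted[i] = MASK
            elif r < 0.9:
                corrupted[i] = rng.choice(vocab)
            # else: keep the original token (it is still a prediction target)
    return corrupted, targets

rng = random.Random(0)
tokens = ["the", "cat", "sat", "on", "the", "mat"] * 20
corrupted, targets = mlm_corrupt(tokens, rng)
```

The model then receives `corrupted` as input and is trained to recover the entries of `targets`; note that unselected positions contribute nothing to the MLM loss.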
\chapter{License and Acknowledgments}
\label{chap:licenses}
This thesis manuscript is the final thesis version, numbered v1.0.2. The manuscript source files are available online at \href{https://github.com/speredenn/epfl-leni-oilfree-radial-cp-hp/releases/tag/v1.0.2}{https://github.com/speredenn/epfl-leni-oilfree-radial-cp-hp/releases/tag/v1.0.2}. The compiled PDF version is available at \href{http://dx.doi.org/10.5075/epfl-thesis-6764}{doi:10.5075/epfl-thesis-6764}. The official printed version of this thesis manuscript is printed in Black \& White (B\&W). Most of the figures presented in this manuscript have been designed to be readable in B\&W, but a few of them are easier to understand in their color version (like \cpref{fig:awp-w-wo-4way-diagrams} or \cpref{fig:awp-too-high-motor-cooling-flow}). Please refer to the online version on the EPFL website (\href{http://dx.doi.org/10.5075/epfl-thesis-6764}{doi:10.5075/epfl-thesis-6764}) to access the color version of the figures.

\section*{License}
\label{sec:licenses}
This thesis work is licensed under \href{http://creativecommons.org/licenses/by/4.0/}{Creative Commons Attribution 4.0 International (CC BY 4.0)}\footnotep{http://creativecommons.org/licenses/by/4.0/}. Except where specified otherwise in the credits sections of each chapter, graphs, figures, tables, pictures, movies, and photographs are copyrighted by Jean-Baptiste Carré and are licensed under \href{http://creativecommons.org/licenses/by/4.0/}{Creative Commons Attribution 4.0 International (CC BY 4.0)}. The parts and assemblies designs are copyrighted by Jean-Baptiste Carré and are licensed under \href{http://creativecommons.org/licenses/by/4.0/}{Creative Commons Attribution 4.0 International (CC BY 4.0)}. The codes developed for the purpose of this thesis are licensed under the \href{http://www.gnu.org/licenses/gpl.html}{GNU General Public License 3.0}\footnotep{http://www.gnu.org/licenses/gpl.html}.
In order to use all the functions of those codes, some external codes and tools, not licensed under the \href{http://www.gnu.org/licenses/gpl.html}{GNU General Public License 3.0} or a more permissive license, are needed. Please refer to the specific instructions in each of those codes to know more about those limitations and conditions of use. Those external codes or tools are not provided with the thesis codes. The sources of this thesis and its codes are available in the repositories listed below: \begin{itemize} \item Thesis dynamic version:\\ \href{https://www.authorea.com/users/54640/articles/71121/}{https://www.authorea.com/users/54640/articles/71121/} \item Thesis static version, with data analysis codes included:\\ \href{https://github.com/speredenn/epfl-leni-oilfree-radial-cp-hp}{https://github.com/speredenn/epfl-leni-oilfree-radial-cp-hp} \end{itemize} \section*{Acknowledgments} \label{sec:thanks} First of all I want to thank \href{https://ch.linkedin.com/pub/daniel-favrat/9/564/3a5}{Professor Daniel Favrat} who has supported me unconditionally throughout the whole thesis time. I particularly appreciated his guidance and vision. I also address my gratitude toward Professor \href{http://people.epfl.ch/jurg.schiffmann}{Jürg Schiffmann} who supported me in this project with lots of inestimable advice and care on theoretical and technical topics. I am also grateful that he accepted to be one of the co-examiners for this thesis. My thanks equally go to \href{https://be.linkedin.com/pub/vincent-lemort/19/860/bb6}{Professor Vincent Lemort} and \href{https://uk.linkedin.com/pub/david-hughes/13/658/29}{Dr. David Hughes} for having accepted to be co-examiners for this thesis, and to \href{http://people.epfl.ch/jan.vanherle}{Professor Jan Van Herle} to have accepted to be the President of the jury. 
Many thanks to my colleagues \href{https://ch.linkedin.com/pub/julien-jakubowski/9/599/67a}{Julien Jakubowski} and \href{https://de.linkedin.com/pub/johannes-wegele/23/27a/897/en}{Johannes Wegele} who have greatly contributed to the success of this project with their help in the laboratory. I also want to thank \href{https://fr.linkedin.com/pub/gilles-bernard/28/2a7/534/en}{Gilles Bernard}, \href{https://fr.linkedin.com/pub/patrice-dubois/23/2aa/922/en}{Dr. Patrice Dubois}, and \href{https://fr.linkedin.com/in/jfomhover}{Professor Jean-François Omhover} to have enhanced my perspectives and sharpened my taste for research. Thanks also to \href{http://www.sib.heig-vd.ch/institut/Lists/quipe/DispForm.aspx?ID=3}{Professor Roger Röthlisberger} and to my colleagues \href{https://ch.linkedin.com/pub/violette-mounier/80/347/952/en}{Violette Mounier}, \href{https://ch.linkedin.com/pub/antoine-girardin/36/47a/5a1/en}{Antoine Girardin}, \href{https://ch.linkedin.com/pub/noe-bory/42/276/ba7}{Noé Bory}, and \href{http://www.sib.heig-vd.ch/institut/Lists/quipe/DispForm.aspx?ID=1}{Julien Ropp} for their help, tolerance, and support during the last months of my PhD work. During my work and my research at the \href{http://leni.epfl.ch/en}{Industrial Energy Systems Laboratory (LENI)}, there were many people who have made my work considerably easier and more enjoyable. 
Particular thanks (in alphabetic order) to \href{https://ch.linkedin.com/pub/alberto-mian/7a/636/661/en}{Alberto}, \href{https://ch.linkedin.com/pub/amalric-ortlieb/21/40a/b58/en}{Amalric}, \href{https://ch.linkedin.com/pub/j-andreas-schuler/43/103/3b8/en}{Andreas Schüler}, \href{https://ch.linkedin.com/pub/angel-iglesias/66/5a3/8b6/en}{Angel}, \href{https://ch.linkedin.com/in/antoinebreton/en}{Antoine B.} \href{https://ch.linkedin.com/in/antoninfaes}{Antonin}, \href{https://scholar.google.com/citations?user=MSF-_JEAAAAJ}{Arata}, \href{https://ca.linkedin.com/pub/azadeh-jafari/40/b13/877}{Azadeh}, \href{https://ch.linkedin.com/pub/benjamin-vuitel/55/5b6/a56}{Benjamin}, \href{https://ch.linkedin.com/pub/benoit-pfister/5a/927/666/en}{Benoît}, \href{https://ch.linkedin.com/pub/cedric-blondel/16/683/471/en}{Cédric B.}, \href{https://ch.linkedin.com/pub/cedric-fatio/25/aa9/323/en}{Cédric F.}, \href{https://ch.linkedin.com/pub/christian-rod/13/a73/899/en}{Christian}, \href{https://ch.linkedin.com/pub/claudia-taschler/6/33a/509}{Claudia}, \href{https://ch.linkedin.com/pub/david-abrantes/9b/b95/b6b}{David}, \href{https://ch.linkedin.com/pub/diego-larrain/2/a5/870}{Diego}, \href{https://ch.linkedin.com/in/emanuelapeduzzi}{Emanuela}, \href{https://ch.linkedin.com/pub/emanuele-facchinetti/8/b28/15}{Emmanuele}, \href{https://ch.linkedin.com/in/germainaugsburger}{Germain}, \href{https://de.linkedin.com/pub/helen-becker/5/b33/b30}{Helen}, \href{https://ch.linkedin.com/pub/henning-luebbe/15/748/a55}{Henning}, \href{https://ch.linkedin.com/pub/hossein-madi/42/a43/700/en}{Hossein}, \href{https://ch.linkedin.com/pub/irwin-gafner/3/182/ba6}{Irvin}, \href{https://www.mendeley.com/profiles/jakob-rager/}{Jakob}, \href{https://uk.linkedin.com/pub/james-spelling/16/291/8b0}{James}, \href{https://de.linkedin.com/pub/johannes-wegele/23/27a/897/en}{Johannes}, \href{https://ch.linkedin.com/in/jonathandemierre}{Jonathan}, \href{https://www.linkedin.com/in/jorgelopezmoreno/en}{Jorge}, 
\href{https://ch.linkedin.com/pub/juliette-coeffe/46/222/5b0/en}{Juliette}, \href{https://lu.linkedin.com/pub/laurence-tock/53/819/5b9/en}{Laurence}, \href{https://ch.linkedin.com/pub/leandro-salgueiro/13/576/982/en}{Leandro}, \href{https://www.linkedin.com/in/ledagerber}{Leda}, \href{https://ch.linkedin.com/in/leonidastsikonis}{Leonidas}, \href{https://ch.linkedin.com/pub/luc-girardin/5/952/a86}{Luc}, \href{https://it.linkedin.com/in/manuelegatti/en}{Manuele}, \href{https://ch.linkedin.com/pub/marco-rossati/42/302/9/en}{Marco R.}, \href{https://ch.linkedin.com/pub/martin-gassner/24/897/827}{Martin}, \href{https://se.linkedin.com/in/matteomorandin}{Matteo}, \href{https://ch.linkedin.com/pub/matias-canedo/67/32/97b/en}{Mathias}, \href{https://ch.linkedin.com/pub/matthias-bendig/a3/6a5/a62/en}{Matthias B.}, \href{https://ch.linkedin.com/pub/matthias-dubuis/5/958/8a1}{Matthias D.}, \href{https://ch.linkedin.com/pub/matthieu-charrier/14/84b/b79/en}{Matthieu}, \href{https://ch.linkedin.com/pub/nadia-chatagny/56/3a6/a9}{Nadia}, \href{https://ch.linkedin.com/pub/nasibeh-pouransari/7a/510/39}{Nasibeh}, \href{https://ch.linkedin.com/in/nicolasborboen}{Nicolas B.}, \href{https://ch.linkedin.com/pub/nicolas-descoins/21/836/726/en}{Nicolas D.}, \href{https://ch.linkedin.com/in/nicolasroggo/en}{Nicolas R.}, \href{https://zw.linkedin.com/pub/nicole-calame-darbellay/5/953/5a5}{Nicole}, \href{https://ch.linkedin.com/pub/nordahl-autissier/19/ba8/9a9/en}{Nordahl}, \href{https://ch.linkedin.com/in/oliviermegel}{Olivier}, \href{https://uk.linkedin.com/pub/pietro-tanasini/17/859/b6b}{Pietro}, \href{http://people.epfl.ch//priscilla.caliandro}{Priscilla}, \href{https://ch.linkedin.com/in/rbolliger}{Raffaele}, \href{https://ch.linkedin.com/pub/ramanunni-menon/2b/bb8/452}{Raman}, \href{https://ch.linkedin.com/pub/romain-vallotton/43/898/261/en}{Romain}, \href{https://ch.linkedin.com/pub/samira-fazlollahi/a1/431/bba}{Samira}, 
\href{https://ch.linkedin.com/pub/samuel-haury/36/1ab/214/en}{Samuel Haury}, \href{http://www.researchgate.net/profile/Samuel_Henchoz}{Samuel Henchoz}, \href{http://sti.epfl.ch/page-97589-en.html}{Simon G.}, \href{https://ch.linkedin.com/pub/dr-sinan-teske/aa/210/831/en}{Sinan}, \href{http://people.epfl.ch/Stefan.Diethelm}{Stefan}, \href{https://ch.linkedin.com/in/stephanebungener}{Stéphane}, \href{https://ch.linkedin.com/in/stefanomoret/en}{Stefano}, \href{https://dk.linkedin.com/pub/tuong-van-nguyen/90/587/409}{Tivi}, \href{https://ch.linkedin.com/in/cornuthierry}{Thierry}, \href{https://ch.linkedin.com/pub/thomas-wicht/26/285/7b5/en}{Thomas}, \href{https://ch.linkedin.com/pub/yannick-bravo/24/466/511/en}{Yannick}, \href{https://ch.linkedin.com/pub/zacharie-wuillemin/36/a32/b4a}{Zacharie}, \href{http://people.epfl.ch/zlatina.dimitrova}{Zlatina}, and \href{https://ca.linkedin.com/pub/zoe-perin-levasseur/54/112/517}{Zoé}. Thanks to them for sharing many ideas and good times. I also thank \href{https://www.linkedin.com/pub/michele-zehnder/1/b2b/1a5}{Dr. Michele Zehnder}, \href{https://www.linkedin.com/pub/deborah-sills/54/4b1/431}{Professor Deborah Sills}, \href{https://ch.linkedin.com/pub/cecile-munch-alligne/2/196/786}{Professor Cécile Munch-Alligné}, \href{http://www.mines-paristech.fr/Formation/Doctorat/Annuaire-docteurs/Detail/Sami-BARBOUCHI-2007/19447}{Dr. Sami Barbouchi} and \href{https://ch.linkedin.com/pub/pierre-alain-giroud/0/999/ab3}{Pierre-Alain Giroud} for the interesting discussions that we have had each time we met. I also want to thank my colleagues and friends from \href{http://ltcm.epfl.ch/}{Heat and Mass Transfer Laboratory (LTCM)} and the \href{http://gtt.epfl.ch/}{Thermal Turbomachinery Laboratory (LTT)} who have shared with me nice and sweet moments, in addition to their experience with experimental work. 
My thanks go notably, in \href{http://ltcm.epfl.ch/}{LTCM} (in alphabetic order), to \href{https://scholar.google.co.uk/citations?user=lS897DQAAAAJ}{Andrea}, \href{http://www.researchgate.net/profile/Bogdan_Alexandru_Nichita}{Bogdan}, \href{https://ch.linkedin.com/in/bdentremont}{Brian}, \href{https://ch.linkedin.com/pub/cecile-taverney/b5/372/863/en}{Cécile}, \href{https://uk.linkedin.com/in/wuduan}{Duan}, \href{https://de.linkedin.com/pub/etienne-costa-patry/24/322/768}{Etienne}, \href{https://ch.linkedin.com/in/eugenevanrooyen}{Eugene}, \href{https://ch.linkedin.com/pub/farzad-vakili-farahani/6a/252/519}{Farzad}, \href{https://ch.linkedin.com/pub/giulia-spinato/7a/95b/88}{Giulia}, \href{http://gustavorabello.org/}{Gustavo}, \href{https://ch.linkedin.com/pub/jackson-marcinichen/3a/45a/95b/en}{Jackson}, \href{https://ch.linkedin.com/pub/chin-lee-jeff-ong/1a/601/543}{Jeff}, \href{https://ch.linkedin.com/in/marcomilan/en}{Marco M.}, \href{https://ch.linkedin.com/pub/mathieu-habert/34/128/682/en}{Mathieu}, \href{https://ch.linkedin.com/pub/mirco-magnini/8/10a/5ab}{Mirco}, \href{people.epfl.ch/nathalie.matthey}{Natalie}, \href{https://ch.linkedin.com/pub/nicolas-antonsen/21/284/554/en}{Nicolas A.}, \href{https://ch.linkedin.com/pub/nicolas-lamaison/37/2a6/a68/en}{Nicolas L.}, \href{https://ch.linkedin.com/pub/ricardo-j-da-silva-lima/32/4bb/217/en}{Ricardo}, \href{https://ch.linkedin.com/pub/khodaparast-sepideh/86/a25/a71}{Sepideh}, and \href{https://ch.linkedin.com/pub/sylwia-szczukiewicz/87/24/43b}{Sylwia}, and in \href{http://gtt.epfl.ch/}{LTT} (in alphabetic order), to \href{https://ch.linkedin.com/pub/achim-zanker/85/886/978/en}{Achim}, \href{https://de.linkedin.com/pub/alexandros-terzis/a6/258/661}{Alexandros}, \href{https://ch.linkedin.com/in/eliacolombo}{Elia}, \href{https://ch.linkedin.com/pub/magnus-jonsson/2/4b2/ba}{Jonsson}, \href{https://www.xing.com/profile/Philip_Peschke}{Philip}, \href{https://ch.linkedin.com/pub/sami-goekce/32/591/274/en}{Sami}, and 
\href{https://de.linkedin.com/pub/virginie-chenaux/4/27b/74b}{Virginie}. Thanks also to the people close and dear to me who were there to support me so often. I met wonderful people while practicing martial arts, volunteering, and defending the causes dear to me. Thanks for the wonderful times that sustained me and made the workload bearable. My special thanks go to \href{http://people.epfl.ch/marc.salle}{Marc Salle}, \href{http://people.epfl.ch/christophe.zurmuehle}{Christophe Zurmühle}, \href{http://people.epfl.ch/nicolas.jaunin}{Nicolas Jaunin}, \href{http://people.epfl.ch/laurent.chevalley}{Laurent Chevalley}, and Aziz, for their help with the manufacturing of the parts and the assembly of the experimental setups. Many thanks to \href{http://people.epfl.ch/brigitte.fayet}{Brigitte}, \href{http://people.epfl.ch/suzanne.zahnd}{Suzanne Z.}, \href{http://people.epfl.ch/irene.laroche}{Irène}, and Faye, the amazing \href{http://leni.epfl.ch/en}{LENI} secretaries, who always made the administrative work easy and who saved the day many times, despite the tight schedule.
Many thanks to my first aider teammates and friends, notably (in alphabetic order) to \href{https://ch.linkedin.com/in/agnesjourda/en}{Agnès}, \href{https://ch.linkedin.com/pub/alexandre-jacquat/6b/b25/3aa/en}{Alexandre}, \href{http://people.epfl.ch/alok.rudra}{Alok}, \href{https://ch.linkedin.com/pub/andreas-schwab/85/620/36b/en}{Andreas Schwab}, \href{https://ch.linkedin.com/pub/audrey-sicard/14/244/880}{Audrey}, \href{https://ch.linkedin.com/pub/carlos-morais/2a/a87/546}{Carlos}, \href{https://ch.linkedin.com/pub/christophe-neuilly/7a/619/961/en}{Christophe}, \href{https://ch.linkedin.com/pub/coralie-busso/b7/5a2/839/en}{Coralie}, \href{https://ch.linkedin.com/pub/cyrielle-collinet/82/898/229/en}{Cyrielle}, \href{http://people.epfl.ch/daniela.trogolo}{Daniela}, \href{http://people.epfl.ch/elise.vinckenbosch}{Élise}, \href{https://ch.linkedin.com/pub/eric-du-pasquier/10/181/998}{Eric}, \href{http://people.epfl.ch/franck.levrier}{Frank}, \href{https://ch.linkedin.com/pub/gaelle-thurre/b1/301/1b1/en}{Gaëlle}, \href{https://fr.linkedin.com/pub/julien-peillex/87/12b/b7a/en}{Julien}, \href{https://ch.linkedin.com/pub/leila-cammoun/a8/a96/a86}{Leila}, \href{https://ch.linkedin.com/pub/linda-rebelles/93/b51/92b/en}{Linda}, \href{https://ch.linkedin.com/pub/mahe-raccaud/82/b73/6aa}{Mahé}, \href{https://ch.linkedin.com/in/mickaelmisbach}{Mickaël}, \href{https://ch.linkedin.com/pub/mikael-sturny/73/86/b28}{Mikael}, \href{https://ch.linkedin.com/pub/monica-perrenoud/20/5ba/a46}{Monica}, \href{https://ch.linkedin.com/pub/nikita-saugy/42/67a/688/en}{Nikita}, \href{https://ch.linkedin.com/pub/nils-karlsson/46/308/228/en}{Nils}, \href{http://people.epfl.ch/pascal.zbinden}{Pascal}, \href{https://ch.linkedin.com/pub/pauline-itty/85/39a/a3b/en}{Pauline}, \href{http://people.epfl.ch/petr.grivaz}{Petr}, \href{https://ch.linkedin.com/pub/pierre-alain-pascal/2a/a58/890/en}{Pierre-Alain}, \href{http://people.epfl.ch/philipp.clausen}{Philipp}, 
\href{https://ch.linkedin.com/in/rgindrat/en}{Raphaël}, \href{https://ch.linkedin.com/pub/sabina-schneider/b2/2ba/538}{Sabina}, \href{http://people.epfl.ch/simon.doppler}{Simon D.}, \href{https://ch.linkedin.com/pub/suzanne-dubsky/84/a58/555/en}{Suzanne D.}, \href{https://sg.linkedin.com/in/thomascibils/en}{Thomas}, and \href{https://ch.linkedin.com/pub/yvan-deillon/2a/b5a/382/en}{Yvan}, for the nice moments during the EPFL events and for the really interesting exchanges and discussions. Many thanks to the \href{http://securite.epfl.ch/}{Safety, Prevention and Health Domain (DSPS)} for its top-level first-aider-training program. My deepest thanks go to you, Alisa, who always supported me despite the constraints my work imposed on our lives. You have endured difficult, sometimes unacceptable situations, and you have helped me to evolve and grow. For this, I will never thank you enough. Thank you, Riwen, for being such a patient, understanding, and tolerant child. Thanks for being so often kind when, finally back home after long work days, I fell asleep on the floor of your room while we were playing. Thanks to you, \href{https://en.wikipedia.org/wiki/Adam_Young}{Adam Young}, for the amazing songs and the optimistic, cheerful melodies and lyrics that guided me through my thesis work and still guide me through my life. Thanks to you, \href{https://www.youtube.com/playlist?list=PLIB0FUzmhLoSCRLM2WFt_mz6plvulw6hU}{Dr. David Lefrançois}, \href{https://en.wikipedia.org/wiki/Stephen_Covey}{Dr. Stephen Covey}, and \href{https://en.wikipedia.org/wiki/Tony_Buzan}{Tony Buzan}, for the amazing guidance and insights you share every day (or that you provided and shared, Stephen).
This thesis work was funded by the \href{https://www.kti.admin.ch/kti/en/home.html}{Swiss Commission for Technology and Innovation (CTI)} and by \href{http://www.fischerspindle.com/facilities/fischer-engineering-solutions-ag/}{Fischer Engineering Solutions AG}, part of the \href{http://www.fischerspindle.com/about-us/brands/}{Fischer Spindle group}. The compressor units used in this work were manufactured by \href{http://www.fischerspindle.com/facilities/fischer-engineering-solutions-ag/}{Fischer Engineering Solutions AG}. I thank both institutions for their help, their support, and the funding that made this thesis work possible.
%!TEX root = ../main.tex
\objective{Understand and simplify the relationships between logs, powers, and roots.}
\subsection{The Three Components}
Our modern mathematical notation obscures one relationship behind three different notations. The following equations all express the same thing:
\begin{enumerate}
\item $\log_2{8}=3$
\item $2^3=8$
\item $\sqrt[3]{8}=2$
\end{enumerate}
All three embody the same relationship: 2 is the base, 3 is the exponent, and 8 is the result. Three elements suggest a three-sided shape, a \emph{triangle of power}: $\tripow{2}{3}{8}$ Leaving off any side of the triangle of power indicates that the missing number is what we seek.
\begin{enumerate}
\item $\log_2{8}$ can be represented as $\tripow{2}{}{8}$
\item $2^3$ can be represented as $\tripow{2}{3}{}$
\item $\sqrt[3]{8}$ can be represented as $\tripow{}{3}{8}$
\end{enumerate}
Some people complain that this new notation ruins the line height, that it is too tall. But these pedants rarely write\\ $(2\div(3+4))\div((5+6)\div(7+8))$. Indeed, it is preferable to see: $$\frac{\frac{2}{3+4}}{\frac{5+6}{7+8}}$$ In the same way, one might write $2\triangle^3$, $2\triangle_8$, and $\triangle^3_8$ inline, but expand in two dimensions when the occasion permits.
\subsection{Inverses}
The true usefulness of the triangle of power is revealed when we try to present more complicated relationships. Some students immediately grasp what $e^{\ln{x}}$ is saying; others struggle for years with the notation. \emph{There is a power we can put on $e$ to get $x$. Raise $e$ to that power.} If you get it, the answer is obviously $x$. But the symbols certainly don't help you see it. Instead, triangles make the relationship more obvious: \begin{equation} \tripow{e}{ \tripow{e}{}{x}}{}=x \quad \text{vs} \quad e^{\ln{x}} \end{equation} The inner triangle is blank in exactly the position it occupies within the outer triangle. Because the $e$'s are in the same place, everything cancels, leaving only the $x$.
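For a concrete check, take the numbers from the opening example; evaluating the inner triangle first makes the cancellation visible step by step:

```latex
$$ \tripow{2}{\tripow{2}{}{8}}{} = \tripow{2}{3}{} = 8
   \qquad\text{just as}\qquad
   2^{\log_2 8} = 2^3 = 8 $$
```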
Other hard expressions which are simple inverses are equally obscure in traditional notation, and quite clear in triangle form:
\begin{equation} \tripow{}{e}{\scriptstyle \tripow{x}{e}{}} = x \quad \text{vs} \quad \sqrt[e]{x^e} = x \end{equation}
\begin{equation} \tripow{\tripow{}{e}{x}}{e}{} = x \quad \text{vs} \quad \sqrt[e]{x}^e = x \end{equation}
\begin{equation} \tripow{e}{}{\tripow{e}{x}{}} = x \quad \text{vs} \quad \ln{e^x} = x \end{equation}
\begin{equation} \tripow{}{\tripow{x}{}{e}}{e} = x \quad \text{vs} \quad \sqrt[\log_x{e}]{e} = x \end{equation}
\begin{equation} \tripow{\tripow{}{x}{e}}{}{e} = x \quad \text{vs} \quad \log_{\sqrt[x]{e}}{e} = x \end{equation}
\subsection{P-Plus}
The properties of logs, exponents, and roots become much more transparent in triangle notation. For example, the sum-of-exponents rule looks like this: $$\tripow{b}{m}{}\cdot{}\tripow{b}{n}{}=\tripow{b}{m+n}{}$$ We shall see that this bears a strong resemblance to a similar property of logs: $$\tripow{b}{}{m} + \tripow{b}{}{n} = \tripow{b}{}{m\cdot{}n}$$ Graphically, keeping the base the same but switching from the exponent corner to the result corner changes where the addition and the multiplication happen. You will derive all the various versions of these rules in the exercises and problems, but there is one relationship which might appear perplexing at first. Consider the product of roots: $$ \tripow{}{x}{z} \cdot{} \tripow{}{y}{z} $$ We have not had occasion to contemplate this before. What operation should govern this relationship? Given the thorough treatment of rational exponents in chapter 5, perhaps it is clearer to rewrite this product as fractional powers: {\Large $$z^{\frac{1}{x}} \cdot z^{\frac{1}{y}}$$ } The answer is a power which is the sum of the reciprocals of $x$ and $y$, or, equivalently, a root which is the reciprocal of that sum! This unusual operation is actually rather common in practical applications and deserves its own symbol in this book, $\pplus$.
This symbol was chosen because the reciprocal of a sum of reciprocals also appears in parallel resistance, whose symbol is $\parallel$. \begin{derivation}{P-plus} $$x\pplus y = \cfrac{1}{\frac{1}{x}+\frac{1}{y}} = \cfrac{1}{\frac{y}{xy}+\frac{x}{xy}} = \cfrac{1}{\frac{x+y}{xy}} = \frac{xy}{x+y}$$ \end{derivation} This strange operation is necessary in a world where powers and roots are reciprocals of each other: $$ \tripow{a}{x}{} = \tripow{}{\frac{1}{x}}{a} $$ There are many more intriguing relationships that can be written clearly and intuitively on the triangle of power, e.g.\ $\tripow{m}{}{x}\pplus\tripow{n}{}{x} = \tripow{m\cdot{}n}{}{x}$ or $\tripow{x}{}{a}\cdot{}\tripow{a}{}{y} = \tripow{x}{}{y}$. You are encouraged to experiment and tinker with this powerful tool.
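With $\pplus$ defined, the product-of-roots question from the previous subsection can be closed in a single chain; this is simply the fractional-power rewriting carried one step further:

```latex
$$ \tripow{}{x}{z} \cdot{} \tripow{}{y}{z}
   = z^{\frac{1}{x}} \cdot z^{\frac{1}{y}}
   = z^{\frac{1}{x}+\frac{1}{y}}
   = z^{\frac{1}{x \pplus y}}
   = \tripow{}{x \pplus y}{z} $$
```

The middle step uses the definition directly: $\frac{1}{x}+\frac{1}{y} = \frac{1}{x\pplus y}$.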
\documentclass[a4paper]{article}
\usepackage{fontspec}
\defaultfontfeatures{Ligatures=TeX, Numbers=OldStyle, SmallCapsFeatures={LetterSpace=8, Numbers=OldStyle}}
\setmainfont
%{Linux Libertine}
{Gentium Book Basic}
\usepackage{microtype}
\usepackage[siunitx]{circuitikz}
\sisetup{load=derived} % loading \siemens
\usepackage{showexpl}
\usepackage{framed}
\usepackage{hyperref}
\hypersetup{
    bookmarks=false,      % show bookmarks bar?
    pdftitle={CircuitTikZ v. \pgfcircversion\ - manual}, % title
    pdfauthor={Massimo Redaelli},    % author
    pdfsubject={CircuitTikZ manual}, % subject of the document
    pdfkeywords={},       % list of keywords
    colorlinks=true,      % false: boxed links; true: colored links
    linkcolor=black,      % color of internal links
    citecolor=black,      % color of links to bibliography
    filecolor=black,      % color of file links
    urlcolor=black        % color of external links
}
\usepackage{imakeidx}
\makeindex[title=Index of the components, intoc=true]
\def\circuititem#1#2#3{\item {#2} (\texttt{#1}) \index{#1} \par
  \begin{center}\begin{circuitikz} \draw (0,0) node[#1] {}; \end{circuitikz} \end{center} \par}
\newcommand{\circuititembip}[3]{\item {#2} \index{#1} \tikz\foreach \i in {#3} {\index{\i|see{#1}} }; (\texttt{#1}%
\ifthenelse{\equal{#3}{}}{%
}{%
, or \texttt{#3}%
}%
)\par
  \begin{center}\begin{circuitikz} \draw (0,0) to[#1] (2,0); \end{circuitikz} \end{center}\par}
\usepackage{marvosym}
\newcommand{\email}[2][]{\def\temp{#1}\ifx\temp\empty\Email~\fi\href{mailto:#2}{#2}}
\long\def\comment#1{}
\begin{document}
\setcounter{secnumdepth}{3}
\setcounter{tocdepth}{3}
\def\TikZ{Ti\emph{k}Z}
\def\Circuitikz{Circui\TikZ}
\def\ConTeXt{Con\TeX t}
\lstset{frameround=fttt}
\lstloadlanguages{TeX}
\title{\Circuitikz \\{\large version \pgfcircversion}}
\author{Massimo A.
Redaelli}
\date{\today}
\maketitle
\tableofcontents
\section{Introduction}
After two years of little exposure, only on my personal website\footnote{The package has now moved to its own git repository: \url{https://github.com/mredaelli/circuitikz}. Contributions are welcome.}, I did a major overhaul of the code of \Circuitikz, fixing several problems and converting everything to \TikZ\ version $2.0$. I'm not too sure about the result, because my (La)\TeX\ skills leave much to be improved, but it seems it's time for more user feedback. So, here it is\ldots \medskip I know the documentation is somewhat scant. I hope to have time to improve it a bit.
\subsection{About}
This package provides a set of macros for naturally typesetting electrical and (somewhat less naturally, perhaps) electronic networks. It was born mainly for writing my own exercise books and exam sheets for the Elettrotecnica courses at Politecnico di Milano, Italy. I wanted a tool that was easy to use, with a lean syntax, native to \LaTeX, and directly supporting PDF output. So I based everything on the very impressive (if somewhat verbose at times) \TikZ\ package.
\subsection{Loading the package}
\verb!\usepackage{circuitikz}!
\TikZ\ will be loaded automatically.
\subsection{License}
Copyright \copyright\ 2007--2011 Massimo Redaelli. This package is author-maintained. Permission is granted to copy, distribute and/or modify this software under the terms of the \LaTeX\ Project Public License, version 1.3.1, or the GNU Public License. This software is provided `as is', without warranty of any kind, either expressed or implied, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose.
\subsection{Feedback}
Feedback is much appreciated: \email{m.redaelli@gmail.com}, although I don't guarantee quick answers.
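Before the reference material, a minimal complete document may help make the syntax concrete. This example is an illustrative sketch (the bipole styles \texttt{V} and \texttt{R}, and the label keys \texttt{v} and \texttt{l}, are documented later in this manual), not one shipped with the package:

```latex
\documentclass{article}
\usepackage{circuitikz}
\begin{document}
% A single loop: a voltage source driving a resistor.
\begin{circuitikz}
  \draw (0,0) to[V, v=$U$] (0,2)  % voltage source on the left
        to[R, l=$R_1$] (2,2)      % resistor across the top
        to[short] (2,0)           % plain wire down the right side
        to[short] (0,0);          % plain wire closing the loop
\end{circuitikz}
\end{document}
```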
\subsection{Requirements}
\begin{itemize}
\item \texttt{tikz}, version $\ge 2$;
\item \texttt{xstring}, not older than 2009/03/13;
\item \texttt{siunitx}, if using the \texttt{siunitx} option.
\end{itemize}
\subsection{Incompatible packages}
None, as far as I know.
\subsection{Introduction to version 0.3.0}
Probably nobody is hoping or caring for a new version of the package at this point, seeing how long it took me to prepare this release. But here it is, fixing a big problem (voltage labels in the wrong place, in some cases) and adding several components. Thanks to everyone for reporting bugs and suggesting improvements.
\subsection{Introduction to version 0.2.3}
Since I waited a long time before updating the package, many feature requests had piled up on my desk. They should all be implemented now. There are a number of backward incompatibilities; I'm sorry, but I had to make a choice in order not to end up with an inconsistent interface. They are mostly, in my opinion, minor problems that can be dealt with through appropriate package options:
\begin{itemize}
\item \texttt{potentiometer} is now the standard resistor-with-arrow-in-the-middle; the old potentiometer is now known as \texttt{variable resistor} (or \texttt{vR}), similarly to \texttt{variable inductor} and \texttt{variable capacitor};
\item \texttt{american inductor} was not really the standard american inductor. The old american inductor has been renamed \texttt{cute inductor};
\item \texttt{transformer}, \texttt{transformer core} and \texttt{variable inductor} are now linked with the chosen type of \texttt{inductor};
\item styles for selecting shape variants (like \texttt{[american resistors]}) are now in the plural to avoid conflicts with paths (like \texttt{to[american resistor]}).
\end{itemize}
\subsection{\ConTeXt\ compatibility}
As requested by some users, I fixed the package to make it compatible with \ConTeXt. Just use \verb!\usemodule[circuitikz]! in your preamble and include the code between \verb!\startcircuitikz!
and \verb!\endcircuitikz!. Please notice that the package \texttt{siunitx} is \emph{not} available for \ConTeXt: the option \texttt{siunitx} simply defines a few measurement units typical of the electric sciences. \medskip When actually using \Circuitikz\ with \TikZ\ version 2 in \ConTeXt, an error comes up, saying something like
\begin{verbatim}
! Undefined control sequence.
\tikz@cc@mid@checks -> \pgfutil@ifnextchar!
\end{verbatim}
The solution was suggested to me by Aditya Mahajan, and it involves modifying a file in \TikZ:
\begin{verbatim}
Here is the fix. In tikzlibrarycalc.code.tex change

\def\tikz@cc@mid@checks{
  \pgfutil@ifnextchar !{%AM: Added space
    \tikz@cc@mid%
  }{%
    \advance\pgf@xa by\tikz@cc@factor\pgf@xb%
    \advance\pgf@ya by\tikz@cc@factor\pgf@yb%
    \tikz@cc@parse% continue
  }%
}

\def\tikz@cc@mid !{%AM Added space
  \pgfutil@ifnextchar({%
    \tikz@scan@one@point\tikz@cc@project%
  }{%
    \tikz@cc@mid@num%
  }%
}
\end{verbatim}
As far as I know, this is a bug in \TikZ, and I notified the author; but until he fixes it, you know the workaround.
\section{Options}
\begin{itemize}
\item \texttt{europeanvoltages}: uses arrows to define voltages, and uses european-style voltage sources;
\item \texttt{americanvoltages}: uses $-$ and $+$ to define voltages, and uses american-style voltage sources;
\item \texttt{europeancurrents}: uses european-style current sources;
\item \texttt{americancurrents}: uses american-style current sources;
\item \texttt{europeanresistors}: uses a rectangular empty shape for resistors, as per european standards;
\item \texttt{americanresistors}: uses a zig-zag shape for resistors, as per american standards;
\item \texttt{europeaninductors}: uses a rectangular filled shape for inductors, as per european standards;
\item \texttt{americaninductors}: uses a ``4-bumps'' shape for inductors, as per american standards;
\item \texttt{cuteinductors}: uses my personal favorite, a ``pig-tailed'' shape for inductors;
\item \texttt{americanports}: uses triangular logic ports, as per american standards;
\item \texttt{europeanports}: uses rectangular logic ports, as per european standards;
\item \texttt{european}: equivalent to \texttt{europeancurrents}, \texttt{europeanvoltages}, \texttt{europeanresistors}, \texttt{europeaninductors}, \texttt{europeanports};
\item \texttt{american}: equivalent to \texttt{americancurrents}, \texttt{americanvoltages}, \texttt{americanresistors}, \texttt{americaninductors}, \texttt{americanports};
\item \texttt{siunitx}: integrates with the \texttt{siunitx} package. If labels, currents or voltages are of the form \verb!#1<#2>!, then what is shown is actually \verb!\SI{#1}{#2}!;
\item \texttt{nosiunitx}: labels are not interpreted as above;
\item \texttt{fulldiodes}: the various diodes are drawn \emph{and} filled by default, i.e.\ when using styles such as \texttt{diode}, \texttt{D}, \texttt{sD}, \ldots\ Un-filled diodes can always be forced with \texttt{Do}, \texttt{sDo}, \ldots
\item \texttt{emptydiodes}: the various diodes are drawn \emph{but not} filled by default, i.e.
when using styles such as \texttt{diode}, \texttt{D}, \texttt{sD}, \ldots\ Filled diodes can always be forced with \texttt{D*}, \texttt{sD*}, \ldots
\item \texttt{arrowmos}: pmos and nmos have arrows analogous to those of pnp and npn transistors;
\item \texttt{noarrowmos}: pmos and nmos are drawn without such arrows;
\item \texttt{straighlabels}: labels on bipoles are always printed straight up, i.e.~with a horizontal baseline;
\item \texttt{rotatelabels}: labels on bipoles are always printed aligned along the bipole;
\item \texttt{smartlabels}: labels on bipoles are rotated along the bipoles, unless the rotation is very close to a multiple of 90°.
\end{itemize}
The old options in the singular (like \texttt{american voltage}) are still available for compatibility, but their use is discouraged. \medskip Loading the package with no options is equivalent to my own personal liking, that is, to the following options:\\ \texttt{[europeancurrents, europeanvoltages, americanresistors, cuteinductors, americanports, nosiunitx, noarrowmos, smartlabels]}. \medskip In \ConTeXt\ the options are specified similarly: \texttt{current=european|american}, \texttt{voltage=european|american}, \texttt{resistor=american|european}, \texttt{inductor=cute|american|european}, \texttt{logic=american|european}, \texttt{siunitx=true|false}, \texttt{arrowmos=false|true}.
\section{The components}
Here follows the list of all the shapes defined by \Circuitikz. These are all \texttt{pgf} nodes, so they are usable in both \texttt{pgf} and \TikZ. \medskip Each bipole (plus the triac and the thyristors) is shown using the following command, where \verb!#1!
is the name of the component\footnote{If \texttt{\#1} is the name of the bipole/the style, then the actual name of the shape is \texttt{\#1shape}.}:
\begin{verbatim}
\begin{center}\begin{circuitikz}
  \draw (0,0) to[ #1 ] (2,0) ;
\end{circuitikz} \end{center}
\end{verbatim}
The other shapes are shown with:
\begin{verbatim}
\begin{center}\begin{circuitikz}
  \draw (0,0) node[ #1 ] {} ;
\end{circuitikz} \end{center}
\end{verbatim}
Please notice that, for user convenience, transistors can also be entered using the syntax for bipoles. See section~\ref{sec:transasbip}.
\begin{framed}
If using the \verb!\tikzexternalize! feature, as of Ti\emph{k}Z 2.1 all pictures must end with \verb!\end{tikzpicture}!. Thus you \emph{cannot} use the \verb!circuitikz! environment. This is fine: just use \verb!tikzpicture!; everything will work there just as well.
\end{framed}
\subsection{Monopoles}
\begin{itemize}
\circuititem{ground}{Ground}{}
\circuititem{rground}{Reference ground}{}
\circuititem{sground}{Signal ground}{}
\circuititem{nground}{Noiseless ground}{}
\circuititem{pground}{Protective ground}{}
\circuititem{cground}{Chassis ground\footnote{These last three were contributed by Luigi «Liverpool».}}{}
\circuititem{antenna}{Antenna}{}
\circuititem{rxantenna}{Receiving antenna}{}
\circuititem{txantenna}{Transmitting antenna}{}
\circuititem{tlinestub}{Transmission line stub}{}
\end{itemize}
\subsection{Bipoles}
\subsubsection{Instruments}
\begin{itemize}
\circuititembip{ammeter}{Ammeter}{}
\circuititembip{voltmeter}{Voltmeter}{}
\end{itemize}
\subsubsection{Basic resistive bipoles}
\begin{itemize}
\circuititembip{short}{Short circuit}{}
\circuititembip{open}{Open circuit}{}
\circuititembip{lamp}{Lamp}{}
\circuititembip{generic}{Generic (symmetric) bipole}{}
\circuititembip{tgeneric}{Tunable generic bipole}{}
\circuititembip{ageneric}{Generic asymmetric bipole}{}
\circuititembip{fullgeneric}{Generic asymmetric bipole (full)}{}
\circuititembip{tfullgeneric}{Tunable generic bipole (full)}{}
\circuititembip{memristor}{Memristor}{Mr}
\end{itemize}
\subsubsection{Resistors and the like}
If (default behaviour) the \texttt{americanresistors} option is active (or the style \texttt{[american resistors]} is used), the resistor is displayed as follows:
\begin{itemize}
\ctikzset{resistor=american}
\circuititembip{R}{Resistor}{american resistor}
\circuititembip{vR}{Variable resistor}{american variable resistor}
\circuititembip{pR}{Potentiometer}{american potentiometer}
\end{itemize}
If instead the \texttt{europeanresistors} option is active (or the style \texttt{[european resistors]} is used), the resistors, variable resistors and potentiometers are displayed as follows:
\begin{itemize}
\ctikzset{resistor=european}
\circuititembip{R}{Resistor}{european resistor}
\circuititembip{vR}{Variable resistor}{european variable resistor}
\circuititembip{pR}{Potentiometer}{european potentiometer}
\ctikzset{resistor=american} % reset default
\end{itemize}
Other miscellaneous resistor-like devices:
\begin{itemize}
\circuititembip{varistor}{Varistor}{}
\circuititembip{phR}{Photoresistor}{photoresistor}
\circuititembip{thermocouple}{Thermocouple}{}
\circuititembip{thR}{Thermistor}{thermistor}
\circuititembip{thRp}{PTC thermistor}{thermistor ptc}
\circuititembip{thRn}{NTC thermistor}{thermistor ntc}
\circuititembip{fuse}{Fuse}{}
\circuititembip{afuse}{Asymmetric fuse}{asymmetric fuse}
\end{itemize}
\subsubsection{Stationary sources}
\begin{itemize}
\circuititembip{battery}{Battery}{}
\circuititembip{battery1}{Single battery cell}{}
\circuititembip{european voltage source}{Voltage source (european style)}{}
\circuititembip{american voltage source}{Voltage source (american style)}{}
\circuititembip{european current source}{Current source (european style)}{}
\circuititembip{american current source}{Current source (american style)}{}
\end{itemize}
\begin{framed}
If (default behaviour) the \texttt{europeancurrents} option is active (or the style \texttt{[european currents]} is used), the
shorthands \texttt{current source}, \texttt{isource}, and \texttt{I} are equivalent to \texttt{european current source}. Otherwise, if the \texttt{americancurrents} option is active (or the style \texttt{[american currents]} is used), they are equivalent to \texttt{american current source}. Similarly, if (default behaviour) the \texttt{europeanvoltages} option is active (or the style \texttt{[european voltages]} is used), the shorthands \texttt{voltage source}, \texttt{vsource}, and \texttt{V} are equivalent to \texttt{european voltage source}. Otherwise, if the \texttt{americanvoltages} option is active (or the style \texttt{[american voltages]} is used), they are equivalent to \texttt{american voltage source}.
\end{framed}
\subsubsection{Diodes and such}
\begin{itemize}
\circuititembip{empty diode}{Empty diode}{Do}
\circuititembip{empty Schottky diode}{Empty Schottky diode}{sDo}
\circuititembip{empty Zener diode}{Empty Zener diode}{zDo}
\circuititembip{empty tunnel diode}{Empty tunnel diode}{tDo}
\circuititembip{empty photodiode}{Empty photodiode}{pDo}
\circuititembip{empty led}{Empty LED}{leDo}
\circuititembip{empty varcap}{Empty varcap}{VCo}
\circuititembip{full diode}{Full diode}{D*}
\circuititembip{full Schottky diode}{Full Schottky diode}{sD*}
\circuititembip{full Zener diode}{Full Zener diode}{zD*}
\circuititembip{full tunnel diode}{Full tunnel diode}{tD*}
\circuititembip{full photodiode}{Full photodiode}{pD*}
\circuititembip{full led}{Full LED}{leD*}
\circuititembip{full varcap}{Full varcap}{VC*}
\end{itemize}
\begin{framed}
The options \texttt{fulldiodes} and \texttt{emptydiodes} (and the styles \texttt{[full diodes]} and \texttt{[empty diodes]}) define which shape will be used by abbreviated commands such as \texttt{D}, \texttt{sD}, \texttt{zD}, \texttt{tD}, \texttt{pD}, \texttt{leD}, and \texttt{VC}.
\end{framed} \begin{itemize} \circuititembip{squid}{Squid}{} \circuititembip{barrier}{Barrier}{} \end{itemize} \subsubsection{Basic dynamical bipoles} \begin{itemize} \circuititembip{capacitor}{Capacitor}{C} \circuititembip{polar capacitor}{Polar capacitor}{pC} \circuititembip{variable capacitor}{Variable capacitor}{vC} \end{itemize} If (default behaviour) \texttt{cuteinductors} option is active (or the style \texttt{[cute inductors]} is used), the inductors are displayed as follows: \begin{itemize} \ctikzset{inductor=cute} \circuititembip{L}{Inductor}{cute inductor} \circuititembip{vL}{Variable inductor}{variable cute inductor} \end{itemize} If \texttt{americaninductors} option is active (or the style \texttt{[american inductors]} is used), the inductors are displayed as follows: \begin{itemize} \ctikzset{inductor=american} \circuititembip{L}{Inductor}{american inductor} \circuititembip{vL}{Variable inductor}{variable american inductor} \end{itemize} Finally, if \texttt{europeaninductors} option is active (or the style \texttt{[european inductors]} is used), the inductors are displayed as follows: \begin{itemize} \ctikzset{inductor=european} \circuititembip{L}{Inductor}{european inductor} \circuititembip{vL}{Variable inductor}{variable european inductor} \end{itemize} There is also a transmission line: \begin{itemize} \circuititembip{TL}{Transmission line}{transmission line, tline} \end{itemize} \subsubsection{Sinusoidal sources} These are here because I was asked for them. But how do you distinguish one from the other?! \begin{itemize} \circuititembip{sinusoidal voltage source}{Sinusoidal voltage source}{vsourcesin, sV} \circuititembip{sinusoidal current source}{Sinusoidal current source}{isourcesin, sI} \end{itemize} \subsubsection{Square sources} \begin{itemize} \circuititembip{square voltage source}{Square voltage source}{vsourcesquare, sqV} \end{itemize} %\begin{framed} %The options \texttt{europeancurrent} [resp.
\texttt{europeanvoltage}] (the default) and \texttt{americancurrent} [resp. \texttt{americanvoltage}] define which sinusoidal current [resp. voltage] source is selected by default when the abbreviated styles \texttt{sinusoidal current source}, \texttt{csourcesin}, \texttt{cI} [resp. \texttt{sinusoidal voltage source}, \texttt{vsourcesin}, \texttt{cV}] are used. %One can also use the related styles \texttt{[european currents]} [resp. \texttt{[european voltages]}] and \texttt{[american currents]} [resp. \texttt{[american voltages]}]. %\end{framed} \subsubsection{Switch} \begin{itemize} \circuititembip{closing switch}{Closing switch}{cspst} \circuititembip{opening switch}{Opening switch}{ospst} \circuititembip{push button}{Push button}{} \end{itemize} \subsection{Tripoles} \subsubsection{Controlled sources} Admittedly, graphically they are bipoles. But I couldn't\ldots \begin{itemize} \circuititembip{european controlled voltage source}{Controlled voltage source (european style)}{} \circuititembip{american controlled voltage source}{Controlled voltage source (american style)}{} \circuititembip{european controlled current source}{Controlled current source (european style)}{} \circuititembip{american controlled current source}{Controlled current source (american style)}{} \end{itemize} \begin{framed} If (default behaviour) \texttt{europeancurrents} option is active (or the style \texttt{[european currents]} is used), the shorthands \texttt{controlled current source}, \texttt{cisource}, and \texttt{cI} are equivalent to \texttt{european controlled current source}. Otherwise, if \texttt{americancurrents} option is active (or the style \texttt{[american currents]} is used) they are equivalent to \texttt{american controlled current source}. 
Similarly, if (default behaviour) \texttt{europeanvoltages} option is active (or the style \texttt{[european voltages]} is used), the shorthands \texttt{controlled voltage source}, \texttt{cvsource}, and \texttt{cV} are equivalent to \texttt{european controlled voltage source}. Otherwise, if \texttt{americanvoltages} option is active (or the style \texttt{[american voltages]} is used) they are equivalent to \texttt{american controlled voltage source}. \end{framed} \begin{itemize} \circuititembip{controlled sinusoidal voltage source}{Controlled sinusoidal voltage source}{controlled vsourcesin, cvsourcesin, csV} \circuititembip{controlled sinusoidal current source}{Controlled sinusoidal current source}{controlled isourcesin, cisourcesin, csI} \end{itemize} \subsubsection{Transistors} \begin{itemize} \circuititem{nmos}{\scshape nmos}{} \circuititem{pmos}{\scshape pmos}{} \circuititem{npn}{\scshape npn}{} \circuititem{pnp}{\scshape pnp}{} \circuititem{nigbt}{\scshape nigbt}{} \circuititem{pigbt}{\scshape pigbt}{} \end{itemize} If the option \texttt{arrowmos} is used (or after the command \verb!\ctikzset{tripoles/mos style/arrows}!
is given), this is the output: \ctikzset{tripoles/mos style/arrows} \begin{itemize} \circuititem{nmos}{\scshape nmos}{} \circuititem{pmos}{\scshape pmos}{} \end{itemize} \ctikzset{tripoles/mos style/no arrows} \textsc{nfet}s and \textsc{pfet}s have been incorporated based on code provided by Clemens Helfmeier and Theodor Borsche: \begin{itemize} \circuititem{nfet}{\scshape nfet}{} \circuititem{nigfete}{\scshape nigfete}{} \circuititem{nigfetd}{\scshape nigfetd}{} \circuititem{pfet}{\scshape pfet}{} \circuititem{pigfete}{\scshape pigfete}{} \circuititem{pigfetd}{\scshape pigfetd}{} \end{itemize} \textsc{njfet} and \textsc{pjfet} have been incorporated based on code provided by Danilo Piazzalunga: \begin{itemize} \circuititem{njfet}{\scshape njfet}{} \circuititem{pjfet}{\scshape pjfet}{} \end{itemize} \textsc{isfet} \begin{itemize} \circuititem{isfet}{\scshape isfet}{} \end{itemize} \subsubsection{Switch} \begin{itemize} \circuititem{spdt}{\scshape spdt}{} \circuititembip{toggle switch}{Toggle switch}{} \end{itemize} \subsubsection{Other bipole-like tripoles}\label{sec:othertrip} The following tripoles are entered with the usual command of the form \begin{itemize} \circuititembip{triac}{triac}{Tr} \circuititembip{thyristor}{thyristor}{Ty} %\circuititembip{IGBT}{IGBT}{} \end{itemize} \subsubsection{Misc} \begin{itemize} \circuititem{mixer}{Mixer}{} \end{itemize} \subsection{Double bipoles} Transformers automatically use the inductor shape currently selected. 
These are the three possibilities: \begin{itemize} \ctikzset{inductor=cute} \circuititem{transformer}{Transformer (cute inductor)}{} \ctikzset{inductor=american} \circuititem{transformer}{Transformer (american inductor)}{} \ctikzset{inductor=european} \circuititem{transformer}{Transformer (european inductor)}{} \end{itemize} Transformers with core are also available: \begin{itemize} \ctikzset{inductor=cute} \circuititem{transformer core}{Transformer core (cute inductor)}{} \ctikzset{inductor=american} \circuititem{transformer core}{Transformer core (american inductor)}{} \ctikzset{inductor=european} \circuititem{transformer core}{Transformer core (european inductor)}{} \ctikzset{inductor=cute} % reset default
\end{itemize} \begin{itemize} \circuititem{gyrator}{Gyrator}{} \end{itemize} \subsection{Logic gates} \begin{itemize} \circuititem{american and port}{American \textsc{and} port}{} \circuititem{american or port}{American \textsc{or} port}{} \circuititem{american not port}{American \textsc{not} port}{} \circuititem{american nand port}{American \textsc{nand} port}{} \circuititem{american nor port}{American \textsc{nor} port}{} \circuititem{american xor port}{American \textsc{xor} port}{} \circuititem{american xnor port}{American \textsc{xnor} port}{} \end{itemize} \begin{itemize} \circuititem{european and port}{European \textsc{and} port}{} \circuititem{european or port}{European \textsc{or} port}{} \circuititem{european not port}{European \textsc{not} port}{} \circuititem{european nand port}{European \textsc{nand} port}{} \circuititem{european nor port}{European \textsc{nor} port}{} \circuititem{european xor port}{European \textsc{xor} port}{} \circuititem{european xnor port}{European \textsc{xnor} port}{} \end{itemize} \begin{framed} If (default behaviour) \texttt{americanports} option is active (or the style \texttt{[american ports]} is used), the shorthands \texttt{and port}, \texttt{or port}, \texttt{not port}, \texttt{nand port}, \texttt{nor port},
\texttt{xor port}, and \texttt{xnor port} are equivalent to the american version of the respective logic port. If otherwise \texttt{europeanports} option is active (or the style \texttt{[european ports]} is used), the shorthands \texttt{and port}, \texttt{or port}, \texttt{not port}, \texttt{nand port}, \texttt{nor port}, \texttt{xor port}, and \texttt{xnor port} are equivalent to the european version of the respective logic port. \end{framed} \subsection{Amplifiers} \begin{itemize} \circuititem{op amp}{Operational amplifier}{} \circuititem{fd op amp}{Fully differential operational amplifier\footnote{Contributed by Kristofer M. Monisit.}}{} \circuititem{plain amp}{Plain amplifier}{} \circuititem{buffer}{Buffer}{} \end{itemize} \subsection{Support shapes} \begin{itemize} \circuititem{currarrow}{Arrows (current and voltage)}{} \circuititem{circ}{Connected terminal}{} \circuititem{ocirc}{Unconnected terminal}{} \end{itemize} \section{Usage} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, l=$R_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R=$R_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, v=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R=$R_1$, i=$i_1$, v=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} Long names/styles for the bipoles can be used: \begin{LTXexample}[varwidth=true] \begin{circuitikz}\draw (0,0) to[resistor=1<\kilo\ohm>] (2,0) ;\end{circuitikz} \end{LTXexample} \subsection{Labels} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, l^=$R_1$] (2,0);
\end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, l_=$R_1$] (2,0); \end{circuitikz} \end{LTXexample} \noindent The default orientation of labels is controlled by the options \texttt{smartlabels}, \texttt{rotatelabels} and \texttt{straightlabels} (or the corresponding \texttt{label/align} keys). Here are examples to see the differences: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \ctikzset{label/align = straight} \def\DIR{0,45,90,135,180,-90,-45,-135} \foreach \i in \DIR { \draw (0,0) to[R=\i, *-o] (\i:2.5); } \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \ctikzset{label/align = rotate} \def\DIR{0,45,90,135,180,-90,-45,-135} \foreach \i in \DIR { \draw (0,0) to[R=\i, *-o] (\i:2.5); } \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \ctikzset{label/align = smart} \def\DIR{0,45,90,135,180,-90,-45,-135} \foreach \i in \DIR { \draw (0,0) to[R=\i, *-o] (\i:2.5); } \end{circuitikz} \end{LTXexample} \subsection{Currents} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i^>=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i_>=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i^<=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i_<=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i>^=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i>_=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i<^=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) 
to[R, i<_=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} Also \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i<=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i>=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i^=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i_=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} \subsection{Voltages} \subsubsection{European style} The default, with arrows. Use option \texttt{europeanvoltage} or style \verb![european voltages]!. \begin{LTXexample}[varwidth=true] \begin{circuitikz}[european voltages] \draw (0,0) to[R, v^>=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[european voltages] \draw (0,0) to[R, v^<=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[european voltages] \draw (0,0) to[R, v_>=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[european voltages] \draw (0,0) to[R, v_<=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} \subsubsection{American style} For those who like it (not me). Use option \texttt{americanvoltage} or set \verb![american voltages]!. 
\begin{LTXexample}[varwidth=true] \begin{circuitikz}[american voltages] \draw (0,0) to[R, v^>=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american voltages] \draw (0,0) to[R, v^<=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american voltages] \draw (0,0) to[R, v_>=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american voltages] \draw (0,0) to[R, v_<=$v_1$] (2,0); \end{circuitikz} \end{LTXexample} \subsection{Nodes} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, o-o] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, -o] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, o-] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, *-*] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, -*] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, *-] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, o-*] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, *-o] (2,0); \end{circuitikz} \end{LTXexample} \subsection{Special components} For some components label, current and voltage behave as one would expect: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[I=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[I, i=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[cI=$k\cdot a_1$] (2,0); \end{circuitikz} \end{LTXexample} 
\begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[sI=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[csI=$k\cdot a_1$] (2,0); \end{circuitikz} \end{LTXexample} The following results from using the option \texttt{americancurrent} or using the style \verb![american currents]!. \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american currents] \draw (0,0) to[I=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american currents] \draw (0,0) to[I, i=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american currents] \draw (0,0) to[cI=$k\cdot a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american currents] \draw (0,0) to[sI=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american currents] \draw (0,0) to[csI=$k\cdot a_1$] (2,0); \end{circuitikz} \end{LTXexample} The same holds for voltage sources: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[V=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[V, v=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[cV=$k\cdot a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[sV=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[csV=$k\cdot a_1$] (2,0); \end{circuitikz} \end{LTXexample} The following results from using the option \texttt{americanvoltage} or the style \verb![american voltages]!. 
\begin{LTXexample}[varwidth=true] \begin{circuitikz}[american voltages] \draw (0,0) to[V=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american voltages] \draw (0,0) to[V, v=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american voltages] \draw (0,0) to[cV=$k\cdot a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american voltages] \draw (0,0) to[sV=$a_1$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz}[american voltages] \draw (0,0) to[csV=$k\cdot a_1$] (2,0); \end{circuitikz} \end{LTXexample} \subsection{Integration with {\ttfamily siunitx}} If the option {\ttfamily siunitx} is active (and \emph{not} in \ConTeXt), then the following are equivalent: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, l=1<\kilo\ohm>] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, l=$\SI{1}{\kilo\ohm}$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i=1<\milli\ampere>] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, i=$\SI{1}{\milli\ampere}$] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, v=1<\volt>] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R, v=$\SI{1}{\volt}$] (2,0); \end{circuitikz} \end{LTXexample} \subsection{Mirroring} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[pD] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[pD, mirror] (2,0); \end{circuitikz} \end{LTXexample} At the moment, placing labels and currents on mirrored bipoles works: 
\begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[ospst=T] (2,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[ospst=T, mirror, i=$i_1$] (2,0); \end{circuitikz} \end{LTXexample} But voltages don't: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[ospst=T, mirror, v=v] (2,0); \end{circuitikz} \end{LTXexample} Sorry about that. \subsection{Putting them together} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[R=1<\kilo\ohm>, i>_=1<\milli\ampere>, o-*] (3,0); \end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[D*, v=$v_D$, i=1<\milli\ampere>, o-*] (3,0); \end{circuitikz} \end{LTXexample} \section{Not only bipoles} Since only bipoles (but see section~\ref{sec:transasbip}) can be placed ``along a line'', components with more than two terminals are placed as nodes: \begin{LTXexample}[varwidth=true] \tikz \node[npn] at (0,0) {}; \end{LTXexample} \subsection{Anchors} In order to allow connections with other components, all components define anchors. \subsubsection{Logical ports} All logical ports, except \textsc{not}, have two inputs and one output.
They are called respectively \texttt{in 1}, \texttt{in 2}, \texttt{out}: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[and port] (myand) {} (myand.in 1) node[anchor=east] {1} (myand.in 2) node[anchor=east] {2} (myand.out) node[anchor=west] {3} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,2) node[and port] (myand1) {} (0,0) node[and port] (myand2) {} (2,1) node[xnor port] (myxnor) {} (myand1.out) -| (myxnor.in 1) (myand2.out) -| (myxnor.in 2) ;\end{circuitikz} \end{LTXexample} In the case of \textsc{not}, there are only \texttt{in} and \texttt{out} (although for compatibility reasons \texttt{in 1} is still defined and equal to \texttt{in}): \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (1,0) node[not port] (not1) {} (3,0) node[not port] (not2) {} (0,0) -- (not1.in) (not2.in) -- (not1.out) ++(0,-1) node[ground] {} to[C] (not1.out) (not2.out) -| (4,1) -| (0,0) ;\end{circuitikz} \end{LTXexample} \subsubsection{Transistors} For \textsc{nmos}, \textsc{pmos}, \textsc{nfet}, \textsc{nigfete}, \textsc{nigfetd}, \textsc{pfet}, \textsc{pigfete}, and \textsc{pigfetd} transistors one has \texttt{base}, \texttt{gate}, \texttt{source} and \texttt{drain} anchors (which can be abbreviated with \texttt{B}, \texttt{G}, \texttt{S} and \texttt{D}): \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[nmos] (mos) {} (mos.base) node[anchor=west] {B} (mos.gate) node[anchor=east] {G} (mos.drain) node[anchor=south] {D} (mos.source) node[anchor=north] {S} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[pigfete] (pigfete) {} (pigfete.B) node[anchor=west] {B} (pigfete.G) node[anchor=east] {G} (pigfete.D) node[anchor=south] {D} (pigfete.S) node[anchor=north] {S} ;\end{circuitikz} \end{LTXexample} Similarly \textsc{njfet} and \textsc{pjfet} have \texttt{gate}, \texttt{source} and \texttt{drain} anchors (which can be abbreviated with 
\texttt{G}, \texttt{S} and \texttt{D}): \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[pjfet] (pjfet) {} (pjfet.G) node[anchor=east] {G} (pjfet.D) node[anchor=north] {D} (pjfet.S) node[anchor=south] {S} ;\end{circuitikz} \end{LTXexample} For \textsc{npn}, \textsc{pnp}, \textsc{nigbt}, and \textsc{pigbt} transistors the anchors are \texttt{base}, \texttt{emitter} and \texttt{collector} anchors (which can be abbreviated with \texttt{B}, \texttt{E} and \texttt{C}): \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[npn] (npn) {} (npn.base) node[anchor=east] {B} (npn.collector) node[anchor=south] {C} (npn.emitter) node[anchor=north] {E} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[pigbt] (pigbt) {} (pigbt.B) node[anchor=east] {B} (pigbt.C) node[anchor=north] {C} (pigbt.E) node[anchor=south] {E} ;\end{circuitikz} \end{LTXexample} Here is one composite example (please notice that the \texttt{xscale=-1} style would also reflect the label of the transistors, so here a new node is added and its text is used, instead of that of \texttt{pnp1}): \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[pnp] (pnp2) {2} (pnp2.B) node[pnp, xscale=-1, anchor=B] (pnp1) {} (pnp1) node {1} (pnp1.C) node[npn, anchor=C] (npn1) {} (pnp2.C) node[npn, xscale=-1, anchor=C] (npn2) {} (pnp1.E) -- (pnp2.E) (npn1.E) -- (npn2.E) (pnp1.B) node[circ] {} |- (pnp2.C) node[circ] {} ;\end{circuitikz} \end{LTXexample} Similarly, transistors and other components can be reflected vertically: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[pigfete, yscale=-1] (pigfete) {} (pigfete.B) node[anchor=west] {B} (pigfete.G) node[anchor=east] {G} (pigfete.D) node[anchor=north] {D} (pigfete.S) node[anchor=south] {S} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample} \begin{circuitikz} \draw (0,2) node[rground, yscale=-1] {} to[R=$R_1$] (0,0) node[sground] {}; \end{circuitikz} 
\end{LTXexample} \subsubsection{Other tripoles} When inserting a thyristor, a triac or a potentiometer, one needs to refer to the third node — gate (\texttt{gate} or \texttt{G}) for the former two; wiper (\texttt{wiper} or \texttt{W}) for the latter. This is done by giving a name to the bipole: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[Tr, n=TRI] (2,0) to[pR, n=POT] (4,0); \draw[dashed] (TRI.G) -| (POT.wiper) ;\end{circuitikz} \end{LTXexample} As for the switches: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[spdt] (Sw) {} (Sw.in) node[left] {in} (Sw.out 1) node[right] {out 1} (Sw.out 2) node[right] {out 2} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) to[C] (1,0) to[toggle switch , n=Sw] (2.5,0) -- (2.5,-1) to[battery1] (1.5,-1) to[R] (0,-1) -| (0,0) (Sw.out 2) -| (2.5, 1) to[R] (0,1) -- (0,0) ;\end{circuitikz} \end{LTXexample} And the mixer: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[mixer] (mix) {} (mix.in 1) node[left] {in 1} (mix.in 2) node[below] {in 2} (mix.out) node[right] {out} ;\end{circuitikz} \end{LTXexample} \subsubsection{Operational amplifier} The op amp defines the inverting input (\texttt{-}), the non-inverting input (\texttt{+}) and the output (\texttt{out}) anchors: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[op amp] (opamp) {} (opamp.+) node[left] {$v_+$} (opamp.-) node[left] {$v_-$} (opamp.out) node[right] {$v_o$} ;\end{circuitikz} \end{LTXexample} There are also two more anchors defined, \texttt{up} and \texttt{down}, for the power supplies: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[op amp] (opamp) {} (opamp.+) node[left] {$v_+$} (opamp.-) node[left] {$v_-$} (opamp.out) node[right] {$v_o$} (opamp.down) node[ground] {} (opamp.up) ++ (0,.5) node[above] {\SI{12}{\volt}} -- (opamp.up) ;\end{circuitikz} \end{LTXexample} The fully differential op amp defines two
outputs: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[fd op amp] (opamp) {} (opamp.+) node[left] {$v_+$} (opamp.-) node[left] {$v_-$} (opamp.out +) node[right] {out +} (opamp.out -) node[right] {out -} (opamp.down) node[ground] {} ;\end{circuitikz} \end{LTXexample} \subsubsection{Double bipoles} All the (few, actually) double bipoles/quadrupoles have four anchors, two for each port. The first port, to the left, is port \texttt{A}, having the anchors \texttt{A1} (up) and \texttt{A2} (down); same for port \texttt{B}. They also expose the \texttt{base} anchor, for labelling: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[transformer] (T) {} (T.A1) node[anchor=east] {A1} (T.A2) node[anchor=east] {A2} (T.B1) node[anchor=west] {B1} (T.B2) node[anchor=west] {B2} (T.base) node{K} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[gyrator] (G) {} (G.A1) node[anchor=east] {A1} (G.A2) node[anchor=east] {A2} (G.B1) node[anchor=west] {B1} (G.B2) node[anchor=west] {B2} (G.base) node{K} ;\end{circuitikz} \end{LTXexample} \subsection{Transistor paths}\label{sec:transasbip} For syntactical convenience, transistors can be placed using the normal path notation used for bipoles. The transistor type can be specified by simply adding a ``T'' (for transistor) in front of the node name of the transistor.
It will be placed with the base/gate orthogonal to the direction of the path: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw (0,0) node[njfet] {1} (-1,2) to[Tnjfet=2] (1,2) to[Tnjfet=3, mirror] (3,2); ;\end{circuitikz} \end{LTXexample} Access to the gate and/or base nodes can be gained by naming the transistors with the \texttt{n} or \texttt{name} path style: \begin{LTXexample}[varwidth=true] \begin{circuitikz} \draw[yscale=1.1, xscale=.8] (2,4.5) -- (0,4.5) to[Tpmos, n=p1] (0,3) to[Tnmos, n=n1] (0,1.5) to[Tnmos, n=n2] (0,0) node[ground] {} (2,4.5) to[Tpmos,n=p2] (2,3) to[short, -*] (0,3) (p1.G) -- (n1.G) to[short, *-o] ($(n1.G)+(3,0)$) (n2.G) ++(2,0) node[circ] {} -| (p2.G) (n2.G) to[short, -o] ($(n2.G)+(3,0)$) (0,3) to[short, -o] (-1,3) ;\end{circuitikz} \end{LTXexample} The \texttt{name} property is available also for bipoles, although this is useful mostly for triac, potentiometer and thyristor (see~\ref{sec:othertrip}). \section{Customization} \subsection{Parameters} Pretty much all Circui\TikZ\ relies heavily on \texttt{pgfkeys} for value handling and configuration. Indeed, at the beginning of \texttt{circuitikz.sty} a series of key definitions can be found that modify all the graphical characteristics of the package. All can be varied using the \verb!\ctikzset! command, anywhere in the code. 
\paragraph{Shape of the components} (on a per-component-class basis) \begin{LTXexample}[varwidth=true] \tikz \draw (0,0) to[R=1<\ohm>] (2,0); \par \ctikzset{bipoles/resistor/height=.6} \tikz \draw (0,0) to[R=1<\ohm>] (2,0); \end{LTXexample} \begin{LTXexample}[varwidth=true] \tikz \draw (0,0) node[nand port] {}; \par \ctikzset{tripoles/american nand port/input height=.2} \ctikzset{tripoles/american nand port/port width=.2} \tikz \draw (0,0) node[nand port] {}; \end{LTXexample} \paragraph{Thickness of the lines} (globally) \begin{LTXexample}[varwidth=true] \tikz \draw (0,0) to[C=1<\farad>] (2,0); \par \ctikzset{bipoles/thickness=1} \tikz \draw (0,0) to[C=1<\farad>] (2,0); \end{LTXexample} \paragraph{Global properties} Of voltage and current \begin{LTXexample}[varwidth=true] \tikz \draw (0,0) to[R, v=1<\volt>] (2,0); \par \ctikzset{voltage/distance from node=.1} \tikz \draw (0,0) to[R, v=1<\volt>] (2,0); \end{LTXexample} \begin{LTXexample}[varwidth=true] \tikz \draw (0,0) to[C, i=$\imath$] (2,0); \par \ctikzset{current/distance = .2} \tikz \draw (0,0) to[C, i=$\imath$] (2,0); \end{LTXexample} \noindent However, you can override the properties \verb!voltage/distance from node!\footnote{That is, how distant from the initial and final points of the path the arrow starts and ends.}, \verb!voltage/bump b!\footnote{Controlling how high the bump of the arrow is --- how curved it is.} and \verb!voltage/european label distance!\footnote{Controlling how distant from the bipole the voltage label will be.} on a per-component basis, in order to fine-tune the voltages: \begin{LTXexample}[varwidth=true] \tikz \draw (0,0) to[R, v=1<\volt>] (1.5,0) to[C, v=2<\volt>] (3,0); \par \ctikzset{bipoles/capacitor/voltage/%
distance from node/.initial=.7} \tikz \draw (0,0) to[R, v=1<\volt>] (1.5,0) to[C, v=2<\volt>] (3,0); \par \end{LTXexample} \noindent Admittedly, not all graphical properties have understandable names, but for the time being it will have to do: \begin{LTXexample}[varwidth=true]
\tikz \draw (0,0) node[xnor port] {}; \ctikzset{tripoles/american xnor port/aaa=.2} \ctikzset{tripoles/american xnor port/bbb=.6} \tikz \draw (0,0) node[xnor port] {}; \end{LTXexample} \subsection{Components size} Perhaps the most important parameter is \verb!\circuitikzbasekey/bipoles/length!, which can be interpreted as the length of a resistor (including reasonable connections): all other lengths are relative to this value. For instance: \begin{LTXexample}[pos=t,varwidth=true] \ctikzset{bipoles/length=1.4cm} \begin{circuitikz}[scale=1.2]\draw (0,0) node[anchor=east] {B} to[short, o-*] (1,0) to[R=20<\ohm>, *-*] (1,2) to[R=10<\ohm>, v=$v_x$] (3,2) -- (4,2) to[cI=$\frac{\si{\siemens}}{5} v_x$, *-*] (4,0) -- (3,0) to[R=5<\ohm>, *-*] (3,2) (3,0) -- (1,0) (1,2) to[short, -o] (0,2) node[anchor=east]{A} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[pos=t,varwidth=true] \ctikzset{bipoles/length=.8cm} \begin{circuitikz}[scale=1.2]\draw (0,0) node[anchor=east] {B} to[short, o-*] (1,0) to[R=20<\ohm>, *-*] (1,2) to[R=10<\ohm>, v=$v_x$] (3,2) -- (4,2) to[cI=$\frac{\si{\siemens}}{5} v_x$, *-*] (4,0) -- (3,0) to[R=5<\ohm>, *-*] (3,2) (3,0) -- (1,0) (1,2) to[short, -o] (0,2) node[anchor=east]{A} ;\end{circuitikz} \end{LTXexample} \subsection{Colors} The color of the components is stored in the key \verb!\circuitikzbasekey/color!. Circui\TikZ\ tries to follow the color set in \TikZ, although sometimes it fails. If you change color in the picture, please do not use just the color name as a style, like \verb![red]!, but rather assign the style \verb![color=red]!.
Compare for instance

\begin{LTXexample}[varwidth=true]
\begin{circuitikz}
\draw[red]
(0,2) node[and port] (myand1) {}
(0,0) node[and port] (myand2) {}
(2,1) node[xnor port] (myxnor) {}
(myand1.out) -| (myxnor.in 1)
(myand2.out) -| (myxnor.in 2)
;\end{circuitikz}
\end{LTXexample}

and

\begin{LTXexample}[varwidth=true]
\begin{circuitikz}
\draw[color=red]
(0,2) node[and port] (myand1) {}
(0,0) node[and port] (myand2) {}
(2,1) node[xnor port] (myxnor) {}
(myand1.out) -| (myxnor.in 1)
(myand2.out) -| (myxnor.in 2)
;\end{circuitikz}
\end{LTXexample}

One can of course change the color \emph{in medias res}:

\begin{LTXexample}[pos=t, varwidth=true]
\begin{circuitikz}
\draw
(0,0) node[pnp, color=blue] (pnp2) {}
(pnp2.B) node[pnp, xscale=-1, anchor=B, color=brown] (pnp1) {}
(pnp1.C) node[npn, anchor=C, color=green] (npn1) {}
(pnp2.C) node[npn, xscale=-1, anchor=C, color=magenta] (npn2) {}
(pnp1.E) -- (pnp2.E) (npn1.E) -- (npn2.E)
(pnp1.B) node[circ] {} |- (pnp2.C) node[circ] {}
;\end{circuitikz}
\end{LTXexample}

The all-in-one stream of bipoles poses some challenges, as only the actual body of the bipole, and not the connecting lines, will be rendered in the specified color.
Also, please notice the curly braces around the \texttt{to}:

\begin{LTXexample}[varwidth=true]
\begin{circuitikz} \draw
(0,0) to[V=1<\volt>] (0,2)
{ to[R=1<\ohm>, color=red] (2,2) }
to[C=1<\farad>] (2,0) -- (0,0)
;\end{circuitikz}
\end{LTXexample}

Which, for some bipoles, can be frustrating:

\begin{LTXexample}[varwidth=true]
\begin{circuitikz} \draw
(0,0){to[V=1<\volt>, color=red] (0,2) }
to[R=1<\ohm>] (2,2)
to[C=1<\farad>] (2,0) -- (0,0)
;\end{circuitikz}
\end{LTXexample}

The only way out is to specify different paths:

\begin{LTXexample}[varwidth=true]
\begin{circuitikz}
\draw[color=red] (0,0) to[V=1<\volt>, color=red] (0,2);
\draw (0,2) to[R=1<\ohm>] (2,2)
to[C=1<\farad>] (2,0) -- (0,0)
;\end{circuitikz}
\end{LTXexample}

And yes: this is a bug and \emph{not} a feature\ldots

\section{FAQ}

\noindent Q: When using \verb!\tikzexternalize! I get the following error:
\begin{verbatim}
! Emergency stop.
\end{verbatim}

\noindent A: The \TikZ\ manual states:
\begin{quotation}
Furthermore, the library assumes that all \LaTeX\ pictures are ended with \verb!\end{tikzpicture}!.
\end{quotation}
Just substitute every occurrence of the environment \verb!circuitikz! with \verb!tikzpicture!. They are actually pretty much the same.

\bigskip

\noindent Q: How do I draw the voltage between two nodes?

\noindent A: Between any two nodes there is an open circuit!
\begin{LTXexample}[varwidth=true]
\begin{circuitikz}
\draw node[ocirc] (A) at (0,0) {}
node[ocirc] (B) at (2,1) {}
(A) to[open, v=$v$] (B)
;\end{circuitikz}
\end{LTXexample}

\bigskip

\noindent Q: I cannot write \verb!to[R = $R_1=12V$]! nor \verb!to[ospst = open, 3s]!: I get errors.

\noindent A: It is a limitation of the \TikZ\ parser. Use \verb!to[R = $R_1{=}12V$]! and \verb!to[ospst = open{,} 3s]! instead.
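The workaround from the last answer can be seen at work in a minimal sketch (the component label is arbitrary and chosen only for illustration):

\begin{LTXexample}[varwidth=true]
\tikz \draw (0,0)
  to[R=$R_1{=}12V$] (2,0);
\end{LTXexample}

Without the braces around the equals sign, the \TikZ\ key parser would split the option at \verb!=! and report an error.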
\section{Examples} \begin{LTXexample}[pos=t,varwidth=true] \begin{circuitikz}[scale=1.4]\draw (0,0) to[C, l=10<\micro\farad>] (0,2) -- (0,3) to[R, l=2.2<\kilo\ohm>] (4,3) -- (4,2) to[L, l=12<\milli\henry>, i=$i_1$] (4,0) -- (0,0) (4,2) { to[D*, *-*, color=red] (2,0) } (0,2) to[R, l=1<\kilo\ohm>, *-] (2,2) to[cV, v=$\SI{.3}{\kilo\ohm} i_1$] (4,2) (2,0) to[I, i=1<\milli\ampere>, -*] (2,2) ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[pos=t,varwidth=true] \begin{circuitikz}[scale=1.2]\draw (0,0) node[ground] {} to[V=$e(t)$, *-*] (0,2) to[C=4<\nano\farad>] (2,2) to[R, l_=.25<\kilo\ohm>, *-*] (2,0) (2,2) to[R=1<\kilo\ohm>] (4,2) to[C, l_=2<\nano\farad>, *-*] (4,0) (5,0) to[I, i_=$a(t)$, -*] (5,2) -- (4,2) (0,0) -- (5,0) (0,2) -- (0,3) to[L, l=2<\milli\henry>] (5,3) -- (5,2) {[anchor=south east] (0,2) node {1} (2,2) node {2} (4,2) node {3}} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[pos=t,varwidth=true] \begin{circuitikz}[scale=1.2]\draw (0,0) node[anchor=east] {B} to[short, o-*] (1,0) to[R=20<\ohm>, *-*] (1,2) to[R=10<\ohm>, v=$v_x$] (3,2) -- (4,2) to[cI=$\frac{\siemens}{5} v_x$, *-*] (4,0) -- (3,0) to[R=5<\ohm>, *-*] (3,2) (3,0) -- (1,0) (1,2) to[short, -o] (0,2) node[anchor=east]{A} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[pos=t,varwidth=true] \begin{circuitikz}[scale=1]\draw (0,0) node[transformer] (T) {} (T.B2) to[pD] ($(T.B2)+(2,0)$) -| (3.5, -1) (T.B1) to[pD] ($(T.B1)+(2,0)$) -| (3.5, -1) ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[pos=t,varwidth=true] \begin{circuitikz}[scale=1]\draw (5,.5) node [op amp] (opamp) {} (0,0) node [left] {$U_{we}$} to [R, l=$R_d$, o-*] (2,0) to [R, l=$R_d$, *-*] (opamp.+) to [C, l_=$C_{d2}$, *-] ($(opamp.+)+(0,-2)$) node [ground] {} (opamp.out) |- (3.5,2) to [C, l_=$C_{d1}$, *-] (2,2) to [short] (2,0) (opamp.-) -| (3.5,2) (opamp.out) to [short, *-o] (7,.5) node [right] {$U_{wy}$} ;\end{circuitikz} \end{LTXexample} \begin{LTXexample}[pos=t,varwidth=true] \begin{circuitikz}[scale=1.2, 
american]\draw
(0,2) to[I=1<\milli\ampere>] (2,2)
to[R, l_=2<\kilo\ohm>, *-*] (0,0)
to[R, l_=2<\kilo\ohm>] (2,0)
to[V, v_=2<\volt>] (2,2)
to[cspst, l=$t_0$] (4,2) -- (4,1.5)
to [generic, i=$i_1$, v=$v_1$] (4,-.5) -- (4,-1.5)
(0,2) -- (0,-1.5) to[V, v_=4<\volt>] (2,-1.5)
to [R, l=1<\kilo\ohm>] (4,-1.5);
\begin{scope}[xshift=6.5cm, yshift=.5cm]
\draw [->] (-2,0) -- (2.5,0) node[anchor=west] {$v_1/\volt$};
\draw [->] (0,-2) -- (0,2) node[anchor=west] {$i_1/\SI{}{\milli\ampere}$} ;
\draw (-1,0) node[anchor=north] {-2}
(1,0) node[anchor=south] {2}
(0,1) node[anchor=west] {4}
(0,-1) node[anchor=east] {-4}
(2,0) node[anchor=north west] {4}
(-1.5,0) node[anchor=south east] {-3};
\draw [thick] (-2,-1) -- (-1,1) -- (1,-1) -- (2,0) -- (2.5,.5);
\draw [dotted] (-1,1) -- (-1,0)
(1,-1) -- (1,0)
(-1,1) -- (0,1)
(1,-1) -- (0,-1);
\end{scope}
\end{circuitikz}
\end{LTXexample}

\section{Revision history}

\begin{itemize}
\item[\itshape version 0.3.0] (20121229)
\begin{enumerate}
\item fixed gate node for a few transistors
\item added mixer
\item added fully differential op amp (by Kristofer M.~Monisit)
\item now general settings for the drawing of voltage can be overridden for specific components
\item made arrows more homogeneous (either the current/voltage one, or latex' by pgf)
\item added the single battery cell
\item added fuse and asymmetric fuse
\item added toggle switch
\item added varistor, photoresistor, thermocouple, push button
\item added thermistor, thermistor ptc, thermistor ntc
\item fixed misalignment of voltage label in vertical bipoles with names
\item added isfet
\item added noiseless, protective, chassis, signal and reference grounds (Luigi «Liverpool»)
\end{enumerate}
\item[\itshape version 0.2.4] (20110911).
\begin{enumerate} \item added square voltage source (contributed by Alistair Kwan) \item added buffer and plain amplifier (contributed by Danilo Piazzalunga) \item added squid and barrier (contributed by Cor Molenaar) \item added antenna and transmission line symbols contributed by Leonardo Azzinnari \item added the changeover switch spdt (suggestion of Fabio Maria Antoniali) \item rename of context.tex and context.pdf (thanks to Karl Berry) \item updated the email address \item in documentation, fixed wrong (non-standard) labelling of the axis in an example (thanks to prof. Claudio Beccaria) \item fixed scaling inconsistencies in quadrupoles \item fixed division by zero error on certain vertical paths \item introduced options straighlabels, rotatelabels, smartlabels \end{enumerate} \item[\itshape version 0.2.3] (20091118). \begin{enumerate} \item fixed compatibility problem with label option from tikz \item Fixed resizing problem for shape ground \item Variable capacitor \item polarized capacitor \item ConTeXt support (read the manual!) \item nfet, nigfete, nigfetd, pfet, pigfete, pigfetd (contribution of Clemens Helfmeier and Theodor Borsche) \item njfet, pjfet (contribution of Danilo Piazzalunga) \item pigbt, nigbt \item \emph{backward incompatibility} potentiometer is now the standard resistor-with-arrow-in-the-middle; the old potentiometer is now known as variable resistor (or vR), similarly to variable inductor and variable capacitor \item triac, thyristor, memristor \item new property "name" for bipoles \item fixed voltage problem for batteries in american voltage mode \item european logic gates \item \emph{backward incompatibility} new american standard inductor. Old american inductor now called "cute inductor" \item \emph{backward incompatibility} transformer now linked with the chosen type of inductor, and version with core, too. 
Similarly for variable inductor
\item \emph{backward incompatibility} styles for selecting shape variants are now in the plural to avoid conflict with paths
\item new placing option for some tripoles (mostly transistors)
\item mirror path style
\end{enumerate}
\item[\itshape version 0.2.2] (20090520).
\begin{enumerate}
\item Added the shape for lamps.
\item Added options \texttt{europeanresistor}, \texttt{europeaninductor}, \texttt{americanresistor} and \texttt{americaninductor}, with corresponding styles.
\item \textbf{Fixed}: error in transistor arrow positioning and direction under negative \texttt{xscale} and \texttt{yscale}.
\end{enumerate}
\item[\itshape version 0.2.1] (20090503).
\begin{enumerate}
\item Op-amps added.
\item Added options \texttt{arrowmos} and \texttt{noarrowmos}.
\end{enumerate}
\item[\itshape version 0.2] First public release on CTAN (20090417).
\begin{enumerate}
\item \textbf{Backward incompatibility}: labels ending with \texttt{:}\textit{angle} are not parsed for positioning anymore.
\item Full use of \TikZ\ keyval features.
\item White background is not filled anymore: now the network can be drawn on a background picture as well.
\item Several new components added (logical ports, transistors, double bipoles, \ldots).
\item Color support.
\item Integration with {\ttfamily siunitx}.
\item Voltage, american style.
\item Better code, perhaps. General cleanup at the very least.
\end{enumerate}
\item[\itshape version 0.1] First public release (2007).
\end{itemize}

\printindex
\end{document}
\chapter{Einstein's Field Equations}

We will derive Einstein's equations by physical considerations. Remember that the Poisson equation reads
\begin{equation}
\Delta\Phi=4\pi\varrho\, .
\end{equation}
So matter (energy) is the source of the gravitational field $\Phi$. From SR we know that the energy momentum tensor $\tensor{T}{_\mu_\nu}$ is an adequate generalisation of energy. We therefore put $\tensor{T}{_\mu_\nu}$ on the right hand side of a yet to be found equation, and ask for the left hand side. We would like a tensor $\tensor{S}{_\mu_\nu}$, related to the geometry, so that we can express
\begin{equation}
\tensor{S}{_\mu_\nu}=\tensor{T}{_\mu_\nu} \,.
\end{equation}
In SR, the energy momentum tensor $\tensor{T}{_\mu_\nu}$ is conserved, i.e. $\tensor{T}{_\mu_\nu^{,\nu}}=0$. As a natural extension, we demand that the energy momentum tensor of general relativity is \emph{covariantly} conserved:
\begin{equation}
\tensor{\nabla}{^\nu}\tensor{T}{_\mu_\nu}=\tensor{T}{_\mu_\nu^{;\nu}}=0\,.
\end{equation}
\begin{theorem}[Lovelock]
For a four dimensional space \footnotemark{} the most general divergence free tensor $\tensor{A}{_\mu_\nu}$ is given by
\begin{equation}
\tensor{A}{_\mu_\nu}= c_1\tensor{G}{_\mu_\nu}+c_2\tensor{g}{_\mu_\nu}\, ,
\end{equation}
where $\tensor{G}{_\mu_\nu}$ is the \emph{Einstein tensor} $\tensor{G}{_\mu_\nu}:=\tensor{R}{_\mu_\nu}-\frac{1}{2}\tensor{g}{_\mu_\nu}R$.
\end{theorem}
\footnotetext{This certainly does not hold for $d>4$.}
The theorem immediately implies \emph{Einstein's field equations}
\begin{equation}
\tensor{R}{_\mu_\nu}-\frac{1}{2}R\tensor{g}{_\mu_\nu}-\Lambda\tensor{g}{_\mu_\nu}
=\kappa\tensor{T}{_\mu_\nu}\, ,\label{eq:EinstFG}
\end{equation}
with some constants $\kappa$, $\Lambda$. Of course we identify Einstein's constant $\kappa=\frac{8\pi G\textsubscript{N}}{c^2}$ and the cosmological constant $\Lambda$.
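A quick consistency check, added here as a sketch, is obtained by contracting \eqref{eq:EinstFG} with $\tensor{g}{^\mu^\nu}$:
\begin{equation}
\tensor{g}{^\mu^\nu}\left(\tensor{R}{_\mu_\nu}-\frac{1}{2}R\tensor{g}{_\mu_\nu}-\Lambda\tensor{g}{_\mu_\nu}\right)=R-2R-4\Lambda=\kappa T\, ,
\end{equation}
with the trace $T:=\tensor{g}{^\mu^\nu}\tensor{T}{_\mu_\nu}$, so that $R=-\left(\kappa T+4\Lambda\right)$. In particular, in vacuum ($\tensor{T}{_\mu_\nu}=0$) and for $\Lambda=0$ the field equations reduce to $\tensor{R}{_\mu_\nu}=0$.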
As a slight variation, we can rewrite equation \eqref{eq:EinstFG} as
\begin{equation}
\tensor{R}{_\mu_\nu}-\frac{1}{2}R\tensor{g}{_\mu_\nu}
=\kappa\left(\tensor{T}{_\mu_\nu}-\frac{\Lambda}{\kappa}\tensor{g}{_\mu_\nu}\right)
\end{equation}
so that the left hand side represents the geometrical part and the right hand side the matter content, and we identify $\Lambda$ with a vacuum energy. Wheeler condenses this in the statement:
\begin{quote}
Geometry tells matter how to move, matter tells geometry how to curve.
\end{quote}
\begin{sidenote}
At the time of inflation the cosmological constant must have been large. Since it is small today, it has to decay with time.
\end{sidenote}
There is also a variational derivation, dating back to Hilbert, that is simpler than Einstein's initial derivation. We start by considering a general action
\begin{equation}
S\textsubscript{g}=\fourint \tilde{\mathcal{L}}\,
\end{equation}
where $\tilde{\mathcal{L}}$ must transform as a (scalar) density. Therefore we define a scalar $\mathcal{L}=\frac{\tilde{\mathcal{L}}}{\sqrt{-g}}$, so that
\begin{equation}
S\textsubscript{g}=\fourint \sqrt{-g}\mathcal{L}\,.
\end{equation}
One can think of various contributions to $\mathcal{L}$, e.g.
\begin{equation*}
R,\, \square R,\,\tensor{\nabla}{^\mu}\tensor{\nabla}{^\nu}\tensor{R}{_\mu_\nu},\,
\tensor{R}{_\mu_\nu}\tensor{R}{^\mu^\nu},\,
\tensor{R}{_\mu_\nu_\sigma_\varrho}\tensor{R}{^\mu^\nu^\sigma^\varrho}\dots\,,
\end{equation*}
which are contractions, so that the resulting quantity becomes a scalar. We have no contributions of the metric alone, because $\tensor{g}{_\mu_\nu_{;\sigma}}=0$. From Yang-Mills theory one would expect a structure
\begin{equation}
\mathcal{L}\sim\tensor{F}{_\mu_\nu}\tensor{F}{^\mu^\nu}\,,
\end{equation}
but here the fundamental field is not $\Gamma$ but $g$. If we demand that we only have up to second derivatives of $g$, the only allowed term in the Lagrangian is $R$.
\begin{sidenote}[On higher derivatives]
If we include higher order derivatives of $g$ in the right way, we can make the resulting theory renormalizable. However, we violate unitarity and introduce so-called ghost fields, which are associated with the additional degrees of freedom we get.
\end{sidenote}
\begin{remark}[Dimensions]
In natural units\footnote{So that length has dimension of inverse mass.} the line element $\dif s^2$ has dimension ${[\dif s^2]=\textrm{M}^{-2}}$.\footnote{Where M refers to the dimension of mass.} Since further ${\left[\tensor{x}{^\mu}\right]=\textrm{M}^{-1}}$, the Lagrange density must have dimension ${\left[\mathcal{L}\right]=\textrm{M}^{4}}$.
\end{remark}
This constraint leads to the \emph{Einstein-Hilbert action}
\begin{equation}
S\textsubscript{EH}=\frac{1}{2\kappa}\fourint \sqrt{-g}(R-2\Lambda)\,.
\end{equation}
We now check that its variation indeed reproduces Einstein's equations. To do so we introduce the formalism of \emph{functional derivation}. Let therefore $\Phi=\{\varphi,\tensor{A}{^\mu},\Psi,\dots\}$ be a collection of fields and $F[\Phi]$ a functional. We define the variation of $F$ as
\begin{equation}
\delta F:=\int \dif x\, \frac{\delta F}{\delta\Phi^i}\delta\Phi^i\,.
\end{equation}
Typically the functionals are given in the form
\begin{equation}
S[\Phi]=\int \dif x\, L(x,\Phi)\, ,
\end{equation}
where $L$ is some local function. For the metric field itself one finds
\begin{equation}
\frac{\delta\tensor{g}{_\varrho_\sigma}(x)}{\delta\tensor{g}{_\mu_\nu}(x')}=\tensor*{\delta}{*^\mu*^\nu*_\varrho*_\sigma}\delta(x,x')\,,
\end{equation}
where $\tensor*{\delta}{*^\mu*^\nu*_\varrho*_\sigma}=\frac{1}{2}\left(\tensor*{\delta}{^\nu_\varrho}\tensor*{\delta}{^\mu_\sigma}+\tensor*{\delta}{^\mu_\varrho}\tensor*{\delta}{^\nu_\sigma}\right)$ is the identity on the space of symmetric rank two tensors.
\begin{remark}
In general $\delta(x,x')\neq \delta(x-x')$.
\end{remark}
We transform to the origin of an RNCS, so that the Christoffel symbols vanish.
In that coordinate system the covariant and the partial derivative coincide: $\tensor{\partial}{_\mu}=\tensor{\nabla}{_\mu}$.
\begin{equation}
\begin{split}
\delta \tensor{R}{^\rho_\mu_\nu_\sigma}
&=\delta \tensor{\partial}{_\nu}\cSym{\rho}{\mu}{\sigma}
-\delta \tensor{\partial}{_\mu}\cSym{\rho}{\nu}{\sigma}\\
&=\tensor{\partial}{_\nu}\delta \cSym{\rho}{\mu}{\sigma}
-\tensor{\partial}{_\mu}\delta \cSym{\rho}{\nu}{\sigma}\\
\end{split}
\end{equation}
Note that in general $\delta\tensor{\partial}{_\mu}\neq\tensor{\partial}{_\mu}\delta$.
\begin{equation}
\begin{split}
\delta \tensor{R}{_\mu_\nu}
&=\delta \tensor{R}{^\rho_\mu_\rho_\nu}\\
&=\tensor{\partial}{_\rho}\delta \cSym{\rho}{\mu}{\nu}
-\tensor{\partial}{_\mu}\delta \cSym{\rho}{\rho}{\nu}\\
&=\tensor{\nabla}{_\rho}\delta \cSym{\rho}{\mu}{\nu}
-\tensor{\nabla}{_\mu}\delta \cSym{\rho}{\rho}{\nu}\\
\end{split}
\end{equation}
This holds in a general frame since it is a tensor equation.
\begin{equation}
\begin{split}
\delta R
&=\delta \left(\tensor{g}{^\mu^\nu}\tensor{R}{_\mu_\nu}\right)\\
&=\tensor{R}{_\mu_\nu}\delta\tensor{g}{^\mu^\nu}
+\tensor{g}{^\mu^\nu}\delta\tensor{R}{_\mu_\nu}\\
\end{split}
\end{equation}
Using \eqref{eq:grels} we find
\begin{equation}
\begin{split}
2\kappa\delta S\textsubscript{EH}
&=\fourint \left[ (R-2\Lambda)\delta\sqrt{-g}+\sqrt{-g}\delta R\right]\\
&=\fourint \left[ \frac{1}{2}\sqrt{-g}\tensor{g}{^\mu^\nu}\delta\tensor{g}{_\mu_\nu}
(R-2\Lambda)+\sqrt{-g}\left(\tensor{R}{_\mu_\nu}\delta\tensor{g}{^\mu^\nu}
+\tensor{g}{^\mu^\nu}\delta\tensor{R}{_\mu_\nu}\right)\right]\\
&=\fourint \sqrt{-g}\left[ \frac{1}{2}\tensor{g}{^\mu^\nu}
(R-2\Lambda)+\tensor{R}{^\mu^\nu}\right]\delta\tensor{g}{_\mu_\nu}
+\fourint \sqrt{-g}\tensor{g}{^\mu^\nu}\delta\tensor{R}{_\mu_\nu} \\
\end{split}
\end{equation}
We treat both occurring terms separately:
\begin{equation}
\begin{split}
\fourint \sqrt{-g}\tensor{g}{^\mu^\nu}\delta\tensor{R}{_\mu_\nu}
&=\fourint \sqrt{-g}\tensor{g}{^\mu^\nu}\left(\tensor{\nabla}{_\rho}\delta
\cSym{\rho}{\mu}{\nu}
-\tensor{\nabla}{_\mu}\delta \cSym{\rho}{\rho}{\nu}\right) \\
&=\fourint \tensor{\nabla}{_\rho}\left(\sqrt{-g}\tensor{g}{^\mu^\nu}\delta \cSym{\rho}{\mu}{\nu}\right)\\
&\phantom{=}-\int\dif{}^4 x \tensor{\nabla}{_\mu}\left(\sqrt{-g}\tensor{g}{^\mu^\nu}\delta \cSym{\rho}{\rho}{\nu}\right)
\end{split}
\end{equation}
The integrals vanish by \name{Gauß}' law (neglecting surface terms). We are left with
\begin{equation}
\begin{split}
2\kappa\delta S\textsubscript{EH}
&=\fourint \sqrt{-g}\left[ \frac{1}{2}\tensor{g}{^\mu^\nu}
(R-2\Lambda)+\tensor{R}{^\mu^\nu}\right]\delta\tensor{g}{_\mu_\nu}
\end{split}
\end{equation}
so that we can now finally calculate the variation with respect to the metric field:
\begin{equation}
\frac{\delta S\textsubscript{EH}[\tensor{g}{_\mu_\nu}(x)]}{\delta\tensor{g}{_\mu_\nu}(x')}
=\sqrt{-g}\left[\frac{1}{2}\tensor{g}{^\mu^\nu}
(R-2\Lambda)+\tensor{R}{^\mu^\nu}\right]
\end{equation}
It vanishes if
\begin{equation}
\tensor{R}{^\mu^\nu}+\frac{R}{2}\tensor{g}{^\mu^\nu}
-\Lambda\tensor{g}{^\mu^\nu}=0\, ,
\end{equation}
so we have finally derived Einstein's field equations from a variational principle.
%TODO add some text, to much formulas

\section{Introduction of Matter}

In the gravitational context we mean by \emph{matter} any non-gravitational fields; these include scalar fields $\varphi$, spinor fields $\Psi$, gauge fields $\tensor{A}{^\mu}$, \dots. We collect all of them in a multivariable $\Phi$. A local action can be written as
\begin{equation}
S\textsubscript{m}[\Phi,g]=\fourint \sqrt{-g} L\textsubscript{m}\left(\Phi,\tensor{\nabla}{_\mu}\Phi,g\right)
\end{equation}
$\tensor{g}{^\mu^\nu}$ appears in $L\textsubscript{m}$ because the derivatives $\tensor{\nabla}{_\mu}, \tensor{\partial}{_\mu}$ must be contracted. Additionally it enters via $\sqrt{-g}$.
\begin{example}[Free scalar field]
The action of a free scalar field in Minkowski space has the form
\begin{equation}
S\textsubscript{m}=\fourint \left(-\frac{1}{2}\tensor{\eta}{^\mu^\nu}
\tensor{\partial}{_\mu}\varphi\tensor{\partial}{_\nu}\varphi-\frac{1}{2}m^2\varphi^2\right)\,.
\end{equation}
The minus sign in front of the kinetic term should come as no surprise, since $\tensor{\eta}{^0^0}=-1$, so that $\dot{\varphi}^2$ enters with a positive sign. In a non-inertial frame we have to make the usual replacements
\begin{equation}
\tensor{\eta}{_\mu_\nu}\to \tensor{g}{_\mu_\nu}\, , \quad
\tensor{\partial}{_\mu}\to \tensor{\nabla}{_\mu}\,, \quad
\dif{}^4 x \to \dif{}^4 x\, \sqrt{-g}\, ,
\end{equation}
which is also known as the \emph{minimal coupling prescription}. The action for a scalar $\varphi$ in the presence of gravity, i.e. a dynamical $\tensor{g}{_\mu_\nu}(x)$, reads
\begin{equation}
S\textsubscript{m}=\fourint\, \sqrt{-g}\left(-\frac{1}{2}\tensor{g}{^\mu^\nu}
\tensor{\nabla}{_\mu}\varphi\tensor{\nabla}{_\nu}\varphi
-\frac{1}{2}m^2\varphi^2\right)\,.
\end{equation}
The combined action of scalar field and gravity is given by
\begin{equation}
S[g,\varphi]=S\textsubscript{g}[g]+S\textsubscript{m}[g,\varphi]\,.
\end{equation}
The variation with respect to the field $\varphi$ is
\begin{equation}
\begin{split}
\frac{\delta S[g,\varphi]}{\delta \varphi\left(x'\right)}&=\frac{\delta S\textsubscript{m}[g,\varphi]}{\delta \varphi\left(x'\right)}\\
&=\fourint\sqrt{-g}\left[-\tensor{g}{^\mu^\nu}
\tensor{\nabla}{_\mu}\varphi\tensor{\nabla}{_\nu}\left(\frac{\delta \varphi(x)}{\delta \varphi\left(x'\right)}\right)-m^2\varphi\frac{\delta \varphi(x)}{\delta \varphi\left(x'\right)}\right]\,,
\end{split}
\end{equation}
where we used that $\delta$ and $\tensor{\nabla}{_\mu}$ commute.
Partial integration yields
\begin{equation}
\begin{split}
\frac{\delta S\textsubscript{m}[g,\varphi]}{\delta \varphi\left(x'\right)}&=\fourint \sqrt{-g}\left(\square_{g}-m^2\right)\varphi\delta(x,x')\\
&=\sqrt{-g}\left(\square_{g}-m^2\right)\varphi\, ,
\end{split}
\end{equation}
with $\square_{g}:=\tensor{g}{^\mu^\nu} \tensor{\nabla}{_\mu}\tensor{\nabla}{_\nu} $ the \emph{\name{Laplace–Beltrami} operator}, a generalisation of the ordinary Laplacian to curved space. Demanding that the variation with respect to $\varphi$ vanishes implies the \emph{Klein-Gordon equation}
\begin{equation}
\left(\square_g-m^2\right)\varphi=0\,.
\end{equation}
We can also vary the action with respect to the metric field $\tensor{g}{_\mu_\nu}$ resulting in
\begin{equation}
\frac{\delta S[g,\varphi]}{\delta \tensor{g}{_\mu_\nu}\left(x'\right)}=
\frac{\delta S\textsubscript{g}[g]}{\delta \tensor{g}{_\mu_\nu}\left(x'\right)}
+\frac{\delta S\textsubscript{m}[g,\varphi]}{\delta \tensor{g}{_\mu_\nu}\left(x'\right)}
=\frac{\sqrt{-g}}{2\kappa}\left(\tensor{G}{^\mu^\nu}+\Lambda\tensor{g}{^\mu^\nu}
\right)+\frac{\delta S\textsubscript{m}[g,\varphi]}{\delta \tensor{g}{_\mu_\nu}\left(x'\right)}\,.
\end{equation}
To recover the Einstein equations it is convenient to define the energy-momentum tensor
\begin{equation}
\tensor{T}{^\mu^\nu}:=\frac{2}{\sqrt{-g}}\frac{\delta S\textsubscript{m}[g,\varphi]}{\delta \tensor{g}{_\mu_\nu}\left(x'\right)}\,.
\end{equation}
We can now proceed to calculate the quantity we have just introduced for a scalar field:
\begin{equation}
\begin{split}
\frac{\delta S\textsubscript{m}[g,\varphi]}{\delta \tensor{g}{_\mu_\nu}\left(x'\right)}
&=\int \dif{}^4x \frac{\delta\sqrt{-g}}{\delta \tensor{g}{_\mu_\nu}}\left(-\frac{1}{2}\tensor{g}{^\mu^\nu}
\tensor{\nabla}{_\mu}\varphi\tensor{\nabla}{_\nu}\varphi-\frac{1}{2}m^2\varphi^2\right)\\
&\phantom{=}+ \sqrt{-g}\left(\frac{1}{2}\tensor{g}{^\alpha^\varrho}\tensor{g}{^\beta^\sigma}
\tensor{\nabla}{_\alpha}\varphi\tensor{\nabla}{_\beta}\varphi\frac{\delta \tensor{g}{_\varrho_\sigma}}{\delta \tensor{g}{_\mu_\nu}}\right)\\
&=\frac{1}{2}\int \dif{}^4 x \sqrt{-g}\left(-\frac{1}{2}\tensor{g}{^\mu^\nu}\tensor{\nabla}{_\varrho}\varphi\tensor{\nabla}{^\varrho}\varphi-\frac{1}{2}\tensor{g}{^\mu^\nu}m^2\varphi^2
+\tensor{\nabla}{^\mu}\varphi\tensor{\nabla}{^\nu}\varphi\right)\delta(x,x')\\
&=\frac{1}{2}\sqrt{-g}\left(-\frac{1}{2}\tensor{g}{^\mu^\nu}\tensor{\nabla}{_\varrho}\varphi\tensor{\nabla}{^\varrho}\varphi-\frac{1}{2}\tensor{g}{^\mu^\nu}m^2\varphi^2
+\tensor{\nabla}{^\mu}\varphi\tensor{\nabla}{^\nu}\varphi\right)
\end{split}
\end{equation}
So that
\begin{equation}
\tensor{T}{^\mu^\nu}(\varphi)
=-\frac{1}{2}\tensor{g}{^\mu^\nu}\tensor{\nabla}{_\varrho}\varphi\tensor{\nabla}{^\varrho}\varphi
+\tensor{\nabla}{^\mu}\varphi\tensor{\nabla}{^\nu}\varphi
-\frac{1}{2}\tensor{g}{^\mu^\nu}m^2\varphi^2\,.
\end{equation}
As we have noticed, the Einstein tensor is covariantly conserved (contracted Bianchi identities).
The Einstein equation then implies that also $\tensor{T}{^\mu^\nu_{;\nu}}=0$; this can be checked for the given tensor:
\begin{equation}
\begin{split}
\tensor{\nabla}{_\mu}\tensor{T}{^\mu^\nu}
&=-\tensor{g}{^\mu^\nu}\tensor{\nabla}{_\mu}\tensor{\nabla}{_\varrho}\varphi\tensor{\nabla}{^\varrho}\varphi+\square\varphi\tensor{\nabla}{^\nu}\varphi+\tensor{\nabla}{^\mu}\varphi\tensor{\nabla}{_\mu}\tensor{\nabla}{^\nu}\varphi
-\tensor{g}{^\mu^\nu}m^2\varphi\tensor{\nabla}{_\mu}\varphi\\
&=\tensor{\nabla}{^\nu}\varphi\left(\square-m^2\right)\varphi\\
&=0\,,
\end{split}
\end{equation}
where the last equality holds because $\varphi$ satisfies the Klein-Gordon equation.
\end{example}
The Einstein equations are ten quasi-linear differential equations for the metric field $\tensor{g}{_\mu_\nu}$; quasi-linear means that the highest order derivatives appear only linearly. Strictly speaking the Einstein equations are thus \emph{nonlinear}.
% \begin{sidenote}
% If you substract the constrains imposed by the Bianci identities you end with
% two DOFs, which are associated with the polarisation states of the graviton.
% \end{sidenote}
How do we find a solution to these equations?
\begin{enumerate}
\item Prescribe $\tensor{T}{_\mu_\nu}$. This is only possible for highly symmetric problems, e.g. the \name{Schwarzschild} solution and the cosmological solutions (\name{Friedmann}'s equations).
\item Assume $\tensor{g}{_\mu_\nu}$, then compute $\tensor{T}{_\mu_\nu}$ and (try!) to interpret this.
\end{enumerate}
%TODO part about Intrinsic vs extrinsic curvature, image??
\subsection{ADM-Decomposition}

\begin{figure}[hbtp]
\centering
\includegraphics{foliation.pdf}
\caption{Foliation of spacetime into spatial hypersurfaces $\Sigma_t$.}
\end{figure}

The formulation of initial value problems is not as easy as it is in classical physics.\footnote{In fact, depending on the setting, a well defined formulation can be impossible.} Assume we know
\begin{itemize}
\item $\tensor{g}{_\mu_\nu}$ on $\Sigma_{t_0}$
\item $\tensor{g}{_\mu_\nu_{,j}}$, $\tensor{g}{_\mu_\nu_{,0}}$ on $\Sigma_{t_0}$
\end{itemize}
where we view spacetime as a collection of spacelike hypersurfaces $\Sigma_t=\left\{\tensor{x}{^0}=t\right\}$. For simplicity we consider a vacuum solution to the Einstein equations, i.e.
\begin{equation}
0=G=R-2R\implies\tensor{R}{_\mu_\nu}=0\,.
\end{equation}
Split into the respective parts, the field equations are
\begin{align}
0&=\tensor{R}{_0_0}=-\frac{1}{2}\tensor{g}{^i^j}\tensor{g}{_i_j_{,00}}+\tensor{M}{_0_0}\,,\\
0&=\tensor{R}{_0_i}=-\frac{1}{2}\tensor{g}{^0^j}\tensor{g}{_i_j_{,00}}+\tensor{M}{_0_i}\,,\\
0&=\tensor{R}{_i_j}=-\frac{1}{2}\tensor{g}{^0^0}\tensor{g}{_i_j_{,00}}+\tensor{M}{_i_j}\,,
\end{align}
where $\tensor{M}{_\mu_\nu}$ is a remainder term containing lower order time derivatives. This shows that there are no second order time derivatives of $\tensor{g}{_0_\mu}$. We thus have 10 equations, but second time derivatives of only 6 of the 10 functions. The remaining DOFs can be used for a coordinate transformation, so that $\tensor{g}{_0_\mu_{,00}}=0$ on $\Sigma_{t_0}$. This is always possible, but we will not prove it. It can be further shown, by means of the contracted Bianchi identities, that this implies $\tensor{g}{_0_\mu_{,00}}=0$ on \emph{all} hypersurfaces $\Sigma_{t}$.
% \begin{equation}
% \tensor{\partial}{_0}\tensor{G}{^0^\nu}=
% \tensor{\partial}{_i}\tensor{G}{^i^\nu}
% -\cSym{\nu}{0}{\lambda}\tensor{G}{^\lambda^\nu}
% -\cSym{0}{\nu}{\lambda}\tensor{G}{^\mu^\lambda}\,.
% \end{equation}
Since we have too much freedom, the solution will not be unique. We have the freedom to choose four coordinates
\begin{equation}
\tensor{x}{^{\mu^\prime}}=\tensor{f}{^{\mu^\prime}}\left(\tensor{x}{^\mu}\right)\,.
\end{equation}
One typical choice is the \emph{harmonic\footnote{A function $f$ is said to be harmonic if it satisfies $\square f = 0$.} gauge}
\begin{equation}
\square\tensor{x}{^\mu}=0\,.
\end{equation}
We can expand the d'Alembertian, using \eqref{eq:quabla}, to
\begin{equation}
\begin{split}
\square\tensor{x}{^\mu}&=g^{-\nicefrac{1}{2}}\tensor{\partial}{_\varrho}\left(g^{\nicefrac{1}{2}}\tensor{g}{^\varrho^\sigma}\tensor{\partial}{_\sigma}\tensor{x}{^\mu}\right)\\
&=g^{-\nicefrac{1}{2}}\tensor{\partial}{_\varrho}\left(g^{\nicefrac{1}{2}}\tensor{g}{^\varrho^\sigma}\tensor{\delta}{_\sigma^\mu}\right)\\
&=g^{-\nicefrac{1}{2}}\tensor{\partial}{_\varrho}\left(g^{\nicefrac{1}{2}}\tensor{g}{^\varrho^\mu}\right)\,,\\
\end{split}
\end{equation}
the harmonic gauge is therefore equivalent to
\begin{equation}
\tensor{\partial}{_\varrho}\left(g^{\nicefrac{1}{2}}\tensor{g}{^\varrho^\mu}\right)=0\, .
\end{equation}
This equation can be split into spatial and time components and differentiated with respect to $\tensor{x}{^0}$, so that
\begin{equation}
\tensor*{\partial}{*_0^2}\left(g^{\nicefrac{1}{2}}\tensor{g}{^0^\mu}\right)
= -\tensor{\partial}{_i}\left[\tensor{\partial}{_0}\left(g^{\nicefrac{1}{2}}\tensor{g}{^i^\mu}\right)\right]\, ,
\end{equation}
which fixes the second order time derivatives of the relevant components $\tensor{g}{^0^\mu}$. The time evolution can therefore now be solved.
%TODO Missing part?
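As a simple sanity check, added here for illustration: Cartesian coordinates on Minkowski space are harmonic. With $\tensor{g}{_\mu_\nu}=\tensor{\eta}{_\mu_\nu}$ we have $g^{\nicefrac{1}{2}}=1$ and constant components $\tensor{\eta}{^\varrho^\mu}$, so
\begin{equation}
\tensor{\partial}{_\varrho}\left(g^{\nicefrac{1}{2}}\tensor{g}{^\varrho^\mu}\right)
=\tensor{\partial}{_\varrho}\tensor{\eta}{^\varrho^\mu}=0\,,
\end{equation}
and indeed $\square\tensor{x}{^\mu}=0$ with the flat d'Alembertian.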
\subsubsection{Degrees of freedom}

\begin{itemize}
\item[\textsf{\textbf{10}}] components for every spacetime point from the symmetric $\tensor{g}{_\mu_\nu}$
\item[\textsf{\textbf{-4}}] from the constraint equations (a consequence of $\tensor{G}{_\mu_\nu^{;\nu}}=0$)
\begin{itemize}
\item $\tensor{G}{^0^0}=\kappa \tensor{T}{^0^0}$ ensures that the evolution is independent of the choice of spatial coordinates on $\Sigma_{t_0}$.
\item $\tensor{G}{^i^0}=\kappa \tensor{T}{^i^0}$ ensures that the time evolution is independent of the way we foliated spacetime into spatial hypersurfaces $\Sigma_{t}$.
\end{itemize}
\item[\textsf{\textbf{-4}}] due to the freedom to choose coordinates (i.e. a gauge).
\end{itemize}
We are left with two physical degrees of freedom which may be interpreted as the polarisation states of the graviton field.

\subsubsection{Comparison with electrodynamics in flat spacetime}

In electrodynamics, instead of \name{Einstein}'s equations we have the field equations for the four potential $\tensor{A}{_\mu}$:
\begin{equation}
\square\tensor{A}{_\mu}-\partial_\mu\left(\partial_\nu\tensor{A}{^\nu} \right)=0\,.
\end{equation}
As we did for the gravitational field, we take a look at the zero component. We find
\begin{equation}
\begin{split}
0&=-\partial_0^2\tensor{A}{_0}+\partial_i\partial^i\tensor{A}{_0}
-\partial_0\left(-\partial_0\tensor{A}{_0}+\partial_iA^i\right)\\
&= \partial_i\partial^i\tensor{A}{_0}-\partial_0\partial_iA^i
\end{split}
\end{equation}
This equation is equivalent to $\nabla\cdot\vec{E}=0$ and the Bianchi identities. So once again $\tensor{A}{_0}$ is \emph{not} determined by the dynamical evolution equation, because there is no second order time derivative, analogous to $\tensor{g}{_0_0}$. Hence $\tensor{A}{_0}$ cannot be predicted from data specified on the initial time slice. This reflects an internal redundancy, namely the gauge invariance of the theory.
For any scalar function $\Lambda$, the transformation
\begin{equation}
\tensor{A}{_\mu}\to\tensor*{A}{*_\mu^\prime}=
\tensor{A}{_\mu}+\partial_\mu\Lambda\,,
\end{equation}
leaves the physics invariant. It is trivial to check that the field strength tensor ${\tensor{F}{_\mu_\nu}=\partial_\mu\tensor{A}{_\nu}-\partial_\nu\tensor{A}{_\mu}}$ stays invariant. Perhaps more interestingly, the field equation is also gauge invariant:
\begin{equation}
\begin{split}
\square\tensor*{A}{*_\mu^\prime}-\partial_\mu\left(\partial_\nu\tensor*{A}{*^\nu^\prime} \right)
&= \square\tensor{A}{_\mu}+\square\partial_\mu\Lambda-\partial_\mu\left(\partial_\nu\tensor{A}{^\nu} \right)-\partial_\mu\square\Lambda\\
&= \square\tensor{A}{_\mu}-\partial_\mu\left(\partial_\nu\tensor{A}{^\nu} \right)\,.
\end{split}
\end{equation}
Thus if $\tensor{A}{_\mu}$ is a solution to the field equation, so is $\tensor*{A}{*_\mu^\prime}$, and the two are physically indistinguishable. We can also fix a gauge, for example the \emph{Lorenz gauge}:
\begin{equation}
\partial_\mu\tensor{A}{^\mu}=0\, .
\end{equation}
Differentiating this with respect to $\tensor{x}{^0}$ we get
\begin{equation}
\partial_{0}^2\tensor{A}{^0}=-\partial_i\partial_0\tensor{A}{^i}\, ,
\end{equation}
so as with $\tensor{g}{_0_0}$ the evolution of the zero component is now related to the other components. There is still a residual gauge freedom: we can still transform
\begin{equation}
\tensor{A}{_\mu}\to\tensor*{A}{*_\mu^\prime}=
\tensor{A}{_\mu}+\partial_\mu\Lambda\, ,
\end{equation}
but to keep the gauge, we have to demand that $\square\Lambda=0$. Again we count the DOFs:
\begin{itemize}
\item[\textsf{\textbf{4}}] components of the potential $\tensor{A}{_\mu}$.
\item[\textsf{\textbf{-1}}] from the constraint $\nabla\cdot\vec{E}=0$.
\item[\textsf{\textbf{-1}}] from the residual gauge freedom $\Lambda$.
\end{itemize}
This leaves two physical degrees of freedom, the polarisation states of a photon.
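Explicitly, a one-line check (added for completeness) shows why $\square\Lambda=0$ is required: under the residual transformation the gauge condition becomes
\begin{equation}
\partial_\mu\tensor*{A}{*^\mu^\prime}
=\partial_\mu\tensor{A}{^\mu}+\square\Lambda
=\square\Lambda\,,
\end{equation}
which vanishes precisely when $\Lambda$ is harmonic.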
\begin{remark} As we have seen, there is a direct correspondence between the gauge freedom in electrodynamics and the freedom of choice of coordinates in GR. \end{remark}
\section{Empirical fits to noise predictions} \label{supp_empirical} (Note: The Python code used for the calculations presented in this section can be found in the \href{https://www.rpgroup.caltech.edu/chann_cap/src/theory/html/empirical_constants.html}{following link} as an annotated Jupyter notebook) In \fref{fig3_cell_cycle}(C) in the main text we show that our minimal model has a systematic deviation in its gene expression noise predictions compared to the experimental data. This systematic deviation will need to be addressed in an improved version of the minimal model presented in this work. To gain insight into the origin of this systematic deviation, in this appendix we explore empirical modifications of the model that improve the agreement between theory and experiment. \subsection{Multiplicative factor for the noise} \label{supp_mult_factor_noise} The first option we explore is to modify our noise predictions by a constant multiplicative factor. This means that we assume the relationship between our minimal model predictions and the experimental noise in gene expression is of the form \begin{equation} \text{noise}_{\text{exp}} = \alpha \cdot \text{noise}_{\text{theory}}, \end{equation} where $\alpha$ is a dimensionless constant to be fit from the data. The data, especially in \fref{sfig_noise_delta}, suggest that our predictions are within a factor of $\approx$ two of the experimental data. To check this intuition we performed a weighted linear regression between the experimental and theoretical noise measurements. The weight for each datum was taken to be inversely proportional to the bootstrap error in the noise estimate, so that poorly determined noise values weigh less in the regression. The result of this regression with no intercept shows that a single multiplicative factor systematically improves the agreement between the theoretical and experimental predictions.
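The weighted no-intercept regression described above has a simple closed form and can be sketched in a few lines. The numbers below are made-up placeholders; the real data and fit live in the linked notebook:

```python
import numpy as np

# Hypothetical theoretical/experimental noise values and bootstrap errors,
# for illustration only (not the actual measurements).
noise_theory = np.array([0.10, 0.25, 0.40, 0.60, 0.85])
noise_exp    = np.array([0.18, 0.41, 0.55, 0.95, 1.30])
boot_err     = np.array([0.02, 0.03, 0.05, 0.10, 0.20])

# Weighted least squares with no intercept: minimize sum_i w_i (y_i - alpha x_i)^2.
# Inverse-variance weights make poorly determined points count less.
w = 1.0 / boot_err**2
alpha = np.sum(w * noise_theory * noise_exp) / np.sum(w * noise_theory**2)
print(f"multiplicative factor alpha = {alpha:.2f}")
```

The closed form $\alpha = \sum_i w_i x_i y_i / \sum_i w_i x_i^2$ is the exact minimizer of the weighted squared error once the intercept is fixed at zero, so no iterative fitting is needed.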
\fref{sfig_noise_mult_factor} shows the improved agreement when the theoretical predictions for the noise are multiplied by $\approx 1.5$. \begin{figure}[h!] \centering \includegraphics {../fig/si/figS30.pdf} \caption{\textbf{Multiplicative factor to improve theoretical vs. experimental comparison of noise in gene expression.} Theoretical vs. experimental noise both in linear (left) and log (right) scale. The dashed line shows the identity line of slope 1 and intercept zero. All data are colored by the corresponding value of the experimental fold-change in gene expression as indicated by the color bar. The $x$-axis was multiplied by a factor of $\approx 1.5$ as determined by a linear regression from the data in \fref{sfig_noise_comparison}. Each datum represents a single date measurement of the corresponding strain and IPTG concentration with $\geq 300$ cells. The points correspond to the median, and the error bars correspond to the 95\% confidence interval as determined by 10,000 bootstrap samples.} \label{sfig_noise_mult_factor} \end{figure} For completeness \fref{sfig_noise_reg_corrected} shows the noise in gene expression as a function of the inducer concentration including this factor of $\approx 1.5$. It is clear that overall a simple multiplicative factor improves the predictive power of the model. \begin{figure}[h!] \centering \includegraphics {../fig/si/figS31.pdf} \caption{\textbf{Protein noise of the regulated promoter with multiplicative factor.} Comparison of the experimental noise for different operators ((A) O1, $\eR = -15.3 \; k_BT$, (B) O2, $\eR = -13.9 \; k_BT$, (C) O3, $\eR = -9.7 \; k_BT$) with the theoretical predictions for the multi-promoter model. A linear regression revealed that multiplying the theoretical noise prediction by a factor of $\approx 1.5$ would improve agreement between theory and data. Points represent the experimental noise as computed from single-cell fluorescence measurements of different {\it E.
coli} strains under 12 different inducer concentrations. Dotted line indicates plot in linear rather than logarithmic scale. Each datum represents a single date measurement of the corresponding strain and IPTG concentration with $\geq 300$ cells. The points correspond to the median, and the error bars correspond to the 95\% confidence interval as determined by 10,000 bootstrap samples. White-filled dots are plotted at a different scale for better visualization.} \label{sfig_noise_reg_corrected} \end{figure} \subsection{Additive factor for the noise} \label{supp_add_factor_noise} As an alternative way to empirically improve the predictions of our model we will now test the idea of an additive constant. What this means is that our minimal model underestimates the noise in gene expression as \begin{equation} \text{noise}_{\text{exp}} = \beta + \text{noise}_{\text{theory}}, \end{equation} where $\beta$ is an additive constant to be determined from the data. As with the multiplicative constant, we performed a regression to determine this empirical additive constant, comparing experimental and theoretical gene expression noise values. We use the error in the 95\% bootstrap confidence interval as a weight for the linear regression. \fref{sfig_noise_add_factor} shows the resulting theoretical vs. experimental noise where $\beta \approx 0.2$. We can see a great improvement in the agreement between theory and experiment with this additive constant. \begin{figure}[h!] \centering \includegraphics {../fig/si/figS32.pdf} \caption{\textbf{Additive factor to improve theoretical vs. experimental comparison of noise in gene expression.} Theoretical vs. experimental noise both in linear (left) and log (right) scale. The dashed line shows the identity line of slope 1 and intercept zero. All data are colored by the corresponding value of the experimental fold-change in gene expression as indicated by the color bar.
A value of $\approx 0.2$ was added to all values in the $x$-axis as determined by a linear regression from the data in \fref{sfig_noise_comparison}. Each datum represents a single date measurement of the corresponding strain and IPTG concentration with $\geq 300$ cells. The points correspond to the median, and the error bars correspond to the 95\% confidence interval as determined by 10,000 bootstrap samples.} \label{sfig_noise_add_factor} \end{figure} For completeness \fref{sfig_noise_reg_add} shows the noise in gene expression as a function of the inducer concentration including this additive factor of $\beta \approx 0.2$. If anything, the additive factor seems to improve the agreement between theory and data even more than the multiplicative factor. \begin{figure}[h!] \centering \includegraphics {../fig/si/figS33.pdf} \caption{\textbf{Protein noise of the regulated promoter with additive factor.} Comparison of the experimental noise for different operators ((A) O1, $\eR = -15.3 \; k_BT$, (B) O2, $\eR = -13.9 \; k_BT$, (C) O3, $\eR = -9.7 \; k_BT$) with the theoretical predictions for the multi-promoter model. A linear regression revealed that an additive factor of $\approx 0.2$ to the theoretical noise prediction would improve agreement between theory and data. Points represent the experimental noise as computed from single-cell fluorescence measurements of different {\it E. coli} strains under 12 different inducer concentrations. Dotted line indicates plot in linear rather than logarithmic scale. Each datum represents a single date measurement of the corresponding strain and IPTG concentration with $\geq 300$ cells. The points correspond to the median, and the error bars correspond to the 95\% confidence interval as determined by 10,000 bootstrap samples.
White-filled dots are plotted at a different scale for better visualization.} \label{sfig_noise_reg_add} \end{figure} \subsection{Correction factor for channel capacity with multiplicative factor} As seen in \siref{supp_multi_gene} a constant multiplicative factor can reduce the discrepancy between the model predictions and the data with respect to the noise (standard deviation / mean) in protein copy number. Finding the equivalent correction for the channel capacity requires insights from the so-called small noise approximation \cite{Tkacik2008a}. The small noise approximation assumes that the input-output function can be modeled as a Gaussian distribution in which the standard deviation is small. Using these assumptions one can derive a closed-form expression for the channel capacity. Although our data and model predictions do not satisfy the requirements for the small noise approximation, we can gain some intuition for how the channel capacity would scale given a systematic deviation in the cell-to-cell variability predictions compared with the data. Using the small noise approximation one can derive the form of the input distribution at channel capacity $P^*(c)$. To do this we use the fact that there is a deterministic relationship between the input inducer concentration $c$ and the mean output protein value $\ee{p}$; therefore we can work with $P(\ee{p})$ rather than $P(c)$, since the deterministic relation allows us to write \begin{equation} P(c) dc = P(\ee{p}) d\ee{p}.
\end{equation} Optimizing over all possible distributions $P(\ee{p})$ using calculus of variations results in a distribution of the form \begin{equation} P^*(\ee{p}) = {1 \over \mathcal{Z}} {1 \over \sigma_p(\ee{p})}, \end{equation} where $\sigma_p(\ee{p})$ is the standard deviation of the protein distribution as a function of the mean protein expression, and $\mathcal{Z}$ is a normalization constant defined as \begin{equation} \mathcal{Z} \equiv \int_{\ee{p(c=0)}}^{\ee{p(c\rightarrow \infty)}} {1 \over \sigma_p(\ee{p})} d\ee{p}. \end{equation} Under these assumptions the small noise approximation tells us that the channel capacity is of the form \cite{Tkacik2008a} \begin{equation} I = \log_2 \left( {\mathcal{Z} \over \sqrt{2 \pi e}} \right). \end{equation} From the theory-experiment comparison in \siref{supp_multi_gene} we know that the standard deviation predicted by our model is systematically off by a factor of two compared to the experimental data, i.e. \begin{equation} \sigma_p^{\exp} = 2 \sigma_p^{\text{theory}}. \end{equation} This then implies that the normalization constant $\mathcal{Z}$ between theory and experiment must follow a relationship of the form \begin{equation} \mathcal{Z}^{\exp} = {1 \over 2} \mathcal{Z}^{\text{theory}}. \end{equation} With this relationship the small noise approximation would predict that the difference between the experimental and theoretical channel capacity should be of the form \begin{equation} I^{\exp} = \log_2 \left( {\mathcal{Z}^{\exp} \over \sqrt{2 \pi e}} \right) = \log_2 \left( {\mathcal{Z}^{\text{theory}} \over \sqrt{2 \pi e}} \right) - \log_2(2). \end{equation} Therefore under the small noise approximation we would expect our predictions for the channel capacity to be off by a constant of 1 bit ($\log_2(2)$) of information. 
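The one-bit shift derived above is easy to verify numerically: halving the normalization constant $\mathcal{Z}$ lowers $I = \log_2(\mathcal{Z}/\sqrt{2\pi e})$ by exactly $\log_2(2)$, independently of the value of $\mathcal{Z}$ itself (the value used below is arbitrary):

```python
import numpy as np

def channel_capacity(Z):
    # Small-noise-approximation capacity: I = log2(Z / sqrt(2 pi e)).
    return np.log2(Z / np.sqrt(2 * np.pi * np.e))

Z_theory = 50.0        # arbitrary illustrative normalization constant
Z_exp = Z_theory / 2   # sigma_exp = 2 sigma_theory implies Z is halved

# The sqrt(2 pi e) term cancels in the difference, leaving log2(2) = 1 bit.
shift = channel_capacity(Z_theory) - channel_capacity(Z_exp)
print(f"capacity difference = {shift} bits")
```

Because the normalization cancels in the difference, the prediction of a constant 1-bit offset does not depend on the (unknown) absolute scale of $\mathcal{Z}$.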
Again, the conditions for the small noise approximation do not apply to our data given the intrinsic level of cell-to-cell variability in the system; nevertheless, what this analysis tells us is that we expect an additive constant to explain the discrepancy between our model predictions and the experimental channel capacity. To test this hypothesis we performed a ``linear regression'' between the model predictions and the experimental channel capacity with a fixed slope of 1. The intercept of this regression, -0.56 bits, indicates the systematic deviation we expect to explain the difference between our model and the data. \fref{sfig_channcap_corr} shows the comparison between the original predictions shown in \fref{fig5_channcap}(A) and the resulting predictions with this shift. Other than the data with zero channel capacity, this shift is able to correct the systematic deviation for all data. We therefore conclude that our model underestimates the experimentally determined channel capacity by a constant amount of 0.43 bits. \begin{figure}[h!] \centering \includegraphics {../fig/si/figS34.pdf} \caption{\textbf{Additive correction factor for channel capacity.} Solid lines represent the theoretical predictions of the channel capacity shown in \fref{fig5_channcap}(A). The dashed lines show the resulting predictions with a constant shift of -0.43 bits. Points represent single biological replicates of the inferred channel capacity.} \label{sfig_channcap_corr} \end{figure} % \subsubsection{Systematic deviation of the distribution skewness} % \label{supp_mult_factor_skew} % Another relevant statistic we can compare between our theoretical predictions % and the experimental data is the skewness. The skewness $S(X)$ is defined as % \begin{equation} % S(X) \equiv \ee{ \left( {X - \mu \over \sigma} \right)^3}, % \end{equation} % where $\mu$ and $\sigma$ are the corresponding mean and standard deviation of % the random variable $X$.
The skewness can also be computed in terms of the % third moment of the distribution $\ee{X^3}$ as % \begin{equation} % S(X) = {\ee{X^3} - 3 \mu \sigma^2 - \mu^3 \over \sigma^3}. % \end{equation} % We computed this quantity from the numerical integration of the moment % equations. \fref{sfig_skew_reg} shows that as in \fref{sfig_noise_reg} there is % a systematic deviation between our theoretical predictions and the experimental % skewness. It again seems to be a systematic underestimation of the baseline. % \begin{figure}[h!] % \centering \includegraphics % {../fig/si/figS17.pdf} % \caption{\textbf{Skewness of the regulated promoter.} Comparison of the % experimental skewness for different operators ((A) O1, $\eR = -15.3 \; % k_BT$, (B) O2, $\eR = -13.9 \; k_BT$, (C) O3, $\eR = -9.7 \; k_BT$) with the % theoretical predictions for the the multi-promoter model. Points represent % the experimental noise as computed from single-cell fluorescence measurements % of different {\it E. coli} strains under 12 different inducer concentrations. % Dotted line indicates plot in linear rather than logarithmic scale. Each % datum represents a single date measurement of the corresponding strain and % IPTG concentration with $\geq 300$ cells. The points correspond to the % median, and the error bars correspond to the 95\% confidence interval as % determined by 10,000 bootstrap samples.} % \label{sfig_skew_reg} % \end{figure} % Interestingly enough if we follow the same procedure that we followed for the % noise with a linear regression with a fixed origin, we find that a factor of 2 % again can fix the systematic deviation. \fref{sfig_skew_reg_corr} shows the % improved agreement for the skewness when this multiplicative factor is % included. The origin of this factor of two as well as the one for the noise are % limitations of our current state-of-the-art modeling approach. It would be very % interesting to dissect whether or not the model can account for these changes. 
% \begin{figure}[h!] % \centering \includegraphics % {../fig/si/figS18.pdf} % \caption{\textbf{Skewness of the regulated promoter with a multiplicative % factor.} Comparison of the experimental skewness for different operators ((A) % O1, $\eR = -15.3 \; k_BT$, (B) O2, $\eR = -13.9 \; k_BT$, (C) O3, $\eR = % -9.7 \; k_BT$) with the theoretical predictions for the the multi-promoter % model corrected by a multiplicative factor. A linear regression determined % that multiplying the theoretical skewness by a factor of two was enough to % improve the agreement between theory and experiments. Points represent the % experimental noise as computed from single-cell fluorescence measurements of % different {\it E. coli} strains under 12 different inducer concentrations. % Dotted line indicates plot in linear rather than logarithmic scale. Each % datum represents a single date measurement of the corresponding strain and % IPTG concentration with $\geq 300$ cells. The points correspond to the % median, and the error bars correspond to the 95\% confidence interval as % determined by 10,000 bootstrap samples.} % \label{sfig_skew_reg_corr} % \end{figure} % \subsubsection{Correction factor for distribution moments} % In \siref{supp_mult_factor_noise} and \siref{supp_mult_factor_skew} we showed % how simple multiplicative factors could improve the agreement between % predictions and measurements for the noise and the skewness of the protein % distribution. The question now becomes if applying the equivalent correction % factors to the moments could improve the agreement between the maximum entropy % distributions and the experimental distributions. Specifically if we work with % the three first moments of the protein distribution $\ee{p}, \ee{p^2}$, and % $\ee{p^3}$ we need to correct our theoretical predictions according to the % systematic empirical deviations from the noise and the skewness. Let us use % subscript $T$ and $E$ to represent experimental and theoretical quantities. 
We % know that the experimentally determined noise $\eta$ is off by a factor of two % from the theoretical predictions, i.e. % \begin{equation} % \eta_E = 2 \eta_T. % \end{equation} % Since our predictions for the fold-change, which depend solely on the first % moment of the protein distribution are in agreement, we will assume that there % is no need to correct the predictions for the first moment, i.e. $\ee{p}_T = % \ee{p}_E$. Let's then take a look at what the correction to the second moment % $\ee{p^2}$ need to be in order for the experimental data to agree with the % theoretical predictions. The definition of the noise gives then % \begin{equation} % {\sqrt{\ee{p^2}_E - \ee{p}_E^2} \over \ee{p}_E} = % 2 {\sqrt{\ee{p^2}_T - \ee{p}_T^2} \over \ee{p}_T}. % \end{equation} % Using our assumption that the first moment does not change, and solving for % $\ee{p^2}_E$ results in % \begin{equation} % \ee{p^2}_E = 4 \ee{p^2}_T - 3 \ee{p}_T^2. % \end{equation} % This result tells us that if we were to modify our prediction for the second % moment by this factor we would resolve the disagreement between the theoretical % and experimental noise. % Following a similar logic for the third moment, we showed in section % \siref{supp_mult_factor_skew} that the skewness $S$ is also off by a factor of % two, i.e. % \begin{equation} % S_E = 2 S_T. % \end{equation} % When we substitute the definition of the skewness and use the correction factor % we found for the second moment as well, the algebra works out to a correction % for the third moment $\ee{p^3}_E$ of the form % \begin{equation} % \ee{p^3}_E = 16 \ee{p^3}_T - 36\ee{p}_T \sigma^2_T - 15\ee{p}^3_T, % \end{equation} % where $\sigma^2_T \equiv \ee{p^2}_T - \ee{p}_T^2$. % \fref{sfig_cdf_reg_corr} shows the comparison between the experimental % cumulative distributions and the maximum entropy distributions determined using % the first three moments of the protein distribution with the correction % factors. 
We can see that the agreement between theory and data is enhanced % upon applying these corrections. What the origin of these deviation is remains % unclear and will be subject to future investigation. % \begin{figure}[h!] % \centering \includegraphics % {../fig/si/figS24.pdf} % \caption{\textbf{Experiment vs. theory comparison for regulated promoters % with correction factors for moments.} Example fold-change empirical % cumulative distribution functions (ECDF) for regulated strains with the three % operators (different colors) as a function of repressor copy numbers (rows) % and inducer concentrations (columns). The color curves represent single-cell % microscopy measurements while the dashed black lines represent the % theoretical distributions as reconstructed by the maximum entropy principle. % These distributions in particular differ from \fref{sfig_cdf_reg} in that the % moments used to reconstruct the distributions were corrected to match the % experimentally determined noise and skewness.} % \label{sfig_cdf_reg_corr} % \end{figure}
%% This is file `elsarticle-template-1-num.tex', %% %% Copyright 2009 Elsevier Ltd %% %% This file is part of the 'Elsarticle Bundle'. %% --------------------------------------------- %% %% It may be distributed under the conditions of the LaTeX Project Public %% License, either version 1.2 of this license or (at your option) any %% later version. The latest version of this license is in %% http://www.latex-project.org/lppl.txt %% and version 1.2 or later is part of all distributions of LaTeX %% version 1999/12/01 or later. %% %% The list of all files belonging to the 'Elsarticle Bundle' is %% given in the file `manifest.txt'. %% %% Template article for Elsevier's document class `elsarticle' %% with numbered style bibliographic references %% %% $Id: elsarticle-template-1-num.tex 149 2009-10-08 05:01:15Z rishi $ %% $URL: http://lenova.river-valley.com/svn/elsbst/trunk/elsarticle-template-1-num.tex $ %% \documentclass[preprint,12pt]{elsarticle} %% Use the option review to obtain double line spacing %% \documentclass[preprint,review,12pt]{elsarticle} %% Use the options 1p,twocolumn; 3p; 3p,twocolumn; 5p; or 5p,twocolumn %% for a journal layout: %% \documentclass[final,1p,times]{elsarticle} %% \documentclass[final,1p,times,twocolumn]{elsarticle} %% \documentclass[final,3p,times]{elsarticle} %% \documentclass[final,3p,times,twocolumn]{elsarticle} %% \documentclass[final,5p,times]{elsarticle} %% \documentclass[final,5p,times,twocolumn]{elsarticle} %% if you use PostScript figures in your article %% use the graphics package for simple commands %% \usepackage{graphics} %% or use the graphicx package for more complicated commands %% \usepackage{graphicx} %% or use the epsfig package if you prefer to use the old commands %% \usepackage{epsfig} %% The amssymb package provides various useful mathematical symbols \usepackage{amssymb} \usepackage{booktabs} \usepackage{placeins} %% The amsthm package provides extended theorem environments %% \usepackage{amsthm} %% The lineno 
packages adds line numbers. Start line numbering with %% \begin{linenumbers}, end it with \end{linenumbers}. Or switch it on %% for the whole article with \linenumbers after \end{frontmatter}. %% \usepackage{lineno} %% natbib.sty is loaded by default. However, natbib options can be %% provided with \biboptions{...} command. Following options are %% valid: %% round - round parentheses are used (default) %% square - square brackets are used [option] %% curly - curly braces are used {option} %% angle - angle brackets are used <option> %% semicolon - multiple citations separated by semi-colon %% colon - same as semicolon, an earlier confusion %% comma - separated by comma %% numbers- selects numerical citations %% super - numerical citations as superscripts %% sort - sorts multiple citations according to order in ref. list %% sort&compress - like sort, but also compresses numerical citations %% compress - compresses without sorting %% %% \biboptions{comma,round} % \biboptions{} \journal{Pattern Recognition Letters} \begin{document} \begin{frontmatter} %% Title, authors and addresses %% use the tnoteref command within \title for footnotes; %% use the tnotetext command for the associated footnote; %% use the fnref command within \author or \address for footnotes; %% use the fntext command for the associated footnote; %% use the corref command within \author for corresponding author footnotes; %% use the cortext command for the associated footnote; %% use the ead command for the email address, %% and the form \ead[url] for the home page: %% %% \title{Title\tnoteref{label1}} %% \tnotetext[label1]{} %% \author{Name\corref{cor1}\fnref{label2}} %% \ead{email address} %% \ead[url]{home page} %% \fntext[label2]{} %% \cortext[cor1]{} %% \address{Address\fnref{label3}} %% \fntext[label3]{} \title{Learning Word Representations from \\a Large-Scale Unified Lexical Semantic Resource} %% use optional labels to link authors
explicitly to addresses: %% \author[label1,label2]{<author name>} %% \address[label1]{<address>} %% \address[label2]{<address>} \author{} \address{} \begin{abstract} %% Text of abstract Learning word representations and inducing word features have been shown to improve performance in various NLP tasks such as Word Sense Disambiguation, Named Entity Recognition, Parsing,\ldots In this paper, we investigate the effectiveness of features learned for words from structured knowledge bases, with a focus on a method proposed by Bordes et al. We extend their idea by incorporating multiple resources from different languages (English and German) and also different types of resources (WordNet, FrameNet). We have evaluated both monolingual (Bordes embeddings) and bilingual embeddings (our embeddings) on four different gold datasets for the word-pair similarity task and show that bilingual embeddings perform similarly to or better than monolingual embeddings. \end{abstract} \begin{keyword} %% keywords here, in the form: keyword \sep keyword Representation Learning, Word Embeddings, Machine Learning, Semantics %% MSC codes here, in the form: \MSC code \sep code %% or \MSC[2008] code \sep code (2000 is the default) \end{keyword} \end{frontmatter} %% %% Start line numbering here if you want %% % \linenumbers \section{Introduction} \label{sec:intro} In a large number of machine learning methods and their applications to natural language processing, most of the labor is dedicated to \emph{Feature Engineering}. Extracting informative features is the crucial part of most supervised methods, and it is done mostly manually. While many different applications share common learning models and classifiers, the difference in performance between competing methods is mostly due to the data representation and hand-crafted features that they use. This observation reveals an important weakness in current models, namely their inability to extract and organize discriminative features from data.
\emph{Representation learning} is an umbrella term for a family of unsupervised methods to learn features from data. Most recent work on the application of this idea in NLP focuses on inducing word representations. A \emph{word representation} or \emph{word embedding} ``is a mathematical object, usually a vector, in which each dimension represents a grammatical or semantic feature to identify this word, and which is induced automatically from data'' \cite{Turian2010b}. Recently, it has been shown in \cite{Turian2010b} and \cite{Collobert2011} that using induced word representations can help improve state-of-the-art methods in various NLP tasks. In Section \ref{sec:rel-work}, some of these methods are discussed in more detail. From recent work, we observe that most current methods for inducing word representations can only exploit surface relations among words. Indeed, the only signal available to them for capturing semantic and grammatical aspects of words is their co-occurrence in text. The word embeddings learned in neural language models (\cite{Collobert2008a} and \cite{Bengio2003}) and Brown clustering are examples of such an approach. In contrast to these methods, Bordes et al. \cite{Bordes2011} proposed a method to learn distributed representations from relational datasets with richer information. In their work, they attempt to induce word embeddings from knowledge bases such as WordNet and Freebase. Their datasets consist of binary relations between a left entity and a right entity, where each relation is an instance of a specific relation type. Since we follow their methodology, a detailed description of their work is presented in Section~\ref{rel-work:structured-embedding}. After reviewing previous related work, we will demonstrate our contribution for inducing word embeddings from multiple lexical resources and show its effectiveness for inducing bilingual word embeddings and transferring information from one language to another.
The pipeline of our system for combining different lexical resources, which captures broader grammatical and semantic features in our word embeddings than previous work, will be described in detail. ??? Uby, a unified lexical resource which plays a central role in our system, will be reviewed shortly ??? Finally, we will evaluate our word embeddings empirically in different settings as a proof of concept to show the role of representation learning jointly from multiple lexical resources. We will also zoom in on our learned embeddings for the special case of English-German to inspect the strength of bilingual word embeddings. (??? Parsing with Compositional Vector Grammars Socher et al. ACL 2013, . Improving Word Representations via Global Context and Multiple Word Prototypes Huang 2012 ???) %% main text \section{Related Work} \label{sec:rel-work} \subsection{Distributional Representation} \label{subsec:distl-repr} In distributional semantics, the meaning of a word is expressed by the context it appears in \cite{Harris1981}. The features used to represent the meaning of a word are the other words in its neighborhood, the so-called context. In some approaches like LDA and latent semantic analysis (LSA), the context is defined at the scope of a document rather than a window around a word. To represent word meanings via the distributional approach, one starts from a count matrix (or a zero-one co-occurrence matrix) in which each row represents a word and each column a context. The representation can be the raw matrix itself, or transforms like \emph{tf-idf} can be applied first. A further analysis over this matrix, to extract more meaningful features, is to apply dimensionality reduction methods or clustering models to induce latent distributional representations.
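The count-matrix pipeline just described (raw counts, a tf-idf transform, then dimensionality reduction) can be sketched on a toy vocabulary. The words, contexts, and counts below are invented for illustration only:

```python
import numpy as np

# Toy word-by-context count matrix (rows: words, columns: context words).
counts = np.array([
    [4, 0, 1],   # "cat"
    [3, 0, 2],   # "dog"
    [0, 5, 1],   # "bond"
], dtype=float)

# tf-idf: damp raw counts, boost contexts shared by few words.
tf = counts / counts.sum(axis=1, keepdims=True)
idf = np.log(counts.shape[0] / (counts > 0).sum(axis=0))
tfidf = tf * idf

# Truncated SVD yields dense, low-dimensional "latent" word vectors.
U, S, Vt = np.linalg.svd(tfidf, full_matrices=False)
word_vecs = U[:, :2] * S[:2]   # keep the top-2 latent dimensions

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Words with similar contexts ("cat", "dog") end up close together,
# while "bond" stays far from both.
print(cos(word_vecs[0], word_vecs[1]), cos(word_vecs[0], word_vecs[2]))
```

On this toy matrix the first printed similarity is near 1 and the second near 0, mirroring the intuition that distributionally similar words share latent features.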
A clustering method similar to k-means is used in \cite{Lin2009} to represent phrase and word meanings, and the Brown clustering algorithm \cite{Brown1992} has been shown to have an impact on near-state-of-the-art NLP tasks \cite{Turian2010b}. \subsection{Distributed Representation} \label{rel-work:disted-repr} Distributed representations were first introduced to the literature in \cite{Bengio2003}, where Bengio et al. presented the first language model based on deep learning methods \cite{Bengio2009b}. Deep learning is learning through several layers of neural networks, in which each layer is responsible for learning a different concept and each concept is built on top of other, more abstract concepts. In the deep learning community, any word representation induced with a neural network is called a \emph{word embedding}. In contrast to the raw count matrices of distributional representations, word embeddings are low-dimensional, dense, real-valued vectors. The term \textbf{`distributed'} in this context refers to the fact that an exponential number of objects (clusters) can be modeled by word embeddings. Here we review two well-known families of models for inducing such representations: one uses n-grams to learn word representations jointly with a language model, and the other learns embeddings from structured resources. (Cross-lingual Word Clusters for Direct Transfer of Linguistic Structure should be mentioned ???) \subsection{Neural Language Models} \label{rel-work:lang-model} In \cite{Collobert2008a}, Weston and Collobert use a non-probabilistic, discriminative model to jointly learn word embeddings and a language model that can separate plausible n-grams from noisy ones. For each n-gram, they combine the embeddings of its words and use the result as a positive example. They inject noise into the n-gram to create negative examples, and then train a neural network to learn to classify positive examples from negative ones.
The parameters of the neural network (neural language model) and the word embedding values are learned jointly by an optimization method called \emph{Stochastic Gradient Descent} \cite{Bottou2010}. A hierarchical distributed language model (HLBL) proposed by Mnih and Hinton in \cite{Mnih2009} is another influential work on word embeddings. In this model a probabilistic linear neural network (LBL) is trained to combine the word embeddings of the first $n-1$ words of an n-gram to predict the $n$-th word. The Weston-Collobert model and the HLBL model of Mnih and Hinton are evaluated in \cite{Turian2010b} on two NLP tasks: chunking and named entity recognition. Using word embeddings from these models combined with hand-crafted features is shown to improve the performance on both tasks. \subsection{Representation Learning from Knowledge Bases} \label{rel-work:structured-embedding} ???(should be expanded with mathematical notation and better description of their models and experiments)??? Bordes et al. in \cite{Bordes2011} and \cite{Bordes2012} have attempted to use deep learning to induce word representations from lexical resources such as WordNet and knowledge bases (KB) like Freebase. In Freebase, for example, each named entity is related to another entity by an instance of a specific type of relation. In \cite{Bordes2011}, each entity is represented as a vector and each relation is decomposed into two matrices. These matrices transform the left- and right-hand-side entities into a semantic space; similarity of the transformed entities indicates that the relation holds between them. A prediction task is defined to evaluate the embeddings: given a relation and one of the entities, the task is to predict the missing entity. The high accuracy (99.2\%) of the model in predicting the training data shows that the learned representations capture the attributes of the entities and relations in Freebase to a high degree.
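A minimal sketch of the scoring and ranking machinery of this structured-embeddings approach: each relation type is modeled by a pair of matrices transforming the left and right entity vectors, and a triplet is scored by the 1-norm distance between the transformed entities (lower is better). The entity counts, dimensions, and random parameters below are placeholders, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

n_entities, dim, n_relations = 20, 10, 5        # hypothetical sizes
E = rng.normal(size=(n_entities, dim))          # entity embeddings
R_left = rng.normal(size=(n_relations, dim, dim))   # left transform per relation
R_right = rng.normal(size=(n_relations, dim, dim))  # right transform per relation

def score(l, rel, r):
    """1-norm distance between transformed left/right entities; lower is better."""
    return np.abs(R_left[rel] @ E[l] - R_right[rel] @ E[r]).sum()

def rank_left(rel, r, true_l):
    """Rank of the true left entity among all candidates (1 = best)."""
    scores = np.array([score(l, rel, r) for l in range(n_entities)])
    return int((scores < scores[true_l]).sum()) + 1

# With random parameters the rank is arbitrary; training would push it toward 1.
print(rank_left(rel=0, r=3, true_l=7))
```

The evaluation protocol described later (remove $e_l$, rank all candidate entities, report the rank of the true entity) is exactly `rank_left` applied over the test triplets.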
\section {Our contribution}
\label{sec:contr}
\subsection{Uby}
\label{contr:uby}
\subsection{Bilingual word embeddings}
\label{contr:bilingual}
???(transfer learning and multi task learning should be mentioned from Caruana, R. (1997). Multitask Learning. Machine Learning, 28, 41--75. Chapelle,)???
As described in the previous section, we can relate two senses from two different resources using Uby SenseAxis alignments. This additional information can act as a bridge between two different datasets, transferring knowledge from one to the other. Using this new feature we build our WordNet-GermaNet dataset, which contains three types of relations: (1) WordNet relations, (2) GermaNet relations, and (3) cross-lingual sense relations between WordNet and GermaNet.\\ Example of relations:
\begin{center} WN-1 \hspace{0.5in} rel1 \hspace{0.5in} WN-2\\ GN-1 \hspace{0.5in} rel3 \hspace{0.5in} GN-2\\ WN-1 \hspace{0.5in} c-rel \hspace{0.5in} GN-2\\ \end{center}
We have also created another version of this dataset with a different granularity, in which similar inter-lingual relations are mapped to the same relation. This helps to speed up the learning phase while keeping roughly the same performance. Since cross-lingual sense alignments express a near-synonymy relation between two senses, and the Bordes model is sensitive to the direction of relations, we have also added the reverse sense alignments to encode the bidirectional nature of this type of relation. WordNet and GermaNet express similar knowledge but in different languages, so it is worthwhile to examine learning word embeddings from two different knowledge bases which capture different semantic aspects of words and their senses. Therefore, using Uby and the method described in [CM FN-Wkt], we inferred relations between WordNet and FrameNet. FrameNet is blah blah..
In the next section, we describe the different settings used to analyze the performance of the embeddings learned from our new datasets.
\section{Empirical Evaluation}
\label{sec:exp}
To show the effectiveness of jointly learning features from multiple knowledge bases, we propose two experimental setups. In the first, we follow the ranking task of Bordes et al. The goal of this task is to show how well the structure of the knowledge bases is represented by the learned features: after learning word embeddings from a subset(??) of Uby(??), their ability to reproduce its structure is assessed. The second setup investigates whether word embeddings learned from multiple resources can improve on the original Bordes model in a standard NLP task, here word-pair similarity. In this setup we examine the contribution of the learned features to predicting the similarity of words.
\subsection{Intrinsic Evaluation}
\label{exp:rank}
Bordes et al. define a ranking task where, for each triplet $(e_{l}, r, e_{r})$ in the training and test sets, $e_{l}$ is removed and all entities are ranked using the 1-norm rank function (equation ??? decomposing equation). A higher rank of $e_{l}$ (a lower number) reflects a better quality of the learned representations. Additionally, they compare this result to another ranking scheme based on density estimation. In this scheme, for each word embedding $e$ the density of $(e, r, e_{r})$ is computed (as described in our section ???) and the triplets are sorted by their estimated probability (probability terms ??). Since we use larger sets of triplets, instead of ranking all training instances we randomly sample 20\% of each training dataset(??) and then test our models on these sampled training instances and on all instances of the test set. Bordes et al.
have followed a similar approach for ranking their embeddings on their biggest dataset. We re-run their related experiments to make the comparison to our embeddings meaningful. Table (??) shows the results. \FloatBarrier \begin{table}[ht] \caption{Ranking Performance for Non-mapped Relations } % title name of the table \centering % centering table \begin{tabular}{l c c c c c c} % creating 10 columns \hline\hline % inserting double-line Dataset & \#dimension & \#relations & \#entities & & Micro & Macro \\ [0.5ex] \hline % inserts single-line & & & & lhs & 82.08 & 73.11 \\[-1ex] & & & & rhs & 81.22 & 72.36 \\[-1ex] \raisebox{1.5ex}{GermaNet} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{16}& \raisebox{0.5ex}{64025}&global & 81.65 & 72.74 \\[1ex] & & & & lhs & 81.76 & 85.79 \\[-1ex] & & & & rhs & 81.96 & 85.49 \\[-1ex] \raisebox{1.5ex}{WordNet} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{23}& \raisebox{0.5ex}{148976}& global & 81.86 & 85.63 \\[1ex] & & & & lhs & 82.50 & 85.09 \\[-1ex] & & & & rhs & 83.16 & 84.46 \\[-1ex] \raisebox{1.5ex}{WordNet-GermaNet (WN)} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{32}& \raisebox{0.5ex}{213002}& global & 82.83 & 84.78 \\[1ex] & & & & lhs & 72.12 & 63.63 \\[-1ex] & & & & rhs & 67.78 & 65.77 \\[-1ex] \raisebox{1.5ex}{WordNet-GermaNet (GN)} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{32}& \raisebox{0.5ex}{213002}& global & 69.95 & 64.70 \\[1ex] & & & & lhs & 1 & 1 \\[-1ex] & & & & rhs & 1 & 1 \\[-1ex] \raisebox{1.5ex}{WordNet-FrameNet} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{25}& \raisebox{0.5ex}{25}& global & 1 & 1 \\[1ex] % [1ex] adds vertical space \hline % inserts single-line \end{tabular} \label{tab:PPer} \end{table} \begin{table}[ht] \caption{Ranking Performance for Mapped Relations } % title name of the table \centering % centering table \begin{tabular}{l c c c c c c} % creating 10 columns \hline\hline % inserting double-line Dataset & \#dimension & \#relations & \#entities & & Micro(\%) & Macro(\%) \\ [0.5ex] \hline % inserts single-line 
& & & & lhs & 82.60 & 68.18 \\[-1ex] & & & & rhs & 81.90 & 68.84 \\[-1ex] \raisebox{1.5ex}{GermaNet} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{10}& \raisebox{0.5ex}{64025}&global & 82.25 & 68.51 \\[1ex] & & & & lhs & 83.50 & 83.17 \\[-1ex] & & & & rhs & 84.22 & 83.64 \\[-1ex] \raisebox{1.5ex}{WordNet} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{19}& \raisebox{0.5ex}{148976}& global & 83.86 & 83.40 \\[1ex] & & & & lhs & 78.70 & 82.60 \\[-1ex] & & & & rhs & 79.56 & 83.06 \\[-1ex] \raisebox{1.5ex}{WordNet-GermaNet (WN)} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{24}& \raisebox{0.5ex}{213002}& global & 79.13 & 82.83 \\[1ex] & & & & lhs & 69.66 & 59.54 \\[-1ex] & & & & rhs & 66.60 & 58.95 \\[-1ex] \raisebox{1.5ex}{WordNet-GermaNet (GN)} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{24}& \raisebox{0.5ex}{213002}& global & 68.13 & 59.25 \\[1ex] & & & & lhs & 1 & 1 \\[-1ex] & & & & rhs & 1 & 1 \\[-1ex] \raisebox{1.5ex}{WordNet-FrameNet} & \raisebox{0.5ex}{25}& \raisebox{0.5ex}{25}& \raisebox{0.5ex}{25}& global & 1 & 1 \\[1ex] % [1ex] adds vertical space
\hline % inserts single-line
\end{tabular}
\label{tab:PPerMapped}
\end{table}
We repeat the ranking evaluation with two different embeddings: (1) learned from GermaNet, and (2) jointly learned from GermaNet-WordNet. The intrinsic evaluation used here cannot directly compare the effectiveness of these two embeddings, since it only reflects the difficulty level of a structure. Table (??) presents the comparison of the ranking tasks for monolingual and bilingual word embeddings.
\FloatBarrier
\subsection{Extrinsic Evaluation}
\label{exp:word-similarity}
We are interested in further analyzing the role of multi-task learning of embeddings in transferring knowledge from one resource to another.
In order to examine whether semantic information from English (WordNet) can be transferred to German (GermaNet) or vice versa, we compare the embeddings learned from multiple resources to the embeddings learned from a single resource in word-pair similarity experiments. Four word-pair similarity datasets are used to compare the correlation of the predicted similarity of word pairs against human judgments. [rubensteinGoodenough], [yangPowers], [millerCharles] and [finkelstein] are the datasets we used to measure the correlation with human judgments of the similarities predicted by the original Bordes model (single resource) and by our proposed model (multiple resources). To measure the similarity of a given word pair $(w_1, w_2)$, we find all vectors associated with the different senses of the given words in our embedding dictionary and take the maximum cosine similarity between any two such vectors. Then, for each dataset, both the Pearson and the Spearman correlation between the predicted and gold similarities were calculated, as reported in table \ref{tab:en-wp-sim}.
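The max-over-senses similarity just described can be sketched in a few lines of Python. The dictionary format and the toy 3-dimensional sense vectors below are assumptions for illustration, not our actual embedding files.

```python
import numpy as np

def pair_similarity(word1, word2, sense_vectors):
    """Similarity of a word pair as the maximum cosine similarity over all
    (sense-of-word1, sense-of-word2) embedding combinations.
    `sense_vectors` maps a word to the list of its sense embeddings
    (hypothetical dictionary format)."""
    best = -1.0
    for u in sense_vectors[word1]:
        for v in sense_vectors[word2]:
            cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            best = max(best, cos)
    return best

# toy example with made-up sense vectors
senses = {
    "car":  [np.array([1.0, 0.0, 0.0]), np.array([0.5, 0.5, 0.0])],
    "auto": [np.array([0.9, 0.1, 0.0])],
}
sim = pair_similarity("car", "auto", senses)
```

Taking the maximum over sense pairs means a single well-matching sense pair determines the score, which is the standard choice when word-similarity datasets do not disambiguate senses.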
\begin{table}[ht]
\caption{Word-pair Similarity Performance for English } % title name of the table
\centering % centering table
\begin{tabular}{cr c c c c c} % creating 10 columns
\hline\hline % inserting double-line
Dataset & & WN-SE50 & WN-GN-SE50 & WN-SME-BIL50 & WN-GN-SME-BIL50 \\ [0.5ex]
\hline % inserts single-line
& Pearson & 0.488 & 0.571 & 00 & 00 \\[-1ex] \raisebox{1.5ex}{RubensteinGoodenough65} & Spearman & 0.426 & 0.528 & 00 & 00 \\[1ex] & Pearson & 0.454 & 0.438 & 00 & 00 \\[-1ex] \raisebox{1.5ex}{MillerCharles30} & Spearman & 0.40 & 0.34 & 00 & 00 \\[1ex] & Pearson & 0.194 & 0.177 & 00 & 00 \\[-1ex] \raisebox{1.5ex}{Finkelstein353} & Spearman & 0.137 & 0.128 & 00 & 00 \\[1ex] & Pearson & 0.634 & 0.771 & 00 & 00 \\[-1ex] \raisebox{1.5ex}{YangPowers130} & Spearman & 0.598 & 0.770 & 00 & 00 \\[1ex]
\hline % inserts single-line
\hline % inserts single-line
\end{tabular}
\label{tab:en-wp-sim}
\end{table}
\FloatBarrier
As we see in table \ref{tab:en-wp-sim}, on two datasets the performance of the embeddings learned from bilingual resources is slightly worse than, but comparable to, that of the monolingual embeddings, while on the other two datasets one can observe a significant performance increase of bilingual over monolingual resources.
\section{Conclusion and Future Work}
\label{sec:conc}
%% The Appendices part is started with the command \appendix;
%% appendix sections are then done as normal sections
%% \appendix
%% \section{}
%% \label{}
%% References
%%
%% Following citation commands can be used in the body text:
%% Usage of \cite is as follows:
%%   \cite{key}          ==>>  [#]
%%   \cite[chap. 2]{key} ==>>  [#, chap. 2]
%%   \citet{key}         ==>>  Author [#]
%% References with bibTeX database:
\bibliographystyle{model1-num-names}
\bibliography{mendely.bib}
%% Authors are advised to submit their bibtex database files. They are
%% requested to list a bibtex style file in the manuscript if they do
%% not want to use model1-num-names.bst.
%% References without bibTeX database: % \begin{thebibliography}{00} %% \bibitem must have the following form: %% \bibitem{key}... %% % \bibitem{} % \end{thebibliography} \end{document} %% %% End of file `elsarticle-template-1-num.tex'.
% !TeX root = 00Book.tex
\subsection{August: Honey Crop and Varroa Treatment}
\subsubsection{Honey}
Take off the honey before the varroa treatment. A Canadian rhombus clearer board takes 2 days to clear of bees. Dry the honey in the airing cupboard.
\subsubsection{Varroa Treatment}
There is no point in judging the number of varroa. Just treat. Apiguard or MAQS. On or before 12 kg to ensure that the temperature is h
\subsubsection{Feeding}
It is generally considered that a honey bee colony requires about 20--30 kg of honey to safely feed it through the winter. Feed 14 kg per hive, which comes to about 16 kg once stored in the hive. A brood frame can carry about 2.2 kg of honey, so the carrying capacity of 16 frames is about 35 kg. The brood will take up space, so for this reason you need at least 16 frames to go through winter. A single brood is too small but a double brood is too much. Brood and a half is about right, but it is too compact and you have a mix of frame sizes. So a total of 42 kg of sugar is required for all three hives. If you can get it at 50p per kg this is only £21, so it is hardly worth bothering to heft the hives or weigh them. Just feed it all, or until they stop.
\subsubsection{Swarming}
If it happens it is too late, therefore:
\begin{description}
\item [Queen to one side] Split
\item [Queen excluder underneath] to prevent swarming (remove after a week)
\item [Unite] again; don't try to produce a new queen.
\end{description}
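The feeding arithmetic above can be checked in a few lines. All quantities are taken from the notes themselves (14 kg of sugar per hive, three hives, 2.2 kg of honey per brood frame); the 50p/kg sugar price is the notes' own assumption.

```python
# Figures from the notes: 14 kg of sugar fed per hive, three hives,
# sugar assumed at 50p (GBP 0.50) per kg.
kg_per_hive = 14
hives = 3
price_per_kg = 0.50  # GBP

total_kg = kg_per_hive * hives        # total sugar needed: 42 kg
total_cost = total_kg * price_per_kg  # GBP 21

# Carrying capacity: a brood frame holds about 2.2 kg of honey,
# so 16 frames hold roughly 35 kg.
frames = 16
capacity_kg = frames * 2.2
```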
\section{Related Work}
\label{sec:related-work}
\begin{figure}[t]
\centering
\vspace*{-0.3cm}
\hspace*{-0.4cm}
\begin{minipage}[t]{0.3\textwidth} \includegraphics[width=0.9\textwidth]{fig_main_illustration3} \end{minipage}
\begin{minipage}[t]{0.12\textwidth} \includegraphics[width=1.2\textwidth]{fig_main_illustration2} \end{minipage}
\vspace*{-10px}
\caption{\textbf{Measuring Flatness.} \textbf{Left:} Illustration of measuring flatness in a random (\ie, average-case, {\color{colorbrewer2}blue}) direction by computing the difference between \RCE $\tilde{\mathcal{L}}$ \emph{after} perturbing weights (\ie, $w + \nu$) and the ``reference'' \RCE $\mathcal{L}$ given a local neighborhood $B_\xi(w)$ around the found weights $w$, see \secref{subsec:main-flatness}. In practice, we average across/take the worst of several random/adversarial directions. \textbf{Right:} Large changes in \RCE around the ``sharp'' minimum cause poor generalization from training ({\color{colorbrewer0}black}) to test examples ({\color{colorbrewer1}red}). }
\label{fig:main-illustration}
\vspace*{-6px}
\end{figure}
\textbf{Adversarial Training (AT):} Despite a vast amount of work on adversarial robustness, \eg, see \cite{SilvaARXIV2020,YuanARXIV2017,AkhtarACCESS2018,BiggioCCS2018,XuARXIV2019}, adversarial training (AT) has become the de-facto standard for (empirical) robustness.
Originally proposed in different variants in \cite{SzegedyICLR2014,MiyatoICLR2016,HuangARXIV2015}, it received considerable attention in \cite{MadryICLR2018,robustness} and has been extended in various ways: \cite{LambAISEC2019,CarmonNIPS2019,UesatoNIPS2019} utilize interpolated or unlabeled examples, \cite{TramerNIPS2019,MainiICML2020} achieve robustness against multiple threat models, \cite{StutzICML2020,LaidlawARXIV2019,WuICML2018} augment AT with a reject option, \cite{YeNIPS2018,LiuICLR2019b} use Bayesian networks, \cite{TramerICLR2018,GrefenstetteARXIV2018} build ensembles, \cite{BalajiARXIV2019,DingICLR2020} adapt the threat model for each example, \cite{Wong2020ICLR,AndriushchenkoNIPS2020,VivekCVPR2020} perform AT with single-step attacks, \cite{HendrycksNIPS2019} uses self-supervision and \cite{PangNIPS2020} additionally regularizes features -- to name a few directions. However, AT is slow \cite{ZhangNIPS2020} and suffers from increased sample complexity \cite{SchmidtNIPS2018} as well as reduced (clean) accuracy \cite{TsiprasICLR2019,StutzCVPR2019,ZhangICML2019,RaghunathanARXIV2019}. Furthermore, progress is slowing down. In fact, ``standard'' AT is shown to perform surprisingly well on recent benchmarks \cite{CroceICML2020,CroceARXIV2020b} when tuning hyper-parameters properly \cite{PangARXIV2020b,GowalARXIV2020}. In our experiments, we consider several popular variants \cite{WuNIPS2020,WangICLR2020,ZhangICML2019,CarmonNIPS2019,HendrycksNIPS2019}. \textbf{Robust Overfitting:} Recently, \cite{RiceICML2020} identified \emph{robust} overfitting as a crucial problem in AT and proposed early stopping as an effective mitigation strategy. This motivated work \cite{SinglaARXIV2021,WuNIPS2020} trying to mitigate robust overfitting. 
While \cite{SinglaARXIV2021} studies the use of different activation functions, \cite{WuNIPS2020} proposes AT with \emph{adversarial weight perturbations} (AT-AWP) explicitly aimed at finding flatter minima in order to reduce overfitting. While the results are promising, early stopping is still necessary. Furthermore, flatness is merely assessed visually, leaving open whether AT-AWP \emph{actually} improves flatness in adversarial weight directions. We consider both average- and worst-case flatness, \ie, random and adversarial weight perturbations, to answer this question. \textbf{Flat Minima} in the loss landscape, \wrt changes in the weights, are generally assumed to improve \emph{standard} generalization \cite{HochreiterNC1997}. \cite{LiNIPS2018} shows that residual connections in ResNets \cite{HeCVPR2016} or weight decay lead to \emph{visually} flatter minima. \cite{NeyshaburNIPS2017,KeskarICLR2017} formalize this concept of flatness in terms of \emph{average-case} and \emph{worst-case} flatness. \cite{KeskarICLR2017,JiangICLR2020} show that worst-case flatness correlates well with better generalization, \eg, for small batch sizes, while \cite{NeyshaburNIPS2017} argues that generalization can be explained using both an average-case flatness measure and an appropriate capacity measure. Similarly, batch normalization is argued to improve generalization by allowing to find flatter minima \cite{SanturkarNIPS2018,BjorckNIPS1018}. These insights have been used to explicitly regularize flatness \cite{ZhengARXIV2020c}, improve semi-supervised learning \cite{CicekICCVWOR2019} and develop novel optimization algorithms such as Entropy-SGD \cite{ChaudhariICLR2017}, local SGD \cite{TinICLR2020} or weight averaging \cite{IzmailovUAI2018}. \cite{DinhICML2017}, in contrast, criticizes some of these flatness measures as not being scale-invariant. 
We transfer the intuition of flatness to the \emph{robust} loss landscape, showing that flatness is desirable for adversarial robustness, while using scale-invariant measures.
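As a rough illustration of the flatness measures discussed above, the following numpy sketch perturbs weights in random directions of fixed norm and records the change in loss: the mean change corresponds to average-case flatness, the largest change to a crude, sampled stand-in for the worst case (the paper's actual worst-case measure uses \emph{adversarial} weight perturbations). The quadratic toy loss is an assumption so the sketch stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Stand-in for the robust cross-entropy loss (RCE); a toy quadratic
    # so that the sketch is self-contained and deterministic.
    return float(np.sum(w ** 2))

def flatness(w, xi=0.5, n_dirs=10):
    """Change in loss after perturbing the weights to the boundary of a
    neighborhood B_xi(w), in several random directions. Returns the mean
    change (average-case flatness) and the max change (a sampled proxy
    for worst-case flatness)."""
    ref = loss(w)
    diffs = []
    for _ in range(n_dirs):
        nu = rng.normal(size=w.shape)
        nu *= xi / np.linalg.norm(nu)  # step of norm xi in a random direction
        diffs.append(loss(w + nu) - ref)
    return float(np.mean(diffs)), float(np.max(diffs))

w = np.zeros(5)                 # a (toy) minimum of the quadratic loss
avg_flat, worst_flat = flatness(w)
```

For the quadratic loss at its minimum, every direction of norm $\xi$ raises the loss by exactly $\xi^2$, so both measures equal $0.25$ here; for a real network, the spread between them is what distinguishes flat from sharp minima.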
\chapter{Mining Transdiagnostic Symptoms in Social Media Data}
\section{Abstract}
Mining social media data to predict mental health conditions and psychological traits has increasingly attracted attention in the clinical psychology domain. Instead of predicting specific diagnostic criteria, we adopt a transdiagnostic approach by investigating common symptoms that predispose an individual to a variety of mental disorders. Treatments that target these factors are called transdiagnostic treatments, and they have been widely employed to tackle anxiety and depressive disorders. We leverage Facebook data from 77 users who participated in the myPersonality project back in 2011. We label negative emotion and two transdiagnostic components, reasoning bias (cognitive distortion) and negative thinking, in more than 4000 Facebook posts. Our study includes how transdiagnostic symptoms manifest in users with different characteristics. We find that marital status and a user's parental relationship are protective factors. Finally, we also identify language features that predict transdiagnostic symptoms.
\section{Introduction}
In recent years, a surging number of studies have attempted to use social media data to predict psychopathology diagnoses. Various attempts at predicting depression have achieved good performance \cite{munmun13, Aldarwish17, Hu17, Coppersmith15}. What makes these prediction tasks challenging at the moment is that comorbidity is very common in psychopathology. According to the literature, about 60\%--70\% of individuals diagnosed with an anxiety disorder also meet some of the criteria of depressive and affective disorders \cite{Timothy95}. The traditional conceptual approach to understanding a psychological disorder is to provide a diagnosis of a specific disorder.
However, there is increasing recognition that criteria-based diagnoses are of less value because many disorders frequently co-occur and share a number of vulnerability factors, a phenomenon called comorbidity \cite{Kessler98,Hirschfeld99}. In light of this challenge, psychologists have been shifting towards a transdiagnostic approach in recent years. Instead of giving multiple diagnoses to a patient with comorbidity, the transdiagnostic approach focuses on the common psychological processes underlying the syndromes, which provides a better explanation for the high rate of comorbidity observed in clinical practice \cite{Harvey04}. Treatments and preventative interventions targeting transdiagnostic symptoms have been found to be effective for anxiety and depressive disorders \cite{Collins09,Norton04}, eating disorders \cite{Fairburn03} and social phobia \cite{Freda00}, among others. In support of transdiagnostic theory, the National Institute of Mental Health (NIMH) created the RDoC, which conceptualizes five systems that underlie psychopathology: negative valence systems, positive valence systems, cognitive systems, systems for social processes and arousal/regulatory systems (Insel et al. 2010). The process of producing language and the way language is used are important aspects for identifying psychopathology (Pennebaker). The cognitive system captures language as a construct (Maria's paper); however, at present, analyzing language within the transdiagnostic framework has not been incorporated into the analysis of social media data. In this paper, we examine how transdiagnostic symptoms identified from social media text contribute to depression, influence satisfaction with life and relate to personality. Meanwhile, we also look at user characteristics that might contribute to some of the transdiagnostic symptoms.
% Head 1
\subsection{Investigating Transdiagnostic Symptoms on Social Media}
We explore some of the transdiagnostic symptoms in social media posts.
Social media users' threads and updates often reveal their opinions, emotions and daily life activities. Data capturing this information provide researchers with a platform to study users' behaviors and psychological traits \cite{Kosinski13, Lushi}. Motivated by the fact that most studies of whether social media behaviors reflect mental health symptoms focus on specific diagnostic criteria \cite{munmun13, Aldarwish17, Hu17, Coppersmith15} and do not consider the high comorbidity rate of many disorders, we look at whether social media posts capture some of the transdiagnostic symptoms. The components of transdiagnostic treatment and research include attention, memory, reasoning, thought and behaviour. Reasoning refers to thinking that involves deducing conclusions, generating judgements and testing hypotheses logically. Biased reasoning, which parallels cognitive distortion in cognitive behavior therapy, often draws conclusions that differ from reality \cite{Harvey04}. Assessing cognitive distortion is an index of improvements in behaviour and emotional resiliency \cite{Freda00,Neil96}. Later we will explain the details of identifying cognitive distortion. Another transdiagnostic component captured by social media text is repetitive negative thinking, which is present across affective disorders, anxiety disorders, insomnia and psychosis \cite{Harvey04}. The two processes in repetitive negative thinking are worry and rumination. They share three commonalities: (a) they are repetitive, (b) they are uncontrollable and (c) they focus on negative content \cite{Harvey04}. Worry presents mainly in generalized anxiety disorder (GAD) and is defined as a chain of uncontrollable thoughts or images that represent an attempt to problem-solve an issue that might contain a negative outcome \cite{Thomas83}. Rumination, in contrast, has content specific to the type of disorder.
Rumination in post-traumatic stress disorder involves repetitive negative thinking about the trauma and its consequences \cite{Michael07}; rumination in social phobia contains self-appraisals and evaluations of the partner in a social event \cite{Kashdan07}.
\section{BACKGROUND AND RELATED WORK}
% Head 1
\subsection{Cognitive Behavior Therapy (CBT)}
Cognitive models of psychopathology propose that pathological behaviors and emotions are often the consequences of cognitive biases or distortions, which are inadequate interpretations of situations. Beck's cognitive model of psychopathology emphasizes the role of cognitive distortion in the maintenance of anxiety, depression and other mental disorders \cite{Beck67,Beck11}. Therefore, one of the goals of cognitive behavioral therapy for anxiety and depression is to help an individual adjust these biases, which is called ``cognitive restructuring''. Cognitive restructuring modifies clients' problematic ways of thinking about themselves, their world and their future \cite{Harvey04}. To identify these biases, therapists observe thoughts that contain cognitive distortions and investigate the underlying schema that generates these thoughts \cite{Beck11,Dobson09}. Cognitive distortions can be classified according to their content, for example mind reading, personalization, labeling and all-or-nothing thinking \cite{Oliveira14}. These thoughts can be true or detached from reality. For example, ``My boyfriend doesn't like me anymore.'' This statement may be true to the facts or based on mind reading, a type of distortion in which individuals assume that they know what people think without having sufficient evidence of their thoughts. The categories of cognitive distortions often indicate the type of disorder.
For instance, individuals with social anxiety disorder engage in mind reading (``No one wants me to be around them'') or catastrophizing (``It will be a disaster if I said something wrong in the group''). Depressed individuals engage in a wide range of cognitive distortions: labeling (``I am the black sheep of the family''), fortune telling (``I can never be happy without you'') \cite{Newman15}.
% Head 2
\subsection{Assessment of Cognitive Distortion}
The most widely applied method for assessing cognitive distortion is the cognitive distortion checklist. The list has been validated in experimental and clinical work \cite{Beck11,Dobson09}. xx and colleagues developed the Cognitive Distortion Questionnaire (CD-Quest) \cite{Simona17}, a 15-item questionnaire based on the distortion checklist that assesses the frequency and intensity of cognitive distortion. It is administered before the therapy session to help a client keep track of their thinking errors, thus enabling them to be aware of the change over time as the therapy goes on. In this study, we adopted the CD-Quest in our annotation guideline as a criterion for annotators to identify cognitive distortion based on the context information provided in the Facebook posts. Below is an example from the CD-Quest:
% quote
\begin{quote} \textit{Dichotomous thinking (also called all-or-nothing, black-and-white, or polarized thinking): I view a situation, a person, or an event in ``either-or'' terms, fitting them into only two extreme categories instead of on a continuum. EXAMPLES: ``I made a mistake; therefore my performance was a failure.'' ``I ate more than I planned, so I blew my diet completely.''} \end{quote}
% Head 3
\subsection{Negative Emotion and Psychopathology}
The relationship between negative affect, depression and anxiety has historically been considered clinically important (Akiskal, 1985; Clark, 1989; Clark and Watson, 1990; Dobson, 1985).
How people react to events reveals their coping mechanisms, and at the heart of reacting to and coping with events is people's emotional response (Pennebaker). We also investigate users' negative emotion in social media text and its correlation with user characteristics, behaviors and psychopathology. Instead of using LIWC, we manually label whether a post reflects negative emotion of the author.
\section{Method and Materials}
% Head 1
\subsection{Data}
This corpus consists of 5000 Facebook posts from individuals who participated in the myPersonality project from January 2009 to December 2011. Our methods were carried out in accordance with the approved guidelines of myPersonality. myPersonality was a Facebook-based application collecting psychometric tests from users. Participants opted in to allow myPersonality to collect their account information and public Facebook posts. Data collection by myPersonality complied with the terms of the Facebook service. All data are anonymized and gathered with opt-in consent for research purposes. The sample used in our study contains 301 participants who completed the CES-D scale, the Satisfaction with Life Scale, the Big-5 Personality Scale and the Schwartz Value Survey.
% Head 2
\subsection{Sampling Approach}
To ensure we had enough posts to conduct a longitudinal study, we only include regular posters in our sample. We define regular posters as individuals who posted twice per week or more, estimated from the average post count per day during the sampling frame. An individual with a post count per day of 0.3 made around 109.5 posts in 365 days, roughly equivalent to an average of 2.1 posts per week. In our sample, 122 of the 301 participants were regular posters. To make sure our sampling approach was conducted under a standard sampling framework, we included the 91 regular posters whose last post obtained by myPersonality was less than a week before they completed the CES-D scale.
We then obtain a sample of 4696 posts produced in the two months before the CES-D score was obtained. We further eliminate 14 posters who produced fewer than 20 posts during the two months, as well as posts not written in English. Eventually we obtain a sample of 4145 posts from 77 users.
% Head 2
\subsection{Annotation Process}
The annotation guideline was developed using 4362 Facebook posts to illustrate negative emotion and cognitive distortions. The extracted posts were first annotated by a trained psychologist according to the annotation guideline. The annotation process includes three steps. First, we identify whether the post reflects the author's negative emotion; posts that contain a mix of emotions are labeled as `mixed'. We group the `mixed' posts together with the negative emotion posts in the later analysis. Here negative emotion includes, but is not limited to, sadness, anger, anxiety, boredom, physical complaints and so on. Sometimes users repost content that contains negative emotion but might not reflect negative emotion of the author. For example,
% quote
\begin{quote}{\itshape} \textit{you have a sister who has made you laugh, punched you, stuck up for you, drove you crazy, hugged you, watched you succeed , saw you fail, picked you back up, cheered you on, made you strong, and is someone you cant live without someone you can always count on....REPOST THIS IF YOU HAVE A SISTER THAT YOU LOVE.} \end{quote}
We label these posts as neutral because the author's emotion when reposting this information is uncertain. Second, we label posts that contain cognitive distortion. In scoring cognitive distortion, annotators are given specific cues, with the CD-Quest as a reference measurement of cognitive distortion, but are also instructed to rely on their linguistic intuition. In addition, posts consisting of quotes, lyrics and reposts are labeled as non-original posts.
Annotators are trained by following the instructions and sets of practice examples in the annotation guideline. The difficulty of this task is that many posts contain an emotion or thought but do not describe the event that caused it. However, we can still tell that some posts contain cognitive distortion even when the post does not indicate a situation or context; see table~\ref{tab:zero}.
% Table
\begin{table}%
\caption{Posts with Cognitive Distortion}
\label{tab:zero}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{|p{5cm}|p{9.5cm}|}
\toprule
post & cognitive distortion \\
\hline\hline
``I feel like my life is waste. I have no story, no influence, no particular skills that are useful. I just suck.'' & Labeling, magnification/minimization: the author assigns global negative traits to him/herself, such as `my life is a waste' and `I just suck'. The author generates this global negative pattern from some incidents (how many is unclear), but fails to focus on life events that run counter to the statement. \\[5pt]
\hline
I hate the past. It deserves to be erased from memory forever. I don't care if the memories were good & Discounting positives and dichotomous thinking: the author hates everything in the past, which is all-or-nothing thinking, and he/she diminishes the positive events and achievements of the past \\[5pt]
\hline
Nothing feels right today. It's weird & The author gives greater weight to perceived failure or weakness but fails to notice positive events or opportunities \\[5pt]
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:}
\end{minipage}
\end{table}%
For the last step, we label posts that contain worry and rumination. Worry is often indicated by particular words, such as ``anxious'' and ``nervous''. Rumination includes ruminating on a specific event, state or emotion.
Here we set the rumination time window to one week: if an individual ruminates on a specific event or a specific emotion/state within a week, we label it as rumination. However, rumination on a specific emotion can be difficult to identify if the post does not contain information about the event or situation that causes the emotion, because we cannot tell whether the negative emotion is directed at the same event (see Example EMOTION).
% quote
\begin{quote}
EVENT: \\
\textit{"Cry my betts fish died and the other ones dying:("\\
"booh im crying my Betta fish died"}\\
STATE:\\
\textit{Ugh! What a boring morning!\\
I am so bored!\\
Why........SO..BOOOOOOOOOOOOOOOOOOOOREEEED!\\}
EMOTION: \\
\textit{Day1: "I'm so angry that mom threw away my things today." \\
Day 2: ":( " \\
Day 3: ":(" } \\
\end{quote}
% Head 3
\subsection{Self-reported measurement scales}
We now present a number of user characteristics and three self-reported scales used to measure depression symptoms, personality and satisfaction with life. Later we investigate the relationships between transdiagnostic symptoms and these self-reported psychological traits.
\subsubsection{Center for Epidemiologic Studies Depression Scale (CES-D Scale)}
CES-D is a self-reported scale designed to measure depression symptoms in the general population \cite{Radloff77}. The scale consists of 20 items associated with depression symptoms. It has been tested in psychiatric settings across various cultures over the years and was found to have high internal consistency and test-retest reliability \cite{Radloff77,Herz86,Roberts80}. Its validity was assessed via correlation with clinical diagnoses of depression and other self-reported trait measurements \cite{Herz86}.
\subsubsection{Five Factor Model of Personality (Big-5)}
The five factor personality model was established in an attempt to systematize the description of personality traits.
The dimensions composing the 5-factor model are extraversion, agreeableness, conscientiousness, neuroticism and openness to experience. The five factor structure has proved robust in both self and peer ratings \cite{McCrae92}, in children and adults \cite{Ivan95}, and across different cultures \cite{McCrae02}. Early literature found that the big-5 is relatively stable over time \cite{McCrae92}; however, more recent literature found the opposite \cite{Ardelt00}. Neuroticism was found to correlate strongly with a range of psychological disorders, such as anxiety and depression \cite{Ormel04}. Individuals who score high on neuroticism tend to experience frequent negative mood and physical symptoms. Recent studies found that social media data can predict the 5-factor model of personality \cite{Kosinski13}.
\subsubsection{Satisfaction with Life Scale (SWLS)}
The 5-item satisfaction with life scale was developed to measure global life satisfaction. The SWLS has been tested across different cultures and age groups \cite{Diener93} and has been found to have high internal consistency and temporal reliability \cite{Diener85}. Its validity was assessed by correlation with other measures of subjective well-being and specific personality dimensions.
\section{Results}
% Head 1
\subsection{Transdiagnostic labels}
Among the 4145 posts, 804 reflect negative emotion of the author and 36 contain a mix of positive and negative emotion. Among the 840 posts that contain negative emotion, only 41 contain cognitive distortion and 111 contain negative thinking (85 worry, 26 rumination); 3 posts show both cognitive distortion and negative thinking. Cognitive distortion is rare: it occurs in only 1\% of the posts in this sample. We aggregate a negative emotion score by summing the number of negative posts from each user, and use the same approach to generate a distortion score and a negative thinking score for each user.
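The per-user aggregation just described can be sketched as follows; the post labels below are hypothetical stand-ins for the annotated Facebook posts:

```python
# Sketch of the cumulative and per-post score aggregation described above.
# The post labels are hypothetical, not the study's data.
from collections import defaultdict

# (user_id, has_negative_emotion, has_distortion, has_negative_thinking)
posts = [
    ("u1", 1, 0, 0),
    ("u1", 1, 1, 0),
    ("u2", 0, 0, 0),
    ("u2", 1, 0, 1),
]

def cumulative_scores(posts):
    """Sum binary labels per user (the paper's cumulative score)."""
    scores = defaultdict(lambda: [0, 0, 0, 0])  # neg, distortion, thinking, n_posts
    for user, neg, dist, think in posts:
        s = scores[user]
        s[0] += neg; s[1] += dist; s[2] += think; s[3] += 1
    return dict(scores)

def per_post_scores(posts):
    """Divide each cumulative score by the user's number of posts."""
    return {u: [s[0] / s[3], s[1] / s[3], s[2] / s[3]]
            for u, s in cumulative_scores(posts).items()}

print(cumulative_scores(posts))  # u1 -> [2, 1, 0, 2]
print(per_post_scores(posts))    # u2 -> [0.5, 0.0, 0.5]
```

Each user's cumulative score is then correlated with the self-reported scales at the user level.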
Table~\ref{tab:one} shows the statistics of the three scores and their correlations with depression symptoms (CES-D). Table~\ref{tab:one2} shows the correlations between the per-post transdiagnostic symptom scores and depression symptoms. It appears that the cumulative transdiagnostic scores are more strongly correlated with psychopathology; therefore, we use the cumulative scores in the later analysis. Although posts containing cognitive distortion account for only 1\% of all posts, they are moderately correlated with self-reported depression symptoms. In contrast, negative emotion and negative thinking, although more frequently observed in the data, are not significantly correlated with CES-D.
% Table
\begin{table}%
\caption{Transdiagnostic components (cumulative score) and depression symptoms}
\label{tab:one}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{lllll}
\toprule
& mean & SD & CES-D & SWL \\
\hline\hline
Negative emotion & 10.91 & 11.835 &0.192 & -0.16\\
Cognitive distortion & 0.532 & 0.981 &0.300** & -0.250* \\
Negative thinking & 1.441 & 2.962 &0.117& 0.023\\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001
\end{minipage}
\end{table}%
% Table
\begin{table}%
\caption{Transdiagnostic components (per post) and depression symptoms}
\label{tab:one2}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{lllll}
\toprule
& mean & SD & CES-D & SWL\\
\hline\hline
Negative emotion & 0.182 & 0.197 &0.123 & -0.108\\
Cognitive distortion & 0.009 & 0.016 &0.261*& -0.208\\
Negative thinking & 0.018 & 0.032 &0.044 & -0.056\\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001. CES-D: correlation with depression symptoms. SWL: correlation with Satisfaction with Life.
\end{minipage}
\end{table}%
Figure~\ref{fig:one} shows the number of negative emotion and transdiagnostic symptom posts from each user in the two-month time window.
All three distributions are positively skewed: most users do not show much negative emotion, and a majority of users do not show any cognitive distortion or negative thinking. Those who do show cognitive distortion typically have only 1-2 posts showing it.
% Figure
\begin{figure}
\includegraphics[width=80mm,scale=0.8]{fig1}
\caption{Distribution of negative emotion and transdiagnostic symptoms}
\label{fig:one}
\end{figure}
% Head 2
\subsection{Dataset Statistics}
Since cognitive distortion appears to be the component most correlated with psychopathology (Table~\ref{tab:one}), we now subset a sample of individuals whose cognitive distortion score is higher than the group mean, which yields a sample of 26 individuals. We also subset another sample in which individuals have a lower than average cognitive distortion score (n = 51). We compare depression symptoms, satisfaction with life and personality between the two groups. Figure~\ref{fig:two} shows the age distribution of the sample population and of the high and low cognitive distortion groups. The age distribution shows that individuals 15-20 years old account for the majority of people in our sample population (skewness = 1.685, kurtosis = 5.532); the same pattern occurs in the low cognitive distortion group (skewness = 1.332, kurtosis = 3.911). In contrast, a majority of the people in the high cognitive distortion group are 20-22 years old (skewness = 0.817, kurtosis = 3.964).
% Figure
\begin{figure}
\includegraphics[width=80mm,scale=0.8]{fig2}
\caption{Age distribution of the sample population and the high and low cognitive distortion groups}
\label{fig:two}
\end{figure}
% Figure
\begin{figure}
\includegraphics[scale=0.9]{CESD_distortion}
\caption{qqplot of selected variables}
\label{fig:three}
\end{figure}
% Table
\begin{table}%
\caption{t-test Between Users with High or Low Transdiagnostic Symptoms}
\label{tab:two}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{lllllllll}
\toprule
& \multicolumn{2}{c}{all(n=77)} & \multicolumn{2}{c}{High Trans(n=26)} & \multicolumn{2}{c}{Low Trans(n=29)} & & \\
& mean & SD & mean & SD & mean & SD & p & Cohen's d \\
\hline\hline
SWL & 4.221 & & 3.831 & &4.483 & & & -0.42 \\
CES-D & 23.860 & & 28.42 & &21.62 & & * & 0.60 \\
ope & 4.166 & & 4.052 & &4.148 & & &-0.37 \\
con & 3.183 & & 3.085 & &3.094 & & & -0.19\\
ext & 3.101 & & 2.838 & &3.094 & & & -0.48 \\
agr & 3.539 & & 3.457 & &3.522 & & &-0.18 \\
neu & 3.022 & & 3.152 & &2.96 & & & 0.22 \\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001 after Bonferroni correction. Effect size: 0.8 = large (L); 0.5 = moderate (M); 0.2 = small (S). num. of posts: number of posts in two months; SWL: Satisfaction with Life score; CES-D: Center for Epidemiological Studies Depression scale; ope: openness; con: conscientiousness; ext: extraversion; agr: agreeableness; neu: neuroticism.
\end{minipage}
\end{table}%
We present the transdiagnostic symptom scores of the two groups together with their self-reported big-5 personality scores, satisfaction with life score and depression symptom score (Table~\ref{tab:two}). Two users did not report their age on their profiles; we assign them the mean age. We conduct independent-sample t-tests on the selected variables between the two groups. Figure~\ref{fig:three} shows the qqplot of the selected variables.
Our observations indicate that users' personality characteristics do not distinguish their transdiagnostic symptoms. However, users with more transdiagnostic symptoms tend to post more (nearly twice as many posts as the low symptom group). They also reported significantly more depression symptoms (28\% higher than low symptom users). We further divide users according to their demographic characteristics (gender, marital status, relationship status and relationship with parents) and observe their differences in transdiagnostic symptoms (Table~\ref{tab:three}). Users missing some of the characteristics information are assigned to the category 'other'; users in this category are not included in this analysis. Since Figure~\ref{fig:two} shows that the transdiagnostic symptoms and negative emotion are not normally distributed, we conduct a Wilcoxon rank-sum test (a non-parametric test used when the sample is not normally distributed) to compare these components between male and female users. Results show no gender difference in transdiagnostic symptoms. We then ask whether relationship status contributes to the amount of transdiagnostic symptoms. We use the Kruskal-Wallis test to compare the medians between users with different relationship statuses: single, in a relationship, and married. The Kruskal-Wallis test is a non-parametric equivalent of one-way analysis of variance (ANOVA). ANOVA is used when the residuals are normally distributed, which is not the case in our sample, whereas the Kruskal-Wallis test can be used to compare medians between groups when this assumption is not satisfied. Results show no statistically significant difference among the three groups in negative emotion (H = 4.516, p > 0.05), cognitive distortion (H = 1.573, p > 0.05) or negative thinking (H = 1.628, p > 0.05).
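As an illustration of the test statistic involved, a minimal Kruskal-Wallis $H$ computation (without tie correction) might look like the following; the group values are invented, not the study's data, and a real analysis should use a library routine such as scipy.stats.kruskal:

```python
# Minimal Kruskal-Wallis H statistic (no tie correction), for illustration only.
def kruskal_wallis_h(*groups):
    """H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1).

    Assumes no tied observations; real analyses should use a library
    implementation (e.g. scipy.stats.kruskal) that corrects for ties.
    """
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks
    n_total = len(pooled)
    sum_term = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n_total * (n_total + 1)) * sum_term - 3 * (n_total + 1)

# Three clearly separated groups give a large H; overlapping groups give H near 0.
print(kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9]))  # -> 7.2
```

Larger $H$ indicates a larger difference among the group rank distributions; the p-value is obtained from a chi-squared approximation with $k-1$ degrees of freedom.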
Although the medians of the three groups show no difference, the density plots from the three groups show that most married individuals have markedly lower transdiagnostic symptoms and negative emotion (Figure~\ref{fig:four}). Having a partner who provides mental support seems to be a protective factor, whereas no difference is found for people in a relationship. The result can also be interpreted the other way around: people who have a partner and fewer transdiagnostic symptoms may be more likely to get married, or to report being married on social media. Moreover, individuals whose parents are not divorced tend to have lower negative emotion, cognitive distortion and negative thinking compared with those who have divorced parents.
% Figure
\begin{figure}
\includegraphics[scale=0.9]{neg_emo_rela}
\caption{negative emotion in different relationship statuses}
\label{fig:four}
\end{figure}
% Table
\begin{table}%
\caption{Comparing Transdiagnostic Symptoms Among Different Demographic Groups}
\label{tab:three}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{lllllll}
\toprule
& \multicolumn{2}{c}{parents together} & \multicolumn{2}{c}{parents NOT together} \\
& mean & SD & mean & SD & p & Hodges-Lehmann estimator \\
\hline \hline
Negative emotion &6.315 & 4.607 & 13.292 &4.607 &* & -0.067 \\
Cognitive distortion &0.158 & 0.374 &0.625 &1.095 & &0.000 \\
Negative thinking &0.263 & 0.653 & 1.75 &2.937 & &0.000\\
CES-D &19.63 & 10.24 & 25.12 &12.74 & & 6.000\\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001. The Hodges-Lehmann estimator gives the pseudo-median difference between the two groups in the non-parametric test.
\end{minipage}
\end{table}%
% Head 3
\subsection{Late night posts}
Sleep disturbance is one of the major symptoms of depression. We investigate the relationship between sleep disturbance and cognitive distortion.
We count the number of posts written from midnight (12:00am) until 6:00am, then compute the proportion of late night posts among the total number of posts of that user. We investigate the relationship between the proportion of late night posts and transdiagnostic symptoms. For the transdiagnostic symptoms, we divide the cumulative symptom score by the user's number of posts. Negative emotion (r = 0.293, p < 0.01), cognitive distortion (r = 0.300, p < 0.01) and negative thinking (r = 0.285, p < 0.05) are slightly correlated with the proportion of late night posts. It appears that negative thinking is more likely to occur in late night posts. Our result is also supported by the cognitive model of insomnia: individuals with insomnia suffer unpleasant thoughts and excessive, uncontrollable worry during the pre-sleep period (Borkovec 1979, 1982; Morin, 1993).
% Head 4
\subsection{Linguistic styles}
We also measure the correlation between transdiagnostic components and linguistic style. Linguistic styles capture how an individual uses different components of language in various psychological or social environments. We used LIWC to score the linguistic style of each Facebook post, then aggregated the linguistic style scores at the user level. Table~\ref{tab:four} shows the correlation between user linguistic style scores and transdiagnostic symptoms. Clout refers to the social status, confidence, or leadership that people display through their writing. Studies found that people with higher status tend to use fewer first-person singular pronouns and more first-person plural and second-person singular pronouns \cite{Kacewicz13}. In our results, clout is strongly negatively correlated with negative emotion: users with more negative emotion tend to focus on the self, and thus use fewer third- or second-person pronouns. Our finding corresponds to the findings of Pennebaker's depression and language studies \cite{Pennebaker10}.
The difference in self-focus might be a response to emotional pain, or a thinking pattern that predisposes to depression \cite{Wolf07}. It is not surprising that emotional tone, which refers to positive tone, is negatively correlated with the negative emotion score. The LIWC negative emotion category is also moderately correlated with our manually labeled negative emotion. Social referents, i.e. words indicating social roles (father, mother, sister and so on), are slightly to moderately negatively linked to cognitive distortion and negative emotion. Our result indicates that people who show more negative emotion and cognitive distortion on social media are more likely to be socially detached from family and friends. We also find that these people are more present and future oriented and use fewer exclamation marks. However, this might be particular to social media text: users seldom describe the negative events of the past in detail in Facebook posts; instead, they vent their feelings about the events. For example, 'I am bored.', 'feeling sick again.', 'I hate today.'. Exclamation marks are often used to indicate excitement or surprise in a positive context. The content of negative thinking is often related to health and home. However, our result is limited to the context of social media; it is likely that people are less open about their financial situation and work issues on social media because that could affect their social image. On the other hand, posts that contain cognitive distortion are not content specific; they tend to have longer words and more words per sentence, and these words are more likely to be in the LIWC dictionary. This is mainly because there is a lot of reasoning and thinking process in cognitive distortion posts. In addition, the language of cognitive distortion is also less reward focused and more risk or prevention focused.
% Table
\begin{table}%
\caption{Transdiagnostic components and linguistic styles}
\label{tab:four}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{llll}
\toprule
& Negative emotion & Cognitive distortion & Negative thinking \\
\hline\hline
SUMMARY VARIABLE \\
analytic & & -0.262* & \\
clout & -0.518** & &-0.310** \\
authentic & & 0.325** & \\
emotional tone & -0.341*** & & \\
LANGUAGE METRICS & & & \\
words $>$ 6 letters & & -0.335** & \\
words per sentence & & 0.258* & \\
dictionary words & 0.234* & 0.412*** & \\
GRAMMAR & & & \\
functional words & & 0.344** & \\
total pronouns & & 0.251** & \\
personal pronouns & & 0.251** & \\
1st per pronoun & 0.369** & 0.325** & 0.239* \\
3rd per singular &-0.326*& -0.249* & \\
2nd person & -0.235* & & \\
prepositions & & 0.273* & \\
conjunctions &0.322** & 0.309* & \\
adjective & & 0.270* & \\
comparatives & & 0.320** & \\
verb & 0.244* & & \\
AFFECT WORDS & & & \\
negative emotion &0.322** & & \\
anger &0.413*** & & \\
anxiety & & & \\
sadness & & & \\
swear & 0.385*** & & \\
SOCIAL & & & \\
social words & & & -0.263* \\
female referents &-0.338** & -0.237* & \\
male referents & & -0.261* & \\
COGNITIVE PROCESS & & & \\
differentiation &0.323** & & \\
PERCEPTUAL & & & \\
perceptual process & &0.255* & \\
feeling & & 0.301* & \\
BIOLOGICAL & & & \\
health/illness & & & 0.335* \\
CORE DRIVE & & & \\
reward focus & & -0.249* & \\
risk/prevention focus & & 0.312** & \\
TIME & & & \\
present focus & 0.247* & 0.232* & \\
future focus &0.312* & & \\
PERSONAL CONCERN & & & \\
home & 0.300** & & 0.246*\\
work & & & \\
money & & & \\
PUNCTUATION & & & \\
exclamation marks &-0.257* &-0.317* & \\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} * p<0.05, **p<0.01, ***p<0.001
\end{minipage}
\end{table}%
% Head 4
\subsection{Cognitive Distortion Regression Model}
We explore the performance of a linear regression model in predicting cognitive distortion.
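As a sketch of the kind of model explored here, ordinary least squares with a multiple $R^2$ readout can be written as follows; the feature matrix is synthetic, standing in for the selected LIWC scores and the late-night posting proportion:

```python
# Sketch of a linear model predicting a cognitive distortion score.
# Feature values and coefficients are synthetic, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 77                                   # users in the sample
X = rng.normal(size=(n, 3))              # e.g. swear, risk focus, feeling scores
true_beta = np.array([0.021, 0.030, 0.029])
y = 0.2 + X @ true_beta + rng.normal(scale=0.01, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Multiple R^2: 1 - SS_res / SS_tot.
resid = y - A @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(beta.round(3), round(r2, 3))
```

The fitted coefficients play the role of the betas reported in Table 5, and $R^2$ corresponds to the reported proportion of variance explained.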
We found high intercorrelations among the LIWC features, and PCA or SVD-based feature selection methods do not take into account the potential multivariate nature of the data structure. We therefore select the features that are most correlated with cognitive distortion according to Table~\ref{tab:four}. "Dictionary words" has the highest correlation with cognitive distortion but also interacts strongly with more than a third of the language features, so we removed "dictionary words" to avoid multicollinearity. We then further remove features that are correlated above 0.3 with the top features. Our model explains 52\% of the variance in the data. Swear words and risk focus are very strong predictors.
% Table
\begin{table}%
\caption{Cognitive Distortion Linear Regression Model}
\label{tab:five}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{llll}
\toprule
measures & beta & SE & t-Stat \\
\hline \hline
Intercept & 0.199 &0.160 & 1.240 \\
total pronouns& 0.010* & 0.004 & 2.221 \\
3rd person pronoun & -0.019* & 0.008 & -2.291 \\
preposition & 0.021** & 0.007 & 2.837 \\
swear & 0.021*** & 0.005 & 4.026\\
feeling & 0.029** & 0.010 & 2.929 \\
reward focus & -0.020* & 0.009 & -2.132 \\
risk focus & 0.030*** & 0.008 & 3.494 \\
proportion of late night posts & 1.031* & 0.426 & 2.417 \\
\hline
Residual standard error & 0.714 \\
Multiple $R^2$ & 0.521 \\
Error degrees of freedom & 68 \\
\hline
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\emph{Note:} . p<0.1, * p<0.05, **p<0.01, ***p<0.001
\end{minipage}
\end{table}%
\section{CONCLUSION}
This research is designed to complement the current transdiagnostic diagnostic approach with a novel way of accessing people's behavior.
We examine the feasibility of identifying transdiagnostic symptoms using Facebook data and of finding language features that predict cognitive distortion, a core component in CBT that is highly associated with anxiety and depressive disorders. First, we label negative emotion, cognitive distortion and negative thinking in more than 4000 Facebook posts. Then we investigate the relationship between these components and depression symptoms, satisfaction with life and big-5 personality. Thereafter, we characterize the differences in transdiagnostic symptoms and negative emotion among demographic groups. Finally, we identify features that predict cognitive distortion. We found that cognitive distortion is moderately correlated with depression symptoms and satisfaction with life. Marriage and having parents who are not divorced seem to be protective factors against developing transdiagnostic symptoms. We found that a subset of language features predicts cognitive distortion well (explaining 52\% of the variance in the data). The proportion of posts written late at night, which is a sign of insomnia, also enhances the prediction. The major limitation of our work is that our data come from a social media platform. Facebook data may not represent an individual's thinking process precisely, because users differ in their degree of selective self-presentation and self-disclosure. Moreover, this work focuses on a limited sample of 77 users. It would be useful to replicate the study on a larger population to validate the patterns we found here.
\subsection{Data}
\label{subsec:data_logistic_regression}
For logistic regression and Support Vector Machines, we use the Wisconsin Breast Cancer Dataset\footnote{\url{https://www.kaggle.com/uciml/breast-cancer-wisconsin-data}}. This dataset contains measurements for breast cancer cases. There are two diagnosis classes in the dataset: benign and malignant. An overview of the dataset is given in the Jupyter notebook \href{https://github.com/am-kaiser/CompSci-Project-1/blob/main/regression_analysis/examples/logistic_regression_analysis.ipynb}{logistic\_regression\_analysis}, which can be found in the GitHub repository corresponding to this report. Based on this dataset, we want to find a model that predicts the diagnosis, i.e. either benign or malignant. For the design matrix, we drop the columns id and diagnosis from the data: the id is not relevant for making predictions, and the diagnosis is what we want to predict.
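A minimal sketch of this preparation step, using a few synthetic rows in place of the Kaggle CSV (the column names follow the dataset's naming scheme):

```python
# Sketch of the design-matrix preparation: drop `id` and use `diagnosis`
# as the target. The rows here are synthetic stand-ins for the Kaggle CSV.
import pandas as pd

df = pd.DataFrame({
    "id": [101, 102, 103],                 # hypothetical record ids
    "diagnosis": ["M", "M", "B"],          # malignant / benign
    "radius_mean": [17.99, 20.57, 19.69],
    "texture_mean": [10.38, 17.77, 21.25],
})

# Target vector: 1 for malignant, 0 for benign.
y = (df["diagnosis"] == "M").astype(int)

# Design matrix: every measurement column, without id and diagnosis.
X = df.drop(columns=["id", "diagnosis"])

print(X.columns.tolist())  # ['radius_mean', 'texture_mean']
print(y.tolist())          # [1, 1, 0]
```

With the real CSV, the same two `drop` targets leave the 30 measurement columns as the design matrix.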
\subsection{Class Function}
\definedin{CFG.h}
The Function class represents the portion of the program CFG that is reachable through intraprocedural control flow transfers from the function's entry block. Functions in the ParseAPI have only a single entry point; multiple-entry functions such as those found in Fortran programs are represented as several functions that ``share'' a subset of the CFG. Functions may be non-contiguous and may share blocks with other functions.
\begin{center}
\begin{tabular}{ll}
\toprule
FuncSource & Meaning \\
\midrule
RT & recursive traversal (default) \\
HINT & specified in CodeSource hints \\
GAP & speculative parsing heuristics \\
GAPRT & recursive traversal from speculative parse \\
ONDEMAND & dynamically discovered at runtime \\
\bottomrule
\end{tabular}
\end{center}
\apidesc{Return type of function \code{src()}; see description below.}
\begin{center}
\begin{tabular}{ll}
\toprule
FuncReturnStatus & Meaning \\
\midrule
UNSET & unparsed function (default) \\
NORETURN & will not return \\
UNKNOWN & cannot be determined statically \\
RETURN & may return \\
\bottomrule
\end{tabular}
\end{center}
\apidesc{Return type of function \code{retstatus()}; see description below.}
\begin{apient}
typedef std::vector<Block*> blocklist
typedef std::set<Edge*> edgelist
\end{apient}
\apidesc{Containers for block and edge access. Library users \emph{must not} rely on the underlying container type of std::set/std::vector lists, as it is subject to change.}
\begin{tabular}{p{1.25in}p{1.125in}p{3.125in}}
\toprule
Method name & Return type & Method description \\
\midrule
name & string & Name of the function. \\
addr & Address & Entry address of the function. \\
entry & Block * & Entry block of the function. \\
parsed & bool & Whether the function has been parsed. \\
blocks & blocklist \& & List of blocks contained by this function sorted by entry address. \\
callEdges & edgelist \& & List of outgoing call edges from this function.
\\
returnBlocks & blocklist \& & List of all blocks ending in return edges. \\
exitBlocks & blocklist \& & List of all blocks that end the function, including blocks with no out-edges. \\
hasNoStackFrame & bool & True if the function does not create a stack frame. \\
savesFramePointer & bool & True if the function saves a frame pointer (e.g. \%ebp). \\
cleansOwnStack & bool & True if the function tears down stack-passed arguments upon return. \\
region & CodeRegion * & Code region that contains the function. \\
isrc & InstructionSource * & The InstructionSource for this function. \\
obj & CodeObject * & CodeObject that contains this function. \\
src & FuncSource & The type of hint that identified this function's entry point. \\
retstatus & FuncReturnStatus & Returns the best-effort determination of whether this function may return or not. Return status cannot always be statically determined, and at most can guarantee that a function \emph{may} return, not that it \emph{will} return. \\
getReturnType & Type * & Type representing the return type of the function. \\
\bottomrule
\end{tabular}
\begin{apient}
Function(Address addr, string name, CodeObject * obj, CodeRegion * region, InstructionSource * isource)
\end{apient}
\apidesc{Creates a function at \code{addr} in the code region specified. Instructions for this function are given in \code{isource}.}
\begin{apient}
std::vector<FuncExtent *> const& extents()
\end{apient}
\apidesc{Returns a list of contiguous extents of binary code within the function.}
\begin{apient}
void setEntryBlock(Block * new_entry)
\end{apient}
\apidesc{Set the entry block for this function to \code{new\_entry}.}
\begin{apient}
void set_retstatus(FuncReturnStatus rs)
\end{apient}
\apidesc{Set the return status for the function to \code{rs}.}
\begin{apient}
void removeBlock(Block *)
\end{apient}
\apidesc{Remove a basic block from the function.}
% based on the fantastic work from http://www.stdout.org/~winston/latex/ \documentclass[10pt]{article} \usepackage{multicol} \usepackage{calc} \usepackage{ifthen} \usepackage{geometry} % conditional page margins based on paper size \ifthenelse{\lengthtest { \paperwidth = 11in}} { \geometry{top=.5in,left=.5in,right=.5in,bottom=.5in} } {\ifthenelse{ \lengthtest{ \paperwidth = 297mm}} {\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} } {\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} } } % remove page header and footer \pagestyle{empty} % redefine section commands to use less space \makeatletter \renewcommand{\section}{\@startsection{section}{1}{0mm}% {-1ex plus -.5ex minus -.2ex}% {0.5ex plus .2ex}%x {\normalfont\large\bfseries}} \renewcommand{\subsection}{\@startsection{subsection}{2}{0mm}% {-1explus -.5ex minus -.2ex}% {0.5ex plus .2ex}% {\normalfont\normalsize\bfseries}} \renewcommand{\subsubsection}{\@startsection{subsubsection}{3}{0mm}% {-1ex plus -.5ex minus -.2ex}% {1ex plus .2ex}% {\normalfont\small\bfseries}} \makeatother % disable section numbering \setcounter{secnumdepth}{0} \setlength{\parindent}{0pt} \setlength{\parskip}{0pt plus 0.5ex} \begin{document} \raggedright \footnotesize \begin{multicols}{2} % multicol parameters \setlength{\premulticols}{1pt} \setlength{\postmulticols}{1pt} \setlength{\multicolsep}{1pt} \setlength{\columnsep}{2pt} % header \begin{center} \Large{\textbf{Pegged Cheat Sheet}} \\ \end{center} \section{Rules} \begin{tabular}{@{}ll@{}} \verb! < ! & Creates a space consuming sequence. \\ \verb! <~! & Concatenates a sequences of matches into one string. \\ \verb! <:! & Creates a sequence to be discarded. \\ \verb! <;! & Creates a sequence stored in the parent node. \\ \end{tabular} Every operator suffix of a \verb!<! rule will work on token literals and on child rules from the parent context. \section{Non-Terminals} \begin{tabular}{@{}ll@{}} \verb!book! & Default is two-sided. 
\\ \end{tabular} \section{Terminals} \begin{tabular}{@{}ll@{}} \verb!book! & Default is two-sided. \\ \end{tabular} % footer \rule{0.3\linewidth}{0.25pt} \scriptsize Copyright \copyright\ 2012 Pegged Developers \& Contributors \end{multicols} \end{document}
\chapter{Results}
\section{Problem 1: Simple network}
In order to be able to use our forward propagation function $x_j=f(x_{i}*w_{ij})$ (from section 2.1), the given data (from section 1.1) were formulated as matrices, resulting in the following data:
\begin{itemize}
\item The input vector $x_0=\left[ \begin{array}{rr} 0.7 & 0.5 \\ \end{array}\right]$
\item The weight matrix $w_{01}=\left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \\ \end{array}\right]$
\item The weight matrix $w_{12}=\left[ \begin{array}{rrr} 0.9 & 0.3 & 0.9\\ 0.1 & 0.2 & 0.4\\ \end{array}\right]$
\item The weight matrix $w_{23}=\left[ \begin{array}{rrr} 0.1 & 0.8 & 0.4\\ 0.5 & 0.1 & 0.6\\ 0.6 & 0.7 & 0.3\\ \end{array}\right]$
\item The weight matrix $w_{34}=\left[ \begin{array}{rrr} 0.5 & 0.7 & 0.3\\ \end{array}\right]$
\end{itemize}
Thus the final equation can be summarized as:\\
$\;\;\;\;\;f(f(f(f(x_0*w_{01})*w_{12})*w_{23})*w_{34})$\\\\
The output of the neural network is $0.7451673339899871$. \\\\
Alternatively the input vector $x_0=\left[ \begin{array}{rr} 0.7 & 0.5 \\ \end{array}\right]$ was used, resulting in the value $0.7453676512649436$.
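The forward pass above can be reproduced numerically; assuming the activation $f$ is the sigmoid (not stated in this excerpt), the following sketch yields the reported output:

```python
# Forward pass through the four weight matrices listed above.
# Assumption: f is the sigmoid activation applied after every layer.
import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-x))  # sigmoid

x0 = np.array([0.7, 0.5])
w01 = np.eye(2)                      # identity input layer
w12 = np.array([[0.9, 0.3, 0.9],
                [0.1, 0.2, 0.4]])
w23 = np.array([[0.1, 0.8, 0.4],
                [0.5, 0.1, 0.6],
                [0.6, 0.7, 0.3]])
w34 = np.array([0.5, 0.7, 0.3])

out = f(f(f(f(x0 @ w01) @ w12) @ w23) @ w34)
print(out)  # ~0.74517, matching the reported 0.7451673339899871
```

The same chained-application structure mirrors the equation $f(f(f(f(x_0 w_{01}) w_{12}) w_{23}) w_{34})$.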
\section{Problem 2: Backpropagation}
Using the given equations from section 1.2, the following gradients were calculated for part (a):
\begin{itemize}
\item $\partial sigmoid = sigmoid*(1-sigmoid)$
\item $\frac {\partial cost}{\partial prediction} = -1 $
\item $\frac {\partial prediction}{\partial y} = \partial sigmoid(y)$
\item $\frac {\partial y}{\partial w_1}=x_1$
\item $\frac {\partial y}{\partial w_2}=x_2$
\item $\frac {\partial y}{\partial b}=1$
\item $\frac {\partial cost}{\partial w_1}=\frac {\partial cost}{\partial prediction}*\frac {\partial prediction}{\partial y}*\frac {\partial y}{\partial w_1}$
\item $\frac {\partial cost}{\partial w_2}=\frac {\partial cost}{\partial prediction}*\frac {\partial prediction}{\partial y}*\frac {\partial y}{\partial w_2}$
\item $\frac {\partial cost}{\partial b}=\frac {\partial cost}{\partial prediction}*\frac {\partial prediction}{\partial y}*\frac {\partial y}{\partial b}$
\end{itemize}
This results in the following weights (assuming a learning rate of $1$):
\begin{itemize}
\item $w_1 = w_1-\frac {\partial cost}{\partial w_1} = 1.0088313531066455$
\item $w_2 = w_2 - \frac {\partial cost}{\partial w_2}=1.0264940593199368$
\item $b = b - \frac {\partial cost}{\partial b}=2.017662706213291$
\end{itemize}
For part (b), only the cost function (see section 1.2 b) is different, with its derivative being:
\begin{itemize}
\item $\frac {\partial cost}{\partial prediction} = -(z'-z)$
\end{itemize}
This results in the following weights (assuming a learning rate of $1$):
\begin{itemize}
\item $w_1 = w_1-\frac {\partial cost}{\partial w_1} = 1.0001588425712256$
\item $w_2 = w_2 - \frac {\partial cost}{\partial w_2}=1.0004765277136765$
\item $b = b - \frac {\partial cost}{\partial b}=2.000317685142451$
\end{itemize}
\section{Problem 3: Artificial neural network}
For Problem 3 we defined a neural network with only one hidden layer, consisting of 11 neurons.
It was trained for 10{,}000 epochs with the following parameters: \begin{itemize} \item batch size $= 8000$ \item learning rate $= 1$ \end{itemize} The training curve gives the following results: \begin{figure}[h] \centering \includegraphics[height=12cm]{img/problem3_curves.png} \caption{Training curves for problem 3} \label{problem3_curves} \end{figure} Unfortunately the testing did not work from scratch and we were not able to fix the issue: \texttt{CUDA error: an illegal memory access was encountered}. \section{Problem 4: Gradient Descent} Using the given python code and applying the gradient and weight calculation described in Section~\ref{ch:methods:sec:4} results in the line shown in Figure~\ref{problem4_result}. The original line has $m = 3.30$ and $c = 5.3$; the learned values are $m = 3.28$ and $c = 5.27$. \begin{figure}[h] \centering \includegraphics[width=17cm]{img/problem4_result_160.png} \caption{Result of gradient descent after 160 iterations} \label{problem4_result_160} \end{figure} \begin{figure}[h] \centering \includegraphics[width=17cm]{img/problem4_result.png} \caption{Result of gradient descent} \label{problem4_result} \end{figure} The loss is also plotted in Figure~\ref{problem4_result_loss}. \begin{figure}[h] \centering \includegraphics[width=17cm]{img/problem4_result_loss.png} \caption{Loss of the gradient descent over the iterations} \label{problem4_result_loss} \end{figure}
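The gradient-descent update used for Problem 4 can be sketched as follows. This is a minimal sketch on synthetic, noise-free data generated from the true line $m=3.30$, $c=5.3$; the actual assignment used the provided python code and data, and the learning rate and iteration count here are illustrative choices:

```python
# minimal gradient-descent sketch for fitting y = m*x + c
m_true, c_true = 3.30, 5.3
xs = [i / 10.0 for i in range(50)]
ys = [m_true * x + c_true for x in xs]

m = c = 0.0
lr, n = 0.01, len(xs)
for _ in range(10000):
    # gradients of the mean squared error (1/n) * sum((m*x + c - y)^2)
    err = [m * x + c - y for x, y in zip(xs, ys)]
    grad_m = (2.0 / n) * sum(e * x for e, x in zip(err, xs))
    grad_c = (2.0 / n) * sum(err)
    m -= lr * grad_m
    c -= lr * grad_c
```

On this noise-free data the iteration recovers $m$ and $c$ essentially exactly; with the real (noisy) data, values such as $3.28$ and $5.27$ are the expected outcome.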
\section*{Introduction} What is the question you are going to answer? \begin{itemize} \item Why is it an exciting question (basically the problem statement)? \item Why is this question important to our field? \item How is my work going to help to answer it? \end{itemize} "I have a major question. How did I get to this question? Why is that important? That’s essentially the introduction. And then I have the second part which is this idea that now I’m going to lead you through how I answered it, and that’s our methods and results. So, I think the story part comes in putting your work into context of the field, of other people’s work, of why it’s important, and it’ll make your results much more compelling" \cite{mensh2017ten}. Readers might look at the title, they might skim your abstract and they might look at your figures, so we try to make our figures tell the story as much as possible. \paragraph{How to structure a paragraph} "For the whole paper, the introduction sets the context, the results present the content and the discussion brings home the conclusion" \cite{mensh2017ten}. "In each paragraph, the first sentence defines the context, the body contains the new idea and the final sentence offers a conclusion" \cite{mensh2017ten}. \paragraph{From 'Ten Simple Rules for structuring papers'} "The introduction highlights the gap that exists in current knowledge or methods and why it is important. This is usually done by a set of progressively more specific paragraphs that culminate in a clear exposition of what is lacking in the literature, followed by a paragraph summarizing what the paper does to fill that gap. As an example of the progression of gaps, a first paragraph may explain why understanding cell differentiation is an important topic and that the field has not yet solved what triggers it (a field gap). A second paragraph may explain what is unknown about the differentiation of a specific cell type, such as astrocytes (a subfield gap).
A third may provide clues that a particular gene might drive astrocytic differentiation and then state that this hypothesis is untested (the gap within the subfield that you will fill). The gap statement sets the reader’s expectation for what the paper will deliver. The structure of each introduction paragraph (except the last) serves the goal of developing the gap. Each paragraph first orients the reader to the topic (a context sentence or two) and then explains the “knowns” in the relevant literature (content) before landing on the critical “unknown” (conclusion) that makes the paper matter at the relevant scale. Along the path, there are often clues given about the mystery behind the gaps; these clues lead to the untested hypothesis or undeveloped method of the paper and give the reader hope that the mystery is solvable. The introduction should not contain a broad literature review beyond the motivation of the paper. This gap-focused structure makes it easy for experienced readers to evaluate the potential importance of a paper—they only need to assess the importance of the claimed gap. The last paragraph of the introduction is special: it compactly summarizes the results, which fill the gap you just established. It differs from the abstract in the following ways: it does not need to present the context (which has just been given), it is somewhat more specific about the results, and it only briefly previews the conclusion of the paper, if at all." \cite{mensh2017ten}. \paragraph{End of introduction:} Here you say what problem you are tackling; make clear: \begin{enumerate} \item What is the missing gap? \item Why is it exciting and important? \item What can be said, done, or tested experimentally if we have modelled this, i.e., what consequences does it have or what hypotheses can be derived? \end{enumerate} Such a section is particularly important for a journal.
\section{Bottleneck structure in MgNet by using subspace correction} Recall the standard MgNet iteration \begin{equation}\label{eq:mgnetiteration} u^{\ell,i} = u^{\ell,i-1} + \sigma \circ B^{\ell,i} \ast \sigma ({f^\ell - A^{\ell} \ast u^{\ell,i-1}}), \end{equation} which corresponds to the classical residual correction scheme in multigrid, \begin{equation}\label{key} u^{\ell,i} = u^{\ell,i-1} + B^{\ell,i} ({f^\ell - A^{\ell} \ast u^{\ell,i-1}}). \end{equation} Now let us recall the subspace correction scheme on a fixed level (say the $\ell$-th level); we have the following iterative scheme \begin{equation}\label{eq:bottleneckmgnet} u^{\ell,i} = u^{\ell,i-1} + P^{\ell,i} B^{\ell,i} R^{\ell,i}({f^\ell - A^{\ell} \ast u^{\ell,i-1}}). \end{equation} Recall the dimensions of $f^\ell$ and $u^{\ell,i}$, \begin{equation}\label{key} f^\ell, u^{\ell,i} \in \mathbb{R}^{c_\ell \times m_\ell \times n_\ell }, \end{equation} which leads to the dimension of $B^{\ell,i}$ in standard MgNet in \eqref{eq:mgnetiteration} being \begin{equation}\label{key} B^{\ell,i} \in \mathbb{R}^{c_\ell \times c_\ell \times 3 \times 3}. \end{equation} However, for a subspace correction scheme, we can take $R^{\ell,i}$ to be the restriction operator \begin{equation}\label{key} R^{\ell,i}: \mathbb{R}^{c_\ell \times m_\ell \times n_\ell } \mapsto \mathbb{R}^{ \alpha c_\ell \times m_\ell \times n_\ell }, \end{equation} where $\alpha \in (0,1]$, for example $\alpha = \frac{1}{4}$. A rational choice for $R^{\ell,i}$ and $P^{\ell,i}$ is \begin{equation}\label{key} R^{\ell,i} \in \mathbb{R}^{\alpha c_\ell \times c_\ell \times 1 \times 1}, \end{equation} and \begin{equation}\label{key} P^{\ell,i} \in \mathbb{R}^{ c_\ell \times \alpha c_\ell \times 1 \times 1}. \end{equation} Of course, we can simply take $R^{\ell,i} = [P^{\ell,i}]^T$ based on the theory of subspace corrections.
Then, the size of $B^{\ell,i}$ in \eqref{eq:bottleneckmgnet} can be reduced to \begin{equation}\label{key} B^{\ell,i} \in \mathbb{R}^{\alpha c_\ell \times\alpha c_\ell \times 3 \times 3}. \end{equation} Thus the total number of parameters in the operations $R^{\ell,i}$, $P^{\ell,i}$ and $B^{\ell,i}$ is \begin{equation}\label{key} \begin{aligned} &\alpha c_\ell \times c_\ell \times 1 \times 1 + c_\ell \times \alpha c_\ell \times 1 \times 1 + \alpha c_\ell \times\alpha c_\ell \times 3 \times 3 \\ &= (9\alpha^2 + 2\alpha) c_\ell^2\\ &= \frac{17}{16} c_\ell^2 \quad ( \alpha = \frac{1}{4}), \end{aligned} \end{equation} which is much less than the $9c_\ell^2$ parameters of $B^{\ell,i}$ in the original MgNet~\eqref{eq:mgnetiteration}. To follow the linear constrained model assumption, we may take the nonlinearity as \begin{equation}\label{eq:bottleneckmgnet-1} u^{\ell,i} = u^{\ell,i-1} + \sigma \circ P^{\ell,i} \ast \sigma \circ B^{\ell,i} \ast \sigma \circ R^{\ell,i} \ast \sigma ({f^\ell - A^{\ell} \ast u^{\ell,i-1}}). \end{equation} Following a derivation similar to that from MgNet to ResNet, we can also derive the next ``enhanced'' bottleneck ResNet from \eqref{eq:bottleneckmgnet-1} as \begin{equation}\label{key} r^{\ell,i} = r^{\ell,i-1} - A^\ell \ast \sigma \circ P^{\ell,i} \ast \sigma \circ B^{\ell,i} \ast \sigma \circ R^{\ell,i} \ast \sigma (r^{\ell,i-1}). \end{equation}
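As a quick numerical sanity check of this parameter count (a sketch, not part of the derivation; $c_\ell = 64$ is an arbitrary example):

```python
# parameter counts of the bottleneck triple (R, B, P) versus the dense B,
# checking the (9*alpha^2 + 2*alpha) * c^2 formula above
def bottleneck_params(c, alpha):
    a = int(alpha * c)        # alpha * c_l channels in the subspace
    r = a * c * 1 * 1         # restriction  R: 1x1 kernel, c -> a channels
    p = c * a * 1 * 1         # prolongation P: 1x1 kernel, a -> c channels
    b = a * a * 3 * 3         # smoother B on the subspace, 3x3 kernel
    return r + p + b

c = 64
count = bottleneck_params(c, 0.25)
```

For $c_\ell = 64$ and $\alpha = \frac14$ this gives $\frac{17}{16}c_\ell^2 = 4352$ parameters, against $9c_\ell^2 = 36864$ for the dense $B^{\ell,i}$.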
\chapter{Odem Perks}\label{ch:odemPerks} Odem is a strange and mystical power that some people are born with. It usually lies dormant for a long time, until the person is in emotional or physical distress and it manifests in a psychic burst of energy that harms everyone in their surroundings. Most people consider Odem to be a curse, and thus many Odem-wielders are enslaved, ostracized, or outright banned from civilized countries. In order to combat this, the Church of Four has created an elite troupe of hunters called ``The Seekers'', who are attuned to Odem and can feel it manifest in others. Those found by the Seekers are usually captured and brought to a monastery, temple or other holy organization, where they receive a Sigil that suppresses their powers. Oftentimes, these holy places will then keep the person prisoner.\\ Some Odem wielders have been able to manifest their Odem in the form of coloured flames, each of which has different effects. These so-called ``Dervishes'' can summon their flames and invoke powerful effects with them. However, mastering a flame is difficult, and takes many years of training, meditation and experience.\\ Mechanically, levels already acquired in any flame perk accumulate, and increase the cost of the next level of flame perk.
This results in the following perk costs:\\ \\ Level Progression:\\ \\ \begin{minipage}{0.30\textwidth} \rowcolors{2}{lightgray}{white} \begin{tabular}{l | l} Total Flame Level & Cost\\ \hline I & 100\\ II & 200\\ III & 400\\ IV & 700\\ V & 1,100\\ \end{tabular} \end{minipage} \begin{minipage}{0.30\textwidth} \rowcolors{2}{lightgray}{white} \begin{tabular}{l | l} Total Flame Level & Cost\\ \hline VI & 1,600\\ VII & 2,200\\ VIII & 2,900\\ IX & 3,700\\ X & 4,600\\ \end{tabular} \end{minipage} \begin{minipage}{0.30\textwidth} \rowcolors{2}{lightgray}{white} \begin{tabular}{l | l} Total Flame Level & Cost\\ \hline XI & 5,600\\ XII & 6,700\\ XIII & 7,900\\ XIV & 9,200\\ XV & 10,600\\ \end{tabular} \end{minipage} \input{perks/odem/odemcurse.tex} \input{perks/odem/odemsigil.tex} \input{perks/odem/redOdemFlame.tex} \input{perks/odem/blueOdemFlame.tex} \input{perks/odem/greenOdemFlame.tex}
\documentclass{article} %% \usepackage{indentfirst} \usepackage{fullpage} \usepackage{html} \begin{document} \title{Annotating Java Class Files with \\ Array Bounds Check and Null Pointer Check Information} \author{Feng Qian (\htmladdnormallink{fqian@sable.mcgill.ca} {mailto:fqian@sable.mcgill.ca})} \date{\today} \maketitle This note explains how to use Soot annotation options to add array bounds check and null pointer check attributes to a class file, and how to use these attributes in a JIT or ahead-of-time compiler. \section{Array References and Object References} Java requires array bounds checks when accessing arrays, and null pointer checks when accessing objects. Array bounds checks are implemented at the virtual machine level by inserting comparison instructions before accessing an array element. Most operating systems can raise a hardware exception when a bytecode accesses a null pointer, so the nullness check on an object reference is free most of the time. However, some bytecodes, like the {\tt invokespecial} and {\tt athrow} instructions, do need explicit comparison instructions to detect null pointers. Both of these safety checking mechanisms cause substantial runtime overhead. Soot provides static analyses for detecting safe array and object accesses in a method. These analyses mark array and object reference bytecodes as either safe or unsafe. The results of these analyses are encoded into the class file as attributes, which can then be understood by an interpreter or JIT compiler. If a bytecode is marked as safe in its attribute, the associated comparison instructions can be eliminated. This can speed up the execution of Java applications. Our process of encoding class files with attributes is called {\em annotation}. Soot can be used as a compiler framework to support any attributes you would like to define; they can then be encoded into the class file.
The process of adding new analyses and attributes is documented in ``Adding attributes to class files via Soot''.
% there is a latex2html command that lets you provide a hyperlink.
% See the other tutorials.
\section{Annotation options in Soot} \subsection{Description of new options} Soot has new command-line options {\tt -annot-nullpointer} and {\tt -annot-arraybounds} to enable the phases required to emit null pointer check and array bounds check annotations, respectively. Soot also has phase options to configure the annotation process; these phase options only take effect when annotation is enabled. Note that the array bounds check analysis and null pointer check analysis constitute two different phases, but the results are combined and stored in the same attribute in the class files. The null pointer check analysis has the phase name ``{\em jap.npc}''. It has one phase option (aside from the default option {\em enabled}). \begin{description} \item[-p jap.npc only-array-ref]\ \\ By default, all bytecodes that need null pointer checks are annotated with the analysis result. When this option is set to true, Soot will annotate only array reference bytecodes with null pointer check information; other bytecodes, such as {\tt getfield} and {\tt putfield}, will not be annotated. \end{description} Soot also has phase options for the array bounds check analysis. These options affect three levels of analyses: intraprocedural, class-level, and whole-program. The array bounds check analysis has the phase name ``{\em jap.abc}''. If the whole-program analysis is required, an extra phase ``{\em wjap.ra}'' for finding rectangular arrays is required; this phase can also be enabled with phase options. By default, our array bounds check analysis is intraprocedural, since it only examines local variables. This is fast, but conservative.
Other options can improve the analysis result; however, the analysis will usually take longer, and some options assume that the application is single-threaded. \begin{description} \item[-p jap.abc with-cse]\ \\ The analysis will consider common subexpressions. For example, consider the situation where {\tt r1} is assigned {\tt a*b}; later, {\tt r2} is assigned {\tt a*b}, where both {\tt a} and {\tt b} have not been changed between the two statements. The analysis can conclude that {\tt r2} has the same value as {\tt r1}. Experiments show that this option can improve the result slightly. \item[-p jap.abc with-arrayref]\ \\ With this option enabled, array references can be considered as common subexpressions; however, we are more conservative when writing into an array, because array objects may be aliased. NOTE: We also assume that the application is single-threaded or that the accesses occur in a synchronized block; that is, an array element may not be changed by other threads between two array references.
% see my thesis for an example of what to do when you have contention! -plam
\item[-p jap.abc with-fieldref]\ \\ The analysis treats field references (static and instance) as common subexpressions. The restrictions from the `{\tt with-arrayref}' option also apply. \item[-p jap.abc with-classfield]\ \\ This option makes the analysis work at the class level. The algorithm analyzes `final' or `private' class fields first. It can recognize fields that hold array objects of constant length. In an application using lots of array fields, this option can improve the analysis results dramatically. \item[-p jap.abc with-all]\ \\ A macro. Instead of typing a long string of phase options, this option will turn on all options of the phase ``{\em jap.abc}''. \item[-p jap.abc with-rectarray, -p wjap.ra with-wholeapp]\ \\ These two options are used together to make Soot run the whole-program analysis for rectangular array objects.
This analysis is based on the call graph, and it usually takes a long time. If the application uses rectangular arrays, these options can improve the analysis result. \end{description} \subsection{Examples} Annotate the benchmark in class-file mode with both analyses: \begin{verbatim} java soot.Main -annot-nullpointer -annot-arraybounds spec.benchmarks._222_mpegaudio.Main \end{verbatim} The options for rectangular arrays should be used in application mode. For example: \begin{verbatim} java soot.Main --app -annot-arraybounds -p wjap.ra with-wholeapp -p jap.abc with-all spec.benchmarks._222_mpegaudio.Main \end{verbatim} The following command annotates only the array reference bytecodes with null pointer check information: \begin{verbatim} java soot.Main -annot-nullpointer -annot-arraybounds -p jap.npc only-array-ref spec.benchmarks._222_mpegaudio.Main \end{verbatim} \section{Using attributes in the Virtual Machine} The array bounds check and null pointer check information is encoded in a single attribute in a class file, called {\tt ArrayNullCheckAttribute}. When a VM reads in the class file, it can use the attribute to avoid generating comparison instructions for the safe bounds and nullness checks. All array reference bytecodes, such as {\em ?aload} and {\em ?astore}, will be annotated with bounds check information. Bytecodes that need null pointer checks are listed below: \begin{verbatim} ?aload ?astore getfield putfield invokevirtual invokespecial invokeinterface arraylength monitorenter monitorexit athrow \end{verbatim} The attributes in the class file are organized as a table. If a method has been annotated, it will have an {\tt ArrayNullCheckAttribute} attribute on its {\tt Code\_attribute}. The data structure is defined as: \begin{verbatim} array_null_check_attribute { u2 attribute_name_index; u4 attribute_length; u3 attribute[attribute_length/3]; } \end{verbatim} The attribute data consist of 3-byte entries.
Each entry uses its first two bytes to indicate the PC of the bytecode it belongs to; the third byte represents the annotation information. \begin{verbatim} soot_attr_entry { u2 PC; u1 value; } \end{verbatim} Entries are sorted by PC in ascending order when written into the class file. The right-most two bits of the `{\em value}' byte represent upper and lower bounds information. The third bit from the right is used for the nullness annotation. The other bits are unused and set to zero. A bit value of `1' indicates that the check is needed; `0' represents a known-to-be-safe access. In general, the check instructions can be eliminated only when both the lower and upper bounds are safe; however, sometimes this depends on the VM implementation. \begin{verbatim} 0 0 0 0 0 N U L N : nullness check U : upper bounds check L : lower bounds check \end{verbatim} For example, the attribute data should be interpreted as: \begin{verbatim} 0 0 0 0 0 1 x x // need null check 0 0 0 0 0 0 x x // no null check // x x represent array bounds check. 0 0 0 0 0 0 0 0 // do not need null check or array bounds check 0 0 0 0 0 1 0 0 // need null check, but not array bounds check \end{verbatim} \section*{Other information} The detailed annotation process is described in our technical report. The array bounds check analysis algorithm will appear in another technical report. There is a tutorial describing how to develop other annotation attributes using Soot. \section*{Change log} \begin{itemize} \item October 2, 2000: Initial version. \end{itemize} \end{document}
Poster: Dueling Bandits with Weak Regret Bangrui Chen · Peter I Frazier Mon Aug 07 01:30 AM -- 05:00 AM (PDT) @ Gallery #90 We consider online content recommendation with implicit feedback through pairwise comparisons, formalized as the so-called dueling bandit problem. We study the dueling bandit problem in the Condorcet winner setting, and consider two notions of regret: the more well-studied strong regret, which is 0 only when both arms pulled are the Condorcet winner; and the less well-studied weak regret, which is 0 if either arm pulled is the Condorcet winner. We propose a new algorithm for this problem, Winner Stays (WS), with variations for each kind of regret: WS for weak regret (WS-W) has expected cumulative weak regret that is $O(N^2)$, and $O(N\log(N))$ if arms have a total order; WS for strong regret (WS-S) has expected cumulative strong regret of $O(N^2 + N \log(T))$, and $O(N\log(N)+N\log(T))$ if arms have a total order. WS-W is the first dueling bandit algorithm with weak regret that is constant in time. WS is simple to compute, even for problems with many arms, and we demonstrate through numerical experiments on simulated and real data that WS has significantly smaller regret than existing algorithms in both the weak- and strong-regret settings. #### Author Information ##### Peter I Frazier (Cornell University) Peter Frazier is an Associate Professor in the School of Operations Research and Information Engineering at Cornell University. He is also a Staff Data Scientist at Uber, where he managed the data science group for UberPOOL while on sabbatical leave from Cornell. He completed his Ph.D. in Operations Research and Financial Engineering at Princeton University in 2009. Peter's research is in Bayesian optimization, multi-armed bandits and incentive design for social learning, with applications in e-commerce, the sharing economy, and materials design. He is the recipient of an AFOSR Young Investigator Award and an NSF CAREER Award.
# Convert to a Decimal: 100/9 Convert the fraction to a decimal by dividing the numerator by the denominator: 100 ÷ 9 = 11.111… (the digit 1 repeats), i.e. 100/9 = 11.1 repeating.
# T distribution ## Shape of Distribution ### Basic Properties • One parameter $N$ is required (positive integer) • Continuous distribution defined on the entire real line • This distribution is symmetric. ### Probability • Probability density function $f(x)=\frac{\Gamma\left(\frac{N+1}{2}\right)}{\sqrt{\pi N\left(1+\frac{x^2}{N}\right)^{N+1}}\Gamma\left(\frac{N}{2}\right)}$, where $\Gamma(\cdot)$ is the gamma function. • Cumulative distribution function $F(x)=\frac{1}{2}+\frac{1}{2}\left[1-I_{\gamma}\left(\frac{N}{2},\frac{1}{2}\right)\right]\text{sign}(x)$, where $\gamma=\frac{N}{N+x^2}$ and $I_{x}(\cdot,\cdot)$ is the regularized incomplete beta function. • On Excel: with the value in cell A2 and the parameter $N$ in cell A3, `=NTTDIST(A2,A3,TRUE)` returns the cumulative distribution function and `=NTTDIST(A2,A3,FALSE)` the probability density function. • Function reference : NTTDIST ## Characteristics ### Mean – Where is the “center” of the distribution? (Definition) • The mean of the distribution is defined for $N>1$ and is always 0. • On Excel: with the parameter $N$ in cell A2, `=NTTMEAN(A2)` returns the mean. • Function reference : NTTMEAN ### Standard Deviation – How wide does the distribution spread? (Definition) • The variance of the distribution is $\frac{N}{N-2}\quad (N>2)$; the standard deviation is its positive square root. • On Excel: with the parameter $N$ in cell A2, `=NTTSTDEV(A2)` returns the standard deviation. • Function reference : NTTSTDEV ### Skewness – Which side is the distribution distorted into? (Definition) • The skewness of the distribution is defined for $N>3$ and is always 0.
• On Excel: with the parameter $N$ in cell A2, `=NTTSKEW(A2)` returns the skewness. • Function reference : NTTSKEW ### Kurtosis – Sharp or Dull, consequently Fat Tail or Thin Tail (Definition) • The (excess) kurtosis of the distribution is $\frac{6}{N-4}\;(N>4)$ • Since this is positive whenever it is defined, the distribution is leptokurtic. • On Excel: with the parameter $N$ in cell A2, `=NTTKURT(A2)` returns the kurtosis. • Function reference : NTTKURT ## Random Numbers • On Excel: with the parameter $N$ in cell A2, `=NTRANDT(100,A2,0)` generates 100 $t$-distributed deviates based on the Mersenne-Twister algorithm. Note: the formula must be entered as an array formula — after copying the example to a blank worksheet, select a range of 100 cells starting with the formula cell, press F2, and then press CTRL+SHIFT+ENTER. • Function reference : NTRANDT ## NtRand Functions • If you already have parameters of the distribution • Generating random numbers based on Mersenne Twister algorithm: NTRANDT • Computing probability : NTTDIST • Computing mean : NTTMEAN • Computing standard deviation : NTTSTDEV • Computing skewness : NTTSKEW • Computing kurtosis : NTTKURT • Computing moments above at once : NTTMOM
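For readers outside Excel, the density and the variance formula above can be checked with a short plain-Python sketch (using only the formulas on this page, not the NtRand functions):

```python
import math

def t_pdf(x, n):
    # the probability density function given above
    c = math.gamma((n + 1) / 2) / (math.sqrt(math.pi * n) * math.gamma(n / 2))
    return c / (1 + x * x / n) ** ((n + 1) / 2)

# crude numerical integration over [-40, 40] for N = 8
n, step = 8, 0.001
xs = [i * step for i in range(-40000, 40001)]
total = sum(t_pdf(x, n) for x in xs) * step             # should be ~1
variance = sum(x * x * t_pdf(x, n) for x in xs) * step  # should be ~N/(N-2)
```

Since the mean is 0, the second moment computed here equals the variance, $N/(N-2) = 8/6$ for $N=8$.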
# Lesson 8 Multiplying Expressions These materials, when encountered before Algebra 1, Unit 7, Lesson 8, support success in that lesson. ### Lesson Narrative In this lesson, students use a diagram to multiply factors of the form $$(a+b)(c+d)$$ and gain fluency by using the diagrams in different ways. In the associated Algebra 1 lesson, students factor quadratic expressions of the form $$x^2 - a^2$$. Familiarity with using diagrams to find factors can give students a more concrete method to approach factoring. Students look for and make use of structure (MP7) when they use diagrams to expand products and to complete partially filled-in diagrams. ### Learning Goals Teacher Facing • Recognize that two of the terms from the expanded form of quadratics are opposites. • Use a diagram to multiply expressions ### Student Facing • Let’s explore multiplication strategies. Building Towards
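The area-model idea behind these diagrams can be illustrated with a small sketch (the numbers are hypothetical; the lesson itself uses drawn diagrams):

```python
# the "diagram" for (a+b)(c+d) is just its four partial products
def area_model(a, b, c, d):
    parts = [a * c, a * d, b * c, b * d]  # the four cells of the diagram
    return parts, sum(parts)

parts, total = area_model(3, 2, 5, 4)  # (3+2)(5+4)
```

Summing the four cells always reproduces the product of the two factors.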
## anonymous 4 years ago Hmm.. I'm having difficulties differentiating this function: $$\ \frac{x}{x+\frac{c}{x}}$$. Help, please! 1. anonymous function of x. c and z are constants. Direct quotient rule 2. anonymous @Mathmuse The fraction is $$\ \Huge \frac{c}{x}.$$ 3. anonymous my bad, could not see it properly. In that case, get a common denominator of x on the bottom. 4. anonymous Could you help me with that? 5. anonymous Could I substitute the fraction with $$\ \huge c^{-x} ?$$ 6. anonymous $\frac{x}{x + \frac{c}{x}}=\frac{x}{(\frac{x^2+c}{x})}=\frac{x}{1}\frac{x}{(x^2+c)}$ 7. anonymous So how do I use the quotient rule to differentiate this function? 8. anonymous yes 9. anonymous right. Or, because I always mess up the quotient rule, convert it to product rule 10. anonymous $\frac{x}{x^2+c}=x*(x^2+c)^{-1}$ 11. anonymous both will work 12. anonymous $(\frac{ x^2 }{ x^2+c })'=\frac{ (x^2)'(x^2+c)-(x^2+c)'x^2 }{ (x^2+c)^2 }$ 13. anonymous $\frac{ 2x(x^2+c)-2x(x^2) }{ (x^2+c)^2 }=\frac{ 2xc }{ (x^2+c)^2 }$ 14. anonymous Where did $$\ \Huge x^2+c$$ come from?? 15. anonymous @Mathmuse did not write $x^2$ as the numerator 16. anonymous ? 17. anonymous for shame 18. anonymous $x+\frac{ c }{ x }=\frac{ x^2+c }{ x }$ 19. anonymous So how does that become x^2+c? 20. anonymous $x \div \frac{ x^2+c }{ x }=x \times \frac{ x }{ x^2+c }=\frac{ x^2 }{ x^2+c }$ 21. anonymous so we derived this
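A quick numerical cross-check of the thread's result (the sample values $x=1.5$, $c=2$ are arbitrary):

```python
# d/dx [ x / (x + c/x) ] should equal 2*c*x / (x^2 + c)^2, as derived above
def f(x, c):
    return x / (x + c / x)

def df_formula(x, c):
    return 2 * c * x / (x * x + c) ** 2

x, c, h = 1.5, 2.0, 1e-6
numeric = (f(x + h, c) - f(x - h, c)) / (2 * h)  # central difference
```

The central-difference estimate agrees with the closed-form derivative to high precision.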
## Duke Mathematical Journal ### Characters of Springer representations on elliptic conjugacy classes #### Abstract For a Weyl group $W$, we investigate simple closed formulas (valid on elliptic conjugacy classes) for the character of the representation of $W$ in the homology of a Springer fiber. We also give a formula (valid again on elliptic conjugacy classes) of the $W$-character of an irreducible discrete series representation with real central character of a graded affine Hecke algebra with arbitrary parameters. In both cases, the Pin double cover of $W$ and the Dirac operator for graded affine Hecke algebras play key roles. #### Article information Source: Duke Math. J., Volume 162, Number 2 (2013), 201-223. Dates: First available in Project Euclid: 24 January 2013. https://projecteuclid.org/euclid.dmj/1359036934 Digital Object Identifier: doi:10.1215/00127094-1961735 Mathematical Reviews number (MathSciNet): MR3018954 Zentralblatt MATH identifier: 1260.22012 #### Citation Ciubotaru, Dan M.; Trapa, Peter E. Characters of Springer representations on elliptic conjugacy classes. Duke Math. J. 162 (2013), no. 2, 201--223. doi:10.1215/00127094-1961735. https://projecteuclid.org/euclid.dmj/1359036934
1. ## volume integral Hi everyone, Find the area under the curve: 4x + y^2 = 12, y = x, with intersection points (-6,-6) and (2,2). I solved for x and got x = -(1/4)y^2 + 3. I then solved the integral from -6 to 2 of (-(1/4)y^2 + 3 - y) dy. Once I plugged in my values I got a negative value: 5/3 - 18 + 6 + 1. But it can't be a negative area. Thank you very much 2. Originally Posted by chocolatelover Hi everyone, Find the area under the curve: 4x + y^2 = 12, y = x, (-6,-6), (2,2). I solved for x and got -(1/4)y^2 + 3. I then solved the integral from -6 to 2 of (-(1/4)y^2 + 3 - y) dy. Once I plugged in my values I got a negative value: 5/3 - 18 + 6 + 1. But it can't be a negative area. Thank you very much As I mentioned in my other post, what does "Find the area under the curve: 4x + y^2 = 12, y = x, (-6,-6), (2,2)" mean?? Are you again trying to find the area between two curves? -Dan
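As an aside (not part of the original thread): numerically evaluating the integral the poster set up shows the area is positive, 64/3 ≈ 21.33, so the negative value came from an arithmetic slip rather than from the setup:

```python
# midpoint-rule check of  integral from -6 to 2 of (3 - y^2/4 - y) dy
def integrand(y):
    return 3 - y * y / 4 - y

a, b, n = -6.0, 2.0, 100000
h = (b - a) / n
area = sum(integrand(a + (i + 0.5) * h) for i in range(n)) * h
```

The antiderivative check gives the same value: [3y - y^3/12 - y^2/2] from -6 to 2 = 10/3 - (-18) = 64/3.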
Question: bplapply with progressbar

0 · 9 months ago · wt2150 wrote:

Hello, I am replacing foreach with BiocParallel in my package. I wonder whether I could keep the same progress-bar setup for bplapply as I had with foreach. (This is the same problem as listed in https://github.com/Bioconductor/BiocParallel/issues/54 and https://stat.ethz.ch/pipermail/bioc-devel/2017-December/012572.html.)

First I created a simple example in R:

nrow=10000
ncol=500
matrixx=matrix(runif(nrow*ncol),nrow=nrow,ncol=ncol)

Using foreach with progress bar:

library(parallel)
library(doSNOW)
library(foreach)
cluster=makeCluster(5,type='SOCK')
registerDoSNOW(cluster)
getDoParWorkers()
iterations<-nrow
pb<-txtProgressBar(max=iterations,style=3)
progress<-function(n)setTxtProgressBar(pb,n)
opts<-list(progress=progress)
BB_parmat<-foreach(geneind=1:dim(matrixx)[1],.combine=c,.options.snow=opts)%dopar%{
  return(mean(matrixx[geneind,]))
}
close(pb)
stopCluster(cluster)

Using bplapply with progress bar (a potential problem is that the progress bar shows 0% for a long time and then suddenly jumps):

library(BiocParallel)
BPPARAM=SnowParam(workers=5,progressbar=TRUE,type='SOCK')
funnn<-function(geneind,matrixx){
  return(mean(matrixx[geneind,]))
}
suppressWarnings(temp_result<-bplapply(seq(1,dim(matrixx)[1]),funnn,matrixx,BPPARAM=BPPARAM))

I prefer the progress bar in the foreach case: it advances roughly 1% at a time, so I get a basic idea of the running time of the whole code. In the bplapply case, the progress bar jumps suddenly. My question is: how can I achieve the same progress bar behaviour as in the foreach case using bplapply? Thank you very much!
Best wishes, Wenhao

bplapply progressbar · 320 views · modified 9 months ago by Martin Morgan · written 9 months ago by wt2150

Martin Morgan wrote:

The effect can be achieved by setting the number of tasks, e.g.,

BPPARAM=SnowParam(workers=5, tasks=20, progressbar=TRUE, type='SOCK')

updates the progress bar 20 times. The way bplapply works is that, by default, it splits the initial task list (in your case the sequence of row indexes) into equal components for each worker -- each worker gets 10000 / 5 = 2000 rows. These are sent to the workers, who report back when done. When each worker finishes, the progress bar advances. The progress bar advances in 5 steps, but since the workers all finish at about the same time it seems like the progress bar jumps to complete.

The effect of setting tasks = 20 is to divide the 10000 elements into 10000 / 20 = 500 rows per task, to send 500 x 5 to the first five workers, and, as each worker finishes, the progress bar is updated and the next task of 500 rows is sent to that worker. The progress bar moves across the screen more smoothly, but the computation is actually less efficient (because there is more communication between the manager and the workers) and takes longer. If most of the time is spent in computation anyway, then the extra cost of communication is small and the trade-off may be worth it.

Usually of course it is better to vectorize than to parallelize, so in the above trivial example simply rowMeans(matrixx). (The comment on your question was from a spammer, and was deleted.)

Thank you Martin! Is it possible to allow bplapply to pass arguments to the function txtProgressBar? If so, I could specify 'max=10000' so that the progress bar would be element based. For this toy example rowMeans definitely works better; I just used it for illustration. By the way, BiocParallel is very good, thank you for your work!
Under the current scheme, it will not help to make the progress bar element based, because it would be reporting progress on the workers, where no one is looking! The current implementation does not allow progress bar options to be set; you could open an issue (no promises for an update, though) at https://github.com/Bioconductor/BiocParallel .

Picking up on this answer, is it possible to have bplapply show a progress bar similar to pbapply when using SerialParam?

1 · I'm not sure that I understand the question; this

> param = SerialParam(progress=TRUE)
> res = bplapply(1:10, function(i) Sys.sleep(1), BPPARAM=param)
  |======================================================================| 100%

works?
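The granularity trade-off Martin describes is not R-specific. As an illustration, here is a Python sketch of the same idea (an analogue for explanation only, not BiocParallel itself; the helper names are made up for the example): splitting the work into more chunks gives more progress-bar updates, at the cost of more scheduling overhead per chunk.

```python
from concurrent.futures import ThreadPoolExecutor

def row_mean(row):
    return sum(row) / len(row)

def chunked_map(rows, n_chunks, workers=5):
    """Map row_mean over rows in n_chunks batches; one progress 'tick' per batch.

    More chunks -> smoother progress reporting, but more per-batch overhead,
    mirroring the tasks= argument of SnowParam.
    """
    size = (len(rows) + n_chunks - 1) // n_chunks
    chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
    results, ticks = [], 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for batch in pool.map(lambda c: [row_mean(r) for r in c], chunks):
            results.extend(batch)
            ticks += 1  # a real progress bar would advance here
    return results, ticks

rows = [[float(i + j) for j in range(4)] for i in range(100)]
means, ticks = chunked_map(rows, n_chunks=20)
# with 20 chunks the "bar" advances 20 times instead of once per worker
```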
mersenneforum.org > 332.2M - 333.9M (aka 100M digit range)

2008-11-14, 22:44 #34
petrw1 ("1976 Toyota Corona years forever!", "Wayne", Nov 2006, 5²·211 Posts)

Quote: Originally Posted by CADavis
Yeah I switched to Vista 64-bit and it's going approx. twice as fast :-) Is there a way to make the benchmark go to 74-bit factors?

Whoo Hoo I just ordered a new Q9550 with Vista64 ... due Tuesday. Watch me factor now.

2008-11-16, 05:19 #35
CADavis (Jul 2005, Des Moines, Iowa, USA, 2·5·17 Posts)

I think I'm changing my mind a bit. I figured out how to use the Factoring Effort page to make a worktodo file, and I think I want to do a wide range to lower levels. So what I'm thinking now is to take my current range to 68 bits, then to 70 bits (maybe not though), then start on a new range from 61 - 64 bits. The new range will be starting at 332900021 and eventually go to 332999959, which is all only done to 61 bits. What do you think, Uncwilly?

2008-11-16, 06:45 #36
Uncwilly ("6809 > 6502", Aug 2003, 10,891 Posts)

Fair enough. I haven't been tracking that area at the moment. I am currently trying to clear areas ahead of the LL'ers. I have a few cores doing expos (even those assigned, but showing no signs of more TF) up to 73. It looks like someone found a factor up in the 73 bit range. :)

2008-11-19, 06:28 #37
jinydu (Dec 2003, "Hopefully Near M48", 2·3·293 Posts)

I've found the usernames of two 100M digit LL testers:
simcon: M332193833 and M332193859
jr007j: M332194253 and M332194277

2008-11-19, 19:06 #38
Uncwilly

Quote: Originally Posted by jinydu
I've found the usernames of two 100M digit LL testers: simcon: M332193833 and M332193859; jr007j: M332194253 and M332194277

There are several others that have expos assigned to them. Some are active, some seem not to be.
2008-11-21, 06:35 #39
Uncwilly ("6809 > 6502", Aug 2003, 10,891 Posts)

Here is a progress status report for the range from 332192831 to 332259937

Code:
Date 11/20/2008
Average bit depth for first 100 expos    71.46
Average bit depth for first 1000 expos   67.96
100th active expo (no factor found)      332197793
1000th active expo (no factor found)     332239723
Unitless total effort number             813536
Number of first 100 expos to 2^71        45
Number of first 1000 expos to 2^70       275

Code:
Bit  # to bit level or higher
61   1527
62   1259
63   1258
64   1258
65   900
66   849
67   836
68   571
69   280
70   275
71   107
72   84
73   72
74   48
75   10
76   1
77   1

2008-11-22, 20:29 #40
James Heinrich ("James Heinrich", May 2004, ex-Northern Ontario, 4,073 Posts)

I grabbed M332203901 a couple weeks ago. So far I've completed TF from 2^71-2^75; as I write this, P-1 stage 1 is 60% done (P-1 both stages should be done around 10-Dec), then I'll (probably) let it continue TF up to 2^77, which will take me into Feb 2009(?). I'll unreserve it once it gets to LL time; I don't have the patience to wait until May 2013 to confirm it's not prime.

2008-11-22, 20:50 #41
James Heinrich

I noticed that M332252939 to M332292827 have currently only been TF'd to 2^61, so I figured it would only take me about 12 hours to bring those 1000 exponents up to 2^63, so probably by the time anyone reads this they'll all be at 2^63. Hopefully I'm not stepping on anyone's toes.

2008-11-23, 12:30 #42
James Heinrich

Quote: Originally Posted by James Heinrich
I noticed that M332252939 to M332292827 have currently only been TF'd to 2^61

1000 exponents TF'd to 2^63; 37 factors found.
2008-11-24, 00:42 #43
Uncwilly ("6809 > 6502", Aug 2003, 10,891 Posts)

Quote: Originally Posted by James Heinrich
I noticed that M332252939 to M332292827 have currently only been TF'd to 2^61, so I figured it would only take me about 12 hours to bring those 1000 exponents up to 2^63, so probably by the time anyone reads this they'll all be at 2^63. Hopefully I'm not stepping on anyone's toes.

You stepped on my toes a bit there. But no problem. I was taking the range to M332259937 up to 65. I saw your message and dropped the range. I put the core on to a different range to a higher level. I generally peek at the exponent status report to see if it has been assigned (and then if it has shown activity; a lot of the LL'ers have shown no progress on the supporting TF or P-1).

Last fiddled with by Uncwilly on 2008-11-24 at 00:46

2008-11-26, 05:47 #44
CADavis (Jul 2005, Des Moines, Iowa, USA, 2·5·17 Posts)

Quote: Originally Posted by CADavis
then start on a new range from 61 - 64 bits the new range will be starting at 332900021 and eventually go to 332999959,

Done, 332.9M - 333M to 64 bits, 116 factors found.
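The "TF'd to 2^61 / 2^63" talk in the thread refers to trial factoring: testing candidate factors of a Mersenne number M_p = 2^p - 1 up to a given bit level. A minimal sketch of the idea (illustrative only, nothing like the optimized code GIMPS clients actually run): candidate factors must have the form q = 2kp + 1 with q ≡ ±1 (mod 8), and q divides M_p exactly when 2^p ≡ 1 (mod q).

```python
def tf_mersenne(p, bit_limit):
    """Trial-factor M_p = 2**p - 1 up to the given bit level.

    Any prime factor q of M_p (p an odd prime) satisfies q = 2*k*p + 1
    and q % 8 in (1, 7), which prunes the candidate list enormously.
    Returns the smallest such factor below 2**bit_limit, or None.
    """
    k = 1
    while True:
        q = 2 * k * p + 1
        if q.bit_length() > bit_limit:
            return None  # "no factor found" up to this bit level
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
        k += 1

print(tf_mersenne(11, 20))  # 23: indeed 2**11 - 1 = 2047 = 23 * 89
print(tf_mersenne(23, 11))  # 47: 2**23 - 1 is divisible by 47
```

For the 100M-digit exponents discussed above, each extra bit level roughly doubles the work, which is why the posters track ranges by bit depth.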
# Derive Definition of Exponential Function (Power Series) From Compound Interest

This example demonstrates how the formula for compound interest can be used to derive the power series definition of the exponential function:

eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + …

From a high-level perspective, compound interest represents an iterative approach to modeling exponential growth, and the exponential function is a natural limit to how fast something can "continuously" grow.

## Step 1

The compound interest formula is A = P(1 + r/n)^(nt), where the variables have the following meanings:

| Expression | Description |
| --- | --- |
| A | Accumulated amount. |
| P | Principal amount. |
| r | Rate of interest expressed as a decimal value. |
| n | Frequency of investment per time period. |
| t | Time elapsed. |

Then, let's look at what happens as the frequency of investment, n, increases. For a fixed interest rate, principal amount, and elapsed time, the formula first gives the accumulated amount for a yearly investment strategy (interest applied once per year). Next, with the same interest rate, principal amount, and time elapsed, we can calculate the accumulated amount for a quarterly investment strategy (four times per year). Finally, for the same inputs, we can calculate the value for a monthly investment strategy (twelve times per year). These examples demonstrate that as the frequency of investment goes up, so does the accumulated amount. The question we are trying to answer is: "what happens as the frequency of the growth rate continues to increase?" Is there a natural limit, or does it grow in an unbounded manner?

## Step 2

We can pose this question mathematically by taking the limit of the function as n goes to infinity. However, finding the limit at this stage poses some challenges, so we first transform the function into a simpler, more abstract form.
Substitute the variable m into the expression, where m represents the total number of times the interest is applied and is given by m = nt. This changes the function to the equivalent form:

A = P(1 + rt/m)^m

Then, we make two observations. First, observe that P scales the output (growth curve) of the function vertically. Second, observe that r scales the input of the function horizontally. Because we are interested in modeling the growth curve (shape) of the function, and because it makes the math cleaner later on, we can set P = 1 and r = 1. To finish the transformation, from now on let n represent the total number of times interest is applied and replace t with the more generic variable x:

f(x) = (1 + x/n)^n

## Step 3

Take the limit of the function as n approaches infinity:

f(x) = lim (n→∞) (1 + x/n)^n

There are two reasonable approaches to taking the limit of this function. The first, shown on this page, is to expand the product as a series and look for patterns. The second, shown on another page, is to approximate a value for the limit.

Expanding the first few cases (n = 1, 2, 3, …) by hand is tedious; instead, we can use the binomial expansion, substituting a = 1 and b = x/n into each case, and align the results so that the powers of x line up. The math becomes pretty hairy at this point, and while we won't prove anything rigorously, clear patterns start to appear in the coefficients associated with each power as n grows. Take, for example, the coefficient of x², which is

C(n, 2) / n² = n(n − 1) / (2! · n²),

and which approaches 1/2! as n approaches infinity. The generalized pattern that emerges is that the coefficient in front of the k-th power of x approaches 1 over the factorial of k. This pattern leaves us with the infinite power series below, which represents the definition of the exponential function:

eˣ = 1 + x + x²/2! + x³/3! + … 

This is the same definition that can be derived using a Taylor series[1].
We can give this function the abbreviated name exp(x), and we have finished the derivation of the power series definition of the exponential function. If desired, the scalar values of the principal P and interest rate r can be substituted back into the formula to recover the ability to scale the function vertically and horizontally. This formula is the same as the population growth formula.
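The convergence claimed above is easy to see numerically. A quick sketch (not part of the original page) comparing the compounding expression (1 + x/n)^n with partial sums of the power series:

```python
import math

def compound(x, n):
    """(1 + x/n)**n: interest applied n times over the period."""
    return (1 + x / n) ** n

def exp_series(x, terms=25):
    """Partial sum of the power series: sum of x**k / k! for k < terms."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

x = 1.0
for n in (1, 4, 12, 365, 10**6):
    print(f"n={n:>7}: {compound(x, n):.8f}")
print(f"series:    {exp_series(x):.8f}")
print(f"math.exp:  {math.exp(x):.8f}")
```

Both approach e ≈ 2.71828, with the power series converging far faster than the compounding expression.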
Mathematics

# State true or false: Attempts to prove Euclid's fifth postulate using the other postulates and axioms led to the discovery of several other geometries.

True

##### SOLUTION
Euclid's fifth postulate states: if a straight line falling on two straight lines makes the interior angles on the same side together less than two right angles, then the two straight lines, if produced indefinitely, meet on that side on which the angles are less than two right angles. Many mathematicians tried to prove this postulate from the other postulates and axioms, but every attempt ended up assuming something that was itself equivalent to the fifth postulate. These failed attempts eventually led to the discovery of several non-Euclidean geometries (such as spherical and hyperbolic geometry), in which the fifth postulate does not hold. Hence the statement is true.

TRUE/FALSE Medium Published on 09th 09, 2020

#### Related Questions

Q1 TRUE/FALSE Medium
Write whether the following statement is True or False? Justify your answer: Euclidean geometry is valid only for curved surfaces.
• A. True
• B. False
Asked in: Mathematics - Introduction to Euclid's Geometry
1 Verified Answer | Published on 09th 09, 2020

Q2 Single Correct Easy
According to Euclid, a surface has ____.
• A. Length but no breadth and thickness
• B. No length, no breadth and no thickness
• C. Length, breadth and thickness
• D. Length and breadth but no thickness
Asked in: Mathematics - Introduction to Euclid's Geometry
1 Verified Answer | Published on 09th 09, 2020

Q3 Subjective Medium
Give an example for the following axiom from your experience: (b) the whole is greater than the part.
Asked in: Mathematics - Introduction to Euclid's Geometry
1 Verified Answer | Published on 09th 09, 2020

Q4 Single Correct Medium
STATEMENT-1: Given positive integers a and b, there exist whole numbers q and r satisfying a = bq + r, 0 ≤ r < b.
STATEMENT-2: Any positive odd integer is of the form 6q+1, or 6q+3, or 6q+5, where q is some integer.
• A. Statement-1 is True, Statement-2 is True; Statement-2 is a correct explanation for Statement-1
• B. Statement-1 is True, Statement-2 is False
• C. Statement-1 is False, Statement-2 is True
• D. Statement-1 is True, Statement-2 is True; Statement-2 is NOT a correct explanation for Statement-1
Asked in: Mathematics - Introduction to Euclid's Geometry
1 Verified Answer | Published on 09th 09, 2020

Q5 Single Correct Medium
The Euclidean geometry is valid only for figures in the plane.
• A. False
• B. Ambiguous
• C. Data Insufficient
• D. True
Asked in: Mathematics - Introduction to Euclid's Geometry
1 Verified Answer | Published on 09th 09, 2020
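Statement-1 in Q4 above is Euclid's division lemma, which is easy to check computationally. A small illustrative sketch (the helper name is made up for this example):

```python
def euclid_division(a, b):
    """Euclid's division lemma: for positive integers a and b, there exist
    whole numbers q and r with a = b*q + r and 0 <= r < b."""
    q, r = divmod(a, b)
    assert a == b * q + r and 0 <= r < b
    return q, r

print(euclid_division(17, 5))    # (3, 2): 17 = 5*3 + 2
print(euclid_division(455, 42))  # (10, 35): 455 = 42*10 + 35
```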
NCERT Solutions for Class 10 Science Chapter 13 Magnetic Effects of Electric Current

In this article, you will find NCERT solutions for Class 10 Science Chapter 13, Magnetic Effects of Electric Current. The questions and answers were solved by academic experts to help you understand the chapter better, and students preparing for the Class 10 board exams can use them for practice.

Before getting into the solutions, here is an overview of the topics and subtopics of the chapter:

1. Magnetic Effects Of Electric Current
2. Magnetic Field And Field Lines
3. Magnetic Field Due To A Current-Carrying Conductor
4. Force On A Current-Carrying Conductor In A Magnetic Field
5. Electric Motor
6. Electromagnetic Induction
7. Electric Generator
8. Domestic Electric Circuits

The solutions are available as a free PDF download in both Hindi and English medium for CBSE, Uttarakhand, Bihar, MP Board, Gujarat Board, and UP Board students using NCERT books based on the updated CBSE syllabus for the 2019-20 session.
NCERT Solutions for Class 10 Science Chapter 13 Intext Questions Page Number: 224 Question 1 Why does a compass needle get deflected when brought near a bar magnet ? The magnetic field of the magnet exerts force on both the poles of the compass needle. The forces experienced by the two poles are equal and opposite. These two forces form a couple which deflects the compass needle. Page Number: 228 Question 1 Draw magnetic field lines around a bar magnet. Question 2 List the properties of magnetic lines of force. Properties of magnetic lines of force : • The magnetic field lines originate from the north pole of a magnet and end at its south pole. • The magnetic field lines become closer to each other near the poles of a magnet but they are widely separated at other places. • Two magnetic field lines do not intersect one another. Question 3 Why don’t two magnetic lines of force intersect each other ? This is due to the fact that the resultant force on a north pole at any point can be only in one direction. But if the two magnetic field lines intersect one another, then the resultant force on north pole placed at the point of intersection will be along two directions, which is not possible. Page Number: 229 – 230 Question 1 Consider a circular loop of wire lying on the plane of the table. Let the current pass through the loop clockwise. Apply the right hand rule to find out the direction of the magnetic field inside and outside the loop. As shown in figure alongside, each section of wire produces its concentric set of lines of force. By applying right hand thumb rule, we find that all the sections produce magnetic field downwards at all points inside the loop while at the outside points, the field is directed upwards. Therefore, the magnetic field acts normally into the plane of the paper at the points inside the loop and normally out of the plane of paper at points outside the loop. Question 2 The magnetic field in a given region is uniform. 
Draw a diagram to represent it. [CBSE 2013, 2014] A uniform magnetic field in a region is represented by drawing parallel straight lines, all pointing in the same direction. For example, the uniform magnetic field which exists inside a current-carrying solenoid can be represented by parallel straight lines pointing from its S-pole to N-pole (as shown in figure). Question 3 Choose the correct option. The magnetic field inside a long straight solenoid carrying current (i) is zero (ii) decreases as we move towards its end (iii) increases as we move towards its end (iv) is the same at all points (iv) Is the same at all points. Page Number: 231 – 232 Question 1 Which of the following properties of a proton can change while it moves freely in a magnetic field? (There may be more than one correct answer.) (i) Mass (ii) Speed (iii) Velocity (iv) Momentum The correct options are (iii) velocity, (iv) momentum. Question 2 In Activity 13.7 how do we think the displacement of rod AB will be affected if (i) current in rod AB is increased (ii) a stronger horse-shoe magnet is used; and (iii) length of the rod AB is increased? (i) When the current in the rod AB is increased, the force exerted on the conductor increases, so the displacement of the rod increases. (ii) When a stronger horse-shoe magnet is used, the magnitude of the magnetic field increases. This increases the force exerted on the rod and the displacement of the rod. (iii) When the length of the rod AB is increased, the force exerted on the conductor increases, so the displacement of the rod increases. Question 3 A positively-charged particle (alpha particle) projected towards west is deflected towards north by a magnetic field. The direction of magnetic field is: (i) towards south (ii) towards east (iii) downward (iv) upward (iv) Upward. Here, the positively charged alpha particles are moving towards west, so the direction of current is also towards west.
The deflection is towards north, so the force is towards north. We are thus given that (i) the direction of current is towards west and (ii) the direction of force is towards north. Let us now hold the forefinger, middle finger and thumb of our left hand at right angles to one another. Adjust the hand in such a way that our middle finger points towards west (in the direction of current) and the thumb points towards north (in the direction of force). Now, if we look at our forefinger, it will be pointing upward. Since the direction of the forefinger gives the direction of the magnetic field, the magnetic field is in the upward direction. Page Number: 233 Question 1 State Fleming's left hand rule. [CBSE 2018] Fleming's left hand rule: Stretch the first finger, the middle finger and the thumb of your left hand mutually perpendicular to each other in such a way that the first finger represents the direction of the magnetic field and the middle finger represents the direction of the current in the conductor; then the thumb will represent the direction of motion of the conductor. Question 2 What is the principle of an electric motor? [CBSE 2018] A motor works on the principle of the magnetic effect of current. When a rectangular coil is placed in a magnetic field and current is passed through it, a force acts on the coil which rotates it continuously. When the coil rotates, the shaft attached to it also rotates. In this way the electrical energy supplied to the motor is converted into the mechanical energy of rotation. Question 3 What is the role of the split ring in an electric motor? The split ring reverses the direction of current in the armature coil after every half rotation, i.e., it acts as a commutator. The reversed current reverses the direction of the forces acting on the two arms of the armature after every half rotation. This allows the armature coil to rotate continuously in the same direction. Page Number: 236 Question 1 Explain different ways to induce current in a coil.
Different ways to induce current in a coil are: 1. moving a magnet towards or away from the coil or vice-versa, and 2. changing the current in a neighbouring coil. Page Number: 237 Question 1 State the principle of an electric generator. The electric generator works on the principle that when a straight conductor is moved in a magnetic field, a current is induced in the conductor. In an electric generator, a rectangular coil is made to rotate rapidly in the magnetic field between the poles of a horse-shoe type magnet. When the coil rotates, it cuts the magnetic field lines, due to which a current is produced in the coil. Question 2 Name some sources of direct current. Some of the sources of direct current are dry cells, button cells and lead accumulators. Question 3 Which sources produce alternating current? Alternating current is produced by the AC generators of nuclear power plants, thermal power plants, hydroelectric power stations, etc. Question 4 Choose the correct option: A rectangular coil of copper wires is rotated in a magnetic field. The direction of the induced current changes once in each: (i) two revolutions (ii) one revolution (iii) half revolution (iv) one-fourth revolution (iii) Half revolution. Page Number: 238 Question 1 Name two safety measures commonly used in electric circuits and appliances. (i) Earthing and (ii) Electric fuse. Question 2 An electric oven of 2 kW power rating is operated in a domestic electric circuit (220 V) that has a current rating of 5 A. What result do you expect? Explain. The electric oven draws a current given by I = P/V = 2000 W / 220 V ≈ 9.09 A. Thus the electric oven draws a current much more than the current rating of 5 A; that is, the circuit is overloaded. Due to excessive current, the fuse wire will blow and the circuit will break. What precautions should be taken to avoid the overloading of domestic electric circuits?
To avoid the overloading of domestic electric circuits, the following precautions should be taken: (i) The wires used in the circuit must be coated with good insulating materials like PVC, etc. (ii) The circuit must be divided into different sections and a safety fuse must be used in each section. (iii) High power appliances like air-conditioners, refrigerators, water heaters, etc. should not be used simultaneously. NCERT Solutions for Class 10 Science Chapter 13 Textbook Chapter End Questions Question 1 Which of the following correctly describes the magnetic field near a long straight wire? (i) the field consists of straight lines perpendicular to the wire (ii) the field consists of straight lines parallel to the wire (iii) the field consists of radial lines originating from the wire (iv) the field consists of concentric circles centred on the wire (iv) The field consists of concentric circles centred on the wire Question 2 The phenomenon of electromagnetic induction is (i) the process of charging a body (ii) the process of generating magnetic field due to a current passing through a coil (iii) producing induced current in a coil due to relative motion between a magnet and the coil (iv) the process of rotating a coil of an electric motor (iii) Producing induced current in a coil due to relative motion between a magnet and the coil Question 3 The device used for producing electric current is called a (i) generator (ii) galvanometer (iii) ammeter (iv) motor (i) Generator.
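The oven example above reduces to the arithmetic I = P/V. A tiny sketch of the check (the function name is made up for the illustration):

```python
def appliance_current(power_w, voltage_v):
    """I = P / V for a resistive appliance on a domestic supply."""
    return power_w / voltage_v

oven_amps = appliance_current(2000, 220)  # 2 kW oven on a 220 V supply
overloaded = oven_amps > 5                # circuit rated for only 5 A
print(round(oven_amps, 2), overloaded)    # ~9.09 A, so the fuse should blow
```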
Question 4 The essential difference between an AC generator and a DC generator is that (i) an AC generator has an electromagnet while a DC generator has a permanent magnet (ii) a DC generator will generate a higher voltage (iii) an AC generator will generate a higher voltage (iv) an AC generator has slip rings while the DC generator has a commutator (iv) AC generator has slip rings while the DC generator has a commutator Question 5 At the time of short circuit, the current in the circuit (i) reduces substantially (ii) does not change (iii) increases heavily (iv) varies continuously (iii) Increases heavily. Question 6 State whether the following statements are True or False. (i) An electric motor converts mechanical energy into electrical energy. (ii) An electric generator works on the principle of electromagnetic induction. (iii) The field at the centre of a long circular coil carrying current will be parallel straight lines. (iv) A wire with a green insulation is usually the live wire of an electric supply. (i) False (ii) True (iii) True (iv) False. Question 7 List three sources of magnetic fields. (i) Current carrying conductor (ii) Electromagnets (iii) Permanent magnets Question 8 How does a solenoid behave like a magnet? Can you determine the north and south poles of a current-carrying solenoid with the help of a bar magnet? Explain. A solenoid behaves like a magnet in the following ways. • The magnetic field produced by a current carrying solenoid is very much similar to that of a bar magnet. • Like a bar magnet, one end of the solenoid has N-polarity while the other end has S-polarity. To determine the north and south poles, we bring the N-pole of the bar magnet near one end of the solenoid. If there is an attraction, then that end of the solenoid has south polarity and the other has north polarity. If there is a repulsion, then that end of the solenoid has north polarity and the other end has south polarity, because similar poles repel each other.
Question 9 When is the force experienced by a current-carrying conductor placed in a magnetic field largest? When the conductor carries current in a direction perpendicular to the direction of the magnetic field, the force experienced by the conductor is largest. Question 10 Imagine that you are sitting in a chamber with your back to one wall. An electron beam, moving horizontally from the back wall towards the front wall, is deflected by a strong magnetic field to your right side. What is the direction of the magnetic field? Here the electron beam is moving from our back wall to the front wall, so the direction of current will be in the opposite direction, from the front wall towards the back wall, i.e. towards us. The direction of deflection (or force) is towards our right side. We now know two things: • the direction of current is from the front towards us, and • the direction of force is towards our right side. Let us now hold the forefinger, middle finger and thumb of our left hand at right angles to one another. We now adjust the hand in such a way that our middle finger points towards us (in the direction of current) and the thumb points towards the right side (in the direction of force). Now, if we look at our forefinger, it will be pointing vertically downwards. Since the direction of the forefinger gives the direction of the magnetic field, the magnetic field is in the vertically downward direction. Question 11 Draw a labelled diagram of an electric motor. Explain its principle and working. What is the function of a split ring in an electric motor? Electric Motor: The device used to convert electrical energy to mechanical energy is called an electric motor. It is used in fans, machines, etc. Principle: An electric motor works on the principle of the force experienced by a current-carrying conductor in a magnetic field. The two forces on the opposite sides of the coil are equal and opposite.
Since they act along different lines, they produce a rotational motion. Working of an electric motor: When current starts to flow, the coil ABCD is in the horizontal position. The current through the armature coil flows from A to B in the arm AB and from C to D in the arm CD. The direction of the force exerted on the coil can be found through Fleming's left hand rule. According to this rule, the force exerted on the part AB pushes the coil downwards, while the force exerted on the part CD pushes it upwards. In this way, these two forces, being equal and opposite, form a couple that rotates the coil in the anticlockwise direction. When the coil is in the vertical position, the brushes X and Y touch the gap of the commutator and the current in the coil is stopped. Though the current is stopped, the coil carries on turning past the vertical position due to its momentum. After half a rotation, the polarity of the commutator also changes, because now Q makes contact with brush X and P with brush Y. Therefore, the force now acts downwards on the arm that is in AB's former position and upwards on the arm in CD's former position, and thus again a couple is formed that keeps the coil rotating in the same direction. This process is repeated again and again, and the coil rotates as long as current flows through it. Function of split ring: The split ring in a motor acts as a commutator, i.e., it reverses the flow of current in the circuit, due to which the direction of the forces acting on the arms also reverses. Question 12 Name some devices in which electric motors are used. Electric motors are used in appliances like electric fans, washing machines, mixers, grinders, blenders, computers, MP3 players, etc. Question 13 A coil of insulated copper wire is connected to a galvanometer. What will happen if a bar magnet is (i) pushed into the coil (ii) withdrawn from inside the coil (iii) held stationary inside the coil?
[CBSE (Delhi) 2017, AICBSE 2016] (i) As a bar magnet is pushed into the coil, a momentary deflection is observed in the galvanometer, indicating the production of a momentary current in the coil. (ii) When the bar magnet is withdrawn from the coil, the deflection of the galvanometer is in the opposite direction, showing the production of an opposite current. (iii) When the bar magnet is held stationary inside the coil, there is no deflection in the galvanometer, indicating that no current is produced in the coil.

Question 14 Two circular coils A and B are placed close to each other. If the current in coil A is changed, will some current be induced in coil B? Give reason. Yes, some current will be induced in coil B. When the current in coil A is changed, the magnetic field lines linked with coil A, and hence with coil B, also change. This changing flux sets up an induced current in coil B.

Question 15 State the rule to determine the direction of (i) the magnetic field produced around a straight current-carrying conductor, (ii) the force experienced by a current-carrying straight conductor placed in a magnetic field which is perpendicular to it, and (iii) the current induced in a coil due to its rotation in a magnetic field. (i) Right-hand thumb rule: If the current-carrying conductor is held in the right hand such that the thumb points in the direction of the current, then the direction of the curl of the fingers gives the direction of the magnetic field. (ii) Fleming's left-hand rule: Stretch the forefinger, the middle finger and the thumb of the left hand mutually perpendicular to each other. If the forefinger points in the direction of the magnetic field and the middle finger in the direction of the current, then the thumb points in the direction of the force on the conductor.
(iii) Fleming's right-hand rule: Stretch the thumb, forefinger and middle finger of the right hand mutually perpendicular to each other. If the forefinger points in the direction of the magnetic field and the thumb in the direction of motion of the conductor, then the middle finger points in the direction of the current induced in the conductor.

Question 16 Explain the underlying principle and working of an electric generator by drawing a labelled diagram. What is the function of brushes? Principle: The electric generator is based on the principle of electromagnetic induction. When a coil is rotated in a magnetic field, the number of magnetic field lines through the coil changes. Due to this, a current is induced in the coil, whose direction can be found by Fleming's right-hand rule. Working: When the armature coil ABCD rotates in the magnetic field produced by the permanent magnets, it cuts through the magnetic lines of force. Due to the rotation of the armature coil, the magnetic flux linked with it changes and an electromotive force is induced in it. The direction of this induced electromotive force, or current, can be determined by using Fleming's right-hand rule. In the first half cycle the current flows in one direction through brush B1, and in the second half it flows in the opposite direction through brush B2. This process continues, so the current produced is alternating in nature. Function of brushes: The brushes, in contact with the rings, carry the current to the external circuit.

Question 17 When does an electric short circuit occur? In a domestic circuit, a short circuit occurs when the live and neutral wires come in direct contact with each other without any resistance in between. The resistance of the circuit becomes almost zero and an excessive current starts to flow through it.

Question 18 What is the function of an earth wire? Why is it necessary to earth metallic appliances? The earth wire is a safety measure that provides a low-resistance conducting path for the current.
Sometimes, due to excess heat or wear and tear, the live wire comes in direct contact with the metallic cover of an appliance, which can give an electric shock on touching it. To prevent such a shock, the metallic part is connected to the earth through a three-pin plug, so that the current flows to the earth the instant there is a short circuit. It is necessary to earth metallic appliances because it ensures that if there is any current leakage to the metallic cover, the potential of the appliance becomes equal to that of the earth, which is zero. As a result, the person handling the appliance will not get an electric shock.

NCERT Solutions for Class 10 Science Chapter 13 Magnetic Effects of Electric Current. Topics covered: magnetic field, field lines, field due to a current-carrying conductor, field due to a current-carrying coil or solenoid; force on a current-carrying conductor, Fleming's left-hand rule; electromagnetic induction, induced potential difference, induced current, Fleming's right-hand rule; direct current, alternating current, frequency of AC, advantage of AC over DC; domestic electric circuits. Board: CBSE. Textbook: NCERT. Class: 10. Subject: Science. Chapter: 13. Chapter Name: Magnetic Effects of Electric Current. Number of Questions Solved: 39. Category: NCERT Solutions.

Question 1 Why does a compass needle get deflected when brought near a bar magnet? Solution: A compass needle is, in fact, a small bar magnet. When it is brought near another bar magnet, like poles repel and unlike poles attract, so the needle gets deflected.

Question 2 Draw magnetic field lines around a bar magnet. Solution:

Question 3 List the properties of magnetic lines of force. Solution: a) Magnetic field lines are directed from the north pole towards the south pole outside the magnet. b) They do not cross each other. c) They are more crowded near the poles than in any other region of the field. d) They are closed curves.
e) In a uniform magnetic field, the lines of force are parallel to one another.

Question 4 Why don't two magnetic lines of force intersect each other? Solution: No two field lines are found to cross each other. If they did, it would mean that at the point of intersection the compass needle would point in two directions at once, which is not possible.

Question 5 Consider a circular loop of wire lying in the plane of the table. Let the current pass through the loop clockwise. Apply the right-hand rule to find out the direction of the magnetic field inside and outside the loop. Solution: At every point of a current-carrying loop, the concentric circles representing the magnetic field around it become larger and larger as we move away from the wire. By the time we reach the centre of the circular loop, the arcs of these big circles appear as straight lines. Applying the right-hand rule, for a clockwise current the field inside the loop points into the plane of the table, and outside the loop it points out of the table.

Question 6 The magnetic field in a given region is uniform. Draw a diagram to represent it. Solution:

Question 7 The magnetic field inside a long straight solenoid carrying current a) is zero b) decreases as we move towards its end c) increases as we move towards its end d) is the same at all points Solution: d) is the same at all points

Question 8 Which of the following properties of a proton can change while it moves freely in a magnetic field? a) Mass b) Speed c) Velocity d) Momentum Solution: c) Velocity and d) Momentum.

Question 9 How does the displacement of the current-carrying rod AB suspended in a magnetic field change if (a) the current in rod AB is increased? (b) a stronger horseshoe magnet is used? (c) the length of the rod AB is increased? Solution: (a) If the current in rod AB is increased, the force on it increases and hence the displacement of the rod increases. (b) If a stronger horseshoe magnet is used, a larger force is exerted and hence the displacement increases. (c) If the length of the rod AB within the field is increased, the force on it increases and the displacement of the rod also increases.
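The trends in Question 9 follow from the magnitude of the force on a straight conductor at right angles to a uniform field, F = BIL (the formula itself is beyond the Class 10 text, so treat this as background). A minimal Python sketch with illustrative values only:

```python
def force_on_rod(B, I, L):
    """Force (N) on a straight rod of length L (m) carrying current I (A)
    at right angles to a uniform magnetic field B (T): F = B * I * L."""
    return B * I * L

base = force_on_rod(0.5, 2.0, 0.1)               # 0.5 T, 2 A, 0.1 m
# Doubling the current, the field, or the length doubles the force,
# and a larger force means a larger displacement of the rod:
assert force_on_rod(0.5, 4.0, 0.1) == 2 * base   # stronger current
assert force_on_rod(1.0, 2.0, 0.1) == 2 * base   # stronger magnet
```

The proportionality is the whole point: each of the three changes in Question 9 scales one factor of F = BIL, so each increases the displacement.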
Question 10 A positively-charged particle projected towards west is deflected towards north by a magnetic field. The direction of the magnetic field is a) towards south b) towards east c) downward d) upward Solution: d) Upward. By Fleming's left-hand rule, with the current (motion of the positive charge) towards the west and the force towards the north, the magnetic field must point vertically upward.

Question 11 State Fleming's left-hand rule. Solution: Fleming's left-hand rule states: stretch the thumb, forefinger and middle finger of the left hand such that they are mutually perpendicular. If the forefinger points in the direction of the magnetic field and the middle finger in the direction of the current, then the thumb points in the direction of the motion of, or the force acting on, the conductor.

Question 12 What is the principle of an electric motor? Solution: The working of an electric motor is based on the fact that a current-carrying conductor placed in a magnetic field experiences a mechanical force. In the motor, when a current is passed through a rectangular coil of wire placed in a magnetic field, the coil rotates continuously.

Question 13 What is the role of the split ring in an electric motor? Solution: In an electric motor, the split ring acts as a commutator. A device that reverses the direction of flow of current through a circuit is called a commutator. The reversal of current also reverses the direction of the forces acting on the two arms AB and CD, so the coil continues to rotate in the same direction.

Question 14 Explain different ways to induce current in a coil. Solution: Current can be induced in a coil either by moving it in a magnetic field or by changing the magnetic field around it. The induced current is found to be the highest when the direction of motion of the coil is at right angles to the magnetic field. The process by which a changing magnetic field around a conductor induces a current in it is called electromagnetic induction.

Question 15 State the principle of an electric generator. Solution: A generator is also known as a dynamo.
It is a device used to convert mechanical energy into electrical energy. The mechanical energy is used to rotate a conductor in a magnetic field to produce electricity. It is an application of electromagnetic induction. An A.C. generator generates an alternating current, while a D.C. generator delivers a current which always flows in the same direction.

Question 16 Name some sources of direct current. Solution: Sources of direct current include a cell, a battery and a D.C. generator. In a D.C. generator, a split-ring type commutator is used: one brush is at all times in contact with the arm moving up in the field, while the other is in contact with the arm moving down. Thus a unidirectional current is produced.

Question 17 Which sources produce alternating current? Solution: An A.C. generator produces alternating current. It consists of a field magnet (a permanent magnet), an armature, slip rings and carbon brushes. After every half rotation, the polarity of the current in the respective arms changes. Such a current, which changes direction after equal intervals of time, is called an alternating current.

Question 18 A rectangular coil of copper wire is rotated in a magnetic field. The direction of the induced current changes once in each: a) two revolutions b) one revolution c) half revolution d) one-fourth revolution. Solution: c) half revolution.

Question 19 Name two safety measures commonly used in electric circuits and appliances. Solution: (i) An electric fuse protects electric circuits and appliances from possible damage by stopping the flow of unduly high electric current; the Joule heating that takes place in the fuse melts it and breaks the circuit. (ii) Earthing of metallic appliances provides a low-resistance path to the earth for any leakage current and protects the user from electric shock.

Question 20 An electric oven of 2 kW power rating is operated in a domestic electric circuit (220 V) that has a current rating of 5 A. What result do you expect? Explain.
Solution: Power of the oven, P = 2 kW = 2000 W; supply voltage, V = 220 V. Current drawn by the oven, I = P/V = 2000/220 ≈ 9.09 A, which is much more than the 5 A current rating of the circuit. (Equivalently, the maximum power the circuit can safely supply is P = VI = 220 × 5 = 1100 W = 1.1 kW, less than 2 kW.) The excessive current will cause the fuse to melt and break the circuit, so the oven cannot be operated on this circuit.

Question 21 What precaution should be taken to avoid the overloading of domestic electric circuits? Solution: A fuse is the most important safety device to avoid the overloading of domestic electric circuits. Also, too many appliances should not be connected to a single socket.

Question 22 Which of the following correctly describes the magnetic field near a long straight wire? (a) The field consists of straight lines perpendicular to the wire. (b) The field consists of straight lines parallel to the wire. (c) The field consists of radial lines originating from the wire. (d) The field consists of concentric circles centred on the wire. Solution: (d) The field consists of concentric circles centred on the wire.

Question 23 The phenomenon of electromagnetic induction is (a) the process of charging a body. (b) the process of generating magnetic field due to a current passing through a coil. (c) producing induced current in a coil due to relative motion between a magnet and the coil. (d) the process of rotating a coil of an electric motor. Solution: (c) producing induced current in a coil due to relative motion between a magnet and the coil.

Question 24 The device used for producing electric current is called a (a) generator. (b) galvanometer. (c) ammeter. (d) motor. Solution: (a) generator.

Question 25 The essential difference between an AC generator and a DC generator is that: (a) AC generator has an electromagnet while a DC generator has permanent magnet. (b) DC generator will generate a higher voltage. (c) AC generator will generate a higher voltage.
(d) AC generator has slip rings while the DC generator has a commutator. Solution: (d) AC generator has slip rings while the DC generator has a commutator.

Question 26 At the time of short circuit, the current in the circuit (a) reduces substantially. (b) does not change. (c) increases heavily. (d) varies continuously. Solution: (c) increases heavily.

Question 27 State whether the following statements are true or false. Solution: (a) An electric motor converts mechanical energy into electrical energy – false (it converts electrical energy into mechanical energy). (b) An electric generator works on the principle of electromagnetic induction – true. (c) The field at the centre of a long circular coil carrying current will be parallel straight lines – true. (d) A wire with a green insulation is usually the live wire of an electric supply – false (green insulation is used for the earth wire).

Question 28 List three sources of magnetic fields. Solution: a) Magnetic field due to a current through a straight conductor. b) Magnetic field due to a current in a solenoid. c) Magnetic field due to a current through a circular loop.

Question 29 How does a solenoid behave like a magnet? Can you determine the north and the south poles of a current-carrying solenoid with the help of a bar magnet? Explain. Solution: A coil of many circular turns of insulated copper wire wrapped closely in the shape of a cylinder is called a solenoid. The pattern of the magnetic field lines around a current-carrying solenoid resembles that of a bar magnet: one end of the solenoid behaves as a magnetic north pole, while the other behaves as the south pole. Yes, the poles can be determined with a bar magnet: the end of the solenoid that repels the known north pole of the bar magnet is its north pole. The field lines inside the solenoid are in the form of parallel straight lines. This indicates that the magnetic field is the same at all points inside the solenoid; that is, the field is uniform inside the solenoid. The strong magnetic field produced inside a solenoid can be used to magnetise a piece of magnetic material, like soft iron, placed inside the coil. The magnet so formed is called an electromagnet.
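As background to Question 29 (this formula is beyond the Class 10 syllabus): the uniform field inside a long solenoid has magnitude B = μ0·n·I, where n is the number of turns per unit length. A small Python sketch with illustrative values:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, in T*m/A

def solenoid_field(turns, length_m, current_a):
    """Uniform magnetic field (T) inside a long solenoid: B = mu0 * (N/L) * I."""
    return MU_0 * (turns / length_m) * current_a

# Example: 500 turns wound over 0.25 m carrying 2 A (illustrative values):
B = solenoid_field(500, 0.25, 2.0)   # about 5.0e-3 T
```

Because B does not depend on the position inside the coil, the formula reflects the statement above that the field is the same at all points inside the solenoid.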
Question 30 When is the force experienced by a current-carrying conductor placed in a magnetic field the largest? Solution: The force experienced by a current-carrying conductor placed in a magnetic field is largest when the direction of the current is at right angles to the direction of the magnetic field.

Question 31 Imagine that you are sitting in a chamber with your back to one wall. An electron beam, moving horizontally from the back wall towards the front wall, is deflected by a strong magnetic field to your right side. What is the direction of the magnetic field? Solution: The direction of the magnetic field is vertically downward. The conventional current is opposite to the electron motion, and applying Fleming's left-hand rule with the force towards the right gives a downward field.

Question 32 Draw a labelled diagram of an electric motor. Explain its principle and working. What is the function of a split ring in an electric motor? Solution: A motor is a device that converts electrical energy into mechanical energy. Principle: An electric motor is based on the fact that when a current-carrying conductor is placed in a magnetic field, the conductor experiences a force whose direction is given by Fleming's left-hand rule. For example, when a rectangular coil is placed in a magnetic field and current is passed through it, a torque acts on the coil, which rotates it continuously. When the coil rotates, the shaft attached to it also rotates, and therefore the electrical energy supplied to the motor is converted into the mechanical energy of rotation. An electric motor consists of a rectangular coil ABCD of insulated copper wire, wound on a soft iron core called an armature. The coil is mounted between the poles of a magnet in such a way that it can rotate between the poles N and S. The two ends of the coil are soldered to the ends of a commutator, whose main function is to reverse the direction of the current flowing through the coil every time the coil passes the vertical position during its revolution. Working: Suppose the coil ABCD is initially in the horizontal position.
When the switch is in the ON position, the current enters the coil through the carbon brushes and the half ring 'A' of the commutator. The current flows in the direction DCBA and leaves via the half ring 'B'. In the side PQ of the coil, the current direction is from Q to P, towards the south, and the direction of the magnetic field is from the N pole to the S pole, towards the east. So, by applying Fleming's left-hand rule, we find that this side experiences a force in the upward direction. Similarly, the side SR of the coil experiences a downward force. Thus we have two parallel wires experiencing forces in opposite directions. They form a couple, tending to rotate the coil in the anticlockwise direction. When the coil goes beyond the vertical position, the two commutator half rings automatically change contact from one brush to the other. This reverses the direction of current through the coil, which in turn reverses the direction of the forces acting on the two sides of the coil. The sides of the coil are interchanged, but the coil continues to rotate in the same anticlockwise direction. This process is repeated again and again, and the coil continues to rotate as long as the current is passing.

Question 33 Name some devices in which electric motors are used. Solution: Electric fans, refrigerators, mixers, washing machines, computers, MP3 players, etc., are some devices in which electric motors are used.

Question 34 A coil of insulated copper wire is connected to a galvanometer. What will happen if a bar magnet is (i) pushed into the coil, (ii) withdrawn from inside the coil, (iii) held stationary inside the coil? Solution: (i) A deflection is observed in the galvanometer due to the current induced by the changing (increasing) magnetic flux through the turns of the coil connected to the galvanometer. (ii) A deflection is again observed in the galvanometer: when the magnet is pulled out, the flux linked with the coil due to the bar magnet decreases, and a current flows in the coil to oppose the change in flux.
The deflection is in the opposite direction compared with the previous case. (iii) No deflection is observed in the galvanometer, because the flux linked with the coil due to the magnetic field remains constant, so no current is induced by the bar magnet.

Question 35 Two circular coils A and B are placed close to each other. If the current in coil A is changed, will some current be induced in coil B? Give reason. Solution: Yes. If the current in coil A is changed, some current will be induced in coil B, because the changing current in coil A changes the magnetic field around both coils, and this changing flux through coil B induces a current in it.

Question 36 State the rule to determine the direction of a (i) magnetic field produced around a straight conductor carrying current, (ii) force experienced by a current-carrying straight conductor placed in a magnetic field which is perpendicular to it, and (iii) current induced in a coil due to its rotation in a magnetic field. Solution: (i) Right-hand thumb rule: Imagine that we are holding a current-carrying straight conductor in the right hand such that the thumb points in the direction of the current. Then our fingers wrap around the conductor in the direction of the field lines of the magnetic field. This is known as the right-hand thumb rule. (ii) Fleming's left-hand rule: Stretch the thumb, forefinger and middle finger of the left hand such that they are mutually perpendicular. If the forefinger points in the direction of the magnetic field and the middle finger in the direction of the current, then the thumb points in the direction of the motion of, or the force acting on, the conductor. (iii) Fleming's right-hand rule: If the thumb and the first two fingers of the right hand are held at right angles to each other, with the forefinger in the direction of the field and the thumb in the direction of motion, then the induced current flows in the direction of the middle finger.
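Fleming's left-hand rule in Question 36(ii) is the geometric form of a vector cross product: the force on the conductor is along the cross product of the current direction and the field direction. A minimal Python sketch of that relation, with an illustrative orientation:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as [x, y, z] lists."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# Middle finger (current) along +x, forefinger (field) along +y:
current_dir = [1.0, 0.0, 0.0]
field_dir = [0.0, 1.0, 0.0]
# Thumb (force) then points along +z, perpendicular to both:
force_dir = cross(current_dir, field_dir)
assert force_dir == [0.0, 0.0, 1.0]
```

Reversing the current direction flips the sign of the cross product, which is exactly why the commutator's current reversal reverses the forces on the motor coil.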
Question 37 Explain the underlying principle and working of an electric generator by drawing a labelled diagram. What is the function of brushes? Solution: A.C. generator: 'A.C. generator' means 'alternating current generator'. That is, an A.C. generator produces alternating current, which alternates (changes) in polarity continuously. We will now describe the construction and working of the A.C. generator or A.C. dynamo. Construction of an A.C. generator: A simple A.C. generator consists of a rectangular coil ABCD that can be rotated rapidly between the poles N and S of a strong horseshoe-type magnet M. The coil is made of a large number of turns of insulated copper wire. The ends A and D of the rectangular coil are connected to two circular pieces of copper metal called slip rings, R1 and R2. As the slip rings R1 and R2 rotate with the coil, two pieces of carbon called brushes, B1 and B2, keep contact with them. So, the current produced in the rotating coil can be tapped out through the slip rings into the carbon brushes. From the carbon brushes B1 and B2 we take the current into various electrical appliances like radios, TVs, electric irons, bulbs, etc. But in this figure, we have shown only a galvanometer G connected between the two carbon brushes. Working of an A.C. generator: Suppose that the generator coil ABCD is initially in the horizontal position. Again suppose that the coil ABCD is being rotated in the anticlockwise direction between the poles N and S of a horseshoe-type magnet. (i) As the coil rotates in the anticlockwise direction, the side AB of the coil moves down, cutting the magnetic lines of force near the N-pole of the magnet, and the side CD moves up, cutting the lines of force near the S-pole of the magnet. Due to this, an induced current is produced in the sides AB and DC of the coil. On applying Fleming's right-hand rule to the sides AB and DC of the coil, we find that the currents are in the directions B to A and D to C respectively.
Thus, the induced currents in the two sides of the coil are in the same direction, and we get an effective induced current in the direction BADC. (ii) After half a revolution, the sides AB and DC of the coil will interchange their positions. The side AB will come to the right-hand side and DC will come to the left side. So, after half a revolution, side AB starts moving up and side DC starts coming down. As a result, the direction of the induced current in each side of the coil is reversed after half a revolution. Since the direction of the induced current in the coil is reversed after half a revolution, the polarity (positive and negative) of the two ends of the coil also changes after half a revolution. The end of the coil which was positive in the first half of the rotation becomes negative in the second half, and the end which was negative in the first half becomes positive in the second half. Thus, in one revolution of the coil, the current changes its direction twice. The alternating current (A.C.) produced in India has a frequency of 50 Hz; that is, the coil is rotated at the rate of 50 revolutions per second. Since in one revolution of the coil the current changes its direction twice, in 50 revolutions the current changes its direction 2 × 50 = 100 times. Thus, the A.C. supply in India changes its direction 100 times in one second. Another way of saying this is that the alternating current produced in India changes its direction every 1/100 of a second: each terminal of the coil is positive (+) for 1/100 of a second and negative (−) for the next 1/100 of a second. This process is repeated again and again, with the result that there is actually no fixed positive and negative terminal in an A.C. generator. We will now describe why the direction of the induced current in the coil of an A.C. generator changes after every half revolution of the coil.
After every half revolution, each side of the generator coil starts moving in the opposite direction in the magnetic field. The side of the coil which was initially moving downwards in the magnetic field starts, after half a revolution, moving in the opposite direction, upwards. Similarly, the side of the coil which was initially moving upwards starts, after half a revolution, moving downwards. Due to this change in the direction of motion of the two sides of the coil after every half revolution, the direction of the current produced in them also changes after every half revolution. D.C. generator: 'D.C. generator' means 'direct current generator'. That is, a D.C. generator produces direct current and not alternating current. We will now describe the construction and working of the D.C. generator or D.C. dynamo. Construction of a D.C. generator: A simple D.C. generator consists of a rectangular coil ABCD which can be rotated rapidly between the poles N and S of a strong horseshoe-type magnet M. The generator coil is made of a large number of turns of insulated copper wire. The two ends of the coil are connected to the two copper half rings (or split rings) R1 and R2 of a commutator. There are two carbon brushes, B1 and B2, which press lightly against the two half rings. When the coil is rotated, the two half rings R1 and R2 touch the two carbon brushes B1 and B2 one by one. So the current produced in the rotating coil can be tapped out through the commutator half rings into the carbon brushes. From the carbon brushes B1 and B2, we can take the current into various electrical appliances like radios, TVs, electric irons, bulbs, etc. But in this figure, we have shown only a galvanometer G connected between the two carbon brushes. The galvanometer is a current-detecting and current-measuring instrument. Working of a D.C. generator: Suppose that the generator coil ABCD is initially in the horizontal position.
Again suppose that the coil ABCD is being rotated in the anticlockwise direction between the poles N and S of the horseshoe-type magnet. (i) As the coil rotates in the anticlockwise direction, the side AB of the coil moves down, cutting the magnetic lines of force near the N-pole of the magnet, and the side DC moves up, cutting the lines of force near the S-pole of the magnet. Due to this, an induced current is produced in the sides AB and DC of the coil. On applying Fleming's right-hand rule to the sides AB and DC of the coil, we find that the currents in them are in the directions B to A and D to C respectively. Thus, the induced currents in the two sides of the coil are in the same direction, and we get an effective induced current in the direction BADC. Due to this, the brush B1 becomes the positive (+) pole and the brush B2 becomes the negative (−) pole of the generator. (ii) After half a revolution, the sides AB and DC of the coil interchange their positions. The side AB comes to the right-hand side and starts moving up, whereas the side DC comes to the left side and starts moving down. At this moment, the two commutator half rings R1 and R2 automatically change their contacts from one carbon brush to the other. Due to this change, the current keeps flowing in the same direction in the outer circuit: the brush B1 always remains the positive terminal and the brush B2 always remains the negative terminal of the generator. Thus, a D.C. generator supplies a current in one direction by the use of a commutator consisting of two half rings of copper. (In place of 'D.C. generator' we can also write 'D.C. dynamo'.)

Question 38 When does an electric short circuit occur? Solution: Short-circuiting: If the plastic insulation of the live wire and the neutral wire gets torn, the two wires may touch each other. This direct touching of the live wire and the neutral wire is known as short-circuiting.
The current passing through the circuit formed by these wires is very large, and consequently a large heating effect is created which may lead to fire.

Question 39 What is the function of an earth wire? Why is it necessary to earth metallic appliances? Solution: To avoid electric shocks, the metal body of an electrical device is 'earthed'. A wire called the 'earth wire' is used to connect the metal body of the electrical device to the earth, which is at zero potential. In household circuits, we have three wires: the live wire, the neutral wire and the earth wire. One end of the earth wire is connected to the device and the other end is connected to the earth. We then say that the device is 'earthed' or 'grounded'. Usually the three wires are connected to a three-pin plug. The earth wire carries any leakage current from the device to the earth and so prevents an electric shock.

Multiple Choice Questions (MCQs) [1 Mark each]

Question 1. A compass is to be placed near a bar magnet with unknown poles. Outside the magnetic field, the compass needle is pointing towards north as shown below: (c) Magnetic field lines always point from the north pole to the south pole around the magnet; thus, the compass needle follows the direction of the magnetic field lines.

Question 2. A bar magnet is broken into three parts X, Y and Z. Which diagram shows the poles in X, Y and Z? (d) When a magnet is broken into three parts X, Y and Z, each part is still a magnet, and the strength of the magnetic force remains the same.

Question 3. An unmagnetised iron bar is placed near the end of a bar magnet. Which of the following diagrams is correct? (b) The end of the iron bar nearer to the south pole of the bar magnet becomes an induced north pole, while the other end becomes a south pole.

Question 4. The diagram shows a current-carrying wire passing through the centre of a square cardboard. How do the strengths of the magnetic field at points X, Y and Z compare?
(a) equal at X, Y and Z (b) stronger at Y than X, equal at Y and Z (c) weaker at Y than Z, stronger at Y than Z (d) stronger at Y than X, weaker at Z than X (d) The closer to the wire, the stronger is the magnetic field. Since the field lines are circular around the wire, Y is the closest, followed by X and then Z.

Question 5. A circular loop placed in a plane perpendicular to the plane of paper carries a current when the key is on. The current as seen from points A and B (in the plane of paper and on the axis of the coil) is anticlockwise and clockwise, respectively. The magnetic field lines point from B to A. The N-pole of the resultant magnet is on the face close to (a) A (b) B (c) A if the current is small and B if the current is large (d) B if the current is small and A if the current is large [NCERT Exemplar] (a) The N-pole of the resultant magnet is on the face close to A because the magnetic field lines enter the loop from B and come out from A, and magnetic field lines come out of the N-pole of a magnet. Therefore, the face close to A is the N-pole. The currents seen from A and B are the same current.

Question 6. A bar magnet is used to pick up an iron nail. At which of the parts X, Y and Z is it easiest for the magnet to pick up the iron nail? (a) At X (b) At Y (c) At Z (d) It makes no difference (c) The region with the highest density of magnetic field lines has the greatest strength.

Question 7. If the key in the arrangement shown below is taken out (the circuit is made open) and magnetic field lines are drawn over the horizontal plane ABCD, the lines are [NCERT Exemplar] (a) concentric circles (b) elliptical in shape (c) straight lines parallel to each other (d) concentric circles near the point O but of elliptical shape as we go away from it (c) When the key is taken out (the circuit is made open), no current flows through the wire, hence no magnetic field exists due to the conductor.
The only remaining magnetic field is the Earth’s, whose field lines are straight lines parallel to each other. The horizontal component is directed from geographical South to geographical North on the horizontal plane ABCD. Question 8. Four metal rods are placed in turn inside the solenoid to attract paper clips. The table below gives the results of the experiment when the current is switched on and off.

Metal rod | Clips attracted (current on) | Clips still attracted (current off)
(a)       | 1                            | 0
(b)       | 20                           | 2
(c)       | 35                           | 0
(d)       | 35                           | 30

Which rod would be the most suitable to use for the core of the solenoid in a circuit breaker? (c) The core of the solenoid in a circuit breaker must be made of a soft magnetic material, which can be strongly magnetised but does not retain induced magnetism. Question 9. Permanent magnets can be made using hard magnetic materials. Which of the following is not a correct method to make permanent magnets? (a) Using a bar magnet to stroke a steel bar (b) Using two bar magnets to stroke a steel bar (c) Placing a steel bar in a solenoid connected to a DC supply (d) Placing a steel bar in a solenoid connected to an AC supply, then slowly withdrawing the steel bar from the solenoid in the West-East direction (d) The AC supply mixes up the directions of the magnetic domains. In fact, this is one of the methods used to demagnetise magnets. Question 10. In the arrangement shown in figure, there are two coils wound on a non-conducting cylindrical rod. Initially, the key is not inserted. Then, the key is inserted and later removed. Then, (a) the deflection in the galvanometer remains zero throughout. (b) there is a momentary deflection in the galvanometer but it dies out shortly and there is no effect when the key is removed. (c) there are momentary galvanometer deflections that die out shortly, the deflections are in the same direction.
(d) there are momentary galvanometer deflections that die out shortly, the deflections being in opposite directions. [NCERT Exemplar] (d) In the given arrangement, whenever the electric current through the first coil changes, an emf is induced in the neighbouring second coil due to the change in the magnetic field lines passing through it. When the key is inserted and then removed, the magnetic field lines passing through the second coil increase and decrease respectively, so the induced current flows in opposite directions in the two cases. Thus, the galvanometer shows momentary deflections in opposite directions. Class 10 Science Magnetic Effects of Electric Current Mind Map
Properties of Magnets
• Attractive property: magnets attract magnetic materials like iron, cobalt, nickel, etc.
• Directive property: a freely suspended magnet always aligns in the North-South direction.
• Opposite poles attract and like poles repel.
• Poles exist in pairs: North and South.
• Repulsion is a sure test of a magnet.
Magnetic Field: the space around a magnet in which its magnetic effect is experienced.
Magnetic Field Lines: a line such that the tangent at any point on it gives the direction of the magnetic field at that point.
Properties of Magnetic Field Lines
• All field lines are closed curves.
• Field lines are close together near the poles.
• Two field lines never intersect each other.
Magnetic Field Due to a Current-Carrying Conductor: the magnetic field around a straight conductor carrying current is in the form of closed circular loops, in a plane perpendicular to the conductor. The direction of the magnetic field can be determined using the right-hand thumb rule.
Solenoid: a long cylindrical helix, which produces a magnetic field when an electric current is passed through it.
The magnetic field within the solenoid is uniform and parallel to the axis of the solenoid. The magnetic field due to a solenoid depends upon (a) the number of turns, i.e. B ∝ n, (b) the strength of the current, i.e. B ∝ I, and (c) the nature of the material inside the solenoid, i.e. B ∝ μ.
Magnetic Field Due to a Circular Current-Carrying Loop: at every point of a current-carrying loop, the concentric circles representing the magnetic field around it become larger as we move away from the wire. The direction of the magnetic field can be determined using the right-hand rule.
Force on a Current-Carrying Conductor: the force experienced by the conductor is $$\overrightarrow{\mathrm{F}}=I\,\overrightarrow{\mathrm{L}} \times \overrightarrow{\mathrm{B}}$$ The direction of the force can be determined by Fleming’s left-hand rule or the right-hand palm rule.
Electromagnet: a solenoid with a soft iron core. A magnetic field is produced when an electric current flows through the coil of wire.
Uses
• For lifting and transporting large masses of iron scrap.
• Electric bells, telegraphs, electric motors, relays, loudspeakers, microphones.
• For separating magnetic substances such as iron from other debris.
• In scientific research, to study the magnetic properties of a substance in a magnetic field.
Electric Motor: converts electrical energy to mechanical energy. It works on the principle that a current-carrying conductor placed in a magnetic field experiences a force.
Parts of an Electric Motor
• Armature
• Field magnet
• Split-ring commutator: reverses the direction of the current in the coil every half rotation, so the coil keeps rotating in the same sense.
• Brushes
• Battery
Uses
• D.C. motors are used in d.c. fans.
• They are used for pumping water.
• Big d.c. motors are used for running tramcars and even trains.
Electric Generator
• It converts mechanical energy to electrical energy.
• It works on the principle of electromagnetic induction.
Parts of an Electric Generator
• Armature
• Magnet
• Slip rings
• Brushes
• Split-ring type commutator for a direct-current generator
Types of Electric Generator
• D.C.
Generator: a type of generator used to produce an induced current that flows in one direction.
• A.C. Generator: generates an alternating current that changes its polarity after every half rotation.
Electromagnetic Induction: a voltage is induced by the relative motion between a wire and a magnetic field. The amount of voltage induced depends on how fast the magnetic field lines are entering or leaving the coil.
Safety Devices
• When too much current flows, or a short circuit occurs, these devices break the circuit.
• Fuses: contain a wire of low melting point that melts and breaks the circuit when the current is excessive.
• Miniature circuit breakers (MCBs): trip a switch instead of melting, and can be reset.
# Choice of $\xi$ [duplicate] Possible Duplicate: Rational Numbers Suppose $\{x \in \mathbb{Q}|x>0,x^2<2\}$ has a supremum. Call this supremum $c$. In order to show that this cannot be the case, we learned that we need to introduce $\xi$ with $\xi=\frac{2c+2}{c+2}$ and then find a contradiction. But why this $\xi$? Why not another $\xi$? How do you find this choice? ## marked as duplicate by Qiaochu Yuan Dec 13 '11 at 20:38 • Because we want $\sqrt 2$ to be this supremum. We choose $\xi$ to fit the proof that $\sqrt 2$ is irrational. – Asaf Karagila Dec 13 '11 at 14:24 • (This comment doesn't address your actual question). You should be careful about your statement. When you say "In order to show that this cannot be the case", what do you mean by this? What I am getting at is that the set in question does have a supremum, but you are trying to show that the supremum is not in the set itself. In other words, the set does not have a maximum. – JavaMan Dec 13 '11 at 14:52 • @JavaMan: Actually, I believe he's trying to show that the supremum isn't rational. Even if he took $\{x\in\mathbb{R} : 0<x, x^2<2\}$, the set still wouldn't have a maximum, but it would have a supremum in $\mathbb{R}$. – jwodder Dec 13 '11 at 17:24 • @jwodder: Even if you take it in $\mathbb R$ it still won't have a maximum. $(\sqrt 2)^2 = 2$, therefore $\sqrt 2$ is not in the set $\{x\in\mathbb R\mid x^2<2\}$. – Asaf Karagila Dec 13 '11 at 17:32 • @Qiaochu.. the question isn't really an exact duplicate. The old one dealt with motivation for the transformation, while the answers here give the proof itself that the set doesn't have a rational supremum. – Zarrax Dec 13 '11 at 21:07 First of all, I think you and Zarrax are both a little confused about what you are showing (or you are using a nonstandard definition of supremum). The supremum of a set is its least upper bound. Now, in the real numbers, your set has the least upper bound $\sqrt{2}$.
What I suspect you are trying to show is that this set does not have any least upper bound in the rationals. Let $f(c) = \frac{2c+2}{c+2}$. The key properties of $f$ are (1) $f$ maps $\mathbb{Q}$ to $\mathbb{Q}$. (2) If $c$ is a rational upper bound for $\{ x : x^2<2 \}$, then $f(c)$ is a smaller upper bound. If you'll allow me to mention real numbers, then property (2) can be rephrased as: (2') If $\sqrt{2} < c$, then $\sqrt{2} < f(c) < c$. Notice that my inequalities in (2') go the opposite direction from Zarrax's; I think that is because he read your question differently than I did. So, any function which obeys (1) and (2') will make this proof work, and you shouldn't get too focused on which one your book uses. I would have thought of $c \mapsto \frac{c+2/c}{2}$. • I just meant my answer as a proof by contradiction, and I edited my post accordingly. Incidentally the map $f(c) = {1 \over 2}(c + {2 \over c})$ takes numbers below $\sqrt{2}$ to numbers above $\sqrt{2}$ so it doesn't serve an identical purpose. – Zarrax Dec 13 '11 at 17:08 • So, I understand the definition of a supremum to be a least upper bound. So the way I would set up this proof is "Let $c$ be an upper bound. Then $\frac{1}{2}(c+2/c)$ is a lesser upper bound, so $c$ is not the least upper bound. We have shown that the set has no supremum." Thus, it is irrelevant what $f$ does to a $c$ which is less than $\sqrt{2}$, since such a $c$ would not be an upper bound. I imagine you are thinking of the proof differently? – David E Speyer Dec 13 '11 at 17:27 • Incidentally, in (2) of your answer one still has to show that $f(c)$ is an upper bound for $\{x \in {\mathbb Q}: x^2 < 2\}$. – Zarrax Dec 13 '11 at 17:47 • I am using the fact that if $x$ is the least upper bound of a set $A$, then there cannot be a $y \in A$ with $x<y$; $x$ is not an upper bound for $A$ if such a $y$ existed. We are reading the question the same way here, just our proofs are different. 
– Zarrax Dec 13 '11 at 18:09 The significance of the set $S:=\{x\in{\mathbb Q}_{>0}\ |\ x^2<2\}$ is the following: It is a nonempty subset of ${\mathbb Q}$ which is obviously bounded above, but which has no least upper bound (called supremum) in ${\mathbb Q}$. As a consequence, the ground set ${\mathbb Q}$ is not "order complete" and should better be replaced by a more encompassing set of numbers, which of course is ${\mathbb R}$. In order to show that $S$ has no supremum in ${\mathbb Q}$ one has to show that no number $c\in{\mathbb Q_{>0}}$ qualifies as supremum of $S$. Given any trial $c\in{\mathbb Q_{>0}}$ then we all know that $c^2\ne2$, whence either $c^2<2$ or $c^2>2$. The "tricky" number $$\xi:={2c+2\over c+2}\in{\mathbb Q}_{>0}$$ satisfies $$\xi -c={2-c^2\over c+2}\ ,\qquad \xi^2-2={2(c^2-2)\over(c+2)^2}\ .$$ Now, if $\ {\rm (a)}\ c^2<2$ then it follows that $\xi>c$ and $\xi^2<2$, whence $\xi\in S$, so $c$ is not an upper bound for $S$. If $\ {\rm (b)} \ c^2>2$ then $\xi<c$ and $\xi^2>2>x^2$ for all $x\in S$. As $t\to t^2$ is strictly increasing for $t>0$ it follows that $\xi>x$ for all $x\in S$, whence $\xi$ is an upper bound for $S$ strictly smaller than $c$. It follows that in both cases (a) and (b) the number $c$ does not qualify as a supremum for the set $S$. I don't think there is a systematic procedure to "invent" such a $\xi$ (after all, it is not uniquely determined). You have to fiddle around with inequalities until you hit the expression which is "just right". Suppose {$x\in {\mathbb Q}: x^2 < 2$} had a supremum $c$ that is rational. We will derive a contradiction. Note $c$ is positive since $1$ is in the above set. Case 1: $c^2 < 2$: Use algebra to show that $\displaystyle{c < {2 + 2c \over c + 2}}$, and then use some more algebra to show $\displaystyle{\bigg({2 + 2c \over c + 2}\bigg)^2} < 2$. 
So $\displaystyle{{2 + 2c \over c + 2}}$ is another rational number with $\displaystyle{c < {2 + 2c \over c + 2}}$ and ${\displaystyle \bigg({2 + 2c \over c + 2}\bigg)^2 < 2}$. Hence a contradiction since $c$ is supposed to be the supremum. Case 2: $c^2 > 2$: Reverse the inequalities above. Get ${\displaystyle c > {2 + 2c \over c + 2}}$ and ${\displaystyle\bigg({2 + 2c \over c + 2}\bigg)^2 > 2}$. If $x > 0$ were rational such that $x^2 < 2$, we must have $x < {\displaystyle{2 + 2c \over c + 2}}$ (square both sides, using that ${\displaystyle{2 + 2c \over c + 2}}$ is positive). This means that ${\displaystyle{2 + 2c \over c + 2}}$ is greater than any $x$ such that $x^2 < 2$; that is, ${\displaystyle{2 + 2c \over c + 2}}$ is an upper bound for {$x\in {\mathbb Q}: x^2 < 2$}. This contradicts that $c$ is a least upper bound. Case 3: $c^2 = 2$: Show no rational number squared is equal to 2, using prime factorizations for example. Thus in all three cases we get a contradiction. (This ain't as easy as it looks :). And thanks to Christian Blatter for making me realize I overlooked Case 2.) As for how they thought of this, there are various iterative procedures to get closer and closer to square roots of natural numbers, and when you apply one to a rational number $c$ you get another rational number. This one has the property that you stay between $c$ and $\sqrt{2}$ at the next iteration.
# Extended investigation of the twelve-flavor $\beta$-function

25 Oct 2017

Zoltan Fodor, Kieran Holland, Julius Kuti, Daniel Nogradi, Chik Him Wong

We report new results from a high-precision analysis of an important BSM gauge theory with twelve massless fermion flavors in the fundamental representation of the SU(3) color gauge group. The range of the renormalized gauge coupling is extended from our earlier work {Fodor:2016zil} to probe the existence of an infrared fixed point (IRFP) in the $\beta$-function reported at two different locations, originally in {Cheng:2014jba} and at a new location in {Hasenfratz:2016dou}...

# Categories

• HIGH ENERGY PHYSICS - LATTICE
• HIGH ENERGY PHYSICS - PHENOMENOLOGY
path: root/src/pulse/stream.h #ifndef foostreamhfoo #define foostreamhfoo /* $Id$ */ /*** This file is part of PulseAudio.
PulseAudio is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. PulseAudio is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with PulseAudio; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. ***/ #include #include #include #include #include #include #include /** \page streams Audio Streams * * \section overv_sec Overview * * Audio streams form the central functionality of the sound server. Data is * routed, converted and mixed from several sources before it is passed along * to a final output. Currently, there are three forms of audio streams: * * \li Playback streams - Data flows from the client to the server. * \li Record streams - Data flows from the server to the client. * \li Upload streams - Similar to playback streams, but the data is stored in * the sample cache. See \ref scache for more information * about controlling the sample cache. * * \section create_sec Creating * * To access a stream, a pa_stream object must be created using * pa_stream_new(). At this point the audio sample format and mapping of * channels must be specified. See \ref sample and \ref channelmap for more * information about those structures. * * This first step will only create a client-side object, representing the * stream. To use the stream, a server-side object must be created and * associated with the local object. 
Depending on which type of stream is * desired, a different function is needed: * * \li Playback stream - pa_stream_connect_playback() * \li Record stream - pa_stream_connect_record() * \li Upload stream - pa_stream_connect_upload() (see \ref scache) * * Similar to how connections are done in contexts, connecting a stream will * not generate a pa_operation object. Also like contexts, the application * should register a state change callback, using * pa_stream_set_state_callback(), and wait for the stream to enter an active * state. * * \subsection bufattr_subsec Buffer Attributes * * Playback and record streams always have a server side buffer as * part of the data flow. The size of this buffer strikes a * compromise between low latency and sensitivity to buffer * overflows/underruns. * * The buffer metrics may be controlled by the application. They are * described with a pa_buffer_attr structure which contains a number * of fields: * * \li maxlength - The absolute maximum number of bytes that can be stored in * the buffer. If this value is exceeded then data will be * lost. * \li tlength - The target length of a playback buffer. The server will only * send requests for more data as long as the buffer has less * than this number of bytes of data. * \li prebuf - Number of bytes that need to be in the buffer before * playback will commence. Start of playback can be forced using * pa_stream_trigger() even though the prebuffer size hasn't been * reached. If a buffer underrun occurs, this prebuffering will be * enabled again. If the playback shall never stop in case of a buffer * underrun, this value should be set to 0. In that case the read * index of the output buffer overtakes the write index, and hence the * fill level of the buffer is negative. * \li minreq - Minimum number of free bytes in the playback buffer before * the server will request more data. * \li fragsize - Maximum number of bytes that the server will push in one * chunk for record streams.
* * The server side playback buffers are indexed by a write and a read * index. The application writes to the write index and the sound * device reads from the read index. The read index is increased * monotonically, while the write index may be freely controlled by * the application. Subtracting the read index from the write index * will give you the current fill level of the buffer. The read/write * indexes are 64-bit values and measured in bytes, they will never * wrap. The current read/write index may be queried using * pa_stream_get_timing_info() (see below for more information). In * case of a buffer underrun the read index is equal to or larger than * the write index. Unless the prebuf value is 0, PulseAudio will * temporarily pause playback in such a case, and wait until the * buffer is filled up to prebuf bytes again. If prebuf is 0, the * read index may be larger than the write index, in which case * silence is played. If the application writes data to indexes lower * than the read index, the data is immediately lost. * * \section transfer_sec Transferring Data * * Once the stream is up, data can start flowing between the client and the * server. Two different access models can be used to transfer the data: * * \li Asynchronous - The application registers callbacks using * pa_stream_set_write_callback() and * pa_stream_set_read_callback() to receive notifications * that data can either be written or read. * \li Polled - Query the library for available data/space using * pa_stream_writable_size() and pa_stream_readable_size() and * transfer data as needed. The sizes are stored locally, in the * client end, so there is no delay when reading them. * * It is also possible to mix the two models freely. * * Once there is data/space available, it can be transferred using either * pa_stream_write() for playback, or pa_stream_peek() / pa_stream_drop() for * record. Make sure you do not overflow the playback buffers as data will be * dropped.
* * \section bufctl_sec Buffer Control * * The transfer buffers can be controlled through a number of operations: * * \li pa_stream_cork() - Start or stop the playback or recording. * \li pa_stream_trigger() - Start playback immediately and do not wait for * the buffer to fill up to the set trigger level. * \li pa_stream_prebuf() - Reenable the playback trigger level. * \li pa_stream_drain() - Wait for the playback buffer to go empty. Will * return a pa_operation object that will indicate when * the buffer is completely drained. * \li pa_stream_flush() - Drop all data from the playback buffer and do not * wait for it to finish playing. * * \section seek_modes Seeking in the Playback Buffer * * A client application may freely seek in the playback buffer. To * accomplish that the pa_stream_write() function takes a seek mode * and an offset argument. The seek mode is one of: * * \li PA_SEEK_RELATIVE - seek relative to the current write index * \li PA_SEEK_ABSOLUTE - seek relative to the beginning of the playback buffer (i.e. the first byte that was ever played in the stream) * \li PA_SEEK_RELATIVE_ON_READ - seek relative to the current read index. Use this to write data to the output buffer that should be played as soon as possible * \li PA_SEEK_RELATIVE_END - seek relative to the last byte ever written. * * If an application just wants to append some data to the output * buffer, PA_SEEK_RELATIVE and an offset of 0 should be used. * * After a call to pa_stream_write() the write index will be left at * the position right after the last byte of the written data. * * \section latency_sec Latency * * A major problem with networked audio is the increased latency caused by * the network. To remedy this, PulseAudio supports an advanced system of * monitoring the current latency. * * To get the raw data needed to calculate latencies, call * pa_stream_get_timing_info().
This will give you a pa_timing_info * structure that contains everything that is known about the server * side buffer transport delays and the backend active in the * server. (Among other things it contains the write and read index * values mentioned above.) * * This structure is updated every time a * pa_stream_update_timing_info() operation is executed. (i.e. before * the first call to this function the timing information structure is * not available!) Since it is a lot of work to keep this structure * up-to-date manually, PulseAudio can do that automatically for you: * if PA_STREAM_AUTO_TIMING_UPDATE is passed when connecting the * stream PulseAudio will automatically update the structure every * 100ms and every time a function is called that might invalidate the * previously known timing data (such as pa_stream_write() or * pa_stream_flush()). Please note however, that there always is a * short time window when the data in the timing information structure * is out-of-date. PulseAudio tries to mark these situations by * setting the write_index_corrupt and read_index_corrupt fields * accordingly. * * The raw timing data in the pa_timing_info structure is usually hard * to deal with. Therefore a simpler interface is available: * you can call pa_stream_get_time() or pa_stream_get_latency(). The * former will return the current playback time of the hardware since * the stream has been started. The latter returns the time a sample * that you write now takes to be played by the hardware. These two * functions base their calculations on the same data that is returned * by pa_stream_get_timing_info(). Hence the same rules for keeping * the timing data up-to-date apply here. In case the write or read * index is corrupted, these two functions will fail with * PA_ERR_NODATA set.
* * Since updating the timing info structure usually requires a full * network round trip and some applications monitor the timing very * often, PulseAudio offers a timing interpolation system. If * PA_STREAM_INTERPOLATE_TIMING is passed when connecting the stream, * pa_stream_get_time() and pa_stream_get_latency() will try to * interpolate the current playback time/latency by estimating the * number of samples that have been played back by the hardware since * the last regular timing update. It is especially useful to combine * this option with PA_STREAM_AUTO_TIMING_UPDATE, which will enable * you to monitor the current playback time/latency very precisely and * very frequently without requiring a network round trip every time. * * \section flow_sec Overflow and underflow * * Even with the best precautions, buffers will sometimes over- or * underflow. To handle this gracefully, the application can be * notified when this happens. Callbacks are registered using * pa_stream_set_overflow_callback() and * pa_stream_set_underflow_callback(). * * \section sync_streams Synchronizing Multiple Playback Streams * * PulseAudio allows applications to fully synchronize multiple * playback streams that are connected to the same output device. That * means the streams will always be played back sample-by-sample * synchronously. If stream operations like pa_stream_cork() are * issued on one of the synchronized streams, they are simultaneously * issued on the others. * * To synchronize a stream to another, just pass the "master" stream * as last argument to pa_stream_connect_playback(). To make sure that * the freshly created stream doesn't start playback right away, make * sure to pass PA_STREAM_START_CORKED and - after all streams have * been created - uncork them all with a single call to * pa_stream_cork() for the master stream.
* * To make sure that a particular stream doesn't stop playing when a * server side buffer underrun happens on it while the other * synchronized streams continue playing and hence deviate, you need to * pass a "prebuf" pa_buffer_attr of 0 when connecting it. * * \section disc_sec Disconnecting * * When a stream has served its purpose it must be disconnected with * pa_stream_disconnect(). If you only unreference it, then it will live on * and eat resources both locally and on the server until you disconnect the * context. * */ /** \file * Audio streams for input, output and sample upload */ PA_C_DECL_BEGIN /** An opaque stream for playback or recording */ typedef struct pa_stream pa_stream; /** A generic callback for operation completion */ typedef void (*pa_stream_success_cb_t) (pa_stream*s, int success, void *userdata); /** A generic request callback */ typedef void (*pa_stream_request_cb_t)(pa_stream *p, size_t length, void *userdata); /** A generic notification callback */ typedef void (*pa_stream_notify_cb_t)(pa_stream *p, void *userdata); /** Create a new, unconnected stream with the specified name and sample type */ pa_stream* pa_stream_new( pa_context *c /**< The context to create this stream in */, const char *name /**< A name for this stream */, const pa_sample_spec *ss /**< The desired sample format */, const pa_channel_map *map /**< The desired channel map, or NULL for default */); /** Decrease the reference counter by one */ void pa_stream_unref(pa_stream *s); /** Increase the reference counter by one */ pa_stream *pa_stream_ref(pa_stream *s); /** Return the current state of the stream */ pa_stream_state_t pa_stream_get_state(pa_stream *p); /** Return the context this stream is attached to */ pa_context* pa_stream_get_context(pa_stream *p); /** Return the device (sink input or source output) index this stream is connected to */ uint32_t pa_stream_get_index(pa_stream *s); /** Connect the stream to a sink */ int pa_stream_connect_playback( pa_stream *s /**<
The stream to connect to a sink */, const char *dev /**< Name of the sink to connect to, or NULL for default */ , const pa_buffer_attr *attr /**< Buffering attributes, or NULL for default */, pa_stream_flags_t flags /**< Additional flags, or 0 for default */, pa_cvolume *volume /**< Initial volume, or NULL for default */, pa_stream *sync_stream /**< Synchronize this stream with the specified one, or NULL for a standalone stream*/); /** Connect the stream to a source */ int pa_stream_connect_record( pa_stream *s /**< The stream to connect to a source */ , const char *dev /**< Name of the source to connect to, or NULL for default */, const pa_buffer_attr *attr /**< Buffer attributes, or NULL for default */, pa_stream_flags_t flags /**< Additional flags, or 0 for default */); /** Disconnect a stream from a source/sink */ int pa_stream_disconnect(pa_stream *s); /** Write some data to the server (for playback sinks), if free_cb is * non-NULL this routine is called when all data has been written out * and an internal reference to the specified data is kept, the data * is not copied. If NULL, the data is copied into an internal * buffer. The client may freely seek around in the output buffer. For * most applications passing 0 and PA_SEEK_RELATIVE as arguments for * offset and seek should be useful.*/ int pa_stream_write( pa_stream *p /**< The stream to use */, const void *data /**< The data to write */, size_t length /**< The length of the data to write */, pa_free_cb_t free_cb /**< A cleanup routine for the data or NULL to request an internal copy */, int64_t offset, /**< Offset for seeking, must be 0 for upload streams */ pa_seek_mode_t seek /**< Seek mode, must be PA_SEEK_RELATIVE for upload streams */); /** Read the next fragment from the buffer (for recording). * data will point to the actual data and length will contain the size * of the data in bytes (which can be less than a complete fragment). * Use pa_stream_drop() to actually remove the data from the * buffer.
If no data is available, this will return a NULL pointer. \since 0.8 */
int pa_stream_peek(
        pa_stream *p               /**< The stream to use */,
        const void **data          /**< Pointer to pointer that will point to data */,
        size_t *length             /**< The length of the data read */);

/** Remove the current fragment on record streams. It is invalid to do this without first
 * calling pa_stream_peek(). \since 0.8 */
int pa_stream_drop(pa_stream *p);

/** Return the number of bytes that may be written using pa_stream_write() */
size_t pa_stream_writable_size(pa_stream *p);

/** Return the number of bytes that may be read using pa_stream_read() \since 0.8 */
size_t pa_stream_readable_size(pa_stream *p);

/** Drain a playback stream. Use this for notification when the buffer is empty */
pa_operation* pa_stream_drain(pa_stream *s, pa_stream_success_cb_t cb, void *userdata);

/** Request a timing info structure update for a stream. Use
 * pa_stream_get_timing_info() to get access to the raw timing data,
 * or pa_stream_get_time() or pa_stream_get_latency() to get cleaned
 * up values. */
pa_operation* pa_stream_update_timing_info(pa_stream *p, pa_stream_success_cb_t cb, void *userdata);

/** Set the callback function that is called whenever the state of the stream changes */
void pa_stream_set_state_callback(pa_stream *s, pa_stream_notify_cb_t cb, void *userdata);

/** Set the callback function that is called when new data may be
 * written to the stream. */
void pa_stream_set_write_callback(pa_stream *p, pa_stream_request_cb_t cb, void *userdata);

/** Set the callback function that is called when new data is available from the stream. \since 0.8 */
void pa_stream_set_read_callback(pa_stream *p, pa_stream_request_cb_t cb, void *userdata);

/** Set the callback function that is called when a buffer overflow happens.
(Only for playback streams) \since 0.8 */
void pa_stream_set_overflow_callback(pa_stream *p, pa_stream_notify_cb_t cb, void *userdata);

/** Set the callback function that is called when a buffer underflow happens. (Only for playback streams) \since 0.8 */
void pa_stream_set_underflow_callback(pa_stream *p, pa_stream_notify_cb_t cb, void *userdata);

/** Set the callback function that is called whenever a latency information update happens. Useful on PA_STREAM_AUTO_TIMING_UPDATE streams only. (Only for playback streams) \since 0.8.2 */
void pa_stream_set_latency_update_callback(pa_stream *p, pa_stream_notify_cb_t cb, void *userdata);

/** Pause (or resume) playback of this stream temporarily. Available on both playback and recording streams. \since 0.3 */
pa_operation* pa_stream_cork(pa_stream *s, int b, pa_stream_success_cb_t cb, void *userdata);

/** Flush the playback buffer of this stream. Most of the time you're
 * better off using the parameter seek of pa_stream_write() instead of this
 * function. Available on both playback and recording streams. \since 0.3 */
pa_operation* pa_stream_flush(pa_stream *s, pa_stream_success_cb_t cb, void *userdata);

/** Reenable prebuffering as specified in the pa_buffer_attr
 * structure. Available for playback streams only. \since 0.6 */
pa_operation* pa_stream_prebuf(pa_stream *s, pa_stream_success_cb_t cb, void *userdata);

/** Request immediate start of playback on this stream. This disables
 * prebuffering as specified in the pa_buffer_attr structure,
 * temporarily. Available for playback streams only. \since 0.3 */
pa_operation* pa_stream_trigger(pa_stream *s, pa_stream_success_cb_t cb, void *userdata);

/** Rename the stream. \since 0.5 */
pa_operation* pa_stream_set_name(pa_stream *s, const char *name, pa_stream_success_cb_t cb, void *userdata);

/** Return the current playback/recording time. This is based on the
 * data in the timing info structure returned by
 * pa_stream_get_timing_info().
This function will usually only return
 * new data if a timing info update has been received. Only if timing
 * interpolation has been requested (PA_STREAM_INTERPOLATE_TIMING) is
 * the data from the last timing update used for an estimation of
 * the current playback/recording time based on the local time that
 * passed since the timing info structure was acquired. The time
 * value returned by this function is guaranteed to increase
 * monotonically (that means: the returned value is always greater than
 * or equal to the value returned on the last call). This behaviour can
 * be disabled by using PA_STREAM_NOT_MONOTONOUS. This may be
 * desirable to deal better with bad estimations of transport
 * latencies, but may have strange effects if the application is not
 * able to deal with time going 'backwards'. \since 0.6 */
int pa_stream_get_time(pa_stream *s, pa_usec_t *r_usec);

/** Return the total stream latency. This function is based on
 * pa_stream_get_time(). In case the stream is a monitoring stream the
 * result can be negative, i.e. the captured samples are not yet
 * played. In this case *negative is set to 1. \since 0.6 */
int pa_stream_get_latency(pa_stream *s, pa_usec_t *r_usec, int *negative);

/** Return the latest raw timing data structure. The returned pointer
 * points to an internal read-only instance of the timing
 * structure. The user should make a copy of this structure if he
 * wants to modify it. An in-place update to this data structure may
 * be requested using pa_stream_update_timing_info(). If no
 * pa_stream_update_timing_info() call was issued before, this
 * function will fail with PA_ERR_NODATA. Please note that the
 * write_index member field (and only this field) is updated on each
 * pa_stream_write() call, not just when a timing update has been
 * received. \since 0.8 */
const pa_timing_info* pa_stream_get_timing_info(pa_stream *s);

/** Return a pointer to the stream's sample specification.
\since 0.6 */
const pa_sample_spec* pa_stream_get_sample_spec(pa_stream *s);

/** Return a pointer to the stream's channel map. \since 0.8 */
const pa_channel_map* pa_stream_get_channel_map(pa_stream *s);

/** Return the buffer metrics of the stream. Only valid after the
 * stream has been connected successfully and if the server is at least
 * PulseAudio 0.9. \since 0.9.0 */
const pa_buffer_attr* pa_stream_get_buffer_attr(pa_stream *s);

PA_C_DECL_END

#endif
# Latent Variables

Suppose $Y$ is an ordinal variable such that $Y = 1,2,3,4$ corresponds to levels of impairment. So $Y=1$ is the least impaired and $Y = 4$ is the most impaired.

What is the purpose of latent variables? That is, what is the purpose of the following?

$$\alpha_{j-1} \leq Z \leq \alpha_{j} \Longleftrightarrow Y = j$$

Does this just mean that if we added $1$ to $Y$ so that $Y = 2,3,4,5$, then $Y=2$ corresponds to the least impaired and $Y = 5$ corresponds to the most impaired?

- You might be interested in this post, How to transform ordinal data from questionnaire into proper interval data?, where I gave a brief overview of the use of latent variable models for ordered response categories. –  chl Nov 2 '12 at 13:27

The idea is that the levels $Y = 1,2,3,4$ of impairment are really just an ordinal approximation to some true (but unmeasured, and hence latent) continuous measure of impairment, $Z$. We assume that if $Z$ is in the interval $[\alpha_{j-1}, \alpha_j]$ then you will observe $Y = j$. This can allow your model to account for the fact that someone whose impairment is rated as $Y_i = 2$, but is "barely two" (i.e., $Z_i$ is close to the $\alpha_{1}$ cutoff), might be quite different from another person whose impairment is also rated as $Y_j = 2$, but who is impaired at "almost a three" level ($Z_j$ is close to $\alpha_2$). The purpose is to be able to use the underlying $Z$ in more complicated analyses, such as ordinal regression modeling (an approximation of regression of $Z$ on covariates) or polychoric correlations (correlation between two underlying latent $Z$-like variables).
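The cutpoint rule above is easy to make concrete. The sketch below is illustrative only: the cutoffs in `alpha` and the latent scores are made-up numbers, not estimates from any real impairment data.

```python
import bisect

# Hypothetical interior cutpoints alpha_1 < alpha_2 < alpha_3 (made up for
# illustration); alpha_0 = -inf and alpha_4 = +inf are implicit, so the
# categories Y = 1..4 partition the whole latent scale.
alpha = [-1.0, 0.0, 1.5]

def observed_category(z, cutpoints):
    """Map a latent score z to the ordinal category Y = j such that
    alpha_{j-1} <= z < alpha_j."""
    return bisect.bisect_right(cutpoints, z) + 1

# Two people both observed as Y = 2, yet far apart on the latent scale:
print(observed_category(-0.95, alpha))  # "barely two": z just above alpha_1 -> 2
print(observed_category(-0.05, alpha))  # "almost a three": z just below alpha_2 -> 2
```

Both calls print 2: the ordinal scale collapses the latent difference between the two people, and that collapsed information is exactly what a latent-variable model tries to recover.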
# All Questions 93 views ### CPA security of a stateless and deterministic encryption system Why can no stateless and deterministic encryption system be IND-CPA secure? Is there a formal proof for it? 41 views ### Using Permutation polynomial to compute a MAC Is the following MAC secure? For a block $y_i$ in a file, we defined a MAC as follows: $Mac_i:PRF(k,i) \cdot g^{y_i \cdot r_i} \bmod p$. Where $p$ is a prime number, $g \in \mathbb{G}$,$PRF(k,i)$ is ... 23 views ### Kryptos K2 keyword derivation [duplicate] So I've been doing some reading on Kryptos, and to be honest the keyword for K2, ABSCISSA, has a pretty weak derivation. (The method using the eee's). Isn't there a better way to come to that? I dont ... 77 views ### RSA generation of private key using public key In RSA private key generation e*d ≡ 1 mod φ e is public, also n is public. How to prove mathematically, generation of ... 104 views ### PRF based on the GGM construction What's the differences between the concepts “pseudorandom generator” and “pseudorandom number generator”? In fact, I want to implement a pseudorandom function based on GGM's construction at ... 134 views ### Noise bound in FHE over the integers I'm studying the paper Fully Homomorphic Encryption over the Integers by Marten van Dijk, Craig Gentry, Shai Halevi and Vinod Vaikuntanathan. I have questions about the proof of Lemma A.1. In page ... 51 views ### How do you derive the lambda and beta values for endomorphism on the secp256k1 curve? You can see a little background about this on this bitcointalk post by the late Hal Finney. $\beta$ and $\lambda$ are the values on the secp256k1 curve such that: \begin{align} \lambda^3 &= 1 ... 99 views ### Python implementation of a blind signature scheme which doesn't involve RSA RSA seems a bit creepy after the Snowden revelations and i'm looking for a simple python based blind signature library to fiddle around with. So far i've been unable to find anything. What am i ... 
727 views ### Can RSA encryption produce collisions? In RSA, a message is encrypted by $m^e \pmod N$. $N$ is the modulus, $m$ is the message and $e$ is the public exponent. (I know that $m$ should not be greater than $N$.) My question is, can $m^e$ be ... 177 views ### What is the difference between a hash function and a pseudorandom function? [duplicate] Read the title. I've seen in RFCs that some MAC functions are called "pseudorandom functions". What are those? How are they different than hash functions? Why can't a hash function be used instead? 186 views ### Why does the DES crypto algorithm NOT use 2 rounds? Now, if we were to go round by round, you could give a distinct reason for not using a single round since after just one round, the right half of the text comes directly, as-is, to form the left half ... 3k views ### For Diffie-Hellman, must g be a generator? Due to a number of recently asked questions about Diffie-Hellman, I was thinking this morning: must $g$ in Diffie-Hellman be a generator? Recall the mathematics of Diffie-Hellman: Given public ... 54 views ### Decrypt a message which is encrypted using XOR? [duplicate] This is a puzzle asked in a contest. Given that encryption , decryption happens as per following rule/code: ... 91 views ### Definition of ciphertext security If I got it right, chosen ciphertext security implies also CPA security. In other words, attacker can submit plaintexts to the challenger (along with ciphertexts). I do not understand why ... 92 views ### Negative exponents in Shoup's threshold RSA? I'm trying to implement threshold RSA operations, starting with decryption based on Peeters, R., Nikova, S., & Preneel, B. (2008). Practical RSA Threshold Decryption for Things That Think. ... 
92 views ### Proof that $gcd(e, \lambda(N)) = 1 \hspace{1mm} \Longleftrightarrow \hspace{1mm} gcd(e, \varphi(N)) = 1$ What is the proof for the fact that $gcd(e, \lambda(N)) = 1 \hspace{1mm} \Longleftrightarrow \hspace{1mm} gcd(e, \varphi(N)) = 1$ Where: $N = P * Q$ where $P$ and $Q$ are both primes. $\varphi(N)$ ... 75 views ### Post processing of AAD and len A||C in a hardware AES GCM implementation I'm new to GCM and I need to implement it in hardware, using FPGA. The data bus is 640 bits, so I will use 5 adder/multiplier blocks in parallel. The message size and AAD size are constant. My design ... 65 views ### Non-repudiation and digital signature of a dishonest participant Let's assume a dishonest Alice who sends, encrypts & digital signs a message to Bob. Bob stores the decrypted message and the digital signature in a database. However Alice is a bad girl and ... 849 views ### Kryptos : K1. What is the origin of the “palimpsest” keyword? I'm studying the Kryptos sculpture with its cryptographic puzzles K1 to K4. I understand that the keyword "palimpsest" was reverse-engineered using the tableau (and brute-force computer processing), ... 105 views ### Parallel Pollard's Rho: Number of distinguished points When using the parallel version of Pollard's Rho algorithm for discrete logs, each processor performs its own random walk to find distinguished points, and reports the starting point and the ... 242 views ### Can public key be recovered from ciphertext & encrypted private key? I'd like to implement something like a write-once public/private encrypted shared secret (no better quick description for the lack of terminology knowledge). I guess, I'm trying to implement HSM. The ... 89 views ### XOR cipher with three different ciphertexts and repeated key, key length known. How do I find the plaintexts? Let us say we have three different plaintexts (all alphabets, A-Z): $x$, $y$ and $z$, each of length $21$. Let the key, $a$, be also of length $21$. 
Now, what we have is $x \oplus a$, $y \oplus a$ ... 119 views ### The perfect way of using IV in CTR mode I understand that it is necessary to use the same IV for both encryption and decryption in the CTR mode. I'm thinking about the case when I concatenate the secret ... 53 views ### Why is it a quadratic equation? In Groth-Sahai NIZK proof system, they have defined something called Quadratic Equation in $\mathbb{Z}_n$ as shown below. But, my idea of quadratic equation was a second order polynomial equation in a ... 61 views ### One-way function definition I cannot understand why a one-way function $f$ is defined in this way $\text{Pr}(f(A(f(x))) = f(x)) < \frac{1}{p(n)}$ and not $\text{Pr}(A(f(x)) = x) < \frac{1}{p(n)}$ where $A$ is a ... 88 views I'm trying to solve a question about one time pads but I'm not sure if my assumptions are correct. I am to assume that I have a one time pad with perfect secrecy. This one time pad is used to ... 45 views ### Do any stream ciphers with aperiodic keystreams exist? Exactly what it says on the tin. I can imagine constructing such keystreams from: The binary expansions of irrational numbers Chaotic systems like the logistic map or the Lorenz attractor. The ... 12k views ### Decrypt files with original file CTB-Locker [closed] I have problem called CTB-Locker. It encrypted all of my files on computer and since I have lot of documents that are very important I am in problems! As I read online CTB-Locker uses "elliptical ... 225 views It seems that merkle hash tree (MHT) traversals have been discussed somewhat in the literature, but there does not appear to be much written on inserting, deleting, and updating leaves. Is this lack ... 25 views ### Reducing key shares in Damgård-Dupont threshold RSA I'm working on understanding and implementing Damgård, I., & Dupont, K. (2005). Efficient Threshold RSA Signatures with General Moduli and No Extra Assumptions. Public Key Cryptography-PKC ... 
68 views ### Usage of GF(p^m) fields, where p != 2 $GF(2^m)$ Galois fields are widely used in different cryptographic algorithms, for example, in Rijndael. However, $GF(p^m)$ fields are possible with any prime $p$, not only 2, but $GF(2^m)$ fields ... 273 views ### Elliptic Curve based blind signature implementation I want to use Elliptic Curve based blind signature scheme for my research. There is no proper implementation of ECC-based blind signatures. Can someone describe to me which things I need to follow ... 187 views ### Is there a flaw in this ECC blind signature scheme? Recently I've found the following work on the internet: An ECC-Based Blind Signature Scheme The paper claims to be an ECDSA blind signature however it seems that their scheme has a flaw in it. The ... 303 views ### Can AES in CCM or GCM counter mode interoperate with AES in “plain” counter mode (CTR)? I am exploring the use of Windows CNG to replace some OpenSSL-based code that takes advantage of AES in counter mode. From the outside, everything should look the same after the switch. The section ... 27 views ### Proving the complement property of DES? [duplicate] I'm trying to show that for a given plaintext (P), key (K), and cipher (C) in the DES algorithm: ... 608 views ### DES Encryption Algorithm all 64 bits for key instead of 56 bits Would a DES algorithm that uses all 64 bits for the key instead of just the 56 bits be more secure? I have been thinking about it but those 8 bits used for parity are very useful and but including ... 5k views ### What is Attribute Based Encryption? Can someone explain what attribute based encryption is? I was searching for a book or something that can help me in this regard but so far I have found none. Google also returns practically nothing ... 55 views ### Is there a reference that prove that the AES Key Schedule generate random looking round keys? 
Starting from uniformly random generated AES master key, is there a reference that prove that an specific roundkey can be considered as uniformly random generated as well ? 11k views ### Many time pad attack [duplicate] I've already sent my correct solution to a homework exercise from Dan Boneh's Introduction to Cryptography class on Coursera: "Let us see what goes wrong when a stream cipher key is used more than ... 150 views ### Achieving 32-bit verification code with 16-bit CRC? [closed] I am programming an embedded chip that has a hardware 16-bit CRC module. I have to protect some data bytes $d_0,d_1,...,d_{n-1}$ against corruption caused by sudden loss of power; a 32-bit CRC would ... 422 views ### State of the art RSA key generation I would like to know if there is an algorithm to generate a RSA key at the state of the art of the present cryptanalysis. Beside the key lenght I know there are some weakness in the choice of prime ... 2k views ### Why should the RSA private exponent have the same size as the modulus? Consider the generation of an RSA key pair with a given modulus size $n$ and a known, small public exponent $e$ (typically $e = 3$ or $e = 65537$). A common method is to generate two random primes ... 111 views ### Definition of the Decryption oracle In the context of public-key encryption, what would be a formal definition of the decryption oracle? I know the informal definition (i.e., a function that is available to the adversary and that ... 26 views ### Any ideas on login using Digital Signature PKCS I am curious to know about possibility to login into web application just by identifying the user based on digital signature in usb token. any reference in this direction? ### In RSA, why does $p$ have to be bigger than $q$ where $n=p \times q$? In openSSL – during RSA key generation – if $q$ is bigger than $p$, they exchange them. Why is that?
January 9, 2023

# A Straightforward Guide to Linear Regression in Python (2023)

Linear Regression is one of the most basic yet most important models in data science. It helps us understand how we can use mathematics, with the help of a computer, to create predictive models, and it is also one of the most widely used models in analytics in general, from predicting the weather to predicting future profits on the stock market.

In this tutorial, we will define linear regression, identify the tools we need to implement it, and explore how to create an actual prediction model in Python, including the code details. Let's get to work.

## A Short Introduction to Linear Regression

At its most basic, linear regression means finding the best possible line to fit a group of datapoints that seem to have some kind of linear relationship.

Let's use an example: we work for a car manufacturer, and the market tells us we need to come up with a new, fuel-efficient model. We want to pack as many features and comforts as we can into the new car while making it economic to drive, but each feature we add means more weight added to the car. We want to know how many features we can pack into the car while keeping its MPG (miles per gallon) high.

We have a dataset that contains information on 398 cars, including the specific information we are analyzing: weight and miles per gallon. We want to determine if there is a relationship between these two features so we can make better decisions when designing our new model. If you want to code along, you can download the dataset from Kaggle: Auto-mpg dataset

Let's start by importing our libraries:

import pandas as pd
import matplotlib.pyplot as plt

Now we can load our dataset auto-mpg.csv into a DataFrame called auto, and we can use the pandas head() function to check out the first few lines of our dataset.
auto = pd.read_csv('auto-mpg.csv')
auto.head()

   mpg  cylinders  displacement  horsepower  weight  acceleration  model year  origin  car name
0  18.0          8         307.0        130    3504          12.0          70       1  chevrolet chevelle malibu
1  15.0          8         350.0        165    3693          11.5          70       1  buick skylark 320
2  18.0          8         318.0        150    3436          11.0          70       1  plymouth satellite
3  16.0          8         304.0        150    3433          12.0          70       1  amc rebel sst
4  17.0          8         302.0        140    3449          10.5          70       1  ford torino

As we can see, there are several interesting features of the cars, but we will simply stick to the two features we are interested in: weight and miles per gallon, or mpg. We can use matplotlib to create a scatterplot to see the relationship of the data:

plt.figure(figsize=(10,10))
plt.scatter(auto['weight'], auto['mpg'])
plt.title('Miles per Gallon vs. Weight of Car')
plt.xlabel('Weight of Car')
plt.ylabel('Miles per Gallon')
plt.show()

Using this scatterplot, we can easily observe that there does seem to be a clear relationship between the weight of each car and the mpg: the heavier the car, the fewer miles per gallon it delivers (in short, more weight means more gas). This is what we call a negative linear relationship, which, simply put, means that as the X-axis increases, the Y-axis decreases. We can now be sure that if we want to design an economic car, meaning one with high mpg, we need to keep our weight as low as possible.

But we want to be as precise as we can. This means we have to determine this relationship as precisely as possible. Here comes math, and machine learning, to the rescue! What we really need to determine is the line that best fits the data. In other words, we need a linear algebra equation that will tell us the mpg for a car of X weight.
The basic linear algebra formula is as follows:

## $y = xw + b$

This formula means that to find y, we need to multiply x by a certain number, called weight (not to be confused with the weight of the car, which, in this case, is our x), plus a certain number called bias (be ready to hear the word "bias" a lot in machine learning, with many different meanings). In this case, our y is the mpg, and our x is the weight of the car.

We could get out our calculators and start testing our math skills until we arrive at a good enough equation that seems to fit our data. For example, we could plug the following formula into our scatterplot:

## $y = x ÷ -105 + 55$

And we end up with this line:

plt.figure(figsize=(10,10))
plt.scatter(auto['weight'], auto['mpg'])
plt.plot(auto['weight'], (auto['weight'] / -105) + 55, c='red')
plt.title('Miles per Gallon vs. Weight of Car')
plt.xlabel('Weight of Car')
plt.ylabel('Miles per Gallon')
plt.show()

Although this line seems to fit the data, we can easily tell it's off in certain areas, especially around cars that weigh between 2,000 and 3,000 pounds. Trying to determine the best fit line with some basic calculations and some guesswork is very time-consuming and usually leads us to an answer that is far from the correct one.

The good news is that we have some interesting tools we can use to determine the best fit line, and in this case, we have linear regression. scikit-learn, or sklearn for short, is the basic toolbox for anyone doing machine learning in Python. It is a Python library that contains many machine learning tools, from linear regression to random forests, and much more. We will only be using a couple of these tools in this tutorial, but if you want to learn more about this library, check out the scikit-learn documentation.
You can also check out the Machine Learning Intermediate path at Dataquest.

### Implementing Linear Regression in Python with sklearn

Let's get to work implementing our linear regression model step by step. We will be using the basic LinearRegression class from sklearn. This model will take our data and minimize a __Loss Function__ (in this case, one called Sum of Squares) step by step until it finds the best possible line to fit the data. Let's code.

First of all, we will need the following libraries:

• Pandas to manipulate our data.
• Matplotlib to plot our data and results.
• The LinearRegression class from sklearn.

Important TIP: NEVER import the whole sklearn library; it is massive and will take a long time. Only import the specific tools that you need.

And so, we start by importing our libraries:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

Now we load our data into a DataFrame and check out the first few lines (like we did before).

auto = pd.read_csv('auto-mpg.csv')
auto.head()

   mpg  cylinders  displacement  horsepower  weight  acceleration  model year  origin  car name
0  18.0          8         307.0        130    3504          12.0          70       1  chevrolet chevelle malibu
1  15.0          8         350.0        165    3693          11.5          70       1  buick skylark 320
2  18.0          8         318.0        150    3436          11.0          70       1  plymouth satellite
3  16.0          8         304.0        150    3433          12.0          70       1  amc rebel sst
4  17.0          8         302.0        140    3449          10.5          70       1  ford torino

The next step would be to clean our data, but this time it is ready to be used; we just need to prepare the specific data from the dataset. We create two variables with the necessary data: X for the features we want to use to predict our target, and y for the target variable. In this case, we load the weight data from our dataset into X and the mpg data into y.

TIP: When working with only one feature, remember to use double [[]] in pandas so that our series have at least a two-dimensional shape, or you will run into errors when training models.
X = auto[['weight']]
y = auto['mpg']

Since LinearRegression is a class, we need to create a class object where we are going to train our model. Let's call it MPG_Pred (using a capital letter at least at the beginning of the variable name is a convention for Python class objects). There are many specific options you can use to customize the LinearRegression object; take a look at the documentation here. We will stick to the default options for this tutorial.

MPG_Pred = LinearRegression()

Now we are ready to train our model using the fit() function with our X and y variables:

MPG_Pred.fit(X, y)

LinearRegression()

And that's it: we have trained our model. But how well do its predictions match the data? We can plot the predictions, which fall on a line, against the original data. This is what we get:

plt.figure(figsize=(10,10))
plt.scatter(auto['weight'], auto['mpg'])
plt.scatter(X, MPG_Pred.predict(X), c='Red')
plt.title('Miles per Gallon vs. Weight of Car')
plt.xlabel('Weight of Car')
plt.ylabel('Miles per Gallon')
plt.show()

As we can see, our predictions plot (in red) makes a line that seems much better fitted than our original guess, and it was a lot easier than trying to figure it out by hand. Once again, this is the simplest type of regression, and it has many limitations; for example, it only works on data that has a linear tendency. When we have data that is scattered around a line, like the one in this example, we will only be able to predict approximations of the data, and even when the data follows a linear tendency but is curved (like this one), we will always get just a straight line, meaning our accuracy will be low.
Master it, and you can then move on to more complex variations like Multiple Linear Regression (linear regression with two or more features), Polynomial Regression (finds curved lines), Logistic Regression (to use lines to classify data on each side of the line), and (one of my personal favorites) Regression with Stochastic Gradient Descent (our most basic model using one of the most important concepts in Machine Learning: Gradient Descent). ## What We Learned Here are the basic concepts we covered in this tutorial: • What is linear regression: one of the most basic machine learning models. • How linear regression works: fitting the best possible line to our data. • A very brief introduction to the scikit-learn machine learning library. • How to implement the LinearRegression class from sklearn. • An example of linear regression to predict miles per gallon from car weight. If you want to learn more about Linear Regression and Gradient Descent, check out our Gradient Descent Modeling in Python course, where we go into details about this important concept and how to implement it.
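For intuition about what `LinearRegression` computes in the one-feature case, the slope and intercept have a closed-form least-squares solution. The sketch below uses a tiny made-up dataset (not the auto-mpg data) and plain Python, so the arithmetic is easy to follow by hand:

```python
# A minimal sketch of the closed-form least-squares fit for one feature:
# slope w = cov(x, y) / var(x), intercept b = mean(y) - w * mean(x).
# The data here is invented for illustration only.

def simple_linear_fit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Numerator: sum of cross-deviations; denominator: sum of squared x-deviations.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    w = num / den
    b = mean_y - w * mean_x
    return w, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # exactly y = 2x + 1, so the fit is exact
w, b = simple_linear_fit(xs, ys)
print(w, b)  # → 2.0 1.0
```

On the auto-mpg data, `MPG_Pred.coef_` and `MPG_Pred.intercept_` would hold the same two quantities that this function computes, found by minimizing the same sum of squared errors.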
## On the existence of optimal solutions for infinite horizon optimal control problems: Nonconvex and multicriteria problems. (English) Zbl 0619.49002

This note refers to the optimal control problem $$\dot x=f(t,x,u)$$, $$x(0)=x_0$$, with constraints on the control, $$u(t)\in U(t,x)$$, and on the state, $$(t,x(t))\in A$$. The cost functional is of the infinite horizon type, $$\int^{\infty}_{0}g(t,x,u)dt$$, which is supposed to be convergent for all admissible solutions. In addition, standard assumptions are made.

First, for the non-convex case, the related "relaxed" problem is defined and Cesari-type "conditions Q" are introduced. In this way two existence theorems are given. For the case that f(.,.,.) and g(.,.,.) are linear in the state x, a stronger theorem is given. Finally, a result is given for multicriteria optimality, which is here taken in the sense of Pareto optimality. For the details of the proofs, the reader is referred to the author's dissertation and a future paper.

Reviewer: E. Roxin

### MSC:

49J15 Existence theories for optimal control problems involving ordinary differential equations
58E17 Multiobjective variational problems, Pareto optimality, applications to economics, etc.
90C31 Sensitivity, stability, parametric optimization
34H05 Control problems involving ordinary differential equations
93C10 Nonlinear systems in control theory
93C15 Control/observation systems governed by ordinary differential equations
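In display form, the problem the review describes can be collected as follows (a paraphrase of the review's inline notation; the minimization convention is an assumption on my part, since the review does not state whether the functional is minimized or maximized):

```latex
\begin{aligned}
&\text{minimize}   && \int_{0}^{\infty} g(t, x(t), u(t))\,dt \\
&\text{subject to} && \dot{x} = f(t, x, u), \qquad x(0) = x_0, \\
&                  && u(t) \in U(t, x(t)), \qquad (t, x(t)) \in A .
\end{aligned}
```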
### 18.2 Format with `texi2dvi` or `texi2pdf`

The `texi2dvi` program takes care of all the steps for producing a TeX DVI file from a Texinfo document. Similarly, `texi2pdf` produces a PDF file (see footnote 8). To run `texi2dvi` or `texi2pdf` on an input file foo.texi, do this (where ‘prompt$ ’ is your shell prompt):

```
prompt$ texi2dvi foo.texi
prompt$ texi2pdf foo.texi
```

As shown in this example, the file names given to `texi2dvi` and `texi2pdf` must include any extension, such as ‘.texi’. For a list of all the options, run ‘texi2dvi --help’. Some of the options are discussed below.

With the --pdf option, `texi2dvi` produces PDF output instead of DVI, by running `pdftex` instead of `tex`. Alternatively, the command `texi2pdf` is an abbreviation for running ‘texi2dvi --pdf’. The command `pdftexi2dvi` is also provided as a convenience for AUC-TeX (see AUC-TeX), as it prefers to merely prepend ‘pdf’ to DVI-producing tools to have PDF-producing tools.

With the --dvipdf option, `texi2dvi` produces PDF output by running TeX and then a DVI-to-PDF program: if the `DVIPDF` environment variable is set, that value is used, else the first program extant among `dvipdfmx`, `dvipdfm`, `dvipdf`, `dvi2pdf`, `dvitopdf`. This method generally supports CJK typesetting better than `pdftex`.

With the --ps option, `texi2dvi` produces PostScript instead of DVI, by running `tex` and then `dvips` (see Dvips). (Or the value of the `DVIPS` environment variable, if set.)

`texi2dvi` can also be used to process LaTeX files. Normally `texi2dvi` is able to guess the input file language by its contents and file name extension; however, if it guesses wrong you can explicitly specify the input language using the --language=lang command line option, where lang is either ‘latex’ or ‘texinfo’.

One useful option to `texi2dvi` is ‘--command=cmd’. This inserts cmd on a line by itself at the start of the file in a temporary copy of the input file, before running TeX.
With this, you can specify different printing formats, such as `@smallbook` (see `@smallbook`: Printing “Small” Books), `@afourpaper` (see Printing on A4 Paper), or `@pagesizes` (see `@pagesizes` [width][, height]: Custom Page Sizes), without actually changing the document source. (You can also do this on a site-wide basis with texinfo.cnf; see Preparing for TeX). The option -E (equivalently, -e and --expand) does Texinfo macro expansion using `texi2any` instead of the TeX implementation (see Macro Details and Caveats). Each implementation has its own limitations and advantages. If this option is used, no line in the source file may begin with the string `@c _texi2dvi` or the string `@c (_texi2dvi)`. `texi2dvi` takes the --build=mode option to specify where the TeX compilation takes place, and, as a consequence, how auxiliary files are treated. The build mode can also be set using the environment variable `TEXI2DVI_BUILD_MODE`. The valid values for mode are:

local
Compile in the current directory, leaving all the auxiliary files around. This is the traditional TeX use.

tidy
Compile in a local `*.t2d` directory, where the auxiliary files are left. Output files are copied back to the original directory. Using the ‘tidy’ mode brings several advantages:
• the current directory is not cluttered with a plethora of temporary files.
• clutter can be even further reduced using --build-dir=dir: all the `*.t2d` directories are stored there.
• clutter can be reduced to zero using, e.g., --build-dir=/tmp/$USER.t2d or --build-dir=$HOME/.t2d.
• the output file is updated after every successful TeX run, for the sake of concurrent visualization of the output. In a ‘local’ build the viewer stops during the whole TeX run.
• if the compilation fails, the previous state of the output file is preserved.
• PDF and DVI compilation are kept in separate subdirectories preventing any possibility of auxiliary file incompatibility.
On the other hand, because ‘tidy’ compilation takes place in another directory, occasionally TeX won’t be able to find some files (e.g., when using `\graphicspath`): in that case, use -I to specify the additional directories to consider.

clean
Same as ‘tidy’, but remove the auxiliary directory afterwards. Every compilation therefore requires the full cycle.

`texi2dvi` will use `etex` if it is available, because it runs faster in some cases, and provides additional tracing information when debugging texinfo.tex. Nevertheless, this extended version of TeX is not required, and the DVI output is identical. `texi2dvi` attempts to detect auxiliary files output by TeX, either by using the -recorder option, or by scanning for ‘\openout’ in the log file that a run of TeX produces. You may control how `texi2dvi` does this with the `TEXI2DVI_USE_RECORDER` environment variable. Valid values are:

yes
use the -recorder option, no checks.

no
scan for ‘\openout’ in the log file, no checks.

yesmaybe
check whether the -recorder option is supported, and if yes use it; otherwise check whether tracing ‘\openout’ in the log file is supported, and if yes use it, else it is an error.

nomaybe
same as ‘yesmaybe’, except that the ‘\openout’ trace in the log file is checked first.

The default is ‘nomaybe’. This environment variable is provided for troubleshooting purposes, and may change or disappear in the future.

#### Footnotes

##### (8)

PDF stands for ‘Portable Document Format’. It was invented by Adobe Systems for document interchange, based on their PostScript language.
# The limit of $(\sqrt{1+kx}-\sqrt{1- kx})/x$ as $x\to 0$ [closed]

For what value of $k$ is $$f(x)=\begin{cases}\frac{\sqrt{1+kx}-\sqrt{1- kx}}{x} & \mbox{ if }-1 \le x <0 \\ \frac{2x+1}{x-1} & \mbox{ if } 0\le x<1\end{cases}$$ continuous at $x= 0$?

• What have you tried? Have you tried to calculate the limit $\lim_{x\to0^-}f(x)$? – skyking Jan 14 '16 at 15:11
• Yes, I tried it and my answer came -1. Is it correct? – Anubhav Goel Jan 14 '16 at 15:15
• No, that's the right hand limit. For the left hand limit you would have to use the upper expression. – skyking Jan 14 '16 at 15:17
• My answer to the 0/0 form came $k$. Is it right? – Anubhav Goel Jan 14 '16 at 15:32
• If you mean you found $k$ to be $-1$, then: yes, correct! – StackTD Jan 14 '16 at 15:33

For $f$ to be continuous at $x=0$, $f$ needs to be defined at $x=0$ and you need $$\lim_{x \to 0} f(x) = f(0)$$ Clearly $f(0) = -1$ and the right hand limit (using the same part of the function) gives you $-1$ as well. You need to calculate the left hand limit: $$\lim_{x \to 0^-} f(x) = \lim_{x \to 0^-} \frac{\sqrt{1+kx}-\sqrt{1- kx}}{x} = \ldots$$ Can you calculate this limit? It will depend on the parameter $k$ and you need this limit to be equal to $-1$; solve for $k$. To evaluate the limit, you could use the following trick: $$\frac{\sqrt{1+kx}-\sqrt{1- kx}}{x} = \frac{\left(\sqrt{1+kx}-\sqrt{1- kx}\right)\left(\sqrt{1+kx}+\sqrt{1- kx}\right)}{x\left(\sqrt{1+kx}+\sqrt{1- kx}\right)} = \ldots$$ In the numerator, use $(a-b)(a+b)=a^2-b^2$ and simplify. Can you take it from here?

• Alright; what value for $k$ did you get? – StackTD Jan 14 '16 at 15:31
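As a quick numerical sanity check (this code is not part of the original thread; it is a hedged illustration), one can confirm that the one-sided difference quotient tends to $k$, so $k=-1$ matches the right-hand value $f(0)=-1$:

```python
import math

def f_left(x, k):
    """Left-hand branch of f: (sqrt(1+kx) - sqrt(1-kx)) / x for -1 <= x < 0."""
    return (math.sqrt(1 + k * x) - math.sqrt(1 - k * x)) / x

# Rationalizing gives 2k / (sqrt(1+kx) + sqrt(1-kx)) -> k as x -> 0,
# so the left-hand limit equals k; continuity then forces k = -1.
print(f_left(-1e-7, -1))  # close to -1
```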
### Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

### Topics - Junya Zhang

Pages: [1]

1

##### Quiz-4 / Q4-T0701 / T0401
« on: March 02, 2018, 04:30:46 PM »

Verify that the given functions $y_1$ and $y_2$ satisfy the corresponding homogeneous equation; then find a particular solution of the given nonhomogeneous equation. $$t^2y'' - t(t+2)y' + (t+2)y = 2t^3, \quad t>0; \quad y_1(t)=t, \quad y_2(t)=te^t$$

2

##### Quiz-1 / Q1-T0701
« on: January 26, 2018, 02:02:06 PM »

Question: Find the solution of the given initial value problem. $$y'-2y=e^{2t}, \quad y(0)=2$$ Solution: Notice that the given DE is a first order linear ODE. Let $\mu(t)$ denote an integrating factor for the given DE. $$\mu(t) = e^{\int -2 dt} = e^{-2t}$$ Multiply the given DE by $\mu(t)$; note that $\mu(t) \neq 0$ for all $t$: $$e^{-2t}y'- 2e^{-2t}y = e^{-2t} e^{2t}$$ Simplify the equation: $$\frac{d}{dt} (e^{-2t}y)= 1$$ Integrate both sides with respect to $t$: $$e^{-2t}y = t + C$$ Isolate $y$: $$y = (t+C)e^{2t}$$ Since $y(0)=2$, we have $2 = (0+C)\cdot e^{0} = C\cdot 1 = C$. Thus, the solution to the given IVP is $$y = (t+2)e^{2t}$$

Pages: [1]
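As a numerical cross-check of the Q1 solution above (illustrative code, not part of the original post), one can verify that $y=(t+2)e^{2t}$ satisfies $y'-2y=e^{2t}$ and $y(0)=2$:

```python
import math

def y(t):
    return (t + 2.0) * math.exp(2.0 * t)

def dy(t, h=1e-6):
    # central finite-difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2.0 * h)

# the residual of the ODE y' - 2y = e^{2t} should vanish at any t
for t in [0.0, 0.5, 1.0]:
    assert abs(dy(t) - 2.0 * y(t) - math.exp(2.0 * t)) < 1e-4
print(y(0.0))  # 2.0, matching the initial condition
```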
Generation of global variables when using NDSolveValue and Piecewise function

I am trying to optimize a function which involves NDSolveValue, but I cannot complete the optimization due to a memory leak. As mentioned in Memory leak with NDSolve, the memory leak might be due to a bug, but I am trying to break down my problem to see if I am making some mistakes. I am using Mathematica 11.0.1. To illustrate my point, let us solve the 2D heat equation $\nabla \cdot \left[ \kappa ( \boldsymbol{r} ) \nabla T( \boldsymbol{r} ) \right] = \partial_x \left[ \kappa ( \boldsymbol{r} ) \partial_x T( \boldsymbol{r} ) \right] + \partial_y \left[ \kappa ( \boldsymbol{r} ) \partial_y T( \boldsymbol{r} ) \right] = 0$ using some arbitrary region, boundary conditions and a piecewise function $\kappa$:

```
area = Rectangle[{0, 0}, {10, 10}];
kappa[x_, y_] := Piecewise[{{5, y <= 5}, {10, 5 < y}}];
op = D[kappa[x, y]*D[u[x, y], x], x] + D[kappa[x, y]*D[u[x, y], y], y];
sol = NDSolveValue[
  {op == 0,
   DirichletCondition[u[x, y] == 10, y == 0],
   DirichletCondition[u[x, y] == 0, y == 10 && x < 2]},
  u, {x, y} ∈ area];
DensityPlot[sol[x, y], {x, y} ∈ area, Mesh -> None,
 ColorFunction -> "TemperatureMap", PlotRange -> All,
 PlotLegends -> Automatic]
```

with the output: I am completely satisfied with the result, but some global variables have been generated during the calculation:

```
Names["Global`*"]
```

with the output:

```
{area, kappa, op, s5, s6, s7, s8, sol, u, x, y}
```

I do not understand where these s5, s6, s7 and s8 come from! After running the code multiple times, more and more global variables are generated. After five times, the output is:

```
{area, kappa, op, s12, s13, s14, s15, s18, s19, s20, s21, s24, s25, s26, s27, s30, s31, s32, s33, s5, s6, s7, s8, sol, u, x, y, y$}
```

My question is whether this can cause any memory problems. In my actual code, around 80 global variables are generated for each calculation and I guess that around 1000 calculations have to be done during my optimization.
I have tried to use Remove[s5,s6,...], but it does not seem to release any memory; maybe this large number of variables causes me some other problems? If I define kappa to be a constant, no additional variables are generated. What can I do to the code to avoid the generation of these global variables? • The kernel seems to be generating temporary variables. Weird, I haven't seen this before, and can't reproduce with similar code. I suspect it has to do with the Piecewise[] function. – Feyre Oct 25 '16 at 11:59 • Looks like NDSolve creates symbols with Unique["s"] in processing the discontinuities. It should be considered a bug. It shouldn't be causing a significant memory leak, though, since it appears they are used only as symbols. – Michael E2 Oct 25 '16 at 12:26 • @MichaelE2, pretty good analysis. I have a fix in place and if all goes well the next release will behave better - no more "sXY" symbols. Thanks! That said though, I wonder if this is really all that's to the memory issue. Jens, perhaps you could show in some more detail the actual optimization that you do? – user21 Oct 25 '16 at 18:47 • @user21, no it is not all there is to the memory issue. But I temporarily fixed it by upgrading to a better computer and by extracting calculated points in my NMinimize and inserting these as InitialPoints in a new NMinimize when I am out of memory. The module that I minimize involves NDSolveValue, and my guess is still that the memory leak is due to the issue discussed in the link. I will return if my temporary solution does not do it. – Jens Rix Oct 27 '16 at 10:11 • @JensRix, if you find the time it would be good if you could send it to tech support; then a developer could look at it and see if these are really the same issues. Unreported bugs have a close to zero chance of getting fixed.
– user21 Oct 27 '16 at 11:25

It appears the developers forgot to Remove the symbols, which are created with Unique:

```
area = Rectangle[{0, 0}, {10, 10}];
kappa[x_, y_] := Piecewise[{{5, y <= 5}, {10, 5 < y}}];
op = D[kappa[x, y]*D[u[x, y], x], x] + D[kappa[x, y]*D[u[x, y], y], y];
Trace[
 NDSolveValue[{op == 0,
   DirichletCondition[u[x, y] == 10, y == 0],
   DirichletCondition[u[x, y] == 0, y == 10 && x < 2]},
  u, {x, y} ∈ area],
 _Unique | _Remove, TraceForward -> True, TraceInternal -> True]
```

The symbols `s5` etc. seemed to be used only for symbolic processing, e.g., to construct this function. Their creation should not use much memory, but they should have been removed.

• Thank you for the answer. I will just ignore the generated symbols then. – Jens Rix Oct 27 '16 at 10:02
CRP Toolbox

Don't hesitate to ask us questions not included here by using the Recurrence Plot Forum. Please understand that we cannot answer every eMail regarding the CRP toolbox.

I have tried the demo version of the CRP toolbox. Is the full version of the toolbox freely available?
Yes, of course. The toolbox is freely available. However, you have to request an access code, which is necessary to download the toolbox. Moreover, you have to cite this toolbox in your reports, articles and papers, if you use results computed with the toolbox.

I have downloaded the install.m file. But how do I install the toolbox?
The file install.m is the repository of the toolbox and contains everything of the toolbox. The toolbox will be installed when you execute the install.m file from within MATLAB. To do so, please open MATLAB, go to the command window, then change to the directory in which the downloaded file install.m is located, and finally type install in the MATLAB command window. This will then automatically unpack the toolbox from the install.m file and copy all files to the standard MATLAB toolbox folder.

Why do I get a checksum error during installation?
The installation script checks itself for integrity. Only this guarantees a proper installation and usage of the programmes in the toolbox. Checksum errors may occur if you have received the installation script by eMail, and your eMail client has processed the attachment, e.g. with an anti-virus scanner. Please ensure that eMail attachments are not changed by anti-virus scanners. Another possibility is that you have used a proxy server while downloading the toolbox from the web and the proxy server has modified the data in some way. You should try to download the toolbox without a proxy connection.

Why do I get an error during the installation that access to a directory is denied or that a directory cannot be created (MS Windows)?
Under a MS Windows operating system, MATLAB installs toolboxes in the MATLAB path. Usually you will need administrator rights to install files in this path. A future version of the installation routine will fix this issue.

I cannot install but get an errorcode 95.01 (MS Windows).
If you get an errorcode 95.01 and some error related to the userpath function, please check whether the variable userpath is set and whether you have a MATLAB directory in your Documents path (something like C:\Documents and Settings\YOURNAME\Documents\MATLAB). Older MATLAB versions (reported e.g. for MATLAB 2006b) may not be able to reset the userpath variable. Therefore, just manually create a new directory called MATLAB in your Documents path. The installation should then work. If it still does not work, please let us know.

I cannot install but get an errorcode 95.02 (textscan error).
If you get an errorcode 95.02 and some error related to the textscan function, please check whether the variable userpath is set. It should be a standard MATLAB variable pointing to the default path where MATLAB looks for user toolboxes. If the variable is not set, then please refer to https://www.mathworks.com/help/matlab/ref/userpath.html and set (or reset) it. If it still does not work, please let us know.

I cannot install but get an errorcode 95.02 (invalid directory or directory does not exist).
If you get an errorcode 95.02 and an error "invalid directory" or "directory does not exist", probably the standard MATLAB user path does not exist on your system and MATLAB is not able to create it. The user path is contained in the MATLAB variable userpath. Create a folder that corresponds to this path. It might be ~/Documents/MATLAB or ~/matlab.

Why do I get an error during installation of the toolbox?
We have heard of rather rare problems during installation of the toolbox (e.g.
a syntax error may occur regarding a missing rehash function in MATLAB, or the version command in MATLAB returns strange (and non-standard) version information). Usually these errors are due to an outdated MATLAB version (before release 11 or version 5.3, respectively). You should consider updating MATLAB to a newer version.

Starting MATLAB, I get a strange error about failed XML validation. What happened?
The toolbox contains an XML file which gives some information to the MATLAB environment. However, Mathworks has changed the style of this XML file with each new release of MATLAB. Therefore, I gave up fixing this issue for some time. With release 26.2 of the CRP Toolbox, this issue was fixed. If you are bored with these error messages, either just remove all XML files from the toolbox folder (if you don't know the location of the toolbox, just type which CRPtool on the MATLAB command line) or install a newer version of the CRP Toolbox. Nevertheless, the toolbox works fine and without any problem despite the XML bug.

When starting any function from the toolbox, I get the error "Undefined function 'narginchk'."
Unfortunately, Mathworks decided to break upwards and downwards compatibility. The previously used function for checking the input arguments of a function will be removed in a future version of MATLAB. However, the new command for input argument validation is only available since MATLAB 2011b. To solve this problem, you can either update to a newer version of MATLAB or install a custom-made narginchk function in your MATLAB folder, e.g., by using the freely available code at https://gist.github.com/hagenw/5642886. It is suggested to copy this file into your toolbox folder, which can be located by the command which crp (the toolbox path is everything before "CRPtool").

I'm using the rp_plugin on a PC with MS Windows. Why is the plugin not working for the computation of recurrence plots?
There is still a problem with the plugin under MS Windows.
If you are using the plugin to create a recurrence plot with, e.g., crp, it will not work. We are not using Windows PCs ourselves, therefore we do not have much chance of finding out the reason. Sorry. Nevertheless, the RQA can be computed by the plugin. Other operating systems are not affected.

Where can I find documentation on how to properly use the CRP toolbox/recurrence analysis?
A printable documentation for the proper use of the CRP toolbox is available under the subsection printable reference manual. A brief introduction and a forum about recurrence plots, cross and joint recurrence plots, as well as their quantification, can be found at www.recurrence-plot.tk.

How do I cite the toolbox in my paper?
The use of the CRP Toolbox should be cited as
• N. Marwan: Cross Recurrence Plot Toolbox for MATLAB®, Ver. 5.24 (R34), https://tocsy.pik-potsdam.de/CRPtoolbox/, accessed 2022-06-29.
Moreover, at least the following paper should also be cited:
• N. Marwan, M. C. Romano, M. Thiel, J. Kurths: Recurrence Plots for the Analysis of Complex Systems, Physics Reports, 438(5–6), 237–329 (2007). DOI:10.1016/j.physrep.2006.11.001
Please check the User Agreement for the conditions of using this software and which publications should be cited in your work.

How long can the data series be?
This depends on your computer power, especially on the memory, and on the kind of computation needed. For higher output speed, the whole matrix of the (cross/joint) recurrence plot is kept in the workspace, which is limited by the available computer memory (unfortunately, MATLAB is rather memory-hungry). On current computers, the maximal data length can be up to 5,000 (could be longer, test it on your system) for crqa of the entire data (window length equals data length). However, using rqa the length of the used data can be much larger when using small windows. For recurrence plots, using crp the data length can be up to 5,000 (could be longer, test it on your system).
For longer data sets, crp_big is more appropriate. Moreover, read How can I compute a recurrence plot for really long data series? if you really need a (cross/joint) recurrence plot for long data series. If your system is supported, you can use a plugin for the CRP Toolbox, which allows computing (cross/joint) recurrence plots and their quantification for longer data series. The used data can be more than 10 times longer than without using the plugin! Alternatively, you can use a commandline programme for computing recurrence plots and RQA (without MATLAB).

How can I compute a recurrence plot for really long data series?
For computing (cross/joint) recurrence plots of long data series, use a script similar to the following. The data length is finally limited by the performance of the used platform. The examples also illustrate the capability of using the programmes in a script. The first example uses sparse matrices (when calculating a recurrence plot of only one time series, replace x2 by x1):

```
%% some parameter settings and create example time series
m = 3; t = 20; e = .5; w = 300;
x1 = sin((1:5000)/40)';
x2 = sin((1:7000)/80)';
Y = spalloc(length(x2)-(m-1)*t, length(x1)-(m-1)*t, 1);
k = 0;
h1 = waitbar(0,'Compute sub CRPs - Please be patient.');
Nx = length(x1)-(m-1)*t;
Ny = length(x2)-(m-1)*t;
ax = ceil(Nx/w); ay = ceil(Ny/w);
Nx2 = floor(Nx/ax); Ny2 = floor(Ny/ay);

%% compute single CRPs and fill the sparse matrix
for i = 1:Nx2:Nx-Nx2
    for j = 1:Ny2:Ny-Ny2
        k = k+1; waitbar(k/(Nx*Ny/(Nx2*Ny2)))
        X2 = crp(x1(i:i+Nx2+(m-1)*t), x2(j:j+Ny2+(m-1)*t), m, t, e, ...
            'nonorm','max','silent');
        X = sparse(double(X2));
        Y(j:j+Ny2-1, i:i+Nx2-1) = X(1:Ny2, 1:Nx2);
    end
end
close(h1)
spy(Y)
```

The second example writes single (cross/joint) recurrence plots to the hard disk (when calculating a recurrence plot of only one time series, replace x2 by x1):

```
%% some parameter settings and create example time series
m = 3; t = 20; e = .5; w = 300;
x1 = sin((1:5000)/40)';
x2 = sin((1:7000)/80)';
Nx = length(x1); Ny = length(x2);

%% compute single CRPs and write them to the hard disk
b1 = zeros((m-1)*t+ceil(length(x1)/w)*w, 1); b1(1:length(x1)) = x1;
b2 = zeros((m-1)*t+ceil(length(x2)/w)*w, 1); b2(1:length(x2)) = x2;
h = waitbar(0,'Compute sub CRPs - Please be patient.');
for i = 1:w:length(b1)-w-1
    waitbar(i/(length(b1)-w-1))
    for j = 1:w:length(b2)-w-1
        X = crp(b1(i:i+w+(m-1)*t-1), b2(j:j+w+(m-1)*t-1), m, t, e, ...
            'max','silent','nonorm');
        i2 = num2str((i+w-1)/w); j2 = num2str((j+w-1)/w);
        filename = ['CRP_',i2,'_',j2,'.tif'];
        imwrite(X, filename, 'tif')
    end
end
close(h)

%% read single CRPs and unify them
xmax = (i+w-1)/w; ymax = (j+w-1)/w;
Y = zeros(length(b1), length(b2));
h = waitbar(0,'Read sub CRPs - Please be patient.');
for i = 1:xmax
    waitbar(i/xmax)
    for j = 1:ymax
        i2 = num2str(i); j2 = num2str(j);
        filename = ['CRP_',i2,'_',j2,'.tif'];
        X = imread(filename);   % read the sub CRP back from disk
        Y(i*w-(w-1):i*w, j*w-(w-1):j*w) = double(X');
    end
end
close(h)
Y(Nx+1:end,:) = []; Y(:,Ny+1:end) = [];
spy(double(Y))
```

Alternatively, you can use a plugin for the CRP Toolbox, or a commandline programme for computing recurrence plots (without MATLAB).

Will the data be scaled by the programmes?
The data will be scaled by default, i.e. the data will be normalized to a mean of zero and a standard deviation of one. This is a normal procedure in data analysis. This may also be helpful for applying CRPs to two data series with different magnitudes. However, you can use the programmes without normalizing the data. Just call the programmes with the argument 'nonormalize', e.g.
`crp(x,y,3,4,.1,'nonormalize')` or, shorter, `crp(x,y,3,4,.1,'non')`.

Is it possible to use original phase space vectors instead of embedding?
Yes, of course. The commands crp2 and crqa support original phase space vectors. E.g., creation of an RP from the phase space vectors of a harmonic pendulum:

```
x = [sin(0.1 * [1:100])', cos(0.1 * [1:100])'];
crp2(x)
```

How to choose an appropriate threshold value?
There are different possibilities or requirements for the threshold. First, you should consider the articles
• M. Thiel, M. C. Romano, J. Kurths, R. Meucci, E. Allaria, F. T. Arecchi: Influence of observational noise on the recurrence quantification analysis, Physica D, 171(3), 138–152 (2002). DOI:10.1016/S0167-2789(02)00586-9
• S. Schinkel, O. Dimigen, N. Marwan: Selection of recurrence threshold for signal detection, European Physical Journal – Special Topics, 164(1), 45–53 (2008). DOI:10.1140/epjst/e2008-00833-5
• K. H. Kraemer, R. V. Donner, J. Heitzig, N. Marwan: Recurrence threshold selection for obtaining robust recurrence characteristics in different embedding dimensions, Chaos, 28(8), 085720 (2018). DOI:10.1063/1.5024914
A too large threshold usually makes no real sense. Mostly it is suggested to choose the threshold such that its value corresponds to 10% of the maximum or mean phase space diameter (using the CRP toolbox you can find these diameters with the command pss). Another way is to choose the threshold such that the recurrence rate is 10%. Further possibilities are to compute several RQA measures for an increasing threshold value and to look for a region where the RQA measures change slowly.

What is the meaning of embedding dimension, time delay, and how to choose them?
The state of a system can be described by its $$d$$ state variables $$x_1(t), x_2(t), \ldots, x_d(t)$$ and can be written as a $$d$$-dimensional vector $$\vec{x}(t)$$. However, the observation of a real process usually does not yield all possible state variables.
Either not all state variables are known or not all of them can be measured. Most often only one observation (measurement) $$u_i = u(t)$$, with $$t = i \Delta t$$, is available. Following Takens' embedding theorem (1981) we can reconstruct the phase space from a single time series $$u_i$$ by using an embedding dimension $$m$$ and a time delay $$\tau$$: $$\vec{x}(t) = \vec{x}_i = ( u_i, u_{i+\tau}, \ldots, u_{i+(m-1)\tau} ), \qquad t = i \Delta t,$$ where $$\vec{x}(t)$$ is the vector of reconstructed states in the phase space at time $$t$$. The choice of $$m$$ and $$\tau$$ should be based on methods for detecting the optimal values of these two parameters, like the method of false nearest neighbours, fnn (for $$m$$), and mutual information, mi (for $$\tau$$), which ensure that all free parameters are covered and autocorrelation effects are avoided (e.g. Kantz and Schreiber, 1997).
• N. Marwan: Encounters With Neighbours – Current Developments Of Concepts Based On Recurrence Plots And Their Applications, Ph.D. Thesis, University of Potsdam, ISBN 3-00-012347-4, urn:nbn:de:kobv:517-0000856, 2003

What is the meaning of "vector switching" in the control panel of crp, crp_big, crp2 and jrp?
Vector switching means that single components of the phase space vector will get a negative sign. For instance, switching the 2nd component of a state $$\vec{x}(t)$$ of a 3-dimensional system: $$\left( \begin{array}{r} x_1(t)\\ x_2(t)\\ x_3(t)\\ \end{array} \right) \rightarrow \left( \begin{array}{r} x_1(t)\\ -x_2(t)\\ x_3(t)\\ \end{array} \right).$$

What is the meaning of the different norms and recurrence criteria (maximum, Euclidean, normalized, fixed, order patterns etc.)?
Different norms and recurrence criteria (neighbourhood criteria) can be used for the definition of the recurrence of a state.
Maximum Norm ($$L_\infty$$-norm)
The distance between two phase space vectors is the maximal distance between their components: $$\max\bigl(|x_1(t_1) - x_1(t_2)|, \ldots, |x_m(t_1) - x_m(t_2)|\bigr)$$

Euclidean Norm ($$L_2$$-norm)
The distance between two phase space vectors is the straight-line distance between the two vectors: $$\sqrt{\sum_i (x_i(t_1) - x_i(t_2))^2}$$

Minimum Norm ($$L_1$$-norm, Manhattan or Taxicab norm)
The distance between two phase space vectors is the sum of the distances of all components: $$\sum_i |x_i(t_1) - x_i(t_2)|$$

Normalized Norm
All phase space vectors are normalised to a length of one, $$\vec{x}'(t) = \frac{\vec{x}(t)}{\|\vec{x}(t)\|}$$, then the Euclidean distance is used.

Fixed amount of nearest neighbours
The number of neighbours in the neighbourhood is constant, i.e. the number of recurrence points in one column of the RP is constant. Such an RP is not symmetric. This is the original definition of an RP by Eckmann et al. (1987). J.-P. Eckmann, S. Oliffson Kamphorst, D. Ruelle: Recurrence Plots of Dynamical Systems, Europhysics Letters, 4(9), 973–977 (1987). DOI:10.1209/0295-5075/4/9/004

Interdependent neighbours
An experimental recurrence criterion for CRPs (it does not make sense for auto RPs), similar to a JRP. A recurrence point is defined if the indices of the neighbours of the first trajectory coincide with the indices of the neighbours of the second trajectory. The neighbours of the second trajectory are found by a dynamic threshold, given by the radius of the current neighbourhood of the first trajectory. This was inspired by A. Schmitz: Measuring statistical dependence and coupling of subsystems, Physical Review E, 62(5), 7508–7511 (2000).

Order matrix
A recurrence is defined by the condition $$x_i \ge x_j$$. C. Bandt: Ordinal time series analysis, Ecological Modelling, 182(3–4), 229–238 (2005).
DOI:10.1016/j.ecolmodel.2004.04.003

Order patterns
A recurrence is defined by the recurrence of order patterns $$\pi_i$$, which represent the dynamics of the time series by a symbolisation of local rank orders: $$x_i \rightarrow \pi_k$$ where $$\pi_k \in \mathbb{R}^m$$. A. Groth: Visualization of coupling in time series by order recurrence plots, Physical Review E, 72(4), 046220 (2005). DOI:10.1103/PhysRevE.72.046220

When I use the recurrence criteria of fixed amount of nearest neighbours or order patterns, does it matter whether the data are normalised?
For recurrence plots based on a fixed amount of nearest neighbours (FAN) or order patterns, it does not matter whether the data are normalised, because normalisation changes neither the neighbourhood (FAN) nor the local rank order (order patterns). For cross recurrence plots (CRPs), normalisation affects the result for a FAN-based CRP, because the two different trajectories in phase space will be stretched or compressed and, hence, change their intersecting closeness. For cross order patterns recurrence plots it again does not matter.

Why do the RQA results differ from those gained with the RQA software of Charles Webber?
First we must say that the CRP toolbox was developed independently of the work of Charles Webber. Therefore, it may contain some differences, which can cause the results to be not fully comparable with the software of Charles Webber. The RQA software uses a specific normalization of the distance matrix, whereas the CRP toolbox uses either no normalization or a normalization to a standard deviation of one and a mean of zero. However, with an appropriate data preparation and settings you can get the same results. For compatibility, use a Theiler window of size one and ensure that the data are normalized beforehand by the same distance which is used in the RQA software; e.g. normalize with the maximal phase space diameter.
In the CRP toolbox, with the programme pss you can estimate the maximal distance of the phase space, which is reconstructed from the data. This maximal distance can be used for the normalization of the data:

```
d_max = pss(x,3,5,'euclidean');
x_norm = 100*x/d_max;
```

Applying crqa on this normalized data, using a Theiler window of one, and not letting crqa normalize the data (which is the default setting!), we will get the same RQA measures as the RQA software of Charles Webber does:

```
RQA = crqa(x_norm, dim, lag, e, [], [], l_min, v_min, 1, ...
    'euclidean', 'nonormalize', 'silent')
```

How can I determine the number of diagonal lines in the RP?
In order to get the number of diagonal lines, first compute a recurrence plot and assign it to some variable, e.g.

```
X = crp(x,1,1,.2,'non','silent');
```

(which means it uses no embedding and a threshold of 0.2, the data will not be normalized beforehand (which is the default), and output of the calculation is suppressed). Then simply use the command dl:

```
[m L] = dl(X);
```

Now you find in m the mean diagonal line length and in L a vector containing the lengths of each found diagonal line. From L you can get the number of lines with lengths larger than, e.g., 2 by

```
N = length(nonzeros(L>2))
```

How can I determine the histogramme of the lengths of diagonal/vertical lines in the RP?
From the recurrence matrix in X the histogramme of the lengths of diagonal/vertical lines can be determined using the commands dl and tt. For the diagonal lines simply call

```
[m L] = dl(X);
```

and for the vertical lines

```
[m L] = tt(X);
```

In m is the mean diagonal/vertical line length and in L a vector containing the lengths of each found diagonal/vertical line. From L the histogramme of the line lengths is then

```
hist(L,[1:max(L)])
```

or

```
H = hist(L,[1:max(L)]);
```

Please note that this histogramme contains the number of lines which are exactly of length $$l$$.

How can I calculate the entropy of the data by means of RQA?
Using RQA it is not possible to calculate the entropy of the data. The entropy measure in RQA is the Shannon entropy of the distribution of the lengths of the diagonal lines in a recurrence plot. However, using the CRP toolbox you can apply the command entropy to a histogram of the data. This provides a simple entropy estimate of the data.

How can I calculate the RQA trend measure?
There are different ways to calculate the RQA measure TREND using the CRP toolbox. I have not included this measure in the toolbox for several reasons. The main reason is that, in my opinion, this measure depends much more strongly than the other measures on the chosen settings. This measure is a bit critical as it can give you results which may be misleading or even contradictory. Therefore, I do not support the distribution and application of this measure by including it in my toolbox. If you really want to calculate it, you can either use CRQAD and (after some cutting of the result vector) calculate the slope, or, even simpler, just use the RP matrix and apply some MATLAB functions. In the following you can find an example where TREND is calculated in sliding windows:

```
% x = time series
N = 1000; x = rand(N,1);
m = 3;   % embedding dimension
t = 1;   % embedding delay
e = .3;  % recurrence threshold
w = 200; % window size
ws = 50; % window step
timescale = 1:ws:N - w + 1;
trend = zeros(length(timescale),1);
for i = timescale
    X = crp(x(i:i+w-1), m, t, e, 'max', 'non', 'sil'); % calculate RP
    N_X = size(X,2); % size of RP (i.e. number of columns)
    T_ = zeros(N_X-1,1);
    % count the number of recurrence points in diagonal k
    for k = 1:N_X-1
        T_(k) = nnz(diag(X,k)) / (N_X-k)*100;
    end
    Ntau = N_X - 1 - round(0.1*N_X); % last 10% of the RP will be skipped
    p = polyfit((2:Ntau+1)', T_(1:Ntau), 1); % slope
    trend(i) = 1000 * p(1); % Webber's definition includes factor 1000
end
plot(timescale, trend(timescale))
```

The result is the TREND measure as calculated in Chuck Webber's RQA software.
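The line-based quantities behind the answers above (the line lengths collected by dl and the Shannon entropy of their distribution, i.e. the ENTR measure) can also be illustrated language-neutrally. The following Python sketch is a toy re-implementation for a small binary matrix, not CRP Toolbox code:

```python
import math
from collections import Counter

def diagonal_line_lengths(R):
    """Lengths of all diagonal lines in a binary recurrence matrix R
    (main diagonal excluded), analogous to what dl counts."""
    n = len(R)
    lengths = []
    for k in range(-(n - 1), n):
        if k == 0:
            continue  # skip the line of identity
        run = 0
        for i in range(n):
            j = i + k
            if 0 <= j < n and R[i][j]:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
        if run:
            lengths.append(run)
    return lengths

def line_entropy(lengths):
    """Shannon entropy of the line-length distribution (the ENTR measure)."""
    counts = Counter(lengths)
    total = sum(counts.values())
    # max(0.0, ...) only guards against a negative zero
    return max(0.0, -sum(c / total * math.log(c / total)
                         for c in counts.values()))

R = [[1, 1, 0, 0],
     [1, 1, 1, 0],
     [0, 1, 1, 1],
     [0, 0, 1, 1]]
L = diagonal_line_lengths(R)
print(sorted(L))        # [3, 3]
print(line_entropy(L))  # 0.0: all lines have the same length
```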
How are the RQA measures in the plot of the GUI of the function crqa aligned to the time scale?

The window of length $$w$$ is applied to the data and not to the RP, i.e., the RP will be smaller than the window, namely of size $$w-(m-1)\tau$$. If we consider the data window covering times $$i \ldots i+w$$, the corresponding RQA measures are assigned to time $$i$$. Therefore, if you see the beginning of a transition in the plot of the RQA measures at time $$i$$, this transition will probably happen at time $$i+w-(m-1)\tau$$.

What is the difference in the output of the command crqad denoted as, e.g., Y.RRp and Y.RRm?

In order to study also anti-correlation, the second time series is additionally multiplied by −1: $$x(t) \rightarrow x(t)\\ y(t) \rightarrow -y(t)$$ The results of the diagonal-wise computed RQA measures from the CRP are denoted as RRp for the normally constructed CRP and RRm for the CRP based on the "negative" second time series, corresponding to $$RR_+$$ and $$RR_-$$, respectively, as introduced in

• N. Marwan, J. Kurths: Nonlinear analysis of bivariate data with cross recurrence plots, Physics Letters A, 302(5–6), 299–307 (2002). DOI:10.1016/S0375-9601(02)01170-2

Why do the RQA measures contain so many zeros when using a window step size larger than one?

This is just so that the time scale of the original data can be applied directly to the RQA measures. Assume we use a window step size of 10. Then we need to plot only every 10th value:

ws = 10;
Y = crqa(rand(1000,1), 1, 1, 0.1, 50, ws, 'silent');
plot(Y(1:ws:end, 1))

How can I predict states using the CRP toolbox like the VRA software of Eugene Kononov?

Using the CRP toolbox it is not possible to predict states. Prediction is not part of a recurrence analysis. Although Eugene Kononov provides an additional forecast model in his VRA software, which is based on recurrence, we do not intend to include a similar model in the CRP toolbox.
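For readers outside MATLAB, the diagonal-line extraction that the toolbox commands dl and tt perform (described in the questions above) can be sketched in a few lines of Python. This is my own illustration, not part of the CRP toolbox; it scans only the upper triangle of a binary recurrence matrix:

```python
# Collect the lengths of all diagonal lines (runs of recurrence points)
# in the upper triangle of a binary recurrence matrix X.
# Pure-Python sketch, for illustration only.

def diagonal_line_lengths(X):
    n = len(X)
    lengths = []
    for k in range(1, n):            # superdiagonal offset k
        run = 0
        for i in range(n - k):
            if X[i][i + k]:
                run += 1
            elif run:                # a line just ended
                lengths.append(run)
                run = 0
        if run:                      # line touching the matrix border
            lengths.append(run)
    return lengths

X = [[1, 1, 0, 1],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]
L = diagonal_line_lengths(X)                    # [3, 1]
mean_length = sum(L) / len(L)                   # analogue of m in [m L] = dl(X)
n_longer_than_2 = sum(1 for l in L if l > 2)    # analogue of length(nonzeros(L>2))
```

The histogram of line lengths then follows from counting how often each value occurs in `L`, exactly as `hist(L,[1:max(L)])` does in MATLAB.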
# central limit theorem and VAR

If I have a lot of data points and a number of different dependent variables, can I use the central limit theorem to assume the data is multivariate normal and compute my VAR? Is this an appropriate use of the central limit theorem for VAR calculation?
# Find the Components Along the X, Y, Z Axes of the Angular Momentum L of a Particle, Whose Position Vector is R with Components X, Y, Z and Momentum is P with Components Px, Py and Pz. Show that If the Particle Moves Only in the X-Y Plane the Angular Momentum Has Only a Z-component. - CBSE (Science) Class 11 - Physics

Concept: Torque and Angular Momentum

#### Question

Find the components along the x, y, z axes of the angular momentum $\vec{l}$ of a particle, whose position vector is $\vec{r}$ with components $x$, $y$, $z$ and momentum is $\vec{p}$ with components $p_x$, $p_y$ and $p_z$. Show that if the particle moves only in the x-y plane, the angular momentum has only a z-component.

#### Solution 1

$l_x = yp_z - zp_y$, $l_y = zp_x - xp_z$, $l_z = xp_y - yp_x$

Linear momentum of the particle: $\vec{p} = p_x\hat{i} + p_y\hat{j} + p_z\hat{k}$

Position vector of the particle: $\vec{r} = x\hat{i} + y\hat{j} + z\hat{k}$

Angular momentum:

$\vec{l} = \vec{r} \times \vec{p} = (x\hat{i} + y\hat{j} + z\hat{k}) \times (p_x\hat{i} + p_y\hat{j} + p_z\hat{k}) = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ x & y & z \\ p_x & p_y & p_z \end{vmatrix}$

$l_x\hat{i} + l_y\hat{j} + l_z\hat{k} = \hat{i}(yp_z - zp_y) - \hat{j}(xp_z - zp_x) + \hat{k}(xp_y - yp_x)$

Comparing the coefficients of $\hat{i}$, $\hat{j}$, $\hat{k}$, we get:

$l_x = yp_z - zp_y, \quad l_y = zp_x - xp_z, \quad l_z = xp_y - yp_x \quad \ldots \text{(i)}$

The particle moves in the x-y plane. Hence, the z-components of the position vector and of the linear momentum vector vanish, i.e., $z = p_z = 0$. Thus, equation (i) reduces to:

$l_x = 0, \quad l_y = 0, \quad l_z = xp_y - yp_x$

Therefore, when the particle is confined to move in the x-y plane, the direction of the angular momentum is along the z-direction.

#### Solution 2

We know that the angular momentum $\vec{l}$ of a particle having position vector $\vec{r}$ and momentum $\vec{p}$ is given by $\vec{l} = \vec{r} \times \vec{p}$, where $\vec{r} = x\hat{i} + y\hat{j} + z\hat{k}$ ($x$, $y$, $z$ being the components of $\vec{r}$) and $\vec{p} = p_x\hat{i} + p_y\hat{j} + p_z\hat{k}$. Therefore

$\vec{l} = \vec{r} \times \vec{p} = (x\hat{i} + y\hat{j} + z\hat{k}) \times (p_x\hat{i} + p_y\hat{j} + p_z\hat{k})$

or

$l_x\hat{i} + l_y\hat{j} + l_z\hat{k} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ x & y & z \\ p_x & p_y & p_z \end{vmatrix} = (yp_z - zp_y)\hat{i} + (zp_x - xp_z)\hat{j} + (xp_y - yp_x)\hat{k}$

From this relation we conclude that $l_x = yp_z - zp_y$, $l_y = zp_x - xp_z$ and $l_z = xp_y - yp_x$. If the given particle moves only in the x-y plane, then $z = 0$ and $p_z = 0$, and hence $\vec{l} = (xp_y - yp_x)\hat{k}$, which is only the z-component of $\vec{l}$. This means that for a particle moving only in the x-y plane, the angular momentum has only a z-component.

#### APPEARS IN

NCERT Solution for Physics Textbook for Class 11 (2018 to Current), Chapter 7: System of Particles and Rotational Motion, Q: 6, Page 178
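The componentwise result can be verified numerically. A small Python sketch (my own check, with arbitrary example numbers) computes l = r × p for a particle confined to the x-y plane:

```python
# Numeric check of l = r x p for a particle confined to the x-y plane
# (z = 0 and p_z = 0); the numbers are arbitrary examples.

def cross(r, p):
    x, y, z = r
    px, py, pz = p
    return (y * pz - z * py,   # l_x
            z * px - x * pz,   # l_y
            x * py - y * px)   # l_z

l = cross((2.0, 3.0, 0.0), (0.5, -1.0, 0.0))
# l_x = l_y = 0, and l_z = x*p_y - y*p_x = 2*(-1) - 3*0.5 = -3.5
```

As the derivation predicts, only the z-component survives when both z and p_z are zero.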
# A TV cable company has 8400 subscribers who are each paying $34 per month. It can get 140 more subscribers for each $0.50 decrease in the monthly fee. What rate will yield maximum revenue, and what will this revenue be?

Maximum revenue?

Revenue at the maximum revenue rate will be?
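The question leaves the optimization to the reader. As a quick sketch of my own: with n fee decreases of $0.50, revenue is R(n) = (8400 + 140n)(34 − 0.5n), and a brute-force search finds the maximizing rate:

```python
# Revenue after n fifty-cent decreases: R(n) = (8400 + 140*n) * (34 - 0.5*n).
def revenue(n):
    return (8400 + 140 * n) * (34 - 0.5 * n)

best_n = max(range(68), key=revenue)   # fee stays positive for n < 68
best_rate = 34 - 0.5 * best_n          # $32 per month
best_revenue = revenue(best_n)         # $286,720 with 8960 subscribers
```

The discrete search agrees with calculus here: the quadratic's vertex falls at exactly n = 560/140 = 4 decreases.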
## Estimate Models Using armax

This example shows how to estimate a linear, polynomial model with an ARMAX structure for a three-input and single-output (MISO) system using the iterative estimation method `armax`. For a summary of all available estimation commands in the toolbox, see Model Estimation Commands.

Load a sample data set `z8` with three inputs and one output, measured at `1`-second intervals and containing 500 data samples.

`load iddata8`

Use `armax` to both construct the `idpoly` model object and estimate the parameters:

`$A\left(q\right)y\left(t\right)=\sum _{i=1}^{nu}{B}_{i}\left(q\right){u}_{i}\left(t-n{k}_{i}\right)+C\left(q\right)e\left(t\right)$`

Typically, you try different model orders and compare results, ultimately choosing the simplest model that best describes the system dynamics. The following commands specify the estimation data set, `z8`, and the orders of the A, B, and C polynomials as `na`, `nb`, and `nc`, respectively. `nk` of `[0 0 0]` specifies that there is no input delay for any of the three input channels.

```
opt = armaxOptions;
opt.Focus = 'simulation';
opt.SearchOptions.MaxIterations = 50;
opt.SearchOptions.Tolerance = 1e-5;
na = 4;
nb = [3 2 3];
nc = 4;
nk = [0 0 0];
m_armax = armax(z8, [na nb nc nk], opt);
```

`Focus`, `Tolerance`, and `MaxIterations` are estimation options that configure the estimation objective function and the attributes of the search algorithm. The `Focus` option specifies whether the model is optimized for simulation or prediction applications. The `Tolerance` and `MaxIterations` search options specify when to stop estimation. For more information about these properties, see the `armaxOptions` reference page.

`armax` is a version of `polyest` with simplified syntax for the ARMAX model structure. The `armax` method both constructs the `idpoly` model object and estimates its parameters.

View information about the resulting model object.
`m_armax` ```m_armax = Discrete-time ARMAX model: A(z)y(t) = B(z)u(t) + C(z)e(t) A(z) = 1 - 1.284 z^-1 + 0.3048 z^-2 + 0.2648 z^-3 - 0.05708 z^-4 B1(z) = -0.07547 + 1.087 z^-1 + 0.7166 z^-2 B2(z) = 1.019 + 0.1142 z^-1 B3(z) = -0.06739 + 0.06828 z^-1 + 0.5509 z^-2 C(z) = 1 - 0.06096 z^-1 - 0.1296 z^-2 + 0.02489 z^-3 - 0.04699 z^-4 Sample time: 1 seconds Parameterization: Polynomial orders: na=4 nb=[3 2 3] nc=4 nk=[0 0 0] Number of free coefficients: 16 Use "polydata", "getpvec", "getcov" for parameters and their uncertainties. Status: Estimated using ARMAX on time domain data "z8". Fit to estimation data: 80.86% (simulation focus) FPE: 2.888, MSE: 0.9868 ``` `m_armax` is an `idpoly` model object. The coefficients represent estimated parameters of this polynomial model. You can use `present(m_armax)` to show additional information about the model, including parameter uncertainties. View all property values for this model. `get(m_armax)` ``` A: [1 -1.2836 0.3048 0.2648 -0.0571] B: {[-0.0755 1.0870 0.7166] [1.0188 0.1142] [-0.0674 ... ]} C: [1 -0.0610 -0.1296 0.0249 -0.0470] D: 1 F: {[1] [1] [1]} IntegrateNoise: 0 Variable: 'z^-1' IODelay: [0 0 0] Structure: [1x1 pmodel.polynomial] NoiseVariance: 2.7984 InputDelay: [3x1 double] OutputDelay: 0 Ts: 1 TimeUnit: 'seconds' InputName: {3x1 cell} InputUnit: {3x1 cell} InputGroup: [1x1 struct] OutputName: {'y1'} OutputUnit: {''} OutputGroup: [1x1 struct] Notes: [0x1 string] UserData: [] Name: '' SamplingGrid: [1x1 struct] Report: [1x1 idresults.polyest] ``` The `Report` model property contains detailed information on the estimation results. To view the properties and values inside `Report`, use dot notation. 
For example:

`m_armax.Report`

```
ans =
          Status: 'Estimated using ARMAX with simulation focus'
          Method: 'ARMAX'
InitialCondition: 'zero'
             Fit: [1x1 struct]
      Parameters: [1x1 struct]
     OptionsUsed: [1x1 idoptions.polyest]
       RandState: [1x1 struct]
        DataUsed: [1x1 struct]
     Termination: [1x1 struct]
```

This action displays the contents of the estimation report, such as model quality measures (`Fit`), the search termination criterion (`Termination`), and a record of the estimation data (`DataUsed`) and options (`OptionsUsed`).
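To make the ARMAX structure concrete outside MATLAB, here is a minimal pure-Python simulator of the difference equation A(q)y(t) = Σᵢ Bᵢ(q)uᵢ(t) + C(q)e(t). This is my own sketch of the model structure, not an estimator, and the coefficient values in the usage line are illustrative placeholders, not the estimates printed above:

```python
# Minimal ARMAX simulator for A(q) y(t) = sum_i B_i(q) u_i(t) + C(q) e(t),
# with polynomials in the backshift operator q^-1 and a[0] fixed at 1.

def simulate_armax(a, b_list, u_list, c, e):
    """a, c: coefficient lists [1, a1, ...]; b_list: one coefficient list
    per input; u_list: one input sequence per input; e: noise sequence."""
    n = len(e)
    y = [0.0] * n
    for t in range(n):
        acc = 0.0
        for b, u in zip(b_list, u_list):                                 # input terms
            acc += sum(b[k] * u[t - k] for k in range(len(b)) if t >= k)
        acc += sum(c[k] * e[t - k] for k in range(len(c)) if t >= k)     # noise terms
        acc -= sum(a[k] * y[t - k] for k in range(1, len(a)) if t >= k)  # AR terms
        y[t] = acc
    return y

# Placeholder first-order example (not the estimated z8 model):
# step input, no noise, A(q) = 1 - 0.5 q^-1, B(q) = 1, C(q) = 1.
y = simulate_armax([1, -0.5], [[1.0]], [[1.0] * 10], [1.0], [0.0] * 10)
# y rises toward the steady-state gain B(1)/A(1) = 1/0.5 = 2
```

In the placeholder example the recurrence is y(t) = u(t) + 0.5·y(t−1), so the output approaches 2 geometrically.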
## anonymous one year ago

Water is a polar molecule, meaning it carries partial charges (δ+ or δ–) on opposite sides of the molecule. For two formula units of NaCl, drag the sodium ions and chloride ions to where they would most likely appear based on the grouping of the water molecules in the area provided. Note that red spheres represent O atoms and white spheres represent H atoms.

• This Question is Open

1. Ciarán95

Have you any kind of diagram to go along with this, @ppo? I'm assuming that, if we have a polar water molecule with partial charges (due to the uneven distribution of the shared electrons in the O–H covalent bonds) [drawing omitted], then positively charged species will be attracted to the negative end of the molecule and vice versa. So, based on the electrostatic interactions of the Na+ and Cl– ions with the polar H2O molecule, we would expect something like this [drawing omitted]. I'm not sure if this is entirely what you're after, but hopefully it's of some kind of help to you. Remember that oppositely charged ions attract and like charges repel!
What fraction is equivalent to 4/16?

True or false: a pivot chart can be moved to another sheet.

Pick a section from Chapter 1 of The Call of the Wild in which Buck is interacting with another character. Imagine that section narrated in the first-person point of view from one of the characters' perspectives, and rewrite it in the space below, attempting to use a similar writing style as the original author. Then, in a second paragraph, consider this: are you able to explore some of the same ideas and themes when viewing the situation from the character's first-person point of view, as compared…

Hello, someone help me.

What's gucci? Someone please.

In Nerumbia, all businesses are owned by the state. The Nerumbian government plans production, fixes commodity prices, and also gives directions on investments to ensure they benefit the nation as a whole and not only a few individuals. Nerumbia has adopted a _____ economy. a. capitalist b. command c. traditional d. mixed e. market

How much would a dozen apples cost if the price of each apple is 8.35?

What allowed Homo sapiens to become more successful hunters than the early humans that came before?

Can someone help me with this please, with the working: (x-10)(x-4)

Mr. Richard plans to take 25 students on a school trip. They plan to go to the zoo. The bus fee is $100 for the entire trip. The entrance fee is $2.50 per person. How much should each student pay? Note: Mr. Richard does not need to pay.

Help me please, it's for tomorrow. [translated from the Spanish "Ayúdenme xfavor es para mañana"]

Determine the correct sequence of events for the production of a human protein in a bacterium. I. Target gene is inserted into a plasmid II. Bacterium replicates in a fermenter III. Bacteria produce target protein IV. Modified plasmid is inserted into bacterium

You are told that a third-degree polynomial has zeroes at -1, 2, 5. Part A: Write the three factors of the polynomial. Part B: Write the polynomial in standard form.

35. Word provides more than 25 picture formats -- named groups of formatting characteristics that enable you easily to change a picture's look to a more visually appealing one. _________________________ *

The word "hay" best translates to __________.

I need help with Classifying Rational Numbers for an assignment due tomorrow. Could someone explain it for me?

How did Rome treat conquered people who lived peacefully?

The function f(x) = −x^2 + 44x − 384 models the hourly profit, in dollars, a shop makes for selling coffee, where x is the number of cups of coffee sold, and f(x) is the amount of profit. Part A: Determine the vertex. What does this calculation mean in the context of the problem? (4 points) Part B: Determine the x-intercepts. What do these values mean in the context of the problem? (4 points) Part C: Determine the y-intercept. What does this value mean in the context of the problem? (2 points)
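For the coffee-profit question above, the vertex is at x = −b/(2a) and the intercepts come from the quadratic formula. A short Python check (my own worked sketch, not part of the original question):

```python
import math

# f(x) = -x^2 + 44x - 384: hourly coffee profit from the question above.
a, b, c = -1.0, 44.0, -384.0

x_vertex = -b / (2 * a)                        # cups sold at maximum profit
f_vertex = a * x_vertex**2 + b * x_vertex + c  # the maximum hourly profit

disc = math.sqrt(b * b - 4 * a * c)            # discriminant term, sqrt(400) = 20
roots = sorted([(-b - disc) / (2 * a),
                (-b + disc) / (2 * a)])        # break-even numbers of cups

y_intercept = c                                # profit when no coffee is sold
# vertex (22, 100); x-intercepts 12 and 32; y-intercept -384
```

So the shop's profit peaks at $100/hour when 22 cups are sold, it breaks even at 12 and 32 cups, and it loses $384/hour if nothing sells.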
# Proving $\oint\vec{F}(\vec{G}\cdot \hat{n}) d\sigma=\iiint[\vec{F}(\nabla \cdot \vec{G})+(\vec{G}\cdot\nabla)\vec{F}]dV$

Given $C^1$ vector fields $\vec{F}, \vec{G}$, show that: $$\unicode{x222F}_\Sigma\vec{F}(\vec{G}\cdot \hat{n}) d\sigma=\iiint_\Omega [\vec{F}(\nabla \cdot \vec{G})+(\vec{G}\cdot\nabla)\vec{F}]dV$$ I know that I need to start with the components of $\vec{F}(\vec{G}\cdot \hat{n})$, and use the divergence theorem, but I'm not sure where to start.

Note that $$\hat x_i\cdot \oint_\Sigma \vec F(\hat n\cdot \vec G)\,d\sigma=\oint_\Sigma F_i(\hat n\cdot \vec G)\,d\sigma$$ Next, we use the product rule identity $\nabla \cdot(F_i\vec G)= F_i \nabla\cdot \vec G+\vec G\cdot \nabla F_i$ along with the Divergence Theorem to write \begin{align} \oint_\Sigma F_i(\hat n\cdot \vec G)\,d\sigma&=\int_\Omega \left( F_i \nabla\cdot \vec G+\vec G\cdot \nabla F_i\right)\,dV\\\\ &=\hat x_i \cdot \int_\Omega \left( \vec F \nabla\cdot \vec G+(\vec G\cdot \nabla) \vec F\right)\,dV \end{align} Inasmuch as this is true for all $i$, we arrive at the coveted equality $$\oint_\Sigma \vec F(\hat n\cdot \vec G)\,d\sigma=\int_\Omega \left( \vec F \nabla\cdot \vec G+(\vec G\cdot \nabla) \vec F\right)\,dV$$
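The key step is the product-rule identity ∇·(F_i G) = F_i ∇·G + G·∇F_i. As a quick numerical sanity check of that identity (my own sketch, using arbitrary smooth test fields and central finite differences):

```python
# Verify div(Fi*G) = Fi*div(G) + G.grad(Fi) at a point, by central differences.
h = 1e-5

def Fi(x, y, z):            # arbitrary smooth scalar field (one component of F)
    return x * y + z * z

def G(x, y, z):             # arbitrary smooth vector field
    return (x * x, y * z, x + z)

def div(V, p):
    """Divergence of vector field V at point p via central differences."""
    x, y, z = p
    return ((V(x + h, y, z)[0] - V(x - h, y, z)[0])
            + (V(x, y + h, z)[1] - V(x, y - h, z)[1])
            + (V(x, y, z + h)[2] - V(x, y, z - h)[2])) / (2 * h)

def grad(f, p):
    """Gradient of scalar field f at point p via central differences."""
    x, y, z = p
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

p = (0.7, -1.2, 0.4)
FiG = lambda x, y, z: tuple(Fi(x, y, z) * g for g in G(x, y, z))
lhs = div(FiG, p)
rhs = Fi(*p) * div(G, p) + sum(g * d for g, d in zip(G(*p), grad(Fi, p)))
# lhs and rhs agree up to finite-difference error
```

The two sides match to within the O(h²) truncation error of the difference scheme, consistent with the identity used in the proof.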
# Drop Evaporation at 10 degrees C, 1 atmosphere, and 71% relative humidity (January in Iraq and near Baghdad): Sarin (Nerve Agent) versus Water

Note: Sadly, I have noticed that the code of LaTeX changes in WordPress. As an example, the text "\textdegree" used to provide the ˚ symbol but now provides "$\textdegree$". As such, please be patient and do not blame me for all editor faults! 🙂 It truly is an experiment in progress, and I am dependent upon LaTeX and WordPress consistency.

Title: Drop Evaporation at 10 degrees C, 1 atmosphere, and 71% relative humidity (January in Iraq and near Baghdad): Sarin (Nerve Agent) versus Water

Conclusion: The molar flux of water is greater than that of sarin. As such, I assume the evaporation of water is greater than that of sarin. The latter is supported by a relative volatility (water:sarin) of 12.6 at the specified conditions. Also, the boiling point of sarin is greater than that of water.

1991 Gulf War Illness

Before I continue, I would like the reader to know that more than 250,000 United States 1991 Gulf War veterans are suffering from 1991 Gulf War Illnesses. The illness can be psychologically and medically debilitating. For more information and to provide support, please read the December 2012 scientific journal articles that connect chemical weapons to a potential cause of the illnesses[7;8]. Also, I wrote a post about differing hypotheses and 1991 Gulf War Illness[17].
Actual mathematical properties of a potential drop

Equation: $z = 1-\frac{1}{8}(x^2 + y^2)$

The base: $y = \sqrt{2.3^2 - x^2}$

The base radius: 2.3 millimeters; the height: 1 millimeter

Drop volume: Double integration in polar coordinates

$Volume = \iint_R z \ \mathrm{d}A = \iint_R f(x,y) \ \mathrm{d}A = \iint_R f(r\cos(\theta), r\sin(\theta)) \ r \ \mathrm{d}r \ \mathrm{d}\theta$

In polar coordinates $r^2 = x^2 + y^2$, so

$z = 1 - \frac{1}{8}(x^2 + y^2) = 1 - \frac{1}{8}(r^2)$

$Volume =\iint_R (1 - \frac{1}{8}(x^2 + y^2)) \ \mathrm{d}A = \iint_R (1 - \frac{1}{8}(r^2)) \ r\mathrm{d}r \ \mathrm{d}\theta$

R is a disk of radius 2.3 mm in the xy plane, which is one reason I can use polar coordinates.

(i) For fixed $\theta$, r range: 0 ≤ r ≤ 2.3 millimeters; (ii) angle range: 0 ≤ $\theta$ ≤ 2$\pi$

$Volume = \int_0^{2\pi} \int_0^{2.3} (1-\frac{1}{8}(r^2)) \ r\mathrm{d}r \ \mathrm{d}\theta$

From TI-92:

$Volume = \int_0^{2\pi} [\frac{-(r^2-8)^2}{32}]_{r=0}^{r=2.3} \ \mathrm{d}\theta = \int_0^{2\pi}(1.77) \ \mathrm{d}\theta$

$Volume = \int_0^{2\pi}(1.77) \ \mathrm{d}\theta = [1.77\,\theta]_0^{2\pi} = 1.77(2\pi) - 1.77(0)$

$Drop \ volume = 11.12 \ mm^3$

Convert to cubic centimeters for calculations

$\frac{1 \ cm}{10 \ mm} \ and \ \frac{1^3 \ cm^3}{10^3 \ mm^3} = \frac{1 \ cm^3}{1000 \ mm^3}$

$Drop \ volume = 11.12 \ mm^3(\frac{1 \ cm^3}{1000 \ mm^3}) = 0.011 \ cm^3$

Density of fluids

Sarin[12-14]: ChemSpider: 1.07; Noblis: 1.096 at 20 deg C; WISER: 1.0887 at 25 deg C

Note: It was difficult to find density data on sarin. As such, I will assume the density changes little between the above values and 10 deg Celsius.
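The drop-volume integral can be cross-checked without a calculator. A short Python midpoint-rule sketch (my own check) integrates over r and multiplies by 2π for the θ integral, since the integrand has no θ dependence:

```python
import math

# V = int_0^{2*pi} int_0^{2.3} (1 - r^2/8) * r dr dtheta,
# evaluated by the midpoint rule in r; the theta integral is a factor 2*pi.
R, n = 2.3, 100_000
dr = R / n
integral = 0.0
for i in range(n):
    r = (i + 0.5) * dr              # midpoint of the i-th subinterval
    integral += (1 - r * r / 8) * r * dr

volume_mm3 = 2 * math.pi * integral  # ~11.12 mm^3, matching the TI-92 result
volume_cm3 = volume_mm3 / 1000       # ~0.011 cm^3
```

The numerical value agrees with the analytic antiderivative r²/2 − r⁴/32 evaluated at r = 2.3.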
Sarin average: $Density \ average =\frac{(1.07+1.096+1.0887)}{3} = 1.09 \approx 1.1 \ \frac{g}{cm^3}$

Water at 10 deg C[3;15]: Perry’s: 999.699; Engineering ToolBox: 999.7

Water average: $Density \ average = \frac{(999.699 + 999.7)}{2} = 999.699 \frac{kg}{m^3}$

Conversion: $(999.699 \frac{kg}{m^3}) (\frac{1 m^3}{100^3cm^3})(\frac{1000 g}{1 kg}) = 1.0 \frac{g}{cm^3}$

Water average: $Density \ average = 1.0 \ \frac{g}{cm^3}$

Evaporation mass: drop volume × density

Sarin: $Mass = 0.011cm^3(1.1 \frac{g}{cm^3}) = 0.012 \ grams$

Water: $Mass = 0.011cm^3(1.0 \frac{g}{cm^3}) = 0.011 \ grams$

Evaporation moles: mass divided by molecular weight

Sarin: $Moles_{C_4H_{10}FO_2P} = \frac{0.012 \ grams}{(\frac{140.1 \ grams}{mole})} = 8.6 x 10^{-5} \ moles$

Water: $Moles_{H_2O} = \frac{0.011 \ grams}{\frac{18 \ grams}{mole}} = 6.11 x 10^{-4} \ moles$

Mass transfer: Evaporation

Sarin

The moles of sarin evaporated per square centimeter per unit time may be expressed by[1]

$N_{A,z} = \frac{cD_{AB}}{(z_2-z_1)} \frac{(y_{A1} - y_{A2})}{y_{B,lm}}$

Total molar concentration, c

$PV = nRT; \ c =\frac{n}{V} = \frac{P}{RT} \ (\frac{mol}{cm^3})$

The gas constant “R” will be calculated at standard temperature and pressure, “STP”:

$Temperature = 273 \ K; \ Pressure = 1 \ atm; \ Molar \ volume = 22.4 \frac{L}{mol}$

Conversion: $22.4 \frac{L}{mol}(\frac{1000 \ cm^3}{1\ Liter}) = 2.24x10^4 \ \frac{cm^3}{mol}$

$R = \frac{PV}{nT} = \frac{(1 \ atm)(2.24x10^4\frac{cm^3}{mol})}{273 \ K} = 82.05 \frac{atm \ cm^3}{mol \ K}$

$c = \frac{P}{RT} = \frac{1 \ atm}{(82.05 \frac{atm \ cm^3}{mol \ K})(283 \ K)} = 4.31x10^{-5} \ \frac{mol}{cm^3}$

Sarin diffusivity in air at 10 deg Celsius and 1 atmosphere[16]:

$D_{AB} = 0.070 \frac{cm^2}{s}$

Assume the gas film $(z_2 - z_1) = 0.5 \ cm$

Mole fraction of sarin

$y_{A1} = \frac{p_{A1}}{P_{total}}; \ y_{A2} = 0$

From[13a]: Sarin vapor pressure: $log \ p_A(Torr) = 9.4(\pm 0.1) - \frac{2700 (\pm 40) }{T(K)} \ from \ 0 \ to \ 147 \ deg \ C$

$log \ p_A(Torr) = 9.4 - \frac{2700}{283} = -0.1406; \ 10^{log \ p_A} = 10^{-0.1406} = 0.723 \ Torr$

Conversion: $0.723 \ Torr(\frac{1 \ atm}{760 \ Torr}) = 9.51x10^{-4} \ atm$

$y_{A1} = \frac{9.51x10^{-4} \ atm}{1 \ atm} = 9.51x10^{-4}$

Assume no sarin in the air at a distance away from the drop, $y_{A2} = 0$

For a binary system

$y_{B1} = 1 - y_{A1} = 1 - 9.51x10^{-4} = 0.99905; \ y_{B2} = 1 - y_{A2} = 1 - 0 = 1$

$y_{B,lm} = \frac{(y_{B2} - y_{B1})}{ln(\frac{y_{B2}}{y_{B1}})} = \frac{(1-0.99905)}{ln(\frac{1}{0.99905})} = 0.9995$

The sarin flux

$N_{A,z} = \frac{cD_{AB}}{(z_2-z_1)}\frac{(y_{A1}-y_{A2})}{y_{B,lm}} = \frac{(4.31x10^{-5})(0.070)}{0.5}\frac{(9.51x10^{-4} - 0)}{0.9995} = 5.74x10^{-9} \ \frac{mol}{cm^2 \ s}$

Conversion: $N_{A,z} = 5.74x10^{-9} \frac{mol}{cm^2 \ s}\frac{3600 \ s}{1 \ hr} = 2.07x10^{-5} \ \frac{mol}{cm^2 \ hr}$

Water

The moles of water evaporated per square centimeter per unit time may be expressed by[1]

$N_{A,z} = \frac{cD_{AB}}{(z_{2}-z_{1})}\frac{(y_{A1}- y_{A2})}{y_{B,lm}}$

Total molar concentration, c

$PV = nRT; \ c = \frac{n}{V} = \frac{P}{RT} \ (\frac{mol}{cm^3})$

As before, the gas constant “R” will be calculated at standard temperature and pressure, “STP”:

$Temperature = 273 \ K; \ Pressure = 1 \ atm; \ Molar \ volume = 22.4\frac{L}{mol}$

Conversion: $22.4 \frac{L}{mol}(\frac{1000 \ cm^3}{1 \ Liter}) = 2.24x10^4 \frac{cm^3}{mol}$

$R = \frac{PV}{nT} = \frac{(1 \ atm)(2.24x10^4\frac{cm^3}{mol})}{273 \ K} = 82.05 \frac{atm \ cm^3}{mol \ K}$

$c = \frac{P}{RT} = \frac{1 \ atm}{(82.05 \frac{atm \ cm^3}{mol \ K})(283 \ K)} = 4.31x10^{-5} \ \frac{mol}{cm^3}$

Water diffusivity in air at 10 deg Celsius and 1 atmosphere[16]:

$D_{AB} = 0.193 \frac{cm^2}{s}$

Assume the gas film $(z_2-z_1) = 0.5 \ cm$

Mole fraction of water

$y_{A1} = \frac{p_{A1}}{P_{total}}; \ y_{A2}= \frac{p_{A2}}{P_{total}}$

From[4]: Water vapor pressure: $log_{10} \ P_{vp} = A - \frac{B}{T + C - 273.15}$

Constants A, B, C from [Appendix A;4], T in kelvins, and pressure in bar:

$log_{10} \ P_{vp} = 5.11564 - \frac{1687.537}{283+230.17-273.15} = -1.91518$

$P_{vp} = 10^{-1.91518} = 0.0122 \ bars$

Conversion: $\frac{1 \ atm}{1.01325 \ bars}(0.0122 \ bars) = 0.012 \ atm; \ \frac{760 \ mmHg}{1 \ atm}(0.012 \ atm) = 9.11 \ mmHg$

$y_{A1} = \frac{p_{A1}}{P_{total}} = \frac{0.012 \ atm}{1 \ atm} = 0.012$

From[2] and a relative humidity of 71% (January weather in Iraq)[9]:

Partial pressure of water in the flowing stream

Relative humidity[2]: $s_r(h_r) = \frac{p_{v}}{p_v^*(T)} \times 100\% = 71\%$

At 283 K, the previous equation gave: $p_v^* = 0.012 \ atm$

$\frac{71\%}{100}(0.012 \ atm) = p_v = p_{A2} = 0.0085 \ atm$

$y_{A2} = \frac{p_{A2}}{P_{total}} = \frac{0.0085 \ atm}{1 \ atm} = 0.0085$

For a binary system

$y_{B1} = 1 - y_{A1} = 1 - 0.012 = 0.988; \ y_{B2} = 1 - y_{A2} = 1 - 0.0085 = 0.992$

$y_{B,lm} = \frac{(y_{B2} - y_{B1})}{ln(\frac{y_{B2}}{y_{B1}})} = \frac{(0.992 - 0.988)}{ln(\frac{0.992}{0.988})} = 0.990$

Molar flux of water

$N_{A,z} = \frac{cD_{AB}}{(z_2-z_1)} \frac{(y_{A1} - y_{A2})}{y_{B,lm}} = \frac{(4.31x10^{-5})(0.193)}{0.5} \frac{(0.012 - 0.0085)}{0.990} = 5.88x10^{-8} \ \frac{mol}{cm^2 \ s}$

Conversion: $N_{A,z} = 5.88x10^{-8} \frac{mol}{cm^2 \ s}\frac{3600 \ s}{1 \ hr} = 2.12x10^{-4} \ \frac{mol}{cm^2 \ hr}$

Molar Flux: Sarin versus water comparison

Sarin: $N_{A,z} = 2.07x10^{-5} \ \frac{mol}{cm^2 \ hr}$

Water: $N_{A,z} = 2.12x10^{-4} \ \frac{mol}{cm^2 \ hr}$

Ratio: $\frac{Water}{Sarin} = \frac{2.12x10^{-4}}{2.07x10^{-5}} = 10.2$

Although the above is a simple evaluation based on “diffusion through a stagnant gas film”[1] and not the most rigorous, the ratio makes sense because the ratio of vapor pressures at 10 deg Celsius, the “relative volatility”[18], is

$\alpha_{water-sarin} = \frac{p_{H_2O}}{p_{C_4H_{10}FO_2P}} = \frac{0.012 \ atm}{9.51x10^{-4} \ atm} = 12.6$

Per the US Department of Energy[19]: “The evaporation of a liquid depends upon its vapor pressure — the higher the vapor pressure at a given temperature the faster the evaporation — other condition being equal.
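The two flux numbers and their ratio can be reproduced directly from the stagnant-film formula. A short Python sketch of my own, using the input values stated in the text (283 K, 1 atm, 0.5 cm film) and evaluating y_B,lm from its log-mean definition:

```python
import math

# Stagnant-film evaporation flux: N_Az = (c*D_AB/(z2-z1)) * (yA1 - yA2)/yB_lm.

def molar_flux(c, D_AB, film, yA1, yA2):
    yB1, yB2 = 1 - yA1, 1 - yA2
    yB_lm = (yB2 - yB1) / math.log(yB2 / yB1)     # log-mean of the inert mole fraction
    return c * D_AB / film * (yA1 - yA2) / yB_lm  # mol/(cm^2 s)

c = 1 / (82.05 * 283)                             # mol/cm^3 at 283 K, 1 atm
N_sarin = molar_flux(c, 0.070, 0.5, 9.51e-4, 0.0)
N_water = molar_flux(c, 0.193, 0.5, 0.012, 0.0085)
ratio = N_water / N_sarin                         # water evaporates ~10x faster
```

Multiplying each flux by 3600 s/hr recovers the per-hour values, and the flux ratio lands near the relative-volatility estimate of 12.6, as the text argues it should.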
The higher/lower the boiling point the less/more readily will a liquid evaporate.”[19] The boiling points are: Sarin[14]: 147 deg Celsius; Water[15a]: 100 deg Celsius Conclusion: The evaporation of water is greater than the evaporation of sarin. References: [1] Welty, James R.; Wicks, Charles E.; Wilson, Robert E. (1984) Fundamentals of Momentum, Heat, and Mass Transfer, Third Edition. New York: John Wiley & Sons. [2] Felder, Richard M; Rousseau, Ronald W. (1986) Elementary Principles of Chemical Processes, Second Edition. New York: John Wiley & Sons. [3] Perry, Robert H; Green, Don W. (1997) Perry’s Chemical Engineers’ Handbook, Seventh Edition. New York. McGraw-Hill. [4] Poling, Bruce E.; Prausnitz, John M.; O’Connell, John P. (2001) The Properties of Gases and Liquids, Fifth Edition. New York: Mcgraw-Hill. [5] Anton, Howard. Calculus with Analytic Geometry, Fifth Edition. New York: John Wiley & Sons. [6] Barker, William H; Ward, James E. (1995) The Calculus Companion. Calculus: Howard Anton, Fifth Edition. [7] Haley, Robert W.; Tuite, James J. Meteorological and Intelligence Evidence of Long-Distance Transit of Chemical Weapons Fallout from Bombing Early in the 1991 Persian Gulf War, December 2012. karger.com[online]. 2012. vol. 40. pp. 160-177. Available from: http://content.karger.com/ProdukteDB/produkte.asp?Aktion=ShowFulltext&ArtikelNr=345123&Ausgabe=257603&ProduktNr=224263 DOI: 10.1159/000345123 [8] Haley, Robert W.; Tuite, James J. Epidemiologic Evidence of Health Effects from Long-Distance Transit of Chemical Weapons Fallout from Bombing Early in the 1991 Persian Gulf War, December 2012. karger.com[online]. vol. 40. pp. 178-189. Available from: http://content.karger.com/ProdukteDB/produkte.asp?Aktion=ShowFulltext&ArtikelNr=345124&Ausgabe=257603&ProduktNr=224263 DOI: 10.1159/000345124 [10] Harding, Byron. Diffusivity of Water versus Sarin (Nerve Agent) in Air at 10 Degrees Celsius (50 Degrees Fahrenheit) and 1 Atmosphere, January 2013. 
chrisbharding.wordpress.com[online]. 2013. Available from: https://chrisbharding.wordpress.com/2013/01/07/test/ [11] Removed [12] ChemSpider. The free chemical database. Sarin. chemspider.com[online]. 2013. Available from: http://www.chemspider.com/Chemical-Structure.7583.html [13] Noblis. Chemistry of GB (Sarin). noblis.org[online]. 2013. Available from: http://www.noblis.org/MissionAreas/nsi/ChemistryofLethalChemicalWarfareAgents/Pages/Sarin.aspx [13a] Noblis. Parameters for Evaluation of the Fate, Transport, and Environmental Impacts of Chemical Agents in Marine Environments. noblis.org[online]. 2012. Available from: http://pubs.acs.org/doi/pdf/10.1021/cr0780098 [14] Wireless Information System for Emergency Responders. WISER. Sarin, CAS RN: 107-44-8. webwiser.nlm.nih.gov[online]. 2013. Available from: http://webwiser.nlm.nih.gov/getSubstanceData.do?substanceID=151&displaySubstanceName=Sarin&UNNAID=&STCCID=&selectedDataMenuItemID=30 [15] The Engineering ToolBox. Water-Density and Specific Weight. engineeringtoolbox.com[online]. 2013. Available from: http://www.engineeringtoolbox.com/water-density-specific-weight-d_595.html [15a] The Engineering Toolbox. engineeringtoolbox.com[online]. 2013. Available from: http://www.engineeringtoolbox.com/ [16] Harding, Byron. Diffusivity of Water versus Sarin (Nerve Agent) in Air at 10 Degrees Celsius (50 Degrees Fahrenheit) and 1 Atmosphere, January 2013. chrisbharding.wordpress.com[online]. 2013. Available from: https://chrisbharding.wordpress.com/2013/01/07/test/ [17] Harding, Byron. 1991 Gulf War Illnesses and Differing Hypotheses: Nerve and Brain Death Versus Stress, December 2012. gather.com[online] 2012. Available from: http://www.gather.com/viewArticle.action?articleId=281474981824775 [18] Chopey, Nicholas P. (1994). Handbook of Chemical Engineering Calculations, Second Edition. Boston Massachusetts: Mc Graw Hill. [19] US Department of Energy. Newton: Ask A Scientist.Evaporation and Vapor Pressure. 
newton.dep.anl.gov[online]. 2012. Available from: http://www.newton.dep.anl.gov/askasci/phy00/phy00130.htm

# Diffusivity of Water versus Sarin (Nerve Agent) in Air at 10 Degrees Celsius (50 Degrees Fahrenheit) and 1 Atmosphere

Diffusivity of Water versus Sarin in Air at 10 Degrees Celsius (50 Degrees Fahrenheit) and 1 Atmosphere[see bottom of post]

1991 Gulf War veterans are suffering from 1991 Gulf War Illness[3;References]. Scientific research suggests the combination of experimental medication (pyridostigmine bromide, as an example), overuse of pesticides, destruction of chemical weapons (sarin, as an example) at plants and football-sized bunkers, oil fires, etc. as the potential cause[6-9]. Dr. Robert Haley, MD, UT Southwestern Medical Center, and Intelligence Analyst James Tuite have reported how 1991 Gulf War veterans might have been contaminated with chemical weapons prior to the ground war, “Desert Storm”[9]. In fact, their work provides data proving that sophisticated equipment detected chemical weapons in Saudi Arabia prior to the ground war[9a]. It is also hypothesized that the “toxic cocktail” has caused autonomic dysfunction, nerve death, and brain death[9-14]. As a 1991 Gulf War veteran, I have been affected. I am also a chemical engineer with a degree in biological sciences. Like most educated people, I have lost much of my knowledge in chemical engineering and biological sciences, but I can, if I find a good example, still “plug and chug” by using “tested and trusted” equations, which is advised anyhow. 🙂

Here, I compare the diffusivity of sarin vapor and water vapor in air by using the Chapman–Enskog equation with the Brokaw relations as a correction for polar gases. I have shown that the equation can be used when considering the diffusivity of a polar gas in a non-polar matrix[19]. After performing the latter calculation, I noticed that reference [1] also suggests the Brokaw relations for the diffusivity of one polar gas molecule in a non-polar matrix[1].
I will be comparing the diffusivity of polar sarin = A in non-polar air = B at 10$\textdegree$C and 1 atmosphere. I chose 10$\textdegree$C because I found data, possibly experimental, stating that 90% of the volume of a 1 mm sarin drop on a non-absorbing surface at 10$\textdegree$C evaporated in 0.24 hours[17].

Equations

Chapman and Enskog Equation[1]. Reference [1] reports that this equation has an “average absolute error” of 7.9% when used without the Brokaw relations; the range is from 0% to 25%. The authors[1] did not provide an average for the Brokaw relations but do provide specific absolute error values. When I averaged the Brokaw values[1], I obtained a 10.9% average absolute error with a range from 0% to 33%.

Chapman and Enskog Equation[1] $D_{AB} = \frac{3}{16} \frac{(\frac{4 \pi \kappa T}{M_{AB}})^{1/2}}{n \pi \sigma_{AB}^2 \Omega_D} f_D$

Neufeld, et al. Equation $\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{((D)(T^*))}} + \frac{E}{\exp{((F)(T^*))}} + \frac{G}{\exp{((H)(T^*))}}$

Polar Gases: Brokaw Relations $\Omega_D = \Omega_D(Neufeld) + \frac{0.19 \delta_{AB}^2}{T^*}$ $T^* = \frac{\kappa T}{\epsilon_{AB}}$ $\delta = \frac{1.94x10^3 \mu_p^2}{V_bT_b}$ $\mu_p = dipole \ moment, \ debyes$ $V_b = liquid \ molar \ volume \ at \ the \ normal \ boiling \ point, \ \frac{cm^3}{mol}$ $T_b = normal \ boiling \ point \ (1 \ atm), K$ $\frac{\epsilon}{\kappa} = 1.18(1 + 1.3\delta^2)T_b$ $\sigma = (\frac{1.585V_b}{1 + 1.3 \delta^2})^{1/3}$ $\delta_{AB} = (\delta_A \delta_B)^{1/2}$ $\frac{\epsilon_{AB}}{\kappa} = (\frac{\epsilon_A}{\kappa} \frac{\epsilon_B}{\kappa})^{1/2}$ $\sigma_{AB} = (\sigma_A \sigma_B)^{1/2}$

When $f_D$ is chosen as unity and “n” is expressed by the ideal-gas law, the Chapman-Enskog Equation $D_{AB} = \frac{0.00266 T^{3/2}}{PM_{AB}^{1/2} \sigma_{AB}^2 \Omega_D}$

Brokaw Diffusivity: Water in Air at 10$\textdegree$C and 1 Atmosphere

Molecular Weight Water: $M_A = M_{H_2O} = 2(MW_H) + 1(MW_O) = 2(1.008) + 1(16.00) = 18 \frac{g}{mol}$ Air: 1 mole basis $21\%
\frac{molO_2}{mol} \ O_2 \ and \ 79\% \frac{molN_2}{mol}\ N_2$ $Moles \ O_2 = 0.21 \frac{molO_2}{mole}(1 \ mol) = 0.21 \ molO_2; Moles \ N_2 = 0.79 \frac{molN_2}{mol} (1 \ mol) = 0.79 \ molN_2$ Grams oxygen: $0.21 \ molO_2(MW_{O_2}) = 0.21 molO_2(\frac{32 \ grams \ O_2}{mol \ O_2}) = 6.72 \ grams \ O_2$ Grams nitrogen: $0.79 \ molN_2(MW_{N_2}) = 0.79 molN_2(\frac{28 \ grams \ N_2}{mol \ N_2}) = 22.12 \ grams \ N_2$ Air: $M_B = M_{air} = \frac{(6.72 + 22.12)}{1mol} = 28.8 \frac{g}{mol}$ $M_{AB} = 2[\frac{1}{M_A} + \frac{1}{M_B}]^{-1} = 2[\frac{1}{18} + \frac{1}{28.8}]^{-1} = 22.2$

Need: $\sigma; \delta; \Omega_D$ Note: I will only be calculating a delta value for water because air is non-polar[1;19].

$\delta_A = \delta_{H_2O} = \frac{1.94x10^3 \mu_p^2}{V_bT_b}$ From [16]: $V_b = 18.045 \frac{cm^3}{mol}$ From [20]: $\mu_{p_{H_2O}} = 1.855$ $T_b = 373 K$ $\delta_{A_{H_2O}} = \frac{1.94x10^3(1.855)^2}{(18.045)(373)} = \frac{6.68x10^3}{6.73x10^3} = 0.992$ $\frac{\epsilon_{A}}{\kappa} = 1.18(1 + 1.3 \delta_{A}^2)T_b = 1.18(1 + 1.3(0.992)^2)373 K = 1003 K$ $\sigma_A = (\frac{1.585V_b}{1 + 1.3\delta_A^2})^{1/3} = (\frac{1.585 (18.045)}{1 + 1.3(0.992)^2})^{1/3} = (12.55)^{1/3} = 2.32 \AA$

Need $T^*$ to calculate $\Omega_D$ $T^* = \frac{\kappa T}{\epsilon_{AB}}$ $\frac{\epsilon_{AB}}{\kappa} = (\frac{\epsilon_A}{\kappa} \frac{\epsilon_B}{\kappa})^{1/2}$ Water: $\frac{\epsilon_A}{\kappa} = 1003 \ K$; Air[1, Appendix B]: $\frac{\epsilon_B}{\kappa} = 78.6 \ K$ $\frac{\epsilon_{AB}}{\kappa} = \sqrt{\frac{\epsilon_A}{\kappa} \frac{\epsilon_B}{\kappa}} = \sqrt{(1003 K)(78.6 K)} = 280.8 K$ $\frac{\epsilon_{AB}}{\kappa T} = \frac{280.8 K}{283 K} = 0.992$ $T^* = \frac{\kappa T}{\epsilon_{AB}} = \frac{1}{0.992} = 1.01$

Neufeld, et al.: $\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{((D)(T^*))}} + \frac{E}{\exp{((F)(T^*))}} + \frac{G}{\exp((H)(T^*))} =$ $\Omega_D = \frac{1.06036}{1.01^{0.15610}} + \frac{0.19300}{\exp{((0.47635)(1.01))}} +
\frac{1.03587}{\exp{((1.52996)(1.01))}} + \frac{1.76474}{\exp{((3.89411)(1.01))}} =$ $\Omega_D = 1.43$ $\Omega_D = \Omega_D(Neufield) + \frac{0.19 \delta_{AB}^2}{T^*}$ changed to $\Omega_D(Neufield) + \frac{0.19 \delta_A^2}{T^*}$ $\Omega_D = 1.43 + \frac{0.19(0.992)^2}{1.01} = 1.62$ Need $\sigma_{AB} = \sqrt{\sigma_A \sigma_B}$ Water: 2.32 $\AA$; Air (Appendix B[1]): 3.711 $\AA$ $\sigma_{AB} = \sqrt{\sigma_A \sigma_B} = \sqrt{(2.32)(3.711)} = 2.93 \AA$ Diffusivity: Polar water in non-polar air at 10$\textdegree$C and 1 atmosphere $D_{AB} = \frac{0.00266 T^{3/2}}{PM_{AB}^{1/2} \sigma_{AB}^2 \Omega_D} = \frac{0.00266 (283)^{3/2}}{1 (22.2)^{1/2} (2.93)^2 (1.62)} = \frac{12.66}{65.53} =$ $D_{AB} = 0.193 \frac{cm^2}{s}$ Brokaw Diffusivity of Sarin in Air at 10$\textdegree$C and 1 Atmosphere Molecular Weight Sarin, $C_4H_{10}FO_2P$: $M_A = M_{C_4H_{10}FO_2P} = 4(MW_C) + 10(MW_H) + 1(MW_F) + 2(MW_O) + 1(MW_P) =$ $M_{C_4H_{10}FO_2P} = 4(12.01) + 10(1.008) + 1(19.00) + 2(16.00) + 1(30.97) = 140.1 \frac{g}{mol}$ Air: 1 mole basis $21\% \frac{molO_2}{mol} \ and \ 79\% \frac{molN_2}{mol}$ 0.21 $\frac{molO_2}{mol}$(1 mol) = 0.21 mol oxygen gas; 0.79 $\frac{molN_2}{mol}$(1 mol) = 0.79 mol nitrogen gas Grams oxygen: $0.21 (molO_2)(32 \frac{gO_2}{molO_2}) = 6.72 grams \ O_2$ Grams nitrogen: $0.79 (molN_2)(28 \frac{gN_2}{molN_2}) = 22.1 grams \ N_2$ Air: $M_B = M_{air} = \frac{(6.72 + 22.12)}{1 mol} = 28.8 \frac{g}{mol}$ $M_{AB} = 2[\frac{1}{140.1} + \frac{1}{29.0}]^{-1} = 48.1$ Need: $\delta; \sigma; \Omega_D$ Note: I will only be calculating the delta value for the polar gas sarin because air is non-polar[1;19]. 
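Before continuing with sarin, the Brokaw-corrected Chapman-Enskog chain used above for water in air can be collected into a short script. This is a minimal sketch under the same inputs as the hand calculation; the function names `neufeld_omega` and `brokaw_diffusivity` are mine, not from [1]:

```python
from math import exp, sqrt

def neufeld_omega(t_star):
    """Neufeld et al. fit for the diffusion collision integral."""
    return (1.06036 / t_star**0.15610
            + 0.19300 / exp(0.47635 * t_star)
            + 1.03587 / exp(1.52996 * t_star)
            + 1.76474 / exp(3.89411 * t_star))

def brokaw_diffusivity(T, P, M_A, M_B, mu_p, V_b, T_b, sigma_B, eps_B):
    """Chapman-Enskog D_AB in cm^2/s for a polar gas A in non-polar B.

    The Brokaw relations give delta, epsilon/kappa, and sigma for A;
    sigma_B (angstrom) and eps_B (K) are tabulated values for B (air).
    """
    delta_A = 1.94e3 * mu_p**2 / (V_b * T_b)
    eps_A = 1.18 * (1 + 1.3 * delta_A**2) * T_b
    sigma_A = (1.585 * V_b / (1 + 1.3 * delta_A**2)) ** (1 / 3)
    eps_AB = sqrt(eps_A * eps_B)
    sigma_AB = sqrt(sigma_A * sigma_B)
    t_star = T / eps_AB
    # air is non-polar, so only delta_A enters the polar correction
    omega = neufeld_omega(t_star) + 0.19 * delta_A**2 / t_star
    M_AB = 2 / (1 / M_A + 1 / M_B)
    return 0.00266 * T**1.5 / (P * sqrt(M_AB) * sigma_AB**2 * omega)

# water (A) in air (B) at 10 C and 1 atm
D = brokaw_diffusivity(T=283, P=1, M_A=18.0, M_B=28.8,
                       mu_p=1.855, V_b=18.045, T_b=373,
                       sigma_B=3.711, eps_B=78.6)
print(D)  # about 0.19 cm^2/s, matching the hand value of 0.193
```

The same function, fed the sarin inputs below (mu_p = 3.44, V_b = 130.9, T_b = 420, M_A = 140.1), reproduces the sarin result as well.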
$\Omega_D = \Omega_D(Neufeld) + \frac{0.19 \delta_{AB}^2}{T^*}$ changed to $\Omega_D = \Omega_D(Neufeld) + \frac{0.19 \delta_A^2}{T^*}$

$\delta_A = \frac{1.94x10^3 \mu_p^2}{V_bT_b}$ $\mu_p =$ dipole moment, debyes $V_b =$ liquid molar volume at the normal boiling point, $\frac{cm^3}{mol}$ $T_b =$ normal boiling point (1 atm), K Sarin[18;16a]: $\delta_A = \frac{1.94x10^3(3.44)^2}{(130.9)(420)} = 0.418$

$T^* = \frac{\kappa T}{\epsilon_{AB}}$ $\frac{\epsilon_i}{\kappa} = 1.18(1 + 1.3\delta_i^2)T_b$ Sarin: $\frac{\epsilon_A}{\kappa} = 1.18(1 + 1.3(0.418)^2)(420) = 608.2 K$ $\frac{\epsilon_{AB}}{\kappa} = (\frac{\epsilon_A}{\kappa} \frac{\epsilon_B}{\kappa})^{1/2}$ Sarin: $\frac{\epsilon_A}{\kappa} = 608.2 K$ Air[Appendix B;1]: $\frac{\epsilon_B}{\kappa} = 78.6 K$ $\frac{\epsilon_{AB}}{\kappa} = \sqrt{\frac{\epsilon_A}{\kappa} \frac{\epsilon_B}{\kappa}} = \sqrt{(608.2)(78.6)} = 218.6 K$ $\frac{\epsilon_{AB}}{\kappa T} = \frac{218.6}{283} = 0.773$ $T^* = \frac{\kappa T}{\epsilon_{AB}} = \frac{1}{0.773} = 1.29$

$\sigma_i = (\frac{1.585V_b}{1 + 1.3\delta_i^2})^{1/3}$ Sarin[16a]: $\sigma_A = (\frac{1.585(130.9)}{1 + 1.3(0.418)^2})^{1/3} = 5.5 \AA$ $\sigma_{AB} = (\sigma_A \sigma_B)^{1/2}$ Sarin: $\sigma_A = 5.5 \AA$ Air[Appendix B;1]: $\sigma_B = 3.711 \AA$ $\sigma_{AB} = \sqrt{(\sigma_A)(\sigma_B)}= \sqrt{(5.5)(3.711)} = 4.52 \AA$

$\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{((D)(T^*))}} + \frac{E}{\exp{((F)(T^*))}} + \frac{G}{\exp{((H)(T^*))}} =$ $\Omega_D = \frac{1.06036}{(1.29)^{0.15610}} + \frac{0.19300}{\exp{((0.47635)(1.29))}} + \frac{1.03587}{\exp{((1.52996)(1.29))}} + \frac{1.76474}{\exp{((3.89411)(1.29))}} =$ $\Omega_D = 1.28$ $\Omega_D = \Omega_D(Neufeld) + \frac{0.19 \delta_A^2}{T^*} = 1.28 + \frac{0.19(0.418)^2}{1.29} = 1.30$

Chapman-Enskog equation after the polar correction. Diffusivity of Sarin in Air: $D_{AB} = \frac{0.00266T^{3/2}}{PM_{AB}^{1/2}\sigma_{AB}^2 \Omega_D} = \frac{0.00266(283)^{3/2}}{1(48.1)^{1/2}(4.52)^2(1.30)} = \frac{12.66}{184.2} = 0.069 \frac{cm^2}{s}$

Diffusivity Comparison in Air: Water Versus Sarin in Descending Order Water: $D_{AB} = 0.193 \frac{cm^2}{sec}$ Sarin: $D_{AB} = 0.069 \frac{cm^2}{sec}$ Diffusivity Ratio: $\frac{Water}{Sarin} = \frac{0.193}{0.069} = 2.80$

References: [1] Poling, Bruce E.; Prausnitz, John M.; O’Connell, John P. (2001) The Properties of Gases and Liquids, Fifth Edition. New York: McGraw-Hill. [2] Welty, James R.; Wicks, Charles E.; Wilson, Robert E. (1984) Fundamentals of Momentum, Heat, and Mass Transfer, third edition. New York: John Wiley & Sons. [3] Harding, Byron. 1991 Gulf War Illnesses and Differing Hypotheses: Nerve and Brain Death Versus Stress, December 2012. gather.com[online] 2012. Available from: http://www.gather.com/viewArticle.action?articleId=281474981824775 [4] Removed [4a] Removed [6] National Academies Press. Institute of Medicine. Committee on Gulf War and Health: Health Effects of Serving in the Gulf War, Update 2009. Board on Health of Select Populations. Gulf War and Health, Volume 8. nap.edu[online]. 2010. pp. 320. Available from: http://www.nap.edu/catalog.php?record_id=12835 ISBN-10: 0-309-14921-5; ISBN-13: 978-0-309-14921-1 [7] Research Advisory Committee on Gulf War Veterans’ Illnesses. Gulf War Illness and Health of Gulf War Veterans. Scientific Findings and Recommendations, 2008. va.gov[online]. 2012. Available from: http://www.va.gov/RAC-GWVI/docs/Committee_Documents/GWIandHealthofGWVeterans_RAC-GWVIReport_2008.pdf [8] Research Advisory Committee on Gulf War Veterans’ Illnesses. Research Advisory Committee on Gulf War Veterans’ Illnesses Findings and Recommendation, June 2012. va.gov[online]. 2012. Available from: http://www.va.gov/RAC-GWVI/docs/Committee_Documents/CommitteeDocJune2012.pdf [9] Kennedy, Kelly. Study: Wind blew deadly gas to U.S. troops in Gulf War, December 2012. usatoday.com[online]. 2012.
Available from: http://www.usatoday.com/story/news/world/2012/12/13/sarin-gas-gulf-war-veterans/1766835/ [9a] Haley, Robert W.; Tuite, James J. Meteorological and Intelligence Evidence of Long-Distance Transit of Chemical Weapons Fallout from Bombing Early in the 1991 Persian Gulf War, December 2012. karger.com[online]. 2012. vol. 40. pp. 160-177. Available from: http://content.karger.com/ProdukteDB/produkte.asp?Aktion=ShowFulltext&ArtikelNr=345123&Ausgabe=257603&ProduktNr=224263 DOI: 10.1159/000345123 [9b] Haley, Robert W.; Tuite, James J. Epidemiologic Evidence of Health Effects from Long-Distance Transit of Chemical Weapons Fallout from Bombing Early in the 1991 Persian Gulf War, December 2012. karger.com[online]. vol. 40. pp. 178-189. Available from: http://content.karger.com/ProdukteDB/produkte.asp?Aktion=ShowFulltext&ArtikelNr=345124&Ausgabe=257603&ProduktNr=224263 DOI: 10.1159/000345124 [10] Oswal, DP; Garrett, TL; Morris, M; Kucot, JB. Low-dose sarin exposure produces long term changes in brain neurochemistry of mice. Neurochem Res[online]. 2013. vol. 1. pp. 108-116. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23054072 doi: 10.1007/s11064-012-0896-9 [11] Shewale, SV.; Anstadt, MP; Horenziak, M; Izu, B.; Morgan, EE.; Lucot, JB.; Morris, M. Sarin causes autonomic imbalance and cardiomyopathy: an important issue for military and civilian health, July 2012. J. Cardiovasc Pharmacol.[online]. 2012. vol 60(1). pp. 76-87. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22549449 doi: 10.1097/FJC.0b013e3182580b75 [12] DTIC. Online Information for the Defense Community.Chan, Victor T; Soto, Armando; Wagner, Jessica A; Watts, Brandy S.; Walters, Amy D.; Hill, Tiffany M. Mechanisms of Organophosphates (OP) Injury: Sarin-Induced Hippocampal Gene Expression Changes and Pathway Perturbation, Jan 2012. dtic.mil[online]. 2012. Available from: http://www.dtic.mil/docs/citations/ADA560343 [13] Medical News Today. 
Low-Level Exposure to Organophosphate Pesticides Damage Brain and Nervous System, December 2012. medicalnewstoday.com[online]. 2012. Available from: http://www.medicalnewstoday.com/releases/253534.php [14] Fulco, Carolyn E; Liverman, Catharyn T.; Sox, Harold C. National Academy Press. Committee on Health Effects Associated with Exposures During the Gulf War. Gulf War and Health: Volume 1. Depleted Uranium, Sarin, Pysidostigmine Bromide, Vaccines, 2000. Effects of Long-Term Exposure to Organophosphate Pesticides in Humans. nap.edu[online]. 2012. Available from: http://www.nap.edu/openbook.php?record_id=9953&page=R1 [15] NCBI.PubChem. Sarin-Compound Summary (CID 7871). pubmed.ncbi.nlm.nih.gov[online]. 2012. Available from: http://pubchem.ncbi.nlm.nih.gov/summary/summary.cgi?cid=7871 [16] ChemSpider. The free chemical database. Water. chemspider.com[online]. 2013. Available from: http://www.chemspider.com/Chemical-Structure.937.html?rid=01a81689-c122-434f-a0a1-b4e6e3ca8109 [16a] ChemSpider. The free chemical database. Sarin (isopropyl methylphosphonofluoridate). chemspider.com[online]. 2013. Available from: http://www.chemspider.com/Chemical-Structure.7583.html?rid=8885b92c-43db-4dbf-a9fd-280d32df0450 [17] US National Library of Medicine. WISER: Wireless Information System for Emergencey Responders. Sarin, CAS RN: 107-44-8. Volatilization. webwiser.nlm.nih.gov[online]. 2012. Available from: http://webwiser.nlm.nih.gov/getSubstanceData.do;jsessionid=E6C28B95977867F872631D36CDD61D42?substanceID=151&displaySubstanceName=Sarin&UNNAID=&STCCID=&selectedDataMenuItemID=81 [18] Lee, Ming-Tsung; Vishnyakov, Aleksey; Gor, Gennady Yo.; Neimark, Alexander V. Interactions of Phosphororganic Agents with Water and Components of Polyelectrolyte Membranes, October 2011. J. Physical Chemistry[online]. 2012. Available from: http://www.princeton.edu/~ggor/Gor_JPCB_2011.pdf [19] Harding, Byron. 
Chapman and Enskog Versus Hirschfelder Equation when Compared to Experimental Value at 25 Degree C and 1 Atm, and Non-Polar Versus Brokaw Polar Method, January 2013. Available from: https://chrisbharding.wordpress.com/2013/01/04/chapman-and-enskog-versus-hirschfelder-equation-and-compared-to-experimental-value-at-25c-and-1-atm/ [20] Gregory, J.K.; Clary, D.C.; Liu, K.; Brown, M.G.; Saykally, R.J. The Water Dipole Moment in Water Clusters, February 1997. science[online]. vol. 275. pp. 814. Available from: http://www.cchem.berkeley.edu/rjsgrp/publications/papers/1997/187_gregory_1997.pdf

# Temporary Divergence: Diffusivity: That Smell-Methyl Mercaptan (Methanethiol), Odorless Natural Gas, Odorless Propane, and Even Flatulence

Lynyrd Skynyrd. That Smell. youtube.com[online]. 2013. Available from: http://youtu.be/ZDB-yswOrzc

Diffusivity of Methyl Mercaptan Versus Methane and Propane

Methyl mercaptan[3-6], “methanethiol”, is the byproduct of many natural processes; flatulence is one example[7]. Because of its low odor threshold, 1 ppb has been reported[4], methanethiol is also added to odorless natural gas (methane) and odorless propane for detection purposes. Apparently, it is used as a communication warning system in mining operations as well[4]. In this blog post, I will be comparing the diffusivity of the polar chemical methanethiol to the non-polar chemicals methane and propane in air. I have heard reports that the diffusivity of methanethiol is significantly greater than that of methane and propane. See the bottom of the post for the diffusivities. Since reference [1] has tabular values for methane and propane in appendix B, I will use the tabular values and the Chapman-Enskog equation to calculate diffusivity values for methane and propane.
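As a check on the hand calculations that follow, the non-polar Chapman-Enskog estimate for methane and propane in air can be scripted in a few lines. This is a minimal sketch; the function names `neufeld_omega` and `chapman_enskog` are mine, not from [1]:

```python
from math import exp, sqrt

def neufeld_omega(t_star):
    """Neufeld et al. fit for the diffusion collision integral."""
    return (1.06036 / t_star**0.15610
            + 0.19300 / exp(0.47635 * t_star)
            + 1.03587 / exp(1.52996 * t_star)
            + 1.76474 / exp(3.89411 * t_star))

def chapman_enskog(T, P, M_A, M_B, sigma_A, sigma_B, eps_A, eps_B):
    """Non-polar Chapman-Enskog diffusivity in cm^2/s (T in K, P in atm)."""
    sigma_AB = (sigma_A + sigma_B) / 2      # arithmetic mean, angstrom
    eps_AB = sqrt(eps_A * eps_B)            # geometric mean, K
    omega = neufeld_omega(T / eps_AB)
    M_AB = 2 / (1 / M_A + 1 / M_B)
    return 0.00266 * T**1.5 / (P * sqrt(M_AB) * sigma_AB**2 * omega)

# Lennard-Jones values from appendix B of [1]; air as component B
air = dict(M_B=29.0, sigma_B=3.711, eps_B=78.6)
d_methane = chapman_enskog(298, 1, 16.0, sigma_A=3.758, eps_A=148.6, **air)
d_propane = chapman_enskog(298, 1, 44.1, sigma_A=5.118, eps_A=237.1, **air)
print(d_methane, d_propane)  # roughly 0.22 and 0.11 cm^2/s
```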
For methanethiol, I will use Fuller, et al equation and tabular values for the atoms making up methanethiol, $CH_3-SH$ Chapman-Enskog Equation. From reference[1], the average absolute error of this “theoretical equation” is 7.9% $D_{AB} = \frac{3}{16} \frac{(\frac{4 \pi \kappa T}{M_{AB}})^{1/2}}{n \pi \sigma_{AB}^2 \Omega_D} f_D$ If $f_D$ is chosen as unity and “n” expressed by ideal-gas law $D_{AB} = \frac{0.00266 T^{3/2}}{P M_{AB}^{1/2} \sigma_{AB}^2 \Omega_D}$ For Non-polar gases: Methane and Propane Methane $CH_4$ in air at 25$\textdegree$C and 1 atmosphere (atm) $M_{AB} = 2[\frac{1}{M_A} + \frac{1}{M_B}]^{-1}$ $M_A = M_{CH_4} = 1(MW_C) + 4(MW_H) = 1(12.01) + 4(1.008) = 16.0 \frac{g}{mol}$ $M_B = M_{air} = 29.0 \frac{g}{mole}$ $M_{AB} = 2[\frac{1}{M_A} + \frac{1}{M_B}]^{-1} = 2[\frac{1}{16.0} + \frac{1}{29.0}]^{-1} = 20.6$ Need $\Omega_D$ Neufield, et al.: $\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{((D)(T^*))}} + \frac{E}{\exp{((F)(T^*))}} + \frac{G}{\exp{((H)(T^*))}}$ $T^* = \frac{\kappa T}{\epsilon_{AB}}$ From appendix B[1] Methane: $\sigma = 3.758 \AA; \frac{\epsilon_A}{\kappa} = 148.6 K$ Air: $\sigma = 3.711 \AA; \frac{\epsilon_B}{\kappa} = 78.6 K$ $\sigma_{AB} = \frac{\sigma_A + \sigma_B}{2} = \frac{3.758 + 3.711}{2} = 3.735 \AA$ $\frac{\epsilon_{AB}}{\kappa} = \sqrt{(148.6)(78.6)} = 108.1 K$ $\frac{\epsilon_{AB}}{\kappa T} = \frac{108.1 K}{298 K} = 0.363$ $T^* = \frac{\kappa T}{\epsilon_{AB}} = \frac{1}{0.363} = 2.76$ $\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{((D)(T^*))}} + \frac{E}{\exp{((F)(T^*))}} + \frac{G}{\exp{((H)(T^*))}} =$ $\Omega_D =\frac{1.06036}{(2.76)^{0.15610}} + \frac{0.19300}{\exp{((0.47635)(2.76))}} + \frac{1.03587}{\exp{((1.52996)(2.76))}} + \frac{1.76474}{\exp{((3.89411)(2.76))}} =$ $\Omega_D = 0.972$ Diffusivity of Methane in Air: $D_{AB} = \frac{0.00266 T^{3/2}}{P M_{AB}^{1/2} \sigma_{AB}^2 \Omega_D} = \frac{0.00266 (298)^{3/2}}{1(20.6)^{1/2} (3.735)^2 (0.972)} = \frac{13.68}{61.54} = 0.222 \frac{cm^2}{s}$ Propane 
$CH_3CH_2CH_3$ in air at 25$\textdegree$C and 1 atmosphere (atm) $M_{AB} = 2[\frac{1}{M_A} + \frac{1}{M_B}]^{-1}$ $M_A = M_{CH_3CH_2CH_3} = 3(MW_C) + 8(MW_H) = 3(12.01) + 8(1.008) = 44.1 \frac{g}{mol}$ $M_B = M_{air} = 29 \frac{g}{mol}$ $M_{AB} = 2[\frac{1}{44.1} + \frac{1}{29}]^{-1} = 35.0$ Need $\Omega_D$ Neufield, et al.: $\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{((D)(T^*))}} + \frac{E}{\exp{((F)(T^*))}} + \frac{G}{\exp{((H)(T^*))}}$ $T^* = \frac{\kappa T}{\epsilon_{AB}}$ From appendix B[1] Propane: $\sigma_A = 5.118 \AA; \frac{\epsilon_A}{\kappa} = 237.1 K$ Air: $\sigma_B = 3.711 \AA; \frac{\epsilon_B}{\kappa} = 78.6 K$ $\sigma_{AB} = \frac{\sigma_A + \sigma_B}{2} = \frac{5.118 + 3.711}{2} = 4.42 \AA$ $\frac{\epsilon_{AB}}{\kappa} = \sqrt{(\frac{\epsilon_A}{\kappa})(\frac{\epsilon_B}{\kappa})} = \sqrt{(237.1)(78.6)} = 136.5 K$ $\frac{\epsilon_{AB}}{\kappa T} = \frac{136.5}{298} = 0.458$ $T^* = \frac{\kappa T}{\epsilon_{AB}} = \frac{1}{0.458} = 2.18$ $\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{((D)(T^*))}} + \frac{E}{\exp{((F)(T^*))}} + \frac{G}{\exp{((H)(T^*))}} =$ $\Omega_D = \frac{1.06036}{(2.18)^{0.15610}} + \frac{0.19300}{\exp{((0.47635)(2.18))}} + \frac{1.03587}{\exp{((1.52996)(2.18))}} + \frac{1.76474}{\exp{((3.89411)(2.18))}} =$ $\Omega_D = 1.05$ Diffusivity of Propane in Air: $D_{AB} = \frac{0.00266T^{3/2}}{PM_{AB}^{1/2} \sigma_{AB}^2 \Omega_D} = \frac{0.00266(298)^{3/2}}{1 (35.0)^{1/2} (4.42)^2 (1.05)} = \frac{13.68}{121.36} =$ $D_{AB} = 0.113 \frac{cm^2}{s}$ For polar molecule $CH_3-SH$, will use Fuller, et al. equation. From reference[1], the absolute relative error of this equation is 5.4%. Authors report an average absolute error of about 4% when using Fuller, et al. 
$D_{AB} = \frac{0.00143 T^{1.75}}{PM_{AB}^{1/2}[(\sum \nu)_A^{1/3} + (\sum \nu)_B^{1/3}]^2}$ T = 25$\textdegree$C = 298 K; P = 1 atm $M_{AB} = 2[\frac{1}{M_A} + \frac{1}{M_B}]^{-1}$ $M_A = M_{CH_3SH} = 1(C) + 4(H) + 1(S) = 1(12.01) + 4(1.008) + 1(32.07) = 48.11 \frac{g}{mol}$ $M_B = M_{air} = 29 \frac{g}{mol}$ $M_{AB} = 2[\frac{1}{48.11} + \frac{1}{29}]^{-1} = 36.2$

Summation of “Atomic and Structural Diffusion Volume Increments” from table 11-1[1] $(\sum \nu)_A = (\sum \nu)_{CH_3SH} = 1(C) + 4(H) + 1(S) = 1(15.9) + 4(2.31) + 1(22.9) = 40.04$ $(\sum \nu)_B = (\sum \nu)_{air} = 19.7$

$D_{AB} = \frac{0.00143T^{1.75}}{PM_{AB}^{1/2}[(\sum \nu)_A^{1/3} + (\sum \nu)_B^{1/3}]^2} = \frac{0.00143(298)^{1.75}}{1(36.2)^{1/2}[(40.0)^{1/3} + (19.7)^{1/3}]^2} =$ Diffusivity of methanethiol in air $D_{AB} = \frac{30.56}{225.5} = 0.136 \frac{cm^2}{s}$

Diffusivities in air in decreasing order Methane: $D_{AB} = 0.222 \frac{cm^2}{s}$ Methanethiol: $D_{AB} = 0.136 \frac{cm^2}{s}$ Propane: $D_{AB} = 0.113 \frac{cm^2}{s}$

At a detection threshold of 1 part per billion (ppb) and the above diffusivities, one might detect methanethiol prior to experiencing propane. In truth, there is an equation that takes the mixture into account, but I do not know the percent mixture of each component[2]. Equation for a mixture $D_{1-mixture} = \frac{1}{\frac{z_2}{D_{1-2}} + \frac{z_3}{D_{1-3}} + .... + \frac{z_n}{D_{1-n}}}$ $z_n$ is the mole fraction of component “n” in the gas mixture evaluated on a component-1-free basis $z_2 = \frac{y_2}{y_2 + y_3 + ... + y_n}$

ihatemyhate. Friends Selection – Ross Flirts. youtube.com[online]. 2013. Available from: http://youtu.be/kH5JhYsfNMA • Wait until 2nd attempt

References: [1] Poling, Bruce E.; Prausnitz, John M.; O’Connell, John P. (2001) The Properties of Gases and Liquids, Fifth Edition. New York: McGraw-Hill. [2] Welty, James R.; Wicks, Charles E.; Wilson, Robert E. (1984) Fundamentals of Momentum, Heat, and Mass Transfer, third edition.
New York: John Wiley & Sons. [3] ScienceBlogs. Molecule of the day. Methanethiol (They put that in, you know), March 2009. scienceblogs.com[online]. 2013. Available from: http://scienceblogs.com/moleculeoftheday/2009/03/18/methanethiol-they-put-that-in/ [4] Wikipedia. Methanethiol. Also known as methyl mercaptan. en.wikipedia.org[online]. 2013. Available from: http://en.wikipedia.org/wiki/Methanethiol [5] NCBI.PubChem Substance. Methanethiol-Substance Summary (SID 3699). Also known as Methylmercaptan (CAS: 74-93-1). pubchem.ncbi.nlm.nih.gov[online]. 2013. Available from: http://pubchem.ncbi.nlm.nih.gov/summary/summary.cgi?sid=3699 [6] US National Institute of Standards and Technology. NIST. Methanethiol. webbook.nist.gov[online]. 2013. Available from: http://webbook.nist.gov/cgi/cbook.cgi?ID=74-93-1&Units=SI [7] Wikipedia. Flatulence. en.wikipedia.org[online]. 2013. Available from: http://en.wikipedia.org/wiki/Flatulence

# Diffusivity: Chapman and Enskog Versus Hirschfelder Equation when Compared to Experimental Value at 25$\textdegree$C and 1 Atm, and Non-polar Versus Brokaw Polar Method

Chapman and Enskog Equation $D_{AB} = \frac{3}{16} \frac{(\frac{4 \pi \kappa T}{M_{AB}})^{1/2}}{n \pi \sigma_{AB}^{2} \Omega_D}f_D$ When $f_D$ is unity and n is expressed by the ideal gas law $D_{AB} = \frac{0.00266 T^{3/2}}{PM_{AB}^{1/2} \sigma_{AB}^{2}\Omega_D}$ Hirschfelder, Bird, and Spotz Equation $D_{AB} = \frac{0.001858 T^{3/2}[(\frac{1}{M_A}) + (\frac{1}{M_B})]^{1/2}}{P \sigma_{AB}^{2} \Omega_D}$

Non-polar Comparison

There are suggested correction factors for polar compounds. Since water is a polar compound, I will use these factors in a later comparison. I am doing a non-polar comparison first because I was surprised by the closeness of the Hirschfelder equation previously when using non-polar factors.
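The two equations differ only in their leading constant and how the molecular weights enter, so they can be compared directly in a few lines. A minimal sketch (variable names mine), using the tabular water/air values from [1] that the hand calculation below also uses:

```python
from math import exp, sqrt

def neufeld_omega(t_star):
    """Neufeld et al. fit for the diffusion collision integral."""
    return (1.06036 / t_star**0.15610
            + 0.19300 / exp(0.47635 * t_star)
            + 1.03587 / exp(1.52996 * t_star)
            + 1.76474 / exp(3.89411 * t_star))

# water (A) / air (B) Lennard-Jones values from [1]; T = 298 K, P = 1 atm
M_A, M_B = 18.0, 29.0
sigma_AB = (2.641 + 3.711) / 2
eps_AB = sqrt(809.1 * 78.6)
omega = neufeld_omega(298 / eps_AB)

hirschfelder = (0.001858 * 298**1.5 * sqrt(1 / M_A + 1 / M_B)
                / (1 * sigma_AB**2 * omega))
M_AB = 2 / (1 / M_A + 1 / M_B)
chapman = 0.00266 * 298**1.5 / (1 * sqrt(M_AB) * sigma_AB**2 * omega)
print(hirschfelder, chapman)  # both near 0.21-0.22, vs experimental 0.260
```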
Hirschfelder, Bird, and Spotz equation[2] $D_{AB} = \frac{0.001858T^{3/2}[\frac{1}{M_A} + \frac{1}{M_B}]^{1/2}}{P \sigma_{AB}^{2} \Omega_D}$ $M_A = M_{H_2O} = 18 \frac{g}{mol}; M_B = M_{air} = 29 \frac{g}{mol}$ T = 298 K; P = 1 atm

Need $\sigma_{AB} \ and \ \Omega_D$. For comparative purposes, will use tabular values from [1]. Water: $\sigma_A = 2.641 \AA; \frac{\epsilon_A}{\kappa} = 809.1 K$ Air: $\sigma_B = 3.711 \AA; \frac{\epsilon_B}{\kappa} = 78.6 K$ $\sigma_{AB} = \frac{\sigma_A + \sigma_B}{2} = \frac{2.641 + 3.711}{2} = 3.18 \AA$ $\frac{\epsilon_{AB}}{\kappa} = \sqrt{\frac{\epsilon_A}{\kappa} \frac{\epsilon_B}{\kappa}} = \sqrt{(809.1)(78.6)} = 252.2 K$

Need $T^* = \frac{\kappa T}{\epsilon_{AB}}$ to calculate $\Omega_D$ Neufeld, et al.: $\Omega_D = \frac{A}{(T^*)^{B}} + \frac{C}{\exp{(DT^*)}} + \frac{E}{\exp{(FT^*)}} + \frac{G}{\exp{(HT^*)}}$ The constants A; B; C; D; E; F; G; H will be placed in the Neufeld, et al. equation. First calculate $T^*$ $\frac{\epsilon_{AB}}{\kappa T} = \frac{252.2 K}{298 K} = 0.846$ $T^{*} = \frac{\kappa T}{\epsilon_{AB}} = \frac{1}{0.846} = 1.18$ $\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{(DT^*)}} + \frac{E}{\exp{(FT^*)}} + \frac{G}{\exp{(HT^*)}} =$ $\Omega_D = \frac{1.06036}{1.18^{0.15610}} + \frac{0.19300}{\exp{((0.47635)(1.18))}} + \frac{1.03587}{\exp{((1.52996)(1.18))}} + \frac{1.76474}{\exp{((3.89411)(1.18))}}$ $\Omega_D = 1.33$

Back to the Hirschfelder equation $D_{AB} = \frac{0.001858 T^{3/2}[\frac{1}{M_A} + \frac{1}{M_B}]^{1/2}}{P \sigma_{AB}^{2} \Omega_D} = \frac{0.001858 (298)^{3/2}[\frac{1}{18} + \frac{1}{29}]^{1/2}}{1 (3.18)^{2} (1.33)} =$ $D_{AB} = \frac{2.87}{13.45} = 0.213 \frac{cm^2}{s}$ Compared to the experimental value from [2] at 25$\textdegree$C and 1 atm Diffusivity of water in air: $D_{AB} = 0.260 \frac{cm^2}{s}$

Chapman and Enskog equation $D_{AB} = \frac{0.00266 T^{3/2}}{P M_{AB}^{1/2} \sigma_{AB}^2 \Omega_D}$ $M_{AB} = 2[\frac{1}{M_A} + \frac{1}{M_B}]^{-1} = 2[\frac{1}{18} + \frac{1}{29}]^{-1} = 22.21$ T = 298 K; P = 1 atm; $\sigma_{AB} = 3.18
\AA; \ \Omega_D = 1.33$ $D_{AB} = \frac{0.00266 (298)^{3/2}}{1 (22.21)^{1/2} (3.18)^2 (1.33)} = \frac{13.68}{63.38} =$ $D_{AB} = 0.216 \frac{cm^2}{s}$

Compared to the Hirschfelder equation and the experimental value Hirschfelder[2]: $D_{AB} = 0.213 \frac{cm^2}{s}$ Percent Difference: $\frac{Hirschfelder - Experimental}{Experimental} x 100 = \frac{0.213 - 0.260}{0.260} x 100 = -18.1\%$ Experimental: $D_{AB} = 0.260 \frac{cm^2}{s}$ Chapman[1]: $D_{AB} = 0.216 \frac{cm^2}{s}$ Percent Difference: $\frac{Chapman - Experimental}{Experimental} x 100 = \frac{0.216 - 0.260}{0.260} x 100 = -16.9\%$

Polar Molecule Correction Comparison

Sadly, I have discovered that most empirical correlations lack sufficient data to estimate the diffusivity of many compounds. As an example, I, as a 1991 Gulf War veteran, desire to calculate the diffusivity of sarin in air, and I have discovered that most empirical correlations do not take the phosphorus atom into consideration. Also, the Brokaw relations for polar gases have correction equations that consider a polar gas diffusing through another polar gas. In my analysis, I will first use the Brokaw method and only consider the polar molecule, water, since air is non-polar; I will highlight the potential error of this simplification at the point where I use the correction equations. To be specific, $\Omega_{D_{H_2O}}$ will be calculated based on $\delta_{H_2O}$ alone instead of a $\delta_{AB}$ of two polar species. I hope to see whether I can use the Brokaw method to calculate the diffusivity of sarin in air, since the correction factors used in the Brokaw method include phosphorus and fluorine. I wish I could have found a phosphorus “diffusion volume increment”, $\nu$, but I could not find one. I did find values for fluorine[1], nitrogen, sulfur, iodine, etc., and might use either nitrogen or sulfur in the Fuller, et al. equation as an estimate when I calculate the diffusivity of sarin in air.
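For later reference, the Fuller-type estimate (used earlier for methanethiol) is easy to script. A minimal sketch, with the function name `fuller` my own; note that with $M_{AB} = 2[\frac{1}{M_A} + \frac{1}{M_B}]^{-1} = 36.2$ the methanethiol value comes out near 0.136 $\frac{cm^2}{s}$ (a value near 0.192 results only if the factor of 2 in $M_{AB}$ is dropped):

```python
from math import sqrt

def fuller(T, P, M_A, M_B, nu_A, nu_B):
    """Fuller et al. estimate in cm^2/s (T in K, P in atm, nu = diffusion volumes)."""
    M_AB = 2 / (1 / M_A + 1 / M_B)
    return (0.00143 * T**1.75
            / (P * sqrt(M_AB) * (nu_A**(1 / 3) + nu_B**(1 / 3))**2))

# methanethiol (CH3SH) in air at 25 C; diffusion volumes from table 11-1 of [1]
D_ch3sh = fuller(298, 1, 48.11, 29.0, 40.04, 19.7)
print(D_ch3sh)  # about 0.136 cm^2/s
```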
Since both equations gave approximately the same value during the non-polar comparison, I will use the Chapman-Enskog equation. Once again, the temperature and pressure are 25$\textdegree$C and 1 atmosphere. Chapman-Enskog equation[1] $D_{AB} = \frac{0.00266 T^{3/2}}{PM_{AB}^{1/2}\sigma_{AB}^2 \Omega_D}$ T = 298 K; P = 1 atm $M_{AB} = 2[\frac{1}{M_A} + \frac{1}{M_B}]^{-1} = 2[\frac{1}{18} + \frac{1}{29}]^{-1} = 22.21$

Brokaw Method Neufeld, et al. Relation: $\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{(DT^*)}} + \frac{E}{\exp{(FT^*)}} + \frac{G}{\exp{(HT^*)}}$ $T^* = \frac{\kappa T}{\epsilon_{AB}}$ $\Omega_D = \Omega_D(Neufeld) + \frac{0.19 \delta_{AB}^2}{T^*}$ Possible error: since air is non-polar, I will only be using the delta of water, $\delta_A$, in the above equation. Changed equation: $\Omega_D = \Omega_D(Neufeld) + \frac{0.19 \delta_A^2}{T^*}$

For Water (A) $\frac{\epsilon_A}{\kappa} = 1.18(1 + 1.3\delta_A^2)T_b; T_b = normal \ boiling \ point \ (1 \ atm), K = 373 K$ $\delta_A = \frac{1.94 x 10^3 \mu_p^2}{V_b T_b}; V_b = Molar \ Volume \ at \ T_b; \mu_p = Dipole \ moment, D$ Calculation by the Le Bas method[1] $V_b = 2(H) + 1(O) = 2(3.7) + (7.4) = 14.8 \frac{cm^3}{mol}$ Percent Difference: $\frac{Calculated - Experimental}{Experimental} = \frac{14.8 - 18.8}{18.8} = -21.3\%$

Note: ChemSpider provides a “Molar Volume” of $18.045 \frac{cm^3}{mol}$ for water. Although I assume the latter was calculated at the normal boiling point, I am not certain. Still, the value is extremely close to the experimental value in reference [1], $18.8 \frac{cm^3}{mol}$. For this reason, I will use the ChemSpider value for water. Why? ChemSpider also provides a Molar Volume value for sarin. If ChemSpider responds by email that the value was not calculated at the boiling point, I will reconsider.
Still, the percent difference of the ChemSpider water Molar Volume when compared to experimental[1] is: ChemSpider Percent Difference: $\frac{ChemSpider - Experimental}{Experimental}x100 = \frac{18.045 - 18.8}{18.8}x 100 = -4.02\%$

Delta: $\delta_{A} = \frac{1.94x10^3 \mu_p^2}{V_bT_b}$ $\mu_p = dipole \ moment, \ debyes$ $V_b = liquid \ molar \ volume \ at \ normal \ boiling \ point, \ \frac{cm^3}{mole}$ $T_b = normal \ boiling \ point \ (1 \ atm), \ K$ $\delta_{A} = \frac{1.94 x 10^3(1.855)^2}{(18.045)(373)} = 0.992$ $\frac{\epsilon_A}{\kappa} = 1.18(1 + 1.3 \delta_A^2)T_b = 1.18(1 + 1.3(0.992)^2)373 = 1003 \ K$ $\sigma_A = (\frac{1.585V_b}{(1 + 1.3\delta_A^2)})^{1/3} = (\frac{1.585(18.045)}{(1 + 1.3(0.992)^2)})^{1/3} = 2.32 \AA$

Need: $T^* = \frac{\kappa T}{\epsilon_{AB}}$ Water: $\sigma_A = 2.32 \AA; \frac{\epsilon_A}{\kappa} = 1003 K$ From [1]: Air: $\sigma_B = 3.711 \AA; \frac{\epsilon_B}{\kappa} = 78.6 K$ $\frac{\epsilon_{AB}}{\kappa} = \sqrt{\frac{\epsilon_A}{\kappa}\frac{\epsilon_B}{\kappa}} = \sqrt{(1003.0)(78.6)} = 280.8 K$ $\frac{\epsilon_{AB}}{\kappa T} = \frac{280.8}{298.0} = 0.942$ $T^* = \frac{\kappa T}{\epsilon_{AB}} = \frac{1}{0.942} = 1.06$ Neufeld, et al.
Relation $\Omega_D = \frac{A}{(T^*)^B} + \frac{C}{\exp{((D)(T^*))}} + \frac{E}{\exp{((F)(T^*))}} + \frac{G}{\exp{((H)(T^*))}} =$ $\Omega_D = \frac{1.06036}{(1.06)^{0.15610}} + \frac{0.19300}{\exp{((0.47635)(1.06))}} + \frac{1.03587}{\exp{((1.52996)(1.06))}} + \frac{1.76474}{\exp{((3.89411)(1.06))}} =$ $\Omega_D = 1.40$ Brokaw relation for polar molecules $\Omega_D = \Omega_D(Neufeld) + \frac{0.19 \delta_{AB}^2}{T^*}$ Changed for water only: $\Omega_D = \Omega_D(Neufeld) + \frac{0.19 \delta_A^2}{T^*}$ $\Omega_D = 1.40 + \frac{(0.19)(0.992)^2}{1.06} = 1.57$

Check the Chapman and Enskog equation $D_{AB} = \frac{0.00266 T^{3/2}}{P M_{AB}^{1/2} \sigma_{AB}^2 \Omega_D}$ T = 298 K; P = 1 atm; $M_{AB} = 22.21 \ and \ \Omega_D = 1.57$ Need: $\sigma_{AB}$ From the Brokaw relation $\sigma_{AB} = \sqrt{\sigma_A \sigma_B} = \sqrt{(2.32)(3.711)} = 2.93 \AA$ Chapman and Enskog equation for the diffusivity of polar water in non-polar air $D_{AB} = \frac{0.00266 (298)^{3/2}}{1(22.21)^{1/2}(2.93)^2(1.57)} = \frac{13.68}{63.52} = 0.215 \frac{cm^2}{s}$

Non-polar versus polar comparisons Using $D_{AB} = \frac{0.00266 T^{3/2}}{PM_{AB}^{1/2} \sigma_{AB}^2 \Omega_D}$ and the Brokaw relations for polar corrections: Chapman and Enskog non-polar molecule[1]: $D_{AB} = 0.216 \frac{cm^2}{s}$ Chapman and Enskog with Brokaw relations for polar molecules[1]: $D_{AB} = 0.215 \frac{cm^2}{s}$ Hirschfelder nonpolar[2]: $D_{AB} = 0.213 \frac{cm^2}{s}$ The experimental value[2]: $D_{AB} = 0.260 \frac{cm^2}{s}$

Note: All the calculated values are quite close. As such, I assume I can, when needed, use Chapman and Enskog with the Brokaw method to calculate the diffusivity of a polar molecule of sarin in non-polar air.

References [1] Poling, Bruce E.; Prausnitz, John M.; O’Connell, John P. (2001) The Properties of Gases and Liquids, Fifth Edition. New York: McGraw-Hill. [2] Welty, James R.; Wicks, Charles E.; Wilson, Robert E. (1984) Fundamentals of Momentum, Heat, and Mass Transfer, third edition.
New York: John Wiley & Sons. [3] Harding, Byron. Chapter 24: Fundamentals of Mass Transfer. Diffusivity of water in Air at 20 Degrees Celsius and 1 Atmosphere, December 2012. chrisbharding.wordpress.com[online]. 2012. Available from: https://chrisbharding.wordpress.com/2012/12/28/chapter-24-fundamentals-of-mass-transfer-diffusivity-of-water-in-air-at-25-degrees-celsius/ [4] ChemSpider. The free chemical database. chemspider.com[online]. 2013. Available from: http://www.chemspider.com/ [5] Gregory, J.K.; Clary, D.C.; Liu, K.; Brown, M.G.; Saykally, R.J. The Water Dipole Moment in Water Clusters, February 1997. science[online]. vol. 275. pp. 814. Available from: http://www.cchem.berkeley.edu/rjsgrp/publications/papers/1997/187_gregory_1997.pdf

# Chapter 24: Fundamentals of Mass Transfer. Example 3

Reevaluate the diffusion coefficient of carbon dioxide in air at 20$\textdegree$C and atmospheric pressure using the Fuller, Schettler, and Giddings equation and compare the new value with the one reported in example 2. The equation is $D_{AB} = \frac{0.001T^{1.75}(\frac{1}{M_A} + \frac{1}{M_B})^{1/2}}{P[(\sum \nu)_A^{1/3} + (\sum \nu)_B^{1/3}]^2}$

Molecular weight carbon dioxide $M_A = M_{CO_2} = M_C + 2 M_{O} = 12 + 2(16) = 44$ Molecular weight Air $M_B = M_{Air}$; $21\% \frac{mole \ O_2}{moles}$ and $79\% \frac{mole \ N_2}{moles}$ 1 mole basis $0.21 \frac{mole \ O_2}{moles}(1 \ mole) = 0.21 \ mole \ O_2$ $0.79 \frac{mole \ N_2}{moles}(1 \ mole) = 0.79 \ mole \ N_2$ Total mass $mass_{O_2} = 0.21 \ mole \ O_2 (M_{O_2}) = 0.21 \ mole \ O_2(32 \frac{g \ O_2}{mole \ O_2}) = 6.72 \ g \ O_2$ $mass_{N_2} = 0.79 \ mole \ N_2 (M_{N_2}) = 0.79 \ mole \ N_2(28 \frac{g \ N_2}{mole \ N_2}) = 22.1 \ g \ N_2$ Total Mass = 6.7 + 22.1 = 29 grams Molecular Weight of Air = $\frac{29 \ grams}{1 \ mole}= 29 \frac{g}{mole}$

$\nu$ terms from the book. For carbon dioxide and Air, the terms were already calculated (Page 491) and included as “Simple Molecules”.
$(\sum \nu)_A = (\sum \nu)_{CO_2} = \nu_{CO_2} = 26.9$

$(\sum \nu)_B = (\sum \nu)_{Air} = \nu_{Air} = 20.1$

We have all the needed variables to use the Fuller, Schettler, and Giddings equation:

$D_{AB} = \frac{0.001T^{1.75} (\frac{1}{M_{CO_2}}+\frac{1}{M_{Air}})^{1/2}}{P[(\nu_{CO_2})^{1/3} + (\nu_{Air})^{1/3}]^{2}} = \frac{0.001(293\ K)^{1.75}(\frac{1}{44} + \frac{1}{29})^{1/2}}{1\ atm\ [(26.9)^{1/3} + (20.1)^{1/3}]^{2}} = \frac{4.95}{32.7} = 0.151 \frac{cm^2}{s}$

Compared to Hirschfelder, Bird, and Spotz at 20$\textdegree$C and 1 atm:

$D_{AB} = \frac{0.001858T^{3/2}[\frac{1}{M_A} + \frac{1}{M_B}]^{1/2}}{P \delta_{AB}^2 \Omega_D} = 0.147 \frac{cm^2}{s}$

Compared to the corrected experimental value at 20$\textdegree$C and 1 atm:

$D_{AB,T_2,P_2} = D_{AB,T_1,P_1}\frac{P_1}{P_2}(\frac{T_2}{T_1})^{3/2}\frac{\Omega_{D|T_1}}{\Omega_{D|T_2}} = 0.155 \frac{cm^2}{s}$

Percent difference of predicted from experimental:

Percent Difference = $\frac{0.151 - 0.155}{0.155} \times 100 = -2.58\%$

# Chapter 24: Fundamentals of Mass Transfer. Example 2

Welty, James R.; Wicks, Charles E.; Wilson, Robert E. Fundamentals of Momentum, Heat, and Mass Transfer, third edition. New York: John Wiley and Sons.

Example 2

Evaluate the diffusion coefficient of carbon dioxide in air at 20$\textdegree$C and atmospheric pressure. Compare this value with the experimental value reported in Appendix Table J.1.

We will be using the following diffusivity equation:

$D_{AB} = \frac{0.001858 T^{3/2} (\frac{1}{M_A} + \frac{1}{M_B})^{1/2}}{P \delta_{AB}^2 \Omega_D}$

We have temperature and pressure. We can calculate the molecular weights via a periodic chart. $\delta$ and $\Omega$ can be obtained from Tables K.1 and K.2.
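The Hirschfelder-type diffusivity equation just stated can be sketched in Python the same way (names and unit conventions are my own):

```python
def d_ab_hirschfelder(T, P, M_A, M_B, sigma_AB, omega_D):
    """Hirschfelder, Bird, and Spotz estimate of D_AB in cm^2/s.

    T in kelvin, P in atm, sigma_AB in angstroms; omega_D is the
    collision integral looked up from Table K.1."""
    return (0.001858 * T**1.5 * (1 / M_A + 1 / M_B)**0.5
            / (P * sigma_AB**2 * omega_D))

# CO2 in air at 293 K, using the values derived in the example below:
print(d_ab_hirschfelder(293, 1, 44, 29, 3.806, 1.047))  # ~0.147 cm^2/s
```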
From Table K.2 of the appendix, the values $\delta$ and $\frac{\epsilon}{\kappa}$ are obtained:

Carbon dioxide: $\delta = 3.996 \ \AA$ and $\frac{\epsilon_{CO_2}}{\kappa} = 190$ K

Air: $\delta = 3.617 \ \AA$ and $\frac{\epsilon_{Air}}{\kappa} = 97$ K

$\delta_{AB} = \frac{\delta_A + \delta_B}{2} = \frac{3.996 \ \AA + 3.617 \ \AA}{2} = 3.806 \ \AA$

$\frac{\epsilon_{AB}}{\kappa}= \sqrt{(\frac{\epsilon_{A}}{\kappa})(\frac{\epsilon_B}{\kappa})} = \sqrt{(190)(97)} = 136$

T = 20 + 273 = 293 K

P = 1 atm

$\frac{\epsilon_{AB}}{\kappa T} = \frac{136}{293} = 0.464$

$\frac{\kappa T}{\epsilon_{AB}} = \frac{1}{0.464} = 2.16$

$\Omega_D$ (Table K.1) = 1.047. This value was obtained by interpolation:

$\frac{y - y_0}{x - x_0} = \frac{y_1 - y_0}{x_1 - x_0}$

$y - y_0 = \frac{y_1 - y_0}{x_1 - x_0}(x - x_0)$

$y = \frac{y_1 - y_0}{x_1 - x_0}(x - x_0) + y_0$

From Table K.1, $y_i = \Omega$ and $x_i = \frac{\kappa T}{\epsilon_{AB}}$, with $x = 2.16$. Interpolating:

$y = \frac{1.041-1.057}{2.20-2.10}(2.16 - 2.10) + 1.057 = 1.047 = \Omega_D$

We now have all variables except the molecular weights.
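The table lookup above is plain linear interpolation; as a quick sketch (the function name is my own):

```python
def lerp(x, x0, y0, x1, y1):
    """Linear interpolation between two table rows (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Omega_D at kT/eps_AB = 2.16, between the Table K.1 rows
# (2.10, 1.057) and (2.20, 1.041):
print(round(lerp(2.16, 2.10, 1.057, 2.20, 1.041), 3))  # 1.047
```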
Considering the most prevalent gases:

$M_{CO_2} = M_C + 2M_O = 12 + 2(16) = 44$

$M_{Air} = \%N_2 (M_{N_2}) + \%O_2(M_{O_2}) = 0.79(28) + 0.21(32) = 29$

Now we have all the information needed to calculate the diffusivity of $CO_2$ in air using:

$D_{AB} = \frac{0.001858 T^{3/2} (\frac{1}{M_A} + \frac{1}{M_B})^{1/2}}{P \delta_{AB}^2 \Omega_D}$

$D_{AB} = \frac{0.001858(293^{3/2})(\frac{1}{44} + \frac{1}{29})^{1/2}}{1\ atm\ (3.806 \ \AA)^2 (1.047)} = 0.147 \frac{cm^2}{s}$

Now we want to compare to the experimental value reported in Table J.1. At T = 273 K, the table gives $D_{AB}P = 0.136 \frac{cm^2 \cdot atm}{s}$, so at 1 atm, $D_{AB} = 0.136 \frac{cm^2}{s}$.

Since the value is reported at 273 K, we must use a conversion equation to compare at 293 K:

$\frac{D_{AB,T_1}}{D_{AB,T_2}} = (\frac{T_1}{T_2})^{3/2}(\frac{\Omega_{D,T_2}}{\Omega_{D,T_1}})$

at $T_1 = 293$ K, $\Omega_{D, T_1} = 1.047$; at $T_2 = 273$ K, $\Omega_{D, T_2} = ?$ from Table K.1.

$\frac{\epsilon_{AB}}{\kappa T_2} = \frac{136}{273} = 0.498$

$\frac{\kappa T_2}{\epsilon_{AB}} = \frac{1}{0.498} = 2.01$

Once again, interpolation of Table K.1 is needed:

$\frac{y - y_0}{x - x_0} = \frac{y_1 - y_0}{x_1 - x_0}$

$y = \frac{y_1 - y_0}{x_1 - x_0}(x-x_0) + y_0$, with $x = 2.01$:

$y = \frac{1.057 -1.075}{2.10-2.00}(2.01-2.00) + 1.075 = 1.073 = \Omega_{D, T_2}$

Since we have all the values for the conversion equation:

$D_{AB, 293} = (\frac{293}{273})^{3/2}(\frac{1.073}{1.047})(0.136) = 0.155 \frac{cm^2}{s}$

The diffusivity of carbon dioxide in air: calculated, $0.147 \frac{cm^2}{s}$; corrected experimental, $0.155 \frac{cm^2}{s}$.

Percent Difference = $\frac{Calculated - Corrected\ Experimental}{Corrected\ Experimental} \times 100 = \frac{0.147 - 0.155}{0.155} \times 100 = -5.16\%$

# Chapter 24: Fundamentals of Mass Transfer. Example 1

Book: Welty, James R.; Wicks, Charles E.; Wilson, Robert E. Fundamentals of Momentum, Heat, and Mass Transfer, third edition. New York: John Wiley and Sons. 1984.
Chapter 24: Fundamentals of Mass Transfer, page 471

Example 1

The composition of air is often given in terms of only the two principal species in the gas mixture:

oxygen, $O_2$, $y_{O_2} = 0.21$

nitrogen, $N_2$, $y_{N_2} = 0.79$

Determine the mass fraction of both oxygen and nitrogen and the mean molecular weight of the air when it is maintained at 25$\textdegree$C (298 K) and 1 atm $(1.013 \times 10^5\ Pa)$. The molecular weight of oxygen is 0.032 kg/mol and of nitrogen is 0.028 kg/mol.

As a basis for our calculations, consider 1 mole of the gas mixture:

moles of oxygen present = (1 mol)$(y_{O_2})$ = (1 mol)(0.21) = 0.21 mol oxygen

mass of oxygen present = (0.21 mol)$(M_{O_2})$ = (0.21 mol)$(0.032 \frac{kg}{mol})$ = 0.00672 kg oxygen

moles of nitrogen present = (1 mol)$(y_{N_2})$ = (1 mol)(0.79) = 0.79 mol nitrogen

mass of nitrogen present = (0.79 mol)$(M_{N_2})$ = (0.79 mol)$(0.028 \frac{kg}{mol})$ = 0.0221 kg nitrogen

total mass present = 0.00672 kg oxygen + 0.0221 kg nitrogen = 0.0288 kg

$w_{O_2} = \frac{0.00672\ kg}{0.0288\ kg} = 0.23$

$w_{N_2} = \frac{0.0221\ kg}{0.0288\ kg} = 0.77$

Since 1 mole of the gas mixture has a mass of 0.0288 kg, the mean molecular weight of the air must be $M_{air} = 0.0288$ kg/mol. When one takes into account the other constituents present in air, the mean molecular weight of air is often rounded off to $0.029 \frac{kg}{mol}$.

This problem could also be solved using the ideal gas law, PV = nRT. At ideal conditions, 0$\textdegree$C (273 K) and 1 atm ($1.013 \times 10^5$ Pa), the gas constant can be evaluated. Remember, we want to calculate the mass fraction of oxygen and nitrogen, and the mean molecular weight of air.
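Before moving on, the mole-basis bookkeeping above reduces to a few lines of Python (variable names are my own):

```python
y = {"O2": 0.21, "N2": 0.79}        # mole fractions
M = {"O2": 0.032, "N2": 0.028}      # molar masses, kg/mol

mass = {s: y[s] * M[s] for s in y}  # kg of each species per mole of mixture
total_mass = sum(mass.values())     # also the mean molecular weight, kg/mol
w = {s: mass[s] / total_mass for s in mass}

print(round(total_mass, 4))                   # 0.0288
print(round(w["O2"], 2), round(w["N2"], 2))   # 0.23 0.77
```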
Which to calculate first is the key.

PV = nRT, so R = $\frac{PV}{nT}$

n = 1 kg mol; T = 273 K; P = $1.013 \times 10^5$ Pa; V = $22.4\ m^3$

R = $\frac{1.013 \times 10^5\ Pa \times 22.4\ m^3}{1\ kg\ mol \times 273\ K}$ = $8.314 \times 10^3 \frac{Pa \cdot m^3}{(kg\ mol)(K)}$

R = $8.314 \times 10^3 \frac{Pa \cdot m^3}{(kg\ mol)(K)} \times \frac{1\ kg\ mol}{1000\ mol}$ = $8.314 \frac{Pa \cdot m^3}{mol \cdot K}$

The volume of the gas mixture, at 298 K, is

V = $\frac{nRT}{P} = \frac{(1\ mol)(8.314 \frac{Pa \cdot m^3}{mol \cdot K})(298\ K)}{1.013 \times 10^5\ Pa} = 0.0245\ m^3$

The concentrations are (1 mole basis):

$c_{O_2} = \frac{moles\ O_2}{volume\ of\ gas\ mixture} = \frac{0.21\ mol}{0.0245\ m^3} = 8.57 \frac{mol\ O_2}{m^3}$

$c_{N_2} = \frac{moles\ N_2}{volume\ of\ gas\ mixture} = \frac{0.79\ mol}{0.0245\ m^3} = 32.3 \frac{mol\ N_2}{m^3}$

$c = \sum_{i=1}^{n} c_i = 8.57 \frac{mol\ O_2}{m^3} + 32.3 \frac{mol\ N_2}{m^3} = 40.9 \frac{mol}{m^3}$

For the total density $\rho$:

mass of $O_2$ = $(0.21\ mol\ O_2)(0.032 \frac{kg}{mol}) = 0.00672\ kg\ O_2$

mass of $N_2$ = $(0.79\ mol\ N_2)(0.028 \frac{kg}{mol}) = 0.0221\ kg\ N_2$

total mass = 0.00672 kg + 0.0221 kg = 0.0288 kg

$\rho = \frac{total\ mass}{total\ volume} = \frac{0.0288\ kg}{0.0245\ m^3} = 1.18 \frac{kg}{m^3}$

and the mean molecular weight of the mixture is

$M = \frac{\rho}{c} = \frac{1.18 \frac{kg}{m^3}}{40.9 \frac{mol}{m^3}} = 0.0288 \frac{kg}{mol}$

As a side note, the species densities $\rho_i$ can be used to calculate the mass fractions of $O_2$ and $N_2$:

$\rho_{O_2} = (c_{O_2})(M_{O_2}) = (8.57 \frac{mol\ O_2}{m^3})(0.032 \frac{kg}{mol}) = 0.274 \frac{kg\ O_2}{m^3}$

$w_{O_2} = \frac{\rho_{O_2}}{\rho_{total}} = \frac{0.274 \frac{kg}{m^3}}{1.18 \frac{kg}{m^3}} = 0.23$

$\rho_{N_2} = (c_{N_2})(M_{N_2}) = (32.3 \frac{mol\ N_2}{m^3})(0.028 \frac{kg}{mol}) = 0.904 \frac{kg\ N_2}{m^3}$

$w_{N_2} = \frac{\rho_{N_2}}{\rho_{total}} = \frac{0.904 \frac{kg}{m^3}}{1.18 \frac{kg}{m^3}} = 0.77$
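The ideal-gas route above can be sketched the same way, as a check on the arithmetic (R and the state values are the ones in the example; variable names are my own):

```python
R = 8.314                   # Pa*m^3/(mol*K)
T, P = 298.0, 1.013e5       # K, Pa
n = 1.0                     # mole basis

V = n * R * T / P                        # volume of the mixture, m^3
c = n / V                                # total concentration, mol/m^3
rho = (0.21 * 0.032 + 0.79 * 0.028) / V  # total density, kg/m^3
M_mean = rho / c                         # mean molecular weight, kg/mol

print(round(V, 4))       # 0.0245
print(round(c, 1))       # 40.9
print(round(M_mean, 4))  # 0.0288
```

Note that $M = \rho/c$ comes out to 0.0288 kg/mol, matching the mole-basis result.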
# Tag Info ## New answers tagged mathematics 0 Answer: (2 + (b-c)/c) + (2 + (b-c)/c) + (1 + (a-b)/c) + (b+c)/a (of course, using integer arithmetic only. no fractions) Without loss of generality, we can assume (as George has done) a >= b >= c. The layout will be taken as (a+b) x (a+c) x (b+c) in x,y and z directions. A simple greedy stacking will try to keep axb on floor (x,y plane) as far as ... 4 Answer Without loss of generality assume the board has 7 rows and 8 columns. Clearly we want all dominoes to be horizontal. Let us say that a state P is optimal if it is not possible to reduce the number of vertical dominoes with legal moves. Clearly the desired end state has zero vertical dominoes and is hence optimal. Suppose P is optimal but has at least ... 4 1 You need to cover the four corners, so one of the smaller squares covers two corners of the bigger one and the other two smalls cover one corner each. 6 With the following arrangement you can easily stack pieces into the box: I have assumed without loss of generality that $a<b<c$, but as Damien_The_Unbeliever noted in the comments, it also assumes that $a+b>c$. That does not matter however, as this arrangement can be tweaked to insert one more piece: Now it just remains to be proved that it is ... 4 Almost complete answer using Jaap's lower bound, i.e. ignore my lower bound and skip the first two blocks: Now that @Jaap Scherphuis has bumped the lower bound to It remains to be shown that Amy can choose in such a way that more becomes impossible. 0 I think it's Reasoning: So, we have: Which also leads to Amy's tactic: 8 Observe that every 4th power is If one of $a,b,c,d$ is $9$, then If there is a $7$, then we must have $a=9$, so the other two digits are at most $4$ (from the RHS). Here there is If there is no $7$, then we must have either $a=6$ or $a=9$; since $6^4+9^4>7000$, it must be $a=9$. The LHS is over 9000, so the other three numbers must be $6,6,?$ or $6,5,5$... 
17 17 First, let us define some things: For simplicity, for partial boards presented (with ...), let's consider that the width is equals to or larger than the height. If not, you can just rotate everything 90° to get a board that is like this. Unsolvable board (UB) - One that no matter what you rotates, it is impossible to have all the dominoes with the same ... 4 I think the answer is Explanation: First place one of the $4\times4$ squares within the $5\times5$ square. The remaining area (9 units) should be as compact as possible, so let's shift the $4\times4$ square right up to one corner, leaving an L-shape remaining. (I'm not sure how to prove rigorously that this is optimal.) Now we need to place the other two ... 7 Is it the following? Very broadly, other than standard sudoku solving techniques: 11 This is because 4 This is a standard application of the Burnside Lemma. I'll solve the more general case of a square with $n$ colours. 5 I think the answer is Counting 2 We can just draw a cube like so: The answer is... 11 This is true for a given $n$ if and only if Now, Can we make that work? Therefore Therefore The above argument may be difficult to follow. Let's look at it more concretely. Now And in fact To play the same game 7 Let us denote the ages of Person 1, Person 2, Person 3 by $x,y,z$ respectively. We'll assume that $x,y,z$ are positive throughout. The product of the 1st person's and the 2nd person's ages is $311 \frac{2}{3}$ plus the 3rd person's age. The sum of the 1st person's age and the quotient of the 3rd person's and the 2nd person's ages is $41 \frac{17}{24}$ ... 2 I've made this community wiki, so please edit away! This is a very good question to put to students as long as one subsequently hammers home the point that it is ill posed and one works out the common fallacies that contribute to the expected answer. The question is actually well suited for this didactic exercise because the tacit assumptions OP's preferred &... 
0 Just keep a running total of what each is owed (positive) or owes (negative). It doesn't matter to whom: There are people A,D,G,P "A" needs $2.50 from D,G,P "P" needs$2.50 from A,D,G As far as who paid "A" what. "D", paid "A" \$5 (So "A" owes D \$2.50) "G", hasn't paid "A&... 4 I observed the following: This is because This observation immediately excludes many numbers from consideration. It remains to be shown that the numbers that were not excluded do all end at $153$. For completeness, here is my working out of the remaining cases. Rand al'Thor already did this first in his answer. Like him, I do not see any clever way that ... 4 Considering cycles The largest number such a chain can ever reach is $1486$ (every number between $2001$ and $2100$ gives at most $8+0+729+729=1466$ at the first step, and the largest possibility resulting from any number up to there is $1+27+729+729=1486$). So we have an upper bound, which means every chain must eventually end in a cycle. In the OP you ... 4 (After rechecking my numbers which gave me a different answer before, it seems I've written a duplicate of hexomino's quicker answer. Since this one has 2 lines of maths instead of many, I think I'm going push post anyway.) The key to figuring this out is that This allows us to figure out the relative frequency of the two cases: From this we get that ... 5 For these sorts of problems I like to use In this scenario we can apply it as follows 3 The most elegant solution I could find was this one: let the matrix be \begin{equation*} \begin{pmatrix} A & B & C \\ D & E & F \\ G & H & I \end{pmatrix} \end{equation*} Let the sum of each row/column/diagonal be $S$. Then \begin{eqnarray} A+B+C + D+E+F = A+E+I + C+F+I = 2S &\to& I = \frac{B+D}{2} \\ A+D+G = G+H+I + S &\... 3 First I'll prove a property of $3\times3$ magic squares. 
Using this property you can use a similar proof to find the central cell in this case: The rest of the magic square then follows: I originally used a less elegant more general method by finding a generic solution: Now it is just a matter of applying that to this particular problem. 0 Let's say A pays £10 for court for four people including himself (So £2.50 each) P pays £10 for a court for four people including himself. (So £2.50 each) That’s equivalent to A lending £2.50 to three people. And P lending £2.50 to three people. So that leads to what is mentioned at the beginning of the question "A" needs \$2.50 from D,G,P "P&... 2 Mathy If you add a "hub" node to your graph, you can reduce the number of possible connections. If you allow 2 arrows to represent "owed" and "owes", it takes the number of possible relationships down from (n-1)n to 2n (so equal at 3 nodes and smaller thereafter). Non-Mathy (the practical use of the above) You invoke the money ... 2 The initial state of who owes whom what, before any payments are made, looks like this: Owes ║ A │ D │ G │ P Total O ═╬═══╪═══╪═══╪═══ ═════ w A║ X │250│250│250 750 e ─╫───┼───┼───┼─── ───── d D║ 0 │ X │ 0 │ 0 0 ─╫───┼───┼───┼─── ───── T G║ 0 │ 0 │ X │ 0 0 o ─╫───┼───┼───┼─── ───── P║250│250│250│ ... 7 Graphs are your friends! 1: Set Up: To make matters simple, let's just assume that at one point, A loaned the other three \$2.50 and P did the same. Let a directed edge represent \$2.50. Our starting graph represents the cash flow state after both sets of loans are made. Note that the A-P Edges cancel. 2: D pays a \$5.00. Draw two edges (green, to ... 2 Hey there this is my first time so I don't know how to use the system exactly, pardon me. Let's make some diagrams. To represent A needs 2.5 from D, use the text A<---2.5---D. Then we can represent the needed transactions as: A<---2.5----D A<---2.5----G A<---2.5----P P<---2.5----A P<---2.5----D P<---2.5----G First observe that there ... 
11 At the beginning: The only official payment that has happened is D paying 5.00 to A, so the updated transactions are: Now then: 7 Assuming by "needs" at the start you mean "is owed", then: Also: So: You could work out But it's easier to just work out In this case Or another way of looking at it 1 However, he only had 30 of the cheaper melons, so he could only make 10 groups containing 3 of the cheaper. So 10 groups of (3, 2) uses 30 cheap, and 20 expensive, leaving two groups of (0,5). These 10 melons should have been sold for 5 d, but he ended up selling them for 4d, losing 1d on the transactions. 3 A liberal interpretation of “coincides” in the puzzle statement ... “each vertex of the triangles coincides with exactly three triangles” ... allows for a vertex to touch a side, and not always another vertex, of another triangle as in these failed attempts ... ... that led to this pair of ... -3 suppose plan area is A and no of triangles are n, A = n * 1/2 * h * b so if we want n, n = 2A /(hb), that is the minimum no of triangle 3 Looks like the apparent relation between the shapes of the arrow heads and the operation they imply is a red herring. And even if they aren't there is no reliable clue as to what open versus filled heads may signify. 9 The number of triangles in my best solution is but I don't know if this is optimal. Addendum: I previously had an incorrect solution, as I used similar triangles instead of congruent ones (i.e. I used some triangles of a different size but the same shape). As requested by @humn, I'll keep that incorrect solution with 12 triangles available below: -2 In both the square, area will be the same. suppose big square has area, A = x*x all small square total area will be A = n * 1 *1 so, x*x = n * 1 * 1 x = √n, for all x>=n 0 Because 0 Is the answer Reason: 2 Occam's razor says the simplest explanation is the best. 2 4 It's simply just So Therefore the missing number is 2 4 X could be operator (IS NOT) ! 
Because 1x=2 2x=4 3x=3 4x>8 are always either true or false conditions in any programming language. 1!=2 //Always True 2!=4 //Always True 3!=3 //Always False 4!>8 //Always True 2 I must admit that I looked this up, but the largest graph which works turns out to be unique, and is called Now all that remains is to label the vertices appropriately. One easy way to do this is to Of course the numbers you get are rather large. You can improve this a bit as follows: There may be a way to reduce the numbers further. 2 Answering whether the $N/(N + M - 1)$ survival probability can be met: 3 The following 12 numbers satisfy the conditions: $$203,385,437,713,814,1330,1479,1495,2418,3441,11951,70499$$ Represented as a graph where the numbers are its vertices, two of which are joined by an edge if they have a common divisor greater than 1, the graph can be shown not to have a complete subgraph on four vertices ($K_4$): nor does its complement: In ... 4 Their ages are: Reason:
# All Questions

### Pseudorandom Functions with different input and output lengths I am working on a problem found in my Cryptography textbook that goes as follows: Let F be a pseudorandom function such that for $k \in \{0,1\}^n$, function $F_k$ maps $l_{in}(n)$-inputs to ...

### How does Intel TXT prevent spoofing of PCR values? I have read some content on trusted execution environment, and I like to ask exactly how it improves hashing. What I read at Wikipedia's "Trusted Execution Technology" article states: ...PCRs) ...

### How can I multiply an additively homomorphic encrypted value by a float number? As we all know, if $E()$ is an additively homomorphic encryption, we can multiply $E(a)$ by an integer $b$, then we will get $E(ab)$. But what if $a$ is a float number? Can we still get $E(ab)$? Which ...

### How to prove that a commitment hides the decryption of an ElGamal ciphertext? I've decided to remove a previous unanswered question of mine and break it down into smaller pieces so it's not such a loaded question. For this question I need to prove that I've committed to a ...

### Hiding the identity of a party within the Kerberos authentication scheme In the Kerberos authentication protocol, as described here: would it be safer to replace step (1) with: $$A \rightarrow T : A, E_{K_A{_{T}}}(B, N_A)$$ so that a passive adversary does not know ...

### can pairings only be used with elliptic curves? As far as I understand one big advantage of ECC is that we can use pairings on the group of torsion points of the curve. I was wondering if it is possible to construct pairings from general finite ...
Students will extend their understanding of inverse functions to functions with a degree higher than 1, and factor and simplify rational expressions to reveal domain restrictions and asymptotes.

## Unit Summary

In Unit 4, Rational and Radical Functions, students will extend their understanding of inverse functions to functions with a degree higher than 1. Alongside this concept, students will factor and simplify rational expressions and functions to reveal domain restrictions and asymptotes. Students will become fluent in operating with rational and radical expressions and use the structure to model contextual situations. In this unit, students will also revisit the concept of an extraneous solution, first introduced in Unit 1, through the solution of radical and rational equations.

The unit begins with Topic A, where there is a focus on understanding the graphical and algebraic connections between rational and radical expressions, as well as fluently writing these expressions in different forms. In Topic B, students delve deeper into rational equations and functions and identify characteristics such as the $x$- and $y$-intercepts, asymptotes, and removable discontinuities based on the relationship between the degree of the numerator and denominator of the rational expression. Students will also connect these features with the transformation of the parent function of a rational function. In Topic C, students solve rational and radical equations, identifying extraneous solutions, then modeling and solving equations in situations where rational and radical functions are necessary. Students will connect the domain algebraically with the context and interpret solutions.

Pacing: 20 instructional days (18 lessons, 1 flex day, 1 assessment day)

## Assessment

This assessment accompanies Unit 4 and should be given on the suggested assessment day or after completing the unit.

## Unit Prep

### Intellectual Prep
Internalization of Standards via the Unit Assessment

• Take unit assessment. Annotate for:
  • Standards that each question aligns to
  • Purpose of each question: spiral, foundational, mastery, developing
  • Strategies and representations used in daily lessons
  • Relationship to Essential Understandings of unit
  • Lesson(s) that assessment points to

Internalization of Trajectory of Unit

• Read and annotate “Unit Summary.”
• Notice the progression of concepts through the unit using “Unit at a Glance.”
• Essential understandings
• Connection to assessment questions

### Essential Understandings

• A rational function is a ratio of polynomial functions. If a rational function does not have a constant in the denominator, the graph of the rational function features asymptotic behavior and can have other features of discontinuity.
• Rational and radical equations that have algebraic numerators or denominators operate within the same rules as fractions.
• Extraneous solutions may result due to domain restrictions in rational or radical functions.
• Rational functions can be used to model situations in which two polynomials or root functions are divided.

### Vocabulary

Vertical and horizontal asymptote; invertible functions; rational function; zero product property; rational expression; asymptotic discontinuities (infinite); domain restriction; removable discontinuities; square root / cube root; end behavior; extraneous solutions; sign chart

Lesson 13 (F.BF.B.3, F.IF.C.7.D): Describe transformations of rational functions.
Lesson 14 (A.REI.A.2)
Lesson 15 (A.REI.A.2): Solve radical equations and identify extraneous solutions.
Lesson 16 (A.APR.D.6, A.REI.A.2, A.REI.D.11): Solve rational equations.
Lesson 17 (A.APR.D.6, A.CED.A.2, N.Q.A.1): Write and solve rational functions for contextual situations.

## Common Core Standards

Key: Major Cluster | Supporting Cluster | Additional Cluster

### Core Standards
##### Arithmetic with Polynomials and Rational Expressions

• A.APR.D.6 — Rewrite simple rational expressions in different forms; write a(x)/b(x) in the form q(x) + r(x)/b(x), where a(x), b(x), q(x), and r(x) are polynomials with the degree of r(x) less than the degree of b(x), using inspection, long division, or, for the more complicated examples, a computer algebra system.
• A.APR.D.7 — Understand that rational expressions form a system analogous to the rational numbers, closed under addition, subtraction, multiplication, and division by a nonzero rational expression; add, subtract, multiply, and divide rational expressions.

##### Building Functions

• F.BF.B.3 — Identify the effect on the graph of replacing f(x) by f(x) + k, k f(x), f(kx), and f(x + k) for specific values of k (both positive and negative); find the value of k given the graphs. Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
• F.BF.B.4 — Find inverse functions.

##### Creating Equations

• A.CED.A.2 — Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales.

##### High School — Number and Quantity

• N.Q.A.1 — Use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale and the origin in graphs and data displays.
• N.RN.A.2 — Rewrite expressions involving radicals and rational exponents using the properties of exponents.

##### Interpreting Functions

• F.IF.B.5 — Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.
For example, if the function h(n) gives the number of person-hours it takes to assemble n engines in a factory, then the positive integers would be an appropriate domain for the function. Modeling is best interpreted not as a collection of isolated topics but in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards indicated by a star symbol (★). The star symbol sometimes appears on the heading for a group of standards; in that case, it should be understood to apply to all standards in that group.

• F.IF.C.7.B — Graph square root, cube root, and piecewise-defined functions, including step functions and absolute value functions.
• F.IF.C.7.D — Graph rational functions, identifying zeros and asymptotes when suitable factorizations are available, and showing end behavior.

##### Reasoning with Equations and Inequalities

• A.REI.A.2 — Solve simple rational and radical equations in one variable, and give examples showing how extraneous solutions may arise.
• A.REI.D.11 — Explain why the x-coordinates of the points where the graphs of the equations y = f(x) and y = g(x) intersect are the solutions of the equation f(x) = g(x); find the solutions approximately, e.g., using technology to graph the functions, make tables of values, or find successive approximations. Include cases where f(x) and/or g(x) are linear, polynomial, rational, absolute value, exponential, and logarithmic functions.
• A.APR.A.1 • F.BF.B.3 • F.BF.B.4.A • A.CED.A.4 • 8.EE.A.1 • F.IF.A.1 • F.IF.B.4 • F.IF.C.8 • F.IF.C.8.A • A.REI.A.1 • A.SSE.A.1 ### Standards for Mathematical Practice • CCSS.MATH.PRACTICE.MP1 — Make sense of problems and persevere in solving them. • CCSS.MATH.PRACTICE.MP2 — Reason abstractly and quantitatively. • CCSS.MATH.PRACTICE.MP3 — Construct viable arguments and critique the reasoning of others. • CCSS.MATH.PRACTICE.MP4 — Model with mathematics. • CCSS.MATH.PRACTICE.MP5 — Use appropriate tools strategically. • CCSS.MATH.PRACTICE.MP6 — Attend to precision. • CCSS.MATH.PRACTICE.MP7 — Look for and make use of structure. • CCSS.MATH.PRACTICE.MP8 — Look for and express regularity in repeated reasoning.
When I was in school, I was taught the Quadratic Formula. I was taught that it was the most efficient, most reliable way to find the roots of a quadratic function. This is what I was taught: Given a function in Standard Form, $$ax^2+bx+c$$, its roots can be found by evaluating $$\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$. I was instructed to commit this to memory, which I did. Decades later, as I trained to become a high school mathematics teacher, it popped back into my brain almost as readily as the so-called Pythagorean Theorem. The thing is, I knew the Quadratic Formula, but I wasn't entirely sure why it worked. It was something that had been foisted onto my memory cells as a frame in which to put puzzle pieces, and it would magically give me the roots.

What if there's a better way? A less efficient way, certainly: It has two middle steps. But a way grounded in concepts instead of in pushing letters and numbers around.

First, let's step back for a moment and think about those two roots of a quadratic function. Arguably the most important point on the graph of a quadratic function, even more so than the roots, is the vertex. Every quadratic function has the same basic shape, of a parabola, with a single turning point: It is common to call the coordinates of this vertex $$(h, k)$$, where $$h$$ represents the $$x$$-coordinate and $$k$$ represents the $$y$$-coordinate (because “h” is the first letter of “horizontal” and “k” is the first letter of “kvertical”).

We could observe that the two branches of the parabola are symmetrical: Given any random $$y$$-value, the two points of the parabola can be connected by a segment whose midpoint is on $$x = h$$. This means that, whatever our roots are, they are the same distance from $$h$$. Let's call this distance $$d$$, so our roots are $$h\pm d$$.

The exact formula for $$d$$ is a bit more challenging, but we can make a few more observations. Since this is a quadratic, it's reasonable to assume $$d$$ involves a square root.
Since $$a$$ is related to how steep the parabola’s legs are, it’s reasonable to assume $$d$$ involves dividing by that steepness. And since the roots are related to how far above or below the axis the vertex is, it’s reasonable to assume $$d$$ is directly related to $$k$$ somehow. The simplest formula that satisfies those reasonable assumptions is $$d = \sqrt{\frac{k}{a}}$$. The reality is not far off from this. There’s one small problem. In the diagram above, for instance, $$k$$ is negative and $$a$$ is positive. Dividing a negative by a positive gives us a negative, which doesn’t have a real square root. Likewise, if $$a$$ is negative, we have two real roots only if the vertex is above the $$x$$-axis, that is, if $$k$$ is positive. So we need to account for this by taking the opposite of our quotient. So, given our basic observations, the simplest possible formula for $$d$$ that satisfies all of our conditions is: $$d = \sqrt{-\frac{k}{a}}$$. This formula is completely motivated by important observations about the graph of the parabola. The explanation is not a proof, but this is at least a reasonable possibility. If this pans out, that makes our quadratic formula this: $h \pm \sqrt{-\frac{k}{a}}$ This is provable. I’ll provide the proof later, but this is indeed the formula for the roots of a quadratic if we know the vertex and the stretch factor. Con: This requires the vertex, which is not immediately evident from the Standard Form, which is what we teach as the default form of a quadratic. Finding the vertex from the standard form involves two steps, one of which will look very familiar given the Quadratic Formula. The Vertex Form of a quadratic function is $$a(x-h)^2+k$$. This allows us to immediately see the vertex, but it’s not the Standard Form. However: $a(x-h)^2+k=ax^2-2ahx+ah^2+k$ meaning that $$b=-2ah$$ and hence $$h=-\frac{b}{2a}$$. From here, we have two choices. 
We could solve $$ah^2+k=c$$ for $$k$$, or we could plug $$h$$ into the function and evaluate for $$k$$, that is, $$k=ah^2+bh+c$$. Students usually prefer to do the latter, although I’ll do the former in my proof below.

So that gives us three steps for finding the roots of a quadratic in Standard Form:

1. Determine $$h$$.
2. Determine $$k$$.
3. Evaluate $$h\pm\sqrt{-\frac{k}{a}}$$.

This involves three steps instead of one, but it has several definite advantages. First, it also gives us the vertex of the parabola, something we’d need to evaluate separately otherwise. More importantly to my mind, though: we’ve replaced an onerous, difficult-to-explain, and even-more-difficult-to-remember formula with two simpler formulas, one of which can be completely motivated by concepts and by understanding what a function, and what a quadratic function, is all about.

So let’s look at the proof that our two Quadratic Formulas above are equivalent. What we need to prove first: $\sqrt{-\frac{k}{a}}=\frac{\sqrt{b^2-4ac}}{2a}$

Let’s solve $$ah^2+k=c$$ for $$k$$. Recall that $$h=-\frac{b}{2a}$$. \begin{align} k &= c - ah^2 = c - a\left(-\frac{b}{2a}\right)^2 \\ &= c-\frac{ab^2}{4a^2} = c-\frac{b^2}{4a} \\ &= \frac{4ac-b^2}{4a}\end{align}

Now we’re ready to swap that out: $\sqrt{-\frac{k}{a}} = \sqrt{-\frac{4ac-b^2}{4a^2}} = \frac{\sqrt{b^2-4ac}}{2a}$

Since $$h=-\frac{b}{2a}$$, $h\pm\sqrt{-\frac{k}{a}} = \frac{-b\pm\sqrt{b^2-4ac}}{2a}$ which is what I was intending to prove.

Clio Corvid
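The three-step procedure above is easy to check mechanically. Here is a short Python sketch (the function name `roots_via_vertex` is mine, not from the post) that computes $$h$$, $$k$$, and then the roots $$h\pm\sqrt{-\frac{k}{a}}$$:

```python
import math

def roots_via_vertex(a, b, c):
    """Roots of ax^2 + bx + c via the vertex-form method:
    1. h = -b / (2a)            (x-coordinate of the vertex)
    2. k = a*h^2 + b*h + c      (y-coordinate of the vertex)
    3. roots = h +- sqrt(-k/a)  (real only when the parabola crosses the x-axis)
    """
    h = -b / (2 * a)
    k = a * h * h + b * h + c
    d = math.sqrt(-k / a)
    return (h - d, h + d)

# Example: x^2 - 5x + 6 = (x - 2)(x - 3)
print(roots_via_vertex(1, -5, 6))  # → (2.0, 3.0)
```

Note that `math.sqrt` raises a `ValueError` when $$-\frac{k}{a}$$ is negative, which corresponds exactly to the no-real-roots case discussed above.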
# Volume 182, Issue 4

### 1. Testing Boolean Functions Properties

The goal in the area of function property testing is to determine whether a given black-box Boolean function has a particular property or is $\varepsilon$-far from having that property. We investigate here several types of property testing for Boolean functions (identity, correlations and balancedness) using the Deutsch-Jozsa algorithm (for the Deutsch-Jozsa (D-J) problem) and also the amplitude amplification technique. First, we study a particular testing problem: namely, whether a given Boolean function $f$ of $n$ variables is identical with a given function $g$ or is $\varepsilon$-far from $g$, where $\varepsilon$ is the parameter. We present a one-sided error quantum algorithm for this problem with query complexity $O(\frac{1}{\sqrt{\varepsilon}})$, and we show that our quantum algorithm is optimal. We then show that the classical randomized query complexity of this problem is $\Theta(\frac{1}{\varepsilon})$. Secondly, we consider the D-J problem from the perspective of functional correlations, letting $C(f,g)$ denote the correlation of $f$ and $g$. We propose an exact quantum algorithm for making a distinction between $|C(f,g)|=\varepsilon$ and $|C(f,g)|=1$ using six queries, while the classical deterministic query complexity for this problem is $\Theta(2^{n})$ queries. Finally, we propose a one-sided error quantum query algorithm for testing whether one Boolean function is balanced versus $\varepsilon$-far from balanced using […]

### 2. The inverse of Ackermann function is computable in linear time

We propose a detailed proof of the fact that the inverse of the Ackermann function is computable in linear time.

### 3. Perpetual Free-choice Petri nets are lucent -- proof of a theorem of van der Aalst using CP-exhaustions

Van der Aalst's theorem is an important result for the analysis and synthesis of process models.
The paper proves the theorem by exhausting perpetual free-choice Petri nets by CP-subnets. The resulting T-systems are investigated by elementary methods.
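As an aside, the classical $\Theta(\frac{1}{\varepsilon})$ randomized identity tester mentioned in the first abstract is simple to sketch in Python (all names here are my own illustration; the quantum $O(\frac{1}{\sqrt{\varepsilon}})$ algorithm is not reproduced):

```python
import math
import random

def identity_test(f, g, eps, n, trials_factor=2):
    """One-sided classical tester: always accepts when f == g; when f is
    eps-far from g (they differ on at least eps * 2^n inputs), each uniform
    random query catches a difference with probability >= eps, so
    O(1/eps) queries suffice to reject with constant probability."""
    queries = max(1, math.ceil(trials_factor / eps))
    for _ in range(queries):
        x = random.getrandbits(n)  # uniform input in [0, 2^n)
        if f(x) != g(x):
            return False  # witness found: definitely not identical
    return True  # identical, or close to g with high probability
```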
# mantle cavity The mantle (also known by the Latin word pallium meaning mantle, robe or cloak, adjective pallial) is a significant part of the anatomy of molluscs: it is the dorsal body wall which covers the visceral mass and usually protrudes in the form of flaps well beyond the visceral mass itself. In many species of molluscs the epidermis of the mantle secretes calcium carbonate and conchiolin, and creates a shell. The words mantle and pallium both originally meant cloak or cape, see mantle (vesture). This anatomical structure in molluscs often resembles a cloak because in many groups the edges of the mantle, usually referred to as the mantle margin, extend far beyond the main part of the body, forming flaps, double-layered structures which have been adapted for many different uses, including for example, the siphon.
# BinomialFunction class¶

(Shortest import: from brian2 import BinomialFunction)

class brian2.input.binomial.BinomialFunction(n, p, approximate=True, name='_binomial*')[source]

A function that generates samples from a binomial distribution.

Parameters

n : int
    Number of samples
p : float
    Probability
approximate : bool, optional
    Whether to approximate the binomial with a normal distribution if $$n p > 5 \wedge n (1 - p) > 5$$. Defaults to True.

Attributes

implementations
    Container for implementing functions for different targets. This container can be extended by other code generation targets/devices. The key has to be the name of the target, the value a function that takes three parameters (n, p, use_normal) and returns a tuple of (code, dependencies).
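The `approximate` rule documented above (switch to a normal approximation when $$n p > 5$$ and $$n (1 - p) > 5$$) can be illustrated with a plain-Python sketch. This is only my reading of the documented decision logic, not brian2's actual implementation, and `sample_binomial` is a hypothetical name:

```python
import math
import random

def sample_binomial(n, p, approximate=True):
    """Sketch of the documented behaviour: draw from a normal distribution
    with matching mean n*p and variance n*p*(1-p) when the approximation
    condition n*p > 5 and n*(1-p) > 5 holds; otherwise draw an exact
    binomial sample."""
    if approximate and n * p > 5 and n * (1 - p) > 5:
        return random.gauss(n * p, math.sqrt(n * p * (1 - p)))
    # exact sampling: count successes in n Bernoulli(p) trials
    return sum(random.random() < p for _ in range(n))
```

In brian2 itself, a `BinomialFunction` instance is used as a function inside model code strings (e.g. in equations or reset statements) rather than being called directly like this sketch.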