This is the fourth project in Calculus 2 at Fitchburg State. Spring 2017.
\documentclass[12pt]{amsart}
\addtolength{\hoffset}{-2.25cm} \addtolength{\textwidth}{4.5cm} \addtolength{\voffset}{-2.5cm} \addtolength{\textheight}{5cm}
\setlength{\parskip}{0pt} \setlength{\parindent}{15pt}
\usepackage{amsthm} \usepackage{amsmath} \usepackage{amssymb}
\usepackage[colorlinks = true, linkcolor = black, citecolor = black, final]{hyperref}
\usepackage{graphicx} \usepackage{multicol} \usepackage{marvosym} \usepackage{wasysym}
\usepackage{tikz} \usetikzlibrary{patterns}
\newcommand{\ds}{\displaystyle} \DeclareMathOperator{\sech}{sech}
\setlength{\parindent}{0in} \pagestyle{empty}
\begin{document} \thispagestyle{empty}
{\scshape Math 2400} \hfill {\scshape \large Project \#4 - Recursive Sequences} \hfill {\scshape Spring 2016} \smallskip \hrule \bigskip This
project may be completed individually or in a group of two or three students. If you wish to complete the project as a group, let me know so that I can make the appropriate Blackboard tools
available. You will submit the written report to Project \#4 Assignment by uploading a pdf file. Please follow all the specifications for a written report project that are outlined in the
Specifications document. \bigskip \bigskip First, we need some information on recursive sequences. Like, what are they? A {\bf recursive sequence} is one where rather than giving the formula for the
$n^\text{th}$ term as an expression of $n$ (like the harmonic sequence $a_n = \frac{1}{n}$), the formula is given as an expression of previous terms in the sequence. For example, we could find a
term in the sequence by adding two to the previous term and then taking the square root of the result. Since this is not directly related to the number of the term, we must also give a starting place
when defining a recursive sequence. Suppose the first term of the sequence we're defining is 1; then we would define the sequence as $$a_1 = 1\text{, and }a_n = \sqrt{a_{n-1} + 2}\text{ for }n \geq 2.$$ We can write
out the first few terms \dots $$1, \sqrt{3}, \sqrt{2 + \sqrt{3}}, \sqrt{2 + \sqrt{2 + \sqrt{3}}}, \sqrt{2 + \sqrt{2 + \sqrt{2 + \sqrt{3}}}}\dots$$ It is difficult to see what might be happening to
this sequence in this form, so we look at a decimal approximation of the terms: $$1, 1.73205, 1.93185, 1.98289, 1.99571, \dots$$ This particular sequence seems to be approaching 2, but all of our
tools for showing this depend on writing the terms as an expression of $n$. What can we do?!?!?! Fear not. We press on.\\ It appears this sequence is increasing and always less than or equal to 2,
and to verify this, we use a technique called {\it mathematical induction}. If you've learned this technique somewhere else, hooray! If not, don't worry, we'll focus on how this is used in recursive
sequences and not concern ourselves with the formalness happening in the background. \\ First we'll try to see if the sequence is increasing. Starting with $n=1$, $1 = a_1 < a_2 = \sqrt{3} \approx
1.73205$, so initially, the sequence is increasing. Suppose that this is the case, that the sequence is increasing up to the point where $n=k$. Could we show that the sequence increases in the next
step? Could we show that $a_k < a_{k+1}$? Yes, \begin{align*} a_{k+1} & = \sqrt{2 + a_k} \text{, by the recursive definition of our sequence,}\\ & > \sqrt{2 + a_{k-1}} \text{, since we know $\{a_n\}$
is increasing up until $n=k$}\\ & = a_k. \end{align*} Putting all this together shows that $a_k < a_{k+1}$. If the sequence is increasing up to a point, then in the next step the sequence is also
increasing. The combination of the initial condition and what we just showed tells us that the sequence is increasing. This is mathematical induction! If that didn't quite click, try thinking about it like this: We already know the sequence increases from the first to the second term, so it is increasing up to the point where $n = 2$. The induction step (where we showed that $a_k < a_{k+1}$) shows
that it must be increasing in the next step, so $a_2 < a_3$. Now, we know it is increasing all the way until $n = 3$. The induction step again says that it must increase to the next term, so $a_3 <
a_4$. This process never needs to end. If I want to know if the sequence is increasing at the 75$^\text{th}$ step, I can keep this argument going 71 more times.\\ We can make a similar argument for
the fact that the sequence is less than or equal to 2. Clearly the first term, 1, is smaller than 2. Now, we assume that every term up to $a_k$ is less than or equal to 2, and we try to show that
this implies that the next term, $a_{k+1}$, is less than or equal to 2. \begin{align*} a_{k+1} & = \sqrt{2 + a_k} \text{, by the definition of the sequence,}\\ & \leq \sqrt{2 + 2} \text{, since each term before $a_{k+1}$ is at most 2,}\\ & = \sqrt{4} = 2. \end{align*} Hooray! The sequence is bounded above by 2. These two facts together (increasing and bounded above) tell us that our
recursive sequence does in fact converge. What Theorem is that? But, it doesn't tell us what the sequence converges to. So, here we go. Since we already know the limit exists, we can give it a name.
How about $L$? \begin{align*} \lim_{n \rightarrow \infty} a_n & = L\\ \lim_{n \rightarrow \infty} \sqrt{2 + a_{n-1}} & = L \text{, by the definition of the sequence,}\\ \sqrt{\lim_{n \rightarrow \infty} (2 + a_{n-1})} & = L \text{, since the square root function is continuous,}\\ \sqrt{2 + \lim_{n \rightarrow \infty} a_{n-1}} & = L \text{, by Limit Laws,}\\ \sqrt{2 + L} & = L \text{, since we called the limit of our sequence $L$,}\\ 2 + L & = L^2 \\ 0 & = L^2 - L - 2\\ 0 & = (L-2)(L+1)\\ L & = 2 \text{, since $L \neq -1$ as all the terms of the sequence are positive.} \end{align*} AHA!!! The
limit of the sequence is in fact 2. \vfill \vfill {\bf Now it's your turn!} \bigskip Consider the recursive sequence defined by $$s_1 = 1 \text{ and } s_{n+1} = \frac{1}{3-s_n}.$$ \bigskip Write out
the first ten terms of the sequence. \bigskip From these terms, does it appear that the sequence is bounded? Monotone? Convergent? \bigskip Use induction to prove that $\{s_n\}$ is bounded and
decreasing. \bigskip Prove that $\{s_n\}$ converges and find its limit. \vfill \newpage {\bf Recursive Bunnies } \medskip Consider what you know about bunny rabbits (or what we're pretending is true
about bunny rabbits): \begin{itemize} \item Bunny rabbits live forever. \item There are always an equal number of female bunny rabbits and male bunny rabbits. \item Bunny rabbits reproduce like
crazy: Every month, each ``productive pair" of rabbits produces another pair of baby bunnies. \item It takes each new pair of bunnies two months to become productive. \item The fact that so many of
these bunnies are inbred does not affect the bunnies' ability to continue living their lives as described above. \end{itemize} \bigskip \bigskip Suppose we start with a brand new pair of bunny rabbits
for month 1. How many pairs of bunnies are there in month 2? Month 3? Write out the number of pairs of bunnies we have in each of the first ten months. \bigskip Do you see a pattern? Does this look
familiar? Is it recursive? \bigskip Let's denote the number of bunnies in the $n^{\text{th}}$ month by $b_n$. Write down a formula for this sequence. \bigskip What can you say about the sequence $\{b_n\}$? Is it monotonic? Is it bounded? Does it converge? Prove these results. \bigskip Since the sequence $\{b_n\}$ diverges, we define a new sequence that is a bit more interesting to work with.
Let $a_n = \displaystyle{\frac{b_{n+1}}{b_n}}$. \bigskip Is this sequence monotonic? Is it bounded? \bigskip The sequence $\{a_n\}$ converges (promise!) even though it does not fit {\bf all} the
criteria for the Bounded Monotone Convergence Theorem. Find the limit of the sequence. \bigskip \end{document}
Spacemacs and LaTeX
In the previous post, I introduced several useful tips for editing LaTeX files using Vim. In a nutshell, there are shortcuts (the = and gq family), commands, and plugins. In addition, I also introduced the latexmk tool from TeX Live. By using the -pvc flag, the PDF file is automatically regenerated once you save a tex file. However, a long time ago, I changed my main editor from Vim to Emacs, more specifically, Spacemacs. If you are interested in the reasons why I turned to Spacemacs, this post will give you enough answers. I will mainly focus on the experience of editing LaTeX files in Spacemacs.
First of all, what is Spacemacs? It is an Emacs advanced kit focused on Evil: "The best editor is neither Emacs nor Vim, it's Emacs and Vim!" Briefly speaking, you get the powerful functions of Emacs along with the efficient editing experience of Vim. For me, the selling point may be the great documentation. You don't need to search the internet hoping to find posts from experienced users. The documentation of Spacemacs is like a real software manual, including various introductions and helpful examples. I recommend reading through the features section of the documentation. Here is an official screenshot.
There are plenty of configuration layers in Spacemacs. For LaTeX editing, two layers are enough: auto-completion and latex. If you include these layers in the .spacemacs file, Spacemacs will automatically load the related packages when you open a tex file. The first layer, auto-completion, is, as the name says, for auto-completion. For example, you can simply type e and then M-/ to expand to the \emph{} command in LaTeX. Besides tex files, this layer also serves other functions when editing other kinds of files. Let's mainly look into the latex layer.
The core of the latex layer is actually the AUCTeX package. This package is extremely powerful; I am sure I can't list all its functions in one post, so I will list the most useful ones in my view.
Build and View
The keybinding for build is SPC m b, and for view it is SPC m v. If you press the build keybinding, it will build your tex file and generate a PDF file. By default, it will use pdflatex. However, I sometimes use xelatex to build beamer source code. To do this, adding local variables at the end of a tex file tells AUCTeX to build with XeTeX.
%%% Local Variables:
%%% coding: utf-8
%%% mode: latex
%%% TeX-master: t
%%% TeX-command-extra-options: "-shell-escape"
%%% TeX-engine: xetex
%%% End:
By typing the view keybinding, you can open the pdf file and the tex file side by side to preview the result. I recommend using Skim on OS X and Okular on Linux, because both of them can sync the pdf and tex files at runtime. Add the following lines to the dot file to automatically select the previewer.
(cond
 ((string-equal system-type "darwin")
  (setq TeX-view-program-selection '((output-pdf "Skim"))))
 ((string-equal system-type "gnu/linux")
  (setq TeX-view-program-selection '((output-pdf "Okular")))))
Sometimes, when you are editing a long tex file or beamer presentation, you want to know the correlation between the pdf file and the tex file. The sync function provided by AUCTeX can help you jump from tex to pdf and from pdf to tex. Add the following lines to the dot file:
(setq TeX-source-correlate-mode t)
(setq TeX-source-correlate-start-server t)
(setq TeX-source-correlate-method 'synctex)
(setq TeX-view-program-list
      '(("Okular" "okular --unique %o#src:%n`pwd`/./%b")
        ("Skim" "displayline -b -g %n %o %b")
        ("Zathura"
         ("zathura %o"
          (mode-io-correlate
           " --synctex-forward %n:0:%b -x \"emacsclient +%{line} %{input}\"")))))
After that, when you type SPC m v, the pdf viewer will highlight the line you are editing in yellow. Furthermore, if you press Shift and Command together and click on the pdf file, it will automatically jump to the corresponding line in the tex file. This is quite useful when you are editing a beamer presentation and want to jump to a page in the middle.
In addition to the previous random thoughts, Spacemacs supports Vim keybindings such as = and gq to format a paragraph. Moreover, syntax checking is also an optional layer in the dot file. AUCTeX still has a lot of other features. For example, you can preview equations and formulas inside the source code. You can also easily insert \cite{} and search keywords in the bib file. By means of the preview-latex subsystem, it can generate WYSIWYG inline previews of mathematics, figures, and other constructs which are hard to grasp in the plain source view.
At last, this is just an introduction.
UPDATE on 2015/11/17
Previously, we said to add TeX-view-program-list and TeX-view-program-selection. I forgot to mention one important thing: you also need to make Okular recognize Emacs. Go to "Settings -> Configure Okular -> Editor" and select the Emacs client configuration.
Since Okular changed its command line interface, the command for syncing from Spacemacs to Okular should be updated. Now, you can use ("Okular" "okular --unique %o#src:%n`pwd`/./%b") for sync.
Besides Okular on Linux, I also recommend another lightweight PDF viewer. Its name is Zathura. Like Okular, Zathura also supports Vim key bindings, but its shortcuts go beyond the traditional hjkl; you will see more if you read the manual. However, the configuration of tex sync for Zathura is a little complicated. First, you need to create a shell script for Zathura, which I call zathura-sync.sh:
#!/bin/sh
pos="$1"
pdffile="$2"
zathura --synctex-forward "$pos" "$pdffile" || \
(
  zathura -x "emacsclient --eval '(progn (find-file \"%{input}\") (goto-line %{line}))'" "$pdffile" &
  sleep 1; zathura --synctex-forward "$pos" "$pdffile"
)
Secondly, you need to put this script into a directory that is in your PATH. My preference is to put all my executables in the /home/username/bin directory. At last, add the zathura-sync.sh script to the TeX view program list:
(setq TeX-view-program-list
      '(("Okular" "okular --unique %o#src:%n`pwd`/./%b")
        ("Skim" "displayline -b -g %n %o %b")
        ("Zathura" "zathura-sync.sh %n:1:%b %o")))
;; and change the default program to Zathura
(cond
 ((spacemacs/system-is-mac) (setq TeX-view-program-selection '((output-pdf "Skim"))))
 ((spacemacs/system-is-linux) (setq TeX-view-program-selection '((output-pdf "Zathura")))))
Finally, with Zathura you can simply run SPC m v to view the PDF and use Ctrl + left click to jump to the corresponding tex sentence.
UPDATE on 2016/08/05
Recently, I found that the latest AUCTeX began to support more view programs. This means that you no longer need to bother defining your own TeX-view-program-list. The official list is in http://
(defvar TeX-view-program-list-builtin
  (cond
   ((eq system-type 'windows-nt)
    '(("Yap" ("yap -1" (mode-io-correlate " -s %n%b") " %o") "yap")
      ("dviout" ("dviout -1 "
                 ((paper-a4 paper-portrait) "-y=A4 ")
                 ((paper-a4 paper-landscape) "-y=A4L ")
                 ((paper-a5 paper-portrait) "-y=A5 ")
                 ((paper-a5 paper-landscape) "-y=A5L ")
                 ((paper-b5 paper-portrait) "-y=E5 ")
                 ((paper-b5 paper-landscape) "-y=E5L ")
                 ((paper-b4jis paper-portrait) "-y=B4 ")
                 ((paper-b4jis paper-landscape) "-y=B4L ")
                 ((paper-b5jis paper-portrait) "-y=B5 ")
                 ((paper-b5jis paper-landscape) "-y=B5L ")
                 (paper-legal "-y=Legal ")
                 (paper-letter "-y=Letter ")
                 (paper-executive "-y=Executive ")
                 "%d" (mode-io-correlate " \"# %n '%b'\"")) "dviout")
      ("SumatraPDF" ("SumatraPDF -reuse-instance"
                     (mode-io-correlate " -forward-search \"%b\" %n") " %o")
       "SumatraPDF")
      ("dvips and start" "dvips %d -o && start \"\" %f" "dvips")
      ("start" "start \"\" %o")))
   ((eq system-type 'darwin)
    '(("Preview.app" "open -a Preview.app %o" "open")
      ("Skim" "open -a Skim.app %o" "open")
      ("displayline" "displayline %n %o %b" "displayline")
      ("open" "open %o" "open")))
   (t
    `(("dvi2tty" ("dvi2tty -q -w 132 %o"))
      ("xdvi" ("%(o?)xdvi"
               (mode-io-correlate " -sourceposition \"%n %b\" -editor \"%cS\"")
               ((paper-a4 paper-portrait) " -paper a4")
               ((paper-a4 paper-landscape) " -paper a4r")
               ((paper-a5 paper-portrait) " -paper a5")
               ((paper-a5 paper-landscape) " -paper a5r")
               (paper-b5 " -paper b5")
               (paper-letter " -paper us")
               (paper-legal " -paper legal")
               (paper-executive " -paper 7.25x10.5in")
               " %d") "%(o?)xdvi")
      ("dvips and gv" "%(o?)dvips %d -o && gv %f" ,(list "%(o?)dvips" "gv"))
      ("gv" "gv %o" "gv")
      ("xpdf" ("xpdf -remote %s -raise %o" (mode-io-correlate " %(outpage)")) "xpdf")
      ("Evince" ,(TeX-view-program-select-evince "gnome" "evince") "evince")
      ("Atril" ,(TeX-view-program-select-evince "mate" "atril") "atril")
      ("Okular" ("okular --unique %o" (mode-io-correlate "#src:%n%a")) "okular")
      ("xdg-open" "xdg-open %o" "xdg-open")
      ("PDF Tools" TeX-pdf-tools-sync-view)
      ("Zathura"
       ("zathura %o"
        (mode-io-correlate
         " --synctex-forward %n:0:%b -x \"emacsclient +%{line} %{input}\""))
       "zathura"))))
  "Alist of built-in viewer specifications.
This variable should not be changed by the user who can use
`TeX-view-program-list' to add new viewers or overwrite the
definition of built-in ones.  The latter variable also contains a
description of the data format.")
Topics for Theses
Here you will find an overview of topics for integrated projects, bachelor and master theses. Of course, it is possible to discuss further topics or modifications with the respective tutors.
Tutor: areas of expertise
Mathias Magdowski: simulation; electromagnetic coupling; statistical EMC
Benjamin Hoepfner: modelling; power quality; active filtering
Moustafa Raya: simulation; networks; e-mobility
Jörg Petzold: simulation; electromagnetic scattering
Max Rosenthal: measurements; electromagnetic scattering
Type of thesis: meaning
IP: integrated project
MT: master thesis
Hashtag: meaning
#PQ: power quality
#RC: reverberation chamber
#LS: literature study
#MS: modelling/simulation
#MEAS: measurements
#PRAC: construction and layout
Background and problem: Schelkunoff's theory for calculating the shielding efficiency of a planar metallic wall is a standard tool for every EMC engineer to approximate the shielding effect of boxes
or enclosures. The calculation is done in the frequency domain, where a harmonic excitation is assumed. Nevertheless, measurement methods and characteristics to assess the transient shielding
efficiency for certain pulses are also proposed in the literature.
Task: In the scope of this work, the practicability of converting the well-known Schelkunoff theory from the frequency into the time domain shall be analyzed. Here, a direct approach in the time domain as well as a transformation from the frequency into the time domain shall be investigated. This inverse Fourier transform can be done analytically or numerically. The proposed procedure shall also be tested for some
typical pulse shapes of the exciting external field.
• Literature survey about the existing Schelkunoff theory
• Literature survey for transient assessment criteria of the shielding efficiency
• Development of a direct transient approach for analyzing the shielding efficiency
• Transformation of the existing frequency-domain solution into the time domain
• Test of the procedure for some standard pulses
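The numerical inverse Fourier transform mentioned in the task list can be sketched briefly. Everything below is illustrative: the first-order low-pass transfer function is only a placeholder standing in for the actual Schelkunoff shielding formula, and the pulse parameters are arbitrary.

```python
import numpy as np

def transient_response(pulse, dt, f_c=1e9):
    """Propagate a sampled pulse through a frequency-domain transfer
    function and return the transient (time-domain) response."""
    n = len(pulse)
    freqs = np.fft.rfftfreq(n, d=dt)     # one-sided frequency axis in Hz
    T = 1.0 / (1.0 + 1j * freqs / f_c)   # placeholder transfer function, |T| <= 1
    return np.fft.irfft(np.fft.rfft(pulse) * T, n=n)

# Double-exponential test pulse, a typical EMC excitation shape
dt = 1e-11                               # 10 ps sampling step
t = np.arange(2048) * dt
pulse = np.exp(-t / 5e-10) - np.exp(-t / 1e-10)
response = transient_response(pulse, dt)
```

In a real implementation, T(f) would be replaced by the Schelkunoff shielding transfer function of the wall under study; the FFT/IFFT machinery stays the same.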
Supervisor: Dr.-Ing. Mathias Magdowski
Background and problem: Many interference phenomena of electromagnetic compatibility like galvanic, capacitive and inductive coupling as well as the corresponding countermeasures like bonding,
filtering and shielding can be more easily understood if they are practically demonstrated. Therefore, EMC demonstration units or "demo boxes" have been popular for several decades already.
Task: A new EMC demonstration unit shall be designed, developed and constructed. It shall be based on typical designs that are described in the literature. The box should be simple in design,
mechanically and electrically stable, and easy to transport and use. In contrast to the typical designs in the literature, where a full-fledged spectrum analyzer is necessary for the demonstration, a much cheaper SDR (software-defined radio) receiver shall be used here, which only requires a standard computer with a USB port.
• literature research about existing EMC demonstration units
• design and development of an own demonstration box
• setup and construction of the demonstration box
• bringing this box into service
Supervisor: Dr.-Ing. Mathias Magdowski
Background and problem: Reverberation chambers are commonly used to test the immunity against high intensity radiated fields. The chamber acts as a resonator with a preferably high quality factor and
low losses. In the steady state, the input power equals the power loss. From the difference between the power loss in the empty chamber and the chamber loaded with the device under test (DUT), the
coupled power to the device under test can be determined.
Task: Such an indirect measurement shall be done in different frequency ranges with diverse DUTs in the three mode-stirred chambers of the chair for electromagnetic compatibility. For simplicity,
a plain monopole antenna with one main resonance shall be used as a DUT. The resonant frequency and bandwidth of the DUT shall be determined from the frequency dependence of the coupled power to the DUT. The experimental results shall be validated by a direct measurement of the coupled power to the DUT. The discrepancies between both measurements shall be discussed. Also, the uncertainty of the indirect
measurement as well as its reasons shall be analyzed.
Supervisor: Dr.-Ing. Mathias Magdowski
Background and problem: The field in reverberation chambers can be described statistically. This description covers the distribution of the field quantities at one position as well as the spatial
correlation between nearby field points. Usually, the field is assumed to be circular, which means that the real and imaginary parts of the complex phasors of the field components are independent of each other but follow the same distribution. From this it follows that the field is statistically homogeneous, isotropic, unpolarized and incoherent. Based on these assumptions, the maximum
values of the field components and therefore the failure probability of an equipment under test can be determined.
In practice, however, the field will always feature a certain ellipticity, i.e. a difference between the real and imaginary parts of the complex field components. Measurements are necessary to
determine the actual field properties in mode-stirred chambers. Such measurements have only been done with linear polarized antennas up to now.
Task: The aim of this project is to measure the complex scattering parameters between a linear and a circular polarized antenna as well as between two circular polarized antennas. A vector network
analyzer is available for this measurement. As linear polarized antennas, different logarithmic-periodic dipole and horn antennas are provided. A helix antenna is available as a circular polarized
antenna. A second helix antenna has to be built according to this prototype. The measurement of the scattering parameters has to be done over a wide frequency range for different stirrer positions and has to be analyzed statistically.
Supervisor: Dr.-Ing. Mathias Magdowski
Background and problem: Cables are important coupling paths of external electromagnetic fields into connected devices and systems. In practice, not only single cables but cable harnesses occur that
can be regarded as transmission line networks. External fields can often be approximated as plane waves, at least in the far field region.
The simulation of the plane wave coupling to transmission line networks is quite well understood in the frequency domain. Nevertheless, when the loads at the terminals of the network feature a
non-linear behavior, e.g. as for a diode, the calculation has to be done in the time domain. Such a calculation has already been done for a single cable, where the exciting field was incorporated as
several distributed sources along the line.
Task: The task of this project is to adapt this approach to a transmission line network by taking into account the interaction between the individual lines. For simplicity, the lines can be assumed to be straight, uniform and of low loss. For linear loads and a certain pulsed excitation, the time response of the coupled voltage or current should be validated against a frequency-domain solution with a subsequent inverse Fourier transform. Then, the time response of the coupled voltage or current for a non-linear load shall also be calculated.
Supervisor: Dr.-Ing. Mathias Magdowski
Background and problem: The field in electrically large and complex shaped resonators (e.g. car bodies, aircraft fuselages, ...) can in principle be described deterministically. However, such a
description is of little value, as a small change in frequency, in the spatial position or in the electromagnetic boundary conditions may lead to a completely different field pattern. Therefore, a
statistical field description that can also be experimentally reproduced in reverberation chambers is much more suitable. If a device under test is placed in such a field, the coupling also has to be described statistically. For this, several methods exist, such as the Random Coupling Model or the Plane Wave Integral Representation.
Task: The aim of the project is to solve a given coupling problem with both methods and to compare both procedures (e.g. necessary parameters, computational effort, accuracy, ...). As a coupling
problem, the field coupling to a single wire transmission line above a conducting ground plane shall be analyzed. For this problem, experimental results as well as several analytical and numerical
results based on the Plane Wave Integral Representation exist at the chair for EMC, so that only a solution via the Random Coupling Model has to be found for comparison.
The solution to be analyzed is e.g. the coupled voltage (or current) at one line end as a complex phasor. This phasor can be characterized by its real and imaginary part, its magnitude and phase or
its squared magnitude, which is proportional to the power. From these characteristics, the frequency-dependent average, minimum, maximum or standard deviation can be calculated. Also the probability
density function, cumulative distribution function or the general statistical moments are of interest.
Supervisor: Dr.-Ing. Mathias Magdowski
Number sequence calculator for instant results
Discover the features of our free sequence calculator.
Calculate the terms of a number sequence in the blink of an eye.
Arithmetic Sequence Calculator
definition: a[n] = a[1] + f × (n-1)
example: 1, 3, 5, 7, 9, 11, 13, ...
Geometric Sequence Calculator
definition: a[n] = a × r^(n-1)
example: 1, 2, 4, 8, 16, 32, 64, 128, ...
Fibonacci Sequence Calculator
definition: a[0]=0; a[1]=1; a[n] = a[n-1] + a[n-2];
example: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...
What is the definition of the sequences calculator?
A number sequence calculator is an easy-to-use tool that helps you find out the terms of sequences in a breeze. You can use our calculator to determine both finite and infinite sequences in just a
few seconds. Our tool is equally helpful for arithmetic sequences, geometric sequences, and Fibonacci sequences.
Arithmetic sequence
An arithmetic sequence in math is a sequence of numbers where the difference between each consecutive term is constant. The next term is created by adding a constant number to the previous term. This
number is also called the common difference.
Depending on the sign of the common difference, an arithmetic sequence can be increasing or decreasing, tending toward positive or negative infinity.
To denote this sequence, we can use an arithmetic sequence formula:
a[n] = a[1] + f × (n-1), where a[n] is the nth term in the sequence, a[1] is the first term, and f is the common difference.
i.e. a[1], a[1] + f, a[1] + 2f.
For example, 1, 3, 5, 7, 9, 11, 13, ...
Here, the common difference, or f, is 2. Let's use the equation to determine the fifth term: a[5] = 1 + 2 × (5 - 1) = 9.
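The arithmetic-sequence formula translates directly into code. Here is a possible sketch in Python (the function names are my own):

```python
def arithmetic_term(a1, f, n):
    """n-th term of an arithmetic sequence: a[n] = a[1] + f * (n - 1)."""
    return a1 + f * (n - 1)

def arithmetic_terms(a1, f, count):
    """First `count` terms of the sequence."""
    return [arithmetic_term(a1, f, n) for n in range(1, count + 1)]

print(arithmetic_terms(1, 2, 7))  # [1, 3, 5, 7, 9, 11, 13]
print(arithmetic_term(1, 2, 5))   # 9
```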
Geometric sequence
A number sequence, in which each next term after the first one is created by multiplying the previous term by a set non-zero number, is called a geometric sequence. This fixed number is also referred
to as a common ratio.
The geometric sequence formula is as follows:
a[n] = a × r^(n-1), where a[n] is the nth term, a refers to the scale factor, and r to the common ratio.
i.e. a, ar, ar^2, ar^3, ...
For instance, 1, 2, 4, 8, 16, 32, 64, 128, ... It is clear that in this example, the common ratio, or r, is 2.
Say, we wanted to calculate the eighth term in the sequence using the formula above:
a[8] = a × r^(8-1)
a[8] = 1 × 2^7 = 128
You can also find the sum of the first n terms of a geometric sequence with this formula: S[n] = a × (1 - r^n) / (1 - r), for r ≠ 1.
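Both the geometric term and the standard partial-sum formula S[n] = a(1 - r^n)/(1 - r) can be sketched in Python (function names are my own):

```python
def geometric_term(a, r, n):
    """n-th term of a geometric sequence: a[n] = a * r**(n - 1)."""
    return a * r ** (n - 1)

def geometric_sum(a, r, n):
    """Sum of the first n terms: S[n] = a * (1 - r**n) / (1 - r), r != 1."""
    if r == 1:
        return a * n
    return a * (1 - r ** n) / (1 - r)

print(geometric_term(1, 2, 8))  # 128
print(geometric_sum(1, 2, 8))   # 255.0
```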
Fibonacci sequence
In a Fibonacci sequence, each next term following the first two is a sum of two previous terms. Based on the chosen starting point, the first two terms can be either 1 and 1 or 0 and 1.
Fibonacci numbers appear commonly yet unexpectedly and have numerous applications in mathematics and beyond. They are often used in computer studies, biological settings, and even economics.
The Fibonacci sequence formula is:
a[n] = a[n-1] + a[n-2], where a[n] is the nth number in the sequence.
An example of a Fibonacci sequence is: 0, 1, 1, 2, 3, 5, 8, 13, 21, ...
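The recurrence above can be sketched in a few lines of Python, here using the 0-and-1 starting point:

```python
def fibonacci(count):
    """First `count` Fibonacci numbers, starting from 0 and 1."""
    terms = [0, 1]
    while len(terms) < count:
        terms.append(terms[-1] + terms[-2])  # a[n] = a[n-1] + a[n-2]
    return terms[:count]

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```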
Frequently Asked Questions
Our calculator allows you to determine the numbers of your sequence in an instant. Furthermore, you can use it for any sequence of numbers, be it an arithmetic, geometric, or Fibonacci sequence. We also do
calculations for common sequences, such as prime numbers.
Calculating a sequence is as easy as pie with our free tool! Just follow these three simple steps.
Step one: Insert the first number (a) and the common difference (d) or common ratio (r) in the respective field.
Step two: Hit the "Calculate" button.
Step three: Clear the fields by tapping on the "Reset" button.
A sequence calculator captures and mathematically represents the common relationship (difference, ratio, etc.) between two consecutive terms of the sequence.
To grasp the whole meaning of this calculating tool, you need to understand what a sequence of numbers is. A sequence is an ordered list of numbers or terms governed by a specific pattern. The order of the terms, and whether they increase or decrease, is vital for a sequence.
A sequence is commonly depicted as: a[1], a[2], a[3], ..., a[n]
There are two types of sequences in math:
• A finite sequence, which contains a definite number of terms;
• An infinite sequence, which is an endless set of terms.
A common pattern is the most important thing for any sequence. Such patterns can be found in the simplest things, like the rotation of a clock, as well as in complicated equations.
Finding such a pattern requires time and attention to detail. For an unknown sequence, you have to find the difference between two consecutive elements of the list and check that it holds for all the elements.
But worry not! Our calculator can make this tedious task as easy as a walk in the park. Use it at any time for effortless calculations and save hours of time and tons of energy.
Arithmetic sequences are very common in our day-to-day lives. Stacking household items, arranging seats in a classroom, and finding a leap year all involve an arithmetic sequence.
The arithmetic sequence formula is also frequently used to calculate the terms of said sequence.
For example, we need to find the thirteenth term in the sequence 1, 5, 9, 13, ... Given that the common difference d is 4 and the first term a[1] is 1, we can use the following equation: a[13] = a[1] + d × (13 - 1) = 1 + 4 × 12 = 49.
Whatever you need an arithmetic sequence for, our efficient tool can help you with calculations.
We guarantee the most accurate results and a seamless experience every step of the way.
Ticket Pack Promotion for Portland Trail Blazers
Context: The Trail Blazers were a monopoly in the professional sports market in Portland. Now the Trail Blazers are in a very bad position. Their home arena was taken over by creditors, their performance was in danger of being the worst in NBA history, and attendance numbers were falling (Case, Page 1). Management tried to promote attendance by developing multigame ticket packages. A conjoint analysis technique was used to design the survey and analyze the results. Given the situation, we assume that the new promotion program needs to increase attendance numbers and profit.

Question 1
To judge which attribute is the most important overall in the purchase decision, we calculated the importance of each attribute according to the utility score data (Case, Table 1). The results are shown in Decision Weight Assignment (Appendix, Table 1). The most important attribute is ticket location, which decides 39.4% of the total utility. The second is ticket price, with a 37.7% decision weight. Number of games and promotional item are relatively less important: number of games has an 11.8% decision weight, and promotional item has an 11.1% decision weight.

Question 4
The total attribute combinations number 3*4*4*5 = 240 and are shown in Appendix Table 2. Because "the Blazers were unwilling to allow certain price and seating combinations no matter how well received they were" (Case, Page 5), the combinations including 200-level seats for less than $60 and 300-level midcourt seats for less than $25 can be removed, leaving 240 - 3*1*5 - 3*3*5 = 180 attribute combinations. According to the cost structure (Case, Table 2 & Table 3) and our assumptions, the Trail Blazers should avoid losses, so 27 packages which cause a loss need to be removed as well.
The remaining packages number 180-27 = 153 and are shown in Appendix Table 3. The utility gap between the package with the greatest utility and the package with the 21st greatest utility is 0.53, which is 17% of the total utility of 3.12 (adjusted according to the analysis in the previous paragraph). We think analyzing these is enough to make a decision. By analyzing the top 20 popular packages (Appendix Table 5), we find the following results (Appendix Table 4): 60% of the packages include 6 games, which is management's favorite (Case, Page 6).
But 10-game packages seem not very attractive, given their low appearance rate of 5%. Seats and price are strongly correlated: 70% of the packages have a 300-level seat at midcourt and a ticket price of $25 or $35, which strongly suggests designing packages like these. But 20% of the packages have a 300-level seat elsewhere and a $15 price, which means some customers prefer worse seats at a low price; and 10% of the packages have a 200-level seat at midcourt and a $60 price, which means some customers can afford a high price for good seats.
This information is valuable because it helps us design packages with seats that are not very popular. 75% of the packages have low-value promotional items, which supports our conclusion about the importance of promotional items in Question 1. The top 4 popular packages have much greater utility than the others. All of them have seats at 300-level midcourt, so 300-level midcourt seats should be the mainstream of the packages. Two of them have a $25 price, which gives 0.04917 more utility than the other two packages at $35, but $10 less profit.
Considering profit, we suggest basing the price on $35. If we use a hot dog instead of priority, the utility increases by 0.04917. If we assume a linear relation between price and utility in the range between $35 and $60, we can increase the price by (0.04917/1.66909)*(60-35) = $0.74 and keep the utility unchanged, but our cost for the hot dog is $3.25, so the profit decreases. The same conclusion holds for the other promotional items. In conclusion, it is better to let customers purchase promotional items (except priority) by themselves.
Considering all factors, we suggest that the core package have a $35 price, 6 games, a 300-level seat at midcourt, and priority for home playoff tickets. It has good profit, relatively high utility, and increases potential future attendance. Moreover, to satisfy diverse customers, two other packages can be considered as supplements. The package with 6 games, a $15 price, a 300-level seat behind the baskets, and priority for home playoff tickets helps sell unpopular seats, with 0.29371 utility and $5 profit.
Also, the package with 6 games, a $60 price, a 200-level seat at midcourt, and priority for home playoff tickets helps get more profit ($20), with 0.37785 utility.

Appendix

Table 1: Decision Weight Assignment
Attribute | Minimum | Maximum | Gap | Percentage
Ticket location | -0.73169 | 1.01148 | 1.74317 | 39.4%
Ticket price | -1.00257 | 0.65646 | 1.66903 | 37.7%
Number of games | -0.2764 | 0.24383 | 0.52023 | 11.8%
Promotional item | -0.31786 | 0.17428 | 0.49214 | 11.1%
Total | | | 4.42457 | 100.0%

Table 2: All packages
https://docs.google.com/spreadsheet/ccc?key=0An4XCgbePO-fdE4xeGdJZlVqaEFlaV84dkhGM2d5TWc&hl=en_US

Table 3: All packages restricted by seat cost and profit
https://docs.google.com/spreadsheet/ccc?key=0An4XCgbePO-fdHdaMGl2SlNVWGUxYnA4ZVA5R09HZ0E&hl=en_US

Table 4: Attribute appearance rates in the top 20 packages
Attribute | Option | Number | Percent
Games | 3 | 7 | 35%
Games | 6 | 12 | 60%
Games | 10 | 1 | 5%
Seats | 300 midcourt | 14 | 70%
Seats | 200 midcourt | 2 | 10%
Seats | 300 others | 4 | 20%
Price | $25 or $35 | 14 | 70%
Price | $15 | 4 | 20%
Price | $60 | 2 | 10%
Promotional item | cheap | 15 | 75%
Promotional item | expensive | 5 | 25%

Table 5: Top 20 most popular packages
When an attribute appears in the package, its value is 1; otherwise, its value is 0. The package utility can be calculated as (0.03257)*(3-game) + (0.24383)*(6-game) + (-0.2764)*(10-game) + (0.65646)*($15) + (0.22011)*($25) + (0.126)*($35) + (-1.00257)*($60) + (-0.73169)*(300 behind) + (-0.43716)*(300 corner) + (0.15736)*(300 midcourt) + (1.01148)*(200 midcourt) + (0.12511)*(priority) + (0.17428)*(hot dog) + (0.00158)*(apparel) + (-0.31786)*(collectible) + (0.01689)*(gift). The package profit can be calculated as (15)*($15) + (25)*($25) + (35)*($35) + (60)*($60) - (10)*(300 behind) - (12)*(300 corner)
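The utility formula above is a simple weighted sum of part-worths, which can be sketched as follows. The dictionary keys and function name are my own labels for the attributes; the part-worth values are the ones listed in the formula.

```python
# Part-worth utilities taken from the essay's utility formula (Case, Table 1 data).
UTILS = {
    "3-game": 0.03257, "6-game": 0.24383, "10-game": -0.2764,
    "$15": 0.65646, "$25": 0.22011, "$35": 0.126, "$60": -1.00257,
    "300 behind": -0.73169, "300 corner": -0.43716,
    "300 midcourt": 0.15736, "200 midcourt": 1.01148,
    "priority": 0.12511, "hot dog": 0.17428, "apparel": 0.00158,
    "collectible": -0.31786, "gift": 0.01689,
}

def package_utility(attributes):
    # Sum the part-worths of the attributes present in the package.
    return sum(UTILS[a] for a in attributes)

# The first supplemental package from the conclusion: 6 games, $15 price,
# 300-level seat behind the baskets, playoff-ticket priority.
u = package_utility(["6-game", "$15", "300 behind", "priority"])
print(round(u, 5))  # -> 0.29371
```

This reproduces the 0.29371 utility the essay reports for that package.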
Estimated Multiple Regression Equation MCQ Quiz Questions and Answers
MBA Business Statistics Practice Tests
MCQ 1:
In the exponential model, if independent variable 'x' increases by 1%, y will increase by
1. 0.01
2. 1 unit
3. β2 %
4. β2 units
MCQ 2:
If R² equals 1, the resulting value of F is equal to
1. 0
2. 1
3. -1
4. Undefined
MCQ 3:
In the exponential model equation, the computed slope coefficient is equal to
1. β1 *y/x
2. β2 *y/x
3. β1 *x/y
4. β2 *x/y
MCQ 4:
The output generated by adjusted R² could be
1. Positive
2. Negative
3. Zero
4. Positive/Negative
MCQ 5:
In the lin-log model equation, the computed elasticity is equal to
1. β1 *(1/y)
2. β2 * (1/y)
3. β1 *y
4. β2 *y
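The "Undefined" option in MCQ 2 can be checked against the usual overall F statistic for a regression with k regressors and n observations. This formula is not stated in the quiz itself; it is the standard definition:

```latex
F = \frac{R^2 / k}{\left(1 - R^2\right)/\left(n - k - 1\right)}
```

As R² approaches 1, the denominator (1 − R²)/(n − k − 1) goes to 0, so F involves division by zero and is undefined.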
Eureka Math Grade 7 Module 6 Lesson 15 Answer Key
Eureka Math Grade 7 Module 6 Lesson 15 Example Answer Key
Example 1.
A triangular fence with two equal angles, ∠S = ∠T, is used to enclose some sheep. A fence is constructed inside the triangle that exactly cuts the other angle into two equal angles: ∠SRW = ∠TRW. Show
that the gates, represented by \(\overline{S W}\) and \(\overline{W T}\), are the same width.
There is a correspondence △SRW ↔ △TRW that matches two pairs of angles of equal measurement, ∠S = ∠T and ∠SRW = ∠TRW, and one pair of sides of equal length shared, side \(\overline{R W}\). The
triangles satisfy the two angles and side opposite a given angle condition. From the correspondence, we can conclude that SW = WT, or that the gates are of equal width.
Example 2.
In △ABC, AC = BC, and △ABC ↔ △B’ A’ C’. John says that the triangle correspondence matches two sides and the included angle and shows that ∠A = ∠B’. Is John correct?
We are told that AC = BC. The correspondence △ABC ↔ △B’A’C’ tells us that BC ↔ A’C’, CA ↔ C’B’, and ∠C↔∠C’, which means △ABC is identical to △B’A’C’ by the two sides and included angle condition.
From the correspondence, we can conclude that ∠A = ∠B’; therefore, John is correct.
Eureka Math Grade 7 Module 6 Lesson 15 Exercise Answer Key
Exercise 1.
Mary puts the center of her compass at the vertex O of the angle and locates points A and B on the sides of the angle. Next, she centers her compass at each of A and B to locate point C. Finally, she
constructs the ray \(\overrightarrow{O C}\). Explain why ∠BOC = ∠AOC.
Since Mary uses one compass adjustment to determine points A and B, OA = OB. Mary also uses the same compass adjustment from B and A to find point C; this means BC = AC. Side \(\overline{O C}\)
is common to both the triangles,
△OBC and △OAC. Therefore, there is a correspondence △OBC ↔ △OAC that matches three pairs of equal sides, and the triangles are identical by the three sides condition. From the correspondence, we
conclude that
∠BOC = ∠AOC.
Exercise 2.
Quadrilateral ACBD is a model of a kite. The diagonals \(\overline{A B}\) and \(\overline{C D}\) represent the sticks that help keep the kite rigid.
a. John says that ∠ACD = ∠BCD. Can you use identical triangles to show that John is correct?
b. Jill says that the two sticks are perpendicular to each other. Use the fact that ∠ACD = ∠BCD and what you know about identical triangles to show ∠AEC = 90°.
c. John says that Jill’s triangle correspondence that shows the sticks are perpendicular to each other also shows that the sticks cross at the midpoint of the horizontal stick. Is John correct?
a. From the diagram, we see that AC = BC, and AD = BD. \(\overline{C D}\) is a common side to both triangles, △ACD and △BCD. There is a correspondence △ACD ↔ △BCD that matches three pairs of equal
sides; the two triangles are identical by the three sides condition. From the correspondence, we conclude that ∠ACD = ∠BCD. John is correct.
b. Since we know that AC = BC and ∠ACD = ∠BCD, and that △ACE and △BCE share a common side, \(\overline{C E}\), we can find a correspondence that matches two pairs of equal sides and a pair of equal,
included angles. The triangles are identical by the two sides and included angle condition. We can then conclude that ∠AEC = ∠BEC. Since both angles are adjacent to each other on a straight line, we
also know their measures must sum to 180°. We can then conclude that each angle measures 90°.
c. Since we have established that △ACE and △BCE are identical, we know that AE = BE. This means that E is the midpoint of \(\overline{A B}\), by definition.
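The conclusions of Exercise 2 can be sanity-checked numerically with one concrete kite. The coordinates below are my own illustrative choice, not part of the lesson:

```python
import math

# A concrete kite ACBD: AC = BC and AD = BD by construction.
A, B, C, D = (-3.0, 0.0), (3.0, 0.0), (0.0, 4.0), (0.0, -2.0)

def dist(p, q):
    # Euclidean distance between two points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

assert dist(A, C) == dist(B, C)  # both equal 5
assert dist(A, D) == dist(B, D)  # both equal sqrt(13)

# The diagonals AB and CD are perpendicular: their direction
# vectors have zero dot product.
ab = (B[0] - A[0], B[1] - A[1])
cd = (D[0] - C[0], D[1] - C[1])
print(ab[0] * cd[0] + ab[1] * cd[1])  # -> 0.0

# They cross at E = (0, 0), which is the midpoint of AB.
midpoint_AB = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
print(midpoint_AB)  # -> (0.0, 0.0)
```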
Exercise 3.
In △ABC, ∠A = ∠B, and △ABC ↔ △B’A’C’. Jill says that the triangle correspondence matches two angles and the included side and shows that AC = B’C’. Is Jill correct?
We are told that ∠A = ∠B. The correspondence △ABC ↔ △B’A’C’ tells us that ∠A = ∠B’, ∠B = ∠A’, and AB = B’A’, which means △ABC is identical to △B’A’C’ by the two angles and included side condition.
From the correspondence, we can conclude that AC = B’C’; therefore, Jill is correct.
Exercise 4.
Right triangular corner flags are used to mark a soccer field. The vinyl flags have a base of 40 cm and a height of 14 cm.
a. Mary says that the two flags can be obtained by cutting a rectangle that is 40 cm×14 cm on the diagonal. Will that create two identical flags? Explain.
b. Will measures the two non-right angles on a flag and adds the measurements together. Can you explain, without measuring the angles, why his answer is 90°?
a. If the flag is to be cut from a rectangle, both triangles will have a side of length 40 cm, a length of 14 cm, and a right angle. There is a correspondence that matches two pairs of equal sides
and an included pair of equal angles to the corner flag; the two triangles are identical to the corner flag as well as to each other.
b. The two non-right angles of the flags are adjacent angles that together form one angle of the four angles of the rectangle. We know that a rectangle has four right angles, so it must be that the
two non-right angles of the flag together sum to 90°.
Eureka Math Grade 7 Module 6 Lesson 15 Problem Set Answer Key
Question 1.
Jack is asked to cut a cake into 8 equal pieces. He first cuts it into equal fourths in the shape of rectangles, and then he cuts each rectangle along a diagonal.
Did he cut the cake into 8 equal pieces? Explain.
Yes, Jack cut the cake into 8 equal pieces. Since the first series of cuts divided the cake into equal fourths in the shape of rectangles, we know that the opposite sides of the rectangles are equal
in length; that means all 8 triangles have two sides that are equal in length to each other. Each of the triangular pieces also has one right angle because we know that rectangles have four right
angles. Therefore, there is a correspondence between all 8 triangles that matches two pairs of equal sides and an equal, 90° non-included angle, determining 8 identical pieces of cake.
Question 2.
The bridge below, which crosses a river, is built out of two triangular supports. The point M lies on \(\overline{B C}\) . The beams represented by \(\overline{A M}\) and \(\overline{D M}\) are equal
in length, and the beams represented by \(\overline{A B}\) and \(\overline{D C}\) are equal in length. If the supports were constructed so that ∠A and ∠D are equal in measurement, is point M the
midpoint of \(\overline{B C}\)? Explain.
Yes, M is the midpoint of \(\overline{B C}\). The triangles are identical by the two sides and included angle condition. The correspondence △ABM ↔ △DCM matches two pairs of equal sides and one pair
of included equal angles. Since the triangles are identical, we can use the correspondence to conclude that BM = CM, which makes M the midpoint, by definition.
Eureka Math Grade 7 Module 6 Lesson 15 Exit Ticket Answer Key
Question 1.
Alice is cutting wrapping paper to size to fit a package. How should she cut the rectangular paper into two triangles to ensure that each piece of wrapping paper is the same? Use your knowledge of
conditions that determine unique triangles to justify that the pieces resulting from the cut are the same.
Alice should cut along the diagonal of rectangle ABCD. Since ABCD is a rectangle, the opposite sides will be equal in length, or AB = DC and AD = BC. A rectangle also has four right angles, which
means a cut along the diagonal will result in each triangle with one 90° angle. The correspondence △ABD ↔ △CDB matches two equal pairs of sides and an equal, included pair of angles; the triangles
are identical by the two sides and included angle condition.
Number Theory
The 4000th digit is 7.
learnmgcat Jul 3, 2024
Find the $4000$th digit following the decimal point in the expansion of $\frac{1}{17}$.
Be sure to include complete explanations with your answer, using complete sentences. Imagine you were going to show your solution to a classmate, and try to write your solution so that they could
understand it without doing any extra work.
Rangcr897 Jul 3, 2024
First, let's note that 1/17 is a repeating decimal.
We have that \(1/17 = 0.\overline{0588235294117647}\)
There are 16 repeating digits.
Thus, we compute 4000/16 = 250, with remainder 0.
Since 4000 is exactly divisible by 16, the 4000th digit is the last digit of the repeating block, which is 7.
So 7 is our answer.
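The long-division argument above can be verified with a short script (the function name is my own):

```python
def nth_decimal_digit(numerator, denominator, n):
    # Return the n-th digit after the decimal point of numerator/denominator
    # (n >= 1), by carrying the long-division remainder forward n times.
    remainder = numerator % denominator
    digit = 0
    for _ in range(n):
        remainder *= 10
        digit = remainder // denominator
        remainder %= denominator
    return digit

print(nth_decimal_digit(1, 17, 4000))  # -> 7
```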
Thanks! :)
NotThatSmart Jul 3, 2024
CAT 2014 – How I Attempted New Pattern Mock CAT 3
Over the last few weeks I have been travelling and hence have not been able to contribute to the video analysis of the new CAT '14 pattern Mock CATs. While travelling, however, I was able to find time to solve the papers and to write down how I attempted Mock CAT 3. With positive feedback on my previous effort (on the new pattern Mock CAT 1), let me share my take on the new pattern Mock CAT 3 despite the delay in completing this article. My apologies for the delay; I will ensure that the subsequent articles are not delayed.
As usual I started the paper by attempting VALR section in 80 minutes and the balance 90 minutes were used for QADI.
VALR: Summary
• Round1: I went from Q51-100 sequentially and attempted the questions of Verbal Logic and English Usage. While going through these questions I also took a note of the RC passages and LR sets and
tried to identify which of these should be attempted. This took me about 30 minutes in which I attempted 16 out of 20 questions.
• Round 2: Now I moved to RC, in R1 I had identified that I was not comfortable with the first passage and hence would not attempt it. I also identified that I am comfortable with the subject of
the last three passages and would like to attempt them in the sequence given below:
□ Q78-81 Passage on Business
□ Q67-70 Presidents of USA
□ Q82-85 Passage on happiness
I attempted 10 out of 12 questions from the three passages in a little less than 25 minutes.
	• Round 3: As with RC, I evaluated the LR sets in R1. I felt that the LR set on room allocation (Q74-77) and the LR set on circular arrangement (Q86-88) were likely to be time consuming due to the number of people and the amount of data; however, between the two I was more inclined towards room allocation, as it is a bit mathematical. I am comfortable with selection and mathematical LR sets and hence was sure that I would attempt the remaining two LR sets (Q63-66 and Q94-96). I first attempted the set on selection (Q63-66); in this I did not mark the answer to Q66, as I got confused and left it after wasting a couple of minutes. The mathematical set (Q94-96) was extremely easy and I was able to attempt the three questions comfortably. These two sets took me less than 15 minutes, and then I moved on to the set on room allocation. I made mistakes in reading the data, however, and was unable to solve it in the first go; I solved it on the second attempt but spent around 10 minutes on this data set. In LR, my attempt was hence 10 out of 11 questions in approximately 25 minutes.
• Total attempt in VALR was 36 questions.
QADI Summary
	• Round 1: I went sequentially from Q1-50 and attempted 11 questions that I was sure of solving in about a minute and a half. I also identified the questions which could be considered for R2, and noted that all DI sets are attemptable but the first DI set could be a bit complicated. I also identified that I may not be able to solve questions 16 & 20. The total time spent in Round 1 was around 30 minutes.
• Round 2: I started this Round with the 5 DS questions in which I attempted all but Q42 which I left after spending some time and then attempted three DI sets (Share price, Institute forms and
University professors) leading to 15 attempts in around 30 minutes.
• In remaining 30 minutes I went back to the QA questions that had been marked for R2 and attempted 11 questions but did not mark an answer to one question which I left after solving it partly.
• Total attempt in QADI was 36.
VALR Round 1: 30 minutes 16 attempts
Q51. Para Completion to be attempted by elimination of choices:
Choice (a): incorrect as it is out of scope. Religion and science are not mentioned in the passage.
Choice (b): could be correct on account of continuity and it provides a closure to the passage.
Choice (c): out of scope of the passage, this choice is introducing a new idea or thought.
Choice (d): out of scope on account of Darwinian evolution.
Correct answer: Choice (b)
Q52. Sentence correction, to be attempted but since this is my weak area I will not mark the answer if not sure.
Answer has to be choice (b) or (c) as it should be “testimony to” and not “testimony of” but I could not eliminate between these two choices and hence I did not mark an answer.
Correct answer: Not marked
Q53. Para Completion to be attempted by elimination of choices:
The passage contrasts the current and the ancient point of view on erotic life. The last line gives the point of view of the ancients and the correct answer choice should give our point of view.
Choice (a): incorrect on account t of continuity as it is not giving a contrast.
Choice (b): incorrect on account of scope (morality and lack of instinct)
Choice (c): could be correct on account of continuity as it is giving a contrasting view from the one attributed to the ancients.
Choice (d): incorrect on account of scope, Freud’s opinion on instinct is not the issue.
Correct answer: Choice (c)
Q54. Para Completion to be attempted by elimination of choices:
The passage describes what happens when a person is possessed by a spirit and the correct answer choice should take this forward.
Choice (a): incorrect as it is out of scope. Military tactics of Polynesians is irrelevant.
Choice (b): incorrect on account of scale and scope has also been reduced to a specific case.
Choice (c): can be correct on account of continuity.
Choice (d): incorrect on account of continuity.
Correct answer: Choice (c)
Q55. Para Jumble, to be attempted.
Statement D has to come before both A and E hence choice (a) and (d) can be eliminated.
Between the remaining choices went for choice (b) since statement C is a better opening statement as it introduces the idea and D takes off from there. Statement C gives the conventional definition
of labour and statement D mentions that its scope has been reduced.
Correct answer: Choice (b)
Q56. Para Jumble, to be attempted.
DC has to be a mandatory pair. While statement D talks about the artist statement C talks about the critic and then we have further explanation of criticism.
Correct answer: Choice (b)
Q57. Data Sufficiency, to be attempted
Statement I implies that today is Wednesday or Sunday or Monday and hence is not sufficient to give the answer.
Statement II implies that today is Friday or Tuesday or Wednesday and hence is not sufficient to give the answer.
Taking both statements together, we can say that today is Wednesday.
Correct answer: Choice (c)
Q58-61. RC passage, to be attempted in Round 2 but noted the following on the rough sheet:
Q58-61, Long passage on “Freedom”, readable and can be considered.
Q62. Sentence correction, to be attempted
Statement D is incorrect; it should be "A solution to this problem…."
Hence choice (b) is incorrect.
I went through the other statements but was not able to identify any error in statements A, B and C, hence I left the question without marking an answer.
Correct answer: Not marked
Q63-66. LR set, to be attempted in Round 3 but noted the following in the rough sheet:
Q63-66 Set on Selection process, appears do-able.
Q67-70. RC passage, to be attempted in Round 2 but noted the following on the rough sheet:
Q67-70, Okay length, passage on American Presidents, preferable to the first RC passage.
Q71-73. Fill in the blanks, to be attempted
Q71. Second blank has to be “mediocre”
Correct Answer: Choice (a)
Q72. Second blank has to be “quotidian”
Correct Answer: Choice (c)
Q73. First blank has to be “disapproval” and second “suspected”
Correct Answer: Choice (b)
Q74-77. LR set, to be attempted in Round 3 but noted the following in the rough sheet:
Q74-77 Arrangement Set, 12 people in 6 x 2 rooms and 6 statements, do-able but the first set is preferable.
Q78-81. RC passage, to be attempted in R2. In the rough sheet wrote down:
Q78-81, Okay length, passage on business, to be attempted
Q82-85. RC passage, to be attempted in R2. In the rough sheet wrote down:
Q82-85, Okay length, passage on happiness, can be attempted.
Q86-88. LR set, to be attempted in R3. In the rough sheet wrote down:
Q86-88 Circular arrangement, 8 people 2 colours each, 7 statements, could be difficult.
Q89. Critical reasoning, attempted by elimination of choices.
Choice (a): could be correct. Drop in sales even after the company has been cleared of all charges indicates that the buyer is not fully convinced.
Choice (b): incorrect as “manufacturing” is out of scope.
Choice (c): incorrect, if the cheese of company X was always more expensive than its competitors then it should not have impacted the sales now.
Choice (d): incorrect as “bribes” is out of scope
Correct Answer: Choice (a)
Q90. Critical reasoning, attempted by elimination of choices.
To put one’s foot in the mouth is to say something that is illogical or insensitive or imprudent.
Choice (a): incorrect as the statement is not incoherent (disjointed or rambling).
Choice (b): incorrect, as the focus should be on the logic of the remark and not on the actress. Choice (c): correct because if people believe that depth and lightness are opposites and cannot be
associated then the actress’ statement is illogical.
Correct Answer: Choice (c)
Q91. Vocabulary, attempted by elimination of choices.
Statement I: Salver (a tray or plate) is the correct word, hence choices (a) and (d) are incorrect.
Statement II: not sure of the answer hence moved on to the next statement.
Statement III: Acclimation is the right word hence choice (c) is also incorrect and hence the answer should be choice (a) but still checked the remaining statements.
Statement IV: Timber refers to wood while timbre pertains to the quality of sound.
Statement V: Canopy of trees.
Correct Answer: Choice (c)
Q92. Vocabulary, attempted by elimination of choices.
Statement 1: not sure of the answer hence moved on to the next statement.
Statement II: Impassable is the correct word hence choice (d) is incorrect.
Statement III: Levy is a tax and hence cannot be the right word hence choice (b) is incorrect.
Statement IV: Millinery which refers to hats is the right word hence choice (a) is incorrect and Choice (c) should be the answer.
Statement V: Not sure of this one but it does not matter as we have already eliminated 3 options.
Correct Answer: Choice (c)
Q93. Para jumble – odd statement, to be attempted.
"He" in statements (a) and (c) refers to Mark Blumberg, who is named in statement (b), and what is mentioned in statement (b) is explained and taken forward in statements (a) and (c); thus statement (d) does
not fit in.
Correct Answer: Choice (d)
Q94-96. LR set, to be attempted in R3. In the rough sheet wrote down:
Q94-96 Mathematical set, to be attempted
Q97. Summary of a passage, to be attempted by elimination of choices.
Choice (a): could be correct, as the passage is on water pollution in Kanpur, but it increases the scale of the problem by adding the word "severe".
Choice (b): Correct as it is within the scope and scale of the passage.
Choice (c): incorrect as “state government of Uttar Pradesh” is out of scope of the passage.
Choice (d): incorrect as the purity of the recycled water is out of scope.
Correct Answer: Choice (b)
Q98. Para jumble – odd statement, to be attempted.
Statement (a) gives a concept and statement (b) tells us that this is called “moksha”. Statement (d) is also connected as it takes the same idea forward.
Correct Answer: Choice (c)
Q99. Word Usage, to be attempted
I was able to identify that Choice (a) and (d) are correct in terms of usage of the word “fish” but could not eliminate any of the other two choices hence did not mark an answer.
Correct answer: Not marked
Q100. Word Usage, to be attempted
I was able to identify that Choice (a) and (d) are correct in terms of usage of the word “close” but was not sure about the other two choices hence did not mark an answer.
Correct answer: Not marked
Summary of VALR Round 1: In a little less than 30 minutes of R1, out of the 20 questions, I
• Attempted 16 questions – Q51, 53, 54, 55, 56, 57, 71, 72, 73, 89, 90, 91, 92, 93, 97 & 98.
• Did not mark an answer to 4 questions – Q52, 62, 99 & 100.
• Identified that in R2 I would attempt the following RC passages::
□ Q78-81 Passage on Business
□ Q67-70 Presidents of USA
□ Q82-85 Passage on happiness
• Identified that in R3 I would attempt the following LR sets::
□ Q63-66 Set on selection
□ Q94-96 Mathematical set
□ Q74-77 Room allocation set
	• In case time is available, I would then go for the remaining RC passage followed by the LR set. The RC passage is preferred over the LR set since an LR set usually needs to be solved completely before any question can be answered, but in RC you can answer questions even without reading the complete passage.
VALR Round 2: 25 minutes 10 attempts
Q78-81. Passage on business
Q78. Went for elimination of choices
Choice (a): Correct as it can be inferred from the first couple of paragraphs of the passage.
Choice (b): Incorrect on account of the increased scale, thus choice (d) is also incorrect.
Choice (c): Incorrect on account of scope, “regulations” have not been discussed in the passage.
Correct Answer: Choice (a)
Q79. Choice (c) can be inferred directly from the sixth paragraph of the passage.
Q80. Choice (a): the word “creative” is used to describe an unethical practice, hence it is used sarcastically.
Correct Answer: Choice (a)
Q81. Choices (a) and (d) were easy to eliminate but got stuck between the remaining two choices and left the question without marking the answer.
I should have eliminated choice (c) on account of it being very narrow and specific.
Correct answer: Not marked
Q67-70. Passage on Presidents of USA
Q67. Refer to the first paragraph, verbal abuse is not mentioned.
Correct Answer: Choice (d)
Q68. Direct from the passage, all three statements are correct.
Correct Answer: Choice (c)
Q69. Went for elimination of choices
Choice (a) is mentioned in the fourth paragraph.
Choice (b) is mentioned in the fifth paragraph.
Choice (c) is mentioned in the fourth paragraph.
Choice (d) not mentioned in the passage and also cannot be inferred from the passage.
Correct Answer: Choice (d)
Q70. Choice (b) can be inferred from the last paragraph.
Correct Answer: Choice (b)
Q82-85. Found the passage easy to read and did not have any problem understanding it.
Q82. Choice (c) can be inferred from the first paragraph of the passage.
Correct Answer: Choice (c)
Q83. Choices (a), (b) and (c) are mentioned in the passage but not choice (d): intentional activity, not “unintentional activity”, is mentioned.
Correct Answer: Choice (d)
Q84. None of the choices can support the theory.
Correct Answer: Choice (d)
Q85. Eliminated choice (a) and (d) but was confused between the remaining two choices and hence did not mark an answer to this question.
Correct Answer: Not marked
Summary of VALR Round 2: In the approximately 25 minutes I attempted three RC passages marking the answers to 10 out of the 12 questions and left 2 unmarked.
VALR Round 3: 25 minutes 11 attempts
Q63-66. Made a table separating the men and women as per their profession and then eliminated choices in the first three questions (Q63, 64 & 65) and marked the correct answers. However, in Q66 I got confused, tried it twice (time wasted) and then left it unanswered. I spent about 8-9 minutes on this set.
Q63, 64 & 65: Marked the correct answer
Q66: Not answered.
Q94-96. Simple set; Q94 told me that multiple cases are possible. Made cases for the possible values of A, B, C & D and solved the three questions correctly. The time spent on this set was approximately 5 minutes.
Q94, 95 & 96: Marked the correct answer
Q74-77. This set was time consuming: it took about 6-7 minutes to make the table itself, and along with the questions the total time was almost 12 minutes. I was, however, able to get the answers to all four questions comfortably.
Q74,75, 76 & 77: Marked the correct answer
Summary of VALR Round 3: In the approximately 25 minutes I attempted three LR sets and marked the answers to 10 out of the 11 questions and left 1 unmarked.
QADI Round 1: 11 attempts 30 minutes
Q1. R2 – Functions, it is lengthy to read and can be time consuming.
Q2. R1 – Numbers, have done similar questions earlier.
The second rightmost non-zero digit of 207^47 is the tens digit of 207^47, and this can be obtained by checking the cyclicity of 7. The powers of a number with unit digit 7 have the last two digits given below:
207^1 has the last two digits 07
207^2 has the last two digits 07 x 07 = 49
207^3 has the last two digits 07 x 49 = 43
207^4 has the last two digits 07 x 43 = 01
207^5 has the last two digits 07 x 01 = 07 = last two digits of 207^1
The tens digit of these numbers can be 0 or 4, and since 0 is not available in the choices, the correct answer has to be 4.
Correct Answer: Choice (c) 4
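As a quick sanity check (not something you would do in the exam, of course), the last-two-digits cycle is just modular exponentiation, which Python's built-in `pow` does directly:

```python
# Last two digits of 207^n are 207^n mod 100; they cycle with period 4.
cycle = [pow(207, n, 100) for n in range(1, 6)]
print(cycle)            # [7, 49, 43, 1, 7]

# 47 mod 4 == 3, so 207^47 shares its last two digits with 207^3.
tens_digit = pow(207, 47, 100) // 10
print(tens_digit)       # 4
```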
Q3. R2 – Numbers, likely to be time consuming
Q4. R2 – Numbers and P&C, could be time consuming.
Q5. R1 – Numbers
Choice (a) is 1.414
Choice (b) is 1.442
4^(1/4) = 2^(1/2) = 1.414
Comparing 4^(1/4) and 5^(1/5) by raising both to the 20th power we get 4^5 and 5^4, which are 1024 and 625 respectively. Thus from 3^(1/3) onwards the values keep on decreasing.
Correct Answer: Choice (b) 3^(1/3)
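If you ever want to convince yourself offline, a one-liner confirms that n^(1/n) peaks at n = 3 and decreases afterwards:

```python
# n^(1/n) for small n: the maximum is at n = 3.
values = {n: n ** (1 / n) for n in range(2, 7)}
best = max(values, key=values.get)
print(best, round(values[best], 3))   # 3 1.442
```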
Q6. R1 – Mixtures. Lengthy to read but am usually able to solve these in short time.
Since Sanjay is selling milk at 10% over his cost price and has an overall profit of 43%, he has a profit of 30% on account of the addition of “x” liters of water to pure milk, because two successive increments of 30% and 10% are equal to a single increment of 43%.
Since the profit on account of the addition of water to pure milk is 30%, the water should be 30% of the pure milk.
Since pure milk and “x” liters of water add up to 52 liters, 130% of the milk = 52 liters
Or pure milk should be 52/1.3 = 40 liters and water should be 52 – 40 = 12 liters
If Sanjay were selling the mixture at cost price after adding 12 liters of water to 60 liters of pure milk, then his profit would be the same as the percentage of water added to the milk = 12/60 = 20%
Correct Answer: Choice (b) 20%
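The multiplier logic can be checked numerically (the 60-litre figure in the second scenario is taken from the question as described in the walkthrough):

```python
# A 10% markup and an x% watering-down multiply to a 43% overall profit,
# so the watering-down multiplier is 1.43 / 1.10 = 1.30.
water_fraction = 1.43 / 1.10 - 1     # 0.30 -> water is 30% of the milk
milk = 52 / (1 + water_fraction)     # pure milk in the 52-litre mixture
water = 52 - milk
# Second scenario: 12 litres of water in 60 litres of pure milk, sold at cost.
profit_pct = water / 60 * 100
print(round(milk), round(water), round(profit_pct))   # 40 12 20
```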
Q7. R1 – Time and Work, used SQC technique of assuming the number of men and women.
Assume that initially 2 men and 2 women can complete the piece of work in 120 days, and that a man does “m” units of work per day while a woman does “w” units of work per day.
The amount of work is 120 units
Thus in 1 day the amount of work completed will be (2m + 2w) units
• 120 x (2m + 2w) = 120
• m + w = 0.5
If one woman in the group is replaced by a man then there will be 3 men + 1 woman and as per the question they will finish the work in 96 days
• 96 x (3m + w) = 120
• 3m + w = 120/96 = 5/4 = 1.25
Solving the above two equations we get m = 3/8 units per day and w = 1/8 units per day
As per the question the two women in the group are replaced by men
• the amount of work done per day by 4 men = 4 x (3/8) = 1.5 units
Amount of work is 120 units hence the number of days required by 4 men to complete the work will be 120/1.5 = 80 days
Correct Answer: Choice (b) 80 days
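The pair of equations above can be solved exactly with fractions, a quick way to double-check this kind of work-rate arithmetic:

```python
from fractions import Fraction

work = 120                       # total work, in units
sum_rates = Fraction(1, 2)       # m + w, from 120(2m + 2w) = 120
mixed_rates = Fraction(5, 4)     # 3m + w, from 96(3m + w) = 120
m = (mixed_rates - sum_rates) / 2    # subtract the equations: 2m = 3/4
w = sum_rates - m
days = work / (4 * m)            # four men working together
print(m, w, days)                # 3/8 1/8 80
```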
Q8. R1 – Ratios
Ratio of number of coins with A to C = 1 : 3
Ratio of number of coins with C to B = 2 : 7
• Ratio of number of coins with A to B to C = 2 : 21 : 6
• Number of coins with A, B and C are 2x, 21x and 6x respectively
• Total number of coins should be a multiple of 2 + 21 + 6 = 29
• Choice (b) 54 is incorrect but the other three are multiples of 29 and hence cannot be eliminated.
• As per the question, 21x = 18 + 6 (2x) or x =2
• Total number of coins between A, B and C = 29x = 58
Correct Answer: Choice (d) 58
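The condition 21x = 18 + 6(2x) can be brute-forced to confirm both x and the total:

```python
# A : B : C = 2 : 21 : 6, and B has 18 coins more than six times A's count.
x = next(x for x in range(1, 100) if 21 * x == 18 + 6 * (2 * x))
total = (2 + 21 + 6) * x
print(x, total)   # 2 58
```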
Q9. R1 – Mixtures
Vodka in the original mixture = 32 liters in a total of 50 liters = 64%
Step 1: Replace 10 liters of the 50 liters of cocktail, i.e. replace 20% of the mixture
• percentage of vodka in the mixture now = 80% of 64% = 51.2%
Step 2: Replace 20 liters of the 50 liters of cocktail, i.e. replace 40% of the mixture
• percentage of vodka in the mixture now = 60% of 51.2% = 30.72%
Correct Answer: Choice (d) 30.72%
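Each replacement keeps a fixed fraction of the vodka, which makes the check a two-line multiplication:

```python
# Replacing a fraction f of the cocktail keeps (1 - f) of the vodka.
vodka = 32 / 50                # 64% vodka initially
vodka *= 1 - 10 / 50           # replace 10 of 50 litres: keep 80%
vodka *= 1 - 20 / 50           # replace 20 of 50 litres: keep 60%
print(round(vodka * 100, 2))   # 30.72
```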
Q10. R2 – TSD, these questions can usually be solved by ratios but I take time in understanding the question, hence an R2 question.
Q11-13. R2 DI set.
Noted on the rough sheet under DI:
Q11–13: Two tables on movies. Easy numbers but likely to be high on calculations. Attemptable.
Q14. R2 – Geometry
Q15. R1 – Inequalities, used SQC technique to get the answer.
As per the equation the sum of roots is 9 and the product of roots is p
As per the question the difference between the roots is less than 7
• thus each root ≥ 1 and each root ≤ 8
• the product of roots = p > 8, thus choices (a) and (d) are incorrect
The minimum difference between the roots will be when each root = 9/2
• the product of roots = p ≤ (9/2)^2 = 81/4, thus choice (b) is incorrect
Correct Answer: Choice (c)
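Assuming the quadratic is x^2 - 9x + p = 0 (sum of roots 9, product p, as used above), a numeric sweep confirms both bounds on p:

```python
# Real roots with difference < 7 need 0 <= 81 - 4p < 49, i.e. 8 < p <= 81/4.
valid = [p / 100 for p in range(1, 3000) if 0 <= 81 - 4 * (p / 100) < 49]
print(min(valid), max(valid))   # 8.01 20.25
```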
Q16. R3 – Time and Work
Read the question but could not understand hence decided not to attempt.
Q17. R1 – Algebra, used SQC technique of substitution of numbers.
Checked the choices for various values of P and Q; since P + Q ≤ 6, took P and Q to be 5 and 1 respectively. Choice (c) does not hold and hence is the correct answer.
Correct Answer: Choice (c)
Q18. R1 –Functions
g(6) = g(8-2) = -16/3
2f(x+3) = 3x/5, replacing x with x-3 we get,
2f(x) = 3(x-3)/5
f(x) = 3(x-3)/10
f(g(6)) = f(-16/3) = -25/10 = -5/2
Correct Answer: Choice (a) -5/2
Q19. R2 – Averages, could require calculation.
Q20. R3 – Algebra, likely to be calculation intensive and time consuming.
Q21. R2 – Algebra, may need to make cases.
Q22. R2 – Coordinate geometry, could be calculation intensive.
Q23. R1 – Cyclicity, have done similar questions earlier
Highest power of 2 that will completely divide 54! = 27 + 13 + 6 + 3 + 1 = 50
• highest power of 2 that will completely divide (54!)^20 = 50 x 20 = 1000
• a^3 = 1000, thus a = 10
Correct Answer: Choice (a) 10
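The exponent sum 27 + 13 + 6 + 3 + 1 is an instance of Legendre's formula; a short helper confirms it:

```python
def prime_exponent_in_factorial(n, p):
    """Exponent of prime p in n! (Legendre's formula: sum of n // p^k)."""
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total

e = prime_exponent_in_factorial(54, 2)
print(e, e * 20)   # 50 1000 -> a^3 = 1000, so a = 10
```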
Q24. R2 – Set theory, long question
Q25-27. R2 – DI set
Noted on the rough sheet under DI:
Q25–27 Share price table Calculations likely but easy numbers To be attempted
Q28. R2 – Geometry
Q29. R1 – Mensuration, used SQC technique of making cases.
Sum of three sides of the rectangle =100
If the sides are 25, 50 and 25 then area = 25 x 50 = 1250
• Choice (a), (c) and (d) are incorrect.
Correct answer: Choice (b) 1250
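Assuming the question fences three sides of a rectangle (two sides of length x and one of 100 - 2x, matching the 25, 50, 25 case above), a brute-force sweep confirms that 1250 is the maximum area:

```python
# Rectangle fenced on three sides with 100 units of fencing: area = x(100 - 2x).
best_area, best_x = max((x * (100 - 2 * x), x) for x in range(1, 50))
print(best_area, best_x)   # 1250 25
```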
Q30. R2 – will need to make cases.
Q31-32. R2 – Algebra, new questions type may take time to understand.
Q33. R2 – Mensuration, will need to make cases
Q34-37. R2 DI set.
Noted on the rough sheet under DI:
Q 34-37 Pie Chart & Table Simple data & easy numbers To be attempted
Q38. R2 – Mensuration
Q39. R2 – Permutations & Combinations
Q40. R2 – Probability
Q41-45. R2 – Data Sufficiency
Q46. R2 – Permutations & Combinations
Q47-50. R2 DI set.
Noted on the rough sheet under DI:
Q 47-50 Bar chart with simple numbers Calculations likely To be attempted
Summary of Round 1:
• Time taken: about 30 minutes.
• Attempted 11 questions – Q 2, 5, 6, 7, 8, 9, 15, 17, 18, 23 & 29. These were, for me, the easiest QA questions in the paper.
• Identified that I will try attempting all four DI sets and my order of preference will be:
□ Q25-27 Share price table
□ Q 47-50 Bar chart on forms of institutes
□ Q 34-37 Pie & bar Chart
□ Q 11-13 Tables on movies
• Identified that Q 16 and 20 should not be attempted.
QADI Round 2A: 30 minutes 15 attempts
Started R2 with the 5 DS questions
Q41-45. DS, Solved four DS questions correctly.
Did not mark an answer to Q42: while I was sure that the two statements independently could not answer the question, I was not sure whether the two statements together could, and hence did not mark an answer.
Q25-27. DI Set on Share price
Solved all three questions without any problems; since the choices were not close in any question, approximation was helpful in saving time.
Q47-50. DI Set on sale of forms
Solved all four questions without any problems; the calculations were heavier in this set than in the previous one, but the questions were not difficult.
Q34-37. DI Set on professors in universities.
I was able to solve this set as well, but it took some time to determine the values of P, Q, R and S as I initially made a calculation mistake and had to redo the set.
The five DS questions and the three DI sets (10 questions) took me about 30 minutes. With 25 attempts in approximately 60 minutes I now moved to the QA questions marked for Round 2.
QADI Round 2B: 30 minutes 10 attempts
Q1. If f(x) is divided by x then the remainder is a^4
Now p, q, r and t are in GP and t = a^4
• p = a, q = a^2 , r = a^3
• f(x) = ax^3 + a^2x^2 + a^3x + a^4
If f(x) is divided by (x-a) then the remainder will be f(a)
• f(a) = 4a^4, as per the question this has to be a perfect cube
• a = 2
Correct answer: Choice (a) 2
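The remainder argument can be checked directly for a = 2:

```python
# f(x) = a x^3 + a^2 x^2 + a^3 x + a^4; remainder on division by (x - a) is f(a).
def f(x, a):
    return a * x**3 + a**2 * x**2 + a**3 * x + a**4

a = 2
remainder = f(a, a)          # equals 4 * a^4
print(remainder)             # 64, which is 4^3, a perfect cube
```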
Q3. 120 = 2^3 x 3 x 5
LCM (X, 120) = 1320 = 120 x 11
• X should be a multiple of 11 but not of 11^2 or higher powers so it is not a perfect square
• Choice (a) is incorrect
LCM (Y, 120) = 1680 = 120 x 14 = 120 x 2 x 7
• Y should be a multiple of 7 but not of 7^2 or higher powers, so it cannot be a perfect square
Choice (b) is incorrect
LCM (Z, 120) = 1800 = 120 x 15 = 2^3 x 3^2 x 5^2
• Z can be 3^2 x 5^2 or 2 x 3^2 x 5^2, i.e. Z may be a perfect square
• Choice (c) is incorrect
Correct answer: Choice (d)
Q4. The number of 5 digit numbers that are multiples of 3 or 4 but not both
= multiples of 3 + multiples of 4 – 2 x multiples of 12
For the 5 digit number to be a multiple of 3, its sum of digits should be a multiple of 3
The digit sum of the first 6 natural numbers = 21
For the 5 digit number to be a multiple of 3, the only digits that can be removed are either 3 or 6; thus the five digits of the number will be either 1, 2, 4, 5, 6 or 1, 2, 3, 4, 5
• The number of 5 digit numbers that are multiples of 3 = 2 x 5! = 240
Thus the answer will be more than 240
• Choice (d) 240 is incorrect
Last 2 digits of 5 digit numbers that are multiples of 4 will be: 12, 16, 24, 32, 36, 52, 56 & 64
The remaining 3 places can be filled in 4 x 3 x 2 = 24 ways
• Total number of numbers that are divisible by 4 = 24 x 8 = 192
• the answer will be less than 240 + 192 = 432
• Choice (a) 432 will be incorrect
Got confused while determining the number of 5 digit numbers that are multiple of 12 and hence left the question without marking the answer.
Correct answer: Not marked
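Since this is where I got stuck, a brute force is reassuring. Assuming the question asks for 5-digit numbers built from five distinct digits out of 1-6 (which is what the counts 240 and 192 above imply), the multiples of 12 can simply be counted:

```python
from itertools import permutations

m3 = m4 = m12 = 0
for digits in permutations(range(1, 7), 5):   # five distinct digits from 1..6
    n = int("".join(map(str, digits)))
    m3 += n % 3 == 0
    m4 += n % 4 == 0
    m12 += n % 12 == 0

print(m3, m4, m12, m3 + m4 - 2 * m12)   # 240 192 60 312
```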
Q14. Solved by using the SQC technique of verifying the figure and approximating the answer.
In the question ∠ADC is 90° and in the figure too it is 90°
As per the question
AO = 56, OD = 4, AC = 61 and AB = 5
In the figure OB : OD does not appear to be greater than 5 : 4 hence BC should be pushed deeper in the circle.
Visual inspection of the figure tells us ABC is more than double of ABD, check the choices:
Choice (a) 11 : 14 < 1, hence incorrect
Choice (b) 14 : 3 > 2, could be correct
Choice (c) 3 : 11 < 1, hence incorrect
Choice (d) 11 : 9 < 2, hence incorrect
Correct answer: Choice (b) 14 : 3
Q19. Assume: Average score before the second last test = x
Total number of tests = n + 2
On scoring 98 marks in the second last test the average increases by 1 (deviation of 1)
• 98 = Original average + Deviation from the average x total number of tests
• 98 = x + (1) (n + 1)
• x + n = 97
On scoring 70 marks in the last test the average dropped by 2 (deviation of -2)
• 70 = (x + 1) + (-2) (n + 2)
• x – 2n = 73
Solving the two equations we get n = 8
• total number of tests = n + 2 = 10
Correct Answer: Choice (c) 10
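The two linear equations solve in one step:

```python
# x + n = 97 and x - 2n = 73  ->  subtracting gives 3n = 24.
n = (97 - 73) // 3
x = 97 - n
print(n, x, n + 2)   # 8 89 10 -> 10 tests in total
```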
Q21. Solved by using the SQC technique of making cases:
As per the question: a + 15b + 25c = 77, where a, b & c are natural numbers
• a + 5(3b + 5c) = 77
• 5(3b + 5c) = 77 – a
• there are 8 possible values of “a” which are = 2, 7, 12, 17, 22, 27, 32 & 37
Minimum value of b and c is 1, thus maximum value of a is 37
If a = 2 then 3b + 5c = 15, no values of b and c are possible
If a = 7 then 3b + 5c = 14, then b = 3 and c = 1
If a = 12 then 3b + 5c = 13, then b = 1 and c = 2
If a = 17 then 3b + 5c = 12, no values of b and c are possible
If a = 22 then 3b + 5c = 11, then b = 2 and c = 1
If a = 27 then 3b + 5c = 10, no values of b and c are possible
If a = 32 then 3b + 5c = 9, no values of b and c are possible
If a = 37 then 3b + 5c = 8, then b = 1 and c = 1
Thus four ordered triplets (a, b, c) are possible.
Correct Answer: Choice (d) 4
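The case-work above is exactly a brute-force enumeration, so it is easy to double-check:

```python
# Natural-number solutions of a + 15b + 25c = 77.
solutions = [(a, b, c)
             for c in range(1, 4)
             for b in range(1, 6)
             for a in range(1, 78)
             if a + 15 * b + 25 * c == 77]
print(len(solutions), sorted(solutions))
# 4 [(7, 3, 1), (12, 1, 2), (22, 2, 1), (37, 1, 1)]
```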
Q24. Solved this question by making a venn diagram.
Correct Answer: Choice (a) 24
Q28. Solved this question by using the SQC technique of drawing the figure and assuming the value of the side of square.
Let the side of the square = 2
Thus Area of the square = 4
• Area of OFCG = 1
• Area of OFG = ½
• Area of OMN = (1/4) (OFG) = 1/8
• Area of MNGCF = 1 – 1/8 = 7/8
Assume the area of EHMN = x
Ratio of area of MNGCF to area EHNM = 7/8 : x = 7 : 8x
• In the choices the first number should be a multiple of 7
• Choice (b), (c) and (d) are incorrect
Correct Answer: Choice (a) 7 : 9
Q39. Could not understand how the question should be solved hence made a figure to understand.
If there are 4 children standing around a circle then we get a square and the number of distinct pairs in which children are:
• standing next to each other = 4 = n or the number of sides
• not standing side by side = 2 (along the two diagonals) = nC2 – n
• nC2 – n = 5n
• n = 13
Correct Answer: Choice (b) 13
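The equation nC2 - n = 5n can be solved by scanning n:

```python
from math import comb

# Pairs not standing side by side: C(n, 2) - n; set this equal to 5n.
n = next(k for k in range(4, 100) if comb(k, 2) - k == 5 * k)
print(n)   # 13
```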
Q40. Probability of not getting a blue ball in two trials = 81/100 = (9/10)^2
• Number of blue balls = 1 out of 10
Probability of getting a green ball in two consecutive trials = 49/100 = (7/10)^2
• Number of green balls = 7 out of 10
• Number of red balls = 2 out of 10
Number of ways in which we can draw balls of three different colours = 3 x 2 x 1 = 6
Thus probability of getting a ball of three different colours in three consecutive trials
= 6 x (1/10) x (7/10) x (2/10) = 21/250
Correct Answer: Choice (b) 21/250
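Exact fractions make the probability bookkeeping painless:

```python
from fractions import Fraction

p_blue = 1 - Fraction(9, 10)     # P(no blue)^2 = 81/100 -> P(no blue) = 9/10
p_green = Fraction(7, 10)        # P(green)^2 = 49/100 -> P(green) = 7/10
p_red = 1 - p_blue - p_green
p_all_three = 6 * p_blue * p_green * p_red   # 3! orderings of the colours
print(p_all_three)               # 21/250
```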
Q10. Let the speed of Ayesha be x km/hr and that of Bhumika = y km/hr
Ayesha started running at 5am and Bhumika at 6am
• At 6 am Ayesha has run x km
At 7 am the two meet and after 6 am in 1 hour Ayesha has run x km and Bhumika y km
• Ratio of speed of Ayesha to Bhumika = x : y
At the time of the second meeting, in the time after the first meeting at 7am:
Ayesha has run 2y km and Bhumika has run 4x km
• Ratio of speed of Ayesha to Bhumika = x/y = 2y/4x
• (x/y)^2 = 1/2
• x/y = 1/√2
Correct Answer: Choice (b) 1 : √2
87 Comments
1. Sir,
I wanted to pursue executive mba from iim indore/sp jain, what %ile should I target in CAT14,
My total work experience is 6 years and 5 months in infrastructure comp(reliance)., B-tech(nit)- 75.5 %, 10th-76 and +2-67%.
pl. suggest.
□ You should get an overall percentile of about 97+ with 95+ in each section.
2. Sir,
I attempted the new pattern Mock 3 and got a score of 142: QA/DI 31 C W; VA/LR 23 C 16 W. I made a lot of mistakes in the second section. Please help me to increase my scores.
□ 3W in QA/DI
☆ Ankur, you are doing well in QADI and should continue with your existing approach. It is VALR that you need to work on, essentially on accuracy: 39 attempts with 16 incorrect answers is effectively only 17-18 net correct answers. Please identify which of the VALR question types (Verbal Logic, English Usage, RC and LR) is the issue and work on improving your accuracy in that area. It might also be useful to reduce (or stop) your attempts in the question type in which you have the highest error rate. Please call me at 9811155160 in the evening in case you want to discuss the issue.
3. Hello Sir,
I have 2 questions-
1. My academics are average: Xth-7.0 CGPA, XIIth-70%, Graduation-64%, category-general, no work ex. With these acads, will I get calls from IIMs and other B-schools if I am able to score 95+%ile in CAT?
2. How many questions should I attempt, and with what % of accuracy, to get around 95%ile in CAT? I am targeting IMT Ghaziabad, MDI Gurgaon and other B-schools in Delhi-NCR.
□ Chirag, except for IIMB all other IIMs are possible for you, but a call from the top IIMs will need over 99.5%ile; other old IIMs & FMS will need over 99%ile, and the new IIMs, MDI etc. over 97%ile. At a score of 95%ile you will be able to get an interview call from most of the other good MBA institutes.
4. Sir, I am a 2014 passout, currently working and scoring well in mocks. My graduation percentage is 59.98 and I am thinking of giving improvement papers in December this year. I wanted to know: once the CAT result comes out and my result goes up to 61-63%, will they allow me to apply for colleges for which the general category cutoffs are pegged at 60%, provided that as of now I do not have 60% but will by the time the result comes out?
□ Deepak, your score at the time of application decides your eligibility; if the results come out before the last date of application then you will be able to apply. Do take the improvement tests, as over 60% will be useful in other areas as well.
5. Dear Sir,
I’ve enrolled for both the CL CAT test series and Test Gym adaptive. I have been taking 2-3 full length mocks along with sectionals from Test Gym, followed by thorough analysis, every week. As of now I am able to clear the sectionals but my overall percentile is meagre, varying from 84-87%ile. Every time I analyse the paper post exam, I realize that with another 20-22 marks my job would have been done easily. This has brought my morale down, and a sense of panic is building up with each passing day to D-Day.
At this point of time, sir, kindly guide me so that I can fight back by putting in my best efforts and sail through.
□ Bibhu, 3 FLTs/Mock CATs in a week is excessive as it does not give you an opportunity to revise or work on your weak areas. I suggest that you restrict yourself to 2 Mocks a week and also set aside one day every week for revising all important questions. You could follow the weekly schedule given below or modify it to suit your specific requirements.
Day 1: Topic wise tests from the Test Gym eg, 2 tests each of RC, DI, LR and VL (14 questions each), 1 test of Usage (Sentence Correction and Vocabulary) and 3-4 of QADI
Day 2: Repeat Day 1 schedule
Day 3: Take a section test of QADI and VALR, one in the morning and one in the afternoon.
Day 4: Repeat Day 1 schedule
Day 5: Repeat Day 1 schedule
Day 6: Revise all important questions from all the tests that you have taken so far.
Day 7: Take a Mock CAT
You should analyse each test (topic, section and Mock) in detail, identify important questions and revise them regularly.
6. GP sir, I am an scc student in B.Tech 7th sem. I have 94.4% in 10th, 89.4 in 12th and 89.6 in B.Tech till the 6th semester. Which programmes should I apply for in the CAT 2014 form? I’m terribly confused. Please help.
□ Glen, apply for PGP of all IIMs, in case of IIM C it is PGDM. If you are interested in HR then the HR program of IIM Ranchi should also be on your list. Similarly if you are interested in
Agri Business then do apply for ABM of IIM A and L.
☆ Thank you sir. I have been attempting mocks quite regularly, but my scores are stagnating around 110. I usually end up with less time for quant than I should, after my section 2. Any way of getting the number of attempts up?
○ Glen, 110 is a good score at the current stage of preparation, please call me at 9811155160 to discuss.
■ sure sir…
7. Hi sir,
I appeared in CAT ’13 and scored 98.11. I started preparing for this CAT about a month back. I have given 4-5 mocks and am really worried about the erratic pattern of my mock scores: I have scored 84, 104, 112, 131 and 167 in the 5 mocks I have given, and my rank varied greatly, from 4 in CL Proc Mock 3 to 1200 in an AIMCAT. Since I have started working now, I am greatly worried whether 2 months of persistent effort along with the job will land me the percentile in excess of 99.7 which I aspire for. I have not devoted much time to preparation this year; rather, I have spent time revising the 4-5 mocks I have given. Please help.
□ Dear Mukesh,
You have great potential. If you give your best, you should get good results this year itself. Two months of focused effort should be good enough to ensure consistent performance in your
remaining mocks and a good score in your actual tests. Use the Practise-Analyse-Revise-Test model. Analyse your mocks thoroughly and identify areas where you are regularly making mistakes.
Work on the basics of those areas and then practise as many problems from those areas as possible. Use Test Gym Adaptive thoroughly. This will help you build up your speed as well as accuracy.
8. Sir,
Thanks a lot for taking up our queries and solving our problems.
I’ve been regularly solving mocks, usually completing 2 mock tests in a week, but the variance in the percentiles I receive is tremendous: I’ve had percentiles ranging from as low as 60 to 99, and scores from a paltry 70 to a high of 170, with all the mocks being solved in the same manner and following the same strategy.
-How do I bring about consistency in my results? It is the accuracy that is really bringing the result down. How do I improve that?
□ Dear Aman,
Use the Practise-Analyse-Revise-Test model. Analyse your mocks thoroughly and identify areas where you are regularly making mistakes. Work on the basics of those areas and then practise as
many problems from those areas as possible. Use Test Gym Adaptive thoroughly. This will help you build up your speed as well as accuracy.
9. Hi Sir,
These are my mock scores in order i attempted them.
Proc 1- 97 (qa- 50, va- 47)
proc 3- 91 (qa- 45, va- 46)
proc 2- 123 (qa- 60, va- 63)
proc 5- 136 (qa- 60, va- 76)
proc 4- 128 (qa- 54, va- 74)
In all exams my attempts are between 65-75.
Sir, you can see that I am totally stagnated in QA – I can’t seem to cross the 60 mark in the last 3 tests – and there is no improvement overall either.
I have decent acads (85+ throughout).
Sir, please give me a reality check about the targets which I can set for colleges, as well as how to improve quant.
Thanks, Sir.
□ Dear Ankit,
Your overall scores are good. If your scores in QA are not improving, revise the basics in this area and practise more problems. Use Test Gym Adaptive thoroughly and also take some sectional
tests. This should help improve your accuracy and problem solving speed in QA.
10. Sir I had posted my comment this morning but it is removed now, So here I am posting it again. Sir, everything with my acads is third class, Xth-76.5, XII-69.4, (1 Year gap) , B.tech- 61 (to top
it off now having an year lag because of 8th sem results) , and mocks are not coming at all. As of now I can at the max go for 40-44 attempts with 70-80 percent accuracy and pcntl hovers around
70-80. But all I want to know is if I can improve in this 2 months and pull something like 85-90 will it be fruitful with my disastrous acads. I belong to SC but have seen a SC guy, with >
80,80,70 acads, great 3 years central govt. work experience constantly pulling 95s with good sectionals for 2 years now and then not getting old IIM calls. I am really worried now sir, these
present continuous sins are heavy to carry. I just want to know about some colleges that you want to recommend to me, and one of those with sure-shot placement, because, now I already have a
B.Tech loan to pay, which will be added to the MBA fees. Sorry for the very long post, but sir I am worried sick.
Thank you
□ Abir, the earlier query has been answered and is reproduced below for your reference.
Abir, the 4 year gap will be an issue but it can be explained and a good CAT score will be required to ensure admission to a good MBA institute.
Given your category, a score of 85%ile will get you a call from most of the top IIMs and other top b-schools that have reservations. You should not be considering the NIBMs of the world but apply only to those institutes that have reservations for SC candidates. Focus on your preparation and ensure an 85%ile plus, and you will find yourself in an IIM or IIT or FMS or one of the top Symbiosis (SIBM Pune or SCMHRD) institutes. Also, IITs and FMS (DU) have a very low fee, so you will not be burdened financially.
You should take CAT, IIFT and SNAP. Institutes affiliated to NMAT and XAT do not have reservation, so ignore them. I hope you have enrolled for the Test Series; we came out with a Rs 499/- offer of 10 Mock CATs. If not, then let me know and as a special case I will have it made available to you.
☆ Thank you very much sir. Even though I didn’t understand what you meant by the 4 year gap (as I am currently in my final year, because of the final year lag), I understood what you wanted to say.
Thank you again. And definitely willing to buy the 10 mock cats. 🙂
○ Abir, my mistake, I misread your query.
“….with my profile is third class, X-76.5, XII-69.4 -YEAR GAP- B.Tech- 61 (to top it off now having an year lag……”
■ Oh that’s fine sir, anyway, I was unable to find the aforementioned 10 mock cat series on cl site sir, sorry for disturbing.
11. hi Sir,
One off-topic question: I’m not very comfortable solving Games and Tournaments questions. Can I ignore that part altogether, or do I need to practice rigorously now? Please answer.
□ Sivaji, with about 2 months to go you should do both – practice/revise regularly, but in the paper attempt such a set only if you are sure of solving it; if you are absolutely uncomfortable then drop it.
What is the minimum percentile required for IIM L, I, K and the new ones, and with how many attempts?
In the actual exam, if I attempt 30-35 with 85-90% accuracy, is it sufficient to clear the QA&DI cutoffs of all the colleges?
Does JBIMS take CAT?
□ Shashank, JBIMS takes the CAT score, but to have a good chance of making it to the institute you should take the Feb ’15 MAT or CET Maharashtra or ATMA.
For IIM I, K, L etc you should target a 99%ile in CAT and the new IIMs are possible at over 97.5%ile.
A net score of over 55 marks (net 18 correct attempts) will be sufficient to clear the sectional cutoff of all IIMs. With 33 attempts and 6 incorrect answers (net score of 75) you would be
over 99%ile in QADI.
☆ How much should I target in both sections if attempting 30-35 in QA&DI?
○ Shashank, A total of 65 attempts is likely to be very good in CAT and hence you could consider about 30 in VALR.
■ Sir, will it be sufficient for IIM-L and FMS to clear the cut off? I am planning my career in sales and marketing and they both are the top most.
■ Shashank, it will be sufficient but if you want to be very safe then target a total attempt of 70 questions.
13. First of all very warm regards, Sir. You’re doing a great job here and it’s a thing of immense appreciation that you’ve helped a lot of students with their queries.
My question to you sir is what follows-
I’ve a record of 79.6%, 84.2% and 6.4 (CGPA) in my 10th, 12th and graduation courses. I had backlogs in the 3rd, 5th and 8th semesters of my graduation, which I cleared in August 2014 (my graduation was to be completed in June but, as a result of my final semester backlog, it finished in August; my official marksheet and provisional degree certificate have not yet been released by my university – they will probably be given to me by the third week of October). I’ve joined a crash course for CAT-14 just a week ago, and I’ve also started studying and giving mocks
seriously. I’ve a fairly good aptitude and I’m also OK in English. Coming to the point- my query is what would be the ideal percentile for me to get in CAT-14 to get a good college, keeping my
category, poor academic record, backlogs, etc. in mind. Also, how and in which ways is my poor academic record going to hamper my chances? Please also suggest the exams which I should give apart
from CAT, XAT, IIFT, SNAP and CMAT.
Looking forward to your answer, Sir. I know your reply is going to clear all my doubts and help me.
Thank you!
□ Dear Rahul.
What is your category?
☆ I’m from the General category, Sir.
○ Rahul, as a general category candidate with decent academic record (despite 6.4 CGPA) you should target:
1. 99.6%ile for top IIMs
2. 99%ile for other old IIMs, FMS and IIT Mumbai
3. 97.5%ile for new IIMs, other IITs, MDI, NITIE. XAT and IIFT will need an equivalent score in their entrance tests.
4. 95%ile for other top B schools
■ Thank you very much, Sir. I was feeling quite disheartened assuming that my poor academics may impede my chances of getting a good college even if I get a good CAT score. Your
reply provided me with some confidence again.
Thanks again for starting this blog, this has been of immense help.
Another query of mine is that I’m consistently scoring 90-95%iles in VA in various mocks, but my quant score is stuck at 70-75%ile. It has been 20 days since I started preparing for CAT, and I know I’ve been very late in starting my preparation. Could you please guide me about some specific important areas to target in QA initially, so that I can at least start improving my score steadily.
Thank you!
■ Rahul, while you are late, you can still get the required CAT score. Focus on practice, testing, analysis and revision. You should practice only MCQs from your study material and Test Gym; for testing you have the 20 Mock CATs (proc and unproc). Do analyse all tests (practice and mocks) to identify the important questions, and revise these at least once a week. You would already have taken a few Mock CATs and practice tests (from the material or the Test Gym); you should first analyse them as suggested in the following two posts:
1. Are you choosing the right questions in your Mock CATs?
2. Things you must-do after every mock you take
Continuous revision of important questions should not be neglected, a question goes out of the revision list only after you have revised it 5-6 times and you remember the
14. Sir, I am a Kashmiri migrant candidate, did my B.E. in 2012 from MIT Alandi college, Pune University, and am preparing to give MBA exams for the 2015-17 MBA batch. Please help me regarding
which colleges to apply to and which exams to give to get into a good b-school. Moreover, I also wanted to know how to go about MH-CET. I mean, should I appear for the MAT exam also, as one of my friends
from Maharashtra told me it's better to appear in MAT, as it's easy to score and get the required cutoff for good b-schools in Maharashtra, rather than applying through CAT or CMAT. Last thing sir,
being a Kashmiri migrant, is it wise to apply through the Kashmiri migrant quota or in the open category to get into a good college? Please reply sir, I want to do an MBA this year in any case, so
please enlist options accordingly. Thanks & regards :)
□ Dear Umang,
If you are interested in b-schools in Maharashtra, then MHCET will be a better option than MAT. You will need to score very high marks in MAT (around 750+ out of 800) in order to get into
good colleges in Maharashtra. MHCET will definitely be a better option. You should definitely apply under Kashmiri Migrant category and not in Open category since the cutoffs for Kashmiri
migrants will definitely be lower than cutoffs for open category candidates. Do share some details of your scores in mocks so that we can advise you on which institutes you should apply to.
15. Hello GP Sir and Team CL,
I have 49% in graduation. How tough are my chances of getting into IIFT Delhi? What %ile do I need to score then?
□ Siddhartha, the interview calls in IIFT are based on your performance in the entrance test only, but for final selection your marks in graduation will play a role. I suggest that you target a
score of over 55 marks (60+ is preferable) in the written test to ensure that your graduation marks are not a problem.
16. Sir, with 107 in Mock CAT 3 (110 actually, as Q17 was not recorded due to an internet failure), where do I go from here in my preparation? My average scores in CL mocks are in the range of 100 to 105 with
82-83% accuracy. What score should I target, and how many more questions should I attempt? My attempts are in the range of 45 to 50.
I am targeting the very best and need the maximum percentile; PP shows around 97.
thank you
□ Dhruv, 100-105 is decent at this stage, and with decent accuracy you should now look at increasing your attempts to around 55 by attempting 2-3 extra questions in each section. I hope you are
revising regularly, as it will help in increasing your speed as well as accuracy. Target a score of 120-125 by the end of this month.
☆ Thank you sir. Yes, I have been increasing my attempts, but sitters such as the last LR set in Mock CAT 3 are sometimes going amiss, which is worrisome to me. Also, the other day in my
institute mock CAT, I attempted 20 QADI questions in just 45 minutes but still could not reach 30 attempts. How do I overcome these shortcomings?
Also sir, just roughly, how much percentile would a final score of around 150 fetch in CAT, going by 60 to 63 attempts?
○ Dhruv, continue with the practice + testing + analysis + revision process and the attempts will go up. Keep a modest target of 2 extra attempts in each section. Do not worry too much
about missing a couple of easy questions; it is very difficult to ensure that you attempt all easy questions, and most of us will miss out on a couple of them.
A score of 150+ is likely to be over 99%ile in CAT.
■ thank you so much sir
gp ka funda is a great initiative really,cheers
17. Sir ,
When are the online Mocks for other exams like NMAT and IIFT starting? NMAT is less than a month from now & still no mocks from CL.
□ Dear Virat,
The first NMAT Mock test has already gone live today. The schedule of the other Non-CAT Mocks is available in the Notifications section of your SIS.
18. IS GP’s take for mock cat 04 (unproctored) available yet???
□ Dear Arpit,
GP’s take is available only for the Proctored Mocks, not for the unproctored mocks. Hence, this would not be available for Mock CAT 4.
19. sir,
I have 2 questions :
1. After the mock analysis of one mock, I have seen which questions I missed that could have been done, etc. Now, in order to get a higher hit percentage in the next mock, what change can I
bring to my prep in the 10-11 odd days till then?
2. There is practicing the test gym; then in every mock some new things add to my cognizance; then there are old classroom concepts which I have to revise (although I don't have much issue with the basics, I
still tend to forget some things); then vocab, and LR question types.
Sir, I am not able to manage what, how and when to do all these things. Moving on to something new feels like I am missing out on old things.
If I am making any mistakes, kindly fill me in, sir.
Kindly suggest what should be done.
□ Koustav, it is essentially practice and revision between the two mocks, and working on the weak areas, that will help you improve your scores. Do not expect your score to jump from one paper to the
next; a consistent score over 3-4 papers, with adequate practice and revision in between, will lead to improvement in your scores.
Every Mock will give you a couple of new things that will get added to your pile of revision, similarly test gym questions too will give you a few new things.
You are doing all the things that are required for preparation but it will be useful to draw out a schedule (or time table) to ensure that you do not miss out on anything.
☆ thanku sir,
will work on that 🙂
20. Sir,
My question is that in the video analysis of the CAT 14 strategy by Gejo sir, he mentioned that in QA (30 QA, 5 DS), out of the 30 questions, 10 will be from Level 1, 10 from Level 2 and 10
from Level 3.
Sir, how would you define these Levels 1/2/3 with respect to the levels in Test Gym Adaptive (Beginner/Intermediate/Advanced/Expert)?
Please also guide the same for the other sections (VA, LR, DI), so I can practice thoroughly at the right level.
Please reply as soon as possible; waiting for your reply.
□ Hi Sahil,
Roughly, you can go with this benchmark.
Level 1 – Beginner/Intermediate
Level 2 – Intermediate/Advanced
Level 3 – Advanced/Expert
You must also understand that each one of us has a favourite area. For instance, someone may be very good at numbers but not so good at geometry. For him/her, a difficult question on Numbers
would still be Level 1, while a moderately difficult question in geometry would be Level 3.
Let me define Level 1 as questions that you can solve within 1.5 minutes. In 35 questions, you should have around 8-10 such questions.
21. Sir
Kindly help me out with my score. Despite opting for different techniques and time management, I'm not able to go through the entire paper.
Also, in order to manage my score in the VA section, I'm losing considerable score in the QADI section, in which I feel I'm more comfortable.
In the QA section, about half of my time is wasted on the first 15-20 questions with very few attempts. All these questions look tempting, but I end up losing a lot of time with minimal attempts.
And coming to the VA section: in RC, is it advisable to mark all questions of a particular RC with the same option? With each RC consisting of 4 questions, even if I get only one right choice I'm at no loss,
and getting more than 1 will fetch me some positive score.
□ Rishabh, Hope you have gone through the following posts:
1. The single-most unpardonable, gravest sin you can commit in CAT
2. Are you choosing the right questions in your Mock CATs?
If you are unable to go through the entire paper, then most probably you are not leaving difficult questions; this is confirmed by the fact that in QADI you are wasting almost half your time. Please go
through the above two posts and the old Mocks to identify the question types that you should be attempting.
22. Sir, while filling the registration form for CAT, I selected all programmes, considering that they themselves will reject those for which I am not eligible. But now someone told me that if you get a call
from a specific IIM and you don't go for the interview, they ultimately reject you for all other programmes too. Please help.
□ Nishant, this is incorrect, the person who has told you this is ill-informed.
23. Gp sir,
I had posted my query regarding IIFT, but it's not here anymore.
□ Dear Ayush,
Please post your query again.
24. Sir, my question is regarding word-usage questions (1 word and 4 different options). What should be my approach for these questions, from the preparation point of view as well as from
the attempt point of view?
□ Dear Sarat,
Such questions test your knowledge of idioms and phrasal verbs. To prepare for such questions, the best method is to read as much as possible. This will give you exposure to most of the
common idioms and phrasal verbs. In the short term, you should go through lists of idioms and of phrasal verbs. You will find many such lists online. Go through some such lists and try to
learn the exact structure and the meaning of idioms and phrasal verbs.
25. GP sir,
I have a query related to IIFT in the context of DI. I have been attempting previous years' IIFT papers. I found that DI is calculation intensive, and hence it takes me around 4 minutes to solve each
question even after applying approximations. How can I reduce the time each question takes?
□ Dear Ayush,
Revise the SQC concepts and practise them as much as possible. These techniques will be a great help to you in the IIFT exam.
26. Hi Sir,
I apologize since this question might be out of context here, but I was not sure where to post this question. I have a couple of questions about the OMET exams.
1. Since CMAT is conducted twice a year, in September and February, what is the catch for September test takers? Are they disallowed from attempting the February CMAT exam? Is there any such restriction?
2. My second question was regarding CMAT, SNAP, XAT, and other exams that frequently have GK questions. Can you advise on some useful resources, including online websites, from which I can prepare for these?
For both static and current GK?
Thank you in advance,
□ Also, could you please suggest a book on decision making for XAT?
☆ Ayush, can you please not hijack others’ questions again ?
☆ Dear Ayush,
The best source for this would be the XAT supplement that we give. Go through this supplement thoroughly.
□ Dear Dilip,
1. There is no such restriction. Even if you have taken CMAT in September, you are free to attempt the February CMAT and vice versa.
2. To prepare for static GK, you should first go through previous years' papers so that you get an overview of the kind of questions asked. Further, you should solve some model question
papers/CL's mock tests. Static GK questions come from diverse topics. Hence, in order to prepare for this, you should be well-versed in all the basic sections/areas of GK, viz. history,
polity, geography, science, economy, etc. Some suggested readings are as follows:
1) Lucent’s GK
2) Manorama Year Book 2014
3) Any national daily newspaper
4) A monthly magazine like Pratiyogita Darpan/Civil Services Chronicle
☆ Sir, do these exams- CMAT, NMAT, IIFT, CMAT require a different kind of approach or preparation from CAT? As the exam dates are coming close do we need to focus on some specific areas for
these? My CMAT is in this month and NMAT comes in the next.
Thanks and regards. Your posts have been of immense help and motivation.
○ Sorry, I meant CMAT, NMAT, IIFT, SNAP and XAT. ^^^
■ Rohon, the preparation and approach for all these tests is similar to that for CAT, but closer to each of these papers we will orient you towards those exams as well.
GK is one of the biggest differences across these papers, but in the other areas the question types are similar (at a lower difficulty level, except in XAT) to those of CAT.
27. Sir,
You said that one can attempt RC without completely reading the passage. Please explain: how does this work?
□ Dear Baijnath,
Some RC questions may simply test you on specific details mentioned in the passage; such questions may not require any understanding of the passage. For example, questions such as “Which of
the following is true as per the passage?”. For answering such questions, you may choose to simply skim through the passage and search for the answer instead of reading the entire passage. Do
note that even tough passages may have one or two simple questions of this type. Such sitters may be answered accurately and efficiently using this approach.
28. Sir, do you always have an accuracy of 100% in mocks/CAT? What all can one do to get close to 100% accuracy and still attempt a decent number of questions?
□ Dear Dexter,
Accuracy of close to 100% in QA-DI can be easily achieved through practise. In VA-LR, some questions may be very tricky. Despite this, accuracy of around 95% is realistically possible in this
section as well. Do ensure that you have a firm grasp of the basic concepts. After this, practise as many problems as possible. Analyse your attempts to ensure that you learn from each and
every mistake that you make. These steps will ensure that you have good problem solving speed and high accuracy.
□ Dexter, no, I do not have 100% accuracy. I have hit a purple patch, and both my attempts and accuracy are higher than normal. My normal attempt count would be around 65 questions with around 5-6
incorrect answers. My suggestion is to target an accuracy of 90%.
☆ Sir, is every SQC session different or the same? The reason I ask is that I attended your SQC session in Delhi and learnt a lot from it, and seeing as you are conducting
another one, do you think I should attend it again on the 19th, or would it be redundant?
29. Hello GP sir, I read your post and found some really cool and convincing strategies which I’ll definitely try and execute in the next Mock. Sir do you write somewhere regarding which questions
need to be attempted in Round 2 and 3? Can SQCs be applied in the first attempt?
□ Dear Siddhartha,
The same post contains details of questions attempted in each of the three rounds. Yes, SQC concepts can certainly be used in the first attempt itself.
30. Thank you sir!
Had been waiting for this post for long. I had also posted a comment (in an older article of yours) requesting you to write this article soon.
Thanks a lot again!
Now coming to my problem: I have been attempting mocks with a focus on accuracy. Yes! I have done that, and in the last 3 proctored mocks I attempted 44 questions (18-22 attempts in QADI and 22-26
attempts in VALR). In Proc Mock 5 I got a 100 overall score, with 36 in QADI and 64 in VALR. The percentiles clearly reveal that the others have done exceedingly well, with puys notching up to
180-odd scores.
My case here is that you and the other mock-maulers are very thorough about getting the 12-15 sitters in the QA paper, which would fetch them 36 to 45, plus a 10-plus in DI gets them another 30.
And they easily notch 65-plus in QADI.
Due to the current time constraints/backlogs, I am facing the challenge of not being able to attempt many questions in QADI. I am sure of the fundamentals to be used during the exam, but I can't
implement them in the given time. I know practice is the only solution to this, but I request you to guide me on how I should practice now. There are so many topics to be done and not
much time left. Even in my comfortable topics, I face a lot of challenge during the exam. It is getting panicky for me now.
With a 60-plus in VALR, I am sure I can improve it, thanks to you, Gejo and Madhu ma'am at CL Kolkata for showing the path. But I am panicking a lot about my present condition in QADI.
□ Dear Sachin,
You should analyse your mocks thoroughly and identify those areas/topics where you are regularly making mistakes. Revise the basics in these areas and then, practise as many problems from
these areas as possible. Use Test Gym Adaptive thoroughly. This will help you build up speed as well as accuracy.
Finding outliers in oscillating data
I would like to describe a method to clean up oscillating data and find the outliers using Fast Fourier Transforms. FFTs are really cool when trying to find the periodicity of data. I elected to use
FFTs rather than moving averages (the principle would be similar: linearize, then find outliers) because they lend themselves nicely to seasonal data.
So let’s get started with some data:
df = data.frame(obs = 1:10000, obs1 = sin((1:10000) / 1000) - 1 + abs(rnorm(10000)))
Z Scoring this data, we will get what we expect (the tops chopped) but not what we want:
The first thing we need to do is move from the time domain to the frequency domain. Fortunately, R has a really nice library to take the hard math out of this. We then want to calculate the magnitude (the
phase is not important at this stage, because we want to know how big the waves are, not their angle).
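The post's own code blocks did not survive extraction. As a hedged sketch, the original is in R (its built-in `fft()`); here is the equivalent in Python/NumPy, with variable names of my own choosing:

```python
import numpy as np

# Same synthetic series as the data.frame above: a slow sine
# wave, shifted down, plus folded Gaussian noise.
n = 10000
idx = np.arange(1, n + 1)
obs1 = np.sin(idx / 1000) - 1 + np.abs(np.random.randn(n))

# Time domain -> frequency domain.
spectrum = np.fft.fft(obs1)

# Magnitude only: how big the waves are, not their phase angle.
magnitude = np.abs(spectrum)
```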
As we'd expect from the data, we have a few very low frequencies, and lots of noise in the higher frequencies.
Now we do something you can only do numerically, not with real filters (and some of the artifacts of this will be visible later). We filter the top half of the data out (setting it to zero rather than
deleting it, otherwise the inverse FFT breaks) and keep only the top 10 frequencies, which should give us a good match to the base waveform in the data. Enter the perfect low-pass filter (frequency domain only).
This is a little trick I've seen in industry, where you manipulate the frequency domain directly rather than trying to translate the filter design into the time domain.
This gives us a nice split on the waveform.
Now to keep only the top 10 harmonics (play with this number in your use case):
Hmm, there is an annoying harmonic in the 3000 range; let's try 5 instead of 10.
Nice. Right now we have a clean low-frequency waveform we can apply to the data, so let's first bring it back into the time domain so that we have data we can work with:
Now we can linearize our data using the waveform:
That curve down at the end is an artifact of our transform, and we seem to have missed a frequency at around the 5th harmonic; so play around and try 7 after we're done.
Now we can just Z score the data using |z|<2
And our resulting scoring looks as follows:
And on the original data, we get:
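Since the post's code did not survive extraction, here is a hedged end-to-end sketch of the pipeline described above, in Python/NumPy rather than the post's R. The cutoff of 5 harmonics and the |z| < 2 rule follow the text; everything else (names, structure) is my own:

```python
import numpy as np

n = 10000
idx = np.arange(1, n + 1)
signal = np.sin(idx / 1000) - 1 + np.abs(np.random.randn(n))

# 1. Time domain -> frequency domain.
spectrum = np.fft.fft(signal)

# 2. "Perfect" low-pass filter, frequency domain only: zero out
#    all but the first few harmonics (set to zero, don't delete,
#    or the inverse FFT breaks). Keep the mirrored negative
#    frequencies so the inverse transform stays real-valued.
keep = 5  # number of low harmonics to keep; tune per use case
filtered = np.zeros_like(spectrum)
filtered[:keep] = spectrum[:keep]
filtered[-(keep - 1):] = spectrum[-(keep - 1):]

# 3. Frequency domain -> time domain: the clean low-frequency
#    baseline waveform.
baseline = np.real(np.fft.ifft(filtered))

# 4. Linearize the data against the waveform, then z-score it.
residual = signal - baseline
z = (residual - residual.mean()) / residual.std()
outliers = np.abs(z) >= 2  # points failing the |z| < 2 test
```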
As further work, you can try to combine both the linear and the FFT versions for trending data. There is also nothing stopping you from using Laplace transforms and z-transforms to filter noise out of
streaming data to find outliers (in fact, if I were deploying this in a streaming environment, that is most likely what I'd use).
Hope you found this informative.
Working Against Each Other
<p><a href="https://www.prepswift.com/quizzes/quiz/prepswift-working-against-each-other" target="_blank">Working Against Each Other Exercise</a></p><p>The final variation of work/rate problems
involves people or things <span style="color:#27ae60;">Working Against Each Other</span>. In cases such as these, you don't add the rates, but rather find the difference. </p> <p><strong>
<span style="color:#8e44ad;">Example</span></strong></p> <p>One pipe is filling up a pool at $3$ liters per second. Unfortunately, the pool has a leak and water is leaking at a rate of $1$ liter per
second. If the pool is $5000$ liters, how many <strong>minutes</strong> would it take to fill the pool? </p> <p>In this case, the pipe and the leak are working AGAINST each other, so
don't add the rates. Find the difference:</p> <p>$$3 - 1 = 2$$</p> <p>So the "relative" rate is $2$ liters per second, and we can use this value to solve.</p> <p>$$5000 = 2t$$</p> <p>
$$t = 2500 \ seconds$$</p> <p>But don't forget to convert to minutes!</p> <p>$$\frac{2500}{60} \approx 41.67 \ minutes$$</p> <p>Approximately.</p>
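As a quick sanity check of the arithmetic above (not part of the original exercise; note that 2500 seconds comes to roughly 41.7 minutes), a few lines of Python:

```python
# Pipe fills at 3 L/s; leak drains at 1 L/s. Working against
# each other, the net rate is the difference, not the sum.
fill_rate = 3                      # liters per second in
leak_rate = 1                      # liters per second out
net_rate = fill_rate - leak_rate   # 2 L/s

pool_volume = 5000                 # liters
seconds = pool_volume / net_rate   # 2500 seconds
minutes = seconds / 60             # about 41.67 minutes
print(round(minutes, 2))
```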
What is index of coincidence
Kasiski's test and the index of coincidence are used to attack a Vigenère cipher (or other polyalphabetic ciphers with a small alphabet and small key size). The Index of Coincidence is the probability
that, when selecting two letters from a text (without replacement), the two letters are the same. For random text over a 26-letter alphabet this probability is about 0.038, while typical English text has an index of coincidence of about 0.067.
The Index of Coincidence (IOC) also indicates when bytes at a common offset are related; this statistical measure and related features are exposed by tools such as freak.py. The value of the Index of
Coincidence can be calculated in a general way, without considering the number of units of the language in question. Analysts use the Index of Coincidence (Friedman, 1987): the IC, invented by William
Friedman, is the probability of two randomly drawn letters being the same. In elementary cryptanalysis, it is often used in conjunction with Kasiski examination to attempt to determine the number of
alphabets used to encipher a plaintext.

A typical attack on a polyalphabetic cipher proceeds as follows:
1. Find repeated fragments in the ciphertext; their distance is expected to be a multiple of the key length.
2. Compute the gcd of (most) distances.
3. Use the index of coincidence. Assuming the longest candidate key, we compute the index of coincidence (Ic) per column:

Column | 2      | 3     | 4     | 5     | 6     | 7     | 8
1      | 0.044  | 0.064 | 0.049 | 0.057 | 0.079 | 0.050 | 0.056
2      | 0.0524 | 0.056 | 0.054
The test of coincidence is the evaluation of the coincidences of letters, or of digraphs, etc., between two or more messages, or within the same message. The coincidence or "pairing" test may be
consolidated into one final number or "statistic". That statistic is called the "index of coincidence" and is defined as the probability that two randomly selected letters in the text are identical.
The index of coincidence is sometimes called the repeat rate. Friedman had noticed that when drawing two ciphertext letters at random, the probability of drawing "doubles" (i.e., both letters the
same) is higher if the letters are drawn from the same alphabet than from different alphabets. So, if you have computed the letter frequency of a ciphertext but cannot yet apply the Friedman test to a
Vigenère cipher, the index of coincidence is the statistic to compute first.

Note that the similarly named Coincident Economic Activity Index is unrelated: it is an economic index that includes four indicators (nonfarm payroll employment, the unemployment rate, average hours
worked in manufacturing, and wages and salaries), with the trend for each state's index set to match the trend for gross state product. Leading indicators are considered to point toward future events,
lagging indicators are seen as confirming a pattern that is in progress, and coincident indicators occur in real time and clarify the current state of the economy.
Online tools exist to compute the coincidence index. The Index of Coincidence is a cryptanalysis technique studying the probability of finding repeating letters in an encrypted text; text in the
English language has an index of coincidence of about 0.0667. (The Composite Index of Coincident Indicators, by contrast, is an index published by the Conference Board that is a broad-based
measurement of current economic conditions, helping economists and investors determine the current state of the economy. The coincident indexes combine four state-level indicators to summarize current
economic conditions in a single statistic: nonfarm payroll employment, average hours worked in manufacturing, the unemployment rate, and wage and salary disbursements deflated by the consumer price
index, U.S. city average.)
A statistic called the Index of Coincidence (IC) can help in deciphering: it narrows down the search for the encryption method used, based on the result obtained from the IC formula. The IC is a
statistical measure of text which distinguishes text encrypted with a substitution cipher from plain text. The same tool used for homophone ciphers, namely the index of coincidence, also applies to a
repeating-key tabula recta cipher.

Exercise: calculate the index of coincidence for the following ciphertext and use it to estimate the keyword length.

NGTSA IPNGE PBSFW NCPBN RSAGF ASGEW
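The index of coincidence is easy to compute directly from letter counts. A hedged Python sketch (the function name is my own; the formula, sum of n_i*(n_i-1) over N*(N-1), is the standard one):

```python
from collections import Counter

def index_of_coincidence(text):
    """Probability that two letters drawn without replacement
    from the text are the same: sum n_i*(n_i-1) / (N*(N-1))."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

# Uniformly random 26-letter text tends toward 1/26 ~ 0.038;
# typical English prose averages about 0.067.
print(index_of_coincidence("NGTSA IPNGE PBSFW NCPBN RSAGF ASGEW"))
```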
What Is Data Interpretation? Methods, Importance and Measurement Scales
Data interpretation refers to the application of processes by which data are reviewed in order to reach an informed conclusion. Data interpretation assigns meaning to the analyzed information and
determines its significance and implications.
It is an important aspect of working with data sets in any field, including research and statistics. Analysis and interpretation go hand in hand, since interpreting data requires analyzing it first.
According to Ellingson (2007), the process of data interpretation is often cumbersome, and it has naturally become more difficult with the increasing amount of data produced daily. However, with the
accessibility of data analysis tools and machine learning techniques, analysts are gradually finding it easier to interpret data.
Interpretation of data is very important, as it helps derive useful information from a raw data set and supports informed decision-making. It is useful for individuals, companies and researchers.
What are data interpretation methods?
Data interpretation methods are how analysts help people make sense of numerical data that has been collected, analyzed, and presented. Data, when collected raw, can be difficult for laymen to
understand, so analysts have to break down the collected information so that others can make sense of it.
For example, when founders target potential investors, they need to interpret data (e.g., market size, growth rate, etc.) to better understand it. There are two main methods for doing this:
quantitative methods and qualitative methods.
Importance of Data Interpretation
The importance of interpreting data correctly is evident. Data are likely to come from multiple sources and tend to enter the analysis process in a random order. According to Patten (2004), data
analysis tends to be extremely subjective; that is, the nature and purpose of the interpretation will vary from company to company, probably in correlation with the type of data being analyzed.
Although several different processes may be applied depending on the nature of the data, the two broadest and most common categories are "quantitative analysis" and "qualitative analysis."
However, before any serious data interpretation research can begin, it must be understood that visual presentations of data results are irrelevant unless a sound decision is made regarding
measurement scales. Before any serious data analysis can begin, the scale of data measurement must be decided, as this will have a long-term impact on the ROI of data interpretation.
Scales in Data Measurement
The different scales include:
nominal scale
Non-numerical categories that cannot be ranked or compared quantitatively. Variables are exclusive and exhaustive.
ordinal scale
Exclusive and exhaustive categories but with a logical order. Quality indices and agreement indices are examples of ordinal scales (eg, good, very good, fair, etc., or agree, strongly agree,
disagree, etc.).
interval scale
Measurement scale in which the data are grouped into categories with ordered and equal distances between the categories. There is always an arbitrary zero point.
ratio scale
It contains characteristics of all three scales above, together with a true zero point.
How to interpret the data?
Once the measurement scales have been selected, it is time to choose which of the two general interpretation processes will best suit your data needs. Let’s take a closer look at those specific data
interpretation methods and potential data interpretation issues.
When interpreting the data, an analyst must try to discern the differences between correlation, causality, and coincidence, as well as many other biases, but must also consider all the factors that
may have led to a result. There are several methods of data interpretation that can be used.
Data interpretation is intended to help people make sense of the numerical data that has been collected, analyzed and presented. Having a reference method (or methods) for interpreting data will
provide your analyst teams with structure and a consistent foundation.
In fact, if you have different approaches to interpret the same data, even if they share the same objectives, some mismatches can occur. Disparate methods will lead to duplication of effort,
inconsistent solutions, wasted energy and, inevitably, time and money.
Qualitative interpretation of the data
Qualitative data analysis can be summed up in one word: categorical. With qualitative analysis, the data is not described by numerical values or patterns but by a descriptive context (i.e., text). Typically, narrative data is collected using a wide variety of person-to-person techniques. These techniques include:
Observations: detailing the behavior patterns that occur within an observation group. These patterns can be the amount of time spent on an activity, the type of activity, and the method of communication used.
Documents: just as behavioral patterns can be observed, different types of documentary resources can be coded and divided according to the type of material they contain.
Interviews: one of the best narrative data collection methods. Interview responses can be grouped by themes, topics, or categories. The interview approach allows the data to be segmented very precisely.
A key difference between qualitative and quantitative analysis becomes clear at the interpretation stage. Qualitative data, being widely open to interpretation, must be “coded” to facilitate the
grouping and labeling of data into identifiable themes. Since person-to-person data collection techniques can often lead to disputes about the appropriate analysis, qualitative data analysis is often
summed up in three basic principles: notice things, pick things up, think about things.
Interpretation of quantitative data
If the interpretation of quantitative data could be summed up in one word (and it really can’t) that word would be ‘numerical’. There are few certainties when it comes to data analysis, but you can
be sure that if the research you’re involved in doesn’t have numbers, it’s not quantitative research. Quantitative analysis refers to a set of processes by which numerical data is analyzed. In most
cases, it involves the use of statistical models such as the standard deviation, the mean, and the median. Let’s quickly review the most common statistical terms:
Mean
The mean represents a numerical average for a set of responses. When dealing with a data set (or multiple data sets), a mean will represent a central value of a specific set of numbers. It is the sum
of the values divided by the number of values within the data set. Other terms that can be used to describe the concept are arithmetic mean, average, and mathematical expectation.
Standard deviation
It is another statistical term that often appears in quantitative analysis. The standard deviation reveals the distribution of responses around the mean and describes the degree of consistency of the responses; together with the mean, it gives insight into the data sets.
Frequency distribution
It is a measure of the rate at which a response occurs within a data set. When using a survey, for example, the frequency distribution can determine the number of times a specific ordinal-scale response appears (e.g., agree, strongly agree, disagree, etc.). The frequency distribution is very useful for determining the degree of consensus among the data points.
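The three statistics above can be computed with a few lines of standard-library Python (the survey responses below are made-up illustrative data):

```python
from collections import Counter
from math import sqrt

responses = [4, 5, 5, 3, 4, 4, 5, 2, 4, 4]  # hypothetical Likert-scale survey answers

# Mean: sum of the values divided by the number of values
mean = sum(responses) / len(responses)

# Standard deviation (population form): spread of responses around the mean
std = sqrt(sum((x - mean) ** 2 for x in responses) / len(responses))

# Frequency distribution: how often each response occurs
freq = Counter(responses)

print(mean)            # 4.0
print(round(std, 3))   # 0.894
print(freq[4])         # 5 — five respondents answered "4"
```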
Typically, quantitative results are presented by visually displaying evidence of correlation between two or more significant variables. Different processes can be used together or separately, and comparisons can be made to finally reach a conclusion. Other quantitative data interpretation processes are regression, cohort, predictive, and prescriptive analyses.
Qualitative Data Interpretation
The qualitative data interpretation method is used to analyze qualitative data, which is also known as categorical data. This method uses text, rather than numbers or patterns, to describe the data.
According to Creswell (1997), qualitative data is often collected using a wide variety of person-to-person techniques, which can be difficult to analyze compared to the quantitative research method.
Unlike quantitative data, which can be analyzed directly once collected and classified, qualitative data must first be coded into numbers before it can be analyzed. This is because texts are often cumbersome, and analyzing them in their original state takes longer and leads to many errors. The coding done by the analyst should also be documented so that it can be reused and analyzed by others.
There are two main types of qualitative data: nominal and ordinal data. These two types of data are interpreted in the same way, but ordinal data is much easier to interpret than nominal data.
In most cases, ordinal data is usually labeled with numbers during the data collection process and may not need to be coded. This is different from nominal data, which still needs to be encoded for
correct interpretation.
EquatIO LaTeX Editor
LaTeX Editor
Take the stress out of making math online with Equatio’s LaTeX editor.
Adding advanced math expressions to online math doesn’t have to be hard work. The LaTeX Editor in Equatio is your online LaTeX equation editor. Equatio’s technology has taken the hard work out of
creating and editing LaTeX expressions.
Use LaTeX confidently
Equatio makes constructing LaTeX expressions easy. For teachers and higher-ed students this means Equatio can be used as the go-to LaTeX equation editor. Simply copy LaTeX content into the toolbar
and see it in real-time as math content. Equatio allows users to input simple or complex expressions, and transform their work into LaTeX at the click of a button. Simply type, write directly
on-screen, or speak, and Equatio does the hard work.
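For example, a typical expression one might paste in or export (this is plain LaTeX, not EquatIO-specific markup):

```latex
\int_0^{1} x^2 \, dx = \frac{1}{3},
\qquad e^{i\pi} + 1 = 0
```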
Capture math content from anywhere online
We’ve also built LaTeX into our ‘screenshot reader’, which means you can snap any math content from PDFs, images, videos or the web and capture the LaTeX code to use anywhere. Users can take a screenshot of an inaccessible LaTeX expression and convert it into an accessible formula, which will automatically be read aloud. Once converted, the LaTeX expression can then be pasted directly into
the Equatio editor for editing.
Write and edit LaTeX in Canvas
Equatio for Canvas is an easy-to-use plugin that integrates seamlessly with the Canvas Learning Management System (LMS) on the Chrome browser. This allows you to add LaTeX expressions, equations,
formulas, and more into the Rich Content Editor with a click.
“Equatio has significantly increased my quality of life when making google slides with math. And I love that it has LaTeX option in it. Thank you, #EquatIO.”
Are you ready to find out more about Equatio?
If you have any questions about collaboration in math with Equatio, you’d like to see it in action or you’d like to talk to one of our Texthelpers about licensing options, then please complete this form.
Why do we need this detail?
The information you provide in this form will help us to direct you to the right person. It means we’ll be able to reply to you faster. We also ask if you’re interested in any other products because
we often have multi-product discounts available.
When can you expect to hear from us?
We try to respond to you within 24 hours (max). Sometimes that isn’t always possible (over the weekend for example). But we’ll try to get back to you as soon as possible.
Day 2: Bag of Tricks for Image Classification with Convolutional Neural Networks
Francisco Ingham, Mar 11
Up-to-date tricks to boost your CNNs. Trick or treat, some papers render your code obsolete.
TL;DR: Deep learning convolutional networks have had many improvements not directly related to architecture.
This paper examines a collection of tricks that clearly improve performance at almost no complexity cost.
Many of these tricks have been added to fastai.
Large batch size
The paper presents four techniques that make training with large batch sizes effective.
Linear scaling of the learning rate
Since larger batch sizes mean a lower variance (lower noise) in the gradient of SGD, we can be more confident that the gradient is a promising direction.
Thus, it makes sense to increase the learning rate along with batch size.
It has been shown empirically that scaling the learning rate linearly with the batch size works well for ResNet-50 training.
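For instance, assuming the common reference recipe of a 0.1 base rate at batch size 256 (a convention from the paper, not a requirement):

```python
def scaled_lr(batch_size, base_lr=0.1, base_batch=256):
    """Linearly scale the learning rate with the batch size."""
    return base_lr * batch_size / base_batch

print(scaled_lr(256))   # 0.1
print(scaled_lr(1024))  # 0.4
```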
Learning rate warmup
At the beginning of training the weights typically have random values and are far away from the final solution.
Using a learning rate that is too high may result in numerical instability.
The trick here is to use a low learning rate initially and increase it once the training is stable.
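A sketch of gradual warmup — a linear ramp from zero to the target rate is the usual variant; the step counts below are illustrative:

```python
def warmup_lr(step, target_lr, warmup_steps):
    """Linear learning-rate warmup: ramp from 0 up to target_lr,
    then hold at target_lr once warmup is over."""
    if step >= warmup_steps:
        return target_lr
    return target_lr * step / warmup_steps

print(warmup_lr(0, 0.4, 5))  # 0.0 (start low for stability)
print(warmup_lr(3, 0.4, 5))  # ~0.24
print(warmup_lr(5, 0.4, 5))  # 0.4 (full rate once training is stable)
```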
Zero γ
The residual blocks in ResNet have an output to which the input of the block is added: x + block(x). Sometimes the last layer in the block is batch normalization, which normalizes the value and then applies a scale transformation. If the normalized value is x̂, the output of the batch normalization layer is γ·x̂ + β, where γ and β are initialized to 1 and 0. If we instead initialize γ to 0, the residual blocks start out by simply returning the input, effectively reducing the number of layers and making the network easier to train. The network will then only move γ away from 0 if the transformation in the residual block is worth it (i.e., improves performance), which avoids unnecessary computation.
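A toy scalar illustration of the zero-γ idea (real implementations set the per-channel batch-norm scale parameter; this sketch only shows why γ = 0 makes the block an identity at initialization):

```python
def residual_block_output(x, block_out, gamma, beta=0.0):
    """Toy scalar version of a residual block whose last layer is
    batch norm: y = gamma * block_out + beta, then add the skip path."""
    return x + gamma * block_out + beta

print(residual_block_output(2.0, 5.0, gamma=0.0))  # 2.0 — identity at init
print(residual_block_output(2.0, 5.0, gamma=1.0))  # 7.0 — block contributes
```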
No bias decay
It is recommended not to apply any regularization (or weight decay) to the bias or batch normalization parameters.
Low-Precision Training
New hardware offers serious improvements in speed when using FP16 rather than FP32 (on an Nvidia V100, FP16 training is roughly 2-3x faster).
However FP16 may cause overflow and disrupt the training process.
The suggestion to overcome this is to store parameters and activations in FP16 and use FP16 to compute gradients.
All parameters have a copy in FP32 for parameter updates.
For a detailed explanation see.
Model Tweaks
ResNet Architecture
These tweaks help increase validation accuracy in ResNet-50 without a significant computational cost (~3% longer to train).
ResNet-B
(Figure: ResNet-B changes the stride of the first two convolutional layers in Path A.)
The first improvement consists of changing the stride in the convolutional layers.
The first layer in Path A has a stride of 2 which means that it discards 3/4 of the input’s pixels.
To avoid this the stride of this layer can be changed from 2 to 1 and the next layer from 1 to 2 to compensate and conserve the output dimensions.
Since the next layer has a kernel size of 3×3, even with a stride of 2 the layer takes advantage of all the input information.
ResNet-C
(Figure: ResNet-C replaces the large-kernel input convolution with a stack of smaller ones.)
The computational cost of a convolution is quadratic in the kernel width or height, so a 7×7 convolution is 5.4 times more expensive than a 3×3 convolution. This tweak consists of replacing the 7×7 convolutional layer in the input stem with three 3×3 layers (which also makes the model easier to train).
ResNet-D
(Figure: ResNet-D replaces a stride-2 convolution with an average pool and a stride-1 convolution to avoid information loss.)
ResNet-D is a similar improvement to ResNet-B, but with a different approach: the authors replaced a stride-2 convolution in Path B with an average pooling layer and a stride-1 convolution (this keeps the output dimensions intact).
The authors report that this tweak does not affect speed noticeably.
Training Refinements
The training refinements have a clear positive impact on performance, not only for ResNet but also for other computer-vision architectures (1).
Cosine Learning Rate Decay
Typically, after the learning-rate warmup described earlier, we decrease the learning rate as training progresses (the intuition being that as you get closer to the optimum, high learning rates might move you away from it). A smooth function for describing this schedule is the cosine function.
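In the paper, the cosine decay schedule is η_t = ½ (1 + cos(tπ/T)) η, where t is the current step, T the total number of steps, and η the initial learning rate. A minimal Python sketch:

```python
from math import cos, pi

def cosine_lr(t, total_steps, base_lr):
    """Cosine learning-rate decay: eta_t = 0.5 * (1 + cos(t*pi/T)) * eta."""
    return 0.5 * (1 + cos(t * pi / total_steps)) * base_lr

print(cosine_lr(0, 100, 0.4))    # 0.4 — full rate at the start
print(cosine_lr(50, 100, 0.4))   # ~0.2 — halfway through
print(cosine_lr(100, 100, 0.4))  # ~0.0 — decayed to zero at the end
```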
Label Smoothing
(Figure: the new target with label smoothing.)
Typically the last layer of a neural network is a fully-connected layer with output dimension equal to the number of categories and a softmax activation. If the loss is cross-entropy, for mathematical reasons the network has an incentive to make the prediction for one category very large and the others very small, and this leads to over-fitting.
Label smoothing consists in changing the target from [1, 0, 0, …] to [1−ε, ε/(K−1), ε/(K−1), …], where K is the number of classes, to reduce the polarity of the target.
It is clear that with label smoothing the distribution centers at the theoretical value and has fewer extreme values.
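A sketch of the smoothed-target construction (pure Python; ε = 0.1 is a common choice, assumed here):

```python
def smooth_labels(true_index, num_classes, eps=0.1):
    """Replace a one-hot target with a label-smoothed distribution:
    1 - eps for the true class, eps/(K - 1) for every other class."""
    off = eps / (num_classes - 1)
    return [1 - eps if i == true_index else off for i in range(num_classes)]

target = smooth_labels(0, 5)
print(target)       # [0.9, 0.025, 0.025, 0.025, 0.025]
print(sum(target))  # close to 1.0 (up to float rounding)
```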
Knowledge Distillation
In knowledge distillation, we use a teacher model to help train the current model, which is called the student model.
One example is using a ResNet-152 as the teacher model to help training ResNet-50.
Knowledge distillation entails adding a term to the loss function which accounts for the difference between the student model and the teacher model to ensure that the student model does not differ
too much from the teacher model.
The loss with knowledge distillation is ℓ(p, softmax(z)) + T²·ℓ(softmax(r/T), softmax(z/T)), where T is the temperature hyperparameter, r is the teacher output, z is the student output, and p is the target.
Mixup
(Figure: the new example is created by interpolating two existing examples.)
Mixup means linearly interpolating two training examples, inputs and labels alike, to create a new one.
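A minimal sketch of the interpolation (pure Python; in practice the mixing weight λ is sampled from a Beta(α, α) distribution, while here it is fixed for illustration):

```python
def mixup(x1, y1, x2, y2, lam):
    """Mixup: linearly interpolate two examples and their one-hot labels
    with weight lam on the first example."""
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

x, y = mixup([0.0, 1.0], [1, 0], [1.0, 0.0], [0, 1], lam=0.7)
print(x)  # ~[0.3, 0.7] — blended inputs
print(y)  # ~[0.7, 0.3] — blended (soft) labels
```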
Transfer Learning
Object Detection
The authors showed that the performance of Faster-RCNN on Pascal VOC improved when the refinements presented above were added.
Semantic Segmentation
The authors trained a fully convolutional network (FCN) on ADE20K and concluded that only the cosine learning-rate schedule improved performance on this task (2).
Notes
(1) Knowledge distillation hampers performance in two of the three architectures. According to the authors: “Our interpretation is that the teacher model is not from the same family of the student, therefore has different distribution in the prediction, and brings negative impact to the model.”
(2) Why did the other improvements not improve performance? While models trained with label smoothing, distillation and mixup favor softened labels, pixel-level information may be blurred as a result, degrading overall pixel-level accuracy.
References
Bag of Tricks for Image Classification with Convolutional Neural Networks; He et al., AWS, 2018.
‘Cosmic inflation’: did the early cosmos balloon in size? A mirror universe going backwards in time may be a simpler explanation (2024)
We live in a golden age for learning about the universe. Our most powerful telescopes have revealed that the cosmos is surprisingly simple on the largest visible scales. Likewise, our most powerful
“microscope”, the Large Hadron Collider, has found no deviations from known physics on the tiniest scales.
These findings were not what most theorists expected. Today, the dominant theoretical approach combines string theory, a powerful mathematical framework with no successful physical predictions as
yet, and “cosmic inflation” – the idea that, at a very early stage, the universe ballooned wildly in size. In combination, string theory and inflation predict the cosmos to be incredibly complex on
tiny scales and completely chaotic on very large scales.
The nature of the expected complexity could take a bewildering variety of forms. On this basis, and despite the absence of observational evidence, many theorists promote the idea of a “multiverse”:
an uncontrolled and unpredictable cosmos consisting of many universes, each with totally different physical properties and laws.
This article is part of our series Cosmology in crisis? which uncovers the greatest problems facing cosmologists today – and discusses the implications of solving them.
So far, the observations indicate exactly the opposite. What should we make of the discrepancy? One possibility is that the apparent simplicity of the universe is merely an accident of the limited
range of scales we can probe today, and that when observations and experiments reach small enough or large enough scales, the asserted complexity will be revealed.
The other possibility is that the universe really is very simple and predictable on both the largest and smallest scales. I believe this possibility should be taken far more seriously. For, if it is
true, we may be closer than we imagined to understanding the universe’s most basic puzzles. And some of the answers may already be staring us in the face.
The trouble with string theory and inflation
The current orthodoxy is the culmination of decades of effort by thousands of serious theorists. According to string theory, the basic building blocks of the universe are miniscule, vibrating loops
and pieces of sub-atomic string. As currently understood, the theory only works if there are more dimensions of space than the three we experience. So, string theorists assume that the reason we
don’t detect them is that they are tiny and curled up.
Unfortunately, this makes string theory hard to test, since there are an almost unimaginable number of ways in which the small dimensions can be curled up, with each giving a different set of
physical laws in the remaining, large dimensions.
Meanwhile, cosmic inflation is a scenario proposed in the 1980s to explain why the universe is so smooth and flat on the largest scales we can see. The idea is that the infant universe was small and
lumpy, but an extreme burst of ultra-rapid expansion blew it up vastly in size, smoothing it out and flattening it to be consistent with what we see today.
Inflation is also popular because it potentially explains why the energy density in the early universe varied slightly from place to place. This is important because the denser regions would have
later collapsed under their own gravity, seeding the formation of galaxies.
Over the past three decades, the density variations have been measured more and more accurately both by mapping the cosmic microwave background – the radiation from the big bang – and by mapping the
three-dimensional distribution of galaxies.
In most models of inflation, the early extreme burst of expansion which smoothed and flattened the universe also generated long-wavelength gravitational waves –– ripples in the fabric of space-time.
Such waves, if observed, would be a “smoking gun” signal confirming that inflation actually took place. However, so far the observations have failed to detect any such signal. Instead, as the
experiments have steadily improved, more and more models of inflation have been ruled out.
Furthermore, during inflation, different regions of space can experience very different amounts of expansion. On very large scales, this produces a multiverse of post-inflationary universes, each
with different physical properties.
The inflation scenario is based on assumptions about the forms of energy present and the initial conditions. While these assumptions solve some puzzles, they create others. String and inflation
theorists hope that somewhere in the vast inflationary multiverse, a region of space and time exists with just the right properties to match the universe we see.
However, even if this is true (and not one such model has yet been found), a fair comparison of theories should include an “Occam factor”, quantifying Occam’s razor, which penalises theories with
many parameters and possibilities over simpler and more predictive ones. Ignoring the Occam factor amounts to assuming that there is no alternative to the complex, unpredictive hypothesis – a claim I
believe has little foundation.
Over the past several decades, there have been many opportunities for experiments and observations to reveal specific signals of string theory or inflation. But none have been seen. Again and again,
the observations turned out simpler and more minimal than anticipated.
It is high time, I believe, to acknowledge and learn from these failures, and to start looking seriously for better alternatives.
A simpler alternative
Recently, my colleague Latham Boyle and I have tried to build simpler and more testable theories that do away with inflation and string theory. Taking our cue from the observations, we have attempted
to tackle some of the most profound cosmic puzzles with a bare minimum of theoretical assumptions.
Our first attempts succeeded beyond our most optimistic hopes. Time will tell whether they survive further scrutiny. However, the progress we have already made convinces me that, in all likelihood,
there are alternatives to the standard orthodoxy – which has become a straitjacket we need to break out of.
I hope our experience encourages others, especially younger researchers, to explore novel approaches guided strongly by the simplicity of the observations – and to be more sceptical about their
elders’ preconceptions. Ultimately, we must learn from the universe and adapt our theories to it rather than vice versa.
Boyle and I started out by tackling one of cosmology’s greatest paradoxes. If we follow the expanding universe backward in time, using Einstein’s theory of gravity and the known laws of physics,
space shrinks away to a single point, the “initial singularity”.
In trying to make sense of this infinitely dense, hot beginning, theorists including Nobel laureate Roger Penrose pointed to a deep symmetry in the basic laws governing light and massless particles.
This symmetry, called “conformal” symmetry, means that neither light nor massless particles actually experience the shrinking away of space at the big bang.
By exploiting this symmetry, one can follow light and particles all the way back to the beginning. Doing so, Boyle and I found we could describe the initial singularity as a “mirror”: a reflecting
boundary in time (with time moving forward on one side, and backward on the other).
Picturing the big bang as a mirror neatly explains many features of the universe which might otherwise appear to conflict with the most basic laws of physics. For example, for every physical process,
quantum theory allows a “mirror” process in which space is inverted, time is reversed and every particle is replaced with its anti-particle (a particle similar to it in almost all respects, but with
the opposite electric charge).
According to this powerful symmetry, called CPT symmetry, the “mirror” process should occur at precisely the same rate as the original one. One of the most basic puzzles about the universe is that it
appears to violate CPT symmetry, because time always runs forward and there are more particles than anti-particles.
Our mirror hypothesis restores the symmetry of the universe. When you look in a mirror, you see your mirror image behind it: if you are left-handed, the image is right-handed and vice versa. The
combination of you and your mirror image are more symmetrical than you are alone.
Likewise, when Boyle and I extrapolated our universe back through the big bang, we found its mirror image, a pre-bang universe in which (relative to us) time runs backward and antiparticles outnumber
particles. For this picture to be true, we don’t need the mirror universe to be real in the classical sense (just as your image in a mirror isn’t real). Quantum theory, which rules the microcosmos of
atoms and particles, challenges our intuition so at this point the best we can do is think of the mirror universe as a mathematical device which ensures that the initial condition for the universe
does not violate CPT symmetry.
Surprisingly, this new picture provided an important clue to the nature of the unknown cosmic substance called dark matter. Neutrinos are very light, ghostly particles which, typically, move at close
to the speed of light and which spin as they move along, like tiny tops. If you point the thumb of your left hand in the direction the neutrino moves, then your four fingers indicate the direction in
which it spins. The observed, light neutrinos are called “left-handed” neutrinos.
Heavy “right-handed” neutrinos have never been seen directly, but their existence has been inferred from the observed properties of light, left-handed neutrinos. Stable, right-handed neutrinos would
be the perfect candidate for dark matter because they don’t couple to any of the known forces except gravity. Before our work, it was unknown how they might have been produced in the hot early universe.
Our mirror hypothesis allowed us to calculate exactly how many would form, and to show they could explain the cosmic dark matter.
A testable prediction followed: if the dark matter consists of stable, right-handed neutrinos, then one of three light neutrinos that we know of must be exactly massless. Remarkably, this prediction
is now being tested using observations of the gravitational clustering of matter made by large-scale galaxy surveys.
The entropy of universes
Encouraged by this result, we set about tackling another big puzzle: why is the universe so uniform and spatially flat, not curved, on the largest visible scales? The cosmic inflation scenario was,
after all, invented by theorists to solve this problem.
Entropy is a concept which quantifies the number of different ways a physical system can be arranged. For example, if we put some air molecules in a box, the most likely configurations are those
which maximise the entropy – with the molecules more or less smoothly spread throughout space and sharing the total energy more or less equally. These kinds of arguments are used in statistical
physics, the field which underlies our understanding of heat, work and thermodynamics.
The late physicist Stephen Hawking and collaborators famously generalised statistical physics to include gravity. Using an elegant argument, they calculated the temperature and the entropy of black
holes. Using our “mirror” hypothesis, Boyle and I managed to extend their arguments to cosmology and to calculate the entropy of entire universes.
To our surprise, the universe with the highest entropy (meaning it is the most likely, just like the atoms spread out in the box) is flat and expands at an accelerated rate, just like the real one.
So statistical arguments explain why the universe is flat and smooth and has a small positive accelerated expansion, with no need for cosmic inflation.
How would the primordial density variations, usually attributed to inflation, have been generated in our symmetrical mirror universe? Recently, we showed that a specific type of quantum field (a
dimension zero field) generates exactly the type of density variations we observe, without inflation. Importantly, these density variations aren’t accompanied by the long wavelength gravitational
waves which inflation predicts – and which haven’t been seen.
These results are very encouraging. But more work is needed to show that our new theory is both mathematically sound and physically realistic.
Even if our new theory fails, it has taught us a valuable lesson. There may well be simpler, more powerful and more testable explanations for the basic properties of the universe than those the
standard orthodoxy provides.
By facing up to cosmology’s deep puzzles, guided by the observations and exploring directions as yet unexplored, we may be able to lay more secure foundations for both fundamental physics and our
understanding of the universe. | {"url":"https://londonthamesfencingclub.com/article/cosmic-inflation-did-the-early-cosmos-balloon-in-size-a-mirror-universe-going-backwards-in-time-may-be-a-simpler-explanation","timestamp":"2024-11-07T13:11:15Z","content_type":"text/html","content_length":"77830","record_id":"<urn:uuid:ced685cc-6183-48f7-a9b8-30f809088804>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00440.warc.gz"} |
Integration Rules
Integration can be used to find areas, volumes, central points and many useful things. But it is often used to find the area underneath the graph of a function.
The integrals of many functions are well known, and there are useful rules to work out the integrals of more complicated functions, many of which are shown here.
There are examples below to help you.
Common Functions:
Constant: ∫ a dx = ax + C
Variable: ∫ x dx = x^2/2 + C
Square: ∫ x^2 dx = x^3/3 + C
Reciprocal: ∫ (1/x) dx = ln|x| + C
Exponential: ∫ e^x dx = e^x + C
  ∫ a^x dx = a^x/ln(a) + C
  ∫ ln(x) dx = x ln(x) - x + C
Trigonometry (x in radians): ∫ cos(x) dx = sin(x) + C
  ∫ sin(x) dx = -cos(x) + C
  ∫ sec^2(x) dx = tan(x) + C
Rules:
Multiplication by constant: ∫ c·f(x) dx = c ∫ f(x) dx
Power Rule (n ≠ -1): ∫ x^n dx = x^(n+1)/(n+1) + C
Sum Rule: ∫ (f + g) dx = ∫ f dx + ∫ g dx
Difference Rule: ∫ (f - g) dx = ∫ f dx - ∫ g dx
Integration by Parts: see Integration by Parts
Substitution Rule: see Integration by Substitution
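These rules can be sanity-checked numerically; a small Python sketch comparing a midpoint Riemann sum against the Power Rule result for the integral of x^2 on [0, 1]:

```python
def riemann(f, a, b, n=100000):
    """Midpoint Riemann sum approximating the definite integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Power Rule: the integral of x^2 from 0 to 1 is 1^3/3 - 0^3/3 = 1/3
approx = riemann(lambda x: x * x, 0.0, 1.0)
print(round(approx, 6))  # 0.333333
```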
Power Rule
Multiplication by constant
Sum Rule
Difference Rule
Sum, Difference, Constant Multiplication And Power Rules
Integration by Parts
Substitution Rule
Laboratoire de Physique
Past seminars
When: Oct 28, 2025 01:30 to Jan 12, 2029 02:30
Thursday 17 October
Title: Quantum features from classical entropies
Tobias Haas (Univ. Libre de Bruxelles)
Abstract: Local quantum entropies are of utmost interest for characterizing quantum fields, many-body systems and gravity. Despite their importance, being nonlinear functionals of the underlying
quantum state often hinders their theoretical as well as experimental accessibility. Here, we show that suitably chosen classical entropies of standard measurement distributions capture the very same
features as their quantum analogs, while remaining accessible even in high-dimensional Hilbert spaces.
We demonstrate the presence of the celebrated area law for classical entropies for typical states such as ground and excited states of a scalar quantum field. Further, we consider the post-quench
dynamics of a multi-well spin-1 Bose-Einstein condensate from an initial product state, in which case we observe the dynamical build-up of quantum correlations signaled by the area law, as well as
local thermalization revealed by a transition to a volume law, both in regimes characterized by non-Gaussian quantum states and small sample numbers.
With the classical entropy method, we set out a novel paradigm for analyzing data, thereby rendering full information measures accessible to the vast majority of (quantum) many-body systems.
Related publications: arXiv:2404.12320 (solo-author), 2404.12321, 2404.12323 (joint work with the groups of Markus Oberthaler and Martin Gärttner).
Thursday, October 10
Title: Entanglement generation with ultra-cold atoms in optical lattices in the Mott regime
Emilia Witkowska (IFPAN, Warsaw)
Entanglement in systems with single-particle control is a well-established resource of modern quantum technology. Spin squeezing is a prime example: applied in an optical lattice clock, it can reduce the statistical uncertainty of spectroscopic measurements.
During the seminar, I will consider the dynamic generation of spin squeezing with ultra-cold atoms with two internal states loaded into an optical lattice in the strongly interacting regime as
realized with state-of-the-art experiments using a quantum gas microscope. I will show how anisotropic interactions and inhomogeneous magnetic fields generate scalable spin squeezing when their
magnitudes are sufficiently small, but not negligible. Simple models for the collective spin will be shown to describe the dynamics effectively. I will also discuss, at a microscopic level, the effect of nonuniform filling caused by hole doping, demonstrating its limiting role in the dynamics and scaling of entanglement.
Thursday, October 3
Title: Measurement-altered quantum criticality
Sara Murciano (Caltech, USA)
Abstract: Quantum critical systems constitute appealing platforms for the exploration of novel measurement-induced phenomena due to their innate sensitivity to perturbations. I will discuss the
impact of measurement on Ising chains using an explicit protocol, whereby uncorrelated ancillae are entangled with the critical chain and then projectively measured. I will identify different protocols wherein measurements (i) weakly modify the universal long-range entanglement or (ii) completely obliterate it. I will also highlight a path to experimental realization in analog
quantum simulators based on Rydberg atom arrays. Finally, I will describe how these ideas can establish a long-term quantum science application of ‘measurement-altered quantum criticality’.
Thursday, September 26
Title: Two untrodden paths to spin liquid candidates on the pyrochlore lattice
Michel Gingras (University of Waterloo, Ontario, Canada)
Over the past thirty years, the pyrochlore lattice of corner-sharing tetrahedra has served as the premier platform for the study of frustrated magnetism in three dimensions, including classical and
quantum spin liquidity, with a rich interplay of theoretical and experimental developments. The most general anisotropic bilinear spin-1/2 Hamiltonian model for this lattice admits four independent
spin-spin couplings, which has been widely used to describe the properties of the magnetic rare-earth pyrochlore oxide materials. In this talk, I will briefly highlight the results from two recent
works [1,2].
I will first discuss the situation where these couplings are fine-tuned to realize a “triple point” in the ground state phase diagram where a spin ice state and two long-range ordered phases meet [1]. At the classical level, a system with this fine-tuned Hamiltonian displays a classical spin liquid state with intertwined rank-1 and rank-2 gauge fields over a finite temperature window. Upon
cooling, the model undergoes a thermal crossover to a spin ice state that is selected through a “disorder-by-disorder” fluctuation mechanism. In contrast, the corresponding quantum model displays a
spin liquid with coexisting vector and matrix gauge fields at zero temperature. These results may be relevant to the highly paradoxical Tb2Ti2O7 rare-earth pyrochlore oxide compound.
The simplest version of the spin-1/2 Hamiltonian consists of one Ising spin-spin coupling that imposes a local microscopic constraint on every tetrahedron, such that the low-temperature phase is
described by a coarse-grained vector field 𝑬 satisfying a lattice divergence-free condition, ∇·𝑬=0. These local constraints leave a highly-degenerate ground state manifold in which spins collectively
align in a head-to-tail fashion to form a network of closed strings referred to as a Coulomb phase or string condensate. Point-like quasiparticle excitations (spinons) then appear at the ends of open
strings of spins, acting as local “electric charges” 𝑄 sourcing the divergence of 𝑬 via an emergent Gauss law, ∇·𝑬=𝑄. I will illustrate that a novel Ising spin vorticity model can be constructed on
the pyrochlore lattice that supports a classical spin liquid in which the ground state consists of a condensate of closed membranes and where the low-energy defects are strings that live on open
membranes. The ground state manifold of this spin vorticity model is a novel classical spin liquid – a 2-form Coulomb phase [2]. If time permits, I will present some Monte Carlo simulations of that classical model and comment on the effect of a quantum exchange term causing membrane exchange/membrane tunneling, and on the possible existence of a 2-form U(1) quantum spin liquid.
[1] Daniel Lozano-Gómez et al., Competing Gauge Fields and Entropically-Driven Spin Liquid to Spin Liquid Transition in Non-Kramers Pyrochlores; PNAS 121, e24034871 (2024).
[2] K.T.K. Chung and M.J.P. Gingras, 2-Form U(1) Spin Liquids: Classical Model and Quantum Aspects; arXiv:2310.17607
Thursday, September 26
Title: Signed eigenvalue/vector distributions of random complex tensors and geometric measure of entanglement of random multipartite states
Naoki Sasakura (Yukawa Institute, Kyoto)
Eigenvalue/vector distributions of random tensors can systematically be computed as partition functions of quantum field theories of bosons and fermions. In particular, signed distributions, which
are the distributions with sign factors coming from Hessian matrices, are expressed as partition functions of four-fermi theories, which are in principle exactly computable. Though the distributions
and the signed distributions are different, they are expected to have intimate relations and in particular common edges in large-N limit. In this talk, we obtain the exact closed-form expressions of
the signed eigenvalue/vector distributions of random complex tensors with symmetric or independent indices. As an application we determine the asymptotic forms of the geometric measure of
entanglement of random multipartite states, which agree with the earlier numerical study by Fitter, Lancien, and Nechita.
Thursday, September 19
Title: Quantum state engineering for precision measurements with atoms
Robin Corgier (LNE-Syrte, Observatoire de Paris)
Matter-wave interferometry allows precision measurements by mapping the physical
quantity of interest to a phase shift determined using interferometric techniques. In the past
years, atomic interferometers have been widely used for fundamental physics tests. To name
a few, they have allowed us to measure the fine structure constant, the gravitational
constant, topological phases and atomic properties. In addition, they are currently used to
perform tests of the Universality of Free Fall (UFF), one of the pillars of the Einstein
Equivalence Principle, where gravity is tested within a quantum framework. Ultimately, the expected supremacy of atom interferometry comes from the use of quantum correlations to overcome the standard quantum limit inherent to uncorrelated or classically correlated particles.
In this presentation, I will first introduce the concept of atom interferometry and the
constraints to realize a test of UFF [1]. I will then discuss recent results on quantum state
engineering and their implementation on advanced platforms, being on ground [2-4] and in
space [5-6]. The second part of the presentation will focus on quantum entanglement
dynamics. I will first introduce the concept of spin squeezing dynamics and then discuss a
novel method compatible with state-of-the-art atom interferometer and inertial
measurements [7].
[1] C. Struckmann, R. Corgier et al., Platform and environment requirements of a satellite quantum test of the Weak Equivalence Principle at the 10^-17 level, arXiv:2310.04212
[2] C. Deppner et al., Collective-Mode Enhanced Matter-Wave Optics, Phys. Rev. Lett. 127,
100401 (2021).
[3] H. Albers, R. Corgier et al., All-Optical Matter-Wave Lens using Time-Averaged
Potentials, Commun Phys 5, 60 (2022).
[4] A. Herbst, et al., Matter-wave collimation to picokelvin energies with scattering length and
potential shape control, arXiv:2310.04383
[5] D. Becker et al., Space-borne Bose–Einstein condensation for precision interferometry,
Nature 562, 391-395 (2018).
[6] N. Gaaloul, M. Meiter, R. Corgier et al., A space-based quantum gas laboratory reaching
picokelvin energy scales, Nat Commun 13, 7889 (2022)
[7] R. Corgier et al., Delta-kick Squeezing, Phys. Rev. Lett. 127, 183401 (2021).
Thursday, September 12
Title: A new kind of metamagnetism in Spin Ice-like materials
Rodolfo Borzi (University of La Plata, Argentina)
Abstract: In this talk I will introduce some of the essential physics of classical spin ice materials. The observation of "pinch points" in diffuse neutron scattering has constituted a major proof of
the peculiar magnetic correlations present in these materials; however, more direct, thermodynamic evidence of their existence has been much more difficult to find in experiments. The major part of
this exposition will be devoted to seeing how spin ice correlations can markedly affect the shape of the magnetization curves. We will then show clear thermodynamic and dynamic indications of a topological phase change, which is a direct consequence of these correlations. This takes the form of a three-dimensional Kasteleyn transition (a second-order transition, characterized by the absence of fluctuations on one side of the critical point), predicted to occur in spin ices in 2008. In the final part of the talk we will move to more speculative ground, going from experiments on canonical spin ice single crystals to numerical simulations of other Ising pyrochlores. The question we will try to answer is the following: is the big asymmetry of the Kasteleyn transition an essential part of it?
Thursday, July 11
Title: Anomaly physics in magnetic Weyl semimetals with domain walls
Julia Hannukainen (KTH, Stockholm)
In this talk I introduce the chiral anomaly in the context of magnetic Weyl semimetals with domain walls. Weyl semimetals serve as a platform to explore the chiral anomaly---the nonconservation of
chiral charge due to applied parallel electric and magnetic, and/or parallel axial electric and axial magnetic, fields. Axial electromagnetic fields are emergent fields which couple with opposite
sign to fermions with opposite chirality. We consider a magnetic Weyl semimetal which contains two Weyl fermions of opposite chirality separated in momentum space. Introducing a dynamic domain wall
in the Weyl node separation generates axial electromagnetic fields, leading to the chiral anomaly. Via the chiral magnetic effect, the anomaly generates a current, resulting in electromagnetic
radiation, which if detected, measures the axial anomaly. In reverse, the anomaly influences the domain wall dynamics, enabling electric control of the chirality of domain walls and improving the
domain wall dynamics. Measuring the electric field mediated changes in the domain wall chirality would constitute a direct proof of the chiral anomaly.
Refs: Hannukainen, Ferreiros, Cortijo, Bardarson, Phys. Rev. B 102, 241401(R) (2020), Hannukainen, Bardarson, Cortijo, Ferreiros, SciPost Phys. 10, 102 (2021)
Thursday, June 13
Title: Black Hole as a Self-gravitating Quantum Many-Body System with Maximum Entropy
Yuki Yokokura (iTHEMS, RIKEN, Japan)
A quantum characterization of a black hole is that it maximizes thermodynamic entropy for a given surface area, which should eventually allow us to understand a black hole as a quantum many-body
system without using spacetime geometry. As a first attempt, we explore this idea by self-consistently solving the 4D semi-classical Einstein equation, where matter is quantum and gravity is
classical. I illustrate the logic by comparing with the mean-field approximation of the Ising model. For spherical, static, highly-excited configurations, we apply thermodynamic typicality locally and estimate
the entropy including self-gravity to derive its upper bound. The saturation condition uniquely determines the entropy-maximized configuration: a self-gravitating collection of excited quanta
condensates forming a dense configuration, where the self-gravity and a large quantum pressure associated with the 4D conformal anomaly are balanced. The interior metric is a non-perturbative
self-consistent solution in the Planck constant. The maximum entropy, given by the volume integral of the entropy density, agrees with the Bekenstein-Hawking formula through self-gravity, leading to
the Bousso bound for thermodynamic entropy. Thus, this semi-classical gravity condensate has holographic bulk dynamics and could be a candidate for a quantum black hole.
Thursday, June 6
Title: Magnetic frustration in octahedral lattices: emergent complexity in applied field
Mike Zhitomirsky (IRIG, CEA, Grenoble)
Geometrically frustrated magnets typically consist of either triangular or tetrahedral blocks of magnetic ions. A novel frustrated motif is provided by octahedral blocks. Magnetic ions form a
corner-sharing network of octahedra in antiperovskites and Mn3X intermetallics, whereas edge-shared octahedra emerge for the J1-J2 spin model on a face-centered cubic (fcc) lattice for a special
ratio of two exchanges J2/J1 = 0.5. We illustrate an emergent complex behavior of octahedral antiferromagnets by studying the magnetization process of the classical J1-J2 fcc antiferromagnet. Up to
eight different phases exist in magnetic field including two fractional magnetization plateaus at M/Msat = 1/3 and 2/3. An unusual twist in the quantum order-by-disorder effect due to magnon-magnon
interactions is also found for the nearest-neighbor fcc antiferromagnet in zero field.
Thursday, May 30
Theoretical physics colloquium
Title: Time reparametrization invariance: from glasses to toy black holes
Jorge Kurchan (LPENS Paris)
Time reparametrization `softness' has gained a lot of attention in recent years, because it is the way in which a toy model of quantum field theory generates (also toy) gravity. Surprisingly enough,
the same invariance is at the heart of the glass transition, as I will describe.
Thursday, May 16
Title: Symmetry shapes thermodynamics of macroscopic quantum systems
Ariane Soret (Université du Luxembourg)
Symmetries play a fundamental role in shaping physical theories, from quantum mechanics to thermodynamics. Studying the entropic, energetic, or dynamic signatures of underlying symmetries in quantum
systems is an active field of research, from fundamental questions about entropy scalings, ground state properties, or thermalization, to the optimization of quantum computing or numerical simulation
procedures, and is gaining momentum due to rapid experimental advances, particularly in cold atoms [1].
In this work [2], we derive a systematic approach to the thermodynamics of quantum systems based on the underlying symmetry groups. We show that the entropy of a system can be described in terms of
group-theoretical quantities that are largely independent of the details of its density matrix. We apply our technique to generic N identical interacting d-level quantum systems. Using permutation
invariance, we find that, for large N, the entropy displays a universal large deviation behavior with a rate function $s(x)$ that is completely independent of the microscopic details of the model,
but depends only on the size of the irreducible representations of the permutation group SN. In turn, the partition function is shown to satisfy a large deviation principle with a free energy $f(x) =
e(x) − β^{−1}s(x)$, where $e(x)$ is a rate function that only depends on the ground state energy of particular subspaces determined by group representation theory. We demonstrate the power of our
approach by applying it to the nontrivial task of describing phase transitions governed by the interplay of quantum and thermal fluctuations in the transverse-field Curie-Weiss model.
[1] Masahito Ueda. Quantum equilibration, thermalization and prethermalization in ultracold atoms. Nat. Rev. Phys., 2(12):669, 2020.
[2] Vasco Cavina, Ariane Soret, Timur Aslyamov, Krzysztof Ptaszynski, and Massimiliano Esposito. Symmetry shapes thermodynamics of macroscopic quantum systems. arXiv:2402.04214, 2024.
Thursday, March 14
Title: From Gauge Theory to Gravity via Homotopy Algebras
Olaf Hohm (Université Humboldt, Berlin)
I begin with a self-contained introduction to Homotopy algebras, which are
generalizations of familiar structures such as Lie or associative algebras
that in physics emerged in string theory but that more recently have begun
to be recognized as the underlying structure of general classical and
quantum field theories. This framework allows one, in particular,
to formulate two deep connections between gauge theories such as
Yang-Mills theory and gravity, as a first step toward a first-principle derivation:
These are, first, the so-called double copy relations
between the scattering amplitudes of gauge theory and of gravity and,
second, the holographic or AdS/CFT relation between a gravity theory
on AdS and a dual CFT on the boundary.
Thursday, March 7
Title: Chiral basis for qubits and decay of spin-helix states
Frank Göhmann (University of Wuppertal)
In a recent cold-atom experiment by the Ketterle group at MIT, one-dimensional spin-helix states could be prepared and their time evolution induced by the XXZ Hamiltonian could be observed. The experiment allows one to adjust the anisotropy parameter of the latter. For the special case of vanishing anisotropy parameter, i.e. for the XX model, we describe the spatio-temporal decay of the spin helix explicitly. The helix pattern stays stable in space, but has a non-trivial time-dependent decay amplitude which is of scaling form and is governed by a universal function that can be represented as a semi-infinite determinant with a kernel related to the discrete Bessel kernel. This representation is valid for all times, is numerically very efficient, and allows us to guess the long-time asymptotics of the function.
Thursday, February 15
Title: Gravity on Null Hypersurfaces: Phase Space and Quantization
Luca Ciambelli (Perimeter Institute)
Using an intrinsic perspective, the Einstein equations
projected to a generic null hypersurface (Raychaudhuri and Damour
equations) can be understood as conservation laws for a Carrollian
stress tensor. After reviewing the salient ingredients of null
geometries, we introduce the canonical symplectic phase space and
compute the Poisson brackets among the gravitational dynamical fields.
We then perform a perturbative expansion in Newton's constant, and
quantize the phase space order by order. This leads to the
appreciation of the Raychaudhuri equation as a stress-tensor balance law, where the geometric data behave like a curved beta-gamma CFT per
null generator. This opens a window toward a constructive (bottom-up)
approach to quantum gravity.
Thursday, February 8
Title: Fractal entanglement transitions in a quasiperiodic non-unitary circuit
Bastien Lapierre (Princeton University, USA)
Measurement-induced phase transitions are novel classes of non-equilibrium dynamical phase transitions, resulting from the interplay between unitary time evolution and measurements. In this talk I
will present a family of exactly solvable non-unitary circuits that lead to rich measurement-induced phase transitions, ranging from area to volume law scalings of the entanglement entropy. In the
case of time-periodic non-unitary circuits, there exists a sharp transition from volume to area law. I will show how the full breaking of the time-translation symmetry in a class of quasiperiodic
circuits leads to even richer entanglement transitions, with extended critical regions characterized by a fractal structure of entanglement separating area and volume law phases. I will finally
comment on the case of a purely random non-unitary evolution.
Thursday, January 25
Title: Entanglement entropy of two disjoint intervals and spin structures in interacting chains in and out of equilibrium
Vanja Marić (LPTMS Orsay)
We take the paradigm of interacting spin chains, the Heisenberg spin-1/2 XXZ model, as a reference system and consider interacting models that are related to it by Jordan-Wigner transformations and
restrictions to sub-chains. An example is the fermionic analogue of the gapless XXZ Hamiltonian, which, in a continuum scaling limit, is described by the massless Thirring model. We work out the Rényi-$\alpha$ entropies of disjoint blocks in the ground state and extract the universal scaling functions describing the Rényi-$\alpha$ tripartite information in the limit of infinite lengths. We also consider the von Neumann entropy, but only in the limit of large distance.
We show how to use the entropies of spin blocks to unveil the spin structures of the underlying massless Thirring model. Finally, we speculate about the tripartite information after global quenches
and conjecture its asymptotic behaviour in the limit of infinite time and small quench. The resulting conjecture for the “residual tripartite information”, which corresponds to the limit in which
the intervals' lengths are infinitely larger than their (large) distance, supports the claim of universality recently made studying noninteracting spin chains. Our mild assumptions imply that the
residual tripartite information after a small quench of the anisotropy in the gapless phase of XXZ is equal to $-\log 2$.
Reference: V. Maric, S. Bocini & M. Fagotti, Entanglement entropy of two disjoint intervals and spin structures in interacting chains in and out of equilibrium, arXiv:2312.10028
Thursday, January 11
Title: Universal control of a bosonic mode via drive-activated native cubic interactions
Théo Sepulcre (Chalmers University)
Bosonic modes provide a hardware-efficient alternative to qubit-based quantum information processing. However, achieving universal control on bosons requires access to a nonlinearity, or to
resourceful non-Gaussian quantum states like cubic phase states. Superconducting microwave circuits offer such strong nonlinearities but face other challenges, like parasitic state distortion due to
the Kerr effect and shorter coherence times.
In this talk, we will demonstrate how these difficulties can be overcome. We harness the 3rd order non-linearity of a SNAIL (Superconducting Nonlinear Asymmetric Inductive eLement) dipole terminated
resonator through simultaneous flux and charge pumping to obtain the desired cubic state, 45 times faster than decoherence. In parallel, we minimize the 4th order Kerr effect by adjusting the flux DC
bias. Achieving this required meticulous pulse calibration and circuit modeling. We will delve into the details of these processes and discuss how our simulation efforts shed light on the primary
causes of infidelity in our current experimental setup.
Thursday, December 14
Title: Hydrodynamics of Multipole-Conserving Systems
Giuseppe de Tomasi (Urbana-Champaign, USA)
During this talk, we will explore how conserving quantities influence the long-time dynamics of generally strongly interacting closed systems. Typically, interacting quantum systems achieve
thermalization via their own unitary dynamics, leading to the emergence of statistical mechanics. However, the route to equilibrium can differ due to the existence of conserved quantities.
Often, conserved charges spread diffusively across the system. However, mobility constraints can impede or even halt their dynamics. The initial part of the talk is devoted to the non-equilibrium
dynamics of fractonic systems, especially those with multiple conservation laws, such as dipole conservation. In these systems, charges are unable to move independently. Their limited dynamics are
described by a generalized diffusion equation that exhibits sub-diffusion [1].
In the second half of the talk, inspired by recent experiments on trapped-ion platforms that intrinsically display power-law decaying interactions [2], we will delve into the interplay between
long-range interactions that promote thermalization and the dipole mobility constraints that obstruct it [3].
[1] PRL 125 (24), 245303
[2] Nature 599, 393 (2021)
[3] arXiv:2304.12342
Thursday, December 7
Title: (Weyl-)Fefferman-Graham asymptotic symmetries
Arnaud Delfante (Université de Mons, Belgium)
Within the framework of asymptotic symmetries as applied to the AdS/CFT correspondence, there is an increasing body of evidence suggesting that the symmetries employed for gauge-fixing might carry
charge. Consequently, setting the associated fields to zero is a physical constraint on the system, which should be avoided. In this talk, we will examine a partial fixing of the Fefferman-Graham
(FG) gauge, referred to as the Weyl-Fefferman-Graham (WFG) gauge, which restores boundary Weyl covariance. We will show that the diffeomorphism mapping WFG to FG can be charged and discuss how this
relates to holography.
Thursday, November 23
Title: Macroscopic effects from local perturbations in quantum spin chains
Saverio Bocini (LP, ENS de Lyon)
We investigate the non-equilibrium dynamics of integrable quantum spin chains governed by translationally-invariant Hamiltonians with short-range interactions. Our focus is particularly on initial
states with localized inhomogeneities. Although such scenarios are often addressed by Generalized hydrodynamics (GHD), we consider specific setups that go beyond this framework.
In this presentation, we delve into two distinct cases where minimal modifications, such as a spin flip, of an otherwise stationary initial state can induce a global reconfiguration of the spin
chain. We specifically examine this phenomenon in the context of quantum jammed states and quantum scars. The dynamical behavior of the spin chain cannot be predicted solely by relying on the knowledge of local conservation laws, making these setups particularly intriguing for studying the limitations of the GHD framework.
Thursday, October 19
Title: Solving the Form Factor bootstrap for Solvable Irrelevant Deformations
Stefano Negro (NYU, USA)
Solvable Irrelevant Deformations – also known as "generalised TTbar deformations" – are a large class of perturbations of Integrable Quantum Field Theories (IQFTs). From the perspective of the
factorised scattering theory, they can be defined as deformations of the two-body S-matrix by a CDD factor. While still being integrable, the resulting theories display unusual properties in their
high-energy regime. In particular, the original UV fixed point is lost and it is replaced by a Hagedorn behaviour, reminiscent of the string-theoretic one. This is expected, due to the deformations
being irrelevant in nature. However, contrary to a generic irrelevant perturbation, these theories offer an enormous amount of control, allowing us to probe their deep UV regime. In a sense,
they constitute a robust extension of the standard Wilsonian paradigm for Quantum Field Theories.
In this talk I will present some recent developments in the study of the Solvable Irrelevant Deformations: the determination, in full generality, of their Form Factors. The latter are matrix elements
of operators between a vacuum and an n-particle state and constitute a set of building blocks that can be used to compute correlation functions. In IQFTs, these objects satisfy a set of equations that
allow us to bootstrap their exact expressions. Carrying on this procedure for Solvable Irrelevant Deformations one finds that the Form Factors take a factorised form as products of the unperturbed
objects with a factor containing the effects of the perturbation. With this result, it is then possible to analyse the effect of the perturbation on correlation functions. We will see that, depending
on the sign of the deformation parameters, the form factor expansion of correlation functions can be divergent or "hyper-convergent" and that these behaviours possess an intuitive interpretation in
terms of particles acquiring a positive or negative size, as was recently proposed by Cardy and Doyon.
Thursday, October 5
Title: Exploring integrable deformations
Sibylle Driezen (ETH Zürich, Switzerland)
Recent years have seen an upsurge of interest in deformations of two-dimensional sigma-models which preserve classical integrability when present in the original model. This property enables powerful
techniques for solving these models, even in non-trivial scenarios such as at strong coupling. This talk introduces classical integrability concepts and reviews the construction of a large family of
integrable deformed sigma-models. We will focus on the crucial role played by "worldsheet dualities", which have been naturally developed within the context of string theory. In the second part of
the talk, we will explore the interest of applying integrable deformations to the so-called AdS/CFT correspondence, a duality connecting highly symmetric string theories to gauge theories.
Specifically, we will focus on the "Jordanian" subclass of integrable deformations and provide insights into ongoing research in this area.
Thursday, September 28
Title: Theory of robust quantum many-body scars in long-range interacting systems
Alessio Lerose (University of Geneva)
Quantum many-body scars (QMBS) are exceptional energy eigenstates of quantum many-body systems associated with violations of thermalization for special non-equilibrium initial states. Their various
systematic constructions require fine-tuning of local Hamiltonian parameters. In this work we demonstrate that the setting of long-range interacting quantum spin systems generically hosts robust QMBS. We analyze spectral properties upon raising the power-law decay exponent $\alpha$ of spin-spin interactions from the solvable permutationally-symmetric limit $\alpha=0$. First, we numerically establish that although spectral signatures of chaos appear for infinitesimal $\alpha$, the towers of $\alpha=0$ energy eigenstates with large collective spin are smoothly deformed as $\alpha$ is increased, and exhibit characteristic QMBS features.
To elucidate the nature and fate of these states in larger systems, we introduce an analytical approach based on mapping the spin Hamiltonian onto a relativistic quantum rotor non-linearly coupled to
an extensive set of bosonic modes. We exactly solve for the eigenstates of this interacting impurity model, and show their self-consistent localization in large-spin sectors of the original
Hamiltonian for $0<\alpha<d$. Our theory unveils the stability mechanism of such QMBS for arbitrary system size, and predicts instances of its breakdown, e.g. near dynamical critical points or in the presence of semiclassical chaos, which we verify numerically in long-range quantum Ising chains.
Thursday, July 13
Title: Measuring quantum entanglement in materials with neutron scattering
Allen SCHEIE (Los Alamos National Laboratory)
Electron entanglement is ubiquitous in solid state quantum materials, underpinning exotic states like superconductivity and quantum spin liquids. However, it has been historically very difficult to
experimentally measure, which hampers our understanding of such states. I will discuss our recent work showing that quantum Fisher Information can be extracted from neutron scattering data, which
gives a lower bound on entanglement depth. We have also shown that by transforming neutron scattering data into real space, one obtains the expectation value of the spin-spin commutator with atomic
resolution, which serves as an alternative measure of many-body "quantumness". I will end by discussing our experiments witnessing entanglement in a 2D system, and extracting a temperature-dependent
quantum length scale via quantum covariance. These entanglement studies have shed light on even well-studied states like the 1D Heisenberg antiferromagnet, and promise a new vista on strongly
correlated many-body physics.
Thursday, July 6
Title: Solutions of the cylindrical Korteweg-de Vries equation related to the Airy determinantal point process.
Sofia Tarricone (IPhT, CEA Saclay)
In this talk we will study solutions of the cylindrical KdV equation built up using the Janossy densities of the thinned, shifted and dilated Airy determinantal point process. We will see how they can be interpreted as Darboux transformations of known solutions obtained in terms of the Fredholm determinant of the so-called finite-temperature Airy kernel. Finally we will describe their asymptotic behavior in different regimes. This is based on joint work with T. Claeys, G. Glesner and G. Ruzza, available at arXiv:2303.09848.
Thursday, May 11
Title: From light to colour: single photons' frequency as quantum continuous variables
Pérola MILMAN (MPQ Université Paris Cité)
Last year's Nobel Prize was awarded for experiments proving non-local aspects of quantum physics using polarization-entangled photons. Although polarization is a discrete mode that is well-defined for all field statistics and classical fields, polarization measurements in the subspace of single photons exhibit quantum mechanical behavior that can be described using observables associated with Pauli matrices. This raises the question: how do continuous modes, such as frequency and time intervals, behave in this same subspace, i.e., the one composed of single photons only?
In this talk, we show that in the subspace of single photons frequency and time intervals can be associated with observables possessing a continuous spectrum with the same properties as the position
and momentum of a particle or the field's quadratures. We develop a quantum mechanical framework to describe these variables and show how to represent them in phase space and to directly measure
their Wigner function. As an application, we also discuss a problem and experiments in quantum metrology that highlight what should be considered classical versus quantum optical resources.
Finally, we clearly point out the fundamental differences and equivalences between our formalism and quadrature based quantum optical systems.
Thursday, May 4
Title: Local unitary invariance, the tensor HCIZ integral, and applications
Luca Lionni (Institut für Theoretische Physik, Heidelberg University, Germany)
LU-invariants are polynomials of tensor variables that are invariant under conjugation of these variables by tensor products of D unitary matrices. They are relevant in quantum information, in the
study of the entanglement properties of D-partite density matrices, but also in discrete geometry, as they are dual to colored triangulations in dimension D. They also constitute the correlation
functions of random tensors whose distributions possess this local unitary invariance.
It is possible to expand, on the family of LU-invariants, certain integrals over tensor products of N x N unitary matrices that generalize the celebrated HCIZ integral, and to study the limit where N goes to infinity. I will give an overview of how these results apply to (and connect) the following topics:
i) the study of randomized measurements of multipartite quantum systems,
ii) the construction of a theory of free probability for random tensors,
iii) discrete and random geometry in dimension two and higher.
Thursday, March 30
Title: Lossy one-dimensional quantum gases
Leonardo Mazza (LPTMS - Orsay)
It has long been thought that coupling to an environment can only be detrimental to quantum effects; recently this viewpoint has changed. In this talk I will summarize recent studies that I have co-authored in which we consider one-dimensional quantum gases in the presence of two-body losses. I will show that losses are a very interesting mechanism that drives the system out of equilibrium through non-equilibrium states. By considering the cases of two-body losses in fermionic systems with SU(2) or SU(3) symmetries, I will argue that the steady state of the loss dynamics is non-trivial and features some metrologically useful spin entanglement.
Thursday, March 23
Title: Corner Symmetries in Gravity
Luca Ciambelli (Perimeter Institute, Canada)
Abstract: In the last 7 years, we have gathered many results pointing toward an underlying universal symmetry structure in gravity. I will give an overview of these recent results, focusing in particular on two main features. First, we will derive the universal corner symmetry algebra, which can be regarded as the algebra of observables in classical gravity. Second, we will propose an extension of the gravitational covariant phase space such that all diffeomorphism charges are integrable, even though the system is still dissipative. These two ingredients are at the core of the corner proposal, a bottom-up, symmetry-based approach to quantum gravity. After stating and discussing this proposal, time permitting, I will mention how the geometric degrees of freedom at cuts of null hypersurfaces can be quantized, and the far-reaching consequences of this for quantum gravity.
Thursday, March 16
Title: Stringy black holes and Wald entropy
Tomás Ortín (Instituto de Física Teórica UAM/CSIC, Madrid)
In order to test whether the microscopic entropy computed by string/(AdS/CFT) methods matches the macroscopic one for stringy black holes at higher orders in alpha' it is crucial to have a reliable
computation of the latter. A prominent candidate is the quantity that plays the role of the entropy in the first law of black hole thermodynamics for any matter-free diff-invariant theory: the Wald
entropy. Iyer and Wald gave a widely used prescription to compute it when the matter fields are tensors. However, in the case of the black-hole solutions of the heterotic superstring effective action
to first order in alpha' the entropy obtained using this prescription fails to satisfy the first law.
The main reason for this failure is the fact that most matter fields have gauge freedoms and, therefore, they are not tensors.
In this talk I will show how to compute the diffeomorphism Noether charge (Wald entropy) by dealing correctly with the gauge freedoms of the matter fields. I will apply this methodology to different theories, including the heterotic superstring effective action to first order in alpha'. The resulting formula will be used to compute the alpha' corrections to the entropy of several black-hole solutions of the heterotic superstring effective action to first order in alpha'.
Thursday, March 2
Title: The bosonic skin effect: boundary condensation in asymmetric transport
Louis GARBE (TU Wien, Austria)
We study the incoherent transport of bosonic particles through a one-dimensional lattice with different left and right hopping rates, as modelled by the asymmetric simple inclusion process (ASIP). Specifically, we show that as the current passing through this system increases, a transition occurs, signified by the appearance of a characteristic zigzag pattern in the stationary density profile near the boundary. In this highly unusual transport phase, the local particle distribution alternates on every site between a thermal distribution and a Bose-condensed state with broken U(1) symmetry. Furthermore, we show that the onset of this phase is closely related to the so-called non-Hermitian skin effect and coincides with an exceptional point in the spectrum of density fluctuations. This effect therefore establishes a direct connection between quantum transport, non-equilibrium condensation phenomena and non-Hermitian topology, which can be probed in cold-atom experiments or in systems with long-lived photonic, polaritonic and plasmonic excitations.
Thursday, February 9
Title: Edge Deformations of Quantum Hall Droplets
Blagoje Oblak (CPHT, École Polytechnique)
The study of two-dimensional droplets of electrons in a strong magnetic field lies at the heart of the quantum Hall effect. In this talk, I present recent results on geometric deformations of such
droplets, resulting from variations of the underlying spatial metric and/or confining potential. Time-dependent variations give rise to Berry phases that can remarkably be written in closed form
despite the fact that the underlying parameter space is infinite-dimensional. In particular, I argue that a large class of deformations that generalize squeezing and shearing probe the edge modes of
the system, including their topological central charge.
(Based on 2212.12935 and 2301.01726 + ongoing work)
Thursday, February 2
Title: Integrable systems as the asymptotic dynamics of AdS_{3} gravity
Marcela Cardenas (U. Santiago, Chile)
In this talk we discuss the geometrization of 1+1 integrable systems included in the AKNS integrable system, which contains the Korteweg-de Vries (KdV), modified KdV, sine-Gordon and non-linear
Schrödinger equations. This is possible through the construction of a broad class of asymptotic conditions for the gravitational field reproducing the properties of the AKNS dynamics. We study the
consistency, asymptotic symmetry algebra and integrability properties of these novel boundary conditions.
Thursday, January 12
Title: Some consequences of collective bath-interaction applied to thermodynamic tasks
Camille LOMBARD-LATUNE (Laboratoire de Physique, ENS de Lyon)
In a bid to give some elements of an answer to the vast question ``Can quantum properties be used to enhance (thermodynamic) operations?'', we focus on a situation where quantum properties can naturally emerge from the dynamics itself, namely bath-induced coherences. We briefly detail, physically and mathematically, through master equations applied to the system of interest, the phenomenon of bath-induced coherences, and show that it is intimately related to the indistinguishability of some energy levels from the point of view of the bath.
Focusing on spin ensembles, we then present some thermodynamic consequences of bath-induced coherences/collective dissipation in terms of energy, free energy and entropy production. Finally, some applications to Otto engines and equilibrium thermometry are mentioned. We conclude with some perspectives and questions.
Thursday, November 24
Title: Chiral and topological aspects of E-models
Daniel Thompson (Swansea University, UK)
E-models are an important class of two-dimensional QFTs; on the one hand they provide duality invariant parent theories from which Poisson-Lie T-dual pairs of sigma-models are obtained and on the
other hand for appropriate choices of data they describe integrable models. This talk will provide an overview of these theories, their applications and the Poisson-Lie duality that underpins them.
The first part of the talk will review various formulations of the chiral dynamics required to describe E-models, and in doing so we will clarify linkages between formulations of chiral bosons that have wider application. In the second part of the talk we will address some topological aspects of Poisson-Lie duality.
Thursday, November 17
Title: Strings without Supersymmetry: an overview
Ivano Basile (Arnold-Sommerfeld Center for Theoretical Physics, Munich, Germany)
Building realistic models of the universe from string theory is a remarkably intricate challenge, with many subtleties that need to be addressed simultaneously. Among these, breaking supersymmetry
stands as one of the most insidious. In this talk I will contrast the traditional approach, where supersymmetry is left unbroken until a low-energy mechanism intervenes, with another approach, where
supersymmetry is either absent or broken at high energies from the outset. This leads one to consider the three known non-supersymmetric string theories in ten dimensions with no perturbative
tachyons as a starting point. I will compare pros and cons of each approach, and describe some phenomenological constructions arising from these models, which feature unstable anti-de Sitter vacua
nucleating de Sitter-like bubbles where the universe could live.
Thursday, November 3
Title: A journey into curved spaces
Lavi Upreti (University of Konstanz)
Motivated by recent experimental breakthroughs in realizing hyperbolic lattices in superconducting waveguides, in the first part we compute the Hofstadter butterfly on (regular) hyperbolic polygons [1]. We obtain the true hyperbolic bulk spectrum by utilizing large hyperbolic lattices with periodic boundary conditions. Our results reveal that the butterfly spectrum with large extended gapped regions prevails and that its shape is universally determined by the underlying tiling polygon, while the fractal structure is lost. We explain how these intriguing features are related to the nature of Landau levels in hyperbolic space. In the second part, we study the hyperbolic drum and show how it differs from the Euclidean drum [2]. This is demonstrated by calculating the spectrum of the Laplacian (static test), where the eigenmode ordering differs between the two cases. We also propose a dynamic test, in which an excitation on a hyperbolic drum follows a (hyperbolic) geodesic, a smoking-gun signature of hyperbolic geometry. These two claims are then backed by experimental observation in electrical circuits.
Thursday, October 20
Title: Non-Hermitian topological phases in traveling-wave parametric amplifiers
Alvaro Gomez-Leon (Institute of Fundamental Physics, Madrid)
Amplification is at the heart of different areas of science and technology. For example, it is used in music production, telecoms, medical diagnosis and quantum technologies. Achieving large gain and low noise during the amplification process is one of the main objectives of their development. I will show that ideas from topological condensed matter systems can be used to design high-quality amplifiers in which topology plays a crucial role. These amplifiers are quite robust to disorder, the phase-matching between modes is automatically implemented, they can amplify a wide range of frequencies, their gain is exponential in the number of sites and their signal-to-noise ratio is near quantum limited. In this talk I will discuss the theory behind these topological amplifiers [1] and a possible experimental implementation in the microwave regime using Josephson junctions [2].
[1] “Non-Hermitian topological phases in traveling-wave parametric amplifiers”. Á. Gómez-León, T. Ramos, A. González-Tudela and D. Porras. arXiv:2207.13715.
[2] “Directional Josephson traveling-wave parametric amplifier via non-Hermitian topology”. T. Ramos, Á. Gómez-León, J. J. García-Ripoll, A. González-Tudela and D. Porras. arXiv:2207.13728.
Thursday, June 23
Title: Critical behaviour of interacting thermodynamic machines
Alberto Imparato (Department of Physics, University of Aarhus, Denmark)
It is known that in an equilibrium system approaching a critical point, the response to a change in an external thermodynamic force can become significantly large. In other words, an equilibrium
system at the verge of a second-order phase transition is highly susceptible to external thermodynamic forces.
Starting from this premise, in my talk I will discuss the properties of systems of interacting thermodynamic machines that operate at the verge of a phase transition. I will focus on the performance
of different types of out-of-equilibrium machines converting heat or other forms of energy into useful work.
Specifically, I will consider:
i) an out-of-equilibrium lattice model consisting of 2D discrete rotators, in contact with heat reservoirs at different temperatures,
ii) an out-of-equilibrium Frenkel--Kontorova model moving over a periodic substrate and in a position-dependent temperature profile,
iii) a transverse-field Ising model undergoing a quantum phase transition, and operating as a battery-charger system.
For each of these systems, I will argue that the optimal operating regime occurs when the system is driven out-of-equilibrium in proximity of a phase transition.
Thursday, June 9
Title: 'Fractonicity' from elasticity
Leo Radzihovsky (University of Colorado, Boulder)
I will discuss the burgeoning field of "fractons", a class of models in which quasi-particles are strictly immobile or display restricted mobility. Focusing on just a corner of this fast-growing subject, I will explain how one class of such theories, symmetric tensor gauge theories, surprisingly emerges from the seemingly mundane elasticity of a two-dimensional quantum crystal. The disclination and dislocation crystal defects map onto charges and dipoles of the fracton gauge theory, respectively. This fracton-elasticity duality leads to predictions of fractonic phases and quantum phase transitions to their descendants, which are duals of the commensurate crystal, supersolid, smectic, and hexatic liquid crystals. Extensions of this duality to generalized elasticity theories provide a route to the discovery of new fractonic models and their potential experimental realizations.
Thursday, May 5
Title: On maps with tight boundaries
Jérémie Bouttier (IPhT, CEA Saclay and Laboratoire de Physique, ENS de Lyon)
Maps, in the combinatorial sense, are discrete surfaces made of polygons
glued together. Over the last 20 years, very precise results on the
geometric properties of random maps have been obtained. However, most of
the focus has been so far on the spherical (planar) case. Maps of other
topologies (higher genus/more boundaries) are well-understood on the
enumerative side, thanks to advanced techniques such as topological
recursion, but it is unclear how to extend this understanding to
geometrical aspects. I will report on an ongoing project with E. Guitter
and G. Miermont where we explore this question. Based on
arXiv:2104.10084, arXiv:2203.14796 and work in progress.
Thursday, April 14
Title: Asymptotic symmetries of (super)gravity - A tale of “two infinities”
Sucheta Majumdar (Laboratoire de Physique, ENS de Lyon and Labex LIO)
The study of asymptotic symmetries is very sensitive to gauge choices, boundary conditions and how one approaches infinity, along a spacelike or null direction. Originally discovered as the asymptotic symmetry of Einstein’s gravity at null infinity, the BMS group is an infinite-dimensional extension of the Poincaré group. At spacelike infinity, however, the standard boundary conditions only allow for the Poincaré group, not BMS. I will discuss how the BMS symmetry can be realised at spatial infinity as well, thereby resolving a longstanding disparity between the “two infinities”. Our methods rely on key aspects of Hamiltonian dynamics, which also shed light on the structure of the BMS algebra. In the last part of the talk, I will briefly focus on the case of supergravity, where the underlying super-Poincaré algebra is enhanced to a super-BMS algebra at infinity.
Thursday, April 7
Title: Is the glass a genuine state of matter?
Benjamin Guiselin (Laboratoire de Physique, ENS Lyon)
The glass is usually obtained by cooling a liquid until the viscosity becomes so high that the sample cannot flow on an experimental timescale. It is thus often considered as a frozen
out-of-equilibrium liquid. However, the recent exact mean-field theory of glass formation [1] suggests that an "ideal glass" phase exists as a genuine state of matter separated from the liquid phase
by a first-order equilibrium phase transition. After reviewing the mean-field results, I will turn to computer simulations of the thermodynamics of finite-dimensional glass-forming liquids in order to assess what remains of mean-field theory in finite dimensions [2]. In particular, I will show that the thermodynamics of supercooled liquids is ruled by glass metastability and the approach to an equilibrium phase transition at low temperature, in agreement with mean-field results. I will also show that two-dimensional and three-dimensional glass-forming liquids have important differences in their thermodynamic properties, which can be accounted for by the random-field Ising model universality class of the transitions [3]. The latter is the consequence of the existence of a "self-induced
disorder" in supercooled liquids and I will present an original numerical procedure in order to measure the statistics of this static heterogeneity [4].
[1] G. Parisi, P. Urbani and F. Zamponi, Theory of simple glasses: exact solutions in infinite dimensions, Cambridge University Press (2020).
[2] B. Guiselin, L. Berthier and G. Tarjus, "Statistical mechanics of coupled supercooled liquids in finite dimensions", SciPost Physics 12(3), 091 (2022).
[3] B. Guiselin, L. Berthier and G. Tarjus, "Random-field Ising model criticality in a glass-forming liquid", Physical Review E 102(4), 042129 (2020).
[4] B. Guiselin, G. Tarjus and L. Berthier, "Static self-induced heterogeneity in glass-forming liquids: Overlap as a microscope", arXiv:2201.10183 (2022).
Thursday, March 17
Title: Satellite Quantum Communications at Thales Alenia Space : Quantum Key Distribution and Quantum Internet
Laurent DE FORGES (Thales Alenia Space)
After my PhD in condensed matter in 2012 (simulations of the Bose-Hubbard model and quantum phase transitions) and five years of postdocs, I left fundamental research in 2018 for the space industry. I will briefly explain how a theoretical researcher can eventually join French industry after many obstacles and pitfalls.
Then, I will present satellite quantum communications at Thales Alenia Space. Our two main disruptive activities concern Quantum Key Distribution (QKD) for cryptography needs and the Quantum Internet, which will connect quantum computers together by around 2035. QKD, which provides the highest security level for our communications, is a countermeasure to the quantum-computer threat. First developed in research labs starting in 1984, QKD is now an industrial topic of interest. I will explain why satellites are essential assets for the European QKD network we are developing for the European Commission with the European Space Agency. I will also explain how entangled photons and quantum teleportation will be used in the quantum internet. My talk will highlight the role of a theoretical quantum physicist in the space industry.
Satellite-based Quantum Information Networks: Use cases, Architecture, and Roadmap, https://arxiv.org/pdf/2202.01817.pdf
Thursday, March 10
Title: Form factor approaches to out-of-equilibrium dynamics in integrable models
Etienne Granet (Kadanoff Center for Theoretical Physics, Chicago, United States)
Form factor expansions are a powerful tool for computing correlation functions in integrable models. I will present the calculation of out-of-equilibrium correlations in two different models using this technique. I will first consider the XY model in a field, which can be mapped to free fermions, and show how to obtain Fredholm determinant formulas for out-of-equilibrium expectation values for any variation of the external magnetic field. I will then consider the Lieb-Liniger model and explain how to perform a strong coupling expansion after a quantum quench.
Thursday, February 10
Title: Generalised Density Profiles in Single-File Systems
Aurélien Grabsch (Sorbonne Université, Paris)
Single-file transport, where particles diffuse in narrow channels while not overtaking each other, is a fundamental model for the tracer subdiffusion observed in confined systems, such as zeolites or
carbon nanotubes. This anomalous behavior originates from strong bath-tracer correlations in 1D, which we characterise in this talk through Generalised Density Profiles (GDPs). These GDPs have
however remained elusive, because they involve an infinite hierarchy of equations. Here, for the Symmetric Exclusion Process, a paradigmatic model of single-file diffusion, we break the hierarchy and
unveil a closed equation satisfied by these correlations, which we solve. Beyond quantifying the correlations, the central role of this equation as a novel tool for interacting particle systems will be further demonstrated by showing that it applies to out-of-equilibrium situations, other observables and other representative single-file systems.
* Generalized Correlation Profiles in Single-File Systems
Alexis Poncet, Aurélien Grabsch, Pierre Illien, Olivier Bénichou
Phys. Rev. Lett. 127, 220601 (2021), arXiv:2103.13083
* Closing and Solving the Hierarchy for Large Deviations and Spatial Correlations in Single-File Diffusion
Aurélien Grabsch, Alexis Poncet, Pierre Rizkallah, Pierre Illien, Olivier Bénichou
Thursday, February 3
Title: Deformations and dualities in string theory and integrable models
Riccardo Borsato (Universidad de Santiago de Compostela)
I will review recent progress in the identification and classification of solution-generating techniques, which can be understood as deformations or generalised duality transformations. In the
context of string theory, these solution-generating techniques may be viewed as methods to generate supergravity backgrounds (or even their alpha'-corrections) when starting from a "seed"
supergravity solution. In the context of integrability, they also allow us to generate integrable sigma-models when starting from a seed one. The combination of these two applications has interesting
motivations for generalisations of the AdS/CFT correspondence that may additionally be treated by the exact methods of integrability. After a generic introduction, I will review the ideas behind the
construction of such solution-generating techniques and the methods that allow us to classify them.
Thursday, January 27
Title: What is common between disordered elastic systems, the sandpile model, loop erased random walks and the phi4 theory?
Andreï Fedorenko (Laboratoire de Physique, ENS de Lyon)
Abstract: I will give a brief introduction to disordered elastic systems, the sandpile model and loop-erased random walks. These models have diverse applications but they are difficult to study. I will show how these problems, which seem unrelated at first glance, can be mapped to each other and connected to the phi4 theory, which drastically simplifies their study.
Thursday, January 20
Title: Higher-order topological insulators and superconductors and beyond
Luka Trifunovic (University of Zürich)
Not so long ago, the concept of higher-order topological insulators and superconductors was mentioned for the first time in print (see Viewpoint [1]). Since then, the concept of higher-order topology
has been recognized to be key for the description of boundary phenomenology of topological crystalline insulators and superconductors.
Although higher-order boundary phenomenology can exist without a topologically nontrivial bulk band structure, recent excitement stems from the observation that it can also be a consequence of (and
protected by) nontrivial bulk topology if additional crystalline symmetries are present. In this picture, higher-order topological phases are not a new kind of topological phases, rather they are a
new type of boundary manifestation of nontrivial bulk topology.
In this talk, I will introduce extrinsic and intrinsic higher-order boundary phenomenology and discuss both bulk and boundary topological classification. A new type of bulk-boundary correspondence that naturally includes higher-order boundary phenomenology will also be discussed [2].
[2] L Trifunovic, PW Brouwer, physica status solidi (b) 258 (1), 2000090 (2021)
Thursday, January 13
Title: Persistent currents and entanglement in a Bose-Bose mixture after an interaction quench.
Dominique Spehner (Departamento de Ingeniería Matemática, Universidad de Concepción, Chile)
In this talk we consider a Bose-Bose mixture formed by two atomic gases with different atomic species trapped in a one-dimensional ring lattice potential with an artificial gauge field. We focus on
the out-of-equilibrium dynamics of this mixture after a sudden quench from zero to strong interspecies interactions, and discuss how these interactions modify the single-gas persistent currents. By
analyzing both perturbatively and numerically the dynamics of the Bose-Hubbard model for a finite ring, we show that in certain parameter regimes there exist universal relations between the relative
variation of the single-gas currents, the amount of entanglement between the two gases, and their initial visibility. In particular, the entanglement, quantified by the second Rényi entropy of the
reduced state of one species, scales linearly with the number of sites and is proportional to the relative variation of the current. We argue that this may provide a way to measure interspecies
entanglement experimentally in this setup.
Thursday, December 9
Title: Effective models for emerging anyons
Nicolas ROUGERIE (UMPA, ENS de Lyon)
Fundamental particles come in two types: fermions and bosons, according to whether they satisfy the Pauli exclusion principle or they do not. However, quasi-particles of certain low-dimensional
condensed matter systems may violate this fundamental dichotomy and have an intermediate behavior. Such exotic objects, called anyons, can be described as ordinary bosons and fermions with special
long-range magnetic interactions. This leads to intricate models for which well-educated approximations are desirable.
In this talk I will survey recent, mathematically rigorous, derivations of such approximations from the basic many-body Schrödinger Hamiltonian with Aharonov-Bohm magnetic fluxes. We study two limit
situations where the anyon statistics/magnetic interaction is seen as a perturbation either "from the bosonic end" or "from the fermionic end". We vindicate mean-field-type approximations, proving
that the ground state of a gas of anyons is described to leading order by a magnetic non-linear Schrödinger theory (bosonic end) or a semi-classical, Vlasov-like, energy functional (fermionic end).
Joint work with Michele Correggi, Romain Duboscq, Théotime Girardot, Antoine Levitt and Douglas Lundholm.
Thursday, December 2
Title: Dissipative critical phenomena
Fabrizio MINGANTI (Ecole Polytechnique Fédérale de Lausanne)
Dissipation is often regarded as an obstacle to the realization of quantum technology. However, if properly controlled and engineered, dissipative processes can be harnessed for technological
advantages. The purpose of this talk is to discuss some uncanny critical phenomena emerging in open quantum systems, using the formalism of the Lindblad master equation and of the Liouvillian
superoperator [1]. In particular, I will discuss peculiar phenomena occurring in dissipative systems, related to the emergence of multistability [2], to spontaneous symmetry breaking [3], and to $\mathcal{PT}$-symmetry simulations [4]. I will also briefly discuss how dissipative critical systems can bring an advantage to metrological protocols [5].
[1] FM, A. Biella, N. Bartolo, and C. Ciuti, Phys. Rev. A 98, 042118 (2018)
[2] FM, I. I. Arkhipov, A. Miranowicz, and F. Nori, arXiv:2103.05625 (accepted in PRR)
[3] FM, I. I. Arkhipov, A. Miranowicz, and F. Nori, arXiv:2110.11902 (accepted in NJP)
[4] I. I. Arkhipov, FM, arXiv:2110.15286
[5] R. Di Candia, FM, K. V. Petrovnin, G. S. Paraoanu, S. Felicetti, arXiv:2107.04503
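For orientation (a standard form, not part of the abstract), the Lindblad master equation underlying this formalism reads
$$\partial_t \hat{\rho} = \mathcal{L}\hat{\rho} = -\frac{i}{\hbar}\left[\hat{H}, \hat{\rho}\right] + \sum_{k} \left( \hat{L}_k \hat{\rho}\, \hat{L}_k^{\dagger} - \frac{1}{2}\left\{ \hat{L}_k^{\dagger} \hat{L}_k , \hat{\rho} \right\} \right),$$
where the $\hat{L}_k$ are jump operators describing the coupling to the environment; the critical phenomena discussed in the talk are encoded in the spectrum of the Liouvillian superoperator $\mathcal{L}$.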
Thursday, November 25
Title: Integrability and four-dimensional Chern-Simons theory
François Delduc (Laboratoire de Physique, ENS de Lyon)
We shall study the relation, which with my usual perspicacity I at first did not believe to hold, between a four-dimensional version of the Chern-Simons (CS) model and two-dimensional integrable sigma models. It will turn out that the four dimensions of the CS model comprise two-dimensional Minkowski space and the complex plane of the spectral parameter. It turns out that probably all known integrable 2D models, and maybe some that are not yet known, may be encoded in a 4D CS model.
Thursday, November 18
Title: Multipoint conformal blocks and Gaudin models
Sylvain Lacroix (Institute for Theoretical Studies, ETH, Zürich)
In this talk, I will discuss a relation between Gaudin integrable models and multipoint conformal blocks. The latter are the building blocks of correlation functions in conformal field theories (in
arbitrary dimension). After reviewing their definition, I will explain how these conformal blocks can be characterised as eigenvectors of a complete set of commuting differential operators, arising
as a specific limit of the Hamiltonians of a Gaudin model.
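As a point of reference (standard material, not taken from the abstract), the commuting Hamiltonians of a Gaudin model with sites $z_1, \dots, z_N$ take the form
$$\mathcal{H}_i = \sum_{j \neq i} \frac{\sum_a t^{(i)}_a\, t^{(j)}_a}{z_i - z_j}, \qquad [\mathcal{H}_i, \mathcal{H}_j] = 0,$$
where the $t^{(i)}_a$ are generators of the symmetry algebra acting at site $i$; a suitable limit of these commuting operators yields the differential operators characterising the conformal blocks.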
Thursday, October 28
Title: Phase transitions in driven-dissipative many-body quantum systems: Gutzwiller quantum trajectories and a study of mean-field validity
Dolf Huybrechts (Laboratoire de Physique, ENS de Lyon)
Open quantum systems have become a subject of intense research in recent years due to technological and experimental advances and their potential for quantum information applications. An open quantum system interacts with its environment, with which it can exchange, e.g., particles or energy. Usually this type of interaction results in a dissipation of the system's energy into the environment, and a drive is needed to compensate for this loss. The competition of these driving and dissipation processes can result in very interesting physical phenomena that are markedly
distinct from their equilibrium counterparts. Subsequently, the theoretical interest in these systems has burgeoned and a plethora of theoretical techniques have been developed. Due to the scarcity
of analytical solutions, these are mainly based on numerical simulations. A crucial obstacle to be overcome is the exponential growth in computational resources that is required in a numerically
exact approach. As a result, there is a clear need for the development of approximative methods and methods that exploit the symmetries that are present in these systems to allow for a more efficient
numerical study.
In this seminar I will give a (short) introduction to open quantum systems described by a Lindblad master equation. Thereafter, I will discuss the validity of the mean-field assumption in these
systems, more particularly in the dissipative (anisotropic) XYZ Heisenberg model. Subsequently, the influence of correlations on the description of the system will be briefly discussed, both in the
XYZ model as well as in the driven-dissipative Bose Hubbard model. Finally, I will introduce an efficient method to extract the properties of these open systems in the long time limit.
Thursday, October 21
Title: Berry-Chern monopoles and Spectral flows (or why are topological boundary states everywhere?)
Pierre Delplace (Laboratoire de Physique, ENS de Lyon)
I would like to discuss a cornerstone concept of wave topology that pops up from condensed matter physics to classical waves (e.g. in optics or fluids): the deep connection between a topological property of the waves (Berry-Chern monopoles) in a homogeneous system and the existence of unidirectional modes (spectral flow) in an inhomogeneous one. I will introduce a simple
pedagogical model to illustrate this correspondence. The presentation will be done on the blackboard.
Thursday, October 7
Title: On quantum separation of variables
Giuliano Niccoli (Laboratoire de Physique, ENS de Lyon)
I will describe our new quantum separation of variables (SoV) method for the exact and complete solution of the spectral and dynamical problems of integrable quantum models. It relies exclusively on the quantum integrable structure of the analyzed models (i.e. their commuting conserved charges) to achieve their resolution. Our SoV method should therefore put on the same footing the quantum integrability of a model and its effective solvability. Indeed, other exact methods rely on additional requirements beyond integrability, which may reduce their applicability. Moreover, this is a non-Ansatz approach for which the completeness of the spectrum description is proven to be a built-in feature. It can be seen as the natural quantum analogue of the classical separation of variables in Hamilton-Jacobi theory, reducing highly coupled multi-degree-of-freedom spectral problems to independent one-degree-of-freedom ones. The transfer matrix wave functions are then factorized into products of its eigenvalues, and universal determinant representations of scalar products, and even of form factors of local operators, naturally appear in our SoV framework.
Thursday, July 1
Title: Edge states and symmetries in gravity
Marc Geiller (Laboratoire de Physique, ENS de Lyon)
In gravity the notion of energy is a priori ill-defined because there is no true Hamiltonian. Instead, the Hamiltonian is a pure constraint and as such vanishes on-shell. To understand the way out of
this difficulty, one should carefully distinguish between gauge and physical symmetries, a distinction which acquires meaning when working in bounded regions of spacetime. When considering boundaries, gravity reveals its holographic nature and gives access to new types of observables. I will explain how these observables and their algebraic structure give new hope for understanding the quantum nature of gravity.
Thursday, June 3
Title: Universal spin squeezing in the quantum dynamics of U(1)-symmetric spin Hamiltonians
Tommaso Comparin (Laboratoire de Physique, ENS de Lyon)
Spin squeezing - a central resource for quantum metrology - appears during the entangling evolution of an initially factorized spin state. Here we consider a large class of S=1/2 spin Hamiltonians
with axial symmetry, and we show that they induce a universal dynamics of spin squeezing at short time. This property is connected to the existence of a peculiar set of Hamiltonian eigenstates - the
so-called Anderson tower of states. Such states are related to the appearance of spontaneous symmetry breaking in quantum systems, and they are parametrically close to the eigenstates of a planar
rotor (Dicke states), in that they feature an anomalously large value of the total angular momentum.
We show that, starting from a coherent spin state, a generic U(1)-symmetric Hamiltonian featuring the Anderson tower of states generates the same squeezing evolution at short times as the one
governed by the paradigmatic one-axis-twisting (or planar rotor) model of squeezing dynamics. The full squeezing evolution is seemingly reproduced for interactions which decay sufficiently slowly
with the distance.
Our results connect quantum simulation with quantum metrology by unveiling the squeezing power of a large variety of Hamiltonian dynamics that are currently implemented by different quantum
simulation platforms - including for instance experiments with Rydberg atoms.
Reference: T. Comparin, F. Mezzacapo, and T. Roscilde, arXiv:2103.07354v1 [cond-mat.str-el].
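For context (standard definitions, not part of the abstract), the one-axis-twisting model mentioned above is
$$\hat{H}_{\rm OAT} = \chi\, \hat{S}_z^2, \qquad \hat{S}_z = \frac{1}{2}\sum_{i=1}^{N} \hat{\sigma}_i^z,$$
whose evolution of an initial coherent spin state along $x$ generates squeezing with an optimal squeezing parameter scaling as $\xi^2 \sim N^{-2/3}$ (Kitagawa-Ueda); the claim of the talk is that generic U(1)-symmetric Hamiltonians featuring an Anderson tower of states reproduce this short-time squeezing dynamics.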
Thursday, April 29
Title: Convergence: why bother?
Karol Kozlowski (Laboratoire de Physique, ENS de Lyon)
I will review the motivations for one of my current research interests, which aims at developing methods for proving the convergence of a class of series of multiple integrals, namely series whose n-th summand is given by an n-fold integral. These series arise in the context of studying correlation functions in quantum integrable systems, but also define a new class of special functions lying beyond the Painlevé class. More specifically, I will address the issue of convergence related to representations of two-point functions in the 1+1 dimensional massive quantum integrable Sinh-Gordon
field theory.
Thursday, April 8
Title: Tunable critical correlations in kagome ice and the approach to the Kasteleyn transition
Peter Holdsworth (Laboratoire de Physique, ENS de Lyon)
Phase transitions falling outside the Landau-Ginzburg-Wilson paradigm are a recurring theme of modern statistical physics. In this seminar
I will discuss one of the simplest examples - the Kasteleyn transition. This is a topological transition involving the deconfinement of
topological defects in a field with continuous symmetry. A model system showing a K-transition is kagome spin ice in an external field
for which a dual language exists between spins and hard core dimers on a hexagonal lattice. I will explain how kagome planes can be isolated
by applying a field in the [111] direction in a spin ice sample and discuss our recent neutron scattering experiments on single crystals
of holmium titanate in which the transition is approached as the field is slightly tilted into the kagome plane (https://arxiv.org/pdf/2102.06546.pdf).
I will show how the critical correlations of kagome ice are tuned, following the biaxial symmetry breaking of the field.
Thursday, March 4
Title: Berry Phases and Drift in the KdV Equation
Blagoje Oblak (CPHT)
I consider a model of fluid motion closely related to the Korteweg-de Vries equation that governs shallow water waves. Upon reformulating this model as a geodesic in an infinite-dimensional group,
the fluid's drift velocity can be recast as an ergodic rotation number. The latter is sensitive to Berry phases, inspired by conformal field theory and gravity, that are produced by adiabatic
deformations. Along the way, I show that the topology of coadjoint orbits of wave profiles affects drift in a dramatic manner: orbits that are not homotopic to a point yield quantized rotation
numbers. These arguments rely on the general structure of Euler equations, suggesting the existence of other applications of infinite-dimensional geometry to nonlinear waves.
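For reference (a common normalization, not taken from the talk; conventions for the coefficients vary), the Korteweg-de Vries equation for the wave profile $u(x,t)$ reads
$$\partial_t u + 6\, u\, \partial_x u + \partial_x^3 u = 0,$$
combining the nonlinear advection term $6 u\, \partial_x u$, responsible for wave steepening, with the dispersive term $\partial_x^3 u$.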
Thursday, February 25
Title: Towards celestial holography
Laura Donnay (TU Wien, Austria)
Universal relationships between asymptotic symmetries, quantum field theory soft theorems, and low-energy observables have reinvigorated attempts at flat space holography. In this talk I will review recent advances in the celestial holography proposal, which aims at establishing a dual description of gravity in asymptotically flat spacetimes in terms of correlators on the celestial sphere at null infinity.
Thursday, February 18
Title: Robustness and invariance of entanglement in symmetry-protected topological phases at and away from the phase transition
Pierre FROMHOLZ (ICTP, Trieste, Italy)
Gapped topological phases of matter display exclusive entanglement properties that could prove useful in topological quantum computers. Many of these properties are unknown, in particular for symmetry-protected topological phases (SPTP). In my presentation, I will summarize a series of works (some of which I contributed to) showing that the ground state of low-dimensional SPTP displays long-range entanglement between the edges, and that this entanglement can be extracted using the "disconnected entanglement entropy" S_D. I show that this quantity is measurable (although with difficulty), that it is quantized, robust to disorder, and robust to quenches in the topological regime. I finally show that the quantity can be used at the phase transition to obtain a seemingly universal critical exponent, making S_D a non-local analogue of an order parameter.
Thursday, February 11
Title: Quantizing driven superconducting circuits: drive-induced nonlinear enhancements to the Purcell effect and the measurement problem
Alex Petrescu (Université de Sherbrooke, Québec)
With current advances in state preparation, as well as gate and measurement operations, superconducting circuits are now a leading architecture for quantum information processing. As these systems
are scaled up, strict requirements on the fidelity of operations required for computation and readout are imposed. In this talk we focus on the so-called “readout problem” in superconducting circuit
quantum electrodynamics: several experiments have shown that qubit energy relaxation rates may become strongly dependent on the power of the measurement drive, even for moderate or weak drives; this
hampers efforts to improve readout fidelity. To explain this, we devised a perturbation theory for driven-dissipative, weakly anharmonic, superconducting circuits based on a sequence of unitary
transformations. Applied to a transmon qubit coupled to a readout resonator, this approach allows us to classify the nonlinear processes that enhance qubit relaxation in the presence of resonator
photons. We will then discuss a more general framework for quantizing driven superconducting circuits, with applications to the study of parametric gates, Josephson parametric amplifiers, and
multi-qubit systems.
Thursday, January 28
Title: Einstein's fluctuation relation and Gibbs states far from equilibrium
Alexandre Lazarescu (Université Catholique de Louvain)
I will present a class of one-dimensional nonequilibrium interacting particle models characterised by a so-called "gradient condition" which generalises detailed balance and guarantees the existence
of Gibbs-type local homogeneous stationary states.
I will show how, defining appropriate boundary conditions, this leads to a special symmetry of the models under time and space reversal which, rephrased in terms of the large deviations function of
stationary currents of conserved quantities, yields a novel fluctuation relation under reservoir exchange, unrelated to the standard Gallavotti-Cohen symmetry.
I will then show that this relation can be interpreted as a nonequilibrium and nonlinear generalisation of Einstein's relation, which points to the existence of a Langevin-type hydrodynamic equation for the macroscopic behaviour of those models.
Thursday, January 21
Title: Exotic phases of cluster-forming systems
Adriano ANGELONE (ICTP - Trieste, Italy)
I will present my recent results on bosonic systems featuring extended-range
interactions, of interest for experiments with cold Rydberg-dressed atoms. In
my previous work, I proved these Hamiltonians to host a wide variety of
interesting physical phenomena, including (super)solid phases of clusters of
particles, as well as out-of-equilibrium glass and superglass states (the
latter displaying the coexistence of glassy physics and superfluidity).
In this talk, I will discuss my demonstration, in the ground-state regime of
this class of models, of a novel type of phase transition between two
supersolid states characterized by different crystalline and superfluid
exchange structures. I will then discuss my results on the out-of-equilibrium
counterparts of the states mentioned above, which I prove to be glasses and
(super)solids (the latter featuring crystalline structures in general
remarkably different from their ground-state counterparts) in an energy range
which would allow their observation in experimental realizations.
Thursday, January 14
Title: Flow Equation Methods for Many-Body Localisation
Steven THOMSON (Centre de Physique Théorique — Ecole Polytechnique)
The interplay between many-body interactions and quenched disorder in quantum systems can result in rich dynamical phenomena far from equilibrium. When the disorder is strong enough to prevent
thermalisation, this can lead to phases of matter with no equilibrium analogue, such as many-body localisation (MBL). In combination with periodic drive, MBL can even allow for the existence of
exotic states such as time crystals. Crucially, in MBL systems the eigenstate thermalisation hypothesis - which underpins the use of equilibrium statistical mechanics in isolated quantum systems -
dramatically fails: new theoretical approaches are needed.
In this talk, I will outline how the flow equation method can be used to directly obtain the emergent local integrals of motion that characterise MBL matter, and show how this allows us to compute
both static and dynamical quantities of strongly disordered systems on larger scales than those accessible with any other technique, including in two dimensions [1]. I will show how long-range
interactions can lead to the breakdown of many-body localisation [2,3], and how periodically driven (Floquet) systems can be treated within the same general formalism [4], paving the way for future
studies of time crystals in two-dimensional systems.
[1] - S. J. Thomson & M. Schiró, Phys. Rev. B 97, 060201(R) (2018)
[2] - S. J. Thomson & M. Schiró, Eur. Phys. J. B 93, 22 (2020)
[3] - S. J. Thomson & M. Schiró, Phys. Rev. Research 2, 043368 (2020)
[4] - S. J. Thomson, D. Magano & M. Schiró, arXiv:2009.03186
Thursday, December 10
Title: Transition between trivial and topological Ising paramagnets in 2D
Maxime Dupont (University of California, Berkeley)
In this talk, I will present the phase diagram of a one-parameter Hamiltonian interpolating between trivial and topological Ising paramagnets on the triangular lattice [1,2]. The only way to connect these two distinct states is via a quantum phase transition. Here, the transition does not occur via a single point. Instead, a whole new phase emerges where magnetic order settles in,
sandwiched between the topological and trivial paramagnetic ones.
The magnetic order takes the form of a stripe phase. Remarkably, it is gapless due to the incommensurability of the stripe pattern with the lattice. In return, the interfaces between these stripes
behave analogously to electric fields. They are subject to the laws of electrodynamics in the form of a deconfined U(1) gauge theory. This magnetic phase is a condensed matter realization of
"artificial light".
[1] M. Dupont, S. Gazit, and T. Scaffidi, arXiv:2008.06509
[2] M. Dupont, S. Gazit, and T. Scaffidi, arXiv:2008.11206
Thursday, June 25
Title: Diffusions in random environment
Guillaume Barraquand (Laboratoire de Physique, ENS Paris)
Consider the simple random walk on Z. What happens if transition probabilities are themselves random variables independent at each time and each location? Using a Bethe ansatz solvable model, a
random walk with Beta distributed transition probabilities, we will see that the extreme behavior of many random walks in the same environment is governed by scalings and statistics that arise in
random matrix theory and the Kardar-Parisi-Zhang universality class. Then we will see that the relevant continuous limit of the model is a stochastic flow, introduced by Le Jan-Raimond and partly
motivated by models of turbulence. Several diffusions following this stochastic flow behave as Brownian motions with a local attractive interaction called sticky Brownian motions. This talk is based
on joint works with Ivan Corwin, Mark Rychnovsky and Pierre Le Doussal.
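The Beta random walk in random environment mentioned above is easy to simulate. The sketch below (function name and parameters are mine; this is an illustration, not the Bethe-ansatz analysis of the talk) samples one shared environment of Beta-distributed right-step probabilities, independent at each time and location, and runs many walkers in it:

```python
import numpy as np

def beta_rwre_final_positions(n_walkers, n_steps, a=1.0, b=1.0, seed=0):
    """Simulate many walkers on Z in ONE shared Beta random environment.

    At each (time t, site x), an independent p(t, x) ~ Beta(a, b) gives the
    probability of stepping right; all walkers at (t, x) share this p.
    """
    rng = np.random.default_rng(seed)
    # Sites reachable after n_steps lie in [-n_steps, n_steps]; shift by n_steps.
    env = rng.beta(a, b, size=(n_steps, 2 * n_steps + 1))
    pos = np.zeros(n_walkers, dtype=int)
    for t in range(n_steps):
        p_right = env[t, pos + n_steps]            # environment at current sites
        go_right = rng.random(n_walkers) < p_right
        pos = pos + np.where(go_right, 1, -1)
    return pos

pos = beta_rwre_final_positions(n_walkers=2000, n_steps=200)
print(pos.max())  # extreme displacement among walkers in the same environment
```

The talk's result concerns precisely such extreme displacements: their fluctuations are predicted to follow Kardar-Parisi-Zhang scalings and statistics rather than Gaussian ones.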
Thursday, June 4
Title: Resolution of the exponent puzzle for the Anderson transition in doped semiconductors
Rudolf Römer (University of Warwick, UK)
The Anderson metal-insulator transition (MIT) is central to our understanding of the quantum mechanical nature of disordered materials. Despite extensive efforts by theory and experiment, there is
still no agreement on the value of the critical exponent ν describing the universality of the transition—the so-called “exponent puzzle.” Here, going beyond the standard Anderson model, we employ ab
initio methods to study the MIT in a realistic model of a doped semiconductor. We use linear-scaling density functional theory to simulate prototypes of sulfur-doped silicon (Si:S). From these we
build larger tight-binding models close to the critical concentration of the MIT. When the dopant concentration is increased, an impurity band forms and eventually delocalizes. We characterize the
MIT via multifractal finite-size scaling, obtaining the phase diagram and estimates of ν. Our results suggest an explanation of the long-standing exponent puzzle, which we link to the hybridization
of conduction and impurity bands.
Thursday, February 20
Title: Electrical detection of non-Abelian statistics in topological superconductors.
Aurélien Grabsch (Lorentz Institute, Leiden, Netherlands)
Topological superconductors can support quasiparticle excitations which present unusual exchange statistics, called non-Abelian anyons. They correspond to midgap states localized in the core of a
vortex or bound to the end of a nanowire. However, their unusual statistics cannot be easily demonstrated as they are immobile, and one must rely on indirect methods. Here, we propose a real-space alternative which relies on the chiral motion along the edges of a topological superconductor. We present an approach which allows one to inject on demand so-called edge vortices, i.e. pi-phase domain walls which propagate along the chiral edge channels and possess non-Abelian statistics. We show that the signatures of this unusual exchange statistics can be detected in an electrical measurement.
Electrical detection of the Majorana fusion rule for chiral edge vortices in a topological superconductor
C.W.J. Beenakker, A. Grabsch, Y. Herasymenko
SciPost Phys. 6, 022 (2019)
Time-resolved electrical detection of chiral edge vortex braiding
I. Adagideli, F. Hassler, A. Grabsch, M. Pacholski, C.W.J. Beenakker
Thursday, February 13
Title: Momentum-space atom correlations of interacting lattice bosons
David Clément (Institut d'Optique, Palaiseau)
Measuring the full distribution of individual quantum particles has emerged as a central approach to characterize many-body ground-states and many-body dynamics by means of correlation functions.
Over the past decade, various platforms, from trapped ions and superconducting circuits to arrays of cold atoms, have investigated strongly interacting matter through position-space and/or
spin-resolved correlations. In this talk I will present a complementary approach that consists in measuring the momentum-space correlations between quantum particles. This is achieved by detecting
individual metastable Helium-4 atoms in three dimensions and in the far-field regime of expansion, when released from an optical lattice.
I will briefly discuss the benchmarking of our technique with ab-initio quantum Monte-Carlo calculations [1] and the investigation of two-body collisions during the expansion [2]. Then I will report
on the measurement of the two-body and three-body correlations deep in the Mott insulator regime. We observe a perfectly contrasted bunching whose periodicity reproduces the reciprocal lattice. In
addition, we show quantitatively that the momentum-space correlations of a Mott insulator are of Gaussian nature [3]. Finally, I will present a recent observation of a Hanbury Brown and Twiss type of experiment with strongly-interacting lattice Bose-Einstein condensates [4]. The interpretation of the measured bunching in the depletion of the condensate is found to be compatible with that expected for Bogoliubov quasi-particles.
[1] H. Cayla, C. Carcy, Q. Bouton, R. Chang, G. Carleo, M. Mancini, D. Clément, Phys. Rev. A 97 061609(R) (2018)
[2] A. Tenart, C. Carcy, H. Cayla, T. Bourdel, M. Mancini, D. Clément, Phys. Rev. Research 2, 013017 (2020)
[3] C. Carcy, H. Cayla, A. Tenart, A. Aspect, M. Mancini, D. Clément, Phys. Rev. X 9, 041028 (2019)
[4] In preparation (2020).
Thursday, January 30
Title: Abelian axial anomaly in 3D semimetals
Luca LEPORI (IIT Genova, Italy)
After a general introduction on (multi-)Weyl and triple-point semimetals, I will derive and discuss the Abelian axial anomaly (non-conservation of chiral currents in the presence of an electromagnetic coupling) in these systems. I will then comment on the physical consequences of the anomaly.
L. Lepori, M. Burrello, and E. Guadagnini, "Axial anomaly in multi-Weyl and triple-point semimetals", J. High En. Phys. (2018).
Thursday, January 23
Title: Exact persistence exponent for the 2d-diffusion equation and related Kac polynomials
Gregory Schehr (LPTMS, CNRS, Université Paris-Sud)
After an introduction to persistence probabilities and related first-passage time in statistical physics, I will discuss a specific example:
the 2d diffusion equation with random initial conditions. The persistence probability in this problem turns out to be related to the probability
of no real root for Kac random polynomials. I will show that this probability can be computed by using yet another connection, namely to the truncated orthogonal ensemble of random matrices.
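The probability of no real root mentioned above can be estimated directly by Monte Carlo; the sketch below (names and parameters are mine, and the root-counting threshold is a numerical heuristic) draws Kac polynomials with iid standard Gaussian coefficients and counts how often none of the roots is real:

```python
import numpy as np

def frac_no_real_root(degree, trials, seed=1):
    """Estimate the probability that a Kac polynomial (iid standard Gaussian
    coefficients) of even degree has no real root."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(trials):
        coeffs = rng.standard_normal(degree + 1)
        roots = np.roots(coeffs)
        # treat a root as real if its imaginary part is numerically tiny
        if not np.any(np.abs(roots.imag) < 1e-8):
            count += 1
    return count / trials

frac = frac_no_real_root(degree=10, trials=2000)
print(frac)
```

The degree must be even (an odd-degree real polynomial always has a real root), and this probability decays as a power law in the degree, which defines the persistence exponent discussed in the talk.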
Thursday, December 19
Title: Chance-Constrained Alternating Current Optimal Power Flow with Sparse Polynomial Chaos Expansion
David Métivier (Los Alamos National Laboratory, CNLS & T-4, USA)
Anticipating the effects of random inputs on a complex system is a natural question arising in many engineering applications. In the context of electrical power systems, the growing uncertainty from renewable energy integration and distributed energy resources motivates the need for advanced tools to quantify the effect of uncertainty and assess the risks it poses.
I will introduce the motivations for this work, as well as the polynomial chaos expansion (PCE) method that has recently been proposed as a tool for uncertainty quantification in power systems. The method produces results that are highly accurate, but is computationally challenging to scale to large systems. We propose a modified algorithm based on PCE that exploits the system's sparsity, with significantly improved computational efficiency while retaining the desired high level of accuracy. As an example, we show how to solve the so-called chance-constrained power flow problem, i.e. we seek a solution such that the power transmitted through the lines is below some critical value 99 percent of the time.
Thursday, December 12
Title: Vacuum decay: from cosmology to cold atoms
Florent Michel (Durham, UK)
Vacuum decay is a prominent example of strongly nonlinear effects in quantum field theories, with potentially important implications for cosmology, relating to phase transitions in the early universe
or the supposed metastability of the current Higgs vacuum. Although a general theoretical description was laid out in the 70s by Sidney Coleman and his collaborators, fundamental questions pertaining
to the back-reaction of true vacuum bubbles on space-time curvature and their correlations remain so far unanswered, calling for different approaches to the problem. In this talk, after a brief
review of Coleman's theory, emphasizing its generality and limitations, I will present a recently proposed cold-atom model in which some of these ideas could be tested in laboratory experiments. I
will discuss the mathematical correspondence between the two problems and focus on how a localized defect changes the decay rate, taking the example of a vortex in a Bose-Einstein condensate and
comparing with the effect of a black hole in a relativistic theory.
Thursday, December 5
Title: New diagrammatic Monte Carlo approaches to the quantum many-body problem.
Riccardo ROSSI (Flatiron Institute, New York)
Finding a way to numerically simulate many interacting quantum particles would be of great fundamental and practical value. In this talk, I will discuss a broad class of approaches based on diagrammatic expansions that allow one to obtain numerically exact results in a time growing only polynomially with the inverse of the requested precision. Recent advances allow one to include
renormalization and non-perturbative information (e.g. Dynamical Mean Field Theory for lattice problems) in the expansion using a succinct, and, most importantly, very efficient, formalism. I will
present state-of-the-art numerically-exact results for the doped square-lattice Hubbard model, and the generic efficient code we have developed.
Thursday, November 28
Title: The role of discreteness in the black hole information loss puzzle
Lautaro Amadei (CPT, Université de Marseille)
In approaches to quantum gravity where smooth spacetime is an emergent approximation of a discrete Planckian fundamental structure, any standard effective field theoretical description will miss part
of the degrees of freedom and thus break unitarity. Here we show that these expectations can be made precise in loop quantum cosmology. Concretely, even when loop quantum cosmology is unitary at the
fundamental level, when microscopic degrees of freedom, irrelevant to low-energy cosmological observers, are suitably ignored, pure states in the effective description evolve into mixed states due to
decoherence with the Planckian microscopic structure. When extrapolated to black hole formation and evaporation, this concrete example provides a key physical insight for a natural resolution of
Hawking's information paradox.
Thursday, November 21
Title: Engineering Z_2 lattice gauge theories with a strongly interacting atomic mixture
Luca BARBIERO (Université Libre de Bruxelles, Belgique)
In this talk I will show how quantized dynamical gauge fields can be created in mixtures of strongly interacting ultracold atoms in optical lattices. Specifically, I will discuss a protocol by which atoms of one species carry a magnetic flux felt by another species, hence realizing an instance of flux attachment. This is obtained by combining coherent lattice modulation techniques with strong Hubbard interactions. I will show that this protocol has been experimentally implemented in a double-well potential, thus realizing a first building block of a true Z_2 lattice gauge theory. Moreover, I will discuss how this setting can be arranged so as to implement lattice models displaying a Z_2 gauge symmetry, both in one and two dimensions. Finally, I will also present a detailed analysis of a ladder toy model, which features a global Z_2 symmetry, revealing the phase transitions that occur both in the matter and gauge sectors. Mastering flux attachment in optical lattices opens a new route towards the realization of strongly-correlated systems with properties dictated by an interplay of dynamical matter and gauge fields.
Thursday, November 7
Title: Non-Abelian gauge theories invariant under diffeomorphisms
Olivera Miskovic (Pontificia Universidad Católica de Valparaíso, Chile)
Motivated by the fact that some interesting non-Abelian models invariant under general coordinate transformations do not have a suitable action description yet, we develop a canonical construction of
this type of actions in three-dimensional spacetime. As a result, we find a class of theories possessing a finite number of local degrees of freedom. We analyze in detail three particular cases.
Thursday, October 31
Title: Renormalized Volume in AdS Gravity
Rodrigo Olea (Universidad Andrés Bello, Chile)
We explore the connection between renormalized action for AdS gravity and the appearance of conformal structures in the bulk. The link to the formulas for renormalized volume by Anderson in 4D and
Chang-Qin-Yang in 6D is explicitly worked out. We emphasize the role of renormalized volume in defining a correct black hole thermodynamics in AdS gravity and in the renormalization of codimension-2 surfaces, which is relevant in holographic computations of entanglement entropy.
Thursday, October 24
Title: The Geometry of Relative Locality
Laurent FREIDEL (Perimeter Institute, Canada)
In this talk I review some of the general motivations behind relative locality, which is an extension of the relativity principle. I show how this leads at the classical level to a new concept of geometry: the Born geometry, which allows the differential structure itself to be dynamical. I also present how this leads at the quantum level to a new concept of space: modular space, and exemplify
how it affects the effective description of string theory. If time permits I'll present the relation of these ideas to a new action principle for gravity based on generalized geometry.
Thursday, October 17
Title: Integrability in and beyond AdS/CFT
Joao Caetano (Simons Center for Geometry and Physics, Stony Brook, United States)
In this talk, I am going to review some aspects of the current state of the art of Integrability in the AdS/CFT correspondence and beyond. We will first review a general nonperturbative approach to
compute multipoint correlation functions of local operators in the N=4 SYM theory which allows us to explore the theory even beyond the planar level. In the second part, I will describe my recent
work about exploring deformations of N=4 SYM by irrelevant operators, which revives an old attempt of generalizing the AdS/CFT correspondence. Here integrability seems to also play an important role
and opens the door for its application for non-conformal field theories.
Thursday, September 26
Title: Nonlinear Water Waves over variable bathymetry: Hamiltonian Coupled-Mode theory
Christos Papoutsellis (Aix-Marseille Université)
The accurate prediction of the complex dynamics of water waves is of fundamental importance for the better understanding of the marine environment. The co-existence of strongly nonlinear and dispersive interactions and bathymetric effects renders the accurate simulation of water waves a challenging issue. In this work, a modelling approach is presented that takes into account full nonlinearity, dispersion and bottom variability. The critical feature of this approach, called Hamiltonian Coupled-Mode Theory (HCMT), is the use of an enhanced vertical mode expansion that serves as an exact representation of the velocity potential in terms of horizontal amplitudes. Using this representation, the classical water wave problem is reformulated as a Hamiltonian system in terms of the free-surface elevation and free-surface potential. Most importantly, the computationally expensive Laplace problem for the velocity potential is replaced by a Coupled-Mode System (CMS) of horizontal differential equations for the modal amplitudes. For the numerical solution of the model equations, a fourth-order accurate finite-difference scheme is developed and applied to several demanding wave problems. It is shown that the present method accurately describes strongly nonlinear and dispersive propagation up to the breaking limit. In order to extend HCMT to the breaking case in shallow water, two strategies are developed and applied. Both methods introduce dissipative terms in the dynamic free-surface condition and are constructed by analogy with the hydraulic jump paradigm. Dissipation is activated and deactivated on the basis of an appropriate criterion. In the first method, a pressure-type absorption is introduced, while the second considers an eddy viscosity term. Comparisons with experimental measurements indicate that both methods provide a good description of the post-breaking evolution. Further, they can be applied to other wave models that are based on the Hamiltonian structure of free-surface potential flow.
[1] Ch. Papoutsellis, G. Athanassoulis, Exact semi-separation of variables in waveguides with nonplanar boundaries, Proc. R. Soc. A 473:20170017 (2017), doi.org/10.1098/rspa.2017.0017 (arxiv.org/abs/
[2] Ch. Papoutsellis, A. Charalampopoulos, G. Athanassoulis, Implementation of a fully nonlinear Hamiltonian Coupled-Mode Theory, and application to solitary wave problems over bathymetry, Eur. J. Mech. B, Fluids 72:199–224 (2018), doi.org/10.1016/j.euromechflu.2018.04.015 (arxiv.org/abs/1710.10847)
[3] Ch. Papoutsellis, G. Athanassoulis, A new efficient Hamiltonian approach to the nonlinear water-wave problem over arbitrary bathymetry, 2017 (arxiv.org/abs/1704.03276)
[4] Ch. Papoutsellis, M. Yates, B. Simon, M. Benoit, Modeling of depth-induced wave breaking in a fully nonlinear free-surface potential flow model, 2019, accepted in Coastal Engineering
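The fourth-order finite-difference discretisation mentioned above can be illustrated with a generic stencil. This is a minimal sketch of a standard five-point first-derivative formula with a convergence check, not the HCMT solver itself; the test function sin(x) is an arbitrary choice.

```python
import math

# Standard fourth-order central difference for u'(x) on a uniform grid
# (illustrative of a "fourth-order accurate finite-difference scheme";
# generic numerics, not the HCMT implementation).

def d1_fourth_order(u, h):
    """First derivative at interior points, five-point stencil:
    (-u[i+2] + 8 u[i+1] - 8 u[i-1] + u[i-2]) / (12 h)."""
    return [(-u[i + 2] + 8 * u[i + 1] - 8 * u[i - 1] + u[i - 2]) / (12 * h)
            for i in range(2, len(u) - 2)]

def max_error(n):
    """Max error of the stencil applied to sin(x) on [0, 2*pi] with n intervals."""
    h = 2 * math.pi / n
    x = [i * h for i in range(n + 1)]
    du = d1_fourth_order([math.sin(xi) for xi in x], h)
    return max(abs(d - math.cos(x[i + 2])) for i, d in enumerate(du))

# Halving the grid spacing should reduce the error by about 2**4 = 16,
# which is the practical signature of fourth-order accuracy.
ratio = max_error(64) / max_error(128)
```

Observing the error ratio close to 16 under grid refinement is the standard way to verify the formal order of such a scheme.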
Thursday, August 22
Title: The Kronig-Penney model with arbitrary scattering potentials
Thomas BUSCH (Okinawa Institute of Science and Technology, Okinawa, Japan)
Motivated by the recent realisation of a Kronig-Penney lattice for ultracold atoms, I will discuss exact solutions to such a system with arbitrary positions and strengths of scattering sites. This is an iconic model in solid-state physics, and the large number of degrees of freedom that comes from the possibility of arbitrarily choosing the properties of the scatterers allows one to explore a wide range of physics.
As an example, I will show that this one-dimensional model can possess topologically nontrivial properties. Using some of the free parameters of the system as extra dimensions allows one to observe topologically protected edge states as well as the emergence of a Hofstadter-butterfly-like quasimomentum spectrum, even for small numbers of scattering sites. To extend these results to strongly interacting many-particle systems, I will also briefly discuss the solutions in the Tonks-Girardeau limit.
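For orientation, the textbook Kronig-Penney dispersion relation for identical, equally spaced delta scatterers (the uniform special case of the arbitrary-scatterer model discussed in the talk) already exhibits the band/gap structure. The sketch below, with an illustrative dimensionless strength P, simply scans for allowed quasimomenta.

```python
import math

# Textbook delta-comb Kronig-Penney relation (lattice spacing a, strength P):
#     cos(k a) = cos(q a) + P * sin(q a) / (q a),   q = sqrt(2 m E) / hbar.
# An energy is in an allowed band iff the right-hand side lies in [-1, 1].
# This is the uniform special case only, not the arbitrary-position,
# arbitrary-strength solution presented in the talk.

P = 3.0  # illustrative scatterer strength (assumption, not from the talk)

def kp_rhs(qa):
    return math.cos(qa) + P * math.sin(qa) / qa

def allowed(qa):
    return abs(kp_rhs(qa)) <= 1.0

# Scan qa > 0: allowed bands alternate with forbidden gaps.
grid = [0.01 * i for i in range(1, 1201)]  # qa in (0, 12]
band_fraction = sum(allowed(qa) for qa in grid) / len(grid)
```

As qa -> 0 the right-hand side tends to 1 + P > 1, so for repulsive scatterers the spectrum starts with a gap, while qa = n*pi always lies at a band edge since the sine term vanishes there.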
Thursday, July 25
Title: Quantum effects in gravitational collapse and black hole evaporation
Sebastian Murk (Macquarie University, Sydney)
For more than forty years, quantum effects such as Hawking radiation have proven to be a source of inspiration and controversies in black hole physics. They are fundamental ingredients in black hole
thermodynamics and are thought to lead to the infamous information loss paradox [1]. In turn, they have motivated many developments of models of compact horizonless objects [2, 3, 4]. To separate
essential features from model-dependent properties, I will present some implications [5, 6] that follow from the minimal set of necessary assumptions. The assumptions are that astrophysical black
holes exist and their horizon regions are regular. We are working in the framework of semiclassical gravity.
According to a stationary observer at spacelike infinity, the finite-time formation of a trapped spacetime region with regular boundary requires violation of the null energy condition (NEC) [5,7].
Quantum energy inequalities bound the extent to which such violations are possible. Back-of-the-envelope calculations appear to contradict estimates on the size of negative energy density regions that
are obtained on the background of eternal black holes, indicating that the required amount of negative energy density may be incompatible with the standard analysis of black hole evaporation [5].
Contraction of a massive spherically symmetric thin dust shell that separates a flat interior region from a curved exterior is the simplest model of gravitational collapse. Nevertheless, different extensions of this model that include collapse-triggered radiation lead to contradictory predictions [8, 9]. Analysis of the boundary of a trapped space-time region identifies two possible families of metrics — ingoing and outgoing Vaidya — that may describe the geometry in its vicinity [5]. Description of the exterior geometry using the outgoing Vaidya metric is known to result in horizon avoidance and a timelike-to-null transition. We estimate the radial coordinate of this transition. Since violation of the NEC is the prerequisite for a finite-time formation of a trapped region according to a distant observer [5], only the outgoing Vaidya metric with decreasing mass is applicable in this case. Using this metric for the exterior geometry leads to a finite (proper or distant) time of horizon crossing. A macroscopic shell loses only a negligible amount of its rest mass in the process. However, this is incompatible with the NEC violation, thus rendering the horizon formation and its crossing by the shell impossible [6].
[1] R. B. Mann, Black Holes: Thermodynamics, Information, and Firewalls (Springer, New York, 2015).
[2] M. Visser, PoS BHs, GR and Strings 2008:001 (2008), arXiv:0901.4365v3; M. Visser, Phys. Rev. D 90, 127502 (2014).
[3] C. Barcelo, S. Liberati, S. Sonego, M. Visser, JHEP 02, 003 (2011).
[4] A. Paranjape, T. Padmanabhan, Phys. Rev. D 80, 044011 (2009).
[5] V. Baccetti, R. B. Mann, S. Murk, D. R. Terno, arXiv:1811.04495 (2018).
[6] V. Baccetti, S. Murk, D. R. Terno, arXiv:1812.07727 (2018).
[7] S. W. Hawking and G. F. R. Ellis, The Large Scale Structure of Space-Time (Cambridge University Press, 1973).
[8] R. Brout, S. Massar, R. Parentani, P. Spindel, Phys. Rep. 260, 329 (1995).
[9] A. Ashtekar, M. Bojowald, Class. Quant. Grav. 22, 3349 (2005).
Thursday, July 4
Title: Quantum complexity, irreversibility, learnability and fluctuation
Alioscia HAMMA (University of Massachusetts, Boston, United States)
Quantum complexity is a notion characterizing the universality of the entanglement arising from a quantum evolution. A universal evolution will result in a complex entanglement. At the same time,
this also corresponds to small fluctuations and to unlearnability from the point of view of machine learning. All these aspects are connected to the different features of k-designs, which are
under-samplings of the Hilbert space.
We study the transition in complexity due to the doping of a quantum circuit by universal gates and show that the transition to complex entanglement can be obtained by just a single gate. These
results are relevant for the notions of scrambling, quantum chaos, OTOCs and operator spreading. We conjecture that the transition to 4-design, W-D and unlearnability are one and the same.
Thursday, June 20
Title: Distribution matches in stochastic vertex models and Macdonald processes
Michael WHEELER (University of Melbourne, Australia)
One of the classic quantities in the six-vertex model is the domain wall partition function, which was computed as a determinant by Izergin. Most proofs of Izergin's formula are based on solving
recursion relations and, as a consequence, a priori knowledge of the answer.
I will sketch a direct method for deriving Izergin's formula, based on Macdonald polynomials and their difference-operator eigenrelations (following ideas of Lascoux and Warnaar). The connection
between the six-vertex model and Macdonald polynomials runs deeper still; I will discuss some intriguing distribution matches first observed by Borodin.
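At the combinatorial level the domain wall partition function is easy to probe directly: six-vertex configurations with domain-wall boundary conditions on an N x N lattice are in bijection with N x N alternating sign matrices (ASMs), so at unit vertex weights the partition function just counts ASMs. A brute-force check of the small counts (illustrative only, and obviously not Izergin's determinant evaluation):

```python
from itertools import product

# An alternating sign matrix: entries in {-1, 0, 1}, every row and column
# sums to 1, and all partial sums along rows and columns stay in {0, 1}.
# At a = b = c = 1 the DWBC six-vertex partition function equals the ASM count.

def is_asm(rows, n):
    for line in rows:  # row conditions
        s = 0
        for x in line:
            s += x
            if s not in (0, 1):
                return False
        if s != 1:
            return False
    for j in range(n):  # column conditions
        s = 0
        for i in range(n):
            s += rows[i][j]
            if s not in (0, 1):
                return False
        if s != 1:
            return False
    return True

def count_asm(n):
    count = 0
    for flat in product((-1, 0, 1), repeat=n * n):
        rows = [flat[i * n:(i + 1) * n] for i in range(n)]
        if is_asm(rows, n):
            count += 1
    return count

counts = [count_asm(n) for n in (1, 2, 3)]  # known sequence: 1, 2, 7, 42, ...
```

The brute force is exponential and only sensible for tiny N; the whole point of the determinant formula is that it replaces this enumeration by an N x N determinant.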
Thursday, June 13
Title: Classical-Quantum correspondence and backreaction
George Zahariade (Center for Fundamental Concepts in Science, Arizona State University)
We map the quantum problem of a free bosonic field in a space-time dependent background into a classical problem. N degrees of freedom of a real field in the quantum theory are mapped into 2*N^2
classical simple harmonic oscillators with specific initial conditions. We discuss how this classical-quantum correspondence (CQC) may be used to evaluate quantum radiation and also to analyze the
backreaction of quantum fields on classical backgrounds. We also analyze the agreement between results obtained with the CQC and with a full quantum analysis.
Thursday, May 23
Title: Optical responses in chiral topological metals
Adolfo GRUSHIN (Institut Néel, Grenoble)
Abstract: In this talk I will discuss our recent results concerning nonlinear and linear optical responses of topological chiral metals. We have predicted a quantized circular photogalvanic effect,
the part of the non-linear photocurrent which changes sign when the light's polarization flips. We find it is quantized in units of a large universal constant e^3/h^2 times the Weyl monopole charge
in all mirror-free topological semimetals. We provide specific predictions for RhSi, for which we also calculate the linear optical conductivity, necessary to pin down quantization and relevant for
recent experiments. Finally, if time permits, I will also discuss the optical activity, the rotation of the plane of polarisation of light, for all chiral multifold fermions, which we find also to be
enhanced compared to the Weyl semimetal case.
Friday, April 5
Title: Derivative expansion for the non-perturbative renormalization group: convergence and influence of the regulating function
Ivan Balog (Institut de Physique, Zagreb, Croatie)
We examine the "effective average action approach", a nonperturbative RG (NPRG)
implementation of the exact RG performed on the effective action. Until the mid-nineties, it was
believed to be on one hand a very appealing method from a heuristic point of view but on the other
hand a method plagued with prohibitive technical difficulties. Two severe pitfalls were indeed
pointed out in the seventies: It seemed impossible to reproduce the two-loop results with the usual
momentum shell approach, and all results intended to be nonperturbative seemed to show a huge
dependence on the method used to separate the fast and slow degrees of freedom, i.e. the choice of
the regulator function.
In this work we dispel some of the criticisms often levelled at this method. We examine the derivative expansion up to order 6 and find that all families of reasonable regulating functions (parametrized, e.g., by a number) yield an optimal value of that parameter, at which some critical exponent is at an extremum, and those optimal values are typically very close for different exponents. Furthermore, these values converge to a single ``true value" of the parameter as the order of the derivative expansion is increased. The values of the critical exponents at the optimal values of the parameter converge as well with the order of the derivative expansion; in this sense we give evidence of the convergence of the derivative expansion. For the example of the 3d Ising model at the 6th order of the derivative expansion, we obtain critical exponents comparable to the best available Monte Carlo simulations.
Thursday, April 4
Title: Higher Algebras in Field Theories
Olaf HOHM (Humboldt University, Berlin, Germany)
In this talk I will aim to give a pedagogical introduction to
"higher" algebraic structures in physics, notably in (classical and quantum) field theories.
Examples include L-infinity algebras, which generalize the notion of Lie algebras to
structures in which the Jacobi identity can be violated. This violation is then controlled by
"higher brackets". Such structures first emerged in string field theory, but they have
subsequently been shown to be of much wider relevance for general field theories.
Thursday, March 28
Title: Random walks with memory: anomalous diffusion and localization
Denis Boyer (UNAM, DF, Mexico)
We study several lattice random walk models with stochastic relocations to sites visited in the past which exhibit a phase transition between an anomalous diffusive regime and a localization regime
where diffusion is suppressed. The localized phase settles above a critical relocation rate, or rate of memory use, and the probability density asymptotically adopts in this regime a non-equilibrium
steady state similar to that of the better known problem of diffusion with resetting to the origin. The transition occurs because of the presence of a single impurity site where the resetting rate is
lower than on other sites, and around which the walker spontaneously localizes. Near criticality, the localization length diverges with a critical exponent that falls in the same class as the
self-consistent theory of Anderson localization of waves in random media. The critical dimensions are also the same in both problems. Our study provides analytically tractable examples of
localization transitions in path-dependent, reinforced stochastic processes, which can also be useful for understanding spatial learning by living organisms.
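The localized non-equilibrium steady state mentioned above is already visible in the simpler cousin of the memory model: a lattice walk with stochastic resetting. The sketch below resets to the origin with probability r per step (an assumption for simplicity; the memory model in the talk relocates to previously visited sites instead) and compares the spread with and without resetting.

```python
import random

# Minimal 1d lattice walk with stochastic relocation ("resetting") to the
# origin. Without resetting the variance of the position grows diffusively
# (~ number of steps); with resetting it saturates at roughly 1/r, the mean
# time since the last relocation -- the hallmark of the localized NESS.

def final_positions(n_walkers, n_steps, r, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            if r > 0 and rng.random() < r:
                x = 0  # relocation event
            else:
                x += rng.choice((-1, 1))
        out.append(x)
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var_free = variance(final_positions(2000, 400, r=0.0))   # grows like n_steps
var_reset = variance(final_positions(2000, 400, r=0.1))  # saturates near 1/r
```

In the memory models of the talk the relocation target is itself history-dependent, which is what makes the transition and its critical behaviour nontrivial; this sketch only illustrates the suppression of diffusion by relocation.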
Thursday, March 21
Title: Low energy effective actions, consistent truncations and generalised geometry
Michela Petrini (LPTHE, Université Pierre et Marie Curie, Paris, France)
An important problem in string theory is the derivation of sensible
low energy effective actions. Consistent truncations provide an answer to
this question. I will introduce consistent truncations and discuss how
generalised geometry allows for interesting progress in the derivation
of such constructions.
Thursday, March 14
Title: Hydrodynamics of integrable systems, and application to non-equilibrium transport.
Benjamin Doyon (Department of Mathematics, King's College London)
Hydrodynamics is a powerful framework for describing the large-scale behaviours of many-body systems in inhomogeneous, non-stationary states. Until recently, however, it was restricted to
non-integrable models, as the assumption of local thermodynamic equilibrium is broken by the large number of conserved charges afforded by integrability. I will describe how to generalise
hydrodynamics to integrable systems. The resulting theory has a rich structure, and applies to large families of quantum and classical field theories, chains and gases. It allows us to solve
experimentally relevant setups such as the famous ``quantum Newton's cradle" in cold atomic gases, and to evaluate exact non-equilibrium currents, correlations, Drude weights and full counting
statistics of fluctuations in non-equilibrium transport. After explaining the principles and main equations of ``generalised hydrodynamics", I will derive the solutions to non-equilibrium transport
problems, and discuss the exact calculations of various quantities such as Drude weights and diffusion coefficients.
Thursday, March 7
Title: q-oscillators and highest $\ell$-weight representations of quantum loop algebras
Frank Göhmann (Fakultät für Mathematik und Naturwissenschaften, Bergische Universität Wuppertal, Germany)
q-oscillator representations of the Borel subalgebra of the rank-$n$ quantum loop algebras $U_q(\mathcal{L}(\mathfrak{sl}_{n+1}))$ have played an important role in the construction of Q-operators and their functional equations. They also appeared in the construction of the so-called Fermionic basis on the space of local operators of the basic rank-one integrable lattice model, the XXZ spin-1/2 chain. We have expressed the generators in Drinfeld's second realization of the rank-$n$ quantum loop algebras in terms of q-oscillators. This made it possible to identify the q-oscillator representations as highest $\ell$-weight representations, to calculate their $\ell$-weights and to relate them with another type of highest $\ell$-weight representations introduced by Hernandez and Jimbo, the so-called prefundamental representations.
Thursday, February 21
Title: Recent developments in calculating five-point scattering amplitudes
Dimitri Chicherin (University of Munich, Physics Department)
Abstract: Multi-loop scattering amplitudes for multi-particle processes are starting to play an increasingly important role in collider physics analyses. We review recent progress in calculating the virtual two-loop corrections for five-particle processes. We concentrate on the analytic calculation of the relevant master integrals representing nonplanar corrections in any 4D gauge theory, including massless QCD, Yang-Mills theory, and N=4 super-Yang-Mills theory. We apply modern mathematical techniques for evaluating multi-loop Feynman integrals, which include iterated integrals, symbol alphabets, analysis of the leading singularities, and the canonical form of the differential equation. We identify the space of pentagon functions representing the five-point
Feynman integrals with on-shell legs. Using this knowledge, we demonstrate how two-loop nonplanar Feynman integrals can be found in the bootstrap approach relying on the Mellin-Barnes representation.
Then we systematically evaluate all two-loop master integrals of the nonplanar topologies using the method of differential equations. Finally, we discuss the application of these results to the
five-point two-loop nonplanar amplitude in N=4 super-Yang-Mills theory and N=8 super-gravity.
Thursday, January 31
Title: Dynamics vs Thermodynamics of black holes
Marcela Cárdenas (Université Paris Diderot, Laboratoire APC)
In this talk, we will address various aspects of hairy black holes that are solutions of four-dimensional gravity in the presence of a dilatonic scalar field and an Abelian gauge field. In
particular, we will study their thermodynamics as a consequence of a well-posed variational principle. We find that for a slow fall-off of the scalar fields, they introduce a non-integrable term in the variation of the mass that must be dealt with in order for the first law of black hole thermodynamics to be satisfied. The appearance of a non-integrable term is resolved by proposing boundary conditions that arbitrarily relate the leading and subleading terms of the scalar field fall-off.
In a second part of the talk, we give a first attempt to connect thermodynamic black holes with astrophysical ones, where the presence of a non-integrable term will be crucial. We propose a way to
connect two a priori distinct aspects of black hole physics: their thermodynamics, and their description as point particles, which is an essential starting point in the post-Newtonian approach to
their dynamics. We will find that, when reducing a black hole to a point particle endowed with its specific effective mass, one in fact describes a black hole satisfying the first law of
thermodynamics by virtue of their global charges and entropy remaining constant.
Thursday, January 24
Title: Dynamics of correlations and thermalization in long-range quantum spin models: A semi-classical perspective
Johannes SCHACHENMAYER (IPCMS, Université de Strasbourg)
Experimental setups with ultracold atoms, molecules or ions offer platforms for studying coherent non-equilibrium dynamics of long-range interacting quantum many-body spin-models in controlled
environments. We developed a semi-classical technique for studying time-evolution in these models numerically. Here we show how many aspects of such dynamics, such as correlation spreading, can be
remarkably well captured with our semi-classical approach. We show how recently observed dynamics (and in particular thermalization behavior) in an experimental setup with Chromium atoms trapped in
an optical lattice can be described fully by the semi-classical approach.
Wednesday, December 19
Title: Analytical Large Deviations and Thermodynamic Uncertainty Relations
Raphael Chetrite (Laboratoire J.A. Dieudonné, Université de Nice)
In this talk, I will discuss the theory of large deviations. After a general introduction, I will present some recent developments on the large deviations associated with a Markov process and on applications to thermodynamic uncertainty relations.
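The standard large-deviation machinery for an additive observable of a Markov chain can be sketched in a few lines: tilt the transition matrix, take the log of its leading eigenvalue to obtain the scaled cumulant generating function (SCGF), and Legendre-transform to get the rate function. The two-state chain and observable below are illustrative choices, not taken from the talk.

```python
import math

# Level-1 large deviations for the fraction of time a two-state Markov
# chain spends in state 1. SCGF(s) = ln of the leading eigenvalue of the
# tilted matrix P_ij * exp(s * f[j]); the rate function is its Legendre
# transform, evaluated here on a grid.

P = [[0.9, 0.1],
     [0.1, 0.9]]      # symmetric two-state transition matrix (illustrative)
f = [0.0, 1.0]        # observable: indicator of state 1

def scgf(s):
    """ln of the largest eigenvalue of the 2x2 tilted matrix (closed form)."""
    t = [[P[i][j] * math.exp(s * f[j]) for j in range(2)] for i in range(2)]
    tr = t[0][0] + t[1][1]
    det = t[0][0] * t[1][1] - t[0][1] * t[1][0]
    return math.log((tr + math.sqrt(tr * tr - 4 * det)) / 2)

def rate(a):
    """I(a) = sup_s [ s*a - scgf(s) ] over a grid of tilting parameters."""
    s_grid = [-10 + 0.01 * i for i in range(2001)]
    return max(s * a - scgf(s) for s in s_grid)
```

By symmetry the stationary mean of f is 1/2, so the rate function vanishes (up to grid resolution) at a = 1/2 and is strictly positive away from it, quantifying how unlikely atypical occupation fractions are.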
Monday, December 3
Title: 2-species TASEP with open boundaries: Baxterisation, integrability and Matrix ansatz.
Matthieu Vanicat (Fakulteta za matematiko in fiziko, University of Ljubljana, Slovenia)
Abstract: We present a two-species totally asymmetric exclusion process (TASEP) in which the hopping rates depend on the species of the particle. The lattice is connected at its extremities to particle reservoirs: particles are injected and extracted with given probability rates. The system is driven out of equilibrium: there is a non-vanishing particle current in the stationary state.
We show that the model is Yang-Baxter integrable: the Markov matrix, encoding the stochastic dynamics, is constructed from the Sklyanin transfer matrix. The integrable structure nevertheless has some unusual features:
- The associated R-matrix depends separately on two spectral parameters and does not seem to be related to any known quantum group. We show that it can be constructed from a very simple braid-like algebra through a Baxterisation procedure.
- The R-matrix has a singularity which prevents us from proving the commutation relation of the Sklyanin transfer matrix in the usual way. We present an alternative proof.
Finally, we provide an exact construction of the stationary state of the model using a Matrix ansatz.
Thursday, November 8
Title: Quantum many-body physics with nonlinear propagating light
Pierre-Élie Larré (Université de Cergy)
Abstract: The propagation of a paraxial and quasimonochromatic quantum light field in a dispersive and nonlinear dielectric medium is considered. In this all-optical platform, the space propagation
of the field's envelope may be rigorously mapped onto the time evolution of a quantum fluid of interacting photons. The resulting quantum many-body system constitutes a particular class of quantum
fluids of light and presently attracts growing interest as a powerful tool for quantum simulation. I will review recent theoretical and experimental progress in this rapidly emerging research
field, including investigations on Bose-Einstein condensation, superfluidity, collective excitations, disorder, quantum quenches, prethermalization, and thermalization.
Thursday, October 25
Title: Random Strebel graphs, Random Delaunay triangulations, and their relation with two-dimensional gravity
François David (IPhT, CEA-Saclay)
Abstract: The relationship between random planar geometries, two-dimensional quantum gravity and string theories has been studied by theoretical physicists and mathematicians for 35 years. I shall
present recent works on random Strebel graphs and random Delaunay triangulations, and discuss their relations with the geometry of moduli spaces of surfaces, topological gravity, conformal point
processes, and possible discretisations of conformal theories. Based on joint works with B. Eynard, S. Charbonnier and J. Scott.
Wednesday, October 24
Title: Dynamics of quasiparticle excitations in spin ice materials
Claudio Castelnovo (University of Cambridge)
Some of the most exciting discoveries in strongly correlated
systems in recent years are related to phases of matter that have a
topological nature, often conveniently described as novel types of vacua
that host emergent quasiparticle excitations. The quasiparticles and
their underlying vacuum are heavily intertwined: the local correlations
in the vacuum have an impact on the properties of the quasiparticles
and, vice versa, the motion of the quasiparticles can change the nature
of the underlying vacuum. Developing a theory based on this idea is
generally a tall order, and the effects of such feedback mechanisms
remain largely unexplored. In this talk we investigate this feedback
mechanism in the context of spin ice materials. At the microscopic
level, we argue that the spin dynamics originates from transverse
components of the internal exchange and dipolar fields, and is
characterised by two distinct spin flip rates determined by the
surrounding spin configuration. This points to an entirely novel type of
annealed dynamics in spin ice systems. The separation in rates can be
remarkably large in quantum spin ice compounds. By studying the
resulting spectral properties of the quasiparticle excitations we are
able to compute their contribution to the magnetic conductivity, a
quantity that can be directly related to existing experimental results.
Thursday, October 11
Title: Interacting particle systems and Pfaffian point processes
Oleg Zaboronski (Department of Mathematics, University of Warwick, UK)
A large class of 1d interacting particle systems, including coalescing and annihilating random walks as well as branching-coalescing random walks, is shown to be exactly solvable in terms of explicit Pfaffian point processes. We will explain how these results arise using the notion of Markov duality and exploit them in order to compute various statistics for these systems, such as gap probabilities. We will also explain the emergence of dualities as a consequence of some hidden symmetries of the models.
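For a Pfaffian point process, statistics such as gap probabilities reduce to Pfaffians of antisymmetric kernel matrices. A minimal recursive Pfaffian evaluator (expansion along the first row), checked against the identity Pf(A)^2 = det(A); this is generic linear algebra, illustrative only and not tied to the specific kernels of the talk.

```python
# Pfaffian of an even-dimensional antisymmetric matrix via the standard
# expansion along the first row; exponential cost, fine for small matrices.

def pfaffian(a):
    n = len(a)
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        # minor with rows and columns 0 and j removed (still antisymmetric)
        keep = [k for k in range(n) if k not in (0, j)]
        minor = [[a[r][c] for c in keep] for r in keep]
        total += (-1) ** (j - 1) * a[0][j] * pfaffian(minor)
    return total

def det(a):
    """Cofactor-expansion determinant (fine for the small sizes used here)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] *
               det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(n))

A = [[0, 1, 2, 3],
     [-1, 0, 4, 5],
     [-2, -4, 0, 6],
     [-3, -5, -6, 0]]
pf = pfaffian(A)  # for a 4x4: a01*a23 - a02*a13 + a03*a12 = 1*6 - 2*5 + 3*4 = 8
```

The square of the Pfaffian reproduces the determinant, which is the usual sanity check for an implementation like this.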
Thursday, October 4
Title: On the origin of certain dualities in two-dimensional quantum field theories.
Joerg Teschner (Department of Mathematics, Hamburg University and DESY, Germany)
Abstract: The term duality refers in the context of quantum field theory to the existence of multiple Lagrangian or Hamiltonian representations for one and the same abstract quantum field theory,
defining perturbative expansions in different regimes of the parameter space. As duality usually is a non-perturbative phenomenon, it is typically hard to demonstrate that it is realised in a given
quantum field theory, and to understand why this is the case. Motivated by this, we revisit the issue of the self-duality of the Liouville quantum field theory in the light of the proof of the
formula for the three-point function of Liouville theory recently given by Kupiainen, Rhodes and Vargas. The goals of my talk will be (i) to draw a coherent picture of the self-duality of Liouville theory taking into account the results of Kupiainen, Rhodes and Vargas, (ii) to offer a fairly simple explanation for the origin of this self-duality, and (iii) to explain why similar phenomena should be expected to occur in much wider classes of two-dimensional quantum field theories, including the sigma models relevant for the AdS-CFT correspondence.
Thursday, September 27
Title: Chern-Weil theorem and boundary terms in gravity actions
Nelson Merino (Laboratoire de Physique, ENS de Lyon)
Abstract: Two mathematical approaches are commonly used in the construction of gravity theories: tensorial and Cartan language. It is usually said that they are completely equivalent and that the translation between them should be evident. However, as we show in this work, there are cases where a result on one side is not clearly understood on the other, because the translation is not obvious. This is the case for the Katz, Bicak and Lynden-Bell (KBL) procedure, which is constructed in the tensorial language and allows one to have a well-defined variational principle as well as finite conserved charges in general relativity. Up to now, it was not known how this method reads in Cartan language, nor how it could be generalized to more general theories (e.g., Einstein-Gauss-Bonnet and Lovelock gravity). In this work we use the Chern-Weil theorem and an auxiliary "hybrid" manifold to provide the translation of the Katz boundary term into the Cartan language. As a consequence, this gives us a guideline for generalizing the KBL procedure to a generic Lovelock gravity. Possible extensions and further applications are also discussed.
Based on a collaboration with Nathalie Deruelle (APC Laboratoire, Paris 7) and Rodrigo Olea (UNAB, Chile):
arXiv:1709.06478 [gr-qc], arXiv:1803.04741 [gr-qc]
Thursday, September 20
Title: T Tbar-deformed classical and quantum field theories in two dimensions
Roberto Tateo (Dipartimento di Fisica, Università di Torino, Italy)
Abstract: Surprising links between the deformation of 2D quantum field theories induced by the composite $T \bar{T}$ operator, effective string models, and the AdS/CFT correspondence have recently emerged.
I will discuss various classical and quantum aspects of this special irrelevant perturbation, including its geometrical interpretation at the classical level. The deformed sine-Gordon model is used as an explanatory example.
Thursday, September 6
Title: L-infinity algebras and their applications in field theory
Vladislav G. Kupriyanov (Max-Planck-Institut für Physik, Munich, Germany; CMCC-Universidade Federal do ABC, Santo André, Brazil; and Tomsk State University, Tomsk, Russia)
Non-commutative gauge theories with a non-constant NC-parameter are investigated. As a novel approach, we propose that such theories should admit an underlying L-infinity algebra, that governs not
only the action of the symmetries but also the dynamics of the theory. Our approach is well motivated by string theory. In this talk I will discuss the L-infinity bootstrap program: the basic
ideas, construction, including the recurrence relations for L-infinity gauge algebra, and uniqueness. As particular examples we construct the explicit expressions for the non-commutative su(2)-like
and non-associative octonionic-like deformations of the abelian gauge transformation in slowly varying field approximation. The latter is related to non-geometric backgrounds in string and M-theory.
Wednesday, July 11
Title: Entanglement entropy by means of the flow equation holography method
Lorenzo CEVOLANI (University of Göttingen)
Entanglement is one of the most exciting features of quantum mechanics, which is connected to many different fields, ranging from quantum information to quantum phase transitions.
In this talk I will present a method based on perturbation theory to compute this quantity for arbitrary bi-partitions in situations where the two subsystems are weakly coupled to one another. This
method is a priori not constrained by the dimensionality of the system or by the form of its interactions. In the case of exactly solvable models, where the entanglement entropy can be computed by other means, the flow equation approach has proven to be extremely accurate. I will present extensions to interacting systems, which are more challenging and not accessible by other theoretical methods beyond numerics.
Our approach allows us to quantify the entanglement and to interpret the results via the structures of the interactions of a general many-body system.
Thursday 5 July
Title: Two-dimensional fermionic mixtures with dipolar interactions: A quantum Monte Carlo study
Tommaso COMPARIN (BEC center, Università di Trento, Italy)
Abstract: One of the interesting features of ultracold atomic gases is the possibility of exploring systems with different interatomic interactions. On top of the common short-ranged potentials,
current experiments are often performed with atoms or molecules having a strong dipolar moment, which adds a longer-ranged part to the interactions.
We consider a system of fermionic dipoles confined in two dimensions and aligned in the transverse direction, such that their interaction is a repulsive power-law potential (1/r^3, as a function of
the interparticle distance r). The ground-state properties of a uniform system are accessed through the diffusion quantum Monte Carlo technique.
In the low-density regime (the closest to current experiments with gases of erbium or dysprosium) we compute the equation of state of a two-species mixture, and study the properties of the extremely
unbalanced case of a single impurity in a bath of the other species.
At large density, we address the issue of itinerant ferromagnetism, namely the possibility for the ground state to have a non-zero polarization. This is a subtle many-body problem which was studied
for several other systems (electrons, helium, short-ranged ultracold gases) and we show that a high-accuracy version of the quantum Monte Carlo technique is required to reach the correct conclusion.
Thursday 21 June
Title: Antiferromagnetic resonance and terahertz continuum in the Kitaev magnet α-RuCl3
Liang Wu (Department of Physics, Berkeley)
Spin-1/2 moments in the antiferromagnetic Mott insulator α-RuCl3 are coupled by strongly anisotropic bond-dependent exchange interactions on a honeycomb lattice. Intense study of α-RuCl3 has been driven by the proposal that its low-energy excitations may be adiabatically connected to the Majorana quasiparticles that emerge in the exact solution of the Kitaev spin liquid model. In my talk, I will present optical absorption measurements using time-domain terahertz spectroscopy in the range 0.3 to 10 meV and reveal several new features of the low-energy excitations and a continuum. I will discuss the origins of these features and the dramatic spectral weight shift between them in various geometries. Combined with linear spin-wave theory, our measurements refine the parameters of the spin Hamiltonian.
Thursday 14 June
Title: Onset of correlations in synthetic quantum Ising systems
Louis-Paul HENRY (Institut für Laserphysik, Universität Hamburg, Germany)
Large arrays of Rydberg atoms are one of the very promising platforms for quantum engineering applications.
I will present here recent results obtained with a Rydberg quantum simulator of up to 36 spins in which the
parameters of the Hamiltonian (representing a quantum Ising model) can be dynamically tuned.
After a short introduction on Rydberg atoms and the experimental setup, I will describe the dynamics of the onset of correlations in the system, in particular how fast they can spread and how single-atom dephasing seems to be the major effect limiting them.
I will then show how the spatial features observed can be explained analytically, based on a
short-time expansion of the evolution operator, for both the square and the triangular lattice cases, which highlights the frustrated nature of the latter.
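For context, the quantum Ising model realized in such Rydberg arrays is usually written (a standard form, not quoted from the talk) as

```latex
H = \frac{\hbar\Omega}{2}\sum_i \sigma^x_i
  \;-\; \hbar\delta \sum_i n_i
  \;+\; \sum_{i<j} \frac{C_6}{r_{ij}^6}\, n_i n_j\,,
```

where $\Omega$ is the Rabi frequency coupling ground and Rydberg states, $\delta$ the laser detuning, $n_i$ the Rydberg-state projector on atom $i$, and the van der Waals tail $C_6/r^6$ supplies the Ising-type interactions.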
Thursday 7 June
Title: Regge poles in black-hole physics
Bernard Raffaelli (ESME Sudria, Lyon)
Since the 1960s, driven mainly by the impulse of Nussenzveig in electromagnetism and of Regge in quantum physics, semiclassical methods using an analytic continuation of partial-wave expansions have been developed within scattering theory. Among these techniques, the theory of complex angular momentum appears particularly well suited to the study of resonances. In this seminar I will present the application of this technique to the scattering of a scalar field, massive or massless, by a certain class of black holes, building on the simpler example of the Schwarzschild black hole.
The originality of this approach is not only to bring the concepts of the S-matrix, Regge poles, and the associated techniques to the heart of black-hole physics, but also to shed new light on the interpretation of resonance and absorption phenomena for such geometries, such as the weakly damped quasi-normal modes, the structure of the absorption cross-section, and the link with strong gravitational lensing.
Thursday 24 May
Title: Team 4 day
Thursday 17 May
Title: Building strongly interacting many-body quantum systems with individual atoms
Sylvain SCHWARTZ (Laboratoire Kastler Brossel, ENS Paris)
Controlling entanglement in large quantum systems is a very exciting challenge of modern physics, and a necessary milestone to fulfill the promises of the second quantum revolution. Arrays of neutral
atoms have recently emerged as a versatile tool in this context, building on the high coherence properties of atomic systems and on the mature experimental toolbox of cold-atom physics. In this talk,
I will describe two experimental approaches that I have been involved in towards many-body quantum systems based on neutral atoms with single-particle control and long-range interactions.
The first approach, pursued at Harvard University in the group of Mikhail Lukin, is based on an array of 100 tightly focused optical tweezers which are generated and controlled by an acousto-optic
deflector, stochastically loaded from an optical molasses with at most one atom per trap, and deterministically arranged to create the desired atomic configuration using site-resolved imaging. By
coupling the ground state of the atoms to a Rydberg state, we create strong tunable Ising-type interactions between them, resulting in entanglement and non-trivial spatial correlations across the
array. With this platform, we were able to perform quantum simulations of an Ising Hamiltonian and to demonstrate high fidelity preparation of the many-body ground state of a Rydberg crystal phase
for up to 51 atoms (where classical simulations are no longer tractable). We have also explored some intriguing quantum many-body dynamics in the form of robust oscillations between complementary
crystal states after a quantum quench [1]. Future directions include investigating the Kibble-Zurek mechanism in a quantum phase transition, creating highly entangled states, and studying many-body
dynamics in disordered potentials.
In a second approach, pursued at Laboratoire Kastler Brossel in the group of Jakob Reichel, single atoms will be loaded in an optical lattice trap sustained by a fiber-based Fabry-Perot cavity placed under a quantum gas microscope. Here, infinite-range interactions will be created by coherent photon exchange enhanced by the cavity in the strong coupling regime. Importantly, there is exactly a factor of two between the wavelength of the trapping mode and the wavelength of the coupling mode, making the photon-mediated interactions maximal and equal for all atoms. This new
platform will provide an ideal test bed to implement various protocols for the creation and characterization of many-body entanglement, such as the Dicke model where highly entangled states are
expected to occur in the vicinity of the quantum phase transition.
[1] Bernien, H., Schwartz, S., Keesling, A., Levine, H., Omran, A., Pichler, H., Choi, S., Zibrov, A. S., Endres, M., Greiner, M., Vuletic, V., and Lukin, M. (2017). Probing many-body dynamics on a 51-atom quantum simulator. Nature 551, 579.
Thursday 3 May
Title: Slow convergence due to long range temporal correlations:
models, phenomena, data analysis challenges
Holger Kantz (Max-Planck Institut für Physik komplexer Systeme, Dresden)
Long-range temporal correlations (LRC), i.e., an infinite correlation time, seem to be abundant in natural signals, such as climatological and physiological time series.
We discuss how to verify and quantify the presence
of such correlations and show the limitations of the methods.
We highlight some statistical problems as a consequence
of the presence of LRC such as bad convergence properties
of time averages. Lastly, we speculate about potential sources
of LRC.
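As a concrete illustration of the slow convergence of time averages (a textbook estimate, not taken from the talk): if the autocorrelation function of a signal decays as $C(t)\sim t^{-\gamma}$ with $0<\gamma<1$, the variance of its time average over a window of length $T$ decays anomalously slowly,

```latex
\operatorname{Var}\!\left[\frac{1}{T}\int_0^T x(t)\,dt\right]
  \;\sim\; T^{-\gamma}\,, \qquad 0<\gamma<1\,,
```

instead of the usual $T^{-1}$ obtained for short-range correlations.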
Thursday 26 April
Title: Self-assembled topological materials: Weyl points for light and sound
Michel Fruchart (Leiden University, Leiden, Netherlands)
Abstract: Soft materials such as liquid crystals, block copolymers, or colloidal particles can self-assemble into highly structured phases which replicate at the mesoscopic scale the symmetry of
atomic crystals. As such, they offer an unparalleled platform to design mesostructured materials for light and sound. Here, we present a bottom-up approach based on self-assembly to engineer
three-dimensional photonic and phononic crystals with topologically protected Weyl points. In addition to angular and frequency selectivity of their bulk optical response, Weyl materials are endowed
with topological surface states, which allow for the existence of one-way channels even in the presence of time-reversal invariance. Using a combination of group-theoretical methods and numerical
simulations, we identify the general symmetry constraints that a self-assembled structure has to satisfy in order to host Weyl points, and describe how to achieve such constraints using a
symmetry-driven pipeline for self-assembled material design and discovery.
Friday 13 April
Title: Semimetals Unlimited: Unbounded electrical and thermal transport properties in nodal semimetals
Brian Skinner (MIT, Boston, USA)
Abstract: Modern electronics is built on semiconductors, whose utility comes from their ability to operate on either side of the conductor-insulator dichotomy. For practical applications, however,
semiconductors face certain unavoidable limitations imposed by the physics of Anderson localization and by the disorder introduced through doping. In this talk I discuss whether these same
limitations apply to nodal semimetals, which are a novel class of three-dimensional materials that have a vanishing density of states (like insulators) but no gap to electron-hole excitations (like
conductors). I show that, surprisingly, in a certain class of nodal semimetals the electronic mobility can far exceed the bounds that constrain doped semiconductors, becoming divergently large even
with a finite concentration of charged impurities. I then discuss the thermoelectric effect in semimetals, and show that their electron-hole symmetry allows for a thermopower that grows without bound
under the application of a strong magnetic field. This large thermopower apparently enables the development of devices with record-large thermoelectric figure of merit.
Thursday 29 March
Title: Emergence of hydrodynamics in integrable systems out of equilibrium
Benjamin Doyon (King's College, London, England)
Abstract: I will introduce the recently developed theory of
"generalized hydrodynamics", which describes large-scale
behaviours in many-body quantum and classical integrable
systems out of equilibrium.
Thursday 1 March
Title: Baxter Q-operators for rational spin chains
Rouven Frassek (IHES, Paris)
After giving a short review of the quantum inverse scattering method, I will discuss how Q-operators can be constructed in this framework. The approach employs an infinite-dimensional auxiliary space and follows the ideas of Bazhanov, Lukyanov and Zamolodchikov. The relevant R-matrices belong to a set of degenerate solutions of the Yang-Baxter equation. The construction is exemplified for the
closed and open Heisenberg chain but also for non-compact spin chains, which are relevant for high energy QCD and N=4 super Yang-Mills theory. Finally, I discuss the generalisation of the
construction to higher rank Lie algebras.
Thursday 15 February
Title: The Klein-Gordon equation on curved spacetimes and its propagators.
Jan Derezinski (Katedra Metod Matematycznych dla Fizyki, University of Warsaw, Poland)
Abstract: The Klein-Gordon equation (including an electromagnetic potential) has several natural Green's functions, often called propagators. The so-called Feynman propagator, used in quantum field theory, has a clear definition on static spacetimes. I will discuss, partly on a heuristic level, its possible generalizations to the non-static case. I will also describe a curious, partly open problem about the self-adjointness of the Klein-Gordon operator.
Thursday 25 January
Title: Generalized Gibbs Ensembles and Generalized Hydrodynamics in quantum integrable systems
Jacopo De Nardis (Département de Physique, ENS Paris)
Abstract: I'll give a short review of recent theoretical progress in explicitly constructing non-thermal steady states in quantum systems such as interacting bosons and spin chains. Moreover, I'll present the recently introduced hydrodynamic description of such non-thermal steady states, which allows one to study (ballistic) transport properties of many-body systems and to construct non-equilibrium steady states with persistent energy or spin currents and stronger
Thursday 18 January
Title: Mesoscopic Quantum electrodynamics: from atomic-like physics to quantum transport.
Audrey Cottet (Laboratoire Pierre Aigrain, ENS Paris)
Cavity QED techniques have turned out to be instrumental to probe or manipulate coherently two level systems such as superconducting quantum bits. The success of this field relies on the
implementation of a strong coupling between the two level systems and cavity photons. Recently, experiments on hybrid mesoscopic circuits embedded in coplanar microwave cavities have appeared [1, 2].
This architecture is appealing since new degrees of freedom can be used in the context of cavity QED. In the first part of this talk, I will discuss how the strong coupling between a single charge [3, 4, 5]
or spin [6, 7, 8, 9] degree of freedom in a double quantum dot and cavity photons can be obtained.
Mesoscopic circuits represent model systems for quantum transport and condensed matter phenomena due to the presence of fermionic reservoirs. In the second part of this talk, I will show that
microwave cavities are also a powerful probe in that context. For a quantum dot coupled to a superconducting contact, a microwave cavity reveals photo-emission due to quasiparticle tunneling,
although this effect is too weak to be detected in a transport measurement [10]. A microwave cavity could also provide a new way to test the peculiar properties of Majorana bound states induced
inside a spin-orbit coupled quantum nanowire by a superconducting contact [11].
[1] Delbecq et al, Phys. Rev. Lett. 107, 256804 (2011).
[2] Frey et al, Phys. Rev. Lett. 108, 046807 (2012).
[3] Bruhat, Cubaynes, Viennot, Dartiailh, Desjardins, Cottet and Kontos, arXiv:1612.05214
[4] Mi, Cady, Zajac, Stehlik, Edge and Petta, Science 355 156 (2017)
[5] Stockklauser, Scarlino, Koski, Gasparinetti, Kraglund Andersen, Reichl, Wegscheider, Ihn, Ensslin, and Wallraff, Phys. Rev. X 7, 011030 (2017)
[6] Viennot, Dartiailh, Cottet, and Kontos, Science 349, 408 (2015).
[7] Mi, Benito, Putz, Zajac, Taylor, Burkard, Petta, arXiv:1710.03265
[8] Landig, Koski, Scarlino, Mendes, Blais, Reichl, Wegscheider, Wallraff, Ensslin, Ihn, arXiv:1711.01932
[9] Samkharadze, Zheng, Kalhor, Brousse, Sammak, Mendes, Blais, Scappucci, Vandersypen, arXiv:1711.02040
[10] Bruhat, Viennot, Dartiailh, Desjardins, Kontos and Cottet, Phys. Rev. X 6, 021014 (2016).
[11] Dartiailh, Kontos, Douçot and Cottet, Phys. Rev. Lett. 118, 126803 (2017)
Thursday 30 November
Title: Localization phenomena and topological properties of atomic lattice gases with long-range interactions
Jiri MINAR (University of Nottingham and University of Geneva)
Recent experimental progress in cold atomic gases has allowed for the creation of arbitrary lattice geometries at unit filling [1,2], which opens exciting ways for studies of the dynamics of quantum spin
Hamiltonians [3].
In this talk I will discuss two examples of dynamics in spin systems with long-range interactions.
Firstly, I will discuss the dynamics of Rydberg excitations in an optical tweezer array under the so-called facilitation condition [4]. Here, the presence of positional atomic disorder results in a
correlated disorder in the interatomic interaction strengths and drastically affects the facilitation dynamics. To shed light on the role of disorder in a many-body setting we show that here the
dynamics is governed by an Anderson-Fock model, i.e., an Anderson model formulated on a lattice with sites corresponding to many-body Fock states. We first consider a one-dimensional atom chain in a
limit that is described by a one-dimensional Anderson-Fock model with disorder on every other site, featuring both localized and delocalized states. Next, we consider a situation in which the system
maps on a two-dimensional Anderson-Fock model and observe a clear suppression of excitation propagation, which we ascribe to the localization of the many-body wave functions in Hilbert space. With
the help of the developed framework we then study the excitation dynamics in the ladder geometry.
Secondly, I will describe the topological properties of a two-dimensional atomic lattice gas, where the coupling of the atoms to the radiation field gives rise to dissipation and long-range interactions beyond a simple power law [5]. As a consequence, the energy spectra exhibit one-sided divergences in the Brillouin zone, and the standard bulk-boundary relation may break down in
topological insulators. We show that under certain conditions, the topological properties, such as the transport of an excitation along the edge of the lattice, remain robust with respect to the
presence of lattice defects and dissipation.
[1] D. Barredo, S. de Léséleuc et al., Science 354, 1021 (2016)
[2] M. Endres, H. Bernien et al., Science aah3752 (2016)
[3] H. Labuhn, D. Barredo et al., Nature 534, 667 (2016)
[4] M. Marcuzzi, J. Minář et al., Phys. Rev. Lett. 118, 063606 (2017)
[5] R. Bettles, J. Minář et al., Phys. Rev. A 96, 041603(R) (2017)
Thursday 23 November
Title: Quantum Transport after Inhomogeneous Quenches
Spyros Sotiriadis (University of Ljubljana, Slovenia)
I will discuss quantum dynamics and transport in systems that are initially split into two halves lying at different temperatures or particle densities and abruptly connected. After such an inhomogeneous quench, a Non-Equilibrium Steady State (NESS) typically forms in the thermodynamic and large-time limit. I will demonstrate how the emergence of the NESS can be derived from first principles, starting
from non-interacting lattice models in one dimension and considering the effects of different boundary conditions and of interacting defects. Next I will focus on a genuinely interacting integrable
system, the Lieb-Liniger gas, for which it has been recently conjectured that Generalised Hydrodynamics (GHD) emerges at large times. I will derive an exact formula for the NESS and show how certain
predictions of the above conjecture can be deduced from it.
Thursday 9 November
Title: Topological phases of parafermions: a model with exactly-solvable ground states
Leonardo MAZZA (ENS Paris)
In this talk I will speak about parafermions, emergent excitations that generalize Majorana fermions and can also realize topological order.
After making an introduction on the research field, I will present a non-trivial and quasi-exactly-solvable model for a chain of parafermions in a topological phase. The ground-state wave-functions,
which are matrix-product states and have a particularly elegant interpretation in terms of Fock parafermions, are computed and characterized. Using these wavefunctions, several signatures of
topological order are demonstrated analytically.
This study provides a starting point for the non-approximate study of topological one-dimensional parafermionic chains in the absence of strong edge modes.
Fernando Iemini, Christophe Mora and Leonardo Mazza, Phys. Rev. Lett. 118 170402 (2017)
Thursday 5 October
Title: Waves, singularities, and internal shear layers in rotating stratified fluids
Stéphane Le Dizès (IRPHE, Marseille, France)
Abstract: In this seminar I consider the response of a rotating stratified fluid to harmonic forcing. I show the great variety of this response depending on the geometry of the domain and on the forcing frequency. I also show how singularities appear within the fluid in the inviscid limit. In the presence of viscosity, these singularities give rise to internal shear layers whose characteristics can be determined. I analyze in more detail those generated by the libration of a disk and of a spheroid in an unbounded rotating medium.
The results are discussed in the contexts of internal geophysics (flows inside planets generated by gravitational forcing) and of oceanography (tidally generated flows).
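For reference (standard linear theory, not quoted from the seminar), the singularities of the inviscid forced response concentrate on characteristic cones whose inclination is fixed by the dispersion relation of inertia-gravity waves,

```latex
\omega^2 = N^2\cos^2\theta + f^2\sin^2\theta\,,
```

where $\omega$ is the forcing frequency, $N$ the buoyancy frequency, $f$ the Coriolis parameter, and $\theta$ the angle of the wavevector to the horizontal; viscosity smooths these singularities into internal shear layers.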
Thursday 28 September
Title: Controllable sub-5nm nanomaterial synthesis and manipulation
Xing Wu (Department of Electrical Engineering, East China Normal University)
Abstract: Two-dimensional (2D) ultra-thin materials like graphene, with rich physical properties and unique layered structures, are promising for applications in electronics, chemistry, energy, bioscience, etc. In this talk, I will mainly introduce the controllable synthesis of 2D materials, device fabrication, and electronic transport. I will also talk about manipulating 2D materials at the atomic scale using multiple-field transmission electron microscopy (TEM).
Thursday 21 September
Title: Bulk-edge correspondence for Floquet topological insulators
Clement Tauber (Département de Physique, ETH Zürich, Switzerland)
Abstract: Floquet topological insulators describe independent electrons on a lattice driven out of equilibrium by a time-periodic Hamiltonian, beyond the usual adiabatic approximation. In dimension
two such systems are characterized by integer-valued topological indices associated to the unitary propagator, alternatively in the bulk or at the edge of a sample. In this talk I will give new
definitions of the two indices, relying neither on translation invariance nor on averaging, and show that they are equal. In particular weak disorder and defects are intrinsically taken into account.
Finally, indices can be defined when two driven samples are placed next to one another, either in space or in time, and are then shown to be equal. The edge index is interpreted as a quantized pumping
occurring at the interface with an effective vacuum.
Thursday 22 June
Title: Freezing of entanglement in alternating transverse field XY model
Debasis SADHUKAN (Harish-Chandra Research Institute, Allahabad, India)
Abstract: Inhomogeneity often leads to the generation of new phases in
many-body systems. The one-dimensional quantum XY model in a uniform
transverse field is known to have a quantum phase transition from the antiferromagnetic to the paramagnetic phase. Introducing an alternating transverse field instead of a uniform one develops a new dimer phase in addition to the antiferromagnetic and paramagnetic phases. I will show that the quantum correlations present in the system can characterize all the quantum phase transitions present in the system. I will also talk about the trends of quantum correlations in such systems under closed and open dynamics. Finally, I will show that
bipartite entanglement can be frozen over time with a proper choice of
the many-body substrate, which is in contact with the environment via
a repetitive interaction.
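For context, the model referred to above is commonly written (a standard form, not quoted from the abstract) as

```latex
H = J\sum_{j}\left[
      \frac{1+\gamma}{4}\,\sigma^x_j \sigma^x_{j+1}
    + \frac{1-\gamma}{4}\,\sigma^y_j \sigma^y_{j+1}\right]
  + \frac{1}{2}\sum_{j}\bigl(h_1 + (-1)^j h_2\bigr)\,\sigma^z_j\,,
```

where $\gamma$ is the anisotropy and $h_1$, $h_2$ are the uniform and alternating components of the transverse field; $h_2=0$ recovers the usual XY chain in a uniform transverse field.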
Thursday 15 June
Title: Changes large and small: The physics of stochastic resetting
Shamik Gupta (Department of Physics, Ramakrishna Mission Vivekananda University, Belur Math, Calcutta, India)
Abstract: What happens when a continuously evolving stochastic process is interrupted with large changes at random intervals of time? Modeling the stochastic process by diffusion and the large
changes as abrupt resets to the initial condition, this talk will unveil a wide spectrum of rich long-time behavior that the resulting dynamics exhibits, from an ever-spreading spatial distribution,
to one that is time independent and characterizes a nonequilibrium stationary state. The implication of the results for physical situations of relevance will be discussed.
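To make the time-independent stationary state concrete (a classic result for diffusion with resetting, due to Evans and Majumdar, not quoted from the abstract): a particle diffusing with constant $D$ and reset at rate $r$ to its initial position $x_0$ reaches the nonequilibrium stationary distribution

```latex
p_{\mathrm{st}}(x) = \frac{\alpha_0}{2}\,
  e^{-\alpha_0 |x - x_0|}\,,
\qquad \alpha_0 = \sqrt{r/D}\,,
```

a tent-shaped (Laplace) distribution peaked at the resetting point, in contrast with the ever-spreading Gaussian of free diffusion.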
Thursday 8 June
Title: Observable's Statistical Mechanics
Fabio ANZA (Oxford University, Angleterre)
Abstract: The emergence of thermal equilibrium is the statistical foundation of thermodynamics. In a many-body quantum system whose microscopic dynamics is unitary, there are two main approaches. The "typicality approach" ascribes the emergence of local thermalization to entanglement, while the "Eigenstate Thermalization Hypothesis" (ETH) derives its intuition from the emergence of chaotic behavior of observables. After a brief introduction to these two topics, I will argue that the ordinary notion of thermal equilibrium is experimentally inaccessible and propose a more realistic way of describing thermal equilibrium, focused on observables rather than on the state of the system. I will show that the theory that emerges is an observable-wise generalization of statistical mechanics and that it provides a fresh perspective on the typicality approach and on ETH.
Thursday 18 May
Title: Topology and the Pseudo-Gap phase of Cuprates
Catherine Pépin (IPhT, CEA Saclay, Paris)
Abstract: The Pseudo-Gap state in under-doped cuprates remains the key mystery for the understanding of those compounds. Recently, a new concept has been introduced: that this state of matter could be controlled by topology. In this talk we review the two main forms of topological states, in real and momentum space, that are specific to quantum matter. We show how each of them can account in a different way for the phase diagram of the cuprates, and in particular the underdoped region between the Mott insulator at very low oxygen doping and the metallic state at high doping. We then
describe how skyrmions can emerge in the pseudo-spin space, related to an emerging SU(2) symmetry, and argue that proliferation of such skyrmions can account for a number of experimental properties
of the pseudo-gap phase.
Thursday 15 May
Title: 2D CFT blocks for a class of N=1 theories in 4D
Vladimir Mitev (Institute of Physics, Universität Mainz)
Abstract: In this talk I will present our program for the search for the 2D CFT description of a large class of 4D gauge theories with superconformal N=1 symmetry. I will show how to identify the 2D
CFT symmetry algebra and its representations, namely the conformal blocks of the Virasoro/W-algebra, that underlie the 2D theory and reproduce the Seiberg-Witten curves of the N=1 gauge theories. One
finds that the blocks corresponding to the gauge theories under investigation involve fields in certain non-unitary representations of the Virasoro/W-algebra. These conformal blocks further give a
prediction for the instanton partition functions of the 4D theories.
Thursday 11 May
Title: Complex structures and zero-curvature equations for sigma-models
Dimitri Bykov (Max-Planck-Institut für Gravitationsphysik, Potsdam, Germany)
I will construct zero-curvature representations for the equations of motion of a class of sigma-models with complex homogeneous target spaces, not necessarily symmetric. As an example, for the case
when the target space is a flag manifold and the worldsheet a sphere, I will describe all solutions to the equations of motion. Various ramifications of these results will be described.
Thursday 13 April
Title: Fermionic matrix product states and one-dimensional topological phases
Nick Bultinck (Ghent University, Belgium)
The matrix product state (MPS) formalism has been very successful both as the variational class underlying DMRG and as a theoretical tool to classify all symmetry-protected topological phases of spin systems in one dimension. In this talk I will explain how MPS can be extended to describe fermionic systems. This naturally leads to two classes characterized by the presence or absence of Majorana edge modes. Imposing additional global symmetries allows one to extract discrete invariants from the MPS that lead to the full classification of interacting symmetry-protected phases of fermions in one dimension. The invariants can be related to physical properties of the system, and their behavior under stacking of chains is determined by the intrinsic fermionic MPS
Thursday 6 April
Title: Detection of Zak phases and topological invariants in a chiral quantum walk of twisted photons
Alexandre Dauphin (ICFO, Barcelone)
Abstract: Topological insulators are exotic phases going beyond the standard Landau theory of phase transitions. These phases are characterized by a global topological order and present robust
conducting surface states protected by the topology of the system. Recently, a great effort has been made to quantum-simulate such phases. We here focus on the quantum simulation of one-dimensional
topological phases with quantum walks. We discuss how topology can arise in these systems and how to detect the topological phase. Finally, we discuss the recent photonic quantum walk realized in the
group of Prof. L. Marrucci and propose a realistic detection scheme of the topological invariant.
Title: Topological aspects of the generalized sine-kernel Fredholm determinants
Oleksandr Gamayun (Lorentz Institute, Leiden, Pays-Bas)
We consider Fredholm determinants with the so-called time-dependent generalized sine kernel introduced in [KK].
These determinants are appropriate for a description of two-point functions in a wide class of integrable models.
The long-distance/long-time asymptotic behaviour of these objects has been analysed by means of a Riemann-Hilbert problem (RHP) [KK].
We re-derive these asymptotics by means of summations of microscopic form-factors (similar to Refs. [KK2], [KK3]). This allows us to bypass
restrictions on the kernel needed for the RHP analysis. In particular, we consider the possibility for certain periodic functions in a kernel to have a topological phase-slip.
We study how these phase-slips affect the asymptotic behaviour and demonstrate how they appear in specific physical models.
[KK] K. K. Kozlowski {\em Riemann–Hilbert approach to the time-dependent generalized sine kernel}
[KK2] N. Kitanine, K. K. Kozlowski, J. M. Maillet, N. A. Slavnov, V. Terras {\em Form factor
approach to dynamical correlation functions in critical models} [arXiv:1206.2630]
[KK3] K. K. Kozlowski, J.-M. Maillet {\em Microscopic approach to a class of 1D quantum critical
models} [arXiv:1501.07711].
Thursday 30 March
Title: What are the impedance combination rules in quantum circuits?
Philippe Joyez (CEA/SPEC, Saclay)
Abstract: When several quantum electronic components are assembled in an electrical circuit, they interact with each other in a non-local and non-linear way that prevents using the usual impedance combination rules to predict the behavior of the circuit. I will first explain qualitatively how this interaction operates. Then I will show how it can be taken into account for making detailed predictions on several simple circuits, making a link with quantum optics.
[1] C. Altimiras, F. Portier and P. Joyez, Interacting electrodynamics of short coherent conductors in quantum circuits, Phys. Rev. X 6, 031002.
Thursday 23 March
Title: Lattice deformation of the Virasoro algebra: Volterra, Toda-2 and q-Toda models.
Olivier Babelon (LPTHE, Université Pierre et Marie Curie, Paris)
I will recall the old program of L. D. Faddeev to define an integrable lattice deformation of CFT.
This led to the Volterra model and, more recently, to the simpler Toda-2 model (the Toda chain in its second Hamiltonian structure).
I will point out the similarities and differences between Toda-2 and the q-Toda chain.
I will explain the separation of variables and the construction of Baxter Q operator for Toda-2 and q-Toda.
Thursday 2 February
Title: Non-equilibrium dynamics of quantum systems: the Loschmidt echo
Eric Vernier (SISSA and INFN, Trieste, Italy)
The non-equilibrium dynamics of quantum many-body systems has attracted large interest over the last decade, prompted by formidable advances in cold-atom experiments.
While much progress has been made in understanding the relaxation mechanisms of physical observables and the characterization of the stationary state following, for instance, a quantum quench (where
an isolated system is left to evolve after one or several parameters have been suddenly changed), very few analytical results exist for the full time dynamics, despite the existence of prototypical
integrable models. Indeed, the time dynamics involves contributions from arbitrarily excited eigenstates of the Hamiltonian, making calculations prohibitively difficult.
In this talk I will present some recent progress in this direction, namely an analytical computation of the Loschmidt echo, which measures the overlap between the state of the system at a given time and its initial state, for various types of quenches in the
Heisenberg XXZ spin chain. The latter has recently attracted renewed interest in the context of dynamical phase transitions, which it signals through its non-analyticities as a function of time.
Using a reformulation of the problem in terms of an auxiliary boundary quantum transfer matrix and an infinite set of functional relations, we write the Loschmidt echo as the solution of
an infinite set of Non-Linear Integral Equations, which allows for its exact determination at arbitrarily large times. This method overcomes the time limitations experienced by numerical approaches,
and may serve as a basis for the computation of other physical observables.
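As a point of reference (a standard definition, not spelled out in the abstract), the Loschmidt echo after a quench to a Hamiltonian H from an initial state $|\Psi_0\rangle$ is

$$ \mathcal{L}(t) = \big| \langle \Psi_0 | e^{-iHt} | \Psi_0 \rangle \big|^2 , $$

and the dynamical phase transitions mentioned above show up as non-analyticities of the associated return rate $-\lim_{N\to\infty} N^{-1}\log \mathcal{L}(t)$.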
Thursday 26 January
Title: Modified Bethe Ansatz for models without U(1) symmetry.
Samuel Belliard (IPhT, CEA-Saclay, Paris)
We present a modified version of the algebraic Bethe ansatz (MABA) that allows one to characterize the eigenvalues and eigenstates of spin chains without U(1) symmetry. In the cases of the XXX
Heisenberg spin chain on the segment and on the circle, the Bethe vectors and the associated eigenvalues will be constructed, and in some special cases the scalar product of these Bethe vectors will
be conjectured [Belliard, Crampé (2013)], [Belliard, Pimenta (2015)]. The solution involves the Baxter T-Q equation with an inhomogeneous term introduced by [Cao et al. (2013)] and used in the
quantum separation of variables approach by [Niccoli et al. (2014)].
The relation between these different methods will be pointed out.
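Schematically (the precise coefficient functions depend on the model and the boundary conditions), the inhomogeneous Baxter T-Q equation of [Cao et al. (2013)] referred to above takes the form

$$ \Lambda(u)\, Q(u) = a(u)\, Q(u-\eta) + d(u)\, Q(u+\eta) + F(u), $$

where $F(u)$ is the inhomogeneous term that vanishes in the ordinary U(1)-symmetric case.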
Thursday 19 January
Title: Engineering non-abelian states from simpler topological order
Cécile Répellin (Max Planck Institute, Dresden, Germany)
The possibility of realizing anyons -- quasiparticles with fractional exchange statistics -- is an exciting prospect in the field of interacting topological phases. Non-abelian anyons, whose exchange
is characterized by a matrix rather than a simple phase, are the most exotic kind. They are highly sought after as they could be used as qubits for quantum computation intrinsically immune to
decoherence. While non-abelian anyons are expected to appear in the fractional quantum Hall effect, engineering systems that purposefully favor their emergence might be a better strategy to probe
their properties. In this talk, I will explore two such routes. The first one stems from the concept of projective construction: a multilayer abelian system can formally be transformed into a
non-abelian one by application of a non-local operator. I will review this construction in the case of a well-known fractional quantum Hall state -- the Moore-Read state -- and show how to obtain all
of its topological sectors by the insertion of line defects. I will then discuss a possible physical realization of the projective construction in a quantum Hall bilayer, and provide numerical
arguments for this discussion. Another route to obtain non-abelian degeneracies is to trigger a phase transition at the edge of a quantum Hall bilayer where the excitations will be localized. I will
show some preliminary numerical results supporting this transition in a microscopic system.
Thursday 12 January
Title: Large deviations in single-file motion
Tridib Sadhu (Department of Theoretical Physics, Tata Institute of Fundamental Research)
Transport of impenetrable particles in a crowded one-dimensional channel is referred to as single-file motion. The particles cannot pass each other, and this leads to sub-diffusion of individual
(tagged) particles. Such constrained motion has been observed in many physical systems: motion of ions through narrow pores in cell membranes, transport of large molecules in porous media, etc. I
shall present a hydrodynamic formulation to analyze the probability distribution of the position of a tagged particle in single-file. This formulation is an application of the macroscopic fluctuation
theory and applies to a large class of single-file systems. The framework enables one to calculate the large deviation function of the tagged particle position which contains the full statistics at
large time. As a simple example, I shall discuss a system of Brownian point particles with hard-core repulsion and show how to derive an exact expression of the large deviation function. Then I shall
present an exact solution of the problem starting from microscopic dynamics and verify the hydrodynamic results. In addition, I shall discuss connection with fractional Brownian motion and emphasize
an unusual dependence on the initial state, even at large times.
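For concreteness (a standard result in this setting rather than a quote from the abstract), in single-file diffusion the tagged-particle position fluctuates anomalously, $X_t \sim t^{1/4}$, and the large deviation function $\phi$ mentioned above enters through a scaling form of the type

$$ P\!\left( X_t = \xi \sqrt{t} \right) \sim e^{-\sqrt{t}\, \phi(\xi)} , $$

so that $\phi$ encodes the full statistics of $X_t$ at large times.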
Thursday 5 January
Title: Quantum fields with tensorial locality
Sylvain Carrozza (Perimeter Institute, Canada)
In recent years, generalizations of matrix models known as Tensor Models and Group Field Theories have been developed into a consistent formalism. The common feature of these field theories is an
abstract notion of locality, known as tensorial locality, which encodes the combinatorial structure of the elementary field interactions. It was initially introduced in the context of quantum
gravity, where indeed the absence of a non-dynamical background space-time renders the standard notion of locality inoperative. I will provide an overview of this approach, focusing on general
features of the phase diagrams of tensorial theories, and of their possible applications to quantum gravity and statistical physics. I will in particular discuss the tensorial version of the
Sachdev-Ye-Kitaev model recently proposed by Witten.
Thursday 24 November
Title: The relative locality of quantum spacetime
Laurent Freidel (Perimeter Institute, Canada).
Should we revisit the concept of space based on quantum mechanics? Do we need a radically new physical principle to address the problem of quantum gravity? In this talk I will address these questions.
I will review the central challenges one faces when trying to understand the theory of quantum gravity and focus on the main one, which is non-locality. I will present a collection of results
and ideas developed in recent years that provide a radically new perspective on these issues. One of the central concepts I’ll present is the idea that locality has to be made
relative, and how this idea goes back to one of the founders of quantum mechanics: Max Born. I’ll also explain how these new ideas remarkably force us to revisit the concept of space itself and
propose a natural generalization, called modular space, that incorporates quantum mechanics in its fabric. I’ll also sketch how these foundational ideas quite unexpectedly link with the most recent
developments on the geometry of string theory and generalized geometry.
Thursday 17 November
Title: Critical behavior of open quantum systems
Riccardo ROTA (Laboratoire Matériaux et Phénomènes Quantiques, Univ. Paris Diderot VII)
I will discuss intriguing features of dissipative phase transitions in open quantum systems. In particular, I will present recent results [1] about the critical properties of two-dimensional lattices
of spins interacting via an anisotropic Heisenberg Hamiltonian and subject to incoherent spin flips. Using the recently developed corner-space renormalization method [2], I will show the finite-size
scaling and critical exponent of the magnetic linear susceptibility. I will also present results for the Von Neumann entropy and the quantum Fisher information across the transition, showing that a
dissipative phase transition can share properties of both thermal and quantum phase transitions.
[1] R. Rota, F. Storme, N. Bartolo, R. Fazio and C. Ciuti, arXiv:1609.02848 [quant-ph]
[2] S. Finazzi, A. Le Boité, F. Storme, A. Baksic and C. Ciuti, Phys. Rev. Lett. 115, 080604 (2015).
Thursday 10 November
Title: Random field Ising model out of equilibrium
Ivan BALOG (Institute of Physics, Zagreb, Croatia).
Abstract: Phase transitions in the RFIM are characterized by a huge number of quasidegenerate metastable states, which is why the problem has resisted solution for so long. To fully capture the
important physics we have used the Nonperturbative Renormalization Group (NPRG) approach. I will present how one can describe the critical relaxation to equilibrium, as well as the
out-of-equilibrium or hysteresis criticality of this model, starting from a dynamical formalism that we developed within the NPRG.
Thursday 3 November
Title: On classical de Sitter and Minkowski string backgrounds
David Andriot (Albert-Einstein Institut Potsdam, Germany).
Abstract: Standard paths to connect string theory to cosmology or particle physics require finding backgrounds where space-time is a product of de Sitter or Minkowski space-time and a compact
manifold. We study the existence of such backgrounds at the classical level, in the framework of type II supergravities with parallel orientifolds and D-branes. For de Sitter, we obtain highly
constraining no-go theorems; this allows us to exclude a stringy realisation of a particular inflation scenario. For Minkowski, we characterise a broad class of solutions that possibly accounts for
all such backgrounds.
Thursday 20 October
Title: Entanglement entropies in 3d gauge theories
Aldo Riello (Perimeter Institute)
Entanglement entropy is a valuable tool for characterizing the correlation structure of quantum field theories. When applied to gauge theories, subtleties arise which prevent the factorization of the
Hilbert space underlying the notion of entanglement entropy. Borrowing techniques from extended topological field theories, I introduce a new definition of entanglement entropy for both Abelian and
non–Abelian gauge theories. I will then relate this construction to earlier proposals and argue that it brings them closer to each other. I will also point out that different definitions of
entanglement entropies can be related to choices of (squeezed) vacuum states and excitations thereupon. Time allowing, I will briefly discuss aspects more closely related to topics in quantum
Thursday 13 October
Title: Higher Spins & Strings
Matthias R. Gaberdiel (Institut für Theoretische Physik, ETH Zurich, Switzerland)
Abstract: The conjectured relation between higher spin theories on anti-de Sitter (AdS) spaces
and weakly coupled conformal field theories is reviewed. I shall then outline the
evidence in favour of a concrete duality of this kind, relating a specific higher spin
theory on AdS3 to a family of 2d minimal model CFTs, and show how this duality fits
into the framework of the familiar stringy AdS/CFT correspondence. Finally, I shall
explain how Yangian symmetries appear in this context, hinting at an underlying
integrable symmetry.
Thursday 6 October
Title: Hexagons and Three-Point Functions
Benjamin Basso (LPT, ENS, Paris, France)
Abstract: I will present a framework for computing correlators of three
single trace operators in planar N=4 SYM theory that uses hexagonal
patches as building blocks. This approach allows one to exploit the
integrability of the theory and derive all loop predictions for its
structure constants. After presenting the main ideas and results, I will
discuss recent perturbative tests and open problems. Based on arXiv
Thursday 22 September
Title: Cooperativity flows and Shear-Bandings: a statistical field theory approach
Roberto Benzi (Università Roma Tor Vergata, Rome, Italy)
Shear band formation is an example of a material instability, corresponding to an abrupt loss of homogeneity of deformation occurring in a solid sample subject to a loading path compatible with
continued uniform deformation. This phenomenology is associated with `complex materials'
as it is clearly distinct from the simpler homogeneous deformation or deformation rate fields in ideal Hookean solids and Newtonian fluids.
In this talk I show that shear bandings can be interpreted as compact solutions emerging from the variational formulation of the field theory. The order parameter of the theory is the fluidity (the
inverse of viscosity). The coexistence of compactons with regions of zero fluidity ("non-flowing vacuum") is shown to be stabilized by the presence of mechanical noise, which ultimately shapes the
equilibrium distribution of the fluidity field.
Thursday 7 July
Title: Atomtronics flux qubits
Luigi Amico (Università di Catania, Italy & Center for Quantum Technologies, Singapore)
Abstract: Atomtronics is an emerging field seeking to realize atomic circuits exploiting ultra-cold atoms manipulated in micro-magnetic or laser-generated micro-optical circuits. Atomtronics circuits
are made of bosonic/fermionic charge-neutral carriers put in motion by applying a 'potential drop' induced by methods developed by quantum technology. The typically low decoherence/dissipation
rates of cold-atom systems and the high controllability and enhanced flexibility of potentials for ultracold matter make Atomtronics very attractive for enlarging the scope of existing cold-atom
quantum technology. In this talk, I will concentrate on a few specific Atomtronics schemes for quantum processing. In particular, I will discuss the quantum dynamics of Bose-Einstein condensates
trapped in ring shaped potentials. Interrupting the ring with weak links, atomic analogs of SQUIDs are realized. I will discuss the issue of scalability.
Thursday 30 June
Title: Many spins in a lattice
Bruno NAYLOR (Laboratoire de Physique des Lasers - Paris XIII)
Abstract: The field of ultracold atoms offers the possibility to load atoms in a periodic potential (called an optical lattice) and engineer condensed-matter-like model Hamiltonians. In our experiment,
chromium atoms are loaded in each site of a 3D optical lattice. We study spin dynamics due to long-range dipole-dipole interactions. This dynamics is inherently many-body, as each atom is coupled to
its many neighbors. We specifically study in which conditions the spin dynamics can be seen as classical, and in which conditions quantum correlations arise.
Thursday 23 June
Title: Antiferroquadrupolar and Ising-nematic orders of a frustrated bilinear-biquadratic Heisenberg model and implications for the magnetism of FeSe.
Rong YU (Renmin University of China, Beijing)
Abstract: Motivated by the magnetic properties of the iron chalcogenides, we study the phase diagram of a generalized Heisenberg model with frustrated bilinear-biquadratic interactions on a square
lattice. We identify zero-temperature phases with antiferroquadrupolar and Ising-nematic orders. The effects of quantum fluctuations and interlayer couplings are analyzed. We propose the
Ising-nematic order as underlying the structural phase transition observed in the normal state of FeSe, and discuss the role of the Goldstone modes of the antiferroquadrupolar order for the dipolar
magnetic fluctuations in this system. Our results provide a considerably broadened perspective on the overall magnetic phase diagram of the iron chalcogenides and pnictides, and are amenable to tests
by new experiments.
Thursday 26 May
Title: Factorized solutions of the Yang-Baxter equation and the lattice models.
Dimitri Chicherin (LAPTH, Annecy)
Abstract: The Yang-Baxter equation (YBE) plays a major role in the theory of completely integrable quantum systems.
It has found numerous applications in mathematical physics. We are interested in the rational, trigonometric,
and elliptic solutions of the YBE for the rank one symmetry algebras. They involve such special functions as
the noncompact quantum dilogarithm and the elliptic gamma function. We construct the general integral
solution for the principal series representations and associate it with a lattice model of statistical mechanics
with continuous spin variables. All finite-dimensional solutions of the YBE are obtained from the integral one.
We explain the underlying algebraic structure and provide the factorized form of the solutions.
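For reference, the Yang-Baxter equation in its standard difference form, acting on a triple tensor product, reads

$$ R_{12}(u-v)\, R_{13}(u)\, R_{23}(v) = R_{23}(v)\, R_{13}(u)\, R_{12}(u-v), $$

and the rational, trigonometric and elliptic cases mentioned above correspond to the three known classes of dependence of $R$ on the spectral parameter.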
Thursday 17 March
Title: Modeling human interactions and the dynamics of epidemic spreading
Christian Lyngby Vestergaard (Centre de Physique Théorique, Marseille)
Abstract: Respiratory infections, such as the flu, spread mainly through face-to-face contacts between individuals. The recent development of portable and cheap radio-frequency receivers/emitters has
enabled time-resolved measurement of physical interactions. Measured data, typically represented by a temporal network, reveal the heterogeneous dynamics at play and can be used to improve models of
social behavior and inform realistic simulations of epidemic spreading. I will present some of our recent advances along these directions. First, I present a generalized version of the Doob-Gillespie
algorithm that can be used for simulation of stochastic contagion processes on temporal networks. This temporal Gillespie algorithm is stochastically exact, and up to several orders of magnitude
faster than traditional methods based on rejection sampling. Second, I present a simple generative modeling framework for social interactions in a well-mixed population. It allows us to study how
heterogeneous dynamics emerge as the result of different memory mechanisms at the level of individuals. We propose four individual mechanisms, which together result in generally heterogeneous network
dynamics, notably of contact and inter-contact durations and frequencies of contacts per link, as observed in empirical contact networks. Our modeling framework thus enables us to study the
individual effect of heterogeneities on the propagation of contagion processes. I finally discuss our current efforts to include social groupings and different physical locations, which constrain who
can interact with whom, in the above model. This augmented model can be applied to investigate the validity of the assumptions of homogeneity underlying the popular metapopulation models of epidemic
spreading, and to study dynamic sampling effects.
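As background, the classic constant-rate Doob-Gillespie step that the temporal algorithm generalizes can be sketched as follows; the function name, the toy well-mixed SI example, and the rate choices are illustrative assumptions, not taken from the talk.

```python
import math
import random

def gillespie_step(rates, rng):
    """One step of the classic Doob-Gillespie algorithm: sample the
    exponential waiting time to the next event, then pick which event
    fires with probability proportional to its rate."""
    total = sum(rates.values())
    if total <= 0:
        return None, math.inf  # no event can ever fire
    dt = rng.expovariate(total)       # waiting time ~ Exp(total rate)
    target = rng.uniform(0.0, total)  # roulette-wheel event selection
    acc = 0.0
    for event, rate in rates.items():
        acc += rate
        if target <= acc:
            return event, dt
    return event, dt  # guard against floating-point round-off

# Toy well-mixed SI process: each susceptible node becomes infected
# at rate beta times the current number of infected nodes.
rng = random.Random(42)
beta = 0.5
infected, susceptible = {0}, {1, 2, 3}
t = 0.0
while susceptible:
    rates = {s: beta * len(infected) for s in susceptible}
    node, dt = gillespie_step(rates, rng)
    t += dt
    susceptible.remove(node)
    infected.add(node)
```

The temporal variant described in the abstract additionally updates the rates as the contact network changes between events, which is what lets it stay exact while avoiding the rejection sampling of traditional methods.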
Thursday 10 March
Title: Crossing probability for directed polymers in random media: exact results and relation to random matrices
LPTMS, Orsay
Abstract: I discuss the problem of many polymers that compete in the same random potential but are forced to avoid each other. By means of the replica trick, this model is mapped onto the usual
Lieb-Liniger model of quantum particles, but with a generalized statistics. I will introduce the nested Bethe-Ansatz method, which allows for an exact solution of this quantum system. The result agrees
with a previous derivation based on the Macdonald process. In this way, we arrive at a general Fredholm determinant formula which can be used to study general clusters of avoiding polymers. We apply this
formalism to the study of the non-crossing probability P for two polymers. We compute exactly the leading large-time behavior of all its moments. From this, we extract the tail of the probability
distribution of the non-crossing probability. The exact formula is compared to numerical simulations, with excellent agreement.
Thursday 3 March
Title: Scattering amplitudes and hidden symmetries in supersymmetric gauge theory
Jan Plefka (HU Berlin)
Abstract: We give a brief introduction to gauge field theory - the underlying theoretical framework of elementary particle physics - and its symmetries. Then we shall discuss supersymmetry and a
holographic description of gauge fields in terms of a higher dimensional string theory known as the AdS/CFT correspondence. Finally, we focus on recent results for scattering amplitudes in
supersymmetric gauge theory, their string dual description and surprising hidden symmetries pointing towards an integrable structure.
Thursday 18 February
Title: Almost black holes in flowing water
Antonin Coutant (School of Mathematical Sciences, University of Nottingham, UK)
Abstract: When the velocity of flowing water surpasses the speed of surface waves, these propagate in exactly the same way as radiation near a black hole horizon. In particular, it is in theory
possible to reproduce in such systems the (classical) analog of the Hawking effect. Unfortunately, it is in practice quite hard to obtain controllable flows that reach the wave velocity. It turns out
that if the flow accelerates, even below the threshold velocity, there is still an imprint of the Hawking effect, which has been experimentally observed in Vancouver and in Poitiers. I will describe
the nature of this imprint, as well as its spectrum for low frequency waves. In particular, I will show that the production of Hawking-like modes is governed by a new type of horizon, which is
reached for complex values of the position. This “complex horizon” governs both the region of this mode production and its spectrum.
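For background (textbook material, not taken from the talk), small-amplitude surface waves on water of depth h obey the dispersion relation

$$ \omega^2 = g k \tanh(hk), $$

so long waves travel at $c = \sqrt{gh}$; an analogue horizon forms where the flow speed $|v|$ reaches $c$, which is the threshold that the flows mentioned above stay below.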
Thursday 4 February 2016
Title: Fully developed isotropic turbulence from non-perturbative renormalisation group
Léonie Canet (LPMMC Grenoble)
Abstract: I will present a field-theoretic approach, based on the Non-Perturbative (or Functional) Renormalisation Group (NPRG), to study the regime of fully developed isotropic and homogeneous
turbulence of the Navier-Stokes equation with stochastic forcing, focusing on incompressible fluids. I will review the symmetries of the associated field theory, and point out one which had not
previously been identified and which leads to useful identities. I will then present the NPRG flow equations within the leading-order approximation, and show that they lead to a fixed point, both in
two and three space dimensions. I will then analyze the properties of this fixed-point solution, in particular in the limit of large wave-number, show that it does not entail the usual scale invariance, and explain
the mechanism within NPRG for the emergence of corrections to the dimensional scaling.
Thursday 21 January 2016
Title: High Energy ideas and phenomena in quantum simulation
Alessio Celi (ICFO, Castelldefels)
Abstract: After reviewing the ideas and motivations for quantum simulation, I will introduce the simulation of synthetic (background) gauge fields with ultracold atoms as a paradigmatic example. Such
quantum simulators allow one to study quantum Hall effects, as well as the simulation of relativistic physics and other topological systems. Of all the possible strategies developed to engineer
synthetic gauge fields, I will detail the one based on "synthetic dimensions", which we introduced and which has recently been realized in two different experiments. In the last part of the
talk, I will briefly comment on other branches of my program, namely how it is possible to simulate (certain) lattice gauge theories, i.e. models in which the synthetic gauge field becomes
dynamical and quantum, and artificial gravity backgrounds.
Thursday 7 January 2016
Title: Thermodynamics of trajectories for quantum open systems: from full-counting statistics and dynamical phase transitions to fluctuation theorems
Simon PIGEON (Queen’s University, Belfast, UK)
Abstract: The description of the dynamics resulting from the interaction of a quantum system with its environment is one of the key goals of modern quantum physics. The formal description of the
evolution of an open system, especially in a quantum context, is typically tackled through a master equation approach. Recently, a promising approach came to light, combining the quantum master
equation with large-deviation theory. Unlike others, this approach applies to any dissipative quantum system, paving the way to a standard description of the dynamics of open quantum systems in terms
of a thermodynamics of trajectories. Using two different systems, I will explore the possibilities offered by this approach. Starting with a small interacting spin ring, we will see how the
thermodynamics of trajectories predicts bistable dynamical behaviour. Next I will consider a paradigmatic system in quantum mechanics, a quantum harmonic oscillator connected to various baths. I will
present how our approach, based on quantum optics methods, yields an analytical expression for the large-deviation function encoding the full-counting statistics of exchange between the system and
the environment. Furthermore, the same approach, generalised to any network of harmonic oscillators undergoing linear dynamics, allows us to efficiently derive numerically the behaviour of
energy-exchange processes between the system in a steady state and the environment. From it we can access possible fluctuation theorems, key thermodynamic quantities for a large variety of open systems.
Thursday 17 December 2015
Title: Resonant Tunneling in a Dissipative Environment: Quantum Critical Behavior
Harold U. Baranger (Department of Physics, Duke University, Durham, NC, USA)
Abstract: The role of the surroundings, or environment, in quantum mechanics has long captivated physicists' attention. Recently, quantum phase transitions (QPTs) -- a qualitative change in the ground
state as a function of a parameter -- have been shown to occur in systems coupled to a dissipative environment. Despite the ubiquity of QPTs in contemporary theoretical physics, obtaining clear
experimental signatures has been challenging. I start by presenting a recent experiment in which it was possible to thoroughly characterize a QPT caused by coupling to an environment. The system is a
single-molecule transistor built from a carbon nanotube quantum dot connected to strongly dissipative contacts. The electrical conductance of this system is highly singular as T tends to 0: the
conductance is 0 except at one special point (on resonance and symmetric coupling) at which electrons are fully transmitted with unit probability. I then turn to the theoretical understanding of this
QPT obtained by mapping the problem onto that of a resonant Majorana fermion level in an interacting electron liquid. The unitary transmission obtained in the experiment is seen as a competition
between the two leads. The deviations from unitarity at nonzero temperature are connected to residual interactions among the Majoranas; in this way, the experiment observes a signature of Majorana
critical behavior.
Thursday 10 December 2015
Title: A kagome map of spin liquids
Ludovic Jaubert (Okinawa Institute of Technology)
Abstract: Despite its deceptive simplicity, few concepts have more fundamental implications than chirality. In magnetic materials, spin chirality gives rise to unconventional phenomena such as the
anomalous Hall effect and multiferroicity, taking an enhanced flavour in the so-called spin-liquid phases where magnetic disorder prevails. The kagome lattice sits at the crossroad of these ideas.
Here we shall bring together a network of kagome spin liquids with anisotropic and Dzyaloshinskii-Moriya interactions. This network revolves around the Ising antiferromagnet and ends on
(ferromagnetic) chiral spin liquids with spontaneously broken time-reversal symmetry. As for the celebrated Heisenberg antiferromagnet, it now belongs to a triad of equivalently disordered phases.
The present work provides a unifying theory of kagome spin liquids with time-reversal nearest-neighbour Hamiltonians, and promising applications for rare-earth based kagome materials and
optical-lattice experiments. Work done in collaboration with Karim Essafi & Owen Benton from OIST, Japan.
Thursday 3 December 2015
Title: Quantum spacetime and topological quantum field theories with defects
Marc Geiler (Perimeter Institute, Canada)
Abstract: After introducing and reviewing some recent developments in discrete non-perturbative approaches to quantum gravity, I will present the construction of new vacua and representations for the
quantum algebra of observables of canonical gravity. This will highlight the unsuspected richness of this class of theories, and make more transparent their interpretation as topological quantum
field theories with defects. I will then explain the relevance of this construction for the study of the continuum limit via coarse-graining and renormalization.
Thursday 26 November 2015
Title: The eta-deformation of the AdS_5 x S^5 superstring and supergravity.
Ben HOARE (Institut f. Theoretische Physik, ETH Zürich SWITZERLAND)
Abstract: In this talk we will discuss recent progress in understanding the extent to which the eta-deformed AdS_5 x S^5 background is a solution of Type IIB supergravity. Observing that the
background itself is not a solution, but its 6-fold T-dual is, we will construct the corresponding deformed supergravity equations, of which the eta-deformed background is a solution.
Thursday 19 November
Title: Matrix product approximations to conformal field theories
Volkher B. Scholz
Abstract: We establish rigorous error bounds for approximating correlation functions of conformal field theories (CFTs) by certain finite-dimensional tensor networks. For chiral CFTs, the
approximation takes the form of a matrix product state. For full CFTs consisting of a chiral and an anti-chiral part, the approximation is given by a finitely correlated state. We show that the bond
dimension scales polynomially in the inverse of the approximation error and sub-exponentially in the ultraviolet cutoff. We illustrate our findings using Wess-Zumino-Witten models, and show that
there is a one-to-one correspondence between group-covariant MPS and our approximation. (joint work with Robert Koenig (TUM), see arXiv:1509.07414)
Thursday 12 November
Title: Wannier functions for crystalline solids, obstruction theory, and the Z_2-invariants of topological insulators
Gianluca PANATI (Rome)
Abstract: The localization of electrons in crystalline solids is often expressed in terms of the Wannier functions, which provide an orthonormal basis of L2(Rd) canonically associated to a given
periodic Schrödinger operator. The existence of exponentially localized (composite) Wannier functions might be, a priori, topologically obstructed, in view of the possible competition between
regularity and periodicity of the corresponding (quasi-) Bloch functions. In a previous work (2007), we proved that the obstruction to the existence of exponentially localized Wannier functions is
given exactly by the Hall conductance of the system, provided d<=3. On the other hand, for time-reversal (TR) symmetric systems this obstruction vanishes. Thus one may ask a finer question, and
investigate the existence of frames of Bloch functions which are simultaneously smooth, periodic and TR-symmetric. The answer to this question depends on whether the TR operator is even or odd.
In the latter case, an intriguing relation with the Z_2-invariants of TR-symmetric topological insulators appears.
Thursday 5 November 2015
Title: Quantum Algebras Based on Extensions of psl(2|2)
Niklas Beisert
Abstract: TBA
Thursday 15 October 2015
Title: Yang-Baxter deformations in AdS/CFT and flat space
Stijn VAN TONGEREN (Humboldt-Universitaet zu Berlin, Allemagne)
Abstract: The string on AdS5xS5 can be deformed in many ways while preserving its integrability, using the framework of Yang-Baxter deformations. Time permitting, my talk will consist of two parts.
Firstly, starting from the quantum deformation of the string on AdS5xS5, by a contraction limit I will discuss the quantum deformation of strings on flat space. This contraction limit is actually
identical to the so-called ``mirror limit''. This gives a neat perspective on the string on AdS5xS5, as a worldsheet double Wick rotation of the quantum deformation of the simplest possible string.
Secondly, I will discuss homogeneous deformations as Drinfeld twisted models, and show how to interpret (many of) them in terms of AdS/CFT.
Thursday 8 October 2015
Title: Theory of the many-body delocalization transition
Romain Vasseur (Berkeley)
Abstract: In this talk, I will review the physics of many-body localized systems. As disorder is weakened below a critical value, these non-thermal quantum glasses melt via a continuous dynamical
phase transition into a high temperature, ergodic liquid. In contrast to classical phase transitions between two different non-zero temperature phases of matter, and to quantum phase transitions
between zero temperature phases, this dynamical delocalization transition represents an entirely new type of critical point in which statistical mechanics and thermalization break down sharply at a
continuous phase transition. I will describe an effective model for such quantum-to-classical transitions and use it to compute their universal critical properties.
Thursday 1 October 2015
Title: Interfaces in correlated Random Media: what we learn from the Directed Polymer
Vivien Leconte (Laboratoire Probabilités et Modèles Aléatoires, Université Paris VII Denis Diderot)
Abstract: One-dimensional boundary interfaces between different phases are described at macroscopic scales by a rough fluctuating line, whose geometrical properties are dictated by the disorder in
the underlying medium, by the temperature of the environment, and by the elastic properties of the line. A widely used and successful model is the directed polymer in a random medium, pertaining to
the Kardar-Parisi-Zhang (KPZ) universality class. Much is known for this continuous model when the disorder is uncorrelated, and it has allowed us to understand the static and dynamical features of
experimental systems ranging from magnetic interfaces to liquid crystals. We show that short-range correlations in the disorder at a scale ξ > 0 modify the uncorrelated (i.e. zero ξ) picture in a
non-obvious way. While the geometrical fluctuations are still described by the celebrated 2/3 KPZ exponent, characteristic amplitudes are modified even at scales much larger than ξ, in a
well-controlled and rather universal manner. Our results are also relevant to the slow (so-called `creep') motion of interfaces in random media and, more formally (through replicas), to
one-dimensional gases of bosons interacting through a softened delta potential. We also discuss results obtained in the same spirit for the depinning force of interfaces with long-range elasticity.
Thursday 24 September 2015
Title: Order by disorder in the antiferromagnetic Ising pyrochlore
Pamela C. Guruciaga (Université Pierre et Marie Curie)
Abstract: In the pyrochlore lattice, Ising-like spins occupy the vertices of corner-sharing tetrahedra, pointing along the local <111> directions. Via the dumbbell model, this spin configuration can
be mapped into a system of non-conserved magnetic charges ("magnetic monopoles"), with four types of charges: positive or negative, single or double. In this way, the rotation of a magnetic moment is
equivalent to the creation, annihilation or translation of a monopole in a discrete lattice. This mapping is commonly used in the context of the frustrated magnetic materials known as spin ices; in
this case, we will address their antiferromagnetic counterpart and consider only nearest-neighbour interactions. Due to the system's geometry, a magnetic field applied along the [110] direction of
the pyrochlore lattice couples to two spins per tetrahedron, but does not affect the other two which are perfectly orthogonal. In this situation, there is a range of field intensity in which the
ground state consists of single monopoles, with double monopoles playing the role of the lowest excitations. Although the system is charge-disordered at T=0, a single monopole crystal is found at
finite (low) temperature -- a phenomenon known as order by disorder. We use the Wang-Landau algorithm to find the density of states of the system and show that it is driven to order by its
excitations. We also perform Monte Carlo simulations with the Metropolis algorithm to characterise the other transitions present.
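As a generic illustration of the Metropolis step mentioned above (a minimal sketch on a plain 1D Ising chain, not the pyrochlore model of the talk):

```python
import numpy as np

def metropolis_ising_1d(n_spins, beta, n_sweeps, J=1.0, seed=0):
    """Metropolis sampling of a 1D Ising chain with periodic boundaries.
    Returns one energy sample per sweep."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n_spins)

    def energy(spins):
        # nearest-neighbour coupling, periodic boundary via np.roll
        return -J * np.sum(spins * np.roll(spins, 1))

    energies = []
    for _ in range(n_sweeps):
        for _ in range(n_spins):
            i = rng.integers(n_spins)
            # energy change if spin i is flipped
            dE = 2.0 * J * s[i] * (s[i - 1] + s[(i + 1) % n_spins])
            # accept downhill moves always, uphill with Boltzmann weight
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] *= -1
        energies.append(energy(s))
    return np.array(energies)

e = metropolis_ising_1d(n_spins=32, beta=1.0, n_sweeps=200)
```

The Wang-Landau algorithm of the talk replaces the fixed-temperature Boltzmann acceptance ratio by a running estimate of the inverse density of states, which is what gives direct access to the entropy driving the order-by-disorder mechanism.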
Thursday 17 September 2015
Title: On the integrability of strings on symmetric spaces
Linus Wulff (Blackett Laboratory, Imperial College, Londres)
Abstract: I will describe the structure of string actions on symmetric spaces and the form of the corresponding (super)isometry algebras. For vanishing NSNS three-form flux a general proof of the
classical integrability of the superstring can be found. For the case of non-vanishing NSNS flux new supercoset models corresponding to strings on AdS(2,3)xS(2,3)xS(2,3)xT(2,3,4) are constructed.
Thursday 10 September
Title: Pure connection formulation of Twistor theory
Yannick Herfray (Nottingham)
Abstract: In the last decade, much progress has been made in understanding transition amplitudes in General Relativity. In some sense, GR is much simpler on shell than was expected.
However, it is still an open question whether there exists an off-shell formulation of GR that would make this simplicity manifest. A good candidate for this could be a twistor action.
However, even if such an action exists for conformal gravity, it is still missing for full GR. In this talk, I will present a possible strategy to achieve it by using ideas coming from the pure
connection formulation of GR. This is however still a work in progress that I hope to continue in the next years of my PhD between Lyon and Nottingham. I will also review some of the basics of
twistor theory, and what we could expect from such a twistor action, mainly a MHV formalism.
Thursday 11 June 2015
Title: On the subtle coexistence of charge and magnetism, and the exotic animals they can give birth to
Pierre Pujol (Laboratoire de Physique Théorique, Université Paul Sabatier)
Abstract: In this talk we are going to overview some microscopic models that give rise to coexisting charge and spin degrees of freedom. In these strongly correlated models, quantum effects play an
important role, in particular for the existence of some non-trivial states of matter. We are going to present a particular field theory technique to understand many subtleties of the interplay
between these two kinds of degrees of freedom, as well as to investigate the existence of quite counterintuitive phases.
Thursday 4 June 2015
Title: Unraveling the nature of carrier mediated ferromagnetism in diluted magnetic semiconductors
Georges BOUZERAR (Institut Lumière Matière, Univ. Lyon 1)
Abstract: After more than a decade of intensive research in the field of diluted magnetic semiconductors (DMS), the nature and origin of ferromagnetism, especially in III-V compounds is still
controversial. Many questions and open issues are under intensive debates. Why among the broad family of III-V materials, and for a given concentration of transition metal (TM) impurities, Mn doped
GaAs still exhibits the highest critical temperature? How can one understand that these temperatures are almost two orders of magnitude larger than that of hole doped (Zn,Mn)Te or (Cd,Mn)Se? Is there
any intrinsic limitation or is there any hope to reach in the dilute regime room temperature ferromagnetism? How can one explain the proximity of (Ga,Mn)As to the metal insulator transition and the
change from RKKY couplings in II-VI compounds to double exchange type in (Ga,Mn)N? Despite the great success of density-functional-theory-based studies in accurately providing the critical
temperatures of various compounds, until very recently a theory offering a coherent picture and understanding of the underlying physics was still missing. Recently, within a minimal model, it has
been possible to show that among the physical parameters, the key one is the position of the TM acceptor level. By tuning the value of that parameter, one is able to explain both magnetic and
transport properties quantitatively in a broad family of DMS. We will see that this minimal model explains in particular the RKKY nature of the exchange in (Zn,Mn)Te/(Cd,Mn)Te and the double exchange
type in (Ga,Mn)N and simultaneously the reason why (Ga,Mn)As exhibits the highest critical temperature among both II-VI and III-V DMS's.
Thursday 28 May 2015
Title: Why are there so many interpretations of quantum mechanics?
Pierre Hohenberg (New York University)
Abstract: Quantum mechanics is unique among physical theories in that 90 years after its introduction and general acceptance as being correct and complete, its 'interpretation' remains a subject of
controversy. Unlike classical mechanics, what quantum mechanics still lacks is a clear microscopic formulation, whereby the theory is defined for a closed system S of any size in terms of concepts
relating only to S itself. Such a formulation, called Compatible Quantum Theory, is presented and shown to account for and clarify the standard quantum phenomena and paradoxes. The question of
physical implementation, on the other hand, requires a macroscopic theory, to account for state preparation and the measurement of system properties. It is primarily in different versions of such
macroscopic implementation mechanisms that most interpretations of quantum mechanics differ.
Thursday 30 April 2015
Title: Conditioning as the ``best'' way to make a rare event typical
Raphael Chetrite (Laboratoire J.A. Dieudonne, Univ. Nice)
Abstract: In my talk, I will present works done with Hugo Touchette. In these works, we consider the problem of conditioning a Markov process on a rare event and of representing this conditioned
process by a conditioning-free process. This conditioning-free process may be seen as the process closest to the initial one within the class of Markov processes that make the initial rare event
typical. Our approach generalises many previous results in the mathematical literature on the spectral characterisation of positive operators, as well as maximum entropy principles scattered in the
physics literature.
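The conditioning-free process described above can be illustrated numerically for a finite-state chain via the standard generalized Doob transform: tilt the transition matrix by the rare-event bias, take its Perron-Frobenius eigenvector, and use it to build a proper stochastic matrix. This is a generic sketch with made-up rates and observable, not the authors' construction in full generality:

```python
import numpy as np

# Discrete-time 3-state chain and an observable f of the visited state;
# trajectories are biased by exp(s * sum_t f(x_t)) (an s-ensemble tilt).
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
f = np.array([0.0, 0.0, 1.0])   # reward time spent in state 2 (rare here)
s = 2.0

# Tilted matrix and its dominant (Perron-Frobenius) right eigenvector
Pt = P * np.exp(s * f)[None, :]
vals, vecs = np.linalg.eig(Pt)
k = np.argmax(vals.real)
lam, r = vals[k].real, np.abs(vecs[:, k].real)

# Generalized Doob transform: a genuine stochastic matrix whose typical
# trajectories reproduce the conditioned (rare) behavior of P
P_driven = Pt * r[None, :] / (lam * r[:, None])
```

Each row of `P_driven` sums to one by construction, since sum_j Pt[i,j] r[j] = lam r[i]; the driven chain spends most of its time in the rewarded state, making the rare event typical.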
Thursday 23 April 2015
Title: Spin fluctuations and fragmentation of a spin 1 Bose-Einstein condensate
Fabrice GERBIER (Laboratoire Kastler Brossel, ENS Paris, et Collège de France)
Abstract: I will discuss the magnetic properties of a Sodium Bose-Einstein condensate in a tight trap. This system can be described as a mesoscopic ensemble of a few thousand spin-1 bosons with
antiferromagnetic exchange interactions. Anomalous spin fluctuations are observed for small magnetic fields and vanishing magnetizations. These fluctuations are characteristic of a condensate
displaying spin fragmentation, i.e. condensation occurs in several single-particle spin states instead of just one for a "regular" condensate. This is a mesoscopic effect, which reflects the
restoration of the spin rotational symmetry, explicitly broken by external fields, by spin exchange interactions. In our experiment, these spin fluctuations can be characterized by a quasi-thermal
ensemble at a spin temperature Ts. For small magnetic fields, we find that Ts is well below the "kinetic" temperature Tk of the uncondensed gas surrounding the condensate. When increasing the
magnetic field, Ts converges towards Tk. I will discuss how this behavior can be explained with a picture where the condensate spin acts as an isolated quantum system, which is able through
spin-changing collisions to reach "on its own" a pseudo-equilibrium state.
Thursday 9 April 2015
Title: The best quantum thermoelectric at finite power output
Robert Whitney (LPMMC, Université Grenoble)
Abstract: Carnot efficiency is only achievable at zero power output. We ask what is the maximum efficiency at some given finite power output. It appears that this question is ill-defined in classical
thermodynamics, but can be answered with a quantum theory. We use the Landauer-Buttiker scattering theory to find this maximum efficiency for heat engines and refrigerators made of thermoelectric
quantum systems. We initially find the exact maximum efficiency for two-terminal systems without energy relaxation [1]. We then use phenomenological models to explore whether this maximum can be
exceeded by two-terminal systems with relaxation [2], or by three-terminal systems. We have not yet found a system which can exceed the maximum efficiency given in Ref. [1], although open questions remain.
Thursday 2 April 2015
Title: Dynamics at a Quantum Critical Point: Combining Quantum Monte Carlo and Holography
Erik SORENSEN (McMaster University & Université de Toulouse III)
Abstract: The real time dynamics near quantum critical points have proven very challenging to obtain both from a numerical and analytical perspective. Here we focus on the superfluid-insulator
transition occurring for bosons on a lattice. New large-scale QMC results have made it possible to obtain very precise results for many quantities, in particular the frequency-dependent conductivity
at imaginary frequencies. Since the numerical results remain confined to imaginary times/frequencies additional tools are needed to extend the results to the rest of the complex plane. Here, recent
insights from conformal field theory and holography have yielded a wealth of information that combined with the QMC results yield quantitative and experimentally testable results for the
frequency-dependent conductivity near the quantum critical point.
Thursday 19 March 2015
Title: Scale invariance implies conformal invariance for O(N) models in three dimensions
Bertrand Delamotte (LPTMC, Paris 6)
Abstract: Using the Wilsonian (also called "exact") renormalization group, we will show that three-dimensional critical models are not only scale invariant but invariant under the full conformal
group. After a general introduction to the Wilsonian renormalization group, we will present the "proof" on the example of the O(N) models.
Thursday 12 March 2015
Title: Robust quantum coherence above the Fermi sea
Patrice Roche (CEA/SPEC)
Abstract: A new type of quantum device, relying on the one dimensional edge states of the Quantum Hall regime, where electrons mimic the photon trajectory of a laser beam, has opened a route towards
electron quantum optics and manipulation of single electron excitations. Pauli statistics and interactions provide new ingredients for the physics of the electrons which are not relevant for photons.
For example, when electrons are injected above the Fermi sea, it is fundamental to understand how their phase coherence will be affected by the injection energy. We explore this issue by first using
a quantum dot to inject the carriers at a controllable energy into an edge state. Then an electronic Mach-Zehnder interferometer is used to monitor the quantum coherence of the electronic
quasiparticle. We find that above a certain threshold the coherence is energy-independent; it is even preserved at energies fifty times larger than the electronic temperature. This is remarkable,
since from simple considerations based on Fermi's golden rule, one would expect that the relaxation rate increases with the injection energy, thus reducing quantum coherence. Indeed, our simulations
using recent theories predict a continuous trend of increasing relaxation. While the origin of this coherence robustness remains unidentified, it has significant bearing on the implementation of
quantum information encoded in electron trajectories. (S. Tewari, P. Roulleau, C. Grenier, F. Portier, A. Cavanna, U. Gennser, D. Mailly, and P. Roche)
Thursday 26 February 2015
Title: From Aztec diamonds to pyramids: steep tilings
Jérémie Bouttier (IPhT, CEA)
Abstract: We consider random tilings made of dominos (2x1 rectangles), and describe a general family which encompasses several known models: domino tilings of the Aztec diamond (giving rise to the
celebrated "arctic circle phenomenon"), pyramid partitions, plane overpartitions... These tilings are in one-to-one correspondence with sequences of Young diagrams where, at each step, one adds or
removes a horizontal or a vertical strip. Using an algebraic framework related to the boson-fermion correspondence, we compute the partition function and all correlations of the model (i.e. the
probabilities of finding a given number of dominos at given positions). We furthermore provide an algorithm for the efficient generation of such random tilings. Based on joint work with Guillaume
Chapuy, Sylvie Corteel and later on with Dan Betea, Cédric Boutillier, Sanjay Ramassamy and Mirjana Vuletić.
Thursday 13 February 2015
Title: Electronic Correlations and Multiorbital Effects in Iron-Based Superconductors
Rong Yu (Department of Physics, Renmin University of China)
Abstract: Identifying the role of electron correlations in iron-based superconductors is crucial in understanding the superconductivity and related normal-state properties in these systems. To this
end, we study the metal-to-Mott-insulator transitions in multiorbital Hubbard models for several parent compounds of iron-based superconductors using the slave-spin mean-field method. We show that a
crossover from a weakly coupled metal to a strongly coupled metal generally exists in all these models when the Hund's coupling is beyond a threshold. In the strongly coupled metallic phase, the
quasiparticle spectral weights are substantially reduced from the non-interacting limit and become strongly orbital dependent. Particularly for alkaline iron selenides, we find a novel
orbital-selective Mott phase, in which the Fe 3d xy orbital is Mott localized while the other Fe 3d orbitals remain itinerant. This phase is stabilized over a range of carrier dopings, and has
unique experimental signatures. We further investigate the effects of electron correlations on superconductivity. We have derived the effective exchange coupling between quasi-localized moments in
the bad metal regime. This allows us to study the superconducting pairing via an effective multiorbital t-J1-J2 model. We show that the orbital dependent correlation effect results in a rich pairing
phase diagram. In a certain parameter regime, it naturally gives rise to orbital-selective pairing, which leads to anisotropic superconducting gaps along the electron Fermi pockets and splitting of
the spin resonance peak in the superconducting state.
Thursday 12 February 2015
Title: Super-symmetric spin-chains, percolation, and non-rational CFTs at c=0
Azat Gainutdinov (Department of Mathematics Hamburg University and DESY, Germany)
Abstract: I will discuss algebraic properties of periodic sl(n+1|n) spin-chains with Heisenberg-like interaction. These chains are made of alternating tensor products of the fundamental and conjugate
sl(n+1|n) representations. The algebra of local Hamiltonian densities in the chain is provided by a representation of the affine or periodic Temperley-Lieb algebra at the primitive 6th root of unity.
The more detailed analysis was carried out for periodic sl(2|1) spin chains (with H. Saleur, N. Read and R. Vasseur), which describe statistical properties of boundaries of 2D percolation clusters on
a torus. In this case, the continuum limit of the chains was identified with a bulk Logarithmic CFT at c = 0, which is a fixed point theory of a non-linear sigma model on the complex projective
superspace CP^{1|1} in the strong coupling regime. We deduced the structure of the space of states as a representation over the product of left and right Virasoro algebras. Our main result is the
explicit structure of the full vacuum module/sector of the LCFT, which exhibits Jordan cells of arbitrary rank for the Hamiltonian or the dilatation operator.
Wednesday 11 February 2015
Title: IIB Supergravity and the E_6(6) covariant vector tensor hierarchy
Bernard de Wit (NIKHEF, Amsterdam)
Abstract: TBA
Thursday 5 February 2015
Title: 4d Quantum Gravity with a Cosmological Constant from SL2C Chern-Simons Theory
Aldo Riello (Perimeter Institute, Ontario, Canada)
Abstract: In this seminar, I will discuss the first steps towards a definition of a model for simplicial 4d quantum gravity with a cosmological constant, via 3d Chern-Simons theory with defects. The
proposal hinges on a "reconstruction theorem" assessing the correspondence between a class of flat connections on a S3 graph complement (related to the 4-simplex skeleton) and the geometries of
constant-curvature Lorentzian 4-simplices. The main result consists in showing that in the semiclassical (WKB) approximation the Regge action of simplicial general relativity correctly appears. Time
allowing, I will also discuss the relation of this construction with spinfoam models for loop quantum gravity.
Wednesday 4 February 2015
Title: Fluctuations of the current and optimal profiles in the open Asymmetric Simple Exclusion Process
Alexandre Lazarescu (Instituut voor Theoretische Fysica, KU Leuven, Belgium)
Abstract: The asymmetric simple exclusion process (ASEP), where particles perform biased random walks with hard-core repulsion, is one of the most studied models in non-equilibrium statistical
physics. It has the mathematical property of being integrable, which makes it a good candidate for in-depth exact calculations. The quantity of particular interest there is the current of particles
that flows through the system due to the bias of the jumps. In this presentation, we will see how we can obtain information about the distribution of that current, through various techniques:
integrability, macroscopic fluctuation theory, and asymptotic direct diagonalisation. This allows us to build the phase diagram for the large deviations of the current, and examine the corresponding
density profiles in each of its five phases. We show that two situations arise: in most phases, the system can be described hydrodynamically, but in one phase, where the current is larger than the
limit set by hydrodynamics, the system becomes highly correlated. If time allows, we will also see how these techniques and results could be generalised to some other observables or models.
Thursday 29 January 2015
Title: Realization of strongly interacting topological phases on lattices
Antoine Sterdyniak (Institut für Theoretische Physik, Universität Innsbruck)
Abstract: While the fractional quantum Hall effect (FQHE) was realized experimentally thirty years ago in semiconductor heterostructures, strongly interacting chiral topological phases are still at
the center of an important research effort, both because they serve as building blocks of more exotic phases such as fractional topological insulators and because a realization outside of
semiconductor physics is still missing. In this talk, I will describe realizations of these phases in cold atom gases and in frustrated spin systems. I will first introduce optical flux lattices, which are continuous
models that exhibit topological flat bands with a tunable Chern number and host fractional states beyond the FQHE. Then, I will focus on chiral spin liquids whose emergence on the kagomé lattice
using a local Hamiltonian has been shown very recently. Unlike itinerant particle systems where FQHE can be understood as a consequence of interactions in a partially filled topological band, I will
show that such a picture does not hold for this chiral spin liquid.
Wednesday 28 January 2015
Title: Towards the Turaev-Viro amplitudes from a Hamiltonian constraint.
Maité Dupuis (University of Waterloo, Ontario, Canada)
Abstract: I will show how the usual Loop Quantum Gravity phase space can be deformed to characterize hyperbolic discrete geometries and thus be a candidate to describe the discretization of SU(2) BF
theory with a (negative) cosmological constant. The quantization of this model then yields, at the kinematical level, a Hilbert space spanned by spin networks built on Uq(su(2)) (with q real). I
will also build a Hamiltonian constraint and show that the Turaev-Viro amplitude with q real is a solution of the quantum Hamiltonian. This model is therefore a natural candidate to describe 3D loop
quantum gravity with a (negative) cosmological constant.
Thursday 15 January 2015
Title: Integrability for AdS3/CFT2
Alessandro Sfondrini (Institut für Mathematik und Institut für Physik, Humboldt-Universität zu Berlin)
Abstract: Gravity theories with negative cosmological constant in three dimensions (such as AdS3) play an important role in the understanding of black hole physics, and provided an early example of
holography. Their dual 2-dimensional conformal field theories (CFT2) are quite special, since they enjoy (suitable super-symmetric extensions of) Virasoro symmetry. This duality naturally emerges in
string theory too, for instance as the near horizon limit of a system of D1/F1-strings and D5/NS5-branes and was much studied in the early days of the Maldacena correspondence. Recently, the interest
in AdS3/CFT2 was revived when Babichenko, Stefanski and Zarembo showed that the maximally super-symmetric AdS3 backgrounds yield classically integrable string non-linear sigma models. It is natural to
ask whether the worldsheet S-matrix and spin-chain integrability approaches, which work beautifully for the planar limit of AdS5/CFT4, can be applied here as well. The answer did not appear to be
straightforward, due to several new features and some conceptual complications of AdS3/CFT2, and indeed eluded us for four years. In my talk I will provide substantial evidence for an affirmative
answer. To do this, I will discuss in detail the simplest case of superstrings on AdS3xS3xT4 and describe the exciting future directions for this integrability program.
Tuesday 6 January 2015
Title: A New Type of Quantum Criticality in the Pyrochlore Iridates
Lucile Savary (MIT)
Abstract: Magnetic fluctuations and electrons couple in intriguing ways in the vicinity of zero-temperature phase transitions—quantum critical points—in conducting materials. Quantum criticality is
implicated in non-Fermi liquid behavior of diverse materials and in the formation of unconventional superconductors. Here, we uncover an entirely new type of quantum critical point describing the
onset of antiferromagnetism in a nodal semimetal engendered by the combination of strong spin-orbit coupling and electron correlations, and which is predicted to occur in the iridium oxide
pyrochlores. We formulate and solve a field theory for this quantum critical point by renormalization group techniques and show that electrons and antiferromagnetic fluctuations are strongly coupled
and that both these excitations are modified in an essential way. This quantum critical point has many novel features, including strong emergent spatial anisotropy, a vital role for Coulomb
interactions, and highly unconventional critical exponents. Our theory motivates and informs experiments on pyrochlore iridates and constitutes a singular realistic example of a nontrivial quantum
critical point with gapless fermions in three dimensions.
Thursday 27 November 2014
Title: Light-cone effect and supersonic correlations in one- and two-dimensional bosonic superfluids
Giuseppe Carleo (Laboratoire Charles Fabry - Institut d'Optique, Paris, France)
Abstract: In this talk I will present some recent results on the out-of-equilibrium dynamics of interacting lattice bosons [1]. In particular, we study how (and how fast) correlations can spread in a
quantum system abruptly driven out of equilibrium by a quantum quench. This protocol can be experimentally realized with ultra-cold atoms, which allow us to address fundamental questions concerning the
quasi-locality principle in isolated quantum systems [2, 3]. We focus on the spreading of density-density correlations in Bose-Hubbard models after a quench of the interaction strength, using
time-dependent variational Monte Carlo simulations [4]. This method gives access to unprecedentedly long propagation times and to dimensions higher than one. In both one and two dimensions, we
demonstrate the existence of a "light-cone", characterized by the ballistic spreading of correlations with a finite propagation time. We extract accurate values of the correlation-cone velocity in
the superfluid regime and show that the spreading of correlations is generally supersonic. Further, we show that in two dimensions the correlation spreading is highly anisotropic and presents
nontrivial interference effects. [1] G. Carleo, F. Becca, L. Sanchez-Palencia, S. Sorella, and M. Fabrizio, Phys. Rev. A 89, 031602(R) (2014). [2] M. Cheneau et al., Nature 481, 484 (2012). [3] T.
Langen et al., Nat. Phys. 9, 640 (2013). [4] G. Carleo, F. Becca, M. Schiro, and M. Fabrizio, (Nature) Sci. Rep. 2, 243 (2012).
Thursday 20 November 2014
Title: Probing the $\nu=2/3$ fractional quantum Hall edge by momentum-resolved tunneling
Hendrik Meier (Department of Physics, Yale University, USA)
Abstract: The nature of the fractional quantum Hall state with filling factor ν=2/3 and its edge modes continues to remain an open problem in low-dimensional condensed matter physics. In this talk, I
am going to present and discuss a suggested experimental setting to probe the ν=2/3 edge by tunnel-coupling it to a ν=1 integer quantum Hall edge in another layer of a two-dimensional electron gas
(2DEG). In this double-layer geometry, the momentum of tunneling electrons may be boosted by an auxiliary magnetic field parallel to the two planes of 2DEGs. The current is calculated as a function
of bias voltage and the boosting magnetic field. Its threshold behavior yields information about the spectral function of the ν=2/3 edge, in particular about the nature of the chiral edge modes. The
theory accounts also for the effects of Coulomb interaction and disorder. Hendrik Meier, Yuval Gefen, Leonid I. Glazman, Phys. Rev. B 90, 081101(R) (2014); preprint: arXiv:1406.4517
Thursday 13 November 2014
Title: On exact spectrum of planar N=4 super-Yang-Mills theory
Vladimir Kazakov (LPT-ENS Paris & Université Paris-VI Jussieu, France)
Abstract: N=4 Super-Yang-Mills in the planar limit is the only exactly solvable theory in 4 space-time dimensions. This potentially gives a possibility to compute any physically interesting quantities,
such as anomalous dimensions, correlators, Wilson loops, scattering amplitudes, at any strength of the 't Hooft coupling. Solvability is possible due to AdS/CFT duality and a hidden integrability
property. Thanks to the efforts of many researchers over the last dozen years, exact equations for the spectrum of anomalous dimensions have been discovered, known as the AdS/CFT Y-system/TBA
equations. We present a new, most concise and efficient Riemann-Hilbert system of spectral equations -- the Quantum Spectral Curve (QSC). We will review the origins of this approach and present
the basic structure of QSC. We will also expose the most important results of computations of anomalous dimensions: Konishi dimension at any coupling (numerically), its strong coupling expansion and
weak coupling expansion (9-loops!), as well as the Balitsky-Fadin-Kuraev-Lipatov limit of QSC reproducing the BFKL pomeron spectrum.
Jeudi 06 novembre 2014
Title: Stückelberg interferometry with a pair of Dirac cones
Jean-Noël Fuchs (LPTMC Jussieu & LPS Orsay, France)
Abstract: Dirac cones are linear band contacts in crystals that are also characterized by a quantized Berry phase. The prime example of a crystal featuring a pair of Dirac cones is graphene
(honeycomb lattice). Recently, artificial and tunable analogs of graphene were realized experimentally (e.g. with cold atoms). When deforming the honeycomb lattice, it is possible to manipulate the
Dirac cones up to their merging and annihilation. The energy spectrum across the merging transition can be detected via Bloch oscillations and Landau-Zener tunneling as recently shown by Tarruell et
al. This technique is not restricted to studying the energy spectrum and can give access to band coupling effects (Berry phases). The idea is to use a pair of Dirac cones to realize a Stückelberg
interferometer. We will show that this type of interferometer contains information on band coupling in the form of an open-path (but gauge-invariant) Berry phase.
Jeudi 30 octobre 2014
Title: A New Broken Symmetry: Hidden (Hastatic) Order in URu2Si2
Premela Chandra (Department of Physics, Rutgers University, USA)
Abstract: The development of collective long-range order by means of phase transitions occurs by the spontaneous breaking of fundamental symmetries. Magnetism is a consequence of broken time-reversal
symmetry, whereas superfluidity results from broken gauge invariance. The broken symmetry that develops below 17.5 kelvin in the heavy-fermion compound URu2Si2 has long eluded such identification.
Here we show that the recent observations of Ising quasiparticles in URu2Si2 result from a spinor order parameter that breaks double time-reversal invariance, mixing states of integer and half-integer spin. Such "hastatic" order hybridizes uranium-atom conduction electrons with Ising 5f2 states to produce Ising quasiparticles; it accounts for the large entropy of condensation and the magnetic anomaly observed in torque magnetometry. Hastatic order predicts a tiny transverse moment in the conduction-electron sea, a colossal Ising anisotropy in the nonlinear susceptibility anomaly and a resonant, energy-dependent nematicity in the tunnelling density of states. We also discuss the microscopic origin of hastatic order, identifying it as a fractionalization of three-body bound states into integer-spin fermions and half-integer-spin bosons. Work done with Piers Coleman and Rebecca Flint. References: PC, P. Coleman and R. Flint, Nature 493, 421 (2013); arXiv:1404.5920
Jeudi 9 octobre 2014
Title: 3D random tensor models
Adrian Tanasa (Institut Galilée, Université Paris 13, France)
Abstract: Random tensor models, seen as field theoretical models, are related on one side to group field theory, a recent candidate for quantum gravity, and on the other side they represent a natural
generalization of the celebrated matrix models. These matrix models are also known to be connected to 2D statistical physics or to quantum gravity; one of the main results of their study is that
their perturbative series can be reorganized in powers of 1/N (N being the matrix size). The leading order in this expansion is given by planar graphs (which are dual to triangulations of the
2-dimensional sphere S^2). In this talk I will present such a 1/N asymptotic expansion for some particular class of 3-dimensional random tensor models (called multi-orientable models). The leading
order (and hence the dominant graphs, dual to particular triangulations of the three-dimensional sphere S^3), the next-to-leading order and finally some results on the combinatorics of the general
term of this asymptotic expansion will be given.
Jeudi 25 septembre 2014
Title: Tensor Models and Renormalization
Joseph BenGeloun (Max-Planck Institute, Potsdam, Allemagne)
Abstract: A review will be provided of the renormalization program for the so-called Tensor Models for Quantum Gravity. These are non-local field theories extending both the matrix models, a successful framework in statistical mechanics applied to 2D physics, and the Grosse-Wulkenhaar model in the matrix basis arising in Noncommutative Geometry. We will emphasize the multi-scale renormalization but also report recent results on the Functional Renormalization Group approach for this class of models.
Jeudi 18 septembre 2014 (Salle 115)
Title: Integrable Deformations of Strings
Timothy J. Hollowood (Department of Physics, Swansea University, UK)
Abstract: TBA
Jeudi 17 juillet 2014
Title: Random Quantum Spin Chains with Long-Range Interactions
Stephan HAAS (Department of Physics & Astronomy, University of Southern California, Los Angeles)
Abstract: A real-space renormalization group technique is used to study the anisotropic Heisenberg model with long-range interactions, decaying with a power $\alpha$, which are generated by placing
spin sites at random positions along the chain. This method permits a large-scale finite-size analysis of systems with up to 256 spins, along with a disorder average over up to 300,000 realizations.
Analyzing the distribution of the first excitation energy from the ground state, we find a critical power-law decay at which the distribution function is critical, separating an insulating phase at larger $\alpha$, where the distribution is Poissonian, from a metallic phase at smaller $\alpha$, where it follows the Wigner surmise.
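For reference, the two limiting spacing statistics named above have standard closed forms: Poisson, $P(s)=e^{-s}$, on the insulating side, and the Wigner surmise, $P(s)=(\pi s/2)e^{-\pi s^2/4}$, on the metallic side. A quick numerical check (an illustration added here, not part of the abstract) confirms that both are normalized with unit mean spacing:

```python
import numpy as np

# Level-spacing distributions: Poisson (insulating side) vs. the
# Wigner surmise for the GOE (metallic side).
s = np.linspace(0.0, 20.0, 200001)
poisson = np.exp(-s)
wigner = (np.pi * s / 2) * np.exp(-np.pi * s**2 / 4)

def integrate(f, s):
    # simple trapezoidal rule over the sampled grid
    return np.sum((f[:-1] + f[1:]) * np.diff(s)) / 2

for name, p in [("Poisson", poisson), ("Wigner surmise", wigner)]:
    norm = integrate(p, s)
    mean = integrate(s * p, s)
    print(f"{name}: normalization {norm:.4f}, mean spacing {mean:.4f}")
```

Both distributions print normalization and mean equal to 1 to the accuracy of the grid, which is what makes the crossover between them a meaningful diagnostic of the transition.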
Vendredi 11 juillet 2014
Title: Exploring problems of nonlinear dynamics with Bose-Einstein condensates
Tristram ALEXANDER (School of Physical, Environmental and Mathematical Sciences, UNSW Canberra)
Abstract: New frontiers in research concerning Bose-Einstein condensates continue to emerge; however, in many ways some of the most intriguing problems are old. What are the effects of the interplay
of nonlinearity and constraint? What nonlinear modes exist in a given system? What sort of excitations appear in the non-equilibrium dynamics? In this talk I will seek to answer these questions in
the context of Bose-Einstein condensates in the presence of external potentials such as an optical lattice. I will discuss some of the localised states which may emerge and their possible excitations
and I will look at parallels between confined BECs and familiar mechanical systems such as chains of oscillators. I will also try to highlight some of the outstanding problems in this area.
Jeudi 3 juillet 2014
Title: A Shortcut to Scattering Amplitudes in N=4 Super Yang-Mills via Integrability.
Matthias STAUDACHER (Institut für Mathematik und Institut für Physik, Humboldt-Universität zu Berlin)
Abstract: We combine recent applications of the two-dimensional quantum inverse scattering method to the scattering amplitude problem in four-dimensional N = 4 Super Yang-Mills theory. Integrability
allows us to obtain a general, explicit method for the derivation of the Yangian invariants relevant for scattering amplitudes in the N = 4 model. There is a beautiful connection to contour integrals
defined on Grassmannian manifolds.
Jeudi 5 juin 2014
Title: The Physical Review: Editorial and Review Process
Robert WIMMER (Assistant Editor, Physical Review D)
Abstract: I discuss some aspects of the physics journals of the American Physical Society and their review process. I will speak in particular about PRD, where I am an editor, and about some aspects
of PRL, but other journals will also be discussed (PRA, PRB, PRC, PRE, PRX). The idea is to make the review process more transparent and to provide important information for authors and
referees. Questions are very welcome.
Jeudi 22 mai 2014
Title: Time-dependent theory of nonlinear response and current fluctuations.
Inès SAFI (LPS, U. Paris 11, Orsay)
Abstract: TBA.
Jeudi 17 avril 2014
Title: Information thermodynamics in a hybrid opto-mechanical system
Alexia AUFFEVES (Institut Néel, Grenoble)
Abstract: Quantum optical hybrid systems are devices where a single quantum emitter is coupled to a mechanical degree of freedom. In these systems, many impressive results have been obtained already
in the direction of achieving quantum control of the mechanics. In this work we show that they also have unexpected capabilities in a very different domain: information thermodynamics. We show that the optical measurement-induced back-action of the emitter on the mechanical oscillator can be interpreted as reversible conversions of elementary work into bits of information. Under proper -
and realistic - experimental conditions, we find that a 2π mechanical oscillation describes in fact a closed cycle of such information-to-work conversions, with strong analogy to Landauer's erasure
and Szilard engine. As an interesting consequence of these findings, we finally show that such devices can be turned into high power heat engines operating at Carnot efficiency.
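For scale, the Landauer erasure bound invoked above sets the minimal work dissipated per erased bit at $W = k_B T \ln 2$. A one-line check (illustrative numbers chosen here, not taken from the talk):

```python
import math

# Landauer bound: minimal work to erase one bit at temperature T.
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K (illustrative choice)
W = k_B * T * math.log(2)
print(f"Landauer bound at {T:.0f} K: {W:.3e} J per bit")
```

At room temperature this is a few zeptojoules per bit, which is the natural unit against which the information-to-work conversions of the hybrid device are measured.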
Jeudi 10 avril 2014
Title: Orbital magnetic susceptibility and interband effects of 2D tight-binding models
Frédéric PIECHON (LPS, Orsay)
Abstract: We review orbital magnetic properties of 2D spinless electrons in multiband systems. For systems (metal or insulator) that break time reversal invariance it is well established that the
existence of a spontaneous orbital magnetization crucially depends on interband effects through the so called self-rotating orbital magnetic moment and Berry curvature. For systems that do not break
time-reversal invariance, in the absence of spontaneous magnetization one is forced to study the orbital magnetic susceptibility. Using numerics and a recently developed gauge-invariant perturbation theory, we present results we obtained for the orbital susceptibility of various coupled systems (semi-metals or insulators) in two- and three-band models. In particular, we show that systems with similar energy spectra can have orbital susceptibilities ranging from diamagnetic to paramagnetic, depending on their self-rotating orbital magnetic moment and Berry curvature.
Jeudi 27 mars 2014
Title: Fluctuations in homogenization
Jean-Christophe MOURRAT (UMPA, ENS Lyon)
Abstract: Under a diffusive rescaling, the solution of the heat equation with random coefficients converges to the solution of a heat equation with constant, "homogenized" coefficients. This result can be interpreted as a kind of law of large numbers for this PDE. In this talk, I will present a result that can be seen as a first step toward describing the fluctuations of the solution of the equation with random coefficients (a kind of central limit theorem). More precisely, I will describe the large-scale behavior of the "corrector", which is the key object behind the homogenization of these problems.
Jeudi 20 mars 2014
Title: Simulating condensed matter systems with tensor network states and discovery of algebraic decoherence
Thomas BARTHEL (LPTMS, Orsay)
Abstract: The non-locality of quantum many-body systems can be quantified by entanglement measures. Studying the scaling behavior of such measures, one finds that the entanglement in most states of
interest (occurring in nature) is far below the theoretical maximum. Hence, it is possible to describe such systems with a reduced set of effective degrees of freedom. This is exploited in simulation
techniques based on so-called tensor network states (MPS, PEPS, or MERA). I will describe how this approach can be employed to simulate systems of all particle statistics in order to study ground
states, thermal states, and non-equilibrium phenomena. Besides explaining the main ideas, I will highlight some applications. The second part of the talk focuses on an application to the decoherence
in systems that are coupled to an environment. Until our recent study, it was assumed that, as long as the environment is memory-less (i.e. Markovian), the temporal coherence decay is always
exponential -- to a degree that this behavior was synonymously associated with decoherence. However, the situation can change if the system itself is a many-body system. For the open spin-1/2 XXZ
model, we have discovered that the interplay between dissipation and internal interactions can lead to a divergence of the decoherence time! The quantum coherence then decays according to a power
law. To complement the quasi-exact numerical simulation, I will explain the result on the basis of a perturbative treatment.
Jeudi 13 mars 2014
Title: Improved diffusion Monte Carlo for quantum Monte Carlo, rare event simulation, data assimilation, and more
Jonathan WEARE (Department of Statistics, The University of Chicago)
Abstract: Diffusion Monte Carlo (DMC) is a workhorse of stochastic computing. It was invented forty years ago as the central component in a Monte Carlo technique for estimating various
characteristics of quantum mechanical systems. Since then it has been applied in a huge number of fields, often as a central component in sequential Monte Carlo techniques (e.g. the particle
filter). DMC computes averages of some underlying stochastic dynamics weighted by a functional of the path of the process. The weight functional could represent the potential term in a Feynman-Kac
representation of a partial differential equation (as in quantum Monte Carlo) or it could represent the likelihood of a sequence of noisy observations of the underlying system (as in particle
filtering). DMC alternates between an evolution step in which a collection of samples of the underlying system are evolved for some short time interval, and a branching step in which, according to
the weight functional, some samples are copied and some samples are eliminated. Unfortunately for certain choices of the weight functional DMC fails to have a meaningful limit as one decreases the
evolution time interval between branching steps. We propose a modification of the standard DMC algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In
particular, it makes it feasible to use DMC in situations where the ``naive'' generalization of the standard algorithm would be impractical, due to an exponential explosion of its variance. We
numerically demonstrate the effectiveness of the new algorithm on a standard rare event simulation problem (probability of an unlikely transition in a Lennard-Jones cluster), as well as a
high-frequency data assimilation problem.
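The evolve-then-branch loop described above can be sketched in a few lines. The toy implementation below is standard DMC without importance sampling (not the improved algorithm of the talk); the time step, walker number, and feedback constant are arbitrary illustrative choices. It recovers the ground-state energy $E_0 = 1/2$ of a 1D harmonic oscillator in units $\hbar = m = \omega = 1$:

```python
import numpy as np

rng = np.random.default_rng(42)

def V(x):
    return 0.5 * x ** 2  # harmonic well

n_target = 2000   # desired walker population
dt = 0.01         # imaginary-time step
walkers = rng.normal(size=n_target)  # initialize near the ground-state density
e_ref = 0.5       # reference energy, adjusted by population feedback
energies = []

for step in range(3000):
    # evolution step: free diffusion over imaginary time dt
    walkers = walkers + np.sqrt(dt) * rng.normal(size=walkers.size)
    # branching step: replicate/kill walkers per the Feynman-Kac weight
    w = np.exp(-dt * (V(walkers) - e_ref))
    copies = (w + rng.random(walkers.size)).astype(int)
    walkers = np.repeat(walkers, copies)
    # mixed energy estimator (unit trial function, so the local energy is V)
    e_mix = V(walkers).mean()
    # gentle feedback keeps the population near n_target
    e_ref = e_mix - 0.02 / dt * np.log(walkers.size / n_target)
    if step >= 1000:
        energies.append(e_mix)

e0 = float(np.mean(energies))
print(f"DMC estimate of the ground-state energy: {e0:.3f} (exact 0.5)")
```

The variance problem the talk addresses shows up in the branching step: a strongly varying weight functional makes the `copies` array wildly nonuniform as `dt` shrinks, which is what the proposed modification tames.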
Jeudi 20 février 2014
Title: Ground state energy of noninteracting particles and of interacting bosons in a random Bernoulli potential
Jan WEHR (The University of Arizona, Tucson)
Abstract: I will study the fluctuations of the ground state energy of the one-dimensional Anderson model, in which the random potential has the Bernoulli (i.e. two-valued) distribution. By a direct
method, the statistics of the random landscape will be shown to imply a limit theorem for this quantity. I will also discuss a mean-field model of interacting bosons in such a potential and derive an
asymptotic formula for its ground state energy density in the limit of weak interaction. The results were obtained jointly with M. Bishop.
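Although the talk's results are analytic, the quantity under study is easy to sample numerically. The sketch below (an illustration; the chain length, potential strength, and sample count are arbitrary choices) diagonalizes the 1D Anderson Hamiltonian with a two-valued on-site potential and collects ground-state energy statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
L, W = 200, 1.0  # chain length and Bernoulli potential strength (illustrative)

def ground_energy():
    # on-site potential taking the two values 0 and W with equal probability
    v = W * rng.integers(0, 2, size=L)
    # tight-binding Hamiltonian: nearest-neighbor hopping plus random potential
    H = np.diag(v) - np.eye(L, k=1) - np.eye(L, k=-1)
    return np.linalg.eigvalsh(H)[0]

samples = np.array([ground_energy() for _ in range(200)])
print(f"ground-state energy: mean {samples.mean():.3f}, std {samples.std():.3f}")
```

The ground state localizes in the longest run of low-potential sites, so the fluctuations of the energy across realizations reflect extreme-value statistics of the random landscape, which is the mechanism behind the limit theorem.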
Jeudi 13 février 2014
Title: Large Deviations and Out of Equilibrium
Raphaël CHETRITE (Laboratoire J.A. Dieudonné, Nice)
Abstract: This seminar will have two parts. The first will be a "from scratch" presentation of the theory of Large Deviations. The second part will cover recent results, obtained with Hugo Touchette (PRL 111, 120601 (2013)), on Markov processes conditioned on rare events. Using large deviation theory, I will show that such a conditioned process can be represented by a Markov process without conditioning, called the equivalent process, having the same typical properties as the conditioned process. The physical motivation for studying such a conditioned process stems from the question of the equivalence and simulation of microcanonical and canonical ensembles out of equilibrium.
Jeudi 30 janvier 2014
Title: Decay of excitations in interacting one-dimensional Bose gases
Zoran RISTIVOJEVIC (Centre de Physique Théorique, Ecole Polytechnique, Palaiseau)
Abstract: The excitation spectrum of weakly interacting systems of bosons has the Bogoliubov form. In three dimensions, these excitations are unstable due to residual weak interactions. The resulting
process is known as Beliaev decay [1,2] and has been experimentally observed [3]. The related problem of decay of excitations in one-dimensional Bose gases is a fundamental long-standing problem. In
this talk I will present its solution [4]. As a result of the conservation laws in one dimension, at zero temperature the leading mechanism of decay of a quasiparticle excitation is its
disintegration into three others. We find that a phonon excitation has a decay rate proportional to the seventh power of momentum. In the integrable case of contact interaction between the bosons,
the decay rate vanishes. Our theory is based on studying the anharmonic effects produced by the leading integrability-breaking perturbations to the Lieb-Liniger model. It is not limited to the decay of lowest-momentum phonon excitations and can describe the full crossover as momentum increases and the excitation spectrum approaches its quadratic form. [1] S. T. Beliaev, Sov. Phys. JETP 7, 299 (1958). [2] L. D. Landau and E. M. Lifshitz, Statistical Physics, Part 2 (Pergamon Press, Oxford, 1980). [3] N. Katz, J. Steinhauer, R. Ozeri, and N. Davidson, Phys. Rev. Lett. 89, 220401 (2002). [4] Z. Ristivojevic and K. A. Matveev, arXiv:1312.5322 (2013).
Lundi 27 janvier 2014 à 15h30 en salle 115
Title: N=4 super Yang Mills amplitudes and integrability
Georgios PAPATHANASIOU (LAPTH)
Abstract: Maximally supersymmetric Yang-Mills theory stands out as an interacting 4-dimensional gauge theory which may be exactly solvable in the planar limit. In this talk, we explore the
consequences of a recent, integrability-based proposal of Basso, Sever and Vieira, for a nonperturbative description of its scattering amplitudes. We prove that the integrals this proposal predicts
for part of the 6-point amplitude, evaluate to transcendental functions known as Harmonic Polylogarithms at any order in the weak coupling expansion, and obtain explicit expressions up to 6 loops.
Jeudi 23 janvier 2014
Title: Shape Dynamics: a new tool for General Relativity, Cosmology and Quantum Gravity
Flavio MERCATI, Perimeter Institute
Abstract: I will give an overview of the possibilities offered by a recent reformulation of General Relativity called Shape Dynamics. This theory trades relativity of simultaneity for spatial conformal invariance, maintaining the same degree of symmetry as GR while avoiding some of its shortcomings. In SD, several kinds of singularities of GR (like black-hole and big-bang type
singularities) become unphysical gauge artifacts. Moreover quantum SD is expected to be inequivalent to a standard quantization of GR and appears to be more manageable. Finally, SD motivates an
original interpretation of the evolution of the Universe in terms of growth of complexity, which could explain the arrow of time without referring to any notion of gravitational entropy and without
statistically unlikely initial conditions for the Universe.
Jeudi 16 janvier 2014
Title: Relaxation dynamics of a coherently split one-dimensional gas
Laura FOINI, Département de Physique de la Matière Condensée, Université de Genève
Abstract: Non-equilibrium dynamics and relaxation processes in isolated quantum systems represent, at present, a vivid research direction both theoretically and experimentally. Such interest is
sustained by the overwhelming progress in the field of cold atoms, which makes it possible to investigate the unitary dynamics of the system. In this talk I will review an experiment that considered the
splitting of a one-dimensional Bose gas into two coherent gases, where, ultimately, the properties of the system are probed by matter-wave interference. While previous works have focused on the
independent dynamics of the two systems after the splitting, in our study we take into account the effect of a finite tunneling coupling between the two. Comparisons between the results obtained for
such non-equilibrium problem and the thermal ones will be drawn.
Jeudi 9 janvier 2014
Title: Entropy and Mutual information in low-dimensional classical and quantum critical systems
Jean-Marie STEPHAN (University of Virginia)
Abstract: In studies of new and exotic phases of quantum matter, the Renyi entanglement entropy has established itself as an important resource. For example, it is universal at one-dimensional quantum
critical points: the leading term can be used to extract the central charge $c$ of the underlying conformal field theory, and thus identify the universality class. In this talk I will show how an
analogous quantity defined for classical systems, the Renyi Mutual Information (RMI), can be used to access universality classes in 2d. In particular for a rectangle cut into two rectangles, the
shape dependence of the RMI can be computed exactly and is proportional to $c$. This makes it possible to extract $c$ from (transfer-matrix) Monte Carlo simulations. I will also discuss how this
Mutual information is related to the entanglement entropy of certain Resonating valence bond states in 2d, as well as other basis-dependent entropies in 1d quantum systems.
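The quantum analogue mentioned at the start, extracting $c$ from the scaling of the entanglement entropy at a 1d critical point, can be illustrated on a critical free-fermion chain (where $c=1$). The sketch below is our illustration, not the talk's classical RMI calculation; it fits the Calabrese-Cardy form $S(\ell) = (c/6)\ln[(2L/\pi)\sin(\pi\ell/L)] + \text{const}$ for an open chain:

```python
import numpy as np

L = 200
# open-boundary nearest-neighbor hopping: a critical free-fermion chain (c = 1)
H = -0.5 * (np.eye(L, k=1) + np.eye(L, k=-1))
eps, phi = np.linalg.eigh(H)
# half filling: occupy the L/2 lowest single-particle modes
C = phi[:, : L // 2] @ phi[:, : L // 2].T  # correlation matrix <c_i^dag c_j>

def block_entropy(ell):
    # entanglement entropy of the first ell sites from the correlation spectrum
    z = np.linalg.eigvalsh(C[:ell, :ell])
    z = z[(z > 1e-12) & (z < 1 - 1e-12)]
    return float(-np.sum(z * np.log(z) + (1 - z) * np.log(1 - z)))

ells = np.arange(10, L - 9, 2)  # even block sizes damp parity oscillations
x = np.log((2 * L / np.pi) * np.sin(np.pi * ells / L))
S = np.array([block_entropy(l) for l in ells])
slope, _ = np.polyfit(x, S, 1)
c = 6 * slope  # S = (c/6) ln[(2L/pi) sin(pi l/L)] + const for an open chain
print(f"estimated central charge: c = {c:.2f}")
```

The same fit-against-a-shape-function logic is what the RMI of a rectangle cut into two rectangles provides for 2d classical systems, with the shape dependence again proportional to $c$.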
Mardi 7 janvier 2014 à 14 heures en salle 116
Title: Universal thermodynamics and fate of the amplitude mode in the quantum O(N) model
Adam RANÇON (James Franck Institute, University of Chicago)
Abstract: The quantum O(N) model is ubiquitous in condensed matter and cold atoms and describes the behavior of a number of systems close to a quantum phase transition. In the ordered
(broken-symmetry) phase far from the critical point, there are N-1 Goldstone modes and a gapped amplitude mode. In low dimensions, the system is strongly coupled close to the critical point, and the
very existence of the amplitude mode is not guaranteed. We discuss the thermodynamics of the two-dimensional quantum O(N) model for $N\geq 2$ in the vicinity of its zero-temperature quantum critical point, and in particular the universal scaling function ${\cal F}_N$ which determines the pressure $P(T)$. We show that the large-$N$ approach is unable to predict the (non-monotonic) shape of ${\cal F}_N$ for $N\lesssim 10$, but ${\cal F}_N$ can be computed from a non-perturbative renormalization-group approach. Finally, we discuss the spectral function of the amplitude mode close to the quantum critical point and show how a well-defined mode at small N disappears as N increases.
Jeudi 12 décembre 2013
Title: Real Time Imaging of Quantum and Thermal Fluctuations: a Detour into Quantum Noise
Denis Bernard (LPT-ENS)
Abstract: In the last decade, progress has been achieved in realising and manipulating stable and controllable quantum systems, and this has made it possible to experimentally study fundamental questions posed in the early days of quantum mechanics. We shall theoretically discuss recent cavity QED experiments on non-demolition quantum measurements. While they nicely illustrate the possibility to implement efficient quantum state manipulations, these experiments pose a few questions such as: What does it mean to observe a progressive wave function collapse in real time? How to describe it? What do we learn from them? Their analysis will allow us on the one hand to link these experiments to basic notions of probability or information theory, and on the other hand to touch upon notions of quantum noise. As an illustration, we shall also look at quantum systems in contact with a heat bath and we shall describe the main physical features of thermally activated quantum jumps.
Jeudi 5 décembre 2013 - Colloquium - Grande Salle du CBP
Title: Quantum gravity: The view from particle physics
Hermann Nicolai (Albert Einstein Institute Potsdam)
Mardi 3 décembre 2013
Title: Quantum integrability of Benjamin-Ono model
E. Sklyanin (Université de York, UK)
Abstract: The classical Benjamin-Ono equation is a Hamiltonian integrable system originating from hydrodynamics. Its quantised version has recently been discussed in various contexts of mathematical
physics and pure algebra. The quantum Hamiltonian is diagonal on Jack symmetric functions of infinitely many variables. We present two different constructions of higher commuting Hamiltonians for the
quantum B-O equation. One is based on the determinantal formula for the Hamiltonians of N-particle Calogero-Sutherland model. Another one uses the quantum Lax matrix. A generalisation for Macdonald
polynomials is also discussed. The talk is based on joint work with Maxim Nazarov (York): arXiv:1212.2781,1212.2960,1309.6464.
Jeudi 28 novembre 2013
Title: Living on the edge: towards a general theory for pyrochlores
Ludovic Jaubert (Okinawa Institute of Technology (Japon))
Abstract: Rare-earth pyrochlore oxides are a very rich family in frustrated magnetism, where each member brings its own complexity. While some materials and models exhibit topological phase transitions beyond the standard Ginzburg-Landau approach, others are able to avoid ordering, such as spin ice (Dy2Ti2O7) or potential spin liquids (Tb2Ti2O7). Yb2Ti2O7, for example, has generated considerable
excitement as a potential example of a “quantum spin-ice”. Here we show how it is possible to construct a unified picture of ground states and excitations for any kind of nearest neighbour coupling.
Our starting point is an exact mapping onto a lattice field theory which explains both the dimensional reduction seen in Yb2Ti2O7, and the ground-state selection in Er2Ti2O7. It also provides a
general recipe to find emergent spin liquids. These results are combined with spin wave calculations and extensive Monte Carlo simulations to provide a comprehensive picture of the interplay between
order and finite temperature and quantum excitations in pyrochlore oxides with highly-anisotropic exchange interactions.
Jeudi 21 novembre 2013
Title: Cosmology from quantum gravity: the universe as a Bose-Einstein condensate
Daniele Oriti (AEI Potsdam)
Abstract: We introduce the group field theory (GFT) formalism as a second quantized dynamics of the spin network states of loop quantum gravity. We explain how it provides, at the same time, a
generalization of matrix models for 2d gravity and of lattice gravity path integrals. We summarize briefly some recent results obtained in this context, in particular concerning renormalizability. We
then discuss the issue of the emergence of continuum spacetime from this "pre-geometric" quantum gravity formalism (and related ones) and the hypothesis that it arises from some sort of cosmological
phase transition. We show that effective cosmological equations for continuum homogeneous geometries can be derived directly from fundamental group field theory models, in full generality. The
relevant quantum states are GFT condensates, and a form of nonlinear quantum cosmology arises as the hydrodynamics of the system, in the same way in which Gross-Pitaevskii equations arise from the
quantum microscopic dynamics of real Bose-Einstein condensates. A continuum spacetime emerges then from GFT as a quantum fluid.
Jeudi 14 novembre 2013 en Amphi H
Title: A numerical study of the Bose-Hubbard model with a trapping potential
Christian Torrero (CPT, Marseille)
Abstract: The seminar presents an application of the Trap Size Scaling (TSS) technique to the Bose-Hubbard model with a trapping potential in 1, 2 and 3 dimensions. Particular emphasis is put on the scaling behaviour of some observables close to phase transitions and on the determination of critical points in the phase diagram. A detailed description of the versatile algorithm employed in the study, i.e. the Directed Loop Update, will also be provided.
Jeudi 7 novembre 2013
Title: Quantum Dynamics and Topological Phases of Light in Circuit Quantum Electrodynamics
Karyn Le Hur (Center for Theoretical Physics, Ecole Polytechnique & CNRS)
Abstract: Systems in cavity or circuit Quantum Electrodynamics (QED) are extensively studied in the context of quantum information and quantum computing. In addition, studying the non-equilibrium
dynamics of photons in these open quantum systems is a challenging theoretical question. Studying “versatile lattice versions of these systems” opens new doors to quantum simulation, similar to ultra-cold atoms in optical lattices. In this talk, we review our recent theoretical progress regarding (i) the dynamics in driven and dissipative light-matter systems beyond the weak-coupling limit
[1] and (ii) the realization of topological phases of light in superconducting QED-circuit arrays through synthetic gauge fields [2]. Effects of disorder and Mott physics [3,4] will be studied. We
also comment on the current experimental status in superconducting circuit systems.
[1] P. P. Orth, A. Imambekov and K. Le Hur, Phys. Rev. B 87, 014305 (2013)
[2] J. Koch et al. Phys. Rev. A 82, 043811 (2010); A. Petrescu, A. A. Houck and K. Le Hur, Phys. Rev. A 86, 053804 (2012)
[3] J. Koch and K. Le Hur, Phys. Rev. A 80, 023811 (2009)
[4] A. Petrescu and K. Le Hur, Phys. Rev. Lett. 111, 150601 (2013)
Jeudi 17 octobre 2013
Title: String theory reBorn
Laurent Freidel (Perimeter Institute)
Abstract: In this talk I will show how string theory fundamentally satisfies the principle of Born reciprocity: a perfect duality between space and momentum space. I will show that string theory, as a consequence, not only allows space-time to carry curvature but also allows momentum space to do so, and provides phase space with a novel geometrical structure that we call a Born geometry. This opens the way to radically new string theory backgrounds in which the usual concept of locality does not hold, locality becoming relative. I will show how to implement this formulation based on a generalization of T-duality, and what the challenges are in trying to understand its formulation and its fundamental consequences for our picture of space-time locality.
Jeudi 26 septembre 2013
MaCon working-group talk by D. Ferraro
Jeudi 19 septembre 2013
Title: Electric/magnetic duality in AdS4/CFT3
Oscar Varela (ITF d'Utrecht)
Abstract: The field theory defined on a stack of N M2-branes is thought to correspond to that first introduced by BLG/ABJM. At large N, an important sector of this theory can be described,
holographically, by the SO(8)-gauged maximal supergravity in four dimensions of de Wit and Nicolai. Since its inception, the latter has been tacitly assumed to be unique. Recently, however, a
one-parameter family of SO(8) gaugings of maximal supergravity has been discovered, the de Wit-Nicolai theory being just a member in this class. I will explain how this overlooked family of SO(8)
-gauged supergravities is deeply related to electric/magnetic duality in four dimensions. I will then discuss some predictions that can be made about the possible family of holographic dual field
theories, focusing on the structure of conformal phases and the RG flows between them.
Jeudi 27 juin 2013
Title: Some cosmological avenues for testing loop quantum gravity
Aurélien Barreau (Université Joseph Fourier (Grenoble))
Abstract: I will briefly present the ideas of loop quantum cosmology and how they can lead to some potentially testable predictions. I will show how the Big Bang is replaced by a big bounce, why inflation is very natural here, and how the cosmological power spectrum can be modified by this theory. I will conclude with a few other leads concerning black holes.
Mardi 25 juin 2013 à 14h en salle des thèses
Title: Separation of variables and form factors of quantum integrable models
PhD defense of Nicolas Grosjean (ENS Lyon)
Lundi 24 juin 2013 à 13h30 en salle de réunion du labo
Title: N=2 SUSY gauge theories and quantized moduli spaces of flat connections
Jorge Teshner (DESY, Hamburg)
Abstract: Supersymmetry allows us to reduce the path integrals representing expectation values of supersymmetric observables in certain classes of N=2 SUSY gauge theories to expectation values in an
effective zero mode quantum mechanics. It turns out that this quantum mechanics is related to the quantization of the moduli spaces of flat PSL(2,R)-connections. The correspondences between N=2 SUSY
gauge theories and conformal field theory discovered by Alday, Gaiotto and Tachikawa (AGT) may then be understood using existing relations between the quantized moduli spaces of flat connections and
conformal field theory.
Mardi 18 juin 2013 à 14h en salle 115
Title: Bose gas in the vicinity of the Mott transition
Nicolas Dupuis (LPTMC, Université Pierre et Marie Curie (Jussieu))
Abstract: A Bose gas in an optical lattice undergoes a quantum phase transition between a superfluid phase and a Mott-insulating state as the interaction strength or the density is varied. Using a
non-perturbative renormalization-group approach to the Bose-Hubbard model, we obtain a phase diagram in very good quantitative agreement with quantum Monte Carlo simulations and recover the two
universality classes of the Mott transition. We then compute the pressure in the vicinity of the transition. Due to the proximity of a quantum critical point, the equation of state takes a universal
form with only a few nonuniversal parameters. We discuss both the density-driven and the interaction-driven transitions, and compute the associated universal scaling functions. Finally, we discuss
the experimental consequences of our results for ultracold bosons in an optical lattice.
Jeudi 13 juin 2013
Title: Roton-phonon interaction: from superfluid helium to quantum magnets
Mike Zhitomirsky (CEA, Grenoble)
Abstract: High-energy gapped quasiparticles (rotons) interacting with low-energy acoustic excitations (phonons) are ubiquitous in condensed matter physics. I discuss two experimentally relevant
examples. High-precision neutron spin-echo measurements of rotons in superfluid helium reveal a non-monotonic temperature dependence of the roton gap [1]. This new phenomenon is explained by
competition between the standard roton-roton scattering, which is effective above ~1K, and the roton-phonon three-particle processes, which appear due to the presence of the Bose-Einstein condensate
in the superfluid helium and dominate in the sub-Kelvin region.
In the second example, an optical magnon in an easy-plane collinear antiferromagnet interacts with acoustic spin waves. I consider the effect of disorder in such systems and demonstrate that it has a
profound effect on the temperature dependence of the relaxation rate of optical magnons. The usual random-potential scattering yields a T-independent contribution whereas the impurity-assisted magnon
scattering gives the leading temperature correction, which exceeds greatly the effect of magnon-magnon interaction in the bulk. Our theoretical prediction finds confirmation in the high-resolution
neutron measurements on BaNi2(PO4)2 [2].
[1] B. Fak, T. Keller, M. E. Zhitomirsky, and A. L. Chernyshev, "Roton-Phonon Interactions in Superfluid 4He", Phys. Rev. Lett. 109, 155305 (2012)
[2] A. L. Chernyshev, M. E. Zhitomirsky, N. Martin, and L.-P. Regnault, "Lifetime of Gapped Excitations in a Collinear Quantum Antiferromagnet", Phys. Rev. Lett. 109, 097201 (2012).
Jeudi 18 avril 2013
Title: The Physical Church-Turing Thesis and the Principles of Quantum Theory
Pablo Arrighi (LIP ENS Lyon et Université Joseph Fourier (Grenoble))
Abstract: Quantum Computation shatters the question of "How fast can we compute the solution to a given problem?" (namely, Complexity theory). But notoriously, it remains innocuous to the question of
"whether the solution of a given problem can be computed at all, even with unbounded time and space resources" (namely, Computability theory). Any answer to the latter question must depend on your
definition of what a computer is. This is why Computability theory crucially relies upon the celebrated Church-Turing thesis, which states that: "Anything that can be computed can be computed by a
Turing machine".
Yet, several works have shown how quantum theory as it stands could breach the physical Church-Turing thesis. We draw a clear line as to when this is the case, in a way that is inspired by Gandy.
Gandy formulates postulates about physics, such as homogeneity of space and time, bounded density and velocity of information --- and proves that the physical Church-Turing thesis is a consequence of
these postulates. We provide a quantum version of this theorem. The approach exhibits a formal, non-trivial interplay between theoretical physics symmetries and computability.
Jeudi 4 avril 2013
Title: Beyond time-dependent charge transport: noise and thermoelectric effects.
Janine Splettstoesser (Institut für Theorie der Statistischen Physik, Aachen)
Abstract: Nanoscale systems driven by time-dependent signals, such as quantum pumps, have recently attracted a lot of attention since they can serve as controlled sources of single particles [1].
Furthermore, it can be shown that time-dependent transport provides an intriguing spectroscopy tool, revealing quantum effects that are not accessible from a stationary-state measurement [2].
In this talk I will present different examples of these particular characteristics of time-dependently driven quantum dot devices. Due to the smallness of these setups, many-body effects like the
Coulomb interaction, as well as quantum fluctuations, play an important role for the transport properties, and their signatures are observable in charge transport.
In addition to the transported charge I will discuss the transport noise induced by the time-dependent modulation[3]. Interestingly, there can be pumping noise even in the absence of charge pumping,
which gives additional insight into the underlying transport mechanism.
Finally, I will talk about the thermoelectric performance of driven quantum dots [4]. Under certain conditions, not only can quantized charge pumping be realized, but the heat current also exhibits
plateaus, related to the spin degeneracy of the system. This makes it possible to operate time-dependently driven quantum dot devices as nanoscale engines, in particular as battery chargers,
cooling devices or heat engines.
[1] G. Feve et al., Science 316, 1169 (2007); M. D. Blumenthal, et al., Nature Physics 3, 343 (2007); V. F. Maisi, et al., New J. Phys. 11, 113057 (2009).
[2] F. Reckermann, J. Splettstoesser, and M. R. Wegewijs, Phys. Rev. Lett. 104, 226803 (2010).
[3] R.-P. Riwar, J. Splettstoesser, and J. König, arXiv:1212.3545.
[4] S. Juergens, F. Haupt, M. Moskalets, and J. Splettstoesser, in preparation.
Jeudi 14 mars 2013
Title: Tomonaga-Luttinger physics in electronic quantum circuits
Mathias Albert (Laboratoire de Physique des Solides, Orsay)
Abstract: In one-dimensional conductors, Coulomb interactions result in correlated electronic systems called Tomonaga-Luttinger liquids (TLL). The TLL physics also applies to other many-body
phenomena, providing complementary viewpoints while benefiting from the powerful TLL framework.
One such phenomenon is the low-energy conductance suppression of a quantum coherent conductor embedded in a dissipative circuit, an effect called dynamical Coulomb blockade. Here we investigate the
basic class of mesoscopic circuits constituted by a short single-channel quantum conductor in series with a resistance R. Remarkably, such circuits can be mapped onto a TLL of interaction parameter 1/(1 + Re²/h), with an impurity. From this mapping, generalized to realistic dissipative circuits, a scaling law for the suppressed conductance is derived at R = h/e², and small deviations are computed
for R ≠ h/e² using the thermodynamic Bethe-ansatz exact solution [1]. We find that the scaling law is obeyed by our data [2] for arbitrary quantum channels, emulated by a Ga(Al)As quantum point
contact, and by the recent data [3] obtained using a carbon nanotube. This demonstrates a highly tunable test-bed for TLL physics, and consolidates a recently proposed phenomenological expression for
the conductance of a quantum channel in a linear circuit.
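Spelled out as a formula (a restatement of the mapping quoted above, in the abstract's own notation, with R the series resistance and e²/h the conductance quantum):

$$ K \;=\; \frac{1}{1 + R e^2/h}, $$

so the impurity problem sits at the special point K = 1/2 precisely when R = h/e², the value at which the scaling law for the suppressed conductance is derived.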
[1] P. Fendley, A. W. W. Ludwig and H. Saleur, Phys. Rev. B 52, 8934-8950 (1995).
[2] Jezouin et al, arXiv:1301.4159 (2013).
[3] Mebrahtu et al., Nature 488, 61 (2012).
Jeudi 7 mars 2013
Title: Cavity QED of two-dimensional electron systems subjected to a perpendicular magnetic field
David Hagenmüller (MPQ, Paris 7)
Abstract: Cavity quantum electrodynamics (cavity QED) is the study of the interaction between light confined in a reflective cavity and atoms or other particles, under conditions where the quantum
nature of light photons is significant. If the coupling between light and matter is sufficiently strong, the resulting quantum eigenstates are entangled states. The whole system exhibits new
resonances, with energies that are different from the ones of the bare excitations.
In this talk, I will describe the coupling between a two-dimensional electron gas subjected to a perpendicular magnetic field and cavity-confined optical modes. In particular, I will show that
such a system can reach an unprecedented ultrastrong coupling regime, in which the vacuum Rabi frequency (quantifying the strength of the light-matter interaction) becomes comparable to the cyclotron
transition frequency between two consecutive Landau levels: the vacuum Rabi frequency scales with the square root of the Landau-level filling factor [1]. This physical prediction has been
quantitatively demonstrated by recent spectroscopy experimental results in the THz domain[2].
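In formula form (a sketch; ν denotes the Landau-level filling factor, and the dimensionless prefactor, worked out in [1], is omitted here):

$$ \Omega_R \;\propto\; \sqrt{\nu}, $$

so the ratio of the vacuum Rabi frequency to the cyclotron transition frequency can be pushed toward unity by increasing the filling factor.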
Moreover, an intriguing problem is to explore how graphene behaves when embedded in a cavity resonator. I will show that it is possible to achieve ultrastrong coupling between the graphene
cyclotron transition and cavity photons, and that this leads to strong qualitative differences with respect to the case of massive fermions in semiconductors. In particular, the former can undergo a quantum
phase transition analogous to the one occurring in the Dicke model for superradiance [3].
[1] D. Hagenmüller, S. De Liberato and C. Ciuti, Phys. Rev. B 81, 235303 (2010).
[2] G. Scalari et al., Science 335, 1323-1326 (2012).
[3] D. Hagenmüller and C. Ciuti, Phys. Rev. Lett., in press (2012).
Jeudi 21 février 2013
Title: Transport properties of thin superconducting films
Aleksandra Petkovic (LPTHE, Paris)
Abstract: Transport properties of 2D superconducting systems can be very different from those of bulk superconductors because thermal and quantum fluctuations of the superconducting order parameter are
more pronounced and play a crucial role. First we focus on the influence of superconducting fluctuations on the dynamics while the system is in the normal state but close to the superconducting transition.
In the fluctuational regime, we derive a Ginzburg-Landau-type action under far-from-equilibrium conditions. Then, utilizing it, we calculate the fluctuation-induced density of states and Maki-Thompson- and
Aslamazov-Larkin-type contributions to the in-plane electrical conductivity. We propose an experimental setup where our results can be tested.
Then, we concentrate on transport at lower temperatures under close-to-equilibrium conditions, investigating the influence of quantum fluctuations on the unbinding of vortex-antivortex pairs. We determine the
temperature below which quantum fluctuations dominate over thermal fluctuations and describe the transport in this quantum regime. The crossover from quantum to classical regime is discussed and the
quantum correction to the classical current-voltage relation is determined.
Jeudi 14 février 2013
Title: Classical and quantum integrability : from spin chains to the AdS/CFT duality.
Sébastien Leurent (Imperial College, London)
Abstract: Various quantum systems are integrable and have a large number of conserved quantities, related to each other by an equation called the Hirota equation. This equation is also known in the
context of classical integrability, and its generic solution can be expressed in terms of operators called Q-operators. We will start from some simple spin chains, and generalize to the finite size
spectrum of some field theories and the AdS/CFT duality ; we will see that these systems are strongly constrained by the analytic properties of the eigenvalues of the Q-operators, and that these
constraints allow one to solve the system, in the sense that they give relatively simple equations encoding the spectrum. However, the physical origin of these analytical conditions is not always very
well understood.
Jeudi 7 février 2013
Title: Avalanches
Kay Wiese (LPTENS, Paris)
Abstract: Magnetic domain walls, charge density waves, contact lines, and cracks are all elastic systems, pinned by disorder. Changing an external parameter, they remain stuck before advancing in
sudden rapid motion, termed an avalanche. After an introduction to the phenomenology, I present work based on the functional renormalization group, which allows one to go beyond the usual toy-model
description: avalanche-size distributions in any dimension, the distribution of velocities in an avalanche, and their shape. These techniques also lead to an exact solution for the decay of
2-dimensional Burgers turbulence.
Jeudi 28 février 2013
Winter break: no seminar
Jeudi 31 janvier 2013 (14h30-16h30, amphi Schrödinger)
Title: Everything you wanted to know about Mathematica (but were afraid to ask)
Marc Magro (ENS Lyon)
Abstract: In two hours, Master Magro will turn you from Mathematica Novices into Mathematica Padawans... To take part in this introduction, bring a laptop with the Mathematica software installed!
Jeudi 24 janvier 2013
Theory-group day
Jeudi 17 janvier 2013
Title: Dark energy: an effective field theory approach
Federico Piazza (APC, Paris)
Abstract: The discovery of the accelerating expansion of the Universe is triggering an impressive amount of theoretical and observational activity. After briefly reviewing the problems and challenges
of "Dark Energy", I will focus on recent and ongoing works in which my collaborators and I propose a unifying description of dark energy and modified gravity models that makes use of effective field
theory (EFT) techniques. EFT allows one to isolate the relevant low-energy degrees of freedom and to study their dynamics efficiently. By extending the "effective field theory of inflation"
formalism to late-time cosmology, we write the most general action for cosmological perturbations in the presence of an additional scalar degree of freedom. As we argue, cosmological
perturbations are in fact the relevant low-energy degrees of freedom in a cosmological set-up. I will focus on a few operators that are quadratic in the perturbations and which appear in non-minimally
coupled scalar-tensor gravity and "Galileon" theories, and I will describe the mixing between gravity and the scalar degree of freedom that such operators produce.
Jeudi 22 novembre 2012
Title: Avalanches in systems with quenched disorder: Mean-field models and beyond
Alexander Dobrinevski (LPT ENS)
Abstract: Disordered systems typically respond non-smoothly to external driving. This leads to phenomena like Barkhausen noise in the motion of magnetic domain walls, earthquakes in the motion of
tectonic plates, and avalanches in granular media under shear stress. A simple mean-field model for such avalanche phenomena is a particle driven by a spring on a Brownian random-force landscape. It
was developed originally in the context of magnetic domain walls and is also known as the Alessandro-Beatrice-Bertotti-Montorsi model. I will present some analytical results on statistics of
avalanches in several variants of this model (e.g. including memory effects). I will then discuss, using RG methods, the relationship of this mean-field model to a spatially extended elastic
interface in a disordered environment.
Jeudi 15 novembre 2012
Title: Quartet correlations between Cooper pairs in a mesoscopic double Josephson junction
Régis Mélin (Institut Néel, Grenoble)
Abstract: Two Josephson junctions separated by less than the superconducting coherence length involve higher order processes that can be quite sizeable for more transparent junctions. Those processes
originate from crossed Andreev reflections (CAR), where Cooper pairs are split into two spin-entangled electrons. In a Sa-S-Sb bijunction, double CAR simultaneously produces two Cooper pairs, one in
each junction. This amounts to producing nonlocal quartets in the superconductors Sa and Sb [1]. Cooper pairs being pseudo-bosons, this mechanism bears some similarity with the emission of pairs of
time-correlated photons in Quantum Optics, thus the name "superconducting beam splitter" given to the basic bijunction set-up. Energy conservation implies that a coherent dc quartet current can flow
even if Sa and Sb are biased at respective voltages V and -V, thus offering the possibility of a dc Josephson effect in biased junction arrays. The appearance of the quartet currents at equilibrium
relies on a superconducting circuit having 3 (and not 2) current terminals. The dc quartet current can be detected by the synchronization of the ac Josephson oscillations in each contact a and b, either
with Va=V and Vb=-V, or by adding small dc and ac components so as to achieve Shapiro steps for quartet motion. Special attention will be paid to the microscopic theory for those quartet and higher-
order resonances in connection with preliminary experimental results obtained in Grenoble [groups of F. Lefloch (CEA-Grenoble, INAC) and H. Courtois (NEEL)]. A phenomenological circuit model for an
overdamped superconducting triode will also be discussed in connection with those experiments.
[1] A. Freyn, B. Douçot, D. Feinberg and R. Mélin, Phys. Rev. Lett. 106, 257005 (2011).
Jeudi 8 novembre 2012
Title: Stochastic Thermodynamics of Biased Diffusions
Matteo Smerlak (Albert Einstein Institute, Potsdam (Germany))
Abstract: A unifying framework for the thermodynamics of fluctuating systems with Fokker-Planck dynamics has been developed by Seifert using the notion of "stochastic entropy". I will consider the
extension of this formalism to the case of geometric/entropic ratchets: inhomogeneous systems in which freely diffusing particles do not reach a Boltzmann-Gibbs equilibrium, even at constant
temperature, and thus violate the naive law that "the entropy of a system at local equilibrium cannot decrease". I will introduce to this effect the notion of "relative stochastic entropy", and use
it to generalize (i) Jaynes' maximum-entropy principle for the canonical ensemble, (ii) the second law of thermodynamics and (iii) Seifert's integral fluctuation theorem. These results apply e.g. to
colloidal particles dragged in viscous fluids with space-dependent viscosity or in asymmetric confined geometries.
Jeudi 1er novembre 2012
Autumn break: no seminar
Jeudi 14 juin 2012
Title: TBA
Sasha Chernyshev (University of California at Irvine)
Du 5 au 8 juin 2012
Local organizers: Edmond Orignac and Tommaso Roscilde
This workshop is dedicated to the theoretical challenges in the field of quantum gases, with a strong connection to condensed matter physics - including strongly correlated systems, low-dimensional
systems, disorder effects etc.
One of the main goals of this workshop is to bring together young researchers coming from Europe and overseas. The program includes 5 overview lectures by leading senior scientists in the field of
cold atoms and condensed matter, about 20 lectures by junior scientists selected by the advisory committee, and a poster session open to all the participants. A special invited session is organized
this year on the subject of Bose-Einstein condensation in condensed matter systems.
Jeudi 31 mai 2012
Title: Casimir force induced by imperfect Bose gas
Jarosław Piasecki (Institute of Theoretical Physics, University of Warsaw)
Abstract: We present a study of the Casimir effect in an imperfect (mean-field) Bose gas filling the volume contained between two infinite parallel plane walls. In the one-phase region the Casimir
force decays exponentially fast with increasing distance between the walls. However, when the Bose-Einstein condensation transition is approached, the decay length in the exponential law diverges. In the
two-phase region the Casimir force is long range and decays following a power law. We clarify the relation between the range of the Casimir forces and the bulk correlation length.
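Schematically (a sketch of the two regimes described above; d is the wall separation, ξ the bulk correlation length, and the power-law exponent α is not specified in the abstract):

$$ F_{\mathrm{Cas}}(d) \;\sim\; e^{-d/\xi} \quad \text{(one-phase region)}, \qquad F_{\mathrm{Cas}}(d) \;\sim\; d^{-\alpha} \quad \text{(two-phase region)}, $$

with the divergence of ξ at the transition connecting the two behaviours.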
Jeudi 24 mai 2012
Title: Majorana Edge States in One-Dimensional Systems
Pascal Simon (LPS Orsay)
Abstract: We study a one-dimensional wire with strong spin-orbit coupling, which supports Majorana fermions when subject to a Zeeman magnetic field and in proximity of a superconductor. We evaluate
the local density of states, as well as the spin polarization in this system using an exact numerical diagonalization approach. Moreover, we define and calculate a local "Majorana polarization" and
"Majorana density". We find that the spatial dependence of the Majorana polarization is proportional to that of the spin polarization parallel to the chain and we propose to test the presence of
Majorana fermions in a 1D system by a spin-polarized density of states measurement [1].
We then discuss the effect of electron-electron interactions on one-dimensional electron systems that support Majorana edge states. Strong interactions generically destroy the induced superconducting
gap that stabilizes the Majorana edge states. For weak interactions, the renormalization of the gap is nonuniversal and allows for a regime in which the Majorana edge states persist. We present
strategies of how this regime can be reached [2].
[1] C. Bena, D. Sticlet, P. Simon, arXiv:1109.5697.
[2] S. Gangadharaiah, B. Braunecker, P. Simon, D. Loss, Phys. Rev. Lett. 107, 036801 (2011).
Semaine Michael E. Fisher: 21 au 25 Mai 2012
Short course: Molecular motors: observations, experiments and theory
Abstract: This informal, introductory short course will explain that molecular motors are proteins found in all living cells that convert chemical energy into mechanical work and motion. Thus the
protein myosin, together with filamentary actin, underlies the operation of all muscles. Processive motor proteins such as kinesin and dynein, and some types of myosin, step unidirectionally along
linear tracks, specifically microtubules and actin filaments, and play crucial roles in cellular transport processes, organization, and function. How do we know about such facts? How can we observe
in vitro or even in vivo the operation of single, individual molecules? And what sort of experiments and theory are appropriate for gaining insight into the mechanisms by which such motors operate?
• Mercredi 23 mai : 10h-12h (with a 30-minute coffee break in the middle)
• Mercredi 23 mai : 12h-14h (with a 30-minute coffee break in the middle)
• Jeudi 24 mai : 10h-12h (with a 30-minute coffee break in the middle)
All lectures will take place in the main room of the Centre Blaise Pascal.
Jeudi 17 mai 2012
No seminar: Ascension holiday
Jeudi 10 Mai 2012 à 14h
Title: The nonperturbative renormalization group (NPRG): principles and some applications. (Colloquium)
Dominique Mouhanna (LPTMC, Jussieu)
Abstract: The renormalization group (RG) was introduced in its modern form by K. G. Wilson in the seventies, based on L. P. Kadanoff's concept of block spins. Although nonperturbative in
essence, the RG was, for a long time, confined to the perturbative domain: small coupling constants, weak disorder, the vicinity of the upper or lower critical dimension, etc.
In the nineties, C. Wetterich introduced a new formulation of the RG, based on the concept of a running Gibbs free energy, which turns out to be the most suitable one for tackling nonperturbative
issues: strong-coupling behaviour, physics far from the upper or lower critical dimensions, physics of topological excitations, bound states, disorder, etc.
In the first part of my talk I present the general principles underlying the method. In the second part I illustrate these principles in the context of various systems taken from field theory and
condensed matter physics.
Jeudi 3 mai 2012
Title: Statistical mechanics of harmonic spheres: glass and jamming transition
Hugo Jacquin (Laboratoire MSC, Université Denis Diderot, Paris)
Abstract: When a fluid is cooled down sufficiently rapidly to avoid crystallization, its dynamics slows down very rapidly, and an amorphous solid is formed. This phenomenon is called the glass
transition. When piling more and more spheres randomly in a box, a density is eventually attained where all spheres come into contact. At this density and upon further compression, the system acquires
rigidity: this is the jamming transition. The jamming and glass transitions are very old statistical-mechanics problems that are often associated with one another, because intuition suggests that both
phenomena arise from the same physical effect: the caging of each particle by its shell of neighbours.
During my PhD thesis, I studied analytically a model system of harmonic spheres (soft spheres that repel each other with finite amplitude and finite range), which allows one to study the
jamming and the glass transition simultaneously. I will present results obtained on the dynamics near the glass transition, as well as the thermodynamics near the jamming point, with field-theoretic and replica methods.
Jeudi 26 Avril 2012 à 14h
Title: Quantum phase slips in 1D Josephson junction chains
G. Rastelli (LPMMC, Grenoble).
Abstract: One-dimensional Josephson junction chains (1D-JJ chains) are paradigmatic systems to study the correlations between different elements in superconducting Josephson junction nanodevices.
Their use has been proposed for the realization of a qubit topologically protected against decoherence [1] or for the realization of a fundamental current standard in quantum metrology [2].
The quantum ground state of the chain is ruled by the competition between the Josephson effect and the electrostatic interactions, which hinder charge transfer. This effect corresponds to an
increase of the quantum fluctuations of the local phase of the condensate on the islands [3]. Indeed, in the thermodynamic limit, theory predicts a quantum superconductor-insulator phase
transition [4]. However, the experimental devices designed for the above-mentioned applications are generally composed of a finite number of superconducting elements. I will discuss the effect of
quantum phase fluctuations in 1D-JJ chains of finite length [5]. Some comparisons with experiments will also be presented.
• [1] S. Gladchenko et al., Nature Physics 2009; B. Douçot, J. Vidal, PRL 2002; B. Douçot, M. V. Feigelman, L. B. Ioffe, PRL 2003.
• [2] W. Guichard, F. W. J. Hekking, Phys. Rev. B 2010; J. Flowers, Science 2004.
• [3] K. A. Matveev, A. I. Larkin, L. I. Glazman, PRL 2002.
• [4] R. M. Bradley, S. Doniach, PRB 1984; S. E. Korshunov, Sov. Phys. JETP 1986-1989.
• [5] G. Rastelli et al., arXiv:1201.0539 (submitted to PRB).
Jeudi 12 et 19 avril 2012
Spring break
Jeudi 29 Mars 2012 à 14h (amphi D)
Title: Fluctuation-Dissipation relations for nonequilibrium systems
Matteo Colangeli (Politecnico di Torino)
Abstract: TBA
Mardi 27 Mars 2012 à 14h (amphi H)
Title: Holographic Correlation Functions
Tristan McLoughlin (Albert Einstein Institute (Potsdam))
Abstract: In this talk we discuss recent progress in the calculation of quantum field theory correlation functions using the AdS/CFT correspondence. In particular, we describe the strong coupling
description of three-point functions in planar N=4 super Yang-Mills theory by semiclassical strings. We show that such semiclassical calculations agree with the exact known answers in special
protected cases, and provide a starting point for understanding generic operators.
Jeudi 22 Mars 2012 à 14h
Title: Josephson parametric amplifiers for quantum information processing. (Colloquium)
B. Huard (LPA - Ecole Normale Supérieure de Paris).
Abstract: Nowadays, it is possible to control and measure the quantum state of systems with a few degrees of freedom, whether they are microscopic objects like cold atoms and single photons, or
macroscopic objects like superconducting circuits. Such delicate experiments require an interface which bridges the gap of orders of magnitudes in energy between the quantum object and the data
acquisition system. This problem is solved by an active device: the amplifier.
Although the amplifier is necessary to overcome the noise at the stage of data acquisition, it inevitably alters the signal. We have developed an amplifier for microwave signals which adds only the
minimum of noise allowed by quantum laws: the equivalent of half a photon of noise referred to the input. Our amplifier is based on a superconducting circuit of Josephson junctions.
In this talk, I will review the principles of parametric amplification with superconducting circuits and show the quantum limits specific to the different types of amplifiers. I will present
applications of quantum limited amplifiers for quantum information processing using superconducting qubits and pairs of twin microwave beams. In particular, I will show that the amplifier we
developed allows a direct measurement of the quantity of entanglement between two beams.
Jeudi 15 Mars 2012 à 14h
Title: An impurity in a Fermi sea on a narrow Feshbach resonance: A variational study.
Christian Trefzger (ENS Paris).
Abstract: We study the problem of a single impurity of mass M immersed in a Fermi sea of particles of mass m [1]. The impurity and the fermions interact through an s-wave narrow Feshbach resonance, so
that the Feshbach length R* naturally appears in the system. We use a simple variational ansatz, limited to at most one pair of particle-hole excitations of the Fermi sea, and we determine for the
polaronic and dimeronic branches the phase diagram between absolute ground state, local minimum, thermodynamically unstable regions (with negative effective mass), and regions of complex energies
(with negative imaginary part). We also determine the closed channel population which is experimentally accessible. Finally we identify a non-trivial weakly attractive limit where analytical results
can be obtained, in particular for the crossing point between the polaronic and dimeronic energy branches.
[1] Christian Trefzger, Yvan Castin, arXiv:1112.4364
Jeudi 9 Mars 2012 à 14h
Title: Ground-state phase diagram of the quantum J1-J2 model on the honeycomb lattice
F. Mezzacapo (Max Planck Institute of Quantum Optics).
Abstract: Frustrated quantum antiferromagnets are a subject of current intense research. Frustration can arise either from the geometry of the system, or from competing interactions, and can lead to
the stabilization of novel, exotic (magnetic and non-magnetic) phases of matter. The antiferromagnetic spin-1/2 Heisenberg Hamiltonian in presence of next nearest-neighbor coupling is a prototypical
example of a quantum spin model (usually referred to as the J1-J2 model) featuring interaction-induced frustration. Such a model is of relevance for experimentally accessible compounds and has been
proposed as an effective description to characterize the spin-liquid phase of the half-filled Hubbard model on the honeycomb lattice [Z. Y. Meng et al., Nature (London) 464, 847 (2010)]. For these
reasons it has been recently investigated by means of various computational approaches, however, different studies have yielded conflicting physical scenarios. For example it is not clear if the
model features a disordered ground state (GS) for any value of J2/J1 and even the nature of the ordered phases remains controversial.
In this talk I will present results of a variational study of the GS phase diagram of the quantum J1-J2 model on the honeycomb lattice. Values of the energy and of the relevant order parameters are
computed as a function of J2/J1: the GS is magnetically ordered at small J2/J1 and for J2/J1 > 0.4 (collinear), while in the intermediate region the GS is disordered. The results discussed here
also show that the reduction of the half-filled Hubbard model to the J1-J2 one does not yield accurate predictions.
Thursday 1 March 2012 at 2pm
Title: Holographic fluids, vorticity and analogue gravity
M. Petropoulos (CPhT - Ecole Polytechnique).
Abstract: In view of the recent interest in reproducing holographically various properties of conformal fluids, I will analyze the emergence of rotation and vortices in the framework of AdS/CFT. The
boundary backgrounds involved in this study turn out to exhibit interesting relationships with sailing in drift currents or sound wave propagation in moving media. The latter opens the way to the
holographic description of analogue-gravity models.
Thursday 23 February 2012
Title: The group field theory description of quantum spacetime
Daniele Oriti (Potsdam)
Abstract: We present an introduction to group field theories and tensor models, as a framework for the dynamics of quantum spacetime. They are a generalization of matrix models for 2d quantum
gravity, incorporating insights from other approaches like spin foam models and simplicial quantum gravity, as well as tools from non-commutative geometry. We review briefly recent developments, and,
if time allows, discuss in more detail some of them.
Thursday 9 February 2012 at 2pm
Title: Perturbative quantum gravity with Immirzi parameter.
Simone Speziale (CPT Marseille)
Abstract: If one uses a first order action principle for general relativity, there is a third fundamental coupling constant that appears, next to Newton's and the cosmological constants. It enters
the coupling of the gravitational field to fermions, and it is often referred to as the Immirzi parameter, for historical reasons. In this talk, I will review this formulation of general relativity
and the meaning of the parameter. I will then present recent results on the 1-loop quantum effective action which leads to a non-trivial running of the Immirzi parameter, and discuss possible
implications for non-perturbative quantum gravity.
Thursday 2 February 2012 at 2pm
Title: Random matrix ensembles for quantum spin decoherence.
F. David (SPhT - CEA Saclay).
Abstract: I present a class of random matrix ensembles relevant for the study of quantum decoherence for quantum spins. These ensembles generalize the standard GUE ensemble. For a single spin j, they
lead to exact solutions for the dynamics of decoherence and for quantum diffusion. I discuss the general non-Markovian case, the Markovian limits and the quantum-to-classical transition.
Monday 30 January 2012 at 4pm (room 117)
Title: A Geometric Classification of Supersymmetric Solutions in String Theory
Alessandro Tomasiello (Université de Milano-Bicocca)
Abstract: Supersymmetry is a conjectural new symmetry of the universe, which would help solve some of its puzzles. In this talk, we will consider supersymmetric solutions in type II string theory; we
will describe a system of equations which reformulates supersymmetry in terms of differential forms, without any need to resort to spinors. This extends to any spacetime a similar method already
available for vacuum compactifications, i.e. manifolds of the type AdS4 x M6 or Minkowski4 x M6, where M6 is compact, which led to constraints on M6 involving Hitchin's "generalized complex
geometry". In our new, more general setup no factorization of spacetime is assumed.
Thursday 26 January 2012 at 2pm (room 115)
Title: Quantum Critical Magnetization Behaviors of the Kagome- and Triangular- Lattice Antiferromagnets
T. Sakai (Condensed Matter Theory Group, SPring-8, Harima).
Abstract: The magnetization processes of the S=1/2 isotropic Heisenberg antiferromagnets on the Kagome and triangular lattices are studied. Data from the numerical-diagonalization method, for systems
of up to 39 spins, are reexamined from the viewpoint of the derivative of the magnetization with respect to the magnetic field. We find that the behavior of the derivative around the 1/3 height of the
magnetization saturation is quite different from the cases of typical magnetization plateaux for the Kagome-lattice antiferromagnet. This new phenomenon is called the magnetization ramp [1]. We also
compare it with the 1/3 magnetization plateau of the triangular antiferromagnet. The critical exponent analysis indicates a clear difference between the magnetization plateau and ramp [2]. In
addition, using numerical diagonalization for systems of up to 42 spins, we suggest that the kagome-lattice antiferromagnet has a gapless singlet-triplet excitation in the thermodynamic limit [3].
[1] H. Nakano and T. Sakai: J. Phys. Soc. Jpn. 79 (2010) 053707.
[2] T. Sakai and H. Nakano: Phys. Rev. B 83 (2011) 100405(R).
[3] H. Nakano and T. Sakai: J. Phys. Soc. Jpn. 80 (2011) 053704.
Thursday 19 January 2012 at 2pm
Title: Coherence of Single Electron Sources from Mach-Zehnder Interferometry
G. Haack (Université de Genève).
Abstract: A new type of single-electron source (SES) has emerged which permits the injection of single particles in a controllable manner into an electronic circuit. Multiparticle exchange, two-particle
interference effects, entanglement and HBT experiments have already been proposed. Here we determine the coherence length of the single-particle states by analyzing the decay of Aharonov-Bohm
oscillations as a function of the imbalance of a Mach-Zehnder interferometer connected to an SES. This single-particle coherence length is of particular importance as it is an intrinsic property of
the source in contrast to the dephasing length.
Thursday 12 January 2012 at 2pm
Title: Waiting times distribution of electrons flowing across mesoscopic conductors
M. Albert (LPS, Orsay).
Abstract: Electronic transport through mesoscopic devices is known to be stochastic due to the quantum nature of the charge carriers. As has been shown over the past 20 years, the noise power spectrum
as well as the Full Counting Statistics (FCS) provide much important information about the system under study. However, the distribution of waiting times (WTD) between the detection of successive
charge carriers has recently been investigated and shown to be very powerful for understanding the short-time physics and the correlations between different elementary events [1,2], in the same
spirit as the level spacing distribution in the spectral statistics of complex systems. In this talk we will use this quantity to discuss the short-time correlations in a perfect one-dimensional quantum
channel with a quantum point contact. Although the system is extremely simple, the WTD reveals quite striking transport properties that can be explained using random matrix theory in a totally
unexpected context. Some other quantum states, such as a train of Lorentzian pulses [3], will also be considered, and the relation between the WTD and the FCS will also be discussed.
[1] T. Brandes, Ann. Phys. (Berlin) 17, 477 (2008).
[2] M. Albert, C. Flindt and M. Buttiker, Phys. Rev. Lett. 107, 086805 (2011)
[3] J. Keeling, I. Klich, L. S. Levitov, Phys. Rev. Lett. 97, 116403 (2006)
Monday 9 January 2012 at 3:30pm in room 117
Andres Anabalon Dupuy, Science Department, Universidad Adolfo Ibanez (Chili)
Title: Black Holes and Wormholes for a Self Interacting Scalar Field in asymptotically (anti) de Sitter Spacetime.
Abstract: TBA
Thursday 22 and 29 December 2011
No seminar: Christmas break
Tuesday 13 December 2011 at 2pm (unusual date and time)
Title: Renormalization Group Approach to Quasi-One Dimensional Systems.
Samuel Moukouri (Racah Institute of Physics, Hebrew University, Jerusalem)
Abstract: The advent of high performance computing and the development of sophisticated numerical techniques have opened new vistas for researchers in condensed matter physics. Exciting predictions
of new quantum phases of matter in model systems can often be substantiated or falsified by numerical methods. In this talk, I will present recent development in applying the celebrated
density-matrix renormalization group method to quasi-one dimensional systems. I will show the power of this technique by computing, with high accuracy, critical points in quantum phase transitions
induced by small inter-chain interactions. Illustrations are made on models for magnetic frustration and for the Mott transition.
Thursday 8 December 2011 at 2pm
Title: Non-equilibrium phase transition for a system of diffusing-coalescing particles with deposition and evaporation
O. Zaboronsky (Université de Warwick & ENS Lyon)
Abstract: In 1998 Majumdar and collaborators introduced a model of diffusing-aggregating massive particles with extra evaporation and deposition processes. Based on mean-field analysis and
numerical simulations they conjectured that this model undergoes a non-equilibrium phase transition from the state with zero flux of mass toward large masses to the state with a positive
asymptotically constant flux. By combining global properties of the Markov chain describing the system with the analysis of low order moments we give a rigorous proof of Majumdar's conjecture. Joint
work with Roger Tribe, Colm Connaughton and R. Rajesh.
Monday 5 December 2011 at 3:30pm
Title: Vacua analysis in extended supersymmetry compactifications
G. Dibitetto (Université de Groningen)
Abstract: We consider truncations of half-maximal and maximal supergravity theories in four dimensions and we analyse the landscape of vacua for those embedding tensor configurations having a string
theory origin in terms of geometric flux compactifications. The full dictionary between fluxes and embedding tensor components is worked out, the gaugings are identified and the full mass spectra are
computed. The search for vacua is carried out by applying algebraic-geometry techniques, which allow for a complete analytical treatment.
We furthermore discuss the link with duality-covariant extensions of the concept of backgrounds in string theory, such as Double Field Theory and (Exceptional) Generalised Geometry.
Thursday 1 December 2011 at 4pm (room 117)
Title: The 1/N Expansion and The Continuum Limit in Colored Tensor Models.
Razvan Gurau (Perimeter Institute)
Abstract: Matrix models are one of the most versatile tools in theoretical physics with applications ranging from the theory of strong interaction, to string theory, critical phenomena and two
dimensional quantum gravity. In higher dimensions matrix models generalize to tensor models. Ordinary tensor models do not admit a meaningful 1/N expansion, and no analytic result could be established
about their continuum limit. In this talk I will give an overview of recent results for colored tensor models. Such models have been shown to admit a 1/N expansion dominated by graphs of spherical
topology. The leading sector is summable and the tensor models undergo a phase transition to a continuum theory. I will conclude with an overview of results on the continuum limit for various specific
models.
Thursday 1 December 2011 at 2pm
Title: Discrete complex analysis and statistical physics
H. Duminil-Copin (Université de Genève).
Abstract: Discrete harmonic and discrete holomorphic functions have proved to be very useful in many domains of mathematics. Recently, they have been at the heart of two-dimensional
statistical physics (for instance, through the works of Kenyon, Schramm, Smirnov and others). We will present some of the connections between discrete complex analysis and statistical physics. In
particular (this is joint work with S. Smirnov), we will use discrete holomorphic functions to prove that the number a_n of self-avoiding walks of length n (starting at the origin) on the hexagonal
lattice satisfies a_n^{1/n} -> \sqrt{2+\sqrt{2}} as n goes to infinity, thus answering a conjecture made by Nienhuis in 1982.
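As a quick, illustrative aside (not from the talk): the limit a_n^{1/n} -> sqrt(2 + sqrt(2)) ≈ 1.8478 can be checked numerically by brute-force enumeration of short self-avoiding walks. The 'brick wall' integer-grid encoding of the honeycomb lattice used below is an assumption of this sketch, and convergence in n is slow, but the trend toward the connective constant is visible.

```python
from math import sqrt

def neighbours(x, y):
    """Honeycomb lattice in 'brick wall' form: every site has degree 3.

    Sites are integer pairs (x, y); both horizontal bonds always exist, and
    the vertical bond points up or down depending on the parity of x + y.
    """
    vertical = (x, y + 1) if (x + y) % 2 == 0 else (x, y - 1)
    return [(x + 1, y), (x - 1, y), vertical]

def count_saws(n):
    """Count a_n, the self-avoiding walks of length n starting at the origin."""
    def dfs(site, visited, steps):
        if steps == 0:
            return 1
        return sum(dfs(nxt, visited | {nxt}, steps - 1)
                   for nxt in neighbours(*site) if nxt not in visited)
    return dfs((0, 0), {(0, 0)}, n)

if __name__ == "__main__":
    mu = sqrt(2 + sqrt(2))  # Nienhuis' conjectured connective constant
    for n in (4, 8, 12):
        a_n = count_saws(n)
        print(f"n={n:2d}  a_n={a_n:5d}  a_n^(1/n)={a_n ** (1 / n):.4f}  mu={mu:.4f}")
```

Since a_n grows roughly like mu^n, this brute-force enumeration is only practical up to n of about 20; the proof in the talk is, of course, entirely analytic.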
Thursday 24 November 2011 at 2pm
Title: Anomalous diffusion of a polymer in an unentangled melt.
Jean Farago (Institut Charles Sadron, Strasbourg)
Abstract: Contrary to what is usually assumed, hydrodynamic interactions are not screened in a polymer melt beyond the monomer scale, and they matter in the transient regimes, which are very long for
polymers. We show that visco-hydrodynamic interactions are responsible for a universal anomalous dynamics that extends up to the Rouse time (~N²); the diffusion of the center of mass of a tagged
polymer is accelerated, relative to the prediction of the Rouse model, by a substantial amount that grows with the size of the polymer. We have developed an analytical theory of these viscoelastic
effects that compares quantitatively with numerical simulations, with no adjustable parameter.
This theory also carries over to the case of dynamics stabilized by a Langevin thermostat (widely used in numerical simulations), which, unexpectedly, strongly affects the long-time relaxational
dynamics of the center of mass, although the latter remains strongly accelerated. This problem also allows a comparison of different theoretical approaches (linear response, fluctuating hydrodynamics
and mode coupling), which in this particular case give complementary insights into the phenomenon under study and similar results.
Thursday 17 November 2011 at 2pm (room 116!)
Title: Strong back-action of a linear circuit on a single electronic quantum channel
Anne Anthore (Laboratoire de Photonique et Nanostructures and Université Denis Diderot)
Abstract: How are the transport properties of a coherent conductor modified by its surrounding circuit? This fundamental question is also of practical importance for the engineering of composite
quantum devices. When a coherent conductor is inserted into a circuit, its conductance is reduced due to the circuit back-action in response to the granularity of charge transfers. This phenomenon,
called dynamical Coulomb blockade, has been extensively studied for a tunnel junction. However, for arbitrary short coherent conductors, it is fully understood only in the limit of small conductance
reductions and low-impedance surrounding circuits.
We have investigated experimentally the strong back-action regime of a linear circuit on a single electronic conduction channel of arbitrary transmission. This was achieved by using a quantum point
contact (QPC) as a test-bed for coherent conductors. The QPC was embedded in an adjustable on-chip circuit of impedance comparable to the resistance quantum R_K = h/e² at microwave frequencies,
leading to conductance reductions of up to 90%. An in-situ short-circuit technique allows us to extract the back-action signal in the most direct way, by probing the QPC conductance in the presence
and in the absence of the circuit back-action.
From our results, we propose a generalized expression for the conductance of an arbitrary single quantum channel embedded in a linear environment. The proposed expression is in good agreement with
recent predictions derived for a purely ohmic environment.
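For scale, the resistance quantum the circuit impedance is compared against is R_K = h/e² ≈ 25.8 kΩ. A minimal numeric check (a sketch; the exact SI values of h and e below are assumptions of this snippet, not quoted in the abstract):

```python
# Resistance quantum R_K = h/e^2, the impedance scale the QPC circuit is
# compared against. h and e carry their exact SI (2019 redefinition) values.
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

R_K = h / e**2
print(f"R_K = h/e^2 = {R_K:.1f} ohm")  # about 25.8 kOhm
```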
Thursday 3 November 2011
Title: Electron-phonon coupling in topological insulators
S. Giraud (Dusseldorf)
Abstract: Topological insulators are materials that are insulating in the bulk but allow charge transport at their boundaries. Their recent discovery, first in two dimensions in HgTe quantum wells,
then in three dimensions in a class of materials such as Bi2Se3 or Bi2Te3, has opened the way to many applications, notably in spintronics and quantum information. In parallel, several new physical
phenomena have been predicted. The existence of these boundary states is protected by a topological invariant characteristic of the bulk band structure, and an essential property of these materials
is the complete absence of backscattering in two dimensions, or a drastic reduction of the phase space of scattering states in three dimensions. Up to now, topological insulators have been studied
without interactions, which was justified by the topological protection of the boundary states. However, the situation remains poorly understood, all the more so as strong interactions are expected
in some cases. Using a low-energy effective theory for the surface state of 3D topological insulators, we have recently analyzed the consequences of electron-phonon coupling. We have thus been able
to predict various physical quantities, such as the quasiparticle lifetime and the surface resistivity, which are in very good agreement with the first experimental results.
Thursday 27 October 2011
Title: Quantum Hamiltonians for the SU(2) WZW model.
German Sierra (Institute of Theoretical Physics CSIC-UAM, Madrid)
Abstract: How to obtain generalizations of the Haldane-Shastry chains from conformal field theory.
Thursday 20 October 2011 at 2pm (room 115!)
Title: Integrable structures of Calogero-Sutherland type in conformal field theories; applications to the fractional quantum Hall effect
B. Estienne (Instituut voor Theoretische Fysica, Universiteit van Amsterdam)
Abstract: Since its introduction 40 years ago, the Calogero-Sutherland model has attracted much attention in theoretical physics. A recent development concerns the link between this integrable system
and 2d conformal field theories. In particular, conformal blocks can be used to construct Calogero-Sutherland eigenstates with non-trivial monodromies. The exchange of particles is described by a
non-trivial representation of the braid group: such particles are called non-Abelian anyons. In condensed matter, these wave functions appear in the description of the (quasi-hole) excitations of
the fractional quantum Hall effect.
I will begin by describing the link between Calogero-Sutherland and conformal field theories, as well as the underlying integrable structure. I will then discuss the consequences for the wave
functions of the fractional quantum Hall effect: the duality between electrons and quasi-holes, and applications to numerical simulations.
Thursday 13 October 2011 at 2pm
Title: Spin Hall effect at interfaces between topological insulators and metals
Marine Guigou (Univ. Wurzburg)
Abstract: The spin Hall effect (SHE) is a physical phenomenon realized in nonmagnetic systems that allows a transverse spin current to be generated when an electrical charge current is driven in the
longitudinal direction. This can happen due to impurity scattering [Hir99], called the extrinsic SHE, or due to band structure effects [Sin04], called the intrinsic SHE. Both cases have been experimentally
observed: the extrinsic SHE in semiconductor heterostructures and the intrinsic SHE in HgTe/CdTe heterostructures by combining the SHE with the so-called quantum spin Hall effect (QSHE) in a single
device [Bru10]. The QSHE is yet another type of spin Hall effect that exists at the boundary of a two-dimensional topological insulator realized in HgTe/CdTe quantum wells [Ber06,Kon07]. In this
presentation, we will first show the existence of a new type of interface SHE at junctions between quantum spin Hall insulators and metals (normal and superconducting ones). This new type of SHE is
intimately related to the coexistence of propagating and evanescent modes at the interface between a QSHI and a metal. Interestingly, this happens in the absence of structure and bulk inversion
asymmetry within each subsystem. Secondly, we build on these findings to propose a device for all-electric spin injection into normal metal leads [Gui11].
[Ber06] B.A. Bernevig, T. L. Hughes, and S.C. Zhang, Science 314, 1757 (2006).
[Bru10] C. Bruene, A. Roth, E.G. Novik, M. Koenig, H. Buhmann, E.M. Hankiewicz, W. Hanke, J. Sinova, and L.W. Molenkamp, Nature Physics 6, 448 (2010).
[Hir99] J.E. Hirsch, Phys. Rev. Lett. 83, 1834 (1999).
[Kon07] M. Koenig, S. Wiedmann, C. Bruene, A. Roth, H. Buhmann, L.W. Molenkamp, X.L. Qi, and S.C. Zhang, Science 318, 766 (2007).
[Sin04] J. Sinova, D. Culcer, Q. Niu, N.A. Sinitsyn, T. Jungwirth and A.H. MacDonald, Phys. Rev. Lett. 92, 126603 (2004).
[Gui11] M. Guigou, P. Recher, B. Trauzettel, and J. Cayssol, arXiv:1102.5066.
Wednesday 12 October at 2pm in room 115 (special session)
Title: Generalised geometry and E11
Peter WEST (King's College, London)
Abstract: I will explain how generalized geometry was contained in the E11 proposal and discuss some recent applications.
Monday 10 October at 1:30pm in room 115 (special session)
Title: Long-lived qubits in atomic systems
Hui-Khoon NG (Center for Quantum Technologies, National University of Singapore)
Abstract: In this talk, I will present a scheme for qubits stored in clusters of three atoms that are long-lived against decoherence from fluctuating magnetic fields, a limiting source of noise in
many experiments. Each qubit is stored in a rotationally invariant subsystem of the total angular momentum states of the three atoms, and can persist for time-scales on the order of hours, compared
to milliseconds for an unprotected qubit. I will first present the theoretical scheme of rotationally invariant subsystems in atomic systems, and then move on to discuss current work at CQT on an
experimental scheme to demonstrate the persistence of the qubit. This includes methods for state preparation via Rydberg blockade, state tomography via light scattering, as well as novel techniques
for state estimation with sparse data.
Monday 10 October at 3pm in room 115 (special session)
Title: The classical integrable structure of AdS/CFT
Benoit VICEDO (DESY, Hamburg)
Abstract: I will present the classical integrable structure which underlies all known integrable superstring theories on AdS spaces with CFT duals. It is described by a standard r-matrix on the
underlying twisted loop algebra. However, the latter is equipped with a non-standard inner product which encodes the non-ultralocality of the sigma-model.
Thursday 6 October 2011 at 2pm
Title: Generalized textures for the integer quantum Hall effect and collective modes
Benoit Doucot (LPTHE, université Pierre et Marie Curie)
Abstract: There exist several two-dimensional systems in which the electrons carry, in addition to their spin, another internal degree of freedom associated with a finite number of quantum states.
One can think, for example, of bilayers of two-dimensional electron gases, or of graphene, with its two branches of linear excitations at the edge of the first Brillouin zone. In the presence of a
strong magnetic field, the kinetic Hamiltonian is quantized into highly degenerate Landau levels. When the electron density corresponds to exact filling of the lowest of these levels, the Coulomb
repulsion induces exchange processes that select a ferromagnetic ground state. But if the filling factor differs slightly from unity, this magnetic order is replaced by a lattice of textures called
Skyrmions. I will show how to extend the construction of such states when the spin 1/2 of the electron is replaced by an internal degree of freedom with $d$ distinct states. I will then show how the
collective excitation modes in the vicinity of these generalized textures can be obtained within a time-dependent Hartree-Fock approach. This work stems from a collaboration with Roderich Moessner
and Dmitri Kovrizhin (MPI Dresden).
Thursday 22 September 2011 at 2pm (room 115!)
Title: New results inspired by quantum gravity: topological models and statistical systems coupled to random lattices
Valentin Bonzom (Perimeter Institute)
Abstract: In this talk, I will present an overview of results that have appeared this year. While they are inspired by quantum gravity issues, they have a pretty broad scope. I first focus on a
new form of a generalized Kitaev Hamiltonian for topological order, which led to a new way of solving 2+1 quantum gravity as well as to new semi-classical formulae for re-coupling of quantum angular
momenta. Those systems are based on fixed lattices. The second part is instead dedicated to statistical systems coupled to random lattices in dimension three and higher, an exciting field which
generalizes matrix models.
Thursday 30 June 2011 at 2pm
PhD defense: Electron quantum optics
Charles Grenier (ENS Lyon)
Abstract: Advances in nanofabrication techniques over the last ten years have made it possible to implement protocols for manipulating single charges in nanostructures. These new techniques make it
possible to envisage carrying out quantum optics experiments with electrons. This thesis is set in that context.
The goal of this work was the construction of a formalism suited to the description of such experiments. This formalism, built in analogy with Glauber's theory of the quantum coherence of the
electromagnetic field, highlights the similarities and differences between photons propagating in vacuum and electronic transport in one-dimensional ballistic conductors. In particular, it accounts
for the decoherence and the energy relaxation of electronic excitations in the presence of interactions. Another aspect of this thesis was the proposal of protocols for measuring quantities directly
related to the coherence properties described by the electron quantum optics formalism. In particular, a quantum tomography protocol based on the Hanbury Brown and Twiss effect was proposed to
reconstruct the single-body coherence of a source of single-electron excitations. This protocol can also be envisaged as a way to obtain information on decoherence mechanisms.
Committee: Markus Buttiker, Pascal Degiovanni, Christian Glattli, Frank Hekking, Peter Holdsworth.
Tuesday 28 June at 2pm (room 117)
Title: The AdS(5) x S(5) semi-symmetric space sine-Gordon theory
Speaker: J. Luis MIRAMONTES (Universidad de Santiago de Compostela, Espagne)
Abstract: The motion of strings on (semi)symmetric space target spaces underlies the integrability of the AdS/CFT correspondence. Although the relevant theories, whose excitations are giant magnons,
are non-relativistic, they are classically equivalent, via the Pohlmeyer reduction, to a family of 2-d relativistic integrable field theories known as (semi-)symmetric space sine-Gordon (S-SSSG)
theories. Moreover, it has been conjectured that this equivalence could extend to the full quantum theory. In this talk I will review the main features of the AdS_5 x S^5 S-SSSG theory, including
the semiclassical quantization of its soliton spectrum, which has recently been sorted out in arXiv:1104.2429. It exhibits supersymmetry and leads to a natural conjecture for its S-matrix.
Thursday 23 June 2011 at 2pm
Title: Quench Dynamics in Interacting one-dimensional systems
Aditi Mitra (New York University)
Abstract: Due to experiments in cold-atomic gases, the problem of quench dynamics, i.e. the unitary time evolution of an interacting quantum system following a sudden change in system parameters,
has become a topic of great current interest. Fundamental questions, such as whether systems thermalize at long times after a quench, what the time-scale associated with thermalization is, and the
role played by integrability and system size are still largely open and a frontier of current research.
In this talk I will present results for the time-evolution of some 1-dimensional models and first show how various non-thermal steady states can arise at long times, at least for simple integrable
models such as the XX spin-chain and the Luttinger liquid. Next I will address the issue of how stable these non-equilibrium steady states are to non-trivial interactions or mode-coupling. Employing
analytic approaches such as perturbative renormalization group and random-phase-approximation, I will show that even infinitesimally weak interactions or mode-coupling generate a dissipation (and
hence a finite lifetime for the modes), and also a finite temperature. However the notion of the temperature even with interactions can be quite subtle as it can depend in a non-trivial way on both
the frequency as well as the momenta of the modes.
Thursday 16 June 2011 at 2pm
Habilitation (HDR) defense: Statistical mechanics of systems with long-range interactions and of geophysical turbulent flows
Freddy Bouchet
Abstract: Devising theories for the dynamics of the largest scales of turbulent flows, and their applications to geophysical problems is a current challenge in statistical mechanics. It turns out
that vortices in two-dimensional flows, planetary atmospheres, and large-scale ocean flows interact non-locally. These long-range interactions lead to peculiarities in the statistical mechanics of
those systems: exotic phase transitions, negative heat capacity (the temperature increases when the energy is decreased), self-organized dynamical behavior leading to anomalous diffusion and long
relaxation times, and so on. During the last ten years, the foundations of the statistical mechanics of systems with long-range interactions have been clarified and completed through the joint
efforts of a large community. We will present contributions to this domain, including a classification of phase transitions, prediction and observation of generic anomalous diffusion, explanation
of generic
a large community. We will present contributions to this domain including a classification of phase transitions, prediction and observations of generic anomalous diffusion, explanation of generic
ergodicity breaking, construction of microcanonical measures for the 2D Euler and Vlasov equations.
These theoretical advances have found applications in astrophysics, plasma physics and cold-atom physics. We will emphasize applications to two-dimensional and geophysical turbulence: predictions
and experimental observation of non-equilibrium phase transitions and statistical ensemble inequivalence in two-dimensional turbulence, explanation of the structure and the drift properties of ocean
vortices, and characterization of the long time inviscid dissipation of two-dimensional flows. We will also discuss challenges in this domain, mainly the development of the non equilibrium
statistical mechanics of far from equilibrium turbulent flows.
Committee: Bernard Castaing, Cristel Chandre, Henk Dijkstra, Krzysztof Gawedzki, David Mukamel, Sergey Nazarenko, Cédric Villani
Tuesday 14 June 2011 at 2pm (room 116)
Title: Towards the quantum S-matrix of the Pohlmeyer reduced form of AdS_5 x S^5 superstring theory
Benjamin HOARE (Imperial College, Londres).
Abstract: The Pohlmeyer reduced form of AdS_5 x S^5 superstring theory is a classical reformulation of the original superstring theory. Of particular interest is whether this equivalence extends
beyond the classical level -- one way of investigating this is to construct the S-matrix of the reduced theory and compare to that of the superstring theory. In this talk we will review the
construction of the reduced theory and, using perturbation theory, integrability, symmetry and analogies with various truncations, propose an exact quantum S-matrix for the Lagrangian-field
excitations of the reduced theory.
Wednesday 8 June 2011 at 2pm (theory colloquium, unusual time)
Title: Extreme Events in Turbulence, Populations and Finance
M.H. Jensen (Niels Bohr Institute, University of Copenhagen)
Abstract: Many phenomena in nature and society are governed by extreme events. Extreme events refer to very abrupt changes with statistics that often do not follow normal distributions. Instead,
they show "heavy/fat" tails, indicating that large and extreme events have a much higher probability than one would expect from normal statistics [1]. We study extreme events in turbulence,
populations and finance by means of forward statistics, measuring typical systemic changes over a specified time, resulting in the heavy tails. We further invoke threshold dynamics, or "inverse"
statistics, by prescribing a specific event size and estimating the length of time before the first occurrence of this event [2]. In the case of population dynamics the extreme events cause
increased species competition and shorter fixation times [3]. For financial time series, inverse statistics leads to an observation of an optimal investment horizon and an asymmetry in the market
between gains
and losses [4].
[1] T. Bohr, M.H. Jensen, G. Paladin and A. Vulpiani, "Dynamical Systems Approach to Turbulence", Cambridge University Press (1998).
[2] M.H. Jensen, "Multiscaling and Structure Functions in Turbulence: An Alternative Approach", Phys. Rev. Lett. 83, 76 (1999).
[3] S. Pigolotti, R. Benzi, M.H. Jensen and D.R. Nelson, "Population genetics in compressible flows", preprint (2010).
[4] R. Donangelo, M.H. Jensen, I. Simonsen and K. Sneppen, "Synchronization Model for Stock Market Asymmetry", J. Stat. Mech. 11, L11001 (2006).
Wednesday 1 June 2011 at 2pm (theory colloquium, unusual date)
Title: Strong correlations in the Landau levels of graphene -- from ferromagnetism to the fractional quantum Hall effect
Marc Goerbig (Université Paris Sud)
Abstract: The discovery of a "relativistic" quantum Hall effect in graphene in 2005 showed that the low-energy electrons in this two-dimensional (2D) material are described by a Dirac equation
(for massless fermions) rather than by a Schrödinger equation. A natural question to ask is the following: what about the fractional quantum Hall effect? Or, more generally: what about
interactions in the Landau levels? Do they also reflect the ultra-relativistic behaviour of the electrons in graphene? This seminar attempts to illustrate these questions from a theorist's
point of view, while taking into account recent experimental progress on the subject.
Thursday 19 May 2011 at 2pm
Title: Quantum Hall transitions and conformal restriction
Ilya Gruzberg (Univ. Chicago)
Abstract: Disordered electronic systems exhibit continuous quantum phase transitions between insulating and conducting phases (Anderson transitions). The nature of the critical state at and the
critical phenomena near such a transition are of great current interest. A famous example is the integer quantum Hall (IQH) plateau transition. In spite of much effort over several decades, an
analytical treatment of most of the critical conducting states in disordered electronic systems has been elusive. We propose to use the recently developed rigorous theory of conformal restriction and
Schramm-Loewner evolutions to study the IQH and other Anderson transitions in two dimensions, assuming conformal invariance at these critical points. We consider the so-called point contact
conductances (PCC) and obtain, for the first time, exact analytical results for PCC's in the presence of a variety of boundary conditions at the IQH and similar critical points.
Thursday 12 May 2011 at 2pm
Title: Bose-Fermi mixtures in a disordered 1D potential
Francois Crépin (LPS Orsay)
Abstract: One-dimensional systems allow an in-depth study of the combined effects of disorder and interactions on the localization of a gas of fermions or bosons [1]. While non-interacting
fermions are invariably localized by a random potential, a transition to a superfluid phase occurs for sufficiently attractive interactions. Analogously, strong repulsive interactions cause the
localization of a Bose gas [1,2].
In this seminar I will present a study of a mixture of interacting spinless bosons and fermions in a random potential. Superfluid correlations are enhanced by the inter-species interactions,
while the disorder potential tries to pin each component of the gas. Using renormalization-group methods, supplemented by a variational calculation, we have established the existence of several
phases, localized or superfluid, including a new insulating phase, analogous to the Bose glass, in which the two species are localized and interacting [3]. The calculation of the dynamical
structure factor, for parameters typical of certain cold-atom experiments, reveals signatures of each of these phases, which could typically be observed by Bragg scattering.
[1] T. Giamarchi and H.J. Schulz, Phys. Rev. B 37, 325 (1988)
[2] M. P. A. Fisher et al., Phys. Rev. B 40, 546 (1989)
[3] F. Crépin, G. Zaránd, P. Simon, Phys. Rev. Lett. 105, 115301 (2010)
Thursday 21 April 2011 at 2pm
Title: Quantum spin glass at T=0
Alexei Andreanov (ICTP de Trieste)
Abstract: Quantum spin glasses are glassy systems in which both thermal and quantum fluctuations can drive the transition into a glassy phase. Since classical spin glasses turned out to be
complicated and interesting models, the questions of interest are: how do quantum fluctuations affect the glass phase -- can they destroy it? How are the properties of the glass affected? What
is the spectrum of excitations like in the glass phase? We have studied a simple quantum spin glass, the transverse-field Ising spin glass [2,3], where the strength of quantum fluctuations is
tuned by a transverse field. There are a paramagnetic and a glassy phase [2,3]. The quantum phase transition at T=0 is well established; however, little is known about the glass phase dominated
by quantum fluctuations. We have shown that the entire glass phase is critical and gapless, giving rise to specific low-lying excitations which can be interpreted as collective oscillators. For
small transverse fields we have also discovered a fixed point in the Parisi flow identical to that unveiled by Pankov in classical spin glasses [4].
[1] A. Bray and M. Moore, J. Phys. C 13, L655 (1980).
[2] J. Miller and D. Huse, Phys. Rev. Lett. 70, 3147 (1993).
[3] M.J. Rozenberg and D.R. Grempel, Phys. Rev. Lett. 81, 2550 (1998).
[4] S. Pankov, Phys. Rev. Lett. 96, 197204 (2006).
Thursday 14 April 2011 at 2pm
Title: Relaxation rates of hot electrons in quantum wires
Zoran Ristivojevic (ENS Paris)
Abstract: Due to constraints imposed by the conservation laws, relaxation of hot electrons in quantum wires is suppressed if one considers two-body processes. A finite relaxation time can be
achieved by three-body collisions. We will consider the cases of screened and unscreened Coulomb interaction and derive the corresponding relaxation rates, which behave as power laws of temperature. We will
also discuss the role of three-body collisions for interaction induced corrections to conductance and thermopower in quantum wires.
Thursday 7 April 2011 at 2pm
Title: Projective symmetry group approach of chiral phases
Laura Messio (EPFL)
Abstract: Quantum spin liquids are generally opposed to classical Néel states. In Schwinger boson mean-field theory (SBMFT), the spin S is a continuous parameter and one can study connections
between spin liquids and Néel states. Mean-field Ansätze respecting the lattice symmetries can be selected using projective symmetry groups (PSG) (Wen, PRB 2002 and Wang et al., PRB 2006). They lead
to phases which can be either spin liquids if the gap is non-zero (small S), or magnetized states (large S) with Goldstone modes. These Ansätze are labelled by gauge-invariant quantities called
fluxes. We present an adaptation of the PSG approach to the classical spin limit and apply it to the triangular and kagome lattices. This helps us sort states that break spin-rotational symmetry. We show
that well-known chiral classical states are excluded in the original approach of Wen and Wang. By relaxing constraints, however, we are able to obtain three-dimensional chiral Néel states which, under
the effect of quantum fluctuations, turn into chiral spin liquids breaking time-reversal symmetry. We propose models where these states could be ground states.
Thursday 31 March 2011 at 2pm
Title: Ultrastrong coupling cavity QED in solid-state systems
Cristiano Ciuti, (Matériaux et Physique Quantique, Université Denis Diderot, Paris)
Abstract: This talk will be devoted to cavity and circuit quantum electrodynamics (QED) in the ultrastrong coupling regime. Such an unconventional limit is achieved when the vacuum Rabi frequency
(quantifying the light-matter interaction) is comparable to or larger than the transition frequency of the two-level system coupled to the bosonic field of a resonator. The ultrastrong coupling regime is being
explored both theoretically and experimentally in semiconductor and superconducting systems[1]. After an introduction, here we will describe theoretically the quantum properties of a chain of
Josephson atoms in a transmission line resonator, both in the case of inductive [2] and capacitive [3] coupling with the resonator field. Predictions and constraints will be presented for the
occurrence of quantum phase transitions with the appearance of a doubly degenerate vacuum (ground state). The robustness and protection of the vacuum degeneracy and the manipulation of quantum
information [4] stored in a "vacuum" qubit will be presented.
[1] For semiconductors, see C. Ciuti, G. Bastard, I. Carusotto, Phys. Rev. B 72, 115303 (2005); C. Ciuti, I. Carusotto, Phys. Rev. A. 74, 033811 (2006); G. Günter et al. Nature 458, 178 (2009); Y.
Todorov et al., Phys. Rev. Lett. 105, 196402 (2010).
In the case of superconducting circuits, see e.g. : M.H. Devoret, S.M. Girvin, R.J. Schoelkopf, Ann. Phys. 16, 767 (2007); T. Niemczyk et al., Nature Physics 6, 772-776 (2010).
[2] P. Nataf, C. Ciuti, Vacuum degeneracy of a circuit-QED system in the ultrastrong coupling regime, Phys. Rev. Lett. 104, 023601 (2010) and references therein.
[3] P. Nataf, C. Ciuti, No-go theorem for superradiant quantum phase transitions in cavity QED and counter-example in circuit-QED, Nat. Commun. 1, 72 (2010) and references therein.
[4] P. Nataf, C. Ciuti, Protected quantum computation with multiple resonators in ultrastrong coupling circuit QED, submitted.
Wednesday 23 March 2011 at 2pm
Title: From Rotating Atomic Rings to Quantum Hall States
Matteo Rizzi (Garching)
Abstract: Considerable efforts are currently devoted to the preparation of ultracold neutral atoms in the emblematic strongly correlated quantum Hall regime. The routes followed so far essentially
rely on thermodynamics, i.e. imposing the proper Hamiltonian and cooling the system towards its ground state. In rapidly rotating 2D harmonic traps the role of the transverse magnetic field is played
by the angular velocity. The needed angular momentum is very large and can be obtained only for rotation frequencies extremely close to the deconfinement limit; consequently, the required
control over the experimental parameters turns out to be far too stringent.
Here we propose to follow instead a dynamic path starting from the gas confined in a rotating ring. The large moment of inertia of such geometry facilitates the access to states with a large angular
momentum, corresponding to a giant vortex. The trapping potential is then adiabatically transformed into a harmonic confinement, which brings the interacting atomic gas in the desired quantum Hall
regime. We provide clear numerical evidence that for a relatively broad range of initial angular frequencies, the giant vortex state is adiabatically connected to the bosonic $\nu=1/2$ Laughlin
state, and we discuss the scaling to many particles and the robustness against experimental defects.
Thursday 10 March 2011 at 2pm
Title: Interaction induced hierarchy of non-equilibrium locking
Masud HAQUE (MPIPKS, Dresden)
Abstract: For 1D interacting lattice systems, I will present geometry-induced structures in the energy spectrum. A dramatic series of dynamics-suppression effects arises due to these spectral structures.
I will show versions of the phenomenon for three classic condensed-matter models: (1) the Bose-Hubbard model; (2) the spinless fermion model with nearest-neighbor repulsion; (3) the XXZ spin chain.
Thursday 24 February 2011 at 2pm
Title: Fractionalization in three-component fermionic atomic gases in a one-dimensional optical lattice.
Patrick Azaria (LPTMC, Université Pierre & Marie Curie)
Abstract: We study a three-component fermionic gas loaded into a one-dimensional optical trap at half filling. We find that the system is fully gapped and may order into 8 possible phases: four 2kF
density-wave and spin-Peierls phases with all possible π phase shifts between the three species. We find that trionic excitations are unstable towards decay into pairs of kinks carrying a
fractional number (3/2) of atoms. These sesquions eventually condense upon small doping and are described by a Luttinger liquid. We finally discuss the phase diagram of a three-component mixture made
of three hyperfine levels of Li6 as a function of the magnetic field.
Thursday 17 February 2011 at 2pm
Title: Non-equilibrium dynamics of interacting fermions: From quantum spin chains to fermions in optical lattices
Fabian Heidrich-Meisner (Ludwig-Maximilians-Universitaet, Munich)
Abstract: The non-equilibrium properties of interacting fermions are attracting many theorists' attention. Besides the interest in thermalization and relaxation processes in quantum quenches and the
study of steady-state problems, there is a group of experiments in which net currents are finite while a stationary regime is typically not reached. I will discuss two examples: first, the energy and
spin dynamics in quantum spin chains and ladders and second, the sudden expansion of interacting fermions. In the former example we study the expansion of energy or spin-density wave packets, aiming
to classify the dynamics as either ballistic or diffusive. We further determine the range of validity of effective low-energy descriptions of the non-equilibrium dynamics of lattice models. In
the second example, we study the expansion of interacting fermions in an optical lattice, after quenching the trapping potential to zero. Here we are particularly interested in the dependence on
initial conditions such as filling [2] and the behavior of correlations during the expansion [3].
• [1] Langer, HM, Gemmer, McCulloch, Schollwoeck, Phys. Rev. B 79, 214409 (2009)
• [2] HM, Rigol, Muramatsu, Feiguin, Dagotto, Phys. Rev. A 78, 013620 (2008)
• [3] HM, Manmana, Rigol, Muramatsu, Feiguin, Dagotto Phys. Rev. A 80, 041603(R) (2009)
Thursday 10 February 2011 at 2pm
Title: Quantum deformations of spin foam models
Winston Fairbairn (University of Hamburg)
Abstract: Invariants of topological manifolds that are based on the representation theory of quantum groups play an important role in mathematical physics. In particular, they are of interest to
quantum gravity, where they are known under the name of spin foam models. Compared to the "classical" models which are based on the representation theory of Lie groups, the q-deformed models
constructed upon the representation categories of quantum groups offer several advantages, the most important one being the improvement of the convergence properties.
In this talk, I will discuss q-deformations of spin foam models in three and four space-time dimensions. I will firstly review the derivation of the Ponzano-Regge model of 3d quantum gravity from a
physical perspective and present the Turaev-Viro invariant as a natural regulator of its divergences. I will then discuss analogue constructions in four dimensions and present recent results
concerning the q-deformation of a certain constrained topological model.
Thursday 3 February 2011 at 2pm (Colloquium)
Title: Cavity QED in solid state physics: new insights and potential for quantum devices.
Alexia Auffèves (Institut Néel, Grenoble)
Abstract: Thanks to technological progresses in the field of solid-state physics, a wide range of quantum optics experiments previously restricted to atomic physics can now be implemented using
solid-state emitters and cavities. Still, so-called artificial atoms are subjected to intrinsic decoherence processes that broaden the emission linewidth, making them very different from isolated
atoms. At the same time, very high quality factors are achieved for state-of-the-art cavities. These new conditions open a so-far unexplored regime for cavity quantum electrodynamics (CQED),
where the emitters' linewidth can be of the same order of magnitude as, or even broader than, that of the cavity mode. In this talk, I will focus on two different realizations of this situation.
First, I will consider the coupling of a high-Q cavity to a single homogeneously broadened emitter, which can safely model a quantum dot (QD) coupled to a semiconducting optical cavity. In that case,
unusual phenomena can happen. In particular, we have shown [1] that photons spontaneously emitted by a QD coupled to a detuned cavity can be efficiently emitted at the cavity frequency, even if the
detuning is large; whereas if the QD is continuously pumped, decoherence can induce lasing [2]. These effects clearly show that decoherence, far from being a drawback, is a fundamental resource in
solid-state cavity quantum electrodynamics, offering appealing perspectives in the context of advanced nano-photonic devices.
In a second part, I will present recent results where we investigated theoretically the coupling of a cavity mode to a continuous distribution of emitters, paying attention to the influence of
inhomogeneous broadening on the existence and the coherence properties of the polaritonic peaks. We found that their coherence depends crucially on the shape of the distribution and not only on its
width. Under certain conditions the coupling to the cavity protects the polaritonic state from inhomogeneous broadening, resulting in a longer storage time for a quantum memory based on emitters
ensembles. When two different ensembles of emitters are coupled to the resonator, they support a peculiar collective dark state, also very attractive for the storage of quantum information.
[1] "Pure emitter's dephasing : a resource for advanced single photon sources", A. Auffèves, J.M. Gérard and J.P. Poizat, PRA 79, 053838 (2009).
[2] "Controlling the dynamics of a coupled atom-cavity system by pure dephasing : basics and applications in nanophotonics", A. Auffèves, D. Gerace, J.M. Gérard, M.P. França Santos, L.C. Andreani and
J.P. Poizat, PRB 81, 245419 (2010).
[3] " Strongly coupling a cavity to inhomogeneous ensembles of emitters : potential for long lived solid-state quantum memories", I. Diniz, S. Portolan, R. Ferreira, J.M. Gérard, P. Bertet and A.
Auffèves, arXiv:0176967.
Monday 31 January 2011, 2:00pm, room 117
Title: Fusion of line operators and quantum integrability in conformal sigma models on supergroups.
Raphael BENICHOU (Vrije Universiteit Brussel and the International Solvay Institutes)
Abstract: I will present recent progress in the understanding of two-dimensional sigma-models on the supergroup PSl(n|n). I will emphasize the relevance of these models to study quantum integrability
in the AdS/CFT correspondence. In particular I will explain the computation of the fusion of some line operators, the transfer matrices, that encode an infinite number of conserved charges. This
computation leads to a first-principle, perturbative derivation of the Hirota equation, which has been argued to provide a solution to the spectrum problem in N=4 SYM.
Thursday 27 January 2011 at 2pm
Title: Probing quasiparticle states in strongly interacting atomic gases
Tung-Lam Dao (Institut d'Optique, Orsay)
Abstract: We investigate a momentum-resolved Raman spectroscopy technique which is able to probe the one-body spectral function and the quasi-particle states of a gas of strongly interacting
ultracold atoms [1]. This technique is inspired by Angle-Resolved Photo-Emission Spectroscopy, a powerful experimental probe of electronic states in solid-state systems. A very good agreement is
found with recent experimental data for the study of the BEC-BCS crossover [2,3]. We also discuss direct applications of the Raman spectroscopy technique to recent experiments on interacting
fermionic atoms loaded into an optical lattice. This technique can be used to detect the temperature of a weakly interacting Fermi gas in the experimentally relevant temperature regimes.
Additionally, we show that a similar spectroscopy scheme can be used to obtain information on the quasiparticle properties and Hubbard bands of the metallic and Mott-insulating states of interacting
fermionic spin mixtures. These two methods provide experimentalists with novel probes to accurately characterize fermionic quantum gases confined to optical lattices [4].
• [1] T-L. Dao et al., Phys. Rev. Lett. 98, 240402 (2007)
• [2] T-L. Dao et al., Phys. Rev. A. 80, 023627 (2009)
• [3] J. T. Stewart et al., Nature 454, 774 (2008).
• [4] J-S. Bernier et al., to be published in Phys. Rev. A.
Thursday 20 January 2011 at 2pm (Colloquium)
Title: Dark matter : from astrophysics and cosmology to the LHC
Geneviève Bélanger (LAPPTH, Annecy)
Abstract: There is strong evidence from astrophysics and cosmological measurements that most of the matter that constitutes the universe is dark. Early indications of the presence of dark matter from
observations of galaxies were confirmed in the last few years with in particular a precise extraction of the relic density of dark matter from WMAP measurements. This has stimulated numerous direct
and indirect searches for dark matter and has fuelled theoretical speculations on the nature of dark matter. The favoured explanation for dark matter is to postulate a new neutral stable and weakly
interacting particle such as the ones found in extensions of the standard model.
In this talk I will present the current status for dark matter searches and will discuss how the LHC will further probe various dark matter candidates.
Thursday 13 January 2011 at 2pm
Title: The Bose-glass phase transition in the 1-D mean-field limit
Vincenzo Savona (Laboratory of Theoretical Physics of Nanosystems, EPFL, Switzerland)
Abstract: A one-dimensional system of noninteracting bosons in the presence of disorder is always in an insulating state at zero temperature. Interactions can induce a quantum phase transition to a
superfluid state, characterized by quasi long-range order. This Bose-glass to superfluid transition has been the subject of intense theoretical studies and of several recent experiments carried out
on ultracold atomic clouds.
I will present a theoretical study of the quantum phase transition of the 1D disordered Bose gas in the mean field regime, based on the extended Bogolyubov model for a quasicondensate. In this
context, I will derive the phase diagram in the interaction-disorder plane (U,D) by inspecting the long-range behaviour of the one-body density matrix as well as the drop in the superfluid
fraction. It turns out that the phase boundary between the quasicondensed and Bose-glass phases is marked by two different algebraic relations. These can be analytically explained as two limiting
behaviours of the Bose gas: the white-noise limit, where D ~ U^(3/4), and the Thomas-Fermi regime, where D ~ U.
The in-situ density profile is perhaps the feature of an ultracold atomic cloud that can be most easily measured in an experiment. I will show that a direct link exists between the fragmentation of
the density profile and the occurrence of the phase transition. This link is given by the probability distribution of the gas density. In particular, the appearance of a superfluid fraction coincides
with a vanishing probability distribution in the limit of zero density, namely with the disappearance of fragmentation. This analysis sets the intuitive relation between fragmentation and insulating
behaviour into a more rigorous framework, and opens the way to the experimental detection of the phase transition.
Thursday 16 December 2010
Quantum Gravity Day
PhD defense of Maïté Dupuis (10:00am)
Title: Spin foam models for quantum gravity and semi-classical limit
Abstract: The spin foam framework is a proposal for a regularized path integral for quantum gravity, to define transition amplitudes between quantum geometry states. This covariant approach is based
on the following fact. General relativity can be seen as a topological theory (i.e. with no local degrees of freedom) plus constraints (the so-called simplicity constraints, which reintroduce
local degrees of freedom). The issue is then to implement the constraints consistently at the quantum level. I will first recall the spin foam quantization procedure and focus more particularly
on the step consisting in implementing the simplicity constraints. Then, I will present an original way, using harmonic oscillators, to impose the simplicity constraints in the context of 4d
Euclidean gravity. Another key issue is to extract semi-classical information from a given spin foam model. I will present new techniques and new results that allow us to compute semiclassical asymptotic expressions for the
transition amplitudes of 3d quantum gravity.
Habilitation (HDR) defense of Etera Livine (2:00pm)
Title: The Spinfoam Framework for Quantum Gravity
Abstract: Spinfoam models provide a path integral formalism for quantum gravity. They define quantum space-time structures, which describe the evolution in time of the spin network states for
quantum geometry derived from loop quantum gravity. These models are inspired mainly by topological field theory and by Regge calculus for discretized general relativity. Beyond the mere
construction of such models, this framework turns out to be relevant for quantum gravity phenomenology. Indeed, it is possible to recover the leading orders of the graviton propagator (that is,
Newton's law of gravity!) and to compute the quantum gravity effects on the matter dynamics, which can be interpreted in terms of non-commutative geometry.
Thursday 9 December 2010 at 2pm
Title: Duality invariance and non-renormalisation theorems in supergravity.
Guillaume BOSSARD (CPHT, Ecole Polytechnique)
Abstract: E7 duality is a symmetry of the equations of motion of N=8 supergravity. Using a non-manifestly diffeomorphism-invariant formalism, one can write an action which is manifestly E7 and
super-diffeomorphism invariant, although the super-diffeomorphisms are realised in an unconventional way. I will show the consistency of this formalism by exhibiting its quantum equivalence with
the conventional formulation, defining a regularisation scheme consistent with the quantum action principle, and proving the absence of anomaly. Since the theory can be renormalised such that it
is E7 invariant, its logarithmic divergences must be both E7 and super-diffeomorphism invariant. I will then prove that the candidate counter-terms for putative divergences at 3, 5 and 6 loops
all break E7. However, one can show the existence of a non-vanishing E7-invariant counter-term associated to a 7-loop divergence, which suggests that the theory might diverge logarithmically at
7 loops, unless a further hidden symmetry is discovered.
Thursday 25 November 2010 at 2pm
Title: Recent trends in AdS/CFT
Eoin O Colgain (Korea Institute for Advanced Study, Seoul)
Abstract: Whether one is searching for geometries dual to supersymmetric surface operators or those with potential condensed matter application, special tools are required in higher-dimensional
supergravity. We review the use of consistent truncations and G-structures in finding new solutions and comment on properties of these solutions.
Wednesday 1 December 2010, 2pm, room 115
Title: Aspects of Gauge-Strings Duality
Carlos NUNEZ (Swansea University)
Abstract: I will give a general discussion of the duality between gauge fields and strings. The focus of the talk will be to review some well-established results and to comment on some new
aspects recently discovered.
Thursday 18 November 2010 at 2pm
Title: Light-induced gauge potentials for cold atoms
Gediminas JUZELIUNAS (Institute of Theoretical Physics and Astronomy of Vilnius University)
Abstract: In the initial part of the talk we shall review schemes for producing artificial magnetic fields for cold atoms using several light beams. We discuss the possibilities to create
both Abelian and non-Abelian gauge potentials. Subsequently we shall discuss some recent studies of the effects of non-Abelian gauge potentials on cold atoms, including their quasi-relativistic
behaviour and negative reflection. We also discuss a scheme for generating non-Abelian gauge potentials for cold atoms containing three degenerate dressed states, so that their centre-of-mass
motion is described by a three-component spinor.
Thursday 4 November 2010 at 2pm
Title: Phase transitions in the quantum transport problem.
Pierpaolo Vivo (ICTP Trieste, Italy)
Abstract: Linear statistics on ensembles of random matrices occur frequently in many applications. We present a general method to compute probability distributions of linear statistics for large
matrix size N. This is applied to the calculation of the full probability distribution of the conductance and shot noise for ballistic scattering in chaotic cavities, in the limit of a large
number of open electronic channels. The method is based on a mapping to a Coulomb gas problem in Laplace space, displaying phase transitions as the Laplace parameter is varied. As a consequence,
the sought distributions generally display a central Gaussian region flanked on both sides by non-Gaussian tails, with weak non-analytical points at the junction of the two regimes.
[1] Phys. Rev. B 81, 104202 (2010)
[2] Phys. Rev. Lett. 101, 216809 (2008).
Friday 22 October 2010 at 2pm (laboratory common room)
Title: Accuracy of the Quantum Capacitor as a Single Electron Source
Mathias Albert (Université de Genève)
Abstract: Controllable single electron sources are at the forefront of current research on nano-scale electronics. Systems that generate quantized electrical currents, for example quantum capacitors
and quantum pumps, are of great interest due to their potential applications in metrology and quantum information processing as well as in basic research on single- and few-electron physics in
mesoscopic structures. Despite the experimental and theoretical advances, the accuracy at which the quantum capacitor emits electrons is still not well understood. Here we consider a conceptually
simple model of a quantum capacitor and find analytically the noise spectrum as well as the full distribution of emitted electrons (full counting statistics) and the waiting time distribution. We
find that the failure rate of the capacitor can be arbitrarily small when operated under favorable conditions. Our predictions may be tested in future experiments.
Thursday 14 October 2010 at 2pm
Title: Scaling in Single-Chain Magnets
Alessandro Vindigni (ETH Zurich, Switzerland)
Abstract: It is well known that long-range order cannot occur in 1d magnetic systems with short-range interactions. A remanent magnetization may, however, be observed in some anisotropic spin
chains due to slow dynamics. The physics of such systems, named Single-Chain Magnets, is mainly dictated by the temperature dependence of the relaxation time (tau) and the correlation length
(xi). A simple random-walk argument relates these two quantities to each other: within a time tau a domain wall performs a random walk over a distance xi. Depending on the relative strength of
the exchange interaction and the single-ion anisotropy, the relevant excitations consist either of sharp (extending over just one lattice spacing) or broad (extending over several lattice
constants) domain walls. By combining time-quantified Monte-Carlo simulations with transfer-matrix and renormalization-group calculations, we highlighted that the broad- and sharp-wall regimes
are associated with different temperature dependences i) of the correlation length and ii) of the diffusion coefficient of domain-wall motion. These findings allow us to explain the different
relationship between tau and xi reported for broad- and sharp-wall Single-Chain Magnets.
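The random-walk argument in this abstract can be summarized in one relation (a sketch only, with model-dependent prefactors omitted; tau, xi and the domain-wall diffusion coefficient D are as described above):

```latex
% A domain wall diffusing with coefficient D(T) covers a distance xi(T)
% in a time tau(T); diffusive scaling then gives
\xi(T)^2 \sim D(T)\,\tau(T)
\quad\Longrightarrow\quad
\tau(T) \sim \frac{\xi(T)^2}{D(T)} .
```

Different temperature dependences of xi and D in the broad- and sharp-wall regimes thus translate directly into different relationships between tau and xi.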
Thursday 7 October 2010 at 2pm
Title: Fluctuation relations in mesoscopic transport
Alessandro De Martino (Univ. Cologne)
Abstract: In this talk we will discuss the concept of fluctuation relations (FRs) as applied to mesoscopic transport. Fluctuation relations connect the statistics of pairs of time-reversed evolutions
of physical observables (e.g., heat, work, current, etc.) in nonequilibrium systems and thereby establish rigorous identities for their averages. We will derive general functional FRs for the current
flow induced by time-varying forces and illustrate their utility in the description of transport through systems of mesoscopic size. We will then show that under nonequilibrium conditions rare
realizations of transport observables are crucial and imply strong stochastic fluctuations around the exact averages established by the FRs. We will illustrate our general results on the paradigmatic
example of a mesoscopic RC circuit driven by time-dependent voltage.
Thursday 30 September 2010, 2 pm
Title: Anderson localization and the Anderson metal-insulator transition in cold atomic gases
Dominique Delande, (Laboratoire Kastler Brossel)
Abstract: In the presence of a random potential, the classical motion of a particle is typically diffusive, with a diffusion constant that depends on the degree of disorder. When quantum interference between different paths is taken into account, the situation can be radically different, leading in particular to a diffusive-localized transition, i.e. a metal-insulator transition, called the Anderson transition, as the amplitude of the disorder is varied. These effects are very sensitive to the preservation of quantum phase coherence and are therefore difficult to observe experimentally. Using atomic matter waves at very low temperature, manipulated by time-dependent laser fields, one can build systems whose effective dimensionality and disorder parameters can be tuned at will. In this way, Anderson localization and the metal-insulator transition have been observed. In particular, the critical exponents, the scaling laws at the critical point of the transition, and the giant fluctuations in its vicinity have been measured precisely. These studies open new perspectives for the study of quantum transport in the presence of disorder and/or interactions.
Thursday 16 September 2010, 2 pm
Title: Charge order and quantum critical behavior in layered organic conductors
Simone Fratini (Institut Néel, Grenoble)
Abstract: Low-dimensional organic conductors show a variety of electronic phases that are believed to originate from the presence of strong electronic interactions. After a brief overview on the
phase diagrams of these materials, I will focus on the charge ordering that occurs in the family of the two-dimensional theta-ET2 salts with quarter-filled electronic bands. These systems are ideally
located halfway between the strongly correlated oxides (dominated by Mott physics close to integer fillings) and the two-dimensional electron gas (2DEG), that exhibits Wigner crystallization at low
density. I will present results based on a model that accounts for both local electronic correlations and longer range Coulomb interactions responsible for charge ordering, and discuss the emergence
of unconventional phases and their possible relevance to experiments.
Week of Thursday 24 June 2010
No seminar (thesis defenses).
Wednesday 23 June 2010
Thesis defense of Arnaud Le Diffon
Tuesday 22 June 2010, 2 pm
Thesis defense of Guillaume Paulin: Electronic Transport and Spin Glasses
Abstract: The work described in this thesis contributes to the condensed-matter physics of disordered systems, to mesoscopic physics on the one hand and to spin-glass physics on the other. The first part of the thesis studies numerically coherent electronic transport in a non-magnetic metal doped with frozen magnetic impurities (a spin glass at low temperature). Using a recursive code that computes the two-terminal conductance of the system, we study in detail the metallic regime of conduction (characterized by a high conductance) as well as the insulating regime (low conductance). In both regimes, universal behaviors of the system are brought to light. Furthermore, a study of the conductance correlations for two different configurations of the impurity spins makes it possible to relate these correlations to the correlations between spin configurations (called the overlap). This study opens the way to the first experimental determination of the overlap through transport measurements. A second part of the thesis studies the Sherrington-Kirkpatrick mean-field model, which describes the low-temperature phase of an Ising spin glass. We are interested here in the generalization of this classical model, much studied over the past thirty years, to the case of quantum Ising spins (i.e. in a transverse magnetic field). We derive analytically the equations of motion in the semiclassical case where the influence of quantum fluctuations is weak, and compare them to the classical case. These equations are solved numerically by a pseudo-spectral method.
Thursday 27 May 2010 (room 116)
Title: Ultracold fermions and few-body problems.
Xavier Leyronas (LPS, ENS Paris)
Abstract: I will present my work on the problem of interacting ultracold fermions. After an introduction to what is known as the "BEC-BCS crossover", I will show the results of experiments determining the equation of state of such a system, in particular the recent one from the Li6 group at the Laboratoire Kastler Brossel. Finally, I will present our theoretical work aimed at computing this equation of state, in which three- and four-body problems appear.
Thursday 8 April 2010 (room 116)
Title: Measurement of the current fluctuations of an electron source: a proof of the controlled emission of single electrons
Gwendal Fève (Laboratoire Pierre Aigrain, ENS Paris)
Electronic transport in ballistic quantum conductors presents many analogies with photon transport in vacuum. Over micron lengths at cryogenic temperatures, electrons propagate without undergoing collisions and the phase of their wave function remains well defined. These analogies have been spectacularly demonstrated by the realization, for example, of Mach-Zehnder interferometers [1]. The controlled manipulation of a few single electrons in a circuit would make it possible to push the analogy all the way to quantum optics. In particular, some of its founding experiments could be reproduced with electrons, such as the Hanbury-Brown and Twiss or Hong-Ou-Mandel experiments, in which one or two photons are partitioned on a beam splitter. The non-classical nature of the incident beams can then be demonstrated by measuring the correlations between the currents at the outputs of the splitter. Reproducing these experiments with electrons therefore relies on the ability to emit single electrons on demand into a circuit and to measure single-electron current correlations.
In this talk, I will present the measurement of the current fluctuations (autocorrelation) generated by a periodically triggered electron source. This source allows the controlled emission of a single charge from a quantum dot realized in a two-dimensional electron gas at the interface of an AlGaAs/GaAs heterostructure [2]. Through a sudden variation of the dot potential, a single electron can be emitted by tunnel coupling from the dot to the rest of the gas within a subnanosecond characteristic time. When this single-electron emission regime is reached, the current fluctuations reduce to the quantum uncertainty on the tunnel emission time of a single charge. The observation of this irreducible noise demonstrates the controlled emission of single particles and opens the way to future electron quantum optics experiments.
[1] Y. Ji et al., Nature 422, 415 (2003) .
[2] G. Fève et al., Science 316, 1169 (2007).
Friday 23 April 2010, 1:30 pm (room 116)
De Sitter vacua of supergravity and supersymmetry breaking branes.
David Andriot (LPTHE Jussieu)
Finding a de Sitter vacuum of supergravity seems to be rather difficult, as many four-dimensional studies have shown. In this talk I will study this question from a ten-dimensional point of view. A
starting point to find such solutions is to consider as an ansatz a deformation of a known supersymmetric solution. Nevertheless, this is often not sufficient to obtain a positive cosmological
constant, and then one usually adds extra ingredients. Here I will rather break in addition the supersymmetry of the sources. I will make a proposal for such sources, and provide a concrete example
of such a de Sitter solution obtained by compactifying on a solvmanifold. This proposal may open the door to first order equations generalizing the supersymmetry ones. The four-dimensional stability
of the solution remains to be studied, but the four-dimensional potential admits at least a minimum in the dilaton and the volume moduli.
Based on arXiv:1003.3774.
Thursday 29 April 2010 (room 116)
Title: Universal Resistances of the Quantum RC circuit.
Christophe Mora (Laboratoire Pierre Aigrain, ENS Paris)
We discuss the capacitance and the resistance, usually called the charge relaxation resistance, of a quantum coherent RC circuit driven by a low-frequency AC voltage.
This circuit is the quantum analogue of the classical RC circuit: it comprises a dot capacitively coupled to a nearby gate and connected to a single reservoir lead. As a result of phase coherence and
electronic interactions, the quantum circuit behaves quite differently and Kirchhoff's law is violated.
Here we show that the charge relaxation resistance is perfectly quantized, regardless of the single-lead transmission and for an arbitrary strength of the interaction. Its low-frequency value is h/2e^2. When the driving frequency exceeds the dot level spacing, we predict a transition to a metallic regime with a doubled quantized resistance h/e^2. The novel quantized resistance h/e^2 is connected to the Korringa-Shiba relation of the Kondo model, thereby revealing the physics behind these universal charges.
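The quantized values quoted in the abstract correspond to concrete resistances; a quick numerical check of the two quantum resistance scales, using the CODATA values of the Planck constant and the elementary charge (an illustration only):

```python
# Planck constant and elementary charge (SI units, CODATA 2018 exact values)
h = 6.62607015e-34   # J s
e = 1.602176634e-19  # C

R_half = h / (2 * e**2)  # low-frequency charge relaxation resistance
R_full = h / e**2        # doubled quantized resistance (metallic regime)

print(round(R_half / 1e3, 2), "kOhm")  # ≈ 12.91 kOhm
print(round(R_full / 1e3, 2), "kOhm")  # ≈ 25.81 kOhm
```

The second value, h/e^2, is the von Klitzing resistance familiar from the quantum Hall effect.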
Thursday 6 May 2010 (room 116)
Title: Time-dependent theory of non-linear response and current fluctuations.
Inès Safi (Laboratoire de Physique des Solides, Orsay)
A general non-linear response theory is derived for an arbitrary time-dependent Hamiltonian, not necessarily obeying time-reversal symmetry. This allows us to obtain a greatly generalized Kubo type
formula. Applied to a mesoscopic system with any type of interactions, and coupled to multiple probes and gates with arbitrarily time-dependent voltages, we derive current-conserving differential
conductance and current fluctuation matrices obeying a generalized Fluctuation-Dissipation Theorem. This relation provides a common explanation for asymmetries of the excess noise in several
non-linear mesoscopic systems, as well as of its surprising negative sign.
Thursday 25 March 2010 (room 116)
Title: Phase-Diagram of Quasi-Two-Dimensional Trapped Bose Gases.
Markus Holtzmann (LPTMC, UPMC Paris & LPMMC, UJF Grenoble)
Abstract: We discuss quasi-two-dimensional Bose gases in harmonic traps at temperatures close to the Kosterlitz-Thouless transition. Using Quantum Monte Carlo calculations for experimentally relevant geometries and interparticle interactions, we have studied density profiles, superfluid and condensate fractions, single-particle coherence, and pair correlations. Quantitative comparisons with mean-field and effective (classical) field theory allow us to study universal two-dimensional correlations in the fluctuation region, and to characterize the cross-over from Kosterlitz-Thouless to Bose-Einstein behavior for small particle numbers.
Thursday 18 March 2010 (room 116)
Title: Description of the breathing mode of a system of interacting particles.
Alain Olivetti (université de Nice)
Considering a set of trapped particles, the study of the oscillation modes of this system gives access to a great deal of information, and in particular to the collective effects at play. Here, we will focus more precisely on the so-called breathing mode, in which the system of particles alternates between phases of compression and dilation. Our work is based on the equations of the BBGKY hierarchy, coupled to a Fokker-Planck operator. Using an ansatz, we describe the breathing mode for a wide range of systems, whatever the temperature and the dimension of space, in both the linear and nonlinear regimes. We will then generalize our results to the case where the friction and/or the diffusion can be space-dependent quantities, a situation encountered in magneto-optical traps and one that can be at the origin of instabilities.
Thursday 11 March 2010 (room 116)
Title: On the microscopic origin of excess low frequency flux 1/f noise in qubits and SQUIDs.
Lara Faoro (LPTHE, Paris)
At millikelvin temperatures, superconducting flux and phase qubits and SQUIDs (Superconducting QUantum Interference Devices) both suffer from intrinsic magnetic flux noise. The noise power spectrum
scales as 1/fa, where f is frequency and b is approximately unity. Low-frequency flux noise enhances decoherence in qubits and reduces flux resolution in SQUIDs. Remarkably, all devices show
approximately the same level of noise, a few ?-?0 per square root of Hz at 1 Hz (?0 is the flux quantum). The magnitude of the flux noise scales only weakly with the area of the device, and is
independent of temperature T.
In this talk I will illustrate our theoretical picture for the excess low frequency flux noise consistent with data in which the noise is due to the spins at the Superconductor Insulator interface
coupled via RKKY interaction. In contrast to the alternative models, this mechanism explains many puzzling features of the flux noise: its apparent temperature independence down to 20 mK, its
persistence to at least 20 MHz and the rough SQUID loop area independence. This mechanism generates roughly 1/f noise in a broad frequency. I will report on some recent experimental results that
support our theoretical conjecture. I will also discuss some very recent puzzling results by McDermott group at University Madison Wisconsin that indicate a large and previously overlooked source of
noise: fluctuations of the kinetic inductance in the superconducting wires which can be masked as a flux noise in some cases.
Thursday 4 March 2010 (room 116)
Title: Statistical physics and complex social systems.
Sébastien Grauwin (ENS de Lyon)
A recurring problem within the social sciences, known as the 'micro-to-macro problem', concerns our ability to explain the relation between the constituent elements of the social sciences (individuals) and the emergent collective phenomena resulting from their interactions (stadium waves, riots, urban segregation, institutions, society, economy...). In recent years, physicists have tackled this problem with tools drawn from statistical physics. The implicit double bet is that the tools and the 'fresh' outlook of physicists can help social scientists better understand their systems, but also that the specific features of social systems make it possible to develop modelling tools that may interest physics and other disciplines. Two approaches are being attempted worldwide to carry out this bet.
• 1/ The modelling of simple virtual societies in which causal relations can be understood. This approach will be illustrated by an analytical model of urban segregation that we developed in a recent article [1]. This model notably presents an extension of the concept of free energy to social systems.
• 2/ The analysis of real data by sophisticated methods. We have assembled a database of 200,000 articles attached to 'complex systems' themes. Examining this large database led us to develop specific analysis tools that shed new light compared with more traditional studies.
Finally, I will present the broad outlines of a third, mixed approach that attempts to connect these two approaches, which have so far remained separate, the models being too simplistic to account for the richness of real data.
[1] S. Grauwin et al., Proc. Natl. Acad. Sci. USA 106, 20622-20626 (Dec 2009)
Thursday 25 February 2010 (room 116)
Title: Towards coherent spintronics (with Carbon Nanotubes).
Takis Kontos (Laboratoire Pierre Aigrain, ENS Paris)
The scattering asymmetry between spin-up and spin-down electrons at the interface between a ferromagnetic metal and a non-magnetic metal is at the heart of the operating principle of the magnetic tunnel junctions and magnetic multilayers that earned A. Fert and P. Grünberg the 2007 Nobel Prize. Although these devices exploit the tunnel effect and the spin of the electron, they do not exploit a crucial degree of freedom allowed by quantum mechanics: the phase of the wave function. Indeed, most often this aspect remains hidden, and electronic transport through such objects is very well described by essentially classical laws. In the work I will present, we performed spin-dependent transport measurements in carbon nanotubes with several contacts. We observe non-local spin-valve-type signals that can be controlled by an electric field applied with gate electrodes. This reveals that the spin as well as the phase of the orbital part of the wave function are conserved in such devices. These observations build a bridge between mesoscopic physics and spin electronics and open the way to the realization of nanoelectronic components using these two quantum degrees of freedom on an equal footing.
Thursday 11 February 2010 (room 116)
Title: Anyonic quasiparticles and topological phase transition (the toric code in a magnetic field)
Sébastien Dusuel (Lycée Saint Louis, Paris)
I will begin by qualitatively describing the elementary properties of two-dimensional quantum systems possessing anyonic excitations (particles with fractional statistics, being neither bosons nor fermions), in connection with the recent fields of topologically protected qubits and topological quantum computation.
Next, I will give a pedagogical introduction to the simplest spin-1/2 model exhibiting such exotic excitations and topological order, namely Kitaev's toric code with emergent Z_2 anyons [1].
Finally, the robustness of the topological phase of the toric code to the simplest local perturbation (a magnetic field) will be estimated, with a physical picture given in terms of anyonic quasiparticles [2,3]. Depending on the direction of the magnetic field:
- the topological order is destroyed by a first- or second-order quantum phase transition, resulting in a rich phase diagram;
- the system may possess a plethora of bound states.
• [1] Kitaev, Ann. Phys 303, 2 (2003)
• [2] Vidal, Dusuel & Schmidt, Phys. Rev. B 79, 033109 (2009)
• [3] Vidal, Thomale, Schmidt & Dusuel, Phys. Rev. B 80, 081104(R) (2009)
Thursday 21 January 2010 (room 116)
Title: Edge-State Spectroscopy and Energy Exchanges in the Integer Quantum Hall Regime
Carles Altimiras (LPN Marcoussis)
The quantum Hall regime is a state of matter where quantum phenomena manifest themselves on a macroscopic scale. Its most salient feature is a dissipationless charge propagation along one-dimensional
edge channels, whose similarity with light beams has inspired electronic analogues of quantum optics experiments. Yet, the microscopic physics of edge states is poorly understood, as vividly
illustrated by the on-going debate to interpret the recent electronic Mach-Zehnder experiments.
Two important issues are particularly acute. The first one concerns the expected reconstruction of the edges due to Coulomb repulsion in realistic smooth confining potentials: the edge channels
acquire a finite width and additional acoustic modes of density oscillations across the width are predicted. Second, the interaction between co-propagating edge channels may deeply modify the nature
of edge excitations: for strong enough interactions, the excitations are predicted to delocalize among the co-propagating channels. At filling factor 2 (with two co-propagating edge channels of
opposite spin), this effect results in a spin-charge separation of the edge dynamics.
I will present a setup that permits us to extract the energy distribution f(E) in an edge channel driven out-of-equilibrium. This novel spectroscopy provides a stringent test of whether the predicted
additional acoustic modes capture part of the injected energy [1]. Moreover, by measuring f(E) for various propagation lengths, we can test the inelastic mechanisms at work and the nature of the
pertinent electronic excitations. Our results for two co-propagating edge channels reveal complete energy current equilibration, over a few micrometers. This strongly suggests that the dynamics is
governed by collective edge excitations delocalized over both channels [2].
Thursday 14 January 2010 (Amphi Schrodinger)
Title: Dynamical systems far from equilibrium: what can one say about the susceptibility function?
David Ruelle (IHES)
Thursday 26 November, 2 pm
Title: Localization of BECs in quasiperiodic lattices
Michele Modugno (LENS, Florence, Italie)
Abstract: I will discuss the localization behavior of a Bose-Einstein condensate in a one-dimensional bichromatic optical lattice, considering both static and dynamical properties. In particular, I will report on the recent experiments performed at LENS, where a Bose-Einstein condensate with tunable interactions has been employed to explore the delocalization transition induced by interactions. I will also consider the quantum spreading of a wave packet in the quasiperiodic potential, discussing the interplay of quasi-disorder and nonlinearity and the role of initial conditions.
Thursday 3 December 2009, 2 pm
Title: AC conductance of quantum chaotic cavities: semiclassical approach
Cyrille Petitjean (Université de Regensburg)
Abstract: Due to progress in the control and manipulation of mesoscopic structures driven by high frequency periodic voltages, the ac regime has recently been experimentally investigated [1] and
consequently theoretical interest in it has been renewed.
We consider a quantum chaotic cavity that is coupled via tunnel barriers and gates to a macroscopic circuit which contains ac-sources. For the transparent barrier, our semiclassical techniques permit
us to include the Ehrenfest time in the weak-localization correction to the screened conductance, previously obtained by random matrix theory [2].
Then by extending the recent semiclassical theory in presence of tunnel barriers [3] to the ac-transport, we investigate the effect of dephasing on the relaxation resistance of a chaotic capacitor in
the linear low frequency regime. This last investigation is in principle relevant to the recent measurements of the admittance at zero magnetic flux of a mesoscopic capacitor [1,4].
Works in collaboration with D. Waltner, J. Kuipers, I. Adagideli and K. Richter:
C. Petitjean et al. Phys. Rev. B 80, 115310 (2009).
[1] J. Gabelli et al., Science 313, 499 (2006). [2] P.W. Brouwer and M. Buttiker, Europhys. Lett. 37, 441 (1997). [3] R.S. Whitney, Phys. Rev. B 75, 235404 (2007). [4] S. Nigg and M. Buttiker, Phys. Rev. B 77, 085312 (2008).
Thursday 10 December 2009
No seminar!
The GDR Physique Quantique Mésoscopique is co-organizing a three-day mini-school on the theme of topological insulators from Wednesday 9 to Friday 11 December 2009. It is held at the Ecole Normale Supérieure de Lyon and organized by D. Carpentier and J. Cayssol. To find out more, visit the website.
Thursday 12 November, 2 pm
Title: Universal detector efficiency of a mesoscopic capacitor
Simon Nigg (université de Genève)
Abstract: In this talk I will discuss a novel type of high frequency quantum detector based on the mesoscopic capacitor recently realized by Gabelli et al., [1], which consists of a quantum dot
connected via a single channel quantum point contact to a single lead. I will show that the state of a double quantum dot charge qubit capacitively coupled to this detector can be read out in the GHz
frequency regime with near quantum limited efficiency. To leading order, the quantum efficiency is found to be universal owing to the universality of the charge relaxation resistance of the single
channel mesoscopic capacitor.
References: [1] J. Gabelli et al., Science 313, 499 (2006)
Thursday 5 November, 3 pm
Title: Self duality and supersymmetry
M. Konyushikhin (Subatech, Université de Nantes)
Thursday 29 October 2009, 2 pm
Title: "Financial market fluctuations and predictability: asynchronous models and statistical mechanics"
Damien Challet (Université de Fribourg)
Abstract: Starting from the minority game, this talk discusses how to design and solve agent-based models of financial markets that focus on information ecology. This yields a powerful picture of how
real markets operate and allows one to link price volatility to predictability. This talk then makes a case for asynchronous models of speculation and focuses on two of them. The difficulties of
tackling them analytically will be discussed.
Thursday 22 October 2009, 2 pm
Title: The extremal black hole/CFT correspondence
G. Compère (University of California, Santa Barbara)
Abstract: Universal properties of black holes such as the first law can be derived from generic properties of Killing horizons and diffeomorphism covariance. We show that the entropy of extremal
black holes can be derived universally as well, suggesting a conformal field theory description of extremal black holes. The linear perturbations in the Kerr throat and the greybody factors will be
surveyed in the perspective of the existence of a CFT.
Thursday 15 October 2009, 2 pm
Title: Tunnel magnetoresistance and electronic symmetries
David Halley, (IPCMS Strasbourg)
Abstract: Giant magnetoresistance in all-metallic systems was discovered some twenty years ago by A. Fert and P. Grünberg. Numerous applications have followed, notably in magnetic recording and sensor systems.
Studies in the 1990s showed that the giant magnetoresistance effect was larger in systems including an amorphous insulating barrier that the electrons cross by tunneling (magnetic tunnel junctions). However, until recently the tunnel magnetoresistance values remained below 100%, which still limited the number of applications that could be envisaged for such systems.
Over the past few years, the study of such systems, this time single-crystalline, has made it possible to increase the magnetoresistance values considerably (up to 500%), making them very attractive for industry. We will show how symmetry-filtering effects by a single-crystal barrier yield a strong spin polarization of the electron current and hence a very high magnetoresistance value. We will also show how these barrier-filtering effects can modify the behavior of 'normal' metals placed close to the barrier, making them insulating with respect to electronic transport. We will illustrate this phenomenon in the case of the symmetry-selective 'quantum wells' observed in the Fe/Cr/Fe/MgO/Fe system.
Thursday 2 July 2009, 2 pm
Title: Macroscopic quantum dynamics in 3D beyond mean field - the case of colliding BECs
Piotr Deuar, LPTMS, Universite Paris-sud / CNRS, Orsay, France
Abstract: During a supersonic collision of Bose-Einstein condensates, like in experiments with metastable Helium, a complex dynamics develops among the halo of scattered atoms. I will show some of
the interesting phenomena that occur, and explain why they are completely missed by a mean field approach.
Nevertheless, the full quantum dynamics can be simulated ab initio by sampling the positive-P representation of the boson field. This method, developed originally in quantum optics, is successful when single-particle interactions are weak but collective effects are strong - even in 3D.
To understand the behaviour of the halo particles, one can dissect the Bogoliubov quasiparticle dynamics to see which processes are responsible for what. With a dynamically evolving background condensate, finding the Bogoliubov modes exactly in 3D is intractable, but this has been successfully circumvented by sampling their positive-P representation.
Friday 26 June 2009
Title: Charting the phase diagram of a disordered quantum spin chain with state fidelity
Toby Jacobson, University of Southern California, Los Angeles
Abstract: The phase diagram of a quantum XY spin chain with Gaussian-distributed random anisotropies and transverse fields is investigated, with focus on the fidelity susceptibility, a recently
introduced quantum information theoretical measure. Monitoring the finite-size scaling of the probability distribution of this quantity as well as its average and typical values, we detect a
disorder-induced disappearance of criticality and the emergence of Griffiths phases in this model.
Phys. Rev. Lett. 102, 057205 (2009)
Phys. Rev. B 79, 184427 (2009)
Thursday 28 May 2009
Title: Anyons, fermions, and topological order in a spin-1/2 system: the Kitaev model
J. Vidal (Laboratoire de Physique Théorique de la Matière Condensée, Université Pierre et Marie Curie)
Abstract: In 1977, Leinaas and Myrheim [1] suggested the existence of quantum statistics different from those of Bose-Einstein and Fermi-Dirac. A few decades later, the particles obeying these statistics, anyons, have still not been observed experimentally.
The Kitaev model [2], which describes interacting spins 1/2 on a honeycomb lattice, is without doubt one of the best candidates for the detection of such objects. Indeed, the spectrum of this system contains anyonic excitations, Abelian and non-Abelian, intimately linked to the underlying topological order. These anyons are localized in space, which makes it straightforward to envisage the manipulations indispensable for the experimental demonstration of their statistics [3]. However, the spectrum also contains fermionic excitations liable to pollute the detection processes.
I will present a perturbative analysis of this model around the isolated-dimer limit, which makes it possible to understand the problems associated with the coexistence of these two types of particles, anyons and fermions, within the same spectrum [4].
• [1] J. M. Leinaas and J. Myrheim, Nuovo Cimento Soc. Ital. Fis. B 37, 1 (1977).
• [2] A. Kitaev, Ann. Phys. 321, 2 (2006).
• [3] L. Jiang et al., Nature Physics 4, 482, (2008).
• [4] J. Vidal, K. P. Schmidt and S. Dusuel, Phys. Rev. Lett. 100, 177204 (2008).
12-15 May 2009
Thursday 9 April 2009
Challenges for String Inflation
Marcus BERG (Stockholm University)
Abstract: There are by now many models that claim to achieve inflationary cosmology in string theory (warped brane inflation, Kähler inflation, axion monodromy, etc.). I will explain the motivation for constructing such models, and give some details about the three aforementioned examples. I will summarize some concrete challenges concerning the consistency of these models, and how to make them more predictive.
Friday 20 March 2009 (Colloquium)
Dynamics of a Nonlinear Luttinger Liquid.
L. Glazman (Yale University and Inst. Néel)
Abstract: Dynamics of one-dimensional quantum many-body systems is usually described within the Luttinger Liquid paradigm. In that paradigm, the generic nonlinear dispersion relation of particles is replaced by a linear one. That allows one to solve the dynamics problem exactly, at the expense of introducing a Lorentzian symmetry which was absent in the generic system. We investigate the dynamic responses without linearization of the particle spectrum and find new universal singular behavior of the response functions.
Thursday 12 March 2009 (Colloquium)
Continuously monitoring the quantum oscillations of an electrical circuit.
P. Bertet (Quantronics group, SPEC/CEA Saclay)
Abstract: Superconducting circuits based on Josephson junctions can be used to realize artificial atoms, with coherence times sufficient to perform interesting atomic physics experiments. They can be strongly coupled to the electromagnetic field of an on-chip superconducting resonator, making it possible to realize cavity quantum electrodynamics experiments with electrical circuits and giving rise to a new field called Circuit Quantum Electrodynamics (Circuit QED) [1,2]. We have studied the interplay between quantum dynamics and measurement in a Circuit QED setup. In our experiment, we use a "transmon", a modified Cooper-pair box coupled to a coplanar waveguide cavity which protects it from the environment and allows it to reach long enough coherence times. An electromagnetic mode of the cavity is used to measure the qubit state. The photons stored in the cavity progressively extract information about the quantum state of the qubit, and correlatively dephase it. This information is carried by the phase of the electromagnetic field leaking out of the cavity, which is measured by homodyne detection. By continuously applying the measuring field during Rabi oscillations of the circuit, we revisit the quantum measurement problem for a mesoscopic quantum electrical circuit [3]. By increasing the average number of photons in the cavity, we observe the transition between the weak-measurement and Zeno regimes, both in the time and frequency domains. In the latter case, we discuss how far the experimental results provide a proof of the quantum behavior of the circuit.
[1] A. Blais et al., Phys. Rev. A 69, 062320 (2004)
[2] A. Wallraff et al., Nature 431, 162 (2004)
[3] A. Korotkov and D. Averin, Phys. Rev. B 64, 165310 (2001)
Thursday 5 March 2009
Transition from a one-dimensional to a quasi-one-dimensional state in interacting quantum wires
Julia S. Mayer (Ohio State University)
Abstract: In experiments, signatures of one-dimensional (1D) behavior have been observed in quantum wires and carbon nanotubes as well as in cold atomic gases. While the 1D aspects make the above-mentioned systems so fascinating, the real world is three-dimensional and, therefore, even in these confined geometries, features pertaining to deviations from one-dimensionality may remain. My interest is in identifying how the one-dimensional effects are modified in realistic situations and exploring the novel phenomena that arise.
Upon increasing the density of electrons in a quantum wire, the system undergoes a transition from a one-dimensional to a quasi-one-dimensional state. In the absence of interactions between
electrons, this corresponds to filling up the second subband of transverse quantization. On the other hand, strongly interacting one-dimensional electrons form a Wigner crystal, and the transition
corresponds to it splitting into two chains (zigzag crystal).
We study the evolution of the system and the electronic excitation modes in the vicinity of the transition as a function of the interaction strength. In particular, we establish that, for
spin-polarized electrons, only one gapless mode exists on either side of the transition at any interaction strength. In the strongly interacting regime, the effective Hamiltonian is represented by
two weakly coupled modes given by a Luttinger liquid and a transverse field Ising model. Performing a renormalization group analysis, we show that the critical fixed point is Lorentz invariant.
However, the critical velocity vanishes due to marginally irrelevant operators.
Thursday 26 February 2009
Excitation spectrum of the light-atom field in a periodic ultracold atomic gas.
M. Antezza (Laboratoire Kastler-Brossel, ENS Paris)
Abstract: We study the excitation spectrum of the light-atom field in a periodic system of atoms located in the lowest vibrational state of an optical lattice. To this end, the eigenmodes of the atomic gas interacting via the electromagnetic field are analytically investigated, taking into account both the vectorial character of the light and the quantum atomic motion. We show the problems of models that treat the atoms as point-like scatterers at rest at periodic positions, and that, on the contrary, the inclusion of the quantum atomic motion naturally leads to a well-defined and divergence-free model. We finally predict a gapless photonic spectrum.
Thursday 5 February 2009
Anderson localization in ultracold atomic gases.
L. Sanchez-Palencia (Laboratoire Charles Fabry, Institut d'Optique, Orsay)
Abstract: We present our recent theoretical and experimental work on the expansion of a Bose-Einstein condensate in a disordered potential. We show that such a system can exhibit single-particle Anderson localization under conditions that we will discuss. We determine the localization analytically and find that the experimental data are in very good agreement. In addition, we show that the one-dimensional speckle potentials used in the experiments are very peculiar as they exhibit an effective mobility edge.
We also investigate the effects of disorder in a Bose-Einstein condensate at equilibrium in a regime where the interaction energy dominates over the kinetic energy. While the ground state is extended owing to the strong interactions, we show that the elementary excitations of the condensate (Bogolyubov quasi-particles) are localized. This constitutes an example of many-body Anderson localization in a system with strong mean-field interactions. We present a general formalism to determine the localization lengths analytically and compare them to numerical calculations in 1D.
Thursday 29 January 2009
Interaction effects on transport in disordered d-wave superconductors: a study of several universality classes.
L. Dell'Anna (SISSA, Trieste)
Abstract: We study the localization properties of disordered d-wave superconductors by means of the fermionic replica trick, deriving the effective non-linear sigma model for the spin diffusive modes. According to the presence of certain symmetries and the range of the impurity potential, we provide a detailed classification of the behavior of several physical quantities, such as the density of states and the spin and quasiparticle charge conductivities. Following the Finkel'stein approach, we finally extend the effective functional method to include residual quasiparticle interactions, at all orders in the scattering amplitudes, obtaining the complete RG equations for the full set of couplings of the theory.
Thursday 22 January 2009
Branching random walks: the effect of selection on survival and genealogies.
Damien Simon (University of Cologne)
Abstract: Many simple biological models for the evolution of species are based on branching random walks. In particular, the influence of the environment (limited resources, etc.) on a population of such walks can lead either to extinction of the population or to saturation of its size. We present here some properties of these branching random walks from the point of view of statistical physics. First, we will present the phase transition between survival and extinction that occurs when the boundary conditions of the domain change. In a second part, we will characterize the genealogical trees in different selection regimes and establish some links with other classical models of statistical physics, such as directed polymers and voter models.
Thursday 8 January 2009 (Colloquium)
Ballistic conductors: kinetic inductance, time of flight and interactions
B. Plaçais, Laboratoire Pierre Aigrain, Ecole Normale Supérieure, 24 rue Lhomond, 75231 Paris cedex 5
Abstract: Beyond the average electronic transmission and transmission-probability properties probed by low-frequency transport and noise, the dynamical quantum transport regime provides new information on electronic transit times in conductors. For the experimentalist, transport in a conductor can be described by a network of impedances involving, depending on its geometry, resistances but also quantum capacitances and inductances. The latter are particularly important in quantum conductors, owing to the finite density of states.
In the talk, we will study the inductance of chiral and non-chiral quantum wires in the presence of interactions and screening [1]. We will rely on the scattering theory developed by Christen and Büttiker [2]. We will present GHz admittance measurements performed on a Hall bar [1] and on carbon nanotubes [3]. Finally, we will discuss the implications of these results for the dynamics of mesoscopic nanotransistors, which are promising devices for the on-the-fly detection of single electrons.
[1] Relaxation time of a chiral quantum R-L circuit, J. Gabelli, et al., Phys. Rev. Lett. 98, 166806 (2007)
[2] Low-frequency admittance of quantized Hall conductors, T. Christen, M. Büttiker, Phys. Rev. B 53, 2064 (1996)
[3] Single carbon nanotube transistor at GHz frequency, J. Chaste, et al., Nano Letters 8, 525 (2008)
Thursday 4 December 2008
Nicolas Regnault (Laboratoire Pierre Aigrain, ENS Paris)
Title: Painting the fractional quantum Hall effect
Abstract: Although it was discovered more than twenty years ago and crowned with a Nobel Prize in 1998, the fractional quantum Hall effect remains a lively subject. Among the questions that have emerged recently, one can cite the possibility of finding similar physics in rotating ultracold atomic gases, and the predictions concerning the existence of exotic statistics beyond fermionic, bosonic or even fractional statistics, known as non-Abelian statistics. The latter have attracted keen interest: it has been shown that they could serve as a basis for quantum computation, being robust by construction against decoherence, currently the main obstacle to the development of the quantum computer.
A picture of the fractional quantum Hall effect that would capture all of the phenomena at play remains a goal to be reached. This system is the very archetype of those in which quantum physics and interparticle interactions are so intertwined that the usual approximation schemes are no longer valid. Depending on the number of particles and the strength of the magnetic field, different models have so far allowed one to grasp the nature of the phenomena at play. The most famous are the Laughlin liquid, Jain's composite-fermion model, and the Moore-Read state. Yet none of these approaches is "universal".
We will present an approach based on colored quantum liquids. Each group of particles with a given color forms a Laughlin liquid. The use of colors is an artificial way of distinguishing the particles even though they are all identical. In reality, they are in some sense all "gray". We propose a procedure that renders the particles "gray" again. This method is the analogue of photographing a color painting with a black-and-white camera. The black-and-white image surprisingly reveals the internal structure of the quantum liquid: it is made up of droplets formed from a number of particles equal to the number of colors initially used. The results of numerical calculations carried out to validate this approach seem to indicate that this structure is a common feature of the various approaches to the fractional quantum Hall effect.
Thursday 27 November 2008
Sylvain Capponi (Université Paul Sabatier)
Title: One-dimensional multicomponent fermionic cold atoms
Thursday 13 November 2008
Christian MAES (Leuven)
Title: How to understand the minimum entropy production principle
Thursday 6 November 2008
Giuliano Orso (LPTMS, Université Paris-Sud)
Title: Exotic Fermi superfluids and Bose glasses: exact results for one-dimensional correlated systems
Abstract: Recent experiments have shown that atomic quantum gases can be used as model systems to investigate traditional problems in condensed matter physics.
In this talk I present my recent theoretical work on interacting one-dimensional gases, where some exact results can be obtained.
In the first part I discuss the problem of attractive fermions undergoing superfluid pairing. I will focus on the interesting situation where the numbers of spin-up and spin-down fermions are unequal, the population imbalance acting in the same way as the magnetic field for usual superconductors. Starting from the Bethe ansatz solution, I have derived the exact quantum phase diagram for one-dimensional gases with contact interactions. Moreover, I have also shown that, in the presence of a shallow longitudinal trap, the gas phase-separates into two shells, with a partially polarized core. The latter is a direct realization of the celebrated Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superfluid.
In the second part of my seminar, I will present some recent results on strongly interacting lattice bosons in the presence of disorder. I will consider the response of the gas when the tunneling rate between neighboring sites is modulated periodically in time. Using exact Bose-Fermi mappings, I have calculated the energy absorption rate as a function of the driving frequency, disorder strength and atom filling. I will point out the differences in the response of the gas in the disordered Bose glass phase with respect to the Mott insulator, which occurs at integer filling.
Tuesday 28 October 2008 at 2 pm
Gregory FALKOVICH (Weizmann Institute)
Title: Inverse cascades and condensates
Finite Element Analysis of Hybrid Boron-carbon Composites
Info: 8356 words (33 pages) Dissertation
Published: 9th Dec 2019
Tagged: Engineering
Adaptability to different situations and ease of combination make composites among the most extensively used materials in the engineering industries. One way of managing the properties of composites is the hybridization of different fibers in one composite.
This work focuses on the analysis of the effective stiffness of hybrid materials, carried out using the Finite Element Method.
A series of models was tested in sequence by imposing various conditions on each of them. Boron and carbon fibers were used primarily for these experiments because they are more compatible than other fiber combinations. Owing to the difference in fiber diameters, it was possible to achieve a very high volume concentration of fibers. The longitudinal properties of hybrid composites are well studied in the literature, but the prediction of transversal properties was a challenge. The results of the FEM modeling were compared with theoretical predictions and conclusions
were drawn.
LIST OF ILLUSTRATIONS
LIST OF TABLES
Chapter 1 INTRODUCTION
1.1 Composites: Definition and Classification
1.2 Classification of Composite Materials
1.3 Previous Studies on Hybrid Composites
1.4 Objectives of Present Study
Chapter 2 LITERATURE REVIEW
Chapter 3 MATERIAL SPECIFICATION
Chapter 4 FE MODELING AND SIMULATION
4.1 Modeling Steps
4.2 Cases Studied
4.2.1 Triangular Models
4.2.2 Hexagonal Models
4.2.3 Rectangular Models
Chapter 5 RESULTS AND DISCUSSION
5.1 Results of Triangular Models
5.1.1 Estimation of Longitudinal Properties
5.1.2 Behavior of Elastic Strain on Infinite Lengths
5.1.3 Estimation of Transversal Properties
5.2 Results of Rectangular Models
5.2.1 Estimation of Longitudinal Shear Stiffness
5.2.2 Estimation of Transversal Shear Stiffness
Chapter 6 CONCLUSION
6.1 Summary and Conclusion
6.2 Future Work
BIOGRAPHICAL INFORMATION
Figure 1.1 In-Layer and In-Ply hybridization
Figure 4.1 Steps in modeling
Figure 4.2 Orthogonal and front view of Model 1
Figure 4.3 Orthogonal and front view of Model 2
Figure 4.4 Orthogonal and front view of Model 3
Figure 4.5 Symmetry cut view of Triangular Model 1
Figure 4.6 Front face meshing of Triangular Model 1
Figure 4.7 Boundary conditions on Triangular Model 1
Figure 4.8 Boundary conditions on Triangular Model 2
Figure 4.9 Symmetry cut view of Triangular Model 3
Figure 4.10 Meshing on Triangular Model 3
Figure 4.11 Front view of Triangular Model 3
Figure 4.12 Fine meshing of carbons and matrix
Figure 4.13 Direct compressive pressure applied on Model 2 by increasing the length to 4000 µm
Figure 4.14 Direct tensile pressure applied on Model 2 by increasing the length to 4000 µm
Figure 4.15 Hexagonal model 1
Figure 4.16 Hexagonal model 2
Figure 4.17 Hexagonal model 3
Figure 4.18 Meshing of Rectangular Model
Figure 4.19 Boundary conditions on Rectangular Model 1 in longitudinal directions
Figure 4.20 Boundary conditions on Rectangular Model 1 in transversal directions
Figure 5.1 Distribution of Normal Stresses on Triangular Model 1
Figure 5.2 Distribution of Normal Stresses on all three Triangular Models
Figure 5.3 Distribution of Normal Elastic Strains on Triangular Model 1 in Transversal and Longitudinal directions
Figure 5.4 Comparison of Theoretical and FE Model values of Shear Modulus in Longitudinal and Transversal Direction for Rectangular Models 1 and 2
Figure 5.5 Plot between Total Concentration of Fibers and Poisson's Ratio in Longitudinal Direction
Figure 5.6 Plot between Total Concentration of Fibers and Young's Modulus in Transversal Direction
Figure 5.7 Plot between Total Concentration of Fibers and Poisson's Ratio in Transversal Direction
Figure 5.8 Normal Elastic Strain of Triangular Model 1 in Axial Direction
Figure 5.9 Normal Elastic Strain of Triangular Model 1 increased to length 4000 µm in Axial Direction
Figure 5.10 Deformation of Rectangular Model 1 subjected to Longitudinal Shear force
Figure 5.11 Surface Displacements of Rectangular Model 1 subjected to Longitudinal Shear force
Figure 5.12 Deformation of Rectangular Model 1 subjected to Transversal Shear force
Figure 5.13 Surface Displacements of Rectangular Model 1 subjected to Transversal Shear force
Figure 5.14 Plot between Total Concentration of Fibers and Shear Modulus in Longitudinal Direction
Figure 5.15 Plot between Total Concentration of Fibers and Shear Modulus in Transversal Direction
Table 3.1 Properties of Resin Epoxy Matrix
Table 3.2 Properties of Boron Fiber
Table 3.3 Properties of Carbon (T300) Fiber
Table 5.1 Normal Stresses and Normal Strains based on the concentration of fibers and matrix for all the Triangular Models
Table 5.2 Comparison of Theoretical and FE Model values of Young's Modulus and Poisson's Ratio in Axial Direction for all three Triangular Models
Table 5.3 Comparison of Theoretical and FE Model values of Young's Modulus and Poisson's Ratio in Transversal Direction for all three Triangular Models
Table 5.4 Comparison of Theoretical and FE Model values of Shear Modulus in Longitudinal and Transversal Direction for Rectangular Models 1 and 2
Chapter 1
1.1 Composites: Definition and Classification
Advanced composites are a blend of two or more distinct constituents or phases, of which one is made up of stiff, long fibers and the other is a binder or matrix that holds the fibers in place. The composites thus formed are generally composed of layers, or laminates, of fibers and matrix. They have enhanced properties compared to the constituents, which retain their individual identities and influence the final properties. The fibers are tough and rigid relative to the matrix, with different properties in different directions, which means they are orthotropic. For modern structural composites, the length-to-diameter ratio of the fibers generally exceeds 100. [1]
If well designed, a composite material usually exhibits the best qualities of its components or constituents. Composite materials are formed to obtain improved properties. Some of the properties that can be improved by forming a composite are strength, stiffness, corrosion resistance, wear resistance, attractiveness, weight, fatigue life, thermal insulation, thermal conductivity, and acoustic insulation. Not all of these properties are improved at the same time; rather, the characteristics relevant to the task to be performed are taken into consideration. [2]
Examples: The fibers and matrices are made of various materials. The fibers are typically glass, carbon, silicon carbide, or asbestos, while the matrix is usually a plastic, metal, or ceramic.
1.2 Classification of Composite Materials
Based on the type of reinforcements, the composite materials are primarily classified into two broad categories as Fiber Reinforced Composites and Particle Reinforced Composites.
In a Fiber Reinforced Composite, also known as a Fibrous Reinforcement, the length is much greater than the cross-sectional dimension. The ratio of length to cross-sectional dimension is known as the aspect ratio. Single-layered fibers with a high aspect ratio are called Continuous Fiber Reinforced Composites, whereas those with low aspect ratios are called Discontinuous Fiber Reinforced Composites. Depending on the orientation of the continuous fibers, the reinforcement can be either Unidirectional or Bidirectional. These bidirectional reinforcements can also be termed Woven Reinforcements. Similarly, the discontinuous fibers may be randomly oriented or preferentially oriented.
Multilayered Composites, being another group of the Fibrous Reinforcements, can be sub-grouped as, Laminates and Hybrids. By stacking layers or plies in a specific sequence, the Laminates are
obtained. These are usually unidirectional. A typical laminate may have 4 to 40 layers, of which the fiber orientation changes from layer to layer. Multilayered composites with mixed fibers result in
Hybrids, which are now becoming commonplace. These hybrid composites are designed to benefit from the properties of fibers employed. Here, the fibers can be mixed in a ply or layer by layer. Also,
some hybrids have a mixture of fibrous and particulate reinforcements.
Particle reinforced composites, by contrast, have reinforcement dimensions that are approximately equal in all directions. In this case, the shape of the particles is important, and it can be any regular or irregular geometry, such as spherical, cubic, or platelet. As with discontinuous single-layered composites, the particulate reinforcements can also be arranged in a random or preferred manner. [3]
Figure 1.1 In-Layer and In-Ply hybridization
1.3 Previous Studies on Hybrid Composites
The term 'hybrid composite' was coined in the 1950s, and such materials have been studied and utilized ever since. The theoretical analysis of hybrid composites was carried out in the 1970s and 1980s. This was the time when most of the frequently used fibers, such as carbon, boron, and Kevlar, became commercially available. Analyzing materials whose layers are made from different fibers was a simple task, but analyzing the case of in-ply hybridization, where different types of fibers are used to make unidirectionally reinforced materials, was intricate. Early on, a number of experiments were conducted on in-ply hybrid composites to estimate their longitudinal and transversal properties. In all these approaches, the results of theoretical modeling of conventional unidirectional composites were used. In this case, the longitudinal stiffness of such a material was estimated with sufficient precision using the 'mixture rule', and for the prediction of the effective strength, the difference in ultimate strains is usually considered. Three types of models were considered for analysis, namely:
• Anisotropic Matrix Model
• Four-Phase Model
• Double Periodic Model
These models were evaluated for longitudinal and transversal properties. The results obtained from these models agreed closely with the theoretical results.
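As an illustration, the mixture rule for the longitudinal stiffness of a two-fiber hybrid can be sketched as follows. The moduli and volume fractions below are representative placeholder values for boron, carbon, and epoxy, not the material data used in this work:

```python
def longitudinal_modulus(moduli, fractions):
    """Rule of mixtures: E_L = sum(V_i * E_i) over all phases (fibers and matrix)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(E * V for E, V in zip(moduli, fractions))

# Illustrative stiffness values in GPa (assumed): boron fiber, carbon fiber, epoxy matrix.
E_L = longitudinal_modulus([400.0, 230.0, 3.5], [0.5, 0.2, 0.3])
```

Because the stiff fibers dominate the sum, the longitudinal modulus of the hybrid lies close to the fiber-weighted average, which is why the mixture rule works with sufficient precision in this direction.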
1.4 Objectives of Present Study
The principal objective of this work is to study the properties of hybrid composites and to determine their longitudinal and transversal characteristics by numerically computing the shear moduli. To accomplish this, a combination of fibers and matrix is examined under certain imposed boundary conditions.
Here, boron and carbon are considered because together they offer excellent mechanical properties compared with other fiber combinations. In the case of boron and carbon, the difference between the ultimate strains is relatively small, so the chance of premature breaking is also small. Apart from this, their elastic properties are close. Boron being highly resistant to compression, and carbon highly resistant to tension, together they exhibit better strength against both compressive and tensile loads.
Shear moduli are determined for various models with different volume concentrations of fibers, and the effect of this variation is studied across all the models. This comprises numerically computing the values and comparing them with the theoretical results.
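A common theoretical baseline for such comparisons in the transversal and shear directions is the inverse rule of mixtures; the sketch below assumes that baseline, with illustrative shear moduli rather than the values tabulated in Chapter 3:

```python
def inverse_rule_of_mixtures(moduli, fractions):
    """Inverse rule of mixtures: 1/M = sum(V_i / M_i).

    A lower-bound estimate commonly used for the transversal Young's modulus
    and the longitudinal shear modulus of unidirectional composites.
    """
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return 1.0 / sum(V / M for M, V in zip(moduli, fractions))

# Illustrative shear moduli in GPa (assumed): boron fiber, carbon fiber, epoxy matrix.
G_L = inverse_rule_of_mixtures([165.0, 9.0, 1.3], [0.5, 0.2, 0.3])
```

Note that the compliant matrix dominates the harmonic mean, so the estimate lands much closer to the matrix modulus than to the fiber moduli; this is precisely why transversal properties are harder to predict and motivate the FE comparison.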
Chapter 2
Hybridization is one of the most effective and efficient ways of improving the energy absorption capability of fiber reinforced composite materials. Hybrid composite laminates reinforced with carbon, Kevlar, glass, natural fibers, and other types of fibers have been studied over time. [6]
Earlier, carbon fiber reinforced composites were used extensively because of their excellent mechanical properties, making them popular for lightweight, high-performance applications. But because of their limitations, such as low tensile failure strain and high cost, hybrid composites came into existence. Hybridization is a method by which the drawbacks of one fiber can be balanced out by the virtues of the other.
Carbon and glass, the low- and high-elongation fibers respectively, form the most common combination for hybrid composites. Moreover, the properties of some hybrid composites go beyond what is expected on the basis of the rule of mixtures; this is referred to as the hybrid effect. It was discovered by Hayashi in 1972, whose findings showed that the failure strain of an all-carbon fiber composite could be increased by 40% by introducing glass fibers. It was concluded that the three main causes of the hybrid effect are residual thermal stresses, fracture propagation effects, and dynamic stress concentrations.
The attempts of many authors to model hybrid composites were not always successful. They could analyze some of the influencing parameters, but the conclusions were not straightforward or sufficient to interpret the results. [8] Later, a global load-sharing model was developed and a study of carbon/glass fibers was carried out. This study provided directions for designing optimal hybrid composites [7].
Numerous investigations have been carried out on hybrid laminates subjected to low-velocity impact, but very few dealt with ballistic impact on hybrid composites. One of these examined the effect of hybridization with E-glass fabric on carbon/epoxy laminates [9]. The influence of Kevlar 29 fibers on E-glass reinforced laminates was also examined [10]. Work on basalt fibers reported mechanical properties comparable with those of traditional glass fibers while displaying advantages in terms of environmental sustainability and chemico-physical properties [11,12]. In addition, investigations in which glass fibers were replaced with basalt fibers to study the impact resistance proved successful. It was inferred that basalt fibers act as good impact-resistance improvers for composite laminates, with a view to enhancing the environmental sustainability of such composites [7,13-15]. A further work on hybrid composites addressed the high-velocity impact behavior of hybrid basalt-carbon/epoxy composites. In this experiment, interply hybrid specimens with four stacking sequences were tested and compared to laminates made of either only carbon or only basalt layers. The result was that the ballistic limits of all stacking sequences were enhanced compared to those of carbon-only or basalt-only laminates, and the response to impact was largely improved. Also, since basalt fibers are far less expensive than carbon, the hybridization made the laminates cost-effective [16].
Besides, research has also been carried out on polymer matrix composites and metal/ceramic composites, which are generally used for spacecraft applications. These were tested for hypervelocity impact behavior, since spacecraft must withstand particles and meteoroids that move at high velocities in space. It was found that once the composites are subjected to hypervelocity impact, spallation and delamination as well as adiabatic shear tend to form [17,18]. In these experiments, Ti was used as one of the fibers, since it can strengthen the anti-penetration ability of aluminum alloys. The combination of Ti and M40 was also tried under hypervelocity impact, and the behavior of thin composite targets was observed. The results were positive, as the composites presented high impact resistance [19].
Apart from spacecraft applications, hybrid composites in the form of tendons are widely used in civil infrastructure, buildings, and offshore infrastructure services as well. These are used as tension members and employ high-tensile-strength steel wires, bars, and rebars, which are prone to corrosion. To avoid this, the use of fiber-reinforced polymer matrix composites, especially carbon-fiber-reinforced polymer matrix composites, was proposed [20,21]. Later, novel carbon/glass hybrid thermoplastic composite rods consisting of unidirectional PAN-based carbon fiber, braids of E-glass fibers, and a thermoplastic epoxy matrix were developed and tested for tensile properties and fracture behavior. The tensile modulus and strength increased with increasing carbon fiber volume fraction [22].
Chapter 3
The primary requirement for carrying out a simulation in Ansys is the characterization of the materials and their properties. Since this analysis was on hybrid composites, two fibers and a matrix have been used, whose properties are highlighted below. To make a hybrid composite, the fibers and matrix must be selected in such a way that the composite has good properties in both the longitudinal and transverse directions. In this study, boron and carbon were selected in combination with epoxy, due to their better compatibility. Carbon, being weaker in the transversal direction, performs well when combined with boron, which exhibits good properties in both the axial and transversal directions. The properties of these materials are discussed below:
Resin Epoxy: Epoxies are polymerizable thermosetting resins and are available in a variety of viscosities, from liquid to solid. Epoxies are widely used as resins for prepreg materials and structural
adhesives. Their advantages are high strength and modulus, low levels of volatiles, excellent adhesion, low shrinkage, good chemical resistance, and ease of processing. Epoxy resins are
often used as a matrix because they have excellent mechanical properties and good handling properties, including ease of fabrication.
Table 3.1 Properties of Resin Epoxy Matrix
Boron: Boron fibers are very stiff and have a high tensile and compressive strength and possess excellent resistance to buckling. These fibers have relatively large diameters (usually 100-130µm) and
do not flex well. They are usually considered for their light weight. Boron fibers are primarily used in military aviation applications, especially for the repair of cracks and patches.
Table 3.2 Properties of Boron Fiber
Carbon: Carbon fibers are very stiff and strong, 3 to 10 times stiffer than glass fibers. Carbon fibers are used for structural aircraft applications. Advantages include high strength and corrosion resistance.
Table 3.3 Properties of Carbon (T300) Fiber
Chapter 4
The finite element method (FEM) is a numerical method for solving problems of engineering and mathematical physics. Typical problem areas of interest include structural analysis, heat transfer, fluid
flow, mass transport, and electromagnetic potential. The formulation of a problem using the finite element method results in a system of algebraic equations which yields approximate values of the
unknowns at a discrete number of points over the domain [4]. In this process, a model is subdivided into various small and simple geometries which are easier to solve. The subdivision of a whole
domain into simpler parts has several advantages [5]:
• Accurate representation of complex geometry
• Inclusion of dissimilar material properties
• Easy representation of the total solution
• Capture of local effects.
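As a minimal illustration of how the finite element formulation turns a continuum problem into a system of algebraic equations, the sketch below assembles and solves a two-element axial bar. The material values are illustrative and are not the model parameters used in this study.

```python
# Minimal 1D finite element example: an axial bar fixed at one end with a
# tip load, discretized into two equal two-node elements.
# Illustrative values only (not from this thesis).

E = 200e9      # Young's modulus, Pa
A = 1e-4       # cross-sectional area, m^2
L_total = 1.0  # bar length, m
n_elem = 2
Le = L_total / n_elem
k = E * A / Le  # element stiffness, N/m

# Assemble the global stiffness matrix for 3 nodes.
n_nodes = n_elem + 1
K = [[0.0] * n_nodes for _ in range(n_nodes)]
for e in range(n_elem):
    K[e][e] += k
    K[e][e + 1] -= k
    K[e + 1][e] -= k
    K[e + 1][e + 1] += k

F_tip = 1000.0  # tip load, N
# Apply the fixed boundary condition at node 0 by reducing to nodes 1..2,
# then solve the 2x2 system by elimination (Cramer's rule).
K11, K12 = K[1][1], K[1][2]
K21, K22 = K[2][1], K[2][2]
f1, f2 = 0.0, F_tip
det = K11 * K22 - K12 * K21
u1 = (f1 * K22 - K12 * f2) / det  # mid-node displacement, m
u2 = (K11 * f2 - K21 * f1) / det  # tip displacement, m

# For this simple case the FE solution reproduces the exact u_tip = F L / (E A).
print(u2, F_tip * L_total / (E * A))
```

The solve step is done by hand here only because the reduced system is 2×2; a real FE code uses a general sparse solver.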
4.1 Modeling Steps
In this research work the modeling was carried out using SolidWorks, whereas the simulation was done in ANSYS (Workbench). The process flowchart is outlined below.
Figure 4.1 Steps in modeling
4.2 Cases Studied
A total of three models were studied in this work, each with varying volume concentrations of fibers and matrix. Boron and carbon fibers were taken for hybridization and resin epoxy was
taken as the matrix. The fiber diameters were kept constant throughout: 100 µm for boron fibers and 7.5 µm for carbon, with a body length of 2000 µm.
In the first model, the volume of carbon fibers is relatively low compared to the volume of boron fibers, which in turn produces a high buckling effect.
Figure 4.2 Orthogonal and front view of Model 1
The volume concentrations of boron fibers and carbon fibers are approximately the same in the second model, which produces a relatively low buckling effect compared to the first model.
Figure 4.3 Orthogonal and front view of Model 2
In the final model, the buckling effect becomes very low because of the high concentration of carbon fibers, as shown in the figure below.
Figure 4.4 Orthogonal and front view of Model 3
4.2.1 Triangular Models
These models have been derived from the parent models already illustrated in Section 4.2. For simplification, a 1/6th part of each model was cut using the symmetry tool and the analysis was
performed on these reduced models. For the analysis, different pressures were applied such that the models deform in the longitudinal (axial) direction, and displacements were also applied to the
bodies. Deformations and elastic strains were then deduced.
Triangular Model 1:
In this case, the model was fixed on one surface and allowed to displace from the opposite surface by applying a pressure of 10 MPa on the lateral surfaces. The model was then run on a
fine mesh, which gave about 102706 nodes and 33277 elements in total.
Figure 4.5 Symmetry cut view of Triangular Model 1
Figure 4.6 Front face meshing of Triangular Model 1
Figure 4.7 Boundary conditions on Triangular Model 1
Triangular Model 2:
With the same boundary conditions, the second model was also analyzed with an increase in the number of elements to 62652 which resulted in 308228 nodes. This increase was due to the increase in the
number of carbon fibers.
Figure 4.8 Boundary conditions on Triangular Model 2
Triangular Model 3:
Model 3 was analyzed in the same way by applying similar boundary conditions, but due to the large number of parts it resulted in almost triple the number of nodes, i.e., 866366, compared to the
second model, and as many as 413704 elements.
All the above cases were used to analyze the Young’s modulus and Poisson’s ratio by numerical calculations using the normal stresses and strains which were inferred from the analysis.
Figure 4.9 Symmetry cut view of Triangular Model 3
Figure 4.10 Meshing on Triangular Model 3
Figure 4.11 Front view of Triangular Model 3
Figure 4.12 Fine meshing of carbons and matrix
Furthermore, Triangular Model 2 was also tested by increasing the length of the bar to 4000 µm, from the previous 2000 µm. This was done to test the effect on strain in the bar as its length
tends to infinity. The results were quite satisfactory. Below is the figure showing Triangular Model 2 with a length of 4000 µm. The boundary conditions were the same as before.
Figure 4.13 Direct compressive pressure applied on Model 2 by increasing the length to 4000µm
Until now, the pressure was applied on the models such that the body deformed in the longitudinal direction, i.e., the applied pressure was compressive. In addition to this, with
similar boundary conditions, pressure was applied such that the body was subjected to deformation in the direction perpendicular to the axis, and the results were compared. This was done to
analyze the behavior of the bar under tensile pressure. As in the previous case, this comparison was done on both the 2000 µm and 4000 µm bars.
Figure 4.14 Direct tensile pressure applied on a Model 2 by increasing the length to 4000µm
In addition to this, the transversal properties were also estimated by taking the same models but with different boundary conditions. In the previous case, i.e., in the calculation of longitudinal
properties, the body was allowed to deform in the axial direction whereas in this case, the body was allowed to move in the transversal direction and restricted in the axial direction. From this, the
bulk modulus was computed and simultaneously the Young’s modulus and Poisson’s ratio were calculated.
4.2.2 Hexagonal Models
As with the triangular models, the hexagonal models were also derived from the parent models. Two types of analysis were carried out on each of the three hexagonal models. In the first
case, shear forces of varying magnitudes were applied on the lateral surfaces of the bodies in the axial direction.
Hexagonal Model 1:
The first model consisted of 32 parts. As shown in the figure, the bottom surface was fixed and a pressure of 1 MPa was applied on the top surface such that it shears in the axial
direction. On the remaining four sides, a pressure of τcos60° was applied in opposite directions. Because of the small number of parts, a fine mesh could be used.
Figure 4.15 Hexagonal model 1
Hexagonal Model 2:
Likewise, model 2 was subjected to the above-mentioned boundary conditions. With almost three times the number of parts of the first model, it still ran smoothly on a fine mesh.
Figure 4.16 Hexagonal Model 2
Hexagonal Model 3:
This model consists of the highest number of parts among all three models. Due to this large number, it was very difficult to solve using a fine mesh; therefore, a medium mesh was used
instead. All the other imposed conditions were the same.
Figure 4.17 Hexagonal Model 3
Apart from the longitudinal shear force, another type of shear force was applied to the body as well: the transverse shear force, which acts exactly in the direction perpendicular to the
axis. This operation was implemented on model 2, because of its equal volume concentrations of boron and carbon fibers.
In this analysis, the bottom side is again fixed and a pressure of 1 MPa is applied on the top surface, but in the transverse direction, which allows shearing in the transverse direction.
Additionally, a shear force of magnitude τcos60° is applied on the remaining lateral faces in opposite directions. Moreover, compressive and tensile forces were also applied, as detailed in the
image below. The magnitude of these pressures was approximately τcos30°.
Due to the complexity of calculating the shear forces to be applied on the lateral surfaces of the hexagonal body, meshing was done but the simulations could not be carried out; therefore, these
models were replaced by rectangular models, which are discussed in the next section.
4.2.3 Rectangular Models
A couple of experiments were conducted on a series of rectangular models and the longitudinal and transversal shear moduli were determined. This was done to analyze the shear stiffness in both
the directions. The meshing was done on these models as illustrated below. A superfine mesh was generated near the matrix and carbon fibers by the use of ‘body sizing’ with a size of about 2.5µm
whereas, a regular mesh was generated for the boron fibers.
Figure 4.18 Meshing of Rectangular Model
Estimation of Longitudinal Shear Stiffness:
In the first experiment, the bottom surface was fixed and a pressure of 100 MPa was applied on the top surface in the axial direction. Due to this pressure, a reaction force is created at the fixed
support in the opposite direction, which creates a shear in the axial direction. This causes the body to deform, and the surface displacements were then determined using a probe. With these
displacements, the longitudinal shear modulus was calculated by various formulae.
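The thesis does not reproduce those formulae here, but the basic small-strain estimate of a shear modulus from averaged surface displacements can be sketched as follows. The displacement values below are hypothetical, for illustration only; the applied stress and bar height follow the figures in the text.

```python
import math

# Small-strain estimate of shear modulus from a shear test:
# G = tau / gamma, with gamma = atan(u/h) ~ u/h for small displacements.

tau = 100e6   # applied shear stress, Pa (100 MPa, as in the text)
h = 2000e-6   # specimen height, m (2000 um bar)

# Hypothetical probed surface displacements (m), averaged as described.
displacements = [1.1e-6, 1.2e-6, 1.15e-6, 1.18e-6]
u_avg = sum(displacements) / len(displacements)

gamma = math.atan(u_avg / h)  # shear angle, rad
G = tau / gamma               # shear modulus, Pa
print(f"G = {G / 1e9:.1f} GPa")
```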
Figure 4.19 Boundary Conditions on Rectangular model 1 in longitudinal directions
Estimation of Transversal Shear Stiffness:
In the next experiment, the same boundary conditions were used; the only difference was in the application of the pressure. Here, the pressure was applied in the transverse direction, which
generated a shear force in the transverse direction, and the displacements were recorded as before. Using these displacements, the transverse shear modulus was deduced. In this case, due to the
deformation, the body takes a skewed shape. From this skew angle, the shear angle and other properties were determined.
Figure 4.20 Boundary Conditions on Rectangular model 1 in transversal directions
In both the longitudinal and transverse directions, analysis was carried out to obtain the shear modulus for both Model 2 and Model 3. Reasonable results were obtained for the second model, but in
the case of the third model, which contained 452 parts, the model failed to run because of limitations on the number of nodes and elements.
Chapter 5
The results of all the models have been discussed in this chapter.
5.1 Results of Triangular Models
The first phase of simulations was performed to determine the longitudinal and transversal properties.
5.1.1 Estimation of Longitudinal Properties
Figure 5.1 Distribution of Normal Stresses on Triangular Model 1
Figure 5.2 Distribution of Normal Stresses on all three Triangular Models
From the above analysis, it was inferred that the stresses induced in the body are proportional to the fiber concentrations. As the volume of carbon and matrix increases gradually, the model tends
to become weaker because of the lower strength of carbon fibers and epoxy compared to boron fibers.
The strains induced in all directions are shown below.
Figure 5.3 Distribution of Normal Elastic Strains on Triangular Model 1 in Transversal and Longitudinal directions
Here, it is clear that the strains in the transverse directions grow with a considerable difference, but in the axial direction they remain nearly constant, since the model is linearly elastic.
Distribution of stresses and strains due to compression on all the three triangular models are tabulated below.
Table 5.1 Table with Normal Stresses and Normal Strains based on the concentration of fibers and matrix for all the Triangular Models
Taking these values into consideration, the longitudinal properties were determined by numerical computations. The calculated results were then compared to the theoretical ones.
Table 5.2 Comparison of Theoretical and FE Model values of Young’s modulus and Poisson’s Ratio in axial direction for all the three Triangular Models
From the table, it can be concluded that the results from the finite element analysis of the model are close to the theoretical values, which means the model captures the longitudinal
properties well. Bar graphs are plotted for the same.
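The "theoretical" longitudinal values that unidirectional composites are usually compared against come from the rule of mixtures (Voigt average). The sketch below uses typical literature property values for boron, carbon T300, and epoxy, not the exact figures tabulated in this thesis.

```python
# Longitudinal (axial) properties of a unidirectional hybrid composite from
# the rule of mixtures (Voigt average), a common source of "theoretical"
# values compared against FE results. Property values are illustrative.

def longitudinal_properties(phases):
    """phases: list of (E, nu, volume_fraction); fractions must sum to 1."""
    assert abs(sum(v for _, _, v in phases) - 1.0) < 1e-9
    E_L = sum(E * v for E, _, v in phases)    # parallel (Voigt) average
    nu_L = sum(nu * v for _, nu, v in phases)
    return E_L, nu_L

phases = [
    (400e9, 0.20, 0.40),  # boron fiber (typical literature values)
    (230e9, 0.27, 0.20),  # carbon T300 fiber (typical literature values)
    (3.5e9, 0.35, 0.40),  # epoxy matrix (typical literature values)
]
E_L, nu_L = longitudinal_properties(phases)
print(f"E_L = {E_L / 1e9:.1f} GPa, nu_L = {nu_L:.3f}")
```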
Theoretical Results:
The following plots show the theoretical results for the longitudinal and transversal properties of Triangular Model 1.
Figure 5.4 Plot between Total Concentration of Fibers and Young’s Modulus in Longitudinal Direction
It can thus be deduced that, with the increase in the concentration of fibers, the Young's modulus gradually decreases. This graph shows the dependence of the elastic modulus on the concentration of fibers.
Figure 5.5 Plot between Total Concentration of Fibers and Poisson’s Ratio in Longitudinal Direction
The Poisson’s ratio keeps on increasing with the increase in the total concentration of fibers.
Figure 5.6 Plot between Total Concentration of Fibers and Young’s Modulus in Transversal Direction
From the graph, it is observed that, with the increase in concentration of fibers, the Young’s Modulus decreases drastically.
Figure 5.7 Plot between Total Concentration of Fibers and Poisson’s Ratio in Transversal Direction
Here, it can be derived from the graph that, with the increase in the concentration of fibers, there is a gradual decrease in the Poisson’s ratio.
5.1.2 Behavior of Elastic Strain on Infinite Lengths
The variation of the elastic strains also depends on the length of the bar considered. Of the two models, one was 2000 µm in length and the other 4000 µm, double the length of the first. For the
first model, there was a considerable difference between the strains, whereas for the second model the difference was negligible. This means that as the length of the model tends to infinity, the
effect of the elastic strains reduces and eventually becomes constant. The simulation models are shown below with the strain values.
Figure 5.9 Normal Elastic Strain of Triangular Model 1 increased to length 4000µm in Axial Direction
5.1.3 Estimation of Transversal Properties
Some numerical computations were performed to calculate the transversal properties. First, the bulk modulus was determined from the pressure applied on the lateral surfaces, and this bulk modulus
was then used to calculate the Young's modulus and Poisson's ratio.
Table 5.3 Comparison of Theoretical and FE Model values of Young’s modulus and Poisson’s Ratio in Transversal Direction for all the three Triangular Models
5.2 Results of Rectangular Models
5.2.1 Estimation of Longitudinal Shear Stiffness
By applying pressure in the longitudinal direction, the deformations in the axial direction were obtained. The deformations were found to be maximum at the fixed supports and minimum at the ends,
because no forces were applied on the side faces of the bar and the bar was restricted from moving in all three directions.
Figure 5.10 Deformation of Rectangular Model 1 subjected to Longitudinal Shear force
Also, the surface displacements were recorded, since these were used to calculate the shear modulus. A set of values was taken from the surface using a probe, and by averaging them the shear modulus was calculated.
Figure 5.11 Surface Displacements of Rectangular Model 1 subjected to Longitudinal Shear force
5.2.2 Estimation of Transversal Shear Stiffness
Since the pressure was applied in the transverse direction, the deformation resulted in a skewed body, from which the skew angle could be measured to calculate the shear angle.
Figure 5.12 Deformation of Rectangular Model 1 subjected to Transversal Shear force
The average of the displacements was taken from the top surface to calculate the shear modulus using the probing technique. The figure below shows the surface deformation.
Figure 5.13 Surface Displacements of Rectangular Model 1 subjected to Transversal Shear force
On tabulating the results, a considerable variation was found between the theoretical and FE values of the shear modulus in the axial direction, and an even larger variation in the transversal
shear modulus.
Table 5.4 Comparison of Theoretical and FE Model values of Shear Modulus in Longitudinal and Transversal Direction for Rectangular Model 1 and 2
Figure 5.14 Plot between Total Concentration of Fibers and Shear Modulus in Longitudinal Direction
It can be inferred from the graph that, shear modulus increases till the concentration of fibers is 15% and decreases gradually thereafter.
Figure 5.15 Plot between Total Concentration of Fibers and Shear Modulus in Transversal Direction
This graph shows a rapid decrease in the shear modulus till the concentration of 30% and a gradual decrease thereafter.
Chapter 6
The finite element experiments, based on prescribed loads acting on the cell, gave results close to the theoretical ones, in which the load distribution acting on the cell is not prescribed but is
calculated from the known loads applied to the infinite ensemble of cells.
It can therefore be deduced that the finite element method can be applied to more complicated geometries to find the compressive strength governed by fiber buckling.
The compressive strength of hybrid composites inside a polymer matrix can be investigated further with respect to fiber buckling. Because the tensile and compressive strengths of unidirectional
fiber-reinforced plastics are significantly different, the stiff boron fibers can support the carbon fibers, preventing them from buckling.
Finding the optimal combination of carbon and boron fibers is the ultimate goal; the presented work was the first step in this direction.
[1] Peters, S. T. (Ed.). (2013). Handbook of composites. Springer Science & Business Media.
[2] Jones, R. M. (1998). Mechanics of composite materials. CRC press.
[3] Matthews, F. L., & Rawlings, R. D. (1999). Composite materials: engineering and science. Elsevier.
[4] Daryl L. Logan (2011). A first course in the finite element method. Cengage Learning. ISBN 978-0495668251.
[5] Reddy, J.N. (2006). An Introduction to the Finite Element Method (Third ed.). McGraw-Hill. ISBN 9780071267618.
[6] Bandaru, A. K., Ahmad, S., & Bhatnagar, N. (2017). Ballistic performance of hybrid thermoplastic composite armors reinforced with Kevlar and basalt fabrics. Composites Part A: Applied Science and
Manufacturing, 97, 151-165.
[7] Swolfs, Y., McMeeking, R. M., Rajan, V. P., Zok, F. W., Verpoest, I., & Gorbatikh, L. (2015). Global load-sharing model for unidirectional hybrid fibre-reinforced composites. Journal of the
Mechanics and Physics of Solids, 84, 380-394.
[8] Dai, G., & Mishnaevsky, L. (2014). Fatigue of hybrid glass/carbon composites: 3D computational studies. Composites Science and Technology, 94, 71-79.
[9] Pandya, K. S., Pothnis, J. R., Ravikumar, G., & Naik, N. K. (2013). Ballistic impact behavior of hybrid composites. Materials & Design, 44, 128-135.
[10] Muhi, R. J., Najim, F., & de Moura, M. F. (2009). The effect of hybridization on the GFRP behavior under high velocity impact. Composites Part B: Engineering, 40(8), 798-803.
[11] Ross, A. (2006). Basalt fibers: Alternative to glass?. Composites Technology, 12(4).
[12] Deák, T., & Czigány, T. (2009). Chemical composition and mechanical properties of basalt and glass fibers: a comparison. Textile Research Journal, 79(7), 645-651.
[13] Sarasini, F., Tirillò, J., Ferrante, L., Valente, M., Valente, T., Lampani, L., … & Sorrentino, L. (2014). Drop-weight impact behaviour of woven hybrid basalt–carbon/epoxy composites. Composites
Part B: Engineering, 59, 204-220.
[14] Sarasini, F., Tirillò, J., Valente, M., Ferrante, L., Cioffi, S., Iannace, S., & Sorrentino, L. (2013). Hybrid composites based on aramid and basalt woven fabrics: Impact damage modes and
residual flexural properties. Materials & Design, 49, 290-302.
[15] Sarasini, F., Tirillò, J., Valente, M., Valente, T., Cioffi, S., Iannace, S., & Sorrentino, L. (2013). Effect of basalt fiber hybridization on the impact behavior under low impact velocity of
glass/basalt woven fabric/epoxy resin composites. Composites Part A: Applied Science and Manufacturing, 47, 109-123.
[16] Tirillò, J., Ferrante, L., Sarasini, F., Lampani, L., Barbero, E., Sánchez-Sáez, S., & Gaudenzi, P. (2017). High velocity impact behaviour of hybrid basalt-carbon/epoxy composites. Composite
Structures, 168, 305-312.
[17] Zhu, D., Chen, G., Wu, G., Kang, P., & Ding, W. (2009). Hypervelocity impact damage to Ti–6Al–4V meshes reinforced Al–6Mg alloy matrix composites. Materials Science and Engineering: A, 500(1),
[18] Robinson, J. H., & Nolen, A. M. (1995). An investigation of metal matrix composites as shields for hypervelocity orbital debris impacts. International journal of impact engineering, 17(4-6),
[19] Zhu, D., Chen, Q., & Ma, Z. (2015). Impact behavior and damage characteristics of hybrid composites reinforced by Ti fibers and M40 fibers. Materials & Design, 76, 196-201.
[20] Meier, U. (2000). Composite materials in bridge repair. Applied Composite Materials, 7(2), 75-94.
[21] Meier, U. (2012). Carbon fiber reinforced polymer cables: Why? Why not? What if?. Arabian Journal for Science and Engineering, 37(2), 399-411.
[22] Naito, K., & Oguma, H. (2017). Tensile properties of novel carbon/glass hybrid thermoplastic composite rods. Composite Structures, 161, 23-31.
Nemani Amruta Rupa earned her Bachelor’s degree in Mechanical Engineering from Jawaharlal Nehru Technological University, Hyderabad, India in 2014. She started her career as a Systems Engineer at
Infosys Limited in 2015. Later, in 2016, she joined the University of Texas at Arlington, USA, for her Master’s in Mechanical Engineering. She then started her research work with Dr. Andrey Beyle,
and successfully graduated in 2017.
As a Bachelor’s student, she did study project on ‘Manufacturing of gearbox on CNC machine’ from Hindustan Machine Tools Limited, a government of India organization in Hyderabad. In her thesis, she
worked on ‘Finite Element Analysis on Hybrid Composites’. She always wanted to work in design and manufacturing sectors.
6.7. Flow enhancement factor
The flow enhancement factor \(E\) is a multiplier for the temperature-dependent rate factor \(A(T')\) that accounts for ice softening (\(E>1\)) or hardening (\(E<1\)) due to impurities or an
anisotropic fabric. For the setting ENHMOD = 1, three different values can be defined in the run-specs headers:
• ENH_FACT: \(E_\mathrm{SIA}\), for the shallow-ice approximation (SIA) of grounded ice.
• ENH_STREAM: \(E_\mathrm{SStA}\), for the shelfy-stream approximation (SStA) of grounded ice.
• ENH_SHELF: \(E_\mathrm{SSA}\), for the shallow-shelf approximation (SSA) of floating ice.
Further settings of ENHMOD allow making the enhancement factor dependent on age or time of deposition, so that different values for glacial and interglacial ice can be defined. This affects only the
SIA. See the documentation in the run-specs headers for details.
For computing the velocity field with hybrid SIA/SStA dynamics, \(E_\mathrm{SIA}\) is used for the SIA solution, \(E_\mathrm{SStA}\) is used for the SStA solution, and then the velocity field is
obtained as a weighted average of the two solutions. However, for computing the temperature field (the dissipation term depends on the ice viscosity and thus on the enhancement factor), this is not
possible because no separate SIA and SStA solutions are available. Therefore, a weighted enhancement factor \(E_\mathrm{SIA/SStA}\) is computed from \(E_\mathrm{SIA}\) and \(E_\mathrm{SStA}\), using
the same weighting rule used for the velocity. This weighted enhancement factor is then used for the dissipation term of the temperature equation.
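Assuming the weighting rule is a linear blend between the two solutions (the exact rule is defined in the SICOPOLIS source; a linear blend with the same weight used for the velocity is only a plausible sketch), the weighted enhancement factor can be written as:

```python
def weighted_enhancement(E_sia, E_ssta, w):
    """Sketch of the weighted enhancement factor for hybrid SIA/SStA
    dynamics: the same weight w (0 = pure SIA, 1 = pure SStA) used to
    blend the velocity solutions is applied to the enhancement factors.
    A linear blend is assumed here, for illustration."""
    return (1.0 - w) * E_sia + w * E_ssta

# Example: ice softened in the SIA (E > 1), neutral in the SStA,
# with a 25% SStA weight at this grid point.
print(weighted_enhancement(3.0, 1.0, 0.25))
```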
Wave function of real space parallel DMRG.
I encounter some problems when I try to test the real-space parallel DMRG. In the calculation, I write the wave function psi to the hard disk with
writeToFile("path", psi);
When I load the wave function with readFromFile("path", psi) and compute its inner product inner(psi, psi), the error is:
From line 568, file indexset.cc
In findIndex: more than one Index found, consider using findInds instead.
What makes such an error?
Thanks for the question. Often this kind of error has to do with how in ITensor, MPS are made by first making a collection of site indices ("site set") and then you form the MPS with these. Are you
also saving the sites of your MPS to disk too and reading them back in? Or are you making a brand new set of sites?
But I do see that even `inner(psi,psi)` is failing which should work for any MPS psi whatever sites it has.
So can you please provide some more context like a minimal code sample that reproduces this issue?
Also have you tried printing out `psi` to look at the indices each of its tensors has? That could reveal what's going on: it might have a structure that's not a valid MPS.
Oh I just saw that this is for the real-space parallel DMRG code. That code is really provided for research/example purposes only and doesn't come with guarantees like about the wavefunction being in
a given global state after you read it from disk. Basically in more detail there are additional "V" tensors in between each group of MPS tensors that "glue" them together into a valid global MPS, and
you might have not included these back into psi when you made it.
May I ask what you are using the real-space parallel DMRG for? You may not strictly need it for your application.
Hi Miles,
Thanks for your answer. I think that considering the glue tensors V is the key point to fix this problem.
I am testing the parallel DMRG for the Hubbard model with Sz, electron number, and ky conserved. Storing the wave function is very useful for the following considerations:
For challenging quasi-2D models, we usually do not know in advance how many sweeps and how many states are enough to make the result truly converged. So it is very convenient to use the stored wave
function as input when we increase the sweep numbers and bond dimensions, which avoids restarting the calculation from the beginning and thus saves time and computational resources.
On the other hand, the maximum running time is limited on some public clusters, which may be one or two weeks. In addition, we need to shut down the personal cluster during power maintenance. Hence,
the program will be more flexible if we can resubmit it based on the stored wave function in the previous calculation.
Zongsheng Zhou
Hi Zongsheng,
Glad the point about the V tensors helped. Yes I think that's the key point to notice. Additionally, there could also be some subtlety about the state of each node as the calculation is stopped which
could leave them in an incompatible state, so that is another thing to consider.
In general, that parallel DMRG code was not very focused on the ability to restart the calculations, with the thinking that anyway the time to store and re-prepare the state (e.g. recalculating the
tensors which wrap the Hamiltonian into the MPS) does not have good parallel scaling so could cancel out the benefits of doing a parallel calculation in the first place. Also it's just a more
complicated aspect of the code to get working correctly.
Hope you find a way to use it that's helpful to you –
Design and Sizing of Primary Sedimentation Tanks
The design and sizing of primary sedimentation tanks involve several key factors to ensure efficient solids removal and proper wastewater treatment. Here are the key considerations and steps involved
in the design process:
1. Flow Rate and Design Criteria:
• Determine the average and peak flow rates of the influent wastewater based on the projected population or industrial wastewater characteristics.
• Consider any applicable design criteria or regulatory requirements related to sedimentation tank performance, such as solids removal efficiency or detention time.
2. Surface Area Calculation:
• Calculate the required surface area of the sedimentation tank based on the hydraulic loading rate (HLR) and desired detention time.
• The HLR is typically expressed in m^3/m^2/day and represents the flow rate per unit area of the tank. It is determined by dividing the flow rate (Q) by the surface area (A) of the tank.
• The detention time is the desired duration that the wastewater remains in the tank and is typically expressed in hours. It is determined by dividing the effective volume (V) of the tank by the
flow rate (Q).
• The surface area can be calculated by dividing the flow rate by the hydraulic loading rate: A = Q / HLR.
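The surface-area relation above can be expressed as a one-line helper (units follow the text: Q in m^3/day, HLR in m^3/m^2/day):

```python
def required_surface_area(Q, HLR):
    """Required settling surface area (m^2) from flow rate Q (m^3/day)
    and hydraulic loading rate HLR (m^3/m^2/day): A = Q / HLR."""
    return Q / HLR

# The worked example later in this section: 500 m^3/day at 200 m^3/m^2/day.
print(required_surface_area(500, 200))
```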
3. Tank Dimensions:
• Determine the appropriate tank dimensions, such as length, width, and depth, based on the available space and desired surface area.
• Consider the desired aspect ratio (length-to-width ratio) for the tank, typically between 2:1 and 4:1, to promote efficient solids settling.
4. Sludge Retention Time (SRT):
• Determine the desired sludge retention time, which is the time solids spend in the tank, to ensure proper settling and sludge removal.
• The SRT is typically based on the desired level of solids removal and can vary depending on the treatment objectives and influent characteristics.
• The SRT can be calculated by dividing the sludge volume by the flow rate: SRT = V / Q.
5. Inlet and Outlet Design:
• Design appropriate inlet structures to evenly distribute the influent flow across the tank to prevent short-circuiting and ensure uniform settling.
• Install weirs, effluent launders, or other suitable devices to collect and remove clarified effluent from the tank.
• Consider the use of baffles or flow control mechanisms to minimize turbulence and improve settling efficiency.
6. Sludge Removal:
• Select suitable mechanisms for sludge removal, such as scraper blades, suction devices, or sludge pumps.
• Determine the frequency and method of sludge removal based on the expected sludge accumulation rate and the required maintenance schedule.
7. Structural Considerations:
• Design the sedimentation tank structure to withstand the hydraulic and mechanical forces exerted by the wastewater flow.
• Consider the use of appropriate materials, such as concrete or steel, based on the tank size, loadings, and local regulations.
Solved Example on Design and Sizing of Primary Sedimentation Tanks
Let's consider designing a rectangular sedimentation tank for a wastewater treatment plant. The influent flow rate is 500 m^3/day, and we want to achieve a detention time of 3 hours.
1. Surface Area Calculation:
• We can calculate the hydraulic loading rate (HLR) using the flow rate (Q) and surface area (A) equation: HLR = Q / A.
• Let's assume a hydraulic loading rate of 200 m^3/m^2/day.
• To calculate the required surface area: A = Q / HLR = 500 / 200 = 2.5 m^2.
2. Tank Dimensions:
• Determine the tank dimensions based on the desired surface area and aspect ratio.
• Let's assume a length-to-width ratio of 3:1.
• One simple approach is to assume a width of 1 meter and find the length using the aspect ratio: Length = 3 * Width = 3 * 1 = 3 meters.
• So, the tank dimensions would be 3 meters in length and 1 meter in width, giving a surface area of 3 m^2, which comfortably covers the required 2.5 m^2.
3. Sludge Retention Time (SRT):
• In this example, we'll assume an effective volume of 12 m^3.
• The SRT can be calculated by dividing the effective volume (V) of the tank by the flow rate (Q).
• SRT = V / Q = 12 / 500 = 0.024 days, or about 0.58 hours.
4. Inlet and Outlet Design:
• Design appropriate inlet structures to evenly distribute the influent flow across the tank.
• Install weirs or effluent launders to collect and remove clarified effluent from the tank.
5. Sludge Removal:
• Determine the frequency and method of sludge removal based on the expected sludge accumulation rate and maintenance requirements.
• Let's assume periodic sludge removal every 2 weeks using scraper blades.
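The sizing arithmetic in the worked example can be sketched in a few lines of Python. This is a minimal illustration only; the flow rate, loading rate, aspect ratio, and volume are the assumed example values, not design recommendations, and the dimensions here are solved exactly from the aspect ratio rather than rounding the width to 1 meter:

```python
import math

# Assumed inputs taken from the worked example (not design recommendations)
Q = 500.0      # influent flow rate, m^3/day
HLR = 200.0    # hydraulic loading rate, m^3/m^2/day
aspect = 3.0   # length-to-width (aspect) ratio
V = 12.0       # assumed effective volume, m^3

# Step 1: required surface area, A = Q / HLR
A = Q / HLR                # 2.5 m^2

# Step 2: dimensions that satisfy the aspect ratio exactly (A = L * W = aspect * W^2)
W = math.sqrt(A / aspect)  # ~0.91 m
L = aspect * W             # ~2.74 m

# Step 3: sludge retention time, SRT = V / Q (in days), converted to hours
SRT_hours = V / Q * 24     # ~0.58 h

print(f"A = {A:.2f} m^2, L = {L:.2f} m, W = {W:.2f} m, SRT = {SRT_hours:.2f} h")
```

Solving W from the area gives a slightly smaller tank than the rounded 1 m width; either is acceptable as long as the provided area meets or exceeds the required 2.5 m^2.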
How to find a cross price elasticity of demand from a demand equation
Finding the price elasticity of demand, and the cross price elasticity of demand, from a demand function is something that most intermediate microeconomics courses will require you to know.
This idea is related to
finding the point price elasticity of demand
covered in a previous post.
For more information on the process you should review that post.
This post is going to go over an example of calculating both the price elasticity of demand and the cross price elasticity of demand for two related goods from the following demand function, to
demonstrate how the process is done.
First, we need to have a demand function. The following demand function for hot dogs is given as:
Q = 20 - 4P + Ph - Pb
Where Q is quantity.
P is the price of the product.
Ph is the price of hamburgers. We know that hamburgers are a substitute for hot dogs because Ph enters the demand function with a positive sign, meaning that higher hamburger prices result in a
higher quantity demanded of hot dogs.
And Pb is the price of hot dog buns. We know that hot dog buns are complements to hot dogs because Pb enters with a negative sign: as the price of buns rises, the quantity demanded of hot dogs
declines.
To progress further, it is necessary to know the equation for calculating the point price elasticity of demand:
e = (change in Q/change in P) * (P/Q) or
Point Price Elasticity of Demand = (P/Q)(∆Q/∆P)
We first need to find what ∆Q/∆P is and we can do this by either taking the derivative of our demand function with respect to P, or by changing P by one unit and finding out how much Q changes by.
Either way, the result will be -4.
We now need to know what equilibrium P and Q are to solve the rest of the equation. For this example, I will assume that P = 1, Ph=1, and Pb=1. If you need to brush up on solving for equilibrium
mathematically you can check this prior post. Plugging in these values, we are left with Q = 16. We can then plug each of these values into the point price elasticity of demand equation to get:
e = -4*(1/16) = -0.25
This shows us that the good has an inelastic point price elasticity of demand measure because it is greater than -1 or the absolute value of the measure is less than 1.
In order to find out what the cross price elasticity of demand is, we need to do this same process but use the price for the related good. Let's begin with the price associated with the substitute
good, or hamburgers. The equation for estimating the point cross price elasticity of demand is:
Point Price Elasticity of Demand = (P2/Q1)(∆Q1/∆P2)
Where Q1 represents the quantity of the good in question (hot dogs) and P2 represents the price of the related good (hamburgers).
We can find (∆Q1/∆P2) using the same method above to get 1.
Q will still be equal to 16 (in equilibrium) and P is 1, and this leaves us with:
e = 1*(1/16) = 1/16
So the cross price elasticity of demand between hot dogs and hamburgers is 1/16, and since it is positive we confirm that they are substitutes.
We can do a similar process with hot dog buns, this will give us a ∆Q1/∆P2 of -1 and a resulting elasticity measure of -1/16. Since it is negative, we confirm that they are complements.
Finally, what if the price of hamburgers increases to 2? Plugging Ph = 2 into the demand function gives a new equilibrium Q of 17, and calculating the new elasticity will give us:
e = 1*(2/17) = 2/17 = 0.1176
We can see that the number is still positive, and it has risen relative to 1/16, meaning that quantity demanded has become more sensitive to price changes in the related good.
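The whole calculation above can be checked with a short Python sketch that recomputes the elasticities directly from the demand function. The function names here are just for illustration; the derivatives are read straight off the linear demand (dQ/dP = -4, dQ/dPh = +1, dQ/dPb = -1):

```python
# Demand for hot dogs from the example: Q = 20 - 4P + Ph - Pb
def quantity(P, Ph, Pb):
    return 20 - 4*P + Ph - Pb

# Point elasticity: (dQ/dPrice) * (Price / Q), evaluated at the given prices
def point_elasticity(dQ_dPrice, price, P, Ph, Pb):
    return dQ_dPrice * price / quantity(P, Ph, Pb)

# At P = Ph = Pb = 1, Q = 16
print(point_elasticity(-4, 1, 1, 1, 1))   # own price: -0.25 (inelastic)
print(point_elasticity(1, 1, 1, 1, 1))    # hamburgers: 1/16, positive -> substitutes
print(point_elasticity(-1, 1, 1, 1, 1))   # buns: -1/16, negative -> complements

# After the hamburger price rises to 2, Q = 20 - 4 + 2 - 1 = 17
print(point_elasticity(1, 2, 1, 2, 1))    # 2/17, larger than 1/16
```

Because the demand function is linear, the derivative terms are constants and only the (Price / Q) ratio changes as prices move, which is why the cross elasticity rises when Ph increases.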
Berlin 2018 – Scientific Program
MA 21.50: Poster
Tuesday, March 13, 2018, 09:30–13:00, Poster A
Implementation of a self-consistent NEQ scheme in the KKR formalism — •Alexander Fabian, Michael Czerner, and Christian Heiliger — Institut für theoretische Physik, Justus-Liebig-Universität Gießen,
Heinrich-Buff-Ring 16, 35392 Gießen
Today's need for ever more efficient and faster nano-sized devices requires a solid understanding of the behavior of nanostructured materials under applied fields. However, most common approaches
rely on the equilibrium properties of the material and use approximations to describe the non-equilibrium behavior. Since crucial assumptions have to be made, these descriptions do not always
capture all of the properties correctly, and one needs an exact description to calculate non-equilibrium properties. The Keldysh formalism can be used to describe the non-equilibrium properties
within the framework of an ab initio theory. We implemented a self-consistent scheme in our multiscattering DFT code based on the KKR method. To calculate non-equilibrium properties, we use a steady
state Keldysh formalism with non-equilibrium Green’s functions. The electronic density is calculated by splitting the energy contour in two parts according to the applied voltage. In the first part,
the density is calculated with equilibrium tools, while the second part is calculated with the actual Keldysh formalism. Summing the two parts, the resulting density is used to solve the system
self-consistently in the non-equilibrium steady state. Charge displacement due to the applied voltage and the behavior of the voltage in the underlying system are extracted from the self-consistent calculation.
Algorithmic Trading with Average Directional Index in Python
There are four different branches of technical indicators out there, but the most popular ones are known as trend indicators. These indicators help traders identify the direction and strength of
the market trend and trade along with them. In most cases, indicators that come under the trend category reveal good results, provided we use them efficiently.
In this article, we are going to explore one of the most popular trend indicators, the Average Directional Index (shortly known as ADX). We will first build some basic understanding of what ADX is
all about and its calculation, then, move on to building the indicator from scratch and construct a trading strategy based on that indicator in Python. To evaluate our strategy, we will first
backtest it with the Apple stock, and then to make more sense, we will compare our strategy returns with the SPY ETF returns (an ETF specifically designed to track the movement of the S&P 500 index).
Average True Range (ATR)
Before moving on to discovering ADX, it is essential to know what the Average True Range (ATR) is as it is involved in the calculation of the Average Directional Index (ADX).
The Average True Range is a technical indicator that measures how much an asset moves on average. It is a lagging indicator, meaning that it takes the historical data of an asset into account to
measure the current value, but it is not capable of predicting future data points. This is not considered a drawback while using ATR, as it is one of the more accurate indicators for tracking the
volatility of a market. Along with being a lagging indicator, ATR is also a non-directional indicator, meaning that it rises with volatility regardless of whether the market is moving up or down.
To calculate ATR, it is requisite to follow two steps:
• Calculate True Range (TR): A True Range of an asset is calculated by taking the greatest values of three price differences which are: market high minus market low, market high minus previous
market close, and previous market close minus market low. It can be represented as follows:
MAX [ {HIGH - LOW}, {HIGH - P.CLOSE}, {P.CLOSE - LOW} ]
MAX = Maximum values
HIGH = Market High
LOW = Market Low
P.CLOSE = Previous market close
• Calculate ATR: The calculation of the Average True Range is simple. We just have to take a smoothed average of the previously calculated True Range values for a specified number of periods. The
smoothed average is not just any SMA or EMA but a custom type of smoothed average created by J. Welles Wilder himself, though there aren't any restrictions on using other MAs. In this article, we
will be using an SMA to calculate ATR rather than the custom moving average created by the founder of the indicator, just to keep things simple. The calculation of ATR with a traditional setting
of 14 as the number of periods can be represented as follows:
ATR 14 = SMA 14 [ TR ]
ATR 14 = 14 Period Average True Range
SMA 14 = 14 Period Simple Moving Average
TR = True Range
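To make the two steps concrete, here is a small pandas sketch on made-up OHLC data. The numbers are invented purely for illustration, and a 3-period SMA is used instead of 14 so the output is easy to check by hand:

```python
import pandas as pd

# Made-up OHLC data, purely for illustration
df = pd.DataFrame({
    'high':  [10.5, 11.0, 10.8, 11.4, 11.2, 11.9],
    'low':   [10.0, 10.4, 10.3, 10.9, 10.7, 11.3],
    'close': [10.3, 10.9, 10.5, 11.2, 11.0, 11.7],
})

# Step 1: True Range = greatest of the three price differences
prev_close = df['close'].shift(1)
tr = pd.concat([
    df['high'] - df['low'],
    (df['high'] - prev_close).abs(),
    (prev_close - df['low']).abs(),
], axis=1).max(axis=1)

# Step 2: ATR = simple moving average of TR (3 periods here; 14 in practice)
atr = tr.rolling(3).mean()
print(atr)
```

The same TR construction reappears later inside the ADX calculation, so it is worth making sure this piece is clear first.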
While using ATR as an indicator for trading purposes, traders must be more cautious than ever, as the indicator is very lagging. Now that we have an understanding of what the Average True Range is
all about, let's dive into the main concept of this article, the Average Directional Index.
Average Directional Index (ADX)
ADX is a technical indicator that is widely used in measuring the strength of the market trend. Now, the ADX doesn’t measure the direction of the trend, whether it’s bullish or bearish, but just
represents how strong the trend is. So, to identify the direction of the trend, ADX is combined with a Positive Directional Index (+ DI) and a Negative Directional Index (- DI). As the name suggests,
the + DI measures the bullish or positive trend of the market, similarly, the – DI measures the bearish or negative trend of the market. The values of all the components are bound between 0 to 100,
hence acting as an oscillator. The traditional setting of ADX is 14 as the lookback period.
To calculate the values of ADX with 14 as the lookback period, first the Positive (+ DM) and Negative Directional Movement (- DM) are determined. The + DM is calculated by finding the difference
between the current high and the previous high, and similarly, the - DM is calculated by finding the difference between the previous low and the current low (negative values are set to zero in
each case, since each directional movement only tracks moves in its own direction). It can be represented as follows:
+ DM = CURRENT HIGH - PREVIOUS HIGH
- DM = PREVIOUS LOW - CURRENT LOW
Then, an ATR with 14 as the lookback period is calculated. Now, using the calculated directional movement and the ATR values, the Positive Directional Index (+ DI) and the Negative Directional Index
(- DI) are calculated. To determine the values of + DI, the value received by taking the Exponential Moving Average (EMA) of the Positive Directional Movement (+ DM) with 14 as the lookback period is
divided by the previously calculated 14-day ATR and then, multiplied by 100. This same applies to determining the – DI too but instead of taking the 14-day EMA of + DM, the Negative Directional
Movement (- DM) is taken into account. The formula to calculate both the + DI and the – DI can be represented as follows:
+ DI14 = 100 * [ EMA 14 ( + DM ) / ATR 14 ]
- DI14 = 100 * [ EMA 14 ( - DM ) / ATR 14 ]
The next step is to use the + DI and - DI to calculate the Directional Index. It can be determined by dividing the absolute value of the difference between the + DI and - DI by the absolute value
of the total of + DI and - DI, multiplied by 100. The formula to calculate the Directional Index can be represented as follows:
DI 14 = | (+ DI 14) - (- DI 14) | / | (+ DI 14) + (- DI 14) | * 100
The final step is to calculate the ADX itself by utilizing the determined Directional Index values. The ADX is calculated by multiplying the previous Directional Index value by 13 (lookback period
- 1), adding the current Directional Index, and dividing the sum by 14 (the lookback period), which matches the smoothing used in the code below. The formula to calculate the values of ADX can be
represented as follows:
ADX 14 = [ ( PREV DI 14 * 13 ) + DI 14 ] / 14
The ADX cannot be used as it is but needs to be smoothed. Since both indicators were created by J. Welles Wilder (the creator of ATR too), the ADX is traditionally smoothed with the custom moving
average we discussed before. We neglected using this custom moving average while calculating ATR, as it is possible to use other types of moving averages there, but it is essential to use while
smoothing ADX to get accurate values.
That’s the whole process of calculating the values of ADX. Now, let’s discuss how a simple ADX-based trading strategy can be constructed.
About our trading strategy
In this article, we are going to build a simple crossover strategy that reveals a buy signal whenever the ADX line crosses from below to above 25 and the + DI line is above the – DI line. Similarly,
a sell signal is generated whenever the ADX line crosses from below to above 25 and the – DI line is above the + DI line. Our trading strategy can be represented as follows:
IF P.ADX < 25 AND C.ADX > 25 AND + DI LINE > - DI LINE ==> BUY
IF P.ADX < 25 AND C.ADX > 25 AND + DI LINE < - DI LINE ==> SELL
This concludes the theory part of this article. Now, let’s code this indicator from scratch and build the discussed trading strategy out of it in python and backtest it with the Apple stock to see
some exciting results. We will also compare our ADX crossover strategy returns with the returns of the SPY ETF to see how well our trading strategy has performed against a benchmark. Without further
ado, let’s dive into the coding part.
Implementation in Python
The coding part is classified into various steps as follows:
1. Importing Packages
2. API Key Activation
3. Extracting Historical Stock Data
4. ADX Calculation
5. ADX Indicator Plot
6. Creating the Trading Strategy
7. Plotting the Trading Lists
8. Creating our Position
9. Backtesting
10. SPY ETF Comparison
We will be following the order mentioned in the above list and buckle up your seat belts to follow every upcoming coding part.
Step-1: Importing Packages
Importing the required packages into the Python environment is a non-skippable step. The primary packages are going to be eodhd for extracting historical stock data, Pandas for data formatting and
manipulations, NumPy to work with arrays and for complex functions, and Matplotlib for plotting purposes. The secondary packages are going to be Math for mathematical functions and Termcolor for font
customization (optional).
Python Implementation:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from termcolor import colored as cl
from math import floor
from eodhd import APIClient
plt.rcParams['figure.figsize'] = (20,10)
With the required packages imported into Python, we can proceed to fetch historical data for Apple using EODHD’s eodhd Python library. Also, if you haven’t installed any of the imported packages,
make sure to do so using the pip command in your terminal.
Step-2: API Key Activation
It is essential to register the EODHD API key with the package in order to use its functions. If you don’t have an EODHD API key, firstly, head over to their website, then, finish the registration
process to create an EODHD account, and finally, navigate to the ‘Settings’ page where you could find your secret EODHD API key. It is important to ensure that this secret API key is not revealed to
anyone. You can activate the API key by following this code:
api_key = '<YOUR API KEY>'
client = APIClient(api_key)
The code is pretty simple. In the first line, we are storing the secret EODHD API key into the api_key and then in the second line, we are using the APIClient class provided by the eodhd package to
activate the API key and stored the response in the client variable.
Note that you need to replace <YOUR API KEY> with your secret EODHD API key. Apart from directly storing the API key with text, there are other ways for better security such as utilizing
environmental variables, and so on.
Step-3: Extracting Historical Data
Before heading into the extraction part, it is first essential to have some background about historical or end-of-day data. In a nutshell, historical data consists of information accumulated over a
period of time. It helps in identifying patterns and trends in the data. It also assists in studying market behavior. Now, you can easily extract the historical data of any tradeable assets using the
eod package by following this code:
def get_historical_data(ticker, start_date):
    json_resp = client.get_eod_historical_stock_market_data(symbol = ticker, period = 'd', from_date = start_date, order = 'a')
    df = pd.DataFrame(json_resp)
    df = df.set_index('date')
    df.index = pd.to_datetime(df.index)
    return df
aapl = get_historical_data('AAPL', '2020-01-01')
In the above code, we are using the get_eod_historical_stock_market_data function provided by the eodhd package to extract the split-adjusted historical stock data of Apple. The function consists of
the following parameters:
• the ticker parameter where the symbol of the stock we are interested in extracting the data should be mentioned
• the period refers to the time interval between each data point (one-day interval in our case).
• the from_date and to_date parameters which indicate the starting and ending date of the data respectively. The format of the input should be “YYYY-MM-DD”
• the order parameter which is an optional parameter that can be used to order the dataframe either in ascending (a) or descending (d). It is ordered based on the dates.
After extracting the historical data, we are performing some data-wrangling processes to clean and format the data. The final dataframe looks like this:
Step-4: ADX Calculation
In this step, we are going to calculate the values of ADX by following the method we discussed before.
Python Implementation:
def get_adx(high, low, close, lookback):
    # Directional movements (negatives clamped to zero)
    plus_dm = high.diff()
    minus_dm = low.diff()
    plus_dm[plus_dm < 0] = 0
    minus_dm[minus_dm > 0] = 0

    # True Range and ATR
    tr1 = pd.DataFrame(high - low)
    tr2 = pd.DataFrame(abs(high - close.shift(1)))
    tr3 = pd.DataFrame(abs(low - close.shift(1)))
    frames = [tr1, tr2, tr3]
    tr = pd.concat(frames, axis = 1, join = 'inner').max(axis = 1)
    atr = tr.rolling(lookback).mean()

    # Directional indexes and smoothed ADX
    plus_di = 100 * (plus_dm.ewm(alpha = 1/lookback).mean() / atr)
    minus_di = abs(100 * (minus_dm.ewm(alpha = 1/lookback).mean() / atr))
    dx = (abs(plus_di - minus_di) / abs(plus_di + minus_di)) * 100
    adx = ((dx.shift(1) * (lookback - 1)) + dx) / lookback
    adx_smooth = adx.ewm(alpha = 1/lookback).mean()
    return plus_di, minus_di, adx_smooth
aapl['plus_di'] = pd.DataFrame(get_adx(aapl['high'], aapl['low'], aapl['close'], 14)[0]).rename(columns = {0:'plus_di'})
aapl['minus_di'] = pd.DataFrame(get_adx(aapl['high'], aapl['low'], aapl['close'], 14)[1]).rename(columns = {0:'minus_di'})
aapl['adx'] = pd.DataFrame(get_adx(aapl['high'], aapl['low'], aapl['close'], 14)[2]).rename(columns = {0:'adx'})
aapl = aapl.dropna()
Code Explanation: We are first defining a function named ‘get_adx’ that takes a stock’s high (‘high’), low (‘low’), and close data (‘close’) along with the lookback period (‘lookback’) as parameters.
Inside the function, we are first calculating and storing the + DM and – DM into the ‘plus_dm’ and ‘minus_dm’ respectively. Then comes the ATR calculation where we are first calculating the three
differences and defined a variable ‘tr’ to store the highest values among the determined differences, then, we calculated and stored the values of ATR into the ‘atr’ variable.
Using the calculated directional movements and ATR values, we are calculating the + DI and – DI and stored them into the ‘plus_di’ and ‘minus_di’ variables respectively. With the help of the
previously discussed formula, we are calculating the Directional Index values and stored them into the ‘dx’ variable, and applied those values into the ADX formula to calculate the Average
Directional Index values. Then, we defined a variable ‘adx_smooth’ to store the smoothed values of ADX. Finally, we are returning and calling the function to obtain the + DI, – DI, and ADX values of
Apple with 14 as the lookback period.
Step-5: ADX Plot
In this step, we are going to plot the calculated ADX values of Apple to make more sense out of it. The main aim of this part is not on the coding section but instead to observe the plot to gain a
solid understanding of the Average Directional Index.
Python Implementation:
ax1 = plt.subplot2grid((11,1), (0,0), rowspan = 5, colspan = 1)
ax2 = plt.subplot2grid((11,1), (6,0), rowspan = 5, colspan = 1)
ax1.plot(aapl['close'], linewidth = 2, color = '#ff9800')
ax1.set_title('AAPL CLOSING PRICE')
ax2.plot(aapl['plus_di'], color = '#26a69a', label = '+ DI 14', linewidth = 3, alpha = 0.3)
ax2.plot(aapl['minus_di'], color = '#f44336', label = '- DI 14', linewidth = 3, alpha = 0.3)
ax2.plot(aapl['adx'], color = '#2196f3', label = 'ADX 14', linewidth = 3)
ax2.axhline(25, color = 'grey', linewidth = 2, linestyle = '--')
ax2.set_title('AAPL ADX 14')
ax2.legend()
plt.show()
The above chart is divided into two panels: the upper panel with the closing prices of Apple and the lower panel with the components of ADX. Along with the components, a grey dashed line is plotted
which is the threshold for the ADX plotted at a level of 25. As I said before, the ADX doesn’t track the direction of the trend but instead, the strength and it can be seen several times in the chart
where the ADX line increases when the market shows a strong trend (either up or down) and decreases when the market bound to consolidate. This is the same case with both the directional index lines
too. We could see that the + DI line increases when the market shows a sturdy uptrend and decreases during a downtrend and vice-versa for the – DI line.
ADX is not only used to quantify the strength of a market trend but also becomes a handy tool to identify ranging markets (markets where the stock moves back and forth between specific high and low
levels showing zero momentum). Whenever the lines move closer to each other, the market is observed to be ranging, similarly, the wider the space between the lines, the more the markets are trending.
Those seeing an ADX chart for the very first time might get confused, since each line tracks the strength of a trend rather than the direction of the price itself.
Step-6: Creating the trading strategy
In this step, we are going to implement the discussed Average Directional Index trading strategy in Python.
Python Implementation:
def implement_adx_strategy(prices, pdi, ndi, adx):
    buy_price = []
    sell_price = []
    adx_signal = []
    signal = 0

    for i in range(len(prices)):
        # ADX crosses above 25 with + DI on top: buy (only if not already long)
        if adx[i-1] < 25 and adx[i] > 25 and pdi[i] > ndi[i] and signal != 1:
            signal = 1
            buy_price.append(prices[i])
            sell_price.append(np.nan)
            adx_signal.append(signal)
        # ADX crosses above 25 with - DI on top: sell (only if not already out)
        elif adx[i-1] < 25 and adx[i] > 25 and ndi[i] > pdi[i] and signal != -1:
            signal = -1
            buy_price.append(np.nan)
            sell_price.append(prices[i])
            adx_signal.append(signal)
        else:
            buy_price.append(np.nan)
            sell_price.append(np.nan)
            adx_signal.append(0)

    return buy_price, sell_price, adx_signal

buy_price, sell_price, adx_signal = implement_adx_strategy(aapl['close'], aapl['plus_di'], aapl['minus_di'], aapl['adx'])
Code Explanation: First, we are defining a function named ‘implement_adx_strategy’ which takes the stock prices (‘prices), and the components of ADX (‘pdi’, ‘ndi’, ‘adx’) as parameters.
Inside the function, we are creating three empty lists (buy_price, sell_price, and adx_signal) in which the values will be appended while creating the trading strategy.
After that, we are implementing the trading strategy through a for-loop. Inside the for-loop, we are passing certain conditions, and if the conditions are satisfied, the respective values will be
appended to the empty lists. If the condition to buy the stock gets satisfied, the buying price will be appended to the ‘buy_price’ list, and the signal value will be appended as 1 representing to
buy the stock. Similarly, if the condition to sell the stock gets satisfied, the selling price will be appended to the ‘sell_price’ list, and the signal value will be appended as -1 representing to
sell the stock.
Finally, we are returning the lists appended with values. Then, we are calling the created function and stored the values into their respective variables. The list doesn’t make any sense unless we
plot the values. So, let’s plot the values of the created trading lists.
Step-7: Plotting the trading signals
In this step, we are going to plot the created trading lists to make sense out of them.
Python Implementation:
ax1 = plt.subplot2grid((11,1), (0,0), rowspan = 5, colspan = 1)
ax2 = plt.subplot2grid((11,1), (6,0), rowspan = 5, colspan = 1)
ax1.plot(aapl['close'], linewidth = 3, color = '#ff9800', alpha = 0.6)
ax1.set_title('AAPL CLOSING PRICE')
ax1.plot(aapl.index, buy_price, marker = '^', color = '#26a69a', markersize = 14, linewidth = 0, label = 'BUY SIGNAL')
ax1.plot(aapl.index, sell_price, marker = 'v', color = '#f44336', markersize = 14, linewidth = 0, label = 'SELL SIGNAL')
ax2.plot(aapl['plus_di'], color = '#26a69a', label = '+ DI 14', linewidth = 3, alpha = 0.3)
ax2.plot(aapl['minus_di'], color = '#f44336', label = '- DI 14', linewidth = 3, alpha = 0.3)
ax2.plot(aapl['adx'], color = '#2196f3', label = 'ADX 14', linewidth = 3)
ax2.axhline(25, color = 'grey', linewidth = 2, linestyle = '--')
ax2.set_title('AAPL ADX 14')
ax1.legend()
ax2.legend()
plt.show()
Code Explanation: We are plotting the Average Directional Index components along with the buy and sell signals generated by the trading strategy. We can observe that whenever the ADX line crosses
from below to above 25 and the + DI line is above the – DI line, a green-colored buy signal is plotted in the chart. Similarly, whenever the ADX line crosses from below to above 25 and the + DI line
is below the – DI line, a red-colored sell signal is plotted in the chart.
Step-8: Creating our Position
In this step, we are going to create a list that indicates 1 if we hold the stock or 0 if we don’t own or hold the stock.
Python Implementation:
position = []
# First loop: fill the list so its length matches the signal list
for i in range(len(adx_signal)):
    if adx_signal[i] > 1:
        position.append(0)
    else:
        position.append(1)

# Second loop: set the actual position values from the trading signals
for i in range(len(aapl['close'])):
    if adx_signal[i] == 1:
        position[i] = 1
    elif adx_signal[i] == -1:
        position[i] = 0
    else:
        position[i] = position[i-1]
close_price = aapl['close']
plus_di = aapl['plus_di']
minus_di = aapl['minus_di']
adx = aapl['adx']
adx_signal = pd.DataFrame(adx_signal).rename(columns = {0:'adx_signal'}).set_index(aapl.index)
position = pd.DataFrame(position).rename(columns = {0:'adx_position'}).set_index(aapl.index)
frames = [close_price, plus_di, minus_di, adx, adx_signal, position]
strategy = pd.concat(frames, join = 'inner', axis = 1)
Code Explanation: First, we are creating an empty list named ‘position’. We are passing two for-loops, one is to generate values for the ‘position’ list to just match the length of the ‘signal’ list.
The other for-loop is the one we are using to generate actual position values. Inside the second for-loop, we are iterating over the values of the ‘signal’ list, and the values of the ‘position’ list
get appended concerning which condition gets satisfied. The value of the position remains 1 if we hold the stock or remains 0 if we sold or don’t own the stock. Finally, we are doing some data
manipulations to combine all the created lists into one dataframe.
From the output being shown, we can see that in the first row our position in the stock remained 1 (since there wasn't any change in the ADX signal), but our position turned to 0 as we sold the
stock when the ADX trading signal showed a sell signal (-1). Our position will remain 0 until some change in the trading signal occurs. Now it's time to implement some backtesting.
Step-9: Backtesting
Before moving on, it is essential to know what backtesting is. Backtesting is the process of seeing how well our trading strategy has performed on the given stock data. In our case, we are going to
implement a backtesting process for our Average Directional Index trading strategy over the Apple stock data.
Python Implementation:
aapl_ret = pd.DataFrame(np.diff(aapl['close'])).rename(columns = {0:'returns'})
adx_strategy_ret = []
for i in range(len(aapl_ret)):
    returns = aapl_ret['returns'][i]*strategy['adx_position'][i]
    adx_strategy_ret.append(returns)

adx_strategy_ret_df = pd.DataFrame(adx_strategy_ret).rename(columns = {0:'adx_returns'})
investment_value = 100000
adx_investment_ret = []
for i in range(len(adx_strategy_ret_df['adx_returns'])):
    number_of_stocks = floor(investment_value/aapl['close'][i])
    returns = number_of_stocks*adx_strategy_ret_df['adx_returns'][i]
    adx_investment_ret.append(returns)

adx_investment_ret_df = pd.DataFrame(adx_investment_ret).rename(columns = {0:'investment_returns'})
total_investment_ret = round(sum(adx_investment_ret_df['investment_returns']), 2)
profit_percentage = floor((total_investment_ret/investment_value)*100)
print(cl('Profit gained from the ADX strategy by investing $100k in AAPL : {}'.format(total_investment_ret), attrs = ['bold']))
print(cl('Profit percentage of the ADX strategy : {}%'.format(profit_percentage), attrs = ['bold']))
Profit gained from the ADX strategy by investing $100k in AAPL : 54719.46
Profit percentage of the ADX strategy : 54%
Code Explanation: First, we are calculating the returns of the Apple stock using the ‘diff’ function provided by the NumPy package, and we have stored them as a dataframe in the ‘aapl_ret’ variable. Next, we are passing a for-loop to iterate over the values of the ‘aapl_ret’ variable to calculate the returns we gained from our ADX trading strategy, and these return values are appended to the ‘adx_strategy_ret’ list. Then, we are converting the ‘adx_strategy_ret’ list into a dataframe and storing it in the ‘adx_strategy_ret_df’ variable.
Next comes the backtesting process. We are going to backtest our strategy by investing a hundred thousand USD into our trading strategy. So first, we are storing the amount of investment into the
‘investment_value’ variable. After that, we are calculating the number of Apple stocks we can buy using the investment amount. You can notice that I’ve used the ‘floor’ function provided by the Math
package because, while dividing the investment amount by the closing price of Apple stock, it spits out an output with decimal numbers. The number of stocks should be an integer but not a decimal
number. Using the ‘floor’ function, we can cut out the decimals. Note that ‘floor’ always rounds down, unlike the ‘round’ function, which rounds to the nearest integer and could therefore round up to more shares than the investment covers. Then, we are passing a for-loop to find the investment returns, followed by some data manipulation tasks.
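The difference between ‘floor’ and ‘round’ matters here. A quick check with hypothetical numbers (the closing price below is made up for illustration):

```python
from math import floor

# Why 'floor' rather than 'round' when sizing a position: we can only buy
# whole shares, and we must never round *up* past the cash we actually have.
investment_value = 100000
close_price = 153.87          # hypothetical closing price

number_of_stocks = floor(investment_value / close_price)
print(number_of_stocks)                        # 649 shares, costing 99861.63
print(round(investment_value / close_price))   # 650 -- would cost more than $100k
```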
Finally, we are printing the total return we got by investing a hundred thousand into our trading strategy and it is revealed that we have made an approximate profit of fifty-four thousand USD in
three years. That’s not bad! Now, let’s compare our returns with SPY ETF (an ETF designed to track the S&P 500 stock market index) returns.
Step-10: SPY ETF Comparison
This step is optional but it is highly recommended as we can get an idea of how well our trading strategy performs against a benchmark (SPY ETF). In this step, we are going to extract the data of the
SPY ETF using the ‘get_historical_data’ function we created and compare the returns we get from the SPY ETF with our Average Directional Index strategy returns on Apple.
def get_benchmark(start_date, investment_value):
    spy = get_historical_data('SPY', start_date)['close']
    benchmark = pd.DataFrame(np.diff(spy)).rename(columns = {0:'benchmark_returns'})

    benchmark_investment_ret = []
    for i in range(len(benchmark['benchmark_returns'])):
        number_of_stocks = floor(investment_value / spy[i])
        returns = number_of_stocks * benchmark['benchmark_returns'][i]
        benchmark_investment_ret.append(returns)

    benchmark_investment_ret_df = pd.DataFrame(benchmark_investment_ret).rename(columns = {0:'investment_returns'})
    return benchmark_investment_ret_df
benchmark = get_benchmark('2020-01-01', 100000)
investment_value = 100000
total_benchmark_investment_ret = round(sum(benchmark['investment_returns']), 2)
benchmark_profit_percentage = floor((total_benchmark_investment_ret/investment_value)*100)
print(cl('Benchmark profit by investing $100k : {}'.format(total_benchmark_investment_ret), attrs = ['bold']))
print(cl('Benchmark Profit percentage : {}%'.format(benchmark_profit_percentage), attrs = ['bold']))
print(cl('ADX Strategy profit is {}% higher than the Benchmark Profit'.format(profit_percentage - benchmark_profit_percentage), attrs = ['bold']))
Benchmark profit by investing $100k : 37538.96
Benchmark Profit percentage : 37%
ADX Strategy profit is 17% higher than the Benchmark Profit
Code Explanation: The code used in this step is almost identical to the one used in the previous backtesting step, except that instead of trading Apple with our strategy, we are simply buying and holding the SPY ETF without applying any trading strategy. From the output, we can see that our Average Directional Index trading strategy has outperformed the SPY ETF by 17%. That’s good!
Final Thoughts!
After working through both the theory and the code, we have successfully learned what the Average Directional Index is all about and how a simple ADX-based trading strategy can be implemented in Python.
From my perspective, the full power of ADX is unleashed when it is accompanied by another technical indicator, especially the RSI, to get quality entry and exit points for your trades. So, it is highly recommended to try improving on this article by combining the ADX strategy with other technical indicators and backtesting it as many times as possible. Doing this might help in achieving better results in the real-world market. That’s it! Hope you learned something useful from this article.
Effect of Electric Field on Dispersion of a Solute in an MHD Flow through a Vertical Channel With and Without Chemical Reaction
Department of Mathematics, Gulbarga University, Gulbarga, Karnataka, India 585 106
Department of Mechanical Engineering, Cleveland State University, Cleveland-44115, OHIO, USA; Department of Studies and Research in Mathematics, Kuvempu University, Shankaraghatta-577 451, Shimoga,
Karnataka, India
Online publication date: 2016-09-10
Publication date: 2016-08-01
International Journal of Applied Mechanics and Engineering 2016;21(3):683-711
The longitudinal dispersion of a solute between two parallel plates filled with two immiscible electrically conducting fluids is analyzed using Taylor’s model. The fluids in both regions are incompressible and the transport properties are assumed to be constant. The channel walls are assumed to be electrically insulating. Separate solutions are matched at the interface using suitable matching conditions. The flow is accompanied by an irreversible first-order chemical reaction. The effects of the viscosity ratio, pressure gradient and Hartmann number on the effective Taylor dispersion coefficient and volumetric flow rate for an open and short circuit are presented in the absence and in the presence of chemical reactions. As the Hartmann number increases, the effective Taylor diffusion coefficient decreases for both open and short circuits. When the magnetic field remains constant, the numerical results show that for homogeneous and heterogeneous reactions, the effective Taylor diffusion coefficient decreases with an increase in the reaction rate constant for both open and short circuits.
Elliptic curve cryptography
Elliptic curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. The use of elliptic curves in cryptography was
suggested independently by Neal Koblitz^[1] and Victor S. Miller^[2] in 1985.
Elliptic curves are also used in several integer factorization algorithms that have applications in cryptography, such as Lenstra elliptic curve factorization.
Introduction
Public-key cryptography is based on the intractability of certain mathematical problems. Early public-key systems, such as the RSA algorithm, are secure assuming that it is difficult to factor a
large integer composed of two or more large prime factors. For elliptic-curve-based protocols, it is assumed that finding the discrete logarithm of a random elliptic curve element with respect to a
publicly-known base point is infeasible. The size of the elliptic curve determines the difficulty of the problem. It is believed that the same level of security afforded by an RSA-based system with a
large modulus can be achieved with a much smaller elliptic curve group. Using a small group reduces storage and transmission requirements.
For current cryptographic purposes, an elliptic curve is a plane curve which consists of the points satisfying the equation
${\displaystyle y^2 = x^3 + ax + b, \, }$
along with a distinguished point at infinity, denoted ${\displaystyle \infty}$. (The coordinates here are to be chosen from a fixed finite field of characteristic not equal to 2 or 3, or the curve
equation will be somewhat more complicated.) This set together with the group operation of the elliptic group theory form an Abelian group, with the point at infinity as identity element. The
structure of the group is inherited from the divisor group of the underlying algebraic variety.
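As a concrete illustration of this group law, here is a toy point-addition routine over a small prime field. The curve, prime, and points below are illustrative choices picked for readability, far too small for any cryptographic use:

```python
# Toy illustration of the elliptic-curve group law over F_97 for the curve
# y^2 = x^3 + 2x + 3 (illustrative values only -- real curves use primes of
# roughly 256 bits). 'None' plays the role of the point at infinity, the
# identity element of the group.
p, a, b = 97, 2, 3

def ec_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)  # slope of the tangent
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p)         # slope of the chord
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

P = (3, 6)            # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
print(ec_add(P, P))   # (80, 10)
```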
As for other popular public key cryptosystems, no mathematical proof of security has been published for ECC as of 2009. However, the U.S. National Security Agency has endorsed ECC by including
schemes based on it in its Suite B set of recommended algorithms and allows their use for protecting information classified up to top secret with 384-bit keys.^[3] While the RSA patent expired in
2000, there are patents in force covering certain aspects of ECC technology, though the Federal elliptic curve digital signature standard (ECDSA; NIST FIPS 186-3) and certain practical ECC-based key
exchange schemes (including ECDH) can certainly be implemented without infringing them.^[4]
Cryptographic premise
The entire security of ECC depends on the ability to compute a point multiplication and the inability to compute the multiplicand given the original and product points.
Cryptographic schemes
Several discrete logarithm-based protocols have been adapted to elliptic curves, replacing the multiplicative group ${\displaystyle (\mathbb{Z}_p)^\times}$ with an elliptic curve group.
At the RSA Conference 2005, the National Security Agency (NSA) announced Suite B which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both
classified and unclassified national security systems and information.^[5]
Recently, a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups, such as the Weil and Tate pairings, have been introduced. Schemes based on these
primitives provide efficient identity-based encryption as well as pairing-based signatures, signcryption, key agreement, and proxy re-encryption.
Implementation considerations
Although the details of each particular elliptic curve scheme are described in the article referenced above, some common implementation considerations are discussed here.
Domain parameters
To use ECC all parties must agree on all the elements defining the elliptic curve, that is, the domain parameters of the scheme. The field is defined by ${\displaystyle p}$ in the prime case and the
pair of ${\displaystyle m}$ and ${\displaystyle f}$ in the binary case. The elliptic curve is defined by the constants ${\displaystyle a}$ and ${\displaystyle b}$ used in its defining equation.
Finally, the cyclic subgroup is defined by its generator (aka. base point) ${\displaystyle G}$. For cryptographic application the order of ${\displaystyle G}$, that is the smallest positive number ${\displaystyle n}$ such that ${\displaystyle n G = O}$, must be prime. Since ${\displaystyle n}$ is the size of a subgroup of ${\displaystyle E(\mathbb{F}_p)}$ it follows from Lagrange's theorem that the number ${\displaystyle h = \frac{|E|}{n}}$ is an integer. In cryptographic applications this number ${\displaystyle h}$, called the cofactor, must be small (${\displaystyle h \le 4}$) and, preferably, ${\displaystyle h=1}$. To summarize: in the prime case the domain parameters are ${\displaystyle (p,a,b,G,n,h)}$ and in the binary case they are ${\displaystyle (m,f,a,b,G,n,h)}$.
Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parameters must be validated before use.
The generation of domain parameters is not usually done by each participant, since this involves counting the number of points on a curve, which is time-consuming and troublesome to implement. As a result, several standards bodies have published domain parameters of elliptic curves for several common field sizes:
Test vectors are also available [1].
If, despite the advice above, one wants to build one's own domain parameters, one should select the underlying field and then use one of the following strategies to find a curve with an appropriate (i.e., near-prime) number of points:
• select a random curve and use a general point-counting algorithm, for example, Schoof's algorithm or Schoof–Elkies–Atkin algorithm,
• select a random curve from a family which allows easy calculation of the number of points (e.g., Koblitz curves), or
• select the number of points and generate a curve with this number of points using complex multiplication technique.^[6]
Several classes of curves are weak and should be avoided:
• curves over ${\displaystyle \mathbb{F}_{2^m}}$ with non-prime ${\displaystyle m}$ are vulnerable to Weil descent attacks.^[7]^[8]
• curves such that ${\displaystyle n}$ divides ${\displaystyle p^B-1}$ (where ${\displaystyle p}$ is the characteristic of the field – ${\displaystyle q}$ for a prime field, or ${\displaystyle 2}$
for a binary field) for sufficiently small ${\displaystyle B}$ are vulnerable to MOV attack^[9]^[10] which applies usual DLP in a small degree extension field of ${\displaystyle \mathbb{F}_p }$
to solve ECDLP. The bound ${\displaystyle B}$ should be chosen so that discrete logarithms in the field ${\displaystyle \mathbb{F}_{p^B}}$ are at least as difficult to compute as discrete logs on
the elliptic curve ${\displaystyle E(\mathbb{F}_{q})}$.^[11]
• curves such that ${\displaystyle |E(\mathbb{F}_q)| = q}$ are vulnerable to the attack that maps the points on the curve to the additive group of ${\displaystyle \mathbb{F}_{q}}$^[12]^[13]^[14]
Key sizes
Since the fastest known algorithms for solving the ECDLP (baby-step giant-step, Pollard's rho, etc.) need ${\displaystyle O(\sqrt{n})}$ steps, it follows that the size of the underlying field should be roughly twice the security parameter. For example, for 128-bit security one needs a curve over ${\displaystyle \mathbb{F}_{q}}$, where ${\displaystyle q \approx 2^{256}}$. This can be
contrasted with finite-field cryptography (e.g., DSA) which requires^[15] 3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g., RSA) which requires 3072-bit
public and private keys.
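To see where the ${\displaystyle O(\sqrt{n})}$ figure comes from, here is a toy baby-step giant-step solver on a small multiplicative group. The numbers are chosen for illustration only; the identical algorithm runs on elliptic-curve groups with point addition in place of modular multiplication:

```python
from math import isqrt

# Toy baby-step giant-step solver for g^x = h in a cyclic group of order n.
# It stores about sqrt(n) "baby steps" and performs up to sqrt(n) "giant
# steps" -- the square-root cost that drives key-size recommendations.
def bsgs(g, h, p, n):
    m = isqrt(n) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # g^j for 0 <= j < m
    factor = pow(g, -m, p)                       # g^(-m) mod p
    gamma = h
    for i in range(m):
        if gamma in baby:                        # h * g^(-i*m) == g^j
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None                                  # no solution in this group

x = bsgs(3, 57, 113, 112)     # 3 is a primitive root mod 113
print(x, pow(3, x, 113))      # 100 57
```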
The hardest ECC scheme (publicly) broken to date had a 112-bit key for the prime field case and a 109-bit key for the binary field case. For the prime field case this was broken in July 2009 using a
cluster of over 200 PlayStation 3 game consoles and could have been finished in 3.5 months using this cluster when running continuously (see [2]). For the binary field case, it was broken in April
2004 using 2600 computers for 17 months (see [3]).
Projective coordinates
A close examination of the addition rules shows that in order to add two points, one needs not only several additions and multiplications in ${\displaystyle \mathbb{F}_{q}}$ but also an inversion operation. The inversion (for given ${\displaystyle x \in \mathbb{F}_q}$ find ${\displaystyle y \in \mathbb{F}_q}$ such that ${\displaystyle xy = 1}$) is one to two orders of magnitude slower^[16] than multiplication. Fortunately, points on a curve can be represented in different coordinate systems which do not require an inversion operation to add two points. Several such systems were proposed: in the projective system each point is represented by three coordinates ${\displaystyle (X, Y, Z)}$ using the following relation: ${\displaystyle x = \frac{X}{Z}}$, ${\displaystyle y = \frac{Y}{Z}}$; in the Jacobian system a point is also represented with three coordinates ${\displaystyle (X, Y, Z)}$, but a different relation is used: ${\displaystyle x = \frac{X}{Z^2}}$, ${\displaystyle y = \frac{Y}{Z^3}}$; in the López–Dahab system the relation is ${\displaystyle x = \frac{X}{Z}}$, ${\displaystyle y = \frac{Y}{Z^2}}$; in the modified Jacobian system the same relations are used but four coordinates are stored and used for calculations ${\displaystyle (X,Y,Z,aZ^4)}$; and in the Chudnovsky Jacobian system five coordinates are used ${\displaystyle (X,Y,Z,Z^2,Z^3)}$. Note that there may be different naming conventions; for example, the IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used.^[17]
Fast reduction (NIST curves)
Reduction modulo ${\displaystyle p}$ (which is needed for addition and multiplication) can be executed much faster if the prime ${\displaystyle p}$ is a pseudo-Mersenne prime, that is ${\displaystyle p \approx 2^d}$; for example, ${\displaystyle p = 2^{521} - 1}$ or ${\displaystyle p = 2^{256} - 2^{32} - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1}$. Compared to Barrett reduction there can be an order of magnitude speedup.^[18] The speedup here is practical rather than theoretical, and derives from the fact that reduction modulo a number near a power of two can be performed efficiently by computers operating on binary numbers with bitwise operations.
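The bitwise trick can be sketched with the plain Mersenne prime ${\displaystyle 2^{61}-1}$, a simplified stand-in for the NIST pseudo-Mersenne primes chosen here so the code stays short:

```python
# Toy reduction modulo the Mersenne prime M = 2^61 - 1 (a simplified stand-in
# for the NIST pseudo-Mersenne primes). Since 2^61 ≡ 1 (mod M), the high bits
# of a value can be folded onto its low bits with shifts and masks, avoiding
# a slow general-purpose division.
D = 61
M = (1 << D) - 1                  # the Mersenne prime 2^61 - 1

def reduce_mersenne(x):
    while x.bit_length() > D:     # still wider than 61 bits: fold once more
        x = (x & M) + (x >> D)    # low D bits + high bits (valid as 2^D ≡ 1 mod M)
    return 0 if x == M else x     # x == M represents the residue 0

x = 123456789123456789123456789
print(reduce_mersenne(x) == x % M)   # True -- agrees with ordinary reduction
```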
The curves over ${\displaystyle \mathbb{F}_p }$ with pseudo-Mersenne ${\displaystyle p}$ are recommended by NIST. Yet another advantage of the NIST curves is the fact that they use a = −3 which
improves addition in Jacobian coordinates.
NIST-recommended elliptic curves
NIST recommends fifteen elliptic curves. Specifically, FIPS 186-3 has ten recommended finite fields:
• Five prime fields $\mathbb{F}_p$ for certain primes p of sizes 192, 224, 256, 384, and 521 bits. For each of the prime fields, one elliptic curve is recommended.
• Five binary fields $\mathbb{F}_{2^m}$ for m equal to 163, 233, 283, 409, and 571. For each of the binary fields, one elliptic curve and one Koblitz curve were selected.
The NIST recommendation thus contains a total of five prime curves and ten binary curves. The curves were chosen for optimal security and implementation efficiency.^[19]
Side-channel attacks
Unlike DLP systems (where it is possible to use the same procedure for squaring and multiplication), EC addition is significantly different for doubling ($P = Q$) and general addition ($P \neq Q$), depending on the coordinate system used. Consequently, it is important to counteract side-channel attacks (e.g., timing or simple/differential power analysis attacks) using, for example, fixed-pattern window (a.k.a. comb) methods^[20] (note that this does not increase the computation time). Another concern for ECC systems is the danger of fault attacks, especially when running on smart cards (see, for example, Biehl et al.^[21]).
Quantum computing attacks
Elliptic curve cryptography is vulnerable to a modified Shor's algorithm for solving the discrete logarithm problem on elliptic curves.^[22] ^[23]
Patents
Main article: ECC patents
At least one ECC scheme (ECMQV) and some implementation techniques are covered by patents.
Implementations
Open source
• OpenSSL - C library with ECC functionality
• Crypto++ - C++ library with ECC functionality
Proprietary/commercial
• CNG API in Windows Vista and Windows Server 2008 with managed wrappers for CNG in .NET Framework 3.5
• Sun Java System Web Server 7.0 and later
• Java SE 6
• Java Card
Alternative representations of elliptic curves
• Hessian curves
• Edwards curves
• Twisted curves
• Twisted Hessian curves
• Twisted Edwards curve
• Doubling-oriented Doche–Icart–Kohel curve
• Tripling-oriented Doche–Icart–Kohel curve
• Jacobian curve
• Montgomery curve
See also
• DNSCurve
• ECDH
• ECDSA
• ECMQV
• Pairing-based cryptography
Notes
References
• Standards for Efficient Cryptography Group (SECG), SEC 1: Elliptic Curve Cryptography, Version 1.0, September 20, 2000.
• D. Hankerson, A. Menezes, and S.A. Vanstone, Guide to Elliptic Curve Cryptography, Springer-Verlag, 2004.
• I. Blake, G. Seroussi, and N. Smart, Elliptic Curves in Cryptography, London Mathematical Society 265, Cambridge University Press, 1999.
• I. Blake, G. Seroussi, and N. Smart, editors, Advances in Elliptic Curve Cryptography, London Mathematical Society 317, Cambridge University Press, 2005.
• L. Washington, Elliptic Curves: Number Theory and Cryptography, Chapman & Hall / CRC, 2003.
• The Case for Elliptic Curve Cryptography, National Security Agency
• Online Elliptic Curve Cryptography Tutorial, Certicom Corp.
• K. Malhotra, S. Gardner, and R. Patz, Implementation of Elliptic-Curve Cryptography on Mobile Healthcare Devices, 2007 IEEE International Conference on Networking, Sensing and Control, London, 15–17 April 2007, pp. 239–244.
How the Physics of Resonance Shapes Reality | Quanta Magazine
Ariel Davis for Quanta Magazine
Almost anytime physicists announce that they’ve discovered a new particle, whether it’s the Higgs boson or the recently bagged double-charm tetraquark, what they’ve actually spotted is a small bump
rising from an otherwise smooth curve on a plot. Such a bump is the unmistakable signature of “resonance,” one of the most ubiquitous phenomena in nature.
Resonance underlies aspects of the world as diverse as music, nuclear fusion in dying stars, and even the very existence of subatomic particles. Here’s how the same effect manifests in such varied
settings, from everyday life down to the smallest scales.
In its simplest form, resonance occurs when an object experiences an oscillating force that’s close to one of its “natural” frequencies, at which it easily oscillates. That objects have natural
frequencies “is one of the bedrock properties of both math and the universe,” said Matt Strassler, a particle physicist affiliated with Harvard University who is writing a book about the Higgs boson.
A playground swing is one familiar example: “Knock something like that around, and it will always pick out its resonant frequency automatically,” Strassler said. Or flick a wineglass and the rim will
vibrate a few hundred times per second, producing a characteristic tone as the vibrations transfer to the surrounding air.
A system’s natural frequencies depend on its intrinsic properties: For a flute, for instance, they are the frequencies of sound waves that exactly fit inside its cylindrical geometry.
The Swiss mathematician Leonhard Euler solved the equation describing a system continuously driven near its resonant frequency in 1739. He found that the system exhibited “various and wonderful
motions,” as he put it in a letter to fellow mathematician Johann Bernoulli, and that, when the system is driven precisely at the resonant frequency, the amplitude of the motion “increases
continually and finally grows out to infinity.”
Driving a system too hard at the right frequency can have dramatic effects: A trained singer, for instance, can shatter a glass with a sustained note at its resonant frequency. A bridge resonating
with the footsteps of marching soldiers can collapse. But more often, energy loss, which Euler’s analysis neglected, prevents the motion of a physical system from growing unchecked. If the singer
sings the note quietly, vibrations in the glass will grow at first, but larger vibrations cause more energy to radiate outward as sound waves than before, so eventually a balance will be achieved
that results in vibrations with constant amplitude.
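Euler's "growth to infinity" and the damped balance just described have a compact closed form. For a damped oscillator driven at angular frequency ω, the steady-state amplitude is F0 / sqrt((ω0² − ω²)² + (γω)²); a quick numerical sweep (illustrative parameters, not values from the article) shows the peak sitting near the natural frequency:

```python
import math

def amplitude(omega, omega0=5.0, gamma=0.4, F0=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator."""
    return F0 / math.sqrt((omega0**2 - omega**2)**2 + (gamma * omega)**2)

# Sweep the driving frequency, like the singer gliding up in pitch:
freqs = [i * 0.01 for i in range(1, 1001)]
peak = max(freqs, key=amplitude)
print(peak)   # close to the natural frequency omega0 = 5.0
```

Smaller damping γ makes the bump taller and narrower, which is exactly the width–lifetime connection the article goes on to describe for wineglasses and particles alike.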
Now suppose the singer starts with a low note and continuously glides up in pitch. As the singer sweeps past the frequency at which the wineglass resonates, the sound momentarily grows much louder.
This enhancement arises because the sound waves arrive at the glass in sync with vibrations that are already present, just as pushing on a swing at the right time can amplify its initial motion. A
plot of the sound amplitude as a function of frequency would trace out a curve with a pronounced bump around the resonant frequency, one that’s strikingly similar to the bumps heralding particle
discoveries. In both cases, the bump’s width reflects how lossy the system is, indicating, for instance, how long a glass rings after it is struck once, or how long a particle exists before it decays.
Samuel Velasco/Quanta Magazine; source: CMS Experiment
But why do particles behave like humming wineglasses? At the turn of the 20th century, resonance was understood to be a property of vibrating and oscillating systems. Particles, which travel in
straight lines and scatter like billiard balls, seemed far removed from this branch of physics.
The development of quantum mechanics showed otherwise. Experiments indicated that light, which had been thought of as an electromagnetic wave, sometimes behaves like a particle: a “photon,” which
possesses an amount of energy proportional to the frequency of the associated wave. Meanwhile, matter particles like electrons sometimes exhibit wavelike behavior with the same relation between
frequency and energy.
In 1925, inspired by this correspondence, the Austrian physicist Erwin Schrödinger derived an equation for the hydrogen atom whose solutions are waves oscillating at a set of natural frequencies,
much like the solutions to equations governing the acoustics of wind instruments.
Each solution to Schrödinger’s equation represents a possible state of the atom’s orbiting electron. The electron can hop up to a higher-energy state by absorbing a photon whose frequency makes up
the difference between the two states’ natural frequencies.
Such transitions are themselves a form of resonance: Just like a wine glass, an atom only absorbs energy from waves with specific frequencies, and it can also shed energy by emitting waves with those
same frequencies. (When excited at precisely the right frequency, certain atoms will oscillate for more than 10 quadrillion cycles before releasing their energy as photons — extremely sharp atomic
resonances that form the basis for the world’s most precise atomic clocks.)
Quantum theory revealed that the structure of atoms, no less than the structure of symphonies, is intimately tied to resonance. Electrons bound to atoms are a little like sound waves trapped inside
flutes. As for the atomic nuclei, further advances in the 1930s showed that many kinds of atomic nuclei only exist in the universe today because of resonance. Resonant transitions are critical to the
nuclear fusion reactions that transmute one type of atomic nucleus into another. The most celebrated of these nuclear resonances enables the fusion of three helium nuclei into one carbon nucleus.
Without this, stars would not be capable of producing carbon or heavier elements, and life as we know it would not be possible.
But the roots of resonance in fundamental physics lie deeper. In the late 1920s physicists began to develop a powerful mathematical framework known as quantum field theory that remains the language
of particle physics to this day. In quantum field theory, the universe’s truly elementary entities are fields that fill all space. Particles are localized, resonant excitations of these fields,
vibrating like springs in an infinite mattress. The frequencies at which quantum fields prefer to vibrate stem from fundamental constants whose origins remain obscure; these frequencies in turn
determine the masses of the corresponding particles. Blast the vacuum of empty space hard enough at the right frequency, and out will pop a bunch of particles.
In this sense, resonance is responsible for the very existence of particles. It has also increasingly become the workhorse of experimental particle physics. When measuring how often specific
combinations of particles are produced in high-energy collisions, physicists see pronounced peaks in the detection rate as they vary the collision energy: new manifestations of the universal
resonance curve. “As with the wineglass, you’re sweeping through a system that wants to resonate,” said Strassler. “You’ll make anything vibrate that can.”
In the 1950s and ’60s, physicists saw many more peaks than they had expected, and at first nobody knew quite what to make of them. Many of the bumps were very broad, suggesting the existence of
particles that stuck around for barely more than a trillionth of a trillionth of a second. Unlike more familiar particles that can be detected directly, these newcomers could only be observed through
the process of resonance.
Physicists later appreciated that these new ephemeral particles were fundamentally no different from protons and neutrons, save for their short lifetimes. Even so, short-lived particles are often
simply referred to as “resonances” — a testament to a phenomenon that has played a surprisingly central role in expanding our understanding of the world.
DRUM :: Browsing by Author "Lawrence, Craig T."
• A Computationally Efficient Feasible Sequential Quadratic Programming Algorithm
(1998) Lawrence, Craig T.; Tits, Andre; ISR
The role of optimization in both engineering analysis and design is continually expanding. As such, faster and more powerful optimization algorithms are in constant demand. In this dissertation,
motivated by problems from engineering analysis and design, new Sequential Quadratic Programming (SQP) algorithms generating feasible iterates are described and analyzed. What distinguishes these
algorithms from previous feasible SQP algorithms is a dramatic reduction in the amount of computation required to generate a new iterate while still enjoying the same global and fast local
convergence properties.
First, a basic algorithm which solves the standard smooth inequality constrained nonlinear programming problem is considered. The main idea involves a simple perturbation of the Quadratic Program
(QP) for the standard SQP search direction. The perturbation has the property that a feasible direction is always obtained and fast local convergence is preserved. An extension of the basic
algorithm is then proposed which solves the inequality constrained mini-max problem. The algorithm exploits the special structure of the problem and is shown to have the same global and local
convergence properties as the basic algorithm.
Next, the algorithm is extended to efficiently solve problems with very many objective and/or constraint functions. Such problems often arise in engineering design as, e.g., discretized
Semi-Infinite Programming (SIP) problems. The key feature of the extension is that only a small subset of the objectives and constraints are used to generate a search direction at each iteration.
The result is much smaller QP sub-problems and fewer gradient evaluations.
The algorithms all have been implemented and tested. Preliminary numerical results are very promising. The number of iterations and function evaluations required to converge to a solution are, on
average, roughly the same as for a widely available state-of-the-art feasible SQP implementation, whereas the amount of computation required per iteration is much less. The ability of the
algorithms to effectively solve real problems from engineering design is demonstrated by considering signal set design problems for optimal detection in the presence of non-Gaussian noise.
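As a rough sketch of the SQP machinery these abstracts refer to (a toy example, not the algorithms described above): each iteration solves a quadratic program built from a quadratic model of the objective and linearized constraints. For min x1^2 + x2^2 subject to x1 + x2 >= 1, the QP subproblem can be solved by hand from its KKT conditions:

```python
# Illustrative SQP step (hypothetical toy problem, not CFSQP itself):
# minimize f(x) = x1^2 + x2^2  subject to  c(x) = 1 - x1 - x2 <= 0.
# The QP subproblem at iterate x is
#   min  grad_f . d + 0.5 d^T H d   s.t.  c(x) + grad_c . d <= 0,
# with exact Hessian H = 2I and grad_c = (-1, -1).

def sqp_step(x1, x2):
    # Assume the constraint is active; the KKT system then gives lambda = 1
    # and d = -x + (lambda/2) * (1, 1), worked out by hand for this problem.
    lam = 1.0
    d1 = -x1 + lam / 2
    d2 = -x2 + lam / 2
    return x1 + d1, x2 + d2, lam

print(sqp_step(3.0, -1.0))  # -> (0.5, 0.5, 1.0): the constrained minimizer
```

Because the toy objective is already quadratic and the constraint linear, the subproblem coincides with the problem itself, so a single step lands on the minimizer (0.5, 0.5) with multiplier 1; on genuinely nonlinear problems this QP must be re-solved at every iterate, which is why cheaper subproblems matter.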
• A Computationally Efficient Feasible Sequential Quadratic Programming Algorithm
(1998) Lawrence, Craig T.; Tits, A.; ISR
A Sequential Quadratic Programming (SQP) algorithm generating feasible iterates is described and analyzed. What distinguishes this algorithm from previous feasible SQP algorithms proposed by various authors is a drastic reduction in the amount of computation required to generate a new iterate while the proposed scheme still enjoys the same global and fast local convergence properties. A preliminary implementation has been tested and some promising numerical results are reported.
• Feasible Sequential Quadratic Programming for Finely Discretized Problems from SIP
(1997) Lawrence, Craig T.; Tits, A.L.; ISR
A Sequential Quadratic Programming algorithm designed to efficiently solve nonlinear optimization problems with many inequality constraints, e.g. problems arising from finely discretized
Semi-Infinite Programming, is described and analyzed. The key features of the algorithm are (i) that only a few of the constraints are used in the QP sub-problems at each iteration, and (ii) that
every iterate satisfies all constraints.
• Nonlinear Equality Constraints in Feasible Sequential Quadratic Programming
(1995) Lawrence, Craig T.; Tits, A.L.; ISR
A simple scheme is proposed for handling nonlinear equality constraints in the context of a previously introduced sequential quadratic programming (SQP) algorithm for inequality constrained problems, generating iterates satisfying all constraints. The key is an idea due to Mayne and Polak (Math. Progr., vol. 11, pp. 67–80, 1976) by which nonlinear equality constraints are treated as "≤"-type constraints to be satisfied by all iterates, thus precluding any positive value, and an exact penalty term is added to the objective function which penalizes negative values. Mayne and Polak obtain a suitable value of the penalty parameter by iterative adjustments based on a test involving estimates of the KKT multipliers. We argue that the SQP framework allows for a more effective estimation of these multipliers, and we provide convergence analysis of the resulting algorithms. Numerical results, obtained with the FSQP/CFSQP code, are reported.
• A Primal-Dual Interior-Point Method for Nonconvex Optimization with Multiple Logarithmic Barrier Parameters and with Strong Convergence Properties
(1998) Urban, T.; Tits, A.L.; Lawrence, Craig T.; ISR
It is observed that an algorithm proposed in the 1980s for the solution of nonconvex constrained optimization problems is in fact a primal-dual logarithmic barrier interior-point method closely related to methods under current investigation in the research community. Its main distinguishing features are judicious selection and update of the multiple barrier parameters (one per constraint), use of the objective function as merit function, and careful bending of the search direction. As a payoff, global convergence and fast local convergence ensue. The purpose of this short note is to describe the algorithm in the interior-point framework and language and to provide a preliminary numerical evaluation. The latter shows that the method compares well with algorithms recently proposed by other research groups.
• A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties
(2001) Tits, A.L.; Urban, T.J.; Bakhtiari, Sasan; Lawrence, Craig T.; ISR
A scheme, inspired by an old idea due to Mayne and Polak (Math. Prog., vol. 11, 1976, pp. 67–80), is proposed for extending to general smooth constrained optimization problems a previously proposed feasible interior-point method for inequality constrained problems. It is shown that the primal-dual interior-point framework allows for a significantly more effective implementation of the Mayne–Polak idea than that discussed and analyzed by the originators in the context of first-order methods of feasible direction. Strong global and local convergence results are proved under mild assumptions. In particular, the proposed algorithm does not suffer the Wachter–Biegler effect.
• A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties
(2002) Tits, Andre L.; Wachter, Andreas; Bakhtiari, Sasan; Urban, Thomas J.; Lawrence, Craig T.; ISR
An exact-penalty-function-based scheme, inspired by an old idea due to Mayne and Polak (Math. Prog., vol. 11, 1976, pp. 67–80), is proposed for extending to general smooth constrained optimization problems any given feasible interior-point method for inequality constrained problems.
It is shown that the primal-dual interior-point framework allows for a simpler penalty parameter update rule than that discussed and analyzed by the originators of the scheme in the context of first-order methods of feasible direction. Strong global and local convergence results are proved under mild assumptions.
In particular, (i) the proposed algorithm does not suffer a common pitfall recently pointed out by Wachter and Biegler; and (ii) the positive definiteness assumption on the Hessian estimate, made in the original version of the algorithm, is relaxed, allowing for the use of exact Hessian information, resulting in local quadratic convergence. Promising numerical results are reported.
Note: This report is a major revision to TR 2001-3, "A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties," by A.L. Tits, T.J. Urban,
S. Bakhtiari, C.T. Lawrence.
• User's Guide for CFSQP Version 2.0: A C Code for Solving (Large Scale) Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality Constraints
(1994) Lawrence, Craig T.; Zhou, J.L.; Tits, A.L.; ISR
CFSQP is a set of C functions for the minimization of the maximum of a set of smooth objective functions (possibly a single one) subject to general smooth constraints. If the initial guess
provided by the user is infeasible for some inequality constraint or some linear equality constraint, CFSQP first generates a feasible point for these constraints; subsequently the successive
iterates generated by CFSQP all satisfy these constraints. Nonlinear equality constraints are turned into inequality constraints (to be satisfied by all iterates) and the maximum of the objective
functions is replaced by an exact penalty function which penalizes nonlinear equality constraint violations only. When solving problems with many sequentially related constraints (or objectives),
such as discretized semi-infinite programming (SIP) problems, CFSQP gives the user the option to use an algorithm that efficiently solves these problems, greatly reducing computational effort.
The user has the option of either requiring that the objective function (penalty function if nonlinear equality constraints are present) decrease at each iteration after feasibility for nonlinear
inequality and linear constraints has been reached (monotone line search), or requiring a decrease within at most four iterations (nonmonotone line search). He/She must provide functions that
define the objective functions and constraint functions and may either provide functions to compute the respective gradients or require that CFSQP estimate them by forward finite differences.
CFSQP is an implementation of two algorithms based on Sequential Quadratic Programming (SQP), modified so as to generate feasible iterates. In the first one (monotone line search), a certain
Armijo type arc search is used with the property that the step of one is eventually accepted, a requirement for superlinear convergence. In the second one the same effect is achieved by means of
a "nonmonotone" search along a straight line. The merit function used in both searches is the maximum of the objective functions if there is no nonlinear equality constraints, or an exact penalty
function if nonlinear equality constraints are present.
Gold Cost Per Pound Calculator | Online Calculators
How do you calculate the cost of gold per pound? Use the following tool: enter the gold cost and gold weight into the calculator to find the cost per pound.
What is Gold Cost Per Pound?
Gold Cost Per Pound helps determine the price of gold based on its weight. This is important for buyers and sellers to assess the value of their gold accurately.
How to Use the Calculator
1. Enter Gold Cost ($): Type the total cost of the gold. For example, 5000.
2. Enter Gold Weight (lbs): Input the weight of the gold in pounds. For example, 2.
3. Calculate Gold Cost Per Pound: Click “Calculate” to find the cost per pound.
The formula to calculate the gold cost per pound is:

GCP = TGC / GW

where:
GCP = Gold Cost Per Pound ($/lb)
TGC = Total Gold Cost ($)
GW = Gold Weight (lbs)
How to Calculate Gold Cost Per Pound?
Example 1
• Gold Cost = $5000,
• Gold Weight = 2 lbs
Use the formula:
$\text{GCP} = \frac{5000}{2}$
Divide 5000 by 2
Result: GCP = $2500/lb
Example 2
• Gold Cost = $7500,
• Gold Weight = 3 lbs
Use the formula:
$\text{GCP} = \frac{7500}{3}$
Divide 7500 by 3
Result: GCP = $2500/lb
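The same arithmetic as a small function (a hypothetical helper mirroring the formula above):

```python
def gold_cost_per_pound(total_cost, weight_lbs):
    """Return cost per pound: GCP = TGC / GW."""
    if weight_lbs <= 0:
        raise ValueError("weight must be positive")
    return total_cost / weight_lbs

print(gold_cost_per_pound(5000, 2))  # 2500.0, matching Example 1
print(gold_cost_per_pound(7500, 3))  # 2500.0, matching Example 2
```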
Book II, Proposition 14
The following is as given in Sir Thomas L. Heath's translation, which can be found in the book The Thirteen Books of The Elements, Vol. 1.
Proposition 14.
To construct a square equal to a given rectilineal figure.
Let A be the given rectilineal figure;
thus it is required to construct a square equal to the rectilineal figure A.
For let there be constructed the rectangular parallelogram BD equal to the rectilineal figure A. [I. 45]
Then, if BE is equal to ED, that which was enjoined will have been done; for a square BD has been constructed equal to the rectilineal figure A.
But, if not, one of the straight lines BE, ED is greater.
Let BE be greater, and let it be produced to F;
let EF be made equal to ED, and let BF be bisected at G.
With centre G and distance one of the straight lines GB, GF let the semicircle BHF be described; let DE be produced to H, and let GH be joined.
Then, since the straight line BF has been cut into equal segments at G, and into unequal segments at E,
the rectangle contained by BE, EF together with the square on EG is equal to the square on GF.
But GF is equal to GH;
therefore the rectangle BE, EF together with the square on GE is equal to the square on GH.
But the squares on HE, EG are equal to the square on GH; [I. 47]
therefore the rectangle BE, EF together with the square on GE is equal to the squares on HE, EG.
Let the square on GE be subtracted from each;
therefore the rectangle contained by BE, EF which remains is equal to the square on EH.
But the rectangle BE, EF is BD, for EF is equal to ED;
therefore the parallelogram BD is equal to the square on HE.
And BD is equal to the rectilineal figure A.
Therefore the rectilineal figure A is also equal to the square which can be described on EH.
Therefore a square, namely that which can be described on EH, has been constructed equal to the given rectilineal figure A.
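In modern notation the proposition constructs EH with EH² = BE · EF = BE · ED, i.e. EH is the geometric mean of the rectangle's sides. A quick numeric check of that identity with sample dimensions:

```python
import math

def quadrature_side(be, ed):
    """Side EH of the square equal in area to the rectangle BE x ED:
    EH = sqrt(BE * ED), i.e. EH^2 = BE * ED as in the proposition."""
    return math.sqrt(be * ed)

be, ed = 9.0, 4.0              # sample rectangle sides
eh = quadrature_side(be, ed)
print(eh, eh * eh == be * ed)  # 6.0 True
```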
Definition of a basic cache compressor.
Implementation of the specialized sub-compressors used by BDI.
Definition of a base delta immediate compressor.
Implementation of a base delta immediate compressor.
Implementation of a base sim object for the templated dictionary-based cache compressor.
Implementation of the CPack cache compressor.
Definition of CPack compression, from "C-Pack: A High-Performance Microprocessor Cache Compression Algorithm".
Definition of a dictionary based cache compressor.
Implementation of a dictionary based cache compressor.
Definition of the Frequent Pattern Compression cache compressor, as described in "Frequent Pattern Compression: A Significance-Based Compression Scheme for L2 Caches".
Implementation of the FPC-D cache compressor.
Definition of the Frequent Pattern Compression with limited Dictionary support (FPC-D) cache compressor, as described in "Opportunistic Compression for Direct-Mapped DRAM Caches", by Alameldeen et al.
Implementation of a multi-compressor that chooses the best compression among multiple compressors.
Definition of a multi-compressor that chooses the best compression among multiple compressors.
Implementation of a perfect compressor, which compresses data to its maximum allowed compression ratio.
Definition of a perfect compressor, that always manages to compress to its maximum compression ratio.
Implementation of a repeated values compressor, which compresses data if it is entirely composed of repeated qwords.
Definition of a repeated qwords compressor, which compresses data if it is entirely composed of repeated qwords.
Implementation of a zero compressor, which compresses data if it is entirely composed of zero bits.
Definition of a zero compressor, which compresses data if it is entirely composed of zero bits.
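The two simplest schemes in this listing are easy to sketch in a few lines (an illustration of the idea only, not the gem5 code): a cache line compresses under the zero scheme if every bit is zero, and under the repeated-values scheme if it is one qword repeated.

```python
def is_zero_line(line: bytes) -> bool:
    """Zero compressor test: the whole line is zero bits."""
    return all(b == 0 for b in line)

def is_repeated_qwords(line: bytes) -> bool:
    """Repeated-values test: the line is one 8-byte value repeated."""
    if len(line) % 8:
        return False
    first = line[:8]
    return all(line[i:i + 8] == first for i in range(0, len(line), 8))

line = bytes.fromhex("deadbeefdeadbeef" * 8)  # a 64-byte cache line
print(is_zero_line(bytes(64)), is_repeated_qwords(line))  # True True
```

When either test passes, the whole line can be stored as a flag plus (at most) one qword, which is why these checks are attractive as cheap front-ends to heavier schemes.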
Gradient Boosting: Implementation from Scratch
Get introduced to gradient boosting, its algorithm, and how we can train our data from scratch.
Gradient boosting is a machine learning technique that builds a strong predictive model by sequentially adding weak models to the ensemble. It uses gradient descent optimization to minimize the loss
function of the model. The term “gradient” refers to the negative gradient of the loss function, which guides the learning process toward reducing errors. Gradient boosting is a versatile technique
that can be used for both regression and classification tasks. In this case, we’ll focus on regression and demonstrate how to solve a regression problem using two approaches. Currently, we’re
implementing a gradient boosting regressor from scratch.
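A minimal from-scratch sketch of that idea (squared-error loss, one-split regression stumps as the weak models; this is an illustration, not the course's exact code):

```python
# Minimal gradient boosting regressor: each round fits a stump to the
# current residuals (the negative gradient of squared error) and adds a
# small multiple of it to the ensemble.

def fit_stump(x, residuals):
    """Best single-threshold split on a 1-D feature, minimizing SSE."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]                      # (threshold, left_mean, right_mean)

def fit_gb(x, y, n_rounds=50, lr=0.1):
    f0 = sum(y) / len(y)                 # initial prediction: the mean
    stumps, pred = [], [f0] * len(y)
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]   # negative gradient
        t, lm, rm = fit_stump(x, resid)
        stumps.append((t, lm, rm))
        pred = [p + lr * (lm if xi <= t else rm) for p, xi in zip(pred, x)]
    return f0, stumps

def predict(model, xi, lr=0.1):
    f0, stumps = model
    return f0 + sum(lr * (lm if xi <= t else rm) for t, lm, rm in stumps)

x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 7.0, 7.3, 6.8]
model = fit_gb(x, y)
print(predict(model, 2), predict(model, 5))
```

Sequentially correcting residuals is the core of the technique; real implementations replace the stump with a depth-limited tree and add regularization, but the additive structure is the same.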
Get Analytical with R
Stefano Rizzi
R is a programming language and software environment for statistical computation and analysis. The author presents a way of doing statistical calculations using R in the field of analytical
chemistry, particularly in titrations of various kinds
With a spreadsheet, some types of calculations are difficult to carry out. So I was on the lookout for software to complement the spreadsheet. This software had to be free (as in freedom and, if possible, as in beer) and open source, powerful, lightweight, fast, used by many people, very well documented and easy to learn. All these features in one software seemed almost impossible to find. Yet, after a search, I found R. Since the full code of each example presented here is too long for a magazine article, it's freely available at https://github.com/astonfe/r
Potentiometric titration
Titration is a common laboratory method of quantitative chemical analysis that is used to determine the unknown concentration of an identified analyte. This experiment is taken from Reference 3 given
at the end of the article, and is about the potentiometric titration curve for 2.433 meq of Cl- with 0.1000 M AgNO3. The raw data are read from an xls file (LibreOffice Calc) via the gdata package,
which requires the Perl language to be installed on the computer. This is not a particular problem under GNU Linux, but Strawberry Perl (http://strawberryperl.com) must be installed if you re using
Windows. This is not the only way, though. There are other options to read xls files from R. The raw data (Figure 1) can be easily plotted with the use of the following code:
plot(x,y,type="n")
points(x,y,type="o")
Figure 2: First derivative
Figure 3: Second derivative with zoom
The first and the second derivatives (Figures 2 and 3) are useful to determine the inflection point. The derivatives can be calculated with two for loops and then plotted as previously seen for the
raw data, as follows:
n<-length(x) # loop bodies below are a reconstruction (sketch); the full code is on the author's GitHub
x1<-c(); y1<-c()
for(i in 2:n) {x1[i-1]<-(x[i]+x[i-1])/2; y1[i-1]<-(y[i]-y[i-1])/(x[i]-x[i-1])} # first derivative
plot(x1,y1,type="n")
points(x1,y1,type="o")
x2<-c(); y2<-c()
for(i in 3:n) {x2[i-2]<-(x1[i-1]+x1[i-2])/2; y2[i-2]<-(y1[i-1]-y1[i-2])/(x1[i-1]-x1[i-2])} # second derivative
plot(x2,y2,type="n")
points(x2,y2,type="o")
Now, with a zm(type="session"), I can zoom on the plot (Figure 3) for a better visualisation and then, with locator(), I can obtain the inflection point coordinates.
Figure 4: Conductometric titration (1)
Conductometric titration
For a general explanation of this technique, log on to http://en.wikipedia.org/wiki/Conductometry. The first experiment (Figure 4) is taken from Reference 5 given at the end of the article and is
about the conductometric titration curve for NaOH with 0.1219 M potassium hydrogen phthalate (C8H5KO4). The second experiment (Figure 5) is also taken from the same reference, and is about the
conductometric titration curve for H3PO4 with 0.5016 M NaOH. These two experiments are problems of linear segmented regression (see Reference 6) about the determination of the break-point(s). The
most important aspect relates to the psi parameter within the segmented package call. The psi parameter includes starting values for the break-point(s) for the corresponding variable in seg.Z. If
seg.Z includes only a variable, psi may be a numeric vector. With summary(fit) I can visualise the break-point(s) coordinates on the x axis. The easiest way to plot the regression lines is the use of
abline(intercept,slope,col="your-color"), but the abline line type spans the whole plot width. The figures presented here have been arrived at with some more calculations, and the regression lines are
plotted with lines(). The following code can be used to calculate the slope and the intercept for each segment:
library(segmented)
fit<-segmented(lm(y~x),seg.Z=~x,psi=list(x=c(5)))   # Figure 4
fit<-segmented(lm(y~x),seg.Z=~x,psi=list(x=c(3,6))) # Figure 5
Let me add something about the use of the Greek letters. Look at the following two examples:
ylab=expression("Conductivity ("*mu*"S/cm)")
ylab=expression("Conductivity ( "*mu*"S/cm)")
The first is preferable if the plot will be exported to PDF, and the second (with a space added after the opening parenthesis) if it is in PNG. With some limitations, it's also possible to directly use the micro
symbol, but probably the best thing is to use the Unicode string "\u00b5". I think that the use of a more general technique could be better than this.
Figure 5: Conductometric titration (2)
Linear fit weighted and printed with Markdown
A weighted model is very widely used in some applications (for example, in pharmacokinetics studies). With a few lines of code I can define all the models I need. The most important aspect of this
topic is the calculation of the percentage accuracy for the back-calculated concentrations, as reported in the following code:
# fit<-lm(y~x) # Linear fit simple
fit<-lm(y~x,weights=1/x^2) # Linear fit weighted
# fit<-lm(y~0+x) # Linear fit through origin
# a=coef(fit)[1]
# b=0
The table made with cbind, containing the original data, the back-calculated x values (the concentration is usually reported on the x axis) and their percentage accuracy, is very useful.
For more theory, see Reference 7 given at the end of the article. Printing a notebook-like document is possible with the R Markdown package (though other options exist). Generally speaking, using my
preferred text editor, I can write a document called, for instance, example.Rmd which contains text and code; then R is used as a compiler only. If Pandoc (http://www.pandoc.org) is installed, typing
rmarkdown::render("example.Rmd") from the R console produces the HTML file with the text formatted according to the Markdown syntax, a plot and the results of some calculations (Figure 6). It's very
easy, so issue the following commands:
# Title
## Section 1
## Section 2
geom_ribbon(aes(y=p[,"fit"],ymin=p[,"lwr"],ymax=p[,"upr"],fill="confidence"),alpha=0.1)+
scale_fill_manual(values="#1E90FF")+
geom_point(shape=21,col="black",fill="#90EE90",size=3)+
labs(x="Concentration (µg/L)",y="Response")+
ggtitle(expression(atop("Tetrachloroethene",atop("Linear fit weighted 1/x"^2))))+
theme(legend.position="none")
The plot is built with ggplot and the confidence band is added. In this example, the options message (to display R messages) and echo (to display the code along with its results) are set to FALSE.
More complex documents can be produced according to http://rmarkdown.rstudio.com.
Figure 6: R Markdown example
Emacs Speaks Statistics
With Emacs Speaks Statistics (http://ess.r-project.org), the user can edit R language commands in one GNU Emacs buffer and execute the code in a second (see Reference 9). Under Windows, an
interesting way to do this is to use the GNU Emacs distribution prepared by Professor Vincent Goulet (Université Laval), which is available for Mac OS too (http://vgoulet.act.ulaval.ca/en/emacs). I'm
not an enthusiast of ESS because I think there's an interesting approach under GNU Linux but, when I'm using Windows, R has its own script editor and Edit > Run all works fine for me.
There's another problem related to syntax highlighting. By default, the R language is poorly supported by Vim and not supported by GNU Emacs. The package ESS adds some syntax highlighting support to
GNU Emacs for the R language. I'm not very satisfied with it because all the keywords have the same colour (Figure 7). This is probably due to the R keywords not being categorised. To have more
effective syntax highlighting, I think it will be necessary to organise the keywords better, perhaps in a manner similar to Scilab (see OSFY, June 2015). Practically, it's not necessary to classify
all the keywords (which could run into thousands) but only the keywords I'm using (200, more or less). For example, I can have different groups/colours for keywords about statistics (lm, coef,
predict…), plotting (plot, points, abline…), ggplot (ggplot, geom_something, ggtitle…), generic (cbind, par, summary…) and more. This is more or less the same as in a human-spoken language: it
has a vocabulary (the keywords) within which it is possible to find various categories such as substantives, adjectives, verbs, adverbs, etc. Using this classification, I can build a syntax
highlighting file for Vim, GNU Emacs, jEdit, Notepad++, etc. An interesting explanation of ESS is written by Rodney Sparapani, as reported in Reference 10.
Figure 7: Emacs speaks statistics
There's one more thing that must be said about file encryption. Usually, my R files are not encrypted, but if a backup is done on a remote server, I prefer to encrypt them using GPG (GNU Privacy
Guard, http://www.gpg4win.org). Yes, under Windows, because in my work experience I have encountered only one company that uses GNU Linux as a desktop system.
First, it's necessary to generate my key using GPA (GNU Privacy Assistant). Now Emacs can interact directly with GPG. In Emacs, visit a new file with a .gpg extension, for example my-script.R.gpg,
and then place this line at the top of the file: # -*- epa-file-encrypt-to: ("your@email.address") -*- When you try to save the file, Emacs will show a prompt in a buffer under it. Move the cursor to
the line containing the key and hit "m". Move the cursor to OK and hit <return>. The next time I open that .gpg file, I will be prompted for the password only once, and then consecutive saves
will be password-prompt-free. More details can be found on Emacs Stack Exchange (see Reference 11).
Without fear of contradiction, it can be said that R is today the lingua franca of statistics. In my opinion, it's one of the best examples of FOSS. The titration examples presented here are a bit
unusual for an IT magazine, but do consider the fact that the automatic detection of one or more break-points in conductometric titrations is very useful in daily practice. I don't know if there is
other free and open source software that does this as well as the segmented package for R.
R is also very well documented. There are a lot of websites and books available to discover new things about it.
[1] http://en.wikipedia.org/wiki/Titration, last visited on 04/07/2015.
[2] http://en.wikipedia.org/wiki/Potentiometric_titration, last visited on 04/07/2015.
[3] Skoog, Principles of instrumental analysis, Saunders, Philadelphia, 1985.
[4] http://en.wikipedia.org/wiki/Conductometry, last visited on 04/07/2015.
[5] Cozzi, Protti, Ruaro, Analisi chimica strumentale, Metodi elettrochimici (Instrumental analytical chemistry, Electrochemical methods), Zanichelli, Bologna, 1997.
[6] http://en.wikipedia.org/wiki/Segmented_regression, last visited on 04/07/2015.
[7] Massart, Vandeginste, Buydens, De Jong, Lewi, Smeyers-Verbeke, Handbook of chemometrics and qualimetrics, Elsevier, Amsterdam, 2003.
[8] http://rmarkdown.rstudio.com, last visited on 04/07/2015.
[9] http://en.wikipedia.org/wiki/Emacs_Speaks_Statistics, last visited on 04/07/2015.
[10] http://blog.revolutionanalytics.com/2014/03/emacs-ess-and-r-for-zombies.html, last visited on 04/07/2015.
[11] http://emacs.stackexchange.com/questions/12212, last visited on 04/07/2015.
What high school Algebra quizzes and NP-complete problems have in common
What I did for my summer internship at Galois
World of algebra quizzes. As a high schooler, I was using concepts from computer science long before I even knew what computer science was. I can recall taking a math quiz—calculators banned—facing a
difficult task: the multiplication of large numbers. I was (and still am) very sloppy when it came to pencil-and-paper arithmetic—if I didn’t check my answers, I would invariably lose points because
of “stupid mistakes.” Fortunately, I knew the following trick: if I summed together the digits of my factors (re-summing if the result was ten or more), the product of these two numbers should match
the sum of the digits of the result. If not, I knew I had the wrong answer. It wasn’t until much later that I discovered that this was a very rudimentary form of the checksum.
In fact, most of the tricks I rediscovered were motivated by a simple academic need: Was my answer correct or not? Indeed, while I didn’t know it at the time, this question would become the
fundamental basis for my internship at Galois this summer.
At about the time I started learning algebra, I began to notice that my tricks for checking arithmetic had become insufficient. If a teacher asked me to calculate the expanded form of the polynomial
(x + 2)(x - 3)(x - 5), I had to carry out multiple arithmetic steps before I arrived at an answer. Checking each step was tedious and prone to error—I knew too well that I would probably be blind to
errors in the work I had just written. I wanted a different way to check that my answer was correct.
Eventually, I realized that all I had to do was pick a value of x and substitute it into the original question and the answer x³ - 6x² - x + 30. If the values matched, I would be fairly confident in
my answer. I also realized that if I picked a number like x = -2, I wouldn’t even have to calculate the value of the original problem: the answer was obviously zero! I had “invented” unit testing,
and at the hand of this technique, many symbolic expressions bent to my pencil. (I independently learned about unit testing as a teething programmer, but since a PHP programmer never codes very much
math, I never made the connection.)
World of practical software testing. Here, we pass from the world of algebra quizzes to the world of software testing. The expressions being tested are more complicated than x³ - 6x² - x + 30, but
most people still adopt the strategy of the high school me: they hand pick a few inputs to test that will give them reasonable confidence that their new implementation is correct. How does one know
that the output of the program is the correct one? For many simple programs, the functionality being tested is simple enough that the tester mentally “knows” what the correct result is, and writes it
down manually—akin to picking inputs like x = -2 that are particularly easy for a human to infer the answer to. For more complex programs, a tester may use a reference implementation to figure out
what the expected behavior is supposed to be.
Testing like this can only show the presence of bugs, not the absence of them. But, as many software companies have discovered, this is good enough! If the programmer misses an important test case
and a bug report comes in, he fixes the bug and adds a regression test to deal with that buggy input. So, as pragmatists, we have settled for this state of affairs: manual case-by-case testing (which
hopefully is automated). The state of the art of conventional software testing is fundamentally the same as how a high-schooler checks his answers on an algebra quiz. Anything better lies beyond the
dragons of theoretical computer science research.
Aside. As anyone who has written automated tests before can attest, automated tests are characterized by two primary chores: getting your code to be automatically testable in the first place
(much easier if it’s arithmetic than if it’s a kernel driver) and coming up with interesting situations to test your code in. For the latter, it turns out that while humans can come up with
decent edge-cases, they’re really bad at coming up with random test-cases. Thus, some extremely practical high-tech testing techniques involve having a computer generate random inputs. Fuzz
testing and QuickCheck style testing are both characterized by this methodology, though fuzz testing prides itself in nonsensical inputs, while QuickCheck tries hard to generate sensible inputs.
World of theoretical computer science. The teacher grading your algebra quiz doesn’t do something so simple as pick a few random numbers, substitute them into your answer, and see if she gets the
right answer. Instead, she compares your answer (the program itself) against the one she has in the answer key (a reference implementation), and marks you correct if she is able to judge that the
answers are the same. If you phrase your answer in terms of Fermat’s last theorem, she’ll mark you off for being cheeky.
The reference implementation may be wrong (bug in the answer key), but in this case it’s our best metric for whether or not a program is “correct.” Since we’ve wandered into the land of theoretical
computer science, we might ask this question to the Literal Genie: Is it possible, in general, to determine if two programs are equivalent? The Literal Genie responds, “No!” The question is
undecidable: there is no algorithm that can answer this question for all inputs. If you could determine if two programs were equivalent, you could solve the halting problem (the canonical example of
an unsolvable problem): just check if a program was equivalent to an infinitely looping one.
While the working theoretician may tame uncountably huge infinities on a regular basis, for a working programmer, the quantities handled on a regular basis are very much finite—the size of their
machine integer, the amount of memory on their system, the amount of time a program is allowed to run. When you deal with infinity, all sorts of strange results appear. For example, Rice’s theorem
states that figuring out whether or not a program has any non-trivial property (that is, there exists some program that has the property and some program that doesn’t) is undecidable! If we impose
some reasonable constraints, such as “the program terminates in polynomial time for all inputs”, the answer to this question is yes! But can we do so in a way that is better than testing that the
programs do the same thing on every input?
World of more practical computer science. We’ve relinquished enough theoretical purity to make our question interesting again for software engineers, but it is still very difficult for the programmer
to prove to himself that the algorithm is equivalent to his reference implementation. In contrast, it's easy for a user to show that the algorithm is wrong: all they have to do is give the programmer
an input for which his implementation and the reference implementation disagree.
Computer scientists have a name for this situation: problems for which you can verify their solutions (in this case, more of an anti-solution: a counter-example) in polynomial time are NP. Even if
both programs run in constant time, as a combinational logic circuit might (to simulate such a circuit, we only need to propagate the inputs through as many gates as there are in the circuit: there is
no dependence on the input), it still takes exponential time to brute-force an equivalence check. Every time we add another bit to the input, we double the number of possible inputs to check.
In fact, the question of circuit non-equivalence is NP-complete. We’ve been talking about program equivalence, but we can also talk about problem equivalence, for which you can translate one problem
(graph coloring) into another one (traveling salesman). In the seventies, computer scientists spent a lot of time proving that a lot of problems that required “brute force” were actually all the same
problem. Stephen Cook introduced the idea that there were problems that were NP-complete: problems in NP into which all other problems in NP could be translated. The most famous example of an
NP-complete problem is SAT, in which given a logical formula with boolean variables, you ask whether or not there is a satisfying assignment of variables, variables that will cause this formula to be true.
To show that circuit non-equivalence is NP-complete, we need to show that it is in NP (which we've done already) and show that we can translate some other NP-complete problem into this problem. This
is quite easy to do with SAT: write a program that takes the boolean variables of SAT as inputs and outputs the result of the logical formula and then see if it’s equivalent to a program that always
returns false.
The other direction is only slightly less trivial, but important practically speaking: if we can reduce our problem into an instance of SAT, I can chuck it at a highly optimized SAT solver. A
satisfiability problem is isomorphic to a logic circuit that outputs a single bit. We can translate a circuit equivalence problem into SAT by combining the circuits into what is called a “miter”: we
combine the inputs of the two original logic circuits into a single set that feeds into both circuits, and then test the corresponding output bits between the two circuits for equality (XOR), ORing
the entire result together. The resulting circuit outputs 0 if the outputs were the same between the two circuits (all of the XORs returned 0), and outputs 1 if there is a mismatch.
“Great,” you may be thinking, “but I’m a programmer, not a hardware designer. Most of my programs can’t be expressed just in terms of logic gates!” That is true: to encode state, you also need
latches, and input/output needs to be simulated with special input and output “ports”. However, there are many important problems that are purely combinational: the shining example of which is
cryptography, which protects your money, employs a lot of complicated math and is ruthlessly optimized.
But there still is one standing complaint: even if my programs are just logic circuits, I wouldn’t want to write them in terms of ANDs, ORs and NOTs. That just seems painful!
Enter Cryptol, the project that I am working on at Galois. Cryptol bills itself as follows:
Cryptol is a language for writing specifications for cryptographic algorithms. It is also a tool set for producing high-assurance, efficient implementations in VHDL, C, and Haskell. The Cryptol
tools include the ability to equivalence check the reference specification against an implementation, whether or not it was compiled from the specifications.
But what really makes it notable, in my humble intern opinion, is the fact that it can take programs written in programming languages like C, VHDL or Cryptol and convert them into logic circuits, or,
as we call them, “formal models”, which you can chuck at a SAT solver which will do something more sensible than brute-force all possible inputs. At one point, I thought to myself, “It’s a wonder
that Cryptol even works at all!” But it does, and remarkably well for its problem domain of cryptographic algorithms. The state of the art in conventional software testing is manually written tests
that can only show the presence of bugs in an implementation; the state of the art in Cryptol is a fully automatic test that gives assurance that an implementation has no bugs. (Of course, Cryptol
could be buggy, but such is the life of high assurance.)
SAT solvers are perhaps one of the most under-utilized high-tech tools that a programmer has at their fingertips. An industrial strength SAT solver can solve most NP-complete problems in time for
lunch, and there are many, many problems in NP with wide-ranging practical applications. However, the usual roadblocks to using a SAT solver include:
1. No easy way to translate your problem into SAT and then run it on one of the highly optimized solvers, which are frequently poorly documented, library-unfriendly projects in academia,
2. Generating friendly error messages when your SAT solver passes or fails (depending on what is an “error”), and
3. Convincing your team that, no really, you want a SAT solver (instead of building your own, probably not-as-efficient implementation.)
My primary project was addressing issue one, in Haskell, by building abcBridge, a set of bindings for ABC, a system for sequential synthesis and verification. One might observe that Haskell
already has a number of SAT solving libraries: ABC is notable because it employs an alternative formulation of SAT in the form of And-Inverter Graphs (NAND gates are capable of simulating all boolean
logic), as well as some novel technology for handling AIGs such as fraiging, a high-level strategy that looks for functionally equivalent subsets of your circuits.
The project itself has been a lot of fun: since I was building this library from scratch, I had a lot of flexibility with API decisions, but at the same time got my hands into the Cryptol codebase,
which I needed to integrate my bindings with. With any luck, we’ll be releasing the code as open source at the end of my internship. But I’m going to miss a lot more than my project when my
internship ends in two weeks. I hope to follow up with a non-technical post about my internship. Stay tuned!
Post factum. Hey, this is my hundredth post. Sweet!
Finite Difference Method - Mastering Partial Differential Equations
The Finite Difference Method (FDM) is a powerful numerical technique used to solve partial differential equations (PDEs) by approximating derivatives with finite differences. This method is
particularly useful when analytical solutions are difficult or impossible to obtain, making it an essential tool in various fields of science and engineering.
Understanding Finite Differences
At the core of the FDM lies the concept of finite differences. Instead of working with continuous functions and their derivatives, we approximate them using discrete points on a grid. The three main
types of finite differences are:
1. Forward difference
2. Backward difference
3. Central difference
Let's explore each of these approximations:
Forward Difference
The forward difference approximates the derivative of a function $f(x)$ at a point $x$ using the function values at $x$ and $x + h$, where $h$ is a small step size:
$\frac{\partial f}{\partial x} \approx \frac{f(x + h) - f(x)}{h}$
Backward Difference
The backward difference uses the function values at $x$ and $x - h$:
$\frac{\partial f}{\partial x} \approx \frac{f(x) - f(x - h)}{h}$
Central Difference
The central difference provides a more accurate approximation by using function values on both sides of $x$:
$\frac{\partial f}{\partial x} \approx \frac{f(x + h) - f(x - h)}{2h}$
These approximations form the foundation of the FDM and can be extended to higher-order derivatives and multidimensional problems.
Discretization of the Domain
To apply the FDM, we first discretize the problem domain into a grid of points. For a one-dimensional problem, this results in a series of equally spaced points along a line. In two dimensions, we
create a mesh of points, and in three dimensions, we form a lattice.
For example, in a 1D problem with domain $[0, L]$, we might choose $N+1$ equally spaced points:
$x_i = i \Delta x, \quad i = 0, 1, ..., N$
where $\Delta x = L/N$ is the step size.
Applying FDM to PDEs
Let's consider how to apply the FDM to solve a simple 1D heat equation:
$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}$
where $u(x,t)$ is the temperature, $t$ is time, $x$ is position, and $\alpha$ is the thermal diffusivity.
We can discretize this equation using forward difference in time and central difference in space:
$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \alpha \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{(\Delta x)^2}$
Here, $u_i^n$ represents the temperature at position $i$ and time step $n$.
Rearranging this equation, we get an explicit scheme for updating the temperature:
$u_i^{n+1} = u_i^n + \frac{\alpha \Delta t}{(\Delta x)^2} (u_{i+1}^n - 2u_i^n + u_{i-1}^n)$
This scheme allows us to compute the temperature at the next time step based on the current values.
Stability and Accuracy
When implementing the FDM, it's crucial to consider stability and accuracy. The choice of step sizes $\Delta x$ and $\Delta t$ can significantly impact the solution's behavior.
For the explicit scheme of the heat equation, we have a stability condition:
$\frac{\alpha \Delta t}{(\Delta x)^2} \leq \frac{1}{2}$
Violating this condition can lead to numerical instability and unreliable results.
Accuracy can be improved by:
1. Using higher-order finite difference approximations
2. Decreasing step sizes (at the cost of increased computational effort)
3. Employing implicit or semi-implicit schemes
Implementation in Python
Let's implement the explicit scheme for the 1D heat equation:
import numpy as np
import matplotlib.pyplot as plt

def heat_equation_1d(L, T, Nx, Nt, alpha):
    # Set up the grid
    dx = L / (Nx - 1)
    dt = T / (Nt - 1)
    x = np.linspace(0, L, Nx)
    t = np.linspace(0, T, Nt)
    # Initialize temperature array
    u = np.zeros((Nx, Nt))
    # Set initial condition (e.g., a sine wave)
    u[:, 0] = np.sin(np.pi * x / L)
    # Set boundary conditions (fixed ends at 0°C)
    u[0, :] = u[-1, :] = 0
    # Compute the solution
    for n in range(Nt - 1):
        for i in range(1, Nx - 1):
            u[i, n+1] = u[i, n] + alpha * dt / dx**2 * (u[i+1, n] - 2*u[i, n] + u[i-1, n])
    return x, t, u

# Set parameters
L = 1.0       # Length of the domain
T = 0.1       # Total time
Nx = 50       # Number of spatial points
Nt = 1000     # Number of time steps
alpha = 0.01  # Thermal diffusivity

# Solve the equation
x, t, u = heat_equation_1d(L, T, Nx, Nt, alpha)

# Plot the results
plt.figure(figsize=(10, 6))
plt.imshow(u.T, aspect='auto', extent=[0, L, T, 0], cmap='hot')
plt.title('1D Heat Equation Solution')
plt.xlabel('Position x')
plt.ylabel('Time t')
plt.colorbar(label='Temperature')
plt.show()
This code solves the 1D heat equation and visualizes the temperature evolution over time and space.
Advantages and Limitations of FDM
The Finite Difference Method offers several advantages:
1. Simplicity: FDM is conceptually straightforward and easy to implement.
2. Flexibility: It can be applied to a wide range of PDEs and boundary conditions.
3. Efficiency: For simple geometries, FDM can be computationally efficient.
However, it also has some limitations:
1. Complex geometries: FDM struggles with irregular domains and complex boundary shapes.
2. Accuracy: Higher-order accuracy requires more complex stencils and careful handling of boundaries.
3. Stability issues: Explicit schemes can be conditionally stable, requiring small time steps.
Extensions and Advanced Techniques
The basic FDM can be extended and improved in various ways:
1. Implicit methods: These offer unconditional stability but require solving a system of equations at each time step.
2. Adaptive mesh refinement: This technique concentrates grid points in regions of high solution variation.
3. Multigrid methods: These accelerate the convergence of iterative solvers for large systems.
4. High-order schemes: Using more points in the finite difference approximations can increase accuracy.
The Finite Difference Method is a fundamental numerical technique for solving PDEs. Its simplicity and versatility make it an excellent starting point for understanding numerical methods in
computational science and engineering. While it has limitations, particularly for complex geometries, the FDM remains a valuable tool in many applications, from heat transfer and fluid dynamics to
financial modeling and beyond.
As you continue your journey in numerical methods for PDEs, remember that the FDM is just one of many techniques available. Each method has its strengths and weaknesses, and choosing the right
approach often depends on the specific problem at hand. The insights gained from understanding the FDM will serve as a solid foundation for exploring more advanced numerical methods in the field of
partial differential equations. | {"url":"https://app.studyraid.com/en/read/2438/49291/finite-difference-method","timestamp":"2024-11-10T08:59:43Z","content_type":"text/html","content_length":"202634","record_id":"<urn:uuid:2912e13c-1424-47c3-ad9c-a39adc57099d>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00744.warc.gz"} |
What Are Conjectures, Theorems, and Counterexamples? - Expii
A conjecture is essentially a hunch or educated guess based on patterns, examples, intuition, experience, or anything else. A theorem is a conjecture that is true. However, not all conjectures are
true! An if-then conjecture is sometimes disproved by finding a counterexample—an example that violates the if-then statement.
Air Force Office of Scientific Research
, under the Department of Defense
, under award number
; and National Institute of Standards and Technology
, under Award No.
Project Team Members:
Northwestern University
University of Michigan-Ann Arbor
Georgia Institute of Technology
Materials Optimization Design with Pruned-Search
This project is kind of special. You know how, in scientific research, a lot of findings are accidental? The outcome of this project is further evidence of this saying. We started out with
a goal to solve an optimization design problem in materials engineering: suppose you are given a set of design variables and you are allowed to change their values semi-freely, that is, under some
constraints. For example, each variable can only take values between 0 and 1, and the sum of all variable values equals 1. What you want is to set them all at legitimate values (which
results in a legitimate design) so that the property you are designing for reaches its optimum.
Naturally, we turn to optimization tools to solve this problem. All we need is an objective function that takes in design variable values and spits out objective values, and the set of constraints to
set the boundary of legitimacy. There is a large body of literature for optimization methods ready for you to explore. The story could end here with picking up the right optimization tool that
conducts the many searches within our design space to hopefully find the best value, perhaps developing some tricks here and there to avoid local optima, since the objective function in materials design is never convex.
Figure 1. Our proposed data mining ways of solving an optimization problem. The method consists of three key processes. Data Distillation collects a significant and representative data set from an
objective function. Complexity Reduction ranks the importance and prunes the search space for each variable. Finally, the Enhanced Optimization conducts a search within the reduced space and seeks
for the optimal solution.
We meant to make search better. But how? The idea is to have the search force focused in a more promising path and prune (hence the name) the irrelevant effort. The validity of this heuristic is
rooted in two assumptions. First, we assume the desired function value depends only on a reduced, albeit unknown, set of variables. Second, we assume that the impact of each variable on the objective value is different; hence, there exists an optimal order in terms of searching priority.
Suppose we buy these stated assumptions. The next question to ask is how to obtain an optimal search path and, at the same time, reduce the viable search region of each variable so that search is
faster. We turn to data science to draw insights. For that, a representative set of data needs to be collected; we call that process data distillation (see [1]). Afterwards we perform complexity
reduction along two branches, in parallel: one creates an ordered list of variables based on their impact on the function, and the other reduces the feasible region for each variable. The former
is achieved through feature selection methods where a ranking is guaranteed. The latter is realized by examining a rule-based classifier and looking for the critical thresholds.
After the above steps, meta-heuristics about searching are obtained. Optimization becomes a much more promising endeavor now that the search space is pruned. We then employ a simple line search
algorithm that takes a prefixed searching order and replaces the original constraints with the pruned ones.
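The three-stage pipeline above can be caricatured in a few lines of Python. This is only a hedged sketch of the idea, not the authors' implementation: the toy objective, the sample count, the 10% "promising" cut, and the grid-based line search are all invented here for illustration.

```python
import random

def objective(x):
    # Toy objective: only x[0] and x[1] matter, mimicking the assumption
    # that the function value depends on a reduced set of variables.
    return -(x[0] - 0.7) ** 2 - 2 * (x[1] - 0.2) ** 2

def pruned_search(f, dim=5, n_samples=2000, keep=0.1, seed=0):
    rng = random.Random(seed)
    # 1) Data distillation: sample the design space and record values.
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_samples)]
    ranked = sorted(((f(p), p) for p in pts), reverse=True)
    top = [p for _, p in ranked[: int(keep * n_samples)]]
    # 2) Complexity reduction: prune each variable's interval to the range
    # it spans among the promising samples; rank variables by how much
    # their interval shrank (a crude importance score).
    bounds = [(min(p[i] for p in top), max(p[i] for p in top)) for i in range(dim)]
    order = sorted(range(dim), key=lambda i: bounds[i][1] - bounds[i][0])
    # 3) Enhanced optimization: coordinate-wise line search inside the
    # pruned box, visiting variables in importance order.
    x = [(lo + hi) / 2 for lo, hi in bounds]
    for i in order:
        lo, hi = bounds[i]
        grid = [lo + (hi - lo) * t / 50 for t in range(51)]
        x[i] = max(grid, key=lambda v: f(x[:i] + [v] + x[i + 1:]))
    return x

best = pruned_search(objective)
print(round(best[0], 2), round(best[1], 2))
```

With this toy objective, the recovered point lands near the true optimum (0.7, 0.2), and the three irrelevant coordinates end up with wide pruned intervals that are searched last.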
The first set of results is presented in [2], where we apply the strategy in five materials property optimization problems. The additional difficulty in this problem setting is that there could be
multiple answers that lead to the same local optimum. It turns out our method not only helps reduce the search complexity (making the search faster) but is also able to identify multiple optimal solutions.
Then we went further to see whether this framework is applicable to general optimization problems (not related to materials science). A number of such problems from the literature are studied in [1],
and the conclusion holds. Both the searching speed and result accuracy have been improved.
Software Download
The Pruned Search software can be downloaded. It is a general optimization package that performs search space reduction given an objective function and mathematical constraints, although the example demo provided is related to a certain materials property.
• [1] Ruoqian Liu, Ankit Agrawal, Wei-keng Liao, Alok Choudhary, and Zhengzhang Chen. Pruned Search: A Machine Learning Based Meta-Heuristic Approach for Constrained Continuous Optimization. In the
Eighth International Conference on Contemporary Computing (IC3), August 2015. (pdf)
• [2] Ruoqian Liu, Abhishek Kumar, Zhengzhang Chen, Ankit Agrawal, Veera Sundararaghavan, and Alok Choudhary. A predictive machine learning approach for microstructure optimization and materials
design. Scientific Reports, 5:11551, Macmillan Publishers Limited SN, June 2015. (pdf)
• [3] Ruoqian Liu, and Ankit Agrawal, Wei-keng Liao, and Alok Choudhary. Search Space Preprocessing in Solving Complex Optimization Problems. In the Workshop on Complexity for Big Data held in
conjunction with the IEEE International Conference on Big Data, October 2014. (pdf)
This work is supported by AFOSR (Air Force Office of Scientific Research), Department of Defense (DOD) under Award No. FA9550-12-1-0458; and by National Institute of Standards and Technology (NIST),
under Award No. 70NANB14H012.
Free Number Base Changer
Number Base Changer
Fast, free, open source, ad-free tools.
Change Number Base Calculator
This tool helps developers easily change number base to decimal, binary, hexadecimal, or more complex operations.
How to Use the Number Base Changer:
This free online tool works for all bases. For example, you can change a number from base 10 to any other base.
• Enter Your Number:
Start by entering the number you wish to convert.
• Select the Current Base:
Choose the base that your number is currently in (e.g., binary, decimal, hexadecimal).
• Choose the Target Base:
Select the base to which you want to convert the number.
• Convert:
Click the “Convert” button, and this change base of number calculator will instantly display the converted digit string in the target base.
What You Can Do with this Number Base Tool
If you’re coding in Python and need to change the base of a number or working in Tableau to change number formats based on parameters, this tool makes it easy.
• Easily convert numbers between various bases, including binary, decimal, and hexadecimal.
• Quickly change number base without manual calculations.
• Ensure precise conversions, essential for coding and math operations.
• Intuitive interface that simplifies complex base conversions.
If you need to convert RGB color values to HEX format for your CSS, check out our RGB to HEX converter.
Understanding Number Bases and Conversions
The ability to divide decimals, work with calculated fields, and manage string representations in different number formats makes this tool a must-have for anyone dealing with number systems.
Number bases are fundamental in various fields, including computer science, engineering, and mathematics. From binary (base 2) and decimal (base 10) to hexadecimal (base 16), different bases
represent numbers using different symbols. For instance, base 2 uses only the digits 0 and 1, while base 16 uses the digits 0-9 and the letters A-F.
Changing a number base means converting it from one system to another. This is important for working with different data formats. It can also help improve code efficiency.
For developers, understanding and working with number bases is vital. Our tool can help you convert numbers to base 10 or between binary, octal, and hexadecimal.
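Under the hood, a converter like this typically goes through decimal: parse the input string in its source base, then emit the digits of the target base by repeated division. A minimal Python sketch (the function names here are our own, not the tool's):

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base(n, base):
    """Convert a non-negative integer to its digit string in the given base."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)      # peel off the least-significant digit
        out.append(DIGITS[r])
    return "".join(reversed(out))

def change_base(number, src, dst):
    """Convert `number` (a string in base `src`) to a string in base `dst`."""
    return to_base(int(number, src), dst)

print(change_base("255", 10, 16))   # -> FF
print(change_base("11010", 2, 10))  # -> 26
```

Python's built-in `int(s, base)` handles the parsing side for any base from 2 to 36, which is why only the digit-emission direction needs to be written by hand.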
• What is a number base?
A number base is the number of distinct digits, including zero, used to represent numbers in a system. Common bases include binary (base 2), decimal (base 10), and hexadecimal (base 16).
• How to change a number's base?
Use our online tool to enter the number, select the current base, choose the target base, and click “Convert.” The tool will instantly show the converted number in the target base.
• Can this tool convert between any number bases?
Our tool can convert numbers between binary numbers, decimal and hexadecimal numbers, and other common number systems.
• Why would I need to change number base?
Converting between number bases is necessary because different systems use different bases. For example, computers use binary, while humans typically use the decimal number system.
• Is this tool suitable for coding in Python?
Absolutely. Our tool is perfect for Python developers who need to change the base of numbers for various coding tasks. It’s also helpful for working with data in platforms like Tableau and Excel.
Daily Medium Mathdoku Puzzle for Wednesday 17th January 2024
Are you sure you want to reset this puzzle?
The rules of Mathdoku are based on the rules of Sudoku. This is an example Mathdoku starting grid.
A completed Mathdoku grid is like a Sudoku in that each row and column must contain the numbers 1-6 once and only once (notice that this grid is smaller than the standard Sudoku grid).
In addition, the grid is split up in to multiple sections, typically between two and four cells. Each section has a number and a Mathematical symbol. You must make that number by combining the
numbers with the mathematical symbol.
This is the same grid as above, but completed. You can see that by combining the numbers with the mathematical symbol, you arrive at the target number.
As an example, the top-left block has two cells with the clue '6x', so this means you need to make 6 by multiplying the two numbers, and 2x3 does make 6!
With + and x, it doesn't matter which way round your numbers go. However, with ÷ and -, the order does matter. For Mathdoku puzzles, you can choose which way round the numbers go. So, in the bottom
left we have the clues '3 -' and '3 ÷', it doesn't matter which way round the two numbers go in the cells. (It does of course matter for the general 'Sudoku' rule that each row/column must contain
each number once and only once).
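That order-free rule for − and ÷ is easy to state in code. Here is a small illustrative Python helper (not part of the puzzle site) that checks whether a two-cell block satisfies its clue:

```python
def satisfies(values, target, op):
    """Check whether a two-cell Mathdoku block satisfies its clue.

    For '-' and '/', the two values may be taken in either order,
    matching the rule described above.
    """
    a, b = values
    if op == '+':
        return a + b == target
    if op == 'x':
        return a * b == target
    if op == '-':
        return abs(a - b) == target              # either order allowed
    if op == '/':
        return max(a, b) == target * min(a, b)   # either order allowed
    raise ValueError("unknown operator: " + op)

print(satisfies((2, 3), 6, 'x'))   # the '6x' block from the example -> True
print(satisfies((4, 1), 3, '-'))   # '3 -' clue, order-free -> True
print(satisfies((6, 2), 3, '/'))   # '3 ÷' clue, order-free -> True
```

A full solver would combine a check like this with the row/column uniqueness rule, but the block check is the only Mathdoku-specific part.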
This is another example starting grid. We can't fill any numbers in immediately, and this is always the case with Mathdoku puzzles.
The first step with a Mathdoku puzzle is always to enter pencil marks for those cells that don't have many combinations. For example, the top-left cell with the clue '5x' is a great example. There is
only one way to satisfy that clue, i.e. 1x5 = 5. So we can enter 1 and 5 for the pencil marks for those two cells.
There are two ways to satisfy '6x', namely 1x6 = 6, and 2x3 = 6, so it's not worth looking at that block.
We have now entered some of the pencil marks, and we will use the 'naked pairs' technique here. For the column on the far-left, we have a pair of cells with the clue '20x', and we have the
possibilities of 4 and 5. The 4 and 5 in this column must go in those two cells, we don't know which way round yet, but they must go in those two cells.
This means the top-left cell can't be a 5, and therefore must be a 1.
There are a further 3 cells we can now fill in, and this is quite often the case with Mathdoku puzzles.
Having filled this in, we can now look at the two cells in the top row with '7+' as the clue. We can't use 1, 5 or 6 again in this top row, so we can only make '7+' with 3+4, which means the last
cell in this row must be a 2.
We have now made good progress in solving this Mathdoku, and you have just learnt the rules you need to be able to solve a Mathdoku puzzle.
You should now be able to work out the value that goes in the highlighted cell.
This page will automatically load the Mathdoku puzzle for today. If you want to play a different puzzle, go to the archive page and choose your puzzle.
There are two ways to play a Mathdoku puzzle: you can just use the mouse/touchscreen, or you can use the mouse and keyboard. You can switch between the two methods any time you like, and can use a
combination of both.
Playing with a mouse/touchscreen.
• When you have found a square where you can enter a number, click/touch that square. The square will turn light blue.
Above and below the puzzle is the number selection. Click/touch the number you want to enter in to that cell. If there is already a number in that square, it will be over-written.
• If you want to enter a pencil mark, click/touch the square you want to put in a pencil mark. It will turn light blue. Click/touch the pencil icon above or below the puzzle. This icon will turn
light blue, and you are now in pencil marks mode.
Whenever you click/touch a number now, a pencil mark will be put in the square instead. To remove a number as a pencil mark, make sure you are in pencil marks mode, and click/touch the number again.
You can exit pencil mark mode by clicking/touching the pencil icon; it will turn back to normal.
• If you want to clear a particular square, make sure that square is selected and is in light blue. Click/touch the eraser icon. If there is a number in that square, it will be removed. If you
click/touch it again, any pencil marks in that square will be removed.
Playing with a mouse and keyboard.
• You will need to select a square by clicking on it with the mouse, it will turn light blue. You can change the current square by using the cursor keys on your keyboard.
• To enter a number, press that number on the keyboard. If there is already a number in that square, it will be overwritten. To remove a number, press the backspace or delete key on your keyboard.
• To enter a pencil mark, press control, shift, or alt on your keyboard at the same time as pressing a number key. Do the same thing again to remove that pencil mark.
Any mistakes you make will be highlighted in red. The website will know when you have completed a puzzle and will tell you. If you have an account and are logged in, the website will remember that you
have completed that puzzle. You will also take part in our leaderboards. It's free to create an account!
Insertion Sort in C, C++, Java and Python | Insertion sort algorithm
Insertion sort in C is a simple sorting algorithm that virtually splits the given array into sorted and unsorted parts; values from the unsorted part are then picked and placed at the correct position in the sorted part.
What is sorting?
In simple words, sorting means logically arranging data, that is, either in ascending or descending order.
But, where can we see sorting in real life? Although you come across sorting many times in your life, you may not have noticed it. For instance, let us consider an example of a student, David, in
class 7.
He is assigned a task to arrange the mathematics test answer sheets in ascending order so that the student who scored the highest marks can be awarded. Let us assume that David completed the task and
sorted the sheets in ascending order, and found that he was the one who scored the maximum marks. How can you justify this? This is because his sheet was at last in the stack(the bundle of the
sheets), and since all the sheets were arranged in ascending order, so he was the one with the highest marks.
So he completed his task, but how did he do so? He used one of the many sorting techniques, for instance, say insertion sort.
Now that we are clear with the real-world examples of sorting, let us understand what is the use of sorting in computer science? There are plenty of uses.
For example:
1. The contact list in your phone is sorted, which means you can easily access your desired contact from your phone since the data is arranged in that manner for you. In other words, “it is sorted”.
2. While shopping on Flipkart or Amazon, you sort items based on your choice, that is, price low to high or high to low.
3. The app dock on your phone is an easily relatable example.
Now since we have a crystal clear idea about sorting in both perspectives, let us, deep-dive, into Insertion Sort in c.
Insertion Sort in c
Insertion sort in C is a simple sorting algorithm that virtually splits the given array into sorted and unsorted parts.
I hope you remember the task assigned to David. Let us assume David used the insertion sort technique to complete his task. He first treats sheet number 1 as the sorted part, then takes each remaining sheet in turn, compares its marks with the sheets already sorted, and inserts it at the correct position. Repeating this for every sheet yields the sorted array.
Now speaking technically, the insertion sort follows the following algorithm to sort an array of size in ascending order:
1. Iterate from arr[1] to arr[n-1] over the array.
2. Compare the current element (key) to its predecessor.
3. If the key element is smaller than its predecessor, compare its elements before. Move the greater elements one position up to make space for the swapped element.
Now let us apply the above algorithm for the task assigned to David and let the unsorted array of sheets be as follows:
He first compared the sheet at index 1 with the selected sheet at index 0 (number 1). As the marks on the sheet at index 1 are less than the marks on the sheet at index 0, he inserts it on the left side of index 0. Now the new transformed array will look like this:
Note: The two elements at index 0 and index 1 have been swapped.
Now, he again compared the same selected sheet (number 1), which is now at index 1, with the marks on the sheet at index 2 and finds that it is greater than the selected sheet (number 1) that is it
is already sorted. Our array would be:
Now he again compares them. But this time, he compares the element at index 2 with the element at index 3, since the left part of index 2 is already sorted. He finds that the marks on index 3 are
less than marks in the sheet at index 2, so he searches the right position for the sheet in the left (that is the sorted part) and finds the correct position at index 0. Now, the transformed array
will be:
He again compares the element at index 3 with the element at index 4, and finds that marks on sheet at index 4 are less than on sheet at index 3. Thus, he finds the correct position for this element
in the left (that is sorted part) and positioned it at index 1. Our sorted array will be:
So this is our sorted array. Now, can you tell how much David scored in his mathematics test? He scored 97 marks.
What is Insertion Sort Algorithm?
• It is one of the easiest and brute force sorting algorithm
• Insertion sort is used to sort elements in either ascending or descending order
• In insertion sort, we maintain a sorted part and unsorted part
• It works just like playing cards i.e, picking one card and sorting it with the cards that we have in our hand already
• With every iteration, one item is moved from the unsorted section to the sorted section
• The first element is picked and is considered sorted
• After this, we start picking from the second element onward and compare with elements in the sorted section
• We keep shifting the elements from the sorted section one by one until an appropriate location is found for that element
• This process is continued until all elements have been exhausted
Insertion Sort Algorithm Pseudo-code
• Take the first element and consider it to be a sorted part(a single element is always sorted)
• Now pick arr[1] and store it is a temporary variable
• Start comparing the values of tmp with elements of the sorted part from the rear side
• If tmp is less than the rear element, say arr[k], then shift arr[k] to k+1 index
• This shifting will continue until the appropriate location is identified. Then, we will put the temporary element at the identified location
• This will continue for all the elements, and we will have our desired sorted array in ascending order
Also Read: Bubble Sort Algorithm
Insertion Sort Algorithm
Pseudocode:

InsertionSort(arr, size)
    consider the 0th element as the sorted part
    for each element from i = 1 to size-1
        tmp = arr[i]
        for j = i-1 down to 0
            if arr[j] > tmp
                right-shift arr[j] by one position
            else stop
        put tmp at the freed position (j+1)
Insertion Sort Dry Run
Using the array {9, 5, 1, 4, 3} from the code examples below:

First step: mark the sorted part, i.e. the single element 9.
After i=1: 5 9 1 4 3
After i=2: 1 5 9 4 3
After i=3: 1 4 5 9 3
After i=4: 1 3 4 5 9
Insertion Sort Time Complexity
• In the worst-case scenario, each of the n elements may need up to n shifts to reach its correct position, giving O(n²)
• In the best-case scenario, that is, an already sorted array, we just pick the elements but no shifting takes place, leading to O(n) time complexity; every element is still traversed at least once
• Best Time Complexity: O(n)
• Average Time Complexity: O(n²)
• Worst Time Complexity: O(n²)
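These best- and worst-case claims can be checked empirically by instrumenting the sort to count shifts. A short Python experiment (our own instrumentation, not one of the article's listings):

```python
def insertion_sort_count(arr):
    """Sort a copy of arr and return the number of element shifts performed."""
    a = list(arr)
    shifts = 0
    for i in range(1, len(a)):
        tmp = a[i]
        j = i - 1
        while j >= 0 and tmp < a[j]:
            a[j + 1] = a[j]
            shifts += 1
            j -= 1
        a[j + 1] = tmp
    return shifts

n = 100
print(insertion_sort_count(range(n)))         # already sorted: 0 shifts -> O(n)
print(insertion_sort_count(range(n, 0, -1)))  # reversed: n(n-1)/2 = 4950 shifts -> O(n^2)
```

The reversed input forces every element to shift past all elements before it, which is exactly the n(n-1)/2 comparisons behind the O(n²) bound.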
Insertion Sort Space Complexity
• No auxiliary space is required by the insertion sort implementation; that is, we do not use any extra arrays, linked lists, stacks, queues, etc., to store our elements
• Hence the space complexity is O(1)
Also Read: Facial Recognition using Python
Insertion Sort in C – Algorithm
Now let’s apply this algorithm on our insertion sort in C:
#include <stdio.h>

// function to print the elements of the array
void display(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

// function to sort the elements of the array
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int tmp = arr[i];
        int j = i - 1;
        while (j >= 0 && tmp < arr[j]) {  // check j >= 0 first to avoid reading arr[-1]
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = tmp;
    }
}

// main function or driver function
int main() {
    int arr[] = {9, 5, 1, 4, 3};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("Elements before sorting:\n");
    display(arr, n);
    insertionSort(arr, n);
    printf("Elements after sorting:\n");
    display(arr, n);
    return 0;
}
Output of the program:
Elements before sorting:
9 5 1 4 3
Elements after sorting:
1 3 4 5 9
Insertion Sort in Java
import java.util.*;

class InsertionSort {
    // method for sorting the elements
    void insertionSort(int arr[]) {
        int size = arr.length;
        for (int i = 1; i < size; i++) {
            int tmp = arr[i];
            int j = i - 1;
            while (j >= 0 && tmp < arr[j]) {
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = tmp;
        }
    }

    // method for printing the elements
    void display(int arr[]) {
        int size = arr.length;
        for (int i = 0; i < size; i++)
            System.out.print(arr[i] + " ");
        System.out.println();
    }

    // Main method or driver method
    public static void main(String args[]) {
        int[] arr = { 9, 5, 1, 4, 3 };
        InsertionSort ob = new InsertionSort();
        System.out.println("Elements before sorting: ");
        ob.display(arr);
        ob.insertionSort(arr);
        System.out.println("Elements after sorting: ");
        ob.display(arr);
    }
}
Output of the program:
Elements before sorting:
9 5 1 4 3
Elements after sorting:
1 3 4 5 9
Insertion Sort in C++
#include <iostream>
using namespace std;

// Function for displaying the elements
void display(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
}

// Function for sorting the elements
void insertionSort(int arr[], int size) {
    for (int i = 1; i < size; i++) {
        int tmp = arr[i];
        int j = i - 1;
        while (j >= 0 && tmp < arr[j]) {  // check j >= 0 first to avoid reading arr[-1]
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = tmp;
    }
}

// Main Function or Driver Function
int main() {
    int data[] = {9, 5, 1, 4, 3};
    int size = sizeof(data) / sizeof(data[0]);
    cout << "Elements before sorting:\n";
    display(data, size);
    insertionSort(data, size);
    cout << "Elements after sorting:\n";
    display(data, size);
    return 0;
}
Output of the program:
Elements before sorting:
9 5 1 4 3
Elements after sorting:
1 3 4 5 9
// C++ program for insertion sort
#include <bits/stdc++.h>
using namespace std;

/* Function to sort an array using insertion sort */
void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;

        /* Move elements of arr[0..i-1], that are
           greater than key, to one position ahead
           of their current position */
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}

// A utility function to print an array of size n
void printArray(int arr[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        cout << arr[i] << " ";
    cout << endl;
}

/* Driver code */
int main()
{
    int arr[] = { 94, 90, 97, 50, 75 };
    int n = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, n);
    printArray(arr, n);
    return 0;
}
Insertion Sort in Python
# Code for sorting the elements
def insertionSort(arr):
    for i in range(1, len(arr)):
        tmp = arr[i]
        j = i - 1
        while j >= 0 and tmp < arr[j]:
            arr[j + 1] = arr[j]
            j = j - 1
        arr[j + 1] = tmp

# Main code or Driver Code
arr = [9, 5, 1, 4, 3]
print('Elements before sorting:')
print(arr)
insertionSort(arr)
print('Elements after sorting:')
print(arr)
Output of the program:
Elements before sorting:
[9, 5, 1, 4, 3]
Elements after sorting:
[1, 3, 4, 5, 9]
Also Read: Python Tutorial for beginners
Insertion Sort Example
Given the linked list with unsorted data in it, we need to sort the data.
Input: -1->5->3->4->0
Output: -1->0->3->4->5
Code: JAVA
public class List {
    node head;
    node sorted;

    // Structure of the node: the data part and the next part, which points to the next node
    class node {
        int data;
        node next;

        public node(int data) {
            this.data = data;
        }
    }

    void insert(int data) {
        node newnode = new node(data);
        newnode.next = head;
        head = newnode;
    }

    void insertionSort(node headref) {
        sorted = null;
        node curr = headref;
        while (curr != null) {
            node next = curr.next;
            sortedInsert(curr);   // place curr at its right position in the sorted list
            curr = next;
        }
        head = sorted;
    }

    void sortedInsert(node newnode) {
        if (sorted == null || sorted.data >= newnode.data) {
            newnode.next = sorted;
            sorted = newnode;
        } else {
            node curr = sorted;
            while (curr.next != null && curr.next.data < newnode.data) {
                curr = curr.next;
            }
            newnode.next = curr.next;
            curr.next = newnode;
        }
    }

    void print_list(node head) {
        while (head != null) {
            System.out.print(head.data + " ");
            head = head.next;
        }
        System.out.println();
    }

    public static void main(String[] args) {
        List list = new List();
        // build the list -1 -> 5 -> 3 -> 4 -> 0 (insert adds at the front)
        list.insert(0);
        list.insert(4);
        list.insert(3);
        list.insert(5);
        list.insert(-1);
        System.out.println("Original list:");
        list.print_list(list.head);
        list.insertionSort(list.head);
        System.out.println("Sorted list:");
        list.print_list(list.head);
    }
}
Output of the program:
Original list:
-1 5 3 4 0
Sorted list:
-1 0 3 4 5
Insertion sort in C
#include <stdio.h>
#include <stdlib.h>

struct Node
{
    int data;
    struct Node* next;
};

void sortedInsert(struct Node**, struct Node*);

void insertionSort(struct Node **head_ref)
{
    struct Node *sorted = NULL;
    struct Node *curr = *head_ref;
    while (curr != NULL)
    {
        struct Node *next = curr->next;
        sortedInsert(&sorted, curr);
        curr = next;
    }
    *head_ref = sorted;
}

void sortedInsert(struct Node** head_ref, struct Node* new_node)
{
    struct Node* curr;
    if (*head_ref == NULL || (*head_ref)->data >= new_node->data)
    {
        new_node->next = *head_ref;
        *head_ref = new_node;
    }
    else
    {
        curr = *head_ref;
        while (curr->next != NULL &&
               curr->next->data < new_node->data)
            curr = curr->next;
        new_node->next = curr->next;
        curr->next = new_node;
    }
}

void display(struct Node *head)
{
    struct Node *tmp = head;
    while (tmp != NULL)
    {
        printf("%d ", tmp->data);
        tmp = tmp->next;
    }
    printf("\n");
}

void insert(struct Node** head_ref, int new_data)
{
    struct Node* new_node = malloc(sizeof(struct Node));
    new_node->data = new_data;
    new_node->next = (*head_ref);
    (*head_ref) = new_node;
}

int main()
{
    struct Node *a = NULL;
    insert(&a, 5);
    insert(&a, 20);
    insert(&a, 4);
    insert(&a, 3);
    insert(&a, 30);
    printf("Original list:\n");
    display(a);
    insertionSort(&a);
    printf("Sorted list:\n");
    display(a);
    return 0;
}
Output of the program:
Original list:
30 3 4 20 5
Sorted list:
3 4 5 20 30
Example 2:
Given the array, which is a nearly sorted (or K sorted) array, we need to sort the elements in the array.
Input : {6, 5, 3, 2, 8, 10, 9}
k = 3
Output : {2, 3, 5, 6, 8, 9, 10}
Code: JAVA
class Main {
    static void insertionSort(int A[], int size)
    {
        int i, k, j;
        for (i = 1; i < size; i++)
        {
            k = A[i];
            j = i - 1;
            while (j >= 0 && A[j] > k)
            {
                A[j + 1] = A[j];
                j = j - 1;
            }
            A[j + 1] = k;
        }
    }

    public static void main(String[] args)
    {
        int a[] = { 2, 7, 5, 9, 3, 1 };
        insertionSort(a, a.length);
        for (int x : a)
            System.out.print(x + " ");
    }
}
Insertion sort in C
void insertionSort(int A[], int size)
{
    int i, k, j;
    for (i = 1; i < size; i++)
    {
        k = A[i];
        j = i - 1;
        while (j >= 0 && A[j] > k)
        {
            A[j + 1] = A[j];
            j = j - 1;
        }
        A[j + 1] = k;
    }
}
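For a k-sorted array, each element is at most k places from its final position, so the inner while loop performs at most k shifts per element and the sort runs in O(nk). A quick Python check on the example input above (the shift-counting wrapper is our own, not part of the article's listings):

```python
def insertion_sort_shifts(arr):
    """Sort a copy of arr, returning (sorted list, number of shifts)."""
    a = list(arr)
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            shifts += 1
            j -= 1
        a[j + 1] = key
    return a, shifts

# The k-sorted example: every element is at most k = 3 places out of position.
a, shifts = insertion_sort_shifts([6, 5, 3, 2, 8, 10, 9])
print(a)        # [2, 3, 5, 6, 8, 9, 10]
print(shifts)   # 7 shifts, well under the n*k = 21 bound
```

Only 7 shifts are needed for 7 elements, which is why insertion sort is a popular choice for nearly sorted data.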
Get in-depth knowledge about programming languages
Difference between Selection, Bubble and Insertion sort
1. In terms of algorithm
In Selection sort, we select the smallest element and swap it with the element at the 0th index in the first iteration. This selection continues for n-1 elements, after which the single remaining element is already in place; the array grows sorted by one element in every iteration.
In Bubble sort, adjacent elements are compared and swapped if they are in the wrong order, so in each pass the largest unsorted element moves to its final position.
In Insertion sort, we create partitions of sorted and unsorted parts. One by one, an element from the unsorted part is taken, checked against the sorted part, and placed at its right position there by shifting elements.
2. In terms of time and space complexity
All three sorts have O(n²) average and worst-case time complexity. However, with a flag variable, the best case of Bubble sort can be reduced to O(n), and Insertion sort's best case is O(n) as well. Space complexity is the same for all, i.e., O(1).
            Best    Average  Worst    Space
Selection   O(n²)   O(n²)    O(n²)    O(1)
Bubble      O(n)    O(n²)    O(n²)    O(1)
Insertion   O(n)    O(n²)    O(n²)    O(1)
3. In terms of speed
Insertion sort may work best with an already sorted array and there is no need for any extra flag.
4. In terms of in-place
In-place means the algorithm needs no extra memory beyond the creation of a few variables, which counts as constant space. Selection, Bubble, and Insertion are all in-place algorithms and do not require any auxiliary memory.
5. In terms of stability
Stability means the relative ordering of equal elements in the input is preserved in the output. Bubble and Insertion are stable algorithms, but naive Selection is not, as swapping may break stability.
Selection: selects the smallest element in every iteration and does a single swap. Best-case time complexity is O(n²). Works better than Bubble, as the number of swaps is significantly low. It is in-place. Not stable.
Bubble: swaps adjacent elements wherever the ordering is incorrect. Best-case time complexity is O(n). Worst efficiency, as too many swaps are required in comparison to Selection and Insertion. It is in-place. Stable.
Insertion: takes the elements one by one and puts each in the right place in the sorted part. Best-case time complexity is O(n). Works better than Bubble, as the number of swaps is significantly low. It is in-place. Stable.
This brings us to the end of the blog on Insertion sort in c. We hope that you were able to learn more about the sorting algorithm. If you wish to learn more about such concepts and algorithms, check
out Great Learning Academy’s Free Online Courses and upskill today!
Summation notation | StateMath
Summation notation, also known as sigma notation, is a mathematical representation used to succinctly express the sum of a series of terms. It is commonly employed in various fields of study,
including mathematics, physics, and computer science, to simplify complex calculations and convey mathematical concepts concisely.
In summation notation, the Greek letter sigma $\Sigma$ is utilized to denote the sum of a series. The notation consists of the sigma symbol followed by the index variable, which represents the values
that the terms of the series can take. The lower limit of the index variable is specified below the sigma symbol, while the upper limit is indicated above it. The expression to be summed is written
to the right of the sigma symbol, with the index variable taking on each value within the specified range.
Understanding Summation notation
For instance, consider the series of terms $a_1,a_2,\cdots,a_n$. In summation notation, this series can be represented as $\sum_i a_i$ where $i$ is the index variable ranging from $1$ to $n$. This
notation succinctly conveys the sum of all terms in the series, with the index variable $i$ taking on each value from $1$ to $n$. To be more precise, we also write $$ \sum_{i=1}^n a_i.$$ As an
example, we introduce the harmonic sum $$ H_n=\sum_{i=1}^n \frac{1}{i}=1+\frac{1}{2}+\cdots+\frac{1}{n}.$$
Summation notation offers several advantages in mathematical discourse. Firstly, it allows for the concise representation of series, reducing the need for lengthy and repetitive expressions.
Additionally, it facilitates the manipulation and analysis of series, enabling researchers to perform calculations and derive mathematical properties more efficiently. Moreover, summation notation
aids in the communication of mathematical ideas, as it provides a standardized and universally recognized format for expressing sums.
Properties of Summation
Summation follows certain properties that make it a powerful mathematical tool. Some key properties include:
1. Linearity: The summation of a linear combination of terms is equal to the linear combination of the individual summations. Mathematically, for constants $a$ and $b$, we have: $$ \sum_{i=1}^n (ax_i+by_i)=a\sum_{i=1}^n x_i+b\sum_{i=1}^n y_i.$$
2. Splitting: A summation can be split into multiple parts. This property is useful when dealing with complex sequences. For instance, if $1\le k\le n$, then $$ \sum_{i=1}^n x_i=\sum_{i=1}^k x_i+\sum_{i=k+1}^n x_i.$$
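Both properties are straightforward to verify numerically; here is a quick check on random data (illustrative only):

```python
import random

random.seed(0)
n, k = 10, 4
x = [random.random() for _ in range(n)]
y = [random.random() for _ in range(n)]
a, b = 2.0, -3.0

# Linearity: the sum of a linear combination equals the linear combination of sums.
lhs = sum(a * xi + b * yi for xi, yi in zip(x, y))
rhs = a * sum(x) + b * sum(y)
assert abs(lhs - rhs) < 1e-12

# Splitting at index k: the sum over 1..n equals the sum over 1..k plus k+1..n.
assert abs(sum(x) - (sum(x[:k]) + sum(x[k:]))) < 1e-12
print("both properties hold")
```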
Applications of Summation notation
The concept of summation finds applications in various mathematical and real-world scenarios:
Series: Summation is fundamental in understanding series, where terms are added together. Infinite series, such as arithmetic and geometric series, are pivotal in calculus and mathematical analysis.
Calculus: Summation is closely connected to calculus through integral approximations. Techniques like Riemann sums use summation to estimate areas under curves.
Finance: Summation is used to model financial scenarios, such as calculating compound interest over time or evaluating annuities.
Physics and Engineering: In physics, summation is applied to calculate discrete quantities that approximate continuous phenomena. For instance, calculating the total distance traveled by an object
undergoing varying acceleration.
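For example, the Riemann-sum connection mentioned above can be sketched in a few lines (a left Riemann sum; the function name is ours):

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum approximating the integral of f over [a, b] with n strips."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

# Approximate the integral of x^2 over [0, 1]; the exact value is 1/3.
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 10_000)
print(abs(approx - 1 / 3) < 1e-3)  # True
```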
In conclusion, the art of summation is a cornerstone of mathematics with far-reaching implications. Whether in pure mathematics, calculus, finance, or the physical sciences, understanding and
mastering the concept of summation equips us with a versatile tool for solving problems and making sense of the world around us.
Through the lens of summation, the seemingly complex act of adding numbers takes on new depth and significance, connecting various mathematical domains and practical applications.
Progress Report: Nov 27 - Dec 3
Please report your miles for the week! Also take a minute to check your total in the sidebar to make sure I've got it right so far. AND, if your blogger name doesn't match what's in the sidebar, I'll
love you forever if you give me a hint as to who you are. :)
Did you have any milestones this week? Any challenges to overcome? What goals do you have for the upcoming week?
If you have an update on your own blog, leave us the link in the comments so we can come cheer you on!
Guest Posts this week:
None - If you would like to guest post, please let me know via email! Life is all of a sudden very busy, so I'd love the help!
Have a great week!
23 comments:
I posted on the wrong week-for the week of November 13-19, I did a total of 9 miles. For the week of November 20-26, I did a total of 3 miles, and for the week of November 27-Dec 3 I report a big
fat 0. Depressing. My total should be 39. I will have better things to say next week! I need a push or something....
I'm lost as to whether I remembered to post or not last week, but either way it's been a terrible couple of weeks (bad weather and general laziness) my total is now at 71 miles.
Some milestones this week!
Oh, and @Karen... *PUSH*! :)
I did 11 to get me to 54! More than halfway there.
My goal is 12 miles a week to get me to 100 by Dec. 31. I'm feeling the pressure though - Dec. 31 seems so close!
12 miles, for a total of 108. Woo-hoo, I broke 100!
My grand total is 54 so far, so that's 8 since the last time. It's going to go down to the wire to get to get to 100 by the 31st.
Add another 9 miles to mine this week. I lost a day but also walked another so almost my normal week.
I added another 24 miles this week bringing my total up to 145.
I went two full weeks without exercising, but got in 12 miles this week. I also, finally, ordered a treadmill, which should be here by the 15th, wahoo.
I posted about it here:
Total miles now is 59.
Sadly I don't think I am going to make my goal of 100, but at least I am adding to my total every week. I added another 4 miles to bring me up to 29 miles
I know, I didn't even post last week because of the guilt. I 'fessed up though this week. I got 10 miles in this week, for a total of 130. The big question in my mind is whether I can hit 200 by
the end of December???
I did 6 this week for a total of 51. Next week my goal is still 20. I can do it!
Well I ended up out most of this week, but still managed to get another 16 miles in, bringing me up to 53. It's not as high as I'd like, but I can handle it.
After being sick for a month I am finally back into exercising! I got in 8 miles this week. My total is now up to 22 miles. (My name on the sidebar is Nan.) :-)
I am logging my miles for the week. I put in 30 this week which brings my total to 211.
I managed to get 8 miles this week. Not so good, but better than nothing I guess.
Not a great week for me, only 2 miles for a total of 94. Not looking good for this week either since I had surgery on Friday!
Way to go for everyone that is moving right along!
Catherine - I had you at 65...are you sure 59 is your total?
Katie - I hope your surgery goes well!
Everyone else--continue to hang in there! Only a few more weeks left. I only got 6 miles last week, but as soon as I leave this comment I'm off to do some step. My goal is to work out at least 3 times a week for the rest of the month--hopefully four.
Thanks to those of you leaving links. I apologize for being slow to visit. If you do leave a link, please also remember to leave your totals here in the comments. Sometimes I go days without
getting on the computer and it's hard to keep track.
Keep it up!!!
I finished a half marathon this weekend and walked at least 1 mile a day for 5 of the other 5 days. I have a total of 20 miles this week.
10 this week!!
I had a bad week. I was only able to get 5 miles in before my back went out. Oh well, at least it was in working order for the family wedding this last weekend. I have a total of 131 miles.
I ended up posting two weeks' results at once again. I'm really sorry. If you feel like you can't count it, I'll understand.
Week total: 17 miles
superstring theory tree of life part 29
Outer Tree of Life
When they are Type A triangles, the 16 triangles making up the outer Tree of Life have (16×3=48) sectors with 16 internal corners and 48 internal sides, 10 external corners and 22
external sides. The outer Tree of Life is composed of 48 triangles with (10+16=26) corners and (22+48=70) sides, i.e., 144 geometrical elements, where 144 (= 12²) is the 12th Fibonacci number.
Inner Tree of Life
When they are Type A, the (7+7) enfolded polygons making up the inner Tree of Life have (47+47=94) sectors with 80 corners. Two of these are endpoints of their shared root edge, 72
corners are unshared with the triangles of the outer Tree and six corners coincide with Sephirothic corners of triangles in the outer Tree. The 94 sectors have 175 sides, of which one is the root
edge and four sides are sides of triangles in the outer Tree. The inner Tree of Life has 349 geometrical elements, of which 10 corners & sides are shared, leaving (349−3−10=336) intrinsic geometrical
elements outside the root edge. Each half of the inner Tree of Life comprises 168 geometrical elements that are unshared with its outer form. The number value of the Mundane Chakra of Malkuth
measures the points, lines & triangles that are intrinsic to each half of the inner Tree of Life.
Combined Trees of Life
When combined, the outer & inner Trees of Life have (48+94=142) triangles with (26+80−6=100=10²) corners & (70+175−4=241) sides. Outside the root edge, they have (241−1=240)
sides and (100−2+142=240) corners & triangles. This 240:240 division of the 480 geometrical elements in the outer & inner Trees of Life outside the root edge is characteristic of sacred geometries.
It manifests in superstring physics as the 240 roots of each rank-8 Lie group E8 in the symmetry group E8×E8 describing the forces between one of the two types of heterotic superstrings. Notice that
these 480 geometrical elements contain 336 elements that are intrinsic to the inner Tree of Life. We see that the "dynamical" parameter 480 contains the superstring structural parameter 336 — the
number of turns in a helical whorl of the UPA in each one of its five revolutions around the spin-axis. The division: 480 = 144 + 336 shown between the outer & inner Trees of Life reflects the
division: 240 = 72 + 168 between the 72 roots of E6, the rank-6 exceptional subgroup of E8, and the remaining
168 roots of E8. This division can be made explicit by suitably dividing the outer Tree of Life into a left-hand half and a right-hand half, each with 72 geometrical elements (48 corners & sides, 24 triangles) and associating each half with the left-hand or right-hand set of seven enfolded polygons, each with 168 unshared geometrical elements outside their root edge. The metaphysical distinction
between the outer & inner Trees finds expression in the special significance of E6, which string theorists have long considered as an intermediate stage in the symmetry breakdown of E8 that leads to
the symmetries U(1)×SU(2)×SU(3) of the Standard Model. The inner Tree encodes the oscillatory form of the UPA/subquark state of the E8×E8 heterotic superstring, whilst the outer Tree encodes the E6
subgroups of each E8. In #37, it was shown that the root and trunk of the outer Tree have three sets of 24 yods, whilst its branches have seven sets of 24 yods. This is the analogue of the three
major whorls carrying the 72 gauge charges of E6, 24 per whorl, and the analogue of the seven minor whorls carrying 168 gauge charges of E8, 24 per whorl. Whereas the geometrical composition of the
outer Tree of Life picks out E6 from other subgroups of E8, its yod composition differentiates between the major and minor whorls of the UPA. They correspond, respectively, to the basic distinction
between the root and trunk of the Tree of Life with three sets of 24 yods and its branches with seven sets of 24 yods.
This 240:240 division in the geometrical composition of the outer & inner Trees is also discussed in #29.
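The element counts quoted in this section reduce to simple arithmetic; as a sanity check (Python used purely as a calculator):

```python
# Outer Tree: 48 triangles, 26 corners, 70 sides; inner Tree: 94 sectors,
# 80 corners, 175 sides; 6 corners and 4 sides are shared between the two.
triangles = 48 + 94                  # 142
corners = 26 + 80 - 6                # 100 = 10**2
sides = 70 + 175 - 4                 # 241

sides_outside_root = sides - 1                       # 240
corners_and_triangles = (corners - 2) + triangles    # 240

assert (triangles, corners, sides) == (142, 100, 241)
assert sides_outside_root == corners_and_triangles == 240
assert 144 == 12**2 and 480 == 144 + 336 and 240 == 72 + 168
print("counts check out")
```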
We find that the combined Trees of Life can be divided into two halves, one the mirror image of the other. The 240 geometrical elements in each half outside the root edge comprise 49 corners, 120
sides & 71 triangles, that is, (49+71=120) corners & triangles and 120 sides. The 120:120 division in the number 240 is characteristic of holistic systems embodying this parameter. It manifests in
4-dimensional sacred geometries as the 120 vertices, edges, faces & octahedral cells in each half of the 24-cell (see here) and as the 120 vertices of each of the two 600-cells whose compound is the
4-dimensional projection of the 421 polytope (see discussion under "The 421 polytope" here). See here for its appearance in other sacred geometries.
Frontiers | Bounded Cost Path Planning for Underwater Vehicles Assisted by a Time-Invariant Partitioned Flow Field Model
• ^1Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
• ^2Department of Guidance and Control, Agency for Defense Development, Daejeon, South Korea
• ^3School of Mathematics, Georgia Institute of Technology, Atlanta, GA, United States
• ^4Skidaway Institute of Oceanography, University of Georgia, Savannah, GA, United States
A bounded cost path planning method is developed for underwater vehicles assisted by a data-driven flow modeling method. The modeled flow field is partitioned into a set of cells of piece-wise constant
flow speed. A flow partition algorithm and a parameter estimation algorithm are proposed to learn the flow field structure and parameters with justified convergence. A bounded cost path planning
algorithm is developed taking advantage of the partitioned flow model. An extended potential search method is proposed to determine the sequence of partitions that the optimal path crosses. The
optimal path within each partition is then determined by solving a constrained optimization problem. Theoretical justification is provided for the proposed extended potential search method generating
the optimal solution. The path planned has the highest probability to satisfy the bounded cost constraint. The performance of the algorithms is demonstrated with experimental and simulation results,
which show that the proposed method is more computationally efficient than some of the existing methods.
1 Introduction
Over the last few decades, autonomous underwater vehicles (AUVs) have been employed for ocean sampling (Leonard et al., 2010; Smith et al., 2010), surveillance and inspection (Ozog et al., 2016;
Xiang et al., 2010), and many other applications. Ocean flow is the dominant factor that affects the motion of AUVs (Zhang et al., 2016). Ocean flow dynamics vary in both space and time, and can be
represented as geophysical Partial Differential Equations (PDEs) in ocean circulation models (e.g., the Regional Ocean Modeling System (ROMS, Shchepetkin and McWilliams, 2005; Haidvogel et al., 2008)
and the Hybrid Coordinate Ocean Model (HYCOM, Chassignet et al., 2007)). While these models can provide flow information over a large spatial domain and forecast over several days, the available flow
field forecast may still contain high uncertainty and error. The uncertainty comes from multiple sources, including the incomplete physics or boundary conditions (Haza et al., 2007; Griffa et al.,
2004) and even terms in the equations themselves (Lermusiaux, 2006). In addition, the high complexity of the flow dynamics makes solving these PDEs computationally expensive. Data-driven flow models
(Mokhasi et al., 2009; Chang et al., 2014) can provide short-term flow prediction in a relatively smaller area with significantly lower computational cost, and can be more suitable for supporting
real-time AUV navigation, particularly for systems with strong gradients and/or high uncertainty.
Path planning is one of the crucial and fundamental functions to achieve autonomy. Two key considerations for an AUV path planner are computational efficiency and path quality. The path planning
strategy should be computationally efficient so that the time for generating a path can be kept to a minimum. When these methods are sufficiently fast, path planning can be performed in near-real
time, generating a feasible solution in minutes while the AUV has surfaced to get a GPS fix and communicate with shore. Advantages of real time path planning are that more recent information,
including the real time data from the vehicle, can be incorporated in path planning. Hence there will be less planning error due to outdated information (Leonard et al., 2010). At the same time, it
is desired that the path planning algorithm has theoretical guarantee on the quality of the generated path.
Most path planning algorithms aim to design optimal path minimizing certain cost, for example, those associated with engineering or flight characteristics (battery life, travel time) or scientific
value (e.g., distance relative to other assets or spacing of relevant processes). Algorithms that have been applied to AUV optimal path planning include: 1) graph-based methods such as the A* method
(Rhoads et al., 2012; Pereira et al., 2013; Kularatne et al., 2017, 2018) and the Sliding Wavefront Expansion (SWE) (Soulignac, 2011); 2) sampling-based methods like the Rapidly exploring Random
Trees (RRTs) (Kuffner and LaValle, 2000; Cui et al., 2015), RRT* (Karaman and Frazzoli, 2011) and informed RRT* (Gammell et al., 2018); 3) methods that approximate the solution of HJ
(Hamilton-Jacobi) equations, such as the Level Set Method (LSM) (Subramani and Lermusiaux, 2016; Lolla et al., 2014), and 4) the evolutionary algorithms, including the particle swarm optimization
methods (Roberge et al., 2012; Zeng et al., 2014), and the differential evolution methods (Zamuda and Sosa, 2014; Zamuda et al., 2016). See (Zamuda and Sosa, 2019; Zeng et al., 2015; Panda et al.,
2020) for a comprehensive review on the existing AUV path planning methods. However, the computational cost of the above mentioned methods could be high, especially in cases where the AUV deployment
domain is large.
Using a regular grid to discretize the flow field can result in an unnecessarily large number of cells, which increases the computational burden of the graph search methods. Since the flow speed in
adjacent cells is usually similar, we partition the flow field into piece-wise constant subfields, within each the flow speed is a constant vector, and introduce the Method of Evolving Junctions
(MEJ) (Zhai et al., 2020) to solve the optimal path planning problem. MEJ solves for the optimal path by recasting the infinite dimensional path planning problem into a finite dimensional
optimization problem through introducing a number of junction points, defined as the intersection between the path and the region boundaries. Hence the computation cost of MEJ is significantly lower
than other optimal path planning methods, especially when the flow field is partitioned into a small number of cells (Zhai et al., 2020). We identify Soulignac (2011) as the work most closely related to ours, in which sliders, defined as points sliding on the partitioned region boundaries, are introduced to describe the wavefront expansion of the graph search methods. In each iteration of the
wavefront expansion, each slider’s position on the wavefront is derived by minimizing the travel cost in a single cell, and then the planned path is computed by the backtracking of wavefronts. Both
MEJ and SWE are based on a novel parameterization of the path by introducing the junctions and the sliders, that were discovered independently by the two research groups. The main difference between
MEJ and SWE lies in that MEJ solves for junction positions by formulating a non-convex optimization problem, and derives the global minimizer by intermittent diffusion (Li et al., 2017), which
intermittently adds white noise to the gradient flow, while SWE solves for slider positions by graph search methods. MEJ has been justified to find the global minimizer with probability 1. However,
since the method does not pose any structure in the search, the computational cost of MEJ could be less favorable compared to SWE if the number of cells scales up.
To reduce the computational cost of the path planning problem, the search can be reduced to find paths with total cost less than an upper bound. Stern et al. (2014) present two algorithms to solve
the bounded cost search problem: the Potential Search (PTS) and the Anytime Potential Search (APTS). The PTS method defines the Potential Ordering Function, which is an implicit evaluation of the
probability that a node is on a path satisfying the bounded cost constraint, and iteratively expands the nodes in the graph with the highest Potential Ordering Function value. The wavefront expansion
terminates when the goal node has been expanded, and the path is found by backtracking of the wavefronts. The APTS method runs the PTS algorithm iteratively to improve on the incumbent solution, with
the upper bound on total cost lowered in each iteration of the algorithm. Later work (Thayer et al., 2012) improves on the PTS method by minimizing both the potential and an estimation of the
remaining search effort, so that the bounded cost search problem will be solved faster.
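To make the idea concrete, here is a minimal best-first sketch in the spirit of PTS (not the authors' code): nodes are expanded in increasing order of the ratio h(n)/(C − g(n)), a common stand-in for the potential ordering, and nodes whose cost-of-arrival already exceeds the bound are pruned. The graph interface and this particular ratio are our assumptions.

```python
import heapq

def potential_search(start, goal, neighbors, h, C):
    """Best-first search expanding the node most likely to lie on a path with
    total cost below the bound C. `neighbors(n)` yields (m, cost) pairs and
    `h` is an admissible heuristic; smaller h(n)/(C - g(n)) means higher potential."""
    g = {start: 0.0}
    parent = {start: None}
    frontier = [(h(start) / C, start)]
    while frontier:
        _, n = heapq.heappop(frontier)
        if n == goal:
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1], g[goal]
        for m, c in neighbors(n):
            gm = g[n] + c
            if gm < g.get(m, float("inf")) and gm < C:   # prune over-budget nodes
                g[m], parent[m] = gm, n
                heapq.heappush(frontier, (h(m) / (C - gm), m))
    return None, None
```

On a toy graph this returns a path whose cost stays below C when one exists, and (None, None) when the bound rules every path out.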
In this paper, our first objective is to develop a data-driven computational flow model that approximates the true flow field in the region of interest to assist AUV path planning. The proposed
data-driven flow model divides the flow field into cells, within which the flow is represented as a single flow vector. The optimal flow cell partition and initial values of the flow vectors in each
cell are derived from prior flow information, from numerical ocean models or from observations. To improve model accuracy, AUV observational data can be incorporated into the data-driven model in
near-real time, for example, in the form of observed or estimated velocities (Chang et al., 2015). Here, we design a learning algorithm that estimates the flow field parameters based on the AUV path
data. Our second objective is to develop an algorithm that solves the AUV bounded cost path planning problem. Given that the vehicle is traveling in a flow field represented by the data-driven
computational model, the goal is to design a path that connects AUV initial position with goal position with the highest probability to have travel cost less than a pre-assigned upper bound. By
introducing the key function, which is an implicit evaluation function of the probability that a path satisfies the bounded cost constraint, the optimal path is computed by searching for the nodes
with lowest key function value using an informed graph search method.
The main novelty of this work is introducing the modified PTS method to solve the bounded cost search problem. Unlike the PTS method (Stern et al., 2014), which assumes that the branch cost of the
graph is known exactly, our method deals with problems where the branch cost of the graph is uncertain. Given assumptions on the distribution of cost-of-arrival and cost-to-go, we prove that the
proposed algorithm guarantees optimality of the planned path, that is, the planned path has the highest probability of satisfying the bounded cost constraint. To the best of our knowledge, this is
the first time that the optimality of the modified PTS solution to bounded cost problems is proved. The proposed bounded cost path planning method can be viewed as an extension to MEJ. Compared to
MEJ, the modified PTS algorithm is computationally more efficient, since a graph search method is adopted to search for the junction positions. At the same time, optimality of the planned path can be
theoretically justified. The major benefit of the proposed bounded cost path planning algorithm lies in that it plans a path faster, while at the same time still guarantees the path quality. This
paper is a significant extension of the conference proceeding (Hou et al., 2019), which proposes a flow partition method that approximates the flow field by a set of cells of uniform flow speed. The
main extensions of this paper are that taking advantage of the flow model proposed in (Hou et al., 2019), we propose the modified PTS method to solve the AUV bounded cost path planning problem, and
present theoretical justification on the optimality of the proposed PTS method. The proposed bounded cost search method is potentially applicable for all bounded cost path planning problems with
uncertain branch cost.
We believe the proposed data-driven flow modeling and bounded cost path planning methods are well-suited for path planning of underwater glider deployment near Cape Hatteras, NC, a highly dynamic
region characterized by confluent western boundary currents and convergence in the adjacent shelf and slope waters. While deployed, the gliders are subject to rich and complex current fields driven
by a combination and interaction of Gulf Stream, wind, and buoyancy forcing, with significant cross-shelf exchange on small spatial and temporal scales (Savidge et al., 2013a, Savidge et al., 2013b)
that would be highly difficult to sample using traditional methods. Path planning must consider spatial variability of the flow field. Because spatial gradients are significant, real-time path
planning is critical to take advantage of real-time data streams. Through simulated experiments, we demonstrate the performance of applying the proposed algorithms to underwater glider deployment in
this area, and show that the proposed algorithm is more computationally efficient than A* and LSM.
2 Problem Formulation
2.1 Vehicle Dynamics
Let $FR:D→ℝ2$ represent a spatially distributed vector field for the ambient flow velocity, where $D⊂ℝ2$ is the domain of interest. Let $[T0,Tf]$ be the AUV deployment time interval. The AUV model is
described as
$$\dot{x} = V_R\,\Psi_C(t) + F_R(x),\qquad(1)$$
where $x∈D$ denotes vehicle position. $VR$ is the through-water speed of the vehicle, and $ΨC(t)=[cosψC,sinψC]T$ is a unit vector that represents the direction of the vehicle motion along heading
angle $ψC$.
Assumption 2.1: During the operation, $VR$ is an unknown constant.
Remark 2.1: Actual vehicle speed may depend on a number of factors that affect an AUV’s speed, including water depth, efficiency of propulsion, and bio-fouling. These effects are difficult to
estimate. Hence the vehicle forward speed is assumed to be an unknown constant.
Assumption 2.2: We assume that the heading $ΨC(t)$ can be controlled for all time t, and the vehicle trajectory $x(t)$ can be measured or estimated for all time.
Remark 2.2: Though the actual location of a vehicle may only be known occasionally when the vehicle is underwater, the trajectory of the vehicle can be estimated through localization algorithms,
which incorporates the known locations and the heading angle commands as inputs to generate the optimal state estimation.
Assumption 2.3: We assume the flow field is time-invariant throughout the deployment.
Remark 2.3: Even though there is existing work that considers time-variant flow fields in solving the AUV planning problem, such as (Eichhorn, 2013; Lolla et al., 2014; Zamuda and Sosa, 2014), we
make this assumption due to the patterns of the flow field in this domain. In the domain of interest considered in this paper, which is near Cape Hatteras, NC, the current field is driven by a
combination and interaction of Gulf Stream, wind, and buoyancy forcing (Savidge et al., 2013a, Savidge et al., 2013b). Because magnitude and spatial gradients of the flow field are significant
relative to the temporal variation of the flow field (mostly the tidal flow component), time variation of the flow does not have a significant influence over the planned path.
2.2 Data-Driven Flow Modeling
Flow speed at neighboring grid points often exhibits similarity in both strength and direction. Hence we assume that at the time scale of an AUV deployment, the flow field can be divided into finite
number of regions ${ℛi}i∈IR$, with the union of all cells being the domain, $∪i∈IR{ℛi}=D$. The regions are separated by continuous boundary curves ${fi,j}i,j∈IR$, where $fij(x)=0$ is
the one dimensional compact boundary of the region $ℛi$ and $ℛj$. We define an indicator function $ϕi(x)$ as follows:
$ϕi(x)=1{x∈ℛi}={1if x∈ℛi0otherwise.$
This function indicates whether $x$ is in $ℛi$. Let $ϕ:ℝ2→ℝN$ be defined as $ϕ(x)=[ϕ1(x)…ϕN(x)]T$. The components of $ϕ(x)$ then form a set of spatial basis functions of $D$.
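As a toy illustration of the indicator basis and the piece-wise constant model (the two-cell domain and the numbers here are invented for this example):

```python
import numpy as np

# Hypothetical two-cell partition of the plane: R_1 = {x1 < 0}, R_2 = {x1 >= 0}.
def phi(x):
    """Indicator basis phi(x) = [phi_1(x), phi_2(x)]^T for the two-cell partition."""
    return np.array([1.0, 0.0]) if x[0] < 0 else np.array([0.0, 1.0])

# theta stores one flow vector per cell, one column per cell.
theta = np.array([[0.3, -0.1],   # east-west flow components
                  [0.0,  0.2]])  # north-south flow components

# F(x) = theta @ phi(x) picks out the flow vector of the cell containing x.
print(theta @ phi(np.array([-1.0, 0.5])))   # flow in R_1
print(theta @ phi(np.array([0.7, 0.5])))    # flow in R_2
```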
In order to compute the partition, which is represented by the basis functions $ϕ$. We need to use prior information of the environment obtained either from forecast data of the existing ocean
models, or from historical datasets. Let $F0:D×[T0,Tf]→ℝ2$ denote the discretized flow map forecast available on a set of grid points in $D$, and let $y∈ℝ4,y=[xT,F0(x,t)T]T$ denote the vector at
position $x$ and time $t$. We define a distance function $dist:ℝ4×ℝ4→ℝ$ as $dist^2(y,y')=(y-y')^T Q(y-y')$, where $y,y'∈ℝ4$ and $Q$ is a weight matrix. For each cell $ℛi$, let $r̄i$ represent its
center, and let $νi$ be the flow vector in this cell. Our goal is to find the optimal values of $ϕ(x)$ and ${νi}i∈IR$ by solving the following optimization problem:
After the optimal partition is computed from forecast or historical datasets, we need to compute the strength of the flow in each partition based on the path information of the AUV while moving
through the flow field. Again, we assume that the true flow field is constant in each partitioned cell. Let $θ∈ℝ^{2×N}$ collect the true flow vectors of all partitioned cells, $$θ=\begin{bmatrix}θ_1^1&\cdots&θ_1^N\\ θ_2^1&\cdots&θ_2^N\end{bmatrix},$$ with the $i$-th column $θ^i=[θ_1^i,θ_2^i]^T$ denoting the flow vector in partitioned region $ℛi$. Then the partitioned flow field can be represented as
To estimate the true flow field $FR$, we use the AUV path data $x(t)$. Let
be our estimate of the parameter $θ$ and $VL(t)$ be our estimate for $VR$. We will design a learning algorithm to achieve convergence of $ξ(t)$ and $VL(t)$ to the true values, i.e., $ξ(t)→θ$ and $VL(t)→VR$ as $t→∞$.
2.3 Bounded Cost Path Planning
Our goal is to find a path connecting the vehicle's current position $x0$ to the final position $xf$ that results in total travel time less than an upper bound C. In practice, AUV planning and replanning happen over long intervals in order to limit computation cost. Hence we assume that the estimated parameters have converged to their true values
before the planning process. There may be more than one path satisfying the bounded cost constraint. Thus we formulate the following optimization problem, in which the decision variable is the
vehicle’s heading angle, $ψC(t)$. The optimization problem is to find the decision variable that is most likely to satisfy the bounded cost constraint:
$$\max_{\psi_C(t)\in[-\pi,\pi]}\ \Pr\big(T(\psi_C(t))<C\big)\quad \text{s.t.}\quad \dot{x}=V_R\Psi_C(t)+\theta\phi(x),\ \ x(T_0)=x_0,\ \ x\big(T_0+T(\psi_C(t))\big)=x_f.\qquad(4)$$
where the total travel time to start from the initial position $x0$ to reach the destination position $xf$ under the control signal $ψC(t)$ is denoted as $T(ψC(t))$.
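Inside one cell of the piece-wise constant model, the travel time along a straight segment has a closed form: writing the net ground velocity as $s\,u = V_R\Psi_C + v$, with $u$ the unit segment direction and $|\Psi_C|=1$, gives a quadratic in the ground speed $s$. The following sketch is our own derivation, not code from the paper:

```python
import numpy as np

def cell_travel_time(p, q, v, V_R):
    """Time to traverse the straight segment p -> q at through-water speed V_R
    in a constant flow v. Solving |s*u - v| = V_R for the ground speed s gives
    s = u.v + sqrt((u.v)^2 - |v|^2 + V_R^2). Returns inf if unreachable."""
    p, q, v = (np.asarray(a, dtype=float) for a in (p, q, v))
    d = q - p
    L = np.linalg.norm(d)
    u = d / L
    a = u @ v
    disc = a * a - v @ v + V_R * V_R
    if disc < 0:
        return float("inf")        # flow too strong to hold this course
    s = a + np.sqrt(disc)
    return L / s if s > 0 else float("inf")

# Tailwind along a unit segment: ground speed 0.3 + 0.2 = 0.5, so t is ~2.
print(cell_travel_time([0, 0], [1, 0], [0.2, 0.0], 0.3))
```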
3 Flow Field Estimation
First let us describe Algorithm 1. We derive the spatial basis function and the initialized flow model parameters by solving eq. 2 using the K-means algorithm. Since this optimization depends on both
spatial and temporal variance of the flow field, solving this problem can be computationally expensive. To simplify this problem, instead of optimizing the difference between the time-varying flow
forecast and the partitioned flow field, as described in eq. 2, we optimize the difference between the time-averaged flow forecast and the partitioned flow field:
where $\bar{y}(x)=[x^T,\ \frac{1}{T_f-T_0}\sum_{t\in[T_0,T_f]}F_0(x,t)^T]^T$ denotes the time-averaged flow observation at position $x$.
To implement the K-means algorithm, we start by randomly selecting $k$ cell centroids and then use Lloyd iterations to solve the optimization problem. Each Lloyd iteration consists of two steps: first, assign each point to its closest centroid; then, recompute each cell centroid. These two steps are repeated until cell membership no longer changes. The K-means algorithm requires a proper choice of the number of partitioned cells, $k$, which affects both the path planning performance and the flow modeling quality. If the field is divided into too many regions, the result is a complicated flow structure and a potentially higher computational cost for path planning. On the other hand, dividing the field into too few regions may produce a large error between the true flow field and the modeled flow field, which can lead to significant path planning error. Therefore, we introduce an iterative K-means algorithm that guarantees a bounded flow field partition error while using the smallest number of partition regions.
Let $\bar{F}_0: D\to\mathbb{R}^2$ denote the time-averaged flow field over the time interval $[T_0,T_f]$, and let $\nu_i$ be the uniform flow velocity in $\mathcal{R}_i$. Define the flow field partition error as:
Given an initialized $k$, we iteratively perform the K-means algorithm (lines 8, 9) and check whether the flow partition error satisfies $\delta_F<\epsilon$, where $\epsilon$ is a pre-defined upper bound on the flow partition error. If this condition is satisfied, the current $k$ is designated for partitioning; otherwise, the number of cells is increased by 1, and we recompute the solution to eq. 5 using the K-means method.
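The iterative partitioning loop can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the concatenated feature vector $[x^T,\bar{F}_0(x)^T]^T$, the deterministic centroid initialization, and the max-deviation form of the partition error are assumptions made here.

```python
import math

def lloyd(data, k):
    """Plain Lloyd iterations: assign points to the nearest centroid, recompute
    centroids, repeat until cell membership no longer changes."""
    cent = [list(data[i * len(data) // k]) for i in range(k)]  # deterministic init
    labels = None
    while True:
        new = [min(range(k), key=lambda j: math.dist(d, cent[j])) for d in data]
        if new == labels:
            return labels
        labels = new
        for j in range(k):
            members = [d for d, l in zip(data, labels) if l == j]
            if members:
                cent[j] = [sum(c) / len(members) for c in zip(*members)]

def iterative_kmeans(points, flows, eps, max_k=50):
    """Increase k until the partition error (here: max deviation between a
    cell's mean flow nu_i and the flow at its member points) drops below eps."""
    data = [p + f for p, f in zip(points, flows)]  # feature: [x, y, u, v]
    for k in range(1, max_k + 1):
        labels = lloyd(data, k)
        err = 0.0
        for j in range(k):
            members = [f for f, l in zip(flows, labels) if l == j]
            if members:
                nu = [sum(c) / len(members) for c in zip(*members)]
                err = max(err, max(math.dist(f, nu) for f in members))
        if err < eps:
            return k, labels, err
    raise RuntimeError("no k within the error bound")

# toy field: left half flows east, right half flows north
points = [[x, y] for x in range(10) for y in range(2)]
flows = [[1.0, 0.0] if p[0] < 5 else [0.0, 1.0] for p in points]
k, labels, err = iterative_kmeans(points, flows, eps=0.1)
print(k)  # 2 cells suffice for this two-region field
```

With $k=1$ the single cell mixes the two flow directions and the error exceeds the bound, so the loop increments to $k=2$, which separates the two uniform regions exactly.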
Now let us explain Algorithm 2. An estimate $z(t)$ of the vehicle trajectory can be computed by integrating
where $\beta(t)\in\mathbb{R}^2$ is introduced as a learning injection parameter. The term $e=x-z$ is the controlled Lagrangian localization error (CLLE, Cho and Zhang, 2016; Cho et al., 2021), which describes how much the actual trajectory deviates from the estimated trajectory. A learning algorithm then computes $\beta(t),\xi(t),V_L(t)$ so that the CLLE is reduced.
The CLLE dynamics can be derived from eqs 7, 1.
We design the learning parameter injection as
and hence the CLLE dynamics becomes
The learning algorithm updates the parameters $\xi(t)$ and $V_L(t)$ so that the CLLE converges to zero. Let $\bar{\xi}(t)=[\xi_{11}(t),\dots,\xi_{1N}(t),\xi_{21}(t),\dots,\xi_{2N}(t)]^T$, $\bar{\theta}=[\theta_{11},\dots,\theta_{1N},\theta_{21},\dots,\theta_{2N}]^T$, and $e\otimes\phi=[e_1\phi_1,\dots,e_1\phi_N,e_2\phi_1,\dots,e_2\phi_N]^T$, where $\otimes$ is the Kronecker product. We design the updating rules for parameter estimation as follows,
These rules are then used in Algorithm 2.
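A minimal single-cell simulation ($\phi(x)\equiv 1$, so $\xi$ reduces to one flow vector) illustrates the estimator and the updating rules. The gains, the rotating heading signal (which provides excitation), the injection $\beta=Ke$, and the Euler integration are all choices made here for illustration, not the authors' settings.

```python
import math

theta, VR = (0.3, -0.2), 1.0   # true flow and vehicle speed (unknown to estimator)
K, rho, dt = 2.0, 1.0, 0.005   # injection gain, learning rate, Euler step

x, z = [0.0, 0.0], [0.0, 0.0]  # actual and estimated positions
xi, VL = [0.0, 0.0], 0.5       # flow and speed estimates

t = 0.0
for _ in range(int(100 / dt)):
    psi = (math.cos(t), math.sin(t))        # rotating commanded heading
    e = [x[i] - z[i] for i in range(2)]     # CLLE: e = x - z
    for i in range(2):
        x[i] += dt * (VR * psi[i] + theta[i])          # true dynamics (eq. 1 form)
        z[i] += dt * (VL * psi[i] + xi[i] + K * e[i])  # estimator with beta = K e
        xi[i] += dt * rho * e[i]                       # flow update: rho * (e ⊗ phi)
    VL += dt * rho * (e[0] * psi[0] + e[1] * psi[1])   # speed update: rho * e^T Psi_C
    t += dt

# with persistent excitation, e -> 0, xi -> theta, VL -> VR
```

The rotating heading keeps the regressor persistently exciting, so both the flow estimate and the speed estimate converge, matching the convergence claims of Section 5.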
4 Bounded Cost Path Planning
Given the piecewise constant flow model described in eq. 3, the domain is divided into a finite number of regions $\{\mathcal{R}_i\}_{i\in I_R}$. Thus every possible trajectory crosses a sequence of cells of uniform flow before reaching the goal position. Since the vehicle moves at constant speed, and the flow in each cell is uniform and constant, the vehicle's optimal heading angle in each cell must be constant, and the vehicle path in each cell is a straight line. We define junction points as the positions where the path intersects cell boundaries. Below we show that in each cell, due to the time invariance of the flow field, solving for the heading angle is equivalent to solving for the junction points of a path.
Let $\gamma_1,\gamma_2$ denote two junction points on two different boundary curves of the same cell $\mathcal{R}_i$. Since the vehicle moves at constant speed, the total vehicle velocity $V_R\Psi_C+\theta_i$ must be in the same direction as the segment of the path,
From eq. 12, we can represent the vehicle's heading angle as a function of the junction points,
The vehicle's travel time in $\mathcal{R}_i$, denoted as $\tau$, can be computed given the junction points and the vehicle's heading angle,
Combining eq. 14 and eq. 12, we can write the travel time in cell $\mathcal{R}_i$ as a function of the junction points alone,
Eq. 15 describes the travel time in one single cell. Based on this one-cell case, we next discuss solving eq. 4 across multiple cells.
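Before moving to the multi-cell case, the single-cell computation can be sketched concretely. The helper below is ours (its name and signature are assumptions), but it follows the logic of eqs 12–15: pick the heading whose net velocity lies along $\gamma_2-\gamma_1$, then divide the segment length by the net speed.

```python
import math

def cell_transit(g1, g2, flow, VR):
    """Heading angle and travel time for the straight segment g1 -> g2
    crossed at constant vehicle speed VR in uniform flow (ux, uy)."""
    dx, dy = g2[0] - g1[0], g2[1] - g1[1]
    dist = math.hypot(dx, dy)
    ex, ey = dx / dist, dy / dist            # unit vector along the segment
    along = flow[0] * ex + flow[1] * ey      # flow component along the track
    cross = -flow[0] * ey + flow[1] * ex     # flow component across the track
    if VR <= abs(cross):
        raise ValueError("flow too strong: heading cannot cancel the cross-flow")
    speed = along + math.sqrt(VR**2 - cross**2)  # net speed over ground
    if speed <= 0:
        raise ValueError("net motion is upstream: segment infeasible")
    # eq. 12 style relation: VR * Psi_C = speed * e_hat - flow
    psi = math.atan2(speed * ey - flow[1], speed * ex - flow[0])
    return psi, dist / speed

print(cell_transit((0, 0), (3, 4), (0, 0), 1.0))   # no flow: tau = 5.0
```

With a pure along-track flow the net speed is simply $V_R$ plus the flow magnitude, and with zero flow the heading points straight down the segment.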
Let $\gamma_1,\gamma_2,\dots,\gamma_n$ denote the chain of junction positions, and let $p_1,p_2,\dots,p_{n+1}$ denote the indices of the sequence of cells that the path crosses. The planning problem eq. 4 can be transformed into the following mixed integer optimization problem, in which the decision variables are the cell sequence and the junction positions,
$$\max_{\{\gamma_i\}_{i=1}^{n},\{p_i\}_{i=1}^{n+1}} \Pr\Big(\tau(x_s,\gamma_1,\theta_{p_1})+\sum_{i=1}^{n-1}\tau(\gamma_i,\gamma_{i+1},\theta_{p_i})+\tau(\gamma_n,x_f,\theta_{p_{n+1}})<C\Big), \quad \text{s.t.}\ f_{p_i,p_{i+1}}(\gamma_i)=0,\ \forall i\in[1,n]. \tag{16}$$
Now let us explain the proposed solution to eq. 16, which is presented in Algorithm 3. We propose a bounded cost path planning algorithm that solves for the path most likely to satisfy the bounded cost constraint. The solution consists of two steps. The first step solves for the sequence of cells most likely to yield a bounded cost path in the discretized flow map described by the piecewise constant flow cells; in this step the junction positions are unknown, and an informed graph search method is used. The second step optimizes the junction positions on the boundaries of the optimal cell sequence.
First we describe the first step of our solution. Consider a candidate junction on the boundaries of all cells. Two junctions $\gamma_i$ and $\gamma_j$ are defined as adjacent if $f_{p,q}(\gamma_i)=0$, $f_{r,s}(\gamma_j)=0$, and $\{p,q\}\cap\{r,s\}\neq\emptyset$, indicating that the two junctions are on different boundaries of the same cell. Two adjacent junctions are connected by an edge. An undirected graph $G$ can be formed whose vertices are all the candidate junctions in the domain and whose edges are the path segments between adjacent candidate junctions. Let $n_{i,j}$ denote the node that corresponds to the junction on the boundary curve between $\mathcal{R}_i$ and $\mathcal{R}_j$, and let $s$ and $g$ denote the nodes corresponding to the starting and final positions. In this context, a path $\Gamma$ from the starting position $x_0$ to the final position $x_f$, crossing the cells $\mathcal{R}_{p_1},\dots,\mathcal{R}_{p_{n+1}}$ in sequence, can be represented by a sequence of nodes $s\to n_{p_1p_2}\to n_{p_2p_3}\to\dots\to n_{p_np_{n+1}}\to g$ on the graph. Figure 1 is an example of the graph representation of the workspace, in which the flow field is partitioned into 4 cells.
FIGURE 1
FIGURE 1. (A) Partitioned cells in the domain. On each boundary between two adjacent cells there is a candidate junction point, represented as a purple triangle. (B) Graph representation of the workspace. The vertices represent the candidate junctions, while the edges are the path segments between adjacent junctions. The red line in both plots represents the same example path.
The branch cost of the graph is defined as the travel time from one junction to an adjacent junction. The travel time can be computed by eq. 15 if the two junction positions are known. However, since the junction positions are unknown when optimizing the cell sequence, the branch cost of the graph cannot be computed explicitly. Hence we introduce the following assumption:
Assumption 4.1: We assume that the branch cost for every edge in $\mathcal{E}$ is a random variable with a known minimum value.
Remark 4.1: Even though the branch cost is unknown, its minimum value can be computed, since the branch cost eq. 15 is convex with respect to $\gamma_2-\gamma_1$ (Soulignac, 2011). We solve for the minimum cost of each edge in the graph, denoted as $w^*_{ij,jk}$, by solving the following constrained optimization problem using the interior-point method (Kim et al., 2007),
$$\min_{\gamma_1,\gamma_2}\ \tau(\gamma_1,\gamma_2,\theta_j) \quad \text{s.t.}\ f_{ij}(\gamma_1)=0,\ f_{jk}(\gamma_2)=0. \tag{17}$$
The informed graph search method we propose is an extension of a class of graph search algorithms called potential search (PTS) (Stern et al., 2014). PTS algorithms can be viewed as modifications of the celebrated A* algorithm for path planning (Hart et al., 1968). To determine which nodes should be searched, the algorithms maintain an OPEN list and a CLOSED list. A graph node is labeled NEW if it has not been searched by the algorithm. The OPEN list contains all the nodes that have been searched but still have a NEW neighbor. The CLOSED list consists of all the nodes that have been accessed by the search algorithm. To determine which cells should be searched first, the algorithm computes the cost-of-arrival, the minimal cost of going from the starting node $s$ to an arbitrary node $n$, and the cost-to-go, the minimal cost of going from $n$ to the goal node $g$. Let $g^*(n)$ denote the actual cost-of-arrival and $h^*(n)$ the actual cost-to-go of a node $n$. Since the actual cost-to-go is unknown during the search, a heuristic cost $h(n)\le h^*(n)$ is usually used by the search algorithm. The A* search algorithm sorts the OPEN list according to the value of $g^*(n)+h(n)$; the node with the lowest value is searched first. In our problem, the following estimated cost-to-go is used to guide the search:
The heuristic function defined in eq. 18 is the travel time of the vehicle traveling under the most favorable flow condition, reaching the goal position from the junction position closest to the goal. Hence $h(n)\le h^*(n)$, as required by A* search. However, in our problem the branch cost is unknown, so the exact value of the actual cost-of-arrival cannot be computed during the search process; this differs from a typical path planning problem solvable by the A* or PTS method. Hence we introduce the estimated cost-of-arrival, denoted as $g(n)$, computed by summing the minimum branch costs along the path, so that $g(n)\le g^*(n)$. The goal of A* search is to find the path with minimum cost. In our problem, due to the uncertainties in the branch cost, this goal is overly ambitious; our problem formulation eq. 16 instead aims to find a path with bounded cost. We define a potential function as follows:
Definition 4.1: The potential of a node $n$, denoted as $PT(n)$, is $\Pr(h^*(n)+g^*(n)<C)$.
The potential function characterizes the probability that a node is on a path satisfying the bounded cost constraint. Nodes with high potential have a higher probability of being part of the desired path. However, the exact potential of a node cannot be computed or compared, since $h^*(n)$ and $g^*(n)$ are unknown before the optimal path is found. Therefore, PTS algorithms usually design a key function to determine which nodes to search at each step of the graph search. Nodes in the OPEN list are sorted by the key function value instead of $g^*(n)+h(n)$, which is the main difference between the PTS algorithms and the A* method. Various key functions have been proposed for different bounded cost path planning problems (Thayer et al., 2012; Stern et al., 2014; Stern et al., 2011). One significant contribution of this paper is in extending the PTS method to solve bounded cost problems with uncertain branch cost, by introducing a new form of key function $K(n)\in\mathbb{R}_{\ge 0}$:
$$K(n)=\begin{cases}\dfrac{h(n)\,g(n)}{\big(C-h(n)-g(n)\big)^2}, & \text{if } h(n)+g(n)<C,\\[4pt] \infty, & \text{otherwise.}\end{cases} \tag{19}$$
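The key function is straightforward to transcribe (Python is our choice here):

```python
def key(h, g, C):
    """Key function of eq. 19: lower values indicate a node more likely to
    lie on a path whose total cost stays below the bound C."""
    if h + g >= C:
        return float("inf")
    return h * g / (C - h - g) ** 2

# the closer h(n) + g(n) gets to the bound C, the larger the key
print(key(1, 1, 10), key(2, 2, 10), key(6, 4, 10))
```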
$K(n)$ indicates the probability of node $n$ being on a path that satisfies the bounded cost constraint: nodes with lower key function value have a higher probability of being on such a path. The intuition is that, if $h(n)+g(n)<C$, the key function value increases as either $h(n)$ or $g(n)$ grows. In this case, the estimated cost $h(n)+g(n)$ increases and approaches $C$, so it becomes less likely that the true cost satisfies the bounded cost constraint, and the node $n$ is less likely to be on a feasible path. If $h(n)+g(n)\ge C$, then $n$ cannot be on a path satisfying the bounded cost constraint, since $h^*(n)+g^*(n)\ge h(n)+g(n)\ge C$; in this case, the key function is set to positive infinity.
PTS with our new key function is then applied to search for the optimal cell sequence in Algorithm 3. The only difference between our PTS and A* is that the total cost used by A* to sort the OPEN list is replaced by the key function, as shown in line 13. Similar to A*, the search algorithm consists of two processes: expansion and backtracking. During the iterative expansion process, the algorithm orders the nodes in the OPEN set according to the key function value and moves the node with the lowest key function value to the CLOSED set (lines 19, 20). Neighbors of this node and their key function values are updated if the neighboring nodes can be reached at a lower cost through the current node (lines 26, 28, 29). The propagation continues until the OPEN list is depleted or the goal node enters the OPEN set. Starting from the goal position, the backtracking process searches for the predecessor of the last node in the path set and adds it to the path, until the starting node is included in the path (lines 37, 38).
The PTS algorithm fulfills step one of the bounded cost path planning solution: it finds the vector of indices $\{p_i\}_{i=1}^{n+1}$ that is most likely to result in a bounded cost path. In step two, we find the junction positions that lead to the minimum total cost. Given the optimal cell sequence, problem eq. 16 converts to an optimization problem over the junction positions $\{\gamma_i\}_{i=1}^{n}$,
$$\min_{\{\gamma_i\}_{i=1}^{n}}\ \tau(x_0,\gamma_1,\theta_{p_1})+\sum_{i=1}^{n-1}\tau(\gamma_i,\gamma_{i+1},\theta_{p_i})+\tau(\gamma_n,x_f,\theta_{p_{n+1}}) \quad \text{s.t.}\ f_{p_i,p_{i+1}}(\gamma_i)=0,\ \forall i\in[1,n]. \tag{20}$$
This optimization problem is solved by the interior-point method. The optimal heading angle can then be computed from the junction positions using eq. 12: in each cell of the sequence $\{p_i\}_{i=1}^{n}$, given the optimal junction positions $\gamma_{i+1}$ and $\gamma_i$, the heading angle in cell $\mathcal{R}_{p_i}$ can be derived by
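The expansion and backtracking processes can be sketched as a small heap-based search. The toy graph, heuristic values, and helper names below are ours; the structure (OPEN heap ordered by the key of eq. 19, predecessor map, backtracking from the goal) mirrors the description above rather than reproducing Algorithm 3 line by line.

```python
import heapq

def key(h, g, C):
    return float("inf") if h + g >= C else h * g / (C - h - g) ** 2

def pts_search(edges, h, start, goal, C):
    """edges[u][v] = minimum branch cost w*; h[n] = heuristic cost-to-go."""
    g_cost = {start: 0.0}                    # estimated cost-of-arrival g(n)
    pred = {start: None}
    open_heap = [(key(h[start], 0.0, C), start)]
    closed = set()
    while open_heap:                         # expansion process
        k, n = heapq.heappop(open_heap)
        if k == float("inf"):
            return None                      # no remaining node can satisfy the bound
        if n in closed:
            continue
        if n == goal:                        # backtracking process
            path = []
            while n is not None:
                path.append(n)
                n = pred[n]
            return path[::-1]
        closed.add(n)
        for m, w in edges.get(n, {}).items():
            g_new = g_cost[n] + w
            if m not in g_cost or g_new < g_cost[m]:
                g_cost[m], pred[m] = g_new, n
                heapq.heappush(open_heap, (key(h[m], g_new, C), m))
    return None                              # OPEN depleted

# toy workspace: s -> a -> g is cheaper than s -> b -> g
edges = {"s": {"a": 1.0, "b": 4.0}, "a": {"g": 1.0}, "b": {"g": 1.0}}
h = {"s": 2.0, "a": 1.0, "b": 1.0, "g": 0.0}
print(pts_search(edges, h, "s", "g", C=10.0))   # ['s', 'a', 'g']
```

Shrinking the bound (e.g. `C=2.0`) drives every key to infinity and the search reports that no bounded-cost path exists.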
5 Theoretical Justification
In this section, we give theoretical justification to the proposed data-driven flow modeling and bounded cost path planning method.
5.1 Data-Driven Flow Modeling
Algorithm 1 can be theoretically justified by proving that the optimal solution to eq. 5 is also the optimal solution to eq. 2. We also prove that Algorithm 2 achieves error convergence and parameter convergence, meaning that the estimated trajectory converges to the actual trajectory and the estimated parameters converge to their true values.
Lemma 5.1: The optimal flow partition derived by solving eq. 2 is the same as the optimal flow partition derived from eq. 5.
PROOF: Let $\delta y(x,t)=y(x,t)-\bar{y}(x)$. Since $\sum_{t=T_0}^{T_f}\delta y(x,t)=\sum_{t=T_0}^{T_f}y(x,t)-(T_f-T_0)\bar{y}(x)=0$, the following equality holds
$$J=\sum_{i=1}^{N}\sum_{x\in\mathcal{R}_i}\sum_{t=T_0}^{T_f}\mathrm{dist}^2\big(y(x,t),\mu_i\big)=\sum_{i=1}^{N}\sum_{x\in\mathcal{R}_i}\sum_{t=T_0}^{T_f}\mathrm{dist}^2\big(\bar{y}(x)+\delta y(x,t),\mu_i\big)=\sum_{i=1}^{N}\sum_{x\in\mathcal{R}_i}\Big[(T_f-T_0)\,\mathrm{dist}^2\big(\bar{y}(x),\mu_i\big)+\sum_{t=T_0}^{T_f}\mathrm{dist}^2\big(y(x,t),\bar{y}(x)\big)\Big].$$
The second term in $J$ represents the temporal variation of the flow speed at one grid point, which does not change with the partitioning of the flow field. Hence $\arg\min_{\phi(x),\nu}J=\arg\min_{\phi(x),\nu}J'$, where $J'$ is defined in eq. 5. Thus the optimal solution of eq. 2 equals the optimal solution of eq. 5, which implies that the optimal flow partition of the time-varying flow field is equivalent to the optimal flow partition of the time-invariant flow field computed by taking the time average of the flow field observations, as described in line 1 of Algorithm 1.
Next we prove that under Assumption 2.3, the estimated trajectory converges to the actual trajectory and the estimated parameters converge to their true values, using adaptive control theory. To prove convergence, the persistent excitation condition must hold; it is given below.
Definition 5.2: (Sastry and Bodson, 1994; Khalil, 1996) A vector signal $u$ is persistently exciting if there exist positive constants $\kappa_1,\kappa_2$, and $T$ such that $\kappa_2 I\ge\int_t^{t+T}u(\tau)u^T(\tau)\,d\tau\ge\kappa_1 I$ for all $t$.
Let $\tilde{\phi}_1=[\phi_1\ \dots\ \phi_N\ 0\ \dots\ 0]$ and $\tilde{\phi}_2=[0\ \dots\ 0\ \phi_1\ \dots\ \phi_N]$. Let $w=[\tilde{\phi}_1,\tilde{\phi}_2,\Psi_C]^T\in\mathbb{R}^{(2N+1)\times 2}$, which is the input signal to eq. 24. We can construct a matrix $W(t)\in\mathbb{R}^{(2N+1)\times(2N+1)}$ as follows
The persistent excitation condition is critical for proving the convergence of the parameters (Narendra and Annaswamy, 1989). When $W(t)$ is singular, the parameter estimation errors may not converge to zero. The persistent excitation condition requires that the trajectory traveled by the robot spread over all the partitioned cells, as stated in the following lemma.
Lemma 5.3: The signal vector $w$ is persistently exciting if the vehicle visits all the partitioned cells.
PROOF: Since the partitioned cells do not overlap, for all $\tau$, $x(\tau)$ can only be in one cell. Hence for all $i,j\in I_R$,
$$\phi_i(x(\tau))\,\phi_j(x(\tau))=\mathbb{1}\{i=j\}=\begin{cases}1 & \text{if } i=j,\\ 0 & \text{otherwise.}\end{cases}$$
Thus $W(t)$ can be simplified to
If for all $i$ there exists $\tau\in[t,t+T]$ such that $\phi_i(x(\tau))=1$, meaning that the vehicle visits every cell during the interval $[t,t+T]$, then $W(t)$ is full rank, and hence $w$ is persistently exciting.
The persistent excitation condition must be satisfied for the flow parameters of all cells and the vehicle speed estimate to converge to their true values. It requires the vehicle to visit all the partitioned regions; if it is not satisfied, not all flow parameters in the partitioned cells can be accurately estimated. We further address this condition in the simulation and experimental results section. The convergence of the CLLE is presented as follows.
Theorem 5.4: Under the updating law eq. 11, the CLLE converges to zero as time goes to infinity.
PROOF: Consider the following Lyapunov function,
Since $e^T(\theta-\xi(t))\phi(x)=(\bar{\theta}-\bar{\xi}(t))^T\,e\otimes\phi(x)$, the derivative of $V$ is
$$\dot{V}=\big(-Ke+(\theta-\xi)\phi(x)+(V_R-V_L(t))\Psi_C\big)^T e+\frac{1}{\rho}\big(-\rho\,e\otimes\phi(x)\big)^T(\bar{\theta}-\bar{\xi})+\frac{1}{\rho}(V_R-V_L(t))\big(-\rho\,e^T\Psi_C\big)=-e^TKe\le 0.$$
$\dot{V}$ is negative semi-definite, which implies that $e,\bar{\xi},V_L(t)$ are bounded. In addition, the second-order time derivative of $V$ is
Thus $\ddot{V}$ is bounded, and hence $\dot{V}$ is uniformly continuous. Therefore $\lim_{t\to\infty}\dot{V}(t)=0$. Since $K$ is a diagonal matrix, $e(t)\to 0$ as $t\to\infty$.
Theorem 5.5: Under the updating law eq. 11, if the vehicle visits all the partitioned cells, then $\bar{\xi}(t)$ and $V_L(t)$ converge to $\bar{\theta}$ and $V_R$, respectively, as time goes to infinity.
PROOF: Let $\eta_1=\theta_1-\xi_1(t)$, $\eta_2=\theta_2-\xi_2(t)$, $\eta_3=V_R-V_L(t)$; then the CLLE dynamics can be written as
We define a new state variable $X=[e^T,\eta_1^T,\eta_2^T,\eta_3]^T$ and an output variable $Y=e$; then the dynamics of the state and output variables satisfy
where
$$A(t)=\begin{bmatrix}-K & \tilde{\phi}_1 & \tilde{\phi}_2 & \Psi_C\\ -\rho\tilde{\phi}_1 & 0 & 0 & 0\\ -\rho\tilde{\phi}_2 & 0 & 0 & 0\\ -\rho\Psi_C & 0 & 0 & 0\end{bmatrix},\qquad C=\begin{bmatrix}I & 0 & 0 & 0\end{bmatrix}.$$
Our goal is to show that the origin of $\dot{X}=A(t)X$ is uniformly asymptotically stable, which indicates that $\bar{\xi}$ converges to $\bar{\theta}$ and $V_L(t)$ converges to $V_R$. Let
There exist $c_1,c_2$ such that $c_1 I\le P\le c_2 I$, and there exists a constant $0<\nu<1$ such that
which is negative semi-definite. Then by the Lyapunov theorem (Theorem 3.8.4 in Ioannou and Sun, 1995), $\dot{X}=A(t)X$ is uniformly asymptotically stable if we can prove that $(C,A)$ is uniformly completely observable. First, we find a bounded matrix $L$ and show that $(C,A+LC)$ is uniformly completely observable; this then implies that $(C,A)$ is uniformly completely observable. Let $L=[0\ \ \rho\tilde{\phi}_1\ \ \rho\tilde{\phi}_2\ \ \rho\Psi_C^T]$. Since $\Psi_C$ is uniformly bounded, and every element of $\tilde{\phi}$ is either 0 or 1, $L$ is uniformly bounded, and
Thus, we now consider the observability of
Let $\eta=[\eta_1,\eta_2,\eta_3]^T$; then the system eq. 23 has the following form:
Due to the assumption that the vehicle visits all cells, by Lemma 5.3, $w$ is persistently exciting. Let $\Phi(\tau)=\int_t^{\tau}e^{-K(\tau-\sigma)}w(\sigma)\,d\sigma$ be the output of eq. 24 given input $w$. Then $\Phi(\tau)$ satisfies the persistent excitation conditions because $w(\sigma)$ is persistently exciting and the transfer function of eq. 24, $(sI+K)^{-1}$, is a stable, minimum-phase, proper rational transfer function. Therefore, there exist constants $\kappa_1,\kappa_2,T_0>0$ such that $\kappa_2 I\ge\frac{1}{T_0}\int_t^{t+T_0}\Phi(\tau)\Phi(\tau)^T d\tau\ge\kappa_1 I$ for all $t\ge 0$. By applying Lemma 4.8.4 in (Ioannou and Sun, 1995), we conclude that the system eq. 24 is uniformly completely observable; in other words, $(C,A+LC)$ is uniformly completely observable. By applying Lemma 4.8.1 in (Ioannou and Sun, 1995), the system $(C,A)$ is uniformly completely observable. Therefore, the origin of $\dot{X}=A(t)X$ is uniformly asymptotically stable, that is, $X\to 0$ as $t\to\infty$. This means that $\eta_1,\eta_2$, and $\eta_3$ each go to zero. Thus $\bar{\xi}$ and $V_L(t)$ converge to $\bar{\theta}$ and $V_R$, respectively.
5.2 Bounded Cost Path Planning
In this subsection, we prove that Algorithm 3 finds the optimal solution of eq. 4. Assumption 5.1 and Assumption 5.2 are required for the optimality proof.
Assumption 5.1: For any node that is neither the starting node $s$ nor the goal node $g$, the estimated cost-of-arrival $g(n)$ and the estimated cost-to-go $h(n)$ are bounded below: $g(n)\ge g_{\min}$ and $h(n)\ge h_{\min}$, where $g_{\min}>0$ and $h_{\min}>0$.
Remark 5.1: For any node that is not the goal node, $h(n)$ reaches its minimum when $n$ is adjacent to $g$. Similarly, for any node that is not the start node, $g(n)$ reaches its minimum when $n$ is adjacent to $s$. Since the flow partition algorithm is performed over discrete grid points in $D$, the size of the cells cannot be infinitely small. Therefore, $h(n)$ can only be zero if the junction represented by node $n$ slides on the same boundary as the goal point, and $g(n)$ can only be zero if the junction represented by node $n$ slides on the same boundary as the start point. However, by junction assignment, only one junction can be assigned on each boundary. Hence there exist $h_{\min}$ and $g_{\min}$ that bound $h(n)$ and $g(n)$ from below, and the lower bounds cannot be infinitely small.
Let $H_{\max}=\max\{\tfrac{C}{h_{\min}},\tfrac{C}{g_{\min}}\}$, and let $\{X_n\}_{n=1}^{N},\{Y_n\}_{n=1}^{N}$ denote sequences of independent and identically distributed random variables uniformly distributed over $[1,H_{\max}]$. To prove optimality of the algorithm, we make the following assumptions on the statistical relationships between $h^*(n),h(n)$ and $g^*(n),g(n)$.
Assumption 5.2: The true cost-to-go $h^*(n)$ and the heuristic function $h(n)$, as well as the true cost-of-arrival $g^*(n)$ and the estimated cost-of-arrival $g(n)$, satisfy $h^*(n)=h(n)Y_n$ and $g^*(n)=g(n)X_n$.
Remark 5.2: Both $h^*(n)$ and $g^*(n)$ are summations of branch costs along the optimal path. Since the branch cost is the travel time between two adjacent junctions sliding on two boundaries, the branch cost of every edge must have both a lower bound and an upper bound. Hence both $h^*(n)$ and $g^*(n)$ are assumed to be uniformly distributed, with minima $h(n)$ and $g(n)$ and maxima $h(n)H_{\max}$ and $g(n)H_{\max}$, respectively. In practice, the statistical models of $h^*(n)$ and $g^*(n)$ depend on the distribution of the flow field and may not be uniform in some flow cases. However, the following theoretical analysis can be adapted to other parameterizations of the statistical models of $h^*(n)$ and $g^*(n)$.
We show below that, by expanding the nodes with the lowest key function value without explicitly calculating the potential of nodes, the proposed algorithm expands the nodes with the highest potential value, and is thus guaranteed to find the optimal solution to eq. 4. Lemma 5.6 states that the key function is an equivalent evaluation of the potential value of nodes. Lemma 5.7 shows that the optimal path can be equally defined by either the potential or the key function value of nodes. Finally, given the two lemmas, we justify the optimality of the proposed algorithm in Theorem 5.8.
Lemma 5.6: For all $n_1,n_2\in G$, $PT(n_1)<PT(n_2)$ if and only if $K(n_1)>K(n_2)$.
PROOF: To simplify notation, let $h_1,h_1^*,g_1,g_1^*$ denote $h(n_1),h^*(n_1),g(n_1)$, and $g^*(n_1)$, respectively. The lemma trivially holds when either $K(n_1)$ or $K(n_2)$ is infinite. Below we show that the lemma holds when both $K(n_1)$ and $K(n_2)$ are finite; equivalently, $h_1+g_1<C$ and $h_2+g_2<C$. Due to the i.i.d. assumption stated in Assumption 5.2, $X_1,X_2$ can be written as a single random variable uniformly distributed on $[1,H_{\max}]$, and likewise $Y_1,Y_2$. Therefore, $PT(n_1)<PT(n_2)$ if and only if
The terms in the above inequality can be computed by integrating the probability density function,
where $S_1=\{(x,y)\,|\,h_1 y+g_1 x<C,\ x\in[1,H_{\max}],\ y\in[1,H_{\max}]\}$ and $S_2=\{(x,y)\,|\,h_2 y+g_2 x<C,\ x\in[1,H_{\max}],\ y\in[1,H_{\max}]\}$. By Assumption 5.2, $X\le\tfrac{C}{g_{\min}}$ and $Y\le\tfrac{C}{h_{\min}}$, and therefore $S_1,S_2$ are two triangles, as shown in Figure 2. Due to the uniform distribution of $X,Y$, the above integrals can be computed by multiplying the areas of $S_1,S_2$ by the joint density $\rho_{X,Y}(x,y)$, which is constant. Hence,
which implies $K(n_1)>K(n_2)$.
Let the ordered sequence $\Gamma$ denote a path connecting the start node $s$ with the goal node $g$ in the graph $G$. Define $K_{\max}(\Gamma)$ as the highest key function value among all nodes on the path $\Gamma$, that is, $K_{\max}(\Gamma)=\max_{n\in\Gamma}K(n)$.
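The ordering claimed by Lemma 5.6 can be checked numerically: under Assumption 5.2, $PT(n)$ is the probability that a uniform point $(X,Y)$ on $[1,H_{\max}]^2$ falls in the triangle $hY+gX<C$. A seeded Monte Carlo sanity check, with arbitrary numbers of our choosing:

```python
import random

def potential(h, g, C, Hmax, trials=200_000, seed=0):
    """Monte Carlo estimate of PT(n) = Pr(h*Y + g*X < C), X, Y ~ U[1, Hmax]."""
    rng = random.Random(seed)
    hits = sum(h * rng.uniform(1, Hmax) + g * rng.uniform(1, Hmax) < C
               for _ in range(trials))
    return hits / trials

def key(h, g, C):
    return float("inf") if h + g >= C else h * g / (C - h - g) ** 2

C, Hmax = 10.0, 20.0
pt1, pt2 = potential(1, 1, C, Hmax), potential(2, 2, C, Hmax)
k1, k2 = key(1, 1, C), key(2, 2, C)
# node 1 has the lower key, so it should have the higher potential
print(k1 < k2, pt1 > pt2)   # True True
```

For node 1 the triangle $\{x+y<10,\ x,y\ge 1\}$ has area $32$ inside the $19\times 19$ square, so the estimate should sit near $32/361\approx 0.089$.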
FIGURE 2
FIGURE 2. Illustration of computing $PT(n_1)$ and $PT(n_2)$. The red triangle is $S_1=\{(x,y)\,|\,h_1 y+g_1 x<C,\ x\in[1,H_{\max}],\ y\in[1,H_{\max}]\}$, and the green triangle is $S_2=\{(x,y)\,|\,h_2 y+g_2 x<C,\ x\in[1,H_{\max}],\ y\in[1,H_{\max}]\}$.
Lemma 5.7: The optimal path minimizes $K_{\max}(\Gamma)$ over all paths in the graph.
PROOF: Let $\Gamma^*$ denote the optimal path, which maximizes $\Pr(h^*(n)+g^*(n)<C)$. Suppose there is a path $\Gamma'$ different from $\Gamma^*$ with $K_{\max}(\Gamma')<K_{\max}(\Gamma^*)$; then there exist $n'\in\Gamma'$ and $n\in\Gamma^*$ satisfying $K(n')<K(n)$. By Lemma 5.6, $PT(n')>PT(n)$, indicating that $\Pr(h^*(n')+g^*(n')<C)>\Pr(h^*(n)+g^*(n)<C)$, which contradicts the assumption that $\Gamma^*$ is the optimal path. Hence a path is optimal if it minimizes $K_{\max}(\Gamma)$. Conversely, let $\Gamma=\arg\min_{\Gamma''\in G}K_{\max}(\Gamma'')$; then for every $\Gamma'$ different from $\Gamma$ and every $n\in\Gamma$, there exists $n'\in\Gamma'$ such that $K(n)<K(n')$. Thus by Lemma 5.6, $PT(n)>PT(n')$ for $n'$ in any path other than $\Gamma$, and hence the $\Gamma$ that minimizes $K_{\max}(\Gamma)$ is the optimal path.
Theorem 5.8: When a feasible solution exists, the proposed algorithm terminates if and only if an optimal path is found.
PROOF: Algorithm 3 can only terminate by finding the goal node or by depleting the OPEN set. However, the OPEN set can never be empty before termination if there is a feasible path from $s$ to the goal point. Hence Algorithm 3 must terminate by finding the goal point. Next we show that Algorithm 3 terminates only by finding an optimal path to the goal node. Suppose the algorithm terminates by finding a path $\Gamma'$ other than the optimal path $\Gamma^*$; then by Lemma 5.7, $K_{\max}(\Gamma^*)<K_{\max}(\Gamma')$, that is, there exist $n'\in\Gamma'$ and $n\in\Gamma^*$ such that $K(n)<K(n')$. Thus during the propagation process, Algorithm 3 would have selected $n$ for expansion rather than $n'$, contradicting the assumption that the algorithm terminates by finding $\Gamma'$. Hence the algorithm must terminate by finding the optimal path to the goal node.
5.3 Complexity Analysis
In our analysis, we derive the worst-case running time of Algorithm 3 and compare it with dynamic-programming-based planning methods, such as A*, to demonstrate the computational efficiency of the proposed planning algorithm. Suppose the flow field forecast is available on $N\times N$ grid points in the deployment domain, and suppose the domain is partitioned into $M$ cells by Algorithm 1.
To derive the worst-case running time of the proposed algorithm, we first consider the partitioning. Since a junction must be formed by the boundary of at least two cells, the total number of junctions cannot exceed $M(M-1)$, and hence the total number of nodes in the graph is at most $M(M-1)$. In one iteration, the sorting operation (line 13) and the computation of the key function, the minimum branch cost, and the heuristic (lines 29, 23, and 27) are each performed once. If the OPEN set is implemented as a heap, the worst-case running time of the operation in line 13 is $O(\log(M(M-1)))$. We assume that the key function, the minimum branch cost, and the heuristic can each be computed in constant time. There can be at most $M(M-1)$ iterations during the entire execution before the OPEN set is depleted. Hence, the worst-case running time of Algorithm 3 is $O(M(M-1)\log(M(M-1)))$.
The worst-case running time of A* is $O(2N^2\log N)$ (Nilsson, 2014). Thus, the proposed algorithm is more computationally efficient than A* when $M(M-1)<N^2$, i.e., when Algorithm 1 partitions the domain into fewer cells than the number of rectangular cells in the original gridded domain.
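With the numbers from the experiment in Section 6 (a $106\times 106$ forecast grid partitioned into 13 cells), the comparison is concrete; a quick check of the two bounds, ignoring constant factors as in the analysis above:

```python
import math

M, N = 13, 106                           # partitioned cells; grid dimension
nodes = M * (M - 1)                      # at most 156 junction nodes
pts_ops = nodes * math.log2(nodes)       # O(M(M-1) log M(M-1))
astar_ops = 2 * N * N * math.log2(N)     # O(2 N^2 log N)
print(nodes, N * N)                      # 156 11236
print(pts_ops < astar_ops)               # True
```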
6 Experiment and Simulation Results
In this section, we present results from implementing our flow field modeling and path planning methods in a simulated experiment. First, we describe the simulated experimental set-up and recent field experiments, which serve as a strong test of the methods due to the magnitude and variability of the flow. We validate the proposed flow modeling algorithm by comparing the estimated flow model parameters generated by the proposed flow estimation algorithm with the glider-estimated flow collected during the experiment. Based on the estimated flow model, we then simulate the bounded cost path planning algorithm and compare its performance with that of other AUV path planning algorithms.
6.1 Experimental and Simulation Setup
Our study is motivated by the use of underwater gliders off the coast near Cape Hatteras, North Carolina, US as part of a 16-month experiment (Processes driving Exchange At Cape Hatteras, PEACH) to
study the processes that cause exchange between the coastal and deep ocean at Cape Hatteras, a highly dynamic region characterized by confluent western boundary currents and convergence in the
adjacent shelf and slope waters. Underwater gliders, AUVs that change their buoyancy and center of mass to “fly” in a sawtooth-shaped pattern, were deployed on the shelf and shelf edge to capture
variability in the position of the Hatteras Front, the boundary between cool, fresh water on the shelf of the Mid Atlantic Bight and the warmer, saltier water observed in the South Atlantic Bight.
While the energy efficiency of the glider’s propulsion mechanism permits endurance of weeks to months, the forward speed of the vehicles is fairly limited (0.25–0.30 m/s), which can create
significant challenges for navigation in strong currents. Use of a thruster in a so-called “hybrid” glider configuration can increase forward speed to approximately 0.50 m/s, but at great energetic
cost. The continental shelf near Cape Hatteras is strongly influenced by the presence of the Gulf Stream, which periodically intrudes onto the shelf, resulting in strong and spatially variable flow
that can be nearly an order of magnitude greater than the forward speed of the vehicle (2+ m/s). With realistic estimates of the spatial and temporal variability of the flow, path planning can
provide a significant advantage for successful sampling.
We deployed one glider off Oregon Inlet, NC on May 16, 2017, and recovered it 14 days later. For its mission, the glider was initially tasked to sample along a path with offshore, inshore, and triangular segments designed to sample critical flow features (Figure 3); it was not equipped with a thruster. The glider surfaced approximately every 4 h to update its position with a GPS fix, communicate with shore, transmit a subset of data, and, most importantly, receive mission updates and commands to adapt sampling.
FIGURE 3
FIGURE 3. Survey domain near Cape Hatteras. The curve represents the glider trajectory during the first PEACH deployment. The red line path is the pre-assigned sampling pattern. Squares denote the glider surfacing positions along the trajectory, and the color of the trajectory depicts timestamps. The arrows represent the NCOM-predicted flow field at the starting time of the deployment.
6.2 Flow Modeling Using Glider Experimental Data
In this example, we present flow modeling results using the proposed flow partition and parameter estimation methods.
The flow map forecast is given by a 1-km horizontal resolution version of the Navy Coastal Ocean Model (NCOM, Martin, 2000), made available by J. Book and J. Osborne (Naval Research Laboratory,
Stennis Space Center, United States). In the domain of interest, the ocean model flow forecast is given at $106×106$ rectangular grid points. Tidal flow accounts for much of the short-term ($<24$
hour) temporal variation of the flow field. Hence the partition time interval is taken over multiple periods of the largest tidal constituent, the lunar semidiurnal $M2$ tide (period 12.42 h).
Maximum flow speed in this area is 2.2788 m/s, approximately 7.5 times the vehicle speed, and 4.5 times the speed of a hybrid glider using a thruster. We set the upper bound for flow partition error
to be 0.35 m/s, which is about 15% of the maximum flow speed in the domain. Figure 4 shows the flow partition error for different choices of the number of cells. Since the flow partition error first falls below the upper bound at $k=13$, the number of cells is chosen as 13. We smooth the cell boundaries into straight lines using the least-mean-square method. Even though smoothing the cell boundaries may overlook finer spatial variability of the flow field, it reduces the computational cost of solving the planning problem, specifically in solving eq. 17 and eq. 20.
The partitioned flow field is shown in Figure 5. Comparing Figures 3 and 5, it can be seen that the proposed algorithm captures the major spatial variation of the flow field by separating the high-speed flow regions from the areas where the flow is slower.
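As a concrete sketch of this selection rule, one could pick the smallest candidate cell count whose partition error falls below the 0.35 m/s bound. The error values below are hypothetical placeholders standing in for the Figure 4 curve, not the paper's measured data:

```python
def choose_cell_count(partition_error, bound):
    """Smallest candidate cell count whose flow-partition error is below the bound."""
    for k in sorted(partition_error):
        if partition_error[k] < bound:
            return k
    return None  # no candidate satisfies the bound

# Hypothetical error-versus-k values standing in for the Figure 4 curve.
errors = {10: 0.52, 11: 0.45, 12: 0.38, 13: 0.31, 14: 0.28}
print(choose_cell_count(errors, bound=0.35))  # 13
```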
FIGURE 4
FIGURE 5
FIGURE 5. Partitioned cells of the survey domain. The polygons are the partitioned regions. The blue arrows represent uniform flow speed in each of the cells generated from the proposed algorithm.
At each surfacing, the vehicle position is given by the GPS location. When the glider is underwater, we use linear interpolation to estimate the heading and vehicle position. The vehicle’s forward
speed is zero when it is at the ocean surface, where it drifts freely with the surface current. This violates the constant forward speed assumption stated in Assumption 2.1. Hence, we remove the segments of data collected while the vehicle is drifting at the surface, and then compute the estimated flow parameters with the proposed algorithm. The glider speed is initialized to 0.3 m/s, while the flow parameters are initialized to the flow vectors found by partitioning the NCOM data. Since the vehicle trajectory crosses cells 3, 6, 7, and 10 and does not enter any other cell, the glider trajectory does not satisfy the persistent excitation condition described in Lemma 5.3. Hence only the flow parameters in cells 3, 6, 7, and 10 can be updated by the adaptive update law, while the flow parameters in the remaining cells keep their initial values. To evaluate the performance of the proposed flow parameter estimation algorithm, we use the ADCIRC (Advanced Circulation) model output (Luettich et al., 1992) to model the tidal flow component, and derive the non-tidal glider-estimated flow speed by subtracting the ADCIRC-reported flow from the flow parameter estimate. The de-tided glider-estimated flow speed is taken as the ground truth for the flow parameters in the corresponding cells. The root mean square error (rmse) between the estimated parameters and the ground truth in cells 3, 6, 7, and 10 is shown in Table 1. In all four cells, the estimated flow parameters are in good agreement with the true values: the rmse is within 5% of the maximum flow speed in the domain. Figure 6 compares the estimated and true flow parameter values in one of the cells the glider trajectory visits; in cell 7, the estimated flow parameter matches the true value well.
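A minimal sketch of how such an rmse check could be computed. The per-step values below are invented for illustration; only the 2.2788 m/s maximum flow speed and the 5% criterion come from the text:

```python
import math

def rmse(estimates, ground_truth):
    """Root mean square error between two equal-length value sequences."""
    n = len(estimates)
    return math.sqrt(sum((e - g) ** 2 for e, g in zip(estimates, ground_truth)) / n)

# Hypothetical per-step flow estimates for one cell (m/s) and the de-tided
# "ground truth" obtained by subtracting the ADCIRC tidal component.
est = [0.42, 0.51, 0.47, 0.55]
truth = [0.40, 0.50, 0.50, 0.52]
err = rmse(est, truth)

max_flow_speed = 2.2788  # m/s, maximum flow speed in the survey domain
print(err, err <= 0.05 * max_flow_speed)  # the paper's 5% acceptance criterion
```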
TABLE 1
FIGURE 6
6.3 Bounded Cost Path Planning
In this example, we present simulation results for AUV bounded cost path planning. Since the flow in the domain of interest is fast, we assume that the glider samples the domain using combined buoyancy and thruster propulsion. Hence the AUV through-water speed is set to 0.5 m/s. The simulations are run on a PC with a Core i7 at 1.80 GHz and 32 GB RAM.
Figure 7 shows one example of the proposed bounded cost path planning method. The start and goal positions are assigned as $(−75.60, 35.06)$ and $(−74.98, 35.83)$ in longitude and latitude, respectively. The upper bound on the travel time is set to 72 h. The travel cost of the resulting path is 62.650 h, which satisfies the bounded cost constraint. As shown in the figure, the generated path
makes a detour and takes advantage of the strong northward ocean flow to travel to the goal position.
FIGURE 7
We run A* (Carroll et al., 1992) and the Level Set Method (LSM) (Lolla et al., 2014) for comparison with the proposed method. In total, 15 test cases are generated. Each test case $T=\{Start, Goal, d\}$ is built by first assigning the distance $d$ between the start and the goal node to be 20 km, 50 km, 80 km, or 100 km, then randomly placing the $Start$ point in the domain, and selecting the $Goal$ node so that its distance to the start node is $d$. The computation time column in Table 2 compares the averaged computation time of A*, LSM, and the proposed algorithm. Table 3 presents the post-hoc
analysis results of the simulation. The post-hoc analysis rejects the null hypotheses of equal performance, i.e., the proposed algorithm spends less computation time solving the planning problem than A* and LSM, for all scenarios of d. The difference between the three algorithms is due to the number of nodes in the graph. By partitioning the domain into 13 cells, the proposed algorithm searches for the optimal path in a graph with only $13×12$ nodes, while A* and LSM each search a domain containing $106×106$ nodes. Thus,
the computational cost of the proposed algorithm is significantly lower than the other two methods.
TABLE 2
TABLE 2. Computation time comparison of A*, Level Set Method, and the proposed algorithm. Avg. comp. time represents the averaged computation time for each simulation scenario, and STD comp. time
represents the standard deviation of the computation time. $%$ of increase describes the percentage increase in the computation cost when d increases.
TABLE 3
TABLE 3. Post-hoc analysis of simulation comparison between the proposed method, A*, and LSM. The mean and STD of difference describe the mean and standard deviation of the computation time
difference between the proposed method and the two other methods. The significance level is set as $α=0.05$ when computing the p-value.
Further, we compare the percentage increase in computation time as d increases. In Table 2, the % of increase column shows the growth in computation cost when d increases from 20 km to 50, 80, and 100 km, respectively. The percentage is calculated by taking the computation time of each algorithm at $d=20$ km as the base time and dividing the increase in computation time, as d scales up, by that base time. When the domain of interest scales up, the computational cost of the proposed algorithm increases the least compared with A* and LSM. This is because as d increases, both A* and LSM must expand significantly more nodes before finding the optimal solution, whereas for the proposed algorithm the number of nodes to be expanded stays relatively constant. Hence its computation cost grows less than that of A* and LSM as the domain scales up.
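The percentage-increase calculation described above can be sketched as follows. The timing values are hypothetical placeholders, not the measured results of Table 2:

```python
def pct_increase(base_time, scaled_time):
    """Percentage increase relative to the d = 20 km base computation time."""
    return 100.0 * (scaled_time - base_time) / base_time

# Hypothetical computation times (seconds) at d = 20, 50, 80, 100 km.
times = {20: 1.0, 50: 1.4, 80: 2.1, 100: 3.0}
base = times[20]
increases = {d: pct_increase(base, t) for d, t in times.items() if d != 20}
print(increases)
```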
It is worth mentioning that the proposed algorithm achieves decreased computation cost by compromising the path quality. Even though optimality of the planned path is guaranteed in the partitioned
flow field, as shown in Theorem 5.8, the planned path may not be optimal in the actual flow field, since the partitioned flow field is different from the actual flow field. We identify the
compromised path quality as the major constraint of the proposed algorithm.
In cases where the domain is larger, Algorithm 1 may still result in a large number of cells, leading to increased computation cost in solving the bounded cost planning problem. In such scenarios,
stochastic optimization methods, such as the differential evolution method, may be helpful in further reducing the computation cost of solving the planning problem. We refer to a survey paper on the
differential evolution methods (Das et al., 2016) for this matter.
7 Conclusion
In this paper, a bounded cost path planning method is developed for underwater vehicles, assisted by a data-driven flow field modeling method. The main advantage of the proposed modified PTS method is that it is more computationally efficient than A* and LSM in solving the AUV planning problem in time-invariant 2D fields, as demonstrated by the simulation results. The major limitation of the proposed algorithm is the compromised solution quality, resulting from the model reduction error introduced by the flow partition process. The proposed method has the potential to be extended to other path planning applications where task performance is sensitive to the planner's computational efficiency. Future work includes applying the proposed method in real glider deployments, comparing the planned trajectory with the real mission trajectory in the presence of drift and time-varying fields, and comparing the proposed method with other algorithms, such as differential evolution.
Data Availability Statement
Publicly available datasets were analyzed in this study. This data can be found here: https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/navoceano-ncom-reg. The flow map data analyzed in
this work is given by the Navy Coastal Ocean Model made available by J. Book and J. Osborne of Naval Research Laboratory, Stennis Space Center, US.
Author Contributions
MH, SC, and FZ contributed to design and theoretical analysis of the methods, and wrote the first draft of the manuscript. HZ, CE and FZ wrote sections of the manuscript and provided guidance for the
research work. All authors contributed to the manuscript revision, read, and approved the submitted version.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Carroll, K. P., McClaran, S. R., Nelson, E. L., Barnett, D. M., Friesen, D. K., and Williams, G. N. (1992). “AUV Path Planning: An A* Approach to Path Planning with Consideration of Variable Vehicle
Speeds and Multiple, Overlapping, Time-Dependent Exclusion Zones,” in Proceedings of the 1992 Symposium on Autonomous Underwater Vehicle Technology.
Chang, D., Liang, X., Wu, W., Edwards, C. R., and Zhang, F. (2014). Real-time Modeling of Ocean Currents for Navigating Underwater Glider Sensing Networks. Coop. Robots Sensor Networks, Stud. Comput.
Intelligence 507, 61–75. doi:10.1007/978-3-642-39301-3_4
Chang, D., Zhang, F., and Edwards, C. R. (2015). Real-Time Guidance of Underwater Gliders Assisted by Predictive Ocean Models. J. Atmos. Oceanic Tech. 32, 562–578. doi:10.1175/jtech-d-14-00098.1
Chassignet, E. P., Hurlburt, H. E., Smedstad, O. M., Halliwell, G. R., Hogan, P. J., Wallcraft, A. J., et al. (2007). The HYCOM (HYbrid Coordinate Ocean Model) Data Assimilative System. J. Mar. Syst.
65, 60–83. doi:10.1016/j.jmarsys.2005.09.016
Cho, S., and Zhang, F. (2016). “An Adaptive Control Law for Controlled Lagrangian Particle Tracking,” in Proceedings of the 11th ACM International Conference on Underwater Networks & Systems, 1–5.
Cho, S., Zhang, F., and Edwards, C. R. (2021). Learning and Detecting Abnormal Speed of marine Robots. Int. J. Adv. Robotic Syst. 18, 1729881421999268. doi:10.1177/1729881421999268
Cui, R., Li, Y., and Yan, W. (2015). Mutual Information-Based Multi-AUV Path Planning for Scalar Field Sampling Using Multidimensional RRT. IEEE Trans. Syst. Man, Cybernetics: Syst. 46, 993–1004.
Das, S., Mullick, S. S., and Suganthan, P. N. (2016). Recent Advances in Differential Evolution - an Updated Survey. Swarm Evol. Comput. 27, 1–30. doi:10.1016/j.swevo.2016.01.004
Eichhorn, M. (2013). Optimal Routing Strategies for Autonomous Underwater Vehicles in Time-Varying Environment. Robotics and Autonomous Systems. 67, 3–43. doi:10.1016/j.robot.2013.08.010
Gammell, J. D., Barfoot, T. D., and Srinivasa, S. S. (2018). Informed Sampling for Asymptotically Optimal Path Planning. IEEE Trans. Robot. 34, 966–984. doi:10.1109/tro.2018.2830331
Griffa, A., Piterbarg, L. I., and Özgökmen, T. (2004). Predictability of Lagrangian Particle Trajectories: Effects of Smoothing of the Underlying Eulerian Flow. J. Mar. Res. 62, 1–35. doi:10.1357/
Haidvogel, D. B., Arango, H., Budgell, W. P., Cornuelle, B. D., Curchitser, E., Di Lorenzo, E., et al. (2008). Ocean Forecasting in Terrain-Following Coordinates: Formulation and Skill Assessment of
the Regional Ocean Modeling System. J. Comput. Phys. 227, 3595–3624. doi:10.1016/j.jcp.2007.06.016
Hart, P., Nilsson, N. J., and Raphael, B. (1968). A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Transaction of Systems Science and Cybernetics SSC, 4.
Haza, A. C., Piterbarg, L. I., Martin, P., Özgökmen, T. M., and Griffa, A. (2007). A Lagrangian Subgridscale Model for Particle Transport Improvement and Application in the Adriatic Sea Using the
Navy Coastal Ocean Model. Ocean Model. 17, 68–91. doi:10.1016/j.ocemod.2006.10.004
Hou, M., Zhai, H., Zhou, H., and Zhang, F. (2019). “Partitioning Ocean Flow Field for Underwater Vehicle Path Planning,” in OCEANS 2019-Marseille (IEEE), 1–8.
Karaman, S., and Frazzoli, E. (2011). Sampling-based Algorithms for Optimal Motion Planning. Int. J. Robotics Res. 30, 846–894. doi:10.1177/0278364911406761
Kim, S.-J., Koh, K., Lustig, M., Boyd, S., and Gorinevsky, D. (2007). An Interior-Point Method for Large-Scale-Regularized Least Squares. IEEE J. Sel. Top. Signal. Process. 1, 606–617. doi:10.1109/
Kuffner, J. J., and LaValle, S. M. (2000). “RRT-connect: An Efficient Approach to Single-Query Path Planning,” in Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on
Robotics and Automation (Symposia Proceedings (Cat. No. 00CH37065) (IEEE)), 995–1001.
Kularatne, D., Bhattacharya, S., and Hsieh, M. A. (2018). Going with the Flow: a Graph Based Approach to Optimal Path Planning in General Flows. Auton. Robot 42, 1369–1387. doi:10.1007/
Kularatne, D., Bhattacharya, S., and Hsieh, M. A. (2017). Optimal Path Planning in Time-Varying Flows Using Adaptive Discretization. IEEE Robotics Automation Lett. 3, 458–465.
Leonard, N. E., Paley, D. A., Davis, R. E., Fratantoni, D. M., Lekien, F., and Zhang, F. (2010). Coordinated Control of an Underwater Glider Fleet in an Adaptive Ocean Sampling Field experiment in
Monterey Bay. J. Field Robotics 27, 718–740. doi:10.1002/rob.20366
Lermusiaux, P. F. J. (2006). Uncertainty Estimation and Prediction for Interdisciplinary Ocean Dynamics. J. Comput. Phys. 217, 176–199. doi:10.1016/j.jcp.2006.02.010
Li, W., Lu, J., Zhou, H., and Chow, S.-N. (2017). Method of Evolving Junctions: A New Approach to Optimal Control with Constraints. Automatica 78, 72–78. doi:10.1016/j.automatica.2016.12.023
Lolla, T., Lermusiaux, P. F. J., Ueckermann, M. P., and Haley, P. J. (2014). Time-optimal Path Planning in Dynamic Flows Using Level Set Equations: Theory and Schemes. Ocean Dyn. 64, 1373–1397.
Luettich, R. A., Westerink, J. J., Scheffner, N. W., et al. (1992). ADCIRC: an advanced three-dimensional circulation model for shelves, coasts, and estuaries. Report 1, Theory and methodology of
ADCIRC-2DD1 and ADCIRC-3DL. Tech. rep., Coastal Engineering Research Center, Mississippi, US.
Martin, P. J. (2000). Description of the Navy Coastal Ocean Model Version 1.0. Tech. Rep. NRL/FR/7322–67900-9962. Mississippi, US: Naval Research Lab Stennis Space Center.
Mokhasi, P., Rempfer, D., and Kandala, S. (2009). Predictive Flow-Field Estimation. Physica D: Nonlinear Phenomena 238, 290–308. doi:10.1016/j.physd.2008.10.009
Ozog, P., Carlevaris-Bianco, N., Kim, A., and Eustice, R. M. (2016). Long-term Mapping Techniques for Ship hull Inspection and Surveillance Using an Autonomous Underwater Vehicle. J. Field Robotics
33, 265–289. doi:10.1002/rob.21582
Panda, M., Das, B., Subudhi, B., and Pati, B. B. (2020). A Comprehensive Review of Path Planning Algorithms for Autonomous Underwater Vehicles. Int. J. Autom. Comput. 17, 321–352. doi:10.1007/
Pereira, A. A., Binney, J., Hollinger, G. A., and Sukhatme, G. S. (2013). Risk-aware Path Planning for Autonomous Underwater Vehicles Using Predictive Ocean Models. J. Field Robotics 30, 741–762.
Rhoads, B., Mezic, I., and Poje, A. C. (2012). Minimum Time Heading Control of Underpowered Vehicles in Time-Varying Ocean Currents. Ocean Eng. 66, 12–31.
Roberge, V., Tarbouchi, M., and Labonté, G. (2012). Comparison of Parallel Genetic Algorithm and Particle Swarm Optimization for Real-Time UAV Path Planning. IEEE Trans. Ind. Inform. 9, 132–141.
Sastry, S., and Bodson, M. (1994). Adaptive Control: Stability, Convergence and Robustness. New Jersey, US: Prentice-Hall.
Savidge, D. K., Austin, J. A., and Blanton, B. O. (2013a). Variation in the Hatteras Front Density and Velocity Structure Part 1: High Resolution Transects from Three Seasons in 2004-2005.
Continental Shelf Res. 54, 93–105. doi:10.1016/j.csr.2012.11.005
Savidge, D. K., Austin, J. A., and Blanton, B. O. (2013b). Variation in the Hatteras Front Density and Velocity Structure Part 2: Historical Setting. Continental Shelf Res. 54, 106–116. doi:10.1016/
Shchepetkin, A. F., and McWilliams, J. C. (2005). The Regional Oceanic Modeling System (ROMS): a Split-Explicit, Free-Surface, Topography-Following-Coordinate Oceanic Model. Ocean Model. 9, 347–404.
Smith, R. N., Chao, Y., Li, P. P., Caron, D. A., Jones, B. H., and Sukhatme, G. S. (2010). Planning and Implementing Trajectories for Autonomous Underwater Vehicles to Track Evolving Ocean Processes
Based on Predictions from a Regional Ocean Model. Int. J. Robotics Res. 29, 1475–1497. doi:10.1177/0278364910377243
Soulignac, M. (2011). Feasible and Optimal Path Planning in strong Current fields. IEEE Trans. Robot. 27, 89–98. doi:10.1109/tro.2010.2085790
Stern, R., Felner, A., van den Berg, J., Puzis, R., Shah, R., and Goldberg, K. (2014). Potential-based Bounded-Cost Search and Anytime Non-parametric A*. Artif. Intelligence 214, 1–25. doi:10.1016/
Stern, R. T., Puzis, R., and Felner, A. (2011). “Potential Search: A Bounded-Cost Search Algorithm,” in Twenty-First International Conference on Automated Planning and Scheduling.
Subramani, D. N., and Lermusiaux, P. F. J. (2016). Energy-optimal Path Planning by Stochastic Dynamically Orthogonal Level-Set Optimization. Ocean Model. 100, 57–77. doi:10.1016/j.ocemod.2016.01.006
Thayer, J. T., Stern, R., Felner, A., and Ruml, W. (2012). “Faster Bounded-Cost Search Using Inadmissible Estimates,” in Twenty-Second International Conference on Automated Planning and Scheduling.
Xiang, X., Jouvencel, B., and Parodi, O. (2010). Coordinated Formation Control of Multiple Autonomous Underwater Vehicles for Pipeline Inspection. Int. J. Adv. Robotic Syst. 7, 3. doi:10.5772/7242
Zamuda, A., Hernández Sosa, J. D., and Adler, L. (2016). Constrained Differential Evolution Optimization for Underwater Glider Path Planning in Sub-mesoscale Eddy Sampling. Appl. Soft Comput. 42,
93–118. doi:10.1016/j.asoc.2016.01.038
Zamuda, A., and Hernández Sosa, J. D. (2014). Differential Evolution and Underwater Glider Path Planning Applied to the Short-Term Opportunistic Sampling of Dynamic Mesoscale Ocean Structures. Appl.
Soft Comput. 24, 95–108. doi:10.1016/j.asoc.2014.06.048
Zamuda, A., and Sosa, J. D. H. (2019). Success History Applied to Expert System for Underwater Glider Path Planning Using Differential Evolution. Expert Syst. Appl. 119, 155–170. doi:10.1016/
Zeng, Z., Lammas, A., Sammut, K., He, F., and Tang, Y. (2014). Shell Space Decomposition Based Path Planning for AUVs Operating in a Variable Environment. Ocean Eng. 91, 181–195. doi:10.1016/
Zeng, Z., Lian, L., Sammut, K., He, F., Tang, Y., and Lammas, A. (2015). A Survey on Path Planning for Persistent Autonomy of Autonomous Underwater Vehicles. Ocean Eng. 110, 303–313. doi:10.1016/
Zhai, H., Hou, M., Zhang, F., and Zhou, H. (2020). Method of Evolving junction on Optimal Path Planning in Flow fields. IEEE Transactions on Robotics.
Zhang, F. (2016). Cyber-maritime Cycle: Autonomy of marine Robots for Ocean Sensing. FNT in Robotics 5, 1–115. doi:10.1561/2300000037
Keywords: robotic path planning, graph search method, bounded cost search, parameter identification, underwater vehicle
Citation: Hou M, Cho S, Zhou H, Edwards CR and Zhang F (2021) Bounded Cost Path Planning for Underwater Vehicles Assisted by a Time-Invariant Partitioned Flow Field Model. Front. Robot. AI 8:575267.
doi: 10.3389/frobt.2021.575267
Received: 23 June 2020; Accepted: 17 June 2021;
Published: 14 July 2021.
Edited by: Victor Zykov, Independent Researcher, Menlo Park, CA, United States
Copyright © 2021 Hou, Cho, Zhou, Edwards and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or
reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with
accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Fumin Zhang, fumin@gatech.edu
For every fixed k ≥ 4, it is proved that if an n-vertex directed graph has at most t pairwise arc-disjoint directed k-cycles, then there exists a set of at most (2/3)kt + o(n^2) arcs that meets all directed k-cycles, and that the set of k-cycles admits a fractional cover of value at most (2/3)kt. It is also proved that the ratio (2/3)k cannot be improved to a constant smaller than k/2. For k = 5 the constant 2k/3 is improved to 25/8, and for k = 3 it was recently shown by Cooper et al. [European J. Combin., 101 (2022), 103462] that the constant can be taken to be 9/5. The result implies a deterministic polynomial-time (2/3)k-approximation algorithm for the directed k-cycle cover problem, improving upon a previous (k-1)-approximation algorithm of Kortsarz, Langberg, and Nutov [SIAM J. Discrete Math., 24 (2010), pp. 255-269]. More generally, for every directed graph H we introduce a graph parameter f(H) for which it is proved that if an n-vertex directed graph has at most t pairwise arc-disjoint H-copies, then there exists a set of at most f(H)t + o(n^2) arcs that meets all H-copies, and that the set of H-copies admits a fractional cover of value at most f(H)t. It is shown that for almost all H it holds that f(H) ≈ |E(H)|/2 and that for every k-vertex tournament H it holds that f(H) ≤ ⌊k^2/4⌋.
Bibliographical note
Publisher Copyright:
© 2024 Society for Industrial and Applied Mathematics.
• approximation
• covering
• cycle
• packing
Is there a Largest Prime Number? - LargestandBiggest.com
On December 26, 2017, the Great Internet Mersenne Prime Search (GIMPS) announced the discovery of a 23-million-digit prime number. On that day, a computer volunteered by Jonathan Pace found the record-breaking prime 2^77232917 − 1 (the Mersenne prime M77232917). Thousands of individuals volunteer their computing power to GIMPS for free.
Is 11 a prime number?
Yes, 11 is prime. The first 25 prime numbers (all the prime numbers less than 100) are: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97 (sequence A000040 in the OEIS). Every prime other than 2 is odd, and such primes are called odd primes.
Why is 24 not a prime number?
Consider the positive divisors of 24: 1, 2, 3, 4, 6, 8, 12, 24. For a number to be prime, it must have exactly two distinct positive divisors: itself and 1. Since 24 has eight divisors, it is not prime.
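The divisor-counting definition above can be checked directly; a small sketch in Python:

```python
def divisors(n):
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime(n):
    """Prime means exactly two distinct positive divisors: 1 and n itself."""
    return len(divisors(n)) == 2

print(divisors(24))  # [1, 2, 3, 4, 6, 8, 12, 24] -- eight divisors
print(is_prime(24))  # False
print(is_prime(17))  # True
```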
Is 17 a prime number?
Yes, 17 is a prime number, since it has only two factors, 1 and 17. By contrast, 51 is not a prime number because it has more than two factors: 51 is composite, since it can be factored as 3 × 17, and its divisors are 1, 3, 17, and 51.
Why is 75 not a prime number?
The number 75 is not prime because it can be expressed as a product of prime factors: 75 = 3 × 5 × 5. In other words, 75 is divisible not only by 1 and itself but also by 3, 5, 15, and 25. As a result, 75 is a 'composite number'.
What is Coprime number?
A pair of integers is co-prime when their highest common factor (HCF) is exactly 1. Co-prime numbers are also known as relatively prime or mutually prime numbers.
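A gcd-based check of this definition, using Python's standard library, might look like:

```python
from math import gcd

def are_coprime(a, b):
    """Two integers are co-prime exactly when their highest common factor is 1."""
    return gcd(a, b) == 1

print(are_coprime(8, 15))   # True: 8 and 15 share no factor other than 1
print(are_coprime(51, 17))  # False: both are divisible by 17
```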
What is the fastest way to find a prime number?
Prime sieves
A prime sieve, also known as a prime number sieve, is a fast method for finding primes. There are many different prime sieves. Among the most popular are the simple sieve of Eratosthenes (ca. 250 BCE), the sieve of Sundaram (1934), and the even more complicated but nevertheless faster sieve of Atkin (2003).
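As an illustration of the first of these, a minimal sieve of Eratosthenes could be written as:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes <= limit by repeatedly crossing out multiples."""
    is_candidate = [True] * (limit + 1)
    is_candidate[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_candidate[p]:
            # Cross out multiples of p, starting at p*p (smaller multiples
            # were already crossed out by smaller primes).
            for multiple in range(p * p, limit + 1, p):
                is_candidate[multiple] = False
    return [n for n in range(2, limit + 1) if is_candidate[n]]

primes = sieve_of_eratosthenes(100)
print(len(primes), primes[:10])  # 25 primes below 100
```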
What is the smallest prime number?
The smallest prime number is 2; it is also the only even prime number.
Why is 2 a prime number?
Proof: A prime number is defined as a positive integer with exactly two distinct positive divisors. Because the divisors of 2 are 1 and 2, it has exactly two distinct divisors, and is therefore prime. In fact, every even number greater than 2 is composite precisely because it is divisible by 2 (a prime).
Is 17 lucky or unlucky?
The number 17 is considered bad luck in Italian culture. Rearranged anagrammatically, the Roman numeral XVII becomes VIXI, Latin for "I have lived", implying "my life has ended" (cf. Cicero's famous statement announcing a capital punishment).
Is 17 a perfect square?
No, the number 17 is not a perfect square.
Is 17 divisible by any number?
Yes: 17 is divisible only by 1 and by itself. Since it has exactly two positive factors, 17 is a prime number.
Is 75 a perfect number?
No. A perfect number equals the sum of its proper divisors; for 75 these are 1, 3, 5, 15, and 25, which sum to 49, not 75. Nor is 75 a prime number: its positive divisors are 1, 3, 5, 15, 25, and 75, so it has more than the two divisors (itself and 1) required of a prime.
Is 75 a perfect square number?
A: No, the number 75 is not a perfect square.
Is 75 prime or composite?
The factors of 75 are 1, 3, 5, 15, 25, and 75. Because 75 has factors other than 1 and itself, it is composite, not prime.
Kinetic Data Structures
Let's say you want to maintain a sorted list of items, where each item is associated with a real-number key. You can imagine placing each item at the point on the real line corresponding to its key. Now, let the key of each item change continuously (i.e., no jumps are allowed). As long as no two (consecutive) items cross, the sorted order is intact. When two items cross, they are exchanged in the list and the sorted order is once again correct. This is a trivial example of a kinetic data structure. The key observation is that the combinatorial structure being maintained changes at discrete times (events), even though the underlying building blocks change continuously.
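A toy version of this kinetic sorted list can be sketched as follows. This is an illustrative simulation, not CGAL code: items move on linear trajectories key(t) = a + b·t, each event is the crossing time of two consecutive items, and processing an event swaps those two neighbors. Degenerate simultaneous events are ignored for simplicity:

```python
def key_at(item, t):
    """Each item moves on a linear trajectory: key(t) = a + b*t."""
    a, b = item
    return a + b * t

def next_event(items, t):
    """Earliest time strictly after t at which two consecutive items cross."""
    best = None
    for i in range(len(items) - 1):
        (a1, b1), (a2, b2) = items[i], items[i + 1]
        if b1 != b2:
            t_cross = (a2 - a1) / (b1 - b2)  # solve a1 + b1*t == a2 + b2*t
            if t_cross > t and (best is None or t_cross < best[0]):
                best = (t_cross, i)
    return best

def kinetic_sort(items, t_end):
    """Maintain a list sorted by key_at(., t), processing swap events up to t_end."""
    t = 0.0
    items = sorted(items, key=lambda it: key_at(it, t))
    event_times = []
    while True:
        ev = next_event(items, t)
        if ev is None or ev[0] > t_end:
            break
        t, i = ev
        items[i], items[i + 1] = items[i + 1], items[i]  # certificate failed: swap
        event_times.append(t)
    return items, event_times

# Three items whose keys start at 0, 3, 5 and move at speeds +1, 0, -1.
result, event_times = kinetic_sort([(0.0, 1.0), (3.0, 0.0), (5.0, -1.0)], t_end=10.0)
print(result)       # sorted order at the end of the simulation
print(event_times)  # the discrete times at which swaps occurred
```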
This chapter describes a number of such kinetic data structures implemented using the Kinetic framework described in Chapter 54. We first introduce kinetic data structures and sweepline algorithms in Section 53.2; this section can be skipped if the reader is already familiar with the area. The next sections, Section 53.2.1 and Section 53.3, introduce the terminology and give an overview of the framework. They are recommended reading for all readers, even those who only use the provided kinetic data structures. We then present kinetic data structures for Delaunay triangulations in two and three dimensions in Section 53.4.
If you are already familiar with kinetic data structures and know what you want to do, you may want to first take a look at Section 53.1, which covers quick hints.
53.1 Quick Hints
This section gives quick answers to some questions people might have. It presumes knowledge of kinetic data structures and this framework.
How do I store extra information along with, for example, a kinetic Point_2?
See the example Kinetic_framework/defining_a_simulation_traits.cpp to see how to define a new SimulationTraits class where the Active-Objects-Table contains extra data along with the point.
Where is the best place to look if I want to write my own kinetic data structure?
We provide two simple kinetic data structures. The most trivial is Kinetic_framework/trivial_kds.cpp, and a slightly more complicated one is:
#include <CGAL/Kinetic/Sort.h>
How can I use kinetic data structures to update Delaunay triangulations?
We are working on that one, but you will have to wait.
53.2 An Overview of Kinetic Data Structures and Sweep Algorithms
Kinetic data structures were first introduced by Basch et al. in 1997 [BGH97]. The idea stems from the observation that most, if not all, computational geometry structures are built using
predicates - functions on quantities defining the geometric input (e.g. point coordinates), which return a discrete set of values. Many predicates reduce to determining the sign of a polynomial on
the defining parameters of the primitive objects. For example, to test whether a point lies above or below a plane we compute the dot product of the point with the normal of the plane and subtract
the plane's offset along the normal. If the result is positive, the point is above the plane, zero on the plane, negative below. The validity of many combinatorial structures built on top of
geometric primitives can be verified by checking a finite number of predicates of the geometric primitives. These predicates, which collectively certify the correctness of the structure, are called
certificates. For a Delaunay triangulation in three dimensions, for example, the certificates are one InCircle test per facet of the triangulation, plus a point plane orientation test for each facet
or edge of the convex hull.
The kinetic data structures approach is built on top of this view of computational geometry. Let the geometric primitives move by replacing each of their defining quantities with a function of time
(generally a polynomial). As time advances, the primitives trace out paths in space called trajectories. The values of the polynomial functions of the defining quantities used to evaluate the
predicates now also become functions of time. We call these functions certificate functions. Typically, a geometric structure is valid when all predicates have a specific non-zero sign. In the
kinetic setting, as long as the certificate functions maintain the correct sign as time varies, the corresponding predicates do not change values, and the original data structure remains correct.
However, if one of the certificate functions changes sign, the original structure must be updated, as well as the set of certificate functions that verify it. We call such occurrences events.
Maintaining a kinetic data structure is then a matter of determining which certificate function changes sign next, i.e. determining which certificate function has the first real root that is greater
than the current time, and then updating the structure and the set of certificate functions. In addition, the trajectories of primitives are allowed to change at any time, although $C^0$-continuity of
the trajectories must be maintained. When a trajectory update occurs for a geometric primitive, all certificates involving that primitive must be updated. We call the collection of kinetic data
structures, primitives, event queue and other support structures a simulation.
Sweepline algorithms for computing arrangements in $d$ dimensions easily map on to kinetic data structures by taking one of the coordinates of the ambient space as the time variable. The kinetic data
structure then maintains the arrangement of a set of objects defined by the intersection of a hyperplane of dimension $d-1$ with the objects whose arrangement is being computed.
Time is one of the central concepts in a kinetic simulation. Just as static geometric data structures divide the continuous space of all possible inputs (as defined by sets of coordinates) into a
discrete set of combinatorial structures, kinetic data structures divide the continuous time domain into a set of disjoint intervals. In each interval the combinatorial structure does not change, so,
in terms of the combinatorial structure, all times in the interval are equivalent. We capitalize on this equivalence in the framework in order to simplify computations. If the primitives move on
polynomial trajectories and the certificates are polynomials in the coordinates, then events occur at real roots of polynomials of time. Real numbers, which define the endpoints of the interval, are
more expensive to compute with than rational numbers, so performing computations at a rational number inside the interval is preferable whenever possible. See Section 54.1.4 for an example of where
this equivalence is exploited.
53.2.1 Terms Used
primitive
The basic geometric types, e.g., the points of a triangulation. A primitive has a set of coordinates.
combinatorial structure
A structure built on top of the primitives. The structure does not depend directly on the coordinates of the primitives, only on relationships between them.
trajectory
The path traced out by a primitive as time passes. In other words, how the coordinates of a primitive change with time.
snapshot
The position of all the primitives at a particular moment in time.
static
Having to do with geometric data structures on non-moving primitives.
predicate
A function which takes the coordinates of several primitives from a snapshot as input and produces one of a discrete set of outputs.
certificate
One of a set of predicates which, when all have the correct values, ensure that the combinatorial structure is correct.
certificate function
A function of time which is positive when the corresponding certificate has the correct value. When the certificate function changes sign, the combinatorial structure needs to be updated.
event
When a certificate function changes sign and the combinatorial structure needs to be updated.
53.3 An Overview of the Kinetic Framework
The provided kinetic data structures are implemented on top of the Kinetic framework presented in Chapter 54. It is not necessary to know the details of the framework, but some familiarity is useful.
Here we present a quick overview of the framework.
The framework is structured around five main concepts. See Figure 53.1 for a schematic of how a kinetic data structure interacts with the various parts. The main concepts are
• the Kinetic::Simulator. Models of this concept process events in the correct order and audit kinetic data structures. There should be one instance of a model of this concept per simulation.
• the Kinetic::Kernel. The structure of a Kinetic::Kernel is analogous to the static CGAL (i.e., non-kinetic) kernels in that it defines a set of primitives and functors which generate certificates
from the primitives.
• the Kinetic::ActiveObjectsTable. Models of this concept hold a collection of kinetic primitives in a centralized manner. This structure centralizes management of the primitives in order to
properly disseminate notifications when trajectories change, new primitives are added or primitives are deleted. There is generally one instance of a model of this concept per simulation.
• the Kinetic::InstantaneousKernel. Models of this concept allow existing non-kinetic CGAL data structures to be used on a snapshot of kinetic data. As a result, pre-existing static structures can
be used to initialize and audit kinetic data structures.
• the Kinetic::FunctionKernel. This concept is the computational kernel of our framework. Models of this concept are responsible for representing, generating and manipulating the motional and
certificate functions and their roots. It is this concept that provides the kinetic data structures framework with the necessary algebraic operations for manipulating event times. The
Kinetic::FunctionKernel is discussed in detail in Section 54.2.
Figure 53.1: The figure shows the interaction between the Kinetic::Sort<Traits, Visitor> kinetic data structure and the various pieces of our package. Other, more complicated, kinetic data structures
will also use the Kinetic::InstantaneousKernel in order to insert/remove geometric primitives and audit themselves. Kinetic::Sort<Traits, Visitor> uses the sorting functionality in the STL instead.
For simplicity, we added an additional concept, that of Kinetic::SimulationTraits, which wraps together a particular set of choices for the above concepts and is responsible for creating instances of
each of the models. As a user of existing kinetic data structures, this is the only framework object you will have to create. The addition of this concept reduces the choices the user has to make to
picking the dimension of the ambient space and choosing between exact and inexact computations. The model of Kinetic::SimulationTraits creates an instance each of the Kinetic::Simulator and
Kinetic::ActiveObjectsTable. Handles for these instances as well as instances of the Kinetic::Kernel and Kinetic::InstantaneousKernel can be requested from the simulation traits class. Both the
Kinetic::Kernel and the Kinetic::Simulator use the Kinetic::FunctionKernel, the former to find certificate failure times and the latter to operate on them. For technical reasons, each supplied model
of Kinetic::SimulationTraits also picks out a particular type of kinetic primitive which will be used by the kinetic data structures.
53.4 Using Kinetic Data Structures
There are five provided kinetic data structures. They are
• maintain a list of points sorted by x-coordinate;
• maintain the Delaunay triangulation of a set of two-dimensional points;
• maintain the Delaunay triangulation of a set of three-dimensional points;
• maintain the regular triangulation of a set of weighted three-dimensional points;
• restrict points to stay within a box by bouncing them off the walls.
53.4.1 A Simple Example
Using a kinetic data structure can be as simple as the following:
File: examples/Kinetic_data_structures/sort.cpp
#include <CGAL/Kinetic/basic.h>
#include <CGAL/Kinetic/Exact_simulation_traits.h>
#include <CGAL/Kinetic/Insert_event.h>
#include <CGAL/Kinetic/Sort.h>
#include <fstream>

int main(int, char *[])
{
    typedef CGAL::Kinetic::Exact_simulation_traits Traits;
    typedef CGAL::Kinetic::Insert_event<Traits::Active_points_1_table> Insert_event;
    typedef Traits::Active_points_1_table::Data Moving_point;
    typedef CGAL::Kinetic::Sort<Traits> Sort;
    typedef Traits::Simulator::Time Time;

    Traits tr(0, 100000);
    Sort sort(tr);
    Traits::Simulator::Handle sp = tr.simulator_handle();

    std::ifstream in("data/points_1");
    in >> *tr.active_points_1_table_handle();

    // process the events one at a time until the end of the simulation
    while (sp->next_event_time() != sp->end_time()) {
        sp->set_current_event_number(sp->current_event_number() + 1);
    }
    return EXIT_SUCCESS;
}
Using the other kinetic data structures is substantially identical. Please see the appropriate files in the demo/Kinetic_data_structures directory.
In the example, first the Kinetic::SimulationTraits object is chosen (in this case one that supports exact computations). Then the kinetic data structure is defined using the chosen traits object and
a visitor class which logs changes to the sorted list. Next, instances of the two are created and a set of points is read from a file. Then, the simulator is instructed to process all the events
until the end of the simulation. Finally, a record of what happened is printed to the terminal.
Several important things happen behind the scenes in this example. First, the Kinetic::ActiveObjectsTable which holds the moving points notifies the kinetic data structure that new points have been
added to the simulation. Second, the Kinetic::Sort<Traits,Visitor> kinetic data structure registers its events with the Kinetic::Simulator by providing a time and a proxy object for each event. When
a particular event occurs, the Kinetic::Simulator calls a function on the proxy object which in turn updates the kinetic data structure.
The example illustrates how to monitor the supplied data structures as they evolve by using a Kinetic::SortVisitor object - a small class whose methods are called whenever the kinetic data structure
changes. Hooks for such visitor concepts are provided for all of the shipped kinetic data structures. In the case of kinetic sorting, the visitor's methods are called every time a new point is
inserted in the sorted list, when one is removed, or when two points are swapped in the sorted order.
The visitor concept is quite powerful, allowing us, for example, to implement a data structure for computing and storing two-dimensional arrangements of $x$-monotone curves on top of the
Kinetic::Sort<Traits, Visitor> data structure using about 60 lines of code. This sweepline code is presented in Section 53.4.4.
53.4.2 Creating Kinetic Primitives
One key part of the framework not shown is how to create kinetic primitives (rather than just reading them in from a file). There are two ways to construct the necessary motion functions (which
are models of Kinetic::FunctionKernel::Function). The first is to create an array of polynomial coefficients and simply call the constructor, as in:
typedef Traits::Kinetic_kernel::Motion_function F;
std::vector<F::NT> coefs;
coefs.push_back(F::NT(1.0)); // constant term
coefs.push_back(F::NT(2.0)); // coefficient of t
F x(coefs.begin(), coefs.end());
A slightly more flexible way is to use a Kinetic::FunctionKernel::ConstructFunction object. To do this do the following:
typedef Traits::Kinetic_kernel::Function_kernel::Construct_function CF;
typedef Traits::Kinetic_kernel::Motion_function F;
CF cf;
F x = cf(F::NT(1.0), F::NT(2.0));
The Kinetic::FunctionKernel::ConstructFunction can be passed (almost) any number of arguments and will construct a polynomial with those arguments as coefficients.
Once the motion functions are constructed, constructing the primitive is just like constructing the corresponding static object.
typedef Traits::Kinetic_kernel::Point_1 Point_1;
Point_1 p(x);
53.4.3 Visualization of Kinetic Data Structures
The framework includes Qt widgets for displaying kinetic data structures in two and three dimensions. The following example shows using the two dimensional widget with a Delaunay triangulation:
#include <CGAL/Kinetic/Exact_simulation_traits.h>
#include <CGAL/Kinetic/Delaunay_triangulation_2.h>
#include <CGAL/Kinetic/Enclosing_box_2.h>
#include <CGAL/Kinetic/IO/Qt_moving_points_2.h>
#include <CGAL/Kinetic/IO/Qt_triangulation_2.h>
#include <CGAL/Kinetic/IO/Qt_widget_2.h>
int main(int argc, char *argv[]) {
using namespace CGAL::Kinetic;
typedef Exact_simulation_traits Traits;
typedef Delaunay_triangulation_2<Traits> Del_2;
typedef Enclosing_box_2<Traits> Box_2;
typedef Qt_widget_2<Traits::Simulator> Qt_widget;
typedef Qt_moving_points_2<Traits, Qt_widget> Qt_mps;
typedef Qt_triangulation_2<Del_2, Qt_widget, Qt_mps> Qt_dt2;
// create a simulation traits and add two KDSs:
// a kinetic Delaunay triangulation and an enclosing box;
// the moving points bounce against the walls of the enclosing box
Traits tr;
Box_2::Handle box = new Box_2(tr);
Del_2::Handle kdel = new Del_2(tr);
// register the simulator, set of moving points and
// Delaunay triangulation with the kinetic Qt widget
Qt_widget::Handle qt_w = new Qt_widget(argc, argv, tr.simulator_handle());
Qt_mps::Handle qt_mps = new Qt_mps(qt_w, tr);
Qt_dt2::Handle qt_dt2 = new Qt_dt2(kdel, qt_w, qt_mps);
// read the trajectories of the moving points
// the simulation traits automatically inserts them in the two KDSs
// and schedules the appropriate kinetic events; as in the kinetic
// sorting example this is done with appropriate notifications
std::ifstream in("data/points_2");
in >> *tr.active_points_2_table_handle();
// run the interactive kinetic simulation
return qt_w->begin_event_loop();
}
The example shows how to use a number of additional features of the framework. First, it shows that two kinetic data structures (Kinetic::Delaunay_triangulation_2<Traits, Triangulation> and
Kinetic::Enclosing_box_2<Traits>) can coexist on the same set of points without any extra effort. Both interact with the moving points through the active objects table, and never need to directly
interact with one another. Second, objects (like qt_w, qt_mps and qt_dt2) are all stored by using reference counted handles (Object::Handle). This allows them to share references to one another
without the user having to worry about memory management and order of deletion. For example, the Kinetic::Qt_triangulation_2<KineticDelaunay_2, QtWidget_2, Qt_moving_points_2> object needs a handle
to the kinetic triangulation, in order to get the structure to display, and a handle to the Active_points_2_table to get the coordinates of the points.
Finally, the example shows how to use the graphical interface elements provided, see Figure 53.2. Our package includes Qt widgets for displaying kinetic geometry in two and three dimensions. In
addition to being able to play and pause the simulation, the user can step through events one at a time and reverse the simulation to retrace what had happened. The three-dimensional visualization
support is based on the Coin library http://www.coin3d.org.
Figure: Some events from a Delaunay triangulation kinetic data structure: The state of the two dimensional Delaunay triangulation immediately following the first events is shown. Green edges are ones
which were just created. The pictures are screen shots from demo/Kinetic_data_structures/Delaunay_triangulation_2.cpp.
Figure 53.2: The figure shows the graphical user interface for controlling two-dimensional kinetic data structures. It is built on top of the Qt_widget and adds buttons to play, pause, step through
and run the simulation backwards.
53.4.4 Extending Kinetic Data Structures
Here we present a simple example that uses the Kinetic::Sort<Traits, Visitor> kinetic data structure to compute an arrangement of algebraic functions. It wraps the sorting data structure and uses a
visitor to monitor changes and map them to corresponding features in the arrangement. To see an example using this kinetic data structure read the example at examples/Kinetic_data_structures/
First we define the visitor class. An object of this type is passed to the Kinetic::Sort<Traits, Visitor> data structure and turns events into calls on the arrangement structure. This class has to be
defined externally since the arrangement will inherit from the sorting structure.
template <class Arrangement>
struct Arrangement_visitor: public Kinetic::Sort_visitor_base
{
    Arrangement_visitor(Arrangement *a): p_(a) {}

    template <class Vertex_handle>
    void remove_vertex(Vertex_handle a) {
        p_->erase(a);
    }
    template <class Vertex_handle>
    void create_vertex(Vertex_handle a) {
        p_->insert(a);
    }
    template <class Vertex_handle>
    void after_swap(Vertex_handle a, Vertex_handle b) {
        p_->swap(a, b);
    }
    Arrangement *p_;
};
Now we define the actual arrangement data structure.
template <class TraitsT>
class Planar_arrangement:
    public Kinetic::Sort<TraitsT,
                         Arrangement_visitor<Planar_arrangement<TraitsT> > > {
    typedef TraitsT Traits;
    typedef Planar_arrangement<TraitsT> This;
    typedef typename Kinetic::Sort<TraitsT,
                                   Arrangement_visitor<This> > Sort;
    typedef Arrangement_visitor<This> Visitor;
    typedef typename Traits::Active_objects_table::Key Key;
public:
    typedef CGAL::Exact_predicates_inexact_constructions_kernel::Point_2 Approximate_point;
    typedef std::pair<int,int> Edge;
    typedef typename Sort::Vertex_handle Vertex_handle;

    // Register this KDS with the MovingObjectTable and the Simulator
    Planar_arrangement(Traits tr): Sort(tr, Visitor(this)) {}

    Approximate_point vertex(int i) const {
        return approx_coords_[i];
    }
    size_t vertices_size() const {
        return approx_coords_.size();
    }
    typedef std::vector<Edge>::const_iterator Edges_iterator;
    Edges_iterator edges_begin() const {
        return edges_.begin();
    }
    Edges_iterator edges_end() const {
        return edges_.end();
    }
    // called by the visitor when a point enters the sorted list
    void insert(Vertex_handle k) {
        last_points_[*k] = new_point(*k);
    }
    // called when two points swap: the crossing is a vertex of the arrangement
    void swap(Vertex_handle a, Vertex_handle b) {
        int swap_point = new_point(*a);
        edges_.push_back(Edge(swap_point, last_points_[*a]));
        edges_.push_back(Edge(swap_point, last_points_[*b]));
        last_points_[*a] = swap_point;
        last_points_[*b] = swap_point;
    }
    // called when a point leaves the sorted list: close its pending edge
    void erase(Vertex_handle a) {
        edges_.push_back(Edge(last_points_[*a], new_point(*a)));
    }
private:
    // record the (approximate) current position of the point with key k
    int new_point(typename Traits::Active_objects_table::Key k) {
        double tv = CGAL::to_double(Sort::traits().simulator_handle()->current_time());
        double dv = CGAL::to_double(Sort::traits().active_objects_table_handle()->at(k).x()(tv));
        approx_coords_.push_back(Approximate_point(tv, dv));
        return approx_coords_.size() - 1;
    }
    std::vector<Approximate_point> approx_coords_;
    std::map<Key, int> last_points_;
    std::vector<Edge> edges_;
};
Finally, we have to set everything up. To do this we use some special event classes: Kinetic::Insert_event<ActiveObjectsTable> and Kinetic::Erase_event<ActiveObjectsTable>. These are events which can
be put in the event queue which either insert a primitive into the set of active objects or remove it. Using these, we can allow curves in the arrangement to begin or end in arbitrary places.
typedef CGAL::Kinetic::Insert_event<Traits::Active_points_1_table> Insert_event;
typedef CGAL::Kinetic::Erase_event<Traits::Active_points_1_table> Erase_event;
do {
    NT begin, end;
    Point function;
    // initialize the function and the beginning and end somewhere

    // schedule an event which inserts the primitive when its curve begins;
    // an Erase_event can be scheduled at Time(end) in the same way
    tr.simulator_handle()->new_event(Time(begin),
        Insert_event(function, tr.active_points_1_table_handle()));
} while (true);
Reference Manual
Logs and Exponential Equations (examples, solutions, worksheets, videos, activities)
Examples, solutions, videos, activities and worksheets that are suitable for A Level Maths to learn how to solve Log and Exponential problems.
A-Level Maths Edexcel Core 3 - Past Paper Questions
Exponential and Log equations
1. Find the exact solutions of
(i) e^(2x+3) = 6
(ii) ln(3x+2) = 4
2. Find, giving your answer to 3 significant figures where appropriate, the value of x for which
(a) 3^x = 5
(b) log₂(2x + 1) - log₂ x = 2
(c) ln sin x = - ln sec x, in the interval 0 < x < 90°
3. Find the exact solutions to the equations
(a) ln x + ln 3 = ln 6
(b) e^x + 3e^(-x) = 4
Exponential Equation : C3 Edexcel January 2013 Q8
Maths Revision
The value of Bob’s car can be calculated from the formula
V = 17000e^(-t/4) + 2000e^(-t/2) + 500
where V is the value of the car in pounds (£) and t is the age in years.
(a) Find the value of the car when t = 0
(b) Calculate the exact value of t when V = 9500
(c) Find the rate at which the value of the car is decreasing at the instant when t = 8.
Give your answer in pounds per year to the nearest pound.
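The first two parts of the car-value question can be checked numerically. The snippet below assumes the model V(t) = 17000e^(-t/4) + 2000e^(-t/2) + 500; the exponents were garbled in the source, so treat that formula as an assumption (it is consistent with an exact answer for part (b)).

```python
import math

# Assumed (reconstructed) model for the value of the car after t years.
def V(t):
    return 17000 * math.exp(-t / 4) + 2000 * math.exp(-t / 2) + 500

print(V(0))              # 19500.0  -> part (a): the car is worth £19500 new
t = 4 * math.log(2)      # candidate exact answer for part (b)
print(round(V(t), 6))    # 9500.0   -> V = £9500 when t = 4 ln 2
```

With the substitution u = e^(-t/4), part (b) reduces to the quadratic 2000u² + 17000u - 9000 = 0, whose positive root u = 1/2 gives t = 4 ln 2.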
Coordinates of two adjacent vertices of a regular hexagon are A... | Filo
Question asked by Filo student
Coordinates of two adjacent vertices of a regular hexagon are and respectively. If the center of the hexagon is , then
d. Insufficient information
Updated On: Apr 20, 2024
Topic: Matrices and Determinant
Subject: Mathematics
Class: Class 11
PostgreSQL AVG Function
Summary: in this tutorial, you will learn how to use PostgreSQL AVG() function to calculate the average value of a set.
Introduction to PostgreSQL AVG() function
The AVG() function is one of the most commonly used aggregate functions in PostgreSQL. The AVG() function allows you to calculate the average value of a set.
Here is the syntax of the AVG() function:

AVG(column)
You can use the AVG() function in the SELECT and HAVING clauses.
To calculate the average value of distinct values in a set, you use the distinct option as follows:
AVG(DISTINCT column)
Notice that the AVG() function ignores NULL. If the column has no values, the AVG() function returns NULL.
PostgreSQL AVG() function examples
Let’s take a look at some examples of using the AVG function.
We will use the following payment table in the dvdrental sample database for demonstration:
1) Basic PostgreSQL AVG() function example
The following example uses the AVG() function to calculate the average amount that customers paid:
SELECT AVG(amount)
FROM payment;

        avg
--------------------
 4.2006056453822965
(1 row)
To make the output more readable, you can use the cast operator as follows:
SELECT AVG(amount)::numeric(10,2)
FROM payment;

 avg
------
 4.20
(1 row)
2) Using AVG() function with DISTINCT operator example
The following query returns the average payment made by customers. Because we use DISTINCT PostgreSQL takes unique amounts and calculates the average.
SELECT AVG(DISTINCT amount)::numeric(10,2)
FROM payment;
(1 row)
Notice that the result is different from the first example that does not use the DISTINCT option.
3) Using AVG() function with SUM() function example
The following query uses the AVG() function with the SUM() function to calculate the total payment made by customers and the average of all transactions:

SELECT
    AVG(amount)::numeric(10,2),
    SUM(amount)
FROM payment;

 avg  |   sum
------+----------
 4.20 | 61312.04
(1 row)
4) Using PostgreSQL AVG() function with GROUP BY clause
Typically, you use the AVG() function with the GROUP BY clause to calculate the average value of per group.
• First, the GROUP BY clause divides rows of the table into groups
• Then, the AVG() function calculates the average value per group.
The following example uses the AVG() function with the GROUP BY clause to calculate the average amount paid by each customer:

SELECT
    customer_id,
    first_name,
    last_name,
    AVG(amount)::NUMERIC(10,2)
FROM payment
INNER JOIN customer USING(customer_id)
GROUP BY customer_id
ORDER BY customer_id;
customer_id | first_name | last_name | avg
1 | Mary | Smith | 3.82
2 | Patricia | Johnson | 4.76
3 | Linda | Williams | 5.45
4 | Barbara | Jones | 3.72
In the query, we joined the payment table with the customer table using an inner join. We used the GROUP BY clause to divide the customers into groups and applied the AVG() function to calculate the average amount per group.
5) PostgreSQL AVG() function with HAVING clause example
You can use the AVG() function in the HAVING clause to filter groups based on a specified condition.
The following example uses the AVG() function to calculate the average payment of each customer and return only the ones who paid higher than 5 USD:
SELECT
    customer_id,
    first_name,
    last_name,
    AVG(amount)::NUMERIC(10,2)
FROM payment
INNER JOIN customer USING(customer_id)
GROUP BY customer_id
HAVING AVG(amount) > 5
ORDER BY customer_id;
customer_id | first_name | last_name | avg
3 | Linda | Williams | 5.45
19 | Ruth | Martinez | 5.49
137 | Rhonda | Kennedy | 5.04
181 | Ana | Bradley | 5.08
187 | Brittany | Riley | 5.62
209 | Tonya | Chapman | 5.09
259 | Lena | Jensen | 5.16
272 | Kay | Caldwell | 5.07
285 | Miriam | Mckinney | 5.12
293 | Mae | Fletcher | 5.13
310 | Daniel | Cabral | 5.30
311 | Paul | Trout | 5.39
321 | Kevin | Schuler | 5.52
470 | Gordon | Allard | 5.09
472 | Greg | Robins | 5.07
477 | Dan | Paine | 5.09
508 | Milton | Howland | 5.29
522 | Arnold | Havens | 5.05
542 | Lonnie | Tirado | 5.30
583 | Marshall | Thorn | 5.12
(20 rows)
This query is similar to the one above, with an additional HAVING clause. We used the AVG() function in the HAVING clause to filter out the groups whose average amount is less than or equal to 5.
6) Using PostgreSQL AVG() function and NULL
Let’s see the behavior of the AVG() function when its input has NULL.
First, create a table named t1.
CREATE TABLE t1 (
id serial PRIMARY KEY,
amount INTEGER
Second, insert some sample data:
INSERT INTO t1 (amount)
VALUES (10), (20), (30), (NULL);
The data of the t1 table is as follows:

 id | amount
----+--------
  1 |     10
  2 |     20
  3 |     30
  4 |   null
Third, use the AVG() function to calculate average values in the amount column.
SELECT AVG(amount)::numeric(10,2)
FROM t1;

  avg
-------
 20.00
(1 row)
It returns 20, meaning that the AVG() function ignores NULL values.
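The same NULL-handling can be reproduced outside PostgreSQL. The sketch below uses Python's built-in sqlite3 module, whose AVG() follows the same SQL semantics here (ignore NULL; return NULL for an empty set):

```python
import sqlite3

# In-memory database mirroring the t1 example above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, amount INTEGER)")
cur.executemany("INSERT INTO t1 (amount) VALUES (?)",
                [(10,), (20,), (30,), (None,)])

avg, = cur.execute("SELECT AVG(amount) FROM t1").fetchone()
print(avg)        # 20.0 -- the NULL row is ignored

empty_avg, = cur.execute(
    "SELECT AVG(amount) FROM t1 WHERE amount > 100").fetchone()
print(empty_avg)  # None -- AVG over an empty set is NULL
```

Note this is SQLite, not PostgreSQL; the cast to numeric(10,2) shown above is PostgreSQL-specific and is omitted here.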
• Use PostgreSQL AVG() function to calculate the average value of a set.
• The AVG() function ignores NULL in the calculation.
• The AVG() function returns NULL if the set is empty. | {"url":"https://neon.tech/postgresql/postgresql-aggregate-functions/postgresql-avg-function","timestamp":"2024-11-10T02:37:12Z","content_type":"text/html","content_length":"376234","record_id":"<urn:uuid:eb24efcc-7b7b-4522-a436-b6220566f17b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00571.warc.gz"} |
HeLp PlS
Alice and Bob each have a certain amount of money. If Alice receives dollars from Bob, then she will have times as much money as Bob. If, on the other hand, she gives dollars to Bob, then she will
have times as much money as Bob. If neither gives the other any money, what is the ratio of the amount of money Alice has to the amount Bob has?
The answer is not 17/7 i tried
Anyways thanks for your help!
omgitsme Nov 2, 2023
Let Alice's initial amount of money be a and Bob's initial amount of money be b.
If Alice receives x dollars from Bob, then she will have a+x dollars and Bob will have b−x dollars. Therefore, a+x is 2×(b−x), so 3x=2b−a.
If Alice gives y dollars to Bob, then she will have a−y dollars and Bob will have b+y dollars. Therefore, a−y is 3×(b+y), so 4y=3a−3b.
Subtracting the first equation from the second equation, we get y=a−2b. Substituting this into the first equation, we get 3(a−2b)=2b−a, so 5a=7b.
Therefore, the ratio of Alice's amount of money to Bob's amount of money is 7:5.
parmen Nov 2, 2023
Mutual Inductance and Dot Convention Simple Definition | Wira Electrical
Mutual inductance and the dot convention are used when we analyze a circuit that has multiple coils in close proximity, such as a transformer.
Before we get to the linear transformer, we need some important background:
The circuits we have considered so far may be regarded as conductively coupled because one loop affects the neighbouring loop through current conduction.
When two loops with or without contacts between them affect each other through the magnetic field generated by one of them, they are said to be magnetically coupled.
The transformer is an electrical device designed on the basis of the concept of magnetic coupling. It uses magnetically coupled coils to transfer energy from one circuit to another.
Transformers are key circuit elements. They are used in power systems for stepping up or stepping down ac voltages or currents.
They are used in electronic circuits such as radio and television receivers for such purposes as impedance matching, isolating one part of a circuit from another, and again for stepping up or down ac
voltages and currents.
We will begin with the concept of mutual inductance and introduce the dot convention used for determining the voltage polarities of inductively coupled components.
Based on the notion of mutual inductance, we then introduce the circuit element known as the transformer.
We will consider the linear transformer, the ideal transformer, the ideal autotransformer, and the three-phase transformer.
Finally, among their important applications, we look at transformers as isolating and matching devices and their use in power distribution.
Mutual Inductance and Dot Convention
When two inductors (or coils) are in close proximity to each other, the magnetic flux caused by the current in one coil links with the other coil, thereby inducing a voltage in the latter.
This phenomenon is known as mutual inductance.
Let us first consider a single inductor, a coil with N turns. When current i flows through the coil, a magnetic flux ϕ is produced around it (Figure 1).
Figure 1. Magnetic flux produced by a single coil with N turns
According to Faraday’s law, the voltage v induced in the coil is proportional to the number of turns N and the time rate of change of the magnetic flux ϕ; that is,
v = N (dϕ/dt) (1)
But the flux ϕ is produced by current i so that any change in ϕ is caused by a change in the current.
Hence, Equation (1) can be written as
v = N (dϕ/di)(di/dt) (2)
or
v = L (di/dt) (3)
which is the voltage-current relationship for the inductor. From Equations (2) and (3), the inductance L of the inductor is thus given by
L = N (dϕ/di) (4)
This inductance is commonly called self-inductance because it relates the voltage induced in a coil by a time-varying current in the same coil.
Now consider two coils with self-inductances L[1] and L[2] that are in close proximity with each other (Figure 2).
Coil 1 has N[1] turns, while coil 2 has N[2] turns.
Figure 2. Mutual inductance M[21] of coil 2 with respect to coil 1
For the sake of simplicity, assume that the second inductor carries no current.
The magnetic flux ϕ[1] emanating from coil 1 has two components: one component ϕ[11] links only coil 1, and another component ϕ[12] links both coils.
ϕ[1] = ϕ[11] + ϕ[12] (5)
Although the two coils are physically separated, they are said to be magnetically coupled.
Since the entire flux ϕ[1] links coil 1, the voltage induced in coil 1 is
v[1] = N[1] (dϕ[1]/dt) (6)
Only flux ϕ[12] links coil 2, so the voltage induced in coil 2 is
v[2] = N[2] (dϕ[12]/dt) (7)
Again, as the fluxes are caused by the current i[1] flowing in coil 1, Equation (6) can be written as
v[1] = N[1] (dϕ[1]/di[1])(di[1]/dt) = L[1] (di[1]/dt) (8)
where L[1] = N[1] dϕ[1]/di[1] is the self-inductance of coil 1. Similarly, Equation (7) can be written as
v[2] = N[2] (dϕ[12]/di[1])(di[1]/dt) = M[21] (di[1]/dt) (9)
where
M[21] = N[2] (dϕ[12]/di[1]) (10)
M[21] is known as the mutual inductance of coil 2 with respect to coil 1.
Subscript 21 indicates that the inductance M[21] relates the voltage induced in coil 2 to the current in coil 1.
Thus, the open-circuit mutual voltage (or induced voltage) across coil 2 is
Suppose we now let current i[2] flow in coil 2, while coil 1 carries no current in Figure.(3).
Figure 3. Mutual inductance M[12] of coil 1 with respect to coil 2
The magnetic flux ϕ[2] emanating from coil 2 comprises flux ϕ[22] that links only coil 2 and flux ϕ[21] that links both coils.
The entire flux ϕ[2] links coil 2, so the voltage induced in coil 2 is
where L[2] = N[2] dϕ[2]/di[2] is the self-inductance of coil 2.
Since only flux ϕ[21] links coil 1, the voltage induced in coil 1 is
which is the mutual inductance of coil 1 with respect to coil 2. Thus, the open-circuit mutual voltage across coil 1 is
We will see in the next section that M[12] and M[21] are equal, that is
and we refer to M as the mutual inductance between the two coils. Like self-inductance L, mutual inductance M is measured in henrys (H).
Keep in mind that mutual coupling only exists when the inductors or coils are in close proximity, and the circuits are driven by time-varying sources.
We recall that inductors act like short circuits to dc.
From the two cases in Figures.(2) and (3), we conclude that mutual inductance results if a voltage is induced by a time-varying current in another circuit.
It is the property of an inductor to produce a voltage in reaction to a time-varying current in another inductor near it. Thus,
Mutual inductance is the ability of one inductor to induce a voltage across a neighbouring inductor, measured in henrys (H).
Although mutual inductance M is always a positive quantity, the mutual voltage M di/dt may be negative or positive, just like the self-induced voltage L di/dt.
However, unlike the self-induced L di/dt, whose polarity is determined by the reference direction of the current and the reference polarity of the voltage (according to the passive sign convention),
the polarity of mutual voltage M di/dt is not easy to determine, because four terminals are involved.
The choice of the correct polarity for M di/dt is made by examining the orientation or particular way in which both coils are physically wound and applying Lenz’s law in conjunction with the
right-hand rule.
Since it is inconvenient to show the construction details of coils on a circuit schematic, we apply the dot convention in circuit analysis.
By this convention, a dot is placed in the circuit at one end of each of the two magnetically coupled coils to indicate the direction of the magnetic flux if current enters that dotted terminal of
the coil.
This is illustrated in Figure.(4). Given a circuit, the dots are already placed beside the coils so that we need not bother about how to place them.
The dots are used along with the dot convention to determine the polarity of the mutual voltage. The dot convention is stated as follows:
Figure 4. Illustration of the dot convention
If a current enters the dotted terminal of one coil, the reference polarity of the mutual voltage in the second coil is positive at the dotted terminal of the second coil.
If a current leaves the dotted terminal of one coil, the reference polarity of the mutual voltage in the second coil is negative at the dotted terminal of the second coil.
Thus, the reference polarity of the mutual voltage depends on the reference direction of the inducing current and the dots on the coupled coils.
Application of the dot convention is illustrated in the four pairs of mutually coupled coils in Figure.(5).
For the coupled coils in Figure.(5a), the sign of the mutual voltage v[2] is determined by the reference polarity for v[2] and the direction of i[1] .
Since i[1] enters the dotted terminal of coil 1 and v[2] is positive at the dotted terminal of coil 2, the mutual voltage is +M di[1]/dt.
For the coils in Figure.(5b), the current i[1] enters the dotted terminal of coil 1 and v[2] is negative at the dotted terminal of coil 2.
Figure 5. Examples illustrating how to apply the dot convention.
Hence, the mutual voltage is −M di[1]/dt. The same reasoning applies to the coils in Figure.(5c) and (5d).
Figure.(6) shows the dot convention for coupled coils in series. For the coils in Figure.(6a), the total inductance is
For the coils in Figure.(6b),
Now that we know how to determine the polarity of the mutual voltage, we are prepared to analyze circuits involving mutual inductance.
Figure 6. Dot convention for coils in series; the sign indicates the polarity of the mutual voltage: (a) series-aiding connection, (b) series-opposing connection.
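The series totals referenced for Figure.(6) are the standard results for coupled coils in series (reconstructed here, since the original equation images are missing):

```latex
L = L_1 + L_2 + 2M   % series-aiding connection, Figure.(6a)
L = L_1 + L_2 - 2M   % series-opposing connection, Figure.(6b)
```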
As the first example, consider the circuit in Figure.(7).
Figure 7. Time-domain analysis of a circuit containing coupled coils.
Applying KVL to coil 1 gives
For coil 2, KVL gives
We can write Equation.(20) in the frequency domain as
As a second example, consider the circuit in Figure.(8).
Figure 8. Frequency-domain analysis of a circuit containing coupled coils.
We analyze this in the frequency domain. Applying KVL to coil 1, we get
For coil 2, KVL yields
Equations.(21) and (22) are solved in the usual manner to determine the currents.
At this introductory level, we are not concerned with the determination of the mutual inductances of the coils and their dot placements.
Like R, L, and C, calculation of M would involve applying the theory of electromagnetics to the actual physical properties of the coils.
In this text, we assume that the mutual inductance and the placement of the dot are the “givens” of the circuit problem, like the circuit components R, L, and C.
Read also : parts of dc motor
Mutual Inductance Examples
For better understanding let us review the examples below.
1. Calculate the phasor currents I1 and I2 in the circuit of Figure.(9).
For coil 1, KVL gives
For coil 2, KVL gives
Substituting this in Equation.(1.1), we get
From Equations.(1.2) and (1.3),
2. Calculate the mesh currents in the circuit of Figure.(10).
The key to analyzing a magnetically coupled circuit is knowing the polarity of the mutual voltage.
We need to apply the dot rule. In Figure.(10), suppose coil 1 is the one whose reactance is 6 Ω, and coil 2 is the one whose reactance is 8 Ω.
To figure out the polarity of the mutual voltage in coil 1 due to current I[2], we observe that I[2] leaves the dotted terminal of coil 2.
Since we are applying KVL in the clockwise direction, it implies that the mutual voltage is negative, that is, −j2I[2].
Alternatively, it might be best to figure out the mutual voltage by redrawing the relevant portion of the circuit, as shown in Figure.(11a), where it becomes clear that the mutual voltage is V[2] =
Thus, for mesh 1 in Figure.(10), KVL gives
Similarly, to figure out the mutual voltage in coil 2 due to current I[1] , consider the relevant portion of the circuit, as shown in Figure.(11b).
Applying the dot convention gives the mutual voltage as V[2] = −jI[1]. Also, current I[2] sees the two coupled coils in series in Figure.(10); since it leaves the dotted terminals in both coils, Equation.(8) applies.
Therefore, for mesh 2, KVL gives
Putting Equations.(2.1) and (2.2) in matrix form, we get
The determinants are
Thus, we obtain the mesh currents as
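The final matrix step, solving two complex mesh equations, can be sketched with NumPy. The impedance values below are illustrative placeholders, not the values from Figure.(10):

```python
import numpy as np

# Illustrative mesh-impedance matrix and source vector (placeholder values,
# NOT the ones from Figure.(10)): diagonal terms are the self-impedances of
# each mesh; off-diagonal terms collect shared elements and the mutual term.
Z = np.array([[4 + 6j, -1 - 2j],
              [-1 - 2j, 3 + 8j]])
V = np.array([10 + 0j, 0 + 0j])

I = np.linalg.solve(Z, V)   # mesh currents I1, I2
print(np.round(I, 4))
```

Writing the two KVL equations in the matrix form Z·I = V, as the text does, turns "solved in the usual manner" into a single linear solve.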
Estimate the Transfer Function of an Unknown System
You can estimate the transfer function of an unknown system based on the system's measured input and output data.
In DSP System Toolbox™, you can estimate the transfer function of a system using the dsp.TransferFunctionEstimator System object™ in MATLAB® and the Discrete Transfer Function Estimator block in Simulink®. The relationship between the input x and output y is modeled by the linear, time-invariant transfer function T[xy]. The transfer function is the ratio of the cross power spectral density of x and y, P[yx], to the power spectral density of x, P[xx]:
The dsp.TransferFunctionEstimator object and the Discrete Transfer Function Estimator block use Welch’s averaged periodogram method to compute P[xx] and P[xy]. For more details on this method, see Spectral Analysis.
The coherence, or magnitude-squared coherence, between x and y is defined as:
The coherence function estimates the extent to which you can predict y from x. The value of the coherence is in the range 0 ≤ C[xy](f) ≤ 1. If C[xy] = 0, the input x and output y are unrelated. A C[xy] value greater than 0 and less than 1 indicates one of the following:
• Measurements are noisy.
• The system is nonlinear.
• Output y is a function of x and other inputs.
The coherence of a linear system represents the fractional part of the output signal power that is produced by the input at that frequency. For a particular frequency, 1 – C[xy] is an estimate of the
fractional power of the output that the input does not contribute to.
When you set the OutputCoherence property of dsp.TransferFunctionEstimator to true, the object computes the output coherence. In the Discrete Transfer Function Estimator block, to compute the
coherence spectrum, select the Output magnitude squared coherence estimate check box.
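The same Welch-based estimate can be reproduced outside MATLAB. The sketch below uses SciPy (an assumption of this example, not part of the original doc): a known Butterworth filter stands in for the unknown system, H is estimated as P[xy]/P[xx], and the coherence C[xy] = |P[xy]|²/(P[xx]·P[yy]) is computed alongside:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

# "Unknown" system: a 4th-order Butterworth lowpass stands in for the
# two-stage filter in the text; any LTI system works the same way.
b, a = signal.butter(4, 0.2)

x = rng.standard_normal(1 << 16)   # white-noise input
y = signal.lfilter(b, a, x)        # measured output

# Welch estimates of Pxx and Pxy; the transfer function estimate is their ratio.
f, Pxx = signal.welch(x, nperseg=4096)
_, Pxy = signal.csd(x, y, nperseg=4096)
H = Pxy / Pxx

# Magnitude-squared coherence Cxy = |Pxy|^2 / (Pxx * Pyy).
_, Cxy = signal.coherence(x, y, nperseg=4096)

# Compare with the filter's exact frequency response at the same frequencies.
_, H_true = signal.freqz(b, a, worN=f, fs=1.0)
print(np.max(np.abs(np.abs(H) - np.abs(H_true))))
```

Because the output here is a noiseless, linear function of the input, the estimated coherence stays close to 1 across the passband, as the text predicts.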
Estimate the Transfer Function in MATLAB
To estimate the transfer function of a system in MATLAB®, use the dsp.TransferFunctionEstimator System object™. The object implements the Welch's average modified periodogram method and uses the
measured input and output data for estimation.
Initialize the System
The system is a cascade of two filter stages: dsp.LowpassFilter and a parallel connection of dsp.AllpassFilter and dsp.AllpoleFilter.
allpole = dsp.AllpoleFilter;
allpass = dsp.AllpassFilter;
lpfilter = dsp.LowpassFilter;
Specify Signal Source
The input to the system is a sine wave with a frequency of 100 Hz. The sampling frequency is 44.1 kHz.
sine = dsp.SineWave(Frequency=100,SampleRate=44100,...
Create Transfer Function Estimator
To estimate the transfer function of the system, create the dsp.TransferFunctionEstimator System object.
tfe = dsp.TransferFunctionEstimator(FrequencyRange='onesided',...
Create Array Plot
Initialize a dsp.ArrayPlot object. Configure the scope to show two displays by setting NumInputPorts to 2 and LayoutDimensions to [1 2]. The first display shows the magnitude response of the system
and the second display shows the coherence estimate between the input and the output of the system.
By default, the x-axis of the array plot is in samples. To convert this axis into frequency, set the SampleIncrement property of the dsp.ArrayPlot object to Fs/1024. In this example, this value is
44100/1024, or 43.0664. For a two-sided spectrum, the XOffset property of the dsp.ArrayPlot object must be [-Fs/2]. The frequency varies in the range [-Fs/2 Fs/2]. In this example, the array plot
shows a one-sided spectrum. Hence, set the XOffset to 0. The frequency varies in the range [0 Fs/2].
plotter = dsp.ArrayPlot(PlotType='Line',...
plotter.LayoutDimensions = [1 2];
plotter.ActiveDisplay = 1;
plotter.XLabel='Frequency (Hz)';
plotter.YLabel = 'Magnitude Response (dB)';
plotter.YLimits = [-120 20];
plotter.Title = 'System Transfer Function';
plotter.ActiveDisplay = 2;
plotter.XLabel='Frequency (Hz)';
plotter.YLabel = 'Coherence';
plotter.YLimits = [0 1.2];
plotter.Title = 'Coherence Estimate';
Estimate the Transfer Function
The transfer function estimator accepts two signals: input to the two-stage filter and output of the two-stage filter. The input to the filter is a sine wave containing additive white Gaussian noise.
The noise has a mean of zero and a standard deviation of 0.1. The estimator estimates the transfer function of the two-stage filter. The output of the estimator is the frequency response of the
filter, which is complex. To extract the magnitude portion of this complex estimate, use the abs function. To convert the result into dB, apply a conversion factor of 20*log10(magnitude).
for Iter = 1:1000
    input = sine() + .1*randn(1024,1);
    lpfout = lpfilter(input);
    allpoleout = allpole(lpfout);
    allpassout = allpass(lpfout);
    output = allpoleout + allpassout;
    [tfeoutput,outputcoh] = tfe(input,output);
end
The first plot shows the magnitude response of the system. The second plot shows the coherence estimate between the input and output of the system. Coherence in the plot varies in the range [0 1], as expected.
Magnitude Response of the Filter Using freqz
The filter is a cascade of two filter stages - dsp.LowpassFilter and a parallel connection of dsp.AllpassFilter and dsp.AllpoleFilter. All the filter objects are used in their default state. Using
the filter coefficients, derive the system transfer function and plot the frequency response using freqz. Below are the coefficients in the [Num] [Den] format:
• All pole filter - [1 0] [1 0.1]
• All pass filter - [0.5 -1/sqrt(2) 1] [1 -1/sqrt(2) 0.5]
• Lowpass filter - Determine the coefficients using the following commands:
lpf = dsp.LowpassFilter;
Coefficients = coeffs(lpf);
Coefficients.Numerator gives the coefficients in an array format. The mathematical derivation of the overall system transfer function is not shown here. Once you derive the transfer function, run
freqz and you can see the frequency response below:
The magnitude response that freqz shows matches the magnitude response that the dsp.TransferFunctionEstimator object estimates.
Estimate Transfer Function in Simulink
To estimate the transfer function of a system in Simulink®, use the Discrete Transfer Function Estimator block. The block implements the Welch's average modified periodogram method and uses the
measured input and output data for estimation.
The system is a cascade of two filter stages: a lowpass filter and a parallel connection of an allpole filter and allpass filter. The input to the system is a sine wave containing additive white
Gaussian noise. The noise has a mean of zero and a standard deviation of 0.1. The input to the estimator is the system input and the system output. The output of the estimator is the frequency
response of the system, which is complex. To extract the magnitude portion of this complex estimate, use the Abs block. To convert the result into dB, the system uses a dB (1 ohm) block.
Open and Inspect the Model
Open the ex_transfer_function_estimator model. The input is a noisy sinusoidal signal with a frequency of 100 Hz. The input noise is white Gaussian with a mean of 0 and a variance of 0.01. The
Discrete Transfer Function Estimator block estimates the transfer function of the filter from the data input and the filtered output. The first display in the Array Plot block shows the magnitude
response of the system and the second display in the Array Plot block shows the coherence estimate.
By default, the x -axis of the array plot is in samples. To convert this axis into frequency, the Sample Increment parameter is set to Fs/1024. In this example, this value is 44100/1024, or 43.0664.
For a two-sided spectrum, the X-Offset parameter must be -Fs/2. The frequency varies in the range [-Fs/2 Fs/2]. In this example, the array plot shows a one-sided spectrum. Hence, the X-Offset is set
to 0. The frequency varies in the range [0 Fs/2].
Run the Model
Run the model. The first display shows the magnitude response of the system. The second display shows the coherence estimate between the input and output of the system. Coherence in the plot varies
in the range [0 1] as expected.
Introduction Before the Renaissance, mathematics was divided into two main areas: arithmetic, regarding the manipulation of numbers, and geometry, regarding the study of shapes. Some types
of pseudoscience, such as numerology and astrology, were not then clearly distinguished from mathematics. During the Renaissance, two more areas appeared. Mathematical notation led to algebra which,
roughly speaking, consists of the study and the manipulation of formulas. Calculus, consisting of the two […]
Areas of mathematics: what are they?
Mathematics: what it is, the discipline that studies
Introduction Mathematics (from the Greek μάθημα (máthema), translatable as “science”, “knowledge” or “learning”; μαθηματικός (mathematikós) means “inclined to learn”) is the discipline that studies quantities, numbers, space, structures and calculations: rigorously defined abstract entities and the relations between them. The term mathematics usually refers to the discipline (and the related body of knowledge) that studies
problems concerning quantities, extensions and spatial figures, movements of bodies, and all
Biology: what it is, the science of life
Biology is the science that studies life, or the physical and chemical processes of the phenomena that characterize living systems, including their biochemistry, molecular mechanisms, genetics,
anatomy, physiology, as well as emergent processes such as adaptation, development, evolution, interaction between organisms, and behavior. It is a natural science with a broad scope but has several
Astronomy: what it is, celestial events
Astronomy is the natural science that deals with the observation and explanation of celestial events that occur in space. It studies the origins, evolution, and the physical, chemical and temporal properties of the objects that form the universe and that can be observed on the celestial sphere. It is one of the oldest sciences and many archaic
Human anatomy: what it is, the morphology
Human anatomy is primarily the scientific study of the morphology of the adult human body. It is divided into gross anatomy and microscopic anatomy. Gross anatomy (also called anthropotomy) is the
study of anatomical structures that can be seen without the aid of a microscope. Microscopic anatomy is the study of minute anatomical structures assisted
Using Partial Functions and Lambda Expressions
Python offers lambda expressions, which are a concise and flexible alternative to functools.partial. Lambdas allow you to create anonymous functions inline, often resulting in clearer and
easier-to-maintain code. Although functools.partial offers a concise syntax for creating callable objects that bind specific arguments of existing functions, lambda expressions can be even more
flexible and intuitive.
Recreate the add_five function object using a lambda expression:
def add(a, b):
    return a + b

# Using a lambda expression to create a new function that always adds 5
add_five_lambda = lambda a: add(a, 5)
print(f"3 + 5 using lambda = {add_five_lambda(3)}")  # Output: 3 + 5 using lambda = 8
The lambda lambda a: add(a, 5) creates an anonymous function that takes one integer a and adds 5.
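For comparison, here is the same bound function built with functools.partial, the alternative the lesson mentions:

```python
from functools import partial

def add(a, b):
    return a + b

# partial binds b=5 once; the result is a callable of the remaining argument.
add_five_partial = partial(add, b=5)
print(add_five_partial(3))  # -> 8
```

Both forms produce a one-argument callable; the lambda spells out the call explicitly, while partial records the bound argument on the resulting object (add_five_partial.keywords is {'b': 5}), which can make debugging easier.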
Yield Curve
The yield curve is a plot of the yields of debt instruments of varying maturities, starting with short-term, then mid-term, and lastly long-term debt.
Yield Spread
The yield spread is the difference in the yield that two distinct types of issuers must pay when selling debt of the same maturity. The yield spread is also referred to as the credit spread.
Commonly the yield spread looks at the yields on corporate debt versus U.S. government debt, of the same maturity.
Yield to Call (YTC)
When an investor purchases a callable bond in the secondary market that is trading at a premium, the most important yield to consider is the yield to call (in this situation it is also called the yield to worst).
Yield to Maturity (YTM)
A bond’s yield to maturity takes into account the discount or premium paid for the bond and averages the gain (or loss) with the stated interest payment to calculate a yield over a period of
time. When a bond is purchased in the secondary market at a discount, the most important yield for the investor to consider is the yield to maturity. A bond’s yield to maturity is also known as
its internal rate of return.
Zero Coupon Bond
A zero coupon bond is a bond in which a broker-dealer has separated the interest payments from the principal amount. The zero coupon bond represents the principal only. Zero coupon bonds are sold
at a discount. They are also called STRIPS (separate trading of registered interest and principal). Zero coupon bonds have the highest duration since the investor receives no interest payments.
The interest that accrues on a zero coupon bond is taxed each year, making them fairly unpopular investments.
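As a numeric illustration (not from the flashcards themselves), the deep discount on a zero coupon bond follows directly from present-value arithmetic:

```python
# Price of a zero coupon bond: the face value discounted at yield y
# for n periods (annual compounding assumed for simplicity).
def zero_coupon_price(face, y, n):
    return face / (1 + y) ** n

# A $1000 face-value zero maturing in 10 years, at a 5% yield:
price = zero_coupon_price(1000, 0.05, 10)
print(round(price, 2))  # -> 613.91
```

The investor pays roughly $614 today and receives $1000 at maturity; the entire return is the accreted discount, which is why the taxation of that accrual each year makes zeros unpopular outside tax-advantaged accounts.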
Mixed number to a decimal
Related topics: graphing ordered pairs powerpoint
least common multiple test
how to solve math combination problems
imperfect squares
list of square and cube roots
radical adding calculator
property of inequality
solve ordered pairs calculator
Hilsdj1 (Reg.: 30.08.2005), posted Wednesday 03rd of Jan 13:29:
Hello everyone. I am badly in need of some assistance. My mixed number to a decimal homework has started to get on my nerves. The classes move so fast that I hardly ever get a chance to clarify my doubts. Is there any tool that can help me cope with this homework mania?

espinxh (Reg.: 17.03.2002), posted Friday 05th of Jan 09:47:
I understand what your problem is, but if you could explain in greater detail the areas in which you are struggling, then I might be in a better position to guide you. Anyhow, I have a suggestion for you: try Algebrator. It can solve a wide range of questions, and it can do so within minutes. And there's more: it also gives a detailed step-by-step description of how it arrived at a particular solution. That way you don't just find a solution to your problem but also get to understand how to go about solving it. I found this software to be particularly useful for solving questions on mixed number to a decimal. But that's just my experience; I'm sure it'll be good no matter what the topic is.

Voumdaim of Obpnis (Reg.: 11.06.2004), posted Saturday 06th of Jan 08:54:
I used Algebrator too, especially in Intermediate algebra. It helped me a lot, and you won't believe how simple it is to use! It solves the exercise and it also describes everything step by step. Better than a teacher!

denbered007 (Reg.: 27.06.2004), posted Saturday 06th of Jan 21:18:
https://softmath.com/news.html and https://softmath.com/ordering-algebra.html are a couple of good resources that offer the Algebrator. But before making the purchase, understand what it offers and how it is unique by reading the reviews online. From my personal experience, I can tell that you can start using Algebrator right away without any help, since the tool is absolutely user friendly and very much self-informative.

Voumdaim of Obpnis (Reg.: 11.06.2004), posted Monday 08th of Jan 15:06:
You will get the details here: https://softmath.com/news.html. They also claim to provide an unconditional money back guarantee, so you have nothing to lose. Try this and good luck!

Matdhejs (Reg.: 08.12.2001), posted Wednesday 10th of Jan 10:26:
An extraordinary piece of algebra software is Algebrator. Even I faced similar difficulties while solving binomials, side-side-side similarity and least common denominator. Just by typing in the problem from the workbook and clicking on Solve, a step-by-step solution to my algebra homework would be ready. I have used it through several algebra classes - Algebra 1, Remedial Algebra and Algebra 2. I highly recommend the program.
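The underlying math question, converting a mixed number to a decimal, takes only a couple of lines in Python (shown here as a worked illustration, not part of the original thread):

```python
from fractions import Fraction

# A mixed number such as 3 1/2 is its whole part plus its fractional part.
whole, frac = 3, Fraction(1, 2)
decimal_value = whole + float(frac)
print(decimal_value)  # -> 3.5
```

Fraction keeps the fractional part exact until the final float conversion, which is where a repeating decimal such as 1/3 would be truncated to machine precision.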
Functional Graph Library
FGL - A Functional Graph Library
The functional graph library provides a collection of graph operations to be used in functional languages, such as ML or Haskell.
The library is based on the idea of inductive graphs. An inductive view of graphs is given by the following description: a graph is either empty, or it is extended by a new node together with edges
from its predecessors and to its successors. This idea is explained in the paper Inductive Graphs and Functional Graph Algorithms. The library is in an intermediate stage and is available in two versions:
• Standard ML (1997 Standard). The focus is on providing a variety of modules containing alternative implementations of functional graphs such that for a specific application the most efficient
implementation can be chosen.
Go to FGL/ML.
• Haskell (1998 Standard). This is the second, but still preliminary version. In particular, currently only the binary tree implementation of functional graphs is provided (all the advanced
implementations of the ML version make use of imperatively updatable arrays).
Go to FGL/Haskell.
New version available!
Engineering Mathematics GATE-2010 - Insight into Chemical Engineering
Q 1: The inverse of the matrix $\begin{bmatrix}1&2\\3&4\end{bmatrix}$ is
Q 2: The Laplace transform of the function shown in the figure below is
Q 3: The Maxwell-Boltzmann velocity distribution for the x-component of the velocity at temperature T, is
f\left(v_x\right)=\sqrt{\frac{m}{2\pi kT}}\,\exp\left(-\frac{m v_x^2}{2kT}\right)
The standard deviation of the distribution is
Q 4: Given that $i=\sqrt{-1},\;i^i$ is equal to
Q 5: A root of the equation x^4 – 3x + 1 = 0 needs to be found using the Newton-Raphson method. If the initial guess, x[0], is taken as 0, then the new estimate x[1], after the first iteration is
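The Newton step in Q5 can be checked in a few lines (f and its derivative are as stated in the question):

```python
def f(x):
    return x**4 - 3*x + 1

def fprime(x):
    return 4*x**3 - 3

x0 = 0.0
x1 = x0 - f(x0) / fprime(x0)   # Newton-Raphson update
print(x1)  # -> 0.3333333333333333 (i.e. 1/3)
```

With x0 = 0, f(0) = 1 and f'(0) = -3, so the update gives x1 = 1/3.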
Q 6: The solution of the differential equation
with the initial conditions $y(0)=0,\;{\left.\frac{dy}{dt}\right|}_{t=0}=-1$, is
Q 7: If $\overrightarrow u=y\widehat i+xy\widehat j$ and $\overrightarrow v=x^2\widehat i+xy^2\widehat j$, then $curl\left(\overrightarrow u\times\overrightarrow v\right)$ is
Q 8: X and Y are independent random variables. X follows a binomial distribution, with N = 5 and p = ½. Y takes integer values 1 and 2, with equal probability. Then the probability that X = Y is
Q 9: A box contains three red and two black balls. Four balls are removed from the box one by one without replacement. The probability of the ball remaining in the box being red is
Q 10: For a function g(x), if g(0) = 0 and g’(0) = 2, then $\lim\limits_{x \to 0}\int_0^{g\left(x\right)}\frac{2t}x\operatorname dt$ is equal to
A tutorial on estimating the impedance of a toroidal ferrite cored inductor for radio frequencies
This article is a walk-through of a process for designing a toroidal ferrite cored inductor for radio frequencies.
Designing with magnetics can be a complicated process, and it starts with using reliable data and reliable relationships, algorithms, and tools.
Be very wary of:
• published data, especially on seller’s websites, they are often contain significant errors;
• application specific calculators, most are not suitable for ferrite cored inductors at RF; and
• bait and switch where the seller pretends to sell brand name product, but ships a substitute that may or may not comply with specifications.
One reputable manufacturer of a wide range of ferrite cores is Fair-rite. Lets use their databook as an example for design data.
The challenge
A ferrite cored toroidal inductor has important characteristics that make design a challenge:
1. ferrite permeability is a complex value that is frequency dependent; and
2. the ‘inductor’ is more completely a resonator.
(1) is dealt with by using the correct complex permeability in calculations.
(2) has little effect below about one tenth of the lowest self resonance frequency, and up to about half that first self resonant frequency it can be modelled reasonably well by a small equivalent shunt capacitance.
Lets work through two different formats of specification data: the first is common for ‘ordinary’ toroids, the second for ‘suppression sleeves’.
1. ‘ordinary’ toroids
Lets look at the entry for a 5943003821 which is known commonly in ham circles as a FT240-43. Here is a clip from Fair-rite’s catalogue 17th Ed.
Lets find the impedance of a 3t winding on this core at 3.6MHz, firstly ignoring self resonance.
Al adjusted for µ’ and µ”
Lets use Calculate ferrite cored inductor (from Al) .
Σl/A and µ’ and µ”
From the datasheet, Σl/A is 920/m (multiply the /cm value by 100 to convert).
Lets use Calculate ferrite cored inductor – ΣA/l or Σl/A .
The results reconcile well with the previous case.
Physical dimensions and µ’ and µ”
From the datasheet, dimensions are 62.8×34.2×13.7mm.
Lets use Calculate ferrite cored inductor – rectangular cross section .
The result is close to the previous cases, but a tiny bit higher as this model assumes sharp edges on the toroid whereas they are chamfered and that slightly reduces the cross section area. The error
is small in terms of the specified tolerance of the cores, so it is inconsequential.
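The calculation the calculators above perform can be sketched directly from the underlying relation Z = jωn²µ0(µ′ − jµ″)(ΣA/l). The µ′ and µ″ values below are placeholders, not Fair-rite datasheet figures; read the actual complex permeability from the manufacturer's curves at your frequency:

```python
import math

MU0 = 4e-7 * math.pi   # permeability of free space (H/m)

def toroid_impedance(n, f, mu_p, mu_pp, sigma_A_over_l):
    """Impedance of an n-turn winding on a ferrite toroid, ignoring
    self-resonance: Z = j*2*pi*f * n^2 * mu0 * (mu' - j*mu'') * (Sigma A/l)."""
    mu_r = complex(mu_p, -mu_pp)   # complex relative permeability
    return 1j * 2 * math.pi * f * n**2 * MU0 * mu_r * sigma_A_over_l

# Placeholder permeability values for a 3-turn winding at 3.6 MHz on a core
# with Sigma l/A = 920 /m (so Sigma A/l = 1/920 m):
Z = toroid_impedance(3, 3.6e6, 500, 300, 1 / 920)
print(f"R = {Z.real:.1f} ohm, X = {Z.imag:.1f} ohm")
```

Note how the loss term falls out of the algebra: j(µ′ − jµ″) = µ″ + jµ′, so µ″ sets the series resistance and µ′ sets the reactance, which is why the calculators report both R and X.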
2. Suppression sleeves
Lets look at the entry for a 2643625002. Here is a clip from Fair-rite’s catalogue 17th Ed, in this case the format is that used for many cores classed as suppression cores.
Physical dimensions and µ’ and µ”
From the datasheet, dimensions are 16.25×7.9×14.3mm.
Lets use Calculate ferrite cored inductor – rectangular cross section .
Finding Al
Al is the inductance of a single turn at a frequency where µ=µi (µi is the initial permeability, permeability at the lowest frequencies.)
Al is usually calculated from measurement of impedance or inductance with a small number of turns at around 10kHz.
It can also be estimated from initial permeability (µi) and dimensions, or Σl/A or ΣA/l.
Taking the last example, lets calculate the impedance at 10kHz.
Above, Ls is 1.65µH, so Al=1650nH. The calculator also conveniently gives ΣA/l=0.00164m, and of course Σl/A is the inverse, 610/m.
If you measure L and divide by n^2, be careful that the measurement is at a frequency where µ=µi.
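The relation Al = L/n² used here can be captured in a tiny sketch (units assumed: Al in nH per turn squared, L in henries):

```python
def inductance_from_al(al_nh, turns):
    """L = Al * n^2 (Al in nH), valid at frequencies where mu = mu_i."""
    return al_nh * 1e-9 * turns ** 2


def al_from_measurement(l_henries, turns):
    """Recover Al (in nH) from a measured low-frequency inductance."""
    return l_henries * 1e9 / turns ** 2
```

For the worked example above (taking Ls = 1.65µH as a single-turn value), `al_from_measurement(1.65e-6, 1)` gives 1650nH, matching the quoted Al.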
Adjustment for self resonance
As mentioned earlier, these devices are really resonators and exhibit self resonance. Up to about half the first self resonant frequency, these effects can be modelled reasonably well by a small
equivalent shunt capacitance.
So the first step is to carefully measure the first self resonant frequency, carefully meaning to ensure that the test fixture is not disturbing the thing being measured.
Above is a plot of calculated impedance for 11t on the 5943003821 used above.
Above is the same scenario with Cs=2pF to calibrate the self resonant frequency to measurement of a prototype.
References / links
• Duffy, O. 2015. A method for estimating the impedance of a ferrite cored toroidal inductor at RF. https://owenduffy.net/files/EstimateZFerriteToroidInductor.pdf.
• Snelling, E. C. Soft ferrites: properties and applications. Iliffe Books, 1969.
Patch-based image denoising
Since their introduction in denoising, the family of non-local methods, whose Non-Local Means (NL-Means) is the most famous member, has proved its ability to challenge other powerful methods such as
wavelet based approaches, or variational techniques. Though simple to implement and efficient in practice, the classical NL-Means algorithm suffers from several limitations: noise artifacts are
created around edges, and regions with few repetitions in the image are not treated at all. We present here several solutions to improve this method, either by considering better reprojections from the patch space to the original pixel space, or by considering general shapes for the patches.
Noise Model
We are concerned with the problem of the restoration of noisy images. We assume that we are given a grayscale image $I$ that is a noisy version of an unobservable image $I^\star$. In this context one usually deals with additive Gaussian noise: $$\newcommand{\boldx}{\mathbf x} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\itrue}{\mathbf f} \newcommand{\inoisy}{\mathbf Y} \newcommand{\INLM}{\hat{\mathbf{f}}^{NLM}} \newcommand{\INLMLPR}{\hat{\mathbf{f}}^{NLM-LPR}} \newcommand{\wnlm}[2]{\omega( #1 , #2 )} \newcommand{\patch}{P} \newcommand{\argmin}{\mathop{\mathrm{arg\,min}}} \inoisy(\boldx)=\itrue(\boldx)+\boldsymbol{\varepsilon}(\boldx) \: ,$$ where $\boldx=(x,y) \in \Omega$ is any pixel in the image domain $\Omega$ and $\boldsymbol{\varepsilon}$ is a centered Gaussian noise with known variance $\sigma^2$.
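As a concrete illustration of this noise model, a grayscale image (represented here as a plain list of rows) can be corrupted with i.i.d. Gaussian noise; a minimal pure-Python sketch:

```python
import random


def add_gaussian_noise(image, sigma, seed=0):
    """Return Y(x) = f(x) + eps(x) with eps ~ N(0, sigma^2) per pixel."""
    rng = random.Random(seed)
    return [[pixel + rng.gauss(0, sigma) for pixel in row] for row in image]
```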
Non-Local Means (NLM)
Below we present some variants of this method.
NLM-LPR: Local Polynomial Regression
In this work we focus on the theoretical properties of the NLM, but also of other estimators such as linear filters (LF), the Yaroslavsky filter (YF), and an oracle estimator (not a proper estimator, since it has access to partial information from the original, non-corrupted image). Our extension adapts local polynomial regression to all the methods mentioned. Indeed, NLM can be seen as a zero-order polynomial fit using some weights $\alpha$ such that $\INLM(\boldx)=\sum_{\boldx'} \alpha_{\boldx',\boldx} \inoisy(\boldx')$, though it is also common in the statistics literature to consider other orders of polynomial approximation. For an order $r$, we have $$\begin{cases} \INLMLPR(\boldx) &= \widehat{a}^{(\boldx)}_0\\ \widehat{\mathbf{a}}^{(\boldx)} &= \argmin_{\mathbf{a}} \sum_{\boldx'} \alpha_{\boldx,\boldx'} \left(\inoisy(\boldx') - \sum_{0 \leq |s| \leq r} a_s \, (\boldx - \boldx')^s\right)^2. \end{cases}$$ Here the exponent $s$ is used for multipolynomial (multi-index) notation. We show that for particular classes of images, going up to the order $r=2$ is already enough to improve the visual and numerical quality of the methods.
"Oracle inequalities and minimax rates for non-local means and related adaptive kernel-based methods"
E. Arias-Castro, J. Salmon, R. Willett, SIAM J. Imaging Sci., vol.5, pp. 944--992, 2012, PDF.
Corresponding Matlab Demo and toolbox ZIP.
Various Reprojections (Box-kernel)
Central (PSNR=28.19), Uniform Average (PSNR=28.68), Weighted Average (PSNR=29.13)
"From Patches to Pixels in Non-Local methods: Weighted-Average Reprojection"
J. Salmon and Y. Strozecki, ICIP, 2010, PDF.
"Patch Reprojections for Non Local Methods"
J. Salmon and Y. Strozecki, Signal Processing, vol.92, pp. 477 - 489, 2012. PDF.
Corresponding Matlab Demo and toolbox ZIP.
NLM-Shape Adaptive Patches (NLM-SAP)
Another method considered to reduce the "halo of noise" due to the NLM is to generalize the shape of the patches. Instead of using simple square patches, we propose to extend the NLM algorithm with more general families of shapes. Examples are classical squares and disks, but also bands and pie slices (cf. figure). The main point of this work is to define a pertinent tool to locally aggregate the various estimations obtained for each pixel thanks to each shape. The technical tool we consider is SURE (Stein's Unbiased Risk Estimate), based on Stein's Lemma. We apply the SURE to the NLM using shapes, instead of using SURE simply to determine the bandwidth or the patch width.
$$\newcommand{\ihat}{\hat{\itrue}} \ihat(\boldx)= \frac{\sum_{\boldx' \in \Omega} \wnlm{\boldx}{\boldx'} \inoisy(\boldx')}{\sum_{\boldx' \in \Omega} \wnlm{\boldx}{\boldx'}} \, ,$$ where the weights $\wnlm{\boldx}{\boldx'}$ depend on patches around $\boldx$ and $\boldx'$. The denominator is a normalizing factor which ensures the weights sum to one. The original weights in the NL-Means are of the following form: $$\label{eq:nlm_weights} \wnlm{\boldx}{\boldx'}= \varphi \left( \frac{\|\patch_{\boldx} -\patch_{\boldx'}\|_{2,a}^2}{2h^2} \right) \, ,$$ where $h>0$ is the bandwidth parameter, $\varphi$ is the kernel used to measure similarity between patches, $\|\cdot\|_{2,a}$ is a weighted Euclidean norm using a Gaussian kernel, and $a$ is the bandwidth that controls the concentration of the kernel around the central pixel. In order to deal with patches of arbitrary shapes, we reformulate the way the distance between two pixels is measured in terms of patches. The weighted Euclidean distance $\|\cdot\|_{2,a}$ used above can be generalized using the following expression: $$\label{eq:shape_distance} d^2_{\mathbf S}(\boldx,\boldx')=\sum_{\tau \in \Omega} \mathbf{S}(\tau) (\inoisy(\boldx+\tau)-\inoisy(\boldx'+\tau))^2 \, ,$$ where $\mathbf{S}$ encodes the shape we aim at.
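The shape-based distance $d^2_{\mathbf S}$ and the resulting weight can be sketched in plain Python. Assumptions: a shape is a dict mapping offsets $\tau$ to weights $\mathbf{S}(\tau)$, the kernel $\varphi(u)=\exp(-u)$ is used (one common choice), and out-of-image offsets are simply skipped (one of several possible boundary conventions).

```python
import math


def shape_distance_sq(img, x, xp, shape):
    """d^2_S(x, x') = sum over tau of S(tau) * (Y(x+tau) - Y(x'+tau))^2.

    img is a list of rows; x and xp are (row, col) tuples; shape maps
    (dr, dc) offsets to weights S(tau)."""
    rows, cols = len(img), len(img[0])
    d2 = 0.0
    for (dr, dc), s in shape.items():
        r1, c1 = x[0] + dr, x[1] + dc
        r2, c2 = xp[0] + dr, xp[1] + dc
        if 0 <= r1 < rows and 0 <= c1 < cols and 0 <= r2 < rows and 0 <= c2 < cols:
            d2 += s * (img[r1][c1] - img[r2][c2]) ** 2
    return d2


def nlm_weight(img, x, xp, shape, h):
    """omega(x, x') with the exponential kernel phi(u) = exp(-u)."""
    return math.exp(-shape_distance_sq(img, x, xp, shape) / (2 * h * h))
```

A production implementation would vectorize this (e.g. via FFT, as mentioned later in the text); the sketch only fixes the definitions.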
Squares: To begin with, we apply our framework to the most commonly used shapes, i.e., the square shapes of odd length (so the squares have centers we can consider). For instance, choosing: $$\label
{eq:lambda_simple_nlm} \mathbf{S}(\tau)=\left \{ \begin{array}{ll} 1, \, &\mbox{ if } \|\tau\|_\infty \le \frac{p-1}{2}\, ,\\ \\ 0, \, &\mbox{ otherwise}, \end{array} \right.$$ leads to the classical
(simplified) NL-Means definition with square patches of size $p \times p$ and distance between patches measured by the Euclidean norm.
Gaussian: The original, but less common, choice is to set: $$\label{eq:lambda_original_nlm} \mathbf{S}(\tau)=\left \{ \begin{array}{ll} \exp(-(\tau_1^2+\tau_2^2)/2a^2), \, &\mbox{ if } \|\tau\|_\infty
\le \frac{p-1}{2}\, ,\\ \\ 0, \, &\mbox{ otherwise.} \end{array} \right.$$ The last equation means that the norm $ \|\cdot\|_{2,a}$ is used to measure the distance between patches in the definition
of the NL-Means. This limits the influence of square patches corners and leads to a more isotropic comparison between patches.
Disks: Disk shapes are defined in the same way, using the Euclidean norm instead: $$\mathbf{S}(\tau)=\left \{ \begin{array}{ll} 1, \, &\mbox{ if } \|\tau\|_2 \le \frac{p-1}{2}\, ,\\ \\ 0, \, &\mbox{
otherwise.} \end{array} \right.$$
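The square and disk indicators above can be generated as offset dictionaries, matching the representation used earlier; a sketch for an odd patch width p:

```python
def square_shape(p):
    """S(tau) = 1 iff ||tau||_inf <= (p-1)/2 (the classical square)."""
    r = (p - 1) // 2
    return {(i, j): 1.0 for i in range(-r, r + 1) for j in range(-r, r + 1)}


def disk_shape(p):
    """S(tau) = 1 iff ||tau||_2 <= (p-1)/2 (the Euclidean disk)."""
    r = (p - 1) // 2
    return {(i, j): 1.0
            for i in range(-r, r + 1)
            for j in range(-r, r + 1)
            if (i * i + j * j) ** 0.5 <= r}
```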
Pie slices: We study a family of shapes, denoted as "pie", whose elements are defined with three parameters: two angles and a radius. These shapes represent a portion of a disk delimited by two lines and surrounding the discrete central pixel.
Bands: This family of shapes is simply composed of rectangles, potentially rotated and decentered with respect to the pixel of interest.
We have also provided a fast implementation of the method thanks to FFT calculations. Moreover, we have considered several rules to aggregate, via SURE, the shape-based estimates obtained.
Example of using several shapes and combining them. The shapes used.
"Anisotropic Non-Local Means with Spatially Adaptive Patch Shapes"
C.-A. Deledalle, J. Salmon, V. Duval, SSVM 2011, PDF.
"Non-Local Methods with Shape-Adaptive Patches (NLM-SAP)"
C.-A. Deledalle, J. Salmon, V. Duval, J. Math. Imaging Vis., vol.43, pp. 103-120, 2012, PDF.
Corresponding Matlab DEMO and toolbox ZIP.
Contact us if you have any questions.
Aperiodical News Roundup – October 2023
Here’s a round-up of a few things that happened this month that we didn’t otherwise cover here.
The Salem Prize for 2023, given annually to young mathematicians judged to have done outstanding work on harmonic analysis and related topics, has been awarded to Sarah Peluse and Julian Sahasrabudhe. (via Terence Tao)
According to this recent arXiv paper, data from 350,757 coin flips supports Persi Diaconis’ model of coin tossing, which estimates the probability of a coin landing on the same side it started at a
surprising 51%. (via Alex Corner, Sheffield Hallam University)
Statistician C. R. Rao, who pioneered powerful statistical methods that underpin modern scientific data analyses, has died. (via Raul Jimenez)
And finally, the newly* discovered aperiodic monotile, which we won’t stop going on about ever, has been chosen as one of Time’s 200 Best Inventions of 2023 (via the European Mathematical Society).
Verifiable Random Functions from Non-interactive Witness-Indistinguishable Proofs
Verifiable random functions (VRFs) are pseudorandom functions where the owner of the seed, in addition to computing the function’s value y at any point x, can also generate a non-interactive proof π
that y is correct, without compromising pseudorandomness at other points. Being a natural primitive with a wide range of applications, considerable efforts have been directed toward the construction
of such VRFs. While these efforts have resulted in a variety of algebraic constructions (from bilinear maps or the RSA problem), the relation between VRFs and other general primitives is still not
well understood. We present new constructions of VRFs from general primitives, the main one being non-interactive witness-indistinguishable proofs (NIWIs). This includes: (1) a selectively secure VRF
assuming NIWIs and non-interactive commitments. As usual, the VRF can be made adaptively secure assuming subexponential hardness of the underlying primitives. (2) An adaptively secure VRF assuming
(polynomially hard) NIWIs, non-interactive commitments, and (single-key) constrained pseudorandom functions for a restricted class of constraints. The above primitives can be instantiated under
various standard assumptions, which yields corresponding VRF instantiations, under different assumptions than were known so far. One notable example is a non-uniform construction of VRFs from
subexponentially hard trapdoor permutations, or more generally, from verifiable pseudorandom generators (the construction can be made uniform under a standard derandomization assumption). This
partially answers an open question by Dwork and Naor (FOCS ’00). The construction and its analysis are quite simple. Both draw from ideas commonly used in the context of indistinguishability obfuscation.
• Foundations
• Non-interactive witness indistinguishable proofs
• Verifiable random functions
Is f(x)=sqrt(1/x-2) increasing or decreasing at x=2 /9 ? | HIX Tutor
Is #f(x)=sqrt(1/x-2) # increasing or decreasing at #x=2 /9 #?
Answer 1
Look at the function or take the derivative to find that $f \left(x\right)$ is decreasing.
If we think about what #f(x)# is actually doing, we can see that it is defined on #(0, 1/2]#, with #f(x) -> oo# as #x->0^+# and #f(1/2) = 0#. Then, as #x# increases, #1/x# decreases, so #f(x)#
decreases, meaning #f(x)# is decreasing on all points where it is defined.
So we know the answer without doing any calculus, but let's do it the calculus way as well. A function is increasing or decreasing at a certain point if its first derivative is positive or negative,
respectively. Then, to confirm our answer of decreasing, the first derivative should be negative at #x=2/9#.
Applying the power rule, the chain rule, and the quotient rule gives us
#f'(x) = d/dx(1/x-2)^(1/2)# #= 1/2(1/x-2)^(-1/2)*(-1/x^2)# #=-1/(2x^2(1/x-2)^(1/2))#
As the denominator is always positive, then we have #f'(x) < 0# for all #x in(0,1/2]#, meaning #f(x)# is decreasing throughout the interval, including at #x = 2/9# (note that this matches our
analysis above).
Answer 2
To determine if the function ( f(x) = \sqrt{\frac{1}{x} - 2} ) is increasing or decreasing at ( x = \frac{2}{9} ), we can examine the sign of its derivative at that point.
First, find the derivative of ( f(x) ) with respect to ( x ): [ f'(x) = -\frac{1}{2x^2\sqrt{\frac{1}{x} - 2}} ]
Next, evaluate the derivative at ( x = \frac{2}{9} ): [ f'\left(\frac{2}{9}\right) = -\frac{1}{2\left(\frac{2}{9}\right)^2\sqrt{\frac{1}{2/9} - 2}} ] [ f'\left(\frac{2}{9}\right) = -\frac{1}{2\left(\frac{4}{81}\right)\sqrt{\frac{9}{2} - 2}} ] [ f'\left(\frac{2}{9}\right) = -\frac{1}{\frac{8}{81}\sqrt{\frac{5}{2}}} ] [ f'\left(\frac{2}{9}\right) = -\frac{81}{8}\sqrt{\frac{2}{5}} = -\frac{81\sqrt{10}}{40} \approx -6.40 ]
Since the derivative is negative at ( x = \frac{2}{9} ), the function is decreasing at that point.
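As a quick numerical sanity check (not part of either answer), a central finite difference confirms the sign of the derivative at that point:

```python
def f(x):
    return (1 / x - 2) ** 0.5


# Central difference approximation of f'(2/9).
x0, h = 2 / 9, 1e-6
slope = (f(x0 + h) - f(x0 - h)) / (2 * h)
print(slope < 0)  # True: f is decreasing at x = 2/9
```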
2020 TCO Europe and Central Asia Regionals Algorithm Editorials
October 12, 2020
300 points: SlowSequence
The differences between consecutive elements of the sequence are +1s and -1s. The sequence of the N-1 differences determines the actual sequence uniquely.
If all differences are +1, the sum of the sequence is N(N-1)/2; if they are all -1, the sum is the opposite. Thus, one necessary condition is that a solution can only exist for sums that lie in this range.
Suppose we start with all differences +1. If we now take the k-th +1 from the end and flip it to -1, this change will decrease each of the k following elements of the sequence by 2, and hence the sum by 2k. As 2k is even, the parity of the sum won’t change. Thus we get a second necessary condition: the parity of the desired sum S must be the same as the parity of the maximum possible sum N(N-1)/2.
Together these two necessary conditions are also sufficient. In order to construct one solution we need to take the difference D = N(N-1)/2 – S and write it as a sum of distinct even numbers not
exceeding 2(N-1). It can easily be shown that a simple greedy algorithm that always takes the largest available number will always construct a valid solution. Hence we get a really simple
implementation: start with all the differences equal to +1 and then go left to right and whenever flipping a +1 to a -1 keeps the sum of the sequence at least equal to S, do so.
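A minimal sketch of this construction, assuming the sequence starts at 0 (consistent with the sum bounds above; the problem's exact starting value is not restated here):

```python
def slow_sequence(n, s):
    """Build a_0..a_{n-1} with a_0 = 0, |a_{i+1} - a_i| = 1 and
    sum(a) == s, or return None if no such sequence exists."""
    max_sum = n * (n - 1) // 2
    if abs(s) > max_sum or (max_sum - s) % 2 != 0:
        return None
    total = max_sum
    diffs = [1] * (n - 1)
    for i in range(n - 1):
        # Flipping diff i from +1 to -1 lowers each of the n-1-i later
        # elements by 2, i.e. lowers the total by 2 * (n - 1 - i).
        drop = 2 * (n - 1 - i)
        if total - drop >= s:
            diffs[i] = -1
            total -= drop
    seq = [0]
    for d in diffs:
        seq.append(seq[-1] + d)
    return seq
```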
Qualification, 500 points: RowAcross
The boat must alternately travel there and back again. Each trip in the correct direction transfers at most C people, each trip back transfers at least 1 person back, so each such pair of trips has
the net effect of transferring at most C-1 people to the desired side. The last trip will transfer at most C people. Thus, the total number of trips is at least 1 + 2*ceiling( (N-C) / (C-1) ).
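The trip lower bound can be written directly (a sketch: n people, boat capacity c >= 2, taking a single trip when everyone fits at once):

```python
import math


def min_trips(n, c):
    """1 + 2*ceil((n-c)/(c-1)) trips when n > c; one trip otherwise.
    The argument above shows this bound; the editorial shows it is
    achievable."""
    if n <= c:
        return 1
    return 1 + 2 * math.ceil((n - c) / (c - 1))
```

For c = 2 this reduces to 2n - 3 for n > 2, matching the case analysis in the text.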
As long as we always have C people going in the right direction (or everyone who’s left if it’s the last trip) and only one person going back, we can be sure that our trip count is optimal. Thus, we
only need to minimize the maximum strain.
For C >= 3 we get that the above lower bound is at most equal to N. If there are at most as many trips as people, ideally we should find a solution in which no person has to row twice.
For C = 2 we get the lower bound of 1 + 2*(N-2) = 2*N – 3. If N >= 4, this is more than N but less than 2*N, so the optimal solution must have some people rowing twice, and if we find such a
solution, we can be sure that it’s optimal. For N <= 3 we could have each person rowing at most once, and this is clearly possible: {A} for one person, {AB} for two, and {AB, B, CB} for three.
To complete the solution for C=2, for larger N we can find a very straightforward pattern: starting with B, each person will take the previous one across and then return if we still aren’t done. The
start of this solution looks as follows: {BA, B, CB, C, DC, D, …}
For C>2 we can indeed always construct a solution in which nobody has to row twice, but we need to be a bit careful: some greedy approaches can get stuck in a situation where everybody still left on
the original bank has already rowed once.
One possible solution is to generalize the pattern used for C=2, with the only change being that now with at least three people in the boat we can have the last two alphabetically as the paddlers:
one takes it in the other direction, the other returns it back. Here’s an example of the beginning of this pattern for C = 5:
{DEABC, E, HIEFG, I, LMIJK, M, …}
Qualification, 1000 points: MarioJumper
Let’s start with the observation that in cases when we are supposed to go right to left we can reverse the entire input. Hence, we may assume that start is to the left of the desired finish. (We may
also observe that steps are useless because jumps of width 1 do at least the same thing.)
As we are going right, if a column is unreachable, all of the following columns are unreachable as well – if we could jump over that column, we could also jump onto it from the same place. Let dist[c] denote the distance from the start to column c. For any two consecutive columns that are reachable there are only two possibilities: either dist[c+1] = dist[c], or dist[c+1] = dist[c] + 1. This is because the optimal way of reaching column c+1 is either from column c-1 only (in which case the first equality holds because we can reach c from c-1 in the same number of moves) or from column c (in which case the second equality holds). We will call these two possibilities equal and unequal columns.
Next, consider what happens when we encounter one piece of the game world, as described by the input variable pieces[]. If we know the distances to the two columns that immediately precede this
piece, we can then simply iterate over the piece from the left to the right and determine the distances for all its columns – and, most notably, for its last two columns.
If this piece is repeated 147 times, we would like to find its effect and then repeat it 147 times, but it won’t be so easy. The problem is that the repetitions won’t have the same effect. The effect
of a piece depends on whether the previous two columns were equal or unequal, and this is not necessarily preserved. However, it is easy to notice that once we repeat a piece a few times, we must get
periodic behavior: either each piece has the same effect, or we are alternating between two effects (and alternately end the piece with equal and unequal columns). The conclusion is that in either
case we should take two consecutive pieces as the smallest “unit that can be repeated”.
One possible solution with a fairly straightforward implementation therefore looks as follows:
1. for each piece, if its count is greater than, let’s say, 4, reduce it to either 4 or 5, preserving parity
2. the resulting level is small enough to run a general BFS to compute all distances
3. for each piece whose count was reduced in step 1, look at how its last two repeats changed the distances and add the same effect for the missing copies
The problem was hiding one final catch which you may not notice if you actually run BFS in step 2, but which would make your solution fail if you just iterated from start to finish in step 2 instead.
The catch is that in some levels the very first jump you make has to be in the direction away from the finish. (E.g., maximum jump height is 3, your column has height 0, the next column in the
direction towards the finish has height 5 and the other adjacent column has height 3.)
Elimination round 1: DrawNTrees
If R=1 or C=1, we can divide the rectangle into N pieces for any N, and as each piece is a path, it is a tree.
An example illustrates that for R, C > 1 the input N = 1 has no solution. All other inputs are solvable.
Here’s one possible solution for some arbitrary R, C and for N = 2:
An easy way of finding a solution for larger N: we will start with the above solution and then exactly N-2 times we will find a leaf of an existing tree (i.e., a square with exactly one neighbor of
the same color) and turn it into a new component of size 1.
If we use colors 2, 3, 4, 5 for the new components based on parity of the row and column they are in, we are guaranteed to never have two components of the same color touching each other, and we are done.
Elimination round 2: CoinFlipBetting
Let pH be the probability that Heather wins the series. (The value of pH is easily computed using dynamic programming.)
Once the entire series ends, we need to have at least S/pH dollars in each situation in which Heather won the series. As we cannot go below zero, we will have at least zero dollars in each situation
in which Tasha won the series. On the other hand, we know that our expected profit from each individual bet with Lucy is zero, and therefore our expected profit at the end must be zero. This is only
possible if there is an equality in all of the above inequalities: any optimal strategy must have the property that we will have exactly S/pH dollars if Heather wins the series and exactly 0 dollars
if she loses the series.
Once we made this observation, it should be intuitive that all the individual bets we’ll need to make will be uniquely determined. This will indeed be the case. Let’s say that a state is the number
of heads and tails seen so far. We will show that for each state (h, t) we can uniquely determine both the amount of money you must have whenever in that state and the bet you should make for the
next round.
We will go backwards. We already know that money[N][*] = S/pH and money[*][N] = 0 for the terminal states. Suppose we are in some state (h, t). With probability P we will go to the state (h+1, t) and we will need to have exactly money[h+1][t] dollars, and with probability (1-P) we will need to have exactly money[h][t+1] dollars. We now need to determine money[h][t] and the amount to bet.
We could now consider two options: betting on head or betting on tail in the next flip. However, it is fairly obvious that we must have money[h+1][t] > money[h][t] > money[h][t+1], and thus we always want to bet on heads. (If you don’t see why or don’t trust your intuition, try calculating what happens if you attempt to bet on a tail. How can you then tell that this is never an option?)
Now we have two unknowns, money[h][t] and bet[h][t], and two equations: money[h][t] – bet[h][t] = money[h][t+1] if we lose, and money[h][t] – bet[h][t] + bet[h][t]/P = money[h+1][t] if we win. This system of equations has a unique solution: bet[h][t] = (money[h+1][t] – money[h][t+1]) * P, and money[h][t] = P*money[h+1][t] + (1-P)*money[h][t+1].
Now that we derived these expressions, we can use a straightforward dynamic programming approach to compute the money and bet for each state.
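Both dynamic programs (first pH, then the money and bet tables) can be sketched with exact rational arithmetic. A first-to-N heads-vs-tails series is assumed, with states indexed by (heads so far, tails so far) as in the text:

```python
from fractions import Fraction


def betting_plan(n, p, s):
    """money[h][t]: cash you must hold after h heads and t tails;
    bet[h][t]: the bet on heads to place next.  The series ends when
    either count reaches n.  Exact arithmetic via Fraction."""
    p, s = Fraction(p), Fraction(s)
    q = 1 - p
    # Probability that Heather (heads) wins the series from each state.
    win = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
    for t in range(n):
        win[n][t] = Fraction(1)
    for h in range(n - 1, -1, -1):
        for t in range(n - 1, -1, -1):
            win[h][t] = p * win[h + 1][t] + q * win[h][t + 1]
    ph = win[0][0]
    # Backward pass for money and bet, using the recurrences above.
    money = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
    bet = [[Fraction(0)] * n for _ in range(n)]
    for t in range(n):
        money[n][t] = s / ph
    for h in range(n - 1, -1, -1):
        for t in range(n - 1, -1, -1):
            money[h][t] = p * money[h + 1][t] + q * money[h][t + 1]
            bet[h][t] = (money[h + 1][t] - money[h][t + 1]) * p
    return money, bet
```

A reassuring consequence of the recurrence is that money[0][0] comes out to exactly S: the strategy starts with S dollars, the expected value of the zero-profit game.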
Finals 300pt: ShippingCosts
Again, the first problem featured in the finals is on the easier side. For each customer we have a window: an interval of prices in which they will make a purchase. We can note that the optimal price
will be the upper end of some customer’s window – in all other cases we can increase the price and our profits without losing any customers. We can now easily compute the optimal answer by sweeping:
the events are the openings and closings of all windows, and we keep track of the current profit and the current number of buying customers.
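A direct quadratic check of that observation, assuming the profit at a price is the price times the number of buying customers (the sweep described above is the faster refinement of the same idea):

```python
def best_profit(windows):
    """windows: list of (lo, hi) price intervals in which each customer
    buys.  An optimal price is the upper end of some window, so it
    suffices to try each upper end."""
    best = 0
    for _, price in windows:
        buyers = sum(1 for lo, hi in windows if lo <= price <= hi)
        best = max(best, price * buyers)
    return best
```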
Finals 800pt: GreedyKiller
There were two basic strategies to solve this problem: either analyze the algorithm on paper, or implement its most optimistic version (do you see how to deal with ties?) and use it to find a suitable collection of random counterexamples.
There are many different types of counterexamples. Below we show the two used in the reference solution, with a proof that they are sufficient.
Counterexample 1: sometimes taking the shortest interval with the fewest overlaps is bad. If a solution takes the interval marked with asterisks, it can no longer be optimal. Note that each other
interval has both strictly more overlaps and a bigger length than the marked one.
-------- -------- -------- --------
--------- ***** ---------
--------- ---------
--------- ---------
Counterexample 2: in other situations, discarding the longest interval with the most overlaps is also bad.
-------- ******************* --------
--------- ---------
--------- ---------
If takeThreshold >= 2*overlapPenalty + 2*lengthPenalty, counterexample 1 will work.
And in counterexample 2 the badness of each interval is at least 2*overlapPenalty + a_lot*lengthPenalty, so if counterexample 1 did not work, we know that takeThreshold is less than the badness of
each interval. Hence, the greedy solution will discard the most expensive interval and counterexample 2 will work.
Review: CPSC 320, 404
Posted on June 23, 2015 in ubc • 8 min read
I took CPSC 320 and 404 this year (in 2014W2) along with a few electives, right after my second 8-month co-op stint. I actually took a lighter workload than I would usually take during regular winter
sessions, in part because I was also doing a TA-ship for the first time (in fact, I've gone ahead and written a review about my TA experience). Keep on reading for my thoughts about these two
upper-level CPSC courses.
CPSC 320:
In a nutshell:
Language: none in particular
Toolset: again, none in particular
Prereqs: CPSC 221, and either (a) 6 credits of 2nd year MATH or STAT or (b) 3 credits of 2nd year MATH or STAT with a grade of 72% or better. The MATH/STAT credits that you'll use to satisfy these
prereqs are likely to come from a combination of MATH 200, MATH 221, and STAT 200+302 / STAT 241.
Website/resources: CPSC 320, Piazza
Textbook: Algorithm Design by Kleinberg and Tardos
Presenting CPSC 320, known as "Algorithms and Dating Advice"...according to our assignment hand-in box, at least:
CPSC 320, more formally known as "Intermediate Algorithm Design and Analysis", is a sequel/successor of sorts to CPSC 221; it covers a bunch of different algorithms and paradigms not covered in CPSC
221, with little overlap (you'll be given a review of asymptotic analysis and notation, e.g. big O, omega, theta, etc., and as you might expect, proofs and proof techniques are still very much
relevant). In my opinion, CPSC 320 isn't nearly as broad as its predecessor, but it is still just as dense and material-heavy. The broad categories that you'll cover during the course include graphs,
greedy algorithms, divide-and-conquer algorithms, recurrences, memoization, dynamic programming, and NP-completeness. I believe the course is also supposed to cover randomization and amortized
analysis (judging by the syllabus provided for previous terms), but we didn't really have time to get to that this term.
I think it's helpful to point out the learning goals for the overall course (quoting from the syllabus):
1. Recognize which algorithm design technique(s), such as divide and conquer, prune and search, greedy strategies, or dynamic programming was used in a given algorithm.
2. Select and judge several promising paradigms and/or data structures (possibly slightly modified) for a given problem by analyzing the problem’s properties.
3. Implement a solution to a problem using a specified algorithm design paradigm, given sufficient information about the form of that problem’s solution.
4. Select, judge and apply promising mathematical techniques (such as asymptotic notations, recurrence relations, amortized analysis and decision trees) to establish reasonably tight upper and lower
bounds on the running time of algorithms.
5. Recognize similarities between a new problem and some of the problems they have encountered, and judge whether or not these similarities can be leveraged towards designing an algorithm for the
new problem.
In essence, I feel like CPSC 221 was all about giving you the basic toolkit you needed to understand algorithms, learning about the core data structures that you'll see appear over and over again
(e.g. lists, trees, graphs, etc.) and common algorithms that are applicable everywhere (e.g. sorting, hashing, etc.); CPSC 320 is about building on that knowledge and applying it to a wider variety
of problems, as well as a more directed focus on finding more efficient ways of solving a given problem (or whether that's even possible to begin with). It's definitely a course worth taking if you
found CPSC 221 even remotely interesting. Fair warning though, I consider this course to be the most difficult 3rd year CPSC course I've taken to date (compared to CPSC 304, 310, 313, and 317), both
in terms of workload and also the difficulty of the material that's assigned and taught.
Dr. Steve Wolfman taught this course when I took it. For those who've taken previous courses with him, you should already know what to expect: highly interactive lectures, and a very hands-on
approach to problem solving in-class. In fact, lectures involve very little lecturing; instead, the bulk of class time was spent working on problems with your immediate neighbours, with Steve
periodically giving out hints and then solutions near the end of class. As a result, you're expected to do all the readings outside of class, in advance of the lectures, so that you're able to
actually work on the problems. I'm generally quite a fan of his interactive style of teaching, although I admit that I'm not very diligent when it comes to pre-reading.
Another peculiarity with courses taught by Steve is evening group midterm exams and group finals. How this works is that when you take an exam, you first take it individually, and then you take the
same (or very similar) exam again as a group of 3-5 of your fellow classmates; your resulting exam mark is derived from 85% of your individual mark and 15% of the group mark (with the individual mark
being worth 100% if higher than the group mark, so you'll never get penalized for doing the group exam). As a result, you get immediate feedback on how you did; for me, that often translates from a
mental thought process of "I think I did well on the exam" after taking the individual portion, to "oh crap, I answered this and that and everything else wrong so I must have totally bombed that
exam" after the group exam. :P
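The marking rule described above is just a max over two weightings; in code (my own sketch, the function name is mine):

```python
def exam_mark(individual: float, group: float) -> float:
    """Exam mark: 85% individual + 15% group, floored at the individual
    mark alone, so the group portion can only ever help."""
    return max(individual, 0.85 * individual + 0.15 * group)
```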
It's also worth noting that the exams were all open-book, i.e. you were allowed to bring the textbook and 3 binders worth of notes if you desired to do so. In reality, that provides little more than
just a boost to your confidence; it's unlikely you'll have enough time during the exam to fully utilize your notes, and Steve's exams will test you on how capable you are at applying the theorems and
algorithms you've learned, not on how well you can regurgitate material.
Assignments are quite heavily biased towards theory and applying the concepts you've learned from class and the textbook, with very little coding involved; I recall only 1 of the 7 assignments
actually involved any coding. They're also considerably time-consuming, so I highly recommend finding a partner (you're allowed to work in groups of 2 max) and working on them in advance of the due date.
Grading consisted of 7 assignments worth 20%, 2 midterms worth 30% total, lecture pre-reading quizzes (of which there were about 20) worth 1% total, and a final exam worth 45%; the remaining 4% goes
towards whichever component above you did best in. Unlike most other Science courses, you do not need to achieve >50% on the final to pass; instead, you had to achieve >50% on the weighted average of
the midterms and the final exam (as well as the overall course itself, of course), which was a relief to me as I thought the final was considerably harder than the midterms and I was somewhat worried
that I might not have passed it. It's worth noting that I'm unsure whether this applies to just Steve's CPSC 320 sections, or all sections of the course taught by other professors as well.
CPSC 404:
In a nutshell:
Language: relational algebra, SQL
Toolset: exposure to a number of RDBMSs, including IBM's DB2 and Microsoft SQL Server
Prereqs: CPSC 213 and CPSC 304
Website/resources: CPSC 404; most of the course material is on Connect, with Q&A on Piazza
Textbook: Database Management Systems, 3rd ed., by Ramakrishnan and Gehrke (same as CPSC 304!)
CPSC 404, a.k.a. "Advanced Relational Database Systems", is as you might expect the sequel of CPSC 304. 304 serves the purpose of introducing you to relational databases (which I'll abbreviate to
RDBMS from now on), including some of the necessary theory you need to understand like E-R models and relational algebra, and then actually using a RDBMS (e.g. by learning SQL syntax and being able
to write out SQL queries). CPSC 404, on the other hand, dives into the nitty gritty internals of a RDBMS, focusing on how RDBMSs are implemented as well as the underlying data structures used by many
RDBMSs. Therefore, if you're thinking of taking CPSC 404 with the intent of becoming a better DBA or to brush up on SQL, you'll probably be disappointed. On the other hand, if you're interested in
learning more about what goes on behind the scenes in a typical RDBMS, the course material should be relevant to your interests.
You'll start off by learning about topics like storage/memory hierarchy, buffer pool management, and I/O costs; calculating and comparing page I/O costs will be a prevalent theme
throughout the entire course, and most of the calculations you'll be doing involve calculating I/O cost one way or another. That's followed by more than a month of lectures on both tree-structured
and hash-structured indexes, data structures used to represent and maintain indexes, time/space complexity of common operations (insert, update, delete) performed on these data structures, and
hashing; this includes hashing methods that were previously covered in CPSC 221 like static hashing, and methods that weren't, like extendible hashing and linear hashing. You'll see B+ trees again
during this unit, which is another source of overlap with CPSC 221. This is followed by external sorting, including external mergesort and 2PMMS (the motivation for this is that regular in-memory
mergesort like the one you learned in CPSC 221 doesn't account for the I/O overhead in RDBMSs). Next up is coverage of query evaluation and optimization, followed by a few weeks of cramming in data
warehousing. Overall, I found the material to be quite straightforward but dry at times, with the exception of the last unit on data warehousing, which I found to be very confusing (I don't think
cramming in this bulky unit in the last few weeks of class helped much).
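The core idea of external mergesort, k-way merging of already-sorted runs that are individually too large to hold in memory, can be sketched with the standard library's heap-based merge. This is my own illustration: here the "runs" are in-memory lists standing in for sorted files, whereas a real RDBMS would read each run page by page.

```python
import heapq

def external_merge(runs):
    """Merge k sorted runs into one sorted stream, visiting each element
    once via a k-entry heap -- the merge phase of external mergesort."""
    return list(heapq.merge(*runs))

runs = [[1, 4, 9], [2, 3, 10], [5, 6, 7, 8]]
merged = external_merge(runs)
```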
CPSC 404 has a fairly standard marking scheme, with a combination of 2 assignments (4% each), 4 in-class midterms (10% each), pre-class and in-class exercises (10% total), clickers (5%), a final exam
worth 37%, and 1% bonus for filling out surveys. The two assignments involved using Microsoft SQL Server and SQL Server Management Studio and running through a very detailed set of instructions with
a partner (the assignment PDFs were 40-50 pages long, most of which was along the lines of "click this button" or "open this dialog box" or "type in this query" etc.), with questions and a brief Q&A
follow-up with a TA. To be honest, I didn't find the assignments all that helpful; they were quite tedious, and the data warehousing assignment in particular was just confusing. I think the pre-class
and in-class exercises, which were brief and to the point, were much more enjoyable and instructional.
Dr. Ed Knorr was my instructor for this session of CPSC 404, and this was the first time I took a class taught by him. I found him to be an approachable and overall effective professor, and he's
clearly very passionate in the subjects he teaches, although I feel like he has a tendency to go off on unrelated tangents during lectures sometimes. He also has a mild-mannered voice which was a
contributing factor to me falling asleep in class more than once (although lack of sleep was the predominant factor, of course, and that's purely my own fault).
glunurbscurve man page on Solaris
GLUNURBSCURVE(3gl) GLUNURBSCURVE(3gl)
NAME
gluNurbsCurve - define the shape of a NURBS curve
C SPECIFICATION
void gluNurbsCurve( GLUnurbs* nurb,
GLint knotCount,
GLfloat *knots,
GLint stride,
GLfloat *control,
GLint order,
GLenum type )
PARAMETERS
nurb Specifies the NURBS object (created with gluNewNurbsRenderer).
knotCount Specifies the number of knots in knots. knotCount equals
the number of control points plus the order.
knots Specifies an array of knotCount nondecreasing knot values.
stride Specifies the offset (as a number of single-precision floating-point values) between successive curve control points.
control Specifies a pointer to an array of control points. The coordinates must agree with type, specified below.
order Specifies the order of the NURBS curve. order equals degree
+ 1, hence a cubic curve has an order of 4.
type Specifies the type of the curve. If this curve is defined
within a gluBeginCurve/gluEndCurve pair, then the type can
be any of the valid one-dimensional evaluator types (such as
GL_MAP1_VERTEX_3 or GL_MAP1_COLOR_4). Between a gluBeginTrim/gluEndTrim pair, the only valid types are
GLU_MAP1_TRIM_2 and GLU_MAP1_TRIM_3.
DESCRIPTION
Use gluNurbsCurve to describe a NURBS curve.
When gluNurbsCurve appears between a gluBeginCurve/gluEndCurve pair, it
is used to describe a curve to be rendered. Positional, texture, and
color coordinates are associated by presenting each as a separate
gluNurbsCurve between a gluBeginCurve/gluEndCurve pair. No more than
one call to gluNurbsCurve for each of color, position, and texture data
can be made within a single gluBeginCurve/gluEndCurve pair. Exactly one
call must be made to describe the position of the curve (a type of
GL_MAP1_VERTEX_3 or GL_MAP1_VERTEX_4).
When gluNurbsCurve appears between a gluBeginTrim/gluEndTrim pair, it
is used to describe a trimming curve on a NURBS surface. If type is
GLU_MAP1_TRIM_2, then it describes a curve in two-dimensional (u and v)
parameter space. If it is GLU_MAP1_TRIM_3, then it describes a curve in
two-dimensional homogeneous (u, v, and w) parameter space. See the
gluBeginTrim reference page for more discussion about trimming curves.
EXAMPLE
The following commands render a textured NURBS curve with normals:
gluNurbsCurve(nobj, ..., GL_MAP1_TEXTURE_COORD_2);
gluNurbsCurve(nobj, ..., GL_MAP1_NORMAL);
gluNurbsCurve(nobj, ..., GL_MAP1_VERTEX_4);
To define trim curves which stitch well, use gluPwlCurve.
SEE ALSO
gluBeginCurve, gluBeginTrim, gluNewNurbsRenderer, gluPwlCurve
15 Mar 97 GLUNURBSCURVE(3gl)
Rotational Inertia of Different Objects MCQ [PDF] Quiz Questions Answers | Rotational Inertia of Different Objects MCQs App Download & e-Book: Test 10
Engineering Physics Practice Test 10
Rotational Inertia of Different Objects MCQ with Answers PDF Download: Quiz 10
MCQ 46:
If M is the mass of object and R is radius, then rotational inertia of hoop about any diameter is
1. 2/5 MR^2
2. 2/3 MR^2
3. 2/3 MR^2
4. 1/2 MR^2
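As a check on MCQ 46 (not part of the quiz itself): by the perpendicular-axis theorem, I_x + I_y = I_z = MR^2 for a hoop, and symmetry gives I_diameter = (1/2)MR^2. The same result drops out of a direct numerical integration of mass elements around the hoop:

```python
from math import sin, pi

def hoop_inertia_about_diameter(M: float, R: float, steps: int = 100000) -> float:
    """Numerically integrate I = sum(dm * d^2) around a thin hoop, where
    dm = M*dtheta/(2*pi) and d = R*sin(theta) is the distance of each
    mass element from the chosen diameter. Converges to (1/2)*M*R^2."""
    dtheta = 2 * pi / steps
    return sum((M * dtheta / (2 * pi)) * (R * sin(i * dtheta)) ** 2
               for i in range(steps))
```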
MCQ 47:
Temperature is a
1. vector quantity
2. scalar quantity
3. unit vector
4. infinite quantity
MCQ 48:
The speed of sound in hydrogen at 0°C is
1. 965 m/s
2. 1402 m/s
3. 1284 m/s
4. 1522 m/s
MCQ 49:
400 x 10^6 N/m^2 is the ultimate strength of
1. steel
2. aluminium
3. glass
4. high strength concrete
MCQ 50:
An encounter between two bodies in which the total kinetic energy of the two bodies after the encounter is equal to their total kinetic energy before the encounter is termed
1. elastic collision
2. inelastic collision
3. head on collision
4. triple collision
Application of Integrals Class 12 Notes Maths Chapter 8
By going through these CBSE Class 12 Maths Notes Chapter 8 Application of Integrals, students can recall all the concepts quickly.
Application of Integrals Notes Class 12 Maths Chapter 8
Area under Simple curves:
1. Let us find the area bounded by the curve y = f(x), x-axis, and the ordinates x = a and x = b. Consider the area under the curve as composed of a large number of thin vertical strips.
Let there be an arbitrary strip of height y and width dx.
Area of elementary strip dA = y dx, where y = f(x).
Total area A of the region between x-axis, ordinates x = a, x = b and the curve y = f(x)
= sum of areas of elementary thin strips across the region PQML.
A = \(\int_{a}^{b}\) dA = \(\int_{a}^{b}\) y dx = \(\int_{a}^{b}\) f(x) dx.
2. The area A of the region bounded by the curve x = g(y), y-axis, and the lines
y = c and y = d is given by
A = \(\int_{c}^{d}\) x dy
3. If the curve under consideration lies below x-axis, then f(x) < 0 from x = a to x = b. So, the area bounded by the curve y = f(x) and the ordinates x = a, x = b and x-axis is negative. But the
numerical value of the area is to be taken into consideration.
Then, area = \(\left|\int_{a}^{b} f(x)\,dx\right|\).
4. It may also happen that some portion of the curve is above the x-axis and some portion is below the x-axis as shown in the figure. Let A[1] be the area below the x-axis and A[2] be the area above
the x-axis. Therefore, area A bounded by the curve y = f(x), x-axis and the ordinates x = a and x = b is given by
A = |A[1]| + A[2].
Area between two curves:
1. Let the two curves by y = f(x) and y = g(x), as shown in the figure. Suppose these curve intersect at x = a and x = b.
Consider the elementary strip of height y where y = f(x) – g(x), with width dx.
∴ dA = y dx.
⇒ A = \(\int_{a}^{b}\) (f(x) – g(x)) dx
= \(\int_{a}^{b}\) f(x) dx – \(\int_{a}^{b}\) g(x) dx.
= Area bounded by the curve y = f(x) – Area bounded by the curve y = g(x), where f(x) > g(x).
2. If the two curves y = f(x) and y = g(x) intersect at x = a, x = c and x = b such that a < c < b, then:
If f(x) > g(x) in [a, c] and f(x) < g(x) in [c, b], then the area of the regions bounded curve
= Area of the region PAQCP + Area of the region QDRBQ
= \(\int_{a}^{c}\) (f(x) – g(x)) dx + \(\int_{c}^{b}\) (g(x) – f(x)) dx.
1. Area Under Simple Curves
(i) Area of the region bounded by the curve y = f(x), x-axis and the lines x = a and x = b (b > a) is given by the formula:
Area = \(\int_{a}^{b}\) y dx = \(\int_{a}^{b}\) f(x) dx.
(ii) Area of the region bounded by the curve x = g(y), y-axis and the lines y = c, y = d is given by the formula:
Area = \(\int_{c}^{d}\) x dy = \(\int_{c}^{d}\) g(y) dy.
2. Area Between two Curves
(i) Area of the region enclosed between two curves y = f(x), y = g(x) and the lines x = a, x = b is
\(\int_{a}^{b}\) [f(x) – g(x)] dx, where f(x) ≥ g(x) in [a, b].
(ii) If f(x) ≥ g(x) in [a, c] and f(x) ≤ g(x) in [c, b], a < c < b, then we write the area as:
Area = \(\int_{a}^{c}\) [f(x) – g(x)] dx + \(\int_{c}^{b}\) [g(x) – f(x)] dx.
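As a concrete instance of the two-curve formula (my own example, not from the notes): take f(x) = x and g(x) = x² on [0, 1], where f(x) ≥ g(x). The exact area is \(\int_{0}^{1}\) (x – x²) dx = 1/2 – 1/3 = 1/6, which a quick midpoint-rule integration confirms:

```python
def area_between(f, g, a, b, n=100000):
    """Midpoint-rule approximation of the integral of (f(x) - g(x))
    over [a, b], valid where f(x) >= g(x) on the interval."""
    h = (b - a) / n
    return sum((f(a + (i + 0.5) * h) - g(a + (i + 0.5) * h)) * h
               for i in range(n))

area = area_between(lambda x: x, lambda x: x * x, 0.0, 1.0)
# Exact value: 1/2 - 1/3 = 1/6
```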
matematicasVisuales | Plane developments of geometric bodies (8): Cones cut by an oblique plane
A cone can be cut by an oblique plane.
The main interest of this page is to see how a cone cut by an oblique plane can be developed into a plane.
This is another example:
Durer was the first who published in German a method to draw ellipses as cone sections.
Durer made a mistake when he explained how to draw ellipses. We can prove, using only basic properties, that the ellipse does not have an egg shape.
Every ellipse has two foci and if we add the distance between a point on the ellipse and these two foci we get a constant.
Transforming a circle we can get an ellipse (as Archimedes did to calculate its area). From the equation of a circle we can deduce the equation of an ellipse.
In his book 'On Conoids and Spheroids', Archimedes calculated the area of an ellipse. We can see an intuitive approach to Archimedes' ideas.
In his book 'On Conoids and Spheroids', Archimedes calculated the area of an ellipse. It si a good example of a rigorous proof using a double reductio ad absurdum.
An Ellipsograph is a mechanical device used for drawing ellipses.
If a straight-line segment is moved in such a way that its extremities travel on two mutually perpendicular straight lines then the midpoint traces out a circle; every other point of the line traces
out an ellipse.
We study different cylinders cut by an oblique plane. The section that we get is an ellipse.
Plane net of pyramids cut by an oblique plane.
Plane net of pyramids and pyramidal frustrum. How to calculate the lateral surface area.
We study different cylinders and we can see how they develop into a plane. Then we explain how to calculate the lateral surface area.
Plane nets of prisms with a regular base with different side number cut by an oblique plane.
We study different prisms and we can see how they develop into a plane net. Then we explain how to calculate the lateral surface area.
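The development of a full cone mentioned above is easy to quantify: unrolled, the lateral surface is a circular sector of radius s (the slant height) whose arc length equals the base circumference 2πr, so the sector angle is 2πr/s and the lateral area is πrs. A small sketch (my own example) checking this for a 3-4-5 cone:

```python
from math import pi, hypot

def cone_development(r: float, h: float):
    """Return (slant height, sector angle in radians, lateral area) for
    the plane development of a right circular cone of base radius r
    and height h. The sector's radius is the slant height, and its arc
    length equals the base circumference 2*pi*r."""
    s = hypot(r, h)            # slant height
    angle = 2 * pi * r / s     # sector angle so that arc = 2*pi*r
    area = pi * r * s          # lateral area, equivalently 0.5*angle*s**2
    return s, angle, area

s, angle, area = cone_development(3.0, 4.0)   # slant height 5, area 15*pi
```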
Dynamic Programming Approach - (Enumerative Combinatorics) - Vocab, Definition, Explanations | Fiveable
Dynamic Programming Approach
from class:
Enumerative Combinatorics
The dynamic programming approach is a powerful algorithmic technique used to solve complex problems by breaking them down into simpler subproblems and storing the results of these subproblems to
avoid redundant computations. This method is particularly useful in combinatorial problems where overlapping subproblems occur, enabling efficient calculation of values like Bell numbers and Stirling
numbers of the second kind. By leveraging past computed results, this approach significantly reduces the time complexity compared to naive recursive methods.
5 Must Know Facts For Your Next Test
1. The dynamic programming approach can be used to compute Bell numbers through a recursive relation based on partitioning sets, making it efficient and effective.
2. For Stirling numbers of the second kind, the dynamic programming approach utilizes a two-dimensional array where each entry corresponds to the number of ways to partition 'n' objects into 'k'
non-empty subsets.
3. Dynamic programming transforms exponential time complexity problems into polynomial time complexity problems, drastically improving efficiency.
4. This approach is particularly suited for optimization problems and counting problems in combinatorics where overlapping subproblems exist.
5. By using previously calculated results, dynamic programming avoids unnecessary recalculations, leading to significant performance gains in large-scale combinatorial calculations.
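Both recurrences mentioned in the facts above fit in a few lines of bottom-up code. Here S(n, k) uses the standard recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1) in a two-dimensional table, and the Bell numbers are obtained as row sums B(n) = Σ_k S(n, k) (one of several equivalent DP formulations):

```python
def stirling2(n: int, k: int) -> int:
    """Stirling numbers of the second kind via the DP recurrence
    S(n, k) = k*S(n-1, k) + S(n-1, k-1), with base case S(0, 0) = 1."""
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, min(i, k) + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

def bell(n: int) -> int:
    """Bell number B(n): the number of set partitions of n elements,
    computed as the sum of S(n, k) over all k."""
    return sum(stirling2(n, k) for k in range(n + 1))
```

Each table entry is computed once from previously stored entries, which is exactly the overlapping-subproblem saving the definition describes.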
Review Questions
• How does the dynamic programming approach optimize the computation of Bell numbers?
□ The dynamic programming approach optimizes the computation of Bell numbers by using a recursive relation that expresses each Bell number as a sum of previous Bell numbers, effectively storing
these values in an array. This way, instead of recalculating Bell numbers multiple times, each result is computed once and reused, which leads to a significant reduction in time complexity
from exponential to polynomial. This makes calculating larger Bell numbers feasible.
• Compare and contrast how dynamic programming is applied in calculating Stirling numbers of the second kind versus Bell numbers.
□ When calculating Stirling numbers of the second kind using dynamic programming, a two-dimensional array is utilized where each element represents the number of ways to partition 'n' elements
into 'k' subsets. In contrast, Bell numbers are computed using a single array where each entry corresponds to the previous computed values. Both methods leverage previously stored results but
differ in their data structure and recursive relations due to the nature of the combinatorial problem being solved.
• Evaluate the impact of dynamic programming on solving combinatorial problems, specifically regarding its effect on computational efficiency and problem-solving capabilities.
□ Dynamic programming has a profound impact on solving combinatorial problems by transforming them from exponential time complexity challenges into manageable polynomial time complexities. This
not only enhances computational efficiency but also expands problem-solving capabilities by enabling calculations that would be infeasible with naive approaches. For instance, without dynamic
programming, calculating larger Bell or Stirling numbers could be impractically slow. Hence, this approach not only optimizes existing algorithms but also opens new avenues for exploring
complex combinatorial structures.
Is measurement tied to mathematics? - The Handy Math Answer Book
Mathematics Throughout History
Development of Weights and Measures
Is measurement tied to mathematics?
Yes, measurement is definitely tied to mathematics. In particular, the first steps toward mathematics used units (and eventually numbers) to describe physical quantities. There had to be a way to add
and subtract the quantities, and most of those crude “calculations” were based on fundamental mathematics. For example, in order to trade horses for gold, merchants had to agree on how much a certain
amount of gold (usually as weight) was worth, then translate that weight measurement into their barter system. In other words, “x” amount of gold would equal “y” amount of horses.
ball mill components and functions
A ball mill is a type of grinder used to grind or blend materials for use in mineral dressing processes, ... For systems with multiple components, ball milling has been shown to be effective in
increasing solid-state chemical ... A rock tumbler functions on the same principle. Ball mills are also used in pyrotechnics and the manufacture of ...
WhatsApp: +86 18838072829
The Planetary Ball Mill PM 200 is a powerful benchtop model with 2 grinding stations for grinding jars with a nominal volume of 12 ml to 125 ml. The extremely high centrifugal forces of Planetary
Ball Mills result in very high pulverization energy and therefore short grinding times. The PM 200 can be found in virtually all industries where the ...
Ball mills. The ball mill is a tumbling mill that uses steel balls as the grinding media. The length of the cylindrical shell is usually 1-1.5 times the shell diameter (Figure ). The feed can be dry,
with less than 3% moisture to minimize ball coating, or slurry containing 20-40% water by weight.
If a ball mill uses little or no water during grinding, it is a 'dry' mill. If a ball mill uses water during grinding, it is a 'wet' mill. A typical ball mill will have a drum length that is 1 or
1.5 times the drum diameter. Ball mills with a drum length to diameter ratio greater than 1.5 are referred to as tube mills.
Energy consumption for ball mills is a function of many factors: the physical properties of the ground material (its specific gravity and hardness); the degree of drum filling by grinding balls;
the number of drum rotations, etc. Ball mills have low efficiency: no more than 15%. Energy is mainly consumed on the wear of grinding balls and ...
Objectives. At the end of this lesson students should be able to: explain the grinding process; distinguish between crushing and grinding; compare and contrast different types of equipment and
their components used for grinding; identify key variables for process control; and describe design features of grinding equipment (SAG, ball, and rod mills).
The main parts of the ball mill include the feeding part, discharging part, rotary part, and transmission part (reducer, small gear, motor, electrical control), among other components. The hollow
shaft is made of cast steel, the liner is detachable, the slewing gear is hobbed from a casting, and the cylinder body is fitted with a wear-resistant liner, giving good resistance to wear.
ball mills from SBM China. A ball mill is a grinding machine used to grind and blend materials for use in mineral processing, ceramics, and pyrotechnics. The ball mill works by rotating a
cylinder ...
The ball mill noise is above 110 dB, while the roller press is about 80 dB. The roller press is small and light, has a small footprint, and is easy to install as a whole unit. Note: due to the large
force on the roller, the roller press has some problems, such as wear of the roller surface; the bearing is easily damaged, and ...
Attrition: reduces the size of the materials when they collide under the heavy weight of the balls. Construction: the ball mill grinder consists of the following parts. Cylinder: the cylinder is a
hollow shell that rotates about its horizontal axis; it can be made of porcelain, metal, or rubber, and its length is slightly greater than its diameter.
Describe the components of a ball mill. Explain their understanding of ball mill operation. Explain the role of critical speed and power draw in design and process control. ... Mill power draw
(P_M) is a function of mill capacity and diameter: P_M = Mill Constant * (Mill Diameter)^n, where n = 0.3 to 0.5.
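The "critical speed" mentioned in the objectives is the rotation rate at which centrifugal force just pins the charge to the shell wall, i.e. where gravity equals the centripetal requirement, m·g = m·ω²·R. Converting ω = √(g/R) to rev/min gives the standard relation N_c ≈ 42.3/√D for D in metres. A sketch of that derivation (my own illustration, not from this text):

```python
from math import sqrt, pi

G = 9.81  # gravitational acceleration, m/s^2

def critical_speed_rpm(diameter_m: float) -> float:
    """Critical speed of a tumbling mill: the speed at which a ball at
    the shell wall satisfies m*g = m*omega^2*R, i.e. omega = sqrt(g/R).
    In rev/min this works out to about 42.3/sqrt(D) for D in metres."""
    radius = diameter_m / 2.0
    omega = sqrt(G / radius)            # rad/s
    return omega * 60.0 / (2.0 * pi)    # rev/min
```

Mills are typically run at a fraction of this value so the charge tumbles and grinds instead of centrifuging against the shell.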
The ball mill pulveriser is basically a horizontal cylindrical tube rotating at low speed on its axis, whose length is slightly more than its diameter. The inside of the cylinder shell is fitted with
heavy cast liners and is filled with cast or forged balls for grinding, to approximately 1/3 of the diameter. Raw coal to be ground is fed from the ...
What is a ball mill? A ball mill is a size reduction or milling equipment which uses two grinding mechanisms, namely, impact and shear. 1 Unlike other size reduction equipment, such as breakers
and jaw crushers, ball mills are capable of producing powders and particulate materials of mean particle sizes within the 750 to 50 micron (µm) range.. Ball mills serve both dry and wet milling
Pharmaceutical uses of Hammer Mill. 1. It is used in pharmaceutical industries to process wet or dry granulations and disperse powder mixtures. 2. It is used in milling pharmaceutical raw
materials, herbal medicine, and sugar. 3. It is used in powdering of barks, leaves, and roots of medicinal plants. 4.
A ball mill, also known as a pebble mill or tumbling mill, is a milling machine that consists of a hollow cylinder containing balls, mounted on a metallic frame such that it can be rotated along its
longitudinal axis. The balls, which could be of different diameters, occupy 30-50% of the mill volume, and their size depends on the feed and mill size. ...
1. The SAG mill is the primary grinding tool and is used before the other mills; the ball mill is secondary and is used after the SAG mill. 2. The SAG mill breaks the raw material into pieces for
further grinding; the ball mill grinds those pieces into powder-like structures. 3.
The cement mill is another essential piece of equipment in a cement plant. After raw material crushing, the cement mill plays a vital role in the further cement manufacturing process. The cement
ball mill, vertical cement mill, and cement roller press are common types of cement grinding plant. The cement mill has two functions of the cement ...
22 May, 2019. The ball mill consists of a metal cylinder and a ball. The working principle is that when the cylinder is rotated, the grinding body (ball) and the object to be polished (material)
installed in the cylinder are rotated by the cylinder under the action of friction and centrifugal force. At a certain height, it will automatically ...
There are several types of grinding mills and pulverizers available to industrial buyers. These types include: the tumbling reservoir of a ball, tube, roller, media, or vertical mill uses the
impact and friction of the feed material, often supplemented by the action of stone or metal shapes. Hammer and impactor pulverizers use large hydraulic ...
The vertical roller mill is a kind of grinding machine for cement, raw material, cement clinker, slag and coal slag. It has the benefits of simple structure and low cost of manufacture and use.
Vertical roller mills have many different forms, but they work basically the same. All of these forms come with a roller (or the equivalent of roller ...
normal operation of ball mill. Its functions are: first, separating the grinding body; ... All parts cooperate with each other and work together to complete the firefighting task.
CERAMIC LINED BALL MILL. Ball Mills can be supplied with either ceramic or rubber linings for wet or dry grinding, for continuous or batch type operation, in sizes from 15″ x 21″ to 8′ x 12′.
High density ceramic linings of uniform hardness make possible thinner linings and greater and more effective grinding volume.
Mill discharge is generally less than 5% + 4 mesh in wet open circuit operations; for dry grinding work, reduce the capacities indicated by approximately 30% to 50%. Rod Mill Working Principle and
Components. A rod mill has for its working principle an interior filled with grinding media, in this case STEEL RODS. These rods run the length of the machine ...
A tumbling mill is a collective name for the generally known ball mills, rod mills, tube mills, pebble mills and autogenous mills. ... according to Fig. 12/4, two parts separated by the
equipotential surface: the lower part elevates in a relative resting state together ... function of rpm. A flat maximum is indicated at 32/fD. Combined with ...
Polished stainless steel (grade 304) grinding jars for planetary ball mills. Each jar set includes a grinding jar, lid, silicone sealing gasket and a mix of sizes of stainless steel grinding balls.
Grade 304 is the most widely used austenite steel, also known as 18/8 for its composition of 18% chromium and 8% nickel. Related Products: Grinding Jar.
Ball End Mills; Chamfer End Mills; Roughing End Mills; Indexable Ball End Mills Inserts; Shell Mill Kits; ... and fixtures required to perform a repetitive function that later is called machine
operation. Some machine setup functions can be done with the door open but are limited to "hold to run". ... Some parts or kits manufactured or ...
Ball Mill Parts Manufacturing Service. AGICO has the ability to produce and process various steel castings, ... The function of the ball mill liner plate is to protect the barrel of the ball mill
from the direct impact of the grinding media and the materials. Different forms of lining plates can also be used to adjust the movement state of the ...
Various components and machines are used for grinding applications, and that's where the high-energy ball mill comes into play. Overview of Ball Mill: A ball mill, also known as a tumbling or pebble mill, is
milling equipment that encompasses a cylinder containing balls and is mounted on a metallic frame that can be rotated along its longitudinal axis ...
Rod MillBall Mill Circuit: Consider the setup in Example along with the following additional data: Grindability index for the ball mill = kWh/t. Product size from the ball mill = 150 μm.
Determine the size of the ball mill operated in closed circuit. Solution Step 1. The discharge from the rod mill is the feed to the ball mill.
Ball and tube mills: a ball mill is a pulverizer that consists of a horizontal rotating cylinder, up to three diameters in length, containing a charge of tumbling or cascading steel balls, pebbles, or rods. A tube
mill is a revolving cylinder of up to five diameters in length used for fine pulverization of ore, rock, and other such materials ...
A crusher is a machine designed to reduce large rocks into smaller rocks, gravel, sand or rock dust.. Crushers may be used to reduce the size, or change the form, of waste materials so they can
be more easily disposed of or recycled, or to reduce the size of a solid mix of raw materials (as in rock ore), so that pieces of different composition can be differentiated.
The vertical roller mill (VRM) is a type of grinding machine for raw material processing and cement grinding in the cement manufacturing industry. In recent years, the VRM cement mill has been equipped in
more and more cement plants around the world because of its features like high energy efficiency, low pollutant generation, small floor area, etc. The VRM cement mill has a more complex ...
Can they be equal?
Can you find rectangles where the value of the area is the same as the value of the perimeter?
Can They be Equal? printable sheet
Charlie has been drawing rectangles:
The first rectangle has a perimeter of 30 units and an area of 50 square units.
The second rectangle has a perimeter of 24 units and an area of 20 square units.
Charlie wondered if he could find a rectangle, with a side of length 10 units, whose perimeter and area have the same numerical value.
Can you find a rectangle that satisfies this condition?
Alison says "There must be lots of rectangles whose perimeter and area have the same numerical value."
Charlie is not so sure.
Can you find more examples of such rectangles?
Can you come up with a convincing argument to help Charlie and Alison decide if Alison is right?
Click here for a poster of this problem.
Getting Started
Find the dimensions of the following rectangles:
│Dimensions │Area│Perimeter │
│ │9 │20 │
│ │16 │20 │
│ │21 │20 │
│ │24 │20 │
│ │25 │20 │
Draw the rectangles.
What do you notice about the shapes of rectangles with a fixed perimeter as their areas increase?
Find the dimensions of the following rectangles:
│Dimensions │Area│Perimeter │
│ │24 │20 │
│ │24 │22 │
│ │24 │28 │
│ │24 │50 │
Draw the rectangles.
What do you notice about the shapes of rectangles with a fixed area as their perimeters increase?
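If you want to check your answers to these warm-up tables, here is a short Python sketch (the function name is mine). It recovers the side lengths from a given perimeter P and area A by solving l + w = P/2 and l × w = A, i.e. the quadratic t² − (P/2)t + A = 0:

```python
import math

def sides(perimeter, area):
    """Side lengths of a rectangle with the given perimeter and area,
    or None if no such rectangle exists."""
    s = perimeter / 2            # l + w
    disc = s * s - 4 * area      # equals (l - w)^2
    if disc < 0:
        return None
    r = math.sqrt(disc)
    return ((s - r) / 2, (s + r) / 2)

print(sides(20, 21))  # (3.0, 7.0)
print(sides(28, 24))  # (2.0, 12.0)
```

When the discriminant is negative, no rectangle with that perimeter can be big enough to have that area.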
Student Solutions
We had lots of solutions to this problem, so well done to everyone who submitted an answer!
David, Noah, Felix, Tom, Amy and Laura from Bristol Grammar School worked on this problem together. David says:
We found that the 6 by 3 rectangle works, because 6+3+6+3=18 and 6x3=18, so this has equal area and perimeter.
Laura and Amy say:
We have been working systematically to list all the possible rectangles, e.g.
1x1 1x2 1x3 1x4 ...
2x1 2x2 2x3 2x4 ...
3x1 3x2 3x3 3x4 ...
and deciding whether there are cases with equal area and perimeter.
We noticed a diagonal pattern for where the perimeter becomes less than the area.
We are still working on a solution!
Well done to Gourav from India, Radha from Stanhope, Kirstey from da Vinci College and Caitlin from Marshfield Primary who all correctly found rectangles with the same area and perimeter, such as 4x4
and 3x6.
Hannah from Leicester Girls High made some good notes on even and odd numbers:
I realised that at least one of the length or the width of the rectangle has to be even. The perimeter will always be even, because the length is multiplied by 2, making it even, and is added to the
width which has been multiplied by 2, also making it even. But if both the length and the width are odd, then the area will be odd, meaning that it is impossible for the perimeter to be the same as
the area.
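Hannah's parity argument is easy to confirm with a quick brute-force check (a small Python sketch):

```python
# For every rectangle with two odd whole-number sides, the area is odd while
# the perimeter is even -- so they can never be equal, just as Hannah argued.
for length in range(1, 20, 2):
    for width in range(1, 20, 2):
        area = length * width
        perimeter = 2 * (length + width)
        assert area % 2 == 1 and perimeter % 2 == 0
print("checked all odd-by-odd rectangles up to 19 x 19")
```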
Click here to see how Dexter from Wilson's School used diagrams to understand different rectangles and how he also used some algebra to find examples of rectangles where the perimeter is equal to the area.
Lucy from Belgium noticed visually what was happening to rectangles with whole number side lengths:
Each square along the edges accounts for one unit of perimeter, except for the four corner squares, which account for two units of perimeter but only one unit of area. This means the perimeter is 4
more than the number of edge squares, so there will have to be some squares in the middle that are only counted for the area and not the perimeter.
Bhavik from Queen Elizabeth's School for Boys also considered rectangles with whole number side lengths, and came to the same conclusion. Click here to see his very clear explanation of why there can
only be two such rectangles where the area is equal to the perimeter.
Nathan from Rushmore Primary made an attempt at using algebra, which was continued by Vicki from Farnborough Hill, Eliza and Jacqueline from Chevalier College, Australia and Tom. Vicki noted that for
a rectangle x by y, the area is equal to the perimeter if: $$ \begin{align*} xy &= 2x+2y \\ xy-2y &= 2x \\ y(x-2) &= 2x\\ y &= \frac {2x}{x-2} \end{align*} $$
Niharika then looked at the possible values of x and y from this equation: $$ \begin{align*} y &= \frac {2x}{x-2} \text{ and } x, y > 0 \\ \Rightarrow x-2 &> 0 \\ x &> 2 \end{align*} $$ There are an
infinite number of these rectangles.
Substituting in different values of x and y and checking the answers are correct is a good problem solving skill - well done to Krystof and Mimas who did this. Also Shashank from India drew a graph
of the possible x and y values:
Esther commented that:
Although you can always put a number into the formula I used, $2x+2y=xy$, the end result is not always an integer.
In fact, the only rectangles like this with integer side lengths are a 3x6 rectangle and a 4x4 square.
Any other number substituted into the formula as x will give a non-integer output for y.
For example, if you put in 5 as x, your value for y would be 10/3.
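Esther's formula can be tested directly; a short Python sketch (the helper name is mine):

```python
def width_for(length):
    """Width y making area equal perimeter for a rectangle of the given length (> 2)."""
    if length <= 2:
        raise ValueError("length must be greater than 2")
    return 2 * length / (length - 2)

for x in (3, 4, 5, 6, 10):
    y = width_for(x)
    print(x, y, x * y, 2 * (x + y))  # the last two values agree every time
```

For x = 3 this recovers the 3x6 rectangle, for x = 4 the 4x4 square, and for x = 5 the non-integer width 10/3 mentioned above.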
Well done to Miss Gerrard's 2BE Class from Perth High School, who noticed some interesting connections between area and perimeter:
We all found 2 rectangles that worked fairly quickly. They were the square with sides of 4 (A=P=16) and the rectangle with sides 3 and 6 (A=P=18).
Then we decided that we would try using decimals to see if we could find any more as we were getting stuck. One of us found out that a rectangle of sides 10 and 2.5 worked as it gave us an area and
perimeter of 25.
After we found this rectangle someone else in the class managed to spot a pattern linking the numbers. Start at the square with sides 4 by 4.
Then look at the rectangle with side length of 3 and width of 6.
The difference between the lengths 3 and 4 is 1, the difference between the widths 4 and 6 is 2.
To get from 3 to the next length you halve the difference between 4 and 3 and subtract this from 3, to get 2.5.
To get from 6 to the next width you double the difference between 4 and 6 and add this to 6, to get 10.
We found out that if you continue halving the difference in lengths and subtracting this and doubling the width and adding it on you can find many more rectangles with equal perimeter and area:
$4$ by $4$
$3$ by $6$
$2\frac{1}{2}$ by $10$
$2\frac{1}{4}$ by $18$
$2\frac{1}{8}$ by $34$
Well done to you all.
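The class's halving-and-doubling pattern can be checked against the area-equals-perimeter condition with a small sketch (variable names are mine); every rectangle it produces also satisfies the formula y = 2x/(x-2) above:

```python
# Start from the 3 x 6 rectangle; each step halves the decrease in length
# and doubles the increase in width, as the class described.
length, width = 3.0, 6.0
step_l, step_w = 1.0, 2.0          # differences from the 4 x 4 square
rectangles = [(4.0, 4.0), (length, width)]
for _ in range(3):
    step_l, step_w = step_l / 2, step_w * 2
    length, width = length - step_l, width + step_w
    rectangles.append((length, width))

for l, w in rectangles:
    assert abs(l * w - 2 * (l + w)) < 1e-9   # area equals perimeter
print(rectangles)  # ends with (2.25, 18.0) and (2.125, 34.0)
```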
Teachers' Resources
Why do this problem?
Sometimes area and perimeter of rectangles are taught separately, and are often confused. In this problem students consider the relationship between them and are challenged to engage in some
sophisticated mathematical thinking.
Possible approach
This printable worksheet may be useful: Can They Be Equal.
Show the students this image and ask them to work out the area and perimeter of each rectangle.
Collect answers.
"That's interesting, the first rectangle has an area that is numerically greater than the perimeter, but the second one has an area that is numerically less than the perimeter. I wonder if you could
find a rectangle whose area and perimeter are numerically the same?"
Set students to work on this challenge, perhaps encouraging them to work in pairs so they can share ideas on how to proceed.
"If you manage to find a rectangle that satisfies my conditions, see if you can find a few more."
Circulate and observe the methods and reasoning students are using. Look out for students who:
• fix one attribute (side length, area, perimeter) and vary the others using trial and improvement
• fix one attribute and use algebra to solve for the other attributes
• write an algebraic expression for area and perimeter, equate them, and substitute values into the resulting equation
For students who are struggling to get started:
"What is the same about the two rectangles we started with?"
"What could you change?"
"How do the area and perimeter change as you change the height of the rectangle?"
Once everyone has had a chance to find a few rectangles that satisfy the condition, collect together the dimensions on the board.
Invite students to share any different strategies you observed them using as they were working.
"I'd like you to have a go at finding a few more rectangles, using several different strategies."
"While you are working, think about how many different rectangles we could possibly find."
Finish off by asking students to share their ideas about how many different rectangles satisfy the criteria, together with convincing arguments about why there are infinitely many.
Possible support
A more scaffolded introduction to the problem:
Tell the students you are thinking of a rectangle. Ask them to work out its dimensions if:
the area is 24 and the perimeter is 20
the area is 24 and the perimeter is 22
the area is 24 and the perimeter is 28
the area is 24 and the perimeter is 50
Record the solutions on the board. Ask the students to comment on anything they notice. (This might be to do with the shape of the rectangles, or perhaps the evenness of the perimeters.)
Repeat the process keeping the perimeter fixed this time, to 20.
Can they find the dimensions of rectangles with areas of 9, 16, 21, 24, 25?
Another activity to help students to become fluent in working out the different attributes of rectangles:
Students could make up their own card matching game where each set contains three cards about a specific rectangle, one with area, one with perimeter and one with the dimensions. Students have to
find all three in a set. Each student produces 8 sets, shuffles them and hands them on to their neighbour to sort.
Possible extension
Ask students to consider other polygons with numerically equal areas and perimeters - those who have met Pythagoras' theorem could investigate right-angled and isosceles triangles, and those who have
met trigonometry could work on regular polygons.
Students could be invited to consider cuboids whose surface area is numerically equal to their volume.
Question ID - 152851 | SaraNextGen Top Answer
A ray of light travels from an optically denser to rarer medium. The critical angle for the two media is
When the ray passes into the rarer medium, the deviation is
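The answer choices did not survive in this copy, but the underlying relation is standard: from Snell's law, the critical angle C for light going from a denser medium (refractive index μ) into a rarer one satisfies sin C = μ_rarer/μ_denser, i.e. sin C = 1/μ when the rarer medium is air. A small Python check (function name is mine):

```python
import math

def critical_angle_deg(n_denser, n_rarer=1.0):
    """Critical angle in degrees for light going from the denser to the rarer medium."""
    return math.degrees(math.asin(n_rarer / n_denser))

print(round(critical_angle_deg(1.5), 1))  # glass to air: 41.8 degrees
```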
Results for high frequency approximations (Wave propagation) | Zemax Community
I am a new user of the OpticStudio software, and it has aroused my curiosity!
Wave propagation can be solved with several models, and the software offers several possibilities. The model based on astigmatic Gaussian beams (beamlets) is very interesting.
A few years ago, however, I found this relevant paper:
Gosse L., James F., Convergence results for an inhomogeneous system arising in various high frequency approximations, Numer. Math. 90: 721–753 (2002).
It is quite hard going because the mathematical formulations are theoretical. However, the solution method offers several possibilities for finding the phase and the amplitude of the light wave together.
Numerical values for the phase and the amplitude of the smooth wedge.
Therefore, do we have this possibility with OpticStudio? For instance, can it compute the phase in order to display transparency images?
Thank you in advance for your answer.
Measure the Closeness Centrality of Nodes in a Graph
Introduction: Closeness Centrality in Graphs
Closeness centrality is a measure of how central a node is in a graph. It is computed as the inverse of the average shortest path length to all other nodes in the graph. In other words, a node with
high closeness centrality is close to all other nodes in the graph, making it an important node for information dissemination, resource allocation, or network connectivity.
This tutorial will walk you through the steps to measure the closeness centrality of nodes in a graph, using a real-world example. We will also provide a code implementation in Python, along with an
explanation of the code.
Real-world Examples and Scenarios
Closeness centrality is used in various real-world applications, such as:
1. Social network analysis: Identifying influencers or key individuals in a social network, who can spread information quickly and efficiently.
2. Transportation networks: Identifying central nodes in a transportation network that can be used to optimize routing and reduce travel times.
3. Biological networks: Identifying key proteins or genes in a biological network, which may have important functional roles or be potential drug targets.
Real-world Scenario: Identifying Influencers in a Social Network
Consider a social network where nodes represent individuals and edges represent friendships between them. We want to identify the most influential individuals in the network, who can quickly spread
information or influence others. This problem can be framed as measuring the closeness centrality of nodes in the graph.
Problem Statement and Definition
Given a graph G = (V, E) with nodes V and edges E, the closeness centrality C_c(v) of a node v is defined as the inverse of the average shortest path length from v to all other nodes in the graph:
C_c(v) = 1 / (Σ_{u ∈ V, u ≠ v} d(u, v) / (n - 1))
where d(u, v) is the shortest path length between nodes u and v, and n is the total number of nodes in the graph.
The problem is to compute the closeness centrality of all nodes in the graph and identify the node(s) with the highest closeness centrality.
Real-world Problem to Code Solution
We will now implement a solution in Python to compute the closeness centrality of nodes in a social network graph. The graph will be represented as an adjacency list, and we will use the
breadth-first search (BFS) algorithm to compute the shortest path lengths between nodes.
from collections import deque

def bfs_shortest_path(graph, start_node):
    # Initialize the distances and the BFS queue
    distances = {node: float('inf') for node in graph}
    distances[start_node] = 0
    queue = deque([start_node])

    # Iterate through the queue
    while queue:
        current_node = queue.popleft()

        # Check neighbors of the current node
        for neighbor in graph[current_node]:
            # Update the distance and enqueue the neighbor when a shorter path is found
            if distances[current_node] + 1 < distances[neighbor]:
                distances[neighbor] = distances[current_node] + 1
                queue.append(neighbor)

    return distances
def closeness_centrality(graph):
centrality = {}
num_nodes = len(graph)
for node in graph:
# Compute shortest path lengths from the node to all other nodes
shortest_path_lengths = bfs_shortest_path(graph, node).values()
# Compute the average shortest path length and closeness centrality
avg_path_length = sum(shortest_path_lengths) / (num_nodes - 1)
centrality[node] = 1 / avg_path_length
return centrality
To test the code with a sample social network graph, we can define the graph as an adjacency list and call the closeness_centrality function:
# Sample social network graph as an adjacency list
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'C', 'D'],
    'C': ['A', 'B', 'D'],
    'D': ['B', 'C', 'E'],
    'E': ['D'],
}

# Compute closeness centrality for every node
centrality = closeness_centrality(graph)
print(centrality)
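As a quick sanity check, node 'B''s value can be worked out by hand, independently of the functions above. In the sample graph the shortest hop counts from B are A: 1, C: 1, D: 1, E: 2:

```python
# Hand-computed shortest path lengths from node 'B' in the sample graph
hops_from_b = {'A': 1, 'C': 1, 'D': 1, 'E': 2}
avg = sum(hops_from_b.values()) / len(hops_from_b)   # (1 + 1 + 1 + 2) / 4 = 1.25
print(1 / avg)  # 0.8 -- a correct closeness_centrality(graph)['B'] should agree
```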
Intuitions and Analogies
The code solution consists of two main functions: bfs_shortest_path and closeness_centrality. The bfs_shortest_path function computes the shortest path lengths from a given start node to all other
nodes in the graph using the BFS algorithm. The closeness_centrality function iterates through all nodes in the graph, computes the average shortest path length from each node to all other nodes, and
calculates its closeness centrality.
The intuition behind using BFS for computing shortest path lengths is that BFS explores nodes in increasing order of distance from the start node. This ensures that we find the shortest paths to all
other nodes in the graph.
Extending the Solution to Other Real-world Problems
The code solution provided can be easily adapted to solve other real-world problems related to closeness centrality, such as identifying central nodes in transportation networks or key proteins in
biological networks. By representing the problem as a graph and modifying the adjacency list accordingly, the closeness_centrality function can be used to compute the closeness centrality of nodes in
any graph.
neutron structure functions measured with spectator
Svyatoslav Tkachenko
B.S. June 1997, Odessa State Polytechnic University
M.S. May 2004, University of Virginia
A Dissertation Submitted to the Faculty of
Old Dominion University in Partial Fulfillment of the
Requirement for the Degree of
December 2009
Approved by:
Sebastian Kuhn (Director)
John Adam
Gail Dodge
Rocco Schiavilla
Leposava Vuskovic
Svyatoslav Tkachenko
Old Dominion University, 2009
Director: Dr. Sebastian Kuhn
We know much less about the neutron than the proton due to the absence of free
neutron targets. Neutron information has to be extracted from data on nuclear targets like deuterium. This requires corrections for off-shell and binding effects which
are not known from first principles and therefore are model-dependent. As a consequence, the same data can be interpreted in different ways, leading to different
conclusions about important questions such as the value of the d/u quark ratio at
large momentum fraction x. The Barely Off-shell NUcleon Structure (BONUS) experiment at Jefferson Lab addressed this problem by tagging spectator protons in
coincidence with inelastic electron scattering from deuterium. A novel compact radial time projection chamber was built to detect low-momentum, backward moving
protons, ensuring that the scattering took place on a loosely bound neutron. The
scattered electron was detected with Jefferson Lab’s CLAS spectrometer. Data were
taken at beam energies of 2, 4 and 5 GeV. Results on the extracted structure function
F2n of the neutron, both in the resonance and deep inelastic regions are presented.
Dependence of the results on the spectator kinematics, angle and momentum, is investigated. In addition, tests of the spectator model for different angles and momenta
are performed.
2010, by Svyatoslav Tkachenko, All Rights Reserved
I would like to thank those who contributed to this work. I would like to start with
my parents, Svetlana and Mihail Tkachenko. They started all this in Odessa, Ukraine
(then, it was Odessa, USSR) many years ago. They turned me into what I am now,
and this work would be impossible without them for many reasons. I would like to
thank my wife, Olga Cherepanova, who has been my help and inspiration for the last
several years. Special thanks to my son, Artiom Tkachenko, who worked hard on not
letting me get bored in the last months of my graduate work. And to conclude this
honorable list, I would like to thank my advisor, Sebastian Kuhn, a great person and
scientist: I learnt a lot from him; he was the kind of “boss” that everybody would
dream of, and I just hope that my future superiors are going to be like him.
Since I do not want to double the size of this thesis, I will stop listing names, but
I want to reiterate that I am thanking everybody who helped me in this research,
who helped bringing me up, who taught me something about physics, life, and life
in physics, and those who simply brightened one (or more) of my days with their
smiles. Greatest thanks to all of you, best wishes to those of you who are living, and
RIP to those who are no longer with us.
List of Tables
List of Figures

I Introduction
II Physics review
   II.1 Nucleon structure
   II.2 Scattering experiments
   II.3 Elastic form factors
   II.4 Resonant structure
   II.5 Deep inelastic scattering
      II.5.1 Deep inelastic scattering cross-section and structure functions
      II.5.2 Scaling and partons
   II.6 Quark-hadron duality
   II.7 Deuterium
      II.7.1 Static properties of the deuteron
      II.7.2 The deuteron wavefunction
      II.7.3 Deuteron in scattering experiments
   II.8 Tagged structure functions
      II.8.1 Spectator tagging
      II.8.2 Corrections to impulse approximation
      II.8.3 Alternative way of extracting F2 structure function
III Experimental setup
   III.1 Accelerator facility
   III.2 Hall B and CLAS
      III.2.1 Drift chambers
      III.2.2 Cherenkov counters
      III.2.3 Time of flight detector
      III.2.4 Electromagnetic calorimeter
      III.2.5 Target
      III.2.6 DVCS magnet
   III.3 Radial Time Projection Chamber
      III.3.1 Time projection chambers
   III.4 BONuS RTPC
IV Data analysis
   IV.1 Running conditions
   IV.2 Preliminary analysis
      IV.2.1 Drift chambers
      IV.2.2 Time of flight system
      IV.2.3 Forward electromagnetic calorimeter
      IV.2.4 RTPC calibration
      IV.2.5 Momentum corrections
      IV.2.6 RTPC momentum corrections
   IV.3 Cuts and corrections to the data
      IV.3.1 Experimental data
      IV.3.2 Accidental background subtraction
      IV.3.3 Simulated data
   IV.4 High level physics analysis
      IV.4.1 Experimental data
      IV.4.2 Simulated data
   IV.5 Presentation of data
   IV.6 Extraction of F2n
V Results
   V.1 Systematic errors
   V.2 Results and discussion
      V.2.1 W* dependence
      V.2.2 θpq dependence
   V.3 Summary

Partons: quarks and gluons
Some kinematic variables
Residues

BIBLIOGRAPHY
VITA
List of Tables

Neutron constants, from reference [6]
Proton constants, from reference [6]
Ground state properties of the deuteron [28], [31]
DVCS magnet dimensions
Supply settings and electrode voltages in the RTPC during operation of the experiment. The suffixes on the GEM label refer to the inner (i) and outer (o) surfaces of the GEMs. All voltages are of negative polarity and are referenced to ground. The table is taken from H. Fenker [68]
Triggers collected in the BONuS experiment
Beam energy values deduced from Hall A measurements, GeV
Uncertainties for missing energy and momentum spreads for 4 beam energies, GeV
Summary of quark properties (light quarks), from reference [6]
Summary of quark properties (heavy quarks), from reference [6]
The ratio of per nucleon cross-sections on iron and deuterium as a
function of Bjorken x. . . . . . . . . . . . . . . . . . . . . . . . . . . .
The electron scattering diagram, the sum of the lowest order electronphoton vertex and all amputated loop corrections. . . . . . . . . . . .
World data on the proton form factors. . . . . . . . . . . . . . . . . .
Values of GnM taken from ratio measurements on deuterium and polarized 3 He measurements. . . . . . . . . . . . . . . . . . . . . . . . .
World data on GnE . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Scattering process corresponding to the transverse helicity conserving
amplitude A1/2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Scattering process corresponding to the transverse helicity nonconserving amplitude A3/2 . . . . . . . . . . . . . . . . . . . . . . . . .
Scattering process corresponding to the longitudinal helicity nonconserving amplitude C1/2 . . . . . . . . . . . . . . . . . . . . . . . . .
Proton resonance transition amplitudes. . . . . . . . . . . . . . . . .
Neutron resonance transition amplitudes. . . . . . . . . . . . . . . . .
Inclusive electroproduction cross-section data from Jefferson Lab at Q2 = 1.5 (GeV/c)2 as a function of invariant mass squared.
An attempt to find the neutron resonance distribution as a simple difference between those of deuterium and the proton.
The eN → eN(*) process describing elastic scattering as well as resonance excitation.
The eN → eX process describing deep inelastic scattering.
The invariant mass (W) distribution.
The elastic scattering off a quasi-free quark.
The ratio of neutron to proton structure functions as a function of Bjorken x, extracted from SLAC proton and deuteron data [17], assuming different prescriptions for nuclear corrections.
Extracted F2 data in the nucleon resonance region for hydrogen and deuterium targets as functions of the Nachtmann scaling variable ξ.
Virtual photon scattering from a parton, leading twist.
Scheme of a nucleon-nucleon potential as a function of the distance r between nucleons.
The deuteron u and w reduced radial wave functions calculated with the Argonne v18 potential.
The deuteron S wave function in configuration space and in momentum space, calculated from the Argonne v18 potential.
The deuteron elastic structure function A(Q2) for Q2 > 1 (GeV/c)2.
The deuteron elastic structure function B(Q2).
The deuteron structure function F2D per nucleon at Q2 = 1.925 (GeV/c)2.
Two main diagrams contributing to the spectator reaction in the region of α > 1.
The ratio of nuclear spectral functions calculated in the light cone and instant form formalisms as a function of the light cone momentum fraction αs.
Ratio of the plane wave impulse approximation (PWIA) corrected for the target fragmentation (TF) to the pure PWIA calculation.
Ratio Rn ≡ F2n(eff)(W2, Q2, p2)/F2n(W2, Q2) of the bound to free neutron structure functions in the covariant spectator model.
Ratio Rn ≡ F2n(eff)(W2, Q2, p2)/F2n(W2, Q2) of the bound to free neutron structure functions in the relativistic quark spectral function approach.
Ratio of the bound to free neutron structure functions calculated in the instant form approach.
The αs dependence of the ratio of the light cone spectral function with FSI effects included, calculated in the DWIA framework, to that without FSI effects.
The debris-nucleon effective cross-section as a function of the longitudinal distance.
The momentum and angular dependence of the ratio of the spectral function calculated accounting for FSI to the spectral function calculated in the impulse approximation.
The schematics of the accelerator.
CLAS in Hall B.
CLAS, 2-dimensional view.
CLAS, 3-dimensional view.
Representation of a portion of the layout of a Region 3 chamber.
Vertical cut of the drift chambers transverse to the beam line.
Hexagonal cell drift lines with and without magnetic field.
Exploded view of one of the six CLAS EC modules [60].
Target tube with fixtures attached.
The classical TPC with gaseous sensitive volume.
BONuS data readout scheme.
An enlarged view of a GEM electrode.
Electric field lines and equipotential lines in GEM holes.
Simulation of Moeller tracks in the DVCS solenoid (S. Kuhn).
Schematics of the BONuS RTPC. See text for details.
Exploded view of the BONuS RTPC.
An RTPC event.
Residuals for six sectors before the alignment.
Residuals for six sectors after the alignment.
DC resolutions after the DC calibration.
The geometric mean in ADC counts for the fifth paddle of the first sector.
The RF offset vs the vertex z coordinate.
The ratio of logarithms of energy attenuation as reported by the left and right PMTs (ln AL / ln AR) vs the hit position x (in cm) along the scintillator.
Comparison of coordinates as reported by the CLAS and RTPC.
The dE/dx distribution of particles registered by the RTPC before the RTPC gain calibration.
The dE/dx distribution of particles registered by the RTPC after the RTPC gain calibration.
Invariant mass, W, distribution for the p(e,e′)p reaction before momentum corrections.
Raw and corrected invariant mass distributions for pre-selected 5-pass events.
Raw and corrected invariant mass distributions for inclusive 5-pass events.
Missing energy distributions before and after the CLAS momentum corrections.
z component of missing momentum distributions before and after the CLAS momentum corrections.
The difference between expected and measured momenta before the CLAS momentum corrections as a function of φ.
The difference between expected and measured momenta after the CLAS momentum corrections as a function of φ.
Momentum distributions and the difference between measured and true spectator momenta.
The electron distribution shown as a function of the azimuthal angle relative to the sector mid-plane and the polar angle.
The distribution of Δz = z_electron − z_spectator for 2 GeV events before the Δz cut was applied.
Inclusive W distributions for experimental and simulated data.
The W and W* distributions of the quasi-elastic simulation for the 4 GeV data.
The W and W* distributions of the quasi-elastic simulation for 5 GeV beam energy.
The W and W* distributions of the inelastic simulation for the 4 GeV data.
The W and W* distributions of the inelastic simulation for 5 GeV beam energy.
Raw data, raw data with subtracted accidental background, and elastic simulation cross-normalized with experimental data.
Model (lines) and measured effective (markers) F2n are shown as functions of x* for two Q2 bins for 5.254 GeV energy.
Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 2.140 GeV.
Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 2.140 GeV.
Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 2.140 GeV.
Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 2.140 GeV.
Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 2.140 GeV.
Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 2.140 GeV.
Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 2.140 GeV.
Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 2.140 GeV.
Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 2.140 GeV.
Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 2.140 GeV.
Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 2.140 GeV.
Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 2.140 GeV.
90 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 2.140 GeV.
91 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 2.140 GeV.
92 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 2.140 GeV.
93 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 2.140 GeV.
94 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 2.140 GeV.
95 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 2.140 GeV.
96 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 2.140 GeV.
97 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 2.140 GeV.
98 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 2.140 GeV.
99 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 2.140 GeV.
100 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 2.140 GeV.
101 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 2.140 GeV.
102 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 2.140 GeV.
103 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 2.140 GeV.
104 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 2.140 GeV.
105 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, W* from 1.00 to 1.35 GeV. The beam energy is 2.140 GeV.
106 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, W* from 1.35 to 1.60 GeV. The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are shown as a blue band.
107 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, W* from 1.60 to 1.85 GeV. The beam energy is 2.140 GeV.
108 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.22 to 0.45 (GeV/c)2, W* from 1.85 to 2.20 GeV. The beam energy is 2.140 GeV.
109 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, W* from 1.00 to 1.35 GeV. The beam energy is 2.140 GeV.
110 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, W* from 1.35 to 1.60 GeV. The beam energy is 2.140 GeV.
111 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.45 to 0.77 (GeV/c)2, W* from 1.60 to 1.85 GeV. The beam energy is 2.140 GeV.
112 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, W* from 1.00 to 1.35 GeV. The beam energy is 2.140 GeV.
113 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, W* from 1.35 to 1.60 GeV. The beam energy is 2.140 GeV.
114 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, W* from 1.60 to 1.85 GeV. The beam energy is 2.140 GeV.
115 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 4.217 GeV.
116 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 4.217 GeV.
117 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 4.217 GeV.
118 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 4.217 GeV.
119 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are shown as a blue band.
120 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 4.217 GeV.
121 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 4.217 GeV.
122 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 4.217 GeV.
123 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 4.217 GeV.
124 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 4.217 GeV.
125 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 4.217 GeV.
126 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 4.217 GeV.
127 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 4.217 GeV.
128 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 4.217 GeV.
129 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 4.217 GeV.
130 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 4.217 GeV.
131 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 4.217 GeV.
132 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 4.217 GeV.
133 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 4.217 GeV.
134 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 4.217 GeV.
135 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 4.217 GeV.
136 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 4.217 GeV.
137 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 4.217 GeV.
138 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 4.217 GeV.
139 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 4.217 GeV.
140 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 4.217 GeV.
141 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 4.217 GeV.
142 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, W* from 1.00 to 1.35 GeV. The beam energy is 4.217 GeV.
143 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, W* from 1.35 to 1.60 GeV. The beam energy is 4.217 GeV.
144 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, W* from 1.60 to 1.85 GeV. The beam energy is 4.217 GeV.
145 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, W* from 1.85 to 2.20 GeV. The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are shown as a blue band.
146 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 0.77 to 1.10 (GeV/c)2, W* from 2.20 to 2.68 GeV. The beam energy is 4.217 GeV.
147 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, W* from 1.00 to 1.35 GeV. The beam energy is 4.217 GeV.
148 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, W* from 1.35 to 1.60 GeV. The beam energy is 4.217 GeV.
149 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, W* from 1.60 to 1.85 GeV. The beam energy is 4.217 GeV.
150 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, W* from 1.85 to 2.20 GeV. The beam energy is 4.217 GeV.
151 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, W* from 2.20 to 2.68 GeV. The beam energy is 4.217 GeV.
152 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, W* from 1.00 to 1.35 GeV. The beam energy is 4.217 GeV.
153 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, W* from 1.35 to 1.60 GeV. The beam energy is 4.217 GeV.
154 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, W* from 1.60 to 1.85 GeV. The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are shown as a blue band.
155 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, W* from 1.85 to 2.20 GeV. The beam energy is 4.217 GeV.
156 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of cos θpq. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, W* from 2.20 to 2.68 GeV. The beam energy is 4.217 GeV.
157 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 5.254 GeV.
158 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 5.254 GeV.
159 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 5.254 GeV.
160 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 5.254 GeV.
161 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 5.254 GeV. Systematic errors are shown as a blue band.
162 Ratio of experimental data with subtracted accidental background and elastic tail to the full simulation in the PWIA spectator picture is shown as a function of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are shown as a blue band.
163 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 5.254 GeV.
164 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 5.254 GeV.
165 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 5.254 GeV.
166 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 5.254 GeV.
167 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 5.254 GeV.
168 Effective and model F2n structure functions are shown as functions of W*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 5.254 GeV.
169 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 5.254 GeV.
170 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 5.254 GeV.
171 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 1.10 to 2.23 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 5.254 GeV.
172 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.75 to -0.25. The beam energy is 5.254 GeV.
173 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from -0.25 to 0.25. The beam energy is 5.254 GeV.
174 Effective and model F2n structure functions are shown as functions of x*. Data are for Q2 from 2.23 to 4.52 (GeV/c)2, cos θpq from 0.25 to 0.75. The beam energy is 5.254 GeV.
175 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 1.10 to 2.23
(GeV/c)2 , W ∗ from 1.00 to 1.35 GeV. The beam energy is 5.254 GeV.
176 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 1.10 to 2.23
(GeV/c)2 , W ∗ from 1.35 to 1.60 GeV. The beam energy is 5.254 GeV.
177 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 1.10 to 2.23
(GeV/c)2 , W ∗ from 1.60 to 1.85 GeV. The beam energy is 5.254 GeV.
178 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 1.10 to 2.23
(GeV/c)2 , W ∗ from 1.85 to 2.20 GeV. The beam energy is 5.254 GeV.
179 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 1.10 to 2.23
(GeV/c)2 , W ∗ from 2.20 to 2.68 GeV. The beam energy is 5.254 GeV.
180 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 2.23 to 4.52
(GeV/c)2 , W ∗ from 1.00 to 1.35 GeV. The beam energy is 5.254 GeV.
181 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 2.23 to 4.52
(GeV/c)2 , W ∗ from 1.35 to 1.60 GeV. The beam energy is 5.254 GeV.
182 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 2.23 to 4.52
(GeV/c)2 , W ∗ from 1.60 to 1.85 GeV. The beam energy is 5.254 GeV.
183 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 2.23 to 4.52
(GeV/c)2 , W ∗ from 1.85 to 2.20 GeV. The beam energy is 5.254 GeV.
184 Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is
shown as a function of cos θpq . Data are for Q2 from 2.23 to 4.52
(GeV/c)2 , W ∗ from 2.20 to 2.68 GeV. The beam energy is 5.254 GeV. 263
185 Scattering of an electron with initial 4-momentum k off a proton with
initial momentum p. . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Great progress has been made in our understanding of nuclear and nucleon structure
in the last century. Hundreds of experiments have been conducted, and hundreds of theoretical models have been constructed. From Rutherford's discovery of the nucleus itself, to the detection of its constituents, the proton and the neutron (also known as nucleons), to the discovery of the constituents of the nucleons themselves,
we know immensely more about matter now than we did a hundred years ago. Still,
many unanswered questions remain. From the basic untouched questions of a possible
substructure of quarks and a possible derivation of nuclear forces from first principles
to much-worked-on questions of elastic form factors, there is much work left to do in
nuclear and nucleon physics.
This work is dedicated to analyzing the BoNuS experiment. This experiment took
place in Hall B of Jefferson Lab in autumn of 2005. A new experimental technique
was utilized allowing us to access data in the low-momentum target fragmentation
region, where protons are spectators of the reactions that take place on neutrons in
deuterium, thus facilitating access to neutron data. A novel Radial Time Projection
Chamber capable of registering protons with momenta down to 70 MeV/c was used
in conjunction with the CLAS detector in which scattered electrons were registered.
A thin deuterium gas target was used, and data on slow protons, spectators of the
electron-neutron reaction, were collected.
This technique allowed us to emulate a neutron target, which is not provided by
Nature, by means of a deuterium target. This way we collected neutron data with
a minimum amount of model dependence that usually plagues scientists’ attempts
to study the neutron’s inner structure. Thus, we can explore: the link between
the resonance structure and the quark structure at high energies, thereby studying
quark-hadron duality and its region of applicability, the proton-to-neutron structure
function ratio and consequently the up-to-down quark distribution ratio, and the
non-perturbative quark-gluon dynamics in a bound hadron system.
This dissertation uses Physical Review D as the journal model.
The particular goal of this work is to find the neutron unpolarized structure function F2n and to study how the effective structure function extracted from measurements on a bound neutron varies with the kinematics of the spectator proton. Ultimately, we are interested in the ratio of the neutron to proton structure functions F2n/F2p. This ratio can be converted to the ratio of up and down quark distributions in
the nucleons, thus allowing us to access the nucleon structure. While F2p is relatively well known, F2n has only been accessed using nuclear targets, which for inclusive experiments requires models of the nuclear physics and a subtraction of the F2p background. The ratio is sensitive to different symmetry-breaking mechanisms, and precise knowledge of it will let us eliminate some theoretical models that make very different predictions for its high-x behavior.
As mentioned previously, we know a lot about nucleons, but we would like to know
even more. We know less about neutrons than about protons since there are no
free neutron targets provided by Nature. In addition, nucleons are known to change
their properties when put into nuclei (see figure 1), so that performing an experiment
on a nuclear target containing neutrons does not give us a definitive answer on free
neutron properties. As a result, the wealth of our knowledge of nucleon structure data
concentrates mainly on protons. Neutron data, which have been acquired mainly by
doing experiments on deuterium and applying nuclear corrections to the data, have
big and largely model dependent uncertainties.
The BONuS experiment tried to remedy this by measuring electron scattering off almost free neutrons. The method used accesses neutron data with a minimum of uncertainty associated with nuclear corrections. Measuring neutron structure with accuracy comparable to that of proton measurements will allow us to determine the valence quark content at high x, check Bloom-Gilman duality, determine the neutron resonance structure, and find the neutron elastic form factors.
There are two kinds of nucleons, particles that comprise the atomic nucleus: the
proton and the neutron. According to contemporary views, they form an isospin
doublet, and can be transformed into each other by means of the “isospin rotation”.
They both consist of two kinds of valence quarks^a: up and down, the neutron having
two down and one up quark, and the proton having two up and one down quark. This
composition is responsible for the similarities as well as differences in the neutron
and proton properties.
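As a back-of-the-envelope check of this quark composition (my own illustration, not part of the thesis), the nucleon charges follow from summing the valence-quark charges qu = +2/3 and qd = −1/3:

```python
from fractions import Fraction

# Valence-quark charges in units of the electron charge e
QUARK_CHARGE = {'u': Fraction(2, 3), 'd': Fraction(-1, 3)}

def hadron_charge(valence):
    """Total charge (units of e) of a hadron from its valence-quark content."""
    return sum(QUARK_CHARGE[q] for q in valence)

assert hadron_charge('uud') == 1  # proton: two up, one down
assert hadron_charge('udd') == 0  # neutron: one up, two down
```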
The proton is a subatomic particle with an electric charge of +1 (measured in units of the electron charge) that represents a hydrogen nucleus.
^a According to the current scientific views, quarks are one of two kinds of fundamental spin-1/2 particles, along with leptons. Quarks interact through all four fundamental forces. They are fermions and come in six “flavors”: up, down, strange, charm, bottom, and top. Their charge, expressed in units of the electron charge, is fractional. Up and down are the two lightest flavors of quarks; their masses are below 10 MeV (the exact values are not known at the moment), and their charges are qu = (2/3)e and qd = −(1/3)e, where e is the electron charge. See appendix A for more.
FIG. 1: The ratio of per-nucleon cross-sections on iron and deuterium as a function of Bjorken x from EMC (hollow circles) [1], SLAC (solid circles) [2], and BCDMS (squares) [3]. The data have been averaged over Q2 and corrected for neutron excess.
The neutron is a neutral particle that is found only in nuclei more complicated than hydrogen, and
only accompanied by protons. Unlike the proton, the neutron is not a stable particle
in its free state, decaying through β-decay with a mean lifetime of 885.7 ± 0.8 seconds:
n → p + e− + ν e .
This is another factor complicating investigation of neutrons.
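For an exponential decay, the mean lifetime T and the half-life differ by a factor of ln 2; a quick numeric aside (my own, taking the 885.7 s figure above as the mean life):

```python
import math

def half_life(mean_life):
    """Half-life t_1/2 = T * ln 2 for an exponential decay with mean life T (s)."""
    return mean_life * math.log(2.0)

# A free neutron with a mean life of ~885.7 s decays with a half-life
# of roughly ten minutes.
assert 600.0 < half_life(885.7) < 620.0
```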
As a result, the neutron was discovered later than the proton (which was detected and recognized by Rutherford in 1918). The neutron's discovery is attributed to Chadwick (1932), who showed [4] that the neutral radiation emitted by some elements subjected to bombardment with α-particles was not a type of γ-radiation, as had been thought before, but rather a new type of particle with no electric charge and a mass close to that of the proton.
In spite of such striking differences, the proton and neutron are indeed siblings, and they bear some similarities: they are both composite particles, as evidenced, for example, by their anomalous magnetic moments; they both interact through all four fundamental forces: electromagnetic, weak nuclear, strong nuclear, and gravitational. They are both spin-1/2 particles, also known as fermions.
Still, the aforementioned differences are large indeed, and it is to them that the fact that we know more about the proton than about the neutron can be attributed. The differences are not limited to charge and stability. Among the others, I would like to mention the puzzling negative mean-square charge radius of the neutron. It can be explained by
either a π− cloud surrounding the neutron core, or by the spin-spin forces pushing
d quarks to the periphery of the neutron (see, e.g. [5]) (the proton has an intuitive
positive charge radius). See tables 1 and 2 for the compilation of the most important
constants associated with the neutron and proton structure.
Scattering is the main tool for studying subatomic particles. It consists of colliding
two or more particles and examining how they scatter as a result of the collision.
A lot of useful information can be extracted using this method. There are different
TABLE 1: Neutron constants, from reference [6]
Mass, MeV
Mean life, s
Magnetic moment, nuclear magnetons: −1.91304273 ± 0.00000045
Electric dipole moment, 10^-25 e·cm
Mean-square charge radius*, fm^2
Electric polarizability, 10^-4 fm^3
Magnetic polarizability, 10^-4 fm^3
Charge, 10^-21 e
* Found using the neutron-electron scattering length bne as ⟨r_n^2⟩ = 3(me a0/mn) bne, where me and mn are the masses of the electron and the neutron, respectively, and a0 is the Bohr radius.
TABLE 2: Proton constants, from reference [6]
Mass, MeV
Mean life, s: > 1.6×10^25
Magnetic moment, nuclear magnetons: 2.792847337 ± 0.000000029
Electric dipole moment, 10^-23 e·cm: −4 ± 6
Charge radius, fm
Electric polarizability, 10^-4 fm^3
Magnetic polarizability, 10^-4 fm^3
|qp + qe|/e: < 1.0×10^-21
The limit is from neutrality-of-matter experiments; it assumes qn = qp + qe.
kinds of scattering depending on whether the colliding particles are moving towards each other in the lab frame (this is what we see in colliders) or whether a beam of particles is incident upon a quasi-stationary target (fixed-target experiments). Also, when one studies nucleons (which is what we are aiming at in the BONuS experiment), different particles can be “thrown” at nuclei: electrons, photons, neutrinos, etc. All these experiments are needed, as they complement each other, but here I will concentrate on electron-nucleon fixed-target scattering since this is what was used in the BONuS experiment.
In our case we are dealing with a fixed-target experiment. Deuterium (the “source” of neutrons for BONuS) was located in a target, off which an electron beam was scattered and the angular distribution of the scattered electrons was measured.
In the case of elastic scattering (leaving the target intact), this angular distribution, also known as the cross-section, can be calculated as
$$\frac{d\sigma}{d\Omega} = \left(\frac{d\sigma}{d\Omega}\right)_{\text{point}} |F(q^2)|^2,$$
where q is the 4-momentum transfer (the difference between incoming and scattered electron momenta), F(q²) contains the information about the composite nature of the target (more on this in coming sections), and (dσ/dΩ)_point is the cross-section we would have if an electron scattered off a point particle:
$$\left(\frac{d\sigma}{d\Omega}\right)_{\text{point}} = \left(\frac{Z\alpha}{2p\beta\sin^2(\theta_e/2)}\right)^2 \left(1 - \beta^2\sin^2\frac{\theta_e}{2}\right),$$
where α is the electromagnetic coupling constant, Z is the charge of the target, E is the energy of the incident electron, p is the magnitude of its momentum, θe is the scattering angle of the electron, and β is the speed of the electron in units of the speed of light. Usually, cross-sections are calculated in the lab frame. In this case, we need to take the recoil of the target into account, and this is what I will call the Mott cross-section:
$$\left(\frac{d\sigma}{d\Omega}\right)_{\text{Mott}} = \frac{E'}{E}\left(\frac{Z\alpha}{2p\beta\sin^2(\theta_e/2)}\right)^2 \left(1 - \beta^2\sin^2\frac{\theta_e}{2}\right),$$
where E′ is the energy of the scattered electron.
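As a numeric sketch of the formulas above (my own illustration, not from the thesis), the point cross-section can be evaluated for ultrarelativistic electrons, where β ≈ 1 and p ≈ E:

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant
HBARC2 = 0.3894e6       # (hbar*c)^2 in GeV^2 * nb, converts GeV^-2 to nanobarns

def mott_cross_section(E, theta, Z=1, recoil=None):
    """Point cross-section dsigma/dOmega in nb/sr for an ultrarelativistic
    electron (beta ~ 1, p ~ E in GeV) on a target of charge Z.
    If `recoil` (= E'/E) is given, the lab-frame recoil factor is applied."""
    s = math.sin(theta / 2.0)
    sigma = (Z * ALPHA / (2.0 * E * s * s)) ** 2 * (1.0 - s * s)  # beta -> 1
    if recoil is not None:
        sigma *= recoil
    return sigma * HBARC2

# The cross-section falls steeply with angle: backward scattering is rare,
# and the recoil factor E'/E < 1 suppresses it further.
assert mott_cross_section(5.254, math.radians(15.0)) > \
       mott_cross_section(5.254, math.radians(60.0))
```

The 5.254 GeV value matches the beam energy quoted for the BONuS data; the nanobarn conversion constant is standard but supplied here for the demo.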
The cross-section expresses the probability of an interaction. It can later be used
to find expectation values of any observables of the reaction. Thus, the cross-section
is all we need to extract almost any information on the reaction under consideration.
Elastic form factors^b are observables that contain information on the composite nature of nucleons. In the simplest form (coinciding with the historical development of the form factors [7]), they can be introduced as
$$\frac{d\sigma}{d\Omega} = \left(\frac{d\sigma}{d\Omega}\right)_{\text{Mott}} [F(q)]^2,$$
where F(q) is a form factor (see equation (4) for details on (dσ/dΩ)_Mott).
This simple form does not shed much light on the nature of the form factors. For this, we would need to abolish the kindergarten-level simplicity and introduce a form containing some physics details.
Consider an example of electron scattering beyond the lowest (tree) level in QED
(figure 2).
FIG. 2: The electron scattering diagram, the sum of the lowest order electron-photon
vertex and all amputated loop corrections, from [10].
Let us see how different calculating this diagram would be from calculating the tree-level one. Call the vertex denoted by the grey blob −ieΓ^µ(p′, p). Then, the amplitude for the shown scattering process is [10]
$$i\mathcal{M} = ie^2\left(\bar{u}(p')\,\Gamma^\mu(p',p)\,u(p)\right)\frac{1}{q^2}\left(\bar{u}(k')\,\gamma_\mu\, u(k)\right),$$
where M is the scattering amplitude, ū and u are Dirac spinors, γµ is a Dirac matrix, and the initial, final, and momentum-exchange vectors p, p′, and q are shown in figure 2.
In general, Γµ is some expression that involves p, p′ , γµ , constants, and pure
numbers. At the tree level, it is equal to γ^µ.
^b These observables are used for the description of elastic scattering. All the reactions described in this section are elastic unless noted otherwise.
Due to Lorentz invariance and Γ^µ transforming as a vector, its possible form must be a linear combination of
vectors (γ µ , p, p′ or their linear combinations in our case). Using the combinations
p′ + p and p′ − p for convenience, we have
$$\Gamma^\mu = \gamma^\mu \cdot A + (p'^\mu + p^\mu)\cdot B + (p'^\mu - p^\mu)\cdot C.$$
Coefficients A,B, and C must be scalars. Thus, they can involve ordinary numbers,
constants, and the momentum exchange q². If we apply the Ward identity
$$q_\mu \Gamma^\mu = 0,$$
we can see that the third term of (7) does not vanish automatically when dotted into qµ, and thus its coefficient C must be zero. Conventionally rearranging the rest of (7)
with the help of Gordon’s identity
$$\bar{u}(p')\gamma^\mu u(p) = \bar{u}(p')\left[\frac{p'^\mu + p^\mu}{2m} + \frac{i\sigma^{\mu\nu}q_\nu}{2m}\right]u(p),$$
where σ^{µν} = (i/2)[γ^µ, γ^ν], and substituting A and B with the conventional F1 and F2, we arrive at the final expression
$$\Gamma^\mu(p',p) = \gamma^\mu F_1(q^2) + \frac{i\sigma^{\mu\nu}q_\nu}{2m}F_2(q^2),$$
where the dependence of the coefficients on the only non-trivial scalar, q², is shown explicitly.
These coefficients, F1 and F2, are the form factors. To lowest order, F1 = 1 and
F2 = 0. The derivation given for the electron case used general symmetry principles,
and the structure (10) can be applied to any fermions. But in the case of composite particles (proton, neutron), we should not expect the Dirac-equation values of 1 and 0 to be a good approximation to the form factors.
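Returning to the Ward-identity step, the selection of terms in (7) can be made explicit (my own expansion, using q = p′ − p and on-shell spinors with p² = p′² = m²):

```latex
% q dotted into the (p' + p) term vanishes automatically:
q_\mu (p'^\mu + p^\mu) = p'^2 - p^2 = m^2 - m^2 = 0.
% The gamma-matrix term vanishes between on-shell spinors by the Dirac equation:
\bar{u}(p')\,\slashed{q}\,u(p)
  = \bar{u}(p')(\slashed{p}' - \slashed{p})\,u(p)
  = (m - m)\,\bar{u}(p')u(p) = 0.
% Only the (p' - p) term fails the identity, since
q_\mu (p'^\mu - p^\mu) = q^2 \neq 0,
% forcing its coefficient C to vanish.
```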
Let us go into more details. F1 is the helicity non-flip Dirac form factor, and
F2 is the helicity flip Pauli form factor. In plane-wave Born approximation, the
cross-section for elastic electron-nucleon scattering is
$$\frac{d\sigma}{d\Omega} = \left(\frac{d\sigma}{d\Omega}\right)_{\text{Mott}}\left[F_1^2 + \kappa^2\tau F_2^2 + 2\tau\left(F_1 + \kappa F_2\right)^2\tan^2\frac{\theta_e}{2}\right],$$
where τ = Q2 /(4MN2 ), MN is the nucleon mass, and κ denotes the nucleon anomalous
magnetic moment.
^c Q2 = −q2 is often used instead of q2, mainly as a matter of convenience. I will use Q2 from
now on (see Appendix B for more on kinematic variables).
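For massless electrons this invariant reduces to Q² = 4EE′ sin²(θe/2); a tiny numeric sketch (my own, with illustrative beam values):

```python
import math

def q2(E, E_prime, theta_e):
    """Four-momentum transfer squared Q^2 = -q^2 = 4 E E' sin^2(theta_e/2)
    for an ultrarelativistic electron (energies in GeV, angle in radians)."""
    return 4.0 * E * E_prime * math.sin(theta_e / 2.0) ** 2

# Q^2 grows with the scattering angle; no deflection means no momentum transfer.
assert q2(5.254, 4.0, math.radians(30.0)) > q2(5.254, 4.0, math.radians(15.0))
assert q2(5.254, 3.0, 0.0) == 0.0
```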
There are certain benefits in using not these form factors themselves, but their
linear combinations. The ones used most often are the Sachs form factors [28]:
$$G_E(Q^2) = F_1(Q^2) - \tau\kappa F_2(Q^2);\qquad G_M(Q^2) = F_1(Q^2) + \kappa F_2(Q^2),$$
where GE and GM are the Sachs electric and magnetic form factors respectively.
These new form factors have the following properties:
$$G_E^p(0) = 1;\qquad G_E^n(0) = 0;\qquad G_M^{p,n}(0) = \mu_{p,n},$$
where superscripts p and n denote proton and neutron, respectively, and µ denotes
nucleon magnetic moments. In the Breit frame^d, the electric and magnetic nucleon
form factors can be written as Fourier transforms of the transverse nucleon charge
and magnetization distributions, respectively.
Using the Sachs form factors allows us to determine them separately using, for example, the Rosenbluth formula for scattering off a proton:
$$\frac{d\sigma}{d\Omega} = \left(\frac{d\sigma}{d\Omega}\right)_{\text{Mott}}\frac{\epsilon\,(G_E^p)^2 + \tau\,(G_M^p)^2}{\epsilon(1+\tau)},$$
where ε = 1/[1 + 2(1 + τ) tan²(θe/2)] is the linear polarization of the virtual photon.
Thus, measuring the cross-section at fixed Q2 as a function of ǫ provides us with
the information on each of the form factors. Polarization transfer measurements are
another technique used to access form factor information that became very popular
in the last decade or so. They allow us to access the ratio of the form factors by
measuring the transverse and longitudinal polarizations of the scattered nucleon. For
example, in the case of the proton:
$$\frac{G_E^p}{G_M^p} = -\frac{P_t}{P_l}\,\frac{E_e + E_e'}{2M}\tan\frac{\theta_e}{2},$$
where Pt and Pl are transverse and longitudinal polarizations of the scattered proton,
Ee and Ee′ are the initial and final energies of the electron, and M is the proton mass.
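The ε-dependence in the Rosenbluth formula is linear: the reduced cross-section σ_red = ε(1+τ)(dσ/dΩ)/(dσ/dΩ)_Mott equals ε(G_E^p)² + τ(G_M^p)², so a straight-line fit in ε separates the two form factors. A minimal synthetic sketch (my own illustration; the form-factor values are made up for the demo):

```python
def rosenbluth_separation(points, tau):
    """Least-squares fit of sigma_red = eps*GE^2 + tau*GM^2 on (eps, sigma_red)
    points; returns (GE^2, GM^2): slope and intercept/tau."""
    n = len(points)
    sx = sum(e for e, _ in points)
    sy = sum(s for _, s in points)
    sxx = sum(e * e for e, _ in points)
    sxy = sum(e * s for e, s in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # = GE^2
    intercept = (sy - slope * sx) / n                   # = tau * GM^2
    return slope, intercept / tau

# Synthetic "measurements" at fixed Q^2 with assumed GE = 0.5, GM = 1.4, tau = 0.3.
GE, GM, TAU = 0.5, 1.4, 0.3
data = [(eps, eps * GE**2 + TAU * GM**2) for eps in (0.2, 0.5, 0.8)]
ge2, gm2 = rosenbluth_separation(data, TAU)
assert abs(ge2 - GE**2) < 1e-9 and abs(gm2 - GM**2) < 1e-9
```

This is exactly the "measure at fixed Q² as a function of ε" procedure described in the text, reduced to a two-parameter line fit.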
^d A particular Lorentz frame defined by p⃗′ = −p⃗, that is, the nucleon momentum after the collision has the same magnitude and opposite direction as the nucleon momentum before the interaction [32]. There is no energy transfer to the target in this frame.
In the limit of large momentum transfer, the two proton form factors and the
magnetic form factor of a neutron are nearly identical to each other except for a
scaling factor^e [28]:
$$G_E^p(Q^2) = \frac{1}{\mu_p}\,G_M^p(Q^2) = \frac{1}{|\mu_n|}\,|G_M^n|(Q^2) \equiv G(Q^2),$$
where µp and µn are the magnetic dipole moments of the proton and neutron, respectively. The function G(Q²) may be described by a dipole form:
$$G(Q^2) = \frac{1}{\left[1 + (Q/Q_0)^2\right]^2},$$
with the parameter Q0 found (by fitting (19) to experimental data) to be 0.84 GeV/c.
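The dipole form is simple to evaluate; a quick numeric sketch (my own) with Q0 = 0.84 GeV/c:

```python
def dipole(Q2, Q0=0.84):
    """Dipole form factor G(Q^2) = 1 / (1 + Q^2/Q0^2)^2, with Q^2 in (GeV/c)^2."""
    return 1.0 / (1.0 + Q2 / Q0**2) ** 2

assert dipole(0.0) == 1.0                       # normalization at Q^2 = 0
assert dipole(0.84**2) == 0.25                  # falls to 1/4 at Q^2 = Q0^2
assert dipole(4.0) < dipole(1.0) < dipole(0.1)  # monotonically falling
```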
The electric form factor of the neutron GnE (Q2 ) is only known at relatively small
momentum transfers and is found to be much smaller than the corresponding magnetic form factor [28]. There are two reasons why measuring GnE (Q2 ) is difficult at
high Q2 :
• The value of τ in (16) increases with Q2, and as a result, the scattering cross-section is dominated by the magnetic form factor at high Q2.
• There are no fixed neutron targets with good enough luminosity, and the neutron data have to be deduced from nuclear experiments (more on this item
throughout the thesis).
A substantial amount of data on the proton form factors exists (see, for example,
figure 3). Unfortunately, their neutron counterparts are known less accurately, and
over a smaller kinematic range (see figures 4 and 5). Even the better known proton
form factors have their own unsolved mysteries, like the discrepancy between the
results obtained through the Rosenbluth technique and the polarization technique^f (see,
for example reference [11] and figure 3).
^e Several years ago, this relationship was a decent approximation of the global data, but in the last decade experimental data started showing very significant deviations of (18) from being a true equality (see, for example, reference [8], or any other review paper).
^f This discrepancy is currently attributed to the two-photon-exchange contribution, but a satisfactory experimental proof is still lacking.
[Figure 3 legend: Gayou et al. (2001, 2002), Jones et al., Milbrath et al. p(e⃗, e′p⃗) and d(e⃗, e′p⃗)n, Andivahis et al., Walker et al., Bartel et al., Litt et al., Hanson et al., Price et al., Berger et al., Christy et al.; Bosted fit, Arrington σ fit, recoil-polarization fit; horizontal axis Q (GeV/c).]
FIG. 3: Proton form factor ratio µp GEp /GM p by Rosenbluth separations and recoil
polarization. In addition, the fits of Arrington, Bosted, and recoil polarization are
also shown. From [9].
FIG. 4: Values of GnM taken from ratio measurements on deuterium and polarized
He measurements. The circles are extractions from the ratio of e − n to e − p
quasielastic scattering; the open triangles are from measurements on polarized 3 He;
the solid squares are the CLAS preliminary results; the crosses and asterisks are
points obtained from quasielastic e-n scattering on light nuclei. The solid line is a fit
to the experimental data. From [8].
FIG. 5: World data on GnE including a recently completed Jefferson Lab Hall A
measurement (solid circles), and an approved Hall C measurement (hollow circle).
From [8].
So far, we have dealt with electron-nucleon scattering in which final-state nucleons are in
their ground state (i.e. elastic scattering, e + N → e + N). The extension of this
reaction is the one in which the electron shares a larger fraction of its initial energy
with the nucleon, thus exciting it, and the final state nucleon is in an excited state
(also known as a resonant state or resonance): e + N → e + R, where R denotes a resonance.
In nuclear physics, a resonance is a peak located around a certain energy found
in differential cross sections of scattering experiments. These peaks are associated
with subatomic particles (such as nucleons, Delta baryons, Upsilon mesons, tauons)
and their excitations (see, for example, the upper panel of figure 11). These particles
are not stable, and their lifetime is connected to their resonant peak width as
$$\Gamma = \frac{\hbar}{T},$$
where Γ is the resonance width and T is the resonance lifetime. About 120 baryons
and baryon resonances are known [6], [12]. Baryons are usually identified by their
names and masses^g. I will discuss only baryons composed of the lightest, u and
d, quarks. The particle name is N or ∆ for baryons having isospin 1/2 or 3/2,
respectively. Resonances are characterized by adding L2I,2J behind the particle name
where L defines the lowest orbital momentum required when they disintegrate into
a ground state nucleon and a pseudoscalar meson, I and J are isospin and total
angular momentum, respectively.
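The width-lifetime relation Γ = ħ/T quoted above translates peak widths into lifetimes; a numeric sketch (my own, with an illustrative ∆(1232)-like width of 0.117 GeV):

```python
HBAR_GEV_S = 6.582119569e-25  # hbar in GeV*s

def lifetime_from_width(gamma_gev):
    """Resonance lifetime T = hbar / Gamma (Gamma in GeV, T in seconds)."""
    return HBAR_GEV_S / gamma_gev

# A ~0.117 GeV wide resonance lives on the order of 10^-24 s,
# far too short to reach a detector: it is seen only as a peak.
T = lifetime_from_width(0.117)
assert 1e-25 < T < 1e-23
```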
Resonance peaks can be easily seen in the cross-section between W (see appendix
B) of 1 and 2 GeV (that is, between the elastic peak and “continuum”), see the
upper panel of figure 11. The most prominent low-mass resonances are ∆(1232)P33, N(1440)P11, N(1520)D13, N(1535)S11, and N(1688)F15 (the numbers in brackets are the masses of the resonances in MeV).^h
Resonant behavior is usually extracted from cross-sections either on hydrogen (for
protons) or nuclear (for neutrons) targets. As will be mentioned later, resonances are
very valuable for connecting the region where constituent quark models are describing
^g Where mass is usually the invariant mass of the unobserved final state of the inclusive reaction, W. See appendix B for more.
^h In inclusive electron scattering, we can see only three regions of overlapping resonances (figure 11, upper panel). To distinguish individual resonances, we need to use a different technique, e.g. N(π, π′) or N(γ, π).
the data well with the region where perturbative quantum chromodynamics (QCD)
is a good description. A lot of effort has been and is being put into studying them.
Analogously to elastic scattering, reactions involving resonances can be described
with the help of form factors (transition form factors now), that contain all the
information about the electromagnetic structure of the baryon. These form factors
are the charge and current transition matrix elements.
Transitions between a nucleon state |N⟩ and a resonant state |R⟩ can be expressed in terms of dimensionless helicity^i matrix elements^j:
$$G_H = \langle R, \lambda' |\, \varepsilon_\mu J^\mu \,| N, \lambda \rangle,$$
where λ denotes helicity, and the polarization vectors ε±,0 correspond to right and
left circularly polarized photons, and longitudinally polarized photons, respectively.
Equivalently, electromagnetic transition matrix elements are expressed in terms
of helicity amplitudes:
$$A_{1/2} = \sqrt{\frac{2\pi\alpha}{K_R}}\,G_+,\qquad A_{3/2} = \sqrt{\frac{2\pi\alpha}{K_R}}\,G_-,\qquad C_{1/2} = \sqrt{\frac{2\pi\alpha}{K_R}}\,\frac{|\vec{q}\,|}{Q}\,G_0,$$
where KR is the equivalent real photon energy at the resonance position.
The transverse amplitude A1/2 is helicity-conserving (see figure 6), whereas transverse amplitude A3/2 (see figure 7) and longitudinal C1/2 (see figure 8) are helicity-flip
amplitudes. Using the transition form factors, one can write the inelastic scattering
cross-section for a resonance as
$$\frac{d\sigma}{dE'\,d\Omega_e} = \sigma_{\text{Mott}}\, f_{\text{rec}}\left[\frac{|G_E|^2 + \tau^* |G_T|^2}{1 + \tau^*} + 2\tau^* |G_T|^2 \tan^2\frac{\theta_e}{2}\right] R(W).
In analogy with the Sachs form factors for elastic scattering, the resonance longitudinal and transverse form factors are written as
$$G_0 = G_E,\qquad \tfrac{1}{2}\left(|G_+|^2 + |G_-|^2\right) = \tau^* |G_T|^2.$$
A resonance line shape of the following form is introduced
$$R(W) = \frac{2\pi^{-1}\, W_R M_N \Gamma_R}{(W^2 - W_R^2)^2 + W_R^2 \Gamma_R^2},$$
^i Projection of the particle spin onto the direction of the particle momentum.
^j The discussion below follows the treatment of [15].
FIG. 6: Scattering process corresponding to the transverse helicity conserving amplitude A1/2 .
FIG. 7: Scattering process corresponding to the transverse helicity non-conserving
amplitude A3/2 .
FIG. 8: Scattering process corresponding to the longitudinal helicity non-conserving
amplitude C1/2 .
where WR and ΓR are the resonance mass and width. The analogous kinematic quantity (Q²/4M_N² for the elastic case) is
$$\tau^* = \frac{(Q^2 + W_R^2 - M_N^2)^2}{4 M_N^2 Q^2}.$$
The recoil factor (which is E′/E for the elastic case) is
$$f_{\text{rec}} = \frac{E'}{E} = 1 - \frac{W_R^2 - M_N^2}{2 M_N E}.$$
In the limit of a very narrow resonance, in which WR = MN and WR Γ → 0, R(W )
becomes a δ-function and the cross-section reduces to that for elastic scattering.
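As I read the line-shape formula, R(W) = (2/π) W_R M_N Γ_R / [(W² − W_R²)² + W_R² Γ_R²]; a quick numeric check (my own sketch, with illustrative ∆-like parameters) confirms it peaks at W = W_R and sharpens toward a δ-function as Γ_R shrinks:

```python
from math import pi

def line_shape(W, WR=1.232, GR=0.117, MN=0.938):
    """Breit-Wigner-type resonance line shape R(W), masses/widths in GeV:
    R(W) = (2/pi) * WR * MN * GR / ((W^2 - WR^2)^2 + WR^2 * GR^2)."""
    return (2.0 / pi) * WR * MN * GR / ((W**2 - WR**2) ** 2 + WR**2 * GR**2)

# The shape is maximal at W = WR and falls off on either side.
assert line_shape(1.232) > line_shape(1.1)
assert line_shape(1.232) > line_shape(1.4)
# A narrower resonance gives a taller, more delta-like peak at the same WR.
assert line_shape(1.232, GR=0.01) > line_shape(1.232, GR=0.117)
```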
Thus, knowledge of the resonance form factors lets us describe resonance transitions in the same way as knowledge of the elastic form factors lets us describe elastic scattering.
The resonant region is a bridge between the low-Q2 region in which data can be
explained rather successfully by the constituent quark models, and the high Q2 region
in which perturbative quantum chromodynamics (QCD), pQCD, is presumed to be
valid. The unresolved problem is two-fold: on the one hand, the processes governing resonance transitions themselves are not fully understood; on the other hand, the threshold at which the pQCD description becomes valid is not yet known. The
spread of opinions about the latter reaches orders of magnitude [13]. Additionally, the
understanding of the phenomenon of quark-hadron duality (see section II.6) requires
FIG. 9: Proton resonance transition amplitudes (from reference [14]). World data
on the contributing helicity conserving (A1/2 ) and helicity non-conserving (A3/2 )
amplitudes are shown for two low lying resonances. The gray band shows the quark
model prediction.
thorough knowledge of the resonance region behavior. More experimental resonance
data are needed, in particular, on the neutron, as discussed later.
The experimental situation follows the usual trend: although a lot of data have
been accumulated on proton transitions, not much is known about their neutron
counterparts [16]. See, for example, figures 9 and 10, which contain the world data
on three particular resonance transition amplitudes. The discrepancy in the number
of data points is striking indeed. The difficulties of extracting neutron data by
applying nuclear corrections to the deuterium data can be seen in figure 11. The
figure shows the inclusive resonance electroproduction cross-section data. They were
obtained at Jefferson Lab (JLab) at Q2 = 1.5 (GeV/c)2 for hydrogen and deuterium targets at matched kinematics. It can be seen that, at W2 > 2 GeV2, the deuterium
data are so smeared that none of the higher resonances can be really seen. And
a simple subtraction clearly yields nonsense as can be seen from figure 12. What
one gets for the neutron “resonant picture” is a resonance distribution turned upside down.
FIG. 10: Same as figure 9, but for neutron resonance transition amplitudes (from reference [14]). The point at Q2 = 0 is the Particle Data Group estimate.
Deep inelastic scattering cross-section and structure functions
In the previous sections we went from looking at a nucleon as a whole using elastic
scattering to probing deeper by exciting resonant states of the nucleon. If we want
to look even deeper and look at the inner structure of the nucleon, we need to turn
to even larger energy probes. This sounds simple enough, but there is a problem
introduced by this “simple” approach: the nucleon will break up and the initial state will completely lose its identity, thus requiring a new formalism to be constructed.
Pictorially, we are going from the nice and clean picture of figure 13 to the mess of
figure 14.
Or, looking at the invariant mass distribution (figure 15), we are going
from the elastic peak at W ≈ 0.94 GeV to the resonant region between 1.2 and 2
GeV to the structureless continuum beyond that.
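The W regions quoted here follow directly from the measured electron kinematics; a small helper (my own sketch, massless-electron approximation, proton mass 0.938 GeV) computes ν, Q², W, and Bjorken x from the beam energy, scattered energy, and angle:

```python
import math

M = 0.938  # proton mass, GeV

def inclusive_kinematics(E, E_prime, theta):
    """Return (nu, Q2, W, x) for inclusive e-p scattering (GeV, radians)."""
    nu = E - E_prime                                    # energy transfer
    Q2 = 4.0 * E * E_prime * math.sin(theta / 2.0)**2   # 4-momentum transfer sq.
    W2 = M * M + 2.0 * M * nu - Q2                      # invariant mass squared
    x = Q2 / (2.0 * M * nu) if nu > 0 else None         # Bjorken x
    return nu, Q2, math.sqrt(W2), x

# Elastic scattering must land on W = M (and x = 1); pick E' accordingly.
E, theta = 5.254, math.radians(20.0)
E_el = E / (1.0 + 2.0 * E / M * math.sin(theta / 2.0)**2)
nu, Q2, W, x = inclusive_kinematics(E, E_el, theta)
assert abs(W - M) < 1e-9 and abs(x - 1.0) < 1e-9
```

Larger energy loss at the same angle pushes W from the elastic peak through the resonance bumps into the continuum, exactly the progression described above.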
Now, how do we quantitatively describe deep inelastic scattering (DIS)? The
derivation of the vertex form in section II.3 did not care about the final states. Thus,
equation (10) should describe the vertex for the inelastic case as well, and the cross-section equation shape should be similar to what we had before. Indeed, for both inelastic and elastic cases, the cross-section can be written in the form
$$\frac{d^2\sigma}{dE'\,d\Omega} = \frac{4\alpha^2 E'^2}{Q^4}\,\{\cdots\},$$
where the curly brackets contain the terms describing the structure of the target.
[Figure 11 panels: dσ/dΩdW2 (nb/sr GeV2) versus W2 (GeV2) at Q2 = 1.5 (GeV/c)2, E = 3.245 GeV, θ = 26.98°; JLab data with SLAC fit.]
FIG. 11: Inclusive electroproduction cross-section data from Jefferson Lab at Q2 = 1.5 (GeV/c)2 (from [16]) as a function of invariant mass squared. The upper panel shows
hydrogen data with global resonant fit and non-resonant background fit. The lower
panel shows deuterium data at the same kinematics.
FIG. 12: An attempt to find the neutron resonance distribution as a simple difference
between those of the deuterium and proton.
FIG. 13: The eN → eN (∗) process describing elastic scattering as well as resonance
excitation. In the former case, the outgoing particle (momentum p’) is the same as
incoming (momentum p). In the latter, the outgoing particle is in a resonant state.
FIG. 14: The eN → eX process describing deep inelastic scattering where new
particles are created in the final state.
FIG. 15: The invariant mass (W ) distribution in the region stretching from the elastic
peak (the peak at around 1 GeV) to the inelastic continuum. Three resonance bumps
can be seen between 1 and 2 GeV (∆ resonance at W of 1.232 GeV, S11 /D13 peak
at around 1.5 GeV, and F15 peak at around 1.7 GeV). This picture was made using
BONuS hydrogen data.
For elastic scattering (eN → eN),

$$\frac{d^2\sigma}{dE'\,d\Omega} = \frac{4\alpha^2 E'^2}{Q^4}\left[\frac{G_E^2 + \tau G_M^2}{1+\tau}\cos^2\frac{\theta_e}{2} + 2\tau G_M^2\sin^2\frac{\theta_e}{2}\right]\delta\!\left(\nu - \frac{Q^2}{2M}\right),$$

where the energy-conserving delta function is shown explicitly. For DIS we have [32]

$$\frac{d^2\sigma}{dE'\,d\Omega} = \frac{4\alpha^2 E'^2}{Q^4}\left[W_2(\nu,Q^2)\cos^2\frac{\theta_e}{2} + 2W_1(\nu,Q^2)\sin^2\frac{\theta_e}{2}\right],$$
where W1 and W2 are the structure functions, analogous to the elastic form factors
in the elastic case, and transition form factors in the resonant case.
Scaling and partons
To continue our discussion, let us remember Rutherford’s experiment that discovered
nuclei: some of the α-particles incident on atoms would suddenly scatter at a big
angle, thus indicating that they ran into some hard scattering center somewhere
inside. If the energy of electrons probing the nucleon’s structure is raised so that they
are capable of resolving really small distances, the energy and angular distributions of
the scattered electrons will start looking as if an elastic scattering off a structureless
spin-half Dirac particle (to be called quark) took place.
Thus, we got another turn of the spiral: elastic scattering at a new level. At this
scale proton structure functions should turn into elastic ones [32]:

$$W_2^{point} = \delta\!\left(\nu - \frac{Q^2}{2m}\right), \qquad 2W_1^{point} = \frac{Q^2}{2m^2}\,\delta\!\left(\nu - \frac{Q^2}{2m}\right), \qquad (31)$$

where m is the mass of the scattering center (quark) on which the elastic scattering occurred. And, at large Q², we can represent inelastic electron-proton scattering as
elastic electron-quark scattering. Instead of figure 14, we get figure 16.
At this scale, inelastic structure functions exhibit a remarkable quality which will become apparent if we rewrite (31) in the following form:

$$2mW_1(\nu,Q^2) = \frac{Q^2}{2m\nu}\,\delta\!\left(1 - \frac{Q^2}{2m\nu}\right), \qquad \nu W_2(\nu,Q^2) = \delta\!\left(1 - \frac{Q^2}{2m\nu}\right). \qquad (32)$$
Equations (31) and (32) are somewhat naive, since quarks are not at rest inside nucleons. To
account for that, one needs to replace the quark mass m with xM , where x is the fraction of the
nucleon momentum carried by the quark in the Infinite Momentum Frame, and M is the mass of
the nucleon.
FIG. 16: The elastic scattering off quasi-free quark.
As one can see, both of them depend only on the ratio

$$x = \frac{Q^2}{2M\nu} \qquad \text{(Bjorken }x\text{)},$$

which is proportional to Q²/ν in the lab frame, but not on Q² or ν independently. This incredible property is called scaling.
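The scaling variable is computed directly from the measured electron kinematics. The sketch below uses the standard relations (the beam settings are illustrative numbers, not data from any experiment) to map (E, E′, θ) onto ν, Q², x, and W:

```python
import math

M = 0.938272  # proton mass, GeV

def kinematics(E, E_prime, theta_deg):
    """DIS kinematics from beam energy E, scattered energy E' (GeV), and angle."""
    theta = math.radians(theta_deg)
    nu = E - E_prime                                    # energy transfer
    Q2 = 4.0 * E * E_prime * math.sin(theta / 2) ** 2   # four-momentum transfer squared
    x = Q2 / (2.0 * M * nu)                             # Bjorken x
    W2 = M**2 + 2.0 * M * nu - Q2                       # invariant mass squared
    return nu, Q2, x, W2

# Two different settings probing different (Q2, nu); scaling says F2 depends
# only on the resulting x, not on Q2 and nu separately.
for E, Ep, th in [(4.0, 2.5, 20.0), (6.0, 4.5, 14.0)]:
    nu, Q2, x, W2 = kinematics(E, Ep, th)
    print(f"nu={nu:.3f} GeV  Q2={Q2:.3f} GeV^2  x={x:.3f}  W={math.sqrt(W2):.3f} GeV")
```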
Thus, at Q2 large enough to resolve nucleon constituents,
MW1 (ν, Q2 ) → F1 (x),
νW2 (ν, Q2 ) → F2 (x),
where x is Bjorken x (see appendix B for more). Using the F1 (x) and F2 (x) structure
functions, the DIS cross-section can be written as

$$\frac{d^2\sigma}{dE'\,d\Omega} = \frac{4\alpha^2 E'^2}{Q^4}\left[\frac{F_2(x)}{\nu}\cos^2\frac{\theta_e}{2} + \frac{2F_1(x)}{M}\sin^2\frac{\theta_e}{2}\right].$$
In this picture, DIS off a nucleon can be viewed as an incoherent sum of scattering
off all the constituents (see, for example, [27]). Then, it should be no surprise that
we can connect the inelastic cross-section to the distribution of partons inside the
nucleon. And, since the information on the composite nature of the nucleon is by construction hidden in the structure functions, they should be the ones containing the “connection”.
Indeed, if we denote fi to be the probability that a struck parton of kind i and
charge ei carries momentum fraction x, simple manipulations will let us identify:
$$F_1(x) = \frac{1}{2}\sum_i e_i^2\,f_i(x), \qquad F_2(x) = \sum_i e_i^2\,x\,f_i(x).$$
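As a toy illustration of these relations, one can plug in hypothetical quark distributions (the shapes below are invented for the example, not fits to data) and check the Callan-Gross relation F₂ = 2xF₁ that the two expressions imply for spin-½ partons:

```python
# Toy parton-model structure functions. f_u and f_d are hypothetical
# stand-ins; only the charge weighting and the Callan-Gross relation matter.
e_u, e_d = 2.0 / 3.0, -1.0 / 3.0

def f_u(x):  # hypothetical u-quark distribution
    return 2.0 * x**0.5 * (1 - x)**3

def f_d(x):  # hypothetical d-quark distribution
    return 1.0 * x**0.5 * (1 - x)**4

def F1(x):
    return 0.5 * (e_u**2 * f_u(x) + e_d**2 * f_d(x))

def F2(x):
    return x * (e_u**2 * f_u(x) + e_d**2 * f_d(x))

for x in (0.1, 0.3, 0.5, 0.7):
    assert abs(F2(x) - 2 * x * F1(x)) < 1e-12  # Callan-Gross: F2 = 2 x F1
    print(f"x={x:.1f}  F2={F2(x):.4f}")
```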
Thus, DIS structure functions not only contain information allowing us to extract
cross-section, and consequently physical quantities associated with the scattering,
but also invaluable details of the internal composition of nucleons, shedding light on
some of the innermost workings of Nature.
The problem is that our current knowledge of DIS structure functions is unsatisfactory, especially in the case of the neutron. Just to give a quick example, we can look
at figure 17. Due to the model dependence of the data analysis, three completely
different theories can be supported using the same experimental data set. The situation is alarming, to say the least, and requires finding some model-independent
approach to extracting neutron structure functions.
Knowledge of the valence quark distributions at large x is important for several
reasons: assumptions on the large-x behavior were built into the global analysis of
parton distribution functions [19]; determining d/u experimentally would shed light
on the mechanisms behind the spin-flavor symmetry breaking [16]; quark distributions at large x are important for estimating backgrounds in searches for new physics
beyond the Standard Model at new high-energy colliders (e.g., the LHC).
Although more than three decades have passed since QCD was established as the theory
governing strong force interactions, its inner workings are still not completely clear.
The degrees of freedom appearing in the QCD Lagrangian (quarks and gluons) are not
observed in nature as real degrees of freedom, being lumped into hadrons which we
can observe instead. Due to asymptotic freedom, the partons can still be effectively
studied at high momentum scale Q.
But at low Q QCD is a strongly interacting theory, thus making the perturbative
treatment of parton interactions impossible. Since our knowledge of quantum systems
is largely based on applications of perturbation theory, this makes usage of partons
as degrees of freedom highly inconvenient, and the hadron description is used in this
region (see for example section II.4).
See appendix A for more.
The phenomenon of the strong coupling constant being small at high momenta (or, equivalently, small distances), which allows a perturbative treatment of the strong force in the region of high momenta.
Quarks and gluons.
[Figure 17 legend: naive SU(6) quark model, d/u = 1/2; pQCD, d/u = 1/5; 1-gluon exchange, d/u = 0; SLAC data with Fermi, off-shell, and PLC-suppression corrections; vertical axis F₂ⁿ/F₂ᵖ.]
FIG. 17: The ratio of neutron to proton structure functions as a function of Bjorken
x, extracted from SLAC proton and deuteron data, assuming different prescriptions
for nuclear corrections. Several theoretical predictions for the x → 1 limits are shown
[16]: “SU(6) quark model”, in which quark wave functions are represented by SU(6)
symmetry group which is a simple convolution of SU(3) flavor and SU(2) spin groups;
“pQCD model” (also known as helicity conservation model) in which scattering off
the quark having the same helicity as the nucleon dominates; “1-gluon exchange
model” (also known as scalar diquark dominance model) in which scattering off u
quarks dominates. For more details, see [18], [19], [20].
There is, however, a surprising connection between the high- and low-Q regions. Almost forty years ago observations showed that the behavior of low-energy cross-sections averaged over some energy intervals closely resembles that at asymptotically
high Q. In particular, it was observed [21] that the resonance structure function (or
transition form factor) at low W averages to the global scale curve which describes
high W data (see figure 18 for a contemporary illustration). This connection between
hadronic and partonic regimes got the name of quark-hadron duality.
Currently, duality is formulated in terms of the operator product expansion (OPE)
of moments of structure functions. According to the OPE, at Q² ≫ Λ²_QCD, where Λ_QCD
is the cutoff of the region which cannot be explored perturbatively, the moments
of structure functions can be expanded in powers of 1/Q2 . For example, the nth
moment of the F₂ structure function can be written as

$$M_n(Q^2) = \int_0^1 F_2(x)\,x^{n-2}\,dx = \sum_{\tau=2,4,\ldots} \frac{A_\tau(\alpha_s(Q^2))}{Q^{\tau-2}}, \qquad n = 2, 4, 6, \ldots,$$

where the A_τ are the matrix elements with twist ≤ τ. In this treatment soft and
hard contributions to scattering are separated in each of the terms of the sum, thus
allowing separate treatment of them. Here the soft contribution is hidden in Aτ
coefficients. For the leading twist τ = 2,
$$A_2\,P^{\mu_1}\cdots P^{\mu_n} = \langle P|\,\bar\psi\,\gamma^{\{\mu_1}\,iD^{\mu_2}\cdots iD^{\mu_n\}}\,\psi\,|P\rangle,$$
where P is the nucleon momentum, D µ is the covariant derivative, and the braces
denote symmetrization of indices and subtraction of traces.
Leading twist terms correspond to virtual photons scattering incoherently from
one parton (see figure 19a), whereas higher twists involve multiple partonic fields
(see figure 19b, 19c). The lowest moment of the F2 structure function is called the
Bloom-Gilman integral. Using the Bloom-Gilman integral, we can write duality in
mathematical terms as

$$\frac{2M}{Q^2}\int_0^{\nu_m} d\nu\,\nu W_2(\nu, Q^2) = \int_{x_m}^{1} \frac{dx}{x^2}\,F_2(x),$$
where νW2 (ν, Q2 ) is the actually observed structure function in the resonance region,
the upper limit on the ν integration, νm = (Wm2 −M 2 +Q2 )/2M, where Wm ≈ 2 GeV,
See [22] for more on OPE.
The mass dimension minus the spin of an operator.
FIG. 18: Extracted F2 data in the nucleon resonance region for hydrogen (a) and
deuterium (b) targets as functions of the Nachtman scaling variable ξ (see appendix
B). The solid curves indicate the result of the fit to deep inelastic data for a fixed
Q2 of 10 (GeV/c)2 (from reference [23]).
FIG. 19: (a) Leading twist diagram. (b) Higher twist four quark contribution. (c)
Higher twist two gluon contribution.
is chosen so that the integral of the scaling function covers the resonance region data,
xm = Q2 /2Mνm , and F2 (x) is the structure function in the asymptotic DIS limit.
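Under exact scaling, the two integrals above are the same quantity written in different variables (ν on the left, x = Q²/2Mν on the right). A quick numerical sketch with a hypothetical scaling function F₂ makes this change of variables explicit:

```python
import math

M = 0.938272      # nucleon mass, GeV
Q2 = 2.0          # GeV^2
Wm = 2.0          # GeV, upper edge of the resonance region
nu_m = (Wm**2 - M**2 + Q2) / (2 * M)
x_m = Q2 / (2 * M * nu_m)

def F2(x):
    """Hypothetical scaling function, vanishing outside 0 < x < 1."""
    return x**0.5 * (1 - x)**3 if 0.0 < x < 1.0 else 0.0

def trapz(f, a, b, n=20000):
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

# LHS: (2M/Q2) * integral over nu, with nu*W2 replaced by F2(x(nu)) (exact scaling)
lhs = (2 * M / Q2) * trapz(lambda nu: F2(Q2 / (2 * M * nu)), Q2 / (2 * M), nu_m)
# RHS: the same region expressed in x
rhs = trapz(lambda x: F2(x) / x**2, x_m, 1.0)
print(lhs, rhs)   # the two sides coincide when exact scaling holds
```

In the real world νW₂ in the resonance region only averages to the scaling curve, which is precisely the content of duality.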
If there are no contributions from higher twist terms, duality is exact. Thus, for
duality to work, the higher order terms need to be suppressed. In the QCD domain,
where Q2 is high, this is not a problem, since higher twist terms will have a very
large Q2 in the denominator, but for moderate to low Q2 , higher twist terms should
somehow cancel, thus suppressing the interaction between the scattered quark and
the hadronic system.
Duality does not have a simple intuitive explanation since the nature of the processes at low and high Q² is really different. At high Q², QCD assumes that the virtual photon interacts only with one parton, with each additional interaction being suppressed by 1/Q², which makes its contribution negligible. On the other hand, at
low Q2 an incoming photon coherently interacts with the whole hadron. The difference between these two kinds of interactions is analogous to the difference between
(a + b)2 and (a2 + b2 ) expressions. Such expressions can be equal only if all the
interference terms cancel out, and there is no reason why this should be the case.
Nevertheless, duality appears to work very well, down to Q² values of the order of 1 (GeV/c)² [22].
Besides duality being an interesting phenomenon in itself, its understanding would
allow precision studies of the high Bjorken x region, which is hard to study experimentally for technical reasons, at least at the moment. Duality could also provide
an efficient average low energy description of hadronic physics used in the interpretation of neutrino oscillations and high energy experiments, as well as more detailed
understanding of hadronization.
More data are needed to understand duality. The aforementioned cancellation
of the interference terms could be a fortuitous accident in the proton, due purely
to the charge assignments. Thus, neutron data are especially interesting, since the
neutron’s charge assignments are different.
Deuterium is a stable isotope of hydrogen with a natural abundance of approximately
1 atom in 6500 of hydrogen. Its nucleus, the deuteron, is the simplest composite
nuclear system, and as such, is the simplest laboratory for nuclear physics provided
by Nature. It is one of only four stable nuclides with an odd number of protons and
odd number of neutrons, and it is the only stable two-nucleon system in Nature.
The deuteron is widely used for extracting nuclear properties due to its relative
simplicity compared to other nuclei. It is also a valuable tool for extracting neutron
information since the deuteron is the simplest nucleus containing neutrons, while
having a very small binding energy thus facilitating the study of its components.
This is how it was used in the BoNuS experiment: as an emulation of a neutron target.
Static properties of the deuteron
The deuteron is a unique nucleus. Its binding energy, 2.2 MeV, is much less than the
average value between a pair of nucleons in any other stable nucleus. The precise
determination of the deuteron binding energy from the neutron radiative capture by
hydrogen combined with measuring the deuteron mass makes an accurate knowledge
of the neutron mass possible. Due to the small binding energy, the deuteron has no
excited states, and all of the measurements are guaranteed to be made in the ground
state. A compilation of some ground state properties of the deuteron is given in table 3.
Indeed, the sum of squares of the quark charges in the proton is (2/3)² + (2/3)² + (−1/3)² = 1, which is exactly the square of the sum: (2/3 + 2/3 − 1/3)² = 1, whereas for the neutron the sum of squares is (−1/3)² + (−1/3)² + (2/3)² = 2/3, but the square of the sum is (2/3 − 1/3 − 1/3)² = 0.
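The charge arithmetic in this footnote is easy to verify explicitly:

```python
from fractions import Fraction

# Quark charges in units of e
u, d = Fraction(2, 3), Fraction(-1, 3)

proton, neutron = [u, u, d], [d, d, u]

for name, quarks in (("proton", proton), ("neutron", neutron)):
    incoherent = sum(q * q for q in quarks)   # sum of squares: no interference
    coherent = sum(quarks) ** 2               # square of sum: full interference
    print(f"{name}: sum of squares = {incoherent}, square of sum = {coherent}")
```

For the proton the two agree exactly, so an accidental cancellation of interference terms cannot be excluded; for the neutron they differ (2/3 versus 0), which is why neutron data test duality independently.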
TABLE 3: Ground state properties of the deuteron [28], [31].

  Ground state property                Value
  ---------------------------------    -----------------------
  Mass, M_d                            1875.612762(75) MeV
  Binding energy, E                    2.22457312(22) MeV
  Spin and parity, J^π                 1⁺
  Isospin, I                           0
  Magnetic dipole moment, µ_d          0.8574382 µ_N
  Electric quadrupole moment, Q_d      0.28590(30) e·fm²
  Matter radius, r_d                   1.975(3) fm
  Charge radius, r_ch                  2.130(10) fm
Angular momentum structure
Since the deuteron’s parity is positive, its orbital angular momentum is bound to
be even. To see that, we can separate the deuteron wave function into 3 parts: the
intrinsic wave function of the proton, the intrinsic wave function of the neutron, and
the orbital wave function for their relative motion. Since the proton and neutron
are just two states of the nucleon, they have the same intrinsic parity, and thus the
product of their parities is even. Then the parity of the deuteron is determined by
the parity of the wave function of the relative orbital motion. Its parity is determined
by the orbital angular momentum. The argument goes as follows. For states with a
definite orbital angular momentum L, the angular dependence in the wave function is
given by spherical harmonics. Under a parity inversion spherical harmonics transform
$$Y_L^M(\theta,\varphi) \;\to\; Y_L^M(\pi-\theta,\,\pi+\varphi) = (-1)^L\,Y_L^M(\theta,\varphi), \qquad (39)$$

where $Y_L^M(\theta,\varphi)$ are spherical harmonics, and θ and φ are the polar and azimuthal angles, respectively. As is seen from (39), the parity of $Y_L^M$ is (−1)^L, and
angular momentum has to be even to provide positive parity. Since the spin of the ground state of the deuteron is J = 1, where J = L + S, the possible values of S, the sum of the intrinsic spins of the two nucleons, are 0 and 1. One cannot couple S = 0 with even values of L to form a J = 1 state. Thus, S = 1.
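The counting argument above can be checked by brute force, enumerating which (L, S) pairs have positive parity and can couple to J = 1:

```python
# Enumerate which (L, S) combinations can form the deuteron ground state:
# parity (-1)^L must be positive, and L and S must couple to total J = 1,
# i.e. |L - S| <= J <= L + S.
J = 1
allowed = []
for L in range(0, 4):            # orbital angular momentum
    for S in (0, 1):             # total intrinsic spin of the two nucleons
        positive_parity = (-1) ** L == 1
        couples_to_J = abs(L - S) <= J <= L + S
        if positive_parity and couples_to_J:
            allowed.append((L, S))
print(allowed)  # only S = 1 survives: the 3S1 (L=0) and 3D1 (L=2) states
```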
The deuteron has two possible orbital angular momentum states, the S-state and the D-state, with the S-state being dominant (approximately 96%). The deuteron isospin is T = 0.
Magnetic dipole moment
The deuteron magnetic moment was measured by Rabi et al. in 1934 by measuring
the deflection of a deuteron (“deuton”) beam by a magnetic field [29]. Since then
many measurements utilizing different methods have been performed and the value
of “0.75 ± 0.2 nuclear units” has been greatly improved upon (see table 3). There
are two sources of the magnetic dipole moment of a nucleus:
• Each nucleon has an intrinsic magnetic moment;
• Orbital motion of the proton, which carries the net charge, results in an electric current producing a magnetic field.

To explicitly account for these two sources, we can write the magnetic moment operator as

$$\mu_d = g_p\,s_p + g_n\,s_n + \tfrac{1}{2}L, \qquad (40)$$
where L is the orbital angular momentum of the nucleons' relative motion, s_n, s_p are the neutron and proton spins, respectively, and the gyromagnetic ratios for the proton and neutron are

$$g_p = 5.585695\,\mu_N, \qquad g_n = -3.826085\,\mu_N.$$
The measured value of the magnetic dipole moment (see table 3) confirms the presence of both 3 S1 and 3 D1 states in the deuteron. Indeed, for the S-state alone, the
magnetic moment would be:
µd = µp + µn = 0.879805µN ,
whereas for the D-state it can be calculated from

$$\mu_d = \frac{1}{4(J+1)}\Big[(g_p+g_n)\big(J(J+1) - L(L+1) + S(S+1)\big) + \big(J(J+1) + L(L+1) - S(S+1)\big)\Big]$$
There is no two-nucleon bound state that couples to T = 1 isospin, which is illustrated by the
non-existence of two-proton and two-neutron configurations.
In the equation (40), it is assumed that proton and neutron masses are close enough, so that
we can assign each of them half of the total relative angular momentum.
Here we assumed that the structure of the bound nucleon is the same as that of the free one,
hence we can use gp and gn of free nucleons and bound ones interchangeably.
(J being the total angular momentum of the state) to be 0.310µN for a pure D-state,
even further away from the measured value. The admixture of S- and D-states is the most probable cause of these deviations, although some contribution from virtual mesons exchanged between nucleons also plays a role [28].
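The two quoted numbers, 0.879805 µ_N for a pure S-state and 0.310 µ_N for a pure D-state, follow directly from the projection formula above:

```python
# Reproduce the pure S- and D-state magnetic moments quoted in the text from
# the projection formula, using the free-nucleon g factors.
g_p, g_n = 5.585695, -3.826085  # in nuclear magnetons

def mu_d(J, L, S):
    """Deuteron magnetic moment (in units of mu_N) for a pure (L, S) state."""
    spin_part = (g_p + g_n) * (J * (J + 1) - L * (L + 1) + S * (S + 1))
    orbital_part = J * (J + 1) + L * (L + 1) - S * (S + 1)
    return (spin_part + orbital_part) / (4 * (J + 1))

mu_S = mu_d(J=1, L=0, S=1)   # pure 3S1
mu_D = mu_d(J=1, L=2, S=1)   # pure 3D1
print(f"pure S-state: {mu_S:.6f} mu_N")   # ~0.879805, equal to mu_p + mu_n
print(f"pure D-state: {mu_D:.3f} mu_N")   # ~0.310
```

Since the measured moment (table 3) lies between the two pure-state values, an S/D admixture is required.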
Electric quadrupole moment
The electric quadrupole operator measures the lowest-order departure of a charge distribution from a spherical shape:

$$Q_0 = e\,\big(3\langle z^2\rangle - \langle r^2\rangle\big),$$
where e is the charge of the distribution, z is the z-coordinate, and r is the radial
distance. Hence the discovery of the electric quadrupole moment of the deuteron in
1939 [30] meant that the nuclear force is not central, thus being more complicated
than had been thought; this later became evidence for the role of pions in nuclear
physics [31]. It also provides more evidence for the deuteron wavefunction having an
admixture of the D-state, since the spherically symmetric S-state has a zero electric
quadrupole moment.
Deuteron size
There are two “sizes” we can use for the deuteron: the spread of its charge distribution, the charge radius r_ch, and the spread of matter, the matter radius r_m (see table 3). The former is defined by scattering experiments,

$$r_{ch}^2 = -6\,\frac{dG_C}{dQ^2}\bigg|_{Q^2=0},$$

where G_C is an elastic form factor defined later, whereas the latter is defined through the deuteron wavefunction (discussed later as well) and is related to the charge radius by

$$r_{ch}^2 = r_m^2 + r_p^2 + r_n^2 + \Delta r_m^2 + \frac{3}{4m_p^2},$$

where r_p = 0.862(12) fm is the proton charge rms radius, r_n² = −0.113(5) fm² is the (negative) mean square charge radius of the neutron, Δr_m² is a contribution from non-nucleonic degrees of freedom and is close to zero, and m_p = 938.272309(28) MeV is the proton mass. The last term on the right-hand side is the Darwin-Foldy term providing a relativistic correction for the Zitterbewegung.
The deuteron wavefunction
As was already mentioned, the deuteron wavefunction in the non-relativistic limit
should represent an admixture of S-state and D-state wavefunctions. We can write
it in general form as

$$\psi_D^M = \frac{u(r)}{r}\,\Upsilon_{101}^M + \frac{\omega(r)}{r}\,\Upsilon_{121}^M,$$

where r is the radial coordinate, u(r) and ω(r) are the reduced radial wavefunctions for the S and D states respectively, and

$$\Upsilon_{JLS}^M = \sum_{m_L, m_S} \langle J, M | L, m_L; S, m_S\rangle\, Y_L^{m_L}(\theta,\varphi)\,|S, m_S\rangle$$

are the spin spherical harmonics ($Y_L^{m_L}$ are spherical harmonics).
The Hamiltonian of the system is
H = T1 + T2 + V,
where Ti is the kinetic energy of particle i and V is the two-body potential. The
wavefunction has to satisfy the quantum equation of motion, the Schrödinger equation,

$$\left(-i\,\frac{\partial}{\partial t} + H\right)\Psi_D = 0$$

(Ψ_D is the deuteron wavefunction, t is the time), in the non-relativistic case [32], and the Weinberg equation in the relativistic case (the relativistic case is too involved to be shown briefly here; see reference [48] for more).
Solving these equations is complicated by the fact that the nuclear potential is not
known exactly. From experimental data we know that it has three distinct regions:
the hard core, scalar boson exchange region, and pion exchange region (see figure
20). It has also been shown [49] that the most general form of the non-relativistic
potential is
V (r 2 , p2 , L2 ; σ1 , σ2 , τ1 , τ2 ) = V0 (r 2 , p2 , L2 ) + Vσ (r 2 , p2 , L2 )σ1 · σ2 + Vτ (r 2 , p2 , L2 )τ1 · τ2
+ Vστ (r 2 , p2 , L2 )(σ1 · σ2 )(τ1 · τ2 ) + VLS (r 2 , p2 , L2 )L · S
+ VLSτ (r 2 , p2 , L2 )(L · S)(τ1 · τ2 ) + VT (r 2 , p2 , L2 )S12
+ VT τ (r 2 , p2 , L2 )S12 τ1 · τ2 + VQ (r 2 , p2 , L2 )Q12
+ VQτ (r 2 , p2 , L2 )Q12 τ1 · τ2 + VP P (r 2 , p2 , L2 )(σ1 · p)(σ2 · p)
+ VP P τ (r 2 , p2 , L2 )(σ1 · p)(σ2 · p)(τ1 · τ2 ),
FIG. 20: Schematic diagram showing different parts of a nucleon-nucleon potential
as a function of distance r between nucleons [28]. The hard core radius is around
0.4 fm and it takes more than 1 GeV energy to bring nucleons closer than (twice)
this distance. The main part of the attraction lies at intermediate ranges, at radius
∼1 fm, and is believed to be dominated by the exchange of scalar mesons. The long
range part, starting at around 2 fm, is due to the single-pion exchange.
where r is the relative radial coordinate, r = r1 −r2 , the vector difference of individual
nucleon coordinates, p = 12 (p1 − p2 ), σ 1 and σ 2 are spin operators of the nucleons,
τ 1 and τ 2 are isospin operators of the nucleons, L is the relative orbital angular
momentum operator, S is the total spin operator, the tensor operator is
$$S_{12} = \frac{3}{r^2}\,(\sigma_1\cdot r)(\sigma_2\cdot r) - \sigma_1\cdot\sigma_2,$$
the two-body spin-orbit operator is

$$L\cdot S = \tfrac{1}{2}\,L\cdot(\sigma_1 + \sigma_2),$$

where L = ℓ₁ + ℓ₂ and ℓ₁,₂ are the orbital angular momenta of each of the nucleons, and the quadratic spin-orbit operator is

$$Q_{12} = \tfrac{1}{2}\big[(\sigma_1\cdot L)(\sigma_2\cdot L) + (\sigma_2\cdot L)(\sigma_1\cdot L)\big].$$
The radial dependence and strength of each of the 12 terms are given by the 12 functions V₀, V_σ, etc. These functions are determined by fits to experimental data, with the hope of obtaining them from first principles once our understanding of QCD is developed enough.
The complicated form of (50) illustrates difficulties in deriving it from scratch.
On top of that, the derivation of the nuclear potential from first principles must
stem from quark-quark interactions. The problem here is that carrying out QCD
calculations at the low energy at which nuclear physics operates is out of reach at
the moment.
As a result, the best potentials we have at the moment (Paris [83], Bonn [84],
Argonne [85], etc) utilize our knowledge of hadrons as much as possible and treat
phenomenologically the aspects, mainly short-range interactions, of which we have
incomplete knowledge [28]. These potentials have been quite successful in describing
available data, and although the first principle derivation of the nuclear potential
is still absent, we have very good substitutes to work with. Thus, given the potential, we can solve for the wavefunction. The form of the wavefunction is very similar for all modern potentials. An example of the reduced radial wavefunctions for the Argonne v18 potential is given in figure 21. The u and w functions in momentum space are given by

$$u(p) = \int_0^\infty u(r)\,j_0(pr)\,r\,dr, \qquad w(p) = -\int_0^\infty w(r)\,j_2(pr)\,r\,dr,$$
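As a numerical sketch of the j₀ transform, one can feed in a simple analytic S-state shape; the Hulthén-like form below is a stand-in for a realistic u(r), with arbitrary normalization:

```python
import math

# Numerical sketch of the S-state transform u(p) = int u(r) j0(pr) r dr,
# using a Hulthen-like parametrization as a stand-in for a realistic u(r).
alpha, beta = 0.2316, 1.268  # fm^-1, the usual Hulthen range parameters

def u_r(r):
    return math.exp(-alpha * r) - math.exp(-beta * r)

def j0(x):
    """Spherical Bessel function of order 0."""
    return math.sin(x) / x if x != 0.0 else 1.0

def u_p(p, r_max=60.0, n=6000):
    """Trapezoidal-rule evaluation of the j0 transform."""
    h = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * u_r(r) * j0(p * r) * r
    return h * total

for p in (0.1, 0.5, 1.0, 2.0):  # fm^-1
    print(f"p={p:.1f} fm^-1  u(p)={u_p(p):.4f}")
```

For this shape the transform is known in closed form, u(p) = 1/(α² + p²) − 1/(β² + p²), so the numerical result can be checked directly; the momentum-space wavefunction falls off rapidly with p, as in figure 22.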
FIG. 21: The u (S-state) (solid line) and w (D-state) (dotted line) reduced radial
wavefunctions calculated with the Argonne v18 potential (from reference [31]).
where j₀(pr) and j₂(pr) are spherical Bessel functions of order 0 and 2, respectively.
The deuteron S wave function in configuration space and in momentum space is
illustrated in figure 22.
Deuteron in scattering experiments
Following the treatment of lepton-nucleon scattering from the previous sections, I will
concentrate on unpolarized elastic and inelastic scattering off the deuteron, although
it is not possible to completely avoid mentioning polarization in this case.
Elastic scattering
In the Born approximation of a one-photon exchange mechanism, the unpolarized
elastic scattering differential cross-section can be written as [33]

$$\frac{d\sigma}{d\Omega} = \sigma_{Mott}\left(A(Q^2) + B(Q^2)\tan^2(\theta/2)\right),$$
where σM ott is the Mott cross-section (see (4)), E ′ and E are the final and initial
energies of the electron, θ is the scattering angle, and A(Q2 ) and B(Q2 ) are the
elastic structure functions.
This is reminiscent of the electron-nucleon scattering case except that the
deuteron is a spin-1 particle and consequently the structure functions depend on
FIG. 22: The deuteron S wave function in configuration space and in momentum
space, calculated from the Argonne v18 potential (from reference [31]).
three elastic form factors:
$$A(Q^2) = G_C^2(Q^2) + \frac{8}{9}\,\eta^2 G_Q^2(Q^2) + \frac{2}{3}\,\eta\,G_M^2(Q^2), \qquad B(Q^2) = \frac{4}{3}\,\eta(1+\eta)\,G_M^2(Q^2),$$

where η = Q²/(4M_D²), with M_D being the deuteron mass, and G_C, G_Q, G_M are the
form factors. The bad news here is that we have two structure functions which we
can measure in experiments and three form factors on which they depend (i.e. two
equations with three unknowns). Thus, we have to turn to polarized scattering to
get more equations and solve for GC , GQ and GM . The illustrations of the A and B
structure functions are shown in figures 23 and 24 from reference [46].
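The counting problem (two observables, three form factors) can be made concrete with the standard spin-1 combinations of the form factors; the numerical values below are placeholders, not measurements:

```python
# Sketch of how the two measurable elastic structure functions A and B are
# built from the three form factors (standard spin-1 combinations). The
# form factor values used below are placeholders, not measured numbers.
M_D = 1.875613  # deuteron mass, GeV

def A_B(Q2, G_C, G_Q, G_M):
    eta = Q2 / (4 * M_D**2)
    A = G_C**2 + (8.0 / 9.0) * eta**2 * G_Q**2 + (2.0 / 3.0) * eta * G_M**2
    B = (4.0 / 3.0) * eta * (1 + eta) * G_M**2
    return A, B

A, B = A_B(Q2=1.0, G_C=0.1, G_Q=5.0, G_M=0.2)
print(A, B)  # one (A, B) pair is consistent with many (G_C, G_Q, G_M) triples
```

B pins down G_M, but A then constrains only a combination of G_C and G_Q, which is why polarization observables are needed to separate all three.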
Inelastic scattering
The unpolarized inelastic electron-deuteron scattering cross-section can be written
as [34]

$$\frac{d^2\sigma}{d\Omega\,dE'} = \sigma_{Mott}\left(W_{2D}(\nu,Q^2) + 2\,W_{1D}(\nu,Q^2)\tan^2(\theta/2)\right),$$
where W1D , W2D are deuteron inelastic structure functions, and ν is the energy
transfer in the reaction. In the same way it was done for nucleon scattering, we can
FIG. 23: The deuteron elastic structure function A(Q2 ) for Q2 > 1 GeV/c2 . The
data are from [35], [33], [36], [37], [38], [39], [40], [41], [42]. The solid line is the model
fit from [46]; the dotted line is the pQCD asymptotic behavior extrapolated to lower
momentum transfer.
FIG. 24: The deuteron elastic structure function B(Q2 ). The data are from [43],
[37], [38], [44], [45]. The solid line is the model fit from [46]; the dotted line is the
pQCD asymptotic behavior extrapolated to lower momentum transfer.
define dimensionless structure functions [48] (see figure 25 for the graph of F2D ):
$$F_{1D} = M_D\,W_{1D}(\nu, Q^2), \qquad F_{2D} = \nu\,W_{2D}(\nu, Q^2).$$
As the momentum transfer goes to infinity, we can use the form convenient for a comparison with the parton model and QCD predictions [48] (these, as well as all the formulas from this reference, are written in the light-cone approximation):

$$F_{1D}(x,Q^2) = \sum_{N=p,n}\int F_{1N}\!\left(\frac{x}{\alpha}, Q^2\right)\rho_D^N(\alpha,k_\perp)\,\frac{2\,d\alpha\,d^2k_\perp}{\alpha}, \qquad F_{2D}(x,Q^2) = \sum_{N=p,n}\int F_{2N}\!\left(\frac{x}{\alpha}, Q^2\right)\rho_D^N(\alpha,k_\perp)\,\frac{2\,d\alpha\,d^2k_\perp}{\alpha}, \qquad (59)$$

where the density matrix is

$$\rho_D^N(\alpha,k_\perp) = \frac{\sqrt{m^2+k^2}}{2-\alpha}\,\big(U^2(k) + W^2(k)\big), \qquad (60)$$

$$\alpha = 1 + \frac{k_z}{\sqrt{m^2+k^2}}$$

is the light-cone momentum fraction, and

$$k^2 = \frac{m^2+k_\perp^2}{\alpha(2-\alpha)} - m^2$$

corresponds to the nucleon momentum in the center-of-mass system. Here ψ_D(α, p_⊥) is the deuteron wavefunction, U and W are the S- and D-state deuteron wavefunctions in momentum representation, x = Q²/(qP_D) with q being the momentum transfer, P_D is the deuteron 4-momentum, and the factor 2 − α is due to the two-nucleon phase space. In the lab frame, α = (√(m²+k²) − k_z)/(M_D/2), where z points along the direction of q̂.
Equations (59) have a simple parton interpretation: the probability of finding a
parton in the deuteron carrying a fraction of the deuteron momentum x/2 is equal
to the product of the probability of finding a nucleon with a fraction of the deuteron
momentum α/2 and the probability of finding a parton in the nucleon with a fraction
of nucleon momentum x/α.
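This probabilistic reading can be illustrated with a toy convolution: for a nucleon light-cone distribution sharply peaked at α = 1 (a weakly bound deuteron), the smeared F₂D stays close to the underlying nucleon structure function. Both input shapes below are hypothetical, chosen only to make the smearing visible:

```python
import math

# Toy illustration of the convolution picture: deuteron DIS as scattering
# off a nucleon with light-cone fraction alpha, times scattering off a
# parton inside that nucleon. rho and F2N are hypothetical shapes.

def rho(alpha, width=0.04):
    """Toy nucleon light-cone distribution, narrowly peaked at alpha = 1."""
    return math.exp(-((alpha - 1.0) / width) ** 2)

def F2N(x):
    """Toy nucleon structure function."""
    return x**0.5 * (1 - x)**3 if 0.0 < x < 1.0 else 0.0

def F2D(x, n=2000):
    """Convolution F2D(x) = <F2N(x/alpha)> over rho, normalized to unity."""
    lo, hi = 0.7, 1.3
    h = (hi - lo) / n
    num = den = 0.0
    for i in range(n + 1):
        a = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        num += w * F2N(x / a) * rho(a)
        den += w * rho(a)
    return num / den

for x in (0.2, 0.5, 0.8):
    print(f"x={x:.1f}  F2N={F2N(x):.4f}  F2D={F2D(x):.4f}")
```

Broadening the α distribution (larger Fermi motion) increasingly distorts F₂D away from F₂N, which is the nuclear-smearing effect that must be corrected for when extracting neutron information from the deuteron.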
After decades of studies of the partonic structure of nucleons, our knowledge of the
relative d and u quark densities in the large Bjorken x region is still unsatisfactory
FIG. 25: The deuteron structure function F_{2D} per nucleon at Q² = 1.925 (GeV/c)². The red and blue points show the CLAS data from the E6a and E1d run periods, respectively; others are indicated on the graph. The curve is the phenomenological model from [47].
(see, for example, figure 17). Different methods have been proposed in order to
obtain the large-x n/p (or, equivalently, d/u) ratio, but all of them have been plagued by model uncertainties and large nuclear corrections. None of them has been able to discriminate between the different limits on the ratio F₂ⁿ/F₂ᵖ as x → 1 (see figure 17). The promising experiments utilizing neutrino and anti-neutrino scattering off proton targets, which can measure the u and d distributions separately, suffer from relatively low statistics [50].
The measurement of tagged structure functions in semi-inclusive deep inelastic scattering with a slow recoil proton detected in the backward hemisphere,

$$e + D \to e + p + X, \qquad (63)$$

can help us resolve the ambiguities introduced by nuclear model dependence and extract the ratio of neutron to proton structure functions in the high-x region, hence accessing the long-sought d/u distribution ratio.
The measurements performed on bound nucleons yield “effective” structure functions that are not guaranteed to be very close to free nucleon structure functions.
Nevertheless, by selecting only the slowest recoil protons and backward scattering angles, we are able to measure them in the region where the target nucleon is almost on-shell, thus enabling us to extract the F₂ⁿ structure function with minimal model uncertainties.
Spectator tagging
The general formula for the cross-section of process (63) is [50]

$$\frac{d\sigma}{dx\,dQ^2\,d^3p_s/E_s} = \frac{4\pi\alpha_{em}^2}{xQ^4}\left(1 - y - \frac{x^2y^2m_N^2}{Q^2}\right)\left[F_L^D + \left(\frac{Q^2}{2\vec q^{\;2}} + \tan^2\frac{\theta}{2}\right)\frac{\nu}{m_N}\,F_T^D + \cos\phi\,F_{TL}^D + \cos(2\phi)\,F_{TT}^D\right], \qquad (64)$$

where the four-momentum of the virtual photon is q ≡ (ν, q⃗), Q² is the usual −q², the recoil nucleon has four-momentum p_s ≡ (E_s, p⃗_s), y = ν/E_e (E_e is the initial electron energy), m_N is the nucleon mass, ν is the energy transfer, α_em is the electromagnetic coupling constant, and φ is the azimuthal angle of the recoil nucleon (the z axis is aligned with the direction of q⃗). The four nuclear structure functions
FIG. 26: Two main diagrams contributing to the reaction (63) in the region of
α > 1. The diagram (a) represents the usual impulse approximation describing the
interaction of the virtual photon with only one nucleon with no further rescattering.
The diagram (b) describes final state interactions: after the interaction of the virtual
photon with a nucleon, products of the reaction interact between themselves.
F_L^D, F_T^D, F_{TL}^D, and F_{TT}^D depend on Q², x, α_s, and p_{s⊥}, where α_s is the light-cone momentum fraction of the spectator. After azimuthal angle integration, (64) becomes
$$\frac{d\sigma}{dx\,dQ^2\,d^3p_s/E_s} = \frac{4\pi\alpha_{em}^2}{xQ^4}\left(1 - y - \frac{x^2y^2m_N^2}{Q^2}\right)\left[F_{2D}^{SI} + 2\tan^2\frac{\theta}{2}\,\frac{\nu}{m_N}\,F_{1D}^{SI}\right], \qquad (65)$$

where

$$F_{2D}^{SI}(x,Q^2,\alpha_s,p_\perp) = F_L^D + \frac{Q^2\,\nu}{2\,\vec q^{\;2}\,m_N}\,F_T^D, \qquad F_{1D}^{SI}(x,Q^2,\alpha_s,p_\perp) = F_T^D/2.$$
There are two possible reactions that can produce (63): the direct process in which
the electron scatters off the nucleon going backwards in the deuteron rest frame
(nucleon with α > 1) and the spectator reaction itself in which scattering takes
place on the unobserved nucleon, which gets knocked out and therefore releases its
neighbor-spectator. It has been shown [48] that in the kinematic region where α > 1
the contribution of the direct process is negligible, thus enabling us to utilize this
region for the study of the spectator reaction.
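The α > 1 selection is easy to see numerically from the lab-frame light-cone fraction α_s = (E_s − p_z)/(M_D/2), with z along q⃗: backward spectators give α_s > 1, forward ones α_s < 1:

```python
import math

# Light-cone momentum fraction of the spectator proton in the lab frame,
# alpha_s = (E_s - p_z)/(M_D/2), with z along the virtual photon direction q.
M_D, m_p = 1.875613, 0.938272  # GeV

def alpha_s(p, theta_pq_deg):
    """p: spectator momentum in GeV/c; theta_pq: angle relative to q."""
    E_s = math.sqrt(m_p**2 + p**2)
    p_z = p * math.cos(math.radians(theta_pq_deg))
    return (E_s - p_z) / (M_D / 2)

print(alpha_s(0.07, 170.0))   # slow backward proton: alpha_s > 1
print(alpha_s(0.07, 10.0))    # forward proton:       alpha_s < 1
```

Cutting on α_s > 1 therefore amounts to keeping slow, backward-going protons, which is exactly the kinematic region where the spectator mechanism dominates.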
In this kinematic region two main diagrams will contribute to (63) [50]: the
impulse approximation diagram 26a, and the “final state interactions” diagram 26b
which accounts for rescattering of the spectator nucleon off the debris of the DIS reaction.
I will return to final state interactions in section II.8.2. For now I will concentrate
on the impulse approximation described by figure 26a.
In the nuclear impulse approximation, we can write down the scattering amplitude
as [50]

$$A^\mu_{IA} = \langle X|\,J^\mu_{em}(Q^2,\nu,p_s)\,\frac{\not p_d - \not p_s + m}{m_N^2 - t}\,\bar u(p_s)\,\Gamma_d, \qquad (67)$$

where t = (p_d − p_s)², Γ_d is the covariant d → pn transition vertex, and $J^\mu_{em}(Q^2,\nu,p_s)$ is the electromagnetic DIS operator of electron scattering off the bound nucleon; p̸ = γ^μ p_μ, with γ^μ being the Dirac matrices.
Taking the recoil nucleon to be on-mass-shell and using
$$\not p_d - \not p_s + m \approx \sum_{\rm spins} u(p_d - p_s)\,\bar u(p_d - p_s),$$
we can factorize the amplitude (67) into two parts: the DIS current of the bound
nucleon, $J^\mu_{X,N} = \langle X|\,J^\mu_{em}(Q^2,\nu,p_s)\,u(p_d - p_s)$, and the wave function of the deuteron.
With this factorization the nuclear DIS structure functions can be expressed
through the convolution of bound nucleon DIS structure functions and the nuclear
spectral function, S [50]:
$$F_{1D}^{SI}(x,Q^2,\alpha_s,p_\perp) = S(\alpha_s,p_\perp)\left[F_{1N}^{eff}(\tilde x,Q^2,\alpha,p_\perp) + \frac{p_\perp^2}{2\,p\cdot q}\,F_{2N}^{eff}(\tilde x,Q^2,\alpha,p_\perp)\right], \qquad (69a)$$

$$F_{2D}^{SI}(x,Q^2,\alpha_s,p_\perp) = S(\alpha_s,p_\perp)\left[\frac{1}{4}\Big((1+\cos\delta)\,\alpha + 2\alpha_q\Big)^2 + \sin^2\delta\,\frac{p_\perp^2}{2m_N^2}\right]F_{2N}^{eff}(\tilde x,Q^2,\alpha,p_\perp), \qquad (69b)$$

where F_{1N}^{eff} and F_{2N}^{eff} are the structure functions of the bound nucleon, sin²δ = Q²/q⃗²; the modified Bjorken x, x̃, is described in appendix B; the nuclear spectral function S describes the probability of finding an interacting nucleon in the target with momentum (α, p_⊥) and a recoil nucleon in the final state of the reaction with momentum (α_s, p_{s⊥}) (in the impulse approximation, α_s + α = 2, p_⊥ = −p_{s⊥}). The normalization n entering S is also model dependent. In the light-cone approximation

$$n = 2 - \alpha_s.$$
This gives us essentially the same object as the density matrix of equation (60), with a different “normalization”. Indeed, in the light-cone approximation
$$S(\alpha_s, p_{s\perp}) = \frac{E_k}{2 - \alpha_s}\,\rho(\alpha_s, p_{s\perp}).$$
Using (69), (65) takes the following form:
$$\frac{d\sigma}{dx\,dQ^2\,(d^3p_s/E_s)} \;\propto\; \frac{S(\alpha_s,p_\perp)}{m_N\,\nu}\left\{\left[(1+\cos\delta)^2(\alpha + 2\alpha_q) + \sin^2\delta\,\frac{p_\perp^2}{m_N^2}\right]F^{eff}_{2N}(\tilde x,Q^2,\alpha,p_\perp) + 2\tan^2\frac{\theta}{2}\left[F^{eff}_{1N}(\tilde x,Q^2,\alpha,p_\perp) + \frac{p_\perp^2}{2\,p\cdot q}\,F^{eff}_{2N}(\tilde x,Q^2,\alpha,p_\perp)\right]\right\},$$
where $\alpha_q \equiv (\nu - |q|)/m_N$; the omitted overall factor is the Mott-type prefactor, which contains the $x^2 y^2 m_N^2/Q^2$ terms.
Corrections to impulse approximation
Equations (69) present a nice way of accessing bound nucleon structure functions.
Extrapolating them to the nucleon pole will allow us to find free nucleon structure
functions. However, when final state interactions (FSI), off-shell effects, and model uncertainties are accounted for, things can get complicated. Nevertheless, the choice of backward kinematics combined with slow spectator protons minimizes these effects: final state interactions, the on-shell extrapolation, deuteron wavefunction ambiguity, and target fragmentation. Let us look more closely at these
sources of uncertainty.
Spectral function ambiguity
The nuclear spectral function S is a model dependent quantity whose form depends
on the formalism used to describe the interaction (i.e. instant form vs light cone
formulation). However, it turns out that at spectator momenta $|\vec p_s| \lesssim 0.5$ GeV/c,
the difference between these approaches is not very large [51]. Figure 27 (from [51])
illustrates the αs and p⊥ dependence of the ratio of spectral functions calculated in
instant form and light cone approaches. For p⊥ ≤ 0.1 GeV/c, the light cone and instant form approaches differ by up to 20% for αs ≤ 1.5. The uncertainty in the spectral function can be further reduced by choosing isolated values of α ≤ 1.2 or α ∼ 1.4; in these cases the difference does not exceed 10%. In the BONuS experiment, we restricted the kinematic region of interest to p⊥ ≤ 0.1 GeV/c and 1.0 < α ≤ 1.1.
FIG. 27: The ratio of nuclear spectral functions calculated in the light cone and
instant form formalisms as a function of light cone momentum fraction αs . The dependence is shown for five values of the transverse momentum close to the kinematic
region of interest. From [51].
Target fragmentation
A large rapidity gap between the spectator proton and the hadronic debris from the
struck neutron ensures a very small production of low momentum protons originating from the latter. Contributions from the direct quark to proton fragmentation
can be large in the current fragmentation region (forward hemisphere) whereas they
are strongly suppressed in the target fragmentation region (backward hemisphere).
The decrease in the spectator proton momentum should also decrease the direct
fragmentation contribution. As can be seen from figure 28, the effects of target fragmentation are noticeable only in the forward hemisphere (current fragmentation region) and are totally negligible in the region of interest (the target fragmentation region).
Off-shell effects
The degree to which the struck neutron is off-shell is
$$M^2 - p^2 \approx 2\,\vec p_s^{\;2} + 2M|\epsilon|\,,$$
where $\epsilon = -2.2$ MeV is the deuteron binding energy. Thus, the lower spectator
momentum we have, the closer to on-shell the neutron is and the simpler the extrapolation of the structure functions to their on-shell values will be. In convolution
models, off-shell effects in the leading twist arise either kinematically or dynamically. Kinematic effects, coming from the transverse motion, can be calculated with
very little model dependence [24]. Dynamic effects, emerging due to modifications
of bound nucleon intrinsic structure, are unfortunately model-dependent and need
some further discussion. Let us look at some models for the dynamic effects and
estimate the plausibility of the on-shell extrapolation in each of them.
1. Covariant spectator model. In this model [56], nucleon-quark-diquark interactions in deep inelastic scattering are parameterized by relativistic vertex
functions. The functions are constrained by fitting to the on-shell proton functions and comparing the calculated deuteron structure function with the inclusive F2d data. The ratio of the bound to free neutron structure functions
calculated in this model (see figure 29) is rather close to unity at low (around
100 MeV/c) spectator momenta. For the highest shown Bjorken x (x=0.6)
curve, it is within 1% of unity and even closer for lower x.
FIG. 28: Ratio of the plane wave impulse approximation (PWIA) corrected for target fragmentation (TF) to the pure PWIA calculation as a function of the center-of-mass angle, θpq, between the spectator proton and the virtual photon, at x = 0.6, shown for two values of the momentum exchange: Q² = 4 (GeV/c)² (upper panel, for spectator momenta p = 0.3, 0.4, and 0.5 GeV/c) and Q² = 1 (GeV/c)² (lower panel, p = 0.3 GeV/c). From reference [53].
2. Relativistic quark spectral function. Here the bound nuclear structure
function is evaluated as the free nucleon structure function at a shifted value
of the quark light-cone momentum fraction α [24]. The shift depends on the
mass of the spectator diquark system, the bound nucleon momentum, and the
binding energy. The ratio of the bound to free neutron structure functions
calculated in this model (see figure 30) is within 2% of unity for small (around
100 MeV/c) spectator momenta values. The general behavior of the spectator
momentum dependence is consistent with that of the covariant spectator model
with the biggest difference being the less pronounced Bjorken x dependence of
the result in this model.
3. Instant form approach. Here the bound nuclear structure function is evaluated as the free structure function at a shifted energy transfer value [25]. The
shift depends on the binding energy. The ratio of the bound to free neutron
structure functions calculated in this model (see figure 31) at Q² = 1 (GeV/c)² is within 1% of unity at all angles for small momenta (around 100 MeV/c).
4. Color screening model. In this model [26], the bulk of the EMC effect is
attributed to a medium modification of the bound nucleon, not to the nuclear
binding. A larger deviation of the bound to free structure function ratio from
unity is calculated by this model. Still, this deviation is proportional to $2\,\vec p_s^{\;2} + 2M|\epsilon|$, thus making the extrapolation to the free nucleon pole possible once the ratio is found for several values of $p_s$.
To summarize, in all the models from the representative sample discussed above the
deviation of the bound structure function from the free structure function is either
very small (within a couple of percent) or a relatively easy extrapolation to the free
nucleon pole is deemed possible. At the lower edge of our momentum acceptance ($|\vec p_s| \approx 70$ MeV/c), the neutron is only about 7 MeV off its mass shell, making off-shell effects small and the on-shell extrapolation relatively painless.
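A back-of-the-envelope check of the quoted numbers (a minimal sketch; the nucleon mass value and the conversion of the virtuality $M^2 - p^2$ to an equivalent mass shift are my assumptions):

```python
# Off-shellness of the struck neutron, M^2 - p^2 ~ 2*p_s^2 + 2*M*|eps|.
# All quantities in GeV (c = 1); m_N = 0.939 GeV assumed for M.
m_N = 0.939
eps = 0.0022   # deuteron binding energy, 2.2 MeV

def virtuality(p_s):
    """Approximate virtuality of the bound neutron for spectator momentum p_s."""
    return 2.0 * p_s**2 + 2.0 * m_N * eps

p_s = 0.070                         # 70 MeV/c, lower edge of the acceptance
dm = virtuality(p_s) / (2.0 * m_N)  # equivalent mass shift, (M^2 - p^2)/(2M)
print(round(dm * 1000, 1))          # -> 7.4 (MeV), i.e. only ~7 MeV off shell
```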
FIG. 29: Ratio $R_n \equiv F_2^{n(eff)}(W^2,Q^2,p^2)/F_2^{n}(W^2,Q^2)$ of the bound to free neutron structure functions as a function of the spectator proton momentum $|p|$ (MeV/c) in the covariant spectator model for several values of x. For the calculations shown, $Q^2 \sim 4$ (GeV/c)², although the $Q^2$ dependence is rather weak for $Q^2 > 1$ (GeV/c)². In the model of [56].
FIG. 30: Ratio $R_n \equiv F_2^{n(eff)}(W^2,Q^2,p^2)/F_2^{n}(W^2,Q^2)$ of the bound to free neutron structure functions as a function of the spectator proton momentum $p$ (MeV/c) in the relativistic quark spectral function approach for several values of x. In the model of [24].
FIG. 31: Ratio of the bound to free neutron structure functions (PWIA(q)/PWIA) calculated in the instant form approach as a function of the angle θpq, at Q² = 1 (GeV/c)² and x = 0.6. The bound structure function in this model is calculated from the free structure function by shifting the energy exchange ν and thus shifting the x and Q² at which the nucleon structure function is evaluated. Both are calculated in the plane wave impulse approximation. Graphs for three values of the spectator momentum (p = 0.10, 0.15, and 0.20 GeV/c) are shown, calculated in the model of [57].
Final state interactions
Let us return to the second diagram describing the spectator proton scattering, figure
26b. It can be represented [50] in the most general form as
$$A^\mu_{FSI} = \sum_{X'}\int d^4 p_{s'}\;\hat A_{FSI}\;G(X')\;\hat J^\mu_{em}\;\frac{\slashed p_d - \slashed p_{s'} + m_N}{(p_d - p_{s'})^2 - m_N^2 + i\epsilon}\;\frac{\slashed p_{s'} + m_N}{p_{s'}^2 - m_N^2 + i\epsilon}\;\Gamma_d\,, \qquad (74)$$
where $\hat J_{em}(Q^2, x)$ and $\hat A_{FSI}$ represent the operators of DIS and FSI scattering, and $G(X')$ denotes the propagation of the intermediate state $X'$. The amplitude (74)
is too general to perform any realistic calculations, but one important fact can be
deduced from its form [50]: it is not singular at the nucleon pole, thus allowing the
extraction of the free nucleon structure by extrapolating to the pole.
Having established the possibility of extrapolating to the pole in principle, let us
turn to evaluating FSI in order to estimate the practical plausibility of the procedure.
Backward angles chosen for the BONuS experiment served the purpose of minimizing rescattering of the spectator proton by the deep inelastic remnants of the scattered neutron (also known as FSI). Although a direct calculation of the FSI is not possible at the moment, we can look at what different theoretical models tell us about their magnitude.
In the distorted wave approximation [51], the model due to W. Melnitchouk et
al., the effect of FSI would be to modify the spectral function, S → S^DWIA, with
$$S^{DWIA}(\alpha, p_\perp \approx 0) \;\sim\; S(\alpha, p_\perp \approx 0)\,\big[\,1 - \Delta\,\big], \qquad (75)$$
where the correction $\Delta$ is proportional to the effective debris-nucleon cross-section $\sigma_{eff}$ and depends on $\langle r_{pn}\rangle$, the average separation of the nucleons within the deuteron, on the spectator energy $E_s$, and on $E_s(\langle p_\perp^2\rangle) = \sqrt{M^2 + p_{zs}^2 + \langle p_\perp^2\rangle}$, the energy evaluated at the average transverse momentum transferred in the hadronic soft-core interactions. Due to the steep momentum dependence of the deuteron
wavefunction, FSI effects are suppressed in backward kinematics, p⊥ ≈ 0, where FSI
contribute less than 5% to the overall uncertainty of the cross-section for α < 1.5
(see figure 32).
The effective pX cross-section can be approximated [16] by that extracted from soft neutron
production in the high energy DIS of muons from heavy nuclei, where σef f ≈ 20 mb can be used
as an upper limit of the cross-section. The average transverse momentum for this value of the
cross-section can be taken to be 200-300 MeV/c.
FIG. 32: The αs dependence of the ratio of the light cone spectral function with FSI
effects included calculated in DWIA framework, to that without FSI effects [51]. The
dependence is shown for five values of transverse momentum pT .
In a model due to Ciofi degli Atti et al in which FSI are due to the struck nucleon
debris propagation and hadronization [52], the effective cross-section is not constant,
but grows logarithmically as a function of time (or longitudinal distance z): as the quarks get further away from each other, the color tube stretches and radiates gluons (see figure 33). Including effects of both color tube breaking and gluon bremsstrahlung,
the effective cross-section can be written as
$$\sigma_{eff}(t) = \sigma^{NN}_{tot} + \sigma^{\pi N}_{tot}\,\big[n_M(t) + n_G(t)\big],$$
where $\sigma^{NN}_{tot}$ and $\sigma^{\pi N}_{tot}$ are the total nucleon-nucleon and pion-nucleon cross-sections, and $n_M(t)$ and $n_G(t)$ are the effective numbers of created mesons and radiated gluons
correspondingly. Then the cross-section can be evaluated [52] by replacing the struck
nucleon momentum distribution with the distorted momentum distribution
$$S^{PWIA}(\vec p_s)\;\to\;S^{FSI}(\vec p_s) = \frac{1}{3}\,\frac{1}{(2\pi)^3}\sum_{M_d}\left|\int d\vec r\;\Psi_{1,M_d}(\vec r)\,S(\vec r)\,\chi_f^{\dagger}\,e^{-i\vec p_s\cdot\vec r}\right|^2,$$
where $\vec r = \vec b + z\,\vec q/|\vec q|$ is the relative coordinate, $z$ is its longitudinal component, $\vec b$ is its transverse component, $\chi_f$ is the spin wavefunction of the final state, and $S(\vec r)$ is the S-matrix for the FSI interaction between the debris and the spectator nucleon:
$$S(\vec r) = 1 - \Theta(z)\,\frac{\sigma_{eff}(z)\,(1 - i\beta)}{4\pi b_0^2}\;e^{-b^2/2b_0^2},$$
FIG. 33: The debris-nucleon effective cross-section, σeff, as a function of the longitudinal distance z, shown for Q² = 2, 5, and 10 GeV² [54].
where Θ(z) is the step function, and β is the ratio of the real to imaginary parts
of the scattering amplitude. The FSI in this model are not large at small spectator momenta, $p_s$, and large scattering angles, θ (see figure 34), continuing the familiar pattern of FSI suppression in backward kinematics.
To summarize the discussion of the corrections to the impulse approximation
for the spectator scattering, these corrections, including the off-shell corrections,
target fragmentation, and final state interactions, are minimized at small spectator momenta, $p_s \lesssim 100$ MeV/c, and backward scattering angles, θpq > 120 degrees,
thus making the extraction of the free neutron structure information from the bound
neutron data a doable task. And for the rare models in which those corrections are non-negligible, getting a few data points in the small-momentum, large-angle region and extrapolating to the nucleon pole is still quite a plausible procedure (see the next section).
FIG. 34: The momentum and angular dependence of the ratio of the spectral function calculated accounting for FSI to the spectral function calculated in the impulse approximation, for Q² = 5 (GeV/c)² and x = 0.2. In the left panel, containing the momentum dependence, the dashed line illustrates the case of a constant effective cross-section, σeff = 20 mb, while the solid line illustrates the case of an effective cross-section changing with time and momentum exchange (cf. figure 33) [55].
Alternative way of extracting F2 structure function
To finish the discussion of the spectator tagging, let us consider a procedure of
extracting the free F2 structure function for the struck nucleon that does not demand
backward scattering angles and negligible FSI [50].
The procedure is based on the pole extrapolation, the technique mentioned above
for the model with relatively large FSI. Let us introduce an extraction factor:
$$I(p_s,t) = \frac{m_N\,\nu\;(m_N^2 - t)^2}{E_s\,\big[\mathrm{Res}(\Psi_d(T_{pole}))\big]^2\left[(1+\cos\delta)^2(\alpha + 2\alpha_q) + \sin^2\delta\,p_\perp^2/m_N^2\right]}\,,$$
where $T_{pole}$ is the spectator kinetic energy at the nucleon pole,
$$T_{pole} = -\,\epsilon_d\,\frac{m_n}{2 m_p} \approx -\frac{\epsilon_d}{2}\,,$$
$\epsilon_d$ being the deuteron binding energy; the residue (see appendix C for more on residues) of the deuteron wavefunction at the pole is
$$\mathrm{Res}(\Psi_d(T_{pole})) = C\ \mathrm{GeV}^{1/2}\,,$$
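As a quick consistency check on the pole position (assuming an on-shell spectator proton in the deuteron rest frame and $m_d = m_p + m_n - \epsilon_d$), setting $t = m_n^2$ gives

```latex
t = (p_d - p_s)^2 = m_d^2 + m_p^2 - 2 m_d E_s = m_n^2
\;\Longrightarrow\;
T_{pole} = E_s - m_p = \frac{(m_d - m_p)^2 - m_n^2}{2 m_d}
\approx -\,\epsilon_d\,\frac{m_n}{m_d} \approx -\frac{\epsilon_d}{2}\,,
```

keeping only the term linear in $\epsilon_d$, and $m_d \approx 2 m_p$ reproduces the $m_n/2m_p$ factor.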
where C is a number that slightly depends on the potential used to calculate deuteron
wavefunction (for example, C = 0.3939 for the Paris potential, C = 0.3930 for the Bonn potential). Using $I(p_s,t)$ we can define the extracted structure function as
$$F_2^{extr}(Q^2, x, t) = I(p_s,t)\cdot F_{2D}(x, Q^2, \alpha_s, p_\perp), \qquad (82)$$
where $F_{2D}$ is defined in (65). In the PWIA we could use (69) with the regular spectral function to calculate $F_{2D}$. In this case, the extrapolation of $t$ to $m_N^2$ would give
$$F_2^{extr} \to F^{eff}_{2N}(x, Q^2, \alpha = 1, p_\perp = 0) = F^{free}_{2N}(x, Q^2).$$
When FSI effects are considered, we need to evaluate the extracted structure
function within the DWIA framework, in which case we have to use the spectral
function of equation (75). And again, we can extrapolate $F_{2D}$ to the values of $t \to m_N^2$.
It was noted [50] that if the spectator kinetic energy is much less than the energy scale corresponding to higher-mass singularities in the deuteron wavefunction, equation (82) is a quadratic function of $t' \equiv t - m_N^2$. This means that we can successfully extrapolate to the nucleon pole by using a quadratic fit of the data constructed according to (82) for small and finite values of $t'$. Additional problems arise due to deviations of (82) from the quadratic form caused by the variation of other kinematic variables involved in the reaction (the α dependence of $F_{2N}$, higher-twist effects due to the sensitivity of the structure function to the final mass of the DIS scattering at intermediate $Q^2$). However, as was demonstrated in reference [50], equation (82) has a nice quadratic form in the region α ≈ 1. This means that the uncertainties due to the other kinematic variables will be minimized, and we can extract the structure function $F_2$ with minimal model dependence in this region.
In practice, this means that we should conduct our measurement at θpq ≈ 90◦
(where α ≈ 1). Ideally, we should probe as high a $Q^2$ region as possible, since higher-twist effects are reduced when $Q^2$ gets large. Plotting the extracted structure function in this region as a function of $t - m_N^2$, with subsequent extrapolation to 0 using a quadratic functional form, should give us the structure function value at the free nucleon pole ($t = m_N^2$).
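The fitting step can be sketched as follows (synthetic data with illustrative coefficients, not BONuS data; the sketch only shows that a quadratic fit in $t'$ recovers the assumed pole value):

```python
import numpy as np

# Pole extrapolation sketch: the extracted structure function, assumed quadratic
# in t' = t - m_N^2 near the pole, is fit and then evaluated at t' = 0.
tprime = np.array([-0.020, -0.035, -0.050, -0.065, -0.080])  # GeV^2, off the pole
f2_free = 0.35                                       # assumed free-nucleon value
f2_extr = f2_free + 1.2 * tprime + 4.0 * tprime**2   # synthetic quadratic data

coeffs = np.polyfit(tprime, f2_extr, 2)  # quadratic fit in t'
f2_pole = np.polyval(coeffs, 0.0)        # extrapolate to the nucleon pole
print(round(float(f2_pole), 3))          # -> 0.35, the input free value
```

In a real analysis the fitted points would carry statistical errors, and the extrapolation uncertainty would come from the fit covariance.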
The BONuS experiment was conducted in Hall B of the Thomas Jefferson National
Accelerator Facility. A deuterium target, the Continuous Electron Beam Accelerator Facility (CEBAF) electron beam, the CEBAF Large Acceptance Spectrometer
(CLAS) and a novel Radial Time Projection Chamber (RTPC), designed specially
for this experiment, were used.
The CEBAF accelerator is a superconducting radio frequency (RF) electron accelerator facility (see figure 35). The accelerator uses a state-of-the-art photo-cathode
gun system that is capable of delivering beams of high polarization and high current
to Hall A and Hall C while maintaining high polarization, low current beam delivery
to Hall B. An RF chopping system operating at 499 MHz is used to develop a 3-beam
1497 MHz bunch train at 100 keV. The beam is then longitudinally compressed in
the bunching section to provide 2 picosecond bunches, which are then accelerated to
just over 1% of the total machine energy in the remaining injector section. The beam
polarization, optics and energy are verified in the injector matching region prior to
injection into the main machine. The beam from the injector is accelerated through a
recirculating beam line, with two linear accelerators (linacs) joined by two 180◦ arcs
with a radius of 80 meters. Twenty cryomodules, each containing eight superconducting niobium cavities, line the two linear accelerators. Liquid helium, produced
at the Lab Central Helium Liquefier (CHL), keeps the accelerating cavities superconducting at a temperature of 2 Kelvin. The linac energies are each set identically
and the RF cavities are phased to provide maximum acceleration. Subsequent passes
through the accelerator are phased to maximum energy gain by adjusting the length
of travel in the dogleg section of the preceding arc. Quadrupole and dipole magnets
in the tunnel steer and focus the beam as it passes through each arc. More than
2,200 magnets are necessary to keep the beam on a precise path and tightly focused.
Beam is directed into a hall transport channel using magnetic or RF extraction. The
RF scheme uses 499 MHz cavities, which kick every third bunch out of the machine.
The accelerator can deliver any one of the first four passes to one hall only; the fifth pass can be sent to all three halls simultaneously.
FIG. 35: The schematics of the accelerator.
At the time of the BONuS experiment, the accelerator could produce electron
beams of up to 5.5 GeV of energy (down from 5.8 GeV achievable a few years ago),
with polarization up to 86%. An upgrade that will increase the output energy to
around 12 GeV has been approved and will soon be under construction.
During the BONuS experiment, beam energies of 1.100, 2.142, 4.226, and 5.268 GeV were used, with beam currents of 2 - 55 nA. The beam polarizations for the various run periods were 23%, 73.9%, 78.5%, 81%, and 85 - 87%.
The Hall B end station, the smallest of the three, houses CLAS, the largest acceptance particle detector at Jefferson Lab (see figure 36). CLAS provides an almost 4π angular
coverage, covering a θ range of 8◦ - 142◦ and approximately 80% of 2π in φ.
(Nominal beam energies are quoted; the actual beam energies received can differ by up to 10 MeV from the quoted values.)
FIG. 36: CLAS in Hall B.
It is designed to track charged particles with initial momenta ≥ 200 MeV/c, with a track
resolution for 1 GeV/c particles of δp/p ≤ 0.5% for reconstructed momenta, and δθ,
δφ ≤ 2 mrad for reconstructed angles.
It is built around a magnet, which consists of six iron-free superconducting coils,
providing a toroidal magnetic field bending charged particles in the θ direction. The
coils separate CLAS into 6 independent tracking areas known as sectors, with particle
detectors repeating in different sectors.
CLAS consists of several particle detectors (see figures 37, 38):
1. Drift chambers, which determine charged particle trajectories;
2. Cherenkov detectors for electron-pion separation;
3. Scintillation counters for time-of-flight measurements;
4. Calorimeters to identify electrons and neutral particles.
FIG. 37: CLAS, 2-dimensional view, showing the cross-section through 2 opposite
sectors. The standard CLAS configuration with the added BONuS RTPC in the middle and one scattering event is shown. The standard CLAS detectors are described
in the text.
FIG. 38: CLAS, 3-dimensional view.
Drift chambers
Drift chambers (DC) are particle detectors capable of determining the trajectory of
a charged particle. The DC is a cousin of the multi-wire (or simply wire) chamber,
which, in turn, was an advancement of the Geiger counter and the proportional
counter. In the Geiger counter, a wire at high voltage is enclosed in a tube filled
with a gas, which is ionized by passing charged particles, and this ionization triggers
the detector. In the proportional counter, the energy of the charged particle can
be determined, since the ability of a particle to ionize gas changes with the particle
kinetic energy and mass. Putting many wires together in a box produces a wire chamber, which allows one to approximately determine the particle trajectory by looking at which wires have been triggered by the passing particle.
If one also knows the time it took the ionization electrons to “drift” from the ionization point to the wire, the accuracy of the trajectory determination is greatly increased, and we have what is known as a drift chamber.
Drift chambers play an important role in the CLAS detector, allowing for the determination of particle momenta and trajectories.
FIG. 39: Representation of a portion of the layout of a Region 3 chamber, showing the layout of its two superlayers. In the upper right corner, the edges of several Cherenkov modules are visible.
There are three multi-layer
drift chambers at different radial locations, which are called DC “regions”, in each of
the six CLAS sectors, for tracking charged particles produced in a target situated on
the axis of the toroidal magnet. The region 1 DC surrounds the target in the area of
low magnetic field, the region 2 DC is located between the magnet coils in the area
of high magnetic field, and the region 3 DC is located outside of the magnet coils.
In each region, layers of wires are grouped into two “superlayers”, one parallel to the
magnetic field, and the other tilted at a 6◦ stereo angle to provide some azimuthal
information (see figure 39). Overall, there are 18 drift chambers with a total of 35,148
individually instrumented hexagonal drift cells (see figure 41).
The structure of the DC facilitates achieving the aforementioned construction
goals of momentum and angular track resolution (δp/p ≤ 0.5%, δθ, δφ ≤ 2 mrad for
1 GeV/c particles). To achieve these goals, the tracks need to be measured at the
three regions along their trajectories to an accuracy of 100 µm in the bend plane of
the magnetic field and 1 mm in the direction perpendicular to the bend plane. Also,
the total amount of material in the tracking region of the detector was required to
be less than 1% of a radiation length. A high purity gas mixture of 90% argon - 10%
CO2 was used as a drift gas.
FIG. 40: Vertical cut of the drift chambers transverse to the beam line.
FIG. 41: Hexagonal cell drift lines without (left) and with (right) magnetic field.
Cherenkov counters
A Cherenkov counter (CC) is a particle detector that utilizes the velocity-dependent
threshold of Cherenkov radiation, thus distinguishing between lighter and heavier
particles. Cherenkov radiation is a type of radiation emitted by a charged particle passing through a dielectric medium at a speed greater than the speed of light in that medium. As a charged particle travels, it disrupts the local electromagnetic field of
the medium. Electrons in the atoms of the medium get displaced and polarized by
the electromagnetic field of the particle. Photons are emitted after electrons restore
themselves to equilibrium after the disruption has passed. Normally, these photons
destructively interfere with each other and no radiation is registered. However, when
the disruption travels faster than the photons themselves travel, the photons constructively interfere and intensify the observed radiation, analogously to the sonic
boom caused by a supersonic aircraft.
The angle at which Cherenkov radiation is emitted is related to the velocity of the charged particle causing the radiation by
$$\cos\theta = \frac{1}{n\beta}\,,$$
where θ is the angle at which the radiation is emitted, n is the index of refraction of the medium, and β is the speed of the particle in units of the speed of light in vacuum. If a separate detector determines the momentum of the particle, one can use this information to extract the mass of the particle and thus identify it.
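As an illustration of this mass extraction (a sketch with assumed radiator index and momentum, not a CLAS reconstruction algorithm):

```python
import math

# From cos(theta) = 1/(n*beta) and p = m*beta*gamma it follows that
# m = p * sqrt(n^2 * cos(theta)^2 - 1).  Illustrative numbers only.
n = 1.00153                        # assumed C4F10-like radiator index
p = 3.0                            # GeV/c, momentum from an external tracker
m_e = 0.000511                     # electron mass, GeV

beta = p / math.sqrt(p**2 + m_e**2)        # a 3 GeV/c electron
theta = math.acos(1.0 / (n * beta))        # its Cherenkov emission angle
m_rec = p * math.sqrt((n * math.cos(theta))**2 - 1.0)  # reconstructed mass
print(round(m_rec * 1000, 3))              # -> 0.511 (MeV), the electron mass
```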
The Cherenkov counters are used in CLAS to identify electrons and separate them
from other particles, mainly pions. The CC response is used in the level 1 trigger.
The CC is positioned between the DC region 3 and the time of flight scintillator
system (see figures 37, 38). They cover the region of polar angles θ = 7◦ -48◦ in
forward direction. The CC in each of the six CLAS sectors consists of 18 segments. C4F10 is used as the radiator gas. Its refractive index is n = 1.00153, which gives a threshold particle energy of
$$E_{thr} = \frac{m}{\sqrt{1 - \beta_{thr}^2}} = \frac{n\,m}{\sqrt{n^2 - 1}} = 18.09\,m\,,$$
where m is the mass of the particle and $\beta_{thr} = 1/n$ is the threshold speed in units of the speed of light. This leads to a threshold for pion detection of pπ ≈ 2.5 GeV/c; thus, the CC can distinguish between pions and electrons up to momenta of approximately 2.5 GeV/c.
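The threshold numbers can be verified directly (a minimal sketch; the charged pion mass value is assumed):

```python
import math

# Cherenkov threshold: radiation requires beta > 1/n, i.e.
# E_thr = n*m/sqrt(n^2 - 1).  For C4F10, n = 1.00153.
n = 1.00153
factor = n / math.sqrt(n**2 - 1.0)   # ~18.1, the "18.09 m" factor
m_pi = 0.1396                        # charged pion mass, GeV
E_thr = factor * m_pi                # threshold energy for pions
p_thr = math.sqrt(E_thr**2 - m_pi**2)
print(round(factor, 1), round(p_thr, 2))  # -> 18.1 2.52 (GeV/c)
```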
Time of flight detector
A time of flight (TOF) detector is a particle detector which can discriminate between
lighter and heavier elementary particles of the same momentum using their time of
flight. In its simplest form, it consists of two scintillators. The first of the scintillators
activates a clock upon being hit while the other stops the clock when hit. The time
of flight difference between two highly relativistic particles of masses m₁ and m₂, with velocities v₁ and v₂ and the same momentum p, is
$$\delta t = L\left(\frac{1}{v_1} - \frac{1}{v_2}\right) \approx \frac{L}{2p^2}\left(m_1^2 - m_2^2\right) \quad (c = 1),$$
where L is the distance between the scintillators; the two species can be separated when δt exceeds the time resolution of the time-of-flight system.
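A numeric illustration (the flight path and momentum are hypothetical values chosen for the example; pion and kaon masses assumed):

```python
import math

# Time-of-flight separation of a pion and a kaon with the same momentum.
c = 0.2998                  # speed of light, m/ns
L, p = 4.0, 1.0             # hypothetical: 4 m flight path, 1 GeV/c momentum
m_pi, m_K = 0.1396, 0.4937  # masses in GeV

def tof(m):
    """Exact flight time in ns: t = L/v with v = p*c/E."""
    return L * math.sqrt(p**2 + m**2) / (p * c)

dt_exact = tof(m_K) - tof(m_pi)
dt_approx = L * (m_K**2 - m_pi**2) / (2.0 * p**2 * c)  # relativistic formula
print(round(dt_exact, 2), round(dt_approx, 2))  # -> 1.41 1.5 (ns)
```

At 1 GeV/c the relativistic approximation is good to several percent; it improves rapidly with momentum.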
In CLAS, the TOF is a much bigger detector than the simplest form described in
the previous paragraph. It consists of many scintillator counters (SC) that cover an
area of 206 m2 [59]. The system measures time of flight of the particles; it can also
participate in the level 1 trigger.
The counters cover the θ range between 8◦ and 142◦ and the entire active range
in φ. The scintillators are located radially outside the DC and CC, but in front of
the calorimeters (see figures 37, 38). The scintillator thickness is 5.08 cm, chosen
to give a large signal for traversing minimum ionizing particles. Each scintillator is
positioned so that it is perpendicular to the average local particle trajectory. The
forward counters (those positioned at θ < 45◦ ) are 15 cm wide, and the large angle
counters are 22 cm wide. Each TOF counter is made of Bicron BC-408 with a
photomultiplier tube (PMT) on each end.
In CLAS, the flight time can be calculated as the difference between the vertex
time (found by using the electron beam bunch) and the time at the end of the
particle trajectory reported by the SC. For electron beam experiments, the beam bunch is determined by identifying the final state electron and tracing it back to the interaction point (at JLab energies, electrons are considered to be β = 1 particles). For tagged photon experiments, independent information about the beam bucket is obtained from the start counter, a system of thin counters surrounding the target.
The time resolution of the counters has been measured using cosmic rays and can be parameterized as [59]:
$$\sigma_{TOF}(\mathrm{ns}) = \sqrt{\sigma_0^2 + \frac{\sigma_1^2}{N_{pe}\,\exp(-L/2\lambda)} + \left(\sigma_P\,L/2\right)^2}\,,$$
where σ0 =0.062 ns is the intrinsic resolution of the electronic measuring systems and
other processes that are independent of light level, σ1 =2.1 ns is the combined single photoelectron response of the scintillator and PMT, σP =0.0118 ns/cm accounts
for path length variations in the light collection, L is the length of the counter,
and λ is the attenuation length of the counter, which can be approximated by λ = 134 cm + 0.36 L for forward angle counters and by 430 cm for large angle counters. Npe is the number of photoelectrons seen by a hypothetical counter without
attenuation. The time resolution of the system was between 70 (for shortest counters) and 165 (for longest counters) ps, better than the design goal of 120 ps at the
smallest angles (shortest counters) and 250 ps at angles above 90 degrees (longest
counters) [59].
Electromagnetic calorimeter
A calorimeter is an experimental apparatus that measures the energy of particles
traversing it. Some types of particles initiate a particle shower upon entering the
calorimeter and the sum of all particle energies is collected and measured, in order
to determine the energy of the original particle. The energy of a neutral particle can
be measured this way by measuring the energy of the shower. The entire energy may
be deposited and thus measured, or it may be sampled. Two types of calorimeters
are generally used: an electromagnetic calorimeter, which is designed to measure
the energy of particles that interact primarily via the electromagnetic interaction,
and a hadronic calorimeter, the main focus of which is particles interacting via the
strong nuclear force. Both calorimeters can be sampling calorimeters, in which the
particle shower is created by one kind of material, but the detecting part is made of
a different kind of material.
Two electromagnetic calorimeters are used in CLAS: the forward electromagnetic
calorimeter (EC), which covers the θ range up to 45◦ , and the large angle electromagnetic calorimeter (LAC), which covers θ between 45◦ and 75◦ in sectors one and
two, providing 120◦ coverage in φ.
The main functions of the EC are detection of and triggering on electrons at
energies above 0.5 GeV, detection of photons at energies above 0.2 GeV, and detection
FIG. 42: Exploded view of one of the six CLAS EC modules [60].
of neutrons. It is made of alternating layers of scintillator strips and lead sheets with a total thickness of 16 radiation lengths, with total thicknesses of 39 cm of scintillator and 8.4 cm of lead per module.
A module consists of 39 layers with 10 mm of scintillator and 2.2 mm of lead
in each layer. The area of each successive layer of scintillators increases linearly
with increasing distance to the target. Each scintillator layer consists of 36 strips
parallel to one side of the triangle, with the orientation of the strips rotated by 120◦ in
successive layers (see figure 42). The three orientations (U, V, and W), with 13 layers
in each direction, provide spatial information on the location of energy deposition.
In each of the sectors, EC scintillators are divided into two groups: 15 layers closer to
the target make up the inner calorimeter, 24 layers further from the target make up
the outer calorimeter. This additional subdivision facilitates longitudinal separation
of deposited energy.
The energy resolution of the EC for electrons can be parameterized as
$$\frac{\sigma}{E} = \frac{a}{\sqrt{E}}\,,$$
with a negligible constant term; the coefficient $a$ is given in [60]. The sampling fraction is approximately 0.3
for electrons of 3 GeV and greater, and for smaller energies, there is a monotonic
decrease to about 0.25 for electrons of 0.5 GeV. The average rms resolution is 2.3 cm
for electron showers with more than 0.5 GeV of energy deposited in the scintillator.
The timing resolution of EC for electrons averages to 200 ps over the entire detector
[61]. Although the EC is capable of detecting both electrons and hadrons, a larger fraction of the particle energy is deposited in the EC in the case of electrons, via a bremsstrahlung/pair-production “shower”, than in the case of hadrons.
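The sampling-fraction values quoted above suggest a simple energy correction, sketched below. Only the two endpoints (0.25 at 0.5 GeV, 0.30 at 3 GeV and above) come from the text; the linear interpolation between them and the fixed-point inversion are illustrative assumptions, not the EC's actual correction.

```python
def sampling_fraction(e_gev):
    """Approximate EC sampling fraction for electrons.

    Endpoints (0.25 at 0.5 GeV, 0.30 at and above 3 GeV) are from the text;
    the linear interpolation in between is an assumption.
    """
    if e_gev >= 3.0:
        return 0.30
    if e_gev <= 0.5:
        return 0.25
    return 0.25 + (0.30 - 0.25) * (e_gev - 0.5) / (3.0 - 0.5)

def reconstruct_energy(e_deposited_gev, iterations=20):
    """Invert E_deposited = f(E) * E by fixed-point iteration."""
    e = e_deposited_gev / 0.30  # initial guess using the high-energy fraction
    for _ in range(iterations):
        e = e_deposited_gev / sampling_fraction(e)
    return e
```

For example, 0.9 GeV deposited in the scintillator corresponds to a 3 GeV electron in this toy model.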
The LAC provides detection of scattered electrons and neutral particles such as
neutrons and photons coming from radiative processes or decays. It was not used in
the BONuS experiment.
Target

A 7 atmosphere deuterium gas target 284.1 mm long (with an active length, within the BONuS detector, of 169.4 mm) and 6.1 mm in diameter was used in the BONuS experiment (see figure 43). It was constructed from a 50 micron Kapton wall with aluminum endcaps. Four centimeters of the target adjacent to the upstream endcap were enclosed by an aluminum shroud, which prevented slow particles from the
upstream window from entering the detector. The target cell was surrounded by helium gas at atmospheric pressure. This kept the material density low, thus keeping
the material that spectator protons had to traverse on the way to the detector to
a minimum, while still allowing a surrounding gas detector to have thin windows.
Some helium target contamination originating from this setup was a necessary evil,
addressed in the analysis. Hydrogen and helium targets were used for calibration
purposes. The target gas system was static, i.e. the system was purged and charged
with the appropriate gas, then valved off at the appropriate pressure. At each target
gas exchange, the system was flushed a few times to minimize the presence of old gas in the new target gas; since this could not be done perfectly, it was a possible source of errors, which are accounted for in the data analysis.
The target was located in the beam line at z = -58.0 cm in the CLAS coordinate
system. The upstream end of the tube was fixed to an aluminum cylinder that
provided gas plumbing. The target was surrounded by the BONuS RTPC (see section
III.4) and the DVCS solenoid (see section III.2.6). The downstream end of the tube
extended into a larger cylindrical volume filled with helium to minimize electron
interactions downstream of the target.
FIG. 43: Target tube with fixtures attached.
DVCS magnet

An existing solenoid magnet constructed for another experiment was used to prevent Moeller electrons from getting into the RTPC sensitive volume (see figure 48) and to create trajectory curvature for the spectator protons, thus making momentum measurements possible.

The magnet is a superconducting solenoid, designed and built in Saclay. It provides a 4.7 T nominal field at its center. An additive superconducting compensation coil ensures external magnetic shielding. To reach the nominal field, the maximum usable current is 550 A. See table 4 for a compilation of the magnet parameters. (The older experiment was dedicated to Deeply Virtual Compton Scattering (DVCS), hence the “DVCS magnet” name.)
TABLE 4: DVCS magnet parameters.

Aperture diameter       270 mm
External diameter       910 mm
Magnet length           910 mm
Total length            2776 mm
Total height            1661 mm
Total width             1143 mm
Cold mass at 4 K        700 kg
Cold mass at 50 K       200 kg
Total mass              1500 kg
Liquid helium capacity  65 liters
The BONuS RTPC

As was mentioned earlier, to identify events in which a proton is a mere spectator to the electron-neutron collision, we need to select events in which the proton moves backward and possesses a low (around or below 100 MeV/c) momentum.
To register such protons, we need a detector that would provide good coverage in the
backward (with respect to the direction of the electron beam) hemisphere, and be
close enough to the target to be able to detect heavily ionizing low energy protons
before they get stopped. A Radial Time Projection Chamber (RTPC) utilizing Gas
Electron Multipliers (GEMs) was specially constructed for the experiment to fulfil
these requirements.
Time projection chambers
The time projection chamber (TPC) is an ionization detector capable of providing
a complete 3D picture of the particle trajectory in the detector volume as well as
particle identification through its specific energy loss, dE/dx. Both gas and liquid
sensitive volumes are used. The TPC combines concepts from both Multi-Wire
Proportional Chamber (MWPC) and drift chamber.
Figure 44 depicts a scheme of a classical TPC, based on the one invented by David Nygren in the late 1970s [62]. The configuration shown is a gas-filled cylinder
with a thin cathode plane at the center producing a strong electric field along the
axis of the TPC. A magnetic field parallel to the electric field is applied by a solenoid
(not shown).
FIG. 44: The classical TPC with gaseous sensitive volume.
Charged particles produced in the center of the TPC move through the sensitive
volume ionizing molecules. The produced ionization electrons drift to one of the
two endcaps. The solenoidal magnetic field minimizes transverse diffusion of the
electrons (which is necessary since the drift path can reach meters) and bends the
charged particles allowing the momentum measurement.
The endcaps are divided into six sectors, each one containing a MWPC. Anode
wires of the MWPCs detect drift electrons providing one of the three needed space
coordinates of the charged particle trajectory. The second coordinate is determined
by cathode pads located under the wires. Using the center-of-gravity method (basically looking at the way the charge from the signal was shared between pads), this
coordinate can be determined with great accuracy. The third coordinate is given by
the drift time of the drift electrons in the same fashion it is done in drift chambers.
Thus, the full particle trajectory is reconstructed in three dimensions.
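The three coordinate measurements described above can be sketched as follows: the wire position gives one coordinate, the charge-weighted centroid of the cathode pads gives the second, and the drift time gives the third. All numbers (pad positions, charges, drift velocity) are hypothetical.

```python
def pad_centroid(pad_positions, charges):
    """Center-of-gravity of the charge shared among cathode pads
    (the second coordinate, from charge sharing between pads)."""
    total = sum(charges)
    return sum(p * q for p, q in zip(pad_positions, charges)) / total

def drift_coordinate(t_drift_us, v_drift_cm_per_us):
    """Third coordinate from the electron drift time, as in a drift chamber."""
    return t_drift_us * v_drift_cm_per_us

# first coordinate: position of the anode wire that fired (hypothetical, cm)
wire_x = 12.0
# second coordinate from pad charge sharing; symmetric charges give the center pad
y = pad_centroid([11.5, 12.0, 12.5], [30.0, 100.0, 30.0])
# third coordinate from drift time, with an assumed drift velocity
z = drift_coordinate(10.0, 5.5)
point = (wire_x, y, z)
```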
When an electron produces an avalanche on an anode wire, a cloud of positively charged ions stays behind in the drift volume. To prevent the distortion of the detector electric field by these ions, a grid at ground potential is placed before the anode wires. It captures the positive ions that attempt to drift back towards the cathode.
FIG. 45: BONuS data readout scheme.

Since the charge collected at the endcaps is proportional to the energy loss of the particle, the signal amplitude from the anode wires also provides information on the specific energy loss of the particle. Since the momentum of the particle can
be found from the curvature of its trajectory in the magnetic field, the particle can
be identified. This requires sufficient resolution in dE/dx, which means collecting huge amounts of information and thus imposes a limit on how slow the readout can be. The readout equipment progressed from charge-coupled devices in earlier experiments to a switched capacitor array, and then to custom integrated circuits, which can read out data 1000 times per second, built for the huge TPC used
in the ALICE heavy-ion experiment. The ALICE readout was used in the BONuS
experiment (see figure 45).
Nowadays, two new technologies are replacing wire chambers in TPCs: the aforementioned GEMs and Micromesh gaseous structure chambers (Micromegas). GEMs are thin plastic foils, metal coated on both sides, with holes punched through them and a potential difference applied between the two sides of the foil. Micromegas use a thin metal mesh instead of anode wires. Both GEMs and Micromegas are flexible, relatively low cost structures, which can be used in a variety of detectors and, as an additional benefit, can be placed very near the readout pads, decreasing charge diffusion.
The BONuS experiment used the GEM technology.
FIG. 46: An enlarged view of a GEM electrode. The diameter of the holes is 50 µm.
Gas Electron Multipliers
GEMs are a new electron multiplication technique developed at CERN [65]. They
combine robustness, low cost, high rate capabilities, can be used in detectors of
varying shapes, and possess high energy resolution and space localization accuracy.
Some advantages of using GEMs over conventional wire chambers include the elimination of wire sag and the possibility of placing them very close to the readout pads, thus reducing diffusion after amplification. Also, positive ions generated in the avalanche naturally drift away from the amplification region, eliminating charge build-up there.
A GEM consists of a thin, metal-clad, chemically pierced polymer foil, with the density of holes reaching 50–100 per mm² (see figure 46). After a potential difference
is applied, electrons released by the gas on one side of the foil drift into the holes,
multiply, and drift into the collection region. Thus, each hole acts as a proportional
amplifier (see figure 47). The multiplier can be used as an individual detector or as
a preamplifier in a many foil structure.
The main characteristics of GEM detectors [66], [67]:
• Operate with many gases, including noble gases;
• Proportional gains higher than 10^5 are possible;
• Energy resolution of 18% FWHM at 5.9 keV;
• Space localization accuracy better than 60 µm RMS;
• Active areas up to 1000 cm²;
• Rate capability higher than 10^5 counts/(mm²·s).

FIG. 47: Electric field lines and equipotential lines in GEM holes.
Due to these characteristics and the aforementioned robustness, low cost, and flexibility, GEMs have become a popular choice as amplifiers for modern detectors. The BONuS experiment was the first to use curved GEMs. Multiple tests were performed to verify their performance, and they were found adequate for the task.
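The gains above 10^5 quoted in the list come from cascading several foils: the effective gain of a multi-GEM stack is roughly the product of the per-foil gains and the inter-stage electron transfer efficiencies. The model and all numbers below are illustrative assumptions, not measured BONuS values.

```python
def stack_gain(per_foil_gains, transfer_efficiencies):
    """Effective gain of a multi-GEM stack: product of per-foil gains and
    the electron transfer efficiencies between stages (illustrative model)."""
    g = 1.0
    for gain, eff in zip(per_foil_gains, transfer_efficiencies):
        g *= gain * eff
    return g

# hypothetical triple-GEM settings: modest per-foil gain, imperfect transfer
g_eff = stack_gain([50.0, 50.0, 50.0], [0.6, 0.6, 0.6])
```

Even with each foil run at a modest gain of ~50, three stages multiply to an effective gain of tens of thousands, which is why each stage can be kept well below its discharge limit.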
Sometimes a longitudinal electron drift may not be optimal. Some collaborations, like STAR [63] and CERES [64], have built and used TPCs in which electrons drift radially outwards from a cylindrical central cathode to the anode located on a concentric cylinder. Such TPCs are called Radial Time Projection Chambers (RTPCs). The electric and magnetic fields are no longer parallel, which leads to complex electron drift trajectories, and curved pad planes are then required. For these reasons RTPCs have a more complex structure and poorer resolution than their non-radial counterparts.
The spectator protons of interest for BONuS have momenta of 70 to 120 MeV/c. At these energies, protons are very heavily ionizing particles, which can be stopped by very little material. This requires the use of detectors with as low a density as possible (the same considerations affect the target design, as was previously mentioned). This consideration suggests the use of TPCs, since they have an inherently low mass density.
The detector had to surround the beamline and fit inside the available DVCS
solenoidal magnet. The magnet length was less than its diameter, and so it did not
have magnetic field lines parallel to each other over a reasonable range, complicating
the use of an axial TPC. Furthermore, forward moving high-momentum particles,
which were detected with CLAS, required minimizing the endcap density, the region
where a lot of equipment is normally situated in axial TPCs. The natural solution to these problems was to use an RTPC. The final constraint on the RTPC was the necessity to stay clear of the Moeller electrons (see figure 48), which made it impossible to put the detector too close to the target.

FIG. 48: Simulation of Moeller tracks in the DVCS solenoid (S. Kuhn). Moeller tracks at different angles to the axis (angle values in degrees are shown in the table to the right of the picture) are shown. The RTPC outline between z of −0.48 and −0.68 m and between r of ±0.03 and ±0.06 m is shown. A Moeller catcher that prevents the tracks from getting into the CLAS detector is shown with a dashed blue line.
Figure 49 shows the BONuS RTPC. The 7 atmosphere gas target is shown in
the middle. The detector is situated close to the target, with its center moved
backwards with respect to the center of the target for better coverage of the backwards
hemisphere, where spectator protons are expected. Upon exiting the target and
traversing the volume filled with 1 atmosphere helium gas (providing low mass density
volume for spectator protons to pass and Moeller electrons to escape), the proton
passes a ground plane that was located at the radius of two centimeters, and then
the cathode surface located at the radius of three centimeters. Upon traversing the
cathode, the proton enters the sensitive volume, filled with an approximately 80%
He/20% dimethyl ether (DME) mixture, at the radius of 30 mm from the central
axis. This 30 mm distance allowed plenty of space for Moeller electrons to escape
without entering the sensitive volume (see figure 48). Having helium as the main
component of the mixture provided the necessary low density, which minimized the
energy loss of the slow protons. When traversing the sensitive volume, the spectator
ionizes the gas and the released electrons drift towards the GEMs, where they are
multiplied and delivered to readout pads. The drift region of the RTPC was kept at
1500 V for all runs (see table 5 for voltages on different components). The resulting
electric field produced a sufficiently short clearing time in the drift region without
making the cathode voltage so high that a breakdown could occur. The GEM gain
was set at a maximum voltage at which non-linearities (saturation) did not occur for
slow spectator protons. This made the RTPC fairly insensitive to the lighter ionizing
particles (i.e. electrons) [68].
The first GEM layer was at 60 mm radius, and the padboard was at 69 mm
radius, after two more GEM layers, with the space between the padboard and the
solenoidal magnet reserved for preamplifiers and cables. The interior walls of the
drift region were made of printed-circuit boards patterned with metal traces forming
the field cage necessary to make the drift field between the concentric cylinders as
close to that between two infinite concentric cylinders as possible. A ground plane
was located at the radius of two centimeters.
The RTPC was made of two half-cylinders, each of which was a self-supporting structure. Figure 50 shows the exploded view of the detector, with each half having supports for the window, cathode, cascade of three GEMs, and padboard.
The length of the detector was 20 cm. Phi (azimuthal angle) coverage was around 300◦. Wedges on the top and bottom of the assembly, which joined the two halves, covered the rest of the phi acceptance. The readout pads had dimensions of 5 mm in the phi direction, thus covering approximately 3.5◦, and 4.45 mm in the
z direction (along the axis of the cylinder). Pad rows along the axis of the RTPC
were shifted with respect to each other to minimize the probability of a whole track
being contained in the same row of pads, thus improving the track resolution. The
RTPC was capable of detecting spectator protons with momenta from 70 to 150
MeV/c. Below this range, protons are stopped too soon to leave a substantial track
in the RTPC, and above that range, protons are too fast and the curvature of their
trajectories is not large enough to confidently reconstruct their momenta (often, they
are seen as infinite momentum particles).
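The upper momentum limit follows from the curvature available in the solenoid field. A minimal numeric sketch using the standard relation r = pT/(0.3 q B), with r in meters, pT in GeV/c, and B in tesla, evaluated here at the 4.7 T nominal field:

```python
def curvature_radius_m(p_t_gev, b_tesla, charge=1.0):
    """Radius of curvature r = p_T / (0.3 q B), for p_T in GeV/c, B in tesla."""
    return p_t_gev / (0.3 * charge * b_tesla)

# spectator-proton momentum range quoted in the text, at the 4.7 T field
r_low  = curvature_radius_m(0.070, 4.7)   # about 0.05 m
r_high = curvature_radius_m(0.150, 4.7)   # about 0.11 m
```

At 150 MeV/c the radius is already several times the 3 cm radial extent of the drift region, so the measurable arc is short and the curvature, and hence the momentum, becomes hard to determine.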
FIG. 49: Schematics of the BONuS RTPC. See text for details.
FIG. 50: Exploded view of the BONuS RTPC. Supporting structures for the window,
cathode, three GEMs, and padboard in both half-cylinders, as well as the central
support combining two halves, are shown.
Figure 51 shows an RTPC event as seen on the event display. Two candidate tracks curved by the solenoid field are shown. The charge collected on the readout pads was traced back to the time when it was released in the gas by the ionizing particle, and converted to the distance from the beam axis by the usual TPC methods (see
section III.3.1 for those). The time to distance conversion will be discussed as part
of the preliminary data analysis below. The other two coordinates were given by the pad locations (after correcting for the electron drift in the combined E and B fields). The size of the rectangle indicates the amount of charge collected on a pad.
The pads were connected to connectors, each of which carried sixteen pad signals and four ground connections and supported a preamplifier card. Two hundred cards were required to instrument the whole RTPC [68]. The preamplifier
cards projected radially from the surface of the detector and connected to the ribbon
cables in such a way that the cable length was parallel to the chamber axis thus increasing the electronics package density. The readout system was borrowed from the
ALICE experiment (ALICE TPC readout - ALTRO, see figure 45), with some necessary modifications required due to the shortage of space in the BONuS experiment.
ALTRO readout controllers (U2F) supervised the communications in the VME crates
(which accepted RTPC signals) and transferred the compacted digital data to a pair
of single-board (VME) computers via USB-2 interfaces. These processors served as
Readout Controllers within the standard CLAS data acquisition system. This system
provided readout of approximately 1kB events at a rate of about 500 Hz.
The BONuS event readout was initiated by the standard CLAS electron trigger
system selecting interactions with a high probability of having an electron track in
CLAS. The data recorded for each event is composed of the times (114 ns samples)
and amplitudes (10 bits) of all TPC pad signals above threshold for a time period
extending from 1.7 µs before to 9.7 µs after a trigger. This interval is about 1.5 times
the maximum drift time in the RTPC.
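The readout numbers quoted above (114 ns samples, 10-bit amplitudes, a window from 1.7 µs before to 9.7 µs after the trigger) can be sketched directly; the helper names are invented for illustration.

```python
SAMPLE_NS   = 114     # sampling period quoted in the text
T_BEFORE_NS = 1700    # window opens 1.7 us before the trigger
T_AFTER_NS  = 9700    # window closes 9.7 us after the trigger

def sample_time_ns(index):
    """Time of a sample relative to the trigger, for sample index 0, 1, ..."""
    return -T_BEFORE_NS + index * SAMPLE_NS

def n_samples():
    """Number of 114 ns samples covering the full readout window."""
    return (T_BEFORE_NS + T_AFTER_NS) // SAMPLE_NS

def clip_adc(amplitude):
    """Clamp an amplitude to the 10-bit ADC range (0..1023)."""
    return max(0, min(1023, amplitude))
```

The quoted window works out to exactly 100 samples per pad per event.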
FIG. 51: An RTPC event.
TABLE 5: Supply settings and electrode voltages in the RTPC during operation of
the experiment. The suffixes on the GEM label refer to the inner (i) and outer (o)
surfaces of the GEMs. All voltages are of negative polarity and are referenced to
ground. The table is taken from H. Fenker [68].
Detector element    Heavily ionizing tracks (Left half | Right half)    Minimum ionizing tracks (Left half | Right half)
Overall, the BONuS experiment had 1350 runs, from run 49000 to run 50349, each with one to ten separate data files. In the following discussion I will ignore the first 462 runs, recorded at the beginning of the running period while the experiment was being set up and fine-tuned. The real data were recorded in runs 49462–50349.
The BONuS experiment recorded data at four beam energies with nominal energy
values (the ones written in the database, see section IV.2.5 for more on the actual
values of the beam energy) of 5.268, 4.226, 2.142 and 1.100 GeV. Overall around 861
million triggers were collected on the deuterium target and around 95 million triggers
were collected on the hydrogen target. Table 6 has a detailed list. Additionally, empty-target runs for background estimation and 4He runs for the evaluation of the 4He contamination were conducted. Several dedicated runs for DC alignment and TOF calibrations were also conducted.
Each of the magnets had two settings during the experiment: the CLAS toroidal
magnet was used at 1500 A and 2250 A, whereas the DVCS solenoid magnet was
used at 400 A (corresponding to 3.5 Tesla field) and 534 A (corresponding to 4.7
Tesla field). The trigger for the BONuS experiment was provided by the electron in
CLAS. A coincident signal from the CC and the EC formed the CLAS trigger, with thresholds of 75 mV for the CC and 72 mV for the inner calorimeter (inner EC); for the outer calorimeter (outer EC), the threshold was set at 147, 200, and 260 mV at different times.
On the RTPC side, for all physics runs, the magnitude of the cathode power
supply voltage was maintained 1500 V higher than the GEM supply voltage. See
table 5 for working voltages on the RTPC. Additional calibration RTPC runs were
conducted with raised voltages in order to provide sensitivity to minimum ionizing
particles. The detector gas was a mixture of He and DME with approximately 80%
He and 20% DME.
TABLE 6: Triggers collected in the BONuS experiment.
Beam energy, GeV    Triggers, millions (D2 target | H2 target)
Detectors in CLAS measure relevant quantities by producing electric impulses that have to be interpreted as particular energy, time, etc., measurements. The task of converting the voltages produced by the detectors into the physical quantities of interest, with a one-to-one correspondence, is called the calibration of the detectors. The complicated structure of CLAS, with its many detectors, and the large luminosity make this procedure involved.
Drift chambers
The drift chambers (see section III.2.1) are an important part of CLAS providing,
in particular, momentum information for the particles passing through the detector.
Due to stringent requirements on the knowledge of the momenta, drift chambers
should be aligned and calibrated to a high degree of accuracy.
Drift chamber alignment
The relative wire to wire and chamber to chamber positions are needed to determine
momenta of particles traversing the drift chambers (DC). The design momentum
resolution of δp/p ≤ 0.5% at a particle momentum of 1 GeV [69] requires knowledge of
hit positions along a track to be better than 0.8 mm [70]. However, it was impossible
to install the DC with an accuracy of a few hundred microns. Hence the necessity
to verify the relative positions of the chambers after each removal and/or repair
(see references [70], [71] for some of the noted cases of alignment procedures being
performed). Both the first region of DC and one of the sectors of the third region
had been moved before the Bonus experiment, thus necessitating DC alignment for
accurate data analysis. The procedure finds offsets of the DC geometry from the
design geometry and writes those offsets to the database used in data processing.
There are six offsets: shifts along each of the coordinate axes (dx, dy, dz) and rotations about each of the axes (θx, θy, θz).
The following was assumed:
1. Wire positions inside a single DC region and sector are fixed.
2. Only intra-sector alignment will be performed.
3. Region one DC will be used as a reference since they were constructed as a
single unit with sector to sector accuracy of 0.2 mm [70].
The torus magnetic field was turned off for the alignment run. In its absence, particle trajectories should be straight lines. Thus, by minimizing the deviation of the trajectories from straight lines, the best set of offset parameters was found. The minimized quantity was:
χ² = Σ_tracks Σ_hits (|D_track,hit| − |D_hit|)² / (σ²_track,hit + σ²_hit),
where Dtrack,hit is the calculated distance of closest approach of the track to the wire,
Dhit is the drift distance as reported by the x vs t function for that wire (see section
IV.2.1), σtrack,hit is the uncertainty of the track position at that hit, and σhit is the
time-based resolution of the hit. The sum of the spatial residuals of the hits,

Res = D_track,hit − D_hit,

was used to represent the quality of the alignment parameters.
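A toy version of this minimization is sketched below: the hits are invented, a single shift offset stands in for the six geometry offsets, and a brute-force scan replaces the real minimizer. Only the form of the χ² (squared difference of absolute distances, weighted by the variance) follows the text.

```python
def chi2(offset, hits):
    """Toy alignment chi^2: each hit carries the track's predicted drift
    distance, the measured one, and a combined variance; `offset` shifts
    the chamber, shifting the predicted distances."""
    total = 0.0
    for d_track, d_hit, sigma2 in hits:
        total += (abs(d_track + offset) - abs(d_hit)) ** 2 / sigma2
    return total

# toy data generated with a true offset of 0.3 (arbitrary units)
true_offset = 0.3
hits = [(d, d + true_offset, 0.01) for d in (0.2, 0.5, 0.9, 1.4)]

# brute-force scan over candidate offsets in place of a real minimizer
best = min((chi2(x / 1000.0, hits), x / 1000.0) for x in range(0, 1001))[1]
```

The scan recovers the offset with which the toy data were generated; the real procedure fits six parameters per chamber simultaneously.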
As mentioned above, region 1 was used as a reference. Then one of the other
regions (2 or 3) was fitted, and both of them were used as the reference for fitting the
remaining region. The sequence of region fitting was varied and results compared to
achieve the best possible alignment. Data with and without having applied the DC
distance versus time calibration (see section below for description) were compared
in order to resolve differences between methods described in references [70] and [71]
(R. Feuerbach [70] did not perform the DC calibration on the alignment run, noting
that it could have led to wide residual distributions for region 2, whereas S. Morrow
and M. Mestayer [71] claim that performing a calibration for the alignment run was
an improvement over the previous technique).
After performing the above-mentioned procedure outlined in references [70] and
[71], offsets in the database were altered by hand to improve the alignment. Figures
52 and 53 illustrate the residual distributions before and after the described procedure. The residuals are much closer to each other and to zero in the right panel,
indicating that the alignment was successful.
Drift chamber calibration
The reconstruction of a track in the DC is performed in two stages: hit-based tracking
and time-based tracking.
In hit-based tracking, the wires that registered signals from a passing particle are combined into segments by superlayer, which are subsequently linked with each other. The physical positions of the wires are used as points along the trajectory. Since the drift cell size is small and the number of wire layers is large, the track momentum can be reconstructed with a resolution of 3–5% [72].
We need a better resolution than that. For this, we need to know a particle
trajectory better, namely where exactly the particle traversed each of the drift cells.
Using information from other detectors, it is possible to determine the drift time
for ionization electrons from the particles passing to reach the sense wire with great
accuracy. Then the drift times can be converted to drift distances (see below). Finally, the parameters of the track are adjusted in the fit procedure that constitutes time-based tracking.
The drift time for a hit can be determined by

tdrift = t0 − tstart − tTDC − tflight − tprop − twalk,
where t0 is a fixed delay for a given wire determined by hardware characteristics such as cable lengths, tstart is the event start time provided by the time-of-flight system and corrected for the calculated flight time of the electron using a momentum estimate and the distance from the target to the SC, tTDC is the raw time as measured by the TDC (time-to-digital converter), tflight is the flight time from the reaction vertex to the wire, tprop is the signal propagation time along the wire, and twalk is a time-walk correction. From this, an initial estimate of the distance from the wire at which
the particle passed can be made. The remaining ambiguity, namely on which side of the wire the particle passed, can be resolved within the superlayer by comparing the χ² values obtained assuming it passed on one side or the other.
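As a minimal numeric sketch, the drift-time subtraction above (including the flight-time term listed among the definitions) with purely illustrative values in ns:

```python
def drift_time(t0, t_start, t_tdc, t_flight, t_prop, t_walk):
    """Drift time from the timing components defined in the text
    (all values in ns; the numbers below are purely illustrative)."""
    return t0 - t_start - t_tdc - t_flight - t_prop - t_walk

t_drift = drift_time(t0=900.0, t_start=20.0, t_tdc=650.0,
                     t_flight=5.0, t_prop=15.0, t_walk=2.0)
```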
FIG. 52: Residuals for six sectors before the alignment.
FIG. 53: Residuals for six sectors after the alignment.
Since the drift time-to-distance relationship is not constant, being affected even by the weather outside, we need to review the coefficients of the functions determining the time-to-distance relationship. These functions are:

x(t̂) = v0 t̂ + η t̂^p + κ t̂^q,   (91a)

x(t̂) = a t̂ + b t̂^2 + c t̂^3 + d t̂^4 + (xmax − a − b − c − d) t̂^5,   (91b)
where the power law form (91a) is used for region 3 and the polynomial form (91b) is
used for regions 1 and 2. In equations (91a) and (91b) v0 is the value of the saturated
drift velocity near t = 0, t̂ = t/tmax is the normalized time, where the normalization
tmax is the drift time for tracks that pass near the outer edge of the drift cell, so
that the ionization electrons drift for the longest time, and xmax is the cell linear size
corrected for the local angle. Each of the equations has four parameters that are varied in the minimization procedure (η, p, κ, q for the power law; a, b, c, d for the polynomial).
The time-to-distance function has to satisfy the boundary constraint
x(t̂ = 1, α) = C · cos(30◦ − α),
where the angle α is the track entrance angle and C is the cell size.
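The two parameterizations can be sketched directly. The parameter values below are hypothetical; note that the polynomial form builds in x(1) = xmax by construction, since its fifth coefficient is fixed as in (91b).

```python
def x_power_law(t_hat, v0, eta, kappa, p, q):
    """Region 3 form (91a): x = v0*t^ + eta*t^**p + kappa*t^**q."""
    return v0 * t_hat + eta * t_hat ** p + kappa * t_hat ** q

def x_polynomial(t_hat, a, b, c, d, x_max):
    """Regions 1 and 2 form (91b); the t^**5 coefficient is fixed
    so that x(1) = x_max automatically."""
    e = x_max - a - b - c - d
    return (a * t_hat + b * t_hat ** 2 + c * t_hat ** 3
            + d * t_hat ** 4 + e * t_hat ** 5)
```

Fixing the last coefficient this way is what lets the boundary constraint at t̂ = 1 be imposed exactly while the remaining four parameters are varied freely in the fit.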
In the region 2 DC the inhomogeneous magnetic field rotates and shrinks the
isochrones. To account for that, the effective entrance angle and maximum drift
time have to be modified. The correction to the entrance angle is determined from
a GARFIELD simulation:
αc = arccos(1 − aB),
where a is a constant and B is the magnetic field strength.
The maximum time is extracted directly from data and is parameterized as
tmax(B) = tmax(0) + b B²,
where b is a constant and B is the magnetic field strength. At any given local
magnetic field point, the time-to-distance function included an additional correction
term β(t̂) to describe the magnetic field dependence:
x(t̂, α, B) = x(t̂, α − αc , B0 ) + (B − B0 )β(t̂),
where B0 is the average magnetic field value for the full fitted data sample. The
magnetic field dependence is included only for region 2, since regions 1 and 3 are
located outside the torus cryostats.
As a result of the minimization procedure, the values of the 4 parameters for each
of the equations (91) and each superlayer are determined. In addition, the average
local angle at which tracks enter the drift cells, the average magnetic field strength,
t0 and tmax are found.
The calibration was performed with the help of “dc3”, the program developed by
David Lawrence to automate and standardize the drift chamber calibration procedure. The up-to-date description of dc3 can be found at
The package itself was retrieved from CVS at /packages/reccal/dc3. The calibration
procedure followed guidelines from the dc3 manual:
1. Choosing runs for calibration. As mentioned above, the drift time-distance
correspondence can “drift” with time, thus necessitating the re-calibration of
the parameters every once in a while. The DC calibration was performed
for 7 runs more or less equally spaced over the BONuS run period with at
least one calibration run corresponding to each beam energy and torus current
combination. The chosen runs were: 49200, 49289, 49485, 49544, 49835, 50282,
and 50333.
2. Setting up the environment. Necessary environmental variables were set
up in the .cshrc file, the most important of which is the proper RunIndex, the
SQL table, in which constants for the given run period are stored.
3. “Cooking” chosen runs. This jargon means subjecting the raw data, written
to tape during the run period, to the reconstruction program that converted
raw detector signals to physical quantities, reconstructed particle trajectories,
identified particles, etc. This was performed by V. Tvaskis, our analysis coordinator at the time.
4. Producing ntuple files using the trk mon program. This step consists
of converting the “cooking” output into the particular form readable by the
DC calibration program. This is done by running the trk mon program. The
program is run twice. When running the first time, only input and output files
are given as parameters:
trk_mon -ooutputfile.hbook inputfile.A00.00
By looking at histograms produced by this command, cuts on χ2 and local
angles are chosen. These cuts are then used in running the program for the
second time. This second run produces cleaned ntuples (a particular form of
storing data) ready to be used in the calibration.
5. Fitting in DC3. This is the heart of the calibration procedure; everything
before this step was done to prepare files to use as an input to dc3.
• Finding T0. T0 is the value of the time at which the calculated distance is equal to 0. It is primarily determined by passive cable delays and therefore needs to be determined only once per run period (barring some extraordinary changes in the CLAS setup). T0 was determined using the leading-edge method, that is, by fitting to the leading edge of the time distribution.
• Finding Tmax. Done by a simple click of a button, this means finding the time corresponding to 97% (default, can be changed) of the time integral for regions 2 and 3, and to 99% (default, can be changed) of the time integral for region 1. Using this time as the limit of the calibration, as opposed to using the time at 100% of the integral, eliminates “edge effects” of the drift cells.
• Fitting. After the limits of the fits were determined in the previous steps, we can do the fitting itself. First, we choose the xvst fit to find the initial parameter set for equations (91a) and (91b) (later, we can use the resid fit to fine-tune the parameters). The fit is initially done for one sector of one superlayer, then the resulting parameters are copied to the other sectors/superlayers to provide initial values for those, and the global fit is performed.
• Preliminary checking. A table of χ2 is generated to perform a preliminary
check of the parameters.
• Write parameters to the database.
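The Tmax determination above amounts to locating a quantile of the drift-time spectrum. A minimal sketch of that idea (the function name and the toy histogram are illustrative, not part of dc3):

```python
def find_tmax(times, counts, fraction=0.97):
    """Return the time at which the cumulative content of the drift-time
    histogram reaches the given fraction of its total. `times` are bin
    centers (ascending), `counts` the bin contents."""
    total = sum(counts)
    running = 0.0
    for t, c in zip(times, counts):
        running += c
        if running >= fraction * total:
            return t
    return times[-1]

# Toy example: a flat drift-time spectrum from 0 to 99 ns.
tmax = find_tmax(list(range(100)), [1] * 100, fraction=0.97)
```

For region 1, the same call with `fraction=0.99` would be used.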
6. Checking calibration quality. First we re-analyze raw data using the newly
found parameters. Using the re-analyzed data, we then prepare files for dc3, and
start the program. Using the calibration quality tab of the program interface,
we can check the calibration quality. The average tracking χ2 should be below 2.0,
the ratio of time-based to hit-based tracks should be around 0.7 or more, and the
number of hits per time-based track should be 30 or more.
See figure 54 for resulting DC resolutions for DC sectors in different superlayers.
The resolutions are found as the RMS of the time residual distributions. The time
residuals are found as follows:
TRESI = |DOCA| − |DIST|,
where DOCA (Distance Of Closest Approach) is the distance from the fitted track
to the sense wire; DIST is the calculated (using drift time and other parameters)
distance from the sense wire to the track. Both DOCA and DIST are signed quantities whose signs are determined by the side of the wire on which the track passed.
The values shown are within the allowable limits of 500 microns when averaged over
the sectors.
Time of flight system
The time of flight (TOF) system in CLAS (see section III.2.3 for more) provides
the timing information for charged particles in each event. Other detectors use this
information, and thus the quality of data reconstruction heavily depends on how well
the time of flight system is calibrated.
TOF calibration
The reconstructed times and energies in a scintillator of the TOF are given by (see
reference [74]):

tL = cnorm · (tw − cLR/2 + cc2c + cp2p),
tR = cnorm · (tw + cLR/2 + cc2c + cp2p),
t̄ = (tL + tR)/2,
EL = k(AL − PL)/M0L,
ER = k(AR − PR)/M0R,
E = √(EL · ER),
y = (veff/2) · (tL − tR − yoffset),

FIG. 54: DC resolutions after the DC calibration for six DC superlayers. The resolution for each of six sectors is shown for each of the superlayers. Points are shown
for the runs used to do the calibration.
where tL and tR are adjusted times on the left and right PMTs, average time t̄ is a
position independent determination of time of particle impact, EL and ER are the
normalized pulse heights on the left and right PMTs, E is a position independent
measure of the energy deposited in the scintillator, and y is the position of the hit in
the scintillator measured with respect to the center of the counter.
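The left/right combinations described above can be sketched as follows; inputs are assumed to be already walk-corrected and normalized, E is taken as the geometric mean of EL and ER, and y = (veff/2)(tL − tR − yoffset), consistent with the text's description. The function name is illustrative:

```python
import math

def tof_reconstruct(tL, tR, EL, ER, y_offset, v_eff):
    """Combine left/right PMT quantities into a position-independent
    time and energy, plus the hit position along the counter."""
    t_bar = (tL + tR) / 2.0                    # mean time of particle impact
    E = math.sqrt(EL * ER)                     # geometric-mean energy deposition
    y = (v_eff / 2.0) * (tL - tR - y_offset)   # hit position from time difference
    return t_bar, E, y
```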
To compute these quantities the following quantities must be measured/calibrated
• Pedestals (P ). The pedestal is the base voltage on an ADC channel when no
data are present. It is measured by taking data with a pulse-trigger.
• TDC calibration constants c0, c1, c2 are extracted by calibrating time:
t = c0 + c1 T + c2 T²,
where T is the raw time in units of TDC channels and t is the converted time
in nanoseconds.
• Time-walk correction. Since the leading edge of the discriminator pulse does not rise
instantaneously, there is a pulse-height dependent uncertainty due to the time
it takes the leading edge to rise. To account for this, a correction of the form

tw = t − fw((A − P)/Th) + fw(600/Th)

is used, where Th is the channel corresponding to the leading-edge discriminator
threshold of 20 mV (∼ 35 channels), A is the raw value of the ADC, P is the
pedestal, and fw(x) is the time-walk correction function

fw(x) = w2/x^w3                                     if x < w0,
fw(x) = (w2/w0^w3)(1 + w3) − (w2 w3/w0^(w3+1)) x    if x ≥ w0,
where w0 , w2 , and w3 are fit parameters determined for each PMT using a laser
calibration system.
• Left-right delay constants (cLR ) are the relative time reported by PMTs from
the opposite ends of the same counter.
• Counter-to-counter offsets (cc2c ), also known as paddle-to-paddle constants, are
relative time shifts of the measured times from counter to counter.
• Panel-to-panel offsets (cp2p ) are the offsets between scintillator panels (currently
set to zero).
• Attenuation length (λ) is found separately for each counter. It is found by
measuring the pulse heights on the left and right PMTs:

AL − P = EL e^(−y/λ),
AR − P = ER e^(y/λ).
• Effective velocity (veff) characterizes the measured propagation of light in each
counter. It is defined relative to the time t0 at the center of the counter:

tL = t0 + y/veff,
tR = t0 − y/veff.
• Pulse height normalizations on left and right PMTs (M0L and M0R ) are the
peak height for minimum ionizing particles normally incident at the center of
the counter.
• Pulser normalization (cnorm ), the overall time scale for time measurements,
measures the possible absolute scale offset of the pulser used in calibration
runs with respect to the accelerator radio frequency (RF) time.
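As an illustration of the time-walk correction in the list above, here is a sketch of the piecewise function fw and its use; the parameter values w0, w2, w3 below are invented placeholders (the real ones come from the laser calibration system):

```python
def f_w(x, w0, w2, w3):
    """Time-walk correction function: a power law below w0 and the
    matched tangent line above it (continuous and smooth at x = w0)."""
    if x < w0:
        return w2 / x**w3
    return (w2 / w0**w3) * (1.0 + w3) - (w2 * w3 / w0**(w3 + 1)) * x

def time_walk_correct(t, A, P, Th=35.0, w0=400.0, w2=100.0, w3=0.5):
    """Apply tw = t - fw((A - P)/Th) + fw(600/Th). The w0, w2, w3
    defaults are hypothetical, for illustration only."""
    return t - f_w((A - P) / Th, w0, w2, w3) + f_w(600.0 / Th, w0, w2, w3)
```

The linear branch is chosen so that fw and its first derivative are continuous at x = w0, which keeps the correction smooth across the matching point.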
The TOF system was successfully calibrated for BONuS by Narbe Kalantarians.
Figure 55 shows the geometric mean (in ADC channels) of the minimum ionizing
peak. The geometric mean is a position-independent handle on the energy deposition
in the counter given as:
gmean = √(ADCL · ADCR),
where ADCL and ADCR are the left and right ADC values for a particular energy
deposition. Figure 56 demonstrates how well the RF (radio frequency) offsets were
calibrated [a]. This shows the overall timing resolution of the TOF, which is rather good
according to figure 56. Figure 57 shows the ratio of logarithms of energy attenuation
as reported by the right and left PMTs. The nice straight line indicates a satisfactory
attenuation length calibration.
[a] The RF offset is the difference between the predicted vertex time of electrons and the RF accelerator time.
FIG. 55: The geometric mean in ADC counts for the fifth paddle of the first sector.
Forward electromagnetic calorimeter
Distinguishing electrons from pions, which is one of the responsibilities of the electromagnetic calorimeter (EC) (see section III.2.4), necessitates a good energy resolution
in the EC. The EC response to energy deposited should be independent of the place
of the hit, which requires us to calibrate the EC before the actual data analysis.
EC energy calibration
In general, the energy of an electron passing through CLAS can be deduced using
other detectors, thus making the EC calibration a seemingly trivial task, in which
PMT (photomultiplier) gains need to simply be adjusted in such a way that the EC
response matches the energy reported by the other detectors. Due to the complicated
structure of the EC (see figure 42), each hit in the EC involves a minimum of 6 PMTs
and the reconstructed energy can be represented as
Etot = Σ(s=1,2) Σ(v=1,2,3) Σn Ensv / fs,
FIG. 56: The RF offset (horizontal axis) in ns vs the vertex z coordinate (vertical
axis) in cm. Data are for run 49560.
FIG. 57: The ratio of logarithms of energy attenuation as reported by the left and
right PMTs (ln AL / ln AR) vs the hit position x (in cm) along the scintillator.
These data are from paddle 22, sector 2, run 49560.
where Ensv is the energy seen by the nth PMT contributing to the peak in view v and
stack s:

Ensv = G (Asig − Aped) e^(x/λ),

where G is the PMT gain, λ is the effective attenuation length, fs is the overall
sampling fraction, Asig is the ADC channel corresponding to the digitized PMT
signal, Aped is the ADC pedestal, and x is the PMT-reconstructed hit distance. G, λ,
and fs are the parameters to be determined in the calibration.
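The summation over contributing PMTs can be sketched as below; treating the sampling fraction as a single number and applying an e^(x/λ) attenuation correction per PMT are simplifying assumptions of this sketch:

```python
import math

def ec_total_energy(adc, ped, gain, x, lam, f_s):
    """Sum the per-PMT energies E_nsv = G * (A_sig - A_ped) * exp(x/lam)
    over the (up to 6) contributing PMTs and divide by the sampling
    fraction. All list arguments are indexed by contributing PMT."""
    total = 0.0
    for A, P, G, xi, l in zip(adc, ped, gain, x, lam):
        total += G * (A - P) * math.exp(xi / l)  # attenuation-corrected energy
    return total / f_s
```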
Since the EC is a complicated structure with a threefold stereo readout, a global
optimization would require fitting 433 parameters ([73]) per sector. This fit might
be very slow to converge and have problems reaching a stable solution. To avoid
this complication, cosmic muons have been used to simplify the calibration. Cosmic
runs have been performed and only events activating a single pixel in the EC were
accepted thus minimizing the spread in energy deposition. Plotting the mean of
the energy deposition vs the distance to the PMT, one can extract the gain G and
the effective attenuation length λ for each PMT. Then a uniform overall response
can be achieved by adjusting the PMT high voltage. Afterwards, beam data taken
with electrons are used to estimate the sampling fraction fs and to cross-check the calibration.
RTPC calibration
There are two kinds of calibrations needed in the RTPC:
• We need to know the drift velocity and trajectories of released electrons so
that the spatial point at which they were released from the gas by an ionizing
particle can be reconstructed (thus allowing trajectory reconstruction). This is
called drift velocity calibration.
• The RTPC pads register the charge of the ions that drifted to them. We need to
know to what ionization energy this charge corresponds in order to find dE/dx
for the passing particle (which will allow identification of the particle). This is
called gain calibration.
Drift velocity calibration
Because the magnetic and electric fields in the RTPC were not constant, complicating the analytic calculation of the drift velocity and trajectory, the program
MAGBOLTZ [75] was used to generate initial electron paths. Parameterizations of
the electric and magnetic fields and an approximate composition of the drift gas were
used as inputs to the program. As a result we determined a function converting a
pad signal to a spatial point [68]:
Pxyz = Pxyz (I, Tsig ; Vcathode , VGEM , Rgas ),
where I is the pad number, Tsig is the time at which a signal was recorded, Vcathode
is the cathode voltage, VGEM is the voltage on the GEMs, and Rgas is the fraction of
helium in the drift gas, which was a He/DME mixture.
Due to the imperfect knowledge of the magnetic field and gas mixture, this function had to be further calibrated using information from the CLAS detector. A
special run with an increased RTPC voltage was conducted so that electrons registered in CLAS were also visible in the RTPC. Cross-checking information from the
two detectors allowed us to further improve the RTPC calibration. Figure 58 demonstrates RTPC - CLAS cross-checks for three coordinates of the track, illustrating the
satisfactory quality of the calibration. Nathan Baillie did this very involved work to
perform the calibration (see [91]).
Gain calibration
No RTPC channels were found whose response to a test pulse during bench tests
prior to installation was more than a few percent away from the mean response [68].
Nevertheless, later it was found that the effective detector gain varied considerably
across the surface of the RTPC [68]. Therefore, we had to accurately determine
the relative responses of all 3200 pads before useful dE/dx information could be
extracted from the data.
Using the drift velocity/trajectory calibration described above, each track’s momentum was determined. Using the momentum, the average dE/dx expected for a
proton was determined for the track using the Bethe-Bloch formula (see, for example,
[76]). Using the drift paths obtained in the drift velocity/trajectory calibration, the
amount of ionized electrons drifting to each pad was determined. Since the charge
obtained by the pad is known, the energy-charge calibration is possible. For each
track (i) and pad (j) the following mean response ratio was computed [68]:

G1j = (1/N) Σi (Qi,j / Ei,j),
FIG. 58: Comparison of coordinates as reported by the CLAS and RTPC. The upper
panels show the comparison of the z vertex of the reaction. The difference of the z
coordinate as reported by the CLAS and by the RTPC is shown on the left, and the
two coordinates plotted versus each other are shown on the right. The lower panel
shows the comparison of the track initial angles by showing differences in the values
as reported by the CLAS and by the RTPC. The polar angle θ difference is shown
on the right, and the azimuthal angle φ difference is shown on the left.
where the sums over i for each pad run over those track segments that were predicted
to produce a signal in pad j, Qi,j is the time integrated pulse height measured on
the pad for the given track, and Ei,j is the predicted energy loss of the track i whose
ionization electron should have drifted to the pad j.
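The first-pass gain factor for a single pad is just the mean measured-to-predicted ratio over the N tracks assigned to that pad, e.g.:

```python
def pad_gain_factor(Q, E):
    """First-pass gain-normalization factor G1_j for one pad: the mean
    of Q_ij / E_ij over the tracks predicted to hit the pad.
    Q: measured time-integrated pulse heights; E: predicted energy losses."""
    ratios = [q / e for q, e in zip(Q, E)]
    return sum(ratios) / len(ratios)
```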
The obtained gain-normalization factors, G1j , were used to scale the raw pulse
heights and the same calculation was performed again, excluding tracks whose
measured dE/dx was inconsistent with that of protons. The second-pass gain-normalization factors were retained and used for the final analysis. See figures 59 and
60 for dE/dx distributions before and after the gain calibration with Bethe-Bloch
curves overlayed. The data point bands around the curves in the “after” picture (60)
clearly indicate that the calibration was successful.
Momentum corrections
Even after all the alignment and calibration procedures, the particle momenta reconstructed by CLAS are not accurate, which is illustrated in particular by the elastic
peak being shifted from its nominal value (proton mass) and broader than the CLAS
resolution would indicate (see figure 61), as well as in a polar and azimuthal angle
dependence of the momentum (see figure 66) [77]. These deviations have several
possible sources of origin:
• Inaccurate knowledge of wire positions because of residual drift chamber
misalignment, wire sag, and thermal and stress distortion of the drift chambers.
• Imperfect knowledge of the torus magnetic field.
• An axis offset of the solenoid magnet with respect to the torus axis.
• A beam position offset from the nominal position.
• Multiple scattering of particles.
• Energy loss of particles between the target and the detectors.
• Inaccurate knowledge of the beam energy (which does not affect the determi-
nation of the particle momentum directly, but needs to be taken care of since
the whole procedure was based on momentum-energy conservation).
FIG. 59: The dE/dx distribution of particles registered by the RTPC before the
RTPC gain calibration. The solid curves are calculated dE/dx curves for various particles.
FIG. 60: The dE/dx distribution of particles registered by the RTPC after the RTPC
gain calibration. The solid curves are calculated dE/dx curves for various particles.
FIG. 61: Invariant mass, W , distribution for the p(e,e′ )p reaction before momentum
corrections. Beam energies are 1.099 GeV (top left), 2.140 GeV (top right), 4.223
GeV (bottom left), and 5.262 GeV (bottom right).
To solve these problems, a procedure of bringing particle momenta to their optimized
values, known as momentum correction, was performed.
Procedure background
The iterative procedure for momentum corrections took care of the aforementioned
problems one by one, followed by a return to the beginning, re-evaluation of the
results of each step, and possible re-iteration. Each of the steps was assessed and the
best recipe for each as well as the best sequence of corrections were determined.
Our method is an extension of the one described in [77] and as such has the same
approach to the procedure, namely imposing conservation of momentum and energy
by minimizing the missing momentum and energy of a reaction. Basic assumptions
about the form of the necessary corrections were made. The number of parameters
we used is slightly larger than the one in [77], with 12 parameters common for all
the sectors being added to the original 14 per sector. Then a fitting procedure
using a large data sample including elastic as well as inelastic events at different
beam energies was applied to fix the parameters. As a result, a set of “universal”
parameters for correcting particle momenta in different reactions at different energy
settings was developed.
Drift chamber displacement
To correct this problem, a set of 8 parameters per sector was used. These parameters
correct displacements along the z- and x-axes as well as φ-dependent
x and z displacements for the region 2 and region 3 drift chambers. The effect of
these can be written as a change in the polar angle and momentum:
cos θ
∆θ = (par1 + par2 φ)
+ (par3 + par4 φ) sin θ
cos φ
cos θ
= (par5 + par6 φ)
+ (par7 + par8 φ) sin θ
cos φ
where par1 . . . par8 are the parameters to be determined, q is the particle charge
in units of the electron charge, φ is the local sector azimuthal angle, θ is the polar
angle, and p is the particle momentum value. Btor stands for ∫ Btrans dℓ along the
path of the track, multiplied by the speed of light in units of m/ns (c = 0.29979
m/ns). The ratio q Btor/p is proportional to the amount of curvature of the track, which
determines the effect of the drift chamber misalignment. A simple parameterization
of this integral in terms of the polar angle was found [77]:

Btor = 0.76 (Itor/3375) sin²(4θ)/(θ/rad)   for θ < π/8,
Btor = 0.76 (Itor/3375)/(θ/rad)            for θ ≥ π/8,

where Itor is the torus current.
Parameters par1 and par5 describe radial outward displacements, parameters par2
and par6 describe φ-dependent radial displacements, par3 and par7 describe displacements along the beam axis, and par4 and par8 account for a rotation about the radial axis.
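A sketch of applying the eight per-sector displacement parameters, assuming the ∆θ and ∆p/p parameterization described in the text (angles in radians; the function name is illustrative):

```python
import math

def dc_displacement_correction(par, q, p, theta, phi, B_tor):
    """Return (delta_theta, delta_p_over_p) from the 8 per-sector
    drift-chamber displacement parameters par[0..7] (par1..par8),
    where phi is the local sector azimuthal angle."""
    dtheta = ((par[0] + par[1] * phi) * math.cos(theta) / math.cos(phi)
              + (par[2] + par[3] * phi) * math.sin(theta))
    dp_over_p = (q * B_tor / p) * (
        (par[4] + par[5] * phi) * math.cos(theta) / math.cos(phi)
        + (par[6] + par[7] * phi) * math.sin(theta))
    return dtheta, dp_over_p
```

The resulting momentum update would be pnew = pold · (1 + dp_over_p) with θnew = θold + dtheta, as described below.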
Torus field map imperfections
Differences between the actual spatial magnetic field distribution and whatever is encoded in the reconstruction programs are a possible origin of deviations of the reconstructed
momenta from the real ones. The parameterization

∆p/p = (par9 cos θ + par10 sin θ + par11 sin 2θ) + (par12 cos θ + par13 sin θ + par14 sin 2θ) φ,

where par9 . . . par14 are 6 more fitting parameters, does a decent job of fixing
this problem.
Once the new polar angle θnew = θold + ∆θ and ∆p/p are found, the new absolute
value of the momentum can be found as

pnew = pold · (1 + ∆p/p),
and new momentum components are
px,new = pnew cos φ sin θnew
py,new = pnew sin φ sin θnew
pz,new = pnew cos θnew .
Solenoid axis offset
If the axis of the solenoid does not coincide with the beam line, then the solenoid
magnetic field through which the reconstruction program calculates the particle trajectory is wrong, the particle trajectory is bent by the wrong amount, and angles
(polar and azimuthal) calculated by the reconstruction are wrong. To correct for
that, angle corrections of the following form are introduced:
dθ = qB(par15 cos φ − par16 sin φ)/(p sin θ),
dφ = qB(par17 sin φ − par18 cos φ)/(p sin θ),    (119)
where φ and θ are the azimuthal and polar angles, respectively. In (119), q is
the particle charge, B is the solenoid magnetic field, p is the particle momentum, and
par15 . . . par18 are parameters determined by fitting. Then the corrected angles are
φnew = φold + dφ
θnew = θold + dθ,
and the corrected momentum components are found as
px = p cos φnew sin θnew
py = p sin φnew sin θnew
pz = p cos θnew .
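A sketch of the solenoid-offset angle correction followed by the recomputed momentum components; since the extracted source is ambiguous about how the 1/(p sin θ) factor is split between the two corrections, dividing both by p sin θ is an assumption here:

```python
import math

def solenoid_offset_correction(q, B, p, theta, phi, par):
    """Apply the angle corrections for a solenoid-axis offset
    (par[0..3] stand in for par15..par18) and recompute the momentum
    components from the corrected angles."""
    dtheta = q * B * (par[0] * math.cos(phi) - par[1] * math.sin(phi)) / (p * math.sin(theta))
    dphi = q * B * (par[2] * math.sin(phi) - par[3] * math.cos(phi)) / (p * math.sin(theta))
    th, ph = theta + dtheta, phi + dphi
    return (p * math.cos(ph) * math.sin(th),   # px
            p * math.sin(ph) * math.sin(th),   # py
            p * math.cos(th))                  # pz
```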
Beam position offset
In the standard reconstruction routine it is assumed that the electron beam goes
strictly along the axis of the CLAS coordinate system and thus all reactions take
place at (x, y) = (0, 0) in this coordinate system. As usual, real life introduces
deviations from this simple picture. The beam, as good as it is, comes into the hall
at a tiny angle, which is large enough, though, to produce a noticeable difference
between the reaction vertex x and y coordinates and the assumed pair of zeroes.
This causes the φ angle of the particle track to be reported incorrectly and its x- and y-momentum components to have the wrong values.
To correct for this, a two-step procedure was used. First, by using multiple track
CLAS events, the (x, y) vertex coordinate for each event was found as the coordinate
of the point where these tracks intersect. Vertex x and y positions were assumed to
be a linear function of the z coordinate and fit accordingly. This work was performed
by Jixie Zhang [92]. This information, in turn, was used to correct the φ angles:
φcorr = φraw + (q B x′/100)/(33.356 p sin θ),
where φcorr and φraw are the corrected and raw angles, q is the particle charge in units
of the electron charge, B is the solenoid field expressed in kGauss, p is the momentum
of the particle, and θ is its polar angle. The factor of 100 converts centimeters to
meters and 33.356 is the inverted speed of light in the proper units. x′ is found as
x′ = (x cos φs + y sin φs)/cos(φ − φs),
where x and y are coordinates of the vertex in cm, and φs is the azimuthal angle of
the center of the sector in which the particle was detected. Then the z-position of
the vertex is corrected as
zcorr = zraw + x′/tan θ + par19 (θ − θini)/sin² θ,
where θini is the polar angle at the beginning of the fitting iteration, θ is the polar
angle after all the previous corrections, and par19 is a fitting parameter whose physical
meaning is the distance from the vertex to the first region of DC (∼ 60 cm).
Knowing the φ angle and considering the transverse momentum p⊥ = √(p²x + p²y)
to be unaltered by the beam position change, we can calculate the proper x and y
momentum components as
px = p⊥ cos φcorr
py = p⊥ sin φcorr .
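The φ correction for the beam offset can be sketched as follows (B in kGauss, p in GeV/c, x and y in cm, angles in radians; the function name is illustrative):

```python
import math

def phi_beam_correction(phi_raw, theta, q, B_kG, p, x, y, phi_s):
    """Correct the azimuthal angle for the beam (x, y) offset: x' projects
    the vertex offset onto the sector (phi_s is the sector mid-plane angle),
    /100 converts cm to m, and 33.356 is the inverse speed of light in the
    proper units."""
    x_prime = (x * math.cos(phi_s) + y * math.sin(phi_s)) / math.cos(phi_raw - phi_s)
    return phi_raw + q * B_kG * (x_prime / 100.0) / (33.356 * p * math.sin(theta))
```

The transverse momentum p⊥ is then redistributed between px and py using the corrected angle, as in the equations above.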
Multiple scattering
A particle encounters a lot of scattering centers while traversing the CLAS volume.
As a result of the interaction with some of them, its polar and azimuthal angles
registered by the detectors are not the same as the angles at the reaction vertex. This
introduces an additional ambiguity into how the particle momentum is distributed
among the components. In addition, its vertex position can be shifted, and as a
result, the reconstruction program will try to bend the particle track by the wrong
amount, trying to get it to the wrong vertex. This introduces an error into the
momentum value. Luckily, we have events with more than one particle, and it is
possible to use information from multiple tracks to improve our knowledge of the
actual vertex position, and subsequently correct track angles and momentum.
The average z of the vertex was found by taking the weighted average of the z
positions of all the charged particles in the event:

⟨z⟩ = [Σ (zpart/zres)] / [Σ (1/zres)],
where the sums are over all the well-identified charged particles in the event, zpart is
the z position for each of the particles, and the “resolution” in z, zres, is an empirical
estimate of the per-track z uncertainty expressed in terms of β, the particle speed in
units of the speed of light, and p⊥, the component of the particle momentum
perpendicular to the z-axis.
Having found the weighted average, we “force” all the particles to originate from
that vertex and correct the angles by the amounts
dθ = ∆z (par20 sin² θ + par21)/p,
dφ = par22 q B ∆z/pz,    (128)
for the polar and azimuthal angles, respectively. In equations (128) ∆z is the distance
from the reconstructed vertex z for the particle to the weighted average defined by
(126), p is the reconstructed momentum of the particle, pz is the z component, q is
the charge of the particle, B is the solenoid magnetic field, and par20 . . . par22 are
parameters found by fitting.
Then the new angles θnew = θold + dθ and φnew = φold + dφ are found, and the
new components of the momentum are
px = p cos φnew sin θnew
py = p sin φnew sin θnew
pz = p cos θnew .
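The vertex averaging step can be sketched as below; the exact expression for zres is not reproduced here, so it enters only as an input:

```python
def weighted_vertex_z(z_list, zres_list):
    """Weighted average of per-track z vertices, with weights 1/z_res,
    as used to 'force' all tracks in an event to a common vertex."""
    num = sum(z / r for z, r in zip(z_list, zres_list))
    den = sum(1.0 / r for r in zres_list)
    return num / den
```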
Energy loss
Energy loss of particles was accounted for using a simulation package combining a
GEANT4 based simulation of the target and RTPC developed for our experiment
and the GEANT3 based package GSIM describing the standard CLAS configuration.
A number of particles were generated with random momenta, vertex z and vertex
angles φ and θ. Then they were led through the simulated detectors using the aforementioned simulation package, with subsequent reconstruction of the tracks back to
vertices using the standard CLAS reconstruction software. The difference between
the “true” momenta with which the particles were generated and the momenta as
reported by the reconstruction software was minimized using a function based on the
Bethe-Bloch formula [76] with four fitting parameters. These parameters were tabulated for a range of polar angle, vertex z and momentum bins for different particles.
The parameters were used to find the energy loss for individual particles in real
data in the following way. The measured kinetic energy of a particle is
Tf = √(m² + p²) − m,
where m is the mass of the particle, and p is its momentum as reported by the CLAS
reconstruction. From this, its “true” (initial) kinetic energy was reconstructed:
Ti = (Tf^b + a m^(b−1.0))^(1/b) + c + d · log(p/m),
where a, b, c, and d are parameters found in the fitting procedure. Then the “true”
initial energy was found as
Ei = Ti + m.
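A sketch of the energy-loss inversion, assuming the natural logarithm for the log(p/m) term; the parameters a, b, c, d are the tabulated fit values and are bin-dependent:

```python
import math

def initial_kinetic_energy(p, m, a, b, c, d):
    """Measured kinetic energy T_f from (p, m), then the parameterized
    inversion T_i = (T_f^b + a*m^(b-1))^(1/b) + c + d*ln(p/m)."""
    Tf = math.sqrt(m * m + p * p) - m
    return (Tf**b + a * m**(b - 1.0))**(1.0 / b) + c + d * math.log(p / m)
```

The "true" initial energy is then Ei = Ti + m, as in the text.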
Beam energy
Since we are trying to reconstruct the proper momentum values by minimizing the
missing energy, we need to know the initial energy of the reaction, and hence the
beam electron energy. Unfortunately, this energy is not measured in Hall B, and
what is written in the database is the set accelerator energy which can be off by
10-20 MeV [81]. The actual energy can be deduced from the measured momenta in
elastic reactions ep → ep as

E = mN [cos θe + (cos θp/sin θp) sin θe − 1] / (1 − cos θe),
where mN is the nucleon mass, θe and θp are polar angles of the electron and proton,
respectively. The problem is that we need the beam energy to determine the new
components of the momenta and hence the angles.
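The elastic relation can be evaluated directly from the two polar angles. This sketch uses the algebraically equivalent standard form E = mN [cot(θe/2) cot θp − 1] rewritten with sines and cosines; mN defaulting to the proton mass in GeV is an assumption of the sketch:

```python
import math

def beam_energy_elastic(theta_e, theta_p, m_N=0.938272):
    """Beam energy from the electron and proton polar angles in
    ep -> ep elastic scattering (angles in radians, energy in GeV)."""
    return m_N * (math.cos(theta_e)
                  + math.sin(theta_e) * math.cos(theta_p) / math.sin(theta_p)
                  - 1.0) / (1.0 - math.cos(theta_e))
```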
Alternatively, we can use a beam energy measurement from another hall and convert it to the Hall B beam energy via

EHall B = Eanother Hall · (nB + 0.056)/(no + 0.056),
where nB and no are the number of beam passes on the way to Hall B and the other
Hall, respectively. This method gives an energy value much closer to the “true”
energy, and values determined in this way (see table 7) were used as initial values
for the energies in the fitting procedure. These energies were later used as fitting parameters.
Event selection
The correction method at hand uses momentum-energy conservation. Thus, it is very
important to use exclusive events; otherwise the whole procedure does not make sense.
The simplest reaction accessible would be exclusive ep → ep. To cover lower particle
TABLE 7: Beam energy values deduced from Hall A measurements, GeV
Beam pass        1      2      4      5
Beam energy    1.099  2.140  4.223  5.262
momenta and decouple momenta and scattering angles, we need to also use exclusive
reactions with more than two particles in the final state, for which ep → epπ − π +
was chosen. Although reactions with different torus polarities would be desirable to
decouple parameters in equations (114) that depend on the torus sign from those of
equations (115) that do not, we did not have such a luxury; all the BONuS data were
taken at one torus polarity.
To ensure exclusivity, only events with missing energy less than 0.1 GeV, and all
missing momentum components below 0.1 GeV/c were selected. Fiducial, electron
ID, geometric solenoid and trigger efficiency cuts (see below) were applied to all the
events. Particle IDs were identified comparing time of flight as reported by CLAS
with the time of flight that a physical particle of the given mass and momentum
would have. A timing cut of 2 ns was applied to the difference between them.
Then for the events that had the necessary numbers of particles, cuts of 0.1 GeV
on the absolute values of the missing momentum/energy were applied to ensure the
reaction was exclusive. Additionally, to ensure that identified elastic event candidates
were indeed elastic, the difference between electron and proton azimuthal angles was
required to be within 2 degrees of 180 degrees and the invariant mass of the reaction
was required to be between 0.8 and 1.1 GeV.
Fitting procedure and results
The MINUIT minimization package was used for optimizing the momentum correction parameters [78]. The fitting was done in two steps. In the first step events
with missing energy and momentum |Emiss | ≤ 0.1 GeV, |pz,miss | ≤ 0.1 GeV/c,
|px,miss | ≤ 0.07 GeV/c, and |py,miss | ≤ 0.07 GeV/c were chosen. The elastic candi-
dates were subjected to additional ∆φ = 180 ± 1.5◦ (∆φ being the difference between
TABLE 8: Uncertainties for missing energy and momentum spreads for 4 beam
energies, GeV.
Beam pass                  1      2      4      5
Uncertainty for Emiss    0.015  0.030  0.036  0.038
Uncertainty for px,miss  0.013  0.017  0.019  0.021
Uncertainty for py,miss  0.013  0.017  0.019  0.021
Uncertainty for pz,miss  0.017  0.028  0.035  0.039
electron and proton azimuthal angles) and W cuts:
0.91 ≤ W ≤ 0.96 for 1 pass events
0.90 ≤ W ≤ 0.97 for 2 pass events
0.89 ≤ W ≤ 0.98 for 4 pass events
0.88 ≤ W ≤ 0.99 for 5 pass events.
The selected events were fit using the procedure described below, after which another
step, with events selected by stricter cuts (with cut values already corrected using
parameters from the first step) was taken. Cuts for the second step were: |Emiss | ≤
0.06 GeV, |pz,miss| ≤ 0.06 GeV/c, |px,miss| ≤ 0.05 GeV/c, |py,miss | ≤ 0.05 GeV/c,
∆φ = 180 ± 1.0◦ . The W cuts were the same as for the first step.
In each of the steps, the procedure was the same. For each selected event, the
corrections outlined in section IV.2.5 were applied one by one. After all the corrections and energy loss were applied, the minimization χ2 was calculated. Its main
contributions came from the missing energy and momentum, to which each event
contributed

∆χ² = E²miss/σ²Emiss + p²x,miss/σ²px,miss + p²y,miss/σ²py,miss + p²z,miss/σ²pz,miss,
where the σ’s are reasonable uncertainties for the corresponding variables, listed in
table 8 for 4 possible beam energies. They were found by plotting missing energy
and momentum distributions for raw data.
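The per-event contribution can be assembled from the uncertainties of table 8 as in this sketch:

```python
def event_chi2(E_miss, p_miss, sigma_E, sigma_p):
    """Per-event chi^2 contribution from the missing energy and the
    three missing-momentum components, each weighted by its uncertainty.
    p_miss and sigma_p are 3-tuples for the x, y, z components."""
    chi2 = (E_miss / sigma_E) ** 2
    for comp, sigma in zip(p_miss, sigma_p):
        chi2 += (comp / sigma) ** 2
    return chi2
```

Summing this over all selected events, plus the vertex and W terms described below, gives the quantity MINUIT minimizes.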
Also, for each event a sum of squared differences between the weighted average z-position of the particles and the z-position of each particle was added to “force” the
particles to the same vertex. One more per-event contribution for p(e,e′ p) events
came from forcing the invariant mass of the reaction, W , to be the same as the mass
of the target (proton), which added
∆χ² = (W − Mp)²/σ²Emiss,
where Mp is the proton mass.
After going over all the events and adding the χ2 contributions pertaining to each
of them, contributions from each of the parameters were added to avoid parameter
“run-away” to some corners of parameter space. For the majority of the parameters,
we added
∆χ² = param²/σ²param,
where param stands for a parameter value at the end of the iteration, σparam is the
“reasonable uncertainty” for the parameter (σparam for the 14 “per sector” parameters
were 0.005, 0.01, 0.005, 0.01, 0.002, 0.005, 0.002, 0.005, 0.002, 0.002, 0.002, 0.005,
0.005, 0.005; σparam for the beam energy parameters were 5 MeV; σparam for the
solenoid correction parameters were 0.0003; σparam for the beam correction parameter
was 1 cm; σparam’s for the multiple scattering parameters were 0.01). For a few parameters
that had physically motivated starting values (beam energy parameters, beam offset
parameter), the contribution was
∆χ² = (param − start value)²/σ²param,
to avoid parameters departing too far from where they should be [b]. Representative figures 62 and 63 show invariant mass distributions for events in which fitting
was performed (figure 62) and for random inclusive events (figure 63) for 5 pass data
before and after the momentum correction was applied. In both cases, the corrected
distribution has a smaller sigma and the mean is much closer to the expected value
(proton mass, 0.938 GeV). The remaining 2-3 MeV of difference between the means
and the expected value can be attributed to radiative effects not accounted for by the
procedure. The data for other beam energies also showed a reasonable improvement,
thus indicating that the procedure was successful.
Figure 64 shows missing energy distributions before and after the corrections for
[b] All starting values were zeroes except for five parameters: the initial values of beam energies
were 1.099, 2.140, 4.223, and 5.262 GeV (parameters 85-88), and the distance to the first region of
drift chambers was given an initial value of 60 cm (parameter 93).
FIG. 62: Raw (upper panel) and corrected (lower panel) invariant mass distributions
for pre-selected 5 pass events that were used in the fit.
FIG. 63: Raw (upper panel) and corrected (lower panel) invariant mass distributions
for inclusive 5 pass events from runs 49929-49935. These events are not necessarily
those fit for the momentum corrections.
FIG. 64: Missing energy distributions before (left side) and after (right side) the
momentum corrections. Results are shown for elastic ep → ep events for four
beam energies.
all four energies. On average [c], the mean and sigma are better for the corrected distributions. Figure 65 shows distributions of the z component of the missing momentum.
Once again, on average, the corrected distributions are better.
Figures 66 and 67 show the percentage differences between the electron momenta expected from elastic scattering (as before, calculated using the beam energy and scattering angles) and the measured momenta. The corrected momentum distributions (figure 67) are flatter and closer to zero, indicating that the momenta were brought closer to the expected values by the correction procedure.
[c] The momentum correction procedure found a set of “universal” parameters that work for all the energies. While it was possible to find better solutions for each of the energies separately, we preferred to choose a set of common parameters that improve the distributions on average.
FIG. 65: z component of missing momentum distributions before (left side) and after
(right side) the momentum corrections. Results are shown for elastic ep → ep events
for four beam energies.
FIG. 66: The difference between the expected momentum (calculated from beam energy and scattering angle as pexpected = Ebeam/(1 + (Ebeam/Mp)(1 − cos θ))) and the measured momentum, divided by the expected momentum value, before the momentum corrections, vs φ, the azimuthal angle relative to the sector mid-plane, for each sector. ep → ep data for the 5 GeV beam energy are shown.
FIG. 67: The difference between the expected momentum (calculated from beam energy and scattering angle as pexpected = Ebeam/(1 + (Ebeam/Mp)(1 − cos θ))) and the measured momentum, divided by the expected momentum value, after the momentum corrections, vs φ, the azimuthal angle relative to the sector mid-plane, for each sector. ep → ep data for the 5 GeV beam energy are shown.
RTPC momentum corrections
Although the RTPC is a simpler detector than CLAS, momentum corrections had to be applied to the RTPC data as well. Two RTPC momentum corrections were applied:
1. Correcting momenta themselves to extract the proper initial momentum value
given the radius of curvature of the track. An extension of this correction also
provides a corrected value of the polar angle θ of the spectator proton. This
correction was needed to account for the energy loss of spectator protons.
2. The first correction took the energy loss into account, but some other limitations of the momentum reconstruction were not accounted for. These include an imperfect description of electron paths in the drift region, limited statistics, and clustering of ionization points. To correct for these, we corrected the radii of curvature of trajectories in the RTPC for systematic shifts, using CLAS data to predict the proper radius of curvature.
The first correction was based on a GEANT4 simulation by J. Zhang. A number of events in a range of z (coordinate along the beam axis), θ (spectator proton
polar angle), and ps (spectator proton momentum) were generated. They were subsequently run through the RTPC simulation and the radius of curvature and angle
θ of the tracks recorded by the simulated RTPC were compared with the thrown
momenta and θ. Utilizing this comparison, J. Zhang was able to provide a one-to-one correspondence between the detected radius of curvature and the true spectator
momentum, which corrects for effects like energy loss (see figure 68 for results of two
iterations of this correction). See [92] for more details.
The momentum distribution provided by this correction was not perfect. Its deviation from the distribution expected from the deuteron wavefunction was noticeable,
whereas the distribution of spectator proton momenta extracted from the CLAS data
(utilizing events with a missing mass equal to that of the proton, and assuming that
the missing momentum is that of the spectator proton) agreed rather well with the
aforementioned theoretical distribution.
To address this, we used fully exclusive d(e, e′ pCLAS π− pRTPC) events to compare the spectator momentum distributions predicted from the CLAS data and measured by the RTPC, in order to find the systematic shift in the measured momenta caused by the reconstruction problems mentioned above.
The discrepancy in the momentum measurements was addressed by minimizing the difference between the momenta reported by the RTPC and those expected from the CLAS information, modifying the raw radius of curvature and angles of the spectator as
Rnew = Rold /(p1 · Rold + p2 )
θnew = p3 · θold + p4
φnew = p5 · φold + p6 ,
with subsequent fitting of the parameters p1…p6 to minimize the RTPC − CLAS momentum difference.
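A minimal sketch of applying this correction, assuming the fitted parameters p1…p6 are supplied externally (the identity values used in the usage note are placeholders, not the fitted ones):

```python
def correct_rtpc_track(R, theta, phi, p):
    """Apply the RTPC track corrections quoted above:
    R_new = R_old/(p1*R_old + p2), theta_new = p3*theta_old + p4,
    phi_new = p5*phi_old + p6.  The parameter values p1..p6 were
    obtained by fitting the RTPC-CLAS momentum difference and are
    not reproduced here."""
    p1, p2, p3, p4, p5, p6 = p
    return (R / (p1 * R + p2), p3 * theta + p4, p5 * phi + p6)
```

With p = (0, 1, 1, 0, 1, 0) the correction reduces to the identity, which makes the transformation easy to sanity-check.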
The RTPC momentum distribution after these two corrections was closer to the
expected one. The ps spectrum still falls off faster than predicted. This can be attributed to the incompletely understood RTPC reconstruction efficiency at the low charge per unit track length registered by the RTPC at higher spectator momenta.
Experimental data
We applied the following cuts:
• Counters (ec, cc, dc, sc) for any particle are not larger than 200 and the stat
variable for the trigger particle is positive.
• Target is deuterium.
• Trigger particle is a good electron:
– Negative charge.
– For electron momenta less than or equal to 3.0 GeV/c, the number of
photoelectrons in the Cherenkov counter must be at least 1.5, and at least
1.0 for other momentum values.
– The energy registered by the inner calorimeter is larger than 0.06 GeV, and the total energy in the calorimeter divided by the particle momentum is smaller than 0.34 and larger than 0.016p + 0.15, where p is the particle momentum.
FIG. 68: Proton momentum distribution in the RTPC (top panel) and the difference
between the measured and true spectator momentum (bottom panel) are shown. Raw
measured distributions are shown in black, the results after the first order energy loss
RTPC correction are shown in green, and results after the second order energy loss
RTPC correction are shown in blue. An attempt to correct the RTPC momentum
not discussed in the text is shown in pink.
– The Osipenko cut [95] is passed (a geometrical and temporal matching between the CC signal and the measured track, required to eliminate coincidences between CC noise and pion tracks, which can result in pions masquerading as electrons; such coincidences used to be the largest source of pion contamination in CLAS).
– The fiducial cut (a geometrical cut serving to eliminate events in which
electrons went through regions of the CC known to be inefficient) is passed.
See figure 69 for an example of a fiducial cut.
– “θ-z” cut is passed. This is a simple cut making sure that a particle did
not hit the DVCS solenoid. For each z coordinate, only particles going
through a particular range of the polar angle θ should be able to clear the
solenoid. If the reported particle polar angle and z coordinate pairing was
not possible geometry-wise, the particle was thrown out.
– The trigger is in the same sector as the first electron in the event.
• The trigger electron momentum is larger than 20% of the beam energy.
• Spectator cuts:
– dq/dx[d] of the candidate is not more than two sigmas larger and not less than three sigmas smaller than the dq/dx expected for a proton (see figure 60 for an example).
– The distance between either end of the ionization trail and the closest
chamber boundary is less than 5 mm.
– Vertex z is inside the fiducial target region (between -60 and 100 mm in the RTPC coordinate system[e]).
– The difference between the vertex z as reported by CLAS and as reported
by the RTPC is no more than 15 mm.
– χ2 of the fitted track in the RTPC is less than 4.
– Radius of curvature of the track is positive (positively charged particle).
– More than 5 pads register above-threshold charge in the event.
[d] Denotes the charge per unit length of a track registered by the readout pads.
[e] With respect to the center of the RTPC. It can be converted to the CLAS coordinate system by subtracting 58 cm, since the RTPC was at -58.0 cm in the CLAS coordinate system.
FIG. 69: The electron distribution shown as a function of the azimuthal angle relative to the sector mid-plane (vertical axis) and the polar angle (horizontal axis). Such distribution plots were used to find angular regions in which the electron detection is less efficient (shown as less densely populated on the plot). Events with electrons going into these regions (outside of the red line on the plot) were not used in the analysis. The red line represents the fiducial cut. The 2 GeV beam energy data are shown.
– Total charge collected by the RTPC is larger than 0.
• RTPC proton momentum correction (see section IV.2.6).
• CLAS particle momentum corrections (see section IV.2.5).
Accidental background subtraction
The aforementioned cut on the difference between the trigger electron vertex z and
the spectator proton candidate vertex z (in conjunction with other cuts) left us with
a rather clean sample of tagged events. Unfortunately, there still were accidental
coincidences in a fraction of these events.
To eliminate such events, we assumed that the accidental background had a triangular shape and fit ∆z (∆z = zelectron − zspectator) with the sum of a Gaussian representing the signal and the triangular background. This was done before the 15 mm cut was applied, so that the “wings” of the distribution, which are pure background, could be used to estimate the background (see figure 70 for the fits). The vertical lines in figure 70 show the 15 mm ∆z “signal” cut; events inside it were considered to come mainly from the signal. Events with |∆z| larger than 20
mm and smaller than 160 mm were considered to come from accidental coincidences
(“wing” cut). We found the number of background events to the right of the “wing”
cut (∆z > 20 mm), Nright , to the left of the “wing” cut (∆z < −20 mm), Nlef t , and
inside the “signal” cut (-15 mm < ∆z < 15 mm), Ncentral , based on the triangular fit.
We also assumed that in all of our kinematic bins accidental coincidences sneaking
inside the cut represent the same fraction of overall events. Then, we counted the
number of events falling outside the ∆z cut for each bin Nbg,bin , and scaled it with
the ratio Rbg = Ncentral /(Nright + Nlef t ) of events inside and outside the cut that
we got from the overall distribution. Thus, the sample of events free of accidental
coincidences for a given bin was
Nclean,bin = Nraw,bin − Rbg Nbg,bin ,
where Nraw,bin is the number of raw events in a bin.
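The subtraction scheme above can be sketched as follows (function and variable names are illustrative):

```python
def clean_counts(n_raw_bin, n_bg_bin, n_central, n_left, n_right):
    """Accidental-coincidence subtraction as described above:
    R_bg = N_central/(N_right + N_left) comes from the overall
    triangular-fit background, and scales the per-bin count of
    events outside the delta-z cut (N_bg_bin)."""
    r_bg = n_central / (n_right + n_left)
    return n_raw_bin - r_bg * n_bg_bin
```

Because R_bg comes from the overall distribution, the same background fraction is assumed in every kinematic bin, exactly as stated in the text.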
Simulated data
For the simulated data, the same cuts are applied as for the real data, except for the
CC and EC cuts. The CC and EC cuts were not used since the acceptance/calibration
of these detectors was not fully understood. In particular, the hardware threshold
for the EC trigger input was set rather high and varied throughout the experiment.
This was remedied by additionally running the full inclusive simulation (ignoring
spectator protons) and comparing it with the data (see figure 71). There were no
CC and EC cuts in the inclusive simulation either. Comparison of the distribution of
scattering angles and energies of the simulated electrons with inclusive experimental
data allowed us to extract the trigger electron detection efficiency as a function of
scattering angle and energy. The efficiency was used in the main analysis by weighting simulated counts, thus compensating for our lack of understanding of the detector efficiency (see chapter V and figure 71 for more on this efficiency).
For the simulated data, only the first of the RTPC momentum corrections was
used (see IV.2.6). No radius of curvature rescaling was applied. Similarly, the full
FIG. 70: The distribution of ∆z = zelectron − zspectator for 2 GeV events before the
∆z cut was applied. The experimental distribution (black line) was fit with a sum of a Gaussian representing the signal (green line) and a triangular background (blue line). The dashed vertical lines represent the ±15 mm cut applied to the events
in the analysis.
FIG. 71: Inclusive W distributions for experimental (red) and simulated (blue) data.
Simulated data was scaled by a factor of 13.2 to account for the difference between
experimental and simulated luminosity. The beam energy is 2.140 GeV.
CLAS momentum correction was not used; only the energy loss correction was applied. No accidental background subtraction was applied to the simulated data.
The physics analysis had the following goals:
• Check the spectator approximation (see section II.8.1).
• Study the off-shell F2n structure function dependence on the spectator momentum ps. We were not completely successful in this because the ps distribution was distorted by the unknown RTPC inefficiency at high ps.
• Study the tagged[f] cross-section dependence on the angle θpq between the spectator proton and the virtual photon.
To achieve these goals the following steps were taken.
Experimental data
First, events that pass certain cuts (details below) are filled into an array of structures containing:
• “Index” of the event - the number of the event as it was read in.
• Trigger electron momentum, GeV.
• Square of the momentum transfer (Q², (GeV/c)²).
• Event invariant mass (W , GeV)
• Flag whether event was tagged (flag is true) or not (false).
• Spectator momentum, GeV (set to -10.0 if not tagged).
• Cosine of the angle between the direction of the spectator and the direction of
the momentum transfer (set to -10.0 if not tagged).
• Invariant mass accounting for the neutron motion (W ∗ , GeV) (set to -10.0 if
not tagged).
[f] Tagged events are events with a spectator proton registered in the RTPC.
• Polar scattering angle of the spectator, radians.
• z coordinate of the reaction vertex as reported by the RTPC, cm.
• Difference in the z coordinate of the vertex as reported by the RTPC for the
spectator and registered by CLAS for the trigger electron.
Simulated data
To estimate detector acceptance and inefficiencies, we need to utilize a simulation
with subsequent comparison to real data. Additionally, if we want to compare the
data with a certain model, we need to simulate events generated according to that
model and compare them with the data.
Generating events
Events used in this analysis were generated using a generator written by Sebastian Kuhn. The generator is based on the RCSLACPOL code developed at SLAC
[82]. The three purposes for which we need simulated events in this analysis are:
subtracting the elastic tail from the inelastic event distribution, accounting for detector acceptance and inefficiencies, and comparing the experimental data with the
spectator model. To satisfy these needs, three kinds of events were generated: 1)
simulation of quasi-elastic scattering of electrons off the neutron inside deuterium in
the plane wave spectator approximation including electromagnetic radiative effects,
2) simulation of inelastic scattering off the neutron in the same framework (with radiative effects), and 3) fully inclusive scattering d(e, e′)X off the deuteron (with radiative effects).
Events with quasi-elastic scattering of the electron off a moving neutron in the
spectator picture are produced as follows. Initially, the electron is assigned random
kinematics within the boundaries (Q2 and ν) defined in the configuration file. In the
spectator picture, the energy and momentum of the off-shell bound nucleon (EN and p⃗N) are related to the spectator nucleon momentum p⃗s as

EN = MD − √(Mp² + ps²),
p⃗N = −p⃗s.

And the target nucleon mass is

M = √((MD − √(Mp² + ps²))² − ps²).
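A numerical sketch of these relations (the mass values are assumed PDG-like numbers, not quoted in the text):

```python
import math

M_D = 1.875613   # deuteron mass in GeV (assumed value)
M_p = 0.938272   # proton mass in GeV (assumed value)

def offshell_nucleon(ps):
    """Spectator-picture kinematics from the equations above:
    E_N = M_D - sqrt(M_p^2 + ps^2) and the effective target mass
    M = sqrt(E_N^2 - ps^2), with the spectator momentum ps in GeV."""
    e_n = M_D - math.sqrt(M_p ** 2 + ps ** 2)
    return e_n, math.sqrt(e_n ** 2 - ps ** 2)
```

At ps = 0 the effective mass reduces to MD − Mp, and it decreases as the spectator momentum grows, reflecting the increasing off-shellness of the struck nucleon.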
The initial momentum of the struck nucleon is distributed according to

P(p⃗N) = |ψ(p⃗N)|²,

where ψ(p⃗N) is the Paris deuteron wavefunction [83] rescaled using the light-cone formalism [86] (see equations (60)-(62)). The events were then generated according to
the cross-section given by equation (16) (the equation is given for the proton, but it
has the same form for the neutron with the proton form factors substituted by the
corresponding neutron form factors) in the rest reference frame of the target nucleon.
The elastic radiative tail is calculated using the full prescription of Mo and Tsai [87].
The reduction of the quasi-elastic peak itself due to the internal radiation is given by

(dσ/dΩ)rad = e^δ (dσ/dΩ)Born,
where the expression for the parameter δ is given in [87]. The event generator also
simulated external radiative energy loss before scattering due to material in the beam path.
The inelastic data were generated similarly to the quasi-elastic data. The cross-section was evaluated using

d²σ/(dE′ dΩ) = (dσ/dΩ)Mott · (2MxF2(x, Q²)/(εQ²)) · (1 + εR(x, Q²))/(1 + R(x, Q²)),     (146)

where R = σL/σT, σL and σT being the longitudinal and transverse cross-sections. The polarization of the virtual photon, ε, is given by

1/ε = 1 + 2(1 + Q²/(4M²x²)) tan²(θ/2).
Equation (146) is just another form of equation (34) written this way for convenience.
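As a sketch, ε and the factor multiplying the Mott cross-section in equation (146) can be evaluated as follows (the formula 1/ε = 1 + 2(1 + Q²/(4M²x²))tan²(θ/2) is the standard virtual-photon polarization; the proton mass default and the function names are assumptions for illustration):

```python
import math

def epsilon(Q2, x, theta, M=0.938272):
    """Virtual photon polarization:
    1/eps = 1 + 2*(1 + Q2/(4*M^2*x^2))*tan^2(theta/2).
    Q2 in (GeV/c)^2, theta in radians; M defaults to the proton mass."""
    t2 = math.tan(theta / 2.0) ** 2
    return 1.0 / (1.0 + 2.0 * (1.0 + Q2 / (4.0 * M * M * x * x)) * t2)

def mott_factor(F2, R, Q2, x, theta, M=0.938272):
    """The factor multiplying (dsigma/dOmega)_Mott in equation (146):
    (2*M*x*F2/(eps*Q2)) * (1 + eps*R)/(1 + R)."""
    eps = epsilon(Q2, x, theta, M)
    return (2.0 * M * x * F2 / (eps * Q2)) * (1.0 + eps * R) / (1.0 + R)
```

In the forward limit θ → 0 the polarization ε → 1 and the bracket (1 + εR)/(1 + R) → 1, as expected for purely transverse-plus-longitudinal weighting.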
The New Muon Collaboration (NMC) fit to SLAC, BCDMS, and NMC data on the proton and deuteron structure functions was used [88]. The neutron structure function was obtained by Bosted et al. [93] by a fit to proton and deuteron data including
Fermi smearing. The parameterization of the R ratio from [89] was used. Radiative effects were simulated using the output of the SLACPOLRAD program [82].
SLACPOLRAD calculates the ratio of radiated to Born (unradiated) cross-section
for DIS without the elastic tail. These ratios were applied to scale the generated
unradiated cross-section.
The fully inclusive events were generated by adding quasi-elastic and inelastic
events from both the neutron and the proton (integrated over all spectator momenta),
plus the radiative elastic tail from D(e, e′ )D.
Detector simulation
The generated events were then run through the full experimental setup simulation
including external radiation losses. The target and RTPC parts of the setup were simulated in full detail using a GEANT4-based [90] simulation package written for
our experiment (the same setup that was used for the RTPC momentum corrections,
see section IV.2.6). The standard CLAS part of the setup was simulated using an
existing GEANT3-based package called GSIM, the standard CLAS software package describing all the detectors other than the RTPC. Particle paths through the
RTPC were simulated. The output information was written to files which served as
input for the GSIM package. To simulate inefficiencies of the CLAS detector, the
output of the GSIM served as an input to the GSIM Post Processing package (GPP), which accounted for effects such as the finite resolution of the DC and SC and broken DC wires.
After the generated events went through the simulated detectors, we obtained files
with simulated detector responses for the generated events. Finally, these files were
processed by the usual data processing program (RECSIS), the same one that was
used for processing experimental events. The outputs of RECSIS for experimental
and simulated events were directly compared and used in the analysis. Figures 72
and 73 show plots of the W and W ∗ distributions for quasi-elastic 4 and 5 GeV beam
energy simulations, respectively, as examples of simulation results.
Then quasi-elastic simulated events that passed certain cuts were filled into an array of structures identical to that for the experimental data. The same was done with inelastic
simulated events. Figures 74 and 75 show plots of the W and W ∗ distributions
for inelastic 4 and 5 GeV beam energy simulations, respectively, as examples of
simulation results.
The fully inclusive events (both data and simulation) were binned in E ′ and θ
(ignoring any spectator tracks in the RTPC). The three arrays are passed to the
plotting routine that performs necessary binning of the data with subsequent filling
FIG. 72: The W and W∗ distributions of the quasi-elastic simulation for the 4 GeV data.
FIG. 73: The W and W∗ distributions of the quasi-elastic simulation for the 5 GeV beam energy data.
FIG. 74: The W and W∗ distributions of the inelastic simulation for the 4 GeV data.
FIG. 75: The W and W∗ distributions of the inelastic simulation for the 5 GeV beam energy data.
and plotting of needed histograms and graphs (see below).
A loop over all experimental events that passed the cuts (see section IV.3) was
performed. If an event was tagged, the W ∗ , Q2 , cos(θpq ), and spectator momentum
bins corresponding to the values recorded in the structure were found. Binning in W∗ is performed twice: once with 6 bins, for making plots in other variables for events belonging to the given bin, and once with 90 bins, for plotting other variables vs these bins. In the same fashion, two possible sets of cos(θpq) bins were made: 3 bins for making plots in other variables, and 10 bins for making plots with cos θpq on the horizontal axis. In detail, we use the following bins:
• cos(θpq ) bins:
– “Small” bins: 10 equal bins between -1 and 1.
– “Big” bins: 3 bins, lower bounds being: -0.75, -0.25, 0.25, upper bounds
being -0.25, 0.25, 0.75.
• Spectator momentum bins - 4 bins, lower bounds: 0.07, 0.09, 0.11, 0.13 GeV;
upper bounds: 0.09, 0.11, 0.13, 0.15 GeV.
• Q2 bins:
– For 2 GeV beam energy: 3 bins, lower bounds: 0.2227, 0.4524, 0.7697 (GeV/c)²; upper bounds: 0.4524, 0.7697, 1.0969 (GeV/c)².
– For 4 GeV beam energy: 3 bins, lower bounds: 0.7697, 1.0969, 2.2277 (GeV/c)²; upper bounds: 1.0969, 2.2277, 4.5243 (GeV/c)².
– For 5 GeV beam energy: 2 bins, lower bounds: 1.0969, 2.2277 (GeV/c)²; upper bounds: 2.2277, 4.5243 (GeV/c)².
• W∗ bins:
– “Big” bins, for plotting other variables: 6 bins, lower bounds: 0.88, 1.00, 1.35, 1.60, 1.85, 2.20 GeV; upper bounds: 1.00, 1.35, 1.60, 1.85, 2.20, 2.68 GeV.
– “Small” bins, for the horizontal axis: 90 bins equally spaced between 0.88 and 2.68 GeV.
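The bin edges above can be collected into lookup tables; a sketch using a generic edge-search helper (names are illustrative; only the 2 GeV Q² edges are transcribed):

```python
import bisect

# Bin edges transcribed from the list above (Q2 edges: 2 GeV beam case).
COS_TPQ_BIG   = [-0.75, -0.25, 0.25, 0.75]                 # 3 "big" bins
PS_EDGES      = [0.07, 0.09, 0.11, 0.13, 0.15]             # GeV
Q2_EDGES_2GEV = [0.2227, 0.4524, 0.7697, 1.0969]           # (GeV/c)^2
WSTAR_BIG     = [0.88, 1.00, 1.35, 1.60, 1.85, 2.20, 2.68] # GeV

def find_bin(value, edges):
    """Return the 0-based bin index for 'value', or -1 if the value
    falls outside every bin defined by the edge list."""
    i = bisect.bisect_right(edges, value) - 1
    return i if 0 <= i < len(edges) - 1 else -1
```

Events whose value falls outside the outermost edges get index -1 and are simply not histogrammed, mirroring the binning loop described in the text.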
As a result of the binning procedure, three arrays are filled: tag_counts_exp (“small” bins in cos(θpq), “big” bins in W∗); tag_byreg_exp (“big” W∗ bins, “big” cos(θpq) bins); and tag_wplots_exp (“small” W∗ bins, “big” cos(θpq) bins).
After the experimental data were binned, almost identical loops were performed
over the simulated events, first from the elastic simulation, then from the inelastic
simulation. Simulated counts were then multiplied by the trigger electron detection
efficiency (see section IV.3.3).
The quasi-elastic radiative tail was subtracted from the data distribution using
the quasi-elastic simulation. For this purpose, quasi-elastic simulation and data were
cross-normalized in the vicinity of the quasi-elastic peak, and the cross-normalized quasi-elastic simulation was subtracted from the experimental data. For this, cross-normalization factors (denoted as ratios below) between the quasi-elastic simulation and the elastic region of the experimental distribution were calculated using the numbers of events in the corresponding elastic bin, the bin with W∗ between 0.88 and 1.00 GeV and with bins in the other variables being the same as for the bin for which the cross-normalization is being performed. The data from the simulation
of inelastic tagged scattering (see section IV.4.2) was then cross-normalized with
the experimental data. The cross-normalization factors were found by summing
experimental and simulated counts over a specific region in W ∗ , Q2 and cos θpq ,
where (according to the theoretical expectations) the spectator picture should be
most appropriate. The region in each of the variables was: for 2 GeV data, W ∗
between 1.7 and 1.8 GeV, Q2 between 0.4524 and 0.7697 (GeV/c)2 , and cos(θpq )
between -0.75 and -0.25; for 4 GeV data, W ∗ between 2.0 and 2.2 GeV, Q2 between
1.0969 and 2.2277 (GeV/c)2 , and cosine of θpq between -0.75 and -0.25; for 5 GeV
data, W ∗ between 1.9 and 2.1 GeV, Q2 between 1.0969 and 2.2277 (GeV/c)2 , and
cos(θpq) between -0.75 and -0.25. This region was chosen so that the spectator picture works well in it and we are beyond the resonance region. Because of our lack of understanding of the momentum dependence of the spectator proton detection efficiency in the RTPC, these factors were found and applied separately in each spectator momentum bin.
After this, several kinds of histograms are filled, most notably: histograms for each Q², W∗ (“big” bins), and ps (spectator momentum) bin with cos θpq (“small” bins) on the horizontal axis, and histograms for each Q², cos θpq (“big” bins), and ps bin with W∗ (“small” bins) on the horizontal axis.
In the first case, when plots are vs cos θpq, “small” bins in cosine and “big” bins in W∗ are used (tag_counts... arrays). Plots are made for the following quantities:
1. The simplest one: the number of experimental counts (black squares in figure 76).
2. The number of experimental counts with accidental background subtracted
(blue crosses in figure 76).
3. The number of experimental counts with both the accidental background and the elastic radiative background (from the elastic simulation, shown in red in figure 76) subtracted.
4. The number from the previous item divided by the number of inelastic simulated counts:
ratio = cleaned_prelim/inelsimcount,
where inelsimcount is the normalized number of inelastic simulated counts in the bin. These ratio plots let us eliminate detector effects (acceptance, inefficiency), thus directly providing access to the physics of the problem; the most obvious use would be comparing them with flat lines to check the validity of the spectator picture. See chapter V for a more detailed description.
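Item 4 can be sketched as a one-line formula (names are illustrative; cleaned_prelim corresponds to the raw counts with both backgrounds removed):

```python
def exp_to_sim_ratio(raw, accidental_bg, elastic_tail, inelsimcount):
    """Items 1-4 above as a formula: subtract the accidental
    background and the cross-normalized elastic radiative tail from
    the raw counts, then divide by the normalized inelastic
    simulated counts for the bin."""
    cleaned_prelim = raw - accidental_bg - elastic_tail
    return cleaned_prelim / inelsimcount
```

A flat ratio vs cos θpq would then indicate that no cos θpq dependence beyond the spectator model survives in the data.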
In the second case, when plots are vs W∗, the tag_wplots... arrays are used for the number of counts (“small” W∗ bins, “big” cosine bins), whereas the ratio is calculated using the tag_byreg... arrays (“big” W∗ bins, “big” cosine bins, since in this binning bin 0 in W∗ corresponds to the elastic peak, and the number of cosine bins is identical to the one used in calculating counts). But now W∗ is plotted on the horizontal axis, and “big” cosine bins are used instead of “big” W∗ bins.
As mentioned in section IV.4.2, the differential cross-section can be written as
d²σ/(dE′ dΩ) = (dσ/dΩ)Mott · (2MxF2(x, Q²)/(εQ²)) · (1 + εR(x, Q²))/(1 + R(x, Q²)),     (149)

where R = σL/σT, σL and σT being the longitudinal and transverse cross-sections.
As also shown in section IV.4.2, all factors in equation (149) (and all additional
FIG. 76: Raw data (black squares), raw data with subtracted accidental background (blue crosses, barely visible behind the black squares), and elastic simulation cross-normalized with experimental data (red circles) are shown for four typical ps bins, Q² between 0.45 and 0.77 (GeV/c)², and cos(θpq) between -0.75 and -0.25. The beam energy is 2.140 GeV.
factors due to radiation, smearing, acceptance, and efficiency) are modeled by our simulation. Thus, if the simulation is done properly, the aforementioned scheme leaves us with the ratio of the experimental (effective) and simulated (model) structure functions F2n^eff. The F2n^eff plots are then obtained from the ratio plots by multiplying them by the value of F2n (used in producing the simulated data) for a given (Q², W∗) value. F2n^eff is plotted as a function of W∗. Consequently, these plots are converted into F2n^eff vs x∗ plots by relating W∗ to x∗ as

x∗ = Q²/((W∗)² − MN² + Q²),

where MN is the nucleon mass.
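The W∗ to x∗ conversion can be sketched as (the neutron mass value is an assumed PDG-like number):

```python
M_N = 0.939565   # neutron mass in GeV (assumed value)

def x_star(w_star, Q2, m_n=M_N):
    """Convert W* to x* via x* = Q2/((W*)^2 - M_N^2 + Q2),
    with W* in GeV and Q2 in (GeV/c)^2."""
    return Q2 / (w_star ** 2 - m_n ** 2 + Q2)
```

At the (quasi-)elastic point W∗ = MN this gives x∗ = 1, and x∗ decreases toward 0 as W∗ grows into the inelastic region.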
The goal of this work is to analyze the dependence of the extracted effective neutron structure function F2n^eff on the kinematic conditions, namely the spectator proton momentum ps and the spectator proton scattering angle θpq. As mentioned in the analysis chapter, to achieve that, we took the ratio of the experimental d(e,e′ps) data (where the experimental F2n^eff was convoluted with acceptance, efficiency, binning, and other effects) to the simulated data (where the model F2n used in the simulation was convoluted with the same acceptance, efficiency, binning, and other effects), after subtracting the accidental background and the elastic tail. Multiplying the ratio by the model structure function leaves us with the experimental structure function; that is, we arrive at the final goal of this research. The experimental F2n^eff could be distorted (and acquire a cos θpq dependence) through FSI, or if the spectator model used does not have the correct spectator momentum distribution. Thus, studying the dependence mentioned earlier will allow us to check the role of FSI as well as the spectator model.
The systematic error can be defined as a bias in measurement which leads to the
situation where the mean of many separate measurements differs significantly from
the actual value of the measured attribute. CLAS detectors, as well as the RTPC,
have such biases. Since we are analyzing a ratio of the experimental data to the
simulated data, we look here for systematic errors that are not overall scaling factors.
• Accidental background subtraction. The background subtraction is performed by first plotting the distribution of all events for the given energy as a function of ∆z, the distance along the beamline between the reaction vertex as reported by CLAS for the trigger electron and as reported by the RTPC for the spectator proton, the idea being that particles that are too far away from each other in space are accidental coincidences in time. Then the fraction of the accidental background is estimated as the ratio of the events in the “wings” of the ∆z distribution to the events under the signal peak. For this purpose, the “wings” are taken to be between 20 and 160 mm in |∆z|, and the signal is between -15 and 15 mm (see section IV.3.2).
The “wings” could be chosen between different limits, and the effect of the choice of these limits is estimated in this systematic error. For this purpose, the number of events between 20 and 90 mm in |∆z| was counted. The background was then found using these new limits as well as with the original (20-160 mm) limits. The background is found by multiplying the number of counts in the “wings” in each of the chosen bins by the “signal”-to-“wings” ratio (see the previous paragraph). Due to the two different choices of the “wings”, there are also two ratios, found using the two sets of “wing” ∆z limits; they multiply the counts in the corresponding “wings” to give the number of background counts for each set of limits. The systematic error due to the choice of the limits was found as
error² = (count_old_limits − count_new_limits)²,

where count_old_limits denotes the number of background-subtracted events found with limits of 20-160 mm, and count_new_limits is the number of events found with limits of 20-90 mm. The error shown here applies directly to the background-subtracted data.
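This systematic can be sketched as follows (names are illustrative; the function returns the error itself, i.e. the square root of error²):

```python
def wing_limit_systematic(count_old_limits, count_new_limits):
    """Systematic error from the choice of the 'wing' delta-z limits:
    error^2 = (count_old_limits - count_new_limits)^2, i.e. the
    absolute difference between the background-subtracted counts
    obtained with the 20-160 mm and 20-90 mm wing definitions."""
    return abs(count_old_limits - count_new_limits)
```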
• E ′ −θ dependent acceptance and efficiency error. This is the uncertainty
on the estimate of the detection efficiency of the CLAS trigger electrons. Since
the CLAS electron detection efficiency was not 100%, we needed to account for
this. This was done by performing an inclusive event simulation and comparing
the simulated distribution to the experimental one (see figure 71 for plots of the inclusive experimental and simulated W distributions, the latter scaled to account for the difference in luminosity). The efficiency of the detection was found as a
function of E ′ , the energy of the scattered electron, and θ, the electron scattering angle. By performing a two-dimensional multi-linear fit of the efficiency
as a function of these two variables, and estimating point-to-point fluctuations
in the efficiency, the E ′ − θ error was found to be 8.5%. This means that the
value of the experimental to simulated data ratio for a given bin is assigned an
additional error equal to the value of the ratio multiplied by 0.085 due to the
uncertainty in the trigger electron detection efficiency.
• F2n model dependence. The simulations used in this research utilized an
input model F2n . The systematic error due to this model dependence was
estimated to be 5%. It is not shown on plots 78 - 184 since it affects all cos θ,
ps bins equally and since we cross-normalize.
• Experiment - simulation cross-normalization. Experimental and simulated data were cross-normalized by taking the ratio in a chosen Q² - cos(θpq) - W∗ bin for each energy (see the analysis description for more). The cross-normalization dependence on the choice of this bin in each of these variables
was estimated to be 30% for 2 GeV data, 15% for 4 GeV data, and 10% for 5
GeV data. This error was estimated by checking the spread of the values for
the bins other than the one for which the cross-normalization was performed.
This uncertainty is not shown on plots 78 - 184, since it affects all the bins in
a given distribution uniformly.
• Monte-Carlo simulations. The ratio of the experimental to the simulated
data was found for each bin as
ratio = (exp − bg − elas tail) / inelsimcount ,
where exp is the experimental data count for the bin, bg is the accidental
background, elas tail is the normalized elastic tail found using simulated data,
inelsimcount is the number of counts in this bin from the simulated data
multiplied by the trigger electron detection efficiency and cross-normalized with
the experimental data count. The error on this quantity due to the uncertainty
in Monte-Carlo (MC) counts can be found by chain differentiation as
∆ratio_MCcount = (ratio / MCcount) · ∆MCcount ,
where ∆ratio_MCcount is the uncertainty on the ratio due to the MC count uncertainty, ∆MCcount is the uncertainty on the MC counts, and MCcount is the number of MC counts in the bin. The uncertainty
thus found consists of two parts:
1. Monte-Carlo statistics. The error due to the simulation statistics was
found for each bin for the experiment to simulation ratio according to
error = √( elsimcount² / pure elas count + ratio² · inelsimcount² / pure inelas count ) / inelsimcount ,
where elsimcount is the number of events in this bin from the elastic
simulation (cross-normalized with experiment), inelsimcount is the number of events in this bin from the inelastic simulation (cross-normalized
with experiment), ratio is the aforementioned experiment-to-simulation ratio for
the bin, pure elas count is the number of events from the elastic simulation for this bin, and pure inelas count is the number of events from the
inelastic simulation for this bin.
2. Monte-Carlo systematics. A systematic error due to the simulation
was found as
error = 0.1 · elsimcount / inelsimcount ,
where elsimcount is the number of events in this bin from the elastic
simulation (cross-normalized with experiment) and inelsimcount is the number of events in this bin from the inelastic simulation (cross-normalized with experiment). The factor of 0.1 is the potential cross-normalization error between the quasi-elastic simulation and the experimental data, due to a somewhat
different shape in W ∗ of the corresponding quasi-elastic peaks (see the corresponding figure).
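As an illustration, the per-bin ratio and its two Monte Carlo uncertainties described above might be computed as follows. This is a minimal sketch with hypothetical function and variable names; the error formulas follow the propagation described in the text and are assumptions where the text is ambiguous.

```python
import math


def bin_ratio_with_mc_errors(exp_count, bg_count, elas_tail, inelsimcount,
                             elsimcount, pure_elas_count, pure_inelas_count):
    """Experiment-to-simulation ratio for one bin and its MC uncertainties.

    Variable names follow the text; the exact error formulas are
    reconstructions and should be checked against the analysis note.
    """
    # Background- and elastic-tail-subtracted data over the inelastic simulation
    ratio = (exp_count - bg_count - elas_tail) / inelsimcount

    # 1. Monte-Carlo statistics: scaled Poisson errors of the elastic and
    #    inelastic simulation counts, propagated through the ratio.
    d_elas = elsimcount / math.sqrt(pure_elas_count)      # error of elastic tail
    d_inel = inelsimcount / math.sqrt(pure_inelas_count)  # error of inelastic sim
    stat_err = math.sqrt(d_elas**2 + (ratio * d_inel)**2) / inelsimcount

    # 2. Monte-Carlo systematics: 10% potential cross-normalization error
    #    of the quasi-elastic simulation.
    syst_err = 0.1 * elsimcount / inelsimcount

    return ratio, stat_err, syst_err
```

With hypothetical counts, e.g. `bin_ratio_with_mc_errors(1000, 50, 150, 800, 150, 2500, 6400)`, the function returns the ratio together with its statistical and systematic MC errors for that bin.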
The systematic errors due to the E ′ − θ efficiency, background subtraction, and
Monte Carlo simulation (both parts) were added in quadrature, and the square
root of this sum is shown on the plots as a point-to-point systematic error for the
ratios of the experimental to simulated data. To convert these values to systematic
errors of the F2n structure function, they are multiplied by the value of model F2n in
the bin for which the error is calculated.
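A sketch of this quadrature combination (hypothetical function and argument names; the 8.5% E ′ − θ fraction and the multiplication by the model F2n follow the text):

```python
import math


def point_to_point_syst(ratio, bg_err, mc_stat_err, mc_syst_err, f2n_model,
                        eprime_theta_frac=0.085):
    """Combine per-bin systematic errors in quadrature.

    Returns the systematic error of the experiment-to-simulation ratio and
    its conversion to an error on the effective structure function F2n.
    """
    eprime_theta_err = eprime_theta_frac * ratio  # 8.5% E'-theta efficiency term
    ratio_err = math.sqrt(eprime_theta_err**2 + bg_err**2
                          + mc_stat_err**2 + mc_syst_err**2)
    # The ratio error maps to an F2n error via the model value in this bin.
    f2n_err = ratio_err * f2n_model
    return ratio_err, f2n_err
```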
As mentioned before, the goal is to find the effective neutron structure function F2n^eff
as a function of W ∗ and θpq .(a) This will allow us to:
1. Check the validity of different FSI theories and/or their range of applicability,
since different theories predict different dependence of FSI on the outgoing
proton angle (see section II.8.2 for details).
2. Check the general validity and/or range of validity of the PWIA spectator
picture in both spectator angle and momentum, since the dependence of the effective
structure function F2n^eff on these quantities is different in different models (see
section II.8.2 for details).
3. Eventually extrapolate the effective structure function values to the on-shell
nucleon pole, thus finding the free neutron structure function F2n (not covered
in this work).
(a) Only the ratio of experimental to simulated data was obtained for the plots vs cos(θpq ). This
ratio (its deviation from 1) describes the deviation of the experimental data from the simulation,
and thus (since the simulation is based on the spectator model) from the spectator model. This
lets us perform the analysis mentioned in the first two items.
The full data set in all of our bins is shown, for each beam energy in sequence, in
figures 78 - 184. In the following, we discuss the dependence of ratios and extracted
effective structure functions on W ∗ and cos θpq .
W ∗ dependence
The aforementioned ratio dependence on W ∗ for different Q2 regions and four spectator momentum regions, split into backward, forward and intermediate regions(b) of
the spectator proton scattering, lets us analyze:
1. How well the PWIA spectator model and the used model for the free F2n describe the scattering process for all final state invariant masses and all spectator momenta.
2. Whether at different values of the proton recoil momentum and angle this
agreement is better or worse.
3. Dependence of these results on Q2 . There are structure function Q2 -dependent
effects (F2n is different at different Q2 ), but there are also reaction mechanism
Q2 -dependent effects, such as resolving different distance scales at different Q2 .
We must investigate the possibility of decoupling these two competing sources
of variation in results in the used model.
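For bookkeeping, the backward/intermediate/forward nomenclature for the spectator angle (bin edges as quoted from section IV.5) can be expressed as a small helper; this is illustrative only, and the function name is hypothetical:

```python
def theta_pq_region(cos_theta_pq):
    """Classify cos(theta_pq) into the angular regions of section IV.5."""
    if -0.75 <= cos_theta_pq < -0.25:
        return "backward"
    if -0.25 <= cos_theta_pq < 0.25:
        return "intermediate"
    if 0.25 <= cos_theta_pq <= 0.75:
        return "forward"
    return "outside analysis acceptance"
```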
2 GeV energy
For the lowest Q2 bin, Q2 between 0.22 and 0.45 (GeV/c)2 , (figures 78 - 80) all
the ratio plots exhibit a resonance-like “bump” in the vicinity of W ∗ = 1.5 GeV.
(b) The “nomenclature” I use here for the bins introduced in IV.5 is: cos θpq = −0.75 . . . −0.25
- backward region; cos θpq = −0.25 . . . 0.25 - intermediate region; cos θpq = 0.25 . . . 0.75 - forward region.
This bump is an indication that resonance contribution may be underestimated in
our model for F2n . Other than that, the ratios are consistent with 1, except for
the forward region, especially as the spectator momentum gets high. This trend is
consistent with the spectator model, which should work worse (if at all) for higher
momenta and more forward angles. There is a rise at low W ∗ , which is a remnant of
the elastic tail, which may not be completely subtracted (this could also be due to
an incompletely simulated resolution effect, where the simulated data fall off more
sharply as W ∗ → 1.08, than the real ones). The structure function plots (figures 87
- 89 and 96 - 98), which are direct descendants of the ratio plots, consequently exhibit
reasonable agreement between the model and data points (with a slight disagreement
at W ∗ ∼ 1.5), except for the forward region where the disagreement gets worse with
increasing ps (this is expected since the spectator model should work better at lower
ps [51]) and W ∗ (this is expected from the target fragmentation and FSI models of
[53] and [55]).
Plots for the next Q2 bin, Q2 between 0.45 and 0.77 (GeV/c)2 (see figures 81 - 83 for ratios, 90 - 92 for plots of F2n vs W ∗ , and 99 - 101 for plots of F2n vs x∗ ),
demonstrate all the attributes described for the lower bin plots. Namely, there is
a resonance-like structure between W ∗ of 1.4 and 1.6 GeV; the agreement with the
spectator model gets worse as we go to the forward region, especially for higher ps
bins. There is a rise at low W ∗ , which is a remnant of elastic tail subtraction.
In plots for the highest Q2 bin at this energy, Q2 between 0.77 and 1.10 (GeV/c)2
(see figures 84 - 86 for ratios, 93 - 95 for plots of F2n vs W ∗ , and 102 - 104 for plots of
F2n vs x∗ ), the statistics are much worse and it is difficult to draw conclusions, except
to note the reasonable agreement of the distributions with 1 and the deviation from
1 in the forward region, especially in higher ps bins. Overall, there seems to be
little dependence of the ratio F2n^eff on Q2 .
4 GeV energy
For the lowest Q2 bin, Q2 between 0.77 and 1.10 (GeV/c)2 , (see figures 115 - 117
for ratios, 124 - 126 for plots of F2n vs W ∗ , and 133 - 135 for plots of F2n vs x∗ ) all
the ratio plots are consistent with one (except for some structure between W ∗ of 1.2
and 1.4 GeV that can be again attributed for resonance effects not accounted for in
model F2n ), and, as a consequence, the effective structure function points lie on top
of the model structure function.
This also holds for the next Q2 bin, Q2 between 1.10 and 2.23 (GeV/c)2 (see
figures 118 - 120 for ratios, 127 - 129 for plots of F2n vs W ∗ , and 136 - 138 for plots
of F2n vs x∗ ).
For the highest Q2 bin, Q2 between 2.23 and 4.52 (GeV/c)2 (see figures 121 - 123
for ratios, 130 - 132 for plots of F2n vs W ∗ , and 139 - 141 for plots of F2n vs x∗ ), the
statistics are worse than in the lower Q2 bins, making it harder to draw accurate conclusions, but, within errors, the distributions are not inconsistent with the descriptions
given for the lower Q2 bins.
The lowest Q2 bin agrees reasonably well with the same Q2 bin at 2 GeV, but is
much smoother due to better statistics. It shows a much smaller increase at forward
θpq and higher ps .
5 GeV energy
For the lowest Q2 bin, Q2 between 1.10 and 2.23 (GeV/c)2 (see figures 157 - 159 for
ratios, 163 - 165 for plots of F2n vs W ∗ , and 169 - 171 for plots of F2n vs x∗ ), the ratios
are consistent with 1 for the backwards and forward regions, but they are slightly
below 1 for two higher ps bins in the intermediate region, which could be attributed
to FSI that are supposed to be large in the intermediate region (see section II.8.2).
Resonance-like bumps can be seen for both W ∗ between 1.2 and 1.4 GeV and W ∗
between 1.4 and 1.6 GeV (cf lower beam energies).
For the next Q2 bin, Q2 between 2.23 and 4.52 (GeV/c)2 (see figures 160 - 162
for ratios, 166 - 168 for plots of F2n vs W ∗ , and 172 - 174 for plots of F2n vs x∗ ), the
statistics get worse, but the distributions for the ratio plots are still consistent with
1, and the F2n^eff distributions are consistent with the model F2n .
Some factors that added uncertainty to the analysis of the described results are:
• Since the ps resolution is not very good, there might be an overlap of different ps bins.
• Insufficiently understood momentum dependent efficiency of the RTPC made
it necessary to cross-normalize experiment and simulation separately for each
ps bin. This could hide systematic offsets of the ratios from 1 and, consequently,
of F2n^eff from the model F2n .
The results exhibit a nearly perfect agreement of F2n^eff (W ∗ ) with the model for all
but the lowest Q2 and all W ∗ for the lowest ps bin and backward θpq .
θpq dependence
The dependence of the data-to-simulation ratio on the cosine of the angle θpq for
different Q2 regions and four spectator momentum regions, split into six W ∗ bins
(see section IV.5), roughly following resonance regions, lets us analyze:
1. The applicability of the PWIA description as a function of cos(θpq )
2. How the agreement with PWIA expectations (flat angular dependence) depends
on final state W ∗ , Q2 and spectator momentum.
3. Dependence of the results on the Q2 value, showing the presence or absence of
such dependence.
2 GeV energy
For the lowest Q2 bin, Q2 between 0.22 and 0.45 (GeV/c)2 (figures 105 - 108), the
curve is close to a flat line for the two lower ps bins and the lowest W ∗ bin. With the
increase of W ∗ and ps , the distribution starts deviating from the flat line, exhibiting
two competing effects: a dip at around 90◦ and a rise towards the forward direction
where one expects a contribution from the current fragmentation region.
For the next Q2 bin, Q2 between 0.45 and 0.77 (GeV/c)2 (figures 109 - 111), the
data start developing the aforementioned structure even in the lowest ps bins and
the lowest W ∗ bin, with deviations from 1 being much more pronounced with the
increase of ps in each W ∗ bin.
For the next Q2 bin, Q2 between 0.77 and 1.10 (GeV/c)2 (figures 112 - 114), the
rise towards forward scattering angles dominates over the dip at the perpendicular
kinematics, which is barely noticeable on a minority of the plots.
The described results are in agreement with the target fragmentation and FSI
models described in chapter II. The increase of the distribution in the forward
direction is described well by the target fragmentation model of [53] (cf figure 28),
and the dip at the intermediate angles is described well by the FSI model of [55] (cf
figure 34). Additionally, it is worth noting that the dip in the experimental ratio
increases with the increase of the spectator momentum in accordance with the FSI
model (cf figure 34).
4 GeV energy
For the lowest Q2 bin, Q2 between 0.77 and 1.10 (GeV/c)2 (figures 142 - 146), each
W ∗ bin exhibits the presence of both the dip at intermediate angles and the forward
rise. The features are more pronounced in the two higher ps bins than in the two
lower ones in each W ∗ bin, except in the lowest W ∗ bin.
For the next Q2 bin, Q2 between 1.10 and 2.23 (GeV/c)2 (figures 147 - 151), all
the remarks for the previous Q2 bin hold, including the last one, about the features
being more pronounced in the lower ps bins of the lowest W ∗ bin than those of the
other W ∗ bins.
For the next Q2 bin, Q2 between 2.23 and 4.52 (GeV/c)2 (figures 152 - 156),
statistical fluctuations are too large to draw any precise conclusions, but the general
trend is consistent with that of the lower Q2 bins at this energy.
5 GeV energy
For the lowest Q2 bin, Q2 between 1.10 and 2.23 (GeV/c)2 (figures 175 - 179), and the
two lower ps bins, the ratio dependence on cos(θpq ) is close to a flat line at 1 for each
W ∗ bin, except for the lowest one, whereas higher ps bins exhibit the aforementioned
effects of the dip at the perpendicular kinematics and the rise at forward angles. The
two lower ps bins of the lowest W ∗ bin have these two features as well.
For the next Q2 bin, Q2 between 2.23 and 4.52 (GeV/c)2 (figures 180 - 184), we
have the same features. They are somewhat less pronounced than for the lower Q2
bin, presumably due to statistical fluctuations.
Overall, with the possible exception of the lowest W ∗ bin, the cos(θpq ) dependence is very close
to flat in the backward angular region for low ps (the region in which the spectator
model should work well), for most Q2 − W ∗ bins. This confirms that this kinematics
is described well by the spectator picture and therefore well-suited to extract (nearly)
free neutron structure functions.
The results shown tend to agree with the target fragmentation model of [53] (see
figure 28), and the final state interaction model of [55] (cf figure 34): our data show
an enhancement over PWIA in the target fragmentation region (in accordance with
[53]), especially for 2 GeV, and a dip in the vicinity of θpq = 90◦ (in accordance with [55]).
The PWIA spectator model works well for the lowest spectator momentum bin
(ps =70. . .90 MeV/c), as expected from the models of [56] (see figure 29) and [24] (see
figure 30), especially in the backward θpq region.
The resonance-like structure present in the ratio of the experimental data to
the simulated data shows that our model for F2n may underestimate the resonant
contribution at some values of W ∗ and Q2 . On the other hand, the agreement between
data and model for the 2 highest Q2 bins and 5 GeV beam energy, over the whole
range in W ∗ /x∗ , is quite good in the region where the spectator picture should work
(ps between 0.07 and 0.09 GeV and cos(θpq ) between -0.75 and -0.25) (see figure 77).
This confirms that in the DIS region, the F2n model provides a good description of
a (nearly) free neutron up to x∗ ≈ 0.6, within our systematic errors of 10 - 15%.
Superimposing structure functions for different Q2 for the 5.254 GeV beam energy
data (see figure 77) indicates that, while the data agree with the spectator picture,
they might not scale with Q2 as nicely as one might expect.
FIG. 77: Model (lines) and effective (markers) F2n are shown as functions of x∗ for
two Q2 bins: from 1.10 to 2.23 (GeV/c)2 (red) and from 2.23 to 4.52 (GeV/c)2 (blue).
Results are shown for backward angles (cos(θpq ) between -0.75 and -0.25) and low
spectator momenta (ps between 70 and 90 MeV/c), for which the spectator model
should be a good description. The beam energy is 5.254 GeV.
Some uncorrected reconstruction and efficiency effects for CLAS and the RTPC
limited our resolution in W ∗ /x∗ and washed out some of the details. Larger
statistics runs at higher beam energies, with a better understanding of the
detectors, should improve the data and extend them to higher Bjorken x [96]. A
follow-up experiment after the energy upgrade of CEBAF to 12 GeV has been approved for this purpose.
FIG. 78: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , cos θpq from -0.75 to -0.25. The
beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 79: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , cos θpq from -0.25 to 0.25. The
beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 80: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , cos θpq from 0.25 to 0.75. The
beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 81: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , cos θpq from -0.75 to -0.25. The
beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 82: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , cos θpq from -0.25 to 0.25. The
beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 83: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , cos θpq from 0.25 to 0.75. The
beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 84: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq from -0.75 to -0.25. The
beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 85: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq from -0.25 to 0.25. The
beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 86: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq from 0.25 to 0.75. The
beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 87: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 88: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 89: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 90: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 91: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 92: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 93: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 94: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 95: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 96: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 97: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 98: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 99: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 100: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 101: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 102: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 103: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 104: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 2.140 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 105: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , W ∗ from 1.00 to 1.35 GeV.
The beam energy is 2.140 GeV. Systematic errors are shown as a blue band.
FIG. 106: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , W ∗ from 1.35 to 1.60 GeV.
The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 107: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , W ∗ from 1.60 to 1.85 GeV.
The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 108: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.22 to 0.45 (GeV/c)2 , W ∗ from 1.85 to 2.20 GeV.
The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 109: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , W ∗ from 1.00 to 1.35 GeV.
The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 110: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , W ∗ from 1.35 to 1.60 GeV.
The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 111: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.45 to 0.77 (GeV/c)2 , W ∗ from 1.60 to 1.85 GeV.
The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 112: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , W ∗ from 1.00 to 1.35 GeV.
The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 113: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , W ∗ from 1.35 to 1.60 GeV.
The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 114: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , W ∗ from 1.60 to 1.85 GeV.
The beam energy is 2.140 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 115: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq from -0.75 to -0.25. The
beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 116: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq from -0.25 to 0.25. The
beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 117: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq from 0.25 to 0.75. The
beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 118: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq from -0.75 to -0.25. The
beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 119: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq from -0.25 to 0.25. The
beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 120: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq from 0.25 to 0.75. The
beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 121: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq from -0.75 to -0.25. The
beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 122: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq from -0.25 to 0.25. The
beam energy is 4.217 GeV. Systematic errors are shown as a blue band.
FIG. 123: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq from 0.25 to 0.75. The
beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 124: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 125: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 126: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 127: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 128: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 129: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 130: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 131: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 4.217 GeV. Systematic errors are shown as
a blue band.
FIG. 132: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 133: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 134: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 135: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 136: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 137: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 138: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 139: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 140: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 4.217 GeV. Systematic errors are shown as
a blue band.
FIG. 141: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 4.217 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 142: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , W ∗ from 1.00 to 1.35 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 143: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , W ∗ from 1.35 to 1.60 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 144: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , W ∗ from 1.60 to 1.85 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 145: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , W ∗ from 1.85 to 2.20 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 146: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 0.77 to 1.10 (GeV/c)2 , W ∗ from 2.20 to 2.68 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 147: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 1.00 to 1.35 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 148: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 1.35 to 1.60 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 149: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 1.60 to 1.85 GeV. The
beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 150: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 1.85 to 2.20 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 151: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 2.20 to 2.68 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 152: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 1.00 to 1.35 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 153: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 1.35 to 1.60 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 154: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 1.60 to 1.85 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 155: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 1.85 to 2.20 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 156: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 2.20 to 2.68 GeV.
The beam energy is 4.217 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 157: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq from -0.75 to -0.25. The
beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 158: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq from -0.25 to 0.25. The
beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 159: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq from 0.25 to 0.75. The
beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 160: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq from -0.75 to -0.25. The
beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 161: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq from -0.25 to 0.25. The
beam energy is 5.254 GeV. Systematic errors are shown as a blue band.
FIG. 162: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of W ∗ . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq from 0.25 to 0.75. The
beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 163: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 164: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 165: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 166: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 167: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 5.254 GeV. Systematic errors are shown as
a blue band.
FIG. 168: Effective F2n (green markers) structure function is shown as a function of
W ∗ . Black line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 169: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 170: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 171: Effective F2n (green markers) structure function is shown as a function of
x∗ . Red line is the model F2n . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 172: Effective F2n (green markers) structure function is shown as a function of
x∗ . Black line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from -0.75 to -0.25. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 173: Effective F2n (green markers) structure function is shown as a function of
x∗ . Black line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from -0.25 to 0.25. The beam energy is 5.254 GeV. Systematic errors are shown as
a blue band.
FIG. 174: Effective F2n (green markers) structure function is shown as a function of
x∗ . Black line is the model F2n . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , cos θpq
from 0.25 to 0.75. The beam energy is 5.254 GeV. Error bars are statistical only.
Systematic errors are shown as a blue band.
FIG. 175: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 1.00 to 1.35 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 176: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 1.35 to 1.60 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 177: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 1.60 to 1.85 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 178: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 1.85 to 2.20 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 179: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 1.10 to 2.23 (GeV/c)2 , W ∗ from 2.20 to 2.68 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 180: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 1.00 to 1.35 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 181: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 1.35 to 1.60 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 182: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 1.60 to 1.85 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 183: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 1.85 to 2.20 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
FIG. 184: Ratio of experimental data with subtracted accidental background and
elastic tail to the full simulation in the PWIA spectator picture is shown as a function
of cos θpq . Data are for Q2 from 2.23 to 4.52 (GeV/c)2 , W ∗ from 2.20 to 2.68 GeV.
The beam energy is 5.254 GeV. Error bars are statistical only. Systematic errors are
shown as a blue band.
As was mentioned in the main text, nucleons consist of so-called partons, which
come in two kinds: quarks and gluons. In this appendix, I give a brief description
of partons and elaborate on their connection with the DIS structure functions
(section II.5).
Quarks are elementary, structureless (at least, they are currently considered such)
fermions which, along with leptons, are the basic constituents of matter. Six kinds
of quarks are known; a summary of their properties is given in tables 9 and 10.
TABLE 9: Summary of quark properties (light quarks), from reference [6]

                  Down (d)        Up (u)          Strange (s)
  Q (units of e)  -1/3            +2/3            -1/3
  Mass            4.0-8.0 MeV     1.5-4.0 MeV     80-130 MeV
TABLE 10: Summary of quark properties (heavy quarks), from reference [6]

                  Charm (c)        Bottom (b)      Top (t)
  Q (units of e)  +2/3             -1/3            +2/3
  Mass            1.15-1.35 GeV    4.1-4.4 GeV     ≈178 GeV
The six quarks are arranged in three doublets (called families or generations):
(u, d), (c, s), and (t, b).
The c, b, t quarks are so heavy that they do not play a big role in the nucleon.
Nor is it easy to obtain a quark-antiquark pair of these flavors by means of vacuum
fluctuations. Thus, when discussing ordinary hadrons, I will limit myself to the u,
d, s quarks.
Quarks that determine the quantum numbers of baryons are called valence quarks.
Besides these three quarks, there is a practically infinite number of quark-antiquark
pairs being continually created and annihilated: these are the so-called sea quarks.
The sea quarks play an important role in the region x ≤ 0.4, but their contribution
becomes negligible at high x. Sometimes an effective degree of freedom called the
constituent quark is defined, in which a valence quark, "dressed" in a number of
quark-antiquark pairs and gluons, is the carrier of the hadron quantum numbers.
Let us look at the DIS structure functions. I will concentrate on F2, since we
can easily get F1 from it (see (35)). Expanding the second of equations (35), and
considering only the three light quarks (u, d, s), we can write for the proton and
neutron

    F2p(x) = x · [ (1/9)(d_v^p + d_s + d̄_s) + (4/9)(u_v^p + u_s + ū_s) + (1/9)(s_s + s̄_s) ],
    F2n(x) = x · [ (1/9)(d_v^n + d_s + d̄_s) + (4/9)(u_v^n + u_s + ū_s) + (1/9)(s_s + s̄_s) ],

where d_v^p, d_v^n and u_v^p, u_v^n are the distributions of down and up valence
quarks in protons and neutrons; d_s, u_s are the distributions of sea quarks (the
proton/neutron subscripts were dropped under the assumption that the sea-quark
distributions are identical in protons and neutrons); s_s is the distribution of
strange quarks (only sea strange quarks are present). "Barred" quantities denote
the distributions of the corresponding antiquarks. Since the proton and neutron
can be transformed into each other using isospin symmetry, this symmetry also
relates their quark distributions:
    u_v^p(x) = d_v^n(x) ≡ u_v,
    d_v^p(x) = u_v^n(x) ≡ d_v,
    u_s^p(x) ≈ d_s^p(x) = d_s^n(x) ≈ u_s^n(x).
Thus, if we operate in the region where sea quarks can be neglected,

    F2n/F2p ≈ (1 + 4 d_v/u_v) / (4 + d_v/u_v),

and measuring DIS structure functions gives us direct access to the quark momentum
distributions.
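As a small numerical illustration of this valence-only limit (the values of d_v/u_v below are sample inputs chosen for illustration, not measured quark distributions):

```python
# Valence-only ratio F2n/F2p = (1 + 4r)/(4 + r), with r = d_v/u_v.
# The r values below are illustrative inputs, not measured distributions.
def f2n_over_f2p(r):
    return (1.0 + 4.0 * r) / (4.0 + r)

for r in (0.0, 0.5, 1.0):
    print(f"d_v/u_v = {r}: F2n/F2p = {f2n_over_f2p(r):.3f}")

# r -> 0 gives the lower bound 1/4; r = 1 (equal valence distributions)
# gives F2n/F2p = 1; large r approaches the upper bound of 4.
```

The bounds 1/4 and 4 follow directly from the squared quark charges appearing in the numerator and denominator.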
Careful experiments measuring quark momentum distributions found that only
half of the baryon momentum can be assigned to quarks. This is how the idea of
gluons emerged. Gluons are the carriers of the strong interaction. Being quanta
of the strong field, they interact neither weakly nor electromagnetically. Quarks
and gluons share a unique quantum number called color. Gluons couple to color
charge, thus propagating strong interactions. Every quark and gluon has color
charge, but particles can only exist in color-neutral combinations. That is why
quarks (or gluons, for that matter) have never been seen in a free state,
necessitating the introduction of color confinement, the theoretical basis of which
is still being investigated.
Figure 185 shows an electron scattering off a proton. The initial momenta of the
electron and proton are k and p; the final momenta are k′ and p′, respectively. The
4-momentum transferred from the electron to the proton is denoted q:

    q = k − k′.                                                          (159)
Some common kinematic variables associated with this reaction are:
• The energy transfer of the reaction:

    ν = E − E′,

where E is the energy of the incident electron and E′ is the energy of the scattered
electron.
• The momentum transfer squared:

    Q² = −q² = 4EE′ sin²(θ/2),

where θ is the scattering angle of the electron and q is the transferred 4-momentum
(see equation (159)).
• Missing mass of the inclusive reaction (sometimes called the total mass or final
state mass):

    W² = (p + q)² = p² + 2p·q + q² = M² + 2Mν − Q²,                      (162)

where M is the target (proton) mass, and the other variables are discussed above.
• Bjorken x:

    x = Q² / (2p·q) = Q² / (2Mν)   (the second equality holds in the lab frame),   (163)

where all the variables have been defined in the earlier equations. The common
interpretation of this variable is the fraction of the nucleon momentum carried
by the struck quark. While this is an accurate interpretation only in the Bjorken
limit, where Q², ν → ∞, we usually consider scattering at high enough momenta
that we can work in an infinite momentum frame, where it is a good enough
approximation.
FIG. 185: Scattering of an electron with initial 4-momentum k off a proton with
initial momentum p. The final momenta of the electron and proton are k′ and p′,
respectively. The 4-momentum transferred from the electron to the proton is q (see
text for more).
• Nachtmann scaling variable:

    ξ = 2x / (1 + √(1 + 4m²x²/Q²)),

where x is Bjorken x (see equation (163)) and m is the target mass. This variable
is often used to study Bloom-Gilman duality.
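For concreteness, these variables can be computed together from the measured electron kinematics. The beam energy, scattered-electron energy, and scattering angle below are illustrative values chosen for the example, not data points from this work:

```python
import math

M = 0.938272                    # proton mass, GeV
E, E_prime = 4.217, 3.0         # beam and scattered-electron energies (illustrative)
theta = math.radians(20.0)      # electron scattering angle (illustrative)

nu = E - E_prime                                  # energy transfer
Q2 = 4.0 * E * E_prime * math.sin(theta / 2)**2   # momentum transfer squared
W2 = M**2 + 2.0 * M * nu - Q2                     # missing mass squared
x = Q2 / (2.0 * M * nu)                           # Bjorken x (lab frame)
xi = 2.0 * x / (1.0 + math.sqrt(1.0 + 4.0 * M**2 * x**2 / Q2))  # Nachtmann variable

print(f"nu = {nu:.3f} GeV, Q2 = {Q2:.3f} GeV^2, W = {math.sqrt(W2):.3f} GeV, "
      f"x = {x:.3f}, xi = {xi:.3f}")
```

Note that ξ < x always, and ξ → x as Q² grows, which is the sense in which ξ absorbs the target-mass correction.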
Equation (162) assumes the target to be stationary (in that case the 4-momentum
is p = (M, 0, 0, 0)), which is not exactly true in real life. Nevertheless, this formula
is usually used, since its inaccuracy is not too large and figuring out the motion of
the target is a very complicated task. If we do know how the target is moving (the
BONuS experiment's forte), we can calculate the missing mass of the inclusive
reaction more precisely. Let us denote it with a '*' to distinguish it from the W of
equation (162), and calculate it for the case of BONuS scattering on the neutron in
deuterium, with the proton being a spectator:

    (W∗)² = (p_n + q)² = p_n·p_n + 2((M_D − E_S)ν + p⃗_s · q⃗) − Q².
Here p_n denotes the neutron 4-momentum, M_D is the deuteron mass, and E_S and
p⃗_s are the spectator proton energy and momentum (in the case of spectator
scattering, p_n = (M_D − E_S, −p⃗_S)). The other variables are discussed above.
The same inaccuracy (the assumption of a stationary target) appears in the second
half of equation (163). To account for the target motion, we need to rewrite it as

    x̃ ≈ Q² / (2Mν(2 − α)),

where α is the spectator light-cone momentum fraction.
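A sketch of these spectator-corrected variables follows. Every input number here is an assumed illustration (apart from the deuteron and nucleon masses), and the light-cone fraction is computed with one common convention, α = 2(E_s − p⃗_s·q̂)/M_D, which is my assumption rather than a definition taken from the text:

```python
import math

M_D, M = 1.875613, 0.938272     # deuteron and nucleon masses, GeV
nu, Q2 = 1.5, 2.0               # assumed energy transfer (GeV) and Q^2 (GeV^2)
q_mag = math.sqrt(Q2 + nu**2)   # |q| from Q^2 = |q|^2 - nu^2
p_s, cos_tpq = 0.10, -0.5       # assumed spectator momentum (GeV) and cos(theta_pq)
E_s = math.sqrt(M**2 + p_s**2)  # on-shell spectator-proton energy

# (W*)^2 = p_n.p_n + 2((M_D - E_s) nu + p_s.q) - Q^2, with p_n = (M_D - E_s, -p_s),
# so p_n.p_n = (M_D - E_s)^2 - |p_s|^2
pn2 = (M_D - E_s)**2 - p_s**2
W_star2 = pn2 + 2.0 * ((M_D - E_s) * nu + p_s * q_mag * cos_tpq) - Q2

alpha = 2.0 * (E_s - p_s * cos_tpq) / M_D    # assumed light-cone convention
x_tilde = Q2 / (2.0 * M * nu * (2.0 - alpha))

print(f"W* = {math.sqrt(W_star2):.3f} GeV, alpha = {alpha:.3f}, x~ = {x_tilde:.3f}")
```

For a slow spectator, α ≈ 1, so 2 − α ≈ 1 and x̃ reduces to the ordinary Bjorken x, as it should.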
Given a point z0 where f(z) is either analytic or has an isolated singularity, the
residue of f(z) is the coefficient of (z − z0)^(−1) in the Laurent series expansion of
f(z) about z0, or

    Res(z0) = b1 = (1/(2πi)) ∮_C f(z) dz.
If f(z) is either analytic or has a removable singularity at z0, then b1 = 0 there. If
z0 is a pole of order m, then

    b1 = (1/(m−1)!) · d^(m−1)/dz^(m−1) [ (z − z0)^m f(z) ] |_(z=z0).
For every simple closed contour C enclosing at most a finite number of singularities
z1, z2, . . . , zn of an analytic function continuous on C,

    ∮_C f(z) dz = 2πi Σ_(k=1..n) Res(zk),

where Res(zk) is the residue of f(z) at zk.
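As a quick numerical sanity check of the residue theorem (the function and contour are examples chosen here, not taken from the text): f(z) = 1/(z² + 1) has a simple pole at z = i with residue 1/(2i), so a contour enclosing only that pole should integrate to 2πi · 1/(2i) = π.

```python
import cmath
import math

def contour_integral(f, center, radius, n=4000):
    """Trapezoid-rule integral of f along the circle |z - center| = radius."""
    total = 0j
    for k in range(n):
        t = 2.0 * math.pi * k / n
        z = center + radius * cmath.exp(1j * t)               # point on the contour
        dz = 1j * radius * cmath.exp(1j * t) * (2.0 * math.pi / n)
        total += f(z) * dz
    return total

f = lambda z: 1.0 / (z * z + 1.0)
integral = contour_integral(f, center=1j, radius=0.5)   # encloses only z = i
predicted = 2j * math.pi * (1.0 / 2j)                   # 2*pi*i times the residue

print(integral, predicted)   # both very close to pi
```

The trapezoid rule converges rapidly for smooth periodic integrands, so even this simple discretization reproduces π to high precision.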
[1] J. J. Aubert et al., Phys. Lett. B 123, 275 (1983).
[2] A. Benvenuti et al., Phys. Lett. B 189, 483 (1987).
[3] J. Gomez et al., Phys. Rev. D 49, 4348 (1994).
[4] J. Chadwick, Proc. Roy. Soc. A136, 692 (1932).
[5] N. Isgur, Phys. Rev. Let. 83, 2 (1999).
[6] W.-M. Yao et al. (Particle Data Group), J. Phys. G 33, 1 (2006).
[7] C. F. Perdrisat, V. Punjabi, and M. Vanderhaeghen, Prog. Part. Nucl. Phys. 59,
694 (2007).
[8] J. Arrington, C. D. Roberts, and J. M. Zanotti, J. Phys. G 34, S23 (2007).
[9] I. A. Qattan, Precision Rosenbluth Measurement of the Proton Elastic Electromagnetic Form Factors and Their Ratio at Q2 = 2.64, 3.20, and 4.10 GeV2,
PhD Thesis, NWU (2005).
[10] M. Peskin and D. Schroeder, An Introduction to Quantum Field Theory, Addison-Wesley Publishing Company (1997).
[11] C. E. Hyde-Wright and K. de Jager, Ann. Rev. Nucl. Part. Sci. 54, 217 (2004).
[12] E. Klempt, J.-M. Richard, Baryon spectroscopy, arXiv:0901.2055v1 (2009).
[13] P. Stoler, Phys. Rev. Let. 66, 8 (1991).
[14] V. D. Burkert et al., Phys. Rev. C 67, 035204 (2003).
[15] P. Stoler, Phys. Rep. 226, 3 (1993).
[16] S. E. Kuhn et al., The structure of the free neutron via spectator tagging, PR03-012 proposal.
[17] L. W. Whitlow et al., Phys. Lett. B 282, 475 (1992); A. Bodek, S. Dasu and
S.E. Rock, in Tucson Part. Nucl. Phys., 768 (1991), SLAC-PUB-5598.
[18] N. Isgur, Phys. Rev. D 59, 034013 (1999).
[19] A. D. Martin, R. G. Roberts, W. J. Stirling, and R. S. Thorne, Eur. Phys. J. C
14, 133 (2000).
[20] G. R. Farrar and D. R. Jackson, Phys. Rev. Lett. 35, 1416 (1975).
[21] E. Bloom and F. Gilman, Phys. Rev. Lett. 25, 1140 (1970).
[22] W. Melnitchouk, R. Ent, and C. E. Keppel, Phys. Rep. 406, 3-4 (2005).
[23] I. Niculescu et al., Phys. Rev. Lett. 85, 1182 (2000).
[24] F. Gross and S. Liuti, Phys. Rev. C 45, 1374 (1992); S. Liuti and F. Gross,
Phys. Lett. B 356, 157 (1995).
[25] L. Heller and A. W. Thomas, Phys. Rev. C 41, 2756 (1990).
[26] L. L. Frankfurt and M. I. Strikman, Nucl. Phys. B250, 1585 (1985); Phys. Rep.
160, 135 (1988).
[27] J. D. Bjorken and E. A. Paschos, Phys. Rev. 185, 5 (1969).
[28] S. S. M. Wong, Introductory Nuclear Physics, John Wiley and Sons, Inc. (1998).
[29] I. I. Rabi, J. M. B. Kellogg, and J. R. Zacharias, Phys. Rev. 46, 163 (1934).
[30] J. M. B. Kellogg et al., Phys. Rev. 55, 318 (1939).
[31] M. Garçon and J. W. Van Orden, Adv. Nucl. Phys. 26, 293 (2001).
[32] F. Halzen and A. D. Martin, Quarks and Leptons: An Introductory Course in
Modern Particle Physics, John Wiley and Sons (1984).
[33] D. Abbott et al., Phys. Rev. Lett. 82, 1379 (1999).
[34] G. D. Yen and J. P. Vary, Phys. Rev. C 40, 1 (1989).
[35] L. C. Alexa et al., Phys. Rev. Lett. 82, 1374 (1999).
[36] R. G. Arnold et al., Phys. Rev. Lett. 35, 776 (1975).
[37] C. D. Buchanan and R. Yearian, Phys. Rev. Lett. 15, 303 (1965).
[38] R. Cramer et al., Z. Phys. C 29, 513 (1985).
[39] D. J. Drikley and L. N. Hand, Phys. Rev. Lett. 9, 521 (1962).
[40] J. E. Elias et al., Phys. Rev. 177, 2075 (1969).
[41] S. Galster et al., Nucl. Phys. B 32, 221 (1971).
[42] S. Platchkov et al., Nucl. Phys. A 510, 740 (1990).
[43] P. E. Bosted et al., Phys. Rev. C 42, 38 (1990).
[44] S. Auffret et al., Phys. Rev. Lett. 54, 649 (1985).
[45] G. G. Simon et al., Nucl. Phys. A 364, 285 (1981).
[46] A. P. Kobushkin and Y. D. Krivenko, arXiv:nucl-th/0112009.
[47] M. Osipenko et al. [CLAS Collaboration], arXiv:hep-ex/0507098.
[48] L. L. Frankfurt and M. I. Strikman, Phys. Rep. 76, 4 (1981).
[49] S. Okubo and R. E. Marshak, Ann. Rev. 4, 166 (1958).
[50] M. Sargsian and M. Strikman, Phys. Lett. B 639, 223 (2006).
[51] W. Melnitchouk, M. Sargsian, and M. I. Strikman, Z. Phys. A 359, 99 (1997).
[52] C. Ciofi degli Atti, L. P. Kaptari, and D. Treleani, Phys. Rev. C 63, 044601 (2001).
[53] C. Ciofi degli Atti and S. Simula, Phys. Lett. B 319, 23 (1993); S. Simula, Phys.
Lett. B 387, 245 (1996).
[54] C. Ciofi degli Atti and B. Z. Kopeliovich, Eur. Phys. J. A 17, 133 (2003);
[55] C. Ciofi degli Atti, L. P. Kaptari, and B. Z. Kopeliovich, Eur. Phys. J. A 19,
145 (2004).
[56] W. Melnitchouk, A. W. Schreiber, and A. W. Thomas, Phys. Lett. B 335, 11
(1994); Phys. Rev. D 49, 1183 (1994).
[57] S. I. Alekhin, S. A. Kulagin, and S. Liuti, Phys. Rev. D 69, 114009 (2004).
[58] D. Zwillinger, Standard Mathematical Tables and Formulae, CRC Press (1996).
[59] E. S. Smith et al., NIM A 432, 265 (1999).
[60] M. Amaryan et al., NIM A 460, 239 (2001).
[61] B. A. Mecking et al., NIM A 503, 513 (2003).
[62] S. Klein, CERN Courier 44, issue 1 (2004).
[63] V. Eckardt et al., arXiv:nucl-ex/0101013v2.
[64] CERES collaboration, Nuclear Physics A 661, 673 (1999).
[65] F. Sauli, NIM A 386, 531 (1997).
[66] F. Sauli, NIM A 553, 18 (2005).
[67] F. Sauli, NIM A 580, 971 (2007).
[68] H. Fenker et al., NIM A 592, 273 (2008).
[69] M. Mestayer et al., NIM A 449 (2000).
[70] R. Feuerbach, CLAS-NOTE 2001-22 (2001).
[71] S. A. Morrow and M. D. Mestayer, CLAS-NOTE 2002-010 (2002).
[72] D. Lawrence and M. Mestayer, Drift Chamber Calibration: Software and Procedures, CLAS-NOTE 1999-018 (1999).
[73] M. Amaryan et al., NIM A460, 239 (2001).
[74] E. S. Smith et al., CLAS-NOTE 1999-011 (1999).
[75] S. Biagi, MAGBOLTZ programme, version 2, CERNLIB, CERN, Geneva (2005).
[76] W. Leo, Techniques For Nuclear And Particle Physics Experiments, Springer-Verlag New York, LLC (1994).
[77] A. Klimenko and S. Kuhn, CLAS-NOTE 2003-005 (2003).
[78] F. James, CERN Program Library Long Writeup D506 CERN, Geneva (1998).
[79] P. Bosted, S. Kuhn, and Y. Prok, CLAS-NOTE 2003-008 (2008).
[80] P. Bosted and H. Avakyan, CLAS-NOTE 2006-006 (2006).
[81] S. Stepanyan, CLAS-NOTE 2002-008 (2002).
[82] K. Abe et al., Phys. Rev. D 58, 112003 (1998).
[83] M. Lacombe, B. Loiseau, R. Vinh Mau, J. Cote, P. Pires, and R. de Tourreil,
Phys. Lett. B 101, 139 (1981).
[84] R. Machleidt, K. Holinde and Ch. Elster, Phys. Rep. 149 (1987) 1.
[85] R. B. Wiringa, V. G. J. Stoks and R. Schiavilla, Phys. Rev. C 51 (1995) 38.
[86] S. J. Brodsky, Light-Cone Quantized QCD and Novel Hadron Phenomenology,
arXiv:hep-ph/9710288v2 (1997).
[87] L. W. Mo and Y. S. Tsai, Rev. Mod. Phys. 41, 205 (1969).
[88] M. Arneodo et al., Phys. Rev. Lett. B 364, 107 (1995).
[89] K. Abe et al., Phys. Lett. B 452, 194 (1999).
[90] S. Agostinelli et al., NIM A 506,3, 250 (2003).
[91] N. Baillie, Ph.D. thesis, W&M (2009).
[92] J. Zhang, Ph.D. thesis, ODU, in preparation.
[93] P. E. Bosted and M. E. Christy, Phys. Rev. C 77, 065206 (2008).
[94] A. Klimenko, Electron Scattering From a High-Momentum Neutron in Deuterium, PhD thesis, ODU (2004).
[95] M. Osipenko, A. Vlassov and M. Taiuti, CLAS-NOTE 2004-020 (2004).
[96] M. Amarian et al., The structure of the free neutron at large x-Bjorken, PR1206-113 proposal.
Svyatoslav Tkachenko
Department of Physics
Old Dominion University
Norfolk, VA 23529
Svyatoslav Tkachenko received the Degree of Specialist (equivalent of the Bachelor of Science degree) in robototechnics from Odessa State Polytechnic Institute (Odessa, Ukraine) in 1997. The same year he started graduate school at the University of Virginia, where he received a Master of Science degree in physics (Nuclear Physics) in 2004. That same year, 2004, he entered a Ph.D. program in physics at Old Dominion University (specialty: Nuclear Physics). The projected date of graduation is December 2009. Svyatoslav Tkachenko is a member of the following professional organizations: American Physical Society, CEBAF (Continuous Electron Beam Accelerator Facility) Users Group, CLAS (CEBAF Large Acceptance Spectrometer) collaboration, and the Deep Processes Physics Working Group.
Typeset using LaTeX.
Calcul.io · Math Playground. Simple and fast calculator for daily needs
Calculate probabilities and statistical distributions reliably.
Probability functions are fundamental for analyzing random events and statistical data. Calcul.io offers a variety of probability functions including distributions (normal, binomial, Poisson, etc.),
cumulative distribution functions (CDFs), and probability density functions (PDFs). These functions are crucial for fields such as finance, insurance, and scientific research.
Our probability functions enable you to model uncertainties, calculate risks, and make informed decisions based on statistical evidence. Whether you are conducting experiments, performing data
analysis, or simulating stochastic processes, Calcul.io provides the reliable tools you need for accurate probability computations.
combinations — Compute the number of combinations of n items taken k at a time.
combinationsWithRep — Compute the number of combinations of n items taken k at a time, with replacement.
factorial — Compute the factorial of a value.
gamma — Compute the gamma function. For small values, the Lanczos approximation is used, and for large values the extended Stirling approximation.
kldivergence — Calculate the Kullback-Leibler (KL) divergence between two distributions.
lgamma — Logarithm of the gamma function for real, positive numbers and complex numbers, using the Lanczos approximation for real numbers and the Stirling series for complex numbers.
multinomial — Compute the multinomial coefficient: the number of ways of picking a1, a2, ..., ai unordered outcomes from `n` possibilities. Takes one array of integers as its argument; every ai must be > 0.
permutations — Compute the number of permutations of n items taken k at a time.
pickRandom — Pick a random entry from a given array.
random — Return a random number.
randomInt — Return a random integer number.
All functions
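For reference, several of these combinatoric functions have direct counterparts in Python's standard library. The sketch below is my own illustration (not Calcul.io code); the left-hand names in the comments are Calcul.io's, while the Python calls are standard `math` functions:

```python
import math

# Rough Python analogues of some Calcul.io probability functions
print(math.comb(5, 2))           # combinations(5, 2)        -> 10
print(math.comb(5 + 2 - 1, 2))   # combinationsWithRep(5, 2) -> 15
print(math.perm(5, 2))           # permutations(5, 2)        -> 20
print(math.factorial(5))         # factorial(5)              -> 120
print(math.lgamma(5.0))          # lgamma(5) = ln(4!)        -> ~3.178

def multinomial(counts):
    """Number of ways of picking counts[0], counts[1], ... unordered
    outcomes from n = sum(counts) possibilities (multinomial coefficient)."""
    out = math.factorial(sum(counts))
    for c in counts:
        out //= math.factorial(c)
    return out

print(multinomial([2, 1, 1]))    # 4!/(2!*1!*1!) -> 12
```

Note that combinations with replacement reduce to an ordinary binomial coefficient, C(n + k - 1, k), which is why `math.comb` covers both cases.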
retsch pm 200 planetary ball mill
The PM 200 is a convenient benchtop model with 2 grinding stations. You may also be interested in the High Energy Ball Mill Emax, an entirely new type of mill for high energy input. The unique
combination of high friction and impact results in extremely fine particles within the shortest amount of time." (RETSCH) Application Examples:
WhatsApp: +86 18838072829
Retsch Drum Mill TM 500. Retsch Mixer Mill MM 400, 100240 V, 50/60 Hz. Retsch Mixer Mill MM 500 Nano. Retsch Mixer Mill MM 500 Control. Retsch High Energy Ball Mill Emax, 200240 V, 50/60 Hz, High
Energy Ball Mill with 2 Grinding Stations. Retsch Planetary Ball Mill PM 100, 230 V, 50/60 Hz, with 1 Grinding Station, Speed Ratio 1 : 2.
For use with Retsch PM 100 and PM 200 Planetary Ball Mills. Safe, nonslip seating with builtin antirotation device and conical base centering; ... Capacity (Metric) 50 mL: For Use With
(Equipment) PM 100 and PM 200 planetary ball mill: Material: Zirconium Oxide: Capacity (English) oz.
The Planetary Ball Mill PM 200 is a powerful benchtop model with 2 grinding stations for grinding jars with a nominal volume of 12 ml to 125 ml. ... Operation of the RETSCH planetary ball mills
is particularly safe. They feature a robust Safety Slider which ensures that the mill can only be started after the grinding jar has been securely fixed ...
The PM 100 CM is a Planetary Ball Mill with a speed ratio of 1:1. Size reduction is not so much achieved by impact but by pressure and friction which is more gentle on the material.
Specifications: Ultimate fineness: < 1µm. Dimensions (WxDxH): 630 x 415 x 468mm. PM 400: 836 x 780 x 1220mm. Supply requirements: 230V, 5060Hz.
Planetary Ball Mill PM 400. zirconium oxide, for PM 100 and PM 400 Counter wrench IQ/OQ Documentation for PM 400 Grinding jars "comfort" PM 100 / PM 200 / PM 400 Hardened steel 50 ml 125 ml 250
ml 500 ml Stainless steel 12 ml 25 ml 50 ml
Find out all of the information about the RETSCH product: ball mill PM 200. Contact a supplier or the parent company directly to get a quote or to find out a price or your closest point of sale.
... The Planetary Ball Mill PM 200 is a powerful benchtop model with 2 grinding stations for grinding jars with a nominal volume of 12 ml to 125 ml.
©Retsch GmbH Retsch Allee 1 5 42781 Haan Germany phone: +49 2104 fax: +49 2104 email: info page 2/4 Ball charge Planetary Ball Mills PM 100 / PM 100 CM / PM 200 / PM 400 Dry Grinding Recommended
ball charge (Pieces) Wet Grinding Recommended ball charge (Mass, g) Volume of the grinding jar
The Planetary Ball Mill PM 300 is a powerful and ergonomic benchtop model with two grinding stations for grinding jar volumes up to 500 ml. This setup allows for processing up to 2 x 220 ml
sample material per batch.
Retsch™ PM 200 Model Planetary Ball Mills meet all technical requirements for colloidal grinding and provide the energy input necessary for mechanical alloying. PM 200 is a convenient benchtop
model with 2 grinding stations. Brand: RETSCH Code : A0 Additional Details : Weight : Product Code. £ / Each
RETSCH Models PM 100 and PM 200 Planetary Ball Mills Benchtop ball mills are ideal for wet or dry grinding applications requiring the highest degree of fineness Specifications View More Specs
Includes: Mill only Requires: Grinding jars and balls (sold separately) Products 1 Description Specifications Description
Mixer Mill and Planetary Ball Mill. I 2021 Market launch of the new MM 500 control first Mixer ... SM 100 SM 200/300 SM 400 RM 200 DM 200 / 400 RS 200 RS 300 McCrone CryoMill MM 400 MM 500 nano /
cryo MM 500 vario Emax PM 100 / 200 / ... ROTOR MILLS RETSCH offers a whole family of cutting mills from the budgetpriced basic model ...
The new PM 400 is a real powerhouse. A selectable speed range from 30 to 400 rpm in combination with an effective sun wheel diameter of 300 mm guarantees a high energy input and therefore
analytical fineness in a very short time. The new model now offers an even greater range of applications, operating comfort and safety:
With the development of the Planetary Ball Mill PM 300 RETSCH has closed a gap in the . product portfolio. ... e. g., a PM 200, see Figure 5. The results achieved in the PM 300 .
PM 100 Planetary Ball Mill. The Planetary Ball Mill PM 100 is a powerful benchtop model with a single grinding station and an easy-to-use counterweight which compensates masses up to 8 kg. It
allows for grinding up to 220 ml sample material per batch. The extremely high centrifugal forces of Planetary Ball Mills result in very high ...
RETSCH AGATE MORTAR AGATE PESTLE for RM100 RMO RM 100 200 MIXER MILL GRINDER Exc. Opens in a new window or tab. PreOwned. 2, seaslife (25) 100% ... Planetary Ball Mill PM400 RETSCH Great
condition 2X 250ml NEWJar with CLAMP. ... Retsch PM 400 MA Planetary Ball Mill Tag #03. Opens in a new window or tab. Refurbished. 10, bid ...
PLANETARY BALL MILL PM 200 Planetary Ball Mills are used wherever the highest degree of fineness is required. Apart from the classical mixing and size reduction processes, the mills also meet all
the technical requirements for colloidal grinding and have the energy input necessary for mechanical alloying processes.
RETSCH Planetary Ball Mills are perfectly suited for processes like mechanical alloying or mechanosynthesis. For most reactions, the 1:2 speed ratio of jar to sun wheel of the models PM 100 and
PM 200 is fully adequate, as the ball charge produces enough impact energy. However, greater energy is required for some reactions.
Product details Powerful ergonomic Planetary Ball Mill PM 300 Material feed size*: < 10 mm Final fineness*: < 1 µm, for colloidal grinding < µm Speed ratio: 1 : 2 Grinding stations: 2 Product
details For larger sample volumes Planetary Ball Mill PM 400 Material feed size*: < 10 mm Final fineness*: < 1 µm, for colloidal grinding < µm
PM 200 Ball Mills. Grinding. Products. Application database. Industries.
The PM 200 is a convenient bench top model with 2 grinding stations. You may also be interested in the High Energy Ball Mill Emax, an entirely new type of mill for high energy input. The unique
combination of high friction and impact results in extremely fine particles within the shortest amount of time. Add to Quote. Description.
Stoichiometric of ACSgraded (99%+) Li 2 CO 3, Nb 2 O 5, Mn 2 O 3 precursors were ballmilled in a RETSCH Planetary Ball Mill PM 100 at 200 rpm for 12 h, using zirconia balls/jar and ethanol as ...
The history and necessity of mechanical alloying. M. Sherif ElEskandarany, in Mechanical Alloying (Second Edition), 2015. Planetary ball mills. The Planetary ball mills are the most popular mills
used in MM, MA, and MD scientific researches for synthesizing almost all of the materials presented in Figure In this type of mill, the milling media have considerably high energy ...
PM 200 Ball Mills. Crushing and Grinding. Products.
High Energy Ball Mills Milling Products. Retsch GmbH. Planetary Ball Mill PM 300. Material feed size*: 10 mm Final ...
Planetary Ball Mill. PM100 power tool pdf manual download. Also for: Pm100 cm, Pm200. ... Operating companies, operators Machine type designation: PM100 / PM200 / PM100CM Retsch ball mills are
used to grind and mix soft, medium hard and extremely hard, brittle and fibrous materials. ... Top latching position PM 200 inserting the opening aid ...
Ball mills are among the most variable and effective tools when it comes to size reduction of hard, brittle or fibrous materials. More: https://
Retsch Planetary Ball Mill PM 200 Item number: RET_12023 EAN: Category: Retsch Manufacturer: Retsch Now only,50 € Excluding 19% VAT., plus shipping (Sperrgut) (including 19% VAT.,41 €) Add to
basket Get offer Notice Compare Question Request for quote Professional service +49 30403 667 940 International shipping
The roller milled wheat flours were modified using a planetary ball mill PM 100 (Retsch GmbH, Haan, Germany) in dry grinding mode using eight stainless steel balls with a diameter of 30 mm. 150 ±
1 g of flour was weighed into a 500 mL stainless steel grinding jar and ground for 5, 10, 15, and 20 min. Flour of wheat cv. Akteur was milled at ...
RETSCH offers the largest selection of laboratory ball mills in the market! Ball mills are among the most variable and effective tools when it comes to size reduction of hard, brittle or fibrous
The present operating instructions for the ball mills of type PM100/200 provide all the necessary information on the headings contained in the table of contents. They act as a guide for the
target group(s) of readers defined for each topic for the safe use of the PM100/200 in accordance with its intended purpose. Familiarity with the relevant
Pm 400 200 Qm3sp2 Retsch Traducao Supplier Technique Process Working Principle Price for Planetary Ball Mill Milling Type FOB Price: US ... Pm 100 Cm Magnetite Mechanism Motion Nanoparticles
Particle Size Principle Tencan Theory Retsch Planetary Ball Mill Pdf FOB Price: US / Piece Min. Order: 1 Piece. Type: Ball Mill; Motor Type ...
Planetary Ball Mills RETSCH's innovative Planetary Ball Mills meet and exceed all requirements for fast and reproducible grinding down to the nano range. They are used for the most demanding
tasks, from routine sample processing to colloidal grinding and advanced materials development. 2 Planetary Ball Mills Open the catalog to page 2
How related are three-fourths siblings? - The Tech Interactive
A curious adult from Canada asks:
"I have four children with two different fathers that were brothers (one died in a logging accident). If genes are handed down randomly from parents - 1/2 of the mom’s and 1/2 of the dad’s, I am
wondering how genetically similar my older daughter is to her younger siblings as they are to each other. They have exactly the same ancestors on both sides because their fathers are brothers and I’m
their mother."
The simple answer is that they are somewhere between full and half siblings. They are three quarter siblings.
In math terms that means they are 37.5% related. This is halfway between the 50% of full siblings and the 25% of half siblings.
To understand where these numbers came from, we need to take a step back and look at how people inherit genes from their parents. I think then you'll see why your children from different fathers who
are brothers are more related than half siblings but less than full siblings.
Relatedness of Siblings
As you already said, we get half our genes from each of our biological parents. The half that we get is randomly chosen. This is actually why siblings are 50% related.
To understand this more clearly, let's look at a simpler example. Imagine you have 10 marbles in front of you and with your eyes closed you have to give 5 to someone.
After recording which marbles they got, the person then gives those 5 back to you. Now you have to give 5 marbles to someone else.
What are the odds that you will pick the same 5 marbles? The odds are very much against you. In fact they are 1 out of 252, or 0.4%.
This is sort of what it is like when you pass on genes. You pass half of them to each child but the half that gets passed is chosen at random. So it is very unlikely that siblings get the exact same
set of genes from a parent.
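The marble odds can be checked directly. This short sketch is my own illustration (not part of the original article); it computes the 1-in-252 figure:

```python
import math

ways = math.comb(10, 5)       # unordered ways to hand out 5 of 10 marbles
print(ways)                   # 252
print(round(100 / ways, 1))   # chance, in percent, of repeating the same 5: 0.4
```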
In the picture below, I have taken the example a little further to explain why siblings are 50% related. In the picture, we have a mom and a dad. They each have 10 marbles and they have to give 5
marbles to each one of their children.
As you'd expect, each child shares five marbles with each parent. But notice that they also share five marbles with each other. The five marbles they share are a mix of some of mom's and some of dad's.
This is a good proxy for what happens when we pass our genes down to our kids. Except that instead of giving 5 marbles we pass on 20,000 or so genes*.
This is so many genes that it is essentially impossible for a parent to pass the exact same set of genes to two of their children. In fact, with numbers this big, odds are that the kids will only
have about 10,000 of these genes in common from each parent. The other 10,000 will not be shared.
Because each child gets 10,000 shared genes from each parent (and 10,000 unique ones), they will share 20,000 out of 40,000. This is where the 50% related comes from for full siblings.
It is a different story with half siblings. Half siblings share one parent but have a different second parent. So they will not get the same genes from the parent they don’t share.
We'll say they share 0 out of these 20,000 genes. That leaves only the 20,000 genes from the shared parent. And since they will only share half of those genes, this means they share 10,000 out of
40,000 genes. In other words, they are 25% related.
Relatedness of Three-Fourth Siblings
Three-fourth siblings share one parent, but have different second parents that are siblings. For example, two children are considered three-fourths siblings if they share the same mom, but have
different dads that are full brothers. The same is true for children that share the same dad and have moms that are full sisters.
OK so we know the drill with the shared parent. Each child will share about 10,000 genes with their siblings because of the shared parent. But what about the different parents who are brothers? This
is where it gets a little more interesting.
I've drawn out a picture below again using marbles. This time each of the three parents has 40 marbles to pass on. I hope it makes the explanation clearer.
As you can see, the two fathers are 50% related to each other because they are siblings (outlined in blue). So in the pictures they share 20 out of their 40 marbles. Remember in reality they share
20,000 or so genes.
Notice in the picture that each child gets 20 marbles from mom, ten of which they share (outlined in orange). Just what we've come to expect.
The kids also get 20 from each dad. But since the dads are 50% related, they'll pass down 10 shared genes. The two kids will share half of these or 5 marbles (outlined in blue).
In gene terms, the dads will each pass 10,000 shared genes to each of their children. Again, each child will share half of them or 5,000 genes.
To determine the overall relatedness of three-fourth siblings, we add the contribution from the mom plus the contribution of the dads. This is 10,000 genes (mom) plus 5,000 genes (dad) or 15,000
Since each child gets a total of 40,000 genes, the genes that they share with their three-fourth sibling is 15,000 genes divided by 40,000 genes. In other words, 37.5%.
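The whole comparison boils down to a few lines of arithmetic. The sketch below is my own illustration, using the article's simplified 40,000-gene-per-child count:

```python
TOTAL = 40_000   # the article's simplified per-child gene count

shared_full    = 10_000 + 10_000   # shared via mom + shared via dad
shared_half    = 10_000 + 0        # only one parent in common
shared_three_q = 10_000 + 5_000    # shared mom + dads who are brothers

print(shared_full / TOTAL)      # 0.5   -> full siblings, 50%
print(shared_half / TOTAL)      # 0.25  -> half siblings, 25%
print(shared_three_q / TOTAL)   # 0.375 -> three-fourth siblings, 37.5%
```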
*It is actually a bit more complicated than this. We each have two copies of our 20,000 or so genes and we pass one copy of each separate gene to our kids. To simplify things, I lumped all of these
together for a total of 40,000 genes.
Author: Cecil Benitez
When this answer was published in 2011, Cecil was a Ph.D. candidate in the Department of Developmental Biology, studying endocrine development of the pancreas in Seung Kim's laboratory. Cecil wrote
this answer while participating in the Stanford at The Tech program.
Simplifying Algebraic Expressions: (5m^9)(7m^3n^7)+(2m^2n^6)(m^10n)
This article will guide you through simplifying the algebraic expression (5m^9)(7m^3n^7)+(2m^2n^6)(m^10n).
Understanding the Rules
To simplify this expression, we need to understand the following rules:
• Product of powers: When multiplying exponents with the same base, you add the powers. For example, x^a * x^b = x^(a+b).
• Commutative property of multiplication: The order of multiplication doesn't affect the result. For example, a * b = b * a.
Simplifying the Expression
1. Distribute: We start by distributing the multiplication:
□ (5m^9)(7m^3n^7) = 35m^(9+3)n^7 = 35m^12n^7
□ (2m^2n^6)(m^10n) = 2m^(2+10)n^(6+1) = 2m^12n^7
2. Combine like terms: Now we have: 35m^12n^7 + 2m^12n^7
3. Simplify: Since the terms have the same variables and exponents, we can combine their coefficients: (35 + 2)m^12n^7 = 37m^12n^7
The Simplified Expression
Therefore, the simplified form of (5m^9)(7m^3n^7)+(2m^2n^6)(m^10n) is 37m^12n^7.
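One quick way to gain confidence in a simplification like this is a numeric spot-check: if the original and simplified expressions agree at arbitrary values of m and n, the algebra is very likely right. A minimal sketch:

```python
# Spot-check the simplification at arbitrary values m = 2, n = 3
m, n = 2, 3
original   = (5*m**9) * (7*m**3*n**7) + (2*m**2*n**6) * (m**10*n)
simplified = 37 * m**12 * n**7
print(original == simplified)  # True
```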
Lab Assignment 4 ECSE104L solved
Example of Hierarchical Designing in Verilog: –
Full adder Boolean expressions: Sum = A ⊕ B ⊕ C; Carry = (A·B) + C·(A ⊕ B)
Verilog code for Half Adder
Half Adder:
//Declare the ports of Half adder module
module half_adder(
Data_in_A, //input A
Data_in_B, //input B
Data_out_Sum, //sum output
Data_out_Carry //carry output
);
//what are the input ports.
input Data_in_A;
input Data_in_B;
//What are the output ports.
output Data_out_Sum;
output Data_out_Carry;
//Implement the Sum and Carry equations using Verilog Bit operators.
assign Data_out_Sum = Data_in_A ^ Data_in_B; //XOR operation
assign Data_out_Carry = Data_in_A & Data_in_B; //AND operation
endmodule
Verilog code for full adder-
//declare the Full adder verilog module.
module full_adder(
Data_in_A, //input A
Data_in_B, //input B
Data_in_C, //input C (carry in)
Data_out_Sum, //sum output
Data_out_Carry //carry output
);
//what are the input ports.
input Data_in_A;
input Data_in_B;
input Data_in_C;
//What are the output ports.
output Data_out_Sum;
output Data_out_Carry;
//Internal variables
wire ha1_sum;
wire ha2_sum;
wire ha1_carry;
wire ha2_carry;
wire Data_out_Sum;
wire Data_out_Carry;
//Instantiate the half adder 1: adds A and B
half_adder ha1(
.Data_in_A(Data_in_A),
.Data_in_B(Data_in_B),
.Data_out_Sum(ha1_sum),
.Data_out_Carry(ha1_carry)
);
//Instantiate the half adder 2: adds the first sum and the carry in
half_adder ha2(
.Data_in_A(ha1_sum),
.Data_in_B(Data_in_C),
.Data_out_Sum(ha2_sum),
.Data_out_Carry(ha2_carry)
);
//sum output from 2nd half adder is connected to full adder output
assign Data_out_Sum = ha2_sum;
//The carry's from both the half adders are OR'ed to get the final carry.
assign Data_out_Carry = ha1_carry | ha2_carry;
endmodule
Ques 1 Write Verilog code for half adder. Test using university wave form.
Ques 2. Design a four-bit combinational circuit 2’s complementor using exclusive-OR gates and half adder. Write Verilog code in Quartus tool then test using university waveform.
Ques 3. Write Verilog code for full adder. Test using university wave form.
Ques 4. Write Verilog code for 4-bit ripple carry adder using full adder. Test using university wave form.
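For the simulation-based testing the assignment asks for, a simple stimulus module can drive all eight input combinations and display the outputs. The sketch below is one possible testbench, not part of the assignment; the module and port names follow the full adder code above:

```verilog
module full_adder_tb;
    reg A, B, C;
    wire Sum, Carry;

    // Device under test: the full_adder from the code above
    full_adder dut(
        .Data_in_A(A),
        .Data_in_B(B),
        .Data_in_C(C),
        .Data_out_Sum(Sum),
        .Data_out_Carry(Carry)
    );

    integer i;
    initial begin
        // Walk through all 8 input combinations
        for (i = 0; i < 8; i = i + 1) begin
            {A, B, C} = i[2:0];
            #10;
            $display("A=%b B=%b C=%b -> Sum=%b Carry=%b", A, B, C, Sum, Carry);
        end
        $finish;
    end
endmodule
```

The same truth-table check can be done visually in the university waveform editor by drawing these eight input patterns and comparing the simulated Sum and Carry traces against the Boolean expressions.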
5 Best Ways to Check if a Given Binary Tree is Height Balanced Like a Red-Black Tree in Python
Problem Formulation: A binary tree is said to be height-balanced if for every node, the height difference between its left and right subtrees is at most 1. This property is intrinsic in
red-black trees, a self-balancing binary search tree. The task is to verify a given binary tree’s balance similar to that of red-black trees. For a given binary tree structure, the desired output is
a boolean value indicating whether the tree is balanced.
Method 1: Recursive Height Calculation
This method involves a recursive function that computes the height of the left and right subtrees for each node and checks the height difference. The function specification will include a recursive
traversal of the binary tree nodes, calculation of the heights, and a balance check at each step.
Here’s an example:
class Node:
def __init__(self, value):
self.value = value
self.left = None
self.right = None
def height(node):
if not node:
return 0
return max(height(node.left), height(node.right)) + 1
def is_balanced(node):
if not node:
return True
left_height = height(node.left)
right_height = height(node.right)
if abs(left_height - right_height) > 1:
return False
return is_balanced(node.left) and is_balanced(node.right)
True or False
This code snippet defines a recursive function is_balanced that checks if the tree is height-balanced by comparing the heights of left and right subtrees at each node. It is efficient in terms of
readability but might not be the most optimal due to repeated height calculations.
Method 2: Enhanced Recursive Approach
To mitigate the inefficiency of repeated calculations in Method 1, this approach enhances the recursion by checking the balance and height simultaneously. The function returns the height of the
subtree if it’s balanced, and -1 otherwise, allowing to terminate early if imbalance is detected.
Here’s an example:
def check_height(node):
if not node:
return 0
left_height = check_height(node.left)
if left_height == -1:
return -1
right_height = check_height(node.right)
if right_height == -1:
return -1
if abs(left_height - right_height) > 1:
return -1
return max(left_height, right_height) + 1
def is_balanced(node):
return check_height(node) != -1
True or False
In this code snippet, the function check_height returns the height of a node if the subtree is balanced; otherwise, it returns -1. This optimization reduces the time complexity by avoiding redundant
height computations.
Method 3: Bottom-Up Recursive Approach
This approach is a bottom-up version of the previous method. It starts the balance check from the leaves, moving upwards. This is generally the preferred recursive approach because it ensures that
each node’s balance and height are calculated only once.
Here’s an example:
See the code snippet from Method 2
True or False
The bottom-up recursive approach is essentially the same as the enhanced recursive approach outlined in Method 2. It’s an efficient and straightforward way to verify the tree’s balance, making it
ideal for this problem.
Method 4: Iterative Depth-First Search (DFS)
Iterative solutions using stacks are another way to check tree balance. This method uses DFS traversal to check each node’s balance without recursion. It can be harder to understand but is useful for
avoiding stack overflow error in deep trees.
Here’s an example:
The iterative approach generally requires complex data structures and is not as straightforward to implement as the recursive methods.
True or False
An iterative DFS approach may require additional data structures and careful management of stack operations. It’s more complex to understand and implement, but it helps to overcome the limitations of
recursive approaches for very deep trees.
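As one possible sketch (names and structure are my own, not from a library), an explicit stack can emulate the post-order recursion, memoizing subtree heights and exiting early on the first imbalance:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def is_balanced_iterative(root):
    """Post-order DFS with an explicit stack; heights are memoized per node."""
    heights = {None: 0}           # empty subtree has height 0
    stack = [(root, False)]       # (node, children_already_processed)
    while stack:
        node, processed = stack.pop()
        if node is None:
            continue
        if processed:
            left_h, right_h = heights[node.left], heights[node.right]
            if abs(left_h - right_h) > 1:
                return False      # early exit on first imbalance
            heights[node] = max(left_h, right_h) + 1
        else:
            stack.append((node, True))        # revisit after children
            stack.append((node.left, False))
            stack.append((node.right, False))
    return True

# A balanced tree and a left-skewed one
print(is_balanced_iterative(Node(1, Node(2, Node(4)), Node(3))))   # True
print(is_balanced_iterative(Node(1, Node(2, Node(3, Node(4))))))   # False
```

Because each node is visited at most twice and heights are stored in a dictionary, this runs in O(n) time without any recursion depth limit.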
Bonus One-Liner Method 5: Using Existing Libraries
For simplicity and ease of use, one might utilize existing libraries like networkx which has functions to check the properties of trees that can be used to verify tree balance.
Here’s an example:
# Pseudocode as specific implementation details can vary based on library functions.
# Generally, this would involve creating a graph representation of the tree and
# using a function from the library to check for balance.
True or False
This method relies on the power of tested libraries, reducing the code that needs to be written and tested. It’s great for quick checks but does add external dependencies to your project.
• Method 1: Recursive Height Calculation. Simple and easy to understand. Inefficient due to repeated calculations of height.
• Method 2: Enhanced Recursive Approach. More efficient than Method 1 as it avoids repeated calculations. Still recursive.
• Method 3: Bottom-Up Recursive Approach. Preferred recursive solution. Efficiently calculates balance and height in one pass.
• Method 4: Iterative Depth-First Search (DFS). Avoids recursion. Suitable for very deep trees. More complex implementation.
• Method 5: Using Existing Libraries. Simplest implementation if appropriate libraries are available. Adds external dependencies.
Policy Impacts Library | Top Marginal Income Tax Rate Increases in 2013 from Expiration of the Economic Growth and Tax Relief Act of 2001
The Economic Growth and Tax Relief Act of 2001 lowered top marginal income tax rates in the US to 35%. This act expired in 2013, leading to an increase in the top marginal income tax rate from 35% to
39.6%. Kawano et al. (2016) use variation from this expiration to estimate its impact on taxable income of individuals subjected to the top marginal income tax rate. Hendren and Sprung-Keyser (2020)
translate their estimates into the implied MVPF.
Hendren and Sprung-Keyser (2020) estimate the MVPF of the 2013 top tax rate change using the equation
FE = \frac{-t}{1-t} \cdot \alpha \cdot \epsilon_{eti}
with \alpha = \frac{E[Y]}{E[Y-y|Y\geq y]} is the Pareto Parameter of the income distribution and \epsilon = \frac{d[E[y]]}{d(1-t)}\frac{1-t}{E[y]} is the elasticity of taxable income with respect to
the keep rate, 1-t.
Throughout, Hendren and Sprung-Keyser (2020) measure t as the sum of the federal income tax rate and a 5% state income tax rate assumption. In practice, the reforms are discrete changes in t. To
account for this, Hendren and Sprung-Keyser (2020) compute the fiscal externality above separately for the pre- and post-reform tax rates, and then take an average of the two FEs. Appendix F of
Hendren and Sprung-Keyser (2020) provides further details and references.
The key additional parameter beyond the elasticity of taxable income is the Pareto parameter of the income distribution. Hendren (2017) finds a value of the Pareto parameter of 1.5.
Kawano et al. (2016) estimate an elasticity of taxable income of 0.12. This implies an MVPF of 1.16, with a confidence interval of [0.873, 1.925].
Saez (2016) also studies the 2013 reform. He directly analyzes the fiscal externality and finds FE = -0.19, which implies an MVPF of 1.23, similar to the 1.16 implied by the estimates of Kawano et
al. (2016).
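As a rough numeric check of the recipe above (the mapping MVPF = 1/(1 + FE) for a top-rate increase is a simplification of the paper's Appendix F; the pre/post averaging follows the description in the text):

```python
def fiscal_externality(t, alpha=1.5, eti=0.12):
    """FE = -t/(1-t) * alpha * eti: behavioral revenue loss per mechanical dollar raised."""
    return -t / (1 - t) * alpha * eti

# Federal top rates plus the assumed 5% state income tax.
t_pre, t_post = 0.35 + 0.05, 0.396 + 0.05
fe = (fiscal_externality(t_pre) + fiscal_externality(t_post)) / 2

# For a top-rate increase, MVPF = 1 / (1 + FE) under this simplification.
mvpf = 1 / (1 + fe)
print(round(mvpf, 2))  # ≈ 1.15, close to the reported 1.16
```

The small gap from the reported 1.16 reflects the details this sketch leaves out (the exact averaging and tax-base adjustments in Appendix F).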
The estimates used to calculate this MVPF may have been updated in a more recent working or published version of the paper.
differential calculus
What is 5/0? When I ask my beginning algebra students that question, the most popular incorrect answer they give me is 0. The next most popular incorrect answer is 5. After repeated reminders by
their math teachers, students eventually learn that 5/0 is undefined, has no value, or is meaningless. (I once told a class of 9th grade algebra students that if they use their calculator to divide
a number by zero, the calculator will explode in their face. One student looked at me and said, "Really?" I forgot how literal 9th graders can be. At least I got the student's attention.) When I ask
college algebra, trigonometry, statistics, technical math or calculus students why a number divided by zero is undefined, I either get an answer that begs the question or students say it’s simply a
mathematical fact that they learned in a previous course.
So how do you explain division by zero? There are two ways. The first depends on a basic understanding of division of two numbers. It goes something like this: Students learn that a / b = c if and
only if a = b*c. Therefore 986 / 58 = 17 because 58*17 = 986. Is 5 / 0 = 0? No, because 0 * 0 ≠ 5. Is 5 / 0 = 5? No, because 0*5 ≠ 5. Since 0 times any number never equals 5, 5 / 0 is NOTHING or
undefined. So what about 0 / 0? The problem here is that 0 times any number equals 0, and therefore 0 / 0 would have infinitely many answers, which in turn would be rather confusing. So we say that
any number divided by zero is undefined.
The second explanation involves a deep mathematical insight from the 12th-century Indian mathematician and astronomer Bhāskara II, who developed the basic concepts of differential calculus. The 17th-century European mathematicians Newton and Leibniz independently rediscovered differential calculus. This second explanation, due to Bhāskara II, goes something like this. Consider a single piece
of fruit. If we divide 1 piece of fruit by ¼, we get 4 pieces of fruit. If we divide 1 piece of fruit by 1/10,000, we get 10,000 pieces of fruit. As 1 is divided by smaller and smaller numbers that
approach zero, the number of pieces of fruit increases without bound. Therefore 1/0 = ∞ and, in general, n/0 = ±∞ if n does not equal 0.
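Bhāskara's fruit argument is easy to check numerically (plain Python, purely illustrative):

```python
# Dividing 1 by ever-smaller positive numbers grows without bound.
for d in [0.25, 1e-4, 1e-8]:
    print(f"1 / {d} = {1 / d}")

# Division by zero itself has no value: Python raises an error
# instead of returning a number.
try:
    1 / 0
except ZeroDivisionError:
    print("1 / 0 is undefined")
```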
Bhāskara II, Newton and Leibniz discovered the revolutionary concept of a limit of a function at a point, which enabled them to get around the problem of division by zero. Once that problem was
solved, it was a relatively easy task to find methods to calculate a rate of change over a time interval of length zero, rate of change over a fleeting instant of time, or rate of change over a flux
of time, as Newton would say. In The Ascent of Man, Dr. Bronowski tells the viewer, “In it, mathematics becomes a dynamic mode of thought, and that is a major mental step in the ascent of man.”
Differential calculus is all about the mathematics of variable rates of change. I should mention that differential calculus students learn a slick technique for finding the limiting value of an
x-variable expression as x approaches a constant k and the value of the expression when x = k is 0/0 or ∞/∞.
The graphic below shows the graphs of the functions y = 2Sin(x) and y = 2Csc(x) along with its vertical asymptotes. The graphs are color coded green, blue and red respectively. Because Csc(x) = 1 /
Sin(x), the Csc(x) function is undefined at precisely those values of x where Sin(x) = 0. It’s interesting and fun to advance a trace mark cursor on the graphs of these functions. On both graphs, the
horizontal velocity of the trace mark is constant, but the vertical velocity of the trace mark changes as the value of x changes. As x approaches a vertical asymptote, the trace mark races
towards ± ∞. Differential calculus gives us a complete understanding of the phenomena of the moving trace cursor.
The above graphic, created with the program Basic Trig Functions, is offered by Math Teacher’s Resource. The equations entered into the program were: y = 2Sin(x), y = 2Csc(x), and Sin(x) = 0. Go to
www.mathteachersresource.com to view multiple screen shots of the program’s modules. Click the ‘learn more’ button in the TRIGONOMETRIC FUNCTIONS section. Teachers will find useful comments at the
bottom of each screen shot.
Differential calculus is not only interesting and fun, but it can also be a stress reliever. At least it was for Omar Bradley, the famous American WWII general. He took a calculus book with him on
battle campaigns, and when opportunity allowed, he worked differential calculus problems to relieve the stress of battle.
What does dy/dx mean? | Socratic
What does $dy/dx$ mean?
1 Answer
It means the derivative of $y$ with respect to $x$, so if $y = f(x)$,
$\frac{\mathrm{dy}}{\mathrm{dx}} = y ' = f ' \left(x\right)$.
I hope that this was helpful.
Project Euler #46: Goldbach's other conjecture | HackerRank
[This problem is a programming version of Problem 46 from projecteuler.net]
It was proposed by Christian Goldbach that every odd composite number can be written as the sum of a prime and twice a square.
It turns out that the conjecture was false as you'll discover some values can't be represented as a sum of prime and twice a square.
You are given N; print the number of ways N can be represented as a sum of a prime and twice a square.
Example can be represented in two ways as and
The first line contains an integer, the number of test cases.
Each of the following lines contains an integer N.
Print the values corresponding to each test case.
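The specific numbers in the statement were lost along with the page's math images, but the computation asked for is clear: count the ways to write N as a prime plus twice a square. A straightforward sketch (assuming squares start at 1² and primes include 2):

```python
def is_prime(n):
    """Trial division; sufficient for problem-sized inputs."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def count_representations(n):
    """Number of ways n = p + 2*k*k with p prime and k >= 1."""
    count = 0
    k = 1
    while 2 * k * k < n:
        if is_prime(n - 2 * k * k):
            count += 1
        k += 1
    return count
```

For example, 15 = 13 + 2·1² = 7 + 2·2² has two representations, while 5777 (a known counterexample to the conjecture) has none.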
Correlation and Significance | Arkangel AI Docs
How to preliminary choose the values you'd to use for training? In this tutorial we show how to interpret the traffic light analysis in Arkangel AI
Choosing the supporting variables prior to training is one of the most important things in your algorithm.
The traffic light bars show the degree to which a feature is correlated with the target. The classification is capable of detecting non-linear relationships with the target, but as they are
univariate, they are unable to detect interaction effects between features. This calculation uses Phi-k correlation and significance. They are both calculated using an algorithm that measures the
information content of the variable; this calculation is done independently for each feature in the dataset.
Tip: As you iterate with different configurations you might want to remove features that are unrelated to the target.
Correlation is a statistical measure that expresses the extent to which two variables are related, describing how much they can change in relation to one another and identifying a pattern but does
not help identify a relationship between cause and effect.
The Phi-k correlation coefficient works consistently between categorical, ordinal, and interval variables. It is obtained by a derivation from Pearson's chi-squared contingency test. The values for
levels of correlation are bounded in the range [0, 1]:
Significance helps quantify and understand whether a relationship between a group of variables is caused by something different than chance. Significance provides evidence that a correlation exists
between the variables.
Phi-k significance is obtained using a hybrid approach, where the G-test is used and the result is expressed as a one-sided Z score, which is then transformed into a p-value.
This value is interpreted as a hypothesis test score, where the null hypothesis is that the correlation between variables has no statistical significance between one another, while the alternative
hypothesis is that there is statistical significance between variables.
A p-value lower than 0.025 is generally considered statistically significant; the lower the p-value, the greater the statistical significance of the variable.
For p-values close to 1, no statistical significance can be established.
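Phi-k itself is implemented in the open-source Python package `phik`, but the correlation-plus-significance idea can be illustrated with the ordinary phi coefficient, the 2×2 special case derived from Pearson's chi-squared (stdlib only; the contingency counts below are made up):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

def p_value_1dof(chi2):
    """Survival function of the chi-squared distribution with 1 degree of freedom."""
    return math.erfc(math.sqrt(chi2 / 2))

# Strongly associated table: feature present/absent vs. target yes/no.
chi2 = chi2_2x2(40, 10, 10, 40)
phi = math.sqrt(chi2 / 100)        # phi coefficient, in [0, 1] for a 2x2 table
print(phi, p_value_1dof(chi2))     # high correlation, p-value far below 0.025
```

A feature like this would show up green in the traffic light analysis: strong correlation with the target and a p-value well under the 0.025 threshold.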
ClaimsProblems | CRAN/E
Analysis of Conflicting Claims
CRAN Package
The analysis of conflicting claims arises when an amount has to be divided among a set of agents with claims that exceed what is available. A rule is a way of selecting a division among the
claimants. This package computes the main rules introduced in the literature from the old times until nowadays. The inventory of rules covers the proportional and the adjusted proportional rules, the
constrained equal awards and the constrained equal losses rules, the constrained egalitarian, the Piniles’ and the minimal overlap rules, the random arrival and the Talmud rules. Besides, the
Dominguez and Thomson and the average of awards rules are also included. All of them can be found in the book of W. Thomson (2019), 'How to divide when there isn't enough. From Aristotle, the Talmud,
and Maimonides to the axiomatics of resource allocation', with the exception of the average of awards rule (Mirás Calvo et al. (2022)). In addition, graphical diagrams allow the user to represent,
among others, the set of awards, the paths of awards, and the schedules of awards of a rule, and some indexes. A good understanding of the similarities and the differences of the rules is useful for
a better decision making. Therefore this package could be helpful to students, researchers and managers alike.
• Version: 0.2.1
• R version: unknown
• Needs compilation: No
• Last release: 01/12/2023
This package has been downloaded 218 times in the last 30 days. The following heatmap shows the distribution of downloads per day. Yesterday, it was downloaded 4 times.
Data provided by cranlogs
Matlab fft() | Guide to How Matlab fft() works with Examples
Updated March 4, 2023
Introduction to Matlab fft()
The Matlab method fft() carries out the operation of finding the Fast Fourier transform of any sequence or sampled signal. An FFT (Fast Fourier Transform) is an algorithm that can compute
DFT (Discrete Fourier Transform) for a signal or a sequence or compute IDFT (Inverse DFT). Fourier analysis operation on any signal or sequence maps it from the original domain (usually space or
time) to that of the frequency domain, whereas IDDFT carries out the reverse operation.
Syntax and description:
• F = fft(f) — Computes the DFT (Discrete Fourier Transform) of 'f' using an FFT (Fast Fourier Transform) algorithm and returns the frequency-domain signal F.
• F = fft(f, n) — Computes the n-point DFT of 'f' and returns the frequency-domain signal F. By default, F has the same size as f.
• F = fft(f, n, dim) — Computes the n-point DFT of 'f' along the dimension 'dim'.
Examples of Matlab fft()
Given below are the examples mentioned:
Example #1
Deriving FFT for Random Noise Signal.
Ls = 2500;% Signal length
Fs = 2000;% Sampling frequency
Ts = 1/Fs;% Sampling period
tv = (0:Ls-1)*Ts; % Time vector
f = 0.6*sin(2*pi*50*tv) + 3*randn(size(tv)) + sin(2*pi*120*tv); % Input signal
plot(tv,f) % plot call restored; it was lost in extraction
xlabel('tv (ms)')
title('Corrupted Signal having Zero-Mean Random Noise')
F = fft(f);% Calling fft() function for signal ‘f’
PS2 = abs(F/Ls);% Double sampling plot
PS1 = PS2(1:Ls/2+1);% Single sampling plot
PS1(2:end-1) = 2*PS1(2:end-1);
f = Fs*(0:(Ls/2))/Ls;
plot(f,PS1) % plot call restored; it was lost in extraction
title('Amplitude Spectrum (Single-Sided) PS1 for f(t)')
xlabel('f (Hertz)')
The output window displays the noise signal formed as function ‘f’ in time domain and single sided amplitude spectrum is computed using fft() resulting in frequency domain signal ‘F’.
The nature of the resultant FFT signal varies depending on the type of input signal or data such as:
Nature of Input Nature of Output
f is a Vector F is produced as Fourier transform of vector f.
f is a Matrix F is produced as Fourier transform of each column of matrix ‘f’.
f is a multidimensional array Function fft(f) treats the values along the first non-unit array dimension as vectors and returns the Fourier transform for each vector.
Example #2
Deriving np point FFT for Gaussian Signal.
Fs = 300; % Sampling frequency
ts = -0.5:1/Fs:0.5; % Time vector
Ls = length(ts); % Signal length
f = 1/(4*sqrt(2*pi*0.02))*(exp(-ts.^2/(2*0.02)));
plot(ts,f) % plot call restored; it was lost in extraction
xlabel('Time (t)')
title('Time Domain')
np = 2^nextpow2(Ls);
F = fft(f,np); % transform the time-domain signal before building the frequency vector
PF = abs(F/np);
fv = Fs*(0:(np/2))/np; % frequency vector
plot(fv,PF(1:np/2+1))
title('Frequency Domain')
The output window displays the Gaussian signal formed as function ‘f’ in time domain and np-point FFT is computed using fft() resulting in frequency domain signal ‘PF’.
The nature of the resultant n-point FFT signal varies depending on the type of input signal or data such as:
Nature of Input Nature of Output
f is a Vector having length smaller than the value of ‘n’. F is produced as Fourier transform of vector f being padded with trailing zeros to match the length of ‘n’.
f is a Vector having length greater than the value of ‘n’. F is produced as Fourier transform of vector f being truncated to the length of ‘n’.
f is a matrix F is produced as Fourier transform of each column of matrix ‘f’.
f is a multidimensional array Function fft(f) treats the values along the first non-unit array dimension as vectors and returns the Fourier transform for each vector.
Example #3
Fs = 2000; % Sampling frequency
Ts = 1/Fs; % Sampling period
Ls = 3000; % Length of signal
t = (0:Ls-1)*Ts; % Time vector
r1 = sin(3*pi*60*t); % waveform in first row
r2 = sin(3*pi*140*t); % waveform in second row
r3 = sin(3*pi*350*t); % waveform in third row
% Display of all 3 waves in time domain
f = [r1; r2; r3];
for k = 1:3
subplot(3,1,k) % subplot/plot calls restored; they were lost in extraction
plot(t,f(k,:))
title(['Row No ',num2str(k),' (Time Domain)'])
end
np = 2^nextpow2(Ls);% Defining n value for DFT operation
d = 2;
F = fft(f,np,d);% Calling fft() for the matrix f having each wave as one row
PS2 = abs(F/Ls);
PS1 = PS2(:,1:np/2+1);
PS1(:,2:end-1) = 2*PS1(:,2:end-1);
% Computing FFT of all 3 waves and displayed in frequency domain
for k=1:3
subplot(3,1,k) % subplot/plot calls restored; they were lost in extraction
plot(0:(Fs/np):(Fs/2-Fs/np),PS1(k,1:np/2))
title(['Row No ',num2str(k),' (Frequency Domain)'])
end
The output window displays the three sinusoidal waves r1, r2 an r3 in time domain and their respective single side amplitude spectrum is computed on the waves in the form of matrix f, using fft()
resulting in frequency domain signal ‘PS1’.
How fft() works?
F = fft(f) calls the operation of Fourier transform, whereas f = ifft(F) calls the operation of inverse Fourier transform. For f and F of length n, these transform operations are defined as follows:

Fourier transform F (frequency-domain signal) for time- or space-domain signal f:

F(k) = Σ_{j=1}^{n} f(j) · W_n^{(j−1)(k−1)}

Inverse Fourier transform f (space- or time-domain signal) for frequency-domain signal F:

f(j) = (1/n) Σ_{k=1}^{n} F(k) · W_n^{−(j−1)(k−1)}

where W_n = e^{−2πi/n} is one of the n-th roots of unity.
Additional Note:
• fft() function execution time depends on the length defined for the transform to be carried out. The Transformation lengths with small prime factors are considerably faster than those with large
prime factors.
• For most values of n, real-input DFTs execute in approximately half the time of complex-input DFTs.
• In case the value of n has large prime factors, the difference in speed is negligible.
• The speed of fft() function can potentially be increased by implementing fftw, the utility function. This function, fftw can control the optimization of the algorithm used in the computation of
an FFT operation performed with a particular size and dimension.
Recommended Articles
This is a guide to Matlab fft(). Here we discuss the introduction to Matlab fft() and how fft() works, along with respective examples.
10.2.3 Circles I, PT3 Focus Practice
Question 11:
The Town Council plans to build an equilateral triangle platform in the middle of a roundabout. The diameter of circle RST is 24 m and the perpendicular distance from R to the line ST is 18 m. as
shown in Diagram below.
Given diameter = 24 m
hence radius = 12 m
O is the centre of the circle.
Using Pythagoras’ theorem:
The perpendicular distance from O to ST is 18 − 12 = 6 m, so with x as half the length of ST:
x² = 12² − 6²
x = √(144 − 36) = 10.39 m
TS = RS = RT = 10.39 m × 2 = 20.78 m
Perimeter of the platform = 20.78 × 3 = 62.34 m
Question 12:
Amy will place a ball on top of a pillar in Diagram below. Table below shows the diameters of three balls X, Y and Z.
Which ball X, Y or Z, can fit perfectly on the top of the pillar? Show the calculation to support Amy’s choice.
Let the radius of the top of the pillar be r cm, where O is the centre of the circle.
In △OQR, using Pythagoras' theorem:
r² = (r − 4)² + 8²
r² = r² − 8r + 16 + 64
8r = 80
r = 80/8 = 10 cm
Therefore, diameter = 2 × 10 = 20 cm.
Ball Y with diameter 20 cm can fit perfectly on top of the pillar.
Question 13:
Diagram below shows a rim of a bicycle wheel with a diameter of 26 cm. Kenny intends to build a holder for the rim.
Which of the rim holder, X, Y or Z, can fit the bicycle rim perfectly? Show the calculation to support your answer.
Let the radius of the rim holder be r cm, where O is the centre of the circle.
In △OQR, using Pythagoras' theorem:
r² = (r − 8)² + 12²
r² = r² − 16r + 64 + 144
16r = 208
r = 208/16 = 13 cm
Therefore, diameter = 2 × 13 = 26 cm.
Rim holder Z with diameter 26 cm can fit the bicycle rim perfectly.
Data Warehouse Implementation - Efficient Data Cube Computation
Data Warehouse Implementation
The big data which is to be analyzed and handled to draw insights from it will be stored in data warehouses.
These warehouses are run by OLAP servers which require processing of a query with seconds. So, a data warehouse should need highly efficient cube computation techniques, access methods, and query
processing techniques.
The core of multidimensional data analysis is the efficient computation of aggregations across many sets of dimensions.
In SQL aggregations are referred to as group-by’s.
Each group-by can be represented as a cuboid.
Set of group-by’s forms a lattice of a cuboid defining a data cube.
Efficient Data Cube Computation
The compute cube Operator and the Curse of Dimensionality
The compute cube operator computes aggregates over all subsets of the dimensions specified in the operation.
It requires excessive storage space, especially for a large number of dimensions.
A data cube is a lattice of cuboids.
Suppose that we create a data cube for ProElectronics(Company) sales that contains the following: city, item, year, and sales_in_dollars.
Compute the sum of sales, grouping by city, and item.
Compute the sum of sales, grouping by city.
Compute the sum of sales, grouping by item.
What is the total number of cuboids, or group-by’s, that can be computed for this data cube?
Three attributes:
city, item, year (dimensions), sales_in_dollars (measure).
The total number of cuboids or group-by’s computed for this cube is 2^3=8.
Group-by’s: {(city,item,year), (city, item), (city, year), (item, year), (city), (item), (year),()}.
() : group-by is empty i.e. the dimensions are not grouped.
The base cuboid contains all three dimensions.
Apex cuboid is empty.
On-line analytical processing may need to access different cuboids for different queries.
So we have to compute all or at least some of the cuboids in the data cube in advance.
Precomputation leads to fast response time and avoids some redundant computation.
A major challenge related to precomputation would be storage space if all the cuboids in the data cube are computed, especially when the cube has many dimensions.
The storage requirements are even more excessive when many of the dimensions have associated concept hierarchies, each with multiple levels.
This problem is referred to as the Curse of Dimensionality.
Cube Operation
Cube definition and computation in DMQL
• define cube sales_cube[ city, item, year] (sales_in_dollars)
Transform it into a SQL-like language (with a new operator cube by, introduced by Gray et al.’96)
• SELECT item, city, year, SUM (amount) FROM SALES CUBE BY item, city, year
Data cube can be viewed as a lattice of cuboids
• The bottom-most cuboid is the base cuboid.
• The top-most cuboid (apex) contains only one cell.
• How many cuboids are there in an n-dimensional cube with L_i levels in dimension i? T = ∏_{i=1}^{n} (L_i + 1), where the extra 1 counts the virtual level all.
• For example, the time dimension as specified above has 4 conceptual levels, or 5 if we include the virtual level all.
• If the cube has 10 dimensions and each dimension has 5 levels (including all), the total number of cuboids that can be generated is 5^10 ≈ 9.8 × 10^6.
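Both counts above can be verified in a couple of lines (the level counts here already include the virtual level all):

```python
from math import prod

def total_cuboids(levels_including_all):
    """Total cuboids T = product of per-dimension level counts,
    where each count already includes the virtual level 'all'."""
    return prod(levels_including_all)

print(total_cuboids([2, 2, 2]))   # 3 flat dimensions -> 2^3 = 8 group-by's
print(total_cuboids([5] * 10))    # 10 dims x 5 levels -> 9,765,625, about 9.8e6
```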
Data Cube Materialization
There are three choices for data cube materialization given a base cuboid.
How to select which materialization to use
• Identify the subsets of cuboids or subcubes to materialize.
• Exploit the materialized cuboids or subcubes during query processing.
• Efficiently update the materialized cuboids or subcubes during load and refresh.
Selection of which cuboids to materialize
• Based on the size, queries in the workload, accessing cost, their frequencies, etc.
Indexing OLAP Data: Bitmap Index
First of all, create an index table on a particular column of the table.
Then each value in the column has got a bit vector: bit-op is fast.
The length of the bit vector: # of records in the base table.
The i-th bit is set if the i-th row of the base table has the value for the indexed column.
It's not suitable for high cardinality domains.
Indexing OLAP Data: Join Indices
The join indexing method gained popularity from its use in relational database query processing.
The join index records can identify joinable tuples without performing costly join operations.
Join indexing is especially useful for maintaining the relationship between a foreign key and its matching primary keys, from the joinable relation.
Suppose that there are 360 time values, 100 items, 50 branches, 30 locations, and 10 million sales tuples in the sales star data cube. If the sales fact table has recorded sales for only 30 items,
the remaining 70 items will obviously not participate in joins. If join indices are not used, additional I/Os have to be performed to bring the joining portions of the fact table and dimension tables together.
To further speed up query processing, the join indexing, and bitmap indexing methods can be integrated to form bitmapped join indices.
Microsoft SQL Server and Sybase IQ support bitmap indices. Oracle 8 uses bitmap and join indices.
Efficient Processing OLAP Queries
The purpose of materializing cuboids and constructing OLAP index structures is to speed up the query processing in data cubes.
Given materialized views, query processing should proceed as follows:
Determine which operations should be performed on the available cuboids:
• Transform drill, roll, etc. into the corresponding SQL and/or OLAP operations, e.g., dice = selection + projection.
Determine to which materialized cuboid(s) the relevant operations should be applied:
• Suppose that the query to be processed be on {brand, province_or_state} with the selection constant “year = 2004”, and there are 4 materialized cuboids available: {year, item_name, city}, {year,
brand, country}, {year, brand, province_or_state}, {item_name, province_or_state} where year = 2004
Efficient computation of data cubes:
• Partial vs. full vs. no materialization
• Indexing OALP data: Bitmap index and join index
The paper 'Enhancement factors for the vertical response of footbridges subjected to stochastic crowd loading' has been published in the prestigious Computers & Structures journal, which has an impact factor of 1.719 for 2010.
This paper proposes a method of determining statistical enhancement factors to apply to single pedestrian responses to obtain corresponding crowd-induced vibration responses.
The full reference for the paper is:
Caprani, C.C., Keogh, J., Archbold, P. and Fanning, P. (2012), ‘Enhancement factors for the vertical response of footbridges subjected to stochastic crowd loading’, Computers & Structures, in press.
And it is available from: http://dx.doi.org/10.1016/j.compstruc.2012.03.006.
The vertical acceleration response of a hypothetical footbridge is predicted for a sample of single pedestrians and a crowd of pedestrians using a probabilistic approach. This approach uses
statistical distributions to account for the fact that pedestrian parameters are not identical for all pedestrians. Enhancement factors are proposed for predicting the response due to a crowd based
on the predicted accelerations of a single pedestrian. The significant contribution of this work is the generation of response curves identifying enhancement factors for a range of crowd densities
and synchronization levels. | {"url":"http://www.colincaprani.com/page/3/","timestamp":"2024-11-07T15:46:52Z","content_type":"application/xhtml+xml","content_length":"57780","record_id":"<urn:uuid:f0f0d506-c6df-4c80-a8e6-7aec136d5033>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00179.warc.gz"} |
Ingmar Metzler
I am a researcher in the research group Algebra at TU Darmstadt and work in the field of arithmetic geometry as a PhD student of Prof. Dr. Jan Hendrik Bruinier. The objects I am particularly
interested in are orthogonal and symplectic automorphic forms that are examined within the scope of our current project Geometry and Arithmetic of Uniformized Structures (CRC326) which is funded by
the DFG. Similar objects were also investigated in our previous project Uniform Structures in Arithmetic and Geometry, which had been funded by the federal state of Hessia as part of the LOEWE
initiative. In addition, I am involved in the project Digitally based teaching and learning in Hessen in order to advance digitalisation in German institutions of higher education, in particular,
in the context of TU-WAS. I am also passionate about developing the maths community as a founder and organiser of the ENTR workshop series.
Areas of interest
• automorphic forms, modular forms
• representation theory
• L-functions
• geometry
• operator algebras
• mathematical physics
• mathematical finance
• didactics
• deep learning
Cross Market Arbitrage - Betting Exchanges
How to Use, Cross Market Arbitrage, on Betting Exchanges
Cross market arbitrage is a betting strategy that bettors use to make a sure profit. They do so by spotting differences in odds prices across various betting markets and using them to their advantage.
The concept can seem complicated at first, but understanding it well can help bettors start using it on betting exchanges.
What Is Cross Market Arbitrage?
In cross market arbitrage, bettors place bets on many outcomes of the same game across multiple markets in order to secure a profit without relying on the game result. Bettors spot discrepancies in
odds offered by different bookmakers and betting exchanges and use them in order to cover all possible outcomes of a game and lock in a profit.
For example, in a tennis match a bettor wants to bet on a player to win. A betting exchange offers better odds for this player and another betting exchange offers better odds for the other player.
The bettor uses cross market arbitrage and places bets on both players in both exchanges and secures a profit regardless of who wins.
How Cross Market Arbitrage Works
Cross market arbitrage works by finding opportunities where the implied probabilities of all possible outcomes of a game add up to less than 100% (equivalently, the reciprocals of the decimal odds sum to less than 1). When this happens, the stakes can be split across the outcomes so that the total payout exceeds the total stake, which is what allows bettors to lock in a profit.
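The arithmetic can be sketched in a few lines: if the reciprocals of the decimal odds sum to less than 1, splitting the bankroll in proportion to each outcome's implied probability yields the same payout whichever outcome wins. The odds and bankroll below are made-up illustrative numbers, not real market prices.

```python
def arbitrage(odds, bankroll=100.0):
    """Check decimal odds (one per outcome, possibly taken from different
    exchanges) for a cross-market arbitrage opportunity.

    Returns (stakes, guaranteed_profit), or None if no arbitrage exists.
    """
    implied = [1.0 / o for o in odds]            # implied probability of each outcome
    total = sum(implied)                         # a total below 1.0 means arbitrage
    if total >= 1.0:
        return None
    stakes = [bankroll * p / total for p in implied]  # stake proportional to implied probability
    payout = bankroll / total                    # identical payout whichever outcome wins
    return stakes, payout - bankroll

# Tennis match: exchange A offers 2.10 on player 1, exchange B offers 2.05 on player 2.
result = arbitrage([2.10, 2.05])                 # a small guaranteed profit on a 100 bankroll
```

Note the design choice: staking in proportion to implied probability equalises the payout across outcomes, so the profit is locked in regardless of the result.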
Why Cross Market Arbitrage is Popular
Cross market arbitrage is popular among bettors because it offers a chance to make risk-free profits. Unlike traditional betting, where you risk losing your money if your prediction is wrong, cross
market arbitrage secures a profit. For this reason, it is an attractive strategy, especially for experienced bettors.
Another reason why cross market arbitrage is popular is the rise of online betting exchanges. Betting exchanges allow betting for an outcome and betting against it. Because bettors compete against
one another, the odds are varied. So bettors can find plenty of markets for the same event to bet on and differences in odds as well, creating more chances for cross market arbitrage.
The Pros of Cross Market Arbitrage
Cross Market Arbitrage offers many benefits. The most significant is the guarantee of a profit regardless of how the event turns out, because the bettor's stakes cover all possible outcomes.
The risk is therefore minimal: the main danger is a mistake in the calculations. In particular, bettors often forget to include the commission fee charged by the betting exchange, which can turn an
apparent arbitrage profit into a loss.
The strategy sounds complex at first, but it is relatively easy to learn and execute with a bit of practice. Now that many online betting exchanges are available, cross market
arbitrage is more accessible than ever.
The Cons of Cross Market Arbitrage
While cross market arbitrage is a strategy with many advantages, it has its challenges. The most significant challenge is that there are limited opportunities for true arbitrage. Even if there are
many opportunities, it is challenging for bettors to take them, because they can disappear quickly as the market adjusts. They need to be fast in order to use these opportunities to their advantage.
Some betting exchanges impose betting limits on how much bettors can bet, which can restrict the potential profit from arbitrage.
Another challenge is to keep a profitable account without being banned. Frequent wins from cross market arbitrage sometimes lead to account restrictions by bookmakers or exchange brokers who do not
favor this type of betting. Another challenge bettors often face when using this strategy is complex calculations. Accurately calculating what the stake should be for each bet, so that a
profit is secured from all possible outcomes regardless of how the match ends, can be very tricky, especially for beginners. Mistakes in calculations are very common and lead to losses instead of profits.
Tips to Overcome the Cross Market Arbitrage Challenges
Cross market arbitrage can be tricky, but there are ways to overcome the challenges and be successful.
In order to avoid complicated calculations and simplify the process, it is helpful to use arbitrage betting tools that automatically find opportunities on a betting exchange and calculate the stakes
for you. Beginner bettors need to become familiar with the strategy and practice and try different markets to get more skilled. For this reason it is good to start small and place small bets in order
to understand the process better and avoid losing their money if they make a mistake.
It is very important to monitor the markets and the odds, and to develop the skill to spot opportunities quickly when they occur. Searching for diverse markets is key to this strategy. Bettors usually
limit themselves to just one sport or market.
The more markets they explore, the more arbitrage opportunities they will find.
Cross market arbitrage is a very popular strategy bettors use on betting exchanges in order to make consistent profits with minimal risk. Bettors spot differences in odds across multiple markets, and
they place bets to cover all possible outcomes. While this strategy can be challenging, with practice and the right tools, cross market arbitrage can be very rewarding.
Publications
• L-PageRank for Semi-Supervised Learning (Applied Network Science)
• Implicit differentiation for fast hyperparameter selection in non-smooth convex learning (Journal of Machine Learning Research)
• On a multilevel Levenberg–Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations (Optimization Methods and Software)
• Semi-Linearized Proximal Alternating Minimization for a Discrete Mumford-Shah Model (IEEE Transactions on Image Processing)
• Compressive Statistical Learning with Random Feature Moments (Mathematical Statistics and Learning)
• Sketching Data Sets for Large-Scale Learning: Keeping only what you need (IEEE Signal Processing Magazine)
• Approximation spaces of deep neural networks (Constructive Approximation)
• Dual Extrapolation for Sparse Generalized Linear Models (Journal of Machine Learning Research)
• Fourier could be a Data Scientist: from Graph Fourier Transform to Signal Processing on Graphs (Comptes Rendus. Physique)
• Semi-relaxed Gromov-Wasserstein divergence with applications on graphs (ICLR 2022, 10th International Conference on Learning Representations)
• Template based Graph Neural Network with Optimal Transport Distances (NeurIPS 2022, 36th Conference on Neural Information Processing Systems)
• Spurious Valleys, NP-hardness, and Tractability of Sparse Matrix Factorization With Fixed Support (SIAM Journal on Matrix Analysis and Applications)
• Nonsmooth convex optimization to estimate the Covid-19 reproduction number space-time evolution with robustness against low quality data (IEEE Transactions on Signal Processing)
• An Embedding of ReLU Networks and an Analysis of their Identifiability (Constructive Approximation)
• Time Series Alignment with Global Invariances (Transactions on Machine Learning Research)
• Efficient Identification of Butterfly Sparse Matrix Factorizations (SIAM Journal on Mathematics of Data Science)
• Beyond L1: Faster and Better Sparse Models with skglm (NeurIPS)
• Multilevel proximal methods for image restoration (GRETSI'22, 28ème Colloque Francophone de Traitement du Signal et des Images)
• Fast learning of fast transforms, with guarantees (ICASSP 2022, IEEE International Conference on Acoustics, Speech and Signal Processing)
• Fast Multiscale Diffusion on Graphs (ICASSP 2022, IEEE International Conference on Acoustics, Speech and Signal Processing)
• Benchopt: Reproducible, efficient and collaborative optimization benchmarks (NeurIPS 2022, 36th Conference on Neural Information Processing Systems)
• Dimension-free convergence rates for gradient Langevin dynamics in RKHS (COLT 2022, 35th Annual Conference on Learning Theory)
• Quadrature rules in RKHSs based on DPPs (GRETSI 2022, XXVIIIème Colloque Francophone de Traitement du Signal et des Images)
• Compressive Clustering with an Optical Processing Unit (GRETSI 2022)
• Semi-relaxed Gromov-Wasserstein divergence for graphs classification (GRETSI 2022)
• Revisiting RIP guarantees for sketching operators on mixture models
• Approximation speed of quantized vs. unquantized ReLU neural networks and beyond
• On the Statistical Complexity of Estimation and Testing under Privacy Constraints
• Private Quantiles Estimation in the Presence of Atoms
• Multilevel FISTA for image restoration
• A theory of optimal convex regularization for low-dimensional recovery
• Self-supervised learning with rotation-invariant kernels
Software and code
• Code for the paper "Structured Support Exploration For Multilayer Sparse Matrix Factorization"
• Code for reproducible research: "Spurious Valleys, NP-hardness, and Tractability of Sparse Matrix Factorization With Fixed Support"
• Code for reproducible research: Fast Multiscale Diffusion on Graphs
• Code for reproducible research: Fast learning of fast transforms, with guarantees
• Code for reproducible research: Self-supervised learning with rotation-invariant kernels
• Code for reproducible research: Efficient Identification of Butterfly Sparse Matrix Factorizations
References
• Spatial and temporal regularization to estimate COVID-19 reproduction number R(t): Promoting piecewise smoothness via convex optimization (PLoS ONE)
• Convex analysis and monotone operator theory in Hilbert spaces
• Anderson acceleration of coordinate descent
• Compressed Sensing and its Applications (MATHEON Workshop 2013)
• Exact Reconstruction using Beurling Minimal Extrapolation (arXiv.org)
• Compressive Learning with Privacy Guarantees (Information and Inference)
• Proximal splitting methods in signal processing
• Distributed Adaptive Learning of Graph Signals (IEEE Transactions on Signal Processing)
• Cooperative and Graph Signal Processing: Principle and Applications
• Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing
• A Mathematical Introduction to Compressive Sensing
• Sparse inverse covariance estimation with the graphical lasso (Biostatistics)
• Translation on Graphs: An Isometric Shift Operator (IEEE Signal Processing Letters)
• Statistical Learning Guarantees for Compressive Clustering and Compressive Mixture Modeling (Mathematical Statistics and Learning)
• Structured Variable Selection with Sparsity-Inducing Norms (Journal of Machine Learning Research)
• A unified Framework for Structured Graph Learning via Spectral Constraints (Journal of Machine Learning Research)
• Random features for large-scale kernel machines
• Sub-sampled Newton methods (Mathematical Programming)
• The Emerging Field of Signal Processing on Graphs (IEEE Signal Processing Magazine)
• Hilbert Space Embeddings and Metrics on Probability Measures (Journal of Machine Learning Research)
• Dictionary Learning (IEEE Signal Processing Magazine)
• Controlling Wasserstein distances by Kernel norms with application to Compressive Statistical Learning
• Online Graph Dictionary Learning
• E2-Train: Training state-of-the-art CNNs with over 80% energy savings
• SWALP: Stochastic weight averaging in low precision training
• ADAHESSIAN: An adaptive second order optimizer for machine learning (arXiv:2006.00719)
• Nonconvex Sparse Graph Learning under Laplacian Constrained Graphical Model
• Identifiability in Two-Layer Sparse Matrix Factorization
Overall objectives
Building on a culture at the interface of signal modeling, mathematical optimization and statistical machine learning, the global objective of DANTE (and of its follow-up team Ockham, whose formal
creation process was started in 2022) is to develop computationally efficient and mathematically founded methods and models to process high-dimensional data. Our ambition is to develop
frugal signal processing and machine learning methods able to exploit structured models, intrinsically associated to resource-efficient implementations, and endowed with solid statistical guarantees.
Challenge 1: Developing frugal methods with robust expressivity.
The idea of frugal approaches means algorithms relying on a controlled use of computing resources, but also methods whose expressivity and flexibility provably relies on the versatile notion of
sparsity. This is expected to avoid the current pitfalls of costly over-parameterizations and to robustify the approaches with respect to adversarial examples and overfitting. More specifically, it
is essential to contribute to the understanding of methods based on neural networks, in order to improve their performance and most of all, their efficiency in resource-limited environments.
Challenge 2: Integrating models in learning algorithms.
To make statistical machine learning both more frugal and more interpretable, it is important to develop techniques able to exploit not only high-dimensional data but also models in various forms
when available. When some partial knowledge is available about some phenomena related to the processed data, e.g. under the form of a physical model such as a partial differential equation, or as a
graph capturing local or non-local correlations, the goal is to use this knowledge as an inspiration to adapt machine learning algorithms. The main challenge is to flexibly articulate a priori
knowledge and data-driven information, in order to achieve a controlled extrapolation of predicted phenomena much beyond the particular type of data on which they were observed, and even in
applications where training data is scarce.
Challenge 3: Guarantees on interpretability, explainability, and privacy.
The notion of sparsity and its structured avatars –notably via graphs– is known to play a fundamental role in ensuring the identifiability of decompositions in latent spaces, for example for
high-dimensional inverse problems in signal processing. The team's ambition is to deploy these ideas to ensure not only frugality but also some level of explainability of decisions and an
interpretability of learned parameters, which is an important societal stake for the acceptability of “algorithmic decisions”. Learning in small-dimensional latent spaces is also a way to spare
computing resources and, by limiting the public exposure of data, it is expected to enable tunable and quantifiable tradeoffs between the utility of the developed methods and their ability to
preserve privacy.
Research program
This project is resolutely at the interface of signal modeling, mathematical optimization and statistical machine learning, and concentrates on scientific objectives that are both ambitious –as they
are difficult and subject to a strong international competition– and realistic thanks to the richness and complementarity of skills they mobilize in the team.
Sparsity constitutes a backbone for this project, not only as a target to ensure resource-efficiency and privacy, but also as prior knowledge to be exploited to ensure the identifiability of
parameters and the interpretability of results. Graphs are its necessary alter ego, to flexibly model and exploit relations between variables, signals, and phenomena, whether these relations are
known a priori or to be inferred from data. Lastly, advanced large-scale optimization is a key tool to handle in a statistically controlled and algorithmically efficient way the dynamic and
incremental aspects of learning in varying environments.
The scientific activity of the project is articulated around the three axes described below. A common endeavor to these three axes consists in designing structured low-dimensional models, algorithms
of bounded complexity to adjust these models to data through learning mechanisms, and a control of the performance of these algorithms to exploit these models on tasks ranging from low-level signal
processing to the extraction of high-level information.
Axis 1: Sparsity for high-dimensional learning.
As now widely documented, the fact that a signal admits a sparse representation in some signal dictionary 51 is an enabling factor not only to address a variety of inverse problems with
high-dimensional signals and images, such as denoising, deconvolution, or declipping, but also to speedup or decrease the cost of the acquisition of analog signals in certain scenarios compatible
with compressive sensing 52, 45. The flexibility of the models, which can incorporate learned dictionaries 63, as well as structured and/or low-rank variants of the now-classical sparse modeling
paradigm 57, has been a key factor of the success of these approaches. Another important factor is the existence of algorithms of bounded complexity with provable performance, often associated to
convex regularization and proximal strategies 43, 48, allowing to identify latent sparse signal representations from low-dimensional indirect observations.
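As a minimal illustration of the convex-regularization-plus-proximal-algorithm paradigm mentioned above, the following sketch runs ISTA (proximal gradient with soft-thresholding) on a toy sparse recovery problem. The problem sizes and the regularization weight are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

def ista(A, y, lam, n_iter=2000):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Step size 1/L with L the Lipschitz constant of the smooth term's gradient.
    """
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of x -> A^T(Ax - y)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L             # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding prox of lam*||.||_1
    return x

# Toy sparse recovery: 40 noiseless measurements of a 3-sparse vector in dimension 100.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.5)                       # recovers the support of x_true
```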
While being now well-mastered (and in the core field of expertise of the team), these tools are typically constrained to relatively rigid settings where the unknown is described either as a sparse
vector or a low-rank matrix or tensor in high (but finite) dimension. Moreover, the algorithms hardly scale to the dimensions needed to handle inverse problems arising from the discretization of
physical models (e.g., for 3D wavefield reconstruction). A major challenge is to establish a comprehensive algorithmic and theoretical toolset to handle continuous notions of sparsity 46, which have
been identified as a way to potentially circumvent these bottlenecks. The other main challenge is to extend the sparse modeling paradigm to resource-efficient and interpretable statistical machine
learning. The methodological and conceptual output of this axis provides tools for Axes 2 and 3, which in return fuel the questions investigated in this axis.
1.1 Versatile and efficient sparse modeling. The goal is to propose flexible and resource-efficient sparse models, possibly leveraging classical notions of dictionaries and structured factorization,
but also the notion of sparsity in continuous domains (e.g. for sketched clustering, mixture model estimation, or image super-resolution), low-rank tensor representations, and neural networks with
sparse connection patterns.
Besides the empirical validation of these models and of the related algorithms on a diversity of targeted applications, the challenge is to determine conditions under which their success can be
mathematically controlled, and to determine the fundamental tradeoffs between the expressivity of these models and their complexity.
1.2 Sparse optimization. The main objectives are: a) to define cost functions and regularization penalties that integrate not only the targeted learning tasks, but also a priori knowledge, for
example under the form of conservation laws or as relation graphs, cf Axis 2; b) to design efficient and scalable algorithms 4, 8 to optimize these cost functions in a controlled manner in a
large-scale setting. To ensure the resource-efficiency of these algorithms, while avoiding pitfalls related to the discretization of high-dimensional problems (aka curse of dimensionality), we
investigate the notion of “continuous” sparsity (i.e., with sparse measures), of hierarchies (along the ideas of multilevel methods), and of reduced precision (cf also Axis 3). The nonconvexity and
non-smoothness of the problems are key challenges 2, and the exploitation of proximal algorithms and/or convexifications in the space of Borelian measures are privileged approaches.
1.3 Identifiability of latent sparse representations. To provide solid guarantees on the interpretability of sparse models obtained via learning, one needs to ensure the identifiability of the latent
variables associated to their parameters. This is particularly important when these parameters bear some meaning due to the underlying physics. Vice-versa, physical knowledge can guide the choice of
which latent parameters to estimate. By leveraging the team's know-how obtained in the field of inverse problems, compressive sensing and source separation in signal processing, we aim at
establishing theoretical guarantees on the uniqueness (modulo some equivalence classes to be characterized) of the solutions of the considered optimization problems, on their stability in the
presence of random or adversarial noise, and on the convergence and stability of the algorithms.
Axis 2: Learning on graphs and learning of graphs.
Graphs provide synthetic and sparse representations of the interactions between potentially high-dimensional data, whether in terms of proximity, statistical correlation, functional similarity, or
simple affinities. One central task in this domain is how to infer such discrete structures, from the observations, in a way that best accounts for the ties between data, without becoming too complex
due to spurious relationships. The graphical lasso 53 is among the most popular and successful algorithm to build a sparse representation of the relations between time series (observed at each node)
and that unveils relevant patterns of the data. Recent works (e.g. 58) strived to emphasize the clustered structure of the data by imposing spectral constraints to the Laplacian of the sought graphs,
with the aim to improve the performance of spectral approaches to unsupervised classification. In this direction, several challenges remain, such as for instance the transposition of the framework to
graph-based semi-supervised learning 1, where natural models are stochastic block models rather than strictly multi-component graphs (e.g. Gaussian mixtures models). As it is done in 69, the standard
$\ell_1$-norm penalization term of the graphical lasso could be questioned in this case. On another level, when low-rank (precision) matrices and/or preservation of privacy are important stakes,
one could be inspired by the sketching techniques developed in 55 and 47 to work out a sketched graphical lasso. There exists other situations where the graph is known a priori and does not need to
be inferred from the data. This is for instance the case when the data naturally lie on a graph (e.g. social networks or geographical graphs) and so, one has to combine this data structure with the
attributes (or measures) carried by the nodes or the edges of these graphs. Graph signal processing (GSP) 619, which underwent methodological developments at a very rapid pace in recent years, is
precisely an approach to jointly exploit algebraically these structures and attributes, either by filtering them, by re-organizing them, or by reducing them to principal components. However, as it
tends to be more and more the case, data collection processes yield very large data sets with high-dimensional graphs. In contrast to standard digital signal processing, which relies on regular graph
structures (cycle graph or Cartesian grid), treating complex structured data in a global form is not an easily scalable task 54. Hence, the notion of distributed GSP 49, 50 has naturally emerged. Yet,
very little has been done on graph signals supported on dynamical graphs that undergo vertices/edges editions.
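The structural idea behind graphical-lasso-type methods, namely that conditional independences appear as zeros of the precision matrix, which the graphical lasso then estimates sparsely, can be illustrated on a toy Gaussian chain. This sketch shows the zero pattern itself, not the graphical lasso algorithm; the chain coefficients and sample size are illustrative.

```python
import numpy as np

# Chain X0 -> X1 -> X2: X0 and X2 are marginally correlated, yet conditionally
# independent given X1, so the (0,2) entry of the precision matrix is zero.
# That zero pattern is exactly the sparse graph structure a graphical lasso
# would estimate from data.
rng = np.random.default_rng(1)
n = 200_000
x0 = rng.standard_normal(n)
x1 = 0.8 * x0 + rng.standard_normal(n)
x2 = 0.8 * x1 + rng.standard_normal(n)
X = np.stack([x0, x1, x2], axis=1)

cov = np.cov(X, rowvar=False)
theta = np.linalg.inv(cov)                  # empirical precision matrix
corr02 = cov[0, 2] / np.sqrt(cov[0, 0] * cov[2, 2])  # strong marginal correlation...
# ...while theta[0, 2] is (near) zero: no direct edge between X0 and X2.
```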
2.1 Learning of graphs. When the graphical structure of the data is not known a priori, one needs to explore how to build it or to infer it. In the case of partially known graphs, this raises several
questions in terms of relevance with respect to sparse learning. For example, a challenge is to determine which edges should be kept, whether they should be oriented, and how attributes on the graph
could be taken into account (in particular when considering time-series on graphs) to better infer the nature and structure of the un-observed interactions. We strive to adapt known approaches such
as the graphical lasso to estimate the covariance under a sparsity constraint (integrating also temporal priors), and investigate diffusion approaches to study the identifiability of the graphs. In
connection with Axis 1.2, a particular challenge is to incorporate a priori knowledge coming from physical models that offer concise and interpretable descriptions of the data and their interactions.
2.2 Distributed and adaptive learning on graphs. The availability of a known graph structure underlying training data offers many opportunities to develop distributed approaches, open perspectives
where graph signal processing and machine learning can mutually fertilize each other.
Some classifiers can be formalized as solutions of a constrained optimization problem, and an important objective is then to reduce their global complexity by developing distributed versions of these
algorithms. Compared to costly centralized solutions, distributing the operations by restricting them to local node neighborhoods will enable solutions that are both more frugal and more
privacy-friendly. In the case of dynamic graphs, the idea is to get inspiration from adaptive processing techniques to make the algorithms able to track the temporal evolution of data, either in
terms of structural evolution or of temporal variations of the attributes. This aspect finds a natural continuation in the objectives of Axis 3.
Axis 3: Dynamic and frugal learning.
With the resurgence of neural networks approaches in machine learning, training times of the order of days, weeks, or even months are common. Mainstream research in deep learning somehow applies it
to an increasingly large class of problems and follows the general wisdom of improving a model's prediction accuracy by "stacking more layers", making the approach ever more resource-hungry. The
theory underpinning which resources are needed for a network architecture to achieve a given accuracy is still in its infancy. Efficient scaling of such techniques to massive sample sizes or dimensions in a
resource-restricted environment remains a challenge and is a particularly active field of academic and industrial R&D, with recent interest in techniques such as sketching, dimension reduction, and
approximate optimization.
A central challenge is to develop novel approximate techniques with reduced computational and memory imprint. For certain unsupervised learning tasks such as PCA, unsupervised clustering, or
parametric density estimation, random features (e.g. random Fourier features 59) allow to compute aggregated sketches guaranteed to preserve the information needed to learn, and no more: this has led
to the compressive learning framework, which is endowed with statistical learning guarantees 55 as well as privacy preservation guarantees 47. A sketch can be seen as an embedding of the empirical
probability distribution of the dataset with a particular form of kernel mean embedding 62. Yet, designing random features given a learning task remains something of an art, and a major challenge is
to design provably good end-to-end sketching pipelines with controlled complexity for supervised classification, structured matrix factorization, and deep learning.
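A minimal sketch of the random-feature sketching idea: averaging random Fourier features over a dataset produces a fixed-size summary of its empirical distribution (a simple kernel mean embedding). The feature dimension and sampling choices below are illustrative, not a full compressive-learning pipeline.

```python
import numpy as np

def sketch(X, W, b):
    """Average random Fourier features over a dataset: a fixed-size summary
    of the empirical distribution, whose size is independent of len(X)."""
    return np.cos(X @ W + b).mean(axis=0)        # shape (m,)

rng = np.random.default_rng(0)
d, m = 2, 256                                    # data dimension, sketch size
W = rng.standard_normal((d, m))                  # frequencies ~ N(0, I) (Gaussian kernel)
b = rng.uniform(0, 2 * np.pi, m)                 # random phases

X1 = rng.standard_normal((5000, d))              # two samples from the same distribution
X2 = rng.standard_normal((5000, d))
X3 = rng.standard_normal((5000, d)) + 3.0        # a shifted distribution

# Sketches of same-distribution datasets are close; the shifted one is not.
s1, s2, s3 = sketch(X1, W, b), sketch(X2, W, b), sketch(X3, W, b)
```

This is the sense in which a sketch embeds the data distribution: downstream learning can operate on the m-dimensional vector instead of the raw dataset.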
Another crucial direction is the use of dynamical learning methods, capable of exploiting wisely multiple representations at different scales of the problem at hand. For instance, many low and
mixed-precision variants of gradient-based methods have been recently proposed 67, 66, which are however based on a static reduced precision policy, while a dynamic approach can lead to much improved
energy-efficiency. Also, despite their massive success, gradient-based training methods still possess many weaknesses (low convergence rate, dependence on the tuning of the learning parameters,
vanishing and exploding gradients) and the use of dynamical information promises to allow for the development of alternative methods, such as second-order or multilevel methods, which are as scalable
as first-order methods but with faster convergence guarantees 60, 68.
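A toy sketch of a dynamic-precision policy on a least-squares problem: run gradient descent with crudely quantized gradients, and raise the bit width once progress stalls. The uniform rounding used here is a stand-in for real low-precision hardware formats, and the switching rule is an arbitrary illustrative choice.

```python
import numpy as np

def quantize(g, bits):
    """Uniform quantization of a gradient to a given bit width (a crude toy
    model of low-precision arithmetic, not a hardware float format)."""
    scale = np.max(np.abs(g)) / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(g / scale) * scale

# Least-squares toy problem: start with cheap 4-bit gradients, then switch to
# 16-bit once the gradient norm falls below a threshold.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_star = rng.standard_normal(10)
y = A @ x_star
x = np.zeros(10)
lr = 1.0 / np.linalg.norm(A, 2) ** 2             # step size 1/L
bits = 4
for _ in range(400):
    g = quantize(A.T @ (A @ x - y), bits)
    x -= lr * g
    if bits == 4 and np.linalg.norm(g) < 1.0:    # progress stalls: raise precision
        bits = 16
err = np.linalg.norm(x - x_star)                 # small once high precision kicks in
```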
The overall objective in this axis is to adapt in a controlled manner the information that is extracted from datasets or data streams and to dynamically use such information in learning, in order to
optimize the tradeoffs between statistical significance, resource-efficiency, privacy-preservation and integration of a priori knowledge.
3.1 Compressive and privacy-preserving learning. The goal is to compress training datasets as soon as possible in the processing workflow, before even starting to learn. In the spirit of compressive
sensing, this is desirable not only to ensure the frugal use of resources (memory and computation), but also to preserve privacy by limiting the diffusion of raw datasets and controlling the
information that could actually be extracted from the targeted compressed representations, called sketches, obtained by well-chosen nonlinear random projections. We aim to build on a compressive
learning framework developed by the team with the viewpoint that sketches provide an embedding of the data distribution, which should preserve some metrics, either associated to the specific learning
task or to more generic optimal transport formulations. Besides ensuring the identifiability of the task-specific information from a sketch (cf Axis 1.3), an objective is to efficiently extract this
information from a sketch, for example via algorithms related to avatars of continuous sparsity as studied in Axis 1.2. A particular challenge, connected with Axis 2.1 when inferring dynamic graphs
from correlation of non-stationary times series, and with Axis 3.2 below, is to dynamically adapt the sketching mechanism to the analyzed data stream.
3.2 Sequential sparse learning. Whether aiming at dynamically learning on data streams (cf. Axes 2.1 and 2.2), at integrating a priori physical knowledge when learning, or at ensuring domain
adaptation for transfer learning, the objective is to achieve a statistically near-optimal update of a model from a sequence of observations whose content can also dynamically vary. When considering
time-series on graphs, to preserve resource-efficiency and increase robustness, the algorithms further need to update the current models by dynamically integrating the data stream.
3.3 Dynamic-precision learning. The goal is to propose new optimization algorithms to overcome the cost of solving large scale problems in learning, by dynamically adapting the precision of the data.
The main idea is to exploit multiple representations at different scales of the problem at hand. We explore in particular two different directions to build the scales of problems: a) exploiting ideas
coming from multilevel optimization to propose dynamical hierarchical approaches exploiting representations of the problem of progressively reduced dimension; b) leveraging the recent advances in
hardware and the possibility of representing data at multiple precision levels provided by them. We aim at improving over state-of-the-art training strategies by investigating the design of scalable
multilevel and mixed-precision second-order optimization and quantization methods, possibly derivative-free.
Application domains
The primary objectives of this project, which is rooted in Signal Processing and Machine Learning methodology, are to develop flexible methods, endowed with solid mathematical foundations and
efficient algorithmic implementations, that can be adapted to numerous application domains. We are nevertheless convinced that such methods are best developed in strong and regular connection with
concrete applications, which are not only necessary to validate the approaches but also to fuel the methodological investigations with relevant and fruitful ideas. The following application domains
are primarily investigated in partnership with research groups with the relevant expertise.
Frugal AI on embedded devices
There is a strong need to drastically compress signal processing and machine learning models (typically, but not only, deep neural networks) to fit them on embedded devices. For example, on
autonomous vehicles, strong constraints (reliability, energy consumption, production costs) mean that the memory and computing resources of dedicated high-end image-analysis hardware are two orders of magnitude smaller than what is typically required to run state-of-the-art deep network models in real time. The research conducted in the DANTE project finds direct applications in these areas,
including: compressing deep neural networks to obtain low-bandwidth video-codecs that can run on smartphones with limited memory resources; sketched learning and sparse networks for autonomous
vehicles; or sketching algorithms tailored to exploit optical processing units for energy efficient large-scale learning.
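To make the compression ratios involved concrete, here is a minimal, illustrative sketch (an assumed textbook technique, not DANTE's actual method) of uniform post-training quantization: float32 weights are mapped to 8-bit integers plus a single scale factor, cutting storage by a factor of four.

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric quantization of a weight tensor to int8.
    Returns the quantized integers and the scale needed to dequantize."""
    scale = np.max(np.abs(w)) / 127.0  # map the largest weight magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, s = quantize_int8(w)

# Storage drops 4x (int8 vs float32); rounding error is bounded by scale/2.
err = np.max(np.abs(w - dequantize(q, s)))
print(q.nbytes, w.nbytes, err)
```

Real deployments add per-channel scales, calibration data, and quantization-aware fine-tuning, but the storage/accuracy trade-off is already visible in this sketch.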
Imaging in physics and medicine
Many problems in imaging involve the reconstruction of large-scale data from limited and noise-corrupted measurements. In this context, the research conducted in DANTE pays special attention to
modeling domain knowledge such as physical constraints or prior medical knowledge. This finds applications from physics to medical imaging, including: multiphase flow image characterization; near-infrared polarization imaging of circumstellar environments; compressive sensing for joint segmentation and high-resolution 3D MRI; or graph signal processing for radio astronomy imaging with the
Square Kilometer Array (SKA).
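For intuition on such reconstruction problems, the following toy sketch (illustrative only; the problem sizes and the classical ISTA solver are assumptions, not the team's algorithms) recovers a sparse signal from a small number of noisy linear measurements by solving an l1-regularized least-squares problem:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5            # signal length, number of measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement operator
y = A @ x_true + 0.01 * rng.normal(size=m)  # limited, noise-corrupted measurements

# ISTA: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1
lam = 0.02
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                   # gradient of the data-fit term
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative recovery error
```

Even with m well below n, the sparsity prior makes the ill-posed problem recoverable; real imaging applications replace the random operator with a physical forward model and the l1 term with domain-specific regularizers.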
Interactions with computational social sciences
Based on collaborations with the relevant experts, the team also regularly investigates applications in computational social science. For example, modeling infectious disease epidemics requires efficient methods to reduce the complexity of large networked datasets while preserving the ability to feed effective and realistic data-driven models of spreading phenomena. In another area,
estimating the vote transfer matrices between two elections is an ill-posed problem that requires the design of adapted regularization schemes together with the associated optimization algorithms.
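As a toy version of the vote-transfer problem (the setup below is hypothetical, not the team's actual scheme): given district-level vote shares A in the first election and B in the second, one can estimate a row-stochastic transfer matrix T with A T ≈ B by projected gradient descent, projecting each row of T back onto the probability simplex after every step:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of each row of v onto the probability simplex."""
    u = np.sort(v, axis=1)[:, ::-1]         # sort each row in decreasing order
    css = np.cumsum(u, axis=1) - 1.0
    idx = np.arange(1, v.shape[1] + 1)
    rho = (u - css / idx > 0).sum(axis=1)   # size of the support of the projection
    theta = css[np.arange(v.shape[0]), rho - 1] / rho
    return np.maximum(v - theta[:, None], 0.0)

rng = np.random.default_rng(2)
d, p, q = 50, 3, 3                           # districts, parties in each election
T_true = project_simplex(rng.random((p, q))) # ground-truth transfer matrix
A = project_simplex(rng.random((d, p)))      # vote shares, election 1
B = A @ T_true                               # vote shares, election 2

T = np.full((p, q), 1.0 / q)                 # start from the uniform transfer matrix
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(2000):
    T = project_simplex(T - step * (A.T @ (A @ T - B)))

print(np.abs(T - T_true).max())              # recovers T_true in this noiseless toy
```

With real data, B = A T never holds exactly and the problem is ill-posed, which is where the adapted regularization schemes mentioned above come in.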