\section*{}
\vspace{5mm}
Amplitudes of
hard (light-cone dominated) exclusive processes in QCD
are expressed by the factorization formulae,
which separate the short-distance dynamics from the
long-distance one\cite{BLreport}.
{}For
those processes
producing a (light) vector meson $V$ ($= \rho, \omega, K^{*}, \phi$)
in the final state,
like exclusive semileptonic or radiative $B$ decays
($B \rightarrow Ve\nu$, $B\rightarrow V + \gamma$)
and the hard electroproduction of vector mesons
($\gamma^{*} + N\rightarrow V + N'$),
a long-distance part involving the final vector meson
is given by the vacuum-to-meson matrix element of
the nonlocal
light-cone operators
$\langle 0| \bar{\psi}(z) \Gamma \lambda^{i} \psi(-z) |V \rangle$,
corresponding to universal nonperturbative quantities called
the light-cone distribution amplitudes (DAs).
($z_{\mu}$ is a light-like vector
$z^{2} = 0$, and $\lambda^{i}$ and $\Gamma$ denote various
flavor and Dirac matrices.
We do not show the gauge phase factor connecting the quark
and the antiquark fields.)
The DAs of higher twist are essential for systematic study
of preasymptotic corrections to hard exclusive amplitudes, and
are also interesting
theoretically
because they contain new and direct information
on hadron structure and the dynamics of QCD.
Here we will explicitly consider the ``chiral-odd'' DAs
of the charged $\rho$ meson with momentum $P_{\mu}$
and polarization vector $e^{(\lambda)}_{\mu}$
($P^{2} = m_{\rho}^{2}$, $e^{(\lambda)}\cdot e^{(\lambda)} = -1$,
$P\cdot e^{(\lambda)} = 0$).
The ``chiral-even'' DAs can be treated similarly\cite{BBKT98};
and extension to
other vector mesons is straightforward (see below).
The relevant quark-antiquark DAs are defined
with the chirality-violating Dirac matrix structures
$\Gamma = \{ \sigma_{\mu \nu}, 1\}$ as
\begin{eqnarray}
\langle 0| \bar{u}(z) \sigma_{\mu \nu} d(-z)
| \rho^{-}(P, \lambda)\rangle
&=& if_{\rho}^{T} \left[
\left(e^{(\lambda)}_{\perp \mu}P_{\nu} -e^{(\lambda)}_{\perp \nu}P_{\mu}
\right)
\int_{0}^{1} \!\! du\
e^{i\xi P\cdot z}
\phi_{\perp}(u, \mu^{2})\right.
\nonumber \\
&+&\left. \left(P_{\mu}z_{\nu}-P_{\nu}z_{\mu}\right)
\frac{e^{(\lambda)} \cdot z}{(P\cdot z)^{2}}m_{\rho}^{2}
\int_{0}^{1} \!\! du\
e^{i\xi P\cdot z}
h_{\parallel}^{(t)}(u, \mu^{2})
\right],
\label{eq:defT}
\end{eqnarray}
\begin{equation}
\langle 0| \bar{u}(z) d(-z)
| \rho^{-}(P, \lambda)\rangle
= -i
\left(
f_{\rho}^{T}
- f_{\rho}\frac{m_{u}+m_{d}}{m_{\rho}}\right)
m_{\rho}^{2}
(e^{(\lambda)} \cdot z)
\int_{0}^{1} \!\! du\
e^{i\xi P\cdot z}
h_{\parallel}^{(s)}(u, \mu^{2}).
\label{eq:defS}
\end{equation}
In (\ref{eq:defT}),
we neglect the Lorentz structures corresponding to
the twist-4 terms
for simplicity (see Ref.\cite{BBKT98} for the complete expressions).
Our Lorentz frame is chosen as $P\cdot z = P^{+}z^{-}$.
The nonlocal operators on the l.h.s.
are renormalized at scale $\mu$.
We set $\xi \equiv u - (1-u) = 2u-1$,
and $f_{\rho}$ and $f_{\rho}^{T}$ are
the usual vector and tensor decay constants as
$\langle 0| \bar{u}(0) \gamma_{\mu} d(0)
| \rho^{-}(P, \lambda)\rangle = f_{\rho} m_{\rho} e^{(\lambda)}_{\mu}$
and
$\langle 0| \bar{u}(0) \sigma_{\mu \nu} d(0)
| \rho^{-}(P, \lambda)\rangle = i f_{\rho}^{T}(e^{(\lambda)}_{\mu}P_{\nu}
- e^{(\lambda)}_{\nu}P_{\mu})$.
We wrote
$e^{(\lambda)}_{\mu} = e^{(\lambda)}_{\parallel \mu}
+ e^{(\lambda)}_{\perp \mu}$
with $e^{(\lambda)}_{\parallel \mu} = [P_{\mu}
- z_{\mu}m_{\rho}^{2}/(P\cdot z)](e^{(\lambda)}\cdot z)/(P\cdot z)$,
so that the DAs with the subscripts $\parallel$ and $\perp$
describe longitudinally
and transversely
polarized $\rho$ mesons, respectively.
$\phi_{\perp}$ is of twist-2, while
$h_{\parallel}^{(t)}$ and $h_{\parallel}^{(s)}$ are of twist-3.
These DAs are dimensionless functions
and describe the probability amplitudes
to find the $\rho$ in a state with a quark and an antiquark,
which carry the light-cone momentum fractions $u$ and
$1-u$, respectively,
and have a small transverse separation of order $1/\mu$.
The
quark-antiquark-gluon DAs
can be defined
similarly;
they are of twist-3 and higher. There exists one chiral-odd
DA of twist-3, which is given by
\begin{eqnarray}
\lefteqn{
\langle 0| \bar{u}(z) \sigma^{\mu \nu}z_{\nu} gG_{\mu\eta}(vz)
z^{\eta}
d(-z) | \rho^{-}(P, \lambda)\rangle}\nonumber\\
&&\;\;\;\;\;\;\;\;\;\;=
(P\cdot z)(e^{(\lambda)} \cdot z)f_{3\rho}^{T}m_{\rho}
\int_{0}^{1}
\!\!{\cal D}\underline{\alpha}\
e^{-iP\cdot z(\alpha_{u} - \alpha_{d} + v\alpha_{g})}
{\cal T}(\underline{\alpha}, \mu^{2}) ,
\label{eq:def3DA}
\end{eqnarray}
where $G_{\mu \eta}$ is the gluon field strength tensor,
${\cal D}\underline{\alpha} \equiv d\alpha_{d}d\alpha_{u} d\alpha_{g}
\delta(1 - \alpha_{d}-\alpha_{u}-\alpha_{g})$, and $\underline{\alpha}$
denotes the set of three light-cone momentum fractions:
$\alpha_{d}$ (quark), $\alpha_{u}$ (antiquark), and $\alpha_{g}$
(gluon).
$f_{3\rho}^{T}$ is the three-body tensor decay constant\cite{BBKT98},
so that ${\cal T}$ is dimensionless
and is conveniently normalized as
$\int {\cal D}\underline{\alpha} (\alpha_{d} - \alpha_{u})
{\cal T}(\underline{\alpha}) = 1$.
The three-particle DA ${\cal T}$ describes a higher component
in the Fock-state wave function with an additional gluon.
The basis of DAs defined above is overcomplete
due to the constraints from the QCD equations of motion.
Using
$(i
\not{\!\! D} - m_{q})q(x) =0$ ($q= u, d$),
we obtain
\begin{eqnarray}
\lefteqn{\frac{1}{2} x^{\nu} \frac{\partial}{\partial x_{\mu}}
\bar{u}(x) \left[ \gamma_{\mu}, \gamma_{\nu}\right]_{\pm}
d(-x)
= \int_{-1}^{1}\! dv v^{(1 \mp 1)/2}
\bar{u}(x) \sigma^{\mu \nu} x_{\nu}
g G_{\mu \eta}(vx) x^{\eta} d(-x)} \nonumber \\
&&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
- \frac{x^{\nu}}{2}\frac{\partial}{\partial y_{\mu}}
\left. \left\{ \bar{u}(x + y)
\left[ \gamma_{\mu}, \gamma_{\nu}\right]_{\mp}
d(-x + y) \right\}\right|_{y \rightarrow 0}
\!\! + i (m_{u} \pm m_{d}) \bar{u}(x) \rlap/{\mkern-1mu x} d(-x),
\label{eq:id}
\end{eqnarray}
where
$\left[ \gamma_{\mu}, \gamma_{\nu} \right]_{\pm}
\equiv \gamma_{\mu}\gamma_{\nu} \pm \gamma_{\nu}\gamma_{\mu}$.
In the light-cone limit $x^{2} \rightarrow 0$,
the vacuum-to-meson matrix elements of (\ref{eq:id})
yield a system of integral equations
between two- and three-particle DAs defined above.
We note that the
total derivative term
induces mixing between $h_{\parallel}^{(t)}$
and $h_{\parallel}^{(s)}$.
We can solve\cite{BBKT98} these coupled integral equations
in a form ($i=s, t$)
\begin{equation}
h_{\parallel}^{(i)}(u, \mu^{2}) =
\int_{0}^{1} \!dv\ K^{(i)}_{WW}(u,v) \phi_{\perp}(v, \mu^{2})
+ \int_{0}^{1}\! {\cal D}\underline{\alpha}\
K^{(i)}_{g}(u, \underline{\alpha})
{\cal T}(\underline{\alpha}, \mu^{2}),
\label{eq:sol}
\end{equation}
where $K^{(i)}_{X}$ ($X=WW, g$) are independent of $\mu$.
We omit
the quark mass correction term for simplicity.
The first term on the r.h.s. is
the twist-2 contribution and
thus the analogue of the
Wandzura-Wilczek piece of the nucleon structure function
$g_{2}(x, Q^{2})$.
To analyze (\ref{eq:sol}),
we introduce the conformal partial wave expansion
in the $m_{q}\rightarrow 0$ limit\cite{BBKT98,BF90}.
The conformal expansion of light-cone DAs
is analogous to the partial wave expansion
of wave functions in standard quantum mechanics (QM).
In the conformal expansion,
the invariance of massless QCD under conformal transformations
plays the role of the rotational symmetry in QM.
In QM, the purpose of partial wave
decomposition is to separate angular degrees of freedom
from radial ones
(for spherically symmetric potentials).
All dependence on the angular
coordinates is included
in spherical harmonics which form an irreducible
representation of the group O(3),
and the dependence on the single
remaining radial coordinate is governed by a one-dimensional
Schr\"{o}dinger equation.
Similarly, the conformal expansion of DAs in QCD aims
to separate longitudinal degrees of freedom
from transverse ones.
All dependence on the longitudinal momentum fractions
is included in terms of certain orthogonal polynomials
which form irreducible representations
of the so-called collinear subgroup
of the conformal group, SL(2,R).
The transverse-momentum dependence
(the scale-dependence) is governed
by simple renormalization group equations:
the different partial waves,
labeled by different ``conformal spins'',
behave independently and do not mix with each other.
Since the conformal invariance of QCD is
broken by quantum corrections, mixing of different terms of the
conformal expansion is only absent to leading logarithmic accuracy.
Still, conformal spin is a good quantum number in hard processes,
up to small corrections of order $\alpha_{s}^{2}$.
The conformal expansion of the DAs on the r.h.s. of
(\ref{eq:sol}) reads
\begin{equation}
\phi_{\perp}(u, \mu^{2}) = 6u\bar{u} \sum_{n=0}^{\infty}
a_{n}(\mu^{2}) C_{n}^{3/2}(\xi); \;\;\;\;\;\;
{\cal T}(\underline{\alpha}, \mu^{2})
= 360 \alpha_{d}\alpha_{u}\alpha_{g}^{2} \sum_{k,l=0}^{\infty}
\omega_{kl}(\mu^{2}) J_{kl}(\alpha_{d}, \alpha_{u}),
\label{eq:ce2}
\end{equation}
where $\bar{u}= 1-u$, $\xi=2u-1$,
$\alpha_{g} = 1 - \alpha_{d} - \alpha_{u}$, and $C_{n}^{3/2}$ and
$J_{kl}$ are particular Gegenbauer and Appell
polynomials\cite{BBKT98}.
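As a minimal numerical illustration (a Python sketch, not part of the original analysis), one can check the structure behind the first equation of (\ref{eq:ce2}): the Gegenbauer polynomials $C_n^{3/2}(\xi)$ are orthogonal on $\xi \in [-1,1]$ with weight $(1-\xi^2) \propto u\bar{u}$, and the asymptotic prefactor $6u\bar{u}$ is normalized to unity.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gegenbauer

# C_n^{3/2}(xi) are orthogonal on xi in [-1, 1] with weight (1 - xi^2),
# i.e. with weight proportional to u*(1-u), where xi = 2u - 1.
C1 = gegenbauer(1, 1.5)
C2 = gegenbauer(2, 1.5)

inner = quad(lambda xi: (1 - xi**2) * C1(xi) * C2(xi), -1, 1)[0]

# integral of 6*u*(1-u) du, rewritten in xi: (3/4)*(1 - xi^2) dxi
norm = quad(lambda xi: 0.75 * (1 - xi**2), -1, 1)[0]

print(abs(inner) < 1e-10)    # distinct conformal spins are orthogonal
print(abs(norm - 1.0) < 1e-10)  # leading term of phi_perp integrates to 1
```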
Using the orthogonality relations of these polynomials,
one can express the expansion coefficients in terms of matrix elements
of local conformal operators:
$a_{n}$ and $\omega_{kl}$ are given by
local operators of conformal spin $j= n+2$ and $j= k+l+7/2$,
respectively
(three-particle conformal
representations are degenerate).
Thanks to conformal symmetry,
the kernels $K_{X}^{(i)}$ of (\ref{eq:sol}) can be
resolved into a superposition of terms of definite
conformal spin.
As a result, $h_{\parallel}^{(i)}$ are given by the conformal
expansion, where each expansion coefficient
is expressed in terms of $a_{n}$ and $\omega_{kl}$
with the corresponding spin $j$.
Now
all DAs are expressed,
order by order in the conformal expansion,
by independent matrix elements
$a_{n}$ and $\omega_{kl}$.
{}From (\ref{eq:sol}) and (\ref{eq:ce2}),
the $\mu^{2}$-dependence of the DAs is governed by that
of $a_{n}(\mu^{2})$ and $\omega_{kl}(\mu^{2})$.
This is determined by the renormalization of the corresponding
local conformal operators, and
is worked out in the leading logarithmic
approximation\cite{BBKT98,KNT98}.
The results indicate that the relevant anomalous dimensions
increase as $\sim \ln j$.
This means that only the first few conformal partial waves
contribute at sufficiently large scales.
Therefore, the truncation of the conformal expansion
at some low order provides a useful and consistent
approximation of the full DAs.
We take into account the partial waves with $j \le 9/2$,
where the terms with $n \le 2$ and $k+l \le 1$
are retained in
the first and second equations of (\ref{eq:ce2}).
Correspondingly, we get\cite{BBKT98}
\begin{eqnarray}
h_{\parallel}^{(s)}(u) & = &
6u\bar{u} \left[ 1 + a_1 \xi + \frac{1}{4}a_2 (5\xi^2-1)\right]
+ 35 u\bar{u}\zeta_{3} (5\xi^2-1) \nonumber\\
&+& 3 \delta_{+} (3 u \bar{u} + \bar{u} \ln \bar{u} + u \ln u)
+ 3 \delta_{-} (\bar{u}
\ln \bar{u} - u \ln u), \label{eq:trun}\\
h_{\parallel}^{(t)}(u) &= &
3\xi^2+ \frac{3}{2}a_1 \xi (3 \xi^2-1)
+ \frac{3}{2} a_2 \xi^2 (5\xi^2-3)
+\frac{35}{4}\zeta_{3}(3-30\xi^2+35\xi^4) \nonumber\\
&+& \frac{3}{2} \delta_+
(1 + \xi \ln \bar{u}/u) + \frac{3}{2}\delta_- \xi ( 2
+ \ln u + \ln\bar{u} ).
\label{eq:trun2}
\end{eqnarray}
Here $\zeta_{3} = f_{3\rho}^{T}/(f_{\rho}^{T}m_{\rho})$,
and we incorporated
the quark mass corrections proportional to
$\delta_{\pm} = f_{\rho}(m_{u}\pm m_{d})/(f_{\rho}^{T} m_{\rho})$.
The results for other vector mesons $\omega$, $K^{*}$ and $\phi$
are obtained by trivial substitutions.
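As an illustrative cross-check of the truncated expressions (\ref{eq:trun}) and (\ref{eq:trun2}) (a Python sketch with hypothetical parameter values, not the sum-rule numbers of Ref.\cite{BBKT98}), one can verify numerically that both twist-3 DAs integrate to unity, independently of $a_1$, $a_2$, $\zeta_3$ and $\delta_{\pm}$.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical parameter values, for illustration only; the physical
# numbers follow from the QCD sum-rule analysis cited in the text.
a1, a2, zeta3 = 0.2, 0.1, 0.02
dp, dm = 0.2, 0.05   # delta_plus, delta_minus

def h_s(u):   # eq. (trun)
    ub, xi = 1.0 - u, 2.0*u - 1.0
    return (6*u*ub*(1 + a1*xi + 0.25*a2*(5*xi**2 - 1))
            + 35*u*ub*zeta3*(5*xi**2 - 1)
            + 3*dp*(3*u*ub + ub*np.log(ub) + u*np.log(u))
            + 3*dm*(ub*np.log(ub) - u*np.log(u)))

def h_t(u):   # eq. (trun2)
    ub, xi = 1.0 - u, 2.0*u - 1.0
    return (3*xi**2 + 1.5*a1*xi*(3*xi**2 - 1)
            + 1.5*a2*xi**2*(5*xi**2 - 3)
            + 8.75*zeta3*(3 - 30*xi**2 + 35*xi**4)
            + 1.5*dp*(1 + xi*np.log(ub/u))
            + 1.5*dm*xi*(2 + np.log(u) + np.log(ub)))

norm_s = quad(h_s, 0, 1)[0]
norm_t = quad(h_t, 0, 1, limit=200)[0]  # integrable log at the end-points
print(abs(norm_s - 1) < 1e-6, abs(norm_t - 1) < 1e-4)
```

All parameter-dependent terms integrate to zero by the orthogonality and symmetry of the expansion, so the result is insensitive to the sample values chosen above.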
We emphasize that these results provide a consistent set
of DAs involving a minimum number of
independent nonperturbative parameters.
These parameters are calculated from QCD sum rules
taking into account SU(3)-breaking effects\cite{BBKT98}.
The resulting DAs are plotted in Fig.~\ref{fig:1}.
The comparison between the solid and dotted curves
shows that the three-particle contributions
(the term with $\zeta_{3}$ in (\ref{eq:trun}), (\ref{eq:trun2}))
are important and broaden the distributions.
On the other hand,
the quark mass effects are not so large, except near the end-points
$u \rightarrow 0$ and $u\rightarrow 1$ of $h_{\parallel}^{(t)}(u)$.
In conclusion, we have developed a powerful framework
for hard exclusive processes,
which allows one
to express the higher twist DAs
in a model-independent way by a minimum number of nonperturbative
parameters. Combined with estimates of the nonperturbative parameters,
this leads to
model building of the DAs consistent with all
QCD constraints. Our formalism is applicable
to arbitrary twist\cite{BB982} and other light mesons\cite{B98},
and our results are immediately
applicable to a range of phenomenologically
interesting processes\cite{BB98}.
\begin{figure}[t]
\leftline{\epsffile{kek98fig.eps}}
\vspace{-3.4cm}
\caption[]{Two-particle twist-3 DAs for the
$\rho$, $K^*$ and $\phi$ mesons:
(a) $h_{\parallel}^{(s)}(u)$ of (\ref{eq:trun});
(b) $h_{\parallel}^{(t)}(u)$ of (\ref{eq:trun2}).
``$\rho$ (WW)'' denotes the ``Wandzura-Wilczek'' type
contribution given by the first three
terms of (\ref{eq:trun}) and (\ref{eq:trun2})
for the case of the $\rho$ meson.}\label{fig:1}
\end{figure}
\bigskip
The author would like to thank P. Ball, V.M. Braun, and
Y. Koike for the collaboration on the subject discussed
in this work.
% arXiv:hep-ph/9902415, 1999-02-21, https://arxiv.org/abs/hep-ph/9902415
\section{Introduction}
The striking conjecture of Maldacena~\cite{malda}
on the equivalence of large-$N_c$ superconformal quantum
gauge theories on $d$-dimensional Minkowski space $M_d$ - considered as
the
boundary of $(d+1)$-dimensional anti-de Sitter space AdS$_{d+1}$ - to
classical supergravity in the AdS bulk, has opened a new dialogue
between students of non-perturbative gauge theories and string theorists.
Quantities in the strong-coupling limit of gauge theory may be
calculable using classical correlators in AdS (super)gravity, and/or
non-perturbative aspects of string theory may be related to correlators
in gauge theories~\cite{gkp,witt1}, in a holographic spirit~\cite{holo}.
In particular, the AdS approach
was used in~\cite{witt} to relax the assumption of
four-dimensional supersymmetry by
starting from a supersymmetric theory in
six dimensions, which was one of the cases for which the
conjecture was thought to be valid,
and compactifying appropriately two of the dimensions.
The resulting compactification led to a
high-temperature regime for the four-dimensional
boundary theory, which had broken supersymmetry. In this
way, confinement at low temperatures and deconfinement at high
temperatures could be demonstrated. However, the gauge theory
was still conformal, and asymptotic freedom was therefore not
present. Nevertheless, this approach has motivated intriguing
estimates of glueball masses~\cite{balls}, the quark-antiquark
potential~\cite{potential}
and QCD vacuum condensates~\cite{vacuum} that agree surprisingly well with
lattice and other phenomenological estimates.
Two of us (J.E. and N.E.M.) have proposed~\cite{em} a generalization
of this holographic approach to the AdS$_{d+1}$/$M_d$
correspondence which is based on Liouville string theory~\cite{ddk,emn}, in
which conformal symmetry and supersymmetry need not be assumed,
provided world-sheet defects~\cite{defects} are
taken into account properly in the Liouville-dressed
theory.
The Liouville field itself provides an extra bulk dimension,
and the AdS structure is induced by
the recoil of the world-sheet defect, when considered in
interaction with a closed-string loop~\cite{kanti}.
Within this approach, it was possible to demonstrate
the formation of a condensate of world-sheet defects at
low temperatures, which was related to the condensation of magnetic
monopoles in target space and induced confinement~\cite{em}. It was
also possible to demonstrate the logarithmic running of the gauge
coupling strength. Although this Liouville approach is somewhat
heuristic, it opens up a new way to discuss non-supersymmetric QCD
in the strong-coupling regime and at finite temperature.
In particular, the target-space quark-hadron deconfinement-confinement
transition may be viewed as a
Berezinskii-Kosterlitz-Thouless phase transition of
world-sheet vortices~\cite{sathiap,defects}, which can also be related to
the phase transition of black holes in AdS~\cite{hp}.
In this paper we embark on a heuristic attempt to
use this approach to model aspects of
quark-hadron phase transition.
Lattice analyses~\cite{satz} indicate that the free
energy of pure QCD rises relatively rapidly above the
critical temperature to approach the
asymptotic ideal-gas value as predicted in perturbative QCD.
On the other hand, the pressure is calculated~\cite{satz} to
rise much more slowly towards its asymptotic ideal-gas value,
and one possible interpretation is that massive
effective degrees of freedom are important close to the transition,
causing a larger departure from the ideal-gas picture for the
pressure than is the case for the free energy. Calculations close to the
phase transition necessarily require non-perturbative techniques,
such as the lattice, and our hope is that the $M_4$/AdS$_5$ correspondence
may also prove useful in this region.
Specifically, we model aspects of the gluon plasma using a
non-ideal gas of black holes in AdS$_5$, interacting via
forces of van der Waals type, and described by an
effective van der Waals equation of state that we derive
in this paper.
In order to set this approach up in the most reliable way,
and to relate it most closely to previous work~\cite{witt},
we first consider the high-temperature limit. Here the
black holes in AdS$_5$ are stable as well as massive, and hence
suitable for interpretation as `molecules' of a non-relativistic gas
with small velocities $|u_i| \ll 1$. We demonstrate in this limit how
the AdS structure of the ambient bulk space-time itself may be
translated into non-ideal-gas interactions between the massive
black-hole `gas molecules'. The question then arises how to extend
this description down to lower temperatures.
According to the world-sheet point of view, target-space black holes
may be viewed as `spike' defects on the world sheet~\cite{defects}, which
are dual
to `vortex' defects, that in turn correspond to $D$ particles in
space-time~\cite{polch}. In the above-mentioned high-temperature limit,
these
$D$ particles are light, and difficult to treat using the
Liouville approach. However, the Liouville approach is well-adapted
for a discussion of a dual limit, in which the
$D$ particles become very heavy~\cite{em}.
It should be emphasized, though, that both of these limits correspond
to temperatures that are high compared to those in the confining phase.
In both QCD~\cite{CEO} and AdS gravity~\cite{hp}, three distinct
transition temperatures
have been identified: $T_0$, below which only the
confined phase of the gauge theory exists
and black holes condense leaving a residual gas of radiation; $T_1$, at
which the free energies of the
confined and deconfined phases (or black-hole and radiation phases) are
equal; and $T_2$, beyond which only the
deconfined and stable-black-hole phases exist. In the world-sheet picture,
these correspond to the temperatures of Berezinskii-Kosterlitz-Thouless
transitions for vortex and spike condensation~\cite{em}. The
high-temperature
limit~\cite{witt} we take corresponds to $T \sim T_2$ and above, and our
lower-temperature limit corresponds to $T \gtrsim T_0$. We aim to
establish a judicious interpolation between these two limits
that describes qualitatively correctly the intermediate region $T \sim
T_1$.
The layout of this paper is as follows. In section 2 we discuss in
more detail general features of our approach to the thermodynamics
of AdS black holes. Then, in section 3 we discuss the high-temperature limit
in which the $D$ particles are light~\cite{witt}, and section 4 contains
an application of Liouville string theory~\cite{em} to the
lower-temperature limit. Finally, in sections 5 and 6 we pull together a
general picture of the quark-hadron phase transition using the
information we have obtained from our studies of the thermodynamics
of AdS black holes, and relate it to lattice results~\cite{satz}.
\section{Thermodynamics of a Gas of AdS Black Holes}
We consider a homogeneous gas of AdS black holes,
each of mass $M$, and restrict ourselves to the case where the
characteristic velocity of a generic black hole in the ensemble is
either zero or very small: $|u_i|\ll 1$, corresponding to the
case of very massive black holes. We consider first the
static case: $u_i=0$. This will help in gaining insight into the
$u_i\ne 0$ case, which we treat later as a perturbation of
the static case.
We consider an ensemble of $N$ indistinguishable
Schwarzschild black holes in a five-dimensional AdS space-time
with radius $b$, which is related to the critical temperature $T_1$
above which massive black holes are stable: $T_1=1/(\pi b)$.
The invariant line element is taken to be (in Minkowskian
signature)~\cite{hp,witt}:
\begin{equation}
ds^2=-\left(1 + \frac{r^2}{b^2} - \frac{\omega _4 M}{r^2}\right)dt^2
+ \left(1 + \frac{r^2}{b^2} - \frac{\omega _4 M}{r^2}\right)^{-1}dr^2
+ r^2 d\Omega_3
\label{adsbh}
\end{equation}
where the AdS radius $b$ is related to the negative cosmological
constant $\Lambda$ by $b=\sqrt{-3/\Lambda}$, and
$\omega_4 \equiv 8G_N/3\pi$, where $G_N$ is the five-dimensional
Newton constant that is related to the Planck length $\ell_P$ via
$G_N=\ell_P^3$, and $M$ is the ADM mass of the black hole.
The outer horizon of the black hole is defined to be the
larger positive root $r_+$ of the equation
\begin{equation}
1 + \frac{r_+^2}{b^2}-\frac{\omega_4 M}{r_+^2}=0\,,
\label{rootequation}
\end{equation}
namely
\begin{equation}
r_+ =b\left( -\frac{1}{2} + \frac{1}{2}\sqrt{1 + 4\omega_4 M/b^2}\right)^{1/2}.
\label{rplus}\end{equation}
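As a quick numerical check (inserted here for illustration), the explicit root (\ref{rplus}) indeed satisfies the horizon condition (\ref{rootequation}) for arbitrary sample values of $b$ and $\omega_4 M$.

```python
import math

# Sample values in units with ell_P = 1; w4M stands for omega_4 * M.
b, w4M = 1.0, 50.0

# eq. (rplus)
r_plus = b * math.sqrt(-0.5 + 0.5 * math.sqrt(1.0 + 4.0 * w4M / b**2))

# eq. (rootequation): the residual should vanish at r = r_+
residual = 1.0 + r_plus**2 / b**2 - w4M / r_plus**2
print(abs(residual) < 1e-10)  # True: r_+ is the outer horizon
```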
For the purposes of calculating the partition function and other
thermodynamic quantities of the ensemble, a
Wick rotation to a Euclidean AdS geometry:
$t \rightarrow i\,t$, will always be understood.
The Euclideanized
AdS-Schwarzschild space-time has been found to be
smooth~\cite{hp}, provided the Euclidean time direction is periodic
with a particular period $\beta_0$:
\begin{equation}
\beta_0 \equiv T_H^{-1}=\frac{4\pi b^2 r_+}{4 r_+^2 + 2b^2}\;,
\label{temp}
\end{equation}
where $T_H$ is the Hawking temperature of the black hole.
In subsequent sections of this paper,
we consider {\it two} limits of (\ref{temp}) that
correspond to high Hawking temperatures $T_H \gtrsim T_0$, namely
(i) $b^2\ll r_+^2$ and (ii) $b^2\gg r_+^2$,
where we assume in each case that $\ell_P^2\ll r_+^2, b^2$.
It is easy to check using (\ref{rplus}) that in the limit (i) we also have
$\omega_4M/r_+^2 \sim r_+^2/b^2$, whereas in the limit (ii) we also have
$r_+^2\simeq\omega_4 M \gg \ell_P^2$, so that we are consistent with the
static limit in both cases.
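The two consistency statements above can also be checked numerically (an illustrative Python sketch with arbitrary sample values, in units with $\ell_P = 1$):

```python
import math

def r_plus(b, w4M):   # eq. (rplus); w4M stands for omega_4 * M
    return b * math.sqrt(-0.5 + 0.5 * math.sqrt(1.0 + 4.0 * w4M / b**2))

# Limit (i): b^2 << r_+^2, reached for omega_4*M large at fixed b.
b, w4M = 1.0, 1.0e8
rp = r_plus(b, w4M)
lim1 = (rp**2 / b**2 > 1e3 and
        abs(w4M / rp**2 - rp**2 / b**2) / (rp**2 / b**2) < 1e-3)

# Limit (ii): b^2 >> r_+^2, reached for b large at fixed omega_4*M.
b, w4M = 1.0e4, 100.0
rp = r_plus(b, w4M)
lim2 = (rp**2 / b**2 < 1e-3 and abs(rp**2 / w4M - 1.0) < 1e-4)

print(lim1, lim2)  # omega_4*M/r_+^2 ~ r_+^2/b^2 in (i); r_+^2 ~ omega_4*M in (ii)
```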
According to the analysis of~\cite{hp},
the thermodynamic ensemble of black holes is stable
in the limit (i): $\ell_P^2\ll b^2\ll r_+^2\ll\omega_4 M$,
which was considered in~\cite{witt}. This corresponds to
the region $T \sim T_2$.
The limit (ii): $\ell_P^2\ll r_+^2\sim\omega_4 M\ll b^2$,
where the radius of the
AdS space-time is large compared to the outer horizon,
was studied perturbatively in~\cite{em}, using the Liouville
approach where $b
\sim \delta^{-2} \rightarrow \infty$, with $\delta \rightarrow 0^+$ a
parameter appropriately defined to regulate
the recoil operators. According to~\cite{hp}, this limit corresponds to a
temperature $T \gtrsim T_0$. However,
the intermediate
regime of the phase transition where $T \sim T_1$, lying between the
regions (i) and (ii),
cannot be studied reliably by analytic methods, and we resort later
to continuity arguments in order to describe the energy and pressure
curves in this region.
\section{Is there a Phase Transition?}
To answer this question and to investigate its order,
one should examine the equation of state
for an ensemble of AdS black holes in appropriate
regimes of the parameters. We consider first the limit (i) above.
Using standard General Relativity, the effective static
potential $U(r)$ between two black holes in the ensemble
with a radial separation $r$ is given by the
temporal metric component $g_{00}$, that can be obtained
from (\ref{adsbh}):
\begin{equation}
g_{00}=-1-2U(r)\,, \qquad U(r)=\frac{r^2}{2b^2} -
\frac{\omega_4 M}{2r^2}
\label{grpotential}
\end{equation}
provided the potential is weak. We note that the potential
(\ref{grpotential}) vanishes at a radius $r_0$:
\begin{equation}
U(r_0)=0\,\, \rightarrow \,\, r_0=\left(\omega_4 Mb^2\right)^{1/4}
\;.\end{equation}
Using (\ref{rplus}), we see that $r_0\simeq r_+ + {b^2\over 4r_+}$\,.
Therefore, if we restrict ourselves to a thin shell outside the horizon:
$r_+\le r\le r_++\varepsilon$\,, the potential is indeed small, and
one can take $\varepsilon=(r_0-r_+)\simeq b^2/4r_+$ to
a good approximation.
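The estimate $\varepsilon \simeq b^2/4r_+$ can be confirmed numerically (a sketch with sample values deep in limit (i), not part of the original argument):

```python
import math

b, w4M = 1.0, 1.0e8   # sample values with b^2 << r_+^2; w4M = omega_4 * M
r_p = b * math.sqrt(-0.5 + 0.5 * math.sqrt(1.0 + 4.0 * w4M / b**2))
r_0 = (w4M * b**2) ** 0.25        # zero of the potential U(r)
eps = b**2 / (4.0 * r_p)          # estimated shell thickness

# r_0 - r_+ agrees with b^2/(4 r_+) up to corrections of order b^4/r_+^3
print(abs((r_0 - r_p) - eps) / eps < 1e-3)
```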
Since the potential varies very
little over the range $r_+\le r\le r_++\varepsilon$,
we make the second approximation $U(r)\simeq\overline
U$\,, where $\overline U$ is a constant, justified because
$\delta U\sim\varepsilon$\,. The fact that the absolute value of
the constant $\overline U$ is a small number also justifies our
weak-field approximation. These statements can be checked
more precisely using the following formulae:
\begin{eqnarray}
&&U(r)\simeq{\overline U}={\rm const}\equiv\kappa N/\Delta\Omega\nonumber\\
&&\kappa=(1/N)\int_{r_+}^{r_++\varepsilon}d^4x\,U(r)
\simeq{\pi^2\over 4N}\left[{\varepsilon r_+^5\over b^2}-\omega_4M\varepsilon r_+
\right]
\label{potent}
\end{eqnarray}
where the effective volume $\Delta\Omega=\Omega_4-\Omega_0$ is the size of
the four-volume of the shell $r_+\le r\le r_++\varepsilon$ and $N$
is the number
of black holes in the ensemble. We denote by $\Omega_0$ the
four-volume inside the horizon. From the expression
(\ref{potent}) for $\kappa$, one can easily check that
$\overline U<1$\,.
We shall see that the above approximations lead to
analytic expressions for thermodynamic quantities of our AdS
black-hole system.
We first evaluate the classical partition function $Z$
of the system, assuming that it is in thermal equilibrium
at a Hawking temperature $T\equiv\beta^{-1}$:
\begin{eqnarray}
Z &=& \frac{1}{N!}
\left(\int \frac{d^4 x d^4 p}{(2\pi)^4}\;e^{-\beta\big[p^2/2M + MU(r)
\big]}
\right)^N \nonumber \\
&=& \frac{1}{N!} \left(\frac{M}{2\pi\beta}\right)^{2N}\!\!\!\left(
\int d^4x\; e^{-\beta U(r) M }\right)^N\!\! =
\frac{1}{N!}\left(\frac{M}{2\pi\beta}\right)^{2N}\!\!\Delta\Omega^N
e^{-\beta {\overline U}MN }\;.
\label{part}
\end{eqnarray}
Some remarks are in order at this point. First, in the static
case which we are considering now, the kinetic term of the black
holes has no physical meaning and serves only as a cut-off for the
momentum integrals. Secondly, the spatial integral is understood to
be taken over the prescribed shell $r_+\le r\le r_++\varepsilon$,
which gives rise
to the volume factor $\Delta\Omega$. Finally, we underline that we have
made explicit use of the approximations mentioned above.
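The momentum factor in (\ref{part}) can be reproduced symbolically (a computer-algebra sketch: the four Gaussian momentum integrals give a factor $(M/2\pi\beta)^{2}$ per black hole):

```python
import sympy as sp

p, M, beta = sp.symbols('p M beta', positive=True)

# One Cartesian component of the Gaussian momentum integral in eq. (part):
# int dp/(2*pi) exp(-beta p^2 / 2M) = sqrt(M / (2*pi*beta))
one_dim = sp.integrate(sp.exp(-beta * p**2 / (2 * M)),
                       (p, -sp.oo, sp.oo)) / (2 * sp.pi)

# Four momentum components per black hole give (M/(2*pi*beta))^2
prefactor = sp.simplify(one_dim**4)
print(prefactor.equals((M / (2 * sp.pi * beta))**2))  # True
```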
Before proceeding to compute other thermodynamic quantities,
we also examine the low-velocity non-static case.
As we shall see, the difference between the static and the non-static
cases can be described effectively by a renormalisation
shift of the mass term $M\to M+T$ in the partition function
above (\ref{part}).
To prove this, we note that in the non-static case, where the heavy
black holes
move with a small velocity $|u_i|\ll 1$ relative to each other,
we may employ both the small-velocity and weak-potential
approximations simultaneously. To obtain the velocity corrections to
the potentials, we use the Minkowski-signature Lagrangian:
\begin{equation}
L=-M\frac{ds}{dt}\,, \qquad ds^2=-g_{AB} dx^A dx^B,\qquad A,B=0...4
\label{timedep}
\end{equation}
where $ds^2$ is given by (\ref{adsbh}).
We then write
\begin{equation}
\frac{ds}{dt} = \sqrt{-g_{00} - g_{ij} \frac{d x^i}{dt} \frac{d x^j}{dt}}
=\sqrt{-g_{00} - g_{ij}u^i u^j}\;,
\label{form1}
\end{equation}
and set
\begin{equation}
g_{00} =-1-2U(r)\,, \qquad g_{ij}=\delta _{ij} + h_{ij}\,,
\qquad r^2 \equiv \sum_{i=1}^4 x_i^2\;,
\end{equation}
{}from which we see that $ds/dt = \sqrt{1 + 2 U - |u_i|^2 - h_{ij} u^i u^j}$.
Expanding in powers of {\it both} U and $u_i^2$ to leading non-trivial
order, we find:
\begin{equation}
L \simeq -M \left(1 + U -\frac{1}{2}|u_i|^2
- \frac{1}{2} u^i h_{ij} u^j +
\frac{1}{2}U |u_i|^2 +... \right)\;.
\label{Lexp}
\end{equation}
Parametrising with a generalized velocity-dependent potential ${\tilde U}$
in the Lagrangian: $L=-M +\frac{1}{2}M|u_i|^2 - M {\tilde U}({\overline x}, u)$,
we get
\begin{equation}
{\tilde U} ({\overline x}, u) = U + \frac{1}{2}U|u_i|^2
- \frac{1}{2} u^i h_{ij} u^j
\label{genpot}
\end{equation}
which clearly reduces to $U$
in the static limit $u_i \rightarrow 0$.
The partition function for slowly-moving black holes
can be computed in a straightforward manner.
First, we note that the generalized momenta are given by
\begin{equation}
p_i = \frac{\partial L}{\partial u^i} = M
\left(u_i-u_iU+h_{ij}u^j\right),
\label{momenta}
\end{equation}
giving rise to the Hamiltonian:
\begin{equation}
H = p_i u^i - L = M (1 + U) + \frac{1}{2}p_iu^i\;.
\label{ham}
\end{equation}
Taking into account the facts that $h_{ij}=2 Ux_i x_j/r^2$
and $u_i\simeq\frac{p_j}{M} [\delta _{ij}(1 + U) - h_{ij} ]$,
we can re-express the Hamiltonian as:
\begin{equation}
H= \frac{1}{2} p_i {\cal K}_{ij} p_j + MU\,,
\qquad {\cal K}_{ij}=\frac{1}{M}\left[\delta _{ij} (1 + U) - h_{ij}\right]
\label{ham2}
\end{equation}
which resembles the Hamiltonian of a particle with momenta $p_i$
in a `curved space' whose `metric' ${\cal K}_{ij}$
depends solely on the potential $U$ and not
on the velocity $u_i$ (to this order).
The resulting partition function is
\begin{eqnarray}
Z&=&\frac{1}{N!}
\left(\int \frac{d^4 x d^4 p}{(2\pi)^4}\,
e^{-\beta\left[\frac{1}{2}p_i {\cal K}_{ij}p_j + MU(r)\right]}
\right)^N\nonumber\\ &=&
\frac{1}{N!}\left(\frac{M}{2\pi\beta}\right)^{2N}\!\!\left(
\int d^4x \frac{1}{\sqrt{{\rm det}({\cal K} M)}}
e^{-\beta U(r) M }\right)^N
\label{part2}
\end{eqnarray}
where
\begin{eqnarray}
\sqrt{{\rm det}({\cal K}M)} = \left[{\rm det}\big((1+U)\delta_{ij}-h_{ij}\big)
\right]^{1/2}\simeq\exp\left[\frac{1}{2}{\rm Tr}(U\delta_{ij}-h_{ij})+\dots
\right]\simeq e^U\;.
\label{apprx2}
\end{eqnarray}
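The momentum integration leading from the first to the second line of
(\ref{part2}) is Gaussian; explicitly, for each black hole one has
\begin{equation}
\int \frac{d^4p}{(2\pi)^4}\,
e^{-\frac{\beta}{2}\,p_i {\cal K}_{ij}p_j}
=\frac{1}{(2\pi)^4}\left(\frac{2\pi}{\beta}\right)^{2}
\big[{\rm det}\,{\cal K}\big]^{-1/2}
=\left(\frac{M}{2\pi\beta}\right)^{2}
\frac{1}{\sqrt{{\rm det}({\cal K}M)}}\;,
\end{equation}
using ${\rm det}({\cal K}M)=M^4\,{\rm det}\,{\cal K}$ in four dimensions.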
Thus, one has, upon approximating $U \simeq {\overline U}$ (cf.
(\ref{potent})):
\begin{equation}
Z \simeq \frac{1}{N!}\left(\frac{M}{2\pi\beta}\right)^{2N}\!\!
\Delta\Omega^Ne^{-N (\beta M + 1){\overline U}}
\label{part3}
\end{equation}
which, when compared with (\ref{part}), demonstrates the aforementioned
renormalisation shift in the mass by $T$\,. In view of this simple change,
{}from now on we shall deal with the general velocity-dependent case.
The energy of the system is defined as
\begin{equation}
{\overline E}
= \frac{\partial}{\partial \beta}\left(-{\rm Ln}Z + \beta \mu N \right)
\label{energyint}
\end{equation}
where the `ground-state energy' due to the chemical potential
$\mu=\partial{\rm Ln}Z/\partial N$ has been appropriately added.
The pressure of the system is defined as:
\begin{equation}
P \equiv \frac{1}{\beta} \frac{\partial {\rm Ln}Z}{\partial
\Delta \Omega},
\label{pressure}
\end{equation}
which upon using (8) yields the following equation of state:
\begin{equation}
NT\,=\,
\left(P-\kappa{\tilde M}\frac{N^2}{(\Omega_4-\Omega_0)^2}\right)
\left(\Omega_4-\Omega_0\right),
\label{eqstate}
\end{equation}
where the Boltzmann constant has been put to unity and
$\tilde M$ represents the shifted mass
appearing in the partition function (\ref{part3}).
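To see how (\ref{eqstate}) arises, note that (\ref{part3}) gives
${\rm Ln}Z\supset N{\rm Ln}\Delta\Omega-N(\beta M+1){\overline U}$
with ${\overline U}=\kappa N/\Delta\Omega$; differentiating as in
(\ref{pressure}) then yields
\begin{equation}
P=\frac{1}{\beta}\left[\frac{N}{\Delta\Omega}
+(\beta M+1)\,\frac{\kappa N^2}{\Delta\Omega^2}\right]
=\frac{NT}{\Delta\Omega}+\kappa{\tilde M}\,\frac{N^2}{\Delta\Omega^2}\;,
\qquad {\tilde M}=M+T\;,
\end{equation}
which rearranges into (\ref{eqstate}) with $\Delta\Omega=\Omega_4-\Omega_0$.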
The relation (\ref{eqstate}) is
nothing other than a {\it van der Waals equation of state}. In our view,
this leads to the {\it prima facie} expectation
that a first-order phase transition takes place in the bulk,
though this remains to be verified.
For the non-static case (\ref{part3}),
the quantity ${\overline E}$ is
\begin{eqnarray}
{\overline E}&=&-N{\rm Ln}N - N
- 2N \frac{2\pi \beta}{M}\left(\frac{\partial M}{\partial \beta}\frac{1}
{2\pi \beta} - \frac{M}{2\pi \beta ^2}\right)(1 - \beta)
+ 2 N {\rm Ln}(\frac{M}{2\pi\beta}) + \nonumber \\
&&N {\rm Ln}\Delta \Omega + \frac{N}{\Delta \Omega} \frac{\partial \Delta \Omega}{\partial \beta}
+ ( 1 - 2\beta)MN{\overline U} - N {\overline U} \;.
\label{energyform}
\end{eqnarray}
In the limit under consideration, we may use (\ref{temp})
to relate the black-hole mass to the temperature:
\begin{equation}
M \sim \frac{\pi^4}{\omega_4}b^6 T^4
\label{mass}
\end{equation}
for a five-dimensional AdS-Schwarzschild black hole.~\footnote{In
the case of an $(n+1)$-dimensional
AdS-Schwarzschild black hole of large mass $M$, the temperature scales
as $T^n$.}
We assume (see later) that the AdS radius $b$ scales with the temperature
as
\begin{equation}
b\sim c_0T^{-1},
\label{c0}
\end{equation}
where $c_0$ is taken sufficiently large to ensure
that $\omega_4M\gg b^2$\,. Thus ${\overline E}$
is easily evaluated:
\begin{equation}
{\overline E} = {\rm const} + 2 N T - 6N{\rm Ln}T
+ c''{\overline U} N T^{-2} - 2 c''N {\overline U} T^{-3}
\label{energy}
\end{equation}
where ${\overline U} < 0$, and is assumed constant, and we have used
${\rm Ln} N! \simeq N {\rm Ln} N$ for large $N$.
The constant in (\ref{energy}) can be set to zero by an appropriate
normalization of the energy, since only energy differences matter.
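The $T$-dependences of the potential terms in (\ref{energy}) can be traced
as follows: combining (\ref{mass}) and (\ref{c0}) gives $M\simeq c''T^{-2}$,
where we identify the constant as $c''\equiv\pi^4c_0^6/\omega_4$, so that the
term $(1-2\beta)MN{\overline U}$ in (\ref{energyform}) becomes
\begin{equation}
(1-2\beta)MN{\overline U}
= c''{\overline U}NT^{-2}-2c''N{\overline U}T^{-3}\;,
\end{equation}
reproducing the last two terms of (\ref{energy}).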
The energy density $\rho$ for the {\it four-dimensional} system
on the boundary of AdS$_5$
is obtained by dividing ${\overline E}$ by the
{\it three}-volume which, in view of the above discussion, scales
like $T^{-3}$. Thus
\begin{equation}
\rho/T^4 \propto 2 N - 6NT^{-1} {\rm Ln}T
+ c''{\overline U} N T^{-3} - 2 c''N {\overline U} T^{-4}
\label{energydens}
\end{equation}
As for the energy density, the pressure in
{\it four dimensions} (three space dimensions, one
periodic temperature dimension), denoted by $p$,
is computed from the
equation of state (\ref{eqstate}) after writing it
in the form
\begin{equation}
p \equiv b P = {\rm const} \times \frac{1}{\Delta \Omega _3
} NT ,
\label{fourdpre}
\end{equation}
using (\ref{potent}) and assuming that the variations
of the potential with the volume are small.
The quantity $b P$ simply represents the fact that the
three-space pressure should be computed with respect to a
three-volume shell, $\Delta \Omega _3$, and {\it not} a
four-volume shell, $\Delta \Omega _4$.
The former scales with one length dimension less compared to
the latter, and thus the quantity $bp$ in (\ref{eqstate})
has the right scaling with $T$.
With this in mind, we remark that $\Delta \Omega _3$ scales like $T^{-3}$
in the very-high-temperature regime, beyond the upper phase
transition, and hence that
\begin{equation}
p/T^4 \sim {\rm const} \;.
\label{pressure2}
\end{equation}
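The scaling (\ref{pressure2}) follows at once from (\ref{fourdpre}): with
$\Delta\Omega_3\sim T^{-3}$ in this regime,
\begin{equation}
p={\rm const}\times\frac{NT}{\Delta\Omega_3}\sim{\rm const}\times NT^4\;,
\end{equation}
so that $p/T^4$ is indeed temperature-independent.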
The constant term in (\ref{pressure2}) may be
fixed by the fact that in the very-high-temperature regime
the system is supposed to represent a gas of massless gluons,
and hence, from the classical statistical mechanics of an ideal gas
of massless Bose particles,
the energy density is three times the pressure.
The energy density curve
is plotted in Figure~\ref{enerpress}.
We observe that the qualitative features of QCD
are correctly captured by our classical
statistical system of AdS black holes.
The energy density drops sharply as we approach
low temperatures, and it is tempting to identify
this region with
the confined region of QCD,
approached from the high-temperature unconfined
phase. Our approximate calculation exhibits
a bump in the energy-density curve
before the `confined' region is reached, due to the
$-T^{-1} {\rm Ln}T$ term in (\ref{energydens}).
However, the limit (i) that we have used above
is valid only for high temperatures, and should not be trusted
quantitatively in this region.
On the other hand, the appearance of this bump may
indicate the existence of a thermodynamic instability, given that the
`bump' region is followed by a sharp drop in the energy density as
the `confined' region is approached.
\begin{figure}[htb]
\epsfxsize=3in
\bigskip
\centerline{\epsffile{enpresshigh.eps}}
\caption{\it\baselineskip=12pt
The scaled energy density $\rho/T^4$
(dashed line) and pressure $3p/T^4$
(continuous line) in a gauge theory, plotted as functions of the
temperature $T$, as
calculated in the high-temperature limit (i) $b \ll r_+$~\cite{witt}
using a typical set of parameters for $N$ indistinguishable AdS black
holes. The bump in the energy density is reminiscent of the
transition from a gas of pions to a deconfined
quark-gluon plasma in the QCD case, but the
approximations made in the limit (i) are not
reliable in the regions where the lines are dotted.}
\bigskip
\label{enerpress}\end{figure}
\section{Another View of the Phase Transition}
In this section we shall look at the phase
transition from the opposite viewpoint,
described by the limit (ii) defined above: $\ell_P^2\ll
r_+^2\simeq\omega_4 M\ll b^2$\,.
The approximations made in the analysis of the previous
section are also valid in this parameter regime,
though in a different way.
When there is a large separation between any pair of black holes in the
ensemble: $r_+\ll r\ll b$, it is again
a good approximation to take $U(r)\simeq\overline U$\,,
where $\overline U$ is a positive constant, because the potential varies very
little over the range $r_+\ll r\ll b$. Not only that, but the
constant $\overline U$ is also a small number and hence one can again make a
weak-field approximation. The analogues of the formulae (\ref{potent})
are in this case:
\begin{eqnarray}
&&U(r)\simeq{\overline U}={\rm const}\equiv\kappa N/\Omega\nonumber\\
&&\kappa=(1/N)\int_{r_+}^bd^4x\,U(r)
={\pi^2\over 24N}\left[b^4-{r_+^6\over b^2}-3\omega_4M(b^2-r_+^2)\right]
\label{potent2}
\end{eqnarray}
where, as before, the effective volume
$\Omega=\Omega_4-\Omega_0$ is the size of
the four-volume of the shell $r_+\ll r\ll b$, and $N$ is the
number of black holes in the ensemble.
Given that $\Omega\sim b^4$, and using the above formula (\ref{potent2})
for $\kappa$, one can easily check that $\overline U<1$\,.
Notice that the potential is {\it attractive} in the
region $r_+\ll r<r_0$ and {\it repulsive} in the region $r_0<r\ll b$\,.
In this region, (\ref{temp}) tells us that the black-hole mass
is related to the temperature by:
\begin{equation}
M \sim \frac{\beta^2}{4\pi\omega_4}\;.
\label{mass2}
\end{equation}
One can perform calculations for the partition function, energy
and pressure of the ensemble that are similar to those
described in the previous section, which we do not reproduce in detail
here.
As in the limit (i), we find again in the limit (ii)
a {\it van der Waals equation of state}, as in (\ref{eqstate}).
To understand qualitatively the physics in the
lower end of the transition
region $T \sim T_1$, we recall that, as we approach the
transition region from above,
we enter a regime where the Liouville theory takes over.
In this theory the radius $b$ may be assumed to be independent of
temperature~\cite{em}, and large. In this limit of
large $b$ and $T$-independence,
a different approximation is needed to capture correctly the
features of the transition region, since
the classical description of a gas
of stable black-hole particles breaks down in this case.
However, we can still obtain qualitative
ideas of the dynamics by applying the above
statistical-mechanical approach to this case.
Notice that the smallness of $\omega_4 M $ compared to $b$
implies that the AdS space is regular for large $r$. This is the
regime discussed in the Liouville approach of~\cite{em}.
We have calculated
the energy density $\rho$ and the pressure $3p$,
with the results shown in Figure \ref{adst2}.
We observe that the pressure is almost constant near the transition region,
whilst the energy increases and exhibits a bump. As compared to our
results in limit (i), this bump is rather smoother.
The constant value of the pressure is again
fixed by the fact that at low temperatures the
system again enters an ideal-gas regime, where in this case
the physical degrees of freedom
are the bound states, i.e., massless pions in the case of QCD, so
that the relation $\rho = 3p$ should again be valid.
\begin{figure}[htb]
\epsfxsize=3in
\bigskip
\centerline{\epsffile{enpresslow.eps}}
\caption{\it\baselineskip=12pt
As in Fig.~1, but
in the limit (ii): $r_+\ll b$~\cite{em}, conjectured to
represent the start of the phase transition regime.
The pressure curve lies below the energy density curve
and is almost constant: the bump in the energy density
is less marked than indicated in the limit (i).}
\bigskip
\label{adst2}\end{figure}
\section{Relating the Two Descriptions}
The regime (i) which describes the high-temperature tail of the phase
transition
must connect smoothly with the regime (ii) as the scales cross:
$r_+\leftrightarrow
b$\,. Since one expects that the temperature should
rise after the phase transition,
we assume that $T_{(ii)}\ll T_{(i)}$\,. At the boundary of the two
regions, one has
$r_+^B\sim\sqrt{\omega_4 M_B}\sim b_B$, and the temperature $T_B\sim
1/b_B$ should therefore lie in the range
$T_{(ii)}\ll T_B\ll T_{(i)}$\,. A natural way to arrange this
crossover is to keep $r_+$ fixed at the boundary value and study the
variations of the other
parameters as we go from one regime to the other. Clearly
this puts the following bounds on the other parameters:
\begin{equation}
b_{(i)}\ll b_B\ll b_{(ii)}\,,\qquad M_{(ii)}\sim M_B\ll M_{(i)}\;.
\label{ordering}
\end{equation}
These bounds seem natural and consistent with the definitions
of the limits (i) and (ii). In particular, one outcome of
this crossover, namely that the black-hole degrees of freedom
are more massive in
region (i) and hence decouple from infrared
physics, seems consistent for describing the
region just above the phase transition,
which, according to (\ref{ordering}), is associated with lower-mass black
holes: $M_{(ii)} \sim M_B \ll M_{(i)}$ in region (ii).
We would also like to comment on the behaviour of the AdS radius $b$
in the transition region.
The scaling (\ref{c0})
is justified in the Liouville approach of \cite{em},
in which the recoil-induced AdS radius $b$ is
proportional to a homotopic `time' variable.
In the analysis of~\cite{em}, this homotopic time was identified
with the
target time $X^0$ in a {\it real-time} formulation of Liouville QCD.
In this real-time formalism, the time $X^0$ should not be confused
with the temperature. However, from the equivalence of the real-time and
Matsubara formalisms, where one identifies the temperature with the
inverse radius of a compactified Euclidean time, it is
natural to assume that, at least in the high-temperature regime
where one assumes thermal equilibrium with a heat bath of temperature $T$,
the scaling (\ref{c0}) should be valid.
An alternative way to justify the scaling (\ref{c0}) is to
notice that, in order to arrive at the regime
where the analysis of~\cite{em}
is valid, one needs to go to very low temperatures, where $b$ is huge.
This result is not in contradiction
with our above procedure of
identifying $b$ with $1/T$. However, in the low-temperature regime
$b$ is almost constant~\cite{em}, and {\it not} scaling with temperature.
We conjecture that there are
in general competing contributions to $b$, so that
\begin{equation}
b \sim \delta ^{-2} + {\cal O}(1/T),
\label{interpola}
\end{equation}
and that the $\delta ^{-2}$ term dominates in the low-temperature
regime,
whilst $\delta $ is comparatively large in the high-temperature regime,
and the $1/T$ term dominates.
In~\cite{em}, $\delta $ was identified with the
area of the Wilson loop that generated the world sheet of the string.
This is consistent with the above picture: for low temperatures
in the confining regime, the dominant
degrees of freedom are related to large
world-sheet areas, in the sense that the
(temporal Polyakov or spatial Wilson) loops that define the
order parameters relevant for confinement are large.
It is these quark-antiquark loops that can be described by
weakly-coupled string theory, for which the analysis
of~\cite{em} is valid. At
high temperatures, on the other hand, the areas defined by
the dominant order parameters (Wilson and/or Polyakov loops)
are relatively small or
microscopic, as remarked
in~\cite{em}. This corresponds to the
pure stringy limit $\delta ^{-2} \rightarrow 0$.
In that limit the perturbative string theory approach of~\cite{em} is
invalid, and should be replaced by the
above semi-classical picture~\cite{witt} of a gas of very massive
black holes. We now remark that,
in our approximate treatment of near-horizon distances $r$, where the
potential is {\it weak}, one obtains the typical order of magnitude
estimate
\begin{equation}
r^4 \simeq \omega_4 M b^2 + {\cal O}(b^4\sqrt{\omega _4 M b^{-2}})\;.
\label{untitle}
\end{equation}
Combining (\ref{rplus}),(\ref{mass}) and (\ref{c0}), we find that
the volume $\Omega _0 \propto r_+^4$ varies with $T$
as $T^{-4}$, and hence
\begin{equation}
\Delta \Omega \equiv \Omega_4 - \Omega _0 \propto r^4 - r_+^4 \simeq c' T^{-4}
\label{domega}
\end{equation}
where $c'$ is a constant.
As commented above and in~\cite{hp}, the phase transition
of the five-dimensional black-hole system is
expected to be of first order. Moreover, it is
here identified with a deconfining transition in gauge theory.
However, at present our analysis cannot determine the precise order of
these associated transitions, and this remains an open issue.
A related issue is
whether holography~\cite{holo} survives the first-order phase transitions
associated with the boundary and bulk dynamics. Based on the Liouville
renormalization argument given above, we would expect so, but this
issue is also open.
\section{Comparison with QCD}
Here we comment on the temperature-dependence of the pressure, and relate
it to what is known for QCD from lattice simulations.
We recall from the discussion of section 4 that
in the low-temperature limit (i) of the phase
transition region (see Fig.~\ref{threepre}),
where $b$ is roughly $T$-independent and
the mass of the black hole $M \sim T^4$, there
is no difference in scaling between the four- and five-dimensional
pressures, and hence $3p/T^4$ in (\ref{fourdpre})
is initially approximately constant and then increases slightly
(due to the smallness of $\delta$) as the temperature increases.
Thus the pressure curve does not increase as
abruptly as the energy density, and always lies beneath it
as long as it can be calculated reliably.
As the temperature increases towards the upper end of the phase
transition,
the increase in the pressure may be obtained from
terms that have been ignored so far in deriving (\ref{eqstate}).
These include terms that express fluctuations of the potential
${\overline U}$ with the volume $\Omega _4 - \Omega _0$.
These are required by continuity
between the two asymptotic regimes for the pressure computed above.
The generic (approximate) form of such terms may be found by representing
the potential fluctuations as
\begin{equation}
U \simeq {\overline U} + \epsilon '\frac{N}{\Omega _4 - \Omega _0}
\label{fluct}
\end{equation}
where $\epsilon' $ is small and positive.
Such a dependence of the potential on $\Omega _4 $
results in
extra terms on the right-hand side of the equation of state.
Thus, for example, in the high-temperature phase
we expect the boundary pressure to have
the form:
\begin{equation}
p/T^4 \sim {\rm const} \frac{1}{\Delta \Omega _3 T^4 }\left[NT +
\epsilon' N^2 (M + T)/(\Omega_4 - \Omega _0) \right]\;.
\label{prs}
\end{equation}
We now recall that, on the high-temperature
side of the phase transition, $\Delta \Omega _3$
scales like $T^{-3}$, $(\Omega _4 - \Omega _0)$ scales like $T^{-4}$,
and the mass of the black hole scales like $T^{-2}$.
The mass is sufficiently large that
the $M$-dependent term is still dominant. Hence,
from (\ref{fluct}) one obtains a linear increase for $3p/T^4$:
\begin{equation}
3p/T^4\sim{\rm const}+{\cal O}\left({\rm const'}\times\epsilon'N^2T
\right)\;.
\label{newpr}
\end{equation}
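The linear rise in (\ref{newpr}) follows by power counting: keeping only the
dominant $M$-dependent piece of the fluctuation term in (\ref{prs}), and
inserting $\Delta\Omega_3\sim T^{-3}$, $\Omega_4-\Omega_0\sim T^{-4}$ and
$M\sim T^{-2}$, one finds
\begin{equation}
p/T^4\sim\frac{T^3}{T^4}\left[NT+\epsilon' N^2 M\,T^4\right]
\sim N+\epsilon' N^2 T\;.
\end{equation}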
As the temperature increases, the $\epsilon'$ term becomes smaller and
smaller, and one recovers a constant $3p/T^4$ at the end of the
transition region.
The proportionality constants are again fixed by
requiring that this scaling should describe at
high temperature an almost ideal gas of massless
particles, in which case we have the relation $\rho = 3 p$
in three space dimensions.
\begin{figure}[htb]
\epsfxsize=3in
\bigskip
\centerline{\epsffile{interpol.eps}}
\caption{\it\baselineskip=12pt
Interpolations of the scaled
energy density $\rho/T^4$ (dashed line)
and pressure $3p/T^4$
(continuous line),
including the transition region between the limits (i) and (ii)
shown in Figs.~1 and 2, respectively.
The behaviours of the energy density and pressure
in the intermediate-temperature region
are reminiscent of lattice calculations~\cite{satz}:
in particular, the pressure curve rises more slowly than that of the
energy density.}
\bigskip
\label{threepre}\end{figure}
We display in Fig.~\ref{threepre} heuristic
interpolations of $\rho$ and $3p$ between the high-
and low-temperature limits (i) and (ii).
These curves can be
compared with those calculated for QCD on the lattice~\cite{satz}.
In both cases,
there appears to be a sharp jump of the energy density at the
onset of the deconfining phase transition,
from the value where the system is equivalent to a gas of pions, towards
that where the system is described by an
almost ideal gas of quarks and gluons.
On the other hand, the increase in the pressure is much smoother
in both the lattice and our AdS calculations.
This is related in our approach to the weak-field
assumption for the potential: $|U|\ll 1$, which
is valid for near-horizon AdS geometries in the high-temperature
phase.
We should repeat that our analysis in the limit (ii) is not yet
quantitative at low temperatures. However,
the Liouville world-sheet approach of~\cite{em}, which
describes the dynamics of world-sheet defects via
$D$ particles, describes
this regime qualitatively correctly, leading in particular to
confinement as a low-temperature property.
In this case, the space-time obtained from $D$-particle recoil is indeed
of AdS type with $M \rightarrow 0$ in (\ref{adsbh})~\cite{kanti,em}.
We have shown in this paper that this approach has, moreover, a
plausible regular
continuation to the high-temperature limit (i) explored in~\cite{witt}.
This gives us further reason to hope that the Liouville string
approach may be suitable for development into a reliable tool for
describing non-perturbative gauge dynamics, and therefore may
contribute to the new avenue for non-perturbative gauge-theory
calculations opened up in~\cite{malda,witt,balls,potential,vacuum}.
\newpage
{\bf Acknowledgements}
One of us (A.G.) thanks the World Laboratory for a John Bell Scholarship.
Another (N.E.M.) thanks Mike Teper for useful discussions.
The results of this paper were presented by J.E. at the memorial meeting
for Klaus Geiger: `RHIC Physics and Beyond - Kay-Kay-Gee Day',
BNL, October 23rd, 1998~\cite{KKG}. We dedicate this paper to
our friend Klaus's memory.
\section{INTRODUCTION}
I would like to begin this talk by asking a very simple question:
Did the Universe start ``small"?
The naive answer is: Yes, of course! However, a serious answer
can only be given after defining the two keywords in the question: What do we mean by
``start"? and What is ``small" relative to? In order to be on the safe
side, let us take
the ``initial" time to be a bit larger than Planck's time,
$t_P \sim 10^{-43} \; \rm{s}$.
Then, in standard Friedmann--Robertson--Walker (FRW) cosmology,
the initial size of the (presently observable)
Universe was about $10^{-2}~ \rm{cm}$. This is of course tiny w.r.t. its present size
($\sim 10^{28}~ \rm{cm}$), yet huge w.r.t. the horizon at that time, i.e.
w.r.t. $l_P = c t_P \sim 10^{-33}~ \rm{cm}$.
In other words, a few Planck times after the big bang,
our observable Universe consisted
of $(10^{30})^3 = 10^{90}$ Planckian-size, causally disconnected regions.
More precisely, soon after $t=t_P$, the Universe was characterized by a huge hierarchy
between its Hubble radius and inverse temperature on one side, and its spatial-curvature
radius and homogeneity scale on the other. The relative factor of (at least) $10^{30}$ appears
as an incredible amount of fine-tuning on the initial state of the Universe,
corresponding to a huge asymmetry between time and space derivatives. Was this asymmetry
really there? And if so, can it be explained in any more natural way?
It is well known that a generic way to wash out inhomogeneities and spatial
curvature consists in introducing, in the history of the Universe, a long
period of accelerated expansion, called inflation \cite{KT}.
This still leaves two alternative solutions: either the Universe was generic
at the big bang and became flat and smooth
because of a long {\it post}-bangian inflationary phase;
or it was
already flat and smooth at the big bang as the result of
a long {\it pre}-bangian inflationary phase.
Assuming, dogmatically, that the Universe (and time itself) started at the big bang,
leaves
only the first alternative. However, that solution has its own
problems, in particular those of
fine-tuned initial conditions and inflaton potentials. Besides, it is quite difficult
to base standard inflation in the only known candidate theory of quantum gravity,
superstring theory. Rather, as we shall argue, superstring
theory gives strong hints in favour of the
second (pre-big bang) possibility through two of its very basic properties,
the first in relation
to its short-distance behaviour, the second from its modifications of General Relativity
even at large distance. Let us briefly comment on both.
\section {(Super)String inspiration}
\subsection{Short Distance}
Since the classical (Nambu--Goto) action of a string
is proportional to the area $A$ of the surface it sweeps, its quantization must introduce
a quantum of length $\lambda_s$ through:
\begin{equation}
S/\hbar = A/\lambda_s^2\; .
\end{equation}
This fundamental length, replacing Planck's constant in quantum string theory \cite{GVFC},
plays the role of a minimal observable length, of an ultraviolet cut-off.
Thus, in string theory, physical quantities are expected to be bound by appropriate
powers of $\lambda_s$, e.g.
\begin{eqnarray}
H^2 \sim R \sim G\rho < \lambda_s^{-2} \nonumber \\
k_B T/\hbar < c \lambda_s^{-1} \nonumber \\
R_{comp} > \lambda_s \; .
\end{eqnarray}
In other words, in quantum
string theory (QST), relativistic quantum mechanics should solve the singularity
problems in much the same way as
non-relativistic quantum mechanics solved the singularity
problem of the hydrogen atom by putting the electron and the proton a finite distance
apart. By the same token, QST gives us a rationale for asking daring questions such as:
What was there before the big bang? Certainly, in no other present theory, can
such a question
be meaningfully asked.
\subsection{Large Distance}
Even at large distance (low-energy, small curvatures), superstring theory does not
automatically give Einstein's General Relativity. Rather, it leads to a
scalar-tensor theory of the JBD variety. The new scalar particle/field $\phi$,
the so-called dilaton, is unavoidable in string theory, and gets
reinterpreted as the radius of a new dimension of space in so-called M-theory \cite{M}.
By supersymmetry, the dilaton is massless to all orders in perturbation theory,
i.e. as long as supersymmetry remains unbroken. This raises the question:
Is the dilaton a problem or an opportunity? My answer is that it is possibly both;
and while we can try to avoid its potential dangers, we may try to use some of
its properties to our advantage \dots~ Let me discuss how.
In string theory $\phi$ controls the strength of all forces, gravitational and gauge alike.
One finds, typically:
\begin{equation}
l_P^2 /\lambda_s^2 \sim \alpha_{gauge} \sim e^{\phi} \; ,
\label{VEV}
\end{equation}
showing the basic unification of all forces in string theory and the fact that, in
our conventions,
the weak-coupling region coincides with $\phi \ll -1$.
In order not to contradict precision tests of the
Equivalence Principle and of the constancy of the gauge and gravitational
couplings in the recent past (possibly meaning several million years!)
we require \cite{TV} the dilaton to have a mass and to be frozen at the bottom of
its own potential
{\it today}. This does not exclude, however, the possibility of the dilaton having
evolved cosmologically (after all the metric did!) within the weak coupling
region where it was practically massless. The amazing (yet simple) observation \cite{GV1}
is that, by so doing, the dilaton may have inflated the Universe!
A simplified argument, which, although not completely accurate, captures the essential
physical point, consists in writing the ($k=0$) Friedmann equation:
\begin{equation}
3 H^2 = 8 \pi G \rho\; ,
\end{equation}
and in noticing that a growing dilaton (meaning through (\ref{VEV}) a growing $G$)
can drive the growth of $H$ even if the energy density of standard matter decreases
in an expanding Universe. This new kind of inflation (characterized by
growing $H$ and $\phi$) has been
termed dilaton-driven inflation (DDI). The basic idea of pre-big bang
cosmology \cite{GV1,MG1,MG2}
is thus illustrated in Fig. 1: the dilaton started at very
large negative values (where it is massless), ran over a potential hill, and finally reached,
sometime in our recent past, its final destination at the bottom of
its potential ($\phi = \phi_0$). Incidentally, as shown in Fig. 1,
the dilaton of string theory can easily
roll up ---rather than down--- potential hills, as a consequence of its non-standard
coupling to gravity.
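The simplified argument above can be made explicit by differentiating the
Friedmann equation: since $G\sim e^{\phi}$,
\begin{equation}
6H\dot H = 8\pi \frac{d}{dt}\left(G\rho\right)\,,
\qquad{\rm so}\qquad
\dot H > 0 \;\Longleftrightarrow\;
\dot\phi = \frac{\dot G}{G} > -\frac{\dot\rho}{\rho}\;,
\end{equation}
i.e. a sufficiently fast-growing dilaton can overcompensate the dilution of
$\rho$ and drive $H$ upwards.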
\begin{figure}
\hglue 1.6 cm
\epsfig{figure=VeneNobelFig1.eps,width=12cm}
\caption[]{}
\end{figure}
DDI is not just possible. It exists as a class of (lowest-order) cosmological solutions
thanks to the duality symmetries of string cosmology. Under
a prototype example of these symmetries, the
so-called scale-factor duality \cite{GV1}, a FRW cosmology
evolving (at lowest order in derivatives) from a singularity in the past
is mapped into a DDI cosmology going towards a singularity in the future. Of course,
the lowest order approximation breaks down before either singularity is reached.
A (stringy) moment away from their respective
singularities, these two branches can easily be joined smoothly to give
a single non-singular cosmology, at least mathematically.
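For homogeneous, spatially flat backgrounds, scale-factor duality takes a
simple explicit form (with $d$ spatial dimensions, $d=3$ here):
\begin{equation}
a(t)\;\rightarrow\;a(t)^{-1}\,,\qquad
\phi\;\rightarrow\;\phi-2d\ln a\,,
\end{equation}
under which the lowest-order equations of motion are invariant; in
particular, the shifted dilaton $\phi-d\ln a$ is unchanged.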
Leaving aside this issue for the moment (see Section V for more discussion),
let us go back to DDI.
Since such a phase is characterized by growing coupling and curvature, it
must itself have originated from
a regime in which both quantities were very small. We take this as
the main lesson/hint to be
learned from low-energy string theory by raising it to the level
of a new cosmological principle \cite{BDV} of ``Asymptotic Past Triviality".
\section {Asymptotic Past Triviality}
The concept of Asymptotic Past Triviality (APT)
is quite similar to that of ``Asymptotic Flatness",
familiar from General Relativity \cite{AF}. The main differences consist in making
only assumptions
concerning the asymptotic past (rather than future or space-like infinity)
and in the additional presence
of the dilaton. It seems physically (and philosophically)
satisfactory to identify the beginning with simplicity (see e.g. entropy-related
arguments
concerning the arrow of time). What could be simpler than a trivial,
empty and flat Universe? Nothing of course! The problem is that such a Universe,
besides being uninteresting, is also
non-generic. By contrast, asymptotically flat/trivial
Universes are initially simple, yet generic in a precise mathematical sense.
Their definition involves exactly the right number of arbitrary ``integration constants"
(here full functions of three variables) to describe a general solution (one with
some general, qualitative features, though). This is why, by its very construction,
this cosmology
cannot be easily dismissed as being fine-tuned.
It is useful to represent the situation in a Carter--Penrose diagram,
as in Fig. 2. Here past infinity
consists of two pieces: time-like past infinity, which is shrunk to a point $I_-$, and
past null-infinity, $\cal{I}_-$, represented by a line at $45$ degrees. Note that this
region of the diagram is ``non-physical" in FRW cosmology, since it lies behind
(i.e. before)
the big bang singularity (also shown in the diagram). Instead, we shall be giving
initial data infinitesimally close to
$I_-$ and $\cal{I}_-$, and ask whether they will evolve in such a way as to generate
a physically interesting big bang-like state at some later time.
Generating so much from so little looks a bit like a miracle. However,
we will argue that it is precisely what
should be expected, owing to well-known classical and quantum gravitational
instabilities.
\begin{figure}
\hglue 5cm
\epsfig{figure=VeneNobelFig2.eps,width=8cm}
\caption[]{}
\end{figure}
\section {Inflation as a classical gravitational instability}
The assumption of APT entitles us to treat the early history
of the Universe
through the classical field equations of the low-energy
(because of the small curvature) tree-level (because of the weak coupling)
effective action
of string theory. For simplicity, we will illustrate here the simplest case
of the gravi-dilaton system already compactified to
four space-time dimensions. Other fields and extra dimensions
will be mentioned below, when we discuss observable consequences.
The (string frame) effective action then reads:
\begin{equation}
\Gamma_{eff} = \frac{1}{2 \lambda^{2}_s} \int d^4x \sqrt{-g}~ e^{-\phi}~
({\cal R}
+ \partial_\mu \phi \partial^\mu\phi) \; .
\label{leaction}
\end{equation}
In this frame, the string-length parameter $\lambda_s$ is a constant and the same is true of
the curvature scale at which we have to supplement eq. (\ref{leaction}) with corrections.
Similarly, string masses, when measured with the string metric, are fixed, while test strings
sweep geodesic surfaces with respect to that metric.
For all these reasons, even if we will allow metric
redefinitions in order to simplify our calculations, we shall eventually
turn back to the string frame for the physical interpretation of the results. We stress, however,
that, while our intuition is not frame independent,
physically measurable quantities are.
Even assuming APT, the problem of determining the properties of a generic
solution to the field equations implied by (\ref{leaction}) is a formidable one.
Very luckily, however, we are able to map our problem into one that has been much investigated,
both analytically and numerically, in the literature. This is done by going to the so-called
``Einstein frame". For our purposes, it simply amounts to the field redefinition
\begin{equation}
g_{\mu\nu} = g^{(E)}_{\mu\nu} e^{\phi - \phi_0} \; ,
\label{ESF}
\end{equation}
in terms of which (\ref{leaction}) becomes:
\begin{equation}
\Gamma_{eff} = \frac{1}{2 l^{2}_P} \int d^4x \sqrt{-g^{(E)}}
~~\left({\cal R}^{(E)}
- \frac{1}{2} g_{(E)}^{\mu\nu} \partial_\mu \phi \partial_\nu\phi\right) \; ,
\label{EFaction}
\end{equation}
where $\phi_0$ is the present value of the dilaton and $l_P = \lambda_s e^{\phi_0/2}$
is Planck's length.
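It may be worth making the map explicit (a standard check, spelled out here for convenience).
Writing (\ref{ESF}) as $g_{\mu\nu} = e^{2\omega} g^{(E)}_{\mu\nu}$ with $2\omega = \phi - \phi_0$,
the $D=4$ conformal-transformation rules give
\begin{equation}
\sqrt{-g} = e^{4\omega} \sqrt{-g^{(E)}} \; , \qquad
{\cal R} = e^{-2\omega} \left( {\cal R}^{(E)} - 6\, \Box^{(E)} \omega
- 6\, g_{(E)}^{\mu\nu} \partial_\mu \omega\, \partial_\nu \omega \right) \; ,
\end{equation}
so that every explicit dilaton exponential in (\ref{leaction}) combines into the constant
$e^{-\phi_0}$:
\begin{equation}
\frac{e^{-\phi}}{2 \lambda_s^2} \sqrt{-g} \left( {\cal R}
+ \partial_\mu \phi \partial^\mu \phi \right)
= \frac{e^{-\phi_0}}{2 \lambda_s^2} \sqrt{-g^{(E)}} \left( {\cal R}^{(E)}
- 3\, \Box^{(E)} \phi
- \frac{1}{2}\, g_{(E)}^{\mu\nu} \partial_\mu \phi \partial_\nu \phi \right) \; .
\end{equation}
Dropping the total derivative $\sqrt{-g^{(E)}}\, \Box^{(E)} \phi$ then reproduces
(\ref{EFaction}), with $l_P^2 = \lambda_s^2 e^{\phi_0}$.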
Our problem is thus reduced to that of studying a massless scalar field
minimally coupled to gravity. Such a system has been considered by many authors,
in particular by Christodoulou \cite{Chr},
precisely in the regime of interest to us. In line with the
APT postulate, in the analogue gravitational collapse problem,
one assumes very ``weak" initial data
with the aim of finding under which conditions gravitational collapse later occurs.
Gravitational collapse means that the (Einstein) metric (and the volume of 3-space)
shrinks to zero at
a space-like singularity. However, typically,
the dilaton blows up at that same singularity. Given the relation (\ref{ESF})
between the Einstein and the (physical) string metric, we can easily imagine that
the latter blows up near the singularity as implied by DDI.
How generically does this happen? In this connection it is crucial to recall
the singularity theorems of Hawking and Penrose \cite{HP}, which state that, under some
general assumptions, singularities are inescapable in GR.
One can check the validity of those assumptions in the case at hand and find that
all but one are automatically satisfied. The only condition to be
imposed is the existence of a closed trapped surface (a closed surface
from where future light cones lie entirely in the region inside the surface).
Rigorous results \cite{Chr} show that this condition cannot be waived:
sufficiently weak initial data
do not lead to closed trapped surfaces, to collapse, or to singularities. Sufficiently
strong initial data do. But where is the border-line? This is not known in general,
but precise criteria do exist for particularly symmetric space-times, e.g. for those
endowed with spherical symmetry.
However, no matter what the general collapse/singularity criterion
eventually turns out to
be, we do know that:
\begin{itemize}
\item it cannot depend on an over-all additive constant in $\phi$;
\item it cannot depend on an over-all multiplicative factor in $g_{\mu\nu}$.
\end{itemize}
This is a simple consequence of the invariance (up to an over-all factor) of the effective
action (\ref{EFaction}) under shifts of the dilaton and rescaling of the metric (these
properties depend crucially on the validity of the tree-level low-energy approximation
and on the absence of a cosmological constant).
We conclude that, generically, some regions of space will undergo gravitational
collapse, will form horizons and singularities therein, but nothing, at the level
of our approximations, will be able to fix either the size of the horizon
or the value of $\phi$ at the onset of collapse. When this is translated
into the string frame,
one is describing, in the region of space-time within the horizon, a period of DDI
in which both the initial value of the Hubble parameter and that of $\phi$ are left
arbitrary. These two initial parameters are very important, since they
determine the range of validity of our description. In fact, since both curvature and
coupling increase during DDI, at some point the low-energy and/or tree-level
description is bound to break down. The smaller the initial Hubble parameter (i.e. the larger
the initial horizon size) and the smaller the initial coupling, the longer we can follow DDI
through the effective action equations and the larger the number of
reliable e-folds that we shall gain.
This, in my opinion, answers the objections recently raised \cite{TW}
that the PBB
scenario is fine-tuned. The situation here actually resembles that of
chaotic inflation \cite{chaotic}. Given some generic (though APT) initial data, we should ask
which is the distribution of sizes of the collapsing regions and of couplings therein.
Then, only the ``tails" of these distributions, i.e. those corresponding to
sufficiently large,
and sufficiently weakly coupled regions, will produce Universes like ours, the rest will not.
The question of how likely a ``good" big bang is to take place is not very well posed
and can be greatly affected by anthropic considerations.
In conclusion, we may summarize recent progress on the problem of
initial conditions by saying that \cite{BDV}:
{\bf Dilaton-driven inflation in string cosmology
is as generic as gravitational collapse in General Relativity.}
At the same time, having a sufficiently long period of
DDI amounts to setting upper limits on two arbitrary
moduli of the classical solutions.
Our scenario is illustrated in Figs. 3 and 4, both taken
from Ref.\cite{BDV}.
In Fig. 3, I show, for the spherically symmetric case,
a Carter--Penrose diagram
in which generic (but asymptotically trivial) dilatonic waves are
given around time-like ($I^-$) and null ($\cal{I}^-$) past-infinity. In the
shaded region near $I^- , \cal{I}^-$, a weak-field solution holds. However,
if a collapse criterion
is met, an apparent horizon, inside which a cosmological (generally inhomogeneous) PBB-like
solution takes over, forms at some later
time. The future singularity of the PBB solution at $t=0$ is identified with the
space-like singularity of the black hole at $r=0$ (remember that $r$ is a
time-like coordinate inside the horizon).
Figure 4 gives a $(2+1)$-dimensional sketch of a possible PBB Universe: an original ``sea" of
dilatonic and gravity waves leads to collapsing regions of different initial size, possibly
to a scale-invariant distribution of them.
Each one of these collapses is reinterpreted, in the string frame, as the process by which
a baby Universe is born after a period of
PBB inflationary ``pregnancy", with the size of each baby Universe determined
by the duration of
its pregnancy, i.e. by the initial size of the
corresponding collapsing region. Regions initially
larger than $10^{-13}~ {\rm cm}$ can generate Universes like ours,
smaller ones cannot.
\begin{figure}
\hglue 5cm
\epsfig{figure=VeneNobelFig3.eps,width=9cm}
\caption[]{}
\end{figure}
\begin{figure}
\hglue 5cm
\epsfig{figure=VeneNobelFig4.eps,width=9cm}
\caption[]{}
\end{figure}
A basic difference between the large numbers needed
in (non-inflationary) FRW cosmology and the large numbers needed in PBB cosmology
should be stressed at this point.
In the former, the ratio of two classical scales, e.g. of total curvature to its
spatial component, which is expected to be $O(1)$,
has to be taken as large as $10^{60}$. In the latter, the above ratio is
initially $O(1)$ in the
collapsing/inflating region,
and ends up being very large in that region thanks to DDI.
However, the common order of magnitude of these two classical quantities
is a free parameter, and is taken to be
much larger than a classically irrelevant quantum scale.
We can visualize analogies and differences between standard and pre-big bang inflation
by comparing Figs. 5a and 5b. In these, we sketch the evolution of the Hubble radius
and of a fixed comoving scale (here the one corresponding to the part of
the Universe presently
observable to us) as a function of time
in the two scenarios. The common feature is that the fixed comoving scale was
``inside the horizon" for some time during inflation, and possibly very deeply inside
at its onset. Also, in both cases, the Hubble radius at the beginning of inflation
had to be large in Planck units and
the scale of homogeneity had to be at least as large. The difference between the two scenarios
is just in the behaviour of the Hubble radius during inflation: increasing in standard
inflation (a), decreasing in string cosmology (b). This is what makes PBB's ``wine glass"
more elegant, and stable! Thus, while standard inflation is still facing the
initial-singularity question and needs a non-adiabatic phenomenon to reheat the
Universe (a kind of small bang), PBB cosmology faces the singularity problem later,
combining it with the exit and heating problems (discussed in Sections
V and VIB, respectively).
In the end, what saves
PBB cosmology from fine-tuning is (not surprisingly!) supersymmetry. This is what
protects us from the appearance of a cosmological constant in the weak-coupling
regime. Even a relatively small cosmological constant would invalidate our
scale-invariance arguments and force DDI to be very short \cite{GV1}.
Thus, amusingly, while an effective cosmological constant is at the basis of
standard (post-big bang) inflation, its absence in the weak coupling region
is at the basis of PBB inflation. This may allow us to speculate that the
absence (or extreme smallness) of the present cosmological constant
may be related to a mysterious degeneracy between the perturbative
and the non-perturbative vacuum of superstring theory.
\section{The exit problem/conjecture}
We have argued that, generically, DDI, when studied at lowest
order in derivatives and coupling, evolves towards a singularity of the
big bang type. Similarly, at the same level of approximation,
the non-inflationary solutions
emerge from a singularity. Matching these two branches in a smooth, non-singular way
has become known as the (graceful) exit problem in string cosmology \cite{exit}.
It is, undoubtedly,
the most important theoretical problem facing the whole PBB scenario.
\begin{figure}
\hglue 5cm
\epsfig{figure=VeneNobelFig5a.eps,width=9cm}
\vglue 1 cm
\hglue 5cm
\epsfig{figure=VeneNobelFig5b.eps,width=9cm}
\caption[]{}
\end{figure}
There has been quite some progress recently on the exit problem. However,
for lack of space, I shall refer the reader to the literature \cite{exit} for details.
Generically speaking, toy examples have shown that DDI can flow, thanks to
higher-curvature corrections, into a de-Sitter-like phase, i.e. into a phase of
constant $H$ (curvature) and constant $\dot{\phi}$. This phase is expected to
last until loop corrections become important (see next section) and give rise to
a transition to a radiation-dominated phase. If these toy models serve as an indication,
the full exit can only be achieved at large coupling and curvature, a situation
that should be described by the newly invented M-theory \cite{M}.
It was recently pointed out \cite{MR} that the reverse order of events is also
possible. The coupling may become large {\it before} the curvature. In this case, at least
for some time, the low-energy limit of M-theory should be adequate: this limit is known \cite{M}
to give $D=11$ supergravity and is therefore amenable to reliable study. It is likely,
though not yet clear, that, also in this case, strong curvatures will
have to be reached before the exit
can be completed. In the following, we will assume that:
\begin{itemize}
\item the big bang singularity is avoided thanks to the softness of string theory;
\item full exit to radiation occurs at strong coupling and curvature,
according to a criterion
given in Section VIB.
\end{itemize}
\section{ Observable relics and heating the pre-bang Universe}
\subsection{PBB relics}
Since there are already several review papers on this subject
(e.g. \cite{GV95}),
I will limit myself to mentioning the most recent developments, after recalling the basic
physical mechanism underlying particle production in cosmology \cite{quantum}.
A cosmological (i.e. time-dependent) background coupled to a given
type of (small) inhomogeneous perturbation $\Psi$ enters
the effective low-energy action in the form:
\begin{equation}
I = \frac{1}{2} \int d\eta\ d^3x\ S(\eta) \left[ \Psi
^{\prime 2}- (\nabla \Psi )^2\right].
\label{spertact}
\end{equation}
Here $\eta$ is the conformal-time coordinate, and a prime denotes
$\partial/\partial\eta$. The function $S(\eta)$ (sometimes called
the ``pump" field) is, for any given $\Psi$, a given function of the
scale factor $a(\eta)$, and of other scalar fields (four-dimensional
dilaton $\phi(\eta)$,
moduli $b_i(\eta)$, etc.), which may appear non-trivially in the
background.
While it is clear that a constant pump field $S$ can be reabsorbed in a rescaling of $\Psi$,
and is thus ineffective, a time-dependent $S$ couples non-trivially to the fluctuation
and leads to the production of pairs of quanta (with equal and opposite momenta).
One can
easily determine the pump fields for each one of the most interesting perturbations.
The result is:
\begin{eqnarray}
\rm{Gravity~waves,~dilaton}&:& S = a^2 e^{-\phi} \nonumber \\
\rm{Heterotic~gauge~bosons}&:& S = e^{-\phi} \nonumber \\
\rm{Kalb-Ramond,~axions}&:& S = a^{-2} e^{-\phi}\; .
\end{eqnarray}
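To make the table concrete, here is a minimal numerical sketch that evaluates the three pump fields on a toy power-law background in which both $a$ and the coupling $e^\phi$ grow as $\eta \to 0^-$; the exponents `p` and `q` below are illustrative placeholders, not the actual dilaton-driven solution.

```python
import numpy as np

# Toy power-law background for eta < 0 (illustrative exponents only,
# NOT the actual dilaton-driven solution): a ~ (-eta)^p, e^phi ~ (-eta)^(-q),
# chosen so that both a and the coupling e^phi grow as eta -> 0^-.
p, q = -0.4, 1.0
eta = -np.logspace(2, 0, 200)   # conformal time, running from -100 to -1

a = (-eta) ** p                 # scale factor
exp_phi = (-eta) ** (-q)        # string coupling e^phi

# Pump fields for the three perturbation types listed above:
S_grav = a**2 / exp_phi         # gravity waves, dilaton:  a^2 e^{-phi}
S_gauge = 1.0 / exp_phi         # heterotic gauge bosons:  e^{-phi}
S_axion = a**-2 / exp_phi       # Kalb-Ramond, axions:     a^{-2} e^{-phi}
```

On such a background the dilaton always pushes the pump fields down as the coupling grows, which is the slow-down effect on $a$ referred to below for gravitational waves.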
A distinctive property of string cosmology is that the dilaton $\phi$ appears in some very
specific way in the pump fields. The consequences of this are very interesting:
\begin{itemize}
\item For gravitational waves and dilatons, the effect of $\phi$ is to slow down the behaviour
of $a$ (remember that both $a$ and $\phi$ grow in the pre-big bang phase).
This is the reason why those spectra
are quite steep \cite{BGGV} and give small contributions at large scales.
Thus one of the most robust predictions of PBB cosmology is a small
tensor component in the CMB anisotropy\footnote{This, however,
refers just to first-order tensor perturbations; the mechanism ---described below--- of
seeding CMB anisotropy through axions would also give a tensor (and a vector)
contribution whose relative
magnitude is being computed.}.
The reverse is also true: at short scales, the expected yield in a stochastic background
of gravitational waves is much larger than in standard inflationary
cosmology. This is easily understood: in standard inflation the GW spectrum
is either flat or slowly decreasing (as a function of frequency). Since COBE
data \cite{COBE}
set a limit on the GW contribution at large scales, this bound holds a fortiori
at shorter scales, as those of interest for direct
GW detection. Thus, in standard inflation, one expects
\begin{equation}
\Omega_{GW} < 10^{-14}\; .
\end{equation}
Since the GW spectra of PBB cosmology are ``blue", the bound by COBE
is automatically satisfied,
with no implication on the GW yield at interesting frequencies. Values of $\Omega_{GW}$ in
the range of
$10^{-6}$--$10^{-7}$ are possible in some regions of parameter space, which, according to
some estimates of sensitivities \cite{sens}, could be inside detection
capabilities in the near future.
\item For gauge bosons there is no amplification of vacuum fluctuations in standard
cosmology, since a conformally flat metric (of
the type forced upon by inflation) decouples from the
electromagnetic (EM) field precisely in $D=3+1$ dimensions.
As a very general remark, apart from pathological
solutions, the only background field which, through its cosmological variation,
can amplify EM (more generally gauge-field) quantum fluctuations is the effective gauge coupling
itself \cite{Ratra}. By its very nature,
in the pre-big bang scenario the effective gauge coupling inflates
together with space during the PBB phase.
It is thus automatic that any efficient PBB inflation brings together
a huge variation of the effective gauge coupling and
thus a very large amplification of the primordial
EM fluctuations \cite{GGV,BMUV2,BH}. This can possibly provide the long-sought origin
for the primordial seeds of the observed
galactic magnetic fields.
Notice, however, that, unlike GW, EM perturbations interact quite
considerably with the hot plasma of the early (post-big bang) Universe.
Thus, converting the primordial seeds into those that may have existed
at the proto-galaxy formation epoch is by no means a trivial exercise.
Work is in progress to try to adapt existing codes \cite{Ostreiker} to the evolution
of our primordial seeds.
\item Finally, for Kalb--Ramond fields and axions, $a$ and $\phi$ work in the same direction and
spectra can be large even at large scales \cite{Copeland}.
An interesting fact is that, unlike the GW spectrum, that of axions is very sensitive to
the cosmological behaviour of internal dimensions during the DDI epoch.
On one side, this makes the model less predictive. On the other, it tells us that
axions represent a window over the multidimensional cosmology expected generically
from string theories, which must live in more than four dimensions.
Curiously enough, the axion spectrum becomes exactly HZ (i.e. scale-invariant) when
all the nine spatial dimensions of superstring theory evolve in a rather
symmetric way \cite{BMUV2}.
In situations near this particularly symmetric one, axions are able to provide
a new mechanism for generating large-scale CMB anisotropy and LSS.
A recent calculation \cite{Durrer} of the effect gives, for massless axions,
\begin{equation}
l(l+1) C_l \sim O(1) \left({H_{max}\over M_P}\right)^4 (\eta_0 k_{max})^{-2\alpha}
{\Gamma(l+\alpha) \over \Gamma(l- \alpha)}\; ,
\end{equation}
where $C_l$ are the usual coefficients of the multipole expansion of $\Delta T/T$
\begin{equation}
\langle \Delta T/T(\vec{n})~~ \Delta T/T(\vec{n}')\rangle ~ = ~
\sum_l (2l+1) C_l P_l(\cos\theta)\; ,
\end{equation}
and the parameters $H_{max}, k_{max}, \alpha$ are defined by the primordial
axion energy spectrum in critical units as:
\begin{equation}
\Omega_{ax}(k) = \left({H_{max}\over M_P}\right)^2 (k/ k_{max})^{\alpha} \; .
\end{equation}
In string theory, as repeatedly mentioned, we expect $H_{max}/ M_P \sim M_s/M_P \sim 1/10$
and $\eta_0 k_{max} \sim 10^{30}$, while the exponent $\alpha$ depends on the explicit
PBB background with the above-mentioned HZ case corresponding to $\alpha =0$. The standard
tilt parameter $n = n_s$ ($s$ for scalar) is given by $n = 1 + 2 \alpha$ and is found, by COBE,
to lie between $0.9$ and $1.5$, corresponding to $0 < \alpha < 0.25$ (a negative $\alpha$ leads
to some theoretical problems). With these inputs we can see that the correct normalization
($C_2 \sim 10^{-10}$) is reached for $\alpha \sim 0.2$, which is just in the
middle of the allowed range. In other words, unlike in standard inflation, we cannot
predict the tilt, but when this is given, we can predict (again unlike in standard inflation)
the normalization.
Our model, being of the isocurvature type, bears some resemblance
to the one recently advocated by Peebles \cite{Peebles}
and, like his, is expected to contain some calculable amount
of non-Gaussianity, which is currently being
computed and will be checked by the future satellite measurements (MAP, PLANCK).
\item Many other perturbations, which arise in generic compactifications of superstrings,
have also
been studied, and lead to interesting spectra. For lack of time, I will refer to
the existing literature \cite{BMUV2,BH}.
\end{itemize}
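The axion-seed formula quoted above depends on $l$ only through the ratio $\Gamma(l+\alpha)/\Gamma(l-\alpha)$, which tends to $l^{2\alpha}$ at large $l$, so the tilt $\alpha$ directly sets the slope of $l(l+1)C_l$. A short numerical check (the function name is ours; `math.lgamma` is used to avoid overflow at large $l$):

```python
import math

def multipole_shape(l, alpha):
    """Gamma(l+alpha)/Gamma(l-alpha): the l-dependent factor of l(l+1)C_l."""
    return math.exp(math.lgamma(l + alpha) - math.lgamma(l - alpha))

alpha = 0.2   # middle of the COBE-allowed range 0 < alpha < 0.25
for l in (2, 10, 100, 1000):
    # for large l the ratio approaches l**(2*alpha), a gentle power-law tilt
    print(l, multipole_shape(l, alpha), l ** (2 * alpha))
```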
\subsection{Heat and entropy as a quantum gravitational instability }
Before closing this section, I wish to recall how one sees the very origin of the hot big bang
in this scenario. One can easily estimate the total energy stored in the
quantum fluctuations, which were amplified by the pre-big bang backgrounds.
The result is, roughly,
\begin{equation}
\rho_{quantum} \sim N_{eff} ~ H^4_{max} \; ,
\label{rhoq}
\end{equation}
where $N_{eff}$ is the effective number of species that are amplified and $H_{max}$ is the maximal
curvature scale reached around $t=0$. We have already argued that $H_{max} \sim M_s =
\lambda_s^{-1}$,
and we know that,
in heterotic string theory, $N_{eff}$ is in the hundreds. Yet this rather huge energy density
is very far from critical, as long as the dilaton is still in the weak-coupling region,
justifying our neglect of back-reaction effects. It is very tempting to assume \cite{BMUV2} that,
precisely when the dilaton reaches a value such that $\rho_{quantum}$ is critical, the Universe
will enter the radiation-dominated phase. This PBBB (PBB bootstrap) constraint gives, typically:
\begin{equation}
e^{\phi_{exit}} \sim 1/N_{eff}\;\; ,
\label{PBBB}
\end{equation}
i.e. a value for the dilaton close to its present value.
The entropy in these quantum fluctuations can also be estimated following
some general results \cite{entropy}. The result for the entropy density $S$ is, as expected,
\begin{equation}
S \sim N_{eff} H_{max}^3\;.
\end{equation}
It is easy to check that, at the assumed time of exit given by (\ref{PBBB}),
this entropy saturates a
recently proposed holography bound \cite{FS}. This also turns out to be a physically
acceptable value for the entropy of the Universe just after the big bang: a large entropy
on the one hand (about $10^{90}$); a small entropy for the total mass and size of the observable
Universe on the other, as often pointed
out by Penrose \cite{PenEntr}. Thus, PBB cosmology neatly explains why the
Universe, at the big bang,
looks so fine-tuned (without being so) and provides a natural arrow of time in the direction
of higher entropy.
\section{Conclusions}
\begin{itemize}
\item
Pre-big bang (PBB) cosmology is a ``top--down" rather than a ``bottom--up" approach to cosmology.
This should not be forgotten when testing its predictions.
\item
It does not need to invent an inflaton, or to fine-tune its potential; inflation is
``natural" thanks to the duality symmetries of string cosmology.
\item
It makes use of a classical gravitational instability to inflate the Universe,
and of a quantum instability to warm it up.
\item
The problem of initial conditions ``decouples" from the singularity problem;
it is classical, scale-free, and unambiguously defined. Issues of fine tuning
can be addressed and, I believe, answered.
\item The spectrum of large-scale perturbations has become more promising
through the invisible axion of string theory, while the possibility of explaining the seeds of
galactic magnetic fields remains a unique prediction of the model.
\item The main conceptual (technical?) problem remains that of providing
a fully convincing mechanism for (and a detailed description of) the
pre-to-post-big bang transition. It is very likely that such a mechanism will involve both high
curvatures and large coupling and should therefore be discussed in the (yet to be
fully constructed) M-theory \cite{M}. New ideas borrowed from
such theory and from D-branes \cite{branes,MR} could help in this respect.
\item
Once/if this problem is solved, predictions will become more precise and robust, but,
even now, with some mild assumptions, several tests are (or will soon become) possible, e.g.
\begin{itemize}
\item the tensor contribution to $\Delta T/T$ should be very small
(see, however, footnote Section VI);
\item some non-Gaussianity in $\Delta T/T$ correlations is expected, and calculable;
\item the axion-seed mechanism should lead to
a characteristic acoustic-peak structure, which is being calculated;
\item it should be possible to convert
the predicted seed magnetic fields into observables by
using some reliable code for their late evolution;
\item a characteristic spectrum of stochastic gravitational waves is expected to surround us
and could be large enough to be measurable within a decade or so.
\end{itemize}
\end{itemize}
\section{ Experimental Results}
\hskip 2cm PS is made by electrochemical etching of p (p$^{+}$)
c-Si in a 1:1 HF:ethanol solution at a current density between 10 and 50
mA/cm$^{2}$. Free-standing samples are lifted off the substrate by
increasing the current density to $>$ 200 mA/cm$^{2}$. The samples are
then washed in water to get rid of HF, rinsed in n-pentane and finally
dried in air on a glass substrate for transmission measurements.
\par
\hskip 2cm Transmission measurements are carried out using a CVI 240
Digikrom monochromator with a tungsten lamp as the source, a silicon detector
and an SR 530 lock-in amplifier. For some samples we also use a Cary 1756
spectrometer. We have checked that internal multiple scattering does not
dominate the absorption in our samples. This is done by making two PS
samples of different thicknesses ($5\mu$m and $10\mu$m, respectively) and
ensuring that the transmission curves are the same for both samples at low
absorption$^{(11)}$.
\par
\hskip 2cm The absorption coefficient $\alpha(\lambda)$ as a function of
wavelength $\lambda$ is obtained from the normalised transmittance
T($\lambda$) using
$$T(\lambda) = {(1 - R)^2 exp(-\alpha (\lambda)(1 - P)t)\over 1 - R^2
exp(-2\alpha (\lambda)(1 - P)t)}
\eqno(2)
$$
\hskip 2cm The reflectivity (R) is obtained in the low $\alpha$ region
using the relation R = (1-T)/(1+T). P is the porosity of the layer and t
is its thickness.\\
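Since eq. 2 is quadratic in $x = \exp(-\alpha(1-P)t)$, it can be inverted in closed form for $\alpha$. A minimal sketch (function names and the sample values are ours):

```python
import math

def transmittance(alpha, R, P, t):
    """Eq. (2): normalized transmittance of a free-standing film of
    thickness t, porosity P, reflectivity R."""
    x = math.exp(-alpha * (1 - P) * t)
    return (1 - R)**2 * x / (1 - R**2 * x**2)

def absorption_coefficient(T, R, P, t):
    """Invert eq. (2) for alpha: in x = exp(-alpha(1-P)t) it reads
    T*R^2*x^2 + (1-R)^2*x - T = 0, and the positive root gives alpha."""
    a, b, c = T * R**2, (1 - R)**2, -T
    x = (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)   # 0 < x <= 1
    return -math.log(x) / ((1 - P) * t)

# Round-trip check with illustrative numbers (t = 5 micron = 5e-4 cm, P = 0.5):
T = transmittance(1.0e3, R=0.18, P=0.5, t=5e-4)
print(absorption_coefficient(T, R=0.18, P=0.5, t=5e-4))   # recovers ~1.0e3 cm^-1
```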
\par
\hskip 2cm The porosity of PS is usually determined by
gravimetric measurements, which also destroy the sample. We now show that
an optical transmission experiment allows us to measure the porosity in a
$nondestructive$ fashion.\\
\par
\hskip 2cm The normalized transmittance (in eq. 2) for $\alpha$ = 0
reduces to
$$ T_L = {(1 - R)\over (1 + R)} \quad {\rm or} \quad R = {1 - T_L\over 1 + T_L}
\eqno(3)
$$
\par
\hskip 2cm If n is the real part of the refractive index, then the
reflectivity (for $\alpha$ = 0) can also be written as
$$ R = {(n-1)^2\over(n+1)^2}
\eqno(4)
$$
\hskip 2cm It follows from eq.4 that the refractive index (n$_{PS}$)
of PS is
$$ n_{PS} = {1 + \sqrt{R}\over 1 - \sqrt{R}}
\eqno(5)
$$
\par
\hskip 2cm We model PS as consisting of two media: air (with
refractive index $n_{air}$ = 1) and crystalline silicon (n$_{Si}$ =
3.44). If P is the porosity of a sample, then we can write
$$ n_{air}P+n_{Si}(1-P) = n_{PS}
\eqno(6)
$$
$$ P = {(n_{Si} - n_{PS})\over (n_{Si} - 1)}
\eqno(7)
$$
\hskip 2cm The porosity can now be obtained using equations 4 to 7.
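The chain of eqs. 3 to 7 is short enough to script directly. A sketch (the function name and the sample transmittance are ours):

```python
import math

N_SI = 3.44   # refractive index of crystalline silicon, as used in the text

def porosity_from_transmission(T_L):
    """Porosity from the low-absorption (alpha = 0) transmittance T_L."""
    R = (1 - T_L) / (1 + T_L)                        # eq. (3)
    n_ps = (1 + math.sqrt(R)) / (1 - math.sqrt(R))   # eqs. (4)-(5)
    return (N_SI - n_ps) / (N_SI - 1)                # eqs. (6)-(7)

# e.g. a film transmitting 70% in the transparent region:
print(porosity_from_transmission(0.70))   # ~0.41
```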
\par
\hskip 2cm To check the validity of eq. 7, we calculate the porosity
(using eq. 7) and compare it with the corresponding values of P obtained
gravimetrically. Figure 1 is a plot of the porosity determined from the
transmission curves against the corresponding porosity determined
experimentally by gravimetric measurements, as reported in the
literature$^{(9, 14)}$. We see from figure 1 that the points nearly fit a straight line
with slope one. We hence conclude that optical transmission measurements
can reliably be used to obtain the porosity of porous silicon in a
$non-destructive$ fashion.
\par
\hskip 2cm Figure 2 shows the normalized transmission data on free-standing
PS films made from a p-type (resistivity 4 ohm-cm) Si substrate
and also from a p$^{+}$ Si (0.01 ohm-cm) substrate. We see from the figure
that for the PS grown on p-type Si, the onset of low absorption occurs at
a smaller wavelength than that for PS grown on p$^{+}$ Si. This signifies
that the p PS has a higher band gap than p$^{+}$ PS, in agreement
with the reported results$^{(10)}$.
\par
\hskip 2cm Figure 3 is a plot of log($\alpha$) vs E at 300 K and at 100
K. The absorption at low temperature is reduced with respect to that at
room temperature. We see that the two curves exhibit a rigid shift, in
agreement with other results$^{(10)}$. An explanation of this phenomenon
on the basis of our simulated results is given in section IV.
\\
\par
\hskip 2cm We now attempt to understand the optical absorption process in
PS with the help of some model calculations and simulations, assuming
that PS has a distribution of band gaps.\\
\section{ A Model for Absorption in PS having a distribution of band gaps}
\hskip 2cm We model the nanostructures that make up PS as 1D
parallelepipeds of square cross section of side d and constant length L.
Since d is typically 20--40 \AA, the band gap of the material is
enhanced due to quantum size effects. We assume that the
effective band gap (lowest energy gap) for a particular parallelepiped
is the minimum separation between the conduction band and the valence band
states for quantum numbers n$_{x}$ = 1 and n$_{y}$ = 1. We also assume
that the absorption takes place in the bulk.
\par
\hskip 2cm The absorption coefficient $\alpha$(E) for a particular value
of incident energy E can be written as
$$
\alpha (E) = \int_{ E_G<E} \rho^J (E - E_G) F(E_G) P(E_G)dE_G
\eqno(8)
$$
\par
\noindent where the integration is done over all the band gaps (E$_{G}$)
of the system below the incident energy E. In equation 8, $\rho^{J}$ is
the joint density of states, F(E$_{G}$) the oscillator strength for the
optical transition and P(E$_{G}$) the distribution of band gaps E$_{G}$ in
the material. We now discuss eq. 8 in some detail.
\par
\hskip 2cm As we have already pointed out in section I, if g(E) = B/$\sqrt
E$ then the JDOS is $\rho^J =A (E - E_G)^{-W}$, where A is a constant. The
value of W depends on whether PS is a direct or an indirect band gap
material: W = 0.5 for a direct band gap material and W = 0 for an indirect
band gap material.
\par
\hskip 2cm F(E$_{G}$) is the oscillator strength governing the
absorption. For an indirect band gap material, the oscillator strength can
be enhanced at small d$^{(15)}$. This happens because the overlap
between the electron and hole wavefunctions in k space increases as a
result of quantum confinement, contributing to an increase in the
oscillator strength at small d (large $E_{G}$)$^{(16, 17)}$; it can be
written as F = f/d$^{\gamma}$, where $\gamma$ = 5 to 6. This manifests
itself as the no-phonon line which has been seen in the luminescence
spectra of PS$^{(18)}$.
\par
\hskip 2cm The third term in eq. 8, viz. $P(E_G)$, is the distribution of
band gaps in PS. The band gap distribution is obtained by assuming a
distribution of sizes d and a relation governing the upshift in energy
$\Delta$E with the size d (due to quantum confinement).
\\
\par
\hskip 2cm In our model we have considered two possible distributions of
sizes: a Gaussian and a Lognormal$^{( 19 )}$ distribution,
given by $P^G$(d) and $P^L$(d) respectively.
$$ P^G(d) = {1\over \sigma\sqrt{2\pi}}\, \exp\left(-\,{(d - d_0)^2\over
2\sigma^2}\right)
\eqno(9)
$$
\noindent where d$_{0}$ is the mean size and $\sigma$ the
standard deviation.
$$ P^L(d) = {1\over\sigma_L\, d\,\sqrt{2\pi}}\, \exp\left[-\,{(\ln d - m_0)^2\over
2{\sigma_L}^2}\right]
\eqno(10)
$$
\noindent where m$_{0}$ = ln(d$_{0}$), and $\sigma_{L}$ = ln($\sigma$),
$d_{0}$ being the mean size and $\sigma$ the standard deviation.\\
\par
\hskip 2cm Electron microscopy measurements$^{( 2, 3, 20 )}$ suggest
that for p-type PS, d$_{0}$ is around 30 $\AA$ and $\sigma \approx$ 4 $\AA$.
Figure 4 shows the distribution of sizes for Lognormal and Gaussian
distributions for d$_{0}$ = 30 \AA ~and $\sigma$ = 4 \AA .
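As a numerical cross-check, the two size distributions can be coded directly from eqs. 9 and 10 as written, including the $\sigma_L = \ln\sigma$ convention defined below eq. 10, and verified to be normalized for d$_0$ = 30 \AA~and $\sigma$ = 4 \AA. A minimal sketch (our addition, not part of the original analysis; function names are ours):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (kept explicit to avoid NumPy version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def p_gauss(d, d0=30.0, sigma=4.0):
    # Eq. 9: Gaussian distribution of sizes
    return np.exp(-(d - d0) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

def p_lognormal(d, d0=30.0, sigma=4.0):
    # Eq. 10: Lognormal distribution with m0 = ln(d0) and sigma_L = ln(sigma)
    m0, sigma_L = np.log(d0), np.log(sigma)
    return np.exp(-(np.log(d) - m0) ** 2 / (2.0 * sigma_L ** 2)) / (sigma_L * d * np.sqrt(2.0 * np.pi))

# both densities integrate to unity over d > 0
d_lin = np.linspace(1e-3, 60.0, 20000)   # covers the narrow Gaussian
d_log = np.logspace(-3, 6, 20000)        # wide grid for the broad lognormal tail
print(trapz(p_gauss(d_lin), d_lin))      # ~1.0
print(trapz(p_lognormal(d_log), d_log))  # ~1.0
```

Note that with $\sigma_L = \ln 4 \approx 1.39$ the lognormal is far broader than the Gaussian of the same nominal $\sigma$, which is why a logarithmic grid is needed for its normalization check.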
\par
\hskip 2cm The energy upshift $\Delta$E due to quantum confinement can be
written as
$$
\Delta E = E_G - E_g = C/d^X
\eqno(11)
$$
\noindent where E$_{g}$ is the crystalline Si fundamental
indirect band gap $\approx$ 1.17 eV and E$_{G}$ is the increased band gap
due to quantum confinement.\\
\par
\hskip 2cm The value of X is 2 in the usual effective mass approximation.
However, for nanocrystals, the effective mass itself becomes size
dependent and the exponent X has been reported to vary from 1.2 - 1.8 and
becomes 2 at large size$^{( 15, 21 )}$. For the purpose of our simulation
we consider two cases:\\
\hskip 3cm a) X = 2, C = 486 eV \AA$^{2}$ (E$_{G0}$ = 1.71 eV for d = 30
\AA)$^{( 20, 22 )}$, \\
\hskip 3cm and\\
\hskip 3cm b) X = 1.4, C = 126 eV \AA$^{1.4}$ (E$_{G0}$ = 2.25 eV for d =
30 \AA)$^{( 21 )}$.
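The quoted E$_{G0}$ values follow directly from eq. 11; a one-line check (function name is ours):

```python
E_g = 1.17  # fundamental indirect gap of c-Si (eV)

def band_gap(d, C, X):
    # Eq. 11: E_G = E_g + Delta_E, with Delta_E = C / d^X and d in Angstrom
    return E_g + C / d ** X

print(round(band_gap(30.0, 486.0, 2.0), 2))  # case (a): 1.71 eV
print(round(band_gap(30.0, 126.0, 1.4), 2))  # case (b): 2.25 eV
```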
\par
\hskip 2cm The interaction volume of the columnar structures with the
incident light is V(d) = L d$^{2}$. The resulting distribution R(d) = P(d)
$\times$ V(d) is properly normalized so as to have an equal number of dipole
oscillators in each case. We now make a change of variable from d to
E$_{G}$ (E$_{G}$ = E$_{g}$ + C/d$^{X}$) to get the corresponding
distribution P(E$_{G}$) of the band gaps.\\
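The change of variable can also be checked by direct Monte Carlo sampling instead of analytically. The sketch below (our addition) assumes case (a), X = 2, with the Gaussian size distribution and the volume weighting V(d) $\propto$ d$^2$ described above:

```python
import numpy as np

rng = np.random.default_rng(0)
E_g, C, X = 1.17, 486.0, 2.0              # case (a) parameters of eq. 11
d = rng.normal(30.0, 4.0, 200000)         # Gaussian sizes: d0 = 30 A, sigma = 4 A
d = d[d > 0]                              # drop unphysical sizes (negligible fraction)

w = d ** 2 / np.sum(d ** 2)               # weight by interaction volume V(d) ~ L d^2
E_G = E_g + C / d ** X                    # transform each size to its band gap

mean_EG = float(np.sum(w * E_G))
print(mean_EG)                            # volume-weighted mean gap, ~1.70 eV
```

The weighted mean lands close to E$_{G0}$ = 1.71 eV, as expected for a size distribution that is narrow compared with its mean.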
\par
\hskip 2cm For a Gaussian distribution of sizes (eq. 9), the distribution
of band gaps is \\
$$ P^G(E_G) = P_0\, (E_G - E_g)^{(-X-3)/X} \exp\left[-A\left(\left({E_{G0}
- E_g\over E_G - E_g}\right)^{1/X}-1\right)^2\right]
\eqno(12)
$$
\noindent where P$_{0}$ is a normalization constant, A =
${1\over2}{(d_{0}/\sigma)}^{2}$, and E$_{b}$ is the maximum value of E$_{G}$.
\par
\hskip 2cm Following a similar procedure for the Lognormal distribution
(eq.10) we have \\
$$ P^L(E_G) = R_0\, (E_G - E_g)^{(-X-2)/X}\,\sigma_L^{-1}
\exp\left[-B\left({\ln\left[C/(E_G - E_g)\right]^{1/X}\over \ln\left[C/(E_{G0} - E_g)\right]^{1/X}} -1\right)^2\right]
\eqno(13)
$$
\noindent where R$_{0}$ is the normalization constant and B = 0.5
(m$_{0}$/$\sigma_L$)$^{2}$. \\
\par
\hskip 2cm Fig. 5 shows the distribution of band gaps P(E$_{G}$) when the
energy upshift is calculated with X = 1.4 and X = 2. For the Lognormal case,
we see that P(E$_{G}$) peaks close to the band gap of c-Si.\\
\par
\hskip 2cm Equations 8, 12 and 13 are used to calculate $\alpha(E)$ for each
incident energy E = $\hbar\omega$.
\par
\hskip 2cm $\alpha(E)$ for a Gaussian distribution of sizes is given by
$$
\alpha^G(E) = \alpha_0 \int^{E_b}_{0}(E - E_G)^{-W}\, (E_G - E_g)
^{(\gamma-X-3)/X} \exp\left[-A\left(\left({E_{G0} - E_g\over E_G - E_g}\right)^{1/X}-1\right)^2\right] dE_G
\eqno(14)
$$
\noindent where $\alpha_0$ is a normalization constant .\\
\par
\hskip 2cm For a Lognormal distribution of sizes $\alpha(E)$ is
$$
\alpha^L(E) = \beta_0 \int^{E_b}_{0}(E - E_G)^{-W}\, (E_G - E_g)
^{(\gamma-X-2)/X}\, \sigma_L^{-1} \exp\left[-B\left({\ln\left[C/(E_G - E_g)\right]^{1/X}\over
\ln\left[C/(E_{G0} - E_g)\right]^{1/X}} -1\right)^2\right] dE_G
\eqno(15)
$$
\noindent where $\beta_0$ is a normalization constant. \\
\par
\hskip 2cm Equations (14) and (15) are used to simulate the various possible
absorption processes.
\vskip 2cm
\section{ Results of the simulation }
\hskip 2cm In this section, we present the results for the absorption
coefficient as a function of energy for different cases on the basis of
the above model. The values of the various exponents for the different
physical situations are summarized in Table I.\\
\par
(1) Direct band gap\\
\par
\hskip 2cm Figure 6 shows $\alpha(\hbar\omega)$ vs $\hbar\omega$ for a
direct band gap material, where k is conserved in the optical transition
and no size dependence of the oscillator strengths ($\gamma$ = 0) is
considered (cases 1 and 3 and cases 5 and 7 in Table I). To facilitate
comparison between the various cases, we have normalized the absorption to
its peak value in each case. We see from the figure that the absorption
tends to fall off at high energy, in sharp contrast with the
experimental data. These results of the simulation can be easily
understood as follows.
\\
\par
\hskip 2cm For a 1D system, the JDOS has a singularity at E = $E_{G}$
and falls off for E $>$ E$_{G}$. The dominant contribution to each
$\alpha(E)$ is hence only from the particular set of nanostructures with
band gap $E_G$ = E; all other $E_G$s make almost no contribution. The
simulated $\alpha(E)$ vs E plot therefore mimics P($E_{G}$).
We hence conclude from the simulation and the experimental data that
PS is not a direct band gap material, and we will not consider this case
further.\\
\par
(2) Indirect band gap\\
\par
\hskip 2cm We now consider the case when momentum (k) is not conserved
in the optical transition. Here we discuss two cases: size independent
oscillator strengths ($\gamma$ = 0) and size dependent oscillator
strengths (assuming $\gamma$ = 6). \\
\par
\hskip 2cm We first consider the situation where the oscillator strength
has no size dependence. Figure 7 (curves (a) and (c)) is a plot of the
calculated $\alpha$ vs $\hbar\omega$ for a Gaussian distribution of sizes
(cases 2 and 4 of Table I). Figure 8 (curves (a) and (c)) is the
corresponding plot for a Lognormal distribution of sizes
(cases 6 and 8 of Table I). Also shown in the figures is the
experimental curve for a p-type PS sample. (The dip in $\alpha$ near 2.2 eV
in the experimental curve is an artifact related to a filter change.) To
facilitate comparison, the calculated $\alpha$ is scaled to match the
experimental curve near 3 eV. We see from plots 7(a) and 7(c) that
$\alpha$ initially increases with energy and then saturates, at variance
with the experimental results. We understand this result qualitatively as
follows. In section I, we have already mentioned that the JDOS is a
constant independent of energy for $\hbar\omega$ $>$ E$_{G}$ and zero for
$\hbar\omega$ $<$ E$_{G}$. Hence for $\hbar\omega$ $>$ $E_G$, a given
nanostructure contributes equally at all energies to $\alpha$. However,
P(E$_{G}$) falls off as E$_{G}$ increases. Nanostructures having a band
gap ($E_G$) above a certain cutoff in energy (in the high energy tail of
the distribution) do not contribute significantly to $\alpha(\hbar\omega)$,
as P(E$_{G}$) is very small there. This gives rise to the saturation of
$\alpha(E)$ in figures 7(a) and 7(c). The saturation is more obvious in
figures 8(a) and 8(c), because P(E$_{G}$) falls off very rapidly for a
Lognormal distribution of sizes. These results also do not agree with the
experimental data. \\
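The saturation described above can be reproduced by evaluating eq. 14 numerically. The sketch below (our addition) assumes the case (a) constants with W = 0 (indirect gap) and $\gamma$ = 0, and works with an unnormalized $\alpha$:

```python
import numpy as np

# case (a): X = 2, E_G0 = 1.71 eV; indirect gap (W = 0), no size dependence (gamma = 0)
E_g, E_G0, X, W, gamma = 1.17, 1.71, 2.0, 0.0, 0.0
A = 0.5 * (30.0 / 4.0) ** 2               # A = (d0 / sigma)^2 / 2

def alpha(E, n=4000):
    # unnormalized eq. 14, integrated over E_g < E_G < E (trapezoidal rule)
    E_G = np.linspace(E_g + 1e-4, E - 1e-4, n)
    y = ((E - E_G) ** (-W)
         * (E_G - E_g) ** ((gamma - X - 3.0) / X)
         * np.exp(-A * (((E_G0 - E_g) / (E_G - E_g)) ** (1.0 / X) - 1.0) ** 2))
    return float(np.sum((y[1:] + y[:-1]) * np.diff(E_G) / 2.0))

# alpha rises through the mean-gap region and then saturates at high energy
print(alpha(2.0), alpha(3.0), alpha(4.0))
```

With W = 0 each nanostructure contributes equally at all energies above its gap, so $\alpha$(E) is just the cumulative weight of gaps below E: it grows through the peak of P(E$_G$) and is essentially flat once the tail of the distribution is exhausted.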
\par
\hskip 2cm The situation is not remedied even if we include higher excited
states (n$_{x}$ = 2, n$_{y}$ = 1; etc.) in the energy range considered in
our calculation. Based on the above discussion, we see that for $\alpha$
to increase with $\hbar\omega$, it is necessary to increase the
contribution to $\alpha$ from the nanostructures in the high energy region
of $\alpha(E)$. This is possible if the oscillator strength is enhanced
for the nanostructures with large $E_G$ (smaller sizes). \\
\par
\hskip 2cm We now consider the case when F($E_G$) varies as ${
f/d^\gamma}$. Figure 7 (curves (b) and (d), for cases 2 and 4 in Table I)
and figure 8 (curves (b) and (d), for cases 6 and 8 in Table I) are plots of
$\alpha(\hbar\omega)$ vs $\hbar\omega$ for $\gamma$ = 6. For the Gaussian
distribution that we have chosen, we see that the absorption still
saturates, although the onset of saturation moves to slightly higher
energy. This is a measure of the contribution of the (high energy) tail of
$P(E_G)$ for the Gaussian case to the total absorption. For the Lognormal
distribution, we see from figure 8 (curves (b) and (d)) that
${\alpha(\hbar\omega)}$ vs $\hbar\omega$ now qualitatively resembles the
experimental curve. \\
\par
\hskip 2cm From a comparison of figures 7 and 8, we find that in order for
the absorption coefficient to increase with incident energy, it is
important that the contributions to $\alpha$ from the low energy regions
of P$(E_G)$ do not dominate the optical absorption at large $\hbar\omega$.
This condition is also satisfied if the distribution of band gaps P(E$_G$) is
sufficiently broad. We illustrate this in Fig. 9, where we plot the
calculated $\alpha$ for a Gaussian distribution of band gaps
with the mean energy at 2.25 eV and a variance of 0.75 eV. Hence we find
that the nature of the band gap distribution $P(E_{G})$ plays a crucial
role in determining the shape of the absorption spectrum.\\
\par
\hskip 2cm We conclude that to properly account for the optical
absorption in PS we need to assume \\
\par
\hskip 2cm (1) PS is a pseudo 1D indirect band gap semiconductor.\\
\par
\hskip 2cm (2) The oscillator strength is size dependent and is given by a
power law ${f/d^{\gamma}}$.\\
\par
\hskip 2cm (3) The distribution $P(E_{G})$ also plays an important role.
Assuming a Lognormal distribution of the diameters of the nanostructures
in porous silicon appears to account naturally for the absorption spectrum.\\
\par
\hskip 2cm All these considerations are in general agreement with the
results of luminescence experiments on PS. The low temperature
luminescence of PS excited near the peak of the broad luminescence
spectrum exhibits steps which have been associated with the characteristic
TO mode of c-silicon$^{( 23 )}$. More recently$^{( 18 )}$, well resolved
peaks corresponding to the TO mode of Si have been reported in the
luminescence of PS, suggesting that PS is an indirect band gap
semiconductor. The observed dominant zero phonon component of the
luminescence constitutes direct evidence for the enhancement of the
oscillator strength due to strong quantum confinement at smaller
sizes$^{( 18, 23, 24 )}$. \\
\par
\hskip 2cm We now comment on the validity of the procedure for obtaining
the band gap of PS from $\sqrt{\alpha\hbar\omega}$ vs $\hbar\omega$.
Figure 10 is a simulated plot of $\sqrt{\alpha\hbar\omega}$ vs
$\hbar\omega$ ( case 8 in Table I ) for three different values of
$\gamma$ and the experimental data on p type sample . First, we note that
the intercept of the linear portion of each of the simulated curves is
neither coincident with the peak position ( near 1.3 eV ) of the
corresponding band gap distribution ( see figure 5 ) nor close to the mean
band gap energy $E_0$ = 2.25 eV. It is thus very difficult to relate this
intercept to such physical quantities, and it is not obvious how to
interpret it as some kind of `effective band gap'.
We also note that the intercept depends on
the choice of $\gamma$. We hence believe that the intercept does not have
the simple interpretation of a $band$ $gap$ as defined for a homogeneous
system. We would like to point out that for such a heterogeneous system
the gap can be interpreted only in an operational sense like E$_{03}$ or
E$_{04}$ - depending on the value of the absorption coefficient (after
correcting for the sample porosity).\\
\par
\hskip 2cm Finally we try to understand the temperature dependence of the
band gap. To explain the rigid shift of the absorption curve with
temperature (to lower $\alpha$), we argue that $E_G$ of each of the
nanostructures follows a relation $E_G$(T) = $E_{G0} - \gamma_T T$, where
$E_{G0}$ is the band gap at 0 K. We assume that the change
of the band gap with temperature is, as in bulk
semiconductors, a consequence of the electron-phonon interaction and take
$\gamma_T$ =
0.5 meV/K. In our model, changing the temperature is equivalent to changing
the value of C in the expression that determines the energy upshift
$\Delta E$. Figure 11 is a plot of ln$(\alpha)$ vs $\hbar\omega$ for
three different values of C. We see that the absorption curves are
rigidly shifted vertically with respect to each other $^{( 10 )}$ as also
seen in figure 2. We argue that this provides a $natural$ explanation for
the observed temperature dependence of $\alpha$ . \\
\par
\section{ Conclusions }
\hskip 2cm In this paper we have studied the absorption in free standing
thin films of PS.
\par
\hskip 2cm We have calculated the absorption coefficient as a function of
energy assuming that PS can be modelled as a $pseudo$ 1D system having a
$distribution$ $of$ $band$ $gaps$. From our calculations, we show that it
is necessary to treat PS as an $indirect$ $band$ $gap$ material. To
satisfactorily account for the absorption, we need to invoke that the
$oscillator$ $strength$ $has$ $a$ $size$ $dependence$. We also show that
a Lognormal distribution of the diameters of the nanostructures that make
up PS appears to account for the measured absorption spectrum of p-type PS.
\par
\hskip 2cm We argue that one cannot use eq. 1 to obtain the band gap of
such $low$ $dimensional$ materials like $porous$ $silicon$. We also
explain the temperature dependence of the absorption in PS on the basis of
our model.
\par
\hskip 2cm We have also shown that the porosity can be inferred
$non-destructively$ from a transmission measurement in the region of low
absorption.\\
\par
\vskip 2.5cm
\centerline{\bf ACKNOWLEDGEMENT }
\par
\hskip 2cm The authors would like to thank Prof. B.M. Arora and Prof.
Rustagi for valuable suggestions. S.D. thanks Sandip Ghosh, Anver Aziz,
M.K. Rabinal and Biswajit Karmakar for useful discussions, and
A.D. Deorukhkar, R.B. Driver, V.M. Upalekar and P.B. Joshi for their help.\\
\newpage
{
"timestamp": "1999-02-20T21:59:54",
"yymm": "9902",
"arxiv_id": "cond-mat/9902286",
"language": "en",
"url": "https://arxiv.org/abs/cond-mat/9902286"
}
\section{Motivation}
1997 can be called the Year of the Strong Penguin:
a handful of two-body charmless B decays were observed~\cite{TwoBody}
by CLEO for the first time, giving firm indication
that strong penguins are dominant.
Something completely unexpected also emerged in $\eta^\prime$ modes:
not only are the exclusive modes very sizable,
the semi-inclusive rate for $B\to \eta^\prime + X_s$ with fast $\eta^\prime$
was found~\cite{etapXs} to be close to $10^{-3}$.
We concern ourselves here with
direct CP violation in these modes.
Given the statistics,
the $a_{CP}$ reach is only $\sim 100\%$ at present.
But, as B Factories are turning on soon,
in 2--5 years this could go down to 30\% or even 10\%.
The modes already observed would
certainly be the most sensitive probes.
What physics do they and can they probe?
What has been observed so far are charmless $b\to s$ decays.
The ordering $K\pi > \pi\pi$
clearly indicates that strong penguin $>$ tree.
We recall that in SM,
$a_{CP}$ for pure penguin $b\to s$ transitions are suppressed
by the factor
\begin{equation}
{\rm Im}\,(V_{us}V_{ub}^*)/\vert V_{cs}V_{cb}^*\vert
\simeq\eta\lambda^2 < 1.7\%,
\end{equation}
so $a_{CP} > 10\%$ in pure penguin modes
would imply New Physics!
We therefore have a discovery window in the next few years for
beyond SM (BSM) effects.
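The size of the suppression factor in Eq. (1) is easy to make concrete. The sketch below uses illustrative Wolfenstein parameter values ($\lambda \approx 0.22$, $\eta \approx 0.35$; these are assumptions for illustration, not fitted inputs):

```python
# Wolfenstein parametrization: Im(V_us V_ub*) / |V_cs V_cb*| ~ eta * lambda^2
lam, eta = 0.22, 0.35        # illustrative values, not fits
suppression = eta * lam ** 2
print(suppression)           # ~0.017, i.e. the < 1.7% bound quoted in Eq. (1)
```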
The question then is: What BSM?
{\it Is large $a_{CP}$ possible in $b\to s$ modes?}
Rather than trying to be exhaustive,
we wish to demonstrate that CP asymmetries in $b\to s$ transitions
can indeed be large with simple extensions of SM,
and sometimes even within SM.
New Physics will be illustrated with {\it large color dipole}
$bsg$ coupling
\begin{eqnarray}
-{\frac{G_{F}}{\sqrt{2}}}{\frac{g}{16\pi^{2}}}V_{tb}V_{ts}^{*} \,
c_{8}\, \bar{s}\sigma _{\mu \nu }G^{\mu \nu} m_{b}\,(1+\gamma_{5})b.
\end{eqnarray}
In SM one finds $c_8^{\scriptscriptstyle\rm SM}(m_b) \simeq -0.3$
which leads to $b\to sg$ (with $g$ on-shell) $\sim 0.2\%$,
a small rate that is usually neglected.
But because $b\to sg$ just does not give tangible signatures,
our experimental knowledge of the strength of $c_8$ is actually rather poor.
In fact, the long-standing ``deficit'' in the
$B$ decay semileptonic branching ratio (${\cal B}_{s.l.}$)
and charm counting ($n_{C}$) point towards the possibility of
sizable $b\to sg$ in Nature.~\cite{bsg,kagan}
If $b\to sg \approx 10\%$, which implies $c_8 \sim 2$,
${\cal B}_{s.l.}$ and $n_C$ can each be lowered by that amount and the
problems would go away.
A recent CLEO bound~\cite{bsglimit} gives $b\to sg < 7\%$ at 90\% C.L.,
which comes indirectly from the study of $B\to D\bar D K + X$ decay.
But even if one takes this seriously there is still much room,
and $b\to sg$ at 1--5\% would be very hard to rule out.
What we stress here is:
if $c_8$ is large in Nature, it must be coming from New Physics
and should carry naturally a KM-independent CP violating phase.
The idea of an enhanced $bsg$ color dipole coupling and its associated
new physics CP phase has been applied to $B\to \eta^\prime + X_s$ decay.
We have advocated that
the $g^*g\eta^\prime$ anomaly coupling mechanism~\cite{AS,HT} is needed
to account for the energetic $\eta^\prime$ (or equivalently,
the recoil $m_{X_s}$) spectrum.
Then, with new CP phase $\sigma$ in $c_8 \cong 2e^{i\sigma}$
interfering with absorptive parts from usual $c_{3-6}$ penguin coefficients,
$a_{CP}$ in inclusive $m_{X_s}$ spectrum could be at 10\% level.~\cite{HT}
We explore here \cite{HHY} the general impact of a large color dipole coupling
on CP asymmetries in charmless $b\to s$ decays.
If $b\to sg$ rate is really of order 1--10\% in Nature,
even if this rate itself is hard to measure,
other charmless $b\to s$ decays must be
affected through interference effects.
\section{Model of Unconstrained CP Phase}
To have large $b\to sg$ and evade the $b\to s\gamma$ constraint at the same
time, one needs an additional source for radiating gluons but not photons.
Gluinos ($\tilde g$) fit the bill nicely.
In SUSY one usually sets soft squark mass terms to be ``universal"
to suppress FCNC and to reduce the number of parameters.
But it has been shown~\cite{kagan,susy} that
nonuniversal soft $m_{\tilde d_j}$ masses could give large
$c_8$ without violating the $b\to s\gamma$ constraint.
In previous studies, however, the possibility of new CP phases was not
considered.
As an existence proof, let us consider a
minimal model of $\tilde s - \tilde b$ mixing.
The simplest is $LL$ mixing~\cite{HH}, which mimics SM couplings,
but one could also have $RR$ or $LR$ mixing models.~\cite{CHH}
The phases of the $d_i$ quarks are fixed by the gauge interaction, and there
is just one mixing angle $\theta$ and one phase $\phi$.
Since this mixing involves only the second and third generations,
one evades low energy bounds that involve first generation quarks, such
as neutron edm, the $K$ system, and even $B_d$-$\bar B_d$ mixing.
This is a natural model that is tailor made for generating large effects
in $b\to s$ transitions (as well as $B_s$ mixing)!
\section{Direct CP Violation in Inclusive $b\to s\bar qq$ Decay}
The theory of inclusive decays is cleaner, since one can use the
quark/parton language. The absorptive parts arising from short distance
perturbative rescattering~\cite{BSS} can be used, and one is insensitive
to long distance phases. However, experimental detection poses a
challenge, unless partial reconstruction techniques can be made to work.
Since penguins dominate charmless $b\to s$ decays,
one is interested in CP violation in pure penguin processes
such as $b\to s\bar dd$ and $s\bar ss$.
But since these rates and
asymmetries occur at ${\cal O}(\alpha_S^2)$,
care~\cite{GH} has to be taken in treating CP violation in
the $b\to s\bar uu$ mode,
which has the distinction of receiving also the tree contribution.
The tree amplitude alone does not lead to CP violation, and
tree--penguin interference occurs only at ${\cal O}(\alpha_S)$;
to be consistent with treating pure penguin CP asymmetries,
one needs to take into account the absorptive part carried by
the gluon propagator (bubble graph) associated with the penguin.
This ${\cal O}(\alpha_S^2)$ tree--penguin interference term is needed to
maintain CPT and unitarity in rates and hence $a_{CP}$.
The above discussion has been stated in terms of ``full" theory (exact
loop calculation) to lowest relevant order in $\alpha_S$. Since QCD
corrections are important and relatively well developed by now, we adopt
an operator language in computing inclusive rates. We start from the
effective Hamiltonian
\begin{eqnarray}
H_{\rm eff} &=& {4G_F\over \sqrt{2}} \left[
V_{ub}V_{us}^*(c_1O_1 + c_2 O_2)
-V_{ib}V_{is}^* \, c^i_j O_j\right],
\label{Heff}
\end{eqnarray}
with $i$ summed over $u,c,t$ and $j$ over $3$ to $8$.
The operators are defined as
\begin{eqnarray}
&& O_1 = \bar u_\alpha \gamma_\mu L b_\beta \,
\bar s_\beta \gamma^\mu L u_\alpha,
\ \ \;
O_2 = \bar u \gamma_\mu L b \, \bar s \gamma^\mu L u,
\nonumber\\
&& O_{3,5} = \bar s \gamma_\mu L b \, \bar q \gamma^\mu {L(R)} q,
\ \ O_{4,6} = \bar s_\alpha\gamma_\mu L b_\beta \,
\bar q_\beta \gamma^\mu {L(R)} q_\alpha,
\nonumber\\
&& \tilde O_8 = {\alpha_s\over 4\pi}\, \bar s i\sigma_{\mu\nu} T^a
{m_b q^\nu\over q^2} Rb\, \bar q \gamma^\mu T^a q,
\end{eqnarray}
where $\tilde O_8$ arises from the dimension 5 color dipole $O_8$
operator of Eq. (2), and $q= p_b-p_s$.
We have neglected electroweak penguins for simplicity.
The Wilson coefficients $c_j^i$ are evaluated at NLO
in a regularization scheme independent way,
for $m_t = 174$ GeV, $\alpha_s (m_Z^2) = 0.118$ and $\mu = m_b = 5$ GeV.
Numerically,~\cite{desh-he} $c_{1,2} = -0.313,\ 1.150$,
$c_{3,4,5,6}^t = 0.017,\ -0.037,\ 0.010,\ -0.045$,
and $c_8^{\scriptscriptstyle\rm SM} = c_8^t-c_8^c= -0.299$.
We note that
$c_{1,2}$ are resummations of series starting at ${\cal O}(\alpha_S^0)$,
while $c_{3-6}$ start at ${\cal O}(\alpha_S^1)$
which is reflected in their relative smallness.
However,
one power of $\alpha_S$ is factored out by convention in defining $O_8$,
hence $c_8$ starts at ${\cal O}(\alpha_S^0)$ and its size is
comparable to $c_1$ within SM.
One has to keep track of the {\it relevant leading order} in $\alpha_S$
when comparing with ``full theory" approach discussed earlier.
To get absorptive parts, we add
$c^{u,c}_{4,6}(q^2) = -Nc_{3,5}^{u,c}(q^2)=-P^{u,c}(q^2)$
for $u$, $c$ quarks in the loop, where
\[
P^{u,c}(q^2) =
{\alpha_s \over 8\pi} c_2 \left({10\over 9} + G(m_{u,c}^2,q^2)\right),
\]
and
\[
G(m^2,q^2) =
4\int x(1-x) dx\, {\rm ln}\, {m^2 - x(1-x)q^2\over \mu^2}.
\]
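For orientation, the absorptive part of $G(m^2,q^2)$ can be obtained numerically from the definition above. Up to the overall sign fixed by the $-i\epsilon$ prescription (the choice made in this sketch, which is our addition), it matches the standard closed form $(2\pi/3)(1+2m^2/q^2)\sqrt{1-4m^2/q^2}$ above the $q^2 = 4m^2$ threshold; $\mu^2$ only shifts the real part and drops out of Im\,$G$:

```python
import numpy as np

def im_G(m2, q2, mu2=1.0, n=400001):
    # Im of G(m^2,q^2) = 4 * Int_0^1 dx x(1-x) ln[(m^2 - x(1-x) q^2 - i*eps)/mu^2]
    x = np.linspace(0.0, 1.0, n)
    y = 4.0 * x * (1.0 - x) * np.log((m2 - x * (1.0 - x) * q2 - 1e-12j) / mu2).imag
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def im_G_closed(m2, q2):
    # textbook absorptive part: opens only above the q^2 = 4 m^2 threshold
    if q2 <= 4.0 * m2:
        return 0.0
    beta = np.sqrt(1.0 - 4.0 * m2 / q2)
    return -(2.0 * np.pi / 3.0) * (1.0 + 2.0 * m2 / q2) * beta

print(im_G(0.0, 1.0), im_G_closed(0.0, 1.0))      # massless loop: both ~ -2.094
print(im_G(1.69, 25.0), im_G_closed(1.69, 25.0))  # charm-like threshold, open
```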
To respect CPT/unitarity at ${\cal O}(\alpha_S^2)$,
for $c^t_{3-8}$ at $\mu^2 = q^2 < m_b^2$, we substitute
\begin{eqnarray}
{\rm Im}\, c_8 &=& {\alpha_s\over 4\pi} c_8 \,
\sum_{u,d,s,c} {\rm Im}\, G(m_i^2,q^2),
\nonumber\\
{\rm Im}\, c^t_{4,6} &=& -N{{\rm Im}\, c^t_{3,5}}
={\alpha_s\over 8\pi} \left[c^t_3 {\rm Im}\, G(m_s^2,q^2)
+ (c^t_4+c^t_6)
\sum_{u,d,s,c}{\rm Im}\, G(m_i^2,q^2)\right],
\nonumber
\end{eqnarray}
when interfering with the tree amplitude.
We note that the use of operator language can be misleading at this stage
since the absorptive parts are not resummed while the Wilson coefficients are.
One could easily lose track of $\alpha_S$ counting that is
needed for maintaining CPT/unitarity
if one thinks too naively in effective theory language.
Having made all these precautions, we can square amplitudes
in a straightforward manner to obtain rates and arrive at the asymmetries.
Since at lower order one has $b\to sg$ decay,
the $\vert c_8\vert^2$ term has a $\log q^2$ pole behavior.
We regulate it by simply cutting it off at 1 GeV.
Fig. 1(a) gives the rates for
$b\to s\bar dd$ (solid) and $\bar b\to \bar sd\bar d$ (dashed)
vs. $y = q^2/m_b^2$.
The SM result does not show a prominent low $q^2$ tail
since $b\to sg$ is small,
and the asymmetry comes mostly from below $c\bar c$ threshold.
For larger $q^2$ the $a_{CP}$ is GIM suppressed.~\cite{GH}
The SM asymmetry is indeed tiny.
For new physics enhanced $c_8 = 2e^{i\sigma}$,
we consider the cases for $\sigma = \pi/4$, $\pi/2$ and $3\pi/4$.
Besides a very prominent low $q^2$ tail since $b\to sg$ is now $\sim 10\%$,
the salient feature is the
rather large rate asymmetries above $c\bar c$ threshold.
The reason is that the new physics phase $\sigma$ evades the
SM constraint of Eq. (1): the $c_8$ amplitude interferes with the
standard $c_{3-6}$ penguins, which carry the absorptive parts
due to (perturbative) $c\bar c$ rescattering,
while the $u\bar u$ rescattering absorptive part is suppressed by $V_{ub}$.
Note that for $\sigma = \pi/4,\ 3\pi/4$ one has constructive,
destructive interference, respectively.
For the latter case, the overall rate is close to SM but the asymmetries
are much larger, reaching 30\% for large $q^2$!
\begin{figure}[t]
\special{psfile=fig1.eps hoffset=-37 voffset=60 hscale=50 vscale=50
angle=270}
\vskip 5.8cm
\caption{
Inclusive rate vs. $y = q^2/m_b^2$ for
(a) $b\to s\bar dd$, (b) $s\bar uu$ and (c) $s\bar ss$ decays
(solid) and $\bar b$ decays (dashed). Curves with
prominent small $y$ tail are for $c_8 = 2e^{i\sigma}$
with $\sigma = \pi/4$ (top), $\pi/2$ (middle), $3\pi/4$ (bottom),
while the other is SM result.
}
\end{figure}
For $b\to s\bar uu$ the tree diagram also contributes, and
one has to include the absorptive part in gluon propagator
as discussed earlier.
Because of this, the rate asymmetries in SM occur
both below and above $c\bar c$ threshold, as can be seen in Fig. 1(b).
Each is larger than in the $b\to s\bar dd$ case,
but they are of opposite sign and hence tend to cancel each other.~\cite{GH}
If $c_8$ is enhanced, however, the dominant mechanism is
again interference between $c_8$ and the usual penguins,
hence the results are similar to the $b\to s\bar dd$ case.
For $b\to s\bar ss$ mode,
one has to take into account identical particle effects.
As seen in Fig. 1(c),
this leads to the peculiar shapes at large and small $q^2$,
and the asymmetry is now smeared over all $q^2$.
But otherwise it is similar to the $b\to s\bar dd$ case.
The integrated inclusive results are summarized in Table 1.
\begin{table}[htb]
\caption{Inclusive $Br$ (in $10^{-3}$)/$a_{CP}$ (in \%) for SM and for
$c_8 = 2 e^{i\sigma}$.}
\vspace{0.15cm}
\begin{center}
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
 & SM & $\sigma = 0$ & $\pi/4$ & $\pi/2$ & $3\pi/4$ & $\pi$ \\
\hline
$b\to s \bar d d$ & 2.6/0.8 & 8.5/0.4 & 7.6/3.4 & 5.2/6.5 & 2.9/8.1 & 1.9/0.5 \\
\hline
$b\to s \bar u u$ & 2.4/1.4 & 8.1/-0.2 & 7.5/2.6 & 5.5/5.6 & 3.2/8.1 & 2.0/3.5 \\
\hline
$b\to s \bar s s$ & 2.0/0.9 & 6.9/0.4 & 6.2/3.2 & 4.4/6.0 & 2.6/7.1 & 1.8/0.4 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Direct CP Violation in Exclusive Charmless Hadronic Modes}
The exclusive modes are more accessible experimentally,
as evidenced by the handful of observed modes.
Unfortunately, the theory is not clean.
One has to evaluate all possible hadronic matrix elements of
operators in Eq. (3).
Faced with the CLEO data, it has become popular~\cite{Neff} to fit
$N_{\scriptscriptstyle\rm eff}$ rather than use the value of 3 as dictated by QCD.
Although it is meant as a measure of the deviation from factorization,
in reality it becomes a new process dependent fit parameter.
One is still subject to the usual approximations and to imprecise knowledge
of the form factors and of the $q^2$ value to take.
CP asymmetries are especially sensitive to the latter.
At the rate level, the $K\pi$ modes are approximately manageable,
but the $\eta^\prime K$ and $\omega K$ modes seem high
while the $\phi K$ mode seems low.
Thus, even introducing $N_{\scriptscriptstyle\rm eff}$ as a new parameter,
there are problems everywhere already in rate.
A new development~\cite{cleo1} in 1998 is that the
$B^- \to K^-\pi^0$ mode has been observed, while the
$B^- \to \bar K^0 \pi^-$ rate came down considerably.
One has now three measured $K\pi$ modes and their rates
are all around $1.4\times 10^{-5}$.
We shall not discuss the $\eta^\prime$ modes here since
it must have a large contribution from anomaly mechanism
and is rather difficult to treat.
But we do wish to explore whether an enhanced $c_8$ could improve
agreement with experiment.
Before we do so, however, we point out that the $K\pi$ modes offer
a rather interesting subtlety:
they in general have two isospin components and
exhibit larger $a_{CP}$ within SM,
and they are very sensitive to final state interaction (FSI) phases.
As shown in Ref. [12] but now put in terms of the angle $\gamma$,
in the absence of FSI phases one finds for $B^- \to K^-\pi^0$ mode
\begin{equation}
a_{uu} \propto {\#_1\sin\gamma\over \vert\#_2 - \#_3\cos\gamma\vert^2},
\label{auu}
\end{equation}
where $\#_1$ comes from interference, while
$\#_2$ and $\#_3$ come from penguin and tree $b\to s\bar uu$ amplitudes,
respectively. $\#_3$ and the dispersive part of $\#_2$ have the same sign.
At the time of Ref. [12], $\cos\gamma < 0$ was favored,
while $\sin\gamma$ was smaller than today,
hence $a_{uu}$ was not very large.
The present~\cite{stocchi} preferred value is $\gamma \sim 64^\circ$,
however, and one now has destructive rather than constructive interference.
Hence, one not only gains from $\sin\gamma\sim 0.9$ in the numerator,
there is also an extra enhancement from the denominator of Eq. (\ref{auu}),
and $a_{uu}$ as large as 10\% is possible.
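The interference structure of Eq. (\ref{auu}) is easy to visualize with a toy evaluation. The coefficient values below are purely illustrative placeholders (penguin taken larger than tree), not extracted amplitudes:

```python
import numpy as np

def a_uu(gamma_deg, n1=1.0, n2=3.0, n3=1.0):
    # Eq. (4): a_uu ~ #1 sin(gamma) / |#2 - #3 cos(gamma)|^2
    # n1, n2, n3 are illustrative stand-ins for #1, #2, #3
    g = np.radians(gamma_deg)
    return n1 * np.sin(g) / (n2 - n3 * np.cos(g)) ** 2

print(a_uu(120.0))  # old preference (cos gamma < 0): constructive denominator, suppressed
print(a_uu(64.0))   # current preference: destructive interference enhances a_uu
```

With these placeholders the asymmetry roughly doubles in going from $\gamma = 120^\circ$ to $\gamma = 64^\circ$, showing the combined numerator and denominator effect.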
Furthermore, since one can write
$(\bar su)(\bar ub) = [(\bar su)(\bar ub) + (\bar sd)(\bar db)]/2
+[(\bar su)(\bar ub) - (\bar sd)(\bar db)]/2$,
there are in general two isospin components
from the tree level $O_1$ and $O_2$ operators.
These two isospin amplitudes may develop soft FSI phases
that differ from each other.
If such is the case, these phases could overrun the perturbatively generated
(hence $\alpha_S$ suppressed) absorptive phases,
and much larger CP asymmetries can be achieved.
While this is good news for CP violation search in general,
it is bad news for search of new physics.
Can one distinguish between new physics effects and SM with large FSI
phases? The answer is yes, if one compares several modes.
Let us give a little more detail for the sake of illustration.
We separate the $\bar B^0 \to K^-\pi^+$ amplitude into two isospin
components, $A = A_{1/2} + A_{3/2}$.
Since color allowed amplitudes dominate,
$N_{\scriptscriptstyle\rm eff} \simeq N = 3$.
Defining $v_i = V_{is}^*V_{ib}$ and assuming factorization, we find,
\begin{eqnarray}
&& A_{1/2} =
i{G_F\over \sqrt{2}} f_K F_0\; (m_B^2-m_\pi^2)
\left\{v_u \left[ {2\over 3} \left({c_1\over N} + c_2\right)
-{r\over 3}\left(c_1+{c_2\over N}\right) \right] \phantom{c_5^j\over N} \right .
\nonumber\\
&& - v_j \left. \left[ {c_3^j\over N} + c_4^j
+{2m_K^2\over (m_b-m_u)(m_s+m_u)}\left({c_5^j\over N} +c_6^j\right)\right]
-v_t {\alpha_s\over 4\pi} {m_b^2\over q^2} c_8 \tilde S_{\pi
K} \right\},~
\label{kpi1}
\end{eqnarray}
where $F_0 = F_0^{B\pi}(m^2_K)$ is a BSW form factor,
$\tilde S_{\pi K} \sim -0.76$ is a complicated form factor
normalized to $F_0$,
and $r$ is some ratio of $B\to K$ and $B\to \pi$ form factors
and a measure of SU(3) breaking.
For $A_{3/2}$ one sets $c^j_{3-8}$ to zero
and substitutes $2/3,\ -r/3 \longrightarrow 1/3,\ r/3$.
The $K^- \pi^0$ mode is analogous,
with modifications in $A_{3/2}$ and an overall factor of $1/\sqrt{2}$.
The penguins contribute only to $A_{1/2}$,
hence the naively pure penguin $B^- \to \bar K^0 \pi^-$ amplitude
has just Eq. (\ref{kpi1}) with $c_{1,2}$ set to zero.
Note that the $c_{5,6}$ effects are sensitive to
current quark masses
because of effective density-density interaction.
The absorptive parts for $c^j_{3-8}$ are evaluated at
$q^2 \approx m_b^2/2$ which favors large $a_{CP}$, but
$q^2$ could be~\cite{GH} as low as $m_b^2/4$.
We plot in Fig. 2(a) and (b)
the branching ratio ($Br$) and $a_{CP}$ vs. angle $\gamma$.
For $K^- \pi^{+,0}$, $a_{CP}$ peaks at the sizable $\sim 10\%$
just at the currently favored \cite{stocchi} value of
$\gamma \simeq 64^\circ$.
But for $\bar K^0 \pi^-$, $a_{CP} \sim \eta\lambda^2$ is very small.
We have used $m_s(\mu = m_b) \simeq 120$ MeV
since it enhances the rates.
With $m_s(\mu = 1$ GeV) $\simeq$ 200 MeV,
the rates would be a factor of 2 smaller.
We find $K^- \pi^+$, $\bar K^0 \pi^-$, $K^- \pi^0
\sim 1.4,\ 1.6,\ 0.7 \times 10^{-5}$, respectively.
To illustrate the effect of FSI,
we now write $A = A_{1/2} + A_{3/2} e^{i\delta}$
and plot in Fig. 2(c) and (d) the $Br$ and $a_{CP}$ vs. $\delta$ for
$\gamma = 64^\circ$.
The rate is not very sensitive to $\delta$, which reflects
penguin dominance, but
$a_{CP}$ can now reach $20\%$, and even 30\% for $K^-\pi^0$.
We stress that the naively pure penguin $\bar K^0 \pi^-$ mode
is in fact also quite susceptible to FSI phases
as it is the isospin partner of $K^-\pi^0$,
which definitely receives tree contributions.
The $B^- \to \bar K^0 \pi^-$ mode can receive tree contributions
through FSI rescattering.
Comparing Fig. 2(b) and (d), $a_{CP}$ in this mode can be
{\it much larger} than the naive factorization result.
However, the $a_{CP}$ for $\bar K^0 \pi^-$ and $K^-\pi^+$ are out of
phase,
hence comparing the two modes
can give information on the FSI phase $\delta$.
\begin{figure}[htb]
\special{psfile=fig2.eps hoffset=-36 voffset=55 hscale=53 vscale=53
angle=270}
\vskip 6.6cm
\caption {
$Br$ and $a_{CP}$ vs. (a), (b) SM unitarity angle $\gamma$,
(c), (d) FSI phase $\delta$ for $\gamma = 64^\circ$, and
(e), (f) new physics phase $\sigma$ for $\gamma = 64^\circ$ and $\delta
= 0$.
Solid, dotted, dashed and dotdashed lines are for
$K^-\pi^+$, $\bar K^0 \pi^-$, $K^-\pi^0$ and $\phi K$ respectively.
}
\end{figure}
For physics beyond the SM, such as $c_8 = 2 e^{i\sigma}$,
there are too many parameters and one needs a strategy.
We set $N = 3$ and try to fit observed $Br$'s with
the phase $\sigma$, then find the preferred $a_{CP}$.
Since the $c_8$ term now dominates,
one is less sensitive to the FSI phase $\delta$.
In fitting $Br$'s, we find that
{\it destructive interference is necessary} which can be understood
from the inclusive results of Fig. 1.
This means that {\it large $a_{CP}$s are preferred!}
We plot in Fig. 2(e) and (f) the $Br$ and $a_{CP}$
vs. the new physics phase $\sigma$,
for $\gamma = 64^\circ$ and $\delta = 0$.
The $K^-\pi^+$ and $\bar K^0 \pi^-$ modes are very close in rate
for $\sigma \sim 45^\circ - 180^\circ$,
but the $K^-\pi^0$ mode remains a factor of 2 smaller.
However,
the $a_{CP}$ can now reach 50\% for $K^-\pi^+$/$\bar K^0 \pi^-$
and 40\% for $K^-\pi^0$!
These are {\it truly large asymmetries} and would be easily observed,
perhaps even before B Factories turn on (i.e. with CLEO II.V data!).
They are in strong contrast to the SM case with FSI phase $\delta$,
Fig. 2(d), and can be distinguished.
Genuine pure penguin processes arising from $b\to s\bar ss$
give cleaner probes of new physics $CP$ violation effects
since they are insensitive to FSI phase.
The amplitude for $B^-\to \phi K^-$ decay is
\begin{eqnarray}
A(B\rightarrow \phi K)&\simeq&
-i {G_F\over \sqrt{2}} f_\phi m_\phi
2 p_B\cdot \varepsilon_\phi F_1(m_\phi^2)\,
\left\{ v_j \left(c_{3}^j+{c_4^j\over N} + c_5^j\right) \right|_{q_2^2}
\nonumber\\
&& \left. +~v_j \left({c_3^j\over N} +c_4^j + {c_6^j\over
N}\right)\right|_{q_1^2}
+ \left.
v_t {\alpha_s\over 4\pi} {m_b^2 \over q_1^2} c_8 \tilde S_{\phi
K} \right\}.
\label{phiK}
\end{eqnarray}
The relevant $q^2$ is determined by kinematics:
$q_1^2 = m_b^2/2$ as before,
but for amplitudes without Fierzing $q_2^2 = m_\phi^2$.
We have dropped color octet contributions
and have checked that they are indeed small.
Since the amplitude is pure penguin, $c_8$ should have no absorptive part.
As shown in Fig. 2(a) and (b),
the SM rate of $\sim 1\times 10^{-5}$ is above the CLEO bound
of $0.5\times 10^{-5}$ while $a_{CP}$ is uninterestingly small.
If we allow for a new physics enhanced $c_8 = 2 e^{i\sigma}$,
one again needs destructive interference to match the observed rate.
The results are plotted in Fig. 2(e) and (f) vs. $\sigma$.
The rate is lower than for the $\bar K^0 \pi^-$/$K^-\pi^+$ modes
because it is not sensitive to $1/m_s$, and we have used a low
$m_s$ value to boost $B\to K\pi$.
The $a_{CP}$ could now reach almost 60\%,
thanks to the destructive interference preferred by
fitting the CLEO limit on rate.
We note that the SM asymmetry for $B\to \phi K$ should be of order 1\%.
One can now construct an attractive picture.
We have noted that recent studies cannot explain
the low $B\to \phi K$ upper limit within SM.
If $c_8$ is enhanced by new physics and interferes destructively with SM,
$B\to \phi K$ can be brought down to below $5\times 10^{-6}$.
The experimentally observed $\bar K^0 \pi^- \simeq K^-\pi^+$
follows from $c_8$ dominance,
and their rate of $\sim 1.4\times 10^{-5}$, which is
2--3 times larger than that of the $\phi K$ mode, suggests
a low $m_s$ value and slight tuning of the BSW form factors.
Around $\sigma \sim 145^\circ$,
the rates are largely accounted for,
but $a_{CP}$ for $\phi K$, $K^-\pi^+/K^-\pi^0$ and $\bar K^0 \pi^-$
could be {\it enhanced to the dramatic values} of
55\%, 45\% and 35\%, respectively,
and {\it all of the same sign}.
This is certainly distinct from the sign correlations of SM with FSI.
On the down side, within the scenario of strong penguin dominance,
which includes the case of enhanced $c_8$,
the $B^-\to K^-\pi^0$ rate is always about a factor of two smaller
than the $K^-\pi^+$ mode, and we are unable to accommodate
recent CLEO findings.~\cite{cleo1,DHHP}
We are also barely able to accommodate $B\to \omega K$.
Within SM one needs $1/N_{\scriptscriptstyle\rm eff} \sim 1$ to be able to
account for the large $B\to \omega K \simeq 1.5\times 10^{-5}$ value,
while for $1/N_{\scriptscriptstyle\rm eff} \simeq 0$ one can account for only half.
Adding new physics induced $c_8 = 2 e^{i\sigma}$ effect,
we are able to account for $Br$ for both large and small $N_{\scriptscriptstyle\rm eff}$,
but not for $N = 3$.
However, $a_{CP}$ is never more than a few \%
and hence not very interesting.
Since the $ \omega K$ mode has a single isospin amplitude,
it is insensitive to FSI rescattering phases.
\section{CP Violation in $b\to s\gamma$ Decays}
We have emphasized that the $b\to s$ modes that are
already observed are the best places for CP search.
Clearly, the $B\to K^*\gamma$ and $b\to s\gamma$ modes were
the first ever observed penguins in $B$ decay,
and they should provide a good window.
We note that the observed recoil $m_{X_s}$ spectrum
for $B \to \gamma + X_s$ is basically orthogonal to
that for $B\to \eta^\prime +X_s$,
and is clearly dominated by $K$ resonances.
However, besides the $B\to K^*\gamma$ mode,
the higher resonance contributions to the inclusive spectrum
have not yet been disentangled.
It is important to realize that,
although in our previous discussion of enhanced $b\to sg$
one must reckon with the $b\to s\gamma$ constraint,
{\it the converse is not true}.
One can have interesting impact on $b\to s\gamma$
without affecting $b\to sg$ by much.
Our example of $\tilde s_{L,R} - \tilde b_{L,R}$ mixings
can generate a variety of effects.
The $c_7 O_7$ operator structure of SM
can become $c_7 O_7 + c_7^\prime O_7^\prime$,
where $O_7^\prime$ has opposite chirality to $O_7$
(and likewise for the gluonic $O_8$).
This leads to much enrichment of the physics compared to SM:~\cite{CHH}
\begin{itemize}
\item Direct CP violation in $b\to s\gamma$ and $B\to K^*\gamma$:
Since SM accounts for the observed $b\to s\gamma$ rate already,
one has the constraint
$\vert c_7^{\scriptscriptstyle\rm SM}\vert^2 \sim
\vert c_7^{\scriptscriptstyle\rm SM} + c_7^{\scriptscriptstyle\rm New}\vert^2 + \vert c_7^{\prime}\vert^2$,
hence one prefers $c_7^{\prime}$ to be small.
We find that $a_{CP}$ larger than 10\% is possible in certain parameter space,
especially when the new physics effect has opposite sign w.r.t. SM.
\item Mixing-dependent CP violation:
This requires interference between $O_7$ and $O_7^\prime$,
the two different chiralities.
For $B^0\to M^0\gamma$, where $M^0$ is a
CP eigenstate with eigenvalue $\xi$, one obtains a
mixing dependent asymmetry~\cite{AGS}
\[
A_{\rm mix} =2\xi \,\frac{|c_{7}c_{7}^{\prime}|}
{|c_{7}|^{2}+|c_{7}^{\prime}|^{2}}\,\sin [\phi _{B}-\phi -\phi ^{\prime}],
\]
where $\phi ^{(\prime)}$ are the weak phases of $c_{7}^{(\prime)}$.
Note that $\phi ^{(\prime)}$ could vanish and one could still have
CP violation through $\phi_B$ from $B$-mixing within SM.
We find that the coefficient to the phase factor can reach 80\%
in special regions of parameter space of our model.
Unfortunately,
$B^0\to K^{*0}\gamma \to K_S\pi^0\gamma$ does not give a vertex,
and one would need to turn to either $B_s\to \phi\gamma$,
or $B_d\to K_1^0\gamma$, $K_2^{*0}\gamma$,
none of which are yet observed.
\item Chirality probe: $\Lambda_b \to \Lambda\gamma$
How does one know that both SM-like
$b_{\scriptscriptstyle R}\to s_{\scriptscriptstyle L}\gamma_{\scriptscriptstyle {\rm ``}L{\rm "}}$
and new physics induced
$b_{\scriptscriptstyle L}\to s_{\scriptscriptstyle R}\gamma_{\scriptscriptstyle {\rm ``}R{\rm "}}$ transitions occur?
The best way, independent of CP violation
(but direct $a_{CP}$ in rates is of course possible),
is to search for $\Lambda_b \to \Lambda\gamma$ decay,
since $\Lambda \to p\pi$ decay is self-analyzing.
One has~\cite{MR}
\[
{d\Gamma \over d\cos \theta} \propto 1+
{|c_{7}|^{2}-|c_{7}^{\prime }|^{2}\over |c_{7}|^{2}+|c_{7}^{\prime }|^{2}}
\cos \theta,
\]
where $\theta$ is the angle between
the direction of $\vec{p}_\Lambda$
in $\Lambda _{b}$ frame and
the direction of the $\Lambda$ polarization in its rest frame.
The coefficient of $\cos\theta$ is clearly equal to 1 in SM,
but could be different in Nature.
We find that even $-1$ is possible!
When and where will $\Lambda_b\to \Lambda\gamma$ decay
be measured?
\end{itemize}
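The coefficient of $\cos\theta$ in the last item is a simple function of $|c_7|$ and $|c_7^\prime|$ alone; a quick numerical check of its limits (a sketch with illustrative values, not taken from the original analysis):

```python
def cos_theta_coefficient(c7, c7p):
    """(|c7|^2 - |c7'|^2) / (|c7|^2 + |c7'|^2): +1 in the SM limit c7' = 0,
    -1 in the opposite-chirality limit c7 = 0, and 0 when |c7| = |c7'|."""
    return (abs(c7) ** 2 - abs(c7p) ** 2) / (abs(c7) ** 2 + abs(c7p) ** 2)
```

Any intermediate value between $+1$ and $-1$ is possible, which is why measuring this coefficient probes the chirality structure independently of CP violation.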
\section{Discussion and Conclusion}
We must recall that $B$ physics has had its share of surprises.
The long $b$ lifetime was discovered without much theory encouragement.
$B_d$ mixing was in fact discovered with theory ``discouragement".
More recently,
the $\eta^\prime K$ and $\omega K$ modes turn out to be
much larger than theory expectations,
while the huge inclusive fast $\eta^\prime + X_s$
simply came out of the blue without theory warnings.
We therefore must be on guard for CP violation.
In the narrow sense, we have discussed a large
$\tilde s -\tilde b$ squark mixing model that could generate
a large color dipole $bsg$ coupling which carries an
unconstrained new CP phase,
and leads to a large impact on CP violating asymmetries:
in $\eta^\prime + X_s$,
charmless 2-body modes such as $K\pi$ and $\phi K$,
$b\to s\gamma$,
even~\cite{HH} in $J/\psi K_S\pi^0$ modes.
In the broad sense, we have
illustrated that {\it large CP asymmetries
may just pop up everywhere} as B Factories turn on!
Let's search for CP violation in already observed modes,
{\it assuming they are large}!
\section*{Acknowledgments}
I thank
Chun-Khiang Chua, Xiao-Gang He, Ben Tseng and
Kwei-Chou Yang for collaborative work,
and R.O.C. National Science Council for support.
\section*{References}
\section{Introduction}
\subsection[Direct Simulation Monte Carlo Method and Sequential Algorithm
in Unsteady Molecular Gasdynamics]%
{Direct Simulation Monte Carlo Method and Sequential\\
Algorithm in Unsteady Molecular Gasdynamics}
The Direct Simulation Monte Carlo (DSMC) method simulates real gas
flows with various physical processes by means of a huge number of model
particles~\cite{bird}, each of which is a typical representative of a great
number of real gas particles (molecules, atoms, etc.).
The DSMC method notionally divides the
continuous process of particle motion and collisions into two
consecutive stages (motion and collision) at each time step
$\Delta t$. The particle parameters (coordinates, velocity)
are stored in the computer's memory.
To obtain information about the flow field, the computational domain is
divided into cells. The results of the simulation are the particle
parameters averaged over cells.
\begin{figure}[hb]
\centering
\includegraphics[scale=0.7]{seq_mc.eps}
\caption{General flowchart of sequential algorithm for DSMC
of unsteady flows.
$\Delta t$ --- time step,
$\Delta t_s$ --- interval between samples,
$\Delta t_L$ --- total time of a single run,
$t$ --- current time,
$n$ --- number of runs,
$i$ --- iteration number.}
\label{f:seq_mc}
\end{figure}
The finite memory size and computer performance restrict the
total number of model particles and cells. The macroscopic gas parameters
determined from the particle parameters in cells at the current time step are the
result of the simulation. Fluctuations of the averaged gas parameters at a single time
step can be rather high owing to the relatively small number of particles in cells.
So, when solving steady gasdynamic problems, we have to increase the time
interval of averaging (the sample size) after the steady state is achieved
in order to reduce the statistical error to the required level.
The averaging time step $\Delta t_{av}$ has to be much
greater than the time step $\Delta t$ ($\Delta t_{av}\gg\Delta t$).
For DSMC of unsteady flows the averaging time step
$\Delta t_{av}$ for a given problem and at the current time $t$ has to meet
the following requirement: $\Delta t_{av}\ll\min t_H(x,y,z,t)$, where
$t_H$ is the characteristic time of flow parameter variation. The choice
of the value of $t_H$ is determined by the particular problem~\cite{bykov,bogd}.
In order to meet this condition for the averaging interval we have to carry out
a sufficient number $n$ of statistically independent calculations (runs) to get the
required sample size. This increases the total calculation time,
which is proportional to $n$ in the case of the sequential DSMC algorithm.
The general flowchart of the classic sequential algorithm~\cite{bird}
is depicted in Fig.~\ref{f:seq_mc}.
The algorithm for DSMC of unsteady flows consists of two basic loops. In
the first (inner) loop a single run of the unsteady process is executed.
First, we generate particles at the input boundaries of the domain
(subroutine \verb+Generation+). Then we carry out the
simulation of particle movement and surface interaction (subroutine
\verb+Motion+) and of the collision process (subroutine \verb+Interaction+) for
a given number of time steps $\Delta t$. The sampling (subroutine
\verb+Sampling+) of flow macroparameters in cells is carried out at a given
moment of the unsteady process. The inner loop itself is divided into two
successive steps. At the first step we sequentially carry out the simulation for
each of the $N_p$ particles independently. After the first step a special
readdressing array, which determines the mutual correspondence of particles
and cells, is formed (subroutines \verb+Enumeration+ and \verb+Indexing+).
We have to know the location of all particles in order to fill that array.
At the second step we carry out the simulation for each of the $N_c$ cells
independently. For $t>\Delta t_s$ we accumulate statistical data of flow
parameters in cells.
The second (outer) loop repeats unsteady runs $n$ times to get the desired
sample size. Each run is executed independently of the previous ones. To
make separate unsteady runs statistically independent we have to shift the random
number generator (\verb+RNG+).
For each unsteady run three basic arrays (\verb+P+, \verb+LCR+, \verb+C+)
are required. The array~\verb+P+ is used for storing information about
particles. The array \verb+LCR+ is the readdressing array. The dimensions of
these arrays are proportional to the total number of particles.
The array~\verb+C+ stores information about cells and macroparameters. The
dimension of this array is proportional to the total number of cells of a
computational grid.
The DSMC method requires several additional arrays which occupy
much less memory. Particles which leave the domain
are removed from the array~\verb+P+, whereas newly generated
particles are inserted into it.
Since the particles move from one cell to another we have to
rearrange the array \verb+LCR+ and update the array~\verb+C+.
These procedures are performed at each time step $\Delta t$.
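The loop structure and array bookkeeping described above can be sketched as follows. This is a schematic Python outline, not the authors' code: the subroutine names follow the flowchart of Fig.~\ref{f:seq_mc}, while their bodies are trivial stubs standing in for the actual physics.

```python
import random

# Stub subroutines; names follow the flowchart, the physics is omitted.
def generation(P, rng):
    """Generate particles at the input boundaries (fixed influx here)."""
    P.extend(rng.random() for _ in range(10))

def motion(P, rng):
    """Move every particle independently (first step of the inner loop)."""
    P[:] = [x + 0.01 * rng.random() for x in P]

def enumeration_indexing(P, n_cells=4):
    """Build the readdressing array LCR: cell index -> particle indices."""
    LCR = {}
    for k, x in enumerate(P):
        LCR.setdefault(int(x * n_cells) % n_cells, []).append(k)
    return LCR

def interaction(LCR, rng):
    """Collide particles cell by cell (second step of the inner loop)."""
    pass

def sampling(LCR, C):
    """Accumulate flow macroparameters in cells (particle counts here)."""
    for cell, idx in LCR.items():
        C[cell] = C.get(cell, 0) + len(idx)

def unsteady_dsmc(n_runs, n_steps, seed=0):
    C = {}                               # array C: cell statistics
    for i in range(n_runs):              # outer loop: n independent runs
        rng = random.Random(seed + i)    # shifted RNG for each run
        P = []                           # array P: particle parameters
        for _ in range(n_steps):         # inner loop: one unsteady run
            generation(P, rng)
            motion(P, rng)
            LCR = enumeration_indexing(P)
            interaction(LCR, rng)
            sampling(LCR, C)
    return C
```

Shifting the seed of \verb+RNG+ per run is what keeps the outer-loop samples statistically independent, exactly as required by the algorithm.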
\subsection{Parallelization methods for DSMC of gas flows}
The feasibility of parallelization and the efficiency of parallel algorithms
are determined both by the structure of modeling process and by the
architecture and characteristics of a computer (number of processors, memory
size, etc.).
The development of any parallel algorithm starts with the decomposition of
the general problem. The whole task is divided into a series of independent or
slightly dependent sub-tasks which are solved in parallel. For direct simulation
of gas flows there are different decomposition strategies depending on the goals
of the modeling and the nature of the flow. The development of parallel algorithms for
DSMC started only recently (about 10 years ago), and a
common classification of the principal types of parallel algorithms has not yet
been formed. However, one can point out several approaches to parallelizing
DSMC whose efficiency has been proved in practice. Let
us conditionally single out four types of parallel DSMC algorithms.
The first type is parallelization by coarse-grained independent sub-tasks.
This method has been realized in \cite{korolev}--\cite{bykov} for the
parallelization of DSMC of unsteady problems. The algorithm consists in
the repetition of statistically independent modeling procedures (runs) of a
given flow on several processors.
The second type is the spatial decomposition of the computational domain.
The calculations in each of the regions are separate sub-tasks which are solved
in parallel. Each processor performs calculations for particles and cells in its
own region. The transfer of particles is accompanied by data exchange
between processors, therefore these sub-tasks are not independent.
This method of parallelization is at present the most widespread for
parallel DSMC of both steady and unsteady flows \cite{furlani}--\cite{robinson2}.
The main advantage of this approach is the reduction of the
memory size required by each processor. The method can be carried out on
computers with both local and shared memory. Its drawback appears
as the number of processors increases: the number of
connections between regions grows, and so does the relative amount of data to
exchange between regions. An essential condition for high efficiency of this
method is uniform load balancing and minimization of data
exchange. One can use static or dynamic load balancing;
modern parallel algorithms of this type usually employ
dynamic load balancing.
The third type is algorithmic decomposition. This type of parallel
algorithm consists in the execution of different parts of the same procedures
on different processors. To realize these algorithms it is necessary to
use a computer whose architecture is adequate to the given algorithm.
An example of this type of algorithm is data parallelization
\cite{oh,grishin}.
The fourth type is combined decomposition, which includes all the types
considered above. The decomposition of the computational domain together with
data parallelization is carried out in~\cite{oh}. In this paper we shall
consider two-level algorithms which combine methods of the first and third types.
\subsection[Algorithm of Parallel Statistically Independent Runs]%
{Algorithm of Parallel Statistically\\ Independent Runs (PSIR)~\cite{bykov}}
The statistical independence of single runs makes it possible to execute them
in parallel. The general flowchart of the PSIR algorithm is depicted
in Fig.~\ref{f:psir_g}. The implementation of this approach on a
multiprocessor computer decreases the number of iterations of
the outer loop for every single processor ($n/p$ iterations
on a $p$-processor computer). Data exchange between processors
takes place after all calculations are finished. Only one processor sequentially
analyzes the results after the data exchange. The range of efficient
application of this algorithm is $p\leq n$.
The value of $n$ has to be a multiple of $p$ to get
optimal speedup and efficiency.
\begin{figure}[hb]
\centering
\includegraphics[scale=0.6]{psir_g.eps}
\caption{General flowchart of PSIR algorithm;
$m$ --- processor ID,
$p$ --- number of processors.}
\label{f:psir_g}
\end{figure}
All arrays (\verb+P, LCR, C+, etc.) are stored locally for each run. This
algorithm can be realized on computers with any type of memory (shared or
local). Message passing is used to perform data exchange on
computers with local memory. The scheme of memory usage is presented
in Fig.~\ref{f:psir_mem}.
The memory size required by this algorithm is proportional to $p$.
\begin{figure}
\centering
\includegraphics[scale=0.5]{psir_mem.eps}
\caption{Scheme of memory usage for PSIR algorithm
(the case of three processors).}
\label{f:psir_mem}
\end{figure}
The speedup $S_p$ and the efficiency $E_p$ of parallel algorithm with a
parallel fraction of computational work $\alpha$ for the computer with $p$
processors are as follows~\cite{ortega}:
\begin{equation}
S_p(p, \alpha)=\frac{T_1}{T_p},
\end{equation}
\begin{equation}
E_p(p, \alpha)=\frac{S_p}{p},
\end{equation}
where $T_1$ is the execution time of the sequential algorithm and
$T_p$ is the execution time of the given parallel algorithm on a computer
with $p$ processors ($p$ is the number of reserved processors).
In this paper we use a model of the computational process which
assumes that there is some parallel fraction $\alpha$
of the total calculations and a sequential fraction $(1-\alpha)$.
The parallel and sequential calculations
do not overlap.
\begin{figure}[!hb]
\centering
\begin{minipage}[c]{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{sp1.eps}
\end{minipage}%
\begin{minipage}[c]{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{sp2.eps}
\end{minipage}
\caption{Speedup $S_p$ as a function of number of processors $p$
for various parallel fractions $\alpha$ (left),
speedup $S_p$ as a function of parallel fraction $\alpha$ for
various number of processors $p$ (right).}
\label{f:sp_alpha}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{minipage}[c]{.4\textwidth}
\centering
\caption{Efficiency $E_p$ as a function of parallel fraction $\alpha$
for various number of processors~$p$}
\label{f:ep_alpha}
\end{minipage}%
\begin{minipage}[c]{.6\textwidth}
\centering
\includegraphics[width=\textwidth]{ep.eps}
\end{minipage}
\end{figure}
The value of $T_p$ is given by
\begin{equation}
T_p=[(1-\alpha)+\alpha/p]T_1.
\end{equation}
To get the value of $\alpha$ one may use a profiler. The final
formulas for $S_p$ and $E_p$ are as follows:
\begin{equation}
S_p(p, \alpha)=\frac{p}{p-\alpha(p-1)},
\label{e:psir_sp}
\end{equation}
\begin{equation}
E_p(p, \alpha)=\frac{1}{(1-\alpha)p+\alpha}.
\end{equation}
Formula~(\ref{e:psir_sp}) is a simple and general relation
known as Amdahl's law. According to this law, the upper limit of the speedup
at $p\rightarrow\infty$ for an algorithm which has non-overlapping
parallel and sequential parts is as follows:
\begin{equation}
S_p(p,\alpha)\le\frac{1}{1-\alpha}.
\end{equation}
To speed up the calculations we have to speed up the parallel computations;
however, the remaining sequential part slows down the overall
computing process to an ever greater extent. Even a small sequential
fraction may greatly reduce the overall performance.
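As a quick numerical illustration of Amdahl's law (a minimal sketch, not part of the original work):

```python
def speedup(p, alpha):
    """Amdahl's law: S_p = p / (p - alpha*(p - 1)) for parallel fraction alpha."""
    return p / (p - alpha * (p - 1))

def efficiency(p, alpha):
    """E_p = S_p / p."""
    return speedup(p, alpha) / p

# With alpha = 0.99 the speedup can never exceed 1/(1 - alpha) = 100,
# no matter how many processors are used.
```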
Figure~\ref{f:sp_alpha} shows the speedup $S_p$ as a function
of the number of processors $p$ and the parallel fraction $\alpha$.
The efficiency $E_p$ as a function of $\alpha$ is shown in
Fig.~\ref{f:ep_alpha}. Sequential computations affect speedup
and efficiency particularly strongly in the region $\alpha>0.9$.
Therefore, even a small decrease of the sequential computations in
algorithms with a high parallel fraction makes the speedup and
efficiency increase abruptly (at relatively high $p$).
The PSIR algorithm is coarse-grained and has higher efficiency and a greater
degree of parallelism than any other parallel algorithm for DSMC of
unsteady flows for a number of processors $p\leq n$. The maximum
speedup for this algorithm is obtained at $p=n$; the
speedup potential of the computer is surplus for $p>n$. Thus, the
PSIR algorithm for DSMC of unsteady flows has the following range of
efficient usage: $n\gg1$ and $n\geq p$. The parallel fraction
$\alpha$ can be very high (up to $0.99$--$0.999$) for typical problems of
molecular gasdynamics~\cite{bykov}. The corresponding speedup is
100--1000. To get an efficiency $E_p\ge0.5$ at $n=100$--$1000$
it is necessary to have $p=100$--$1000$, respectively.
\subsection[Data Parallelization of DSMC]%
{Data Parallelization (DP) of DSMC~\cite{grishin}}
The computing time of each DSMC problem is determined by the time of the
inner loop (1). The duration of this loop depends on the number of particles in the
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.6]{dp_g.eps}
\caption{General flowchart of DP algorithm;
$j$ --- index of data element}
\label{f:dp_g}
\end{figure}
domain and the number of cells. It was stated above that the inner loop
consists of two consecutive stages. The data inside each stage are
independent. The elements \verb+P[k]+ are processed at the first stage,
and the elements \verb+C[k]+ at the second one (the elements of
the arrays~\verb+P+ and \verb+C+ are mutually independent). Since the
operations on each of these elements are independent, it is possible to
process them in parallel. Each processor takes elements from
the particle array~\verb+P+ and the cell array~\verb+C+ according to its unique
ID-number: the $m$-th processor takes the $m$-th, $(m+p)$-th,
$(m+2p)$-th, etc.\ elements, where $m$ is the processor ID-number.
This rule of particle selection provides good load balancing
because various particles require different times to process and
they are located randomly in the array~\verb+P+.
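This cyclic selection rule can be written down directly (a sketch; 1-based element indices and processor IDs $1,\dots,p$, as in the text):

```python
def elements_for_processor(m, p, n_elements):
    """Element indices handled by processor m out of p: m, m+p, m+2p, ..."""
    return list(range(m, n_elements + 1, p))
```

For $p=3$ and 10 elements, processor 1 gets elements 1, 4, 7, 10; together the processors cover every element exactly once, and the random placement of particles in \verb+P+ is what keeps the per-processor work statistically balanced.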
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{dp_mem.eps}
\caption{Scheme of memory usage for DP algorithm (three processors)}
\label{f:dp_mem}
\end{figure}
The synchronization of processors is performed before the next loop iteration
starts. Before the second stage begins it is necessary to fill the readdressing
array \verb+LCR+. Complete information about the array~\verb+P+ is
required for the readdressing procedure. This task cannot be parallelized, so it is
performed by one processor. There are two synchronization points, one before the
readdressing and one after it. The reduction of the computational time is
due to the decrease of the amount of data which has to be processed by each
processor ($N_p/p$ and $N_c/p$ instead of $N_p$ and $N_c$). After the
inner loop is passed the processors also need to be synchronized.
Figure~\ref{f:dp_g} shows the general flowchart of the DP algorithm.
Data from the array~\verb+P+ are required to perform the operations on
the elements of the array~\verb+C+, and these data are located in the
array~\verb+P+ randomly. The arrays are therefore stored in shared memory
in order to avoid large data exchanges between processors. Memory conflicts
(several processors reading the same array element) are excluded by the algorithm.
The semaphore technique is used for processor synchronization.
The scheme of memory usage is depicted in Fig.~\ref{f:dp_mem}.
\section{Algorithm of Two-Level Parallelization with Static Load Balancing}
\label{sec:tlp}
It was stated above that the potential of the multiprocessor system is surplus
for the realization of the PSIR algorithm when the required number of
\begin{figure}
\centering
\includegraphics[scale=0.6]{tlp_g.eps}
\caption{Algorithm of two-level parallelization.
$\Delta t$ --- time step,
$\Delta t_s$ --- interval between samples,
$\Delta t_L$ --- time of a single run,
$t$ --- current time,
$i$ --- run number (first level),
$p_1$ --- number of first level processors,
$m$ --- second level processor ID-number,
$p_2$ --- number of second level processors,
$j$ --- index of array element.
}
\label{f:general_tlp}
\end{figure}
statistically independent runs $n$ is significantly less than the number of
processors $p$ ($n\ll p$). In this case the efficient usage of computer
resources of $p$-processor system can be provided by the implementation
of an algorithm of two-level parallelization (the TLP algorithm). The general
flowchart of the TLP algorithm is shown in Fig.~\ref{f:general_tlp}. The first
level of parallelization corresponds to the PSIR algorithm, while data
parallelization is employed at the second level inside each independent run.
The TLP algorithm is a parallel algorithm with static load balancing.
\begin{figure}
\centering
\includegraphics[scale=0.5]{tlp_mem.eps}
\caption{Scheme of memory usage for TLP algorithm ($p_1=2$, $p_2=3$)}
\label{f:tlp_mem}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.7]{tlp.eps}
\caption{Flowchart of TLP algorithm}
\label{f:tlp_scheme}
\end{figure}
The scheme of memory usage for the TLP algorithm is depicted
in Fig.~\ref{f:tlp_mem}.
This algorithm requires a memory size proportional to the number of
first level processors which compute single runs (just as for the
PSIR algorithm). It also requires the arrays for each run to be stored in
shared memory, as for the data parallelization algorithm, in order to reduce the
data exchange time between processors.
The speedup and the efficiency of the TLP algorithm are governed by the
following equations:
\begin{eqnarray}
S_p&=&S_{p_1}\cdot S_{p_2}=\frac{p_1}{p_1-\alpha_1(p_1-1)}\,
\frac{p_2}{p_2-\alpha_2(p_2-1)}, \label{e:tlp_sp}\\
E_p&=&E_{p_1}\cdot E_{p_2}=\frac{S_p}{p_1\cdot p_2}, \label{e:tlp_ep}
\end{eqnarray}
where indices `1' and `2' correspond to parameters on the first level and
on the second one.
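These two formulae can be evaluated directly; a minimal sketch (the parameter values in the test are only illustrative):

```python
def amdahl(p, alpha):
    """Single-level Amdahl speedup: p / (p - alpha*(p - 1))."""
    return p / (p - alpha * (p - 1))

def tlp_speedup(p1, alpha1, p2, alpha2):
    """Two-level speedup: the product of the Amdahl speedups of both levels."""
    return amdahl(p1, alpha1) * amdahl(p2, alpha2)

def tlp_efficiency(p1, alpha1, p2, alpha2):
    """E_p = S_p / (p1 * p2)."""
    return tlp_speedup(p1, alpha1, p2, alpha2) / (p1 * p2)
```

At $p_2=1$ the second factor equals 1 and the TLP speedup reduces to the PSIR one.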
Figure~\ref{f:tlp_scheme} shows the detailed flowchart of the TLP algorithm
for unsteady flow simulation. There are five synchronization points in the
algorithm. Four of them correspond to the DP algorithm. The last
synchronization has to be done after the termination of all runs. The
synchronization is performed with the aid of the semaphore technique. In
this version the iterations of the outer loop (2) are fully distributed between
the first level processors. This algorithm requires $n$ to be a multiple of $p$
for a uniform distribution of computer resources between single runs. In order
to make the runs statistically independent we have to shift the random
number generator in each run.
An HP/Convex Exemplar SPP-1600 system with
8 processors, 2~GB of memory and a peak performance of
1600~Mflops was used for the algorithm tests.
To simulate single-user conditions on the system we measured the
execution time of the parent process, which performs the start-up initialization
before forking the child processes and the data processing after the parallel code
has been passed (this process has the maximum execution time).
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{tlp_sp.eps}
\includegraphics[scale=0.5]{tlp_ep.eps}
\caption{Speedup $S_p$ (top) and efficiency $E_p$ (bottom) of the TLP algorithm
(circles --- experiment,
curves --- formulae~(\ref{e:tlp_sp},\ref{e:tlp_ep}),
cross --- speedup of the PSIR algorithm)
}
\label{f:tlp_sp}
\end{figure}
The amounts of parallel and sequential code were obtained from the program
profiling data using the standard \verb+cxpa+ utility.
The simulation of an unsteady 3-D water vapor flow in the inner
atmosphere of a comet was carried out in order to study the speedup
and the efficiency yielded by this algorithm. The number of first-level
processors $p_1$ was fixed at 6, and the number of second-level
processors $p_2$ was varied from 1 to 6. The values of the parallel
fractions $\alpha_1$ and $\alpha_2$ were 0.998 and 0.97, respectively.
Figure~\ref{f:tlp_sp} depicts the experimental results (circles) and the
theoretical curves for the speedup and efficiency as functions of the total
number of processors $p=p_1\cdot p_2$. The same figure shows the speedup
and efficiency of the PSIR algorithm (marked by a cross); the TLP algorithm
turns into the PSIR algorithm at $p_2=1$.
Thus, the TLP algorithm makes it possible to significantly reduce the
computational time required for the DSMC of unsteady flows on
shared-memory multiprocessor computers. The algorithm is used efficiently
under the condition $n\ll p$. Moreover, the number of processors $p$ has to
be a multiple of $n$ in order to provide good load balancing.
\section[Algorithm of Two-Level Parallelization with Dynamic Load Balancing]%
{Algorithm of Two-Level Parallelization\\ with Dynamic Load Balancing}
The TLP algorithm with static load balancing described in
section~\ref{sec:tlp} has several drawbacks. It does not provide good load
balancing (and hence may yield low efficiency) in the following cases:
\begin{enumerate}
\item the ratio $p/p_1$ is not an integer (some processors are not used);
\item each run contains non-parallelized code with a total sequential
fraction $\beta_\ast$, which depends on the initial sequential fraction
$\beta=1-\alpha$ and on the number of processors~$p_2$:
\end{enumerate}
\begin{equation}
\beta_\ast=\frac{\beta}{\beta+\frac{1-\beta}{p_2}}.
\label{e:beta_ast}
\end{equation}
At small values of $\alpha$ or large values of $p_2$,
some processors may be idle during each run. This leads to
inefficient usage of computer resources at high values of $p_1$.
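The growth of the effective sequential fraction~(\ref{e:beta_ast}) with $p_2$ can be tabulated with a short sketch (Python; the function name is ours, and $\beta=0.437$ is the value measured in the experiments of the next section):

```python
def beta_star(beta, p2):
    # Effective sequential fraction of one run: beta / (beta + (1-beta)/p2)
    return beta / (beta + (1 - beta) / p2)

# example: beta = 0.437; the effective sequential fraction grows with p2
for p2 in (1, 2, 4, 6):
    print(f"p2 = {p2}  beta* = {beta_star(0.437, p2):.3f}")
```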
The efficiency can be increased by dynamic load balancing with the aid of
dynamic processor reallocation (DPR). The idea of the algorithm is as
follows. Let us conditionally divide all available processors into two
parts: $p_1$ leading processors and the supporting processors which form
the so-called ``heap'' (the number of heap processors is $p-p_1$). Each
leading processor is responsible for its own run. This algorithm is similar
to TLP, but here there is no hard link between heap processors and a
specific run. Each leading processor reserves the required number of heap
processors before starting parallel computations (according to a special
allocation algorithm) and releases them after exiting the parallel
procedure. This makes it possible to use idle processors more efficiently:
in effect, the parallel code is executed with more processors than in the
TLP algorithm with static load balancing. The flowchart of the
TLPDPR algorithm is presented in fig.~\ref{f:tlpdpr_scheme}.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.6]{tlpdpr.eps}
\caption{Flowchart of TLPDPR algorithm}
\label{f:tlpdpr_scheme}
\end{figure}
The speedup yielded by this algorithm is determined by the following basic
parameters: the total number of processors available in the system $p$, the
required number of independent runs $p_1=n$ ($p_1\ll p$), the sequential
fraction of computational work in each run $\beta$, and the
heap-processor allocation algorithm. In this paper we use the following
allocation algorithm:
\begin{equation}
p_2^\prime=(1+\mathrm{PRI})p_2,\quad
\mathrm{PRI}=0\ldots\mathrm{PRI}^\ast,
\label{e:dpr}
\end{equation}
where $p_2^\prime$ is the actual number of second-level processors,
$\mathrm{PRI}$ is a parameter estimated from experimental results for
similar problems, and
$\mathrm{PRI}^\ast$ is the estimated upper limit of the efficient range of
parameter $\mathrm{PRI}$.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.6]{tlpdpr1.eps}
\caption{Speedup on the second level as a function of
number of second level processors $p_2$ for algorithms of
TLP ($\mathrm{PRI}=0$, dashed line) and TLPDPR ($\mathrm{PRI}=\mathrm{PRI}^\ast$, solid line),
circles --- experiment ($p_1=6$, $p=36$, $\mathrm{PRI}=0$),
asterisk --- optimal value of parameter $\mathrm{PRI}$
}
\label{f:dpr1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.6]{tlpdpr2.eps}
\caption{Speedups of TLP (dashed line) and TLPDPR (solid line)
algorithms as functions of number of the second level processors $p_2$
for various parallel fractions $\beta$ on the second level}
\label{f:dpr2}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.6]{tlpdpr3.eps}
\caption{Speedup on the second level $S_{p_2}$ as a function of $\mathrm{PRI}$
($p_1=6$, $p=36$,\protect\newline $\mathrm{PRI}=0\ldots\mathrm{PRI}^\ast$).
Solid line --- theory, dashed line --- experiment approximation}
\label{f:pri}
\end{figure}
When $p$ is a multiple of $p_1$ and $\mathrm{PRI}=0$,
this algorithm turns into the TLP algorithm. The speedup on the second level
$S_{p_2}$ is governed by the following equation:
\begin{equation}
S_{p_2}=\frac{1}{\beta+\frac{1-\beta}{p_2(1+\mathrm{PRI})}}.
\label{e:sp2}
\end{equation}
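Equation~(\ref{e:sp2}) is easy to explore numerically (a Python sketch with our own function name): at $\mathrm{PRI}=0$ it reduces to the plain Amdahl speedup on $p_2$ processors, and as $p_2(1+\mathrm{PRI})$ grows it approaches the $1/\beta$ limit.

```python
def s_p2(beta, p2, pri):
    # Second-level speedup with dynamic processor reallocation:
    # 1 / (beta + (1 - beta) / (p2 * (1 + PRI)))
    return 1.0 / (beta + (1 - beta) / (p2 * (1 + pri)))

beta = 0.437  # sequential fraction from the experiment in this section
for pri in (0.0, 1.0, 3.0):
    print(f"PRI = {pri}  S_p2 = {s_p2(beta, 6, pri):.2f}")
print(f"limit 1/beta = {1 / beta:.2f}")
```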
When parameter $\mathrm{PRI}$ exceeds a certain threshold, the speedup
$S_{p_2}$ decreases. This decrease is not described by~(\ref{e:sp2}):
the allocation algorithm overstates its demands on system
resources, which worsens the load balancing.
The upper limit of the efficient range of parameter $\mathrm{PRI}$ can
be estimated by the following condition:
\begin{equation}
1+\frac{p-p_1}{(1-\beta_\ast)p_1}=(1+\mathrm{PRI}^\ast)p_2.
\label{e:pri_cond}
\end{equation}
It means that we have to find the value of parameter $\mathrm{PRI}$
for which all processors idle at a given moment are distributed uniformly
among the runs performing parallel computations.
The condition for $\mathrm{PRI}^\ast$ as a function of $\beta$ and $p_2$
can be derived from (\ref{e:beta_ast}) and (\ref{e:pri_cond}):
\begin{equation}
\mathrm{PRI}^\ast=\frac{\beta}{1-\beta}(p_2-1).
\label{e:pri_ast}
\end{equation}
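For the experimental case of this section ($\beta=0.437$, $p_2=6$) the estimate~(\ref{e:pri_ast}) can be computed directly (a Python sketch; the function name is ours):

```python
def pri_star(beta, p2):
    # Upper limit of the efficient PRI range: beta/(1-beta) * (p2 - 1)
    return beta / (1 - beta) * (p2 - 1)

# experimental case: beta = 0.437, p2 = 6
print(f"PRI* = {pri_star(0.437, 6):.2f}")
```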
The expressions above are strictly valid for $p_1\gg1$.
The value of $S_{p_2}$ at $\mathrm{PRI}=\mathrm{PRI}^\ast$ gives the upper
limit of the speedup for a given problem.
To study the characteristics of the TLPDPR algorithm we solved the
problem of unsteady flow past a body, with sequential fraction
$\beta=0.437$ and $p_1=6$. The speedup as a function of $p_2$
($p_2=1\ldots6$) for $\mathrm{PRI}=0$ and $\mathrm{PRI}=\mathrm{PRI}^\ast$ is depicted in
fig.~\ref{f:dpr1}. The same figure shows the results of the calculation for
$\mathrm{PRI}=0$; the point marked by an asterisk corresponds to the optimal value
of parameter $\mathrm{PRI}$ for $p_2=6$ ($p=36$). The maximum speedup
$S_{p_2}$ for a given degree of parallelism ($p_2\rightarrow\infty$),
estimated by formula~(\ref{e:sp2}), comes to 2.3. The
TLP algorithm gives a speedup ($\beta=0.437$, $p_1=6$, $p=36$) which
is 80\% of this maximum value; at the optimal value of parameter
$\mathrm{PRI}$ the TLPDPR algorithm gives 93\% for the same case. This is
equivalent to using the TLP algorithm on a 120-processor computer ($p=120$,
$p_1=6$, $p_2=20$). Figure~\ref{f:dpr2} shows the speedups of the TLP and
TLPDPR algorithms as functions of $p_2$ for various $\beta$.
The essential question about using the TLPDPR algorithm is how
to determine the optimal value of parameter $\mathrm{PRI}$ a priori.
The value given by (\ref{e:pri_ast}) determines the upper limit of the
efficient range of parameter $\mathrm{PRI}=0\ldots\mathrm{PRI}^\ast$. The influence of
parameter $\mathrm{PRI}$ on the speedup is presented in fig.~\ref{f:pri}
for $p_1=6$, $p_2=6$ ($p=36$).
Formula~(\ref{e:sp2}) gives a good approximation of the experimental
results in the initial range of parameter $\mathrm{PRI}$. Beyond it we see
the decrease of speedup predicted above, owing to the mismatch between
available and required system resources. The latter can be explained as
follows. In (\ref{e:sp2}) it is supposed that released heap processors are
instantly allocated to other runs. In practice these events do not
coincide, so (\ref{e:sp2}) requires a probability coefficient, a function
of the parameters of the problem and of the computer, which determines the
probability of meeting the resource requirements when allocating heap
processors.
The great flexibility of this algorithm allows its efficient use for both
steady and unsteady problems. In the case of steady-state modeling it is
possible to perform an additional ensemble averaging with a smaller number
of model particles, which can lead to a shorter computation time compared
with the DP algorithm.
The implemented TLPDPR algorithm has the following advantages
over the TLP algorithm with static load balancing:
\begin{itemize}
\item it minimizes the latency time of processors, providing better load
balancing;
\item better load balancing makes it possible to obtain higher speedups
under the same conditions.
\end{itemize}
\bigskip
\bigskip
\addcontentsline{toc}{section}{References}