How to Make Graham Cracker Brownies
1. Combine all the ingredients in a large bowl. Spread into a greased 8-inch square baking pan. Bake at 350° for 30-35 minutes or until a toothpick inserted near the center comes out clean. Cool on a wire rack before slicing and serving.
2. To speed up removal and cleanup, line your pan with aluminum foil and lightly spray with cooking spray. Be sure to spread the batter evenly to prevent overbaked corners.
This is a stick-up. Give me your car keys or your cell phone. I don’t care which. What’s it gonna be, pal?
For a growing number of young people, the answer is the keys. A recent survey from the research company Gartner finds that 46 percent of 18- to 24-year-old Americans would rather have access to the internet than their own car. In auto-obsessed Germany, three-quarters of those in the same age group would rather live without their car than their smartphone.
“The iPhone is the Ford Mustang of today,” Thilo Koslowski, Gartner’s lead automotive analyst, recently told the New York Times.
What’s caused the change? For starters, driving has lost its cool with young Americans, who frankly have better things to do than sit behind the wheel of a tin can lodged in gridlock. And then there are gas prices that are expected to top $4.25 a gallon by April.
But there’s something else, too: If you’re not hung up on owning your own car, your phone can lead you to better, and far cheaper, ways to get around. And I don’t mean calling a friend to bum a ride.
Some of this stuff, you’ve already heard about: Google Transit, which is wired into many smartphones, makes it easy to plan a trip by bus or train. In some cities, GPS-enabled apps like Nextbus actually track transit in real time, so you know if your ride is running ahead of or behind schedule. Apps like Cabulous and Taxi Magic make calling a cab a no-brainer. Add to that a long list of local transportation apps that make it easy to get around without a car.
Of course, sometimes the bus isn’t going where you’re headed. Enter smartphone-powered car-sharing companies like car2go, and programs like Getaround that allow you to locate privately owned cars that people will rent you by the hour. Lest you doubt the potential of this type of peer-to-peer car sharing, a French company called Buzzcar, co-founded by former Zipcar CEO Robin Chase (complete with nifty bug stickers for your car), is apparently doing quite well. Stateside, Zipcar just poured a bunch of cash into Wheelz, a startup peer-to-peer car-sharing company that targets college campuses.
In Europe, a company called Carpooling.com allows auto owners to rent out just a seat or two, rather than the whole vehicle. It is quickly growing in popularity, thanks in part to skyrocketing gas prices. The company now claims 3.5 million registered users and boasts that it has saved 99 million gallons of gas, sparing the atmosphere 725,000 tons of CO2.
“Today when people think of mobility, they don’t think of a vehicle. They‘re looking for the best way to get from A to B,” says Carpooling CEO Markus Barnikel. He calls car sharing “the perfect complement” to public transportation, and true to form, the company’s website incorporates bus, train, and plane schedules so users can mix and match to find the best way of getting to their destinations.
What, you may be wondering, does the auto industry make of all this?
On one hand, you’ve got those in the business who argue that the fix is — wait for it — to make cars more like smartphones! This makes sense if you buy the argument that the reason young people aren’t enthused about driving is that it cuts into their texting time. But building a car that will automatically check you in on Foursquare when you arrive at the mall doesn’t even begin to address the real issues. Take, for example, the fact that it costs $8,000 just to keep a car around, according to the American Automobile Association.
The smarter companies are jumping feet-first into this brave new world where people don’t measure their worth by the amount of chrome they haul around. A recent survey of global auto execs estimates that by 2026, a quarter or more of urban inhabitants in some parts of the world will spurn personal cars in favor of “mobility services” such as car sharing. “The world is moving from car ownership to car usership,” the study says.
U.S. auto execs projected that just 1 to 5 percent of Americans would opt out of personal autos, but you can chalk that up to a seemingly unshakable lack of foresight in the American auto industry.
Audi, for one, sees the rise of car sharing as an opportunity. André Stoffels, Audi’s head of strategy, explains in the auto exec survey [PDF] that the company plans to give customers “easy, instant access to high quality vehicles as part of a premium brand experience, either through car sharing, car clubs, or ‘pay-as-you-drive’ offerings.”
It’s “sharing” for car snobs, and no doubt they will be able to take care of every bit of the business via their smartphones.
As for the rest of us? Bring on the higher gas prices! (I don’t care what the Republican presidential contenders say, we’re not going to drill our way out of this.) We’ll figure out how to get around while burning a little less of the stuff. There are lots of apps for that.
Down-regulation of hypusine biosynthesis in Plasmodium by inhibition of S-adenosyl-methionine-decarboxylase.
An important issue facing global health today is the need for new, effective and affordable drugs against malaria, particularly in resource-poor countries. Moreover, the currently available antimalarials are limited by factors ranging from parasite resistance to safety, compliance, cost and the current lack of innovations in medicinal chemistry. Depletion of polyamines in the intraerythrocytic phase of P. falciparum is a promising strategy for the development of new antimalarials since intracellular levels of putrescine, spermidine and spermine are increased during cell proliferation. S-adenosyl-methionine-decarboxylase (AdoMetDC) is a key enzyme in the biosynthesis of spermidine. The AdoMetDC inhibitor CGP 48664A, known as SAM486A, inhibited the separately expressed plasmodial AdoMetDC domain with a Ki of 3 microM, resulting in depletion of spermidine. Spermidine is an important precursor in the biosynthesis of hypusine. This prompted us to investigate a downstream effect on hypusine biosynthesis after inhibition of AdoMetDC. Extracts from P. falciparum in vitro cultures that were treated with 10 microM SAM486A showed suppression of eukaryotic initiation factor 5A (eIF-5A) in comparison to the untreated control in two-dimensional gel electrophoresis. Depletion of eIF-5A was also observed in Western blot analysis with crude protein extracts from the parasite after treatment with 10 microM SAM486A. A determination of the intracellular polyamine levels revealed an approximately 27% reduction of spermidine and a 75% decrease of spermine, while putrescine levels increased by 36%. These data suggest that inhibition of AdoMetDC provides a novel strategy for eIF-5A suppression and the design of new antimalarials.
---
abstract: 'Importance sampling (IS) is a variance reduction method for simulating rare events. A recent paper by Dupuis, Wang and Sezer (Ann. Appl. Probab. 17(4):1306-1346, 2007) exploits connections between IS and subsolutions to a limit HJB equation and its boundary conditions to show how to design and analyze simple and efficient IS algorithms for various overflow events for tandem Jackson networks. The present paper uses the same subsolution approach to build asymptotically optimal IS schemes for stable open Jackson networks with a tree topology. Customers arrive at the single root of the tree. The rare overflow event we consider is the following: given that initially the network is empty, the system experiences a buffer overflow before returning to the empty state. Two types of buffer structures are considered: 1) A single system-wide buffer of size $n$ shared by all nodes, 2) each node $i$ has its own buffer of size $\beta_i n$, $\beta_i \in (0,1)$.'
author:
- |
[Ali Devin Sezer]{}\
Institute of Applied Mathematics\
Middle East Technical University\
Ankara, Turkey\
\
bibliography:
- '../../../bibliography/IS.bib'
title: Asymptotically Optimal Importance Sampling for Jackson Networks with a Tree Topology
---
Introduction
============
Importance sampling (IS) is a method for simulation of rare events. It is used in many applications including simulation of communication systems, computation of credit risk and pricing of financial derivatives. The idea in IS is to change the sampling distribution (and modify the Monte Carlo estimator accordingly) to reduce estimator variance. Queuing processes are basic stochastic models that are commonly used in a wide range of application areas. The simplest type of queuing processes are Jackson networks, in which the arrival and service times at the nodes of the network are assumed to be independent and exponentially distributed with constant rates.
In the present paper we build an IS algorithm, which is optimal in a certain asymptotic sense (see Section \[s:IS\]), to simulate buffer overflows of stable open Jackson networks with a tree topology. The system is stable in the sense that the average service rate at each node is faster than the average arrival rate to that node. Customers arrive at the single root of the tree. The rare overflow event we consider is the following: given that initially the network is empty the system experiences a buffer overflow before returning to the empty state. Two types of buffer structures are considered: 1) A single system-wide buffer of size $n$ shared by all nodes 2) each node $i$ has its own buffer of size $\beta_i n$, $\beta_i \in (0,1)$.
To construct our optimal IS algorithms we use an optimality result from [@thesis] which was obtained using the optimal control/subsolution approach to IS of [@DSW; @duphui-is1; @duphui-is2; @duphui-is3; @duphui-is4]. This result states that to construct optimal IS algorithms for the simulation of a wide range of buffer overflow events of any stable Jackson network it is sufficient to build appropriate smooth subsolutions to a Hamilton-Jacobi-Bellman (HJB) equation and its boundary conditions (these are given in the context we study in the current paper). This HJB equation and the boundary conditions are the main tools of the optimal control/subsolution approach and are derived from an optimal control representation of the IS distribution construction problem.
The main contribution of the present paper is a recursive algorithm which takes as input the parameters of an arbitrary Jackson network with a tree topology and constructs a smooth subsolution to the HJB equation and its boundary conditions given in . The constructed subsolution is of the form of a smoothed minimum of affine functions, as was the case in previous works using the subsolution approach, e.g. [@DSW; @thesis]. The quantities that appear in the subsolution (and hence the algorithm) have simple heuristic interpretations as [*effective* ]{} utilities and rates of nodes in the system. They are “effective” in the sense that they depend on whether a node is empty or nonempty. These concepts are explained in detail in subsection \[ss:ssubsol\]. The main results of the paper are Lemmas \[l:main\] and \[l:rectangle\], which prove that the subsolutions arising from the effective rates and utilities satisfy all the conditions of the general optimality theorem in [@thesis] for both types of buffer structures that we study in this paper. Numerical results in Sections \[s:numerical\] and \[s:rectangle\] demonstrate the practical usefulness of the resulting IS algorithms.
Since the initial writing [@istrees] of the present paper, a recent paper by Dupuis and Wang [@yeniDW] has appeared that treats the IS problem for any stable Jackson network using the subsolution approach. The relation between the results in the current paper and those in [@yeniDW] is discussed in Section \[s:discussion\].
There is a tremendous amount of work on the IS of queueing networks, including [@Rubetal04; @WeiQui; @Sadowsky91; @Changetal; @KroeseNicola; @JunejaNicola; @ParWal; @GlassKou; @BoerNicola02]. The problem of constructing IS algorithms for buffer overflow of queueing networks was first posed for the simple two-node tandem network in [@ParWal], which also proved that a static large deviations based change of measure is asymptotically optimal for certain parameter values of the system. An asymptotically optimal IS algorithm with optimality proofs for buffer overflow of stable tandem Jackson networks was first developed in [@DSW] using the optimal control/subsolution approach. The discontinuous dynamics of the queueing process near the boundaries of its state space (i.e., when few customers remain in some of the nodes) makes the IS construction problem for queueing networks difficult [@DSW; @GlassKou]. This property rules out iid sampling distributions (such as those developed in [@Siegmund] in the context of a random walk on the real line and in [@ParWal] in the context of two tandem Jackson nodes) as candidates for efficient IS samplers and forces one to search for a good IS distribution among dynamic distributions, where indeed the subsolution approach locates the optimal IS distributions. For a more in-depth discussion of these issues we refer the reader to [@DSW; @thesis; @GlassKou; @duphui-is1].
Setup {#s:setup}
=====
We consider Jackson networks with a tree topology. Customers arrive only at the root of the tree. Our goal is to construct optimal IS algorithms to estimate the following probability: $$\label{e:ep1}
P_0( \text{system experiences an overflow before it empties}).$$ This overflow event depends on the buffer structure of the network, which will be made precise in subsection \[ss:overflow\]. For the computation of $p_0$ it is enough to consider the embedded discrete time random walk of the Jackson network. The normalized service and arrival rates and the routing probabilities of the Jackson network are the jump probabilities of the embedded random walk.
Notation and Definitions {#ss:notation}
------------------------
The tree consists of $d$ nodes. $X(i)$ is the population of the $i^{th}$ node at the jump times in the network. $i\rightarrow j$ denotes that node $j$ is a child of node $i$. For $i \rightarrow j$, $\mu_{i,j}> 0$ is the rate at which customers are served in node $i$ and are either sent to node $j$ (if $j>0$) or leave the system (if $j=0$).
The total service rate at node $i$ is defined as $\mu_i \doteq \sum_{k} \mu_{i,k}.$ The arrival rate $\Lambda_j$ at node $j$ equals $\lambda$ if $j$ is the root node. Otherwise it equals $\Lambda_j \doteq \Lambda_i \frac{\mu_{i,j}}{\mu_i}$ where node $i$ is the parent of node $j$. There is no loss of generality in assuming that $\lambda + \sum_{i=1}^d \mu_i$ equals $1$; otherwise one can change the time unit so that the equality holds. The utility of node $i$ is defined as $\rho_i \doteq \Lambda_i / \mu_i.$ The Jackson network is called stable if $\rho_i < 1$ for all $i \in\{1,2,...,d\}$. Therefore we assume that $\vee_{i=1}^d \rho_i < 1.$ This stability assumption implies that the buffer overflow events we study in the present paper decay exponentially in $n$ (see and ). Asymptotic optimality of an IS algorithm is stated in terms of this exponential decay (see Section \[s:IS\]).
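The recursions for the arrival rates and utilities can be sketched in code. A minimal sketch, assuming illustrative container names (`children`, `mu`, `lam`) that are not from the paper: `mu` maps `(i, j)` to $\mu_{i,j}$ (with `j = 0` meaning the customer leaves the system), and `children[i]` lists the children of node `i`.

```python
# Sketch: Lambda_j = Lambda_i * mu_{i,j} / mu_i for parent i (Lambda = lam at
# the root), and utility rho_i = Lambda_i / mu_i. Names are illustrative.

def rates_and_utilities(children, mu, lam, root=1):
    # total service rate mu_i at each node
    mu_tot = {i: sum(r for (a, j), r in mu.items() if a == i) for i in children}
    Lambda = {root: lam}
    stack = [root]
    while stack:                       # propagate arrival rates down the tree
        i = stack.pop()
        for j in children[i]:
            Lambda[j] = Lambda[i] * mu[(i, j)] / mu_tot[i]
            stack.append(j)
    rho = {i: Lambda[i] / mu_tot[i] for i in children}
    return Lambda, rho

# Two-node tree: the root (node 1) feeds node 2 or lets customers exit;
# rates are normalized so that lam + mu_1 + mu_2 = 1.
children = {1: [2], 2: []}
mu = {(1, 2): 0.2, (1, 0): 0.2, (2, 0): 0.3}
Lambda, rho = rates_and_utilities(children, mu, lam=0.3)
```

In this example all utilities are below one, so the network is stable in the paper's sense.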
The evolution of the random walk $X$ takes place in the state space ${\mathbb Z}^d_+$. This set has $2^d - 2$ different boundaries: $\partial_i \doteq
\{x =(x_1,x_2,...,x_d) \in{\mathbb Z}^d_+: x_i = 0 \}$, $i \in \{1,2,...,d\}$, $\partial_{\{i_1,i_2,...,i_k\}} \doteq \bigcap_{l=1}^k \partial_{i_l}$, $\{i_1,i_2,...,i_k\}$ $\subset \{1,2,...,d\}.$ As we have remarked earlier the dynamics of $X$ depends on whether $X$ is on one of these boundaries and if so it further depends on which one. We will find it convenient to identify these boundaries with bitmaps $b\in \{0,1\}^d$. $b$ describes the following state of the network: $b(i)=0$ signifies that node $i$ is empty, $b(i)=1$ signifies that it is non-empty. Define $
v_{0,1} = (1,0,...,0)
$ and $$\begin{aligned}
{\mathcal V}_2 &\doteq \left\{ v_{i,j}, i,j \in \{1,2,...,d\}: i\rightarrow j, \right.\\
&~~~~~~~~~~~~~~~ \left. v_{i,j}(i)=-1,~v_{i,j}(j)=1,~v_{i,j}(k)=0, k \in \{1,2,...,d\}- \{i,j\} \right\}\\
{\mathcal V}_3 &\doteq \left\{ v_{i,0}, i \in \{1,2,...,d\}:
v_{i,0}(i)=-1,~v_{i,0}(k)=0, k \in \{1,2,...,d\}- \{i\} \right\}\end{aligned}$$ Let ${\mathcal V} \doteq \{ v_{0,1} \} \cup {\mathcal V}_2 \cup {\mathcal V}_3.$ ${\mathcal V}$ is the set of all possible jumps the process $X$ can make. $v_{0,1}$ corresponds to a new customer arriving at the root node, $v_{i,j} \in {\mathcal V}_2$ corresponds to server $i$ serving a customer in queue $i$ and sending it to queue $j$ with $i\rightarrow j$, and finally $v_{i,0} \in {\mathcal V}_3$ corresponds to a customer leaving the system after being served by server $i$.
Let $Y=\{Y_k: k =0,1,2,...\}$ be an iid sequence such that $P_x( Y_k = v_{0,1} ) =p(v_{0,1}) \doteq \lambda$, $P_x( Y_k = v_{i,j} ) = p(v_{i,j}) \doteq \mu_{i,j}$ for $v_{i,j} \in {\mathcal V}_2$, $P_x( Y_k = v_{i,0} ) = p(v_{i,0}) \doteq \mu_{i,0}$ for $v_{i,0} \in {\mathcal V}_3$, for all $x \in {\mathbb Z}^d_+$. $Y_k$ are the unconstrained increments of the process $X$. We assume the existence of a probability space $(\Omega, {\mathcal F})$ equipped with the probability distributions $P_x$. The subscript $x$ denotes the initial position of the queuing system $X_0$: under $P_x$, $X_0=x$ almost surely.
$X \in \partial_{\{i_1,i_2,...,i_k\}}$ if the Jackson network has no customers in queues $i_1$, $i_2$,...,and $i_k$. Therefore $v_{l,j}$, $j \in \{0,1,2,..,d\}$, $l \in \{i_1,i_2,...,i_k\}$, cannot be an increment of $X$ when $X \in \partial_{\{i_1,i_2,...,i_k\}}$. The constraining map $\pi:{\mathbb R}^d_+ \times {\mathcal V} \rightarrow
{\mathcal V} \cup \{0\}$ will make sure that this does not happen: $$\pi(x,v) =
\begin{cases}
0, &\text{if }
x\in \partial_i \text{ for some }i\in\{1,2,...,d\} \text{ and }
\langle v, n_i \rangle < 0,\\
v,
&\text{otherwise, }
\end{cases}$$ where $n_i$ is normal to the boundary $\partial_i$: $n_i(i)= 1$ and $n_i (j) =0$ for $j\neq i$. $X$ can now be written as $$\label{e:dynamics}
X_{k+1} \doteq X_k + \pi(X_k,Y_k).$$ $X_0$ is the initial state of the system and under $P_x$ it equals $x\in {\mathbb Z}^d_+$ almost surely.
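The constrained dynamics above can be sketched as a short helper (hypothetical, not from the paper's code): $\pi(x,v)$ drops a jump whose negative component lands on an empty queue, so the chain never leaves ${\mathbb Z}^d_+$.

```python
# Sketch of X_{k+1} = X_k + pi(X_k, Y_k): pi(x, v) = 0 when x lies on some
# boundary partial_i and <v, n_i> < 0, i.e. v would decrement an empty queue.

def step(x, v):
    if any(xi == 0 and vi < 0 for xi, vi in zip(x, v)):
        return x                                  # jump suppressed: pi = 0
    return tuple(xi + vi for xi, vi in zip(x, v)) # ordinary jump: pi = v
```

For example, a service jump at an empty node 1 is ignored, while the same jump at a nonempty node 1 moves the customer along.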
Overflow event of interest {#ss:overflow}
--------------------------
We would like to develop IS algorithms to estimate . We now define what we mean by an overflow. Let $\partial_+^d \doteq \{ x \in {\mathbb R}^d_+: \vee_i x(i) = 1 \}$.
\[as:bs\] The system has a buffer whose structure is determined by a normalized exit set ${\mathcal S} \subset [0,1]^d$ with the following properties: 1) ${\mathcal S}$ is closed and connected, 2) $0 \notin {\mathcal S}$, 3) Any continuous curve in $[0,1]^d$ that contains $0$ and a point from $\partial_+^d$ must also contain a point from ${\mathcal S}$. 4) For $S_n \doteq \{x \in {\mathbb Z}^d_+: x/n \in {\mathcal S} \}$, $$\label{e:eda}
\gamma \doteq \lim_{n \rightarrow \infty} - \frac{1}{n}\log P_{\bf s}( X \text{ hits } S_n
\text{ before } 0)$$ exists and is nonzero.
In this article we are interested in two types of buffer structures: 1) $
{\mathcal S}_1 \doteq \{x \in {\mathbb R}^d_+ : x(1) + x(2) + \cdots + x(d) = 1\}.
$ $S_n \doteq \{x \in {\mathbb Z}^d_+: x/n \in {\mathcal S}_1 \}$ corresponds to a single buffer of size $n$ shared by all queues. For $\beta \in {\mathbb R}^d_+$ $
{\mathcal S}_2 = \{x \in {\mathbb R}^d_+ : x(i) = \beta(i) \text{ for some $i$ and } x(j) \le \beta(j) \text{ for all }j \}.
$ Then $S_n \doteq \{x \in {\mathbb Z}^d_+: x/n \in {\mathcal S}_2 \}$ corresponds to $d$ independent buffers, one for each node. The size of the buffer for node $i$ is given by $n\beta(i)$. Without loss of generality we will assume that $\vee_i \beta(i) = 1.$ Define the initial point ${\mathbf s} \doteq (1,0,0,0,\dots,0)$. Fix a buffer structure ${\mathcal S}$ and define the exit boundaries $S_n$ as above. We now rewrite the exit probability of interest precisely as: $
p_n \doteq P_{\bf s}( X \text{ hits } S_n \text{ before it hits } 0 ).
$
We consider the case ${\mathcal S} = {\mathcal S}_1$ (all nodes share a single buffer) in Section \[s:shared\] and the case ${\mathcal S} = {\mathcal S}_2$ (one buffer for each node) in Section \[s:rectangle\].
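The two exit structures can be sketched as membership tests on the scaled state $x/n$; function and argument names here are illustrative, not from the paper.

```python
# Sketch: S_1 is a single shared buffer of size n (total population reaches
# n); S_2 gives node i its own buffer of size beta[i] * n.

def hits_shared(x, n):
    # x/n lies in S_1 once x(1) + ... + x(d) reaches n
    return sum(x) >= n

def hits_individual(x, beta, n):
    # x/n lies in S_2 once some queue i reaches beta[i] * n
    return any(xi >= bi * n for xi, bi in zip(x, beta))
```

A simulation run would stop when one of these tests fires or the state returns to the origin.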
Importance Sampling {#s:IS}
===================
In order to simulate $X$ using importance sampling one specifies a sampling distribution $\bar{p}(v|x)$, $v \in {\mathcal V}$ and $x \in {\mathbb Z}^d_+$, and simulates $X$ from this distribution. Note that we allow $\bar{p}$ to depend on $x$, the current position of $X$. Define $A_n$ to be the set of sample paths that hit the exit set $S_n$ before $0$ and let $T_n$ denote the first time $X$ hits $S_n$ or $0$. The IS estimator of $p_n$ using $K$ sample paths is then: $$\label{e:ISestimator}
\frac{1}{K} \sum_{k=1}^K \hat{p}_n^k,
~~~~~~ \hat{p}_n^k \doteq 1_{A_n}(X^k) \cdot \prod_{i=1}^{T_n -1} \frac{p(Y^k_i)}{\bar{p}(Y^k_i|X^k_i)},$$ where $X^k$ denotes the $k^{th}$ independent sample path used in the simulation. The increments $\{Y^k\}$ are iid copies of the increment process $Y$ sampled from $\bar{p}$. $X^k$ is built along with $Y^k$ using the dynamics . The product is the likelihood ratio of $P_{\bf s}$ and $\bar{P}$, which appears in the estimator to cancel off the effect of changing the sampling distribution from $p$ to $\bar{p}$.
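For intuition, here is a minimal sketch of the estimator in the special case $d=1$: a single node with normalized arrival rate $\lambda$ and service rate $\mu$, started at ${\bf s}=1$ and absorbed at $0$ or $n$. The sampling distribution swaps $\lambda$ and $\mu$ (the classical tilt for one queue); each path accumulates the likelihood ratio $p(Y)/\bar{p}(Y)$ step by step, exactly as in the product above. This is an illustration, not the paper's general tree algorithm.

```python
import random

def is_estimate(lam, mu, n, K, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(K):
        x, lr = 1, 1.0
        while 0 < x < n:
            if rng.random() < mu:      # tilted chain: arrivals at rate mu
                x += 1
                lr *= lam / mu         # likelihood ratio of an up step
            else:                      # tilted chain: departures at rate lam
                x -= 1
                lr *= mu / lam         # likelihood ratio of a down step
        if x == n:                     # indicator 1_{A_n}: hit S_n before 0
            total += lr
    return total / K

# Gambler's-ruin formula gives the exact answer for comparison.
lam, mu, n = 0.3, 0.7, 15
p_hat = is_estimate(lam, mu, n, K=20000)
r = mu / lam
p_exact = (r - 1) / (r**n - 1)
```

With this tilt the likelihood ratio on every successful path equals $(\lambda/\mu)^{n-1}$, which is why the estimator's relative error stays small even though $p_n$ itself is tiny.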
$\hat{p}_n\doteq \hat{p}_n^1$ is an unbiased estimator of $p_n$ and therefore the variance of $\hat{p}_n$ depends on the sampling distribution only through the second moment of $\hat{p}_n$. Because $p_n$ decays exponentially, one would like the second moment of $\hat{p}_n$ to decay exponentially as well. However, Jensen’s inequality implies that $$\limsup_n -\frac{1}{n}
\log \hat{\mathbb E}[\hat p_n^2] \le
\limsup_n -\frac{2}{n} \log \hat{\mathbb E}[\hat p_n]\equiv 2\gamma.$$ In other words, the exponential decay rate of the second moment can be at most twice that of the probability. The IS estimator is said to be [*asymptotically optimal*]{} if the upper bound is achieved, i.e., if $\liminf_n -\frac{1}{n}\log \hat{\mathbb E}[\hat p_n^2] \ge 2\gamma.$
Definitions from the subsolution approach
-----------------------------------------
In this subsection we will give only the definitions from the subsolution approach that we need to present the results and the algorithm for the tree Jackson networks. A full development of the subsolution approach ideas can be found in [@duphui-is3; @duphui-is4; @DSW].
#### Hamiltonians, the limit HJB equation and the boundary conditions.
For a bitmap $b\in\{0,1\}^d$ and $q \in {\mathbb R}^d$ define $$\begin{aligned}
\label{e:Hams}
N_b(q) &\doteq
\lambda e^{-q(1)/2}+
\sum_{i:b(i)=1}
\sum_{ i\rightarrow j} \mu_{i,j} e^{\frac{q(i) - q(j)}{2}}\notag
+
\sum_{i:b(i) = 1} \mu_{i,0} e^{\frac{q(i)}{2}}
+
\sum_{i:b(i) = 0 } \mu_i,\\
H_b(q) &= -2\log N_b(q).\end{aligned}$$ $H_b$ is the Hamiltonian associated with boundary $b$. We denote $H_b$ by $H$ if $b=(1,1,1,\dots,1,1)$.
For $x \in {\mathbb R}^d_+$, define $b_x\in\{0,1\}^d$ as follows: $$\label{e:bx}
b_x(i) \doteq \begin{cases}
0,& \text{ if } x(i) = 0,\\
1,& \text{ otherwise.}
\end{cases}$$ $b_x$ indicates which boundary $x$ is on (if $b_x=(1,1,\dots,1,1)$ then $x$ is in the interior of ${\mathbb R}^d_+$).
#### Definition of a subsolution.
The limit HJB equation and its boundary conditions that are in the center of the subsolution approach are as follows: $$\label{e:DPE}
H(DV(x))=0,~~
H_{b_x}(DV(x)) = 0,$$ where $DV$ denotes the gradient of $V$. A subsolution to is defined as follows:
\[d:smoothsubsolg\] $\bar{V}$ is an $\epsilon$-subsolution to if it is $C^1({\mathbb R}^d,{\mathbb R})$ and
1. $~H_{b_x}( D\bar{V}(x)) \geq -\epsilon$ for all $x \in {\mathbb R}^d_+$,
2. $~\bar{V}(0) \ge 2\gamma -\epsilon$,
3. $~\bar{V}(x)\leq \epsilon ,x\in {\mathcal S},$
where $\gamma$ is the decay rate associated with the buffer structure ${\mathcal S}$.
For $q \in {\mathbb R}^d$ and bitmap $b$ define the jump probabilities: $$\label{e:jump}
\bar{p}^*_b(q)(v_{i,j})
=\begin{cases}
\lambda \frac{\exp(-q(j)/2)}{N_b(q)},~~~~& i = 0, ~j=1\\
\mu_{i,j}\frac{\exp( (q(i)-q(j))/2)}{N_b(q)},~~~~& i \neq 0, b(i) = 1, i\rightarrow j\\
\mu_{i,0} \frac{ \exp(q(i)/2)}{N_b(q)},~~~~& i \neq 0,~~ b(i)= 1 \\
\mu_{i,j}\frac{1}{N_b(q)},~~~~&i \neq 0,~ b(i)= 0, i\rightarrow j \text{ or } j = 0.
\end{cases}$$ Any smooth function $W:{\mathbb R}^d \rightarrow {\mathbb R}$ can be used to define a stochastic kernel $\bar{p}$ as follows: $$\label{e:naive}
\bar{p}_W(v | x) = \bar{p}^*_{b_x}(v| DW(x/n)),$$ where $DW$ is the gradient of $W$.
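The tilted jump probabilities $\bar{p}^*_b(q)$ displayed above can be sketched directly from their definition, using the normalization $\lambda + \sum_i \mu_i = 1$. Container names (`mu`, `b`, `q`) are illustrative: `mu` maps `(i, j)` to $\mu_{i,j}$, `b[i] = 1` marks a nonempty node, and $q(0)=0$ by convention.

```python
import math

def tilted_probs(mu, lam, b, q, root=1):
    qv = lambda j: q.get(j, 0.0)                 # q(0) = 0 by convention
    N = lam * math.exp(-qv(root) / 2.0)          # N_b(q), built term by term
    for (i, j), rate in mu.items():
        # nonempty nodes are tilted; empty nodes contribute their raw rates
        N += rate * math.exp((qv(i) - qv(j)) / 2.0) if b[i] else rate
    p = {(0, root): lam * math.exp(-qv(root) / 2.0) / N}
    for (i, j), rate in mu.items():
        tilt = math.exp((qv(i) - qv(j)) / 2.0) if b[i] else 1.0
        p[(i, j)] = rate * tilt / N
    return p

# With q = 0 the tilt disappears and the original jump probabilities return.
mu = {(1, 2): 0.2, (1, 0): 0.2, (2, 0): 0.3}
p0 = tilted_probs(mu, lam=0.3, b={1: 1, 2: 1}, q={1: 0.0, 2: 0.0})
```

The probabilities sum to one by construction, since $N_b(q)$ is exactly the sum of the tilted rates.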
Theorem 4.1.1 of [@thesis] asserts that the IS transition kernels defined by smooth subsolutions satisfying growth conditions on their Hessians are asymptotically optimal. For completeness we quote this theorem below.
\[t:optimality\] Let $\{\bar{V}_n\}$ be a sequence of $C^2([0,1]^d,{\mathbb R})$ functions that satisfy 1) $\bar{V}_n$ is an $\epsilon_n$-subsolution 2) $ \left| \frac{\partial^2 \bar{V}_n}{\partial x_i\partial x_j} \right|
\le \frac{C}{\delta_n}\text{ for }i,j \in \{1,2,...,d\},
$ for some fixed constant $C <\infty$ and a pair of nonnegative sequences $\{ \delta_n\}$ and $\{\epsilon_n\}$ that converge to $0$ and satisfy $n\delta_n \rightarrow \infty$. Then the IS scheme defined by the subsolutions $\bar{V}_n$ is asymptotically optimal.
In the next section we will construct a sequence of smooth subsolutions to that satisfy the conditions of this theorem by piecing together at most $2^d$ affine functions for the buffer structure ${\mathcal S}_1$. We will find out in Section \[s:rectangle\] that the same sequence also works for ${\mathcal S}_2$ (one individual buffer for each node).
Single shared buffer {#s:shared}
====================
In this section we will be working with ${\mathcal S}= {\mathcal S}_1 = \{x \in {\mathbb R}^d_+ : x(1) + x(2) + \cdots + x(d) = 1\}$. As noted before, ${\mathcal S}_1$ corresponds to a single buffer shared by all queues in the system. To remind the reader, we are interested in the overflow probability: $
p_n \doteq P_{\bf s}( X \text{ hits } S_n \text{ before it hits } 0 ),
$ where $S_n \doteq \{x \in {\mathbb Z}^d_+: x/n \in {\mathcal S}_1 \}$. It is proved in [@GlassKou] that $$\label{e:ldres}
\lim_{n\rightarrow \infty} -\frac{1}{n} \log p_n = \gamma_1= \min_{i} -\log \rho_i.$$ In particular, this implies that ${\mathcal S}_1$ satisfies the conditions of Assumption \[as:bs\].
The smooth subsolution {#ss:ssubsol}
----------------------
We define the following quantities to write down the subsolution to that we have in mind.
#### The effective rate $M_i(b)$ of node $i$ at boundary $b$.
$$\label{e:Mib}
M_i(b) \doteq \begin{cases}
\mu_i, & \text{ if $b(i)=1$},\\
\min\left(\mu_i,
\sum_{k:i\rightarrow k}
M_k(b) + \mu'_{i,0}\right), & \text{ if $b(i)=0$},
\end{cases}$$
where $
\mu'_{i,0} \doteq \Lambda_i \frac{\mu_{i,0}}{\mu_i}
$ is the traffic that leaves the system through node $i$. The recursive formula is the main ingredient of our construction and is suggested by the definition of the Hamiltonians and the HJB equation to which we are constructing a subsolution. The form of the recursion and the role $M_i(b)$ plays in the solution suggest the following interpretation: the recursion computes an “effective” service rate for each node, taking into account whether the node is empty or nonempty. If a node is nonempty, its effective service rate is simply its service rate. If the node is empty, the recursion treats it as a system whose components are the nodes it directly feeds and computes its effective rate as the total of the effective rates of these components. The service rate remains an upper bound on the effective rate: if this total exceeds the service rate, the effective rate is set to the service rate. In this interpretation $\mu_{i,0}'$ can be thought of as the effective rate, for the empty node $i$, of the world outside the network.
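The recursion for $M_i(b)$ can be sketched as follows; helper names are illustrative, with `mu` mapping `(i, j)` to $\mu_{i,j}$ and `Lambda` holding the arrival rates.

```python
# Sketch of M_i(b): a nonempty node keeps its service rate mu_i; an empty
# node gets the total effective rate of the nodes it feeds plus the exit
# stream mu'_{i,0} = Lambda_i * mu_{i,0} / mu_i, capped at mu_i.

def effective_rate(i, b, children, mu, Lambda):
    mu_i = sum(rate for (a, j), rate in mu.items() if a == i)
    if b[i]:
        return mu_i
    exit_rate = Lambda[i] * mu.get((i, 0), 0.0) / mu_i   # mu'_{i,0}
    feed = exit_rate + sum(effective_rate(k, b, children, mu, Lambda)
                           for k in children[i])
    return min(mu_i, feed)

# Two-node example: on the all-empty boundary the recursion returns the
# arrival rates Lambda_i, so the effective utilities are 1 there.
children = {1: [2], 2: []}
mu = {(1, 2): 0.2, (1, 0): 0.2, (2, 0): 0.3}
Lambda = {1: 0.3, 2: 0.15}
M_empty = {i: effective_rate(i, {1: 0, 2: 0}, children, mu, Lambda)
           for i in children}
```

The all-empty check illustrates why the boundary with all nodes empty yields the zero effective gradient.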
#### The effective utility $\rho_i(b)$
$
\doteq \frac{\Lambda_i}{M_i(b)}.
$ The effective utility of a node is the ratio of its arrival rate to its effective service rate. If node $i$ is nonempty then it coincides with the ordinary utility $\rho_i$.
#### The effective gradient $q\in {\mathbb R}^d$ associated with boundary $b$.
$$\label{e:qi}
q(i) \doteq 2 \log \rho_i(b) = 2\log \frac{\Lambda_i}{M_i(b)},$$
where $q(i)$ denotes the $i^{th}$ component of the vector $q$. We will use the affine functions defined by the effective gradients to construct our subsolution of . The effective gradient $q$ of the boundary $b$ will be the gradient of the smooth subsolution around that boundary.
For each boundary $b$ there is an effective gradient $q$. It may happen that two boundaries $b_1$ and $b_2$ have the same effective gradients. Let $EG\doteq\{q_1$, $q_2$,...,$q_L\}$, $L \le 2^d $, be the set of unique effective gradients. We identify two extreme elements of the set $EG$: firstly, the effective gradient corresponding to the boundary $0=(0,0,0,\dots,0,0)$ (all nodes empty) is $0=(0,0,0,\dots,0,0)$ (this follows from and the definition of $\mu'_{i,0}$). Secondly, the effective gradient corresponding to the boundary $1=(1,1,1,\dots,1,1)$ (all nodes non-empty) is the vector whose $i^{th}$ component is $2\log(\Lambda_i/\mu_i)$.
Now define $$\label{e:defmib}
m_i(b) \doteq \begin{cases}
\mu_i, & \text{ if $b(i)=1$},\\
\sum_{k:i\rightarrow k} m_k(b) + \mu'_{i,0}, & \text{ if $b(i)=0$}.
\end{cases}$$ The simple gradient $q=(q_1,q_2,...,q_d)$ associated with boundary $b$ is defined as $
q(i) \doteq 2\log \frac{\Lambda_i}{m_i(b)}
$ where as before $\Lambda_i$ is the arrival rate to node $i$. The following lemma relates simple and effective gradients. Bitmaps $b'$ and $b$ satisfy $b' \ge b$ if $b'(i) \ge b(i)$ for all $i \in \{1,2,3,...,d\}.$
\[l:effsimp\] Let $q$ be the effective gradient associated with boundary $b$. Then there exists a boundary $\bar{b} \ge b$ such that $q$ is the simple gradient associated with $\bar{b}$.
If $b=(1,1,1,...,1,1)$ then there is nothing to prove because for this boundary the effective gradient and the simple gradient are the same. So assume that there are some empty nodes indicated by $b$. $\bar{b}
\ge b$ is constructed as follows. Initially set $\bar{b} = b$. For each empty node $i$ in $b$ set $\bar{b}(i)$ to $1$ if $M_i(b) = \mu_i$ (see ). It is clear that 1) $\bar{b} \ge b$ and 2) the effective and simple gradients of $\bar{b}$ are the same vector, which is the effective gradient of $b$.
\[d:defa\] For an effective gradient $q_l \in EG$ let $\bar{b}$ be the boundary whose simple gradient equals $q_l$. Define $\alpha_l$ to be the number of $0$’s in $\bar{b}$ plus $1$.
The $\alpha_l$’s will determine the size of the regions where the change of measure defined by $q_l$ is used for IS. Now define the piecewise affine subsolution $$\label{e:daf}
W^{\epsilon}_l(x) = 2\gamma_1 -\alpha_l\epsilon + \langle q_l, x \rangle,~~ W^{\epsilon}(x)= \bigwedge_{l=1}^L W^{\epsilon}_l(x),$$ where $L$ is the number of effective gradients and $q_l$ are the effective gradients. $W^{\epsilon}$ is piecewise affine and not smooth in general. To obtain the sequence of smooth subsolutions satisfying the assumptions of Theorem 4.1.1 of [@thesis] one has to let $\epsilon$ depend on $n$ and then smooth $W^{\epsilon}$. One smoothing method that is simple and easy to implement on a computer is the following [@duphui-is3]. Define $$\label{e:Wep}
W^{\epsilon,\delta}(x) \doteq -\delta
\log\sum_{l=1}^L \exp\left\{-\frac{1}{\delta} W^{\epsilon}_l(x)\right\}.$$ This smoothing algorithm is based on the following fact: For $d$ real numbers $a_1$, $a_2$ ,..., $a_d$: $
-\lim_{\delta\rightarrow 0 }
\delta \log\left( \sum_{i=1}^d e^{-a_i/\delta }\right) =
\bigwedge_{i=1}^d a_i.
$ By Lemma 3.12 of [@DSW], $W^{\epsilon,\delta} \rightarrow W^{\epsilon}$ uniformly as $\delta \rightarrow 0$. In addition, $W^{\epsilon,\delta}$ is continuously differentiable and a simple direct calculation gives $$\label{e:wi}
DW^{\epsilon,\delta}(x) = \sum_{l=1}^L w^{\epsilon,\delta}_l(x) q_l,
~~
w_l^{\epsilon,\delta}(x) \doteq
\frac{\exp\left\{-{W}_l^\epsilon(x)/\delta\right\}}
{\sum_{k=1}^L \exp\left\{- {W}_k^\epsilon(x)/\delta \right\}}.$$
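As a quick numerical sanity check of this smoothing, the soft-min and the weights $w_l^{\epsilon,\delta}$ can be sketched as follows (plain Python; the input values are hypothetical stand-ins for the affine pieces $W^{\epsilon}_l(x)$):

```python
import math

def smooth_min(values, delta):
    """Soft-min: -delta * log(sum_l exp(-a_l/delta)) -> min_l a_l as delta -> 0."""
    m = min(values)
    # Subtract the minimum before exponentiating for numerical stability.
    return m - delta * math.log(sum(math.exp(-(a - m) / delta) for a in values))

def weights(values, delta):
    """The exponential weights w_l that blend the gradients q_l in DW."""
    m = min(values)
    e = [math.exp(-(a - m) / delta) for a in values]
    s = sum(e)
    return [x / s for x in e]

vals = [1.0, 1.3, 2.0]
print(smooth_min(vals, 0.5))   # a smooth lower approximation of min(vals) = 1.0
print(smooth_min(vals, 0.01))  # very close to 1.0
print(weights(vals, 0.1))      # the weight concentrates on the smallest piece
```

The soft-min always lies below the true minimum, which is why the subsolution property is preserved up to the $\alpha_l\epsilon$ corrections.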
\[l:main\] $W^{\epsilon,\delta}$ defined in satisfies:
1. $ H_{b_x}(DW^{\epsilon,\delta}(x)) \ge -C_1 \exp\left(-\frac{\epsilon}{\delta}\right),$
2. $W^{\epsilon,\delta}(0) \ge 2\gamma_1 - \epsilon
\left(\frac{\delta}{\epsilon}\log\sum_{l=1}^L \exp\left\{\frac{\alpha_l}{\delta/\epsilon}\right\}\right),$
3. $W^{\epsilon,\delta}(x) \le 0$ for $x \in {\mathcal S}_1$,
4. $ \left|\frac{\partial^2 W^{\epsilon,\delta}}{\partial x_i \partial x_j }\right|
\le \frac{C_2}{\delta},
$
where $C_1$ and $C_2$ are constants that only depend on the parameters of the network (arrival and service rates and the routing probabilities).
The proof of Lemma \[l:main\] is in Appendix \[a:proof\]. This lemma directly implies that, for $\epsilon_n = -\delta_n \log\delta_n$ and $\delta_n$ chosen such that $\delta_n \rightarrow 0$ and $n\delta_n \rightarrow \infty$, the sequence of smooth subsolutions $W^{\epsilon_n,\delta_n}$ (where $W^{\epsilon,\delta}$ is defined as in ) satisfies the conditions of the optimality Theorem 4.1.1 of [@thesis]. This means that the IS scheme defined by these subsolutions through is asymptotically optimal.
Here we repeat an idea from [@duphui-is3; @DSW]. The formula can be used to translate any smooth function into an IS transition kernel. However, for the smooth subsolutions there is a slightly different way of defining IS transition kernels which turns out to be very convenient in computer simulations.
For $x \in {\mathbb Z}_+^d$ define $$\label{e:directav}
\bar{p}^*(v_{i,j}|x) = \sum_{l=1}^L w_l^{\epsilon,\delta}(x/n)\bar{p}_{b_x}^*(q_l)
(v_{i,j}),$$ i.e., we switch the order of taking the average against the weights $w_l^{\epsilon,\delta}$ and applying the map $\bar{p}^*_{b_x}(\cdot)$ of . The advantage of $\bar{p}^*$ of is that it requires the computation of $\bar{p}_b^*(q_l)$ only once, at the beginning of the estimation procedure. During the simulation only the weights are computed dynamically, and averages of the precomputed $\bar{p}_b^*(q_l)$ serve as the IS rates. Theorem 4.1.1 of [@thesis] does not cover this way of computing the IS rates; however, modifying the theorem to accommodate direct averaging entails no significant changes. In the next section we report on the numerical performance of these algorithms.
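The direct-averaging step itself is a convex combination of precomputed distributions. A minimal sketch, with hypothetical dictionaries standing in for the precomputed $\bar{p}^*_{b_x}(q_l)$ and a hypothetical weight vector for $w_l^{\epsilon,\delta}(x/n)$:

```python
def is_distribution(w, precomputed):
    """Convex combination of precomputed per-gradient IS transition
    distributions; only the weights w are recomputed during simulation."""
    out = {}
    for wl, dist in zip(w, precomputed):
        for v, p in dist.items():
            out[v] = out.get(v, 0.0) + wl * p
    return out

# Hypothetical transition distributions for two effective gradients.
p1 = {"arrival": 0.6, "service_1_0": 0.4}
p2 = {"arrival": 0.2, "service_1_0": 0.8}
mix = is_distribution([0.75, 0.25], [p1, p2])
print(mix)  # a convex combination of p1 and p2, still summing to 1
```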
Interpretation of the IS algorithm defined by the subsolution
-------------------------------------------------------------
Let $b$ be a boundary and $q$ its effective gradient. essentially uses $\bar{p}_b(q)$ as the IS change of measure when the queueing process is on the boundary $b$ and away from the lower dimensional boundaries contained in $b$. Looking at and one sees that $\bar{p}_b(q)$ is simply the following change of measure: $$\label{e:simple}
\bar{\mu}_{i,j} =\begin{cases} \mu_{i,j}, ~~&\text{if node $i$ is empty},\\
\mu_{i,j}
\frac{\rho_i(b)}{\rho_j(b)}
, ~~&\text{if node $i$ is nonempty},
\end{cases}$$ where $\rho_i(b)$ and $\rho_j(b)$ are the effective utilities of nodes $i$ and $j$. These new rates are renormalized so that they sum to $1$. By convention $\rho_0(b) = 1$, i.e., the outside of the system is thought of as a node with utility $1$. The IS scheme given by uses a convex combination of when the simulated queuing process transitions from one boundary to another.
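The change of measure in the display above can be sketched in a few lines. The rates and utilities below are hypothetical (a two-node tandem with both nodes nonempty), with node $0$ playing the role of the outside, $\rho_0 = 1$:

```python
def twisted_rates(rates, rho, empty):
    """mu_{i,j} is left alone when node i is empty and scaled by
    rho_i / rho_j otherwise; everything is then renormalized to sum to 1.
    Node 0 is the outside: rho[0] = 1 and it is never empty."""
    out = {}
    for (i, j), mu in rates.items():
        out[(i, j)] = mu if i in empty else mu * rho[i] / rho[j]
    total = sum(out.values())
    return {k: v / total for k, v in out.items()}

# Hypothetical tandem: lambda = 0.1 on edge (0,1), mu_1 = mu_2 = 0.45.
rates = {(0, 1): 0.1, (1, 2): 0.45, (2, 0): 0.45}
rho = {0: 1.0, 1: 0.1 / 0.45, 2: 0.1 / 0.45}
print(twisted_rates(rates, rho, empty=set()))
```

For this example with equal utilities the twisted rates interchange the arrival and the exit service rate, consistent with the remark below that the rate from outside is always increased and the rate to outside always decreased.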
illustrates well how the IS change of measure given by the subsolution approach works. In the course of a simulation, the IS change of measure depends on which nodes are currently empty and nonempty. The service probabilities of empty nodes are not modified. The service probability $\mu_{i,j}$ of a nonempty node $i$ is modified through a comparison of the traffic at the source $i$ and the target $j$; the service rate is increased if the source is busier, decreased otherwise. The goal seems to be to direct traffic to the less strained node. The traffic is measured by the effective utilities. For an empty node the effective utility is a value that takes into account the traffic in the nodes that follow it immediately. We also note that the arrival rate $\lambda$ is replaced by $\bar{\lambda} = \lambda\frac{1}{\rho_1(b)}$ which is always larger than $\lambda$. Therefore the rate of traffic from outside is always increased. Similarly, the rate of traffic to outside is always decreased.
We would like to also note that the standard state independent heuristic IS algorithms based on large deviations results can be thought of as variants of in which the standard utilities are used instead of the effective utilities.
Numerical Results {#s:numerical}
=================
#### Choice of $\epsilon$ and $\delta$.
The IS algorithm defined by $W^{\epsilon,\delta}$ of has two parameters, $\epsilon$ and $\delta$. The optimality Theorem \[t:optimality\] suggests $\delta \approx C/n$ and $\epsilon \approx -\delta \log \delta$. The asymptotic optimality criterion is not precise enough to impose a value for $C$; we chose this constant based on experimental evidence. Once $\epsilon$ and $\delta$ are fixed, $\bar{p}^*(v|x)$ of is used as the IS change of measure. The effective gradients $q_1,q_2,...,q_L$ and their $\alpha_l$’s are computed by iterating over all boundaries $b$ and computing the effective gradient of each of them using the formulas and and Definition \[d:defa\].
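This parameter choice can be sketched as below; the constant $C = 2.5$ is a hypothetical placeholder for the experimentally tuned value:

```python
import math

def is_parameters(n, C=2.5):
    """delta ~ C/n and epsilon ~ -delta * log(delta), per the optimality
    theorem; C itself must be tuned empirically."""
    delta = C / n
    return -delta * math.log(delta), delta

eps, delta = is_parameters(30)
print(eps, delta)  # roughly the magnitudes used in the examples below
```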
In the following subsections we present simulation results for various Jackson networks with a tree topology. In all the estimations $K=10000$ sample paths were used.
#### Example 1.
We first consider the network in Figure \[f:n11\].
Let us consider the case when $\lambda = 0.04, \mu_{1,2} =\mu_{1,0}= 0.12,$ $\mu_{2,0} = \mu_{2,3} = \mu_{2,4} = 0.08,$ $\mu_{3,0} = \mu_{3,1} = \mu_{4,0} = \mu_{4,1}= 0.12.$ The node utilities in this case are: $ \rho_1 = 1/6$, $\rho_2 = 1/12$, $\rho_3 =\rho_4 = 1/36.$ In this example, the utilities are unevenly distributed and node 1 is the most strained node. We take $n=30$. For $n=30$, and with this four dimensional system, it is possible to compute $p_{30}$ without any simulation using the Markov property and straightforward iteration. Such a computation yields $p_{30}=3.269 \times 10^{-23}$. For the subsolution based IS algorithm we take $\epsilon=0.25$ and $\delta=0.08$. There turn out to be only five effective gradients for the given rate values above.
Exact probability $p_{30} = 3.269 \times 10^{-23}$
Estimate $\hat{p}_n$ Standard Error 95 % CI
-------- ------------------------ ------------------------ --------------------------------
Est. 1 $3.50 \times 10^{-23}$ $0.19 \times 10^{-23}$ $[ 3.12,3.88] \times 10^{-23}$
Est. 2 $3.22 \times 10^{-23}$ $0.16 \times 10^{-23}$ $[ 2.89,3.54] \times 10^{-23}$
Est. 3 $3.28 \times 10^{-23}$ $0.17 \times 10^{-23}$ $[ 2.94,3.61] \times 10^{-23}$
Est. 4 $3.32 \times 10^{-23}$ $0.17 \times 10^{-23}$ $[ 2.98,3.66] \times 10^{-23}$
Est. 5 $3.16 \times 10^{-23}$ $0.16 \times 10^{-23}$ $[ 2.84,3.48] \times 10^{-23}$
: Simulation Results for Example 1[]{data-label="t:sim2"}
The results of five consecutive estimations using the subsolution based IS algorithm are displayed in Table \[t:sim2\]. The ‘standard error’ column is the standard error of each estimation. The $95\%$ confidence intervals are $\hat{p}_n+[-2SE, 2SE]$, where $SE$ is the standard error displayed in the standard error column. These intervals are only formal, i.e., we make no assertion about the normality of these errors. Note that the estimation results are very close to the exact value and the “$95\%$ confidence intervals” are accurate: in all these estimations the exact value happened to be in the computed confidence interval. In total, all five estimations took around 20 seconds on an ordinary laptop manufactured in 2004.
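The estimate, standard error, and formal confidence interval are computed from the IS samples in the usual way; a sketch (toy samples only, not the ones behind Table \[t:sim2\]):

```python
import math

def is_estimate(samples):
    """Point estimate, standard error, and formal 95% CI from IS samples
    (each sample is the rare-event indicator times the likelihood ratio)."""
    K = len(samples)
    mean = sum(samples) / K
    var = sum((s - mean) ** 2 for s in samples) / (K - 1)  # sample variance
    se = math.sqrt(var / K)
    return mean, se, (mean - 2 * se, mean + 2 * se)

# Toy samples; a real run would use K = 10000 weighted indicators.
p_hat, se, ci = is_estimate([0.0, 2.0, 1.0, 1.0])
print(p_hat, se, ci)
```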
#### Example 2.
Now we look at the 8-node network depicted in Figure \[f:n31\].
We take the arrival rate $\lambda = 0.1248$. The service rates are taken to be: $ \mu_{1,2} = 0.062442 $, $ \mu_{1,3} = 0.1874$, $ \mu_{1,4} = 0.062442 $, $ \mu_{1,0} = 0.062517 $, $\mu_{2,0} = 0.06$, $\mu_{3,0} = 0.036$, $\mu_{3,5} = 0.072$, $\mu_{3,6} = 0.072$, $\mu_{4,0} = 0.03$, $\mu_{4,7} = 0.03$, $\mu_{5,0} = 0.0365$, $\mu_{5,8} = 0.0365$, $\mu_{6,0} = 0.073$, $\mu_{7,0} = 0.025$, $\mu_{8,0} = 0.028$.
For this choice of the network parameters, the utility of each node turns out to be approximately: $\rho_1 = 0.331738,$ $\rho_2= 0.3465,$ $\rho_3= 0.3466,$ $\rho_4=0.3465$, $\rho_5 =0.3419$, $\rho_6=0.3466,$ $\rho_7=0.3465,$ $\rho_8 = 0.4158.$ All nodes are similarly utilized, although the load on node 8 is slightly heavier than the rest. A straightforward simulation with $10^{8}$ samples estimates $p_{30}$ to be $1.2 \times 10^{-6}$ with a standard error of $1.1\times 10^{-6}$. The subsolution based IS simulation results are given in Table \[t:8n\]. The parameters of the algorithm are taken to be $\epsilon= 0.4$ and $\delta = 0.1$. Each estimation uses 10000 samples. For this network there are $256$ effective gradients. Total run time for all five estimations was about 20 minutes.
Estimate $\hat{p}_n$ Standard Error 95 % CI
-------- ----------------------- ----------------------- -------------------------------
Est. 1 $1.11 \times 10^{-6}$ $0.17 \times 10^{-6}$ $[ 0.78,1.44] \times 10^{-6}$
Est. 2 $1.69 \times 10^{-6}$ $0.32 \times 10^{-6}$ $[ 1.04,2.34] \times 10^{-6}$
Est. 3 $1.25 \times 10^{-6}$ $0.18 \times 10^{-6}$ $[ 0.89,1.61] \times 10^{-6}$
Est. 4 $1.94 \times 10^{-6}$ $0.51 \times 10^{-6}$ $[ 0.92,2.97] \times 10^{-6}$
Est. 5 $1.23 \times 10^{-6}$ $0.17 \times 10^{-6}$ $[ 0.89,1.56] \times 10^{-6}$
: Simulation results for the network with eight nodes[]{data-label="t:8n"}
As can be seen, the subsolution based IS algorithm performs very well for this high dimensional system too: the estimate is within the $95\%$ confidence interval of the MC estimator and the formal $95\%$ confidence intervals of the IS simulation do not fluctuate wildly.
Individual Buffers for each Node {#s:rectangle}
================================
In this section we look at the buffer structure ${\mathcal S}_2$: for $\beta \in {\mathbb R}^d_+$ $${\mathcal S}_2 = \{x \in {\mathbb R}^d_+ : x(i) = \beta(i) \text{ for some $i$ and } x(j) \le \beta(j) \text{ for all }j \}.$$ As we noted before, $S_n \doteq \{x \in {\mathbb Z}^d_+: x/n \in {\mathcal S}_2 \}$ corresponds to $d$ independent buffers, one for each node. The size of the buffer for node $i$ is given by $n\beta(i)$. Without loss of generality we will assume that $\vee_i \beta(i) = 1.$ We are, as before, interested in: $
p_n \doteq P_{\bf s}( X \text{ hits } S_n \text{ before it hits } 0 ),
$ where $
{\mathbf s} = (1,0,0,\dots,0). $ One can prove, using arguments similar to those in [@GlassKou] that $$\label{e:rectrate}
\lim_{n\rightarrow \infty} -\frac{1}{n} \log p_n = \gamma_2 = \min_{i} -\beta(i)\log\rho_i,$$ where $\rho_i$ are the node utilities. In particular, this implies that ${\mathcal S}_2$ satisfies the conditions of Assumption \[as:bs\]. Our goal now is to prove that the IS algorithm defined by $W^{\epsilon_n,\delta_n}$ is asymptotically optimal for the buffer structure ${\mathcal S}_2$ as well (when buffer structure is changed to ${\mathcal S}_2$, $\gamma_1$ in needs to be replaced with $\gamma_2$). To prove this, it is enough to prove a version of Lemma \[l:main\] for ${\mathcal S}_2$. Note that only item 3 of this lemma depends on ${\mathcal S}$ and therefore we only have to prove that the same item holds for ${\mathcal S}_2$, which is done in the next lemma.
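The decay rate $\gamma_2$ is a one-line computation once the utilities are known; a sketch with hypothetical buffer fractions $\beta(i)$ and utilities $\rho_i$:

```python
import math

def gamma_2(beta, rho):
    """Decay rate for per-node buffers: min_i -beta(i) * log(rho_i); the
    node attaining the minimum dominates the overflow probability."""
    return min(-b * math.log(r) for b, r in zip(beta, rho))

# Hypothetical two-node example.
print(gamma_2([1.0, 0.5], [0.25, 0.1]))
```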
\[l:rectangle\] Define $
W^{\epsilon}_l(x) \doteq 2\gamma_2 -\alpha_l\epsilon + \langle q_l,x \rangle,
$ where $\alpha_l$ and $q_l$ are defined as in and Definition \[d:defa\] and $\gamma_2$ is the large deviation rate associated with the boundary ${\mathcal S}_2$ . Define $W^{\epsilon,\delta}$ by the expression . Then: $
W^{\epsilon,\delta}(x) \le 0.
$ for $x \in {\mathcal S_2}$.
Take any $x\in{\mathcal S}_2$. Then, there is an $i \le d$ such that $x(i)=\beta(i)$. Let $q_L$ be the effective gradient of the boundary $1=(1,1,1,\dots,1,1)$. $$W(x) = -\delta
\log\sum_{l=1}^L \exp\left\{-\frac{1}{\delta}(2\gamma_2 -\alpha_l\epsilon + \langle q_l, x \rangle)\right\}
\le 2\gamma_2 + \langle q_L, x \rangle - \alpha_L\epsilon.$$ By definition, $q_L(i) = 2\log \frac{\Lambda_i}{\mu_i}$, and the remaining components of $q_L$ are likewise negative. These facts, , $x \in {\mathbb R}^d_+$, and $x(i)=\beta(i)$ imply that the last display is less than $ -\alpha_L \epsilon.$ This finishes the proof of this lemma.
#### Numerical example
Consider a network with five nodes with the following service rates: $\mu_{1,2} = 0.038$, $\mu_{1,3} = 0.057$, $\mu_{1,0} = 0.095$, $\mu_{2,4} = 0.076$, $\mu_{2,0} = 0.114$, $\mu_{3,5} = 0.095$, $\mu_{3,0}= 0.095$, $\mu_{4,0} = 0.19$, $\mu_{5,0} = 0.19$ and $\lambda = 0.1$. We will suppose that the buffer sizes for the nodes are respectively: $15$, $15$, $17$, $18$, $19$. Then $n=19$ and $\beta(1) =\beta(2) = 15/19$, $\beta(3) = 17/19$, $\beta(4) = 18/19$, $\beta(5) = 1.$ The choice of the buffer sizes is rather arbitrary. We chose them relatively small so that it was possible to compute the buffer overflow probability $p_{19}$ using the Markov property and direct iteration. The exact value of $p_{19}$ turns out to be $p_{19} = 6.8601 \times 10^{-9}$.
The relative node utilities are: $\beta(1)\rho_1 = 0.208$, $\beta(2)\rho_2 = 0.042$, $\beta(3)\rho_3 = 0.013$, $\beta(4)\rho_4 = 0.0004$, $\beta(5)\rho_5 = 0.0008.$ Node $1$ is clearly the most strained node, and the loads on the remaining nodes are spread out. Following the same reasoning as in Section \[s:numerical\] we take $\epsilon=0.3$ and $\delta=0.1$. The IS simulation now proceeds as before. One uses $\bar{p}(\cdot|x) = \bar{p}^*(\cdot|x)$ given in for the IS change of measure.
Estimate $\hat{p}_n$ Standard Error 95 % CI
-------- ----------------------- ----------------------- -------------------------------
Est. 1 $7.33 \times 10^{-9}$ $0.42 \times 10^{-9}$ $[ 6.50,8.17] \times 10^{-9}$
Est. 2 $6.81 \times 10^{-9}$ $0.34 \times 10^{-9}$ $[ 6.12,7.50] \times 10^{-9}$
Est. 3 $7.30 \times 10^{-9}$ $0.38 \times 10^{-9}$ $[ 6.53,8.06] \times 10^{-9}$
Est. 4 $7.05 \times 10^{-9}$ $0.39 \times 10^{-9}$ $[ 6.28,7.83] \times 10^{-9}$
Est. 5 $7.01 \times 10^{-9}$ $0.37 \times 10^{-9}$ $[ 6.26,7.76] \times 10^{-9}$
: Simulation results for the case when each node has a separate buffer[]{data-label="t:sim5"}
There turns out to be only eight effective gradients (out of a maximum of 32). The results of five consecutive estimations using the subsolution based IS algorithm are displayed in Table \[t:sim5\]. Once again, the estimation results are close to the exact value $p_{19} = 6.8601 \times 10^{-9}$ and the formal $95\%$ confidence intervals are tight and happen to contain the exact value.
Discussion {#s:discussion}
==========
The goal of the present paper was to extend the IS algorithms in [@DSW], which considered tandem Jackson networks, to more general networks. Tree networks are an interesting generalization: a comparison with the algorithms in [@DSW] reveals that tree networks require considerably more sophisticated subsolutions and IS algorithms for asymptotic optimality. [@yeniDW] proves a further generalization to arbitrary stable Jackson networks. In this section we discuss how the results in [@yeniDW] relate to ours.
Let $p_{i,j} = \mu_{i,j}/\mu_i$ denote the routing probability from node $i$ to $j$, where $j$ is allowed to take the value $0$. In the notation of the present paper, the IS algorithm in [@yeniDW] can be described as follows. Define the effective rate for the boundary $b$ as: $$\label{e:Mib2}
M_i(b) \doteq \begin{cases}
\mu_i, & \text{ if $b(i)=1$,}\\
\min\left(\mu_i,
\sum_{k:i\rightarrow k}
\frac{p_{i,k} \Lambda_i }{\Lambda_k } M_k(b) + \mu'_{i,0}\right), &
\text{ if $b(i)=0$}.
\end{cases}$$ As before, if a node is nonempty under $b$, i.e., $b(i)=1$, then its effective rate is just the service rate $\mu_i$. If it is empty, one now takes a [*weighted*]{} sum of the effective rates of its neighbors; as before, this sum is capped at $\mu_i$. The weight of $M_k(b)$ is the fraction of the $k^{th}$ node’s traffic in the fluid model that comes from node $i$. This fraction is always $1$ for a tree network, and thus for such networks reduces to . Once the effective rates are defined as above, one proceeds as in subsection \[ss:ssubsol\].
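For tree networks the recursion can be run leaves-up in a few lines. A sketch (hypothetical rates; `b[i] == 1` marks a nonempty node, and `mu_out[i]` stands for $\mu'_{i,0}$):

```python
def effective_rates(children, mu, mu_out, b):
    """M_i(b): the service rate mu_i for nonempty nodes; for empty nodes,
    the sum of the children's effective rates plus mu'_{i,0}, capped at
    mu_i.  children[i] lists the nodes that node i routes to."""
    M = {}
    def rate(i):
        if i not in M:
            if b[i] == 1:
                M[i] = mu[i]
            else:
                M[i] = min(mu[i], sum(rate(j) for j in children[i]) + mu_out[i])
        return M[i]
    for i in mu:
        rate(i)
    return M

# Hypothetical three-node tree: node 1 feeds nodes 2 and 3; node 1 is empty.
M = effective_rates({1: [2, 3], 2: [], 3: []},
                    {1: 0.5, 2: 0.2, 3: 0.3},
                    {1: 0.05, 2: 0.02, 3: 0.03},
                    {1: 0, 2: 1, 3: 1})
print(M)  # node 1's sum 0.2 + 0.3 + 0.05 is capped at mu_1 = 0.5
```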
We note that is a recursive formula: one can start from the leaves of the network and go up and compute all effective gradients using . In the case of general Jackson networks is an equation that needs to be solved; as observed in [@yeniDW], it can be solved by reducing it to a linear equation, which is a generalization of . It can also be directly solved using itself and an iterative method.
Another contribution of [@yeniDW] is the identification of the large deviation decay rate $\gamma$ of $p_n$ for any exit boundary ${\mathcal S}$ for which such a rate exists. In the notation of the present paper, [@yeniDW Proposition 3.1] asserts that $$\gamma = \inf_{x \in {\mathcal S} } - \langle q, x \rangle$$ where $q$ is the effective or simple gradient of $b=(1,1,1,\dots,1).$ As noted in [@yeniDW] this implies that the IS change of measure given by , or for the case of tree networks, is asymptotically optimal for any buffer structure ${\mathcal S}$ for which there is a large deviation decay rate.
Finally, we would like to point out a parametrization that seems most natural for . Define ${\bf M}_i\doteq 1/\rho_i$ and ${\bf M}_i(b)\doteq M_i(b)/\Lambda_i$. The first is the ordinary service to arrival ratio of node $i$. The second can be thought of as the effective service to arrival ratio of the same node when the system is on boundary $b$. By convention let ${\bf M}_0(b) = 1$, i.e., the service to arrival ratio of the outside of the system is $1$. In terms of these new variables is simply: $$\label{e:Mib3}
{\bf M}_i(b) \doteq \begin{cases}
{\bf M}_i, & \text{ if $b(i)=1$}\\
\min\left({\bf M}_i, \sum_{k:i\rightarrow k} p_{i,k} {\bf M}_k(b)\right) , & \text{ if $b(i)=0$},
\end{cases}$$ where the value $k=0$ is allowed in the summation to denote the outside of the system. If node $i$ is empty, its effective service to arrival ratio is taken to be the average of the effective ratios of the nodes that are directly connected to $i$. The average is taken with respect to the routing probabilities. As before, the ordinary service to arrival ratio is an upper bound on the effective one: if the average exceeds the ordinary ratio, the effective ratio is set to the ordinary one.
The effective gradient $q$ for $b$ will have components $-2\log {\bf M}_i(b)$. And the change of measure $\bar{p}_b(q)$ is: $$\bar{\mu}_{i,j} =\begin{cases} \mu_{i,j}, ~~&\text{if node $i$ is empty}\\
\mu_{i,j}
\frac{ {\bf M}_j(b)}{ {\bf M}_i(b) }
, ~~&\text{if node $i$ is nonempty},
\end{cases}$$ and this is renormalized so that the $\bar{\mu}_{i,j}$ sum to $1$. One can use directly to implement the IS algorithm.
Proof of Lemma \[l:main\] {#a:proof}
=========================
Before we begin, a convention: the decay rate $\gamma$ depends on the buffer structure. We used $\gamma_1$ for the shared buffer (${\mathcal S}_1$) and $\gamma_2$ for the individual buffers for each node (${\mathcal S}_2$). In the proofs we will simply write $\gamma$.
\[l:simpprop\] Let $q$ be the simple gradient associated with boundary $b$. Then $H_{\bar{b}}(q) = 0$ for any $\bar{b} \ge b$.
We first prove that $H_b(q)=0$, or equivalently $N_b(q) = 1$. Directly from the definitions , one sees that $N_b(q)=1$ if and only if
[$$\sum_{i: b(i) = 1}\left(\sum_{j:i \rightarrow j} m_j(b) + \mu'_{i,0}\right)
+ m_1(b)
=
\lambda + \sum_{i: b(i) = 1} \mu_i.$$ ]{} The definition of $\mu'_{i,0}$ directly implies that $\sum_{i=1}^d \mu'_{i,0} = \lambda.$ The above display follows from this fact and .
Next fix a $\bar{b} > b$. We will show that $N_{\bar{b}}(q) =1$. $$\label{e:difNJNI}
N_{\bar{b}}(q) - N_b(q) =
\sum_{ i: \bar{b}(i)-b(i)=1, i\rightarrow j }
\hspace{-0.3cm}
\mu_{i,j} e^{\frac{q(i)-q(j)}{2}}
+\sum_{ i:\bar{b}(i)-b(i)=1} \mu_{i,0} e^{q(i)/2}
-
\hspace{-0.3cm}
\sum_{ i :\bar{b}(i)-b(i)=1}
\mu_i$$ Fix $i$ such that $\bar{b}(i)-b(i)=1$ and let $C$ denote the terms contributed by the index $i$ in the first two sums. Our goal is now to show that $C=\mu_i$. This will imply that first two sums and the last sum in cancel each other and that $N_{\bar{b}}(q) = N_{b}(q)$. Because $b(i) = 0$ we have that $$\label{e:mirepp1}
m_i(b) = \sum_{j: i\rightarrow j} m_j(b) + \mu'_{i,0}.$$ Then $$C=
\mu_{i,0} e^{q(i)/2} +
\sum_{j: i \rightarrow j} \mu_{i,j} e^{\frac{q(i)-q(j)}{2}}
= \mu_{i,0} \frac{\Lambda_i}{m_i(b)} + \sum_{j: i \rightarrow j} \mu_{i,j}
\frac{\Lambda_i}{m_i(b)}\frac{m_j(b)}{\Lambda_j}.$$ At this point the facts $\Lambda_j = \Lambda_i \frac{\mu_{i,j}}{\mu_i}$ and $\frac{\mu_{i,0}\Lambda_i}{\mu_i}=\mu'_{i,0}$, together with and simple arithmetic, yield $C=\mu_i$. Thus the difference in is zero, i.e., $N_{\bar{b}}(q) = N_b(q) = 1$. This finishes the proof of this lemma.
\[l:egcomp\] Let $q$ be the effective gradient associated with boundary $b$. Then $ H_{b'}(q) \ge 0$ for all $b' \ge b$.
$ H_{b'}(q) \ge 0$ if and only if $
N_{b'}(q) \le 1$. By Lemma \[l:effsimp\] there exists $\bar{b} \ge b$ such that $q$ is the simple gradient associated with $\bar{b}$. Then by Lemma \[l:simpprop\] $
N_{b'}(q) =1
$ for all $b' \ge \bar{b}$. Now take any $b'$ such that $b' < \bar{b}$ and $b' \ge b$. Because $\bar{b} > b' \ge b$ we have [$$\begin{aligned}
\label{e:tmp0l5}\notag
&N_{\bar{b}}(q) - N_b(q) \\
&~~~=\sum_{ i: \bar{b}(i)-b(i) = 1}\left( \sum_{j:i\rightarrow j }
\mu_{i,j} e^{\frac{q(i)-q(j)}{2}}
+ \mu_{i,0} e^{q(i)/2} \right)
-
\hspace{-0.3cm}
\sum_{ i: \bar{b}(i)-b(i) = 1}
\mu_i \notag
\\
&~~~=
\sum_{ i: \bar{b}(i)-b(i) = 1}
\left( \sum_{j:i\rightarrow j }
\mu_{i,j}\frac{\Lambda_i}{M_i(b)}\frac{M_j(b)}{\Lambda_j}
+\mu_{i,0}\frac{\Lambda_i}{M_i(b)} \right)
-
\hspace{-0.3cm}
\sum_{ i: \bar{b}(i)-b(i) = 1}
\mu_i\notag\\
&~~~=
\sum_{ i: \bar{b}(i)-b(i) = 1}
\left(
\mu_i \frac{\sum_{j:i\rightarrow j } M_j(b) + \mu'_{i,0}}{M_i(b)}\right)
-
\hspace{-0.3cm}
\sum_{ i: \bar{b}(i)-b(i) = 1}
\mu_i\end{aligned}$$ ]{} Now by the construction of $\bar{b}$, $\bar{b}(i)-b(i)=0$ if and only if $
M_i(b) = \mu_i \le \sum_{j: i\rightarrow j} M_j(b) + \mu'_{i,0}.
$ The last display and imply $N_{\bar{b}}(q) \ge N_b(q).$ Because $N_{\bar{b}}(q) = 1$ ($q$ is the simple gradient associated with the boundary $\bar{b}$), this gives $N_b(q) \le 1$; the same argument applies to any $b'$ with $b \le b' < \bar{b}$. This finishes the proof of this lemma.
The proof of this lemma is similar to the proof of Theorem 4.31 in [@thesis]. For small positive real numbers $\delta,\epsilon$ let $W^{\epsilon,\delta}$ be defined as in . For ease of notation we will drop the superscript $(\epsilon,\delta)$ and write $W$. We would like to prove the following: there is a constant $C_1$ that only depends on the parameters of the network such that for all $x \in {\mathbb R}_+^d$ $
H_{b}(DW(x)) \ge -C_1 \exp(-\epsilon/\delta),
$ where $b$ defined in is the boundary corresponding to $x$. Let $E$ be the set of effective gradients $q$ such that there is a boundary $b' \le b$ with effective gradient $q$. Define $
q' = \sum_{q_l \in E} w_l^{\epsilon,\delta}(x) q_l,
$ where $w_l^{\epsilon,\delta}$ are the weights defined in . Once again to ease notation, we drop the superscript $(\epsilon,\delta)$. Its definition directly implies that $H_b$ is concave and Lipschitz continuous. By Lemma \[l:egcomp\] we have that $H_{b}(q) \ge 0$ for $q \in E$. This fact and the concavity of $H_{b}$ and $H_{b}(0)=0$ imply that $H_{b}(q') \ge 0.$ This, and the Lipschitz continuity of $H_b$ give $$\begin{aligned}
H_b(DW(x)) &= H_b(q') + H_b(DW(x)) - H_b(q') \ge -|H_b(DW(x)) - H_b(q')|\\
&\ge -K | q' - DW(x)| \ge -K\sum_{q_l \in E^c } w_l(x) | q_l|.\end{aligned}$$ The last inequality follows from and the triangle inequality. Therefore to prove the first part of Lemma \[l:main\] it is enough to prove $
w_l(x)\le \exp(-\epsilon/\delta),
$ for $l$ such that $q_l \in E^c$.
By its definition $w_l$ equals [$$\begin{aligned}
\label{e:boundonw}
w_l(x) = \frac{\exp\left\{-{W}_l^\epsilon(x)/\delta\right\}}
{\sum_{j=1}^L \exp\left\{- {W}_j^\epsilon(x)/\delta \right\}}
&=
\frac{ \exp\left\{(\alpha_l\epsilon - \langle q_l, x \rangle)/\delta\right\}}
{\sum_{j=1}^L \exp\left\{(\alpha_j\epsilon - \langle q_j, x \rangle)/\delta \right\}}\notag\\
&\le
\frac{\exp\left\{(\alpha_l\epsilon - \langle q_l, x \rangle)/\delta\right\}}
{\exp\left\{(\alpha_{j_0}\epsilon - \langle q_{j_0}, x \rangle)/\delta
\right\}},\end{aligned}$$ ]{} where $q_{j_0}$ is an effective gradient to be selected. By Definition \[d:defa\], $\alpha_l$ is one plus the number of $0$’s in the boundary (bitmap) $r$ whose simple gradient equals $q_l$. Form the bitmap $\tilde{r}$ from $r$ as follows: if $r(i) = 1$ but $b_x(i) = 0$ then set $\tilde{r}(i) = 0$; otherwise set $\tilde{r}(i) = r(i)$. By this construction $\tilde{r} \le b_x$ and $\tilde{r} < r $. The last inequality is strict, because otherwise we would have $r \le b_x$, which would imply, by Lemma \[l:egcomp\], $H_{b_x}(q_l) \ge 0$, which in turn contradicts $q_l \notin E$. Let $q_{j_0}$ be the effective gradient associated with the bitmap $\tilde{r}$. $\tilde{r} \le b_x$ and Lemma \[l:egcomp\] imply that $H_{b_x}(q_{j_0}) \ge 0$. This implies that $q_{j_0} \in E$ and consequently $q_{j_0} \neq q_l \in E^c$. These facts and the strict inequality $\tilde{r} < r$ imply that $\alpha_{j_0} - \alpha_l \ge 1$.
Furthermore, remember $x$ is such that $x_i = 0$ if $b_x(i) = 0$. The bitmaps $r$ and $\tilde{r}$ differ only at such $i$. Then the effective gradients of these bitmaps, namely $q_l$ and $q_{j_0}$ will also differ only at such $i$. This means $\langle q_l , x \rangle = \langle q_{j_0}, x\rangle.$ These considerations and imply $
w_l(x) \le \exp(-\epsilon/\delta)
$ and hence the first part of Lemma \[l:main\].
By its definition $$W(0) =
-\delta
\log\sum_{l=1}^L \exp\left\{-\frac{2\gamma -\alpha_l \epsilon}{\delta} \right\}
= 2\gamma -\epsilon\left(\frac{\delta}{\epsilon}
\log\sum_{l=1}^L \exp\left\{
\frac{\alpha_l}{\delta/\epsilon} \right\}\right)$$ This proves the second part of Lemma \[l:main\].
Now let us prove the third part. Let $q_L$ be the effective gradient of the boundary $1=(1,1,1,\dots,1,1)$. For $x \in {\mathbb R}^d_+$ with $x_1+x_2+\cdots+x_d =1 $ we have the following estimate: $$W(x) = -\delta
\log\sum_{l=1}^L \exp\left\{-\frac{1}{\delta}(2\gamma -\alpha_l\epsilon + \langle q_l, x \rangle)\right\}
\le \langle q_L, x \rangle +2\gamma - \alpha_L\epsilon.$$ By definition $q_L(i) = 2\log \frac{\Lambda_i}{\mu_i}$. This and imply that the last line is less than $ -\alpha_L \epsilon.$ This finishes the proof of the third part of Lemma \[l:main\]. It only remains to prove the last part. Differentiating the first expression in gives: $
\frac{\partial^2 W}{\partial x_j \partial x_i}(x) = \sum_{l=1}^L \frac{\partial w_l}{\partial x_j}(x) q_l(i).
$ Differentiating the second expression in gives: $
\frac{\partial w_l}{\partial x_j}(x) = \frac{1}{\delta} w_l(x) \left(\sum_{k=1}^L w_k(x)( q_k(j) -q_l(j))\right).
$ These imply the bound in part 4 of Lemma \[l:main\], which is what we wanted to prove.
|
UPDATE 3/7/16: Janelle Monae has joined Taraji P. Henson and Octavia Spencer in Ted Melfi’s “Hidden Figures,” the much-anticipated feature film adaptation of Margot Lee Shetterly’s book of the same name, to be published this fall by HarperCollins. The book tells the untold true story of the African American women mathematicians – Katherine Johnson, Mary Jackson, Dorothy Vaughan, Kathryn Peddrew, Sue Wilder, Eunice Smith and Barbara Holley – who worked at NASA during the Civil Rights era. |
Ligand- and cell-specific effects of signal transduction pathway inhibitors on progestin-induced vascular endothelial growth factor levels in human breast cancer cells.
We evaluated the signaling pathways involved in regulating vascular endothelial growth factor (VEGF), a potent angiogenic growth factor, in response to natural and synthetic progestins in breast cancer cells. Inhibition of the phosphoinositide-3'-kinase (PI3-kinase) signaling pathway or the specificity protein-1 (SP-1) transcription factor abolished both progesterone- and medroxyprogesterone acetate (MPA)-induced VEGF secretion from BT-474 and T47-DCO cells. Inhibitors of the MAPK kinase 1/2/MAPK and N-terminal jun kinase/MAPK signaling pathways blocked both progesterone- and MPA-induced VEGF secretion in BT-474 cells. However, these inhibitors blocked only progesterone-, but not MPA-induced VEGF secretion in T47-DCO cells. Inhibitors of PI3-kinase or SP-1 blocked both progesterone- and MPA-induced increases in VEGF mRNA levels in T47-DCO cells. The proximal SP-1 sites within the VEGF promoter were critical for progestin-dependent induction of VEGF. In contrast, MAPK inhibitors did not block the progesterone- or MPA-induced increases in VEGF mRNA in T47-DCO cells, suggesting that MAPK inhibitors decreased progesterone-induced VEGF secretion in T47-DCO cells by blocking posttranscriptional mechanisms. The MAPK kinase/ERK/MAPK-independent induction of VEGF mediated by MPA was associated with the PRB [progesterone receptor (PR) B] isoform of the PR in T47-DCO cells. None of the inhibitors tested reduced basal PR levels or abrogated PR-dependent gene expression from a reporter plasmid, indicating that loss of PR function cannot explain any of the observed effects. Because the PI3-kinase signaling pathway and SP-1 transcription factor play critical roles in progestin-dependent VEGF induction, these may be useful targets for developing antiangiogenic therapies to prevent progression of progestin-dependent human breast cancers. |
The character Stevie is played by actor Sunny Suljic, who appeared in The House with a Clock in Its Walls. Other top film stars in the cast include Katherine Waterston, who appeared in The Current War and Fantastic Beasts and Where to Find Them, and Lucas Hedges, who starred in Manchester By the Sea and Lady Bird.
Stevie is a sweet 13-year-old about to explode. His mom is loving and attentive, but a little too forthcoming about her romantic life. His big brother is a taciturn and violent bully. So Stevie searches his working-class Los Angeles suburb for somewhere to belong. He finds it at the Motor Avenue skate shop. |
Pharmacist-managed patient assessment and medication refill clinic.
The effectiveness of a pharmacist in determining the appropriateness of prescription renewal for patients appearing at a hospital-based refill clinic was investigated. In part 1, data were collected on the clinic as it traditionally functioned with staff physicians evaluating the patients. In parts 2 and 3, data were collected with the pharmacist assuming the assessment function. In part 2, a physician reviewed the pharmacist's decisions before the patient left. In part 3, the pharmacist functioned without supervision and the physician reviewed patient records retrospectively. Physician agreement with the pharmacist's decisions was the primary criterion for determining effectiveness, and was found to be 99% in part 2, for a total of 105 patients. In part 3, physician agreement remained at the 99% level for a total of 106 patients. Patient waiting time was about the same in each part of the study because of clinic procedures beyond the pharmacist's control. A pharmacist can be cost-effective in this role if the task is combined with regular pharmaceutical functions. |
Under Mayor Emanuel, CHA production of replacement housing has slowed to a near halt — to the point that it’s virtually impossible to see the agency completing its new Plan Forward goals on time, housing advocates say.
And that’s with a five-year extension to CHA’s original ten-year Plan For Transformation.
The numbers are striking: in each of the last four years under Mayor Daley, CHA produced between 760 and 880 replacement units.
In 2011, under Emanuel, CHA produced 424 units; the next year, 112 units; and in 2013, just 88.
And in its proposed plan for 2014, which was the subject of a public hearing Wednesday, CHA is proposing a grand total of 40 new public housing units.
In fact, that number includes 12 units at the new Dorchester Artists Housing located in a vacant scattered site that was rehabbed in 2005 — and already counted once toward the PFT’s goal of 25,000 replacement units, said Leah Levinger of Chicago Housing Initiative.
With a federal historic preservation review of plans for Altgeld Gardens under way, CHA has dramatically scaled back the number of units it is considering demolishing there, according to a residents group.
People for Community Recovery discovered last year that the CHA development had been found to be eligible for listing on the National Register of Historic Places in 1993, said board president Christian Strachan.
After the group contacted federal agencies for more information — and with demands for a community-led planning process — HUD initiated a Section 106 review aimed at minimizing the impact of federally-funded redevelopment on historic properties, he said.
Meanwhile a consultant hired by CHA in May to coordinate planning has discussed two possible scenarios, one involving demolition of about 120 units and one with even less demolition, according to Cheryl Johnson, executive director of PCR.
The development team hired by CHA for Lathrop Homes issued a “final draft” of their plan last week, but key details are missing and major questions remain in contention.
That includes the height of a high-rise building Lathrop Community Partners wants to build at the southern end of Lathrop — a flashpoint for neighborhood opposition — as well as issues of preservation, replacement of lost public housing, and public financing for private developers.
Built in 1938 along the Chicago River north and south of Diversey, Lathrop features low-rise brick buildings and landscapes designed by leading architects of the day. It was cited by Preservation Chicago as “the best public housing Chicago has ever built” and named to the National Register of Historic Places last year.
Preservation plan from Landmarks Illinois
CHA stopped leasing to new residents in 2000, at first promising a full renovation as public housing, then meandering through a series of planning efforts. At one point plans to demolish and replace the entire development were announced.
LCP, a consortium of for-profit and nonprofit developers led by Related Midwest, a developer of luxury high-rises, was selected by CHA to handle Lathrop’s redevelopment in 2010. LCP issued three possible scenarios for community discussion last year.
At a community meeting on the “final draft” plan last week, lead designer Doug Farr said LCP had reduced overall unit count to less than 1,200 in response to concerns about excessive density. (One way they did this, it turns out, was removing the 92-unit Lathrop senior building from the count.) Earlier plans projected 1,300 to 1,600 units.
That goes some of the way toward meeting objections of neighborhood groups and local aldermen — though they had argued that 1,300 units on the 37-acre site meant a density level two-and-a-half times that of the surrounding area. Lathrop currently has 925 units, with less than a fifth of them occupied.
LCP also reduced proposed retail development to 20,000 square feet, down from a high of 70,000 — with big box stores surrounded by surface parking — in earlier plans.
But although aldermen and neighborhood groups rejected the concept of a high-rise on the site, it’s still in the plan. LCP is just not saying how high it will be. They’re not even calling it a “high-rise.”
The Cabrini-Green Local Advisory Council and supporters will hold a press conference Thursday morning (May 16 at 9:30 a.m., 530 W. Locust) to announce “a new initiative to protect the Cabrini Row Houses,” according to a release from the Legal Assistance Foundation.
Row House residents have called on CHA to fulfill the promise in the original PFT to rehabilitate the development as 100 percent public housing; that plan was put on hold in 2011.
Meanwhile, resident leaders and community organizations called on the CHA board to reject the mayor’s plan and return to the drawing board — and to heed input from the public, including an emphasis on preservation and rehab of existing units rather than subsidizing private development as the most cost-effective way to meet CHA’s obligations.
A promised CHA town hall meeting with residents of Altgeld Gardens – scheduled twice last month, and twice cancelled at the last minute – is now slated for Wednesday.
CHA budgeted $7.3 million for “planning for demolition” of one third of Altgeld’s units in its annual plan, but after scores of Altgeld residents turned out to object, CHA promised no decisions would be made without a “community planning process” to commence with a town hall meeting in November.
A meeting scheduled for November 14 was cancelled the day before, and a rescheduled meeting on November 29 was cancelled by CHA chief executive Charles Woodyard just hours before it was to take place.
Resident leaders were told the November 29 meeting was cancelled because Woodyard had an “emergency meeting” with the mayor, said Cheryl Johnson of People for Community Recovery.
“A lot of people showed up, and the doors were just closed,” she said. There wasn’t even a sign announcing the cancellation, she added. “People were angry.”
New HUD regulations could make it much harder for CHA to get approval to demolish housing at Altgeld Gardens, Lathrop Homes, and Cabrini Row Houses, according to a veteran housing attorney who helped negotiate the change.
HUD issued a notice in February (PIH 2012-7) requiring public housing authorities that claim units are “obsolete” to demonstrate that no reasonable program of repair is feasible, said Bill Wilen of the Shriver National Center on Poverty Law. It also requires environmental and civil rights reviews, he said.
A HUD regulation governing the matter is expected to be issued early next year.
Previously, applications for demolition were routinely approved by HUD’s Special Application Center, located in Chicago, including applications that clearly failed to meet statutory requirements, Wilen said. Rejections of demolition applications by the center have increased significantly in recent months, he said.
Five years ago Wilen successfully challenged HUD’s approval of demolition of public housing in Rockford, one of several legal battles that he said informed an effort by the national Housing Justice Network to get HUD to tighten up its regulations.
There are other possible legal grounds to challenge CHA demolitions. The agency’s annual contract with HUD requires CHA to maintain units that it plans to demolish eventually. Instead, like many other public housing authorities, CHA allows housing to become deteriorated and then claims it must be demolished as “obsolete,” according to advocates.
Altgeld and Lathrop
At Altgeld Gardens, residents are gearing up for a CHA town hall meeting Thursday (November 29, 6 p.m., at the Community Building, 951 E. 132nd Place) to get community input on plans for their development.
Three new plans for redeveloping Lathrop Homes fall far short of the project’s stated goal of historic preservation – to the point that developers will pass up tens of millions of dollars in federal historic preservation tax credits.
Instead, they plan to ask for $30 million or more from a new TIF district.
The plans have garnered widespread local opposition due to heavy increases in density and congestion.
CHA and Lathrop Community Partners will present three scenarios at open houses (Thursday, November 15, 3 to 8 p.m., and Saturday, November 17, 12 to 4 p.m.) at New Life Community Church, 2958 N. Damen.
At 4:15 p.m. on Thursday, Lathrop residents and neighbors will hold a press conference to denounce all the scenarios and the lack of any meaningful community engagement.
Already thirteen neighborhood associations have signed onto a letter to CHA from Ald. Scott Waguespack (32nd) calling for rejection of all three plans due to excessive density and lack of public participation.
And Tuesday, Ald. Proco Joe Moreno (1st) sent an e-mail blast announcing the open houses and saying, “I do not believe that any of the individual scenarios on the table are an acceptable plan to move Lathrop Homes forward.”
Total demolition
In fact, one of the scenarios would almost certainly fail to win regulatory approval.
Dubbed the “Delta Greenscapes” scenario, it calls for demolition of all of Lathrop’s low-rise, historic buildings.
But since Lathrop was named to the National Register of Historic Places in April, any demolition involving federal funds must be approved by the Illinois Historic Preservation Agency and the Advisory Council on Historic Preservation. And CHA will use federal funds to cover the costs of rehabbing and operating public housing at Lathrop.
“Clearly, demolishing everything would not meet preservation guidelines and would rarely be an approvable action under the federal program,” said Michael Jackson, chief architect for preservation services at IHPA, who notes that nothing has been submitted to his agency.
Approval might be forthcoming in cases involving extreme deterioration and functional obsolescence, but “I can’t see that logic applying here,” he said. “The essence of the Lathrop project is historic preservation. It’s been identified as a historic property, and the development team has been given that direction.”
“What they’re pulling is a typical developer’s trick,” said Jonathan Fine of Preservation Chicago. “We’re going to show you something so god-awful that when we walk it back to something slightly less god-awful, the community will think it’s won something.”
Developers prefer TIF
Despite the RFQ’s request for developers with experience using historic tax credits, none of the plans are likely to qualify for the credits, which cover 20 percent of a project’s costs – in this case, tens of millions of dollars. That’s what developers told aldermen in August, said Paul Sajovek, Waguespack’s chief of staff.
Over objections from residents – and despite assurances that residents will be consulted – CHA is submitting an annual plan to HUD that includes $7.3 million for “planning for demolition” of one-third of the public housing units at Altgeld Gardens.
The move comes as the citywide CHA resident leaders’ organization has called for a moratorium on demolition and for rehabbing unoccupied units at Altgeld and at other remaining traditional developments.
It comes as the need for low-income housing continues to grow, while CHA public housing production has slowed dramatically, and the city produces a handful of low-income units annually under its affordable housing plan.
And it comes as housing activists who’ve exposed CHA’s receipt of HUD operating funds for unoccupied housing units are revealing a new no-strings funding stream from HUD – capital subsidies which continue for years for units that have been demolished.
Plan first, talk later
On Tuesday, the CHA board approved the annual plan under HUD’s Moving To Work program. According to the plan: “After reassessing future developments needs at [Altgeld Gardens and Murray Homes], CHA has determined that it will undertake planning for the demolition of the remaining 648 non-rehabilitated unoccupied units.”
CHA has budgeted $7.3 million for “planning for demolition” at Altgeld, according to the document. Rehab of 1,300 units at the Far South Side development was completed in 2010.
Last week People for Community Recovery, an organization of Altgeld residents, received assurance from CHA chief Charles Woodyard that no demolition would occur prior to a community planning process, scheduled to kick off with a town hall meeting next month. Woodyard responded after the group handed Mayor Emanuel a letter asking him to intervene to save Altgeld’s housing, said Cheryl Johnson, executive director of PCR.
“It would be more reassuring for us if they took [funding for demolition] out of the plan,” she said.
“It’s backwards,” said Leah Levinger of the Chicago Housing Initiative, a coalition of community organizations working with tenants in federally-backed housing. “Why not have the conversation first, before you submit a plan to HUD?”
“There’s no evidence these buildings are not structurally sound or that it’s not cost effective to rehab,” she added. “Until there is, demolition seems senseless and wasteful.”
Moratorium
The CHA’s Central Advisory Council, comprising elected representatives of public housing developments, calls for a moratorium on demolition in a recent report outlining recommendations for the current “recalibration” of CHA’s Plan for Transformation.
Citing decreases in federal funding and a growing shortage of low-income housing, CAC calls on CHA to prioritize preservation of public housing, “specifically rehabilitation and reconfiguration of existing CHA units.” Rehab is significantly more cost-effective and involves far fewer development hurdles, CAC notes.
|
Q:
Ethereum: NodeJS web3 - UnhandledPromiseRejectionWarning: Insufficient funds
I am using web3.js v1.0.0-beta.34 & nodeJS v9.11.2 to execute a smart contract on the Kovan testnet. The same method works for me on Ropsten with another smart contract. Here are the two errors I get via the callback:
1.)
UnhandledPromiseRejectionWarning: Error: Returned error: Insufficient
funds. The account you tried to send transaction from does not have
enough funds. Required 183675000000 and got: 0.
2.)
(node:15422) UnhandledPromiseRejectionWarning: Unhandled promise
rejection. This error originated either by throwing inside of an async
function without a catch block, or by rejecting a promise which was
not handled with .catch(). (rejection id: 1)
This is my smart contract:
pragma solidity ^0.4.24;
contract Test2 {
address public customer;
bytes32 public productName;
struct Box {
uint size;
}
Box public box;
constructor() public {
box.size = 3;
customer = 0xDa3E3C75....;
productName = "0x576...";
}
function changeBox(uint _change) public {
box.size = _change;
}
function getBox() public returns (uint) {
return box.size;
}
}
And here is the JavaScript code to make a transaction and execute the function changeBox with web3 and node:
const Tx = require('ethereumjs-tx');
var Web3 = require('web3');
var web3 = new Web3(new Web3.providers.HttpProvider('https://kovan.infura.io/api_key'));
const contractAddress = '0x36075430619b21Fff798454e2D5C81E9C18DEe81';
var contractABI = new web3.eth.Contract(
[...json abi...], contractAddress);
var boxNum;
function changeBox(boxNum, callback) {
web3.eth.defaultAccount = "0x002D189c25958c60...";
const account = '0x002D189c2595...';
const privateKey = Buffer.from('240462d5...', 'hex');
const contractFunction = contractABI.methods.changeBox(Number(boxNum));
const functionAbi = contractFunction.encodeABI();
let estimatedGas;
let nonce;
contractFunction.estimateGas(function(error, gasAmount) {
if(!error) {
console.log('Estimated Gas : ' + gasAmount);
estimatedGas = gasAmount + 10000;
console.log('New Gas: ' + estimatedGas);
web3.eth.getTransactionCount(account).then(_nonce => {
nonce = _nonce.toString(16);
console.log("Nonce: " + nonce);
const txParams = {
gasPrice: estimatedGas,
gasLimit: 5000000,
to: contractAddress,
data: functionAbi,
from: account,
nonce: '0x' + nonce
};
const tx = new Tx(txParams);
tx.sign(privateKey);
const serializedTx = tx.serialize();
web3.eth.sendSignedTransaction('0x' + serializedTx.toString('hex')).on('receipt', receipt => {
callback(receipt);
});
});
}
else {
callback(error);
}
});
}
//calling the contract with value 6
changeBox(6, function(err, data) {
if (!err) {
console.log(data);
}
else {
console.log(err);
}});
A:
sendSignedTransaction returns a PromiEvent, onto which you can chain .then() and .catch():
web3.eth.sendSignedTransaction('0x' + serializedTx.toString('hex'))
.on('receipt', receipt => {
callback(receipt);
}).then(() => {
// success
}).catch(() => {
// fail
});
The UnhandledPromiseRejectionWarning appears because the underlying promise is rejected (here, by the insufficient-funds error) but no .catch() handler is attached to it.
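The mechanism can be shown without web3 at all. In this minimal sketch, sendTx is a hypothetical stand-in for web3.eth.sendSignedTransaction that rejects the way your insufficient-funds error does:

```javascript
// Hypothetical stand-in for web3.eth.sendSignedTransaction:
// rejects like the insufficient-funds error in the question.
function sendTx(shouldFail) {
  return shouldFail
    ? Promise.reject(new Error('Insufficient funds'))
    : Promise.resolve({ status: true });
}

// Without a .catch(), the rejection is unhandled and Node prints
// UnhandledPromiseRejectionWarning:
//   sendTx(true).then(receipt => console.log(receipt));

// With a .catch(), the same error reaches your handler instead:
sendTx(true)
  .then(receipt => console.log('receipt:', receipt))
  .catch(err => console.error('caught:', err.message));
```

Note the warning is only a symptom: the transaction itself is still failing because the sending account has a zero balance, so it will keep rejecting until the Kovan account is funded (e.g. from a faucet).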
|
Organ trafficking and illicit transplant surgeries have infiltrated global medical practice. But despite the evidence of widespread criminal networks and several limited prosecutions in countries including India, Kosovo, Turkey, Israel, South Africa and the US, it is still not treated with the seriousness it demands.
Since the first report into the matter in 1990, there has been an alarming number of post-operative deaths of “transplant tour” recipients from botched surgeries, mismatched organs and high rates of fatal infections, including HIV and Hepatitis C contracted from sellers’ organs. Living kidney sellers suffer from post-operative infections, weakness, depression, and some die from suicide, wasting, and kidney failure. Organs Watch documented five deaths among 38 kidney sellers recruited from small villages in Moldova.
Distressing stories lurk in the murky background of today’s business of commercialised organ transplantation, conducted in a competitive global field that involves some 50 nations. The World Health Organisation estimates 10,000 black market operations happen each year.
The organ trade network
As I wrote in Living Donor Organ Transplants, the sites of illicit transplant have expanded from Asia to the Middle East, Eastern Europe, South Africa, Central Asia, Latin America and the US. All are facilitated by local criminal networks but those run by organised global criminal syndicates are the most dangerous, mobile, and widespread. They are also the most difficult to trace and to interrupt.
The trade involves a network of human traffickers including mobile surgeons, brokers, patients, and sellers who meet for clandestine surgeries involving cut-throat deals that are enforced with violence, if needed. Many of the “kidney hunters” are former sellers, recruited by crime bosses into the tight web of transplant trafficking schemes.
Sellers include poor nationals, new immigrants, global guest workers, or political and economic refugees recruited from abroad to serve the needs of transplant tourists in countries that tolerate or actively facilitate the illegal transplant trade.
Bioethicists argue endlessly about the “ethics” of what is actually a crime and a medical human rights abuse.
Turning up the heat
In 2008, the climate of denial began to change when The Transplantation Society and the International Society of Nephrology held a major summit which acknowledged organ trafficking as a reality. Moral pressure was then put on countries actively involved in organised and disorganised international schemes to recruit paid, living donors.
Despite this, criminal networks of brokers and transplant trafficking schemes are still robust, exceedingly mobile, resilient, and generally one step ahead of the game. Meanwhile, one economic or political crisis after another has also supplied the market with countless refugees that fall like ripe fruit into the hands of organ traffickers. The desperate, displaced and dispossessed can be found and recruited to sell a spare kidney in almost any nation.
Who gets what?
Human trafficking for organs is still generally seen as a victimless crime that benefits some very sick people at the expense of other, more invisible - or at least dispensable - people. And some prosecutors and judges treat it as such.
In 2009, New Jersey federal agents arrested kidney trafficker Levy Izhak Rosenbaum as part of a larger police sting of corrupt politicians. Rosenbaum, a self-styled “matchmaker” as he described himself in taped conversations, was caught trying to arrange the private sale of a kidney from a donor in Israel to an undercover FBI agent for $160,000 (£100,000).
The hospitals where the Rosenbaum operations were arranged were prestigious and despite it being illegal to trade organs in the US since 1984, many don’t ask enough questions. Indeed, Rosenbaum claimed he was easily able to concoct cover stories. It’s a lucrative business.
Federal prosecutors couldn’t believe that the trafficked organ sellers had been deceived or coerced into selling. Two years later Rosenbaum pleaded guilty to just three incidents of brokering kidneys for payment despite admitting to having been in the business for over a decade.
At his trial, Rosenbaum had a powerful show of support from transplant patients who arrived to praise the trafficker, and beg for his mercy.
Only one victim of kidney selling testified - a young black Israeli, Elahn Quick - who was recruited by traffickers to travel to a hospital in Minnesota to sell his kidney to a 70-year-old man. Quick testified that he agreed to the donation because he had been unemployed, alienated from his community, and hoped a meritorious act would improve his social standing. However, just before he was anaesthetised he asked his “minder” if he could get out of the deal. The operation went ahead.
The judge, perhaps moved by Rosenbaum’s supporters, concluded that deep down he was a good man, and that Quick had not been defrauded; he was paid what he was promised. “Everyone”, she said, “got something out of this deal”.
Combating criminal networks
Illegal, clandestine kidney transplants depend on criminal networks of human traffickers preying on the bodies of both the desperately sick and poor.
Prosecution of traffickers and their associates — brokers, kidney hunters, and enforcers — is inefficient. Brokers are the most visible players but easily replaceable. Arresting and prosecuting a few of them, as has been the case, won’t deter others from taking their place.
While culpable, kidney sellers and transplant tour recipients are also victims of recruitment, deception and varying degrees of coercion. They can provide information, but should be treated as victims unless, as happens in some cases, they go on to also become part of the trade.
Legislation and prosecution must instead focus on transplant professionals — the surgeons, hospitals, and insurance companies — that claim immunity by saying that they can’t police the trade, that they are not responsible for monitoring what goes on behind the scenes, or that they’ve been deceived.
Transplant professionals were implicated in the Netcare scandal in South Africa after the company entered into a plea bargain and accepted a $1.1m fine. The charges related to 109 kidney transplants carried out between 2001 and 2003. There were false declarations that donors were related, and five operations in which the donors were minors, all against the company’s own internal policy.
Organs Watch has many copies of letters that show how organised traffickers can be, how they keep schemes quiet and how they coach kidney sellers and transfer illicit payments.
Professional medical sanctions against transplant surgeons who work with criminal organ-trafficking networks are nonexistent but could be very effective. They should lose their licenses to practice medicine and be prohibited from participating in transplant conferences.
Regulation cannot come solely from within the transplant profession. Different laws and different jurisdictions make prosecutions of crimes that span international boundaries very difficult. |
/*
* Copyright (C) 2018-2020 Lightbend Inc. <https://www.lightbend.com>
*/
package akka.grpc.gen.javadsl
import akka.grpc.gen.{ BuildInfo, CodeGenerator, Logger }
import com.google.protobuf.Descriptors._
import com.google.protobuf.compiler.PluginProtos.{ CodeGeneratorRequest, CodeGeneratorResponse }
import protocbridge.Artifact
import templates.JavaCommon.txt.ApiInterface
import scala.collection.JavaConverters._
import scala.collection.immutable
import com.github.ghik.silencer.silent
abstract class JavaCodeGenerator extends CodeGenerator {
/** Override this to add generated files per service */
def perServiceContent: Set[(Logger, Service) => immutable.Seq[CodeGeneratorResponse.File]] = Set.empty
/** Override these to add service-independent generated files */
def staticContent(@silent("never used") logger: Logger): Set[CodeGeneratorResponse.File] =
Set.empty
def staticContent(
@silent("never used") logger: Logger,
@silent("never used") allServices: Seq[Service]): Set[CodeGeneratorResponse.File] =
Set.empty
override def run(request: CodeGeneratorRequest, logger: Logger): CodeGeneratorResponse = {
val b = CodeGeneratorResponse.newBuilder
// generate services code here, the data types we want to leave to scalapb
val fileDescByName: Map[String, FileDescriptor] =
request.getProtoFileList.asScala.foldLeft[Map[String, FileDescriptor]](Map.empty) {
case (acc, fp) =>
val deps = fp.getDependencyList.asScala.map(acc).toArray
acc + (fp.getName -> FileDescriptor.buildFrom(fp, deps))
}
// Currently per-invocation options, intended to become per-service options eventually
// https://github.com/akka/akka-grpc/issues/451
val params = request.getParameter.toLowerCase
val serverPowerApi = params.contains("server_power_apis") && !params.contains("server_power_apis=false")
val usePlayActions = params.contains("use_play_actions") && !params.contains("use_play_actions=false")
val services = (for {
file <- request.getFileToGenerateList.asScala
fileDesc = fileDescByName(file)
serviceDesc <- fileDesc.getServices.asScala
} yield Service(fileDesc, serviceDesc, serverPowerApi, usePlayActions)).toVector
for {
service <- services
generator <- perServiceContent
generated <- generator(logger, service)
} {
b.addFile(generated)
}
staticContent(logger).map(b.addFile)
staticContent(logger, services).map(b.addFile)
b.build()
}
def generateServiceInterface(service: Service): CodeGeneratorResponse.File = {
val b = CodeGeneratorResponse.File.newBuilder()
b.setContent(ApiInterface(service).body)
b.setName(s"${service.packageDir}/${service.name}.java")
b.build
}
override val suggestedDependencies = (scalaBinaryVersion: CodeGenerator.ScalaBinaryVersion) =>
Seq(
Artifact(
BuildInfo.organization,
BuildInfo.runtimeArtifactName + "_" + scalaBinaryVersion.prefix,
BuildInfo.version))
}
|
Paul Bright
Paul Bright (born April 15, 1965) is a film writer, director, and editor recognized for his predominantly gay-themed feature films.
Early life
Bright was born in Albuquerque, NM. In 1972, after his mother's death from cancer, his father remarried and moved the family to Los Angeles. He attended the Hamilton High School Musical Theater Program and studied under Don Bondi and Dr. Bill Teaford. During his senior year of high school, he also attended the Hollywood High Performing Arts Magnet. Later, he studied musical theater at the Dorothy Chandler Pavilion in Los Angeles under Paul Gleason's direction, and took voice training from Nathan Lam and Rickie Wiener.
Career
In high school, Bright was discovered by an agent at Cunningham, Escott, DiPene Talent Agency of Beverly Hills and was cast in numerous TV commercials, Divorce Court, Loni Anderson's failed TV show Easy Street, and Blake Edwards's comedy film Micki and Maude.
From 2002 to 2004, Bright was the artistic director of the Gaslight Repertory Theater Company south of Austin, Texas, where he produced 32 stage productions in three years. He left the theater company in 2005 to film Angora Ranch.
His original film company name, Silly Bunny Pictures, was a joke between himself and Tim Jones. Later films were released by Paul Bright Films. In 2016, he signed a deal with Telly2Go to distribute his film and TV library.
Filmography
Feature films
References
External links
Category:1965 births
Category:Living people
Category:American film producers
Category:American theatre directors
Category:Writers from Albuquerque, New Mexico
Category:Film directors from New Mexico
Category:Screenwriters from New Mexico |
mediocre beer
you said it was a mediocre beer.
he said you lead a mediocre life |
An asteroid turned into a blazing fireball as it disintegrated over southern Africa last weekend, just hours after it was first spotted.
The boulder-size asteroid was discovered on Saturday morning, according to NASA’s Center for Near Earth Object Studies. Dubbed 2018 LA, the asteroid was estimated to be about 6 feet across, which is small enough to safely disintegrate in Earth’s atmosphere.
2018 LA was first detected by the Catalina Sky Survey near Tucson, Ariz., which is funded by NASA and is operated by the University of Arizona.
METEORITE HUNTERS FIND FIRST PIECES OF THE MICHIGAN FIREBALL
“Reports of a bright fireball above Botswana, Africa, early Saturday evening match up with the predicted trajectory for the asteroid,” explained NASA, in a statement. Traveling at 38,000 mph, the asteroid entered Earth’s atmosphere at 6:44 p.m. local Botswana time (12:44 p.m. EDT).
The space rock disintegrated several miles above the Earth’s surface. Videos posted to YouTube reportedly show the fireball streaking across the night sky, including one captured on a South African farm’s security camera.
The object, however, was much smaller than the meteor – estimated to be about 56 feet wide – that exploded over Chelyabinsk, Russia, in 2013, injuring more than 1,000 people.
METEORITE HUNTERS: SCIENTISTS SET TO SCOUR ANTARCTICA FOR RARE SPACE ROCKS
A small chunk of an asteroid or comet is also known as a meteoroid. When it enters Earth's atmosphere, it becomes a meteor or fireball or shooting star. The pieces of rock that hit the ground, valuable to collectors, are meteorites.
Earlier this year, a meteor made headlines when it flashed across the sky in Michigan. The blazing fireball sent meteorite hunters scrambling to find fragments of the rare space rock.
The Associated Press contributed to this article.
Follow James Rogers on Twitter @jamesjrogers |
Our gorgeous beach house is perfect for family and friends to gather for a wonderful getaway with vast and stunning views of Mt. Baker and the Cascade Mountain Range, providing picturesque sunrises. Evenings will come with gorgeous sunsets and the Everett city twinkling lights will leave you...
Located near the tip of Camano Island, this incredible property has the best views of the water that Pacific Northwest has to offer. This 3BR 2.5BA home is a world apart - bring your family here if you'd like the perfect place to decompress. Perched high above the water, with public road access,...
Camano Island is an hour's drive from Seattle: no ferry lines, no hassles, just drive over the bridge at Stanwood.
Our Fun Little Cabin sleeps 4, Queen Size Bed and a Comfy Luv Sac modular queen in the living space.
We’re just across the road from Madrona community beach access. Madrona Beach sits...
Make The Beach House at Tyee on Camano Island your waterfront home away from home for an unforgettable taste of beachfront living! Relax on the first-floor deck or the private balcony off the 2nd-floor bedroom of this immaculate no-bank water-front home.
Enjoy panoramic views of Port Susan Bay...
This stunning 4BR/3BA Camano Island house features expansive Puget Sound views and upscale interior finishes including high ceilings, hardwood floors, and modern decor.
The fantastic Camano Beach location is only 60 miles from Seattle and 35 from Everett. Cama Beach Park is 3 miles away -- or stay...
ADORABLE NEW LISTING!!! Centrally located light filled home boasts a split floor plan with a private separated light filled bedroom on the ground floor. French doors, skylights, fully fenced backyard, beautiful front porch to watch the sunsets are just a few features this lovely home has. 1840...
This lovely 2BR beachfront Camano Island cabin sits right on the Port Susan and Tillicum Beach. Dig into a seafood feast around the six-person al fresco dining table on your private patio and watch the tides shift with the day. Two grills—one propane and one charcoal—make it easy to sear your...
This modern getaway is perfect for any group or family retreat. There are beautiful spaces offered both inside and out, with a perfect view of Skagit Bay. The bay is just steps from the back patio, offering beachcombing, kayaking, and other water activities.
What's nearby:
Camano Island offers...
*Note that this property includes other rental units and tenants may be present during your stay*
Unforgettable saltwater views await at this 3rd-floor, 2-bed, 1-bath vacation rental, across the street from the Saratoga Passage overlooking Whidbey Island. Watch the sunrise over Mt. Baker before...
Admire the trees incorporated into the architecture of this Northwest-style home in the Puget Sound! Along with a sleek and modern interior, you will also have two private balconies and an open style floor plan inside. Upstairs, you will find board games for fun family evenings. This airy home is...
This charming Camano Island home has it all for those seeking a quiet retreat or a great adventure in the Pacific Northwest! Enjoy peek-a-boo views of Port Susan, a private patio and a covered deck in a tranquil forest setting, and easy access to the beach and a private walking path through the...
Find rest, beauty, and inspiration at this serene, secluded cottage on Camano Island, which offers wonderful woodland views from its many windows! You'll be treated to a modern, updated interior, a lovely patio and wrap-around deck where you can breathe in the ocean air, and a wood-burning...
Come to Coupeville for a change of pace and stay in this western facing bayfront home with a comfortable open floor plan, outdoor firepit, and deck with a large fire table for an unforgettable Pacific Northwest vacation. Photographers will love the constantly changing sky throughout the day, and... |
142 Ga. App. 538 (1977)
236 S.E.2d 494
MAYES
v.
HODGES et al.
53809.
Court of Appeals of Georgia.
Argued May 4, 1977.
Decided May 9, 1977.
Rehearing Denied June 13, 1977.
Robert Paul Phillips, III, for appellant.
Miller, Beckmann & Simpson, Luhr G. C. Beckmann, Jr., William H. Pinson, Jr., for appellees.
WEBB, Judge.
This is an appeal by plaintiff-insured from a grant of summary judgment to defendant State Farm Fire & Casualty Company and to Preston Hodges, the defendant insurance agent, in a suit bottomed upon Hodges' failure to procure appropriate insurance coverage for plaintiff as he had undertaken to do, and upon the company's negligence in setting up Hodges, alleged to be inexperienced and incapable, in a position to deal with the public with regard to insurance matters.
1. (a) While plaintiff-insured includes in his enumeration of error a claim that there was a genuine issue as to the negligence of State Farm, this question was not within the scope of the motion for summary judgment nor the ruling thereon below, and is consequently not properly reviewable at this time. See Ga. Ports Authority v. Norair Engineering Corp., 131 Ga. App. 618 (206 SE2d 563) (1974).
(b) According to defendants' brief, their motion below sought two things: "(1) to have the dual agency question removed from the case . . ., and (2) to have the *539 complaint as a whole dismissed on the ground that [plaintiff] acknowledged receipt of the contract and had the obligation to read and determine from it whether it contained the coverages which he desired." Defendants contend that the ruling on the second issue was neither enumerated as error nor briefed and argued, and for that reason have moved this court to dismiss the appeal.[1] However, the enumeration reads: "The court erred in granting appellees' motion for summary judgment in that there were genuine and material issues of fact raised as to the concept of `dual agency' and the relationship of appellee Hodges to appellant Mayes . . . ." This relationship controls the asserted duty of plaintiff to read and discover since if, as plaintiff contends, Hodges undertook to perform the services as his agent, he was "relieved ... from the responsibility of having the policy examined minutely to determine if the coverage required was included within the terms of the insurance policies." Wright Body Works v. Columbus Ins. Agency, 233 Ga. 268, 271 (210 SE2d 801) (1974). Thus the enumeration of error was quite sufficient to reach the issue within standards announced in cases such as Adams-Cates Co. v. Marler, 235 Ga. 606 (221 SE2d 30) (1975); and since the above principle from Wright Body Works, supra, was briefed and argued, the enumeration is not deemed abandoned.
2. (a) "Whether the defendant was licensed as an `agent' or as a `broker' under the Insurance Code of Georgia [cits.] is immaterial to a determination of this case for the relationship of the parties, not the license held by the defendant, is the controlling issue." Wright Body Works v. Columbus Interstate Ins. Agency, 233 Ga. 268, 270, supra.
(b) The primary issue in this appeal is whether Hodges carried his burden of showing that State Farm had not consented for him to act as plaintiff's agent in selecting coverage. He seeks to draw a distinction *540 between an insurance "agent" and an insurance "broker" and contends that only a "broker" can act in a dual capacity as agent for both the insured and the company. Thus we are urged to confine the ruling of Wright Body Works, supra, allowing recovery from the agent, to situations involving only "independent agents" or "brokers," i.e., those who are not the exclusive agents of one company but who are free to place coverage, in the interests of the insured, with one or more of several companies which he represents. It is urged that recovery cannot be had from him here since he was not an independent agent but could only place coverage exclusively with State Farm.
This contention is without merit. The rule prohibiting dual agency without the consent of the principals is not a rule peculiar to insurance situations, nor is there any magic in the nomenclature of "agent" or "broker"; the issue in all these cases is whether the principal consented to the dual agency. As stated in Ramspeck v. Pattillo, 104 Ga. 772 (30 SE 962) (1898), one of the leading cases applying the prohibitory rule to an insurance agent: "An agent of a fire-insurance company, authorized to contract for insurance in its behalf, can not, without the company's consent, become in his individual character the agent of a property-owner who desires to obtain insurance in that company. This is so for the reason that an agreement to act as agent for both of the parties would be an undertaking to perform inconsistent duties, and a mutual agency of this kind requires the consent of both parties." (Emphasis supplied.)
Thus the "dual agent" prohibition does not, by definition, apply to a situation where the insurance company knows of, and consents to, the agent's efforts on behalf of his clients in selecting coverage. "Two parties may always, by mutual consent, no matter how diverse their interest, make a third their agent." Fitzsimmons v. Southern Exp. Co., 40 Ga. 330, 336 (1869). The distinctions between "agents," "brokers," and "independent agents" do not properly relate to statements of substantive rules of agency but rather to matters of proof of consent. For example, nothing else appearing, consent would be deemed lacking in an "exclusive agent" *541 situation (Ramspeck v. Pattillo, 104 Ga. 772, supra) while at least since Wright Body Works, supra, consent may apparently be regarded as inherent in the concept of the "independent agent" arrangement. See Creative Underwriters, Inc. v. Heilman, 141 Ga. App. 740 (fn. 1) (1977).
In the instant case Hodges has carried his summary judgment burden of showing that State Farm was the only insurance company he was authorized to represent, and he must therefore in this appeal be regarded as an "exclusive" rather than an "independent" agent. He has not, however, eliminated the question of fact as to whether State Farm gave the public to understand that its agents do not just sell policies but also are authorized to perform services on behalf of the client. Should the jury so find at trial, where the burden will be upon plaintiff, Hodges would not be insulated from liability for the breach of agency duties he may have undertaken to perform on plaintiff's behalf. Wright Body Works, supra.
Compare Creative Underwriters v. Heilman, 141 Ga. App. 740, supra, where the appellant agent was solely the agent of the company and made no individual undertaking on the insured's behalf, and Pearlman v. United Ins. Co. of America, 142 Ga. App. 48 (1977), where the suit was solely against the insurance company and no sufficient ground of recovery from it appeared independent of the dual agent's breach of fiduciary duties owed the insured, which, while rendering the agent liable to the insured, does not so operate to render the company liable.[2]
(c) It is contended that there is an absolute duty imposed *542 by law upon an insured to examine his policy and to determine for himself whether or not he has the coverage appropriate to his needs, and that this is true regardless of whether the agent is the agent of the company or the insured. Cited in support of this proposition are Ga. Mut. Ins. Co. v. Meadors, 138 Ga. App. 486 (226 SE2d 318) (1976) and S & A Corp. v. Berger & Co., 111 Ga. App. 39 (140 SE2d 509) (1965). Those cases do not require a different result here, however, since Meadors was a suit on the policy, and since in Berger "the premium on said policy had not been paid." 111 Ga. App. 39, 40. Berger was further distinguished in Wright Body Works, supra. Should it nevertheless be thought that these cases are in conflict with Wright Body Works, supra, they will not be followed. It should be self-evident that the whole purpose of relying upon the expertise of the agent to select the appropriate coverage is to relieve the insured of that responsibility. "In undertaking to render this service the broker became the insured's agent and relieved its principal from the responsibility of effecting a minute examination of the policies." 27 Mer. L. Rev. 121, 122, commenting upon Wright Body Works, supra.
Judgment reversed. Deen, P. J., and Marshall, J., concur.
NOTES
[1] Failure to brief and argue results in abandonment of the enumeration, not dismissal of the appeal. Rule 18 (c) (2), this court (Code Ann. § 24-3618 (c) (2)); Peagler v. State, 117 Ga. App. 821 (1) (162 SE2d 11) (1968).
[2] As to the asserted independent ground of recovery, Pearlman followed Watkins v. Coastal States Life Ins. Co., 118 Ga. App. 145 (162 SE2d 788) (1968), which held as a matter of first impression that it was not permissible to base an action against an insurer or its agent upon the premise that they had delayed for an unreasonable time in passing upon the application for insurance. The same result was reached in Creative Underwriters, supra, where the agent, acting solely for the company, misplaced the application.
|
Dear Taco Bell,
Don't keep this a secret. It needs to be out there. Surely, there's a lot on the line after the great meat fiasco of 2011, and this could save everything.
Why? Because people will come. People will come, Taco Bell. They'll pull into your drive-through for reasons they can't even fathom (or maybe just because they've been toking weed). "Of course, you can order the taco with a shell made of Nacho Cheese Doritos," you'll say, and people will pass over the money without even thinking about it.
Food Beast and Grub Grade are leaking details and menu options of a new "Doritos Loco Tacos." If the reporting is true, then it's a good world. We all want to live in a world where stoner food synergy is brought to life on a national scale. This is planet Earth, and we own it. We can make it whatever we want. We can stuff a taco inside another taco, and then stuff it in a pizza and then drizzle it with golden flavoring.
We can make this happen. As E.T. made the public crave Reese's Pieces, Taco Bell too can launch the greatest idiotic, non-Mexican food desire of all time. It'll be glorious, and there'll be lines rounding the block. Build it, and they will come. |
2-nitroimidazol-5-ylmethyl as a potential bioreductively activated prodrug system: reductively triggered release of the PARP inhibitor 5-bromoisoquinolinone.
5-Chloromethyl-1-methyl-2-nitroimidazole reacted efficiently with the anion derived from 5-bromoisoquinolin-1-one to give 5-bromo-2-((1-methyl-2-nitroimidazol-5-yl)methyl)isoquinolin-1-one. Biomimetic reduction effected release of the 5-bromoisoquinolin-1-one. The 2-nitroimidazol-5-ylmethyl unit thus has potential for development as a general prodrug system for selective drug delivery to hypoxic tissues. |
Introduction
============
Epileptic seizures are typically described as a short-term manifestation of numerous signs and/or symptoms because of unusually superfluous or concurrent activity in the brain. In contrast, epilepsy is a collection of neurological disorders characterized by the lasting tendency to spawn epileptic seizures ([@B15]). Epilepsy is a serious disorder of the Central Nervous System (CNS) as the global epilepsy prevalence is approximately one in 100 people according to [@B21]. Whilst the underlying cause of epilepsy is not always clear, anti-convulsant drugs or anti-epileptic drugs (AEDs) as they are commonly known, may be used for the symptomatic treatment of epilepsy. The older generation of AEDs have side effects which range from abdominal discomfort and anorexia to psychosis and aplastic anemia; together with an array of different idiosyncratic reactions. In comparison, AEDs from the newer generation can result in side effects which range from fatigue and drowsiness to vomiting and diplopia ([@B16]). Whilst the efficacy of the AEDs in use today has been demonstrated, a need for the discovery of new AEDs with fewer side effects remains.
*Orthosiphon stamineus* is a Malaysian herb also known locally as 'misai kucing' and is widely grown in tropical regions which have high temperatures and year-round rainfall ([@B1]). In the Southeast Asian region, *O. stamineus* leaves are harvested and dried to make tea leaves ([@B17]). The *O. stamineus* tea leaves can then be brewed into a herbal tea and used as a traditional medicine to treat epilepsy ([@B22]). An extract of *O. stamineus* leaves has been found to possess anti-inflammatory ([@B42]), anti-oxidant and free-radical scavenging abilities ([@B43]). Although the exact mechanism leading to the formation of seizures is unknown, there is evidence that pro-inflammatory mediators released by the brain and peripheral immune cells play a role ([@B40]). There has also been an indication that oxidative stress has a role in epilepsy, given the high degree of oxidative metabolism, limited antioxidant defense and the abundance of polyunsaturated fatty acids in the brain. It is possible that these conditions increase the vulnerability of the brain to free radical damage, leading to certain types of epilepsy ([@B11]). An experiment by [@B44] suggested that the components of *O. stamineus* leaves which are responsible for its anti-inflammatory effect in a chloroform extract are the polymethoxylated flavones sinensetin, eupatorine and 3′-hydroxy-5,6,7,4′-tetramethoxyflavone, which possibly function by inhibiting the nitric oxide pathway and the synthesis of prostaglandin. [@B2] also found that sinensetin, eupatorine, 3′-hydroxy-5,6,7,4′-tetramethoxyflavone, rosmarinic acid and quercetin form the major components in an *O. stamineus* extract which possess significant free radical scavenging and antioxidant ability. Thus, the properties of *O. stamineus* combined with its traditional usage for the treatment of epilepsy make it an encouraging candidate for the development of novel AEDs.
One of the most frequently used approaches to inducing seizures in animals is the administration of chemoconvulsants. An example of a chemoconvulsant among the many available is pentylenetetrazol (PTZ). PTZ is believed to induce seizures primarily by binding to the γ-Aminobutyric Acid (GABA~A~) receptor and impeding the neuroinhibitory action of GABA ([@B7]). Although the majority of past epilepsy research has been undertaken using rodents as the animal model, zebrafish are becoming increasingly popular as a model for epilepsy. One possible reason for this is that the compounds to be tested can be dissolved directly in the zebrafish tank water, which eliminates the need for an invasive procedure such as an injection. Despite zebrafish being more evolutionarily removed from humans than the mammalian rodents, their genes are nevertheless around 75% homologous to human genes ([@B5]; [@B7]). Other aspects in which zebrafish are superior to rodents as an animal model are their longer lifespan and robust phenotypes, as they display obvious and easily quantifiable behavioral endpoints ([@B37]). The blood-brain barrier in zebrafish is also tight-junction based and highly permeable to macromolecules, meaning that zebrafish are extremely responsive to the compounds being tested ([@B13]). It is for these reasons that this experiment utilized zebrafish rather than rodents as an animal model of epilepsy.
Once the animal model of epilepsy and the method of inducing seizures are ascertained, a technique for assessing compounds believed to be anti-convulsive is needed. One way of doing this is to test adult zebrafish inside a tank in which they can be observed, so that their seizure behavior can be scored according to a predefined scoring system. Both the top and side views of the observation tank can be utilized for the neurophenotypic classification of the responses that result in chemoconvulsant-treated adult zebrafish, as these responses are very similar to those observed during a seizure. Whilst the abnormal response displayed by the zebrafish varies with the chemoconvulsant used, the conventional endpoints include rapid twitching, loss of body posture, hyperactive, spiral or circular swimming, paralysis or immobility, spasm-like body contractions and death ([@B37]). An induced seizure has also been shown, using rodent models of epilepsy, to result in an upregulation of specific genes at the site where the seizure was initiated. These upregulated genes are known as immediate-early genes ([@B29]) and include genes such as the early proto-oncogene c-Fos, which also functions as a neuronal activation marker. A similar pattern of upregulation of seizure-related genes during an induced seizure is also present in zebrafish brains ([@B37]) and may be quantitatively examined so that these genes could possibly serve as biomarkers of brain disorders.
Thus, whilst the efficacy of the AEDs used today for the symptomatic treatment of epilepsy is proven, there is still a necessity for the discovery of new AEDs with comparable efficacies, but with fewer side effects. Given its beneficial properties and traditional usage, *O. stamineus* leaves have the potential to be a novel treatment for epilepsy. Therefore, this study aimed to determine if an ethanolic leaf extract of *O. stamineus* is pharmacologically active against seizures. This was done by observing whether pre-treating zebrafish with varying doses of the extract had any effect on the progression of PTZ-induced seizures. This experiment involved the use of three different treatment doses of the *O. stamineus* ethanolic leaf extract, with the exact concentrations decided based on a prior toxicity study using adult zebrafish. The last part of this study involved harvesting the zebrafish brains for gene expression studies to help determine the mechanism of action by which an ethanolic leaf extract of *O. stamineus* exerts its anti-convulsive effect in the zebrafish brain, as the expression levels of certain genes change characteristically after a seizure ([@B29]).
Materials and Methods {#s1}
=====================
Materials
---------
### Chemicals
The standardized *O. stamineus* ethanolic leaf extract was purchased from Natureceuticals Sdn Bhd. According to the manufacturer, the extract was a 50% ethanolic extract prepared using Digimaz technology. Pentylenetetrazol (PTZ) and the standard AED diazepam (DZP) were purchased from Sigma--Aldrich (United States). TRIzol^®^ reagent was purchased from Invitrogen, Carlsbad, CA, United States. For the gene expression study, QuantiTect SYBR Green dye (Qiagen, Valencia, CA, United States) was used together with the following primer sets:
> BDNF: Dr_bdnf_1\_SG QuantiTect Primer Assay (Cat no. QT02125326);
>
> NF-κB: Dr_nfkb1_2\_SG QuantiTect Primer Assay (Cat no. QT02498762);
>
> NPY: Dr_npy_1\_SG QuantiTect Primer Assay (Cat no. QT02205763);
>
> c-Fos: Dr_fos_1\_SG QuantiTect Primer Assay (Cat no. QT02103243);
>
> TNF-α: Dr_tnf_1\_SG QuantiTect Primer Assay (Cat no. QT02097655);
>
> IL-1: Dr_il1rapl1a_1\_SG QuantiTect Primer Assay (Cat no. QT02131850);
>
> eef1a1b: Dr_eef1a1b_2\_SG QuantiTect Primer Assay (Cat no. QT02042684)
### Software and Equipment
The Smart V3.0.05 tracking software (Pan Lab, Harvard apparatus) was used for the automated tracking of zebrafish swimming patterns. The Applied Biosystems StepOnePlus^TM^ Real-Time PCR System was used for the gene expression study.
### Animals
Adult zebrafish (*Danio rerio*) 3--4 months of age and of the heterogeneous wild-type strain with a typical short-fin phenotype were purchased at the aquarium shop 'Akarium Batu Karang Laut' (Subang Jaya, Malaysia). All zebrafish were held at the Monash University Malaysia animal facility under standard husbandry conditions. The zebrafish tanks were kept at a water temperature of between 26 and 30°C, a water pH of between pH 6.8 and pH 7.1 and under a 250-lux light intensity with a cycle of 14-h of light to 10 h of darkness. The lights were automatically turned on at 8 am and automatically turned off at 10 pm via a timer. The zebrafish were fed thrice a day with TetraMin^®^ Tropical Flakes and their diet was supplemented with live brine shrimps (*Artemia*) purchased from Bio-Marine (Aquafauna, Inc. United States). Standard zebrafish tanks with a length of 36 cm, a width of 26 cm and a height of 22 cm were used to house the zebrafish. The tanks were equipped with a water circulation system to provide constant aeration. Group housing, whereby 10--12 fish were kept per tank, was practiced with the females and males being housed separately. All animal experimentation was authorized by the Monash Animal Research Platform (MARP), Australia.
Methods
-------
### Toxicity Study
A zebrafish toxicity study was carried out on adult zebrafish to determine the exact *O. stamineus* ethanolic leaf extract concentrations to be used with each of the three treatment groups. A limit test was first performed based on a modified version of the OECD Guidelines for the Testing of Chemicals No. 203 ([@B30]). An observation tank was first set up and filled with 13 L of the water normally used to fill the zebrafish tanks. One zebrafish from the untreated normal control group was then placed in the tank and its behavior was recorded for 10 min. After each recording, the zebrafish was transferred into an individual 1 L tank filled with the same water. The procedure was then repeated for each of the seven zebrafish in the control group. The recording and tank transfer procedure was then repeated with the seven zebrafish of the treatment group, but with the extract added to the water to make up a concentration of 100 mg/L. All 14 zebrafish were then kept for 96 h in their respective one-liter tanks. All 14 zebrafish were checked every 15 min for the first 2 h of exposure and every half an hour thereafter for the first day. On subsequent days, the zebrafish were checked three times daily, during the morning, afternoon and evening. Any zebrafish found to exhibit severe symptoms of pain or suffering according to our predefined monitoring sheet at any checkpoint were humanely euthanized via an overdose of benzocaine. If no zebrafish required euthanasia at the limit concentration, the extract concentration was raised by a factor of two, to 200 mg/L, and the test protocol repeated. If at least one zebrafish required euthanasia at the limit concentration, the concentration was decreased by a factor of two, to 50 mg/L, and the test protocol repeated. This protocol deviates from the OECD guidelines in that it does not use mortality as the criterion to determine toxicity, due to the concerns of the MARP-Australia in using death as an endpoint.
The highest dose which did not require euthanasia of any zebrafish was used as the 'High' dose in the following behavioral study with the 'Medium' dose and 'Low' dose being a factor of two and four lower than the 'High' dose, respectively.
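The dose-finding rule above can be sketched in a few lines of Python. This is a minimal illustration of the factor-of-two escalation logic and of how the three behavioral-study doses are derived from the 'High' dose; the function names (`next_dose`, `treatment_doses`) are hypothetical and are not part of the published protocol.

```python
def next_dose(dose_mg_per_l, any_euthanized):
    """One step of the up-and-down limit test.

    The concentration is halved when at least one fish required euthanasia
    at the current dose, and doubled otherwise (factor-of-two steps).
    """
    return dose_mg_per_l / 2 if any_euthanized else dose_mg_per_l * 2


def treatment_doses(high_mg_per_l):
    # 'High' = highest dose with no euthanasia; 'Medium' and 'Low' are a
    # factor of two and four lower, respectively.
    return {
        "High": high_mg_per_l,
        "Medium": high_mg_per_l / 2,
        "Low": high_mg_per_l / 4,
    }
```

Starting from the 100 mg/L limit concentration, two clean steps give 200 and then 400 mg/L; since 400 mg/L proved lethal overnight (see Results), 200 mg/L becomes the 'High' dose, yielding the 200/100/50 mg/L groups used below.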
### Behavioral Study
#### Drug treatment and groups
Three-month-old adult zebrafish with weights ranging from 0.4 to 0.8 g were selected. The zebrafish were then divided into six groups, with 10 fish per group. PTZ was dissolved in distilled water, whereas DZP and the *O. stamineus* extract were dissolved in the same water used to fill the zebrafish tanks.
> Group I: Vehicle Control (CV), Tank Water Only;
>
> Group II: Negative Control (CN), PTZ (170 mg/kg) Only;
>
> Group III: Positive Control (CP), DZP (10 mg/L) + PTZ (170 mg/kg);
>
> Group IV: Treatment Group 1, *O. stamineus* extract (Low dose) + PTZ (170 mg/kg);
>
> Group V: Treatment Group 2, *O. stamineus* extract (Medium dose) + PTZ (170 mg/kg);
>
> Group VI: Treatment Group 3, *O. stamineus* extract (High dose) + PTZ (170 mg/kg)
#### Procedure for a zebrafish intraperitoneal injection
All intraperitoneal injections were administered into the abdominal cavity at a location posterior to the pelvic girdle, using a 10 μl Hamilton syringe (700 series, Hamilton 80400) ([@B36]). The experiment was performed in a separate behavior room with the room temperature kept between 26 and 30°C and humidity between 50 and 60%. All zebrafish were acclimatized in the said behavior room for 2 h prior to the experiment to minimize any novel tank response. Other precautions taken included using a small injection volume of 10 μl per gram of fish and using a 35-gauge needle. The zebrafish were restrained in a water-saturated sponge under benzocaine anesthesia to reduce the distress inflicted on the zebrafish ([@B24]). This intraperitoneal injection technique was found to be effective in zebrafish ([@B27]) and did not cause any mortality throughout the experiment.
Each zebrafish was captured individually using a fish holding net, and then transferred into an anesthesia solution (30 mg/L Benzocaine). The zebrafish was taken out once anesthetized and then weighed to calculate the dose and hence the injection volume. A soft sponge approximately 20 mm in height was saturated with water and set inside a 60 mm Petri dish. A cut between 10 and 15 mm in depth was made in the sponge to restrain and hold the fish for the intraperitoneal injection. The intraperitoneal injection was given while using a dissecting microscope by inserting the needle into the midline between the pelvic fins. An appropriate volume was then injected into the zebrafish, after taking into account the body weight of the zebrafish. After the intraperitoneal injection, the zebrafish was immediately transferred to an observation tank.
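The weight-based dosing described above is simple arithmetic. The sketch below (hypothetical function names) assumes the protocol's figures of 10 μl injected per gram of body weight and the 170 mg/kg PTZ dose used later; together these imply a 17 mg/ml PTZ working solution, so that the correct dose is delivered for any fish weight.

```python
def injection_volume_ul(weight_g, ul_per_g=10.0):
    # Protocol: 10 ul of solution injected per gram of body weight
    return ul_per_g * weight_g


def ptz_dose_mg(weight_g, dose_mg_per_kg=170.0):
    # 170 mg/kg body weight = 0.17 mg per gram of fish
    return dose_mg_per_kg * weight_g / 1000.0


# For a 0.6 g fish: 6 ul injected, carrying 0.102 mg of PTZ,
# i.e. a working solution of 0.102 mg / 0.006 ml = 17 mg/ml.
```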
#### PTZ-Induced Seizure Model
The zebrafish were habituated for 30 min in treatment tanks filled with 1 L of the water normally used to fill the zebrafish tanks, before administration of PTZ. Groups I and II were habituated in tanks containing only the water normally used to fill the zebrafish tanks. Groups III to VI had either diazepam (10 mg/L) or the extract added to the tank water. After the 30-min habituation time, the zebrafish from groups II to VI were injected with PTZ (170 mg/kg, IP). Group I zebrafish did not receive any injections. PTZ-injected zebrafish present diverse seizure profiles, intensities and latencies in reaching the different seizure scores. PTZ-induced seizures persist for about 10 min after the PTZ injection and gradually decrease with time. The PTZ-injected adult zebrafish were then moved to a 13-L observation tank filled three quarters of the way with water. The behavior of the zebrafish was then recorded for 10 min after recovery from anesthesia and the video was later viewed using a computer to determine the highest seizure score every minute. The zebrafish seizure score was recorded as per the scoring system used by [@B27] and is given below.
> Score 1 - Short swim mainly at the bottom of the tank
>
> Score 2 - Increased swimming activity and high frequency of opercular movement
>
> Score 3 - Burst swimming, left and right movements as well as erratic movements
>
> Score 4 - Circular movements
Under the directives of MARP-Australia, the PTZ dose was set at 170 mg per kg of zebrafish body weight in order to limit the resulting seizure scores to a maximum of four. The time to onset of a score four seizure (in seconds) and the mean seizure score over 10 min were noted when viewing the recorded video. The mean seizure score over 10 min was calculated by first assigning the highest seizure score observed during the first minute of the video as the seizure score for that minute. This process was repeated until the end of the 10th min, and all 10 per-minute seizure scores were averaged to obtain the mean seizure score over 10 min. The zebrafish swimming pattern was determined via analysis using the Smart tracking software. The dose of PTZ (170 mg/kg) and the duration of the behavior recording (10 min) represent the standard protocol of our laboratory for inducing seizures with PTZ, as determined previously by [@B27]. The diazepam dose (10 mg/L) and the habituation time (30 min) were chosen based on the results of an unpublished preliminary trial using the same methodology. The diazepam dose and the habituation time were varied until a mean seizure score over 10 min of less than one was obtained.
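The per-minute scoring rule can be expressed as a short function. This is a sketch assuming the recording has been pre-scored into a per-second list of seizure scores (an assumption for illustration; in the study the highest score for each minute was read directly from the video), with the maximum score within each one-minute window averaged over the 10 min.

```python
def mean_seizure_score(scores_by_second, minutes=10):
    """Mean seizure score over the recording.

    scores_by_second: per-second seizure scores (1-4) covering the
    recording. The highest score observed within each minute becomes
    that minute's score; the per-minute scores are then averaged.
    """
    per_minute = []
    for m in range(minutes):
        window = scores_by_second[m * 60:(m + 1) * 60]
        per_minute.append(max(window))
    return sum(per_minute) / minutes
```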
### Gene Expression Study
#### Brain harvesting
After the behavioral study, the zebrafish brains were harvested by removing the zebrafish skull and extracting the brain, before transferring it straight into 200 μl of ice-cold TRIzol^®^. The zebrafish brains were then immediately stored at -80°C till required.
#### RNA isolation and synthesis of first strand cDNA
The mRNA was isolated according to the protocol supplied by the kit's manufacturer, and was identical to the protocol used by [@B27]. In short, the zebrafish brain was first homogenized whilst in TRIzol^®^ before chloroform was mixed in. The resulting mixture was then centrifuged at a speed of 13,500 rpm (revolutions per minute) for a period 15 min and at a temperature of 4°C. After centrifugation, the resulting aqueous supernatant was then transferred into a new tube before the addition of isopropanol. After mixing, the new tube was incubated for 10 min at room temperature and subsequently centrifuged for a period of 10 min at a speed of 13,500 rpm and at a temperature of 4°C. The resulting supernatant was removed and the pellet was rinsed with 75% ethanol. The pellet was then allowed to air dry for between 5 and 10 min. Nuclease-free water was then added to the tube for the purpose of dissolving the mRNA pellet. The purity and concentration of the resulting isolated mRNA was then measured with a NanoDrop Spectrophotometer. Afterwards, the isolated mRNA was then converted to cDNA as per the instructions given in the Omniscript Reverse-transcription Kit from QIAGEN.
#### StepOne^®^ real-time PCR
The gene expression levels of Brain-Derived Neurotrophic Factor (BDNF), Nuclear Factor Kappa-light-chain-enhancer of activated B cells (NF-κB), Neuropeptide Y (NPY), c-Fos, Tumor Necrosis Factor alpha (TNF-α), Interleukin-1 (IL-1) and the housekeeping gene Elongation factor 1-alpha-1b (eef1a1b) were determined via real-time quantitative RT-PCR (Applied Biosystems) together with QuantiTect SYBR Green dye and the appropriate Qiagen primer set for each gene, using a protocol similar to that of [@B27]. The samples were first incubated at 95°C for 2 min prior to thermal cycling. The thermal cycling settings used were 40 cycles of 95°C for 5 s, followed by 60°C for 15 s. The relative expression level (fold change) of each of the six genes of interest was calculated by normalizing its threshold cycle (Ct) value against the Ct value of the eef1a1b housekeeping gene using the formula: Fold change = 2^(Ct eef1a1b − Ct gene of interest).
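The normalization formula above amounts to the following computation (the Ct values in the comment are made-up examples, not measurements from this study):

```python
def fold_change(ct_housekeeping, ct_gene_of_interest):
    """Relative expression: 2 ** (Ct housekeeping - Ct gene of interest).
    A gene amplifying 3 cycles later than eef1a1b gives 2 ** -3 = 0.125."""
    return 2 ** (ct_housekeeping - ct_gene_of_interest)
```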
### Statistical Analysis
All results are expressed as mean ± standard error of the mean (SEM). The data were analysed using one-way Analysis of Variance (ANOVA) followed by Dunnett's test, with the PTZ-only negative control group (Group II/CN) as the control against which all other groups were compared. A *P*-value of ^∗∗∗^*P* \< 0.001 was regarded as statistically significant for the behavioral study, whereas *P*-values of ^∗∗^*P* \< 0.01 and ^∗^*P* \< 0.05 were regarded as statistically significant for the gene expression study.
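For reference, the mean ± SEM reported throughout can be computed as follows; this is a sketch using the sample standard deviation, and the example in the test values is made up:

```python
import math

def mean_sem(values):
    """Mean and standard error of the mean, SEM = sd / sqrt(n),
    where sd is the sample standard deviation (n - 1 denominator)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return mean, sd / math.sqrt(n)
```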
Results
=======
Toxicity Study
--------------
The limit test performed using 100 mg/L of *O. stamineus* ethanolic leaf extract did not result in any mortality, morbidity or abnormal behavior in the zebrafish (*n* = 7). As per the protocol, the toxicity study was repeated using twice the concentration of the extract (200 mg/L) and again resulted in no mortality, morbidity or abnormal behavior. Doubling the extract concentration once more to 400 mg/L produced no abnormal behavior during the initial observation period, but later resulted in the death of all treated zebrafish after an overnight exposure (less than 18 h after the last observation), during which the zebrafish were not monitored. From these results, 200 mg/L was chosen as the 'High' dose (T200) for the subsequent behavioral study, and the 'Medium' and 'Low' doses were set at 100 mg/L (T100) and 50 mg/L (T50), respectively. From the software-generated zebrafish swimming patterns (**Figure [1](#F1){ref-type="fig"}**), it was found that zebrafish treated with 100 mg/L of the extract spent more time at the bottom of the tank. In comparison, the vehicle control group (CV) displayed a slight preference for the bottom of the tank but otherwise swam throughout the whole tank, while the zebrafish treated with 200 and 400 mg/L of the extract displayed no preference for any one location in the tank.
{#F1}
Behavioral Study
----------------
### Seizure Onset Time and Seizure Score Analysis
The mean seizure onset time for the untreated CV group was taken to be 600 s (the entire length of the video) and a mean seizure score of 0 was assigned to this group, because the vehicle control zebrafish were not injected with PTZ and thus did not develop seizures. Injecting PTZ into the zebrafish of the CN group resulted in a significant decrease in mean seizure onset time to 191 s and a significant increase in mean seizure score to 2.96 in comparison to the CV group. The results of the CN group were then used as the baseline for the positive control and treatment groups. Pre-treating the zebrafish with the positive control drug diazepam (CP) before challenging them with PTZ significantly increased the mean seizure onset time to 453.4 s and significantly reduced the mean seizure score to 0.69. In contrast, pre-treating the zebrafish with 50 mg/L of *O. stamineus* ethanolic leaf extract (T50 group) increased the mean seizure onset time to 314.4 s, although this was statistically insignificant (*P* = 0.233); the decrease in the mean seizure score of the T50 group to 1.86 was, however, considered statistically significant. Doubling the extract pre-treatment dose to 100 mg/L (T100 group) produced a significant increase in the mean seizure onset time to 518.8 s and a significant decrease in the mean seizure score to 0.66. The final treatment group (T200 group), pre-treated with 200 mg/L of extract, did not reach seizure score 4, and thus its mean seizure onset time was recorded as 600 s, the full length of the recorded video; the T200 group also had a significant decrease in the mean seizure score, to 0.47. All results were considered significant at the significance level of ^∗∗∗^α = 0.001. The mean seizure onset time (seconds) and seizure score for each zebrafish group are presented graphically in **Figure [2](#F2){ref-type="fig"}**.
{#F2}
### Representative Locomotion Patterns
Using the Smart tracking software for automated tracking of zebrafish swimming patterns, one representative swimming pattern was chosen for each group from among the *n* = 10 zebrafish per group. The representative swimming patterns are given in **Figure [2](#F2){ref-type="fig"}**. The normal swimming behavior demonstrated by the zebrafish in the CV group is to spend roughly an equal amount of time swimming throughout the entire tank. In contrast, the untreated negative control zebrafish had a more erratic swimming pattern after the PTZ challenge, with the zebrafish dwelling at the bottom of the tank more frequently. Pre-treatment with the standard AED diazepam modified the post-PTZ-challenge swimming behavior into a zig-zag-like swimming pattern, with a significant amount of time being spent at the top and bottom of the tank. Pre-treatment with all three *O. stamineus* ethanolic leaf extract doses produced a swimming pattern similar to that of the normal control, although the 50 and 200 mg/L doses produced more bottom dwelling in the zebrafish. In comparison, the 100 mg/L dose produced the swimming pattern most similar to the vehicle control, but showed an increase in time spent at the water surface.
Gene Expression Study
---------------------
### BDNF
The change in the gene expression level of BDNF was determined to be statistically insignificant in all groups in comparison to the negative control at a level of ^∗^α = 0.05. However, when graphically represented in **Figure [3](#F3){ref-type="fig"}**, an increase in BDNF expression by the CN group as compared to the CV group is visible. The BDNF expression level was reduced in both the CP and T50 groups, whereas the T100 and T200 groups produced an increase in BDNF expression level in comparison to the CN group.
{#F3}
### NF-κB
There was a significant rise in the gene expression level of NF-κB for the CN group in comparison to the vehicle control (^∗∗^*P* \< 0.01). The CP, T100 and T200 groups had a significant reduction in NF-κB expression (^∗∗^*P* \< 0.01) as compared to the CN group. The T50 group also showed a reduction in NF-κB expression, but this did not reach statistical significance (*P* = 0.317). The NF-κB expression level for each zebrafish group is graphically represented in **Figure [3](#F3){ref-type="fig"}**.
### NPY
There was a significant rise in NPY expression for the CN group in comparison to the vehicle control. In comparison to the negative control, only the T100 group showed a significant decrease in NPY expression. The CP, T50, and T200 groups did show a decrease in NPY expression but this was not significant at the level of ^∗^α = 0.05. The NPY expression level for each zebrafish group is graphically represented in **Figure [3](#F3){ref-type="fig"}**.
### c-Fos
The change in the gene expression level of c-Fos was determined to be statistically insignificant in all groups in comparison to the negative control. However, when graphically represented in **Figure [3](#F3){ref-type="fig"}**, an increase in c-Fos expression by the CN group as compared to the CV group can be seen. The level of c-Fos expression was decreased in the CP, T50, and T200 groups, and the T100 group likewise showed a decrease in c-Fos expression level when compared to the CN group.
### TNF-α
There was a significant rise in the expression of TNF-α for the CN group as compared to the CV group. The CP, T50, T100, and T200 groups showed a significant reduction in the expression of TNF-α in comparison to the CN group. All changes in TNF-α expression were significant at the level of ^∗∗^α = 0.01. The TNF-α expression level for each zebrafish group is graphically represented in **Figure [3](#F3){ref-type="fig"}**.
### IL-1
The change in the gene expression level of IL-1 was deemed to be statistically insignificant in all groups as compared to the negative control. However, when graphically represented in **Figure [3](#F3){ref-type="fig"}**, an increase in IL-1 expression by the CN group as compared to the CV group is visible. The IL-1 expression level was reduced in the CP, T50, T100, and T200 groups as compared to the CN group. The IL-1 expression level for each zebrafish group is graphically represented in **Figure [3](#F3){ref-type="fig"}**.
Discussion
==========
This work aims to determine whether an ethanolic leaf extract of *O. stamineus* has the potential to be a novel treatment for epileptic seizures. To that end, a toxicity study was carried out to determine whether the extract is safe for use with zebrafish, as well as to determine the doses to be used for the subsequent behavioral study. The toxicity study had to be conducted because no study using this extract on adult zebrafish has been published before; a literature search only yielded *O. stamineus* toxicity studies on Sprague Dawley rats ([@B9]) and zebrafish embryos ([@B23]), and thus this work represents the first of its kind. An ethanolic extract of *O. stamineus* was used because ethanolic extracts of *O. stamineus* tend to have the highest concentration of phenolic compounds, followed by methanolic and aqueous extracts ([@B35]). As oxidative stress plays a role in epilepsy ([@B11]) and the phenols in *O. stamineus*, such as rosmarinic acid, possess significant free radical scavenging, anti-inflammatory and antioxidant ability ([@B2]; [@B44]), an ethanolic extract of *O. stamineus* is the ideal choice for this experiment. The results of [@B35] support this idea, as they found that an ethanolic leaf extract of *O. stamineus* possesses the greatest antioxidant activity among ethanolic, methanolic and aqueous extracts. A leaf extract was used over other parts of the plant because experimental evidence such as that of [@B35] shows that extracts of the leaves possess antioxidant activity, and because the traditional remedy for epilepsy utilizes the leaves of the plant ([@B22]). Given the uncertainty associated with any novel experiment, the toxicity study used in this experiment follows a modified version of the OECD Guidelines for the Testing of Chemicals No. 203, which concerns acute toxicity tests in fish.
The test involves the use of the test substance at a concentration of 100 mg/L of water, with a minimum of seven fish each for the treatment and control groups. The principle behind the test is that when there are no fish deaths after an exposure period of 96 h, the LC~50~ for the test substance can be said to be above 100 mg/L with a confidence of 99% or greater ([@B30]). As there have been no prior publications regarding the testing of the anti-convulsive potential of an ethanolic extract of *O. stamineus* in any animal species, this dose determination study was a necessity.
From the zebrafish swimming pattern after exposure to 200 and 400 mg/L of the *O. stamineus* ethanolic leaf extract, no bottom dwelling behavior was observed. Bottom dwelling in zebrafish is associated with anxiety and is initially seen in zebrafish which have just been transferred into a novel tank ([@B8]). As anxiolytics have been found to reduce bottom dwelling ([@B18]), the results of this study suggest that the extract has anxiolytic properties, at least at concentrations of 200 mg/L and above. The finding that an overnight exposure to the *O. stamineus* ethanolic leaf extract at a concentration of 400 mg/L is lethal to adult zebrafish is also noteworthy. In an acute oral toxicity study performed by [@B9], Sprague Dawley rats were administered *O. stamineus* leaf extract at doses of up to 5.0 g/kg of rat body weight, daily for 14 days; this resulted in no rat deaths or any adverse effect on parameters such as body weight, and the authors deemed their methanolic *O. stamineus* whole plant extract to be seemingly lacking in toxic effects. Another toxicity experiment, by [@B23], found that an aqueous extract of *O. stamineus* only significantly causes mortality in zebrafish embryos when the concentration reaches 5.0 g/L of water. However, both of these experiments relied on a different manner of producing *O. stamineus* extracts compared to this study and thus may have involved a different proportion of constituents than the extract we used. In addition, a reliable correlation between zebrafish and rodent toxicities has not been established ([@B12]), and embryonic zebrafish toxicity may also not entirely correlate with adult zebrafish toxicity. Also, unlike dosing a rodent via the oral route, introducing a substance directly into the tank water makes it difficult to determine exactly how much of the substance has been taken up by the zebrafish ([@B26]). Thus, further work needs to be done to determine the mechanism behind the toxicity of *O. stamineus* ethanolic leaf extract in adult zebrafish, to help reconcile the difference in toxicity results between this study and previous ones. It should also be noted that there is no conversion factor for translating zebrafish toxicity to mammalian toxicity, although the zebrafish LC~50~ is generally lower than the corresponding rodent LC~50~ for certain chemicals such as polychlorinated biphenyls ([@B31]; [@B12]). However, given the popularity of *O. stamineus* as a traditional remedy for a plethora of illnesses, combined with multiple pharmacological studies demonstrating beneficial properties such as being hepatoprotective, antioxidant and antihypertensive ([@B3]), as well as a relatively high toxic dose in rats ([@B9]), it is possible that *O. stamineus*-derived AEDs would be relatively safe and non-toxic to humans.
Building on the toxicity study results, this experiment has also demonstrated that pre-treating zebrafish with an ethanolic leaf extract of *O. stamineus* for 30 min significantly increases the mean seizure onset time and decreases the mean seizure score of PTZ-challenged zebrafish in a dose dependent manner. A 100 mg/L dose of the extract was found in this study to rival the anti-convulsive effects of a 10 mg/L dose of the standard AED diazepam, and a 200 mg/L dose of the extract had a stronger anti-convulsive effect than diazepam. The representative zebrafish swimming patterns also showed that diazepam reverses the bottom dwelling seen in PTZ-challenged zebrafish, which is said to be comparable to the stupor-like behavior and anxiety associated with an epileptic condition ([@B27]). The swimming pattern produced by the zebrafish pre-treated with diazepam could be due to the sedative effect of diazepam, as it is a benzodiazepine ([@B20]). In contrast to diazepam, zebrafish pre-treated with the extract produced a swimming pattern very similar to that of the vehicle control, which was not challenged with PTZ. However, the 50 and 200 mg/L extract doses still produced some degree of bottom dwelling, although to a lesser degree than the negative control. This suggests that the 50 and 200 mg/L doses were insufficient to completely prevent the PTZ-induced seizures, which is supported by the mean seizure scores for those doses being greater than zero. Interestingly, the 100 mg/L extract dose completely abolished bottom dwelling, although there was an increase in time spent at the water surface instead, and the mean seizure score for 100 mg/L was also greater than zero. Taken together, the behavioral study results show that the *O. stamineus* ethanolic leaf extract does indeed possess dose dependent anti-convulsive properties, but does not seem to produce the cognitive impairment associated with currently available AEDs such as diazepam.
Thus, our study shows that a novel AED derived from an *O. stamineus* ethanolic leaf extract has the potential to be comparable to diazepam, which is one of the standard AEDs available today. Undoubtedly, further work needs to be conducted to discover the active constituent(s) of *O. stamineus* which contribute to its anti-convulsive properties. A follow-up study similar to this one should then be conducted to test whether a dose of the active constituent comparable to that of standard AEDs still has similar or even better anti-convulsive efficacy. This is because our experiment shows that a dose of crude *O. stamineus* ethanolic leaf extract needs to be 10-fold that of diazepam to equal its effects, which is undesirable, as high doses of substances in general tend to result in more side effects. Among the possible constituents responsible for the anti-convulsive effect of an ethanolic leaf extract of *O. stamineus* are rosmarinic acid, sinensetin, eupatorine and 3′-hydroxy-5,6,7,4′-tetramethoxyflavone, as they represent the major compounds in the extract which have anti-inflammatory action as well as substantial free radical scavenging and antioxidant ability ([@B33]; [@B2]; [@B44]), all of which are factors that seem to protect against epilepsy ([@B11]; [@B40]). However, rosmarinic acid seems the most likely candidate, as several studies have found that it possesses anti-convulsive properties, possibly due to its activation of the GABAergic system ([@B25]; [@B19]) and hence promotion of inhibitory neurotransmission. Rosmarinic acid is also neuroprotective as a result of its antioxidant and free radical scavenging abilities ([@B14]). Data provided by the manufacturer of our standardized extract also reiterate the importance of rosmarinic acid, as they found that rosmarinic acid (5.02%) was the most abundant of the four marker compounds they tested, followed by sinensetin (0.21%), eupatorine (0.17%), and 3′-hydroxy-5,6,7,4′-tetramethoxyflavone (trace amounts).
Interestingly, doubling the dose from 50 to 100 mg/L produced a much larger positive effect on both mean seizure onset time and seizure score than doubling the dose from 100 to 200 mg/L. This suggests that some as yet unknown factor could be limiting the bioavailability of the extract, at least for the given exposure period of 30 min. However, it is worth reiterating that the actual amount of substance taken up by the zebrafish is not known when the substance is dissolved in the tank water, unlike methods such as an intraperitoneal injection whereby the quantity delivered is defined based on the weight of the fish ([@B26]). Despite the limitations of dissolving the *O. stamineus* ethanolic leaf extract directly into the tank water, this method was utilized by this study because the AEDs used today for the chronic symptomatic treatment of epilepsy are given orally ([@B4]). Thus, as we are aiming to develop a novel AED based on an ethanolic *O. stamineus* leaf extract, it must also work through the oral route: if the AED must be injected into a patient to work, it will likely be underutilized due to the chronic nature of epilepsy, regardless of its efficacy.
Based on the results of the gene expression study, the downregulation of NF-κB by the *O. stamineus* ethanolic leaf extract is unusual, as inhibition of the NF-κB pathway usually results in a decreased seizure threshold ([@B45]). This could be explained by the extract controlling the PTZ-induced seizures via another mechanism, such that there is minimal activation of the NF-κB pathway. This theory is supported by the fact that diazepam also reduces the NF-κB expression level in comparison to the negative control, and that the CP, T100, and T200 groups displayed NF-κB expression levels very similar to the baseline expression level of the CV group. As NF-κB also regulates the expression level of BDNF during seizures ([@B28]), the BDNF expression levels should mirror those of NF-κB. However, we found no significant upregulation of the BDNF expression level after a PTZ-induced seizure for any pre-treated group as compared to the negative control. That said, the role of BDNF in the development of seizures and epilepsy is somewhat controversial: although an upregulation of BDNF is usually associated with a seizure, it is unclear whether this promotes or inhibits seizure development ([@B28]). In the case of NPY, our results are unusual, with diazepam and the 50 mg/L extract dose not having a significant effect on the NPY expression level as compared to the negative control, whereas the 100 and 200 mg/L doses decreased it to around the baseline vehicle control level. Although only the 100 mg/L group represented a significant change, these unusual results could be explained by the anti-convulsive effect of NPY and its regulation of learning and memory ([@B10]). The 50 mg/L dose still produced an upregulation of NPY, as it does not sufficiently control the PTZ-induced seizures on its own and thus requires the assistance of NPY.
Whilst diazepam does control the PTZ-induced seizures, it also negatively affects cognitive abilities ([@B27]), and hence an upregulation of NPY is needed to counteract the resulting cognitive dysfunction. The explanation for the decrease in the NPY expression level for the 100 and 200 mg/L treatment groups is similar to that for NF-κB: the seizures are controlled via other mechanisms, and thus the NPY expression level remains similar to the baseline vehicle control.
In the case of c-Fos expression, we found no significant upregulation as a result of a PTZ-induced seizure and no significant difference in c-Fos expression levels as a result of any treatment. However, according to the literature, a seizure usually results in an increase in c-Fos expression ([@B32]). This discrepancy could be explained by the time between the PTZ challenge and removal of the zebrafish brain, which was 10 min in our experiment. According to [@B6], in rodents at least, c-Fos takes around 30 min to become significantly elevated from baseline levels after challenge with a pro-convulsant; it is possible that in our experiment there was not enough time for c-Fos expression to become significantly elevated. For TNF-α, we found a significant increase in TNF-α expression as a result of a PTZ-induced seizure, which is consistent with the literature ([@B41]). Although all pre-treatments significantly decreased the TNF-α expression level, the T100 and T200 groups had a slightly lower expression level than the baseline vehicle control. This suggests that the ethanolic *O. stamineus* leaf extract may at least partially exert its anti-convulsive effect by acting as an anti-inflammatory agent, as TNF-α is involved in systemic inflammation. The anti-inflammatory action of the extract may in turn be due to its downregulation of TNF-α, along with IL-1, COX-1 and COX-2, as determined by [@B39]. The last gene we tested was IL-1, which showed no significant upregulation in expression level after a PTZ-induced seizure, nor any other significant change for any pre-treated group as compared to the negative control. Whilst this contrasts with reports in the literature of an increase in IL-1 levels after a seizure and of the ability of the extract to decrease IL-1 expression levels ([@B39]), there are conflicting reports which describe a decrease in IL-1 levels after a seizure ([@B34]).
The role of IL-1 in seizures also currently remains unknown and controversial ([@B34]).
Future Directions
=================
Whilst this work represents a significant step in bridging the research gap, further research needs to be conducted on the discovery of the active anti-convulsive compound in the extract. Once identified, dose comparison studies with currently available AEDs should be conducted for a true test of their relative efficacies. Another area of future research is the use of zebrafish tests such as the T-maze, which is designed to assess the cognitive ability of the zebrafish ([@B38]). This would help to confirm that the extract does not cause cognitive impairment in zebrafish, as our zebrafish swimming pattern results suggest.
Conclusion
==========
In conclusion, an ethanolic leaf extract of *O. stamineus* has the potential to be a novel symptomatic treatment for epileptic seizures, as it is pharmacologically active against seizures in a zebrafish model. The anti-convulsive effect of this extract is comparable to that of diazepam at higher doses and can surpass diazepam in certain cases. Treatment with the extract also counteracts the upregulation of NF-κB, NPY, and TNF-α that results from a PTZ-induced seizure. The anti-convulsive action of this extract could be at least partially due to its anti-inflammatory effects, via the downregulation of TNF-α.
Ethics Statement
================
The experimental protocol was approved by the Monash Animal Research Platform (MARP) Animal Ethics Committee, Monash University, Australia (MARP/2017/047).
Author Contributions
====================
BC performed all the experiments and was responsible for the writing of the manuscript in its entirety. UK performed the gene expression study in tandem with BC. MS was responsible for conceptualizing and revising the manuscript. YK, S-MH, and IO were also involved in conceptualizing and proofreading. All authors gave their final approval for the submission of the manuscript.
Conflict of Interest Statement
==============================
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
[^1]: Edited by: *Marianthi Papakosta, Pfizer, United States*
[^2]: Reviewed by: *Thomas Heinbockel, Howard University, United States; Fathi M. Sherif, University of Tripoli, Libya*
[^3]: This article was submitted to Neuropharmacology, a section of the journal Frontiers in Pharmacology
|
Q:
Javascript Canvas Optimization
Here's a js fiddle to what I'm attempting:
http://jsfiddle.net/bnjhhoze/
Everything important happens in
render()
The canvas incorporates the mouse's location into its color calculations for each block for each frame. Now no matter what I seem to cut out of the function, the canvas renders around 10fps at large size (1600 x 900).
Even when it's just rendering purely black, no fill style changes, it renders at 10fps.
Any insight into what could be causing this?
A:
This question is a bit too broad to answer. It could be anything from a small one-line inefficiency to a whole approach that needs reworking, or you really are just pushing too many pixels.
So rather than giving you a fish, you should learn to fish yourself.
Here are docs on the javascript profiling tools built into Chrome. That will tell you how to identify bottlenecks in your code and iron them out.
One thing you should know, though, is that 2D canvas is slow. It's very fill-rate dependent, which means the more pixels you paint, the slower it is. And the nature of your code is that it paints a lot of pixels.
If you could use WebGL, you could achieve some amazing framerates through hardware acceleration. But that requires completely re-architecting everything about this and learning GLSL. See examples here. It's amazing what you can do with GLSL.
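As a concrete illustration of the fill-rate point: a common 2D-canvas speedup is to compute each frame into a raw pixel buffer and blit it with a single putImageData call, instead of issuing one fillStyle/fillRect pair per block. This is only a sketch; the `colourFor` callback is hypothetical, standing in for your mouse-dependent per-block color calculation:

```javascript
// Write one RGBA pixel per cell into a flat buffer, then (in the browser)
// blit the whole frame at once:
//   ctx.putImageData(new ImageData(data, width), 0, 0);
function renderToBuffer(width, height, colourFor) {
  const data = new Uint8ClampedArray(width * height * 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      const [r, g, b] = colourFor(x, y); // your per-block color math
      data[i] = r;
      data[i + 1] = g;
      data[i + 2] = b;
      data[i + 3] = 255; // fully opaque
    }
  }
  return data;
}
```

This avoids per-rect state changes and path setup, but profile first to confirm fillRect really is your bottleneck.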
|
<?php declare(strict_types=1);
/**
* This file is part of the Yasumi package.
*
* Copyright (c) 2015 - 2020 AzuyaLabs
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*
* @author Sacha Telgenhof <me@sachatelgenhof.com>
*/
namespace Yasumi\tests\Australia\Tasmania\Northwest;
use Yasumi\tests\Australia\Tasmania\TasmaniaBaseTestCase;
use Yasumi\tests\YasumiBase;
/**
* Base class for test cases of the northwestern Tasmania holiday provider.
*/
abstract class NorthwestBaseTestCase extends TasmaniaBaseTestCase
{
use YasumiBase;
/**
* Name of the region (e.g. country / state) to be tested
*/
public $region = 'Australia\Tasmania\Northwest';
}
|
Q:
When applying an XSL template to one of two different source files, how can I reference a "matching" node in the other source?
I am using XSLT to transform information from two different source XMLs. Each relevant node in the first source has a node with an equivalent "id" attribute in the second source file which contains extra information that needs to be merged. Any nodes in the second source that don't have a match don't matter, so the first source needs to drive the results.
Here is a simplified version of the problem:
XSL:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:two="http://www.mycompany.com/schemas/1.0">
<xsl:param name="secondDoc" as="document-node()" />
<xsl:template match="/">
<Employees>
<xsl:apply-templates select="$secondDoc/two:People/two:Person" />
</Employees>
</xsl:template>
<xsl:template match="two:Person">
<Employee>
<xsl:value-of select="/Employees/Employee[@id='@id']/FirstName" />
<xsl:value-of select="two:LastName" />
</Employee>
</xsl:template>
</xsl:stylesheet>
First source:
<?xml version="1.0" encoding="UTF-8"?>
<Employees>
<Employee id="1">
<FirstName>John</FirstName>
</Employee>
</Employees>
Second Source:
<?xml version="1.0" encoding="UTF-8"?>
<People xmlns="http://www.mycompany.com/schemas/1.0">
<Person id="1">
<LastName>Doe</LastName>
</Person>
</People>
The method I am trying to use is to create templates that match nodes in the namespace used by the second source, and then match the equivalent node from within the template. The problem is that I'm not sure how to XPath back to the primary document's root. The value-of statements fail, of course.
One method I tried is to add an xsl:with-param to xsl:apply-templates and send the matching node as a variable to the template. This worked if I selected a node manually (Employee[1]), but since the with-param doesn't seem to take the context of the apply-templates select, I'm not sure how to tie the "id" attributes together.
Is there a way to reference back to the root template, or am I going about this the wrong way?
EDIT: I thought of a possible solution, although it may not be the cleanest. I could pass the "Employees" node as a parameter to the template and then match up the "id" tags for the specific employee inside the template and use the resulting node as the reference. This looks like passing the root node in this example, but in reality this is a small section of the overall XSL. Hopefully there's an easier way.
A:
Use a variable to store a reference to the root node of the primary input document:
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:two="http://www.mycompany.com/schemas/1.0">
<xsl:param name="secondDoc" as="document-node()" />
<xsl:variable name="main-root" select="/"/>
<xsl:template match="/">
<Employees>
<xsl:apply-templates select="$secondDoc/two:People/two:Person" />
</Employees>
</xsl:template>
<xsl:template match="two:Person">
<Employee>
<xsl:value-of select="$main-root/Employees/Employee[@id = current()/@id]/FirstName" />
<xsl:value-of select="two:LastName" />
</Employee>
</xsl:template>
</xsl:stylesheet>
And then of course use a key for the cross-reference:
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:two="http://www.mycompany.com/schemas/1.0">
<xsl:param name="secondDoc" as="document-node()" />
<xsl:key name="id" match="Employee" use="@id"/>
<xsl:variable name="main-root" select="/"/>
<xsl:template match="/">
<Employees>
<xsl:apply-templates select="$secondDoc/two:People/two:Person" />
</Employees>
</xsl:template>
<xsl:template match="two:Person">
<Employee>
<xsl:value-of select="key('id', @id, $main-root)/FirstName" />
<xsl:value-of select="two:LastName" />
</Employee>
</xsl:template>
</xsl:stylesheet>
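For readers tracing the logic outside of XSLT, the same id-based cross-document join can be sketched in Python with the standard library (illustrative only; the shape of the primary Employees document, and the "John" value, are assumed from the question's markup):

```python
import xml.etree.ElementTree as ET

NS = "{http://www.mycompany.com/schemas/1.0}"

# Primary input: shape assumed from the stylesheet's Employees/Employee/FirstName paths.
main = ET.fromstring(
    "<Employees><Employee id='1'><FirstName>John</FirstName></Employee></Employees>")

# Second document, as shown in the question.
second = ET.fromstring(
    "<People xmlns='http://www.mycompany.com/schemas/1.0'>"
    "<Person id='1'><LastName>Doe</LastName></Person></People>")

# Analogue of <xsl:key name="id" match="Employee" use="@id"/>: index Employees by @id.
by_id = {e.get("id"): e for e in main.findall("Employee")}

# Analogue of the two:Person template: look up the matching Employee via the shared @id.
merged = [(by_id[p.get("id")].findtext("FirstName"), p.findtext(NS + "LastName"))
          for p in second.findall(NS + "Person")]
```

The dictionary plays the role of the key: one pass over the primary document builds the index, so each Person lookup is O(1) instead of a repeated XPath scan.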
|
Microwave auditory effect
The microwave auditory effect, also known as the microwave hearing effect or the Frey effect, consists of audible clicks (or, with speech modulation, spoken words) induced by pulsed or modulated microwave frequencies. The clicks are generated directly inside the human head without the need for any receiving electronic device. The effect was first reported by persons working in the vicinity of radar transponders during World War II. These induced sounds are not audible to other people nearby. The microwave auditory effect was later found to be inducible with shorter-wavelength portions of the electromagnetic spectrum. During the Cold War era, the American neuroscientist Allan H. Frey studied this phenomenon and was the first to publish[1] information on the nature of the microwave auditory effect.
Pulsed microwave radiation can be heard by some workers; the irradiated personnel perceive auditory sensations of clicking or buzzing. The cause is thought to be thermoelastic expansion of portions of the auditory apparatus.[2] The auditory system responds to frequencies from at least 200 MHz to at least 3 GHz. In the tests, a repetition rate of 50 Hz was used, with pulse widths between 10 and 70 microseconds. The perceived loudness was found to be linked to the peak power density rather than the average power density. At 1.245 GHz, the peak power density for perception was below 80 mW/cm2.[citation needed] However, competing theories explain the results of interferometric holography tests differently.[3]
In 2003–2004, the WaveBand Corp. had a contract from the US Navy to design an MAE system they called MEDUSA (Mob Excess Deterrent Using Silent Audio), intended to remotely and temporarily incapacitate personnel. The project was cancelled in 2005.[4][5][6]
The first American to publish on the microwave hearing effect was Allan H. Frey, in 1961. In his experiments, subjects were found to be able to hear appropriately pulsed microwave radiation at a distance of 100 meters from the transmitter. This was accompanied by side effects such as dizziness, headaches, and a pins-and-needles sensation.
A decade later, an overview of radiation impacts on human perception, published in the American Psychologist, cited investigations at the Walter Reed Army Institute of Research that demonstrated 'receiverless' wireless voice transmission: "Appropriate modulation of microwave energy can result in direct 'wireless' and 'receiverless' communication of speech."[7]
A 1998 patent describes a device intended to scare birds away from wind turbines, aircraft, and other sensitive installations by way of microwave energy pulses. Using frequencies from 1 GHz to about 40 GHz, the warning system generates pulses of millisecond duration, which are claimed to be sensed by the birds' auditory systems. It is believed this may cause them to veer away from the protected object.[8]
As stated in the above-mentioned American Psychologist article, "the averaged densities of energy required to transmit longer messages would approach the current 10mW/cm² limit of safe exposure", which makes the technology unsuitable for human telecommunication. For 'receiverless' wireless sound transmission to human beings, sound from ultrasound is used instead.
There are extensive online support networks and numerous websites maintained by people fearing mind control. California psychiatrist Alan Drucker has identified evidence of delusional disorders on many of these websites[9] and other psychologists are divided over whether such sites negatively reinforce mental troubles or act as a form of group cognitive therapy.[11] |
Covered stenting of the superficial femoral artery using the Viabahn stent-graft.
High initial technical success rates for superficial femoral artery revascularization are possible using multiple modalities. For these interventions to pose a serious challenge to open surgical bypass, improved primary patency in mid- and long-term follow-up must be achieved. In several smaller studies, covered stenting of the superficial femoral artery has shown superior patency at 2 years after intervention. The ongoing VIBRANT trial randomizes patients to superficial femoral artery intervention with either bare nitinol stents or Viabahn covered stents. The 3-year outcomes data from this trial will better define the role of percutaneous intervention for complex superficial femoral artery disease. |
Treatment of acute Achilles tendon rupture: fibrin glue versus fibrin glue augmented with the plantaris longus tendon.
In the surgical repair of Achilles tendon ruptures, suturing is standard, although fibrin glue also has been used for repair since the 1980s. Augmentation with the plantaris longus tendon is also a popular technique; however, no study has yet compared the outcome of augmented versus only glued repair of ruptured Achilles tendons. This study compares the long-term results of surgical repair of Achilles tendon rupture with fibrin glue versus fibrin glue augmented with the plantaris longus tendon. Forty patients who had undergone Achilles tendon repair with fibrin glue took part in a follow-up examination after an average of 11.5 years. The fibrin group consisted of 16 patients and the fibrin glue augmented with plantaris longus tendon group consisted of 15 patients. The modified Thermann score (adapted from Weber) and results of an isokinetic force measurement were the same in both groups, and complications in the 2 groups also did not differ. We conclude that augmentation with the plantaris longus tendon is not necessary when operatively treating acute ruptured Achilles tendons with fibrin glue. |
The Week: News highlights 11th March 2017
March 7, 2017
Highlights from Ireland this week
Johnny Sexton’s Crumlin trip for biggest fan
Ireland rugby star Johnny Sexton touched the hearts of the country last week after he made an unannounced trip to Crumlin children’s hospital in Dublin to visit one of his biggest fans. A video made during Ireland’s last Six Nations game against France went viral due to the reaction of super-fan Tom Cahill.
The youngster was singing along to Ireland's Call when he recognised Johnny Sexton in the players' line-up from the player's visit to the hospital over Christmas, while Tom was receiving chemotherapy. The video went viral, and after seeing it Johnny, who had only just returned to international duty in the France game after injury, called the hospital, which told him that Tom was back in, receiving more chemo.
He surprised the young fan and presented him with his signed match jersey from the France game. The unpublicised visit from Johnny has been liked on Facebook nearly 2,000 times.
Ireland recognises Travellers as separate ethnic group for first time
Travellers have been recognised as a separate ethnicity in Ireland for the first time. Taoiseach Enda Kenny made the announcement in the Dáil last week and the news was greeted with a standing ovation by TDs. Mr Kenny referred to it as an “historic” and “proud” day for Ireland after many years of campaigning from the Traveller community.
“Our Traveller community is an integral part of our society for over a millennium, with their own distinct identity – a people within our people,” he said.
It is thought that there are about 30,000 people living in the Republic of Ireland who are members of the Travelling community, approximately 0.6 per cent of the total population. Mr Kenny said he recognised the inequalities and discrimination faced by the Travelling community and that there are "a range of special programmes and interventions to help deal with this".
The decision was also praised by Irish President Michael D. Higgins who described it as “momentous”.
“I have no doubt that [the] clarification will be of assistance in interpreting legislation in relation to Travellers’ rights, and ensuring respect for Travellers’ distinct identity within the fabric of Irish society,” he said. Campaigners expressed their joy at the decision.
Irish cement giant rules itself out of Trump’s Mexican Wall bids
Irish construction firm Cement Roadstone Holdings (CRH) has ruled itself out of having anything to do with Donald Trump's planned wall along the border between the US and Mexico.
CRH was founded and is headquartered in Ireland, and is the largest cement and construction materials group in America. Its chief executive Albert Manifold said last week that the project was 'not of interest to us or relevant to us'. He said the question of the company participating in the construction of a wall along the 2,000-mile border 'doesn't arise', as it does not have any 'significant presence in the extreme south' of the US.
Last week, US Customs and Border Protection said it will start awarding contracts by the middle of April for Mr Trump’s proposed ‘great wall’ to prevent illegal immigrants coming into the US. The agency said it would start to request bids on Monday, March 6th, and that interested companies would have to submit ‘concept papers’ to design and build prototypes by March 10th. |
Q:
How should I handle the Uber surge confirmation url in iOS?
What I mean is that I have received the Uber 409 Conflict surge error JSON response:
{
"meta": {
"surge_confirmation": {
"href": "https:\/\/api.uber.com\/v1\/surge-confirmations\/e100a670",
"surge_confirmation_id": "e100a670"
}
},
"errors":[
{
"status": 409,
"code": "surge",
"title": "Surge pricing is currently in effect for this product."
}
]
}
So what I should do is present the HTML5 page at the following url, right?
"href": "https:\/\/api.uber.com\/v1\/surge-confirmations\/e100a670"
And here comes my two questions:
I would like to present the page in an iOS UIWebView, but I don't know how to close the webview when the user taps the I ACCEPT HIGHER FEE button on the HTML5 page, since I don't know how to leverage the surge confirmation redirect url;
When I load the url in Chrome on OS X and click the button, it redirects to the next page; when I load the url in a UIWebView, clicking the button doesn't trigger anything.
Many thanks.
A:
So basically, I figured out that the surge confirmation id is added as a query param to the redirect url. So what you need to do is capture the redirect url (if you didn't register one, you cannot capture the confirmation id string) and parse it.
Also, I think that if you don't register a redirect url, Uber considers that the user has denied the surge price, whether or not the button was clicked. So if you want to enable surge confirmation, you have to register a redirect url.
And the reason nothing happened after I clicked the button is that I hadn't registered a redirect url for the surge confirmation, so it didn't navigate anywhere.
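As a rough sketch of that "parse it" step (the redirect scheme below is hypothetical; substitute whatever redirect url you actually registered with Uber):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical redirect url your app registered; Uber appends the
# confirmation id as a query parameter when the user accepts the surge.
redirect_url = "myapp://uber/callback?surge_confirmation_id=e100a670"

# Split off the query string and pull the id out of it.
params = parse_qs(urlparse(redirect_url).query)
surge_confirmation_id = params["surge_confirmation_id"][0]
```

In a UIWebView you can intercept the redirect in the `webView:shouldStartLoadWithRequest:navigationType:` delegate callback, extract the id from the request url the same way, dismiss the webview, and then resend the ride request with the confirmation id attached.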
|
Officers for U.S. Customs and Border Protection at an Express Consignment Operations hub in Louisville recently seized six shipments containing 2,909 counterfeit driver’s licenses and 3,123 blank card stocks to make counterfeit driver’s licenses.
All of these shipments originated from China and were being shipped to various people in the New York area. The driver's licenses were for various states, including Florida, Michigan, Illinois, Missouri, New Jersey, Ohio and others. CBP Louisville also notified CBP Memphis, which also had shipments in its area, resulting in the seizure of an additional 527 counterfeit driver's licenses.
One of these shipments was identified as being consigned to a convicted child rapist in the New York area. It is suspected that this consignee entices minors with alcohol and counterfeit IDs before engaging in illicit activity. The Border Enforcement Security Task Force (BEST) in Louisville and Chicago's Tactical Analytical Unit also identified similarities between multiple shipments destined for multiple consignees. BEST Louisville presented the findings to the Homeland Security Investigations New York office, which validated that they were all interconnected. HSI is continuing to investigate.
While CBP sees these dark web transactions frequently, according to Thomas Mahn, Louisville Port Director, the reasoning for buying fake IDs has evolved from teenagers trying to get into bars to more nefarious activity. “Some of the major concerns as it relates to fraudulent identity documents is identity theft, worksite enforcement, critical infrastructure protection, fraud linked to immigration-related crimes such as human smuggling and human trafficking, and these documents can be used by those individuals associated with terrorism to minimize scrutiny from travel screening measures.”
CBP Officers coordinate findings with CBP’s Fraudulent Document Analysis Unit, Homeland Security Investigations and other federal partners in an effort to combat this illicit activity.
CBP routinely conducts inspection operations on arriving and departing international flights and intercepts narcotics, weapons, currency, prohibited agriculture products, counterfeit goods, and other illicit items at our nation’s 328 international ports of entry. |
# Learn Programming with Huang Ge: Insertion Sort
If you find Huang Ge's articles helpful, please consider a tip. Alipay account: 18610508486@163.com
## Insertion sort
**Insertion sort** is a simple, intuitive [sorting algorithm](https://zh.wikipedia.org/wiki/%E6%8E%92%E5%BA%8F%E7%AE%97%E6%B3%95). It works by building up a sorted sequence: for each piece of unsorted data, it scans the already-sorted sequence from back to front, finds the appropriate position, and inserts the element there. **Insertion sort** is usually implemented in-place (that is, using only O(1) extra space), so during the back-to-front scan the sorted elements must repeatedly be shifted one position back to make room for the new element.
## Algorithm description
Generally, **insertion sort** is implemented in-place on an array. The algorithm is as follows:
1. Start with the first element, which can be considered already sorted
2. Take the next element and scan the already-sorted sequence from back to front
3. If the (sorted) element is greater than the new element, move it one position back
4. Repeat step 3 until a sorted element less than or equal to the new element is found
5. Insert the new element after that position
6. Repeat steps 2–5
## Huang Ge's Python and Go implementations
### Python insertion sort
#coding:utf-8
"""
How to learn programming by learning Python
https://github.com/pythonpeixun/article/blob/master/python/how_to_learn_python.md
Huang Ge's remote Python video training class
https://github.com/pythonpeixun/article/blob/master/index.md
Preview videos of Huang Ge's Python training
https://github.com/pythonpeixun/article/blob/master/python_shiping.md
Helping you get from not being able to write code to solving problems with code.
QQ for inquiries: 1465376564
"""
def insert_sort(lst):
    length = len(lst)
    for i in range(1, length):
        tmp = lst[i]
        for j in range(i-1, -1, -1):
            if lst[j] > tmp:
                lst[j+1] = lst[j]
            else:
                lst[j+1] = tmp
                break
        if lst[0] > tmp:
            lst[0] = tmp

if __name__ == '__main__':
    lst = [8, 2, 4, 1, 9, 20, 15, 6, 0]
    insert_sort(lst)
    print(lst)
### Go insertion sort
package main
import (
"fmt"
)
func InsertSort(lst []int) {
length := len(lst)
for i := 1; i < length; i++ {
tmp := lst[i]
for j := i - 1; j >= 0; j-- {
if lst[j] > tmp {
lst[j+1] = lst[j]
} else {
lst[j+1] = tmp
break
}
}
		// A small trick here: in Go, j is scoped to the inner for loop,
		// so we check the first element directly
if lst[0] > tmp {
lst[0] = tmp
}
}
}
func main() {
lst := []int{3, 8, 2, 9, 7, 12, 33, 6, 97, 48, 23}
InsertSort(lst)
fmt.Println(lst)
}
## C implementation
void insertion_sort(int arr[], int len) {
int i, j;
int temp;
for (i = 1; i < len; i++) {
        temp = arr[i]; // compare with each already-sorted element; shift any element greater than temp one position back
        for (j = i - 1; j >= 0 && arr[j] > temp; j--) // when j reaches -1, short-circuit evaluation avoids reading arr[-1]
arr[j + 1] = arr[j];
        arr[j + 1] = temp; // place the element being sorted in its correct position
}
}
## Two other Python versions
def insertion_sort(n):
    if len(n) == 1:
        return n
    b = insertion_sort(n[1:])
    m = len(b)
    for i in range(m):
        if n[0] <= b[i]:
            return b[:i] + [n[0]] + b[i:]
    return b + [n[0]]
def insertion_sort(lst):
    if len(lst) == 1:
        return
    for i in range(1, len(lst)):
        temp = lst[i]
        j = i - 1
        while j >= 0 and temp < lst[j]:
            lst[j + 1] = lst[j]
            j -= 1
        lst[j + 1] = temp
## Algorithmic complexity
To sort a sequence of n elements in ascending order, **insertion sort** has a best case and a worst case. The best case is a sequence that is already in ascending order, in which only *(n-1)* comparisons are needed. The worst case is a sequence in descending order, which requires *n(n-1)/2* comparisons in total. The number of assignments performed by **insertion sort** is the number of comparisons minus *(n-1)*. On average, the complexity of **insertion sort** is **O(n²)**, so it is not suitable for sorting large amounts of data. However, if the amount of data is small (for example, under a thousand elements), **insertion sort** is still a good choice. Insertion sort is also widely used in production libraries: both the STL's sort and stdlib's qsort use insertion sort as a complement to quicksort, for sorting small numbers of elements (typically 8 or fewer).
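The best- and worst-case comparison counts above are easy to check with an instrumented copy of the sort (a sketch for illustration only):

```python
def insert_sort_count(lst):
    """Insertion sort (in-place) that also returns the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(lst)):
        tmp = lst[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if lst[j] > tmp:
                lst[j + 1] = lst[j]  # shift the larger element one position back
                j -= 1
            else:
                break
        lst[j + 1] = tmp
    return comparisons

best = insert_sort_count(list(range(10)))           # already ascending: n-1 = 9
worst = insert_sort_count(list(range(10, 0, -1)))   # descending: n(n-1)/2 = 45
```

For n = 10 this reproduces exactly (n-1) = 9 comparisons in the best case and n(n-1)/2 = 45 in the worst case.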
[Preview videos of Huang Ge's Python training](https://github.com/pythonpeixun/article/blob/master/python_shiping.md)
[Huang Ge's remote Python video training class](https://github.com/pythonpeixun/article/blob/master/index.md)
Note: the text above is adapted from Wikipedia.
|
James Brightman, Monday 27th November 2017
By now you've probably heard about Federal Communications Commission chairman Ajit Pai's plan to kill net neutrality, a set of rules established by President Obama's administration to keep the internet a free and open service for all - so that internet service providers like Verizon or Comcast can't suddenly block, throttle or favor traffic from one source. Pai's proposal isn't the law of the land yet, but it's expected to pass when the FCC votes on December 14.
Once this plan is approved, the internet will slowly lose its democratic nature and become an oligarchy ruled by greedy ISPs who can do what they please in the absence of any real regulation. As the New York Times reported, "Under a repeal, companies like AT&T and Comcast may be able to charge people higher fees to access certain websites and online services. The companies may also be able to prioritize their own services while disadvantaging websites run by rivals."
It's not hard to imagine how bad things could get for both gamers and game companies under this new-look internet. One look at wireless carrier Meo in Portugal, as noted by Business Insider, gives us an example of what a multi-tiered internet could mean. Users would have to pay for basic access to the internet, and then select "packages" with additional monthly fees, whether for access to video streaming like Netflix or Hulu, email, social networking, music, and so on.
"Anyone who cares about multiplayer online gaming should be up in arms about the imminent demise of net neutrality in the USA" Jeremy Stieglitz, Studio Wildcard
Hypothetically, Comcast or any of the ISPs could introduce a "Premium Gaming Plan" that gives you high-speed access to your favorite services like Xbox Live, PlayStation Network, Steam, etc. If you're a serious gamer and want fast download speeds and low latency for online gaming on these services, you'd have no choice but to pay the extra money to subscribe. At the same time, the ISP could charge Xbox extra money to provide the service with a "fast lane" on its network - but that cost is then passed on to the players.
Or, what if Comcast, which already hosts games, decides to get more serious about becoming a gaming service itself? As digital gets bigger and bigger and publishers like EA talk about streaming and subscriptions becoming the next big thing for the industry, what's to stop someone like Comcast from favoring their own service's traffic over another's? The customers who need to use Comcast for their internet access in this situation are completely screwed - and by the way, in many more rural parts of America there may be no other ISP choice for a consumer to switch to.
Gaming, of course, can account for huge amounts of data, especially as more high-end games offer 4K and HDR visuals; the download files are getting ridiculously massive. And regular online gameplay itself uses up big portions of bandwidth. Like a mobster threatening to break your leg if you don't pay up, who's to say that any of these ISPs can't cap or throttle a gamer's data once net neutrality is removed?
Studio Wildcard co-founder and co-creative director Jeremy Stieglitz commented to me, "Anyone who cares about multiplayer online gaming should be up in arms about the imminent demise of net neutrality in the USA. It's completely destructive to the idea of fair and level competitive gameplay to have throttled bandwidth depending on whether you are a small title or a part of a big commercial enterprise.
"Once the network carriers decide they can prioritize bandwidth to their own offerings above anything else, independent games such as Ark are likely to suffer. This performance degradation may not happen overnight, but it almost surely will happen once the carriers decide to commercially exploit the extreme power they will have been given. Gamers everywhere should try fight this, to the extent that they can make their voices heard."
And it's not just about gamers. Game companies, especially startups, will almost certainly be affected by all of this. If you're one of the smaller companies in gaming, you have no chance to compete with juggernauts like Steam anymore because the playing field is no longer level. For example, Steam might pay to be included in the ISP's gaming package, get that fast lane, but then up-and-coming download portals could see their speeds throttled. It's just not a fair fight, and it's bad news for a free economy.
Jason Citron, co-founder and CEO of gaming chat app Discord, explained to Wired, "Net neutrality is incredibly important for small startups like Discord because all internet traffic needs to be treated as equal for us all to have access to the same resources as the big companies."
"FCC chairman Pai was one of Verizon's top lawyers - should we really be surprised that he's looking out for Verizon's interests over the people? It's a classic case of the fox watching the hen house"
Jeremy Dunham, VP of Publishing at Psyonix, which operates the hit online game Rocket League, told me, "We will be watching the rules vote on December 14 very closely. Rocket League has millions of active monthly players and any law or scenario that could jeopardize people's access to it is definitely a concern. We are hopeful that players will continue to have great access to our game."
What this all amounts to, sadly, is class warfare. It turns the internet into a world of haves and have-nots, and we already have enough of that in the games industry to begin with. The richest gamers will have the best bandwidth and lowest latency, gaining an unfair advantage in competitive online games, while the biggest and richest games companies will be able to throw their considerable weight around so that the little guys don't even have a fighting chance. The world of games streaming could be turned on its head, too. Aspiring online influencers who look to Twitch, YouTube and Facebook to stream for a living may suddenly find streaming to be a much tougher endeavor without net neutrality rules to protect their open internet.
And as much as this wasn't intended to be a political post, the fact is that net neutrality is automatically a political issue. When the government is allowed to be run by the upper one percent, it's not "by the people, for the people," as Abraham Lincoln envisioned. It's no secret that many of those given power by the Trump administration to "regulate" or watch over industry have had direct ties to the very sectors they are supposed to be monitoring. FCC chairman Pai was one of Verizon's top lawyers - should we really be surprised that he's looking out for Verizon's interests over the people? It's a classic case of the fox watching the hen house.
The good news - and something I'm thankful for, having just celebrated Thanksgiving - is that there's still time to fight this heinous FCC plan. If there's enough public outcry, maybe, just maybe Pai and the FCC will relent. I encourage you all to take action: here's a good place to start. If the gaming community can muster enough strength to steer EA away from loot boxes in Star Wars Battlefront II, then perhaps we can make our voices heard on more important issues like net neutrality.
And hey, if Pai still gets his way, there is perhaps one small silver lining: we'll all get to stop talking about epic single-player adventures going extinct. Now excuse me while I go boot up Skyrim... |
Anti-inflammatory and antiapoptotic effects of mesenchymal stem cells transplantation in rat brain with cerebral ischemia.
Excessive inflammation and apoptosis contribute to the pathogenesis of ischemic brain damage. Nuclear factor-kappa B (NF-κB) is considered to be a key protein complex involved in this cascade of events. The aim of the present study was to clarify the protection mechanism of mesenchymal stem cells (MSCs). Lewis rats (N = 90) were randomly assigned to three groups: (1) the sham-operated group; (2) the saline group, in which the animals underwent transient middle cerebral artery occlusion (tMCAO, for 2 hours) and were treated with saline through the tail vein; and (3) the MSCs group, in which the animals underwent tMCAO (for 2 hours) and were infused with cultured human MSCs (4 × 10(6)/0.4 ml PBS) through the tail vein. At days 1 and 3 post-MSCs infusion, real-time PCR, Western blot, and immunohistochemical analyses were applied to measure tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), P-IKKβ, p53, and B-cell lymphoma 2 (Bcl-2) expression levels. TNF-α and IL-1β messenger RNA (mRNA) and P-IκB-α, P-IKKβ, and p53 protein expression levels were significantly increased in the saline group compared with the sham group. However, IκB-α and Bcl-2 protein expression levels were markedly decreased in the saline group. After injection of BrdU(+) MSCs, the expression levels of TNF-α, IL-1β mRNA and P-IκB-α, P-IKKβ, p53 protein were significantly decreased. Contrary to these findings, IκB-α and Bcl-2 protein expression levels were markedly increased. In addition, we found that the infarct area was significantly reduced in the MSCs group. These results suggest that the neuroprotection afforded by MSCs is attributable to their anti-inflammatory and antiapoptotic effects through inhibition of NF-κB. |
1. Field
Example embodiments of the following description relate to a signal processing apparatus and method, and more particularly, to a signal processing apparatus and method for providing a 3-dimensional (3D) sound effect by separating an input signal into a primary signal and an ambience signal.
2. Description of the Related Art
In order to apply a 3-dimensional (3D) sound effect to an audio signal, an ambience signal that corresponds to a background signal and noise needs to be extracted from an input signal. Conventionally, the ambience signal to be extracted from the input signal is determined by a coherence value of a predetermined section. In a physical sense, the coherence value refers to a statistical value of interference between two signals in the predetermined section.
Extraction of the ambience signal based on the coherence of the predetermined section may be efficient in a relatively simple signal. However, in a variable signal, it is difficult to quickly determine similarity. Therefore, noise may be mixed into a separated primary signal, or separation of the ambience signal and the primary signal may not be performed accurately.
Furthermore, when the coherence is extracted according to a conventional method, a phase difference between a left signal and a right signal of the input signal may not be reflected correctly. According to conventional art, the coherence always has a value between 0 and 1, so even if the phase of the left signal is 1+j and the phase of the right signal is −1−j, that is, opposite to the left signal, the coherence becomes 1. That is, the phase difference between the left signal and the right signal may not be properly reflected.
Accordingly, there is a demand for a method of reflecting a phase difference of an input signal while quickly extracting an ambience signal, even from a variable signal. |
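The phase problem described above can be seen with a one-sample sketch in plain Python (illustrative only; real coherence estimates average over a whole section, but the sign loss is the same):

```python
# Two single-bin "signals": the right channel is the left in exactly opposite phase.
left, right = 1 + 1j, -1 - 1j

cross = left * right.conjugate()            # cross term: -2 + 0j
p_left = (left * left.conjugate()).real     # power of the left signal: 2.0
p_right = (right * right.conjugate()).real  # power of the right signal: 2.0

# Magnitude-squared coherence discards the sign/phase of the cross term,
# so opposite-phase signals still score 1.
coherence = abs(cross) ** 2 / (p_left * p_right)

# Keeping the real part of the cross term preserves the phase opposition (-1).
phase_aware = cross.real / (p_left * p_right) ** 0.5
```

Here `coherence` evaluates to 1.0 even though the channels are in antiphase, while the phase-aware correlation evaluates to -1.0, matching the patent's complaint about the conventional measure.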
Pentylenetetrazol and strychnine convulsions in brain weight selected mice.
The seizure sensitivities to pentylenetetrazol (Ptz, 25-100 mg/kg) and strychnine (S, 2 mg/kg) were tested in two mouse lines selected for large (LB) and small (SB) brain weight (the brain weight difference being approximately 75 mg). The selection was based on a regression line connecting body and brain weight. SB mice were more sensitive to both drugs: their seizure latencies were shorter and lethality higher than in LB mice. The seizures generated by Ptz and S are known to affect different neurotransmitter systems. The interstrain differences in seizure susceptibility are probably determined by traits of the SB mice's nervous system rather than by differences in a particular neurochemical trait. The data on neocortical cytoarchitectonics obtained during our previous brain selection experiment could serve as indirect evidence favouring such a suggestion.
USE devops_ci_op;
SET FOREIGN_KEY_CHECKS=0;
--
-- Table structure for table `SPRING_SESSION_ATTRIBUTES`
--
CREATE TABLE IF NOT EXISTS `SPRING_SESSION_ATTRIBUTES` (
`SESSION_ID` char(36) NOT NULL,
`ATTRIBUTE_NAME` varchar(200) NOT NULL,
`ATTRIBUTE_BYTES` blob,
PRIMARY KEY (`SESSION_ID`,`ATTRIBUTE_NAME`),
KEY `SPRING_SESSION_ATTRIBUTES_IX1` (`SESSION_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
--
-- Table structure for table `dept_info`
--
CREATE TABLE IF NOT EXISTS `dept_info` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`create_time` datetime DEFAULT NULL,
`dept_id` int(11) NOT NULL,
`dept_name` varchar(100) NOT NULL,
`level` int(11) NOT NULL,
`parent_dept_id` int(11) DEFAULT NULL,
`update_time` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UK_bd8ig9ecbopp3592f9fcpb99p` (`dept_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
--
-- Table structure for table `project_info`
--
CREATE TABLE IF NOT EXISTS `project_info` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`approval_status` int(11) DEFAULT NULL,
`approval_time` datetime DEFAULT NULL,
`approver` varchar(100) DEFAULT NULL,
`cc_app_id` int(11) DEFAULT NULL,
`created_at` datetime DEFAULT NULL,
`creator` varchar(100) DEFAULT NULL,
`creator_bg_name` varchar(100) DEFAULT NULL,
`creator_center_name` varchar(100) DEFAULT NULL,
`creator_dept_name` varchar(100) DEFAULT NULL,
`english_name` varchar(255) DEFAULT NULL,
`is_offlined` bit(1) DEFAULT NULL,
`is_secrecy` bit(1) DEFAULT NULL,
`project_bg_id` int(11) DEFAULT NULL,
`project_bg_name` varchar(100) DEFAULT NULL,
`project_center_id` varchar(50) DEFAULT NULL,
`project_center_name` varchar(100) DEFAULT NULL,
`project_dept_id` int(11) DEFAULT NULL,
`project_dept_name` varchar(100) DEFAULT NULL,
`project_id` varchar(100) DEFAULT NULL,
`project_name` varchar(100) DEFAULT NULL,
`project_type` int(11) DEFAULT NULL,
`use_bk` bit(1) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UK_bvtnw8dekf2y9gbxt7thib8vj` (`project_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
--
-- Table structure for table `role`
--
CREATE TABLE IF NOT EXISTS `role` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`description` varchar(255) DEFAULT NULL,
`name` varchar(255) NOT NULL,
`ch_name` varchar(255) DEFAULT NULL,
`create_time` datetime DEFAULT NULL,
`modify_time` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UK_8sewwnpamngi6b1dwaa88askk` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
--
-- Table structure for table `role_permission`
--
CREATE TABLE IF NOT EXISTS `role_permission` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`expire_time` datetime DEFAULT NULL,
`role_id` int(11) DEFAULT NULL,
`url_action_id` int(11) DEFAULT NULL,
`create_time` datetime DEFAULT NULL,
`modify_time` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `FKa6jx8n8xkesmjmv6jqug6bg68` (`role_id`),
KEY `FKij92vnr0qkd97skbk7yt3mk32` (`url_action_id`),
CONSTRAINT `FKa6jx8n8xkesmjmv6jqug6bg68` FOREIGN KEY (`role_id`) REFERENCES `role` (`id`),
CONSTRAINT `FKgg1vfrini4olsrbjhubgrggam` FOREIGN KEY (`url_action_id`) REFERENCES `url_action` (`id`),
CONSTRAINT `FKij92vnr0qkd97skbk7yt3mk32` FOREIGN KEY (`url_action_id`) REFERENCES `url_action` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
--
-- Table structure for table `schema_version`
--
CREATE TABLE IF NOT EXISTS `schema_version` (
`installed_rank` int(11) NOT NULL,
`version` varchar(50) DEFAULT NULL,
`description` varchar(200) NOT NULL,
`type` varchar(20) NOT NULL,
`script` varchar(1000) NOT NULL,
`checksum` int(11) DEFAULT NULL,
`installed_by` varchar(100) NOT NULL,
`installed_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`execution_time` int(11) NOT NULL,
`success` tinyint(1) NOT NULL,
PRIMARY KEY (`installed_rank`),
KEY `schema_version_s_idx` (`success`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
--
-- Table structure for table `spring_session`
--
CREATE TABLE IF NOT EXISTS `spring_session` (
`SESSION_ID` char(36) NOT NULL,
`CREATION_TIME` bigint(20) NOT NULL,
`LAST_ACCESS_TIME` bigint(20) NOT NULL,
`MAX_INACTIVE_INTERVAL` int(11) NOT NULL,
`PRINCIPAL_NAME` varchar(100) DEFAULT NULL,
PRIMARY KEY (`SESSION_ID`),
KEY `SPRING_SESSION_IX1` (`LAST_ACCESS_TIME`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
--
-- Table structure for table `t_user_token`
--
CREATE TABLE IF NOT EXISTS `t_user_token` (
`user_Id` varchar(255) NOT NULL,
`access_Token` varchar(255) DEFAULT NULL,
`expire_Time_Mills` bigint(20) NOT NULL,
`last_Access_Time_Mills` bigint(20) NOT NULL,
`refresh_Token` varchar(255) DEFAULT NULL,
`user_Type` varchar(255) DEFAULT NULL,
PRIMARY KEY (`user_Id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
--
-- Table structure for table `url_action`
--
CREATE TABLE IF NOT EXISTS `url_action` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`action` varchar(255) NOT NULL,
`description` varchar(255) DEFAULT NULL,
`url` varchar(255) NOT NULL,
`create_time` datetime DEFAULT NULL,
`modify_time` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
--
-- Table structure for table `user`
--
CREATE TABLE IF NOT EXISTS `user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`chname` varchar(255) DEFAULT NULL,
`create_time` datetime DEFAULT NULL,
`email` varchar(255) DEFAULT NULL,
`lang` varchar(255) DEFAULT NULL,
`last_login_time` datetime DEFAULT NULL,
`phone` varchar(255) DEFAULT NULL,
`username` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
--
-- Table structure for table `user_permission`
--
CREATE TABLE IF NOT EXISTS `user_permission` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`expire_time` datetime DEFAULT NULL,
`url_action_id` int(11) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
`create_time` datetime DEFAULT NULL,
`modify_time` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `FK9ng630d8o1q73hhvyr73fjg8j` (`url_action_id`),
KEY `FK7c2x74rinbtf33lhdcyob20sh` (`user_id`),
CONSTRAINT `FK7c2x74rinbtf33lhdcyob20sh` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`),
CONSTRAINT `FK9ng630d8o1q73hhvyr73fjg8j` FOREIGN KEY (`url_action_id`) REFERENCES `url_action` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
--
-- Table structure for table `user_role`
--
CREATE TABLE IF NOT EXISTS `user_role` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`role_id` int(11) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
`expire_time` datetime DEFAULT NULL,
`create_time` datetime DEFAULT NULL,
`modify_time` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `FKa68196081fvovjhkek5m97n3y` (`role_id`),
KEY `FK859n2jvi8ivhui0rl0esws6o` (`user_id`),
CONSTRAINT `FK859n2jvi8ivhui0rl0esws6o` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`),
CONSTRAINT `FKa68196081fvovjhkek5m97n3y` FOREIGN KEY (`role_id`) REFERENCES `role` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
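--
-- Example (illustrative only, not part of the dump): resolving the URL/action
-- permissions a user holds through role membership, skipping expired grants.
-- The username 'alice' is hypothetical.
--
-- SELECT DISTINCT ua.`url`, ua.`action`
-- FROM `user` u
-- JOIN `user_role` ur ON ur.`user_id` = u.`id`
-- JOIN `role_permission` rp ON rp.`role_id` = ur.`role_id`
-- JOIN `url_action` ua ON ua.`id` = rp.`url_action_id`
-- WHERE u.`username` = 'alice'
--   AND (ur.`expire_time` IS NULL OR ur.`expire_time` > NOW())
--   AND (rp.`expire_time` IS NULL OR rp.`expire_time` > NOW());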
SET FOREIGN_KEY_CHECKS=1;
It started at an early age when she won a few bucks at a local rodeo competition.
From that moment, she was hooked for life.
Competing in barrel racing — a rodeo event where a competitor and their horse attempt to complete a clover-leaf pattern around preset barrels in the fastest time — Cunningham won $10 at a local rodeo.
“Back in the day, that was a lot of money,” she said with a laugh.
As she progressed in the sport, the prize money kept increasing, to the point that as a young teenager, she could “buy a whole new wardrobe” with the $50 prize money.
“You get to make a little bit of money and that is pretty enticing,” she explained about the allure of the sport.
The rodeo lifestyle was a given for Cunningham and her two siblings, who grew up on a farm in Kamloops.
Their father was a calf-roper — Cunningham said she gave the sport a try but lacked the co-ordination — and she described their mother as “absolutely horse-crazy.”
Before she was even two years old, Cunningham was riding horses, and she owned her first horse at age 8.
She worked her way from competing at junior and Little Britches Rodeo, then on to the high school circuit, the amateur rodeo association, and then finally professionally, with the Women’s Professional Rodeo Association.
The rodeo circuit takes Cunningham across western Canada, as well as parts of the United States.
Competing in about 40 events a year, Cunningham can be gone for six months at a time.
“There is no down time; sometimes the days are 14, 15, 16 hours long,” said the 49-year-old.
“You really have to sacrifice a lot to get to the level we go at.”
While her kids — a son and daughter — are all grown up now, the travel made for some tough times for the family when they were younger.
Sometimes both kids would attend with their mother, while other times, Cunningham’s daughter would travel with her to a competition, while her son would stay home and take care of the family farm.
But regardless, Cunningham — who also works as a lottery representative for the B.C. Lottery Corporation — can’t imagine doing anything else.
“I keep threatening to quit, and it doesn’t work out that way, I am still going,” she said.
“I think rodeoing is just a little bit of an addiction and it is very hard to give up.
“And if you were to ask any other competitor, they would tell you the same thing.”
Cunningham loves the rush of excitement that takes over her body in the minutes leading up to a competition.
“The two minutes before you get to go out there and make your run, I swear that is what I do it for, the adrenalin rush before you get to go,” she explained. “The rest of it is a lot of work and a lot of sacrifice, but those two minutes where you get to make your run, the world is just perfect.
“It is a complete adrenalin rush.”
Despite the sacrifices and injuries — a broken cheekbone, a cracked sternum, a couple of concussions, and a crushed leg and knee that swelled her leg to “the size of an elephant” — Cunningham just shrugs them off.
“Nothing that would make you quit,” she explained.
This year marks her 25th appearance at the Cloverdale Rodeo — which begins today (Friday) and runs until Sunday — but her first since it became an invitational event in 2008.
Cunningham, an Aldergrove resident for the past four years, loves the fact that she is competing pretty much in her own backyard, especially since it means she and Zipper, her running quarter horse, get to sleep in their own beds at night.
She has tasted success at this event before.
“It’s top-loaded with those really good people,” she said. “I’ve always done really well at Cloverdale, I’ve won a ton of money out of there, but I didn’t have my name in the top five, so I didn’t get invited until this year.”
Someone backed out and Cunningham got the invite to attend and she didn’t give it a second thought.
She said with the level of talent coming to Cloverdale, it’s important to have the right mindset.
“When you get to this level, it’s pretty well all psychological,” she said. “You approach it the same way as you do any other rodeo, you don’t change the game plan – what’s been successful in the past – you don’t change the game plan just because there’s more money added.”
— with files from Kevin Diakiw/Black Press
Above: Janet Cunningham and her horse Zipper love competing at the Cloverdale Rodeo as it allows them to sleep in their own beds at home in Aldergrove.
Q:
Space efficient file type to store double precision floats
I am currently running simulations written in C and later analyzing the results using Python scripts.
At the moment, the C program writes the results (lots of double values) to a text file, which is slowly but surely eating a lot of disk space.
Is there a file format that is more space-efficient for storing lots of numeric values?
Ideally, though not necessarily, it should fulfill the following requirements:
Values can be appended continuously such that not all values have to be in memory at once.
The file is more or less easily readable using Python.
I feel like this should be a really common question, but looking for an answer I only found descriptions of various data types within C.
A:
A binary file, but please be careful with the format of the data you are saving. If possible, reduce the width of each variable: do you really need a double, or would a float, or even a 16- or 32-bit integer, give you enough precision?
Furthermore, you may apply a compression scheme to the data before saving and decompress it after reading, but that requires considerably more work and is probably overkill for what you are doing.
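A minimal sketch of the binary approach, assuming the writer emits raw little-endian IEEE-754 doubles (which is what `fwrite(&x, sizeof(double), 1, fp)` produces on a typical x86 Linux box); the filename `results.bin` is just a placeholder:

```python
import struct
import numpy as np

path = "results.bin"
open(path, "wb").close()  # start with an empty file for this demo

# Values can be appended one at a time, as a simulation loop would do;
# each double costs exactly 8 bytes, versus ~20+ bytes as decimal text.
with open(path, "ab") as f:
    for value in (1.5, -2.25, 3.125):
        f.write(struct.pack("<d", value))

# Reading back on the analysis side; np.fromfile also accepts count= and
# offset=, so huge files can be processed in chunks rather than all at once.
data = np.fromfile(path, dtype="<f8")
print(data.tolist())  # [1.5, -2.25, 3.125]
```

If you later want metadata or built-in compression, formats such as HDF5 (via `h5py`) or NumPy's own `.npy` files (`np.save`/`np.load`, with `mmap_mode` for lazy reads) are common next steps.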
My birth mother gave me the name “Waykedria,” but my white adoptive parents changed it to “Rebekah” because “Waykedria” was too abnormal for them and they could never spell it. They kept it as a middle name solely out of respect for my birth mother. I was also teased and called ghetto when I told people my middle name. I started to hate it, to be ashamed of it, to hide it, and to curse my birth mother for ever giving it to me.
Then recently, when I decided to start looking for my birth mother, my adoptive mother gave me a letter that my birth mother wrote to me when I was a baby. In it, she explained everything, not only about the adoption but also why she named me Waykedria. It was a family name that had been passed down to the first female in my grandfather’s family for generations. My perspective completely changed. I started to take pride in it, realizing that everything people had told me didn’t matter. I also felt cheated and lied to, because my adoptive parents had erased a whole part of my identity just to make their own lives more comfortable, so I would look good on paper. Just because your name has more letters or a different combination of letters than what society is used to does not make you less. It does not make you ghetto; chances are your name has a meaning to it.
It is because of this that it irks me so much when transracial adoptees or international students are always forced to change their names to something more “American”-sounding. My sister from Ethiopia was forced to change her name; my six cousins were forced to change theirs, from “Semegn” to “Sarah,” from “Ashenafi” to “Joseph.” You erase culture, you erase identities. You tell us we need to “fit in” to white culture or we won’t make it. You basically tell us that our culture and identity come second to white America’s comfort.
Canada Assists Ghana to Improve Land Administration
ACCRA, GHANA--(Marketwire - Jan. 31, 2013) - Agricultural production and access to nutritious food in developing countries will be improved by creating clearer, more predictable forms of land ownership in rural areas, and modernizing land services. The Honourable Ed Fast, Minister of International Trade, made the announcement on behalf of the Honourable Julian Fantino, Minister of International Cooperation, during his visit to Accra, as part of a recent trade mission to Ghana.
"Canada will support Ghana in the administration and management of natural resources, with a particular focus on issues around land ownership," said Minister Fast. "This will help Ghana improve food security and contribute to the country's overall economic growth and development."
"This announcement is further proof of how deepening our relationship with Ghana is a win-win for both our countries," added Minister Fast. "Canadian investment can help anchor and expand Ghana's reforms, which in turn will create jobs, economic growth and long-term prosperity for people in both countries. Our government and Canadian businesses will continue to play a significant role in contributing to Africa's continued growth and economic development."
With Canada's support, the regions of Northern Ghana will address deficiencies in the legislation and administration of land use, to increase economic opportunities and access to appropriate and nutritious food for the people of Ghana. Through this project, Canada will assist Ghana in improving business centres for land services, as well as in developing new surveys and means of mapping land. The investment will also provide technical support and training for Ghanaian officials working on land administration, including the development of a human resources plan for the Lands Commission and other land services agencies. The World Bank will implement the project between now and 2017.
Reggie on why used games are not his problem
Speaking to VentureBeat, Nintendo's Reggie Fils-Aime attempted to dismiss the proliferation of used game sales as not only something that Nintendo isn't worried about, but as something that consumers shouldn't think about, citing the long life of some of Nintendo's games.
"We don't believe used games are in the best interest of the consumer," Fils-Aime said. "We have products that consumers want to hold onto. They want to play all of the levels of a Zelda game and unlock all of the levels. A game like Personal Trainer: Cooking has a long life. We believe used games aren't in the consumer's best interest."
While it may be true that designing a game that can be replayed enjoyably for years is a good defense against used games, many consumers will still fast-track their way through games to trade them in as quickly as possible for something totally new ... just because. Reggie follows up this argument with an odd point -- that other forms of media don't have significant used markets.
"Describe another form of entertainment that has a vibrant used goods market. Used books have never taken off. You don't see businesses selling used music CDs or used DVDs. Why? The consumer likes having a brand-new experience and reliving it over and over again. If you create the right type of experience, that also happens in video games."
That strikes us as just wrong. There have been used book shops worldwide for as long as there have been books, and there's even (at least one) chain of stores devoted entirely to it. And stores like Hastings and CD Warehouse trade heavily in used DVDs and CDs.
---
author:
- 'G. Barenboim'
- 'C. Bosch'
- 'M.L. López-Ibáñez'
- 'O. Vives'
title: 'Eviction of a 125 GeV “heavy”-Higgs from the MSSM'
---
Introduction
============
In July 2012, both ATLAS and CMS, the two LHC general purpose experiments, announced the discovery of a bosonic resonance with a mass $\sim125$ GeV that could be interpreted as the expected Higgs boson in the Standard Model (SM) [@Aad:2012tfa; @Chatrchyan:2012ufa]. The observed production cross section and decay channels seem to be consistent, within errors, with a Higgs boson in the SM framework. However, at present, although CMS results are just below SM expectations, ATLAS shows a slight excess in the most sensitive channels that, if confirmed with more precise measurements, could be a sign of new physics beyond the single SM Higgs.
Besides, despite the extraordinary success of the SM in explaining all the experimental results obtained so far, both in the high-energy and in the low-energy region, there is a general belief that the SM is not the ultimate theory, but only a low-energy limit of a more fundamental one. This underlying, more fundamental theory is expected to contain new particles and interactions, opening new processes not possible in the SM; above all, it is envisaged to go one step further along the road toward a theory that incorporates gravity into our quantum field description of Nature. In such an endeavor, symmetries, which have historically played an important role in our understanding of the laws of Nature, are expected to be a major player. This is one of the reasons why Supersymmetry (SUSY), the only possible extension of symmetry beyond internal Lie symmetries and the Poincaré group [@Coleman:1967ad; @Haag:1974qh], is arguably the most popular extension of the SM. SUSY is a symmetry between fermions and bosons and, in its minimal version, the Minimal Supersymmetric Standard Model (MSSM), assigns a supersymmetric partner to each SM particle [@Fayet:1974pd; @Fayet:1977yc; @Farrar:1978xj; @Witten:1981nf; @Dimopoulos:1981zb; @Sakai:1981gr; @Ibanez:1981yh; @Kaul:1981wp; @Nilles:1983ge; @Haber:1984rc]. These particles must have a mass close to the electroweak scale if SUSY is to solve the hierarchy problem of the SM. Moreover, the MSSM requires a second Higgs doublet in addition to the single doublet present in the SM; therefore, Higgs phenomenology in the MSSM is much richer than in the SM, with three neutral Higgs states and a charged Higgs in the spectrum [@Djouadi:2005gj].
At tree level, the scalar potential of the MSSM is CP-conserving, and therefore mass eigenstates are also CP eigenstates. We have two neutral scalar bosons, $h$ and $H$, and a neutral pseudoscalar, $A$. However, the MSSM contains several CP violating phases beyond the single SM phase in the CKM matrix[^1], [*e.g.*]{} $M_i, i=1,2,3$, $A_t$, $\mu $ are complex parameters, and then CP violation necessarily leaks into the Higgs sector at one-loop level [@Pilaftsis:1998dd; @Pilaftsis:1998pe; @Pilaftsis:1999qt; @Demir:1999hj]. As a result, loop effects involving the complex parameters in the Lagrangian violate the tree-level CP-invariance of the MSSM Higgs potential modifying the tree-level masses, couplings, production rates and decay widths of Higgs bosons [@Pilaftsis:1999qt; @Carena:2000yi; @Choi:2000wz; @Carena:2001fw; @Choi:2001pg; @Choi:2002zp]. In particular, the clear distinction between the two CP-even and the one CP-odd neutral boson is lost and the physical Higgs eigenstates become admixtures of CP-even and odd states. Therefore, significant deviations from the naive CP conserving scenario can be obtained in the regime where $M_{H^\pm}$ is low and Im $(\mu A_t)$ is significant. Yet, the size of SUSY phases is strongly constrained by searches of electric dipole moments (EDM) of the electron and neutron. The phase of $\mu$ is bounded to be miserably small, $\lesssim 10^{-2}$, by the upper limits on EDMs if sfermion masses are below several TeV. Bounds on the phases of $A_{e,d,u}$, although somewhat weaker, are also strong, $\lesssim 10^{-1}$, under the same conditions. However, the phases of third generation trilinear couplings $A_{t,b,\tau}$ can still be sizeable[^2] for soft masses $O(1 ~\mbox{TeV})$ and, due to the large Yukawa couplings, these are precisely the couplings that influence the scalar potential more strongly [@Pilaftsis:1999td]. 
In this work, we will take only third-generation trilinear couplings $A_{t,b,\tau}$ as complex to generate the scalar-pseudoscalar mixing in the Higgs potential.
Among all the possibilities opened up by this scenario, a particularly interesting one is the case where the scalar observed at LHC is not the lightest but the second-lightest one, with the lightest having escaped detection at LEP/Tevatron/LHC due to its pseudoscalar or down-type content. As a result of the mixing, the couplings $H_1-WW$, $H_1-ZZ$ and $H_1-t\bar{t}$ all get reduced simultaneously, evading the current bounds. This idea of course is not new. Many studies have been carried out within this model [@Heinemeyer:2011aa; @Hagiwara:2012mga; @Arbey:2012dq; @Bechtle:2012jw; @Ke:2012zq; @Ke:2012yc; @Moretti:2013lya; @Scopel:2013bba]. There are two public codes, CPsuperH [@Lee:2003nta; @Lee:2012wa], specifically developed to analyze the Higgs phenomenology in the MSSM with explicit CP violation, and FeynHiggs [@Heinemeyer:1998yj; @Hahn:2005cu], which also calculates the spectrum and decay widths of the Higgses in the Complex MSSM. By using them, different regions of the parameter space have been explored through giant scans following the results of the colliders.
In this work, we will explore a different path. We will study this scenario, not by scanning its parameter space but rather by choosing a pair of key experimental signatures from both high- and low-energy experiments, and analyzing (analytically or semi-analytically) whether their results can be simultaneously satisfied. This way we gain understanding of the physics of the model we are discussing and at the same time avoid the possibility of missing a fine-tuned region in the parameter space (even tiny to the point of being microscopic) where an unexpected cancellation or a lucky combination might occur. After all, whatever physics hides so effectively behind the SM will turn out to be just one point in our studies of the parameter space. In this sense it is clear that every region, independently of its size, has the same probability of being the right one and should be given enough attention.
Moreover, our analysis is performed in terms of the SUSY parameters at the electroweak scale, such that it encloses all possible MSSM setups (including explicit CP violation), as the CMSSM, NUHM, pMSSM or even a completely generic MSSM[@Ellis:2002wv; @Ellis:2002iu; @Ellis:2008eu; @Berger:2008cq; @AbdusSalam:2009qd; @Arbey:2012dq; @Arbey:2012bp]. In fact, only a handful of MSSM parameters affect the Higgs sector and low-energy experiments that we study. As we will see, in the Higgs sector, we fix $m_{H_1}\leq m_{H_2}\simeq 125~\mbox{GeV}\leq
m_{H_3}\simeq m_{H^\pm}\lesssim 200$–220 GeV and use the experimental results to look for acceptable, $3\times3$, Higgs mixing matrices as a function of $\tan \beta$. Supersymmetric parameters affecting the Higgs sector, and also the indirect processes $B\to X_s \gamma$ and $B_s\to \mu^+ \mu^-$, are basically third generation masses and couplings, and gaugino masses. In our analysis, these parameters take general values consistent with the experimental constraints on direct and indirect searches.
This paper is organized as follows. We begin by summarizing the experimental situation in Section \[sec:experiment\]. In Section \[sec:model\] we describe the basic ingredients of the model and analyze the direct and indirect signatures we will choose for our study. The parameter space is surveyed in Section \[sec:analysis\] and results and conclusions are contained in Section \[sec:conclu\].
Current experimental status. {#sec:experiment}
============================
Higgs signal at the LHC.
------------------------
Both ATLAS and CMS experiments have recently updated the analysis of the Higgs-like signal using the full $pp$ collision data sample. The ATLAS analysis [@ATLAS-CONF-2013-034] uses integrated luminosities of 4.8 fb$^{-1}$ at $\sqrt{s}=$7 TeV plus 20.7 fb$^{-1}$ at $\sqrt{s}=$8 TeV, for the most sensitive channels, $H\rightarrow\gamma\gamma$, $H\rightarrow ZZ^{*}\rightarrow4l$ and $H\rightarrow WW^{*}\rightarrow l\nu l\nu$, plus 4.7 fb$^{-1}$ at $\sqrt{s}=$7 TeV and 13 fb$^{-1}$ at $\sqrt{s}=$8 TeV for the $H\rightarrow\tau\tau$ and $H\rightarrow b\bar{b}$. Similarly CMS study [@CMS-PAS-HIG-13-005] uses 5.1 fb$^{-1}$ at $\sqrt{s}=$7 TeV and 19.8 fb$^{-1}$ at $\sqrt{s}=$8 TeV in all these channels.
The main channels contributing to the observed signal are the decays into photons and two Z-bosons. On the other hand, the most relevant channel constraining the presence of additional Higgs bosons is the decay into two $\tau$ leptons. ATLAS and CMS agree on the mass of the observed state, which is $m_{h}=124.3\pm0.6(\mbox{stat})\pm0.4(\mbox{syst})$ GeV for ATLAS and $m_{h}=125.7\pm0.3(\mbox{stat})\pm0.3(\mbox{syst})$ GeV for CMS.
However, there are some differences on the signal strength in the different channels as measured by the two experiments. The signal strength $\mu_{X}$, for a Higgs decaying to $X$ is defined as, $$\mu_{X}=\frac{\sigma(pp\to H)\times\mbox{BR}(H\to X)}{\sigma(pp\to H)_{\rm{SM}}\times\mbox{BR}(H\to X)_{\rm{SM}}},$$ such that $\mu=0$ corresponds to the background-only hypothesis and $\mu=1$ corresponds to a SM Higgs signal. The combined signal strength in the last results presented by ATLAS is $\mu^{\rm{ATLAS}}=1.3\pm0.2$ [@Aad:2013wqa], while the signal strength measured by CMS is slightly below the SM expectations $\mu^{\rm{CMS}}=0.80\pm0.14$ [@CMS-PAS-HIG-13-005].
For the diphoton channel, the measured signal strength in both experiments are $\mu_{\gamma\gamma}^{\rm{ATLAS}}=1.6\pm0.3$ and $\mu_{\gamma\gamma}^{\rm{CMS}}=0.78_{-0.26}^{+0.28}$. This signal is consistent with the SM, although ATLAS points to a slight excess over the SM expectations. In any case, both results agree on the fact that the diphoton signal must be of the order of the SM prediction. This fact is very important in the context of multi-Higgs models, as the MSSM, where the Higgs couplings to down quark and charged leptons are enhanced by additional $\tan\beta$ factors, which tend to decrease the $H\to\gamma\gamma$ branching ratio and therefore the signal strength. In this regard, here we will adopt a conservative approach and impose the weighted average of ATLAS and CMS results at 2$\sigma$, $$0.75\leq\mu_{\gamma\gamma}^{\rm{LHC}}\leq1.55\,.$$ Similarly, the signal strength in the $H\to ZZ^{*}$ channel are, $\mu_{ZZ^{*}}^{\rm{ATLAS}}=1.5\pm0.4$ and $\mu_{ZZ^{*}}^{\rm{CMS}}=0.92\pm0.28$ and we will also use as a constraint, $$0.78\leq\mu_{ZZ^{*}}^{\rm{LHC}}\leq1.58\,.$$
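These 2$\sigma$ windows correspond to the standard inverse-variance weighted combination of the ATLAS and CMS numbers; for instance, for the diphoton channel (symmetrizing the CMS error to $\pm0.27$), $$\bar{\mu}_{\gamma\gamma}=\left(\frac{1.6}{0.3^{2}}+\frac{0.78}{0.27^{2}}\right)\Big/\left(\frac{1}{0.3^{2}}+\frac{1}{0.27^{2}}\right)\simeq1.15\,,\qquad \sigma_{\rm comb}=\left(\frac{1}{0.3^{2}}+\frac{1}{0.27^{2}}\right)^{-1/2}\simeq0.20\,,$$ so that $\bar{\mu}_{\gamma\gamma}\pm2\,\sigma_{\rm comb}$ reproduces the quoted interval $0.75\leq\mu_{\gamma\gamma}\leq1.55$.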
The main constraint on the presence of additional heavy Higgs states comes from the $H/A\rightarrow\tau\tau$ searches at ATLAS and CMS experiments. In this case, both experiments have searched for the SM Higgs boson decaying into a pair of $\tau$-leptons and this provides a limit on $\sigma(pp\to H)\times\mbox{BR}(H\to\tau\tau)$ that can be applied to the extra Higgs states. ATLAS has analyzed the collected data samples of $4.6\,\mbox{fb}^{-1}$at $\sqrt{s}=$7 TeV and $13.0\,\mbox{fb}^{-1}$at $\sqrt{s}=$8 TeV [@Aad:2012mea] while CMS used $4.9\,\mbox{fb}^{-1}$at $\sqrt{s}=$7 TeV and $19.4\,\mbox{fb}^{-1}$at $\sqrt{s}=$8 TeV for Higgs masses up to 150 GeV [@CMS-PAS-HIG-13-004]. These constraints on the $\tau\tau$-cross section normalized to the SM cross section as a function of the Higgs mass are shown in Figure \[fig:tau-tauCERN\]. In this case, CMS sets the strongest bound for $m_{H}$ below 150 GeV. For $m_{H}=110$ GeV we obtain a bound at 95% CL of $\mu_{\tau\tau}=\sigma\left(H\rightarrow\tau\tau\right)/\sigma_{SM}\leq1.8$, and this limit remains nearly constant, $\mu_{\tau\tau}\leq2.0$, up to $m_{H}=140$ GeV. For a neutral Higgs of mass $m_{H}=150$ GeV we would have a bound of $\mu_{\tau\tau}\leq2.3$. In our scenario, this limit would apply to $H_1$ with a mass below 125 GeV and to $H_2$ with $m_{H_2}\simeq 125$ GeV. In the case of $H_3$, this bound applies for masses below 150 GeV.
For heavier $H_3$ masses, there exists a previous LHC analysis searching for MSSM Higgs bosons with masses up to 500 GeV. In Figure \[fig:ATLAS-MSSM-H\], we present the analysis made in ATLAS with $4.9\,\mbox{fb}^{-1}$ at $\sqrt{s}=$7 TeV [@Aad:2012yfa]. In this case, the bound is presented as an upper limit on the $\tau\tau$ or $\mu\mu$ production cross section. As a reference, the SM cross section for a Higgs mass of 150 GeV is $\sigma(pp\to H)_{\rm{SM}}\times\mbox{BR}(H\to X)_{\rm{SM}} \simeq 0.25$ pb and therefore, comparing with Figure \[fig:tau-tauCERN\], we can expect this bound to improve by nearly an order of magnitude in an updated analysis with the new data [@privateFiorini]. Nevertheless, the production cross section of $\tau$-pairs through a heavy Higgs is enhanced by powers of $\tan\beta$ and therefore the present limits on $\sigma_{\phi}\times\mbox{BR}(\phi\to\tau\tau)$ are already very important in the medium–large $\tan\beta$ region.
![Upper limit on the $\tau\tau$ production cross section through heavy Higgs states from ATLAS with $4.8~ \mbox{fb}^{-1}$ at $\sqrt{s}=7$ TeV \[fig:ATLAS-MSSM-H\].](fig_07)
Finally, we include the bounds on a charged Higgs produced in $t \to H^+ b$ with subsequent decay $H^+ \to \tau \nu$ [@Aad:2012tj; @CMS-PAS-HIG-12-052]. These analyses set upper bounds on $B(t \to H^+ b)$ in the range 2–3 % for charged Higgs bosons with masses between 80 and 160 GeV, under the assumption that $B(H^+ \to \tau^+ \nu_\tau) = 1$, which is a very good assumption unless decay channels to the lighter Higgses and W-bosons are kinematically open.
MSSM searches at LHC.
---------------------
Simultaneously to the Higgs searches described above, LHC has been looking for signatures on new physics beyond the SM. A large effort has been devoted to search for Supersymmetric extensions of the SM. These studies, focused in searches of jets or leptons plus missing energy (possible evidence of the LSP), agree, so far, with the Standard Model expectations in all the explored region, and are used to set bounds on the mass of the supersymmetric particles.
The most stringent constraints from LHC experiments are set on gluinos and first generation squarks produced through strong interactions in $pp$ collisions. Searches of gluinos at CMS[@Chatrchyan:2012paa; @Chatrchyan:2013wxa; @PAS-SUS-13-007; @PAS-SUS-13-008] and ATLAS [@ATLAS-CONF-2012-145; @ATLAS-CONF-2013-007] with $\sim20$ fb$^{-1}$ at 8 TeV have driven, roughly, to the exclusion of gluino masses up to 1.3 TeV for (neutralino) LSP masses below 500 GeV. The limits on first generation squarks directly produced are $m_{\tilde q}\gtrsim 740$ GeV for squarks decaying $\tilde q \to q \chi_1^0$ with $m_{\chi_1^0}= 0$ GeV[@Chatrchyan:2013lya][^3].
The most important players in Higgs physics, because of their large Yukawa couplings, are third generation squarks. In this case mass bounds, from direct stop production, are somewhat weaker but still stop masses are required to be above $\sim 650$ GeV for $m_{\chi^0} \lesssim 200$ GeV [@ATLAS-CONF-2013-024; @ATLAS-CONF-2013-037; @ATLAS-CONF-2013-053; @PAS-SUS-13-011] with the exception of small regions of nearly degenerate stop-neutralino. Limits on sbottom mass from direct production are also similar and sbottom masses up to 620 GeV are excluded at 95% C.L. for $m_{\chi^0} < 150$ GeV, with the exception of $m_{{\tilde b}_1}-m_{\chi^0}< 70$ GeV [@ATLAS-CONF-2013-053; @Chatrchyan:2013lya; @PAS-SUS-13-008].
Finally, ATLAS and CMS have presented the limits on chargino masses from direct EW production [@ATLAS-CONF-2013-035; @PAS-SUS-12-022]. In both analyses, these limits depend strongly on the slepton masses and the branching ratios of the chargino and the second neutralino, which are assumed to be degenerate. When the decays to charged sleptons are dominant, chargino masses are excluded up to $\sim 600$ GeV for large mass differences with $\chi^0$. Even when the slepton channels are closed, decays to weak bosons plus the lightest neutralino can exclude[^4] chargino masses up to $\sim 350$ GeV for $m_{\chi_1^0} \lesssim 120$ GeV.
Therefore, as we have seen, limits on SUSY particles from LHC experiments are already very strong with the exceptions of sparticle masses rather degenerate with the lightest supersymmetric particle.
Indirect bounds
---------------
Indirect probes of new physics in low-energy experiments still play a very relevant role in the search for extensions of the SM [@Masiero:2001ep; @Raidal:2008jk; @Calibbi:2011dn]. Even in the absence of new flavour structures beyond the SM Yukawa couplings, in a Minimal Flavour Violation scheme, decays like $B_{s}^{0}\rightarrow\mu^{+}\mu^{-}$ and, especially, $B\rightarrow X_{s}\gamma$ play a very important role, as we will see below, and set significant constraints over the whole $\tan\beta$ range.
The present experimental bounds on the decay $B_{s}^{0}\rightarrow\mu^{+}\mu^{-}$ are obtained from LHCb measurements with 1.1 fb$^{-1}$ of proton-proton collisions at $\sqrt{s} = 8$ TeV and 1.0 fb$^{-1}$ at $\sqrt{s} = 7$ TeV. The observed value for the branching ratio at LHCb [@Aaij:2012nna; @Aaij:2013aka] is, $$\mbox{BR}\left(B_{s}^{0}\rightarrow\mu^{+}\mu^{-}\right)=\left(2.9^{ +1.1}_{-1.0}
\right)\times10^{-9}\,,$$ and at CMS [@Chatrchyan:2013bka], $$\mbox{BR}\left(B_{s}^{0}\rightarrow\mu^{+}\mu^{-}\right)=\left(3.0^{ +1.0}_{-0.9}
\right)\times10^{-9}\,.$$ The limits on the decay $B\rightarrow X_{s}\gamma$ come from the BaBar and Belle B-factories and CLEO [@Chen:2001fja; @Abe:2001hk; @Limosani:2009qg; @Lees:2012wg; @Lees:2012ufa; @Aubert:2007my]. The current world average for $E_\gamma > 1.6$ GeV given by HFAG [@Amhis:2012bh; @hfag] is, $$\mbox{BR}\left(B\rightarrow X_{s}\gamma\right)=\left(3.43\pm0.21\pm0.07\right)\times10^{-4}\,.$$ We will see that this result provides a very important constraint on the charged Higgs mass in the low $\tan \beta$ region, where other supersymmetric contributions are small.
Theoretical model {#sec:model}
=================
As explained in the introduction, we intend to investigate whether the observed Higgs particle of $m_{H}\simeq125$ GeV could correspond to the second Higgs in a general MSSM scenario, while the lightest Higgs managed to evade the LEP searches [@Heinemeyer:2011aa; @Hagiwara:2012mga; @Arbey:2012dq; @Bechtle:2012jw; @Ke:2012zq; @Ke:2012yc; @Moretti:2013lya; @Scopel:2013bba]. The scenario we consider here is a generic MSSM defined at the electroweak scale. This means we do not impose the usual mass relations obtained through RGE running from a high scale, as obtained, for instance, in the Constrained MSSM (CMSSM), but keep all MSSM parameters free and independent at $M_{W}$. Furthermore, we are mainly interested in the Higgs sector of the model, which we analyze assuming generic Higgs masses and mixings in the presence of CP violation in the squark sector.
CP-violating MSSM Higgs sector
------------------------------
As is well known, the Higgs sector of the MSSM is that of a type II two-Higgs doublet model. In the MSSM, the scalar potential conserves CP at tree-level [@Djouadi:2005gj]. Nevertheless, in the presence of complex phases in the Lagrangian, CP violation enters the Higgs potential at the one-loop level, resulting in mixing between the CP-even and CP-odd Higgses. Then, after electroweak symmetry breaking, we have three physical neutral scalar bosons, admixtures of the scalar and pseudoscalar Higgs bosons, plus a charged Higgs boson [@Pilaftsis:1998dd; @Pilaftsis:1998pe; @Pilaftsis:1999qt; @Demir:1999hj].
The Higgs fields in the electroweak vacuum, with vevs $\upsilon_{1}$ and $\upsilon_{2}$ and $\tan\beta=\upsilon_{2}/\upsilon_{1}$, are $$\Phi_{1}=\left(\begin{array}{c}
\frac{1}{\sqrt{2}}\left(\upsilon_{1}+\phi_{1}+ia_{1}\right)\\\phi_{1}^{-}
\end{array}\right);\;\;\Phi_{2}=e^{i\xi}\left(\begin{array}{c}
\phi_{2}^{+}\\
\frac{1}{\sqrt{2}}\left(\upsilon_{2}+\phi_{2}+ia_{2}\right)
\end{array}\right)\,,\label{eq:3.1-2}$$ and, as mentioned above, the presence of CP-violating phases in the Lagrangian introduces off-diagonal mixing terms in the neutral Higgs mass matrix. In the weak basis, $\left(\phi_{1},\phi_{2},a\right)$, with $\phi_{1,2}$ CP-even, scalar, and $a=a_{1}\sin\beta+a_{2}\cos\beta$ the CP-odd, pseudoscalar state, we write the neutral Higgs mass matrix as [@Pilaftsis:1999qt; @Carena:2000yi; @Carena:2001fw; @Funakubo:2002yb], $$M_{H}^{2}=\left(\begin{array}{cc}
M_{S}^{2} & M_{SP}^{2}\\
M_{PS}^{2} & M_{P}^{2}
\end{array}\right)\,,$$ where the scalar-pseudoscalar mixings are non-vanishing in the presence of phases, $M_{SP}^{2},M_{PS}^{2}\propto \mbox{Im}\left[\mu A_{t,b}e^{i\xi}\right]$. Then, this $3\times3$ neutral Higgs mass matrix is diagonalized by $${\cal U}\cdot M_{H}^{2}\cdot{\cal U}^{T}=\mbox{Diag}\left(m_{H_{1}}^{2},m_{H_{2}}^{2},m_{H_{3}}^{2}\right)\,.$$ The Higgs sector of the MSSM is defined at the electroweak scale at tree-level by only two parameters that, in the limit of CP-conservation, are taken as $\left(m_{A}^{2},\tan\beta\right)$. In the complex MSSM, the pseudoscalar Higgs is not a mass eigenstate and its role as a parameter defining the Higgs sector is played by the charged Higgs mass $m_{H^{\pm}}^{2}$. At higher orders, the different MSSM particles enter the Higgs masses and mixings, although the main contributions are due to the top–stop and bottom–sbottom sectors. It is well-known that the one-loop corrections to $M_S^2$ can increase the lightest Higgs mass from its tree-level value $\lesssim M_Z$ up to $\sim 130$ GeV [@Okada:1990vk; @Ellis:1990nz; @Haber:1990aw], the correction itself being of order $M_Z^2$, with the leading part given by [@Haber:1996fp; @Djouadi:2013vqa], $$\delta M_S^2 \simeq \frac{3 m_t^4}{2 \pi^2 \upsilon^2 \sin^2\beta} \left[\log \frac{M_{SUSY}^2}{ m_t^2} + \frac{X_t^2}{M_{SUSY}^2} \left(1- \frac{X_t^2}{12 M_{SUSY}^2}\right)\right]\,,$$ with $M_{SUSY}$ the geometric mean of the two stop masses and $X_t = A_t -\mu \cot \beta$.
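As a rough numerical illustration, the leading-log expression above can be evaluated directly. The sketch below (function names are ours; it assumes a running top mass $m_t\simeq 165$ GeV and adds an illustrative tree-level term $M_Z^2\cos^2 2\beta$) shows how stop mixing lifts the lightest Higgs mass:

```python
import math

def delta_MS2(m_t, v, tan_beta, M_SUSY, X_t):
    """Leading one-loop top/stop correction delta M_S^2 (GeV^2), as in the text."""
    sin2b = tan_beta**2 / (1.0 + tan_beta**2)          # sin^2(beta)
    pref = 3.0 * m_t**4 / (2.0 * math.pi**2 * v**2 * sin2b)
    r = X_t**2 / M_SUSY**2                             # stop-mixing parameter
    return pref * (math.log(M_SUSY**2 / m_t**2) + r * (1.0 - r / 12.0))

def mh_approx(tan_beta=10.0, M_SUSY=1000.0, X_t=0.0,
              m_t=165.0, v=246.0, MZ=91.19):
    """Rough lightest-Higgs mass: tree-level term plus the leading correction."""
    cos2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)
    return math.sqrt(MZ**2 * cos2b**2 + delta_MS2(m_t, v, tan_beta, M_SUSY, X_t))
```

For $M_{SUSY}=1$ TeV and vanishing mixing this gives $m_h\simeq 120$ GeV, while "maximal mixing", $X_t=\sqrt{6}\,M_{SUSY}$, pushes it well above, in line with the $\sim 130$ GeV quoted above (the leading-log formula overshoots somewhat at large mixing).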
Regarding the charged Higgs mass, we can relate it to the pseudoscalar mass $M_{P}^{2}$ in the neutral Higgs mass matrix [@Pilaftsis:1999qt], $$M_{H^{\pm}}^{2}=M_{P}^{2}+\frac{1}{2}\lambda_{4}\upsilon^{2}-\mbox{Re}\left(\lambda_{5}e^{2i\xi}\right)\upsilon^{2}\,,$$ with $\lambda_{4,5}$ the two-loop corrected parameters of the Higgs potential [@Carena:1995bx; @Pilaftsis:1999qt]. At tree level $\lambda_{4}=g_{w}^{2}/2$, such that $\lambda_{4}\upsilon^{2}/2=M_{W}^{2}$, and $\lambda_{5}=0$. In any case, it looks reasonable to expect $\lambda_{i}\lesssim1$. This implies that the squared charged Higgs mass can never exceed the largest neutral Higgs eigenvalue by much more than $M_{Z}^{2}$, which is equivalent to saying that loop corrections are of the same order as $\sim\delta M_S^2$.
Similarly, we can expect the mass of the second neutral Higgs, which in our scenario is $m_{H_{2}}\simeq125$ GeV, only to differ from the heavier eigenvalue by terms of order $\upsilon^{2}$. This can be seen from the trace of the neutral Higgs masses in the basis of CP eigenstates, where we would have, without loop corrections, $\mbox{Tr}\left(M_{H}^{2}\right)=2M_{P}^{2}+M_{Z}^{2}$. As we have seen, loop corrections to the diagonal elements can be expected to be of the order of the corrections to the lightest Higgs mass, which are also $O(M_Z^2)$. To obtain a light second Higgs we need either a low $M_{P}$ or a large scalar-pseudoscalar mixing. The different contributions to the scalar-pseudoscalar mixing, $M_{SP}^2$, are of order [@Pilaftsis:1999qt], $$M_{SP}^2 = O\left(\frac{m_t^4 |\mu| |A_t|}{32 \pi^2~\upsilon^2 M_{SUSY}^2}\right) \sin\phi_{CP} \times \left[6, \frac{|A_t|^2}{M_{SUSY}^2}, \frac{|\mu|^2}{\tan\beta M_{SUSY}^2}\right]\,,$$ which again are of the same order as $\delta M_S^2\simeq O(M_Z^2)$ for $\sin \phi_{CP} \sim O(1)$. Therefore, taking also into account that in the decoupling limit, and in the absence of scalar-pseudoscalar mixing, $M_H\simeq M_P$, we must require $M_{P}^{2}$ not to be much larger than $M_Z^2$. Taking $M_{P}^{2}\lesssim3M_{Z}^{2}$, the invariance of the trace tells us that $m_{H_{1}}^{2}+m_{H_{2}}^{2}+m_{H_{3}}^{2}=2M_{P}^{2}+M_{Z}^{2}+ O(M_Z^2)$ in such a way that with $90~\mbox{GeV}\lesssim m_{H_{1}}\lesssim m_{H_{2}}\simeq125$ GeV, we get an upper limit[^5] $m_{H_{3}}^{2}\lesssim 2M_{P}^{2}+ 2 M_{Z}^{2}- \left(m_{H_{2}}^{2}+ m_{H_{1}}^2\right) \lesssim(200~\mbox{GeV})^2$. We must emphasize that in this work we do not consider the possibility of $m_{H_{1}}\lesssim 90~\mbox{GeV}$, which would be possible in the presence of large CP-violating phases that could reduce the mass of the lightest Higgs through rather precise cancellations [@Carena:2000ks; @Carena:2002bb].
Although this scenario could survive LEP limits around an “open hole” with $m_{H_{1}}\approx 45~\mbox{GeV}$ and $\tan \beta \approx 8$ [@Williams:2007dc], it would never be able to reproduce the observed signal in $H_2 \to \gamma \gamma$, as the opening of the decay channel $H_2\to H_1 H_1$ would render $B(H_2 \to \gamma \gamma)$ much smaller than the SM one (see the discussion related to the $H_2 \to b \bar{b}$ channel below).
In the following analysis of the direct and indirect constraints on the Higgs sector, we try to be completely general in the framework of a Complex MSSM defined at the electroweak scale. To attain this objective, and taking into account that the presence of CP violation and large radiative corrections strongly modifies the neutral Higgs mass matrix if we are outside the decoupling regime, we consider general neutral Higgs mixings and masses. In fact, in this work, we analyze the situation in which the second lightest neutral boson corresponds to the scalar resonance measured at LHC with a mass of 125 GeV. As we have seen, to achieve this, we need a relatively light charged Higgs (with approximately $M_{H^{+}}\lesssim220\,$ GeV) and a similar mass for the heaviest neutral Higgs. The lightest neutral Higgs boson will have a mass varying between 90 and 125 GeV. After fixing the Higgs masses in these ranges, we will consider generic mixing matrices ${\cal U}$ and look for mixings consistent with the present experimental results.
This analysis deals with the decays of the neutral Higgs bosons. Thus we need the Higgs couplings to the SM vector bosons, fermions, scalars and gauginos. The conventions used in the following are described in Appendix \[App:convention\]. The couplings to the vector bosons are [@Lee:2003nta], $${\cal L}_{H_{a}V}=g\, M_{W}\left(W_{\mu}^{+}W^{-\,\mu}+\frac{1}{2\cos^{2}\theta_{W}}Z_{\mu}Z^{\mu}\right)\sum_{a}g_{H_{a}VV}\, H_{a}\,,$$ with $g_{H_{a}VV}=\cos\beta\,\mathcal{U}_{a1}+\sin\beta\,\mathcal{U}_{a2}$.
The Lagrangian showing the fermion–Higgs couplings is $${\cal L}_{H_{a}f}=-\sum_{f}\frac{g\, m_{f}}{2M_{W}}\sum_{a}H_{a}\bar{f}\left(g_{S,a}^{f}+ig_{P,a}^{f}\gamma_{5}\right)f\,,$$
[ $\mathbf{H_{a}\rightarrow f\bar{f}}$]{} [ $\mathbf{g_{f}}$]{} [ $\mathbf{g_{S,a}^{(0)}}$]{} [ $\mathbf{g_{P,a}^{(0)}}$]{}
----------------------------------------------------------------- ------------------------------ ----------------------------------- ------------------------------------------------------------
[ $H_{a}\rightarrow l\bar{l}$]{} [ $\frac{gm_{l}}{2M_{W}}$]{} [ $\frac{U_{a1}}{\cos(\beta)}$]{} [ $-\left(\frac{\sin(\beta)}{\cos(\beta)}\right)U_{a3}$]{}
[ $H_{a}\rightarrow d\bar{d}$]{} [ $\frac{gm_{d}}{2M_{W}}$]{} [ $\frac{U_{a1}}{\cos(\beta)}$]{} [ $-\left(\frac{\sin(\beta)}{\cos(\beta)}\right)U_{a3}$]{}
[ $H_{a}\rightarrow u\bar{u}$]{} [ $\frac{gm_{u}}{2M_{W}}$]{} [ $\frac{U_{a2}}{\sin(\beta)}$]{} [ $-\left(\frac{\cos(\beta)}{\sin(\beta)}\right)U_{a3}$]{}
[ $H_{a}\rightarrow\tilde{\chi}_{i}^{+}\tilde{\chi}_{j}^{-}$]{} [ $\frac{g}{\sqrt{2}}$]{} [ $g_{s}^{\tilde{\chi}^{+}}$]{} [ $g_{p}^{\tilde{\chi}^{+}}$]{}
: Tree level Higgs–fermion couplings.[]{data-label="tab:Hfcoupl"}
where the tree-level values of $(g_{S}^{(0)},g_{P}^{(0)})$ are given in Table \[tab:Hfcoupl\]. Still, in the case of third generation fermions, these couplings receive very important threshold corrections due to gluino and chargino loops, enhanced by $\tan\beta$ factors in the case of the down-type fermions [@Hall:1993gn; @Carena:1994bv; @Blazek:1995nv; @Carena:1999py; @Hamzaoui:1998nu; @Babu:1999hn; @Isidori:2001fv; @Dedes:2002er; @Buras:2002vd]. The complete corrected couplings for third generation fermions, $(g_{S}^{f},g_{P}^{f})$, can be found in Refs. [@Lee:2003nta; @Carena:2002bb]. In our analysis, it is sufficient to consider the correction to the bottom couplings, $$\label{Eq:thresholdS}
g_{S,a}^{d}=\mbox{Re}\left(\frac{1}{1+\kappa_{d}\tan\beta}\right)\frac{{\cal U}_{a1}}{\cos\beta}+\mbox{Re}\left(\frac{\kappa_{d}}{1+\kappa_{d}\tan\beta}\right)\frac{{\cal U}_{a2}}{\cos\beta}+\mbox{Im}\left(\frac{\kappa_{d}\left(\tan^{2}\beta+1\right)}{1+\kappa_{d}\tan\beta}\right){\cal U}_{a3}$$ $$\label{Eq:thresholdP}
g_{P,a}^{d}=-\mbox{Re}\left(\frac{\tan\beta-\kappa_{d}}{1+\kappa_{d}\tan\beta}\right){\cal U}_{a3}+\mbox{Im}\left(\frac{\kappa_{d}\tan\beta}{1+\kappa_{d}\tan\beta}\right)\frac{{\cal U}_{a1}}{\cos\beta}-\mbox{Im}\left(\frac{\kappa_{d}}{1+\kappa_{d}\tan\beta}\right)\frac{{\cal U}_{a2}}{\cos\beta}$$ where $\kappa_{d}=(\Delta h_{d}/h_{d})/(1+\delta h_{d}/h_{d})$ and the corrected Yukawa couplings are, $$h_{d}=\frac{\sqrt{2}m_{d}}{\upsilon\cos\beta}\:\frac{1}{1+\delta h_{d}/h_{d}+\Delta h_{d}/h_{d}\tan\beta}\,,$$ $$\begin{aligned}
\delta h_{d}/h_{d}&=&-\frac{2\alpha_{s}}{3\pi}m_{\tilde{g}}^{*}A_{d}\, I(m_{\tilde{d}_{1}}^{2},m_{\tilde{d}_{2}}^{2},|m_{\tilde{g}}|^{2})-\frac{|h_{u}|^{2}}{16\pi^{2}}|\mu|^{2}\, I(m_{\tilde{u}_{1}}^{2},m_{\tilde{u}_{2}}^{2},|\mu|^{2}) \nonumber\\
\Delta h_{d}/h_{d}&=&\frac{2\alpha_{s}}{3\pi}m_{\tilde{g}}^{*}\mu^{*}\, I(m_{\tilde{d}_{1}}^{2},m_{\tilde{d}_{2}}^{2},|m_{\tilde{g}}|^{2})+\frac{|h_{u}|^{2}}{16\pi^{2}}A_{u}^{*}\mu^{*}\, I(m_{\tilde{u}_{1}}^{2},m_{\tilde{u}_{2}}^{2},|\mu|^{2})\:,\end{aligned}$$ and the loop function $I(a,b,c)$ is given by, $$I(a,b,c)=\frac{a\, b\log(a/b)+b\, c\log(b/c)+a\, c\log(c/a)}{(a-b)(b-c)(a-c)}\,.$$ The Higgs-sfermion couplings are, $$\begin{aligned}
{\cal L}_{H_{a}\tilde f \tilde f}= \upsilon \sum_{\tilde f} g_{\tilde f\tilde f }^a \left(H_{a}\tilde{f}^* \tilde{f}\right)\,,
\\
\upsilon~ g_{\tilde f_i\tilde f_j }^a = \left(\tilde{\Gamma}^{\alpha f f}\right)_{\beta \gamma} {\cal U}_{a \alpha}~ {\cal R}^f_{\beta i} {\cal R}^f_{\gamma j}\,,\end{aligned}$$ with $\beta,\gamma = L,R$, $ {\cal R}^f$ the sfermion mixing matrices and the couplings $\tilde{\Gamma}^{\alpha f f}$ given in Ref. [@Lee:2003nta]. Other Higgs couplings needed to analyze the neutral Higgs decays are the couplings to charginos and the charged Higgs; complete expressions can be found in Ref. [@Lee:2003nta] (taking into account their different convention on the Higgs mixing matrix, ${\cal U}={\cal O}^{T}$).
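The threshold corrections above all involve the loop function $I(a,b,c)$, which is totally symmetric in its (squared-mass) arguments and reduces to $1/(2a)$ in the degenerate limit. A small numerical sketch (variable names, $\alpha_s$ value and TeV-scale sample masses are illustrative assumptions):

```python
import math

def I_loop(a, b, c):
    """Loop function I(a,b,c) of the text; arguments are squared masses."""
    num = a*b*math.log(a/b) + b*c*math.log(b/c) + a*c*math.log(c/a)
    return num / ((a - b)*(b - c)*(a - c))

# Size of the tan(beta)-enhanced correction (Delta h_d / h_d) * tan(beta)
# for illustrative, nearly degenerate TeV-scale sbottom/gluino masses and mu:
alpha_s = 0.09
mu = m_gluino = 1000.0
eps = (2.0*alpha_s/(3.0*math.pi)) * mu * m_gluino \
      * I_loop(1.00e6, 1.01e6, 1.02e6)     # squared masses in GeV^2
correction = eps * 30.0                    # times tan(beta) = 30
```

For $\tan\beta\sim30$ the gluino piece alone is an $O(30\%)$ effect on the bottom Yukawa, which is why these resummed corrections cannot be neglected.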
After defining all these couplings, we show in the following the expressions for $H\to\gamma\gamma$ and $H\to gg$, that together with $H\rightarrow\bar{b}b,\tau\tau$ and $H\to WW^{*},ZZ^{*}$ are the main Higgs decay channels for $m_{H}=125$ GeV, and the Higgs production mechanisms at LHC.
Higgs decays.
-------------
### Higgs decay into two photons.
The decay $H_{a}\to\gamma\gamma$ occurs only at the one-loop level, and therefore we must include in our calculation every sparticle contribution in addition to the SM ones. Taking into account the presence of CP violation, the Higgs decay receives contributions from both the scalar and pseudoscalar components. Then its width becomes, $$\Gamma\left(H_{a}\rightarrow\gamma\gamma\right)=\frac{M_{H_{a}}^{3}\alpha^{2}}{256\pi^{3}\upsilon^{2}}\left[\left|S_{a}^{\gamma}\left(M_{H_{a}}\right)\right|^{2}+\left|P_{a}^{\gamma}\left(M_{H_{a}}\right)\right|^{2}\right]\,,\label{eq:3.3.2-5-1}$$ where the scalar part is $S_{a}^{\gamma}\left(M_{H_{a}}\right)$ and the pseudoscalar part $P_{a}^{\gamma}\left(M_{H_{a}}\right)$; they are given by [@Lee:2003nta], $$\begin{aligned}
S_{a}^{\gamma}\left(M_{H_{a}}\right)&=& 2\underset{f=b,t,\tilde{\chi}_{1}^{\pm},\tilde{\chi}_{2}^{\pm}}{\sum}N_{C}\, J_{f}^{\gamma}\, Q_{f}^{2}g_{f}\, g_{H_{a}\bar{f}f}^{S}\frac{\upsilon}{m_{f}}F_{f}^{S}\left(\tau_{af}\right) -\sum_{\tilde f} N_{C}\, J_{\tilde{f}}^{\gamma}\, Q_{f}^{2}\, g_{H_{a}\tilde{f}_{j}\tilde{f_{j}^*}}^{S}\frac{\upsilon^{2}}{2m_{\tilde{f}_{j}}^{2}}F_{0}\left(\tau_{a\tilde{f}_{j}}\right)\nonumber \\
&&-g_{H_{a}VV}F_{1}\left(\tau_{aW}\right)~-~g_{H_{a}H^{-}H^{+}}\frac{\upsilon^{2}}{2M_{H_{a}}^{2}}F_{0}\left(\tau_{aH}\right)\\
P_{a}^{\gamma}\left(M_{H_{a}}\right)&= & 2\underset{f=b,t,\tilde{\chi}_{1}^{\pm},\tilde{\chi}_{2}^{\pm}}{\sum}N_{C}\, J_{f}^{\gamma}\, Q_{f}^{2}g_{f}\, g_{H_{a}\bar{f}f}^{P}\frac{\upsilon}{m_{f}}F_{f}^{P}\left(\tau_{af}\right)
\label{eq:3.3.2-2}\end{aligned}$$ with $\tau_{ai}=M_{H_{a}}^{2}/(4m_{i}^{2})$ and the loop functions given by: $$\begin{array}{ll}
F_{f}^{S}\left(\tau\right)=\tau^{-1}\left[1+\left(1-\tau^{-1}\right)f\left(\tau\right)\right];\qquad & F_{f}^{P}\left(\tau\right)=\tau^{-1}f\left(\tau\right);\\
F_{0}\left(\tau\right)=\tau^{-1}\left[-1+\tau^{-1}f\left(\tau\right)\right]; & F_{1}\left(\tau\right)=2+3\tau^{-1}+3\tau^{-1}\left(2-\tau^{-1}\right)f\left(\tau\right);
\end{array}\label{eq:3.3.2-3}$$ $$f\left(\tau\right)=-\frac{1}{2}\intop\nolimits _{0}^{1}\frac{\mathrm{d}x}{x}\ln\left[1-4\tau x\left(1-x\right)\right]=\begin{cases}
\arcsin^{2}\left(\sqrt{\tau}\right)\quad: & \tau\leq1\\
\\
-\frac{1}{4}\left[\ln\left(\frac{\sqrt{\tau}+\sqrt{\tau-1}}{\sqrt{\tau}-\sqrt{\tau-1}}\right)-i\pi\right]^{2}\quad: & \tau\geq1
\end{cases}\label{eq:3.3.2-4}$$ We also include the QCD corrections [@Spira:1995rr; @Spira:1997dg], $$J_{\chi}^{\gamma}=1;\; J_{q}^{\gamma}=1-\frac{\alpha_{s}\left(M_{H_{a}}^{2}\right)}{\pi};\qquad J_{\tilde{q}}^{\gamma}=1+\frac{\alpha_{s}\left(M_{H_{a}}^{2}\right)}{\pi}\label{eq:3.3.2-6}$$
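As a cross-check, these loop functions can be coded directly; for a 125 GeV Higgs with SM-like couplings they reproduce the familiar pattern of a dominant, negative $W$ contribution, a top contribution of opposite sign, and a small complex bottom piece. The sketch below uses illustrative input masses ($m_t=173$, $m_W=80.4$ and a running $m_b\simeq 3$ GeV, all our choices):

```python
import math

def f_tau(tau):
    """f(tau) of Eq. (eq:3.3.2-4), with its two branches."""
    if tau <= 1.0:
        return complex(math.asin(math.sqrt(tau))**2)
    sq = math.sqrt(1.0 - 1.0/tau)   # = sqrt(tau-1)/sqrt(tau)
    return -0.25 * (math.log((1.0 + sq)/(1.0 - sq)) - 1j*math.pi)**2

def F_S(tau):
    """Spin-1/2 loop with scalar coupling, F_f^S(tau)."""
    return (1.0/tau) * (1.0 + (1.0 - 1.0/tau) * f_tau(tau))

def F_1(tau):
    """Spin-1 (W boson) loop, F_1(tau)."""
    return 2.0 + 3.0/tau + (3.0/tau)*(2.0 - 1.0/tau)*f_tau(tau)

def tau_of(m_H, m):
    return m_H**2 / (4.0*m**2)

# SM-like amplitude for m_H = 125 GeV:
m_H = 125.0
S_gamma = (2.0/3.0)*F_S(tau_of(m_H, 3.0)) \
        + (8.0/3.0)*F_S(tau_of(m_H, 173.0)) - F_1(tau_of(m_H, 80.4))
```

The sum comes out at $\simeq -6.5$ with a small imaginary part, of the size of the SM reference value quoted below; the heavy-fermion limit $F_f^S\to 2/3$ is also reproduced.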
### Higgs decay into two gluons.
Similarly, the decay width for $H_{a}\rightarrow gg$ is given by: $$\Gamma_{H_{a}\rightarrow gg}=\frac{M_{H_{a}}^{2}\alpha_{s}^{2}}{32\pi^{3}v^{2}}\left[K_{H}^{g}|S_{a}^{g}|^{2}+K_{A}^{g}|P_{a}^{g}|^{2}\right]\label{eq:3.3.3-1}$$ where $K_{H,A}^{g}$ is again the QCD correction enhancement factor while $S_{a}^{g}$ and $P_{a}^{g}$ are the scalar and pseudoscalar form factors, respectively. $K_{H,A}^{g}$ is [@Spira:1995rr; @Spira:1997dg], $$K_{H}^{g}=1+\frac{\alpha_{s}(M_{H_{a}}^{2})}{\pi}\left(\frac{95}{4}-\frac{7}{6}N^{F}\right)\, ,\qquad
K_{A}^{g}=1+\frac{\alpha_{s}(M_{H_{a}}^{2})}{\pi}\left(\frac{97}{4}-\frac{7}{6}N^{F}\right)\,,\label{eq:3.3.3-3}$$ with $N^{F}$ the number of quark flavours lighter than the Higgs boson under consideration. On the other hand, the expressions that define $S_{a}^{g}$ and $P_{a}^{g}$ are: $$S_{a}^{g}=\sum_{f=b,t}g_{f}\, g_{sff}^{a}\frac{v}{m_{f}}F_{f}^{S}(\tau_{af})~-\sum_{\bar{f}_{i}=\tilde{b}_{1},\tilde{b}_{2},\tilde{t}_{1},\tilde{t}_{2}}g_{\tilde{f}\tilde{f}}^{a}\frac{v^{2}}{4m_{\bar{f}_{i}}^{2}}F_{0}(\tau_{a\tilde{f}_{i}})\label{eq:3.3.3-4}$$ $$P_{a}^{g}=\sum_{f=b,t}g_{f}\, g_{pff}^{a}\frac{v}{m_{f}}F_{f}^{P}(\tau_{af})\label{eq:3.3.3-5}$$
Higgs production.\[sub:Higgs-production.\]
------------------------------------------
The Higgs production processes are basically the same as in the SM [@Djouadi:2005gi; @Djouadi:2005gj], although the couplings in these processes change to the MSSM couplings. The two main production processes are gluon fusion and, especially for large $\tan\beta$, $b\bar{b}$ fusion. Other production mechanisms, like vector boson fusion, are always sub-dominant and we do not consider them here.
At parton level, the leading order cross section for the production of Higgs particles through the gluon fusion process is given by [@Dedes:1999sj; @Dedes:1999zh; @Choi:1999aj; @Djouadi:2005gj]: $$\begin{aligned}
\sigma_{gg\rightarrow H_{a}}^{LO}&=&\hat{\sigma}_{gg\rightarrow H_{a}}^{LO}\:\delta\left(1- \frac{M_{H_{a}}^{2}}{\hat{s}}\right)=\frac{\pi^{2}}{8M_{H_a}}\Gamma_{H_{a}\to gg}^{LO}\:\delta\left(1-\frac{M_{H_{a}}^{2}}{\hat{s}}\right)\\
\hat{\sigma}_{gg\rightarrow H_{a}}^{LO} & =&\frac{\alpha_{s}^{2}\left(Q\right)}{256\pi}\frac{M_{H_a}^2}{\upsilon^2} \left[\left|\sum_{f=t,b}\frac{g_f g_{S,a}^{f} \upsilon}{m_{f}}F_{f}^{S}\left(\tau_{af}\right)+\frac{1}{4}\sum_{\tilde{f}_{i}=\tilde{b}_{1},\tilde{b}_{2},\tilde{t}_{1},\tilde{t}_{2}}\frac{g_{\tilde{f}\tilde{f}}^{a}\upsilon^2}{m_{\tilde{f}}^{2}}F_{0}\left(\tau_{a\tilde{f}}\right)\right|^{2}\nonumber\right. \\& +&\left. \left|\sum_{f=t,b}\frac{g_f g_{P,a}^{f}\upsilon}{m_{f}}F_{f}^{P}\left(\tau_{af}\right)\right|^{2}\right]
~~=~\frac{\alpha_{s}^{2}\left(Q\right)}{256\pi}\frac{M_{H_a}^2}{\upsilon^2} \Bigg[\left|S^g_a\right|^2+\left|P^g_a \right|^{2}\Bigg]\,,\label{eq:3.3.4-2}\nonumber\end{aligned}$$ with $\hat{s}$ the partonic center of mass energy squared. The hadronic cross section from gluon fusion processes can be obtained in the narrow-width approximation as, $$\sigma(pp\to H_{a})^{LO}=\hat{\sigma}_{gg\rightarrow H_{a}}^{LO}\tau_{H_{a}}\frac{d{\cal L}_{LO}^{gg}}{d\tau_{H_{a}}}\:.$$ The gluon luminosity $d{\cal L}_{LO}^{gg}/d\tau$ at the factorization scale $M$, with $\tau_{H_a}=M_{H_a}^2/s$, is given by, $$\frac{d{\cal L}_{LO}^{gg}}{d\tau}=\int_{\tau}^{1}\frac{dx}{x}\, g(x,M^{2})\, g(\tau/x,M^{2})\,.$$ In the numerical analysis below, we use the MSTW2008 [@Martin:2009iq] parton distribution functions.
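The luminosity integral is a simple one-dimensional convolution. The sketch below evaluates it with a crude toy gluon density (the MSTW2008 set used in the actual analysis is replaced here by an illustrative $x^{-1}(1-x)^{5}$ shape, so only the qualitative behaviour is meaningful):

```python
import math

def toy_gluon(x):
    """Toy stand-in for g(x, M^2) -- NOT a real PDF set."""
    return (1.0 - x)**5 / x

def dLgg_dtau(tau, pdf=toy_gluon, n=2000):
    """dL/dtau = int_tau^1 (dx/x) g(x) g(tau/x), midpoint rule in log x."""
    lo = math.log(tau)
    h = -lo / n                        # integrate log x from log(tau) to 0
    total = 0.0
    for i in range(n):
        x = math.exp(lo + (i + 0.5) * h)
        total += pdf(x) * pdf(tau / x) * h   # dx/x = d(log x)
    return total
```

In the narrow-width approximation the hadronic rate is then just $\hat\sigma\times\tau_{H_a}\,d{\cal L}/d\tau_{H_a}$; the steep fall of the luminosity with $\tau$ is what makes heavier Higgses increasingly hard to produce.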
The $bb\rightarrow H_{a}$ production process can also play an important role for the high and intermediate $\tan\beta$ region, roughly for $\tan\beta\geq7$ [@Dicus:1988cx; @Campbell:2002zm; @Maltoni:2003pn; @Harlander:2003ai; @Dittmaier:2003ej; @Dawson:2003kb; @Baglio:2010ae]. The leading order partonic cross section is directly related to the fermionic decay width, $$\begin{aligned}
\hat{\sigma}_{bb\rightarrow H_{a}} & = & \frac{4\pi^{2}}{9M_{H_{a}}}\Gamma_{H_{a}\rightarrow b\bar{b}}= \frac{\pi}{6}\frac{g^{2}m_{b}^{2}}{4M_{W}^{2}}\beta_{b}\left(\beta_{b}^{2}\left|g_{s}^{b}\right|^{2}+\left|g_{p}^{b}\right|^{2}\right)\label{eq:3.3.5-1}\end{aligned}$$ Again the proton-proton cross section is obtained in the narrow-width approximation in terms of the $b\bar{b}$ luminosity. Notice that associated Higgs production with heavy quarks, $gg/q\bar{q}\to b\bar{b}+H_{a}$, is equivalent to the $b\bar{b}\to H_{a}$ inclusive process if one does not require observation of the final-state $b$-jets and treats the $b$-quark as a massless parton in a five active flavour scheme [@Dicus:1988cx; @Djouadi:2005gj; @Djouadi:2013vqa]. In this way, large logarithms $\log(s/m_b^2)$ are resummed to all orders. As before, we are using the MSTW2008 five flavour parton distribution functions. Regarding the QCD corrections to this process, for our purposes it is enough to take into account the QCD enhancing factor $K^f_a$ used in the decay $H_a\to b\bar b$, with the bottom mass evaluated at $m_{H_a}$, and to use the threshold-corrected bottom couplings in Eqs. (\[Eq:thresholdS\],\[Eq:thresholdP\]). $$\begin{aligned}
\hat{\sigma}_{bb\rightarrow H_{a}}^{QCD} & = & \frac{4\pi^{2}}{9M_{H_{a}}}\Gamma_{H_{a}\rightarrow b\bar{b}}= \frac{\pi}{6}\frac{g^{2}m_{b}^{2}}{4M_{W}^{2}}K_{a}^{b}\left(\frac{m_{b}(m_{H_{a}})}{m_{b}(m_{t})}\right)^{2}\beta_{b}\left(\beta_{b}^{2}\left|g_{s}^{b}\right|^{2}+\left|g_{p}^{b}\right|^{2}\right)\label{eq:bbfus}\end{aligned}$$ The total hadronic cross section can be obtained at NLO using the so-called $K$-factors [@Spira:1997dg; @Graudenz:1992pv; @Dawson:1996xz; @Choi:1999aj] to correct the LO gluon fusion, and it is given by, $$\sigma(pp\to H_{a})=K~\hat{\sigma}_{gg\rightarrow H_{a}}^{LO}\tau_{H_{a}}\frac{d{\cal L}_{LO}^{gg}}{d\tau_{H_{a}}}\:+\:\hat{\sigma}_{bb\rightarrow H_{a}}^{QCD}\tau_{H_{a}}\frac{d{\cal L}_{LO}^{bb}}{d\tau_{H_{a}}}$$ where the $K$-factor parametrizes the ratio of the higher order cross section to the leading order one. It is important to include this term as it is known that the next to leading order QCD effects, which affect both quark and squark contributions similarly [@Dawson:1996xz; @Djouadi:1999ht], are very large and cannot be neglected. Such effects are essentially independent of the Higgs mass but exhibit a $\tan\beta $ dependence. In the low $\tan\beta $ region, $K$ can be approximated by 2 while for large $\tan\beta $ its value gets closer to unity [@Baglio:2010ae]. In our study we have taken $K$ to be constant for fixed $\tan \beta$ in the considered range of Higgs masses.
Indirect constraints
--------------------
As explained in the introduction, indirect searches for new physics in low-energy precision experiments play a very important role in Higgs boson searches. The main players in this game are $b\to s\gamma$ and $B_{s}\to\mu^{+}\mu^{-}$.
### $b\rightarrow s\gamma$ decay.
Following Refs. [@Degrassi:2000qf; @Misiak:2006zs; @Lunghi:2006hc; @Gomez:2006uv], the branching ratio of the decay, given in terms of the Wilson coefficients, can be written as: $$\mbox{BR}(B\rightarrow X_{s}\gamma)\simeq\left[a~+a_{77}~\left|\delta\mathcal{C}_{7}\right|^{2}+a_{88}~\left|\delta\mathcal{C}_{8}\right|^{2}+\mbox{Re}\left[a_{7}~\delta\mathcal{C}_{7}\right]+\mbox{Re}\left[a_{8}~\delta\mathcal{C}_{8}\right]+\mbox{Re}\left[a_{78}~\delta\mathcal{C}_{7}\delta\mathcal{C}{}_{8}^{*}\right]\right]\label{eq:5.1-1}$$ where $a\sim3.0\times10^{-4}$, $a_{77}\sim4.7\times10^{-4}$, $a_{88}\sim0.8\times10^{-4}$, $a_{7}\sim\left(-7.2+0.6\, i\right)\times10^{-4}$, $a_{8}\sim\left(-2.2-0.6\, i\right)\times10^{-4}$ and $a_{78}\sim\left(2.5-0.9\, i\right)\times10^{-4}$, and the main contributions to the Wilson coefficients, beyond the $W$–boson contribution, are the chargino and charged-Higgs contributions, $\delta\mathcal{C}_{7,8}=\mathcal{C}_{7,8}^{H^{\pm}}+\mathcal{C}_{7,8}^{\chi^{\pm}}$.
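Numerically, this quadratic expansion is trivial to evaluate. The sketch below hard-codes the coefficients quoted above (taking the quadratic terms as $|\delta C|^2$; the function name is ours) and shows that a positive shift $\delta C_7$ first lowers the branching ratio, which is what drives the charged-Higgs constraint:

```python
def br_bsgamma(dC7, dC8):
    """BR(B -> X_s gamma) from Eq. (eq:5.1-1) with the quoted coefficients;
    dC7, dC8 are complex Wilson-coefficient shifts."""
    a, a77, a88 = 3.0e-4, 4.7e-4, 0.8e-4
    a7  = complex(-7.2e-4,  0.6e-4)
    a8  = complex(-2.2e-4, -0.6e-4)
    a78 = complex( 2.5e-4, -0.9e-4)
    return (a + a77*abs(dC7)**2 + a88*abs(dC8)**2
            + (a7*dC7).real + (a8*dC8).real
            + (a78*dC7*dC8.conjugate()).real)
```

Comparing the result with the world average $(3.43\pm0.21\pm0.07)\times10^{-4}$ then translates into bounds on $\delta\mathcal{C}_{7,8}$, and hence on the charged Higgs mass.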
Chargino contributions are given by, $$\mathcal{C}_{7,8}^{\chi^{\pm}}=\frac{1}{\cos\beta}\sum_{{\scriptstyle a=1,2}}\left\{ \frac{U_{a2}V_{a1}M_{W}}{\sqrt{2}m_{\tilde{\chi}_{a}^{\pm}}}\mathcal{F}_{7,8}\left(x_{\tilde{q}\tilde{\chi}_{a}^{\pm}},x_{\tilde{t}_{1}\tilde{\chi}_{a}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{a}^{\pm}}\right)+\frac{U_{a2}V_{a2}\overline{m}_{t}}{2m_{\tilde{\chi}_{a}^{\pm}}\sin\beta}\mathcal{G}_{7,8}\left(x_{\tilde{t}_{1}\tilde{\chi}_{a}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{a}^{\pm}}\right)\right\}$$ where $x_{\alpha\beta}=m_{\alpha}^{2}/m_{\beta}^{2}$ and the functions $\mathcal{F}_{7,8}(x,y,z)=f_{7,8}^{(3)}\left(x\right)-\left|\mathcal{R}_{11}^{\tilde{t}}\right|^{2}f_{7,8}^{(3)}\left(y\right)-\left|\mathcal{R}_{21}^{\tilde{t}}\right|^{2}f_{7,8}^{(3)}\left(z\right)$ and $\mathcal{G}_{7,8}(x,y)=\mathcal{R}_{11}^{\tilde{t}}\mathcal{R}_{12}^{*\tilde{t}}f_{7,8}^{(3)}\left(x\right)-\mathcal{R}_{21}^{\tilde{t}}\mathcal{R}_{22}^{*\tilde{t}}f_{7,8}^{(3)}\left(y\right)$ with $f_{7,8}^{\left(3\right)}(x)$, $$f_{7}^{(3)}\left(x\right)=\frac{5-7x}{6\left(x-1\right)^{2}}+\frac{x\left(3x-2\right)}{3\left(x-1\right)^{2}}\ln x;\quad f_{8}^{(3)}\left(x\right)=\frac{1+x}{2\left(x-1\right)^{2}}-\frac{x}{\left(x-1\right)^{3}}\ln x;$$ Now, using the expansion in Appendix \[App:expand\], we can see that the dominant terms in $\tan\beta$ are: $$\begin{aligned}
\mathcal{C}_{7,8}^{\chi^{\pm}} &\simeq M_{W}^{2}\frac{\mu M_{2}\tan\beta}{m_{\tilde{\chi}_{1}^{\pm}}^{2}-m_{\tilde{\chi}_{2}^{\pm}}^{2}}\left(\frac{f_{7,8}^{(3)}\left(x_{\tilde{q}\tilde{\chi}_{1}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}}\right)}{m_{\tilde{\chi}_{1}^{\pm}}^{2}}-\frac{f_{7,8}^{(3)}\left(x_{\tilde{q}\tilde{\chi}_{2}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}}\right)}{m_{\tilde{\chi}_{2}^{\pm}}^{2}}\right)\qquad\qquad\label{eq:C7char}\\
& +~~M_{W}^{2}\frac{m_{t}^{2}}{m_{\tilde{t}_{1}}^{2}-m_{\tilde{t}_{2}}^{2}}\:\frac{\mu A_{t}\tan\beta}{m_{\tilde{\chi}_{1}^{\pm}}^{2}-m_{\tilde{\chi}_{2}^{\pm}}^{2}}\left(\frac{f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{2}\tilde{\chi}_{1}^{\pm}}\right)}{m_{\tilde{\chi}_{1}^{\pm}}^{2}}-\frac{f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{2}\tilde{\chi}_{2}^{\pm}}\right)}{m_{\tilde{\chi}_{2}^{\pm}}^{2}}\right)\nonumber \end{aligned}$$ and in the limit $m_{\tilde{\chi}_{1}}\simeq M_{2}\ll m_{\tilde{\chi}_{2}}\simeq\mu$, we have, $$\begin{aligned}
\mathcal{C}_{7,8}^{\chi^{\pm}}\simeq&-&\frac{M_{2}}{\mu}\tan\beta\frac{M_{W}^{2}}{M_{2}^{2}}\left(f_{7}^{(3)}\left(x_{\tilde{q}\tilde{\chi}_{1}^{\pm}}\right)-f_{7}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}}\right)\right) \\&-&\frac{A_{t}}{\mu}\tan\beta\,\frac{M_{W}^{2}}{M_{2}^{2}}\frac{m_{t}^{2}}{m_{\tilde{t}_{1}}^{2}-m_{\tilde{t}_{2}}^{2}}\:\left(f_{8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}}\right)-f_{8}^{(3)}\left(x_{\tilde{t}_{2}\tilde{\chi}_{1}^{\pm}}\right)\right)\nonumber\label{C7charlim}\end{aligned}$$ Then, the charged-Higgs contribution, including the would-be Goldstone-boson corrections to the W-boson contribution [@Gomez:2006uv], is given by, $${\cal C}_{7,8}^{H^{\pm}}=\frac{1}{3\tan^{2}\beta}f_{7,8}^{(1)}(y_{t})+\frac{f_{7,8}^{(2)}(y_{t})\, +\, \left(\Delta h_{d}/h_{d} \left( 1 + \tan\beta\right) - \delta h_{d}/h_{d} \left( 1 - \cot\beta\right)\right)\,f_{7,8}^{(2)}(x_{t}) }{1+\delta h_{d}/h_{d}+\Delta h_{d}/h_{d}\tan\beta} \label{eq:C7H}$$ with $y_{t}=m_{t}^{2}/M_{H^{\pm}}^{2}$, $x_{t}=m_{t}^{2}/M_{W}^{2}$ and $$\begin{aligned}
f_{7}^{(1)}\left(x\right)&=\frac{x\left(7-5x-8x^{2}\right)}{24\left(x-1\right)^{3}}+\frac{x^{2}\left(3x-2\right)}{4\left(x-1\right)^{4}}\ln x;\quad f_{8}^{(1)}\left(x\right)&=\frac{x\left(2+5x-x^{2}\right)}{8\left(x-1\right)^{3}}-\frac{3x^{2}}{4\left(x-1\right)^{4}}\ln x;\nonumber \\
f_{7}^{(2)}\left(x\right)&=\frac{x\left(3-5x\right)}{12\left(x-1\right)^{2}}+\frac{x\left(3x-2\right)}{6\left(x-1\right)^{3}}\ln x;\qquad\quad f_{8}^{(2)}\left(x\right)&=\frac{x\left(3-x\right)}{4\left(x-1\right)^{2}}-\frac{x}{2\left(x-1\right)^{3}}\ln x;\end{aligned}$$
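To illustrate the size of this contribution, the sketch below evaluates ${\cal C}_{7}^{H^{\pm}}$ with the threshold corrections $\delta h_{d}$, $\Delta h_{d}$ switched off (a simplifying assumption; the masses and $\tan\beta$ value are illustrative):

```python
import math

def f7_1(x):
    """f_7^(1)(x) as quoted in the text."""
    return (x*(7 - 5*x - 8*x**2)/(24*(x - 1)**3)
            + x**2*(3*x - 2)/(4*(x - 1)**4)*math.log(x))

def f7_2(x):
    """f_7^(2)(x) as quoted in the text."""
    return (x*(3 - 5*x)/(12*(x - 1)**2)
            + x*(3*x - 2)/(6*(x - 1)**3)*math.log(x))

def C7_Hpm(M_Hpm, tan_beta=10.0, m_t=165.0):
    """Eq. (eq:C7H) with delta h_d = Delta h_d = 0 (assumption)."""
    y_t = m_t**2 / M_Hpm**2
    return f7_1(y_t)/(3.0*tan_beta**2) + f7_2(y_t)
```

The contribution is negative, i.e. it adds constructively to the (negative) SM amplitude, and it decouples only slowly with $M_{H^{\pm}}$; this is why $b\to s\gamma$ is such a powerful charged-Higgs constraint at low $\tan\beta$, where the chargino terms are small.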
### $B_{s}\rightarrow\mu^{-}\mu^{+}$ decay.
The branching ratio associated with this decay can be adequately approximated by the following expression [@Buras:2002vd]: $$\mbox{BR}(B_{s}\rightarrow\mu^{-}\mu^{+})=2.32\cdot10^{-6}\;\frac{\tau_{B_{s}}}{1.5\,{\rm ps}}\left(\frac{F_{B_{s}}}{230\,{\rm MeV}}\right)^{2}\left(\frac{\left|V_{ts}\right|}{0.04}\right)^{2}\left[\left|\tilde{c}_{S}\right|^{2}+\left|\tilde{c}_{P}+0.04(c_{A}-c'_{A})\right|^{2}\right]\label{eq:bsmumu}$$ where the dimensionless Wilson coefficients are given by $\tilde{c}_{S}=m_{B_{s}}c_{S}$, $\tilde{c}_{P}=m_{B_{s}}c_{P}$ and the coefficients $c_{A}$ and $c'_{A}$ can be neglected in comparison with $c_{S}$ and $c_{P}$ since they are related to contributions from box diagrams and $Z^{0}$-penguin diagrams. In our analysis, we use the approximate expressions for $c_{S}$ and $c_{P}$ in Ref. [@Buras:2002vd]: $$c_{S}\simeq\frac{m_{\mu}\overline{m}_{t}^{2}}{4M_{W}^{2}}\,\frac{16\pi^{2}\tan^{3}\beta~\epsilon_{Y}}{\left(1+\delta h_{d}/h_{d}+\Delta h_{d}/h_{d}\tan\beta\right)\left(1+\epsilon_{0}\tan\beta\right)}\left[\frac{\left|U_{11}\right|^{2}}{m_{H_{1}}^{2}}+\frac{\left|U_{21}\right|^{2}}{m_{H_{2}}^{2}}+\frac{\left|U_{31}\right|^{2}}{m_{H_{3}}^{2}}\right]\label{eq:5.3-2}$$ $$c_{P}\simeq\frac{m_{\mu}\overline{m}_{t}^{2}}{4M_{W}^{2}}\,\frac{16\pi^{2}\tan^{3}\beta~\epsilon_{Y}}{\left(1+\delta h_{d}/h_{d}+\Delta h_{d}/h_{d}\tan\beta\right)\left(1+\epsilon_{0}\tan\beta\right)}\left[\frac{\left|U_{13}\right|^{2}}{m_{H_{1}}^{2}}+\frac{\left|U_{23}\right|^{2}}{m_{H_{2}}^{2}}+\frac{\left|U_{33}\right|^{2}}{m_{H_{3}}^{2}}\right]\label{eq:5.3-3}$$ with $$\begin{aligned}
\epsilon_{0} = \frac{2\alpha_{s}}{3\pi}~\mu^* m_{\tilde{g}}^* ~I\left(m_{\tilde d_1}^2,m_{\tilde d_2}^2,m_{\tilde g}^2\right) \qquad & \qquad\epsilon_{Y} = -\frac{1}{16\pi^{2}}~ A_{t}^* \mu^*~ I\left(m_{\tilde t_1}^{2},m_{\tilde t_2}^{2},|\mu|^2 \right).
\label{eq:5.3-4}\end{aligned}$$ Given that in Eq. (\[eq:bsmumu\]) we include only the $\tan \beta$-enhanced Higgs contributions, in the following we use the experimental result as a 3$\sigma$ upper limit on this contribution.
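The strong $\tan\beta$ dependence is easy to see numerically: since $c_{S,P}\propto\tan^{3}\beta$, the branching ratio scales as $\tan^{6}\beta$ when the Higgs contributions dominate. A sketch of Eq. (\[eq:bsmumu\]) with the $c_{A}$ terms dropped and a hypothetical coefficient normalization $k$:

```python
def br_bsmumu(cS_t, cP_t, tau_Bs=1.5, F_Bs=230.0, Vts=0.04):
    """BR(B_s -> mu+ mu-) from Eq. (eq:bsmumu); cS_t, cP_t are the
    dimensionless m_Bs * c_{S,P}; c_A, c_A' neglected as in the text."""
    return (2.32e-6 * (tau_Bs/1.5) * (F_Bs/230.0)**2 * (Vts/0.04)**2
            * (abs(cS_t)**2 + abs(cP_t)**2))

# tan^6(beta) scaling when c_P = k * tan^3(beta), k an arbitrary constant:
k = 1.0e-5
ratio = br_bsmumu(0.0, k*20.0**3) / br_bsmumu(0.0, k*10.0**3)   # -> 2^6 = 64
```

Doubling $\tan\beta$ thus raises the prediction by a factor 64, which is why this decay mainly constrains the large $\tan\beta$ region with light Higgses.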
Model analysis. {#sec:analysis}
===============
In the previous section we have defined the MSSM model we are going to analyze and presented the different production mechanisms and the main decay channels for neutral Higgses at LHC. In this section we study, in this general MSSM scenario with the possible presence of CP-violating phases, whether it is still possible to interpret the Higgs resonance observed at LHC with a mass of $\sim125$ GeV as the second Higgs, with a lighter Higgs below this mass and a third neutral Higgs with a mass $m_{H_{3}}\leq200$ GeV. As we will see in the following, the present experimental results that we use to this end are the measurements of $pp\to H_{2}\to\gamma\gamma$ and $pp\to H_{a}\to\tau\tau$ at LHC and the indirect constraints on the charged Higgs from BR($b\to s\gamma$). We divide our analysis into two $\tan\beta$ regions: low $\tan\beta$, defined as $\tan\beta\lesssim8$, and medium–large $\tan\beta$, for $\tan \beta \gtrsim8$.
Medium–large $\tan\beta$ regime.
---------------------------------
Now, we take $\tan\beta\gtrsim 8$, which implies that $\sin\beta\simeq1$ and $\cos\beta\simeq(1/\tan\beta)\ll1$. We analyze the different processes in this regime of medium–large $\tan\beta$. First, we study the model predictions for the process $pp\to H_{2}\to\gamma\gamma$, which is required to satisfy the new experimental constraint on the signal strength, $0.75\leq\mu_{\gamma\gamma}^{\rm{LHC}}\leq1.55\,.$ Then, we analyze the constraints from $pp\to H_{a}\to\tau\tau$ and see whether the two results can be compatible in the regime of medium–large $\tan\beta$ for $m_{H_{2}}=125$ GeV.
### Two photon cross section.
The two-photon cross section through a Higgs boson can be divided, in the narrow-width approximation, into two parts: the Higgs production cross section and the Higgs decay to the two-photon final state, $\sigma_{\gamma\gamma}=\sigma(pp\to H_{2})\times\mbox{BR}(H_{2}\to\gamma\gamma)=\sigma(pp\to H_{2})\times\Gamma(H_{2}\to\gamma\gamma)/\Gamma_{H_{2}}$. Thus we have to analyze three elements, [*i.e.*]{} $\sigma(pp\to H_{2})$, $\Gamma(H_{2}\to\gamma\gamma)$ and $\Gamma_{H_{2}}$.
First, we analyze the decay width of the Higgs boson into two photons in our MSSM model. As a reference, we can compare our prediction with the Standard Model value, $$S_{H}^{\gamma}=\frac{2}{3}F_{b}^{S}\left(\tau_{Hb}\right)+\frac{8}{3}F_{t}^{S}\left(\tau_{Ht}\right)-F_{1}\left(\tau_{HW}\right)\simeq\left(-0.025+i\,0.034\right)+1.8-8.3\simeq-6.5\,.$$ In the MSSM, this decay width is given by Eq. (\[eq:3.3.2-5-1\]) and it has both a scalar and a pseudoscalar part, each receiving contributions from different virtual particles: $$\begin{aligned}
S_{H_{2}^{0}}^{\gamma} & = & S_{H_{2}^{0},b}^{\gamma}+S_{H_{2}^{0},t}^{\gamma}+S_{H_{2}^{0},W}^{\gamma}+S_{H_{2}^{0},\tilde{b}}^{\gamma}+S_{H_{2}^{0},\tilde{t}}^{\gamma}+S_{H_{2}^{0},\tilde{\tau}}^{\gamma}+S_{H_{2}^{0},\tilde{\chi}}^{\gamma}+S_{H_{2}^{0},H^{\pm}}^{\gamma};\label{eq:4.1.1-1}\\
P_{H_{2}^{0}}^{\gamma} & = & P_{H_{2}^{0},b}^{\gamma}+P_{H_{2}^{0},t}^{\gamma}+P_{H_{2}^{0},\tilde{\chi}}^{\gamma};\label{eq:4.1.1-2}\end{aligned}$$ Once we fix the mass of the Higgs particle, $M_{H_{2}}\simeq125$ GeV, the contributions from $W$-bosons and SM fermions are completely fixed, at least at tree level, with the only free parameters being the Higgs mixings and $\tan\beta$. In the case of third-generation fermions, as we have already seen, it is very important to take into account the non-holomorphic threshold corrections from gluino and chargino loops to the Higgs–fermion couplings, $(g_{f}^{S},g_{f}^{P})$, which introduce an additional dependence on the sfermion masses. Nevertheless, the $W$ contribution remains very simple, $$S_{H_{2}^{0},W}^{\gamma}=-g_{H_{2}WW}\: F_{1}\left(\tau_{2W}\right)=-\left(\mathcal{U}_{21}\cos\beta+\mathcal{U}_{22}\sin\beta\right)\: F_{1}\left(\tau_{2W}\right)\simeq-8.3\,\left(\mathcal{U}_{22}+\frac{\mathcal{U}_{21}}{\tan\beta}\right)\,,$$ where we have used that $F_{1}\left(\tau_{2W}\right)=F_{1}\left(0.61\right)\simeq8.3$.
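As a simple numerical cross-check (illustrative only), summing the three quoted loop pieces reproduces the SM reference value used throughout this section:

```python
# Illustrative cross-check of the quoted SM two-photon amplitude,
# S_H^gamma = (2/3) F_b^S + (8/3) F_t^S - F_1, using the numerical
# pieces given in the text (bottom, top and W loops respectively).
b_piece = complex(-0.025, 0.034)   # (2/3) F_b^S(tau_Hb), bottom loop
t_piece = 1.8                      # (8/3) F_t^S(tau_Ht), top loop
w_piece = -8.3                     # -F_1(tau_HW): the dominant W loop

s_sm = b_piece + t_piece + w_piece
print(s_sm)   # roughly -6.5 + 0.03i, dominated by the W contribution
```

The $W$ loop dominates and fixes the sign of the full amplitude, which is why any new contribution must be compared against it.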
The top and bottom quark contributions enter both the scalar and pseudoscalar pieces, which have a similar structure. The scalar contribution, from Eq. (\[eq:3.3.2-2\]) and taking into account the $\tan\beta$ regime under consideration, is given by the following approximate expression: $$\begin{aligned}
S_{H_{2}^{0},b+t}^{\gamma} &\simeq& \frac{1}{3}\,\left[2\,\left(\mbox{Re}\left\{ \frac{\mathcal{U}_{21}+\mathcal{U}_{22}\kappa_{d}}{1+\kappa_{d}\tan\beta}\right\} \tan\beta+\mbox{Im}\left\{ \frac{\kappa_{d}\left(\tan^{2}\beta+1\right)}{1+\kappa_{d}\tan\beta}\right\} {\cal U}_{23}\right)\, F_{b}^{S}\left(\tau_{2b}\right)\right.\nonumber\\
&&\left.~+~8\,\mathcal{U}_{22}\, F_{t}^{S}\left(\tau_{2t}\right)\right];\label{eq:4.1.1-6}\end{aligned}$$ where $\kappa_{d}$ is a parameter associated with the finite loop-induced threshold corrections that modify the couplings of the neutral Higgses to the scalar and pseudoscalar fermion bilinears, as defined in Eqs. (\[Eq:thresholdS\],\[Eq:thresholdP\]). These parameters are always much smaller than $1$, whereas for $m_{t}=173.1$ GeV (pole mass) and $m_{b}=4.33$ GeV (mass at the $m_{t}$ scale) the loop functions are approximately $F_{b}^{S}\simeq-0.04+i\,0.05$ and $F_{t}^{S}\simeq0.7$. In this way, Eq. (\[eq:4.1.1-6\]) can finally be approximated by: $$S_{H_{2}^{0},b+t}^{\gamma}\simeq1.8~\mathcal{U}_{22} +\left(-0.025+i\,0.034\right)\left[\mbox{Re}\left\{ \frac{\tan\beta}{1+\kappa_{d}\tan\beta}\right\} \,\mathcal{U}_{21}+\,\mbox{Im}\left\{ \frac{\kappa_{d}\tan^{2}\beta}{1+\kappa_{d}\tan\beta}\right\} {\cal U}_{23}\right]\,.$$ The first contribution beyond the Standard Model that we consider is the charged Higgs boson. As we can see from Eq. (\[eq:3.3.2-2\]), it only enters the scalar part of the decay width. Its contribution is given by: $$\begin{aligned}
S_{H_{2}^{0},H^{\pm}}^{\gamma} & = & -g_{H_{2}^{0}H^{\pm}}\frac{\upsilon^{2}}{2m_{H^{\pm}}^{2}}F_{0}\left(\tau_{2H^{\pm}}\right),\label{eq:4.1.1-7}\end{aligned}$$ where the self-coupling to the second neutral Higgs can be approximated as follows for medium-large $\tan\beta$, keeping only the leading terms in $\cos\beta$: $$\begin{aligned}
g_{H_{2}^{0}H^{\pm}} & \simeq & \left(2\lambda_{1}\cos\beta-\lambda_{4}\cos\beta-2\cos\beta\,\mbox{Re}\left\{ \lambda_{5}\right\} +\mbox{Re}\left\{ \lambda_{6}\right\} \right)\mathcal{U}_{21} \\
& + & \left(\lambda_{3}+\cos\beta\,\mbox{Re}\left\{ \lambda_{6}\right\} -2\cos\beta\,\mbox{Re}\left\{ \lambda_{7}\right\} \right)\mathcal{U}_{22}+\left(2\cos\beta\,\mbox{Im}\left\{ \lambda_{5}\right\} -\mbox{Im}\left\{ \lambda_{6}\right\} \right)\mathcal{U}_{23};\nonumber\label{eq:4.1.1-8}\end{aligned}$$ The loop function $F_{0}\left(\tau\right)$ is quite stable for small $\tau$: for $150\mbox{ GeV}\leq m_{H^{\pm}}\leq200\mbox{ GeV}$ we have $0.097\simeq(125/400)^{2}\leq\tau_{2H^{\pm}}\leq0.17\simeq(125/300)^{2}$ and $F_{0}\left(\tau_{2H^{\pm}}\right)\simeq0.34$, so that $$\begin{aligned}
S_{H_{2}^{0},H^{\pm}}^{\gamma} & \lesssim & -0.45\left[\left(\frac{2\lambda_{1}-\lambda_{4}-2\,\mbox{Re}\left\{ \lambda_{5}\right\} }{\tan\beta}+\mbox{Re}\left\{ \lambda_{6}\right\} \right)\mathcal{U}_{21}
\right.\nonumber \\ & + & \left.
\left(\lambda_{3}+\frac{\mbox{Re}\left\{ \lambda_{6}\right\} -2\,\mbox{Re}\left\{ \lambda_{7}\right\} }{\tan\beta}\right)\mathcal{U}_{22}+\left(\frac{2\,\mbox{Im}\left\{ \lambda_{5}\right\} }{\tan\beta}-\mbox{Im}\left\{ \lambda_{6}\right\} \right)\mathcal{U}_{23}\right]\label{eq:4.1.1-9}\end{aligned}$$ Now, we take into account that the Higgs-potential couplings, $\lambda_{i}=\lambda_{i}\left(g,\beta,\, M_{susy},\, A_{t},\mu\right)$, can safely be considered $\lambda_{i}\lesssim1$. Numerically, we find a maximum $\lambda_{i}^{max}\sim0.25$ for some of them. Taking only the couplings not suppressed by $\tan\beta$ factors, we have $\lambda_{3}\simeq-0.074$ at tree level, with the one-loop value typically smaller due to the opposite sign of the fermionic corrections, and $\lambda_{6}\simeq-0.14\, e^{i\alpha}$. Thus, we can expect the charged-Higgs contribution to be negligible when compared with the above SM contributions, even for $m_{H^\pm} \simeq 150$ GeV, and it cannot modify substantially the diphoton amplitude.
The squarks relevant to the two-photon decay width are those with large Yukawa couplings, that is, the sbottom and the stop. The scalar contribution of these squarks is given in Eq. (\[eq:3.3.2-2\]) and, writing their couplings to the Higgs explicitly, it can be expressed as follows: $$\begin{aligned}
S_{H_{2}^{0},\tilde{b}}^{\gamma} = -\sum_{i=1,2}\frac{1}{3}g_{H_{2}\tilde{b}_{i}^{*}\tilde{b}_{i}}\frac{v^{2}}{2m_{\tilde{b}_{i}}^{2}}F_{0}\left(\tau_{2\tilde{b}_{i}}\right)=-\sum_{i=1,2}\frac{v^{2}}{6m_{\tilde{b}_{i}}^{2}}\,\left(\tilde{\Gamma}^{\alpha bb}\right)_{\beta\gamma}\mathcal{U}_{2\alpha}\mathcal{R}_{\beta i}^{\tilde{b}*}\mathcal{R}_{\gamma i}^{\tilde{b}}\, F_{0}\left(\tau_{2\tilde{b}_{i}}\right)\qquad\label{eq:4.1.1-10}\\
S_{H_{2}^{0},\tilde{t}}^{\gamma} = -\sum_{i=1,2}\frac{4}{3}g_{H_{2}\tilde{t}_{i}^{*}\tilde{t}_{i}}\frac{v^{2}}{2m_{\tilde{t}_{i}}^{2}}F_{0}\left(\tau_{2\tilde{t}_{i}}\right)=-\sum_{i=1,2}\frac{2v^{2}}{3m_{\tilde{t}_{i}}^{2}}\,\left(\tilde{\Gamma}^{\alpha tt}\right)_{\beta\gamma}\mathcal{U}_{2\alpha}\mathcal{R}_{\beta i}^{\tilde{t}*}\mathcal{R}_{\gamma i}^{\tilde{t}}\, F_{0}\left(\tau_{2\tilde{t}_{i}}\right)~\qquad\label{eq:4.1.1-11}\end{aligned}$$ In the sbottom contribution, we make the expansion described in Appendix \[App:expand\], taking into account that the off-diagonal terms in its mass matrix are much smaller than the diagonal ones. This approximation leads us to the expression: $$\begin{aligned}
S_{H_{2}^{0},\tilde{b}}^{\gamma} & \simeq & 0.12\tan^{2}\beta\,\frac{m_{b}^{2}}{m_{\tilde{b}_{1}}^{2}}\left[\frac{\mbox{Re}\left\{ A_{b}^{*}\mu\right\} }{m_{\tilde{b}_{2}}^{2}}\mathcal{U}_{21}-\frac{\mu^{2}}{m_{\tilde{b}_{2}}^{2}}\mathcal{U}_{22}+\frac{\mbox{Im}\left\{ A_{b}^{*}\mu\right\} }{m_{\tilde{b}_{2}}^{2}\tan\beta}\mathcal{U}_{23}\right]\label{eq:sbottomgl}\\
& \simeq & 1.2\times10^{-5}\tan^{2}\beta\left(\frac{300\mbox{ GeV}}{m_{\tilde{b}_{1}}}\right)^{2}\left[\frac{\mbox{Re}\left\{ A_{b}^{*}\mu\right\} }{m_{\tilde{b}_{2}}^{2}}\mathcal{U}_{21}-\frac{\mu^{2}}{m_{\tilde{b}_{2}}^{2}}\mathcal{U}_{22}+\frac{\mbox{Im}\left\{ A_{b}^{*}\mu\right\} }{m_{\tilde{b}_{2}}^{2}\tan\beta}\mathcal{U}_{23}\right]\nonumber \end{aligned}$$ where we have used that $F_{0}\left(\tau_{2\tilde{b}_{i}}\right)\simeq0.34$ for both right- and left-handed sbottoms. Assuming that $A_{b}/m_{\tilde{b}_{2}},\mu/m_{\tilde{b}_{2}}\simeq O(1)$, it is clear that the sbottom contribution can be safely neglected, since even for $\tan \beta \sim 50$ it would be two orders of magnitude below the top-quark contribution. Incidentally, the stau contribution can be obtained with the replacement $b\leftrightarrow\tau$, and we can also expect it to be negligible for stau masses above 100 GeV, except in the very large $\tan \beta$ region[^6].
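The smallness of the sbottom piece can be checked with a two-line estimate; the running bottom mass $m_b\sim3$ GeV used below is an assumption of this illustrative check:

```python
# Order-of-magnitude sketch of the sbottom prefactor in Eq. (sbottomgl).
# The running bottom mass m_b ~ 3 GeV is an assumption of this check
# (the text quotes 1.2e-5 for m_sbottom1 = 300 GeV).
m_b, m_sb1 = 3.0, 300.0                  # GeV
prefactor = 0.12 * (m_b / m_sb1) ** 2
print(prefactor)                          # ~ 1.2e-5

# Even at tan(beta) = 50 the tan^2(beta)-enhanced factor stays small,
# far below the O(1) top-quark piece:
print(prefactor * 50 ** 2)                # ~ 0.03
```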
On the other hand, in the top-squark case there are large off-diagonal terms in the mass matrix which cannot be neglected in comparison with the diagonal ones, especially if we intend to analyze small stop masses. This does not allow us to use the Appendix \[App:expand\] approximation in such a straightforward way. Nevertheless, we can still apply the expansion while keeping the stop mixing matrices, $\cal{R}$, explicit, and we can write Eq. (\[eq:4.1.1-11\]) as, $$\begin{aligned}
S_{H_{2}^{0},\tilde{t}}^{\gamma}&\simeq&0.45\,\left[\frac{m_{t}^{2}}{m_{\tilde{t}_{1}}^{2}}\left(\left|\mathcal{R}_{11}\right|^{2}+\left|\mathcal{R}_{12}\right|^{2}\right)+\frac{m_{t}^{2}}{m_{\tilde{t}_{2}}^{2}}\left(\left|\mathcal{R}_{22}\right|^{2}+\left|\mathcal{R}_{21}\right|^{2}\right)\right]\mathcal{U}_{22}~+~0.45~\left(1-\frac{m_{\tilde{t}_{1}}^{2}}{m_{\tilde{t}_{2}}^{2}}\right)\nonumber\\&&\,\left[-\mbox{Re}\left\{ \frac{\mu m_{t}}{m_{\tilde{t}_{1}}^{2}}\mathcal{R}_{11}^{*}\mathcal{R}_{21}\right\} \mathcal{U}_{21}+ \mbox{Im}\left\{ \frac{\mu m_{t}}{m_{\tilde{t}_{1}}^{2}}\mathcal{R}_{11}^{*}\mathcal{R}_{21}\right\} \mathcal{U}_{23}+\mbox{Re}\left\{ \frac{A_{t}^{*}m_{t}}{m_{\tilde{t}_{1}}^{2}}\mathcal{R}_{11}^{*}\mathcal{R}_{21}\right\} \mathcal{U}_{22}\right]\nonumber\\
\label{eq:stopgl}\end{aligned}$$ where we take $F_{0}\left(\tau_{2\tilde{t}_{1}}\right)\simeq F_{0}\left(\tau_{2\tilde{t}_{2}}\right)\simeq0.34$. Regarding the stop mass, the limit provided by ATLAS and CMS sets $m_{\tilde{t}}\geq650$ GeV for the general case where the lightest neutralino mass is $m_{\tilde{\chi}_{1}^{0}}\lesssim250$ GeV [@ATLAS-CONF-2013-024; @ATLAS-CONF-2013-037; @ATLAS-CONF-2013-053; @PAS-SUS-13-011]. Therefore, if we consider typical upper values $A_{t}, \mu\lesssim 3m_{\widetilde{Q}_{3}}\sim3000$ GeV for $m_{\widetilde{Q}_{3}}\lesssim1000$ GeV (higher values may have naturalness and charge- and color-breaking problems), the size of the coefficients in the equation above will be $m_{t}^{2}/m_{\tilde{t}_{2}}^{2},~m_{t}^{2}/m_{\tilde{t}_{1}}^{2}<0.1$ and $A_{t}m_{t}/m_{\tilde{t}_{1}}^{2},~\mu m_{t}/m_{\tilde{t}_{1}}^{2}\lesssim 1.2$. Taking into account that $\mathcal{R}_{11}^{*}\mathcal{R}_{21}\leq\frac{1}{2}$, $\left|\mathcal{R}_{ij}\right|^{2}\leq1$ and $(1-m_{\tilde{t}_{1}}^{2}/m_{\tilde{t}_{2}}^{2})<1$, we obtain $$S_{H_{2}^{0},\tilde{t}}^{\gamma}\lesssim0.26\left[ - \mathcal{U}_{21}+1.7\,\mathcal{U}_{22}+\mathcal{U}_{23}\right]\label{eq:stopgl2}\,,$$ which is therefore typically an order of magnitude smaller than the top-quark and $W$-boson contributions, and carries no $\tan \beta$ enhancement. Nevertheless, we keep this stop contribution to allow for the possibility of a light stop, $m_{\tilde t_1} \leq 650$ GeV, with a small mass difference to the LSP.
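The coefficient sizes quoted here follow directly from the assumed mass limits; an illustrative check:

```python
# Quick numerical check of the coefficient sizes quoted for the stop
# contribution: m_stop = 650 GeV from the quoted ATLAS/CMS limit, and
# A_t, mu bounded by 3000 GeV as assumed in the text.
m_t = 173.1        # GeV, top pole mass
m_stop = 650.0     # GeV, lower limit on the stop mass
a_t_max = 3000.0   # GeV, assumed upper value for A_t (and mu)

print(m_t**2 / m_stop**2)          # < 0.1, as stated
print(a_t_max * m_t / m_stop**2)   # ~ 1.2, as stated
```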
Finally, the chargino contribution is given by: $$\begin{aligned}
S_{H_{2}^{0},\tilde{\chi}^{\pm}}^{\gamma}=\sqrt{2}g\underset{{\scriptstyle i=1,2}}{\sum}\mbox{Re}\left\{ V_{i1}^{*}U_{i2}^*G_{2}^{\phi_{1}}+V_{i2}^{*}U_{i1}^*G_{2}^{\phi_{2}}\right\} \frac{v}{m_{\chi_{i}^{\pm}}}F_{f}^{S}\left(\tau_{2\tilde{\chi}_{i}}\right)\,,\\
\mbox{with~}\quad G_2^{\phi_1} = \left(\mathcal{U}_{21}-i \sin\beta~ \mathcal{U}_{23}\right)\,,\qquad G_2^{\phi_2} = \left(\mathcal{U}_{22}-i \cos\beta~ \mathcal{U}_{23}\right)\,.\nonumber\label{eq:4.7-1}\end{aligned}$$ Using again the expansion of the chargino mass matrix, Appendix \[App:expand\], we have the expression: $$S_{H_{2}^{0},\tilde{\chi}^{\pm}}^{\gamma}\simeq2.8\left[\cos\beta\frac{M_{W}^{2}}{\mu^{2}}\mathcal{U}_{21}+\frac{M_{W}^{2}}{M_2^{2}}\mathcal{U}_{22}\right]$$ where we have assumed that $m_{\chi_{1}^{\pm}}\simeq M_{2}\ll m_{\chi_{2}^{\pm}}\simeq\mu$, $\sin\beta\simeq1$, $F_{f}^{S}\left(\tau_{H_{2}\chi_{2}^{\pm}}\right)\simeq F_{f}^{S}\left(\tau_{H_{2}\chi_{1}^{\pm}}\right)\simeq0.7$, and neglected $(F_{f}^{S}\left(\tau_{H_{2}\chi_{1}^{\pm}}\right)- F_{f}^{S}\left(\tau_{H_{2}\chi_{2}^{\pm}}\right))/(m_{\chi_{1}^{\pm}}^2-m_{\chi_{2}^{\pm}}^2)$. If we take $M_{W}^{2}/M_{2}^{2}\lesssim0.05$ for $m_{\chi^\pm_1}\gtrsim350$ GeV from LHC limits [@ATLAS-CONF-2013-035; @PAS-SUS-12-022], we have, $$S_{H_{2}^{0},\tilde{\chi}^{\pm}}^{\gamma}\lesssim0.15\left[~\mathcal{U}_{22}+\frac{M_{2}^2}{\mu^2}~\mathcal{U}_{21}\right]$$ and again we can safely neglect the chargino contribution compared with the $W$-boson, top and bottom contributions.
Therefore, in summary, we can safely neglect the charged-Higgs, chargino and sbottom contributions to the two-photon decay width, and we can approximate the scalar amplitude by, $$\begin{aligned}
S_{H_{2}^{0}}^{\gamma} & \simeq & \mathcal{U}_{21}\,\left(-\frac{8.3}{\tan\beta}+\left(-0.025+i\,0.034\right)\,\mbox{Re}\left\{ \frac{\tan\beta}{1+\kappa_{d}\tan\beta}\right\} \right.\nonumber \\
& &\left.\quad\qquad- 0.45\,\left(\frac{m_{\tilde{t}_{2}}^{2}}{m_{\tilde{t}_{1}}^{2}}-1\right)\mbox{Re}\left\{ \frac{\mu m_{t}\mathcal{R}_{11}^{*}\mathcal{R}_{21}}{m_{\tilde{t}_{2}}^{2}}\right\}\right)\, +\nonumber \\&&\mathcal{U}_{22}\, \left(-6.5+0.45\,\left(\frac{m_{\tilde{t}_{2}}^{2}}{m_{\tilde{t}_{1}}^{2}}-1\right)\mbox{Re}\left\{ \frac{A_{t}^{*}m_{t}\mathcal{R}_{11}^{*}\mathcal{R}_{21}}{m_{\tilde{t}_{2}}^{2}}\right\}+0.45\,\left(\frac{m_{t}^{2}\left|\mathcal{R}_{11}\right|^{2}}{m_{\tilde{t}_{1}}^{2}}+\frac{m_{t}^{2}\left|\mathcal{R}_{22}\right|^{2}}{m_{\tilde{t}_{2}}^{2}}\right)\right)+\nonumber\\
&& {\cal U}_{23}\, \left(\left(-0.025+i\,0.034\right)\,\mbox{Im}\left\{ \frac{\kappa_{d}\tan^{2}\beta}{1+\kappa_{d}\tan\beta}\right\} +0.45\,\mbox{Im}\left\{ \frac{\mu m_{t}\mathcal{R}_{11}^{*}\mathcal{R}_{21}}{m_{\tilde{t}_{2}}^{2}}\right\} \right)\,.\end{aligned}$$ Thus, it looks very difficult to obtain a scalar amplitude into two photons significantly larger than the SM value, given that the stop contribution can be, at most, of order one. The same discussion applies to the pseudoscalar amplitude, which receives only fermionic contributions (only top and bottom are relevant) and is thus much smaller than the scalar contribution above. The possibility of large SUSY contributions, as advocated in Refs. [@Carena:2011aa; @Carena:2012gp; @Carena:2013iba], seems closed, at least in the MSSM with $m_{H_2}\simeq 125$ GeV. In particular, large stau contributions would require $\tan \beta \geq 50$, which we show below to be incompatible with the bounds from $H_1,H_3 \to \tau \tau$.
Next, we analyze the Higgs production cross section, presented in Section \[sub:Higgs-production.\]. At the partonic level, this cross section receives contributions from gluon fusion and $b\bar{b}$-fusion.
The $b\bar{b}$–fusion process occurs at tree level at the partonic level and is proportional to the bottom Yukawa coupling. Considering only the main threshold corrections to the bottom couplings, we have, $$\begin{aligned}
\hat{\sigma}_{b\bar{b}\to H_2}&\simeq&\frac{\pi}{6}~\frac{g^{2}m_{b}^{2}}{4M_{W}^{2}}\left(\frac{\tan^{2}\beta}{\left(1+\kappa_{d}\tan\beta\right)^2}\,\left(|{\cal U}_{21}|^{2}+|{\cal U}_{23}|^{2}\right)\right)\nonumber \\&\simeq&6.8\times10^{-5}\,\frac{\tan^{2}\beta}{\left(1+\kappa_{d}\tan\beta\right)^2}\,\left(|{\cal U}_{21}|^{2}+|{\cal U}_{23}|^{2}\right)\,.\end{aligned}$$ This dimensionless partonic cross section must be multiplied by the $b\bar{b}$ luminosity in the proton, $\tau\: d{\cal L}^{b\bar{b}}/d\tau$, for $\tau=m_{H_2}^{2}/s$. Taking $m_{H_2} = 125$ GeV and $\sqrt{s} = 8$ TeV, we have $\tau\: d{\cal L}^{b\bar{b}}/d\tau \simeq 2300$ pb from the MSTW2008 parton distributions at LO. Thus, the $b\bar{b}$ contribution to the $pp$ cross section is: $$\sigma(pp\to H_{2})_{bb} \simeq 0.16\,\frac{\tan^{2}\beta}{(1+\kappa_{d}\tan\beta)^2}\,\left(|{\cal U}_{21}|^{2}+|{\cal U}_{23}|^{2}\right) \mbox{pb}\,.
\label{eq:aprbbfusion}$$
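The two numerical coefficients above can be reproduced directly; the running bottom mass $m_b\sim2.8$ GeV used in this illustrative check is an assumption:

```python
# Illustrative check of the bb-fusion normalisation in Eq. (aprbbfusion):
# (pi/6) g^2 m_b^2 / (4 M_W^2) times the quoted bb luminosity.  The
# running bottom mass m_b ~ 2.8 GeV is an assumption of this check.
from math import pi

m_w, v = 80.4, 246.0            # GeV
g2 = (2.0 * m_w / v) ** 2       # SU(2) gauge coupling squared, g = 2 M_W / v
m_b = 2.8                       # GeV, assumed running bottom mass at m_H

prefactor = (pi / 6.0) * g2 * m_b**2 / (4.0 * m_w**2)
print(prefactor)                # ~ 6.8e-5, the partonic coefficient in the text

lum_bb = 2300.0                 # pb, tau dL^bb/dtau at m_H2 = 125 GeV, sqrt(s) = 8 TeV
print(prefactor * lum_bb)       # ~ 0.16 pb, the coefficient of Eq. (aprbbfusion)
```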
On the other hand, the gluon-fusion cross section is a loop-induced process, $$\hat{\sigma}_{gg\rightarrow H_2}^{LO} = \frac{\alpha_{s}^{2}\left(M_{H_2}\right)}{256\pi} ~\frac{m_{H_2}^2}{\upsilon^2}\left[\left|S^g_2\right|^2+\left|P^g_2 \right|^{2}\right] \simeq 4 \times 10^{-6} \left[\left|S^g_2\right|^2+\left|P^g_2 \right|^{2}\right]$$ where the scalar coupling, $S^g_2$, gets contributions from both quarks and squarks, while the pseudoscalar one, $P^g_2$, receives contributions only from quarks. Regarding the squark contributions, they can easily be obtained from Eqs. (\[eq:sbottomgl\],\[eq:stopgl\]), taking into account that, for $J^\gamma_{\tilde f}=1$, $S_{2,\tilde b}^{g}=3/2~S_{2,\tilde b}^{\gamma}$ and $S_{2,\tilde t}^{g}=3/8~S_{2,\tilde t}^{\gamma}$. Therefore, it is easy to see that, analogously to the photonic amplitudes, we can safely neglect the sbottom and stop contributions to gluon-fusion production. Thus, the scalar and pseudoscalar contributions to gluon-fusion production can be approximated by, $$\begin{aligned}
S_{2,b+t}^{g}&\simeq&0.7 \,\mathcal{U}_{22} + \left(-0.04+i\,0.05\right)\,\left[\mbox{Re}\left\{ \frac{\tan\beta}{1+\kappa_{d}\tan\beta}\right\} \,\mathcal{U}_{21}+\,\mbox{Im}\left\{ \frac{\kappa_{d}\tan^{2}\beta}{1+\kappa_{d}\tan\beta}\right\} {\cal U}_{23}\right];\nonumber\\ \\
P_{2,b+t}^{g}&\simeq& \left(-0.04+i\,0.05\right)\,\left[\mbox{Im}\left\{ \frac{\kappa_d\tan\beta}{1+\kappa_{d}\tan\beta}\right\} \,\mathcal{U}_{22}+\,\mbox{Im}\left\{ \frac{\kappa_{d}\tan^{2}\beta}{1+\kappa_{d}\tan\beta}\right\} {\cal U}_{21} \right] \nonumber\\
&+&\left[\left(-0.04+i\,0.05\right)\mbox{Re}\left\{ \frac{\tan\beta}{1+\kappa_{d}\tan\beta}\right\}-\frac{1}{\tan\beta}\right]\,\mathcal{U}_{23};\end{aligned}$$
The gluon-fusion contribution to the $pp$ cross section is obtained by multiplying the partonic cross section by the gluon luminosity, $ \tau_{H_2}~d{\cal L}_{LO}^{gg}/d\tau_{H_2} \simeq 3 \times 10^6$ pb, and by the K-factor, which we take as $K\simeq2.2$, corresponding to low $\tan \beta$. Then, with $\kappa_d$ real for simplicity, the gluon-fusion contribution to the $pp$ cross section would be, $$\begin{aligned}
\label{eq:aprcrosssect}
\sigma(pp\to H_{2})_{gg}& \simeq& 27.5\, \left[\left|S^g_2\right|^2+\left|P^g_2 \right|^{2}\right]~\mbox{pb} \simeq\left[ 13\, {\cal U}_{22}^2 -\frac{1.5 \tan\beta}{1+\kappa_{d}\tan\beta}\,{\cal U}_{21}{\cal U}_{22}\right.\\&+&\left. \frac{0.1 \tan^2\beta}{\left(1+\kappa_{d}\tan\beta\right)^2}\,{\cal U}_{21}^2 + \left(\frac{2}{\left(1+\kappa_{d}\tan\beta\right)}+\frac{0.1 \tan^2\beta}{\left(1+\kappa_{d}\tan\beta\right)^2}+\frac{27}{\tan^2\beta}\right)\,{\cal U}_{23}^2\right]~\mbox{pb} \nonumber\,.\end{aligned}$$ This equation, with the approximate values of $S^g_2, P^g_2$, is compared with the full result in Figure \[fig:siggg-H2\]. We can see that this approximate expression reproduces satisfactorily the gluon-fusion contribution to $H_2$ production in the whole explored region.
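The overall normalisation can be cross-checked numerically; $\alpha_s(125~\mbox{GeV})\simeq0.113$ is an assumed input of this illustrative check:

```python
# Illustrative consistency check of the gluon-fusion normalisation in
# Eq. (aprcrosssect): partonic prefactor x gluon luminosity x K-factor.
# alpha_s(125 GeV) ~ 0.113 is an assumed input of this check.
from math import pi

alpha_s, m_h, v = 0.113, 125.0, 246.0
prefactor = alpha_s**2 / (256.0 * pi) * (m_h / v) ** 2
print(prefactor)                 # ~ 4e-6, the partonic coefficient quoted above

lum_gg, k_factor = 3.0e6, 2.2    # pb and K ~ 2.2, as quoted
norm = prefactor * lum_gg * k_factor
print(norm)                      # ~ 27 pb, compatible with the 27.5 pb above
print(norm * 0.7**2)             # ~ 13 pb: the U_22^2 (top-loop) coefficient
```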
![Comparison of the approximation to $\sigma(pp\to H_{2})_{gg}$ in Eq. (\[eq:aprcrosssect\]) with the full result as a function of $\tan \beta$ \[fig:siggg-H2\].](siggh2app)
From this equation, we see that gluon-fusion production is dominated by the top-quark contribution for ${\cal U}_{21},{\cal U}_{22} = O(1)$ up to $\tan \beta \sim 10$. Moreover, the SM contribution corresponds simply to taking $\kappa_d=0$, $\tan \beta=1$, ${\cal U}_{21}={\cal U}_{22} = 1$ and ${\cal U}_{23}=0$; therefore, the gluon-fusion cross section will typically be smaller than the SM cross section for medium-low $\tan \beta$. Also, comparing Eqs. (\[eq:aprbbfusion\]) and (\[eq:aprcrosssect\]), we see that gluon fusion still dominates over $b\bar b$–fusion except for large $\tan \beta$ or small ${\cal U}_{22}$.
Finally, we have to check the total width, $\Gamma_{H_{2}}$. The main decay channels for $m_{H_{2}}\simeq125$ GeV are $H_{2}\to b\bar{b}$, $H_{2}\to WW^{*}$ and $H_{2}\to\tau\tau$ ($H_{2}\to gg$ can be of the same order as $H_{2}\to\tau\tau$ in some cases but, being comparatively small with respect to $b\bar{b}$ and $WW$, it is not necessary to consider it in the following discussion). The decay width is usually dominated by the $b\bar{b}$ channel, which can be enhanced by $\tan\beta$ factors with respect to the SM width (as can the $\tau\tau$ channel). The main contribution to the decay width to $b\bar{b}$ is captured by the tree-level Higgs–bottom couplings in the limit $\kappa_{d}\to0$ (although threshold corrections are important and always taken into account in our numerical analysis), $$\Gamma_{H_{2}}\simeq\frac{g^{2}m_{H_{2}}}{32\pi M_{W}^{2}}\,\left[\tan^{2}\beta\,\left(|{\cal U}_{21}|^{2}+|{\cal U}_{23}|^{2}\right)\left(3m_{b}^{2}+m_{\tau}^{2}\right)+\left(\mathcal{U}_{22}+\frac{\mathcal{U}_{21}}{\tan\beta}\right)^{2}m_{H_{2}}^{2}I_{PS}\right]\,,\label{eq:totwidth}$$ where $I_{PS}\simeq6.7\times10^{-4}$ represents the phase-space integral in the $H_2\to W W^*$ decay width, as can be found in Ref. [@Lee:2003nta] for $m_{H}\simeq125$ GeV. This must be compared with the SM decay width, which would correspond to the usual MSSM decoupling limit if we replace $H_{1}\leftrightarrow H_{2}$: $\tan\beta\to 1$, ${\cal U}_{21},{\cal U}_{22}\to1$ and ${\cal U}_{23}=0$. This implies that for sizeable ${\cal U}_{21},{\cal U}_{23}>\tan^{-1}\beta$, the total width will be much larger than the SM width. Then, taking into account that we have shown that $\Gamma_{H_{2}\to\gamma\gamma}\simeq\Gamma_{h\to\gamma\gamma}^{SM}$, we conclude that, for ${\cal U}_{22}\leq 1$, the diphoton branching ratio will be smaller than the SM one.
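A rough numerical sketch of Eq. (\[eq:totwidth\]) illustrates this balance; the mixing choice ${\cal U}_{22}\sim1$, ${\cal U}_{21}\sim{\cal U}_{23}\sim1/\tan\beta$, the running $m_b\sim2.8$ GeV and $\kappa_d\to0$ are all assumptions of the sketch:

```python
# Rough numerical sketch of Eq. (totwidth) for m_H2 = 125 GeV with an
# illustrative mixing pattern U_22 ~ 1, U_21 ~ U_23 ~ 1/tan(beta).
# Running m_b ~ 2.8 GeV and kappa_d -> 0 are assumptions of this sketch.
from math import pi

m_w, v, m_h = 80.4, 246.0, 125.0     # GeV
g2 = (2.0 * m_w / v) ** 2
m_b, m_tau = 2.8, 1.777              # GeV
i_ps = 6.7e-4                        # WW* phase-space integral

tan_b = 10.0
u21 = u23 = 1.0 / tan_b
u22 = 1.0

width = (g2 * m_h / (32.0 * pi * m_w**2)) * (
    tan_b**2 * (u21**2 + u23**2) * (3.0 * m_b**2 + m_tau**2)
    + (u22 + u21 / tan_b) ** 2 * m_h**2 * i_ps)
print(width * 1e3)   # total width in MeV: a few MeV, i.e. SM-like
```

For this mixing pattern the $\tan^2\beta$ enhancement of the fermionic widths is cancelled and the total width stays at the few-MeV level.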
The only way to keep a large branching ratio is to take ${\cal U}_{21},{\cal U}_{23}\lesssim\tan^{-1}\beta$, in which case the total width is reduced while keeping $\Gamma_{H_{2}\to\gamma\gamma}$ similar to the SM value. On the other hand, we have seen that the $H_2$ production cross section is typically smaller than in the SM unless either ${\cal U}_{22}\simeq 1$ and $H_2$ is produced through the gluon-fusion process, or $\tan \beta \gtrsim 20$ with sizeable ${\cal U}_{21},{\cal U}_{23}$ and the production is dominated by $b\bar b$ fusion. Even in this last case, $b\bar b$ fusion, the $\tan \beta$ enhancement of the production cross section is exactly compensated by the suppression of the $H_2 \to \gamma \gamma$ branching ratio. For gluon fusion, there is no $\tan \beta$ enhancement, and thus in both cases the $\gamma\gamma$-production cross section is smaller than the SM one. Therefore, we arrive at the conclusion that the only way to increase the $\gamma\gamma$-production cross section to reproduce the LHC results in our scenario is to [**decrease the total width by suppressing the $b$-quark and $\tau$-lepton decay widths**]{}. This implies having a second Higgs, $H_{2}$, that is predominantly $H_{u}^{0}$, so that we decrease the couplings to these fermions and consequently increase the two-photon branching ratio. In terms of the mixing-matrix elements, this condition means: $$\begin{aligned}
\mathcal{U}_{22}\sim1, & & \mathcal{U}_{21}\simeq\mathcal{U}_{23}\leq\frac{1}{\tan\beta}\ll\mathcal{U}_{22}\label{eq:Hbounds}\end{aligned}$$
### Tau-tau cross section.
The above analysis has led us to the conclusion that, to reproduce the $\gamma\gamma$-production cross section, we need the second-lightest Higgs to be almost purely up-type. As a consequence, $H_{2}$ nearly decouples from the $\tau$ leptons, and it is then unavoidable that the other neutral Higgses inherit large down-type components, thus increasing their decays into two $\tau$ leptons. Once more, to compute the $\tau\tau$-production cross section through a Higgs, we must compute $\sigma(pp\to H_{i})$, $\Gamma(H_{i}\to\tau\tau)$ and $\Gamma_{H_{i}}$.
The decay width $H_{i}\to\tau\tau$ is given by the following equation: $$\Gamma_{H_{i}\rightarrow \tau\tau}= \frac{g_{\tau\tau}^{2}m_{H_{i}}\beta_{\tau}}{8\pi}\left(\beta_{\tau}^{2}|g_{\tau,i}^{S}|^{2}+|g_{\tau,i}^{P}|^{2}\right)\,,\label{eq:3.3.1-2}$$ where $\tau_{i}=m_{\tau}^{2}/m_{H_{i}}^{2}$ and $\beta_{\tau}=\sqrt{1-4\tau_{i}}$. The values of the $\tau$ scalar and pseudoscalar couplings are given by: $$g_{\tau i}^{S}\simeq\frac{\tan\beta}{1+\epsilon_{\tau}\tan\beta}~\mathcal{U}_{i1}+\frac{\epsilon_{\tau}\tan\beta}{1+\epsilon_{\tau}\tan\beta}~\mathcal{U}_{i2};\qquad g_{\tau i}^{P}\simeq-\frac{\tan\beta-\epsilon_{\tau}}{1+\epsilon_{\tau}\tan\beta}~\mathcal{U}_{i3}$$ In this case $\epsilon_{\tau}\simeq g^{2}/16\pi^{2}~(\mu M_{1}/m_{\tilde{\tau}_{2}}^{2})\simeq 2\times 10^{-3}$, which we take to be real. Then we have $\epsilon_{\tau}\simeq\epsilon_{b}/20$, which is only a sub-leading correction in this case and can be safely neglected. Therefore we get, for $i=1,3$, $$\Gamma_{i,\tau\tau}\simeq\frac{m_{H_{i}}}{8\pi}\left(\frac{gm_{\tau}}{2M_{W}}\right)^{2}\left[\tan^{2}\beta\left(\left|\mathcal{U}_{i1}\right|^{2}+\left|\mathcal{U}_{i3}\right|^{2}\right)\right]\simeq\frac{g^{2}m_{H_{i}}m_{\tau}^{2}}{32\pi M_{W}^{2}}\tan^{2}\beta\,,\label{4.1.3-3}$$ where we used that ${\cal U}_{22}\simeq1$ and ${\cal U}_{12},{\cal U}_{32}\ll1$.
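An illustrative evaluation of Eq. (\[4.1.3-3\]) makes the $\tan^2\beta$ enhancement explicit:

```python
# Illustrative evaluation of the approximate tau-tau width of
# Eq. (4.1.3-3): Gamma ~ g^2 m_H m_tau^2 tan^2(beta) / (32 pi M_W^2),
# valid for a down-type Higgs with U_i1^2 + U_i3^2 ~ 1.
from math import pi

m_w, v = 80.4, 246.0       # GeV
g2 = (2.0 * m_w / v) ** 2  # g = 2 M_W / v
m_tau = 1.777              # GeV

def gamma_tautau(m_h, tan_b):
    """Approximate Gamma(H_i -> tau tau) in GeV."""
    return g2 * m_h * m_tau**2 * tan_b**2 / (32.0 * pi * m_w**2)

# The tan^2(beta) scaling is the key feature: a factor 100 between
# tan(beta) = 2 and tan(beta) = 20.
print(gamma_tautau(110.0, 20.0))   # GeV, ~ 0.09 GeV
```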
Now we need the production cross section for $H_{1}$ and $H_{3}$. We can use Eqs. (\[eq:aprbbfusion\]) and (\[eq:aprcrosssect\]) with the replacement $\mathcal{U}_{2j} \to \mathcal{U}_{ij}$. Then, using $\left|\mathcal{U}_{i1}\right|^{2}+\left|\mathcal{U}_{i3}\right|^{2}\simeq1$ and $\mathcal{U}_{i2}\simeq1/\tan\beta$, we have, $$\begin{aligned}
\sigma(pp\to H_{i})_{gg}& \simeq& 27.5\, \left[\left|S^g_i\right|^2+\left|P^g_i \right|^{2}\right]~\mbox{pb} \simeq\left[ 13\, {\cal U}_{i2}^2 -\frac{1.5 \tan\beta}{1+\kappa_{d}\tan\beta}\,{\cal U}_{i1}{\cal U}_{i2}\right.\\&+&\left. \frac{0.1 \tan^2\beta}{\left(1+\kappa_{d}\tan\beta\right)^2}\,{\cal U}_{i1}^2 + \left(\frac{2}{\left(1+\kappa_{d}\tan\beta\right)}+\frac{0.1 \tan^2\beta}{\left(1+\kappa_{d}\tan\beta\right)^2}+\frac{27.5}{\tan^2\beta}\right)\,{\cal U}_{i3}^2\right]~\mbox{pb}\ \nonumber\\&\simeq& \left[\frac{0.1 \tan^2\beta}{\left(1+\kappa_{d}\tan\beta\right)^2} + \frac{13 + 27.5\, {\cal U}_{i3}^2}{\tan^2 \beta} +\frac{ 2\,{\cal U}_{i3}^2 - 1.5\,{\cal U}_{i1} }{1+\kappa_{d}\tan\beta}\right] ~\mbox{pb}\,, \nonumber\\
\label{eq:aprcrosssecHi}
\sigma(pp\to H_{i})_{bb} &\simeq& 0.16\,\frac{\tan^{2}\beta}{(1+\kappa_{d}\tan\beta)^2}\,\left(|{\cal U}_{i1}|^{2}+|{\cal U}_{i3}|^{2}\right) \mbox{pb} ~\simeq~ 0.16\,\frac{\tan^{2}\beta}{(1+\kappa_{d}\tan\beta)^2}~\mbox{pb} \,.
\label{eq:aprbbfusHi}\end{aligned}$$ Therefore, we see that for $\tan \beta \gtrsim 5$ in our scenario, always with ${\cal U}_{i2}\lesssim 1/\tan \beta$, the bottom contribution to gluon fusion is larger than the top contribution and only slightly smaller than $b\bar b$–fusion. We can then approximate the total production cross section for $H_{1,3}$,
\sigma(pp\to H_{i}) &\simeq& \left[0.16~\left(\frac{ \tau_{H_i}~d{\cal L}^{bb}/d\tau_{H_i}}{2300 ~\mbox{pb}}\right) + 0.11 ~\left(\frac{ \tau_{H_i}~d{\cal L}^{gg}_{LO}/d\tau_{H_i}}{3 \times 10^6 ~\mbox{pb}}\right)\right]\,\frac{\tan^{2}\beta}{(1+\kappa_{d}\tan\beta)^2}~\mbox{pb} \,.\nonumber \\
\label{eq:aprbbfusHifin}\end{aligned}$$
The last ingredient we need is the total width of the $H_{i}$. We can still consider that the dominant contributions come from $b\bar{b}$, $\tau\tau$ and $WW^{*}$ for Higgs masses below 160 GeV; for masses above 160 GeV, the width is usually dominated by real $W$ production and $ZZ$ or $ZZ^{*}$. Therefore, below 160 GeV, the total width can be directly read from Eq. (\[eq:totwidth\]) replacing $H_{2}\to H_{i}$ and the mixings ${\cal U}_{2a}\to{\cal U}_{ia}$. For Higgs masses above 160 GeV, always below 200 GeV in our scenario, the total width will be larger than Eq. (\[eq:totwidth\]), and thus taking only $b\bar{b}$, $\tau\tau$ and $WW^{*}$ we obtain a lower limit on $\Gamma_i$. In the case of $H_{1}$ and $H_{3}$, we have ${\cal U}_{i2}\ll1$ and $\left|\mathcal{U}_{i1}\right|^{2}+\left|\mathcal{U}_{i3}\right|^{2}\simeq1$.
Then the total width is, $$\Gamma_{i}\gtrsim\frac{g^{2}m_{H_{i}}}{32\pi M_{W}^{2}}\left(\frac{3m_{b}^{2}}{1+\kappa_{d}\tan\beta}+m_{\tau}^{2}\right)\tan^{2}\beta\,,$$ and thus the branching ratio is, $$\mbox{BR}\left(H_{i}\to\tau\tau\right)\lesssim\frac{m_{\tau}^{2}\left(1+\kappa_{d}\tan\beta\right)^2}{3m_{b}^{2}+m_{\tau}^{2}\left(1+\kappa_{d}\tan\beta\right)^2}$$ Thus, for the $\tau\tau$-production cross section of $H_{1}$ and $H_{3}$ we have, $$\begin{aligned}
\label{eq:pphitauaprox}
&\sigma(pp &\overset{H_i}{\longrightarrow} \tau \tau)\lesssim\frac{\tan^{2}\beta}{\left(1+\kappa_{d}\tan\beta\right)^2}\,\frac{m_{\tau}^{2}\left(1+\kappa_{d}\tan\beta\right)^2}{3m_{b}^{2}+m_{\tau}^{2}\left(1+\kappa_{d}\tan\beta\right)^2} \\[.2cm]
&&\qquad\qquad~\left[0.16\left(\frac{ \tau_{H_i}~d{\cal L}^{bb}/d\tau_{H_i}}{2300 ~\mbox{pb}}\right) + 0.11 \left(\frac{ \tau_{H_i}~d{\cal L}^{gg}_{LO}/d\tau_{H_i}}{3 \times 10^6 ~\mbox{pb}}\right)\right]~\mbox{pb} \nonumber\\[.2cm]
&\simeq&\frac{\tan^{2}\beta}{8.4+2\kappa_{d}\tan\beta+\kappa_{d}^2\tan^2\beta}~\left[0.16\left(\frac{ \tau_{H_i}~d{\cal L}^{bb}/d\tau_{H_i}}{2300 ~\mbox{pb}}\right) + 0.11 \left(\frac{ \tau_{H_i}~d{\cal L}^{gg}_{LO}/d\tau_{H_i}}{3 \times 10^6 ~\mbox{pb}}\right)\right]~\mbox{pb} \nonumber\end{aligned}$$ which should be compared with the SM cross section, $\sigma(pp\to H \to \tau \tau) \simeq 1.4~\mbox{pb}$ for $m_H \simeq 110$ GeV. The comparison of this approximate expression with the full result is shown in Figure \[fig:ppHitau\]. In fact, this approximate expression works very well for $m_{H_1}=110$ GeV and is slightly larger than the exact result for $m_{H_3}=155$ GeV. This is because we did not include the $H_i \to W W^*$ channel in Eq. (\[eq:pphitauaprox\]), and this channel is important for $H_3$, which means that the approximate branching ratio is larger than the one in the full expression. Nevertheless, we can safely use this expression to understand the qualitative behaviour of this process.
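The size of this bound is easy to illustrate numerically; the running $m_b\sim2.8$ GeV, $\kappa_d=0$ and unit luminosity ratios are assumptions of this sketch:

```python
# Sketch of the upper bound of Eq. (pphitauaprox) for kappa_d = 0 with
# both luminosity ratios set to 1.  The running m_b ~ 2.8 GeV is an
# assumption reproducing the "8.4" of the text.
m_b, m_tau = 2.8, 1.777                   # GeV

denom = (3.0 * m_b**2 + m_tau**2) / m_tau**2
print(denom)                               # ~ 8.4, the constant in the denominator

def sigma_pp_tautau_bound(tan_b):
    """Upper bound on sigma(pp -> H_i -> tau tau) in pb for kappa_d = 0."""
    return tan_b**2 / denom * (0.16 + 0.11)

print(sigma_pp_tautau_bound(10.0))         # ~ 3 pb, already above the SM ~ 1.4 pb
```

Already at moderate $\tan\beta$ the bound exceeds the SM cross section, which is why the $\tau\tau$ searches are so constraining in this regime.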
Next, we combine the bounds on the two-photon production cross section and the $\tau\tau$ production cross section in our model with medium–large $\tan \beta$. In Figure \[fig:tauscatter\] we present the $\tau\tau$ production cross sections at the LHC for $m_{H_1} \simeq 110$ GeV and $m_{H_3} \simeq 160$ GeV with (blue squares) or without (red circles) fulfilling the requirement $0.75\leq\mu_{\gamma\gamma}^{\rm{LHC}}\leq1.55$. The green line is the CMS limit on the $\tau\tau$ production cross section for Higgs masses below 150 GeV, and the green points are those where, in addition, the $\tau\tau$ cross-section limit on the observed Higgs, $H_2$ in our scenario, at a mass $m_{H_2} \simeq 125$ GeV is also fulfilled. Even though we fixed $m_{H_1}= 110$ GeV in this plot, we have checked that the situation does not change for $m_{H_1}= 100$ GeV or $m_{H_1}= 120$ GeV.
![\[fig:tauscatter\]$\tau\tau$ production cross-section at $m_{H_1}=110$ GeV as a function of $\tan \beta$, with the CMS limit on $\tau\tau$ production in green.](taultbscatter)
Notice that the present constraints on heavy Higgses from $\sigma(pp\to H_3 \to \tau \tau)$ for masses $150~{\rm GeV} \leq m_{H_3} \lesssim 200~{\rm GeV}$ can only eliminate the region $\tan \beta \gtrsim 25$, but we expect future analyses of the stored data to reduce this parameter space significantly [@privateFiorini].
Hence, we see that there are no points consistent with the LHC constraints on $\sigma(pp\to H_1 \to \tau \tau)$ for $\tan \beta \geq 7.8$ and $100~\mbox{GeV}<m_{H_1}<125$ GeV; moreover, as we will see in the next section, all the surviving points are inconsistent with BR($B\to X_s \gamma$).
Low $\tan\beta$ regime.
-----------------------
As we have just seen, LHC constraints on $\sigma(pp\to H_1 \to \tau \tau)$ rule out the possibility of $m_{H_2}\simeq 125$ GeV for $\tan \beta \geq 7.8$. The situation for $\tan\beta\lesssim8$, however, is very different: for low $\tan \beta$, it is much easier to satisfy the constraint from the $\gamma\gamma$-signal strength at LHC, $\mu_{\gamma\gamma}\gtrsim0.5$.
Analogously to the discussion in the case of medium-large $\tan\beta$, we can see that the $\gamma\gamma$-decay width for low $\tan \beta$ remains of the same order as the SM one, $\Gamma_{H_{2}\to\gamma\gamma}\simeq\Gamma_{h\to\gamma\gamma}^{SM}$. The production cross section is typically of the order of the SM one, as the $b\bar{b}$-fusion process and the $b$-quark contribution to gluon fusion, being proportional to $\tan\beta$, are now smaller, while the top contribution is very close to the SM one for ${\cal U}_{22}\simeq O(1)$. In fact, the total decay width is still larger than the SM value if ${\cal U}_{21,23}$ are sizeable, as the $b\bar b$ and $\tau \tau$ widths are enhanced by $\tan^2 \beta$. So, the same requirements on Higgs mixings, Eq. (\[eq:Hbounds\]), hold true now, although the required suppression is correspondingly milder for the smaller $\tan \beta$ values. On the other hand, the $\tau\tau$ production cross section through the three neutral Higgses remains an important constraint, but it is much easier to satisfy for low $\tan\beta$ values, as we can see in Fig. \[fig:tauscatter\].
However, in our scenario, we have a rather light charged Higgs, $m_{H^{\pm}}\lesssim220$ GeV, and the main constraint for $\tan\beta\lesssim8$ now comes from the $\mbox{BR}(B\rightarrow X_{s}\gamma)$.
### Constraints from BR($B\to X_{s}\gamma$)
The decay $B\to X_{s}\gamma$ provides an important constraint on light charged Higgs particles such as the one present in our scenario. Although the charged Higgs always interferes constructively with the SM $W$-boson contribution to the Wilson coefficients, in the MSSM this contribution can be compensated by an opposite-sign contribution from the stop-chargino loop if $\mbox{Re}\left(\mu A_{t}\right)$ is negative. The charged Higgs contribution is given by Eq. (\[eq:C7H\]). The size of ${\cal C}_7^{H^\pm}$ can be approximated by the dominant term, $f_{7}^{(2)}(m^2_{t}/m_{H^\pm}^2)$, $${\cal C}_{7}^{H^{\pm}}\simeq\frac{f_{7}^{(2)}(y_{t})}{1+\delta h_{d}/h_{d}+\Delta h_{d}/h_{d}\tan\beta}\,, \label{eq:C7H2}$$ and for $m_{H^\pm} \in [150, 200]$ GeV we get $ f_{7}^{(2)}(y_{t})\in [-0.22,-0.18]$. Incidentally, this charged Higgs contribution decreases with $\tan \beta$, and thus it is more difficult to satisfy the constraints at low $\tan \beta$ unless it is compensated by a contribution of opposite sign. For the stop-chargino contribution, using Eq. (\[C7charlim\]), we have $$\begin{aligned}
\mathcal{C}_{7,8}^{\chi^{\pm}}&\simeq&-\frac{M_{W}^{2}}{M_{2}^2}~\frac{M_2}{\mu }~\tan\beta\left(f_{7,8}^{(3)}\left(x_{\tilde{q}\tilde{\chi}_{1}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}}\right)\right)\nonumber \\&-&\frac{A_{t}}{\mu}\tan\beta\,\frac{M_{W}^{2}}{M_{2}^{2}}\frac{m_{t}^{2}}{m_{\tilde{t}_{1}}^{2}-m_{\tilde{t}_{2}}^{2}}\:\left(f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{2}\tilde{\chi}_{1}^{\pm}}\right)\right)\end{aligned}$$ Taking $f_{7}^{(3)}\left(x\simeq 1\right)\simeq 0.44$ and imposing the limits on stop and chargino masses, $m_{\tilde t_1} \geq 650$ GeV and $m_{\chi^\pm} \geq 350$ GeV, we estimate $\mathcal{C}_{7}^{\chi^{\pm}}\simeq 0.02~ M_2/\mu~ \tan \beta \ll{\cal C}_{7}^{H^{\pm}}$. Thus it looks very difficult to compensate the charged Higgs contribution for low $\tan\beta$, and this is confirmed by the numerical analysis.
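To make the comparison explicit, one can put in representative numbers (an illustration only, with the threshold corrections in Eq. (\[eq:C7H2\]) neglected): for $m_{H^\pm}\simeq 175$ GeV the charged-Higgs contribution is $\mathcal{C}_{7}^{H^{\pm}}\simeq f_{7}^{(2)}(y_{t})\simeq -0.20$, while for $\tan\beta = 5$ and $M_2 \simeq \mu$ the chargino estimate above gives $$\mathcal{C}_{7}^{\chi^{\pm}} \simeq 0.02 \times 5 \simeq 0.1\,, \qquad \left|\frac{\mathcal{C}_{7}^{\chi^{\pm}}}{\mathcal{C}_{7}^{H^{\pm}}}\right| \simeq 0.5\,,$$ so that even in this favourable configuration the chargino loop can cancel at most half of the charged-Higgs contribution, and the cancellation deteriorates further as $\tan\beta$ decreases.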
![\[fig:bsgammares.\]Branching ratio of the $B\to X_s \gamma$ decay as a function of $\tan \beta$. Blue squares fulfil the $\mu^{\rm LHC}_{\gamma \gamma}$ and $\sigma_{H_i\tau\tau}/\sigma_{\rm SM}$ constraints, as explained in the text. Green and yellow regions are the one- and two-$\sigma$ experimentally allowed regions.](BRbsglarge)
In Figure \[fig:bsgammares.\], we present the obtained BR($B\to X_s \gamma$): the blue squares fulfil the requirements $0.75 \leq \mu^{\rm LHC}_{\gamma\gamma}\leq 1.55$, $\sigma_{H_1\tau\tau}/\sigma_{\rm SM}\leq 1.8$ and $\sigma_{H_2\tau\tau}/\sigma_{\rm SM}\leq 1.8$, while the red dots violate at least one of these requirements. The experimentally allowed regions at the one-$\sigma$ and two-$\sigma$ level are shown in green and yellow, respectively [^7]. Note in passing that the reduction of the BR with $\tan \beta$ is mainly due to the reduction of the charged Higgs contribution, as shown in Eq. (\[eq:C7H2\]), and not to the negative interference with the chargino diagram.
Therefore, the only remaining option is a light stop with a small mass difference with respect to the lightest neutralino, which could have escaped detection so far at LHC. To explore this possibility numerically, we select the lightest stop mass in the range $m_{\chi^0_1}\leq m_{\tilde t_1} \leq m_t + m_{\chi^0_1}$. The result is shown in Fig. \[fig:bsgammalstop\], where we again plot BR($B\to X_s \gamma$) as a function of $\tan \beta$.
![\[fig:bsgammalstop\]Branching ratio of the $B\to X_s \gamma$ decay as a function of $\tan \beta$, for $m_{\tilde t_1}\leq 650$ GeV and $m_{\chi^0_1}\leq m_{\tilde t_1} \leq m_t + m_{\chi^0_1}$. The color coding is the same as in Fig. \[fig:bsgammares.\].](BRbsglight)
Now we can see that the range of BR($B\to X_s \gamma$) for a given $\tan \beta$ has decreased, as expected, due to the possible destructive interference of the stop-chargino diagram. Nevertheless, there are no points allowed by the collider constraints that reach the two-$\sigma$ allowed region[^8].
As a by-product, we can already see that it will be very difficult, if not impossible, to accommodate two sizeable Higgs-like peaks in the $\gamma\gamma$ production cross section, as recently reported by the CMS collaboration [@CMS-PAS-HIG-13-016], within an MSSM context. The CMS analysis of an integrated luminosity of 5.1 (19.6) fb$^{-1}$ at a center-of-mass energy of 7 (8) TeV reveals a clear excess near $m_H=136.5$ GeV, aside from the already-discovered 125–126 GeV Higgs boson, with a local significance of 2.73 $\sigma$ for this extra peak when combining the data from vector-boson fusion and vector-boson associated production (each of which shows the excess individually).
As we have shown in this work, the 125 GeV Higgs found at the LHC ought to be the lightest; this new resonance, despite its low mass, would therefore have to be the second-lightest Higgs, meaning that the third neutral Higgs (and its charged partner) should be found nearby. This can be easily seen following our line of reasoning in section \[sec:model\], where we obtain $m_{H_3}< 180$ GeV and $m_{H^+}< 200$ GeV. However, to reproduce the observed signal strength in $H_1 \longrightarrow \gamma \gamma $ of the $\sim126$ GeV peak for medium–large $\tan \beta$, we must force all the pseudoscalar and down-type content out of the lightest state. In this case, we have ${\cal U}_{12} \approx 1$ and ${\cal U}_{11},{\cal U}_{13} \ll 1$, so that the two heavier Higgses necessarily couple, with $\tan \beta$-enhancement, to down-type fermions and their branching ratios to $\gamma\gamma$ are strongly suppressed. At the same time, the $H_i \longrightarrow \tau \tau $ channel, for $i=2,3$, is $\propto (U_{i1}^2 + U_{i3}^2) \approx (1 -U_{i2}^2) \approx U_{12}^2\simeq 1$, so that any MSSM setting would predict $H_i \longrightarrow \tau \tau $ at a level that is already excluded [@Aad:2012mea; @CMS-PAS-HIG-13-004; @Aad:2012yfa].
The only possible escape from this situation would be to stay in the (very) low $\tan \beta$ region, but then, given the low mass of the charged Higgs, the constraints from BR($B\to X_s \gamma$) completely eliminate this possibility. Therefore, we cannot see any way to accommodate two Higgs peaks in the $\gamma\gamma$ spectrum with a signal strength of the order of the SM one. Nevertheless, this possibility will be fully explored in a subsequent paper [@WIP].
Conclusions. {#sec:conclu}
============
In this work we have investigated the possibility that the Higgs found at LHC with a mass $m_H\sim125$ GeV is not the lightest but the second-lightest Higgs in an MSSM context, with the actual lightest Higgs having escaped detection due to its pseudoscalar and/or down-type content. In this scheme, such a content simultaneously suppresses its couplings to gauge bosons and up-type quarks and paves the way to evade LEP constraints.
Although similar studies, with previous LHC constraints, are already present in the literature, most of them proceed through giant scans of the model’s parameter space and the subsequent analysis of the scanning results. Our approach in this work has been different: we have chosen to study analytically, with simple expressions under reasonable approximations, three or four key phenomenological signatures, including the two-photon signal strength and the $\tau\tau$ production cross sections at LHC and the indirect constraints from BR$(B\to X_s \gamma)$. To the best of our knowledge, this is the first study carried out in this way in an MSSM context using the LHC data. Our approach has the advantage that it can rule out the model altogether without the risk of having missed a region where unexpected cancellations or combinations could take place.
This analysis is carried out in a completely generic MSSM, in terms of SUSY parameters at the electroweak scale, such that it encloses all possible MSSM setups. To be as general as possible, we have allowed for the presence of CP-violating phases in the Higgs potential, such that the three neutral-Higgs eigenstates become admixtures with no definite CP parity. Our study starts with the $\gamma \gamma$ signal observed at LHC at $m_H\simeq 125$ GeV. The experimental results show a signal slightly larger than or of the order of the SM expectations, and this is a strong constraint on models with extended Higgs sectors. We have shown that in the MSSM with $m_{H_2}\simeq 125$ GeV the width $\Gamma(H_2 \to \gamma \gamma)$ cannot be substantially modified from its SM value. On the other hand, the total width of $H_2$ tends to be significantly larger if the down-type or pseudoscalar components of $H_2$ are sizeable. Simply requiring that BR$(H_2 \to \gamma \gamma)$ or, more precisely, $\sigma(pp\to H_2) \times \mbox{BR}(H_2 \to \gamma \gamma)$ is not much smaller than in the SM severely restricts the possible mixings in the Higgs sector and determines the bottom and $\tau$ decay rates of the three Higgses.
Next, we have analyzed the $\tau\tau$ production cross sections for the three Higgs eigenstates, splitting the parameter space into two regions of large and small $\tan \beta$, with the dividing line at $\tan \beta \simeq 8$. We have shown that, for large $\tan \beta$, present constraints on $\sigma (pp \to H_1 \to \tau \tau)$ forbid all points in the model parameter space irrespective of the supersymmetric mass spectrum.
On the other hand, in the low $\tan \beta$ region, the presence of a relatively light charged Higgs, $m_{H^\pm}\lesssim 220$ GeV, provides a large charged-Higgs contribution to $\mbox{BR}(B \to X_s \gamma)$ which cannot be compensated by an opposite-sign chargino contribution, precisely because of the smallness of $\tan \beta$. This completely eliminates the possibility that the observed Higgs at $m_H \simeq 125$ GeV is the next-to-lightest Higgs in an MSSM context.
In summary, we have shown that a carefully chosen combination of three or four experimental signatures can be enough to entirely rule out a model without resorting to gigantic scans, while simultaneously providing a much better understanding of the physics of the model studied. The power of this technique should not be underestimated, especially when studying models with large parameter spaces, where monster scans can be quite time-consuming and not precisely enlightening. Of special interest is the case in which the Higgs found at the LHC is the lightest, where this type of combined analysis can close significant regions of the parameter space [@WIP].
In this respect, the straightforward application of this kind of study to the recently published CMS data, with a second Higgs-like resonance at $\sim 136$ GeV aside from the 125–126 GeV Higgs, shows that it is not possible to accommodate both resonances in the $\gamma\gamma$ spectrum with a signal strength of the order of the SM one.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors are grateful to Luca Fiorini, Sven Heinemeyer, Joe Lykken and Arcadi Santamaria for useful discussions and wish to thank specially Jae Sik Lee for his help with CPsuperH. We acknowledge support from the MEC and FEDER (EC) Grants FPA2011-23596 and the Generalitat Valenciana under grant PROMETEOII/2013/017. G.B. acknowledges partial support from the European Union FP7 ITN INVISIBLES (Marie Curie Actions, PITN- GA-2011- 289442).
MSSM Conventions {#App:convention}
================
We follow the MSSM conventions of the classical review of Haber and Kane [@Haber:1984rc]; see also [@Chung:2003fi]. In this section we review the mass matrices entering our analysis.
### Charginos: {#charginos .unnumbered}
In our conventions, the chargino mass matrix is $$\mathcal{M}_{C}=\left(\begin{array}{cc}
M_{2} & \sqrt{2}M_{W}\sin\beta\\
\sqrt{2}M_{W}\cos\beta & \mu
\end{array}\right)$$ and can be diagonalized by two unitary matrices so that $U^{*}\mathcal{M}_{C} V^{\dagger}=\mbox{Diag.}\left\{ m_{\chi_{1}^{\pm}},\: m_{\chi_{2}^{\pm}}\right\}$ with $m_{\chi_{1}^{\pm}}\leq m_{\chi_{2}^{\pm}}$. The mass eigenstates, $\chi_i^\pm$, are related to the electroweak eigenstates, $\hat \chi_{i}^\pm$, by $$\chi_{i}^{+}=V_{ij}\hat \chi_{j}^{+}\,, \qquad\chi_{i}^-=U_{ij}\hat\chi_{j}^-\,.$$
### Sfermions: {#sfermions .unnumbered}
The squark mass matrix is given by $$\mathcal{M}_{q}^{2}=\left(\begin{array}{cc}
M_{\tilde{Q}_{3}}^{2}+m_{q}^{2}+\cos\left(2\beta\right)M_{Z}^{2}\left(R_{z}^{q}-Q_{q}\sin^{2}\theta_{W}\right) & h_{q}^{*}\upsilon_{q}\left(A_{q}^{*}-\mu T_{q}\right)/\sqrt{2}\\
\\
h_{q}\upsilon_{q}\left(A_{q}-\mu^{*}T_{q}\right)/\sqrt{2} & M_{\tilde{R}_{3}}^{2}+m_{q}^{2}+\cos\left(2\beta\right)M_{Z}^{2}Q_{q}\sin^{2}\theta_{W}
\end{array}\right)\label{eq:3.2.2-1}$$ with $R_{z}^{t}=-R_{z}^{b}=\frac{1}{2}$, $Q_{q}$ the quark charge, $T_{b}=\tan\beta=\frac{\upsilon_{u}}{\upsilon_{d}}=T_{t}^{-1}$ and $h_{q}$ the Yukawa coupling of the corresponding quark. This matrix is diagonalized as ${\cal R}_q \mathcal{M}_{\tilde q}^2 {\cal R}_q^\dagger=\mbox{Diag.}\left\{m_{\tilde q_{1}}^2,\:m_{\tilde q_{2}}^2\right\}$.
Similarly, the stau mass matrix is $$\mathcal{M}_{\tau}^{2}=\left(\begin{array}{cc}
M_{\tilde{L}_{3}}^{2}+m_{\tau}^{2}+\cos\left(2\beta\right)M_{Z}^{2}\left(\sin^{2}\theta_{W}-\frac{1}{2}\right) & h_{\tau}^{*}\upsilon_{1}\left(A_{\tau}^{*}-\mu\tan\beta\right)/\sqrt{2}\\
\\
h_{\tau}\upsilon_{1}\left(A_{\tau}-\mu^{*}\tan\beta\right)/\sqrt{2} & M_{\tilde{E}_{3}}^{2}+m_{\tau}^{2}+\cos\left(2\beta\right)M_{Z}^{2}\sin^{2}\theta_{W}
\end{array}\right)\label{eq:3.2.2-2}$$
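For orientation, in the stop sector the off-diagonal entry of Eq. (\[eq:3.2.2-1\]) reduces to the familiar left-right mixing term, since $h_{t}\upsilon_{u}/\sqrt{2}=m_{t}$ and $T_{t}=\cot\beta$: $$\left(\mathcal{M}_{t}^{2}\right)_{12}=m_{t}\left(A_{t}^{*}-\mu\cot\beta\right),$$ which controls the stop mixing entering the chargino contributions to $B\to X_{s}\gamma$.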
Expansion of Hermitian matrices {#App:expand}
===============================
Following Refs. [@Buras:1997ij; @Masiero:2005ua], given an $n\times n$ Hermitian matrix $A=A^{0}+A^{1}$, with $A^{0}=Diag(a_{1}^{0},...,a_{n}^{0})$ and $A^{1}$ completely off-diagonal, that is diagonalized by $\mathcal{U}\cdot A\cdot\mathcal{U}^{\dagger}=Diag(a_{1},...,a_{n})$, we have, at first order in $A^{1}$: $$\mathcal{U}_{ki}^{*}f\left(a_{k}\right)\mathcal{U}_{kj}\simeq\delta_{ij}f(a_{i}^{0})+A_{ij}^{1}\frac{f(a_{i}^{0})-f(a_{j}^{0})}{a_{i}^{0}-a_{j}^{0}}\label{eq:A-1}$$ We use this formula to expand the chargino Wilson coefficients, ${\cal C}_{7,8}$, with respect to the chargino mass matrix elements. In this case we have to be careful because the chargino mass matrix is not Hermitian. However, due to the necessary chirality flip in the chargino line, ${\cal C}_{7,8}$ is a function of odd powers of $M_{\chi^{+}}$ [@Clavelli:2000ua], and then $$\sum_{j=1}^{2}U_{j2}V_{j1}m_{\chi_{j}^{+}}A(m_{\chi_{j}^{+}}^{2})=\sum_{j,k,l=1}^{2}U_{jk}m_{\chi_{j}^{+}}V_{j1}U_{l2}A(m_{\chi_{l}^{+}}^{2})U_{lk}^{*}$$ where we inserted $\sum_{k}U_{jk}U_{lk}^{*}=\delta_{jl}$. Then, we obtain $$\begin{aligned}
\mathcal{C}_{7,8}^{\chi^{\pm}(a)} & = & \frac{1}{\cos\beta}\sum_{{\scriptstyle a=1,2}}\frac{U_{a2}V_{a1}M_{W}}{\sqrt{2}m_{\tilde{\chi}_{a}^{\pm}}}\mathcal{F}_{7,8}\left(x_{\tilde{q}\tilde{\chi}_{a}^{\pm}},x_{\tilde{t}_{1}\tilde{\chi}_{a}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{a}^{\pm}}\right)\label{eq:5.2-6}\\
& \sim & \frac{M_{W}}{\sqrt{2}\cos\beta}\left[~\left(\mathcal{M}_{\chi}\right)_{21}\frac{\mathcal{F}_{7,8}\left(x_{\tilde{q}\tilde{\chi}_{2}^{\pm}},x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{2}^{\pm}}\right)}{m_{\tilde{\chi}_{2}^{\pm}}^{2}}\right.\nonumber \\
& + & \left.\left(\mathcal{M}_{\chi}\right)_{11}\left(\mathcal{M}_{\chi}\mathcal{M}_{\chi}^{\dagger}\right)_{21}\frac{m_{\tilde{\chi}_{1}^{\pm}}^{2}\mathcal{F}_{7,8}\left(x_{\tilde{q}\tilde{\chi}_{2}^{\pm}},x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{2}^{\pm}}\right)-m_{\tilde{\chi}_{2}^{\pm}}^{2}\mathcal{F}_{7,8}\left(x_{\tilde{q}\tilde{\chi}_{1}^{\pm}},x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{1}^{\pm}}\right)}{m_{\tilde{\chi}_{1}^{\pm}}^{2}m_{\tilde{\chi}_{2}^{\pm}}^{2}\left(m_{\tilde{\chi}_{2}^{\pm}}^{2}-m_{\tilde{\chi}_{1}^{\pm}}^{2}\right)}\right];\nonumber\label{eq:5.2-7}\end{aligned}$$$$\begin{aligned}
\mathcal{C}_{7,8}^{\chi^{\pm}(b)} & = & \frac{1}{\cos\beta}\sum_{{\scriptstyle a=1,2}}\frac{U_{a2}V_{a2}\overline{m}_{t}}{2m_{\tilde{\chi}_{a}^{\pm}}\sin\beta}\mathcal{G}_{7,8}\left(x_{\tilde{t}_{1}\tilde{\chi}_{a}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{a}^{\pm}}\right)\label{eq:5.2-8}\\
& \sim & \frac{\overline{m}_{t}}{2\cos\beta\sin\beta}\left[~\left(\mathcal{M}_{\chi}\right)_{22}\frac{\mathcal{G}_{7,8}\left(x_{\tilde{q}\tilde{\chi}_{2}^{\pm}},x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{2}^{\pm}}\right)}{m_{\tilde{\chi}_{2}^{\pm}}^{2}}\right.\nonumber \\
& + & \left.\left(\mathcal{M}_{\chi}\right)_{12}\left(\mathcal{M}_{\chi}\mathcal{M}_{\chi}^{\dagger}\right)_{21}\frac{m_{\tilde{\chi}_{1}^{\pm}}^{2}\mathcal{G}_{7,8}\left(x_{\tilde{q}\tilde{\chi}_{2}^{\pm}},x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{2}^{\pm}}\right)-m_{\tilde{\chi}_{2}^{\pm}}^{2}\mathcal{G}_{7,8}\left(x_{\tilde{q}\tilde{\chi}_{1}^{\pm}},x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{1}^{\pm}}\right)}{m_{\tilde{\chi}_{1}^{\pm}}^{2}m_{\tilde{\chi}_{2}^{\pm}}^{2}\left(m_{\tilde{\chi}_{2}^{\pm}}^{2}-m_{\tilde{\chi}_{1}^{\pm}}^{2}\right)}\right];\nonumber\label{eq:5.2-9}\end{aligned}$$ Using again the same approximation to expand the stop mixings in ${\cal F}_{7,8}$ and ${\cal G}_{7,8}$, we obtain: $$\begin{aligned}
\mathcal{F}_{7,8}\left(x_{\tilde{q}\tilde{\chi}_{a}^{\pm}},x_{\tilde{t}_{1}\tilde{\chi}_{a}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{a}^{\pm}}\right) & \simeq & f_{7,8}^{(3)}\left(x_{\tilde{q}\tilde{\chi}_{a}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{a}^{\pm}}\right);\label{eq:5.2-10}\\
\mathcal{G}_{7,8}\left(x_{\tilde{t}_{1}\tilde{\chi}_{a}^{\pm}},x_{\tilde{t}_{2}\tilde{\chi}_{a}^{\pm}}\right) & \simeq & \left(\mathcal{M}_{\tilde{t}}\right)_{21}\frac{f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{a}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{2}\tilde{\chi}_{a}^{\pm}}\right)}{m_{\tilde{t}_{1}}^{2}-m_{\tilde{t}_{2}}^{2}};\label{eq:5.2-11}\end{aligned}$$ Putting everything together, we have: $$\begin{aligned}
\mathcal{C}_{7,8}^{\chi^{\pm}(a)} & \sim & \frac{M_{W}}{\sqrt{2}\cos\beta}\left[~\left(\mathcal{M}_{\chi}\right)_{21}\frac{f_{7,8}^{(3)}\left(x_{\tilde{q}\tilde{\chi}_{2}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}}\right)}{m_{\tilde{\chi}_{2}^{\pm}}^{2}}\right. \\
& + & \left.\frac{\left(\mathcal{M}_{\chi}\right)_{11}\left(\mathcal{M}_{\chi}\mathcal{M}_{\chi}^{\dagger}\right)_{21}}{m_{\tilde{\chi}_{1}^{\pm}}^{2}-m_{\tilde{\chi}_{2}^{\pm}}^{2}}\left(\frac{f_{7,8}^{(3)}\left(x_{\tilde{q}\tilde{\chi}_{1}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}}\right)}{m_{\tilde{\chi}_{1}^{\pm}}^{2}}-\frac{f_{7,8}^{(3)}\left(x_{\tilde{q}\tilde{\chi}_{2}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}}\right)}{m_{\tilde{\chi}_{2}^{\pm}}^{2}}\right)\right];\nonumber\label{eq:5.2-12}\end{aligned}$$ $$\begin{aligned}
\mathcal{C}_{7,8}^{\chi^{\pm}(b)} & \sim & \frac{\overline{m}_{t}}{2\cos\beta\sin\beta}\left[~\left(\mathcal{M}_{\chi}\right)_{22}\frac{\left(\mathcal{M}_{\tilde{t}}\right)_{21}}{m_{\tilde{t}_{1}}^{2}-m_{\tilde{t}_{2}}^{2}}\left(\frac{f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{2}\tilde{\chi}_{2}^{\pm}}\right)}{m_{\tilde{\chi}_{2}^{\pm}}^{2}}\right)\right. \\
& + & \left.\frac{\left(\mathcal{M}_{\chi}\right)_{12}\left(\mathcal{M}_{\chi}\mathcal{M}_{\chi}^{\dagger}\right)_{21}}{m_{\tilde{\chi}_{1}^{\pm}}^{2}-m_{\tilde{\chi}_{2}^{\pm}}^{2}}\left(\frac{f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{1}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{2}\tilde{\chi}_{1}^{\pm}}\right)}{m_{\tilde{\chi}_{1}^{\pm}}^{2}}-\frac{f_{7,8}^{(3)}\left(x_{\tilde{t}_{1}\tilde{\chi}_{2}^{\pm}}\right)-f_{7,8}^{(3)}\left(x_{\tilde{t}_{2}\tilde{\chi}_{2}^{\pm}}\right)}{m_{\tilde{\chi}_{2}^{\pm}}^{2}}\right) \right.\nonumber \\
&&\left. \frac{\left(\mathcal{M}_{\tilde{t}}\right)_{21}}{m_{\tilde{t}_{1}}^{2}-m_{\tilde{t}_{2}}^{2}} \right];\nonumber \label{eq:5.2-13}\end{aligned}$$
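As a simple consistency check of the expansion formula in Eq. (\[eq:A-1\]), one can take $f(a)=a^{2}$, for which the left-hand side can be evaluated exactly: $$\mathcal{U}_{ki}^{*}\,a_{k}^{2}\,\mathcal{U}_{kj}=\left(A^{2}\right)_{ij}=\delta_{ij}(a_{i}^{0})^{2}+A_{ij}^{1}\left(a_{i}^{0}+a_{j}^{0}\right)+\left(A^{1}A^{1}\right)_{ij}\,,$$ while the right-hand side of Eq. (\[eq:A-1\]) gives $\delta_{ij}(a_{i}^{0})^{2}+A_{ij}^{1}\,\big[(a_{i}^{0})^{2}-(a_{j}^{0})^{2}\big]/(a_{i}^{0}-a_{j}^{0})=\delta_{ij}(a_{i}^{0})^{2}+A_{ij}^{1}(a_{i}^{0}+a_{j}^{0})$, reproducing the exact result up to the $\mathcal{O}\big((A^{1})^{2}\big)$ remainder, as expected for a first-order formula.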
[99]{} G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**716**]{}, 1 (2012) \[arXiv:1207.7214 \[hep-ex\]\].
S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**716**]{}, 30 (2012) \[arXiv:1207.7235 \[hep-ex\]\].
S. R. Coleman and J. Mandula, Phys. Rev. [**159**]{}, 1251 (1967).
R. Haag, J. T. Lopuszanski and M. Sohnius, Nucl. Phys. B [**88**]{}, 257 (1975).
P. Fayet, Nucl. Phys. B [**90**]{}, 104 (1975).
P. Fayet, Phys. Lett. B [**69**]{}, 489 (1977).
G. R. Farrar and P. Fayet, Phys. Lett. B [**76**]{}, 575 (1978).
E. Witten, Nucl. Phys. B [**188**]{}, 513 (1981).
S. Dimopoulos and H. Georgi, Nucl. Phys. B [**193**]{}, 150 (1981).
N. Sakai, Z. Phys. C [**11**]{}, 153 (1981).
L. E. Ibanez and G. G. Ross, Phys. Lett. B [**105**]{}, 439 (1981).
R. K. Kaul, Phys. Lett. B [**109**]{}, 19 (1982).
H. P. Nilles, Phys. Rept. [**110**]{}, 1 (1984).
H. E. Haber and G. L. Kane, Phys. Rept. [**117**]{}, 75 (1985).
A. Djouadi, Phys. Rept. [**459**]{}, 1 (2008) \[hep-ph/0503173\].
A. Pilaftsis, Phys. Lett. B [**435**]{}, 88 (1998) \[hep-ph/9805373\].
A. Pilaftsis, Phys. Rev. D [**58**]{}, 096010 (1998) \[hep-ph/9803297\].
A. Pilaftsis and C. E. M. Wagner, Nucl. Phys. B [**553**]{}, 3 (1999) \[hep-ph/9902371\].
D. A. Demir, Phys. Rev. D [**60**]{}, 055006 (1999) \[hep-ph/9901389\].
M. S. Carena, J. R. Ellis, A. Pilaftsis and C. E. M. Wagner, Nucl. Phys. B [**586**]{}, 92 (2000) \[hep-ph/0003180\].
S. Y. Choi, M. Drees and J. S. Lee, Phys. Lett. B [**481**]{}, 57 (2000) \[hep-ph/0002287\].
M. S. Carena, J. R. Ellis, A. Pilaftsis and C. E. M. Wagner, Nucl. Phys. B [**625**]{}, 345 (2002) \[hep-ph/0111245\].
S. Y. Choi, K. Hagiwara and J. S. Lee, Phys. Rev. D [**64**]{}, 032004 (2001) \[hep-ph/0103294\].
S. Y. Choi, M. Drees, J. S. Lee and J. Song, Eur. Phys. J. C [**25**]{}, 307 (2002) \[hep-ph/0204200\]. S. M. Barr and A. Zee, Phys. Rev. Lett. [**65**]{}, 21 (1990) \[Erratum-ibid. [**65**]{}, 2920 (1990)\].
D. Chang, W. -F. Chang and W. -Y. Keung, Phys. Lett. B [**478**]{}, 239 (2000) \[hep-ph/9910465\].
J. R. Ellis, J. S. Lee and A. Pilaftsis, JHEP [**0810**]{}, 049 (2008) \[arXiv:0808.1819 \[hep-ph\]\].
A. Pilaftsis, Phys. Lett. B [**471**]{}, 174 (1999) \[hep-ph/9909485\].
S. Heinemeyer, O. Stal and G. Weiglein, Phys. Lett. B [**710**]{}, 201 (2012) \[arXiv:1112.3026 \[hep-ph\]\].
K. Hagiwara, J. S. Lee and J. Nakamura, JHEP [**1210**]{}, 002 (2012) \[arXiv:1207.0802 \[hep-ph\]\].
A. Arbey, M. Battaglia, A. Djouadi and F. Mahmoudi, JHEP [**1209**]{}, 107 (2012) \[arXiv:1207.1348 \[hep-ph\]\].
P. Bechtle, S. Heinemeyer, O. Stal, T. Stefaniak, G. Weiglein and L. Zeune, Eur. Phys. J. C [**73**]{}, 2354 (2013) \[arXiv:1211.1955 \[hep-ph\]\]. J. Ke, H. Luo, M. -x. Luo, K. Wang, L. Wang and G. Zhu, Phys. Lett. B [**723**]{}, 113 (2013) \[arXiv:1211.2427 \[hep-ph\]\]. J. Ke, H. Luo, M. -x. Luo, T. -y. Shen, K. Wang, L. Wang and G. Zhu, arXiv:1212.6311 \[hep-ph\]. S. Moretti, S. Munir and P. Poulose, arXiv:1305.0166 \[hep-ph\].
S. Scopel, N. Fornengo and A. Bottino, arXiv:1304.5353 \[hep-ph\].
J. S. Lee, A. Pilaftsis, M. S. Carena, S. Y. Choi, M. Drees, J. R. Ellis and C. E. M. Wagner, Comput. Phys. Commun. [**156**]{}, 283 (2004) \[hep-ph/0307377\].
J. S. Lee, M. Carena, J. Ellis, A. Pilaftsis and C. E. M. Wagner, Comput. Phys. Commun. [**184**]{}, 1220 (2013) \[arXiv:1208.2212 \[hep-ph\]\].
S. Heinemeyer, W. Hollik and G. Weiglein, Comput. Phys. Commun. [**124**]{}, 76 (2000) \[hep-ph/9812320\].
T. Hahn, W. Hollik, S. Heinemeyer and G. Weiglein, eConf C [**050318**]{}, 0106 (2005) \[hep-ph/0507009\].
J. R. Ellis, K. A. Olive and Y. Santoso, Phys. Lett. B [**539**]{}, 107 (2002) \[hep-ph/0204192\].
J. R. Ellis, T. Falk, K. A. Olive and Y. Santoso, Nucl. Phys. B [**652**]{}, 259 (2003) \[hep-ph/0210205\].
J. R. Ellis, K. A. Olive and P. Sandick, Phys. Rev. D [**78**]{}, 075012 (2008) \[arXiv:0805.2343 \[hep-ph\]\].
C. F. Berger, J. S. Gainer, J. L. Hewett and T. G. Rizzo, JHEP [**0902**]{}, 023 (2009) \[arXiv:0812.0980 \[hep-ph\]\].
S. S. AbdusSalam, B. C. Allanach, F. Quevedo, F. Feroz and M. Hobson, Phys. Rev. D [**81**]{}, 095012 (2010) \[arXiv:0904.2548 \[hep-ph\]\].
A. Arbey, M. Battaglia, A. Djouadi and F. Mahmoudi, Phys. Lett. B [**720**]{}, 153 (2013) \[arXiv:1211.4004 \[hep-ph\]\].
\[ATLAS Collaboration\], ATLAS-CONF-2013-034. \[CMS Collaboration\], CMS-PAS-HIG-13-005.
G. Aad [*et al.*]{} \[ATLAS Collaboration\], arXiv:1307.1427 \[hep-ex\].
G. Aad [*et al.*]{} \[ATLAS Collaboration\], JHEP [**1209**]{}, 070 (2012) \[arXiv:1206.5971 \[hep-ex\]\].
\[CMS Collaboration\], CMS-PAS-HIG-13-004.
G. Aad [*et al.*]{} \[ATLAS Collaboration\], JHEP [**1302**]{}, 095 (2013) \[arXiv:1211.6956 \[hep-ex\]\]. L. Fiorini, private communication.
G. Aad [*et al.*]{} \[ATLAS Collaboration\], JHEP [**1206**]{}, 039 (2012) \[arXiv:1204.2760 \[hep-ex\]\]. \[CMS Collaboration\], CMS-PAS-HIG-12-052
S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], JHEP [**1303**]{}, 037 (2013) \[arXiv:1212.6194 \[hep-ex\]\].
S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], arXiv:1305.2390 \[hep-ex\].
\[CMS Collaboration\], PAS-SUS-13-007
\[CMS Collaboration\], PAS-SUS-13-008
\[ATLAS Collaboration\], ATLAS-CONF-2012-145.
\[ATLAS Collaboration\], ATLAS-CONF-2013-007.
S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], arXiv:1303.2985 \[hep-ex\].
\[ATLAS Collaboration\], ATLAS-CONF-2013-024.
\[ATLAS Collaboration\], ATLAS-CONF-2013-037.
\[ATLAS Collaboration\], ATLAS-CONF-2013-053
\[CMS Collaboration\], PAS-SUS-13-011
\[ATLAS Collaboration\], ATLAS-CONF-2013-035.
\[CMS Collaboration\], PAS-SUS-12-022
A. Bharucha, S. Heinemeyer and F. von der Pahlen, arXiv:1307.4237 \[hep-ph\].
A. Masiero and O. Vives, Ann. Rev. Nucl. Part. Sci. [**51**]{}, 161 (2001) \[hep-ph/0104027\].
M. Raidal, A. van der Schaaf, I. Bigi, M. L. Mangano, Y. K. Semertzidis, S. Abel, S. Albino and S. Antusch [*et al.*]{}, Eur. Phys. J. C [**57**]{}, 13 (2008) \[arXiv:0801.1826 \[hep-ph\]\].
L. Calibbi, R. N. Hodgkinson, J. Jones Perez, A. Masiero and O. Vives, Eur. Phys. J. C [**72**]{}, 1863 (2012) \[arXiv:1111.0176 \[hep-ph\]\].
R. Aaij [*et al.*]{} \[LHCb Collaboration\], Phys. Rev. Lett. [**110**]{}, 021801 (2013) \[arXiv:1211.2674 \[hep-ex\]\]. R. Aaij [*et al.*]{} \[LHCb Collaboration\], Phys. Rev. Lett. [**111**]{}, 101805 (2013) \[arXiv:1307.5024 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], arXiv:1307.5025 \[hep-ex\]. S. Chen [*et al.*]{} \[CLEO Collaboration\], Phys. Rev. Lett. [**87**]{}, 251807 (2001) \[hep-ex/0108032\].
K. Abe [*et al.*]{} \[Belle Collaboration\], Phys. Lett. B [**511**]{}, 151 (2001) \[hep-ex/0103042\].
A. Limosani [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett. [**103**]{}, 241801 (2009) \[arXiv:0907.1384 \[hep-ex\]\].
J. P. Lees [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. D [**86**]{}, 052012 (2012) \[arXiv:1207.2520 \[hep-ex\]\].
J. P. Lees [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. D [**86**]{}, 112008 (2012) \[arXiv:1207.5772 \[hep-ex\]\].
B. Aubert [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. D [**77**]{}, 051103 (2008) \[arXiv:0711.4889 \[hep-ex\]\].
Y. Amhis [*et al.*]{} \[Heavy Flavor Averaging Group Collaboration\], arXiv:1207.1158 \[hep-ex\]. HFAG: Rare B decay parameters, http://www.slac.stanford.edu/xorg/hfag/rare/
K. Funakubo, S. Tao and F. Toyoda, Prog. Theor. Phys. [**109**]{}, 415 (2003) \[hep-ph/0211238\].
Y. Okada, M. Yamaguchi and T. Yanagida, Prog. Theor. Phys. [**85**]{}, 1 (1991).
J. R. Ellis, G. Ridolfi and F. Zwirner, Phys. Lett. B [**257**]{}, 83 (1991).
H. E. Haber and R. Hempfling, Phys. Rev. Lett. [**66**]{}, 1815 (1991). H. E. Haber, R. Hempfling and A. H. Hoang, Z. Phys. C [**75**]{} (1997) 539 \[hep-ph/9609331\]. A. Djouadi and J. Quevillon, arXiv:1304.1787 \[hep-ph\].
M. S. Carena, J. R. Espinosa, M. Quiros and C. E. M. Wagner, Phys. Lett. B [**355**]{}, 209 (1995) \[hep-ph/9504316\]. M. S. Carena, J. R. Ellis, A. Pilaftsis and C. E. M. Wagner, Phys. Lett. B [**495**]{} (2000) 155 \[hep-ph/0009212\]. M. S. Carena, J. R. Ellis, S. Mrenna, A. Pilaftsis and C. E. M. Wagner, Nucl. Phys. B [**659**]{}, 145 (2003) \[hep-ph/0211467\]. K. E. Williams and G. Weiglein, Phys. Lett. B [**660**]{}, 217 (2008) \[arXiv:0710.5320 \[hep-ph\]\]. L. J. Hall, R. Rattazzi and U. Sarid, Phys. Rev. D [**50**]{}, 7048 (1994) \[hep-ph/9306309\].
M. S. Carena, M. Olechowski, S. Pokorski and C. E. M. Wagner, Nucl. Phys. B [**426**]{}, 269 (1994) \[hep-ph/9402253\].
T. Blazek, S. Raby and S. Pokorski, Phys. Rev. D [**52**]{}, 4151 (1995) \[hep-ph/9504364\].
M. S. Carena, D. Garcia, U. Nierste and C. E. M. Wagner, Nucl. Phys. B [**577**]{}, 88 (2000) \[hep-ph/9912516\].
C. Hamzaoui, M. Pospelov and M. Toharia, Phys. Rev. D [**59**]{}, 095005 (1999) \[hep-ph/9807350\].
K. S. Babu and C. F. Kolda, Phys. Rev. Lett. [**84**]{}, 228 (2000) \[hep-ph/9909476\].
G. Isidori and A. Retico, JHEP [**0111**]{}, 001 (2001) \[hep-ph/0110121\].
A. Dedes and A. Pilaftsis, Phys. Rev. D [**67**]{}, 015012 (2003) \[hep-ph/0209306\].
A. J. Buras, P. H. Chankowski, J. Rosiek and L. Slawianowska, Nucl. Phys. B [**659**]{}, 3 (2003) \[hep-ph/0210145\].
M. Spira, A. Djouadi, D. Graudenz and P. M. Zerwas, Nucl. Phys. B [**453**]{}, 17 (1995) \[hep-ph/9504378\].
M. Spira, Fortsch. Phys. [**46**]{}, 203 (1998) \[hep-ph/9705337\].
A. Djouadi, Phys. Rept. [**457**]{}, 1 (2008) \[hep-ph/0503172\].
A. Dedes and S. Moretti, Phys. Rev. Lett. [**84**]{}, 22 (2000) \[hep-ph/9908516\].
A. Dedes and S. Moretti, Nucl. Phys. B [**576**]{}, 29 (2000) \[hep-ph/9909418\].
S. Y. Choi and J. S. Lee, Phys. Rev. D [**61**]{}, 115002 (2000) \[hep-ph/9910557\].
A. D. Martin, W. J. Stirling, R. S. Thorne and G. Watt, Eur. Phys. J. C [**63**]{}, 189 (2009) \[arXiv:0901.0002 \[hep-ph\]\].
D. A. Dicus and S. Willenbrock, Phys. Rev. D [**39**]{}, 751 (1989).
J. M. Campbell, R. K. Ellis, F. Maltoni and S. Willenbrock, Phys. Rev. D [**67**]{}, 095002 (2003) \[hep-ph/0204093\].
F. Maltoni, Z. Sullivan and S. Willenbrock, Phys. Rev. D [**67**]{}, 093005 (2003) \[hep-ph/0301033\].
R. V. Harlander and W. B. Kilgore, Phys. Rev. D [**68**]{}, 013001 (2003) \[hep-ph/0304035\].
S. Dittmaier, M. Krämer and M. Spira, Phys. Rev. D [**70**]{}, 074010 (2004) \[hep-ph/0309204\].
S. Dawson, C. B. Jackson, L. Reina and D. Wackeroth, Phys. Rev. D [**69**]{}, 074027 (2004) \[hep-ph/0311067\].
J. Baglio and A. Djouadi, JHEP [**1103**]{}, 055 (2011) \[arXiv:1012.0530 \[hep-ph\]\].
D. Graudenz, M. Spira and P. M. Zerwas, Phys. Rev. Lett. [**70**]{}, 1372 (1993).
S. Dawson, A. Djouadi and M. Spira, Phys. Rev. Lett. [**77**]{}, 16 (1996) \[hep-ph/9603423\].
A. Djouadi and M. Spira, Phys. Rev. D [**62**]{}, 014004 (2000) \[hep-ph/9912476\].
G. Degrassi, P. Gambino and G. F. Giudice, JHEP [**0012**]{}, 009 (2000) \[hep-ph/0009337\].
M. Misiak, H. M. Asatrian, K. Bieri, M. Czakon, A. Czarnecki, T. Ewerth, A. Ferroglia and P. Gambino [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 022002 (2007) \[hep-ph/0609232\].
E. Lunghi and J. Matias, JHEP [**0704**]{}, 058 (2007) \[hep-ph/0612166\].
M. E. Gomez, T. Ibrahim, P. Nath and S. Skadhauge, Phys. Rev. D [**74**]{}, 015015 (2006) \[hep-ph/0601163\]. M. Carena, S. Gori, N. R. Shah, C. E. M. Wagner and L. -T. Wang, JHEP [**1308**]{} (2013) 087 \[arXiv:1303.4414 \[hep-ph\]\].
M. Carena, S. Gori, N. R. Shah and C. E. M. Wagner, JHEP [**1203**]{}, 014 (2012) \[arXiv:1112.3336 \[hep-ph\]\].
M. Carena, S. Gori, N. R. Shah, C. E. M. Wagner and L. -T. Wang, JHEP [**1207**]{}, 175 (2012) \[arXiv:1205.5842 \[hep-ph\]\].
\[CMS Collaboration\], CMS PAS HIG-13-016.
G. Barenboim, C. Bosch, M.L. López-Ibáñez and O. Vives, work in progress.
D. J. H. Chung, L. L. Everett, G. L. Kane, S. F. King, J. D. Lykken and L. -T. Wang, Phys. Rept. [**407**]{}, 1 (2005) \[hep-ph/0312378\].
A. J. Buras, A. Romanino and L. Silvestrini, Nucl. Phys. B [**520**]{}, 3 (1998) \[hep-ph/9712398\].
A. Masiero, S. K. Vempati and O. Vives, arXiv:0711.2903 \[hep-ph\].
L. Clavelli, T. Gajdosik and W. Majerotto, Phys. Lett. B [**494**]{}, 287 (2000) \[hep-ph/0007342\].
[^1]: It is well-known that a single CKM phase is not enough to explain the observed matter-antimatter asymmetry of the universe. Additional phases (and therefore new physics) are required for that.
[^2]: These phases enter EDMs of the electron and proton at two loops through Barr-Zee diagrams[@Barr:1990vd; @Chang:1999zw]. However, these contributions are suppressed for heavy squarks[@Ellis:2008zy].
[^3]: Limits on masses could be softer if these squarks are nearly degenerate with the LSP, but this does not affect our analysis below.
[^4]: As pointed out in Ref. [@Bharucha:2013epa], these bounds with the slepton channel closed are only valid in a simplified model that assumes BR($\chi^0_2 \to Z \chi^0_1$)=1. This bound is strongly relaxed once the decay $\chi^0_2 \to h \chi^0_1$ is included. However, in our paper, this limit is only taken into account as a reference value for chargino masses and has no effect on our analysis of the feasibility of this scenario.
[^5]: Allowing the heaviest neutral Higgs to be $200$ GeV with a second-heaviest Higgs of 125 GeV is a very conservative assumption. However, it looks very difficult to have such a heavy Higgs in any realistic MSSM construction.
[^6]: In a recent analysis on this issue [@Carena:2013iba], enhancements of the diphoton decay width of order $\sim 40\%$ could be obtained for $\tan \beta \geq 60$ and $m_{\tilde \tau}\simeq 95$ GeV.
[^7]: Even allowing a three-$\sigma$ range, we find no allowed points when $m_{\tilde t_1} \geq 650$ GeV and $m_{\chi^\pm} \geq 350$ GeV.
[^8]: If we allowed points within a three-$\sigma$ region, BR$(B\to X_s \gamma) \leq 4.1 \times 10^{-4}$, several points would still survive. However, for all the three-$\sigma$ allowed points we have a very large $\sigma_{H_3 \tau \tau}$, and even these points will be forbidden when the ATLAS analysis of heavy MSSM Higgses is updated [@Aad:2012yfa; @privateFiorini].
|
Ten Compressed-Air Energy-Saving Tips
Did you know that using more than 30 psi for part and fixture blow-off is an OSHA violation?
We’re nearly finished with our August issue, which is devoted to analysis of the data generated by our inaugural Top Shops benchmarking survey. One of its features reports on the efforts shops are making to become more environmentally responsible. A sidebar in that article lists 10 tips from Kaeser Compressors for reducing the amount of energy used by your compressed air system, and I thought I’d share them with you here:
1. Turn off your air compressors when not needed. A 100-hp compressor can cost $75,000 per year in energy costs (based on 8,760 hours at $0.10 per kilowatt hour).
2. Identify and fix air leaks. Studies show that 25 to 50 percent of all compressed air generated is wasted to leaks.
3. Eliminate inappropriate uses of compressed air. Using compressed air for blow-off is not only wasteful, but it can be dangerous as well. In fact, using more than 30 psi for part and fixture blow-off is an OSHA violation.
4. Apply proper controls to multiple-compressor systems. Master system controls maintain a stable system pressure. They also ensure that only the needed compressor units are brought online and that they are operating at peak efficiency.
5. Ensure that piping and storage are adequately sized. Many systems lack adequate storage. (Kaeser recommends having both “wet” and “dry” tanks.) Also, undersized piping will increase pressure drop in the system.
8. Apply variable-speed-drive compressors where appropriate. Variable-speed drive is not a one-size-fits-all solution. However, if demand varies, it can save thousands of dollars in electricity costs each year.
10. Recover waste heat from coolers. A 50-horsepower compressor rejects heat at approximately 126,000 BTU per hour. How might that heat be used?
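It's worth sanity-checking the numbers in tips 1 and 10. Here's a quick back-of-envelope script (Python; the 90-percent motor efficiency is my assumption, not a Kaeser figure):

```python
# Rough check of the figures quoted in tips 1 and 10.
HP_TO_KW = 0.7457          # 1 horsepower in kilowatts
KWH_TO_BTU = 3412.14       # 1 kWh expressed in BTU

def annual_energy_cost(hp, hours=8760, rate=0.10, motor_eff=0.90):
    """Annual electricity cost of a fully loaded compressor, in dollars.
    motor_eff is an assumed motor efficiency, not a figure from the article."""
    input_kw = hp * HP_TO_KW / motor_eff   # electrical input exceeds shaft power
    return input_kw * hours * rate

def rejected_heat_btu_per_hr(hp):
    """Nearly all shaft power leaves the compressor as recoverable heat."""
    return hp * HP_TO_KW * KWH_TO_BTU

print(f"100-hp compressor, 24/7: ${annual_energy_cost(100):,.0f}/yr")
print(f"50-hp heat rejection: {rejected_heat_btu_per_hr(50):,.0f} BTU/hr")
```

Run it and you get roughly $73,000 per year for the 100-hp machine and roughly 127,000 BTU per hour for the 50-hp one — within a few percent of the $75,000 and 126,000 BTU figures quoted above, which suggests Kaeser's numbers assume a fully loaded machine running around the clock.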
Bonus tip: Finding incentives. Are you aware of the incentives local utilities offer for improving a facility’s performance with more efficient compressed air equipment? This Utility Rebate Finder can help identify programs offered in your area. |
Menu
Monday, June 11, 2012
The CFC Worldwide Short Film Festival (WSFF) has presented the winners of the festival jury prizes, including eight prizes and over $65,000 in cash and awards.
This year’s jury comprised Shane Smith, Director of Public Programmes at TIFF Bell Lightbox; filmmaker Iain Gardner, winner of the 2011 Best Animated Short award at WSFF for The Tannery; Susanne Folkesson, acquisition executive for UR; Ian Harnarine, winner of the Genie Award for Best Live Action Short Film in 2012 for Doubles with Slight Pepper; and award-winning filmmaker Jean-Marc Vallée (Café de Flore).
WSFF is one of only four Canadian festivals accredited by the Academy of Motion Picture Arts and Sciences. Winners for Best Live-Action Short and Best Animated Short are eligible for Academy Award nominations. WSFF Canadian award winners are eligible for Genie Awards.
The Award Winners of the 2012 CFC Worldwide Short Film Festival are:
The Bravo!FACT Award for Best Canadian Short: EDMOND WAS A DONKEY (EDMOND ÉTAIT UN ÂNE), directed by Franck Dion. This award is accompanied by a prize of $5,000. The jury remarked: “For its depth of story, sensitivity and off-beat depiction of a highly original character, this year’s choice inspired the jury to follow their bliss.”
The Deluxe Award for Best Live-Action Short – THE FACTORY (A FÁBRICA), directed by Aly Muritiba. The jury remarked: “This daring and emotional story of family ties took us on an intense journey beneath the hard surface of a brutal environment, to find the spark and beauty of the human condition. Nuanced performances, compelling storytelling and outstanding direction deliver a punch to the gut.”
The Deluxe Award for Best Performance in a Live-Action Short – MY SWEETHEART (MON AMOUREUX), performance by Miss Ming. The jury remarked: “A sympathetic, delicate and charming portrayal that engages the audience in a sensitive moral debate. The jury was seduced by the talent of this actress who delivered a touching performance ‘troublante de vérité’.”
The Kodak Award for Best Cinematography in a Canadian Short – GRAVITY OF CENTER, cinematography by Christophe Collette. The jury remarked: “This rhythmic film captures the human form and the aesthetics of movement through its precise composition, masterful use of light and shadows and seamless transitions.”
The Panasonic Award for Best Documentary Short – EIGHTY EIGHT, directed by Sebastian Feehan. The jury remarked: “This intimate portrait of a man and his desire to embrace life inspired the jury and brought tears to their eyes. This short deserved to be honoured in recognition of its treatment of its subject, its examination of love and life lessons learned.” An honourable mention goes to REMEMBER ME MY GHOST, directed by Ross McDonnell.
Best Animated Short – THE MAKER, directed by Christopher Kezelos. The jury remarked: “We experienced a ‘coup de coeur’ and a ‘grand moment de cinéma’ after watching this exquisite film that perfectly encapsulates what the animation process is all about, as a soul is injected into an inanimate object.”
Best Experimental Short – GRAVITY OF CENTER, directed by Thibaut Duverneix. The jury remarked: “Rhythmic and inventive this film creates new forms in the language of film; cerebral and emotional, it dares to dream.” An honourable mention goes to MOVING STORIES, directed by Nicolas Provost.
Also presented at the ceremony was the winner of WSFF’s Screenplay Giveaway Prize, which went to Tanya Lemke for STATIC. The Screenplay Giveaway winner was determined by a jury comprised of Walter Forsyth (producer, The Disappeared), Liz Janzen (former director of programming, National Screen Institute), and Kellie Ann Benz (The Shorts Report). Lemke will receive a prize package of goods and services valued at over $50,000 to assist with turning her script into a short film. The jury remarked: “A poignant look at mortality through the eyes of a senior whose mind may be fracturing under the pressure of our disposable society. Good use of flashbacks, the characters feel human and the message is clear, all in a contained world.”
The WSFF 2012 Audience Choice Award, determined by audience ballot, went to UNRAVEL, directed by Meghna Gupta. A UK/India co-production, the documentary explores the hidden corners of the international textile industry with the charismatic Indian workers whose job it is to recycle the massive shipments of second-hand clothing arriving daily from the West.
Now in its 18th year, the Canadian Film Centre’s Worldwide Short Film Festival is the leading venue for the exhibition and promotion of short film in North America and is one of the premier short film festivals in the world. Taking place June 5th – 10th, 2012, WSFF presents 244 films from 35 countries. Offering one of the largest prize packages for short film in the world, top WSFF winners are eligible for both Academy Award® and Genie Award consideration. The WSFF Short Films: BIG IDEAS Symposium offers renowned professional development, while the WSFF Business Centre is home to the largest marketplace for the sale and acquisition of short films in North America. For more information please visit: www.shorterisbetter.com |
---
address: |
Department of Mathematics, Syktyvkar Branch of IMM UrD RAS, Chernova st., 3a, Syktyvkar, 167982, Russia\
E-mail: gromov@dm.komisc.ru
author:
- 'N. A. GROMOV, I. V. KOSTYKOV, V. V. KURATOV'
title: 'CAYLEY–KLEIN CONTRACTIONS OF ORTHOSYMPLECTIC SUPERALGEBRAS'
---
[hep-th/0110257]{}
Since its discovery [@1] [@2] [@3] in 1971, supersymmetry has been used in different physical theories such as Kaluza–Klein supergravity [@W-86], supersymmetric field theories of the Wess–Zumino type [@K-75], and massless higher-spin field theories [@Vas-90]. Recently the secret theory [@B-96] (or S-theory), which includes superstring theory and its super p-brane and D-brane [@BIK] generalizations, was discussed. All these theories are built algebraically on some superalgebra at their base. In this work we present a wide class of Cayley–Klein (CK) superalgebras which may be used for the construction of different supersymmetric models.
**$osp(m|2n)$ superalgebra**
============================
Let $e_{IJ} \in M_{m+2n}$, with $(e_{IJ})_{KL}=\delta_{IK}\delta_{JL}$, be the elementary matrices. One defines the following graded matrix $$G=\left (
\begin{array}{c|c}
I_m & 0 \cr \hline
0 & 0 \quad I_n \cr
& -I_n \quad 0
\end{array}
\right )$$ where $I_m,I_n$ are identity matrices. Let $i,j,\ldots=1,\ldots ,m, \,
\bar i,\bar j=m+1, \ldots, m+2n.$ The generators of the orthosymplectic superalgebra $osp(m|2n)$ are given by $$E_{ij}=-E_{ji}=\sum_{k}(G_{ik}e_{kj}-G_{jk}e_{ki}),\;\;
E_{\bar i\bar j}=E_{\bar j\bar i}=\sum_{\bar k}
(G_{\bar i\bar k}e_{\bar k\bar j}+
G_{\bar j\bar k}e_{\bar k\bar i}),\;\;$$ $$E_{i\bar j}=E_{\bar ji}=\sum_{k}G_{ik}e_{k\bar j}+
\sum_{\bar k}G_{\bar j\bar k}e_{\bar ki},
\label{1}$$ where the even (bosonic) $E_{ij}$ generate the $so(m)$ part, the even (bosonic) $E_{\bar i\bar j}$ generate the $sp(2n)$ part and the rest $E_{i\bar j}$ are the odd (fermionic) generators of superalgebra. They satisfy the following (super) commutation relations $$[E_{ij},E_{kl}]=G_{jk}E_{il}+G_{il}E_{jk}-G_{ik}E_{jl}-G_{jl}E_{ik},\;\;$$ $$[E_{\bar i\bar j},E_{\bar k\bar l}]=-G_{\bar j\bar k}E_{\bar i\bar l}-
G_{\bar i\bar l}E_{\bar j\bar k}-G_{\bar j\bar l}E_{\bar i\bar k}-
G_{\bar i\bar k}E_{\bar j\bar l},$$ $$[E_{ij},E_{k\bar l}]=G_{jk}E_{i\bar l}-
G_{ik}E_{j\bar l},\;\;
[E_{i\bar j},E_{\bar k\bar l}]=-G_{\bar j\bar k}E_{i\bar l}-
G_{\bar j\bar l}E_{i\bar k},$$ $$[E_{ij},E_{\bar k\bar l}]=0, \quad
\{E_{i\bar j},E_{k\bar l}\}=
G_{ik}E_{\bar j\bar l}-
G_{\bar j\bar l}E_{ik}.
\label{2}$$
In the matrix form $
osp(m|2n)=\{M \in M_{m+2n}|M^{st}G+GM=0\}.
$ If the matrix $M$ has the following form: $
M=\sum_{i,j}a_{ij}E_{ij} + \sum_{\bar i,\bar j}b_{\bar i\bar j}
E_{\bar i\bar j} + \sum_{i\bar j}\mu_{i\bar j}E_{i\bar j},
$ with $a_{ij}, b_{\bar i \bar j}\in $ [**R**]{} or [**C**]{} and $\mu_{i\bar j}$ as the odd nilpotent elements of Grassmann algebra: $\mu^2_{i\bar j}=0,\,
\mu_{i\bar j}\mu_{i'\bar j'}=-\mu_{i'\bar j'}\mu_{i\bar j},$ then the corresponding supergroup $Osp(m|2n)$ is obtained by the exponential map $ {\cal M}=\exp M $ and acts on the (super)vector space by matrix multiplication ${\cal X}'={\cal M}{\cal X},$ where ${\cal X}^t=(x|\theta)^t,$ $x$ is an $m$–dimensional even vector and $\theta$ is a $2n$–dimensional odd vector with odd Grassmann elements. The form $inv=\sum^m_{i=1}x^2_i+2\sum^n_{k=1}
\theta_{+k}\theta_{-k}=x^2+2\theta^2$ is invariant under this action of the orthosymplectic supergroup.
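The relations (\[2\]) can be verified directly at the matrix level for small $m,n$. The following sketch (Python with NumPy; purely illustrative, not part of the construction above) builds the generators (\[1\]) for $osp(3|2)$ and checks a sample commutator and anticommutator:

```python
import numpy as np

m, n = 3, 1              # osp(3|2): indices 1..3 are even, 4..5 are odd
N = m + 2 * n

def e(i, j):
    """Elementary matrix e_{IJ} with (e_{IJ})_{KL} = delta_{IK} delta_{JL}."""
    M = np.zeros((N, N))
    M[i - 1, j - 1] = 1.0
    return M

# Graded metric G: identity on the so(m) block, symplectic unit on sp(2n).
G = np.zeros((N, N))
G[:m, :m] = np.eye(m)
G[m:m + n, m + n:] = np.eye(n)
G[m + n:, m:m + n] = -np.eye(n)

def E_so(i, j):          # even so(m) generators, 1 <= i, j <= m
    return sum(G[i - 1, k - 1] * e(k, j) - G[j - 1, k - 1] * e(k, i)
               for k in range(1, m + 1))

def E_sp(ib, jb):        # even sp(2n) generators, m+1 <= ib, jb <= m+2n
    return sum(G[ib - 1, kb - 1] * e(kb, jb) + G[jb - 1, kb - 1] * e(kb, ib)
               for kb in range(m + 1, N + 1))

def E_odd(i, jb):        # odd (fermionic) generators
    return (sum(G[i - 1, k - 1] * e(k, jb) for k in range(1, m + 1)) +
            sum(G[jb - 1, kb - 1] * e(kb, i) for kb in range(m + 1, N + 1)))

def comm(A, B):  return A @ B - B @ A
def acomm(A, B): return A @ B + B @ A

# [E_{12}, E_{13}] = -E_{23}  (G is the identity on the so(3) block)
assert np.allclose(comm(E_so(1, 2), E_so(1, 3)), -E_so(2, 3))

# Odd-odd relation: {E_{1,4}, E_{3,5}} = G_{13} E_{45} - G_{45} E_{13}
assert np.allclose(acomm(E_odd(1, 4), E_odd(3, 5)),
                   G[0, 2] * E_sp(4, 5) - G[3, 4] * E_so(1, 3))
```

Note that the odd-odd relation is checked with an ordinary matrix anticommutator: the Grassmann coefficients only enter when generators are assembled into a supergroup element.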
Cayley-Klein orthogonal and symplectic algebras
===============================================
Orthogonal $so(m)$ and symplectic $sp(2n)$ algebras are even subalgebras of $osp(m|2n).$ On the other hand, both of these algebras may be contracted and analytically continued to the set of CK orthogonal (and symplectic) algebras. The CK group $SO(m;j)$ is defined as the set of transformations of the vector space ${\bf R}_m(j)$ which preserve the form $x^2(j)=x_1^2+\sum_{k=2}^{m}[1,k]^2x^2_k, $ where $ [i,k]=\prod^{\max(i,k)-1}_{p=\min(i,k)}j_p,
\, [i,i]=1, $ and each parameter $j_k=1,\iota_k,i,$ where $\iota_k$ are nilpotent ($\iota^2_k=0$), commuting ($\iota_k\iota_p=\iota_p\iota_k \neq 0$) generators of the Pimenov algebra ${\bf P}(\iota).$ The Cartesian components of a vector $x(j)\in {\bf R}_m(j)$ are $x^t(j)=(x_1,j_1x_2, \ldots ,[1,m]x_m)^t, $ as easily follows from $x^2(j).$ For an $m\times m$ matrix $g(j) \in SO(m;j)$ the transformation $g(j): {\bf R}_m(j) \rightarrow {\bf R}_m(j)$ means that the vector $x'(j)=g(j)x(j)$ has exactly the same distribution of parameters $j$ among its components as $x(j).$ This requirement gives the distribution of parameters $j$ among the elements of the matrix $g(j),$ i.e. it builds the fundamental representation of the CK group $SO(m;j)$ starting from the quadratic form. It is remarkable that the same distribution of the parameters $j$ holds for the CK Lie algebra $so(m;j),$ namely $A_{ik}=[i,k]a_{ik}$ for $A \in so(m;j).$
CK symplectic group $Sp(2n;\omega)$ is defined as the set of transformations of ${\bf R}_n(\omega) \times {\bf R}_n(\omega),$ which preserve the bilinear form $S(\omega)=S_1+
\sum_{k=2}^{n}(1,k)^2S_k,$ where $S_k(y,z)=y_kz_{n+k}-y_{n+k}z_k, \,
(i,k)=\prod^{\max(i,k)-1}_{p=\min(i,k)}
\omega_k, \, (i,i)=1, \,
\omega_k=1,\xi_k,i, \, \xi^2_k=0, \,
\xi_k\xi_p=\xi_p\xi_k.$ The distribution of parameters $\omega_k$ among matrix elements of the fundamental representation $M(\omega)=\left( \begin{array}{cc}
H(\omega) & E(\omega) \cr
F(\omega) & -H^t(\omega) \end{array} \right)$ of the CK symplectic algebra $sp(2n;\omega)$ may be obtained as for orthogonal CK algebras and is as follows: $B_{ik}=(i,k)b_{ik},$ where $B=H(\omega),E(\omega),F(\omega).$
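The nilpotent contraction parameters $\iota_k$ admit a simple computational realization. The following sketch (Python; illustrative only) models elements of the Pimenov algebra ${\bf P}(\iota)$ and checks nilpotency, commutativity, and that mixed products such as $\iota_1\iota_2$ survive:

```python
class Pimenov:
    """Element of the Pimenov algebra P(iota): a sum of products of
    commuting nilpotent generators, stored as {frozenset_of_indices: coeff}."""
    def __init__(self, terms):
        self.terms = {s: c for s, c in terms.items() if c != 0}

    def __mul__(self, other):
        out = {}
        for s1, c1 in self.terms.items():
            for s2, c2 in other.terms.items():
                if s1 & s2:                  # repeated generator: iota_k^2 = 0
                    continue
                out[s1 | s2] = out.get(s1 | s2, 0) + c1 * c2
        return Pimenov(out)

def iota(k):
    """The generator iota_k."""
    return Pimenov({frozenset({k}): 1})

i1, i2 = iota(1), iota(2)
assert (i1 * i1).terms == {}                        # iota_1^2 = 0
assert (i1 * i2).terms == {frozenset({1, 2}): 1}    # iota_1 iota_2 != 0
assert (i1 * i2).terms == (i2 * i1).terms           # commutativity
```

With $j_1=\iota_1$, any product $[i,k]$ that contains $j_1$ twice vanishes, which is exactly how the contracted structure constants of the next section arise.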
CK orthosymplectic superalgebras $osp(m;j|2n;\omega)$
=====================================================
We shall define these superalgebras starting with the invariant form $$inv=u^2\sum^m_{k=1}[1,k]^2x^2_k+v^22\sum^{m+n}_{k=m+1}(1,\hat {\bar k}-m)^2
\theta_k\theta_{-k}\equiv u^2x^2(j)+v^22\theta^2(\omega),
\label{4}$$ $\hat {\bar k}=\bar k-m, \, \bar k=m+1,\ldots ,m+n; \,
\hat {\bar k}=\bar k-2m, \, \bar k=m+n+1, \ldots ,m+2n,$ which is the natural unification of the CK orthogonal and symplectic forms. The distributions of contraction parameters $j,\omega $ among matrix elements of the fundamental representation of $osp(m;j|2n;\omega)$ and the transformations of the generators (\[1\]) are obtained in the standard CK manner and are as follows: $$E_{ik}=[i,k]E^*_{ik}, \;\;
E_{\bar i\bar k}=(\hat {\bar i},\hat {\bar k})E^*_{\bar i\bar k},\;\;
E_{i\bar k}=u[1,i]v(1,\hat {\bar k})E^*_{i\bar k},
\label{5}$$ where $E^*$ are generators (\[1\]) of the starting superalgebra $osp(m|2n).$ The transformed generators are subject of the (super) commutation relations: $$[E_{ij},E_{kl}] = [i,j][k,l] \left (
{{G_{jk}E_{il}} \over {[i,l]}} + {{G_{il}E_{jk}} \over {[j,k]}} -
{{G_{ik}E_{jl}} \over {[j,l]}} - {{G_{jl}E_{ik}} \over {[i,k]}} \right ),$$
$$[E_{\bar{i}\bar{j}},E_{\bar{k}\bar{l}}] =
-(\hat{\bar{i}},\hat{\bar{j}})(\hat{\bar{k}},\hat{\bar{l}}) \left (
{{G_{\bar{j}\bar{k}}E_{\bar{i}\bar{l}}} \over
{(\hat{\bar{i}},\hat{\bar{l}})}} +
{{G_{\bar{i}\bar{l}}E_{\bar{j}\bar{k}}} \over
{(\hat{\bar{j}},\hat{\bar{k}})}} +
{{G_{\bar{i}\bar{k}}E_{\bar{j}\bar{l}}} \over
{(\hat{\bar{j}},\hat{\bar{l}})}} +
{{G_{\bar{j}\bar{l}}E_{\bar{i}\bar{k}}} \over
{(\hat{\bar{i}},\hat{\bar{k}})}} \right ),$$ $$[E_{ij},E_{\bar{k}\bar{l}}] = 0, \quad
[E_{ij},E_{k\bar{l}}] =
[i,j][1,k] \left (
{{G_{jk}E_{i\bar{l}}} \over {[1,i]}} -
{{G_{ik}E_{j\bar{l}}} \over {[1,j]}} \right ),$$ $$[E_{i\bar{j}},E_{\bar{k}\bar{l}}] =
-(1,\hat{\bar{j}})(\hat{\bar{k}},\hat{\bar{l}}) \left (
{{G_{\bar{j}\bar{k}}E_{i\bar{l}}} \over {(1,\hat{\bar{l}})}} +
{{G_{\bar{j}\bar{l}}E_{i\bar{k}}} \over {(1,\hat{\bar{k}})}} \right ),$$ $$\{E_{i\bar{j}},E_{k\bar{l}}\} =u^2v^2
[1,i](1,\hat{\bar{j}})[1,k](1,\hat{\bar{l}}) \left (
{{G_{ik}E_{\bar{j}\bar{l}}} \over {(\hat{\bar{j}},\hat{\bar{l}})}} -
{{G_{\bar{j}\bar{l}}E_{ik}} \over {[i,k]}} \right ).
\label{6}$$ For $u=\iota$ or $v=\iota,$ with $\iota^2=0,$ the superalgebra $osp(m|2n)$ is contracted to an inhomogeneous superalgebra, which is the semidirect sum $ \{E_{i\bar{j}}\} \S (so(m) \bigoplus sp(2n)),$ with all anticommutators of the odd generators equal to zero: $\{E_{i\bar{j}},E_{k\bar{p}} \} = 0.$
Examples
========
Kinematical contractions of $osp(1|2)$
--------------------------------------
These contractions were described in detail in [@Val-99], both at the level of (super) commutation relations and as subrepresentations of the fundamental matrix representation of $osp(3|2).$ The isomorphism of the low-dimensional Lie algebras $sp(2)$ and $so(3)$ is essentially used for contracting $osp(1|2)$ to the $(1+1)$-dimensional Poincare and Galilei superalgebras. This case is not included in the general CK contractions of the previous section, and we give here the fundamental $3\times 3$ representations of the $(1+1)$ Poincare and Galilei superalgebras, which were absent in [@Val-99]. For this purpose we need to introduce the algebra $A_4(\xi),$ which is freely generated by $\xi_1,\xi_2,$ where $\xi_1\xi_2=\xi_2\xi_1, \,
\xi^4_1=\xi^4_2=0.$ If one takes the basis $X^*_{23}=\displaystyle{\frac{i}{2}}E_{23}, \,
X^*_{12}=\displaystyle{\frac{i}{4}}(E_{33}+E_{22}), \,
X^*_{13}=\displaystyle{\frac{1}{4}}(E_{33}-E_{22}), \,
Q^*_+=E_{12}, \, Q^*_-=E_{13}, $ and transforms the generators as follows $$X_{12}=\omega^2_1X^*_{12}, \, X_{23}=\omega^2_2X^*_{23}, \,
X_{13}=\omega_1^2\omega_2^2X^*_{13}, \, Q_{\pm}=\omega_1\omega_2Q^*_{\pm},
\label{7}$$ where the $3\times 3$ matrices $E$ are given by (\[1\]) and each parameter $\omega_k=1,\xi_k,i, \, k=1,2, $ then the (super) commutation relations of $osp(1|2;\omega)$ may be written in the form $$[X_{12},X_{13}]=\omega^4_1X_{23}, \,
[X_{13},X_{23}]=\omega^4_2X_{13}, \,
[X_{23},X_{12}]=X_{13},$$ $$[X_{12},Q_{\pm}]=\pm\frac{i}{2}\omega^2_1Q_{\mp}, \,
[X_{13},Q_{\pm}]=\frac{1}{2}\omega^2_1\omega_2^2Q_{\mp},
[X_{23},Q_{\pm}]=\pm\frac{i}{2}\omega^2_2Q_{\mp},$$ $$\{Q_+,Q_-\}=-2i\omega^2_1X_{12},\,
\{Q_+,Q_+\}=-2(X_{13}+i\omega_2^2X_{12}), \,$$ $$\{Q_-,Q_-\}=2(X_{13}-i\omega_2^2X_{12}),
\label{8}$$ which coincides with the super commutators (4.51) in [@Val-99] for the standard contractions of $osp(1|2).$ Our designations of generators and contraction parameters are related to the corresponding ones of [@Val-99] as follows: $\omega_k=\epsilon_k, X_{12}=K_{21}=H, \, X_{13}=K_{20}=P, \,
X_{23}=K_{01}=K.$ The slight differences in the structure constants are due to the use of the complex $so(3;{\bf C})$ instead of its anti-de Sitter real form $so(2,1).$ The $(1+1)$ super Poincare algebra is obtained for $\omega_1=\xi_1, \omega_2=1$ (compare with (4.12) in [@Val-99]) and the $(1+1)$ super Galilei algebra is given by (\[8\]) for $\omega_1=\xi_1, \omega_2=\xi_2$ (compare with (4.39) in [@Val-99]). The commutators $[A,B]=\xi^kC, \, k=1,2,3, $ are regarded as zero, i.e. $[A,B]=0.$
The Grassmann-hull [@Bos-91] $M(\omega)=2\alpha X_{23}+2\beta X_{12}+
2\gamma X_{13}+\mu Q_++\nu Q_- $ of $osp(1|2;\omega)$ is represented by the matrix $$M(\omega)=\left(
\begin{array}{c|cc}
0 & \omega_1\omega_2\mu & \omega_1\omega_2\nu \cr \hline
-\omega_1\omega_2\nu & -i\omega_2^2\alpha &
-\omega_1^2(i\beta+\omega^2_2\gamma) \cr
\omega_1\omega_2\mu & \omega_1^2(i\beta-\omega^2_2\gamma)&
i\omega_2^2\alpha
\end{array}
\right),$$ where $\mu, \nu $ are odd grassmannian elements: $\mu^2=\nu^2=0, \, \mu \nu=-\nu\mu. $ For this simplest case it is possible to find the corresponding supergroup $Osp(1|2;\omega)$ explicitly [@M-93], namely $${\cal M}(\omega)=\exp M(\omega)=I+\frac{\sinh u}{u}M(\omega)+
\frac{\cosh u-1}{u^2}M^2(\omega)+$$ $$+\omega_1^2\omega_2^2\frac{2(1-\cosh u)+u\sinh u}{u^2}\mu\nu A+
\omega_1^2\omega_2^2\frac{u\cosh u-\sinh u}{u^3}\mu\nu B(\omega),$$ where $u^2=\omega_1^4(\beta^2+\omega_2^4\gamma^2)-\omega_2^4\alpha^2, $ $$M^2(\omega)=\left(\begin{array}{c|c}
-2\omega_1^2\omega_2^2\mu\nu&
-i\omega_2^2\alpha\mu+\omega_1^2(i\beta-\omega^2_2\gamma)\nu
\cr \hline
i\omega_2^2\alpha\nu-\omega_1^2(i\beta+\omega^2_2\gamma)\mu &
u^2+\omega_1^2\omega_2^2\mu\nu \cr
i\omega_2^2\alpha\mu-\omega_1^2(i\beta-\omega^2_2\gamma)\nu & 0
\end{array} \right.$$ $$\left. \begin{array}{c}
i\omega_2^2\alpha\nu-\omega_1^2(i\beta+\omega^2_2\gamma)\mu \cr \hline
0 \cr
u^2+\omega_1^2\omega_2^2\mu\nu \end{array} \right),$$ $$A=\left(\begin{array}{c|cc}
0 & 0 & 0 \cr \hline
0 & 1 & 0 \cr
0 & 0 & 1 \end{array} \right), \quad
B(\omega)=\left(\begin{array}{c|cc}
0 & 0 & 0 \cr \hline
0 & -i\omega_2^2\alpha & -\omega_1^2(i\beta+\omega_2^2\gamma) \cr
0 & \omega_1^2(i\beta-\omega_2^2\gamma)&
i\omega_2^2\alpha \end{array} \right).$$
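For vanishing odd parameters ($\mu=\nu=0$) the Grassmann terms of ${\cal M}(\omega)$ drop out and the closed form reduces to its first three terms, which can be compared against a direct series evaluation of $\exp M(\omega)$. A numerical sketch (Python with NumPy; $\omega_1=\omega_2=1$ and sample parameters chosen so that $u^2>0$ are illustrative assumptions):

```python
import numpy as np

alpha, beta, gamma = 0.2, 0.3, 0.4            # sample even parameters
w1 = w2 = 1.0                                  # no contraction

# M(omega) with the odd parameters mu = nu = 0.
M = np.array([
    [0, 0, 0],
    [0, -1j * w2**2 * alpha, -w1**2 * (1j * beta + w2**2 * gamma)],
    [0,  w1**2 * (1j * beta - w2**2 * gamma), 1j * w2**2 * alpha],
])

u2 = w1**4 * (beta**2 + w2**4 * gamma**2) - w2**4 * alpha**2
u = np.sqrt(u2)                                # u2 = 0.21 > 0 here

# Closed form: the two mu*nu terms vanish when mu = nu = 0.
closed = np.eye(3) + np.sinh(u) / u * M + (np.cosh(u) - 1) / u2 * (M @ M)

# Plain matrix exponential by truncated power series.
series = np.eye(3, dtype=complex)
term = np.eye(3, dtype=complex)
for k in range(1, 40):
    term = term @ M / k
    series = series + term

assert np.allclose(closed, series)
```

The agreement rests on $M^2=u^2\,{\rm diag}(0,1,1)$ for $\mu=\nu=0$, so all higher powers of $M$ collapse onto $M$ and $M^2$.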
In the case of the superalgebra $osp(1|4)$ the isomorphism $sp(4) \cong so(5) $ is used in [@Val-99], and the standard kinematical contractions of $osp(1|4)$ to the $(3+1)$ super Poincare and $(3+1)$ super Galilei superalgebras are considered both for abstract generators and for the embedding in the fundamental representation of $osp(5|4).$
CK contractions of $osp(3|2)$
-----------------------------
This superalgebra has $so(3)$ as an even subalgebra; therefore its contractions to the kinematical $(1+1)$ Poincare, Newton and Galilei superalgebras may be performed according to the general CK scheme of the first sections. But unlike $osp(1|2)$ with its two odd generators, the superalgebra $osp(3|2)$ has six odd generators. In the basis $X_{ik}=E_{ki}, \, k,i=1,2,3, \, F=\displaystyle{\frac{1}{2}}E_{44}, \,
E=-\displaystyle{\frac{1}{2}}E_{55}, \,
H=-E_{45}, \, Q_k=E_{k4}, \, Q_{-k}=E_{k5}$ the generators are affected by the contraction coefficients $j_1,j_2$ in the following way $$X_{ik}\to [i,k]X_{ik}, \quad Q_{\pm k}\to [1,k]Q_{\pm k}
\label{12}$$ and $H,F,E $ remain unchanged. Then the superalgebra $osp(3;j|2)$ is given by $$[X_{12},X_{13}]=j_1^2X_{23}, \quad [X_{13},X_{23}]=j_2^2X_{12}, \quad
[X_{23},X_{12}]=X_{13},$$ $$[H,E]=2E, \quad [H,F]=-2F, \quad [E,F]=H,$$ $$[X_{ik},Q_{\pm i}]=Q_{\pm k}, \quad
[X_{ik},Q_{\pm k}]=-[i,k]^2Q_{\pm i}, \;\; i<k,$$ $$[H,Q_{\pm k}]=\mp Q_{\pm k}, \quad
[E,Q_k]=-Q_{-k}, \quad [F,Q_{-k}]=-Q_k,$$ $$\{Q_k,Q_k\}=[1,k]^2F, \quad \{Q_{-k},Q_{-k}\}=-[1,k]^2E,$$ $$\{Q_k,Q_{-k}\}=-[1,k]^2H, \quad \{Q_{\pm i},Q_{\mp k}\}=\pm [1,k]^2X_{ik}.$$ The non-minimal Poincare superalgebra is obtained for $j_1=\iota_1, \, j_2=i $ and has the structure of the semidirect sum $T \S (\{X_{23}\}\oplus osp(1|2)),$ where abelian $T=\{X_{12},X_{13},Q_{\pm 2},Q_{\pm 3}\}$ and $osp(1|2)=\{H,E,F,Q_{\pm 1}\}.$ The Newton superalgebra $osp(3;\iota_2|2)=T_2 \S osp(2|2),$ where $T_2=\{X_{13},X_{23},Q_{\pm 3}\} $ and $osp(2|2) $ is generated by $X_{12},H,E,F,Q_{\pm 1},Q_{\pm 2}.$ Finally the non-minimal Galilei superalgebra may be presented as semidirect sums $osp(3;\iota_1,\iota_2|2)=(T \S\{X_{23}\})\S osp(1|2)=
T \S (\{X_{23}\}\oplus osp(1|2)).$
Acknowledgments {#acknowledgments .unnumbered}
===============
NG would like to thank Mariano del Olmo for sending a copy of the paper [@Val-99]. This work was supported by the Russian Foundation for Basic Research under Project 01-01-96433.
References {#references .unnumbered}
==========
[99]{}
Yu. A. Golfand and E. P. Likhtman, .
D. V. Volkov and V. P. Akulov, .
J. Wess and B. Zumino, .
P. West, [*Introduction to supersymmetry and supergravity*]{}, (World Scientific, Singapore, 1986).
B. W. Keck , .
M. A. Vasiliev,
I. Bars, hep-th/9608061, (1996). S. Bellucci, E. Ivanov and S. Krivonos, .
V. Hussin, J. Negro and M.A. del Olmo, .
H. Boseck, .
V. Hussin and L.M. Nieto, [*Preprint*]{} CRM–1863 (1993).
|
// Code generated by smithy-go-codegen DO NOT EDIT.
package storagegateway
import (
"context"
awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
"github.com/aws/aws-sdk-go-v2/aws/retry"
"github.com/aws/aws-sdk-go-v2/aws/signer/v4"
smithy "github.com/awslabs/smithy-go"
"github.com/awslabs/smithy-go/middleware"
smithyhttp "github.com/awslabs/smithy-go/transport/http"
)
// Returns information about the cache of a gateway. This operation is only
// supported in the cached volume, tape, and file gateway types. The response
// includes disk IDs that are configured as cache, and it includes the amount of
// cache allocated and used.
func (c *Client) DescribeCache(ctx context.Context, params *DescribeCacheInput, optFns ...func(*Options)) (*DescribeCacheOutput, error) {
stack := middleware.NewStack("DescribeCache", smithyhttp.NewStackRequest)
options := c.options.Copy()
for _, fn := range optFns {
fn(&options)
}
addawsAwsjson11_serdeOpDescribeCacheMiddlewares(stack)
awsmiddleware.AddRequestInvocationIDMiddleware(stack)
smithyhttp.AddContentLengthMiddleware(stack)
AddResolveEndpointMiddleware(stack, options)
v4.AddComputePayloadSHA256Middleware(stack)
retry.AddRetryMiddlewares(stack, options)
addHTTPSignerV4Middleware(stack, options)
awsmiddleware.AddAttemptClockSkewMiddleware(stack)
addClientUserAgent(stack)
smithyhttp.AddErrorCloseResponseBodyMiddleware(stack)
smithyhttp.AddCloseResponseBodyMiddleware(stack)
addOpDescribeCacheValidationMiddleware(stack)
stack.Initialize.Add(newServiceMetadataMiddleware_opDescribeCache(options.Region), middleware.Before)
addRequestIDRetrieverMiddleware(stack)
addResponseErrorMiddleware(stack)
for _, fn := range options.APIOptions {
if err := fn(stack); err != nil {
return nil, err
}
}
handler := middleware.DecorateHandler(smithyhttp.NewClientHandler(options.HTTPClient), stack)
result, metadata, err := handler.Handle(ctx, params)
if err != nil {
return nil, &smithy.OperationError{
ServiceID: ServiceID,
OperationName: "DescribeCache",
Err: err,
}
}
out := result.(*DescribeCacheOutput)
out.ResultMetadata = metadata
return out, nil
}
type DescribeCacheInput struct {
// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways () operation
// to return a list of gateways for your account and AWS Region.
GatewayARN *string
}
type DescribeCacheOutput struct {
// Percent use of the gateway's cache storage. This metric applies only to the
// gateway-cached volume setup. The sample is taken at the end of the reporting
// period.
CacheUsedPercentage *float64
// Percent of application read operations from the file shares that are served from
// cache. The sample is taken at the end of the reporting period.
CacheHitPercentage *float64
// An array of strings that identify disks that are to be configured as working
// storage. Each string has a minimum length of 1 and maximum length of 300. You
// can get the disk IDs from the ListLocalDisks () API.
DiskIds []*string
// Percent of application read operations from the file shares that are not served
// from cache. The sample is taken at the end of the reporting period.
CacheMissPercentage *float64
// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways () operation
// to return a list of gateways for your account and AWS Region.
GatewayARN *string
// The file share's contribution to the overall percentage of the gateway's cache
// that has not been persisted to AWS. The sample is taken at the end of the
// reporting period.
CacheDirtyPercentage *float64
// The amount of cache in bytes allocated to a gateway.
CacheAllocatedInBytes *int64
// Metadata pertaining to the operation's result.
ResultMetadata middleware.Metadata
}
func addawsAwsjson11_serdeOpDescribeCacheMiddlewares(stack *middleware.Stack) {
stack.Serialize.Add(&awsAwsjson11_serializeOpDescribeCache{}, middleware.After)
stack.Deserialize.Add(&awsAwsjson11_deserializeOpDescribeCache{}, middleware.After)
}
func newServiceMetadataMiddleware_opDescribeCache(region string) awsmiddleware.RegisterServiceMetadata {
return awsmiddleware.RegisterServiceMetadata{
Region: region,
ServiceID: ServiceID,
SigningName: "storagegateway",
OperationName: "DescribeCache",
}
}
|
------------------------------------------
-- agar_ada_demo.adb: Agar-GUI Ada demo --
------------------------------------------
with Agar.Init;
with Agar.Error;
with Agar.Data_Source;
with Agar.Event;
with Agar.Timer;
with Agar.Object;
with Agar.Init_GUI;
with Agar.Surface; use Agar.Surface;
with Agar.Text;
--with Agar.Widget;
with Interfaces; use Interfaces;
with System;
with Ada.Characters.Latin_1;
with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;
with Ada.Numerics.Elementary_Functions;
use Ada.Numerics.Elementary_Functions;
procedure agar_ada_demo is
package T_IO renames Ada.Text_IO;
package RT renames Ada.Real_Time;
package LAT1 renames Ada.Characters.Latin_1;
Epoch : constant RT.Time := RT.Clock;
Major, Minor, Patch : Natural;
begin
--
-- Initialize the Agar-Core library.
--
if not Agar.Init.Init_Core ("agar_ada_demo") then
raise Program_Error with Agar.Error.Get_Error;
end if;
--
-- Initialize the Agar-GUI library and auto-select the driver backend.
--
if not Agar.Init_GUI.Init_Graphics ("") then
raise Program_Error with Agar.Error.Get_Error;
end if;
--
-- Print Agar version and memory model.
--
declare
begin
Agar.Init.Get_Version(Major, Minor, Patch);
T_IO.Put_Line(" _ _ _ ___ _ ___ _");
T_IO.Put_Line(" / _ \ / _ \ / _ \ | _ \ / _ \ | _ \ / _ \");
T_IO.Put_Line(" | |_| | | (_| | | |_| | | |_) | - | |_| | | |_) | | |_| |");
T_IO.Put_Line(" |_| |_| \__, | |_| |_| |_| |_| |_| |_| |___ / |_| |_|");
T_IO.Put_Line(" |___/ ");
T_IO.Put_Line
(Integer'Image(Major) & "." &
Integer'Image(Minor) & "." &
Integer'Image(Patch));
#if AG_MODEL = AG_SMALL
T_IO.Put_Line("Memory model: SMALL");
#elsif AG_MODEL = AG_MEDIUM
T_IO.Put_Line("Memory model: MEDIUM");
#elsif AG_MODEL = AG_LARGE
T_IO.Put_Line("Memory model: LARGE");
#end if;
T_IO.Put_Line("Agar was initialized in" &
Duration'Image(RT.To_Duration(RT.Clock - Epoch)) & "s");
end;
--
-- Check that the Ada object sizes match the definitions in agar.def
-- (which is generated by a configure test that invokes the C API).
--
declare
procedure Check_Sizeof
(Name : String;
Size : Natural;
D_Size : Natural)
is
Size_Bytes : constant Natural := Size / System.Storage_Unit;
begin
if (Size_Bytes /= D_Size) then
raise Program_Error with
"Size of " & Name & " (" & Natural'Image(Size_Bytes) & ") " &
"differs from C API (" & Natural'Image(D_Size) &
"). Need to recompile?";
else
T_IO.Put_Line("Size of " & Name & " =" & Natural'Image(Size_Bytes) & " OK");
end if;
end;
begin
-- Core --
Check_Sizeof("AG_Object", Agar.Object.Object'Size, $SIZEOF_AG_OBJECT);
Check_Sizeof("AG_ObjectClass", Agar.Object.Class'Size, $SIZEOF_AG_OBJECTCLASS);
Check_Sizeof("AG_DataSource", Agar.Data_Source.Data_Source'Size, $SIZEOF_AG_DATASOURCE);
Check_Sizeof("AG_Event", Agar.Event.Event'Size, $SIZEOF_AG_EVENT);
Check_Sizeof("AG_TimerPvt", Agar.Timer.Timer_Private'Size, $SIZEOF_AG_TIMERPVT);
Check_Sizeof("AG_Timer", Agar.Timer.Timer'Size, $SIZEOF_AG_TIMER);
-- GUI --
Check_Sizeof("AG_Color", Agar.Surface.AG_Color'Size, $SIZEOF_AG_COLOR);
Check_Sizeof("AG_FontSpec", Agar.Text.AG_Font_Spec'Size, $SIZEOF_AG_FONTSPEC);
Check_Sizeof("AG_Font", Agar.Text.AG_Font'Size, $SIZEOF_AG_FONT);
Check_Sizeof("AG_Glyph", Agar.Text.AG_Glyph'Size, $SIZEOF_AG_GLYPH);
Check_Sizeof("AG_TextState", Agar.Text.AG_Text_State'Size, $SIZEOF_AG_TEXTSTATE);
Check_Sizeof("AG_TextMetrics", Agar.Text.AG_Text_Metrics'Size, $SIZEOF_AG_TEXTMETRICS);
Check_Sizeof("AG_Rect", Agar.Surface.AG_Rect'Size, $SIZEOF_AG_RECT);
Check_Sizeof("AG_PixelFormat", Agar.Surface.Pixel_Format'Size, $SIZEOF_AG_PIXELFORMAT);
Check_Sizeof("AG_Surface", Agar.Surface.Surface'Size, $SIZEOF_AG_SURFACE);
end;
--
-- Create a surface of pixels.
--
declare
W : constant Natural := 640;
H : constant Natural := 480;
Surf : constant Surface_Access := New_Surface(W,H);
Blue : aliased AG_Color := Color_8(0,0,200,255);
Border_W : constant Natural := 20;
begin
if Surf = null then
raise Program_Error with Agar.Error.Get_Error;
end if;
--
-- Fill the background with a given color.
-- Here are different ways of specifying colors:
--
Fill_Rect
(Surface => Surf,
Color => Color_8(200,0,0)); -- 8-bit RGB components
Fill_Rect
(Surface => Surf,
Color => Color_16(51400,0,0)); -- 16-bit RGB components
Fill_Rect
(Surface => Surf,
Color => Color_HSV(0.9, 1.0, 1.0, 1.0)); -- Hue, Saturation & Value
Fill_Rect
(Surface => Surf,
Color => Blue); -- An AG_Color argument
-- Fill_Rect
-- (Surface => Surf,
-- Color => Blue'Unchecked_Access); -- An AG_Color access
--
-- Use Put_Pixel to create a gradient.
--
T_IO.Put_Line("Creating gradient");
for Y in Border_W .. H-Border_W loop
if Y rem 4 = 0 then
Blue.B := Blue.B - Component_Offset_8(1);
end if;
Blue.G := 0;
for X in Border_W .. W-Border_W loop
if X rem 8 = 0 then
Blue.G := Blue.G + Component_Offset_8(1);
end if;
Put_Pixel
(Surface => Surf,
X => X,
Y => Y,
Pixel => Map_Pixel(Surf, Blue),
Clipping => false);
end loop;
end loop;
--
-- Generate a 2-bit indexed surface and initialize its 4-color palette.
--
declare
Bitmap : Surface_Access;
begin
T_IO.Put_Line("Generating a 2-bpp (4-color) indexed surface");
Bitmap := New_Surface
(Mode => INDEXED,
Bits_per_Pixel => 2,
W => 128,
H => 128);
-- R G B --
Set_Color(Bitmap, 0, Color_8(0, 0, 0));
Set_Color(Bitmap, 1, Color_8(0, 100,0));
Set_Color(Bitmap, 2, Color_8(150,0, 0));
Set_Color(Bitmap, 3, Color_8(200,200,0));
for Y in 0 .. Bitmap.H loop
for X in 0 .. Bitmap.W loop
if Natural(X) rem 16 = 0 then
Put_Pixel
(Surface => Bitmap,
X => Integer(X),
Y => Integer(Y),
Pixel => 1);
else
if Natural(Y) rem 8 = 0 then
Put_Pixel
(Surface => Bitmap,
X => Integer(X),
Y => Integer(Y),
Pixel => 1);
elsif Sqrt(Float(X)*Float(X) + Float(Y)*Float(Y)) < 50.0 then
Put_Pixel
(Surface => Bitmap,
X => Integer(X),
Y => Integer(Y),
Pixel => 2);
elsif Sqrt(Float(X)*Float(X) + Float(Y)*Float(Y)) > 150.0 then
Put_Pixel
(Surface => Bitmap,
X => Integer(X),
Y => Integer(Y),
Pixel => 3);
else
Put_Pixel
(Surface => Bitmap,
X => Integer(X),
Y => Integer(Y),
Pixel => 0);
end if;
end if;
end loop;
end loop;
--
-- Export our 2bpp bitmap to a PNG file.
--
T_IO.Put_Line("Writing 2bpp bitmap to output-index.png");
if not Export_PNG(Bitmap, "output-index.png") then
T_IO.Put_Line ("output-index.png: " & Agar.Error.Get_Error);
end if;
--
-- Blit our 2bpp bitmap to Surf.
--
T_IO.Put_Line("Blitting 2bpp bitmap, converting");
Blit_Surface
(Source => Bitmap,
Target => Surf,
Dst_X => 32,
Dst_Y => 32);
-- Blit again with a different palette.
Set_Color(Bitmap, 0, Color_8(255,255,255));
Set_Color(Bitmap, 1, Color_8(100,100,180));
Set_Color(Bitmap, 2, Color_8(120,0,0));
Set_Color(Bitmap, 3, Color_8(0,0,150));
Blit_Surface
(Source => Bitmap,
Target => Surf,
Dst_X => 200,
Dst_Y => 32);
Free_Surface (Bitmap);
end;
--
-- Test the font engine by rendering text to a surface.
--
T_IO.Put_Line("Testing Agar's font engine");
declare
Hello_Label : Surface_Access;
Text_W, Text_H : Natural;
Line_Count : Natural;
begin
-- Push rendering attributes onto the stack.
Agar.Text.Push_Text_State;
-- Set the text color.
Agar.Text.Text_Set_Color_8(16#73fa00ff#);
-- Render some text.
Hello_Label := Agar.Text.Text_Render("Hello, world!");
T_IO.Put_Line("Rendered `Hello' is: " &
C.unsigned'Image(Hello_Label.W) & "x" &
C.unsigned'Image(Hello_Label.H) & "x" &
C.int'Image(Hello_Label.Format.Bits_per_Pixel) & "bpp");
Blit_Surface
(Source => Hello_Label,
Target => Surf,
Dst_X => 0,
Dst_Y => 0);
Free_Surface(Hello_Label);
-- Change some attributes and render text again.
Agar.Text.Text_Set_BG_Color_8(16#00ee00ff#);
Agar.Text.Text_Set_Color_8(16#000000ff#);
Agar.Text.Text_Set_Font
(Family => "courier-prime",
Size => Agar.Text.AG_Font_Points(18),
Bold => True);
Hello_Label := Agar.Text.Text_Render("Hello, world!");
Blit_Surface
(Source => Hello_Label,
Target => Surf,
Dst_X => 100,
Dst_Y => 0);
Free_Surface(Hello_Label);
-- Set to 150% of the current font size and dark green BG.
Agar.Text.Text_Set_Font
(Percent => 150);
Agar.Text.Text_Set_Color_8(255,150,150);
Agar.Text.Text_Set_BG_Color_8(16#005500ff#);
Hello_Label := Agar.Text.Text_Render
("Agar v" &
Integer'Image(Major) & "." &
Integer'Image(Minor) & "." &
Integer'Image(Patch));
Blit_Surface
(Source => Hello_Label,
Target => Surf,
Dst_X => 360,
Dst_Y => 420);
Free_Surface(Hello_Label);
-- Calculate how large a surface needs to be to fit rendered text.
Agar.Text.Size_Text
(Text => "Agar version " &
Integer'Image(Major) & "." &
Integer'Image(Minor) & "." &
Integer'Image(Patch),
W => Text_W,
H => Text_H);
T_IO.Put_Line("Font engine says `Hello' should take" &
Natural'Image(Text_W) & " x " & Natural'Image(Text_H) & " pixels");
Agar.Text.Size_Text
(Text => "Hello, one" & LAT1.CR & LAT1.LF &
"two" & LAT1.CR & LAT1.LF &
"and three",
W => Text_W,
H => Text_H,
Line_Count => Line_Count);
T_IO.Put_Line("Font engine says three lines should take" &
Natural'Image(Text_W) & " x" & Natural'Image(Text_H) & " pixels and" &
Natural'Image(Line_Count) & " lines");
--
-- Calculate offsets needed to justify and align text in a given area.
--
declare
X,Y : Integer;
begin
Agar.Text.Text_Align
(W_Area => 320,
H_Area => 240,
W_Text => Text_W,
H_Text => Text_H,
X => X,
Y => Y);
T_IO.Put_Line("To center it in 320x240, offsets would be X:" &
Natural'Image(X) & ", Y:" &
Natural'Image(Y));
end;
-- Pop rendering attributes off the stack.
Agar.Text.Pop_Text_State;
end;
--
-- Set a clipping rectangle.
--
Set_Clipping_Rect
(Surface => Surf,
X => 55,
Y => 220,
W => 640-(55*2),
H => 200);
--
-- Show the extent of the clipping rectangle.
--
T_IO.Put_Line("Testing clipping rectangles");
declare
White : constant AG_Pixel := Map_Pixel(Surf, Color_8(255,255,255));
Clip_X : constant Integer := Integer(Surf.Clip_Rect.X);
Clip_Y : constant Integer := Integer(Surf.Clip_Rect.Y);
Clip_W : constant Integer := Integer(Surf.Clip_Rect.W);
Clip_H : constant Integer := Integer(Surf.Clip_Rect.H);
procedure Put_Crosshairs
(Surface : Surface_Access;
X,Y : Natural;
Pixel : AG_Pixel) is
begin
for Z in 1 .. 3 loop
Put_Pixel (Surface, X+Z,Y, Pixel, Clipping => false);
Put_Pixel (Surface, X-Z,Y, Pixel, Clipping => false);
Put_Pixel (Surface, X,Y+Z, Pixel, Clipping => false);
Put_Pixel (Surface, X,Y-Z, Pixel, Clipping => false);
end loop;
end;
begin
Put_Crosshairs (Surf, Clip_X, Clip_Y, White);
Put_Crosshairs (Surf, Clip_X+Clip_W, Clip_Y, White);
Put_Crosshairs (Surf, Clip_X+Clip_W, Clip_Y+Clip_H, White);
Put_Crosshairs (Surf, Clip_X, Clip_Y+Clip_H, White);
end;
T_IO.Put_Line
("Surf W:" & C.unsigned'Image(Surf.W) &
" H:" & C.unsigned'Image(Surf.H) &
" Pitch:" & C.unsigned'Image(Surf.Pitch) &
" Clip_X:" & C.int'Image(Surf.Clip_Rect.X) &
" Clip_Y:" & C.int'Image(Surf.Clip_Rect.Y) &
" Clip_W:" & C.int'Image(Surf.Clip_Rect.W) &
" Clip_H:" & C.int'Image(Surf.Clip_Rect.H) &
" Padding:" & C.unsigned'Image(Surf.Padding));
--
-- Load a surface from a PNG file and blit it onto Surf. Transparency is
-- expressed by colorkey, or by an alpha component of 0 (in packed RGBA).
--
T_IO.Put_Line("Testing transparency");
declare
Denis : constant Surface_Access := New_Surface("axe.png");
Degs : Float := 0.0;
Alpha : AG_Component := 0;
begin
if Denis /= null then
T_IO.Put_Line
("Denis W:" & C.unsigned'Image(Denis.W) &
" H:" & C.unsigned'Image(Denis.H) &
" Pitch:" & C.unsigned'Image(Denis.Pitch) &
" Clip_X:" & C.int'Image(Denis.Clip_Rect.X) &
" Clip_Y:" & C.int'Image(Denis.Clip_Rect.Y) &
" Clip_W:" & C.int'Image(Denis.Clip_Rect.W) &
" Clip_H:" & C.int'Image(Denis.Clip_Rect.H) &
" Padding:" & C.unsigned'Image(Denis.Padding));
for Y in 1 .. 50 loop
Degs := Degs + 30.0;
Set_Alpha
(Surface => Denis,
Alpha => Alpha); -- Per-surface alpha
Alpha := Alpha + 12;
-- Render to target coordinates under Surf.
for Z in 1 .. 3 loop
Blit_Surface
(Source => Denis,
Target => Surf,
Dst_X => Y*25,
Dst_Y => H/2 + Z*40 - Natural(Denis.H)/2 -
Integer(50.0 * Sin(Degs,360.0)));
end loop;
end loop;
else
T_IO.Put_Line (Agar.Error.Get_Error);
end if;
end;
T_IO.Put_Line("Testing export to PNG");
if not Export_PNG(Surf, "output.png") then
raise Program_Error with Agar.Error.Get_Error;
end if;
T_IO.Put_Line ("Surface saved to output.png");
Free_Surface(Surf);
end;
T_IO.Put_Line
("Exiting after" &
Duration'Image(RT.To_Duration(RT.Clock - Epoch)) & "s");
Agar.Init.Quit;
end agar_ada_demo;
|
This application focuses on the prevention of cerebral and myocardial injury during cardiac arrest and resuscitation. Our hypothesis is that a major mechanism of such injury is the nitric oxide-superoxide-peroxynitrite pathway. Our aim is to show that pharmacological modification of this pathway, using nitric oxide synthase inhibitors, both nonselective and neuronal selective, will ameliorate resuscitation injury and thereby enhance survival from cardiac arrest, and preserve post-arrest cerebral and cardiac function. We will test a series of specific hypotheses. We propose that nonselective NOS inhibitors, given during VF cardiac arrest-defibrillation-resuscitation sequences, enhance resumption of spontaneous circulation and improve post-arrest left ventricular performance. Conversely, nitric oxide donors given during arrest and resuscitation exacerbate cardiac resuscitation injury by increasing toxic peroxynitrite generation. We propose to demonstrate, in coronary microvessels and isolated vascular rings, a specific mechanism of cardiac resuscitation injury: direct current shock-induced loss of normal endothelial-mediated coronary arterial vasodilation. We will show that NOS inhibitors protect against this mechanism. Finally, in survival experiments, we will show that selective neuronal nNOS inhibition preserves neurologic function 48 hours after cardiac arrest and resuscitation and minimizes histopathological neuronal damage, while nonselective NOS inhibition enhances cardiac performance at 48 hours after arrest. More effective treatment of resuscitation-induced cardiac and brain injury is imperative for improved survival of the estimated 300,000 US victims of cardiac arrest every year. |
Q:
EventCreate.exe creates a "CustomSource" value, what does it mean?
The command-line EventCreate.exe tool registers a user-defined event source in the Registry for the Windows Event Log Viewer to use, like this:
eventcreate /t INFORMATION /ID 100 /L "Application" /SO [SourceName] /D "Description"
I wrote an app that has its own Event Log resource strings and is registered as an event source, per MSDN, but it doesn't use the CustomSource value and works fine.
I can't find any documentation on MSDN, or elsewhere online, on what CustomSource is meant for exactly. None of the registered sources on my machines use it.
Does anyone know what CustomSource is meant for, and how it works? Is it just something internal to EventCreate.exe, or does the Windows Event Log actually use it for something?
A:
Thanks to @RbMm for pointing out this blog article:
EventCreate and "ERROR: Source parameter is used to identify custom applications/scripts only"
For whatever reason, EventCreate was designed only to log events that are associated with event log sources that EventCreate itself created. It does this by adding a REG_DWORD value called CustomSource to the source's registry key when it creates a new source, and checking for that value when a source already exists. So, taking "MyStuff" as the source name in a command like the one above, if the "MyStuff" source didn't already exist in the Application log, the command would have created it and configured its key with a CustomSource value. Subsequent calls to EventCreate with the same source would succeed after verifying the existence of the CustomSource value. If, however, the "MyStuff" source had been created through another mechanism that didn't create a CustomSource flag, such as the PowerShell New-EventLog cmdlet, then you'd get the error message. If you create a CustomSource value in an event source's key yourself, EventCreate will work with that source.
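For illustration (the "MyStuff" source name is hypothetical), the flag EventCreate looks for is just a DWORD value under the source's registry key, which you could create by hand:

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Application\MyStuff" /v CustomSource /t REG_DWORD /d 1
```

After that, an eventcreate command with /SO MyStuff should accept the source even though EventCreate didn't create it.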
|
Over the last few months I’ve been working in Elixir and its most popular web framework Phoenix. During this time I built “Houston”, a deployment tool written in Elixir/Phoenix to help make deploying at TpT easier by providing a simple uniform interface to all of our Jenkins Pipelines. The Houston interface shows a number of important pieces of time-sensitive information that I needed to keep up-to-date so developers could coordinate deployments more effectively. I wanted to minimize the usage of other frameworks and tackle the problem quickly. I read about the team at thoughtbot using Phoenix Channels in lieu of React to deliver updated content to the browser and decided that using Channels would allow me to implement realtime updates easily by leveraging Phoenix directly.
A few weeks ago I had the pleasure of pairing with Chris McCord, one of the co-creators of the Phoenix framework, when TpT invited him to help with our migration to Elixir. One of the things we worked on together was implementing realtime updates for Houston with Channels. The Houston code is unfortunately still closed source, but I have created a simple application, TonicTime, to demonstrate the same Channel concepts. You can see the code on GitHub. The code on GitHub is slightly modified from the snippets here due to how it is being hosted.
After a short blurb about Channels we will look at the most important snippets from TonicTime to see how it all works.
What are Phoenix Channels?
Channels are a simple high-level abstraction for developing realtime components in Phoenix applications using WebSockets. Channels allow for interaction between application components that are not Elixir processes without building out a separate pub/sub system or using a third-party service like Pusher.
Using Channels
If you take a look at the embedded live demo of TonicTime below you will notice that the time is constantly updating without reloading the page or submitting a new request. So how is this happening? Below is a diagram of some of the components involved. We’ll look at the Javascript/HTML first and then the supporting Elixir modules. The lettered interaction stages are referenced below as we walk through the code.
HTML
The index template contains only one div with the current time inside, web/templates/page/index.html.eex :
<div id="clock" class="jumbotron">
  <h2><%= @time_now %></h2>
</div>
Above, @time_now is accessing a variable passed to the template in the assigns map. The Phoenix docs go into more detail about this; see Phoenix.View.render/3.
Javascript
All of our Javascript is in web/static/js/app.js , it opens a socket connection to our application’s mount point (A) and also specifies that we should replace the innerHTML content when receiving update messages (G):
// We set an explicit id on the div in our HTML to allow
// us to easily access it and replace its content.
let container = document.getElementById("clock")

// The code on GitHub connects to "/time/socket", this is due
// to how TonicTime is deployed. The socket endpoint just needs
// to match the socket created in the TonicTime.Endpoint module.
let socket = new Socket("/socket")
socket.connect()

let timeChannel = socket.channel("time:now")

// When an `update` message is received we replace the contents
// of the "clock" element with server-side rendered HTML.
timeChannel.on("update", ({ html }) => container.innerHTML = html)

// Attempt to connect to the WebSocket (Channel).
timeChannel.join()
  .receive("ok", resp => console.log("joined time channel", resp))
  .receive("error", reason => console.log("failed to join", reason))
Elixir
There are a number of moving parts here:
The socket mount-point (B) needs to be defined in our Endpoint setup. As mentioned above, this needs to match the socket endpoint that we attempt to connect to in our Javascript code:
socket "/socket", TonicTime.UserSocket
The function Page.Controller.index which handles GET requests to the index:
def index(conn, _params) do
  # Get the time from the TimeManager state, we'll look at this
  # in detail below.
  time_now = TimeManager.time_now()

  # Render the template `index.html` passing `time_now` in
  # the `assigns` map.
  render conn, "index.html", [time_now: time_now]
end
The TimeChannel Channel (C) which listens for updates and alerts subscribers:
defmodule TonicTime.TimeChannel do
  use TonicTime.Web, :channel

  # Client method called after an update.
  def broadcast_update(time) do
    # Render the template again using the new time.
    html = Phoenix.View.render_to_string(
      TonicTime.PageView,
      "index.html",
      [time_now: time]
    )

    # Send the updated HTML to subscribers.
    TonicTime.Endpoint.broadcast("time:now", "update", %{html: html})
  end

  # Called in `app.js` to subscribe to the Channel.
  def join("time:now", _params, socket) do
    {:ok, socket}
  end
end
The TimeManager GenServer (D) which holds the current time in its state, and is also responsible for triggering updates to the time (E). If you look at the full file you’ll see there is a lot of code there to facilitate interaction with the GenServer. Most of this is not important to understanding Channels, the most relevant function is below:
defp update_time do
  updated_time =
    "US/Eastern"
    |> Timex.now()
    |> Timex.format!("%I:%M:%S %p", :strftime)

  # Schedule another update call to happen in 1 second.
  schedule_time_update()

  # Send the update to our Channel so it can update clients.
  TonicTime.TimeChannel.broadcast_update(updated_time)

  %{clock: updated_time}
end
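The schedule_time_update function called above isn't shown in the post; as a hedged sketch (the real TimeManager on GitHub may differ), it is presumably the standard Process.send_after pattern paired with a matching handle_info callback:

```elixir
# Hypothetical sketch of the scheduling half of TimeManager.
defp schedule_time_update do
  # Ask the VM to deliver an :update_time message to this GenServer in 1 second.
  Process.send_after(self(), :update_time, 1_000)
end

# GenServer callback invoked when the scheduled message arrives; it recomputes
# the time, which re-schedules the next tick and broadcasts to subscribers.
def handle_info(:update_time, _state) do
  {:noreply, update_time()}
end
```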
Review
By leveraging Channels I was able to reduce the use of another framework in my project and still provide users with seamless dynamic page content. While we have looked at a trivial example of what Channels can do, there are many possibilities and native support in Phoenix makes implementation fast and natural.
Hopefully through this overview I was able to provide you with an easy-to-follow introduction to Channels and a framework for implementing them in your own projects.
Credits
A huge thanks to Chris McCord who walked me through Channels and their usage in Phoenix. Credit for the pun in the title, “A refreshing tonic”, goes to my fantastic friend and coworker Shanti Chellaram, who is almost as good at making up puns as she is at programming. And finally a big thanks to Ryan Sydnor for his help with building Houston, his many edits to this post, and his endless enthusiasm. |
Q:
Meaning of sentence in Frankenstein
In Frankenstein, Chapter 15
I cherished hope, it is true, but it vanished when I beheld my person reflected in water or my shadow in the moonshine, even as that frail
image and that inconstant shade.
Monster's hope vanished when it saw its horrible image.
But what does ", even as that frail image and that inconstant shade." indicate here?
What purpose do the "even as" and "that" serve in this sentence?
A:
Examining the sentence, which says "I beheld my person", it mentions two ways in which the speaker does that:
a) reflected in water
b) shadow in the moonshine
The two phrases further on in the sentence, about which you ask, refer to those two images, respectively:
a) that frail image (reflected in water)
b) that inconstant shade (the shadow)
What "even as" does, is to say that the speaker's appearance was horrible, even when seen indirectly in those ways.
So to paraphrase the sentence:
I hoped that I did not look awful, but even mere suggestions of my appearance were enough to destroy that hope.
|
Q:
How do I get the hi-hat to the right place in RB3 Pro drum kit?
Is there any way to set up the RB3 drum kit + cymbals so that the hi-hat (yellow cymbal) is straight above the snare? It would be nice to be able to use either hand for the snare while playing running sixteenth notes on the hi-hat. Actually, failing that, I'd really rather have the hi-hat to the left of the snare so I can play the snare with my right hand.
Harmonix seems to have been aware of this issue when charting basic drums; on songs where it comes up, they chart hi-hat as red and snare as yellow. It seems strange that they'd ignore it for Pro Drums.
A:
Following the advice in this review and Grasa's comment below, I was able to set it up like this:
This is as far left as I could get hi-hat. Note that this does not require you to pull apart any of the clamps, so you are free to experiment without incredible pain, and if you decide you don't like it, you can just put it back together the way it was.
How I did it:
Remove all the cymbals from their clamps. This will also give you the clamp that was previously keeping the blue- and green-cymbals stable.
Move the green cymbal to the long pole, and the blue cymbal to one of the shorter poles.
Place the green cymbal where the yellow cymbal was, and turn its clamp more to the left.
Attach the yellow cymbal to green's pole using the clamp you removed from between the blue- and green-cymbals. You'll need to fidget with the rotations on the clamps holding the yellow/green cymbals to get the yellow to a comfortable position.
You can place the blue cymbal back where it was.
Note that, yes, the green and blue cymbals are intentionally swapped from where you'd think they would be. This is because in Rock Band 3, the green cymbal is almost always the crash, while the blue cymbal is the ride, and most drum sets have the crash on the left and the ride on the right. This can make the game very difficult to play, and I have since swapped the green- and blue-cymbals back again.
View from behind the right-side:
View from behind the left-side:
View from in front of left-side:
Hope this helps!
A:
I found a way to move my hi-hat farther left without having to move either blue or green over to the left-hand side (which was driving me crazy). However, this will only work if you have an extra RB drum kit around to cannibalize. (Since the pieces of my RB1 kit turned out to be compatible, I assume an RB2 kit would be, but I don't know.)
I removed the long clamp from between the blue and green cymbals. Then I took the yellow cymbal and its pole out of their clamp, replacing that pole with one leg (both the post and the foot) of my old RB1 drums. Finally, using the long clamp, I attached the yellow cymbal and its pole sticking out to the left of the RB1 pole.
Here's the back of the left-hand side; the yellow cymbal is just above the picture frame. Note the two 'nested' feet at the bottom.
Here's the back of the right-hand side. At first it was unchanged except for the removal of the cross-brace clamp; then I realized I could use the other leg of my old drums to stabilize the post of the green cymbal, which was wobbling without the cross-brace:
There's nothing attaching the green cymbal's post to the leg; I just adjusted the extra leg's height to sliiiightly higher than the bottom of the green post, so that when I nestled the base of the green post into the open upper end of the extra leg, the extra leg would take some of the weight.
Here's a drummer's-eye view of the result:
|
Q:
How can i use action_listen inside a custom action in Rasa
I want to listen to user inside custom action and then continue that custom action.
Basically what i am looking for is.
I have a loop in custom action from 0 to 5.
for each value i want to take some input from user and continue that loop.
def action():
    for i in range(0, 6):
        user_input = action_listen()  # wait for the next user message
        # do something with user_input
A:
You should use a form for this. A form loops over a set of slots that you define until all are filled.
Looping with action_listen in a regular action won't work because an action has only one run() method and the events are only added to the tracker once the run() method has returned, after which the action is completed (and you can't get back into it).
https://rasa.com/docs/rasa/core/forms/
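As a sketch of that approach (slot names are hypothetical, and the exact schema depends on your Rasa version), the 0-to-5 loop from the question maps onto a form that declares one required slot per prompt in the domain file:

```yaml
# Hypothetical domain.yml fragment (Rasa 2.x-style forms schema).
forms:
  question_form:
    required_slots:
      - answer_0
      - answer_1
      - answer_2
      - answer_3
      - answer_4
      - answer_5
```

Rasa then re-runs the form after each user message until every slot is filled, which gives you the listen-then-continue behavior the question asks for.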
|
The present invention is directed toward an ambulatory infusion pump and, more particularly, toward a programmable ambulatory infusion pump having remote programming capability. |
The Seattle Seahawks postponed a meeting and workout with Colin Kaepernick this week after he declined to say he would stop kneeling during the national anthem next season. The Seahawks are still weighing whether to bring Kaepernick in, and no final decisions have been made, a source told ESPN.
According to Schefter, the Seahawks backed away when Kaepernick would not promise to stop kneeling in protest during national anthem demonstrations before games next season.
Here's the curious thing about the Seahawks doing the Kaepernick two-step.
The day after the scheduled workout, NFL attorneys conducted a deposition in New York regarding the league's alleged attempts to keep him out of the league.
On NFL Live, Schefter said there is a possibility that the workout could be canceled.
It's getting harder and harder for NFL teams to claim not signing Colin Kaepernick is exclusively a football decision. Kaepernick has the talent that should warrant him being on a National Football League roster, as a backup quarterback at the very least, at this point. Should the two sides be able to work beyond the bumps in the road, Kaepernick would still appear a capable backup to Russell Wilson, as he still maintained a high passer rating and completion percentage even on a downtrodden 49ers team.
In 69 regular-season games with the 49ers from 2011 through 2016, Kaepernick threw for 12,271 yards with 72 touchdowns and 30 interceptions. In fact, this news broke while Kaepernick was sitting in on the deposition of Dallas Cowboys owner Jerry Jones. "Kaepernick is expected to be one of the options, but other quarterbacks will be considered". "Duane Brown knelt for the anthem and is still on the team". |
its cold here in NY! I was shivering by the
Today show fence and I saw Jimmy dancing
around and took a few pictures! I hope they
come out ok. After a hot choc. and choc
croissant at the Dean & Deluca, im ready to
go back to the Quality hotel for a nap!
To Jim-I love you more than words can say,
my biggest dream in life is to marry you someday.-your future wife, Carol Carrey
aka Jimlover1
ps. or at least a date!
i really regret not recording that part where he goes outside to collect the toys but instead acted like he was one of the people behind the railing trying to get on TV; I REALLY REALLY regret not recording the parts where he interrupted Katie and Bonnie's interview it was so freaken HILARIOUS....i was ROTFL in tears. i hope someone out there recorded it and will be able to upload it. |
Q:
A piece has a given key. When it modulates, are the consequent accidentals *chromatic* or are they *diatonic* temporarily?
Moving tangentially from a recent question and its answers: diatonic notes are 'of the key', and other notes are 'chromatic'. However, if a piece is written in, say, C major, and modulates to G major for several bars, are the F♯ notes that occur then considered chromatic (not from C major), or diatonic, because at that point the key is actually G?
A:
The key lies in the ear of the listener. It depends on what you think the key is. If it feels like the tonality really changed to G, then the F# notes can be considered diatonic. (My opinion only.)
I'd like to turn the perspective around. What do you want it to sound like? If you play like the key was G major, then chances are pretty soon at least a part of the audience will agree.
I'll fix the subject line:
A piece has a given key. When it modulates, the key changes.
In jazz songs, the tonality may change very rapidly, but usually the composer or transcriber or whoever wrote the music doesn't bother declaring the "correct" interpretation with different key signatures all over the place. It's the listener's responsibility to interpret what is diatonic or chromatic or something else.
The key signature by itself doesn't tell the whole truth.
|
Q:
Constructing a minute-by-minute volatility curve
For market making in front month vanilla commodity options we need a volatility curve that updates every second or so as the underlying and the options change prices.
If all the strikes have a good two-way market then a simple smoothing spline produces a usable curve. But when the bids disappear in a few strikes, how should we preserve the shape of the curve and fit it to the new market data?
Should we be working with strikes or in log(strike) space?
A:
For simple interpolation, implied variance (i.e. implied vol squared times maturity) vs. log moneyness (i.e. log of strike over forward) is probably the best choice. In these coordinates, Roger Lee's results on the wing asymptotics imply the curve flattens to linear in the high and low moneyness limits, which fits with natural spline boundary conditions.
There is no canonical solution for the full problem you face of interpolating/extrapolating missing data across strikes and time. You can make modelling approaches as complicated as you want; it just is a hard problem.
I would suggest doing the simplest thing you can tolerate. Complex solutions have a way of causing more problems than they solve.
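As a concrete illustration of those coordinates (a sketch with made-up sample quotes, not anyone's production fitter): fit a natural cubic spline to total implied variance w = σ²T against log moneyness k = log(K/F), and read vols back off the curve.

```python
import numpy as np
from scipy.interpolate import CubicSpline

T = 0.25    # time to expiry in years (illustrative)
F = 100.0   # forward price (illustrative)
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
vols = np.array([0.32, 0.27, 0.24, 0.25, 0.28])  # sample implied vols

k = np.log(strikes / F)   # log moneyness
w = vols ** 2 * T         # total implied variance

# Natural boundary conditions: curvature vanishes at the end knots,
# in the spirit of Lee's asymptotically linear wings.
spline = CubicSpline(k, w, bc_type="natural")

def implied_vol(strike):
    """Read an interpolated implied vol back off the variance curve."""
    return float(np.sqrt(spline(np.log(strike / F)) / T))
```

When bids vanish in a few strikes, one pragmatic option is to refit to the surviving quotes while keeping the previous curve's wing slopes as soft constraints; but as said above, there is no canonical answer.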
|
/*
** Snapshot handling.
** Copyright (C) 2005-2011 Mike Pall. See Copyright Notice in luajit.h
*/
#ifndef _LJ_SNAP_H
#define _LJ_SNAP_H
#include "lj_obj.h"
#include "lj_jit.h"
#if LJ_HASJIT
LJ_FUNC void lj_snap_add(jit_State *J);
LJ_FUNC void lj_snap_purge(jit_State *J);
LJ_FUNC void lj_snap_shrink(jit_State *J);
LJ_FUNC void lj_snap_regspmap(uint16_t *rsmap, GCtrace *T, SnapNo snapno);
LJ_FUNC const BCIns *lj_snap_restore(jit_State *J, void *exptr);
LJ_FUNC void lj_snap_grow_buf_(jit_State *J, MSize need);
LJ_FUNC void lj_snap_grow_map_(jit_State *J, MSize need);
static LJ_AINLINE void lj_snap_grow_buf(jit_State *J, MSize need)
{
if (LJ_UNLIKELY(need > J->sizesnap)) lj_snap_grow_buf_(J, need);
}
static LJ_AINLINE void lj_snap_grow_map(jit_State *J, MSize need)
{
if (LJ_UNLIKELY(need > J->sizesnapmap)) lj_snap_grow_map_(J, need);
}
#endif
#endif
|
*
Israeli soldiers say it’s “intolerable” they can’t kill Palestinians more freely due to cameras, “rules”
Submitted by Ali Abunimah
An Israeli soldier lobs tear gas at Palestinian demonstrators during a protest against Israel’s attack on the Gaza Strip, in the village of Beit Ummar near the West Bank city of Hebron on November 16, 2012. (Mamoun Wazwazi / APA images)
Israeli occupation soldiers have complained to Israel’s Ynet that they are not allowed to be more violent against Palestinians whose land they occupy in the West Bank.
In particular, the soldiers seem unhappy that they can no longer just shoot dead Palestinians who throw stones at them because Palestinians do not like foreign armies occupying their towns. Ynet used only initials to identify the soldiers.
According to S., orders to open fire address situations of a clear and present danger and only if there is a person with the means and intent to kill. “But what is an angry mob throwing stones and sometimes rocks at you if not a life threatening situation? I wouldn’t order opening fire at a crowd of people but we can’t have a situation where you stand in front of a person with a rock and start to ask yourself is this person life threatening. If I shoot at him I go to jail.”
“Intolerable” not to be able to shoot Palestinians at will
One soldier admits that the presence of cameras – presumably in the hands of Palestinian and other videographers – inhibits the soldiers from being even more abusive:
T. says the cameras on the ground undermine the forces’ efforts. “A commander or an officer sees a camera and becomes a diplomat, calculating every rubber bullet, every step. It’s intolerable, we’re left utterly exposed. The cameras are our kryptonite.”
Occasionally crimes by Israeli occupation soldiers and settlers are caught on video.
But more often they are not. In recent testimonies given to the group Breaking the Silence, Israeli soldiers admitted to horrifying crimes including deliberate and random attacks on Palestinian children, sometimes killing them and sometimes just for amusement.
In video shot by Palestinians last May, Israeli settlers can be seen attacking a village with stones, live fire and setting fire to fields as Israeli occupation forces guard the settlers.
*
*
In this video, posted a few days ago on YouTube, settlers can be seen throwing stones at Palestinians in the occupied West Bank village of Urif, again protected by soldiers.
It seems unlikely that “S.” and “T.” would be too keen on Palestinians being given the right to shoot at them. Stones are only deadly weapons, it would appear, in the hands of Palestinians, and when directed against heavily-armed, invading occupation forces.
Israeli soldiers kill with impunity anyway
While “T.” worries about “calculating every rubber bullet,” Israeli soldiers have found ways around rules nominally meant to prevent wanton killing of Palestinians.
Exactly one year ago, Mustafa Tamimi, 28, was killed when Israeli soldiers in the village of Nabi Saleh fired a tear gas canister at his face at point-blank range, a murder witnessed by Linah Alsaafin.
In November, harrowing video caught images of Rushdi Tamimi, 31, also in Nabi Saleh, lying on the ground shortly after being shot in the stomach and thigh by Israeli occupation forces during a protest against Israel’s attacks on Gaza.
The video shows the occupation soldiers threatening the woman shooting the video and preventing villagers from tending to Tamimi, who died of his wounds in hospital two days later.
In addition to Tamimi, 22-year-old Hamdi al-Falah was killed by Israeli soldiers during protests against the attack on Gaza in the West Bank city of Hebron.
“S.,” who was concerned that “If I shoot at him I go to jail,” need not worry. A year after Mustafa Tamimi’s killing, no one has been brought to justice. It’s unlikely that Rushdi Tamimi’s killers will face justice either.
It’s been like that since video of Israeli soldiers brutalizing Palestinians came to light during the first intifada, which began 25 years ago this weekend.
Videos or no videos, decade after decade, Israel’s brutal occupation grinds on without accountability and with impunity for those who give the orders and those who follow them.
Written FOR
*_____________________________________
*
Mark Elf @ Jews Sans Frontiers posted the following related article….
*
Israel doesn’t kill stone throwers…not on camera anyway
* YNET: Israeli soldiers in the West Bank were confronted by stone-throwers. Click HERE to view video. * A video that surfaced over the weekend shows Palestinians stoning Israeli security forces, eventually forcing them to run for cover – similar to the incident in Hebron. Six IDF soldiers equipped with shields and crowd-dispersal means found themselves ambushed by a crowd of stone-throwing Palestinians in the West Bank village of Kafr Qaddum. Ynet spoke to “S” and “T”:
According to S., orders to open fire address situations of a clear and present danger and only if there is a person with the means and intent to kill. “But what is an angry mob throwing stones and sometimes rocks at you if not a life threatening situation? I wouldn’t order opening fire at a crowd of people but we can’t have a situation where you stand in front of a person with a rock and start to ask yourself is this person life threatening. If I shoot at him I go to jail.” T’s testimony was a tad more telling:
T., a combatant in an infantry brigade, also claims that soldiers are not equipped to handle the complex situation on the ground. “There’s nothing more humiliating for a combatant than to see his friends run,” he says. He criticizes the army for sending such a small group of soldiers to Qaddum on Friday at a particularly volatile time. T. says the cameras on the ground undermine the forces’ efforts. “A commander or an officer sees a camera and becomes a diplomat, calculating every rubber bullet, every step. It’s intolerable, we’re left utterly exposed. The cameras are our kryptonite.”
I see, so cameras are a constraint. I don’t know how Freudian the guy was being regarding kryptonite. It’s the only thing that can kill Superman. Does this Israeli soldier really think that not being able to kill is the same thing as actually being dead? |
We Need To Talk #24: Before Star Wars: The Force Awakens
Note: there are no spoilers for the movie in this episode! Life-long fan of Star Wars, Petter Mårtensson, sits down with Breki the Trekkie to talk about their thoughts and expectations regarding the Star Wars franchise as we move into the final few days before Star Wars: The Force Awakens.
Show notes and links:
Star Wars: The Force Awakens (2015) (imdb.com)
A continuation of the saga created by George Lucas and set thirty years after Star Wars: Episode VI – Return of the Jedi (1983).
Harmy’s Star Wars: Despecialized Edition v2.5 (youtube.com)
This is a documentary about the Despecialized Edition of the original trilogy, a far superior way of watching the films than most of the versions available today.
Join up with Breki and his friends before you go see that movie or TV series you're interested in, as they'll be discussing their hopes and expectations. Then join them again afterwards as they sum up their thoughts once they've seen the thing in question! |
Q:
Error in installing Bundle for Vim
I was trying to install Vundle for Vim.
I typed :BundleInstall, and it seems to fail to install
Bundle 'git://git.wincent.com/command-t.git'
Bundle 'file:///Users/gmarik/path/to/plugin'
The log shows that
[131205 15:35:35] Bundle git://git.wincent.com/command-t.git
[131205 15:35:35] $ git clone --recursive 'git://git.wincent.com/command-t.git' '/home/p/.vim/bundle/command-t'
[131205 15:35:35] > Cloning into '/home/p/.vim/bundle/command-t'...
fatal: read error: Connection reset by peer
[131205 15:35:36]
[131205 15:35:36] Bundle file:///Users/gmarik/path/to/plugin
[131205 15:35:36] $ git clone --recursive 'file:///Users/gmarik/path/to/plugin' '/home/p/.vim/bundle/plugin'
[131205 15:35:36] > Cloning into '/home/p/.vim/bundle/plugin'...
fatal: '/Users/gmarik/path/to/plugin' does not appear to be a git repository
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
How do I fix it?
A:
Those two errors occur because you've misunderstood the Vundle installation.
What you see on the GitHub repo (https://github.com/gmarik/vundle#about) is a sample config.
You don't need the two lines that give you the errors, as you don't have access to them (one is probably a private Git repository while the other is a local file on the author's machine), so just remove those lines from your .vimrc.
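For reference, a minimal working Vundle `.vimrc` of that era looked roughly like this (a sketch based on Vundle's sample config; the `tpope/vim-fugitive` line is just an example plugin, not something you need):

```vim
set nocompatible              " required by Vundle
filetype off                  " required by Vundle

set rtp+=~/.vim/bundle/vundle/
call vundle#rc()

" Let Vundle manage Vundle itself
Bundle 'gmarik/vundle'

" Example plugin, referenced as GitHub user/repo -- replace with your own
Bundle 'tpope/vim-fugitive'

filetype plugin indent on     " required by Vundle
```

Then restart Vim and run :BundleInstall again.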
|
#include <memory>
#include <algorithm>
#include <thread>
#include <filesystem>
#include "wisp.hpp"
#include "d3d12/d3d12_renderer.hpp"
#include "spheres_scene.hpp"
#include "frame_graphs.hpp"
static const unsigned int default_window_width = 1290;
static const unsigned int default_window_height = 720;
static const unsigned int frames_till_capture = 3;
static const std::string output_dir = "benchmark_images/";
static int benchmark_number = 0;
inline void ReplaceAll(std::string& str, std::string const & original_delimiter, std::string const & new_delimiter)
{
std::string::size_type n = 0;
while ((n = str.find(original_delimiter, n)) != std::string::npos)
{
str.replace(n, original_delimiter.size(), new_delimiter);
n += new_delimiter.size();
}
}
template<typename S, typename O>
void PerformBenchmark(FGType fg_type, unsigned int width = default_window_width, unsigned int height = default_window_height, unsigned int output_render_target_index = 0)
{
// Get benchmark information
std::string scene_name = std::string(typeid(S).name());
scene_name.erase(0, 6); // Remove "class " from the name.
std::transform(scene_name.begin(), scene_name.end(), scene_name.begin(), ::tolower);
std::string rt_name = std::string(typeid(O).name()) + "_" + std::to_string(output_render_target_index);
rt_name.erase(0, 6); // Remove "class " from the name.
std::transform(rt_name.begin(), rt_name.end(), rt_name.begin(), ::tolower);
ReplaceAll(rt_name, "::", "-");
std::string fg_name = GetFrameGraphName(fg_type);
std::transform(fg_name.begin(), fg_name.end(), fg_name.begin(), ::tolower);
ReplaceAll(fg_name, " ", "_");
LOGW("Starting Benchmark \"{}\" for scene \"{}\" with fg \"{}\" to output \"{}\"", benchmark_number, scene_name, fg_name, rt_name);
auto scene = std::make_unique<S>();
// Initialize
auto render_system = std::make_unique<wr::D3D12RenderSystem>();
std::string window_title = "Benchmark (Scene: " + scene_name + " Fg:" + fg_name + " Rt:" + rt_name + ")";
auto window = std::make_unique<wr::Window>(GetModuleHandleA(nullptr), window_title, width, height, true);
wr::ModelLoader* assimp_model_loader = new wr::AssimpModelLoader();
render_system->Init(window.get());
scene->Init(render_system.get(), width, height);
auto frame_graph = CreateFrameGraph(fg_type, *render_system);
// Render
unsigned int frame = 0;
while (window->IsRunning())
{
window->PollEvents();
scene->Update();
render_system->Render(*scene->GetSceneGraph(), *frame_graph);
// Capture screenshot
if (frame == frames_till_capture)
{
std::string path = output_dir + "wisp_img_" + std::to_string(benchmark_number) + "_" + scene_name + "_" + fg_name + "_" + rt_name + ".tga";
frame_graph->SaveTaskToDisc<O>(path, output_render_target_index);
LOGW("Saving output to: {}", path);
}
// Quit after capture.
else if (frame == frames_till_capture + 1)
{
window->Stop();
}
frame++;
}
// Shutdown
render_system->WaitForAllPreviousWork();
delete assimp_model_loader;
scene.reset();
frame_graph.reset();
render_system.reset();
benchmark_number++;
}
int GraphicsBenchmarkEntry()
{
// Create output directory if necessary.
if (!std::filesystem::exists(output_dir))
{
std::filesystem::create_directory(output_dir);
LOGW("Created output dir {}", output_dir);
}
LOGW("Starting Benchmarks");
PerformBenchmark<SpheresScene, wr::PostProcessingData>(FGType::PBR_BASIC, 1000, 1000);
PerformBenchmark<SpheresScene, wr::PostProcessingData>(FGType::PBR_RT_REF_SHADOWS, 1000, 1000);
LOGW("Benchmarks Finished");
return 0;
}
WISP_ENTRY(GraphicsBenchmarkEntry) |
1. Field of the Invention
This invention relates to record player turntables and particularly relates to devices for dampening, dissipating, and blocking vibrations and resonances that interfere with faithful sound reproduction. It especially relates to devices for centering a record on the platter and then isolating the record from the turntable drive mechanism while the record is being played in order to block such vibrations travelling through the drive mechanism.
2. Review of the Prior Art
Playback distortion from the turntable frequently occurs in even the finest equipment. It may be an obvious distortion which makes listening very unpleasant, an objectionable resonant coloration, a blurring of clear, distinct sound into an unrecognizable mass of sound, a subtly annoying but not totally unpleasant effect, or even an unidentifiable source of fatigue. Mechanical vibration in a turntable may originate in, or be transmitted by, the drive system, the loader assembly, the platter design, or the chassis design.
Feedback is a major source of mechanical vibration which may be either mechanical or acoustical. Mechanical feedback is energy transmitted through the floorboards and to the wall beams and the like so that the loudspeaker is mechanically coupled to the turntable. Acoustical feedback is created by acoustical energy emanating from the loudspeaker or other sources when it moves or pumps energy into the room in the form of low-to-high level pressures at multiple frequencies and in complex patterns and with changing forces. A mechanical force is thus created when the pressure patterns in the air are absorbed by solid objects.
The lower the frequency, the more obvious the mechanical force becomes until it reaches a frequency too low to be heard. But even at such low frequencies, sufficient energy can be absorbed to rattle windows and shake walls as well as to create mechanical energy in the turntable platter, its main board, its base, and its supporting structure. Each of these parts vibrates with its own characteristic resonances in accordance with varying amounts of acoustical energy in the room.
These mechanical and acoustical vibrations travel through the equipment and coincide from all directions at certain key pathways to the tone arm. The result of such combinations seems to be a compounded increase in the feedback to the tone arm at many key points which might be called "collision course vibrations". These collision course vibrations are also generated within the mechanism itself, by and between the motor and the main bearing and the chassis and the subchassis, and are transmitted to and picked up by each end of the tone arm.
Such vibrations are commonly measured in the laboratory as rumble. Rumble is a low-pitched vibration or frequency that is caused by a mechanical vibration acting on the turntable and tone arm when the vibration occurs at the rotation frequency of the motor, the idler, the bearing, or the platter, or at some multiple of any of these frequencies. The platter bearing is indeed the main source of rumble in turntables that are now available on the market. Rumble may be reported as weighted or unweighted. Weighted rumble measurements discriminate against subsonic frequency components which cannot be reproduced by loudspeakers or heard by the human ear, but such frequencies can overdrive an amplifier or speaker and thereby impair the reproduction of higher frequencies. Thus, an unweighted measurement can also provide useful information because both sonic and subsonic frequencies--from one to 100,000 cps--contribute undesirable side effects.
Flutter is a rapid pitch fluctuation in reproduced music which is caused by pulsations or changes of the turntable speed, i.e., a rapid variation from constant rotational speed. When flutter occurs at a low rate, it is called "wow", suggesting the characteristic sound it imparts to steady musical tones. When it occurs at higher rates, the effect is of a "gargling" or roughness. Wow and flutter are usually reported as a combined flutter measurement which is weighted to emphasize the most objectionable flutter rates at around 5-10 Hz. This combined flutter measurement is usually specified in hundredths of a percent of perfect accuracy with 0.03% being a typically good figure.
Flutter robs a musical instrument of its character by blurring the musical image. Flutter can be characterized as forward and backward movement. The composite of all of these vibrations creates a situation that has much the same distortional effect, with respect to playback, as flutter itself but with more severe characteristics because these vibrations react in all planes and through 360°.
Even though such laboratory measurements report excellent values, such as an average peak wow and flutter of not more than ±0.03% and a rumble low enough to produce an ARLL-weighted measurement of −73 dB or even −80 dB, collision course vibrations can produce annoying disturbances to the trained ear. Neither level of performance is consequently acceptable for quality equipment.
Numerous devices have been designed and built for decoupling the turntable from mechanical vibrations. However, the frequency at which the energy decoupler resonates must be above the rotational speed of the turntable, which at 33 1/3 rpm is approximately one-half cycle per second, and, at the same time, must be lower than the resonant frequency of the tone arm mass and cartridge compliance, which is preferably 8 or 9 cycles per second. Thus, the best frequency for decoupling mechanical energy to the turntable is two or three cycles per second. Such decouplers include the use of a dense, thick, massive support board, upon which the turntable is placed, and the use of a number of coil springs between the support board and the platform therebeneath. Adding additional mass to the frame of the turntable also changes its frequency of vibration and reduces distortion that may range from frizzy highs to muddy lows, that is, music "out of focus".
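The frequency window described above can be checked with a little arithmetic, modelling the sprung support board as a simple mass-spring oscillator. The 12 kg mass and four-spring layout below are illustrative assumptions, not figures from the patent.

```python
import math

# Resonance of a mass-spring system: f = (1 / (2 * pi)) * sqrt(k / m)
platter_rpm = 100.0 / 3.0        # 33 1/3 rpm
f_rotation = platter_rpm / 60.0  # ~0.56 Hz: lower bound for the decoupler
f_tonearm = 8.0                  # Hz: typical arm/cartridge resonance (upper bound)
f_target = 2.5                   # Hz: inside the preferred 2-3 Hz window

# Illustrative assumptions, not figures from the patent:
mass = 12.0                      # kg: turntable plus support board
n_springs = 4                    # coil springs under the board

k_total = mass * (2.0 * math.pi * f_target) ** 2  # N/m: total stiffness needed
k_per_spring = k_total / n_springs

# The chosen decoupling frequency sits between the two limits.
assert f_rotation < f_target < f_tonearm
```

With these assumed numbers the four springs would each need a stiffness of roughly 0.7 kN/m; stiffer springs push the resonance up toward the tone arm resonance, softer ones down toward the platter's rotational frequency.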
About twenty-four years ago, a turntable having excellent acoustical qualities is believed to have been advertised. This turntable featured a centering pin that expanded for precisely centering a record and remained in place during play.
U.S. Pat. No. 1,821,916 describes a resilient center for phonograph records which is a laminated structure comprising the record body and a relatively thin rubber layer with a pin opening therein which is substantially coincident with the central axis of the record.
U.S. Pat. No. 3,801,476 relates to accurately centering a centering core clamped around a lacquer foil original recording for the manufacture of record discs containing sound or video recordings. It provides a centering sleeve which closely fits both the spindle and the outer edge of the centering core, whereby any possible misalignment of the foil with reference to the spindle is avoided.
However, these arrangements do not effectively decouple the mechanism of the turntable from the center hole of a record. There is accordingly a need for a simple, generally applicable and efficient decoupler for collision course vibrations travelling from the motor or other parts of the mechanism and up the spindle towards the center hole of the record and then to the needle carried by the tonearm. |
---
archs: [ armv7, armv7s, arm64, i386, x86_64 ]
platform: ios
install-name: /System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/libSparseBLAS.dylib
current-version: 1
compatibility-version: 1
exports:
- archs: [ armv7, armv7s, arm64, i386, x86_64 ]
symbols: [ _sparse_commit, _sparse_elementwise_norm_double, _sparse_elementwise_norm_float,
_sparse_extract_block_double, _sparse_extract_block_float, _sparse_extract_sparse_column_double,
_sparse_extract_sparse_column_float, _sparse_extract_sparse_row_double,
_sparse_extract_sparse_row_float, _sparse_get_block_dimension_for_col,
_sparse_get_block_dimension_for_row, _sparse_get_matrix_nonzero_count,
_sparse_get_matrix_nonzero_count_for_column, _sparse_get_matrix_nonzero_count_for_row,
_sparse_get_matrix_number_of_columns, _sparse_get_matrix_number_of_rows, _sparse_get_matrix_property,
_sparse_get_vector_nonzero_count_double, _sparse_get_vector_nonzero_count_float,
_sparse_inner_product_dense_double, _sparse_inner_product_dense_float,
_sparse_inner_product_sparse_double, _sparse_inner_product_sparse_float, _sparse_insert_block_double,
_sparse_insert_block_float, _sparse_insert_col_double, _sparse_insert_col_float,
_sparse_insert_entries_double, _sparse_insert_entries_float, _sparse_insert_entry_double,
_sparse_insert_entry_float, _sparse_insert_row_double, _sparse_insert_row_float,
_sparse_matrix_block_create_double, _sparse_matrix_block_create_float, _sparse_matrix_create_double,
_sparse_matrix_create_float, _sparse_matrix_destroy, _sparse_matrix_product_dense_double,
_sparse_matrix_product_dense_float, _sparse_matrix_trace_double, _sparse_matrix_trace_float,
_sparse_matrix_triangular_solve_dense_double, _sparse_matrix_triangular_solve_dense_float,
_sparse_matrix_variable_block_create_double, _sparse_matrix_variable_block_create_float,
_sparse_matrix_vector_product_dense_double, _sparse_matrix_vector_product_dense_float,
_sparse_operator_norm_double, _sparse_operator_norm_float, _sparse_outer_product_dense_double,
_sparse_outer_product_dense_float, _sparse_pack_vector_double, _sparse_pack_vector_float,
_sparse_permute_cols_double, _sparse_permute_cols_float, _sparse_permute_rows_double,
_sparse_permute_rows_float, _sparse_set_matrix_property, _sparse_unpack_vector_double,
_sparse_unpack_vector_float, _sparse_vector_add_with_scale_dense_double,
_sparse_vector_add_with_scale_dense_float, _sparse_vector_norm_double, _sparse_vector_norm_float,
_sparse_vector_triangular_solve_dense_double, _sparse_vector_triangular_solve_dense_float ]
...
|
Related literature {#sec1}
==================
For related literature, see: Ahmad *et al.* (1990[@bb1], 1997[@bb2]); Beeam *et al.* (1984[@bb3]); Elguero (1983[@bb5]); Trofinenko (1972[@bb8]).
Experimental {#sec2}
============
{#sec2.1}
### Crystal data {#sec2.1.1}
C~21~H~16~N~2~O~2~*M* *~r~* = 328.36Monoclinic,*a* = 10.793 (3) Å*b* = 12.948 (3) Å*c* = 11.705 (3) Åβ = 93.508 (14)°*V* = 1632.7 (7) Å^3^*Z* = 4Mo *K*α radiationμ = 0.09 mm^−1^*T* = 298 (2) K0.44 × 0.40 × 0.26 mm
### Data collection {#sec2.1.2}
Siemens P4 diffractometerAbsorption correction: none5767 measured reflections3720 independent reflections2353 reflections with *I* \> 2σ(*I*)*R* ~int~ = 0.0243 standard reflections every 97 reflections intensity decay: 3.6%
### Refinement {#sec2.1.3}
*R*\[*F* ^2^ \> 2σ(*F* ^2^)\] = 0.047*wR*(*F* ^2^) = 0.128*S* = 1.033720 reflections227 parametersH-atom parameters constrainedΔρ~max~ = 0.15 e Å^−3^Δρ~min~ = −0.16 e Å^−3^
{#d5e415}
Data collection: *XSCANS* (Siemens, 1999[@bb7]); cell refinement: *XSCANS*; data reduction: *XSCANS*; program(s) used to solve structure: *SHELXTL-Plus* (Sheldrick, 2008[@bb6]); program(s) used to refine structure: *SHELXTL-Plus*; molecular graphics: *SHELXTL-Plus* and *Mercury* (Macrae *et al*., 2006); software used to prepare material for publication: *SHELXTL-Plus*.
Supplementary Material
======================
Crystal structure: contains datablocks I, global. DOI: [10.1107/S1600536808017054/lh2632sup1.cif](http://dx.doi.org/10.1107/S1600536808017054/lh2632sup1.cif)
Structure factors: contains datablocks I. DOI: [10.1107/S1600536808017054/lh2632Isup2.hkl](http://dx.doi.org/10.1107/S1600536808017054/lh2632Isup2.hkl)
Additional supplementary materials: [crystallographic information](http://scripts.iucr.org/cgi-bin/sendsupfiles?lh2632&file=lh2632sup0.html&mime=text/html); [3D view](http://scripts.iucr.org/cgi-bin/sendcif?lh2632sup1&Qmime=cif); [checkCIF report](http://scripts.iucr.org/cgi-bin/paper?lh2632&checkcif=yes)
Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: [LH2632](http://scripts.iucr.org/cgi-bin/sendsup?lh2632)).
AB is grateful to the Higher Education Commission of Pakistan for the PhD scholarship grant.
Comment
=======
Pyrazoles are important because of their potential for biological activity (Beeam *et al.*, 1984). Both traditional and new scientific methods have been used to prepare new materials for medicine (Elguero, 1983) and agriculture (Trofinenko, 1972). Neutral and anionic pyrazoles are excellent ligands and their co-ordination chemistry has been extensively studied (Bonati, 1980). In the molecular structure of the title compound (III) (Fig. 1 and Fig. 3) there is an intramolecular hydrogen bond between the OH group of one phenolic group and the N atom of the pyrazole group (see Table 1 for hydrogen-bond details). One of the phenyl groups is approximately coplanar with the pyrazole ring (dihedral angle = 7.5 (3)°), possibly due to the intramolecular hydrogen-bond formation. The other two phenyl groups are rotated by 66.4 (12)°. In the crystal structure, an intermolecular hydrogen bond between non-equivalent hydroxy groups of symmetry-related molecules forms extended chains along \[201\] (Fig. 2).
Experimental {#experimental}
============
Compound (I) \[see Fig. 3\] was prepared by a modified Baker-Venkataraman rearrangement as reported earlier (Ahmad *et al.*, 1990, 1997). Purification was carried out by recrystallization from absolute ethanol. Compound (II) was synthesized by adding 0.1 mole of phenyl hydrazine to 0.1 mole of compound (I) dissolved in 200 ml of absolute ethanol. The mixture was refluxed for 7 h and the solvent was then removed under reduced pressure. The highly viscous residue was recrystallized from absolute ethanol. Compound (III) was synthesized by demethylation of compound (II) using 48% hydrogen bromide in acetic acid. Single crystals suitable for X-ray analysis were obtained by recrystallization from an ethanol solution of (III) at room temperature (yield: 96%; m.p. 490 K).
Refinement {#refinement}
==========
All H atoms were placed in idealized positions and treated as riding atoms, with C---H = 0.93 Å, O---H = 0.82 Å and *U*~iso~(H) = 1.2*U*~eq~(C) or 1.5*U*~eq~(O).
Figures
=======
{#Fap1}
{#Fap2}
{#Fap3}
Crystal data {#tablewrapcrystaldatalong}
============
------------------------- -------------------------------------
C~21~H~16~N~2~O~2~ *F*~000~ = 688
*M~r~* = 328.36 *D*~x~ = 1.336 Mg m^−3^
Monoclinic, *P*2~1~/*c* Melting point: 490 K
Hall symbol: -P 2ybc Mo *K*α radiation λ = 0.71073 Å
*a* = 10.793 (3) Å Cell parameters from 84 reflections
*b* = 12.948 (3) Å θ = 4.6--12.4º
*c* = 11.705 (3) Å µ = 0.09 mm^−1^
β = 93.508 (14)º *T* = 298 (2) K
*V* = 1632.7 (7) Å^3^ Prismatic, colourless
*Z* = 4 0.44 × 0.40 × 0.26 mm
------------------------- -------------------------------------
Data collection {#tablewrapdatacollectionlong}
===============
------------------------------------------ ------------------------
Siemens P4 diffractometer *R*~int~ = 0.024
Radiation source: fine-focus sealed tube θ~max~ = 27.5º
Monochromator: graphite θ~min~ = 1.9º
*T* = 298(2) K *h* = −14→4
2θ/ω scans *k* = −16→1
Absorption correction: none *l* = −15→15
5767 measured reflections 3 standard reflections
3720 independent reflections every 97 reflections
2353 reflections with *I* \> 2σ(*I*) intensity decay: 3.7%
------------------------------------------ ------------------------
Refinement {#tablewraprefinementdatalong}
==========
---------------------------------------------------------------- ------------------------------------------------------------------------------------------------------
Refinement on *F*^2^ Hydrogen site location: inferred from neighbouring sites
Least-squares matrix: full H-atom parameters constrained
*R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.047 *w* = 1/\[σ^2^(*F*~o~^2^) + (0.0468*P*)^2^ + 0.4416*P*\] where *P* = (*F*~o~^2^ + 2*F*~c~^2^)/3
*wR*(*F*^2^) = 0.128 (Δ/σ)~max~ \< 0.001
*S* = 1.03 Δρ~max~ = 0.15 e Å^−3^
3720 reflections Δρ~min~ = −0.16 e Å^−3^
227 parameters Extinction correction: SHELXTL-Plus (Sheldrick, 2008), Fc^\*^=kFc\[1+0.001xFc^2^λ^3^/sin(2θ)\]^-1/4^
Primary atom site location: structure-invariant direct methods Extinction coefficient: 0.0155 (18)
Secondary atom site location: difference Fourier map
---------------------------------------------------------------- ------------------------------------------------------------------------------------------------------
Special details {#specialdetails}
===============
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Geometry. All e.s.d.\'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.\'s are taken into account individually in the estimation of e.s.d.\'s in distances, angles and torsion angles; correlations between e.s.d.\'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.\'s is used for estimating e.s.d.\'s involving l.s. planes.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å^2^) {#tablewrapcoords}
==================================================================================================
------ --------------- --------------- --------------- -------------------- --
*x* *y* *z* *U*~iso~\*/*U*~eq~
O1 −0.19429 (12) 0.33852 (11) −0.38082 (11) 0.0598 (4)
H1B −0.2627 0.3197 −0.3621 0.090\*
O2 0.55378 (12) 0.16872 (10) 0.17109 (13) 0.0603 (4)
H2B 0.5054 0.2064 0.1339 0.090\*
N1 0.26395 (13) 0.27005 (12) −0.01205 (13) 0.0467 (4)
N2 0.34918 (13) 0.21948 (12) 0.05691 (13) 0.0462 (4)
C1 0.3431 (2) 0.43942 (17) −0.05478 (19) 0.0628 (6)
H1A 0.4075 0.4086 −0.0915 0.075\*
C2 0.3343 (3) 0.54518 (19) −0.0493 (2) 0.0775 (7)
H2A 0.3933 0.5863 −0.0821 0.093\*
C3 0.2391 (2) 0.58993 (18) 0.0041 (2) 0.0721 (7)
H3A 0.2326 0.6615 0.0064 0.087\*
C4 0.1531 (2) 0.53006 (17) 0.0543 (2) 0.0653 (6)
H4A 0.0891 0.5610 0.0915 0.078\*
C5 0.16142 (18) 0.42387 (16) 0.04968 (17) 0.0546 (5)
H5A 0.1035 0.3827 0.0837 0.066\*
C6 0.25650 (16) 0.38005 (14) −0.00589 (15) 0.0460 (4)
C7 0.21875 (16) 0.10653 (14) −0.03158 (15) 0.0437 (4)
H7A 0.1815 0.0444 −0.0540 0.052\*
C8 0.18333 (15) 0.20293 (15) −0.06634 (15) 0.0433 (4)
C9 0.08425 (15) 0.23722 (14) −0.14967 (15) 0.0441 (4)
C10 −0.03190 (17) 0.19422 (15) −0.15016 (17) 0.0523 (5)
H10A −0.0474 0.1428 −0.0975 0.063\*
C11 −0.12620 (17) 0.22564 (16) −0.22714 (17) 0.0543 (5)
H11A −0.2041 0.1950 −0.2266 0.065\*
C12 −0.10491 (16) 0.30191 (14) −0.30416 (15) 0.0460 (4)
C13 0.01140 (17) 0.34379 (16) −0.30736 (16) 0.0516 (5)
H13A 0.0269 0.3941 −0.3614 0.062\*
C14 0.10531 (17) 0.31154 (16) −0.23075 (16) 0.0507 (5)
H14A 0.1841 0.3402 −0.2337 0.061\*
C15 0.32262 (15) 0.11962 (14) 0.04482 (14) 0.0407 (4)
C16 0.40064 (15) 0.04285 (14) 0.10482 (14) 0.0404 (4)
C17 0.36890 (17) −0.06076 (15) 0.10014 (15) 0.0476 (4)
H17A 0.2955 −0.0804 0.0603 0.057\*
C18 0.44262 (18) −0.13518 (16) 0.15259 (17) 0.0554 (5)
H18A 0.4201 −0.2044 0.1471 0.066\*
C19 0.55033 (18) −0.10638 (17) 0.21343 (16) 0.0546 (5)
H19A 0.6000 −0.1563 0.2503 0.066\*
C20 0.58441 (18) −0.00551 (16) 0.21988 (16) 0.0525 (5)
H20A 0.6572 0.0133 0.2613 0.063\*
C21 0.51143 (16) 0.06923 (14) 0.16520 (15) 0.0447 (4)
------ --------------- --------------- --------------- -------------------- --
Atomic displacement parameters (Å^2^) {#tablewrapadps}
=====================================
----- ------------- ------------- ------------- -------------- -------------- --------------
*U*^11^ *U*^22^ *U*^33^ *U*^12^ *U*^13^ *U*^23^
O1 0.0470 (8) 0.0663 (9) 0.0638 (8) 0.0073 (7) −0.0160 (6) 0.0117 (7)
O2 0.0474 (8) 0.0503 (8) 0.0796 (10) 0.0043 (6) −0.0253 (7) −0.0134 (7)
N1 0.0380 (8) 0.0451 (9) 0.0548 (9) 0.0054 (7) −0.0141 (7) −0.0037 (7)
N2 0.0375 (8) 0.0480 (9) 0.0514 (8) 0.0067 (7) −0.0119 (7) −0.0054 (7)
C1 0.0587 (13) 0.0636 (14) 0.0668 (13) −0.0021 (11) 0.0084 (10) 0.0014 (11)
C2 0.0867 (18) 0.0632 (15) 0.0824 (16) −0.0137 (14) 0.0036 (14) 0.0141 (13)
C3 0.0878 (18) 0.0479 (12) 0.0775 (15) 0.0045 (12) −0.0202 (14) 0.0046 (11)
C4 0.0574 (13) 0.0601 (13) 0.0764 (14) 0.0163 (11) −0.0138 (11) −0.0140 (11)
C5 0.0446 (10) 0.0554 (12) 0.0631 (12) 0.0036 (9) −0.0028 (9) −0.0045 (10)
C6 0.0423 (10) 0.0457 (10) 0.0483 (10) 0.0043 (8) −0.0104 (8) −0.0023 (8)
C7 0.0368 (9) 0.0468 (10) 0.0467 (9) −0.0006 (8) −0.0051 (7) −0.0039 (8)
C8 0.0324 (8) 0.0524 (10) 0.0443 (9) 0.0035 (8) −0.0032 (7) −0.0031 (8)
C9 0.0348 (9) 0.0504 (10) 0.0460 (9) 0.0036 (8) −0.0063 (7) −0.0021 (8)
C10 0.0423 (10) 0.0558 (11) 0.0572 (11) −0.0031 (9) −0.0089 (8) 0.0118 (9)
C11 0.0363 (9) 0.0598 (12) 0.0651 (12) −0.0072 (9) −0.0108 (9) 0.0073 (10)
C12 0.0400 (9) 0.0490 (10) 0.0475 (10) 0.0079 (8) −0.0103 (8) −0.0008 (8)
C13 0.0477 (11) 0.0601 (12) 0.0465 (10) −0.0009 (9) −0.0014 (8) 0.0095 (9)
C14 0.0359 (9) 0.0654 (12) 0.0500 (10) −0.0050 (9) −0.0021 (8) 0.0021 (9)
C15 0.0351 (9) 0.0465 (10) 0.0400 (8) 0.0025 (8) −0.0022 (7) −0.0052 (8)
C16 0.0344 (8) 0.0487 (10) 0.0375 (8) 0.0031 (8) −0.0034 (7) −0.0033 (7)
C17 0.0415 (10) 0.0524 (11) 0.0478 (10) −0.0047 (8) −0.0051 (8) 0.0032 (8)
C18 0.0518 (11) 0.0526 (12) 0.0612 (11) 0.0003 (9) −0.0005 (9) 0.0094 (9)
C19 0.0480 (11) 0.0617 (13) 0.0540 (11) 0.0108 (10) 0.0012 (9) 0.0122 (10)
C20 0.0407 (10) 0.0663 (13) 0.0488 (10) 0.0059 (9) −0.0097 (8) −0.0004 (9)
C21 0.0391 (9) 0.0491 (10) 0.0449 (9) 0.0044 (8) −0.0052 (8) −0.0084 (8)
----- ------------- ------------- ------------- -------------- -------------- --------------
Geometric parameters (Å, °) {#tablewrapgeomlong}
===========================
---------------------- -------------- ----------------------- --------------
O1---C12 1.362 (2) C8---C9 1.471 (2)
O1---H1B 0.8200 C9---C10 1.371 (3)
O2---C21 1.367 (2) C9---C14 1.380 (3)
O2---H2B 0.8200 C10---C11 1.379 (3)
N1---N2 1.3549 (19) C10---H10A 0.9300
N1---C8 1.360 (2) C11---C12 1.366 (3)
N1---C6 1.428 (2) C11---H11A 0.9300
N2---C15 1.330 (2) C12---C13 1.370 (3)
C1---C6 1.363 (3) C13---C14 1.376 (3)
C1---C2 1.374 (3) C13---H13A 0.9300
C1---H1A 0.9300 C14---H14A 0.9300
C2---C3 1.364 (4) C15---C16 1.455 (2)
C2---H2A 0.9300 C16---C17 1.385 (3)
C3---C4 1.369 (3) C16---C21 1.394 (2)
C3---H3A 0.9300 C17---C18 1.371 (3)
C4---C5 1.379 (3) C17---H17A 0.9300
C4---H4A 0.9300 C18---C19 1.377 (3)
C5---C6 1.371 (3) C18---H18A 0.9300
C5---H5A 0.9300 C19---C20 1.358 (3)
C7---C8 1.360 (3) C19---H19A 0.9300
C7---C15 1.401 (2) C20---C21 1.380 (3)
C7---H7A 0.9300 C20---H20A 0.9300
C12---O1---H1B 109.5 C11---C10---H10A 119.3
C21---O2---H2B 109.5 C12---C11---C10 119.82 (17)
N2---N1---C8 111.18 (15) C12---C11---H11A 120.1
N2---N1---C6 119.34 (14) C10---C11---H11A 120.1
C8---N1---C6 128.64 (14) O1---C12---C11 123.06 (17)
C15---N2---N1 105.81 (13) O1---C12---C13 117.22 (17)
C6---C1---C2 119.5 (2) C11---C12---C13 119.71 (16)
C6---C1---H1A 120.3 C12---C13---C14 120.10 (18)
C2---C1---H1A 120.3 C12---C13---H13A 120.0
C3---C2---C1 120.0 (2) C14---C13---H13A 120.0
C3---C2---H2A 120.0 C13---C14---C9 120.92 (17)
C1---C2---H2A 120.0 C13---C14---H14A 119.5
C2---C3---C4 120.4 (2) C9---C14---H14A 119.5
C2---C3---H3A 119.8 N2---C15---C7 110.14 (15)
C4---C3---H3A 119.8 N2---C15---C16 119.87 (15)
C3---C4---C5 120.0 (2) C7---C15---C16 129.96 (16)
C3---C4---H4A 120.0 C17---C16---C21 117.38 (16)
C5---C4---H4A 120.0 C17---C16---C15 120.54 (15)
C6---C5---C4 118.9 (2) C21---C16---C15 122.04 (16)
C6---C5---H5A 120.5 C18---C17---C16 121.86 (18)
C4---C5---H5A 120.5 C18---C17---H17A 119.1
C1---C6---C5 121.20 (19) C16---C17---H17A 119.1
C1---C6---N1 119.94 (18) C17---C18---C19 119.33 (19)
C5---C6---N1 118.86 (18) C17---C18---H18A 120.3
C8---C7---C15 106.22 (15) C19---C18---H18A 120.3
C8---C7---H7A 126.9 C20---C19---C18 120.44 (18)
C15---C7---H7A 126.9 C20---C19---H19A 119.8
N1---C8---C7 106.65 (14) C18---C19---H19A 119.8
N1---C8---C9 122.37 (17) C19---C20---C21 120.24 (18)
C7---C8---C9 130.90 (17) C19---C20---H20A 119.9
C10---C9---C14 117.98 (16) C21---C20---H20A 119.9
C10---C9---C8 120.49 (17) O2---C21---C20 117.24 (16)
C14---C9---C8 121.51 (16) O2---C21---C16 122.02 (16)
C9---C10---C11 121.40 (18) C20---C21---C16 120.73 (18)
C9---C10---H10A 119.3
C8---N1---N2---C15 0.7 (2) C10---C11---C12---O1 −178.41 (18)
C6---N1---N2---C15 171.10 (16) C10---C11---C12---C13 2.6 (3)
C6---C1---C2---C3 −0.4 (4) O1---C12---C13---C14 178.78 (17)
C1---C2---C3---C4 1.2 (4) C11---C12---C13---C14 −2.2 (3)
C2---C3---C4---C5 −0.9 (3) C12---C13---C14---C9 −0.2 (3)
C3---C4---C5---C6 −0.1 (3) C10---C9---C14---C13 2.0 (3)
C2---C1---C6---C5 −0.7 (3) C8---C9---C14---C13 −179.23 (17)
C2---C1---C6---N1 179.36 (19) N1---N2---C15---C7 −0.6 (2)
C4---C5---C6---C1 0.9 (3) N1---N2---C15---C16 177.64 (15)
C4---C5---C6---N1 −179.09 (17) C8---C7---C15---N2 0.4 (2)
N2---N1---C6---C1 76.2 (2) C8---C7---C15---C16 −177.66 (17)
C8---N1---C6---C1 −115.2 (2) N2---C15---C16---C17 175.03 (17)
N2---N1---C6---C5 −103.7 (2) C7---C15---C16---C17 −7.1 (3)
C8---N1---C6---C5 64.8 (3) N2---C15---C16---C21 −7.4 (3)
N2---N1---C8---C7 −0.4 (2) C7---C15---C16---C21 170.51 (18)
C6---N1---C8---C7 −169.74 (17) C21---C16---C17---C18 0.1 (3)
N2---N1---C8---C9 −177.64 (16) C15---C16---C17---C18 177.79 (17)
C6---N1---C8---C9 13.1 (3) C16---C17---C18---C19 1.1 (3)
C15---C7---C8---N1 0.0 (2) C17---C18---C19---C20 −1.1 (3)
C15---C7---C8---C9 176.91 (18) C18---C19---C20---C21 −0.1 (3)
N1---C8---C9---C10 −138.9 (2) C19---C20---C21---O2 −177.28 (18)
C7---C8---C9---C10 44.6 (3) C19---C20---C21---C16 1.4 (3)
N1---C8---C9---C14 42.4 (3) C17---C16---C21---O2 177.27 (17)
C7---C8---C9---C14 −134.1 (2) C15---C16---C21---O2 −0.4 (3)
C14---C9---C10---C11 −1.6 (3) C17---C16---C21---C20 −1.4 (3)
C8---C9---C10---C11 179.66 (18) C15---C16---C21---C20 −179.04 (17)
C9---C10---C11---C12 −0.7 (3)
---------------------- -------------- ----------------------- --------------
Hydrogen-bond geometry (Å, °) {#tablewraphbondslong}
=============================
------------------ --------- --------- ----------- ---------------
*D*---H···*A* *D*---H H···*A* *D*···*A* *D*---H···*A*
O1---H1B···O2^i^ 0.82 2.05 2.824 (2) 158
O2---H2B···N2 0.82 1.87 2.595 (2) 147
------------------ --------- --------- ----------- ---------------
Symmetry codes: (i) *x*−1, −*y*+1/2, *z*−1/2.
Ex-Durham resident faces money laundering charge
DENVER — A former Durham resident with a local criminal record was arrested last week in Denver and now faces a slew of drug-related money laundering charges in Colorado.
By ANDREA BULFINCH (abulfinch@fosters.com)
Taylor Hills, 25, of Boulder, Colo., was charged with 17 counts of money laundering, 12 counts of illegal use of the mail, one count of a false statement on a loan application and one count of failing to file a tax return.
He was indicted by a federal grand jury in Denver on Jan. 29 on the charges.
The case was investigated by agents attached to a homeland security investigations division of U.S. Immigration and Customs Enforcement and staffers from the Internal Revenue Service and U.S. Postal Inspection Service. Officers from the Boulder County Drug Task Force and the Boulder County District Attorney's office also participated. The case is being prosecuted by Assistant U.S. Attorney Michele Korver.
According to a press release from the U.S. Department of Justice, the indictment indicates that Hills “allegedly engaged in financial transactions, which transactions involved the proceeds of specified unlawful activity, that is, the distribution of controlled substances.”
The indictment further states that he “allegedly conducted financial transactions while knowing that the property involved in each transaction represented the proceeds of some form of unlawful activity.”
Hills also allegedly made “a false statement or report for the purpose of influencing the action of JP Morgan Chase, a federally insured financial institution, in connection with a vehicle loan application.”
Should Hills be convicted of the violations stated in these indictments, he is to forfeit “any and all of the defendants' right, title and interest in all property, real or personal, involved in such offenses, or all property traceable to such property, including but not limited to the following: 2008 Ducati motorcycle, and U.S. currency located in a Roth IRA.”
Durham Deputy Police Chief Rene Kelley said Hills developed a criminal record while living in Durham. In May 2010, he was arrested and found guilty of simple assault and possession of controlled drugs.
In Sept. 2009, he was the victim of a home invasion, in which four robbers entered a Scotland Road residence and tied up several of the residents, including Hills. An investigation revealed the robbers were after drugs, Kelley said.
And in 2010, Hills forfeited $15,000 in cash after it was intercepted by the Durham Police Department. He had intended to send the money to a person in California using a fake name. Hills lived at a residence on Route 4 before moving into the Scotland Road home, Kelley said.
It does not appear any of the charges brought against Hills in Colorado have any connection to Durham, Kelley said.
If convicted of money laundering, Hills faces up to 20 years in federal prison and a fine of up to $500,000 per count. If convicted of illegal use of the mail, he faces up to four years in federal prison and a fine of up to $250,000 per count. If convicted of false statement in loan application, he faces up to 30 years in federal prison and a fine of up to $1 million per count. If convicted of failing to file a tax return, he faces up to one year in federal prison and a fine of up to $25,000 per count.
An indictment is not an indication of guilt; rather, it means a grand jury has found sufficient evidence to warrant a trial.
1. Introduction
================
One of the main WSN issues that must be addressed in target tracking and monitoring applications in 3D environments with obstacles is area coverage. This is because a sensor can detect the occurrence of events or the presence of hostile targets only if they are within its sensing range. Coverage reflects how well a zone is monitored or a system is tracked by the sensors. Therefore, the WSN detection performance depends on how well the wireless sensors observe the physical space under control.
Several metrics have been proposed in the literature to measure the quality of coverage. Among these metrics, one can mention the following: (a) the number of coverage holes; (b) the proportion of uncovered area with respect to the area under monitoring; and (c) the so-called Average Linear Uncovered Length (ALUL), developed for 2D zones to estimate the average distance a mobile target can traverse before being detected by one sensor \[[@b1-sensors-11-09904]\]. The ALUL can be used to assess the detection efficiency of the WSN in more general spaces. However, the major shortcoming of this approach is its heavy computational load, which is incompatible with the severe processing and energy limitations characterizing WSNs.
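The ALUL idea can be illustrated with a small Monte Carlo sketch: sample random straight-line trajectories through the monitored region and average the distance traveled before the target first enters some sensor's sensing disk. This is our own 2D simplification for illustration only, not the algorithm of \[[@b1-sensors-11-09904]\]; the function name and parameters are hypothetical.

```python
import math
import random

def alul_monte_carlo(sensors, radius, width, height, trials=2000, step=0.05):
    """Monte Carlo sketch of the ALUL metric: average straight-line distance
    a target travels inside a width x height region before first entering
    some sensor's sensing disk (2D, identical disk sensors assumed)."""
    random.seed(0)  # deterministic for illustration
    total = 0.0
    for _ in range(trials):
        # Random start point and heading inside the region.
        x, y = random.uniform(0, width), random.uniform(0, height)
        theta = random.uniform(0, 2 * math.pi)
        dx, dy = math.cos(theta) * step, math.sin(theta) * step
        d = 0.0
        while 0 <= x <= width and 0 <= y <= height:
            if any((x - sx) ** 2 + (y - sy) ** 2 <= radius ** 2
                   for sx, sy in sensors):
                break  # detected by some sensor
            x, y, d = x + dx, y + dy, d + step
        total += d
    return total / trials
```

With a sensor whose disk covers the whole region the estimate is zero; with no sensors at all it is strictly positive, reflecting undetected traversal. The heavy cost the text mentions comes from the many trajectory samples needed for a stable estimate.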
Obstacles in monitored 3D domains may seriously complicate the role of the monitoring sensors, increase their power consumption, and limit the efficiency of the process providing coverage control \[[@b2-sensors-11-09904],[@b3-sensors-11-09904]\]. Procedures implementing coverage control and target tracking should therefore be efficient: they should take into consideration the geographic nature of the monitored area and cope with the number and the shape of the obstacles.
This paper proposes a coverage assessment approach amenable to implement advanced target tracking functionalities. First, it provides a technique based on the concept of retraction by deformation applied to a special space, called the Rips complex, associated with the deployment of a set of sensors to develop a low complexity algorithm for locating coverage holes. Second, it constructs a collaborative mechanism to repair coverage holes, assuming that the sensors have mobility capabilities. Third, the paper builds on higher-order Voronoi diagrams to define an efficient scheme to coordinate tracking activities of single and multiple targets. To the best of our knowledge, this is the first time where retraction by deformation and higher-order Voronoi tessellations are used for hole assessment and target tracking in 3D domains with obstacles using sensors. The major contributions of this paper are as follows:

- The definition proposed to distributively reduce the Rips complex associated to the sensors is general, in the sense that it applies to a large variety of sensors, detection techniques, monitored domains, and obstacles.
- The proposed cooperative coverage repairing approach considerably reduces the uncovered areas and provides efficient handling of obstacles with respect to existing methods. The detection and localization of holes is done with low complexity.
- We show that the higher-order Voronoi tessellations we utilize are useful for performing multiple tasks, including activity scheduling and coordination. In addition, we show that local coverage information, when gathered using the Voronoi diagram, can be used to implement coverage-preserving mobility models.
The remainder of this paper is organized as follows: Section 2 describes the state of the art of coverage control in various areas in general and in 3D spaces in particular. Section 3 surveys the mathematical objects needed for coverage and tracking control, the Vietoris complex and the Voronoi diagram, and discusses retraction by deformation. Section 4 discusses schemes based on the Vietoris complex to detect and count the coverage holes in 3D domains, locate these holes, and repair them. It also defines a special procedure to reduce the complexity of the Vietoris complexes without modifying their topological properties. Section 5 sets up models for coverage assessment, sensor mobility, and target tracking. Section 6 analyzes the complexity of the algorithms constructed in this paper and presents some extensions of our results to more general types of sensors. Section 7 develops simulation experiments to evaluate the performance of a monitoring system implementing our techniques. Section 8 concludes the paper.
2. Related Work
================
Studies on coverage, holes, and boundary detection have been addressed using three main categories of techniques: geometric methods, statistical/probabilistic methods, and topological methods.
Studies using probabilistic approaches usually make assumptions on the probability distribution of the sensor deployment. Fekete *et al.* \[[@b4-sensors-11-09904]\] assume uniformly randomly distributed sensors inside a geometric region for their boundary detection algorithm. Their approach hinges on the idea that boundary nodes have lower average degrees than "interior" nodes, and statistically provides a degree threshold to differentiate interior and boundary nodes. Kuo *et al.* \[[@b5-sensors-11-09904]\] propose an error model for location estimation using probabilistic coverage, while Ren *et al.* \[[@b6-sensors-11-09904]\] present an analytical model based on probabilistic coverage to track moving objects in a densely covered sensor field. Most probabilistic approaches have focused on the detection and tracking of objects in a sensor field. They did not address other related issues, such as the location of the holes, their number, or their repair.
A substantial body of literature has addressed static, or "blanket," coverage. Dynamic, or "sweeping," coverage \[[@b7-sensors-11-09904]\] has also been a common and challenging task, with applications ranging from security to housekeeping. Two primary approaches to static coverage problems appear in the literature. The first uses computational geometry tools applied to exact node coordinates. Such approaches are very rigid with regard to inputs: one must, for example, know the exact node coordinates and the geometry of the domain to determine the Delaunay complex. To alleviate the former requirement, many authors have turned to probabilistic tools. For example, in \[[@b8-sensors-11-09904]\], the author assumes a randomly and uniformly distributed collection of nodes in a domain with a fixed geometry and proves expected area coverage. Other approaches give probabilistic or percolation results about coverage for randomly distributed nodes. The drawback of these methods is that a uniform distribution of nodes may not always be realistic.
More recently, the robotics community has explored how networked sensors and robots can interact with and augment each other (see, e.g., \[[@b9-sensors-11-09904]\] for more details). Several newer approaches to networks without localization, coming from research on ad hoc wireless networks, are also relevant to coverage questions. One example is the routing algorithm of \[[@b10-sensors-11-09904]\], which generally works in practice but is a heuristic method involving heat-flow relaxation. This work investigates the issues of maintaining coverage and connectivity by keeping a minimum number of sensor nodes operating in the active mode. The authors show that if the radio range is at least twice the sensing range, then complete coverage implies connectivity. A decentralized and localized density control algorithm, called OGDC, is devised to control and maintain coverage and connectivity. However, their approach requires knowledge of node locations; the authors claim that this requirement can be relaxed so that each node only needs to know its location relative to its neighbors. On the other hand, Hsin and Liu \[[@b11-sensors-11-09904]\] give methods for localizing an entire network if the localization of a certain portion is known. They address the problem of target tracking in the face of partial sensing coverage by considering the effect of different random and coordinated scheduling schemes. In their coordinated-coverage algorithm, a sensor might decide to sleep for some time after acknowledgments from its neighbor(s) that must remain active. These decisions are not synchronized, as individual sensors could negotiate with sponsors independently.
Since coverage verification is inherently a geometric problem, much of the research in this area is based on computational geometry, and more precisely on the Voronoi tessellation (and its dual, the Delaunay triangulation). Motivated by the early success of geometric techniques in coping with coverage problems (the *Art Gallery Problem*), researchers have applied these techniques to ad hoc distributed sensor networks (\[[@b12-sensors-11-09904]--[@b15-sensors-11-09904]\]).
The most important drawback of these approaches is that they are too computationally expensive to be implemented in real-time contexts. Another severe limitation is the impact of localization uncertainty on their performance. These claims are well documented in \[[@b16-sensors-11-09904]\]. In fact, to detect coverage holes, the locations of the sensors must be known exactly. Obviously, this cannot always be guaranteed, especially when the sensing nodes are mobile. Moreover, equipping sensors with localization devices may considerably increase the deployment cost of the WSN and reduce its resources. In the following paragraphs, we summarize the methodologies, problems addressed, and results of some recent, notable studies in the area of detection and coverage in wireless sensor networks.
Meguerdichian *et al.* \[[@b13-sensors-11-09904]\] study the problem of computing a path along which a target is least or most likely to be detected. They provide an optimal polynomial-time algorithm that uses graph-theoretic and computational geometric (Voronoi diagram) methods. They address the issues of the maximal breach path and the maximal support path and provide best- and worst-case coverage using computational geometry. Delaunay triangulation was used to find the best-coverage path. In addition, deployment heuristics are provided to improve coverage. Since computational geometric methods require location information, the authors implement a location procedure prior to their coverage scheme. This procedure requires that a few of the deployed nodes (called beacons) know their locations in advance (either from GPS or pre-deployment). Li *et al.* \[[@b14-sensors-11-09904]\] use the local Delaunay triangulation, the relative neighborhood graph, and the Gabriel graph to find the path with the best-case coverage.
Huang *et al.* \[[@b15-sensors-11-09904]\] study the problem of *k*-coverage. They propose solutions to the *k*-UC and *k*-NC (Unit Disks and Non-Unit Disks) coverage problems, which are modeled as decision problems whose goal is to determine whether each location of a target sensing area is sufficiently covered. They present a polynomial-time algorithm with a geometric approach that runs in *O*(*nd* log *d*) time.
Ghrist *et al.* \[[@b17-sensors-11-09904]\] use topological methods to detect insufficient sensor coverage and holes. In their seminal work on using homological concepts for hole detection and coverage, their algorithm detects holes without any knowledge of node locations. Although the approaches of Ghrist *et al.* have many desirable properties, the assumption of a static network and the centralized scheme make them unsuitable for dynamic networks.
3. Mathematics for Coverage and Tracking
=========================================
The objective of this section is to provide a mathematical model for accurately gauging the coverage degree of a monitored domain in the 3D space **R**^3^ and repairing the coverage holes. This model uses the Vietoris Complex \[[@b6-sensors-11-09904],[@b8-sensors-11-09904]\].
The following assumptions will be used in the next subsections. Let **M** be a bounded domain (or manifold) in **R**^3^ with non-empty boundary ∂*M*. The boundary is assumed to be an orientable topological surface (*i.e.*, a closed surface homeomorphic to a number of spheres and connected sums of *g* tori, for *g* ≥ 1, \[[@b18-sensors-11-09904]\]). Let *δ* : **R**^3^ × **R**^3^ → **R**^+^ denote the Euclidean distance. We denote by *S* the set of sensors deployed in **R**^3^ to monitor **M**, and by \|*S*\| the number of these sensors. We will designate by *p* ∈ *S* both the sensor itself and its location (*x~p~, y~p~, z~p~*) in **R**^3^. Note, finally, that the sensors can be deployed inside **M** or outside it.
3.1. Voronoi Diagrams for Spherical-Detection Sensors
------------------------------------------------------
Let us assume that the sensors in *S* have identical coverage areas, each represented by a ball of radius *ρ*. For every pair *p, q* ∈ *S*, we denote by *B*(*p, q*) the plane in **R**^3^ perpendicular to the segment \[*p, q*\] and passing through its midpoint, and by *H*(*p, q*) the half space of **R**^3^ containing *p* and delimited by *B*(*p, q*). Thus, *B*(*p, q*) and *H*(*p, q*) are expressed as follows: $$B\left( {p,q} \right) = \left\{ {x \in \mathbf{R}^{3}/\delta\left( {p,x} \right) = \delta\left( {q,x} \right)} \right\}.$$ $$H\left( {p,q} \right) = \left\{ {x \in \mathbf{R}^{3}/\delta\left( {p,x} \right) \leq \delta\left( {q,x} \right)} \right\}.$$
We also denote by *H~M~* (*p, q*) and *B~M~* (*p, q*) the intersection of *H*(*p, q*) and *B*(*p, q*) with **M**, respectively.
The Voronoi cell generated by *p* ∈ *S* is the intersection of the (\|*S*\| − 1) closed half spaces containing *p* determined by the other sensors. Therefore, the Voronoi cell generated by *p* is expressed by: $$V_{S}\left( p \right) = \underset{q \in S\backslash p}{\cap}{H\left( {p,q} \right)}$$
The Voronoi cell of a sensor is convex and contractible. The common boundary of two Voronoi cells *V~S~*(*p*) ∩ *V~S~*(*q*) is included in *H*(*p, q*). It can be a plane, a half plane, an edge, a point, or an empty set. The Voronoi diagram associated with the set *S* of sensors deployed to monitor **M** is the unique subdivision of **R**^3^ defined by the Voronoi cells of all sensors. Thus, every cell of the subdivision consists of the points whose nearest sensor in *S* is *p*. The Voronoi diagram of *S* is the union of all the Voronoi cells. Hence we have: $$V^{D}\left( S \right) = \underset{p \in S}{\cup}{V_{S}\left( p \right).}$$
In particular, the Voronoi diagram *V^D^*(*S*) has no vertices and no edges when the sensors are located at collinear points. In that case, the faces of the Voronoi diagram are parallel planes. In addition, one can notice that when *p* ∈ *S* lies on the boundary of the convex hull of *S*, then the Voronoi cell of *p* is unbounded in **R**^3^.
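The defining inequality of *V~S~*(*p*) gives a direct membership test: a point *x* lies in the cell of *p* if and only if no other sensor is strictly closer, i.e., *x* belongs to every half space *H*(*p, q*). A minimal Python sketch (illustrative naming, not part of the paper's algorithms):

```python
import math

def in_voronoi_cell(x, p, sensors):
    """x lies in V_S(p) iff p is (one of) the nearest sensors to x,
    i.e. x belongs to every halfspace H(p, q) for q in S \\ {p}."""
    return all(math.dist(x, p) <= math.dist(x, q)
               for q in sensors if q != p)
```

Points on a bisector plane *B*(*p, q*) satisfy the test for both generators, matching the fact that the common boundary of two cells is shared.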
Since, in this paper, we are interested in partitioning a domain **M** into cells according to the *k* nearest neighbors in *S*, for a given integer 1 ≤ *k* ≤ *n* − 1, we now turn to the definition of higher-order Voronoi diagrams, as they are useful concepts to define these sets and support target tracking. An order-*k* Voronoi diagram is defined as follows: let *T* ⊂ *S* be a subset containing *k* sensors; the *T*-generated cell is defined by $$V\left( T \right) = \left\{ {x \in \mathbf{R}^{3}\left| {\forall p \in T,\forall q \in S - T,\delta\left( {x,p} \right) \leq \delta\left( {x,q} \right)} \right.} \right\}.$$The order-*k* Voronoi diagram is given by: $$V_{k}^{D}\left( S \right) = \underset{T \subset S,{|T|} = k}{\cup}{V\left( T \right).}$$
One can easily see that the order-1 Voronoi diagram $V_{1}^{D}\left( S \right)$ is just *V^D^*(*S*), that *V*(*T*) can be empty, and that the order-*k* diagram induces a partition of the domain **M** into bounded components.
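Since *V*(*T*) collects the points whose *k* nearest sensors are exactly the members of *T*, the generator of the order-*k* cell containing a query point can be found with a simple distance sort. A small illustrative sketch (our own naming):

```python
import math

def order_k_cell(x, sensors, k):
    """Return the set T of the k nearest sensors to x: the generator of the
    order-k Voronoi cell V(T) containing x (ties broken by sort order)."""
    return frozenset(sorted(sensors, key=lambda p: math.dist(x, p))[:k])
```

For *k* = 1 this reduces to ordinary Voronoi cell assignment; for *k* > 1 it yields the subset of sensors the paper later uses to coordinate tracking duties around a target's position.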
3.2. Vietoris-Rips Complexes
-----------------------------
We consider a set of points *S* = {*v*~1~,\..., *v~n~*} corresponding to the locations of a set of sensor nodes in a 3D space. For brevity, (*v~i~*)~1≤*i*≤*n*~ will refer indifferently to sensor nodes and points. We suppose that each sensor is capable of covering a ball of radius *r~c~* and of communicating with the other sensors within a distance $r_{b} \leq \sqrt{3}r_{c}$. The total region covered by the sensor network can be represented by: $$\Gamma\left( S \right) = \underset{v_{i} \in S}{\cup}\Gamma_{v_{i},r_{c}}$$where Γ~*v~i~,r~c~*~ = {*x* ∈ ℝ^3^ : \|\|*x − v~i~*\|\| ≤ *r~c~*}.
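Membership in the covered region Γ(*S*) reduces to a nearest-sensor distance test, as the following one-line sketch shows (illustrative, with hypothetical names):

```python
import math

def is_covered(x, sensors, rc):
    """Point x lies in the covered region Gamma(S) iff it is within
    distance rc of at least one sensor ball's center."""
    return any(math.dist(x, v) <= rc for v in sensors)
```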
A *k*-simplex (or a simplex of dimension *k*) *σ* is an unordered set *σ* = {*v*~0~, *v*~1~, .., *v~k~*} ⊆ *S*, where *v~i~* ≠ *v~j~* and *δ*(*v~i~*, *v~j~*) ≤ *r~b~*, for all *i ≠ j*. A face of the *k*-simplex *σ* is a (*k* − 1)-simplex formed by *k* elements (or vertices) of *σ*. Clearly, any *k*-simplex has exactly *k* + 1 faces. The collection of all *k*-simplices of *S* is called the abstract simplicial complex associated with Γ(*S*). In fact, an abstract simplicial complex *X* is a finite collection of simplices which is closed with respect to the inclusion of faces; meaning that, if *σ* ∈ *X*, then all faces of *σ* are also in *X*. It is noteworthy that a simplicial complex is a generalization of a graph; that is, the connectivity graph is nothing but the set of 1-simplices of the simplicial complex associated with a set *V* of points in the 3D space.
Now let us discuss the definition of the Vietoris-Rips complex. This complex captures the features related to connectivity and coverage of WSNs.
**Definition 3.1.** *(Vietoris-Rips complex) Let S be a set of points in a 3D space and ε a given radius. The Vietoris-Rips complex of S, denoted by R~ε~*(*S*)*, is the simplicial complex whose k-simplices correspond to unordered* (*k* + 1)*-tuples of points in S which are pairwise within Euclidean distance ε of each other*.
A subset of *k* + 1 points in *S* determines a *k*-simplex of the Vietoris-Rips complex if, and only if, each of these points lies within the intersection of the balls of radius *ε* centered at the other *k* points.
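For small point sets, the Vietoris-Rips complex up to 2-simplices can be built by brute force directly from Definition 3.1. The following centralized Python sketch is illustrative only, a reference counterpart of the distributed construction discussed in the text; all names are ours:

```python
import math
from itertools import combinations

def rips_complex(points, eps, max_dim=2):
    """Vietoris-Rips complex of a finite point set: a (k+1)-tuple of points
    spans a k-simplex iff its members are pairwise within distance eps.
    Returns a dict mapping dimension k to the list of k-simplices."""
    def close(a, b):
        return math.dist(a, b) <= eps

    simplices = {0: [frozenset([p]) for p in points]}
    for k in range(1, max_dim + 1):
        simplices[k] = [frozenset(c) for c in combinations(points, k + 1)
                        if all(close(a, b) for a, b in combinations(c, 2))]
    return simplices
```

The brute-force pairwise check makes the "balls intersection" characterization above explicit, but its cost grows combinatorially with \|*S*\|, which is why a distributed, message-based construction is preferable on motes.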
The reader, however, may wonder whether such a topological structure can be computed in practice by tiny motes equipped with radio devices and limited storage capabilities. To answer this question, we propose a simple mechanism allowing a fully distributed construction of the Vietoris-Rips complex. Through a 3-step broadcast of connectivity information, each sensor node can be aware of what simplices it belongs to, and what other simplices its neighbors belong to. To this end, we assume that every sensor node has a unique identifier (typically a layer-2 address) and has enough space to maintain a table of identifiers. The protocol performs as follows:

1. Initialization: Every sensor *v~i~* broadcasts its identity to its neighbors. Upon receipt of the message, each sensor builds the list, denoted by $\sum_{0}^{i}$, of 0-simplices formed by its neighbors.
2. Edge construction: Sensor *v~i~* appends its identity to the vertices in $\sum_{0}^{i}$ to construct the list, say $\sum_{1}^{i}$, of all 1-simplices it belongs to. It also determines the number *n~i~* of its neighbors. Then it informs its neighbors about the 1-simplices it built.
3. Simplicial iteration: On receiving the information from its neighbors, sensor *v~i~* starts building the lists $\sum_{j}^{i}$, 2 ≤ *j* ≤ *n~i~*, by simply adding appropriately the structures it has received to the ones it has already constructed.
An informal explanation of the construction algorithm is as follows. Simplices of higher dimension are constructed iteratively. In the first iteration, the 2-simplices are constructed by applying the following rule: $$\left. < v_{i},v_{j} > \in \sum_{1}^{i}, < v_{i},v_{k} > \in \sum_{1}^{i}, < v_{j},v_{k} > \in \sum_{1}^{j}\rightarrow < v_{i},v_{j},v_{k} > \in \sum_{2}^{i} \right.$$for every *i, j,* and *k*, provided that *i ≠ j, i ≠ k, j ≠ k*. The rules used for the following iterations are similar.
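The 2-simplex rule above can be sketched as follows, assuming each node already holds the 1-simplex lists built during edge construction (names such as `build_two_simplices` are ours, and a real mote would evaluate the rule on locally received lists rather than in one global function):

```python
def build_two_simplices(edge_lists):
    """edge_lists[i] is the set of frozenset 1-simplices node i belongs to.

    Applies the rule: <v_i,v_j>, <v_i,v_k> known at node i and <v_j,v_k>
    reported by neighbor j imply the 2-simplex <v_i,v_j,v_k> at node i.
    """
    two = {i: set() for i in edge_lists}
    for i, edges in edge_lists.items():
        # neighbors of i, read off its 1-simplex list
        nbrs = sorted({v for e in edges for v in e if v != i})
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                j, k = nbrs[a], nbrs[b]
                # node i was told by neighbor j whether <v_j,v_k> exists
                if frozenset({j, k}) in edge_lists.get(j, set()):
                    two[i].add(frozenset({i, j, k}))
    return two
```

For example, a triangle 0-1-2 with a pendant node 3 attached to node 0 yields the single 2-simplex {0, 1, 2} at each of nodes 0, 1, 2 and nothing at node 3.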
3.3.. Homotopy and Retraction
-----------------------------
Let *X* and *Y* be two topological spaces and *f, g : X → Y* be two maps (continuous functions). We say that *f* and *g* are homotopic if there is a map *F* : *X* × \[0, 1\] *→ Y* such that $$F\left( {x,0} \right) = f\left( x \right),F\left( {x,1} \right) = g\left( x \right),\forall x \in X$$
Let *x*~0~ ∈ *X* be a given basepoint of *X*. A loop based on *x*~0~ is a map *α* : \[0, 1\] → *X*, such that *x*~0~ = *α*(0) = *α*(1). An equivalence relation on the set of all loops based at *x*~0~ can be defined by stating that loops *α*~1~ and *α*~2~ are equivalent if they are homotopic with respect to *x*~0~; meaning that there exists a homotopy *F* between *α*~1~ and *α*~2~ such that $$F\left( {0,t} \right) = F\left( {1,t} \right) = x_{0},\forall t \in \left\lbrack {0,1} \right\rbrack.$$
We denote the equivalence class of a loop *α* : \[0, 1\] *→ X* based at *x*~0~ by \[*α*\] and call it the based homotopy class of the loop *α*. The set of equivalence classes of loops based at *x*~0~ is denoted by *π*~1~(*X, x*~0~) and is called the fundamental group. It can be equipped with a multiplication defined by \[*α*~1~\] ★ \[*α*~2~\] = \[*α*~1~.*α*~2~\], for all loops *α*~1~ and *α*~2~ based at *x*~0~, where *α*~1~.*α*~2~ is the loop obtained by concatenating *α*~1~ and *α*~2~. A second homotopy group, denoted by *π*~2~(*X, x*~0~), can be defined as the set of homotopy equivalence classes of maps *β* : \[0, 1\]^2^ *→ X* based at *x*~0~. It is an Abelian group \[[@b19-sensors-11-09904]\].
On the other hand, a map *f : X → Y* is called a homotopy equivalence if there is a map *g : Y → X* such that *f* ○ *g* is homotopic to the identity map on *Y* and *g* ○ *f* is homotopic to the identity map on *X*. Thus, one can say that two spaces are homotopy equivalent if they have "the same shape".
A deformation retraction of a space *X* onto a subspace *A* ⊆ *X* is a map *f : X* × \[0, 1\] → *X* such that: $$f\left( {x,0} \right) = x,f\left( {x,1} \right) \in A,f\left( {a,t} \right) = a,\forall x \in X,a \in A,0 \leq t \leq 1.$$
In other words, the subset *A* is a deformation retract of the space *X* if, starting from the original space *X* at time 0, we can continuously deform *X* until it becomes the subspace *A* at time 1, without ever moving the subspace *A* in the process. It is obvious that, if *A* is a deformation retract of *X*, then *X* and *A* are homotopy equivalent.
Finally, let *K* be a complex. A retraction filtration of *K* is a nested finite sequence of subcomplexes *K~i~*, $$K_{0} \subseteq K_{1} \subseteq \ldots \subseteq K_{n} = K.$$such that, for all *k* ≥ 0, *K~k~* is a deformation retract of *K~k~*~+1~. Thus, it can easily be shown that *K*~0~ and *K~n~* have the same homotopy type and the same homotopy groups.
Let *T* = {*p*~1~, .., *p~k~*} be a simplex and *T*~1~ = {*p*~2~, .., *p~k~*} be one of its faces. Let *A* = *T* − (*T*~1~ − ∂*T*~1~) be the part of the boundary of *T* that is not internal to *T*~1~. Then *A* is a deformation retract of *T*.
Let *R*(*S*) be the Rips complex associated with *S*. Repeating this retraction process on the simplices that lie on the boundary of *R*(*S*), with faces external to *R*(*S*), leads to a filtration of *R*(*S*), say *K~k~*, 0 ≤ *k* ≤ *n*, such that, for all *k* ≥ 0, *K~k~* is a deformation retract of *K~k~*~+1~ and *K~k~*~+1~ is obtained from *K~k~* by adding one simplex that is external to *K~k~* and belongs to *R*(*S*). The final object *K*~0~ has no retractable simplex with an external face.
4.. Coverage Hole Management of Spherical Sensors
=================================================
In this section, we propose a novel distributed technique to count the coverage holes of a WSN using the retraction theory of spaces. In particular, we show that the Vietoris-Rips complex associated with the WSN can be reduced to a simpler space that is tightly related to the number of holes.
In the following, let *D* ⊆ ℝ^3^ be a compact domain in the 3D space ℝ^3^ and ∂*D* be its boundary. We consider that *D* contains no obstacles. We also consider that a collection *S* = {*v*~1~, .., *v~n~*} is deployed over domain *D* and that the sensors are equipped with local communication and sensing capabilities. In fact, each sensor is capable of communicating directly with other sensors in its proximity (within a given distance *r~b~*) and has a limited sensing range *ε*.
4.1.. Reducing the Vietoris-Rips Complex
----------------------------------------
We assume, in this subsection, a complete absence of localization capabilities and metric information, in the sense that the sensors in the network can determine neither distance nor direction. Under these assumptions, we are interested in designing distributed algorithms for coverage assessment and hole detection.
To this end, we first need to introduce a special procedure, called *Retract*, that reduces the size of the Vietoris-Rips complex while keeping its homotopy type. Repeating this procedure several times will eliminate all the 3-cells of the Vietoris-Rips complex.
Let *R~ε~*(*S*) be the Vietoris-Rips complex. Let {*v*~0~, \..., *v*~3~} be a 3-simplex in *R~ε~*(*S*) such that one of its 2-cells, say {*v*~0~, *v*~1~, *v*~2~}, does not belong to another 3-simplex in *R~ε~*(*S*). If no such situation exists, then one can easily deduce that *R~ε~*(*S*) has no 3-cells. Let *X*~1~ and *A*~1~ be the set of points *x* ∈ *R~ε~*(*S*) belonging to simplex {*v*~0~*, v*~1~*, v*~2~} and the subset of *X*~1~ generated by the other two faces, respectively. Then it is easy to construct a map *h*~1~ : *X*~1~ *×* \[0, 1\] *→ X*~1~ such that: $$h_{1}\left( {x,0} \right) = x,h_{1}\left( {x,1} \right) \in A_{1},h_{1}\left( {a,t} \right) = a,\forall x \in X,a \in A_{1},0 \leq t \leq 1.$$
Map *h*~1~ can be easily extended to a map $$\left. \mathit{Retract}:\mathit{R}_{\varepsilon}\left( S \right) \times \left\lbrack {0,1} \right\rbrack\rightarrow R_{\varepsilon}\left( S \right) \right.$$such that: $$\mathit{Retract}\left( {x,0} \right) = x,\mathit{Retract}\left( {x{,1}} \right) \in A,\mathit{Retract}\left( {\mathit{a},t} \right) = a,\forall x \in X,a \in A,0 \leq t \leq 1.$$where *A* is (*R~ε~*(*S*) − *X*~1~) ∪ *A*~1~.
Repeating the map *Retract* several times eliminates all the 3-simplices in *R~ε~*(*S*). The map *Retract* can then be reapplied several times to delete all 2-simplices and 1-simplices that have a free face. We denote the resulting space by $R_{\varepsilon}^{\mathit{red}}\left( S \right)$.
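The effect of *Retract* can be illustrated by the classical notion of an elementary collapse: a simplex can be removed together with a free face (a face contained in no other simplex of the complex) without changing the homotopy type. The following centralized sketch (the name `collapse` is ours; a real network would perform this with local message exchanges rather than a global loop) applies such collapses until none remain:

```python
from itertools import combinations

def collapse(complex_):
    """Repeatedly perform elementary collapses: remove a simplex sigma
    together with a free face tau, i.e. a codimension-1 face of sigma
    that no other simplex of the complex properly contains.
    complex_ is a set of frozensets, closed under taking faces."""
    K = set(complex_)
    changed = True
    while changed:
        changed = False
        for sigma in sorted(K, key=len, reverse=True):
            if len(sigma) < 2:
                continue
            for tau in map(frozenset, combinations(sigma, len(sigma) - 1)):
                # tau is free if sigma is its only proper superset in K
                if all(c == sigma or not (tau < c) for c in K):
                    K.discard(sigma)
                    K.discard(tau)
                    changed = True
                    break
            if changed:
                break
    return K
```

A filled triangle collapses down to a single vertex, while a hollow triangle (one hole) admits no free face and is left untouched, mirroring the fact that retraction preserves holes.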
**Proposition 4.1.** *Let S be a set of sensors. If R~ε~*(*S*) *is path-connected, then* $R_{\varepsilon}^{\mathit{red}}\left( S \right)$ *satisfies the following properties:*

- $R_{\varepsilon}^{\mathit{red}}\left( S \right)$ *is homotopy equivalent to R~ε~*(*S*)*;*
- *the number of holes delimited by* $R_{\varepsilon}^{\mathit{red}}\left( S \right)$ *is equal to the number of holes of the Vietoris-Rips complex R~ε~*(*S*)*.*
*Proof.* Applying the map *Retract* several times creates a retraction filtration of *R~ε~*(*S*) such that: $$R_{\varepsilon}^{\mathit{red}}\left( S \right) = K_{0} \subseteq K_{1} \subseteq \ldots \subseteq K_{n} = R_{\varepsilon}\left( S \right).$$where *n* is the number of 3-simplices in *R~ε~*(*S*). Since, for every *i*, *K~i~* is homotopy equivalent to *K~i~*~+1~, we deduce that $R_{\varepsilon}^{\mathit{red}}\left( S \right)$ is homotopy equivalent to *R~ε~*(*S*).
The second statement of the theorem follows from the following facts:

- a hole is a path-connected component that is surrounded by the delimiting space ($R_{\varepsilon}^{\mathit{red}}\left( S \right)$ and *R~ε~*(*S*));
- retracting a 3-simplex in *R~ε~*(*S*) may enlarge a hole but does not eliminate it;
- the retraction process does not create holes, since it operates only on the simplices that have free faces.
4.2.. Counting and Locating Coverage Holes
------------------------------------------
To count and locate holes, we set up a 3-step algorithm. In the first step, we construct the external boundary of *R~ε~*(*S*). This is the subset of *S* containing all the nodes occurring on free faces and facing the boundary ∂*D* of the domain. In the second step, we define an algorithm that detects holes by progressively transforming the external boundary, retracting all its external simplices. In the third step, the following process is repeated: one external 2-simplex is deflated, the *Retract* map is applied several times to reduce the newly appearing simplices with free faces, and the external boundary is updated. The number of iterations of this process gives the number of coverage holes.
### 4.2.1.. Constructing the Boundary of *R~ε~*(*S*)
Let us assume that the boundary ∂*D* of the domain **D** under monitoring can be seen (or detected) by the sensors in *S* and that the nodes in *S* broadcast their unique ID numbers periodically. The construction is based on the three following actions:

- Every sensor node detecting a boundary component of **D**, or finding itself on an external facet, sends this information to its neighbors.
- The information related to boundary detection, when received by the sensors, is put together to form the external boundary of *R~ε~*(*S*), by simply allowing every sensor node to know which neighbor is on the external boundary.
- The nodes broadcast information related to the external boundary of *R~ε~*(*S*) so that every node on the boundary can have a precise picture of the boundary.
### 4.2.2.. Counting Coverage Holes
Counting the coverage holes can be performed by an algorithm that iteratively repeats the following major procedures:

- **Boundary retraction:** Let *C~n~* be an *n*-simplex on the boundary of *R~ε~*(*S*) and *C*~*n*−1~ be one of its external faces. Then *C~n~* can be retracted using the procedure *Retract*, and the boundary is updated by adding a new node (the one in *C~n~* − *C*~*n*−1~), if *n* ≥ 2, or by deleting the node occurring in *C*~*n*−1~, if *n* = 1.
- **Boundary deflation:** When all the simplices on the boundary of *R~ε~*(*S*) have been retracted, a pre-selected node in *S* (in charge of the counter) selects one of the nodes of the new external boundary, withdraws it from the boundary, and increments the counter.
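As an illustrative (non-distributed) sanity check of the counter, consider the 2D analogue: once retraction has removed every 2-simplex with a free face, each coverage hole leaves one independent cycle in the remaining 1-complex, and the number of independent cycles of a graph is *E − V + C*, where *C* is the number of connected components. A minimal sketch, with `betti1` being our illustrative name:

```python
def betti1(vertices, edges):
    """First Betti number (independent cycles) of a graph: E - V + C.
    In the 2D analogue, after retraction has removed all 2-simplices
    with free faces, the remaining 1-complex has one independent
    cycle per coverage hole."""
    parent = {v: v for v in vertices}

    def find(x):
        # union-find with path halving to count connected components
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    comps = len(vertices)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            comps -= 1
    return len(edges) - len(vertices) + comps
```

A single 4-cycle gives one hole; two cycles sharing a vertex give two. In the 3D setting of the paper the counted holes are voids, but the counting principle is analogous.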
### 4.2.3.. Locating Coverage Holes
It is worth noticing that, when a deflation of a 2-simplex on the boundary of *R~ε~*(*S*) is applied after retraction is complete, a hole is removed from the coverage zone. This is because the node selected for deflation is observing the hole, since it is one of the nearest nodes surrounding the removed hole. Thus, this node can start the construction of the boundary of the removed hole by determining the list of the nodes immediately surrounding the hole.
One can conclude, therefore, that any time a deflation is operated, a hole can be located by simply constructing its boundary using the nearest nodes to that hole.
4.3.. Repairing Coverage Holes
------------------------------
Let us assume here that the 3D domain **D** under monitoring has no obstacles, and let us denote by *χ* (*χ* = 4*πε*^3^/3) the volume of the area covered by a sensor and by *Vol*(*D*) the volume of **D**. One can state that the number *\|S\|* of sensors in *S* should be higher than *N*~0~ = *Vol*(*D*)/*χ* to be able to guarantee full coverage of **D**, at least after hole detection and coverage optimization. Therefore, we will assume in the sequel that this condition is satisfied. Finally, we assume that the sensors are able to move and to detect the external boundary of **D** when they are close to it, as in the above subsection.
Repairing holes aims at extending the coverage by eliminating the holes, or at least by considerably shrinking their size. An algorithm can be defined for this purpose, built on the following general rules:

- A node detecting the external boundary ∂*M* should keep seeing the boundary when it moves.
- A node on the external boundary of *R~ε~*(*S*) should move towards the uncovered area when it does not see the boundary.
- When two neighbor nodes on the external boundary of *R~ε~*(*S*) are separated by a distance higher than a predefined threshold, say *θ*~1~, and one of them does not see the boundary of **D**, then the sensor unable to see the boundary asks its successor (*i.e.*, a neighbor involved in the retraction of the simplex containing this sensor) to move towards the external boundary.
- A node seeing the boundary should inform its neighbors so that they can move accordingly.
- When the distance between a sensor *s* and its neighbors on the boundary of a hole is lower than a predefined value, say *θ*~2~, then *s* should move in the opposite direction of the hole, while the other sensors should move towards the hole, so that when they see each other, *s* can withdraw itself from the minimal surface after informing its neighbors.
- A node on the external boundary finding itself unable to move informs its successor to move towards its direction.
5.. Target Tracking in 3D Domains
=================================
In this section, we use 3D Voronoi diagrams to optimize sensor coverage and target tracking performance. We first propose a strategy to measure the uncovered zones of the monitored region. Then, we develop two mobility models that provide target tracking using order k Voronoi diagrams and optimize the coverage ratio of a zone using Voronoi cells. Finally, we extend these models to multiple target tracking. We assume in this section that the sensors have spherical coverage. The vector-guided case can be addressed using similar techniques.
5.1.. Measuring Uncovered Areas
-------------------------------
Assume that a location *x* within the surveillance area is not covered by any sensor. Let ℒ(*x*, *θ*) define the Linear Uncovered Length (LUL) at location *x* with direction *θ*. This is the undetected path length of a target traveling from location *x* with direction *θ* = (*θ*~1~, *θ*~2~), for 0 ≤ *θ*~1~ ≤ 2*π* and *−π*/2 ≤ *θ*~2~ ≤ *π*/2.
The Average Linear Uncovered Length (ALUL), denoted by *ALUL*(*x*), introduced in \[[@b20-sensors-11-09904],[@b21-sensors-11-09904]\], for the 2D space, gives an approximation of the average distance that can be made by a target, moving in 3D space, before being detected by the sensor network. The Average Linear Uncovered Length (ALUL) function can be defined by the following formula: $$\mathit{ALUL}\left( x \right) = \begin{cases}
{0,} & {\text{if}\, x\,\text{is\ covered}.} \\
{\frac{1}{\left( {2\pi} \right)^{2}}{\int_{- \pi/2}^{\pi/2}{\int_{0}^{2\pi}{\mathcal{L}\left( {x,\theta_{1},\theta_{2}} \right)d\theta_{1}d\theta_{2},}}}} & {\text{otherwise}.} \\
\end{cases}$$
More generally, when *A* is a subregion of the 3D domain under supervision, the Average Linear Uncovered Length related to *A*, *ALUL*(*A*), that a target can travel within *A* without being detected by a sensor is given by the expression: $$\mathit{ALUL}\left( A \right) \equiv \frac{\int_{x \in A}{\mathit{ALUL}\left( x \right)\mathit{dx}}}{\left\| A \right\|},$$where \|\|*A*\|\| is the volume of *A*.
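The ALUL integral can be approximated numerically by Monte Carlo sampling of directions; the sketch below marches a ray from *x* until it enters some sensor's coverage ball. The ray-marching parameters `step` and `max_len`, and the uniform sampling of (*θ*~1~, *θ*~2~), are our illustrative choices, not taken from the paper (uniform sampling matches the *dθ*~1~*dθ*~2~ measure of the formula above, up to its normalization constant):

```python
import math
import random

def lul(x, theta1, theta2, sensors, eps, step=0.05, max_len=10.0):
    """Linear Uncovered Length L(x, theta): march a ray from x until it
    enters the ball of radius eps around some sensor (sketch)."""
    d = (math.cos(theta2) * math.cos(theta1),
         math.cos(theta2) * math.sin(theta1),
         math.sin(theta2))
    t = 0.0
    while t < max_len:
        p = tuple(x[i] + t * d[i] for i in range(3))
        if any(sum((p[i] - s[i]) ** 2 for i in range(3)) <= eps ** 2
               for s in sensors):
            return t
        t += step
    return max_len

def alul(x, sensors, eps, n=500, seed=0):
    """Monte Carlo estimate of ALUL(x): average L over sampled directions;
    zero by definition when x is already covered."""
    if any(sum((x[i] - s[i]) ** 2 for i in range(3)) <= eps ** 2
           for s in sensors):
        return 0.0
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t1 = rng.uniform(0.0, 2 * math.pi)
        t2 = rng.uniform(-math.pi / 2, math.pi / 2)
        total += lul(x, t1, t2, sensors, eps)
    return total / n
```

For instance, with a single sensor at the origin and *ε* = 1, a ray cast from (3, 0, 0) towards the origin yields ℒ ≈ 2, and ALUL vanishes at any covered point.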
The ALUL metric was developed for a static deployment, which is not the case in our study. When a mobility model is implemented, the topology of the WSN is no longer static. To overcome this, we extend the notion so as to support sensor node mobility. The ALUL should also vary with time and should use a function, denoted by ℒ(*x*, *θ*, *t*), that defines the Linear Uncovered Length at location *x* with direction *θ* at time *t*. Based on this reasoning, we define the metric *ALUL~m~*(*x, t*), representing the ALUL at location *x* at time *t*, obtained by replacing ℒ(*x*, *θ*~1~, *θ*~2~) with ℒ(*x*, *θ*~1~, *θ*~2~, *t*) in the definition of *ALUL*(*x*).
Due to sensor node mobility, the ALUL, over time, at a point *x* is expressed by: $$\mathit{ALUL}_{m}\left( x \right) = {\int_{0}^{\infty}{\mathit{ALUL}_{m}\left( {x,t} \right)\mathit{dt}.}}$$
Finally, *ALUL~m~*(*A*) can be computed by [Equation (8)](#FD8){ref-type="disp-formula"} by replacing *ALUL*(*x*) by *ALUL~m~*(*x*).
From the performance evaluation perspective, two important points should be highlighted:

- *ALUL~m~*(*A, t*) gives information about the coverage-preserving capabilities of the mobility model. It can be used to state whether the steady state is rapidly reached, and whether the mobility model affects the detection performance of the sensor network.
- *ALUL~m~*(*A*) provides information about the long-term behavior of the mobility model. It can be used to evaluate the impact of mobility on the possibility for a target to remain undetected within the monitored region.
5.2.. Mobility Models for Target Tracking
-----------------------------------------
In this section, we show how Voronoi cells can be used to implement target tracking using a sensor mobility model. We define two mobility models:

- The first model is called the k-mobility model. Sensor nodes in this model move toward the regions where the hostile target is supposed to be and collaborate to keep the target controlled by k sensors at all times. To this end, order k Voronoi diagrams are used and maintained at all times.
- The second model is called the simplified model. It relies on estimating the uncovered zones within the Voronoi cells, using the ALUL metrics, and moving sensor nodes toward the "uncovered zones".
While the first model is triggered by the occurrence of targets, the second model aims at adapting the covered area so that the targets can be detected with higher probability. Obviously, the k-mobility model is more energy-consuming than the second, since it encompasses the prediction of the target position and requires tracking by k sensor nodes. Therefore, we suppose that the second model can be used when energy resources become scarce. The performance of both models will be assessed in Section 7. Moreover, one can notice that the prediction function we are using is tightly related to the coverage of the zones where the targets are expected, and that the mobility models assume that the nearest sensor nodes can move to these zones while reducing the coverage of other zones where targets are not expected. In fact, the greater the number of target detection signals, the better the precision of the prediction used to command sensor movements.
### 5.2.1.. The k-Mobility Model
In the following, we distinguish two cases: (a) a target crossing a *k−*covered area and (b) a target crossing non *k−*covered zone.
### 5.2.2.. For a Target Crossing a *k−*covered Zone
The mobility algorithm is triggered upon the detection of a target presence. Every detecting sensor sends its detection signal to the relevant intermediate sensor (called IS). The latter collects all detection signals, verifies their integrity, deduces the current zone where the target might be, estimates the positions of the target in the next time slot, and commands k sensors to move to monitor the new zone so as to ensure tracking continuity.
Typically, the selected zone of target presence is chosen among other zones (when more than k sensors detect the target presence). These zones are ordered according to the probability of presence of the target. The selected zone is the one presenting the highest probability among those which are *k−*covered.
The mobility algorithm is defined through five steps:

1. Assume that *k*′ sensors detect the target (*k*′ \> *k*). The *k*′ sensors *s~i~*, 1 ≤ *i* ≤ *k*′, send their detection data *d~i~* to an intermediate node in the form: $$d_{i} = \left( {r_{t,i},\theta_{t,i},\tau_{t,i},s_{i}} \right)$$where *r~t,i~* = *δ*(*x~i~, z~t,i~*) is the Euclidean distance separating *s~i~* from the position *z~t,i~* of the target as seen by *s~i~*, *θ~t,i~* = (*α~t,i~*, *β~t,i~*) is the direction of the vector $\overset{\rightarrow}{z_{t,i}-x_{i}}$, and *τ~t,i~* is the detection instant.
2. In the case where detection signals are sent to different intermediate nodes, the intermediate nodes coordinate to gather all signals (or at least *k* of them) at a unique node IS, which first verifies the authentication of the messages.
3. IS constructs:
   - The zone of target presence $Z_{t,i}^{\tau}$ for each sensor, based on the errors made in the reported values. This zone is delimited by the following eight points: $$\left( {r_{t,i} \pm \Delta r,\alpha_{t,i} \pm \Delta\alpha,\beta_{t,i} \pm \Delta\beta} \right)$$as defined by the estimated detection errors.
   - The most likely target presence zone *Z^τ^*(*t*). Several strategies can be used for this, including selecting the largest intersection of k zones of the form $Z_{t,i}^{\tau}$. It can also be the largest union of k zones. Let *T* be the set of *k* sensors involved in the definition of *Z^τ^*(*t*). Then, IS computes the order k Voronoi cell *V^S^*(*T*). Obviously, it contains *Z^τ^*(*t*).
4. IS estimates the zone *Z*^*τ*,+^(*t*), where target *z~t~* is likely to be in the next time slot. Several strategies can be used for this estimation, including extrapolation of older positions or some information related to target direction and speed. It also estimates the most likely new position of *z~t~*.
5. IS selects *k* sensors based on a specific criterion and orders them to move towards *Z*^*τ*,+^(*t*) to increase its coverage.
If no criterion is used, then the order goes to the sensors in *T*. A criterion can simply be to reduce sensor movement.
When a criterion is applied for the selection of the k sensors to cover the new position, some of the selected sensors (say *k*″ sensors) may belong to *T* and the others (say *k − k*″) have to be added from among the neighbors of *T*. This situation is addressed in the following subsection.
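Assuming the usual spherical-coordinate convention for a direction (*α~t,i~*, *β~t,i~*), with *β* the elevation angle (our assumption; the paper does not fix one), the eight corner points delimiting a presence zone can be enumerated as follows (function and parameter names are illustrative):

```python
import math

def presence_zone_corners(x_i, r, alpha, beta, dr, dalpha, dbeta):
    """Eight corner points of a presence zone: all sign combinations of
    (r +/- dr, alpha +/- dalpha, beta +/- dbeta), converted from
    spherical coordinates around the sensor position x_i (sketch)."""
    corners = []
    for sr in (-1, 1):
        for sa in (-1, 1):
            for sb in (-1, 1):
                rr = r + sr * dr
                aa = alpha + sa * dalpha
                bb = beta + sb * dbeta
                corners.append((x_i[0] + rr * math.cos(bb) * math.cos(aa),
                                x_i[1] + rr * math.cos(bb) * math.sin(aa),
                                x_i[2] + rr * math.sin(bb)))
    return corners
```

With zero error bounds all eight corners coincide with the single reported target position, as expected.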
### 5.2.3.. For a Target Crossing a Non *k−*covered Zone
In this case, only *k*′ (*k*′ ≤ *k*) detection signals are received by the intermediate sensor IS, which should proceed with the construction of the probable current zone of presence of the target in the way the preceding algorithm does. Then it starts the selection of the remaining (*k − k*′) required signals, and orders the movement of the k sensors providing *k*-monitoring of the target. For this purpose, IS executes the following steps:

1. IS computes the most likely zone of target presence, say *z~t~*, using the *k′* reports from the *k′* sensors denoted by *s*~1~, \..., *s~k′~*.
2. For each *i* ≤ *k*′, IS selects the nearest *k* sensors to *s~i~*. It computes the related *k−*Voronoi cell $V_{i}^{(k)}$ and deduces the intersection $z_{t} \cap V_{i}^{(k)}$.
3. For each *i* ≤ *k*′, IS gets the number of sensors *k~i~*″, 0 ≤ *k~i~*″ *\< k*, that have sent detection signals to IS.
4. IS classifies the *k−*Voronoi cells according to the value of *k~i~*″. The greater *k~i~*″ is, the higher the probability of presence of the target in $V_{i}^{(k)}$. A small value of *k~i~*″ indicates that the target is moving into or out of the cell $V_{i}^{(k)}$.
5. IS selects the nearest *k* sensors involved in $\partial V_{i}^{(k)}$, where *k~i~*″ = *max*~*j*≤*k*′~*k~j~*″, and guides the (*k − k~i~*″) added sensors (among the nearest sensors to *s~i~*) to move towards $\partial V_{i}^{(k)}$. For that, it sends them a mobility instruction including the probability of presence of the target. A mobility instruction is defined by the 3-tuple $$\left( {r_{i},\alpha_{i},\pi_{i}} \right)$$where *r~i~* ≥ *δ*(*s~i~, p*) such that $\forall q \in \partial V_{i}^{(k)}$, *δ*(*s~i~, p*) ≥ *δ*(*s~i~, q*); $\alpha_{i} = \mathit{argmax}\,\hat{xs_{i}y}$, where *x, y* ∈ *v~i~* and *v~i~* is the set of the vertices of the boundary $\partial V_{i}^{(k)}$; and *π~i~* = *k~i~*″*/k* is the probability of presence of the target in $\partial V_{i}^{(k)}$.
To enhance coverage while keeping more mobility freedom, we implement a group mobility model in which ground sensors move in groups in order to preserve the *k−*coverage. To this purpose, at each mobility step, the sensors randomly define groups of *k* members each; the members are not required to be the nearest neighbors. Each group randomly chooses a head, which chooses the first mobility step. The remaining members of the group take this choice into account to determine their next mobility step. In this way, every sensor's mobility depends on the group it joins. Furthermore, a sensor may move from one group to another at each mobility step. This model enables the definition of overlapping *k−*Voronoi groups, which increases the guarantee of having *k−*coverage.
### 5.2.4.. Simplified Mobility Model
We propose hereafter a mobility model based on the use of simple Voronoi diagrams to identify and reduce coverage holes.
This model can serve to implement a mobility strategy where a sensor node looks for one or more neighbors that are at least 2*ρ*-distant from it. If such nodes exist, the sensor node moves toward the most distant neighbor, denoted by *n~f~*, with a distance $\frac{\delta\left( {s_{i},n_{f}} \right) - 2\rho}{2}$.
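A minimal sketch of this rule (function and parameter names are ours): if some neighbor is at least 2*ρ* away, the sensor moves towards the farthest such neighbor *n~f~* by the stated distance (δ(*s~i~*, *n~f~*) − 2*ρ*)/2:

```python
import math

def mobility_step(i, positions, neighbors, rho):
    """One step of the simplified model for sensor i: if some neighbor
    is at least 2*rho away, move halfway into the coverage gap towards
    the farthest one; otherwise stay put. Returns the new position."""
    xi = positions[i]
    far = [j for j in neighbors[i]
           if math.dist(xi, positions[j]) >= 2 * rho]
    if not far:
        return xi
    nf = max(far, key=lambda j: math.dist(xi, positions[j]))
    d = math.dist(xi, positions[nf])
    move = (d - 2 * rho) / 2
    # move along the unit vector from s_i towards n_f
    return tuple(xi[k] + move * (positions[nf][k] - xi[k]) / d
                 for k in range(3))
```

For example, two sensors of range *ρ* = 1 placed 4 apart leave a gap of 2, so each step moves the sensor 1 unit towards its neighbor; if *ρ* = 3, the balls already overlap and the sensor does not move.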
The following result extends this strategy to the case where the monitored region is required to be *k*-covered using the simplified algorithm. It uses a set, denoted by *X*(*s~i~*, *V* (*S*)), which defines the set of intersection points expressed as follows: $$X\left( {s_{i},V^{D}\left( S \right)} \right) = {\mathfrak{D}}\left( {V^{D}\left( {S\backslash\left\{ s_{i} \right\}} \right)} \right) \cap {\Gamma\left( {s_{i},R_{s_{i}}} \right)},$$where 𝔇, for a region *R* ⊆ **R**^3^, denotes the boundary of *R*.
For the sake of parsimony, we do not provide proofs for these results in this paper.
**Lemma 5.1.** *For s~i~ in S, if \|N*(*s~i~*, *V^D^*(*S*))*\| \< k, where \|.\| denotes set cardinality, then V^D^*(*s~i~*) *is not k-covered. For s~i~ in S, if \|X*(*s~i~*, *V^D^*(*S*))*\| \< k, then V^D^*(*s~i~*) *is not k-covered.*
This lemma shows how simple Voronoi diagrams can be used to detect coverage holes based on the distance between a sensor node and the edges of its Voronoi cell. It relies on the fact that the Voronoi tessellation partitions the points of the monitored area according to their proximity to the sensor nodes. In other terms, if a point is not detected by the sensor node located at the generator of the Voronoi cell it belongs to, it cannot be detected by any other sensor node. If a sensor detects that its distance to one of its Voronoi edges is greater than its coverage range, it has to move towards this edge to cover the corresponding hole. The uncovered area can therefore be gradually reduced using this distributed strategy. If a sensor node detects that more than one of its Voronoi neighbors does not fulfill the condition of the lemma, it moves towards the most distant neighbor.
The major advantage of this strategy, with respect to the advanced strategy, is that it relies on simple Voronoi diagrams to deal with *k*-coverage while the advanced model proposed in the previous subsection is based on order *k* Voronoi tessellations which are more complex to build.
A more accurate comparison between the two models will be carried out in the simulation section.
5.3.. Multi-Target Tracking
---------------------------
The two tracking models presented above can be extended to the tracking of multiple targets. To describe the extension, let us assume, for the sake of clarity, that only two targets are detected by sensors in *S*. Let *z~t~* and *z~t~*~′~ be the reported positions.
The extension of the simplified model considers two cases:

- Only one node has detected the presence of the two targets: in that case, the sensor keeps monitoring one of the targets, invites the neighbor nearest to the second target to monitor it, and provides that neighbor with the relevant information it has collected.
- More than one node has detected the targets: in that case, two sensors among those that have detected the targets are selected to keep monitoring the targets independently.
On the other hand, the k-mobility model extends in the following way: if d nodes detect the targets, these sensors are divided into two subsets, each in charge of monitoring one target; then the subsets are extended so that each of them contains k sensors.
6.. Complexity Analysis of Coverage Management and Tracking
===========================================================
6.1.. Complexity
----------------
In this section, we analyze the complexity of the different algorithms developed in the previous sections to detect and locate holes or to repair coverage holes. Our approach to estimating the complexity is based on the following metrics:

- the number of messages exchanged between the sensors during the execution of the algorithm;
- the number of additions and deletions of simplices in the Vietoris-Rips complex;
- the number of sensor movements made during the execution of an algorithm.
Some other operations can be added for a more accurate estimation of complexity. These metrics may include, for example, the number of storage operations made at the node level to update the related data structures. The messages exchanged during the execution of an algorithm can be of different types. In particular, they can be sent to a neighbor to tell it to change its status from internal (to the Vietoris-Rips complex) to external (*i.e.*, on the boundary of the Vietoris-Rips complex). They can also be used to construct the initial boundary of the Rips complex, or to reduce the external boundary. They can also be sent after the retraction or the deflation of a simplex, or be sent by a leader node to command a coordinated movement of sensors.
For the sake of clarity, we will focus on the complexity of the detection and counting of coverage holes. In this case, let *n* be the number of sensors in *S*, *e* the number of 1-simplices, *f* the number of 2-simplices, and *t* the number of 3-simplices in the Rips complex of *S*. Let also *p* be the number of vertices on the initial boundary of the Rips complex.
The number of messages sent during the execution of the algorithm should be lower than or equal to the number of messages exchanged if all the polyhedra (external and internal) had been retracted first and, after deflation, all the facets had been retracted. In that case, one can state that the number *N* of messages sent is given by: $$N = p + \left( {f - p} \right) + \left( {e - \left( {f - p} \right)} \right) = p + e \leq \left| S \right| + e,$$where *p* is the number of external vertices. This result can be deduced from the preceding equality and the fact that *p* ≤ *\|S\|*.
Now let us assume, without loss of generality, that the deployment of sensors (initial and current) guarantees that every node in the Rips complex of *S* has at most *v* neighbors (*v* is a fixed value depending on the volume of the area to monitor and the radius of coverage). Then, since *e* is smaller than *v × \|S\|*, we deduce that $$N \leq n + v \times \left| S \right| \leq \left( {v + 1} \right)n.$$Let us note that the assumption is not mandatory and that a direct proof can be given. This shows that the algorithm to detect and count the coverage holes has linear complexity.
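The accounting behind this bound can be checked mechanically: the facet and vertex terms telescope, leaving *p* + *e* whatever the individual counts are (a trivial sketch, with illustrative names):

```python
def message_bound(p, e, f):
    """Message count from the retraction/deflation accounting:
    p boundary-construction messages, (f - p) facet retractions,
    and (e - (f - p)) edge retractions; the f and p terms cancel,
    so the total is always p + e."""
    n_msgs = p + (f - p) + (e - (f - p))
    assert n_msgs == p + e  # the telescoping identity
    return n_msgs
```

For instance, `message_bound(10, 30, 20)` returns 40 = *p* + *e*, independently of *f*.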
6.2.. Extending Results
-----------------------
The results presented in the previous sections can be extended in two dimensions: the type of the sensors and the occurrence of obstacles in the domain under monitoring.
**Coping with obstacles.** The algorithms developed in the preceding sections can be adapted to the occurrence of obstacles. Obstacles in monitored 3D areas may seriously complicate the role of monitoring sensors, increase their power consumption, and limit the coverage efficiency. Two particular aspects of our algorithms have to be modified. First, the coverage holes that are counted should not contain obstacles (one can assume for this that the sensors are able to recognize an obstacle). Second, the mobility model used to increase coverage or to provide tracking should consider moving the sensors vertically as an alternative.
**Coping with semi-spherical sensors.** The algorithms developed in the preceding sections can be extended to semi-spherical sensors (sensors having a semi-spherical coverage area). It is worth noticing, at this point, that this type is sufficiently general to represent various sensor-based applications. In particular, the model can be used to represent fire and smoke sensors or camera-based sensors. To cope with semi-spherical sensors, one can notice that the concepts of Vietoris-Rips complex and Voronoi diagram can be extended, so that coverage holes can be handled in a similar way. However, when repairing a hole, the mobility model of the sensor should include rotating a sensor to increase the coverage of a specific area by the sensor.
It is worth noticing that the additions made to the developed algorithms do not modify significantly the complexity of the algorithms. In particular, the complexity of the hole counter remains linear (as shown in the simulation discussed in the following section).
7.. Experimental Results
========================
In this section, we carry out a set of experiments to prove the efficiency of the proposed techniques. We first address the coverage hole problem by evaluating the performance of the higher-order Voronoi-based strategy for coverage optimization. To this purpose, we define a metric representing the ratio of uncovered area with respect to the total area of the monitored region. Second, we assess the target tracking approach by estimating the maximum linear distance that can be traveled by a hostile target without being detected. Finally, we evaluate the complexity of our coverage control and mobility techniques. We use the number of transmitted messages as the main criterion to estimate this complexity, since data transmission consumes much more power than computational steps in WSNs.
7.1.. Coverage Control and Hole Reduction
-----------------------------------------
The first experiment aims at evaluating the hole reduction strategy based on three-dimensional Voronoi tessellations with spherical coverage. We define the following metric to evaluate the performance of hole reduction. $$\mu = \frac{\text{Sum\_of\_hole\_volumes}}{\text{Total\_volume\_of\_the\_monitored\_area}}.$$
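As a sanity check of this metric, *μ* can be approximated by Monte Carlo sampling; the sketch below assumes spherical coverage, and the sensor layout and range are hypothetical choices for illustration, not the experimental setup.

```python
# A minimal Monte Carlo sketch of mu = (sum of hole volumes) / (total volume),
# assuming spherical coverage. Sensor layout and range are hypothetical.
import random

def mu_estimate(sensors, radius, dims, samples=20000, seed=0):
    """Estimate mu by sampling random points and testing sphere coverage."""
    rng = random.Random(seed)
    lx, ly, lz = dims
    uncovered = 0
    for _ in range(samples):
        x, y, z = rng.uniform(0, lx), rng.uniform(0, ly), rng.uniform(0, lz)
        if not any((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2 <= radius ** 2
                   for sx, sy, sz in sensors):
            uncovered += 1
    return uncovered / samples

# One sensor of range 0.5 m centred in a 1 m cube: the exact uncovered ratio
# is 1 - pi/6 (about 0.476), so the estimate should land near that value.
print(mu_estimate([(0.5, 0.5, 0.5)], 0.5, (1.0, 1.0, 1.0)))
```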
[Figure 1](#f1-sensors-11-09904){ref-type="fig"} shows the evolution of *μ* according to the number of iterations of the coverage hole reduction algorithm. We compared the Voronoi-based hole reduction strategy with the Homotopy-based strategy proposed in Section 4. It can be noticed that the difference in terms of normalized covered proportion is about 15% when the number of iterations is low. In addition, when the number of iterations exceeds 30, both approaches perform well since the normalized covered proportion becomes higher than 90%.
A similar experiment is conducted for vector-guided sensors, assuming that at every step of the iteration the mobility is provided along with an orientation of the vector to achieve better coverage. [Figure 1](#f1-sensors-11-09904){ref-type="fig"} shows the evolution of *μ* according to the number of iterations of the coverage hole reduction algorithm and compares the Voronoi-based hole reduction strategy with the Homotopy-based strategy. One can conclude that while the homotopy-based approach is less complex, since it is linear for detection and localization, the Voronoi-based method reaches better results. In addition, a comparison between the results obtained for spherical sensors and semi-spherical sensors shows the following. The approach performs better with spherical sensors for the first iterations: the normalized covered proportion reaches 70% with spherical sensors after 10 iterations, while it stays under 10% for semi-spherical sensors. After 30 iterations, the approach performs the same for both types of sensors.
This can be explained by the fact that the density of sensors is the same for both types and, therefore, it takes more mobility steps for semi-spherical sensors to fill the coverage holes.
7.2.. Mobility Modeling
-----------------------
The Average Linear Uncovered Length (ALUL) gives an approximation of the average distance that can be traveled by a target, moving in 3D space, before being detected by the sensor network; ℒ(*x*, *θ*~1~, *θ*~2~, *t*) denotes the uncovered length from location *x* in the direction (*θ*~1~, *θ*~2~) at time *t*. The metric *ALUL~m~*(*x, t*) representing the ALUL in a location *x* at time *t* is given by: $$\mathit{ALUL}_{m}\left( {x,t} \right) = \begin{cases}
{0,} & {\text{if}\, x\,\text{is\ covered\ by\ a\ sensor}.} \\
{\frac{1}{\left( {2\pi} \right)^{2}}{\int_{- \pi/2}^{\pi/2}{\int_{0}^{2\pi}{\mathcal{L}\left( {x,\theta_{1},\theta_{2},t} \right)d\theta_{1}d\theta_{2},}}}} & {\text{otherwise}.} \\
\end{cases}$$
From the performance evaluation perspective, *ALUL~m~* provides information on the coverage-preserving capabilities and the long-term behavior of the mobility model.
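A hedged numerical sketch of *ALUL~m~* follows: it averages, over a grid of sampled directions (*θ*~1~, *θ*~2~), the distance a target can travel from *x* before entering some sensor's spherical coverage. The uniform direction grid, ray-marching step, travel cap, and sensor layout are all illustrative simplifications of the integral above, not the paper's implementation.

```python
# A hedged numerical sketch of ALUL_m(x, t): average, over a uniform grid of
# directions (theta1, theta2), the distance a target can travel from x before
# entering some sensor's spherical coverage. Grid, step, cap and sensor layout
# are hypothetical simplifications of the double integral in the text.
import math

def covered(point, sensors, radius):
    px, py, pz = point
    return any((px - sx) ** 2 + (py - sy) ** 2 + (pz - sz) ** 2 <= radius ** 2
               for sx, sy, sz in sensors)

def linear_uncovered_length(x, direction, sensors, radius, step=0.05, cap=10.0):
    """March along `direction` from x until a sensor covers the point."""
    dx, dy, dz = direction
    t = 0.0
    while t < cap:
        if covered((x[0] + t * dx, x[1] + t * dy, x[2] + t * dz), sensors, radius):
            return t
        t += step
    return cap

def alul(x, sensors, radius, n1=9, n2=18):
    if covered(x, sensors, radius):
        return 0.0                      # covered starting point: ALUL is 0
    total = 0.0
    for i in range(n1):                 # theta1 in [-pi/2, pi/2]
        t1 = -math.pi / 2 + (i + 0.5) * math.pi / n1
        for j in range(n2):             # theta2 in [0, 2*pi)
            t2 = (j + 0.5) * 2 * math.pi / n2
            d = (math.cos(t1) * math.cos(t2),
                 math.cos(t1) * math.sin(t2),
                 math.sin(t1))
            total += linear_uncovered_length(x, d, sensors, radius)
    return total / (n1 * n2)

sensors = [(1.0, 0.0, 0.0)]             # one sensor of range 0.5 m, 1 m away
print(alul((0.0, 0.0, 0.0), sensors, radius=0.5))
```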
In order to visually illustrate the performance of coverage reduction models, we use the local node density distribution that gives the number of sensors that cover every point of the monitored region. [Figure 2](#f2-sensors-11-09904){ref-type="fig"} shows that, in the simple context where one target is moving within a 100 m^2^-size monitored zone, the coverage degree considerably varies according to the proximity to the mobile target. In fact, after 5 mobility steps, the local sensor density is less than 1 in regions that are far from the target location (which is (70,30)) and reaches 2.7 at points that are close to this target.
More interestingly, [Figure 3](#f3-sensors-11-09904){ref-type="fig"} addresses the case where two targets are present within the region of interest. We notice that the sensors are initially uniformly distributed. The density then increases for the three following iterations in the regions where the targets are. In fact, this proves that our tracking scheme is precise enough to distinguish between the two different targets.
To confirm these results, we used the ALUL*~m~* metric to evaluate the evolution of the uncovered area with respect to time. In fact, this allows us to know whether uncovered regions are created due to the density increase in the zones that are close to the target. We compared our scheme to four known mobility models: the random walk model, the random waypoint model, the random direction model, and the Gauss-Markov model. The results of this comparison are depicted in [Figure 4](#f4-sensors-11-09904){ref-type="fig"}.
We notice that the proposed mobility models, denoted by Advanced Voronoi-based Mobility Model (AVBMM) and Distributed Voronoi-based Mobility Model (DVBMM), clearly outperform the existing models. They also return a better performance than the Density-Preserving Mobility Model. This is because the latter model, despite its ability to guarantee a nearly uniform node density within the monitored area, does not take into account the presence of hostile targets in the zone of interest.
7.3.. Complexity Evaluation
---------------------------
In this subsection, we evaluate the communication overhead resulting from the proposed retract-based coverage control approach. To this end, we only consider the complexity of the detection and localization steps of our algorithm and do not address the complexity of the repair step, since the latter mainly depends on the initial deployment. However, one can easily deduce that if the deployment guarantees that hole sizes do not exceed a threshold, then the linear complexity still holds.
We considered that the dimensions of the monitored zone are 10 m × 10 m × 3 m. We varied the number of nodes deployed within this zone and measured the number of messages required to set up our coverage control protocol. We first supposed that all sensor nodes have a spherical coverage of range 0.5 m. [Figure 5](#f5-sensors-11-09904){ref-type="fig"} depicts the number of messages for densities ranging from 0.5 to 5 sensors per m^2^. The main observation is that this number is nearly linear with respect to the number of sensors per area unit.
Moreover, we considered the case where sensors have semi-spherical coverage (with the same range). [Figure 5](#f5-sensors-11-09904){ref-type="fig"} shows that the communication overhead is also linear in this situation but with a smoother slope. One can deduce the following statements from the aforementioned figures. First, the number of exchanged messages per sensor is independent of the density: it is close to 4 for spherical sensors and 10 for semi-spherical sensors. This fact may appear strange; however, one can notice that when a deployment is performed, the detection and localization will only search for holes surrounded by the Vietoris complex, and the latter is reduced when the density is low. Second, the number of messages exchanged by the semi-spherical sensors for detection and localization is 2.5 times higher than the number observed for spherical sensors. Two reasons can be mentioned for this: first, the area covered by a semi-spherical sensor is half the area covered by a spherical sensor; second, the guiding vectors are randomly oriented.
8.. Conclusion
==============
This paper developed a low-complexity approach to detect and localize sensing holes in 3D spaces. It also constructed efficient algorithms to repair holes and track multiple targets. Our approach builds on two concepts, the Vietoris complex and the Voronoi diagram, and demonstrated that the technique called retraction by deformation achieves low-complexity algorithms for the detection of coverage holes in WSNs.
Our approach can be easily extended to more general sensors, for which the Vietoris complex and the Voronoi diagram can be defined. Such sensors can be called conical sensors or vector guided sensors and can represent camera sensors.
{#f1-sensors-11-09904}
{#f2-sensors-11-09904}
{#f3-sensors-11-09904}
{#f4-sensors-11-09904}
{#f5-sensors-11-09904}
|
Sahib Singh Vs. State of Haryana [1997] INSC 621 (28 July 1997)
M. K. MUKHERJEE, S. SAGHIR AHMAD
ACT:
HEADNOTE:
S. SAGHIR AHMAD, J. Hallucination, as a disease, is an apparent perception without
any corresponding external object. It is defined as any of numerous sensations,
auditory, visual or tactile, experienced without external stimulus and caused
by mental derangement or intoxication. It may occur with relation to any of the
special senses, namely, hearing sounds or seeing things that do not exist.
2. The
prosecution in this case presents before us a story of Hallucination where a
dead person is seen by the eye-witnesses to have come armed with a gun, fired
the gun at one of the witnesses who was injured and then was seen running away
with other people including the appellant, towards another village never to be
found again. The appellant was seen in the company of that dead person,
shoulder to shoulder, armed with a gun and triggering it to keep pace with the
activities of his companion, the dead.
3. Prosecution
unfolds its story by ushering us into an era when the Punjab was writhing in pain of militancy.
4.
Village Pipaltha, P.S. Garhi Distt. Jind, where Om Prakash (deceased) lived with
his three sons, Dharam pal (P.W 10), Surinder (P.W. 11) and Suresh (P.W.12)
(fourth is not material) was targeted by terrorists resulting in the death of Om
Prakash and gunshot injuries to his son, Suresh.
5. The
appellant was prosecuted and tried by the Additional Judge (Designated Court, Rohtak
at Jind) and convicted for offences u/s 302/34 IPC read with Section 3(2) of
the Terrorist & Disruptive Activities (Prevention) Act, 1987 (for short,
the `Act') with a fine of Rs.200/- or else further rigorous imprisonment for
one year; under Section 452/34 IPC (Sentence: 3 years R.I. with a fine of
Rs.100/- or else 3 months further R.I.) under Section 307/34 IPC (Sentence: 7
years R.I.); and under Section 394/34 IPC (Sentence: 10 years R.I. with a fine
of Rs. 200/- or else R.I. for one year).
6.
House of Om Prakash which also contained a shop at which Dharam Pal and Surinder
used to sit, was located almost in the centre of the Village in a busy
locality. a short distance away was another shop at which Suresh and his
brother, the fourth son of Om prakash,
used to sit.
7. On
18.11.1991 at about 6.30 P.M. while Om Prakash was at the shop of his two sons,
Dharam Pal and Surinder, two young Sikhs armed with small guns, came and asked Om
Prakash to hand over his revolver but Om Prakash who did not possess a revolver
offered his 12 bore gun. The two Sikh youths, at the point of gun, brought all
the three, namely, Om Prakash, Dharam Pal and Surinder on the street where a
group of three other young Sikhs were standing on the right side of the Shop
while another group of three or four Sikh youths, which also included the
appellant, was standing in front of their shop. All of them were holding small
guns and were between the age group of 25-30 years. One of the two Sikh youths,
brought out a Hero-Honda Motor Cycle from the shop and wanted Om Prakash to sit
on the Motor Cycle but Om Prakash refused and while trying to
run inside the shop, he was fired upon. He attempted to enter the room on the
rear of the shop but all the Sikh youths present there started firing
indiscriminately as a result of which he received injuries on various parts of
his body. While Dharam Pal and Surinder managed to escape, Suresh, who was at
the other shop, came running to help them but was injured in the firing. All
the Sikh youths then went away towards village `Rewar'.
8. Om Prakash was taken to a hospital at Narwana where he
was declared dead while Suresh, who was medically examined there, was admitted
for treatment.
9.
After due investigation, a charge-sheet was submitted only against the
appellant who was tried and ultimately convicted as aforesaid.
10.
The appellant, from the very beginning, had denied the prosecution story and
had contended that he had been falsely implicated on account of enmity as civil
and criminal cases were pending even on the date of incident between him and
other members of the family of Om Prakash. He,
in that connection, examined one witness in defence and also brought on record
certain documents including a copy of the order passed by the Punjab & Haryana
High Court in Criminal Miscellaneous case No. 6397 (M) of 1992.
11.
Let us find out the truth.
12.
The statement of three eye witnesses one of whom was an injured witness as also
the appellant's confessional statement recorded by the police under Section 15
of the Act, constitute the basis of his conviction for the offences in
question.
13. So
far as eye witnesses are concerned, they are three, namely, Dharam Pal
(P.W.10), Surinder (P.W.11) and Suresh (P.W.12). They are sons of Om Prakash
(deceased). Suresh (P.W. 12) is an injured witness. These witnesses speak of
the appellant's presence at the spot with a gun with one Kala Singh who was
also armed with a gun.
14. It
is contended by the learned counsel for the appellant that although the
incident had taken place at 6.30 P.M. on
18.11.1991 in the market area, the prosecution did not produce any independent
eye witness and attempted to prove its case only through interested eye
witnesses who were the sons of the deceased. It is contended that in such a
situation where the independent witnesses, in spite of being available were not
produced, the conviction cannot be sustained merely on the testimony of highly
interested witnesses particularly in view of the fact that Om Prakash
(deceased) and his family members including his three sons who have been
produced as eye witnesses were on inimical terms with the appellant and had
even tried earlier to implicate him and his father in a false criminal case
involving, incidentally, the same Kala Singh in whose company the appellant, in
the instant case, has been placed.
15.
The contention that the prosecution had relied only upon witnesses who are
family members of the deceased and are thus highly interested cannot, by
itself, be a ground to reject their statements. Witnesses who are related to the
deceased are as competent to depose the facts as any other witness. Mere
relationship does not disqualify a witness. If the incident had taken place at
a time or under such circumstances that there was no possibility of any other
person being present at the spot, except those who were related to the
deceased, those persons, namely, persons related to the deceased, will be
competent to depose the facts seen by them. Even if the possibility of
independent witnesses being present is not ruled out, the witnesses related to
the deceased would still be competent witnesses.
All
that has to be shown is that the witnesses were stating the truth. The Court
itself, in order to find out whether what they had stated was true or not would
scrutinise their evidence with care and caution. In Kartik Malhar vs. State of
Bihar (1996) 1 SCC 614 : 1996 Cr.L.J. 889, decided by a Bench of this Court of
which one of us (Saghir Ahmad, J.) was a member, it was held:- "A close
relative who is a natural witness cannot be regarded as an interested witness.
The term `interested' postulates that the witness must have some direct interest
in having the accused somehow or the other convicted for some animus or for
some other reason."
16.
This contention raised on behalf of the appellant will be considered a little
later to find out whether the witnesses had the motive to secure the conviction
of the appellant and were, therefore, interested witnesses.
17. Dharam
Pal (P.W.10) has stated that he had taken his father Om Prakash and brother
Suresh to the hospital at Narwana where they reached at about 10.00 p.m. The police outpost at Pipaltha had already radioed
the message to police Station, Garhi which was received by A.S.I Dharam Singh
(P.W. 16) at 6.50 P.M. It is not disputed that police
Station, Garhi falls on way to Narwana but there too the matter was not
reported. That by itself would not be relevant as Dharam Pal who was taking his
father and brother to the hospital might have been in a hurry to save their
lives. What is, however, relevant is that Surinder (P.W.11), the other son of Om prakash remained in the village and did not company
his father or the injured brother to the hospital. He had full opportunity of
going to the police station to lodge the report but there is no explanation
forthcoming as to why this was not done. Dharam Pal, in his statement on oath,
has stated that there was a police outpost in his village but there too, no
report was lodged.
18.
The police of P.S. Garhi, which already knew of the incident, having been informed
by the police Outpost, Pipaltha, reached the hospital at 9.30 p.m. Om Prakash had already been declared dead by the doctors
at the hospital.
The
statement of Dharam Pal was recorded by the police at the hospital at 10.50 P.M. on 18.11.91 after obtaining the opinion of the
doctors that Suresh (P.W. 12) who was injured in the incident in question, was
not in a fit condition to make the statement. On the basis of the statement of Dharam
Pal, a formal F.I.R. No. 237 was recorded at police Station, Garhi at 12.15 A.M. on 19.11.91 in which the appellant was named as an
accused. The special Report which was sent to the magistrate at Narwana was
received by him at 4.00
A.M.
19. It
would be relevant here to reproduce the following passage from the statement of
Dharam Pal (P.W.10):- "My brother Surender remained in the Village. I
cannot tell whether Surender my brother made any report to the police in Vill Pipaltha
when we had taken our father to Hospital. We did not lodge any report with the
police station Garhi as we were first of all to save our father. It is correct
that if we come from village Pipaltha to Narwana, P.S. Garhi is located on the
road on the way to Narwana. We reached the Hospital at about 8.10 P.M. The police came at about 9.30 P.M. in the Hospital. On our arrival in the Hospital, the doctor
had declared our father as dead. My statement was recorded at about 10.15 PM by the police. My statement was recorded only at
that time and was not recorded subsequently. I did not make any supplementary
to the police in this case after my statement was recorded by the police in Civil Hospital. I stated to the police in that very Hospital after about
2/3 hours of my recording the statement, name of Kala Singh but no statement to
that effect was recorded by police at that time. I had fully recognised Kala
Singh and he was standing with Sahab Singh near the
wall of Bharthu. Kala Singh had also fired shots as all the eight were firing
while running after us. I had not stated the name of Kala Singh in my statement Ex.
PD to the police. I had stated the name of Kala Singh afterwards."
20.
The chronology of events indicates that the F.I.R. was registered after the
statement of Dharam Pal was recorded by the police at the hospital and further
that although Surinder remained in the Village, he did not go to the police
station to lodge the report. This chronology further indicates that the F.I.R.,
in this case, was lodged after unreasonable "delay" and after due
deliberation. Normally, this delay would have been ignored but if it is
considered in the light of the statement of witnesses, which we shall presently
scrutinise, it would come out that this "delay" was deliberate and
meaningful.
21.
Admittedly, there is positive enmity between the appellant and his family
members on the one hand and Om Prakash and his family members on the other. The
following extract from the statement of Dharam Pal would bring out the factum
of enmity existing between the parties:- "Lakhi and Giani Harijans were
employed by us to work in the fields alongwith other workers on daily wages
whenever we felt any necessity. They were not our regular employees. I do not
know whether Lakhi and Giani got registered a case against Sucha Singh and two
brothers of Sahab Singh accused at our instance after this occurrence. It is
correct that a criminal case under Section 325 IPC etc. was pending in the
Court of JMIC, Narwana against Sahab Singh etc. accused and against us, prior
to this occurrence. A civil litigation had also proceeded between us and Sahab
Singh accused prior to this occurrence. We and Sahab Singh accused were on
inimical terms prior to this occurrence due to civil and criminal litigation
between us."
22. To
the same effect is the statement of Surinder (P.W.11) who stated as under:-
"It is correct that civil and criminal litigation between us and Sahab
Singh accused is still pending in the courts and it was also pending at the
time of alleged occurrence. Both of us were challaned in a case under Sec. 325
IPC and cross-cases against Sahab Singh and also against us was pending at the
time of occurrence.
I had
also told the police about the enmity."
23.
Suresh Kumar (P.W.12) , who is an injured witness, also admitted that he and
Sahib Singh were on inimical terms.
24. It
is in this background that the statement of these three eye witnesses, who are
real brothers, are to be analysed to find out whether the occurrence did take
place in the manner stated by them and whether in that incident Sahib Singh and
Kala Singh participated and fired at Suresh Kumar (P.W.12) or at Om Prakash
(deceased).
25. Dharam
Pal, in his statement, narrated the incident in the following words:- "One
Sikh youth remained standing inside the shop while the other Sikh youth came
outside and took out personal search. The Sikh youth who took personal search
brought our motor-cycle from the shop outside. The motor cycle was of
Hero-Honda make bearing registration No. HR-32/0218. The Sikh youth who took
out the motor-cycle from the shop made my father to sit forcibly on it but my father started walking inside the shop.
The
Sikh youth standing inside the shop fired a shot from his fire arm which hit my
father on the left side of the chest. The Sikh youths who were standing outside
the shop started firing indiscriminately on my father which hit him on the
chest, back and on the hand etc. My father fell down inside the room next to
the shop and we ran away but at that time while we were running, sahab Singh
accused tried to catch hold of us but we succeeded in getting rid off Sahab
Singh etc. all the 8 Sikh youths chased us and were firing. On hearing the
noise of shots, my brother Suresh and sadhu came towards the side of our shop
and while he was crossing the street, he (Suresh) received gun shot
injury."
26. Surinder
(P.W. 11) narrated the incident in the following words:- "We saw that
three Sikh youths were standing in front of shop of Bharthu and three Sikh
youths were standing near the wall of Fatia Kumhar. They were also armed with
small size guns. Sahab Singh accused present in the court today was one of the
three Sikh youths who were standing in front of the shop of Bharthu. Kala Singh
was also standing at that time with Sahab Singh, Out of the two Sikh youths,
who took out us, one of them took out personal search and one of them remained
standing before us aiming the gun towards us. The Sikh youth conducted our
search took out our Hero Honda Motor-cycle from the shop asked my father to sit
on the carrier of that motor-cycle and he also forcibly tried to make my father
sit on the carrier of the motor cycle but my father gave him a push and moved
towards the shop. One of them fired at my father in the left side of the chest.
The Sikh youth who was standing inside the shop came out and all the Sikh
youths then fired at my father who was in the shop at that time. Rather my
father had entered the next room in which the shop was opening from behind at
that time. Sahab Singh and Kala Singh had also fired my father at that time and
were two of the eight. My father received injuries on the back, near the right
hip-region. He also received injuries on back, hands etc. My father fell down
in the room as a result of injuries sustained. We, i.e., I and my brother Dharam
Pal, tried to run away but Sahab Singh accused tried to catch hold of us but we
escaped and ran towards the street and concealed ourselves."
27.
Suresh Kumar (P.W.12) narrated the incident in the following words:- "I
was resident at Pipaltha along with my brothers and father about 1 1/2 years
ago. We were having two shops at village Pipaltha. At one of the shops, my
father, Om Prakash, brother Dharam Pal and Surender used to sit while on the
other shop my brother Sadhu Ram and I used to sit. On 18.11.1991, I was present
at my shop. Sadhu Ram was also present at that time. We heard the noise of gun
shots. Sadhu Ram, my brother, went via street which runs by the side of the
houses while I was going to my house through the main street. 8 persons
including Sahab Singh accused were coming while firing. Kala Singh was also one
of them. Sahab Singh fired at me which hit my arm. Kala Singh had also fired
at me and which also hit me at my right arm.
The
accused went towards village Rewar."
28. He
further stated in the cross-examination as under:-
29.
From the above, it would appear that so far as main incident is concerned, Dharam
Pal and Surender who were present at the shop and had seen the whole of the
incident are not consistent. While Dharam Pal and Surender both stated that Kala
Singh and the appellant were present at the spot and both were armed, Dharam
Pal did not specifically say that the appellant had fired at Om Prakash nor did
he say that Kala Singh had fired at his father. The job of firing was
attributed to other Sikh youths present at the spot. Surender (P.W.11), on the
contrary, specifically stated that Sahib Singh had fired at his father.
30.
Suresh Kumar (P.W.12) speaks of the presence of Kala Singh along with the
appellant among the group of eight Sikh youths who had come to the shop of Dharam
Pal and Surender.
He
stated in his examination-in-chief that Sahib Singh had fired at him which had
hit his arm. He also stated that Kala Singh had also fired at him which had hit
his right arm. In cross-examination, he repeated that he had received two
gunshot injuries as two shots were fired at him; one by Kala Singh and the
other by Sahib Singh.
31.
Who is this Kala Singh?
32. Dharam
Pal, in his cross-examination, has stated that he knew Kala Singh from his
childhood as he was the resident of village Pipaltha which he had left about 2
or 3 years prior to the occurrence but his family members still lived in the
village.
33. On
account of the enmity between the parties, appellant's father Sucha Singh and
others were implicated in a case relating to the "harbouring" of Kala
Singh in their house. This case was initiated on the basis of the FIR lodged by
Lakhi Ram under Section 216-A IPC read with Sections 4(3), 3 and 6 of the Act
on the ground that Kala Singh was harboured by Sucha Singh and others in their
house. This FIR was challenged by the accused, involved in that case, in
Criminal Miscellaneous petition No. 6397-M of 1992 and Criminal Miscellaneous
petition No. 7728-M of 1992.
Both
the petitions were allowed by Justice G.S. Chahal of the Punjab & Haryana
High Court by judgment dated December 1, 1992 with the finding that Kala Singh
had already been killed by the police on October 31, 1991, prior to the
registration of the case and, in any case, the allegations made in the FIR did
not make out any case of "harbouring".
34.
Since Kala Singh had already been killed by the police on October 31, 1991, there was no occasion that he
would be present at the spot on 18.11.91 when the incident, giving rise to this
case, took place. All the three eye witnesses, examined in this case, testify
to the presence of a dead person at the spot. All of them, therefore, speak a
lie.
When
they saw appellant to be present at the spot in the company of Kala Singh, they
again speak a lie as the appellant could not be in the company of Kala Singh.
It appears that these witnesses who are real brothers were not aware of the
death of Kala Singh and, therefore, they made another attempt to implicate the
appellant in another false case involving Kala Singh. The first case, as was
seen earlier, was initiated by Lakhi Ram who was the labourer of Om Prakash (deceased).
35.
Another reason to discard the evidence of these witnesses is that Dharam Pal
and Surinder, who were present at the shop when the Sikh Youths came to the
place and started firing indiscriminately, did not receive any injury.
They
also alleged that while they were running away, Sahib Singh had caught hold of
them but they got themselves freed and ran away. Sahib Singh was armed with a
gun. If both Dharam Pal and Surinder had come in close contact with him, he
would have, in the natural course of conduct, fired at them instead of
attempting to catch them alive.
36.
The evidence on record indicates that the incident had not taken place in the
manner alleged by the prosecution in which a dead person is shown to have
participated in the incident in question. Not only that he was shown to be
armed with a gun, he was also shown to have fired at Suresh. The appellant was
surprisingly, placed in the company of that dead person. Is this not
Hallucination? The three brothers seem to be suffering from auditory and visual
sensory perception without any real external stimuli as they had heard gunshots
and seen Kala Singh firing at them even though he was dead on the date of
incident, having been killed on 31.10.1991.
37.
Indeed, enmity has always the potential of making a man stoop to the lowest
level of inhumanity. This is what has happened in the instant case where
certain terrorists appear to have come and attacked the shop of Dharam pal
where his father was sitting who was shot dead and the Hero Honda Motor Cycle
was taken away. Not having seen as to what had happened and who had killed
their father, the three brothers, thought of involving the appellant in this
case so that he may be removed from the scene and lodged in the jail as they, on
account of the enmity, were highly interested in securing his conviction and in
achieving this object, they did not shudder in lying before the court,
ignoring, in the process, what WILLIAM HAZLITT had said that "Lying is the
strongest acknowledgement of the force of truth."
38.
The confessional statement of the appellant with which we intend to deal now is
the other basis for his conviction.
Before looking into the contents of the confessional statement, we may first consider
the relevant provisions of the Evidence Act around which certain principles
have been built by judicial pronouncements including those of this Court.
39.
The Evidence Act contains a separate part dealing with "Admission". This
part comprises Sections 17 to 31. "Confession", which is known as a species of
"Admission", is to be found in Sections 24 to 30.
40.
"Confession" has not been defined in the Evidence Act.
Mr. Justice Stephen, in his Digest of the Law of Evidence, defined it thus:
(Emperor vs. Cunna 22 Bombay Law Reporter 1247; Imperatrix vs. Pandharinath
ILR 6 Bombay 34; Muthukumaraswami Pillai &
Ors. v. King Emperor ILR 35 Madras 397). Straight,
J., however, in Queen Empress vs. Jagrup & Anr. ILR 7 Allahabad 646, did
not adopt this definition and held that only those statements which are direct
acknowledgments of guilt could be regarded as "confessions" and not
mere inculpatory admission which may fall short of an admission of guilt.
Similar
view was taken in Emperor vs. Santya Bandu 11 Bombay Law Reporter 633. The
judicial opinion was thus not unanimous as to the exact meaning of
"confession." The Privy Council, however, by its authoritative
pronouncement in Pakala Narayana Swami vs. The King Emperor 66 Indian Appeals
66 = AIR 1939 PC 47, clarified the position and laid down that "a
confession must either admit in terms the offence, or at any rate substantially
all the facts which constitute the offence." This was followed by this
Court in many cases, including Palvinder Kaur vs. State of Punjab AIR 1952 SC
354 =1953 SCR 94; Om Prakash vs. State of U.P AIR 1960 SC 409(412); State of
U.P. vs. Deoman Upadhyaya (1961) 1 SCR 14; and Veera Ibrahim vs. State of Maharashtra
AIR 1976 SC 1167 (3) SCR 672.
41.
In view of these decisions, it is now certain that a "Confession" must
either be an express acknowledgement of guilt of the offence charged, certain
and complete in itself, or it must admit substantially all the facts which
constitute the offence.
42.
Section 24 provides, though in a negative form, that a "Confession"
can be treated as relevant against the person making the confession unless it
appears to the Court that it is rendered irrelevant on account of any of the
factors, namely, threat, inducement, promise etc., mentioned therein.
Whether
the "Confession" attracts the frown of Section 24 has to be
considered from the point of view of the confessing accused as to how the
inducement, threat or promise from a person in authority would operate in his
mind. (See: Satbir Singh vs. State of Punjab 1977 (3) SCR 195 = 1977 (2) SCC 263).
The "Confession" has to be affirmatively proved to be free and
voluntary. (See: Hem Raj vs. State of Ajmer 1954 SCR 1133 = AIR 1954 SC 462).
Before a conviction can be based on "Confession", it has to be shown
that it was truthful.
43.
Section 25, which provides that a "Confession" made to a police
officer shall not be proved against the person accused of an offence, places a
complete ban on the use of such a confession, whether the accused was in
custody or not. Section 26 lays down that a confession made by a person while he
is in the custody of a police officer shall not be proved against him unless it
is made in the immediate presence of a Magistrate. Section 27 provides that
when any fact is discovered in consequence of information received from a
person accused of any offence who is in the custody of a police officer, so
much of such information, whether it amounts to a confession or not, as relates
to the fact thereby discovered, may be proved. Section 27 is thus in the form
of a proviso to Sections 24, 25 and 26. Sections 164, 281 and 463 of the Code of
Criminal Procedure are the other provisions dealing with "Confession"
and the manner in which it is to be recorded.
44.
Section 15 of the TADA Act, however, makes a special provision as to the
admissibility of confession and signals a departure from the normal rule
contained in Sections 25 and 26 of the Evidence Act. It provides that a
confession made by an accused to a police officer of a particular rank or
higher would be admissible in evidence and can be proved against that person
subject to the fulfilment of other requirements indicated in that Section.
45.
According to these requirements, the confession has to be made before a police
officer not below the rank of a Superintendent of Police. Before recording the
confession, the police officer has to explain to the person concerned that he
is not bound to make the confession and that, if he makes it, it may be used as
evidence against him. The police officer has also to satisfy himself, after
questioning the person concerned, that he is making the confession voluntarily.
The officer recording the confession has also to record a certificate of having
observed the requirements of law.
46.
The Act, like the Evidence Act, does not define "Confession" and,
therefore, the principles enunciated by this Court with regard to the meaning
of "Confession" under the Evidence Act shall also apply to a
"Confession" under this Act: it has either to be an express acknowledgement of
guilt of the offence charged or it must admit substantially all the facts which
constitute the offence. Conviction on a "Confession" is based on the
maxim "habemus optimum testem, confitentem reum", which means that the
confession of an accused is the best evidence against him. The rationale behind
this rule is that an ordinary, normal and sane person would not make a statement
which would incriminate him unless urged by the promptings of truth and
conscience.
47.
Under this Act, although a confession recorded by a police officer not below
the rank of a Superintendent of Police is admissible in evidence, such a
Confessional Statement, if challenged, has to be shown, before a conviction can
be based upon it, to have been made voluntarily and to be truthful.
48.
In the instant case, the confession of the appellant was recorded by the Superintendent of
Police, Jind, on 14.12.1991, which was accompanied by a certificate by the S.P.
Jind, in compliance with the requirement of Section 15 of the Act. The
Confessional Statement has been proved and has been marked as Exh. PW-14/A. The
relevant portion of the Confessional Statement is as under:
"My
father Sucha Singh and Om Parkash Mahajan, R/o Pipaltha, purchased some
agricultural land in village Pipaltha long ago. After that there was a dispute
between them. Om Parkash was a rich man. Om Parkash got my father implicated in
false cases and got him challaned through the police, on the basis of which the
grudge increased.
There
is one Kala Singh @ Rukha in our village who has committed two murders in our
village and he is entangled in the group of terrorists and is residing in
Punjab. Kala Singh was on visiting terms with us. 3-4 days before committing
the murder of Om Parkash, Kala Singh @ Rukha had come to us. I had asked Kala
Singh @ Rukha to commit the murder of Om Parkash Mahajan R/o Pipaltha. Kala
Singh @ Rukha told me that he had no need of money but he had to pay Rs.
15,000/- to the other terrorists for committing the murder. I promised to pay
Rs. 15,000/- and Kala Singh had asked me to hand over Rs. 15,000/- to him in
Makord Gurudwara. On 18-11-1991 Kala Singh @ Rukha R/o Pipaltha, accompanied by
six terrorists, one of them Nachhatar Singh, names of the others not known, came
to my house. Kala Singh @ Rukha had asked me to see whether Om Parkash Mahajan
was present at his house or not. On this asking I went to the house of Om
Parkash. Om Parkash was present at his shop. I told Kala Singh @ Rukha that Om
Parkash was present at his shop. Kala Singh @ Rukha alongwith his companion
terrorists committed the murder of Om Parkash Mahajan by firing shots at his
shop. Firing in the street, they ran away on the Hero Honda Motor Cycle No.
HR-32-0218 after taking the same from the shop. I went to my home after giving
the information about Om Parkash Mahajan to Kala Singh @ Rukha and started
drinking. On hearing the noise of fires I ran away from my house due to fear.
The sons of Om Parkash may name me for the murder of Om Parkash, as I had
promised to pay Rs. 15,000/- for the murder of Om Parkash
Mahajan."
49.
A perusal of the Confessional Statement would indicate that
three or four days prior to the date of incident, which incidentally is
18.11.1991, Kala Singh had come to the appellant and the appellant had
requested Kala Singh to commit the murder of Om Prakash, for which Kala Singh
wanted Rs. 15,000/- to be paid to other terrorists who would be hired for that
job. It was on the basis of this arrangement that Kala Singh came along with
six other terrorists, including Nachhatar Singh, on 18.11.1991 and committed
the murder of Om Prakash. The terrorists, including Kala Singh, went away on
the Hero Honda Motor Cycle.
50.
It has been held above that Kala Singh had already been killed in a police
encounter on 31.10.91. There was, therefore, no occasion for his coming to the
appellant or for the appellant asking Kala Singh to commit the murder of Om
Prakash on Rs. 15,000/- being paid to him.
51.
The story of hallucination is repeated in the so-called Confessional Statement
by saying that a dead person came to the appellant, talked to the appellant and
asked the appellant to pay Rs. 15,000/- so that the "dead person" could pay
it to the other terrorists through whom the job of killing Om Prakash would be
performed; the dead person came to the spot along with other terrorists on
18.11.1991 and committed the murder of Om Prakash. The Confessional Statement
further makes that dead person ride a motorcycle and drive away along
with other terrorists on the same motorcycle. The dead also drives!
52.
The Confessional Statement does not admit even substantially the basic facts of
the prosecution story, inasmuch as in the Confessional Statement no role is
assigned to the appellant, while in the prosecution story an active role has
been assigned to him by showing that he too was armed with a gun, had gone
to the spot and had participated in the commission of the crime by firing his
gun, especially at the injured witness. The Confessional Statement is not truthful
and is part of the hallucination from which the prosecution and its witnesses
were suffering. It is accordingly discarded and cannot be acted upon.
53.
A little effort on the part of the trial court would have revealed to it the
falsity of the prosecution case, but it proceeded in a mechanical manner and
ultimately convicted the appellant, ignoring that there was a deliberately
delayed FIR and that the case set out therein was sought to be proved through
highly interested witnesses, instead of independent witnesses, and also by
bringing on record a Confessional Statement which contained false facts. This
leads to the conclusion that the trial judge was sitting only to convict,
forgetting that the judiciary holds the SCALES even, not tilted.
54.
For the reasons stated above, the appeal is allowed, the judgment dated
8.2.1994 passed by the trial court is set aside and the appellant is acquitted
of all the charges. He is in jail. He shall be set at liberty forthwith, unless
required in some other case. |
start of testfile
01234567890123456789012345678901234567
line 3 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 4 xxxxxxxxxxAxxxxxxxxxxxxxxxxxxx
line 5 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 6 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 7 xxxxxBxxxxxxxxxxxxxxxxxxxxxxxx
line 8 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 9 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 10 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 11 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 12 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 13 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 14 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 15 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 16 xxxxxxxxxxCxxxxxxxxxxxxxxxxxxx
line 17 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 18 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 19 xxxxxxxxxxxxxxxxxxxDxxxxxxxxxx
line 20 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxE
line 21 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
line 22 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
end of testfile |
Q:
Fresh CentOS install with Apache in VirtualBox: Host OS can't access the test page
Context: I just got a job as a sysadmin and my employers are quite aware that I'm new to this scene so they're having someone show me the ropes. Her first task for me was to set up a VM with CentOS.
I've set up CentOS on my VM and I installed Apache without incident. When I tried to access it using the browser on the host via IP address, it says that it took too long to respond (using Google Chrome).
I've set network adapter of the VM to a bridged adapter and I'm not using the loopback address for this. I tried using
curl myipaddress
And it shows the HTML just fine. I tried pinging the VM's ip address and it replies fine.
I tried ruling out iptables. What I got when I tried to stop it:
Failed to stop iptables.service: Unit iptables.service not loaded.
Which upon further checking implies that iptables isn't installed.
I tried checking the status of the service with
sudo service httpd status
And it's apparently working fine.
I basically left the settings at their default so I'm not sure what I'm overlooking. It looks like a misconfiguration but I'm not sure exactly what.
A:
I suspect you need to open the port through firewalld (the default firewall in CentOS 7).
You can try, just for testing (this changes only the runtime configuration):
firewall-cmd --add-port=80/tcp
To see what active zones you have, you can use:
firewall-cmd --get-active-zones
And then, a port (i.e. tcp 80 for http) can be opened, let's say for the zone 'public':
firewall-cmd --permanent --zone=public --add-port=80/tcp
Then reload firewalld:
firewall-cmd --reload
The --permanent option makes the rule persist across firewalld reloads and server restarts. You will need to run the commands as root or use sudo.
|
I'm Pretty Sure I Met the One
I think if you say 'the One' with a capital 'O' it means God.
I think I met the one. I let her go/ left her suddenly. Whichever way you want to look at it.
I let her down. I threw away my chance.
I must find a way to move on. I just feel like there is no moving on because I already found my purpose. Oh god my hands feel like lead typing this, whole body crushed with the certain knowledge my greatest reason for living is behind me. And I march on away from her. What a way to think negatively, eh?
Hobo...I'm not sure what I can say to make this any better. But if she is the one...you haven't lost the chance. If she is meant to be...she'll be back. One day soon enough. My thoughts go out to you my friend. *hugs*
ugh.. yeah this is an old story now.. from March, but talking about how I've been feeling for the last decade before that.<br /><br />It was over a decade ago that I last saw Beth. She was my first real friend, I got too over-attached.. she was my best friend as well as my first real friend, I fell in love with her because she was so wonderful and compassionate, and she loved me (as a friend) which no one else had done before, and I thought she was the only one who could. I was in love with her for a long, long time.. only started letting go this year, but I feel different now.<br />I think I was afraid that I'd never find anyone else who would accept me or love me after I told them about who I really am, where I've come from, and what my life was like in the past, and that I'd never find anyone with as much compassion, who I would love... but I was wrong: <br />It turned out that I could be loved by someone else; I just thought I couldn't because she was the only one to have ever loved me, and I wasn't making any new friends.
You shouldn't have to worry about trying to always be positive! Though I do admire you for trying to, it would drive me crazy if I had to be upbeat all the time.<br /><br />This is such a sad story! I feel this way too sometimes...like I missed my chance. :(<br /><br />I know that there are more then one purpose for our lives though. I think in fact that there are hundreds of purposes for us. If you lost someone, or left them, it doesn’t mean your life’s purpose is gone….as much as it feels that way. <br /><br />It just means one chapter is over, and you are on another one now. |
Pack it in, pack it out
As use of the PCT and the backcountry areas through which it passes increases, all of us must be especially mindful of even the smallest effects you may have on the land and on the experiences of those around you. One example of what some may consider a negligible impact is the biodegradable food waste people might leave behind to decompose.
One person’s food waste left to biodegrade is a significant eyesore for the next person coming up the trail, not to mention what’s at stake for the natural environment. Certainly, human waste (which we’ve covered [pun intended] in multiple blog posts) is ultimately much more harmful to the environment in the long run if not disposed of properly.
Stating the obvious, most of the PCT does not pass through apple orchards, orange groves, or banana plantations, yet the leftovers of these fine trail foods are left behind as if they are not trash. They are. This is not unique to the PCT; we’re sure we’ve all seen it. In fact, during our research into how long it takes things to biodegrade, we came across an article similar to this one from a fellow hiker from Scotland, where many of the mountains, or Bens as they call them there, are strewn with banana peels.
In Scotland, the most common bio-degradable items found along the trail include apple cores, which can take up to 8 weeks to decompose, and orange peels and banana skins, both of which can take up to two years! Much of the PCT passes through much drier climate than exists in Scotland, so it probably takes even longer for food wastes to decompose in the deserts of Southern California or in the high elevations of the High Sierra. While it is certainly a bonus to have some fresh produce while out on the trail, please be prepared to pack out orange peels and banana peels and other food wastes. When it comes to apples, one of our staff members has long been eating the entire apple and is left with just a few seeds to stick in his pocket or in his trash bag.
‘Pack it in, pack it out’ is one of the original tenets of backcountry travel. And, it’s a fairly simple one. Leave No Trace means just that. With the PCT’s increased popularity, we all have a responsibility to the next person and to be good stewards of the land. If you packed it in please remember to pack it out. The PCT and your fellow trail users thank you! |
Among 38 of the world’s more developed nations, the United States has the least liberal government policies regarding paid parental leave, leading some to argue that this puts American women at a disadvantage as they navigate their careers.
But it also turns out that some countries that offer more liberal parental leave policies have higher pay gaps among men and women ages 30 to 34, according to analyses of 16 countries conducted by the Organization for Economic Cooperation and Development. OECD theorizes that this link may be driven by the fact that women are more likely than men to actually use their parental leave, and that time out of the workforce is associated with lower wages.
The hourly pay gap in the U.S. is now 16%, according to a recent Pew Research Center report on gender and work – meaning that women today earn 84 cents for every $1 a man earns in an hour. This gap remains persistent, but has shrunk markedly from 36% in 1980.
Using a slightly different metric, the OECD found that the pay gap in the U.S. was about 18%, slightly higher than the median pay gap across 26 OECD states, which was about 14%.
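The figures above all use the same simple definition of the pay gap. As a quick illustrative sketch (the function name is ours, not Pew's or OECD's), the gap is the share of men's hourly earnings that women's hourly earnings fall short of:

```python
def pay_gap(median_female_wage: float, median_male_wage: float) -> float:
    """Gender pay gap as used above: the fraction of men's hourly
    earnings that women's hourly earnings fall short of."""
    return 1 - median_female_wage / median_male_wage

# 84 cents earned per $1.00 corresponds to the 16% gap cited above:
print(round(pay_gap(0.84, 1.00), 2))  # -> 0.16
```

The same arithmetic recovers the 1980 figure: 64 cents per dollar gives a 36% gap.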
Countries with lower pay gaps include New Zealand and Belgium; these nations also provide little by way of paid time off for new parents. Countries with higher pay gaps include the Czech Republic and Austria, both of which offer new parents about 10 months of paid parental leave (this figure does not reflect unpaid leave, or paid maternity or paternity leave). The U.S. has no national paid family leave policy, though a bill was recently introduced in Congress proposing one. (Some private U.S. employers offer paid leave.)
But parental leave and gender differences in work experience are not the only factors associated with the gender pay gap. As is the case in the U.S., across these other 25 countries the gap increases with age, and with parenthood. As in the U.S., women in most of these countries have more education than men; this reduces the pay gap to some extent.
It’s difficult to know the extent to which other factors may be driving the pay gap. What is clear is that among majorities in several developed nations, there is a perception that women are at a disadvantage when it comes to job opportunities.
In 2010, Pew Research surveyed publics in a number of countries, seven of which are also represented in this pay gap data, about whether “men get more opportunities than women for jobs that pay well, even when women are as qualified as men for the job.” Majorities in all seven countries agreed with the statement. Agreement ranged from about seven-in-ten in the U.S. and Spain (68%), and up to more than eight-in-ten in Poland (83%) and Germany (84%). |
In our article, we omitted to reference the comment by Dr. Sunil R. Moreker to the report of corneal neurotization technique by Elbaz et al.[@bib1] In the comment, Dr. Sunil R. Moreker stated that he and his team performed a successful "corneal re-innervation surgery" in a patient with neurotrophic keratopathy, corneal opacification, and proliferative diabetic retinopathy.[@bib2] They performed "local re-innervation surgery by retrieving a local nerve branch" and executing the procedure "in the same way" as described by Elbaz et al.
The differences between our technique and that of Elbaz et al. are described in our manuscript. These same differences would also apply to the comment by Dr. Sunil R. Moreker regarding his case of corneal re-innervation. As stated in our manuscript, we performed a direct ipsilateral nerve transfer for a patient with neurotrophic keratopathy due to an iatrogenic injury. In contrast, Elbaz et al., and presumably Dr. Sunil R Moreker, anastomosed an autogenous nerve graft to the donor nerve in patients with central lesions of the trigeminal nerve and diabetic neurotrophic keratopathy respectively to achieve corneal neurotization.[@bib1], [@bib2] Our technique does not rely on the nerve graft to provide the appropriate re-innervation, and instead directly connects the donor nerve with the target organ.
|
Q:
RAID 1 - IOPS Write Penalty 1 or 2
I keep seeing articles describing the RAID IOPS write penalty for RAID 1 (and RAID 10) as 2.
RAID 0 would have a penalty of 1, of course, since every write is simply written to disk. RAID 1 is described as "requiring two writes", thus a penalty of 2.
But shouldn't it be 1, since data is written simultaneously?
From the viewpoint of the application or server using the disk, a RAID 1 array should appear as a single unit which writes to both disks simultaneously. One disk may lag behind the other, but an actual hardware RAID controller should be capable of beginning both writes at the same time and reporting the operation as complete when the slower disk finishes, which should take only marginally longer than in a RAID 0, if at all.
So the IOPS penalty should be 1 for RAID 1 or 1.2 at the maximum.
I understand there are two write operations, so there are 2 "IOPS", but they are internal to the RAID controller.
Am I missing something here?
A:
If RAID 1 was just hotwiring a cable the performance impact would be null (a factor of 1.0), but RAID 1 mirroring is more than just hotwiring a cable - actual work needs to be done to write data to two drives and handle the results of that write from each drive.
That extra work is the factor they're talking about in the performance impact. Whether the I/O operation happens in the OS somewhere (software RAID) or in a dedicated co-processor/controller (hardware RAID) two writes still need to be issued for every piece of data, and the results of that write (success, failure, or on_fire) need to be "handled".
In the worst case you're likely to encounter (software RAID-1 implemented in the OS) that means the kernel is doing two writes, and having two conversations with the disk controller.
That's a write penalty of 2x since we're doing twice as much work almost all the way through the stack.
(Really it's probably closer to 1.9 - after all we're not issuing two write() calls to the filesystem - but let's just round it off for the sake of pessimism.)
In the best case (hardware RAID 1, implemented with a dedicated controller) the kernel is having one conversation with the controller, but the controller is still having 2 conversations (one with each disk) as it needs to ensure both drives receive the command, write the data out, and acknowledge that the data was written (or handle any error conditions the drives report).
That's probably about a 1.2x penalty for the controller's extra work as you surmised in your question - you're just saving yourself the extra in-kernel work (which is far more expensive than what the controller is doing).
Now because we're sysadmins and we're paid to be a pessimistic lot we're obviously going to take the worst case performance, just like when we rounded the performance factor for software RAID - so if anyone asks we're going to tell them there's a 2x write penalty, even for their fancy hardware controller, and let them be happy when the system performs with only a 1.5x penalty on average :-)
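The sizing arithmetic behind the penalty factor can be written down directly. This is a generic illustrative sketch (the function name and drive numbers are made up, not from any vendor tool): the backend spindles deliver some raw IOPS, every frontend write costs `write_penalty` backend operations, so deliverable frontend IOPS shrink as the write fraction grows.

```python
def frontend_iops(raw_iops: float, read_fraction: float, write_penalty: float) -> float:
    """Frontend IOPS an array can deliver, given raw backend IOPS,
    the read share of the workload, and the RAID write penalty."""
    write_fraction = 1.0 - read_fraction
    # Each read costs 1 backend op; each write costs `write_penalty` ops.
    return raw_iops / (read_fraction + write_fraction * write_penalty)

# 8 drives x 150 IOPS = 1200 raw backend IOPS, 70% read workload:
print(round(frontend_iops(1200, 0.7, 1)))  # RAID 0, penalty 1 -> 1200
print(round(frontend_iops(1200, 0.7, 2)))  # RAID 1, penalty 2 -> 923
```

Note the penalty only bites on the write share of the workload: a pure-read workload sees no RAID 1 penalty at all.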
|
Razia Begum (48) from Telangana set out on the arduous journey on Monday morning armed with local police permission, rode solo to Nellore and returned with her younger son on Wednesday evening, showing an endurance level even seasoned rallyists would find hard to match.
Hyderabad: Love for her son, courage, and determination made a woman in Telangana ride nearly 1,400 km on a scooter over three days to bring him home after he got stuck in Nellore in neighbouring Andhra Pradesh owing to COVID-19 lockdown.
Razia Begum (48) set out on the arduous journey on Monday morning armed with local police permission, rode solo to Nellore and returned with her younger son on Wednesday evening, showing an endurance level even seasoned rallyists would find hard to match.
"It was a difficult journey on a small two-wheeler for a woman. But the determination to bring my son back overtook all my fears. I packed rotis and they kept me going. It was fearsome in the nights with no traffic movement and people on roads," the brave mother told PTI on Thursday.
She is a government school headmistress from Bodhan town in Nizamabad district, about 200 km from here. Razia, who lost her husband 15 years ago, had been living with her two sons: an engineering graduate, and 19-year-old Nizamuddin, who aspires to join an MBBS course.
He had gone to Rahamatabad in Nellore district on 12 March to drop his friend and stayed back there. Meanwhile, the lockdown was announced following the coronavirus outbreak and he could not return.
Razia was anguished to hear from her son that he was desperate to join the family and decided to fetch him back herself. The woman did not send her elder son as she thought police might mistake him for a joyrider and detain him.
After initially considering taking a car, she discarded the idea and chose her two-wheeler. On the morning of 6 April, she began the journey and reached Nellore the next day afternoon.
She left for home town on the same day along with her son and reached Bodhan on Wednesday evening, Razia said.
She had packed rotis to keep hunger pangs at bay and rode on, making stops at fuel stations and quenching her thirst at certain points along the way. Nizamuddin has completed his intermediate and has undergone coaching for the MBBS entrance exam.
Q:
Build error in visual studio
Whenever I build my app, I get the same error every time: "None of the input catalogs contained a matching stickers icon set or app icon set named "AppIcon"". In Info.plist it is correctly referencing the right app icon file. The app has previously built perfectly fine using the exact same assets for the app icon, and this is the first time the issue has appeared. I have looked at other answers on the site suggesting that I go to build settings and make sure it is building with the right icon set, and it is. Any help would be greatly appreciated.
A:
In your iOS project, go to the Resource/images.xcassets folder and search for any asset named AppIcon; make sure images are assigned for all the resolutions you need. In Info.plist, under the iPhone icons section, be sure to select the asset catalog entry which points to the AppIcon set folder (e.g. images.xcassets/AppIcons.appiconset). Next delete the bin and obj folders in your iOS project and rebuild the project.
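If you prefer to check the plist by hand, Xamarin.iOS records the selected catalog under the `XSAppIconAssets` key. The path below is only an example and must match your project's actual asset catalog and icon set names:

```xml
<!-- Info.plist fragment (illustrative path): point Xamarin.iOS at the icon set. -->
<key>XSAppIconAssets</key>
<string>Resources/images.xcassets/AppIcons.appiconset</string>
```

If this key points at a catalog or icon set that no longer exists, the build fails with exactly the "None of the input catalogs contained a matching ... app icon set" error described above.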
|
from libc.stdint cimport int64_t, uint64_t
cdef extern from "libavformat/avformat.h" nogil:
cdef int avformat_version()
cdef char* avformat_configuration()
cdef char* avformat_license()
cdef void avformat_network_init()
cdef int64_t INT64_MIN
cdef int AV_TIME_BASE
cdef int AVSEEK_FLAG_BACKWARD
cdef int AVSEEK_FLAG_BYTE
cdef int AVSEEK_FLAG_ANY
cdef int AVSEEK_FLAG_FRAME
cdef int AVIO_FLAG_WRITE
cdef enum AVMediaType:
AVMEDIA_TYPE_UNKNOWN
AVMEDIA_TYPE_VIDEO
AVMEDIA_TYPE_AUDIO
AVMEDIA_TYPE_DATA
AVMEDIA_TYPE_SUBTITLE
AVMEDIA_TYPE_ATTACHMENT
AVMEDIA_TYPE_NB
cdef struct AVStream:
int index
int id
AVCodecContext *codec
AVCodecParameters *codecpar
AVRational time_base
int64_t start_time
int64_t duration
int64_t nb_frames
int64_t cur_dts
AVDictionary *metadata
AVRational avg_frame_rate
AVRational r_frame_rate
AVRational sample_aspect_ratio
# http://ffmpeg.org/doxygen/trunk/structAVIOContext.html
cdef struct AVIOContext:
unsigned char* buffer
int buffer_size
int write_flag
int direct
int seekable
int max_packet_size
# http://ffmpeg.org/doxygen/trunk/structAVIOInterruptCB.html
cdef struct AVIOInterruptCB:
int (*callback)(void*)
void *opaque
cdef int AVIO_FLAG_DIRECT
cdef int AVIO_SEEKABLE_NORMAL
cdef int SEEK_SET
cdef int SEEK_CUR
cdef int SEEK_END
cdef int AVSEEK_SIZE
cdef AVIOContext* avio_alloc_context(
unsigned char *buffer,
int buffer_size,
int write_flag,
void *opaque,
int(*read_packet)(void *opaque, uint8_t *buf, int buf_size),
int(*write_packet)(void *opaque, uint8_t *buf, int buf_size),
int64_t(*seek)(void *opaque, int64_t offset, int whence)
)
# http://ffmpeg.org/doxygen/trunk/structAVInputFormat.html
cdef struct AVInputFormat:
const char *name
const char *long_name
const char *extensions
int flags
# const AVCodecTag* const *codec_tag
const AVClass *priv_class
cdef struct AVProbeData:
unsigned char *buf
int buf_size
const char *filename
cdef AVInputFormat* av_probe_input_format(
AVProbeData *pd,
int is_opened
)
# http://ffmpeg.org/doxygen/trunk/structAVOutputFormat.html
cdef struct AVOutputFormat:
const char *name
const char *long_name
const char *extensions
AVCodecID video_codec
AVCodecID audio_codec
AVCodecID subtitle_codec
int flags
# const AVCodecTag* const *codec_tag
const AVClass *priv_class
# AVInputFormat.flags and AVOutputFormat.flags
cdef enum:
AVFMT_NOFILE
AVFMT_NEEDNUMBER
AVFMT_SHOW_IDS
AVFMT_GLOBALHEADER
AVFMT_NOTIMESTAMPS
AVFMT_GENERIC_INDEX
AVFMT_TS_DISCONT
AVFMT_VARIABLE_FPS
AVFMT_NODIMENSIONS
AVFMT_NOSTREAMS
AVFMT_NOBINSEARCH
AVFMT_NOGENSEARCH
AVFMT_NO_BYTE_SEEK
AVFMT_ALLOW_FLUSH
AVFMT_TS_NONSTRICT
AVFMT_TS_NEGATIVE
AVFMT_SEEK_TO_PTS
# AVFormatContext.flags
cdef enum:
AVFMT_FLAG_GENPTS
AVFMT_FLAG_IGNIDX
AVFMT_FLAG_NONBLOCK
AVFMT_FLAG_IGNDTS
AVFMT_FLAG_NOFILLIN
AVFMT_FLAG_NOPARSE
AVFMT_FLAG_NOBUFFER
AVFMT_FLAG_CUSTOM_IO
AVFMT_FLAG_DISCARD_CORRUPT
AVFMT_FLAG_FLUSH_PACKETS
AVFMT_FLAG_BITEXACT
AVFMT_FLAG_MP4A_LATM
AVFMT_FLAG_SORT_DTS
AVFMT_FLAG_PRIV_OPT
AVFMT_FLAG_KEEP_SIDE_DATA # deprecated; does nothing
AVFMT_FLAG_FAST_SEEK
AVFMT_FLAG_SHORTEST
AVFMT_FLAG_AUTO_BSF
cdef int av_probe_input_buffer(
AVIOContext *pb,
AVInputFormat **fmt,
const char *filename,
void *logctx,
unsigned int offset,
unsigned int max_probe_size
)
cdef AVInputFormat* av_find_input_format(const char *name)
# http://ffmpeg.org/doxygen/trunk/structAVFormatContext.html
cdef struct AVFormatContext:
# Streams.
unsigned int nb_streams
AVStream **streams
AVInputFormat *iformat
AVOutputFormat *oformat
AVIOContext *pb
AVIOInterruptCB interrupt_callback
AVDictionary *metadata
        char filename[1024]
int64_t start_time
int64_t duration
int bit_rate
int flags
int64_t max_analyze_duration
cdef AVFormatContext* avformat_alloc_context()
# .. c:function:: avformat_open_input(...)
#
# Options are passed via :func:`av.open`.
#
# .. seealso:: FFmpeg's docs: :ffmpeg:`avformat_open_input`
#
cdef int avformat_open_input(
AVFormatContext **ctx, # NULL will allocate for you.
char *filename,
AVInputFormat *format, # Can be NULL.
AVDictionary **options # Can be NULL.
)
cdef int avformat_close_input(AVFormatContext **ctx)
# .. c:function:: avformat_write_header(...)
#
# Options are passed via :func:`av.open`; called in
# :meth:`av.container.OutputContainer.start_encoding`.
#
# .. seealso:: FFmpeg's docs: :ffmpeg:`avformat_write_header`
#
cdef int avformat_write_header(
AVFormatContext *ctx,
AVDictionary **options # Can be NULL
)
cdef int av_write_trailer(AVFormatContext *ctx)
cdef int av_interleaved_write_frame(
AVFormatContext *ctx,
AVPacket *pkt
)
cdef int av_write_frame(
AVFormatContext *ctx,
AVPacket *pkt
)
cdef int avio_open(
AVIOContext **s,
char *url,
int flags
)
cdef int64_t avio_size(
AVIOContext *s
)
cdef AVOutputFormat* av_guess_format(
char *short_name,
char *filename,
char *mime_type
)
cdef int avformat_query_codec(
AVOutputFormat *ofmt,
AVCodecID codec_id,
int std_compliance
)
cdef int avio_close(AVIOContext *s)
cdef int avio_closep(AVIOContext **s)
cdef int avformat_find_stream_info(
AVFormatContext *ctx,
AVDictionary **options, # Can be NULL.
)
cdef AVStream* avformat_new_stream(
AVFormatContext *ctx,
AVCodec *c
)
cdef int avformat_alloc_output_context2(
AVFormatContext **ctx,
AVOutputFormat *oformat,
char *format_name,
char *filename
)
cdef int avformat_free_context(AVFormatContext *ctx)
cdef AVClass* avformat_get_class()
cdef void av_dump_format(
AVFormatContext *ctx,
int index,
char *url,
int is_output,
)
cdef int av_read_frame(
AVFormatContext *ctx,
AVPacket *packet,
)
cdef int av_seek_frame(
AVFormatContext *ctx,
int stream_index,
int64_t timestamp,
int flags
)
cdef int avformat_seek_file(
AVFormatContext *ctx,
int stream_index,
int64_t min_ts,
int64_t ts,
int64_t max_ts,
int flags
)
cdef AVRational av_guess_frame_rate(
AVFormatContext *ctx,
AVStream *stream,
AVFrame *frame
)
cdef const AVInputFormat* av_demuxer_iterate(void **opaque)
cdef const AVOutputFormat* av_muxer_iterate(void **opaque)
# custom
cdef set pyav_get_available_formats()
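# ---------------------------------------------------------------------------
# Usage sketch (comments only; a hypothetical demux loop showing how the
# declarations above fit together -- this is not part of the FFmpeg headers):
#
#   cdef AVFormatContext *ctx = NULL
#   cdef AVPacket pkt
#   avformat_open_input(&ctx, "input.mp4", NULL, NULL)   # probe + open; allocates ctx
#   avformat_find_stream_info(ctx, NULL)                 # populate ctx.streams
#   while av_read_frame(ctx, &pkt) >= 0:                 # one packet at a time
#       pass                                             # decode / remux here
#   avformat_close_input(&ctx)                           # closes and frees ctx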
Protesters are challenging the Dakota Access Pipeline with concerns that it will eventually contaminate area drinking water.
Since April, protesters against an oil pipeline have been camping in tents, tipis, and trailers at a site just across the Missouri River from the Standing Rock reservation in North Dakota.
For a few days, I stayed at the camp, and met people who gathered there to support the effort.
The camp is known as Oceti Sakowin, meaning Seven Council Fires, a reference to the origin of the Sioux tribe.
The unarmed protesters call themselves Water Protectors. They are challenging the Dakota Access Pipeline, or DAPL, a 1,172-mile-long crude oil pipeline meant to transport oil from North Dakota to Illinois.
The Standing Rock tribe has voiced worries that the pipeline, which is proposed to pass under the Missouri River, would inevitably burst, and contaminate their drinking water.
Energy Transfer Partners, the private company behind the project, has met protesters at the front line with dogs, tear gas, army vehicles, and guns. As the weather gets colder and construction moves slowly forward, the camp and the Standing Rock Tribe continue their opposition to the DAPL, sustaining discussion about the future of energy and conservation.
Facebook Hill
The day I arrived, more than 20 campers had been arrested while returning from a prayer ceremony at a DAPL construction site. The next morning was quiet.
Early risers climbed “Facebook Hill” to watch the sun rise while charging their cell phones. Smoke began to rise from fire pits while a few cars left to go to the front line. Chatting with other campers revealed that there wasn’t going to be much direct action for a few days; time was needed to regroup.
Monitoring the Monitors
Every day, at sunrise and noon, a helicopter sent by Energy Transfer Partners comes from the north and flies around the entire camp. I asked a young woman if she knew why they did this.
“They want to look scary,” she told me. People pulled out their phones and filmed the helicopter making its daily rounds.
Smartphones are almost always being used at Oceti Sakowin. Although service is difficult to find below the hills, people record everything, streaming live on Facebook when they can, and posting their experiences on social media. As with police shootings, phone cameras have become mechanisms for protection at the Standing Rock camps and actions.
“People are getting more used to the idea that we need our phones and cameras out,” my friend Thomas told me. He was also camped with the Hoopa kitchen and was doing a lot of organizing within the camp. “We use our devices to prove that something is happening here.”
I heard rumors of moles hired by ETP or the FBI taking drone footage of the camp and recording conversations. Many people that I talked to were skeptical of these accounts, but everyone agreed that the camp was being monitored closely.
Sustenance
The main kitchen and sacred fire are situated on a low hill, right next to Oceti Sakowin’s entrance. The spot serves as the primary gathering place.
Most people come to this kitchen for breakfast and a late dinner. Everyone in the camp is invited to make use of a PA system. Elders and community organizers come first, but artistic performances, musical acts, jokes, and storytellers are always welcomed alongside history lectures, prayers, and appeals to the camp.
Play
People have moved their entire lives to the camp, bringing their horses and basketball hoops with them to Standing Rock. Basketball provides a social and competitive activity during long, harsh winters, and young men bring old tribal rivalries with them to games, playing for the honor of their families and communities. At Oceti Sakowin, sports and horse-riding keep boredom at bay.
When the young children of the camp were not in classes at the makeshift school, they would roam the grounds with their bikes and skateboards, rolling down hills and makeshift jumps. Older youths would grab horses from friends’ campsites and ride them bareback through the camp.
I saw people sitting around their campfires chatting all day. Others put themselves to work doing physical labor, including cooking, chopping wood, and building kitchens or showers. The community feeling was one of a shared life.
Readying for the Cold
Winterization is a major point of concern at the camp. The campers cannot stay on the current grounds for much longer, and the site needs to become more organized and sustainable.
There are plans for solar homes and a couple hundred tipis, available for anybody who needs one. There is a constant need for donations and physical laborers. The camp ran out of water and firewood one day while I was there -- both are needed daily for cooking and heat.
The camp did not have enough tipi poles for its canvases, and was waiting on a group from Colorado to bring a truckload of poles. Still, campers said they needed more if they were going to provide enough shelter for people camping through the winter.
Thomas told me the organizers and Standing Rock Tribal Council want the camp to be a model for the world of sustainable living, depending entirely on renewable energy.
Standing for Peace
While getting lunch at the main kitchen, I ran into Arnie, whom I'd met earlier that day. He introduced me to three young Mohican men he had traveled with to Standing Rock. I asked if I could take their photo.
Arnie asked me what I’d been doing since our breakfast encounter, where he introduced himself to me as an Amish man with two PhDs (he wouldn’t tell me what they were in) and asked me if he looked like a man with two PhDs. I studied his ensemble — socks with sandals, sweatpants, a long black coat, a baseball cap and a gray beard — and I told him no. Arnie laughed.
As we sat watching the morning’s announcements and prayers, he told me that the violence and hatred circling the Standing Rock movement reminded him of his own village, when a man murdered several Amish children. One of them had been Arnie’s.
“We need to not only stand for peace, but also make peace,” he told me. “In my town, after those children were killed, we got together some money and gave it to the murderer’s parents as a gift, because we knew they needed to heal. We made peace. We produced it. That needs to happen here, too, and everywhere really — we have to create peace and loving kindness. It can’t be effortless.”
“We Are the Mexica”
The Oceti Sakowin camp has drawn people from across the globe. One afternoon, a large group of youth and traditionally dressed dancers marched through the camp to the sound of drums. “We are the Mexica,” announced one of the dancers. He pronounced it as Me-she-cah.
“You know us as the Aztecs, but this is not our original name.” They brought a large group of teenagers with them, representatives from different tribes, to join forces with Standing Rock’s Youth Council.
The march ended at the main camp, where one of the Standing Rock elders told the crowd about the ancient Amazon prophecy of the Condor and the Eagle — two different ways of life, the heart and the mind — and of the potential for these two paths to join to create a new consciousness for humanity.
“Our Mexica friends have brought the Condor to the Eagle,” he said. “We must fulfill our potential.”
Protest and Prayer
Much of the conversation at Oceti Sakowin revolved around how to remain positive despite the difficulties the camp has to overcome.
Late one afternoon, I met a man who was deeply upset about the lack of constant action by the Council and by the elders. He felt that the whole camp should be rising every day to be at the front line, trying to stop construction no matter what.
When I brought up this encounter with Tribal Councilman Robert Taken Alive, he told me, “Protest is not in our native tradition. Our young people want to get up and yell and hold signs, but we are here to fight this battle with prayer.” I talked about it with some fellow campers over a late-night fire. Several people echoed Robert’s statement.
One young man advised us all that we needed to focus on turning negative talk and patterns of thought into positive ones. If we respond to others’ fears and worries with more fears and worries, we allow space for disunity. Without unity, he said, it all falls apart.
Council Brandon is a film student from Hartford, Connecticut. She is currently taking a gap year.
Reiterating his call for the need to cut human services programs to save money, Republican Gov. Paul LePage on Saturday said there are more Mainers receiving welfare benefits than there are income tax filers paying taxes.
But critics said LePage’s numbers are wrong and are designed to win political support for cuts to programs that have nothing to do with welfare.
Additional Photos
Gov. LePage
The governor said in his weekly radio address that Maine had 453,000 people receiving welfare benefits in 2010. At the same time, he said, the state had only 445,000 people who paid taxes.
LePage said he doesn’t relish the thought of people being hurt by spending cuts, but said it isn’t fair to cover the $221 million Medicaid funding shortage by raising taxes or asking other state agencies to make up the difference. Rather, the state needs to rein in Medicaid costs and restructure the program, he said.
“There is no joy in taking anything away from anyone,” the governor said in his address. “But there is one question I need to ask: Why should people struggle in this economy to pay their state taxes while the state of Maine operates far outside the national average in welfare costs? It’s a fair question.”
Democrats say that the governor’s definition of welfare is overly broad and that his definition of taxpayer overly narrow — resulting in a statistic that has no relevance to the real world.
“The governor is flat wrong. In fact, he has it backwards,” said Rep. Seth Berry, D-Bowdoinham, the ranking Democrat on the Legislature’s Taxation Committee, in a statement released after the governor’s radio address.
Berry said that all Maine residents pay taxes whenever they buy anything. They also pay taxes if they own a car. All homeowners pay property taxes, he said, and apartment dwellers indirectly pay property taxes through their rent.
When all local and state taxes are taken into account, the families who are in the bottom 20 percent in terms of income have the highest tax burden of any income group, according to 2009 data provided by the Maine Revenue Services, noted Garrett Martin, executive director of the Maine Center for Economic Policy, a liberal research group.
“This is an unfortunate mischaracterization of what’s at stake here,” he said of LePage’s radio address. “While some people don’t earn enough to have an income tax obligation, they are still paying taxes that support a range of programs and services.”
Christine Hastedt, policy director at Maine Equal Justice, a legal services program that represents low-income people, said that the actual number of Mainers who paid income taxes is much higher than LePage claims because the 445,000 figure he cites does not represent the number of individuals but the number of filed returns. Married couples typically file one return, she said.
Hastedt also took issue with the governor’s use of the term “welfare.” When people think of welfare, they think of programs such as food stamps and Temporary Assistance for Needy Families, not programs that allow seniors to buy prescription drugs at Medicare prices or pay the nursing home care for people suffering with Alzheimer’s disease.
“I do not think that seniors in a nursing home think of themselves as getting welfare, or a senior who has worked their whole life getting a little help with prescription drugs,” Hastedt said.
The LePage administration has proposed Medicaid cuts that could leave up to 65,000 people without health insurance coverage. LePage says the cuts are necessary to avoid a fiscal crisis due to unsustainable Medicaid costs.
LePage spokeswoman Adrienne Bennett said that if more people paid income taxes than the tax filings indicate, as Democrats argue, then it’s also fair to say that more people received welfare benefits than the numbers suggest. The welfare numbers, for the most part, represent individuals who receive benefits, she said, but the reality is that many of the benefits help recipients’ families as well.
“If the Democrats are saying the tax numbers are higher, those welfare numbers need to be higher as well,” she said.
Sen. Earle McCormick, R-Hallowell, co-chair of the Legislature’s Health and Human Services Committee, said that LePage wants Maine to have the same level of benefits as in other states. That would mean cuts for some service levels, he said.
Still, after listening to three days of public testimony, he believes that the Legislature will have to do “a lot of work” on the governor’s proposal before it’s ready for action. He said lawmakers have questions about how many people would be affected by the cuts and about the reason for the budget shortfall.
He said the Health and Human Services and Appropriations committees will start to get some of those answers when they meet Tuesday.
There is a stigma attached to the word “welfare,” he acknowledged, and there is a perception among many in the general population that the system is abused. At the same time, he said, he would not use the term “welfare” to describe programs slated for cuts, such as Head Start or programs that pay for nursing home care for mentally ill or physically disabled people.
“Something needs to be done, but we can’t throw people out on the street either,” he said. “The governor is fully committed that we have a safety net for those really in need. We just have to find what that balance is.”
1053 LINEHART DR WINTER GARDEN, FL 34787
4 Bedroom, 2,677 Sqft, Single Family Residence
Description
MOTIVATED SELLERS!!! BETTER THAN NEW!!! This 2017-built 4-bed, 3.5-bath Seaside by David Weekley has been a top seller in the neighborhood for several years because of its open-concept floor plan with a huge front porch. As you approach this home you will notice the inviting front porch with an adorable porch swing. Upon entering the home you will notice the gorgeous rich espresso flooring, the perfect tone to go with any style. Once you enter the open living space you will be overwhelmed by the attention to detail that makes this house a "home." It is literally out of a magazine: beautiful shutters on the windows throughout, a large farmhouse tub sink, a custom hood vent, a five-burner gas cooktop, Cambria countertops, surround-sound wiring, upgraded thermostats, upgraded lighting, a fireplace, a screened-in rear lanai, and too much more to list. Not only will you fall in love with the home, but Oakland Park is also one of the most sought-after communities in Winter Garden. You can golf-cart to downtown, where there is a brewery, several restaurants, and shops. All this with big-city conveniences and a small-town feel. Schedule your tour today!!!
Location for 1053 Linehart Dr Winter Garden, FL 34787
Directions to address
Heading west on W Colonial Dr (SR 50), make a right on Tildenville School Rd, make a left on Oakland Ave, take the first right onto Oakland Park Blvd and follow it to the stop sign, make a left on Lake Brim Dr and follow it, make a left on Civitas, then make a right on Linehart Dr.
Piyush Goyal says India will get data on black money from Switzerland by end of fiscal 2019
Highlights Funds held by Indians with Swiss banks rose to Rs 6,891 crore in 2017
"Nobody has the guts to save money outside the country": Piyush Goyal
"If anybody is found guilty, strict action will be taken", he said
India will get all data on black money from Switzerland by the end of fiscal 2019, Union Minister Piyush Goyal said today, a day after the central European nation released data that showed money parked by Indians rose over 50 per cent to 1.01 billion Swiss francs (Rs 7,000 crore) in 2017.
"We will have all the info and if anybody is found guilty, strict action will be taken," Mr Goyal told reporters in New Delhi. "Today, nobody has the guts to save money outside the country. And it has been possible only because of the government's hard work," the interim Finance Minister said.
While Switzerland has already begun sharing foreign client details on evidence of wrongdoing provided by India and some other countries, it has agreed to further expand its cooperation on India's fight against black money with a new pact for automatic information exchange.
"Agreement between India and Switzerland has this. From January 1, 2018 till end of accounting year (ends March 31, 2019), all data will be made available. Why assume this is black money or illegal transactions?" Piyush Goyal said, referring to the report.
In November 2016, the government had withdrawn high-value currency notes from circulation overnight
India and Switzerland have held several rounds of discussions on the new framework on exchange of financial data, and also for fast-tracking exchange of pending information requests about suspected illegal accounts of Indians in Swiss banks.
About an hour after Piyush Goyal's remarks, Congress president Rahul Gandhi took to Twitter to take a swipe at the Modi government over what he claimed was the government's inability to win the fight against black money.
2014, HE said: I will bring back all the "BLACK" money in Swiss Banks & put 15 Lakhs in each Indian bank A/C.
2018, HE says: 50% jump in Swiss Bank deposits by Indians, is "WHITE" money. No "BLACK" in Swiss Banks! pic.twitter.com/7AIgT529ST - Rahul Gandhi (@RahulGandhi) June 29, 2018
The surge in Indian money held with Swiss banks in 2017 comes as a surprise given India's continuing clampdown on suspected black money stashed abroad, including in banks of Switzerland that used to be known for their famed secrecy walls for years.
Indian money in Swiss banks had fallen by 45 per cent in 2016, marking their biggest ever yearly plunge, to 676 million Swiss francs (Rs 4,500 crore) -- the lowest ever since Switzerland began making the data public in 1987.
In November 2016, the government had withdrawn high-value currency notes from circulation overnight. Prime Minister Narendra Modi has said demonetisation would wipe out black money from the system.
The rupee has also been under severe stress. On Thursday, it breached the 69-mark against the US dollar for the first time, ending the day at an all-time low of 68.79.
/**
* @license
* Copyright Google Inc. All Rights Reserved.
*
* Use of this source code is governed by an MIT-style license that can be
* found in the LICENSE file at https://angular.io/license
*/
import { normalize } from '@angular-devkit/core';
import { SchematicsException, Tree } from '@angular-devkit/schematics';
import { dirname } from 'path';
import * as ts from 'typescript';
import { findNode, getSourceNodes } from './ast-utils';
export function findBootstrapModuleCall(host: Tree, mainPath: string): ts.CallExpression | null {
const mainBuffer = host.read(mainPath);
if (!mainBuffer) {
throw new SchematicsException(`Main file (${mainPath}) not found`);
}
const mainText = mainBuffer.toString('utf-8');
const source = ts.createSourceFile(mainPath, mainText, ts.ScriptTarget.Latest, true);
const allNodes = getSourceNodes(source);
let bootstrapCall: ts.CallExpression | null = null;
for (const node of allNodes) {
let bootstrapCallNode: ts.Node | null = null;
bootstrapCallNode = findNode(node, ts.SyntaxKind.Identifier, 'bootstrapModule');
// Walk up the parent until CallExpression is found.
while (bootstrapCallNode && bootstrapCallNode.parent
&& bootstrapCallNode.parent.kind !== ts.SyntaxKind.CallExpression) {
bootstrapCallNode = bootstrapCallNode.parent;
}
if (bootstrapCallNode !== null &&
bootstrapCallNode.parent !== undefined &&
bootstrapCallNode.parent.kind === ts.SyntaxKind.CallExpression) {
bootstrapCall = bootstrapCallNode.parent as ts.CallExpression;
break;
}
}
return bootstrapCall;
}
export function findBootstrapModulePath(host: Tree, mainPath: string): string {
const bootstrapCall = findBootstrapModuleCall(host, mainPath);
if (!bootstrapCall) {
throw new SchematicsException('Bootstrap call not found');
}
const bootstrapModule = bootstrapCall.arguments[0];
const mainBuffer = host.read(mainPath);
if (!mainBuffer) {
throw new SchematicsException(`Client app main file (${mainPath}) not found`);
}
const mainText = mainBuffer.toString('utf-8');
const source = ts.createSourceFile(mainPath, mainText, ts.ScriptTarget.Latest, true);
const allNodes = getSourceNodes(source);
const bootstrapModuleRelativePath = allNodes
.filter(node => node.kind === ts.SyntaxKind.ImportDeclaration)
.filter(imp => {
return findNode(imp, ts.SyntaxKind.Identifier, bootstrapModule.getText());
})
.map((imp: ts.ImportDeclaration) => {
const modulePathStringLiteral = <ts.StringLiteral> imp.moduleSpecifier;
return modulePathStringLiteral.text;
})[0];
return bootstrapModuleRelativePath;
}
export function getAppModulePath(host: Tree, mainPath: string): string {
const moduleRelativePath = findBootstrapModulePath(host, mainPath);
const mainDir = dirname(mainPath);
const modulePath = normalize(`/${mainDir}/${moduleRelativePath}.ts`);
return modulePath;
}
By James Sweet III | United States
Hierarchies are naturally occurring, but the values that determine an individual’s placement in a given hierarchy vary. The most peculiar of social structures is the one formed by the youth, whose brains are still developing. In high schools, students are often associated with groups, and those groups are ranked above or below one another. Like most social structures, these vary by location. Unlike other social hierarchies, this one is not built on wealth, race, or gender. Rather, the high school social hierarchy turns on the acceptance of others.
The Structure
PBS compiled and analyzed research to determine what a high school social hierarchy typically looks like. The following is what they believe the average high school social structure looks like.
The “Very Popular Kids”: The athletic “alpha males” and the “queen bees”. They often have social skills and looks that make others more attracted to them. They are usually physically stronger than other students of their respective gender and may be more aggressive.
The “Accepted Kids”: The majority of high school students fall into this group. They are considered well known or popular and are smart and outgoing.
The “Average or Ambiguous Kids”: While not popular, they are also not unpopular. They are very common in friend groups.
The “Neglected Kids”: These students are often well-behaved students and achieve good or average grades, causing teachers to not give them special or extra attention. However, it does take them much longer to make friends, and they often do require or wish for some kind of attention from parents and teachers.
The “Controversial Kids”: They often have a mixed, mostly negative, reputation to their name. They may be nice with some weird habits or be bullies to kids while making others laugh with their sense of humor.
The “Rejected Kids”: These students are at the highest social risk. “Rejected Kids” are either submissive, meaning they withdraw themselves from social activities so as to not receive any attention, or aggressive, meaning they purposely act up or emotionally blow up if they are teased too much.
The Line of Acceptance
A student that belongs in any of the first three groups finds themselves above the “line of acceptance”. They are mostly accepted by their peers, or are at least not considered unaccepted. Any student in one of the last three groups is below the line; they are not accepted by the majority of their peers.
The line is drawn between the “Average Kids” and the “Neglected Kids”. If you are on that line, you are, theoretically, perfectly balanced between acceptance and its opposite. The line is the halfway point towards total acceptance and domination of your school as well as complete isolation and “undesirable” status. One question arises from this: What causes one to rise or fall in this social structure?
The Aggressive Social Climb
As previously stated, the students at the top of the high school social hierarchy are likely to be more aggressive than their counterparts. In fact, a student is more likely to be aggressive if they are above the line of acceptance and submissive if they are below it.
While you can have bullies that are beneath the line of acceptance, they are often found above the line. Some students below the line of acceptance undeniably are victims of bullying by either students in their same social status or by those above them. Those at the top of the social structure, however, face bullying and/or aggressive actions more commonly than one typically thinks.
In schools, students are taught that bullies are insecure or are mimicking their home life. This isn’t entirely true for all bullies. It may apply for the kids that are in the “Controversial” social status, but it likely isn’t the case for bullies that are on the top. Researchers from the University of California at Davis and Pennsylvania State sought to uncover the motives of bullying and found a possible answer.
Students at the top of the social hierarchy are aggressive and competing to become the king or queen of the school. In a conflict that occurs over the social climb, neither student is willing to back down. Students at the top of the social structure have more to lose than the average student. After all, a group of friends may revolve around one person, and they are very likely to defend that status as the center of their group, meaning that conflicts are usually started by those in the center and that the friends in the circle back up their “leader”.
Assuming you fit the social norms, the risk of victimization increases with your social status. Being at the top makes you a target. If you’re taken down or outdone and do nothing about it, that’s a guarantee that you are going to lose social status and your rival will gain your former place. If you continue to fall down the social ladder, there is less of a reason for those wishing to climb up to bully you.
Once a student is threatened, they are likely to undergo radical personal changes, either to prepare for the fall to the bottom or to prepare their retaliation. This conflict at the top does spill out to the social groups below them. If an aggressive alpha male drastically drops in social status, they may take their anger out on some submissive, lower status student who wishes no harm. There is little to gain from this, but it serves as an emotional vent for the fallen.
The Lesson
High school has a very tense environment. Students compete for grades and social status. So how does one ensure that they are not trampled during the stampede for the top?
One thing should be clear: do not change who you are as a person. You are a unique individual, and trying to conform yourself to the masses is a way to erode your identity.
It comes down to being able and willing to fight back. Do not initiate conflict, but do not avoid it if it comes your way. If you are willing to defend your own status, not only are you ensuring that you will stay at your current place in the hierarchy, you are also making it possible to shut an aggressive bully down and climb the ladder yourself. As Dr. Jordan B. Peterson said: “Stand up straight with your shoulders back.”
The Auburn Police Department provided an update regarding the police officer shot while on-duty Friday evening.
The police department says Officer Justin Sanders, 30, was shot and seriously injured during a traffic stop of a robbery suspect on Opelika Road.
Officer Sanders was first taken to East Alabama Medical Center, and later transported to UAB Hospital in Birmingham.
Officer Sanders is in stable condition and is recovering.
"We are thankful for the staff at East Alabama Medical Center for all they did to save Officer Sanders. We are very proud of the bravery he displayed in attempting to arrest someone dangerous to this community. We are humbled by the outpouring of public support; we are also very proud and appreciative of the dozens of law enforcement personnel who responded within minutes to aid in preventing further violence by those responsible. More than anything, we are thankful that Justin will recover," says Auburn Police Chief Paul Register.
Officer Sanders is a native of Auburn, and graduated from Auburn High School.
He's a five-year veteran of law enforcement.
The suspect in the shooting was later found deceased in an apartment fire in Auburn. |
Q:
Get the value of one array according to another arrays index
I'm trying to get the numerical value inside array1 by using the index of array2. So for instance, my arrays look like this:
const array1 = ["3","4","5","6","7","8"];
const array2 = ["bat","cat","dog","fish","cow","bird"];
If the user selects "cat", it should grab "4".
array2.forEach(animal => {
console.log(array1.indexOf(animal)); // returns -1
});
How can I make this work?
A:
Pass the current index of array2 into the forEach callback (second argument)
const array1 = ["3","4","5","6","7","8"];
const array2 = ["bat","cat","dog","fish","cow","bird"];
array2.forEach((animal, i) => {
console.log(array1[i]);
});
A:
Just use bracket notation and indexOf
const array1 = ["3","4","5","6","7","8"];
const array2 = ["bat","cat","dog","fish","cow","bird"];
let choice = 'cat'; // user selection
let result = array1[array2.indexOf(choice)]; // grab value
alert(result);
|
Lance Stephenson Arrested
Innocent until proven guilty, but according to the New York Daily News, newly signed Indiana Pacer Lance Stephenson was arrested over the weekend: “Coney Island basketball star Lance Stephenson – a second-round Indiana Pacers pick in the June NBA draft – was busted Sunday for pushing his girlfriend down a flight of stairs, cops said. Stephenson, 19, a legendary player at Brooklyn’s Abraham Lincoln High School, roughed up Jasmine Williams, 21, in the stairwell of her Brooklyn apartment building about 5 a.m., according to police. The 6-foot-5 rookie point guard’s blows sent Williams tumbling head-first down 10 steps, requiring her to be treated at a hospital for injuries to her head and neck, cops said.” |
Broke after having been wiped out in a Ponzi scheme, Malory decides to sell ISIS to rival spy agency ODIN, whose chief (voice of Jeffrey Tambor) happens to be in love with her. Not surprisingly, the ISIS staff tries to stop the sale – and the romance.
Racial tensions are stirred up when a woman (Julianne Moore) claims her son was kidnapped when she was carjacked in a black neighborhood. Well-acted but overwrought. Samuel L. Jackson, Edie Falco, Ron Eldard, William Forsythe. Directed by Joe Roth. |
Our Catholic heritage is full of treasures. Let’s bring them into the classroom
In 1905, one Miss Agnew sat at her desk in Carlisle and sketched out the “scheme of instruction” for the poor Catholic boys and girls of St Cuthbert’s school. Among her entries was the history “object lessons”: here a lesson on Caedmon and Bede, there Joan of Arc, another on Wolsey, next “the Revolution” (nothing “Glorious” about it). It was history, but it was also more than that – it was a reflection of our Catholic identity.
Today, there is little agreement about how Catholic schools should teach children about our heritage. Curricula vary widely. While those under local authority control mostly follow the National Curriculum, academies are free to set their own content.
This level of freedom can serve schools well: teachers can shape a curriculum as they wish, tailoring it to the needs of their parish and community.
But I wonder: are we making the most of that freedom? When the curriculum is left up to individual schools, what children learn is largely determined by whoever happens to be head of department at any particular time. Diocesan support is available for RE, but beyond that the curriculum is fair game for anyone who might wish to impose their preferences, or in some cases their prejudices, upon it.
If we wish children in our schools to know the wholeness of the faith, in all its creative and intellectual glory, then here we are currently falling short of that ambition.
Yet it would be unreasonable to expect each school to develop schemes of work imbued with the supernatural gaze, weaving different subjects into a coherent statement of the whole, each filled with the treasures of the Church. After all, simply holding a degree, or a teaching certificate, is not sufficient; degree courses do not always include the content one might need, and necessarily take on the character of the institution or training course through which they were formed. When so many of our teachers and leaders do not come through Catholic schools, universities or training courses, then links go unseen, knowledge goes undelivered, and our intellectual and artistic heritage are neglected.
In short, curriculum design is a specialist job. And for a Catholic curriculum, even more so.
Perhaps, then, we would all benefit from something more explicit, a collaborative effort to draw up a Catholic curriculum. It must be collaborative because it could only succeed as a collegial endeavour across sectors, with specialists, particularly in our universities, coming together and writing it. And it could integrate wider accountability demands, including exam specifications, in its creation.
This would be our way of keeping pace with changes in the broader educational world. Recent years have seen a renewed interest in “cultural literacy”: an idea which has become a key part of the curriculum revolution currently taking place, under the supportive eye of the schools inspectorate.
Cultural literacy is the idea that a good education provides awareness and understanding of the key references, the key signifiers, of the culture in which our children are being formed. By this account, there is a canon of knowledge that constitutes being culturally literate, which children ought to have as part of a good education.
Nonetheless, contemporary efforts to define the canon fall short: cultural literacy, and indeed the canon, is too often viewed through the secular mores of those who now write it, delivering a body of knowledge without the religious context in which so much of our culture was formed.
By adopting the secular humanist paradigm, we subvert the very notion of cultural literacy: as I have written elsewhere, “If one starts from a position of neglecting the religious and theological backdrop of the culture in which so much of our cultural inheritance was formed, what is offered is but a shadow of artefacts, and ultimately historical and cultural illiteracy, a secular humanist wish-projection of what our shared history and identity should have been, rather than what it practically and really is.”
In contrast, a Catholic curriculum can unlock the treasures of our cultural inheritance, serving wider society by detailing then delivering a truly coherent canon, one best able to give an accurate account of who we are and how we got here.
As such, if there is to be any lucid account of “cultural literacy” then it must include a kind of “faith literacy”, and certainly scriptural literacy, as the key to unlock it. Only here do we find the intellectual infrastructure for a true understanding of Our Island Story, cognisant of its cadences and nuance, its motivations and myopias.
We have long ceased to imagine what a Catholic curriculum might look like. The introduction of the National Curriculum rendered doing so less necessary than it might previously have been, while appeal to “Gospel values” and “Catholic ethos” seemed enough to uphold the Catholicity of our schools without reference to the nuts and bolts of what children were taught. And so, all too often, the “Catholic bit” is what you do in RE, sometimes in an assembly, occasionally in Mass. The Catholic vision of education, indeed of formation, is all-encompassing, able to speak to all of what TS Eliot called the languages of human inquiry. In practice we tacitly reject that vision, treating subjects as secular domains independent of the Catholic imperative: so long as a subject is careful not to contradict the faith, or explicitly criticise it, it passes.
In so doing, we present the faith in an emaciated form, rather than as a comprehensive human drama and experience.
By contrast, a Catholic philosophy of education cares what happens in the history classroom, the art classroom, the English classroom, every bit as much as the RE classroom. If we are to recover in our schools not only a sense of the faith, but of ourselves, we need a newly emboldened Catholic curriculum.
Over 1,000 years ago a certain King Alfred decided that, for the good of his Kingdom and the good of souls, there were certain works it was “most necessary for men to know”. So he translated them; the intention was formation, not just generic development of a thing called “knowledge”. It was believed that these texts, knowing these principles, would be to the benefit of all and singular. Alfred effectively created a canon, not to place limits on what people could know, but to ensure that what they knew at the very least included this.
Perhaps we are again in need of just such a canon. If we want to pass on the treasures of the faith, perhaps we need first to collectively define what they are. Do we want all our children to know the Pietà? Byrd? Lepanto? And if not, why not?
This is not just a project for RE. A century and a half after Newman sought to define a curriculum for a university, the time has come for us to do the same for our schools. If we succeed, we will have helped to define Catholicism’s place in wider culture.
Thus the time is ripe for a revived Catholic curriculum – sequential, across the key stages, to deliver excellence not only in the detail of doctrine, but in the cultural, artistic, musical, liturgical and historical heritage of the Church. Nor is it merely a curriculum of the baptised – in truly Catholic spirit, it would cherish the good, the true and the beautiful, wherever it is found. It need not be so restrictive as to exclude local innovation, but ought to enable all children, regardless of geographic or social context, to receive a minimum entitlement in their learning.
In drawing up such a curriculum, we would be adopting recent insights about core knowledge. We would also recover our own place at the forefront of education, no longer passively accepting wider assumptions and trends but reclaiming our own.
Michael Merrick is a teacher in north Cumbria. He blogs at michaelmerrick.me
This article first appeared in the August 20 2018 issue of the Catholic Herald. To read the magazine in full, from anywhere in the world, go here |
:10FC000001C0F2C0112484B790E890936100109273
:10FC10006100882369F0982F9A70923049F081FF33
:10FC200002C097EF94BF282E80E001D10C94000011
:10FC300085E08093810082E08093C80088E1809312
:10FC4000C90083E08093CC0086E08093CA008EE0F8
:10FC5000EED0279A84E028E13EEF91E030938500D2
:10FC60002093840096BBB09BFECF1F9AA89540912D
:10FC7000C80047FD02C0815089F7CDD0813479F4A6
:10FC8000CAD0C82FDAD0C23829F480E0BDD080E1D4
:10FC9000BBD0F3CF83E0C138C9F788E0F7CF823417
:10FCA00019F484E1D2D0F3CF853411F485E0FACF92
:10FCB000853581F4B0D0E82EAED0F82E87FF07C08E
:10FCC0008BB781608BBFEE0CFF1CB7D0E0CF8BB73A
:10FCD0008E7FF8CF863579F49ED08D3451F49BD049
:10FCE000CBB799D0C170880F8C2B8BBF81E0ADD082
:10FCF000CCCF83E0FCCF843609F046C08CD0C82F2F
:10FD0000D0E0DC2FCC2787D0C82B85D0D82E5E0141
:10FD10008EEFB81A00E012E04801EFEF8E1A9E0A4B
:10FD20007AD0F801808384018A149B04A9F785D0D6
:10FD3000F5E410E000E0DF1609F150E040E063E098
:10FD4000C70152D08701C12C92E0D92EF601419112
:10FD500051916F0161E0C80147D00E5F1F4F22979C
:10FD6000A9F750E040E065E0C7013ED090CF608148
:10FD7000C8018E0D9F1D78D00F5F1F4FF801FE5FE9
:10FD8000C017D107A1F783CF843701F544D0C82F1E
:10FD9000D0E0DC2FCC273FD0C82B3DD0D82E4DD083
:10FDA0008701F5E4DF120BC0CE0DDF1DC80154D072
:10FDB0002BD00F5F1F4FC017D107C1F768CFF801D5
:10FDC00087918F0121D02197D1F761CF853731F409
:10FDD00034D08EE119D086E917D05FCF813509F094
:10FDE00074CF88E024D071CFFC010A0167BFE89589
:10FDF000112407B600FCFDCF667029F0452B19F4DD
:10FE000081E187BFE89508959091C80095FFFCCFE8
:10FE10008093CE0008958091C80087FFFCCF809129
:10FE2000C80084FD01C0A8958091CE000895E0E649
:10FE3000F0E098E1908380830895EDDF803219F03F
:10FE400088E0F5DFFFCF84E1DFCFCF93C82FE3DF7A
:10FE5000C150E9F7CF91F1CFF999FECF92BD81BDA5
:10FE6000F89A992780B50895262FF999FECF1FBAE1
:10FE700092BD81BD20BD0FB6F894FA9AF99A0FBED3
:04FE8000019608954A
:02FFFE000008F9
:040000030000FC00FD
:00000001FF
|
Riyadh: In what could be seen as the highest level of misogyny, a family therapist in Saudi Arabia has gone on record teaching people the proper way of beating one's own wife.
As per a report in AWDNews.com, the national television of the Kingdom of Saudi Arabia has aired a video of Khaled Al-Saqaby – the therapist – in which he is seen giving tips on how to beat the wives to discipline them.
The video is believed to have been aired in the country in early February, 2016. The Kingdom’s government is said to have approved the video, and that is why it was given airtime on national television, states the report.
Speaking on wife beating, Khaled, in the video, advises people to beat their wives to discipline them, not to vent their anger.
“The first step is to remind her of your rights and of her duties according to Allah. Then comes the second step – forsaking her in bed. The third step, beating, has to correspond with the necessary Islamic conditions before taking action. The beating should not be performed with a rod, a headband, or a sharp object. Instead, husbands should use a tooth-cleaning twig or a handkerchief to beat their wife. The wife will feel that she was wrong in the way she treated her husband,” Al-Saqaby was quoted as saying in the report.
He doesn't stop there. At the end of the video, he says his method of beating wives is not exhaustive, and that sometimes men can beat their wives without following his steps, when the women go to extremes by disobeying their husbands.
Rage has just been released to the masses and, like every Bethesda game, is plagued by issues. First, the graphics are buggy: bad textures, flickering, screen tearing, texture pop-in – you name the issue, and Rage has it.
Secondly, many people are experiencing random crashes and performance problems. Add to that the usual complaints of PC gamers – frame rate, FOV, and general performance – and you have all the issues people are currently facing with Rage on PC.
Rage Errors and Fixes
We have listed down all these issues that you may come across when playing Rage with possible workarounds to fix these. If you know anything that would help resolve any of these issues, do share with us in comments.
1# How To Disable Mouse Acceleration
If the default mouse acceleration in Rage throws you off, disable it with the following workaround. As in all previous id games, mouse acceleration is present by default; if you have ‘Enhance Pointer Precision’ checked in your mouse hardware options, try unchecking it.
If that doesn’t work, open the Rage config file and change m_smooth from ‘1’ to ‘0’. m_smooth is the same cvar you would find in the console, and setting it to 0 disables mouse smoothing/acceleration in-game.
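Assuming the config file uses the usual id Tech cvar syntax (worth verifying against your own config before editing), the changed line would look something like:

```
seta m_smooth "0"
```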
2# How To Enable The Developer Console
You will need the developer console for various tweaks and workarounds, so it is better that I walk you through enabling it at the start. To enable the dev console, add +set com_allowconsole 1 to your Steam launch options.
This will enable the dev console in-game, which you can access by pressing the ~ key below ESC. listcvars and listcmds should show you the available settings – have fun tweaking.
3# How To Change FOV in Rage
You can change the FOV to your liking through the developer console, by editing your default CFG, or by adding the command to your launch options. Note that using the developer console disables achievements, so only use it when absolutely necessary.
If you want to change the FOV through console, bring it up in-game, and use cvaradd command to change the g_fov variable. You can add or subtract fov values from the default 80. For example, cvaradd g_fov 15 will result in g_fov 95 since you added 15. Similarly, cvaradd g_fov -15 will subtract 15 from default 80 value and the resulting g_fov will be 65. I hope you got the point.
If you don’t want to mess up your achievements, you can add these values to your default config file, which is located here: C:\Program Files (x86)\Steam\steamapps\common\rage\base. Or you can add these values to your launch options.
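For example, the launch-options route might look like this (hypothetical values – right-click RAGE in Steam, then Properties, then Set Launch Options):

```
+set com_allowconsole 1 +cvaradd g_fov 10
```

This keeps the FOV change out of the in-game console, which per the workaround above should leave achievements intact.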
4# Horrible Rage Flickering and Artifacts Pop-Ins
If you are experiencing flickering and large artifacts pop-ins while playing Rage, you should probably roll back your drivers. If you have beta drivers installed, they don’t help either. Switching back to old drivers may help the cause.
5# Rage ATI Fixes
If you are trying to play Rage on your ATI video card, there are few things you should know:
Force the game to start in best performance mode, disable the 3D application control. Disable the Catalyst AI. Roll back your drivers.
Rage is experiencing conflicts with drivers, resulting in bad overall performance – forcing these settings may improve performance for you. If you are concerned about messing up your settings, try giving the Rage Hotfix drivers from ATI a shot; if they resolve your issue, you don’t need to do any of this.
6# Can’t Launch Rage
If you can’t launch the game on Steam, log out of Steam and log back in, verify the game cache, and retry. If it still doesn’t work, relaunch Steam as administrator and try again. It may also be due to customized settings you are forcing through your Nvidia or Catalyst control panel; get rid of them all and try again.
7# Rage Lag Fix
Disable Steam in-game community feature to resolve the lag issue, if you are among those facing it. Right-click on RAGE in Steam -> properties -> uncheck ‘enable steam community in game’.
8# ATI Flickering Fix
Download and Install 11.8 ATI drivers to fix the issue.
9# Rage Crashes (ATI Hot Fix)
There are two things you can do to resolve the random crashes that occur when you try to play Rage: disable triple buffering in your ATI Control Panel, and install the Rage Hotfix drivers.
10# How To Fix Screen Tearing in Rage
Rage by default doesn’t allow you to enable Vsync. If you are experiencing flickering and vertical tearing on screen, you can force Vsync through your graphics card control panel, be it the Nvidia Control Panel or the ATI Catalyst Control Panel.
11# Crashes To Desktop – Rage Stopped Working
If you can’t get into the game and it crashes to desktop on startup, try this: if you get a black screen and have to Ctrl-Alt-Delete to reach the ‘Rage has stopped working’ window, try removing the ‘video’ folder (rage->base->video). Make sure you back up the folder for the worst-case scenario.
12# Rage SLI or Crossfire
Rage doesn’t support SLI or Crossfire, so don’t bother. It would only make it worse.
13# Rage Textures – Nvidia
Install the latest beta drivers if you are experiencing bad and cluttered textures in Rage. This will most probably fix the issue on high-end machines. If you are on an average PC, try rolling back the drivers to 235.38 and see if that helps.
14# Rage Textures and Bad Graphics – ATI
Try downloading the latest Rage Hotfix drivers and see if that helps. Most people have complained about these drivers, but since they have helped a few, trying won’t hurt.
15# How To Fix Rage Stuttering – Nvidia
Force v-sync through the Nvidia control panel. Enable GPU transcode in-game if you have a high-end GPU, for faster streaming of textures.
16# Fix Slow Rage Textures
Download Battlefield 3 Beta drivers since they seem to have fixed Rage’s slow texture streaming for a lot of people.
17# How To Fix Rage Textures Popping
Some players suggest that disabling AA and transparency AA in desktop control panel will fix texture issues and improve FPS. Worth a try!
18# AMD Rage Drivers Update
AMD has just released updated drivers to fix most of the issues PC Gamers with AMD family of video cards are experiencing. You can download these drivers from here and install them by following the official instructions.
19# Rage Graphics Tweaks
You can follow our Rage Tweak Guide to tweak graphics for better image quality or to improve performance of the game by tweaking them down.
20# Rage Doesn’t Launch
Go to Steam’s menu > Settings > Beta participation > Change, choose Steam Beta Update, and this should fix the bug. The game sometimes crashes when there’s a problem related to Steam, so be sure to check these settings.
However, if you can’t get into the game and it crashes to desktop on startup, try this: if you get a black screen and have to Ctrl-Alt-Delete to reach the ‘Rage has stopped working’ window, try removing the ‘video’ folder (rage->base->video). Make sure you back up the folder for the worst-case scenario.
If you experience any other issue, let us know in the comments and we will try to help you out. If you have successfully started the game without any hiccups, enjoy the game. |
Cytomorphology and morphometry of small round-cell tumors in the region of the kidney.
Small round-cell tumors (SRCTs), with malignant cell components measuring 10 µm or less in diameter with scanty cytoplasm in alcohol-fixed smears, pose a diagnostic challenge at fine-needle aspiration cytology (FNAC), especially when they are situated in and around the kidney and need facilities such as electron microscopy, immunohistochemistry, tissue culture, and cytogenetics for their subtyping. A precise cytodiagnosis of SRCTs is important because a definite diagnosis is mandatory in the preoperative diagnostic workup for presurgical chemotherapy in these cases. With this view in mind, an attempt has been made to diagnose SRCTs in the region of the kidney based on cytomorphology and morphometry alone, so as to facilitate diagnosis in a simple cytology laboratory of a developing country where facilities for auxiliary techniques are not easily available. Of 2,028 abdominal aspirates in a 12-yr period, 36 SRCTs were diagnosed in the region of the kidney by correlating with histology, radiology, and clinical features. The smears were studied for cellularity, morphology, pattern of cell arrangement, and smear background, and morphometrically analyzed using an ocular micrometer. An aspirate with preponderant malignant round cells that were as large as or double the size of red blood cells in air-dried smears, or measured less than 10 µm in diameter in alcohol-fixed smears, was considered a small blue-cell tumor. Twenty-one were diagnosed as Wilms' tumor (WT), 10 as neuroblastoma (NB), 3 as ganglioneuroblastoma (GNB), 1 as a cellular congenital mesoblastic nephroma (CMN), and 1 as an adrenocortical carcinoma (ACC). Cell clusters with neuropil and cytoplasmic processes were diagnostic of NB, ganglion cells of GNB, and blastema with tubular differentiation of WT.
Aspirates from CMN and ACC were considered simulators/mimickers of SRCT because they had a superficial resemblance to SRCT and their differentiating cytomorphological features observed at histology were too subtle to be noted at cytology. The latter were appreciated only on retrospective analysis after histological confirmation. Thus, morphometry in correlation with cytology, clinical history, physical findings, and radiological data is helpful in guided FNA for a definite diagnosis of SRCT in the region of the kidney. One needs to keep in mind the mimickers of small round-cell lesions at this anatomic site. |
The H7N9 bird flu virus is less likely to cause disease than the H5N1 virus that has killed hundreds of people worldwide, scientists say.
But tracking the new virus is harder, as it seems to spread quietly among poultry.
While the spread of H5N1 has been halted by culls of chickens, this will not work with the H7N9 virus, as it is not known to have caused widespread deaths among birds or other animals, says Professor Yuen Kwok-yung, a microbiologist at the University of Hong Kong.
Though symptoms are mild in animals infected with H7N9, the virus seems to be more deadly to humans, having killed three of the nine people infected so far and left the others critically ill.
"These avian viruses are not well adapted to humans, so they cause much more of a problem," Yuen said.
Since culling is ineffective, the remaining preventive measures are screening tests and vaccination. Some Western countries have developed a vaccine against the H7 virus but its efficacy on H7N9 is unknown.
Scientists around the world have been seeking clues from the genome of the virus since it was obtained from the first three cases in Shanghai and published online on Sunday.
Analysis suggests that reassortment - in which different virus strains swap genes with one another in a host - gave birth to the new strain, according to an article published in science journal Nature on Tuesday. It appears to stem from the reassortment of three virus strains that only infect birds.
The H7N9 virus most likely originated in eastern China in wild birds, and was then transmitted to poultry and later to people, Yuen says.
Dr Masato Tashiro, who researches the genome of the virus at the World Health Organisation, says it has mutations that adapted to infect mammals. As such, pigs are also a possible infecting agent.
A feature of the virus is that its H protein is structurally similar to that of viruses that do not make birds severely ill. It has acquired key mutations that enable the H protein to latch onto receptors on mammal cells in the airways instead of bird receptors, according to the Nature article.
Yuen found that one of its eight gene segments had a mutation that helped it adapt to the human body temperature of 37 degrees Celsius.
As for how H7N9 infections will develop, Yuen says human infections may disappear suddenly, sporadic cases may continue before the virus returns as a more severe strain next winter, or human-to-human transmission may occur.
This article appeared in the South China Morning Post print edition as H7N9 harder to track but less likely to cause disease
"...having killed three of the nine people infected so far and left the others critically ill." One wonders how they know how many humans might have been infected with this if those people didn't become critically ill and seek treatment. The conclusion of only nine people being infected seems unwarranted from the facts presented. |
###### Strengths and limitations of this study
- Large interventional controlled trial on iodine supplementation during pregnancy, powered to detect a difference of three IQ points in children.
- Long observational follow-up of the children, up to 14 years, with complex assessment of neurocognitive development.
- Future implementation of the study is feasible, as the intervention tablet exists on the market.
- Lack of pure iodine and pure placebo tablets implies careful interpretation of results.
- Dropout rate may be high.
Background {#s1}
==========
Iodine deficiency as an international issue {#s1a}
-------------------------------------------
Iodine is essential for the production of thyroid hormones and important for growth and brain development during fetal and early postnatal life[@R1]; knowledge obtained through a long history of iodine deficiency (ID)-associated disorders. For centuries, goitre with hypothyroidism, mental retardation and cretinism were recognised as a clinical entity. During the 1920s in the USA, Marine and Kimball performed the classic experiment of treating schoolgirls with iodine, leading to a dramatic reduction in the prevalence of goitre. Iodine prophylaxis was established in the USA in 1921. After some debate, iodine prophylaxis was introduced in Switzerland in 1922, and then worldwide over the subsequent decades. The combat against severe and moderate ID has been successful in reducing the number of children with ID-caused mental retardation. However, mild ID remains widely apparent, especially during pregnancy,[@R2] when dietary iodine demand increases from 150 to 250 µg/day.[@R3]
Iodine status in Sweden as the country for this study {#s1b}
-----------------------------------------------------
Before iodination of table salt in 1936, ID was common in Sweden.[@R4] Current iodine intake is sufficient in the general population[@R5] and was considered adequate during pregnancy during the 1990s[@R7]; there is no recommendation on iodine supplementation during pregnancy. However, since the 1990s, the situation may have changed because dairy product consumption in adults is lower; milk iodine levels are lower than before[@R9]; a reduction in salt intake is recommended for reducing the risk of hypertension; new salt forms (flake salt, gourmet salt) without iodine are popular; there is a reluctance to consume 'food additives'; awareness of ID among the younger population is generally low; and the main proportion of total salt intake (≈80%), that is, from ready-made foods and dishes, does not provide iodine. Unless iodine is added to all salts used, the risk of decreased iodine intake is apparent, and arouses concerns, especially for pregnant women. Retrospective, local data on pregnancy suggest this assumption is realistic.[@R11]
ID during pregnancy: effects on the child's development {#s1c}
-------------------------------------------------------
Severe and moderate ID leads to lower serum thyroid hormone levels and thereby to lower availability of thyroid hormones in the brain. During fetal life and early years, the growing brain is vulnerable[@R12] and severe ID results in mental retardation in the newborn, unless the thyroid hormone is replaced.[@R14] In addition, an increased incidence of attention deficit hyperactivity disorders (ADHD) has been associated with mild to moderate ID.[@R15]
In mild ID, thyroid hormone levels are maintained, whereas thyroglobulin (TG) levels are increased as a biomarker of goitre. The brain's use of thyroid hormones depends on the local conversion of inactive hormone thyroxine (T4) to active hormone triiodothyronine (T3), a process mediated by deiodinase type 2 (D2).[@R16] D2 is found in the hippocampi and the cerebral cortex and its activity is increased by ID to maintain sufficient T3 levels.[@R16] In the presence of normal thyroid hormone in blood, it is unclear how mild ID affects brain development. One theory is that this depends on deiodinases, which can change thyroid hormone signalling locally in different tissues, without affecting serum hormone concentration.[@R16]
Mild ID during pregnancy might have an impact on brain development, despite maintained normal thyroid hormone levels.[@R19] In the UK, a longitudinal study[@R19] found 8-year-old children have an increased risk of being in the lowest quartile of verbal IQ, if their mothers had mild ID in early pregnancy, than children of mothers with normal iodine nutrition. In a similar association study from Australia,[@R20] mild ID was linked with lower cognitive performance in 9-year-old children. Results from an observational pilot study from Italy[@R21] indicate mild to moderate ID during fetal life affects cognitive development, especially verbal abilities, even in absence of maternal thyroid insufficiency. In Norway, a large observational study[@R22] found maternal iodine intake below the estimated average requirement during pregnancy was associated with reduced fine motor skills and verbal abilities and with more behaviour problems at the age of 3 years.
As the randomised controlled trial[@R23] evaluating 150 µg iodine/placebo in pregnant women in an iodine sufficient country was small (n=86) and lacked cognitive assessment in children, expectations for the Maternal Iodine Supplementation and Effects on Thyroid Function and Child Development (MITCH) study were high.[@R24] In this trial, 832 women from Thailand and India were randomised to 200 µg iodine/placebo, and there was no difference in cognitive outcome in 5--6-year-old children. However, these results were ambiguous for several reasons. First, the women entered the MITCH study with a urinary iodine concentration (UIC) consistent with mild ID, but they had a normal TG, which indicated that their prepregnancy iodine stores may have been sufficiently filled, thus minimising any mental effects on the children. Second, some women were already iodine sufficient at baseline.[@R25] Third, both the intervention and placebo groups were iodine sufficient in the second and third trimesters. To prevent subnormal fetal brain development, many international authorities recommend 150 µg extra iodine/day during pregnancy, despite the lack of studies proving causality.[@R26]
Knowledge gaps and background to the Swedish Iodine in Pregnancy and Development in Children study {#s1d}
--------------------------------------------------------------------------------------------------
There is a substantial gap in knowledge about mild ID during pregnancy and its potential negative consequences on neuropsychological development. Therefore, there is a need for a placebo controlled trial that compares neuropsychological outcomes in children exposed to mild ID during fetal life with those in children whose mothers had normal iodine nutrition during pregnancy.
From 29 November 2012 to 1 June 2015, our group conducted a pilot randomised placebo controlled trial in which 200 pregnant women received a daily supplement of either a multivitamin containing 150 µg iodine or a multivitamin without iodine (placebo). This study (ClinicalTrials.gov identifier: NCT02378246) aimed to evaluate the effects of iodine supplementation on UIC and thyroid function. As the MITCH study had ambiguous results, the question of whether mild ID during pregnancy affects fetal brain development remains unanswered, and it was evident to us that our trial needed to be expanded to include enough pregnant women to enable a satisfactorily powered child follow-up of neuropsychological development.
There are indications[@R28] that the UIC level during pregnancy in Sweden is lower than that detected in the MITCH study, and an elevated TG is detected in early pregnancy, implying a lower iodine status at study start. Moreover, iodine status in the third trimester was clearly lower in a local Swedish study[@R11] than in the placebo group of the MITCH study, indicating a different iodine situation in Sweden than in Thailand and India. Therefore, the Swedish Iodine in Pregnancy and Development in Children (SWIDDICH) study is being conducted. The hypothesis is that the use of an iodine-containing multivitamin during pregnancy results in better cognitive development in the child than a multivitamin without minerals (superiority trial), and that this effect is stronger for verbal competence, in agreement with previous findings.[@R19]
Objectives {#s1e}
----------
The primary aim is to assess whether cognition (especially verbal competence) is higher in children whose mothers received 150 µg iodine daily in a multivitamin during pregnancy than in children whose mothers received placebo (a multivitamin without iodine) and probably remained in mild ID. The purpose is to determine whether all pregnant women who live in a country where the general population is iodine sufficient, but under conditions that can result in mild ID during pregnancy, should be recommended extra iodine during pregnancy.
Methods {#s2}
=======
Design of the SWIDDICH study {#s2a}
----------------------------
This is a randomised placebo controlled study in which children are followed up as an observational cohort, separated into two groups by fetal iodine exposure.
Setting and participants {#s2b}
------------------------
Pregnant women will be recruited from more than 10 maternal healthcare centres in Sweden with the aim of forming several clusters to facilitate child follow-up. The main study site will be in Gothenburg, with secondary sites in Umeå and Linköping, and other areas where maternal healthcare centres are recruited. At the first scheduled pregnancy visit, information about the study will be provided and written informed consent collected by the midwife. All procedures during pregnancy will be combined with routine pregnancy visits.
All informed consents and blood and urine samples for future analyses will be sent to the main study site in Gothenburg. To promote participant retention and a complete follow-up, the study coordinator will contact participants after childbirth. In addition, information will be shared with participants on the homepage <https://www.gu.se/swiddich>.
Inclusion {#s2c}
---------
The following inclusion criteria will apply: women aged 18--40 years, pregnant at 7--12 weeks, willing to refrain from iodine supplementation and take a multivitamin supplement instead, without current thyroid disease, with no other pregnancy or lactation within 6 months before inclusion, and non-vegan.
Randomisation, allocation, concealment and blinding {#s2d}
---------------------------------------------------
Randomisation numbers with an allocation ratio of 1:1 are prepared centrally and sent to each participating centre. Consecutive numbers are used, and the information on the study group allocation of each number is stored securely at the premises of the University of Gothenburg, Sweden. Mothers are allocated a container of pills at random, either by drawing a lot or by blindly drawing a container. All containers are identical, with tasteless pills of the same size for both groups. Recruiting staff, study participants and those involved in laboratory work and developmental assessment are blinded to the group allocation. The code will only be broken by the central study team for data analyses before publication, while all groups working with the follow-up remain blinded. The code has been broken for the 200 women of the pilot study, but all others (ie, study participants, psychologists and lab engineers) are still blinded. No other interim analyses are planned.
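The protocol specifies only centrally prepared consecutive randomisation numbers at a 1:1 ratio; the block size, group labels and seed in the sketch below are illustrative assumptions, not part of the protocol. A minimal example of how such a concealed allocation list could be generated:

```python
import random

def make_allocation_list(n_participants, block_size=4, seed=20170301):
    """Map consecutive randomisation numbers to concealed group labels
    at a 1:1 ratio, using randomly permuted blocks (block size assumed)."""
    assert block_size % 2 == 0, "block must split evenly between two groups"
    rng = random.Random(seed)  # fixed seed so the central list is reproducible
    allocations = []
    while len(allocations) < n_participants:
        block = ["iodine"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    # Consecutive numbers 1..n; only this central list links number to group.
    return {number: group
            for number, group in enumerate(allocations[:n_participants], start=1)}

allocation = make_allocation_list(1276)
```

Permuted blocks keep the two arms balanced at any point during recruitment, which matters when recruitment is spread over many centres.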
Intervention {#s2e}
------------
Women in the experimental group receive a daily multivitamin supplement containing 150 µg iodine and those in the control group receive a daily multivitamin supplement containing no iodine (the contents of the two supplements are presented in [table 1](#T1){ref-type="table"}). The intervention lasts throughout pregnancy until the day of delivery. Women in both groups are recommended, as are all pregnant women in Sweden, to take extra folic acid 400 µg/day during the first trimester,[@R29] and extra iron when haemoglobin status indicates it. Therefore, women in the placebo group will receive at most 600 µg folic acid daily, which is safely below the tolerable upper level of 1000 µg/day.[@R30] The folic acid and iron supplements do not interfere with the study tablet. However, the women are not permitted to take any other multivitamins besides the study supplement.
######
Multivitamin with iodine (intervention) and multivitamin without iodine ('placebo'): table of contents
| Intervention: iodine 150 μg (MITT VAL VEGETARIAN) | Placebo: no iodine (ENOMDAN) |
|---|---|
| B~2~ 1.4 mg (87%)\* | Vitamin A 400 μg (50%) |
| B~12~ 15 μg (750%) | Vitamin B~1~ 1.4 mg (93%) |
| Iron 12 mg (30%) | Vitamin B~2~ 1.7 mg (106%) |
| Zinc 12 mg (133%) | Vitamin B~6~ 1.8 mg (128%) |
| Iodine 150 μg (85%) | Vitamin B~12~ 3 μg (150%) |
| Selenium 50 μg (71%) | Vitamin C 60 mg (70%) |
| Calcium 250 mg (28%) | Vitamin D 5 μg (50%) |
|  | Vitamin E 10 mg (100%) |
|  | Niacin 19 mg (111%) |
|  | Folic acid 200 μg (50%) |
\*Numbers in parentheses denote % of Recommended Daily Intake (%RDI) during pregnancy.[@R29]
Compliance: discontinuation {#s2f}
---------------------------
Participants are asked to bring the container with the remaining pills to the visit in the third trimester. The container is weighed and the percentage of intended doses taken is calculated. Mothers who no longer want to participate in the study during pregnancy will be regarded as dropouts and no further data will be collected. Children who miss a visit during the follow-up can come to the next visit. If the discontinuation is permanent, a registry search will still be done.
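The compliance calculation from the weighed container can be sketched as below; the tare weight, per-tablet mass and the assumption of no spilled or lost tablets are all illustrative, not values taken from the protocol.

```python
def compliance_percent(gross_weight_g, container_tare_g, pill_weight_g, doses_intended):
    """Estimate the percentage of intended doses taken from the weight of
    the returned container. All weights in grams; pill_weight_g is the mean
    mass of one tablet (a hypothetical value here)."""
    pills_returned = round((gross_weight_g - container_tare_g) / pill_weight_g)
    doses_taken = doses_intended - pills_returned  # assumes no spillage or loss
    return 100.0 * doses_taken / doses_intended

# Example: 180 intended doses; 22 tablets of 0.5 g left in a 35 g container
print(round(compliance_percent(46.0, 35.0, 0.5, 180), 1))  # → 87.8
```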
Outcomes {#s2g}
--------
### Outcomes in mothers {#s2g1}
Outcomes in mothers will be assessed in the first, second and third trimesters of pregnancy. UIC and thyroid hormones will be measured in all three trimesters, and thyroperoxidase (TPO) antibodies and TG in the first and third trimesters.
### Primary outcome in children {#s2g2}
Cognition measured by IQ (total IQ) with focus on the verbal compound (verbal IQ) at 7 years is the primary outcome (Wechsler Intelligence Scale for Children, WISC-V[@R31]).
### Secondary outcomes in children {#s2g3}
Cognition measured by IQ at 3.5 years (Wechsler Preschool and Primary Scale of Intelligence-IV[@R32]) and at 14 years (WISC-V[@R31] or an equivalent adequate version at the time) are secondary outcomes, together with outcomes related to psychomotor development, behaviour and ADHD. Psychomotor assessment will be done by the parents at 18 months (The Ages and Stages Questionnaire-3)[@R33] and by a physiotherapist at 7 years (Movement Assessment Battery for Children test).[@R34] Behaviour will be assessed through parental questionnaires, the Child Behavior Checklist (CBCL); first at 3.5 years (CBCL 1--5),[@R35] then at 7 and 14 years (CBCL 6--18).[@R36] At 7 and 14 years, the Nordic questionnaire 5--15[@R37] will be used to assess ADHD-related symptoms.
Parents also give their consent to a registry search at 3.5, 7 and 14 years regarding the inpatient and outpatient registries for collecting information on medical diagnoses, the drug registry, the medical birth registry, quality registries, maternal, child and school healthcare for medical and growth data, and educational registries.
In a subgroup of children (n=200), structural brain changes will be evaluated by MRI of the brain (with a 3T Philips MR scanner) at 7 and 14 years. Automatic segmentation of the whole brain will be performed with FreeSurfer[@R38] and MAPER (multi-atlas propagation with enhanced registration).[@R39] Mediotemporal lobe (MTL) structures will be analysed through manual segmentation with custom software developed in previous projects.[@R40] Subregional analyses directed at regions of neurogenesis will be included. Intracranial volume, measured manually, will enable reliable normalisation of MTL volumes. Other structural and/or functional brain imaging methods may supplement, or even replace, the described protocol, depending on the state of knowledge at the time of the study.
Possible confounding variables and background information {#s2h}
---------------------------------------------------------
In children, UIC will be measured from the 3.5-year visit onwards, and dry blood spots will be collected for thyroid hormones, TG and deiodinases at 7 and 14 years. Background and confounding variables will be assessed at 18 months and at 3.5, 7 and 14 years.
Time frame for the study actions {#s2i}
--------------------------------
Recruitment to the SWIDDICH study began in March 2017 and is planned to be completed in 2019. Currently, 75 of 1275 pregnant women have been included. Several strategies are used to reach the target sample size: a study coordinator is employed to contact maternal healthcare centres; a stepwise reimbursement model rewards maternal healthcare centres with high recruitment rates; the National Food Agency promotes study participation in its communication with maternal healthcare centres; and local paediatricians are involved to facilitate the child follow-up. The follow-up of children was also offered to the families participating in the pilot study (2012--2015), before the study extension was decided. The time points for all study actions are presented in [table 2](#T2){ref-type="table"}.
######
Summary of SWIDDICH study actions
| Study action | First pregnancy visit (\<12 weeks) | Weeks 7--12 | Weeks 25--28 | Weeks 34--38 | 18 months | 3.5 years | 7 years | 14 years |
|---|---|---|---|---|---|---|---|---|
| **Enrolment** | | | | | | | | |
| Information given | X | | | | | | | |
| Eligibility screen | X | | | | | | | |
| Informed consent | X | | | | | | | |
| Allocation | X | | | | | | | |
| **Intervention** | | | | | | | | |
| Iodine 150 µg or placebo in multivitamins | | X | X | X | | | | |
| **Assessments** | | | | | | | | |
| Urinary iodine concentration | | X | X | X | | X | X | X |
| Thyroid function\* | | X | X | X | | | X | X |
| Milk iodine concentration | | | | | | | | |
| Cognition (IQ) | | | | | | X (WPPSI) | X (WISC) | X (WISC) |
| Behaviour | | | | | | X (CBCL) | X (CBCL, Nordic 5--15) | X (CBCL, Nordic 5--15) |
| Psychomotor development | | | | | X (ASQ-3) | | X (Mov ABC) | |
| Brain MRI (subgroup) | | | | | | | X | X |
| **Background information** | | | | | | | | |
| EUthyroid SES questionnaire, adults | | | | | X | X | X | X |
| EUthyroid SES questionnaire, children | | | | | | | | X |
| Own questionnaire | | X | X | X | X | X | X | X |
\*FT4, TSH and thyroglobulin: serum sampling during pregnancy and dry blood spot sampling during the child follow-up.
ASQ, The Ages and Stages Questionnaire; CBCL, Child Behavior Checklist; EUthyroid SES questionnaire, Socioeconomic Status questionnaire, validated by EUthyroid foundation; Mov ABC, Movement Assessment Battery for Children; SWIDDICH, Swedish Iodine in Pregnancy and Development in Children; WISC, Wechsler Intelligence Scale for Children; WPPSI, Wechsler Preschool and Primary Scale of Intelligence.
Patient and public involvement statement {#s2j}
----------------------------------------
Pregnant women were not involved in the planning of the study.
Considerations {#s3}
==============
Considerations on the content of the intervention and the 'Placebo' tablets {#s3a}
---------------------------------------------------------------------------
The reason for choosing iodine-containing multivitamins instead of pure iodine tablets as the intervention is to ensure that future implementation of the study findings is feasible. There are currently no pure iodine tablets available on the market. In the planning stage of the study, discussions were initiated with pharmaceutical companies about providing pure iodine tablets and placebo, but interest was low. In the future, iodine in multivitamins will be the only available supplement source in most countries. Therefore, a multivitamin containing 150 µg iodine was chosen for the intervention and a multivitamin without minerals as the comparator.
Other components in the multivitamin products, besides iodine, may interfere with the outcomes. Vitamin B~12~ [@R42] and iron[@R44] have been proposed to have positive effects on the brain, and iron and selenium influence thyroid hormone levels.[@R45] Iron is found in the TPO enzyme that couples iodine to TG. Selenium is found in deiodinases, such as D2, which converts T4 to T3, and also acts as an antioxidant in the thyroid gland. Sweden is a selenium-deficient country,[@R47] but it is unclear whether selenium deficiency affects cognitive outcomes in humans.[@R48] The B~12~ content is higher in the iodine-containing multivitamin, which also includes iron and selenium. However, the B~12~ content in both the placebo and intervention tablets is at least equal to the recommended daily intake; thus, B~12~ deficiency is not anticipated in either group. In addition, the iron content is low, and many pregnant Swedish women take a separate 100 mg iron supplement, which makes the 12 mg iron in the intervention tablet negligible. Iron, B~12~ and selenium will be measured in a subpopulation to evaluate possible group differences and contributions to thyroid metabolism.
Considerations in choosing a realistic starting point for intervention {#s3b}
----------------------------------------------------------------------
Fetal brain development during the first 12 weeks depends on maternal T4 levels. By initiating the intervention at pregnancy weeks 7--12, a substantial part of the first trimester is missed. Ideally, iodine supplementation would be initiated before conception. In practice, recruiting women who plan a pregnancy is difficult, as these women are not known to healthcare providers before pregnancy. One option would be newspaper advertising aimed at women who are planning a pregnancy. However, this would be ineffective and create selection bias, as only 50% of those who fall pregnant have planned the pregnancy, and not every woman responds to an advertisement. Women are therefore included at the earliest possible stage, which is still far earlier than in a recent publication by Casey *et al*,[@R45] who included pregnant women at mean gestational weeks 16.6--18.0 and found negative results. The inclusion in the proposed study is similar to that in the MITCH study, where women were included in gestational weeks 10 and 11.[@R24]
Power calculation, data management, statistical considerations and authorship {#s3c}
-----------------------------------------------------------------------------
The sample size needed, excluding dropouts, is calculated to be 788 children (394 in each group) for an effect size of 3 IQ points with SD 15 and power 0.80. Currently, there are no similar randomised studies on which to base a power calculation. The smallest significant effect of 3 IQ points is in accordance with an observational study[@R19] in which children of mothers with UIC\<150 µg/L during pregnancy had a 3-point lower IQ at school age than children of mothers with normal UIC during pregnancy. This expected effect of iodine supplementation in mild ID is also suggested by Troendle,[@R49] who discusses the statistical considerations for conducting the needed placebo controlled study. Assuming a dropout frequency of 22% during pregnancy (in accordance with preliminary data from the pilot study of 200 pregnant women[@R28]) and 20% during the child follow-up, 1263 pregnant women need to be recruited. This sample size is in general agreement with Troendle,[@R49] so the decision was made to recruit 1275 pregnant women. The dropout frequency for the child follow-up could be lower than estimated, as there are two occasions for dropout and mothers who remain in the study after the first follow-up can be assumed to be willing to continue. The power calculation assumes the use of an unpaired t-test between groups; however, more advanced analyses could decrease variance, thus requiring a lower sample size.
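The arithmetic above can be reproduced as follows; a two-sided test at α=0.05 is assumed, since the protocol states only the effect size, SD and power.

```python
import math
from scipy import stats

def power_two_sample_t(n_per_group, effect_sd_units, alpha=0.05):
    # Exact power of a two-sided two-sample t-test, via the noncentral t.
    df = 2 * n_per_group - 2
    nc = effect_sd_units * math.sqrt(n_per_group / 2)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

effect = 3 / 15  # 3 IQ points with SD 15 -> 0.2 SD units
n = 2
while power_two_sample_t(n, effect) < 0.80:
    n += 1
total_children = 2 * n        # 394 per group, 788 in total

# Inflate for expected dropout: 22% during pregnancy, 20% at follow-up
women = math.ceil(total_children / (0.78 * 0.80))
print(n, total_children, women)
```

Retention of 78% through pregnancy times 80% through the follow-up gives 62.4% of recruited women contributing a tested child, hence 788 / 0.624 ≈ 1263 women.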
The sample size will be reassessed by calculating the dropout frequency when 750 women are included and when half of the children from the first 200 included women have been invited to the 3.5-year neuropsychological evaluation. Sample size reassessment will be conducted without unblinding the study groups.
Full (100%) compliance with the study medication is assumed, as the results will be based on an intention-to-treat (ITT) analysis. Compliance is monitored to enable an additional per-protocol analysis (including only the compliant participants). However, the ITT approach reflects the real-life clinical situation, in which a certain number of patients do not comply with the recommended treatment, and it will be the foundation for future recommendations on iodine supplementation for all pregnant women.
A separate power calculation for the MRI investigation has been done. This assumes the described protocol will be followed, and a previous study of 11-year-old children has been used for guidance.[@R50] To detect a 5% difference with power 0.80, each group requires 60 children. As the variation in the hippocampal volumes in 7-year-old children could be slightly larger than in the previous study,[@R50] and as dropout from the MRI at 14 years needs to be considered, 100 children will be included in each group.
Coded collected data will be entered into a database, with appropriate backup from the university servers. Key lists will be kept safe and transfer of data to the databases will be validated by random cross-checks with the original data set. UIC analyses will be duplicated to promote validity. For further details, see ethical applications (Diary numbers: 431-12 approved 18 June 2012 (pregnancy part) and 1089-16 approved 8 February 2017 (children follow-up)) and <https://clinicaltrials.gov/>. All authors will have access to all data and the statisticians will have access to the data needed.
The choice of methods for comparing the main outcome between the experimental and control groups will be guided by the data distributions. In case of deviations from normality assumptions, transformations of the data may be applied. Non-parametric tests will be used for non-normal and ordinal data. Possible confounders, such as socioeconomic factors, other background information, gestational age, thyroid hormones, TG, deiodinase polymorphisms and UIC, will be considered in the data analyses. Repeated measurements in a mixed model (where groups are compared repeatedly at 3.5, 7 and 14 years) and within-group analyses are planned. The models will also consider dropout frequency and recruitment from different maternal healthcare centres, which will be used as a factor in the analysis. For all dropouts, relevant background variables will be studied. Adjustments for bias may be performed. For non-informative dropouts, methods for multiple imputation will be considered. A multivariate analysis with total grey matter volume, total brain volume, intracranial volume, MTL volumes and possibly other measures of brain structure and function as independent variables will be conducted. The data analyses will be undertaken by an experienced statistician. Authorship will be decided according to the Vancouver recommendations.
MRI considerations: where are changes from ID located? {#s3d}
------------------------------------------------------
T3 receptors are distributed among all brain areas with high levels in the hippocampi and the cerebellar cortex. Rodent data indicate T3 receptors are involved in the regulation of hippocampal structure and function.[@R51] In the human cerebral cortex, thyroid receptors are already present in week 9 and concentrations increase up to 18 weeks of gestation.[@R52] Considerable amounts of D2 are also found in the cerebral cortex.[@R53] In the first half of pregnancy, the fetus is dependent on the mother's supply of thyroid hormones. In mild ID, the mother maintains serum thyroid hormone levels through unknown compensatory mechanisms. In the second half of a mild ID pregnancy, when the fetus partly relies on its own thyroid hormone production, the fetus will be hypothyroid, as it has not developed compensatory mechanisms and there is a lack of sufficient iodine levels transferred by the mother.[@R53]
The description of neuropathology caused by ID is limited to a few observations from adult cretins, ranging from severe cortical atrophy to an almost normal appearance. In areas with endemic goitre, fetuses aborted in the second half of pregnancy have a less differentiated cerebral cortex.[@R53] In rats, transient periods of thyroid hormone insufficiency during cortical development affect cortical and hippocampal cytoarchitecture.[@R53]
Human data on maternal hypothyroidism support an effect on the brain, specifically on the hippocampus.[@R54] These data are in line with the recent publication by Korevaar *et al*,[@R55] who conclude that the relationship between IQ and FT4 (in peripheral blood) is U-shaped, with lower IQ at both ends of the normal range. FT4 in this study[@R55] is also associated with total grey matter volume.
Considerations on the neuropsychological evaluation {#s3e}
---------------------------------------------------
Neuropsychological development can be divided into three domains: psychomotor, cognitive (IQ) and socioemotional development ([figure 1](#F1){ref-type="fig"}). There are five landmark studies in the iodine field evaluating neuropsychological development in the offspring with neuropsychological tests: the Avon Longitudinal Study of Parents and Children (UK),[@R19] Iodine Supplementation During Pregnancy and Infant Neuropsychological Development (INMA, Spain),[@R56] Generation R (Netherlands),[@R55] MITCH (India and Thailand)[@R57] and Hynes *et al* (Australia).[@R20] Verbal cognitive function appears to be the subdomain most susceptible to ID. In SWIDDICH, verbal cognitive function and total IQ were chosen as the primary outcome measurements (the latter being the best understood and most often requested measure). As cognitive testing is less valid at younger ages, verbal IQ at 7 years was chosen as the primary evaluation time point, and all three domains of neuropsychological development will be evaluated at several follow-up times.
{#F1}
Implications for society and the individual {#s3f}
-------------------------------------------
Impaired child development increases the economic burden on society. Lowered IQ is associated with worse economic outcomes and lower lifetime earnings, and even small decrements in IQ around the mean are linked to lower incomes.[@R58] IQ may be the easiest factor to quantify, but it may not be the factor with the most serious consequences for a 'good life'. Environmental factors, including ID, that place the nervous system at risk may affect executive functions, such as planning and initiating ideas; result in attention problems, impulsive behaviour and an inability to handle stress and disappointment; impede success in school and in life; and possibly lead to antisocial behaviour.[@R60]
If the average IQ of a population drops, the IQ distribution shifts and the number of individuals with low IQ (eg, below 75 or 85, classified as intellectual disability) increases. In turn, this also decreases the number of gifted and exceptionally gifted people with high IQ (eg, above 130), who may have major positive impacts on the immediate future of a company or a country. A cost-benefit analysis of iodine supplementation in mild to moderate ID has recently shown a net benefit.[@R61]
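The population-level effect of such a shift can be illustrated with a short calculation, assuming (for illustration only) a normal IQ distribution with SD 15 and a hypothetical 3-point drop in the mean, matching the effect size discussed in this protocol:

```python
import math

def fraction_below(cutoff, mean, sd=15.0):
    # Share of a normal IQ distribution falling below a cutoff
    # (normal CDF written with the error function).
    z = (cutoff - mean) / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for mean in (100.0, 97.0):  # hypothetical 3-point shift of the mean
    below_85 = fraction_below(85, mean)
    above_130 = 1 - fraction_below(130, mean)
    print(f"mean {mean}: {below_85:.1%} below IQ 85, {above_130:.1%} above IQ 130")
```

Under these assumptions, the share below IQ 85 grows from about 15.9% to 21.2%, while the share above IQ 130 shrinks from about 2.3% to 1.4%: small mean shifts move large numbers of people across both tails.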
Based on the 1987 dollar value in the USA, the cost in terms of reduced income for a 1-point IQ reduction has been calculated at nearly US\$20.7 billion per year.[@R62] A 3-point decline in IQ also increases social costs in the USA,[@R60] raising the risk of poverty during the first 3 years by 20%, of low birth weight by 12%, of receiving welfare by 18% and of high school dropout by 28%. Even though a decline of a few IQ points may be small for the individual, the societal effects are considerable. As a small general risk reduction entails a large social benefit, iodine supplementation could be a cost-effective action if the main hypothesis of this study holds true.
Considerations on possible adverse effects of iodine or placebo {#s3g}
---------------------------------------------------------------
Iodine supplementation may increase the frequency of postpartum thyroiditis (PPT), as iodine affects autoimmunity[@R63]: PPT already occurs in 10%--15% of women, and this number may increase slightly with iodine supplementation. As PPT is not a dangerous condition and most cases resolve spontaneously, we consider that the reduced risk of subnormal brain development in the child justifies accepting the risk of PPT. In Denmark, PPT was evaluated in a placebo controlled trial in mild to moderate ID, and treatment did not increase or worsen PPT.[@R63]
Excess iodine intake in the mother may block thyroid function in the fetus, leading to hypothyroidism and goitre, and is associated with poorer mental and psychomotor development and behaviour problems in children.[@R22] However, the risk of adverse effects of iodine supplementation is higher when a sudden increase in iodine intake follows preconception ID; this should not be the case in Sweden, where the general population is iodine sufficient.[@R65]
The placebo group is at risk of ID during pregnancy. However, as there are no current recommendations for iodine supplementation during pregnancy in Sweden, this group follows normal management.
The intervention and the comparator are dietary supplements, and the total intake of nutrients depends on the diet. Information on adverse reactions is not collected.
Conclusion {#s4}
==========
The aim of this paper is to describe the study protocol for the SWIDDICH research project and the considerations that led to its design. The study attempts to further understand the consequences of mild ID during pregnancy and to test whether treatment of the mothers with iodine-containing multivitamins improves outcome in the children. As the study is the largest of its kind, it offers the potential for influencing future recommendations on iodine supplementation with multivitamins to pregnant women living in conditions of mild ID.
Supplementary Material
======================
###### Reviewer comments
###### Author\'s manuscript
The authors thank Elisabeth Gramatkovski, Michael Hoppe and Therese Karlsson for their invaluable help as coordinators of the study.
**Contributors:** SM, BJ, AC, JE, KG, CJT, RE, HM, LH, MD and HNF contributed to the design of the SWIDDICH study. HNF wrote the first version of the manuscript and SM was responsible for pushing the work forward together with the other coworkers. All coauthors critically reviewed and approved the final version of the manuscript. SM is the guarantor. The primary sponsor is HNF (principal investigator), Sahlgrenska Academy and University of Göteborg and Sahlgrenska University Hospital, Göteborg, Sweden. The main study site is in Göteborg with additional sites in Umeå and Linköping.
**Funding:** This work was supported by the ALF agreement (grant numbers ALFGBG 58777, ALFGBG 717311); Regional FOU (grant number VGFOUREG 664301); Lilla barnets fond (grant number 20160917); Svenska Läkarsällskapet (grant number SLS 688891); Lars Hiertas Minne Foundation (grant number FO2016 0016); Formas grant (grant number 2017 0095); and a grant from the General Maternity Hospital Foundation 2017. Multivitamins for the first 200 women were provided by Recip Medical, Solna, Sweden, but they are not involved in the study design and they do not contribute in any other way. The National Food Agency is a stakeholder in this trial. The authors of this manuscript solely contributed to the design, management, future analyses with the support of unbound statisticians, interpretation of data, writing the manuscript and decisions on where to submit. The maternal healthcare centres are reimbursed for the collection of patients by the principal investigator (HNF).
**Competing interests:** None declared.
**Patient consent:** Not required.
**Ethics approval:** Bioethics Committee of Gothenburg.
**Provenance and peer review:** Not commissioned; externally peer reviewed.
FRIDAY, May 14 (HealthDay News) -- If you're about to make a big financial purchase, keep your distance from the friendly and helpful saleswoman.
A series of experiments by researchers from Columbia University and University of Alberta found both men and women were more likely to take financial risks after being lightly touched on the back by a woman, new research shows.
The same contact with a man did not result in more risk-taking.
Researchers say being touched by a woman may remind participants of their mother's touch during infancy, making them feel more secure and confident in taking chances.
"Certain forms of contact are associated with memories and emotional experiences of being touched by your mom," explained study author Jonathan Levav, an associate professor of business at Columbia University in New York City. "We wanted to find out how that played out among adults. What we found for financial risk-taking is a touch by a man doesn't have much influence, but a woman's touch does."
The study was recently published online in the journal Psychological Science.
In the first experiment, participants were ushered into a room and either given a light, one-second touch on the back of the shoulder by a woman or simply asked to take a seat without any touching. Participants then had to answer 14 questions that involved decisions about money with varying levels of risk. For example, students were asked to choose between receiving $600 for sure, or flipping a coin and having a 50-50 chance of receiving $2,000.
Both male and female participants who'd been touched were significantly more likely to gamble on the bigger payout.
In a second experiment, college business students were either touched by a male or a female, shook hands with a male or female, or were not touched. Students were then asked to choose between investing $5 (which represented $500) in some combination of riskier bank stocks or in safer bonds that delivered a 4% return.
Those who'd been touched on the back by a woman put more of their money in stocks. Those who shook hands with a woman also showed a slight increase in their risk-taking. Neither shaking hands with nor being touched by a man had any effect.
In the third experiment, participants were asked to write an essay about a time they felt "secure and supported" or "insecure and alone." Recalling these events "primed" participants to feel a certain way.
Researchers then repeated the touch vs. not touching situations prior to making investment choices.
Those who wrote about feeling insecure and were not touched were especially conservative in their investment choices. Those who wrote an essay about feeling insecure but were then touched by a woman were more likely to take financial risks -- about the same as those who started off feeling secure, according to the study.
Many previous studies have demonstrated that maternal contact is key to the development of children, Levav said, and that's true not just for human babies, but for many species, even spiders.
One study found that baby spiders who'd spent more time with their mothers were more likely to explore the far reaches of a maze, Levav noted.
"Maternal physical contact serves to promote attachment with the infant, which promotes feelings of security, which gives the infant inner strength to explore new uncertain things," Levav said.
The study also suggests that decisions that appear to be driven by rational processes can be influenced by more subjective, emotional and subconscious factors, Levav said. In his experiments, most participants could not remember having been touched.
The experiments are "fascinating," said R. Chris Fraley, an associate professor of psychology at the University of Illinois at Urbana-Champaign. Though there are other possible explanations for why participants touched by a woman were more likely to take risks -- perhaps some perceived the contact as romantic interest -- this is unlikely because both men and women responded similarly.
Further studies might look at whether or not people's relationships with their mothers or fathers might have any impact on how strongly people reacted to the touch, he added. |
Q:
Using perl XML::LibXML to deal with XML so slowly
The XML file is like this:
<?xml version="1.0" encoding="UTF-8"?>
<resource-data xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="resource-data.xsd">
  <class name="AP">
    <attributes>
      <resourceId>00 11 B5 1B 6D 20</resourceId>
      <lastModifyTime>20130107091545</lastModifyTime>
      <dcTime>20130107093019</dcTime>
      <attribute name="NMS_ID" value="DNMS" />
      <attribute name="IP_ADDR" value="10.11.141.111" />
      <attribute name="LABEL_DEV" value="00 11 B5 1B 6D 20" />
    </attributes>
    <attributes>
      <resourceId>00 11 B5 1B 6D 21</resourceId>
      <lastModifyTime>20130107091546</lastModifyTime>
      <dcTime>20130107093019</dcTime>
      <attribute name="NMS_ID" value="DNMS" />
      <attribute name="IP_ADDR" value="10.11.141.112" />
      <attribute name="LABEL_DEV" value="00 11 B5 1B 6D 21" />
    </attributes>
  </class>
</resource-data>
And my code:
#!/usr/bin/perl
use Encode;
use XML::LibXML;
use Data::Dumper;
$parser = new XML::LibXML;
$struct = $parser->parse_file("d:/AP_201301073100_1.xml");
my $file_data = "d:\\ap.txt";
open IN, ">$file_data";
$rootel = $struct->getDocumentElement();
$elname = $rootel->getName();
@kids = $rootel->getElementsByTagName('attributes');
foreach $child (@kids) {
    @atts = $child->getElementsByTagName('attribute');
    foreach $at (@atts) {
        $va = $at->getAttribute('value');
        print IN encode("gbk", "$va\t");
    }
    print IN encode("gbk", "\n");
}
close(IN);
My question is: if the XML file is only 80 MB, the program is very fast, but when the XML file is much larger, the program becomes very slow. Can somebody help me speed this up, please?
A:
Using XML::Twig will allow you to process each <attributes> element as it is encountered during parsing, and then discard the XML data that is no longer needed.
This program seems to do what you need.
use strict;
use warnings;
use XML::Twig;
use Encode;
use constant XML_FILE => 'D:/AP_201301073100_1.xml';
use constant OUT_FILE => 'D:/ap.txt';
open my $outfh, '>:encoding(gbk)', OUT_FILE or die $!;
my $twig = XML::Twig->new(twig_handlers => { attributes => \&attributes });
$twig->parsefile(XML_FILE);

sub attributes {
    my ($twig, $atts) = @_;

    # Collect the value= attribute of every child <attribute> element
    my @values = map $_->att('value'), $atts->children('attribute');
    print $outfh join("\t", @values), "\n";

    # Discard the element just processed so memory use stays flat
    $twig->purge;
}
output
DNMS 10.11.141.111 00 11 B5 1B 6D 20
DNMS 10.11.141.112 00 11 B5 1B 6D 21
A:
Another possibility is to use XML::LibXML::Reader. It works similarly to SAX, but uses the same libxml library as XML::LibXML:
#!/usr/bin/perl
use warnings;
use strict;
use XML::LibXML::Reader;
my $reader = XML::LibXML::Reader->new(location => '1.xml');
open my $OUT, '>:encoding(gbk)', '1.out';
while ($reader->read) {
    attr($reader) if 'attributes' eq $reader->name
                 and XML_READER_TYPE_ELEMENT == $reader->nodeType;
}

sub attr {
    my $reader = shift;
    my @kids;

  ATTRIBUTE:
    while ($reader->read) {
        my $name = $reader->name;
        last ATTRIBUTE if 'attributes' eq $name;
        next ATTRIBUTE if XML_READER_TYPE_END_ELEMENT == $reader->nodeType;
        push @kids, $reader->getAttribute('value')
            if 'attribute' eq $name;
    }
    print {$OUT} join("\t", @kids), "\n";
}
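The streaming pattern both answers rely on -- handle each element as soon as it is fully parsed, then free it -- is not specific to Perl. As a point of comparison only (not part of either answer), here is a minimal Python sketch of the same idea using the standard library's `xml.etree.ElementTree.iterparse`, with a small sample of the XML inlined; the data and variable names are illustrative:

```python
import io
import xml.etree.ElementTree as ET

# Sample data in the same shape as the question's file.
XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<resource-data>
  <class name="AP">
    <attributes>
      <attribute name="NMS_ID" value="DNMS"/>
      <attribute name="IP_ADDR" value="10.11.141.111"/>
    </attributes>
    <attributes>
      <attribute name="NMS_ID" value="DNMS"/>
      <attribute name="IP_ADDR" value="10.11.141.112"/>
    </attributes>
  </class>
</resource-data>"""

rows = []
# iterparse yields each element when its end tag is seen, so we never
# hold the whole document in memory.
for event, elem in ET.iterparse(io.BytesIO(XML), events=("end",)):
    if elem.tag == "attributes":
        # Collect the value= attribute of every child <attribute>.
        rows.append("\t".join(a.get("value") for a in elem.iter("attribute")))
        elem.clear()  # discard the subtree we no longer need

print("\n".join(rows))
```

For a real multi-hundred-megabyte file you would pass the filename instead of a `BytesIO` buffer; the memory profile stays flat either way because each `<attributes>` subtree is cleared after use.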
|
Monday, December 31
In recent weeks players have talked about the strange circumstances that occurred -- a gale force wind in New York that blew a punt 20 extra yards, missed opponents' field goals in which the wind blew the kicks wildly to one side, passes that seemed to hang in the air an extra second until Redskins could catch them -- and said they believed [Sean] Taylor was at work for them.
The book is by Major Chuck Larson, who served for a year with the US Army in Iraq during Operation Iraqi Freedom and was awarded the Bronze Star.
My plan was to also wish happy new year to all those groups and individuals I knew about who have done more than just their job in supporting the US war efforts in Afghanistan and Iraq, and in other places around the world.
I was thinking about the kind of people who have given more than 100 per cent in their work; the kind who believe it's their responsibility to put all their heart into supporting the war effort.
But when I began drawing up the list I realized that it would need to be book length, to even begin to do justice to the many thousands who deserve credit.
So I will just wish a Happy New Year to all those who have worked overtime to support the war effort, and with a special note to those whose efforts never receive much mention -- the translators, cooks, camera crews, aides, analysts and countless other kinds of workers who have labored unsung.
And Happy New Year to my hometown football team! The Washington Redskins game yesterday was a highly symbolic and inspiring victory. Their entire astonishing comeback this season is a reminder that when your heart is fully engaged in a just cause, the winds have a way of blowing in your favor.
"The Washington Redskins were drenched with rain, sweat and even a few cathartic tears Sunday evening when they entered their locker room after a 27-6 demolition of the Dallas Cowboys.
Rock Cartwright screamed: “We won by 21! We won by 21!”
Several teammates joined in, the chants growing louder and louder, until the refrain echoed off the walls.
“No one had to explain anything,” defensive end Chris Wilson said. “We all knew what he was talking about.”
On a day when they capped an unlikely late-season surge by clinching the National Football Conference’s final playoff berth ... the Redskins’ margin of victory matched the jersey number worn by Sean Taylor, the promising safety who was shot to death last month ..."
Tuesday, December 25
Although I was born and baptized a Christian I left the Church a long time ago. However, for decades I continued to celebrate Christmas, and I learned in India that the spirit of Christmas transcends religion. Yet in recent years I'd become quite a Scrooge, feeling like a stranger in a strange land around Christmas time.
I thought sourly that even the segment on the dangers of asteroids and a discussion of the US presidential primary race didn't dampen John's celebration of hope for the future and affirmation of America as a beacon of liberty.
And I noted that even his discussion about Pakistan's politics with Najam Sethi, editor of Pakistan's Daily Times, ended on a hopeful note. Najam recounted that Pakistan's lawyers were joining with the country's feminist and other human rights activist groups to form the core of a revolution in the making, one that transcends tribal and religious affiliations.
Bah. Humbug. Pakistan would never change.
Then today came a knocking at my door. It was a Muslim friend bringing a Christmas present. After feeling the outline beneath the wrapping paper I figured it was a box of chocolates. When I opened the present I squealed with surprise and delight; it was a book I'd been wanting to read more than any other.
Later, knock knock. Some Jewish friends dropped by; they'd chipped in on a Christmas present. When I tore off the wrapping I found that the gift was a much-needed item.
Okay okay, I get it. Here is your present. Merry Christmas to one and all!
Very smart products, and a way to get smart product ideas around faster:
[...] Sometimes, necessity makes for the most innovative products. In Australia, water is in short supply in many parts of the country, the result of a devastating drought. These days, Australia has become a country of water misers, with water-saving products to match. Gardeners who wish to water their plants can buy a new invention called the Water-Leech, which reuses so-called “grey water,” that is, water run-off from the bath or shower. A hose attachment hooks up to the shower or sink drain, then a pump in the relatively compact unit draws the water into a self-contained storage tank. Once the tank is full, it’s easy to wheel the unit outside to water your lawn or garden. The company says the unit can help the average household conserve 35,000 liters annually.
Another great water-saving product coming out of Australia is the instant, compact, continuous-flow hot water heater, which is slowly gaining in popularity in [America]. Rather than heating a large amount of water the way most hot water heaters in the U.S. do -- and keeping it heated -- this product instantly heats the water on demand and in a much more efficient way. Once it is activated, it delivers a constant supply of hot water. Since heating water accounts for over 20 percent of home energy use in this country, this energy-saving product could save homeowners money while also being more environmentally friendly. Also, the hardware on these systems usually lasts twice as long as that on regular tank systems. Some experts estimate that the average family can save between 30 and 50 percent of the cost of heating water each year with tankless systems.
Of course, cars are one of the biggest energy guzzlers. And driving electric cars is a great way to save on fuel. Norway has figured out a way to let consumers buy these autos at affordable prices by allowing them to lease the battery on the car. In the next few months, the two-seat, electric-powered Think City car will become available in Norway for somewhere between $15,000 to $17,000. Since the battery is the costliest part of the car (about $34,000), the company plans to allow consumers to lease it for about $100 to $200 a month, which will include other services such as insurance and mobile internet access. The web service will allow the company to remotely monitor the battery’s life, and contact owners when the battery needs to be replaced. [...]
Despite the globalization of commerce, there are still many innovative products that aren’t available in this country. That’s where web sites like Springwise come in. The company scans the globe for the most promising ideas and concepts ready for regional or international adaptation or expansion. Springwise has more than 8,000 “Springspotters” in over 70 countries who search the globe for new concepts.
“We regularly hear from readers who are interested in bringing a product or service to their own country, after reading about it on Springwise,” says Liesbeth den Toom, senior editor for the web site.
“Springwise is very much a child of globalization. By gathering ideas from a wide variety of countries and presenting them to readers based in over 120 countries, we believe we are helping to spread smart ideas faster.”
Tuesday, December 18
September 19: “I’m not going to comment on the matter,” Mr. Bush repeated twice when asked about the [September 6] strike at a news conference at the White House. When pressed, he added, “Saying I’m not going to comment on the matter means I’m not going to comment on the matter.”
"Mr. Bush’s remarks -- a relatively rare instance of a president flatly declining to comment -- also reflected the extraordinary secrecy here in Washington surrounding the raid. Most details of what was struck, where, and how remain shrouded in official silence. "(1)
October 3: "One month after the [September 6 bombing] the absence of hard information leads inexorably to the conclusion that the implications must have been enormous. That was confirmed to The Spectator by a very senior British ministerial source:
‘If people had known how close we came to world war three that day there’d have been mass panic. Never mind the floods or foot-and-mouth -- Gordon [Brown] really would have been dealing with the bloody Book of Revelation and Armageddon.’ " (2)
October 17: "If Iran had a nuclear weapon, it'd be a dangerous threat to world peace," Bush said. "So I told people that if you're interested in avoiding World War III, it seems like you ought to be interested" in ensuring Iran not gain the capacity to develop such weapons.
"I take the threat of Iran with a nuclear weapon very seriously," he said.(3)
I have a hard time envisioning how Armageddon could have resulted from a few popguns immediately fired back at Israel, which pretty much sums up Syria's arsenal.
It is public record that for years Syria was running centrifuges for Iran and that Syria is a 'client' state of Iran. Yet I don't see why hitting at a centrifuge-spinning site in Syria would immediately touch off Armageddon. But then the British official might not have been referring to a retaliation from Syria.
December 6: "Most critically, [the NIE] does not speak to the fact that the gravest injury that United States has inflicted upon the Tehran regime since the opening of the war in 2001 was the mysterious air to ground attack on September 6, 2007 against a target in eastern Syria." -- John Batchelor(4)
Batchelor made that remark three days after the latest National Intelligence Estimate on Iran's nuclear threat was released. Up until that time, he had steadfastly refused to speculate about Israel's September 6 bombing raid on Syria.
Yet, if one wants to interpret his remarks literally, December 6 found John Batchelor implying that the Syrian sites were connected with Iran. He might have simply been referring to Syria's client status with Iran. But he just plopped out with that statement and gave no explanation or supporting data for it.
Those who follow Batchelor's news show know that he is well connected with senior US and Israeli defense officials, and also that he is a stickler for only dealing in open source information. So while his statement about the Syria bombing raid is surprising, it is not a surprise that he wouldn't provide supporting data if he had such and if he was making a literal reference.
That leaves the public with the decision to outright reject his statements, interpret them in non-literal fashion, take a wait-and-see attitude, or try to play Sherlock Holmes with open-source data.
Given the gravity of the issues at hand, and that Pundita is not a Wait and See sort, I opted for Plan D. That means I've spent more hours than I care to recount prowling around at search engines, and trying to fit together a picture from slivers of data, anecdotal accounts, and speculations by experts in various fields.
One thing that jumped out at me after Batchelor's December 6 statement is that the NIE -- at least, what's been published of it -- avoids the question of whether Iran has nuclear weapons. The NIE is only interested in whether Iran has a program of developing nuclear weapons, i.e., a program to develop indigenous or 'home-made' bombs. Just to make this distinction clear, here is exactly what the NIE says:
“For the purposes of this Estimate, by ‘nuclear weapons program’ we mean Iran’s nuclear weapon design and weaponization work and covert uranium conversion-related and uranium enrichment-related work ..."(5)
That is why I was interested in General Baluyevsky's 2002 statement that Iran has nuclear bombs. The wording of his 2006 statement is also interesting because while he seems to deny the earlier statement, his wording only pertains to "intensification."
"[The Russian military] said from the very beginning we had no data to speak about intensification of efforts in Iran in the field of nuclear armaments."(6)
If Iran already has nuclear bombs, I can see how the general might consider it imprecise to assert that Iran was intensifying efforts to acquire a nuclear weapon.
But to return to the "bloody Book of Revelation and Armageddon," initially I set aside the report that quoted the British official's remark because it contained what I considered a red herring:
According to American sources, Israeli intelligence tracked a North Korean vessel carrying a cargo of nuclear material labelled ‘cement’ as it travelled halfway across the world. On 3 September the ship docked at the Syrian port of Tartous and the Israelis continued following the cargo as it was transported to the small town of Dayr as Zawr, near the Turkish border in north-eastern Syria.
The destination was not a complete surprise. It had already been the subject of intense surveillance by an Israeli Ofek spy satellite, and within hours a band of elite Israeli commandos had secretly crossed into Syria and headed for the town. Soil samples and other material they collected there were returned to Israel. Sure enough, they indicated that the cargo was nuclear.
Three days after the North Korean consignment arrived, the final phase of Operation Orchard was launched. With prior approval from Washington, Israeli F151 jets were scrambled and, minutes later, the installation and its newly arrived contents were destroyed.(2)
All very interesting, but there were strong indications the raid had been postponed from a much earlier date:
... another report indicated that Israel planned to attack the site as early as July 14, but some US officials, including Secretary of State Condoleezza Rice, preferred a public condemnation of Syria, thereby delaying the military strike until Israel feared the information would leak to the press.(7)
If Israel had been planning to attack in July, I doubt the North Korean vessel had been hanging around at that time.
This doesn't mean North Korea had not been making nuclear-related deliveries to the site in question for many months or even years or that the deliveries weren't part of Israel's concerns about the Syria site. But the reason for Israel striking the site much earlier than September 6 would still be open to question.
The October 3 report I quoted above, and which was published in The Spectator, was headlined, "We came so close to World War Three that day."
The headline made a splash at the time, and was known to anyone closely following the Syria bombing mystery and Iran's nuclear weapons program.
Less than a month after publication of the British official's statement, President Bush also plopped out with a reference to World War Three. It was a very odd statement -- inflammatory, even for Bush -- and caused much comment at the time.
Published commentaries assumed Bush was warning of a future threat. After reading John Batchelor's December 6 statement, it struck me that Bush might have been referring to the past. And signifying, for those who cared to dig, to look in the direction of the October 3 Spectator report if they wanted clues about the reason for the September 6 bombing raid.
Yet even if Bush and that unnamed British official were signifying that Israel struck something in Syria that actually belonged to Iran, that would still leave to speculation how the September 6 strike could have touched off a nuclear conflagration. What would Iran retaliate with? Their own popguns? Unless, they had nuclear weapons.
(It is a matter of record that Iran already has missiles capable of delivering a nuclear bomb and that the missile system can reach Israel.)
But US intelligence agencies don't seem to have invested resources in discovering whether Iran has imported nuclear warheads or key ingredients for assembling a bomb -- or at least, they've not announced such investigations. Ditto for European and Israeli intelligence agencies.
So I don't have much to chew on, aside from a moldering report about what a Russian general said in 2002 and a 2006 Russian news report that Ukraine had sold 250 nukes to Iran.(6) There is, however, the speculation of an Israeli nuke expert:
JERUSALEM (AP) November 22 - A Syrian site bombed by Israel in September was probably a plant for assembling a nuclear bomb, an Israeli nuclear expert said Thursday, challenging other analysts' conclusions that it housed a North Korean- style nuclear reactor.
Tel Aviv University chemistry professor Uzi Even, who worked in the past at Israel's Dimona nuclear reactor, said satellite pictures of the site taken before the Israeli strike on Sept. 6 showed no sign of the cooling towers and chimneys characteristic of reactors.
Even said the absence of telltale features of a reactor convinced him the building must have housed something else. And a rush by the Syrians after the attack to bury the site under tons of soil suggests the facility was a plutonium processing plant and they were trying to smother lethal doses of radiation leaking out. [...]
Last month, American analyst David Albright, president of the Institute for Science and International Security, said commercial satellite images taken before and after the Israeli raid supported suspicions that the target was indeed a reactor and that the site was given a hasty cleanup by the Syrians to remove incriminating evidence.
Albright saw a clue in the fact that the structure was roofed at an early stage in its construction.
Other analysts have said the satellite images are too grainy to make any conclusive judgment.
But in an interview Thursday with the Haaretz newspaper -- which first reported his assessment -- Even compared pictures of a North Korean reactor at Yongbyon, in which a cooling tower with steam rising from it can clearly be seen, with the Syrian images, where no such structure appears.
Even told The Associated Press that another piece of evidence against the reactor theory was that satellite pictures of the Syrian installation taken since 2003 showed no sign of a plutonium separation facility, which prepares fuel for a nuclear reactor -- typically a large structure with visible ventilation openings.
"It's very difficult to hide a separation plant," he said. "It's more difficult to hide a separation plant than to hide a nuclear reactor," Even added.
"In Yongbyon, the supposed sister facility in North Korea, you can see all those signs that I am pointing out that are missing in the Syrian place," Even said. "You can see the chimneys, you can see the ventilation, you can see the cooling towers, you can see the separation plant. All that is missing from this building in Syria."
Even said he believes the Syrian cleanup, in which large quantities of soil were bulldozed over the site, was an attempt to smother lethal radiation from a plutonium processing plant.
"I have no information, only an assessment, but I suspect that it was a plant for processing plutonium, namely a factory for assembling the bomb," he told Haaretz.[...] (8)
If Israel took out one or more factories for assembling Iran's imported nuclear bombs, that would explain why Iran might be tempted to haul out a completed nuke and hurl it at Israel.(9) That could touch off a nuclear exchange.
But if we continue to play around with Professor Even's scenario, we quickly bump into the question of why the NIE strove so hard to tamp down concerns about Iran being an imminent nuclear threat. Were the analysts who worked on the NIE so out of the loop at the DoD that they didn't know the real reason for Israel's strike against Syria?
Or would they get together after reviewing the reason for the strike and say, 'We must kick the can down the road so as to prevent Armageddon any day now.'
If Even is on the mark and if Iran was indeed using Syria as the place to assemble nukes, then somehow I don't think that mere words would kick the can down the road very far.
Of course all this is speculation. Questions. Still many unanswered questions.
9) There is disagreement as to which site in Syria was bombed, leading to speculation that Israel bombed more than one radioactive site and that Syria covered up more than one site with tons of soil. See John Loftus's investigations:
"Senior sources in the Israeli government have privately confirmed to me that the recent New York Times articles and satellite photographs about the Israeli raid on an alleged Syrian nuclear target in Al Tabitha, Syria were of the completely wrong location. Armed with this knowledge, I searched Google Earth satellite photos for the rest of the province of Deir al Zour for a site that would match the unofficial Israeli descriptions: camouflaged black factory building, next to a military ammunition dump, between an airport and an orchard.
"Photos of this complex taken after the Israel raid appear to show that all of the buildings, earthern blast berms, bunkers, roads, even the acres of blackened topsoil, have all been dug up and removed. All that remains are what appear to be smoothed over bomb craters. [...]"
Pundita summarizes the serpentine shifts of the Russian MoD under Putin on Iran’s nuclear weapons program. I haven’t commented on the NIE much because the nine-page declassified key assessments document represents less than 10% of the NIE itself. Years of watching historians arrive at starkly different interpretations of identical primary sources make me chary of accepting or rejecting reasoning I cannot cross-check myself.
I like the evocative "serpentine shifts" and I couldn't agree more with Zenpundit's rational reluctance to analyze the NIE. The problem is that within hours of the NIE publication, rationality had fallen by the wayside. Political factions, pundits, journalists, and droves of unnamed officials and their unnamed sources leaped to interpret the reasons for the NIE's conclusions and publication.
Within 15 days of the NIE's publication, the situation has taken on overtones of the infamous Orleans Rumor incident. The Orleans incident saw thousands of otherwise sane French citizens convinced on no evidence whatsoever that a gang was kidnapping Orleans girls and selling them into slavery.
DEBKAfile helped stoke fears about the NIE's implications by listing what they termed "repercussions" of Washington's "about face" on Iran's nuclear weapons threat:
DEBKAfile's sources disclose that Iran’s extremist president Mahmoud Ahmadinejad began purging the Iranian leadership of his opponents, emboldened by what he perceived as the victory of the intransigent nuclear policy he and the Revolutionary Guards had pursued.
Still in crowing mode, Iran’s oil minister Gholam Hossein Nozari announced Saturday, Dec. 8, the cessation of oil transactions in US dollars. He labeled the greenbacks an “unreliable” currency.
Less than 24 hours after the NIE was released, the Kremlin announced resumption of Russian work to finish Iran’s nuclear reactor at Bushehr and the consignment of nuclear fuel.
In Lebanon, the Hizballah opened the door for the election of chief of staff Gen. Michel Suleiman as president. To buy a stable Beirut government, Washington accepted a pro-Syrian Hizballah sympathizer as president.
The prospects of tough UN sanctions against Iran’s continued enrichment of uranium dimmed dramatically. The Russian foreign minister Sergey Lavrov said there is no point in the light of the US intelligence reassessment. Saturday, the Iranian ambassador in Tokyo invited Japanese investors to put their money in Iranian oil production which he said could be expanded by 30 percent. Tehran has clearly lost its fear of international economic sanctions.
It is irrational, and just plain bad reporting, to identify a cause-and-effect relationship between the NIE and such situations. It is a matter of public record that:
> Ahmadinejad had begun the purge months prior to the NIE publication.
> Iran's oil ministry had been preparing for more than a year to abandon the petrodollar.
> Washington had been backing away from Fouad Siniora's government months prior to the NIE publication.
> Russia had started and stopped their work on Bushehr several times over a period of years over a matter of money, and
> Tougher sanctions against Iran would be a tough sell even without the NIE judgment that Iran had abandoned their nuclear weapons program.
The DEBKAfile post linked above also stoked the rumor that the NIE represented a clandestine bargain between Iran and the United States that was brokered by Saudi Arabia. Not to be outdone:
Saudi journalist Jihad El-Khazen gave his version of the course of events in the Arab newspaper Al-Hayat:
"Here is what happened: The rate of violent acts dropped in Iraq; therefore the American intelligence services discovered that Iran had halted its military nuclear program in 2003. This means that the resumption of violence will make American intelligence services find out that there is a secret military program that is different from the peaceful and famous one."
The Saudi reporter went on to ask: "Is there a deal between the Bush administration and Iran? I cannot categorically assert that a deal was concluded between the two parties through direct negotiations; however, there is an understanding resulting in the 2007 national intelligence report.”(1)
There are close parallels between the underlying reasons for the Orleans incident and rumors about the NIE's implications:
Both situations arose from fears of war (in the Orleans case, fear of war with the Soviet Union), uncertainty about a pivotal upcoming national election, and uncertainty fueled by rapid changes in society.
The worst that might have occurred as a consequence of the Orleans Rumor, which began as a prank by a group of schoolgirls, is that some shop owners (who were rumored to be aiding the rumored kidnappers) would have been hurt and their shops wrecked.
We've not seen the worst that might occur from rumors about the NIE's publication, but the threat of another war is worrisome enough. On Saturday the Associated Press reported:
... a senior Israeli Cabinet minister who once headed Israel's internal security agency issued the country's harshest criticism yet of the U.S. intelligence report, calling it a "misconception" that threatened to lead to a surprise regional war.
Public Security Minister Avi Dichter compared the possibility of such fighting to a surprise attack on Israel in 1973 by its Arab neighbors, which came to be known in Israel for the Yom Kippur Jewish holy day on which it began.
"The American misconception concerning Iran's nuclear weapons is liable to lead to a regional Yom Kippur where Israel will be among the countries that are threatened," Dichter said in a speech in a suburb south of Tel Aviv, according to his spokesman, Mati Gil. "Something went wrong in the American blueprint for analyzing the severity of the Iranian nuclear threat."
Once the election resolved uncertainty about France's direction, rational observers in Orleans finally made a dent in the Orleans Rumor, which then blew over as quickly as it had begun.
The NIE has played into uncertainties the world over about how US policy in the Middle East will change with a new US administration. Yet we are still many months from the US presidential election. Until then many national governments, including Israel's, will be trying to cover all bets and jockeying for the best position whatever the outcome of the US election.
For readers who saw the 'early edition:' I published the post a few minutes before midnight. On Sunday, starting around 2:00 PM ET, I added two footnotes and two updates (at 3:00 and 7:00 PM ET), and made the following change to a paragraph:
"...to my knowledge World Tribune and Iran Press Service were the only news sources to pick up on Baluyevsky's statement."
Also: the links to three reports, which were provided to me by the World Tribune publisher (see 7:00 PM update), do not pertain to Baluyevsky's 2002 announcement that Iran had nuclear weapons. And the news they contain would be old to those who have spent years following reports about Iran's nuclear weapons program. But the reports are good background so I included the links to them in the update.
There will be another post today but I'm not sure at what time; I'm aiming for 2:00 PM ET.
Sunday, December 16
JERUSALEM - December 16 (Associated Press) Israel has dispatched an unscheduled delegation of intelligence officials to the U.S. to try to convince it that Iran is still trying to develop nuclear weapons — contrary to the findings of a recent U.S. intelligence report [NIE], security officials said.
[...] The U.S. and Israel will also hold additional joint formal meetings on the matter in coming weeks, the Israeli officials said recently. Israel will use these forums to try to persuade the Americans that Iran is trying to develop nuclear weapons, and intends to present information classified as top secret for security reasons, the officials said.
Saturday, December 15
"Iran does have nuclear weapons. Of course, these are non-strategic nuclear weapons. I mean these are not ICBMs with a range of more than 5,500 kilometers and more. But as a military man, I see no danger of aggression against Russia by Iran. As for the danger of Iran's attack on the United States, the danger is zero."-- General Yuri Baluyevsky, May 2002
"[The Russian military] said from the very beginning we had no data to speak about intensification of efforts in Iran in the field of nuclear armaments."-- General Yuri Baluyevsky, December 2007
General Yuri Baluyevsky is Russia's First Deputy Minister of Defense and, since July 2004, Chief of the General Staff of the Armed Forces of the Russian Federation.
The general did not say outright in the 2007 quote above that Iran had no nuclear weapons, but his other words at the time imply as much:
As he commented on [the December US National Intelligence Estimate report] that Iran had suspended its nuclear armaments program back in 2003, Gen Baluyevsky recalled that Russia has always met with much caution the claims that Iran was working on military applications of its nuclear program.
General Baluyevsky's comment about Iran having nuclear weapons came at a press conference during the United States-Russian Federation Moscow Treaty summit, May 23-26, 2002.
As to why his comment didn't make headlines around the globe, I imagine the World Tribune also wondered about that; to my knowledge World Tribune and Iran Press Service were the only news sources to pick up on Baluyevsky's statement.(1) The IPS report mentions that the World Tribune:
... observed that journalists at the briefing completely missed the importance of general Baluyevsky's assertion. "The Russian deputy chief of staff has just said on the record that Iran has nuclear weapons", highlighted World Tribune.
But the spring of 2002 was a different era; news services, and publics around the world, had a great many other things on their mind at that time. Iran's nuclear ambitions were near the bottom of everyone's list, which was still topped by al Qaeda and other terrorist organizations and the US presence in Afghanistan.
What would explain General Baluyevsky's flip-flop between the spring of 2002 and the end of 2007? Again, 2002 was a different era. The US and Russia were on much better terms in 2002 than today; the general's announcement about Iran's nuclear weapons came during a summit when the US and Russia were working out greater cooperation on nuclear proliferation issues.
Also, it was to be several months after the press conference before it was clear that the US intended to invade Iraq -- something that Russia was very much against. Even the famous Downing Street meeting, during which British officials first discussed the likelihood of the US invading Iraq, did not come until July 2002.(2)
By 2007 General Baluyevsky was seriously bent out of shape about US attempts to place components of a missile shield in Europe.
MOSCOW, November 13 (RIA Novosti) [...] "If the Americans deploy the radar by 2011 and anti-ballistic missiles by 2012-2013, they will certainly be directed against Russia, and we can easily prove it," the Chief of the Russian General Staff, Gen. Yury Baluyevsky said in an interview with Russia Today, an English-language state TV channel. [...]
He also reiterated that the alleged Iranian missile threat was used by the U.S. as a simple pretext to deploy weaponry close to Russia's borders, as Iran does not possess the technology to develop and produce long-range inter-continental ballistic missiles.
Iran promptly repaid Baluyevsky for his generosity by test launching the Ashoura missile, which certainly looks by the map at Missile Monitor as if it could shave easternmost Europe, and certainly cream Turkey.
So then General Baluyevsky had to do some fancy footwork, which he managed rather well:
Russia has no data to confirm reports by Iranian leaders that Teheran has tested a new long-range ballistic missile Ashura [Ashoura], General Yuri Baluyevsky, the Chief of General Staff of the Russian Armed Forces said Thursday.
An official statement on testing the Ashura missile, the effective range of which ostensibly reaches 2,000 kilometers, was made November 27 by Iranian Defense Minister Mostafa Mohammad Najjar.
Gen Baluyevsky quoted his U.S. counterparts as saying the test launch of the missile took place November 20. He said however that officials in the Pentagon, the Department of State and White House’s National Security Council, whom he had talks with, refused to provide any more information on the incident when he asked them about it.
“When I asked them to share technical surveillance data on it, they refused to do it,” he said.
Gen Baluyevsky reiterated that Russian missile experts carefully verify all the information pertaining to development of the Ashura missile by Iran.
“Nonetheless, I can’t tell you for sure right now that the launch took place, indeed, and that the missile covered 2,000 kilometers,” he said.
“I don’t rule out that in this case we see political bluffing on the Iranian side – something that happened in a number of cases before,” Gen Baluyevsky said.
So, General, was Iran bluffing back in 2002 when you announced they had nuclear weapons?
I add that Iran denied what Baluyevsky said in 2002:
Iranian Foreign Ministry spokesman Hamid Reza Asifi rejects the remarks of General Yuri Baluyevsky, Russian deputy chief of staff, that Iran has the equipment to produce nuclear weapons. He adds that the Russian official was not aware of Iran's peaceful nuclear program.— From "Iran Rejects Reports on Nuke Weapons, reiterates IAEA Pledge," IRNA (Tehran), 10 June 2002(3)
I am not sure which of Baluyevsky's statements Asifi was commenting on; there may have been several statements at that Moscow Treaty conference or at another venue:
General Yuri Baluyevsky, Russian deputy chief of staff, says that Iran has received tactical nuclear weapons from a country other than Russia. [See NTI March 1993 post, below.]—From "Iran, Russia Again Argue Over Nukes," Middle East Newsline, Vol. 4, No. 207, 24 May 2002 (4)
Without the transcript of the press conference mentioned by World Tribune, I can't nail down where the general made the above statement.
In any case, reports of Iran acquiring ready-made nukes are an old story:
March 1993: The Arms Control Reporter reports that by December 1991, Iran had imported four nuclear weapons from the former Soviet Union, including a nuclear artillery shell, two nuclear warheads that could be launched on Scud missiles, and one nuclear weapon that could be delivered by a MiG-27 aircraft. [Note: See 24 May 2002 NTI entry.]
The report says that fissile material was exported from Kazakhstan to Iran and the rest of the components were exported from other republics of the former Soviet Union through Turkmenistan. Although the codes to arm the warheads were not provided with the missiles, the report says two experts from Russia arrived to bypass arming codes. [...] (2)
As to why I'm mucking around in old reports about Iran's nuclear ambitions: I got tired of scratching my head over John Batchelor's statement "Iran has nuclear weapons" on his radio show last Sunday.
John has told his audience that he only deals in open source information. "Open source," to my understanding, is declassified data that is published somewhere, even if only in an obscure journal.
I don't have the resources to track down obscure journals on nuclear proliferation but I thought I'd give Google a whirl. The word string "Iran has nuclear bomb" immediately brought up the 2002 mention of Baluyevsky's statement.
That's not necessarily the report John was referencing -- and I noted that the Google page showed more than 200,000 references for the words I entered. Yet given that Russia's government was in a sharing mood during the Moscow Treaty summit, the 2002 statement by Russia's top military commander is interesting. I didn't start listening to Batchelor's show until March 2003 so it's possible he mentioned Baluyevsky's 2002 announcement at the time.
Iran Press Service, which reported on the World Tribune article in 2002, tartly observed that Baluyevsky's happy estimate of "zero" for the chance that an Iranian nuke could reach the US ignored that the Shahab-3 medium-range missile, which Iran was testing in 2002, was a threat to Middle Eastern nations. Yes, judging from the map at Missile Monitor, the Shahab-3 could hit any nation in the Middle East.
Next question: If Iran indeed has ready-made nukes, where would they store them? Maybe somewhere that the IAEA wouldn't think to look, if they were inspecting for an Iranian nuclear program? And would they store them in one piece, or do as Pakistan does and tuck nuke bomb components in different locations?
To be continued.
1) It is unclear from the June 6, 2002 Iran Press Service report I've linked to whether IPS picked up on Baluyevsky's statement from their own source, or from the World Tribune article that the IPS reporter quotes.
Complicating matters is that IPS did not publish the link to the World Tribune article they referenced. My attempts to locate the World Tribune article via their archives have failed, which might suggest that World Tribune was not the original source for a report on Baluyevsky's statements at the press briefing. I assumed at the time of publishing this post that I could easily locate the World Tribune link via Google, but my attempts came up dry. (Dec 16 update: I have written the World Tribune editor asking for help in locating the article and link.)
2) There was talk during the run-up to the May 2002 summit that Russia would be willing to help the US overthrow Saddam Hussein's regime provided the US supported Russia's financial interests in Iraq. But any such overtures on Russia's part came at a time when it seemed very unlikely that the US could obtain UN approval for an invasion.
4) NTI Iran Chronology 1993

* * * * * * * * * * *

December 16, 3:00 PM ET Update

I have added a link to my mention of the Ashoura, which underscores the relevance and gravity of Baluyevsky's 2002 statement that Iran has nuclear weapons. I might have waited until hearing from the World Tribune editor. But after reading their December 14 report on the Ashoura I set aside my obsessive insistence on providing links or at least titles for source documents before I published. The Ashoura link takes you to the World Tribune report.

7:00 PM ET Update

"Pundita: Thank you for writing WorldTribune.com. We are familiar with the information but have had trouble with our online archives.
The following may be relevant including the piece in our password-protected Geostrategy-Direct.com newsletter.
Robert Morton, Publisher
WorldTribune.com
Geostrategy-Direct.com
East-Asia-Intel.com
East West Services, Inc.
MOSCOW — Ukraine might have sold nuclear warheads to Iran, the Russian newspaper Novaya Gazeta reported April 3. Approximately 250 nuclear warheads that had been in the Soviet arsenal were never returned to Moscow.
"Russia's General Staff has no information about whether Ukraine has given 250 nuclear warheads to Iran or not," said Russian Chief of Staff Gen. Yuri Baluyevsky, who is also deputy defense minister. "I do not comment on unsubstantiated reports." [...]"
Friday, December 14
"I wrote the book to find out why poverty has been reduced by half across the world during the past 25 years and why this phenomenon has happened almost everywhere but Latin America. That’s why I went to China, to India, to Ireland, to the Czech Republic, to Poland, among other places. One of the main things I discovered is that [economic development] doesn’t have anything to do with ideology.
"The real difference between countries today is not how [Hugo] Chávez would like us to believe that there are “Right” countries and “Left” countries, but rather between countries that are drawing investments and countries that are scaring investments away. And the country that is attracting the most investment in the developing world is a communist country, China. That drove [Chavistas] crazy. That’s why Chávez spent one of his speeches lashing out against me."-- Andres Oppenheimer, Latin America expert and author of Saving the Americas: The Dangerous Decline of Latin America and What the U.S. Must Do
I have one quibble with Andres Oppenheimer's brilliant analysis, which I feature in full later in this post.
I think the Leftist tilt in certain Latin American countries does not only reflect an anachronistic view of government; it also reflects a backlash against the elites in those countries who abused neoliberal economic policies, which include an acceptance of virtually unlimited foreign investment.
Yes, it's throwing the baby out with the bath water to throw out foreign investment, but some Latin American governments were falling because they couldn't translate the benefits of foreign investment into perceivable gains for the majority.
The excuse was that it would take time for the benefits to trickle down. But when a powerful elite backed by de facto military rule controls the government, and blocks programs that would allow the majority to see some benefit from foreign investment, a Leftist backlash is inevitable.
Oppenheimer mentions Poland, but Poland and other developing countries set to join the European Union got tremendous help from the EU while they struggled to apply various aspects of neoliberal policies. Latin American countries such as Venezuela did not receive such help.
US policy toward the poorest Latin American countries must recognize that such countries are not Britain or the United States, where the introduction of Thatcher and Reagan economics, which are reflected in neoliberalism, was softened by a large middle class and strong liberal democracy.
In 2005 a reader sent me a report about a group that was making inroads at converting the 'natives' to Islam in a particular Latin American country. The name of the country escapes me at this moment but the point is that the report noted that the natives in that country were so downtrodden that they could not even step on the same sidewalk used by descendants of the Spanish conquerors.
When citizens are greatly malnourished and see no way possible to ever move up in their society, don't expect them to say, 'Okay, we'll just tighten our belts through this rough patch while foreign investment trickles down to us.'
And that is a big difference between China and the poorest Latin American countries, a difference which Oppenheimer's analysis ignores. Yes, Beijing encourages foreign investment, but they are also very careful to encourage upward mobility for the impoverished masses, and to hurl government resources at their worst-hit economic regions.
All that said, Oppenheimer is on target when he argues that the Left-Right framing of politics ignores the realities of this era. Today, it comes down to how well a government does governing. That includes getting up the gumption to read the riot act to the elite -- a point I pounded home in my 2005 rant Why Vicente Fox is going straight to hell.
If they put you in power, that doesn't excuse them being so greedy they risk touching off a Leftist revolution. A leader has to stand up to an elite that's gone that far around the bend, even if he's scared they'll bump him off. He has to capitalize on the fact that he has a majority at his back, and sell that point to the military.
Part of standing up is figuring ways to temper the long agonizing wait for the benefits of foreign investment to trickle down. This is not rocket science, for crying out loud.
Andres Oppenheimer: Conflict fatigue. About 40 percent of the Venezuelan population was opposing Chávez to begin with, and many of the others who supported Chávez were tired of his habit of picking fights—daily—with anybody who came across him. If it wasn’t the Catholic Church, it was the businesspeople; if it wasn’t the businesspeople, it was the students; if it wasn’t the students, it was the United States; if it wasn’t the United States, it was the king of Spain; if it wasn’t the king of Spain, it was the president of Colombia. And the Chávez supporters just got fed up with this polarization.
FP: How much of a factor was his failure to make good on his promises to cut poverty?
AO: There’s no question that many Venezuelans thought it a bit of a contradiction for Chávez to be talking about creating a socialist state when there were shortages of basic foodstuffs such as milk in Venezuelan stores. And there was also a lot of resentment among Chávez supporters for him to be spending billions of dollars helping what he calls “alternative Bolivarian movements” throughout Latin America. A lot of people sent him a message saying, “Why don’t you focus on your own country?”
FP: In your book, Saving the Americas: The Dangerous Decline of Latin America . . . and What the U.S. Must Do, you describe how and why Latin America, including Venezuela, has been so unsuccessful at fighting poverty. What was Chávez’s response to your argument?
AO: I wrote the book to find out why poverty has been reduced by half across the world during the past 25 years and why this phenomenon has happened almost everywhere but Latin America. That’s why I went to China, to India, to Ireland, to the Czech Republic, to Poland, among other places. One of the main things I discovered is that [economic development] doesn’t have anything to do with ideology.
The real difference between countries today is not how Chávez would like us to believe that there are “Right” countries and “Left” countries, but rather between countries that are drawing investments and countries that are scaring investments away. And the country that is attracting the most investment in the developing world is a communist country, China. That drove [Chavistas] crazy. That’s why Chávez spent one of his speeches lashing out against me.
In Beijing, they are putting out a red carpet for foreign investors, whereas in Latin America, many presidents are going out to the balcony and yelling against foreign investors. [In my book], I tell the story of when I arrived in China, and the first thing I read in the [local] paper was that the entire Chinese government was celebrating the arrival of the board of directors of McDonald’s, who were there to announce the opening of 400 restaurants in China. I had just come from Venezuela, where the Chávez government had just suspended McDonald’s restaurants for three days for some phony tax investigation and the government was taking pride in “teaching foreign capitalists a lesson.”
FP: Do you think this rejection at the polls will harm his reputation and popularity in the region?
AO: Chávez’s reputation in the region has never been very high. In the region when he’s polled, he scores at the very bottom of the list, alongside President Bush, and only second-to-last before Fidel Castro. He has strong support among very vocal, radical, leftist support groups, but his base is not widespread. I think it will embolden opposition forces in Nicaragua, Ecuador, and Bolivia who will now feel that there’s nothing irreversible about radical leftist leaders who win democratic elections and try to erode democracy from within.
Chávez is down, but not out by any means. He still controls the presidency, Congress, the military, 20 of 22 governorships, and much of the media. If this is a boxing match, he lost the round but by no means did he lose the match itself.
FP: You’ve spent a good deal of time comparing Latin America to the rest of the world. One easy comparison I see is between Chávez and Russian President Vladimir Putin. Both are semiauthoritarians who are ruling petrostates; both are hostile to the United States. Yet on the same day, they had very different electoral results. What do you make of this?
AO: Well, Chávez’s concession was not a trial of his democratic instincts, although one has to be happy about it. Venezuelan press reports today are talking about the fact that the military high command told Chávez to accept his defeat, and he conceded. He delayed the announcement for about seven hours in Venezuela, and according to a government-sanctioned monitoring group, the opposition victory was larger than officially reported. So we shouldn’t rush to celebrate Chávez’s sudden conversion into a Jeffersonian democrat.
In Putin’s case, he uses the same methods Chávez uses in Venezuela: massive uses of public resources; control of much of the media. There’s not such a huge difference. [But] Putin may be focusing more on Russia and the Russian people than Chávez is focusing on the Venezuelan people. A lot of Chávez supporters resented the fact that he spends most of his time in Saudi Arabia and Iran, talking about the world revolution when they want bread and butter.
FP: You’ve written about the much-discussed wave of neopopulism in Latin America and said it is misunderstood. What do you think an election result like this says about this so-called populist wave, if anything?
AO: Well, that’s the key question. Of course I’m worried about Chávez, and Nicaragua’s Daniel Ortega, and Bolivia’s Evo Morales, and Ecuador’s Rafael Correa scaring away investments and making the countries poor, but that’s not the key issue in Latin America; because if you put all these countries together—Venezuela, Cuba, Bolivia, Ecuador, and Nicaragua—they barely amount to 8 or 9 percent of Latin America’s GDP. U.S. officials and we in the press love to write about Chávez because he screams and yells and is colorful and insults everybody and he makes great copy. But the real story of Latin America is being written elsewhere: in Mexico; in Brazil; in Colombia; in Chile.
What really worries me about Latin America’s future is that we’re falling behind in education, science, technology, and research and development. If you look at all the international standardized tests for kids, Latin America has among the lowest scores in the world. When you look at the London Times’s ranking of the world’s 200 best universities, this year only three Latin American universities are among the world’s [top] 200 and they’re all between 195 and 200. This is scandalous. And it’s because, when the rest of the developing world is moving rapidly to create more skilled workforces, Latin America is talking ideology.
Look at Chávez. He speaks to the nation every day in front of a huge painting of Simón Bolívar. He changed the country to name it after Simón Bolívar. In every speech, he cites Bolívar as inspiration for every single measure he takes. The trouble is that Bolívar died in 1830—four years before the invention of the telegraph and 150 years before the invention of the Internet.
FP: Do you think then that a lot of people who are agitating for democratic ideals would be better off if they channeled all of their anger and resentment toward Chávez and people like him into issues like education?
AO: When it comes to his opponents in the United States, I think Washington should bypass Chávez. Instead of focusing on Chávez and responding to him, Washington should build bridges with Brazil, with Mexico, with Colombia, with Chile, with Peru and simply ignore Chávez. If Washington is really serious and really worried about Chávez, the thing it should do is be serious about reducing America’s dependence on imported oil. The United States is financing Chávez. We buy $34 billion a year worth of Venezuelan oil. That’s what keeps Chávez alive. Ironically, the United States is financing Chávez’s Bolivarian revolution.
Andrés Oppenheimer is the author of “The Oppenheimer Report,” a prize-winning column on Latin American affairs in the Miami Herald, and Saving the Americas: The Dangerous Decline of Latin America . . . and What the U.S. Must Do (New York: Random House, 2007).
Wednesday, December 12
I received a letter from a reader asking what I've done all day, given that I had not put up a post or even a note about why I hadn't done so. Part of my day was spent in a sometimes heated exchange of emails with a correspondent. Below is some of my end of the exchange. My thoughts are not crystallized on the matter I discuss, but this post might be the closest I come to publishing anything about the deep concern I expressed in the emails.
What we are seeing unfold since the US invasion of Afghanistan is a still-building movement to completely demonize war; Ridley Scott's film The Kingdom of Heaven makes this quite clear, as does Wolfgang Peterson's Troy.
Of course this movement has its contradictory elements -- e.g., support for Palestinian terrorism against Israel and Chechen terrorism against Russia's government. But the movement plays into the hands of the enemy.
From that viewpoint, I argue that nothing is insignificant about the NIE debate, which I see as having great magnitude. I am not so much debating NIE as debating Wolfgang Peterson and Ridley Scott. Troy and The Kingdom of Heaven, both of which I saw within the past two weeks, were box office flops in the US but a success in Europe -- and KOH was also a success in Arab countries, including Egypt.
One may dismiss the historical inaccuracies in KOH and Peterson's mangling of Homer's telling of the Battle of Troy, but both directors were intent on portraying authority as evil and war under any conditions as having no merit.
Scott used Hamid Dabashi, an intellectual who is also an anti-war activist, as the 'history' consultant for KOH. Scott, according to Wikipedia, defended KOH's flights from the truth by saying that the script was "approved and verified" by Dabashi. Scott also said that in his opinion, Dabashi is an "important man in New York."
That latter defense of Dabashi is very funny. But Dabashi, as with many of his Iranian countrymen, and as with so many Arabs and Africans, is still caught up in a post-Colonial mindset. My rant a few days ago to Africans took aim at the mindset.
So the pivotal part of Dabashi's viewpoint went above Scott's head, and above the head of the KOH script writer. Yet I am trying to get at something more difficult, which I can't express adequately because I still don't understand it. Here's my best try for now:
Barack Obama is riding on the call for "change" as many political analysts term it. Lou Dobbs is riding on the same call. I am becoming fearful that change, in this context, is a stand-in, a symbol, for blind rage building against all authority.
Far from a cry for change, the rage is rooted in a desire to go back -- to return to a time when change was not happening so suddenly and from so many quarters.
That's why I warned yesterday that President Bush spoke too soon to support the NIE. At some point, people flip into a mood where they won't believe anything said by anyone in authority -- any kind of authority.
One may argue that Ridley Scott was simply duped by Dabashi but if you have seen the movie -- which Scott doesn't like because it was mangled in its theater release edits -- Scott's theme transcends Muslim-Christian themes and the Crusades.
The only 'good' authority in the movie is a leper who dies young and whose position is very tenuous. In other words, the only good in the world is too weak to stand up to the Juggernaut of evil authority in all spheres.
The Kingdom of Heaven taps into a spreading mindset in Europe that fears European Union authority, which fears being overrun by refugees from Muslim countries, which fears this era -- the era of globalization. It fears everything and flirts with nihilism.
It seems that MoveOn and other anti-war organizations are trying to import this mindset to the United States because it makes cannon fodder against Bush's preemption doctrine and war hawks. If so, the anti-war activists are handling something very dangerous because the mindset calls up the worst part of the Depression era.
But I am still trying to understand the mindset, and wondering whether it is tinder waiting for a match. If it is tinder, the match could be a sharp economic downturn in the United States that like falling dominoes engulfs emerging economies in Asia and Africa.
Yet I acknowledge your argument that the mindset I fear is no more than a small fleeting shadow on the sweep of history. Truly, this is a grand time to be alive, a grand time for human progress. But there is The War, which for many people distracts attention from the progress -- even though the war is part of progress away from tyrannies.
With regard to your comment about Corey and Jeff, they were not actually debating, to my reading; Corey, in his comment to Jeff's piece, was just underscoring that NCRI led him to Natanz but also that NCRI's original identification of the facility's use was incorrect.*
Corey's comment does not clarify whether NCRI led "US intelligence agencies" to their first bead on Natanz in 2002. I assume NCRI did provide the first lead, but it would be helpful to nail that down. And NCRI needs to defend themselves against Jeff's implication that NCRI intelligence is unreliable. Only in some cases, it seems; in other key aspects, they are on target.
I know you don't think my point is important. But this war, for the good guys, is all about pushing a peanut across a sawdust floor with one's nose, to quote Joyce Carol Oates out of context. Important battles are fought over even the tiniest data mosaics. Credibility: how right has NCRI been in the past? That is a very important question in light of the NIE key judgment that Iran shut down their nuke weapons program in 2003.
Afterthought: This is my nod to the complexity of Scott's film: I suppose that a deeply religious or spiritual reader, or a reader who is simply interested in questions of ethics, would contest my view of The Kingdom of Heaven. If you cast out many things about the film, yes, Ridley Scott does wrestle with questions about conscience versus expediency, and about what true spirituality represents.
He threw a great deal into the movie, as he did with Gladiator and Blade Runner. Yet I think a film about war, and which demands the viewer become deeply involved in the situations leading to particular battle, is a hard place in which to blank out all but spiritual issues.
An ironic coda: The Kingdom of Heaven is also a tribute to history's military engineers, although I suspect that some of the tribute ended on the cutting room floor. Scott's depiction of the siege machines used against Jerusalem is jaw-dropping. It brings home that the machines were weapons of mass destruction in their day. I don't think any other living director but Ridley Scott could have portrayed the ingenuity of the siege machine builders, and their destructive capacity, so well.
[...] NCRI put out a press release declaring, "Negroponte: Iran's Uranium enrichment first revealed by Iranian Resistance." Well, not quite. I repeat, as I have before, that:
> In December 2002, Mark Hibbs reported that the US intelligence community, based on imagery and procurement data, had suspected that Iran was building a clandestine uranium enrichment plant in Natanz and a heavy water production facility in Arak for about a year.
> Hibbs also reported that six months earlier, in mid 2002, the US briefed the IAEA on the intelligence, providing “precise geographical coordinates of the sites.”
> When NCRI held its press conference a few weeks later, in August 2002, they misidentified the purpose of the Natanz facility as a fuel production plant.
> In December 2002, Corey Hinderstein, then with the Institute for Science and International Security, was the first person to publicly identify Natanz as a gas centrifuge facility.
You can look it up.
Here is Corey's comment about the post:
Thanks for the props, Jeff. It bugs me every time I see it. They were close, and NCRI’s info led me to Natanz, but they did not identify it correctly.
Credit should go also to David Albright, since after I found the site on satellite photos we worked together to ID it as a centrifuge plant.
Tuesday, December 11
4:00 PM Update: So many news reports have appeared since the Wall Street Journal published their report today on the NCRI announcements that I'm having trouble keeping track of them.
Each report has some information not contained in the others. So I've culled 'unique' quotes from three of the later reports -- from AP, AFP and Fox -- and tacked them at the end of the excerpts I published earlier from the Wall Street Journal report.
One of several surprises in the Fox report relating to the NIE conclusions is that NCRI claims the Iranians did not shut down a nuke weapons site in 2003 because of international pressure but because they were caught red-handed by NCRI and wanted to stay a step ahead of the IAEA.
It's funny the way things work out; if NCRI had announced their intelligence two weeks ago, they would have gotten a yawn from most of the mainstream press with maybe the exception of Fox news. But the publication of the NIE conclusions has guaranteed that NCRI's announcements would draw attention from across the mainstream.
* * * * * * * * * * *
One problem with using intelligence provided by Mujahedin e-Khalq (MEK) and their political wing, National Council for Resistance (NCRI) is that both groups are designated as terrorist organizations by the United States and the European Union. And yet the groups have provided accurate intelligence in some cases.
Now that intelligence used by the NIE is being reassessed in many quarters, I question the wisdom of President Bush's rush to publicly support the National Intelligence Estimate. Bush's political enemies have not been mollified by his generous words for the NIE. And if Bush and the NIE group are forced by new intelligence to recant -- the public can only take so much flip-flopping on intelligence matters.
So all things considered, it would have been wiser if Bush had been noncommittal in his public statements about the NIE -- at least until Israel could respond, and intelligence experts outside the NIE group could examine the report's key judgment on Iran's nuclear weapons program.
The Iranian opposition group that first exposed Iran's nuclear-fuel program said a U.S. intelligence analysis is correct that Tehran shut down its weaponization program in 2003, but claims that the program was relocated and restarted in 2004. [...]
A former U.S. intelligence official who works closely with the White House on Iran said that all the intelligence related to the NIE was being reassessed and that information coming from sources such as the NCRI would be included. "You have to take seriously what they say, but you also have to realize that they have gotten things wrong," the official said. [...]
The NCRI is the political wing of the Mujahedin e-Khalq [MEK], a group that still has as many as 4,000 members in a disarmed military camp just inside Iraq's border with Iran. The MEK has its roots as a Marxist-Islamist body that fought to overthrow the Shah and has been seeking to overthrow the current government since the mid-1980s. The U.S. and the European Union list both the NCRI and Mujahedin e-Khalq as terrorist organizations. The NCRI has had a mixed record in the accuracy of its claims concerning Iran's nuclear program.
U.S. intelligence officials have declined to comment on what role the NCRI or other Iranian dissident groups may have played in developing the new intelligence estimate. The NCRI first identified Iran's covert nuclear-fuel facilities in 2002, and the White House and State Department have credited the group with helping to expose the program. [...]
According to the NCRI, Iran's Supreme National Security Council decided to shut down its most important center for nuclear-weapons research in eastern Tehran, called Lavisan-Shian, in August 2003. [...]
But at the same meeting, the council decided to disperse pieces of the research to a number of locations around Iran, according to the NCRI. By the time international nuclear inspectors were allowed to get access to the Lavisan site, the buildings allegedly devoted to nuclear research had been torn down and the ground bulldozed. [...]
The NCRI, which claims to have intelligence sources inside Iran, said Lavisan was broken into 11 fields of research, including development of a nuclear trigger and of the technology to shape weapons-grade uranium into a warhead. [...]
"What the first part of the NIE says is right, that they halted their weaponization research in 2003," said Mohammad Mohaddessin, foreign-affairs chief for the NCRI. "But the second part, that they stopped until at least the middle of 2007, is wrong. They scattered the weaponization program to other locations and restarted in 2004."
Equipment was relocated first from Lavisan-Shian to another military compound in Tehran's Lavisan district, the Center for Readiness and Advanced Technology, Mr. Mohaddessin said. Two devices designed to measure radiation levels were moved to Malek-Ashtar University in Isfahan and to a defense ministry hospital in Tehran, he said. Other equipment was sent to other locations the NCRI hasn't been able to identify, he said.
"Their strategy was that if the IAEA found any one piece of this research program, it would be possible to justify it as civilian. But so long as it was all together, they wouldn't be able to," Mr.Mohaddessin said.
The NCRI said in a report on Iran's nuclear program in September 2005 that the Lavisan facility had been closed, setting back the regime's weaponization program by approximately one year. Mr. Mohaddessin said his group was certain no other Iranian nuclear facilities were closed in 2003.
A representative of the International Atomic Energy Agency, the United Nations nuclear watchdog in Vienna, declined to comment on the claims, but said the agency would consider seriously any NCRI information. A spokesman for the Iranian government couldn't be reached for comment.
Excerpts from three later reports:
(Associated Press) Four years ago, the [NCRI] group disclosed information about two hidden nuclear sites that helped uncover nearly two decades of covert Iranian atomic activity. But much of the information it has presented since then to back up claims that Iran has a secret weapons program has not been publicly verified.
In August 2002, [Alireza Jafarzadeh of NCRI] first reported the existence of secret Iranian nuclear sites at Natanz and Arak, prompting denunciations of Tehran by Washington and hurried inspections by the International Atomic Energy Agency.
And excerpts from Fox News interview with Alireza Jafarzadeh.
WASHINGTON (Fox News) - Twenty-one commanders of the Iranian Revolutionary Guard Corps are the top scientists running Iran's secret nuclear weapons program, says the man who exposed Iran's nuclear weapons program in 2002. [...] The scientists working on the alleged civilian nuclear centrifuge program are IRGC commanders, said Jafarzadeh, who was providing a list of names to the press on Tuesday. But their intention is not a nuclear energy source for civilians. [...]
"It's the IRGC that is basically controlling the whole thing, dominating the whole thing," Jafarzadeh told FOXNews.com. "They are running the show. They have a number of sites controlled by the IRGC that has been off-limits to the IAEA (International Atomic Energy Agency) and inspectors, including a military university known as Imam Hossein University. ... That site has not been inspected. They have perhaps the most advanced nuclear research and development center in that university."
Jafarzadeh said the 2003 decision to stop the weaponization program, which was operating in Lavizan-Shian, a posh northeast district of Tehran, was not Iran's own. The site had been exposed by the opposition, the National Council of Resistance of Iran, in April 2003 after revelations of several other nuclear sites that could be portrayed as dual purpose facilities; Lavizan-Shian could not, he said.
"The regime knew that this is not the site that they can invite the IAEA ... this site was heavily involved in militarization of the program," Jafarzadeh said. "They were doing all kinds of activities that were not justifiable. So they decided before the IAEA gets in — and it usually takes four to six months before they can go through the process and get in — use the time and try to basically destroy this whole facility, and that's what they did."
Jafarzadeh said the Iranians razed the buildings, removed the soil, cut down the trees and allowed the IAEA to inspect the Lavizan-Shian site, which had been turned into a park by June 2004. He noted that the regime acted as if it had succumbed to municipal pressure to open a park with basketball and tennis courts and that is why the area had been flattened.
Jafarzadeh said that "in a way it's correct for the NIE to say that in late 2003 the weaponization of the program was stopped, and they said it was due to international pressure. But they failed to say that it restarted in 2004" in a location called Lavizan 2, he said.
Lavizan 2 "has never been inspected by the IAEA," Jafarzadeh added. [...]
Monday, December 10
The quote in the title was plopped into a discussion that John Batchelor had with Bill Roggio last night about Iran's military activities in Iraq. Batchelor made the statement almost in passing while noting that in Iraq, the US is already at war with Iran.
At first I thought I'd misheard; I thought perhaps Batchelor had said Iran has a nuclear weapons program. But about 40 minutes later, during his closing monologue for KFI 640 AM radio, he said it again, and that time there was no mistaking.
He said, "Iran has nuclear weapons. Iran has a nuclear weapons program."
If the statement, which Batchelor did not support with data, had come from any other news source I would have filed it. But from the years I've been following his news program, I know that John Batchelor is a very cautious, very careful, analyst and news reporter. The sources he features on his show have sometimes been wrong, but John's track record is virtually unblemished because he is so cautious. As an analyst he lives by the golden guideline that in war, the first three reports are wrong.
A recent example of John's cautiousness is his refusal to speculate on the target of Israel's September 6 bombing raid in Syria. And his refusal to give weight to any of the published speculations. He has told his radio audience that the only "story" we have on the bombing is that we don't have the story.
So when Batchelor stated flatly that Iran had nuclear "weapons" -- I note the plural -- that was enough to cause me to drop my jaw.
You can also listen right now to Bill's discussion on Batchelor's show last night. Visit Bill's Long War Journal site for a link to the podcast. (Bill generally leaves the podcasts on his radio appearances up for a few days only.)
Meanwhile, the fallout from the NIE continues to rain down:
China's leaders have leaped into the breach created by the NIE publication; they have given the nod to Sinopec to go ahead and sign a $2 billion oil deal with Iran.
It's a good thing I didn't learn of that development on a Tuesday or Thursday, which I reserve for being deeply cynical. In that case I might have commented about the China-Iran deal signing that Thomas Fingar's work is now done. More than one observer of Mr Fingar's career has termed him a Panda Hugger, for those who don't get Pundita's little joke.
British spy chiefs have grave doubts that Iran has mothballed its nuclear weapons programme, as a US intelligence report [NIE] claimed last week, and believe the CIA has been hoodwinked by Teheran. [...]
A senior British official delivered a withering assessment of US intelligence-gathering abilities in the Middle East and revealed that British spies shared the concerns of Israeli defence chiefs that Iran was still pursuing nuclear weapons. [...]
A US intelligence source has revealed that some American spies share the concerns of the British and the Israelis. "Many middle- ranking CIA veterans believe Iran is still committed to producing nuclear weapons and are concerned that the agency lost a number of its best sources in Iran in 2004," the official said.
The same report from today's (UK) Telegraph notes:
The timing of the CIA report has also provoked fury in the British Government, where officials believe it has undermined efforts to impose tough new sanctions on Iran and made an Israeli attack on its nuclear facilities more likely.
Britain's government is not half as furious with the NIE publication as some members of Israel's government. Aaron Klein reported last night for John Batchelor, and again in World Net Daily, that at Sunday's Knesset session:
[...] lawmakers here blasted the report and questioned America's commitment to Israel and its front against Iran.
"It cannot be that Bush is committed to peace as was declared at Annapolis, and then the Americans propagate such an intelligence report which contradicts the information we have proving Iran intends to obtain nuclear weapons," stated Minister Yitzhak Cohen, a member of the Shas party, a key coalition partner in Olmert's government.
Cohen compared the NIE report to what he said were faulty reports released by the U.S. during the Holocaust that Jews were not being killed in spite of information possessed by American intelligence of the existence of concentration camps.
"In the middle of the previous century the Americans received intelligence reports from Auschwitz on the packed trains going to the extermination camps. They claimed then that the railways were industrial. Their attitude today to the information coming out of Iran on the Iranians' intention to produce a nuclear bomb reminds one of their attitude during the Holocaust," stated Cohen.
Now that's what I'd call fury.
Meanwhile, the meeting took place today between Joint Chiefs of Staff Admiral Mullen and Israel's defense minister Barak, during which Mullen was slated to review, among several other things, Israel's intelligence on Iran's nuclear weapons program.
No details have yet emerged from the meeting, with the exception of an inane statement by Mullen's spokesman, Captain John Kirby:
"Sometimes friends disagree," said Kirby of the differences of opinion between Israel and the US over Iran's agenda.
This is not about friends disagreeing and it's not about "opinion." It's about hard data, evidence, that Iran has continued with their nuclear weapons program. Israel's military says they have the data. The ball is now in the US government's court because the NIE on Iran's nuclear threat says the program was abandoned in 2003.
McConnell showed bad judgment in deciding to declassify the NIE right away. He rushed to get the NIE to the public on the shaky grounds that "... we felt it was important to release this information to ensure that an accurate presentation is available," explained Donald Kerr, principal deputy director of national intelligence.
That was before Retired Col. W. Patrick Lang, a former official in the Defense Intelligence Agency, said that senior CIA analysts involved in the NIE threat assessment on Iran demanded of the White House that the "gist" of the NIE key judgments be released to the public -- or they would leak the findings, even if it meant they had to go to jail.
Col. Lang's statement has been published, although I don't have the link at this time. Of course Lang's statement, which does not reference his source, might be wrong. But it is plausible enough to prompt me to hold my fire until more is known about McConnell's decision.
I still think it was the wrong decision, but it was surely made in consultation with senior White House officials and probably even President Bush. Certainly, the White House would know that any hint from CIA analysts that they would leak the key judgements would have to be taken seriously, given the recent history of leaks in Washington.
This said, I stand by my Dec 7 comments:
... the public can't have 'accuracy' on the report because certain elements of it remain classified! And because the report is public, it would be hard for US intelligence officials who signed off on the NIE to back away from the key conclusion -- that Iran had abandoned their nuclear weapons program in 2003 -- no matter what evidence Israel brings to the contrary.
As David Kay noted during an interview with Campbell Brown this weekend for CNN, the portion of the NIE that has been released is basically "headlines;" no supporting data have been published. So the public is still feeling around in the dark when it comes to assessing the NIE conclusions.
Given the grave events spinning out from the NIE publication, it might have been better to risk dealing with a leak than to have gone ahead with publication at this time. But this would be a judgment call made under great time pressure. So, for now, I'm letting McConnell halfway off the hook.
... pivotal to the US investigation into Iran's suspect nuclear weapons programme was the work of a little-known intelligence specialist, Thomas Fingar. He was the principal author of an intelligence report published on Monday that concluded Iran, contrary to previous US claims, had halted its covert programme four years ago and had not restarted it. Almost single-handedly he has stopped - or, at the very least, postponed - any US military action against Iran. [...]
The rest of the Guardian piece is mostly a hash of speculations, rumors, and crowing about the blow to Bush's policy on Iran.
The Guardian's editors have never made a secret of their hatred for Bush and his defense/foreign policy -- and hatred is not too strong a word to use in this context. But in this case their hatred may have blinded them to unfolding events since the NIE publication. As I noted in the previous post, it's not only neocons or even Republicans who are questioning the NIE.
So can we trust MacAskill's assertion that Fingar is the chief author of the NIE? I think so, from everything I know about Fingar and the people he works for.
Friday, December 7
I'm glad to see this because flaws in the NIE report on Iran are a defense thing, not a partisan thing. However, growing concerns about the NIE from across the political spectrum are shutting the barn door after the horse escaped. Publication of the NIE has given Russia and China -- and possibly Germany -- the excuse they needed to back away from applying pressure to Iran:
WASHINGTON — The new U.S. intelligence report that says Iran halted its nuclear weapons program in 2003 is suddenly raising concerns among the political center and left, as well as conservatives who have long called for a hard line against the Islamic Republic.
Moderate and liberal foreign policy experts said that U.S. intelligence agencies, possibly eager to demonstrate independence from White House political pressure, may have produced a National Intelligence Estimate that is more reassuring than it should be on the potential risks of the Iranian nuclear program.
The report, made public Monday, contradicted the Bush administration's assertion that Iran has been secretly working to build nuclear weapons. It also found that Tehran, which says it is enriching uranium solely for civilian energy purposes, appears to have a pragmatic view and has responded to outside pressure and economic sanctions, in contrast to characterizations by administration hawks.
[...] Iran expert Ray Takeyh, a former professor at the National War College and National Defense University, said that although his own politics are left of the president's, he agrees with Bush that Iran's nuclear program is a continuing threat.
"The position I take is that President Bush is right on this," said Takeyh, now at the Council on Foreign Relations.
Takeyh, who has long argued for engaging Iran in diplomacy, said the intelligence report was too easy on Tehran by not objecting to the uranium enrichment program, which many Western governments have alleged is meant to build the knowledge base to eventually develop nuclear weapons. The American intelligence agencies, in effect, accepted Iran's contention that the enrichment is for peaceful purposes, Takeyh said.
[...] Sharon Squassoni, a former government nuclear safeguards expert now with the generally liberal Carnegie Endowment for International Peace in Washington, noted that the intelligence report said Iran suspended its enrichment program in 2003 and later signed an agreement allowing U.N. inspections.
But, she said, the portion of the report made public was silent on the fact that the Iranians reversed both actions in 2006.
The ability to develop fissile materials is the most important element of a nuclear weapons program, she told reporters.
Gary Samore, who was a top arms control official in the Clinton White House, agreed that the National Intelligence Estimate did not adequately emphasize Iran's continuing efforts to enrich uranium and build missiles.
"The halting of the weaponization program in 2003 is less important from a proliferation standpoint than resumption of the enrichment program in 2006," said Samore, director of studies at the Council on Foreign Relations.
Samore said the report undermined Bush's warnings about Iranian efforts to develop nuclear weapons and left Tehran in a strong position, allowing it to develop its enrichment capacity without a substantial challenge from the United States and its allies. [...]
Anthony Lake, who was a national security advisor to President Clinton, found no fault with the intelligence report. But he said a key message was the importance of taking action.
"While we've got more time, we've got to use the time, because the enrichment activities are continuing," Lake said in an interview. [...]
In a Washington Post op-ed column Thursday, Bolton alleged that many of the officials involved [in writing the NIE] were "not intelligence professionals but refugees from the State Department" brought in by J. Michael McConnell, the director of national intelligence.
The following excerpts from a Jerusalem Post article throw some light on why US intelligence officials at work on the National Intelligence Estimate on Iran did not factor in Israel's intelligence. It seems the intelligence was ignored.
I'll grant it's possible that Israel was unwilling to share their "evidence" -- as distinct from intelligence -- until after the IDF got a look at the NIE.
Yet two things are coming clear: First, John "Mike" McConnell, the United States Director of National Intelligence, showed questionable judgment in keeping President Bush out of the loop for months about 'new' raw data on Iran's nuclear weapons program.
Second, McConnell showed bad judgment in deciding to declassify the NIE right away. He rushed to get the NIE to the public on the shaky grounds that "... we felt it was important to release this information to ensure that an accurate presentation is available," explained Donald Kerr, principal deputy director of national intelligence.
But the public can't have 'accuracy' on the report because certain elements of it remain classified! And because the report is public, it would be hard for US intelligence officials who signed off on the NIE to back away from the key conclusion -- that Iran had abandoned their nuclear weapons program in 2003 -- no matter what evidence Israel brings to the contrary.
Disappointed after failing to make their case on Iran and influence the outcome of the United States's National Intelligence Estimate (NIE) released this week, [Israel's] military Intelligence will present its hard core evidence on [Iran's] nuclear program on Sunday to the chairman of the US Joint Chiefs of Staff during a rare visit he will be making to Israel.
Admiral Michael Mullen will land in Israel Sunday morning for a 24-hour visit that will include a one-on-one meeting with IDF Chief of General Staff Lt.-Gen. Gabi Ashkenazi, as well as with Defense Minister Ehud Barak. [...]
Mullen's visit to Israel will be exactly a week after the publication of the NIE report that claimed Iran had frozen its nuclear military program in 2003 and has yet to restart it. During his visit, Military Intelligence plans to present him with Israel's evidence that Iran is in fact developing nuclear weapons.
"The report clearly shows that we did not succeed in making our case over the past year in the run-up to this report," a defense official said Thursday. "Mullen's visit is an opportunity to try and fix that." [...] |
Structural identification of phosphatidylcholines having an oxidatively shortened linoleate residue generated through its oxygenation with soybean or rabbit reticulocyte lipoxygenase.
Phosphatidylcholines (PCs) with platelet-activating factor (PAF)-like biological activities are known to be generated by fragmentation of the sn-2-esterified polyunsaturated fatty acyl group. The reaction is free radical-mediated and triggered by oxidants such as metal ions, oxyhemoglobin, and organic hydroperoxides. In this study, we characterized the PAF-like phospholipids produced on reaction of PC having a linoleate group with lipoxygenase enzymes at low oxygen concentrations. When the oxidized PCs were analyzed by gas chromatography-mass spectrometry, two types of oxidatively fragmented PC were detected. One PC had an sn-2-short chain saturated or unsaturated acyl group (C(8)-C(13)) with an aldehydic terminal; the abundant species were PCs with C(9) and C(13). The other PC had a short chain saturated acyl group (C(6)-C(9)) with a methyl terminal, and the most predominant species was PC with C(8). When the extracts of oxidation products were subjected to catalytic hydrogenation, PCs having saturated acyl groups (C(6)-C(14)) were detected; the most abundant was the C(12) species. The less regiospecific formation of PAF-like lipids suggests that they were generated by oxidative fragmentation of PC hydroperoxides formed by non-stereoselective oxygenation of the alkyl radical of esterified linoleate that escaped from the active centers of lipoxygenases. One of the PAF-like PCs with an aldehydic terminal was found to be bioactive; it inhibited the production of nitric oxide induced by lipopolysaccharide and interferon-gamma in vascular smooth muscle cells from rat aorta.
Modifications to central neural circuitry during heart failure.
During heart failure (HF), excess sodium retention is triggered by increased plasma renin-angiotensin-aldosterone activity and increased basal sympathetic nerve discharge (SND). Enhanced basal SND in the renal nerves plays a role in sodium retention. Therefore, as a hypothetical model for the central sympathetic control pathways that are dysregulated as a consequence of HF, the central neural pathways regulating the sympathetic motor output to the kidney are reviewed in the context of their role during HF. From these findings, a model of the neuroanatomical circuitry that may be affected during HF is constructed.