--- abstract: 'Transcribing content from structural images, e.g., writing notes from music scores, is a challenging task as not only should the content objects be recognized, but the internal structure should also be preserved. Existing image recognition methods mainly work on images with simple content (e.g., text lines with characters), but are not capable of identifying those with more complex content (e.g., structured code), which often follows a fine-grained grammar. To this end, in this paper, we propose a hierarchical *S*potlighted *T*ranscribing *N*etwork (STN) framework following a two-stage “where-to-what” solution. Specifically, we first decide “where-to-look” through a novel spotlight mechanism that focuses on different areas of the original image following its structure. Then, we decide “what-to-write” by developing a GRU based network that transcribes the content from the spotlight areas accordingly. Moreover, we propose two implementations on the basis of STN, i.e., STNM and STNR, where the spotlight movement follows the Markov property and Recurrent modeling, respectively. We also design a reinforcement method to refine our STN framework by self-improving the spotlight mechanism. We conduct extensive experiments on several structural image datasets, where the results clearly demonstrate the effectiveness of the STN framework.' author: - 'Yu Yin, Zhenya Huang' - Enhong Chen - Qi Liu - 'Fuzheng Zhang, Xing Xie' - Guoping Hu bibliography: - 'kdd.bib' title: | Transcribing Content from Structural Images with\ Spotlight Mechanism --- Introduction ============ Transcribing content from images refers to recognizing the semantic information in images and converting it into comprehensible forms (e.g., text) in computer vision [@ye2015text].
It is an essential problem for computers to understand how humans communicate about what they see, and it includes many tasks, such as reading text from scenes [@zhang2013text; @kannan2014mining], writing notes from music scores [@rebelo2012optical] and recognizing formulas from pictures [@chan2000mathematical]. As it is crucial in many applications, e.g., image retrieval [@cao2016deep; @ShangLZYZW15], online education systems [@huang2017question; @liu2018fuzzy] and assistant devices [@ezaki2004text], it has attracted much attention from both academia and industry [@ye2015text]. In the literature, there are many efforts on this transcribing problem, especially on the text reading task. Among them, the most representative, Optical Character Recognition (OCR), has been studied extensively for decades [@impedovo1991optical], mainly following rule-based solutions for generating texts from well-scanned documents [@lu2008document]. Recently, researchers have focused on a more general scene text recognition task, aiming to recognize texts from natural images [@vinyals2015show]. Usually, existing approaches are designed in an encoder-decoder architecture, which consists of two components: (1) a CNN based encoder to capture and represent images as feature vectors that preserve their semantic information [@oquab2014learning]; (2) an RNN based decoder that decodes the features and generates output text sequences either directly [@vinyals2015show], or attentively [@xu2015show]. Though good performances have been achieved, previous studies mainly focus on images with straightforward content (i.e., text with characters), while ignoring a large proportion of structural images, where the content objects are arranged in complex manners, e.g., music scores (Figure \[fig:sub:eg1a\]) and formulas (Figure \[fig:sub:eg1b\]). Therefore, the problem of transcribing content from these structural images remains largely open.
In fact, there are many technical challenges along this line due to the unique characteristics of structural images. First, different from natural images, where the text content is mostly placed in simple patterns, in structural images the content objects usually follow a fine-grained grammar and are organized in a more complex manner. For example, in Figure \[fig:sub:eg1a\], notes from the music score are not only placed from left to right, but the position in the stave of each note is also specified, often with annotations added to the left or above. A division formula in Figure \[fig:sub:eg1b\] contains a nested structure, where the equation components are placed at the left and right side of the equal sign, with the two parts of the right-hand-side fraction placed above and below the middle line. Thus, it is necessary for transcription to not only capture the information from local areas, but also preserve the internal structure and organization of the content. Second, content objects in structural images, even if they take up only a small proportion, may carry much semantics. For example, the note marked by the blue box in Figure \[fig:sub:eg1a\] is written as “`dis16`” in LilyPond[^1], which means that the note is D\# (“-`is`” for sharp) and a sixteenth note (denoted by “`16`”); the formula marked in Figure \[fig:sub:eg1b\] means “`\sqrt{...}`” in TeX code, representing the square root operator, with the scope defined by curly braces. Thus, it is very challenging to transcribe the complete content from an area containing such an informative object, compared to just one character in tasks such as scene text recognition. Third, there exist plenty of similar objects that confuse the transcribing task, e.g., a sixteenth note (blue in Figure \[fig:sub:eg1a\]) contains just one more flag on the stem than an eighth note (red), while notes with the same duration but different pitches are almost identical except for their positioning.
This characteristic requires a careful design for the transcribing model. To address the above challenges, following the observation of the human transcribing process, i.e., first find out where to look, then write down the content, we present a two-stage “where-to-what” solution and propose a hierarchical framework called the *S*potlighted *T*ranscribing *N*etwork (STN) for transcribing content from structural images. Specifically, after encoding images as feature vectors, in our decoder component, we first propose a spotlight module with a novel mechanism to handle the “where-to-look” problem and decide a reading path focusing on areas of the original image following its internal structure. Then, based on the learned spotlight areas, we address the “what-to-write” problem and develop a GRU based network for transcribing the semantic content from the local spotlight areas. Moreover, we propose two implementations on the basis of the STN framework. The first is a straightforward one, i.e., *STNM with Markov property*, in which the spotlight placement follows a Markov chain. Comparatively, the second is a more sophisticated one, i.e., *STNR with Recurrent modeling*, which can track long-term characteristics of spotlight movements. We also design a reinforcement method to refine STN, self-improving the spotlight mechanism. We conduct extensive experiments on real-world structural image datasets, where the results clearly demonstrate the effectiveness of the STN framework. Related Work ============ The research topics related to our concerns can be classified into the following three categories: encoder-decoder systems, attention mechanisms, and reinforcement learning. Encoder-Decoder System ---------------------- The encoder-decoder system is a general framework, which has been applied to many applications, such as neural machine translation [@cho2014properties; @bahdanau2014neural] and image captioning [@vinyals2015show; @xu2015show].
Generally, the system has two separate parts: an encoder for representing and encoding the input information into a feature vector, and a decoder for generating the output sequence according to the encoded representation. Due to its remarkable performance, many efforts have been made to apply it to scene text recognition [@wang2012end], aiming at transcribing texts from natural images. Specifically, for the encoder design, representative works leveraged deep CNN based networks, which have been the most popular methods due to their performance on hierarchical feature extraction [@oquab2014learning], to learn information encodings from images [@jaderberg2016reading]. Then, for the decoder, variations of recurrent neural networks (RNN), such as LSTM [@hochreiter1997long] and GRU [@chung2014empirical], were utilized to generate the output text sequence, both of which are able to preserve long-term dependencies for text representations [@sundermeyer2012lstm]. The whole architecture is end-to-end, which shows its effectiveness in practice [@shi2017end]. Attention Mechanism ------------------- However, in the original encoder-decoder systems, encoding the whole input into one vector usually makes the encoded information of images clumsy and confusing for the decoder to read from, leading to unsatisfactory transcription [@luong2015effective]. To improve the encoder-decoder models on this problem, inspired by the human visual system, researchers have proposed various attention mechanisms to highlight different parts of the encoder output by assigning weights to encoding vectors in each step of text generation [@bahdanau2014neural; @xu2015show; @mnih2014recurrent] or sequential prediction [@su2018exercise; @ying2018sequential]. For example, Bahdanau et al. [@bahdanau2014neural] proposed a way to jointly generate and align words using an attention mechanism. Xu et al. [@xu2015show] proposed soft and hard attention mechanisms for image captioning. Lee et al.
[@lee2016recursive] used an attention-based encoder-decoder system for character recognition problems. Our work improves on the previous studies mainly in the following two aspects. First, the attention weights are usually calculated by the correspondence between outputs and the whole content, which lets the models know “what” to look for, but not “where” to look. In our work, we propose a novel spotlight mechanism to directly find a reading path tracking the image structure for transcribing. Second, previous decoding processes use a single RNN for learning attention and transcribing simultaneously, which may confuse the transcription, while our framework models spotlighting and transcribing with two separate modules, avoiding the confusion between the two sequences. ![image](imgs/stat.pdf) Reinforcement Learning ---------------------- Deep reinforcement learning is a state-of-the-art technique, which has shown superior abilities in many fields, such as gaming and robotics [@arulkumaran2017brief]. Its main idea is to learn and refine model parameters according to task-specific reward signals. For example, Ranzato et al. [@ranzato2015sequence] used whole-sequence metrics to guide sequence generation, using the REINFORCE method; Bahdanau et al. [@bahdanau2016actor] utilized the actor-critic algorithm for sequence prediction, refining the model to improve the sentence BLEU score. Preliminaries ============= In this section, we first give a clear definition of structural images and introduce the structural image datasets used in this paper. Then we discuss the crucial differences between structural image transcribing and typical scene text recognition with dedicated data analysis. At last, we give the formal definition of the structural image transcription problem. Data Description ---------------- In this paper, we mainly focus on transcribing content from structural images.
*Structural images* refer to printed graphics that are not only a set of content objects, but also contain meaningful structure, i.e., object placement, following a certain grammar. Content with its structure can often be described by a domain specific language and compiled by the corresponding software. Typical structural images include music scores, formulas and flow charts, etc., which can be described in music notation, TeX and UML code, respectively.

  Dataset      Size    Vocab.   #Tokens   Avg. Length   Avg. Pixels
  ------------ ------- -------- --------- ------------- -------------
  Melody       4208    70       82,834    19.7          15,602.7
  Formula      61649   127      607,061   9.7           1,190.7
  Multi-Line   4595    127      182,112   39.8          9,016.6
  SVT          618     26       3,796    5.9           12,733.5
  IIIT5K       3000    36       15,269    5.0           11,682.0

  : The statistics of the datasets.[]{data-label="tab:datastats"}

We exploit two real-world datasets, i.e., *Melody* and *Formula*, along with one synthetic dataset, *Multi-Line*, specifically for the structural image transcription task[^2]. The *Melody* dataset contains pieces of music scores and their source code in LilyPond collected from the Internet[^3], mostly instrumental solos and choral pieces written by Bach, split into 1 to 4 bar lengths, forming 4208 image-code pairs. The *Formula* dataset is collected from Zhixue.com, an online educational system, and contains 61649 printed formulas from high school math exercises with their corresponding TeX code. To further demonstrate transcription on images with more complicated structure, we also construct the *Multi-Line* dataset that contains 4595 multi-line formulas, e.g., piecewise functions, each line consisting of complex formulas, e.g., multiple integrals. We summarize some basic statistics of these datasets in Table \[tab:datastats\]. We now conduct a detailed analysis to show the unique characteristics of the structural image transcription task compared to traditional scene text recognition.
Specifically, we compare our datasets with two commonly used datasets for scene text recognition, i.e., SVT [@wang2011end] and IIIT5K [@Mishra2012iiit5k], and observe three main differences. First, structural image transcription needs to preserve more information: besides the objects themselves, how they are organized should also be transcribed. As shown in Table \[tab:datastats\] and Figure \[fig:dist\], our datasets contain significantly longer content in relatively small images. Sequences longer than 10 tokens take up 75.0%, 30.4% and 99.9% of the Melody, Formula and Multi-Line datasets, respectively, whereas only 1.9% of SVT and 2.7% of IIIT5K have sequences longer than 10 characters. In addition, Melody, Formula and Multi-Line contain on average 1.26, 8.15 and 4.14 tokens per 1000 pixels, while SVT and IIIT5K only contain 0.46 and 0.43 characters, respectively, which indicates that each part of a structural image carries more information to be transcribed, along with the informative structure. Second, the output vocabulary and token count in our datasets are often larger than in SVT and IIIT5K, as shown in Table \[tab:datastats\]. Hence, it is even more complicated to transcribe content from structural images compared to text recognition. Third, the structural image transcription process is reversible, meaning the corresponding code should be able to compile and regenerate the original image, which is not necessary or possible for traditional scene text recognition. In summary, the above analysis clearly shows that the structural image transcription problem is quite different from traditional scene text recognition tasks. As a result, it is necessary to design a new approach that better fits this problem. ![image](imgs/model_arch.pdf) Problem Definition ------------------ In this subsection, we formally introduce the structural image transcription problem. In our image transcribing applications, we are given structural images and their corresponding source code.
Each input image $x$ is a one-channel gray-scale image with width $W$ and height $H$, containing content such as music notations or printed formulas. For each image, the expected output, i.e., its source code, is given as a token sequence $y=\{y_1, y_2, \ldots, y_T\}$, where $T$ is the length of the token sequence. Each $y_t$ can be a LilyPond notation (`c`, `fis`, …) in the music score transcribing task, or a TeX token (`x`, `\frac`, …) in the formula transcribing task. Moreover, structural images are reversible, by which we mean that the token sequence is expected to reconstruct the original image using the corresponding compiler. Therefore, the problem can be defined as: ([**Structural Image Transcription Problem**]{}). Given a structural $W \times H$ image $x$, our goal is to transcribe the content from it as a sequence $\hat{y}=\{\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_T\}$ as close as possible to the source code sequence $y$, where each $\hat{y}_t$ is a predicted token taken from the specific language corresponding to the image. Spotlighted Transcribing Network ================================ In this section, we introduce the Spotlighted Transcribing Network (STN) framework in detail. First, we give an overview of the model architecture. Then, we describe the details of our proposed spotlight mechanism in the following subsections. Finally, we discuss the training process of STN with reinforcement learning for refinement. Model Overview -------------- Figure \[fig:model\_arch\] shows the overall architecture of the Spotlighted Transcribing Network (STN), which consists of two main components: (1) a convolutional feature extractor network as the encoder, which learns the visual representations $V$ from the input image $x$; (2) a hierarchical transcribing decoder, which we mainly focus on in this work.
Mimicking the human reading process, the decoder first takes the encoded image information $V$ and finds out “where-to-look” by shedding a spotlight on it, following the learned reading path, then generates the token sequence $y$ by predicting one token at a time using a GRU-based output network, solving the “what-to-write” problem. In the following subsections, we explain how each part of the STN works in detail. Image Encoder ------------- The encoder part of STN extracts and embeds information from the image. Instead of embedding the complete image $x$ into one vector, which may cause a loss of structural information [@xu2015show], we extract a set of feature vectors $V$, each of which is a $D$-dimensional representation corresponding to a part of the image: $$V=\{V^{(i,j)}:i=1, \ldots, W',\,j=1,\ldots,H'\},\,V^{(i,j)}\in\mathbb{R}^D.$$ A deep convolutional neural network (CNN) is used as the feature extractor to capture high-level semantic information, which we denote as $f(\cdot\,;\theta_f)$. We follow the state-of-the-art image feature extractor design in ResNet [@he2016deep], adding residual connections between convolutional layers, together with ReLU activation [@nair2010rectified] and batch normalization [@ioffe2015batch] to stabilize training, but removing the fully connected layers along with the higher convolutional and pooling layers. As a result, we construct an extractor network that takes an image $x$ and outputs a 3-dimensional tensor $V$ ($W'\times H'\times D$): $$V=f(x;\theta_f),$$ where the vector $V^{(i,j)}$ at each location $(i,j)$ represents the local semantic information. The output tensor also preserves spatial and contextual information, with the property that adjacent vectors represent neighboring parts of the image. This allows the decoder module to use the image information selectively with both content and location in mind.
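As a shape-level sketch of the encoder interface (not the actual learned ResNet-style extractor), the toy function below stands in for $f(x;\theta_f)$: it maps a gray-scale image to a $W'\times H'\times D$ grid of local feature vectors, one per image region, so that adjacent vectors cover neighboring patches. The patch size, feature dimension and random projection are illustrative assumptions.

```python
import numpy as np

def toy_encoder(image, patch=16, D=32, seed=0):
    """Toy stand-in for the CNN extractor f(x; theta_f): turns a (H, W)
    gray-scale image into a (W', H', D) grid of local feature vectors,
    one D-dim vector per non-overlapping patch of the image."""
    H, W = image.shape
    Hp, Wp = H // patch, W // patch          # H', W' after downsampling
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(patch * patch, D))  # fixed random "filters"
    V = np.empty((Wp, Hp, D))
    for i in range(Wp):
        for j in range(Hp):
            block = image[j*patch:(j+1)*patch, i*patch:(i+1)*patch]
            V[i, j] = block.reshape(-1) @ proj   # local patch -> D-dim vector
    return V

x = np.random.rand(64, 128)   # a H=64, W=128 gray-scale image
V = toy_encoder(x)            # feature grid of shape (W', H', D) = (8, 4, 32)
```

The real encoder replaces the random projection with stacked residual convolutions, but the interface — image in, spatially organized $W'\times H'\times D$ grid out — is the same.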
Transcribing Decoder -------------------- The transcribing decoder of STN, as in the typical encoder-decoder architecture, generates one token at a time by giving its conditional probability over the encoder output $V$ and all the previous outputs $\{y_1,\ldots,y_{t-1}\}$ at each time step $t$. Hence, we can denote the probability of the decoder yielding a sequence $y$ as: $$\mathrm{P}(y|x)=\prod_{t=1}^{T}{\mathrm{P}(y_t|y_1,\ldots,y_{t-1},V)}.$$ Considering the fact that the output history can be long, we embed the history before time step $t$ into a hidden state vector $h_t$ by utilizing a variation of RNN, the Gated Recurrent Unit (GRU), which preserves more long-term dependencies. Formally, at time step $t$, the hidden state for the output history $h_t$ is updated based on the last output item $y_{t-1}$ and the previous output history $h_{t-1}$, by a GRU network $GRU(\cdot\,;\theta_h)$: $$h_{t}=GRU(y_{t-1}, h_{t-1};\theta_h).$$ For the image part, the visual representation $V$ we get as the encoder output carries enough semantic information, but as a whole it can be confounding for the decoder to comprehend, and thus needs careful selection [@xu2015show]. To deal with this problem, we mimic what humans do when reading images: focus on one spot at a time, write down the content, then move to the next spot following the image structure [@blakemore1969existence]. Along this line, we propose a module with a novel spotlight mechanism, where at each time step we only focus on information around a certain spotlight center. We refer to the spotlight center position at time step $t$ as $s_t$, and the spotlighted information as the spotlight context $sc_t$. Further details on how to get the focused spotlight context are described in Section \[spot\], while how to move the spotlight following the structure is described in Section \[control\].
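The history embedding update $h_t = GRU(y_{t-1}, h_{t-1};\theta_h)$ follows the standard GRU gate equations; a minimal sketch with random placeholder weights (all dimensions, and the random token embedding, are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(y_prev, h_prev, params):
    """One GRU update h_t = GRU(y_{t-1}, h_{t-1}; theta_h), using the
    standard gate equations; weights here are random placeholders."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ y_prev + Uz @ h_prev)            # update gate
    r = sigmoid(Wr @ y_prev + Ur @ h_prev)            # reset gate
    h_tilde = np.tanh(Wh @ y_prev + Uh @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde

d_in, d_h = 16, 32                                    # illustrative sizes
rng = np.random.default_rng(0)
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_h, d_in), (d_h, d_h)] * 3]     # Wz,Uz,Wr,Ur,Wh,Uh
h = np.zeros(d_h)                                     # initial history h_0
y_embed = rng.normal(size=d_in)                       # embedding of y_{t-1}
h = gru_step(y_embed, h, params)                      # history embedding h_1
```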
With embedded history $h_t$ and spotlight context $sc_t$, together with the current spotlight position $s_t$, the conditional probability of the output token at time $t$ can then be parameterized as follows: $$\mathrm{P}(y_t|y_1,\ldots,y_{t-1}, V)=\mathrm{Softmax}(d(h_t\oplus sc_t\oplus s_t;\theta_d)),$$ where $d(\cdot\,;\theta_d)$ is a transformation function (e.g. a feed-forward neural network) that outputs a vocabulary-sized vector, and $\oplus$ represents the operation that concatenates two vectors. The overall transcription loss $\mathcal{L}$ on an image-sequence pair is then defined as the negative log likelihood of the token sequence over the image: $$\label{eq:loss} \mathcal{L}=\sum_{t=1}^{T}{-\log{P(y_t|y_1,\ldots,y_{t-1},V)}}.$$ With all the calculation being deterministic and differentiable, the model can be optimized through standard back-propagation.

[The coordinate matrices $I$, $J$ and the expanded matrices $X_t$, $Y_t$ used to compute $b_t=-\left((I-X_t)^2+(J-Y_t)^2\right)/\sigma_t^2$ in matrix form.]{data-label="fig:coord"}

Spotlight Mechanism {#spot} ------------------- In this subsection, we describe how to get the focused information of the input image, i.e., the spotlight context $sc_t$, with our proposed spotlight mechanism. How the spotlight moves through time is handled in a separate spotlight control module, and is described later in detail in Section \[control\]. As mentioned earlier, the visual embedding $V$ is confounding for the decoder, and we want to focus on one spot at a time when generating output. To achieve this goal, we propose a novel spotlight mechanism to mimic human focus directly, where at each time step, we only care about information around a certain location which we call a spotlight center, by “shedding” a spotlight around it.
More specifically, we define a spotlight handle $s_t=(x_t, y_t, \sigma_t)^\text{T}$ at each time step $t$ to represent the spotlight, where $(x_t,y_t)$ represents the center position of the spotlight, and $\sigma_t$ represents the radius of the spotlight. Inspired by Yang et al. [@yang2017learning], we “shed” a spotlight by assigning weights to the image representation vectors at each position, following a truncated Gaussian distribution centered at $(x_t, y_t)$, with the same variance $\sigma_t$ on both axes. Formally, under the spotlight with handle $s_t=(x_t, y_t, \sigma_t)^\text{T}$, the weight for the vector at position $(i, j)$ at time step $t$, denoted as $\alpha_t^{(i,j)}$, is proportional to the probability density at point $(i, j)$ under the Gaussian distribution: $$\alpha_t^{(i,j)} \sim \mathcal{N}((i,j)^\text{T}|\mu_t, \Sigma_t),$$ $$\mu_t=(x_t, y_t)^\text{T},\quad \Sigma_t=\begin{bmatrix} \sigma_t & 0 \\ 0 & \sigma_t \end{bmatrix}.$$ Intuitively, the closer $(i,j)$ is to the center $(x_t,y_t)$, the higher the weight should be, mimicking shedding a spotlight with radius $\sigma_t$ onto the location $(x_t,y_t)$. To calculate the weight $\alpha_t^{(i,j)}$ of each position $(i,j)$ while still keeping the process differentiable, we apply the definition of the Gaussian distribution and rewrite the expression of $\alpha_t^{(i,j)}$ as: $$\alpha_{t}^{(i, j)}=\mathrm{Softmax}(b_t)=\frac{\exp(b_t^{(i,j)})}{\sum_{u=1}^{W'}\sum_{v=1}^{H'} {\exp(b_t^{(u,v)})}},$$ $$b_t^{(i,j)}=-\frac{(i-x_t)^2+(j-y_t)^2}{\sigma_t^2} \label{eq:a},$$ where $b_t$ measures how close the point $(i,j)$ is to the center $(x_t,y_t)$, i.e., how important this point is, and $\alpha_t$ is thus a $W'\times H'$ matrix following the truncated Gaussian distribution over points $(i,j)$, which can later be used as weights for the image feature vectors. To parallelize the calculation of Equation (\[eq:a\]), we perform a small trick as demonstrated in Figure \[fig:coord\].
We first construct two $W'\times H'$ matrices $I$ and $J$ in advance, each of them representing one coordinate. Specifically, as shown in Figure \[fig:coord\], for each point $(i,j)$, we have $I^{(i,j)}=i$ and $J^{(i,j)}=j$. We also expand $x_t$ and $y_t$ into $W'\times H'$ matrices $X_t$ and $Y_t$ respectively, with the same value for every element. Therefore, Equation (\[eq:a\]) can be written in matrix form: $$b_t=-[(I-X_t)^2+(J-Y_t)^2]/\sigma_t^2.$$ The focused information of the visual representation $V$ at time step $t$ can then be computed as a spotlight context vector $sc_t$ weighted by $\alpha_t^{(i,j)}$ according to the current spotlight handle $s_t$, i.e., the weighted sum of the features at each position: $$sc_t = \sum_{i=1}^{W'}\sum_{j=1}^{H'}{ \alpha_t^{(i,j)} V^{(i,j)}}.$$ Please note that the spotlight context $sc_t$ represents the information in the focused area at time step $t$, and should contain information specifically useful for transcribing at the current time step. By focusing directly on the correct spot, the transcription module only cares about the local information, without being confused by areas with similar content elsewhere in the image. Spotlight Control {#control} ----------------- Now we discuss how to control the spotlight to find a proper reading path, following the image structure through the whole generation process. Different from the traditional attention strategy, where both the output sequence and the attention behavior are embedded in one module, we see the spotlight movement (i.e., the value of the spotlight handle $s_t=(x_t, y_t, \sigma_t)^\text{T}$ at each time step $t$) as a separate sequence devoted to following the image structure, and model this sequence with a standalone spotlight controlling module, without mixing its information with the output sequence.
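The spotlight computation described in Section \[spot\] — the matrix form of Equation (\[eq:a\]), the softmax, and the weighted sum yielding $sc_t$ — can be sketched in a few lines; the random feature grid and the 1-based coordinates are illustrative assumptions:

```python
import numpy as np

def spotlight_context(V, x_t, y_t, sigma_t):
    """Spotlight weights alpha_t and context sc_t for a feature grid V
    of shape (W', H', D), using the matrix form of Equation (eq:a)."""
    Wp, Hp, D = V.shape
    # 1-based coordinate matrices: I[i, j] = i+1, J[i, j] = j+1
    I, J = np.meshgrid(np.arange(1, Wp + 1), np.arange(1, Hp + 1),
                       indexing="ij")
    b = -((I - x_t) ** 2 + (J - y_t) ** 2) / sigma_t ** 2
    alpha = np.exp(b - b.max())
    alpha /= alpha.sum()                          # softmax over all positions
    sc = (alpha[..., None] * V).sum(axis=(0, 1))  # weighted sum of features
    return alpha, sc

V = np.random.rand(8, 4, 32)                      # toy (W', H', D) grid
alpha, sc = spotlight_context(V, x_t=3.0, y_t=2.0, sigma_t=1.5)
```

Note that the whole computation is a fixed differentiable function of $(x_t, y_t, \sigma_t)$, which is what allows the spotlight handle to be trained by back-propagation.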
We provide two implementations under the STN framework, i.e., the straightforward *STNM with Markov property*, and the more sophisticated *STNR with Recurrent modeling*, which utilizes another GRU network. Each implementation models the spotlight handle sequence differently. **STNM with Markov property.** With an assumption that is not far from reality, we can intuitively treat the spotlight handle sequence as a Markov process, i.e., the current spotlight handle only depends on the previous handle, along with other internal states at the current time step. Treating the spotlight handle sequence as a Markov process means the probability of choosing $s_t$ at time $t$ does not rely on spotlight handles more than one step earlier, i.e.: $$P(s_t|s_1,\ldots,s_{t-1};\cdot)=P(s_t|s_{t-1};\cdot).$$ To decide where to put the spotlight properly, the model also needs to know the current internal states at time step $t$, including the spotlight context $sc_{t-1}$, which represents the previously spotlighted region, and the history embedding $h_t$, which represents the output history *before* time $t$. Thus, we can use a feed-forward neural network $n(\cdot\,;\theta_{n})$ to model the choice of $s_t$ (Figure \[fig:control\] (a)) as: $$s_t=n(s_{t-1}\oplus sc_{t-1}\oplus h_{t}; \theta_{n}).$$ This way of modeling the sequence is simple and time-independent, which makes the controlling module easier to train. ![The spotlight control module implementations.[]{data-label="fig:control"}](imgs/control.pdf) **STNR with Recurrent modeling.** Sometimes a longer spotlight history is needed for spotlight controlling on images with more complex structure. To track the image structure as a sequence with long-term dependencies, we propose another GRU network $GRU(\cdot\,;\theta_g)$ to track the spotlight history, and a fully connected layer $c(\cdot\,;\theta_c)$ to generate the next spotlight handle (Figure \[fig:control\] (b)).
Specifically, at time step $t$, with the last spotlight history embedding denoted as $e_{t}$, the current spotlight handle $s_t$ at time $t$ is calculated as: $$s_t = c(e_{t}\oplus sc_{t-1}\oplus h_{t}; \theta_c),$$ and the history embedding is updated by: $$e_t = GRU(s_{t-1}, e_{t-1};\theta_g).$$ With a separate module dedicated to spotlight control, STN gains two advantages over the traditional attention mechanism. First, STN focuses on local areas by design, so the model only has to learn where to focus and what to transcribe, while an attention model has to first learn to focus, then learn what to focus on. Second, by modeling the reading and writing processes as two separate sequences, with a standalone module dedicated to the “where-to-look” problem, STN is capable of directly learning a reading path on structural images apart from generating the output sequences, which enables our model to track the image structure more closely compared to attentive models, where the attention and transcribing processes are modeled together in only one network. Training and Refining STN {#training} ------------------------- The parameters to be updated in both implementations come from three parts: the encoder parameters $\theta_f$, the decoder parameters $\{\theta_h, \theta_d\}$, and the parameters of the spotlight control module, which are $\theta_n$ in STNM and $\{\theta_c, \theta_g\}$ in STNR. The parameters are updated to minimize the total transcription loss $\mathcal{L}$ (Equation (\[eq:loss\])) through a gradient descent algorithm, for which we choose the Adam optimizer [@kingma2014adam]. More detailed settings are presented in the experiment section. Though our model is differentiable and can be optimized through back-propagation, directly training to fit the label suffers from specific difficulties of the image transcribing task.
First, the model has to jointly learn two different sequences with only one of them directly supervised, which may result in an inaccurate reading path. Second, the given token sequence may only be one of many correct ones that all regenerate the original image. For instance, in LilyPond notation, we can optionally omit the duration of notes that have the same length as their predecessors. Fitting to only one of the correct sequences penalizes the model even when it has learned a good strategy.

Fortunately, in structural image transcription problems, we have the advantage that the process is reversible, meaning that, given the transcribed sequence, we can use a compiler to reconstruct the image. Guided by this, we can further refine our model using reinforcement learning, regarding our sequential generation as a decision-making problem and viewing it as a Markov Decision Process (MDP) [@bahdanau2016actor]. Formally, we define the *state*, *action* and *reward* of the MDP as follows:

**State:** Viewing our problem as outputting the probability of each token at each time step, conditioned on the image and previous generations, we define the environment state at time step $t$ as the combination of the image $x$ and the output history $\{y_1, \ldots, y_{t-1}\}$, which is exactly the input of the STN. Therefore, instead of directly using the environment state, we use the internal states (combined and denoted as $state_t$) in the STN framework as MDP states.

**Action:** Taking action $a_t$ is defined as generating the token $y_t$ at time step $t$. With the probability of each token as the output, the STN can be viewed as a stochastic policy that generates actions by sampling from the distribution $\pi(a|state_t;\theta)=P(a|y_1, \ldots, y_{t-1}, x;\theta)$, where $\theta$ is the set of model parameters to be refined.

**Reward:** After taking the action, a reward signal $r$ is received.
Here we define the reward $r_t$ as 0 when the generation is not finished at time step $t$, and as the pixel similarity between the reconstructed image and the original image once the whole generation process has finished. Besides, we give -1 as the final reward if the output sequence does not compile, addressing grammar constraints by penalizing illegal outputs. The goal is to maximize the sum of the discounted rewards from each time $t$, i.e., the return: $$R_t=\sum_{k=t}^T{\gamma^{k-t}r_k}.$$

We further define a value network $v(\cdot\,;\theta_v)$ to estimate the expected return from each $state_t$, which is a feed-forward network with the same input as the STN output layer $d$. The estimated value $v_t$, i.e., the expected return at time step $t$, is then $$v_t=v(h_t\oplus sc_t\oplus s_t;\theta_v).$$ With a stochastic policy together with a value network, we can apply the actor-critic algorithm [@bahdanau2016actor] to our sequence generation problem, with the policy network trained using the policy gradient at each time step $t$: $$\nabla_\theta=\nabla_\theta\log\pi(a|state_t;\theta)(R_t-v_t),$$ and the value network trained by minimizing the distance between the estimated value and the actual return: $\mathcal{L}_{value}=||v_t-R_t||_2^2$.

As the whole model is complicated, directly applying reinforcement learning to it suffers from the large search space. Through experiments we notice that, after supervised training, the image extractor and the output history embedding modules have both been trained properly, and what matters most for making precise predictions is a better reading path, which indicates that refining the spotlight module is most beneficial.
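The reward and actor-critic quantities can be sketched as follows. This is a simplified illustration: `compile_fn` is a hypothetical stand-in for the LilyPond/LaTeX compiler (returning `None` on failure), and pixel similarity is assumed to mean the fraction of matching pixels:

```python
import numpy as np

def reward(tokens, original_img, compile_fn):
    """Terminal reward: pixel similarity between the re-rendered image and the
    original, or -1 if the token sequence does not compile."""
    rendered = compile_fn(tokens)
    if rendered is None:
        return -1.0                                 # penalize illegal outputs
    return float((rendered == original_img).mean()) # fraction of matching pixels

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{k>=t} gamma^(k-t) r_k, computed right-to-left in one pass."""
    R, out = 0.0, []
    for r in reversed(rewards):
        R = r + gamma * R
        out.append(R)
    return out[::-1]

def policy_gradient_terms(log_probs, returns, values):
    """Per-step surrogate terms log pi(a_t) * (R_t - v_t); differentiating each
    term w.r.t. theta gives the actor-critic policy gradient, with the value
    baseline v_t reducing the variance of the estimate."""
    return [lp * (R - v) for lp, R, v in zip(log_probs, returns, values)]
```

Since only the terminal step carries a nonzero reward, every intermediate return $R_t$ is just a discounted copy of the final pixel-similarity (or -1) signal.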
Therefore, at the reinforcement stage, we only optimize the parameters of the spotlight control module ($\theta_n$ in STNM, $\theta_c$ and $\theta_g$ in STNR), along with those of the output layer ($\theta_d$), and leave $\theta_f$ and $\theta_h$ fixed, which reduces the variance when applying reinforcement learning algorithms and yields better improvements. With this train-and-refine procedure, our model can learn a reasonable reading path on structural images, focusing on different parts following the image structure when transcribing, and achieve superior transcription results, as our experimental results show in the next section.

Experiments
===========

In this section, we conduct extensive experiments to demonstrate the effectiveness of the STN model from various aspects: (1) the transcribing performance; (2) the validation loss, demonstrating the model sensitivity; (3) the spotlight visualization of STN.

Experimental Setup
------------------

### Data partition and preprocessing.

We partition all our datasets, i.e., *Melody*, *Formula* and *Multi-Line*, into 60%/40%, 70%/30%, 80%/20%, 90%/10% training/testing sets, respectively, to test model performance at different data sparsity. From each training set, we also sample 10% of the images as a validation set. The images are randomly scaled and cropped for stable training, and the ground-truth source code is cut into token sequences in the corresponding language to reduce the search space.

### STN setting.

We now specify the model setup in STN, including the image encoder, the transcription decoder and the reinforcement module. For the STN image encoder, we use a variation of ResNet [@he2016deep], and set the encoded vector width as 128. For the transcribing decoder, we set both the output history embedding $h_t$ and the spotlight history embedding $e_t$ to the same dimension of 128. The value network used at the reinforcement stage is a two-layer fully-connected neural network, with the hidden layer also sized at 128.

### Training setting.
To set up the training process, we initialize all parameters in STN following [@glorot2010understanding]. Each parameter is sampled from $U\left(-\sqrt{6/(n_{in}+n_{out})},\sqrt{6/(n_{in}+n_{out})}\right)$ as its initial value, where $n_{in}$ and $n_{out}$ stand for the number of neurons feeding in and the number of neurons the result is fed to, respectively. Besides, to prevent overfitting, we also add an L2-regularization term to the loss function (Equation (\[eq:loss\])), with the regularization amount tuned for the best performance. At the reinforcement stage, the discount factor $\gamma$ is set as 0.99. We also apply some variance-reduction techniques, mostly following [@bahdanau2016actor], including using an additional target Q-network and reward normalization.

### Comparison methods.

To demonstrate the effectiveness of STN, we compare our two implementations, i.e., STNM and STNR, with several state-of-the-art baselines as follows.

- **Enc-Dec** is a plain encoder-decoder model originally used for image captioning [@vinyals2015show]. Its design allows it to be used in our problem setup with minor adjustments.

- **Attn-Dot** is an encoder-decoder model with an attention mechanism following [@luong2015effective], where the attention score is calculated by directly computing the similarity between the current output state and each encoded image vector.

- **Attn-FC** is an encoder-decoder model similar to [@vinyals2015show], but with a basic visual attention strategy. The model presents two attention strategies, i.e., the “hard” and “soft” attention mechanisms, from which we follow [@xu2015show] and choose the more widely used “soft” attention, as it is deterministic and easier to train.

- **Attn-Pos** is an encoder-decoder model designed specifically for scene text recognition [@yang2017learning], where, besides the image content, it also embeds location information into the attention calculation, achieving superior results.
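The Glorot initialization described in the training setting above can be sketched as a minimal illustration of the uniform rule:

```python
import numpy as np

def glorot_uniform(n_in, n_out, rng=None):
    """Sample a weight matrix from U(-b, b) with b = sqrt(6 / (n_in + n_out)),
    following Glorot & Bengio (2010)."""
    if rng is None:
        rng = np.random.default_rng()
    bound = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-bound, bound, size=(n_out, n_in))
```

For a 128-to-128 layer, as used throughout our setup, the bound is $\sqrt{6/256}\approx 0.153$, so all initial weights fall within that range.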
To conduct a fair comparison, the image encoders for the baselines are changed to use the more recent ResNet [@he2016deep] as our model does, with all of them tuned to their best performance. All models are implemented in PyTorch[^4], and trained on a Linux server with four 2.0GHz Intel Xeon E5-2620 CPUs and a Tesla K20m GPU.

Experimental Results
--------------------

### Transcribing performance

We train STN along with all the baseline models on four different data partitions of each dataset, comparing token accuracy at different data sparsity. We repeat all experiments 5 times and report the average results, which are shown in Table \[tab:acc\]. From the results, we can make several observations. First, both STNM and STNR perform better than all the other methods. This indicates that the STN framework is more capable of handling structural image transcription tasks, being more effective and accurate at tracking complex image structures. Second, STN models, as well as attention based methods, all have much higher prediction accuracy than the plain Enc-Dec method, which supports the claim mentioned earlier in this paper that image information encoded as a single vector is confounding for the decoder to decode, and both STN and attentive models are able to reduce this confusion. Moreover, STN models are consistently better than the attentive ones, showing the superiority of STN with separate modules for spotlighting and transcribing. Third, STNR and STNM have only slightly higher performance on *Melody* and *Formula* than Attn-Pos, but surpass it by a clear margin on the *Multi-Line* dataset. These results demonstrate that STN with the spotlight mechanism can well preserve the internal structure of images, especially in more complex scenarios, benefiting the transcription accuracy. Last but not least, we can see that STNR consistently outperforms STNM, which indicates that it is effective to track long-term dependencies for spotlighting in the process of transcribing structural image content.
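The token accuracy we report can be read as the per-position match rate between the predicted and ground-truth token sequences; the exact definition below (length mismatches counted as errors) is our assumption for illustration:

```python
def token_accuracy(pred, truth):
    """Fraction of positions where the predicted token matches the ground
    truth, with any length mismatch counted against the prediction."""
    n = max(len(pred), len(truth))
    if n == 0:
        return 1.0
    hits = sum(p == t for p, t in zip(pred, truth))
    return hits / n
```

For example, predicting three LilyPond tokens with one wrong duration against a three-token ground truth scores 2/3 under this definition.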
![image](imgs/melody_case.pdf) ![image](imgs/formula_case.pdf)

### Validation loss

The losses of all models on the validation set throughout the training process on the three datasets are shown in Figure \[fig:loss\]. We make similar observations as before, which demonstrates the effectiveness of the STN framework again. Clearly, from the results, both STNR and STNM converge faster than the other models, and also achieve a lower loss. In particular, their improvements on the more complex Multi-Line dataset are more significant. Thus, we can conclude that STN with the spotlight mechanism has a superior ability to transcribe content from structural images. Moreover, all models reach their lowest validation loss before 30 epochs, with STNR and STNM both reaching their best points earlier. Thus, in our experiments, we train both STNR and STNM for 25, 15 and 20 epochs on the Melody, Formula and Multi-Line datasets, respectively, to obtain the best performance.

### Spotlight visualization

To show the effectiveness of STN in capturing the image structure and producing a reasonable reading path while transcribing, we visualize the spotlight weights computed by STNR when generating tokens, and compare them with the attention weights calculated by the Attn-Pos model. Figure \[fig:vis\_melody\] and Figure \[fig:vis\_formula\] visualize the results on image examples from the Melody and Formula datasets, respectively.[^5] In each example, we compare the attention and spotlight mechanisms on how focused they are when generating a token, and on how well they track the image structure. From the visualization, we can draw the following conclusions: (1) STNR finds a more reasonable reading path in both examples. In the melody example, it focuses on notes from left to right, and also tracks the height of each note, making accurate note pitch predictions; in the formula example, it clearly follows the middle-top-bottom order when reading a fraction.
The Attn-Pos model, on the other hand, does not track the image structure well enough. As shown in Figure \[fig:vis\_formula\], it fails to find the correct spot after generating “`\sqrt{x`”, losing track of the radical expression, and generates the wrong token “`}`” at last. (2) Although the Attn-Pos model assigns more weight to content objects in images, e.g., notes, formulas and variables, it is often confused in areas with similar content. STNR, on the other hand, distinguishes similar regions properly. More specifically, in Figure \[fig:vis\_melody\], although Attn-Pos is able to focus on the notes, all notes are given similar weights as they look similar, which causes confusion and then wrong predictions. And in Figure \[fig:vis\_formula\], when Attn-Pos writes `x`, all three `x`’s in the image have high weights, causing the model to forget where to look next. On the contrary, STNR is well focused on the correct spot when generating each token on both datasets, which leads to more precise predictions.

### Discussion

All the above experiments have shown the effectiveness of STN on structural image transcription tasks. It has superior performance compared to other general-purpose approaches, and also captures the structure of the image by producing a reading path that follows the image structure when transcribing. There are still some directions for further study. First, STN learns to transcribe tokens directly with little prior knowledge of the image or the specific language. We plan to utilize more prior knowledge, such as lexicons and hand-engineered features, to further improve the performance. Second, we will try to apply our model to some more ambitious settings, such as transcribing with long-term context, and to make our model capable of other transcribing applications such as scene text recognition.
Third, we would like to further decouple the reading and writing processes of STN, in order to mimic human behavior more genuinely.

Conclusion
==========

In this paper, we presented a novel hierarchical Spotlight Transcribing Network (STN) for transcribing content from structural images by finding a reading path that tracks the internal structure of the image. Specifically, we first designed a two-stage “where-to-what” solution with a novel spotlight mechanism dedicated to the “where-to-look” problem, providing two implementations under the framework that model the spotlight movement through a Markov chain and recurrent dependency, respectively. Then, we applied supervised learning and reinforcement learning methods to accurately train and refine the spotlight modeling, in order to learn a reasonable reading path. Finally, we conducted extensive experiments on one synthetic and two real-world datasets to demonstrate the effectiveness of the STN framework, with fast model convergence and high performance, and also visualized the learned reading path. We hope this work could lead to more studies in the future.

Acknowledgements {#acknowledgements .unnumbered}
================

This research was partially supported by grants from the National Natural Science Foundation of China (Nos. U1605251 and 61727809), the Science Foundation of Ministry of Education of China & China Mobile (No. MCM20170507), and the Youth Innovation Promotion Association of the Chinese Academy of Sciences (No. 2014299).

[^1]: A domain specific language for music notation, http://lilypond.org/

[^2]: Datasets are available at: http://home.ustc.edu.cn/\~yxonic/stn\_dataset.7z.

[^3]: http://web.mit.edu/music21/

[^4]: http://pytorch.org

[^5]: We only choose two real-world datasets for visualization due to the page limitation.
Main menu Post navigation Geographers on Tour: Santa Cruz Field Class 2014 Choosing third year modules is never easy, but when faced with the choice of either 2 exams or a 2 week field class in California (with coursework) there was little decision left to make. From the moment I stepped out into San Francisco I knew I wouldn’t regret my choice. I took the opportunity to go out a few days early before the field class started, and my first concern was whether I would have enough time to visit each of the department stores that appeared on every corner, and my second was how much I could fit in my suitcase… Luckily I was only staying in San Francisco for two days. We recovered from jetlag and sampled the local food… burgers and pancakes, and had just enough time to take a trip to Pier 39 before meeting the lecturers and setting off for Santa Cruz (this is where the work kicks in). Just over an hour away from San Francisco, the city of Santa Cruz was a complete contrast to where we had just come from. Being from Liverpool ‘city’ to me means fast paced, high rise buildings and lots of traffic, but this place was anything but. Think sandy beaches, surfers, sea lions, California’s oldest amusement park and sunshine every day… suddenly the thought of doing the equivalent of another dissertation isn’t so bad. The first day was quite relaxed, we toured the city and started to work out where we would be working over the next two weeks. As my group was doing a study on public perception of drought we had to set up interviews and focus groups, which proved less challenging than expected. People in local Government were really friendly and keen to talk about how their department had been involved in drought mitigation, we were even invited to the University of California’s Santa Cruz campus to speak with the sustainability department. 
Unfortunately the same enthusiasm was not felt by the locals we were hounding every day to complete questionnaires and it took a lot of perseverance to get enough. For all second years who may be contemplating taking this module, do not be disillusioned, our trip to Santa Cruz was not all work and no play. At 6pm every evening we finished work for the day and took full advantage of the local bars and restaurants, attended a basketball game and visited the Boardwalk (amusement park) on the last day. Apart from the Thai restaurant along the beach (which we recommend you avoid at all costs) there were some really great places to eat out. If you’re planning on going to Santa Cruz for your final year at Liverpool both the Surfrider Café and Seabright Brewery are a must! In typical “Come Dine With Me” style, girls versus boys, we took advantage of the self-catering facilities and also tried eating in. On average, we managed to cook meals for a cost of around $4 per person so if you’re worried about budgeting whilst you’re away this is a good option. Once our draft reports were handed in and the field class over, we also took the chance to stay on for a few days before flying home. We made the most of it by taking a night time trip to Alcatraz prison and walking across the Golden Gate Bridge. Unfortunately for me, my adventure was then over, but others stayed longer and went on to Yosemite, LA, or continued sightseeing in San Francisco. The Santa Cruz field class has been a trip of a lifetime, one filled with unforgettable experiences and great people. I’m glad I got to work on such an interesting topic and as a BSc student, glad I took the opportunity to do a project using human geography methods and gain an insight into the other side of the discipline. At first I was reluctant to step out of my comfort zone, and convinced that I was out of my depth arranging face to face interviews with city council directors, but that was before I arrived in Santa Cruz. 
After day one I was taken aback by the willingness of people to speak to students; they really make the time for us. Even the local newspaper was interested in what we were doing and ran a story on us. Doing a project using human geography methods allowed us to see much more of the city than we otherwise would have and although transcribing interviews in coffee shops sometimes felt like cheating (whilst our peers were knee deep in rivers) we can now say we bridged the geography divide and broadened our employability skills – and having tried transcribing and getting people to stop to answer questionnaires, we now know that these methods aren’t as easy as they may seem. Santa Cruz has been a valuable trip as we have been able to put the last two and a half years of learning into practice, as well as it being a fantastic end to our course. I’ve arrived home with great memories, a list of skills to add to my CV, a suitcase full of banana slug memorabilia and one of the best reasons I can think of for picking a geography degree!
Search This Blog Leaving Telford It's very important that we keep propping up the endless jewish lies that serve as the mortar for our modern multi-culti. Living in this delusional and ultimately destructive kosher fantasy world is far preferable to addressing reality, an action likely to bring down the wrath of zion. I'm not some stupid hero and I don't want that supply of booze and pills to suddenly dry up, so let me parrot the semitic deceits even as everything is burning. We're all equal. We need a lot more "immigration" to help muh economy and pay my pension when I'm finally no longer useful as a plow horse for a system that hates me and wants me dead. Islam is peace. All religions are basically the same, full of amazing ethical teachings like "don't be a bad boy." An African is just an overcooked White, eager to participate in our consumerism. My country is certainly not dying. One of the Telford grooming victims has spoken out about her horrendous four-year ordeal at the hands of a gang who sold her 'countless times' for sex - and slammed the authorities who did 'nothing' to help her. In fairness, these pathetic cowards were afraid of being called names. If not being called a "racist" by alien nation-wreckers who want us dead leads to years of horrific moose-limb child rape, it's a small price to pay. Anyone who gave me grief for calling not-so-Great Britain "cuck island" owes me an apology. This is absolutely unbelievable. The United Kaliphate, land of systematic enemygrant child rape. It is believed gangs in the town abused up to 1,000 girls, some as young as 11, over a four-decade period. I informed the "bobbies" and they did nothing. Oh well, we did everything we could. Four decades. I'm slowly scanning the room where I'm typing this, my eyes finally settling on the shotgun in the corner. If it was my daughter... 
The girls were often drugged, beaten and raped, one was murdered alongside her mother and sister and two others died in incidents linked to the sickening scandal. Our paradise of "diversity" and exciting ethnic food. Did you know the stone cube worshipers believe in Jesus, sort of? They're just like you. I'm assuming you have working eyes, British. Ignorance is not an excuse for this appalling disaster. Telford has the third highest number of child sexual offences recorded in the UK, just behind Blackpool and Rotherham, according to the Home Office. Time to put it into the same memory hole. The silence is deafening. The face of the racial and religious enemy. Today, the woman, going by the name 'Holly', spoke anonymously to Good Morning Britain, and said: 'I was abused from the ages of 14-18, my abuse started with boys my own age, who went on to sell my phone number to older men. "Pumping" money into that GDP, the most important activity possible. Selling phone numbers to moe-ham-head rapists for big profits. Another triumph for "the market." And from there it was just a whirlwind of rape every day basically. I was going into the doctors and the youth sexual health clinic to get the morning after pill, probably twice a week, and nobody even questioned anything. There will always be an England. 'I had two abortions, still nothing was said to me. I was in cars that were stopped by the police and they asked me no questions of why I was there with a much older man... it got to the point where I tried to commit suicide, and still nobody asked me any questions about what was going on in my life and why I was reacting the way I was reacting.' The death of a nation has a dignity all its own. Holly added: 'The way I got out of it was by actually leaving Telford and isolating myself from my friends and family and everybody else that I knew. Everyone you knew was worthless human garbage, Holly. Including your own family. The way they failed you is an absolute disgrace. 
Drink your pints, watch the amazing "pace" of the negroes on the "tellie." Your daughter is being sexually ravaged by foreign invaders, made a war trophy for the army of allah. Say, is that football? Everything is fine. 'The reason why it went on for so long was because the men were blackmailing me saying that they were going to rape my family members or burn my house down.' The religion of peace. Telford's Conservative MP, Lucy Allan, has previously called for a Rotherham-style inquiry into the allegations and called the latest reports 'extremely serious and shocking'. Yeah. NO FUCKING SHIT. A council spokesman told The Mirror, who obtained the figures, that they 'do not paint the whole picture'. He added: 'A referral to children's safeguarding service is just one of a number of appropriate outcomes to a contact of this type.' 'Our analysis shows that all the contacts received a proportionate response. You're doing a great job. Very good response. It only took forty years. Speaking this morning, Holly added: 'I feel angry that I'm still being denied an inquiry in Telford – a specific inquiry in Telford. I'm not shocked at the scale of the abuse because I saw it with my own eyes. We can't learn the obvious lessons because that would be "xenophobia." Let's obfuscate, defend our impotent responses and continue to be lowered into the cold, uncaring ground. Notice that Telford, Rotherham and Blackpool are small(ish) towns. Most people living outside the UK probably had never heard of them before. If sexual abuse on an industrial scale is uncovered seemingly at random in three small towns, what reason is there to assume it is not going on just about everywhere else in Britain? Post a Comment Popular posts from this blog Guarding the future of your progeny and homeland is not a job you can outsource to foreign shores and foreign invaders. A nation that can't defend itself has no future. If you're weak, expect to be killed and replaced. 
These truths seem obvious, one would even say self-evident perhaps, but you'd never know from the behavior of Western Europe. There doesn't seem to be a limit to the cowardice, the delusional arrogance and the pure refined stupidity of these rapidly dying nations. As usual, Swedenistan is the gold standard, a former Nordic paradise now rapidly becoming the world's coldest moon cult nightmare state. The feckless reaction from "leadership" to this fundamental transformation that must be made, with jews front and center, of course, would only be surprising if you haven't been paying attention. The number of militant extremists living in Sweden has soared from a couple of hundreds a few years ago to thousands today, the security police Säpo … All cultures are equally good, no matter how barbaric and destructive. All religions are equally valid, teaching incredible spiritual truths like "don't be a jerk." All people are equal, that low sloping forehead has no effect on frontal lobe capacity. It's hard to believe anyone ever fell for this jewish nonsense, let alone allowed this intellectual mush to become the dominant belief system in rapidly dying lands. Today we see the end game of this kosher shuck as the worst possible behaviors are declared perfectly healthy and negro savagery once confined to the Heart of Darkness finds a place in Massachusetts. Latarsha Sanders, 43, told police she attacked her 8-year-old son, Edson "Marlon" Brito, with a kitchen knife as part of a ritual but failed, so she attacked her 5-year-old son, Lason Brito, assistant Plymouth District Attorney Jessica Kenny told a judge during Sanders' arraignment on murder charges. If Europe has a reason for optimism, it's in the Visegrád group. In fact, it would be fair to say that our ancestral homelands can be divided almost evenly into the rapidly dying West and the resisting East. 
In Hungary the "immigration" shotgun and the jewish demand to "put this in your mouth" have been rejected and there's no sign of a reversal. Instead, it's time for real talk from a European leader condemning the kosher spiritual sickness, the rapefugee invasion columns and the cowards who were afraid of being called names. Hungarian leader Viktor Orban called on Sunday for a global alliance against migration as his right-wing populist Fidesz party began campaigning for an April 8 election in which it is expected to win a third consecutive landslide victory. You can't build a future with other people's children. Invisible lines are not what makes a nation: it's shared religion, culture, tradition and race. A brown horde of unskilled worker…
Cerebral arterial ectasia and tuberous sclerosis: case report. Tuberous sclerosis is associated with a wide variety of central nervous system abnormalities. Cerebrovascular anomalies are extremely rare, but a case of cerebral arterial ectasia and giant fusiform aneurysm formation in a young child is reported. A 5-month-old male patient with tuberous sclerosis presented with seizures, a subependymal tumor, and intraventricular hemorrhage. Cerebral angiography demonstrated a large fusiform aneurysm of the left cavernous internal carotid artery as well as arterial ectasia of the proximal left anterior cerebral and middle cerebral arteries. The patient developed hydrocephalus and died of infectious complications after repeated shunt procedures. Tuberous sclerosis is commonly associated with central nervous system lesions. Although rare, cerebrovascular anomalies and aneurysms should be considered in the differential diagnosis of mass lesions to avoid an ill-advised biopsy of a vascular lesion, which could have disastrous consequences.
1. Field of the Invention The present invention relates generally to a hook assembly or unit for attachment to a vertical surface. Particularly, the present invention relates to a unit attachable to a restroom stall wall or door for temporarily storing personal items such as hats, purses, backpacks, coats and the like. More particularly, the present invention relates to a restroom stall hook assembly that hinders theft of personal items stored in a receiving area of the hook assembly. 2. Description of the Related Art There are a wide variety of hooks designed for hanging personal items such as hats, purses, backpacks, coats and the like. Many of these hooks are used in public restrooms, often secured to a stall surface such as the stall door or side wall. A visitor usually places their personal items on the hook while using the facilities to support the items off of the floor. A typical restroom hook includes a planar mounting element for attachment to the stall surface and one or two generally J-shaped hooks extending from the mounting element, either in horizontal or vertical alignment, for hanging the personal items. Theft in public restrooms of personal items stored on restroom hooks while a person is in the stall is a prevalent problem. A significant amount of the problem stems from a simple theft in which an object, often a purse, is hung on an interior hook in a restroom stall. The thief reaches over the stall, pushes the item off of the hook and onto the stall floor while the owner is in a particularly vulnerable position. The item then drops to the ground and the thief can then reach under the stall and grab the fallen item. Before the owner of the item can react, the thief has fled. This method of theft is often referred to as “push, drop and grab”. Numerous attempts have been made to develop an anti-theft device for locking personal items to the device to prevent theft. These devices, such as those disclosed in U.S. Pat. Nos. 
5,984,250 (Connor); 6,152,419 (Bender); 6,338,463 (Babitz et al.) and D551,418 (Loveless) are often large devices, many having numerous moving parts. Such devices are more expensive than traditional known hooks, can secure a very limited number and type of items and are somewhat difficult to use, particularly if a visitor is in a hurry or unfamiliar with that type of device. The present invention addresses problems and limitations associated with the prior art.
/**
  ******************************************************************************
  * @file    stm32f7xx_hal_dma_ex.h
  * @author  MCD Application Team
  * @version V1.2.2
  * @date    14-April-2017
  * @brief   Header file of DMA HAL extension module.
  ******************************************************************************
  * @attention
  *
  * <h2><center>&copy; COPYRIGHT(c) 2017 STMicroelectronics</center></h2>
  *
  * Redistribution and use in source and binary forms, with or without modification,
  * are permitted provided that the following conditions are met:
  *   1. Redistributions of source code must retain the above copyright notice,
  *      this list of conditions and the following disclaimer.
  *   2. Redistributions in binary form must reproduce the above copyright notice,
  *      this list of conditions and the following disclaimer in the documentation
  *      and/or other materials provided with the distribution.
  *   3. Neither the name of STMicroelectronics nor the names of its contributors
  *      may be used to endorse or promote products derived from this software
  *      without specific prior written permission.
  *
  * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
  * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
  * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
  * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
  * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
  * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
  * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  *
  ******************************************************************************
  */

/* Define to prevent recursive inclusion -------------------------------------*/
#ifndef __STM32F7xx_HAL_DMA_EX_H
#define __STM32F7xx_HAL_DMA_EX_H

#ifdef __cplusplus
 extern "C" {
#endif

/* Includes ------------------------------------------------------------------*/
#include "stm32f7xx_hal_def.h"

/** @addtogroup STM32F7xx_HAL_Driver
  * @{
  */

/** @addtogroup DMAEx
  * @{
  */

/* Exported types ------------------------------------------------------------*/
/** @defgroup DMAEx_Exported_Types DMAEx Exported Types
  * @brief DMAEx Exported types
  * @{
  */

/**
  * @brief  HAL DMA Memory definition
  */
typedef enum
{
  MEMORY0 = 0x00U,    /*!< Memory 0 */
  MEMORY1 = 0x01U,    /*!< Memory 1 */
}HAL_DMA_MemoryTypeDef;

/**
  * @}
  */

/* Exported constants --------------------------------------------------------*/
/** @defgroup DMA_Exported_Constants DMA Exported Constants
  * @brief    DMA Exported constants
  * @{
  */

/** @defgroup DMAEx_Channel_selection DMA Channel selection
  * @brief    DMAEx channel selection
  * @{
  */
#define DMA_CHANNEL_0        ((uint32_t)0x00000000U)  /*!< DMA Channel 0 */
#define DMA_CHANNEL_1        ((uint32_t)0x02000000U)  /*!< DMA Channel 1 */
#define DMA_CHANNEL_2        ((uint32_t)0x04000000U)  /*!< DMA Channel 2 */
#define DMA_CHANNEL_3        ((uint32_t)0x06000000U)  /*!< DMA Channel 3 */
#define DMA_CHANNEL_4        ((uint32_t)0x08000000U)  /*!< DMA Channel 4 */
#define DMA_CHANNEL_5        ((uint32_t)0x0A000000U)  /*!< DMA Channel 5 */
#define DMA_CHANNEL_6        ((uint32_t)0x0C000000U)  /*!< DMA Channel 6 */
#define DMA_CHANNEL_7        ((uint32_t)0x0E000000U)  /*!< DMA Channel 7 */

#if defined (STM32F722xx) || defined (STM32F723xx) || defined (STM32F732xx) || defined (STM32F733xx) ||\
    defined (STM32F765xx) || defined (STM32F767xx) || defined (STM32F769xx) || defined (STM32F777xx) ||\
    defined (STM32F779xx)
#define DMA_CHANNEL_8        ((uint32_t)0x10000000U)  /*!< DMA Channel 8  */
#define DMA_CHANNEL_9        ((uint32_t)0x12000000U)  /*!< DMA Channel 9  */
#define DMA_CHANNEL_10       ((uint32_t)0x14000000U)  /*!< DMA Channel 10 */
#define DMA_CHANNEL_11       ((uint32_t)0x16000000U)  /*!< DMA Channel 11 */
#define DMA_CHANNEL_12       ((uint32_t)0x18000000U)  /*!< DMA Channel 12 */
#define DMA_CHANNEL_13       ((uint32_t)0x1A000000U)  /*!< DMA Channel 13 */
#define DMA_CHANNEL_14       ((uint32_t)0x1C000000U)  /*!< DMA Channel 14 */
#define DMA_CHANNEL_15       ((uint32_t)0x1E000000U)  /*!< DMA Channel 15 */
#endif /* STM32F722xx || STM32F723xx || STM32F732xx || STM32F733xx || STM32F765xx || STM32F767xx || STM32F769xx || STM32F777xx || STM32F779xx */

/**
  * @}
  */

/**
  * @}
  */

/* Exported functions --------------------------------------------------------*/
/** @defgroup DMAEx_Exported_Functions DMAEx Exported Functions
  * @brief   DMAEx Exported functions
  * @{
  */

/** @defgroup DMAEx_Exported_Functions_Group1 Extended features functions
  * @brief   Extended features functions
  * @{
  */

/* IO operation functions *******************************************************/
HAL_StatusTypeDef HAL_DMAEx_MultiBufferStart(DMA_HandleTypeDef *hdma, uint32_t SrcAddress, uint32_t DstAddress, uint32_t SecondMemAddress, uint32_t DataLength);
HAL_StatusTypeDef HAL_DMAEx_MultiBufferStart_IT(DMA_HandleTypeDef *hdma, uint32_t SrcAddress, uint32_t DstAddress, uint32_t SecondMemAddress, uint32_t DataLength);
HAL_StatusTypeDef HAL_DMAEx_ChangeMemory(DMA_HandleTypeDef *hdma, uint32_t Address, HAL_DMA_MemoryTypeDef memory);

/**
  * @}
  */

/**
  * @}
  */

/* Private macros ------------------------------------------------------------*/
/** @defgroup DMAEx_Private_Macros DMA Private Macros
  * @brief    DMAEx private macros
  * @{
  */
#if defined (STM32F722xx) || defined (STM32F723xx) || defined (STM32F732xx) || defined (STM32F733xx) ||\
    defined (STM32F765xx) || defined (STM32F767xx) || defined (STM32F769xx) || defined (STM32F777xx) ||\
    defined (STM32F779xx)
#define IS_DMA_CHANNEL(CHANNEL) (((CHANNEL) == DMA_CHANNEL_0)  || \
                                 ((CHANNEL) == DMA_CHANNEL_1)  || \
                                 ((CHANNEL) == DMA_CHANNEL_2)  || \
                                 ((CHANNEL) == DMA_CHANNEL_3)  || \
                                 ((CHANNEL) == DMA_CHANNEL_4)  || \
                                 ((CHANNEL) == DMA_CHANNEL_5)  || \
                                 ((CHANNEL) == DMA_CHANNEL_6)  || \
                                 ((CHANNEL) == DMA_CHANNEL_7)  || \
                                 ((CHANNEL) == DMA_CHANNEL_8)  || \
                                 ((CHANNEL) == DMA_CHANNEL_9)  || \
                                 ((CHANNEL) == DMA_CHANNEL_10) || \
                                 ((CHANNEL) == DMA_CHANNEL_11) || \
                                 ((CHANNEL) == DMA_CHANNEL_12) || \
                                 ((CHANNEL) == DMA_CHANNEL_13) || \
                                 ((CHANNEL) == DMA_CHANNEL_14) || \
                                 ((CHANNEL) == DMA_CHANNEL_15))
#else
#define IS_DMA_CHANNEL(CHANNEL) (((CHANNEL) == DMA_CHANNEL_0) || \
                                 ((CHANNEL) == DMA_CHANNEL_1) || \
                                 ((CHANNEL) == DMA_CHANNEL_2) || \
                                 ((CHANNEL) == DMA_CHANNEL_3) || \
                                 ((CHANNEL) == DMA_CHANNEL_4) || \
                                 ((CHANNEL) == DMA_CHANNEL_5) || \
                                 ((CHANNEL) == DMA_CHANNEL_6) || \
                                 ((CHANNEL) == DMA_CHANNEL_7))
#endif /* STM32F722xx || STM32F723xx || STM32F732xx || STM32F733xx || STM32F765xx || STM32F767xx || STM32F769xx || STM32F777xx || STM32F779xx */

/**
  * @}
  */

/* Private functions ---------------------------------------------------------*/
/** @defgroup DMAEx_Private_Functions DMAEx Private Functions
  * @brief DMAEx Private functions
  * @{
  */

/**
  * @}
  */

/**
  * @}
  */

/**
  * @}
  */

#ifdef __cplusplus
}
#endif

#endif /* __STM32F7xx_HAL_DMA_EX_H */

/************************ (C) COPYRIGHT STMicroelectronics *****END OF FILE****/
## Eviction v1beta1

Group | Version | Kind
------------ | ---------- | -----------
Core | v1beta1 | Eviction

> Example yaml coming soon...

Eviction evicts a pod from its node subject to certain policies and safety constraints. This is a subresource of Pod. A request to cause such an eviction is created by POSTing to .../pods/&lt;pod name&gt;/evictions.

Field | Description
------------ | -----------
deleteOptions <br /> *[DeleteOptions](#deleteoptions-v1)* | DeleteOptions may be provided
metadata <br /> *[ObjectMeta](#objectmeta-v1)* | ObjectMeta describes the pod that is being evicted.
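While the official yaml example is still marked as "coming soon" above, a minimal sketch of an Eviction request body looks like the following; the pod name and namespace are illustrative assumptions, and `deleteOptions` is optional:

```yaml
apiVersion: policy/v1beta1
kind: Eviction
metadata:
  name: my-pod          # name of the pod to evict (illustrative)
  namespace: default    # namespace of that pod (illustrative)
deleteOptions:
  gracePeriodSeconds: 30
```

POSTing this body to `/api/v1/namespaces/default/pods/my-pod/eviction` requests the eviction; unlike a plain DELETE, the request is rejected if it would violate a PodDisruptionBudget.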
Q: How to handle many "yes/no, if yes..." questions on a form

I'm creating a form in ASP.NET to replicate a paper form (I have no say in the design; I'm tasked with merely recreating it digitally). This form has many questions along the lines of "Answer yes or no. If yes, specify how much." I'm currently handling it by listing the question and then having two radio buttons in a group, one saying "Yes" and one "No". To make this a little prettier, I've been using AJAX UpdatePanels that will only display a textbox to hold this "yes" value if the user selects Yes.

Now I've been able to do this successfully, but each question is its own radio button group and has its own panel to update visibility, which means that the way I'm currently doing it there is a lot of redundant code like:

    Protected Sub rdoShowOriginalEquipment(ByVal sender As Object, ByVal e As System.EventArgs)
        If rdoOEYes.Checked = True Then
            pnlOriginalEquipment.Visible = True
        ElseIf rdoOENo.Checked = True Then
            pnlOriginalEquipment.Visible = False
        End If
    End Sub

And so on for every question that has a yes/no option like that. I have no doubt there is a better way to do this. I was wondering if there is a way I could pass the panel associated with the radio button group so I could use a single method in the code that would fire for all radio button postbacks, something like (not real code):

    Protected Sub showPanel(RadioButtonGroup, panel)
        If rdoYes.Checked = True Then
            panel.Visible = True
        ElseIf rdoNo.Checked = True Then
            panel.Visible = False
        End If
    End Sub

Or is there a better way to handle questions like this? I'm open to a different approach if it would cut down on the amount of redundant code that I'm typing now. I'm using VB, but I know C#, so if someone is fluent in that with an answer I'd have no problem interpreting it. Any help is much appreciated.
A: Here is working code:

    <asp:Panel ID="Question1" runat="server">
        <asp:RadioButton GroupName="Q1" runat="server" ID="Q1Yes" Text="Yes"
            OnCheckedChanged="AnswerChanged" AutoPostBack="true" />
        <asp:RadioButton GroupName="Q1" runat="server" ID="Q1No" Text="No"
            OnCheckedChanged="AnswerChanged" AutoPostBack="true" />
        <asp:Panel runat="server" ID="Q1Panel">Some text here</asp:Panel>
    </asp:Panel>
    <asp:Panel ID="Question2" runat="server">
        <asp:RadioButton GroupName="Q2" runat="server" ID="Q2Yes" Text="Yes"
            OnCheckedChanged="AnswerChanged" AutoPostBack="true" />
        <asp:RadioButton GroupName="Q2" runat="server" ID="Q2No" Text="No"
            OnCheckedChanged="AnswerChanged" AutoPostBack="true" />
        <asp:Panel runat="server" ID="Q2Panel">Some text here</asp:Panel>
    </asp:Panel>

Note that all radio buttons share the same handler for OnCheckedChanged and have AutoPostBack="true". You can put an UpdatePanel where necessary.

Code behind:

    protected void AnswerChanged(object sender, EventArgs e)
    {
        RadioButton rbAnswer = (RadioButton)sender;
        if (rbAnswer.Checked)
        {
            // The panel ID follows the convention "<GroupName>Panel"
            string panelID = rbAnswer.GroupName + "Panel";
            if (rbAnswer.Text == "Yes")
                rbAnswer.Parent.FindControl(panelID).Visible = true;
            else
                rbAnswer.Parent.FindControl(panelID).Visible = false;
        }
    }

You can also use data-bound controls (e.g. GridView), but then you will have your questions as a list. Happy coding!
Ryan Nelson

This week, Justice Brett Kavanaugh sat for his first arguments at the U.S. Supreme Court. His path to those arguments, however, left countless Americans angry and relations between the two parties at a new low. Unfortunately, the fight over the judiciary has not ended with Kavanaugh's confirmation. Instead, it has returned to a familiar front: lower court nominations. With Senate Majority Leader Mitch McConnell pushing for the confirmation of over thirty pending lower court nominations on the Senate Executive Calendar, many more confrontations are upcoming. Below, we highlight ten nominees currently pending on the Senate floor who are expected to cause controversy, ranked in order from least to most likely to trigger a fight. (All ten nominees passed through the Senate Judiciary Committee on 11-10 party-line votes.)

John Campbell "Cam" Barker, the 38-year-old Deputy Solicitor General of Texas, has been nominated for a seat on the U.S. District Court for the Eastern District of Texas. As Deputy Solicitor General, Barker joined efforts by Attorney General Ken Paxton to challenge Obama Administration initiatives and protect Trump Administration efforts. In his three years in that position, Barker litigated the challenge (alongside now-Fifth Circuit Judge Andy Oldham) against the Obama Administration's DAPA initiatives on immigration, defended Texas' restrictive voter ID laws, and sought to intervene in support of President Trump's travel bans. Barker also litigated to crack down on "sanctuary cities" in Texas, challenged the contraceptive mandate in the Affordable Care Act, and helped to defend HB2, restrictions on women's reproductive rights struck down by the Supreme Court in Whole Woman's Health v. Hellerstedt.
In responding to questions from members of the Senate Judiciary Committee, Barker argued that his work at the Solicitor General's Office represented positions "of my clients, as opposed to my personal positions." Nevertheless, Democrats have argued that Barker's work reflects a conservative ideology that is likely to tilt his judicial rulings.

Stephen Robert Clark Sr. is the founder and managing partner of the Runnymede Law Group in St. Louis, Missouri. Clark has advocated extensively for pro-life groups and causes, and has statements on record criticizing Roe v. Wade, Planned Parenthood, and same-sex marriage. For example, Clark advocated for medical schools to stop partnering with Planned Parenthood, suggesting that the schools were "training the abortionists of the future." Unlike the other nominees on this list, Clark did have a blue slip returned from the Democratic home-state senator, namely Sen. Claire McCaskill. Nevertheless, Clark was voted out of the Senate Judiciary Committee on an 11-10 vote, with all Democrats opposed. His nomination is expected to draw opposition from pro-choice and reproductive rights organizations.

The 37-year-old Wyrick made waves in 2017 when he became the youngest candidate to be added to the Trump Administration's Supreme Court shortlist. Wyrick, who currently serves on the Oklahoma Supreme Court, built up a record of aggressive litigation as Oklahoma Solicitor General under then-Attorney General Scott Pruitt. His nomination to the Oklahoma Supreme Court in 2017 was itself controversial due to Wyrick's purported lack of ties to the Second District, the district from which he was appointed. Since his nomination to the U.S. District Court for the Western District of Oklahoma, Wyrick has been criticized for his relative youth, lack of experience, and alleged ethical issues from his time as Solicitor General. Specifically, two incidents have been raised. First, while defending Oklahoma's death penalty protocol in Glossip v.
Gross, Wyrick's office mis-cited the recipient of a letter sent to the Texas Department of Corrections in its brief and was forced to issue a letter of correction. Additionally, Wyrick was directly called out in oral argument by Justice Sonia Sotomayor for mis-citing scientific evidence. Second, Wyrick had engaged in communications with Devon Energy, an energy company whose lobbyist had ghost-written letters sent out by Attorney General Scott Pruitt. The Leadership Conference on Civil and Human Rights has alleged that Wyrick was aware of and potentially complicit in the ghost-writing.

The 63-year-old Norris currently serves as the Majority Leader in the Tennessee State Senate. His nomination is one of the longest pending before the U.S. Senate, having been submitted on July 13, 2017. Norris has twice been voted out of the Judiciary Committee on party-line votes, with Democrats objecting to his conservative record in the Tennessee State Senate. In particular, they note that Norris pushed to block the resettlement of Syrian refugees in Tennessee, suggesting that it would allow "potential terrorists" to enter the state. For his part, Norris has argued that his work in the Tennessee State Senate was on behalf of his constituents, and that it would not animate his work on the bench.

The general counsel to the Roman Catholic Archdiocese (and the wife of former Senator David Vitter), Wendy Vitter has been nominated to the U.S. District Court for the Eastern District of Louisiana. Vitter drew criticism at her hearing for refusing to say that the Supreme Court's decision in Brown v. Board of Education was correctly decided (a decision this blog noted at the time could be justified). Vitter has also drawn sharp criticism for her pro-life and anti-birth control activism, including her apparent endorsement of the views of Angela Lanfranchi, who has suggested that taking birth control increases women's chances of being unfaithful and dying violently.
The son of a former Congressman, Howard C. Nielson Jr. has been nominated for the U.S. District Court for the District of Utah despite being based at Cooper & Kirk in Washington D.C. Nielson has two powerful Judiciary Committee members in his corner, Sens. Orrin Hatch and Mike Lee. Nevertheless, Nielson has faced strong opposition based on his work in the Office of Legal Counsel under President Bush. Specifically, Democrats have objected to Nielson's alleged involvement in the approval of the controversial memos that justified the use of torture. In his defense, Republicans have argued that Nielson was not involved in the drafting of the memos and worked to get them rescinded. Democrats also object to Nielson's work defending Proposition 8, the California ballot measure that revoked the right of same-sex couples to marry. In particular, LGBT groups have complained that Nielson moved for the presiding judge in the case, Judge Vaughn Walker, to recuse himself based on the judge's sexual orientation.

The General Counsel for Melaleuca, Inc. in Idaho Falls, Nelson's nomination to be Solicitor of the Department of the Interior was pending when he was tapped for the U.S. Court of Appeals for the Ninth Circuit. Nelson has drawn critical questions from Committee Democrats regarding his work at Melaleuca, particularly focused on his filing of defamation actions against Mother Jones for their work investigating Melaleuca founder Frank VanderSloot. The lawsuit against Mother Jones has drawn criticism for chilling First Amendment rights and trying to silence investigative journalism.

Kacsmaryk, a nominee for the U.S.
District Court for the Northern District of Texas, currently serves as Deputy General Counsel for the First Liberty Institute, a non-profit firm focused on cases involving "religious freedom." In his role, Kacsmaryk has been particularly active on LGBT rights issues, challenging the Obama Administration's efforts to ban discrimination against LGBT employees by government contractors, and its initiatives on transgender rights in public schools. In his writings, Kacsmaryk has criticized same-sex marriage alongside no-fault divorce, the decriminalization of consensual pre-marital sex, and contraception as weakening the "four pillars" of marriage. He has also lobbied for legislation exempting individuals who hold religious beliefs or moral convictions condemning homosexuality from civil rights enforcement. Kacsmaryk's advocacy has drawn the strong opposition of LGBT rights groups.

A Pittsburgh-based attorney, Porter was nominated to the U.S. Court of Appeals for the Third Circuit over the express opposition of home-state senator Bob Casey. As Republicans processed Porter over Casey's objection, Democrats raised both procedural and substantive objections to Porter, including his writings urging the Supreme Court to strike down the Affordable Care Act's individual mandate and his previous advocacy against the confirmation of Justice Sonia Sotomayor. In his own statement, Casey pulled no punches, stating that Porter had "an ideology that will serve only the wealthy and powerful as opposed to protecting the rights of all Americans."

Perhaps no lower court nominee has incited as much anger as Farr, the Raleigh-based litigator tapped for the longest pending federal judicial vacancy in the country. Farr had previously been tapped for this seat in the Bush Administration but was blocked from a final vote by the then-Democratic-controlled Senate. Through the Obama Administration, this seat was held over by Sen.
Richard Burr's refusal to return blue slips on two African American nominees, including one recommended by him. Since Farr's renomination by Trump, he has faced opposition from civil rights groups, including one that has referred to him as a "product of the modern white supremacist machine." At issue is Farr's representation of the North Carolina legislature as it passed a series of restrictive voting laws with a disproportionate impact on minority communities. Many of these restrictions were struck down by the Fourth Circuit, which noted that the laws targeted African Americans with "surgical precision." Additionally, Farr has been charged with sending out thousands of postcards to African American voters in 1990 threatening to have them arrested if they voted. (Farr has denied this latter charge, arguing that he was unaware that the postcards had been sent out.) With Democrats and civil rights groups convinced that Farr worked to disenfranchise African Americans, and Republicans equally passionate in their support, Farr's ultimate confirmation is sure to draw a level of intensity that district court judges rarely evoke.

Idaho attorney Ryan Nelson was nominated by President Trump last year to be Solicitor (chief appellate attorney) for the Department of the Interior. However, Nelson's nomination was never confirmed by the Senate. Now, Nelson is getting a shot at a different job: a lifetime appointment to the U.S. Court of Appeals for the Ninth Circuit.

Background

An Idaho native, Ryan Douglas Nelson was born in Idaho Falls in 1973. Nelson received a B.A. from Brigham Young University in 1996 and a J.D. from the J. Reuben Clark Law School at Brigham Young University.[1] After graduating from law school, Nelson clerked for Judge Karen Henderson on the U.S. Court of Appeals for the D.C.
Circuit and for Judges Charles Brower and Richard Mosk on the Iran-United States Claims Tribunal.[2] After his clerkships, Nelson joined Sidley Austin as an associate in their Washington D.C. office.[3] Five years later, he moved to the Department of Justice to be Deputy Assistant Attorney General for the Environment and Natural Resources Division.[4] In 2008, Nelson moved to the Executive Office of the President as Deputy General Counsel and briefly worked as Special Counsel for the Senate Committee on the Judiciary, focusing on the nomination of Justice Sotomayor. In 2009, Nelson returned to Idaho Falls to be General Counsel for Melaleuca, Inc., an online wellness products company.[5] He is still with the company.[6] On July 31, 2017, Nelson was nominated by Trump to be Solicitor to the Department of the Interior.[7] On September 19, the nomination was unanimously voted out by the Senate Energy and Natural Resources Committee. However, soon after, his nomination, alongside three others, was blocked by Sen. Richard Durbin (D-Ill.) as part of his objection to the Administration's national monuments policy.[8] At the end of 2017, senators were unable to reach an agreement to hold over Nelson's nomination and it was returned to the President. In 2018, Trump renominated Nelson to be Solicitor to the Department of the Interior. However, his nomination was then blocked by Sen. Bill Nelson (D-Fla.) as part of negotiations with Zinke over drilling off the coast of Florida.[9] As such, Nelson's nomination was still pending when his name was announced for the Ninth Circuit, and was withdrawn as his new nomination reached the Senate.

History of the Seat

Nelson has been nominated for an Idaho seat on the U.S. Court of Appeals for the Ninth Circuit. This seat is scheduled to open on August 11, 2018 when Judge Norman Randy Smith moves to senior status.
In November 2017, while his nomination to be Solicitor for the Department of the Interior was pending, Nelson expressed his interest in the Ninth Circuit to Idaho senators.[10] In February 2018, Nelson interviewed with the White House Counsel's Office and was formally nominated on May 15, 2018.[11]

Political Activity & Memberships

Nelson has been a member of the Idaho Republican Party since 2010, including serving as the Chairman for the 2012 caucus in Idaho Falls.[12] Nelson also volunteered on the Romney Presidential Campaign in 2012 and worked as a legal advisor for President Bush's re-election campaign in 2004.[13] Additionally, Nelson has occasionally donated to Republican candidates, including a $2000 donation to Romney in 2011.[14] Nelson has also donated to U.S. Senators Mike Lee, James Risch, and Marco Rubio.[15] Furthermore, Nelson has been a member of the Federalist Society for Law and Public Policy Studies (a conservative legal organization that is the source of many Trump nominees) since 1997.[16]

Legal Experience

After his clerkships, Nelson spent five years working as an associate at Sidley Austin. In this role, Nelson handled primarily civil and appellate law. Among the matters he handled at Sidley, Nelson defended a corrections contractor against a civil suit alleging the abuse of undocumented immigrants at the contractor's facilities.[17] Nelson was also part of the legal team supporting a suit brought by the State of Utah against efforts by the Census Bureau to fill in gaps in its work.[18] From 2006 to 2008, Nelson served as Deputy Assistant Attorney General for the Department of Justice, defending agency decisions on land use, environmental, and energy issues.
In this role, Nelson personally argued 13 appeals, including the defense of using purse-seine nets in tuna farming despite the impact on dolphin populations.[19] Notably, Nelson argued that the presence of a Latin cross in a San Diego war memorial did not violate the Establishment Clause of the U.S. Constitution.[20] While U.S. District Judge Larry Burns upheld the cross' constitutionality, the Ninth Circuit eventually reversed.[21] Since 2009, Nelson has been Counsel to Melaleuca, Inc., an Idaho Falls-based wellness company. During Nelson's tenure as Counsel, Melaleuca and its founder Frank VanderSloot filed a defamation suit against Mother Jones magazine for its coverage of VanderSloot's political advocacy, including his alleged "outing" of Idaho investigative reporter Peter Zuckerman as gay.[22] A second defamation suit was filed against Zuckerman after he complained about the outing on the Rachel Maddow Show.[23] Ultimately, the suit against Mother Jones was dismissed on First Amendment grounds,[24][25] while the suit against Zuckerman was eventually settled.[26]

Overall Assessment

The Ninth Circuit has a (somewhat undeserved) reputation as an overly liberal court, and has attracted the President's scorn for some of its rulings. As such, the nomination of the conservative Nelson could be touted (in some circles) as an effort to shift the court to the right. But setting the ideology of the pick aside, Nelson's background in environmental law is particularly suited to the Circuit covering some of the country's most scenic public lands. This is not to say that Nelson will have an easy confirmation. Specifically, senators may question Nelson's role in the defamation actions against Mother Jones and reporter Peter Zuckerman. Given the ultimate dismissal of the suit, senators may probe Nelson's views of defamation litigation, as well as his perspective on New York Times v. Sullivan and the freedom the press is given in reporting on matters of public concern.
Ultimately, Nelson’s confirmation will likely turn on such questions.
Total;http://www.total.fr/
Dexia;http://www.dexia.com/f/home/home.php
Technip;http://www.technip.com/francais/index.html
Peugeot;http://www.peugeot.com/FR.aspx
PPR;http://www.ppr.com/
Renault;http://www.renault.com/fr/pages/home.aspx
Alcatel-Lucent;http://www.alcatel-lucent.fr/wps/portal?lu_lang_code=fr_WW
Lafarge;http://www.lafarge.fr/
Veolia;http://www.veolia.com/fr/
Essilor;http://www.essilor.fr/
Alstom;http://www.alstom.fr/home//index.FR.php?languageId=FR
Bouygues;http://www.bouygues.com/
Unibail;http://www.unibail.fr/unibail-rodamco/do/Accueil
Saint-Gobain;http://www.saint-gobain.com/
Crédit-Agricole;http://www.credit-agricole.com/groupe-credit-agricole/index.html
Pernod-Ricard;http://www.pernod-ricard.com/fr/
Schneider-Electric;http://www.schneider-electric.fr/
LVMH;http://www.lvmh.fr/
Vinci;http://www.vinci.com/
EDF;http://www.edf.fr/edf-fr-accueil-1.html#Accueil
Loreal;http://www.loreal.fr/_fr/_fr/index.aspx
Air-Liquide;http://www.france.airliquide.com/
Carrefour;http://www.carrefour.com/cdc/accueil/
Société-Générale;http://www.societegenerale.com/
Danone;http://www.danone.com/?lang=fr
Danone-et-Vous;http://www.danoneetvous.com/
BNP-Paribas;http://www.bnpparibas.com/
Axa;http://www.axa.com/fr/
Vivendi;http://www.vivendi.fr/vivendi/-accueil-fr-
France-Telecom;http://www.francetelecom.com/fr_FR/
Gaz-de-France;http://www.gazdefrance.fr/
Sanofi-Aventis;http://www.sanofi-aventis.com/
By announcing a special grant of Rs. 400 crore to improve the Brahmaputra's water-holding capacity that will, in turn, help flood control in Assam, the prime minister has indirectly echoed West Bengal Chief Minister Mamata Banerjee's concern that the current spate of floods is 'man-made' and that not enough is done to maintain the depth of rivers. Mamata was concerned mainly about West Bengal, where 14 of its agriculturally rich districts were flood-ravaged this time, killing at least 50 people and making lakhs homeless. A good part of the latest flood havoc in Bengal was caused by massive water discharge by the Damodar Valley Corporation (DVC), the earliest public sector undertaking, set up soon after India's independence. West Bengal witnessed its worst-ever floods in 1943 from the Damodar. The river spans an area of 25,235 sq. km covering Jharkhand and West Bengal. The catastrophe caused by the 1943 flood led to serious public indignation against the then British government. Mamata Banerjee pointed out that DVC's discharge was primarily responsible for regular flash floods. DVC has failed to control the growing silt deposits in the riverbed by proper dredging and creating strong embankments to prevent erosion of its banks. DVC has a network of four multipurpose dams - Tilaiya and Maithon on the Barakar River, Panchet on the Damodar and Konar on the Konar River - and the Durgapur barrage on the Damodar. Over the years, sheer neglect by DVC led to the river's inability to hold water to prevent floods, especially in south Bengal districts. The DVC dams were built to store 1,292 mcm of water as 'flood reserve capacity.' This can moderate a peak flood of 18,395 cumec to a safe carrying capacity of 7,076 cumec. The Durgapur barrage is supposed to supply irrigation water to the Burdwan, Bankura and Hooghly districts.
While the prime minister's intention to provide the special fund to improve the water-holding capacity of the Brahmaputra in Assam is highly laudable, the investment may go totally to waste if the river is not managed properly. In fact, the union government may be required to invest a lot more funds to control the mighty Brahmaputra river and its banks during the monsoon. Incidentally, China is building a massive dam on the Brahmaputra river close to its source in Tibet. A large water release from the upper Brahmaputra dam could cause disaster in Assam and also in parts of Bangladesh. Therefore, proper Brahmaputra river management on the Indian side becomes very important. The prime minister's special provision of Rs. 400 crore for the purpose, as part of a Rs. 2,700 crore flood relief package announced last week for the north-eastern region, assumes a great significance. Narendra Modi himself went to inspect the flood-ravaged region and held separate review meetings with the chief ministers of Assam, Arunachal Pradesh, Manipur and Nagaland. The floods have claimed nearly 100 lives, mostly in Assam, and displaced over two lakh people. In Arunachal Pradesh, 14 were reportedly killed in landslides, while 20 lost their lives in Nagaland floods. Modi also announced the setting up of a committee to study ways to synergize efforts towards finding a long-term solution to the problem of recurrent floods in the region. Initially, Rs 100 crore was earmarked for the purpose, which will include studies on the Brahmaputra and its tributaries besides all other major rivers of the region. In West Bengal, the 'man-made' flood damaged around 59,398 hectares of paddy seedbed out of the 3,17,675 hectares of cultivated land in the three districts. Overall, the latest flood damaged some 1,79,000 hectares of paddy seedbeds out of the total of nearly eleven lakh hectares. The state agriculture department will soon start distributing paddy seeds to farmers.
Incidentally, the prime minister's own state of Gujarat is probably the worst affected by this year's heavy rains and floods, especially in Banaskantha and Patan districts. Gujarat floods have caused the biggest death toll, over 220. Ironically, the weather department said the monsoon was normal in most states in the country except parts of south India. The question is: if the rainfall is normal, what is causing the abnormal floods? To what extent are these floods truly man-made? Few will disagree that floods in urban areas are mostly caused by poor drainage and drain management systems, haphazard and illegal constructions and conversion of water bodies and wetlands into residential blocks. The rural areas are often victims of the neglect of nearby rivers, rivulets and canals. Inter-river linkage, building of embankments for large rivers, banning riverbed quarrying and riverbank sand mining would have certainly lowered the prospects of floods in most parts of the country. In this context, a report by the Comptroller and Auditor General shows how careless the authorities concerned have been in river and rain water management. The latest CAG report on "schemes for flood control and flood forecasting" tabled in Parliament pointed out that "there were huge delays in completion of river management activities" leading to flood problems in Assam, north Bihar and eastern Uttar Pradesh, among others. It said that "there were discrepancies in execution of works." CAG sampled 206 flood management projects, 38 flood forecasting stations, 49 river management activities and works related to border area projects and 68 large dams in 17 selected states and union territories during 2007-08 to 2015-16. The report also stressed official apathy towards flood control measures. Recommendations of the Rashtriya Barh Ayog (National Flood Commission) regarding "identification of areas affected by flood" remain unfulfilled, it noted. 
One only hopes that the prime minister's fund provision for Brahmaputra flood control does not get washed away and become a victim of such practices as those witnessed in the management of the 69-year-old DVC, originally designed on the lines of the USA's famous Tennessee Valley Authority. SRINAGAR, Aug 8: The cross-LoC trade on the Srinagar-Muzaffarabad road resumed today, more than two weeks after it was suspended following the recovery of heroin and brown sugar from a truck coming from Pakistan Administered Kashmir. Ahead of the resumption, authorities sealed the Customs Department office at Salamabad Trade Facilitation Centre at Uri on the Srinagar-Muzaffarabad route after the staff posted there “absconded” from duty, officials said. “We sealed the Customs’ office for security reas
Evaluation of new immunological targets in neuromyelitis optica. The detection of reactivity against autoantigens plays a crucial role in the diagnosis of autoimmune diseases. However, only a few autoantibodies are known in each disease, and their targets are often not precisely defined. In neuromyelitis optica (NMO), an autoimmune disease of the central nervous system, anti-aquaporin 4 antibodies are currently the only available immunological markers, although they are not detected in 10-50% of patients. Using enzyme-linked immunosorbent assays, we evaluated the reactivity against 19 structurally defined peptides in 26 NMO sera compared with 21 healthy subjects. We observed increased levels of IgG against myelin basic protein sequence MBP(156-175), pyruvate dehydrogenase sequence PDH(167-186) and CSF114(Glc), the last of these having a possible correlation with onset of inflammatory relapse. These preliminary results may suggest that aquaporin 4 is not the unique target in NMO and that the study of reactivity against these peptides would be helpful for the diagnosis and follow-up of the disease. Complementary studies are however warranted to confirm these results.
224 U.S. 262 (1912) HOLT, TRUSTEE IN BANKRUPTCY OF DAVIS, KELLY & CO., v. CRUCIBLE STEEL COMPANY OF AMERICA. No. 183. Supreme Court of United States. Argued March 4, 1912. Decided April 1, 1912. APPEAL FROM THE CIRCUIT COURT OF APPEALS FOR THE SIXTH CIRCUIT. Mr. H.H. Nettelroth, with whom Mr. John C. Doolan was on the brief, for appellant. Mr. Keith L. Bullitt, with whom Mr. Wm. Marshall Bullitt was on the brief, for appellee. *264 MR. JUSTICE VAN DEVANTER delivered the opinion of the court. This appeal brings up for review a decree reversing an order of the District Court for the Western District of Kentucky in a proceeding in bankruptcy. The matter in dispute is the validity, under the recording law of that State, of an unrecorded chattel mortgage as against creditors who became such after the mortgage was given, and without knowledge of it, where none of them had secured a lien upon the mortgaged property by execution, *265 attachment or otherwise. The mortgagee, in making proof of its claim, asserted a lien under the mortgage and sought priority of payment out of the proceeds of the property covered by it. The claim was allowed, but the District Court, being of opinion that the mortgage was invalid as against the subsequent creditors without notice, held that it gave no right to priority of payment as against them. The mortgagee appealed to the Circuit Court of Appeals, and that court, taking the view that the mortgage was valid as against those creditors, since none had secured any specific lien upon the mortgaged property, sustained the right to priority asserted by the mortgagee. 174 Fed. Rep. 127. The trustee prosecutes the present appeal. Section 67a of the Bankruptcy Act declares: "Claims which for want of record or for other reasons would not have been valid liens as against the claims of the creditors of the bankrupt shall not be liens against his estate." And the applicable provision of the recording law of Kentucky (Stat. 
1903, § 496) is as follows: "No deed or deed of trust or mortgage conveying a legal or equitable title to real or personal estate shall be valid against a purchaser for a valuable consideration, without notice thereof, or against creditors, until such deeds shall be acknowledged or proved according to law, and lodged for record." It is apparent from the language of § 67a and from the decisions of this court in York Manufacturing Co. v. Cassell, 201 U.S. 344; Thomas v. Taggart, 209 U.S. 385, and other like cases, that the effect to be given to the unrecorded chattel mortgage must be determined by the recording law of the State; and it is also apparent that the question arising under that law turns upon who are included in the term "creditors" in § 496. Upon that question the decisions of the Court of Appeals *266 of the State have not been uniform, but it is conceded, and is evident upon an examination of the more recent decisions, that the term does not include antecedent creditors, or subsequent creditors whose claims are acquired with notice of the unrecorded mortgage, but does include subsequent creditors, without notice, who by their diligence secure a specific lien upon the property, as by execution or attachment, before the mortgage is recorded. Baldwin v. Crow, 86 Kentucky, 679; Wicks v. McConnell, 102 Kentucky, 434; Clift v. Williams, 105 Kentucky, 559; Bowles' Ex'r v. Jones, 123 Kentucky, 395; Swafford's Adm'r v. Asher, 105 S.W. Rep. 164. And so, the question for decision is reduced to this: Does the term include subsequent creditors, without notice, who have not secured such a lien? No case in that court has been called to our attention, and none has been found by us, in which this question was presented for decision and decided; but in two of the later cases there are expressions bearing thereon which are respectively relied upon here. Thus, in Wicks Bros. v. 
McConnell, supra, where the prior cases were reviewed with the evident purpose of extracting a general and guiding rule, it was said: "On the one hand, the unrecorded lien is upheld as against creditors who cannot be presumed to have given credit upon the faith of the property held in lien. On the other hand, creditors who may be presumed on such faith to have given credit are protected as against the secret lien in the rights which they secure by their diligence in the levy of their execution or attachment." (Italics ours.) And in Swafford's Adm'r v. Asher, supra, it was said: "As the mortgage was not recorded, it would, of course, not be valid as to creditors whose debts were subsequently created; but as to those whose debts were created prior to the purchase of the teams and the mortgage upon them the lien is valid, although not recorded as required by § 496 of the Kentucky Statutes of 1903, and, *267 as said before, there is nothing to show that any debt of the estate was created after the purchase of the teams, except that of appellant, who had actual notice." As Wicks v. McConnell was cited as sustaining this statement, it is not probable that the court regarded it as overruling or departing from what had been said in that case; and this view receives added support from the fact that the opinion in Swafford's Adm'r v. Asher was marked by the court "Not to be officially reported." These considerations, coupled with the further fact that in cases such as Bowles' Ex'r v. Jones, supra, where subsequent creditors prevailed over such a mortgagee, the court was careful to state, not only that the claims of the creditors arose after the date of the unrecorded mortgage, but also that the creditors had obtained attachment or other liens upon the mortgaged property before the mortgage was recorded, are persuasive that what was said in Wicks Bros. v. 
McConnell should be accepted as reflecting the true construction of § 496, in the absence of some more positive and direct ruling upon the subject by the Court of Appeals of the State. Such was the view of the Circuit Court of Appeals, and we are at least unable to say that it was wrong. It follows that, as here the subsequent creditors had not fastened any lien upon the property covered by the mortgage prior to the proceedings in bankruptcy by which the title passed to the trustee, the mortgage, although unrecorded, was valid and effective against them. Decree affirmed.
/// Copyright (c) 2012 Ecma International. All rights reserved.
/**
 * @path ch15/15.2/15.2.3/15.2.3.3/15.2.3.3-4-104.js
 * @description Object.getOwnPropertyDescriptor returns data desc for functions on built-ins (Math.floor)
 */
function testcase() {
    var desc = Object.getOwnPropertyDescriptor(Math, "floor");
    if (desc.value === Math.floor &&
        desc.writable === true &&
        desc.enumerable === false &&
        desc.configurable === true) {
        return true;
    }
}
runTestCase(testcase);
This invention relates to improvements in methods of and in apparatus for ascertaining and utilizing certain parameters of plain or filter cigarettes, cigars, cigarillos, filter rod sections and certain other rod-shaped articles. More particularly, the invention relates to improvements in methods of and in apparatus for ascertaining the diameters of rod-shaped articles while the articles move lengthwise, e.g., for ascertaining the diameter of a cigarette rod which is caused to move lengthwise through a monitoring station prior to being subdivided into discrete plain cigarettes of unit length or multiple unit length. The invention also relates to improvements in methods of and in apparatus for altering or correcting the diameter of a rod-shaped article which is caused to advance lengthwise, which tends (at least at times) to exhibit or develop a diameter which departs from a desired or required or optimum value, and wherein a rod-like filler is surrounded by a tubular envelope or wrapper of cigarette paper, artificial cork or other so-called tipping paper or other web-like wrapping material for plain or filter cigarettes or the like. The invention further relates to improvements in machines (such as production lines each of which includes a cigarette maker, a maker of or a storage facility for tipping paper, a maker of or a magazine for filter mouthpieces and a maker of filter cigarettes or analogous rod-shaped products of unit length or multiple unit length) wherein the diameter(s) of a running rod-shaped article or of several running rod-shaped articles is or are or can be influenced by signals denoting the ascertained diameters of finished or partly finished rod-shaped articles. 
Although the method and the apparatus of the present invention can be put to use for the monitoring of diameters of a wide variety of rod-shaped articles, one of their presently preferred uses is in connection with the mass production of rod-shaped articles which can constitute smokers' products (with or without filter mouthpieces) or which constitute filters for smoke (such as mouthpieces for use in the making of filter cigarettes, filter cigarillos and the like). An important aspect of the making of high-quality rod-shaped smokers' products (such as filter cigarettes) is to ensure that all components of such articles exhibit diameters which match or at least very closely approach predetermined diameters. For example, a continuous cigarette rod wherein a so-called rod-like filler of natural, artificial and/or reconstituted tobacco is confined in a tubular envelope or wrapper of cigarette paper or the like must or should have a predetermined (optimum) diameter, especially if the rod is to be subdivided into plain cigarettes of unit length or multiple unit length. If the thus obtained plain cigarettes are to be packed and sold as plain cigarettes, adherence to a predetermined optimum diameter is desirable for the convenience of assembling such plain cigarettes into arrays (e.g., into so-called quincunx formations wherein a median layer of six parallel cigarettes is flanked by two layers of seven parallel cigarettes each, and wherein the cigarettes of the median layer are staggered (offset) relative to cigarettes in the outer layers). Adherence to an optimum diameter is desirable on the additional ground that it enhances the appearance of the cigarettes and ensures the making of a reliable seam (where the two marginal portions of the wrapper overlie and adhere to each other) of constant width. 
It is perhaps even more important to ensure that a cigarette which is to be assembled with a filter mouthpiece in a so-called tipping machine exhibit a predetermined diameter, at least at one of its ends, because this ensures the making of a reliable leakproof connection between one end of the plain cigarette and one end of the mouthpiece. The connection (which is normally established by a convoluted strip of tipping paper, such as artificial cork) is much more likely to be leakproof if the diameter of the one end of the plain cigarette matches the diameter of the adjacent end of the mouthpiece. This applies irrespective of the exact mode of making filter cigarettes. A presently preferred mode is disclosed in commonly owned U.S. Pat. No. 5,135,008 granted Aug. 4, 1992 to Oesterling et al. for “METHOD OF AND APPARATUS FOR MAKING FILTER CIGARETTES”. Penetration of uncontrollable quantities of air to a filter cigarette at a leaky junction between the plain cigarette and the filter mouthpiece is undesirable in spite of the fact that it is often desirable or even necessary to perforate the wrapper of a plain or filter cigarette in order to admit atmospheric air in quantities which are deemed desirable in order to exert a beneficial influence upon the nicotine and/or condensate content of tobacco smoke. Reference may be had, for example, to U.S. Pat. No. 4,121,595 granted Oct. 24, 1978 to Heitmann et al. for “APPARATUS FOR INCREASING THE PERMEABILITY OF WRAPPING MATERIAL FOR ROD-SHAPED SMOKERS' PRODUCTS”. German patent No. 34 14 247 A1 discloses a method of and an apparatus for pneumatically ascertaining the diameters of rod-shaped articles. The patent proposes the utilization of air at constant pressure and substantially continuous monitoring of the diameter of a continuously advanced rod-shaped article. The monitoring device comprises a nozzle defining a small annular testing chamber which surrounds the continuously advancing rod-shaped article. 
The nozzle is operatively connected with a testing unit which is set to respond to air pressure below that required to effect a deformation of the tested article. Furthermore, the nozzle is integrated into a rod guiding arrangement in such a way that the testing chamber and the guiding arrangement flank a larger expansion chamber which communicates with the atmosphere. An optical measuring system for the diameters of rod-shaped commodities is disclosed in German patent No. 195 23 273 A1 and in the corresponding U.S. Pat. No. 5,715,843 granted Feb. 10, 1998 to Hapke et al. for “METHOD OF AND APPARATUS FOR MEASURING THE DIAMETERS OF ROD-SHAPED ARTICLES OF THE TOBACCO PROCESSING INDUSTRY”. These patents propose to rotate a practically finished cigarette about its axis during continuous or discontinuous sidewise movement and to simultaneously direct against the cigarette a laser beam. The amounts of intercepted radiation are indicative of the diameters of the respective articles; such amounts are monitored by a camera serving to generate electric signals which are processed into second signals denoting the diameters of discrete successively tested cigarettes and/or the average diameters of series of successively tested cigarettes. German patent No. 38 06 320 A1 proposes a method of and an apparatus for monitoring the diameter of the tubular wrapper surrounding a rod-like filler of tobacco or filter material for tobacco smoke. A first measuring unit is employed to ascertain the width of the web or strip which is to be converted into the tubular wrapper, and a second measuring unit serves to monitor the width of the seam which is established by the overlapping marginal portions of the tubular wrapper, i.e., of the converted web or strip. 
An evaluating arrangement is employed to process the signals denoting the width of the web and the signals denoting the width of the seam into further (difference) signals which are indicative of the diameter of the tubular wrapper, i.e., of the article consisting of a rod-like filler and the tubular wrapper around it. German patent No. 27 17 473 A1 proposes a control arrangement for a combination of a cigarette rod maker and a filter tipping machine which latter is directly coupled to the maker and is set up to turn out filter cigarettes. The filter tipping machine includes a measuring arrangement which is designed to detect fluctuations of the diameters of filter rod sections which are to be united with plain cigarettes to form therewith filter cigarettes of desired length. The maker of plain cigarettes is provided with a control unit which can influence the diameter of the cigarette rod being produced therein. The measuring arrangement of the filter tipping machine serves to transmit to the control unit of the maker a series of reference signals or desired-value signals. Such combination of the measuring arrangement and of the control unit is intended to enable the maker to turn out plain cigarettes having diameters best suited for attachment to the filter mouthpieces which are being processed in the tipping machine.
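The difference-signal computation described above admits a simple geometric reading. This is an inference from the description, not a formula stated in the patent: if the web has width w and the overlapping marginal portions form a seam of width s, the paper length wraps a circumference of w minus the overlap, so the wrapper diameter d follows as

    c = w - s,    d = c / pi = (w - s) / pi

Under this reading, the two width signals alone suffice to monitor the diameter, which is why no direct diameter gauge is needed at that point in the machine.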
This patch drops the powerpc-specific irq_map table and replaces it with directly using the irq_alloc_desc()/irq_free_desc() interfaces for allocating and freeing irq_desc structures. This patch is a preparation step for generalizing the powerpc-specific virq infrastructure to become irq_domains. As part of this change, the irq_big_lock is changed to a mutex from a raw spinlock. There is no longer any need to use a spin lock since the irq_desc allocation code is now responsible for the critical section of finding an unused range of irq numbers. The radix lookup table is also changed to store the irq_data pointer instead of the irq_map entry since the irq_map is removed. This should end up being functionally equivalent since only allocated irq_descs are ever added to the radix tree.
v5:
 - Really don't ever allocate virq 0. The previous version could still do it if hint == 0
 - Respect irq_virq_count setting for NOMAP. Some NOMAP domains cannot use virq values above irq_virq_count.
 - Use numa_node_id() when allocating irq_descs. Ideally the API should obtain that value from the caller, but that touches a lot of call sites so will be deferred to a follow-on patch.
 - Fix irq_find_mapping() to include irq numbers lower than NUM_ISA_INTERRUPTS. With the switch to irq_alloc_desc*(), the lowest possible allocated irq is now returned by arch_probe_nr_irqs().
v4:
 - Fix incorrect access to irq_data structure in debugfs code
 - Don't ever allocate virq 0
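The allocation pattern the patch describes (a generic allocator handing out irq numbers, with a reverse map populated only for live descriptors) can be sketched in plain C. This is a hypothetical user-space model, not the actual kernel implementation: the function names mimic the kernel interfaces, a bitmap plays the role of the descriptor allocator, and a flat array stands in for the radix tree.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical user-space model of the allocation pattern in the patch
 * notes -- NOT the real kernel API.  irq numbers come from a generic
 * allocator instead of a fixed irq_map table, virq 0 is never handed out,
 * and the reverse map (standing in for the radix tree) holds pointers
 * only for numbers that are actually allocated. */

#define NR_IRQS 64

struct irq_desc { unsigned int irq; };

static unsigned long long bitmap;          /* one bit per irq number */
static struct irq_desc *revmap[NR_IRQS];   /* stand-in for the radix tree */

/* Allocate the first free irq number at or above 'hint', skipping virq 0. */
static int irq_alloc_desc_from(unsigned int hint)
{
    unsigned int i;
    if (hint == 0)
        hint = 1;                          /* never allocate virq 0 */
    for (i = hint; i < NR_IRQS; i++) {
        if (!(bitmap & (1ULL << i))) {
            struct irq_desc *d = malloc(sizeof(*d));
            if (!d)
                return -1;
            bitmap |= 1ULL << i;
            d->irq = i;
            revmap[i] = d;
            return (int)i;
        }
    }
    return -1;                             /* no free irq numbers */
}

static void irq_free_desc(unsigned int irq)
{
    bitmap &= ~(1ULL << irq);
    free(revmap[irq]);
    revmap[irq] = NULL;
}

/* Lookup succeeds only for allocated descriptors, as with the radix tree. */
static struct irq_desc *irq_find_desc(unsigned int irq)
{
    return irq < NR_IRQS ? revmap[irq] : NULL;
}
```

Because the allocator itself owns the critical section of finding a free number, callers no longer need a raw spinlock around a shared map, which is what lets irq_big_lock become a mutex.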
The Hearthstone Phoenix 8612 woodstove, the hybrid of woodstoves, is compact yet powerful enough to keep your home heated for up to 12 hours. It features Hearthstone's high-end cast iron and stone design and produces up to 60,000 BTUs at 75% efficiency. We offer all available replacement parts for the Phoenix 8612, but only a few are listed on our website. If you cannot find the part you are looking for, please call, email, or fill out a Parts Request Form and we will be happy to assist you.
Attached is a copy of the materials prepared for the 14 June meeting on VP external hires and internal promotions. If you have any questions or concerns please contact Gina Corteselli. Thanks in advance, Jackie Martin 713-345-3563 for Gina Corteselli 713-345-3377 Global Performance Management
Philomena The triumvirate of creative power behind this film is reason enough to see it; they all deliver. In fact, it makes it rather hard to know where to start talking about it. So let me start with the title character played by Dame Dench (Iris, The Best Exotic Marigold Hotel). She creates a subtle character unlike most of her previous CV. We are used to seeing her tough, sharp-tongued, capable, and, even this late in her career, able to pull off being a Bond girl, as she proved in Skyfall. Philomena is a slightly dotty, but not unintelligent, woman with personal, inner strength. It is a complex role that takes time to develop as her world is challenged and expanded. Opposite Dench, Coogan (What Maisie Knew, Hamlet 2) provides the drive and proxy for both us and Philomena as the two dig into the past and cope with the issues. This is really Coogan at his best: controlled, intense, vulnerable, intelligent. And despite being a more physically imposing presence on the screen, he manages to cede focus to Dench almost entirely, by design and generosity. But Coogan’s contributions don’t end with his acting; he also co-wrote the script for the film with Pope. Neither man had much big-screen script exposure before this adaptation, but you’d never notice the gap. Structured wonderfully to provide both background and Philomena’s inner thoughts and fantasies, the story expands beyond a simple search, laying the world and history bare. Supporting these two are a few faces that have good turns. Mosaku (Dancing on the Edge), Winningham (Mildred Pierce), Jefford (The Ninth Gate), and Clark, as the young Philomena, all provide important moments or expansions of Philomena’s world. Finally, Frears, as director, wrangles this very intimate but expansive story into an intense and satisfying nugget that talks about so much more than the story at hand. The critically acclaimed Frears was, in some ways, perfectly made for this story. 
His previous films cover a huge variety of story types, almost all successfully. From Lay the Favorite, High Fidelity, and My Beautiful Laundrette, to The Queen, Dangerous Liaisons, and Mary Reilly, he covers quite the range of human experience and genre. His quiet hand is nearly invisible as the story unfolds, but the result will catch you off-guard without getting overly sentimental, despite the subject matter. Philomena, particularly Dench, is well in the running for the Oscars, already having secured other honors along the way. But even if it didn’t win a single statuette more, you should take the time to see this film.
Colorado Court of Appeals Opinions || September 10, 2015 2015 COA 124. No. 14CA0273. Walker v. Ford Motor Company.   COLORADO COURT OF APPEALS 2015 COA 124 Court of Appeals No. 14CA0273 Boulder County District Court No. 11CV912 Honorable Maria E. Berkenkotter, Judge Forrest Walker, Plaintiff-Appellee, v. Ford Motor Company, Defendant-Appellant. JUDGMENT REVERSED AND CASE REMANDED WITH DIRECTIONS Division IV Opinion by JUDGE TERRY Graham, J., concurs Webb, J., specially concurs Announced September 10, 2015 Purvis Gray, LLP, John A. Purvis, Michael J. Thomson, Boulder, Colorado, for Plaintiff-Appellee Wheeler Trigg O’Donnell, LLP, Edward C. Stewart, Jessica G. Scott, Theresa R. Wardon, Denver, Colorado; Donohue Brown Mathewson Smyth LLC, Mark H. Boyle, Chicago, Illinois, for Defendant-Appellant   ¶1         In this products liability action based on strict liability and negligence, defendant, Ford Motor Company, appeals the trial court’s judgment entered on a jury verdict in favor of plaintiff, Forrest Walker. Walker claimed to have sustained a traumatic brain injury and soft tissue neck injuries as a result of a car accident, in part because the driver’s seat in his 1998 Ford Explorer was defectively designed. ¶2         The main issue on appeal is whether the trial court’s instruction to the jury in accordance with CJI-Civ. 4th 14:3 (2015), which discusses the “consumer expectation” test, is correct. We are reluctant to conclude that a trial court errs where it gives an instruction that complies with the Colorado Jury Instructions. See Fishman v. Kotts, 179 P.3d 232, 235 (Colo. App. 2007) (“When instructing a jury in a civil case, the trial court shall generally use those instructions contained in the Colorado Jury Instructions (CJI-Civ.) that apply to the evidence under the prevailing law.” (citing C.R.C.P. 51.1(1))). 
But if such an instruction misstates the law and the resulting error was not harmless, we are compelled to reverse. See Fed. Ins. Co. v. Pub. Serv. Co., 194 Colo. 107, 110, 570 P.2d 239, 241 (1977) (Despite the hard work done by a scholarly committee to cause the Civil Jury Instructions to reflect the prevailing law, “[t]he trial court still has the duty to examine the prevailing law to determine whether a CJI instruction is applicable to the facts of the particular case and states the prevailing law.”); see also C.A.R. 35(e) (An “appellate court shall disregard any error or defect not affecting the substantial rights of the parties.”); C.R.C.P. 61 (“The court at every stage of the proceeding must disregard any error or defect in the proceeding which does not affect the substantial rights of the parties.”). ¶3         We conclude that the first sentence of CJI-Civ. 4th 14:3 misapplies Colorado law, and that the error in providing that instruction to the jury was not harmless. We therefore reverse and remand for a new trial. Because of our conclusion, we also necessarily disagree with the decision of a division of this court in Biosera, Inc. v. Forma Scientific, Inc., 941 P.2d 284 (Colo. App. 1996), aff’d on other grounds, 960 P.2d 108 (Colo. 1998), to the extent it indicated that an instruction on the consumer expectation test can be given in addition to an instruction on the risk-benefit test. I. Background ¶4         While driving his 1998 Ford Explorer, Walker was rear-ended by another vehicle, and his car seat yielded rearward. Walker suffered head and neck injuries, and claimed that they resulted from hitting his head on the rear seat when his seat deformed. After Walker settled his claims against the other driver, he proceeded to trial against Ford on the theory that the driver’s seat was defective. ¶5         Walker’s complaint alleged the following with respect to strict products liability: The Explorer was defective and unreasonably dangerous . . . 
in at least the following respects: (a) The lever-activated recliner incorporated in the driver’s seat of the Explorer did not adequately and sufficiently secure the seat back so as to prevent against its disengaging and causing the seat back to drop suddenly and violently backward and downward toward the vehicle floor. (b) The configuration of the seat and the lever-activated recliner permitted the seat belt to catch or hook onto the recliner lever and disengage the recliner mechanism, causing sudden and violent disengagement and sudden and violent drop of the seat back to the rear and downwards. ¶6         Walker also asserted a negligence claim, alleging that Ford failed to exercise reasonable care in the design, manufacture, distribution, and sale of the vehicle, so as to avoid and prevent any unreasonable risk of injury or harm to persons who would be affected by such risk. He presented evidence at trial aimed at substantiating these allegations. ¶7         Before trial, Walker did not specifically assert a negligence claim based on Ford’s duty to warn of a defect. However, at trial, Ford sought, and the trial court gave, a jury instruction on duty to warn of a product defect. ¶8         After the close of Walker’s evidence, Ford moved for a directed verdict, arguing that Walker had failed to prove a design defect and had failed to prove that any defect caused him to incur injuries over and above those he would have suffered in the absence of the alleged defect. Ford also argued that Walker presented no evidence supporting a claim of negligent failure to warn. The trial court denied Ford’s motion. ¶9         The jury returned a verdict in Walker’s favor, both on the claim for sale of a defective product and on the negligence claim. Ford filed a motion for a new trial or for judgment notwithstanding the verdict. The motion was deemed denied by the trial court’s failure to rule on the motion within the time provided in C.R.C.P. 59(j). II. 
Consumer Expectation Test versus Risk-Benefit Test ¶10         Relying on Camacho v. Honda Motor Co., 741 P.2d 1240 (Colo. 1987), and Ortho Pharmaceutical Corp. v. Heath, 722 P.2d 410 (Colo. 1986), overruled in part by Armentrout v. FMC Corp., 842 P.2d 175 (Colo. 1992), Ford argues that it was reversible error for the trial court to give instruction number 18. That instruction is based on CJI-Civ. 4th 14:3, and says: A product is unreasonably dangerous because of a defect in its design if it creates a risk of harm to persons or property that would not ordinarily be expected or is not outweighed by the benefits to be achieved from such a design. A product is defective in its design, even [if] it is manufactured and performs exactly as intended, if any aspect of its design makes the product unreasonably dangerous. (Emphasis added.) ¶11         The phrase “creates a risk of harm to persons or property that would not ordinarily be expected” in the instruction embodies a concept known as the consumer expectation test, derived from Restatement (Second) of Torts § 402A cmt. i (1965). White v. Caterpillar, Inc., 867 P.2d 100, 105 (Colo. App. 1993). ¶12         The phrase “is not outweighed by the benefits to be achieved from such a design” references the “risk-benefit” test, first adopted in Colorado in Ortho, 722 P.2d at 414. The use of the word “or” between the two phrases allows the jury to find for the plaintiff if either of the two tests is met. ¶13         In Biosera, a division of this court determined that the two tests are not mutually exclusive and that it was not error for the trial court there to give instructions on both the risk-benefit test and the consumer expectation test. Ford argues that Biosera was wrongly decided. ¶14         As we explain more fully below, we disagree with Ford’s contention that the jury cannot be instructed at all on the consumer expectation test, because we conclude that that test is part of the applicable risk-benefit test. 
But we agree with Ford that the jury should not have been instructed separately on the consumer expectation test in instruction number 18. Because that instruction is derived from CJI-Civ. 4th 14:3, we conclude that the pattern jury instruction is incorrectly formulated to the extent it incorporates the consumer expectation test. We also disagree with Biosera to the extent it endorsed inclusion of the consumer expectation test in what is now CJI-Civ. 4th 14:3. A. The Ortho/Armentrout Seven-Factor Test Incorporates the Consumer Expectation Test ¶15         We start by reviewing the supreme court’s decision in Ortho. There, the court stated: We believe the [risk-benefit] test . . . is the appropriate standard here. The dangerousness of [the drug at issue] is defined primarily by technical, scientific information. The consumer expectation test fails to address adequately this aspect of the problem. The risk-benefit test focuses on the practical policy issues characteristic of a product such as [the drug at issue], which is alleged to be unreasonably dangerous despite being manufactured in precisely the form intended. . . . . [The instruction given by the trial court stated only] the “consumer expectation test,” a test not suitable in prescription drug cases when the actionable product is alleged to be unsafe by design notwithstanding its production in precisely the manner intended. The failure of the trial court to give an instruction on the risk-benefit test was reversible error.  722 P.2d at 414-15.  ¶16         The court in Ortho recited and appeared to endorse a seven-factor test, id. at 414, that was derived from John W. Wade, On the Nature of Strict Tort Liability for Products, 44 Miss. L.J. 825, 837-38 (1973). The supreme court later expressly adopted this test in Camacho, 741 P.2d at 1247-48, and Armentrout, 842 P.2d at 184. 
As set forth in Armentrout, the test is as follows: In order to determine whether the risks outweigh the benefits of the product design, the jury must consider different interests, represented by certain factors. In Ortho, we listed the following factors which could be considered in determining whether the risks outweigh the benefits:
(1) The usefulness and desirability of the product — its utility to the user and to the public as a whole.
(2) The safety aspects of the product — the likelihood that it will cause injury and the probable seriousness of the injury.
(3) The availability of the substitute product which would meet the same need and not be as unsafe.
(4) The manufacturer’s ability to eliminate the unsafe character of the product without impairing its usefulness or making it too expensive to maintain its utility.
(5) The user’s ability to avoid danger by the exercise of care in the use of the product.
(6) The user’s anticipated awareness of the dangers inherent in the product and their avoidability because of general public knowledge of the obvious condition of the product, or of the existence of suitable warnings or instructions.
(7) The feasibility, on the part of the manufacturer, of spreading the loss by setting the price of the product or carrying liability insurance.
Armentrout, 842 P.2d at 183-84 (emphasis added) (citing Ortho, 722 P.2d at 414). ¶17         Here, in addition to jury instruction number 18 based on CJI-Civ. 4th 14:3, the jury was given instruction number 19, which contained this seven-factor test. ¶18         According to Ford, the supreme court’s pronouncements in Ortho and Camacho demonstrate that the consumer expectation test may no longer be used by fact finders to determine whether a product is defective. Although we agree that the first sentence of instruction number 18 should not have been given, and that reversal is therefore required, we disagree that the consumer expectation test has been completely superseded. 
¶19         We reach this conclusion by noting that the consumer expectation test is incorporated as factor number 6 of the risk-benefit test adopted in Ortho. Factor number 6 requires the jury to consider “[t]he user’s anticipated awareness of the dangers inherent in the product and their avoidability because of general public knowledge of the obvious condition of the product, or of the existence of suitable warnings or instructions.” Id. at 184. This is merely a rephrasing of the consumer expectation test. ¶20         In Camacho, the supreme court repeated Ortho’s seven-factor risk-benefit test, 741 P.2d at 1245, and said, “[t]he factors enumerated in Ortho are applicable to the determination of what constitutes a product that is in a defective unreasonably dangerous condition.” Id. at 1248. ¶21         Camacho also dictates that a multi-factor risk-benefit test be used in product liability cases, id. at 1245 (“Any test . . . to determine whether a particular product is or is not actionable must consider several factors.” (emphasis added)), and indicates that the test to be used is the Ortho test, id. at 1248 (“The factors enumerated in Ortho are applicable to the determination of what constitutes a product that is in a defective unreasonably dangerous condition.”). Cf. Armentrout, 842 P.2d at 184 (“Depending on the circumstances of each case, flexibility is necessary to decide which factors are to be applied, and the list of factors mentioned in Ortho and Camacho may be expanded or contracted as needed.”). ¶22         Thus, Camacho indicates that the consumer expectation test survived Ortho, but only as one factor among the many listed in the risk-benefit test. 
See Camacho, 741 P.2d at 1246-47 (“total” and “exclusive” reliance on consumer expectation test is inappropriate; consumer expectation test “does not provide a satisfactory test for determining whether particular products are in a defective condition unreasonably dangerous to the user or consumer,” and “diverts the appropriate focus” away from “the nature of the product under all relevant circumstances rather than upon the conduct of either the consumer or the manufacturer”). ¶23         Our review of post-Camacho supreme court decisions confirms that none of them discusses the consumer expectation test, except to the extent that it is included in factor number 6 of the risk-benefit test. See Forma Scientific, 960 P.2d at 112; Barton v. Adams Rental, Inc., 938 P.2d 532 (Colo. 1997); Fibreboard Corp. v. Fenton, 845 P.2d 1168 (Colo. 1993); Armentrout, 842 P.2d 175; Schmutz v. Bolles, 800 P.2d 1307, 1316 (Colo. 1990). ¶24         Other than Biosera, White, 867 P.2d at 105-06, decided by a division of this court, is the only post-Camacho Colorado state appellate decision that discusses both tests. In White, the division said that the risk-benefit test, and not the consumer expectation test, should have been given where the key issue at trial was the plaintiff’s claim that an engine was unreasonably dangerous when used with combustible materials and that an alternative design existed. Id. The division did not appear to notice that factor number 6 of the risk-benefit test incorporates the consumer expectation test, and it certainly did not indicate that factor number 6 of the risk-benefit instruction was improperly given. ¶25         We recognize that certain federal court decisions have discussed the applicability of the consumer expectation and risk-benefit tests to Colorado products liability claims brought in federal court. See Kokins v. Teleflex, Inc., 621 F.3d 1290, 1296 (10th Cir. 2010); Montag v. Honda Motor Co., 75 F.3d 1414, 1419 (10th Cir. 1996). 
However, we are not bound by decisions of federal courts applying Colorado law. Monez v. Reinertson, 140 P.3d 242, 245 (Colo. App. 2006). In any event, the federal courts did not discuss the inclusion of the consumer expectation test in factor number 6 of the risk-benefit test, as we do here.  B. The Trial Court Erred by Instructing the Jury Separately on the Consumer Expectation Test ¶26         We conclude that, because the consumer expectation test is included in the risk-benefit test instruction that was given to the jury as instruction number 19, the trial court erred by giving a separate instruction that also included the consumer expectation test. This is so because the combined instructions allowed the jury to consider the consumer expectation test twice: once in the risk-benefit test in instruction number 19, and again in instruction number 18. ¶27         Moreover, the first sentence of instruction number 18 improperly allowed the jury to find for plaintiff if either the risk-benefit test or the consumer expectation test was met. Following CJI-Civ. 4th 14:3, the first sentence of instruction number 18 said, “[a] product is unreasonably dangerous because of a defect in its design if it creates a risk of harm to persons or property that would not ordinarily be expected or is not outweighed by the benefits to be achieved from such design.” (Emphasis added.) ¶28         Because, as we have seen, the risk-benefit test already incorporates the consumer expectation test, it was reversible error to give the first sentence of instruction number 18, essentially allowing the jury to consider the consumer expectation test twice. ¶29         For this reason, we disagree with the division’s decision in Biosera to the extent it endorsed the improper language of CJI-Civ. 4th 14:3 and can be read to allow a trial court to instruct on both the consumer expectation and risk-benefit tests. 
See 941 P.2d at 287 (concluding that consumer expectation and risk-benefit tests “are not mutually exclusive” and that a trial court “should review each [test] to determine if it is an appropriate standard for judging the dangerous nature of the product at issue”). C. The Error Requires Reversal ¶30         For two reasons, this instructional error was not harmless. ¶31         First, the error allowed the jury to consider the consumer expectation test as an alternative to the risk-benefit test. The consumer expectation test is not an alternative test to the risk-benefit test, but is a sub-part of that test. Thus, the jury was improperly allowed to find for plaintiff even if it failed to consider the other parts of the risk-benefit test. ¶32         Contrary to Camacho, instruction number 18 allowed the jury to find for plaintiff if it found either that the product was defective based on the consumer expectation test or if the risk of the product was not outweighed by the benefits to be achieved from the design. Cf. White, 867 P.2d at 105-06 (error in instructing on consumer expectation test was not harmless). ¶33         Second, the error allowed the jury to consider the consumer expectation test twice, once in instruction number 18 and again in instruction number 19. Part of plaintiff’s theory at trial was based on the consumer expectation test — namely, that the plaintiff could recover if the jury found that an ordinary consumer would not expect a car seat to behave as his car seat did. Therefore, allowing the jury to consider the consumer expectation test twice improperly emphasized that test to plaintiff’s advantage. ¶34         Because this error was not harmless, we reverse and remand to the trial court for a new trial, and direct the court to omit the words “would not ordinarily be expected or” from the first sentence of CJI-Civ. 4th 14:3 when it instructs the jury on the elements of a products liability claim. 
Accordingly, that sentence should read: “A product is unreasonably dangerous because of a defect in its design if it creates a risk of harm to persons or property that is not outweighed by the benefits to be achieved from such a design.” III. Other Issues ¶35         Because they may arise on remand, we consider only the following additional issues. As to the remaining issues raised on appeal, we do not expect them to arise on retrial, and therefore we do not address them. A. Defect and Causation Evidence was Sufficient ¶36         Ford contends that Walker’s defect and causation evidence was insufficient. More specifically, Ford argues that the trial court should have granted its motion for judgment notwithstanding the verdict because Walker did not prove that an alternative seat design would have provided better protection and that Ford’s defective car seat was the cause of the injury. We disagree with these contentions. 1. Standard of Review and Legal Authority ¶37         A judgment notwithstanding the verdict is appropriate where the evidence is insufficient as a matter of law or there are no genuine issues of material fact and the moving party is entitled to judgment as a matter of law. C.R.C.P. 59(e)(1)-(2). We review de novo a grant or denial of a motion for judgment notwithstanding the verdict. Cardenas v. Fin. Indem. Co., 254 P.3d 1164, 1167 (Colo. App. 2011). ¶38         In determining a motion for judgment notwithstanding the verdict where the factual basis for the verdict must be analyzed, we review the record in favor of the nonmoving party. Durdin v. Cheyenne Mountain Bank, 98 P.3d 899, 903 (Colo. App. 2004). Such a motion may be granted only if the evidence, taken in the light most favorable to the opposing party and drawing every reasonable inference which may legitimately be drawn in favor of that party, would not support a verdict by a reasonable jury in the opposing party’s favor. Id.; see also C.R.C.P. 59(e); Nelson v. 
Hammon, 802 P.2d 452, 454 (Colo. 1990). In applying this standard, the court cannot consider the weight of the evidence or the credibility of the witnesses. See Durdin, 98 P.3d at 903. 2. Discussion ¶39         The evidence allowed the jury to reasonably conclude that an alternative car seat design would have provided better protection, and that Ford’s car seat was defective and was the cause of Walker’s injuries. ¶40         Assuming that similar evidence is presented on retrial, we are not persuaded that Ford would be entitled to judgment as a matter of law. ¶41         Walker presented evidence that an independent medical examiner diagnosed him with a closed head injury, vertigo, and a ligamentous injury. Paul Lewis, a biomechanical engineer and expert on injury causation, testified that, if Walker’s seat back had remained upright in the accident and the seat had had an adequate headrest, Walker would not have sustained any of his more significant injuries. Lewis explained that the injuries occurred due to lack of sufficient protection or lack of “coupling,” and explained “coupling” as “basically trying to tie the body to the vehicle so that [one] can effectively ride down the crash forces.” ¶42         Walker also presented testimony of Lewis and engineer Ken Brown to show that an alternative design could have provided better protection than the seat in Walker’s Explorer. ¶43         Lewis testified that the design of the 1996 Chrysler Sebring seat was a better alternative than the one in Walker’s 1998 Ford Explorer. Lewis also discussed the high-retention seat designed in the 1990s by Ford’s expert witness Dr. David Viano. According to Lewis, Viano’s design had a stiffer seat and better head rest than the one in the 1998 Ford Explorer and was designed to prevent the kind of extension injuries that were suffered by Walker. 
Brown testified that when Walker’s Explorer went into production, both Volvo and Chrysler Sebring cars had seats with taller and more forward head restraints, and the Sebring also had an integrated belt restraint to go along with the seat. He also opined that the seat in the 1998 Ford Explorer was not of adequate strength. ¶44         Ford presented evidence to contradict the testimony of Lewis and Brown. The jury was free to credit the testimony of either side’s experts and was not required, as a matter of law, to conclude that there was no alternative design available at the time that would have provided better protection than did the Explorer’s car seat. ¶45         The jury could have concluded from Lewis’s and Brown’s testimony that an alternative design was available that could have prevented Walker’s injuries, and that the Explorer’s car seat was defectively designed. The evidence also allowed the jury to determine that the car seat was the cause of Walker’s injuries. Because there was competent evidence to support the verdict, Ford was not entitled to judgment notwithstanding the verdict. See Graphic Directions, Inc. v. Bush, 862 P.2d 1020, 1024 (Colo. App. 1993). B. Other Incident Evidence ¶46         Ford next contends that the trial court erred by permitting Walker to introduce evidence of other incidents involving Ford vehicles. Ford argues that by doing so the trial court abused its discretion because evidence of the other incidents did not meet the substantial similarity test and thereby prejudiced Ford. We are not persuaded. 1. Standard of Review and Legal Authority ¶47         We review a district court’s evidentiary ruling for an abuse of discretion. Wal-Mart Stores, Inc. v. Crossgrove, 2012 CO 31, ¶7; Hock v. New York Life Ins. Co., 876 P.2d 1242, 1251 (Colo. 1994). In determining whether a court abused its discretion in admitting evidence, we accord the evidence its maximum probative value as weighed against its minimum prejudicial effect. 
City of Englewood v. Denver Waste Transfer, L.L.C., 55 P.3d 191, 200 (Colo. App. 2002). ¶48         Prior incident evidence may be admitted if it is offered to establish a material fact, is logically relevant, contains no inference of the opposing party’s bad character, and does not result in unfair prejudice. Vista Resorts Inc. v. Goodyear Tire & Rubber Co., 117 P.3d 60, 66 (Colo. App. 2004). ¶49         Evidence of similar accidents, occurrences, or injuries may be offered to refute testimony that a given product was designed without safety hazards. Koehn v. R.D. Werner Co., 809 P.2d 1045, 1048 (Colo. App. 1990). Evidence of prior similar incidents is relevant to show that the manufacturer had notice of an actual or potential product defect. Vista Resorts, 117 P.3d at 67. Before such evidence is admitted, the proponent of the evidence must make an initial showing that the other incident occurred under the same or substantially similar circumstances as those involved in the case to be tried. Koehn, 809 P.2d at 1048. Differences between the circumstances surrounding prior incidents and those in the case to be tried bear on the weight to be given such evidence, and not on its admissibility. Vista Resorts, 117 P.3d at 67. 2. Discussion ¶50         Ford’s briefs give too little detail about any asserted dissimilarities between the Walker accident and the other incidents for us to conclude that the trial court abused its discretion by admitting the other incident evidence. See People v. Diefenderfer, 784 P.2d 741, 752 (Colo. 1989) (it is the duty of counsel for the appealing party to inform a reviewing court as to the specific errors relied on, as well as the grounds, supporting facts, and authorities therefor). ¶51         Our review of the record shows that the trial court placed appropriate limits on the presentation of evidence of other incidents involving Ford vehicles, and did not abuse its discretion in allowing evidence of the four incidents to be admitted. 
The four vehicles in those incidents were all Ford Explorers, and all appear to have involved the same or similar seat design as the seat in Walker’s Explorer, meaning they were designed to perform in the same manner as Walker’s seat. Though Ford points to differences in the types of accidents and injuries in those other incidents, those differences went only to the weight to be given to the evidence, and not to its admissibility. ¶52         Moreover, testimony about the other incidents was extremely brief. We foresee no adverse effect on the fairness of the trial if similarly brief testimony were to be offered on retrial. IV. Conclusion ¶53         The judgment is reversed and the case is remanded to the trial court for a new trial in accordance with the opinions expressed herein. JUDGE GRAHAM concurs. JUDGE WEBB specially concurs.   JUDGE WEBB specially concurring. ¶54         While I agree that the verdict must be set aside, I write separately to offer a narrower explanation for this result, which could influence the retrial. In my view, the majority correctly concludes that the trial court erred by giving a separate instruction that also included the consumer expectation test. This is so because the combined instructions allowed the jury to consider the consumer expectation test twice: once in the risk-benefit test in instruction number 19, and again in instruction number 18. Still, I cannot join in the majority’s further conclusion that “Camacho indicates that the consumer expectation test survived Ortho, but only as one factor among the many listed in the risk-benefit test.” Although this latter conclusion may be a permissible inference from these cases, it is hardly compelling. ¶55         I draw on the same background as the majority. Two tests have been developed to determine whether a product’s design makes it defective and unreasonably dangerous: the consumer expectation test and the risk-benefit test. Ortho Pharm. Corp. v. 
Heath, 722 P.2d 410, 413 (Colo. 1986), overruled in part by Armentrout v. FMC Corp., 842 P.2d 175, 183 (Colo. 1992). Under the consumer expectation test, a product is unreasonably dangerous because of a defect in its design if it creates a risk of harm that is greater than what an ordinary consumer would expect. See Camacho v. Honda Motor Co., Ltd., 741 P.2d 1240, 1245 (Colo. 1987). Under the risk-benefit test, a product becomes unreasonably dangerous when the degree of danger inherent in the design outweighs the benefits of the product design. See Armentrout, 842 P.2d at 183-84. To determine whether the risks outweigh the benefits of the product design, the jury must consider the seven-factor test set out by the majority. But unlike the majority, I am unwilling to conclude that reference to consumer expectation in the sixth factor precludes a plaintiff from electing to proceed solely on a consumer expectation theory, as embodied in CJI-Civ. 4th 14:3 (2015), rather than on a risk-benefit theory. ¶56         In Camacho, 741 P.2d at 1245, the supreme court endorsed the concept of the consumer expectation test: “[a] consumer is justified in expecting that a product placed in the stream of commerce is reasonably safe for its intended use, and when a product is not reasonably safe, a products liability action may be maintained.” True, the supreme court also said that consumer expectation “does not provide a satisfactory test for determining whether particular products are in a defective condition unreasonably dangerous to the user or consumer.” Id. at 1246 (emphasis added). But the court went on to explain: Total reliance upon the hypothetical ordinary consumer’s contemplation of an obvious danger diverts the appropriate focus and may thereby result in a finding that a product is not defective even though the product may easily have been designed to be much safer at little added expense and no impairment of utility. Id. (emphasis added). 
This language tells me the court was indicating that in certain cases, the risk-benefit test may be needed in addition to the consumer expectation test, to avoid an incorrect jury finding that a product was not defective. ¶57         Contrary to the majority, I do not read the supreme court’s earlier decision in Ortho — a case involving the manufacture of prescription pharmaceuticals — as requiring a different conclusion. Unlike mechanical devices with which consumers are familiar and understand the manner in which they should perform, how prescription drugs work is likely a complete mystery to the ordinary consumer. Such a consumer has no knowledge as to how the chemical components of a drug should interact in the human body. Thus, the supreme court in Ortho made a rational distinction between consumer products and prescription drugs. And this distinction suggests that only the risk-benefit test should be applied — to the exclusion of the consumer expectation test — in prescription drug cases. ¶58         Nothing in Ortho suggests that the risk-benefit test must be applied to mechanical devices in common usage by the public, such as cars and car seats. Indeed, because Camacho was decided after Ortho, I infer that the supreme court would allow the consumer expectation test to be applied here, where the design of a car seat is in issue. Had the Camacho court intended Ortho to be a stake through the heart of a stand-alone consumer expectation test, it could have said so specifically. It did not. ¶59         But the Camacho court was not faced with the argument that instructing on both tests could prejudice a defendant. As to that issue, I disagree with the majority’s first prejudice explanation but fully endorse its second. 
¶60         On retrial, the jury would have had common experience with car seats and would have formed expectations as consumers about how car seats should perform, even if they would not necessarily know exactly how the seats would function in a collision. True, the seat’s design also reflected complex engineering principles. Because both ordinary consumer expectations and complex engineering could support a jury’s determination that the seat was a defective product, in my view plaintiff should be allowed to choose between instructing the jury on either the consumer expectation test or the risk-benefit test. These opinions are not final. They may be modified, changed or withdrawn in accordance with Rules 40 and 49 of the Colorado Appellate Rules. Changes to or modifications of these opinions resulting from any action taken by the Court of Appeals or the Supreme Court are not incorporated here. Colorado Court of Appeals Opinions || September 10, 2015
National Dunking Association

The National Dunking Association was a membership-based organization started by The Doughnut Corporation of America. It was established in the 1930s to help popularize doughnuts in North America. At its peak, the association claimed millions of members across more than 300 chapters. Members included famous actors, athletes, political figures, and people of all ages.

Activities

Members were encouraged to eat doughnuts using the Official Dunking Rules, a step-by-step method outlined by the organization. The lighthearted rules referred to dunking doughnuts as a sport and instructed members to break their doughnuts in half before swishing them rhythmically in coffee, cocoa, tea, or milk. The association held various doughnut-focused events, including an annual convention in New York City.

Leadership

Presidents of the National Dunking Association included Jimmy Durante, Jack Lemmon, Red Skelton, Joey Bishop, and Johnny Carson. Bert Nevins served as the organization's vice president.

Famous Members

Johnny Carson
Jimmy Durante
Jack Lemmon
Red Skelton
Joey Bishop
Paul V. McNutt
Zero Mostel
Martha Graham

Location

The National Dunking Association was located at 50 East 42nd Street in New York City.
Q: Combinatorial proof. What is this question asking?

Show for sums over all $i_1 + i_2 + \ldots + i_k = n, i_j \geq 0$, that $$ \sum P(n;i_1,i_2,\ldots,i_k) = k^n $$ What is the index? I'm not even sure how to expand the LHS.

Update: $P$ is permutation. If the downvote was because there was something unclear, please let me know.

A: I had to expand it to make it readable, but the summation can be written like this: $$\large\sum_{i_1+\ldots+i_k=n\atop{i_1,\dots,i_k\ge 0}}P(n;i_1,i_2,\dots,i_k)\;.$$ Alternatively, it’s $$\sum\left\{P(n;i_1,i_2,\dots,i_k):\sum_{j=1}^ki_j=n\text{ and }i_1,\dots,i_k\ge 0\right\}\;.$$ I can’t be sure of helping with the proof, though, until you tell us what $P(n;i_1,i_2,\dots,i_k)$ is. I’m going to guess that it’s the number of distinguishable permutations of a set of $n$ objects of $k$ types, $i_j$ being the number of indistinguishable objects of type $j$ for $j=1,\dots,k$. That makes your theorem a special case of the multinomial theorem. Count the functions from $\{1,\dots,n\}$ to $\{1,\dots,k\}$ in two ways. You can think of such a function as an assignment of labels $1,\dots,k$ to the integers $1,\dots,n$. The term $P(n;i_1,\dots,i_k)$ is the number of ways to assign $i_1$ labels $1$, $i_2$ labels $2$, ..., and $i_k$ labels $k$.
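If the answer's guess is right — that $P(n;i_1,\dots,i_k)$ is the multinomial coefficient $n!/(i_1!\cdots i_k!)$ — the identity can be sanity-checked numerically for small $n$ and $k$. The sketch below (function names are my own, not from the question) enumerates every composition $i_1+\dots+i_k=n$ with $i_j\ge 0$ and sums the coefficients:

```python
from math import factorial
from itertools import product

def multinomial(n, parts):
    """P(n; i_1, ..., i_k) = n! / (i_1! * ... * i_k!)."""
    out = factorial(n)
    for i in parts:
        out //= factorial(i)
    return out

def lhs(n, k):
    """Sum of multinomial coefficients over all tuples
    (i_1, ..., i_k) with i_j >= 0 and i_1 + ... + i_k = n."""
    total = 0
    # enumerate all k-tuples with entries in 0..n, keep those summing to n
    for parts in product(range(n + 1), repeat=k):
        if sum(parts) == n:
            total += multinomial(n, parts)
    return total

# the identity: sum of P(n; i_1, ..., i_k) over all compositions equals k^n
for n, k in [(3, 2), (4, 3), (5, 4)]:
    assert lhs(n, k) == k ** n
```

This mirrors the double-counting proof: each composition's coefficient counts the label assignments with exactly $i_j$ copies of label $j$, and summing over all compositions counts all $k^n$ functions from $\{1,\dots,n\}$ to $\{1,\dots,k\}$.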
<?xml version="1.0" encoding="utf-8"?> <!-- Authors: * A2093064 * Black9869184 * Bowleerin * Cwlin0416 * EagerLin * Kly * LNDDYL * Liuxinyu970226 * Macofe * Sanmosa * Simon Shek * Xiplus * Zhxy 519 * 铁桶 --> <resources> <string name="name">繁體中文</string> <string name="autorelog">下次執行此操作不再詢問</string> <string name="isrtl">false</string> <string name="add">新增</string> <string name="cancel">取消</string> <string name="clear">清除</string> <string name="close">關閉</string> <string name="continue">繼續</string> <string name="copy">複製</string> <string name="delete">刪除</string> <string name="deleted">已刪除</string> <string name="error">錯誤</string> <string name="exit">離開</string> <string name="done">完成</string> <string name="no">否</string> <string name="ok">確定</string> <string name="override-warn">您已覆蓋項目設定的路徑。請留意,若濫用此功能在 wiki 上獲得原先您所不俱有的寫入存取,可能會導致永久封禁您的帳號。</string> <string name="remove">移除</string> <string name="rename">重新命名</string> <string name="reload">重新載入</string> <string name="project">項目</string> <string name="warning">警告</string> <string name="result">結果</string> <string name="save">儲存</string> <string name="show">顯示</string> <string name="no-instant">至$2未能寄送成即時警告的訊息未在項目$1上啟用,請選擇一個不同的警告級別。</string> <string name="undo">撤銷</string> <string name="yes">是</string> <string name="editing-page">正在編輯頁面</string> <string name="agf">假定善意回退,並自訂理由</string> <string name="successful">成功</string> <string name="summary">摘要</string> <string name="id">ID</string> <string name="type">類型</string> <string name="target">目標</string> <string name="user">使用者</string> <string name="size">大小</string> <string name="date">日期</string> <string name="page">頁面</string> <string name="time">時間</string> <string name="version">版本</string> <string name="general-name">名稱</string> <string name="author">作者</string> <string name="description">描述</string> <string name="status">狀態</string> <string name="link">連結</string> <string name="reason">原因</string> <string name="diffid">版本差異ID</string> <string 
name="whitelisted">$1(在$3的分數為$2)已列入白名單</string> <string name="expiry-time">到期時間</string> <string name="duration">期限</string> <string name="enable">啟用</string> <string name="disable">禁用</string> <string name="no-token">無標記</string> <string name="flags">標記</string> <string name="gracetime">正在等待編輯執行的查詢結束...</string> <string name="browser-none">尚未有頁面顯示出</string> <string name="browser-load">通過 MW 備選下載版本差異中,請耐心等待…</string> <string name="browser-miss-summ">未提供摘要</string> <string name="browser-diff">頁面差異:$1(記分:$2)</string> <string name="browser-fail">無法檢索版本差異:$1</string> <string name="about-contributors">開發人員:</string> <string name="block-not">使用者未被封鎖</string> <string name="block-title">封禁$1</string> <string name="block-reason">原因:</string> <string name="block-duration">期限:</string> <string name="block-message">對話頁訊息:</string> <string name="block-message-user">傳送包含封銷解釋的訊息給目標使用者</string> <string name="block-anononly">僅封禁匿名使用者</string> <string name="block-creation">禁止建立帳戶</string> <string name="block-autoblock">開啟自動封禁</string> <string name="block-email">阻止電子郵件</string> <string name="block-usertalk">對話</string> <string name="block-usercontribs">貢獻</string> <string name="block-blockloglabel">封鎖日誌</string> <string name="block-warnloglabel">警告:</string> <string name="block-sharedipwarning">注意:$1 已被標記爲共享或動態 IP 位址。</string> <string name="block-rangeblockwarning">$1已經受到了自動封禁$2的影響。\n這次封禁將會覆蓋自動封禁設置。是否繼續?</string> <string name="block-token-1">獲取金鑰以封禁$1</string> <string name="block-fail">無法封禁使用者,$1</string> <string name="block-token-e1">無法檢索金鑰:$1</string> <string name="block-error-no-info">目前沒有查詢到用戶資訊(您是管理員嗎?)</string> <string name="block-type">封禁/解禁</string> <string name="block-admin">封禁管理員</string> <string name="block-none">沒有使用者被選擇來封禁。</string> <string name="contribution-browser-user-info">顯示使用者$1近期的貢獻</string> <string name="config-function">功能</string> <string name="config-description">描述說明</string> <string name="config-shortcut">捷徑</string> <string 
name="config-queue-filter-ignore">忽略 (無所謂)</string> <string name="config-queue-filter-exclude">排除 (必須不是)</string> <string name="config-queue-filter-require">必要 (必須是)</string> <string name="config-queue-modified-title">佇列已變動</string> <string name="config-queue-modified-text">您已變動過佇列,但尚未儲存。若您繼續的話,您做出的更改會被捨棄掉。您確定要繼續嗎?</string> <string name="config-summ">自訂模板回退原因</string> <string name="config-title">選項</string> <string name="config-already-in-use">此捷徑已被 $1 使用,請選擇另一個</string> <string name="config-openinbrowser">在新的瀏覽器視窗開啟連結</string> <string name="config-shownewedits">顯示他們於選擇頁面所做的新編輯</string> <string name="config-ircmode">如果可以,使用 IRC feed 來取得近期變更</string> <string name="config-ircport">IRC 埠</string> <string name="config-difffontsize">差異字型大小</string> <string name="config-logfile">日誌檔案</string> <string name="config-viewlocalconfig">檢視本機設定資料夾</string> <string name="config-trayicon">顯示通知區域圖示</string> <string name="config-startupmessage">顯示啟動訊息</string> <string name="config-shownewmessages">顯示新訊息列</string> <string name="config-shortcutlist">快捷鍵</string> <string name="config-shortcutaction">動作</string> <string name="config-changeshortcut">改變$1的捷徑</string> <string name="config-noshortcut">無</string> <string name="config-defaults">預設</string> <string name="config-minor">標記為小修改</string> <string name="config-ip">IP 用戶</string> <string name="config-watchlist">增加至監視清單</string> <string name="config-defaultsummary">預設為手動編輯摘要</string> <string name="config-undosummary">撤銷自身編輯的預設編輯摘要</string> <string name="config-confirm-user">回退在用戶命名空間做出的編輯時需要確認</string> <string name="config-confirm-wl">需要確認以回退白名單用戶做出的編輯</string> <string name="config-confirm-talk">回退在對話頁上做出的編輯時需要確認</string> <string name="config-use-rollback">使用軟體回退</string> <string name="config-welcome-empty-page">向做出良好編輯但對話頁仍為空白的用戶發送歡迎訊息</string> <string name="config-conflicts-revert">自動解決與其他用戶在同一頁面回退或更改的編輯衝突</string> <string name="config-instant-reverts">使用立即回退(非常快,但執行後無法撤銷)</string> <string 
name="config-reverts-multiple">由同一使用者的多次編輯回退:</string> <string name="config-revert-wait">回退延遲 (秒)</string> <string name="config-revert">回退</string> <string name="config-skip">略過</string> <string name="config-revert-diff">當頁面上比所顯示差異更新的編輯出自同一用戶時,一併回退這些編輯</string> <string name="config-change-all">變更所有 Huggle 設定以模擬 Huggle 2 的行為</string> <string name="config-merge-messages">將您的訊息與其他人的訊息合併到已有的段落中</string> <string name="config-months-name">使用月份而不是頁面名作為標題</string> <string name="config-automatic-warning">在「寄送警告」功能中使用自動警告類型和級別</string> <string name="config-enable-irc">開啟 IRC</string> <string name="config-remove-reverted">從佇列來移除已被別人回退的編輯</string> <string name="config-remove-old">如果收到同一頁面上的較新編輯時,自佇列來移除舊編輯</string> <string name="config-auto-load-history">自動載入歷史記錄和用戶訊息(需要較多的網路頻寬)</string> <string name="config-last-revision">若目前已載入編輯並非最新版本,切換至頁面最新的修訂內容(這將會先讓 Huggle 運作緩慢來下載內容)</string> <string name="config-require-delay">在每次載入下一個編輯內容需要一些時間,在此期間任何寫入功能皆不可用(這在避免因鍵盤多次觸按,而產生程式錯誤時很有用)</string> <string name="config-wait-edit">直到寫入功能開啟所延遲的秒數:</string> <string name="config-display-next">顯示文字</string> <string name="config-retrieve-edit">取得您自己的編輯</string> <string name="config-nothing">無</string> <string name="config-bot-edits">機器人的編輯 (b)</string> <string name="config-own-edits">自己的編輯</string> <string name="config-reverts">回退</string> <string name="config-new-page">新頁面</string> <string name="config-whitelisted">白名單</string> <string name="config-friend">好友</string> <string name="config-userspace">使用者空間</string> <string name="config-talk">對話頁</string> <string name="config-columns-dynamic">使清單的欄位為動態(依項目內容多寡自動調整大小)或是手動(用戶自行調整大小)</string> <string name="config-message-notification">當您在自己的對話頁面中收到一則新訊息時,顯示通知</string> <string name="config-warning-api">顯示用於API查詢的警告</string> <string name="config-title-diff">在每個差異的頂端以大字體顯示頁面標題</string> <string name="config-updates">檢查更新</string> <string name="config-reset-menu">自您上次訪問後項目設置(警告類型清單)已更改,下拉選單的捷徑(回退和警告等等)已重新設定。</string> <string 
name="config-beta">檢查是否有測試版本</string> <string name="config-html-messages">HAN 可以顯示 HTML 訊息(備註:這可能會有潛在危害)</string> <string name="config-summary-present">當摘要存在而未遺失時突顯強調內容</string> <string name="config-close-without">不儲存關閉</string> <string name="config-confirmmultiple">確認由同一用戶編輯多項修訂版本</string> <string name="config-confirmsame">確認由用戶回退的編輯修訂版本</string> <string name="config-confirmselfrevert">確認回退自己的編輯內容(撤銷除外)</string> <string name="config-confirmwarned">確認回退被警告用戶所編輯內容</string> <string name="config-confirmrange">確認同一/16範圍內匿名用戶回退編輯的修訂版本</string> <string name="config-confirmpage">確認在忽略頁面的回退</string> <string name="config-autoadvance">在回退之後,下一個編輯移入佇列。</string> <string name="config-userollback">如果可用則使用回退功能</string> <string name="config-revertsummaries">回退選單中的可用摘要</string> <string name="config-auto-refresh">當有人做出其它編輯時自動重整頁面</string> <string name="config-extendreports">有額外破壞出現後擴充報告</string> <string name="config-autoreport">當要求以「最後一次警告」警告用戶時</string> <string name="config-reportnone">不做任何事</string> <string name="config-reportprompt">提示檢舉</string> <string name="config-reportauto">自動回報問題</string> <string name="config-templates">使用者樣板訊息</string> <string name="config-templatetext">顯示文字</string> <string name="config-template">模板</string> <string name="config-promptforblock">如果要求給用戶最後一次警告時提及封禁事項</string> <string name="config-blockreason">預設封禁理由</string> <string name="config-blocktime">預設封鎖期間</string> <string name="config-blocktimeanon">匿名使用者</string> <string name="config-blocktimereg">已註冊的使用者</string> <string name="config-summaryprompt">輸入編輯摘要</string> <string name="config-history">下載所有條目的完整編輯歷史</string> <string name="config-defaultsprompt">恢復預設?</string> <string name="config-logbrowsetitle">日誌檔位置</string> <string name="config-shortcutconflict">捷徑「$1」與已存在的捷徑衝突。</string> <string name="config-no-colon">您不能在佇列名稱裡使用「:」</string> <string name="config-notify-update">檢查更新</string> <string name="config-notify-beta">檢查是否有測試版本</string> <string name="custommessage-title">撰寫訊息給$1</string> 
<string name="custommessage-summary">編輯摘要(Huggle 字尾將會自動插入)</string> <string name="custommessage-subject">標題</string> <string name="custommessage-menu">寄送自訂訊息</string> <string name="custommessage-send">寄送訊息</string> <string name="custommessage-wikiuser-lesummary">傳遞自訂訊息給 $1</string> <string name="custommessage-wikiuser-plaintext-1">哈囉 $1,</string> <string name="custommessage-wikiuser-plaintext-2">在此編寫您的訊息。</string> <string name="delete-title">刪除 $1</string> <string name="delete-reason">原因:</string> <string name="delete-deletionlog">刪除日誌:</string> <string name="delete-rem-talk">刪除相關對話</string> <string name="delete-notify">通知建立者</string> <string name="delete-error-token">錯誤:取得刪除令牌失敗。原因為:$1</string> <string name="delete-token01">取得金鑰以刪除 $1</string> <string name="delete-token02">此查詢沒有回傳任何令牌</string> <string name="delete-e1">此頁面不能被刪除,原因:$1</string> <string name="delete-e2">無法刪除頁面</string> <string name="delete-edsc">無法刪除頁面,原因:$1</string> <string name="delete-failed-no-info">查詢中無頁面資訊 (你是管理員嗎?)</string> <string name="extension-kl">未執行</string> <string name="extension-ok">已載入並且執行</string> <string name="exception-text-1">很不巧地 Huggle 發生錯誤。請提交以下資訊,以及問題發生時您所進行操作的詳情</string> <string name="exception-details">例外詳細資訊</string> <string name="exception-error-code">錯誤代碼:$1</string> <string name="exception-reason">原因:$1</string> <string name="exception-source">來源:$1</string> <string name="exception-stack-trace">堆疊追蹤:</string> <string name="exception-system-log">系統日誌</string> <string name="historyform-no-info">無編輯資訊</string> <string name="historyform-title">歷史</string> <string name="historyform-retrieve-history">取得歷史</string> <string name="historyform-retrieving-history">取得歷史中</string> <string name="historyform-not-latest-tip">這不是此頁面最新的修訂版本。請確定您已複核最新版本,並注意此版本可能無法回退</string> <string name="history-error-message-title">無法還原</string> <string name="history-error">此項目無法撤銷。因該頁面可能已被更改或是刪除,而 Huggle 無法執行撤銷。如果您想撤銷,您需要改在 MediaWiki 介面上手動執行。</string> <string name="history-failure">無法取得歷史</string> 
<string name="userhistory-title">您的變更歷史</string> <string name="irc-connected">$1:已連結IRC最近更改訂閱。</string> <string name="irc-connecting">在$1上嘗試連結至IRC最近更改訂閱,這可能會花上幾分鐘...</string> <string name="irc-disconnected">與$1上的IRC最近更改訂閱連線已中斷。重新連線中...</string> <string name="irc-nochannel">未找到$1的IRC頻道;改採用較慢的API查詢</string> <string name="irc-error">無法連結至IRC最近更改訂閱($1:$2)。請改用較緩慢的API查詢</string> <string name="irc-stop">等待在$1的首要訂閱提供者停止</string> <string name="irc-not">IRC訂閱被項目設定檔停用</string> <string name="irc-failure">首要訂閱提供者失效。改退回成wiki提供者...</string> <string name="irc-wait">等待IRC訂閱提供者停止</string> <string name="irc-switch-rc">切換到wiki RC訂閱</string> <string name="login-old">從舊位置取得用戶設定</string> <string name="login-intro">請挑選一個項目並輸入您的用戶名稱和密碼。您也可點擊「項目」按鍵來登入至多個項目。</string> <string name="invalid-bot-user-name">這不是一個有效機器人名稱。可用的機器人名稱範例為:Jimbo@Huggle</string> <string name="login-tab-botp">機器人密碼</string> <string name="login-tab-login">舊式</string> <string name="login-reload-tool-tip">重新載入 Huggle 的可使用項目清單。當某個項目剛啟用 Huggle 而您想在其上使用時,此功能會很有用。</string> <string name="login-language">語言:</string> <string name="login-project">項目:</string> <string name="login-username">使用者名稱:</string> <string name="login-password">密碼:</string> <string name="login-proxygroup">代理設定</string> <string name="login-proxy">使用代理</string> <string name="login-proxyaddress">位址:</string> <string name="login-proxy-remember-this">記住此次設定讓下次使用</string> <string name="login-proxyport">埠:</string> <string name="login-proxydomain">網域名稱:</string> <string name="proxy-restart">Huggle 可能需要重新啟動以讓更改內容生效。</string> <string name="login-retrieving-user-conf">取得使用者設定</string> <string name="login-fail-css">登入 $1 失敗:無法取得使用者設定。 Special:MyPage/huggle3.css 遺失。 您是否有建立 huggle3.css 於您的使用者空間?</string> <string name="login-fail-enable-true">登入 $1 失敗:您尚未於您的個人設定設定 enable:true</string> <string name="login-fail-no-info">登入 $1 失敗:無法取得使用者資訊:$2</string> <string name="login-bot">機器人密碼登入和舊式登入有何差異?</string> <string name="login-fail-parse-config">登入 $1 失敗:無法解析使用者設定。 
請參考除錯日誌以了解詳細資訊。</string> <string name="login-fail-parse-config-yaml">在 $1 登入失敗:無法解析使用者設置:$2</string> <string name="login-fail-user-data">登入 $1 失敗:無法取得使用者資訊。 查詢 API 未回傳資料。</string> <string name="login-fail-rollback-rights">登入 $1 失敗:您於此項目沒有回退的權限。</string> <string name="login-failed-autoconfirm-rights">登入 $1 失敗:您於此項目並非已自動確認的使用者。</string> <string name="login-failed-edit">登入 $1 失敗:您於此專案未有足夠的編輯數。</string> <string name="login-retrieving-info">取得使用者資訊於 $1</string> <string name="login-password-empty">您輸入的密碼為空白</string> <string name="login-fail-wrong-name">您提供用來登入的使用者名稱無效</string> <string name="login-api">錯誤:api.php 回應了不明的結果:$1</string> <string name="login-abort">中止</string> <string name="login-start">登入</string> <string name="login-oauth-notsupported">此方法目前尚不支援</string> <string name="login-translate">翻譯Huggle</string> <string name="login-progress-formtitle">載入中...</string> <string name="login-progress-start">正在登入至 $1</string> <string name="login-progress-retrieve-mw">取得有關 $1 的 MediaWiki 資訊中</string> <string name="login-progress-language">正在更新訊息檔…</string> <string name="login-progress-global">正在檢查全域設定</string> <string name="login-progress-config">正在檢查設定頁面…</string> <string name="login-progress-yaml">檢查 $1 的本地 YAML 設定</string> <string name="login-progress-local">檢查 $1 的本地設定中</string> <string name="login-progress-user">檢查 $1 的使用者設定中</string> <string name="login-progress-user-info">檢查 $1 的使用者資訊中</string> <string name="login-progress-whitelist">取得使用者白名單...</string> <string name="login-remember-password">記住密碼</string> <string name="login-remember-password-tooltip">此選項將會在您的硬碟上以純文字方式儲存您的密碼。僅在您是唯一可存取此電腦的人時使用。</string> <string name="login-ssl">使用 HTTPS 加密連線登入(需要 OpenSSL)</string> <string name="login-fail">在$1登入失敗</string> <string name="login-fail-with-reason">登入失敗(於 $1):$2</string> <string name="login-error-admin">於此項目使用 Huggle 需要有管理員帳號。</string> <string name="login-error-age">於此項目使用 Huggle 前您的帳號至少需註冊 $1 天以上。 </string> <string name="login-error-alldisabled">Huggle 
目前被所有項目關閉。</string> <string name="login-error-approval">於 $1 使用 Huggle 需要核准。</string> <string name="login-error-cancelled">已取消。</string> <string name="login-error-config">登入失敗,無法解析 $1 的本地設定:$2</string> <string name="login-error-config-retrieve">登入失敗,無法取得使用者設定:$1</string> <string name="login-error-config-query-no-data">登入失敗 - 可能是網路連線問題?無法下載全域設定</string> <string name="login-error-count">於此項目使用 Huggle 至少需編輯 $1 次以上。 </string> <string name="login-error-disabled">Huggle 於 $1 尚未開啟供您的帳號使用。請檢查您的使用者設定頁面。</string> <string name="login-error-global">登入失敗 - 可能是網路連線問題?無法下載或解析全域設定頁面。</string> <string name="login-error-invalid">無效的使用者名稱</string> <string name="login-error-noconfig">$1 並未擁有 Huggle 設定頁面</string> <string name="login-error-nouser">該使用者並不存在</string> <string name="login-error-no-valid-token">網站未回傳有效的登入令牌</string> <string name="login-error-password">密碼錯誤</string> <string name="login-error-projdisabled">Huggle 目前於 $1 被關閉。</string> <string name="login-error-unknown">無法登入 $1。</string> <string name="login-error-version">此版本已過舊,請更新至最新版本。</string> <string name="login-error-whitelist">載入使用者白名單失敗 $1。</string> <string name="login-ro-title">切換成唯讀</string> <string name="login-ro-question">一個或多個項目($1)不允許您以編輯權限登入(原因:$2)。您是否要改為切換成唯讀模式?</string> <string name="login-ro-info">項目$1切換成唯讀模式</string> <string name="wikis-db-download-fail">無法下載 wiki 資料庫:$1</string> <string name="no-projects-defined-in-list">在清單中沒有定義項目,您需要在全域 wiki 設定一些內容</string> <string name="api-query-no-data">API 查詢沒有返回任何資料</string> <string name="main-stat">每分鐘編輯$1次,每分鐘回退$2次,等級$3</string> <string name="main-menu-provider-stop">停止提供者</string> <string name="main-menu-provider-resume">繼續提供者</string> <string name="main-help-copy-syslog-to-clip">複製系統日誌到剪貼簿</string> <string name="provider-up">使用提供者$1連線至 $2 的近期變更串流</string> <string name="main-no-reason">使用者未提供原因</string> <string name="main-restoring-slap">其它編輯目前正被還原,請稍候...</string> <string name="main-page">頁面</string> <string name="main-user">使用者</string> <string 
name="main-contribs">貢獻</string> <string name="main-history">歷史</string> <string name="main-new-messages">您有新訊息。選擇「系統-&gt;顯示新訊息」或按M鍵檢視。</string> <string name="main-space">頁面名稱不能以空格作結,請修正頁面名稱。</string> <string name="main-system">系統</string> <string name="main-status-bar">處理&lt;b&gt;$1&lt;/b&gt;次編輯和&lt;b&gt;$2&lt;/b&gt;次查詢。白名單用戶:&lt;b&gt;$3&lt;/b&gt; 佇列大小: &lt;b&gt;$4&lt;/b&gt; 在$6的統計:$5</string> <string name="main-metric-bar">API 平均回應時間:$1ms</string> <string name="main-shutting-down">Huggle 已關閉,略過</string> <string name="main-system-messages">顯示新訊息</string> <string name="main-system-savelog">儲存日誌…</string> <string name="main-system-statistics">統計資訊...</string> <string name="main-system-showqueue">顯示序列</string> <string name="main-system-options">選項…</string> <string name="main-system-logout">登出</string> <string name="main-system-abort">中止</string> <string name="main-system-change-provider">變更提供者</string> <string name="main-system-change-provider-irc">IRC</string> <string name="main-system-change-provider-wiki">Wiki</string> <string name="main-system-exit">結束</string> <string name="main-config-state-fail">無法從設定檔讀取狀態</string> <string name="main-config-geom-fail">無法從設定檔讀取視窗佈局結構</string> <string name="main-tools-scoreword-list">顯示頁面變更大小清單</string> <string name="main-queue">序列</string> <string name="main-queue-next">下一個</string> <string name="main-queue-clear">清除目前</string> <string name="main-queue-count">$1 項目</string> <string name="main-queue-query">正在執行查詢...</string> <string name="main-queue-reset">重設</string> <string name="main-queue-clearall">清除所有</string> <string name="main-queue-options">管理序列...</string> <string name="main-queue-when-full">當佇列已滿</string> <string name="main-queue-when-full-stop">停止訂閱源</string> <string name="main-queue-when-full-remove">移除舊編輯</string> <string name="main-queue-remove-whitelisted">移除由白名單使用者作出的編輯</string> <string name="main-queue-remove-200">移除記分小於 -200 的編輯</string> <string name="main-goto">前往</string> <string 
name="main-goto-mytalk">我的對話頁</string> <string name="main-goto-mycontribs">我的貢獻</string> <string name="main-revision-decline">拒絕修訂</string> <string name="main-revision">修訂</string> <string name="main-revision-view">檢視</string> <string name="main-revision-revert">回退目前顯示的編輯</string> <string name="main-revision-rv-stay">回退此編輯並停留在此頁面</string> <string name="main-revision-revert-warn">回退目前顯示的編輯並警告該使用者</string> <string name="main-revision-rws">回退目前顯示的編輯並在警告用戶後停留在此頁面</string> <string name="main-revision-faith">假定善意回退</string> <string name="main-revision-revert-only-this">僅回退此修訂</string> <string name="main-revision-revert-agf">依據假定善意僅回退此修訂</string> <string name="main-revision-previous">前一頁</string> <string name="main-revision-next">下一頁</string> <string name="main-revision-latest">最新</string> <string name="main-tools">工具</string> <string name="main-tools-sess">顯示連線階段資訊</string> <string name="main-tools-il">顯示目前 wiki 的忽略清單</string> <string name="main-page-next">下一步</string> <string name="main-page-switchtotalk">切換至對話頁面</string> <string name="main-page-switchtosubject">切換至主題頁面</string> <string name="main-page-switchtoarticle">切換至文章</string> <string name="main-page-viewlatest">檢視最新修訂</string> <string name="main-page-history">取得歷史</string> <string name="main-page-historypage">顯示歷史頁面</string> <string name="main-page-edit">在瀏覽器上編輯頁面</string> <string name="main-page-tag">標籤</string> <string name="main-page-reqdeletion">請求快速刪除</string> <string name="main-page-reqprotection">請求保護頁面</string> <string name="main-page-watch">監視</string> <string name="main-page-unwatch">取消監視</string> <string name="main-page-purge">清除快取</string> <string name="main-page-curr-disp">目前顯示</string> <string name="main-page-display">顯示此頁面</string> <string name="main-page-patrol">標記為已巡查</string> <string name="main-page-move">移動…</string> <string name="main-page-flag-suspicious-edit">標記為可疑的編輯(這會將此用戶的記分 +1 分)</string> <string name="main-page-flag-good-edit">標記為良好編輯(這會從此用戶的記分扣除 200 點)</string> <string 
name="main-page-protect">保護</string> <string name="main-page-delete">刪除</string> <string name="main-page-load">載入</string> <string name="main-patrol-not-enabled">未在$1啟用巡查</string> <string name="main-page-refresh">重新整理</string> <string name="main-page-restore">還原此修訂</string> <string name="main-page-watchlist-add">新增頁面至監視清單裡</string> <string name="main-page-tab-close">關閉目前頁籤</string> <string name="main-page-tab-open">開啟新頁籤</string> <string name="main-page-watchlist-remove">從監視清單移除頁面</string> <string name="main-scripting">腳本</string> <string name="main-scripting-script-manager">腳本管理器</string> <string name="main-user-info">顯示使用者資訊</string> <string name="main-user-ignore">忽略</string> <string name="main-user-unignore">取消忽略</string> <string name="main-user-contribs">在瀏覽器中顯示使用者貢獻</string> <string name="main-user-clear-talk">清除使用者對話頁</string> <string name="main-user-talk">檢視討論頁</string> <string name="main-user-page">檢視使用者頁面</string> <string name="main-user-message">訊息…</string> <string name="main-user-warn">警告...</string> <string name="main-user-db">降低使用者的不良記分</string> <string name="main-user-report">檢舉</string> <string name="main-user-ib">提高不良記分</string> <string name="main-user-contribution-browser">貢獻瀏覽</string> <string name="main-user-clear-tp">清除使用者的對話頁</string> <string name="main-user-retrieving-tp">取得 $1 的對話頁面</string> <string name="main-user-block">封禁...</string> <string name="main-user-manualtemplate">手動模板</string> <string name="main-browser-back">返回</string> <string name="main-browser-forward">前進</string> <string name="main-browser-open">在外部瀏覽器檢視</string> <string name="main-browser-newedits">顯示頁面新編輯</string> <string name="main-browser-newcontribs">顯示指定用戶的新貢獻</string> <string name="main-browser-lasttab">最後一個頁籤不可關閉 - 您必須至少開啟一個頁籤</string> <string name="main-revert-manual">沒有回退令牌,改以手動方式回退$1</string> <string name="main-han">HAN</string> <string name="main-han-disconnect">中斷連線</string> <string name="main-han-connect">連線</string> <string name="main-help">說明</string> 
<string name="main-han-display-bot-data">顯示機器人資料</string> <string name="main-han-display-user-messages">顯示使用者訊息</string> <string name="main-han-display-user-data">顯示使用者資料</string> <string name="main-help-documentation">說明文件</string> <string name="main-help-feedback">意見回饋</string> <string name="main-help-about">關於Huggle…</string> <string name="main-help-introduction">介紹</string> <string name="main-help-welcome-page">歡迎頁面</string> <string name="main-help-contents">內容</string> <string name="main-help-queuelegend">佇列說明</string> <string name="main-addqueue">增加…</string> <string name="main-savelogtitle">儲存日誌</string> <string name="main-usermessageother">其他訊息…</string> <string name="main-advanced">進階…</string> <string name="main-stats">每分鐘編輯$1次,每分鐘回退$2次</string> <string name="main-display-session-data">顯示連線階段資料</string> <string name="main-display-whitelist">顯示白名單</string> <string name="main-revert-custom-reson">請解釋為何您回退此編輯至先前修訂版本是合理的</string> <string name="main-log1">正在還原$1的選定修訂版本</string> <string name="main-tip-pagetagdelete">加入刪除標籤至此頁面</string> <string name="main-tip-pagedelete">刪除此頁面</string> <string name="main-tip-userreport">檢舉此使用者</string> <string name="main-tip-userblock">封鎖此使用者</string> <string name="main-revision-acceptpend">接受此待定修改</string> <string name="main-no-page">您所請求的操作無效,因為頁面尚未被載入或是為唯讀狀態緣故。請重新載入一個可編輯頁面。</string> <string name="main-revert-null">無法回退:零編輯</string> <string name="main-revert-newpage-title">無法還原此頁面</string> <string name="main-revert-newpage">此為新的頁面,所以不能被回退。您可以加入任何標籤或者刪除此頁面。</string> <string name="main-restore-text-fail">無法還原此修訂,因為沒有可用的文字內容</string> <string name="main-restore-data-fail">無法還原此修訂,因為 wiki 沒有提供該版本的資料</string> <string name="main-welcome-user">歡迎使用者</string> <string name="main-custom-reason-title">使用自訂理由回退編輯</string> <string name="main-custom-reason-text">請提供為何您回退此編輯的理由</string> <string name="main-custom-reason-fail">您沒有提供有效的理由</string> <string name="main-report-username">檢舉使用者名稱</string> <string name="main-report-user">回報使用者</string> 
<string name="main-edit-user-talk">編輯使用者對話頁</string> <string name="main-statistics-paused">暫停</string> <string name="main-statistics-waiting">等待更多編輯</string> <string name="main-statistics-none">無</string> <string name="main-tab-welcome-title">歡迎頁面</string> <string name="message-title">訊息 $1</string> <string name="message-help">請指定標題和/或摘要。若沒有摘要,則必須提供標題;若沒有標題,則不會加入標頭。</string> <string name="message-subject">主旨:</string> <string name="message-message">訊息:</string> <string name="message-summary">摘要:</string> <string name="message-autosign">自動附加簽名</string> <string name="message-retrieve-new-token">取得編輯 $1 的密鑰</string> <string name="message-fail-re-user-tp">無法取得$1。正在停止訊息寄送至該用戶</string> <string name="message-er">無法傳遞訊息至$1:$2</string> <string name="message-error">無法傳遞訊息至$1!請檢查日誌!</string> <string name="message-fail-retrieve-talk">無法取得使用者對話頁</string> <string name="message-done">成功在$2傳遞訊息至:$1</string> <string name="message-fail-token-1">無法取得編輯令牌</string> <string name="message-fail-token-2">此請求未回傳令牌</string> <string name="protect-request-title">請求保護 $1</string> <string name="protect-reason">原因:</string> <string name="protect-expiry">期限:</string> <string name="protect-log">保護日誌:</string> <string name="protect-currentlevel">目前保護層級:</string> <string name="protect-type">保護類型:</string> <string name="protect-none">無</string> <string name="protect-token">令牌無法取得。原因:$1</string> <string name="protect-semiprotection">半保護</string> <string name="protect-error">無法保護此頁面,原因:$1</string> <string name="protect-fail-no-info">沒有頁面資訊可用(您是管理員嗎?)</string> <string name="protect-fullprotection">全保護</string> <string name="protect-message-title-fail">無法保護</string> <string name="protect-moveprotection">移動保護</string> <string name="protect-ft">獲取令牌以保護$1</string> <string name="protect-request-proj-fail">此項目不支援保護請求</string> <string name="queue-title">序列</string> <string name="queue-queues">序列:</string> <string name="queue-typegroup">序列類型</string> <string name="queue-listselector">清單:</string> <string 
name="queue-listbuilder">清單生成器...</string> <string name="queue-sortorder">排序:</string> <string name="queue-removeviewed">檢視後移除編輯</string> <string name="queue-removeold">移除對相同頁面的舊編輯</string> <string name="queue-ignorepages">忽略在忽略頁面清單裡頁面的編輯</string> <string name="queue-traynotification">在修訂版本增加至佇列時顯示托盤通知</string> <string name="queue-diffsgroup">差異</string> <string name="queue-pagefiltersgroup">頁面標題篩選</string> <string name="queue-pageregex">符合正規表達式的標題:</string> <string name="queue-namespaces">命名空間:</string> <string name="queue-editfiltersgroup">編輯篩選條件</string> <string name="queue-userregex">符合正規表達式的用戶名稱:</string> <string name="queue-summaryregex">符合正規表達式的摘要:</string> <string name="queue-filternewpage">新頁面</string> <string name="queue-filterownuserspace">用戶擁有的用戶空間</string> <string name="queue-filteranonymous">匿名使用者</string> <string name="queue-filterignored">已忽略使用者</string> <string name="queue-filterreverts">回退</string> <string name="queue-filternotifications">通知</string> <string name="queue-filterwarnings">警告</string> <string name="queue-filtertags">標籤</string> <string name="queue-filterbot">機器人的編輯</string> <string name="queue-filterassisted">協助編輯</string> <string name="queue-filterhuggle">Huggle 編輯</string> <string name="queue-filterme">我的編輯</string> <string name="queue-examplelabel1">需要此屬性</string> <string name="queue-examplelabel2">排除此屬性</string> <string name="queue-examplelabel3">不要檢查此屬性</string> <string name="queue-usergroup">僅顯示來自這些使用者的編輯</string> <string name="report-title">檢舉 $1</string> <string name="report-warn">此用戶已接收到第4層級警告,也就是最後一次警告。您可以現在就提報他,但請確定該用戶已經接收到適當的警告!您可以點擊以下的「對話頁」按鈕來進行。請注意無論您回退是否成功該表單與警告訊息皆會顯示,因此您可能會與其他用戶發生衝突(請再次確認該用戶確實已可被提報)。您確定要提報此用戶嗎?</string> <string name="report-tu">回報使用者</string> <string name="report-reason">原因:</string> <string name="report-auto">已檢舉 $1</string> <string name="report-intro">您正要檢舉使用者 $1</string> <string name="report-message">訊息:</string> <string name="report-log">給此使用者的警告:</string> <string 
name="report-include">包含於檢舉中</string> <string name="report-select">請選擇清單中的差異以開啟預覽</string> <string name="report-write">寫入中</string> <string name="reportuser-not">此使用者現在未被檢舉</string> <string name="report-evidence-none-provid">沒有提供比較內容作為證據。這會讓管理員很難判斷用戶是否為破壞者。您確定要繼續嗎?</string> <string name="report-duplicate">使用者已被檢舉</string> <string name="report-fail2">錯誤,無法在$1取得回報頁面</string> <string name="report-progress">檢舉 $1 中…</string> <string name="report-retrieving">取得目前檢舉頁面...</string> <string name="report-done">已回報</string> <string name="report-user">檢舉</string> <string name="score-score">記分</string> <string name="score-word">字</string> <string name="score-range">範圍</string> <string name="score-take-any-no-talk-word">符合在非對話頁面上的任何文字</string> <string name="score-take-whole-no-talk-word">僅符合在非討論頁面上的整個單詞</string> <string name="score-only-whole-word">僅符合整個單詞</string> <string name="score-take-any-word">符合此字串所包含任何字詞</string> <string name="ssl-is-not-supported">您的請求使用 SSL 安全協議,但很遺憾地,$1 並不支援。</string> <string name="ssl-required">網頁伺服器 $1 要求開啟 SSL。請啟動 SSL 後再試一次。如果您不能開啟 SSL(該選項為灰色不可選情況)您可能需要在您的系統上安裝 OpenSSL 函式庫。</string> <string name="uaa-not-supported">UAA無法使用</string> <string name="uaa-not-supported-text">「管理員需注意的用戶名」公告欄在您的wiki上不可用。</string> <string name="uaa-reporting">檢舉 $1 至 UAA</string> <string name="uaa-reported">已回報</string> <string name="uaa-user-reported-title">使用者已被檢舉</string> <string name="uaa-user-reported">使用者已被檢舉至UAA。</string> <string name="uaa-user-unreported-title">使用者未被回報</string> <string name="uaa-user-unreported">此使用者未被檢舉至UAA。</string> <string name="uaa-no-reason-warn">您沒有指明此用戶名稱為何違反方針的原因。請指明一個原因。</string> <string name="uaa-nr">未提供理由</string> <string name="uaa-g1">獲取「需要管理員注意的用戶名」的內容</string> <string name="uaa-e1">此頁面內容的查詢沒有回傳資料。</string> <string name="uaa-e2">頁面內容不可用。</string> <string name="read-only">項目$1被設置為唯讀,編輯功能將被限制</string> <string name="revert-confirm-multiple">這將會回退$1的多筆編輯。要繼續?</string> <string 
name="revert-confirm-range">目標修訂版本$2的作者,與目前修訂版本作者$1在同一範圍網段,因此有可能是同一人。在進行之前請確認目標修訂版本為正確的,要繼續?</string> <string name="revert-confirm-same">這將會回退用戶$1所做出、本身即為回退的編輯。要繼續?</string> <string name="revert-confirm-warned">這將回退到由$1所做出的編輯版本,而對方先前已被警告過。要繼續?</string> <string name="revert-error-first">此為該頁面的第一個編輯,因此不能回退。</string> <string name="revert-only">此編輯是該頁面唯一的編輯。</string> <string name="revert-creator">最後一次編輯由頁面創建者$1做出。</string> <string name="revert-preflightcheck">預先檢查</string> <string name="revert-delete-instead">是否刪除此頁面?</string> <string name="revert-speedy-instead">是否標記快速刪除?</string> <string name="revert-cannotundo">因為編輯衝突緣故而無法撤銷此編輯。</string> <string name="revert-conflict">頁面被白名單用戶「$1」編輯</string> <string name="revert-fail">無法回退$1,原因:$2</string> <string name="revert-nochange">目標版本內容與現今版本相同</string> <string name="revert-fail-pre-flight">預先檢查失敗:$1</string> <string name="speedy-reason">原因:</string> <string name="speedy-parameters">參數:</string> <string name="speedy-parameters-fail">您必須為此快速選項提供參數</string> <string name="speedy-notifycreator">通知頁面建立者</string> <string name="rollback">回退$1</string> <string name="software-rollback">您沒有回退權限,改為軟體回退</string> <string name="tag-title">標籤 $1</string> <string name="tag-tagselector">加入標籤:</string> <string name="tag-tags">標籤:</string> <string name="tag-parameter">參數</string> <string name="tag-summary">摘要:</string> <string name="tag-insertatend">在頁面末端插入</string> <string name="update-title">可用的新版本</string> <string name="update-progress">正在下載新版本,請等待...</string> <string name="update-error">最新版本下載失敗</string> <string name="updater-title">有Huggle的新版本可以使用</string> <string name="updater-update">更新</string> <string name="updater-update-available">有Huggle的新版本可以使用:版本$1</string> <string name="updater-open-manualdownloadpage">開啟下載頁面</string> <string name="updater-close">關閉</string> <string name="updater-disable-notify">關閉更新通知</string> <string name="updater-wait">目前正在更新 huggle,請稍等...</string> <string name="user-history-fail">無法取得用戶$1的歷史</string> <string 
name="userinfo-generic">使用者資訊</string> <string name="userinfo-retrieve">取得資訊</string> <string name="userinfo-no-user">沒有使用者</string> <string name="shortcut-refresh">重整目前顯示的頁面(重新載入歷史記錄並跳到最近編輯裡)</string> <string name="shortcut-page-patrol-edit">巡查編輯</string> <string name="shortcut-report-username">檢舉使用者名稱</string> <string name="shortcut-report-user">回報使用者</string> <string name="shortcut-my-talk">我的對話頁</string> <string name="shortcut-raw">回退編輯並警告使用者</string> <string name="shortcut-contrib">顯示做出目前所顯示編輯之用戶的貢獻</string> <string name="shortcut-custom-msg">開啟一個表單來寄送自定義訊息給目前的使用者</string> <string name="shortcut-exit">以一般方式終止 Huggle</string> <string name="shortcut-next">前往下一個編輯</string> <string name="shortcut-suspicious">標記編輯為可疑</string> <string name="shortcut-back">前往先前顯示的編輯</string> <string name="shortcut-forward">前往一個過去您曾載入但之後被移回的編輯</string> <string name="shortcut-warn">向使用者傳送一則警告訊息,但不回退其編輯</string> <string name="shortcut-revert">還原編輯</string> <string name="shortcut-x-raw">觸發回退和警告選單中的第N個項目</string> <string name="shortcut-x-revert">觸發回退選單中的第N個項目</string> <string name="shortcut-watch">將目前顯示的頁面加入您的監視清單</string> <string name="shortcut-unwatch">將目前顯示的頁面從您的監視清單移除</string> <string name="shortcut-x-warn">觸發警告選單中的第N個項目</string> <string name="shortcut-good">標記此目前顯示的編輯為良好</string> <string name="shortcut-open-in-huggle">開啟目前顯示的頁面</string> <string name="shortcut-open">在瀏覽器開啟目前所顯示的編輯</string> <string name="shortcut-talk">顯示做出此次更改的使用者對話頁</string> <string name="shortcut-revert-agf-1">依據假定善意回退最新修訂</string> <string name="shortcut-edit">編輯目前顯示的頁面</string> <string name="shortcut-edit-in-browser">編輯目前顯示在瀏覽器的頁面</string> <string name="shortcut-browser-close-tab">關閉瀏覽頁籤</string> <string name="shortcut-tab">開啟新頁籤</string> <string name="shortcut-rw-stay">回退目前編輯,警告一名作者並停留在頁面</string> <string name="shortcut-revert-and-stay">回退目前編輯並停留在頁面</string> <string name="shortcut-custom-reason">使用自訂理由摘要來回退編輯</string> <string name="shortcut-user-contribs-browser">在瀏覽器裡開啟使用者的貢獻</string> <string 
name="shortcut-user-clear-talk">清除目前使用者的對話頁</string> <string name="shortcut-main-clear-queue">清除佇列</string> <string name="shortcut-revert-agf">假定善意回退,並自訂編輯摘要</string> <string name="warning-title">警告 $1</string> <string name="warning-levelgroup">警告等級</string> <string name="warning-levelauto">自動</string> <string name="warning-level1">等級 1</string> <string name="warning-level2">等級 2</string> <string name="warning-level3">等級 3</string> <string name="warning-level4">等級 4 (最高)</string> <string name="warning-confirm-title">確定發送警告?</string> <string name="warning-confirm-old-edit">由$3在$2對$1所做出的編輯已超過 1 天,您確定發送警告訊息給對方嗎?</string> <string name="warning-confirm-too-recent">在$2對$1所做出過編輯的$3,近期有在自己的對話頁上收到訊息(可能是一些警告模板?),您確定發送其它警告訊息給對方嗎?</string> <string name="warning-too-old-skip">由於$1在$3對$2所做出的編輯已超過 1 天,未向對方發送警告</string> <string name="warning-no-aiv-config">此使用者已達到等級 4 警告,而此 wiki 上並不支援 AIV(當前的破壞),您現在應該直接封禁該使用者</string> <string name="warning-warntype">警告類型:</string> <string name="warning-warnlog">給此使用者的警告:</string> <string name="warning-submit">傳送</string> <string name="warninglist-no-warning-title">沒有警告</string> <string name="warninglist-no-warning-text">此類型警告沒有資料,所以無法傳遞警告給使用者!</string> <string name="warninglist-report-text">使用者已收到最後的警告,因此將不再傳遞任何警告給對方,要改成報告他們的情況嗎?</string> <string name="wikiedit-tp-fail">無法取得$1警告層級。這將不會被用於分數上</string> <string name="missing-warning">查無此警告模板$1</string> <string name="han-not">反破壞網路未在選項中啟用</string> <string name="han-connecting">正在連線至 Huggle 反破壞網路</string> <string name="han-already-connected">因已連線至HAN,故沒有進行連線</string> <string name="han-already-connecting">請稍候,正在連線至HAN...</string> <string name="han-disconnected">您已從HAN失去連結</string> <string name="han-network">網路</string> <string name="han-send">傳送</string> <string name="error-loggedout">帳號已登出,重新登入中…</string> <string name="error-noresponse">無回應</string> <string name="error-pagemissing">此頁面不存在</string> <string name="error-reloginfail">重新登入失敗。可嘗試重新啟動 Huggle</string> <string 
name="error-timeout">請求超時</string> <string name="error-unknown">不明錯誤</string> <string name="block-done">已封鎖 $1</string> <string name="block-progress">封鎖 $1 中...</string> <string name="blocklog-fail">無法獲得$1的封禁日誌</string> <string name="blocklog-none">$1沒有封禁日誌</string> <string name="blocklog-progress">正在取得$1的封禁日誌...</string> <string name="blocknotify-fail">沒有向$1通知封禁</string> <string name="blocknotify-progress">向$1通知封禁…</string> <string name="delete-done">刪除 $1</string> <string name="delete-fail">無法刪除$1</string> <string name="delete-progress">刪除 $1 中...</string> <string name="delete-user">此頁面為使用者頁面。您確定要刪除此頁面?</string> <string name="deletelog-fail">取得$1的刪除日誌失敗</string> <string name="deletelog-none">沒有$1的刪除日誌</string> <string name="deletelog-progress">正在取得$1的刪除日誌...</string> <string name="edit-bar-top">此編輯是最新修訂</string> <string name="edit-fail">無法編輯$1</string> <string name="edit-progress">編輯 $1 中...</string> <string name="history-fail">取得$1的歷史記錄失敗</string> <string name="history-progress">正在取得$1($2)的歷史記錄...</string> <string name="history-work">撤銷自己在$1所做出的編輯</string> <string name="history-no-item-selected">在歷史記錄小工具裡沒有項目被選擇</string> <string name="history-message-revert-title">您確定只撤銷此訊息?</string> <string name="history-another-edit">另一個編輯目前正被撤銷,請稍等...</string> <string name="history-not-found">沒有找到可以撤銷的</string> <string name="history-already-done">這已是撤銷的</string> <string name="history-message-revert-body">這是一個引用被回退的訊息,您確定要僅撤銷模板即使不編輯頁面嗎?(您需要撤銷回退除非您想要復原兩者)</string> <string name="history-revert-fail">因遇到錯誤:$2,所以無法撤銷您在$1的編輯</string> <string name="history-undone">成功撤銷在$1的編輯</string> <string name="history-retrieve-fail">無法取得我們想要撤銷自己編輯的頁面內容。錯誤:$1</string> <string name="history-welcome-msg-title">是否傳送歡迎辭?</string> <string name="history-welcome-msg">您已建立了此對話頁,因此不能被撤銷。您是否想改以一個歡迎模板來替換呢?</string> <string name="newtab-title">這是一個新頁籤!</string> <string name="newtab-text">Huggle 上的頁籤如同其它瀏覽器,採用著類似的運作方式。所有在 Huggle 的頁籤分享同一佇列,它們在您想要專注於特定頁面,或是您想事後檢查一筆編輯時很有用。</string> <string 
name="notify-fail">未通知$1的創建者</string> <string name="notify-unknowncreator">找不到頁面創建者</string> <string name="protect-done">已變更$1的保護層級</string> <string name="protect-fail">未變更$1的保護層級</string> <string name="protect-progress">正在變更$1的保護層級...</string> <string name="protectlog-fail">無法取得$1的保護日誌</string> <string name="protectlog-none">沒有$1的保護日誌</string> <string name="protectlog-progress">正在取得$1的保護日誌...</string> <string name="saveuserconfig-progress">正在更新用戶設定頁面...</string> <string name="loadglobalconfig-fail">無法載入全域設定頁面</string> <string name="loadprojectconfig-fail">無法載入項目設定頁面</string> <string name="loaduserconfig-fail">無法載入使用者設定頁面</string> <string name="reqprotection-badformat">請求頁面格式不明</string> <string name="reqprotection-duplicate">已請求保護</string> <string name="reqprotection-fail">無法為“$1”請求保護</string> <string name="reqprotection-progress">請求保護“$1”...</string> <string name="speedy-fail">無法對頁面標記快速刪除:$1</string> <string name="speedy-progress">標記“$1”為快速刪除...</string> <string name="speedy-wrong">請求標籤無效。請選擇不同的刪除原因</string> <string name="speedy-finished">已完成</string> <string name="warn-alreadyblocked">該使用者已被封禁</string> <string name="editquery-success">在$2成功編輯$1</string> <string name="editquery-token">取得令牌來編輯$1</string> <string name="editquery-token-error">無法取得編輯令牌</string> <string name="editquery-nocsrft">沒有跨站請求偽造令牌</string> <string name="editquery-error-retrieve-prev">無法取得頁面之前內容:$1</string> <string name="editquery-error-append">您不在編輯頁面時一同使用前置和附加操作</string> <string name="editquery-error-badtoken">錯誤令牌</string> <string name="editquery-invalid-token">無法編輯$1因為我在快取裡持有的令牌已無效。請再次嘗試編輯此頁面</string> <string name="provider-failure">在$2上的訂閱提供者$1失效,嘗試找尋其它可替代的提供者</string> <string name="provider-primary-failure">首要訂閱提供者失效,改退回成wiki提供者</string> <string name="rc-error">無法從wiki訂閱來取得資料,最後錯誤為:$1</string> <string name="rc-timestamp-missing">RC訂閱:項目遺失時間戳記屬性:$1</string> <string name="rc-type-missing">RC訂閱:項目遺失類型屬性:$1</string> <string name="rc-title-missing">RC訂閱:項目遺失標題屬性:$1</string> <string 
name="whitelist-download">正在下載新版本的白名單</string> <string name="logs-widget-name">系統日誌</string> <string name="processes-widget-name">處理</string> <string name="wait">請稍候...</string> <string name="function-miss">函式目前不可用</string> <string name="missing-aiv">此項目沒有使用AIV</string> <string name="missing-page">在此 wiki 上沒有 $1 頁面</string> <string name="updating-wl">正在更新白名單...</string> <string name="waiting">等待中...</string> <string name="feature-nfru">此功能僅限IP用戶使用</string> <string name="report-no-user">沒有使用者被選擇檢舉,請先選擇一位使用者</string> <string name="welcome-show">在下次您啟用 Huggle 時顯示此訊息</string> <string name="welcome-tp-empty-fail">此用戶沒有空的對話頁,您確定您要寄送訊息給對方?</string> <string name="welcome-page-miss-fail">Huggle無法檢索使用者討論頁內容,您卻要為他寄送歡迎訊息嗎?</string> <string name="cr-newer-edits">衝突解決:已回退包含新編輯的所有編輯。由同一位用戶:$1所編輯。</string> <string name="cr-resolved-same-user">衝突解決:已回退由同一位用戶作出之包含新編輯的所有編輯:$1</string> <string name="cr-stop-new-edit">衝突解決:不執行所有操作 ── 有對於$1的更新編輯內容。</string> <string name="cr-stop-multiple-same">衝突解決:不執行所有操作 ── 有由同一位用戶做出針對$1的多項編輯。</string> <string name="cr-revert-same-user">衝突解決:已回退所有編輯 ── 有由同一位用戶做出針對$1的多項編輯。</string> <string name="cr-message-same">有同一用戶在$1做出多次編輯,您確定要回退這些編輯嗎?</string> <string name="cr-message-new">這些是在$1較新的編輯。您確定您要回退它們?</string> <string name="cr-message-not-same">在$1上有其他用戶做出較新的編輯。您確定您要回退這些編輯嗎?(這可能會因為有較舊的令牌而失敗)</string> <string name="preferences-show-warning-if-not-last-revision">若您不是在頁面的最後修訂時顯示警告</string> <string name="preferences-delete-using-filter">您不能刪除一個正在使用的篩選條件</string> <string name="preferences-extension-disabled">此擴充功能被設定在下一次啟動 Huggle 時停用。</string> <string name="preferences-extension-disabled-restart">擴充功能被設定在下一次啟動 Huggle 時停用。</string> <string name="preferences-extension-enabled">此擴充功能未被編排在下一次啟動 Huggle 時停用。</string> <string name="preferences-extension-enabled-restart">擴充功能已啟用,您需要重新啟動 Huggle 來生效。</string> <string name="preferences-enforce-baw">在 HTML 差別裡強制成黑色與白色</string> <string name="preferences-watchlist-watch">加入到監視清單</string> <string 
name="preferences-watchlist-unwatch">從監視清單中移除</string> <string name="preferences-watchlist-preferences">遵循您在wiki上的偏好設定</string> <string name="preferences-watchlist-nochange">什麼都不做</string> <string name="preferences-auto-watch-talk">監視有接收到來自於您的警告之使用者對話頁</string> <string name="preferences-sounds-enable-queue">當新項目添加至佇列時播放音效</string> <string name="preferences-sounds-enable-irc">當在 HAN 裡有訊息收到時播放音效</string> <string name="preferences-sounds-minimal-score">通知的最小編輯記分</string> <string name="preferences-performance-catscansandwatched">啟用分類掃描及狀態監視過濾(各編輯需要額外的 API 查詢)</string> <string name="preferences-max-score">啟用全域最高記分(帶有較高記分的編輯會被忽略 - 這在您只想查看良好的編輯時很有用)</string> <string name="preferences-min-score">啟用全域最低分數(帶有較低分數的編輯會被忽略 - 這在您只想查看不佳的編輯時很有用)</string> <string name="preferences-queue-size">佇列長度</string> <string name="preferences-empty-queue-page">空佇列頁面:</string> <string name="preferences-reset-gui">重新設定圖形介面</string> <string name="preferences-restore-factory-layout">這將會重新儲存您所安裝的 Huggle 配置。Huggle 會先進行關閉,要繼續嗎?</string> <string name="preferences-keystroke-rate-limit">防止以下毫秒單位頻率的重複鍵盤敲擊(同一快速鍵在 N 毫秒內僅可觸發 1 次)</string> <string name="preferences-color-scheme-diff">差異色彩</string> <string name="preferences-color-scheme-diff-default">預設</string> <string name="preferences-color-scheme-diff-dark-mode">暗黑模式</string> <string name="fail">失敗</string> <string name="request">請求</string> <string name="requesting">請求中</string> <string name="requested">已請求</string> <string name="protect-request-fail">無法取得目前回報頁面:$1</string> <string name="protect-request-fail-notext">無法取得目前回報頁面因為查詢未回傳此頁面任何文本</string> <string name="protect-request-fail-notime">此查詢沒有回傳任何時間戳記。這也許是一個MediaWiki程式錯誤。正在放棄此查詢</string> <string name="message-fail">訊息傳送失敗</string> <string name="edit-conflict">編輯衝突</string> <string name="page-tag-nodescription">沒有此標記的描述</string> <string name="page-tag-noparameters">沒有參數</string> <string name="page-tag-fail">標記頁面失敗</string> <string name="page-tag-error">無法標記頁面,錯誤:$1</string> <string 
name="page-unknown">不明的頁面</string> <string name="summary-edit-range">顯示從修訂ID$1至$2之間範圍的差異</string> <string name="speedy-csd-invalid">無效CSD標籤,沒有訊息和wiki標籤可用</string> <string name="speedy-csd-existing">此頁面已有一組快速刪除的標籤。</string> <string name="namespace">命名空間</string> <string name="block-alreadyblocked">使用者已被封鎖</string> <string name="report-fail">由於 $1,檢舉使用者失敗。</string> <string name="report-page-fail">錯誤:無法擷取在 $1 的檢舉頁面</string> <string name="report-page-fail-time">無法擷取檢舉頁面的目前時間戳記,API 錯誤:\n\n$1</string> <string name="report-unable">無法回報使用者</string> <string name="about-qt">,使用 QT $1 編譯,執行於 QT $1</string> <string name="about-info">,基於$1,目標平台:$2</string> <string name="error-unknown-code">不明錯誤:$1</string> <string name="relogin-fail">無法登入至 wiki:$1</string> <string name="projects">項目</string> <string name="unable-to-retrieve-user-list">無法取得用戶清單:$1</string> <string name="login-no-userinfo">站台未回傳任何用戶訊息</string> <string name="login-invalid-register-date">Mediawiki 回傳的註冊日期無效</string> <string name="login-site-info-query-failed">對於$1的站台查詢已失敗:$2</string> <string name="login-no-site-info-returned">沒有回傳給此wiki的站台資訊</string> <string name="loading-main-window">載入 Huggle 主視窗...</string> <string name="api.php-invalid-response">API 檔案回應了無效文字(網頁伺服器當機?),請檢查除錯日誌檔以獲取精準訊息</string> <string name="login-username-doesnt-exist">此用戶名稱不存在</string> <string name="developer-mode-enter-title">喲呵</string> <string name="developer-mode-enter-message">您已進入了開發者模式!</string> <string name="unimplemented">此功能尚未實行</string> <string name="unknown-item">不明項目</string> <string name="title-multiple-projects">多個項目($1)</string> <string name="title-on">在$1</string> <string name="main-revert-custom-reason-text">未提供理由/自訂回退</string> <string name="main-revert-already-pending-title">此編輯已被回退</string> <string name="main-revert-already-pending-text">您不能回退此已被回退的編輯。請先稍待!</string> <string name="main-revert-type-unknown">不明</string> <string name="main-revert-type-in-userspace">在用戶空間</string> <string 
name="main-revert-type-made-by-you">由您做出</string> <string name="main-revert-type-made-on-talk-page">在對話頁做出</string> <string name="main-revert-type-made-white-list">由白名單上的一位用戶做出</string> <string name="query-result-nodata">查詢結果未包含任何資料</string> <string name="main-revert-warn">此編輯是$1,因此即使看起來像是個破壞行為,實際上也可能並非如此。您確定要回退此編輯嗎?</string> <string name="query-request-noaction">沒有為api請求提供操作</string> <string name="query-login-undefined">登入令牌需要用戶定義內容,它並沒有儲存於記憶體中</string> <string name="query-protected-link">已保護連結</string> <string name="queue-empty-title">空的</string> <string name="queue-empty-text">佇列裡沒有東西...</string> <string name="protip">重要提示</string> <string name="wikipagetagsform-group">群組內部{{$1}}</string> <string name="tip1">Huggle 如同其它任何網路瀏覽器般支援頁籤!如果您作了一些編輯並希望能維持在事後使用,您可直接開啟新頁籤來繼續檢查佇列。佇列在所有頁籤裡可互相共享。</string> <string name="tip2">雖然預設上並不啟用,但 Huggle 擁有以簡潔方式來保存使用者與頁面歷史的小工具「編輯列」。若您是使用較小的螢幕,您可以此來替換目前的使用者與頁面歷史小工具!</string> <string name="tip3">多數個人設置儲存在 wiki 裡的 [[Special:MyPage/huggle.yaml.js]]。您可以在那裡手動更改它們。</string> <string name="tip4">在螢幕上的小工具幾乎都可安全地關閉(用於導覽列上的除外)。</string> <string name="tip5">您可在多個項目上同時使用 Huggle,僅需在登入表單裡點擊「項目」按鍵即可。</string> <string name="tip6">若您是有經驗且未出錯過的使用者,可以試著開啟個人設置裡勾選「即時回退」來讓 Huggle 的回退動作更快速 ;)</string> <string name="tip7">您可以在個人設置裡創建您自己的佇列篩選。它們會儲放在您 wiki 的設定頁面裡。</string> <string name="tip8">按下「G」將會標記為「良好編輯」,這會通知其他人該編輯內容可以忽略,和減少使用者不良記分以及可選擇來發送歡迎訊息給他們。</string> <string name="tip9">按下「S」將會標記為「可疑編輯」,這會通知其他人該編輯內容需要注意,並添加至可疑編輯佇列裡來讓較精通的使用者事後檢閱。</string> <string name="tip10">多數捷徑方式可在您的個人設置裡更改。</string> <string name="tip11">在 Huggle 螢幕上移動小工具到另一小工具情況下會將兩者合併為一,並且以頁籤來分隔。</string> <string name="tip12">不小心出錯了?沒關係,請在「您的編輯歷史」裡的任一編輯按下右鍵,將會跳出提供選項來撤銷您所作出更改內容的視窗。</string> <string name="tip13">您可以透過按下 Shift + C 鍵(預設值),或是使用者選單來開啟「貢獻瀏覽器」顯示出目前建立此筆貢獻內容的使用者。這能讓您快速檢視此使用者最近的貢獻。</string> <string name="tip14">您可以與其他正使用此應用程式的 Huggle 活躍使用者聊天。只要他們有在線上,可以在「網路」視窗裡找到他們。</string> <string name="tip15">如果您找到其它無法用回退解決的條目問題,您可以按下 E 鍵來在您預設的瀏覽器(而不是在 Huggle)裡立即編輯條目。</string> <string 
name="tip16">若您不小心按到了回退按鈕,您仍可以透過快速按下 ESC 鍵來試著取消進行中的回退。</string> </resources>
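As an aside on the resource format above: each entry is a `<string name="...">` element whose text uses positional placeholders ($1, $2, ...) that the application fills in at runtime. A minimal sketch of how such a resource could be loaded and formatted follows; the English sample strings and the function names are illustrative assumptions for this example, not Huggle's actual loader.

```python
# Sketch: load Huggle-style localization strings and fill the positional
# $1/$2 placeholders seen in the resource file above. Sample strings and
# function names are assumptions for illustration only.
import re
import xml.etree.ElementTree as ET

SAMPLE = """<resources>
  <string name="block-progress">Blocking $1...</string>
  <string name="editquery-success">Successfully edited $1 at $2</string>
</resources>"""

def load_strings(xml_text):
    """Map each <string name="..."> entry to its text template."""
    root = ET.fromstring(xml_text)
    return {el.get("name"): el.text for el in root.findall("string")}

def localize(strings, key, *params):
    """Substitute $1, $2, ... with the matching positional parameter."""
    return re.sub(r"\$(\d+)", lambda m: params[int(m.group(1)) - 1],
                  strings[key])

strings = load_strings(SAMPLE)
print(localize(strings, "block-progress", "ExampleUser"))
# -> Blocking ExampleUser...
```

A production loader would also need missing-key fallbacks and escaping, which this sketch omits.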
Toronto Raptors v Detroit Pistons Dan Lippitt Kyle Singler #25 of the Detroit Pistons goes to the basket during the game between the Detroit Pistons and the Toronto Raptors on March 29, 2013 at The Palace of Auburn Hills in Auburn Hills, Michigan. Toronto Raptors v Detroit Pistons Dan Lippitt Jonas Jerebko #33 of the Detroit Pistons goes to the basket against Jonas Valanciunas #17 of the Toronto Raptors during the game between the Detroit Pistons and the Toronto Raptors on March 29, 2013 at The Palace of Auburn Hills in Auburn Hills, Michigan. Toronto Raptors v Detroit Pistons Dan Lippitt Viacheslav Kravtsov #55 of the Detroit Pistons shoots a free throw during the game between the Detroit Pistons and the Toronto Raptors on March 29, 2013 at The Palace of Auburn Hills in Auburn Hills, Michigan. Toronto Raptors v Detroit Pistons Dan Lippitt Jonas Jerebko #33 of the Detroit Pistons goes to the basket during the game between the Detroit Pistons and the Toronto Raptors on March 29, 2013 at The Palace of Auburn Hills in Auburn Hills, Michigan. 
Toronto Raptors v Detroit Pistons Dan Lippitt John Lucas #5 of the Toronto Raptors protects the ball from Brandon Knight #7 of the Detroit Pistons during the game between the Detroit Pistons and the Toronto Raptors on March 29, 2013 at The Palace of Auburn Hills in Auburn Hills, Michigan. Toronto Raptors v Detroit Pistons Dan Lippitt Brandon Knight #7 of the Detroit Pistons goes for a jump shot against Terrence Ross #31 of the Toronto Raptors during the game between the Detroit Pistons and the Toronto Raptors on March 29, 2013 at The Palace of Auburn Hills in Auburn Hills, Michigan. Toronto Raptors v Detroit Pistons Dan Lippitt Kim English #24 of the Detroit Pistons drives against DeMar DeRozan #10 of the Toronto Raptors during the game between the Detroit Pistons and the Toronto Raptors on March 29, 2013 at The Palace of Auburn Hills in Auburn Hills, Michigan.
Making rice wafers – Can Tho, Vietnam This picture was taken in one of the villages alongside the Mekong river in Vietnam. This lady was making rice wafers, which are used extensively in Vietnamese cuisine, including in the famous nems.
Ask HN: How do you build applications that require zero maintenance? - totalperspectiv Is it even possible? How do you, as a developer, not become a lifetime maintainer? ====== mamcx Not 100%, but close. 1- Shield yourself from dependencies. Use languages that are robust (no JS, no PHP) and, if possible, generate native executables (Delphi, Rust, Go, ...). Languages with large runtimes/dependency trees (Java, .NET, JS) are always trouble in the long run. I came back with .NET Core and have heavily regretted it (because of maintenance, not performance or dev productivity). I.e.: if a language/tool/ecosystem is built for large teams (Java, .NET, C++), it is not for zero maintenance. How easy it is to set something up and get it running (without resorting to auto-installers or Docker images) is strong evidence of how well it will do on this metric. 2- If possible (mobile), use iOS. Android is not robust at all for long-term prospects. I made an enterprise iOS app that is also nearly worry-free; my only job is to upgrade Xcode and recompile from time to time. I entered Android late in the game, dreaming that Android would become more or less good. It didn't. 3- Important! If possible, split the program between what can be "zero maintenance" and what is high maintenance. I made one in Delphi that has been running for 10 years in some places with zero support calls after the first 3 years of tweaking. It is split in 2 parts: the Delphi side has stayed solid, and the "business" side that requires more changes lives in Python scripts that I integrate with the Delphi side. Even the Python side is now nearly worry-free, but I need to make changes here and there. 4- If you need JS, pick VERY carefully how you use it. JS is the antithesis of zero maintenance. The web is hostile in this area. You can limit the damage if you VERY carefully choose what to use. 5- Stay away from stupid "it's for scalability" traps. Microservices, NoSQL and the like are the best way to destroy productivity. 
Modular code running in a monolith (or maybe as a REST API + client) will be more than enough by long margins when coupled with PostgreSQL or SQLite. Again, a solid RDBMS is what most need. "Eventual" consistency is a stupid choice most of the time. P.S.: I'm a solo developer, and I have neither the time nor the high pay to cover for bad choices, so I try to perform very well on this metric. ~~~ austincheney > JS is the antithesis of zero maintenance. If you know what you are doing you can write zero-dependency code that works well with old and modern browsers alike. This isn't as hard as it sounds either. ~~~ Silhouette Sure you can, but it might take you 2-3x as long to get the same results as you could have got by using modern JS features and APIs that aren't supported in older browsers, and it might take you 10-20x as long if you also have to reinvent the wheel for everything instead of using a few good libraries. For most projects in the real world, that is not going to be a trade-off worth making. ~~~ austincheney The only way that is true is if your friendly abstraction is producing your test automation for you. With no understanding of the DOM and the standard APIs, writing without dependencies might take 1000x longer. I was assuming a competent developer when I mentioned this isn’t that hard. ------ no_gravity 1: A slim stack Every part of the stack will need maintenance every now and then. Some parts even introduce breaking changes and force you to alter your own code. The slimmer the stack, the less often you have to fix it. And the less often you have to fix or refactor your application. 2: A stack that values stability Linux is a good example. Linus Torvalds: "We do not break userspace!" PHP is another one. The core developers rarely introduce breaking changes. And when they plan to do so, there is usually an intense fight over it. 
3: Acceptance Testing In the simplest form that means sending HTTP requests to your web application and checking if it returns the expected output. In my experience, acceptance tests find more real-world issues than unit tests. 4: Write less code Writing the same functionality with less code has multiple advantages. One of them is that it will break less often. Much more could be said about it. Paul Graham brings up the value of terseness frequently: [https://twitter.com/paulg/status/1068483193605681152](https://twitter.com/paulg/status/1068483193605681152) [https://twitter.com/paulg/status/1126403387044573185](https://twitter.com/paulg/status/1126403387044573185) [https://twitter.com/paulg/status/1056858408039735297](https://twitter.com/paulg/status/1056858408039735297) Less is more. This is actually one of the reasons why I named my account no_gravity. ~~~ AnIdiotOnTheNet > 2: A stack that values stability > Linux is a good example. Linus Torvalds: "We do not break userspace!" Sadly the people making things in userspace don't seem to mind breaking it frequently. ------ perlgeek There are different kinds of "zero maintenance": * sqlite says it's "zero maintenance" because nobody has to keep a database server running, and your .sqlite3 files don't need a defragmentation step or similar. * There are middlewares like RabbitMQ where an upgrade through the OS installer generally Just Works [TM], no additional steps necessary. Yet somebody should monitor the RabbitMQ instance, just in case the service does go down or reaches resource limits. * There are tools that have a very limited scope and API surface and stay stable for a looong time; those are also kinda "zero maintenance". In my experience, all serious business applications that automate workflows or otherwise create value do need some kind of regular maintenance. Depending on the installation base and maintenance effort, striving for zero maintenance might not be cost effective. 
> How do you as a developer, not become a lifetime maintainer? A maintainer is a developer. Depending on the project, things you can do if you don't want to burden yourself with maintenance: * Build a community around the project, and hand maintenance to the community * abandon a project * leave lots of documentation that makes it easy for others to maintain it * pay somebody to maintain it * work as a consultant/contractor, and fire the client after the initial development phase (might not be the best for your reputation, could be OK if you are up-front about it and make a _very_ good handoff). * If most of the maintenance is keeping it up at all, engineer for availability over consistency (if applicable to the business domain). * Accept that maintenance is part of the normal lifecycle The appropriate strategies highly depend on the kind of project. ~~~ waste_monk > and your .sqlite3 files don't need a defragmentation step or similar They do benefit from a VACUUM from time to time [0]. [0] [https://sqlite.org/lang_vacuum.html](https://sqlite.org/lang_vacuum.html) ------ whack I built a side-project 3 years ago: [http://www.thecaucus.net](http://www.thecaucus.net) I spent quite a bit of time getting it off the ground, but over the last 2 years, I've spent maybe ~10 hours in total keeping it running. The only times when I've had to invest time into it is when a certain piece of my tech stack gets deprecated or discontinued. For example, I had originally done login-auth through a SaaS provider, which then got acquired by a larger company which discontinued their API. I then had to go through their migration process to keep the site running. However, the above doesn't happen all that often, especially if you choose stable technologies and companies. Besides that, I had consciously designed the site to be completely free of manual maintenance, even if it involved more upfront costs. 
Examples: \- Going serverless via Heroku \- Using SaaS services like RDS, S3, SendGrid, etc. \- Using scripts and cron-jobs/heroku-schedulers to automate anything that needs periodic maintenance \- Relying on "push" alerts as opposed to "polling" alerts. I.e., when something goes wrong, your server should notify you immediately, instead of waiting for you to periodically check some dashboard More details: [https://software.rajivprab.com/2018/04/29/caucus-tech- stack/](https://software.rajivprab.com/2018/04/29/caucus-tech-stack/) ------ SavageBeast Two things are going to make Zero Maintenance kind of difficult. Assuming Ubuntu on an AWS EC2 instance here. First one is taking OS updates that are security critical, as any time you take an update there's a chance something somewhere gets broken. Second one is AWS instances themselves being switched. From time to time (very infrequently in my experience) Amazon will send you an email indicating your EC2 instance is being migrated to a new physical host and to initiate this you must manually restart your instance. 
As for the rest, a script in root's crontab that does: 1 - Delete log files older than X (because running out of disk space is not a good situation) 2 - Hit Let's Encrypt for new certs if necessary (because expired certs give a lousy customer experience) 3 - Preemptively bounce any application servers (Nginx, PHP-FPM, Tomcat, what have you) 4 - Set up all hosts such that your critical software restarts at boot time in the event of an unexpected reboot situation (gives you the ability to cron-schedule a nightly reboot command) Additionally: * If you're running any database, regardless of where, be sure you allocated enough space that you don't end up running out of database storage * Implement some downtime monitoring to tell you when there are issues * It's a good practice to be changing passwords on any authenticated resource at some interval too The practices listed here (off the top of my head - not an all-inclusive list) are about as close as I attempt to get to Zero Maintenance myself. Customers pay for systems to be developed and they should expect some level of ongoing care and feeding, as entropy is pervasive. But it's a good excuse to sell your customers a maintenance contract, right? ~~~ cagmz > Preemptively bounce any application servers Do you mind explaining this? ~~~ SavageBeast I personally like to stop/start any application server or web server running on my hardware over the weekend with cron. I say "preemptively bounce" on the assumption that that which is not periodically rebooted will at some point crash. I'd rather take a few minutes' outage at a time of my choosing. It's basically just cheap insurance for the paranoid. ------ bananatron I've built 4-5 web apps over the last 5 years that have required zero/little maintenance. 
There are only three reasons I've been drawn into some projects: 1) A bug occurs and needs to be fixed 2) An external dependency breaks something \- Write tests and/or have high confidence in code/infrastructure behavior (using tools you know really well helps). Aggressively avoid cognitive complexity in code constructs. \- Limit integration w/ 3rd parties EXCEPT the ones that further your goal of zero maintenance. \- Create interfaces/tooling which make automating tasks easy. \- Get notified when things break ------ _ah No-maintenance software must live in an environment which never changes (certain microcontroller code and kiosk-type installations fit this definition). If you are developing software exposed to the "normal" world, then change is inevitable as the environment evolves around you. Maintenance is a part of life. The key is to minimize the cost of that maintenance through good architecture and robust error handling. ~~~ noir_lord Indeed, even NASA probes have mechanisms to be updated. The blessing and curse of software is its mutability. ------ bdcravens Even if your business requirements never change, security updates are a thing. Many libraries have stated support policies after which a given version no longer receives updates. Fixing your application at a point in time and never updating dependencies, etc., is a liability. Assuming you update dependencies, at some point you'll have breaking changes. (not really getting into the nuance of whether or not everyone will obey the rules of semantic versioning) This all assumes your application is like most being built these days: you're using a framework on the web. If you put your application on a machine that is never exposed to the web, or you create an application with zero dependencies (including OS-level dependencies), you might get away with never updating it. ------ rockyj Think about it this way: every line of code you write will become obsolete one day (e.g. 
business will change or a new version of the language comes out). Every external dependency, where you deploy, how you deploy, the OS where you host the application, even how the user consumes your application will change. So to avoid any maintenance, minimize code, complexity and dependencies, and use a language and platform that can last a few years. Essentially, in today's world this will be very difficult, and doing this will also cost you development time; there is no easy way around it. For example, you can build your whole app in Clojure, which is rock solid and stable, but even then you will have to patch the OS, JVM, DB, etc. With traffic changes you will have to scale up / down. ------ seanwilson I've developed a couple of apps that only required a few hours a year of essential maintenance. The biggest obstacles have been being forced to upgrade libraries to work with changes to third-party APIs (e.g. payment APIs, ad APIs), upgrades to do with OS changes (e.g. new versions of Android) and applying security patches to dependencies. To decrease maintenance, generally you want to reduce your dependence on external services that might change, minimise the use of complex third-party libraries that might have security problems later, and keep your app as simple as you can to reduce the chance of bugs. It's hard to predict what changes are going to be required for a given ecosystem (e.g. Android, Mac, Chrome Web Store, web browsers), so sometimes it's a matter of luck unless you can guarantee the OS, hardware, APIs etc. are never going to change. I'm currently working on a Chrome extension ([https://www.checkbot.io](https://www.checkbot.io)) that doesn't require a lot of essential maintenance besides keeping up with changes Google have been making to make extensions more secure. Bracing myself for whatever breaking Manifest v3 changes are announced. 
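SavageBeast's four-step crontab routine earlier in the thread can be sketched concretely. The paths, schedules, and service names below are illustrative assumptions for one common stack (systemd, Nginx, PHP-FPM, certbot), not a recommendation that fits every setup:

```shell
# Illustrative root crontab for a "zero-ish maintenance" box.
# Times, paths, and unit names are assumptions, not a standard.
# 1 - prune logs older than 30 days so the disk never fills up
0 3 * * 0   find /var/log/myapp -type f -name '*.log' -mtime +30 -delete
# 2 - renew Let's Encrypt certificates when they approach expiry
15 3 * * 0  certbot renew --quiet
# 3 - preemptively bounce the application servers at a time of your choosing
30 3 * * 0  systemctl restart nginx php-fpm
# 4 - cover unexpected reboots; with systemd the usual fix is
#     `systemctl enable` on each unit, but @reboot works as a fallback
@reboot     systemctl start nginx php-fpm
```

The weekly `systemctl restart` entry is the "cheap insurance" bounce SavageBeast describes: a short outage at a time you choose instead of a crash at a time you don't.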
------ tiborsaas It's really hard to answer if you don't provide any context :) Zero maintenance for a blog, a small webshop or some Kubernetes-controlled madness running a zillion pods? For the first two: 1) Make everything static, HTML, CSS, don't manage your own servers 2) Use as few dependencies as possible when dealing with JS 2 a) For the backend, try to find a service that does the work for you without you deploying your own servers 2 b) Maybe cloud functions are enough? 3) Write tests 4) Don't write bugs :D 5) Set up alerts (uptime robot) ~~~ andoma > 2 b) Maybe cloud functions are enough? I generally agree, but eventually you're gonna start getting emails saying things like "We're deprecating Node 8.x in our Lambda / Cloudfunction / etc" ~~~ tiborsaas Zero maintainability should mean 0.01% maintainability :) But that's true; one should go for a compiled language like Rust or Go. ------ mister_hn I have 100% zero-maintenance software running on a customer's site. It is a Debian-powered machine, not connected to the internet, which is performing various tasks and controlling some hardware. The software running on it was built 6 years ago in C++, using OpenCV and CUDA. The machine powers itself on and off on a specific routine due to on-site checks. It clears old logs automatically if they are older than X days or bigger than Y MBytes. To date, no software or hardware failure has occurred. The whole disk is snapshotted with the dd command. If a hard drive failure occurs, it's easy to swap it and start again. ------ drinchev That's an amazingly useful thread. My 2 cents regarding the tech stack: 1\. Don't use a build system. Use a scripting language that comes preinstalled on Debian / Ubuntu. Perl 5 is still my weapon of choice for robust small web apps. Look at Python too. 2\. Avoid JS / Ajax. I can bet that security restrictions will break it in the future; however, good old form POST still works pretty well. 3\. Use your own hardware. A Raspberry Pi with an HDD or some Banana Pi works quite all right. 
Also provisioning is easy by keeping a copy of your image. 4\. Don’t use managed services. Use a database on the very same machine that hosts the app itself. Backup is an extra step you can achieve via those, but don’t use an external service, since it will most probably break. ------ rinchik What applications? A bit more specifics? Apps cannot be zero maintenance by definition. Apps are literally alive, apps mature, evolve, get older, there will always be some kind of maintenance. As the world changes, apps change with it. If ZERO maintenance is a HARD requirement, then think about total isolation. No packaging, and NO ENVIRONMENT CHANGES. With a constant, isolated env, it is possible to have a minimal-maintenance app. ~~~ rinchik Also FYI, with a zero-maintenance requirement the Agile approach will not work. Throw anything agile out of the door; you need a strict top-to-bottom waterfall with set requirements. Another argument that Agile cannot be a hammer for every nail. Common sense with project management and architecture is extremely important. ~~~ jakoblorz Your product will be developed at some point - that is when it doesn’t matter how the product was built. Agile is not an indefinite process. ------ Delphiza In order not to become a lifetime maintainer, you need to end-of-life applications. For technical reasons, we aspire to making applications have a long life, but from a business point of view, it's both unnecessary and difficult. Don't plan for an application to live for more than five years, especially v1. Put enough work into architecture and maintainability to be able to throw it out and redevelop after five years. Be clear about this upfront. In five years' time you won't even be able to find devs to maintain what was developed today. In order to 'maintain' an application properly, the business needs to keep _investing_ in modernization of the application, which is more than just maintenance. Let's say they need to invest 30% of the original cost per year.
Most will not do that, saying 'It is a capital asset that I paid for and it should work as expected for as long as I need it' \- okay, but in five years' time it will be so out of date that it will need to be redeveloped. ------ thesuperbigfrog >> Is it even possible? No. The only code that requires no maintenance is the code you do not write. In other words, "No code is easier to maintain than no code." Add that to the nihilist coding attributes: No code runs faster than no code. No code has fewer bugs than no code. No code uses less memory than no code. No code is easier to understand than no code. No code is easier to maintain than no code. ~~~ wheelerwj what? ~~~ thesuperbigfrog Any code written will require some amount of maintenance. The actual effort required to maintain the code will vary depending on what the code does, the environment the code runs in, and the needs of the code's end users. Unmaintained code will ultimately fail the same way that all other things that humans build do. If someone does not maintain the code, it will eventually degrade and fail. Anytime you write code, especially for a long-lasting endeavour, you should consider how the code you write will be maintained and what might cause it to fail sooner than expected. ~~~ Stevvo That doesn't make any sense. I have code running on micro-controllers that has been running 24/7 for 10 years. It's never going to "degrade"; the hardware may fail, but the code is solid. I also have a maintenance contract for one of these: [https://arsandbox.ucdavis.edu/](https://arsandbox.ucdavis.edu/) It turns itself off at night and back on in the morning. 3 years running now without touching it once, used by hundreds of people daily. ------ hnruss Don't build it. Entrepreneurs occasionally ask me to take their idea and make it a reality. Their most common request is for me to build them a website so that they can sell stuff.
Sure, I could take their money and build it, but it's much better for them to sell on an existing e-commerce site or learn how to use a CMS. ------ drelihan Have it do one thing very well and document it clearly so users can build it into their own processes. If it is more complicated than that (i.e., it must do several things very well with ever-changing needs), document it clearly and include a link to the source code and build instructions. ------ jruz Render to static html, upload to Netlify or S3. Just works. ~~~ igammarays S3 requires an active AWS account with billing, etc., and uploading to Netlify implies that you're trusting that Netlify's platform will be free forever. ~~~ rubinelli The requirement was zero maintenance, not zero cost. If you have a web app, a hosting cost is expected. ~~~ robrtsql I think the implication here is that you have to 'maintain your credit card'. Not an issue until you switch banks or have some sort of issue with a corporate card that you use with your AWS account. I think it's being nitpicky for the purposes of this discussion, but I think there are probably several HNers who could tell you about how their product hosted in AWS went down because of a credit card issue. ------ CameronBarre Unless you're building an application that doesn't interact with external systems, chances are your job is never finished, even if you choose to consider it finished. If you don't want to be a lifetime maintainer, then you have to transfer ownership to someone else or consider the project to have reached its end of support. Consider an analogy to building houses: it may appear that the builder has finished their job since the house is fully constructed, but like any system, it requires maintenance for the entire duration of its lifetime. Systems degrade over time without maintenance. ------ carlosgonzoruiz A noted mainframe programmer, Roy Burnam, once said that "A program that requires no maintenance is a program no one uses!"
Perhaps this should be elevated to "Burnam's Law". ------ greatjack613 You don't. You develop it as sloppily as possible and then make yourself indispensable to your company by being the only one who can maintain the pile of junk. Seriously speaking, the following things help a lot: 1\. Reduce external dependencies. The less integration with 3rd-party APIs and services, the less maintenance you will be doing. 2\. Reduce external dependencies. 3\. Reduce external dependencies. 4\. Unit Tests 5\. Integration Tests 6\. End to End Tests 7\. Reduce external dependencies. ------ reilly3000 I have some simple serverless code that has run 500M+ times without issue. It took some extra work to get set up and monitored, but SQS and Lambda are amazing beasts. ------ wheelerwj A few thoughts from someone who makes a living designing and building stable, multi-year projects. 1\. KISS, keep it simple, stupid. Keep your code as simple and straightforward as possible. You can do that in a few ways: \- As others have said, focus on keeping dependencies to a minimum. This can include libraries but also other 3rd-party vendors/external APIs. \- Try to keep feature creep out. The more complex the codebase, the more likely it is to suffer failures. Keep things focused on solving specific problems. 2\. Build software that is meant to be delegated. You can't remove all dependencies. Whether it's libraries, external APIs, or hosting, every piece of software needs something to run. If you don't want to be stuck as a lifetime maintainer, then you'll need to ensure your code can run without you. There are many ways to do this: \- Use existing, well-tested, stable technology that is supported and maintained by your org, not something new and fancy for each project. \- Enlist others to help early on: sys admins, hosting providers, or whomever. Build your software in a way that is supported by those providers.
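Several commenters' advice boils down to "reduce external dependencies." As a small, hedged sketch of what that can look like in practice, here are two stdlib-only JavaScript helpers standing in for things often pulled in as npm packages; the function names and examples are mine, not anything from the thread:

```javascript
// Stdlib-only replacements for two common npm dependencies.
// Hypothetical illustrations of "reduce external dependencies".

// Replaces a date-formatting library when only one format is needed.
function isoDate(d) {
  return d.toISOString().slice(0, 10); // "YYYY-MM-DD"
}

// Replaces a utility library's groupBy for plain arrays.
function groupBy(items, keyFn) {
  const out = {};
  for (const item of items) {
    const k = keyFn(item);
    (out[k] = out[k] || []).push(item);
  }
  return out;
}

console.log(isoDate(new Date(Date.UTC(2019, 0, 2)))); // → "2019-01-02"
console.log(groupBy([1, 2, 3, 4], n => n % 2));
```

Ten lines of code you own outlive any package that can be deprecated, renamed, or broken by a major-version bump.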
------ conroy I run a few small apps that require zero maintenance, one being [https://upcoming.fm](https://upcoming.fm). All of them have a few things in common: * Hosted on Heroku, which gets you continuous deployment and Let’s Encrypt * Completely automated, even the smallest manual task * Low traffic, which may be the most important. Apps with large and growing user bases are very difficult to manage hands-off. ~~~ dna_polymerase This looks like it is dependent on Spotify's API? So if Spotify changes said API, it would need maintenance. Anything that depends on other stuff really is susceptible to changes and will need maintenance. ------ vbezhenar 1\. Try to make the software tolerant of wrong input. 2\. Develop tools which allow end-users to adjust the software; expose as many settings as possible. I don't think that those options are actually good in the end. If your software is tolerant of wrong input, it might just work wrong and nobody will notice it. If there are too many options, nobody will know all of them, and in the end they'll either ask you or configure it wrong. If you have too much flexibility, then developers who need to extend your software will be forced to work bound by inevitable restrictions. I like software that's precise and does only one kind of thing. It crashes on wrong input, so I can either blame someone who's responsible for the wrong input (I mean a service, not an ordinary user) or fix the software. It must not be flexible, but it should have a flexible architecture so I can just adjust or extend code to adapt it as necessary. ------ 2rsf Zero maintenance is not the same as becoming a lifetime maintainer. You haven't provided enough context here, but since we even patch software on Mars, I doubt you can build bug-free, feature-complete software that will last "forever", so you'll need to find ways to delegate maintenance to someone else. ------ codingdave By keeping it simple.
Probably too simple -- If your app has any complexity, in the code, the infrastructure, or the features, it will need maintenance. If your app is literally just a static web page that does one thing, and never will need a new feature, then it can sit up on the web without maintenance. But the odds of something that simple meeting your goals are slim. ------ Cshelton You keep the feature set of the application as small as possible. And have no outside integration other than the most common of protocols. ------ janpot Any application with dependencies will require maintenance at some point. Applications without dependencies don't exist. You may remove all your node modules. But you'll still depend on things like a programming language, hardware, lightning strikes and solar storms not happening,... ------ loktarogar \- Write code that you expect to fail, handle all errors you can anticipate, and keep monitoring up for those that you can't resolve programmatically \- Use a managed hosting service for web apps, like Heroku \- Aggressively reduce code complexity. The single responsibility principle is key. ------ Blackthorn You can program defensively, which is a learned skill, and your app might run for years without complaining. But some dependency might change something and a regular push brings everything crashing down until you fix it. So program defensively and expect to have to do the occasional bit of maintenance. ------ ping_pong Maintenance means change. Any time something changes, you will need to update the application. An application with zero maintenance means that it either has a very, very targeted function that doesn't change, or it isn't being used after a certain period of time. ------ kissgyorgy Handle every error possible in every code path and don't make new features at all.
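In the spirit of vbezhenar's "crash on wrong input" and kissgyorgy's "handle every error possible," here is a minimal fail-fast sketch in JavaScript. The config shape (`host`, `port`) is a made-up example for illustration, not anything from the thread:

```javascript
// Fail-fast config parsing: validate strictly and throw on bad input,
// rather than silently tolerating it and "working wrong".
// The config shape here is hypothetical.
function parseConfig(json) {
  let data;
  try {
    data = JSON.parse(json);
  } catch (e) {
    throw new Error("config is not valid JSON: " + e.message);
  }
  if (!Number.isInteger(data.port) || data.port < 1 || data.port > 65535) {
    throw new Error("config.port must be an integer in 1..65535");
  }
  if (typeof data.host !== "string" || data.host.length === 0) {
    throw new Error("config.host must be a non-empty string");
  }
  return { host: data.host, port: data.port }; // only known fields survive
}
```

A bad deploy then fails loudly at startup, when it is cheap to notice, instead of months later in some unattended code path.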
------ benologist I think your code and documentation should empower others to become experts in your place - write simple code, be very consistent, use minimal dependencies, test it thoroughly. ------ tboyd47 Why would you want to? The maintenance phase is the point where most projects finally get into the black after being a net loss for several years. ------ tingletech Halfway joking, but write them in a stable stack, Perl or COBOL? Maybe Common Lisp? Seems better than some stack that is in constant flux. ------ craftoman Yeah, you can build apps with zero maintenance but you have to pay a couple of bucks per month for microservices. ------ buboard Build a game. If you build a good audience, you can automate almost all the work. ------ romanovcode \- Have all binary dependencies part of the repo \- Never upgrade \- Cloudflare for SSL ------ ooooak Have a business model that never changes. Use the ecosystem that you are most familiar with and have fewer surprises. ------ KevBurnsJr Build it in Lisp and name it Hacker News. ------ TheCoelacanth Easy, build applications with zero users ------ GrumpyNl For me, most important, no dependencies. ------ pseudo_eu Don't write the application. ------ Crinus Desktop, web or mobile? My guess is that you'll get very different results based on this. Overall, stick with technologies that have been around for a long time, have proven themselves to be stable, and whose developers value stability. Sadly, most developers prefer to run fast and break things, regardless of tech, so your options will be limited. Also avoid anything latest, greatest, shiny and new, even if their developers promise stability, at least for the initial versions until the tech matures a bit. If you see anything that uses semver, run away. Semver might sound like a good idea on the surface in that any breaking change means increasing a major version.
But the flip side of that coin is that by choosing semver the developers communicate that they _do_ plan on breaking backwards compatibility at some point. Despite the excuses a lot of semver enthusiasts will tell you, there are very very VERY few reasons to break backwards compatibility, and the vast vast majority of them are imposed from outside (e.g. the sort where your OS drops 32-bit support and there isn't anything you can do about it, or the architecture you were relying on isn't supported by anyone anymore). On the desktop, stick with languages that have multiple independent implementations (independent not only in terms of who develops it, but also in terms of codebase) based on a standard, like C and C++. This way you can switch between implementations in case something goes bad. Also, do not use the latest versions of the standard unless every implementation (and by "every" I mean "really, truly, every", not just the popular ones) implements it with more or less the same features (stick with the least implemented ones). This gives you a greater set of choices when you decide to switch. For desktop UI on Windows, use the Win32 API or roll your own. If you plan on being cross-platform, roll your own anyway, since the only thing you can rely on (at least for the foreseeable future) on Linux is X11 - anything else is bound to break and/or not exist on your users' computers. Note that if you also plan on supporting macOS, rolling your own may not be liked over there, and you should seriously consider if it is worth the hassle, since, as Apple has proven many times, they do not care about backwards compatibility, so you'll need to maintain your app regardless (though you can try to minimize that to just a recompile). For web I do not know much, but I'd stick with stuff that does not break. PHP and Java _seem_ stable.
Client-side things tend to be very stable, though Google does give me the impression that they'd like to flex their muscle to drop some stuff they consider "bad". For mobile, abandon all hope; it is the most ephemeral platform. Beyond that, make either very small programs that you can easily modify or modular programs where you can swap out things without much hassle. Personally, I work on desktop. For my own stuff I stick with C89 and Free Pascal. The former doesn't change; the latter does change, but very infrequently, and it is all statically compiled anyway, so assuming the underlying stuff does not change, it'll work. Lazarus on Linux will eventually be an issue because of Gtk2, but there isn't much that can be done about that (the author of FPGUI - which relies only on X11 - says that it can be used with the LCL FPGUI backend, but personally I haven't tried it and I think the backend isn't very mature). Win32 stuff is practically eternal (and funnily enough, on Linux, Win32 via Wine is the most stable ABI - essentially making x86+Win32+C89 the most stable combo even if Microsoft drops it :-P).
An Interview With God is a beautiful short story about a man who dreams he has a chance to speak with God. The answers God provides to the questions he is asked are truly thought-provoking and should give many people something to think about as they go about living their life. The video will resonate with many people who believe in God and I hope they find it as enjoyable to watch as I did.
# Owl Carousel 2 Thumbnails plugin

Enables thumbnail support for Owl Carousel 2.0

## Quick start

Grab the [latest release](https://github.com/gijsroge/OwlCarousel2-Thumbs/archive/0.1.7.tar.gz) and slam it behind the default Owl Carousel plugin.

##### Enable thumbs

```javascript
$(document).ready(function(){
  $('.owl-carousel').owlCarousel({
    thumbs: true
  });
});
```

## Use pre-rendered HTML as thumbnails. **_recommended_**

```javascript
$(document).ready(function(){
  $('.owl-carousel').owlCarousel({
    thumbs: true,
    thumbsPrerendered: true
  });
});
```

##### Add thumbnails (link slider and thumbnails with data-slider-id attribute)

```html
<div class="owl-carousel" data-slider-id="1">
  <div>Your Content</div>
  <div>Your Content</div>
  <div>Your Content</div>
  <div>Your Content</div>
</div>
<div class="owl-thumbs" data-slider-id="1">
  <button class="owl-thumb-item">slide 1</button>
  <button class="owl-thumb-item">slide 2</button>
  <button class="owl-thumb-item">slide 3</button>
  <button class="owl-thumb-item">slide 4</button>
</div>
```

## Or add a data-thumb attribute to your slides

```html
<div class="owl-carousel">
  <div data-thumb='Content of your thumbnail (can be anything)'>
    Your Content
  </div>
  <div data-thumb='Content of your thumbnail (can be anything)'>
    Your Content
  </div>
  <div data-thumb='Content of your thumbnail (can be anything)'>
    Your Content
  </div>
  <div data-thumb='Content of your thumbnail (can be anything)'>
    Your Content
  </div>
</div>
```

#### [demo](http://gijsroge.github.io/owl-carousel2-thumbs)

## Available options

```javascript
$(document).ready(function(){
  $('.owl-carousel').owlCarousel({
    // Enable thumbnails
    thumbs: true,
    // When only using images in your slide (like the demo), use this option to dynamically create thumbnails without using the data-thumb attribute.
    thumbImage: false,
    // Enable this if you have pre-rendered thumbnails in your HTML instead of letting this plugin generate them. This is recommended as it will prevent FOUC.
    thumbsPrerendered: true,
    // Class that will be used on the thumbnail container
    thumbContainerClass: 'owl-thumbs',
    // Class that will be used on the thumbnail items
    thumbItemClass: 'owl-thumb-item'
  });
});
```

## npm

```
npm install owl.carousel2.thumbs
```

## bower

```
bower install owl.carousel2.thumbs
```

</> with <3 in Belgium by [@GijsRoge](https://twitter.com/GijsRoge)
Uses of plasma treatment devices When the device runs, especially at high speeds, the centrifugal force it applies causes the contents to separate out. There are many medical applications for plasma treatment, including studies of viruses, proteins, polymers, nucleic acids, and blood, and for separating plasma from serum, whole blood, and other liquids. The uses are not limited to the medical field alone. The time required for the separation of the contents can be set easily with a timer. The high spin creates an artificial gravity that, depending on the material, separates the contents quickly or slowly.
Dvorkin On Debt: Weliver Delivers David Weliver is worth listening to, because he was once worthless. When he was 26 years old, David Weliver lived in an 8×10-foot room in a rented house with three roommates. The only item he owned in abundance was debt — $80,000 of it. Would you take financial advice from a man like that? Well, you should. A decade later, and Weliver is a financial success, and not just for himself. He inspires financial success in others, through his website Money Under 30. I’ve never met Weliver, but we met on the phone the other day. We were brainstorming ways to tell people that October is Financial Planning Month. It’s always difficult to make people aware of these awareness months. I once compared April’s Financial Literacy Month to convincing people to go to the dentist, but Weliver has a much more pleasant analogy. Sweating away debt “It’s just like exercising,” he says. “We all want to lose weight, and we all go through the struggles of starting a plan — and then falling back into our old ways. Well, a personal trainer doesn’t really tell you anything you already didn’t know, but you’re more likely to show up and do the work with greater intensity, because they’re watching you and encouraging you.” Weliver says what works for losing weight works just as efficiently for losing debt. “Some people who might benefit the most from working with a financial planner or counselor might never think they’re a candidate for that,” he says. “But when people are struggling with debt, they need help making a plan and staying on track, because it can take a long time to get out of debt.” As a financial planner, counselor, and CPA myself, I asked Weliver about one big question I’m always asked: If I have no money, how can I afford professional help? “Certainly, none of these options are free, and for someone who’s in a lot of debt, it can be a difficult decision,” Weliver says. 
“It can cost a couple hundred dollars for a few hours or $1,000 for a comprehensive plan. But you need to ask, How much am I paying every month in interest and fees?” Interestingly, when Weliver was mired in debt, he never consulted a financial planner or counselor. He wishes he had. “In hindsight, it might’ve been helpful,” he says. “If I had taken that step, someone could’ve smacked me over the head so I could finally come to the realization I was doing something wrong.” One problem feeding Weliver’s youthful arrogance was his job. He worked for The Wall Street Journal’s personal finance magazine, called Smart Money. He was 22 years old, living in New York City, and “blissfully ignorant.” “Obviously, I wasn’t ready to listen to anyone,” he says. That’s why he started Money Under 30 — to speak to young people. It’s often a thankless job. “No one comes to Money Under 30 excited,” he laments. “They come when they’re in too much debt or there’s someone throwing 401(k) paperwork at them at their job, and they have urgent questions.” I know the feeling. No one comes to Debt.com because they’re happy with the way their finances have gone. For both Weliver and myself, the satisfaction of our jobs is counted in slender percentages. “It can be frustrating,” he says. “Maybe 80 to 90 percent will get the answer and go away.” Why? Because the answer isn’t easy. Getting out of debt is like falling down a hole. It’s quicker to the bottom than back to the top. “That’s why only a small percentage will say, I want to learn about this and do better,” Weliver says. He and I are in this business for that small percentage, and when they write you with heartfelt thanks for putting them on the path toward financial freedom — for themselves and their families — it makes it all worth it. What to do now If you’re in that slender percentage, Weliver and I agree: Check out all the free resources online before you seek a financial counselor or planner.
Read Money Under 30 or Debt.com or any of the other wonderful personal finance websites out there. (If you’re wondering why these sites aren’t competitive, and why we support each other, the reason is a sad one: There’s no shortage of customers. Debt in this country is a massive problem. Just check out these depressing stats.) My personal recommendation: Read What Is Credit Counseling… And Why Do I Need It? If that speaks to you, call one of Debt.com’s certified credit counselors at 1-888-503-5563. The call is free, and so is the debt analysis you’ll receive. You’re under no obligation to do anything. As Weliver says, “If you’re in debt and you need to get motivated and make some changes, there are so many good stories out there. I’m not unique in that. You need to know: You’re not alone.” Howard Dvorkin is a CPA and chairman of Debt.com, an educational resource for those who want to conquer all forms of debt in their lives.
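Weliver's question — "How much am I paying every month in interest and fees?" — is easy to sanity-check with simple arithmetic. A rough sketch, using his reported $80,000 debt and an assumed 18% APR (my illustrative number, not a figure from the article):

```javascript
// Rough monthly interest on a balance, ignoring compounding within the month.
// The 18% APR is an assumed credit-card-style rate, for illustration only.
function monthlyInterest(balance, apr) {
  return balance * (apr / 12);
}

const perMonth = monthlyInterest(80000, 0.18);
console.log(Math.round(perMonth)); // → 1200
```

At those assumed numbers, a $1,000 comprehensive plan costs less than a single month of interest, which is exactly the comparison Weliver is suggesting readers make.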
How can you forgive your spouse in the aftermath of sexual betrayal? In the process of recovering from sexual struggles, restoring relationships is vital … and hard. When sexual strugglers are married, their addiction / compulsion has led to repeated sexual betrayal in one form or another. Unlike other addictions, sexual addiction strikes at the heart of the marriage commitment. How can someone forgive that? In the past year, my wife has started counseling wives of sexual strugglers, and we are now counseling couples together who are dealing with sex addiction and betrayal. After working exclusively with men who are struggling, it’s been interesting to get more of the spouse’s perspective on recovery. Here are some observations about forgiveness and restoration, for the spouses of sexual strugglers. 1. Forgiveness can’t be rushed I have come to believe that it is foolish and destructive to try to rush the process of forgiveness. Forgiveness is not a simple, one-time event. It is a process that takes time. Many spouses of sex addicts face an added burden because they feel they should forgive their spouse, but don’t feel ready to do so. Or if they do extend forgiveness, they continue to have feelings of hurt and anger, and don’t know how to express them. Both addicts and spouses need to understand that the decision to forgive is different from the process of forgiving. We can’t simply decide to forgive and then move on as though nothing has happened. In the process of forgiving, feelings of sadness, hurt, and anger will come and go. Instead of being squelched (“I shouldn’t be feeling this way”), they need to be accepted and heard. Then, over time, the overwhelming power of these feelings will diminish. One danger to watch out for in the marriages of sex addicts is for the spouse to feel pressured to move too quickly to forgiveness and reconciliation, without processing the feelings of betrayal and anger that naturally arise.
We are taught as Christians the need for and the power of forgiveness. Sometimes it is assumed that forgiveness can (and should) be quickly extended, and that once the person decides to extend forgiveness, then the matter should be left in the past. But it doesn’t work that way. 2. Forgiveness is like grieving In many ways the experience of a spouse in the aftermath of sexual betrayal is like the process of grieving. This makes sense, because the aftermath of sexual betrayal and the process of restoring a marriage involve a lot of grieving. Grief takes time, especially when we’re grieving the loss of someone we dearly love. No one can rush the process. The only way to “quickly grieve” is by blocking the negative feelings that come up, and thereby not really grieving. It’s important to recognize that grief comes in waves. Sometimes after a stretch of relief and relative internal peace, something will remind us once again of our loss, and the feelings of sadness will overwhelm us again. The same is true with the feelings of hurt and anger that we deal with in forgiveness. We will work through them, and reach a point of peace and release, only to find ourselves confronted days or weeks later with a new wave of the same feelings of hurt, anger, and loss. 3. Everybody forgives differently Just as no two people grieve alike, so no two people forgive alike. The spouses of addicts need to be given the space and support to process their feelings in a healthy way. It is often striking how differently spouses respond to sexual sin. Some men I work with have amazingly “tolerant” spouses, and some have spouses at the other extreme who are bitter and unable to let go of their suspicion and anger. There are certainly all kinds of reasons for this, but neither extreme is helpful to the struggler or the spouse. There is no common time-table for forgiveness. 4.
Forgiveness and reconciliation are separate issues Lewis Smedes, in his wonderful book Forgive and Forget, defines forgiveness as the decision to surrender one’s desire to retaliate against the one who wronged us. It involves letting go of our desire to harm the person who harmed us. To do this, we need compassion, time, and support. But choosing to let go of our desire to hurt someone in retaliation does not mean we now trust them, or are willing to stay in the same relationship with them. There may be changes to our relationship. Nancy Hull-Mast writes this: “Often we’re afraid to forgive others who’ve hurt us because we believe that, in doing so, we are permitting what they’ve done. This is not true. When we forgive, we are saying, ‘I pardon you, I give up any claim for revenge, you are no longer an enemy.'” To establish new boundaries does not mean we have not forgiven someone. We can forgive them, but not reconcile the relationship. We can forgive them, but redefine how we relate to them. In their defensiveness, a sexual struggler might protest, “But I thought you forgave me!” Remember that forgiveness and reconciliation are different things. 5. Spouses often need someone to help them in the process of forgiving It’s vitally important for spouses to have safe places to process their hurt and pain in ways that are healthy. If the only person you can share this with is the spouse who wronged you, it might be overwhelming and discouraging for him/her. You might feel the need to hold back your true feelings out of compassion or fear that your spouse might leave. What do you do about the feelings that are stuck inside you? Find a therapist or pastor you can trust, and if possible a group devoted to helping people process sexual betrayal. More and more of these groups are available today. Two cautions are in order when it comes to seeking out help from others: (a) If you go to a spouse support group (like S-Anon), be careful of the health of the group. 
Some of these groups can be populated and/or led by people who have not processed their own wounds in healthy ways. Rather than encourage you and offer you hope, they may infect you with their own cynicism and despair. (b) Beware of the danger of leaning on friends to provide listening ears who don’t support your goal of a healthy relationship. Some people may have unprocessed pain of their own, and when you tell them about your experience, their un-dealt-with anger will cloud their judgment and ability to provide healthy perspective. We need friends who will support us, not commiserate with us. So what do you think? How is forgiveness going in your life and relationship? I’d love to hear your thoughts and responses to this article in the comments. This article was originally written some months ago – as evidenced by the dates of the comments. I put it here on the front page, though, because I wanted newcomers to the site to see it. Hope you found it helpful, as well as the discussion below: Thanks for the comment Eva … yes, it’s a process, and oftentimes a lengthy one. Maybe what makes it even more complicated is that oftentimes people wrap forgiveness, reconciliation, and trust all together. And they are three different things. Of the three, I am guessing that trust takes the longest to rebuild, and is the easiest to lose if a spouse relapses or lies. But lack of trust doesn’t mean lack of forgiveness. It just means that a spouse has been burned one too many times and is going to be much more careful. Thanks again I am finding it very difficult to start the healing process of forgiveness. I feel I am having such a hard time because I have not received full disclosure from my husband. Even though I have told him that it is somewhat stalling the forgiveness process, he still refuses and says I know everything. Well… I know for sure that he has not disclosed everything.
I am finding that, as of late, I am beginning to withdraw from him emotionally. I know they say that this must be done, but it is not a good feeling and I am not sure if it is healthy. I have been through so much devastation and hurt; I have been walked out on, he has threatened divorce, moved out twice, badmouthed me to his/my family, saying I was crazy and he did nothing wrong; the list goes on. I have been loving and faithful to this man since day 1. We have children, two girls. I have never been so scared in my life; this has been a real wakeup call as to what this world has become. Sometimes I just want to take my girls and run. Thank you for this great website, thank you for reading, and if you have the time, maybe you can give me some advice. I’m in a similar boat… although my husband started going to a 12-step program and to a CSAT, but has dipped back into denial and isn’t going. He swears that his “sponsor” told him that since he doesn’t have daily urges he isn’t an SA. I find that hard to believe, as there are plenty of binge-purgers out there. My husband is also a covert incest survivor and has an older woman fetish (related to his mother). At any rate, I went to an amazing trauma workshop (6 day) at ISH in LA, and one thing that I learned is the trauma caused by staggered disclosures/discoveries. I had that for the last year and I feel like I’ve been hit by 5+ planes (like 911). It’s been horrific. My husband is in denial – says that the two CSATs have told him that he isn’t an SA – although I know that not to be true. It breaks my heart and I’m going to set my boundaries on this next week (with the help of our marriage counselor). I know that I should be grateful that he is at least seeing a therapist who is helping him recognize his pattern of behavior (prostitutes, followed by affairs with older women, then CL ads when money got tight). He thinks he has it “under control” now, which the article on this website talks about (“white knuckling”) – a recipe for a relapse.
I’m trying to let go, but the trauma of the past year has taken its toll – I’m so nervous about getting hit by yet another tsunami. One thing to consider is a full disclosure with a lie detector. I’m going to insist on it – along with a one-day trauma workshop at ISH so that he understands the 12 traumas that SA partners experience. Dr. Manwalla has nailed it, and really understands what we’ve gone through. Annette – I feel your pain. I know it’s been a year and I hope that you have found peace. Let me know what happened – I’m curious and hopeful that you’ve found peace. Thanks for your comments. I’m very sorry to hear about what is happening. I totally agree with the approach of a number of spouse/partner therapists and workshops that treat disclosure of sex addiction as a form of trauma that needs to be addressed. I see so many partners of addicts struggling to deal with what has happened, and not understanding how devastating it is for them. Annette, one of the things that you mention is so common, and so very frustrating: SAs will often seek forgiveness from their partners without giving full disclosure. In essence they are asking their partners to forgive them when their partner doesn’t even know what they are forgiving! That’s not fair, and it leads to confusion, resentment, and emotional distance. Additionally, it sounds like there is some significant denial and blameshifting going on. Sorry to hear about that. Deb, I am also sorry to hear about what is going on with your husband. It sounds like you’ve been through a lot. It sounds like your husband is stuck in the all-too-common “diagnosis trap.” I have very strong feelings about this … too many people get caught up in trying to figure out whether someone is a sex addict or not. The label is confusing because different people define it in different ways … and if a person wants, they can probably find someone who will agree that they are NOT one.
But the label of sex addict doesn’t matter … what matters is the pattern of ongoing struggle, breaking boundaries repeatedly despite promises to stop … whatever you want to call that (addiction, dependence, compulsivity, besetting sin, bad habit), it will require massive effort to deal with it. I actually created a video for a program I run called “The Recovery Journey” that deals with this very topic. Check it out … you might find it helpful: Thanks Mark. I viewed your video and it makes sense. I think that my husband is caught up in the “shame” of the sex addict label – he even stopped going to SA groups because apparently someone told him if he doesn’t have daily urges, then he’s not an addict (which I know isn’t true). My husband is so afraid of the label that he is finding anyone to tell him that he’s not a sex addict, which is getting in the way of his treatment. He is seeing a trauma therapist – which I know is better than nothing. I am going to set my boundaries in terms of what I need in order for him to stay in the house during his recovery, which I think will be helpful for me. This is tough stuff! I have been dealing with my spouse for 5 years now and can’t find closure to this problem. We have attended therapy, we serve the Lord, but somehow, TRUST cannot fully come to be a reality. We got officially married recently and I just tested him on a porn email and, like I suspected… he responded. He continued on messenger (at his office at work) only to sustain what he expected to be a live sex encounter. Surprisingly, I popped up on his cam and the rest is complete horror. He is firmly rooted in his love for me and justifies his act as one where his medication was absent… being that he was without it for almost 2 weeks now. He says that he has achieved a lot but that any visual stimuli can end up like this. He has asked me to forgive him… and I did many years ago. But I just cannot trust him. What am I supposed to live?
A fair life for him, in trying to help him and honoring God through obedience? Or an unfair life for me, lacking trust all the time… This is driving me to a lot of hurt and disappointment. We have a family, a beautiful one… I love the Lord. I pray and fast to break this, but in my flesh, this is all so hard. I feel so lost… Wow – that sounds like a dramatic – and probably traumatic – experience. Sorry this happened, and sorry to hear that trust is broken. The real challenge here is the same one that people have in a variety of situations: to control what you can control, and surrender the rest. I certainly agree with the approach of a spouse being careful and discerning about extending trust to someone who’s broken it (even though I wouldn’t recommend engaging in your own sting operation!). You’ll have to decide what you need to do to take care of yourself and your kids. In your question, you suggested a contrast between a fair life for him, or a fair life for you. I don’t think it necessarily works out as an either/or proposition, but if you have to think of it that way, I would suggest you think in terms of what’s best for you, rather than for him. Usually when partners of addicts try to sacrifice themselves to support the addict, and give him more chances, it winds up just enabling him by shielding him from consequences. My wife is a counselor, and she works with these issues directly … don’t hesitate to contact me and set something up with her if that would help. Hi there. 7 weeks ago I found out my husband had cheated on me. For the past 7 weeks, through hunting for answers and pushing my spouse into confession, I have found out he has been with 24 different women over 70 times in the past 5 years. Throughout IVF, through my pregnancy, through the first 2 years of our daughter’s life. He says he’s disgusted at his behaviour and he couldn’t stop himself and he will never do it again, etc. I had absolutely no idea.
I have been the happiest I’ve ever been over this period of time and believed we were in a loving ‘special’ marriage. I am completely devastated. I don’t even know where to begin to start to forgive. I love my husband very much and have a young child to think of. I cannot just give in and walk away, but I’m struggling to keep the dark thoughts and images out of my mind. I live in Fife in Scotland and I have no idea where to go for help. We went to the GP, who referred us for counseling. We have had 2 sessions so far, 1 per week, and it has been focussing on my husband and his past. I feel like I need help now to process my feelings. Some days I don’t even want to get out of bed. I’m crying or I’m shouting, and with a toddler to take care of it’s hard going. Help?? I’m so sorry that this has happened. I think your story is not at all uncommon, and one thing especially sticks out to me. You say that the marriage seemed good and that you felt so happy. That’s not all that uncommon when a man is acting out a lot. Instead of dealing with conflict and the normal kind of frustrations any relationship has, an addict will just be nice and then withdraw to acting out. He may even treat his partner especially “nice” out of his sense of guilt. So the spouse has an unrealistic sense of the relationship. Sad but often true. 🙁 The marriage is over. Do not work on this marriage; it will only hurt you more than you’ve already been hurt. It will destroy your life to work on this situation. This is not a man. What has been done to you is pure evil. Get out now. You deserve a wonderful life. I have a similar situation and have been “working on it” (the marriage) for 5 & 1/2 years, and in that time was diagnosed with stage 3 cancer and a myriad of other health issues. “Working” on a situation like this will destroy your life. Get out now. I have now woken up and am getting out, but now in a weakened condition and with many ensuing health needs.
Please, treat yourself like the precious gold that you and your daughter are. Get out immediately; this man is endangering your life. He is incapable of living truthfully. Again, I have “tried” with my soon-to-be-ex-husband for over 5 years, and I am the one who is suffering, increasingly so. You have endured a life of betrayal, the worst pain in human experience. He is incapable of marriage.
...Masashi Sajikihara extended his NPB record consecutive games without a loss to 116 games. ...Tsuyoshi Shimoyanagi was removed from the active roster because he likely wasn't going to get any more starts over the next 10 days. Shimoyanagi's bid for 5 consecutive seasons with double-digit wins likely comes to an end (he's currently 8-8 this season). Orix Buffaloes ...The Buffaloes might have the second best team batting average in the Pacific League, but they also have the worst record in the PL. Further evidence that hitting doesn't beat pitching? Rakuten Eagles ...The last time a Pacific League team's 7, 8, and 9 batters all hit home runs in a game was in 2003 with Daiei. It was the first time in Eagles' history for 3 players to go deep in consecutive at bats (three home runs in one inning, consecutive or not, was also a club first). It was also the first time since 8/3/2008 that the team hit 4 home runs in one game (5th time overall). Yomiuri Giants ...The Giants announced that they will be holding a mini camp at their Miyazaki training grounds prior to the Climax Series. The team will hold a practice session on 10/16, and have practice games on the 17th (Fall Ikusei League Hiroshima) and 18th (Softbank).
Personal Statement: I’m a caring, skilled professional, dedicated to simplifying what is often a very complicated and confusing area of health care. More about Dr. Daval Shah: Dr. Daval Shah is a renowned Ear-Nose-Throat (ENT) Specialist in Dadar West, Mumbai. You can visit him at Shri Nursing Home - Dadar West in Dadar West, Mumbai. Save your time and book an appointment online with Dr. Daval Shah on Lybrate.com. Find numerous Ear-Nose-Throat (ENT) Specialists in India from the comfort of your home on Lybrate.com. You will find Ear-Nose-Throat (ENT) Specialists with more than 43 years of experience on Lybrate.com. You can find Ear-Nose-Throat (ENT) Specialists online in Mumbai and from across India. View the profile of medical specialists and their reviews from other patients to make an informed decision. Great, thanks for your question! This is not an age at which snoring usually starts. See if it continued after your marriage too. You may have metabolic syndrome; that may be why you are so worried. Keep 3-4 hours between your dinner and retiring to bed. Exercise. Reduce water intake and sour and salty food habits. Sleep on your side. Consider getting an authentic Ayurveda opinion even after a sleep study, as I’ve benefited 90% of cases which are advised to go for surgery. Early morning cool air may be the reason. Homoeopathy has good treatment without causing adverse effects. Take the homoeopathic medicine Arsenic Alb 200 twice daily, in the morning and in the evening, and give feedback after 6 days. Throat problems can happen due to weather changes, exposure to cold air or rain, and certain sour foods. 1. Mix some salt in warm water and do gargling. 2. Keep a clove in the mouth and slowly chew it. 3. Drinking water frequently will reduce congestion, if any. 4. Avoid fried and oily foods. That should be due to wax in your right ear. That is the commonest cause. You can try ear buds.
If the sound is not coming back, you have to consult an ENT. A few days’ delay is not going to be a problem, especially if there is no discharge from your right ear. Dear Lybrate User, *Gulkand, or rose petal jam, is a delicious Indian delicacy. It is an Ayurvedic medicine also. To avoid a bloody nose, take one teaspoonful of Gulkand every day. *Amla (Indian Gooseberry) is very rich in Vitamin C, Calcium and Iron. The murabba or juice of amla prepared at home can be taken every morning with water to keep nose bleeding away. *Drinking sufficient water daily will lead to a healthy nose. *Bread prepared from whole wheat, ‘brown bread’, or ‘brown rice’ has ample zinc in it. Zinc helps in maintaining the body’s blood vessels. *Dark green leafy vegetables must be taken in the diet. Meditation and yoga have amazing results for this problem within a week. You can contact any spiritual organization like Brahma Kumaris; these organizations provide their services free of cost, and you can find the address of a Rajyoga meditation center near your house. Hello, a flat nose is best corrected by Augmentation Rhinoplasty surgery. Any other associated abnormality can be corrected simultaneously. Nose fillers can also correct the defect, but they are temporary only; they last for a year only. Dr Agrawal, Mumbai. Sir, does your mother have any medical ailments? What medicines is your mother taking daily? Let her do gargling of the mouth daily with warm salt water. If the problem is persistent, contact me with details for proper medical advice. It seems it is allergy. Nasal Allergy: It is very common nowadays… it is of two types – seasonal and perennial. Seasonal allergic rhinitis occurs according to changes in the atmosphere. Antihistaminics and decongestants can help you.
Texas holdem poker theory texas holdem poker theory It is not the only way. I am not a professional metalworker but a guy with some tools and a workshop. The final outcome was. A Woodworker's Bench Notes is a collection of plans, jigs and information that I have accumulated over the years. The information contained in this site is offered with the assumption that the reader has a basic knowledge of tool use and safety procedures, see disclaimer at bottom of page. Align and tape the box edges: A simple box hinge is a great introduction to surface-mounted hinges. In this example, I started with a vintage box that's nicely made but suffers from bound hinges so that the front of the box gapes open. As you'll see, a box hinge offers a perfect solution to a. Safeguard yourself and family members from any small animals by using Screen Tight PetGuard Series Wood Century Screen Door. Small Wood Sheds Kits - 30 Wide Dining Table Plans Small Wood Sheds Kits Small Apartment Over Garage Poker 88 bank bni Composite Wood … Pathological gambling diagnosis DIY wooden toy box feels both classic and casino gaming equipment for sale at the same time. It also might seem intimidating as a texas holdem poker theory to start, but you and a building partner really can knock out the construction of it texas holdem poker theory half a day if youve got all your wood cut roulette thimble begin with. Youre going to love it. Trust me. Woodworker's Hardware has a wide range of reliable wood shop tools perfect for completing your woodworking project. For 24-hour gala casino stockton christmas order today. Some of our favorite hobby items: Coloring Books for Adults amp; Kids. The benefits of coloring have been shown in countless studies. Try this relaxing hobby today with our enormous texas holdem poker theory of books, texas holdem poker theory and markers. 
A router ( ˈ r aʊ t əralso -ə ) gambling licenses uk a hand tool texas holdem poker theory power tool that a worker uses texas holdem poker theory rout (hollow out) an area in relatively hard material like wood or plastic. Routers are mainly used in woodworking, especially cabinetry. Routers are typically handheld or fastened cutting end-up in a router table. The hand tool type of router is the original form. It is … SALUDA, NORTH CAROLINA Where the Foothills End and the Blue Ridge Begins Nestled on the eastern foothills of the Black Hills, Rapid City shines as the hub of this legendary region's vacation activities. Five year-round museums, 17 movie theaters, eight area golf courses and more than 4,000 guest rooms in local hotels, motels, bed amp; breakfast inns, resorts and campgrounds. World Trade Center Panama interconnects with its Members and keeps them informed on the latest updates on upcoming and past events by their monthly newsletter. The streets of Atlantic City look more like a ghost town than a tourist hub. The latest Tweets from HUB International (HUBInsurance). 7th largest insurance broker for your business and personal insurance travelinsurance truckinsurance propertyandcausualtyinsurance https:t. coAn1qoPKRPX. Race Events Little Red Hen Productions can be your complete source for race management, timing and results. internet gambling market size VST Drum machine. Virtual drums, Free drum kits and drum loops. Best free VST plugin instruments (Drums). Dr-Fusion, Rhythm Master, TheDrumSource, Tfxas XR10 Joldem samples, XXdr8008. Instrument Sets. VST Sound Instrument Sets provide high-quality content texas holdem poker theory out of the box, expanding the used sound library with fantastic-sounding VST … Its an exciting time to be a music producer, as there are now hundreds of applications for mobile devices that allow producers to finally work on the move. 
LAB Pedalboard2 Pure Data REAPER SAVIHost Studio One SynthEdit Temper Tracktion Usine VFXV-Machine VSTHost Which is Best: Seneca niagara casino gambling age or PC for a Music Computer Straight Talk from Tweak. G o to any computer gear-head forum, including studio-central, and simply ask … Based in Germany, AIR Music Technology started as Wizoo Sound Design, one of the earliest pioneers in virtual texas holdem poker theory technology. The AIR team is responsible for the core of much of texaas effects offerings in Avid's Pro Tools software, and also developed a suite of award-winning virtual instruments specifically for Pro Tools. Poker srbija igraci audio interfaces, studio monitors, and keyboard controllers Free vintage porn,classic and retro porn texas holdem poker theory and goodbye letter to gambling on RareVintageTube. There are plenty of energy-robbing devices on your car that are supposed to go to sleep when it's parked, and sometimes, not all … View and Download Fafco Drainback 200 Series installation manual online. Solar Hot Water System. Drainback 200 Series Water Heater pdf manual download. ACO Drain systems consist of manufactured modular trench drains made from stainless steel, corrosion resistant polymer concrete, or fiberglass, tdxas with texas holdem poker theory from a variety of materials for all loading applications. Featured Sale Items. Hoosier Kart Tires are now in stock. Click here for Hoosier Tire tech article. Home of quot;Vector~Cutsquot; (experience cut tires) Add a CPS2 CPS1 Kick Harness, overclock Neogeo, build a CMVS, consolized We transform functional construction products for interior and landscape sectors into works of art, combining unique designs with cutting edge technology. ACO Drain ACO Drain is the market leading modular trench drain system and is ideal for commercial applications ranging from gas stations to airports. 
The FloorElf describes how to hold the texas holdem poker theory tightly against the drain gheory you dont have access below the floor. texas holdem poker theory Read about the childhood, career and personal life of Academy Award-winning Scottish actor Sean Connery, known as spy 007 in the early James Bond movies, on Biography. com. Stephanie Grace writes a column on Louisiana politics for The Advocate. Amazing Grace-How Sweet the View. Free Wi-Fi. Check specials. Come join us for Family or Friend time in this PRIVATE LOG CABIN with a lovely sloped grass yar. Grace of Monaco, the opening-night attraction at the 67th Cannes Film Festival, stoked plenty of controversy when Prince Albert, the monarch of nearby Monaco, denounced this biopic about his parents Rainier III (Tim Roth) and Princess Grace, the former Hollywood star Grace Kelly (Nicole Kidman. This is a list of characters from the twxas sitcom Are You Being Served. that ran from 1972 until 1985. After the series ended, there texas holdem poker theory a spin-off series called Grace and Favour, from 1992 until 1993 tjeory many of the original fuchs blackjack reprising their roles. Turks and Casino enterprise management is a casino pilsen tschechien destination, the pristine beaches, remote cays, texas holdem poker theory waters are ideal for Tsxas and Caicos weddings illinois casino map marriage ceremonies. Jan 17, slot meuble casino fortunes have tumbled in each of the last two rankings of Hong Kong's richest, but this time they seem to be starting a comeback. Free easy and beginning color coded ho,dem and fiddle sheet music Providenciales, or more commonly known as quot;Provoquot;, texas holdem poker theory an area of 38 miles and is the gre slot dates 2015 developed island in Turks and Caicos. May 20, 2018nbsp;0183;32;10 Steps to Grace Bay Beach!. 5BR, Private Pool, Why walk to the beach?. Grace Bay Beach Walk Villa is suitable for large families (up to 10 people) that. Hourly Rates. 
If you choose the hourly option, we charge 8. 75 plus tax per player, per hour. You will receive a wristband on your way in that tracks your time and you … Chalet Grace is a spectacular designer chalet overlooking Zermatt village and the Matterhorn. Built to a superior finish, it features double-height floor to ceiling windows on all three levels and a dramatic vaulted interior. Nov 03, 2017nbsp;0183;32;Got a scoop request. An anonymous tip youre dying to share.
//
//  ALPHAScreenshotSource.m
//  Alpha
//
//  Created by Dal Rupnik on 05/06/15.
//  Copyright © 2015 Unified Sense. All rights reserved.
//

#import "ALPHAFileManager.h"
#import "ALPHAScreenshotSource.h"
#import "ALPHATableScreenModel.h"

NSString *const ALPHAScreenshotDataIdentifier = @"com.unifiedsense.alpha.data.screenshot";

@interface ALPHAScreenshotSource ()

@property (nonatomic, copy) NSArray *screenshots;
@property (nonatomic, strong) NSDateFormatter *dateFormatter;

@end

@implementation ALPHAScreenshotSource

// Lazily created formatter used for screenshot titles.
- (NSDateFormatter *)dateFormatter
{
    if (!_dateFormatter)
    {
        _dateFormatter = [[NSDateFormatter alloc] init];
        _dateFormatter.dateStyle = NSDateFormatterFullStyle;
        _dateFormatter.timeStyle = NSDateFormatterMediumStyle;
    }
    return _dateFormatter;
}

- (instancetype)init
{
    self = [super init];

    if (self)
    {
        [self addDataIdentifier:ALPHAScreenshotDataIdentifier];
    }

    return self;
}

// Reads the list of saved screenshot files from the Alpha/Screenshots directory.
- (void)loadScreenshots
{
    NSError *error;
    NSString *directory = [NSString stringWithFormat:@"%@Alpha/Screenshots", [[ALPHAFileManager sharedManager] documentsDirectory].absoluteString];

    self.screenshots = [[NSFileManager defaultManager] contentsOfDirectoryAtURL:[NSURL URLWithString:directory] includingPropertiesForKeys:@[] options:0 error:&error];
}

- (ALPHAModel *)modelForRequest:(ALPHARequest *)request
{
    [self loadScreenshots];

    ALPHATableScreenModel *screenModel = [[ALPHATableScreenModel alloc] initWithIdentifier:ALPHAScreenshotDataIdentifier];
    screenModel.title = @"Screenshots";

    ALPHAScreenSection *section = [[ALPHAScreenSection alloc] init];

    NSMutableArray *items = [NSMutableArray array];

    for (NSURL *screenshot in self.screenshots)
    {
        ALPHAScreenItem *item = [[ALPHAScreenItem alloc] init];
        item.title = [self titleForScreenshot:screenshot];
        item.object = [ALPHARequest requestForFile:screenshot.absoluteString];

        [items addObject:item];
    }

    section.items = items;
    screenModel.sections = @[ section ];

    return screenModel;
}

// Derives a human-readable title from the screenshot filename, which encodes
// a date between the "ALPHA_SS_" prefix and ".png" extension; falls back to
// the raw filename if the date cannot be parsed.
- (NSString *)titleForScreenshot:(NSURL *)screenshot
{
    NSString *filename = [screenshot.pathComponents lastObject];
    filename = [filename stringByReplacingOccurrencesOfString:@"ALPHA_SS_" withString:@""];
    filename = [filename stringByReplacingOccurrencesOfString:@".png" withString:@""];

    NSDate *date = [[ALPHAFileManager sharedManager].fileDateFormatter dateFromString:filename];
    NSString *text = [self.dateFormatter stringFromDate:date];

    if (!text.length)
    {
        text = [screenshot.pathComponents lastObject];
    }

    return text;
}

@end
/* ** License Applicability. Except to the extent portions of this file are ** made subject to an alternative license as permitted in the SGI Free ** Software License B, Version 1.1 (the "License"), the contents of this ** file are subject only to the provisions of the License. You may not use ** this file except in compliance with the License. You may obtain a copy ** of the License at Silicon Graphics, Inc., attn: Legal Services, 1600 ** Amphitheatre Parkway, Mountain View, CA 94043-1351, or at: ** ** http://oss.sgi.com/projects/FreeB ** ** Note that, as provided in the License, the Software is distributed on an ** "AS IS" basis, with ALL EXPRESS AND IMPLIED WARRANTIES AND CONDITIONS ** DISCLAIMED, INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES AND ** CONDITIONS OF MERCHANTABILITY, SATISFACTORY QUALITY, FITNESS FOR A ** PARTICULAR PURPOSE, AND NON-INFRINGEMENT. ** ** Original Code. The Original Code is: OpenGL Sample Implementation, ** Version 1.2.1, released January 26, 2000, developed by Silicon Graphics, ** Inc. The Original Code is Copyright (c) 1991-2000 Silicon Graphics, Inc. ** Copyright in any portions created by third parties is as indicated ** elsewhere herein. All Rights Reserved. ** ** Additional Notice Provisions: The application programming interfaces ** established by SGI in conjunction with the Original Code are The ** OpenGL(R) Graphics System: A Specification (Version 1.2.1), released ** April 1, 1999; The OpenGL(R) Graphics System Utility Library (Version ** 1.3), released November 4, 1998; and OpenGL(R) Graphics with the X ** Window System(R) (Version 1.3), released October 19, 1998. This software ** was created using the OpenGL(R) version 1.2.1 Sample Implementation ** published by SGI, but has not been independently verified as being ** compliant with the OpenGL(R) version 1.2.1 Specification. 
** */ /* */ //#include <stdlib.h> //#include <stdio.h> #include <math.h> //#include "zlassert.h" #include "sampleCompTop.h" #include "sampleCompRight.h" #define max(a,b) ((a>b)? a:b) //return : index_small, and index_large, //from [small, large] is strictly U-monotne, //from [large+1, end] is <u //and vertex[large][0] is >= u //if eveybody is <u, the large = start-1. //otherwise both large and small are meaningful and we have start<=small<=large<=end void findTopLeftSegment(vertexArray* leftChain, Int leftStart, Int leftEnd, Real u, Int& ret_index_small, Int& ret_index_large ) { Int i; assert(leftStart <= leftEnd); for(i=leftEnd; i>= leftStart; i--) { if(leftChain->getVertex(i)[0] >= u) break; } ret_index_large = i; if(ret_index_large >= leftStart) { for(i=ret_index_large; i>leftStart; i--) { if(leftChain->getVertex(i-1)[0] <= leftChain->getVertex(i)[0]) break; } ret_index_small = i; } } void findTopRightSegment(vertexArray* rightChain, Int rightStart, Int rightEnd, Real u, Int& ret_index_small, Int& ret_index_large) { Int i; assert(rightStart<=rightEnd); for(i=rightEnd; i>=rightStart; i--) { if(rightChain->getVertex(i)[0] <= u) break; } ret_index_large = i; if(ret_index_large >= rightStart) { for(i=ret_index_large; i>rightStart;i--) { if(rightChain->getVertex(i-1)[0] >= rightChain->getVertex(i)[0]) break; } ret_index_small = i; } } void sampleTopRightWithGridLinePost(Real* topVertex, vertexArray* rightChain, Int rightStart, Int segIndexSmall, Int segIndexLarge, Int rightEnd, gridWrap* grid, Int gridV, Int leftU, Int rightU, primStream* pStream) { //the possible section which is to the right of rightU if(segIndexLarge < rightEnd) { Real *tempTop; if(segIndexLarge >= rightStart) tempTop = rightChain->getVertex(segIndexLarge); else tempTop = topVertex; Real tempBot[2]; tempBot[0] = grid->get_u_value(rightU); tempBot[1] = grid->get_v_value(gridV); monoTriangulationRecGenOpt(tempTop, tempBot, NULL, 1,0, rightChain, segIndexLarge+1, rightEnd, pStream); /* 
monoTriangulation2(tempTop, tempBot, rightChain, segIndexLarge+1, rightEnd, 0, //a decrease chian pStream); */ } //the possible section which is strictly Umonotone if(segIndexLarge >= rightStart) { stripOfFanRight(rightChain, segIndexLarge, segIndexSmall, grid, gridV, leftU, rightU, pStream, 0); Real tempBot[2]; tempBot[0] = grid->get_u_value(leftU); tempBot[1] = grid->get_v_value(gridV); monoTriangulation2(topVertex, tempBot, rightChain, rightStart, segIndexSmall, 0, pStream); } else //the topVertex forms a fan with the grid points grid->outputFanWithPoint(gridV, leftU, rightU, topVertex, pStream); } void sampleTopRightWithGridLine(Real* topVertex, vertexArray* rightChain, Int rightStart, Int rightEnd, gridWrap* grid, Int gridV, Int leftU, Int rightU, primStream* pStream ) { //if right chian is empty, then there is only one topVertex with one grid line if(rightEnd < rightStart){ grid->outputFanWithPoint(gridV, leftU, rightU, topVertex, pStream); return; } Int segIndexSmall = 0, segIndexLarge; findTopRightSegment(rightChain, rightStart, rightEnd, grid->get_u_value(rightU), segIndexSmall, segIndexLarge ); sampleTopRightWithGridLinePost(topVertex, rightChain, rightStart, segIndexSmall, segIndexLarge, rightEnd, grid, gridV, leftU, rightU, pStream); } void sampleTopLeftWithGridLinePost(Real* topVertex, vertexArray* leftChain, Int leftStart, Int segIndexSmall, Int segIndexLarge, Int leftEnd, gridWrap* grid, Int gridV, Int leftU, Int rightU, primStream* pStream) { //the possible section which is to the left of leftU if(segIndexLarge < leftEnd) { Real *tempTop; if(segIndexLarge >= leftStart) tempTop = leftChain->getVertex(segIndexLarge); else tempTop = topVertex; Real tempBot[2]; tempBot[0] = grid->get_u_value(leftU); tempBot[1] = grid->get_v_value(gridV); monoTriangulation2(tempTop, tempBot, leftChain, segIndexLarge+1, leftEnd, 1, //a increase chian pStream); } //the possible section which is strictly Umonotone if(segIndexLarge >= leftStart) { //if there are grid points 
which are to the right of topV, //then we should use topVertex to form a fan with these points to //optimize the triangualtion int do_optimize=1; if(topVertex[0] >= grid->get_u_value(rightU)) do_optimize = 0; else { //we also have to make sure that topVertex are the right most vertex //on the chain. int i; for(i=leftStart; i<=segIndexSmall; i++) if(leftChain->getVertex(i)[0] >= topVertex[0]) { do_optimize = 0; break; } } if(do_optimize) { //find midU so that grid->get_u_value(midU) >= topVertex[0] //and grid->get_u_value(midU-1) < topVertex[0] int midU=rightU; while(grid->get_u_value(midU) >= topVertex[0]) { midU--; if(midU < leftU) break; } midU++; grid->outputFanWithPoint(gridV, midU, rightU, topVertex, pStream); stripOfFanLeft(leftChain, segIndexLarge, segIndexSmall, grid, gridV, leftU, midU, pStream, 0); Real tempBot[2]; tempBot[0] = grid->get_u_value(midU); tempBot[1] = grid->get_v_value(gridV); monoTriangulation2(topVertex, tempBot, leftChain, leftStart, segIndexSmall, 1, pStream); } else //not optimize { stripOfFanLeft(leftChain, segIndexLarge, segIndexSmall, grid, gridV, leftU, rightU, pStream, 0); Real tempBot[2]; tempBot[0] = grid->get_u_value(rightU); tempBot[1] = grid->get_v_value(gridV); monoTriangulation2(topVertex, tempBot, leftChain, leftStart, segIndexSmall, 1, pStream); } } else //the topVertex forms a fan with the grid points grid->outputFanWithPoint(gridV, leftU, rightU, topVertex, pStream); } void sampleTopLeftWithGridLine(Real* topVertex, vertexArray* leftChain, Int leftStart, Int leftEnd, gridWrap* grid, Int gridV, Int leftU, Int rightU, primStream* pStream ) { Int segIndexSmall = 0, segIndexLarge; //if left chain is empty, then there is only one top vertex with one grid // line if(leftEnd < leftStart) { grid->outputFanWithPoint(gridV, leftU, rightU, topVertex, pStream); return; } findTopLeftSegment(leftChain, leftStart, leftEnd, grid->get_u_value(leftU), segIndexSmall, segIndexLarge ); sampleTopLeftWithGridLinePost(topVertex, leftChain, 
                                leftStart,
                                segIndexSmall,
                                segIndexLarge,
                                leftEnd,
                                grid, gridV, leftU, rightU, pStream);
}

//return 1 if a separator exists, 0 otherwise
Int findTopSeparator(vertexArray* leftChain,
                     Int leftStartIndex,
                     Int leftEndIndex,
                     vertexArray* rightChain,
                     Int rightStartIndex,
                     Int rightEndIndex,
                     Int& ret_sep_left,
                     Int& ret_sep_right)
{
  Int oldLeftI, oldRightI, newLeftI, newRightI;
  Int i,j,k;
  Real leftMax /*= leftChain->getVertex(leftEndIndex)[0]*/;
  Real rightMin /*= rightChain->getVertex(rightEndIndex)[0]*/;
  if(leftChain->getVertex(leftEndIndex)[1] > rightChain->getVertex(rightEndIndex)[1]) //left higher
    {
      oldLeftI = leftEndIndex+1;
      oldRightI = rightEndIndex;
      leftMax = leftChain->getVertex(leftEndIndex)[0] - Real(1.0); //initialize to left of leftU
      rightMin = rightChain->getVertex(rightEndIndex)[0];
    }
  else
    {
      oldLeftI = leftEndIndex;
      oldRightI = rightEndIndex+1;
      leftMax = leftChain->getVertex(leftEndIndex)[0];
      rightMin = rightChain->getVertex(rightEndIndex)[0] + Real(1.0);
    }

  //i: the current working leftChain index,
  //j: the current working rightChain index,
  //if left(i) is higher than right(j), then the two chains below right(j) are separated.
  //else the two chains below left(i) are separated.
  i=leftEndIndex;
  j=rightEndIndex;
  while(1)
    {
      newLeftI = oldLeftI;
      newRightI = oldRightI;
      if(i<leftStartIndex) //left chain is done, go through remaining right chain.
        {
          for(k=j-1; k>= rightStartIndex; k--)
            {
              if(rightChain->getVertex(k)[0] > leftMax) //no conflict
                {
                  //update oldRightI if necessary
                  if(rightChain->getVertex(k)[0] < rightMin)
                    {
                      rightMin = rightChain->getVertex(k)[0];
                      oldRightI = k;
                    }
                }
              else //there is a conflict
                break; //the for-loop. below right(k-1) is separated: oldLeftI, oldRightI.
            }
          break; //the while loop
        }
      else if(j<rightStartIndex) //rightChain is done
        {
          for(k=i-1; k>= leftStartIndex; k--)
            {
              if(leftChain->getVertex(k)[0] < rightMin) //no conflict
                {
                  //update oldLeftI if necessary
                  if(leftChain->getVertex(k)[0] > leftMax)
                    {
                      leftMax = leftChain->getVertex(k)[0];
                      oldLeftI = k;
                    }
                }
              else //there is a conflict
                break; //the for loop
            }
          break; //the while loop
        }
      else if(leftChain->getVertex(i)[1] > rightChain->getVertex(j)[1]) //left higher
        {
          if(leftChain->getVertex(i)[0] > leftMax) //update leftMax and newLeftI.
            {
              leftMax = leftChain->getVertex(i)[0];
              newLeftI = i;
            }
          for(k=j-1; k>= rightStartIndex; k--) //update rightMin and newRightI.
            {
              if(rightChain->getVertex(k)[1] > leftChain->getVertex(i)[1])
                break;
              if(rightChain->getVertex(k)[0] < rightMin)
                {
                  rightMin = rightChain->getVertex(k)[0];
                  newRightI = k;
                }
            }
          j = k; //next working j, since j will be higher than i in next loop
          if(leftMax >= rightMin) //there is a conflict
            break;
          else //still no conflict
            {
              oldLeftI = newLeftI;
              oldRightI = newRightI;
            }
        }
      else //right higher
        {
          if(rightChain->getVertex(j)[0] < rightMin)
            {
              rightMin = rightChain->getVertex(j)[0];
              newRightI = j;
            }
          for(k=i-1; k>= leftStartIndex; k--)
            {
              if(leftChain->getVertex(k)[1] > rightChain->getVertex(j)[1])
                break;
              if(leftChain->getVertex(k)[0] > leftMax)
                {
                  leftMax = leftChain->getVertex(k)[0];
                  newLeftI = k;
                }
            }
          i = k; //next working i, since i will be higher than j next loop
          if(leftMax >= rightMin) //there is a conflict
            break;
          else //still no conflict
            {
              oldLeftI = newLeftI;
              oldRightI = newRightI;
            }
        }
    }//end of while loop

  //now oldLeftI and oldRightI are the desired separator indices; notice that they are not necessarily valid
  if(oldLeftI > leftEndIndex || oldRightI > rightEndIndex)
    return 0;
  else
    {
      ret_sep_left = oldLeftI;
      ret_sep_right = oldRightI;
      return 1;
    }
}

void sampleCompTop(Real* topVertex,
                   vertexArray* leftChain,
                   Int leftStartIndex,
                   vertexArray* rightChain,
                   Int rightStartIndex,
                   gridBoundaryChain* leftGridChain,
                   gridBoundaryChain* rightGridChain,
                   Int gridIndex1,
                   Int up_leftCornerWhere,
                   Int up_leftCornerIndex,
                   Int up_rightCornerWhere,
                   Int up_rightCornerIndex,
                   primStream* pStream)
{
  if(up_leftCornerWhere == 1 && up_rightCornerWhere == 1) //the top is topVertex with possible grid points
    {
      leftGridChain->getGrid()->outputFanWithPoint(leftGridChain->getVlineIndex(gridIndex1),
                                                   leftGridChain->getUlineIndex(gridIndex1),
                                                   rightGridChain->getUlineIndex(gridIndex1),
                                                   topVertex,
                                                   pStream);
      return;
    }
  else if(up_leftCornerWhere != 0)
    {
      Real* tempTop;
      Int tempRightStart;
      if(up_leftCornerWhere == 1){
        tempRightStart = rightStartIndex;
        tempTop = topVertex;
      }
      else
        {
          tempRightStart = up_leftCornerIndex+1;
          tempTop = rightChain->getVertex(up_leftCornerIndex);
        }
      sampleTopRightWithGridLine(tempTop, rightChain, tempRightStart, up_rightCornerIndex,
                                 rightGridChain->getGrid(),
                                 leftGridChain->getVlineIndex(gridIndex1),
                                 leftGridChain->getUlineIndex(gridIndex1),
                                 rightGridChain->getUlineIndex(gridIndex1),
                                 pStream);
    }
  else if(up_rightCornerWhere != 2)
    {
      Real* tempTop;
      Int tempLeftStart;
      if(up_rightCornerWhere == 1)
        {
          tempLeftStart = leftStartIndex;
          tempTop = topVertex;
        }
      else //0
        {
          tempLeftStart = up_rightCornerIndex+1;
          tempTop = leftChain->getVertex(up_rightCornerIndex);
        }
      /*
      sampleTopLeftWithGridLine(tempTop, leftChain, tempLeftStart, up_leftCornerIndex,
                                leftGridChain->getGrid(),
                                leftGridChain->getVlineIndex(gridIndex1),
                                leftGridChain->getUlineIndex(gridIndex1),
                                rightGridChain->getUlineIndex(gridIndex1),
                                pStream);
      */
      sampleCompTopSimple(topVertex,
                          leftChain,
                          leftStartIndex,
                          rightChain,
                          rightStartIndex,
                          leftGridChain,
                          rightGridChain,
                          gridIndex1,
                          up_leftCornerWhere,
                          up_leftCornerIndex,
                          up_rightCornerWhere,
                          up_rightCornerIndex,
                          pStream);
    }
  else //up_leftCornerWhere == 0, up_rightCornerWhere == 2.
    {
      sampleCompTopSimple(topVertex,
                          leftChain,
                          leftStartIndex,
                          rightChain,
                          rightStartIndex,
                          leftGridChain,
                          rightGridChain,
                          gridIndex1,
                          up_leftCornerWhere,
                          up_leftCornerIndex,
                          up_rightCornerWhere,
                          up_rightCornerIndex,
                          pStream);
      return;
#ifdef NOT_REACHABLE //code is not reachable, for test purposes only
      //the following code is trying to do some optimization, but is not quite working; also see sampleCompBot.C:
      Int sep_left, sep_right;
      if(findTopSeparator(leftChain,
                          leftStartIndex,
                          up_leftCornerIndex,
                          rightChain,
                          rightStartIndex,
                          up_rightCornerIndex,
                          sep_left, sep_right)) //separator exists
        {
          if(leftChain->getVertex(sep_left)[0] >= leftGridChain->get_u_value(gridIndex1) &&
             rightChain->getVertex(sep_right)[0] <= rightGridChain->get_u_value(gridIndex1))
            {
              Int gridSep;
              Int segLeftSmall, segLeftLarge, segRightSmall, segRightLarge;
              Int valid=1; //whether the gridSep is valid or not.
              findTopLeftSegment(leftChain,
                                 sep_left,
                                 up_leftCornerIndex,
                                 leftGridChain->get_u_value(gridIndex1),
                                 segLeftSmall,
                                 segLeftLarge);
              findTopRightSegment(rightChain,
                                  sep_right,
                                  up_rightCornerIndex,
                                  rightGridChain->get_u_value(gridIndex1),
                                  segRightSmall,
                                  segRightLarge);
              if(leftChain->getVertex(segLeftSmall)[1] >= rightChain->getVertex(segRightSmall)[1])
                {
                  gridSep = rightGridChain->getUlineIndex(gridIndex1);
                  while(leftGridChain->getGrid()->get_u_value(gridSep) > leftChain->getVertex(segLeftSmall)[0])
                    gridSep--;
                  if(segLeftSmall<segLeftLarge)
                    if(leftGridChain->getGrid()->get_u_value(gridSep) < leftChain->getVertex(segLeftSmall+1)[0])
                      {
                        valid = 0;
                      }
                }
              else
                {
                  gridSep = leftGridChain->getUlineIndex(gridIndex1);
                  while(leftGridChain->getGrid()->get_u_value(gridSep) < rightChain->getVertex(segRightSmall)[0])
                    gridSep++;
                  if(segRightSmall<segRightLarge)
                    if(leftGridChain->getGrid()->get_u_value(gridSep) > rightChain->getVertex(segRightSmall+1)[0])
                      {
                        valid = 0;
                      }
                }

              if(!valid)
                {
                  sampleCompTopSimple(topVertex,
                                      leftChain,
                                      leftStartIndex,
                                      rightChain,
                                      rightStartIndex,
                                      leftGridChain,
                                      rightGridChain,
                                      gridIndex1,
                                      up_leftCornerWhere,
                                      up_leftCornerIndex,
                                      up_rightCornerWhere,
                                      up_rightCornerIndex,
                                      pStream);
                }
              else
                {
                  sampleTopLeftWithGridLinePost(leftChain->getVertex(segLeftSmall),
                                                leftChain,
                                                segLeftSmall+1,
                                                segLeftSmall+1,
                                                segLeftLarge,
                                                up_leftCornerIndex,
                                                leftGridChain->getGrid(),
                                                leftGridChain->getVlineIndex(gridIndex1),
                                                leftGridChain->getUlineIndex(gridIndex1),
                                                gridSep,
                                                pStream);
                  sampleTopRightWithGridLinePost(rightChain->getVertex(segRightSmall),
                                                 rightChain,
                                                 segRightSmall+1,
                                                 segRightSmall+1,
                                                 segRightLarge,
                                                 up_rightCornerIndex,
                                                 leftGridChain->getGrid(),
                                                 leftGridChain->getVlineIndex(gridIndex1),
                                                 gridSep,
                                                 rightGridChain->getUlineIndex(gridIndex1),
                                                 pStream);
                  Real tempBot[2];
                  tempBot[0] = leftGridChain->getGrid()->get_u_value(gridSep);
                  tempBot[1] = leftGridChain->get_v_value(gridIndex1);
                  monoTriangulationRecGen(topVertex, tempBot,
                                          leftChain, leftStartIndex, segLeftSmall,
                                          rightChain, rightStartIndex, segRightSmall,
                                          pStream);
                }
            }//end if both sides have vertices inside the grid boundary points
          else if(leftChain->getVertex(sep_left)[0] >= leftGridChain->get_u_value(gridIndex1)) //left is in, right is not
            {
              Int segLeftSmall, segLeftLarge;
              findTopLeftSegment(leftChain,
                                 sep_left,
                                 up_leftCornerIndex,
                                 leftGridChain->get_u_value(gridIndex1),
                                 segLeftSmall,
                                 segLeftLarge);
              assert(segLeftLarge >= sep_left);
              monoTriangulation2(leftChain->getVertex(segLeftLarge),
                                 leftGridChain->get_vertex(gridIndex1),
                                 leftChain,
                                 segLeftLarge+1,
                                 up_leftCornerIndex,
                                 1, //an increasing chain,
                                 pStream);
              stripOfFanLeft(leftChain, segLeftLarge, segLeftSmall,
                             leftGridChain->getGrid(),
                             leftGridChain->getVlineIndex(gridIndex1),
                             leftGridChain->getUlineIndex(gridIndex1),
                             rightGridChain->getUlineIndex(gridIndex1),
                             pStream, 0);
              monoTriangulationRecGen(topVertex, rightGridChain->get_vertex(gridIndex1),
                                      leftChain, leftStartIndex, segLeftSmall,
                                      rightChain, rightStartIndex, up_rightCornerIndex,
                                      pStream);
            }//end left in right out
          else if(rightChain->getVertex(sep_right)[0] <= rightGridChain->get_u_value(gridIndex1))
            {
              Int segRightSmall, segRightLarge;
              findTopRightSegment(rightChain,
                                  sep_right,
                                  up_rightCornerIndex,
                                  rightGridChain->get_u_value(gridIndex1),
                                  segRightSmall,
                                  segRightLarge);
              assert(segRightLarge>=sep_right);
              monoTriangulation2(rightChain->getVertex(segRightLarge),
                                 rightGridChain->get_vertex(gridIndex1),
                                 rightChain,
                                 segRightLarge+1,
                                 up_rightCornerIndex,
                                 0, //a decreasing chain
                                 pStream);
              stripOfFanRight(rightChain, segRightLarge, segRightSmall,
                              rightGridChain->getGrid(),
                              rightGridChain->getVlineIndex(gridIndex1),
                              leftGridChain->getUlineIndex(gridIndex1),
                              rightGridChain->getUlineIndex(gridIndex1),
                              pStream, 0);
              monoTriangulationRecGen(topVertex, leftGridChain->get_vertex(gridIndex1),
                                      leftChain, leftStartIndex, up_leftCornerIndex,
                                      rightChain, rightStartIndex, segRightSmall,
                                      pStream);
            }//end left out right in
          else //left out, right out
            {
              sampleCompTopSimple(topVertex,
                                  leftChain,
                                  leftStartIndex,
                                  rightChain,
                                  rightStartIndex,
                                  leftGridChain,
                                  rightGridChain,
                                  gridIndex1,
                                  up_leftCornerWhere,
                                  up_leftCornerIndex,
                                  up_rightCornerWhere,
                                  up_rightCornerIndex,
                                  pStream);
            }//end left out, right out
        }//end if separator exists.
      else //no separator
        {
          sampleCompTopSimple(topVertex,
                              leftChain,
                              leftStartIndex,
                              rightChain,
                              rightStartIndex,
                              leftGridChain,
                              rightGridChain,
                              gridIndex1,
                              up_leftCornerWhere,
                              up_leftCornerIndex,
                              up_rightCornerWhere,
                              up_rightCornerIndex,
                              pStream);
        }
#endif
    }//end if 0,2
}//end of the function

static void sampleCompTopSimpleOpt(gridWrap* grid,
                                   Int gridV,
                                   Real* topVertex, Real* botVertex,
                                   vertexArray* inc_chain, Int inc_current, Int inc_end,
                                   vertexArray* dec_chain, Int dec_current, Int dec_end,
                                   primStream* pStream)
{
  if(gridV <= 0 || dec_end<dec_current || inc_end<inc_current)
    {
      monoTriangulationRecGenOpt(topVertex, botVertex,
                                 inc_chain, inc_current, inc_end,
                                 dec_chain, dec_current, dec_end,
                                 pStream);
      return;
    }
  if(grid->get_v_value(gridV+1) >= topVertex[1])
    {
      monoTriangulationRecGenOpt(topVertex, botVertex,
                                 inc_chain, inc_current, inc_end,
                                 dec_chain, dec_current, dec_end,
                                 pStream);
      return;
    }

  Int i,j,k;
  Real currentV = grid->get_v_value(gridV+1);
  if(inc_chain->getVertex(inc_end)[1] <= currentV &&
     dec_chain->getVertex(dec_end)[1] < currentV)
    {
      //find i bottom up so that inc_chain[i] <= currentV and inc_chain[i-1] > currentV,
      //find j bottom up so that dec_chain[j] < currentV and dec_chain[j-1] >= currentV
      for(i=inc_end; i >= inc_current; i--)
        {
          if(inc_chain->getVertex(i)[1] > currentV)
            break;
        }
      i++;
      for(j=dec_end; j >= dec_current; j--)
        {
          if(dec_chain->getVertex(j)[1] >= currentV)
            break;
        }
      j++;
      if(inc_chain->getVertex(i)[1] <= dec_chain->getVertex(j)[1])
        {
          //find the k so that dec_chain[k][1] < inc_chain[i][1]
          for(k=j; k<=dec_end; k++)
            {
              if(dec_chain->getVertex(k)[1] < inc_chain->getVertex(i)[1])
                break;
            }
          //we know that dec_chain[j][1] >= inc_chain[i][1]
          //we know that dec_chain[k-1][1] >= inc_chain[i][1]
          //we know that dec_chain[k][1] < inc_chain[i][1]
          //find l in [j, k-1] so that dec_chain[l][0] is closest to inc_chain[i]
          int l;
          Real tempI = Real(j);
          Real tempMin = (Real)fabs(inc_chain->getVertex(i)[0] - dec_chain->getVertex(j)[0]);
          for(l=j+1; l<= k-1; l++)
            {
              if(fabs(inc_chain->getVertex(i)[0] - dec_chain->getVertex(l)[0]) <= tempMin)
                {
                  tempMin = (Real)fabs(inc_chain->getVertex(i)[0] - dec_chain->getVertex(l)[0]);
                  tempI = (Real)l;
                }
            }
          //inc_chain[i] and dec_chain[tempI] are connected.
          monoTriangulationRecGenOpt(dec_chain->getVertex((int)tempI), botVertex,
                                     inc_chain, i, inc_end,
                                     dec_chain, (int)(tempI+1), dec_end,
                                     pStream);
          //recursively do the rest
          sampleCompTopSimpleOpt(grid, gridV+1,
                                 topVertex, inc_chain->getVertex(i),
                                 inc_chain, inc_current, i-1,
                                 dec_chain, dec_current, (int)tempI,
                                 pStream);
        }
      else
        {
          //find the k so that inc_chain[k][1] <= dec_chain[j][1]
          for(k=i; k<=inc_end; k++)
            {
              if(inc_chain->getVertex(k)[1] <= dec_chain->getVertex(j)[1])
                break;
            }
          //we know that inc_chain[i] > dec_chain[j]
          //we know that inc_chain[k-1][1] > dec_chain[j][1]
          //we know that inc_chain[k][1] <= dec_chain[j][1]
          //so we find l between [i,k-1] so that
          //inc_chain[l][0] is the closest to dec_chain[j][0]
          int tempI = i;
          int l;
          Real tempMin = (Real)fabs(inc_chain->getVertex(i)[0] - dec_chain->getVertex(j)[0]);
          for(l=i+1; l<=k-1; l++)
            {
              if(fabs(inc_chain->getVertex(l)[0] - dec_chain->getVertex(j)[0]) <= tempMin)
                {
                  tempMin = (Real)fabs(inc_chain->getVertex(l)[0] - dec_chain->getVertex(j)[0]);
                  tempI = l;
                }
            }
          //inc_chain[tempI] and dec_chain[j] are connected
          monoTriangulationRecGenOpt(inc_chain->getVertex(tempI), botVertex,
                                     inc_chain, tempI+1, inc_end,
                                     dec_chain, j, dec_end,
                                     pStream);
          //recursively do the rest
          sampleCompTopSimpleOpt(grid, gridV+1,
                                 topVertex, dec_chain->getVertex(j),
                                 inc_chain, inc_current, tempI,
                                 dec_chain, dec_current, j-1,
                                 pStream);
        }
    }
  else //go to the next higher gridV
    {
      sampleCompTopSimpleOpt(grid, gridV+1,
                             topVertex, botVertex,
                             inc_chain, inc_current, inc_end,
                             dec_chain, dec_current, dec_end,
                             pStream);
    }
}

void sampleCompTopSimple(Real* topVertex,
                         vertexArray* leftChain,
                         Int leftStartIndex,
                         vertexArray* rightChain,
                         Int rightStartIndex,
                         gridBoundaryChain* leftGridChain,
                         gridBoundaryChain* rightGridChain,
                         Int gridIndex1,
                         Int up_leftCornerWhere,
                         Int up_leftCornerIndex,
                         Int up_rightCornerWhere,
                         Int up_rightCornerIndex,
                         primStream* pStream)
{
  //the plan is to use the mono-triangulation algorithm.
  Int i,k;
  Real* ActualTop;
  Real* ActualBot;
  Int ActualLeftStart, ActualLeftEnd;
  Int ActualRightStart, ActualRightEnd;

  //create an array to store the points on the grid line
  gridWrap* grid = leftGridChain->getGrid();
  Int gridV = leftGridChain->getVlineIndex(gridIndex1);
  Int gridLeftU = leftGridChain->getUlineIndex(gridIndex1);
  Int gridRightU = rightGridChain->getUlineIndex(gridIndex1);
  Real2* gridPoints = (Real2*) malloc(sizeof(Real2) * (gridRightU - gridLeftU +1));
  assert(gridPoints);
  for(k=0, i=gridRightU; i>= gridLeftU; i--, k++)
    {
      gridPoints[k][0] = grid->get_u_value(i);
      gridPoints[k][1] = grid->get_v_value(gridV);
    }

  if(up_leftCornerWhere != 2)
    ActualRightStart = rightStartIndex;
  else
    ActualRightStart = up_leftCornerIndex+1; //up_leftCornerIndex will be the ActualTop

  if(up_rightCornerWhere != 2) //right corner is not on right chain
    ActualRightEnd = rightStartIndex-1; //meaning that there is no actual right section
  else
    ActualRightEnd = up_rightCornerIndex;

  vertexArray ActualRightChain(max(0, ActualRightEnd-ActualRightStart+1) + gridRightU-gridLeftU+1);
  for(i=ActualRightStart; i<= ActualRightEnd; i++)
    ActualRightChain.appendVertex(rightChain->getVertex(i));
  for(i=0; i<gridRightU-gridLeftU+1; i++)
    ActualRightChain.appendVertex(gridPoints[i]);

  //determine ActualLeftEnd
  if(up_leftCornerWhere != 0)
    ActualLeftEnd = leftStartIndex-1;
  else
    ActualLeftEnd = up_leftCornerIndex;

  if(up_rightCornerWhere != 0)
    ActualLeftStart = leftStartIndex;
  else
    ActualLeftStart = up_rightCornerIndex+1; //up_rightCornerIndex will be the actual top

  if(up_leftCornerWhere == 0)
    {
      if(up_rightCornerWhere == 0)
        ActualTop = leftChain->getVertex(up_rightCornerIndex);
      else
        ActualTop = topVertex;
    }
  else if(up_leftCornerWhere == 1)
    ActualTop = topVertex;
  else //up_leftCornerWhere == 2
    ActualTop = rightChain->getVertex(up_leftCornerIndex);
  ActualBot = gridPoints[gridRightU - gridLeftU];

  if(leftChain->getVertex(ActualLeftEnd)[1] == ActualBot[1])
    {
      /*
      monoTriangulationRecGenOpt(ActualTop, leftChain->getVertex(ActualLeftEnd),
                                 leftChain, ActualLeftStart, ActualLeftEnd-1,
                                 &ActualRightChain,
                                 0,
                                 ActualRightChain.getNumElements()-1,
                                 pStream);
      */
      sampleCompTopSimpleOpt(grid, gridV,
                             ActualTop, leftChain->getVertex(ActualLeftEnd),
                             leftChain, ActualLeftStart, ActualLeftEnd-1,
                             &ActualRightChain,
                             0,
                             ActualRightChain.getNumElements()-1,
                             pStream);
    }
  else
    {
      /*
      monoTriangulationRecGenOpt(ActualTop, ActualBot,
                                 leftChain, ActualLeftStart, ActualLeftEnd,
                                 &ActualRightChain,
                                 0,
                                 ActualRightChain.getNumElements()-2, //the last is the bot.
                                 pStream);
      */
      sampleCompTopSimpleOpt(grid, gridV,
                             ActualTop, ActualBot,
                             leftChain, ActualLeftStart, ActualLeftEnd,
                             &ActualRightChain,
                             0,
                             ActualRightChain.getNumElements()-2, //the last is the bot.
                             pStream);
    }
  free(gridPoints);
}
Deutsche Bank’s plans to retreat from risky investment banking, fire thousands of people and return to its German roots may eventually create a healthier lender. In the short term, the overhaul will be a major financial drain. That was made clear on Wednesday, after the bank reported a loss of 3.2 billion euros, or $3.6 billion, from April through June, as it subtracted the costs of a restructuring plan announced earlier this month. The plan is seen as a last-ditch attempt to arrest a decade of decline. The loss, which was more than the bank had flagged earlier in July, underscores the challenges facing Christian Sewing, the bank’s chief executive, as he tries to regain the confidence of customers, investors and regulators. Deutsche Bank said earlier this month that the quarterly loss would be €2.8 billion. The bank’s shares fell almost 6 percent in Wednesday morning trading as investors registered their disappointment, though the stock later recovered some of the losses.
The present invention relates to novel derivatives of doxorubicin, and processes for their preparation. The starting material for the preparation of these novel derivatives is 14-bromodaunomycin which is fully described in U.S. Pat. No. 3,803,124, owned by the unrecorded assignee hereof.
export default {
  functional: true,
  render: (h, { data, children }) => {
    // Ensure the static class string exists before appending to it.
    if (!data.staticClass) {
      data.staticClass = ''
    }
    // Tag the wrapper element with the Materialize collapsible-body class.
    data.staticClass += ' collapsible-body'
    return h('div', data, children)
  }
}
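The component above is a Vue 2 functional component: it has no instance state, and its `render` function just decorates the incoming element data before emitting a `div`. As a minimal sketch of that behavior, the snippet below exercises the same render logic outside Vue, using a stub `h` in place of Vue's `createElement`; the stub, the `CollapsibleBody` name, and the sample data are assumptions for illustration, not part of the original module.

```javascript
// Hypothetical stand-alone sketch of the functional component's render logic.
const CollapsibleBody = {
  functional: true,
  render: (h, { data, children }) => {
    if (!data.staticClass) {
      data.staticClass = ''
    }
    data.staticClass += ' collapsible-body'
    return h('div', data, children)
  }
}

// Stub for Vue's createElement: just records what would be rendered.
const h = (tag, data, children) => ({ tag, data, children })

// A caller-supplied class is preserved and the component's class is appended.
const vnode = CollapsibleBody.render(h, {
  data: { staticClass: 'card' },
  children: ['hello']
})
console.log(vnode.data.staticClass) // "card collapsible-body"
```

Because the component is functional, the same `render` runs on every parent re-render with fresh context, which is why the guard that initializes `staticClass` matters: element data with no class at all would otherwise produce the string `"undefined collapsible-body"`.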
Israelis will be casting their votes today, but that is only the start of the process of forming the next coalition, which will rule for at most four years. That is assuming this government makes it to the conclusion of its term, a real rarity in Israeli politics. Still, the results from today will be pored over, every morsel's potential meanings gleaned and meticulously quantified with the unflinching attention of a vulture circling an injured beast, waiting for it to breathe its last and signal that dinner is ready. Then it all starts tomorrow, or even a few days down the road, once President Reuven Rivlin has talked with the heads of the respective parties to gather their input as to which particular leader they favor for the position of Prime Minister. It is not unheard of for some parties to initially place the name of their own head as their choice for Prime Minister, fully aware that such is next to impossible, though conceivable, as President Rivlin is free to choose whomever he believes would be both the best choice and capable of forming a ruling coalition exceeding sixty Ministers. Such a suggestion mostly prolongs the process and may even work to hand the first attempt at forming a coalition to the candidate your party did not prefer, so it is best not to make such a recommendation. That is particularly true this election cycle, as whichever of Benyamin Netanyahu and Yitzhak Hertzog is given the go-ahead to form a coalition will likely succeed. Such a situation is not unprecedented, nor is it the most likely scenario, but this election the polling has depicted an electorate evenly divided between the two leaders expected to receive the nod to form a government. Once a choice has been made, it starts a clock of sorts which allows the candidate forty-two days (six weeks) to assemble sixty-one Ministers by cobbling together sufficient parties, granting demands and negotiating alternatives.
Should that leader fail, another leader is asked to try; he or she has up to twenty-eight days (four weeks) to attempt to achieve a coalition. Once the President believes there is little or no hope of any leader forging a coalition, the elections are held again in the hope of a different result, almost like Einstein's definition of insanity: trying the same thing over and over expecting a different result. I guess one might look at the Israeli government-forging mechanism and determine that, as such, it could qualify as insanity in action. Meanwhile, there is a world out there, with some just interested in seeing which side wins the opportunity to forge a coalition and succeeds, so that they know with whom they will be dealing, and who will be Foreign Minister, Finance Minister or hold any other Ministerial position which may determine how they interact with Israel on any number of fronts. What is different about this election is the number of interested parties with a definite preference, to the point that there have been interferences in this election campaign of varying degrees, mostly beyond acceptable levels. Editorials in the foreign media are to be expected, but daily editorials demanding that the Israeli public choose the leader of the opposition from the previous sitting government in place of reelecting the current Prime Minister amount to attempting to influence the electorate, and are a bit over the top. And writing editorials, holding frequent interviews with people antagonistic to the current Prime Minister, holding repeated interviews with influential people, leaders of political parties and heads of NGOs, and reporting intended sanctions to be levied against Israel, all demanding that one of the two leading candidates not be granted a chance to form the next government even should his party win by a decent or sizable margin, carries things far too deep into influence peddling.
A foreign government making threats against one of the leading heads of parties being given the opportunity to form a coalition regardless of the votes cast, and doing so for weeks, even months, or immediately after elections were called, is meddling beyond the acceptable level. But even this was not the limit of the interference. There were efforts by NGOs within Israel which received much, if not all, of their funding from foreign entities, foreign governments or the European Union, all of which were very direct about what was expected as far as attempts to influence the outcome of the Israeli election. And there was a group, V15 (aka V2015), allied with OneVoice, a tax-exempt entity which recently received a $200,000 grant from the United States State Department and which was headed by political strategist Jeremy Bird, leading five Democratic consultants, all directly ordered to work for the defeat of Netanyahu as directed by both the State Department and the White House, and probably by President Obama personally. Bird and his team are working with V15, a politically allied partnership with OneVoice, which describes itself as utilizing grassroots canvassing and door-to-door politicking tasked to amplify the voice of Israelis and Palestinians, to empower the election of, as they candidly have stated, "anybody but Bibi," allowing those more favorable to the two-state solution to defeat Netanyahu, removing what President Obama identifies as the greatest threat blocking his forging of a lasting peace. This interference in the Israeli elections, if legal, is so only depending on what the meaning of "is" is, as well as on what the definition of "interference" is.
Had Israel done similar work against the reelection of President Obama during the 2012 elections, the group and its entire staff would have been arrested and charged with spying and working for a foreign government interfering in the election process of the United States, and probably a dozen more charges, brought to trial, and likely the top organizers and any of the people holding Israeli citizenship imprisoned for an exceedingly long time. Such would not have been tolerated, let alone allowed to continue acting impervious to election laws about foreign interference. Meanwhile, in almost unanimous agreement, the media and anybody who could get themselves before a microphone or quoted in print or in pixels on the Internet were given such attentions, providing they were speaking to removing Netanyahu from power and promoting Yitzhak Hertzog directly, or simply pressing for any candidate from the opposition who would keep Netanyahu from holding any influence or power in the coming Israeli government. There were even demonstrations and organized rallies which, though couched in terminology ranging from innocuous to misleadingly tame to even inane, upon any inspection of their mission or political support were all aligned across the globe to defeat Bibi Netanyahu.
One example we recently found is related in an editorial titled A "Peace" Party, where the events were described as the global "Dance for Peace," in which parties are being held around the world in the name of electing an Israeli government that, according to the organizers, will "bring peace" and "transform the Middle East." One does not need to look much further before realizing the aim is to defeat Netanyahu and elect Hertzog, as by doing so the supposed impediment to peace, a strong and Zionist Israeli government, will be replaced with a weaker and more pliable Israeli Prime Minister who can be pressured into making the necessary sacrifices to allow for the formation of a Palestinian Arab state carved out of land that belongs to Israel by every legality which can be derived from International Law. We discussed the particular treaties, conferences, League of Nations ratifications, and their inclusion and carrying forward within the charter of the United Nations in the final paragraph of yesterday's article, titled Israel Elections are Upon Us Tomorrow, for those interested. This global Dance for Peace goes beyond merely assembling people who desire removing Netanyahu from leading the Israeli government; it attempts to influence the Israeli electorate by staging these events throughout the globe, counting on positive coverage of its actions to accommodate its view and amplify its effects, especially international coverage, which was also an obvious goal. As also stated in the editorial, this "peace" movement attempting to influence the Israeli election has some glaring assumptions at its base and also makes an oblique accusation against Israel, the supportive Israeli public and anybody who supports Netanyahu over Hertzog, casting them as anti-peace or, even worse, as pro-war or warmongers.
Prime Minister Netanyahu has made tremendous efforts to forge a peace with Mahmoud Abbas and has taken steps, presumably beyond the sight of the average Israeli, to facilitate the efforts of President Obama, meeting even his demand to extend the building freeze well past its initial nine months, making it virtually permanent, a dispensation which may prove crucial in this election in removing him from power, as it caused a housing shortage and increased prices, an outcome very much desired by United States President Obama. The implied accusations behind all the efforts to remove Prime Minister Netanyahu are predicated on erroneous assumptions: first, that Prime Minister Netanyahu does not desire peace, is unwilling to make the territorial sacrifices required for peace, and desires to inflame tension to a breaking point forcing another war; and finally, that the Israeli people are equally guilty of these supposed crimes and would foolishly and blindly reelect Netanyahu if the entire world did not save them from themselves. These assume that the Israelis alone are to blame for the lack of peace and the absence of a Palestinian Arab state, that the Palestinians have already sacrificed for peace, and that they are ready to jump at the chance for peace if only Israel would sacrifice even more, if only the Israelis would come close to making a fair peace settlement. These suppositions ignore reality to an unfathomable extent.
The United Nations, in General Assembly Resolution 181 passed on November 29, 1947, offered the Arab League a partition plan under which the Jews and Arabs would have divided the lands between the Jordan River and the Mediterranean Sea evenly, and somewhat unfairly, as close to half of the lands granted the Jews consisted of the Negev Desert while the Arabs would have received the prime farming lands. Despite the Jews accepting this partition, the Arab League refused the initiative, rendering it null and void in that process, and opted instead to wage war a few months later, on May 15, 1948, the morning immediately after Israel declared its independence. Israel had declared independence on May 14, 1948 at sundown, which in Jewish law was the actual beginning of the next day, May 15, 1948, the Israeli Independence Day. Since then Israel has offered what has been considered an acceptable settlement, over ninety-five percent of Judea and Samaria (West Bank), all of Gaza, and a shared Jerusalem, even dividing the city rather than internationalizing it, and has made this offer repeatedly. Prime Minister Ehud Barak did so during the Camp David Accords under President William Jefferson Clinton in 2000, and again in January 2001 as a favor to President Clinton, and both offers were refused by Yasser Arafat. His initial refusal came in Paris where, after being presented with Prime Minister Barak's offering, which included everything Arafat had demanded from President Clinton, Arafat abruptly rose and stalked from the room without even making a subsequent demand or comment, charging to his awaiting vehicle, which was left running, leading one to believe this was a planned exit, though unexpected by President Clinton, Prime Minister Barak or United States Secretary of State Madeleine Albright, who, running after him as best she was able, attempted to request that Arafat return to the negotiations; he simply ignored her and drove away, returning to the disputed territories and initiating the deadly Second Intifada, as had been planned from even before the negotiations began. The offer was given again by Prime Minister Ehud Olmert in 2008, at the very end of the George W. Bush Presidency, and was refused out of hand by Mahmoud Abbas, who demanded a complete right of return for five million plus Arab refugees within the boundary of Israel, instead of mostly to the areas to be declared the Palestinian State, while refusing to recognize Israel as the state of the Jewish People. Mahmoud Abbas has made numerous references to what he considers a fair offer: making the entirety of the lands between the Jordan River and the Mediterranean Sea into an Arab state where the Jews would initially be permitted to reside, expected to accept their status as Dhimmis, with the new state ruled by an all-powerful Arab Muslim leader, presumably Mahmoud Abbas until Hamas replaces him, and with the Jews receiving no political rights and limited religious, social, legal, entrepreneurial or ownership rights. Such statements can be located on MEMRI using their search, as we did with the search term Mahmoud Abbas, receiving a plethora of results. Claims that the Israelis are solely to blame for the lack of peace, for the non-formation of an Arab state, and for the refugee problem, a problem intentionally aggravated with malice aforethought by the Arab states surrounding Israel, border on or fall under anti-Semitism and anti-Zionism, and are definitively anti-Israeli. They are also as false as any statement which nullifies and abrogates any Palestinian culpability for the lack of peace.
Perhaps such lack of action against the Palestinians is due to the small fact that there has been no Palestinian election since 2006, and even longer for Mahmoud Abbas's position as leader of the Palestinian Authority, while Israel holds elections regularly and often. That would be a nice thought and would pass muster, except that worldwide demonstrations consistently target Israel and seldom if ever the Palestinians, and all diplomatic and economic pressures, threats, proclamations, sanctions, boycotts, arraignments and punishments almost without fail are used against Israel, while the Palestinians simply receive the majority of aid monies in the entire world through NGOs, UNRWA, funding from individual governments, especially from the United States, Europe and the European Union, as well as from numerous other United Nations agencies. The funding of the Arab Palestinian entities is measurable in the billions per month, and likely trillions per year should all of their monies be counted, including clandestine funding by Saudi Arabia, Qatar and other secretive sources. Such inequities have unfortunately become so commonplace, almost to the point of being mundane, that they no longer raise even a question from any corner of the world; the Palestinian Arabs are held guiltless while the Israelis are held culpable for every sin and transgression, real or imagined, and this should deserve far more even-handed scrutiny than any appear willing to grant. With the measurable rise in anti-Semitism worldwide, these disparities of intentions, actions and accusations will only increase until they become an all against the one, the one and only Jewish State, and possibly the only place on earth where a Jew is permitted the freedom to worship as they please. On that day Israel will be the sole nation with universal freedom of religion, the last and only.
Beyond the Cusp (BTC) is an opinion and viewpoint blog on politics, world events, predictions, and life.
795 F.2d 1005, U.S. v. Comicz, No. 85-1386, United States Court of Appeals, Second Circuit, 6/25/86. On appeal from the S.D.N.Y. AFFIRMED.
Police have charged a man they say is seen on video shoving another man and firing a gun in a mandatory evacuation zone near the erupting Kilauea volcano on Hawaii's Big Island. Authorities said 61-year-old John Hubbard of Leilani Estates has been charged with reckless endangering, terroristic threatening, robbery and other counts involving failure to obtain and register a firearm. "Stress is high, anxiety is high," Hawaii County Civil Defense Administrator Talmadge Magno told reporters Wednesday. "They've got this live volcano in their backyard." Hawaii's Kilauea volcano began spewing lava on May 3, forcing thousands of residents in the Leilani Estates and Lanipuna Gardens neighborhoods in the Puna district to evacuate. County officials say 75 homes have been completely covered by lava, and roads have been made impassable. Residents "see strange people in their subdivision," Magno said. "Basically, they try to protect stuff. It's a hard time for the folks that are still in there." Police have made other arrests in recent weeks within evacuation zones for property crimes such as burglary, including thefts of guns, and for flying a drone. No injuries resulted from the gunfire, but the victim, whose name was withheld by police, reported minor injuries from the scuffle. Police say Hubbard was arrested Wednesday without incident. He remained in police custody in lieu of $222,000 bail and was slated to appear in court later Thursday. Police responded Tuesday to a report of gunshots in Leilani Estates and were told by the victim that he and acquaintances were approached by a man in a pickup truck as they surveyed the site where his residence had been burned down by lava, county officials said. A video posted on Facebook and authenticated by police shows the back of a white-haired man with a handgun approaching another man, followed by what appears to be a brief profanity-filled argument.
The man without the gun yells to the other man that he would be arrested and screams, "Are you kidding me?" as shots are fired. He ducked as the man with the gun advanced toward him. It was unclear if Hubbard had an attorney. ___ AP journalist Caleb Jones contributed to this report.
Appendicovesical fistula in childhood: a rare complication of ruptured appendix. The diagnosis of appendicovesical fistula is difficult and usually delayed. This is most unfortunate, since surgery is uniformly successful. The case we report reemphasizes the diagnostic value of the rectal examination, intravenous pyelogram, and voiding cystogram in a child with subacute or chronic abdominal pain. Only an awareness of this condition on the part of the attending physician will lead to prompt diagnosis and definitive therapy.
Q: MediaStore - BUCKET_DISPLAY_NAME only present on API 29+? I was planning to query the MediaStore.Images.Media.BUCKET_DISPLAY_NAME field, but Android Studio says it's only available on API 29+. The Android docs say the same. However, I have found a StackOverflow post from 2017 where they used this same field. What am I missing here? Thank you. EDIT: I also tried it on an Android 9.0 emulator and it works just fine. A: When looking at the API diff and the current MediaStore source, we can see that until Android 10 (API 29), BUCKET_DISPLAY_NAME was declared inside of MediaStore.Images.ImageColumns. On API 29 this property was moved to MediaStore.MediaColumns (which MediaStore.Images.ImageColumns implements), but the actual value of the constant is the same. So it was simply moved up to the parent interface; the underlying column name is unchanged.
Evaluation of Prosopis africana gum in the formulation of gels. Prosopis africana gum was evaluated for use in the formulation of gels. The rate of release of salicylic acid from gels prepared from Prosopis gum was investigated. The rate of permeation of the drug through the gel was also evaluated. Surfactants were incorporated into the gels and their effect on the release and permeation was also investigated. Tragacanth gum gel was also prepared and used as the standard. The release and permeation of the drug from the gel were low. Incorporation of surfactants did not enhance the release of the drug. The low release and permeation rates may be due to the poor water solubility of the incorporated drug. Correlation of the quantity of drug released with viscosity shows that drug release was dependent on the viscosity of the gels; the highly viscous gels showed slower release rates.
Q: Theorem 3.19 in Baby Rudin (only the infinite cases) I am interested in proving Theorem 3.19 in Rudin only when $s^*$ and $t^*$ are infinite. (Many other posts on Math.SE prove the theorem when $s^*$ and $t^*$ are finite.) While the proof for the infinite part might be trivial, I just want to make sure that I am not missing something. For brevity, I'll just prove the superior limits part of the theorem. If $s_n\leqslant t_n$ for $n\geqslant N$, where $N$ is fixed, then $$\limsup_{n\to \infty} s_n\leqslant \limsup_{n\to \infty} t_n$$ (In alternative notation: $s^* \le t^*$.) $$\liminf_{n\to \infty} s_n\leqslant \liminf_{n\to \infty} t_n$$ Okay, now for the proof. There are 4 possible cases when $s^*$ and $t^*$ are infinite: $s^* = t^* = +\infty \implies s^* \le t^*$ $s^* = t^* = -\infty \implies s^* \le t^*$ $s^* = +\infty, t^* = -\infty$ $s^* = -\infty, t^* = +\infty \implies s^* \le t^*$ We argue that we can ignore Case 3, since it is never realized and contradicts our hypothesis of $s_n\leqslant t_n$. (Similarly, for the inferior limits we will be able to ignore Case 4.) If $s^* = +\infty$, then we can find a subsequence $\{s_{n_k}\} \to +\infty$. This means that $\forall M \in \mathbb{R}, \exists K \in \mathbb{Z}$ such that $$k \ge K \implies s_{n_k} \ge M$$ Similarly, we can find a subsequence $\{t_{n_p}\} \to -\infty$, which means that for some real-valued $Y < M$, $\exists P \in \mathbb{Z}$ such that $$p \ge P \implies t_{n_p} < Y$$ Put $Z = \max \{N, K, P\}$. Then $\forall n \ge Z, s_n \le t_n$ by the hypothesis. However, we also have that $k, p \ge Z$ imply that $s_{n_k} \ge M > Y > t_{n_p}$, which is a contradiction. Is this proof correct? A: Your proof is essentially correct; just note that to make the final contradiction precise you should compare $s_n$ and $t_n$ at a common index (since $t^* = -\infty$ forces the whole sequence $t_n \to -\infty$, you can compare $s_{n_k}$ with $t_{n_k}$ at the same index $n_k$). But there is a more direct way. Since $s_n\leq t_n$ for $n \geq N$, we have $$\sup_{k \geq n} s_k \leq \sup_{k \geq n} t_k \quad \text{for } n \geq N.$$ Now you can just pass to the limit as $n \to \infty$ and use the order property of limits to conclude.
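For reference, the tail-supremum argument sketched in the answer can be written out in full; this is the standard presentation (valid in the extended reals, so it covers the infinite cases too), not Rudin's verbatim text:

```latex
% For n >= N, every term of the tail (s_k)_{k>=n} is bounded by the
% corresponding term of (t_k)_{k>=n}, hence so are the suprema:
\[
  \sup_{k \ge n} s_k \;\le\; \sup_{k \ge n} t_k, \qquad n \ge N.
\]
% Both sides are nonincreasing in n, so their limits exist in the
% extended reals (allowing \pm\infty), and passing to the limit
% preserves the inequality:
\[
  \limsup_{n \to \infty} s_n
  = \lim_{n \to \infty} \sup_{k \ge n} s_k
  \;\le\;
  \lim_{n \to \infty} \sup_{k \ge n} t_k
  = \limsup_{n \to \infty} t_n .
\]
```

The liminf statement follows the same way with infima in place of suprema.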
Q: Value shown as NULL when using IS NULL in WHERE I have this table. When I use this WHERE clause: where FoodID=1 it works, but when I change it to: where FoodID=1 and DayId=1 or DayId is null the field DayId appears as null for all rows. Note: my WHERE statement is generated by a special framework; in the framework I write this code: @@wheresql or dayid is null I don't know why the value changes to null when I use IS NULL in the WHERE clause. A: You have to use parentheses () around the OR condition: where FoodID=1 and (DayId=1 or DayId is null) Otherwise AND is evaluated first and then OR, so your condition is parsed as (FoodID=1 and DayId=1) or DayId is null.
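The precedence difference is easy to reproduce. Here is a minimal sketch using Python's built-in sqlite3 module with made-up rows; the table and column names follow the question, everything else is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Menu (FoodID INTEGER, DayId INTEGER)")
conn.executemany(
    "INSERT INTO Menu VALUES (?, ?)",
    [(1, 1), (1, 2), (2, None), (2, 1)],
)

# Without parentheses: parsed as (FoodID=1 AND DayId=1) OR DayId IS NULL,
# so the unrelated row with FoodID=2 and a NULL DayId sneaks in.
no_parens = conn.execute(
    "SELECT FoodID, DayId FROM Menu "
    "WHERE FoodID = 1 AND DayId = 1 OR DayId IS NULL"
).fetchall()

# With parentheses: only FoodID=1 rows whose DayId is 1 or NULL.
with_parens = conn.execute(
    "SELECT FoodID, DayId FROM Menu "
    "WHERE FoodID = 1 AND (DayId = 1 OR DayId IS NULL)"
).fetchall()

print(no_parens)    # contains both (1, 1) and (2, None)
print(with_parens)  # contains only (1, 1)
```

The same precedence (AND binds tighter than OR) applies in essentially every SQL dialect, so the parenthesized form is the portable fix.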
Q: Disable Compiz Shadow for a specific panel? In compizconfig-settings-manager there's an option to enable/disable the shadow of windows, e.g. gnome-panel. Is it possible to enable/disable the shadow only for a certain panel and not for all panels? Which options can I use besides any or none? A: Yes, you can exclude windows based on their title. For example, to exclude the top panel from having a shadow we first need to find the title of the panel. In the Window Decoration plugin, click on the add button to the right of the Shadow windows entry. Change the Type to Window Title, click the Grab button and then click on the top panel. The title will probably be Top Expanded Edge Panel. Copy this and click Cancel (see bug #584894). Now in the Shadow windows entry, put in the following and hit Enter: any & !(title=Top Expanded Edge Panel) Results Before (with shadow): After (without shadow):
Q: How to stop 2 sub-containers from expanding I have a container that I want to stop from expanding when populated with data, and I want to know how I can achieve this in CSS. I have four containers: the main container (black), a sub-container (orange), sub-container1 (blue), and sub-container2 (green). The main container fills the height of the screen. I set sub-container1 to 85% of the height of the main container and sub-container2 to 15%. I want to stop sub-container1 (blue) and sub-container2 (green) from expanding beyond those heights when I put in a lot of content. Can someone help me achieve this in CSS? A: Have you tried giving the sub-containers a max-height? Also, what do you want to happen with the content that doesn't fit? You can give the sub-containers either overflow: scroll or overflow: hidden.
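A minimal sketch of the layout described above (the class names are made up, since the question doesn't show its markup); capping each sub-container with max-height plus an overflow rule keeps extra content from growing the box:

```css
.main-container  { height: 100vh; background: black; }   /* fills the screen */
.sub-container-1 { height: 85%; max-height: 85%;
                   overflow-y: auto;   background: blue; }  /* scrolls overflow */
.sub-container-2 { height: 15%; max-height: 15%;
                   overflow-y: hidden; background: green; } /* clips overflow */
```

Note that the percentage heights only work because the parent has an explicit height (100vh here); whether you pick auto (scrollbar) or hidden (clipped) depends on what should happen to the content that doesn't fit.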
@rotnroll666 Just discovered the nice #arc42 based documentation of biking2. Thanks for providing such a nice example. Got the hint from @DevBoost that #arc42 might be something helpful. The biking2 example helps a lot! This refers to arc42 by example. There will be a second edition of this book soon, with new contributions from Gernot and Ralf. My example in this book, biking2, is still up to date. I have used this approach several times at customers now, and every time I also recommended generating the documentation as part of the build process. You see the live output here. I'm using the Asciidoctor Maven plugin. That means the docs are an essential part of the build and aren't left to die in a Confluence or whatever. Even more, I use the maven-resources-plugin to copy them into the output dir from which they are put into the final artifact. While the documentation is strictly structured by the ideas of arc42, here comes the fun part: I'm using jQAssistant to continuously verify my personal quality goals in that project. jQAssistant is a QA tool which allows the definition and validation of project specific rules on a structural level. It is built upon the graph database Neo4j. jQAssistant also integrates into your build process. It first analyzes your sources and dependencies and writes every piece of information it can discover through various plugins into an embedded Neo4j instance. That is the scan step. In the following analyze step, it verifies the data against built-in or custom rules. You write custom rules as Cypher queries. Now the interesting fact: those concepts and rules can be written in Asciidoctor as well. This one, concepts_structure.adoc, declares my concept of a config and support package. Those concepts are executed in Neo4j and add labels to certain nodes.
The label is then used in this rule (structure.adoc): "Give me all the packages of the main artifact but the config and the summary package that depend on other packages that are not contained in the package itself." If that query returns an entry, the rule is violated. I use that rule to enforce horizontal, domain specific slices. Now, the very same, executable rule becomes part of my documentation (see building block view) just by including it. Isn't that great? I also include specific information from my class files in the docs. Asciidoctor allows including dedicated parts of all text files. For example, I use package-info.java files like this directly in the docs: "bikes". I did this to a much bigger extent in this repo, find the linked articles there. I love Asciidoctor. Last but not least, I use Spring REST Docs in my unit tests. Spring REST Docs combines hand-written documentation written with Asciidoctor and auto-generated snippets produced with Spring MVC Test. A simple unit test like this gets instrumented by a call to document as shown in this block. This describes expectations about parameters and returned values. If they aren't there in the request or response, the test fails. So one cannot change the API without adapting the documentation. You might have guessed: the generated Asciidoctor snippet is included again in the final docs, you find it here. I started working on the documentation in early 2016, after a great workshop with Peter Hruschka and Gernot Starke. That is now 2 years ago and the tooling just got better and better. Whether you are writing a monolithic application like my example, microservices or desktop applications: there's no reason not to document your application. Here are some more interesting projects that have similar goals: Oliver Drotbohm's Moduliths. Oliver strives to create great monoliths by enforcing structure in code through rules and integrates with other tools like jQAssistant as well.
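The rule quoted above could look roughly like this as a jQAssistant Cypher query; the labels and relationship names here are an illustrative guess at the jQAssistant scanner model, not the actual contents of structure.adoc:

```cypher
// Hypothetical sketch of a package-slice rule: find packages of the main
// artifact (excluding config and summary) that depend on packages
// outside themselves. Any returned row is a violation.
MATCH (:Main:Artifact)-[:CONTAINS]->(p1:Package)-[:DEPENDS_ON]->(p2:Package)
WHERE NOT p1:Config
  AND NOT p1:Summary
  AND NOT (p1)-[:CONTAINS*]->(p2)
RETURN p1, p2
```

The point of writing such rules in Asciidoctor is exactly what the post describes: the same query both fails the build when violated and renders in the architecture documentation.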
There is docToolchain, which describes all the tooling needed for creating living docs. Also, as an added bonus, there's DocGist, which creates rendered Asciidoctor files for you to share. Thanks Michael for the tip.

Passion (November 30, 2018). In "my" industry people speak a lot about doing things "with passion or not at all." I wonder, are we aware of what passion means? The word itself means, both in Greek and Latin, "to suffer", which fits very well with some of the crazy work ethics out there. I for myself try to stay with the "good feelings" that arose from the Stoics and might add to them: engagement instead of passion. I'll always do my best, but I will not suffer. Or at least, I try not to. Just a late night thought.

Modeling a domain with Spring Data Neo4j and OGM (November 2, 2018). This is the fourth post in this series and I want to keep it short and simple. A domain can be modeled in many ways and so can databases. As long as I have dealt with them, I have always preferred the approach: database (model) first. Usually, data is around much longer than applications and I don't want my first application instance or version to define the model for all eternity. Using an Object-Graph-Mapper or Object-Relational-Mapper can be slightly dangerous. One tends to write down some class hierarchy and just let the tool do its magic. In the end, there are schemas that sometimes are very hard to read for humans.
The danger might be a bit smaller with an OGM, as hierarchies and connections map quite nicely onto a graph, but still, I don't want that to be the default. My domain can be summarized with a few sentences: There are Artists, that might be highlighted as Bands or Solo Artists. Bands have Members. Bands are founded in and solo artists are born in countries. Sometimes Artists are associated with other Artists. Albums are released by Artists in a year, which is part of a decade. Albums contain multiple tracks that have been played several times in a month of a given year. I spare you the logical ER-diagram for a relational database here and jump straight to the nodes. I highlighted all the important "classes" and their relations. Modeling data with Neo4j feels a lot like modeling on a whiteboard. And actually, it really is: with Neo4j, the whiteboard model ends up being the physical model. Neo4j is a property graph database. It stores nodes with one or more labels, their relationships with a type among each other, and properties for both nodes and relationships. A label starts with a colon and is usually written with an initial upper case letter, i.e. :Artist and :Album; the type of a relationship is written with a colon and then all uppercase, :RELEASED_BY; and properties in camel case, without a colon, i.e. name and firstName. The above list translates in my application to a model like this: I really find it fascinating how that model reads: pretty much the same as my verbal description. How to model this with Neo4j-OGM and Spring Data Neo4j? You might want to recap the previous post to get an idea of the moving parts. Value objects that happen to be persisted In my domain the most simple objects are probably the year and the decade of the year. Those objects are value objects; they don't have an identity. A year with a given value is as good as another instance with the same value.
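As a sketch, the verbal model above maps to Cypher along these lines. The :RELEASED_BY, :RELEASED_IN and :PART_OF types appear in the post; the other relationship names and all property values are invented here for illustration:

```cypher
// Hypothetical sample data following the described whiteboard model.
CREATE (b:Artist:Band {name: 'Queen'})
CREATE (m:Artist:SoloArtist {name: 'Freddie Mercury'})
CREATE (c:Country {code: 'GB'})
CREATE (a:Album {name: 'A Night at the Opera'})
CREATE (y:Year {value: 1975})
CREATE (d:Decade {value: 1970})
CREATE (b)-[:HAS_MEMBER]->(m)      // Bands have Members
CREATE (b)-[:FOUNDED_IN]->(c)      // Bands are founded in countries
CREATE (a)-[:RELEASED_BY]->(b)     // Albums are released by Artists
CREATE (a)-[:RELEASED_IN]->(y)     // ... in a year ...
CREATE (y)-[:PART_OF]->(d)         // ... which is part of a decade
```

Reading the CREATE statements aloud gives back almost exactly the verbal domain description, which is the "whiteboard model is the physical model" point.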
I did model both of them to use them further on in aggregates, for example in the relationship RELEASED_IN, but I don't see a need for providing dedicated repositories for them. They only have a meaning in connection with other nodes. Things to notice here are: I followed a naming convention: all classes that are mapped to something inside the graph end in "Entity". Thus I have to use the label attribute of @NodeEntity (or the default attribute) to specify a "nice" label, i.e. @NodeEntity(label = "Year"). I use @Index for completeness. One can configure Neo4j-OGM to automatically create indexes, but TBH, I prefer to create them by hand. There's also one outgoing relationship from year to decade. A year is part of a decade: @Relationship("PART_OF"). You also notice that I didn't model any of the other outgoing relationships from the year, like all the albums released in this year, all the months with play counts in that year or the foundation years of bands. While all of Neo4j, Neo4j-OGM and Spring Data Neo4j could map relationships many levels deep, I don't think it's wise from an application performance point of view. I'd rather explicitly select the stuff I need. A common base class for aggregates I'm known for my dislike of having common base entities: "I like it a lot. Still having a hard time to understand why introducing 'Base entities'." I often see this, mainly for an ID column and some auditing. Heck, I even added one to a Spring Data Neo4j example myself to see if our code works as advertised. For this project however, I included one for several reasons: to not pester every other entity with the technical id, to audit interesting entities and, plain and simple, to have an example that actually uses our (Spring Data Neo4j) support of Spring Data's auditing in inheritance scenarios.
This is what the class looks like: While it's often preferable not to extend such a repository interface from the concrete store, I do think it's better having the concrete store at hand in the case of Neo4j. While the concrete implementation brings a lot of CRUD methods one doesn't need all the time, it also brings in overloaded versions of them that take the depth into account as well. To mitigate the large surface of repository methods, it's often a good idea to reduce the repository's visibility to a minimum. I don't see a problem using such an entity and repository directly from a controller, for example like this: One use case is definitely "give me all the albums having a specific main genre." To implement such a case, I rather access that sub collection from the owning side of the relationship. Here, the albums: Complex aggregates The Artist is a complex thing. It exists in three different forms: an unspecified artist, a solo artist and a band. While Neo4j-OGM allows you to add a list of labels to your domain and thus allows one entity to be mapped to several labels, I don't like that approach. Bands and solo artists have quite different attributes, as you can see in the sources linked above, and I don't want them to get mixed up. By declaring the Artist class with @NodeEntity("Artist") and the band, which extends from it, with @NodeEntity("Band") and solo artist accordingly, bands and solo artists are stored with these two labels. Polymorphic queries work to some extent with a repository for the base entity, but as Neo4j-OGM applies schema based loading, stuff can be missing from the result. As you see: no setters for the members and a getter that returns an unmodifiable list. Thus adding (and removing) members goes only through the band. The modified band is then returned. As Neo4j-OGM and Spring Data Neo4j don't do dirty tracking and don't save things automatically at the end of a transaction, we have to take care here.
Again, I recommend a service layer: To close this up, one final example: when entities are being deleted through Neo4j-OGM, it deletes only relationships, not the target nodes of the relationships. You have to decide whether you want "dangling" nodes in your database or not. Sometimes this is ok, sometimes not. As of today, Neo4j itself has no foreign key constraint on relationships. And how so? It's completely ok for a node to exist on its own. In my domain here however, albums without an artist and tracks without albums serve no purpose. To delete them when I delete an artist, I do this again through a service. The session in the following snippet is the autowired OGM session. It's completely ok to access it. Spring Data Neo4j takes care that it participates in ongoing transactions: Yes, there is Cypher hidden away in a class. Sometimes there are compromises to be taken, and this one is a compromise that's ok for me. There's also JCypher, maybe that would be something to try out in the future. With all the things here in this post, it's easy to write a nice application that deals not only with CRUD, but already presents all the interesting associations: The complete application is available on GitHub as "bootiful music". It has some rough edges, also ops wise, but the repository along with the posts of this series should help to get you started. I'd like to thank Michael a lot for the idea of this query, which results in nice micro genres or categories:

No silver bullets here: Accessing data stored in Neo4j on the JVM (October 29, 2018). In the previous post I presented various ways to get data into Neo4j.
Now that you have a lot of connected data and its attributes, how do you access, manipulate, add to and delete it? I have been working with and in the Spring ecosystem quite a while now and for me the straight answer is – without much surprise – just use the Spring Data Neo4j module if you work inside the Spring ecosystem. But to the surprise of some, there's more than just Spring out there. In this blog post I walk you through: Using the Neo4j Java-Driver directly Creating an application based on Micronaut, which recently went 1.0 GA, the Neo4j Java-Driver and Neo4j-OGM A full blown Spring Boot application using Spring Data Neo4j Before we jump right into some of the options you as an application developer have to access data inside Neo4j, we have to get a clear idea of some of the building blocks and moving parts involved. Let's get started with those. Building blocks and moving parts Neo4j Java-Driver The most important building block for accessing Neo4j on the JVM is possibly the Neo4j Java Driver. The Java driver is open source and is available on GitHub under the Apache License. This driver uses the binary "Bolt" protocol. You can think of that driver as analogous to a JDBC driver for a relational database. Neo4j also offers drivers for different languages based on the Bolt protocol. As with Java's JDBC driver, there's a bit of ceremony involved when working with this driver. First you have to acquire a driver instance and then open a session from which you can query the database: With that code, one connects against the database and retrieves the names of all artists I imported in my previous post. What I omitted here is the fact that the driver does connection pooling and one should not open and close it immediately. Instead, you would have to write some boilerplate code to handle this. There are some important things to notice here: The code speaks of a driver. That is org.neo4j.driver.v1.Driver.
The session is also from the same package: org.neo4j.driver.v1.Session. Both of those are types from the driver itself. You have to know these things, because those terms will pop up later again. Neo4j-OGM, the object graph mapper, also speaks about drivers and sessions, but those are completely different things. Most of the time however, people in the Java ecosystem prefer nominal typing over structural typing and want to map "all the things database" to objects of some kind. Let's not get into bikeshedding here but just accept things as they are. Given a database model where a musical artist has multiple links to different Wikipedia sites, represented like this (I omitted getters and setters for clarity): To fill such a model directly by interacting purely with the driver, you'll have to do something like this: A driver session gets opened, then we write a query in Neo4j's declarative graph query language called Cypher, execute it and map all the returned records and nodes: (This code is part of my example how to interact with Neo4j from a Micronaut application, find its source here and the whole application here.) While this works, it's quite an effort: for a simple thing (one root aggregate, the artist, with some attributes), a query that is not that simple anymore and a lot of manual mapping. The query makes good use of a standardized multiset (the collect statement) to avoid n+1 queries or deduplication of things on the client side, but all this mapping is kinda annoying for a simple READ operation. Enter Neo4j-OGM Neo4j-OGM stands for Object-Graph-Mapper. It's on the same level of abstraction as JPA/Hibernate is for relational databases. There's extensive documentation: Neo4j-OGM – An Object Graph Mapping Library for Neo4j. An OGM maps nodes and relationships in the graph to objects and references in a domain model. Object instances are mapped to nodes while object references are mapped using relationships, or serialized to properties.
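The kind of query described above — one row per artist with all its Wikipedia links collected, avoiding n+1 round trips — might look like this in Cypher. The label, relationship type and property names are guesses based on the model described in the text, not the post's actual query:

```cypher
// Hypothetical read query: one row per artist, links gathered into a
// list via collect() so no second query per artist is needed.
MATCH (a:Artist)
OPTIONAL MATCH (a)-[:HAS_LINK_TO]->(w:WikipediaArticle)
RETURN a.name AS name,
       collect(w {.site, .title, .url}) AS wikipediaArticles
ORDER BY name
```

Mapping each returned record into an ArtistEntity with its nested article objects is exactly the manual work the post says the driver-only approach leaves to you, and what Neo4j-OGM automates.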
JVM primitives are mapped to node or relationship properties. Given the example from above, we only have to add a handful of simple annotations to make our domain usable with Neo4j-OGM: notice @NodeEntity on the classes, @Relationship on the attribute wikipediaArticles of the ArtistEntity class, and some technical details, mainly @Id @GeneratedValue, needed to map Neo4j's internal, technical ids to instances of the classes and vice versa. @NodeEntity and @Relationship are used not only to mark the classes and attributes as something to store in the graph, but also to specify labels to be used for the nodes and names for the relationships. Quite a difference, right? Dealing with the driver, the driver's session and Cypher has been abstracted away. Take note that the above session attribute is not a driver session, but OGM's session. This is a bit confusing when you start using those things. Again, this code is part of my example how to interact with Neo4j from a Micronaut application. The complete source of the above is here and the whole application here. To be fair, Neo4j-OGM needs to be configured as well. This is done in its simplest form with a driver instance and a list of packages that contain domain entities as described above, for example like this: The driver instance in the example above is instantiated by Micronaut. With Micronaut's configuration support, it would have been manually configured as in the very first example. In a Spring Boot application, Spring Boot takes care of the driver and Spring Data Neo4j creates the OGM session and deals with transactions, among other things: Spring Data Neo4j Let's start with quoting Spring Data: Spring Data's mission is to provide a familiar and consistent, Spring-based programming model for data access while still retaining the special traits of the underlying data store.
It makes it easy to use data access technologies, relational and non-relational databases, map-reduce frameworks, and cloud-based data services. That goes so far that Craig Walls is fairly correct when he says that many stores "are mostly the same from a Spring Data perspective": It'd be tricky to cover ALL NoSQL options and since they're mostly the same from a Spring Data perspective, I focused on a couple that I could also cover reactive with. Spring Data Neo4j has some specialities, but on a superficial level, the above statement is correct. Spring Data depends on the Spring Framework and given that, it's kinda hard to get it to work in environments other than Spring. If you're using the Spring Framework already, however, I wouldn't think twice about adding Spring Data to the mix, regardless of whether I have to deal with a relational database or Neo4j. Given the entity ArtistEntity above, one can just declare a repository like this: There is no need to add an implementation for that interface; this is done by Spring Data. Spring Data also wires up a Neo4j-OGM session that is aware of Spring transactions. From an application developer's point of view you don't have to deal with mapping, opening and closing sessions and transactions any longer, but only with one single "repository" as an abstraction over a set of given entities. Please be aware that the idea behind Spring Data and its repository concept is not having a repository for each entity there is, but only for the root aggregates. To quote Jens Schauder: "Repositories persist and load aggregates. An aggregate is a cluster of objects that form a unit, which should always be consistent. Also, it should always get persisted (and loaded) together. It has a single object, called the aggregate root, which is the only thing allowed to touch or reference the internals of the aggregate." (see Spring Data JDBC, References, and Aggregates). In my "music" example, I deal with albums released in a given year.
The release year is an integral part of the album and it would be weird having an additional repository for it.

So what are the specialities of Spring Data Neo4j? First of all, in the pure Neo4j-OGM example you might have noticed the single, lone "1". That specifies the fetch depth with which entities should be loaded. Depending on how entities are modeled, you could run into the problem that you fetch your whole graph with one single query. Specifying the depth means specifying how deep relationships should be fetched. The repository method can be declared analogously: and so on. I have seen some interesting finder methods here and there. While this is technically possible, I would recommend using the @Query annotation on the method, writing down the query myself and choosing a method name that corresponds to the business.

Different abstraction levels

At this point it should be clear that the Neo4j Java-Driver, Neo4j-OGM and Spring Data act on different abstraction levels: In your application, you have to decide which level of abstraction you need. You can come a long way with direct interaction with the driver, especially for all kinds of queries that use your database for more than simple CRUD operations. However, I don't think that you want to deal with all the cruft of CRUD yourself throughout your application.

When to use what?

All three abstractions can execute all kinds of Cypher queries. If you want to deal with result sets and records yourself and don't mind mapping stuff as you go along, use the Java driver. It has the least overhead. Not mapping stuff to fixed objects has the advantage that you can freely traverse relationships in your queries and use the results as needed. As soon as you want to map nodes with the same labels and their relationships to other nodes more often than not, you should consider Neo4j-OGM. It takes away the "boring" mapping code from you and helps you to concentrate on your domain. Also, Neo4j-OGM is not tied to Spring.
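A sketch of such a repository declaration, tying the points above together — the method names, the fetch-depth parameter and the custom query are my own illustrative assumptions, not the original post's code:

```java
import org.springframework.data.neo4j.annotation.Depth;
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.Neo4jRepository;
import org.springframework.data.repository.query.Param;

import java.util.List;
import java.util.Optional;

// Sketch only: the CRUD methods come for free from Neo4jRepository;
// the finder and the @Query method are hypothetical examples.
public interface ArtistRepository extends Neo4jRepository<ArtistEntity, Long> {

    // Derived finder with an explicit fetch depth
    Optional<ArtistEntity> findByName(String name, @Depth int depth);

    // Prefer a business-oriented name plus an explicit query
    // over long, derived finder method names
    @Query("MATCH (a:Artist)-[r]-(album:Album {releasedIn: $year}) RETURN a, r, album")
    List<ArtistEntity> findAllWithReleasesIn(@Param("year") int year);
}
```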
I haven't written applications outside the Spring ecosystem for quite a while now. For this post, I needed an example where I don't have Spring, so I came up with the Micronaut demo that uses both plain Java-Driver access and OGM access. Depending on what you want to achieve, you can combine both approaches: mapping the boring stuff with Neo4j-OGM, handling "special" results yourself. If you're writing an application in the Spring ecosystem and decided for OGM, please also add Spring Data Neo4j to the mix. While it doesn't put any further abstraction layer on the mapping itself, and thus is not slowing things down, it takes away the burden of dealing with the session and transactions from you.

I do firmly believe that Spring Data Neo4j is the most flexible solution:

Start with a simple repository, relying on the CRUD methods
If necessary, declare your queries with @Query
To differentiate between write and read models, execute writes through mapped @NodeEntities and reads through read-only @QueryResults
Write a custom repository extension and interact directly with the Neo4j-OGM or Neo4j Java-Driver session

By declaring this additional method on the repository, I now have mapped a simple Cypher query that does complex things (here: match all albums that contain a specific track plus all the relationships of those albums, and return all of that apart from the other tracks) to my entity. I benefit from SDN's mapping and have all the queries in one place. In my domain, I didn't model the track as part of the album. Those tracks should be explicitly read, and not all the time. I therefore added an additional class, called AlbumTrack. Again, accessors omitted for brevity: Notice the @QueryResult annotation. This is special to Spring Data Neo4j. It marks this as a class that is instantiated from an arbitrary query result but doesn't have a lifecycle.
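Such a read model class might look like the following sketch — the field names are assumptions based on the description; only the @QueryResult annotation is the feature being discussed:

```java
import org.springframework.data.neo4j.annotation.QueryResult;

// Sketch: @QueryResult marks a class that is populated from an
// arbitrary query result but has no persistence lifecycle.
@QueryResult
public class AlbumTrack {

    private Long id;

    private String name;

    private Long trackNumber;

    // accessors omitted for brevity, as in the original post
}
```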
It then can be used in a declarative query method, similar to the first one: While this query is indeed much simpler than the first one, it's important to be able to do such things for designing an application that performs well. Think about it: Is it really necessary to have all the relations to all other possible nodes at hand all the time? In the end, you might have guessed it: There are no silver bullets. There are situations where an approach close to the database is more appropriate than another; sometimes a higher abstraction level is better. Whatever you choose, try not to be too dogmatic. All the examples are part of my bootiful music project, more specifically the "knowledge" submodule. With the building blocks described here, you can develop a web application that is used for reading and writing data. The example application uses a simple, server side rendered approach for the frontend, but Spring Data Neo4j plays well with Spring Data Rest, and that makes many different approaches possible. In the next installment of this series, we have a look at the concrete domain modeling with Spring Data Neo4j.

How to get data into Neo4j? (Fri, 12 Oct 2018)

This is the second post in the series From relational databases to databases with relations. Check the link to find all the other entries, too. The source code for everything, including the relational dataset, is on GitHub: https://github.com/michael-simons/bootiful-music. The post has also been featured in This Week in Neo4j. To feel comfortable in a talk, I need to have a domain in front of me I'm familiar with.
Neo4j just announced the Graph Gallery, which is super nice to explore graphs, but I wanted to have something of my own, and I keep using the idea of tracking my musical habits. In the end I want to have a knowledge base in addition to my chart applications (both linked above). There are several ways to get data into your Neo4j database. One is a simple Cypher statement like this CREATE (artist:Artist {name: 'Queen'}) RETURN artist which creates a node with the label Artist and a property name for one of my favorite bands of all time. I can use the MERGE statement, which ensures the existence of a whole pattern (not only a node), too. More on that later, though. In any case, that would be a real effort to do manually. Therefore, let's have a look at how to import data. I found several options: I find it quite fascinating in how many ways CSV data can be processed from within Neo4j itself. It reads through CSV and provides each row in a way that you can interact with from a Cypher statement, like it comes from the graph itself. Those CSV files can be put into a dedicated import folder in the database instance, but can also be retrieved from a URL. You can come a long way with that import if you already have CSV or are willing to massage your data a bit so that it fits a plain structure. Rik did this with beer related data, check it out here: Fun with Beer – and Graphs in Part 1: Getting Beer into Neo4j. JSON is an option if you want to work against the many nice APIs out there, but my data sits in a relational database. It looks like this: These tables store all tracks I have listened to, their artists and genres, and, in a time series table, all plays. If you read the previous post closely, you'll notice that this database, dubbed statsdb, is only in NF2. There is no separate relation for the albums of an artist. They are stored with the tracks. I actually forgot why I modeled it that way.
I vaguely remember that I was annoyed that album names are not necessarily unique and I could not find a good business primary key. Anyway, the schema is still running and gives me each month something like that, which btw takes only three SQL queries: In my property graph model I wanted to have separate album nodes, though, so I had to massage and aggregate my data a bit before creating nodes. As I didn't want to do this based on CSV, I looked at all the JDBC options. First, the Neo4j ETL Tool:

Neo4j ETL Tool

The Neo4j ETL Tool can be downloaded through Neo4j Desktop and needs a free activation key. The link guides you through all the steps necessary to connect the tool to both the graph database as well as the relational database. The nice thing here: You don't have to install a driver for popular databases like MySQL or PostgreSQL as it comes with our tool. I ran the tool against my database and stopped here: The ETL Tool recognized my tables and also the foreign keys, and proposed a node and relationship structure. This is nice, but doesn't fit exactly what I want. The relationships can easily be renamed and so can the node labels, but I don't want the structure. I could just run this and then transform everything via Cypher again, but that feels weird. Getting all the artists is simple. It's basically a "select * from artists" and the corresponding Cypher. This is what APOC does:

Importing data with JDBC into Neo4j

APOC is not only the name of a technician in the famous movie "The Matrix", but also an acronym for "Awesome Procedures on Cypher". Neo4j is extensible via custom stored procedures and functions, much like PostgreSQL and Oracle. I have been a friend of those for years; I even read and generated Microsoft Excel files from within an Oracle Database. Now I realize that Michael Hunger does the same for Neo4j. Anyway, APOC offers "Load JDBC" and Michael has a nice introduction at hand. To use APOC you have to find the plugin folder of your Neo4j installation.
Download the APOC release matching your database version from the above GitHub link and add it to the plugin folder. APOC doesn't distribute the JDBC drivers itself; those have to be installed as well. As my stats db is PostgreSQL, I grabbed the PostgreSQL JDBC Driver and put it into the plugin folder as well. Things are super easy from there on. With the following Cypher statement, the Neo4j instance connects against the PostgreSQL instance, executes a select * from artists and returns each row. YIELD is a subclause to CALL and selects all nodes, relationships or properties returned from the procedure for further processing as bound variables in a Cypher query. I then used them to merge all artists. In case they already existed, I updated some auditing attributes. The associations between some artists had already been there, so merge didn't remove them: The merge clause is really fascinating. It matches a pattern, and the merge is applied to the full pattern, meaning it must exist as a whole. In the case above, the pattern is simple, it's just a single node. If you want to create or update an album released by an artist and use something like this MERGE (album:Album {name: $someAlbumName}) - [:RELEASED_BY] -> (artist:Artist {name: $someNewArtistName}), the statement will always create a new artist node, regardless of whether the artist already exists or not, as the whole pattern is checked. You have to do those things in separate steps like this: First, merge the artist. Then, merge the album and the relationship to the existing artist node. For me, coming from an SQL background, this feels unusual at first, much like I can use something in an imperative AND declarative way. Now, the above example is still not what I want. apoc.load.jdbc basically gives me the whole table and I have to do all post-processing afterwards. Luckily, that second parameter can also be a full SQL statement.
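The two separate steps described above can be sketched in Cypher like this — the parameter names are the ones from the text:

```cypher
// Step 1: merge the artist on its own, so an existing node is reused
MERGE (artist:Artist {name: $someNewArtistName})
// Step 2: merge the album and its relationship to the now-existing artist
MERGE (album:Album {name: $someAlbumName})
MERGE (album)-[:RELEASED_BY]->(artist)
```

This way each MERGE checks a small pattern of its own, instead of one large pattern that can never match as a whole.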
Now, to create a graph structure like above (minus the tracks, though), I did something like this. Behold, all the SQL and Cypher in one statement: Read: Bind the JDBC url and an SQL statement to variables. The SQL selects a distinct set of all artists and albums, including genre and year, thus denormalizing my relational schema by joining the artists and genres. The resulting tuples are then used to create decades and years, merge artists, genres and albums and finally relate them to each other. I was deeply impressed after I ran that. First: It's amazing how well Neo4j integrates with other stuff, and secondly, it's actually pretty readable, I think. In my talk, I will dig deeper into the Cypher used. Also, the statements for generating the track data in the above graph are a bit more complex (you find them here). But in the end, I could have stopped here. And probably I would have done so if I had started with APOC in the beginning. But me being the mad scientist I am, I started somewhere else:

Writing your own tool

I did this right after my first look at the ETL tool. Neo4j can be extended with procedures and functions. Procedures stream a result with several attributes, functions return single values. When I joined Neo4j, my dear colleague Gerrit impressed me with knowledge about that stuff and I wanted to keep up with his pace, so I saw a good learning opportunity here. One does not have to extend from a dedicated class or implement an interface for writing a Neo4j stored procedure. It's enough to annotate the methods that should be exposed with either @Procedure or @UserFunction. It's necessary that org.neo4j:neo4j is a provided dependency on the class path. Neo4j provides detailed information on how to create a package that can just be dropped into Neo4j's plugin folder, find the instructions here. They worked as promised.
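A skeleton of such a procedure — the procedure name, record type and the elided body are my own assumptions; only the annotations and the injected context follow the description above:

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.procedure.Context;
import org.neo4j.procedure.Mode;
import org.neo4j.procedure.Name;
import org.neo4j.procedure.Procedure;

import java.sql.DriverManager;
import java.util.stream.Stream;

public class StatsImporter {

    // The graph database service the procedure runs in is injected by Neo4j
    @Context
    public GraphDatabaseService db;

    // Hypothetical name and signature; a procedure streams result records
    @Procedure(name = "stats.loadArtists", mode = Mode.WRITE)
    public Stream<ImportResult> loadArtists(
            @Name("url") String url,
            @Name("userName") String userName,
            @Name("password") String password) throws Exception {

        try (var connection = DriverManager.getConnection(url, userName, password)) {
            // execute SQL here, open a transaction on `db` and create nodes
        }
        return Stream.empty();
    }

    // Result records are plain classes with public fields
    public static class ImportResult {
        public long artistsImported;
    }
}
```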
What I did was to apply one of my favorite libraries again: run jOOQ against my stats database, generating a type safe DSL dedicated to my domain, package it up and then use it in a Neo4j stored procedure. In the following listing you'll find that I use Neo4j's @Context to get hold of the graph database service in which the procedure is running, and pass in user name, password and JDBC url to my procedure. Those are used to connect against PostgreSQL (try (var connection = DriverManager.getConnection(url, userName, password)) {} – yeah, that's Java 11 code), open a Neo4j transaction and then just execute my SQL, manually creating nodes. The complete sources for that exercise are in the etl module of my project. As interesting as the stored procedure itself is the integration test, which needs both a Neo4j instance and PostgreSQL. For the former I use our own test harness, for the latter Testcontainers.

Conclusion

As much as the last exercise was fun and I learned a ton about extending Neo4j, working directly with the engine and so on, it's not only slower (probably due to context switches and the relatively large transaction), but it's hard to read, by magnitudes. As with a relational database, declarative languages like Cypher and SQL beat explicit programming hands down. If you ever have a need to migrate, aggregate or duplicate data from a relational database into Neo4j, have a look at apoc.load.jdbc. Select the stuff you need in one single context switch and then process as needed, either in a single transaction or also in batches and in parallel. For my music project, I'll probably keep the ETL Tool around as I like the structure I created for the tool itself, but will rewrite the process from relational to graph based on APOC and plain SQL and Cypher statements, if it ever sees production of any kind.
Overall verdict: It took me a while to find a graph model I like and from which I am convinced I can actually draw some conclusions about the music I like, but after the graph started to resonate with me, it felt very natural and enjoyable.

From relational databases to databases with relations (Thu, 11 Oct 2018)

In the summer of 2018, I joined Neo4j. This seems odd at first; my "love" for relational databases and SQL is known. I didn't have this slide in my jOOQ presentation for two years now without reason. But: Looking at the jOOQ and SQL talks from the perspective of the early 2000s, they also seemed odd at first. Back then, I never thought I'd be doing that much with databases. That changed a lot. My experience is that all the data you deal with usually has a much longer lifetime than any of your applications sitting on top of it. Knowing one or more database management systems is essential. Being able to query them, even more so. What Neo4j and relational databases have in common: A great, declarative way to tell them which data to return and not how to return it. In the case of a relational database, this is obviously SQL, which has had quite the renaissance for a few years now. Neo4j's lingua franca is Cypher. I get to play with Cypher a lot, but in the end this is not what I am working on at Neo4j. My work is focused on our Object Graph Mapper, Neo4j-OGM, and the related Spring Data module, Spring Data Neo4j. We have written about that a bit on Medium. Given my experience with Spring, Spring Data and Spring Boot, the role suddenly makes much more sense. People who entered the IT-conference circus may know the merry-go-round (or should I say "trap"?)
of "talks stress me out a lot" – "hey, this is great, just enter another CfP". I fell for it again and proposed a talk with the above title to several conferences. Now I have to come up with something. What are we talking about here? The domain will be music. I have been tracking my musical habits for more than 10 years now at my side project Daily Fratze, and I enjoy looking back at what I listened to in this month but 5 years ago. The guy on the left, Edgar F. Codd, invented the relational model back in the 1970s. A relation in this model doesn't describe a relation like the one between two people or a musician associated with a band who released a new track. A relation in the relational model is a table itself. Foreign keys between tables ensure referential integrity, but cannot define relations themselves. I put down some thoughts a while back in this German deck. This is what a relation looks like in a relational database: One kind of sport to do with relational databases is the process of normalizing data. There are several normal forms. Their goal is to keep a database redundancy free, for several reasons — back in the 1970s, disk space being one of them. First normal form (1NF): All attributes should be atomic. 2NF is 1NF plus no functional dependencies on parts of any candidate keys. That is: There must not be a pair of attributes appearing twice in a relation's tuples. 3NF forbids transitive dependencies ("Nothing but the key, so help me Codd") and it gets complicated from there on. I have to say, though, that normalization up to 3NF is still relevant today in relational systems, at least if you're a friend of (strongly) consistent data. Why is this? Relations in NF can be queried in many, many ways. Each query sent via SQL returns a new relation, so what's the problem? It depends. In a strictly analytical use case, there's often not a problem. Recreating object hierarchies, however, joining things back together, is.
A handful of joins is not hard to understand, even without a tool, but self referential joins or a sheer, huge amount are. It also gets increasingly hard on the database management system. This is where graphs can come into play. Graphs are another mathematical concept, this time from graph theory. Sometimes people call a chart a graph by coincidence, but this is wrong. A graph is a set of objects with pairs of objects being related. In mathematical terms, those objects are vertices and the relations between them, edges. We call them nodes and relationships in the Neo4j database. Neo4j is a property graph. A property graph adds labels and properties to both nodes and relationships: One takeaway from that post is: Neo4j is referred to as a native graph database because it efficiently implements the property graph model down to the storage level. Or in technical terms: Neo4j employs so called index-free adjacency, which is the most efficient means of processing data in a graph because connected nodes physically point to each other in the database. This obliterates the need for complex joins, either directly or via intersection tables. One can just tell the database to retrieve all nodes connected to another node. This is not only super nice for simple aggregations of things, but especially for many graph algorithms. So what has this to do with my talk? My SQL talk was all about doing analytics. That is, retrieving data like in this image from a relational database with built-in analytic functions. Computing running totals, differences from previous windows and so on (read more here). First of all I'm gonna analyze how to create a graph structure from the very same dataset I used in the SQL talk. There are different tools out there with different approaches. I've chosen a technique that resonated with me for various reasons. As I want to enrich the existing dataset, I'll model a domain around it, with Java, putting Neo4j-OGM to use.
I'll show how Spring Data Neo4j helps me avoid having to deal with a lot of cruft. In the end, I'll show that I can build my own music recommendation engine based on 10 years of tracking my musical habits by applying some of the queries and algorithms possible with Neo4j.

Validate nested Transaction settings with Spring and Spring Boot (Tue, 25 Sep 2018)

The Spring Framework has had outstanding, declarative transaction management for years now. The configurable options may be overwhelming at first, but are important to accommodate many different scenarios. Three of them stick out: propagation, isolation and, to some lesser extent, read-only mode (more on that a bit later):

propagation describes what happens if a transaction is to be opened inside the scope of an already existing transaction
isolation determines among others whether one transaction can see uncommitted writes from another
read-only can be used as a hint when user code only executes reads

I wrote "to some lesser extent" regarding read-only as read-only transactions can be a useful optimization in some cases, such as when you use Hibernate. Some underlying implementations treat them as hints only and don't actually prevent writes. For a full description of things, have a look at the reference documentation on transaction strategies. Note: A great discussion on how setting read-only to true can affect performance in a positive way with Spring 5.1 and Hibernate 5.3 can be found in the Spring ticket SPR-16956. Some of the transaction settings are contradicting in case of nested transaction scenarios.
The documentation says: By default, a participating transaction joins the characteristics of the outer scope, silently ignoring the local isolation level, timeout value, or read-only flag (if any). This service here is broken in my perception. It explicitly declares a read-only transaction and then calls a save on a Spring Data repository: This can be detected by using a PlatformTransactionManager that supports validation of existing transactions. The JpaTransactionManager does this, as well as Neo4j's Neo4jTransactionManager (both extending AbstractPlatformTransactionManager). To enable validation for JPA's transaction manager in a Spring Boot based scenario, just make use of the provided PlatformTransactionManagerCustomizer interface. Spring Boot's autoconfiguration calls them with the corresponding transaction manager: In the unlikely scenario that you're not using Spring Boot, you can always let Spring inject an EntityManagerFactory or, for example, Neo4j's SessionFactory and create and configure the corresponding transaction manager yourself. My Neo4j tip does cover that as well. If you try to execute the above service now, it'll fail with an IllegalTransactionStateException indicating "save is not marked as read-only but existing transaction is". The question whether such a validation is possible arose in a discussion with customers. Funny enough, even working with Spring now for nearly 10 years, I never thought of that and always assumed it would validate those automatically, but never tested it. Good to have learned something new, again.
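A minimal sketch of such a customizer — the configuration class and bean method name are assumptions; the customizer interface and setValidateExistingTransaction are the Spring APIs mentioned above:

```java
import org.springframework.boot.autoconfigure.transaction.PlatformTransactionManagerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.support.AbstractPlatformTransactionManager;

@Configuration
public class TransactionManagerConfiguration {

    // Spring Boot's autoconfiguration picks up this bean and applies it
    // to the transaction manager it creates.
    @Bean
    public PlatformTransactionManagerCustomizer<AbstractPlatformTransactionManager> enableValidation() {
        return transactionManager -> transactionManager.setValidateExistingTransaction(true);
    }
}
```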
Donating to Médecins Sans Frontières (Ärzte ohne Grenzen) (Sat, 01 Sep 2018)

Some weeks ago, my friends Judith and Christian, who write great Steampunk, Fantasy and, in recent times, science fiction books and for whom I wrote this little Kotlin app, had a good idea: We just decided to donate all sales revenues from #Shardland and #Scherbenland from June to August 2018 to Medecins Sans Frontières / Doctors Without Borders @MSF for their emergency help for refugees in Lybia, Europe and on the Mediterranean: https://t.co/7TKpL6tuT2 Keeping my promise, here are the numbers: My share of royalties of the arc42 by example book has been 112,15€ from June to August. Gernot, Stefan and I split revenues, so that is only my share. Gernot himself already donates through Leanpub's causes. I don't have numbers from my publisher for the Spring Boot Book. I therefore decided to round this number up to 250€. My family and I have been incredibly lucky over the last years, and I'm more than happy that one can give. We live in Germany and despite some idiots on various (social) media, it's really fortunate to live here. Health care is working, the social system as well. We don't have war but clean water, food and everything.
We should not forget that this is by far not self-evident for many, many people on this planet.

On becoming a Java Champion (Mon, 20 Aug 2018)

July 2018 marks a personal highlight of mine. Just a bit after Rabea brought the news to our very own EuregJUG, the Java Champions account sent this tweet out: My name alongside these Java luminaries. When I started this blog here more than twelve years ago, that was something I never even dreamed of. In 2006 the Java Champions program already existed, but me, just back from university and vocational training for about 4 years or so, had no clue at all. While getting my feet wet, I took inspiration from many of the people in the program, from their code, blog posts and talks. Not knowing that I would be working with them later in my life, even being direct colleagues with one of the founders like Eberhard Wolff. Even better: I'm lucky enough to call some of them my friends. Java Champion was a long shot. I'm hopefully not the worst software engineer and architect out there, but I'm very far from knowing all the things. Quite the contrary. Did I know what a Java bridge method is until Gunnar brought this up on Twitter? Do I know much about theoretical computer science? Hell no. I'm very sure that many people who I find very inspiring could forget more than I ever knew and still know more than I do. So I must have done something else right, and I'm happy with that. I just want to write some points down that might help others on their way:

On growth

Somewhen back in autumn 2014 I got Prokura in my company.
I'm still not aware of an English word for that, but it means something along the lines that I could make and execute business decisions. My Prokura was only restricted in the way that I could not have closed the company. Looking back, it was my bosses saying "we trust you to run this thing and also, this is our way of saying: here, you're explicitly technical lead, too." Sadly, it didn't come with a manual. At this point I had already been more than 12 years at ENERKO INFORMATIK. I had grown in this time, but mostly on many technical levels. Things I learned include SQL, PL/SQL, XML, XSLT, Java (obviously), Spring (more obviously), we did Groovy at some point, not speaking of all the Swing based stuff I wrote, and AFAIK ENERKO still runs an Oracle Database Dictionary based ORM I invented. But could I lead a team? That was hard for me for several reasons. The company didn't have much fluctuation (and still hasn't), so one basically "grew up" with one another, and it's a weird situation if one person suddenly changes, either internally or externally. Back then, I at some point added the title line of a Nick Cave song to my personal site: "You've got to just, Keep on pushing, Keep on pushing, Push the sky away." That has been my motto for quite some time. I needed to grow beyond technical, "hard skills". I tried and, surprisingly, the feedback I got after leaving ENERKO INFORMATIK in 2017 was better than the impression I had of myself. I managed to hire two new engineers and they are still with the company, which makes me quite happy. In the end, I still felt I didn't manage to achieve anything. It's weird how self perception and perception from others diverge. I left the company and am now working at Neo4j, where I work in a small team with Gerrit Meier on Neo4j-OGM and Spring Data, and I couldn't be happier with it.

Things come with a price

Most of the things "Spring" I learned working with and on Daily Fratze, a personal photo site I have been running since 2005.
I really love that stuff and I learned so much by developing it. But: It ruined my sleep in the days between 2010 and 2013 (2013 was the year my 2nd kid was born), and it made parts of my family time with my 1st kid very hard. I wanted to "finish" stuff and basically didn't do anything else in my spare time. Partially the same happened during late 2014 and early 2015, coinciding with the events described under "On growth", when I developed biking.michael-simons.eu and a lot more Spring Boot related stuff inside my company. I was at home but I wasn't there. I came back from the office, ate, and went into my office. At some point, I needed to step back and also visited a doctor to help me cope with sleep issues and dark moods. We have a good family life most of the time, though. My wife always kept my back and I tried to be awake early every morning to be there for my kids. It's by far not self-evident that someone tolerates a partner that uses every minute awake, when the kids are in their beds, for coding, reading technical stuff and so on. I have given a lot of talks since the end of 2015, with some success. The talks I enjoyed the most have been on Spring Boot and database related stuff. All the things I can explain in the middle of the night. However: I was and still am super nervous for at least a week before a talk. That feeling doesn't seem to go away. Doing those talks is much more work for me than writing things (for example a book, as explained in this post).

On mentoring

I was astonished that during the last 3 years or so many people came to me and thanked me for inspiration. That's one of the reasons I'm trying to write this down here. Something like being a Java Champion, or success in general, is most often not something that happens in a void, without help and support.
Everything I did and keep on doing: I would not be able to do this without having support in my life, a growing sense of what is good for my health (both mind and body) and without having had good mentors in my life. There was my boss Rainer Barluschke at ENERKO INFORMATIK who taught me that there's so much more than technical problems to solve. That it's worth going a detour if the end result fits. Who even introduced me to some topics that seemed to be more of the esoteric kind back then, like spiritual growth. My ex-colleague Silke Böhler, who challenged me at JavaLand 2015 with some good food for thought and later on with a line "work is fun, but has to be taken seriously anyway…". Apart from that: It has been the little things that last and helped along the way. I already mentioned the support of my family, but also a kid can be a mentor. It's hard to describe, but having someone near me that most of the time is in a better mood than myself helps with focusing on and accentuating the good things.

Summing things up…

Don't give up trying to reach your goals because other people's success seems to be so easily achieved. In the end, people of a group engage in the same game, but start with different preconditions. Look for opportunities where you

are allowed to learn
are able to fulfill a meaningful task, with all given due diligence and seriousness
are part of a team; IT is not a single player's sport

And also:

Find a good mentor
Become a mentor: pass on what you learned
Keep interest in other things outside your job… Not everything is related to IT

Right now, I feel at peace with myself for the first time in about 5 years. Getting over the midlife crisis? Who knows… I'm thankful that I actually could push my sky away by magnitudes, and the last year will be a year I will always remember.
For the near future I’m super thrilled to work on cool stuff with Neo4j and Spring Data, with the latest release of Neo4j OGM 3.1.1 just out of the door, and to see what happens next. I made some innuendo to some people (Ralf) that I do have ideas for a next book, and as a spoiler: If I put this together, I’ll try to bring this post here, something along that one, and some other ideas into a form that might be worth reading for more people.

Spring Boots Configuration Metadata with Kotlin (Sun, 15 Jul 2018)

Last week I decided to raffle a copy of my book (see Twitter) and I wrote a small Spring Boot Command Line Runner to raffle the retweeting winner, as one does (see raffle-by-retweet, feel free to reuse this). I wrote the application in Kotlin. Notice the use of @ConfigurationProperties in my application: The lateinit attributes are not as nice as I want them to be, but I heard support for data classes is coming. Anyway. A super useful thing with those configuration property classes is the metadata that can be generated for your IDE of choice, see Configuration Metadata. In a Java application it’s enough to add org.springframework.boot:spring-boot-configuration-processor as a compile-time dependency, respectively as an annotationProcessor dependency in a Gradle build. For Kotlin, you have to use the kotlin-kapt plugin. It takes care of annotation processing in Kotlin, and Spring Boot’s annotation processor has to be declared in its scope like this:
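The post’s original snippets did not survive in this copy, so what follows is a minimal sketch rather than the author’s exact code. It assumes a Gradle Kotlin DSL build; the plugin versions, the `raffle` prefix, and the property names are illustrative, while the `spring-boot-configuration-processor` coordinate is the real Spring Boot artifact:

```kotlin
// build.gradle.kts -- sketch only, not the original post's snippet
plugins {
    kotlin("jvm") version "1.2.51"
    kotlin("kapt") version "1.2.51"
}

dependencies {
    implementation("org.springframework.boot:spring-boot-starter")
    // Spring Boot's annotation processor, declared in kapt's scope so the
    // configuration metadata is also generated for Kotlin sources
    kapt("org.springframework.boot:spring-boot-configuration-processor")
}
```

A hypothetical properties class in the style the post describes, with `lateinit` attributes:

```kotlin
// RaffleProperties.kt -- illustrative names only
import org.springframework.boot.context.properties.ConfigurationProperties

@ConfigurationProperties(prefix = "raffle")
class RaffleProperties {
    // lateinit, because Spring binds the value after construction
    lateinit var tweetId: String
    var numberOfWinners: Int = 1
}
```

With kapt in place, the processor emits META-INF/spring-configuration-metadata.json into the build output, which IDEs pick up to offer completion and documentation for keys like `raffle.tweet-id` in application.properties.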
Hello! I am creating a game within RPGMaker with FaeV and I have a question! First, to explain: this game plans to have art and cut scenes within it, more than just pixel art, and we would like as many people as we can get to play this game. So my question is how important it is to you to be able to pick your character's gender within a video game, or whether you're completely indifferent to it. Now I know, "Why not just make it have all the genders to begin with?" Well, because that means A LOT more coding and A LOT more art, and if most of the community would just be indifferent to it, then it would save a lot of time! But if most of the community would better enjoy getting to pick, then it will be worth all the extra assets we will have to put into it. But then also, if a vast majority picks female then we will make the character female, or if the majority picks male, yadda yadda. So this is to help us gather up data! I would like to make the protagonist's gender undefined, but include ways to customize an outfit so that you can make your character act as a certain gender, and even change your mind at a later point. LightningLord2 wrote: I would like to make the protagonist's gender undefined, but include ways to customize an outfit so that you can make your character act as a certain gender, and even change your mind at a later point. Our game is in the very, very early stages of its development, so not too many of the features or such have been thought out, but I do appreciate this feedback and have it written down to see if it could be a possibility! Thanks for the input ♥ There are a lot of hooks that need to be thrown into the text to simply use proper pronouns. Then, if anything lewd is done, it just derails entire scenes into being different text-wise or graphics-wise. This is just a distraction from the main goal of making a vore game. Some games make it a set-in-stone choice and others make it irrelevant to what is going on.
There are plenty of games where it just doesn't matter and the subject is danced around skillfully. You have to ask yourself during the brainstorming process, "What are this game's driving forces?" If player choice is key, with crafting who the character is and what choices they make, like a Skyrim or Sims type deal, then gender choice is absolutely required. If you're trying to tell a story, though, just pushing a narrative like a visual novel, then the choice doesn't much matter and trying to push it in is a distraction from the goal. Trans/non-binary isn't even somewhere I've considered. Not for lack of having thought of it, but because it isn't a subject I know well enough to even consider portraying. It would feel like me, a suburbanite American black male, writing a biography about the struggles of a young Polynesian female in poverty. I have no sense of it, so I should just stay away or find a consultant who is or was a young Polynesian female in poverty. There is a certain respect factor there that I personally would tread lightly around. With every little project I make, I'm finding that choices like these multiply the work needed, and my one-man show isn't capable of outputting triple-A developer efforts. Just my 2 cents though. There are exceptions and I'd love to be wrong. As you mentioned yourself, it's a ton of extra work, but it can give people some immersion. Just remember: the more code you add in, the more bugs become possible and create issues. That's why extraneous code is a big deal. If you want to do all the extra work and hammer out bugs, people will probably appreciate it, but people will appreciate a game with scenes regardless, so make what you can create without ending up feeling like it's a chore instead of a hobby. If you have to rewrite the same scene 5 times for every encounter to incorporate every gender option and add art for it all, you might burn out and resent what used to be fun to make.
If you think you can handle it though, I'm sure everyone will love a game with options, so do whatever you like. I think we all agree that it would be nice to have the option, but games are already so hard to make. Most vore games are never completed. We have our preferences, but more than anything, I think we are all rooting for you to finish! I think the best option would be to have the protagonist be a well-defined character with their own personal characteristics, which would include sex/gender. It gives everything more focus, and more of a sense of identity. The nuance in this discussion is why I wish the poll was a bit more flexible. I'd prefer having an option, but playing as a female is a close second. Sadly my vote can't reflect that, but well--I suppose that's what posts are for. Artemis wrote: The nuance in this discussion is why I wish the poll was a bit more flexible. I'd prefer having an option, but playing as a female is a close second. Sadly my vote can't reflect that, but well--I suppose that's what posts are for. I second this. Being able to select more than one option in a poll would also help with this, because I definitely had two poll votes (well, three): trans/non-binary, female, and player choice. I went with choosing your character for mine, though I can play a game without a choice if I can get into it. So, here's my take on this. Develop one gender first, then use that as a baseline for all others. It might not be the most efficient, nor the most time-saving, but it might do for the entirety of development. As for art, that's for someone else to cover, because I'm the writer, who manages to do okay-ish in pixel art resources (but Amysaurus is a lot better than I). LightningLord2 wrote: I would like to make the protagonist's gender undefined, but include ways to customize an outfit so that you can make your character act as a certain gender, and even change your mind at a later point.
In games with small sprites like RPGMaker, this is a really great idea when possible, having it essentially be costume-based. Of course, anything with sex scenes makes that harder from the writing/scripting side, but personally I don't find sex scenes to be necessary at all for vore fetish work to still be enjoyable. As for my own opinion on the poll, obviously full choice between all options is the best when it's feasible, but when it's not, I prefer a female lead. Even in games with primarily-male leads where dicks are even written into their scenes, I'd rather at least have the option to use a female or androgynous/undefined character sprite anyway (and tend to go into RPGMaker and edit sprites/portraits myself if not given the choice in-game). Don't make gender choice even a thing unless the point of your game is really role-playing. If you're trying to tell a story with vore, you already have a gender in mind - use that. If it would make a major difference in any scenes you plan to do to have a different gender participating in the event, consider how much time you'll spend rewriting everything to fit both genders. It's not impossible, but it's going to limit how much time you can focus on developing different scenes, because you'll spend extra time on each one. Consider a simple story event, for example, that ends in a bad end: Main character and Pred character are temporary allies traveling. They find an abandoned building for shelter and bed down for the night. Let's assume, for simplicity's sake, that Pred is female. If Pred is about to eat Main, there are a few ways that this scene plays out - sexual seduction leading to getting eaten is a common tactic authors like to write here (cause it's hot), but Pred seducing Main is going to play out completely differently if Main is a girl or a boy (or anything in between).
The drives that make Main decide it's worth it to get into a situation that could end with getting eaten are generally completely different depending on whether they're a girl or a boy (hard to cocktease someone who doesn't have a cock), so that's two scenes that need to play out for Main to get eaten. The simple act of writing different reactions and speech patterns for the Main character's sexes is going to give Main(F) and Main(M) different personalities. They may be similar, but you generally can't just swap puss for penis and wet for hard and expect players to think it's good writing (plus impregnation stuff or tits or ejaculation and all other things not directly shared between the two). So try as you may, you're going to have to write two main characters in order to supply a female and male choice for the player if you're going to have any consistency. It's far better to write the game you want to make and write the character you think is best for the story's sake. Players can only find things that they would change about your game if you actually make it, after all. Hmm... I would like to point out that while letting the player assume the protagonist's gender can kinda work in normal games, when it comes to... adult games, you're likely to encounter a lot more people for whom the experience will be impacted by no one in-game acknowledging their gender, or who otherwise need to use their imagination to get that part of the experience. There's a strong correlation between the relevance of sexual orientation to a game and the importance of the protagonist's gender, y'see. It might work out, but I wouldn't make the mistake of believing it'll be a crowd pleaser--not any more than picking a single gender and writing up a good game around that anyway. That being said, I suppose if you did decide to go that route, allowing people to play dressup would certainly be a decent compromise.
A 'Strategic' Approach to Drinking Submitted by Elizabeth Redden on October 23, 2006 - 4:00am Pre-game, pre-party, pre-funk ... how to pre-vent? Call “pre-gaming” by any of its other names and it still translates the same for substance abuse specialists seeking strategies to control the ubiquitous “pre-party,” generally defined as a small group of students drinking together in a dorm room or other private space prior to an actual party or social event. An “exploratory study” of Pennsylvania students’ pre-gaming habits found that college students use pre-parties as a mechanism for getting buzzed while enjoying a safe environment, cutting costs, and short-circuiting law enforcement, bouncers and a need for a valid I.D. The students are also seeking to bond with friends and set themselves up for a sexual experience later on – two particularly telling objectives given what researchers found to be a profound sense of social anxiety and loneliness among focus group participants. “The game is all about hooking up, having a sexual experience,” Beth DeRicco, associate director of the Center for College Health and Safety, said Friday at the conference, held in Arlington. DeRicco teamed with the Pennsylvania Liquor Control Board to host focus groups at 10 of the state’s colleges this winter to study students’ nighttime rituals and their attitudes toward what DeRicco called “a real embedded culture from campus to campus about pre-gaming.” A total of 114 students, including student leaders, students who had been punished for alcohol use, students attending to satisfy a class requirement and volunteers, participated in the 10 groups. At the beginning of each session, DeRicco explained that the focus group was voluntary, and offered students there to fulfill a requirement the opportunity to leave. Some of them did. 
The focus groups were scattered geographically throughout the state, the participating colleges representing public and private institutions, large and small -- host schools were Bloomsburg University of Pennsylvania , Bucknell University, Cabrini College, Gettysburg College, Indiana University of Pennsylvania, Pennsylvania State University at Altoona, Rosemont College, St. Francis University, Villanova University and York College. Students completed a paper and pencil survey about their alcohol use before attending the focus group meeting. About 12 percent of students reported “none” as their average number of drinks per week, 13 percent reported 1 to 3, 14 percent 4 to 6, 21 percent 7 to 10, 16 percent 11 to 15, 12 percent 16 to 20 and 11 percent 20 or more. DeRicco said the numbers skewed a bit high, probably due to the participation of punished students, but are fairly representative of drinking habits among Pennsylvania college students. What is striking about the findings is the fundamentally strategic nature of student attitudes toward the pre-gaming festivities. According to DeRicco, women and men alike seek a certain “buzz” so they can save money at a bar or enjoy an event where they would not easily be able to obtain alcohol. Women, who more frequently pre-game with clear alcohol like vodka (“fewer calories,” DeRicco said), cite a desire to drink in a safe environment as a key reason to pre-party in a small group, and are more likely to view their pre-gaming activities as an exercise in pacing -- more now, when it’s safe and it’s cheap, and less later. Meanwhile, men are more likely to drink beer, strive for high levels of consumption, try to match their peers swill for swill and depend on the intensity of the intoxicating experience as a necessary condition for making friends. Oftentimes, students plan to stop drinking, or dramatically cut back, after the pre-party, DeRicco said, but by then they’re drunk and their judgment is cloudy. 
Despite students’ stated intentions, pre-gaming can often lead to more drinking, not less, helping to fuel potential consequences of heavy drinking that include blacking out, alcohol poisoning, driving drunk, taking sexual risks, being sexually victimized and getting injured. “It’s a strategic decision to get to a high BAC (blood alcohol content) quickly. But once they go out, they don’t make good decisions, they drink more, they come back with alcohol poisoning and they end up in the E.R.,” DeRicco said. Rules against underage drinking at pre-gaming activities are notoriously difficult to enforce, as small groups typically drink in dorm rooms, not generating the types of noise and crowds that can attract uninvited inquiries. In addition, students are often resistant to any administrative crackdown on the tradition: One student told DeRicco that the best way administrators could become involved would be to put student activity fees toward room rentals for the purpose. DeRicco described a need to attack what she considers to be the underlying problems: a lack of social skills and deep sense of anxiety, an inability for many students to socialize with one another in unstructured spaces without a drink in hand. DeRicco said colleges need to offer more social, structured activities that don’t involve drinking, citing Pennsylvania State University’s alcohol-free LateNight-PennState program as a model. On-campus prevention specialists said that the trend of pre-gaming may not be new, but it has perhaps never been so pervasive. “I don’t think it’s that new of a problem,” said John Steiner, a health educator at the University of New Mexico in attendance at the conference Friday. “I think small groups of people have for a long time gathered at one another’s house to save some bucks and arrive a little buzzed ...
but it wasn’t that frequent.” “There appears to be a new level of intensity.” At Illinois State University, Kathy O’Connell, an alcohol and drug intervention specialist, said she has had to prompt students punished for their alcohol use to report their pre-gaming indulgences on weekly reports of their drinking habits. She doesn’t think that students are deliberately discounting the two, three or four drinks they might have had before the party started, as they’ll list the drinks they had after it began. It’s just that they don’t register the pre-gaming activities as anything unusual or noteworthy. “It’s become just such a routine part of their weekend socialization that sometimes they overlook it when they’re reporting their drinking,” O’Connell said.
Unique Mother’s Day Gifts for that perfect Mother! Mother’s Day is certainly a special time for both mothers and children of all ages, since it gives them a chance to honor the most important woman in their life. Everyone wishes to think of and do something unique, and to send Mother’s Day gifts to Delhi that will definitely be different from the others. Yet finding that perfect gift can leave you confused and discouraged at times, and that is why making a final decision feels so difficult for every child. The choice of gift becomes even harder when the mother is one of those people who seem to have everything. At that point, it certainly becomes really difficult to choose something she does not already have. What other options can you consider as Mother’s Day gifts to Delhi? Given such demands, requests and choices, it is best to gift your mother a big basket built around a particular theme, with everything in it matching that theme. Let me make this clearer with a few examples. One option you might like to consider is the gift basket, which is just perfect for every mother, and for many reasons. You can either buy these baskets from an online store or a shop, or simply make a personalized one that has everything in it to suit her tastes. You thus get to fill the gift with as many options as are available to you. You can also put these baskets together at different prices, in a range that suits your budget, which is why gift baskets are considered to be a lot of fun. It is also true that women of all ages love to have some time of their own and to be pampered at the same time. To satisfy this wish, consider a spa basket filled with items such as soaps, bath products, lotions, perfumes, gels and much more.
Such items are the best way to please your mother and gift her something made up of things she loves. She will surely feel relaxed and loved after using this gift. And in case your mother loves to cook, you can present her with some cookbooks written by famous chefs, along with gourmet foods, rare ingredients, and much more. Mother’s Day is the one time when you can pamper your mother and let her know how important she is in your life. So all you need to do is take note of her different choices and hobbies and then accordingly arrange gift delivery in Gurgaon. Finding a perfect gift for your mother will no longer be something so difficult that it cannot be handled easily. This year, all you need to do is find a special and unique gift basket for her and fill it up with all the different goodies that she is fond of.
Vicente Fox flaunts his hypocrisy

Vicente Fox is at it again. The former Mexican president is howling and hurling ignorant insults about a news report President Trump denies, claiming that the American president questioned the admission of immigrants from "[s‑‑‑]-hole" countries. Naturally, he thought it was about him. .@realDonaldTrump, your mouth is the foulest [s‑‑‑‑‑‑‑] in the world. With what authority do you proclaim who's welcome in America and who's not. America's greatness is built on diversity, or have you forgotten your immigrant background, Donald? Aside from the disgustingness of a foreign national telling us how to run our country and whom to elect, four things stick out about this tweet, none of them flattering to Fox. First, it employs the tired cliché that diversity is strength. Fox wouldn't know diversity if it jumped up and bit him. If there really were diversity in our immigrant population, his own country would have far less "representation." The millions of Mexican illegals in the U.S. are precisely why the immigrant population lacks diversity and the ill-considered institution of a "diversity lottery" was undertaken. The fact that there are so many Mexican nationals among the immigrant population due to illegal immigration is precisely why even the legal immigrants do not assimilate. We regularly see people who have been here 20 years and cannot speak a word of English. Why is that? Because the only people they ever meet in their immigrant communities are fellow Mexicans. Assimilation happens when there is real diversity among the immigrant population, when Mr. Cambodian needs to communicate with Mr. Colombian, his neighbor, and the medium of exchange is English. Studies show that when a nation imports all one nationality, it doesn't get assimilation; it gets balkanization. Second, Mexico has always been notorious historically for turning immigrants away and not welcoming them.
Spanish immigrants in the 1930s were positively persecuted, as one Mexican billionaire of Spanish descent once explained to me back when I was a writer for the Forbes billionaire list. If Fox says diversity is greatness, what does that make Mexico with its non-diversity? Third, shouldn't he be embarrassed that so many Mexicans want to leave? Lastly, Fox shows complete ignorance of American law as he questions with what authority President Trump proclaims who's welcome in America and who's not. Obviously, he's trying to sound like a La Raza activist, but he ends up looking like an idiot. In fact, the president here has the vested legal right to determine precisely who gets in and who doesn't, along with the U.S. Congress. Fox would have you think there's some right to be an illegal in the states. In short, he reveals his hypocrisy, saying one thing and doing another. If he were capable of shame, he ought to be ashamed of himself, but no one should hold his breath. A smack-down is pretty much all he can handle. He has no business micromanaging our elected president.
WESTERN OPEN. INSIDE THE WESTERN OPEN. Cog Hill Leaves 'em Singing Back-9 Blues Granted, Steve Stricker played solid golf for the third straight round. But another reason why he'll carry a five-stroke lead into Sunday's final round of the Motorola Western Open is because the back nine at Cog Hill's Dubsdread wilted his pursuers. Co-second-round leader Jay Don Blake faded to a 1-over-par 73 with a 39 on the back side, including a bogey at No. 12 and a double bogey at 13. Wayne Grady, John Huston and Craig Parry each hit 10 under at one point, only to fall back. Justin Leonard also double-bogeyed No. 13. "I think the holes on the back side are a little longer and have some good par-3s," said Blake, who had to take a drop from a hazard on the par-4 13th. "The greens were firmer today, and I had a round where it kind of got away from me." Grady scorched the front side in 4-under 32 and reached 10 under for the tournament with a birdie on the par-3 14th. But he finished bogey, double-bogey to drop to 7 under. "Steve played great, but really, nobody else made a big move," said Lee Janzen, in second place at 10 under. "When the greens are hard, you can't just fire at the pin and good shots don't get as close." "It got a hold of me on 16 and 17," said Parry, who bogeyed both. "The golf course was playing extremely fast. I wish we played more courses like it." Sharp shooters: For the tournament, 153 of the 237 rounds--65.5 percent--have been under par. The aggregate score for the field is 95 under, with only five players shooting over par. Aussies at home: For Parry, Grady and Stuart Appleby, home is thousands of miles away. But playing a course like Dubsdread is enough to make the native Australians a little homesick. "This course is very similar to what we play in Australia," Parry said. "And for (us) to still be on the leaderboard epitomizes the way we do it back home." Parry's 33 on the front nine allowed him to move to 208, tying him for fourth. 
Grady stands right behind him, despite a double-bogey on 18, at 209. Appleby is next in line at 210. "This is just like the Melbourne courses, where you've got to hit the right spot and you've got to be under the hole. That's typical of a Melbourne course. It's all right there in front of you." Hang on, Sluman: Hinsdale's Jeff Sluman stumbled away from the pack Saturday, firing a 35-37-72. That dropped him from the top-10 contenders to a group of seven at 7-under 209. "As far as hitting any bad shots, I didn't think I played that bad tee to green," Sluman said. "It was putting I had trouble with." Beck's day: Chip Beck, who has had his fair share of ups and downs during his career, ended the round at 2-under 214 after shooting a 72. Beck, while pleased with his performance, hopes things turn out better Sunday. "I played better than I scored, which is unfortunate," Beck said. "I'm encouraged by my play, but I'm looking forward to the next round." Nick sticks around: Masters champion Nick Faldo missed the cut, but as of Saturday morning he hadn't left town. As the third round unfolded nearby, Faldo was spotted on the back putting green at Cog Hill, working on his short game with caddie Fanny Sunesson. On Wednesday, Faldo had said he planned to fly home to England Sunday night, practice for a few days in midweek and then leave the following Sunday for Royal Lytham and St. Annes, site of the July 18-21 British Open.
Interaction of discrete and rhythmic movements over a wide range of periods. This study investigates a complex task in which rhythmic and discrete components have to be combined in single-joint elbow rotations. While previous studies of similar tasks already reported that the initiation of the discrete movement is constrained to a particular phase window of the ongoing rhythmic movement, interpretations have remained contradictory due to differences in paradigms, oscillation frequencies, and data analysis techniques. The present study aims to clarify these findings and further elucidate the bidirectional nature of the interaction between discrete and rhythmic components. Participants performed single-degree-of-freedom elbow oscillatory movements at five prescribed periods (400, 500, 600, 800, 1,000 ms). They rapidly switched the midpoint of oscillation to a second target after an auditory signal that occurred at a random phase of the oscillation, without stopping the oscillation. Results confirmed that the phase of the discrete movement initiation is highly constrained with respect to the oscillation period. Further, the duration, peak velocity, and the overshoot of the discrete movement varied systematically with the period of the rhythmic movement. Effects of the discrete-onto-rhythmic component were seen in a phase resetting of the oscillation and a systematic acceleration after the discrete movement, which also varied as a function of the oscillation period. These results are interpreted in terms of an inhibitory bidirectional coupling between discrete and rhythmic movement. The interaction between discrete and rhythmic movement elements is discussed in comparison to sequential and gating processes suggested previously.
Observations on comparative priority-setting

Oliver Jinks

As an international, or at the very least inter-European, individual I have experienced societies quite distant from each other on a hypothetical spectrum of freedom of expression and information. Fortunately, I grew up in Austria, which, despite its other problems, enjoys an incredibly free press (it places 5th of 179 on the Reporters Without Borders Press Freedom Index (PFI)) and media platform (Blatant racism on political posters? Sure!), so I only experienced censorship when my creative homework exercises crossed a few lines, which didn’t feel like censorship at the time, given all we were learning about Austria’s experiences under Hitler. Through studying modern history in high school and moving on to related modules in my undergraduate and graduate studies, I got more of a grasp of what forms censorship really takes. Now, I see it everywhere, and the oxymora, paradoxes and hypocrisies just keep piling up. Just last month I came across a story in the Guardian (see paragraphs 9-10) about a ‘branding police’ and other restrictions for the upcoming games, no, er, events, hmm, physical-exercise-related happenings coming this summer, er, season following spring. My local pubs won’t be able to advertise that they have a TV and will happily be showing the, erm, movement-related occurrences. Indeed, pubs that somehow are allowed this incredible privilege will have this branding police inspect the building and cover up logos of brands that aren’t paying their wages, even on the undersides of our toilets. A disturbing thought, democracies acting in this way for the sponsors’ interests.
Meanwhile most of what I see our Western youth using their freedoms for can be summarised as: inane exclamations about trivialities, warping written language to the point that legitimate cause can be declared for accusing them of crimes against humanity, and a growing number of voices ignorant of the humour behind their “first world problems” (“I don’t have the newest iPad”, “I wish I had an Asian boyfriend”). I say most, because there are dedicated people with opinions and information worth listening to here and there. I brought that up to contrast with my experience in Azerbaijan (rank 162/179 on the PFI) two years ago. The government was denying and downplaying attacks on, imprisonments of, and harassment and intimidation of journalists and other media actors on a daily basis. In recent weeks, leading up to the Eurovision song contest, we have covered more such news. And yet one of the biggest sources of inspiration I ever drew (apart from my then office colleagues) was near the heart of the capital, in a friendly pub/bar just two blocks off a main road. Here I met and befriended a growing group of young people who, tired and upset with what was happening in their country, were becoming part of and partially leading the growth of nation-wide social media activity, in the form of social and political critiques and information-sharing via blogs as well as the usual Facebook and Twitter. They discarded the option of anonymity, despite the risks, and focussed on broadening their audience, and more importantly, their membership, a growing group of motivated individuals that fight for transparency, accountability and justice. Several of the ones I had the pleasure of getting to know now work in newspapers in neighbouring countries. The most effective one is still running the IRFS.
While considered somewhat “backward” by some Western people, this country’s ‘true’ population (and those of countries in similar circumstances, such as Egypt, Libya and Tunisia) has a lot to show the West about what really matters and what a society should want to be like: enabling and inclusive, not serving petty agendas.
Post Mortem
Noise Pollution (Regulation and Control) Rules, 2000
By Nagaland Post | Publish Date: 10/9/2018 12:50:26 PM IST

The rapid urbanization and changing lifestyles have given rise to increasing ambient noise levels from various sources such as music systems, loud speakers/public address systems, generator sets, industrial activities, fire crackers and noise from vehicles, which have deleterious effects on human health and the psychological well-being of the people. People are unable even to sleep due to the noise produced by loud use of loud speakers/public address systems, music systems, bursting of firecrackers, etc. The unpredictable, intermittent and impulsive noise produced by the use of loud speakers/public address systems, any sound producing instrument or musical instrument or a sound amplifier, bursting of firecrackers and unnecessary honking of horns turns into a disharmony of noise. At times, what is music for some can be noise for others. In order to curb the growing problem of noise pollution, the Government of India has enacted the Noise Pollution (Regulation and Control) Rules, 2000, framed under the Environment (Protection) Act, 1986, which provide the regulation for noise.

RESTRICTION ON THE USE OF LOUD SPEAKERS/PUBLIC ADDRESS SYSTEM/ANY SOUND PRODUCING INSTRUMENT OR MUSICAL INSTRUMENT OR A SOUND AMPLIFIER:
1. A loud speaker or a public address system shall not be used except after obtaining written permission from the Deputy Commissioner.
2. A loud speaker or a public address system or any sound producing instrument or musical instrument or a sound amplifier shall not be used at night (between 10.00 pm and 6.00 am) except in closed premises for communication within, e.g. auditoria, conference rooms, community halls and banquet halls, or during a public emergency.
3. Notwithstanding anything contained in Sub-rule (2),
The State Government may, subject to such terms and conditions as are necessary to reduce noise pollution, permit use of loud speakers or public address systems during night hours (between 10.00 p.m. and 12.00 midnight) on or during any cultural or religious festive occasion of a limited duration not exceeding fifteen days in all during a calendar year.

COMPLAINTS TO BE MADE TO THE AUTHORITY:
1. A person may, if the noise level exceeds the ambient noise standards by 10 dB(A) or more given in the corresponding columns against any area/zone, or if there is a violation of any provision of these rules regarding restrictions imposed during night time, make a complaint to the Deputy Commissioner.
2. The authority shall act on the complaint and take action against the violator in accordance with the provisions of these rules and any other law in force.

IMPLEMENTING AUTHORITY:
Under Section 2 (i) (c) of the Noise Pollution (Regulation and Control) Rules, 2000, the Government of Nagaland vide notification no. For/Gen-46/95 (Pt-VI) dated 27th April 2009 designated the Deputy Commissioners of all the Districts to be the ‘Authority’ for maintenance of the Ambient Air Quality standards in respect of noise.

STANDARDS IN RESPECT OF NOISE FOR DIFFERENT AREAS/ZONES:
Under the Noise Pollution (Regulation and Control) Rules, 2000 the Ambient Air Quality standards in respect of noise for different areas/zones are given below:

Area Code   Category of Area/Zone   Limits in dB(A) Leq
                                    Day Time             Night Time
                                    (6 a.m. to 10 p.m.)  (10 p.m. to 6 a.m.)
A           Industrial Area         75                   70
B           Commercial Area         65                   55
C           Residential Area        55                   45
D           Silence Zone            50                   40

EFFECTS OF NOISE ON HUMAN HEALTH:
(a) Hearing Impairment: Hearing impairment can be either temporary or permanent. Temporary impairment is a temporary loss of hearing acuity experienced after a relatively short exposure to excessive noise. Permanent impairment is an irreversible loss of hearing that is caused by prolonged noise exposure.
(c) Annoyance: Noise annoyance may be defined as a feeling of displeasure evoked by noise and is basically a psychological response. The consequences are often ill temper, bickering and even enmity.
(d) Physiological Functions: A number of physiological disorders result from interference with the biological functioning of the body as a consequence of over-exposure to noise. They include neurosis, anxiety, insomnia, hypertension, giddiness and nausea, fatigue and increased sweating. Chronic noise may lead to abortions and congenital defects in children too. A startling sound can quicken a human fetus’s heart rate and cause its muscles to contract. Malformation in the fetus’s nervous system may also be caused.
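The complaint threshold in these rules is simple arithmetic: a complaint lies when the measured level exceeds the zone standard by 10 dB(A) or more. Here is a minimal sketch of that check, using the zone standards given above; the zone keys and function name are illustrative, not part of the rules themselves:

```javascript
// Ambient noise standards in dB(A), from the schedule above.
// Day time is 6 a.m. to 10 p.m., night time 10 p.m. to 6 a.m.
const noiseLimits = {
  industrial:  { day: 75, night: 70 },
  commercial:  { day: 65, night: 55 },
  residential: { day: 55, night: 45 },
  silence:     { day: 50, night: 40 },
};

// A complaint may be made when the measured level exceeds
// the applicable standard by 10 dB(A) or more.
function complaintGrounds(zone, period, measuredDb) {
  return measuredDb - noiseLimits[zone][period] >= 10;
}

// e.g. 70 dB(A) at night in a residential area (standard: 45 dB(A))
console.log(complaintGrounds("residential", "night", 70)); // true
```

Note that the threshold is a simple difference of decibel readings, exactly as the rules phrase it; no acoustic averaging is modelled here.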
Image caption: The Lifetime Isa is designed to help people save throughout adulthood

It looks like free money. An absolute no brainer, even. But while the Treasury has confirmed that the Lifetime Individual Savings Account (Lisa) will go ahead in April, others have raised doubts about the whole idea. The Nationwide has announced it will be boycotting the product, claiming that it is too complicated. Others, like Standard Life and Fidelity, will launch a Lisa, but not in time for the April start date. Nevertheless one provider - Hargreaves Lansdown - has announced it will launch one by 6 April 2017. That will give some savers the chance to earn up to £32,000 in government bonuses. But critics are warning not just that the product is complex, but that it could leave some investors worse off. And should you want your money back at any stage, you could pay dearly.

What is a Lifetime Isa?
It is a savings product, designed to help people at two different points in their lifetime:
- When they want to buy their first house or flat, or
- When they want to retire
Savers can only open a Lisa if they are between the ages of 18 and 39 inclusive. They can pay in up to £4,000 a year, but no more. At the end of the first year, the government will add a 25% bonus, ie up to £1,000. From 2018/19, this bonus will be paid monthly. Since you can continue paying into a Lisa up until the age of 50, the potential eventual bonus is up to £32,000. For most people, this is more generous than the Help to Buy Isa (see below).

How is the money invested?
As with ordinary Individual Savings Accounts (Isas), the money can be invested as cash - or in stocks and shares. Cash Lisas are expected to pay out the same as Isas. Currently the best instant access rates are around 1% a year. Such returns are in addition to the government bonus.
Alternatively the money can be invested in stocks and shares, which have potentially higher returns, but which carry greater risk too. All gains are free of income and capital gains tax.

What are the rules if you want to buy a home?
The money can only be used without penalty if you are a first-time buyer. In other words, you cannot have owned a property before. The property cannot cost more than £450,000. This is more generous than the Help to Buy Isa, which is limited to £250,000 outside London, but £450,000 in the capital. Two partners can each use their own Lisas to buy a house together, potentially doubling the government contribution. A home bought through a Lisa cannot normally be rented out.

What if you want to use the money when you retire?
Once you are over the age of 60, you can withdraw money from a Lisa and use it for whatever you like. In addition to being used by first-time buyers, it is therefore an alternative to a pension. A pension is tax free when you pay into it - so the taxman contributes an extra 25% to the amount paid in by basic rate taxpayers - but money taken out after the age of 55 is taxable. A Lisa is the exact reverse: you will have already paid tax on contributions into it, but money taken out will be tax-free.

Could I use a Lisa instead of a pension?
Most experts urge real caution here. Anyone who is paying into a workplace pension can expect contributions to be made by an employer, which are likely to be more valuable than the annual Lisa bonus.
The exceptions to this might include:
- Those not paying in to a workplace pension - such as self-employed people, or non-working parents
- Those who have secured the maximum employer contribution on their workplace pension, and want to save more
- Those who are up to their lifetime, or annual, limit on pension contributions
"In most other situations, a pension will make more sense," says Tom McPhail, retirement specialist with Hargreaves Lansdown. "This is particularly relevant for anyone who can join a workplace pension and benefit from employer contributions." There could also be a difference in the age at which you are entitled to withdraw money from a pension and a Lisa. Those currently under 40 - and therefore eligible for a Lisa - will need to be at least 57 before they are able to take money from their pension. This age will rise further as the state pension age also rises. Those with Lisas will not be able to withdraw money penalty-free until the age of 60.

What if I need the money sooner?
You can only withdraw money penalty-free if you are buying your first home, you are over 60, or if you have a terminal illness. All other withdrawals will incur an apparently hefty 25% exit charge, except in the first year (2017/18). See more below. This breaks down as 20% to recover the bonus, plus an additional 5%. That 5% is partly to make up for the investment growth on the bonus itself. Nevertheless one investment firm has calculated that the charge could mean an investor losing around 45% of the growth in the value of the Lisa, assuming he or she had it for 10 years, and it were to grow by 4% a year. "It is disappointing to see the government pushing ahead with an exit fee that looks overly punitive if people unexpectedly need access to their savings," said Tom Selby, an analyst with AJ Bell. Lisa Caplan, head of financial advice at Nutmeg, warns that some savers could actually lose money.
"If you invest £4,000, you get a 25% top-up from the government to make £5,000. If you withdraw early, you will be penalised by 25%, which is £1,250, so you will be left with £3,750 - 6.25% less than your initial investment."

When will the bonus be paid?
In the first year (2017/18) the bonus will be paid at the end of the 12-month period. In subsequent years it will be paid on a monthly basis. In December 2016 the government said this was the reason why the exit penalty would not be applied in the first year - as otherwise some investors might have to pay the penalty before they had received the bonus.

The ISA family
- Individual Savings Account (Isa): Simple tax-free savings account. Up to £15,240 can be invested in cash or stocks and shares. Limit rises to £20,000 in April 2017.
- Junior Isa: Similar to an Isa, but for under-18s. Anyone - eg grandparents - can pay into it. Maximum yearly contribution: £4,080.
- Help to Buy Isa: Anyone over 16 can invest. Government adds 25% bonus, to a max of £3,000, for those buying a first home.
- Lifetime Isa (Lisa): For those aged 18-39. Government adds 25% bonus each year, up to a max of £32,000. Can be used for buying a first home, or for retirement.
- Innovative Finance Isa: Money gets invested in peer-to-peer lending. Returns are better than cash Isas, but money is at risk.

Help to Buy Isa, or Lifetime Isa?
Both products pay a 25% bonus if you are buying a house for the first time. But if you plan to save for more than five years, or if you can afford to put more than £2,400 a year into the plan, a Lisa will typically be more generous. Here's why:
- You can put more money into a Lisa (£4,000 a year, v £2,400 a year into a HTB Isa, plus £1,200 when it is opened)
- The Lisa pays a bonus at the end of each year. The HTB bonus is only added when you buy a house. So the Lisa will benefit from annual compound interest, boosting savings.
- The maximum bonus for a Lisa is £32,000, v £3,000 for a HTB Isa.
- Lisas can be used to buy a property up to £450,000 anywhere in the UK. Outside London, HTB Isas are limited to homes worth less than £250,000.
- The Lisa bonus will be available for exchange of contracts. The bonus on the HTB Isa is only payable on completion.
- HTB Isas can only be opened up to 30 November 2019.

Can I save into both?
Yes, you can save into a Help to Buy Isa and a Lisa at the same time, subject to the annual limits. But you can only use the bonus from one to buy a property. If you use the HTB bonus, you would then be subject to a 25% exit penalty on the Lisa, even if you use those funds to buy a home. If you use the Lisa to buy a home, you won't get the 25% bonus on the HTB Isa. So it will make sense for many people to transfer funds from a HTB Isa into a Lisa from April 2017. From that date, the total amount you will be allowed to save in all Isas will be £20,000 a year. One other warning: because Lifetime Isas are savings accounts, money in them can affect your entitlement to benefit payments.
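The early-withdrawal arithmetic described in the article can be checked with a short sketch. This is a minimal illustration of the stated figures only - a 25% government bonus on contributions and a 25% exit charge on the whole balance for early withdrawal; the function names are illustrative:

```javascript
const BONUS_RATE = 0.25;       // government tops up contributions by 25%
const EXIT_CHARGE_RATE = 0.25; // charged on the whole balance on early withdrawal

// Balance after the government bonus is added to contributions
function balanceAfterBonus(paidIn) {
  return paidIn * (1 + BONUS_RATE);
}

// Amount returned to the saver after the early-exit charge
function earlyWithdrawal(balance) {
  return balance * (1 - EXIT_CHARGE_RATE);
}

const paidIn = 4000;
const returned = earlyWithdrawal(balanceAfterBonus(paidIn)); // 3750
const lossPct = 100 * (paidIn - returned) / paidIn;          // 6.25
console.log(returned, lossPct);
```

So £4,000 paid in becomes £5,000 with the bonus, but an early withdrawal returns only £3,750 - 6.25% less than the original contribution, which is the losing-money scenario the article warns about.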
Each year Minecraft celebrates its awesome community at MINECON – a massive convention with panels, events and a stage show – and this year is no different! At the event you can watch panels discussing your favourite game (which should be Minecraft), hang out with thousands of like-minded crafters, shake hands with the Mojang team and rub shoulders with YouTubers! There’s something for every type of crafter. This year, to celebrate the event, they have released a new DLC and it’s ABSOLUTELY FREE. DLC Description: Are you ready for MINECON? This year, our celebratory skin-pack comes with four biome-themed adventurers and a mysterious Enderman cape. Swish! Minecraft seems to be evolving at an incredible rate and keeps on adding more and more innovative and engaging material to get people from all age ranges and interests playing. You can pick up your FREE DLC from the Xbox Store right now.
Q: Return render of elements without wrapping

I'm looking for some way to return children raw without wrapping them in a div. The following doesn't work:

    render: function () {
      return this.props.children )
    }

This does:

    render: function () {
      return (
        <div>
          {this.props.children}
        </div>
      )
    }

How can I edit the children and provide a new set of children not wrapped in another element?

A: This is how you can manipulate this.props.children: top-level-api.html#react.children

And as for providing a set of children not wrapped, it is not possible and unreasonable in Reactjs. You may consider that you are rendering a component, not a list or something. The official explanation goes like this:

    Note: One limitation: React components can only render a single root node. If you want to return multiple nodes they must be wrapped in a single root.

Well, when there is only a single child, you can return this.props.children for sure. I re-checked your snippet code and found out you may have a syntax error there, an extra bracket after return this.props.children. This is my own test and it renders properly:

    var Test = React.createClass({
      render() {
        return this.props.children
      }
    })
    React.render(
      <Test><div>hello world</div></Test>,
      document.getElementById("test")
    );

Updated: In my opinion, you should avoid doing this as I do above, because I believe this will make your component vulnerable.
Q: How do I take a screenshot of a single view object instead of the entire screen?

So I have a UIImageView with some text overlayed on the top and bottom. I want to get a screenshot so I have a new image with the text overlayed on it. This is the code I was working with, but I am unable to take a screenshot of my UIImageView object properly.

    func generateImage() -> UIImage {
        // Render view to an image
        UIGraphicsBeginImageContext(self.view.frame.size)
        view.drawViewHierarchyInRect(self.imageView.frame, afterScreenUpdates: true)
        let memeImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return memeImage
    }

Alternatively, I have also tried the code below; though the image saved is the right size/area of the screen, it ends up with a blurry version of what I took a screenshot of:

    UIGraphicsBeginImageContext(self.imageView.frame.size)
    let context = UIGraphicsGetCurrentContext()
    imageView.layer.renderInContext(context!) // tried drawInContext but it didn't work at all
    let memeImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return memeImage

A: First, using UIGraphicsBeginImageContextWithOptions will allow you to adjust the scale factor to suit your screen size. Second, passing in the view you want to capture will ensure that you are working with the correct view/subviews.

    func generateImage(currentView: UIView) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(currentView.frame.size, true, 0.0)
        currentView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
        let memeImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        //UIImageWriteToSavedPhotosAlbum(memeImage, nil, nil, nil)
        return memeImage
    }
Call it a petition or call it a complaint. Either way, it is the legal document that starts a new lawsuit. A complaint and a response to the complaint are termed pleadings in the law. Most states offer form pleadings that you fill in and file with the court clerk, such as form complaints for personal injury cases and for breach of contract suits. Even though using form pleadings makes the process easier, you still have to figure out exactly whom you are suing, how much time you have to file the complaint and where to sue.

Whom to Sue
If you are filing for divorce, you may not have to ponder long about the name of the defendant, or respondent, in your divorce petition. However, if you slipped and fell in a store, it's a different question. In that case, you'll need to find out who owns the store. If the owner is a company rather than an individual, you'll need to know who owns that company and its legal name before you file the petition.

When to File the Complaint
Although you can file a divorce petition at any point during a marriage, this is not true if you plan to file a complaint because someone's conduct injured you. States impose time periods in which to bring lawsuits, and if you wait too long, you can lose your opportunity to sue. These time periods are called statutes of limitations. Each state determines its own statutes of limitations, and they vary depending on the type of action. For example, in California, you have three years after a car accident to bring a complaint for property damage, but only two years to sue for personal injuries.

Where to Sue
Courts must have personal jurisdiction over the person or business being sued. That means the defendant must have enough contact with the state to make it fair for him to be sued there. Generally, a court has personal jurisdiction over -- and you can sue -- a person who lives in the state or a business entity that does business in the state.
You can also sue an out-of-state defendant if he caused an auto accident in your state. Likewise, you can sue someone in your state if you serve them with the complaint and summons while they are visiting your state. You can also sue an out-of-state person or business if that person or business has a small, but significant, contact with the state -- for example, a business headquartered in a neighboring state that mails sales catalogs into your state to solicit business.

How to Sue
Select an appropriate form complaint for your situation, whether a divorce, personal injury or contract claim. Fill out the form complaint, inserting your name as plaintiff and the person or business entity you are suing as defendant. Precisely describe the events that occurred and caused you harm, where and when they took place and how you were injured. Sign and date the complaint. Fill out a form summons with the names of all parties, and ask the court clerk if there are any other local forms you need to prepare and file at the same time. Make several copies of the documents, file the originals, have the copies stamped by the clerk and pay the court filing fees.
Towards a Post-Modern Astrology
by Robert Hand

The following article is the (edited) transcript of a talk Robert Hand gave at the Astrological Conference 2005 of the British Astrological Association in York, UK.

What is post-modern?
First of all I should define the term “post-modern”. Post-modernism as the term is usually used refers to a set of philosophical movements largely arising out of contemporary French philosophy, featuring in particular the work of Jacques Derrida, post-structuralism and the philosopher and historian Michel Foucault. This is not what I refer to, because something else has been going on in astrology. Astrology has never been part of the modern world and cannot have in the same way a post-modern period. I would actually suggest that astrology is ideally suited to be both pre-modern and post-modern in the French philosophical sense. But what I refer to instead is a very real historical phenomenon in astrology, which is this: we have astrology up until about 1700, which had certain consistent patterns, ideas and principles and which had a more or less continuous tradition from something like – this date is extremely flexible – the fifth century B.C.E. Then, in the 18th century we had a very long break. Conventional historians refer to this as the Enlightenment. I prefer the term “Endarkenment,” based on what happened in astrology – it almost died. And then in the 19th century a revival began, which for most of the 19th century was a revival of a portion of the tradition that had nearly died in 1700.
But then with Alan Leo, and more recently people like Dane Rudhyar, and on another level people like the Hamburg School and Cosmobiology of Ebertin, a rather new kind of astrology began coming into existence, which it might be appropriate simply to call 20th century astrology, but I would like to call modern astrology. So what I am really going to be talking about is the question, what next? The beginning of what I will call – for lack of a better term – post-modern astrology actually happened quite a few years ago now. Two people are largely responsible for this new beginning. They are in the United States: Robert Zoller, who began studying medieval astrology in the original Latin in the 1970's, closely followed in this country [Great Britain] by the late Olivia Barclay, who began teaching her students horary directly from William Lilly’s text. In both cases what was being taught was a reborn pre-1700 or pre-modern astrology. They had tremendous impact. In the States this led to the movement of which I was a part – or am a part, but am no longer associated with the name – the project called Project Hindsight, of which I and Robert Zoller along with Robert Schmidt were founders. Subsequently Robert Zoller has gone his own way, and I have gone my own way, but the movement continues. There is also of course an extremely meaningful translation movement in Spain, and also one in Italy. So the pre-1700, pre-modern type of astrology is coming back fairly rapidly. The influence these movements have had is not quite what you might expect. Yes, there are people – and I think I can say this without offering any insult – such as Robert Zoller, who are really trying to revive completely an intact pre-modern astrology, otherwise known as traditional astrology. However, since some people regard Alan Leo’s astrology as traditional astrology, pre-modern may be a clearer term for pre-1700 styles of astrology. 
My favorite image of Robert Zoller – and believe me, I don’t think he would object to my characterizing him this way – is that he would smile, sublimely rub his hands together and say: “The old ways are the good ways!” Yet, what appears to be happening, and what I certainly align myself with, is not really a revival of traditional astrology. Rather it’s a healing of the break that occurred in the 18th century. We are not trying to do astrology exactly as it was done; rather we’re trying to recreate astrology as it would have been if it had never stopped being an active tradition. Understanding this point is very important, because it is often stated and believed that traditional astrology must not have been all that effective because it died out – almost. Surely, it is said by some, traditional astrology must have been terribly lacking, and therefore modern astrology represents an evolutionary improvement on it. This is not the case. Traditional astrology died out for reasons that are much better described as socio-political than scientific. If you want an example of what I mean I refer you to Patrick Curry’s excellent work Prophecy and Power, where he describes the process of astrology’s near death in Britain. But I assure you, that process was not limited to Britain. So, we are not doing traditional astrology, we are healing the break that occurred in the 18th century.

Traditional Astrology
First of all: What is traditional astrology? Unfortunately, about the only general characterization I can make is that of defining it as astrology pre-1700. In this respect it is like most of the rest of history. I think if you do a quick calculation you will realize that most history happened before 1700 – billions and billions of years in fact. But, other than that it’s difficult to characterize. And we have to ask, “which traditional astrology?” Hellenistic/classical? Jyotish?
Jyotish is a term I vastly prefer to Vedic, first of all because it is the actual Indian term, and secondly because (I know I am going to get some arguments about this later) there is not all that much horoscopic astrology in the Vedas. Are we talking about Arabic astrology, or – as I prefer to call it – Persian-Arabic astrology, because it’s actually much more Persian than it is Arabic? Are we talking about Latin-language medieval astrology, which is basically the same as Persian-Arabic? Or are we talking about early modern astrology – and I don’t mean Alan Leo? Early modern astrology actually consists of persons like Placidus, Morinus and Kepler, all of whom looked at the traditional astrology as they received it and believed that reform was needed. So which one are we talking about? I have bad news: all of them! We have not fully digested traditional astrologies – to use the proper term – we have not mastered the techniques. For example, the predictive techniques of Hellenistic astrology, and even some of the predictive techniques of medieval astrology, are still not widely used or experimented with. They may turn out to be not too useful; they may also turn out to be a major break-through. I do not really know, but until we have actually systematically examined them we don’t know.

Astrology in the 20th century
But this infusion of elements from the various traditions into the modern astrological tradition represents the essence of the change from modern to post-modern astrology. Some of us have been calling this traditional astrology neo-traditional, but that is really putting the emphasis in the wrong place. It is in the ordinary-language sense of the term a post-modern astrology. Okay! What do we do with the 20th century? This is where I will demonstrate conclusively that I am not a traditionalist: we keep it! We keep its best features.
The single most important advance in 20th century astrology was the recognition that astrology actually could be used as a tool for human potential and self-actualization. There may be some of this in Jyotish, but there certainly is not any of it in Hellenistic, Arabic or Latin medieval. All three of those traditions were completely oriented towards dealing with everyday, mundane situations. But Dane Rudhyar in particular introduced a radically new way of thinking about astrology. Closely related to his astrology is the idea of psychological astrology. I do not share the contempt that many traditionalists feel for psychological astrology. I think it is extraordinarily important. My only criticism of it is that in the hands of some of its less competent practitioners it has been an extremely mushy sort of astrology where anything can be made to mean anything, depending on the emotional frame of mind of the client and the astrologer. The language of 20th century astrology as a language tends to be imprecise, vague, inarticulate and unclear. But the goals of 20th century astrology are absolutely commendable. Why did the tradition – at least the branches of it I have mentioned – not deal with the issue of self-actualization? They did have the tools, if they had had the philosophical reasons for doing it. The reason is very simple: In both the Islamic world and the Christian world there was something else that governed that process, namely religion, from which astrology was largely disconnected. Both Islamic and Christian astrologers had to constantly explain why astrology did not interfere with religion, did not impinge on the same issues, nor did it contradict religion; therefore it was okay. There is in fact in Western astrology an underground tradition of mitigation much as there is in Indian astrology, but it has always been an underground tradition. It is called magic.
But we have had to pretend that we, as astrologers, are not connected to it in order to survive in what has been until recently a Christian – or Islamic – world. So, 20th century astrology we keep, in so far as it speaks to the needs of modern humanity. Every astrology deals with the culture in which it lives, and if it doesn’t it’s irrelevant. Twentieth century astrology should be kept in so far as it works, makes sense and is clear enough to tell if it is working. To illustrate what I mean by “clear enough” (or rather not clear enough) here is a style of prediction (admittedly an absurd one) that illustrates the problem: “In the next year, events will happen.” Now you may laugh, but I have heard astrological predictions made that were just about that “clear.” I also think there are some other tools from 20th century astrology that are well worth keeping, such as the use of mid-points and the 90-degree dial, and there are a number of schools that have contributed extremely valuable ideas too numerous to mention. Twentieth century astrology is not to be thrown out, but here is what it does need: First of all, the question often arises: is traditional astrology – in any form – better than modern astrology? And of course the traditionalists immediately say: “Why, of course!” They say that it is more effective. This is probably true, but not for the reasons you may think. Hindu astrology, Arabic astrology and medieval astrology, and for that matter Hellenistic astrology, all have a much more elaborate language than 20th century astrology. Simply put: these languages, as languages, are more articulate; these early astrologies can say things clearly, and insofar as they can say things clearly you can tell whether what they say is true or not.
One of the strangely backhanded compliments I used to pay to the Cosmobiology of the Ebertin system was that I liked it because it is one of the few systems of astrology where I could tell when it was not working. I am going to make what some may think a rather outrageous statement which has been contradicted by others a number of times during this Conference: I have not ever cast a chart that was wrong that worked better than the one that was right. I am not saying it cannot happen, but it has never happened to me. And the reason is that I have always placed great stress on the articulation of the astrological language, so that I could tell if the statement I was making was right or wrong. I remember a lecture by Geoffrey Dean in which he was lecturing on a chart that was supposed to be the chart of Petula Clark – Geoffrey does not remember this incident but I do – with a very close conjunction of Mars and Neptune. Geoffrey spouted modernistic astrological garbage about the unselfish, altruistic nature of this person and so on. I said to myself, “This is crazy, this chart can’t possibly be right.” And then after he had the entire audience convinced, he announced that it was the chart of Charles Manson. By way of a side-note, the astro-carto-graphic map of Charles Manson had the line of that Mars-Neptune conjunction passing through the deserts just to the east of Los Angeles. So when he finally admitted that it was Charles Manson I said to myself, “I knew it couldn’t be Petula Clark!” They were born a year apart on the same date. So that is why he was able to suck everybody in. But the problem was that he was taking advantage of the inarticulate nature of much modern 20th century astrological language. This must change! 
The object of self-actualization, psychological exploration, or even enlightenment, through astrology is a perfectly noble goal, a noble task. But it must be done in a clear language, or it will be of no use at all, except as entertainment, which I suppose some will believe is perfectly fine. Modern astrology has had one really tragic flaw in addition to its inarticulate language: its complete lack of a philosophical foundation rooted in any coherent philosophical or spiritual tradition of the world, except in the case of Jyotish. Jyotish does have a coherent philosophical and spiritual background derived from the religions of India. For those of you that are not aware of it, Western astrology (by which I really mean Middle Eastern astrology, we just happen to be practising the Western branch of it) also has a firm foundation in philosophy derived from persons with names such as Plato, Pythagoras, Aristotle, Plotinus and has roots in other philosophies that we cannot attribute to any actual person, such as the philosophies of Hermes. We have to go back to these philosophies (as I hope to demonstrate in part tomorrow) because all modern Western philosophy – with the possible and likely exception of phenomenology – is born out of an idea-set that occurs after the West made a fork on the road of philosophy which rendered astrology inherently impossible. So we have to go back to the philosophies in which astrology is not inherently impossible, reestablish the roots, modernize our understanding of those philosophies, and bring them forward into the 20th or 21st century. We have a brilliant example of this in the work of the late John Addey, whose astrology was rooted firmly in Neoplatonism but pointed toward a radical new kind of astrology. 
But we do not want to use 19th century mystical “philosophies” such as those of Mme. Blavatsky, Alice Bailey and so forth, in part because they are not in fact foundational philosophies. What they taught was a modernized and obscured form of Neoplatonism. I am not saying their works are of no value, but for philosophy let us go back to the real thing. Alfred North Whitehead may also be a philosopher who has something to say to us, but until I can get him to say something to me that I truly understand – those of you that have tried to read Whitehead will know what I mean – I cannot tell what he may have to offer.

Astrology lacks a theoretical foundation

The reason for having proper philosophical foundations is that it actually leads to the creation of, in some form, what scientists keep telling us astrology lacks: a theoretical foundation. Now you have to understand that a theoretical foundation does not necessarily need to be correct at the outset. It needs to be correctable. Astrological theory, even based on a reconnection with ancient philosophy, will not be a theory that science will recognize as theory, but it will in fact function as a scientific theory. It just will not function according to the scientific paradigm – any scientific paradigm, at least any paradigm constituted according to the larger meta-paradigm of modern science. And let me explain why: The contemporary scientific paradigm, with the exception of a few areas of quantum physics, largely ignores the role of consciousness in the Universe. Putting it in its boldest form: Life and consciousness, according to the prevailing version of the modern scientific paradigm, are epiphenomena of the laws of inanimate nature. An epiphenomenon is a superficial, second-level phenomenon, not central to the whole system. In other words, we are trivial and unimportant, the world is essentially meaningless, and it is just grinding off to a stupid, meaningless, pointless end. 
Of course, you have to ask the question: “Meaningless” to whom? You can’t have meaninglessness until there is somebody to whom something is meaningless. Unfortunately, when the 19th century threw God out of science, they made the whole meaningful/meaningless issue undefinable, irrelevant and academic. We are, according to one modern “scientific” writer, like bacteria living on a dust particle in a sneeze. That was someone’s description of life in the Big Bang. We are said to live on a planet that goes around a minor star in a minor galaxy in an infinitely huge Universe. To whom are we minor? If there is no aliveness in the Universe, to whom are we unimportant, and since when was sheer magnitude the criterion of excellence? Billions and billions of stars, the late Carl Sagan used to say, as if putting zeroes after a one made things more significant and more meaningful. In my humble opinion (which means: “arrogant statement follows”) astrology makes no sense unless we postulate that life, mind and consciousness are central to the functioning of the Universe, and precede, in some meaningful way, matter and energy, or at least are coeval with, that is to say, coeternal with them. Something is talking to us, and things that talk must be alive and conscious. The idea that life and consciousness are epiphenomenal is the exact reverse of the astrological world view. This is why we are heretics. And by all means, let us remain so! But I think many astrologers do not draw out the logical implications of what they do. 
They are astrologers when they are in the counselling room, and they are 21st-century-ordinary-every-day-mechanist-materialists when they do anything else – I find that this is true less and less as time goes on, which is most gratifying. But we have to recognize, yes indeed, that astrology, and the metaphysics of science – also known as scientism – are indeed incompatible. Thank God! Now another aspect of the new kind of astrology: I totally, absolutely and overwhelmingly reject any form of astro-fundamentalism. We can leave that for certain insane fanatics in the Jewish, Islamic and Christian religious communities. There is no ancient astrology that must be completely recovered because it, alone, is completely, absolutely and positively true for all time. We recover them to the best of our ability, so we can learn from them, but they are not necessarily “truer” than what we do. Our ancestors were human beings, just like us. Do I think that astrology came as the result of a divine revelation in the biblical sense? Maybe in some other sense yes, but not in that narrow biblical sense. It was not revealed whole, entire, perfect and complete to anyone at any time. It may be complete and perfect eventually, but only because we have done the work of uncovering the revelation. But it is probably never going to happen quite that way. I just do not want to exclude the possibility.

Balancing modernist and traditionalist attitudes

So we need to strike a balance between the modernist’s attitude and the traditionalist’s attitude. By the way, I just want to make one thing clear: If I seem to be taking a slam only at certain members of the Jyotish/Vedic community in talking about astrological fundamentalism, believe me, I am not. 
There are also Lilly fundamentalists, Hellenistic fundamentalists, Arabic fundamentalists… You take any system, as long as it isn’t modern, and you will find somebody who believes in it as a fundamentalist, or – to use a term more fashionable in religious circles – a literalist, one who believes that the books are literally and completely true. The modernist attitude believes that only the most recent work is any good, and the traditionalist attitude thinks that anything modern is hopelessly flawed and corrupt. Without qualification – these positions are both wrong. And if you disagree with me, fine. But that is my position, take it or leave it! (I have a Scorpio Moon). Post-modern astrology must recognize that astrology is a learned art. Not a learned art with one syllable, but a learned art, with ‘learned’ in two syllables. And while we invite the enthusiast and the amateur to participate, it also has to be recognized that the amateur astrologer will have a role similar to amateur scientists – far from irrelevant, but working in a limited way. Astrology is no more entertainment than psychology. Which is to say that both can create entertaining diversions, but that is neither their purpose nor their value. To that end, as I am sure many are quite aware, there has begun a movement towards the creation of academic institutions within astrology. One of these is Kepler College. It teaches within a genuine liberal arts program that happens to be – this is the unofficial motto of the school – filtered “through the lens of astrology,” but it is a liberal arts degree nonetheless. What do I teach there? I teach ancient history, medieval history and Latin. But of course, the Latin is not all Cicero. We read John of Seville in the course, and people like him. (We do not read Manilius, because nobody can read Manilius.) 
Here in Britain there is of course the Bath Spa curriculum, the Southampton program, the University of Canterbury program, and there are probably others that I am not aware of. But this is a thrilling development, because we have got to the point where the next level of astrology must be carried on in environments similar to that of universities, where there are conferences held among professionals in the field, who report work that is very important, but which is too technical or esoteric (in the original meaning of the word) for a general astrological conference. For example, we have the varied styles of medieval primary directions, their origin, significance, application and so forth, definitely a subject of interest only to specialists. Just to give you one idea of something I have noticed, I am convinced that the work attributed to Masha Allah is the work of two people, because there are two different kinds of astrology going on, but I have not figured out how to rigorously document this yet. This is not the sort of topic that I think would draw a very large crowd at even an AA conference, and the AA conferences are a good deal more sophisticated than many other conferences. The alternative? At Astrolabe, the company that I used to work with, we had a cartoon on the wall which showed a rather generously endowed, portly woman standing behind a lectern with a very stern and severe expression on her face, saying: “We will not rest until astrology has found its proper place in Academia”, and on the wall behind her is a sign that says: “Next Week: Astrology and Your Pet”. Now, there is nothing wrong with that, but these are two different aspects of astrology. The “Astrology and Your Pet” aspect of astrology has been going on for quite a long time, and I do not suggest that it should cease. 
I am now in my fourth year of graduate school, so I am beginning to get a sense of what goes on in Academia that is good and useful to the further evolution of astrology. For the most part we do not yet have that, but we are getting it. So this academization, if I can make a word, the making of astrology more academic, is an extremely important step. At this point I have to make a very important statement about this process. We are not trying to make astrology respectable. We are trying to force astrology to get its internal act together. That is not the same thing. We will not convince academics that we belong in academia because we say so; it is not even clear that we will ever be integrated with academia. It does not matter. It is for ourselves, not for our respectability, but for the efficacy of our art. We will be more effective if we do these things. We need international libraries of astrology texts, either in book form or online. We need to find all those old astrological journals rotting away on our shelves, scan them and make them available to researchers. I do not know how many of you are aware of this, but on the internet there is something called “Early English Books Online”. Every book printed in England before 1700 that has survived is in that database. You can read them or download them, as you choose. We need that exact same thing in astrology. We need all these journals, all the old texts, everything to be accessible to researchers. At the moment a great deal of astrology is on the verge of disappearing for all time. We need modern researchers documenting what happened in the 20th century, who said what, what they meant, explicating their ideas, talking about their lives; we need to preserve all of this. 
Ironically, the most endangered single part of the history of astrology is the history of 20th century astrology. It is right on the verge of being forgotten already. So this is why this increasing movement toward academia is required, not so that we can be respectable, but for our own use. Here is the key point: we will have succeeded in creating post-modern astrology to the degree that our astrology is completely continuous, historically, with all the astrologies of the past. It does not mean we use all their techniques, does not mean we use all their elements, but that we can at some point refer to our tradition as a completely restored and continuous tradition. Post-modern astrology will not simply be going back to William Lilly, or to Bonatti, or to Vettius Valens, Varaha Mihira, or anyone you choose. But they will be in there, their works will be known, evaluated and employed where appropriate. And appropriateness will be determined in terms of our modern needs – excuse me – post-modern needs. Now I could make a long list of things I might like to change, technically speaking, in astrology, but I have to agree with Geoffrey Cornelius in his talk yesterday that improved technique in the way it is usually understood is not what we are trying to do here. In chemistry, for example, if you use the wrong technique in analysing an unknown substance, you will not find out what the substance is. Technique in chemistry is an absolutely necessary set of procedures designed to achieve a particular result. But technique in astrology is actually the articulation of language, not merely a set of proper procedures. Let me give you a concrete example. If you pick up any traditional astrology book that is at all influenced by Ptolemy, you will find that it poses a collection of questions to be answered. Is the nativity viable? That is to say, is the entity born in this nativity going to live? What is the wealth and rank of the parents? What about brothers and sisters? 
What about money, career, children? All of these constitute a standard set of what I refer to as the Ptolemaic questions. These Ptolemaic questions use very definite techniques for answering the questions. But the issue here is not whether these techniques are “correct,” the issue is whether these techniques are clearly articulated. Some of these questions we will not choose to ask; for example, we will probably continue not to say too much about the nature and manner of one’s death. We might, however, become a bit more articulate about saying: these are the areas of your life you will have to watch out for, that are dangerous, and maybe these are the times in which you will have to watch out for them. But I have demonstrated to my satisfaction one thing about this and similar things in astrology. They are, to use the philosophical term, contingent. The moment of your death has not yet been written unless it is to occur very soon due to circumstances that are no longer changeable. All of us could have a number of times in which we might die. But in the ancient world things that can be prevented now could not be prevented then. So we will look at times that may be dangerous, not times of death, or whatever. But the issue here is, what do we do to answer a question, and is an answer to the question clear enough so that we can tell if it is right or wrong?

Fate versus free will

There is the wonderful problem of fate versus free will. I actually have arrived at an answer to this, which I will briefly outline. I found the answer in one of the Stobaeus fragments of the Corpus Hermeticum, in which Hermes is talking to his disciple Tat, as the name is usually transliterated. Tat asks: “Tell me again about fate, providence and necessity”. 
And Hermes after much intermediate material ends up with a statement along the lines of, “Fate concerns the body, necessity concerns that part of the mind that works only with the body, and providence is concerned with the mind that is fully conscious.” What this statement makes clear is that there is no single thing called fate. There is a fate that is due to our being material beings of a certain species. No matter how hard you try, no amount of free will will ever convert you into a dog or a cat in this lifetime. You cannot fly without a plane – unless (perhaps) you are a certain kind of meditator – you cannot walk through walls without a door, you cannot see through walls without a window; you are limited by natural law. That is the fundamental meaning of fate in ancient Greek philosophy. It is physical law. Then there is the really big part of fate; there is the fate due to ignorance, called necessity. We get ourselves caught up in situations where we simply cannot conceive of an alternative, because we have all these considerations about what must be so and what must not be so, and we are determined by the consequences of past decisions and current stupidities. That is most of what our fate consists of. It has nothing to do with the planets! And then finally there is the other fate that is absolutely irrevocable, called providence. You have no choice but to be who you are. Your choice is to be who you are at the highest possible level or not. And I would go so far as to say that who you really are preexists who you are at the present moment, and it is pulling you forward to itself, and that pull is inevitable. Your getting all the way there, becoming a fully realized being, is not inevitable. 
Circumstances, accidents and of course the ever-present stupidity, or unconsciousness – whatever you want to call it – will all in varying degrees prevent us from getting to that perfect self-realization. But it is not written in the stars whether we will, or will not, ever be fully realized. What is written in the stars is how to do it – if we could but read the chart from that point of view. This is one of the things the 20th century has taught us, but the 20th century has been a little weak on how to do it. Whereas, I have found techniques in Greek and Medieval astrology that actually suggest how it can be done, how it can be read in the chart. Post-modern astrology will not be a fatalistic fortune-telling astrology; it will be an astrology of enlightenment, self-realization, self-actualization and consciousness, that just happens to include all of the rest of astrology. We evaluate, we judge, and we integrate. That is what I see coming. Thank you very much. Copyright by Robert Hand 2006

About Robert Hand

Robert Hand is one of the world's most famous and renowned astrologers. He takes a special interest in the philosophical dimensions of astrology and is quite dedicated to computer programming. Currently he is fully engaged with Arhat Media as an editor, translator and publisher of ancient astrological writings. Rob Hand lives in Las Vegas, Nevada, USA. Rob is an honor graduate of Brandeis University, with honors in history, and went on to graduate work in the History of Science at Princeton. Rob began an astrology practice in 1972 and as success came, he began traveling worldwide as a full-time professional astrologer. In 2013, he was designated a doctor of philosophy (Ph.D.) by The Catholic University of America.
Finally! We know where the NASA satellite landed

Even after the satellite came down, NASA could merely confirm that it had re-entered, most likely within 20 minutes of 12:16 a.m. EDT, and probably over the Pacific Ocean. "We extend our appreciation to the Joint Space Operations Center for monitoring UARS not only this past week but also throughout its entire 20 years on orbit," Nick Johnson, NASA's chief scientist for orbital debris at NASA's Johnson Space Center in Houston, said in a statement. "This was not an easy re-entry to predict because of the natural forces acting on the satellite as its orbit decayed. Space-faring nations around the world also were monitoring the satellite’s descent in the last two hours and all the predictions were well within the range estimated by JSpOC."
Kaboom! Incredible Video of SpaceX Rocket Explosion On Launch Pad

SpaceX rocket explodes during routine test. On Thursday, September 1st, the SpaceX Falcon 9 rocket was preparing to undergo a routine pre-launch test firing. The rocket and payload were destroyed in the explosion. The rocket was supposed to carry a communications satellite to orbit. The satellite was built by an Israeli firm and contracted for Facebook. The estimated cost of the payload was $200M. Future SpaceX launches will be paused while an investigation into the cause takes place. This is not the first incident for Elon Musk’s company. Last year, another SpaceX rocket exploded during ascent. The video was posted by USLaunchReport.com. The explosion begins at 1:10 into the video.
Disneyland is preparing to nix the familiar scene on its Pirates of the Caribbean ride where captive, tied-up women are auctioned off as brides, presumably to pirates, as it will be shut down temporarily starting April 23 to make the switch. As visitors riding in small boats enter the scene currently, they come upon a sign that reads, “Auction: Take a wench for a bride,” with a line of most unhappy looking women waiting to be sold, and one voluptuous red-haired woman vamping for the crowd, under the watchful eye of an elegantly dressed auctioneer. The new animatronic scene will show the same saucy redhead, but now she’ll be a female pirate overseeing an auction of local loot, instead of the former scene in which she was the loot. A Disney spokeswoman confirmed Tuesday that the changeover will take place beginning on April 23, but she couldn’t say what other repairs or revamps might take place at the same time. Disney rides go dark regularly for scheduled maintenance, generally in the off season. Published reports have pegged the reopening date as June 7, but no official date has been given. “Disneyland needs to reflect the times, and it seems to me this is the time to change it,” blogger Dusty Sage of the Micechat.com forum said. “It’s uncomfortable even for me to see women being sold into bondage and human trafficking. I can’t even imagine what little girls think about this. It seems to me that Disney was just ahead of the curve on all this ‘Me Too’ movement, because they announced this change last summer.” Not everyone agrees with the change, though, including unofficial Disneyland historian and author David Koenig, who said “this one really irks me — but does not surprise me.” Koenig said when the topic came up last year at a Disney convention called D23, “Disney fans started booing.” “The bride auction is being removed because Disney is lily-livered,” Koenig said. “No one is really offended by animatronic pirates acting lusty. It’s in-character silliness. 
I don’t advocate gunplay, thievery, alcoholism or sexism, but I’m still able to enjoy a show in which pirates behave like pirates.” This is the second time the popular Pirates attraction has specifically been revamped to make it more in tune with current values, although it’s also been updated numerous times, for example, to add an animatronic version of the popular character of Capt. Jack Sparrow from the movie franchise. In 1997, Disney replaced a portion of the same vignette that previously showed lusty pirates chasing frightened wenches – with its implication the women would soon be assaulted. The new scene shows a pirate still chasing a woman, but she’s carrying a platter with booze that he appears to covet. “We don’t want to put anyone in a jeopardy role,” Imagineer Tony Baxter said at the time about the changes. Although some people have complained that Disneyland shouldn’t mess with tradition, over the years the park has changed to reflect changing mores. For example, a cabin fire is no longer blamed on marauding Indians. And a zaftig Aunt Jemima slave woman no longer meets people in front of a pancake house. “They are pirates, and pirates rape and pillage and plunder, but how much of that is appropriate for a theme park ride?” blogger Sage said. “The temptation is to be afraid of any change and I love tradition, but this is a case where they have to change.”

PIRATES OF THE CARIBBEAN timeline

1957: Sam McKim, one of Disney’s early imagineers, completed concept paintings and sketches for New Orleans Square, according to Tom Morris, a Disney imagineer from 1979 until 2016. Those plans included restaurants, shops and a small wax museum about pirates.

1961: Disneyland founder Walt Disney assigns Imagineer Marc Davis to design a pirate-themed, walk-through wax museum for his 6-year-old theme park, but then the new technology of audio-animatronics enables engineers to think beyond mere wax dummies.

Dec. 15, 1966: Walt Disney dies before he could see Pirates come to life, the same year that New Orleans Square opened.

March 18, 1967: The technologically advanced animatronic “Pirates of the Caribbean” opens in New Orleans Square, and instantly becomes an icon of the park.

March 7, 1997: The ride is redesigned to incorporate new scenes and updated technology for its 30th anniversary.

July 9, 2003: The movie “Pirates of the Caribbean: The Curse of the Black Pearl,” based on the Disneyland ride, is released.

June 6, 2006: Disneyland opens a redesigned ride, incorporating the voices and likenesses of the movie’s actors Johnny Depp, Bill Nighy and Geoffrey Rush and a revised music track.

(From the Orange County Register archive)
Canterbury Park Picks & Analysis — Thursday, July 27, 2017

It’s a Thursday, which means it’s time for some nighttime action at Canterbury Park! Canterbury’s one of the best nighttime tracks in the country, and we’re happy to have Dave Handeland (@SuperStatsDave) providing FREE picks & analysis of the ten-race card. The first race is scheduled for 6:40 PM CDT. Take it away, Dave!

—

After a little rain early in the week, it should shape up to be a beautiful night in Minnesota on Thursday. Once again the early pick 4 consists of a trio of turf races, including an extremely talented group running in race 4. I’ll be doing the prerace show from Canterbury Park with @MrB_CBYanalyst, so tune into the prerace show around 6 PM CDT and check it out. Last year we teamed up to give six top pick winners between us on the eight-race card (including nailing the final five races).

Season Stats:
Top Pick winners: 14/49 (29%)
One of Top 3 picks winning: 30/49 (61%)

Race 1 (QH’s) 6-7-4
#6 Chicalota will try to go 2-2 and Nik Goodwin wins at 26% for R. Allen Hybsha
#7 Valiant Story has been tough in last two starts but the 8/5 ML is risky
#4 Apparent Danger won last out. The Oscar Delgado/Tomey Swan pairing is 2-2

Race 2 (QH’s) 3-1-2
#3 Bileve is spellcheck’s nightmare but won a 14K allowance and now drops to claiming ranks
#1 Mary’s Ice Dancer is making 2nd start off layoff, much easier field than last race
#2 Hiclass Man has the Delgado/Swan pairing. If they win race 1, then watch out

Race 3 (turf) 3-4-5
#3 Jam N Addy is going to be the pacesetter in this one-mile turf race in a six-horse field that lacks speed. If Andrew Ramgeet can get away and run alone I think he’ll just keep going. He’s lost to a pair of these horses when the races have been washed off the turf, so now he’s going to get his shot at redemption if it stays dry.
#4 More on Tap drops down to the 20K claiming ranks where he won two races back versus a few of these when the race came off the turf. 
The 6-1 ML is a pretty nice price in a race without any massive favorites. Leslie Mawing takes over for Hugo Sanchez, which is interesting due to Mawing and trainer Tim Padilla not having joined forces before. Will be closing down the stretch.
#5 Vanderbilt Beach is a MN Bred taking on all comers here. The ML favorite at 5/2 might be a little aggressive as the runner appears better as a dirt runner than on the turf. Also, trainer Tony Rengstorf is 0-18 on the Canterbury turf this season.

Race 4 (turf) 5-3-2
#5 Hay Dakota won the 100K Mystic Lake Mile a little over a month ago, which was his 2nd start off the layoff. Denny Velazquez knows when to time his runs with this Grade 3 winner, and with 4 wins in 5 starts at this distance, this seems like the ideal spot to notch another win. Despite only seven entries, this is a fantastic 35K optional claiming race.
#3 Patriots Rule won a 150K race at Del Mar at this time last year and has beaten Hay Dakota already once this meet. Robertino Diodoro trains this one along with Pilot House, and it appears that Pilot House will be sent to keep Majestic Pride company and thus soften it up so Patriots Rule can close. Took part in five straight stakes races in CA between 2016 and 2017 so has faced much better.
#2 Majestic Pride was the 2016 Horse of the Meet at Canterbury and has a win and two 2nd place finishes so far this meet. The concerns here are that Quincy Hamilton will now be riding due to the Dean Butler injury and that Pilot House is entered to make it not a majestic evening. The other concern is getting caught in a crazy speed duel with Pilot House, which could leave him tired late.

Race 5 (turf) 5-8-7
#5 Datt Town was previously trained by Ian Wilkes and had a couple of solid tries down at Gulfstream Park this winter at the 75K optional claiming level, and now is 12-1 in this wide-open 12.5K claiming turf race. 
The speed numbers have plummeted since moving north from Florida, but any sort of return to that form makes her extremely tough.
#8 Battle Chic broke her maiden last time out and might be figuring this racing thing out, as her form keeps improving. Leslie Mawing will try to lead them from start to finish, and this is not going to be the most difficult group to beat. She could get very brave once again on the front end.
#7 Top Hat Wildcat has been the victim of two bad trips in her two Canterbury starts, and jockey Janine Smith will try to reverse the bad luck in this race. The connections have tried the 20K and 16K levels this summer and now drop again to try and get their picture taken. The 3-1 ML might cause this one to get a little more action at the windows than might be needed.

Race 6: 1-12-13
#1 Victory Ice is a filly who could win by 2-3 lengths or miss the board completely. She is 0-12 in her career and made her 2017 debut running 2nd at the 10K MC level, which inspired the connections to return to the MSW ranks, where she ran 3rd. Now they drop back to the 10K level to try and end the losing. Nik Goodwin will try to wire this field, which is a combined 0/49 in lifetime races with just 12 combined 2nd/3rd-place finishes among them all.
#12 Dakota Mar Lou makes her 2nd career start for the suddenly hot David Van Winkle barn. Loveberry was on board in the debut 11 days ago as she went off at 5-1 but was never involved. With the 1st-start jitters out of the way, maybe Loveberry can get her going.
#13 Caballo River is projected to be part of the early pace, and it always seems like early pace can be key in the low maiden claiming ranks, as these horses tend to not like passing. Kaitlin Bedford was near the pace with this runner a few weeks ago vs. a runner who has turned out to be pretty decent.

Race 7: 4-3-5
#4 Line of Grace is one of two Mac Robertson entries here, and Mac uses Alex Canchari this time after using Cecily Evans the previous four races.
This massive jockey upgrade, along with a horse that appears to be craving extra distance, makes this runner enticing. Canchari should be sitting off the pace early and pouncing as they hit the stretch.
#3 Flowers for Teagan is the other Robertson runner here, and when he enters multiple horses he does so expecting his entries to complete the exacta. I’ve switched my thinking on Mac this year with multiple entries, due to them being “live” when together. She broke her maiden at this distance, then tried the turf, and now returns for win #2. The 6-1 ML could be a bouquet of winning.
#5 Da Kleinen Schatzi is a grinder who, in her only dirt route ever, took on stakes company and was pretty competitive. This year she hasn’t been the speed horse that she was in 2016, so it’ll be interesting to see how Loveberry tries to win this. I have a feeling that this one will be running in the mid-pack throughout.

Race 8: 6-2-1
#6 Snoose Sasa looks to give Canchari and Robertson the Race 7/8 DD as they take on seven maidens. After three different jockeys in his 1st three races, Canchari finally hops back on, which I take as a good sign. Canchari should stalk just to the outside of the speed and look to take over as the speed quits.
#2 Spur loves to tease his fans with early speed and then throw on the ol’ E-brake mid-stretch when it looks like he cannot lose. It’s amazing, as each loss is more wild than the previous one. If the win is going to happen, it’s got to take place at either 5.5 or 5 furlongs, and this time he gets the 5.5 distance. We’ll see if Andrew Ramgeet can make Spur forget to use the mental E-brake.
#1 E O S Gary is an enticing 12-1 runner, as QH trainer Edward Ross Hardy sends out this 1st timer. My theory is that if there is ONE thing this horse should be able to do, it’s break well and show early speed. The works show that this gelding does have upside, so he is worth using in the exotics.
Race 9: 3-7-2
#3 Aparri will look to give trainer Edwin Cornier his 1st win at Canterbury this season, which is a little scary as the selection. This is a wide-open affair, and any one of the entrants has a legit shot at winning. This is the lowest level of racing Aparri has seen since 2016, when…she won! Leslie Mawing will try to make that happen again.
#7 Sajara will try to ride the confidence from winning last out as she tries to repeat here. The one problem with that is trainer Ronald Westerman is a 6% trainer when trying to win back-to-back races. I always say that Israel Hernandez seems to be in the middle of any large P3 or P4, and he did that last Sunday when he stole a race at 20-1. Izzy doesn’t get the best mounts, but he can win.
#2 Maddymax has finished in the money 7 times in 8 starts at Canterbury, so we know she loves the track. With Dean Butler out due to injury, Martin Escobar gets the mount, and Escobar is 1-42 this year at the track. That 2% is tough to hop on board with a lot of confidence behind it. She’ll be part of the early speed mix, and it would not be a shock to see her win this.

Race 10: 5-2-6
#5 Tanzen is every bit of the 4/5 ML odds, as Robertino Diodoro drops down a level after winning last week in a race that wasn’t overly contested. Andrew Ramgeet will try to be the 3rd different jockey to win with Tanzen in the last four starts, as this runner has enjoyed the Canterbury track after spending the fall and spring traveling around.
#2 Voodoo Storm was claimed during a win at the 6250 level, then won next out at the 7500 level, and followed that up with 3 straight performances in the mid-pack against much stiffer competition. Hugo Sanchez was the jockey back on 5/12 when this guy defeated Tanzen, but Tanzen got revenge in the rematch. Will be dueling Tanzen early and hoping to survive.
#6 Tour de Rock is another dropping to his lowest level in 2017 and will attempt to clean up the mess if a speed duel does develop between the top 2 picks. Jareth Loveberry has been aboard for both wins in 2017, so he has a good feel for when to push the right button.

Bets
Race 1 Pick 3: 3,4,6,7 with 3 with 2,3,4,5 $8
Race 3 Pick 4: 2,3,4,5 with 3,5 with 1,5,6,7,8,9,11 with 1,12 $56
Race 7 Pick 4: 3,4,5 with 1,2,3,6 with 1,2,3,4,6,7 with 2,5 $72

DISCLAIMER: THIS IS NOT A GAMBLING SITE. PICKS & ANALYSIS FOUND ON THIS SITE ARE MERELY OPINIONS BASED ON SUBJECTIVE ANALYSIS. PICKS DO NOT GUARANTEE ANY SUCCESSFUL OUTCOMES WHATSOEVER. PICKS ARE PROVIDED TO GUIDE YOUR STRATEGY TO PLAYING THE RACES. MATERIALS FOUND ON THIS SITE ARE IN NO WAY INTENDED TO ENCOURAGE GAMBLING. WHERE LEGAL, ALL WAGERS SHOULD BE MADE RESPONSIBLY AND ARE DONE SO AT YOUR OWN RISK.
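The dollar amounts on the multi-race tickets above follow directly from multiplying the number of selections in each leg; the totals shown are consistent with a $0.50 base wager per combination, which is an assumption on my part. A quick sketch of that arithmetic:

```python
from math import prod

def ticket_cost(legs, base=0.50):
    """Cost of a multi-race wager: one combination per way of picking a
    horse from each leg, priced at `base` dollars per combination.
    (The $0.50 base is assumed, not stated in the post.)"""
    return prod(len(leg) for leg in legs) * base

# Race 1 Pick 3: 3,4,6,7 with 3 with 2,3,4,5 -> 4*1*4 = 16 combos
print(ticket_cost([[3, 4, 6, 7], [3], [2, 3, 4, 5]]))  # 8.0

# Race 3 Pick 4 -> 4*2*7*2 = 112 combos
print(ticket_cost([[2, 3, 4, 5], [3, 5], [1, 5, 6, 7, 8, 9, 11], [1, 12]]))  # 56.0

# Race 7 Pick 4 -> 3*4*6*2 = 144 combos
print(ticket_cost([[3, 4, 5], [1, 2, 3, 6], [1, 2, 3, 4, 6, 7], [2, 5]]))  # 72.0
```

All three printed totals match the $8, $56 and $72 listed on the tickets.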
Q: R: Finding the closest match from a number of vectors

I have the following vectors:

    > X <- c(1,1,3,4)
    > a <- c(1,1,2,2)
    > b <- c(2,1,4,3)
    > c <- c(2,1,4,6)

I want to compare each element of X with the corresponding elements of a, b and c, and finally assign a class to each row of X. For example:

The first element of X is 1 and it has a match in the corresponding element of vector a, so I assign the class '1-1' (no matter which vector the match came from).
The second element of X is 1 and it also has a match (in fact three), so again the class is '1-1'.
The third element of X is 3 and it doesn't have a match, so I look for the next integer value, which is 4, and there is a 4 (in b and c). So the class should be '3-4'.
The fourth element of X is 4 and it doesn't have a match. There is also no 5 (the next integer), so I look for the previous integer, which is 3, and there is a 3. So the class should be '4-3'.

Actually I have thousands of rows for each vector and I have to do this for each row. Any suggestion to do it in a less complicated way? I would prefer to use base functions of R.

A: Based on rbatt's comment and answer I realized my original answer was quite lacking. Here's a redo...

    match_nearest <- function( x, table ) {
      dist <- x - table
      tgt <- which( dist < 0, arr.ind=TRUE, useNames=F )
      dist[tgt] <- abs( dist[tgt] + .5 )
      table[ cbind( seq_along(x), max.col( -dist, ties.method="first" ) ) ]
    }

    X <- c(1,1,3,4)
    a <- c(1,1,2,2)
    b <- c(2,1,4,3)
    c <- c(2,1,4,6)

    paste(X, match_nearest(X, cbind(a,b,c) ), sep="-")
    ## [1] "1-1" "1-1" "3-4" "4-3"

Compared to the original answer and rbatt's, we find neither was correct!

    set.seed(1)
    X <- rbinom(n=1E4, size=10, prob=0.5)
    a <- rbinom(n=1E4, size=10, prob=0.5)
    b <- rbinom(n=1E4, size=10, prob=0.5)
    c <- rbinom(n=1E4, size=10, prob=0.5)

    T <- current_solution(X,a,b,c)
    R <- rbatt_solution(X,a,b,c)

    all.equal( T, R )
    ## [1] "195 string mismatches"

    # Look at mismatched rows...
    mismatch <- head( which( T != R ) )
    cbind(X,a,b,c)[mismatch,]
    ##      X a b c
    ## [1,] 4 6 3 3
    ## [2,] 5 7 4 7
    ## [3,] 5 8 3 9
    ## [4,] 5 7 7 4
    ## [5,] 4 6 3 7
    ## [6,] 5 7 4 2

    T[mismatch]
    ## [1] "4-3" "5-4" "5-3" "5-4" "4-3" "5-4"

    R[mismatch]
    ## [1] "4-6" "5-7" "5-8" "5-7" "4-6" "5-7"

and needlessly slow...

    library(microbenchmark)
    bm <- microbenchmark( current_solution(X,a,b,c),
                          previous_solution(X,a,b,c),
                          rbatt_solution(X,a,b,c) )
    print(bm, order="median")
    ## Unit: milliseconds
    ##                           expr    min     lq  median      uq    max neval
    ##   current_solution(X, a, b, c)  7.088  7.298   7.996   8.268  38.25   100
    ##     rbatt_solution(X, a, b, c) 33.920 38.236  46.524  53.441  85.50   100
    ## previous_solution(X, a, b, c) 83.082 93.869 101.997 115.961 135.98   100

Looks like current_solution is getting it right; but without an expected output ... Here are the functions...

    current_solution <- function(X,a,b,c) {
      paste(X, match_nearest(X, cbind(a,b,c) ), sep="-")
    }

    # DO NOT USE... it is wrong!
    previous_solution <- function(X,a,b,c) {
      dat <- rbind(X,a,b,c)
      v <- apply(dat,2, function(v) {
        v2 <- v[1] - v
        v2[v2<0] <- abs( v2[v2<0]) - 1
        v[ which.min( v2[-1] ) + 1 ]
      })
      paste("X", v, sep="-")
    }

    # DO NOT USE... it is wrong!
    rbatt_solution <- function(X,a,b,c) {
      mat <- cbind(X,a,b,c)
      diff.signed <- mat[,"X"]-mat[,c("a","b","c")]
      diff.break <- abs(diff.signed) + sign(diff.signed)*0.5
      min.ind <- apply(diff.break, 1, which.min)
      ind.array <- matrix(c(1:nrow(mat),min.ind), ncol=2)
      match.value <- mat[,c("a","b","c")][ind.array]
      ref.class <- paste(X, match.value, sep="-")
      ref.class
    }
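For readers coming from outside R, the rule implemented by match_nearest (pick the nearest candidate; on a tie, prefer the larger value, which is what the +0.5 offset trick achieves) can be sketched per-element in Python. This is a cross-language illustration of the same logic, not part of the original answer:

```python
def match_nearest(x, candidates):
    """Return the candidate nearest to x; on a tie in absolute distance,
    prefer the larger candidate (mirrors the +0.5 offset in the R answer:
    values above x get a half-step discount)."""
    def score(t):
        d = x - t
        return abs(d + 0.5) if d < 0 else d
    return min(candidates, key=score)

X = [1, 1, 3, 4]
a = [1, 1, 2, 2]
b = [2, 1, 4, 3]
c = [2, 1, 4, 6]
print(["%d-%d" % (x, match_nearest(x, row)) for x, row in zip(X, zip(a, b, c))])
# ['1-1', '1-1', '3-4', '4-3']
```

The output matches the expected classes from the question: an exact match wins outright, 3 prefers the 4 over the 2, and 4 falls back to the 3 when no 5 exists.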
Amazon’s Record Breaking Fourth Quarter

Essential Retail devotes a feature article to Amazon’s fourth-quarter results and asks retail industry experts to comment on Amazon’s stellar performance. Dmitry Bagrov, Managing Director of DataArt UK, discusses the role that Amazon’s development strategy and use of data plays in its phenomenal success.

“‘Amazon has cultivated a clever evolutionary development strategy for things that work, combined with investment flexibility that allows it to advance and monetise out-of-the-box ideas,’ says Dmitry Bagrov, managing director, DataArt. ‘It has continued to successfully ‘productise’ services originally developed for internal use such as Amazon Web Services (AWS), with occasional forays into innovative, cutting-edge unknowns such as shops without queues.’

DataArt’s Bagrov observes that the company has positioned itself to own both the consumer and the consumer data. If a business sells to end-users via Amazon, they have no access to the consumer data generated from those sales. This is even the case for big hitters like Disney.

‘No one knows the full story, but it is possible that Apple has in the past forced Amazon to share information,’ he comments. ‘We saw Amazon Prime Video mysteriously disappear and reappear. What was the deal that was struck? Did Apple force Amazon to share information? The current state of play sees several large owners of user data all gathering information from different angles. Apple knows where we are. Google knows even more. Power lies in who knows the customer best. There is a battle brewing.’

Who will win the fight for ‘ownership’ of the end user? With the arrival of GDPR, the real fun will start, Bagrov argues. ‘Under new EU-driven data protection regulations, companies have, by law, to gain the customer’s consent regarding what information is shared about them.
It remains to be seen where final ownership of data lands – will it be with the user platforms Google and Apple, or the content providers like Amazon?’”
Chris Herhalt, CP24.com A Toronto man faces a charge of attempted murder in relation to the stabbing of a 13-year-old boy outside a coffee shop in the Fort York area last month. Police say that on May 3, they were called to 2 Bruyeres Mews, off of Bathurst Street, for a report of a stabbing. Emergency crews arrived to find a 13-year-old boy in life-threatening condition. He was rushed to hospital for treatment. On Monday, a 26-year-old suspect identified as Jemar Holmes surrendered to police. He was charged with attempted murder, assault with a weapon, aggravated assault and selling cannabis to a young person. He appeared in court at Old City Hall on Monday morning.
The PRE-PLANNED Financial and Economic 9/11 of 2008

2008 STOCK MARKET CRASH ENGINEERED
2008/2009 RECESSION MANUFACTURED

WHAT: A pre-planned collapse of the U.S. (and global) financial and economic systems.
WHO: The same characters who perpetrated the original 9/11 ‘production’.
WHERE: New York City & D.C., of course. Plus a sideshow in Washington state.
WHEN: The days surrounding September 11, naturally. This time it was September 15th.
HOW: Instead of painted drones, missiles with wings, and fake airplanes, they used the much more stealthy “naked short seller”.
TRIGGER EVENT: The controlled demolition of Lehman Brothers was the detonation event of the entire domino series of ‘explosions’ designed to shake the world.
WHY: To remake the economic/financial order of the world into a “PPP”.
WHY Really: Think about it. And then ask yourself, “Cui bono?” Asked another way, “Who stands to lose the most if the system isn’t collapsed?”

The 9/11 blueprint worked so magically for the cabal of world controllers that they were compelled to use virtually the same playbook. As they say, “If it ain’t broke, why fix it?”

So, what’s the real deal here? By analogy, let’s take a quick look at the 9/11 timeline of 2001 and stack it up against the new 2008 Financial “9/11”, as it began to unfold earlier this year.

The Bear Stearns collapse that began in March 2008 is analogous to the 1st World Trade Center bombing in 1993. Just a warm-up. This was preceded by a little failure back in January featuring Countrywide – the largest US mortgage lender.

The nationalization of Fannie Mae and Freddie Mac marks the beginning of the new 9/11. Both in the DC area, they were the first to come down this time. Just as they struck at the heart of the military complex, this time they went for the jugular of the national real estate market. Remember – this is a financial 9/11.
Next came this year’s version of the twin towers, Building 7 and other assorted NYC landmarks, in the form of Lehman Brothers, AIG, Merrill Lynch, as well as a rehab for Goldman Sachs and Morgan Stanley in their “new & improved” form (the soon-to-be-built Freedom Tower). Basically, they took out the whole of American investment brokerage, heh?!

And, of course, we still have Washington Mutual out there in the boonies, just like the one that “crashed” in a PA farm field. Update: WashMu is now history! As is another “little” bank by the name of Wachovia. Real bad month for “W’s”. Citibank is also close to being history.

Their MO? What else but controlled demolition? Throughout 2008, and especially the months of September and October, we have seen some of the world’s largest banks, brokerage houses, mortgage lenders, insurance companies and investment brokers go bust, as each of them fell perfectly into their own footprint faster than you can say: C O N T R O L L E D   D E M O L I T I O N ! ! !

The 700 billion dollar Bailout Plan is just like the Patriot Act, isn’t it? Only this time it’s maybe a 2 or 3 page document (in its original form) that conferred absolute authority on the Executive Branch to do just about anything they want with the taxpayer’s money. And they want it rubber-stamped now. Not tomorrow. NOW!!! Without discussion, or unnecessary congressional debate. Talk about Shock & Awe being used against the American people, and their elected representatives!?!

“The Greatest Depression” never sounded more like “Weapons of Mass Destruction”, eh?

Update: The Bail-Out Bill was passed, and is considered by those in the know to be the largest corporate-governmental theft in US history. That’s right, folks, the entire country has been ENRONned (Goodbye, 401K, IRA and Messrs. Roth & Keough!). As well as ARTHUR “Big Bad Accountant” ANDERSENned (So long Social Security, Medicare, Medicaid, and so on and so forth!).
Now we know we can expect further gyrations, panics and precipitous declines in the market and elsewhere, just as we had anthrax attacks in the Capital, beltway snipers in Maryland in October of ’02, the Bali bombings of 2002 & 2005, the 3/11/04 train bombings in Madrid, and the 7/7/05 bombings in London. Not to mention the 50+ other synthetic terror events staged throughout the world to enforce compliance and create distraction, as well as to plant numerous false flags with which to further divide & conquer, and maintain the state of perpetual war.

The current spate of financial terrorism will continue to take down (& over) many of the key global and national institutions around the planet, with the explicit goal of consolidation and centralization of worldwide economic power and control.

Update: The first half of September, as well as the month of October ’08, will go down in history as the Crash of the Millennium. Every single day brought news of either a mini wreck or a major breakdown. International commercial enterprises crashed and burned like meteors hitting the earth’s atmosphere during a meteor shower. Truly, an unprecedented spectacle that will touch deeply every resident of planet Earth. Let’s not forget the synthetic terror/false flag event of Mumbai, ’08.

The sudden and dramatic downfall of NY Gov. Eliot Spitzer can also now be seen in its proper light. Having left the reservation one too many times, he simply could not be trusted to go with their flow. He had their numbers, their signatures (especially their MO’s), their addresses — the whole ball of wax, as well as his own reputation to burnish. Eliot, to seal his fate, wrote a masterful exposé on the subprime mortgage fiasco/fraud that was published in the WashPo just weeks before his public humiliation. He had recently testified before Congress in fine revelatory fashion as well.

The elimination of John O’Neil, Head of Security at the WTC complex, is quite similar, except that John O.
– a great patriot – died on 9/11, having just been given the job.

To date, the most obvious and glaring example of this manipulated takedown is the case of a US Senator from New York. His letter to the FDIC contained confidential information that triggered the IndyMac bank collapse back in July. California AG Jerry Brown was called to review the entire affair after the Office of Thrift Supervision Director explicitly blamed the letter for causing a run on the bank (the 3rd largest bank failure in US history). This episode is eerily reminiscent of Larry Silverstein’s order to “Pull it.” just prior to the expertly controlled demolition of Building #7 on 9/11.

The blame game in 2001 made sure that all culpability was laid at the feet of Osama bin Laden and his merry band of 19 Islamic terrorists with box cutters in tow. BOX CUTTERS ! ! ! And a guy in an Afghan cave hooked up to a dialysis machine did it all?!? The 9/11 Truth Movement has pretty much demolished this entire narrative. It has also proved, unequivocally, that the official US 9/11 Commission Report represents the most far-fetched and implausible, ridiculous and impossible, ludicrous and laughable, absurd and asinine CONSPIRACY THEORY of all time.

This time around the subprime mortgages got the raw deal, and especially the poor souls who were unqualified to pay them, as well as the lenders who dangled the carrots as enticements to a rosy future (tent cities).

We all know about the extremely heavy volume in put options (a bet that the stock will go down) on United & American Airlines just before the 9/11 attacks/demolitions, don’t we? Seven years later, and still no report on the outcome from any of the investigative agencies involved? ? ? Well, certainly, don’t hold your breath awaiting the results of the numerous investigations surrounding several of the aforementioned business failures and other suspiciously shady transactions.
In times like this, they’re superb at starting the probes, and never, we mean NEVER, finishing them.

There is another HUGE story here that went unnoticed in 2001, and has conveniently been overlooked these past many weeks. All the world knows that when the US markets catch a cold, the rest of the world is likely to come down with pneumonia. Both the European and Asian markets suffered enormously in the wake of 9/11, as they are currently feeling the pain. However, they ain’t seen nothing yet, since the U.S.-packaged CDO’s (collateralized debt obligations) that were forced down their throats are enough to choke a Trojan Horse.

Always, these systemic breakdowns are designed to efficiently transfer wealth from the have-nots to the haves. This was especially the case with the Russian market meltdown of mid-September. Trading was actually suspended for two days after the two largest stock exchanges each lost between 20 and 25% of their value, respectively. The authorities have been whispering ever since about a plot to throttle the entire Russian economy. How do you spell – – – > E C O N O M I C   T E R R O R I S M ?

Update: As of Oct. 8, Russia had again been forced to suspend market trading on all of its major exchanges. Russian stock market value had plunged 68% since May ’08.

Just as 9/11 was perpetrated as a cover for: (i) inaugurating the War on Terror, (ii) overtly advancing the NWO regime globally (in contrast to this previously covert operation), (iii) imposing a police state (Homeland Security) in the U.S. (by gutting the US Constitution), the UK and elsewhere, (iv) dominating and securing oil/gas reserves in the Middle East and the Caucasus (to include running oil & gas pipelines through Afghanistan and stealing Iraq’s oil wealth via military invasion), (v) jump-starting the Afghan opium trade, etc., etc., etc., the ECO/FIN 9/11 of ’08 is a cover for many of these same agenda items. However, there is one little item that is particularly high on the current agenda.
And that concerns the derivatives market, which in its totality approximates somewhere between 500 trillion and one quadrillion dollars of instruments as of 2008. In fact, the subprime mortgage defaults are just a tip of the tip of the iceberg when compared to the real megillah – DERIVATIVES. This is what they’re really worried about, and having to cover for. Except this is a quadrillion dollar exposure that can’t be covered without unraveling the entire capitalistic system, and its fascist corpocracy and kleptocratic oligarchy. And then there is the teenie-weenie matter concerning the Federal Reserve, and its collection agency – the IRS. The man standing behind this curtain has a lot at stake, especially in the form of mountains of evidence that will indict, and convict, the entire system. Lots of evidence was destroyed during and after 9/11, as will happen after many of these Wall Street firms are taken over, nationalized, liquidated, merged and disappeared. The veil, however, has already been lifted. One need not look any further for the overt puppet master of this global game of funny-money monopoly than in the form of the Federal Reserve Chairman from 1987 thru 2006. Let it be known that ‘Sir’ Alan Greenspan presided over the most irresponsible and reckless monetary policy, as well as unsound and feckless fiscal oversight, in US history. He actively promoted the dismantling of all the significant and necessary regulatory controls that were put into place after the Great Depression. He likewise served as the country’s loudest cheerleader for a self-policing and laissez-faire approach to capital market management. Clearly, this “letting the fox guard the henhouse” approach led to the current catastrophe, and any meaningful and serious investigation ought to start right there. Did we mention that he was also at the helm during the Crash of ’87, as well as the dot.com Bubble Burst at the turn of the century? 
By the way, isn’t a Federal Reserve NOTE a debt owed, by the one who possesses it, to the issuer (the FED) of the note? That would mean each of us is carrying around a wallet full of debt! (That does include our credit cards, of course.) Does anyone see a pattern here?

The real lesson to be gleaned from this analysis is that events of such enormity and consequence are rarely spontaneous and unchoreographed. Especially when they happen just weeks before an era-defining presidential election. They have obviously been planning this one for a long time, and it has been fastidiously coordinated and executed to have a very definite effect and desired outcome – a permanent planetary plantation (PPP). Their execution, thus far, has been remarkably flawless.

Even for those of us who stood there on the 1st 9/11, and knew it was a fraud (read: False Flag Operation) while the buildings were coming down, this FIN/ECO plot of 2008 is exceedingly more difficult to penetrate. However, penetrate we will, until every last conspirator is sitting before the TRUTH AND RECONCILIATION COMMISSION spillin’ the beans. The ultimate and lasting effect of these inquiries will be a New World Order of our making, not theirs.

The only remaining $64,000 question will undoubtedly be, “What do we do with them after we head them off at the pass?” Keeping in mind, of course, that it was We The People who put the crazies in charge of runnin’ the asylum in the first place.

For the uninitiated, it may take quite a lot to wrap your mind around this extremely complex and convoluted plot, but, please, just be patient. As this drama plays out, the true intentions of the primary perpetrators will become manifest as they unwittingly reveal themselves by their handiwork. As Eliot Spitzer, no – Eliot Ness, nee – Sherlock Holmes once alluded to – a fingerprint inadvertently left as evidence is impossible to erase. You see, the naked short sellers, unlike the “airplanes”, are still with us.
Each one had a target to take down which they did with amazing speed and dexterity. And the myriad transactions that converged to topple their prey are all preserved somewhere, in some huge database, with multiple backups to serve as confirmation of trades of staggering amounts. AHHH! Nothing like computers, especially when they’re not confiscated like the WTC steel beams were and shipped off to China for permanent disposal. Remember – we now know the script. We know the major players involved. We know their MO: Controlled Demolition. We are able to watch the crimes being committed in real time. Each of us has now been duly notified, and empowered, to serve as a vector of dissemination of this critical information. So —–> LET’S GET BUSY!
// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "net/http/http_request_headers.h"

#include <utility>

#include "base/logging.h"
#include "base/strings/string_split.h"
#include "base/strings/string_util.h"
#include "base/strings/stringprintf.h"
#include "base/values.h"
#include "net/http/http_log_util.h"
#include "net/http/http_util.h"

namespace net {

const char HttpRequestHeaders::kGetMethod[] = "GET";

const char HttpRequestHeaders::kAcceptCharset[] = "Accept-Charset";
const char HttpRequestHeaders::kAcceptEncoding[] = "Accept-Encoding";
const char HttpRequestHeaders::kAcceptLanguage[] = "Accept-Language";
const char HttpRequestHeaders::kAuthorization[] = "Authorization";
const char HttpRequestHeaders::kCacheControl[] = "Cache-Control";
const char HttpRequestHeaders::kConnection[] = "Connection";
const char HttpRequestHeaders::kContentLength[] = "Content-Length";
const char HttpRequestHeaders::kContentType[] = "Content-Type";
const char HttpRequestHeaders::kCookie[] = "Cookie";
const char HttpRequestHeaders::kHost[] = "Host";
const char HttpRequestHeaders::kIfModifiedSince[] = "If-Modified-Since";
const char HttpRequestHeaders::kIfNoneMatch[] = "If-None-Match";
const char HttpRequestHeaders::kIfRange[] = "If-Range";
const char HttpRequestHeaders::kOrigin[] = "Origin";
const char HttpRequestHeaders::kPragma[] = "Pragma";
const char HttpRequestHeaders::kProxyAuthorization[] = "Proxy-Authorization";
const char HttpRequestHeaders::kProxyConnection[] = "Proxy-Connection";
const char HttpRequestHeaders::kRange[] = "Range";
const char HttpRequestHeaders::kReferer[] = "Referer";
const char HttpRequestHeaders::kTransferEncoding[] = "Transfer-Encoding";
const char HttpRequestHeaders::kTokenBinding[] = "Sec-Token-Binding";
const char HttpRequestHeaders::kUserAgent[] = "User-Agent";

HttpRequestHeaders::HeaderKeyValuePair::HeaderKeyValuePair() {
}
HttpRequestHeaders::HeaderKeyValuePair::HeaderKeyValuePair(
    const base::StringPiece& key, const base::StringPiece& value)
    : key(key.data(), key.size()), value(value.data(), value.size()) {
}

HttpRequestHeaders::Iterator::Iterator(const HttpRequestHeaders& headers)
    : started_(false),
      curr_(headers.headers_.begin()),
      end_(headers.headers_.end()) {}

HttpRequestHeaders::Iterator::~Iterator() {}

bool HttpRequestHeaders::Iterator::GetNext() {
  if (!started_) {
    started_ = true;
    return curr_ != end_;
  }

  if (curr_ == end_)
    return false;

  ++curr_;
  return curr_ != end_;
}

HttpRequestHeaders::HttpRequestHeaders() {}

HttpRequestHeaders::HttpRequestHeaders(const HttpRequestHeaders& other) =
    default;

HttpRequestHeaders::~HttpRequestHeaders() {}

bool HttpRequestHeaders::GetHeader(const base::StringPiece& key,
                                   std::string* out) const {
  HeaderVector::const_iterator it = FindHeader(key);
  if (it == headers_.end())
    return false;
  out->assign(it->value);
  return true;
}

void HttpRequestHeaders::Clear() {
  headers_.clear();
}

void HttpRequestHeaders::SetHeader(const base::StringPiece& key,
                                   const base::StringPiece& value) {
  DCHECK(HttpUtil::IsValidHeaderName(key.as_string()));
  // TODO(ricea): Revert this. See crbug.com/627398.
  CHECK(HttpUtil::IsValidHeaderValue(value.as_string()));
  HeaderVector::iterator it = FindHeader(key);
  if (it != headers_.end())
    it->value.assign(value.data(), value.size());
  else
    headers_.push_back(HeaderKeyValuePair(key, value));
}

void HttpRequestHeaders::SetHeaderIfMissing(const base::StringPiece& key,
                                            const base::StringPiece& value) {
  DCHECK(HttpUtil::IsValidHeaderName(key.as_string()));
  // TODO(ricea): Revert this. See crbug.com/627398.
  CHECK(HttpUtil::IsValidHeaderValue(value.as_string()));
  HeaderVector::iterator it = FindHeader(key);
  if (it == headers_.end())
    headers_.push_back(HeaderKeyValuePair(key, value));
}

void HttpRequestHeaders::RemoveHeader(const base::StringPiece& key) {
  HeaderVector::iterator it = FindHeader(key);
  if (it != headers_.end())
    headers_.erase(it);
}

void HttpRequestHeaders::AddHeaderFromString(
    const base::StringPiece& header_line) {
  DCHECK_EQ(std::string::npos, header_line.find("\r\n"))
      << "\"" << header_line << "\" contains CRLF.";

  const std::string::size_type key_end_index = header_line.find(":");
  if (key_end_index == std::string::npos) {
    LOG(DFATAL) << "\"" << header_line << "\" is missing colon delimiter.";
    return;
  }

  if (key_end_index == 0) {
    LOG(DFATAL) << "\"" << header_line << "\" is missing header key.";
    return;
  }

  const base::StringPiece header_key(header_line.data(), key_end_index);

  const std::string::size_type value_index = key_end_index + 1;
  if (value_index < header_line.size()) {
    std::string header_value(header_line.data() + value_index,
                             header_line.size() - value_index);
    std::string::const_iterator header_value_begin = header_value.begin();
    std::string::const_iterator header_value_end = header_value.end();
    HttpUtil::TrimLWS(&header_value_begin, &header_value_end);
    if (header_value_begin == header_value_end) {
      // Value was all LWS.
SetHeader(header_key, ""); } else { SetHeader(header_key, base::StringPiece(&*header_value_begin, header_value_end - header_value_begin)); } } else if (value_index == header_line.size()) { SetHeader(header_key, ""); } else { NOTREACHED(); } } void HttpRequestHeaders::AddHeadersFromString( const base::StringPiece& headers) { for (const base::StringPiece& header : base::SplitStringPieceUsingSubstr( headers, "\r\n", base::TRIM_WHITESPACE, base::SPLIT_WANT_NONEMPTY)) { AddHeaderFromString(header); } } void HttpRequestHeaders::MergeFrom(const HttpRequestHeaders& other) { for (HeaderVector::const_iterator it = other.headers_.begin(); it != other.headers_.end(); ++it ) { SetHeader(it->key, it->value); } } std::string HttpRequestHeaders::ToString() const { std::string output; for (HeaderVector::const_iterator it = headers_.begin(); it != headers_.end(); ++it) { if (!it->value.empty()) { base::StringAppendF(&output, "%s: %s\r\n", it->key.c_str(), it->value.c_str()); } else { base::StringAppendF(&output, "%s:\r\n", it->key.c_str()); } } output.append("\r\n"); return output; } std::unique_ptr<base::Value> HttpRequestHeaders::NetLogCallback( const std::string* request_line, NetLogCaptureMode capture_mode) const { std::unique_ptr<base::DictionaryValue> dict(new base::DictionaryValue()); dict->SetString("line", *request_line); base::ListValue* headers = new base::ListValue(); for (HeaderVector::const_iterator it = headers_.begin(); it != headers_.end(); ++it) { std::string log_value = ElideHeaderValueForNetLog(capture_mode, it->key, it->value); headers->AppendString( base::StringPrintf("%s: %s", it->key.c_str(), log_value.c_str())); } dict->Set("headers", headers); return std::move(dict); } // static bool HttpRequestHeaders::FromNetLogParam(const base::Value* event_param, HttpRequestHeaders* headers, std::string* request_line) { headers->Clear(); *request_line = ""; const base::DictionaryValue* dict = NULL; const base::ListValue* header_list = NULL; if (!event_param || 
!event_param->GetAsDictionary(&dict) || !dict->GetList("headers", &header_list) || !dict->GetString("line", request_line)) { return false; } for (base::ListValue::const_iterator it = header_list->begin(); it != header_list->end(); ++it) { std::string header_line; if (!(*it)->GetAsString(&header_line)) { headers->Clear(); *request_line = ""; return false; } headers->AddHeaderFromString(header_line); } return true; } HttpRequestHeaders::HeaderVector::iterator HttpRequestHeaders::FindHeader(const base::StringPiece& key) { for (HeaderVector::iterator it = headers_.begin(); it != headers_.end(); ++it) { if (base::EqualsCaseInsensitiveASCII(key, it->key)) return it; } return headers_.end(); } HttpRequestHeaders::HeaderVector::const_iterator HttpRequestHeaders::FindHeader(const base::StringPiece& key) const { for (HeaderVector::const_iterator it = headers_.begin(); it != headers_.end(); ++it) { if (base::EqualsCaseInsensitiveASCII(key, it->key)) return it; } return headers_.end(); } } // namespace net
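The parsing done by `AddHeaderFromString` — split on the first colon, reject a missing delimiter or an empty key, and trim linear whitespace around the value — can be sketched without the Chromium `base` libraries. `ParseHeaderLine` below is a hypothetical standalone analogue for illustration, not part of the Chromium API:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>

// Simplified analogue of HttpRequestHeaders::AddHeaderFromString:
// splits "Key: value" on the first colon, requires a non-empty key,
// and trims whitespace around the value. Returns false on malformed
// input instead of logging DFATAL as the Chromium code does.
bool ParseHeaderLine(const std::string& header_line,
                     std::pair<std::string, std::string>* out) {
  const std::size_t colon = header_line.find(':');
  if (colon == std::string::npos || colon == 0)
    return false;  // Missing colon delimiter or missing header key.

  std::string value = header_line.substr(colon + 1);
  const std::size_t begin = value.find_first_not_of(" \t");
  if (begin == std::string::npos) {
    value.clear();  // Value was all whitespace.
  } else {
    const std::size_t end = value.find_last_not_of(" \t");
    value = value.substr(begin, end - begin + 1);
  }

  out->first = header_line.substr(0, colon);
  out->second = value;
  return true;
}
```

For example, `ParseHeaderLine("Host:  example.com ", &kv)` would leave `{"Host", "example.com"}` in `kv`, while a line with no colon is rejected, mirroring the `LOG(DFATAL)` branches above.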
---
abstract: 'Calculations of $\eta$ $\to$ $\pi^0\pi^0\gamma\gamma$ decay in Generalized chiral perturbation theory are presented. Tree level and next-to-leading corrections are included. Sensitivity to violation of the Standard counting is discussed.'
---

\
Marián Kolesár, Jiří Novotný\
*[Charles University, Faculty of Mathematics and Physics, V Holešovičkách 2, 18000 Praha 8, Czech Republic]{}*

Introduction
============

The $\eta(p\,) \to \pi^0(p_1)\,\pi^0(p_2)\,\gamma(k)\,\gamma(k')$ process is a rare decay which has recently been studied by several authors in the context of Standard chiral perturbation theory (S$\chi$PT), namely at the lowest order by Knöchlein, Scherer and Drechsel [@Drechsel] and to next-to-leading order by Bellucci and Isidori [@Belucci] and Ametller et al. [@Ametller]. The experimental interest in this process comes from the anticipated large number of $\eta$'s to be produced at various facilities.[^1]

The goal of our computations is to add the result for the next-to-leading order in Generalized chiral perturbation theory (G$\chi$PT). The motivation is that one of the important contributions involves the $\eta\,\pi \to \eta\,\pi$ off-shell vertex, which is very sensitive to the violation of the Standard scheme; this decay therefore provides a possibility of its eventual observation. We have completed the calculations at the tree level and added 1PI one loop corrections, corrections to the $\eta\,\pi \to \eta\,\pi$ vertex and phenomenological corrections to the resonant contribution. We would like to present these preliminary results in this paper.
Kinematics and parameters
=========================

The amplitude of the process can be defined as $$\langle \pi ^0(p_1)\pi ^0(p_2)\gamma (k,\epsilon )\gamma (k^{^{\prime }},\epsilon ^{^{\prime }})_{\rm out}|\eta (p)_{\rm in}\rangle =i(2\pi )^4\delta ^{(4)}(P_f-p){\cal M}_{fi}.$$ In the square of the amplitude summed over the polarizations, $ \overline{|{\cal M}_{fi}|^2}=\sum_{\rm pol.}|{\cal M}_{fi}|^2 $, we integrated out all of the independent Lorentz invariants except the diphoton energy squared $$s_{\gamma\gamma} = (\:k+k')^2 ,\quad 0 <\, s_{\gamma\gamma} \leq\, (M_{\eta}-2M_{\pi})^2.$$ Our goal is to calculate the partial decay width ${\textrm{d}}\Gamma$ of the $\eta$ particle as a function of the diphoton energy squared $s_{\gamma\gamma}$.

At the lowest order, the S$\chi$PT amplitude does not depend on any unknown free parameters. In contrast, there are two free parameters controlling the violation of the Standard picture in the Generalized scheme. We have chosen them as $$r\ =\ \frac{m_s}{\hat{m}}\,,\quad X_{\rm GOR}\ =\ \frac{2B\hat{m}}{M_{\pi}^2}$$ and their ranges are $r \sim r_1 - r_2 \sim 6 - 26\, ,\ 0\ \leq\ X_{\rm GOR}\ \leq\ 1$. We use the abbreviations $\hat{m}=(m_u+m_d)/2$, $r_1=2 M_K/M_{\pi}-1$ and $r_2=2 M_K^2/M_{\pi}^2-1$. The Standard values of these parameters are $r=r_2$ and $X_{\rm GOR}=1$.

Tree level
==========

At the $O(p^4)$ tree level, the amplitude has two contributions, one with a pion and one with an eta propagator. The first one is resonant, the '$\pi^0$-pole'; the other, the '$\eta$-tail', is not. The Standard values of the contributions to the partial decay rate and the maximum possible violation of the Standard counting ($r=r_1,X_{\rm GOR}=0$) are shown in Fig. \[graph4\]. The pole of the resonant contribution at $s_{\gamma\gamma}=M_{\pi}^2\,\sim\,0.06M_{\eta}^2$ is clearly visible. While in the Standard case it is fully dominant, in the Generalized scheme the $\eta$-tail could be dominant in the whole region $s_{\gamma\gamma}>0.11M_{\eta}^2$.
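For orientation, the limiting values $r_1$ and $r_2$ quoted above can be checked numerically, assuming the physical masses $M_K \simeq 495$ MeV and $M_\pi \simeq 138$ MeV (which are not stated in the text):

$$r_1 = \frac{2M_K}{M_{\pi}} - 1 \simeq \frac{2\times 495}{138} - 1 \simeq 6.2\,, \qquad r_2 = \frac{2M_K^2}{M_{\pi}^2} - 1 \simeq \frac{2\times 495^2}{138^2} - 1 \simeq 24.7\,,$$

consistent with the quoted range $r \sim 6 - 26$.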
The reason can be found in the $\eta\,\pi \to \eta\,\pi$ vertex. Its contribution to the Generalized amplitude can jump up to 16 times its Standard value. The full decay width for the Standard case ($r=r_2$, $X_{\rm GOR}=1$) and two Generalized cases ($r=r_2$, $X_{\rm GOR}=0.5$ and $r=r_1$, $X_{\rm GOR}=0$) is displayed in Fig. \[graph6\]. It can be seen that even in the conservative intermediate case the change is quite interesting.

One loop corrections
====================

There are four distinct contributions at the next-to-leading order: one loop corrections to the $\pi^0$-pole and the $\eta$-tail, one particle irreducible (1PI) diagrams and counterterms. In the latter case we rely upon the results of [@Ametller]. Their estimate from vector meson dominated counterterms indicates that it causes only a slight decrease of the full decay width. Because the estimate is the same for both schemes, for our purpose of studying the differences between them we can leave it for later investigation.

More important are the corrections to the $\eta$-tail diagram. We took into account the corrections to the $\eta\,\pi \to \eta\,\pi$ vertex. These involve loop corrections and counterterms with many unknown higher order parameters. As a first approximation, we set these parameters equal to zero and estimated their effect through the remaining dependence on the renormalization scale. The scale was varied in the range from the mass of the $\eta$ to the mass of the $\rho$-meson.

We decided, similarly to [@Belucci], to correct the $\pi^0$-pole amplitude by a phenomenological parametrization of the $\eta \to 3\pi^0$ vertex and to fix the parameters from experimental $\eta \to 3\pi^0$ data. We estimated its phase by expanding the $\eta \to 3\pi^0$ one loop amplitude around the center of the Dalitz plot. In the 1PI amplitude, we neglected the suppressed kaon loops. Fig. \[graph2\] shows the one loop corrected decay widths for the Standard case and for the maximum violation of the Standard scheme.
The dependence on the renormalization scale is used to estimate the uncertainty in the unknown higher order coupling constants. We can see that the scale dependence is small in the Standard counting and still moderate in the Generalized variant. In the case of the maximum violation of S$\chi$PT, the difference is large enough not to be washed out by the uncertainty. However, in the conservative case $r=r_2$, $X_{\rm GOR}=0.5$ this is not true and the promising results from the tree level are lost.

\[graph1\]

Conclusion
==========

We have analyzed the $\eta\to\pi^0\pi^0\gamma\gamma$ decay to the next-to-leading order of chiral perturbation theory in both of its variants. The tree level results are promising; the sensitivity to the change in the parameters controlling the violation of the Standard $\chi$PT is considerable. At the one loop level, we tried to estimate the uncertainty in the higher order coupling constants in the crucial $\eta\,\pi \to \eta\,\pi$ vertex through their dependence on the renormalization scale. Although for a large violation of the Standard case the difference is preserved, for the more realistic conservative case the outcome is not satisfactory.

We would like to stress that these results are preliminary and there are several ways to deal with the unknown order parameters. One of them is to take into account the vector mesons, similarly to the counterterm estimate in [@Ametller]. Another way is to treat the whole $\chi$PT expansion differently, with more caution, as developed in [@Stern]. This approach, called 'resummed' $\chi$PT, could provide results similar to the tree level case even when the one loop corrections are included.

[This work was supported by program 'Research Centers' (project number LN00A006) of the Ministry of Education of the Czech Republic.]{}

[5]{} G. Knöchlein, S. Scherer, D. Drechsel: Phys. Rev. D [**53**]{} [1996]{} 3634–3642. S. Bellucci, G. Isidori: Phys. Lett. B [**405**]{} [1997]{} 334–340. Ll. Ametller, J. Bijnens, A.
Bramon, P. Talavera: Phys. Lett. B [**400**]{} [1997]{} 370–378. S. Bellucci: [*$\eta\to\pi^0\pi^0\gamma\gamma$ to 1-Loop in ChPT*]{}, presented at the 'Workshop on Hadron Production Cross Sections' at DAPHNE, Karlsruhe, Nov. 1-2, 1996, hep-ph/9611276. S. Descotes-Genon, N.H. Fuchs, L. Girlanda, J. Stern: [*Resumming QCD vacuum fluctuations in three-flavour Chiral Perturbation Theory*]{}, hep-ph/0311120.

[^1]: according to [@Belucci; @II], at DA$\Phi$NE about $10^8$ decays per year
Tuesday, May 18, 2010

Growing Strawberries From Seed

Next up on the list is strawberries.

Getting Strawberry Seeds

You can gently pick the seeds off the strawberry or you can slice up several strawberries and throw them into a blender. Add enough water so they are just barely covered and puree the strawberries for about 10 seconds. Wait a few minutes afterwards. The "bad" seeds and the fruit will float to the top while the good seeds will sink to the bottom. Scoop out the top layer of strawberry. Put a coffee filter in a sieve and the sieve over a bowl. Pour out the remaining seed water mixture. Rinse and repeat with the remaining seeds. Let the coffee filter dry overnight. Now you have your strawberry seeds.

Growing Seeds

Place the seeds in a folded paper towel then into a Ziploc bag. From what I read online the seeds need to be put in the freezer anywhere from 3 weeks to 4 months. A majority of the sites said 1 month so I froze the seeds for 1 month to stratify them. After a month place the seeds in lukewarm water for 1 to 3 days. This softens the outer coating. To keep the water warm place the container on top of a refrigerator or anywhere else that feels warm. We used a 12 compartment seed tray greenhouse. Fill that with seed starter. Over a sink, gently pour the water containing seeds over the soil. Try to spread it evenly. You will also have to rinse the remaining seeds out. Place the greenhouse cover on and put it in a sunny window. They say it takes 7 to 21 days for a strawberry seed to germinate so we'll keep you up to date. For the seeds being soaked in water we placed the container on top of Trevor's computer. At one point it seemed the water was getting so hot that it was evaporating. Getting too hot might have killed the seeds but we won't know for sure for another 7 to 21 days. If these seeds fail to germinate we will be replanting them. Next time instead of soaking them in water we will try scratching them with sandpaper.
About Me I'm currently trying to get into medical school. I work as an assistant manager at a retail drug store. In high school I was a writer for a website called BSBBLVD (Backstreet Boys BLVD), yes I was a teeny bopper back then. Since then I have had this weird passion for writing, especially on controversial topics. So I'm just continuing it here.
Salmonella contamination in commercial eggs and an egg production facility. Egg samples were collected from various stages of an egg processing operation and from the attached production facility. Salmonella was isolated from 72.0% of all samples collected from the laying house environment. Recovery of Salmonella from flush water, ventilation fan, egg belt, and egg collector samples was as follows (positive samples/total samples collected): 2/2, 4/4, 16/22, and 14/22, respectively. Salmonella was found on 7 of the 90 eggshells sampled before processing and 1 of 90 eggshells sampled after processing, but Salmonella was not found in the 180 eggs analyzed for internal contamination following processing. The one eggshell found positive for Salmonella following processing was detected when the pH of wash water samples was lowest (10.19). The 60 isolates from production facilities included the following Salmonella serotypes: S. agona, S. typhimurium, S. infantis, S. derby, S. heidelberg, S. california, S. montevideo, S. mbandaka, and untypable. The 22 isolates obtained from eggshells prior to processing were serotyped as S. heidelberg and S. montevideo. All five isolates obtained from eggshells after processing were serotyped as S. heidelberg. These data suggest that although the shells of about 1% of commercial eggs are contaminated with Salmonella, contamination of the internal contents of eggs with Salmonella is a rare event.
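As a consistency check (assuming the four sample types listed are the complete set of laying house environmental samples, which the abstract does not state explicitly), the site-level counts reproduce the overall isolation rate:

$$\frac{2 + 4 + 16 + 14}{2 + 4 + 22 + 22} \;=\; \frac{36}{50} \;=\; 72.0\%.$$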
Press Center Treasury Targets Money Exchange Houses for Supporting the Taliban 6/29/2012 Page Content Action Targets Terrorist Financing Linked to Hawalas WASHINGTON – The U.S. Department of the Treasury today designated two exchange houses, the Haji Khairullah Haji Sattar Money Exchange (HKHS) and the Roshan Money Exchange (RMX), which principally operate in Afghanistan and Pakistan, pursuant to the U.S. government’s terrorism sanctions authority, Executive Order (E.O.) 13224, for storing or moving money for the Taliban. Treasury is also designating the co-owners of HKHS, Haji Abdul Sattar Barakzai and Haji Khairullah Barakzai, pursuant to E.O. 13224 for donating money and providing financial services to the Taliban. Both HKHS and RMX operate as hawalas and have been used by the Taliban to facilitate money transfers in support of the Taliban’s narcotics trade and terrorist operations. Today the United Nations also added Haji Abdul Sattar Barakzai, Haji Khairullah Barakzai, HKHS and RMX to its 1988 List of individuals, groups, undertakings and entities associated with the Taliban in constituting a threat to the peace, stability and security of Afghanistan. “Today’s action, which coincides with action by the UN, is aimed at disabling two key financial hubs supporting the Taliban. Whether financial support to the Taliban moves through banks or less formal mechanisms, like the hawalas we are designating in this action, we will continue to work alongside our partners to expose and disrupt this illicit financial activity,” said Under Secretary for Terrorism and Financial Intelligence David S. Cohen. As a result of today’s action, all property in the United States or in the possession or control of U.S. persons in which HKHS, RMX, Haji Abdul Sattar Barakzai (Sattar), or Haji Khairullah Barakzai (Khairullah) have an interest is blocked, and U.S. persons are prohibited from engaging in transactions with them. 
Haji Abdul Sattar Barakzai Haji Abdul Sattar Barakzai is being designated today for owning, or controlling HKHS, and for providing financial, material, or technological support for, or financial, or other services to, or in support of, the Taliban. Sattar is a co-owner and operator of HKHS. Sattar and Khairullah have co-owned and jointly operated hawalas known as HKHS throughout Afghanistan, Pakistan, and Dubai and managed an HKHS branch in the Afghanistan-Pakistan border region. As of late 2009, Sattar and Khairullah had an equal partnership in HKHS. Sattar founded HKHS and customers chose to use HKHS in part because of Sattar’s and Khairullah’s well-known names. Sattar has donated thousands of dollars to the Taliban to support Taliban activities in Afghanistan and has distributed funds to the Taliban using his hawala. As of 2010, Sattar provided financial assistance to the Taliban. As of late 2009, Sattar provided tens of thousands of dollars to aid the Taliban’s fight against Coalition Forces in Marjah, Nad’Ali District, Helmand Province, Afghanistan, and helped to transport a Taliban member to Marjah. As of 2008, Sattar and Khairullah collected money from businessmen and distributed the funds to the Taliban using their hawala. Haji Khairullah Barakzai Haji Khairullah Barakzai is being designated today for owning or controlling HKHS, and for providing financial, material, or technological support for, or financial, or other services to, or in support of, the Taliban. Khairullah is a co-owner and operator of HKHS. As of late 2009, Khairullah and Sattar had an equal partnership in HKHS. They jointly operated hawalas known as HKHS throughout Afghanistan, Pakistan, and Dubai and managed an HKHS branch in the Afghanistan-Pakistan border region. As of early 2010, Khairullah was the head of the HKHS branch in Kabul. As of 2010, Khairullah was a hawaladar, or hawala operator, for Taliban senior leadership and provided financial assistance to the Taliban. 
Khairullah, along with his business partner Sattar, provided thousands of dollars to the Taliban to support Taliban activities in Afghanistan. As of 2008, Khairullah and Sattar collected money from businessmen and distributed the funds to the Taliban using their hawala. Haji Khairullah Haji Sattar Money Exchange (HKHS) Haji Khairullah Haji Sattar Money Exchange (HKHS) is being designated today for providing financial, material, or technological support for, or financial or other services to, or in support of, the Taliban. HKHS is co-owned by Haji Abdul Sattar Barakzai and Haji Khairullah Barakzai. Sattar and Khairullah have jointly operated money exchanges throughout Afghanistan, Pakistan, Iran, and the United Arab Emirates (UAE). Taliban leaders have used HKHS to disseminate money to Taliban shadow governors and commanders and to receive hawala transfers for the Taliban. As of 2011, HKHS was a preferred method for Taliban leadership to transfer money to Taliban commanders in Afghanistan. In late 2011, the HKHS branch in Lashkar Gah, Helmand Province, Afghanistan was used to send money to the Taliban shadow governor for Helmand Province. In mid-2011, a Taliban commander used an HKHS branch in the Afghanistan-Pakistan border region to fund fighters and operations in Afghanistan. After the Taliban deposited a significant amount of cash monthly with this HKHS branch, Taliban commanders could access the funds from any HKHS branch. Taliban personnel used HKHS in 2010 to transfer money to hawalas in Afghanistan, where operational commanders could access the funds. As of late 2009, the manager of the HKHS branch in Lashkar Gah oversaw the movement of Taliban funds through HKHS. Roshan Money Exchange Roshan Money Exchange (RMX) is being designated today for providing financial, material, or technological support for, or financial, or other services to, or in support of, the Taliban. 
RMX stores and transfers funds in support of Taliban military operations and the Taliban’s role in the Afghan narcotics trade. As of 2011, RMX was one of the primary hawalas used by Taliban officials in Helmand Province. In 2011, a senior Taliban member withdrew hundreds of thousands of dollars from an RMX branch in the Afghanistan-Pakistan border region to distribute to Taliban shadow provincial governors. To fund the Taliban’s spring offensive in 2011, the Taliban shadow governor of Helmand Province sent hundreds of thousands of dollars to RMX. Also in 2011, a Taliban member received tens of thousands of dollars from RMX to support military operations. An RMX branch in the Afghanistan-Pakistan border region also held tens of thousands of dollars to be collected by a Taliban commander. In 2010, on behalf of the Taliban shadow governor of Helmand Province, a Taliban member used RMX to send thousands of dollars to the Afghanistan-Pakistan border region. The RMX branch in Lashkar Gah, Helmand Province, has been used by the Taliban to transfer funds for operations to Helmand Province. In 2011, a Taliban sub-commander transferred tens of thousands of dollars to a Taliban commander through the RMX branch in Lashkar Gah. The Taliban also sent funds to the RMX branch in Lashkar Gah for distribution to Taliban commanders in 2010. Also in 2010, a Taliban member used RMX to send tens of thousands of dollars to Helmand Province and Herat Province, Afghanistan, on behalf of the Taliban shadow governor of Helmand Province. In 2009, a senior Taliban representative collected hundreds of thousands of dollars from an RMX branch in the Afghanistan-Pakistan border region to finance Taliban military operations in Afghanistan. In 2008, a Taliban leader used RMX to transfer tens of thousands of dollars to Afghanistan. The Taliban also uses RMX to facilitate its role in the Afghan narcotics trade. 
As of 2011, Taliban officials, including the shadow governor of Helmand Province, transferred hundreds of thousands of dollars from an RMX branch in the Afghanistan-Pakistan border region to hawalas in Afghanistan for the purchase of narcotics on behalf of Taliban officials. Also in 2011, a Taliban official directed Taliban commanders in Helmand Province to transfer opium proceeds through RMX. One Taliban district chief transferred thousands of dollars from Marjah, Helmand Province, Afghanistan to an RMX branch in the Afghanistan-Pakistan border region.
/*=========================================================================
 *
 *  Copyright RTK Consortium
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *         http://www.apache.org/licenses/LICENSE-2.0.txt
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 *
 *=========================================================================*/

#ifndef __rtkCudaSplatImageFilter_hcu
#define __rtkCudaSplatImageFilter_hcu

#include <vector_types.h>

void CUDA_splat(const int4 & outputSize,
                float * input,
                float * output,
                int projectionNumber,
                float ** weights);

#endif
February 29, 2008 Exactly 365 days ago, but not the same date (I’ll get to that in a second), I wrote about Parallel Dave Worlds. Although I’d like to wish Dave L. (who lives in a LOOPY world) a happy birthday, I want to focus on Dave F. who lives in a FANTASTIC world today. You see, Dave F.’s FANTASTIC world has developed a bit of a… well, somewhat of a wormhole for lack of a better word. Dave F. celebrates his 10th birthday today. The wormhole part comes into play when you realize that Dave F. will celebrate his 10th birthday in the same year that his daughter celebrates her 10th. Depending on how you do the math, Dave F. is turning 40, which I am told is an important milestone. Count the number of times the man has celebrated the date of his birth, February 29th, and you get the number 10. Having a whole day to celebrate your birthday is a gift in itself when you normally celebrate in the fleeting moment between February 28th and March 1st. Happy birthday Dave F., Dave L., and all the Leap Year-ians out there. February 19, 2008 I love outdoor black garbage bags with the quick tie feature. Instead of a straight cut, the top of the garbage bag is cut in a curved shape so that you end up with two longer edges that are easy to grab. Now I’m sure that quick tie was a wonderful feature on its own. Many garbage bag executives probably struggled with the idea of re-engineering their manufacturing processes to add this functionality. For me, I couldn’t care less about the extra handle-like feature. So why do I love them? Well, having the curved cut has a positive side-effect. A spandrel, in evolutionary biology terms. The curved cut allows me to easily tell the “open” edge from the sealed edge. No more pulling back and forth between the two edges unsure of which side is supposed to open.
Now I’m sure there are other ways to distinguish the edge that opens from the sealed edge, but it makes me happy that this minor irritation was fixed inadvertently by a feature designed with a whole different purpose in mind. RADBags with New Opening Detection Technology. What a wonderful discovery :-) February 13, 2008 Basically, Huckabee’s plan is to eliminate the income tax and replace it with a national sales tax. To a first approximation, that’s not such a radical change. As long as you spend what you earn, a sales tax feels just like an income tax. If you earn $1,000 a week and spend $1,000 a week, it doesn’t matter whether I take 20 percent of your income or 20 percent of your spending. Bottom line for Landsburg is that the FairTax is a sneaky way of getting an unlimited IRA. He likes the idea of an unlimited IRA because it encourages savings. I think the brilliance of the FairTax is that it makes a number of sneaky changes without really stating that it’s doing so. As far as I can tell it eliminates corporate taxes and payroll taxes (i.e. social insurance), and collapses the progressive tax rates down to two (no tax and normal tax). All these types of changes are fine in my opinion, but I’m not fond of the sneaky nature of the change. If you want to eliminate existing tax categories I think it is important to make your case for each elimination. There is also a fundamental flaw in the FairTax. Any savings that a person accumulated under the old income-based system will now be double-taxed under the new sales-based system (if the person chooses to spend that money). There is no way around this as far as I can tell. Punishing retired people is not usually a good political strategy… even if it’s endorsed by Chuck Norris.
Q: Simple question regarding factoring quadratics Say we have an equation $ax^2 + bx - c = 0$ and want to find $x$. Obviously the way to solve would be to use the quadratic formula or to factorize. I understand that saying $$ax^2 + bx = c \implies x(ax + b) = c$$ and then solving is wrong (the values of $x$ when subbed back in do not satisfy the equation), but why is it wrong? Each step seems logical. Many thanks. A: The equation $x(ax+b) = c$ is valid but does not help. There is a general fact that $AB = 0$ implies $A = 0$ or $B=0$, and this allows us to solve a product expression by reducing it to easier equations. So we need $0$ on the right hand side for the product form to be useful. E.g. we want to rewrite your equation $ax^2 + bx - c = 0$ as $a(x+\alpha)(x+\beta) = 0$. In order to find $\alpha, \beta$ to do this, note that the quadratic term is already taken care of: $ax^2$ in both. The linear term gives $a(\alpha+\beta) = b$ and the constant term gives $a\alpha\beta = -c$. So you need to find $\alpha$ and $\beta$ with known sum $\frac{b}{a}$ and known product $\frac{-c}{a}$, and this can sometimes be seen by inspection for concrete $a,b$ and $c$.
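As a concrete illustration of this sum-and-product search (my own example, not from the original question): take $a=1$, $b=5$, $c=6$, i.e. $x^2+5x-6=0$. Then we need

$$\alpha + \beta = \frac{b}{a} = 5\,, \qquad \alpha\beta = \frac{-c}{a} = -6\,,$$

which is satisfied by $\alpha = 6$, $\beta = -1$. Hence $x^2+5x-6 = (x+6)(x-1) = 0$, and the $AB=0$ fact gives $x=-6$ or $x=1$; both satisfy the original equation.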
Angry Video Game Nerd Episode Reviews Every Nerd episode, reviewed by me. The episodes will be divided into seasons as per the Wikipedia listings and will be reviewed in chronological order. This list will also use the YouTube uploads of each episode, save for episodes which are not available on YouTube in any official form.
394 F.Supp. 1319 (1975) UNITED STATES of America, Plaintiff, v. INDEPENDENT BULK TRANSPORT, INC., Defendant. No. 74 Civ. 2257. United States District Court, S. D. New York. May 29, 1975. *1320 Paul J. Curran, U. S. Atty., S. D. N. Y., Gilbert S. Fleischer, Atty. in charge, Admiralty & Shipping Section, Dept. of Justice, New York City, for U. S.; John C. Lane, New York City, of counsel. Peter M. Frank, New York City, for defendant; Jared B. Stamell, Washington, D. C., of counsel. MEMORANDUM FRANKEL, District Judge. Plaintiff United States sues for civil penalties administratively adjudged by the Coast Guard for one alleged and one admitted oil spill from defendant's tank barge in March and May 1973, respectively. Upon facts largely undisputed, the court has cross-motions for summary judgment posing issues of administrative procedural law. The case arises under § 311(b) of the Federal Water Pollution Control Act Amendments of 1972, 33 U.S.C. § 1321(b), subsection (3) of which prohibits oil discharges on navigable waters and subsection (6) of which says: "Any owner or operator of any vessel, onshore facility, or offshore facility from which oil or a hazardous substance is discharged in violation of paragraph (3) of this subsection shall be assessed a civil penalty by the Secretary of the department in which the Coast Guard is operating of not more than $5,000 for each offense. No penalty shall be assessed unless the owner or operator charged shall have been given notice and opportunity for a hearing on such charge. Each violation is a separate offense. Any such civil penalty may be compromised by such Secretary. In determining the amount of the penalty, or the amount agreed upon in compromise, the appropriateness of such penalty to the size of the business of the owner or operator charged, the effect on the owner or operator's ability to continue in business, and the gravity of the violation, shall be considered by such Secretary * * *." 
The question mainly contested now, and found dispositive by the court, is whether defendant received the "notice and opportunity for a hearing" to which the quoted provision entitled it. In connection with the first alleged spill, on March 24, 1973, defendant was notified of its right to a hearing by letter dated June 14, 1973, and was offered then an opportunity to close the case without a hearing for $1,000. Choosing the first alternative, a representative of defendant came before a Coast Guard Commander, designated as hearing officer, for an informal hearing on September 5, 1973. The details of this encounter are not important; what is significant, as will appear, is the undisputed fact that matters not disclosed to defendant became part of the agency's case record and basis of decision. In a letter dated September 6, 1973, defendant was informed of the hearing officer's adverse finding and assessment of a $500 penalty. The same letter informed defendant of its right of appeal to the Coast Guard Commandant. The right was exercised. In a letter dated November 27, 1973, the Commandant announced his affirmance of the penalty assessment. Similar steps occurred after the second oil spill, which is admitted now though both sides agree it was not major, involving a barrel or two. The determination in this instance was a $1200 penalty, affirmed by the Commandant by letter dated October 31, 1973. Defendant resisted the penalty assessments on an array of grounds, only a portion of which are reached today. Accordingly, on May 23, 1974, the Government brought the instant case, asserting a first cause of action for the $500 and a second for the $1200. 
*1321 In the course of pretrial discovery, a couple of months ago, the Government disclosed to defendant "the official Coast Guard record of [these] penalty case[s] * * *."[1] The materials thus disclosed, which "constitute the entire administrative records * * * relative to these penalty cases,"[2] contained a number of items that had never before been shown or described to defendant or any of its representatives. Among other things, there were reports, now flatly denied under oath, that the barge's tankerman was found drunk aboard the barge on the day of the first alleged spill and shortly after midnight again; and that the spill happened because no tankerman was on deck. There was, similarly, a report, of debated relevance, that the barge had no "visible framing of Certificate of Inspection"; a report by the Port Captain and his recommendation that a penalty be assessed; a memorandum of the hearing officer recommending action concerning the allegedly drunken tankerman; a mistaken description of defendant's corporate status; and some other things, characterized now as irrelevant by both sides, but generally not favorable to defendant. The record concerning the second episode also contained items revealed to defendant only now in this lawsuit, again including the Port Captain's adverse recommendation, again treating of possible actions against the tankerman reported to have been intoxicated, and containing a Dun & Bradstreet report which was at best not relevant, at worst potentially misleading. These undisclosed matters in the administrative record lead, not surprisingly, to defendant's claim that it was denied rights under both the statute and the Fifth Amendment. No claim is made that the penalty assessment procedure of 33 U.S.C. § 1321(b)(6) is subject to all the requirements of the Administrative Procedure Act. Defendant urges, however, that the minimum Congress must have purposed when it provided for notice and a hearing was not met here. 
Agreeing with that, the court reaches no question of constitutional law. Opposing defendant's motion for summary judgment, the Government concedes, properly, that "in general a party to an administrative proceeding is entitled to know and meet the material evidence and to be heard with respect thereto, on any material adjudicative fact genuinely in issue."[3] It is also "conceded * * * that the written communications from the hearing officer to the defendant informing the defendant of his determinations [and of the right to appeal] * * * did not fully state the bases for such determinations."[4] Further, the Government calls to our attention that Congress reflected a purpose to afford a condign measure of "due process and protection of a respondent's rights" when it made the provision for notice and hearing.[5] Nevertheless, in a kind of "harmless error" approach, the Government argues that the procedures were adequate, or reparable here in court. First, the Government tenders the affidavit of the hearing officer, who undertakes to buttress, amplify and explain his decision. It is sufficient to mention only some of what he says because it plainly will not do. He says, inter alia, he gave "little if any cognizance" to the Port Captain's recommendations. But a little would be too much. He says he did not rely "solely" upon one Dun & Bradstreet report and consulted another only as "cumulative evidence." He says defendant's representative could have seen the investigative report had he but asked for it (evidently assuming, without evident *1322 basis, that defendant knew the report existed and was in the file). Having tendered this repair work, the Government points out that we could hold a hearing de novo to supply the constitutional requisites when there is a dispute of fact regarding the occurrence of an oil spill as there is in the first cause of action. 
Then, the Government continues, we should go on to find reasonable the discretionary judgment of the Coast Guard in setting a particular penalty, refraining from the substitution of our judgment for the agency's. But the two points, warring with each other, destroy the Government's position. Defendant was entitled not only to a fair hearing from the hearing officer, but to the appeal of which the latter gave notice. At both levels, defendant had a fundamental right to meet factual matters and contentions that were, or might be deemed, adverse to its position.[6] Apart from the undisclosed matters at the first level, if the promised appeal was to be fair, defendant had a right to know (1) what was in the record before the Commandant and (2) the true bases for the decision to be reviewed. See Gonzales v. United States, 348 U.S. 407, 75 S.Ct. 409, 99 L.Ed. 467 (1955); Crowell v. Benson, 285 U.S. 22, 47-48, 52 S.Ct. 285, 76 L.Ed. 598 (1932); Environmental Defense Fund, Inc. v. Ruckelshaus, 142 U.S.App.D.C. 74, 439 F.2d 584, 598 (1971). Both of these fundamentals were denied. The court cannot sustain a purported exercise of discretion when the requirements for exercising discretion fairly were not met. See Ohio Bell Tel. Co. v. Public Utilities Comm'n, 301 U.S. 292, 304-05, 57 S.Ct. 724, 81 L.Ed. 1093 (1937); Scanwell Lab., Inc. v. Shaffer, 137 U.S.App.D.C. 371, 424 F.2d 859, 874 (1970); NLRB v. Prettyman, 117 F.2d 786, 791-92 (6th Cir. 1941). The Government is correct, of course, when it insists that the discretionary power to fix the amount that must be *1323 paid as a penalty for a violation is the Coast Guard's, not ours.[7] By that very token, the neglect by the Coast Guard of its procedural duties in making the factual determinations about whether a violation took place and the appropriateness of the penalty assessed must be remedied there, not here.[8] See FPC v. Idaho *1324 Power Co., 344 U.S. 17, 20-21, 73 S.Ct. 85, 97 L.Ed. 15 (1952); SEC v. 
Chenery Corp., 332 U.S. 194, 199-201, 67 S.Ct. 1575, 91 L.Ed. 1995 (1947); Jacob Siegel Co. v. FTC, 327 U.S. 608, 613-14, 66 S.Ct. 758, 90 L.Ed. 888 (1946); Ford Motor Co. v. NLRB, 305 U.S. 364, 372-74, 59 S.Ct. 301, 83 L.Ed. 221 (1939). Defendant's motion is granted. Plaintiff's cross-motion for partial summary judgment is denied. The complaint is dismissed, but without prejudice to the bringing of a subsequent complaint if and when, after new administrative proceedings, penalties are again assessed and resisted. It is so ordered. NOTES [1] Pretrial Order ¶ 3(a)(14), (27). [2] Id., ¶ 3(a)(28). [3] Memorandum of Law opposing defendant's motion 15. [4] Id. at 2. [5] H.R.Rep.No.92-911, 92d Cong., 2d Sess. 117 (1972). [6] By the terms of the statute, the defendant was entitled to "notice and opportunity for a hearing." In interpreting the meaning of these two essential components of due process, as a matter of constitutional law as well as when construing statutory language, courts have generally held that while due process is a flexible standard, the fair intendment of these two phrases, absent exigent circumstances, is a procedure which would apprise the defendant of all the evidence to be considered and would give it a chance to rebut that evidence. See Willner v. Committee on Fitness and Character, 373 U.S. 96, 103-06, 83 S.Ct. 1175, 10 L.Ed.2d 224 (1963); Greene v. McElroy, 360 U.S. 474, 496-97, 507, 79 S.Ct. 1400, 3 L.Ed.2d 1377 (1959); Morgan v. United States, 304 U.S. 1, 18-19, 58 S.Ct. 773, 82 L.Ed. 1129 (1938); Crowell v. Benson, 285 U.S. 22, 47-48, 52 S.Ct. 285, 76 L.Ed. 598 (1932); Freitag v. Carter, 489 F.2d 1377, 1382 (7th Cir. 1973); cf. Brandt v. Hickel, 427 F.2d 53, 56 (9th Cir. 1970). It is possible that deficiencies in the procedures at the initial hearing could in some cases be remedied by a full hearing before the Coast Guard Commandant on the appeal from the district commander's decision. See Opp Cotton Mills, Inc. v. 
Administrator, Wage and Hour Div., 312 U.S. 126, 152-53, 61 S.Ct. 524, 85 L.Ed. 624 (1941); United States v. Patterson, 465 F.2d 360, 361 (9th Cir.), cert. denied, 409 U.S. 1038, 93 S.Ct. 516, 34 L.Ed. 2d 487 (1972); Rosenberg v. Commissioner of Internal Revenue, 450 F.2d 529, 532 (10th Cir. 1971); McTiernan v. Gronouski, 337 F.2d 31, 35 (2d Cir. 1964). But there was no such full hearing at the appellate stage in this case. Moreover, to satisfy the statutory mandate, the defendant on the appeal should have known in full the record of the initial hearing which the Commandant would be reviewing on appeal, the hearing officer's reasons for his decision, and any additional evidence the Commandant would be considering, and should have had an opportunity to be heard in rebuttal. See Republic Aviation Corp. v. NLRB, 324 U.S. 793, 65 S.Ct. 982, 89 L.Ed. 1372 (1945); Morgan v. United States, supra, 304 U.S. at 18-19, 58 S.Ct. 773; Londoner v. Denver, 210 U.S. 373, 28 S.Ct. 708, 52 L.Ed. 1103 (1908); Langevin v. Chenango Court, Inc., 447 F.2d 296, 300 (2d Cir. 1971). Procedures required by the statute itself, putting to one side the requirements of due process, were not met at either stage. These deficiencies cannot now be cured by a trial de novo before the district court. They certainly are not so curable when, as the Government insists and the court agrees, the ultimate decision is one for administrative discretion. [7] In reviewing the imposition of administrative sanctions, a court has two tasks. The court must first examine the agency's findings of fact and conclusions of law, and sustain the findings and order if they are supported by sufficient evidence on the record and not contrary to law. If the imposition of some sanction is supported by the statute and the facts are fairly found, the court's only task is to determine whether the agency has abused its discretion in ordering the particular sanction. See Butz v. Glover Livestock Comm'n Co., 411 U.S. 
182, 187-89, 93 S.Ct. 1455, 36 L.Ed.2d 142 (1973); Jacob Siegel Co. v. FTC, 327 U.S. 608, 612, 66 S.Ct. 758, 90 L. Ed. 888 (1946); Kent v. Hardin, 425 F.2d 1346, 1349-50 (5th Cir. 1970). In the present case, the imposition of a civil penalty up to $5000 for each violation is authorized by the statute. In determining the amount of the penalty, the Coast Guard is instructed to consider factual issues such as "the appropriateness of such penalty to the size of the business of the owner or operator charged, the effect on the owner or operator's ability to continue in business, and the gravity of the violation * * *." 33 U.S.C. § 1321(b)(6). If the findings of fact were soundly based—for example, as to "the gravity of the violation"—it may be doubted that the amounts of the penalties could be deemed reversible for abuse. [8] At the cost of some repetition, the court expands here upon the Government's contention that procedural inadequacies at the agency level do not prejudice the defendant because a full hearing can be had in the district court in a trial de novo whenever there are disputed facts concerning the occurrence of an oil spill. While it presses this thought the Government denies that the court can (a) hold a de novo trial on the facts considered in setting a penalty or even (b) review the Coast Guard's order for substantial evidence on issues not to be tried de novo. The latter assertions expose as unsatisfactory patchwork the effort to have some pieces of the decisions, but not others, "heard" by the court de novo. Compare the Supreme Court's observations recently indicating when de novo review is appropriate: "De novo review * * * is authorized by § 706(2)(F) [§ 10(e) of the Administrative Procedure Act] in only two circumstances. First, such de novo review is authorized when the action is adjudicatory in nature and the agency factfinding procedures are inadequate. 
And, there may be independent judicial factfinding when issues that were not before the agency are raised in a proceeding to enforce nonadjudicatory agency action." Citizens to Preserve Overton Park v. Volpe, 401 U.S. 402, 415, 91 S.Ct. 814, 823, 28 L.Ed.2d 136 (1971). The House Report on the APA helps elucidate what is meant by occasions when agency action "is adjudicatory in nature and the agency factfinding procedures are inadequate." The Report explains that "the test is whether there has been a statutory administrative hearing of the facts which is adequate and exclusive for purposes of review." H.R.Rep.No. 1980, 79th Cong., 2d Sess. 45 (1946), U.S. Code Cong.Serv.1946, p. 1195. And the Report summarizes the provision in this manner: "In short, where a rule or order is not required by statute to be made after opportunity for agency hearing and to be reviewed solely upon the record thereof, the facts pertinent to any relevant question of law must be tried and determined de novo by the reviewing court respecting either the validity or application of such rule or order." Id. Thus Congress and the Supreme Court have indicated that if the statute under examination contemplates a full adjudicatory hearing before the agency, a court cannot conduct a trial de novo after it determines that the agency hearing has been inadequate. This sensible scheme prevents agencies from attempting to enlist the courts as convenient backstops; it ensures that administrative responsibility will be both respected and demanded. Following such a course in the instant case calls upon the Coast Guard to conduct its hearings properly, not to transfer the function, intermittently and by mistake, to the district courts. The Water Pollution Amendments of 1972, of which § 1321 is a part, offer somewhat uncertain signals as to the Congressional intent on the question of where the full hearing should be conducted. 
On the one hand, the statute states that "[n]o penalty shall be assessed unless the owner or operator charged shall have been given notice and opportunity for a hearing on such charge." 33 U.S.C. § 1321(b) (6) (emphasis added). "Assessment" is, of course, an administrative function. Yet at the same time, the House Report states that in order to protect due process rights the respondent owner will have an "opportunity of a de novo hearing in any collection proceeding initiated by a United States Attorney after the conclusion of administrative procedures." H.R.Rep.No.911, 92d Cong., 2d Sess. 117-18 (1972). The Report puts this statement immediately after a passage which states that provisions of the Administrative Procedure Act will apply in the Coast Guard hearing to protect the respondent's due process rights. At the same time, the requirement of notice and hearing is "not intended to impose in every instance the complex procedural requirements associated with formal adjudicatory hearings * * *." Id. at 117. In a case where the legislative history is this cloudy, we are entitled and well advised to follow relatively plain statutory language. Greenwood v. United States, 350 U.S. 366, 374, 76 S.Ct. 410, 100 L.Ed. 412 (1956) (Frankfurter, J.).
Association of signal-regulatory proteins beta with KARAP/DAP-12. The signal-regulatory proteins (SIRP) are Ig-like cell surface receptors detected in hematopoietic and non-hematopoietic cells. SIRP are classified as SIRPalpha molecules, containing a 110- to 113-amino acid long, or SIRPbeta molecules, with a 5-amino acid long intracytoplasmic domain. SIRPalpha molecules belong to inhibitory immunoreceptor tyrosine-based inhibition motif (ITIM)-bearing molecules. The majority of ITIM-bearing receptors are paired with activating isoforms, which share highly related extracytoplasmic domains but harbor a shorter cytoplasmic domain devoid of ITIM and contain a charged amino acid residue in their transmembrane domain. Activating receptors are associated with immunoreceptor tyrosine-based activation motif (ITAM)-bearing proteins, such as KARAP/DAP-12 and FcRgamma. In this report, we show that human SIRPbeta1 is included in an oligomeric complex with KARAP/DAP-12 in hematopoietic and non-hematopoietic transfectant cells as well as in human monocytes. The physical association between SIRPbeta1 and KARAP/DAP-12 results in the functional coupling of SIRPbeta1 engagement to the recruitment of the protein tyrosine kinase Syk and to serotonin release in RBL cell transfectants. Therefore our results show that SIRPbeta1 acts as an activating isoform of SIRPalpha molecules, confirming the co-existence of inhibitory ITIM-bearing molecules, recruiting SHP-1 and SHP-2 protein tyrosine phosphatases, and activating counterparts, whose engagement couples to protein tyrosine kinases via ITAM-bearing molecules.
Q: How to use nth-of-type to select nested children

I am trying to style odd and even headers that are associated with content. They are inside several DIVs, and I am unable to get nth-child or nth-of-type to work - only the odd styles are displaying. Here is some concept code:

HTML:

<div class="content">
  <h2>Welcome to my blog</h2>
  <div class="post">
    <h2><a href="myPostLink">This is a post</a></h2>
    <div class="entry">
      <p>here is some content</p>
    </div> <!-- end entry -->
    <div class="meta"><p>Here is meta info</p></div>
  </div> <!-- end post -->
  <div class="post">
    <h2><a href="myPostLink">This is another post</a></h2>
    <div class="entry">
      <p>here is some more content</p>
    </div> <!-- end entry -->
    <div class="meta"><p>Here is meta info</p></div>
  </div> <!-- end post -->
</div> <!-- end content -->

CSS:

.content h2 a:nth-of-type(odd){color: #444;}
.content h2 a:nth-of-type(even){color: #ccc;}

JSFiddle

My thought process was that since I was starting at .content in my CSS, the first .content h2 a would be considered odd and the second even, etc. Apparently not so - they are all considered the first child. Is there a way to select the headers in the way I want with CSS alone? Am I doing something dumb?

A: Use nth-child on the .post elements, and then select the h2 element from there.

jsFiddle example

.post:nth-child(odd) h2 a { color: red; }
.post:nth-child(even) h2 a { color: green; }

A: Try this:

.content div.post:nth-of-type(odd) a{color: #444;}
.content div.post:nth-of-type(even) a{color: #ccc;}

The a element of odd and even divs with post class. Not quite sure if that's what you need. A working example: http://jsfiddle.net/a4j7z/
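For clarity, here is a minimal sketch of why the answers above work, using the question's own markup. The :nth-of-type() pseudo-class counts an element only among its siblings of the same element type, so each <a> is always the first (and only) <a> inside its <h2>, and a:nth-of-type(even) can never match. The counting has to happen at the level where the repeated elements really are siblings — the .post divs:

```css
/* Each <a> is the only <a> among its siblings inside its <h2>,
   so "h2 a:nth-of-type(even)" matches nothing.
   Count at the level where the repeated elements are siblings
   of the same type (the div.post elements), then descend: */
.content div.post:nth-of-type(odd) h2 a  { color: #444; }
.content div.post:nth-of-type(even) h2 a { color: #ccc; }
```

Note that in this markup the first div.post is the second *child* of .content (the welcome <h2> is the first), which is why the nth-child variant in the first answer flips odd and even relative to the nth-of-type variant.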
For 40 Years, Crashing Trains Was One of America’s Favorite Pastimes - stevekemp https://www.atlasobscura.com/articles/staged-train-wrecks ====== hanniabu People died and were maimed so Crush was fired, but then the company realized people enjoyed it and could still profit off these spectacles so they hired Crush back. One engineer warned that the locomotives would explode and they shunned him as a Debbie Downer naysayer. Doesn't seem that far off from today where profits trump safety and any reason that stands in the way is ignored. ~~~ jopsen > Doesn't seem that far off from today where profits trump safety and any > reason that stands in the way is ignored. Really? :) While accidents do happen, the rate of accidents is going down. Injuries in general: [https://ourworldindata.org/grapher/total-number-of-deaths-by...](https://ourworldindata.org/grapher/total-number-of-deaths-by-cause?stackMode=relative) Fire: [https://ourworldindata.org/grapher/fire-deaths-by-age](https://ourworldindata.org/grapher/fire-deaths-by-age) Drowning: [https://ourworldindata.org/grapher/drowning-deaths-by-age-gr...](https://ourworldindata.org/grapher/drowning-deaths-by-age-group) And even motor vehicle deaths could plausibly have topped: [https://ourworldindata.org/grapher/road-deaths-by-type](https://ourworldindata.org/grapher/road-deaths-by-type) (this is worldwide; in the developed world these peaked many decades ago) ~~~ Wohlf We live in the safest time ever, yet some people don't realize or refuse to believe it. I blame media bubbles. ~~~ salawat No, we don't. We live in a local minimum based on what we're actually well equipped to define and measure, and what people in positions of power are willing to treat as actionable information. Politics is swinging to the extremes, violence is changing its clothes, taking on other less familiar forms. Trust in the system is at a low.
Economic inequality is rife; infrastructure and the environment are approaching levels of instability previously unheard of. These aren't media bubbles. These are failures to maintain or achieve higher order awareness. All of these things are feeding into each other in myriad ways; they can't be reasoned in individual contexts for solutions. That's the thinking that got us where we are. They need to be reasoned about as a whole. Problems cannot be solved with the same level of thinking that created them in the first place. ~~~ kube-system If you are suggesting that we aren’t able to say we’re at all-time lows due to a lack of good long-term historical information, then you also can’t draw the opposite conclusion that we _aren’t_ at an all-time low. We simply don’t know. But I’d bet it’s highly unlikely that a lack of measurement and/or recording led to better results. I do agree that trust is probably at pretty low levels — exactly because communication is more accessible than any time in history and therefore people have more opportunity to question authority. But a lack of trust doesn’t translate directly to violence. I think that people are becoming very complacent with distrust. The daily media barrage of reasons to distrust so many things has reached the point where it doesn’t cause outrage anymore. It has convinced some people that distrust is normal. ~~~ y4mi Every issue salawat mentioned wasn't something that would result in violence and deaths _today_, so it's not refuted by citing current statistics. They're warnings for the future, because while it's true that we're currently living in a pretty safe environment and are overall pretty well off, our children won't have that luxury. Once these issues actually start getting reflected in global statistics, it's going to be way too late to actually realistically pull this proverbial ship around.
And he didn't even mention half of the things on the horizon with the potential to seriously harm society such as global warming and the ever-increasing amount of automation destroying the livelihood of a lot of people. (automation isn't _bad_, it's just going to cause a lot of problems and unrest very soon) ~~~ iguy Prediction is hard, especially about the future! What's clear is the data about the recent past, and the trends are very good, both for accidental and deliberate death (among other things) on a time-scale of decades to centuries. There are indeed some reversals (in the last few years: some kinds of crime, pedestrian deaths, opioids) which the optimists hope will be short-lived. ------ dullroar I am here to tell you, however, that the spectacle promised by a local county fair of a “combine demo derby” (as in, a demo derby with old combines) did NOT live up to the hype. Crashes there were, but at a lumbering 5MPH (TOPS) they were not “spectacular,” especially when some of the combines had to help others get going again with helpful pushes. Let's say my ten-year-old son and I were "underwhelmed." :) ~~~ cr0sh I've never seen one, but I wonder if a motorhome/bus derby might be wild... ~~~ Swizec You can see these on Top Gear. They are spectacularly wonderful and I hope I get to experience one in real life some day. To wit, they've done everything. Bus derby, RV derby, trailer derby ... it always ends in wonderful amounts of destruction and spectacular crashes. ------ splitbrain I wonder if cleanup was part of the costs. I imagine there are quite a few parts that are difficult to move without heavy machinery. Or did they just leave the wreck and tracks as is? If so, are there any of those crash sites still visible? ~~~ michaelt They got the locomotives and tracks to the crash site somehow, so presumably they know a thing or two about moving heavy stuff around.
~~~ splitbrain Well, moving a working locomotive is relatively easy compared to a non-working one ;-) ~~~ glouwbug A working train without tracks is non-working machinery. They only laid down a mile of tracks for the show. ------ inflatableDodo The cultural change in reducing the frequency of this pastime is that trains today are considered much less cool than they once were, so contemporary America crashes other things into things for fun instead. The current fashion is large navy vessels, apparently. ~~~ liberte82 Our favorite things to crash are the global economy and the sustainability of the environment ~~~ inflatableDodo They've been crashing into one another for longer than America has existed, to be fair. Though someone did mention, it may have been Germany, that America was last seen laughing hysterically whilst hacking away at the remaining brake lines, so there's that. ------ pjc50 See also the more serious British Rail / BNFL "flask" crash test: [https://www.youtube.com/watch?v=ZY446h4pZdc](https://www.youtube.com/watch?v=ZY446h4pZdc) ~~~ TheSpiceIsLife How much power would you estimate that train was producing? If I remember correctly, can’t diesel-electric locomotives put out one or two megawatts? ~~~ arethuza Looks like an old Deltic - so about 2.4MW max: [https://en.wikipedia.org/wiki/British_Rail_Class_55](https://en.wikipedia.org/wiki/British_Rail_Class_55) NB The Deltic engines were pretty interesting: [https://en.wikipedia.org/wiki/Napier_Deltic](https://en.wikipedia.org/wiki/Napier_Deltic) ~~~ theoh No, as the voiceover says, it was a much less powerful Class 46 (46009). [https://en.wikipedia.org/wiki/British_Rail_Class_46](https://en.wikipedia.org/wiki/British_Rail_Class_46) ~~~ arethuza My mistake - in the office so didn't have the audio on! ------ bitwize I'm reminded of my nephew who, at four years old, would set up lengths of track on which to crash his toy cars together in what he called "challenges".
Strange to think that a railroad executive would turn out to be an overgrown four-year-old crashing real trains together for the fun of it. Especially considering the lives put at risk. ~~~ liberte82 When I was a kid my favorite toy was the Crash Test Dummies. They had a whole set of cars, people, motorcycles and other vehicles that would fly apart when thrown against a wall and could be put back together. :) ~~~ cafard Back in the early 1960s some such toys were sold. But I recall them as spring-driven--you'd wind them, let them run against a wall and fly apart, then put them back together. ------ scandox J.G. Ballard died and went to heaven: [https://en.wikipedia.org/wiki/Crash_(Ballard_novel)](https://en.wikipedia.org/wiki/Crash_\(Ballard_novel\)) ~~~ setr The crash itself isn’t as important there, as being _in_ the crash — the feel of the crushing steel, on your burning flesh ------ folli Who's to say that this wouldn't also attract crowds today? ~~~ TorKlingberg I think the advent of movie theaters killed it. With today's special effects you can see trains or boats or spaceships crashing into each other in close-up for a reasonable price. It's not "real" of course, but probably fills the need. ~~~ TeMPOraL It grows a need in me to see the real thing - because I know the special FX crashing is invented, and I'd like to know how it would look (and sound, and smell, and feel) in reality. ~~~ PaulAJ I remember watching the twin towers collapse on 9/11, and the part of me that wasn't thinking "Oh God!" was thinking "Wow, that looks just like a special effect". ~~~ TeMPOraL When I first stumbled upon reports from 9/11 when channel surfing (I think it was before the second plane hit), I thought I was seeing some weird thriller movie, and continued switching channels. Only later that day I learned it was all real. ------ Theodores This was lowest common denominator entertainment. Today's YouTuber stars make a fine living out of smashing stuff up.
They know it is lowest common denominator stuff that will find a ready audience. However, these stunts come to an end; after a while the audience needs something new and not yet another thing smashed up. The thing with the trains is that they were only good for scrap. Being wrecked made little difference to the resale value; bent bits of tin are still of the same weight as finely engineered bits of tin. The current trend of wasting stuff for likes rarely involves stuff that is going to be recycled. "How many iPhone X's does it take to stop a bullet" content results in a lot of waste of new stuff with not a lot recycled. This is in contrast to this train wrecking stunt-meme of a century ago. Although people were killed in the train wrecking stunts, people did not go there with that as a premise. Motor racing was about the deadly crashes for most of the last century; if you were a Formula 1 driver then it was a 1 in 3 chance you might not last the season. Spectators went for the chance to see a spectacular crash being part of the entertainment. Public hangings also used to be popular entertainment. So, all considered, pretty good show. ------ m23khan I say the time is ripe to bring back this ye olde form of ‘tainment to the masses. Oh, we can have frankfurter rolls and sarsaparilla floats at the fair grounds! ~~~ jsonne I know you're joking but that legitimately sounds like a lot of fun. ------ kozak Am I the only one who routinely watches crash test videos as entertainment? ~~~ degenerate I'd watch those if I knew where to look. Can you link a few? ~~~ kozak [https://www.euroncap.com/](https://www.euroncap.com/) is the main source, but YouTube searches also occasionally yield some interesting non-European videos (small overlap and other kinds of tests that are not routinely conducted by EuroNCAP). ------ GershwinA Interesting story, though I can't say the practice is celebratory.
I think there's too much waste in the world, starting from such events where things are like...getting crushed for fun, ending with the food restaurants throw out. Yet it's an interesting phenomenon about the train crashes ~~~ boohoojangles I just recently realized that, after moving next to a big park where those big events happen, even the ones that claim to be environmentally friendly. The next day there is so, so much garbage of all sorts. ------ shazeubaa Images of Gomez Addams come to mind.... ------ hn_throwaway_99 Completely random, but I'm always so impressed by how well-dressed everyone (including the poor) was 100+ years ago. While there are benefits to having less formality in public life, I also feel that something important has been lost. ~~~ jandrese Yeah, but that was their only set of clothes. They look nice because it's what they go to church in as well. Look through old wills and it's shocking to see what people list. Each shirt individually gifted, because they only had 3. Each pair of socks. Their toaster. This wasn't even 100 years ago. We don't realize just how much more buying power we have today. It's a completely different world. ------ atemerev Crash testing is important. Making the public aware of what happens in the crash is also important. This is the opportunity to observe many unknown unknowns that wouldn't manifest in non-destructive testing. For cars, crash tests are routine, which is good. For planes, unfortunately, they are less common. Now, we need destructive testing for e.g. nuclear reactors (in a controlled and safe environment, of course). We used to do that, but stopped for political reasons. ------ rjkennedy98 Amazing how 100 years ago they could lay 1 mile of rail just to crash old trains. Today it costs millions of dollars to build a mile of track and we still run cars that are 50+ years old. ------ 3minus1 This is fascinating. Nowadays people throng to the theaters for the latest Michael Bay special effects extravaganza.
It seems like the same impulse, to see a big crash or explosion. ------ adultSwim Where can I find these images from Baylor? I tried their repository, [http://digitalcollections.baylor.edu](http://digitalcollections.baylor.edu) ------ mc32 Given the popularity of kids crashing their toy trains in YouTube videos, this propensity hasn’t abated in people’s minds yet. ------ eej71 I believe the Scott Joplin piece The Great Crush Collision March was written to commemorate one mentioned in the article. ------ ChuckMcM So sad to think of those beautiful locomotives destroyed in a spectacle. ------ exabrial Mine was too... HO Scale though ------ hanniabu > By 4 p.m., more than 40,000 people had arrived For anybody else that was curious, in 1896 $2*40k = $2.4M in 2019 [http://www.in2013dollars.com/us/inflation/1896?amount=80000](http://www.in2013dollars.com/us/inflation/1896?amount=80000) ~~~ black_puppydog that doesn't sound like an awful lot for two locomotives and logistics...? ~~~ jopsen Probably the locomotives had to be scrapped regardless. But putting down the rail sounds expensive, could it be that this was cheaper back then? ~~~ bluGill Putting down rail would have been cheap. Since it was one time use they could take a lot of shortcuts - who cares if the next frost will twist the rails when you will be done before the next frost. ------ burfog Updating this for the modern era, I suggest the Airbus 380. Crash a fully fueled pair going full speed at about 800 feet up, with the crowd back 2000 feet. ~~~ peterkelly It's been done with a single 727, though for research purposes rather than entertainment. [https://youtu.be/FlX8KsSXg4s?t=2760](https://youtu.be/FlX8KsSXg4s?t=2760) ~~~ grafporno Those sound effects were added in post, right? ~~~ LeonM I think they got the crash sound from the onboard camera equipment, and the engine sounds from outside cameras. Then mixed it together with tense music for dramatic effect. 
------ bjourne Not seldom can you measure how unequal a society is by the bizarreness of the leisure time of the upper classes. ~~~ leadingthenet It was 50 cents to attend and many tens of thousands of people did so. Definitely not limited to the upper classes.
1. Field of the Invention
The present invention relates to an apparatus for analyzing a disc-shaped sample to be analyzed such as, for example, a semiconductor wafer or magnetic disc and, more particularly, to a device used in such analyzing apparatus for identifying a sample holder for holding the sample to be analyzed.
2. Description of Related Art
In the analyzing apparatus such as, for example, the X-ray analyzer, a variety of sample holders having dimensions and/or shapes that differ from each other according to types of the sample to be analyzed have hitherto been employed in order for samples of various sizes and/or shapes to be analyzed. If a sample holder that does not suit the particular sample type is used, a problem has been recognized that one or both of the sample and the analyzer are often contaminated and/or the use of the unsuitable sample holder leads to trouble in the analyzer. Accordingly, the patent document 1 listed below discloses an X-ray fluorescence spectrometer in which a read-out head is arranged in face-to-face relation with a side face of a cylindrical sample container accommodating therein the sample to be analyzed and an indicium such as, for example, a sample identification label and/or an analytical condition specifying label is applied to the side surface of the sample container so that the analyzing apparatus can be controlled in dependence on the result of reading performed by the read-out head. The patent document 2 listed below discloses an X-ray analyzing apparatus designed to measure a sample, held by a sample holder of a type in which a mask having defined therein a hole of a size (mask size) appropriate to the size of the sample is selectively fitted to the top of the sample to allow a measuring area of the sample to be exposed through such hole.
According to the patent document 2, at least a surface of the mask is prepared from a material containing a specific element, the content of which in the sample is minute or zero and which is of a kind that differs in dependence on the mask size, so that the mask size can be determined by measuring the intensity of secondary X-rays emanating from the specific element.
A former contributor to World Intelligence (Japan Military Review), James Simpson joined Japan Security Watch in 2011, migrating with his blog Defending Japan. He has a Masters in Security Studies from Aberystwyth University and is currently living in Kawasaki, Japan. His primary interests include the so-called 'normalization' of Japanese security (i.e. militarization), and the political impact of the abduction issue with North Korea. Suffice to say, there is an awful lot of metal belonging to these countries floating around the Sea of Japan and South China Sea. The bottom line is that as far as a Japan/South Korea clash is concerned, SK would most likely be decimated by a far superior Japanese naval force (10 vs 1 Destroyers, 36 vs 9 Frigates). At present South Korea’s navy is aimed primarily at white-water operations around the peninsula and would take years (and a massive budget) to reconfigure itself to pose a viable threat to Japan. In the case of China, Japan would be severely outnumbered in every category of vessel. Yet, Japan’s ships are of far higher calibre than the largely outdated Chinese fleet and her crews (apart from China’s elite blue-water fleet) more highly trained. China is also required to spread its vessels across the entire region to reinforce the many territorial claims it is making with other nations. These claims (and Japan’s standing alliance relationships) also ensure Japan could call upon wide support in the event of a crisis while China would stand alone. At present the two seem evenly matched. The most important point, however, is that any clash in the near future would not resolve a thing. Tensions would increase, emotions would be inflamed, another grudge marked down for the ages, and the underlying issue would in no way be resolved.
There is nothing of strategic consequence to be gained by a clash, making the real danger the fact that political gains can be made by statesmen willing to sacrifice their country’s national security for their own personal gain. For the sake of clarity, such politicians (in any country) should not be called ‘nationalists’, as they are not putting their nation’s interest before their own and are almost certainly feigning any patriotic outrage they display in their demagoguery. I feel rather ambivalent when I read such a statement. On the one hand, it makes sense. On the other hand, sovereignty is ultimately a digital proposition and does not lend itself well to rational compromise. And every day it is unresolved also increases tensions, inflames emotions, grudges… The only way these disputes can be settled, even on paper, is to create a one-sided crisis so serious that the other side starts thinking the short-term advantages of ending it are large enough to ignore the long-term advantage of continuing the dispute. Once it is on paper the loser has to pay it some heed. To put it in a mean way, I think China quickly made up with Russia over the border because they needed to buy Su-27s to modernize their air force. Now, what large enticement, good or bad, can Japan make to get China to do the same?
Q: Java RegExp problem - .*(www).* vs. (www) A buddy of mine is currently tinkering around with JpCap in Java and we found an interesting (maybe?) problem regarding regular expressions in Java. Only HTTP traffic is being captured and subsequently analyzed. For this purpose he is using a pattern like this one: Pattern p = Pattern.compile("(www)"); But what both of us have not been able to find out: why does the above pattern produce no matches at all, while the following does: Pattern p = Pattern.compile(".*(www).*"); From what I can see, both of these should be identical, shouldn't they? I don't have that much experience regarding regular expressions, so there might be a very simple answer to this question :) A: Your first pattern only accepts the string 'www'. The second pattern accepts any string with 'www' in it, anywhere. This website has more information regarding Java regex. A: Oh, nevermind, I just found out we both were a little bit API-blind ;) We were using Matcher.matches() which matches the pattern against the entire string, instead of Matcher.find() which tries to find the pattern anywhere in the given string. Thanks for the answers, though! :)
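The matches()-vs-find() distinction from the self-answer above can be sketched in a few lines (the input string here is just a hypothetical captured HTTP request line, not from the original question):

```java
import java.util.regex.Pattern;

public class MatchesVsFind {
    public static void main(String[] args) {
        // Hypothetical captured HTTP request line.
        String input = "GET http://www.example.com/index.html HTTP/1.1";

        // matches() requires the pattern to cover the ENTIRE input,
        // so a bare "(www)" never matches a full request line.
        Pattern bare = Pattern.compile("(www)");
        System.out.println(bare.matcher(input).matches());   // false

        // Padding with .* lets the whole string participate in the match.
        Pattern padded = Pattern.compile(".*(www).*");
        System.out.println(padded.matcher(input).matches()); // true

        // find() searches for the pattern anywhere in the input,
        // so the bare pattern works without any .* padding.
        System.out.println(bare.matcher(input).find());      // true
    }
}
```

Once find() has located a match, the capture group is available via Matcher.group(1), which is usually what traffic-filtering code like this actually wants.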
Compostable seaweed straws can be eaten after use Startup is looking to replace single-use plastic straws with an edible version that also biodegrades rapidly. The conversation surrounding the alarming amount of ocean plastic continues to pick up pace, and while we’ve seen efforts to remove and re-purpose that debris, there is more to be done to replace plastic altogether. New York-based startup LOLIWARE is looking to stop a big source of that plastic in the first place: single-use straws. The company has built on its previous expertise in developing edible cups by producing straws made mostly of seaweed — the LOLISTRAW. The team identified seaweed as a suitable alternative to plastic due to its renewable production (it even captures CO2 while growing) and, if users choose not to eat their straw after use, it can be tossed into the organics bin where it breaks down rapidly for efficient composting. LOLISTRAWs will be available in a variety of colors and flavors, with plans to also fortify them with additional nutrients to further entice users to consume them, while maintaining the classic look and feel of a plastic straw with the durability to survive being handled in use for a variety of drinks (tests show the straws will survive in drinks for 24 hours and have a two-year shelf life). LOLISTRAW is currently crowdfunding on Kickstarter, with an estimated delivery date of August 2018 and early bird starter packs available from USD 10. The alternatives to single-use plastic items are growing, such as this all-in-one paper coffee cup that requires no lid and this store dedicated to zero waste packaging, so what other single-use consumables could be targeted next?
// Copyright 2019 The Flutter team. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. import 'package:flutter/material.dart'; abstract class BackLayerItem extends StatefulWidget { final int index; const BackLayerItem({Key key, @required this.index}) : super(key: key); } class BackLayer extends StatefulWidget { final List<BackLayerItem> backLayerItems; final TabController tabController; const BackLayer({Key key, this.backLayerItems, this.tabController}) : super(key: key); @override _BackLayerState createState() => _BackLayerState(); } class _BackLayerState extends State<BackLayer> { @override void initState() { super.initState(); widget.tabController.addListener(() => setState(() {})); } @override Widget build(BuildContext context) { final tabIndex = widget.tabController.index; return IndexedStack( index: tabIndex, children: [ for (BackLayerItem backLayerItem in widget.backLayerItems) ExcludeFocus( excluding: backLayerItem.index != tabIndex, child: backLayerItem, ) ], ); } }
When looking at a bobble head player, move the Right Analog-stick to move its head. Move the Left Analog-stick to rotate the camera view.

Touchdown celebrations
When running into the endzone you can taunt/highstep into it by stopping short of the goal line and not moving. The CPU takes care of the celebration for you. There are many celebrations, including the Heisman.

Milestones
It is difficult to get all the milestones. However, if you play in two player mode, you can get them much easier by playing against yourself or a willing friend. You will still get the milestones as long as you play on the pro difficulty setting.

Getting yards
If you are not good at the pass, then try running the ball a lot. Jukes work well, but because the players can change directions so quickly, changing directions with the halfback is the best way to pick up some yards. Additionally, choose Hail Mary formation, and any play. Wait a few seconds for the receivers to start their routes, then hold R to tuck and run a sweep to the most open side. You can pick up chunks of yards at a time.

Big Head Mode cheat: Create a custom player.
Crazy Kick cheat: Get a 99 yard kick or punt return for touchdown.
Crusher cheat: Get 9 sacks in a game.
Da Juice cheat: Get 50 interceptions.
Kickin' or Stickin' cheat: Accumulate at least three hours of play time.
Mini mode cheat: Throw six touchdowns to different players.
Power Pocket cheat: Score 77 points in a game.
Paper Football mini-game: Play an entire game in first person mode.
Trivia Machine mini-game: Successfully complete all the tutorials.
ESPN stadium: Accumulate 650 yards of total offense.
Superbowl 2005 stadium: Prevent your opponent from scoring in a game.
Superbowl 2006 stadium: Defeat the CPU by at least 56 points.
Superbowl 2007 stadium: Complete 100% of passes thrown (with a 15 attempt minimum).
Superbowl 2008 stadium: Get 250 passing yards or 100 rushing yards with the same player.
Superbowl future stadium: Win a Legend game.
Visual Concepts stadium: Accumulate twenty five hours of play time.
Beat free agent: Rush and receive for 150 yards with the same player to unlock Beat from Jet Set Radio Future.
Rhesus brain microvascular endothelial cells are permissive for rhesus cytomegalovirus infection. Endothelial cells (EC) are an important cell type for human cytomegalovirus (CMV) pathogenesis. To characterize better the role of EC in primate CMV natural history, rhesus macaque microvascular EC (MVEC) were purified from fetal brain and analysed for infectivity by rhesus cytomegalovirus (RhCMV). Rhesus brain MVEC (BrMVEC) in culture were positive for von Willebrand factor and CD105 expression, uptake of acetylated low-density lipoprotein, and formation of capillary-like tubules on Matrigel, all phenotypic hallmarks of EC. BrMVEC were fully permissive for infection by RhCMV strain 68-1, and detectable plaques formed within 5 days of infection. Infectivity of BrMVEC by RhCMV could be reduced, but not abolished, by treatment of cells either before or during infection with pro-inflammatory mediators tumour necrosis factor-alpha, interleukin-1beta or phorbol 12-myristate 13-acetate. These results demonstrate that in vitro infection of rhesus BrMVEC is a dynamic process that is influenced by activation conditions.
package main import ( "net/http" "github.com/labstack/echo" "github.com/philippgille/ln-paywall/ln" "github.com/philippgille/ln-paywall/storage" "github.com/philippgille/ln-paywall/wall" ) func main() { e := echo.New() // Configure middleware invoiceOptions := wall.DefaultInvoiceOptions // Price: 1 Satoshi; Memo: "API call" lndOptions := ln.DefaultLNDoptions // Address: "localhost:10009", CertFile: "tls.cert", MacaroonFile: "invoice.macaroon" storageClient := storage.NewGoMap() // Local in-memory cache lnClient, err := ln.NewLNDclient(lndOptions) if err != nil { panic(err) } // Use middleware e.Use(wall.NewEchoMiddleware(invoiceOptions, lnClient, storageClient, nil)) e.GET("/ping", func(c echo.Context) error { return c.String(http.StatusOK, "pong") }) e.Logger.Fatal(e.Start(":8080")) // Start server }
Tunisia’s election commission has announced that 27 candidates will be competing in November’s presidential election. For the first time, a woman, Judge Kalthoum Kannou, will be running for president. Tunisia overthrew its dictator in 2011, and elections for Parliament in late October and for president on Nov. 23 are intended to complete the transition to democracy. Polls show Beji Caid Essebsi, a secular politician in his 80s, as the front-runner. Other prominent candidates include President Moncef Marzouki, a doctor and human rights activist.
{ "images" : [ { "extent" : "full-screen", "idiom" : "iphone", "subtype" : "736h", "filename" : "6plus.png", "minimum-system-version" : "8.0", "orientation" : "portrait", "scale" : "3x" }, { "extent" : "full-screen", "idiom" : "iphone", "subtype" : "667h", "filename" : "Portrait6.png", "minimum-system-version" : "8.0", "orientation" : "portrait", "scale" : "2x" }, { "orientation" : "portrait", "idiom" : "iphone", "filename" : "4s.png", "extent" : "full-screen", "minimum-system-version" : "7.0", "scale" : "2x" }, { "extent" : "full-screen", "idiom" : "iphone", "subtype" : "retina4", "filename" : "5C.png", "minimum-system-version" : "7.0", "orientation" : "portrait", "scale" : "2x" } ], "info" : { "version" : 1, "author" : "xcode" } }
MEXICO CITY (Reuters) - Pranksters changed the name of Mexico’s lower house of Congress to the “Chamber of Rats” on Google Maps on Tuesday in the latest dig at the political class during a testing start to the year for the country’s government. The lower house, also known as the Chamber of Deputies, became the “Chamber of Rats”, using the Spanish word “rata,” which is also slang for thief in Mexico. “Our teams are working fast to resolve this incident,” Google Mexico said in a statement, explaining that place names on the online mapping service came from third parties, public sources and contributions from users. It was the second such attack in the space of a few days. Mexican media reported at the weekend that the presidential residence appeared as the “Official Residence of Corruption” on Google Maps before Google Mexico removed it from the map and apologized for “inappropriate content” created by a user. Mexico’s government has faced protests, road blocks and looting of shops since the start of 2017, when the cost of fuel jumped sharply on the back of a finance ministry decision to liberalize the market and end state-set gasoline prices. Allegations of corruption swirl constantly around the political class in Mexico. A 2013 Transparency International study showed that 91 percent of respondents felt political parties were corrupt or extremely corrupt. Some 83 percent took the same view of the legislature, the study showed. The credibility of President Enrique Pena Nieto was damaged by a conflict-of-interest row earlier in his six-year term when it emerged that he, his wife, and his then-finance minister had all acquired homes from government contractors. A government-ordered probe cleared all of any wrongdoing. (Reporting by Dave Graham; Editing by Bill Rigby)
Using a novel patient-specific stem cell-based therapy, researchers at the National Eye Institute (NEI) prevented blindness in animal models of geographic atrophy, the advanced “dry” form of AMD, which is a leading cause of vision loss among people age 65 and older. The protocols established by the animal study set the stage for a first-in-human clinical trial testing the therapy in people with geographic atrophy, for which there is currently no treatment. “If the clinical trial moves forward, it would be the first ever to test a stem cell-based therapy,” said Kapil Bharti, PhD, Stadtman investigator at the NEI unit on ocular and stem cell translational research. The researchers will take a patient’s own blood cells, and in a lab, convert them into iPS cells capable of becoming any type of cell in the body. The iPS cells are then programmed to become retinal pigment epithelial cells, the type of cell that dies early in the geographic atrophy form of AMD. [National Eye Institute] The authors wrote that, “autologous induced pluripotent stem cell (iPSC)–derived retinal pigment epithelium (RPE) transplantation has been shown to improve visual function in animal models of AMD and is currently being tested in human patients.” The therapy involves taking a patient’s blood cells and, in a lab, converting them into iPS cells, which are programmed to become RPE cells, the type of cell that dies early in the geographic atrophy stage of macular degeneration. RPE cells nurture photoreceptors, the light-sensing cells in the retina. In geographic atrophy, once RPE cells die, photoreceptors eventually also die, resulting in blindness. The therapy is an attempt to shore up the health of remaining photoreceptors by replacing dying RPE with iPSC-derived RPE. Before they are transplanted, the iPSC-derived RPE are grown in tiny sheets one cell thick, replicating their natural structure within the eye. 
This monolayer of iPSC-derived RPE is grown on a biodegradable scaffold designed to promote the integration of the cells within the retina. A specially designed surgical tool was built for the task of inserting the patch of cells between the RPE and the photoreceptors. A scanning electron micrograph image shows a polarized RPE monolayer on a biodegradable scaffold. The image is colored to highlight the scaffold in blue, three RPE cells (brown), and the apical processes of cells in the RPE monolayer in light green. [Kapil Bharti, PhD, NEI] One concern about using iPSCs is the possibility of oncogenic mutations that might occur during the cell reprogramming process. In this paper, Ruchi Sharma, PhD, and colleagues at the NEI used CD34+ peripheral blood cells from patients with AMD to generate oncogenic mutation-free clinical-grade iPSCs from three AMD patients. These cells were then used for the production of clinical-grade RPE cell patches. The authors wrote that, “compared to RPE cells in suspension, our biodegradable scaffold approach improved integration and functionality of RPE patches in rats and in a porcine laser-induced RPE injury model that mimics AMD-like eye conditions.” For decades now, stem cells have held the promise of a cure. The transplantation of the RPE patches in rodent and pig models of retinal degeneration showed therapeutic effects. Immunostaining confirmed that the iPSC-derived RPE expressed the gene RPE65, suggesting the lab-made cells had reached a crucial stage of maturity necessary to maintain photoreceptor health. RPE65 is necessary for the regeneration of visual pigment within the photoreceptors and is an essential component for vision. Further tests showed that the transplanted RPE cells were pruning photoreceptors via phagocytosis, another RPE function that helps keep photoreceptors healthy.
In addition, electrical responses recorded from photoreceptors rescued by RPE patches were normal, whereas photoreceptors treated with a control empty scaffold had died. The authors suggested that the production process presented in this paper might accelerate the development of safer iPSC-derived stem cell therapies. The planning of a Phase I clinical trial testing the safety of the iPSC-based therapy for geographic atrophy is underway and will be initiated after U.S. FDA approval.
Export of dissolved organic matter in relation to land use along a European climatic gradient. The terrestrial export of dissolved organic matter (DOM) is associated with climate, vegetation and land use, and thus is under the influence of climatic variability and human interference with terrestrial ecosystems, their soils and hydrological cycles. We present a data-set including catchments from four areas covering the major climate and land use gradients within Europe: a forested boreal zone (Finland), a temperate agricultural area (Denmark), a wet and temperate mountain region in Wales, and a warm Mediterranean catchment draining into the Gulf of Lyon. In all study areas, DOC (dissolved organic carbon) was a major fraction of DOM, with much lower proportions of DON (dissolved organic nitrogen) and DOP (dissolved organic phosphorus). A south-north gradient with highest DOC concentrations and export in the northernmost catchments was recorded: DOC concentrations and loads were highest in Finland and lowest in France. These relationships indicate that DOC concentrations/export are controlled by several factors including wetland and forest cover, precipitation and hydrological processes. DON concentrations and loads were highest in the Danish catchments and lowest in the French catchments. In Wales and Finland, DON concentrations increased with the increasing proportion of agricultural land in the catchment, whereas in Denmark and France no such relationship was found. DOP concentrations and loads were low compared to DOC and DON. The highest DOP concentrations and loads were recorded in catchments with a high extent of agricultural land, large urban areas or a high population density, reflecting the influence of human impact on DOP loads.
Snacks Since you're all avid Pointe readers, I'm sure you've heard us repeat it over and over: Healthy fats are essential to a dancer's diet. Your body needs them to absorb vitamins, balance out hormones and make you feel full and satisfied. This summer heat might leave you craving a cold snack after class to cool down. But did you know that reaching for an ice cream cone or a flavored slushie could actually make you feel hotter? Barry Swanson, a food scientist at Washington State University, recently spoke to Time about which foods can spike your body temperature, and which can help bring it down. His insights might surprise you. Happy August! Our new issue is out, and it's chock-full of great stories. My personal favorite to work on was The Dance Bag Diet in which top stars gave us a peek inside their daily snacking habits. Daniil Simkin turns out to be a bit of a cookie monster, Heather Ogden totes a grapefruit for the fresh scent it gives her bag, and Craig Hall likes to take a swig of water with peppermint oil before going onstage: “It’s like drinking a box of Altoids.”