Reese
Adoptable Companion Horses
Does your horse need a pasture buddy? Companion horses can be a great way to provide needed socialization for your horse.
Companion horses also make wonderful pets. Just because a horse can no longer be ridden does not mean they do not enjoy human company or learning something new. Habitat for Horses has a variety of Companion Horses available at a low cost.
You must still pass the same property inspections and meet the same requirements that apply to a Ready To Ride horse.
Contact us: 866 434 5737.
Our Office Hours: 8:00am – 4:00pm CDT Monday – Friday. Adoption Application |
# Copyright 2020 Dragonchain, Inc.
# Licensed under the Apache License, Version 2.0 (the "Apache License")
# with the following modification; you may not use this file except in
# compliance with the Apache License and the following modification to it:
# Section 6. Trademarks. is deleted and replaced with:
# 6. Trademarks. This License does not grant permission to use the trade
# names, trademarks, service marks, or product names of the Licensor
# and its affiliates, except as required to comply with Section 4(c) of
# the License and to reproduce the content of the NOTICE file.
# You may obtain a copy of the Apache License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the Apache License with the above modification is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the Apache License for the specific
# language governing permissions and limitations under the Apache License.
import re
from typing import Dict, Union, List, Any, Optional, cast
from dragonchain.broadcast_processor import broadcast_functions
from dragonchain.lib.interfaces import storage
from dragonchain.lib.database import redisearch
from dragonchain.lib import matchmaking
from dragonchain import exceptions
from dragonchain import logger
_log = logger.get_logger()
_uuid_regex = re.compile(r"[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}")
def get_pending_verifications_v1(block_id: str) -> Dict[str, List[str]]:
    """Get any scheduled pending verifications"""
    claim_check = matchmaking.get_claim_check(block_id)
    verifications = broadcast_functions.get_all_verifications_for_block_sync(block_id)
    scheduled_l2 = set(claim_check["validations"]["l2"].keys())
    scheduled_l3 = set(claim_check["validations"]["l3"].keys())
    scheduled_l4 = set(claim_check["validations"]["l4"].keys())
    scheduled_l5 = set(claim_check["validations"]["l5"].keys())
    # Keep only the chains that are scheduled but from which no verification has been received yet
    return {
        "2": list(scheduled_l2.difference(verifications[0])),
        "3": list(scheduled_l3.difference(verifications[1])),
        "4": list(scheduled_l4.difference(verifications[2])),
        "5": list(scheduled_l5.difference(verifications[3])),
    }


def query_verifications_v1(block_id: str, params: Optional[Dict[str, Any]] = None) -> Union[List[Any], Dict[str, List[Any]]]:
    """Query verifications for a block ID on a level, or all levels"""
    _log.info(f"Getting verifications for {block_id}, params {params}")
    level = int(params.get("level") or "0") if params else 0
    return _get_verification_records(block_id, level)


def query_interchain_broadcasts_v1(block_id: str) -> List[Any]:
    """Return the subsequent broadcasts to other L5 networks"""
    _log.info(f"Getting subsequent L5 verifications for {block_id}")
    results = []
    l5_block = None
    l5_verifications = _get_verification_records(block_id, 5)
    if len(l5_verifications) > 0:
        l5_block = cast(List[Any], l5_verifications)[0]
        timestamp = l5_block["header"]["timestamp"]
        dc_id = l5_block["header"]["dc_id"]
        l5_nodes = redisearch._get_redisearch_index_client(redisearch.Indexes.verification.value).redis.smembers(redisearch.L5_NODES)
        results = [
            _query_l5_verification(l5_dc_id.decode("utf-8"), timestamp)
            for l5_dc_id in l5_nodes
            if l5_dc_id.decode("utf-8") != dc_id and not re.match(_uuid_regex, l5_dc_id.decode("utf-8"))
        ]
    return ([l5_block] if l5_block else []) + [storage.get_json_from_object(f"BLOCK/{x}") for x in results if x is not None]


def _query_l5_verification(l5_dc_id: str, timestamp: str) -> Optional[str]:
    query_result = redisearch.search(
        index=redisearch.Indexes.verification.value,
        query_str=f"@dc_id:{{{l5_dc_id}}} @timestamp:[{int(timestamp)+1} +inf]",
        only_id=True,
        limit=1,
        sort_by="timestamp",
        sort_asc=True,
    )
    return query_result.docs[0].id if query_result and len(query_result.docs) > 0 else None


def _get_verification_records(block_id: str, level: int = 0) -> Union[List[Any], Dict[str, List[Any]]]:
    if level:
        if level in [2, 3, 4, 5]:
            return _level_records(block_id, level)
        raise exceptions.InvalidNodeLevel(f"Level {level} not valid.")
    else:
        return _all_records(block_id)


def _level_records(block_id: str, level: int) -> List[Any]:
    return [storage.get_json_from_object(key) for key in storage.list_objects(f"BLOCK/{block_id}-l{level}")]


def _all_records(block_id: str) -> Dict[str, List[Any]]:
    return {"2": _level_records(block_id, 2), "3": _level_records(block_id, 3), "4": _level_records(block_id, 4), "5": _level_records(block_id, 5)}
|
Grilled Greek Flatbread Pizza Recipe
July 21, 2017
We combined the classic, down-to-earth charm of pizza with the fancy-pants elegance of flatbread. And then we topped it with feta, olives, arugula, and other Greek(ish) foods. Oh — and we grilled it, too. Because it’s summer, and we love some good grill marks (and the subtly smoky, charred flavors that come with ’em).
If you’re short on time, feel free to skip the homemade dough and opt for store-bought flatbread instead (we won’t tell). Brush with oil and toss on the grill for a minute or two per side to warm before adding the sauce and toppings.
Make the dough: Add flour and salt to a large bowl. In a separate bowl, dissolve yeast in water. Let sit until foamy, about 5 minutes. Add to flour/salt mixture and stir. Once well combined, add olive oil. Knead dough with your hands on a well-floured surface, adding more flour as necessary to prevent it from sticking, until smooth and elastic, 3-5 minutes. Place into a lightly oiled bowl and cover with a clean kitchen towel. Let sit about 45 minutes.
Make the sauce: Meanwhile, heat oil in a pan. Once shimmering, add garlic and tomatoes. Cook 15-20 minutes, adding parsley during the last 5 minutes. Season to taste with salt, pepper, and chili flakes.
Grill the dough: Place dough on a lightly floured surface. Divide into 4 equal pieces and roll each into 1/8-inch thick rounds. Lightly brush both sides with olive oil, then place directly on the hot, oiled grill. Grill until golden brown and grill marks appear, 2-3 minutes per side.
Assemble the flatbread pizza: Remove dough from grill (or move to an area of indirect heat). Spoon over sauce, then add toppings as desired.
Hungry for more? Look no further. We’ve whipped up our fair share of easy flatbread recipes and have a feeling these three would cook up just fine on the grill.
Pizza for breakfast doesn’t have to mean a cold, leftover slice. In fact, it can be a beautifully toasty and golden crust, a mouthwatering spread of creamy ricotta cheese, and, because it’s breakfast (or brunch), a cascade of bacon and apple pieces. |
---
author:
- Konstantin Beyer
- Kimmo Luoma
- 'Walter T. Strunz'
bibliography:
- 'bibliography.bib'
title: 'Work as an external quantum observable and an operational quantum Jarzynski equality - Supplemental Material'
---
Quantum JE in a TPM scheme
==========================
As in the classical case, one assumes the system to be initially in a thermal state of Hamiltonian $H_A$ $$\begin{aligned}
\label{eq:gibbs-state}
\rho_{\mathcal{S}}= \frac{1}{Z_A} \sum_a e^{-\beta E_a} {|a\rangle\langle a|}, && Z_A = \sum_a e^{-\beta E_a}.\end{aligned}$$ The probability to measure outcomes $a$ and $b$ in the first and second measurement, respectively, is then given by $$\begin{aligned}
\label{eq:prob}
p(a,b) = {\operatorname{Tr}}\{ {|b\rangle\langle b|} {\operatorname{U}}{|a\rangle\langle a|} \rho_{\mathcal{S}}{|a\rangle\langle a|} {\operatorname{U}}^\dagger \},\end{aligned}$$ where we have assumed that the first measurement is implemented by a Lüders instrument [@heinosaari_mathematical_2011]. The classical form of the JE can then be verified directly for the quantum TPM scheme [@TasakiJarzynskiRelationsQuantum2000; @KurchanQuantumFluctuationTheorem2001] $$\begin{aligned}
\label{eq:TPM-JE}
\big\langle e^{-\beta W} \big\rangle &= \sum_{a,b} p(a,b) e^{-\beta (E_b - E_a)} \nonumber\\
&= \sum_{a,b} {\operatorname{Tr}}\{ {\mathds{Q}}_b {\operatorname{U}}{\mathds{P}}_a \,\rho_{\mathcal{S}}\, {\mathds{P}}_a {\operatorname{U}}^\dagger \} e^{-\beta (E_b - E_a)} \nonumber\\
&= \sum_b e^{-\beta E_b} {\operatorname{Tr}}\{{\mathds{Q}}_b {\operatorname{U}}\sum_a (e^{+\beta E_a} {\mathds{P}}_a \,\rho_{\mathcal{S}}\, {\mathds{P}}_a) {\operatorname{U}}^\dagger \} \nonumber\\
&= \sum_{b} e^{-\beta E_b} {\operatorname{Tr}}\{{\mathds{Q}}_b {\operatorname{U}}\frac{1}{Z_A} {\mathds{1}}{\operatorname{U}}^\dagger \} \nonumber \\
&= \frac{1}{Z_A} \sum_b e^{-\beta E_b} = \frac{Z_B}{Z_A} = e^{-\beta \Delta F}. \end{aligned}$$ As one can see from the fourth line, the TPM-JE in this form holds for any unital dynamics and is not restricted to unitary ones [@RasteginNonequilibriumequalitiesunital2013; @RasteginJarzynskiequalityquantum2014; @RasteginQuantumFluctuationsRelations2018].
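This identity is straightforward to check numerically. The sketch below is a standalone NumPy check with made-up spectra and a random unitary (none of these values come from the paper); it builds the TPM distribution $p(a,b)$ and confirms that the average of $e^{-\beta W}$ equals $Z_B/Z_A$:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, d = 0.7, 3

# Hypothetical spectra for H_A and H_B; each measurement basis is taken as
# the computational basis in its own frame, so U below absorbs the basis change.
E_a = rng.uniform(-1.0, 1.0, d)
E_b = rng.uniform(-1.0, 1.0, d)

# Random unitary from the QR decomposition of a complex Gaussian matrix.
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(X)

Z_A = np.exp(-beta * E_a).sum()
Z_B = np.exp(-beta * E_b).sum()

# TPM joint probabilities p(a, b) = |<b|U|a>|^2 e^{-beta E_a} / Z_A, stored as p[b, a].
p = np.abs(U) ** 2 * (np.exp(-beta * E_a) / Z_A)[np.newaxis, :]

# <e^{-beta W}> with W = E_b - E_a, summed over all TPM outcome pairs.
avg_exp_work = sum(
    p[b, a] * np.exp(-beta * (E_b[b] - E_a[a]))
    for a in range(d)
    for b in range(d)
)

assert np.isclose(avg_exp_work, Z_B / Z_A)
```

The check works for any unitary $U$ because $\sum_a |\langle b|U|a\rangle|^2 = 1$, which is exactly the step taken between the third and fourth lines of the derivation.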
The work increment observable
=============================
The vectors needed to construct the work are $$\begin{aligned}
\label{eq:6}
{|\phi_+\rangle} &= {|\psi_{\mathcal{C}}\rangle} + i\, \alpha\, {|\dot{\psi}_{\mathcal{C}}\rangle},
\nonumber\\
{|\phi_-\rangle} &= {|\psi_{\mathcal{C}}\rangle} - i\, \alpha\,
{|\dot{\psi}_{\mathcal{C}}\rangle},\nonumber\\
\alpha &= \sqrt{\frac{\langle \psi_{\mathcal{C}}| \psi_{\mathcal{C}}\rangle}{\langle \dot{\psi}_{\mathcal{C}}| \dot{\psi}_{\mathcal{C}}\rangle }}.\end{aligned}$$ They are orthogonal and can therefore be used to define a suitable measurement on the ancillas after their collision $$\begin{aligned}
\label{eq:8}
\langle\phi_+|\phi_-\rangle =\, & \langle \psi_{\mathcal{C}}| \psi_{\mathcal{C}}\rangle- \alpha^2
\langle\dot{\psi}_{\mathcal{C}}|
\dot{\psi}_{\mathcal{C}}\rangle\nonumber \\
&- i\, \alpha \langle
\dot{\psi}_{\mathcal{C}}| \psi_{\mathcal{C}}\rangle -
i\, \alpha \langle \psi_{\mathcal{C}}|
\dot{\psi}_{\mathcal{C}}\rangle \nonumber \\
=\, & \langle \psi_{\mathcal{C}}| \psi_{\mathcal{C}}\rangle- \alpha^2
\langle\dot{\psi}_{\mathcal{C}}|
\dot{\psi}_{\mathcal{C}}\rangle\nonumber \\
&- i\, \alpha \langle
\dot{\psi}_{\mathcal{C}}| \psi_{\mathcal{C}}\rangle +
i\, \alpha \langle \dot{\psi}_{\mathcal{C}}|
{\psi}_{\mathcal{C}}\rangle \nonumber \\
=\, & \langle \psi_{\mathcal{C}}| \psi_{\mathcal{C}}\rangle- \alpha^2
\langle\dot{\psi}_{\mathcal{C}}|
\dot{\psi}_{\mathcal{C}}\rangle\nonumber \\
=\, & 0, \end{aligned}$$ where we have used that $\langle \dot{\psi}_{\mathcal{C}}|
{\psi}_{\mathcal{C}}\rangle$ is purely imaginary.
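The orthogonality of $|\phi_\pm\rangle$ can also be verified numerically. In the sketch below (random illustrative vectors, not taken from the paper), $|\dot{\psi}_{\mathcal{C}}\rangle$ is constructed so that $\langle\psi_{\mathcal{C}}|\dot{\psi}_{\mathcal{C}}\rangle$ is purely imaginary, as norm conservation requires:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# Random normalized |psi>; |psi_dot> is built so that <psi|psi_dot>
# is purely imaginary (the real part of <psi|v> is projected out).
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
v = rng.normal(size=d) + 1j * rng.normal(size=d)
psi_dot = v - np.vdot(psi, v).real * psi

# alpha and |phi_+->, exactly as defined in Eq. (6).
alpha = np.sqrt(np.vdot(psi, psi).real / np.vdot(psi_dot, psi_dot).real)
phi_plus = psi + 1j * alpha * psi_dot
phi_minus = psi - 1j * alpha * psi_dot

# The two cancellations in the derivation: the cross terms (purely
# imaginary overlap) and the norm terms (choice of alpha).
assert abs(np.vdot(psi, psi_dot).real) < 1e-12
assert abs(np.vdot(phi_plus, phi_minus)) < 1e-12
```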
The observable reads $$\begin{aligned}
\label{eq:7}
\Omega &= \frac{1}{2 \alpha} ({|\phi_+\rangle\langle\phi_+|} -
{|\phi_-\rangle\langle\phi_-|}) + \zeta \, {\mathds{1}}\nonumber\\
&= i({|\dot{\psi}_{\mathcal{C}}\rangle\langle\psi_{\mathcal{C}}|} -
{|{\psi_{\mathcal{C}}}\rangle\langle\dot{\psi}_{\mathcal{C}}|}) + \zeta \, {\mathds{1}},\end{aligned}$$ with $$\begin{aligned}
\zeta = - 2 i\, {{\rm Im}}\{\langle \psi_{\mathcal{C}}|
\dot{\psi}_{\mathcal{C}}\rangle \}.\end{aligned}$$ The expectation value of $\Omega$ is equal to the work increment $$\begin{aligned}
\label{eq:9}
{\langle\psi_{\mathcal{C}}^\star|}\Omega{|\psi_{\mathcal{C}}^\star\rangle}
=& i\, \bigg( \langle \psi_{\mathcal{C}}{|\dot{\psi}_{\mathcal{C}}\rangle\langle\psi_{\mathcal{C}}|}
\psi_{\mathcal{C}}\rangle -\langle \psi_{\mathcal{C}}{|{\psi_{\mathcal{C}}}\rangle\langle\dot{\psi}_{\mathcal{C}}|}
\psi_{\mathcal{C}}\rangle \nonumber\\
& -dt \langle \psi_{\mathcal{C}}{|\dot{\psi}_{\mathcal{C}}\rangle\langle\psi_{\mathcal{C}}|}
{\psi}^\lightning_{\mathcal{C}}\rangle + dt \langle \psi_{\mathcal{C}}{|{\psi_{\mathcal{C}}}\rangle\langle\dot{\psi}_{\mathcal{C}}|}
{\psi}^\lightning_{\mathcal{C}}\rangle
\nonumber \\
& - dt \langle {\psi}^\lightning_{\mathcal{C}}{|\dot{\psi}_{\mathcal{C}}\rangle\langle\psi_{\mathcal{C}}|}
\psi_{\mathcal{C}}\rangle + dt \langle {\psi}^\lightning_{\mathcal{C}}{|{\psi_{\mathcal{C}}}\rangle\langle\dot{\psi}_{\mathcal{C}}|}
\psi_{\mathcal{C}}\rangle\bigg)
\nonumber \\
& + \zeta \nonumber\\
=& -2 dt\, {{\rm Im}}\{\langle\dot{\psi}_{\mathcal{C}}| {\psi}^\lightning_{\mathcal{C}}\rangle \}
+2 i\, {{\rm Im}}\{ \langle \psi_{\mathcal{C}}| \dot{\psi}_{\mathcal{C}}\rangle \} +\zeta\nonumber{}
\\
=&-2 dt\, {{\rm Im}}\{\langle\dot{\psi}_{\mathcal{C}}| {\psi}^\lightning_{\mathcal{C}}\rangle
\}\nonumber\\
=& \, dW. \end{aligned}$$
|
Individual differences in humans responding under a cocaine discrimination procedure: discriminators versus nondiscriminators.
Twenty-six cocaine-abusing volunteers were trained to discriminate cocaine (80 mg/70 kg, p.o.) from placebo. On the basis of a discrimination acquisition criterion (i.e., >80% drug-appropriate responding for 4 consecutive sessions within 8-10 sessions), 18 participants were classified as discriminators (Ds) and 8 as nondiscriminators (NDs). Relative to Ds, NDs reported a greater amount of cocaine use per time. During the training phase, NDs showed significantly lower ratings than Ds on a stimulant ratings scale, regardless of the training drug condition. During the test-of-acquisition phase, cocaine-induced increases in scores on ratings of drug strength, anxious-nervous and cocaine high, as well as on a euphoria ratings scale, were significantly greater in Ds than NDs, relative to placebo. These results suggest that drug use history, general arousal level, and drug sensitivity may be important variables influencing the acquisition of cocaine versus placebo discrimination in cocaine abusers. |
Q:
How to structure a flow graph with a blocking input source
What is a good way to modify Michael Voss' Feature Detection flow graph example when the source filter providing input images blocks while waiting for another image? This modification is required if one wants to implement this graph for a continuous real-time input source like a video camera. I know that if the source filter's function body blocks waiting to pull an image from an input device, then one of the TBB threads is wasted because it sits idle.
I appreciate any guidance.
A:
There is an async_node, released in TBB 4.3 Update 6 as a preview feature, whose purpose fits your needs exactly. Here is the link to the documentation: https://www.threadingbuildingblocks.org/docs/help/reference/appendices/community_preview_features/flow_graph/async_node_cls.htm
You can create your own thread that retrieves images from some source and pushes these messages into the graph using async_node::async_gateway. The advantage of this approach is that image retrieval is done outside of the TBB threads, which allows other TBB tasks to execute while your thread waits for the next image.
|
Q:
Cost attributes - Network Analyst for ArcGIS 10.2
Can anyone tell me whether Network Analyst can consider multiple cost attributes? I am looking for the most ecologic route (not the shortest), so I have assigned a pollution value to each road segment. My analysis has two layers, a road layer and a cycle layer; both have pollution values for each segment (cycle routes mostly have lower values). I selected this pollution attribute as the default cost attribute, but I would also like to consider another attribute in the best-route analysis.
Can this be done, or does Network Analyst always consider just one cost attribute?
A:
Unfortunately, the Network Analyst solver can minimize only one cost attribute when finding the best path for a route. However, you can take advantage of costs that are scaled by factors from other sources - in your case, the environmental pollution values. I can think of two options for exposing your extra cost field to the solver. Either way, the solver sees only a single number, so you cannot directly specify a trade-off like "I want my routes to be short, and that is twice as important as being ecologic"; any weighting has to be baked into the combined cost itself.
Since you already have one extra cost exposed for each road link (aka edge), and may get more, you can add a new cost attribute in the network dataset properties that combines the two - travel time (or distance, for instance) and the pollution value - with the output cost calculated by some formula. Then use this combined attribute in the Route network analysis layer options.
Alternatively, you can load the polylines of your streets (just a copy of the feature class with the pollution value) as a scaled cost line barrier in your network analysis layer (if you have your data as polygons, scaled cost polygon barriers are also possible). Then, when routing, traversing a road link costs more if a scaled cost barrier is located on it (line barrier) or if the link lies within a scaled cost barrier (polygon barrier). The Route solver scales the cost by the pollution value and then finds the least-cost path. You can have scaled barriers from different sources and distinguish them when loading simply by assigning a class field during Load Locations.
I usually approach this task with the second option, since it gives me more flexibility in scaling the cost compared to "blending" the distance/time cost with the additional custom cost(s).
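To make the "calculated by some formula" part of the first option concrete, here is a minimal sketch of a combined cost. The field names and the weight w are made up for illustration; they are not ArcGIS parameters, and you would tune the weight to your own trade-off:

```python
def combined_cost(travel_time_min, pollution, w=0.5):
    """Blend travel time with a pollution penalty; a larger w favors cleaner routes."""
    return travel_time_min + w * pollution

# An edge taking 10 minutes with a pollution score of 4 costs 12 units at w = 0.5.
assert combined_cost(10.0, 4.0, w=0.5) == 12.0
```

You would precompute this value into a field for every edge and point the new network cost attribute at that field.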
|
Q:
Clean URLs with mod_rewrite and keep root path for includes (js and css)
Setup: I own a regular Unix Webserver running Apache and PHP.
It is serving PHP scripts as expected.
Current situation: all my links look like typical PHP ones with parameters, e.g. website.net/index.php?show=article&do=new
What I want: a mod_rewrite rule that remaps my URLs, no matter what they look like or contain.
For example: website.net/article/new should be rewritten to index.php, and $_SERVER['REQUEST_URI'] should contain /article/new/ etc., but not limited to that and with no fixed pattern (any number of variables); all the other work (validating, including) is done with PHP.
My effort so far: I googled a lot and tried many examples, but only found some with fixed patterns, and some even broke the whole site.
The other problem: I noticed while experimenting with mod_rewrite that it broke the paths to my JS and CSS files.
I use relative paths; the files are located in a subfolder relative to index.php:
<script src="js/included_file.js"></script>
<link href="css/included_file.css" rel="stylesheet">
A:
You can use the -f RewriteCond to exclude existing files:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php?__path=$1 [QSA]
The first line skips requests for files that physically exist; the second catches everything else. If you want to keep existing directories reachable too, add a similar condition with -d.
Now if you call the URL:
website.net/path/to/file?query=string&foo=bar
You will get (in your $_GET):
__path = "/path/to/file"
query = "string"
foo = "bar"
Don't use $_REQUEST. It mixes $_GET and $_POST (and, depending on configuration, $_COOKIE), which can lead to very strange bugs.
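Regarding the broken JS/CSS includes from the question: the browser resolves a relative path like js/included_file.js against the visible, rewritten URL (e.g. /article/new/), not against the location of index.php. A common fix, assuming the js and css folders sit in the document root, is to make the include paths root-relative:

```html
<script src="/js/included_file.js"></script>
<link href="/css/included_file.css" rel="stylesheet">
```

Alternatively, a base element such as <base href="/"> in the page head keeps the existing relative paths working.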
|
[Herpes simplex causing a therapy-resistant panaritium (author's transl)].
In an eight-month-old child an inflammation developed in the region of the nail bed of the right middle finger. It showed no improvement after immobilisation, extraction of the nail and antibiotics. After ten months unsuccessful treatment electron microscopy of the inflammatory tissue showed numerous Herpes simplex viruses. These increased rapidly in tissue culture and could be neutralised with antibodies against Herpes simplex virus. |
"""Tests the Fermi_integral function in the mathematics module."""
# This file contains experimental usage of unicode characters.
import numpy as np
import pytest
from plasmapy.formulary.mathematics import Fermi_integral
class Test_Fermi_integral:
    @classmethod
    def setup_class(cls):
        """Initialize parameters for tests."""
        cls.arg1 = 3.889780
        cls.True1 = 6.272518847136373 - 8.673617379884035e-19j
        cls.argFail1 = 3.889781
        cls.order1 = 0.5
        cls.args = np.array([0.5, 1, 2])
        cls.Trues = np.array(
            [
                (1.1173314873128224 - 0j),
                (1.5756407761513003 - 0j),
                (2.8237212774015843 - 2.6020852139652106e-18j),
            ]
        )

    def test_known1(self):
        """Test Fermi_integral for an expected value."""
        methodVal = Fermi_integral(self.arg1, self.order1)
        testTrue = np.isclose(methodVal, self.True1, rtol=1e-16, atol=0.0)
        errStr = f"Fermi integral value should be {self.True1} and not {methodVal}."
        assert testTrue, errStr

    def test_fail1(self):
        """
        Test that test_known1() would fail if the comparison value were
        shifted by a quantity close to numerical error.
        """
        fail1 = self.True1 + 1e-15
        methodVal = Fermi_integral(self.arg1, self.order1)
        testTrue = not np.isclose(methodVal, fail1, rtol=1e-16, atol=0.0)
        errStr = (
            f"Fermi integral value test gives {methodVal} and should "
            f"not be equal to {fail1}."
        )
        assert testTrue, errStr

    def test_array(self):
        """Test Fermi_integral where the argument is an array of inputs."""
        methodVals = Fermi_integral(self.args, self.order1)
        testTrue = np.allclose(methodVals, self.Trues, rtol=1e-16, atol=0.0)
        errStr = f"Fermi integral value should be {self.Trues} and not {methodVals}."
        assert testTrue, errStr

    def test_invalid_type(self):
        """
        Test whether `TypeError` is raised when an invalid argument
        type is passed to `~plasmapy.mathematics.Fermi_integral`.
        """
        with pytest.raises(TypeError):
            Fermi_integral([1, 2, 3], self.order1)
|
Reexamining unconscious response priming: A liminal-prime paradigm.
Research on the limits of unconscious processing typically relies on the subliminal-prime paradigm. However, this paradigm is limited in the issues it can address. Here, we examined the implications of using the liminal-prime paradigm, which allows comparing unconscious and conscious priming with constant stimulation. We adapted an iconic demonstration of unconscious response priming to the liminal-prime paradigm. On the one hand, temporal attention allocated to the prime and its relevance to the task increased the magnitude of response priming. On the other hand, the longer RTs associated with the dual task inherent to the paradigm resulted in response priming being underestimated, because unconscious priming effects were shorter-lived than conscious-priming effects. Nevertheless, when the impact of long RTs was alleviated by considering the fastest trials or by imposing a response deadline, conscious response priming remained considerably larger than unconscious response priming. These findings suggest that conscious perception strongly modulates response priming. |
LexJet Direct is proud to offer a wide variety of printable media and laminates from industry names you can trust.
Using original HP Dye and Pigmented inks in your HP DesignJet printer helps maximise the lifespan of the printer and the printheads. We offer inks for everything from the older models to the new DesignJet Z9+. Shop Today!
HP Vital and HP Optimal Overlaminates are available in either a gloss or matte finish. From indoor retail signage to outdoor promotional graphics, you can achieve a high-quality look with this versatile, easy-to-handle, printable laminate.
It's easy to get crisp, clear results. Designed together with your HP DesignJet printer/MFP as an optimized printing system, Original HP inks can help reduce downtime, improve productivity. Choose from 40- to 300-ml cartridge sizes to fit your needs.
Print the vision. Then print it again confident you'll get the same color, quality and long-lasting prints. It's no problem with HP printing supplies, Original HP pigment inks, and HP media. Avoid the waste and rework and save time and money. |
Impression cytology of the healthy equine ocular surface: Inter-observer agreement, filter preservation over time and comparison with the cytobrush technique.
The cytobrush technique is commonly used to sample the equine ocular surface. Impression cytology (IC) is an innovative noninvasive method, which allows for the collection of superficial layers of ocular epithelium. The aims of this study were to compare the cytobrush and IC techniques on healthy equine ocular surfaces, to assess the agreement between observers with different levels of expertise, and to test the preservability of filters over time. Twenty-four horses were sampled within 10 minutes of slaughter using IC on the left eye and the cytobrush technique on the right eye. May-Grünwald-Giemsa stained specimens were evaluated by two observers with different levels of expertise. Morphologic features were evaluated using a 4-grade system. The IC samples were re-evaluated after 6 months to examine filter preservation. In IC samples, corneal and conjunctival cells were clearly separated. Goblet cells were found in five and 17 filters by observers 1 and 2, respectively. Using the cytobrush technique, corneal and conjunctival cells were present but mixed. Goblet cell cellularity, preservation, and enumeration were higher with the IC technique compared with the cytobrush technique (P = 0.013; P = 0.004; P = 0.031, respectively). The inter-observer agreement for the IC technique was moderate to fair. In 7/24 IC samples re-evaluated after 6 months, cellular morphology was impaired, and the overall score was significantly lower. IC is an innovative noninvasive method, which allows for sample collection with higher cellularity and preservation. Moreover, the identification of goblet cells is easier. For these reasons, IC could be interesting and useful as a complementary diagnostic cytologic method in clinical practice. |
Search filters can find some but not all knowledge translation articles in MEDLINE: an analytic survey.
Advances from health research are not well applied, giving rise to over- and underuse of resources and inferior care. Knowledge translation (KT), the actions and processes of getting research findings used in practice, can improve research application. The KT literature is difficult to find because of nonstandardized terminology, the rapid evolution of the field, and its spread across several domains. We created multiple search filters to retrieve KT articles from MEDLINE. Analytic survey using articles from 12 journals tagged as having KT content and also as describing a KT application or containing a KT theory. Of 2,594 articles, 579 were KT articles, of which 201 were about KT applications and 152 about KT theory. Search filter sensitivity (retrieval efficiency) maximized at 83%-94%, with specificity (no retrieval of irrelevant material) of approximately 50%. Filter performance was enhanced with multiple terms, but these filters often had reduced specificity. Performance was higher for KT application and KT theory articles. These filters can select KT material, although many irrelevant articles will also be retrieved. KT search filters were developed and tested, with good sensitivity but suboptimal specificity. Further research must improve their performance. |
656 S.E.2d 861 (2008)
JOHNSON
v.
The STATE.
No. A07A2313.
Court of Appeals of Georgia.
January 16, 2008.
*862 Diane M. McLeod, Savannah, for appellant.
Spencer Lawton Jr., District Attorney, Isabel M. Pauley, Assistant District Attorney, for appellee.
BERNES, Judge.
A Chatham County jury convicted Edward Devoun Johnson of sale of cocaine and possession of cocaine with intent to distribute. On appeal, Johnson challenges the sufficiency of the evidence.[1] For the reasons discussed below, we affirm.
On appeal from a criminal conviction, this court views the evidence in the light most favorable to the verdict, and the defendant no longer enjoys a presumption of innocence. This court neither weighs the evidence nor judges the credibility of witnesses, but only determines whether the evidence presented at trial was sufficient for a rational trier of fact to find the defendant guilty of the crime beyond a reasonable doubt. Jackson v. Virginia, 443 U.S. 307, 99 S.Ct. 2781, 61 L.Ed.2d 560 (1979).
(Citation omitted.) Riggins v. State, 281 Ga. App. 266, 268, 635 S.E.2d 867 (2006).
So viewed, the evidence adduced at trial shows that in the summer of 2004, an agent with the Chatham-Savannah Counter Narcotics Team ("CNT") was contacted by a confidential informant whom the agent had known for many years. The informant, an admitted drug addict, advised the agent that an individual named Edward Johnson was selling cocaine at 243 Ferrell Street in Chatham County. According to the informant, she had purchased cocaine from Johnson over 150 times during a 13-year period. The informant noted that in these prior buys, Johnson sometimes stored his cocaine in small prescription or medicine bottles, which he would flush down the toilet if approached by the police. Based on this information, the agent commenced an investigation and obtained a photograph of the defendant, which the agent showed to the informant. The informant identified the person in the photograph as Edward Johnson and confirmed that he was the individual who sold cocaine.
The agent subsequently set up a "controlled buy." On July 21, 2004, the agent took the informant to a designated location near the Ferrell Street premises. The agent searched the informant, confirmed that she had no items on her person, and gave her $40 to purchase cocaine. The agent and several other CNT agents then positioned themselves in the area so that constant visual contact could be maintained with the informant as she entered and exited the premises.
The informant proceeded directly to the premises without interference. Upon entering the premises, the informant located Johnson and asked to purchase cocaine. In return for the $40 she handed to him, Johnson retrieved two white-colored rocks from "a medicine bottle, like an aspirin, vitamin bottle type thing" and gave them to her. The entire transaction lasted approximately five minutes.
After the informant exited the premises, she proceeded directly to the pick up point and handed the agent the two rocks. The agent again searched the informant to ensure that she had no money or additional drugs on her person. The agent also performed a *863 field test on the two rocks, which yielded a positive result for cocaine. Forensic testing at the state crime lab later confirmed that the substance was cocaine.
Thereafter, the agent sought and obtained a search warrant for the Ferrell Street premises. On July 22, 2004, eight agents with the CNT squad executed the warrant. During the search, four individuals, not including Johnson, were found on the premises and detained without incident.
As the search unfolded, two agents spotted Johnson and both yelled, "Police!" One of the two agents also yelled, "Search Warrant! Get on the ground!" In response, Johnson fled into a doorway and toward a bathroom. One of the agents pursued Johnson and tackled him, causing Johnson to fall into the bathtub as he appeared to be trying to dispose of something. When tackled, Johnson dropped an aspirin bottle out of his right hand, and it fell beside the toilet. After the agent secured Johnson and picked him up out of the bathtub, the agent noticed a prescription bottle inside the bathtub directly underneath where Johnson had fallen, near the drain. Although the prescription bottle bore the name "Thomas Jones," no one else was in the bathroom at the time of the incident other than the agent and Johnson.
Both bottles retrieved from the bathroom contained white-colored rocks. The rocks were field tested for the presence of cocaine, with positive results. The state crime lab later tested the rocks contained in the prescription bottle, which were of a larger quantity than those found in the aspirin bottle, and confirmed that the substance was cocaine.
Johnson was arrested and indicted for sale of cocaine and possession of cocaine with intent to distribute. Johnson was accused of committing the offense of sale of cocaine based on the July 21, 2004 controlled buy. He was accused of possession of cocaine with intent to distribute based on the cocaine seized in the bathroom during the July 22, 2004 search. Following a jury trial, Johnson was convicted of the two offenses.
Johnson contends that there was insufficient evidence to convict him of sale of cocaine. We disagree. The agent with the CNT squad and informant testified at trial about the July 21, 2004 drug sale transaction as described above. Furthermore, the agent from the CNT squad testified that the substance bought from Johnson field tested positive for cocaine, and the state's forensic expert testified that testing at the state crime lab confirmed the same. This evidence was sufficient to authorize a rational trier of fact to find Johnson guilty beyond a reasonable doubt of the charged offense. Jackson, 443 U.S. 307, 99 S.Ct. 2781, 61 L.Ed.2d 560. See OCGA § 16-13-30(b); Brown v. State, 274 Ga.App. 302, 302-303(1), 617 S.E.2d 227 (2005). Questions concerning the weight of the evidence and credibility of the witnesses were for the jury to decide. Riggins, 281 Ga.App. at 268, 635 S.E.2d 867.
Johnson also argues that there was insufficient evidence to convict him of possession of cocaine with intent to distribute. Again, we disagree. Possession was shown by the pursuing agent's testimony that he saw Johnson drop the aspirin bottle out of his hand. It likewise was shown by the agent's testimony that he found the prescription bottle inside the bathtub directly underneath where Johnson had fallen after being tackled. See Jackson v. State, 281 Ga.App. 83, 85(1), 635 S.E.2d 372 (2006) (possession can be inferred when drugs are found in the immediate presence of the defendant). In turn, the field test and crime lab results proved that the substance possessed by Johnson was cocaine.[2] Furthermore, the informant testified to the fact that she had purchased cocaine from Johnson on many occasions, and a CNT agent testified that, based on her experience, the quantity and unique packaging of the cocaine in the medicine bottles were inconsistent with mere personal *864 consumption. Based on the testimony of the informant and agent, the jury could infer that Johnson had the intent to distribute the cocaine. See Evans v. State, 288 Ga.App. 103, 108(3)(a), 653 S.E.2d 520 (2007) (testimony that defendant sold drugs relevant to establish that defendant acted with intent to distribute); Helton v. State, 271 Ga.App. 272, 275(b), 609 S.E.2d 200 (2005) (intent to distribute can be inferred from quantity of drugs seized and manner of packaging). Finally, Johnson's consciousness of guilt could be inferred from his flight from the CNT agents during the search. See Sexton v. State, 268 Ga.App. 736, 737(1)(b), 603 S.E.2d 66 (2004).
Johnson emphasizes that the prescription bottle found in the bathtub bore the name "Thomas Jones," and that one of the other individuals detained during the search went by that appellation. But, it was the task of the jury, not this court, to weigh this evidence against the other evidence presented by the state as discussed above, and then to decide whether Johnson was guilty of the charged offense. Riggins, 281 Ga.App. at 268, 635 S.E.2d 867. In light of the combined evidence, the jury clearly was entitled to find Johnson guilty beyond a reasonable doubt of possession of cocaine with intent to distribute. Jackson, 443 U.S. 307, 99 S.Ct. 2781, 61 L.Ed.2d 560. See OCGA § 16-13-30(b).
Judgment affirmed.
BLACKBURN, P.J., and RUFFIN, J., concur.
NOTES
[1] Although Johnson was also convicted of obstruction, he has not presented any argument or citation of authority challenging the sufficiency of the evidence on that count. He therefore has abandoned any challenge to his obstruction conviction on sufficiency grounds. See Court of Appeals Rule 25(c)(2).
[2] Johnson points out that although the substance found in the aspirin bottle field tested positive for cocaine, it was not tested at the state crime lab. However, field test results alone are sufficient to prove the presence of a controlled substance; state crime lab results are not required. Collins v. State, 278 Ga.App. 103, 104(1)(a), 628 S.E.2d 148 (2006). In any event, Johnson's conviction could be sustained based solely on the cocaine in the prescription bottle found in the bathtub, which was subject to a field test as well as testing at the state crime lab.
|
Roofing Companies in Blairstown NJ 07825
Locally Owned & Operated
HOME PROS ROOFING
(844) 359-2189
(Blairstown NJ 07825) –
Getting a new roof is a very important improvement to a home. The new roof is an investment and will increase the value of your home. What should you expect when you search for a roofing contractor that will be on your roof ripping old shingles off and applying new ones? Not all contractors have the same ethics, nor do they have the same policies. Be careful when you choose a roofing company; you will need one that guarantees a nice clean job and gives you a labor warranty. Three key items to look for while you search for a roofing contractor are: the roofer's guarantee on garbage and cleaning the area, the roofer's warranty on the new roof, and the roofer's experience and referrals.
First, let’s talk about the roofer’s guarantee. Roofing contractors might say they clean up after themselves, but that could just mean they clean up their pop can and throw away their lunch garbage. You want to make sure you get it in writing that they clean up the nails, excess shingle garbage, and other debris that may have fallen off of your roof while they wreak havoc on it. The last thing you want is for your baby who just started walking to have a small nail prick through their foot. They probably will regress and never want to walk on grass again.
Second, the roofer’s warranty on the new roof is an extremely important topic when it comes to choosing a roofing contractor. When you look for a warranty, you want to make sure the company backs it up. You want to know what you are receiving if the service goes bad or something happens. Make sure that you choose a roofing company that offers at least a labor warranty on the new roof. That means if the new roof falls apart before a certain time, the roofing contractors will come back out and do it over for free. Sometimes they will make you pay for material if the warranty covers labor only. Sometimes they will cover it all. It all depends on the roofing contractor.
Last, the roofer’s experience level and their list of referrals can be extremely important to any smart home owner looking for the best roofing experience they can get. You do not want a roofing contractor that says they are super great but has no referrals. Referrals are great ammunition when it comes to roofing contractors and their competition. Experience counts too, whether they say their workers have 30+ years’ experience or the company is over 30 years old. Either would be a great choice. If they have been in business for that long, you know they do a great job.
In conclusion, the three key items to look for while you search for the perfect roofing contractor are the roofing contractor’s guarantee on cleaning debris afterwards, the warranty that the company backs up, and the experience the company has. These three items together make the holy trinity of the best roofing improvement experience a home owner can have. |
Thermophilic and thermotolerant fungi in poultry droppings in Nigeria.
Ten species of fungi were obtained from poultry droppings in Nigeria. Six of these are true thermophiles while the other four are thermotolerant. Aspergillus fumigatus Fresenius, Mucor pusillus Lindt and Thermoascus aurantiacus Stolk are known human pathogens. Except for M. pusillus, all the thermotolerant species had a higher occurrence at 45 degrees C while the thermophilic varieties were readily obtained at 50 degrees C. |
The notion that you must consume two litres of fluid water a day is a myth
“There is no evidence to support this,” Kidney Health Australia experts say
“The best way of knowing how much to drink is to drink enough to satisfy your thirst.”
“If you are passing a lot of urine, you are probably drinking too much.”
... there is a little thing called “water intoxication”. It means you die.
The balance of the body’s electrolytes can be so disturbed by excessive water consumption that the heart can be brutalised, and the brain can in fact uncontrollably swell, thus attempting to crack open the skull, which it can’t, and will therefore terminate. (It happened last year in the US when a woman was forced to “hold her wee” for a radio competition. She died crying in her car.)
... additional fluid intake will increase urine flow, but found no “significant change in stool output”.
Skin experts advise that hot showers, one of winter’s sweetest luxuries, pervert the skin and suck out its moisture, and should thus be militantly short.
Sunday, 17 August 2008
"An original idea of the Japanese artist Hiroshi Sasagawa in order to decorate your libraries. Animal Index is a concept of separators of books in the shape of silhouettes of animals: giraffe, stag, pig." - fubiz
Tuesday, 5 August 2008
Saturday, 2 August 2008
"This record is about reflecting experiences more than just love. And what is more important than love in this world? Love is what you give that makes you who you are. Love is a gift. And that's all we're writing about and how you get it shoved down your throat and then it cracks your heart open and then you just diarrhea it out." - tegan quin |
There are needs in many contexts for a lifting apparatus which is arranged so that, while carrying a load, it may be moved from a first suspension device to a second suspension device. There are also often needs in this art to be able to move a load from one level to another.
Examples of practical applications where this need exists are where goods are to be moved through an intake or discharge opening in a wall, when the opening is located above ground level and the wall above the opening separates an outer suspension device from a suspension on device located inside the opening. It is obvious that a corresponding need exists when a load is to be moved from one loading apparatus to another, for example from a conveyor path suspended in a ceiling to a mobile hoist or crane, transfer of a patient sitting in a harness from one conveyor path to another conveyor path, to a bed, to a wheelchair, etc. |
Ambulance Waits Get Longer Under NDP - Progressive Conservative Party of Manitoba
The time ambulances spend at Winnipeg hospitals just keeps growing: Driedger
January 29, 2015
Ambulances continue to spend too much time sitting with patients and not enough time on the road responding to emergency calls. New numbers show ambulance offload times have continued to grow and hit record highs in 2014.
“The NDP has been promising improvement on this issue and has failed to deliver. Not only is there no improvement, but it is getting progressively worse. Four years and three health ministers later and we are nowhere close to a solution,” said Opposition Health Critic Myrna Driedger.
The average wait time for paramedics to pass over control of a patient in 2014 hit 78 minutes. This is a record high, up from 75 minutes in 2013 and 74 in 2012. It was 66 minutes in 2011.
January 2014 also saw a record-high average of over 83 minutes for ambulances to get back into service.
Latest News
Our PC government responds to the federal government on carbon pricing
Our PC government has submitted its response to the federal government on its proposed backstop and benchmarks for carbon pricing, Premier Brian Pallister announced today.
The response highlights the significant investments Manitobans have made in renewable energy and also gives notice of the province’s decision to seek a legal opinion on its right to develop its own ‘made-in-Manitoba’ plan.
During Week 17 of every NFL season, the Sunday Night Football matchup isn’t determined until sometime during Week 16 so that the NFL and NBC can get a game with the most playoff implications in the primetime spot (as opposed to a meaningless turd-burger, like say, Redskins-Giants in Week 17 this year).
This year, the possibilities for the Week 17 SNF game aren’t great. There are only four games pitting teams at .500 or better against one another (and one of those could have much less meaning after Week 16, considering the 7-7 Packers are clinging to their playoff life and the 8-6 Lions are currently on the outside looking in).
Eagles-Cowboys would normally be a mortal lock for a Sunday night slot…but the Eagles can wrap up the NFC’s top seed in Week 16 with a win over the Raiders, and the Cowboys are also currently out of the NFC playoff picture and need plenty of results to go their way over the next two weeks to sneak in.
That brings the possibilities down to a pair of games. One of them is a pretty sexy looking Panthers-Falcons matchup. The NFC South title could be on the line in Week 17 if the Falcons beat the Bucs on Monday Night Football this evening, and if they upset the Saints in Week 16. But if the Falcons lose one of those two games prior to the Panthers matchup, the game will lose some of its luster.
And then, there’s the fourth matchup that’s probably attractive to the league, and it’s a matchup that I don’t think NBC had on their radar at the beginning of the season. The Jacksonville Jaguars will travel to Nashville to take on the Tennessee Titans, and both teams could have plenty to play for. The Jags have already clinched a playoff spot, and currently hold the #3 seed in the AFC. The Titans are currently the #5 seed in the AFC, and have already beaten the Jaguars earlier this season. So it’s not out of the realm of possibility that the Titans could be playing for the division crown in Week 17 – they just need to beat the Rams (which is a daunting matchup for anyone in the NFL right now) and hope the 49ers (who beat the Titans this past Sunday) upset the Jaguars.
Jacksonville could also be fighting for a first-round bye in the AFC, which is a thought that seemed inconceivable months ago. Back in Week 5, the Jaguars mauled the Steelers 30-9, meaning they have the tiebreaker over Pittsburgh. If the Steelers somehow lose to either the Texans or the Browns in the season’s final two weeks, and the Jags beat both the 49ers and Titans, Jacksonville will clinch a bye.
Of course, it’s not a lock that Jaguars-Titans will be the pick. The Bills and Ravens could win in Week 16 and the Titans could lose, bumping them out of the AFC playoff picture. The potential NFC South doomsday scenario could come together, making that Panthers-Falcons matchup so much more tantalizing. Hell, the Packers could upset the Vikings next Saturday and other results could keep their playoff hopes alive until the season’s final week.
No matter what happens, expect to see the Jaguars and Titans on Sunday night next season, given their strong play this year. Jacksonville has only appeared on SNF on NBC twice, both games against the Steelers in 2007 and 2008. The Titans have appeared three times, once in 2007 (against the Colts) and twice in 2009 (against the Colts and Steelers).
It’s incredible that two teams that don’t get featured on a national stage very often are in the running for a Week 17 Sunday Night Football flex. Now, just watch the NFL take the easy way out and go with Eagles-Cowboys. |
Trends & Talkers: Bus stop signs causing a buzz
One thousand dollars for running a stop sign? It could happen, if that stop sign is attached to a school bus.
Wednesday, April 4th 2018, 5:30 PM HST
One thousand dollars for running a stop sign? It could happen, if that stop sign is attached to a school bus.
It's a proposal getting a lot of buzz online. When a school bus is stopped with its sign out and its lights flashing, drivers in both directions need to stop.
If you don't, it will cost you. State lawmakers are considering a bill that would double the fine from $500 to $1,000. It only needs one more vote in the Senate to pass.
What do the viewers driving on those streets have to say?
Arnold Kanai: $1,000??? Want to make an impact? Add jail time and a permanent record then enforce it.
Flynn Tess Hildebrand: About time. I've been saying this since I first stated as a school bus driver from back in the early '80's. $1,000!
Golden Daoang And $500 fine for honking at the cars that do stop for the buses
Plenty of people were talking about Honolulu Mayor Kirk Caldwell's State of the City Address. We carried it live on our Facebook page, and people are still sounding off on his continued commitment to rail, and push for affordable housing.
Marc Jaslow · Rail what a waste of my money!!!!!!!!!!
Eleanor Crisostomo · Rail, affordable housing & LIES!!!!
Ricky Keona Kauanui · Free housing for Hawaiians is affordable
Also trending today- #NationalHugANewsPersonDay, and Dan Rather said it quite well on Twitter: |
RRSP vs. TFSA: Which is right for me?
Which is better—a TFSA, or an RRSP? That’s kind of like asking, “Which is better—a t-shirt, or a sweater?”
Fundamentally, they do the same thing—t-shirts and sweaters both keep you covered, Tax-Free Savings Accounts (TFSA) and Registered Retirement Savings Plans (RRSP) both let you save money for the future. But the way they do it is different, and which one you choose depends on your needs.
That being said, sometimes it’s good to wear a t-shirt, and throw on a sweater if it gets chilly. In the same way, TFSAs and RRSPs can work together, depending on circumstances.
But choosing the right one can feel like a guessing game. Don’t worry—it isn’t. In this article, we’ll look at how TFSAs and RRSPs work, how they’re different, and how to pick the best investing account for your goals.
Which is better: RRSP or TFSA? The Journey of $1000
Before we get into the nitty gritty, let’s look at an example. With so much debate online about which account is truly best for retirement savings, we decided to do some calculations of our own.
In the infographic below, we look at how $1000 could grow over time when invested in a TFSA and an RRSP.
When it comes to long-term retirement savings, you can see how the RRSP comes out ahead. In this example, the money is invested when you’re 25, and withdrawn at 71. The income taxes you save upfront can grow into a big return over time when invested.
Keep in mind that with an RRSP, the funds are locked up until retirement and withdrawals are taxed as income, so the money in your account doesn’t all end up in your pocket. Meanwhile, with the TFSA, the full account balance is yours to spend as you wish, when you wish.
Now, let’s dive deeper into how each account works, and find out which account is more beneficial for any savings goals beyond retirement.
The differences between a TFSA and an RRSP
You can use either a TFSA or an RRSP to store assets—cash, as well as investments like stocks, bonds, mutual funds, and other financial products.
But there are two major differences between them: how much you can contribute per year, and how your assets are taxed.
How a TFSA works
This year, the annual amount that you can contribute to your TFSA is $6,000. Unused contribution room rolls over, so if you haven’t maxed out your contribution room in previous years, chances are you’ll have additional space.
The money you put in a TFSA has already been taxed, so there’s no tax break at the time you contribute. But here’s where the “tax free” part comes in: when you withdraw your assets, none of the growth on your investments is taxed in any way.
The fact that you pay tax now (before you contribute) and not later (when you withdraw) is important—it’s what makes a TFSA and an RRSP different, and affects the saving strategy for each.
How an RRSP works
We’ve got a great article diving into everything you might want to know about how an RRSP works, but here’s the summary:
You can contribute 18% of your 2019 gross income or $26,500 (whichever is less) to your RRSP, plus any amount that rolled over from previous years. Find out more on calculating the right amount here.
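For concreteness, the rule above can be sketched in a few lines of Python. The function name and the carry-forward parameter are illustrative, not part of any official calculator:

```python
def rrsp_contribution_limit(gross_income, annual_cap=26_500, rate=0.18, carry_forward=0.0):
    """Sketch of the RRSP room rule described above: the lesser of 18% of
    gross income and the annual dollar cap ($26,500 for 2019), plus any
    unused room carried forward from previous years."""
    return min(rate * gross_income, annual_cap) + carry_forward

# Someone earning $80,000 with no carried-forward room: 18% of income, under the cap.
room = rrsp_contribution_limit(80_000)
```

High earners hit the dollar cap instead: at a $200,000 income, the limit is the $26,500 cap rather than 18% of income.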
When you put assets in your RRSP, you don’t pay income tax on it. Typically, that means you’ll see a bigger tax return in the spring. Sounds good, right? Here’s the catch: you’ll pay taxes on those assets when you eventually withdraw the money in retirement.
RRSP becomes a RRIF
Because the government wants to ensure that you use the money you save for retirement income specifically, eventually, you’ll be forced to convert your RRSP into a registered retirement income fund (RRIF). The main difference between the two account types is that while one is designed to help you save, the other forces you to make withdrawals.
Once you convert the account to a RRIF, you can no longer make contributions—just withdrawals. You are required to convert your RRSP to a RRIF the year you turn 71.
Keep in mind, you don’t have to convert your RRSP to a RRIF to start taking retirement income.
The fact that you get taxed later (when you withdraw) rather than now (before you contribute) is what sets an RRSP apart from a TFSA.
Is it better to invest in a TFSA or an RRSP?
T-shirts vs. sweaters—either can be best, depending on your situation. TFSAs and RRSPs are the same. They’re both excellent options for long-term investing, and both offer tax advantages, but determining the best one depends on what you’re investing for.
Like the name suggests, the RRSP is typically going to be the best option if you’re investing specifically for retirement. That’s especially true if you’re in your peak earning years.
That said, if you’re a living, breathing human being, chances are you’ll need to spend money before retirement — and sometimes that will require shelling out a big chunk of change. A TFSA gives you the benefit of flexibility. The money is always available to you, and you don’t need to consider taxes when you make withdrawals.
Most people have multiple savings goals, and those goals can change over time, so most people can likely benefit from contributing to both a TFSA and an RRSP.
Scenarios
Here are some goals you might be saving for, and the best investing account to choose for each:
Goal: “I want to start an emergency fund.”
Account: TFSA. Money in a TFSA is available to you any time. Best part? When invested, it can keep growing while you keep saving. When you withdraw the money, it won’t be taxed—so the amount that appears in your account is the exact amount that ends up in your pocket.

Goal: “I’m making a big purchase next year.”
Account: TFSA. A TFSA is great for any and all short- to medium-term savings.

Goal: “I’m saving for my first home.”
Account: RRSP or both. Thanks to the Home Buyers’ Plan (HBP), as a first-time home buyer you can withdraw up to $35,000 from your RRSP without paying taxes on the funds. You’ll have 15 years to gradually pay that money back. If you have a partner, they can do the same, effectively doubling the amount. That’s a tax advantage worth taking! But here’s the catch: if you live in one of Canada’s major cities, $35,000 or even $70,000 for a couple may not get you far towards a down payment. Investing through a TFSA will allow you to make up the difference.

Goal: “I’m saving for a bigger home.”
Account: TFSA. You don’t qualify for the Home Buyers’ Plan, so you definitely don’t want to dip into your RRSP and face steep tax consequences. No worries! Investing tax-free through a TFSA is a great way to save towards your next home.

Goal: “I want a comfortable income in retirement.”
Account: RRSP. An RRSP is the best way to ensure you have an income in retirement that will cover the cost of living, and maybe even a little extra! Remember that the money will be taxed as annual income.

Goal: “I want to live larger in retirement.”
Account: TFSA. Your TFSA can be a great account for tax-free spending in retirement, which is particularly handy in years where you want to make a big purchase, or if you plan to spend more in retirement than you do today.
The TFSA vs. RRSP calculator
To get the most out of your investments, you’ll want to calculate where you’ll see the greatest tax benefit. That will depend on what you earn right now, what you’re saving for, and when you plan to use the money.
To help you see how your money could grow in either account, we’ve created a free, easy-to-use TFSA vs. RRSP calculator.
—
Nobody can tell the future, but we can help you plan for yours. Sign up for CI Direct Investing now to talk to a financial advisor and start investing for any goal.
Calculations
¹ The RRSP and TFSA comparison illustration depicts the growth of an initial investment at a 6% annual rate of return with withdrawals only in retirement. It assumes a 25% income tax rate for both pre- and post-retirement.
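For readers who want to check the mechanics behind footnote 1, here is a simplified sketch in Python. The function names are ours, and this is not the infographic's exact model: it assumes a single contribution, constant 6% growth over 46 years (invested at 25, withdrawn at 71), a 25% tax rate, and no reinvestment of the upfront refund. Under that simplification the taxed RRSP withdrawal is smaller than the TFSA balance for the same nominal contribution; the RRSP's advantage in the infographic comes from the upfront income-tax savings being invested as well.

```python
def tfsa_value(contribution, rate=0.06, years=46):
    # TFSA: contributed with after-tax dollars; growth and withdrawals are
    # tax-free, so the final balance is entirely yours.
    return contribution * (1 + rate) ** years

def rrsp_after_tax_value(contribution, rate=0.06, years=46, retirement_tax=0.25):
    # RRSP: contributed pre-tax; the whole balance is taxed as income on withdrawal.
    return contribution * (1 + rate) ** years * (1 - retirement_tax)

# $1000 under the footnote's assumptions:
tfsa = tfsa_value(1000)
rrsp = rrsp_after_tax_value(1000)
```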
Portfolio performance is not guaranteed. The value of your investment can go down, up and change frequently. Past performance is not indicative of future returns. |
Q:
How to display properly a web page on all mobile browsers
I've shared this screenshot to show you how it's displayed on all android browsers:
I used <meta name="viewport" content="width=device-width, initial-scale=0.3"> but it seems to work only on the Android default browser.
I have put together a link for testing
The correct view should look like the Android default browser's, with the width adjusted to 100% in either vertical or horizontal mode.
A:
One way you can fix this is to set the viewport to the width of your document. Each browser and device has a different default pixel width for the displayable area. Change the viewport to the following:
<!-- When viewing your css and live widths I got 944 wrapper width,
update if incorrect -->
<meta name="viewport" content="width=944" />
The only other way to get it to show the same is to use a mobile doctype instead of HTML5. But this can break some functionality.
<!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.2//EN" "http://www.openmobilealliance.org/tech/DTD/xhtml-mobile12.dtd">
|
Background
==========
Many researchers have embraced microarray technology. Due to its extensive usage, in recent years there has been an explosion in publicly available datasets. Examples of such repositories include Gene Expression Omnibus (GEO, <http://www.ncbi.nlm.nih.gov/geo/>), ArrayExpress (<http://www.ebi.ac.uk/microarray-as/ae/>) and the Stanford Microarray Database (SMD, <http://genome-www5.stanford.edu/>), as well as researchers\' and institutions\' websites. The use of these datasets is far from exhausted; used wisely, they may yield a wealth of information. Demand has increased to effectively utilise these datasets in current research as additional data for analysis and verification.
Meta-analysis refers to an integrative data analysis method that traditionally is defined as a synthesis or at times a review of results from datasets that are independent but related \[[@B1]\]. Meta-analysis has wide-ranging benefits. Power can be added to an analysis through the increase in the sample size of the study. This aids the ability of the analysis to find effects that exist and is termed \'integration-driven discovery\' \[[@B2]\]. Meta-analysis can also be important when studies have conflicting conclusions, as it may estimate an average effect or highlight an important subtle variation \[[@B1],[@B3]\].
There are a number of issues associated with applying meta-analysis in gene expression studies. These include problems common to traditional meta-analysis, such as overcoming the different aims, designs and populations of interest. There are also concerns specific to gene expression data, including challenges with probes and probe sets, differing platforms being compared and laboratory effects. As different microarray platforms contain probes pertaining to different genes, platform comparisons are made difficult when comparing these differing gene lists. Often only the probes in the intersection of these lists are retained for further analysis. Moreover, when probes are mapped to their \'Entrez IDs\' \[[@B4]\] for cross-platform comparisons, multiple probes often pertain to the same gene. For reasons ranging from alternative splicing to probe location, these probes may produce different expression results \[[@B5]\]. Ideal methods for aggregating these probe results in a meaningful and powerful way are currently the topic of much discussion. Laboratory effects are important because array hybridisation is a sensitive procedure. Influences that may affect the array hybridisation include different experimental procedures and laboratory protocols \[[@B6]\], sample preparation and ozone level \[[@B7]\]. For more details of the difficulties associated with microarray meta-analysis, please refer to Ramasamy et al. 2008 and other works \[[@B5],[@B8]-[@B12]\].
We propose a new meta-analysis approach and provide a comprehensive comparison study of available meta-analysis methods. Our method, \'meta differential expression via distance synthesis\' (mDEDS), is used to identify differentially expressed (DE) genes and extends the DEDS method \[[@B13]\]. This new method makes use of multiple statistical measures across datasets to obtain a DE list, but, unlike DEDS, is able to integrate multiple datasets. Hence this meta-method concatenates statistics from the datasets in question and is able to establish a gene list. Such integration should be resilient to a range of complexity levels inherent in meta-analysis situations. The strength of mDEDS as a meta-method over DEDS as a method for selecting DE genes is highlighted by comparing these two approaches to one another in a meta-analysis context. Throughout this paper the statistics used within mDEDS and DEDS are the t and moderated t statistic \[[@B14]\], SAM \[[@B15]\], the B statistic \[[@B16]\] and the fold-change (FC) statistic, although any statistic can be chosen.
We also perform a comparison study of meta-analysis methods including the Fisher\'s inverse chi-square method \[[@B17]\], GeneMeta \[[@B2],[@B18]\], Probability of Expression (POE) \[[@B19]\], POE with Integrative Correlation (*IC*) \[[@B20]\], RankProd \[[@B21]\] (the latter four are available from Bioconductor) and mDEDS as well as two naive methods, \'dataset cross-validation\' and a \'simple\' meta-method. For meta-methods with several varying parameters, we have made use of the suggested or default options.
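As an illustration of the simplest of the approaches above, Fisher\'s inverse chi-square method combines one gene\'s p-values from the *K* studies into a single test statistic. The sketch below is ours, not code from any of the packages listed; a chi-square tail probability with 2*K* degrees of freedom (e.g. via `scipy.stats.chi2.sf`) would then give the combined p-value.

```python
import math

def fisher_combined(p_values):
    """Fisher's inverse chi-square method for one gene: combine the p-values
    from K independent studies into X^2 = -2 * sum(ln p_k), which follows a
    chi-square distribution with 2K degrees of freedom under the null
    hypothesis of no differential expression in any study."""
    statistic = -2.0 * sum(math.log(p) for p in p_values)
    degrees_of_freedom = 2 * len(p_values)
    return statistic, degrees_of_freedom

# One gene's p-values from three hypothetical studies:
stat, df = fisher_combined([0.01, 0.04, 0.20])
```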
The performance of the different meta-analysis methods is assessed in two ways, through a simulation study and through two case studies. For the simulation study, performance is measured through receiver operating characteristic (ROC) curves as well as the area under these ROC curves (AUC). The two case studies vary in complexity, and performance is assessed through prediction accuracy in a classification framework. Warnat et al. \[[@B22]\] use validation to evaluate performance while using multiple datasets. Our validation method differs from their process slightly. Their method takes a random selection of samples from multiple datasets to obtain a test and training set. We retain the original datasets, leaving them complete. Our method aims to simulate real situations where an additional dataset would need to be classified after a discriminant rule was developed. Although within this paper mDEDS is used in a binary setting, mDEDS is a capable multi-class meta-analysis tool, a concept examined by Lu et al. \[[@B23]\].
It is possible to consider meta-analysis at two levels, \'relative\' and \'absolute\' meta-analysis. \'Relative\' meta-analysis looks at how genes or features correlate to a phenotype within a dataset \[[@B10]\]. Multiple datasets are either aggregated or compared to obtain features which are commonly considered important. Meta-methods of this type include Fisher\'s inverse chi-square, GeneMeta, RankProd and the \'dataset cross-validation\' meta-method. \'Absolute\' meta-analysis seeks to combine the raw or transformed data from multiple experiments. By increasing the number of samples used, the statistical power of a test is increased, and traditional microarray analysis tools are then used on these larger datasets. The \'simple\' meta-method is an example of the \'absolute\' meta-analysis approach.
In this paper we begin by describing existing meta-analysis methods and then outline our proposed mDEDS method. This is followed by the comparison study, where publicly available datasets are combined by different meta-analysis methods to examine their ability under varying degrees of complexity, and mDEDS is compared to DEDS. Finally, we discuss the results and provide conclusions.
Existing meta-analysis methods
------------------------------
Let *X*represent an expression matrix, with *i*= 1, \..., *I*genes and *j*= 1, \..., *N*samples. If there are *k*= 1, \..., *K*datasets, *n~k~*represents the number of samples in the *k*th dataset. For simplicity, and without loss of generality, we focus on a dichotomous response; i.e., two-group comparisons. We designate the groups as treatment *T*and control *C*. For two-channel competitive hybridization experiments, we assume that the comparisons of log-ratios are all indirect; that is, we have *n~T~*arrays in which samples from group *T*are hybridized against a reference sample *R*, and we can obtain *n~T~*log-ratios, *M~j~*= log~2~(*T~j~/R*), *j*= 1, \..., *n~T~*, from group *T*. In an identical manner *n~C~*log-ratios are calculated from group *C*. For Affymetrix oligonucleotide array experiments, we have *n~T~*chips with gene expression measures from group *T*and *n~C~*chips with gene expression measures from group *C*.
### Fisher\'s inverse chi-square
Fisher, in the 1930s, developed a meta-analysis method that combines the p-values from independent datasets. One of a plethora of methods for combining p-values \[[@B17]\] is the Fisher summary statistic, *S~i~*= -2∑~*k*=1~^*K*^log(*p~ik~*),
which tests the null hypothesis that for gene *i*, there is no difference in expression means between the two groups. Here *p~ik~*is the p-value for the *i*th gene from the *k*th dataset. In assessing *S~i~*, the theoretical null distribution is *χ*^2^with 2*K*degrees of freedom. It is also possible to extend the Fisher method by weighting the different datasets based on, for example, quality.
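As a concrete illustration, the summary statistic and its chi-square tail probability can be computed directly. The sketch below (`fisher_combined` is our own hypothetical helper, not code from an existing package) exploits the closed-form survival function that exists for the even degrees of freedom 2*K*:

```python
import math

def fisher_combined(pvalues):
    """Fisher summary statistic S_i = -2 * sum_k log(p_ik) for one gene,
    with its p-value under the chi-square null with 2K degrees of freedom.
    For even df = 2K the chi-square survival function has the closed form
    P(X > s) = exp(-s/2) * sum_{j=0}^{K-1} (s/2)^j / j!."""
    K = len(pvalues)
    s = -2.0 * sum(math.log(p) for p in pvalues)
    half = s / 2.0
    tail = math.exp(-half) * sum(half ** j / math.factorial(j) for j in range(K))
    return s, tail

# A gene significant in both of two hypothetical studies
s, p = fisher_combined([0.01, 0.02])
```

A dataset-quality weighting, as mentioned above, would simply replace the unweighted sum of log p-values.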
### GeneMeta
One of the first methods that integrates multiple gene expression datasets was proposed by Choi et al. \[[@B2]\], who describe a t-statistic based approach for combining datasets with two groups. An implementation of this method is found in `GeneMeta`\[[@B18]\], an R package containing meta-analysis tools for microarray experiments.
Choi et al. \[[@B2]\] described a meta-analysis method to combine estimated *effect-sizes*from the *K*datasets. In a two-group comparison, a natural effect size is the *t*-statistic. For a typical gene *i*, the effect size for the *k*th dataset is defined as *d~ik~*= (*x̄~Tk~*- *x̄~Ck~*)/*S~pk~*,
where *x̄~Tk~*and *x̄~Ck~*represent the means of the treatment and the control group respectively in the *k*th study, and *S~pk~*is the pooled standard deviation for the *k*th dataset.
For the *K*observed effect sizes, Choi et al. \[[@B2]\] proposed the random effects model *d~k~*= *μ*+ *δ~k~*+ *ε~k~*,
where *μ*is the parameter of interest, *ε~k~*\~ *N*(0, *s~k~*) with *s~k~*denoting the within-study variances, and *δ~k~*\~ *N*(0, *τ*^2^) represents the between-study random effects with variance *τ*^2^. Choi et al. \[[@B2]\] further noted that when *τ*^2^= 0 the model reduces to a fixed effects model. The random effects model is estimated using the method proposed by DerSimonian and Laird \[[@B24]\], and a permutation test is used to assess the false discovery rate (FDR).
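The DerSimonian and Laird estimation step can be sketched as follows; `dersimonian_laird` is our own hypothetical helper illustrating the method-of-moments estimate of *τ*^2^ and the resulting pooled effect for one gene, not code from the `GeneMeta` package:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooling of per-study effect sizes (DerSimonian & Laird).
    effects: observed effect sizes d_k; variances: within-study variances s_k.
    Returns (mu_hat, tau2_hat): pooled effect and between-study variance."""
    K = len(effects)
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sw
    Q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - (K - 1)) / c)               # method-of-moments estimate
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effect weights
    mu = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    return mu, tau2
```

With homogeneous studies *τ*^2^ is truncated at zero and the estimate coincides with the fixed effects pooled mean.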
### metaArray
The R package `metaArray` contains a number of meta-analysis methods. The main function is a two-step procedure which transforms the data into a probability of expression (POE) matrix \[[@B19]\], followed by a gene selection method based on \'integrative correlation\' (*IC*) \[[@B20]\].
Given a study, the POE method transforms the expression matrix *X*into a matrix *E*that represents the probability of differential expression. Each element *E~ij~*is defined as the probability of the expression state of gene *i*in sample *j*. The transformation is built on three states, -1, 0 and 1, representing the conditions \'under-expressed\', \'not differentially expressed\' and \'over-expressed\'. After the transformation into a POE matrix, genes of interest are established using *IC*\[[@B20]\]. Notice that this integrative correlation method is not restricted to use with a POE matrix. The *IC*method begins by calculating all possible pairwise Pearson correlations (*ρ~ii\'~*, where *i*≠ *i*\') between genes *i*and *i*\' across all samples within a dataset *k*. Thus, we generate a pairwise correlation matrix *P*whose rows represent the pairwise correlations and whose *K*columns represent the datasets.
For a selected pair of datasets *k*and *k*\', let *ρ̄~k~*and *ρ̄~k\'~*denote the means of the correlations per study. Gene-specific reproducibility for gene *i*is obtained by considering only the comparisons that contain the *i*th gene. That is, *IC~i~*= ∑~*i\'*~(*ρ~ii\'~*^(*k*)^- *ρ̄~k~*)(*ρ~ii\'~*^(*k\'*)^- *ρ̄~k\'~*),
where *i*≠ *i*\'. When more than two datasets are being compared, all integrative correlations for a particular gene are aggregated. This method provides a combined ranking for genes across *K*datasets.
In this comparison study, two metaArray results are used. Distinction will be made between them using the terms \'POE with *IC*\' and \'POE with *Bss/Wss*\' to indicate what type of analysis was performed after the construction of the POE matrix.
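A minimal sketch of the gene-specific reproducibility computation for two datasets follows. The function names (`pearson`, `integrative_correlation`) and the exact aggregation (averaging centred products over partners *i*\') are our assumptions for illustration, not the `metaArray` implementation:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def integrative_correlation(X1, X2):
    """Gene-specific reproducibility between two datasets (IC sketch).
    X1, X2: gene-by-sample matrices (lists of rows) over the same genes.
    For each gene i, averages the product of centred pairwise correlations
    rho_{ii'} from the two datasets over all partners i' != i."""
    I = len(X1)
    r1 = {(i, j): pearson(X1[i], X1[j]) for i in range(I) for j in range(I) if i != j}
    r2 = {(i, j): pearson(X2[i], X2[j]) for i in range(I) for j in range(I) if i != j}
    m1 = sum(r1.values()) / len(r1)
    m2 = sum(r2.values()) / len(r2)
    scores = []
    for i in range(I):
        pairs = [(i, j) for j in range(I) if j != i]
        scores.append(sum((r1[p] - m1) * (r2[p] - m2) for p in pairs) / len(pairs))
    return scores
```

Genes whose co-expression pattern is preserved across the two datasets receive high scores, which is the ranking the *IC*step uses.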
### RankProd
RankProd is a non-parametric meta-analysis method developed by Breitling et al. \[[@B21]\]. Fold change (FC) is used as a selection method to compare and rank the genes within each dataset. These ranks are then aggregated to produce an overall score for the genes across datasets, obtaining a ranked gene list.
Within a given dataset *k*, the pairwise FC (pFC) is computed for each gene *i*as the ratio of the *l*th treatment sample to the *m*th control sample, *pFC~l,m~*= *x~il~*/*x~im~*,
producing *n~T~*× *n~C~*values *pFC~l,m~*per gene, with *l*= 1, \..., *n~T~*and *m*= 1, \..., *n~C~*. The corresponding pFC ratios are ranked across genes, and we denote the rank of gene *i*in the *r*th comparison as *pFC*~(*i;r*)~, where *i*= 1, \..., *I*indexes the genes and *r*= 1, \..., *R*indexes the pairwise comparisons between samples. Then the rank product for each gene *i*is defined as *RP~i~*= (∏~*r*=1~^*R*^*pFC*~(*i;r*)~)^1/*R*^.
Expression values are independently permuted *B*times within each dataset relative to the genes, and the above steps are repeated to produce *RP~i~*^(*b*)^, where *b*= 1, \..., *B*. A reference distribution is obtained from all these values, and an adjusted p-value for each of the *I*genes is obtained. Genes considered significant are used in further analysis.
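The rank product score itself is straightforward to sketch; `rank_products` below is our own illustration (positive expression values assumed, rank 1 = largest fold change), not the `RankProd` package code, and it omits the permutation-based p-value step:

```python
import math

def rank_products(X, n_T):
    """Rank-product score per gene (sketch). X: gene-by-sample matrix where
    the first n_T columns are treatment and the rest control. For each gene,
    all pairwise fold changes T_l / C_m are formed; within each (l, m) pair
    the genes are ranked by fold change (rank 1 = largest), and the geometric
    mean of a gene's ranks is its score (small score = up-regulated)."""
    I = len(X)
    n_C = len(X[0]) - n_T
    pairs = [(l, n_T + m) for l in range(n_T) for m in range(n_C)]
    ranks = [[0] * len(pairs) for _ in range(I)]
    for r, (l, m) in enumerate(pairs):
        fc = [(X[i][l] / X[i][m], i) for i in range(I)]
        fc.sort(reverse=True)                      # largest fold change first
        for rank, (_, i) in enumerate(fc, start=1):
            ranks[i][r] = rank
    R = len(pairs)
    return [math.prod(rk) ** (1.0 / R) for rk in ranks]
```

In the full method the same computation on permuted data supplies the reference distribution described above.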
### Naive meta-methods
Two naive meta-methods are used in the comparison study. The \'simple\' meta-method takes the microarray expression matrices and simply combines the datasets, forming a final matrix made up of all samples with no expression adjustment. The \'dataset cross-validation\' meta-method applies the analysis to one dataset and then uses the results on the other dataset(s), with the expectation that the results will be transferable. In a classification context this means that one dataset is used for feature selection and development of the discriminant rule, and we predict the outcome of the other dataset(s) via this rule.
Method
======
Algorithm - mDEDS
-----------------
\'Meta differential expression via distance synthesis\' (mDEDS) is a meta-analysis method that makes use of multiple statistical measures to obtain a DE list. It builds on \'Differential Expression via Distance Synthesis\' (DEDS) \[[@B13]\], which is designed to obtain DE gene lists from a single dataset. Example DE measures include the standard and modulated t statistics \[[@B14]\], fold change, SAM \[[@B15]\] and the B-statistic, among many others. The concept behind the proposed meta-method is that truly DE genes should be selected regardless of the platform or statistic used to obtain a DE list.
The true DE genes should score highly within a set of non-dominated genes, both within a dataset using DE measures and also between datasets when the same DE measures are used on different datasets across different platforms. Consistently high-ranked genes are then considered DE via mDEDS. This method endeavours to be robust against both measure-specific bias, where different measures produce significantly different ranked lists, and platform-specific bias, where particular platforms produce results that are more favourable to particular gene sets.
1\. Let there be *k*= 1, \..., *K*datasets and *g*= 1, \..., *G*appropriate (DE measuring) statistics; hence there are *K*× *G*statistics for each of the *i*= 1, \..., *I*genes. Let *t~ikg~*be the statistic for the *i*th gene, from the *k*th dataset, for the *g*th DE measure. Assuming large values indicate stronger evidence of differential expression, let the observed coordinate-wise extreme point be *E*~0~= (max~*i*~*t~i*11~, \..., max~*i*~*t~iKG~*).
2\. Locate the overall (observed, permutation) extreme point E:
\(a\) Each of the *K*datasets is permuted *B*times by randomly assigning *n~T~*arrays to class \'T\' and *n~C~*arrays to class \'C\', producing *b*= 1, \..., *B*sets of *K*datasets. For each permuted dataset the *G*DE statistics are recalculated, yielding *t~ikg~*^*b*^. Obtain the corresponding coordinate-wise maximum *E*~0~^*b*^= (max~*i*~*t~i*11~^*b*^, \..., max~*i*~*t~iKG~*^*b*^).
\(b\) Obtain the coordinate-wise permutation extreme point *E~p~*by maximizing over the *B*permutations, *E~p~*= max~*b*~*E*~0~^*b*^.
\(c\) Obtain *E*as the overall maximum: *E*= max(*E~p~, E*~0~).
3\. Calculate a distance *d*from each gene to *E*. For example, one choice is the scaled distance *d~i~*= \[∑~*k,g*~((*t~ikg~*- *E~kg~*)/MAD~*kg*~)^2^\]^1/2^,
where MAD is the median absolute deviation from the median. Order the distances, *d*~(1)~≤ *d*~(2)~≤ \... ≤ *d*~(*I*)~; genes closest to *E*are the strongest DE candidates.
Batch correction can be performed by mDEDS, by substituting datasets with \'batch groups\' (see Discussion).
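The three steps above can be sketched as a small program. `mdeds` below is our own minimal illustration, not the Bioconductor implementation; it assumes every supplied statistic is oriented so that large values indicate differential expression and uses a MAD-scaled Euclidean distance:

```python
import math
import random
import statistics

def mdeds(datasets, labels, stats, B=50, seed=0):
    """Sketch of mDEDS. datasets: K gene-by-sample matrices over the same
    genes; labels: per-dataset lists of 'T'/'C' class labels; stats: G
    functions f(treat, ctrl) -> score, large meaning more DE. Returns
    per-gene distances to the overall extreme point E (small = more DE)."""
    rng = random.Random(seed)
    I = len(datasets[0])

    def stat_matrix(labsets):
        # out[i] holds the K*G statistics for gene i
        out = []
        for i in range(I):
            row = []
            for X, lab in zip(datasets, labsets):
                treat = [X[i][s] for s, l in enumerate(lab) if l == 'T']
                ctrl = [X[i][s] for s, l in enumerate(lab) if l == 'C']
                row.extend(f(treat, ctrl) for f in stats)
            out.append(row)
        return out

    t0 = stat_matrix(labels)
    J = len(t0[0])
    # overall extreme point: coordinate-wise max over observed and permuted data
    E = [max(t0[i][j] for i in range(I)) for j in range(J)]
    for _ in range(B):
        perm = [rng.sample(lab, len(lab)) for lab in labels]
        tb = stat_matrix(perm)
        for j in range(J):
            E[j] = max(E[j], max(tb[i][j] for i in range(I)))
    # MAD-scaled Euclidean distance from each gene to E
    mads = []
    for j in range(J):
        col = [t0[i][j] for i in range(I)]
        med = statistics.median(col)
        mads.append(statistics.median([abs(v - med) for v in col]) or 1.0)
    return [math.sqrt(sum(((t0[i][j] - E[j]) / mads[j]) ** 2 for j in range(J)))
            for i in range(I)]
```

Genes with the smallest returned distances are the mDEDS-ranked DE candidates; batch correction, as noted above, would substitute batch groups for datasets.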
Comparison study
----------------
Eight meta-analysis methods are compared using a simulated dataset and two case studies comprising six publicly available datasets: three breast cancer and three lymphoma datasets. The purpose of the comparison study is to establish how these meta-analysis methods perform under varying degrees of dataset complexity. Dataset complexity refers to the level of difficulty present when combining multiple datasets. For example, datasets produced on similar platforms (for example, different Affymetrix platforms) are less complex to analyse via meta-analysis than results obtained across very different platforms. For this comparison two levels of dataset complexity are considered. *Case study 1*, implemented with the breast cancer data, contains datasets from identical Affymetrix chips; this is considered \'similar platform meta-analysis\'. *Case study 2*, which makes use of the lymphoma datasets, contains samples hybridised using long-oligo two-colour platforms, Affymetrix chips and the \'Lymphochip\' \[[@B25]\]; this is considered \'disparate platform meta-analysis\'.
For the publicly available data, probe sets for each platform are mapped to their respective \'Entrez IDs\' \[[@B4]\]. Where multiple probes pertain to the same gene, the mean gene expression level is used. Probes with unknown \'Entrez IDs\' are discarded. Only the intersection of the platform gene lists is used in further analysis. Missing data are imputed using KNN imputation with k = 10. Data analysis was performed using R.
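The probe-to-gene preprocessing described above can be sketched as follows; `merge_to_genes` and `intersect_genes` are hypothetical helper names for illustration, and the KNN imputation step is omitted:

```python
def merge_to_genes(expr, probe_to_gene):
    """Collapse probe-level rows to gene level: probes mapping to the same
    (hypothetical) Entrez ID are averaged, unmapped probes are dropped.
    expr: dict probe -> list of expression values across samples."""
    by_gene = {}
    for probe, values in expr.items():
        gene = probe_to_gene.get(probe)
        if gene is None:
            continue                        # discard probes with no Entrez ID
        by_gene.setdefault(gene, []).append(values)
    return {g: [sum(col) / len(col) for col in zip(*rows)]
            for g, rows in by_gene.items()}

def intersect_genes(gene_tables):
    """Keep only genes present on every platform, pairing their entries."""
    common = set(gene_tables[0])
    for t in gene_tables[1:]:
        common &= set(t)
    return {g: [t[g] for t in gene_tables] for g in sorted(common)}
```

Averaging duplicate probes and intersecting gene lists is one common convention; the Discussion notes that better handling of mismatched probe sets remains an open question.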
### Performance assessment
Assessing the performance of different meta-analysis methods is important but non-trivial. Typically meta-analysis methods are evaluated using pre-published gene lists, noting the concordance between the obtained DE gene list and the published material; this process, however, is subject to publication bias. To avoid such biases, two forms of performance assessment are applied in this paper.
1\. Receiver operating characteristic (ROC) curves: For the simulated data, where the \'true\' DE gene list is known, meta-analysis performance is measured via ROC curves. ROC curves are created by plotting the true positive rate versus the false positive rate for the obtained DE genes. Performance is indicated by how close the plots are to the upper left-hand corner of the ROC space. The AUC is also used as a comparison tool, with AUC values close to one indicating an accurate DE list. Because of the design of the simulation study, the \'dataset cross-validation\' meta-analysis method cannot be used.
2\. Prediction accuracy: For the case studies, prediction accuracy under a classification framework is used to assess the performance of the DE list. We use the term DE list for consistency throughout this manuscript, although strictly speaking, in a classification framework such gene lists are known as feature gene lists. To classify within the case studies, each consisting of three independent datasets, two datasets are combined via the meta-analysis methods and DE genes are selected. When DE gene selection is not part of the meta-analysis approach, DE genes are ranked via the \'between sum of squares over within sum of squares\' (*Bss*/*Wss*) criterion \[[@B26]\]. Using these two datasets, a discriminant rule is constructed by diagonal linear discriminant analysis (DLDA) \[[@B26]\]. The third, independent dataset is classified using this rule. The ability of the meta-analysis method to combine information from the two distinct datasets is reflected in the ability to classify the third. Prediction accuracy is used because the \'true\' DE list is not known. In these case studies, performance can only be judged relative to the other compared methods.
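The DLDA classification step can be sketched as follows; `dlda_fit`/`dlda_predict` are our own minimal implementation (per-class feature means with a pooled diagonal variance), not the code used in the study:

```python
def dlda_fit(X, y):
    """Diagonal linear discriminant analysis (sketch). X: samples x features,
    y: class labels. Stores per-class means and a pooled per-feature variance."""
    classes = sorted(set(y))
    F = len(X[0])
    means = {}
    for c in classes:
        rows = [x for x, yi in zip(X, y) if yi == c]
        means[c] = [sum(r[j] for r in rows) / len(rows) for j in range(F)]
    var = [0.0] * F
    for x, yi in zip(X, y):                 # pooled within-class variance
        for j in range(F):
            var[j] += (x[j] - means[yi][j]) ** 2
    var = [v / (len(X) - len(classes)) or 1e-12 for v in var]
    return means, var

def dlda_predict(model, x):
    """Assign x to the class with the smallest variance-scaled distance."""
    means, var = model
    def score(c):
        return sum((x[j] - means[c][j]) ** 2 / var[j] for j in range(len(x)))
    return min(means, key=score)
```

Because DLDA performs no implicit feature selection, the meta-method's DE list is the only varying component when error rates are compared.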
### Simulation study
To evaluate the performance of the different meta-analysis methods, data was simulated to represent three separate gene expression datasets. The simulation approach is adapted from Ritchie et al. \[[@B27]\]. A non-parametric bootstrap simulation is used, where a matrix of non-differentially expressed gene expression data is sampled from three different datasets. This \'background\' noise contains the latent characteristics of actual microarray data yet contains no biologically DE genes. Samples are constructed with replacement from the original data, such that an even binary class distribution is established.
DE genes are simulated via a two-fold change in expression. Two types of DE genes are simulated: \'true\' DE genes and \'platform specific\' DE genes. \'True\' DE genes are identical genes within each dataset, representing biologically relevant DE genes. \'Platform specific\' DE genes simulate the platform bias apparent within DE genes from microarray experiments \[[@B28]\] and are randomly selected from the genes in the datasets, excluding the \'true\' DE genes. This simulation taps into the important notion that a powerful meta-analysis tool will correctly distinguish a true DE gene, which is DE across multiple platforms, from a DE gene which is simply a platform phenomenon.
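The simulation design can be sketched as below; `simulate_datasets` is a hypothetical helper illustrating the bootstrap of background expression plus spiked \'true\' and \'platform specific\' DE genes:

```python
import random

def simulate_datasets(backgrounds, n_true, n_platform, n_samples, fold=2.0, seed=0):
    """Non-parametric bootstrap simulation (sketch). backgrounds: list of
    gene-by-sample matrices of non-DE 'background' expression, one per
    simulated platform. The first n_true genes are 'true' DE genes (spiked
    on every platform); each platform also gets n_platform random
    platform-specific DE genes. Returns (datasets, labels, true_genes)."""
    rng = random.Random(seed)
    I = len(backgrounds[0])
    true_genes = list(range(n_true))
    datasets, labels = [], []
    for bg in backgrounds:
        # bootstrap sample columns with replacement from the background data
        cols = [rng.randrange(len(bg[0])) for _ in range(n_samples)]
        X = [[bg[i][c] for c in cols] for i in range(I)]
        lab = ['T'] * (n_samples // 2) + ['C'] * (n_samples - n_samples // 2)
        # platform-specific DE genes: random genes outside the 'true' set
        platform = rng.sample(range(n_true, I), n_platform)
        for g in true_genes + platform:
            for s in range(n_samples):
                if lab[s] == 'T':
                    X[g][s] *= fold     # spike a fold change in treatment samples
        datasets.append(X)
        labels.append(lab)
    return datasets, labels, true_genes
```

A meta-method is then scored on how well it recovers `true_genes` while ignoring the platform-specific spikes.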
### Case study 1 - Breast cancer: Similar platform meta-analysis
Three publicly available Affymetrix datasets are used for the breast cancer study; all three use the Affymetrix U133A platform. Classification of the breast cancer samples aims to distinguish between the samples\' estrogen receptor (ER) status (+ve or -ve) as determined by the sample information provided with the datasets; we refer readers to the original manuscripts for more details regarding this status. Here ER status is used simply as a response variable common to all considered datasets; it should be understood that predicting ER status using gene expression data is not the same as immunohistochemistry. These datasets include the Farmer et al. dataset \[[@B29]\] (GSE1561), which utilises the Affymetrix U133A platform with 49 samples, comprising 27 +ve and 22 -ve samples. The Loi et al. dataset \[[@B30]\] contains Affymetrix samples from three platforms, U133 (A, B) and U133plus, some of which underwent treatment and some of which did not. Samples from platform U133A which did not undergo any treatment are used in this study, totalling 126 with 86 +ve and 40 -ve samples (GSE6532). Ivshina et al. \[[@B31]\] produced breast cancer samples on Affymetrix U133A arrays, 200 in total, corresponding to 49 +ve and 151 -ve samples (GSE4922). The performance of the meta-analysis methods in a \'similar platform meta-analysis\' context was assessed via classification. The Farmer et al. and Ivshina et al. datasets were combined via meta-analysis and used to obtain a DE gene list and construct a classification model. The Loi et al. dataset was classified using this gene list and discriminant rule.
### Case study 2 - Lymphoma: Disparate platform meta-analysis
An original lymphoma dataset was obtained from the Department of Haematology and Stem Cell Transplant at St Vincent\'s Hospital Sydney (which will be referred to as SVH). Gene expression levels have been gathered from 60 patients presenting with lymphoma cancers, 37 of these samples are Follicular Lymphoma (FL) and 23 samples are Diffuse Large B-Cell Lymphoma (DLBCL). Human 19000 oligo array slides from the Adelaide microarray consortium were used to obtain microarray expressions. Two well known publicly available datasets were also analysed. The Shipp et al. data \[[@B32]\] contains 19 FL and 58 DLBCL samples, hybridised using the Affymetrix platform HU6800. Alizadeh et al. \[[@B25]\] also contains 19 FL samples and 27 DLBCL samples, hybridized to the \'Lymphochip\' which is a custom designed cDNA microarray. The performance of the meta-analysis methods employed in a \'disparate platform meta-analysis\' context was also assessed via classification. The Shipp et al. and Alizadeh et al. datasets were combined via meta-analysis and used to obtain a DE gene list as well as construct a classification model. The SVH dataset was classified using this gene list and classification rule.
### Case study 3 - mDEDS versus DEDS
To establish the success of mDEDS as a meta-analysis method beyond the capabilities of DEDS, the two are compared directly. The strength of DEDS comes from its ability to synthesise results from a range of statistics; mDEDS goes beyond this to consider results from a range of statistics across multiple datasets. DEDS is a method for selecting DE genes and to this end was used in the simple meta-method described in the \'Existing meta-analysis methods\' section. Datasets from both the breast cancer study and the lymphoma study were used in the comparison of these meta-methods, with the Loi et al. and SVH datasets used as the independent test sets.
Results
=======
Simulation
----------
Three datasets were simulated, with 150, 100 and 80 samples, each with 20000 genes. The percentage of DE genes varied between 2.5%, 4% and 10%, with half the DE genes on each platform being \'true\' and the other half being \'platform specific\' DE genes. Figure [1](#F1){ref-type="fig"} shows the ROC curves for 5% true and 5% platform specific DE genes. These results are indicative of all considered DE percentages. Table [1](#T1){ref-type="table"} contains the AUC values for the three DE gene percentage levels for the different meta-analysis methods. GeneMeta, RankProd, POE with *Bss/Wss*and POE with *IC*appear to struggle to obtain an accurate \'true\' DE list. Fisher and mDEDS perform competitively, with the difference between Fisher, simple and mDEDS reducing as the number of genes in the gene list increases.
######
AUC values for simulated and dataset analysis
  Meta-Method          2.5%    4%      10%
  -------------------- ------- ------- -------
  Fisher               0.996   0.993   0.982
  POE with *Bss/Wss*   0.489   0.490   0.487
  POE with *IC*        0.483   0.492   0.491
  GeneMeta             0.861   0.866   0.876
  RankProd             0.999   0.998   0.834
  Simple               0.998   0.998   0.994
  mDEDS                0.998   0.998   0.994
The AUC values for the simulated datasets, for each meta-analysis method. DE genes are simulated at 2.5%, 4% and 10% levels, with half the genes being \'true\' DE genes and the other half being \'platform specific\' DE genes
{#F1}
Case study 1 - Breast cancer: Similar platform meta-analysis
------------------------------------------------------------
Figure [2](#F2){ref-type="fig"} displays the error rates for the classification of the Loi et al. dataset; the number of DE genes used to build the classification model varies along the horizontal axis. The mean error rates can be found in Table [2](#T2){ref-type="table"}. The majority of the applied meta-methods successfully capture the DE genes across all three Affymetrix datasets to distinguish between positive and negative ER status, with the notable exceptions of GeneMeta and the simple meta-method. Both POE methods become more reliable as the number of genes used to build the classifier increases. RankProd, Fisher, the cross-validation meta-method and mDEDS consistently produce relatively low classification errors for this similar platform analysis. When Farmer et al. and Ivshina et al. were used as the independent test sets, results from the meta-analysis methods were similar (results not shown).
######
Breast cancer classification error rates
Meta-Method Mean Error
-------------------- ------------
Fisher 0.182
POE with *Bss/Wss* 0.257
POE with *IC* 0.199
GeneMeta 0.534
RankProd 0.182
Simple 0.314
Cross-Validation 0.186
mDEDS 0.174
Mean of error rates in the binary classification of three breast cancer datasets using DLDA
{#F2}
Case study 2 - Lymphoma: Disparate platform meta-analysis
---------------------------------------------------------
Figure [3](#F3){ref-type="fig"} shows the error rates for the prediction of the SVH dataset. This study examines the different meta-methods across highly varying platforms (both cDNA and Affymetrix). The Fisher\'s inverse chi-square and POE with *IC*meta-methods perform well under such conditions; conversely, GeneMeta, POE with *Bss/Wss*and RankProd appear to struggle in DE gene selection. However, mDEDS can still utilise these disparate experiments effectively, producing the lowest mean error rate (Table [3](#T3){ref-type="table"}) and a very competitive classifier. When Shipp et al. and Alizadeh et al. were used as the independent test sets, results from the meta-analysis methods were similar (results not shown).
######
Lymphoma cancer classification error rates
Meta-Method Mean Error
-------------------- ------------
Fisher 0.276
POE with *Bss/Wss* 0.375
POE with *IC* 0.301
GeneMeta 0.525
RankProd 0.475
Simple 0.617
Cross-Validation 0.329
mDEDS 0.277
Mean of error rates in the binary classification of three Lymphoma datasets using DLDA.
{#F3}
Case study 3 - mDEDS versus DEDS
--------------------------------
Table [4](#T4){ref-type="table"} shows the mean error rates for the breast cancer and lymphoma datasets when classified using mDEDS and using the simple meta-method with DEDS as the method for selecting DE genes. It is apparent from this table that DEDS does not capture the DE genes across the multiple datasets needed to distinguish the two classes being compared, whereas mDEDS acts as a successful meta-method.
######
mDEDS versus DEDS
  Meta-Method             Breast cancer mean error   Lymphoma mean error
  ----------------------- -------------------------- ----------------------
Simple meta with DEDS 0.441 0.617
mDEDS 0.174 0.277
Mean of error rates when comparing mDEDS to the simple meta-methods when DEDS is used as a feature selection method. Performance is assessed in the binary classification of the breast cancer and lymphoma datasets using DLDA.
Discussion
==========
The simulation study coupled with the two case studies of varying meta-analysis complexity offers insight into the eight meta-analysis methods compared in this paper. It is important to validate meta-analysis methods, although at times this is difficult to do. Some meta-methods are simple variants of common classical statistical methods; others offer more sophisticated responses to specific issues faced in the microarray environment. A large proportion of meta-research deals with DE genes and the process of obtaining a DE list from multiple datasets. Unfortunately DE gene lists are elusive because the true biological DE gene lists are not known. Often, for validation purposes, DE lists are compared to other published DE lists, with the level of congruency taken as indicative of the success of the meta-method. This approach suffers from publication bias \[[@B26]\], as one is continuously republishing pre-published information with little validation of the variations that occur. An alternative assessment criterion utilizing the classification framework offers an intuitive validation process with interpretable results. Classification performance relies heavily on the accuracy of the classifier\'s feature list, which is traditionally taken from the DE list. Within this meta-analysis study, independent dataset validation classification was performed using DLDA. DLDA was chosen because Dudoit et al. \[[@B33]\] found it to be an effective, efficient and accurate classifier for microarray data. This study could have been conducted using any number of classifiers, provided feature selection is not performed implicitly by the classifier. The varying DE lists obtained from the meta-methods are the only varying component in the comparison; therefore a reduction in classification error can be attributed to the meta-method.
Meta-analysis offers a way to enhance the robustness of microarray technology. The \'dataset cross-validation\' meta-analysis approach observed within this study encapsulates a very real problem with microarrays: gene lists selected from one platform or study have a limited ability to be transferred. This is highlighted by their inability to classify samples generated by another platform or study, as demonstrated by the 61.7% error rate obtained via this approach (Table [3](#T3){ref-type="table"}). For both the breast cancer and lymphoma case studies, some meta-analysis approaches were able to increase the accuracy of cross-platform classification, at times reducing the error by as much as 33%, as can be seen in Table [3](#T3){ref-type="table"}. This indicates that the added power gained through meta-analysis produces more robust and reliable results, eventuating in a gene list that is not platform dependent but truly indicative of the disease.
Cross platform meta-analysis multiplies the level of complexity in this particular analysis paradigm. The meta-analysis complexity is suggestive of the meta-method one should employ. Within this study we have used two levels of meta-analysis complexity, (i) when meta-analysis is performed across similar platforms, for example Affymetrix with Affymetrix, (ii) when meta-analysis is performed across disparate platforms, for example Affymetrix with oligo arrays.
The breast cancer case study uses datasets from three identical Affymetrix platforms. Affymetrix\'s development and processing protocols offer reduced variability in array comparison \[[@B34]\]. This feature of Affymetrix arrays is highlighted by the success of the cross-validation meta-analysis method, which produces a relatively low mean error rate within the breast cancer study. In this case POE with both *Bss/Wss*and *IC*, Fisher\'s inverse chi-square and RankProd were able to classify competitively; hence they are able to highlight between-dataset DE genes. RankProd\'s success in this circumstance is similar to the findings of Hong et al. \[[@B3]\], where RankProd is shown to be powerful in both simulated and Affymetrix-based meta-analysis studies.
The lymphoma case study aims to distinguish between the FL and DLBCL subtypes, and the datasets used make this analysis more complex. Both cDNA and oligonucleotide arrays are compared. These platforms vary remarkably, with differences ranging from probe length to the presence of reference samples. As the complexity of the meta-analysis rises, POE with *Bss/Wss*, GeneMeta and RankProd struggle to obtain a gene list robust enough for cross-platform classification. Two different reasons could account for the depletion in accuracy of the meta-methods as the level of complexity increases. The meta-methods could be over-fitting the data; methods that model the data, for example GeneMeta, are particularly susceptible to this. Conversely, some feature selection methods may not capture the complexity of the data, which is potentially occurring in the POE with *Bss/Wss*case. Fisher\'s inverse chi-square meta approach does not take into consideration the actual intensities of each spot on the microarray, although at times this method is ideal, for example when individual intensities are unknown or when the characteristics of the studies vary greatly \[[@B35]\]. This particular characteristic of Fisher\'s inverse chi-square method is highlighted by the more complex lymphoma case study producing lower relative classification errors than the similar platform breast cancer analysis.
Within both complexity environments mDEDS is able to perform DE analysis well, as this method makes use of the different datasets but does not try to fit a full parametric model to the data. Our proposed mDEDS uses multiple statistical measures while developing its ordered gene list. Using multiple measures aids robustness as more of the variability can be encapsulated within the meta-method. The success of mDEDS over DEDS as a meta-method highlights that the method of combining different statistics across datasets aids in the meta-analysis process. It is possible that the multiple platforms and multiple measures draw enough diversity to begin to transcend cross platform variability and produce a reliable gene list. The variation in some of the meta-method\'s abilities within classification suggests that different tools are beneficial depending on the researcher\'s current meta-analysis project.
One may speculate that mDEDS can be used in a batch correction context. Batch effect is a term given to non-biological experimental variation that occurs throughout an experiment. In most cases batch effects are inevitable, as non-biological variation is observed simply through multiple, apparently identical, rounds of amplification and hybridisation. Staggering one\'s hybridisation process is a practical reality of microarray experiments for two main reasons: (i) data is often prospective and may be collected and processed in stages; (ii) there is a limit to the number of samples that may be amplified and hybridised at one time \[[@B36]\], hence forcing batches to form. As a result, powerful batch correction methods are vital for microarray research. One could consider batches obtained separately with time delays, for example a year, as resembling individual datasets on similar platforms. By using mDEDS one can borrow strength from the multiple batches yet avoid particular batch biases.
There are still many open questions within the meta-analysis paradigm, for example questions pertaining to mismatched probe sets across platforms and the handling of multiple probes for the same genes. More research within these areas would greatly aid meta-analysis for microarrays and one\'s ability to make use of the current plethora of information lying dormant in public repositories. As more of these types of tools are developed, meta-analysis will save time, money and scientific resources.
Conclusion
==========
We compared eight meta-analysis methods, comprising five existing methods, two naive approaches and our novel approach, mDEDS. Integrating datasets within microarray analysis has copious and clear advantages. This study aids in establishing which meta-analysis methods are more successful by comparing multiple meta-analysis methods, including Fisher's inverse chi-square, GeneMeta, POE with *Bss/Wss*, POE with *IC*, RankProd, a 'dataset cross-validation' meta-method and a 'simple' meta-method.
Our proposed method, mDEDS, has performed competitively and at times better than currently available meta-analysis methods. ROC curves were used as a comparison in a simulated study, and prediction accuracy within classification was used as an evaluation tool in two real biological case studies. These case studies differ in the complexity of the data being combined: the first combines three datasets from similar platforms (different Affymetrix chipsets), and the second combines datasets from Affymetrix, cDNA and the Lymphochip.
In both classification comparisons mDEDS was used as a feature-selection method and produced capable classifiers, with all else held constant. These results, coupled with results from the simulated data, indicate that mDEDS is a powerful meta-analysis method for cross-laboratory and cross-platform studies.
Availability and requirements
=============================
The R code for mDEDS is an additional feature within the DEDS package available at <http://Bioconductor.org>.
Authors' contributions
=======================
AC performed the analysis and wrote the manuscript. YHY conceived the study, supervised the analysis and participated in the preparation of the manuscript. Both authors read and approved the final manuscript.
Acknowledgements
================
The authors would like to thank the Department of Haematology and Stem Cell Transplant at St Vincent's Hospital Sydney, in particular Dr To Ha Loi and Prof David Ma, for providing the lymphoma datasets and for continuous discussion throughout, as well as Dr Oya Selma Klanten for her comments and feedback on this article. This work was supported in part by ARC through grant DP0770395 (YHY) and an Australian Postgraduate Award (AC).
When UW Medicine interventional radiologist Wayne Monsky first saw virtual reality’s vivid, 3D depiction of the inside of a phantom patient’s blood vessels, his jaw dropped in childlike wonder.
Dr. Wayne Monsky is an interventional radiologist at UW Medicine in Seattle.
“When you put the (VR) headset on, you have a giddy laugh that you can’t control – just sheer happiness and enthusiasm. (I’m) moving up to the mesenteric artery and I can’t believe what I’m seeing,” he recalled.
The experience reminds him of “Fantastic Voyage,” the ’60s-era sci-fi film about a submarine and crew that are miniaturized and injected into a scientist’s body to repair a blood clot.
“As a child, and today, I’ve been amazed at the premise that one day you can swim around inside someone’s body. And really, that’s the sensation: You’re in it,” he said.
Interventional radiologists use catheters, thin flexible tubes that are inserted into arteries and veins and steered to any organ in the body, guided by X-ray visuals. With this approach, they (and cardiologists, vascular surgeons, and neuro-interventionalists) treat an array of conditions: liver tumors, narrowed and bleeding arteries, uterine fibroids, and more.
Monsky and two collaborators have pioneered VR technology that puts the operator inside 3D blood vessels. By following an anatomically correct, dynamic, 3D map of a phantom patient's vessels, Monsky navigates the catheter through junctions and angles. The catheter's tip is equipped with sensors that relay its exact location to the VR headset.
It’s a sizable leap forward from the 2D, black-and-white X-ray perspective that has guided Monsky’s catheters through vessels for most of his career.
He recently presented study findings that underscore VR’s value: In tests of a phantom patient, VR guidance got him to the destination faster – about 40 seconds faster, on average, over 18 simulations – than was the case with X-ray guidance.
More critically, since VR employs electromagnetic tracking to visualize the catheter, it greatly reduces and may eliminate the radiation exposure that comes with X-ray navigation.
“We always try to minimize every patient’s exposure, but these complex procedures are currently performed with X-ray exposure to the patient, in the name of curing their disease. (With VR), we think we can come away from a lot of that – hopefully all of it,” Monsky said.
The tests provided proof of concept, but the idea still must be scrutinized and approved by the U.S. Food and Drug Administration before it could be deployed with actual patients.
He thinks VR’s small footprint also could make interventional radiology portable. A headset and electromagnetic tracking system could fit in a roller bag and travel to remote areas far from the traditional setting of an urban hospital angiography suite.
“Centers that can’t afford that overhead would have access to this treatment technology,” said Monsky. He directs interventional radiology at Harborview Medical Center, which draws patients from throughout the vast Pacific Northwest for emergency and elective care.
Monsky advances a VR sensor-tipped catheter into a replica of a patient’s abdominal aorta.
His partners in this effort are Dr. Steve Seslar, a specialist in cardiology and electrophysiology, and Ryan James, technology developer and software engineer. The three worked through CoMotion at the University of Washington to create Pyrus Medical, a startup that will commercialize the technology.
Someday, the trio thinks VR could allow interventionalists and endovascular specialists to conduct procedures remotely, far from the patient who’s receiving treatment.
“All the information you’re getting from the sensors on the catheters can be used in different ways. We can get positional information (about physiology and anatomy) as we’re doing our procedure … and we can learn from how those catheters move. Ideally that positional information could be fed into tele-robotic systems so that, just by turning my catheter here, in some room across the globe, the catheter is moving in that direction in a person in need there.”
- Brian Donohue, 206.543.7856, bdonohue@uw.edu
# Destructuring in Clojure
In Clojure, destructuring is a shorthand for assigning names to parts
of data structures based on their forms. Don't worry if that's
confusing at first; it becomes very clear with a few examples.
Suppose we have a function that prints a greeting based on a user's
name and location.
Here we'll manually pull out the name and location from the
`user` parameter (a Map), and create bindings named `name`
and `location` via [`let`](/clojure.core/let).
```
(defn greet [user]
  (let [name (:name user)
        location (:location user)]
    (println "Hey there" name ", how's the weather in" location "?")))

(greet {:name "Josie" :location "San Francisco"})
;; Hey there Josie , how's the weather in San Francisco ?
;;=> nil

(greet {:name "Ivan" :location "Moscow"})
;; Hey there Ivan , how's the weather in Moscow ?
;;=> nil
```
Destructuring lets us specify naming of the parameters directly from the
structure of the passed map:
```
(defn greet2 [{:keys [name location]}]
  (println "Hey there" name ", how's the weather in" location "?"))

(greet2 {:name "Josie" :location "San Francisco"})
;; Hey there Josie , how's the weather in San Francisco ?
;;=> nil
```
See John Del Rosario's [destructuring cheat sheet](https://gist.github.com/john2x/e1dca953548bfdfb9844) for a more comprehensive overview.
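Map destructuring also supports defaults with `:or` and binding the
entire map with `:as`. Here's a small illustrative sketch (the
`greet3` name and the default value are ours, not part of the
examples above):

```
(defn greet3 [{:keys [name location]
               :or {location "parts unknown"}
               :as user}]
  (println "Hey there" name "from" location "- full map:" user))

(greet3 {:name "Josie"})
;; Hey there Josie from parts unknown - full map: {:name Josie}
;;=> nil
```

`:or` supplies a value only when the key is missing, while `:as`
keeps a handle on the original, undestructured map.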
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE manualpage SYSTEM "../style/manualpage.dtd">
<?xml-stylesheet type="text/xsl" href="../style/manual.fr.xsl"?>
<!-- English Revision: 1874148 -->
<!-- French translation : Lucien GENTIS -->
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<manualpage metafile="perf-scaling.xml.meta">
<parentdocument href="./">Documentations diverses</parentdocument>
<title>Amélioration des performances</title>
<summary>
<p>Il est dit dans la documentation d'Apache 1.3
à propos de l'amélioration des performances :
</p>
<blockquote><p>
"Apache est un serveur web à vocation générale, conçu pour
être non seulement efficace mais aussi rapide. Dans sa
configuration de base, ses performances sont déjà
relativement satisfaisantes. La plupart des sites possèdent
une bande passante en sortie inférieure à 10 Mbits que le
serveur Apache peut mettre pleinement à profit en utilisant un serveur à base
de processeur Pentium bas de gamme."</p>
</blockquote>
    <p>Cette phrase a été écrite il y a plusieurs années et,
    entre-temps, de nombreuses choses ont changé. D'une part, les
serveurs sont devenus beaucoup plus rapides. D'autre part, de
    nombreux sites se voient maintenant allouer une bande passante
en sortie bien supérieure à 10 Mbits. En outre, les applications
web sont devenues beaucoup plus complexes. Les sites classiques
ne proposant que des pages du style brochure sont toujours
présents, mais le web a souvent évolué vers une plateforme
exécutant des traitements, et les webmasters peuvent maintenant
mettre en ligne des contenus dynamiques en Perl, PHP ou Java,
qui exigent un niveau de performances bien supérieur.
</p>
<p>C'est pourquoi en dépit des progrès en matière de bandes passantes
allouées et de rapidité des serveurs, les performances
des serveurs web et des applications web sont toujours un sujet
d'actualité. C'est dans ce cadre que cette documentation s'attache à
présenter de nombreux points concernant les performances des
serveurs web.
</p>
</summary>
<section id="what-will-and-will-not-be-discussed">
<title>Ce qui sera abordé et ce qui ne le sera pas</title>
<p>Ce document se concentre sur l'amélioration des performances
via des options facilement accessibles, ainsi que sur les outils
de monitoring. Les outils de monitoring vous permettront de
surveiller le fonctionnement de votre serveur web afin de
rassembler des informations à propos de ses performances et des
éventuels problèmes qui s'y rapportent. Nous supposerons
que votre budget n'est pas illimité ; c'est pourquoi les
améliorations apportées le seront sans modifier l'infrastructure
matérielle existante. Vous ne souhaitez probablement pas
compiler vous-même votre serveur Apache, ni recompiler le noyau
de votre système d'exploitation ; nous supposerons cependant que
vous possédez quelques notions à propos du fichier de
configuration du serveur HTTP Apache.
</p>
</section>
<section id="monitoring-your-server">
<title>Monitoring de votre serveur</title>
<p>Si vous envisagez de redimensionner ou d'améliorer les performances
de votre serveur, vous devez tout d'abord observer la manière dont il
fonctionne. En observant son fonctionnement en conditions réelles ou
sous une charge créée artificiellement, vous serez en mesure
d'extrapoler son fonctionnement sous une charge accrue, par exemple dans
le cas où il serait mentionné sur Slashdot. </p>
<section id="monitoring-tools">
<title>Outils de monitoring</title>
<section id="top">
<title>top
</title>
<p>L'outil top est fourni avec Linux et FreeBSD. Solaris
quant à lui, fournit <code>prstat(1)</code>. Cet outil
permet de rassembler de nombreuses données statistiques
à propos du système et de chaque processus en cours
d'exécution avant de les afficher de manière
interactive sur votre terminal. Les données affichées
sont rafraîchies toutes les secondes et varient en
fonction de la plateforme, mais elles comportent en
général la charge moyenne du système, le nombre de
processus et leur état courant, le pourcentage de temps
CPU(s) passé à exécuter le code système et utilisateur,
et l'état de la mémoire virtuelle système. Les données
affichées pour chaque processus sont en général
configurables et comprennent le nom et l'identifiant du
processus, sa priorité et la valeur définie par nice,
l'empreinte mémoire, et le pourcentage d'utilisation CPU.
L'exemple suivant montre plusieurs processus httpd (avec
les MPM worker et event) s'exécutant sur un système
Linux (Xen) :
</p>
<example><pre>
top - 23:10:58 up 71 days, 6:14, 4 users, load average: 0.25, 0.53, 0.47
Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie
Cpu(s): 11.6%us, 0.7%sy, 0.0%ni, 87.3%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2621656k total, 2178684k used, 442972k free, 100500k buffers
Swap: 4194296k total, 860584k used, 3333712k free, 1157552k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16687 example_ 20 0 1200m 547m 179m S 45 21.4 1:09.59 httpd-worker
15195 www 20 0 441m 33m 2468 S 0 1.3 0:41.41 httpd-worker
1 root 20 0 10312 328 308 S 0 0.0 0:33.17 init
2 root 15 -5 0 0 0 S 0 0.0 0:00.00 kthreadd
3 root RT -5 0 0 0 S 0 0.0 0:00.14 migration/0
4 root 15 -5 0 0 0 S 0 0.0 0:04.58 ksoftirqd/0
5 root RT -5 0 0 0 S 0 0.0 4:45.89 watchdog/0
6 root 15 -5 0 0 0 S 0 0.0 1:42.52 events/0
7 root 15 -5 0 0 0 S 0 0.0 0:00.00 khelper
19 root 15 -5 0 0 0 S 0 0.0 0:00.00 xenwatch
20 root 15 -5 0 0 0 S 0 0.0 0:00.00 xenbus
28 root RT -5 0 0 0 S 0 0.0 0:00.14 migration/1
29 root 15 -5 0 0 0 S 0 0.0 0:00.20 ksoftirqd/1
30 root RT -5 0 0 0 S 0 0.0 0:05.96 watchdog/1
31 root 15 -5 0 0 0 S 0 0.0 1:18.35 events/1
32 root RT -5 0 0 0 S 0 0.0 0:00.08 migration/2
33 root 15 -5 0 0 0 S 0 0.0 0:00.18 ksoftirqd/2
34 root RT -5 0 0 0 S 0 0.0 0:06.00 watchdog/2
35 root 15 -5 0 0 0 S 0 0.0 1:08.39 events/2
36 root RT -5 0 0 0 S 0 0.0 0:00.10 migration/3
37 root 15 -5 0 0 0 S 0 0.0 0:00.16 ksoftirqd/3
38 root RT -5 0 0 0 S 0 0.0 0:06.08 watchdog/3
39 root 15 -5 0 0 0 S 0 0.0 1:22.81 events/3
68 root 15 -5 0 0 0 S 0 0.0 0:06.28 kblockd/0
69 root 15 -5 0 0 0 S 0 0.0 0:00.04 kblockd/1
70 root 15 -5 0 0 0 S 0 0.0 0:00.04 kblockd/2</pre></example>
<p>Top est un merveilleux outil, même s'il est
relativement gourmand en ressources (lorsqu'il
s'exécute, son propre processus se trouve en général
dans le top dix des consommations CPU). Il est
indispensable pour déterminer la taille d'un processus
en cours d'exécution, information précieuse lorsque vous
voudrez déterminer combien de processus httpd vous
pourrez exécuter sur votre machine. La méthode pour
effectuer ce calcul est décrite ici : <a
href="#sizing-maxClients">calculer MaxClients</a>. Top
est cependant un outil interactif, et l'exécuter de
                    manière continue présente peu ou pas d'avantage.
</p>
</section>
<section id="free">
<title>free
</title>
<p>Cette commande n'est disponible que sous Linux. Elle
indique la mémoire vive et l'espace de swap utilisés.
Linux alloue la mémoire inutilisée en tant que cache du
système de fichiers. La commande free montre
l'utilisation de la mémoire avec et sans ce cache. On
peut utiliser la commande free pour déterminer la
quantité de mémoire utilisée par le système, comme
décrit dans le paragraphe <a
href="#sizing-maxClients">calculer MaxClients</a>.
L'affichage de la sortie de la commande free ressemble à
ceci :
</p>
<example><pre>
sctemme@brutus:~$ free
total used free shared buffers cached
Mem: 4026028 3901892 124136 0 253144 841044
-/+ buffers/cache: 2807704 1218324
Swap: 3903784 12540 3891244
</pre></example>
</section>
<section id="vmstat">
<title>vmstat
</title>
<p>Cette commande est disponible sur de nombreuses
plateformes de style Unix. Elle affiche un grand nombre
de données système. Lancée sans argument, elle affiche
une ligne d'état pour l'instant actuel. Lorsqu'on lui
ajoute un chiffre, la ligne d'état actuelle est ajoutée à
intervalles réguliers à l'affichage existant.
Par exemple, la commande
<code>vmstat 5</code> ajoute la ligne d'état actuelle
toutes les 5 secondes. La commande vmstat affiche la
quantité de mémoire virtuelle utilisée, la quantité de
mémoire échangée avec l'espace de swap en entrée et en
sortie à chaque seconde, le nombre de processus
actuellement en cours d'exécution ou inactifs, le nombre
d'interruptions et de changements de contexte par
seconde, et le pourcentage d'utilisation du CPU.
</p>
<p>
Voici la sortie de la commande <code>vmstat</code>
pour un serveur inactif :
</p>
<example><pre>
[sctemme@GayDeceiver sctemme]$ vmstat 5 3
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
0 0 0 0 186252 6688 37516 0 0 12 5 47 311 0 1 99
0 0 0 0 186244 6696 37516 0 0 0 16 41 314 0 0 100
0 0 0 0 186236 6704 37516 0 0 0 9 44 314 0 0 100
</pre></example>
<p>Et voici cette même sortie pour un serveur en charge
de cent connexions simultanées pour du contenu statique :
</p>
<example><pre>
[sctemme@GayDeceiver sctemme]$ vmstat 5 3
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
1 0 1 0 162580 6848 40056 0 0 11 5 150 324 1 1 98
6 0 1 0 163280 6856 40248 0 0 0 66 6384 1117 42 25 32
11 0 0 0 162780 6864 40436 0 0 0 61 6309 1165 33 28 40
</pre></example>
<p>La première ligne indique des valeurs moyennes depuis
le dernier redémarrage. Les lignes suivantes donnent des
informations d'état à intervalles de 5 secondes. Le
second argument demande à vmstat de générer 3 lignes
d'état, puis de s'arrêter.
</p>
</section>
<section id="se-toolkit">
<title>Boîte à outils SE
</title>
<p>La boîte à outils SE est une solution de supervision
pour Solaris. Son langage de programmation est basé sur
le préprocesseur C et est fourni avec de nombreux
exemples de scripts. Les informations fournies
peuvent être exploitées en mode console ou en mode
graphique. Cette boîte à outils peut aussi être programmée pour
appliquer des règles aux données système. Avec l'exemple
de script de la Figure 2, Zoom.se, des voyants verts,
oranges ou rouges s'allument lorsque certaines valeurs
du système dépassent un seuil spécifié. Un autre script
fourni, Virtual Adrian, permet d'affiner les
performances en tenant compte de ces valeurs.
</p>
                <p>Depuis sa création, de nombreux propriétaires se sont
                succédé à la tête de la boîte à outils SE, et elle a de
ce fait largement évolué. Il semble qu'elle ait
maintenant trouvé sa place chez Sunfreeware.com d'où
elle peut être téléchargée gratuitement. Il n'y a qu'un
seul paquet pour Solaris 8, 9 et 10 sur SPARC et x86, et
il inclut le code source. Le concepteur de la boîte à
                outils SE, Richard Pettit, a fondé une nouvelle société,
                Captive Metrics, et a l'intention de mettre sur le
marché un outil de supervision multiplateforme en Java basé sur
les mêmes principes que la boîte à outils SE.
</p>
</section>
<section id="dtrace">
<title>DTrace
</title>
<p>Etant donné que DTrace est disponible sous Solaris,
FreeBSD et OS X, il serait intéressant de l'étudier. Il
y a aussi le module mod_dtrace pour httpd.
</p>
</section>
<section id="mod_status">
<title>mod_status
</title>
<p>Le module mod_status donne un aperçu des performances
du serveur à un instant donné. Il génère une page HTML
comportant, entre autres, le nombre de processus Apache
en cours d'exécution avec la quantité de données qu'ils
ont servies, ainsi que la charge CPU induite par httpd
et le reste du système. L'Apache Software Foundation
utilise elle-même <module>mod_status</module> pour son
propre <a href="http://apache.org/server-status">site
web</a>. Si vous ajoutez une directive
<code>ExtendedStatus On</code> à votre fichier
<code>httpd.conf</code>, la page de
                <module>mod_status</module> vous fournira davantage
d'informations, au prix d'une consommation de ressources
légèrement supérieure par requête.
</p>
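                <p>À titre d'illustration (chemin d'URL et restriction
                d'accès à adapter à votre site), <module>mod_status</module>
                peut être activé avec une configuration de ce type :
                </p>
                <highlight language="config">
ExtendedStatus On
&lt;Location "/server-status"&gt;
    SetHandler server-status
    Require local
&lt;/Location&gt;
                </highlight>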
</section>
</section>
<section id="web-server-log-files">
<title>Les journaux du serveur web
</title>
<p>Le moyen le plus efficace pour vérifier la bonne santé et
le niveau de performance de votre serveur consiste à
surveiller et analyser les journaux écrits par httpd. La
surveillance du journal des erreurs vous permet de
déterminer les sources d'erreurs, de détecter des attaques
ou des problèmes de performance. L'analyse du journal des
accès vous indique le niveau de charge de votre serveur,
quelles sont les ressources les plus populaires, ainsi que
la provenance de vos utilisateurs. Une analyse historique des
données de journalisation peut vous fournir des informations
précieuses quant aux tendances d'utilisation de votre
serveur au cours du temps, ce qui vous permet de prévoir les
périodes où les besoins en performance risquent de dépasser
les capacités du serveur.
</p>
<section id="ErrorLog">
<title>Journal des erreurs
</title>
<p>Le journal des erreurs peut indiquer que le nombre
maximum de processus actifs ou de fichiers ouverts
simultanément a été atteint. Le journal des erreurs
                signale aussi le lancement de processus supplémentaires à un
                taux supérieur à la normale en réponse à
une augmentation soudaine de la charge. Lorsque le
serveur démarre, le descripteur de fichier stderr est
redirigé vers le journal des erreurs, si bien que toute
erreur rencontrée par httpd après avoir ouvert ses
fichiers journaux apparaîtra dans ce journal. Consulter
fréquemment le journal des erreurs est donc une bonne
habitude.
</p>
<p>Lorsque Apache httpd n'a pas encore ouvert ses
fichiers journaux, tout message d'erreur sera envoyé
vers la sortie d'erreur standard stderr. Si vous
démarrez httpd manuellement, ces messages d'erreur
apparaîtront sur votre terminal, et vous pourrez les
utiliser directement pour résoudre les problèmes de
votre serveur. Si httpd est lancé via un script de
démarrage, la destination de ces messages d'erreur
dépend de leur conception.
<code>/var/log/messages</code> est alors le premier fichier à
consulter. Sous Windows, ces messages d'erreur précoces
sont écrits dans le journal des évènements des
applications, qui peut être visualisé via l'observateur
d'évènements dans les outils d'administration.
</p>
<p>
Le journal des erreurs est configuré via les
directives de configuration <directive
module="core">ErrorLog</directive> et <directive
module="core">LogLevel</directive>. Le journal des
erreurs de la configuration du serveur principal de
httpd reçoit les messages d'erreur concernant
l'ensemble du serveur : démarrage, arrêt, crashes,
lancement de processus supplémentaires excessif,
etc... La directive <directive
module="core">ErrorLog</directive> peut aussi être
utilisée dans les blocs de configuration des
serveurs virtuels. Le journal des erreurs d'un
serveur virtuel ne reçoit que les messages d'erreur
spécifiques à ce serveur virtuel, comme les erreurs
d'authentification et les erreurs du style 'File not
Found'.
</p>
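                <p>Par exemple (nom de domaine et chemins purement
                indicatifs), un serveur virtuel peut définir son propre
                journal des erreurs comme suit :
                </p>
                <highlight language="config">
&lt;VirtualHost *:80&gt;
    ServerName www.example.com
    ErrorLog "logs/example.com-error_log"
    LogLevel warn
&lt;/VirtualHost&gt;
                </highlight>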
<p>Dans le cas d'un serveur accessible depuis Internet,
attendez-vous à voir de nombreuses tentatives
d'exploitation et attaques de vers dans le journal des
erreurs. La plupart d'entre elles ciblent des serveurs
autres qu'Apache, mais dans l'état actuel des choses,
les scripts se contentent d'envoyer leurs attaques vers
tout port ouvert, sans tenir compte du serveur web
effectivement en cours d'exécution ou du type
des applications installées. Vous pouvez bloquer ces
tentatives d'attaque en utilisant un pare-feu ou le
module <a
href="http://www.modsecurity.org/">mod_security</a>,
mais ceci dépasse la portée de cette discussion.
</p>
<p>
La directive <directive
module="core">LogLevel</directive> permet de définir
le niveau de détail des messages enregistrés dans
les journaux. Il existe huit niveaux de
journalisation :
</p>
<table>
<tr>
<td>
<p><strong>Niveau</strong></p>
</td>
<td>
<p><strong>Description</strong></p>
</td>
</tr>
<tr>
<td>
<p>emerg</p>
</td>
<td>
<p>Urgence - le système est inutilisable.</p>
</td>
</tr>
<tr>
<td>
<p>alert</p>
</td>
<td>
<p>Une action doit être entreprise
immédiatement.</p>
</td>
</tr>
<tr>
<td>
<p>crit</p>
</td>
<td>
<p>Situations critiques.</p>
</td>
</tr>
<tr>
<td>
<p>error</p>
</td>
<td>
<p>Situations d'erreur.</p>
</td>
</tr>
<tr>
<td>
<p>warn</p>
</td>
<td>
<p>Situations provoquant un avertissement.</p>
</td>
</tr>
<tr>
<td>
<p>notice</p>
</td>
<td>
<p>Evènement normal, mais important.</p>
</td>
</tr>
<tr>
<td>
<p>info</p>
</td>
<td>
<p>Informations.</p>
</td>
</tr>
<tr>
<td>
<p>debug</p>
</td>
<td>
<p>Messages de débogage.</p>
</td>
</tr>
</table>
<p>Le niveau de journalisation par défaut est warn. Un
serveur en production ne doit pas s'exécuter en mode
debug, mais augmenter le niveau de détail dans le
journal des erreurs peut s'avérer utile pour résoudre
certains problèmes. A partir de la
version 2.3.8, la directive <directive
module="core">LogLevel</directive> peut être définie au
niveau de chaque module :
</p>
<highlight language="config">
LogLevel debug mod_ssl:warn
</highlight>
<p>
Dans cet exemple, l'ensemble du serveur est en mode
debug, sauf le module <module>mod_ssl</module>
qui a tendance à être très bavard.
</p>
</section>
<section id="AccessLog">
<title>Journal des accès
</title>
<p>Apache httpd garde la trace de toutes les requêtes
qu'il reçoit dans son journal des accès. En plus de
l'heure et de la nature d'une requête, httpd peut
enregistrer l'adresse IP du client, la date et l'heure
de la requête, le résultat et quantité d'autres
informations. Les différents formats de journaux sont
documentés dans le manuel. Le fichier concerne par
défaut le serveur principal, mais il peut être configuré
pour chaque serveur virtuel via les directives de
configuration <directive
module="mod_log_config">TransferLog</directive> ou
<directive
module="mod_log_config">CustomLog</directive>.
</p>
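                <p>Par exemple (nom de domaine et chemins purement
                indicatifs), pour attribuer à un serveur virtuel son propre
                journal des accès au format "Common" :
                </p>
                <highlight language="config">
&lt;VirtualHost *:80&gt;
    ServerName www.example.com
    CustomLog "logs/example.com-access_log" common
&lt;/VirtualHost&gt;
                </highlight>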
<p>De nombreux programmes libres ou commerciaux
permettent d'analyser les journaux d'accès. Analog et
                Webalizer sont des programmes d'analyse libres parmi les
plus populaires. L'analyse des journaux doit s'effectuer
hors ligne afin de ne pas surcharger le serveur web avec
le traitement des fichiers journaux. La plupart des
programmes d'analyse des journaux sont compatibles avec le format
de journal "Common". Voici une description des
différents champs présents dans une entrée du journal :
</p>
<example><pre>
195.54.228.42 - - [24/Mar/2007:23:05:11 -0400] "GET /sander/feed/ HTTP/1.1" 200 9747
64.34.165.214 - - [24/Mar/2007:23:10:11 -0400] "GET /sander/feed/atom HTTP/1.1" 200 9068
60.28.164.72 - - [24/Mar/2007:23:11:41 -0400] "GET / HTTP/1.0" 200 618
85.140.155.56 - - [24/Mar/2007:23:14:12 -0400] "GET /sander/2006/09/27/44/ HTTP/1.1" 200 14172
85.140.155.56 - - [24/Mar/2007:23:14:15 -0400] "GET /sander/2006/09/21/gore-tax-pollution/ HTTP/1.1" 200 15147
74.6.72.187 - - [24/Mar/2007:23:18:11 -0400] "GET /sander/2006/09/27/44/ HTTP/1.0" 200 14172
74.6.72.229 - - [24/Mar/2007:23:24:22 -0400] "GET /sander/2006/11/21/os-java/ HTTP/1.0" 200 13457
</pre></example>
<table>
<tr>
<td>
<p><strong>Champ</strong></p>
</td>
<td>
<p><strong>Contenu</strong></p>
</td>
<td>
<p><strong>Explication</strong></p>
</td>
</tr>
<tr>
<td>
<p>Adresse IP client</p>
</td>
<td>
<p>195.54.228.42</p>
</td>
<td>
<p>Adresse IP d'où provient la requête</p>
</td>
</tr>
<tr>
<td>
<p>Identité RFC 1413</p>
</td>
<td>
<p>-</p>
</td>
<td>
<p>Identité de l'utilisateur distant renvoyée
par son démon identd</p>
</td>
</tr>
<tr>
<td>
<p>Nom utilisateur</p>
</td>
<td>
<p>-</p>
</td>
<td>
<p>Nom de l'utilisateur distant issu de
l'authentification Apache</p>
</td>
</tr>
<tr>
<td>
<p>Horodatage</p>
</td>
<td>
<p>[24/Mar/2007:23:05:11 -0400]</p>
</td>
<td>
<p>Date et heure de la requête</p>
</td>
</tr>
<tr>
<td>
<p>Requête</p>
</td>
<td>
<p>"GET /sander/feed/ HTTP/1.1"</p>
</td>
<td>
<p>La requête proprement dite</p>
</td>
</tr>
<tr>
<td>
<p>Code d'état</p>
</td>
<td>
<p>200</p>
</td>
<td>
<p>Code d'état renvoyé avec la réponse</p>
</td>
</tr>
<tr>
<td>
<p>Contenu en octets</p>
</td>
<td>
<p>9747</p>
</td>
<td>
<p>Total des octets transférés sans les
en-têtes</p>
</td>
</tr>
</table>
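                <p>Le format "Common" décrit ci-dessus correspond à la
                définition <directive
                module="mod_log_config">LogFormat</directive> fournie dans
                la configuration par défaut de httpd :
                </p>
                <highlight language="config">
LogFormat "%h %l %u %t \"%r\" %&gt;s %b" common
                </highlight>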
</section>
<section id="rotating-log-files">
<title>Rotation des fichiers journaux
</title>
<p>Il y a de nombreuses raisons pour mettre en oeuvre la
rotation des fichiers journaux. Même si pratiquement
                plus aucun système d'exploitation n'impose une limite de
                taille de deux gigaoctets pour les fichiers, avec le
temps, les fichiers journaux deviennent trop gros pour
pouvoir être traités. En outre, toute analyse de journal
ne doit pas être effectuée sur un fichier dans lequel le
serveur est en train d'écrire. Une rotation périodique
des fichiers journaux permet de faciliter leur analyse,
et de se faire une idée plus précise des habitudes
d'utilisation.
</p>
<p>Sur les systèmes unix, vous pouvez simplement
effectuer cette rotation en changeant le nom du fichier
journal via la commande mv. Le serveur continuera à
écrire dans le fichier ouvert même s'il a changé de nom.
Lorsque vous enverrez un signal de redémarrage
"Graceful" au serveur, il ouvrira un nouveau fichier
journal avec le nom configuré par défaut. Par exemple,
vous pouvez écrire un script de ce style pour cron :
</p>
<example>
APACHE=/usr/local/apache2<br />
HTTPD=$APACHE/bin/httpd<br />
mv $APACHE/logs/access_log
$APACHE/logarchive/access_log-`date +%F`<br />
$HTTPD -k graceful
</example>
<p>Cette approche fonctionne aussi sous Windows, mais
avec moins de souplesse. Alors que le processus httpd de
votre serveur sous Windows continuera à écrire dans le
fichier journal une fois ce dernier renommé, le service
Windows qui exécute Apache n'est pas en mesure
d'effectuer un redémarrage graceful. Sous Windows, le
redémarrage d'un service consiste simplement à l'arrêter
et à le démarrer à nouveau, alors qu'avec un redémarrage
graceful, les processus enfants terminent
le traitement des requêtes en cours avant de s'arrêter,
et le serveur httpd est alors immédiatement à
nouveau disponible pour traiter les nouvelles requêtes.
Sous Windows, le processus d'arrêt/redémarrage du
service interrompt les traitements de requêtes en cours,
et le serveur demeure indisponible jusqu'à ce qu'il ait
terminé son redémarrage. Vous devez donc tenir compte de
toutes ces contraintes lorsque vous planifiez un
redémarrage.
</p>
<p>
Une seconde approche consiste à utiliser la
redirection des logs. Les directives <directive
module="mod_log_config">CustomLog</directive>,
<directive
module="mod_log_config">TransferLog</directive> ou
<directive module="core">ErrorLog</directive>
permettent de rediriger les données de journalisation
vers tout programme via le caractère pipe
(<code>|</code>) comme dans cet exemple :
</p>
<example>CustomLog "|/usr/local/apache2/bin/rotatelogs
/var/log/access_log 86400" common
</example>
<p>Le programme cible de la redirection recevra alors les
données de journalisation d'Apache sur son entrée
standard stdin, et pourra en faire ce qu'il voudra. Le
programme rotatelogs fourni avec Apache effectue une
rotation des journaux de manière transparente en
fonction du temps ou de la quantité de données écrites,
et archive l'ancien fichier journal en ajoutant un
suffixe d'horodatage à son nom. Cependant, si cette
méthode fonctionne de manière satisfaisante sur les
plateformes de style unix, il n'en est pas de même sous
Windows.
</p>
</section>
<section id="logging-and-performance">
<title>Journalisation et performances
</title>
<p>Writing log entries costs resources, but the data is
valuable enough that turning logging off is strongly
discouraged. For best performance, keep your log files
on a different physical disk from the one holding your
web content, because the access patterns differ greatly:
reads from the content disk are essentially random,
while writes to the log files are sequential.
</p>
<p>
Never leave the <directive
module="core">LogLevel</directive> directive set to
debug on a production server. At that log level a large
amount of data is written to the error log, including,
for SSL connections, every read and write of the
connection handshake. The performance implications are
significant; use the warn level instead.
</p>
<p>If your server hosts more than one virtual host, it
is convenient to give each one its own access log, which
makes later processing easier. If your server hosts a
great many virtual hosts, however, the number of open
log files consumes significant system resources, and it
may be wiser to use a single log file arranged as
follows: put the <code>%v</code> format item at the
front of your <directive
module="mod_log_config">LogFormat</directive> directive
(and, since version 2.3.8, of your <directive
module="core">ErrorLog</directive> directive) so that
httpd records the name of the virtual host that handled
the request or error at the start of each log entry. A
simple Perl script can then split the log into per-host
files after rotation; Apache ships such a script in
<code>support/split-logfile</code>.
</p>
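<p>The split step can be sketched in a few lines of awk.
This is only an illustrative stand-in for
<code>support/split-logfile</code>, assuming
<code>%v</code> was logged as the first field of each
entry; the host names, paths, and sample entries below
are made up:
</p>

```shell
# Sample combined log: the virtual host name (%v) leads each entry
cat > /tmp/access_log <<'EOF'
blog.example.com 10.0.0.1 - - "GET / HTTP/1.1" 200 512
www.example.com 10.0.0.2 - - "GET /index.html HTTP/1.1" 200 1024
blog.example.com 10.0.0.3 - - "GET /feed HTTP/1.1" 200 2048
EOF

# Write one file per virtual host, stripping the %v prefix
awk '{ vhost = $1; sub(/^[^ ]+ /, ""); print > ("/tmp/" vhost ".log") }' /tmp/access_log

wc -l < /tmp/blog.example.com.log
```

<p>The real script handles details such as safe file
names; the sketch above only shows the core idea. After
the split, each per-host file is in the plain format the
usual log analysis tools expect.
</p>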
<p>
You can also use the <directive
module="mod_log_config">BufferedLogs</directive>
directive to make Apache hold several log entries in
memory before writing them to disk. This may improve
performance, but keep in mind that it can affect the
ordering of the recorded events.
</p>
</section>
</section>
<section id="generating-a-test-load">
<title>Generating a Test Load
</title>
<p>Generating a test load is a useful way to evaluate
how your system performs under real-world conditions.
Commercial packages such as <a
href="http://learnloadrunner.com/">LoadRunner</a> exist
for this purpose, as do many free tools for generating
load against your web server.
</p>
<ul>
<li>Apache ships with a test program called ab (short
for Apache Bench). It can load a web server by
requesting the same file repeatedly at a rapid rate. You
can specify the number of concurrent connections and
choose between a fixed running time and a fixed number
of requests.
</li>
<li>http_load is another free load generator. It works
from a file of URLs and can be compiled with SSL
support.
</li>
<li>The Apache Software Foundation offers a tool called
flood. Flood is a fairly sophisticated program that is
configured through an XML file.
</li>
<li>Finally, JMeter, a Jakarta subproject, is a
load-testing tool written entirely in Java. While early
versions of this application were slow and hard to use,
the current version 2.1.1 appears to be both usable and
efficient.
</li>
<li>
<p>Projects outside the ASF with a reasonably good
reputation include grinder, httperf, tsung and <a
href="http://funkload.nuxeo.org/">FunkLoad</a>.
</p>
</li>
</ul>
<p>When you run a test load against your web server,
keep in mind that if the server is in production, the
test may hurt its performance. The extra data
transferred may also count against any monthly transfer
quota you have been allocated.
</p>
</section>
</section>
<section id="configuring-for-performance">
<title>Configuring for Performance
</title>
<section id="apache-configuration">
<title>httpd Configuration
</title>
<p>httpd version 2.2 is, by default, a pre-forking web
server. When the server starts, the parent process
spawns a number of child processes, and those children
handle the requests. httpd 2.0 introduced the concept of
the Multi-Processing Module (MPM), which lets developers
write MPMs matching the process- or thread-based
architecture of their particular operating system.
Apache 2 ships with special MPMs for Windows, OS/2,
Netware and BeOS. On unix-like platforms the two
best-known MPMs are Prefork and Worker. The Prefork MPM
offers the same pre-forked child process model as Apache
1.3. The Worker MPM runs a smaller number of child
processes but gives each of them a number of threads
with which to handle requests. As of version 2.4 the MPM
is no longer fixed at compile time; it can be loaded at
run time through the <directive
module="core">LoadModule</directive> directive. The
default MPM in version 2.4 is the event MPM.
</p>
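<p>With 2.4, for example, selecting the event MPM is a
matter of loading it at startup rather than rebuilding
the server (the module path shown follows a common
layout; adjust it to your installation):
</p>
<highlight language="config">
LoadModule mpm_event_module modules/mod_mpm_event.so
</highlight>
<p>Only one MPM may be loaded at a time; to switch MPMs,
comment out one LoadModule line and enable another.
</p>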
<p>The maximum number of workers, meaning pre-forked
child processes and/or threads, gives an approximation
of how many requests your server can handle
simultaneously. It is only an estimate, because the
kernel can queue connection attempts to your server.
When your site nears saturation and the maximum number
of workers is running, the machine does not hit a hard
limit beyond which clients are denied access; but once
requests start to queue, system performance degrades
rapidly.
</p>
<p>Finally, if the httpd server in question does not run
any third-party code via <code>mod_php</code>,
<code>mod_perl</code> or the like, we recommend using
<module outdated="true">mpm_event</module>. This MPM is
ideal for situations where httpd acts as a go-between
for clients and back-end servers that do the real work,
for example as a proxy or cache.
</p>
<section id="MaxClients">
<title>MaxClients
</title>
<p>The <code>MaxClients</code> directive sets the
maximum number of workers your server can create. It has
two companion directives, <code>MinSpareServers</code>
and <code>MaxSpareServers</code>, which bound the number
of workers httpd keeps in reserve to handle requests.
The absolute number of processes httpd can create is
capped by the <code>ServerLimit</code> directive.
</p>
</section>
<section id="spinning-threads">
<title>Spinning Threads
</title>
<p>For the Prefork MPM, the directives above are all
there is to setting worker limits. If you run a threaded
MPM, the situation is a little more involved. Threaded
MPMs support the <code>ThreadsPerChild</code> directive,
and httpd requires that <code>MaxClients</code> be
evenly divisible by <code>ThreadsPerChild</code>. If the
values you set for these two directives break this rule,
httpd prints an error message and adjusts
<code>ThreadsPerChild</code> downward until
<code>MaxClients</code> is divisible by it.
</p>
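<p>For instance, a threaded MPM might be configured as
follows: 16 × 25 gives 400 workers, so
<code>MaxClients</code> is evenly divisible by
<code>ThreadsPerChild</code> (the figures are
illustrative only):
</p>
<highlight language="config">
ServerLimit         16
ThreadsPerChild     25
MaxClients         400
</highlight>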
</section>
<section id="sizing-maxClients">
<title>Sizing MaxClients
</title>
<p>Ideally, the maximum number of processes should be
chosen so that all of the system's memory is used, but
no more. If your system becomes so overloaded that it
has to swap heavily (paging memory out to disk),
performance degrades rapidly. The formula for
determining <directive module="mpm_common"
name="MaxRequestWorkers">MaxClients</directive> is
fairly simple:
</p>
<example>
MaxClients = (total RAM − RAM for the operating system −
RAM for external programs) divided by the RAM used by
each child process.
</example>
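<p>As a worked example with made-up numbers: a machine
with 2 GB of RAM, of which 500 MB goes to the operating
system and 250 MB to external programs, and whose httpd
children weigh in at 25 MB each, could run about 51
workers:
</p>

```shell
total_ram=2048     # MB of RAM in the machine (hypothetical figure)
os_ram=500         # MB used by the operating system and other daemons
external_ram=250   # MB used by external programs (CGI, Tomcat, ...)
per_child=25       # MB of unique memory per httpd child, as observed with top

maxclients=$(( (total_ram - os_ram - external_ram) / per_child ))
echo "MaxClients $maxclients"    # prints "MaxClients 51"
```

<p>Round down and leave some headroom rather than
trusting the result to the last worker.
</p>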
<p>Observation is the best way to determine how much
memory the system, the external programs and the httpd
processes each use: with the web server not running, use
the top and free commands described earlier to study the
system's memory footprint. You can also inspect a
typical web server process with top; most
implementations of that command show a Resident Size
(RSS) column and a Shared Memory column.
</p>
<p>The difference between these two columns is the
amount of memory per process. The shared segment exists
only once, and is used for the code and loaded libraries
and for the inter-process accounting (the scoreboard)
that Apache maintains. How much memory each process
takes depends heavily on the number and kind of modules
in use. The best way to size this is to generate a
typical load against your web site and see how large the
httpd processes become.
</p>
<p>The RAM for external programs is mainly the memory
used by CGI programs and scripts that run outside the
httpd processes. If you run a Java virtual machine
hosting Tomcat on the same server, it too will need a
significant amount of memory. The formula above for the
maximum value of <code>MaxClients</code> is therefore a
first approximation, not an exact science. When in
doubt, be pragmatic and use a fairly low
<code>MaxClients</code> value. The Linux kernel puts
spare memory to good use caching disk access. On
Solaris, by contrast, enough real RAM must be available
to create each process; if there is not, httpd starts
logging "No space left on device" style error messages
and becomes unable to fork further child processes, so a
<code>MaxClients</code> value that is too high can be a
real liability there.
</p>
</section>
<section id="selecting-your-mpm">
<title>Selecting your MPM
</title>
<p>Threads are cheaper for the system to switch between
and consume fewer resources than processes; this is the
main reason a threaded MPM is recommended. The gain is
more pronounced on some operating systems than on
others. On Solaris and AIX, for example, manipulating
processes is relatively expensive in system resources,
so a threaded MPM is a natural fit on those systems. On
Linux, on the other hand, the threading implementation
actually uses one process per thread. Linux processes
are relatively lightweight, but it means that a threaded
MPM offers a smaller performance gain there than on
other systems.
</p>
<p>In some situations, however, running a threaded MPM
can cause stability problems. If a prefork MPM child
process crashes, at most one client connection is
affected; if a threaded child process crashes, it takes
all the threads in that process down with it, which
means every client that process was serving loses its
connection. Thread-safety issues can also surface,
particularly with third-party libraries: in threaded
applications, the individual threads may access the same
variables indiscriminately, with no way of knowing
whether a variable has been changed by another thread.
</p>
<p>This has been a sore point within the PHP community,
because the PHP interpreter relies heavily on
third-party libraries and not all of them are guaranteed
to be thread-safe. The good news is that if you run
Apache on Linux, you can use PHP with the prefork MPM
without fearing much of a performance loss relative to
the threaded options.
</p>
</section>
<section id="spinning-locks">
<title>Spinning Locks</title>
<p>Apache httpd maintains an inter-process lock around
its network listener. In practical terms this means that
only one httpd child process can receive a request at
any given time; the other processes either service
requests they have already received or wait their turn
to acquire the lock and listen on the network for a new
request. You can picture this as a revolving door that
only one process can pass through at a time. On a
heavily loaded web server, where requests arrive
constantly, the door spins quickly and requests are
accepted at a steady rate. On a lightly loaded server,
the process that "holds" the lock may keep its door open
for a while, during which all the other processes sit
idle, waiting to acquire the lock. In that situation the
parent process may decide to terminate some children,
guided by the value of the
<code>MaxSpareServers</code> directive.</p>
</section>
<section id="the-thundering-herd">
<title>The Thundering Herd
</title>
<p>The purpose of the "accept mutex" (as this
inter-process lock is called) is to keep request
reception orderly. Without the lock, the "thundering
herd" syndrome can appear.
</p>
<p>Picture an American football team lined up at the
line of scrimmage. If the players behaved like Apache
processes, every one of them would dive for the ball the
moment play resumed. Only one would get it, and the rest
would have to trudge back to the line and wait for the
next snap. In this metaphor, it is the quarterback who
plays the role of the accept mutex, handing the ball to
the appropriate player.
</p>
<p>Carrying that much information around is obviously a
lot of work and, like a smart person, a smart server
tries to avoid that overhead whenever possible; hence
the revolving door. In recent years many operating
systems, including Linux and Solaris, have put code in
place to avoid the thundering herd syndrome. Apache
recognizes this, and if you listen on only one network
socket, that is, if you serve only the main server or a
single virtual host, Apache refrains from using an
accept mutex. If you listen on several sockets, for
example because a virtual host serves SSL requests,
Apache arms the accept mutex to avoid internal
conflicts.
</p>
<p>You can manipulate the accept mutex with the
<code>AcceptMutex</code> directive. Besides turning the
accept mutex off, you can use it to select the locking
mechanism. Common locking mechanisms include fcntl,
System V semaphores and pthread locking. Not all of them
are available on every platform, and their availability
may also depend on compile-time options. The various
locking mechanisms may place specific demands on system
resources, so handle them with care.
</p>
<p>There is no compelling reason to disable the accept
mutex. Apache automatically determines whether to use a
mutex on your platform, based on the number of listening
sockets on your server, as described above.
</p>
</section>
</section>
<section id="tuning-the-operating-system">
<title>Tuning the Operating System
</title>
<p>People often look for the magic tunable that will
quadruple their system's performance. The truth is that
modern unix-like systems come well tuned out of the box,
and there is not much left to do to improve their
performance. Still, an administrator can make a few
changes to fine-tune the configuration.
</p>
<section id="ram-and-swap-space">
<title>RAM and Swap Space
</title>
<p>The mantra regarding RAM tends to be "more is
better". As discussed above, unused RAM is put to good
use as file system cache. The Apache processes grow
larger as you load more modules, especially modules that
generate dynamic content such as PHP and mod_perl. A
large configuration file, with many virtual hosts, also
tends to inflate the process footprint. Having ample RAM
lets you run Apache with more child processes, and thus
handle more concurrent requests.
</p>
<p>While different platforms treat their virtual memory
in different ways, it is never a good idea to configure
less disk-based swap space than you have RAM. The
virtual memory system was designed to take over when RAM
runs short, and when disk space, and with it swap space,
runs out, your server is liable to grind to a halt. You
may then have to power-cycle the machine, and your
hosting facility may charge you for the service.
</p>
<p>Naturally, this kind of failure occurs at the worst
possible moment: when the world has just discovered your
web site and is knocking insistently at your door. With
enough swap space, the machine may get badly overloaded
and become very, very slow while the system pages memory
between RAM and disk, but once the load subsides the
system should return to normal operation. Remember, you
still have <code>MaxClients</code> to keep things in
hand.
</p>
<p>Most unix-like systems use dedicated swap partitions.
At boot, the system finds all swap partitions on the
disk or disks, based on the partition type or on the
contents of <code>/etc/fstab</code>, and enables them
automatically. When adding a disk, or when installing
the operating system, be sure to allocate enough swap
space to accommodate an eventual RAM upgrade.
Reallocating disk space on a production system is a
painful and tedious exercise.
</p>
<p>Plan for swap space of twice your RAM, up to four
times in situations where frequent overload is expected,
and be sure to adjust these figures when you upgrade the
system's RAM. In a pinch, you can also use a file as
swap space; instructions for doing so can be found in
the man pages for <code>mkswap</code> and
<code>swapon</code>, or in those of the
<code>swap</code> programs.
</p>
</section>
<section id="ulimit-files-and-processes">
<title>ulimit: Files and Processes
</title>
<p>Suppose you have a machine with an enormous amount of
RAM and an astronomically fast processor; you could then
run hundreds of Apache processes as needed, but only up
to the limits imposed by your system's kernel.
</p>
<p>Now consider a situation where several hundred web
server processes are running; if some of them need to
spawn CGI processes in turn, the maximum number of
processes allowed by the kernel would quickly be
reached.
</p>
<p>In that case, you can raise this limit with the
command:
</p>
<example>
ulimit [-H|-S] -u [nouvelle valeur]
</example>
<p>This change must be made before the server is
started, since the new value applies only to the current
shell and to programs launched from it. In recent Linux
kernels the default has been raised to 2048. On FreeBSD,
the number appears to be limited to the unusual value of
513. In that system's default shell, <code>csh</code>,
the equivalent command is <code>limit</code>, with a
syntax analogous to that of the Bourne-style
<code>ulimit</code>:
</p>
<example>
limit [-h] maxproc [newvalue]
</example>
<p>The kernel may also limit the number of open files
per process. This is generally not a problem for
pre-forked servers, where each process handles one
request at a time. Threaded server processes, which
serve many requests simultaneously, are far more likely
to run into the file descriptor limit. You can raise the
open-file limit with the command:
</p>
<example>ulimit -n [newvalue]
</example>
<p>Again, this change must be made before the Apache
server is started.
</p>
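<p>One conventional place for such calls, assuming the
standard distribution layout, is the
<code>bin/envvars</code> file sourced by
<code>apachectl</code>, so the limits are raised every
time the server starts (the values shown are arbitrary):
</p>
<example>
# in bin/envvars (sourced by apachectl before httpd starts)<br />
ulimit -S -u 2048<br />
ulimit -n 65536
</example>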
</section>
<section id="setting-user-limits-on-system-startup">
<title>Setting User and Group Limits at System Startup
</title>
<p>On Linux, you can set the ulimit parameters at boot
by editing <code>/etc/security/limits.conf</code>. This
file lets you set "soft" and "hard" limits per user or
per group, and it contains comments explaining the
options. For this file to take effect, the file
<code>/etc/pam.d/login</code> must contain the line:
</p>
<example>session required /lib/security/pam_limits.so
</example>
<p>Each item can have a "soft" and a "hard" value: the
first is the default value and the second the maximum
value for that item.
</p>
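<p>A couple of illustrative <code>limits.conf</code>
entries, raising the process and open-file limits for a
hypothetical <code>apache</code> user (the values are
arbitrary):
</p>
<example>
apache soft nproc 2048<br />
apache hard nproc 4096<br />
apache soft nofile 4096<br />
apache hard nofile 65536
</example>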
<p>In FreeBSD's <code>/etc/login.conf</code>, these
values can be limited or extended system-wide,
analogously to <code>limits.conf</code>. "Soft" limits
are specified with the <code>-cur</code> parameter and
"hard" limits with the <code>-max</code> parameter.
</p>
<p>Solaris has a similar mechanism for manipulating
limit values at boot: in <code>/etc/system</code> you
can set kernel tunables, valid for the entire system, at
boot time. These are the same tunables that can be set
at run time with the <code>mdb</code> kernel debugger.
The equivalents of ulimit -u for setting the hard and
soft limits look like:
</p>
<example>
set rlim_fd_max=65536<br />
set rlim_fd_cur=2048
</example>
<p>Solaris computes the maximum number of processes
allowed per user (<code>maxuprc</code>) from the amount
of memory available on the system
(<code>maxusers</code>). You can review these values
with the command:
</p>
<example>sysdef -i | grep maximum
</example>
<p>Changing them is not recommended, however.
</p>
</section>
<section id="turn-off-unused-services-and-modules">
<title>Turn Off Unused Services and Modules
</title>
<p>Most Unix and Linux distributions enable scores of
services by default, and you probably need only a
minority of them. For instance, your web server does not
need to run sendmail, nor does it need to serve NFS, and
so on. Simply turn them off.
</p>
<p>On RedHat Linux, the chkconfig command-line utility
does this. On Solaris, the <code>svcs</code> and
<code>svcadm</code> commands respectively list the
enabled services and disable the ones you do not need.
</p>
<p>Also pay attention to the Apache modules loaded by
default. Most binary distributions of Apache httpd, and
the pre-installed versions that ship with Linux
distributions, load their Apache modules through
<directive>LoadModule</directive> directives.
</p>
<p>Unused modules can be deactivated: if you do not need
their functionality or the configuration directives they
implement, turn them off by commenting out the
corresponding <directive>LoadModule</directive> lines.
Do read up on a module before disabling it, though, and
keep in mind that disabling a module with a very small
resource footprint is not strictly necessary.
</p>
</section>
</section>
</section>
<section id="caching-content">
<title>Caching Content
</title>
<p>Requests for dynamically generated content generally
take more resources than requests for static content.
Static content consists of simple pages made of
documents or images on disk, and is served very quickly.
Many operating systems also automatically cache the
contents of frequently accessed files in memory.
</p>
<p>Processing dynamic requests, as noted earlier, can
take far more resources. Running CGI scripts, handing
requests off to an external application server, or
querying a database can add significant latency and load
to a busy web server. In many circumstances you can
improve performance by turning the most popular dynamic
requests into static ones. Two approaches to doing so
are discussed in the remainder of this section.
</p>
<section id="making-popular-pages-static">
<title>Making Popular Pages Static
</title>
<p>By pre-rendering the responses to your application's
most popular requests, you can significantly improve
your server's performance without giving up the
flexibility of dynamically generated content. For
example, if your application is a flower delivery
service, you would do well to pre-render your catalog
pages for red roses in the weeks before Valentine's Day.
When a user searches for red roses, they receive the
pre-rendered pages, while searches for yellow roses are
still served directly through a database query. A
particularly good tool for routing requests this way
ships with Apache: the mod_rewrite module.
</p>
<section id="example-a-statically-rendered-blog">
<title>Example: A Statically Rendered Blog
</title>
<!--we should provide a more useful example here.
One showing how to make Wordpress or Drupal suck less. -->
<p>Blosxom is a lightweight CGI web-logging program. It
is written in Perl and uses plain text files for its
entries. Besides running as a CGI program, Blosxom can
be run from the command line to pre-render blog pages.
When your blog starts drawing a large readership,
pre-rendering the pages as static HTML can significantly
improve your server's performance.
</p>
<p>To generate static pages with Blosxom, edit the CGI
script as described in its documentation. Set the
<code>$static_dir</code> variable to your web server's
<directive>DocumentRoot</directive> and run the script
from the command line as follows:
</p>
<example>$ perl blosxom.cgi -password='whateveryourpassword'
</example>
<p>You can run this script periodically from cron, or
whenever you add new content. To have Apache substitute
the static pages for the dynamic content, we will use
mod_rewrite. That module is included in the Apache
source code but is not compiled by default; it can be
built into the server by passing the
<code>--enable-rewrite[=shared]</code> option to the
configure command. Many binary distributions of Apache
ship with <module>mod_rewrite</module> included. In the
following example, an Apache virtual host serves the
pre-rendered blog pages:
</p>
<highlight language="config">
Listen *:8001
<VirtualHost *:8001>
ServerName blog.sandla.org:8001
ServerAdmin sander@temme.net
DocumentRoot "/home/sctemme/inst/blog/httpd/htdocs"
<Directory "/home/sctemme/inst/blog/httpd/htdocs">
Options +Indexes
Require all granted
RewriteEngine on
RewriteCond "%{REQUEST_FILENAME}" "!-f"
RewriteCond "%{REQUEST_FILENAME}" "!-d"
RewriteRule "^(.*)$" "/cgi-bin/blosxom.cgi/$1" [L,QSA]
</Directory>
RewriteLog "/home/sctemme/inst/blog/httpd/logs/rewrite_log"
RewriteLogLevel 9
ErrorLog "/home/sctemme/inst/blog/httpd/logs/error_log"
LogLevel debug
CustomLog "/home/sctemme/inst/blog/httpd/logs/access_log" common
ScriptAlias "/cgi-bin/" "/home/sctemme/inst/blog/bin/"
<Directory "/home/sctemme/inst/blog/bin">
Options +ExecCGI
Require all granted
</Directory>
</VirtualHost>
</highlight>
<p>
If the <directive>RewriteCond</directive> directives
find that the requested resource exists neither as a
file nor as a directory, the
<directive>RewriteRule</directive> directive passes its
path to the Blosxom CGI program, which generates the
response. Blosxom uses Path Info to designate blog
entries and index pages, so if a given Blosxom path
exists as a static file in the file system, the static
file is served instead. Any request whose response was
not pre-rendered is handled by the CGI program, which
means individual entries, including their comments, are
always served by the CGI program, so the comments always
show up. This configuration also keeps the Blosxom CGI
program out of the URL shown in the browser's address
bar. Finally, mod_rewrite is an extremely flexible and
powerful module; take the time to study it and work out
the configuration that best fits your situation.
</p>
</section>
</section>
<section id="caching-content-with-mod_cache">
<title>Caching Content With mod_cache
</title>
<p>The mod_cache module provides intelligent caching of
HTTP responses: it is aware of the expiration timing and
content requirements that are part of the HTTP
specification. The mod_cache module caches the response
content for a given URL. If content sent to the client
is cacheable, it is saved to disk, and subsequent
requests for that URL are served directly from the
cache. The provider module for mod_cache,
mod_disk_cache, determines how the cached content is
stored on disk. Most server systems have more disk than
memory, and it is worth remembering that some operating
system kernels transparently cache frequently accessed
disk content in memory, so replicating that in the
server is of limited use.
</p>
<p>To implement effective content caching and avoid
presenting users with invalid or stale content, the
application that generates the up-to-date content must
send the correct response headers. Without headers such as
<code>Etag:</code>, <code>Last-Modified:</code> or
<code>Expires:</code>, <module>mod_cache</module> cannot
make a sound decision about whether to cache the content,
serve it directly from the cache, or simply do nothing at
all. When testing caching, you may need to modify your
application or, if that is impossible, selectively disable
caching for the URLs that cause problems. The mod_cache
modules are not compiled in by default; to build them,
pass the <code>--enable-cache[=shared]</code> option to
the configure script. If you use a binary distribution of
Apache httpd, or it came with your port or package
collection, <module>mod_cache</module> is probably already
included.
</p>
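<p>As a rough sketch, that build step might look as
follows (the disk provider option name varies between
httpd versions; <code>--enable-disk-cache</code> matches
the mod_disk_cache provider named above):</p>
<highlight language="sh">
./configure --enable-cache=shared --enable-disk-cache=shared
make
make install
</highlight>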
<section id="example-wiki">
<title>Example: wiki.apache.org
</title>
<!-- Is this still the case? Maybe we should give
a better example here too.-->
<p>
The Apache Software Foundation Wiki is served by
MoinMoin. MoinMoin is written in Python and runs as
a CGI program. To date, attempts to run it under
mod_python have been unsuccessful. The CGI program
places an unacceptable load on the server,
especially when the Wiki is being indexed by search
engines such as Google. To lighten the load on the
machine, the Apache infrastructure team turned to
mod_cache. It turned out that
<a href="/httpd/MoinMoin">MoinMoin</a> needed a
small patch to behave properly behind a caching
server: some requests could never be cached, and
the Python modules concerned were updated to send
the correct HTTP response headers. After that
modification, caching in front of the Wiki was
enabled by adding the following lines to
<code>httpd.conf</code>:
</p>
<highlight language="config">
CacheRoot /raid1/cacheroot
CacheEnable disk /
# A page modified 100 minutes ago will expire in 10 minutes
CacheLastModifiedFactor .1
# In any case, revalidate pages after 6 hours
CacheMaxExpire 21600
</highlight>
<p>This configuration tries to cache any and all content
on its virtual host. It will never cache content for more
than six hours (see the <directive
module="mod_cache">CacheMaxExpire</directive> directive).
If the <code>Expires:</code> header is absent from the
response, <module>mod_cache</module> computes an
expiration time from the <code>Last-Modified:</code>
header. The computation, controlled by the <directive
module="mod_cache">CacheLastModifiedFactor</directive>
directive, rests on the assumption that a page modified
recently is likely to change again in the near future and
will therefore need to be cached again soon.
</p>
<p>
Note that it can pay off to <em>disable</em> the
<code>ETag:</code> header: for files smaller than
1 kB, the server must compute the checksum (usually
MD5) only to send a <code>304 Not
Modified</code> response, which costs CPU as well
as network resources for the transfer (one TCP
packet). For resources larger than 1 kB, the CPU
cost of computing the header on every request can
become significant. Unfortunately, there is
currently no way to cache these headers.
</p>
<highlight language="config">
<FilesMatch "\.(jpe?g|png|gif|js|css|x?html|xml)">
FileETag None
</FilesMatch>
</highlight>
<p>
In the example above, generation of the
<code>ETag:</code> header is disabled for most
static resources. The server does not generate
these headers for dynamic resources.
</p>
</section>
</section>
</section>
<section id="further-considerations">
<title>Further Considerations
</title>
<p>Armed with the know-how to tune a single system so that
it delivers the desired performance, we soon discover that
<em>one</em> system by itself can become a bottleneck. On
that subject, the Wiki page <a
href="http://wiki.apache.org/httpd/PerformanceScalingOut">PerformanceScalingOut</a>
describes how to scale a system out as it grows, and how
to tune several systems as a whole.
</p>
</section>
</manualpage>
|
The Yankee Stadium website lists an Army-Rutgers game taking place in 2015.
The teams have played the last six seasons. The 2011 game was held at Yankee Stadium. Army lost to Rutgers 28-7 on Nov. 10 in Piscataway, N.J.
The Newark (N.J.) Star-Ledger has reported Rutgers' switch to the Big Ten may happen as early as 2014.
Is Army willing to shuffle its schedule to save the Rutgers games?
"Yeah, if we think that's in the best interest of the program and the best interest of the young people in our program," Corrigan said.
If the games are played in September or October, they likely will not be at Yankee Stadium due to scheduling conflicts with baseball.
Philadelphia will host the Army-Navy game on Dec. 8 for the 84th time in 113 years of the rivalry.
Corrigan said geographically Philadelphia's Lincoln Financial Field works for both schools.
"I think this is the logical place for it to be," Corrigan said. "When you look at it, they (Navy) are 2 hours and 15 minutes and we are 2 hours and 45 minutes from here. Our cadets, who are not playing, can wake up at a reasonable time and bus down here. When it's in D.C. (like last year), they have to get up 0-dark-30 and come down."
Lincoln Financial Field will host the game in 2013, 2015 and 2017. The game moves to Baltimore's M&T Bank Stadium in 2014 and 2016. Other future dates have not been announced. |
Universal markers of thyroid malignancies: galectin-3, HBME-1, and cytokeratin-19.
Difficulties in diagnosis of thyroid lesions, even with histologic analysis, are well known. This study was carried out to evaluate the role of immunohistochemical markers including galectin-3, Hector Battifora mesothelial cell-1 (HBME-1), and cytokeratin-19 in the diagnosis and differential diagnosis of benign and malignant thyroid lesions. The expressions of galectin-3, HBME-1, and cytokeratin-19 were tested in formalin-fixed, paraffin-embedded tissues from 458 surgically resected thyroid lesions including non-neoplastic and neoplastic lesions. Immunostaining with the standard avidin-biotin complex technique was performed using monoclonal antibodies. In malignant neoplastic thyroid lesions, galectin-3, HBME-1, and cytokeratin-19 were generally diffusely expressed. Diffuse expression rates of these three markers were 72.3% (47/65), 70.7% (46/65), and 76.9% (50/65), respectively. The use of galectin-3, HBME-1, and cytokeratin-19 may provide significant contributions in the differential diagnosis of malignant thyroid tumors. Although focal galectin-3, HBME-1, and cytokeratin-19 expression may be encountered in benign lesions, diffuse positive reactions for these three markers are characteristic of malignant lesions. It was concluded that cytokeratin-19 alone and its combinations with other markers were more sensitive for accurate diagnosis of papillary carcinoma than the other combinations; meanwhile, there were similar results for follicular carcinomas with HBME-1 alone and its combinations. |
The College of New Jersey’s wrestling team lost 26-16 in their home opener against Stevens Institute of Technology. Stevens was in firm control of the match for most of the night. The College didn’t find their way onto the scoreboard until their 184-pound bout.
This loss marks the second year in a row that the College lost to Stevens in their home opener. Head coach Joe Galante talked about the loss after the match.
“We need to get a little better,” Galante said. “We can take away some positives from those last three bouts and I saw some greatness in some spots. At 125, we looked a little better than we did last week. At 149, we looked a little better than last week, the guy wrestled tough. At 165, we were right in that battle. The other guy was ranked in the country, but I think we can win that battle with a couple little changes. At 184, 197 and heavyweight, we are happy, but we are not done working yet.”
The College of New Jersey athletic logo.
The Lions lost their first seven bouts of the night, letting Stevens run out to a 25-0 lead. The closest matchup until this point was the 165-pound bout. The Lions’ 165-pounder Luke Balina went into battle with Thomas Poklikuha, who was ranked No. 8 in the country according to the NCAA Division III rankings.
The match felt like a back-and-forth tug-of-war between the two wrestlers. They fought aggressively, working toward a takedown. Balina fell behind early in the match after a slick takedown by Poklikuha. Balina battled back in the third, but just couldn’t hang on for the win, losing 9-7 in his bout.
The Lions fans were given something to cheer about during the 184-pound bout. Lions 184-pounder Daniel Kilroy came out and wrestled hard right from the first whistle. Kilroy got his first takedown early from a scramble that he came out on top of.
Kilroy grinded his opponent for most of the first period, earning 2:26 of riding time. He tilted his opponent with finesse but did not get any back points in the first period. In the second period, he built on his lead, tiring out his opponent even further.
In the third period, he ended the match early, pinning his opponent with vicious intent. His opponent tried to run out of a double leg takedown, but Kilroy managed to catch him by the waist, taking his opponent down hard.
Kilroy managed to land on his opponent’s chest, scoring back points immediately. After a short yet strong-willed struggle from his opponent, Kilroy earned the pin, getting the College its first six team points.
Galante talked about Kilroy’s performance.
“Kilroy’s a pinner,” Galante said. “He is a winner. Every time he goes out there he expects great things. We expect great things. I talked about the win in the locker room with him after the match. He’s not super happy with his performance, he wants to do better. In the second period, he didn’t score a lot of points. He didn’t stay as on it as he probably could have. It was like a second-period slump, a little bit. But it definitely is a good place to build from. Starting your first home dual off with a fall, I like that.”
Not to be outshined or outperformed, Alex Mirabella, the College’s 197-pounder came out like a bat out of hell and pinned his opponent before some people could understand what had just happened.
Mirabella got a quick takedown. He walked from the top position into a nasty near-side cradle that flattened his opponent’s back, getting the pin for his team. The Lions cut into their deficit with another six points.
Heavyweight Kyle Cocozza also had a solid match, winning by major decision. He picked his shots well throughout the match. He also showed his athleticism, going to the ground for better angles on his shots.
By the third period, he had exhausted his opponent, coasting to a 10-2 win and getting another four points for his team. Galante talked about his 197-pound and heavyweight performances.
“Both those guys are training early in the morning,” Galante said. “They train with me. They are doing one-on-one sessions with me early in the morning and I love that. They are showing sacrifice. They are sacrificing their time. They are sacrificing going to bed early the night before and I think that showed through on the mat tonight.”
Even with the surge at the end of the match, the Lions could not pull off the win. The Lions struggled, but the wrestling season is long and there is still time to get better. |
Primary Sources
The Winter of the Soviet Military
Description
By the end of December 1991, the Soviet Union was administratively dissolved. A few weeks beforehand, the United States' Central Intelligence Agency issued this report, assessing the state of the Soviet Military after its failed coup attempt in August of that year. The CIA observed that the Soviet Military suffered from two problems simultaneously. It was being starved of its traditionally huge budget, causing serious hardship for rank and file soldiers and thus disrupting the traditional chain of command. It was also suffering from general disorientation due to the dissolution of the Warsaw Pact and the end of the Cold War. The report predicted that, despite hardships, the Soviet military would continue on in compromised form, mainly because it was still preferable to civilian life for most of its soldiers and employees.
Source
Central Intelligence Agency, "The Winter of the Soviet Military: Cohesion or Collapse?" 5 December 1991, Cold War International History Project, Virtual Archive, CWIHP (accessed May 14, 2008).
Primary Source—Excerpt
Forces unleashed by the collapse of the Soviet system are breaking up its premier artifact--the Soviet military; the high command cannot halt this process. While a centralized command and control system continues to operate, political and economic collapse is beginning to fragment the military into elements loyal to the republics or simply devoted to self-preservation. . . .
Prospects for the Winter
Over the winter it is likely that the armed forces will maintain cohesion. We expect cohesion to hold whether the armed forces continue to decay under the nominal control of central authorities or whether agreements among republics lead to division of the armed forces among them. The latter case would mean the end of the traditional Soviet military. Even in a situation where its basic structures are maintained, however, the military will likely lose control of some units to republics and localities, or even collapse. Such loss of control could lead to incidents of localized violence. . . .
Our conclusion that the armed forces are likely to maintain cohesion over the winter reflects the following:
Military service, for all its problems, will continue to be more appealing to many than a return to civilian life. The availability of resources in military supply channels and reserve stockpiles, in contrast to bleak civil prospects, will keep many units largely intact. . . .
Alternative Outcomes
Though unlikely, there is still a significant chance of outcomes involving the severe degradation or destruction of organizational cohesion. These include widespread local unit autonomy and total collapse of the armed forces. . . .
Least likely are conditions, much better than we anticipate, that could halt the decay and breakup of the Soviet armed forces. Such an outcome would require major improvement in the economic conditions now affecting the military and countering the centrifugal forces at work in the former USSR.
How to Cite this Source
Central Intelligence Agency, "The Winter of the Soviet Military," Making the History of 1989, Item #128, http://chnm.gmu.edu/1989/items/show/128 (accessed December 09 2016, 10:02 pm). |
Ferocactus latispinus
Ferocactus latispinus is a species of barrel cactus native to Mexico. Originally described as Cactus latispinus in 1824 by English naturalist Adrian Hardy Haworth, it gained its current name in 1922 with the erection of the genus Ferocactus by American botanists Britton and Rose. The species name is derived from the Latin latus "broad", and spinus "spine". Ferocactus recurvus is a former name for the species.
Distribution
The species is endemic to Mexico; the more widely distributed subspecies latispinus ranges from southeastern Durango, through Zacatecas, Aguascalientes, east to the western parts of San Luis Potosí, Hidalgo and Puebla, as well as to eastern Jalisco, Guanajuato, Querétaro and Mexico State. Subspecies spiralis is restricted to the southern parts of Oaxaca and Puebla.
Description
Ferocactus latispinus grows as a single globular light green cactus reaching 30 cm (12 in) in height and 40 cm (16 in) across, with 21 acute ribs. Its spines range from reddish to white in colour, are flattened, and reach 4 or 5 cm long. Flowering is in late autumn or early winter. The funnel-shaped flowers are purplish or yellowish and reach 4 cm long, and are followed by oval-shaped scaled fruit which reach 2.5 cm (1 in) long.
Subspecies
Two subspecies are recognised, differing in their number of radial spines.
Ferocactus latispinus subsp. latispinus — 9–15 radial spines, Devil's Tongue Barrel or Crow's Claw Cactus.
Ferocactus latispinus subsp. spiralis — 5–7 radial spines.
Cultivation
Ferocactus latispinus is fairly commonly cultivated as an ornamental plant. It blooms at an early age which is a desirable horticultural feature. It is hardy to −4 °C, with an average minimum temperature of 10 °C.
The slime mold, Didymium wildpretii feeds on the decaying remains of F. latispinus in Mexico.
References
latispinus
Category:Cacti of Mexico
Category:Endemic flora of Mexico
Category:Flora of Central Mexico
Category:Flora of Guanajuato
Category:Flora of Hidalgo (state)
Category:Flora of Puebla
Category:Flora of Jalisco
Category:Flora of Querétaro
Category:Flora of Oaxaca
Category:Flora of the State of Mexico
Category:North American desert flora |
< S7L> Would you guess the nationality of the genius behind this code: datapublikacjijava=new Date(przetarg.jakistartpublikacjirok.value, przetarg.jakistartpublikacjimiesiac.value-1, przetarg.jakistartpublikacjidzien.value) |
Don Martin (field hockey)
Don Martin (born 8 February 1940) is a former Australian field hockey player. He represented Australia at the 1964 (winning bronze) and 1968 (winning silver) Summer Olympics. Don was inducted as a member of the Western Australian Hockey Champions on 23 May 2010.
Biography
Don was born in Kuala Lumpur, moving to Australia in 1951 to attend boarding school at Aquinas College in Perth, where he was a member of the First XI Hockey team for 6 years from 1952-1957. He was selected in the state under 16 boys team in 1954/55. After leaving school, Don began playing in the West Australian Hockey Association First Division Competition.
In 1959, Don was selected to play in the state under-21 colts team, and then in the state senior team in 1960-65 and 1968. He first represented Australia in 1961, playing in a Manning Memorial Cup victory over New Zealand, and continued playing for Australia from 1962-64 and in 1968. He was chosen in the 1964 Olympic team, winning bronze, and four years later in Mexico, winning silver.
He was an Australian Badged Umpire and also a Western Australian State Senior Hockey Selector.
References
Category:Living people
Category:Olympic field hockey players of Australia
Category:People educated at Aquinas College, Perth
Category:Medalists at the 1968 Summer Olympics
Category:Medalists at the 1964 Summer Olympics
Category:Olympic silver medalists for Australia
Category:Olympic bronze medalists for Australia
Category:1940 births
Category:Australian male field hockey players
Category:Field hockey players at the 1964 Summer Olympics
Category:Field hockey players at the 1968 Summer Olympics
Category:Olympic medalists in field hockey |
Fibronectin synthesis and degradation in human fibroblasts with aging.
Fibronectin was measured in early and late passaged human skin fibroblasts utilizing immunoprecipitation and polyacrylamide gel electrophoresis. A progressive increase in the rate of fibronectin and total cellular protein synthesis per cell was observed in late passaged human diploid fibroblasts. The absolute protein concentration increased in the late passaged fibroblasts. There was no significant difference in the [3H]leucine incorporation into fibronectin or total cellular protein/mg protein. The turnover of fibronectin and total cellular protein did not differ between early and late passaged fibroblasts. The transport of fibronectin to the cell membrane was similar in late passaged fibroblasts. The increased fibronectin synthesis in senescent fibroblasts appeared to correlate with the general increase in the rate of protein synthesis per cell. |
“Sure, in the back of people’s minds. We’ve thought about it,” Bill Koefoed, Microsoft’s general manager of investor relations, told the Times.
Here’s an excerpt from Dudley’s column in today’s paper:
(Bill) Gates and (CEO Steve) Ballmer together own about 11 percent of the shares. About two-thirds of the rest is held by about 1,700 institutional investors and mutual funds.
To go private, Microsoft would have to reduce the number of shareholders below 300.
Maybe one could be the Gates Foundation. Imagine what it would do for the company’s reputation and morale if people buying Windows knew a portion of the profits would directly benefit the world’s poor?
Everyone would love that, except Apple and that person at last week’s shareholders meeting who asked Gates to give more to investors and less to sick and impoverished children. |
This information and translation comes from the Portuguese blog Amigo de Israel 2.0. It is more of the poison fruit of Europe’s suicidal migration policies. “We were truck drivers for many years and we had heard reports that in the Nador area was very dangerous but we never imagined it would be so violent.” Just wait: this is just the beginning. Europe hasn’t dreamed of the violence that is soon to come, and when it comes, it will be the full responsibility of Merkel, Macron, and their globalist, socialist henchmen. In Europe, free societies, and peaceful societies, will soon be but a memory.
“It looked like a horror film”: Couple lives through moments of panic with migrants in Morocco
Portuguese vehicles stoned and invaded by a group of young people who intended to enter Europe illegally.
“It was like a horror movie: we spent two hours trying to get them off the truck, but the more times we stopped, the more of them climbed on. They threw stones and sticks at the trucks to make us stop, and they did not even respect the police. (…)
CORREIO DA MANHÃ
VÍDEO here: https://www.jn.pt/local/videos/interior/camionistas-de-vila-verde-atacados-por-imigrantes-em-marrocos-10644851.html?fbclid=IwAR0DQFuBirdLsvV12soPuDl7uXZ7yNMayHfM0L72c-oAhd22ah17Pv1jB_Q&jwsource=cl
Vila Verde truck drivers “attacked” by immigrants in Morocco
Apparently, the men wanted to get on the Portuguese lorry so they could get on the boat that connects Morocco and Europe.
According to a Portuguese man who was traveling in the truck, the couple was passing through Nador when a group of more than a dozen people tried to climb onto the lorry. “We were truck drivers for many years and we had heard reports that the Nador area was very dangerous, but we never imagined it would be so violent,” Manuel Terrinha, who travels with his wife, Emília Cabanelas, told JN.
“It looked like a movie: some men got in front of the truck to make us stop while others tried to climb on in every way and in every place,” explained the truck driver, who transports live animals between several European countries and Morocco.
The couple’s truck and another lorry, also belonging to a Vila Verde company, were passing through Nador when a group of more than a dozen people tried to climb onto the vehicles. The confusion was great, some people were injured and the local authorities were called.
The couple tried, in various ways, to stop the migrants from remaining in the truck. “There were run-ins, there were assaults, but fortunately everything went well and we were able to continue the trip with the help of the authorities,” said Manuel Terrinha.
In the midst of the “affliction”, Emília Cabanelas even used one of the tools for guiding the animals to ward off the illegal immigrants, who, according to the couple, were from Morocco, Algeria and Senegal.
“It was the first time I was in Nador, but we already have new transports scheduled for the next few days and we have to work,” concluded the truck driver. The couple are already on their way back to Portugal.
Should any Moroccan citizen enter Europe clandestinely, the drivers of the lorries are legally liable and may be detained immediately.
This is not the first time that Portuguese truck drivers have been caught up in attempts to leave Morocco and enter Spain. In recent times, local authorities have dismantled several illegal-emigration networks and, according to Morocco World News, five Moroccans have been arrested on suspicion of fomenting illegal emigration.
Jornal de Notícias |
Q:
Colors on the set $S=\{1,2, \cdots ,1000\}$.
To each element of the set $S=\{1,2, \cdots ,1000\}$ a color is assigned. Suppose that for any two elements $a$ and $b$ of $S$, if $15$ divides $a+b$, then they are both assigned the same color. What is the maximum possible number of distinct colors used?
Please provide some hints.
A:
One hint: If $a\equiv b\pmod{15}$, then choose some $c\equiv -a\pmod{15}$ -- this can easily be done with $c\in S$. Then $a$ and $c$ must have the same color, and $c$ and $b$ must also have the same color. Thus within each residue class only one color can be used -- and sometimes different residue classes also need to share one color.
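Following this hint: residues $r$ and $15-r \pmod{15}$ are forced to share a color, giving the seven pairs $\{1,14\},\{2,13\},\dots,\{7,8\}$ plus the class $\{0\}$, so at most $8$ colors are possible, and $8$ is achievable by giving each merged class its own color. A brute-force union-find sketch (illustrative Python, not part of the original answer) that merges every pair with $15\mid a+b$ confirms the count:

```python
def max_colors(n=1000, m=15):
    """Count the color classes forced by: m | (a+b) => same color."""
    parent = list(range(n + 1))

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Merge every pair a < b whose sum is divisible by m.
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            if (a + b) % m == 0:
                parent[find(a)] = find(b)

    # Each remaining class can receive its own color.
    return len({find(x) for x in range(1, n + 1)})

print(max_colors())  # 8: seven pairs {r, 15-r} plus the class {0}
```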
|
Cheap Hotels in Durban Can Be Fun For Anyone
Menu
Cheap Hotels in Durban Can Be Fun For Anyone
There is a huge variety of Cheap Hotels in Durban to choose from, ranging from 5-star luxury resorts to backpacker dorms. Most of the hotels in Durban are located along the Durban beachfront, with an ever-growing number of resorts heading north toward the beach resort of Umhlanga Rocks. With the staging of the 2010 World Cup, there was a massive rise in the number of bed and breakfasts that can be booked in Durban if you would like a more personal touch to your stay. These usually have pools and are in the quieter residential areas. For the cheapest hotels in Durban and their prices, visit our website.
To the west of Durban, further away from the tourist beaches, are the great Natal Midlands, where you can drive the renowned Midlands Meander and stay in some real country cottages; if you are a golf enthusiast, there are some top-quality golf resorts around the Durban area, including affordable lodges in Durban. For the business traveller, Durban has many top-class meeting rooms in deluxe hotels and has hosted many company conferences, with the added bonus of the warm Indian Ocean right on the doorstep for delegates to take advantage of after a hard day's work.
The ICC in Durban is Africa's premier conference venue and stands tall on the world stage, hosting many international conferences with great success year after year. There are plenty of hotels around this location, but it is always advisable to book in advance to guarantee your place.
Families are also very well catered for in Durban, along with self-catering apartments. Many Durban hotels have kids' clubs, children's pools and play areas. Most of these family-friendly Durban hotels and apartments can be found in the Umhlanga area, which hums every summer with local and international visitors descending to make the most of the superb swimming beaches and restaurants, all within a few minutes' walk from your hotel. For children's holiday fun, pick cheap hotels on the Durban beachfront.
Over the past couple of years, many Cheap Hotels in Durban have undergone considerable renovations, which ensure Durban is a top-quality destination with top-of-the-range resorts and restaurants. Hotels in Durban near the beach are always a favourite.
The rise of international airlines flying into Durban also makes it very easy to get to Durban from around the world, usually with just one change! Emirates was the first international airline to fly in, and now Qatar and Turkish Airlines look set to raise Durban's profile on the international stage as they also fly in direct. 2016 looks set to be a very interesting year in Durban as the increase in flights helps boost tourism.
The city of Durban is South Africa's premier sub-tropical beach holiday destination. It is the most populous city in KwaZulu-Natal, a province of South Africa. Luxury and quality are absolutely non-negotiable when it comes to Cheap Hotels in Durban. Durban's climate is warm and mild throughout the year. Surfing and swimming are the popular activities holidaymakers take advantage of on the many safe beaches. Durban is renowned for being one of the most surf-friendly cities in the world. Other popular beach activities are angling and boating. |
Q:
Apply function to list elements only if applicable
Is it possible to apply a function to list elements only if the function is applicable to the element?
For example
{1.2, 3, {2.3, 5.4}, null, "fff"}
Floor[%]
gives
{1, 3, {2, 5} ,Floor[null], Floor["fff"]}
but I would like to get
{1, 3, {2, 5} ,null, "fff"}
A:
It would be the combination of the comments of Szabolcs and David G. Stork
list = {1.2, 3, {2.3, 5.4}, null, "fff"};
f[x_?NumericQ] := Floor[x]; f[x_] := x;
MapAll[f, list]
{1, 3, {2, 5}, null, "fff"}
A:
ClearAll[f]
f = Floor@# /. Floor -> Identity &;
f@{1.2, 3, {2.3, 5.4}, null, "fff"}
{1, 3, {2, 5}, null, "fff"}
f@{1.2, 3, {2.3, "ff"}, null, "fff", Pi}
{1, 3, {2, "ff"}, null, "fff", 3}
If the function returns unevaluated when a 'non-applicable' input is passed, this approach is more convenient as it does not require detailed knowledge of the function's argument requirements.
ClearAll[h]
h[x_] := 500 /; PrimeQ[x]
h[x_] := 500 /; Divisible[x, 3]
h /@ Range[15] /. h -> Identity
{1, 500, 500, 4, 500, 500, 500, 8, 500, 10, 500, 500, 500, 14, 500}
StringLength /@ {"abcabc", "bcdbc", 234} /. StringLength -> Identity
{6, 5, 234}
|
business
Updated: Sep 17, 2019 14:44 IST
In good news for salaried people ahead of the festival season, labour minister Santosh Gangwar said on Tuesday that over 6 crore EPFO members will get 8.65 per cent interest on their deposits for 2018-19, up from the current 8.55 per cent.
“...ahead of the festival season, over 6 crore EPFO subscribers would get 8.65 per cent interest for 2018-19,” Gangwar told reporters on the sidelines of a function in New Delhi. At present, the EPFO is settling PF withdrawal claims at 8.55 per cent interest rate, which was approved for 2017-18.
The announcement was preceded by differences over the rate between the labour and the finance ministry. The matter was resolved after Gangwar recently met finance minister Nirmala Sitharaman and explained that the organisation is left with a sufficient surplus even after paying 8.65% interest rate to its 46 million subscribers, an official said requesting anonymity.
The EPF interest rate is determined by Central Board of Trustees (CBT), which is the apex decision-making body and the same is notified by the labour ministry after taking concurrence of the finance ministry. In a meeting of CBT on February 19, the trustees decided on a rate of 8.65% for FY-19 but the finance ministry wanted EPFO to cut the rate by at least 10 basis points. The government could not notify the rate because of the finance ministry’s objection, which led to the delayed disbursal of interests in subscribers’ accounts by almost five months, the officials said.
The interest rates on EPF are substantially higher than that of similar government schemes. The finance ministry last month slashed interest rate on General Provident Fund (GPF) and other similar funds to 7.9% for the quarter ending September 30, 2019, as compared to 8% offered to government employees in the previous quarter. |
Q:
ng-grid is not able to show the column data
The following plnkr shows the issue that I am facing.
Plnkr
My controller is as follows. I'm expecting the columnA, columnB to show up in the grid but it does not.
var app = angular.module('myApp', ['ngGrid']);
app.controller('MyCtrl', function($scope) {
$scope.gridOptions = {
data: 'myData',
columnDefs: [{ field: "key", width: 120, pinned: true },
{ field: "a-b-c", width: 120 },
{ field: "d-e-f", width: 120 }]
};
$scope.myData = [
{
'key':"SomeKey",
'a-b-c':"columnaA",
'd-e-f':"columnB"
}];
});
When run, both the columns with '-' in their name show 0.
A:
The hyphens in the field names are the problem: the grid evaluates the field string as an expression, so a-b-c is parsed as subtraction rather than a property lookup. You could rename the keys and use displayName instead, like this
var app = angular.module('myApp', ['ngGrid']);
app.controller('MyCtrl', function($scope) {
$scope.myData = [{
'key':'SomeKey',
'abc':'columnaA',
'def':'columnB'
}];
$scope.gridOptions = {
data: 'myData',
columnDefs: [{ field: 'key', width: 120, pinned: true },
{ field: 'abc', displayName: 'a-b-c', width: 120 },
{ field: 'def', displayName: 'd-e-f', width: 120 }]
};
});
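A quick plain-JavaScript sketch (not ng-grid internals, just an illustration of the parsing problem) shows why the hyphenated names misbehave: bracket access finds the property, while evaluating the same string as an expression treats the hyphens as minus operators.

```javascript
// Bracket access treats 'a-b-c' as a literal property name...
const row = { "a-b-c": "columnaA" };
console.log(row["a-b-c"]); // "columnaA"

// ...but an expression evaluator (as a field parser would) reads the
// hyphens as subtraction between three identifiers.
const a = 1, b = 2, c = 3;
console.log(eval("a-b-c")); // -4, i.e. a minus b minus c
```

This is why renaming the keys (or moving the hyphenated text into displayName) fixes the grid.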
|
Current steering and current focusing with a high-density intracochlear electrode array.
Creating high-resolution or high-density, intra-cochlear electrode arrays may significantly improve quality of hearing for cochlear implant recipients. Through focused activation of neural populations such arrays may better exploit the cochlea's frequency-to-place mapping, thereby improving sound perception. Contemporary electrode arrays approach high-density stimulation by employing multi-polar stimulation techniques such as current steering and current focusing. In our procedure we compared an advanced high-density array with contemporary arrays employing these strategies. We examined focused stimulation of auditory neurons using an activating function and a neural firing probability model that together enable a first-order estimation of an auditory nerve fiber's response to electrical stimulation. The results revealed that simple monopolar stimulation with a high-density array is more localized than current steering with a contemporary array and requires 25-30% less current. Current focusing with high-density electrodes is more localized than current focusing with a contemporary array; however, a greater amount of current is required. This work illustrates that advanced high-density electrode arrays may provide a low-power, high-resolution alternative to current steering with contemporary cochlear arrays. |
10.42 miles
13.96 miles
Ozzie Ice Skating Rink is a 2 sheet indoor ice skating rink that is open year round. Ice skating sessions for All Ages are available at Ozzie Ice. Ice skating lessons are also available. Ozzie Ice Skating Rink is also home to ice hockey leagues for all
19.64 miles
Alltel Ice Den Ice and Curling Rink, Roller Skating Rink and Curling Facility is a 2 sheet ice, 1 floor and 2 sheet indoor facility that is open year round. Public ice and roller skating sessions for all ages are available at Alltel Ice Den Ice a
24.2 miles
Oceanside Ice Arena Ice Skating Rink is a 1 sheet indoor ice skating rink that is open year round.Oceanside Ice Arena Ice Skating Rink is also home to ice hockey leagues for all ages. Pickup hockey is also offered.
242.17 miles
Fiesta Rancho Ice Skating Rink is a 1 sheet indoor ice skating rink that is open year round. Ice skating sessions for All Ages are available at Fiesta Rancho. Ice skating lessons are also available. Fiesta Rancho Ice Skating Rink is also home to ice hoc |
Newcastle United will still have a game on their hands on Sunday – according to Kolo Toure.
The Liverpool defender believes the title race could still have another twist and knows his side must stay alert, keep their side of the bargain and beat Newcastle.
Manchester City can clinch the title if they get a point against West Ham, but Toure said: “We’ll be putting pressure on right until the last game.
“I am very proud of what the team have been doing. At the start of the season nobody was expecting us to be where we are right now.
“I think everybody has been surprised how we cope with the pressure and how we’ve coped with the big teams.
“For us the season has been great. This is a really young team.”
Toure wants Liverpool to end on a high, but says this year’s experiences will help them in the future.
He said: “We are going to take the experience from this year. We will learn from that. The only way to learn is by making mistakes.
“We’ll see what went wrong, then try to do better next year.”
Newcastle’s clash with Liverpool will be broadcast on Sky at the same time as Man City’s home clash with West Ham.
But City still have a tough game on their hands. Ex-Toon stars Kevin Nolan and Andy Carroll will ensure that moneybags City have it all to do. |
[Effects of Huoxue Bushen Mixture on skin blood vessel neogenesis and vascular endothelial growth factor expression in hair follicle of C57BL/6 mice].
To investigate the possible stimulating mechanism of Huoxue Bushen Mixture (HXBSM), a traditional Chinese compound medicine, on hair growth of mice by measuring changes in skin blood vessel neogenesis and vascular endothelial growth factor (VEGF) expression in the hair follicle. Hot rosin and paraffin mixture depilation was used to induce C57BL/6 mice hair follicles to enter from telogen into anagen. Ninety C57BL/6 mice were divided into 3 groups randomly: HXBSM group, Yangxue Shengfa Capsule (YXSFC, another traditional Chinese compound medicine) group and untreated group. The mice were fed with the corresponding drugs after modeling. The hair growth of the mice was observed every day. Ten mice from each group were sacrificed at days 4, 11 and 17 respectively. Skin blood vessel neogenesis was counted on pathological sections and VEGF expression in the hair follicle was measured by immunohistochemistry. The number of newly formed local blood vessels observed in the HXBSM group was larger than that in the untreated group at day 4 (P<0.05), and evidently larger than that in the YXSFC group and the untreated group at day 11 (P<0.05). The expression of VEGF in the hair follicle was distinctly higher than that in the YXSFC group and the untreated group at day 11 and day 17 (P<0.05). HXBSM up-regulates VEGF expression to accelerate blood vessel neogenesis and hair growth. |
Regulation of cytosolic free Ca2+ in cultured rat alveolar macrophages (NR8383).
Ca2+ mobilization in the rat alveolar macrophage cell line NR8383 was examined with the Ca2+-sensitive fluorescent probe Fura-2. ATP and norepinephrine elicited a 108 and 46% increase, respectively, in cytosolic free Ca2+ concentration ([Ca2+]i). Acetylcholine, nicotine, isoproterenol, substance P, and vasoactive intestinal polypeptide did not alter [Ca2+]i. Inositol 1,4,5-trisphosphate (IP3) formation was also activated by ATP. The carbohydrate-rich cell wall preparation, zymosan, induced a gradual [Ca2+]i increase only in the presence of external Ca2+, but did not activate IP3 formation. This increase was abolished by laminarin and by removal of extracellular Ca2+, suggesting that the [Ca2+]i increase was activated by beta-glucan receptors and mediated by Ca2+ influx. This influx was significantly reduced by SKF96365, but not by nifedipine, ω-conotoxin GVIA, ω-agatoxin IVA, or flunarizine. These results suggest that release of intracellular Ca2+ in NR8383 cells is regulated by P2-purinoceptors and that zymosan causes Ca2+ influx via a receptor-operated pathway. |
1. Field of the Invention
The invention relates to a method of preparing steroid compounds, in particular compounds requiring a particle size within a specific range.
2. Information Disclosure Statement
In British Application No. 82-31278 filed Nov. 2, 1982, which was refiled as Application No. 83-29173 on November 1, 1983, and corresponding U.S. application Ser. No. 545,810 filed Oct. 26, 1983 (George Margetts, inventor) there are described and claimed, in particular for use in preparing pharmaceutical compositions having improved absorption characteristics, certain 2.alpha.-cyano-4.alpha.,5.alpha.-epoxy-3-oxosteroid compounds, the compounds being in particulate form and consisting of particles having a mean equivalent sphere volume diameter less than about 20 .mu.m, at least 95% of the particles having a particle size of less than about 50 .mu.m.
The earlier applications are particularly concerned with the steroid compound having the common name "trilostane", although the earlier invention also embraces the steroid compound known by the common name "epostane", which is a compound having the formula: ##STR1##
The earlier applications describe the preparation of, for example, trilostane in particulate form within the specified particle size range by the use of milling, preferably using a fluid energy (air) mill under suitable conditions e.g. of air pressure and feed rate. In the case of epostane, however, we have observed that the raw compound is not easily reduced to a suitable particle size using a milling technique, and in some instances milling can lead to a polymorphic change which is undesirable, or to the formation of other impurities. |
1. Field of the Invention
The invention is in the field of user interfaces for eye tracking devices, particularly as applied to the control of wireless communication and other devices.
2. Description of the Related Art
As cellular telephones and other mobile devices have proliferated, so has the expectation that individuals will always have the option to instantly communicate with their contacts. Thus in both business and in private matters, when an individual is not able to instantly respond to at least text messages, this expectation goes unmet, and social friction and/or lost business opportunities can result. Although cell phone and text communications are often infeasible during certain times of the day, the perception remains that the only reason why the recipient of a message may not have responded is due to a deliberate desire of the recipient to ignore the message.
However the act of turning on a cell phone, scrolling through incoming text messages, and then responding to the text messages can be obtrusive, conspicuous and in some situations inappropriate. Thus there are many times when it is inadvisable or socially awkward to break off a conversation to respond to an incoming cellular phone text message. Indeed, an important client or loved one may be insulted if this occurs. Thus at present, a cell phone user is faced with the difficult problem of trying to balance priority between the environment e.g., a person they are talking to face to face, versus the person who is trying to contact them.
A similar problem can often be encountered by a disabled person who may wish, for example, to remotely control an assistive device while, at the same time, not struggle with or draw attention to the disability. Likewise, military personnel may need to discreetly control a bomb-diffusing robot, drone plane or other remote vehicle. In these examples, the user may not be able to use his or her hands, and wishes to appear as outwardly normal as possible. Thus, methods to allow an inconspicuous eye control device to manage various remote functions are of great importance. Indeed, there are many situations in life where non-disabled and non-military persons may also wish to inconspicuously eye control various types of devices as well, including using an embodiment of the eye gaze interface to operate functions of a motor vehicle, robotic arm and hands-free camera. |
Antonin-Just-Léon-Marie de Noailles
Antoine Just Léon Marie de Noailles (19 April 1841 in Paris – 2 February 1909), 9th prince de Poix, 6th Spanish duc de Mouchy from 1846, and 5th French duc de Mouchy and duc de Poix from 1854, was a French nobleman.
Son of Charles-Philippe-Henri de Noailles (1808–1854), duc de Mouchy, and the duchesse Anne Marie Cécile de Noailles (1812–1848), he was married on 18 December 1865 to the princesse Anne Murat (1841–1924), daughter of Prince Napoleon Lucien Charles Murat.
They had two children:
François Joseph Eugène Napoléon de Noailles (1866–1900), prince de Poix
Sabine Lucienne Cécile Marie de Noailles (1868–1881)
|
Joshua Freeman, CP24.com
Transit service was disrupted at Islington Station Monday morning after a parked bus went up in flames.
Crews were called to the station at around 10:45 a.m. after a Mississauga Transit bus in one of the bays caught fire, Toronto Fire said.
Crews arrived to find the bus fully involved in flames, with heavy smoke visible around the station.
No one was aboard the bus when it caught fire and no injuries have been reported.
Crews have since extinguished the fire. It’s not yet clear how it started.
Trains were bypassing the station, but service has since resumed at Islington. |
The UK Toy Fair kicks off this week and with it comes an announcement of a cool-looking Star Wars playset for Hornby's Scalextric line. It is a Battle of Hoth set from The Empire Strikes Back.
The Scalextric set (£99.99 RRP) features two snow speeders and cut-out scenery including artillery and AT-AT Walkers, and the 500cm track has an ‘ice-bridge’ section. |
72 Wis.2d 389 (1976)
241 N.W.2d 384
WISCONSIN GAS COMPANY, Appellant,
v.
CRAIG D. LAWRENZ & ASSOCIATES, INC., Respondent.
No. 672 (1974).
Supreme Court of Wisconsin.
Submitted on briefs April 12, 1976.
Decided May 4, 1976.
*390 For the appellant the cause was submitted on the brief of Wallace E. Zeddun and Robert A. Nuernberg, both of Milwaukee.
For the respondent the cause was submitted on the brief of John L. North of Beloit.
*391 BEILFUSS, J.
The resolution of this appeal depends upon a proper construction of sec. 66.047 (1), Stats. 1971. That section provides:
"Interference with public service structure. (1) No contractor having a contract for any work upon, over, along or under any public street or highway shall interfere with, destroy or disturb the structures of any public service corporation encountered in the performance of such work so as to interrupt, impair or affect the public service for which such structures may be used, without first procuring written authority from the commissioner of public works, or other properly constituted authority. It shall, however, be the duty of every public service corporation, whenever a temporary protection of, or temporary change in, its structures, located upon, over, along or under the surface of any public street or highway is deemed by the commissioner of public works, or other such duly constituted authority, to be reasonably necessary to enable the accomplishment of such work, to so temporarily protect or change its said structures; provided, that such contractor shall give at least 2 days' notice of such required temporary protection or temporary change to such corporation, and shall pay or assure to such corporation the reasonable cost thereof, except when such corporation is properly liable therefore under the law, but in all cases where such work is done by or for the state or by or for any county, city, village, or town, the cost of such temporary protection or temporary change shall be borne by such public service corporation."
From the complaint in this action it appears that the defendant had contracted with Shawano Lake Sanitary District No. 1 to install sewer and water works in an area around Shawano Lake. It is not specifically alleged that the contract called for work to be done "upon, over, along or under any public street or highway." However, the exhibits attached to the complaint itemizing the particular changes made by the plaintiff indicate that those changes occurred in the vicinity of, if not directly over, under or upon, public streets or highways. Furthermore, *392 the parties appear to agree that the defendant's contract called for the performance of work within the purview of the statute.
In sustaining the defendant's demurrer, the county court held: (1) That Shawano Lake Sanitary District No. 1 is an "other properly constituted authority" entitled to give written authority to interfere with the plaintiff's structures encountered by the defendant in its performance of the contract; and (2) that the plaintiff is required to bear the cost of the temporary changes because the Shawano Lake Sanitary District No. 1 is included as "any county, city, village, or town" as intended by the statute.
On appeal, the plaintiff argues that the sanitary district is not an "other properly constituted authority" entitled to give written authority to the defendant to interfere with its structures. Relying upon Wisconsin Power & Light Co. v. Gerke (1963), 20 Wis. 2d 181, 121 N. W. 2d 912, the plaintiff contends that the defendant's interference was unlawful and that it must therefore respond in damages for the cost of the temporary changes.
In Gerke, the defendant contractor was engaged in expressway construction work pursuant to a contract with the state highway commission. In the course of its work, the contractor requested the plaintiff utility to protect or remove its wires which were strung over the expressway at a certain point. The utility declined to do so. The contractor proceeded to knock down one of the utility's poles, thereby forcing the utility to remove the wires. The utility brought an action to recover the expense incurred as a result of the contractor's actions and the contractor counterclaimed for its alleged business losses caused by the utility's failure to comply with the request. From a judgment for the plaintiff, the contractor appealed.
*393 This court, in construing "commissioner of public works, or other properly constituted authority," stated at page 186:
"From the history and context of the section, the `commissioner of public works' must mean the board of public, works (consisting of commissioners) in cities where they exist and have supervision over construction and maintenance of streets. `Other properly constituted authority' is construed to mean the public body having supervision over construction and maintenance of the highway in question similar to the supervision vested in the board of public works in cities where such boards exist. In the case before us the properly constituted authority was the state highway commission."
Because no written authority had been obtained from the highway commission to interfere with the plaintiff's lines, the court held the contractor was prohibited from knocking down the pole. The court affirmed a recovery by the utility of the resulting damages.
Shawano Lake Sanitary District No. 1 is a town sanitary district organized pursuant to the provisions of secs. 60.30-60.309, Stats. As such, it has only the powers set forth in that section. Under sec. 60.306 (2), the town sanitary district commission "shall project, plan, construct and maintain a system or systems of waterworks, garbage or refuse disposal or sewerage, including sanitary sewers, surface sewers or storm water sewers, provide for sewage collection, provide chemical treatment of waters ... or all of such improvements or any combination thereof necessary for the promotion of the public health, comfort, convenience or public welfare of such district, and such commission is authorized to enter into contracts and take any or all proceedings necessary to carry out such powers and duties." Nowhere is the district given authority with respect to the maintenance and construction of public streets.
*394 The defendant argues that an agreement exists between the district and the various towns which it serves to the effect that it may undertake any excavation of the public streets as is necessary to the performance of the construction project here involved. That agreement is not a part of the record on this appeal. In any case, such an agreement would only deal with the right of the district to interfere with the use of the streets so far as is necessary to complete the project. It would have no bearing on the issue of whether, in the course of that project, the contractor is entitled to interfere with the structures of a public service corporation. That determination is required to be made on the basis of "reasonable necessity" by the "commissioner of public works or other properly constituted authority" having supervision over construction and maintenance of the highway in question.
Even conceding that no proper written authority was obtained for the defendant's "interference" with the plaintiff's structures, it does not follow that the plaintiff is automatically entitled to the recovery it seeks. The result in Gerke followed from the utility's refusal to make the requested changes and the consequent unauthorized interference. Here, the plaintiff complied with the requests and made the changes. In considering the counterclaim in Gerke, this court had occasion to address itself to the considerations of policy behind sec. 66.047 (1), Stats. 1971. The court pointed out that the section was adopted in "`recognition of the doctrine of law laid down in the Adlam Case, 85 Wis. 142.'"[1] In that case the rule was established that a contractor has the right to interfere with the structures and operations of a utility only insofar as it is reasonably necessary in the performance of the work involved. The court stated at page 190:
*395 "We conclude that in enacting sec. 66.047, Stats. 1959, the legislature not only adopted the standard of reasonable necessity for determining the extent to which a contractor could demand temporary changes in utility structures in order to facilitate his work, but also provided that where the parties do not agree the determination must be made by the public body having supervision over construction and maintenance of the particular highway."
This analysis, when coupled with an objective reading of sec. 66.047 (1), Stats., requires the conclusion that interference in the absence of written authority is prohibited only in the event the parties cannot agree that the changes requested are reasonably necessary. The plaintiff's compliance with the defendant's requests in this case implies an agreement that these changes were reasonably necessary or estops the plaintiff from now claiming they were not necessary. The plaintiff should not now be allowed to assert that any further written authority was required.
The plaintiff further contends that the county court erred in concluding that, in performing contract services for Shawano Lake Sanitary District No. 1, the defendant was engaged in the performance of work for a "town" and that, therefore, was under no obligation to pay to the plaintiff the reasonable cost of the temporary changes. In so holding the trial court noted that sec. 60.30 (1), Stats., provides that the definitions set forth in sec. 144.01 are applicable to secs. 60.30-60.309 dealing with the creation, powers and dissolution of town sanitary districts. The court also observed that sec. 144.01 (12) defines "municipality" as "any city, town, village, county, county utility district, town sanitary district or metropolitan sewage district." The trial court concluded that "paragraph 12, which was added to Section 144.01 in 1965 was intended by the legislature to include town sanitary districts, as referred to in *396 60.30 and 66.047, as said districts are desperately needed in Wisconsin to prevent further pollution of our lakes and streams."
The plaintiff points out that there is no provision in ch. 66, Stats., which makes the definition of "municipality" contained in sec. 144.01 (12) applicable to all or any part of that chapter. The introduction to the definitions contained in sec. 144.01 provides: "The following terms as used in this chapter mean...." With this limitation it is not obligatory that the court accept the definition in its interpretation of a statute not within the chapter.[2]
Even if we assume that the definition of "municipality" set forth in sec. 144.01, Stats., and incorporated by reference in sec. 60.30 (1) is equally applicable to the provisions of ch. 66, it is of little assistance where the term is not used in the section under consideration. Sec. 66.047 (1), Stats. 1971, refers not to "any municipality," but to "any county, city, village, or town."
The defendant points out that sec. 2 of ch. 439, Laws of 1915, which was a part of sec. 66.047 as originally enacted, provides:
"The provisions of this act shall not be construed as modifying or restricting the existing powers of any municipality or county over streets, avenues, alleys, or highways thereof, or as repealing or amending any provisions of Chapter 608 of the laws of Wisconsin for 1913."
The defendant argues that the legislature's use of the term "municipality" in the unpublished section of the act indicates an intention to include all forms of that generic group, including town sanitary districts, within the terms of the published portion. The fallacy in the defendant's argument is that the legislature chose not *397 to use the term "municipality" in the section of the law now under consideration. If the appearance of the term in the unpublished portion of the act is of any significance it is to indicate that the legislature had a specific purpose in mind in failing to use it in the published portion.
The defendant argues that the terms "town" and "town sanitary district" are interchangeable. We do not believe this construction should follow solely from the fact that both terms appear in the definition of municipality. Such construction would require the conclusion that there is no distinction between any of the units of government which are generally referred to as "municipalities." The generally recognized rule is that while town sanitary districts may be referred to as "municipalities" or "municipal corporations," they are separate and distinct legal entities from the town or towns by which they are organized. They are "municipalities" in the broad sense of the term only.[3]
It is true that under sec. 60.301, Stats., the town board of any town has the power to establish town sanitary districts where the conditions set forth in sec. 60.303 (3) are found to exist. However, the latter section provides that, upon creation, the town sanitary district "shall be a body corporate with the powers of a municipal corporation for the purposes of carrying out the provisions of sections 60.30 to 60.309." One author has summarized the law with respect to the relationship which exists between the town sanitary district and the town or towns which it serves as follows:
"Even when formed by municipalities and counties, sewerage districts and authorities are separate corporations. Thus, speaking of a sewerage district, the Maine court says: `The defendant is a separate corporation, *398 although geographically the same territory [as the City], and established by the legislature which had complete authority to so establish. The units are for distinct and separate purposes.'" 3A Antieau (1970), Independent Local Government Entities, p. 30G-12, sec. 30G.07, citing Baxter v. Waterville Sewerage Dist. (1951), 146 Me. 211, 79 Atl. 2d 585.
Applying this principle a town sanitary district is not merely the agent of the town or towns which it serves.
Our conclusion that sec. 66.047 (1), Stats. 1971, cannot be read to include town sanitary districts is supported by the legislature's treatment of metropolitan sewerage districts in secs. 66.20-66.26. Under sec. 66.20, "municipality" is defined as "town, village, city or county." Under sec. 66.22, metropolitan sewerage districts "may be initiated by resolution of the governing body of any municipality." Section 66.24, relating to the powers and duties of such districts, provides in sub. (5):
"(5) CONSTRUCTION. (a) General. The district may construct, enlarge, improve, replace, repair, maintain and operate any works determined by the commission to be necessary or convenient for the performance of the functions assigned to the commission.
"(b) Roads. The district may enter upon any state, county or municipal street, road or alley, or any public highway for the purpose of installing, maintaining and operating the system, and it may construct in any such street, road or alley or public highway necessary facilities without a permit or a payment of a charge. Whenever the work is to be done in a state, county or municipal highway, the public authority having control thereof shall be duly notified, and the highway shall be restored to as good a condition as existed before the commencement of the work with all costs incident thereto borne by the district. All persons, firms or corporations lawfully having buildings, structures, works, conduits, mains, pipes, tracks or other physical obstructions in, over or under the public lands, avenues, streets, alleys or highways which block or impede the progress of district facilities, when in the process of construction, establishment or repair shall upon reasonable notice by the district, *399 promptly so shift, adjust, accommodate or remove the same at the cost and expense of such individuals or corporations, as fully to meet the exigencies occasioning such notice." (Emphasis supplied.)
It should be noted that the general definition of "municipality" in sec. 144.01, Stats., includes metropolitan sewage districts.
Had the legislature believed the language of sec. 66.047 (1), Stats. 1971, broad enough to include all entities generically defined as "municipalities," the emphasized provisions set out above for metropolitan sewerage districts would not have been necessary.
We are of the opinion that sec. 66.047 (1), Stats., imposes a duty upon a public service corporation, such as the plaintiff, to temporarily protect or change its structures whenever it is reasonably necessary to enable a contractor to accomplish his work "upon, over, along or under any public street or highway." However, the plain language of the statute requires the contractor to bear the reasonable cost of such protection or changes except where the public service corporation is "properly liable therefor under the law" or where the "work is done by or for the state or by or for any county, city, village, or town." Work performed for a town sanitary district is not within the exception.
The rationale used by the trial court to conclude that a town sanitary district should be included in the exception of sec. 66.047 (1), Stats., is good public policy. However, where a statute imposes an obligation upon a person or party that did not exist in common law, the legislature and not the courts should determine and express that public policy. We invite the legislature to review the statute.
The order sustaining the demurrer should be reversed. Further proceedings may be necessary to establish the reasonable costs of the plaintiff's services.
By the Court. Order reversed and remanded for further proceedings.
NOTES
[1] Milwaukee Street R. Co. v. Adlam (1893), 85 Wis. 142, 149, 150, 55 N. W. 181.
[2] See: Town of Lafayette v. City of Chippewa Falls (1975), 70 Wis. 2d 610, 619, 235 N. W. 2d 435; Paulsen Lumber, Inc. v. Meyer (1970), 47 Wis. 2d 621, 177 N. W. 2d 884.
[3] See: Page v. Metropolitan St. Louis Sewer Dist. (Mo. 1964), 377 S. W. 2d 348, 362; 3A Antieau (1970), Independent Local Government Entities, p. 30G-3, sec. 30G.00.
|
Renshaw posts his third hundred in three games
Opener Matthew Renshaw is hopeful of signing a deal to play English county cricket this year as he looks to force his way back into Australia's Test side.
With Cameron Bancroft far from ensconced as David Warner’s opening partner, the man he replaced at the top of the order four months ago has responded to his axing with a career-best run of form in domestic cricket.
Having conquered the Dukes ball in Australia with three consecutive hundreds in the JLT Sheffield Shield in recent weeks, Renshaw is shopping himself around to county sides in the UK in the hope of earning a contract for the upcoming northern summer.
While the 21-year-old's immediate focus is a Shield title with Queensland later this month, his ultimate goal is to regain his Test spot with the 2019 Ashes tour just 18 months away.
"I'm looking to play county cricket," Renshaw told cricket.com.au of his plans for the Australian winter. "I just haven't been signed by anyone yet.
"Hopefully I get the opportunity to play county cricket so I can face the Dukes ball in England and see what the difference is.
"We've been putting some feelers out there and seeing what's coming. Nothing's come up yet, but hopefully if the opportunity does come up, I can work on my game a bit more.
"I'd just be really excited to take that opportunity if it comes. It'd just keep my cricket ticking over in the winter ahead of another Shield season, if I'm not playing at a higher level."
After a horror start to the Australian summer that saw him lose form and drop out of the Test side ahead of the Ashes, Renshaw has roared back to his best with centuries in each of his past three Shield games.
His unbeaten 143 against Western Australia on Friday followed scores of 170 and 112 last month and has seen him become the first Queensland batsman since Matthew Hayden in 1993-94 to score a hundred in three consecutive Shield matches in a season.
It's a welcome relief following the crushing low of the start of the season, when Bancroft came from the clouds to replace him at the top of the order in the Ashes.
"I wasn't putting too much pressure on myself while I was batting, but obviously off the field it was quite tough," he said. "There was an Ashes series coming up and then I was sitting on the sidelines watching it.
"I'm enjoying my cricket and enjoying the group around me. There's a lot of good vibes at the moment.
"I'm getting the starts and making sure that when I'm in, I go quite big."
While county sides have already filled most of the limited spots available to international players for this year, the length of the County Championship – which starts in mid-April and concludes at the end of September – means teams are often on the lookout for replacement players throughout the season when injuries and international selections arise.
And the fact Renshaw holds a British passport having been born in the UK would alleviate some visa concerns that often accompany foreign players, particularly when they're signed at short notice.
While the Dukes ball, introduced for the second half of the Shield season in an effort to improve the technique of Australian batters against the moving ball, has seen the total runs scored in the Shield drop since Christmas, it remains a different proposition on the greener pitches of the UK.
But the fact that Renshaw has found form in bowler-friendly conditions in recent weeks is a positive sign for him ahead of Australia's Ashes tour next year, when they'll be aiming for their first series win in the UK since 2001.
"I really enjoy it," Renshaw said of the Dukes ball. "It stays harder for longer so it keeps the bowlers interested and it swings a little bit more than the Kookaburra.
"I don't know if it's because of the Dukes or because of all the work I've been doing, but I'm really enjoying batting at the moment." |
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Renci.SshNet.Common;
using Renci.SshNet.Tests.Common;
namespace Renci.SshNet.Tests.Classes.Common
{
/// <summary>
///This is a test class for ScpExceptionTest and is intended
///to contain all ScpExceptionTest Unit Tests
///</summary>
[TestClass]
public class ScpExceptionTest : TestBase
{
/// <summary>
///A test for ScpException Constructor
///</summary>
[TestMethod]
public void ScpExceptionConstructorTest()
{
ScpException target = new ScpException();
Assert.Inconclusive("TODO: Implement code to verify target");
}
/// <summary>
///A test for ScpException Constructor
///</summary>
[TestMethod]
public void ScpExceptionConstructorTest1()
{
string message = string.Empty; // TODO: Initialize to an appropriate value
ScpException target = new ScpException(message);
Assert.Inconclusive("TODO: Implement code to verify target");
}
/// <summary>
///A test for ScpException Constructor
///</summary>
[TestMethod]
public void ScpExceptionConstructorTest2()
{
string message = string.Empty; // TODO: Initialize to an appropriate value
Exception innerException = null; // TODO: Initialize to an appropriate value
ScpException target = new ScpException(message, innerException);
Assert.Inconclusive("TODO: Implement code to verify target");
}
}
}
|
Computational study of the covalent bonding of microcystins to cysteine residues--a reaction involved in the inhibition of the PPP family of protein phosphatases.
Microcystins (MCs) are cyclic peptides, produced by cyanobacteria, that are hepatotoxic to mammals. The toxicity mechanism involves the potent inhibition of protein phosphatases, as the toxins bind the catalytic subunits of five enzymes of the phosphoprotein phosphatase (PPP) family of serine/threonine-specific phosphatases: Ppp1 (aka PP1), Ppp2 (aka PP2A), Ppp4, Ppp5 and Ppp6. The interaction with the proteins includes the formation of a covalent bond with a cysteine residue. Although this reaction seems to be accessory for the inhibition of PPP enzymes, it has been suggested to play an important part in the biological role of MCs and furthermore is involved in their nonenzymatic conjugation to glutathione. In this study, the molecular interaction of microcystins with their targeted PPP catalytic subunits is reviewed, including the relevance of the covalent bond for overall inhibition. The chemical reaction that leads to the formation of the covalent bond was evaluated in silico, both thermodynamically and kinetically, using quantum mechanical-based methods. As a result, it was confirmed to be a Michael-type addition, with simultaneous abstraction of the thiol hydrogen by a water molecule, transfer of hydrogen from the water to the α,β-unsaturated carbonyl group of the microcystin and addition of the sulfur to the β-carbon of the microcystin moiety. The calculated kinetics are in agreement with previous experimental results that had indicated the reaction to occur in a second step after a fast noncovalent interaction that inhibited the enzymes per se. |
Q:
Replace if else statement with a single method containing a condition
I am still learning C# but I've been on annual leave and I have come back to work and seen this piece of code my senior has left me before he went on his annual leave:
public string GetBasketTotalPrice(string basketLocation)
{
var basketTotalPrice = _driver.FindElements(CommonPageElements.BasketTotalPrice);
if (basketLocation.ToLower() == "top")
return basketTotalPrice[0].Text.Replace("£", "");
else
return basketTotalPrice[1].Text.Replace("£", "");
}
private int GetElementIndexForBasketLocation(string basketLocation)
{
return basketLocation == "top" ? 0 : 1;
}
I am assuming instead of using the if else statement, he wants me to use his new method of GetElementIndexForBasketLocation.
My question is simply how to implement this change?
Thanks
A:
It's not totally clear what you're looking for, but you can rework the code something like this:
public string GetBasketTotalPrice(string basketLocation)
{
var basketTotalPrice = _driver.FindElements(CommonPageElements.BasketTotalPrice);
int index = GetElementIndexForBasketLocation(basketLocation);
return basketTotalPrice[index].Text.Replace("£", "");
}
private int GetElementIndexForBasketLocation(string basketLocation)
{
return basketLocation.ToLower() == "top" ? 0 : 1;
}
|
Q:
Prepending (w/out newlines) to an auto-generated dependency list for Makefiles
Not sure if the title makes sense... so I'll elaborate a bit.
I'm toying with this makefile that uses gcc's auto-dependency list generator.
At the same time, I wanted to keep a nice sorted directory structure that separates source, headers, and resources.
The layout's nice and simple like so
MAIN
src
include
objects
dependencies
Now, the makefile dependency list generator atm is this:
$(DEP_PATH)%.d : $(SRC_PATH)%.c
@${CC} $(CFLAGS) -MM -c $(INCLUDE) $< > $(DEP_PATH)$*.d
include $@
The idea here being that we generate the dependency rule, then include it to the make build.
and the result for say, foo1.o is:
foo1.o: src/foo1.c include/foo1.h include/foo2.h include/foo3.h
This would work fine if I labeled all my objects to be found in the main directory... however since they're in /main/objects instead... make says it can't find the rule for /main/objects/foo1.o
Now, I tried this:
@echo "$(OBJ_PATH)" > $(DEP_PATH)$*.d
@${CC} $(CFLAGS) -MM -c $(INCLUDE) $< >> $(DEP_PATH)$*.d
Here > writes the object path to the new/overwritten file, and >> then appends the GCC auto-generated dependency rule to it... but that adds a newline between the two parts.
I tried cat'ing two separate files with said info as well... but they also get the newlines.
Is there a nice way to prepend the dependency file w/out adding the newline?
Also, if you've got any real nice tutorials on makefiles, cat, and echo, I'd really appreciate it.
Thanks for any and all responses.
A:
The answer to the question you asked is sed:
@${CC} blah blah blah | sed 's|^|$(OBJ_PATH)|' > $(DEP_PATH)$*.d
But you're going to have a different problem with this part:
include $@
The include directive is for Make itself, it is not a shell command (like $(CC)...). And $@ is an automatic variable, defined within the rule, not available to directives outside the rule. Instead, try something like this:
include $(DEP_PATH)/*.d
and take a look at Advanced Auto-Dependency Generation.
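The sed substitution above can be tried on its own. A minimal sketch, with a hypothetical objects/ prefix standing in for $(OBJ_PATH); because sed rewrites each line in place, the prefix and the generated rule stay on one line, with no extra newline:

```shell
# Demo of the sed prefix trick; "objects/" is a stand-in for $(OBJ_PATH).
# The substitution anchors at ^ (start of line) and splices the prefix in,
# so no newline is introduced between the prefix and the rule.
printf 'foo1.o: src/foo1.c include/foo1.h\n' | sed 's|^|objects/|'
# prints: objects/foo1.o: src/foo1.c include/foo1.h
```

Using | as the sed delimiter (instead of the usual /) avoids having to escape the slashes in the path.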
|
Evidence for the presence of type I IL-1 receptors on beta-cells of islets of Langerhans.
The cytokine interleukin-1beta (IL-1beta) has been shown to inhibit insulin secretion and destroy pancreatic islets by a mechanism that involves the expression of inducible nitric oxide synthase (iNOS), and the production of nitric oxide (NO). Insulin containing beta-cells, selectively destroyed during the development of autoimmune diabetes, appear to be the islet cellular source of iNOS following treatment with IL-1beta. In this study we have evaluated the presence of type I IL-1 signaling receptors on purified pancreatic beta-cells. We show that the interleukin-1 receptor antagonist protein (IRAP) prevents IL-1beta-induced nitrite formation and IL-1beta-induced inhibition of insulin secretion by isolated islets and primary beta-cells purified by fluorescence-activated cell sorting (FACS). The protective effects of IRAP correlate with an inhibition of IL-1beta-induced iNOS expression by islets and FACS purified beta-cells. To provide direct evidence to support beta-cell expression of IL-1 type I signaling receptors, we show that antiserum specific for the type I IL-1 receptor neutralizes IL-1beta-induced nitrite formation by RINm5F cells, and that RINm5F cells express the type I IL-1 receptor at the protein level. Using reverse transcriptase-polymerase chain reaction (RT-PCR), the expression of type I IL-1 signaling receptors by FACS purified beta-cells and not alpha-cells is demonstrated. These results provide direct support for the expression of type I IL-1 receptors by primary pancreatic beta-cells, the cell type selectively destroyed during the development of autoimmune diabetes. |
Biofuels can be dirtier than fossil fuels
Biofuels may pollute the environment much more heavily if the process used to make them isn't done in the right way, researchers say.
Conventional fossil fuels may sometimes be much greener than their biofuel counterparts, according to a new study.
University research funded by a pair of U.S. federal government agencies found that taking a biofuel's origin into account is important.
Massachusetts Institute of Technology researchers say, for example that conventional fossil fuels may sometimes be the greener choice compared with fuel made from palm oil grown in a clear-cut rainforest.
"What we found was that technologies that look very promising could also result in high emissions, if done improperly," said James Hileman, an engineer at MIT who published results of a study with graduate students Russell Stratton and Hsin Ming Wong.
"You can't simply say a biofuel is good or bad - it depends on how it's produced and processed, and that's part of the debate that hasn't been brought forward."
The study was funded by the Federal Aviation Administration and Air Force Research Labs.
The study includes a life-cycle analysis of 14 fuel sources, which include conventional petroleum-based jet fuel and biofuels which can include biofuels that can replace conventional fuels with little or no change to existing infrastructure or vehicles, according to the report.
Factors used to calculate emissions include acquiring the biomass, transporting it, converting it to fuel and combustion.
All those processes require energy, Hileman says, and that ends up in the release of carbon dioxide.
Biofuels derived from palm oil emitted 55 times more carbon dioxide if the palm oil came from a plantation located in a converted rainforest rather than a previously cleared area, according to the report.
Biofuels could ultimately emit 10 times more carbon dioxide than conventional fuel, the report found.
Severe cases of land-use change could make coal-to-liquid fuels look green, Hileman said. He added that by conventional standards, coal-to-liquid is not a green option.
The research may have applications for industry as companies consider using biofuels, Hileman says.
One solution could be to explore crops such as algae and Salicornia that don't require deforestation or fertile soil to grow, according to Hileman. Neither requires fresh water.
Q:
How to unit test struct constructor
If I have a struct like:
struct Foo {
bar: int,
baz: bool
}
and a default constructor like:
impl Foo {
fn default() -> ~Foo {
Foo{bar: 0, baz: false}
}
}
I'd want a unit test for my constructor:
#[test]
fn test_foo_default() {
let foo1 = Foo::default();
let foo2 = ~Foo{bar: 0, baz: false};
// What to put here to compare them?
}
How do I easiest compare the two structs to make sure they are the same with regards to content, type and ownership?
A:
Have the compiler derive the Eq trait for Foo (if possible), and then check for equality with the assert_eq! macro. The macro also requires Show to be implemented for some reason, so let's derive it as well. Your original default() function doesn't actually compile because it attempts to return a Foo where a ~Foo was promised. Foo and ~Foo are actually different types, but you can dereference a ~Foo and compare that.
You may also be interested in the Default trait.
#[deriving(Eq,Show)]
struct Foo {
bar: int,
baz: bool
}
impl Foo {
fn default() -> ~Foo {
~Foo{bar: 0, baz: false}
}
}
#[test]
fn test_foo_default() {
let foo1 = Foo::default();
let foo2 = ~Foo{bar: 0, baz: false};
assert_eq!(foo1,foo2);
}
#[test]
fn test_foo_deref() {
let foo1 = Foo::default();
let foo2 = Foo{bar: 0, baz: false};
assert_eq!(*foo1,foo2);
}
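Note that this answer predates Rust 1.0, so the code above no longer compiles: `~Foo` is now `Box<Foo>`, `int` is `i32`, and `#[deriving(Eq,Show)]` became `#[derive(PartialEq, Debug)]`. A minimal sketch of the same comparison in modern Rust (a translation of the answer, not its original code):

```rust
// Modern-Rust translation of the accepted answer: ~Foo becomes Box<Foo>,
// int becomes i32, and deriving(Eq, Show) becomes derive(PartialEq, Debug).
#[derive(PartialEq, Debug)]
struct Foo {
    bar: i32,
    baz: bool,
}

impl Foo {
    fn default() -> Box<Foo> {
        Box::new(Foo { bar: 0, baz: false })
    }
}

fn main() {
    let foo1 = Foo::default();
    let foo2 = Foo { bar: 0, baz: false };
    // Dereference the Box so both sides are plain Foo values;
    // Box<Foo> and Foo are still different types, as the answer notes.
    assert_eq!(*foo1, foo2);
}
```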
|
/*
Think about Zuma Game. You have a row of balls on the table, colored red(R), yellow(Y), blue(B), green(G), and white(W).
You also have several balls in your hand.
Each time, you may choose a ball in your hand, and insert it into the row
(including the leftmost place and rightmost place).
Then, if there is a group of 3 or more balls in the same color touching, remove these balls.
Keep doing this until no more balls can be removed.
Find the minimal balls you have to insert to remove all the balls on the table.
If you cannot remove all the balls, output -1.
Examples:
Input: "WRRBBW", "RB"
Output: -1
Explanation: WRRBBW -> WRR[R]BBW -> WBBW -> WBB[B]W -> WW
Input: "WWRRBBWW", "WRBRW"
Output: 2
Explanation: WWRRBBWW -> WWRR[R]BBWW -> WWBBWW -> WWBB[B]WW -> WWWW -> empty
Input:"G", "GGGGG"
Output: 2
Explanation: G -> G[G] -> GG[G] -> empty
Input: "RBYYBBRRB", "YRBGB"
Output: 3
Explanation: RBYYBBRRB -> RBYY[Y]BBRRB -> RBBBRRB -> RRRB -> B -> B[B] -> BB[B] -> empty
Note:
You may assume that the initial row of balls on the table won’t have any 3 or more consecutive balls with the same color.
The number of balls on the table won't exceed 20, and the string represents these balls is called "board" in the input.
The number of balls in your hand won't exceed 5, and the string represents these balls is called "hand" in the input.
Both input strings will be non-empty and only contain characters 'R','Y','B','G','W'.
*/
/**
 * Approach: Backtracking
 * The Zuma setting made this problem look quite appealing at first, and it seemed
 * like a fun one. Having finished it, though, it did not leave a good impression;
 * the reasons are explained in detail below.
 *
 * Solution idea:
 * Two points in the Note deserve special attention:
 * 1. The initial row of balls (the board) never contains 3 or more consecutive
 *    balls of the same color, which differs from the actual game.
 * 2. Data size: the board holds at most 20 balls, and the hand at most 5.
 * Given these sizes, we can safely conclude the intended solution is a brute-force
 * DFS (backtracking) that enumerates every insertion position.
 *
 * Caveats:
 * The code below applies two pruning steps.
 * Pruning 1: inserting anywhere inside a run of same-colored balls has the same
 *    effect, so we always insert at the last position of such a run.
 * Pruning 2: we only insert where the insertion immediately clears a run; if a
 *    position cannot clear anything this turn, no insertion is tried there.
 * These two prunings are exactly where this problem goes wrong! They amount to a
 * greedy strategy, and that greedy strategy is incorrect. The correct approach
 * would be to attempt a tentative insertion at every position.
 * Counterexample:
 * board = "RRWWRRBBRR", hand = "WB"
 * With the prunings above we would only insert after the [W] or [B] runs, which
 * always leaves [RR] uncleared, so -1 is returned. Yet this position is actually
 * fully clearable:
 * RRWWRRBBRR -> RRWWRRBBR[W]R -> RRWWRRBB[B]RWR -> RRWWRRRWR -> RRWWWR -> RRR -> empty
 * So the pruning (greedy) strategy here is wrong.
 *
 * Why do it this way if it is wrong? Because the official reference solution makes
 * the same mistake, and without the greedy pruning the submission does not pass.
 * That also explains why so many people have downvoted this problem...
 */
import java.util.HashMap;
import java.util.Map;

class Solution {
public int findMinStep(String board, String hand) {
if (board == null || board.length() == 0) {
return 0;
}
// Count the balls in hand for quick lookup (any ball in hand may be used)
Map<Character, Integer> balls = new HashMap<>();
for (char c : hand.toCharArray()) {
balls.put(c, balls.getOrDefault(c, 0) + 1);
}
return dfs(board, balls);
}
// Returns the minimum number of balls needed to clear the current board, or -1 if it cannot be fully cleared
private int dfs(String board, Map<Character, Integer> balls) {
if (board == null || board.length() == 0) {
return 0;
}
int rst = Integer.MAX_VALUE;
int i = 0, j;
while (i < board.length()) {
j = i + 1;
Character color = board.charAt(i);
// Advance j until it runs off the end or points at a ball of a different color than i
while (j < board.length() && color == board.charAt(j)) {
j++;
}
// How many balls must be inserted to clear the run i..j-1
int costBalls = 3 - (j - i);
// Enough balls of this color remain in hand to insert
if (balls.containsKey(color) && balls.get(color) >= costBalls) {
// Clear the run i..j-1; the segments on either side then touch and may
// trigger chain removals, which shrink() handles
String newBoard = shrink(board.substring(0, i) + board.substring(j));
// Deduct the balls spent
balls.put(color, balls.get(color) - costBalls);
// Recurse (DFS) to find the minimum balls needed to clear the remaining board
int subRst = dfs(newBoard, balls);
if (subRst >= 0) {
// If the rest can be fully cleared, keep the minimum
rst = Math.min(rst, costBalls + subRst);
}
// Backtracking: restore the balls spent above
balls.put(color, balls.get(color) + costBalls);
}
i = j;
}
return rst == Integer.MAX_VALUE ? -1 : rst;
}
// Remove every clearable run from the current board (including chain reactions)
private String shrink(String board) {
int i = 0, j;
while (i < board.length()) {
j = i + 1;
Character color = board.charAt(i);
while (j < board.length() && color == board.charAt(j)) {
j++;
}
if (j - i >= 3) {
board = board.substring(0, i) + board.substring(j);
// After a successful removal, reset i to 0 and rescan (chain reactions may occur)
i = 0;
} else {
i++;
}
}
return board;
}
} |
Real Estate News
Glen Named to Key Post in de Blasio Administration
Adjunct Professor Alicia Glen, who started the Social Impact Real Estate Investing and Development class, was tapped by incoming Mayor Bill de Blasio to be deputy mayor for housing and economic development.
Adjunct Professor Alicia Glen, who started the popular Social Impact Real Estate Investing and Development class, was tapped by incoming Mayor Bill de Blasio to be deputy mayor for housing and economic development. Her work in that new role will be key to delivering on the administration’s promise of increasing affordable housing. She will also be responsible for leading “the administration’s efforts to invest in emerging industries across the five boroughs, retarget unsuccessful corporate subsidies ... and help New Yorkers secure good-paying jobs that can support a family,” according to the announcement.
The shift in job title, from what was previously called deputy mayor for economic development and rebuilding, was in itself hailed as progress toward streamlining the process for builders.
As noted in a July 2013 interview with one of her students, Glen has aimed to “connect the dots between the public and private sectors to advance not only business profitability but also long-term social benefit” throughout her career. In her previous position heading the Urban Investment Group at Goldman, Sachs & Co., for example, she financed job creation and neighborhood revitalization strategies and invested in the nation’s first social impact bond.
“I know she will be a great advocate for affordable housing and economic development,” stated Earle W. Kazis and Benjamin Schore Professor of Real Estate and MBA Real Estate Program Director Lynne B. Sagalyn, whom Glen names as one of her mentors. “The city is fortunate to have her talent and dedication to the task.” |
Strategic Arms Reduction Talks (START), arms control negotiations between the United States and the Soviet Union that were aimed at reducing those two countries’ arsenals of nuclear warheads and of the missiles and bombers capable of delivering such weapons. The talks, which began in 1982, spanned a period of 20 eventful years that saw the collapse of the Soviet Union and the end of the Cold War.
START I
The START negotiations were successors to the Strategic Arms Limitation Talks of the 1970s. In resuming strategic-arms negotiations with the Soviet Union in 1982, U.S. Pres. Ronald Reagan renamed the talks START and proposed radical reductions, rather than merely limitations, in each superpower’s existing stocks of missiles and warheads. In 1983 the Soviet Union abandoned arms control talks in protest against the deployment of intermediate-range missiles in western Europe (see Intermediate-Range Nuclear Forces Treaty). In 1985 START resumed, and the talks culminated in July 1991 with a comprehensive strategic-arms-reduction agreement signed by U.S. Pres. George H.W. Bush and Soviet leader Mikhail Gorbachev. The new treaty was ratified without difficulty in the U.S. Senate, but in December 1991 the Soviet Union broke up, leaving in its wake four independent republics with strategic nuclear weapons—Belarus, Kazakhstan, Ukraine, and Russia. In May 1992, the Lisbon Protocol was signed, which allowed for all four to become parties to START I and for Ukraine, Belarus, and Kazakhstan either to destroy their strategic nuclear warheads or to turn them over to Russia. This made possible ratification by the new Russian Duma, although not before yet another agreement had been reached with Ukraine setting the terms for the transfer of all the nuclear warheads on its territory to Russia. All five START I parties exchanged the instruments of ratification in Budapest on Dec. 5, 1994.
The START I treaty set limits to be reached in a first phase within three years and then a second phase within five years. By the end of the second phase, in 1999, both the United States and Russia would be permitted a total of 7,950 warheads on a maximum of 1,900 delivery vehicles (missiles and bombers). This limit involved reductions from established levels of about 11,000 warheads on each side. Of the 7,950 permitted warheads, no more than 6,750 could be mounted on deployed intercontinental ballistic missiles (ICBMs) and submarine-launched ballistic missiles (SLBMs). By early 1997, Belarus and Kazakhstan had reached zero nuclear warheads, and Ukraine destroyed its last ICBMs in 1999. The United States and Russia reached the required levels for the second phase during 1997.
A third phase was to be completed by the end of 2001, when both sides were to get down to 6,000 warheads on a maximum of 1,600 delivery vehicles, with no more than 4,900 warheads on deployed ICBMs and SLBMs. Although there had been concerns that this goal would not be achieved because of the expense and difficulty of decommissioning weapons, both sides enacted their cuts by 2001. The START I treaty is scheduled to expire on Dec. 5, 2009.
During the negotiations on START I, one of the most controversial issues had been how to handle limits on nuclear-armed cruise missiles, as verification would be difficult to implement. The issue was finally handled by means of separate political declarations by which the two sides agreed to announce annually their planned cruise missile deployments, which were not to exceed 880.
START II
Even as they agreed on the outline of START I in 1990, the United States and the Soviet Union agreed that further reductions should be negotiated. However, real negotiations had to wait for the elections that established the leadership of the new Russian Federation in 1992. The START II treaty was agreed on at two summit meetings between George H.W. Bush and Russian Pres. Boris Yeltsin, the first in Washington, D.C., in June 1992 and the second in Moscow in January 1993. Under its terms, both sides would reduce their strategic warheads to 3,800–4,250 by 2000 and to 3,000–3,500 by 2003. They would also eliminate multiple independent reentry vehicles (MIRVs) on their ICBMs—in effect eliminating two of the more controversial missiles of the Cold War, the U.S. Peacekeeper missile and the Russian SS-18. Later, in order to accommodate the delays in signing and ratifying START I, the deadlines were put back to 2004 and 2007, respectively.
START II never actually came into force. The U.S. Senate did not ratify the treaty until 1996, largely because the parallel process was moving so slowly in the Russian Duma. There the treaty became a hostage to growing Russian irritation with Western policies in the Persian Gulf and the Balkans and then to concerns over American attitudes toward the Anti-Ballistic Missile (ABM) Treaty. The Russian preference would have been for far lower final levels, as Russia lacked the resources to replace many of its aging weapons systems, but achieved at a slower pace, because it also lacked the resources for speedy decommissioning. In 2000 the Duma linked the fate of START II to the ABM Treaty, and in June 2002, following the United States’ withdrawal from the ABM Treaty, the Duma repudiated START II.
START III/SORT
Part of the Duma’s objection was that the proposed cuts were not deep enough. A more radical treaty therefore might have a better chance of ratification. In March 1997, U.S. Pres. Bill Clinton and Yeltsin agreed to begin negotiating START III, which would bring each side down to 2,000–2,500 warheads by Dec. 31, 2007. Discussions then got bogged down over the ABM Treaty, as the Russians sought to link reductions on offensive systems with the maintenance of the established restraints on defensive systems. Nonetheless, it still suited both sides to demonstrate progress, and the risks of agreement were limited by making provisions reversible if circumstances changed. Proposals from both sides began to converge in 2001, and on May 24, 2002, U.S. Pres. George W. Bush and Russian Pres. Vladimir Putin signed the Strategic Offensive Reductions Treaty (SORT). This treaty was ratified without difficulty by both the U.S. Senate and the Russian Duma in March and May 2003, respectively.
SORT would reduce strategic nuclear weapons to between 1,700 and 2,200 warheads by Dec. 31, 2012. It does not require the elimination of delivery systems; it allows nondeployed warheads to be stored instead of destroyed; and for verification it relies on mechanisms outlined in START I. Implementation of SORT proceeded without problems, although it was apparent from the beginning that difficulties might arise if START I were to lapse on schedule in 2009 without replacement. Agreement to negotiate a replacement to START I was made difficult by tensions on a range of issues, including the United States’ occupation of Iraq in 2003, Russia’s invasion of Georgia in 2008, and U.S. plans to install ballistic missile defense systems in Poland and the Czech Republic in order to deter a potential threat from Iran’s growing missile force. By early 2009, however, agreement between the two sides was possible, with a new administration in Washington under Pres. Barack Obama. On July 6 Obama and Russian Pres. Dmitry Medvedev agreed to work out a new treaty by December that would build on the verification arrangements of START I and reduce strategic weapons on each side to 1,500–1,675 warheads and 500–1,100 delivery systems.
A leader in Alzheimer’s disease research, TauRx’s mission is to discover, develop and commercialise innovative products for the diagnosis, treatment and cure of neurodegenerative diseases caused through protein aggregation.
From initial discovery through preclinical development to clinical trials, regulatory submission and product registration, we are harnessing world-class proprietary drug discovery platforms with the aim of halting the progression of Alzheimer’s and other neurodegenerative diseases with innovative treatments and diagnostics.
The company’s novel tau aggregation inhibitors (TAIs) target the formation (aggregation) of tau protein ‘tangles’ in the brain. The spread of tau tangles – the main driver in Alzheimer’s disease – is strongly correlated with dementia and they can develop in the brain up to 20 years before symptoms associated with dementia develop. TAIs work by dissolving existing tau aggregates and preventing the further aggregation of tau protein from forming new tangles.
Three LMTX® Phase 3 clinical trials, conducted in over 20 countries and involving over 1,900 patients, have recently completed. For a summary of the programme and study results, please follow this link.
TauRx and LMTX
Claude Wischik, TauRx CEO, discusses the clinical development of LMTX, and what the recent Phase 3 results mean for TauRx’s on-going development programme. |
HOUSTON (FOX 26) - Maleah Davis has been missing for over two weeks, but plenty of developments have taken place in that relatively short period of time.
The alarm was sounded and searches began immediately after she was reported abducted, but within days, police had named her mother's ex-fiance, Darion Vence a person of interest, casting doubts on his original story.
Within the week, Vence's car was found in Missouri City. Maleah's mother, Brittany Bowens, made some shocking allegations against him around the time a neighbor's surveillance video surfaced, showing Vence carrying a heavy-laden laundry basket filled with a garbage bag.
Trained dogs also detected the scent of human decomposition in Vence's vehicle, according to a prosecutor. Dogs trained to find cadavers reacted to the trunk of the car, Pat Stayton, a prosecutor with the Harris County District Attorney's Office, said at Vence's probable cause court hearing Saturday night.
This invention generally relates to voltage controlled oscillators (VCOs) or oscillator circuits, and more particularly to low noise and low phase hit tunable oscillator circuits. A voltage controlled oscillator or oscillator is a component that can be used to translate DC voltage into a radio frequency (RF) voltage or signal. In general, VCOs are designed to produce an oscillating signal at a particular frequency ‘f’ that corresponds to a given tuning voltage. The frequency of the oscillating signal is dependent upon the magnitude of a tuning voltage Vtune applied to a tuning network across a resonator circuit. The frequency ‘f’ may be varied from fmin to fmax and these limits are referred to as the tuning range or bandwidth of the VCO. The tuning sensitivity of the VCO is defined as the change in frequency over the tuning voltage. It is usually desirable to tune the VCO over a wide frequency range within a small tuning voltage range.
The magnitude of the output signal from a VCO depends on the design of the VCO circuit. The frequency of operation is in part determined by a resonator that provides an input signal. Clock generation and clock recovery circuits typically use VCOs within a phase locked loop (PLL) to either generate a clock from an external reference or from an incoming data stream. VCOs are often critical to the performance of PLLs. In turn, PLLs are generally considered essential components in communication networking as the generated clock signal is typically used to either transmit or recover the underlying service information so that the information can be used for its intended purpose. PLLs are particularly important in wireless networks as they enable communications equipment to lock-on to the carrier frequency (onto which communications are transmitted) relatively quickly.
The popularity of mobile communications has renewed interest in and generated more attention to wireless architectures and applications. These applications are typically available on various wireless devices or apparatus including pagers, personal digital assistants, cordless phones, cellular phones, and global positioning systems. These applications may be found on networks that transport either voice or data. The popularity of mobile communications has further spawned renewed interest in the design of relatively low phase noise and phase hit free oscillators that are also tunable over a fairly wide frequency range (e.g., broadband tunable). Phase noise at a certain offset from the carrier frequency, frequency tuning range and power consumption are generally regarded as key figures of merit with respect to the performance of an oscillator. Given the relatively low number of active and passive devices in a VCO circuit, however, designing a VCO is generally assumed to be easy. Similarly, optimizing the foregoing figures of merit is generally regarded as straightforward. But constraints on phase noise and broadband tunability are demanding performance metrics that represent tradeoffs between each other. In addition, the need for a phase hit free solution has been in demand for a relatively long time.
More specifically, despite continuous improvement in oscillator/VCO technology, the requirements of low power consumption, low phase noise, low thermal drift, low phase hits and compact size continue to make the design of VCOs challenging. The dynamic time average Q-factor (quality factor or Q) of the resonator and the tuning diode noise contribution generally set the noise performance of VCOs. The dynamic loaded Q is inversely proportional to the frequency range or tuning band of a VCO. As such, tradeoffs are continually being made between factors that may affect an oscillator's Q-value such as, for example, output power, power consumption, noise performance, frequency stability, tuning range, interference susceptibility, physical space, and economic considerations. Most oscillators utilize some form of a transistor-based active circuit or element to satisfy trade-off requirements. But transistors add to the complexity of an oscillator's design due to their inherent non-linearities, noise properties and temperature variations.
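The inverse relationship between loaded Q and bandwidth described above can be sketched with the standard definition Q = f0 / (3 dB bandwidth). The frequencies below are hypothetical and serve only to show why widening the effective band pulls the loaded Q down.

```python
# Loaded Q of a resonator as center frequency over 3 dB bandwidth.
# Hypothetical values: a narrow-band resonator vs. a wide-band one
# at the same 2 GHz center frequency.

def loaded_q(f_center_hz, bandwidth_hz):
    """Q = f0 / delta_f, the standard bandwidth definition of Q."""
    return f_center_hz / bandwidth_hz

print(loaded_q(2.0e9, 10e6))    # 200.0 -- narrow band, high Q
print(loaded_q(2.0e9, 200e6))   # 10.0  -- wide band, low Q
```

This is the tradeoff the passage describes: broadening the tuning band degrades the loaded Q, and with it the phase noise performance that the Q underwrites.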
In that regard, designers are often hesitant to try new oscillator topologies, primarily because they are unsure of how they will perform in terms of phase noise, conduction angle, tuning range, harmonic content, output power, etc. As such, most designers have a small number of favorite oscillator circuits that they adapt to meet changing and future requirements.
Traditionally, RF designers use LC (inductor/capacitor) resonator tank circuits to achieve low phase noise performance. A perfectly lossless resonant circuit is an ideal choice for an oscillator, but perfectly lossless elements, e.g., inductors and capacitors, are usually difficult to make. Overcoming the energy loss implied by the finite Q of a practical resonator with the energy-supplying action of an active element is one potentially attractive way to build oscillators that meet design requirements. In order to guarantee stable sustained oscillation, it is usually desirable to maintain a net negative resistance across the LC resonator tank of an oscillator circuit. A negative resistance generated by the active device (3-terminal bipolar/FET) is usually used to compensate for the positive resistance (loss resistance) in a practical resonator, thereby overcoming the damping losses and reinforcing stable oscillation over the period.
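The LC resonance and negative-resistance conditions just described can be sketched as follows. The component and resistance values are hypothetical, and the start-up check below is the simplest form of the condition (practical designs typically demand extra margin, e.g., two to three times the loss resistance).

```python
import math

# LC tank resonant frequency and a simple start-up check: oscillation is
# sustained when the negative resistance of the active device at least
# cancels the tank's loss resistance. Component values are hypothetical.

def resonant_frequency(l_henry, c_farad):
    """f = 1 / (2*pi*sqrt(L*C)) for an ideal LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

def oscillation_sustained(r_negative_ohm, r_loss_ohm):
    """Net resistance across the tank must be negative or zero to start."""
    return abs(r_negative_ohm) >= r_loss_ohm

f = resonant_frequency(5e-9, 2e-12)        # 5 nH, 2 pF
print(round(f / 1e9, 3), "GHz")            # 1.592 GHz
print(oscillation_sustained(-12.0, 8.0))   # True: |-12| ohms covers 8 ohms of loss
```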
One of the major challenges in the design of a transceiver system is frequency synthesis of the local oscillator signal. Frequency synthesis is usually done using a PLL. A PLL typically contains a divider, phase detector, filter, and VCO. The feedback action of the loop causes the output frequency to be some multiple of a supplied reference frequency. The output signal itself is generated by the VCO, whose frequency is variable over some range according to an input control voltage as discussed above.
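The divider-based frequency relation described above can be sketched for a simple integer-N loop. The reference frequency and target channel below are hypothetical, chosen only to show how the loop's divide-by-N feedback forces f_out = N × f_ref at lock.

```python
# Integer-N PLL frequency relation: with a divide-by-N in the feedback
# path, the loop locks when f_out / N == f_ref, so f_out = N * f_ref.
# Reference frequency and target channel are hypothetical.

def pll_output_frequency(f_ref_hz, n_divider):
    """Locked output frequency of an integer-N PLL."""
    return f_ref_hz * n_divider

def divider_for_channel(f_target_hz, f_ref_hz):
    """Choose the integer divider nearest the desired output frequency."""
    return round(f_target_hz / f_ref_hz)

f_ref = 10e6                                # 10 MHz reference
n = divider_for_channel(2.4e9, f_ref)       # target a 2.4 GHz channel
print(n)                                    # 240
print(pll_output_frequency(f_ref, n))       # 2400000000.0 (2.4 GHz)
```

Note that with an integer divider the channel spacing equals the reference frequency, which is one reason fractional-N architectures exist; that refinement is outside the scope of this sketch.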
Varying the reference frequency of an oscillating signal source is important to second and third generation wireless systems. In fact, the coexistence of second and third generation wireless systems requires multi-mode, multi-band, and multi-standard mobile communication systems, which, in turn, require tunable, low phase noise, low phase hit signal sources. The demand for mobile communication is increasing rapidly, and in such systems tunable VCOs are used as a component of a frequency synthesizer, which provides a choice of the desired channel.
A phase hit can be defined as a random, sudden, uncontrolled change in the phase of the signal source that typically lasts for fractions of a second. It can be caused by temperature changes from dissimilar metals expanding and contracting at different rates, as well as from vibration or impact. Microphonics, which are acoustic vibrations that traverse an oscillator package and circuits, can cause a change in phase and frequency. Microphonics are usually dealt with using a hybrid resonance mode in a distributed (microstrip, stripline, suspended stripline) medium.
Phase hits are typically infrequent. But they cause signal degradation in high-performance communication systems. The effect of phase hits increases with data rate. If a phase hit cannot be absorbed by a device (e.g., a receiver) in a communication system, a link may fail, resulting in data loss. As a result, a continuing task is reducing or eliminating phase hits. While phase hits have plagued communication equipment for years, today's higher transmission speeds accentuate the problem given the greater amount of data that may be lost as a result of a phase hit.
Low phase noise performance and wideband tunability have been assumed to be opposing requirements due to the difficulty of controlling the loop parameters and the dynamic loaded Q of the resonator over wideband operation. The resistive losses, especially those in the inductors and varactors, are of major importance and determine the Q of a tank circuit. There have been several attempts to come to grips with these contradictory but desirable oscillator qualities. One way to improve the phase noise of an oscillator is to incorporate high quality resonator components such as surface acoustic wave (SAW) and ceramic resonator components. But these resonators are more prone to microphonics and phase hits. These resonators also typically have a limited tuning range to compensate for frequency drifts due to the variations in temperature and other parameters over the tuning range.
Ceramic resonator oscillator (CRO) designs are widely used in wireless applications, since they typically feature very low phase noise at fixed frequencies up to about 4 GHz. Ceramic-resonator-based oscillators are also known for providing high Q and low phase noise performance. Typically, a ceramic coaxial resonator comprises a dielectric material formed as a rectangular prism with a coaxial opening running lengthwise through the prism and an electrical connector connected to one end. The outer and inner surfaces of the prism, with the exception of the end connected to the electrical connector and possibly the opposite end, are coated with a metal such as copper or silver. A device formed in this manner essentially comprises a resonant RF circuit, including capacitance, inductance and loss resistance, that oscillates in the transverse electromagnetic (TEM) mode if the loss resistance is compensated.
CRO oscillators, however, have several disadvantages, including a limited operating temperature range and a limited tuning range (which limits the amount of correction that can be made to compensate for the tolerances of other components in the oscillator circuit). CROs are also typically prone to phase hits (due to the outer metallic body of the CRO and other components of the oscillator circuit expanding and contracting at different rates as the temperature varies).
As such, a circuit designer must typically consider designing a digitally implemented CRO oscillator to overcome the above problems; otherwise, large phase hits can occur. In addition, since the design of a new CRO oscillator is much like that of an integrated circuit (IC), development of an oscillator with a non-standard frequency requires non-recurring engineering (NRE) costs, in addition to the cost of the oscillators.
Thus, a need exists for methods and circuitry that overcome the foregoing difficulties, and improve the performance of an oscillator or oscillator circuitry, including the ability to absorb phase hits over the tuning range of operation. |
Endogenous opiates in the chick retina and their role in form-deprivation myopia.
In this study, the possible role of the retinal enkephalin system in form-deprivation myopia (FDM) in the chick eye was investigated. Daily intravitreal injection of the nonspecific opiate antagonist naloxone blocked development of FDM in a dose-dependent manner, while injection of the opiate agonist morphine had no effect at any dose tested. The ED50 for naloxone (calculated maximum concentration in the vitreous) was found to be in the low picomolar range. The results using receptor-subtype-specific drugs were contradictory. Drugs specific for mu and delta receptors had no effect on FDM. The kappa-specific antagonist nor-binaltorphimine (nor-BNI) reduced FDM by about 50% at maximum daily retinal doses ranging between 4 × 10^-10 and 4 × 10^-7 M, while the kappa-specific agonist U50488 blocked FDM in a dose-dependent manner with an ED50 between 5 × 10^-8 and 5 × 10^-7 M. Met-enkephalin immunoreactivity (ME-IR) was localized immunocytochemically to a subset of amacrine cells (ENSLI cells) and their neurites in the inner plexiform layer (IPL). As reported previously, ENSLI cells from untreated chick retinas showed a cyclical pattern of immunoreactivity, with increased immunoreactivity in the light compared to the dark. Form-deprivation did not appear to change this pattern. Amounts of preproenkephalin mRNA from normal or form-deprived eyes were approximately the same under all conditions. Daily injection of naloxone, however, did increase ME-IR in the dark. These results suggest that naloxone may affect release of enkephalin from the ENSLI cells. The results as presented are inconclusive with regard to the role of the enkephalin system in FDM. While the kappa receptor may participate, there is no conclusive evidence here for a direct effect of opiate receptors. The effect of naloxone on form-deprived eyes may be due to its effect on release of peptides from the ENSLI cells. |
Preface: To be transparent in my agenda, I firmly believe there are strong parallels between Agility and Human Rights, and I believe that is a purposeful and direct by-product of the primary outcomes of the Agile Manifesto. However, I have attempted to make this article a little different from others by more subtly embedding the learnings and patterns within the messages and on several levels. As such I hope the connections are still obvious, and that you find this article refreshing, insightful, appropriate and useful.
A Premise
It seems everywhere I turn lately there is a scandal of greed, lust, abuse, harassment, violence or oppression, in the workplace as well as in personal life. I’d like to believe the number of despicable activities is not actually increasing, but rather that I am simply exposed to more because we live in an age when the speed and ease of access to information is staggering. Certainly recent events are no exception in a human history that records thousands of years of oppression, subjugation, control, and violence. My question is: as a supposedly intelligent species, why is it we have seemingly learned very little over the millennia?
I propose we have actually learned a great deal and made significant advances, yet at the same time we have experienced setbacks that repeatedly challenge that progress. These setbacks are often imposed by select individuals in positions of authority who choose to prioritize and exert their power, individual needs or desires over the rights and needs of others. However, I believe if we can truly harness the power of unity and collaboration we can make a significant positive difference, and that is what I seek your help in doing.
“The whole is greater than the sum of its parts.”
~ Aristotle
Finding a Beacon in the Darkness
Every day I find it disheartening to bear witness to people being physically and mentally hurt, abused or taken advantage of. In their personal lives and at home. At the workplace. In wars and conflicts. In human-created environmental disasters. It seems there is no end to the pain and suffering or the countless ways to inflict it.
Meanwhile I sincerely believe many of us have the desire to make the world a better place, but given our positions and busy lives it can be daunting to make a real difference. In many instances we feel powerless to change the world because someone else has authority over us or over the system. It may also seem pointless to commit to change something we as individuals have little to no control over. It can also be risky to draw attention to ourselves by speaking against others in a position of power who may, and sometimes will, exert their influence to attack and hurt us as well as those we care for.
Despite the temptation to hide from the noise, we must remain strong and acknowledge that by creating transparency and visibility into dark and sometimes painful events we are actually opening the door to the opportunity for positive change. Obscuring truth does nothing to help a worthy cause or to better society. Remaining silent about an injustice does not provide the victim with any form of respect or comfort. Pretending something didn’t happen doesn’t make the consequences and outcomes any less real for the casualty. Inaction does not provide any benefit except perhaps the avoidance of an immediate conflict.
Many times, shining a light on something does provide tangible benefit. It creates visibility and awareness, and provides opportunity for the truth to be exposed. Although transparency itself may not solve a problem, reflection and openness should make the misalignment more critical and obvious. I believe the majority of us want trust and honesty wherever we are, whether it be in the boardroom, on the manufacturing floor, in a political office, or even in a private home.
“Each time a man stands up for an ideal, or acts to improve the lot of others, or strikes out against injustice, he sends forth a tiny ripple of hope, and crossing each other from a million different centers of energy and daring, those ripples build a current that can sweep down the mightiest walls of oppression and resistance.”
~ Robert Kennedy
However we must also acknowledge that sharing truth may often be painful and uncomfortable, and in order to create the opportunity for truth we must first provide individuals with safety so they may find the courage to do what is right. Without safety people fear reprisals, embarrassment, retribution, consequences, and loss of respect. History has taught us that without safety and courage we cannot expect most people to bridge the chasm from fear to justice, and as a result the silence will continue. With silence there will be no hope for change. So in order to help define expectations and to foster a safer environment for effective communication we need a code to live by; one that provides standards and creates safety – that serves as a beacon in the darkness so that we may uphold ourselves and one another to it.
To be absolutely clear, I am not saying that policies, processes and tools are more important than people. Instead, I am acknowledging that the right combination of policies and processes with appropriate tools and a method to uphold those ideals should serve to provide opportunity for fairness for people, which is the desired outcome.
A Disturbing Retrospective Leading to a Hopeful Outcome
At the end of World War II when “relative” safety was finally achieved, people were exhausted, shocked and appalled with the magnitude of human atrocities they bore witness to. Given the darkness of the times it may have seemed less painful to move on, put it in the past, and perhaps even obscure disturbing facts rather than revisit them in the pursuit of learning. Instead, the leadership of that time chose to leverage careful inspection to uncover truths and provide visibility with the aspiration that something good could flow out of the evil. In the end the aim was to use the learnings to create a shared understanding and define standards and expectations for a safe environment in the future.
“Those who cannot learn from history are doomed to repeat it.”
~ George Santayana
To this end I believe we already have a code to live by, but I surmise most of society doesn’t give it the continuous, serious consideration and support it deserves. The United Nations Universal Declaration of Human Rights (UDHR) was created on December 10, 1948 as a direct outcome of the learnings from World War II, and in this brief but impactful document are 30 articles that define human equality and set the standards for safety. Despite some of its choice wording and age (at almost 70 years) I believe it is still directly relevant and bears serious attention.
The UDHR document transcends political borders, gender, orientation, race, religion, boardrooms, workplaces, homes, family, and economic status. Every person on this planet should not only read it, but actively live by, work by, and explicitly honour the values it represents. The UDHR should become the definitive core learning article for every child. If we all continuously make a firm commitment to hold ourselves and others to the standards in the UDHR, I believe we could collectively create opportunity for better safety, transparency, respect, and courage in the workplace, at home, and abroad by putting focus on what matters most – equality and the value of and compassion for human life.
The UDHR document may be policy, but with continuous effort, universal agreement and support it enables and empowers people. It may not be perfection, but it is aspirational towards it. It focuses on individual rights but strongly values human interaction. It promotes balance, harmony and partnerships. It demands mutual respect and caring. It is elegant in its simplicity. It promotes collaboration and shared responsibility. It defines clear expectations for a safe environment.
I believe the UDHR is the manifesto of real, human agility, and if enough of us embrace and enforce it I believe we could collectively make real, positive change.
Now, A Challenge
I challenge each and every one of you to take time to read the UN Declaration of Human Rights. I don’t just mean on the train on the way to work, or over morning coffee, or while your kids are playing soccer or hockey, or whatever you do to pass a few minutes of time. I mean take time to really, truly and deeply comprehend what each of the thirty articles is saying. Reflect on the value of the wisdom that it provides and how that wisdom came from pain and learning. I then encourage you to share it with every family member (adults and youth) and ask for constructive feedback on what it says about them and their personal lives. I encourage you to share it with every co-worker and then have an open, honest dialogue about what your company culture and leadership either does or fails to do to provide a safe work environment and to promote equality, truth, transparency and human rights.
Then, I challenge you to ask every single day “Given the declaration, what small positive adaptation or change can I make right now to help our family, friends, peers, coworkers and humanity achieve these goals and outcomes?” You could start with something as simple as a brief conversation, and see where it goes.
“Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.”
~ Martin Luther King, Jr.
I asked myself that very question after visiting the UN General Assembly and Security Council Chambers in New York late last year. In response, one of my first actions in 2018 is to publish this article in an effort to re-establish awareness about the UN declaration and how it may bring hope and positive change if we can rally enough people behind it. How about you?
A secondary (and arguably less important) challenge I am issuing for Lean and Agile enthusiasts is for you to identify the patterns and key words in this article that I have borrowed from various facets of the Lean and Agile domains (hint: there are at least 20 different words – can you spot them?). I purposefully embedded these patterns and key words in this article to explicitly highlight the parallels that I see between Agility and the UDHR, and I hope you see them too.
Affiliated Promotions:
Try our automated online Scrum coach: Scrum Insight - free scores and basic advice, upgrade to get in-depth insight for your team. It takes between 8 and 11 minutes for each team member to fill in the survey, and your results are available immediately. Try it in your next retrospective.
Please share!
Trust is an exceptional quality that we humans can develop with each other. It goes a long way to building positive relationships. We hope and strive for trust in our families, and with our most intimate connections. Yet do we expect trust in our work lives?
Can you imagine the relief you might feel entering your work space, knowing that you can do your work with confidence and focus? That encouragement rather than criticism underlies the culture of your workplace? That a manager or co-worker is not scheming behind your back to knock you or your efforts down in any way? That you’re not being gossiped about?
Trust is especially key in today’s work spaces. Teamwork is becoming an essential aspect of work across every kind of business and organization.
Here’s what one team development company writes about this subject:
The people in your organization need to work as a team to respond to internal and external challenges, achieve common objectives, solve problems collaboratively, and communicate openly and effectively. In successful teams, people work better together because they trust each other. Productivity improves and business prospers. (Source: http://beyondthebox.ca/workshops/team-trust-building/)
It Starts With Me and You
As with so many qualities in life, the idea of trust, or being trustworthy, starts with me and you.
It is essential that we take a hard look at ourselves, and determine whether or not we display the attributes of trustworthiness.
To do this, I might ask myself some of these questions:
Do I tell the truth?
Do I avoid backbiting (talking about others behind their back)?
Do I do what I say I’m going to do?
Do I apply myself to my work and do my best?
Do I consciously build positive relationships with all levels of people in my workplace?
Do I encourage or help others when I can?
There are many more questions to ask oneself, but these offer a place to start.
One website proposes a template to assess employees in terms of their trustworthiness:
Trust develops from consistent actions that show colleagues you are reliable, cooperative and committed to team success. A sense of confidence in the workplace better allows employees to work together for a common goal. Trust does not always happen naturally, especially if previous actions make the employees question if you are reliable. Take stock of the current level of trust in the workplace, identifying potential roadblocks. An action plan to build positive relationships helps improve the overall work environment for all employees.
This snippet comes from “Lou Holtz’s Three Rules of Life,” by Harvey MacKay:
“The first question: Can I trust you?”
“Without trust, there is no relationship,” Lou said. “Without trust, you don’t have a chance. People have to trust you. They have to trust your product. The only way you can ever get trust is if both sides do the right thing.”
The Agile Manifesto was signed and made public in 2001. It begins with short, pithy statements regarding what should be the priorities of software developers, followed by Twelve Principles. In this article I want to call attention to the fifth principle in the Agile Manifesto, which is:
“Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.”
Although it appears to be a very simple statement, I suggest that it is jam-packed with profitable guidance, and is essential to, and at the heart of, real Agility. Human qualities must be considered.
Motivation
The first part of the principle urges us to build projects around motivated individuals. What does this imply?
The idea of “building a project” makes it a process, not necessarily a fait accompli. It can change and be altered as one works toward it. There may be a structural roadmap, but many details and aspects can change in the “building.”
The second part of the statement describes motivated individuals. The verb “motivate” is an action word, meaning to actuate, propel, move or incite. Thus, in this line, is the “project” the thing which will “move or incite” those being asked to carry it out?
Or do we understand this to imply that the individuals are already “motivated” in themselves, which is an emotional condition of individuals? Is this motivation already there prior to starting a project?
The topic of motivation is rich. How does motivation occur? Is it the culture and environment of the company, lived and exemplified by its leaders, which motivates? Or is motivation an intrinsic quality of the individual? It may be both. (Daniel Pink, author of “Drive,” uses science to demonstrate that the best motivators are autonomy, mastery and purposefulness – ideas which are inherent in the Agile Manifesto.)
In any case, the line itself suggests that the project may be a) interesting to pertinent (perhaps already motivated) individuals, b) do-able by those same individuals, and c) contains enough challenges to test the mastery and creativity of the individuals. In other words, it’s going to be a project that the individuals in your company care about for more than one reason.
Environment
The second line from the fifth Principle has two distinct parts to it. The first part, “Give them the environment and support they need,” puts a great deal of responsibility on whoever is assigning the project. Let’s look at the idea of environment first.
In a simple way, we can understand environment as the physical place which influences a person or a group. It can be any space or room; it can refer to the lighting, the colours, the furniture, the vegetation, the walls, whether water or coffee is available – physical elements which will certainly affect the actions of people and teams. For example, creating face-to-face collaboration environments is also part of the Agile Manifesto.
But we must remember that environment also entails the non-physical, i.e., the intellectual, emotional, or even the spiritual. Is the environment friendly or not? Cheerful or not? Encouraging or not? Affirming or not? We can think of many non-physical attributes that make up an environment.
Support
These attributes allude to the second part of what’s to be given by an owner or manager: “…and support they need.” This idea of support pertains not just to helping someone out with tools and responding to needs, but to ensuring that the environment is supportive in every way – physically, intellectually, emotionally and spiritually. This may be a more holistic way of considering this Agile principle.
The last part of the statement is of great importance as well: and trust them to get the job done.
If you, as product owner or manager, have created motivation, environment and support, then the last crucial requirement of trust becomes easier to fulfill. There is nothing more off-putting than being micromanaged, supervised or controlled with excessive attention to small details. Trust means you have confidence in the capacity of your team and its individual members. It also implies that they will communicate with transparency and honesty with you, and you with them, about the project.
Context
The principles of Agile do not exist in a vacuum, because, of course, other principles such as the following, are relevant to this discussion:
“The best architectures, requirements, and designs emerge from self-organizing teams.”
“At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.”
This fifth principle has application far beyond IT projects. I wanted to reflect on it because it speaks to human qualities, which must be recognized as a key factor in happy work places, and in any high-performance team.
Valerie Senyk is a Customer Service agent and Agile Team Developer with BERTEIG.
It’s my opinion, and I think the opinion of the authors of Scrum, that a Scrum team must be collocated. A collection of geographically distributed staff is NOT a Scrum team.
If you work in a “distributed team”, please consider the following question.
Do the members of this group have authority to decide (if they wanted to) to relocate and work in the same physical space?
If you answer “Yes” with regard to your coworkers: then I’d encourage you to advise your colleagues toward collocating, even if only as an experiment for a few Sprints, so they can decide for themselves whether to remain remote.
If you answer “No”, the members do not have authority to decide to relocate:
then clearly it is not a self-organizing team;
clearly there are others in the organization telling those members how to perform their work;
and clearly they have dependencies upon others who hold authority (probably budgets as well) which have imposed constraints upon communication between team members.
CLEARLY, THEREFORE, IT IS NOT A SCRUM TEAM.
Leading an organization to Real Agility is a complex and difficult task. However, the core responsibilities of leaders attempting this are simple to describe. This video introduces the three core responsibilities of the senior leadership team as they lead their organization to Real Agility.
The video presents three core responsibilities:
Communicating the vision for change
Leading by example
Changing the organization
Future videos in the series will elaborate on these three core responsibilities.
Real Agility References
Here are some additional references about how leaders can help their organizations move towards Real Agility:
Mishkin Berteig presents the concepts in this video series. Mishkin has worked with leaders for over fifteen years to help them create better businesses. Mishkin is a certified Leadership Circle Profile practitioner and a Certified Scrum Trainer. Mishkin is co-founder of BERTEIG. The Real Agility program includes assessment, and support for delivery teams, managers and leaders.
The Hunt for Better Retrospectives
The rumours had started to spread: retrospectives at our organization were flat, stale and stuck in a rut. The prevailing thought was that this was stalling the pace of continuous improvement across our teams. In truth, I wasn’t sure whether this was true at all; it’s a complex problem with many possible contributing factors. Here are just some possible alternative or co-contributing causes: how the teams are organized, the level of safety, mechanisms to deal with impediments across the organization, cultural issues, levels of autonomy and engagement, competence & ability, and so on…
On the theme of safety, I thought we could try to go as far as having fun; we’d already had lots of success with the getKanban game (oh Carlos, you devil!). Where it all tied together for me was being inspired by the great question-based approach from cultureqs.com, which I’d had a chance to preview at Spark.
If I could create a game with the right prepared questions, we could establish safety, the right dialogue and maybe even have some fun.
The Retro Game
This is a question-based game I created that you can use to conduct your next retro for teams of up to 10 people. The rules are fairly simple, and you can play through a round or two in about 1 to 2 hours depending on team size and sprint duration. Prep time for the facilitator is about 2-4 hours.
Prepping to play the game
You, as facilitator, will need to prepare for 3 types of questions that are thought of ahead of time and printed (or written) on the back of card-stock paper cards.
One question per card, and each question type gets its own colour of card. About 8 questions per category is more than enough to play this game.
The 3 types of questions are:
In the Moment – These are questions that are currently on the mind of the team. These could be generated by simply connecting with each team member ahead of time and asking, “if you could only talk about one or two things this retro, what would it be?” If for example they responded “I want to talk about keeping our momentum”, you could create a question like “what would it take to keep our momentum going?”
Pulse Check – These are questions that are focused on people and engagement. Sometimes you would see similar questions on employee satisfaction surveys. An example question in this category could be “What tools and resources do we need to continue to be successful?”
Dreams and Worries – This is a longer-term view of the goals of the team. If the team has had any type of Lift Off or chartering exercise in the past, these would be questions connected to any goals and potential risks that have been previously identified. For example if one of a team’s goal is to ship product updates every 2 weeks, a question could be “What should we do next to get closer to shipping every 2 weeks?”
The face-up side of the card should indicate the question type and leave room to write down any insights and actions.
You will also need:
To print out the game board
To print out the rule card
To bring a 6-sided die
Playing the Game
Players sit on the floor or at a table around the game board. The cards are in 3 piles, grouped by type, with the questions face down.
The person with the furthest birthday goes first.
It is their turn and they get to roll the dice.
They then choose a card from the pile based on the dice roll: 1 through 3 is an “In the Moment” card, 4 is a “Pulse Check”, and 5 or 6 is a “Dreams & Worries”.
They then read the question on the card out loud and pass the card to the person on their right.
The person on the right is the scribe; they capture notes in the Insight and Actions boxes of the card for this round.
Once the question has been read aloud, the roller has a chance to think and then answers the question out loud to the group. Nobody else gets to talk.
Once they’ve answered the question, others can provide their thoughts on the subject.
After 3 minutes, you may wish to move on to the next round.
At the end of each round the person whose turn it was chooses the person who listened and contributed to the discussion best. That person is given the card to keep.
The person to the left is given the dice and goes next.
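The dice-roll mechanic described in the steps above can be sketched in a few lines of code. This is a minimal illustration only; the function name is hypothetical and not part of the game materials:

```python
import random

def card_type(roll: int) -> str:
    """Map a six-sided die roll to a question-card category.

    1-3 -> "In the Moment"     (3/6 = 50% chance per turn)
    4   -> "Pulse Check"       (1/6 chance)
    5-6 -> "Dreams & Worries"  (2/6 chance)
    """
    if not 1 <= roll <= 6:
        raise ValueError("roll must be between 1 and 6")
    if roll <= 3:
        return "In the Moment"
    if roll == 4:
        return "Pulse Check"
    return "Dreams & Worries"

# Simulate one turn:
print(card_type(random.randint(1, 6)))
```

Note the deliberately uneven weighting: half of all turns draw from the topics the team said they most wanted to discuss.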
Winning the Game
The game ends at 10 minutes prior to the end of the meeting.
At the end of the game, the person with the most cards wins!
The winner gets the bragging rights (and certificate) indicating they are the retrospective champion!
You should spend the last 10 minutes reflecting on the experience and organizing the action items identified.
Concepts at Play
Context & Reflection – Preparation is key, particularly for the “In the Moment” section. The topics will be relevant and connect with what the team wants to talk about. Also, when presented in the form of a question, they will likely trigger reflection in all those present.
Sharing the Voice – Everyone gets a chance to speak and be heard without interruptions. The game element also incentivises quality participation.
Coverage of topic areas – The 3 question categories spread the coverage across multiple areas, not just the items in the moment. The probabilities are not, however, equal; for example, there is a 50% chance of “In the Moment” being chosen on each turn.
Fun & Safety – The game element encourages play and friendlier exchanges. You are likely to have dialogue over debate.
Want to play the game?
I’d love to hear how this game worked out for you. I’ve included everything you need here to setup your own game. Let me know how it went and how it could be improved!
I recently said goodbye to one of my organization’s Agile Coaches and I felt that I needed to take a pause and reflect to consider my next move. The engagement had gone well, in fact one of the best we’ve had, but not without its share of successes and failures. But the successes had clearly outpaced any failures, and so there was a lot of good I wanted to build on.
The departing coach was part of a 3rd generation of Agile Coaches that I had worked with in the 3 years since we had begun our company’s transformation to Agile. And while he was a great coach, so were his predecessors and yet they had had fewer successes.
On reflection, what had really happened is that we had changed as a company; we had learned how to better execute our engagements with an Agile Coach.
Deciding to hire an Agile Coach.
Deciding to hire an Agile Coach can be a big step. A couple of things need to have happened: you’ve recognized that you need some help, or at least another perspective, and, given that Agile Coaches are typically not cheap, you’ve decided to invest in your Agile transformation, however big or small. You’re clearly taking it seriously.
However, through my experiences I noticed that things can get a little tricky once that decision has been made. Many organizations can fall into a trap of externalizing transformation responsibilities to the Agile Coach.
In essence, thinking along the lines of “as long as I hire a good coach, they should be able to make our teams Agile” can lead you into an engagement model that is not very Agile in the first place.
Much like how Scrum and other Agile practices connect customers with teams and establish shared risk, an organization’s relationship with its Agile Coaches needs to be a working partnership.
Positive Patterns for Coaching Engagements
So it’s important for you to set up the right engagement approach to get value out of your Agile Coach. The stakes go beyond the hard costs of the coach’s services; there is also the high cost of failing to have the right coaching in the right areas.
1. Identify the Customer
Usually it is management who will hire a coach, and they may do so to help one or more teams with their Agile adoption needs. So in this scenario, who is the customer? Is it the person who hired the coach, or the teams (the coachees) who will be receiving the services? In some cases the coachees aren’t clear on why the coach is there; they haven’t asked for the coach’s services and may even feel threatened by their presence.
For this reason, if management is hiring coaches you need to recognize that there is a 3-pronged relationship that needs to be clearly established and maintained.
The customer in this case is someone in management, i.e. the person who hired the coach in the first place. The customer’s responsibility is not only to identify the coachee but also to work with the coach to establish and support that relationship.
2. Set the Mandate
Agile Coaches typically tend to be more effective when they have one or two specific mandates tied to an organization’s goals. Not only is the mandate important for establishing why the coach is there; too many goals can also significantly dilute the coach’s effectiveness. Put another way, Agile Coaches are not immune to exceeding their own Work in Progress limits.
The mandate establishes why the coach is there, and should be tied to some sort of organizational need. A good way of developing this is to articulate what is currently happening and the desired future state you want the coach to help with.
For example:
The teams on our new program are great at consistently delivering their features at the end of each sprint. However, we still experience significant delays merging and testing between teams in order for the program to ship a new release. We’d like to reduce that time significantly, hopefully by at least half.
Once the engagement is well underway you may find that the coach, through serendipity alone, is exposed to and gets involved with a wide variety of other areas. This is fine, but it’s best to just consider this to be a side show and not the main event. If other activities start to take on a life of their own, it’s probably a good time to go back to inspect and potentially adjust the mandate.
If you’re not sure how to establish or identify your Agile goals, this could be the first goal of any Agile coach you hire. In this scenario, the customer is also the coachee and the mandate is to get help establishing a mandate.
3. Hire the Coach that fits the need
Agile coaches are not a homogeneous group; they come with many degrees of specialty, perspective and experience. Resist the desire to find a jack-of-all-trades; you’re about as likely to find one as a unicorn.
Your now established mandate will be your biggest guide to what kind of coach you should be looking for. Is the need tied to technical practices, process engineering, team collaboration, executive buy-in, transforming your culture, etc?
The other part is connected with the identified coachee. Are the coachees team members, middle management, or someone with a “C” at the start of their title? Will mentoring be required, or is the coach just there to teach something specific?
In my example earlier, in order for your team to get help with their merging & testing needs, you may have to look for a coach with the right skills within the Technical Mastery competence. And if you have technical leaders who are championing the change, potentially the ability to Mentor.
4. Establish Feedback Loops
With the coach, customer and mandate clearly identified, you now need to be ready to devote your time to regularly connect and work with the coach. Formalizing some sort of cadence is necessary; if you leave it to ad hoc meetings, you will typically not meet regularly enough, and usually only after some sort of failure has occurred.
The objective of these feedback loops is to tie together the communication lines between the 3 prongs established: the customer, the coach and the coachees. They should be framed in terms of reviewing progress against the goals established with the mandate. If the coachees ran any experiments or made any changes that were intended to get closer to the goals, this would be the time to reflect on them. If the coachees need something from the customer, this would be a good forum to review that need.
Along with maintaining a cadence of communication, feedback loops, if done regularly and consistently, can be used to replace deadlines, which in many cases are set simply as a pressure mechanism to maintain urgency. So statements like “Merge & test time is to be reduced by half by Q2” become “We need to reduce merge and test time by half, and we will review our progress and adjust every 2 weeks.”
5. It doesn’t need to be Full Time
Resist the temptation to set the coach’s hours as a full-time embedded part of the organization or team. While you may want to have the coach spend a significant amount of time with you and your coachees when the engagement is starting, after this period you will likely get a lot more value from regular check-ins.
This could look like establishing some sort of rhythm with a coachee: reviewing challenges, then agreeing on changes and then coming back to review the results after sufficient time has passed.
This approach is more likely to keep the coach as a coach, and prevents the coach from becoming entangled in the delivery chain of the organization. The coach is there to help the coachees solve the problems, and not to become an active participant in their delivery.
Time to get to work
Bringing in an Agile Coach is an excellent and likely necessary part of unlocking your Agile transformation. However, a successful engagement with a coach will have you more connected and active with your transformation, not less. So consider these 5 positive coaching engagement patterns as I consider them moving into my 4th generation of Agile coaches. I expect it will be a lot of work, along with a steady stream of great results.
The Perfect Agile Tool doesn’t yet exist. In my training and consulting work, I often have strong words to say about electronic tools. Most of the tools out there are really bad. Unfortunately, JIRA, the most common tool, is also the worst that I know of. (Actually, the only tool worse than JIRA for an Agile team is MS Project – which is just plain evil). Some Agile tools do a bit better, but most fall far short of a good physical task board (information radiator). I am often asked to evaluate and / or partner with tool vendors to “bless” their products. Here is what I am looking for before I will consider an outright endorsement of such a tool.
Features for a Perfect Agile Tool
This list is roughly organized in order of features which do show up in some tools to those which I have never seen or heard of in tools.
1. Skeuomorphism: Cards and Wall
The tool should display the current work of an Agile team in a way that is immediately recognizable as a set of note cards or Post-its on a physical wall. This includes colours, sizes, etc. Most people will type to enter data, so fonts should be chosen to mimic hand-printed letters. Every aspect of the display should remind people of the physical analogue of the tool.
2. Live Update
As team members are using the tool, all updates that they make should be visible as immediate updates to all the other team members including typing, moving cards around, etc. There is no off-line mode for the tool. In fact, if the tool is not receiving live updates, it should be clearly disabled so that the team member knows there is a problem with the information they have displayed.
3. Simple or No Access Control
Most Agile methods strongly de-emphasize or even disallow traditional roles and encourage self-organizing teams. This means that fine-grained access control to different features of the tool should be eschewed in favour of extremely simple access control: everyone can do anything with the tool. (It actually helps if there is no “undo” feature, just like there’s no easy way to erase Sharpie written on a note card.)
4. Infinite Zoom In/Out
When you are using cards on a wall, it is easy to see the whole wall or to get up close and see even very fine details on a single note card. Although it does not have to be literally infinite, the wide and tight zoom levels in the tool should be at least a few orders of magnitude difference. As well, the zoom feature should be extremely easy to use, similar perhaps to the way that Google Maps functions. Among all the other features I mention, this is one of the top three in importance for the perfect Agile tool.
5. Touch Device Compatible
This seems like a super-obvious feature in this day and age of tablets, smart phones and touch-screen laptops. And it would take the cards on the wall metaphor just that extra little way. But very few tools are actually easy to use on touch devices. Dragging cards around and pinch to zoom are the obvious aspects of this feature. But nice finger-drawing features would also be a big plus (see below)!
6. Size Limit on Cards
For techies, this one is extremely counter-intuitive: limit the amount of information that can be stored on a “card” by the size of the card. It shouldn’t be possible to attach documents, screen shots, and tons of meta-data to a single card. Agile methods encourage time-boxing (e.g. Sprints), work-boxing (e.g. Work-in-Process limits), and space-boxing (e.g. team rooms). This principle of putting boundaries around an environment should apply to the information stored on a card. Information-boxing forces us to be succinct and to prefer face-to-face communication over written communication. Among all the other features I mention, this is one of the top three in importance for the perfect Agile tool.
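The information-boxing idea can be illustrated with a tiny sketch. The `Card` class and its character limit below are hypothetical choices for illustration, not anything specified in the article:

```python
class Card:
    """A note card whose content is capped, mimicking a physical card's fixed size."""

    MAX_CHARS = 280  # arbitrary illustrative limit, not from the article

    def __init__(self, text: str):
        # Enforce the "size of the card" boundary: no attachments, no overflow.
        if len(text) > self.MAX_CHARS:
            raise ValueError(
                f"card text exceeds {self.MAX_CHARS} characters; "
                "be succinct, or talk face-to-face instead"
            )
        self.text = text

card = Card("As a reader, I want shorter cards so that I talk to people more.")
print(len(card.text))
```

The point of the hard limit is behavioural, not technical: when the card refuses to hold more, the remaining detail has to travel by conversation.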
7. Minimal Meta-Data
Information-boxing also applies to meta-data. Cards should not be associated with users in the system. Cards should not have lots of numerical information. Cards should not have associations with other cards such as parent-child or container-contained. Cards should not store “state” information except in extremely limited ways. At most, the electronic tool could store a card ID, card creation and removal time-stamps, and an association with either an Agile team or a product or project.
8. Overlapping Cards
Almost every electronic tool for Agile teams puts cards in columns. Get rid of the columns, and allow cards to overlap. If there is any “modal” behaviour in the tool, it would be to allow a team member to select and view a small collection of cards by de-overlapping them temporarily. Overlapping allows the creation of visually interesting and useful relationships between cards. Cards can be used to demarcate columns or groupings without enforcing strict in/out membership in a process step.
9. Rotatable, Foldable, Rip-able Cards
Increase the fidelity of the metaphor with physical cards on a wall. Rotation, folding and ripping are all useful idioms for creating distinct visual cues in physical cards. For example, one team might rotate cards 45 degrees to indicate that work is blocked on that card. Or another team might fold a dog-ear on a card to indicate it is in-progress. Or another team might rip cards to show they are complete. The flexibility of physical cards needs to be replicated in the electronic environment to allow a team to create its own visual idioms. Among all the other features I mention, this is one of the top three in importance for the perfect Agile tool.
10. Easy Sketching on Cards… Including the Back
Cards should allow free-form drawing with colours and some basic diagramming shapes (e.g. circles, squares, lines). Don’t make it a full diagramming canvas! Instead, allow team members to easily sketch layouts, UML, or state diagrams, or even memory aides. The back side of the card is often the best place for more “complex” sketches, but don’t let the zoom feature allow for arbitrarily detailed drawing. Lines need a minimum thickness to prevent excessive information storage on the cards.
11. Handwriting Recognition
With Siri and other voice-recognition systems, isn’t it time we also built in handwriting recognition? Allowing a team member to toggle between the handwriting view and the “OCR” view would often help with understanding. Allow it to be bi-directional so that the tool can “write” in the style of each of the team members so that text entry can be keyboard or finger/stylus.
12. Sync Between Wall and Electronic Tool
This is the most interesting feature: allow a photo of cards on a wall to be intelligently mapped to cards in an electronic tool (including creating new cards), and for the electronic tool to easily print on physical note cards for placement on a wall. There is all sorts of complexity to this feature, including image recognition and a possible hardware requirement for a printer that can handle very small paper sizes (not common!).
Key Anti-Features
These are the features that many electronic tools implement as part of being “enterprise-ready”. I’ll be brief on these points:
No Time Tracking – bums in seats typing doesn’t matter: “the primary measure of progress is working software” (or whatever valuable thing the team is building) – from the Agile Manifesto.
No Actuals vs. Estimates – we’re all bad at predicting the future so don’t bother with trying to get better.
No Report Generation – managers and leaders should come and see real results and interact directly with the team (also, statistics lie).
No Integration Points – this is the worst of the anti-features since it is the one that leads to the most anti-agile creeping featuritis. Remember: “Individuals and interactions [are valued] over processes and tools” – from the Agile Manifesto.
Evaluation of Common Agile Tools
I go from “Good” to “Bad” with two special categories that are discontinuous from the normal scale: “Ideal” and “Evil”. I think of tools as falling somewhere on this scale, but I acknowledge that these tools are evolving products and this diagram may not reflect current reality. The scale runs from one extreme to the other, with a few examples placed along it.
Plea for the Perfect Agile Tool
I still hope that some day someone will build the perfect Agile tool. I’ve seen many of the ideal features listed above in other innovative non-Agile tools. For example, 3M made a PostIt® Plus tool for the iPhone that does some really cool stuff. There’s other tools that do handwriting recognition, etc. Putting it all together in a super-user-friendly package would really get me excited.
Let me know if you think you know of a tool that gets close to the ideal – I would be happy to check it out and provide feedback / commentary!
After a recent large organizational change that resulted in a number of new teams formed, a product owner (PO) approached me looking for some help. He said, “I don’t think my new Scrum Master is doing their job and I’m now carrying the entire team, do we have a job description we can look at?”
I can already imagine how a version of me from a previous life would have responded, “yes of course let’s look at the job description and see where the SM is falling short of their roles and responsibilities”. But as I considered my response, my first thought was that focusing our attention on roles and job descriptions was a doomed route to failure. Pouring our energy there would likely just extend the pain the PO, and likely SM, were going through.
Sure we have an SM job description in our organization, and it clearly documents how the SM provides service to the organization, team and PO. But reviewing this with the seasoned SM didn’t really make sense to me; they were very well aware of the content of the job description and what was expected of them.
At the same time that this was happening, another newly paired Scrum Master asked for my help regarding their PO. From their perspective the PO was “suffocating” the team, directing the team in many aspects of the sprint in ways that the SM felt stepped beyond the PO’s role. “I don’t think the PO knows their role, maybe you can help me get them some training?” was the SM’s concluding comment.
Over the course of the next few weeks this scenario played out again through more POs and SMs sharing similar challenges. Surely this was not a sudden epidemic of previously performing individuals who now needed to be reminded of what their job was?
Recognizing the impact of change
A common pattern was emerging from all of this: change was occurring, and each individual was relying on, and to some degree expecting, old patterns to continue to work in their new situation. Their old way of working in Scrum seemed to work very well, so it was everyone else around them who was not meeting expectations.
The core issue however was that change was not being fully confronted: the product was different, the team competencies were different, the stakeholders were different, the expectations were different and finally the team dynamic was different all the way down to the relationship between the SM and PO.
Scrum as a form of Change Management
I looked for the solution in Scrum itself, which is at its heart a method for teams to adapt to and thrive amid change. Was there enough transparency, inspection and adaptation going on between the SMs and POs in these situations? I would argue, not enough.
A pattern was becoming clear: nobody was fully disclosing their challenges to the other, they hadn’t fully confronted and understood their new situation and hadn’t come up with new approaches that would improve things. Said another way, they hadn’t inspected their new circumstances sufficiently and transparently enough so that they could adapt their role to fit the new need.
One thing that many successful SMs and POs recognize is that they are both leaders dependent on each other, and for their teams to be successful they need to figure out how they will work together in partnership. It doesn’t matter whether the terms of that partnership get hashed out over a few chats over coffee or through a facilitated chartering workshop. What matters is clarity around how you agree to work together as partners in meeting some shared goal.
As an SM or PO, here are some sample questions whose answers you may wish to understand and align on:
Do we both understand and support the team’s mission and goals?
What are the product goals?
How can we best help the team achieve those goals?
Are there any conflicts between the team and product goals?
When our goals or methods are in conflict, how will we resolve them?
In what ways will I be supporting your success as an SM/PO?
How will we keep each other informed and engaged?
Should we have a peer/subordinate/other relationship?
So if you are an SM or PO, and it’s unclear to you on the answers to some of these questions, you may just want to tap your leadership partner on the shoulder and say “let’s talk”.
The greatest benefit of working in a group is our diversity of viewpoints and approaches; groups hobble themselves when they don’t continually give attention to creating a container of trust and shared identity, one that invites truth-telling, hard questions, and the outlier ideas that can lead to innovation.
One antidote to over-designed collaboration is the check-in.
Many organizations try to find an electronic tool to help them manage the Scrum Process… before they even know how to do Scrum well! Use team rooms and manual and paper-based tracking for early Scrum use since it is easiest to get started. Finding a Scrum tool is usually just an obstacle to getting started.
The culture of most technology companies is to solve problems with technology. Sometimes this is good. However, it can go way overboard. Two large organizations have attempted to “go Agile” but at the same time have also attempted to “go remote”: to have everyone using electronic Scrum tools from home to work “together”. The problem with electronic Scrum tools is three-fold. They
prevent the sharing of information and knowledge,
reduce the fidelity of information and knowledge shared, and
delay the transfer of information and knowledge.
Scrum Tools Prevent Information Sharing
Imagine you are sitting at your desk in a cubicle in an office. You have a question. It’s a simple question and you know who probably has the answer, but you also know that you can probably get away without knowing the answer. It’s non-critical. So, you think about searching the company directory for the person’s phone number and calling them up. Then you imagine having to leave a voice mail. And then you decide not to bother.
The tools have created a barrier to communicating. Information and knowledge are not shared.
Now imagine that the person who has the answer is sitting literally right next to you. You don’t have to bother with looking up their number nor actually using a phone to call. Instead, you simply speak up in a pretty normal tone of voice and ask your question. You might not even turn to look at them. And they answer.
Scrum tools are no different from these other examples of tools. It takes much more energy and hassle to update an electronic tool with relevant, concise information… particularly if you aren’t good with writing text. Even the very best Scrum tools should only be used for certain limited contexts.
As the Agile Manifesto says: “The most effective means of conveying information to and within a team is face-to-face communication.”
Scrum Tools Reduce Information Fidelity
How many times have you experienced this? You send an email and the recipient completely misunderstands you or takes it the wrong way. You are on a conference call and everyone leaves the call with a completely different concept of what the conversation was about. You read some documentation and discover that the documentation is out of date or downright incorrect. You are using video conferencing and it’s impossible to have an important side conversation with someone, so you resort to trying to send text messages which don’t arrive in time to be relevant. You put a transcript of a phone call in your backlog tracking tool but you make a typo that changes the meaning.
The tools have reduced the fidelity of the communication. Information and knowledge are incorrect or limited.
Again, think about the difference between using all these tools and what the same scenarios would be like if you were sitting right beside the right people. If you use Scrum tools such as Jira, Rally or any of the others, you will have experienced this problem. The information that gets forced into the tools is a sad shadow of the full information that could or should be shared.
As the Agile Manifesto says: “we have come to value: individuals and interactions over processes and tools.”
Scrum Tools Delay Information Transfer
Even if a person uses a tool and even if it is at the right level of fidelity for the information or knowledge to be communicated, it is still common that electronic tools delay the transfer of that information. This is obvious in the case of asynchronous tools such as email, text messages, voice mail, document repositories, content management systems, and version control. The delay in transfer is sometimes acceptable, but often it causes problems. Suppose you take the transcript of a conversation with a user and add it into your backlog tracking tool as a note. The Scrum Team works on the backlog item but fails to see the note until after they have gone in the wrong direction. You assumed they would see it (you put it in there), but they assumed that you would tell them more directly about anything important. Whoops. Now the team has to go back and change a bunch of stuff.
The Scrum tools have delayed the communication. Information and knowledge are being passed along, but not in a timely manner.
For the third time, think about how these delays would be avoided if everyone was in a room together having those direct, timely conversations.
As the Agile Manifesto says: “Business people and developers must work together daily throughout the project.”
Alternatives to Scrum Tools
Working in a team room with all the members of the Scrum Team present is the most effective means of improving communication. There are many photos available of good team rooms. To maximize communication, have everyone facing each other boardroom-style. Provide spacious walls and large whiteboards. Close the room off from other people in the organization. Provide natural light to keep people happy. And make sure that everyone in the room is working on the same thing! Using Scrum tools to replace a team room is a common Scrum pitfall.
The most common approach to helping a team track and report its work is to use a physical “Kanban” board. This is usually done on a wall in which space is divided into columns representing (at least) the steps of “to do”, “in progress” and “done”. On the board, all the work is represented as note cards each with a separate piece of work. The note cards are moved by the people who do the work. The board therefore represents the current state of all the work in an easy-to-interpret visual way. Using a tool to replace a task board is another variant of this common Scrum pitfall.
Affiliated Promotions:
Try our automated online Scrum coach: Scrum Insight - free scores and basic advice, upgrade to get in-depth insight for your team. It takes between 8 and 11 minutes for each team member to fill in the survey, and your results are available immediately. Try it in your next retrospective.
In our professional lives and in doing business, we commonly follow the advice to “dress for success.” We make certain to wear that business suit, or a particular pair of snazzy heels, or a certain color of tie. For better or for worse, we can be judged in the first few seconds of contact with a potential employer or customer by our attire, our hairstyle, our facial expression, our nose ring…
A more subtle way we evaluate a person is through the sound of his or her voice. The voice is a very personal instrument, and it can communicate so much about who you are, your abilities and your intentions.
The voice can tell you whether someone is nervous or at ease. Whether they’re authentic or stringing you a line. Whether they care if they communicate with you or not. When I was a kid, I thought I could detect when someone was lying to me by a certain glitch in the voice, or a tell-tale tone. Often, our brain makes intuitive judgements about what’s being said to us, and is sensitive to vocal rhythm, clarity, tones, and the use of language.
One may think it’s not fair to judge someone by their voice. Let’s face it, a voice – like being short, or having a large nose – is usually unchangeable. But it’s how the voice is used that matters. We all have an inherently full, expressive voice, but things happen to us in life that can negatively influence and/or harm that voice.
Think of the person who speaks so quietly it’s almost a whisper – you must lean closer to catch what she says. This person may have had some trauma in her life, like being constantly told as a child to ‘be quiet’, to de-voice her. I know people whose greatest fear is public speaking, who quake inwardly and outwardly, even if they have something important to share with others.
Personality is also expressed through the voice. Imagine the annoyingly loud talker sitting nearby in a restaurant. This is certainly someone who wants too much attention and tries to get it by being overbearing. Or the fast-talker, who doesn’t want any other opinions but his own to be expressed, and doesn’t give the listener an opportunity to think or to respond, lest they disagree with him.
Anyone can be trained to use their voice for positive communication. A voice is an instrument that can become effective and optimal with practice.
Here’s a few things to think about in how you use your voice:
Are you clearly enunciating your words so as not to be mis-heard?
Are you directing your voice to the person or people you want to communicate with?
Are you speaking in a rhythm that’s neither too fast nor too slow?
Are you allowing your true feelings or intentions to come through?
Are you being honest?
The voice is just one of the important tools we use to communicate. If your work requires relating to other people in any way, for example, making presentations, or promoting a product, consider how you use your voice and what it may communicate about you!
I’ve started to show this video in my public CSM classes (see sidebar for scheduled courses) as part of the discussion about why co-location for Agile teams is so important. The video is a humorous look at what conference calls are like. Probably the most notable part of it is the fact that on a conference call you can’t see people’s body language and facial language which are important cues for efficient communication:
The Product Owner needs to be in contact with all those that are invested in the work of the team (aka stakeholders). These stakeholders have information on the marketplace, the users’ needs, and the business needs. The Product Owner must be able to communicate with each of these individuals whenever the need arises. If this is possible, the entire Scrum Team will have the most up-to-date information that will aid them in their execution of the product. If not, the team will have to wait for information and/or guess, which will cause confusion, blame, distrust, and unhappy customers.
It is the ScrumMaster’s job to remove the Scrum Team’s obstacles that occur through all levels of the organization. To do this properly the ScrumMaster must be able to connect directly with all stakeholders of the team including those outside the organization. This direct communication aids in addressing identified obstacles with the appropriate individual or group. Without the ScrumMaster being allowed this direct communication, he will have to deal with a third party which may distort the information and/or be unable to convey the importance of removing an obstacle or addressing a need. The ScrumMaster is like a catalyst that should be able to set ablaze those individuals that are interacting or connecting with the team either directly or indirectly.
|
The 2020 presidential election won't occur for another 1,399 days, and yet Democrats are already trying to beef up their progressive credentials for the party's primary.
New York Gov. Andrew Cuomo (D) announced a plan on Tuesday morning with Vermont Sen. Bernie Sanders (I) offering free two-year community college to New York families with an income of $125,000 or less, reported The New York Times.
Cuomo stood side-by-side with Sanders, two former political enemies who blasted each other last April on the issues of gun control and minimum wage, declaring free college as the future.
The bill is set to cost more than $160 million annually and would need approval by the Republican-controlled State Senate, so it's not even guaranteed to pass. However, Cuomo's no fool. He knows that in order to gain any hopes of snagging the Democratic nomination in 2020, he would have to win over progressives like Sanders.
As Governor of New York, Cuomo has been praised as the face of moderate Democrats: fiscally moderate and socially liberal, having backed both gay marriage and property tax caps. He works well with the Republican State Senate and has even campaigned for those who voted for some of his agenda.
He's also had an openly hostile relationship with progressive New York City Mayor Bill de Blasio and has barely lifted a finger to repair the broken Democratic Party in the State Senate, some of whom caucus with Republicans.
For all those reasons and more progressives aren't buying his "come to Sanders" moment on education policy. Brendan O'Connor wrote in Jezebel on Tuesday an article entitled "Andrew Cuomo is a f****ing snake."
"Andrew Cuomo wants to be loved," O'Connor wrote. "He wants to run for president. He is also the embodiment of a morally bankrupt Democratic Party that has failed, at every juncture, to live up to any of the ideals it purports to hold dear. Andrew Cuomo is a snake. He should be allowed nowhere near the 2020 presidential election."
Still, Cuomo realizes that he has a golden opportunity: there are just a handful of statewide elected Democrats with the pedigree to run for president, and progressives are between a rock and a hard place.
He thinks that if he can cater to enough of them to be acceptable, he can be a male Hillary without the e-mails. |
(HSSC) Chocolate with Nuts
I had to kill mummy today. In the movies, there’s always loads of blood and I guess I was disappointed that she didn’t bleed much. Even after I
pulled the knife out, there was only a dribble rolling down her cheek.
I wish I had a gun. A gun would have been easier. I once saw a head explode: it went kaBOOM like a tomato. But we don’t have any guns. And anyway,
guns are loud. They attract attention and like daddy says, we have to be quiet or the neighbours will hear.
Daddy will be coming back soon. He’ll be really proud of me for killing mummy. He’ll be a bit sad but I know he’ll say I’ve been a good boy
and give me some chocolate. Just before he went, he said if I was good and looked after mummy, he’d bring back some chocolate with nuts. He gave me
a big hug and a kiss and twirled me around like an aeroplane.
Daddy won’t be long now. He’s often away for days and days on business trips. I love it when he comes back and he pretends he forgot to get me
anything and I look all sad and then he says “Wait, now that I think about it…” and digs deep in the bottom of his suitcase and shows me my
presents. I love my presents: sometimes they have strange writing I don’t understand. I wonder what daddy will get me when he comes home.
It is dark outside. Last week the lights stopped working but it was okay because mummy said that we shouldn’t use them anymore anyway. Mummy and
daddy think I’m still a baby because they said I have to crawl around on all fours, especially near the windows.
“It’s a game we’re playing with the neighbours,” mummy said. “We have to close the curtains so they can’t see us and we can’t see
them.”
“Even during the day?” I ask. I don’t understand because mummy loves opening the curtains in the morning.
“Yes, sweetheart,” she says. “Even during the day.”
“But what about school?” I say. “I’m starting at Big School soon aren’t I?”
“No, darling. Big School is weeks and weeks away. We have to play this game with the neighbours until I say so. It is a very important game. If we
lose this game then…” Mummy gulped. “Let’s just say that we have to win this game.”
I need to go to the toilet. The toilet is very smelly because we cannot flush it but I have to go poo-poos. I squeeze my nose as tight as I can and
open the door. My poo is like brown pee. We don’t have any more toilet paper and so I have to use daddy’s old newspapers. It feels horrible on my
bottom and doesn’t clean up properly. I remember asking mummy ages ago why we don’t have any toilet paper.
“We can’t go out of the house, sweetheart,” she said. “If we go out of the house, then Mr Jones next door will see us and we lose the game.
You remember the game, don’t you?”
“I hate this game. I hate it in here. I want to go outside.” My face was red and sweaty. It was hot and smelly in the house. This game was
rubbish. My voice was getting louder and louder.
“Shhh sweetie,” said mummy wrapping her arms around me like a blanket. “I know you don’t like this. I don’t like it here either. But if we
go outside, mummy and daddy will get into a lot of trouble. You have to be calm and quiet. I promise it won’t last much longer.”
I hope daddy will come back soon. I’m scared. Even Edgar’s not around anymore. He died last week. Mummy put him in the freezer and said we should
eat him if there was nothing else but I’ve still got a tin of cat food in my toy box. But I have to go downstairs to get the can opener. I don’t
want to go downstairs because Mummy’s lying in the rocking chair like she’s sleeping, only she’s not breathing. Maybe daddy will bring home a
chocolate bar with nuts. I don’t want to eat Edgar.
Mummy tells me that I shouldn’t look outside but sometimes I open up the curtains a tiny, tiny bit and peek through the smallest crack in the
curtains. I do this at night when mummy’s asleep. Mr Jones is walking outside his house with his wife and a lot of other people I don’t know. He
doesn’t see me. Nobody in our street has their lights on. The moon is very bright and shines directly on top of Mr Jones’ bald head.
Mr Jones is a nice man. He wears half-glasses and wears a green cardigan. He lives next door and always seems to be gardening. He likes to ask me how
I’m getting on and how much I’m growing. I can’t wait to go to Big School. I’ve already got my pencils and pencil sharpener and a ruler.
I’ve also got a pencil case and a blue bag with a dinosaur on the back. I like dinosaurs.
When I am scared at night I pull up the sheets so the ghosts don’t get me. Mummy comes into my room and tells me everything is alright. I sleep with
the covers pulled high over my head and sometimes I hear other people scream but not me. I’m a good boy and I don’t make a sound, even though I
want to. Mummy says that if you make a sound then they get you. You have to hide yourself away so they can’t see or hear you.
Jemma made noises and mummy had to kill her. Jem was all sick and feverish and I said that we should go to the hospital but mummy said we couldn’t.
She got hotter and hotter until one day she just stopped breathing. Mummy crossed herself and cried and plunged a knife deep into Jemma’s right eye.
I know because I saw it through the keyhole, even though she said that I had to leave the room. But mummy knew I was outside and opened the door and
said if she ever got like Jemma that I’d have to put a knife in her too. I was a good boy and didn’t cry or anything.
“Don’t be sad mummy,” I said hugging her. Jem was a funny colour. Her face was a strange grey.
Mummy used to hold me tight when I was scared but I’ve got no-one to hold now except my teddy bear. Mummy told me that I had to kill her just like
Jem.
I said, “No mummy I don’t want to. Mummy please don’t make me.” And she said “Shhhh. I love you more than life itself. Do this for me. Do
this or I’ll end up just like Mr Jones and everyone else outside. Your daddy will come home soon and then you and him will run away to see Grandma
and Granddad in the countryside.” She kissed me and hugged me and then put this huge knife we cut meat with up to her eye and told me to push it in as
hard as I could. She said, “I love you so much” and then I pushed and pushed and pushed.
There’s this scratching downstairs at the front door, just like Edgar used to do, when he wanted to come in. I think that it might be daddy but then
I remember that he said that if I hear any funny noises that I wasn’t to listen to them. I crawl as fast as I can to mummy and daddy’s bed and
pull the dirty sheets high over my head. Outside, Mr Jones is shuffling, shuffling, shuffling, like he’s sweeping his front porch.
My head under the covers, I hear something stir in the living room, like the rocking chair is moving back and forth. I lie perfectly still, holding my
breath 1…2…3… The sound stops. I exhale and wait. There is a noise in the hallway. Then it climbs the stairs slowly, one step at a time. It
makes a funny sound on the carpet like mummy’s slippers.
Really good scary story! I enjoyed reading it - although I felt sad for your little boy. I like the way the tension increases and then accelerates all
the way up until the end. Great pacing and nice scenes too - it reminded me a little of a Hitchcock film - a very visual unfolding, which IMO is what
scary stories are supposed to be about.
This was a really good story!! I especially like how it starts off in such a shocking way, and then proceeds to take the reader on a wild ride! I
was thinking that maybe the little boy's parents were psychopaths or something. I then realized that the neighbors were the problem! (well, part of
the problem anyway)
Your portrayal of the story through a child's eyes is very believable. I like how I was confused about how I felt about him, until I realized what was
happening. (I was thinking he might be some sort of demon child or something!)
The only thing that I was confused about was this sentence:
He likes to ask me how I’m getting on and how much I’m growing.
I kept wondering how he was able to talk to Mr Jones if he was only peeking at him through the curtains. I guessed that he was referring to an earlier
time when things were more normal. However, the way it reads in that paragraph, it confused me for a bit.
The way you ended your story was great because it leaves the reader to their imagination! I remember the days when I thought a blanket over my head
was all the protection I needed.
Thanks for your comments sylvrshadow. I've done a bit of further tweaking here and there after I submitted this story and edited out that part you
pointed out. The boy is referring to a previous time but it is a little vague in the context of the story. I didn't re-submit the story post-editing
as I thought it a little pointless.
In any case, thanks for your kind words and I look forward to reading your stories if you manage to submit any for this contest.
|
Inner angle
Inner angle may refer to:
internal angle of a polygon
Fubini–Study metric on a Hilbert space |
About Us
One of the first specialty food companies in Vermont, Sidehill Farm started in the 1970s when Dot and Ben Naylor resurrected old family recipes for artisanal jams and began producing them for friends and family. Their children spent many summer afternoons picking raspberries for those first jam batches. Folks appreciated the unique way Sidehill Farm makes their jam: No pectin or artificial ingredients, just boiled-down fruit and sugar, handmade and hand-stirred, which produces a jam with more fruit and more flavor in every bite. Before long Ben and Dot began making jam full time for specialty food shops throughout Vermont.
In 2000, Dot and Ben passed on their jam-stirring spoon to their son Kelt and his wife Kristina, allowing Kelt to fulfill his dream of moving back to Vermont after working in high tech manufacturing in the Boston area.
Joined now by their children Caroline and Max, the Naylors have been carefully expanding Sidehill’s offerings with new jam flavors and a line of Vermont Maple toppings. Their product line now includes 18 jams and fruit butters, including old favorites raspberry, blueberry, and strawberry rhubarb, as well as some unexpected treats like cinnamon pear, mango habanero, and hot red pepper. They’re especially excited with the centerpiece of their topping line, Maple Apple DrizzleTM. A delicious combination of Vermont maple syrup and New England apples, it tastes like apple pie in a jar, and can be enjoyed with ice cream, pancakes, or all by itself.
For more than thirty years Sidehill Farm has sold their jams at gift and specialty shops throughout New England. But you don't have to make the trip north to enjoy the tastiest jams in Vermont: Sidehill can also send their jams and Drizzle toppings directly from their kitchen to yours. And don't forget to order a few extra jars or a gift basket for your family and friends: They make a unique Vermont gift in a unique package. |
Q:
PreferenceActivity lifecycle
I read http://developer.android.com/reference/android/app/Activity.html
but I have a question about PreferenceActivity lifecycle:
Does a PreferenceActivity get onStop() or onDestroy() calls?
I understand it gets onStop() called when the user clicks 'Back', but what about onDestroy()?
When does onDestroy() for a PreferenceActivity get called?
Thank you.
A:
As PreferenceActivity is a subclass of Activity, it follows the same lifecycle: onStop() is called once the activity is no longer visible, and onDestroy() is called when the activity is finishing (for example, after the user presses Back) or when the system destroys it, e.g. on a configuration change.
Click on the link you provided and then navigate to Indirect Subclasses or here is the direct http://developer.android.com/reference/android/preference/PreferenceActivity.html
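Since PreferenceActivity inherits its lifecycle unchanged from Activity, the Back-press case can be illustrated with a small model. The sketch below is a toy, plain-Java model (not the Android SDK; the class and method names merely mirror Android's callbacks) of the order in which the callbacks arrive:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (NOT the Android SDK) of the lifecycle callbacks an Activity
// subclass -- including PreferenceActivity -- receives.
class LifecycleModel {
    final List<String> log = new ArrayList<>();

    // Pressing Back finishes the activity: the system delivers
    // onPause(), then onStop(), then onDestroy(), in that order.
    void pressBack() {
        log.add("onPause");
        log.add("onStop");
        log.add("onDestroy");
    }

    // Launching another activity (without finishing this one) stops at
    // onStop(); onDestroy() only arrives later if the activity is destroyed.
    void navigateAway() {
        log.add("onPause");
        log.add("onStop");
    }
}
```

So to answer the question directly: after Back, the log reads onPause, onStop, onDestroy — onDestroy() does get called when the user backs out — whereas merely leaving the screen stops at onStop().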
|
Effects of temporal muscle detachment and coronoidotomy on facial growth in young rats.
This study analyzed the effects of unilateral detachment of the temporal muscle and coronoidotomy on facial growth in young rats. Thirty one-month-old Wistar rats were distributed into three groups: detachment, coronoidotomy and sham-operated. Under general anesthesia, unilateral detachment of the temporal muscle was performed for the detachment group, unilateral coronoidotomy was performed for the coronoidotomy group, and only surgical access was performed for the sham-operated group. The animals were sacrificed at three months of age. Their soft tissues were removed, and the mandible was disarticulated. Radiographic projections (axial views of the skulls and lateral views of hemimandibles) were taken. Cephalometric evaluations were performed, and the values obtained were submitted to statistical analyses. There was a significant homolateral difference in the length of the premaxilla, height of the mandibular ramus and body, and the length of the mandible in all three groups. However, comparisons among the groups revealed no significant differences between the detachment and coronoidotomy groups for most measurements. It was concluded that both experimental detachment of the temporal muscle and coronoidotomy during the growth period in rats induced asymmetry of the mandible and affected the premaxilla. |
Tell us how you discovered Photochain or how it discovered you and what made you want to join the team?
Working in the blockchain industry since a few years ago I keep an eye on new blockchain solutions, and I take an interest in those I think are perfect use cases. As a dApp (a decentralized application), I have found Photochain a fantastic idea and I got excited when I saw the team even worked on creating a proof-of-concept (demo) open to the public. I had to share my thoughts with the team and offer my support, and so I did. After watching them for a brief period of time I have then decided to support the project. And here I am!
Where did you think you could help with your skills and your experience from Blockchain Zoo and other projects?
Blockchain applications, and decentralized applications in general, require a different architecture compared to centralized applications. Several years of my personal experience in deploying decentralized applications, and with the team at Blockchain Zoo, can offer help in implementing Photochain to be a successful dApp.
How do see Photochain amongst the big stock photography players and how will it cope in the market?
No, I don’t see it among them; I see Photochain ABOVE them in a very short time. It will get there by word of mouth, organically. Photochain is solving a very important problem in the photography industry by providing a decentralized solution that pays the photographers real money for the value of their photos, and offers cheaper access to content to the buyers, while guaranteeing usage rights cryptographically. It is inevitable that the industry will quickly adopt Photochain as one of their core tools.
What makes Photochain so different?
It is a decentralized semi autonomous application. By removing centralized management costs, the service can be offered at very competitive fees to both photographers and their clients, whilst processing the payment instantly. Being a blockchain application, Photochain offers cryptographic security in the management of rights and usage of the photos. Their smart and simple use of blockchain to serve photography sellers and buyers is what makes them so different.
Where do you see Photochain in a crypto future?
Along many other dApps people use daily from their mobile phones and in their computers. Using an operating system such as ElastOS, made for dApps. Photochain will fit the niche for those that need to buy high quality original photos. DApps like Photochain will be as simple and as common as the computer programmes and applications we use today.
Tell us a story about particular photo
It is the picture of some source code on the monitor of an old Apple Plus, in the late 80s. That piece of software was one of the first successful applications I coded! I didn’t know it at the time, but this was the launch of a very successful career. And with so many other achievements since, this was still one of my proudest moments. |
His irritation from the weekend had not faded. His apprehension about the week ahead had not been quashed. Dave Roberts planned to shield his players from both emotions. As manager of the Dodgers, he performs a daily ritual of alchemy, converting frustration into optimism, concern into calm. In his third year at the helm, the mask he chooses to wear has become his face.
“For me, it’s about never showing panic,” Roberts said as he sat at a True Food Kitchen outside downtown San Diego last week ahead of a series with the Padres, and days before the All-Star break.
This particular moment of the season challenged Roberts’ constant search for equilibrium. He views the clubhouse like a globe spinning on its axis. He can influence the environment, but he prefers gentle gusts over heavy gales. The approach has guided him to two division titles, through the heartache of last fall and the hangover of this spring.
So he suppressed his disgust about losing a sloppy series to the Angels. Roberts thought ahead to the external conundrum of the next seven days. He remembered how this time felt as a player, close enough to touch the break, a respite from baseball’s nine-month marathon. He fretted about his team overlooking four games with the Padres and envisioning the beach. He shook his head — this he could not abide.
“Once we get to the ballpark, our job is to win a baseball game,” he said. “Not to be on your phone, talking to a travel agent and talking to your teammates about going to Hawaii or Mexico.”
Roberts won National League Manager of the Year in his first season. He led the team to the World Series in his second. Yet his third season, which will pause this week as Roberts manages the National League in the All-Star Game on Tuesday at Nationals Park, has presented its own set of obstacles: A cavalcade of injuries, a sluggish start that wrought continuous anguish, and the malaise of a team which knows only October matters.
Roberts disdains team meetings. He believes subtle but consistent reinforcement outweighs abrupt intervention. As he pondered how to influence his players before their first game in this series against the Padres, he settled on an extra dose of unflagging optimism. “For something like that, I make a concerted effort to come in very high-energy,” he said.
A couple of hours later, the Dodgers lounged inside the visitors’ clubhouse. The voice of their manager boomed across the hall. He gabbed with anyone who crossed his path. He greeted hitters and closers and broadcasters with the same buoyancy.
“Cody! How you doing, buddy?”
“What’s up, Kenley!”
“Joe D.!”
Roberts stung the backside of Joe Davis. He held a cup of coffee and wore a smile. He bounded from the food room to his office. His behavior was performative but purposeful.
The atmosphere inside the clubhouse was light. A group of seven Dodgers surrounded Matt Kemp as he regaled them with stories from his travails at the Home Run Derby. The group gathered behind an iPhone and cackled at Yasiel Puig’s homer-free foray in the 2014 Derby.
“He didn’t know he had to bring a pitcher,” Justin Turner said.
“Oh, he didn’t?” Kemp said.
“He was like, ‘No one told me,’” Turner said.
“Are you serious?” Kemp said.
The laughter subsided and the players dispersed to take swings in the cage or receive treatment in the trainer’s room. Inside his office, Roberts studied the opposing lineup with pitching coach Rick Honeycutt. The delicate machine hummed along. The Dodgers won that night by six runs. Three days later, they stood alone in first place of the National League West for the first time all season.
---
On May 16, after a loss to the cellar-dwelling Miami Marlins, the Dodgers fell to 16-26. They had lost six games in a row, all to the Marlins and the Cincinnati Reds, two tanking teams. The Dodgers resided in fourth place in the division. They trailed Arizona by 8½ games.
The situation looked bleak. The hitters lacked power. The starters made abbreviated outings. The relievers sprinkled gasoline on fires. Clayton Kershaw was sidelined with an arm injury, Corey Seager was out for the season and Turner had only just returned from the disabled list.
Earlier that day in Miami, Roberts dropped a quote often attributed to Winston Churchill. “If you’re going through hell,” Roberts said, “keep going.” He was grinning when he spoke. A few hours later, following another dispiriting defeat, the smile was gone. “I feel bad for our ballclub,” he said.
The losing gnawed at the manager. But he had been through worse last year, when the Dodgers lost 16 of 17 games late in the season and looked comatose heading into the playoffs. The team rallied to reach the World Series. Roberts felt a similar turnaround was possible in 2018, even if it looked unlikely.
As the losses piled up, Roberts kept practicing his alchemy. He vented after one particularly gruesome loss to Cincinnati, telling reporters, “This isn’t a ‘try’ league; everyone is trying,” but otherwise kept an even keel. He swallowed anger and projected positivity. His demeanor during this stretch impressed general manager Farhan Zaidi.
Roberts showed “the right mixture of disappointment, frustration, but also understanding there’s a game to be played the next day, and being overly self-absorbed about a loss doesn’t really help anybody,” Zaidi said. “When things are going bad, like they were last September, like they were early this year, I think that’s an incredibly valuable leadership quality.
“Guys are looking around, waiting for reasons to panic. When the guy in that position is mad, but confident we’re going to turn things around, it has a lot to do with how we withstand some of these stretches.”
Facing a nearly double-digit deficit in the division, Roberts implored his players to narrow their focus to daily increments. The Dodgers beat the Marlins on May 17 to avoid a sweep. After a rainout in Washington, they won a doubleheader over the Nationals and swept a series, with Roberts hollering “it’s going to be special!” in the dugout during a comeback victory. They went 6-4 on a homestand at Dodger Stadium. They reached .500 a week later, nosing upward in the standings as the Diamondbacks stumbled.
The players deserve credit for the revival, Roberts insisted. He did not show Max Muncy how to unlock his power. He did not advise Ross Stripling to unleash his curveball. He did not fix Scott Alexander’s delivery during a spell in the minors. These are organizational successes, a blend of the coaching staff’s advice, the developmental system’s acumen and the players’ adaptability.
Yet players and Dodgers officials lauded Roberts for fostering a culture in which the unit can thrive. No longer do the players blanch if stars fall victim to injuries. When Turner’s wrist was broken by a pitch in late March, Roberts let him speak to the team a day later. The message defined the team’s ethos.
“No one’s going to feel sorry for us,” Turner recalled telling the group. “It’s just an opportunity for someone to step up.”
Muncy filled the void and slugged his way to the Home Run Derby. After Kershaw went down, Stripling pitched his way onto the All-Star team. With Chris Taylor replacing Seager at shortstop, Enrique Hernandez responded to his expanded role with 16 home runs.
Roberts kept his door open to Joc Pederson while Pederson hit one home run in April and May. He stuck with Taylor when Taylor ended too many at-bats striking out looking. His support never wavered for Kenley Jansen when Jansen failed to generate the requisite velocity or movement on his cutter. The manager stood by the group and watched them rebound.
“Honestly, even in his first year, he was so good,” Turner said. “The mentality, the energy, positivity. The mindset of weathering the storm. It’s been consistent since Day One.”
Roberts credited the organization for stockpiling talent. He lauded his coaching staff for being “positive by nature” and “coming in fresh, every day.” And he heaped praise upon his roster. Hernandez still redirected some acclaim toward his manager.
“Managers, they get blamed when the team doesn’t do well,” Hernandez said. “And they don’t get enough credit when the team does well. It doesn’t matter how good your boat is if you don’t have a good captain. It’s probably going to sink.”
---
One day last week, a few games before the Dodgers retook first place, Roberts called Stripling into his office. Zaidi and Honeycutt were waiting there. Roberts hugged Stripling and told him he made the All-Star team. Stripling looked thrilled.
“You’re an All-Star, dude,” Roberts said. “They can never take it from you.”
Roberts considers communication with his players to be vital. He also considers this area to be one in which he can always improve upon. On any day, at any given moment, he figures, someone on his roster needs his attention.
The deployment of players requires tact. The lineup rotates constantly. Roberts must massage egos and combat any anxiety wrought by failure.
“The other night I hit three rockets and went 0-for-4, and I was like ‘What the . . . . is wrong with my swing?’” Hernandez said. “And I came in the next day, and he was like, ‘If you go 0-for-4 every day for the rest of the year doing that, that’s OK. I’m still going to play you.’ I almost punched him when he said it, but it’s true.”
The conversations are not always pleasant. During the World Series, Roberts had to ask longtime friend Adrian Gonzalez to stop participating in workouts because he wasn’t on the active roster. After the Dodgers sent Pedro Baez to the minors in June, Roberts allowed Baez to fume inside his office. When Cody Bellinger did not hustle up to Roberts’ standards on a play in San Francisco in April, Roberts benched him.
The decision rankled Bellinger. Roberts met with him after the game and explained his reasoning. Bellinger returned to the lineup a day later. “The next day, we were perfectly fine,” Bellinger said. “I think that shows a good relationship. He’s really good at communicating. I had no hard feelings with it, really.”
Roberts adopted a similar approach with Puig, who had exasperated former manager Don Mattingly during three seasons together. Roberts understood how Puig had sown division inside the clubhouse and he sought to avoid a recurrence. He invited Puig to dinner during their first spring together and lavished him with praise. Roberts positioned himself, hitting coach Turner Ward and first-base coach George Lombard as Puig’s allies.
The strategy was not foolproof. Roberts has punished Puig for infractions ranging from inattention to tardiness to apathy. During a game against the Cubs in June, Puig goofed on the bases and dropped a ball in the outfield. Roberts called Puig the next day to applaud him for being accountable with the media after the loss.
“Myself and Turner Ward are closer to him than anyone, including players,” Roberts said. “We have a lot of tough conversations. But five, 10 years from now, Yasiel will look back and be so grateful for how we loved on him. And he will regret how difficult, at times, he made it for everyone.”
Roberts spoke in between bites of a poke bowl at lunch in San Diego. Puig was on the disabled list, sidelined with an oblique injury until August. The roster was in flux. Roberts was waiting to see if his team might acquire Baltimore shortstop Manny Machado or a collection of arms at the trade deadline.
Roberts hoped to win all seven games before the break. He had to settle for five. After so much strife, Roberts and the Dodgers ended the first half where they expected to be: in first place in the National League West, and in contention for the championship that has eluded them since 1988.
“I always felt we were the best team,” Roberts said. “We weren’t playing that way, obviously. And when you’re mired in it, it’s hard to see the light. I think we did a very good job of trying to put our head down, not worrying about who was ahead of us. Just worrying about trying to win a baseball game.
“And I think fundamentally, that’s just the best way to do it. And that’s what we did.”
andy.mccullough@latimes.com
Twitter: @McCulloughTimes |
Thymus with B lymphocytes in chronic lymphocytic leukaemia.
A 67-year-old man with a diagnosis of chronic lymphocytic leukaemia died suddenly following a coronary occlusion. Peripheral blood lymphocytes labelled with fluorescein-conjugated polyvalent antisera to immunoglobulin and to immunoglobulin G, but failed to label with absorbed antithymocyte serum. At necropsy, a large thymic tumour and enlarged mediastinal lymph nodes also contained immunoglobulin G-bearing B lymphocytes. The paradoxical finding of B lymphocytes with monoclonal surface immunoglobulin in a thymic mass is most unusual in chronic lymphocytic leukaemia.
@font-face {
font-family: 'Open Sans';
font-style: italic;
font-weight: 400;
src: local('Open Sans Italic'), local('Open-Sans-Italic'),
url('../assets/fonts/OpenSans-Italic.ttf') format('truetype');
}
@font-face {
font-family: 'Open Sans';
font-style: normal;
font-weight: 700;
src: local('Open Sans Bold'), local('Open-Sans-Bold'),
url('../assets/fonts/OpenSans-Bold.ttf') format('truetype');
}
@font-face {
font-family: 'Open Sans';
font-style: normal;
font-weight: 400;
src: local('Open Sans Regular'), local('Open-Sans-Regular'),
url('../assets/fonts/OpenSans-Regular.ttf') format('truetype');
}
@font-face {
font-family: 'Bebas';
font-weight: 400;
src: local('Bebas'),
url('../assets/fonts/BEBAS___-webfont.ttf') format('truetype');
}
html {
}
body {
font: 14pt 'Open Sans';
background: url('../assets/bg.jpg');
background: -moz-linear-gradient(left, rgba(0,0,0,1) 0%, rgba(0,0,0,0) 20%, rgba(0,0,0,0) 80%, rgba(0,0,0,1) 100%), url('../assets/bg.jpg'); /* FF3.6+ */
background: -webkit-gradient(linear, left top, right top, color-stop(0%,rgba(0,0,0,1)), color-stop(20%,rgba(0,0,0,0)), color-stop(80%,rgba(0,0,0,0)), color-stop(100%,rgba(0,0,0,1))), url('../assets/bg.jpg'); /* Chrome,Safari4+ */
background: -webkit-linear-gradient(left, rgba(0,0,0,1) 0%,rgba(0,0,0,0) 20%,rgba(0,0,0,0) 80%,rgba(0,0,0,1) 100%), url('../assets/bg.jpg'); /* Chrome10+,Safari5.1+ */
background: -o-linear-gradient(left, rgba(0,0,0,1) 0%,rgba(0,0,0,0) 20%,rgba(0,0,0,0) 80%,rgba(0,0,0,1) 100%), url('../assets/bg.jpg'); /* Opera 11.10+ */
background: -ms-linear-gradient(left, rgba(0,0,0,1) 0%,rgba(0,0,0,0) 20%,rgba(0,0,0,0) 80%,rgba(0,0,0,1) 100%), url('../assets/bg.jpg'); /* IE10+ */
background: linear-gradient(to right, rgba(0,0,0,1) 0%,rgba(0,0,0,0) 20%,rgba(0,0,0,0) 80%,rgba(0,0,0,1) 100%), url('../assets/bg.jpg'); /* W3C */
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#000000', endColorstr='#000000',GradientType=1 ); /* IE6-9 */
color: rgb(200, 200, 200);
margin: 0;
padding: 0;
}
a {
color: #fff;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
#main-container {
margin: 0 auto;
width: 1150px;
position: relative;
}
#gallery {
width: 1150px;
height: 400px;
border: 1px solid rgba(25, 25, 25, 0.5);
box-shadow: 0 2px 10px rgba(25, 25, 25, 0.9);
margin-top: 30px;
margin-bottom: 10px;
overflow: hidden;
position: relative;
transition: height 0.5s ease-in-out;
-moz-transition: height 0.5s ease-in-out;
-webkit-transition: height 0.5s ease-in-out;
-o-transition: height 0.5s ease-in-out;
-ms-transition: height 0.5s ease-in-out;
}
#gallery .title {
text-shadow: rgba(25, 25, 25, 0.9) 0.1em 0.1em 0.1em;
font-family: 'Bebas';
font-size: 3.4em;
position: absolute;
left: 0.2em;
top: 0em;
z-index: 20;
}
#gallery .slide {
width: 1150px;
height: 400px;
background-size: cover;
display: block;
padding: 0;
margin: 0;
position: absolute;
}
#slide-container {
position: relative;
}
#gallery .slide {
background: #000;
text-align: center;
}
#gallery .slide img {
width: 100%;
position: relative;
top: -40%;
}
#gallery .slide video {
height: 100%;
}
#gallery .slide .video-overlay {
position: absolute;
width: 100%;
height: 100%;
background: rgba(0, 0, 0, 0); /* original value was an empty rgba(); assumed fully transparent */
}
.slideable {
transition: all 0.5s ease-in-out;
-moz-transition: all 0.5s ease-in-out;
-webkit-transition: all 0.5s ease-in-out;
-o-transition: all 0.5s ease-in-out;
-ms-transition: all 0.5s ease-in-out;
}
#menu {
margin-top: 25px;
margin-left: 1.3%;
}
#menu .menu-item {
position: relative;
width: 32%;
height: 152px;
border: 1px solid rgba(25, 25, 25, 0.5);
box-shadow: 0 2px 10px rgba(25, 25, 25, 0.9);
display: inline-block;
background-size: 100% auto;
background-image: url('../assets/screenshots/22_mini.jpg');
background-repeat: no-repeat;
font-family: 'Bebas';
overflow: hidden;
}
#menu .menu-item:nth-child(2),
#menu .menu-item:nth-child(3),
#menu .menu-item:nth-child(5),
#menu .menu-item:nth-child(6) {
margin-left: 1%;
}
#menu .menu-item:nth-child(4),
#menu .menu-item:nth-child(5),
#menu .menu-item:nth-child(6) {
margin-top: 10px;
}
#menu .menu-item:nth-child(2) {
background-image: url('../assets/screenshots/15_mini.jpg');
}
#menu .menu-item:nth-child(3) {
background-image: url('../assets/screenshots/14_mini.jpg');
}
#menu .menu-item:nth-child(4) {
background-image: url('../assets/screenshots/03_mini.jpg');
}
#menu .menu-item:nth-child(5) {
background-image: url('../assets/screenshots/07_mini.jpg');
}
#menu .title {
font-size: 1.3em;
position: relative;
text-shadow: rgba(25, 25, 25, 0.9) 0.1em 0.1em 0.2em;
left: 0;
top: 0;
padding: 10px;
transition: all 0.25s ease-out;
-moz-transition: all 0.25s ease-out;
-webkit-transition: all 0.25s ease-out;
-o-transition: all 0.25s ease-out;
-ms-transition: all 0.25s ease-out;
z-index: 4;
cursor: pointer;
}
#menu .overlay {
opacity: 1;
transition: all 0.25s;
-moz-transition: all 0.25s;
-webkit-transition: all 0.25s;
-o-transition: all 0.25s;
-ms-transition: all 0.25s;
background: rgba(0, 0, 0, 0.5);
box-shadow: 1px 1px 1px rgba(0, 0, 0, 0.8), -1px -1px 1px rgba(0, 0, 0, 0.8);
width: 100%;
height: 100%;
cursor: pointer;
position: absolute;
top: 0;
left: 0;
z-index: 2;
}
#menu .menu-item:hover .overlay {
opacity: 0;
}
#menu .menu-item .sub-title {
position: absolute;
bottom: 0;
right: 0;
font-size: 3em;
color: rgb(200, 200, 200);
transition: text-shadow 0.25s;
-moz-transition: text-shadow 0.25s;
-webkit-transition: text-shadow 0.25s;
-o-transition: text-shadow 0.25s;
-ms-transition: text-shadow 0.25s;
}
#menu .menu-item:hover .sub-title {
text-shadow: rgba(5, 5, 5, 0.95) 0.1em 0.1em 0.8em,
rgba(5, 5, 5, 0.95) 0.1em 0.1em 0.8em;
}
#video {
opacity: 1;
background: rgba(0, 0, 0, 0.5);
box-shadow: 0 2px 10px rgba(25, 25, 25, 0.9);
width: 640px;
height: 360px;
margin-left: auto;
margin-right: auto;
margin-top: 20px;
margin-bottom: 10px;
padding: 3px;
cursor: pointer;
top: 0;
left: 0;
z-index: 2;
text-align: center;
}
.section .title {
font-size: 2.0em;
text-shadow: rgba(25, 25, 25, 0.9) 0.1em 0.1em 0.2em;
padding-top: 10px;
font-family: 'Bebas';
}
.section {
margin: 5px;
margin-top: 25px;
box-shadow: 0px -1px 0 rgba(25, 25, 25, 0.5);
}
.section p, .section ul {
margin-left: 50px;
margin-right: 50px;
}
.section .more-button {
float: right;
font-size: 1.1em;
}
#slide-counter {
position: absolute;
bottom: 0;
right: 0;
padding: 5px;
padding-left: 0;
}
#slide-counter .slide-counter-node {
border-radius: 8px;
width: 7px;
height: 7px;
border: 1px solid #fff;
box-shadow: 1px 1px 3px rgba(5, 5, 5, 0.9);
float: right;
margin-left: 4px;
cursor: pointer;
background: transparent;
transition: background 0.5s;
-moz-transition: background 0.5s;
-webkit-transition: background 0.5s;
-o-transition: background 0.5s;
-ms-transition: background 0.5s;
}
#slide-counter .slide-counter-node:hover {
background: rgba(255, 255, 255, 0.8);
}
#slide-counter .slide-counter-node.on {
background: #fff;
}
#credits .role {
font-size: 1.2em;
margin-left: 2em;
margin-top: 1em;
}
#credits .name {
font-family: 'Bebas';
font-size: 1.6em;
margin-left: 1.3em;
}
#credits .sub-name {
font-family: 'Open Sans';
font-size: 0.5em;
}
footer {
height: 100px;
width: 100%;
}
.demo-title {
text-shadow: rgba(25, 25, 25, 0.9) 0.1em 0.1em 0.1em;
font-family: 'Bebas';
font-size: 3.4em;
}
.progress-area {
box-shadow: none;
}
canvas.paused {
opacity: 0.4;
}
.status {
position: relative;
margin: 0 auto;
width: 960px;
height: 600px;
text-align: center;
background: rgba(0, 0, 0, 0.4);
margin-top: 20px;
}
.status-content {
width: 960px; /* Disabling these is a partial workaround for */
height: 600px; /* https://code.google.com/p/chromium/issues/detail?id=147609 */
position: absolute;
display: block;
top: 0;
left: 0;
}
.hide {
display: none;
}
.status .resolution-message {
margin-top: 140px;
margin-bottom: 30px;
}
.status .fullscreen-button {
display: inline-block;
width: 300px;
height: 200px;
box-shadow: 0 2px 10px rgba(25, 25, 25, 0.9);
line-height: 200px;
cursor: pointer;
font-family: 'Bebas';
background: rgb(206,220,231); /* Old browsers */
background: -moz-linear-gradient(top, rgba(206,220,231,1) 0%, rgba(89,106,114,1) 100%); /* FF3.6+ */
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(206,220,231,1)), color-stop(100%,rgba(89,106,114,1))); /* Chrome,Safari4+ */
background: -webkit-linear-gradient(top, rgba(206,220,231,1) 0%,rgba(89,106,114,1) 100%); /* Chrome10+,Safari5.1+ */
background: -o-linear-gradient(top, rgba(206,220,231,1) 0%,rgba(89,106,114,1) 100%); /* Opera 11.10+ */
background: -ms-linear-gradient(top, rgba(206,220,231,1) 0%,rgba(89,106,114,1) 100%); /* IE10+ */
background: linear-gradient(to bottom, rgba(206,220,231,1) 0%,rgba(89,106,114,1) 100%); /* W3C */
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#cedce7', endColorstr='#596a72',GradientType=0 ); /* IE6-9 */
margin-bottom: 2em;
}
.status-content.ingame {
z-index: 30;
margin-top: 200px;
}
.status .loading .preview {
width: 100%;
height: 100%;
}
.status .loading .progress-container {
position: absolute;
width: 100%;
bottom: 10px;
}
.status .loading .progress-container.hide {
display: none;
}
.status .loading {
position: relative;
}
.status .loading progress {
display: block;
width: 600px;
margin: 10px auto;
z-index: 2;
}
.level-title {
text-shadow: rgba(25, 25, 25, 1.0) 0.1em 0.1em 0.1em;
position: absolute;
text-align: right;
font-family: 'Bebas';
font-size: 1.8em;
top: 0;
right: 20px;
}
.level-title span {
font-size: 2.0em;
margin-left: 10px;
position: relative;
top: 0.2em;
}
.preview-content {
width: 100%;
height: 100%;
display: none;
}
.preview-content.show {
display: block;
}
.preview-content.low {
background: url('../assets/screenshots/22.jpg');
background: -moz-linear-gradient(top, rgba(0,0,0,1) 0%, rgba(0,0,0,0.8) 10%, rgba(0,0,0,0.2) 30%, rgba(0,0,0,0) 60%), url('../assets/screenshots/22.jpg');
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(0,0,0,1)), color-stop(10%,rgba(0,0,0,0.8)), color-stop(30%,rgba(0,0,0,0.2)), color-stop(60%,rgba(0,0,0,0))), url('../assets/screenshots/22.jpg');
background: -webkit-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/22.jpg');
background: -o-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/22.jpg');
background: -ms-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/22.jpg');
background: linear-gradient(to bottom, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/22.jpg');
background-position: 5% 5%;
background-size: 110% 110%;
}
.preview-content.medium {
background: url('../assets/screenshots/02.jpg');
background: -moz-linear-gradient(top, rgba(0,0,0,1) 0%, rgba(0,0,0,0.8) 10%, rgba(0,0,0,0.2) 30%, rgba(0,0,0,0) 60%), url('../assets/screenshots/02.jpg');
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(0,0,0,1)), color-stop(10%,rgba(0,0,0,0.8)), color-stop(30%,rgba(0,0,0,0.2)), color-stop(60%,rgba(0,0,0,0))), url('../assets/screenshots/02.jpg');
background: -webkit-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/02.jpg');
background: -o-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/02.jpg');
background: -ms-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/02.jpg');
background: linear-gradient(to bottom, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/02.jpg');
background-position: 5% 5%;
background-size: 110% 110%;
}
.preview-content.high {
background: url('../assets/screenshots/14.jpg');
background: -moz-linear-gradient(top, rgba(0,0,0,1) 0%, rgba(0,0,0,0.8) 10%, rgba(0,0,0,0.2) 30%, rgba(0,0,0,0) 60%), url('../assets/screenshots/14.jpg');
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(0,0,0,1)), color-stop(10%,rgba(0,0,0,0.8)), color-stop(30%,rgba(0,0,0,0.2)), color-stop(60%,rgba(0,0,0,0))), url('../assets/screenshots/14.jpg');
background: -webkit-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/14.jpg');
background: -o-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/14.jpg');
background: -ms-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/14.jpg');
background: linear-gradient(to bottom, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/14.jpg');
background-position: 5% 5%;
background-size: 110% 110%;
}
.preview-content.four {
background: url('../assets/screenshots/03.jpg');
background: -moz-linear-gradient(top, rgba(0,0,0,1) 0%, rgba(0,0,0,0.8) 10%, rgba(0,0,0,0.2) 30%, rgba(0,0,0,0) 60%), url('../assets/screenshots/03.jpg');
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(0,0,0,1)), color-stop(10%,rgba(0,0,0,0.8)), color-stop(30%,rgba(0,0,0,0.2)), color-stop(60%,rgba(0,0,0,0))), url('../assets/screenshots/03.jpg');
background: -webkit-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/03.jpg');
background: -o-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/03.jpg');
background: -ms-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/03.jpg');
background: linear-gradient(to bottom, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/03.jpg');
background-position: 5% 5%;
background-size: 110% 110%;
}
.preview-content.five {
background: url('../assets/screenshots/07.jpg');
background: -moz-linear-gradient(top, rgba(0,0,0,1) 0%, rgba(0,0,0,0.8) 10%, rgba(0,0,0,0.2) 30%, rgba(0,0,0,0) 60%), url('../assets/screenshots/07.jpg');
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(0,0,0,1)), color-stop(10%,rgba(0,0,0,0.8)), color-stop(30%,rgba(0,0,0,0.2)), color-stop(60%,rgba(0,0,0,0))), url('../assets/screenshots/07.jpg');
background: -webkit-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/07.jpg');
background: -o-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/07.jpg');
background: -ms-linear-gradient(top, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/07.jpg');
background: linear-gradient(to bottom, rgba(0,0,0,1) 0%,rgba(0,0,0,0.8) 10%,rgba(0,0,0,0.2) 30%,rgba(0,0,0,0) 60%), url('../assets/screenshots/07.jpg');
background-position: 5% 5%;
background-size: 110% 110%;
}
.preview-content .description {
color: rgb(240, 240, 240);
text-shadow: rgba(0, 0, 0, 1.0) 2px 2px 2px, rgba(0, 0, 0, 1.0) 2px 2px 2px;
font-size: 18pt;
width: 550px;
position: absolute;
top: 5em;
right: 20px;
text-align: left;
padding-top: 10px;
}
.status-content.error {
z-index: 50;
background: rgba(20, 20, 20, 0.9);
}
.status-content.error .title {
font-family: 'Bebas';
font-size: 7em;
}
.status-content.error p a {
font-weight: bold;
}
|
Communicating with patients in ICU.
Intensive care is an area which promotes feelings of high anxiety among patients, relatives and nurses. In such a highly-charged atmosphere, it is easy for communication problems to occur. Chris Turnock explores the reasons for poor nurse-patient communication in intensive care, and suggests that this might be largely due to the insecurities felt by nurses. While there are a number of factors which influence communication, emphasis should be placed on the quality of the nurse-patient interaction. |
Failed Texas Gubernatorial Candidate Wendy Davis Strikes Out Again
Photo: Twitter/@wendydavis
8 Nov 2016
Former Texas State Senator Wendy Davis made a final desperate attempt to turn Texas blue in 2016.
Davis, who failed in her own effort to turn Texas blue when she ran for governor in 2014, has been trying to rally the state’s Democrat voters again this election season. In an Election Day message, Davis tweeted a cartoonish “SuperUterus” Hillary Clinton figure with the message “Who’s ready to elect the first woman president today??? #ImWitHer #ElectionDay”
Despite her more than 182,000 followers on the @wendydavis Twitter account, Davis showed little influence: the tweet had received fewer than 700 retweets as of the time of this publication.
Breitbart Texas reported on Davis’ prior attempts to stir up the base for Democrats in this election cycle. In September Davis proclaimed the historically red state of Texas as being in jeopardy of turning blue.
Davis made a weak attempt to lure dejected Bernie Sanders supporters.
“There are some lingering disappointments there,” she said. “But now is the time for all of us to pull together.” Sanders defeated Clinton in Austin’s Travis County by a 51-49 margin. Clinton did manage to carry the entire state of Texas by a larger margin. Sanders carried 13 counties in the Lone Star State, The New York Times reported.
In October, Breitbart Texas reported it appeared she was beginning to accept the reality that her efforts had yet again failed. The 2014 Democrat gubernatorial candidate seemed to concede defeat by the Democrats this year by making a “wait till next year” kind of statement.
“A lot of people in Texas who are considering running statewide in the future are going to be closely watching what the indications are coming out of this election and re-analyzing the possibilities of when it makes sense to try to launch again a statewide race in Texas,” she said.
Despite her efforts, the latest Real Clear Politics polling average shows Donald Trump with a 12-point lead over Davis’ “SuperUterus” candidate, Hillary Clinton.
Bob Price serves as associate editor and senior political news contributor for Breitbart Texas. He is a founding member of the Breitbart Texas team. Follow him on Twitter @BobPriceBBTX. |
Latest Zimbabwe News Summaries
Police Arrest Three Suspected Robbers
Police in Marondera have arrested three suspected serial car thieves believed to be behind a spate of robberies in Mashonaland East over the past two months. The suspects, Vincent Chikasha (19), Lawrence Marimbizhike (32) and Alfred Dzobo, all from Goromonzi, allegedly targeted pirate taxi drivers. They were arrested after being found in possession of two stolen vehicles.
Provincial police spokesperson Inspector Tendai Mwanza confirmed the arrests, saying investigations were in progress to verify if the suspects were not linked to other vehicle theft cases.
Said Mwanza:
As police we urge pirate taxi drivers to be on alert when hired by strangers. |
Hantavirus: Cleaning Up Rodent-Infested Areas
Topic Overview
Use extreme caution when cleaning rodent-infested areas. Take the following precautions to avoid becoming infected with hantavirus.

Wear rubber or plastic gloves at all times while cleaning.

Before you start cleaning, open all doors and windows for at least 30 minutes. Stay out of the area until the 30 minutes are up. If you are in an enclosed place, you may also want to wear goggles and a respirator with a filter to avoid breathing contaminated dust.

Do not stir up dust by sweeping or vacuuming up droppings, urine, or nesting materials. Instead, wet all areas to be cleaned with a household disinfectant, such as Lysol, or a solution of 1 cup (237 mL) bleach in 10 cups (2.4 L) of water. Make sure you also treat the dead rodents, their nests, and droppings.

Clean up droppings or nest materials with paper towels or rags and place them in plastic bags. Place dead rodents in plastic bags. Seal each bag before throwing it away in the garbage.

If you use a spring-loaded snap trap, you can clean it by pouring very hot water on it.

When you have finished cleaning and disposing of the rodents and other contaminated materials, wash your gloves in disinfectant solution or soap and water before you take them off, then thoroughly wash your hands with soap and water.

If you have an area that is heavily infested with rodents, call a professional exterminator to remove them. You also can contact your local health department for help.
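The disinfectant recipe above keeps a fixed proportion of 1 part bleach to 10 parts water. As a minimal sketch, the same proportion can be scaled to any batch size; the function name and rounding here are my own, not part of the original guidance:

```python
def bleach_solution(total_cups):
    """Split a total volume into (bleach, water) at the stated
    1 part bleach : 10 parts water proportion (11 parts total)."""
    bleach = total_cups / 11.0   # 1 of every 11 parts is bleach
    water = total_cups - bleach  # the remaining 10 parts are water
    return round(bleach, 2), round(water, 2)

# The recipe in the text: 1 cup bleach in 10 cups water, 11 cups total.
print(bleach_solution(11))   # (1.0, 10.0)
# A half batch keeps the same proportion.
print(bleach_solution(5.5))  # (0.5, 5.0)
```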
Heaven Sent
A new showpiece for Roman Catholic education has taken root in the
most unlikely of locations.
The view from Galey Colosimo's office window is one most Catholic
school principals can only imagine. The Skaggs Catholic Center, spread
out over 57 acres in the Salt Lake City suburb of Draper, boasts 75
classrooms, a 1,350-seat auditorium, equestrian trails, a television
studio, a day-care center, and the largest hardwood gym floor in the
state. Statues of saints dot the campus, and a one-hundred-foot cross
rises from a courtyard.
If that isn't enough to command attention, this spanking new
showpiece for Roman Catholic education has taken root in the most
unlikely of locations: a growing, middle-class town just 30 miles south
of Mormon Church headquarters in Salt Lake City.
"This is a one-of-a-kind facility for Catholic education," says
Colosimo, the school's principal. "People for many years have paid a
lot of money to go to Catholic schools that look like they are ready to
burn down. There is something inside those schools that is unique and
hard to turn away from. Here, we have the best in facilities and
people. That is a powerful combination."
And Catholic leaders are hoping people take notice. The school's
24-page, 4-color promotional pamphlet describes the center as the
culmination of "more than 1,500 years of Catholic philosophy and
teaching." Skaggs has hosted visitors from private, parochial, and
public schools about five times a month since it opened a year ago.
Bishop George Niederauer of the Diocese of Salt Lake City calls it a
"Catholic educator's dream."
Salt Lake's Catholic leaders did not set out to build a dream school
when they went looking for new classroom space in 1995. That changed
when multimillionaire Sam Skaggs called and essentially offered a blank
check. Skaggs, once the chairman of American Stores Co., a national
drugstore and grocery chain, contributed $55 million toward a new
school—an unprecedented gift in Catholic elementary and secondary
education.
At the time, Skaggs was a Baptist, though he and his wife, Aline, have
since converted to the Catholic faith. Notoriously private, they
declined to be interviewed for this story. But Skaggs' interest in the
religion is said to have been sparked by Catholic chaplains he met and
admired during his World War II military service. Even before the gift
for the new school, he had bankrolled some of the diocese's community
outreach.
The mogul's $55 million gift gave diocesan officials a rare chance
to think big. "Because they don't have enough money, most Catholic
schools are forced to be utilitarian in their outlook," says the
41-year-old Colosimo, who was a Catholic elementary school principal
before signing on to the Skaggs project. Eager to create the best
possible educational environment, the diocese decided to build a center
that would take students from the cradle to the brink of college.
Today, Skaggs is home to the Guardian Angel Day Care Center, with 200
children ages 6 weeks to 10 years old; St. John the Baptist Elementary
School, with 900 K-8 students; and Juan Diego Catholic High School,
with 250 students in 9th and 10th grades. Junior and senior classes
will be added over the next two years.
The diocese also wanted to create a distinctive campus, one that
honored the Catholic Church's past while preparing students for the
21st century. To that end, Colosimo and Monsignor Terrence Fitzgerald,
the vicar general of the Salt Lake City Diocese and a former
superintendent of Utah Catholic schools, spent a year traveling to
about 100 religious, private, and public schools around the
country.
Officials eventually settled on a circular design, with a courtyard
in the middle—a nod to the church's monastic tradition. The
center has four computer rooms, four science labs, libraries for both
the elementary and high schools, and a computer network that links
parents and teachers.
Sister Karla McKinnie, principal of St. John the Baptist Elementary
and a Catholic school educator for 34 years, is amazed by the facility.
Before coming to Skaggs, she had worked at a Catholic elementary school
in Los Angeles with only eight classrooms. "Most of the schools I have
been in have either been falling apart or in need of constant upkeep,"
she says.
‘We fail if we graduate 4.0 students who are not good
people.’
John Colosimo,
Vice Principal,
Skaggs Catholic Center
Administrators say the center's grand design sends a message that
education is about the development of the whole person—
spiritually, intellectually, emotionally, and physically. "Learning is
not just cognitive. It is about what type of person you are, too," says
Vice Principal John Colosimo, Galey's brother. "We fail if we graduate
4.0 students who are not good people."
This philosophy of learning has been well-received, helping Skaggs
expand the tiny beachhead that Catholic schools have established in
Mormon Utah. About 5,000 of the state's half a million K-12 students
are enrolled in Utah's 10 Catholic elementary schools and three
Catholic high schools, and that number is growing.
The Reverend Terrence Moore, pastor of a church to be built on the
Skaggs campus within the next two years, says that when he arrived in
Utah 32 years ago, Catholics and other religious minorities found
little tolerance. Catholics today feel much more at home, Moore says.
"We are viewed now as full participants in the local society."
Parents have been enthusiastic about the new school, even though its
tuition—$2,400 for grades K-8 and $4,850 for high school—is
slightly higher than the national average for Catholic schools. At one
point last spring, St. John's waiting list brimmed with 1,700
students—about 70 percent of whom were not Catholic.
At 11 o'clock on a Tuesday morning late last spring, the anchors of
Juan Diego News are scanning scripts, brushing loose strands of hair
into place, and checking last-minute camera angles for the live morning
show that will be broadcast to all high school homerooms. From Skaggs'
television studio, the sharply dressed anchors start the show with a
prayer—today is the Feast of St. Peter—and fill their
classmates in on the school's Christian-service requirements and
applications for next year's courses.
Buzzing around the studio with infectious enthusiasm is Patti
Garrison, who helps television production teacher Dan Tucker run the
class. On medical leave from a local CBS-TV affiliate, Garrison and her
husband, both Greek Orthodox, decided to send their children,
15-year-old Ryan and 3-year-old Wyatt, to Skaggs because of the strong
academics and disciplined environment. Having their toddler in day care
on the same campus is both a time saver and a comfort for them.
Garrison was initially anxious about how Ryan would fit into the
Catholic community—only about 20 percent of the high school's
students are non-Catholic. But so far, he hasn't had any problems. She
is most impressed with Skaggs' high expectations, especially compared
with the public schools that Ryan attended. "My son complains that he
has no social life because he has so much homework," Garrison says.
"They demand more, and that is a good thing."
Young people,
school officials say, hunger to explore questions that inevitably
touch on faith.
Ryan, a quiet young man at an age when having his mother in the school
can be slightly embarrassing, likes the fact that he is pushed
here—even if it means some wrinkles in his social calendar. "The
work is hard, but this place is better," he says. "Here they expect
more of you with the schoolwork and how you treat people."
Non-Catholic students are required to attend Mass at the school's
chapel but do not receive communion. School officials say they are not
seeking converts, but they encourage discussions about God, values, and
morality. Young people, they say, hunger to explore questions that
inevitably touch on faith.
Prayer and the spiritual life are "just part of the fabric here,"
says the Reverend Tom O'Mahoney, a religion teacher at the school.
Robert Kealey of the National Catholic Educational Association calls
Skaggs "a model for what can be done in Catholic education across the
country." But similar schools are not likely to be built without the
help of more benefactors like Sam Skaggs. "There are tons of people out
there with big dollars," says Monsignor Fitzgerald. "We hope this
offers them a sense of encouragement."
In the meantime, Skaggs officials are concentrating on shaping an
institution that has no history or traditions. "There are a lot of
firsts, and that is exciting," says Sister Patricia Riley, who heads
the campus-ministry office and recently started a community-service
program.
School leaders are also mulling over Skaggs' growth. The high school
could hold 2,000 students, vice principal John Colosimo says, but that
may not be the ideal size. "There are so many possibilities here," he
says. "You are only limited by your imagination."
|
/*
* This file is part of LibrePlan
*
* Copyright (C) 2011 WirelessGalicia, S.L.
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package org.libreplan.web.subcontract;
import org.libreplan.business.externalcompanies.entities.CommunicationType;
import org.libreplan.business.externalcompanies.entities.CustomerCommunication;
import org.libreplan.business.orders.entities.Order;
import org.libreplan.web.common.IMessagesForUser;
import org.libreplan.web.common.MessagesForUser;
import org.libreplan.web.common.Util;
import org.libreplan.web.planner.tabs.IGlobalViewEntryPoints;
import org.zkoss.zk.ui.Component;
import org.zkoss.zk.ui.event.Events;
import org.zkoss.zk.ui.util.GenericForwardComposer;
import org.zkoss.zkplus.spring.SpringUtil;
import org.zkoss.zul.Button;
import org.zkoss.zul.Checkbox;
import org.zkoss.zul.Grid;
import org.zkoss.zul.Label;
import org.zkoss.zul.Row;
import org.zkoss.zul.RowRenderer;
import org.zkoss.zul.SimpleListModel;
import java.util.List;
import static org.libreplan.web.I18nHelper._;
/**
* Controller for CRUD actions over a {@link CustomerCommunication}.
*
* @author Susana Montes Pedreira <smontes@wirelessgalicia.com>
*/
@SuppressWarnings("serial")
public class CustomerCommunicationCRUDController extends GenericForwardComposer {
private ICustomerCommunicationModel customerCommunicationModel;
private CustomerCommunicationRenderer customerCommunicationRenderer = new CustomerCommunicationRenderer();
protected IMessagesForUser messagesForUser;
private Component messagesContainer;
private Grid listing;
private IGlobalViewEntryPoints globalView;
public CustomerCommunicationCRUDController() {
}
@Override
public void doAfterCompose(Component comp) throws Exception {
super.doAfterCompose(comp);
comp.setAttribute("controller", this);
injectsObjects();
messagesForUser = new MessagesForUser(messagesContainer);
}
private void injectsObjects() {
if ( customerCommunicationModel == null ) {
customerCommunicationModel = (ICustomerCommunicationModel) SpringUtil.getBean("customerCommunicationModel");
}
if ( globalView == null ) {
globalView = (IGlobalViewEntryPoints) SpringUtil.getBean("globalView");
}
}
public void goToEdit(CustomerCommunication customerCommunication) {
if (customerCommunication != null && customerCommunication.getOrder() != null) {
Order order = customerCommunication.getOrder();
globalView.goToOrderDetails(order);
}
}
public FilterCommunicationEnum[] getFilterItems(){
return FilterCommunicationEnum.values();
}
public FilterCommunicationEnum getCurrentFilterItem() {
return customerCommunicationModel.getCurrentFilter();
}
public void setCurrentFilterItem(FilterCommunicationEnum selected) {
customerCommunicationModel.setCurrentFilter(selected);
refreshCustomerCommunicationsList();
}
private void refreshCustomerCommunicationsList(){
// Update the customer communication list
listing.setModel(new SimpleListModel<>(getCustomerCommunications()));
listing.invalidate();
}
protected void save(CustomerCommunication customerCommunication) {
customerCommunicationModel.confirmSave(customerCommunication);
}
public List<CustomerCommunication> getCustomerCommunications() {
FilterCommunicationEnum currentFilter = customerCommunicationModel.getCurrentFilter();
switch(currentFilter) {
case NOT_REVIEWED:
return customerCommunicationModel.getCustomerCommunicationWithoutReviewed();
case ALL:
default:
return customerCommunicationModel.getCustomerAllCommunications();
}
}
public CustomerCommunicationRenderer getCustomerCommunicationRenderer() {
return customerCommunicationRenderer;
}
private class CustomerCommunicationRenderer implements RowRenderer {
@Override
public void render(Row row, Object data, int i) {
CustomerCommunication customerCommunication = (CustomerCommunication) data;
row.setValue(customerCommunication);
final CommunicationType type = customerCommunication.getCommunicationType();
if(!customerCommunication.getReviewed()){
row.setSclass("communication-not-reviewed");
}
appendLabel(row, toString(type));
appendLabel(row, customerCommunication.getOrder().getName());
appendLabel(row, Util.formatDate(customerCommunication.getDeadline()));
appendLabel(row, customerCommunication.getOrder().getCode());
appendLabel(row, customerCommunication.getOrder().getCustomerReference());
appendLabel(row, Util.formatDateTime(customerCommunication.getCommunicationDate()));
appendCheckbox(row, customerCommunication);
appendOperations(row, customerCommunication);
}
private String toString(Object object) {
if (object == null) {
return "";
}
return object.toString();
}
private void appendLabel(Row row, String label) {
row.appendChild(new Label(label));
}
private void appendCheckbox(final Row row, final CustomerCommunication customerCommunication) {
final Checkbox checkBoxReviewed = new Checkbox();
checkBoxReviewed.setChecked(customerCommunication.getReviewed());
checkBoxReviewed.addEventListener(Events.ON_CHECK,
arg0 -> {
customerCommunication.setReviewed(checkBoxReviewed.isChecked());
save(customerCommunication);
updateRowClass(row,checkBoxReviewed.isChecked());
});
row.appendChild(checkBoxReviewed);
}
private void updateRowClass(final Row row, Boolean reviewed){
row.setSclass("");
if(!reviewed){
row.setSclass("communication-not-reviewed");
}
}
private void appendOperations(Row row, final CustomerCommunication customerCommunication) {
Button buttonEdit = new Button();
buttonEdit.setSclass("icono");
buttonEdit.setImage("/common/img/ico_editar1.png");
buttonEdit.setHoverImage("/common/img/ico_editar.png");
buttonEdit.setTooltiptext(_("Edit"));
buttonEdit.addEventListener(Events.ON_CLICK, arg0 -> goToEdit(customerCommunication));
row.appendChild(buttonEdit);
}
}
/**
* Apply filter to customers communications.
*/
public void onApplyFilter() {
refreshCustomerCommunicationsList();
}
}
|
Q:
Jquery - fadeIn/fadeOut flicker on rollover
I am using the following code to achieve a fadeIn/fadeOut effect on rollover/rollout of its parent div.
$('.rollover-section').hover(function(){
$('.target', this).stop().fadeIn(250)
}, function() {
$('.target', this).stop().fadeOut(250)
})
It works correctly when I rollover the div and out slowly. However if I move my mouse over and then off the div quickly, it breaks the effect. The target div seems to get stuck at an opacity between 0 and 1.
What confuses me is that when I use the following code it works perfectly.
$('.rollover-section').hover(function(){
$('.target', this).stop().animate({
opacity: 1
}, 250);
}, function() {
$('.target', this).stop().animate({
opacity:0
}, 250);
})
So, I have two questions.
1 - why is my first code block behaving as it does?
2 - What is the difference between fadeIn()/fadeOut() and animating the opacity?
A:
I've put my answer from the comments here:
Just use the animate example you have there. The fade version misbehaves because .stop() halts the fade at a partial opacity, and a subsequent fadeIn() treats that partial value as the element's natural "full" opacity, so it never fades all the way back in. See this question for a fuller explanation:
jQuery fade flickers
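A self-contained toy model of that stuck state (illustrative only — it mirrors jQuery's documented .stop() caveat, not jQuery's actual internals):

```javascript
// Toy model: `fadeInToEnd` animates toward a remembered "shown" opacity,
// while `animateOpacityToEnd` always targets 1. This is the documented
// difference that makes the fadeIn version get stuck after .stop().
function createTarget() {
  return { opacity: 0, rememberedShownOpacity: 1 };
}

// Simulates .stop() interrupting a fade: the element freezes at a partial
// opacity, which gets recorded as its new "full" opacity.
function stopMidFade(el, partialOpacity) {
  el.opacity = partialOpacity;
  el.rememberedShownOpacity = partialOpacity;
}

// fadeIn-style: animate toward the remembered value (may be < 1).
function fadeInToEnd(el) {
  el.opacity = el.rememberedShownOpacity;
}

// animate({opacity: 1})-style: always animate toward fully opaque.
function animateOpacityToEnd(el) {
  el.opacity = 1;
}

const stuck = createTarget();
stopMidFade(stuck, 0.4);
fadeInToEnd(stuck);
console.log(stuck.opacity); // stuck below 1

const fine = createTarget();
stopMidFade(fine, 0.4);
animateOpacityToEnd(fine);
console.log(fine.opacity); // reaches 1
```

In real jQuery code the usual fixes are .stop(true, true) (jump to the end of the interrupted animation before starting the new one) or .fadeTo(250, 1) / .fadeTo(250, 0), which, like .animate(), always target an explicit opacity.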
|
Q:
Disable Gzip Compression in .htaccess file
I have tried using the following code but my website is still compressed using gzip:
SetOutputFilter DEFLATE
SetEnvIfNoCase Request_URI \.(html?|txt|css|js|php|pl)$ no-gzip dont-vary
How can I disable gzip compressions?
A:
Put this code in the .htaccess file and save it. It worked for me and disabled gzip compression. (Also remove the SetOutputFilter DEFLATE line from your file, since that directive is what enables the compression in the first place.)
RewriteRule ^(.*)$ $1 [NS,E=no-gzip:1,E=dont-vary:1]
Thanks
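If mod_rewrite isn't available on your host, a commonly used alternative sketch (assuming the compression comes from mod_deflate; adjust if your host uses something else) is to set the no-gzip environment variable unconditionally:

```apache
# Disable mod_deflate for everything served from this directory.
<IfModule mod_deflate.c>
    SetEnv no-gzip 1
    SetEnv dont-vary 1
</IfModule>
```

SetEnv comes from mod_env; dont-vary keeps Apache from adding a Vary header for the now-disabled filter.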
|
Warriors head coach Steve Kerr turned to politics in his pregame meeting with the media on Sunday, discussing the state of the nation's psyche in the wake of Saturday's shooting in Pittsburgh.
"It's easy to feel how broken we are as a country right now," Kerr told the media at the Barclays Center in Brooklyn. "Everybody can have influence, not just our political leaders, but people who are either well-known figures who have the camera in their face a lot, or average citizens being kinder to each other."
Kerr implored citizens to take part in the upcoming midterm elections, and he did not shy away from politics, either. The fifth-year head coach said he will "vote for any candidate that will stand up to the NRA," voicing his concern over gun violence in America.
Watch Kerr's full comments below.
Kerr hasn't been hesitant to discuss politics in the past. He has criticized President Trump numerous times and is an active political voice on Twitter.
The Warriors face the Nets tonight, looking to win their fourth straight. The matchup is slated to begin at 5:00 p.m. ET. |
123 Cal.Rptr.2d 40 (2002)
28 Cal.4th 828
50 P.3d 751
Betty Jean MYERS, Plaintiff and Appellant,
v.
PHILIP MORRIS COMPANIES, INC., et al., Defendants and Respondents.
No. S095213.
Supreme Court of California.
August 5, 2002.
*42 Bourdette & Partners, Philip C. Bourdette and Andre P. Gaston, Visalia, for Plaintiff and Appellant.
Wartnick, Chaber, Harowitz & Tigerman, Harry F. Wartnick, Madelyn J. Chaber, San Francisco; Law Offices of Daniel U. Smith, Daniel U. Smith, Los Angeles, and Ted W. Pelletier for Patricia Henley, Leslie Whiteley and Leonard Whiteley as Amici Curiae on behalf of Plaintiff and Appellant.
Bill Lockyer, Attorney General, Richard M. Frank, Chief Assistant Attorney General, Dennis Eckhart, Assistant Attorney General, and Peter M. Williams, Deputy Attorney General, as Amici Curiae on behalf of Plaintiff and Appellant.
Howard, Rice, Nemerovski, Canady, Falk & Rabkin and H. Joseph Escher III, San Francisco, for Defendant and Respondent R.J. Reynolds Tobacco Company.
Munger, Tolles & Olson, Michael R. Doyen, Fred A. Rowley, Jr., Daniel P. Collins, Los Angeles, and Ronald L. Olsen for Defendant and Respondent Philip Morris Incorporated.
Sedgwick, Detert, Moran & Arnold and Frederick D. Baker, San Francisco, for Defendant and Respondent Brown & Williamson Tobacco Corporation.
William L. Gausewitz for The Alliance of American Insurers, The American Insurance Association, The National Association of Independent Insurers, The National Association of Mutual Insurance Companies and The Reinsurance Association of America as Amici Curiae on behalf of Defendants and Respondents.
Fred Main for California Chamber of Commerce as Amicus Curiae on behalf of Defendants and Respondents.
*41 KENNARD, J.
In 1995, the California Legislature found that "[t]obacco-related disease places a tremendous financial burden upon the persons with the disease, their families, the health care delivery system, and society as a whole," and that "California spends five billion six hundred million dollars ($5,600,000,000) a year in direct and indirect costs on smoking-related illnesses." (Health & Saf.Code, § 104350, subd. (a)(7).) To obtain compensation for the physical and mental suffering and staggering expenses inflicted by tobacco-related illness, users of tobacco products and their families have sought relief in our courts through product liability lawsuits against manufacturers and sellers of tobacco products. In dealing with those lawsuits, courts have not been free to apply ordinary principles of tort law because, as we shall explain, the Legislature has enacted statutes that directly control the extent to which our courts may award damages against tobacco companies in product liability actions.
The statutes at issue are two successive versions of section 1714.45 of California's Civil Code.[1] The first version, which we here sometimes refer to as the Immunity Statute, granted tobacco companies complete immunity in certain product liability lawsuits as of January 1, 1988.[2] (Added *43 by Stats.1987, ch. 1498, § 3, p. 5778.) The second version, which we here sometimes refer to as the Repeal Statute, rescinded that immunity 10 years later on January 1, 1998. (Stats.1997, ch. 570, § 1.) The United States Court of Appeals for the Ninth Circuit has certified to us a question asking whether the Repeal Statute governs "a claim that accrued after January 1, 1998, but which is based on conduct that occurred prior to January 1, 1998." (Myers v. Philip Morris Companies, Inc. (9th Cir.2001) 239 F.3d 1029, 1030 (Myers).)
Our answer is this: The Immunity Statute applies to certain statutorily described conduct of tobacco companies that occurred during the 10-year immunity period, which began on January 1, 1988, and ended on December 31, 1997. With respect to such conduct, therefore, the statutory immunity applies, and no product liability cause of action may be based on that conduct, regardless of when the users of the tobacco products may have sustained or discovered injuries as a result of that conduct. That statutory immunity was rescinded, however, when the California Legislature enacted the Repeal Statute, which as of January 1, 1998, restored the general principles of tort law that had, until the 1988 enactment of the Immunity Statute, governed tort liability against tobacco companies. Therefore, with respect to conduct falling outside the 10-year immunity period, the tobacco companies are not shielded from product liability lawsuits.
I. FACTS
The Court of Appeals for the Ninth Circuit described the background of this case as follows: "Betty Jean Myers began smoking cigarettes in 1956 and continued to smoke heavily until 1997. Throughout this period, and until August of 1998, she also worked and lived in environments in which those around her smoked cigarettes. On April 8, 1998, Myers was diagnosed with lung cancer allegedly caused by her exposure to tobacco. On March 4, 1999, Myers filed a complaint in Tulare County Superior Court against Philip Morris and other defendant tobacco manufacturers (collectively, the `Tobacco Manufacturers') alleging several claims, including strict liability, negligence, breach of implied warranties, fraud, and negligent misrepresentation." (Myers, supra, 239 F.3d at p. 1030.)
The Ninth Circuit's description continues: "After removing this case to the United States District Court for the Eastern District of California, the Tobacco Manufacturers moved, on April 13, 1999, to dismiss Myers's complaint for failure to state a claim. On May 25, 1999, the district court granted the motion to dismiss, with leave to amend, on the ground that Cal. Civ.Code § 1714.45 barred Myers's actions for any injuries incurred prior to January 1998. On June 30, 1999, Myers amended her complaint to allege that she was exposed to secondhand cigarette smoke between January 1, 1998 and April 8, 1998. On July 19, 1999, the Tobacco Manufacturers again moved to dismiss Myers's complaint for failure to state a claim. On October 6, 1999, the district court again dismissed Myers's complaint for failure to state [a] claim, this time without leave to amend, on the grounds that she had conceded that her lung cancer was not caused by her exposure to secondhand smoke after January 1, 1998, and, *44 again, that pre 1998 exposures were not actionable. Myers filed a timely notice of appeal to the Ninth Circuit." (Myers, supra, 239 F.3d at p. 1031.)
II. BACKGROUND
We start with a review of the Immunity Statute and two California cases that have construed that statute, American Tobacco Co. v. Superior Court (1989) 208 Cal. App.3d 480, 255 Cal.Rptr. 280, a decision of the state Court of Appeal, and Richards v. Owens-Illinois, Inc. (1997) 14 Cal.4th 985, 60 Cal.Rptr.2d 103, 928 P.2d 1181, a decision of this court.
A. The Immunity Statute
Enacted as part of the Willie L. Brown, Jr.-Bill Lockyer Civil Liability Reform Act of 1987, former section 1714.45 (the Immunity Statute) provided in full:
"(a) In a product liability action, a manufacturer or seller shall not be liable if:
"(1) The product is inherently unsafe and the product is known to be unsafe by the ordinary consumer who consumes the product with the ordinary knowledge common to the community; and
"(2) The product is a common consumer product intended for personal consumption, such as sugar, castor oil, alcohol, tobacco, and butter, as identified in comment i to Section 402A of the Restatement (Second) of Torts.
"(b) For purposes of this section, the term `product liability action' means any action for injury or death caused by a product, except that the term does not include an action based on a manufacturing defect or breach of an express warranty.
"(c) This section is intended to be declarative of and does not alter or amend existing California law, including Cronin v. J.B.E. Olson Corp., (1972) 8 Cal.3d 121, [104 Cal.Rptr. 433, 501 P.2d 1153], and shall apply to all product liability actions pending on, or commenced after, January 1, 1988." (Stats.1987, ch. 1498, § 3, pp. 5778-5779, italics added.)
We now discuss the two California decisions that have interpreted the Immunity Statute.
1. American Tobacco Co. v. Superior Court
The state Court of Appeal's 1989 decision in American Tobacco Co. v. Superior Court, supra, 208 Cal.App.3d 480, 255 Cal. Rptr. 280 (American Tobacco), which was authored by Presiding Justice J. Anthony Kline, was the first to construe the Immunity Statute. In that case, the court described the Immunity Statute as the result of a "`peace pact'" or "compromise between parties seeking and opposing comprehensive changes in California tort law who had been locked in a long political struggle that had reached [a] stalemate." (American Tobacco, supra, at pp. 486-487, 255 Cal.Rptr. 280.) Those involved included major special interest groups such as insurers, physicians, manufacturers, and the plaintiffs' lawyers. (Id. at p. 486, 255 Cal.Rptr. 280.)[3] The Court of Appeal in American Tobacco characterized the Immunity Statute as so "poorly drafted" that "on its face[, it was] amenable to two diametrically opposed interpretations, each of *45 which conflict[ed] in some way with the words" used by the Legislature. (American Tobacco, supra, at p. 485, 255 Cal. Rptr. 280.) But legislative history, the court noted, indicated that the Immunity Statute's intent was to ensure that "`high-cholesterol foods, alcohol, and cigarettes that are inherently unsafe and known to be unsafe by ordinary consumers, [were] not to be subject to product liability lawsuits.' " (American Tobacco, supra, at p. 487, 255 Cal.Rptr. 280, italics added.) In light of that legislative intent, the Court of Appeal in American Tobacco concluded that the statutory immunity was very broad, providing "nearly complete" immunity for manufacturers and sellers of tobacco and the other enumerated products. (Ibid.)
2. Richards v. Owens-Illinois, Inc.
In 1997, some eight years after the Court of Appeal's decision in American Tobacco, supra, 208 Cal.App.3d 480, 255 Cal.Rptr. 280, we construed the Immunity Statute in Richards v. Owens-Illinois, Inc., supra, 14 Cal.4th 985, 60 Cal.Rptr.2d 103, 928 P.2d 1181 (Richards). Because Richards is central to answering the question the Ninth Circuit has certified to us, we discuss it in some detail.
The plaintiff in Richards, supra, 14 Cal.4th 985, 60 Cal.Rptr.2d 103, 928 P.2d 1181, was a former shipyard worker who sued several asbestos manufacturers, claiming that his exposure to asbestos fibers at various shipyard jobs had caused him to develop asbestosis, a severe respiratory injury. At trial, one asbestos manufacturer, defendant Owens-Illinois, Inc., presented evidence that the plaintiff had contributed to the development of his respiratory injury by smoking for more than 40 years. (Id. at p. 990, 60 Cal.Rptr.2d 103, 928 P.2d 1181.) The issue before this court was whether the trial court should have allowed Owens-Illinois to present its so-called tobacco company defense, which would have required the jury, in determining fault with respect to noneconomic damages (to compensate the plaintiff for pain and suffering), to apportion to tobacco companies some percentage of fault, thereby reducing the percentage of the noneconomic damages award attributable to Owens-Illinois. (Id. at p. 991, 60 Cal.Rptr.2d 103, 928 P.2d 1181.)
To decide this question, we considered the interplay between the Immunity Statute and section 1431 et seq., enacted by the California electorate in an initiative known as Proposition 51. Proposition 51 provided in part that "in a tort action governed by principles of comparative fault, a defendant shall not be jointly liable for the plaintiff's `non-economic damages,' but shall only be severally liable for such damages `in direct proportion to that defendant's percentage of fault.'" (Richards, supra, 14 Cal.4th at p. 988, 60 Cal. Rptr.2d 103, 928 P.2d 1181, quoting § 1431.2, subd. (a).) The specific question we addressed in Richards was this: "To the extent [the Immunity Statute] protects tobacco companies from direct `liab[ility]' for harm caused by smoking, does it also preclude the allocation of proportionate 'fault' to absent tobacco companies in a smoker's suit for asbestos-related lung injury, in order to reduce the `non-economic' damages payable by the asbestos defendant under Proposition 51?" (Richards, supra, 14 Cal.4th at p. 988, 60 Cal.Rptr.2d 103, 928 P.2d 1181.)
We explained: "Though the [Immunity Statute] states only an exemption from direct `liab[ility]' where specified conditions are met, the express premise which justifies this immunity is of a broader nature. This premise is that suppliers of certain products which are `inherently unsafe,' but which the public wishes to have *46 available despite awareness of their dangers, should not be responsible in tort for resulting harm to those who voluntarily consumed the products despite such knowledge. With respect to injuries meeting the statute's requirements, that principle precludes the assignment of legal 'fault' to such suppliers in all contexts, including suits from which they are absent by law." (Richards, supra, 14 Cal.4th at p. 1002, 60 Cal.Rptr.2d 103, 928 P.2d 1181, italics added, original italics omitted.)
We pointed out that the Immunity Statute drew "its express inspiration from product liability principles" set forth in comment i to section 402A of the Restatement Second of Torts (Restatement), and these principles provided the premise for the statute. (Richards, supra, 14 Cal.4th at p. 999, 60 Cal.Rptr.2d 103, 928 P.2d 1181.)
We observed: "Section 402A of the Restatement proposes generally that when a manufacturer or distributor sells a product 'in a defective condition unreasonably dangerous to the user or consumer' ([Rest.] p. 347, italics added), and the product reaches that person, as expected and intended, without substantial change in its condition, the seller is `subject to liability' for physical harm `thereby caused to the ultimate user or consumer.' [¶] However, comment i asserts an important qualification of the general rule.... [It] makes clear that ... `[t]he rule [of liability] applies only where the defective condition of the product makes it unreasonably dangerous to the user or consumer.' (Restatement, p. 352, italics added.) As comment i then explains, `[m]any products cannot possibly be made entirely safe for all consumption,' but if a product is pure and unadulterated, its inherent or unavoidable danger, commonly known to the community which consumes it anyway, does not expose the seller to liability for resulting harm to a voluntary user." (Richards, supra, 14 Cal.4th at p. 999, 60 Cal.Rptr.2d 103, 928 P.2d 1181.)
Richards added: "The clear premise of comment i is that no `liability' arises [for the manufacture or distribution of a product that in its pure and unadulterated form poses for its voluntary users an inherent and unavoidable danger] because there is no sound basis for liability. In other words, comment i posits, a manufacturer or seller breaches no legal duty to voluntary consumers by merely supplying, in unadulterated form, a common commodity which cannot be made safer, but which the public desires to buy and ingest despite general understanding of its inherent dangers." (Richards, supra, 14 Cal.4th at p. 1000, 60 Cal.Rptr.2d 103, 928 P.2d 1181.)
Because it would have been "anomalous" for a supplier of tobacco products, "though immunized ... from direct liability for providing an `inherently unsafe' product to a knowing and voluntary consumer ... [to] nonetheless be assigned `fault' for doing so in an action between that same consumer and a third party defendant," we held in Richards "that to the extent [the Immunity Statute] afford[ed] tobacco suppliers immunity from `liab[ility]' in direct actions against them, on grounds that the immunized conduct breache[d] no duty and constitut[ed] no tort, the statute also preclude[d] indirect assignment of comparative 'fault' ... to such entities for purposes of Proposition 51." (Richards, supra, 14 Cal.4th at pp. 1000-1001, 60 Cal. Rptr.2d 103, 928 P.2d 1181, italics added.)
In short, our unanimous decision in Richards, supra, 14 Cal.4th 985, 60 Cal. Rptr.2d 103, 928 P.2d 1181, made clear that between January 1, 1988, and December 31, 1997, when the Immunity Statute was in effect, supplying pure and unadulterated tobacco products to knowing and voluntary consumers of those products was *47 not subject to tort liability because it breached no legal duty and thus constituted no tort.
B. The Repeal Statute
Ten years after enactment of the Immunity Statute and some eight months after our decision in Richards, supra, 14 Cal.4th 985, 60 Cal.Rptr.2d 103, 928 P.2d 1181, the California Legislature enacted the Repeal Statute, which amended former section 1714.45 to rescind the statutory immunity for tobacco companies as of January 1, 1998.[4] When enacted, the Repeal Statute provided, and with one minor change still provides:
"(a) In a product liability action, a manufacturer or seller shall not be liable if both of the following apply:
"(1) The product is inherently unsafe and the product is known to be unsafe by the ordinary consumer who consumes the product with the ordinary knowledge common to the community.
"(2) The product is a common consumer product intended for personal consumption, such as sugar, castor oil, alcohol, and butter, as identified in comment i to Section 402A of the Restatement (Second) of Torts.
"(b) This section does not exempt the manufacture or sale of tobacco products by tobacco manufacturers and their successors in interest from product liability actions, but does exempt the sale or distribution of tobacco products by any other person, including, but not limited to, retailers or distributors.
"(c) For purposes of this section, the term `product liability action' means any action for injury or death caused by a product, except that the term does not include an action based on a manufacturing defect or breach of an express warranty.
"(d) This section is intended to be declarative of and does not alter or amend existing California law, including Cronin v. J.B.E. Olson Corp., (1972), 8 Cal.3d 121, [104 Cal.Rptr. 433, 501 P.2d 1153], and shall apply to all product liability actions pending on, or commenced after, January 1, 1988.
"(e) This section does not apply to, and never applied to, an action brought by a public entity to recover the value of benefits provided to individuals injured by a tobacco-related illness caused by the tortious conduct of a tobacco company or its successor in interest, including, but not limited to, an action brought pursuant to Section 14124.71 of the Welfare and Institutions Code. In such an action brought by a public entity, the fact that the injured individual's claim against the defendant may be barred by a prior version of this section shall not be a defense. This subdivision does not constitute a change in, but is declaratory of, existing law relating to tobacco products.
"(f) It is the intention of the Legislature in enacting the amendments to subdivisions (a) and (b) of this section adopted at the 1997-98 Regular Session to declare that there exists no statutory bar to tobacco-related personal injury, *48 wrongful death, or other tort claims against tobacco manufacturers and their successors in interest by California smokers or others who have suffered or incurred injuries, damages, or costs arising from the promotion, marketing, sale, or consumption of tobacco products. It is also the intention of the Legislature to clarify that such claims which were or are brought shall be determined on their merits, without the imposition of any claim of statutory bar or categorical defense.
"(g) This section shall not be construed to grant immunity to a tobacco industry research organization." (§ 1714.45, as amended by Stats.1997, ch. 570, § 1, italics added.)[5]
Through subdivisions (b) and (f), the Repeal Statute expressly rescinded the statutory immunity from product liability lawsuits for tobacco companies that the Legislature had allowed 10 years earlier. Although the Repeal Statute retained the Immunity Statute's reference to comment i to section 402A of the Restatement negating liability to voluntary users of certain common but inherently unsafe consumer products, the Repeal Statute in subdivision (a) omitted tobacco products from specified "inherently unsafe" products.
III. DISCUSSION
We now turn to the question that the Ninth Circuit has asked us to decide: "Do the amendments to Cal. Civ.Code § 1714.45 that became effective on January 1, 1998, apply to a claim that accrued after January 1, 1998, but which is based on conduct that occurred prior to January 1, 1998?" To answer this question we must determine how the Repeal Statute affects product liability suits against tobacco companies based on their activities as manufacturers and sellers of tobacco products before, during, and after the statutory immunity period.
In certifying the question, the Ninth Circuit noted plaintiff's contention that applying the Repeal Statute in her case would be a prospective rather than a retroactive application of that law because she was diagnosed with cancer on April 8, 1998, three months after January 1, 1998, the date on which the Repeal Statute took effect. We address that issue below.
A. Whether Applying the Repeal Statute to Defendant Tobacco Companies' Conduct During the Immunity Period Would be a Prospective or a Retroactive Application of That Statute
As we said more than 50 years ago, a retroactive or retrospective law "`is one which affects rights, obligations, acts, transactions and conditions which are performed or exist prior to the adoption of the statute.'" (Aetna Cas. & Surety Co. v. Ind. Acc. Com. (1947) 30 Cal.2d 388, 391, 182 P.2d 159; accord, Evangelatos v. Superior Court (1988) 44 Cal.3d 1188, 1206, 246 Cal.Rptr. 629, 753 P.2d 585 (Evangelatos ).) Similarly, the United States Supreme Court has stated: "`[E]very statute, which takes away or impairs vested rights acquired under existing laws, or creates a new obligation, imposes a new duty, or attaches a new disability, in respect to transactions or considerations already past, must be deemed retrospective.'" *49 (Landgraf v. USI Film Products (1994) 511 U.S. 244, 269, 114 S.Ct. 1483, 128 L.Ed.2d 229.) Phrased another way, a statute that operates to "increase a party's liability for past conduct" is retroactive. (Id. at p. 280, 114 S.Ct. 1483; Evangelatos, supra, 44 Cal.3d at p. 1206, 246 Cal.Rptr. 629, 753 P.2d 585.)
As we explained earlier, while the Immunity Statute was in effect (from January 1, 1988, through December 31, 1997), no tortious liability attached to a tobacco company's production and distribution of pure and unadulterated tobacco products to smokers. (Former § 1714.45, as added by Stats. 1987, ch. 1498, § 3, p. 5778; Richards, supra, 14 Cal.4th at p. 1001, 60 Cal.Rptr.2d 103, 928 P.2d 1181.) But on January 1, 1998, the California Legislature, through its enactment of the Repeal Statute, terminated that statutory immunity. (§ 1714.45.) Here, plaintiff started smoking in 1956 and, some 41 years later, quit smoking in 1997. But during 10 of those 41 years, from January 1, 1988, to December 31, 1997, because of the Immunity Statute, the manufacture and sale of covered products were not tortious. Accordingly, to have the Repeal Statute govern product liability suits against tobacco companies for supplying tobacco products to smokers during the immunity period would indeed be a retroactive application of that statute because it could subject those companies to "liability for past conduct" (Landgraf v. USI Film Products, supra, 511 U.S. at p. 280, 114 S.Ct. 1483; see also Evangelatos, supra, 44 Cal.3d at p. 1206, 246 Cal.Rptr. 629, 753 P.2d 585) that was lawful during the immunity period (Richards, supra, 14 Cal.4th at p. 1001, 60 Cal.Rptr.2d 103, 928 P.2d 1181). Such retroactive application is impermissible unless there is an express intent of the Legislature to do so. We explore that issue below.
B. Whether the California Legislature Expressed an Intent That the Repeal Statute Govern Tobacco Companies' Liability During the Immunity Period
Generally, statutes operate prospectively only. In the words of section 3 of California's Civil Code: "No part of [this code] is retroactive, unless expressly so declared." (Italics added.) The Ninth Circuit, in certifying the question to us, cited our decision in Evangelatos, supra, 44 Cal.3d at page 1208, 246 Cal.Rptr. 629, 753 P.2d 585, when noting defendant tobacco companies' contention that "a retroactive application [of a statute] requires either `express language or clear and unavoidable implication' from the California Legislature." (Myers, supra, 239 F.3d at p. 1032.) On that point, defendants are right. We explained in Evangelatos: "`"[T]he first rule of [statutory] construction is that legislation must be considered as addressed to the future, not to the past.... The rule has been expressed in varying degrees of strength but always of one import, that a retrospective operation will not be given to a statute which interferes with antecedent rights ... unless such be `the unequivocal and inflexible import of the terms, and the manifest intention of the legislature.'"'" (Evangelatos, supra, 44 Cal.3d at p. 1207, 246 Cal.Rptr. 629, 753 P.2d 585, quoting United States v. Security Industrial Bank (1982) 459 U.S. 70, 78-79, 103 S.Ct. 407, 74 L.Ed.2d 235, italics omitted.) In the words of the United States Supreme Court, "The `principle that the legal effect of conduct should ordinarily be assessed under the law that existed when the conduct took place has timeless and universal appeal.'" (Landgraf v. USI Film Products, supra, 511 U.S. at p. 265, 114 S.Ct. 1483; accord, Hughes Aircraft Co. v. U.S. ex rel. Schumer *50 (1997) 520 U.S. 939, 946, 117 S.Ct. 1871, 138 L.Ed.2d 135.)
As the United States Supreme Court has consistently stressed, the presumption that legislation operates prospectively rather than retroactively is rooted in constitutional principles: "In a free, dynamic society, creativity in both commercial and artistic endeavors is fostered by a rule of law that gives people confidence about the legal consequences of their actions. [¶] It is therefore not surprising that the antiretroactivity principle finds expression in several provisions of our Constitution. The Ex Post Facto clause flatly prohibits retroactive application of penal legislation .... The Fifth Amendment's Takings Clause[, and] [t]he Due Process Clause also protect[ ] the interests in fair notice and repose that may be compromised by retroactive legislation; a justification sufficient to validate a statute's prospective application under the [Due Process] Clause `may not suffice' to warrant its retroactive application." (Landgraf v. USI Film Products, supra, 511 U.S. at pp. 265-266, 114 S.Ct. 1483, italics added; accord, I.N.S. v. St. Cyr (2001) 533 U.S. 289, 316, 121 S.Ct. 2271, 150 L.Ed.2d 347.)
Just as federal courts apply the time-honored legal presumption that statutes operate prospectively "unless Congress has clearly manifested its intent to the contrary" (Hughes Aircraft Co. v. U.S. ex rel. Schumer, supra, 520 U.S. at p. 946, 117 S.Ct. 1871), so too California courts comply with the legal principle that unless there is an "express retroactivity provision, a statute will not be applied retroactively unless it is very clear from extrinsic sources that the Legislature ... must have intended a retroactive application" (Evangelatos, supra, 44 Cal.3d at p. 1209, 246 Cal.Rptr. 629, 753 P.2d 585, italics added). California courts apply the same "general prospectivity principle" as the United States Supreme Court. (Id. at p. 1208, 246 Cal.Rptr. 629, 753 P.2d 585.) Under this formulation, a statute's retroactivity is, in the first instance, a policy determination for the Legislature and one to which courts defer absent "some constitutional objection" to retroactivity. (Western Security Bank v. Superior Court (1997) 15 Cal.4th 232, 244, 62 Cal.Rptr.2d 243, 933 P.2d 507.) But "a statute that is ambiguous with respect to retroactive application is construed ... to be unambiguously prospective." (I.N.S. v. St. Cyr, supra, 533 U.S. at pp. 320-321, fn. 45, 121 S.Ct. 2271; Lindh v. Murphy (1997) 521 U.S. 320, 328, fn. 4, 117 S.Ct. 2059, 138 L.Ed.2d 481 ["`retroactive' effect adequately authorized by a statute" only when statutory language was "so clear that it could sustain only one interpretation"].)
1. Repeal Statute has no express retroactivity language
In contending that the Repeal Statute is not retroactive, defendant tobacco companies contrast it with other California statutes that our Legislature has expressly made retroactive. (See § 1646.5 ["This section applies to contracts ... entered into before, on, or after its effective date; it shall be fully retroactive ..." (italics added)]; Gov.Code, § 9355.8 ["This section shall have retroactive application ..." (italics added) ].)
In addition, defendants point to language in the Repeal Statute's subdivision (e) as a clear indication that the Legislature did not intend the Repeal Statute to be retroactive. (§ 1714.45, subd. (e).) That provision incorporates the substance of an earlier amendment to the Immunity Statute, the Public Entity Amendment (see fn. 4, ante), which was intended to allow public entities to sue tobacco companies notwithstanding the statutory immunity. When first enacted, the Public Entity *51 Amendment added to the Immunity Statute a provision, subdivision (d), which stated that in an "action brought by a public entity, the fact that the injured individual's claim against the defendant may be barred by this section shall not be a defense." (Former § 1714.45, subd. (d), as added by Stats.1997, ch. 25, § 2, italics added.) But when that provision became subdivision (e) of the Repeal Statute, the Legislature rephrased it to state that a public entity's suit against a tobacco company would not be precluded by "the fact that the injured individual's claim against the defendant may be barred by a prior version of this section." (§ 1714.45, subd. (e), as added by Stats.1997, ch. 570, § 1, italics added.) The italicized language suggests that even after the January 1, 1998, effective date of the Repeal Statute, "a prior version" of that statute, namely the Immunity Statute, may continue to bar claims against tobacco companies brought by individual smokers.
Plaintiff insists that certain phrases in the Repeal Statute are express legislative declarations of retroactivity notwithstanding the absence of the term "retroactive" in that provision. When enacted, subdivision (f) of the Repeal Statute provided: "It is the intention of the Legislature in enacting the amendments to subdivisions (a) and (b) of this section adopted at the 1997-98 Regular Session to declare that there exists no statutory bar to tobacco-related personal injury, wrongful death, or other tort claims against tobacco manufacturers and their successors in interest by California smokers or others who have suffered or incurred injuries, damages, or costs arising from the promotion, marketing, sale, or consumption of tobacco products. It is also the intention of the Legislature to clarify that such claims which were or are brought shall be determined on their merits, without the imposition of any claim of statutory bar or categorical defense." (§ 1714.45, subd. (f), as added by Stats. 1997, ch. 570, § 1, italics added.)
Focusing on the italicized phrases in isolation, plaintiff asserts, as does the dissent, that they are express declarations of the California Legislature's intent to retroactively apply the statute repealing the tobacco companies' immunity from products liability lawsuits brought by smokers. We are not persuaded. Neither the italicized phrases nor section 1714.45, subdivision (f) as a whole states anything more than that the Repeal Statute rescinded any former statutory immunity for tobacco companies. But even were we to accept that proposed reading of subdivision (f), the Repeal Statute is, at best, ambiguous on the question of retroactivity because of the Legislature's reference in subdivision (e) (allowing public entities to sue tobacco companies) to "a prior version" of the statute as possibly precluding suits against tobacco companies by individual smokers. This ambiguity requires us to construe the Repeal Statute as "unambiguously prospective." (I.N.S. v. St. Cyr, supra, 533 U.S. at p. 320, fn. 45, 121 S.Ct. 2271.)
Furthermore, the time-honored presumption against retroactive application of a statute, as reflected in section 3 of the California Civil Code as well as in decisions by this court and the United States Supreme Court, would be meaningless if the vague phrases relied on by plaintiff and the dissent were considered sufficient to satisfy the test of a "clear[ ] manifest[ation]" (Hughes Aircraft Co. v. U.S. ex rel. Schumer, supra, 520 U.S. at p. 946, 117 S.Ct. 1871) or an "`"`unequivocal and inflexible'"'" assertion (Evangelatos, supra, 44 Cal.3d at p. 1207, 246 Cal.Rptr. 629, 753 P.2d 585, italics omitted) of the Repeal Statute's retroactivity. After a painstaking review of the entire Repeal Statute, we find it to be devoid of any express legislative intent of retroactivity. *52 Although we agree with the dissent that "no talismanic word or phrase is required to establish retroactivity" (dis. opn. of Moreno, J., post, 123 Cal.Rptr.2d at p. 56, 50 P.3d at p. 764), we do not agree there is language in the Repeal Statute of the unequivocal and inflexible statement of retroactivity that Evangelatos requires.
Interestingly, the Attorney General, in his role as amicus curiae for plaintiff, does not join plaintiff in urging this court to construe the Repeal Statute as retroactive. Instead, in an effort to avoid "the constitutional concerns inherent in retroactive laws," the Attorney General argues that the Immunity Statute did nothing more than codify the common law defense of assumption of risk, a statutory defense that the Legislature, by its enactment of the Repeal Statute effective January 1, 1998, eliminated for all trials after that date.
The Attorney General's argument disregards the logical basis of this court's 1997 decision in Richards, supra, 14 Cal.4th 985, 1001, 60 Cal.Rptr.2d 103, 928 P.2d 1181, which construed the Immunity Statute not as codifying an existing affirmative defense for trial but as declaring legally permissible and not wrongful certain conduct of tobacco companies. Applying Richards here, we reject the Attorney General's interpretation of the Immunity Statute.
2. Extrinsic sources provide no clear indication of legislative intent to apply the Repeal Statute retroactively
In contending that the Repeal Statute's legislative history is a "very clear" indicator that the California Legislature intended the statute to apply retroactively, plaintiff points to a brief comment in a report of the Senate Judiciary Committee prepared for the April 8, 1997, hearing on the bill to enact the Repeal Statute. The comment appears under the heading "Prospective repeal only" and states: "Some concern has been expressed that [Senate Bill No.] 67 would apply only to causes of action arising on or after January 1, 1998, assuming it is enacted this year. In the absence of specific language in the legislation specifying retroactive application, a measure will operate prospectively only...." (Sen. Com. on Judiciary, Rep. on Sen. Bill No. 67 (1997-1998 Reg. Sess.) Apr. 8, 1997, p. 3.) Plaintiff observes that just eight days later, an amendment to the bill added language stating that "there exists no statutory bar to ... tobacco-related personal injury, wrongful death, or other tort claims by California smokers or others who have suffered or incurred injuries," and indicating "that such claims which were or are brought" should be determined on their merits. (Sen. Bill No. 67 (1997-1998 Reg. Sess.) as amended Apr. 16, 1997, italics added; now § 1714.45, subd. (f).) Plaintiff characterizes this amendment as the Legislature's "direct response" to the expressed "concern" about retroactivity, and thus as comprising a "very clear" indication that the Repeal Statute was to apply retroactively. We are not persuaded.
As we observed earlier, a statute may be applied retroactively only if it contains express language of retroactivity or if other sources provide a clear and unavoidable implication that the Legislature intended retroactive application. (Evangelatos, supra, 44 Cal.3d at p. 1208, 246 Cal.Rptr. 629, 753 P.2d 585.) Addressing in this section the latter ground, we conclude that plaintiff has not shown a clear and unavoidable implication of legislative intent to apply the Repeal Statute retroactively. The committee report that plaintiff cites merely states the general rule that legislation operates prospectively unless there is *53 clear language of retroactivity; nothing in that report indicates that the Legislature desired retroactive application of the Repeal Statute. The April 16, 1997, bill amendment adding language to subdivision (f) of the Repeal Statute does not cure this omission, because the added language does not itself supply an unavoidable implication that the Legislature intended to subject tobacco companies to potential tort liability for conduct occurring during the 10 year period when the Immunity Statute declared that very conduct to be lawful. Its addition to the Repeal Statute eight days after some unspecified person voiced concern about retroactivity suggests that subdivision (f) may have been "the product of a legislative compromise" (Fremont Compensation Ins. Co. v. Superior Court (1996) 44 Cal.App.4th 867, 874, 52 Cal. Rptr.2d 211), a way for legislators with differing views on retroactivity to vote for the Repeal Statute. "Avoiding resolution of disputed points is one of the classic means by which legislators are able to achieve agreement on legislative text." (Id. at pp. 873-874, 52 Cal.Rptr.2d 211; accord, J.A. Jones Construction Co. v. Superior Court (1994) 27 Cal.App.4th 1568, 1577, 33 Cal.Rptr.2d 206.)
Plaintiff also points to comments by the Repeal Statute's author that "tobacco companies may have deliberately manipulated the level of nicotine" in tobacco products and "waged an aggressive campaign of disinformation about the health consequences of tobacco use." (Sen. Com. on Judiciary, Rep. on Sen. Bill No. 67 (1997-1998 Reg. Sess.) Apr. 8, 1997, p. 2.) According to plaintiff, these comments reflect the Legislature's intent to remedy "past fraud" by tobacco companies by making the Repeal Statute retroactive. Not so.
Those comments were simply reasons given "in support of repeal" of the statutory immunity for tobacco companies (Sen. Com. on Judiciary, Rep. on Sen. Bill No. 67 (1997-1998 Reg. Sess.) Apr. 8, 1997, p. 2); they did not at all address retroactivity of the Repeal Statute. Moreover, we have repeatedly declined to discern legislative intent from comments by a bill's author because they reflect only the views of a single legislator instead of those of the Legislature as a whole. (Quintano v. Mercury Casualty Co. (1995) 11 Cal.4th 1049, 1062, 48 Cal.Rptr.2d 1, 906 P.2d 1057; Grupe Development Co. v. Superior Court (1993) 4 Cal.4th 911, 922, 16 Cal. Rptr.2d 226, 844 P.2d 545.)[6]
3. Constitutional considerations reinforce our reading of the Repeal Statute as not having retroactive application
Earlier we discussed the constitutional underpinnings of the presumption against a statute's retroactive application. That presumption has particular force in this case, in which retroactive application of the California Legislature's repeal of tobacco companies' statutory immunity from product liability lawsuits could expose them to huge monetary damages for conduct that occurred during the statutory *54 immunity period when that conduct carried no tort liability.
Instructive on this point is the United States Supreme Court's recent decision in Eastern Enterprises v. Apfel (1998) 524 U.S. 498, 118 S.Ct. 2131, 141 L.Ed.2d 451 (Apfel), in which the high court invalidated a federal law that retroactively imposed on coal mining companies substantial financial obligations for the health care of their retired workers. In a plurality opinion, four of the nine justices concluded the law was an unconstitutional taking under the Fifth Amendment to the federal Constitution. (Apfel, supra, at p. 538, 118 S.Ct. 2131 (plur. opn. of O'Connor, J.).) In his concurring opinion, Justice Kennedy concluded that the act violated the Fifth Amendment's due process clause by retroactively creating liability for past conduct. (Apfel, supra, at p. 549, 118 S.Ct. 2131 (conc. opn. of Kennedy, J.).) He observed: "If retroactive laws change the legal consequences of transactions long closed, the change can destroy the reasonable certainty and security which are the very objects of property ownership. As a consequence, due process protection for property must be understood to incorporate our settled tradition against retroactive laws of great severity. Groups targeted by retroactive laws, were they to be denied all protection, would have a justified fear that a government once formed to protect expectations now can destroy them. Both stability of investment and confidence in the constitutional system, then, are secured by due process restrictions against severe retroactive legislation." (Id. at pp. 548-549, 118 S.Ct. 2131 (conc. opn. of Kennedy, J.).)
In an earlier decision, Landgraf v. USI Film Products, supra, 511 U.S. 244, 114 S.Ct. 1483, the high court questioned the constitutionality of legislation that retroactively would result in the imposition of punitive damages for particularly egregious conduct, suggesting it might violate the constitutional prohibition against ex post facto laws. (U.S. Const., art. I, § 9, cl. 3 [restricting federal government]; see also id., art. I, § 10, cl. 1 [restricting state governments]; Cal. Const., art. I, § 9.) In the words of the high court: "The very labels given `punitive' or `exemplary' damages, as well as the rationales that support them, demonstrate that they share key characteristics of criminal sanctions. Retroactive imposition of punitive damages would raise a serious constitutional question." (Landgraf v. USI Film Products, supra, at p. 281, 114 S.Ct. 1483.)
An established rule of statutory construction requires us to construe statutes to avoid "constitutional infirmit[ies]." (United States v. Delaware & Hudson Co. (1909) 213 U.S. 366, 407-408, 29 S.Ct. 527, 53 L.Ed. 836; United States v. Security Industrial Bank, supra, 459 U.S. 70, 78, 103 S.Ct. 407, 74 L.Ed.2d 235; see also Curran v. Mount Diablo Council of the Boy Scouts (1998) 17 Cal.4th 670, 727-728, 72 Cal.Rptr.2d 410, 952 P.2d 218 (conc. opn. of Kennard, J.).) That rule reinforces our construction of the Repeal Statute as prospective only.
C. Whether the Immunity Statute Precludes Recovery for Defendants' Conduct Before the Immunity Period
Plaintiff began smoking cigarettes in 1956 and continued to do so until 1997, a period of 41 years. For 10 years of this period, from January 1, 1988, through December 31, 1997, the Immunity Statute was in effect to shield tobacco companies from liability for personal injuries that their tobacco products caused to voluntary consumers. Because, as explained above, the Repeal Statute is not retroactive, plaintiff has no product liability claim against defendant tobacco companies for *55 their conduct in manufacturing and distributing cigarettes during the statutory immunity period.
Regarding the portion of plaintiff's claim attributable to her use of cigarettes that defendant tobacco companies manufactured or distributed before the period of statutory immunity afforded those companies, defendants contend that the Immunity Statute shields them from liability. In support, they point out that the statute was expressly retroactive, covering during its effective period "all product liability actions pending on, or commenced after, January 1, 1988" (former § 1714.45, subd. (c), added by Stats. 1987, ch. 1498, § 3, p. 5779), whereas (as we hold here) the Repeal Statute was not retroactive and thus could not remove any of the protection conferred by the Immunity Statute. We reject defendants' contention.
Although the Repeal Statute has no retroactive effect, it nonetheless removed the protection that the Immunity Statute gave to tobacco companies for their conduct occurring before the Immunity Statute's effective date. This is so because a retroactive effect is one that "impair[s] rights a party possessed when he acted." (Landgraf v. USI Film Products, supra, 511 U.S. at p. 280, 114 S.Ct. 1483, italics added.) The Repeal Statute did not impair any rights that tobacco companies possessed before the immunity period. On the contrary, by abrogating the Immunity Statute, the Repeal Statute restored the law governing product liability for the manufacture or sale of tobacco products to what it had been before the January 1, 1988, effective date of the Immunity Statute.
Before January 1, 1988, general tort principles defined the extent of any tort liability that tobacco companies might have for manufacturing or distributing their tobacco products for sale to voluntary consumers. When, years later, the California Legislature repealed the statutory immunity for tobacco manufacturers in product liability actions, it reinstated those general tort rules. This repeal did not "change the legal consequences" (Apfel, supra, 524 U.S. at p. 548, 118 S.Ct. 2131) of defendants' conduct in manufacturing or distributing tobacco products before the effective date of the immunity. Nor could defendants reasonably have relied upon the Immunity Statute before its enactment. (See In re Marriage of Buol (1985) 39 Cal.3d 751, 761, 218 Cal.Rptr. 31, 705 P.2d 354.) Accordingly, repeal of the Immunity Statute eliminated any retroactive effect it may have had, so that the tort liability, if any, that defendants could incur for their conduct before the effective date of the Immunity Statute is determined by applying general tort principles.
IV. CONCLUSION
The Repeal Statute rescinding the tobacco companies' statutory immunity in certain product liability lawsuits contains no express retroactivity provision. Nor has the Legislature given any clear indication that it wanted the Repeal Statute to apply retroactively. Thus, the Immunity Statute continues to shield defendant tobacco companies in product liability actions but only for conduct they engaged in during the 10-year period when the Immunity Statute was in effect. The liability of tobacco companies based on their conduct outside the 10-year period of immunity is governed by general tort principles. We stress, however, that we are not here asked to decide, and do not decide, what liability, if any, defendants may have under those general tort principles.
WE CONCUR: GEORGE, C.J., BAXTER, WERDEGAR, CHIN and BROWN, JJ.
*56 Dissenting Opinion by MORENO, J.
I respectfully dissent.
I agree with the majority that "to have the Repeal Statute govern product liability suits against tobacco companies ... would indeed be a retroactive application of that statute...." (Maj. opn., ante, 123 Cal. Rptr.2d at p. 49, 50 P.3d at p. 758.) Unlike the majority, however, I believe both the statutory language and the legislative history of Civil Code section 1714.45[1] evince a clear legislative intent to apply the statute retroactively. I further conclude that such retroactive application would not raise serious questions of constitutionality. (Maj. opn., ante, 123 Cal. Rptr.2d at p. 53, 50 P.3d at p. 762.)
Statutes are presumed to operate prospectively. (Evangelatos v. Superior Court (1988) 44 Cal.3d 1188, 1208, 246 Cal.Rptr. 629, 753 P.2d 585 (Evangelatos ); § 3.) "Of course, when the Legislature clearly intends a statute to operate retrospectively, we are obliged to carry out that intent unless due process considerations prevent us. [Citation.]" (Western Security Bank v. Superior Court (1997) 15 Cal.4th 232, 243, 62 Cal.Rptr.2d 243, 933 P.2d 507.) Whether the Legislature intended retroactive application of a statute requires an exercise in statutory interpretation to ascertain if, by "`express language or clear and unavoidable implication,' " the presumption of prospective application has been overcome. (Evangelatos, supra, 44 Cal.3d at p. 1208, 246 Cal. Rptr. 629, 753 P.2d 585.) As this formulation in Evangelatos suggests, no talismanic word or phrase is required to establish retroactivity. Rather, the question is whether, from the language employed in the statute, there plainly emerges a legislative intent for the statute to operate retrospectively. Moreover, even in the absence of an express retroactivity provision, a statute may still be applied retroactively if the Legislature's intention is sufficiently clear from such extrinsic sources as legislative history. (Id. at pp. 1208-1209, 246 Cal.Rptr. 629, 753 P.2d 585.)
Contrary to the majority, I conclude that subdivision (f) of section 1714.45 (added by Stats.1997, ch. 570, § 1; all references to the Repeal Statute are to the 1997 enactment) is a sufficiently unambiguous statement of the Legislature's intent that the Repeal Statute be given retrospective effect. In reaching this conclusion, I rely on the familiar principle of statutory construction that requires, in the first instance, consulting the words of the statute itself to ascertain legislative intent. (Steketee v. Lintz, Williams & Rothberg (1985) 38 Cal.3d 46, 51, 210 Cal.Rptr. 781, 694 P.2d 1153.) "The court is required to give effect to statutes `"`according to the usual, ordinary import of the language employed in framing them.' [Citations.] `If possible, significance should be given to every word, phrase, sentence and part of an act in pursuance of the legislative purpose[ ]' [citation]; `a construction making some words surplusage is to be avoided.' [Citation.]"'" (Id. at p. 52, 210 Cal.Rptr. 781, 694 P.2d 1153.)
Subdivision (f) of section 1714.45 states: "It is the intention of the Legislature in enacting the amendments to subdivisions (a) and (b) of this section adopted at the 1997-98 Regular Session to declare that there exists no statutory bar to tobacco-related personal injury, wrongful death, or other tort claims against tobacco manufacturers and their successors in interest by California smokers or others who have suffered or incurred injuries, damages, or costs arising from the promotion, marketing, sale, or consumption of tobacco products. *57 It is also the intention of the Legislature to clarify that those claims that were or are brought shall be determined on their merits, without the imposition of any claim of statutory bar or categorical defense." (Italics added.)
A statute speaks from the day it takes effect. (Hersh v. State Bar (1972) 7 Cal.3d 241, 245, 101 Cal.Rptr. 833, 496 P.2d 1201.) The inclusion of persons who "have suffered or incurred injuries" as among those to whom the abolition of the statutory immunity applies cannot be understood to mean anything other than that the Legislature, speaking as of January 1, 1998, intended to eliminate immunity for past injury-producing conduct by the tobacco industry. This construction of section 1714.45, subdivision (f) is further supported by the next sentence, which declares an intent that "those claims that were or are brought shall be determined ... without the imposition of any claim of statutory bar or categorical defense." (Italics added.) The ordinary meaning of this language plainly precludes assertion by the tobacco companies of a statutory bar or other categorical defense not only to claims which may be brought in the future ("are brought") but also to those based on past conduct ("were ... brought") as to which the original enactment (Stats.1987, ch. 1498, § 3, p. 5778; hereafter the Immunity Statute) might otherwise have applied.
The majority is "not persuaded" that these "phrases in isolation" express the Legislature's intent that the Repeal Statute should be retroactively applied. (Maj. opn., ante, 123 Cal.Rptr.2d at p. 51, 50 P.3d at p. 760.) The majority asserts: "Neither the italicized phrases nor section 1714.45, subdivision (f) as a whole states anything more than that the Repeal Statute rescinded any former statutory immunity for tobacco companies." (Maj. opn., ante, 123 Cal.Rptr.2d at p. 51, 50 P.3d at p. 760.) This conclusory statement, however, fails to suggest an alternative interpretation of the statute. In this respect, the majority's approach does not comport with the principle of statutory construction that requires a reviewing court to give significance, if possible, to every word and phrase of a statute and avoid a construction that renders some words surplusage. (Steketee v. Lintz, Williams & Rothberg, supra, 38 Cal.3d at p. 52, 210 Cal.Rptr. 781, 694 P.2d 1153.) The majority then goes on to state that, even "were we to accept that proposed reading of subdivision (f), the Repeal Statute is, at best, ambiguous on the question of retroactivity because of the Legislature's reference in subdivision (e) (allowing public entities to sue tobacco companies) to `a prior version' of the statute as possibly precluding suits against tobacco companies by individual smokers." (Maj. opn., ante, 123 Cal.Rptr.2d at p. 51, 50 P.3d at p. 760.) Having thus discerned this ambiguity, the majority would apply the rule that a statute ambiguous as to retroactive application is to be applied prospectively.
As I point out elsewhere, however, subdivision (e) does not conflict with the legislative mandate in subdivision (f) that the Repeal Statute be applied retroactively but, rather, addresses another legislative concern entirely: the possibility that the courts might determine that constitutional considerations preclude retroactivity, in which event, the Legislature carved out in subdivision (e) an exemption for public entities. In my view, therefore, subdivision (e) does not create an ambiguity. I would also observe that, by construing these two subdivisions so as to create an apparent conflict between them, the majority's interpretation is contrary to the fundamental principle of statutory construction that requires us, in construing legislation, "to harmonize its various elements without doing violence to its language or spirit." *58 (Wells v. Marina City Properties, Inc. (1981) 29 Cal.3d 781, 788, 176 Cal.Rptr. 104, 632 P.2d 217.)
I find further support for my conclusion that the Legislature intended retroactive application of the Repeal Statute in the legislative history surrounding the addition of subdivision (f) to section 1714.45. This history strongly suggests that subdivision (f) was added in response to a concern that the statute might only apply prospectively.
Senate Bill No. 67, as originally proposed, did not contain what eventually became subdivision (f), nor any other declaration of legislative intent. (Sen. Bill No. 67 (1997-1998 Reg. Sess.) § 1, as introduced Dec. 11, 1996.) In anticipation of an April 1997 hearing on the bill, the Senate Judiciary Committee noted that "[s]ome concern has been expressed that [Senate Bill No.] 67 would apply only to causes of action arising on or after January 1, 1998, assuming it is enacted this year. In the absence of specific language in the legislature specifying retroactive application, a measure will operate prospectively only upon its enactment." (Sen. Com. on Judiciary, Rep. on Sen. Bill No. 67 (1997-1998 Reg. Sess.) as amended Feb. 14, 1997, italics added.) One week after this acknowledgement of concern about a prospective-only application of amendments as then drawn, the bill was amended further by the insertion of what would become subdivision (f). (Sen. Bill No. 67 (1997-1998 Reg. Sess.) § 1, subd.(d), as amended Apr. 16, 1997.)
The proximity of these events suggests that subdivision (f) was added to section 1714.45 in response to the concern expressed in the committee report. At minimum, that "the retroactivity question was actually consciously considered during the enactment process" (Evangelatos, supra, 44 Cal.3d at p. 1211, 246 Cal.Rptr. 629, 753 P.2d 585) supports a conclusion that retroactivity was intended.
The majority, examining this legislative history, simply concludes that the addition of subdivision (f) "may have been" the product of legislative compromise that allowed legislators with "differing views on retroactivity to vote for the Repeal Statute." (Maj. opn., ante, 123 Cal.Rptr.2d at p. 53, 50 P.3d at p. 761.) But no "differing views" are expressed in the committee report regarding retroactivity and I cannot agree with the majority's explanation of the report. The plain meaning of the language used in subdivision (f) and the legislative history seem to me to unmistakably document such intent.
To buttress the assertion that subdivision (f) does not mean what it says, the majority, like defendants, cites subdivision (e) of section 1714.45. Subdivision (e) essentially reiterates an amendment to the Immunity Statute that exempted public entities from the statute. The original provision in the Immunity Statute stated that in an "action brought by a public entity, the fact that the injured individual's claim against the defendant may be barred by this section shall not be a defense." (§ 1714.45, former subd. (d), as amended by Stats.1997, ch. 25, § 2, eff. June 12, 1997.) The current version in the Repeal Statute provides "[i]n the action brought by a public entity, the fact that the injured individual's claim against the defendant may be barred by a prior version of this section shall not be a defense." (§ 1714.45, subd. (e), italics added.)
The majority finds that the phrase "a prior version of this section" suggests that "even after the January 1, 1998, effective date of the Repeal Statute, `a prior version' of that statute, namely the Immunity Statute, may continue to bar claims against tobacco companies brought by individual smokers." (Maj. opn., ante, 123 Cal. Rptr.2d at p. 51, 50 P.3d at p. 760.) In my view, subdivision (e) of section 1714.45 simply reflects the Legislature's concern that, *59 notwithstanding its intent that the Repeal Statute apply retroactively, the courts might decline to give retroactive effect to the statute based on due process or other constitutional concerns raised by retroactivity. (Western Security Bank v. Superior Court, supra, 15 Cal.4th at p. 243, 62 Cal.Rptr.2d 243, 933 P.2d 507 ["Of course, when the Legislature clearly intends a statute to operate retrospectively, we are obliged to carry out that intent unless due process considerations prevent us" (italics added) ].) Indeed, the majority touches upon these constitutional issues and while, in my view, they provide no basis to deny retroactive application of the Repeal Statute, the Legislature could not have forecast with absolute certainty whether its intent to apply the statute retroactively would survive a court challenge. This uncertainty by the Legislature is demonstrated by its use of the word "may." Therefore, the Legislature chose to make it clear that, at minimum, suits by public entities would not be precluded by judicial fiat. This interpretation harmonizes subdivisions (e) and (f), which, notably, the majority's approach does not. (Wells v. Marina City Properties, Inc., supra, 29 Cal.3d at p. 788, 176 Cal.Rptr. 104, 632 P.2d 217 ["It is fundamental that legislation should be construed so as to harmonize its various elements without doing violence to its language or spirit."].)
Finally, the majority suggests that constitutional considerations reinforce its interpretation of the Repeal Statute as not having retroactive application. (Maj. opn., ante, 123 Cal.Rptr.2d at p. 53, 50 P.3d at p. 762.) Specifically, the majority alludes to potential due process and ex post facto issues. The retroactive application of any statute must be vetted for constitutionality, but I do not agree that constitutional considerations support a conclusion of nonretroactivity. Nor are the cases cited by the majority persuasive in this respect. The concurring opinion of Justice Kennedy in Eastern Enterprises v. Apfel (1998) 524 U.S. 498, 548-549, 118 S.Ct. 2131, 141 L.Ed.2d 451, states little more than the truism that retroactive laws must meet the test of due process. The majority's citation to dictum in Landgraf v. USI Film Products (1994) 511 U.S. 244, 114 S.Ct. 1483, 128 L.Ed.2d 229 on the ex post facto issue is equally general.
Retroactive application of a statute may be unconstitutional if, inter alia, it deprives a person of a vested right without due process of law. (In re Marriage of Buol (1985) 39 Cal.3d 751, 756, 218 Cal.Rptr. 31, 705 P.2d 354.) I am not convinced that the immunity conferred in this case, however, is a vested right. First, the immunity involved here was wholly a creation of statute, and its abolition does not affect the tobacco companies' right to assert common law defenses in product liability actions. (Cf. Callet v. Alioto (1930) 210 Cal. 65, 67-68, 290 P. 438 [statutory rights, unlike common law rights, not vested for purposes of retroactive application of a statute because "all statutory remedies are pursued with full realization that the legislature may abolish the right to recover at any time"].) Second, I question whether a statutory immunity secured in part by deceptive representations by the tobacco companies about the lethal nature of their product should be deemed a vested right under any circumstance.[2]
Even assuming, arguendo, that the Immunity Statute created a vested right, it is settled that "[v]ested rights are not immutable; *60 the state, exercising its police power, may impair such rights when considered reasonably necessary to protect the health, safety, morals and general welfare of the people." (In re Marriage of Buol, supra, 39 Cal.3d at pp. 760-761, 218 Cal.Rptr. 31, 705 P.2d 354.) "In determining whether a retroactive law contravenes the due process clause, we consider such factors as the significance of the state interest served by the law, the importance of the retroactive application of the law to the effectuation of that interest, the extent of reliance upon the former law, the legitimacy of that reliance, the extent of actions taken on the basis of that reliance, and the extent to which the retroactive application of the new law would disrupt those actions." (In re Marriage of Bouquet (1976) 16 Cal.3d 583, 592, 128 Cal.Rptr. 427, 546 P.2d 1371.)
I submit that, if the due process issue actually arose, all these factors would weigh heavily in favor of finding that retroactive application of the Repeal Statute does not contravene the due process clause. The state has a substantial interest in seeing that victims of dangerous or defective products are compensated for their injuries by the manufacturers, who are in the best position to provide such compensation. (Safeway Stores, Inc. v. Nest-Kart (1978) 21 Cal.3d 322, 330, 146 Cal.Rptr. 550, 579 P.2d 441 ["one of the principal social policies served by product liability doctrine is to assign liability to a party who possesses the ability to distribute losses over an appropriate segment of society..."].) The state has an equally substantial interest in protecting and promoting the health of Californians. These interests would be significantly advanced by retroactive application of the Repeal Statute.
The Repeal Statute restores the right of Californians suffering from smoking-related illnesses to pursue product liability actions against the tobacco companies. Such meritorious actions would properly compensate the victims and would also shift the costs for their care from the public health system, to the extent the victims rely on public health care, to the tobacco companies. Furthermore, such actions expose the life-threatening consequences of tobacco use, as well as the tobacco companies' deceptive practices in promoting the use of their products. In the past, such suits have helped create a popular repugnance toward the tobacco companies and their products that has, in turn, contributed to a decline in the consumption of tobacco products, thus promoting the health of the populace and reducing health costs associated with tobacco use. Retroactive application of the Repeal Statute would serve both goals of victim compensation and reduction of the use of tobacco products. By contrast, the tobacco companies can claim little reliance on the decade-old Immunity Statute, since the claims ordinarily advanced in these kinds of suits involve conduct stretching back decades. Furthermore, as I observed, retroactive *61 application of the Repeal Statute does not strip the tobacco companies of their common law defenses.
Briefly, the majority also suggests that retroactive application of the Repeal Statute could implicate the prohibition against ex post facto laws because of the potential availability of punitive damages. Again, however, the majority does not engage in an in-depth analysis that demonstrates retroactive application of the Repeal Statute would in fact violate the prohibition against ex post facto laws. Furthermore, the brief discussion of this point in the case cited by the majority, Landgraf v. USI Film Products, supra, 511 U.S. 244, 114 S.Ct. 1483, is dictum. (Id. at p. 281, 114 S.Ct. 1483.) Assuming a punitive damages award might be deemed penal for purposes of the ex post facto clause, the clause applies only if the challenged law makes criminal conduct that was not criminal at the time the action was performed. (Ibid. ["Before we entertain [the ex post facto] question, we would have to be confronted with a statute that explicitly authorized punitive damages for preenactment conduct"]; Collins v. Youngblood (1990) 497 U.S. 37, 42, 110 S.Ct. 2715, 111 L.Ed.2d 30.)
The conduct engaged in by the tobacco companies that might support an award of punitive damages in the instant case stretches back far beyond the 10-year period during which the Immunity Statute was in effect. As the majority concludes elsewhere in the opinion, neither due process concerns nor the ex post facto clause shields the tobacco companies from liability, presumably including punitive damages, for conduct they engaged in prior to the enactment of the Immunity Statute in 1988. (Maj. opn., ante, 123 Cal.Rptr.2d at pp. 54-55, 50 P.3d at p. 763.) The effect of the Repeal Statute in that case is simply to restore the status quo ante that existed before January 1, 1988. Since the tobacco companies' conduct that is the basis of the instant suit is a continuous course of action that encompasses several decades, I question whether a plausible ex post facto claim could be made. To do so, the tobacco companies would be required to isolate specific acts that occurred during the immunity period and identify the percentage of a punitive damages award attributable to such conduct. This is not their position. Rather, they have argued that the Immunity Statute insulates them from any liability, including their pre-1988 conduct. (Maj. opn., ante, 123 Cal.Rptr.2d at p. 55, 50 P.3d at p. 763.) Therefore, the ex post facto concern raised by the majority seems both theoretical and dubious and does not present a substantive reason for declining to carry out the Legislature's will by retroactively applying the Repeal Statute.
For all these reasons, therefore, I dissent.
NOTES
[1] Further undesignated statutory references are to the Civil Code.
[2] As this court has done in prior cases discussing this legislation, we use the term "immunity" rather loosely, without restricting it to its narrowest technical meaning, that is, "a complete defense ... [that] does not negate the tort." (Black's Law Dict. (1996 pocket ed.) p. 298; see also Delaney v. Superior Court (1990) 50 Cal.3d 785, 797, fn. 6, 268 Cal.Rptr. 753, 789 P.2d 934 [discussing the "immunity-privilege distinction"].)
[3] The compromise agreement reportedly is known as "the `napkin deal' since it was hammered out by political adversaries" (one side "wanting comprehensive changes in California tort law, the other wanting to maintain the status quo") on a white cloth napkin in a Sacramento restaurant. (Moy, Tobacco Companies, Immune No More: California's Removal of the Legal Barriers Preventing Plaintiffs From Recovering for Tobacco-related Illness (1998) 29 McGeorge L.Rev. 761, 770.)
[4] Earlier in the same legislative session, the California Legislature passed, as urgency legislation effective June 12, 1997, Assembly Bill No. 1603 (1997-1998 Reg. Sess.) (hereafter the Public Entity Amendment). That bill amended the Immunity Statute to allow for "an action brought by a public entity to recover the value of benefits provided to individuals injured by a tobacco-related illness caused by the tortious conduct of a tobacco company." (Former § 1714.45, subd. (d), as amended by Stats.1997, ch. 25, § 2.) A third bill, regarding secondhand smoke (Sen. Bill No. 340 (1997-1998 Reg. Sess.)), was passed by the Legislature but vetoed by the Governor.
[5] In 1998, the Legislature made nonsubstantive changes to the final sentence in subdivision (f) of the Repeal Statute. It now reads: "It is also the intention of the Legislature to clarify that those claims that were or are brought shall be determined on their merits, without the imposition of any claim of statutory bar or categorical defense." (§ 1714.45, subd. (f), as amended by Stats. 1998, ch. 485, § 38, underscoring indicates changes.)
[6] The dissent broadly asserts that these comments by the bill's author and similar remarks in a letter supporting passage of the Repeal Statute "suggest" that the 1987 enactment of the statutory immunity was "secured in part by deceptive representations by the tobacco companies about the lethal nature of their product." (Dis. opn. of Moreno, J., post, 123 Cal.Rptr.2d at p. 59, fn. 2, 50 P.3d at p. 767, fn. 2.) Neither these comments nor anything else in the record before us shows that the 1987 legislation was indeed a product of deception by tobacco companies. As an appellate court, we may not consider assertions of fact that are not supported by the record. (See People v. Szeto (1981) 29 Cal.3d 20, 35, 171 Cal.Rptr. 652, 623 P.2d 213.)
[1] All statutory references are to the Civil Code unless otherwise indicated.
[2] The majority asserts that there is no proof in the record that the 1987 legislation was the product of deception by tobacco companies. (Maj. opn., ante, 123 Cal.Rptr.2d at p. 53, fn. 6, 50 P.3d at p. 762, fn. 6.) At the time the Repeal Statute was proposed, its author explained that the need for the legislation was due in part to evidence that "the tobacco companies may have deliberately manipulated the level of nicotine" and also that "evidence shows the tobacco companies have systematically suppressed and concealed material information and waged an aggressive campaign of disinformation about the health consequences of tobacco use." (Sen. Com. on Judiciary, Rep. on Sen. Bill No. 67 (1997-1998 Reg. Sess.) Apr. 8, 1997, p. 2.) In the same report, the California Medical Association, identified as "one of the main participants" in the 1987 legislation, stated in support of the bill that "`[o]ver the last decade we have learned much regarding the addictive nature of tobacco and the industry's intentional efforts to mislead the public on the health effects of tobacco. This, coupled with the courts' broad interpretation of the California statute, has precipitated the need to change that statute and remove tobacco's liability protections.'" (Ibid.) I submit that these remarks, particularly the comments of the California Medical Association, which was a participant in the 1987 legislation, suggest that the tobacco companies did deceive the other parties to the legislative effort that resulted in the Immunity Statute.
|
Q:
Can I use an if statement to change the API options of the Swiper javascript slider?
Sorry if this is a dumb question. I'm using the Swiper javascript slider, developed by iDangerous. It comes with an initialization field that I have in my HTML file, in it are the initialization options for the slider.
I'm in a situation where I need to search the webpage for a certain element called #editor-content. If the element is found, I need the simulateTouch option of the Swiper initialization to be set to false. If the element is not found, the simulateTouch option should be set to true.
I am using jQuery to accomplish the finding-the-element. I'm using if(($('#editor-content').length)){...} to accomplish this. This part works just fine.
Here is what the Swiper initialization looks like...
<script>
  swiper = new Swiper('.swiper-container', {
    spaceBetween: 30,
    centeredSlides: true,
    // This is the option that needs to be set to true/false depending on whether the #editor-content element exists in the webpage or not...
    stimulateTouch = false;***
    autoplay: {
      delay: 2500,
      disableOnInteraction: true,
    },
    pagination: {
      el: '.swiper-pagination',
      clickable: true,
    },
    navigation: {
      nextEl: '.swiper-button-next',
      prevEl: '.swiper-button-prev',
    },
  });
</script>
So far, I've tried to initialize the slider differently based on outcome. If it finds the element, initialize the swiper with the option disabled. If it doesn't, initialize the swiper with the option enabled. That didn't work (see code below--and I'm very sorry for the indentation mess):
<script>
var swiper = undefined;
function initSwiper() {
  // If the jQuery detects an #editor-content element (meaning you're in the editor...)
  if ((jQuery('#editor-content').length)) {
    swiper = new Swiper('.swiper-container', {
      spaceBetween: 30,
      centeredSlides: true,
      // Element found, so initialize the slider with option below set to false
      simulateTouch = false,
      autoplay: {
        delay: 2500,
        disableOnInteraction: true,
      },
      pagination: {
        el: '.swiper-pagination',
        clickable: true,
      },
      navigation: {
        nextEl: '.swiper-button-next',
        prevEl: '.swiper-button-prev',
      },
    });
  } else {
    var swiper2 = new Swiper('.swiper-container', {
      spaceBetween: 30,
      centeredSlides: true,
      // Element not found, so initialize the slider with option below set to true
      simulateTouch = true,
      autoplay: {
        delay: 2500,
        disableOnInteraction: true,
      },
      pagination: {
        el: '.swiper-pagination',
        clickable: true,
      },
      navigation: {
        nextEl: '.swiper-button-next',
        prevEl: '.swiper-button-prev',
      },
    });
  }
}
// Initialize the function
initSwiper();
</script>
I'm expecting the slider to have the simulateTouch option set to false when #editor-content does exist, and have it set the option to true when the element does not exist. But so far my attempted code is just making the entire slider JS not function. Help?
A:
If you run your code through a Javascript validator (such as this: http://esprima.org/demo/validate.html) you'll find you have a syntax error on the line where you set the touch option:
simulateTouch = false,
needs to be
simulateTouch: false,
as per the other properties (and the same for the simulateTouch = true, line in the else branch; = is assignment syntax and is not valid inside an object literal). Also note that Swiper's option is spelled simulateTouch, not stimulateTouch; a misspelled option key is valid syntax but gets silently ignored, so the slider would still always use the default.
Just FYI, you may want to look into using Chrome Developer Tools or some other debugging tool to pick up small issues like this. The console will let you know if you have any syntax errors, and you can use the debugger to set breakpoints and step through your code line by line so you can find the exact spot where things start to go pear-shaped.
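One more thought: rather than duplicating the whole initialization in each branch, you could build the options object once and vary only the one flag. This is just a sketch of that approach using the values from your snippet; buildSwiperOptions is a made-up helper name, not part of the Swiper API:

```javascript
// Build the full Swiper config once; only simulateTouch depends on
// whether the #editor-content element is present on the page.
function buildSwiperOptions(editorPresent) {
  return {
    spaceBetween: 30,
    centeredSlides: true,
    // false inside the editor, true everywhere else
    simulateTouch: !editorPresent,
    autoplay: { delay: 2500, disableOnInteraction: true },
    pagination: { el: '.swiper-pagination', clickable: true },
    navigation: { nextEl: '.swiper-button-next', prevEl: '.swiper-button-prev' },
  };
}

// In the page, after jQuery and Swiper have loaded:
// var swiper = new Swiper('.swiper-container',
//   buildSwiperOptions(jQuery('#editor-content').length > 0));
```

That way the shared options live in a single place, so the two branches can never drift out of sync.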
|
@import 'open-color/open-color.css';
.reposIntros {
display: flex;
flex-direction: column;
& .reposIntro {
margin-bottom: 5px;
display: flex;
position: relative;
padding: 5px;
transition: background-color 0.2s;
&:hover {
background-color: var(--oc-gray-2);
& .reposIntroLine {
opacity: 1;
}
}
& .reposIntroLine {
width: 8px;
border-radius: 3px;
margin-right: 15px;
opacity: 0.6;
transition: opacity 0.2s;
}
& .introInfoWrapper {
display: flex;
flex-direction: column;
flex: 1;
max-width: 90%;
}
& .introInfo {
font-size: 14px;
color: var(--oc-gray-5);
line-height: 1.7em;
& .introTitle {
color: var(--oc-gray-7);
line-height: 1.5em;
}
}
}
}
.languageLabelWrapper {
padding: 15px 20px;
}
/* chosen repos */
.reposTimelineContainer {
display: flex;
flex-direction: column;
padding: 20px;
}
.reposDates {
display: flex;
color: var(--oc-gray-5);
font-size: 12px;
& .reposDate {
flex: 1;
height: 40px;
line-height: 40px;
text-align: left;
&:last-child {
text-align: right;
}
}
}
.reposTimelines {
display: flex;
flex-direction: column;
& .reposTimelineWrapper {
height: 25px;
margin-bottom: 15px;
position: relative;
}
}
.timelineWrapper {
width: 100%;
display: flex;
flex-direction: row;
align-items: center;
height: 100%;
}
.timelineTipso {
height: 100%;
&:first-child {
margin-left: initial;
}
}
.timelineContent {
width: 140px;
text-align: left;
line-height: 1.5em;
padding-left: 10px;
}
.timelineItem {
opacity: 0.6;
border-radius: 20px;
transition: opacity 0.2s, box-shadow 0.2s;
cursor: pointer;
height: 100%;
&.timelineOld {
background-color: var(--oc-gray-3) !important;
border-radius: 20px 0 0 20px;
}
&.timelineNew {
border-radius: 0 20px 20px 0;
}
}
/* repos tipso */
.tipso_container {
font-size: 12px;
color: rgba(74, 74, 74, 0.6);
line-height: 1.5em;
min-width: 200px;
&.tipso_large {
min-width: 250px;
}
& .tipso_title {
font-size: 16px;
line-height: 2em;
display: flex;
flex-direction: row;
align-items: center;
color: var(--oc-gray-7);
}
& a {
color: var(--oc-gray-7);
transition: color 0.3s;
text-decoration: underline;
&:hover {
color: var(--oc-green-7);
}
}
& p {
line-height: 2em;
}
& blockquote,
& span {
line-height: 1.5em;
}
& blockquote {
margin: 5px 0;
font-size: 16px;
}
& .tipso_line {
width: 100%;
height: 1px;
background-color: var(--oc-gray-2);
margin: 10px 0;
}
}
.repos_review {
width: 100%;
}
.repos_show_container {
width: 95%;
margin-left: 2.5%;
padding: 15px 0;
min-height: 80px;
display: flex;
justify-content: center;
}
.repos_chart {
flex: 1;
position: relative;
max-width: 50%;
display: flex;
justify-content: center;
align-items: center;
& .chart_center {
position: absolute;
top: 55%;
left: 50%;
transform: translate(-50%, -50%);
font-size: 1.2em;
text-align: center;
}
}
.pieChart {
width: 250px !important;
max-width: 250px !important;
height: auto !important;
z-index: 1;
}
.radarChart {
width: 250px !important;
max-width: 250px !important;
height: auto !important;
}
.repos_chart_container {
width: auto;
margin-left: 0;
display: flex;
justify-content: center;
&.with_chart {
padding: 15px;
min-height: 80px;
}
&.small_margin {
margin-top: -30px !important;
}
}
.repos_show_container {
display: block;
& p.repos_show_title {
padding-left: 20px;
margin-bottom: 15px;
line-height: 40px !important;
font-size: 2em;
font-weight: bold;
display: block;
position: relative;
&::after {
content: "";
display: block;
position: absolute;
width: 4px;
top: 9px;
height: 27px;
left: 0;
background-color: var(--oc-gray-5);
}
& span {
font-size: 16px;
font-weight: normal;
color: var(--oc-gray-5);
}
}
}
.repos_show {
display: flex;
padding: 10px;
padding-bottom: 15px;
border-radius: 3px;
position: relative;
transition: background-color 0.2s;
&:hover {
background-color: var(--oc-gray-2);
}
& .repos_info {
padding-right: 50px;
width: 100%;
& .repos_info_name {
text-decoration: none;
color: var(--oc-gray-7);
line-height: 1.5em;
font-size: 16px;
&:hover {
color: var(--oc-green-7);
}
}
}
}
.repos_short_desc {
font-size: 14px;
color: var(--oc-gray-5);
}
.repos_star {
position: absolute;
right: 10px;
top: 50%;
transform: translateY(-50%);
&.active {
color: var(--oc-green-7);
}
}
/* orgs */
.orgs_container {
display: flex;
flex-direction: column;
padding: 20px;
}
.org_row {
flex: 1;
display: flex;
margin-bottom: 20px;
flex-direction: row;
justify-content: center;
align-items: center;
border-bottom: 1px solid var(--oc-gray-2);
}
.org_item_container {
flex: 1;
display: flex;
flex-direction: column;
align-items: center;
text-align: center;
justify-content: center;
}
.org_item {
width: 100px;
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
padding-bottom: 20px;
cursor: pointer;
border-bottom: 3px solid transparent;
transition: border-bottom 0.3s;
& img {
width: 80px;
height: 80px;
margin-bottom: 15px;
}
}
.org_item_active {
border-bottom: 3px solid var(--oc-green-7);
}
.org_detail {
}
.org_info {
line-height: 1.8em;
& a {
color: var(--oc-green-7);
text-decoration: none;
&:hover {
text-decoration: underline;
}
}
}
.org_info_title {
composes: org_info;
border-left: 3px solid var(--oc-gray-5);
padding-left: 15px;
margin: 15px 0 30px;
}
.orgs_coordinate {
display: flex;
flex-direction: column;
width: 100%;
& .repos_xAxes {
height: 2px;
width: 95%;
background-color: var(--oc-gray-4);
margin-bottom: 10px;
position: relative;
& .xAxes_text {
position: absolute;
font-size: 12px;
color: var(--oc-gray-4);
left: 50%;
top: -15px;
transform: translateX(-50%);
}
&::after {
content: '';
display: block;
position: absolute;
width: 0;
height: 0;
border-top: 5px solid transparent;
border-left: 10px solid var(--oc-gray-4);
border-bottom: 5px solid transparent;
right: -2px;
top: 50%;
transform: translateY(-50%);
}
}
& .repos_wrapper {
display: flex;
flex-direction: row;
& .repos {
display: flex;
flex: 1;
flex-direction: column;
}
& .repos_yAxes {
width: 2px;
max-width: 2px;
flex: 1;
margin-left: 20px;
margin-right: 10px;
background-color: var(--oc-gray-4);
position: relative;
& .yAxes_text {
position: absolute;
font-size: 12px;
color: var(--oc-gray-4);
top: 50%;
left: 3px;
transform: translateY(-50%);
writing-mode: vertical-rl;
}
&::before {
content: '';
display: block;
position: absolute;
width: 0;
height: 0;
border-left: 5px solid transparent;
border-right: 5px solid transparent;
border-bottom: 10px solid var(--oc-gray-4);
top: -2px;
left: 50%;
transform: translateX(-50%);
}
}
}
}
/* org repos */
.repos_item {
flex: 1;
position: relative;
margin-top: 10px;
}
.tipsoWrapper {
width: 100%;
}
.tipsoContainer {
left: 100px;
}
.repos_contributions {
border-radius: 20px;
width: 100%;
background-color: var(--oc-gray-1);
height: 20px;
display: flex;
flex-direction: column;
position: relative;
cursor: pointer;
}
.repos_contributions_disabled {
cursor: default;
}
.user_contributions {
border-radius: 20px;
background-color: var(--oc-green-7);
height: 20px;
}
.contributions_chart_container {
width: 100%;
padding-bottom: 10px;
& .chart_container {
width: 100%;
height: 200px;
}
}
.contribution_dates {
width: 100%;
display: flex;
flex-direction: row;
margin: 5px 0 10px 0;
}
.contribution_date {
flex: 1;
color: var(--oc-gray-5);
font-size: 14px;
&:first-child {
text-align: left;
}
&:last-child {
text-align: right;
}
}
.repos_title {
font-size: 18px;
color: var(--oc-green-7);
}
.org_repos_infos {
display: flex;
font-size: 14px;
flex-direction: column;
}
.org_repos_info {
margin: 5px 0;
flex: 1;
display: flex;
flex-direction: row;
align-items: center;
}
.info_strong {
position: relative;
font-weight: bolder;
&.strong-1 {
color: var(--oc-yellow-4);
}
&.strong-2 {
color: var(--oc-yellow-5);
}
&.strong-3 {
color: var(--oc-yellow-6);
}
&.strong-4 {
color: var(--oc-yellow-7);
}
&.strong-5 {
color: var(--oc-yellow-8);
}
}
.info_tipso {
bottom: 120%;
}
.org_repos_desc_info {
composes: org_repos_info;
margin-top: 15px;
& blockquote {
color: var(--oc-gray-6);
border-left: 2px solid var(--oc-gray-6);
padding-left: 10px;
}
}
/* hotmap */
.githubCalendar {
composes: card from 'STYLES/common.css';
box-shadow: none;
border-color: transparent;
background-color: var(--oc-white);
text-align: center;
min-height: 200px;
padding-top: 20px;
& h2:first-child {
margin-bottom: 10px;
font-weight: 300;
padding-top: 10px;
display: none;
}
}
.githubHotmap {
overflow: hidden;
margin: 0 15px 5px;
}
.hotmapCards {
border-top: 1px solid #eaeaea;
}
.hotmapControllers {
display: flex;
flex-direction: row;
justify-content: center;
align-items: center;
margin: 0 15px;
& .hotmapController {
flex: 1;
display: flex;
flex-direction: row;
&:first-child {
justify-content: flex-start;
padding-left: 5px;
}
&:last-child {
justify-content: flex-end;
padding-right: 5px;
}
& i {
cursor: pointer;
font-size: 1.5em;
}
}
}
.loading {
position: absolute;
top: 0;
bottom: 0;
left: 0;
min-height: auto;
}
/* ========== */
.reposTitleContainer {
display: flex;
flex-direction: row;
align-items: center;
margin-bottom: 10px;
}
.reposLabel {
padding: 0 7px;
}
.languageLabel {
margin: 5px 7px;
opacity: 0.6;
transition: opacity 0.15s;
&:hover, &.active {
opacity: 1;
}
}
.reposRows {
display: flex;
flex-direction: column;
justify-content: center;
padding: 15px;
}
.showMoreButton {
height: 30px;
line-height: 30px;
color: var(--oc-gray-5);
cursor: pointer;
text-align: center;
background-color: transparent;
transition: background-color 0.2s;
&:hover {
background-color: var(--oc-gray-1);
}
} |
Q:
Makefile always says generated-file target is up to date, but generates target as expected
I have a script that, when run, generates a file. I want to create a Makefile to run the script to generate the file if the generated file does not exist or if the script changes.
Here's a simplified version. Let's say I have an executable script gen_test.sh like this:
#!/bin/sh
echo "This is a generated file" > genfile.foo
And a Makefile like this:
#
# Test Makefile
#
GEN_SCRIPT = $(CURDIR)/gen_test.sh
GEN_FILE = $(CURDIR)/genfile.foo
$(GEN_FILE): $(GEN_SCRIPT)
$(shell $(GEN_SCRIPT))
.PHONY: clean
clean:
$(RM) $(GEN_FILE)
When I run make, I see the following output:
make: '/workspace/sw/bhshelto/temp/makefile_test/genfile.foo' is up to date.
but genfile.foo does in fact get generated.
What's going on here? Why do I see the "is up to date" message, and how do I get rid of it?
Thanks!
A:
When I run make, I see the following output:
make: '/workspace/sw/bhshelto/temp/makefile_test/genfile.foo' is up to date.
but genfile.foo does in fact get generated.
As user657267 has suggested: remove the $(shell) from the recipe body.
Assuming that gen_test.sh produces no output on stdout (or output only on stderr), the explanation for the behavior is rather simple:
Make detects that $(GEN_FILE) needs to be rebuilt and goes to invoke the rule.
Before invoking the commands, make expands the variables inside them. Because of the (unnecessary) $(shell), $(GEN_SCRIPT) is invoked already during this expansion.
$(GEN_SCRIPT) does its work but produces no output, so $(shell $(GEN_SCRIPT)) expands to an empty string.
Make sees that the command is empty, and thus there is no command to run.
Make then checks the files again (in case the variable expansion had side effects) and sees that the target is now up to date. Since the expanded command was empty, make prints its generic "is up to date" message.
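To see the fix in action, here is a sketch that reproduces the setup in a scratch directory (the /tmp path is illustrative) with the $(shell) removed, so the script runs as an ordinary recipe command:

```shell
# Recreate the question's files in a scratch directory
mkdir -p /tmp/maketest && cd /tmp/maketest

cat > gen_test.sh <<'EOF'
#!/bin/sh
echo "This is a generated file" > genfile.foo
EOF
chmod +x gen_test.sh

# Note: the recipe line must start with a literal tab (written as \t here)
printf 'GEN_SCRIPT = $(CURDIR)/gen_test.sh\nGEN_FILE = $(CURDIR)/genfile.foo\n\n$(GEN_FILE): $(GEN_SCRIPT)\n\t$(GEN_SCRIPT)\n\n.PHONY: clean\nclean:\n\t$(RM) $(GEN_FILE)\n' > Makefile

make    # runs ./gen_test.sh and creates genfile.foo
make    # now "is up to date" is printed only when it is actually true
```

With the plain recipe, make echoes the command on the first run and rebuilds genfile.foo whenever it is missing or older than gen_test.sh.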
|
Q:
Ember : Two route for one template
I'm new to Ember and I'm trying to build a simple CRUD app.
I want a single template for both adding and editing an object.
This is my code:
this.route('foos', {path: '/foos_path'}, function() {
this.route('edit',{path: '/edit/:foo_id'});
this.route('add',{path: '/add'});
this.route('index');
});
The add route works great, but I can't get the edit route working.
This is my edit route.
import Ember from 'ember';
export default Ember.Route.extend({
title : '',
model: function(params) {
this.store.find('foo', params.foo_id).then(function(foo) {
console.log(this, this.get('title'));
this.set('title', foo.title);
});
},
renderTemplate: function() {
this.render('foos.add', {
into: 'foos',
controller: 'foos.add'
});
this.render('foos/add');
}
});
Any help would be great :)
A:
Sorry for the delay, and thanks for your answer. This is how I achieved my goal:
AddRoute :
import Ember from 'ember';
export default Ember.Route.extend({
model: function() {
return this.store.createRecord('foo'); // This line is needed to load a clean model into the template
},
});
EditRoute :
import Ember from 'ember';
export default Ember.Route.extend({
controllerName : 'foos.add', // Use the add controller for the edit route
templateName : 'foos.add', // Use the add template for the edit route
model: function(params) {
return this.store.find('foo', params.foo_id); // Loads the foo object inside the template
}
});
My add template looks like this:
<div class="row">
<form class="col s12">
<div class="row">
<div class="input-field col s12">
{{input placeholder="Foo name" id="foo_name" type="text" class="validate" value=model.title}}
<label for="foo_name"></label>
</div>
<div class="row">
<button class="btn waves-effect waves-light col s12" type="submit" name="save" {{action 'save'}}>Submit
<i class="mdi-content-send right"></i>
</button>
</div>
</div>
</form>
</div>
And in my controller, I define the save action (it could be defined in the route instead):
import Ember from 'ember';
export default Ember.Controller.extend({
actions: {
save: function() {
// The following is no longer needed because we load a record in both the add and edit routes.
/*var foo = this.store.createRecord('foo', {
title : this.get('title')
});*/
// We can instead save the record directly
this.get('model').save().then(function() {
console.log('Foo saved.');
}).catch(function(error) {
console.log('Error : ' + error);
});
},
}
});
I hope this will help someone.
|
The Expanded Use of Autoaugmentation Techniques in Oncoplastic Breast Surgery.
Autoaugmentation techniques have been applied to oncoplastic reductions to assist with filling larger, more remote defects, and to women with smaller breasts. The purpose of this report is to describe the use of autoaugmentation techniques in oncoplastic reduction and compare the results with those of traditional oncoplastic reduction. The authors queried a prospectively maintained database of all women who underwent partial mastectomy and oncoplastic reduction between 1994 and October of 2015. The autoaugmentation techniques were defined as (1) extended primary nipple autoaugmentation pedicle, and (2) primary nipple pedicle and secondary autoaugmentation pedicle. Comparisons were made to a control oncoplastic group. There were a total of 333 patients, 222 patients (66.7 percent) without autoaugmentation and 111 patients (33.3 percent) with autoaugmentation (51 patients with an extended autoaugmentation pedicle, and 60 patients with a secondary autoaugmentation pedicle). Biopsy weight was smallest in the extended pedicle group (136 g) and largest in the regular oncoplastic group (235 g; p = 0.017). Superomedial was the most common extended pedicle, and lateral was the most common location. Inferolateral was the most common secondary pedicle for lateral and upper outer defects. There were no significant differences in the overall complication rate: 15.5 percent in the regular oncoplastic group, 19.6 percent in the extended pedicle group, and 20 percent in the secondary pedicle group. Autoaugmentation techniques have evolved to manage complex defects not amenable to standard oncoplastic reduction methods. They are often required for lateral defects, especially in smaller breasts. Autoaugmentation can be performed safely without an increased risk of complications, broadening the indications for breast conservation therapy. Therapeutic, III. |
From MediaPost, more evidence of the power of program promotion through TV ads:
More than seven in 10 viewers (72%) find out about new summer TV shows by seeing commercials on TV — significantly more than those who use reviews online and in print (28%), word of mouth (22%), radio or online streaming spots (17%), and outdoor media (8%).
Although both men (71%) and women (74%) primarily learn about new shows through TV ads, men are more likely than women to seek information through online and print reviews (22% vs. 35%).
TV ads also are the preferred channel across all age groups, although the Silent Generation (50%) is notably less likely to rely on them than Millennials (73%), Generation X (73%), and Boomers (75%). Rather, those in the Silent Generation (44%) are more likely to be swayed to watch new TV shows through online and print reviews. |
Water temperature influencing dactylogyrid species communities in roach, Rutilus rutilus, in the Czech Republic.
Dactylogyrid species (Monogenea) communities were studied in roach, Rutilus rutilus, collected from two localities in the basin of Morava river, Czech Republic, during the period from April to November 1997 and March to September 1998 to determine the effect of water temperature on parasite abundance, species richness and diversity. Dactylogyrid species were found to co-occur on the gills of roach with up to six species found on the same host individual. Nine dactylogyrid species were identified with the abundance of each reaching a very low level. Niche size was considered to increase with species abundance even when water temperature was high. There was a strong effect of water temperature on abundance of the common dactylogyrid species (D. crucifer, D. nanus, D. rutili and D. suecicus) as well as of the rare species D. rarissimus. The temporary occurrence of the rare species was found without any temperature effect. Water temperature did not affect the relationship between abundance and niche size. Niche size increased with abundance, even when the water temperature was high, which suggests that negative interspecific interactions are not important within dactylogyrid communities. |
Bifrost.namespace("Bifrost.tasks", {
taskHistory: Bifrost.Singleton(function (systemClock) {
/// <summary>Represents the history of tasks that have been executed since the start of the application</summary>
var self = this;
var entriesById = {};
/// <field param="entries" type="observableArray">Observable array of entries</field>
this.entries = ko.observableArray();
this.begin = function (task) {
var id = Bifrost.Guid.create();
try {
var entry = Bifrost.tasks.TaskHistoryEntry.create();
entry.type = task._type._name;
var content = {};
for (var property in task) {
if (property.indexOf("_") !== 0 && task.hasOwnProperty(property) && typeof task[property] !== "function") {
content[property] = task[property];
}
}
entry.content = JSON.stringify(content);
entry.begin(systemClock.nowInMilliseconds());
entriesById[id] = entry;
self.entries.push(entry);
} catch (ex) {
// Todo: perfect place for logging something
}
return id;
};
this.end = function (id, result) {
if (entriesById.hasOwnProperty(id)) {
var entry = entriesById[id];
entry.end(systemClock.nowInMilliseconds());
entry.result(result);
}
};
this.failed = function (id, error) {
if (entriesById.hasOwnProperty(id)) {
var entry = entriesById[id];
entry.end(systemClock.nowInMilliseconds());
entry.error(error);
}
};
})
});
Bifrost.WellKnownTypesDependencyResolver.types.taskHistory = Bifrost.tasks.taskHistory; |
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.deploy.k8s.submit
import java.io.StringWriter
import java.util.{Collections, UUID}
import java.util.Properties
import scala.collection.mutable
import scala.util.control.Breaks._
import scala.util.control.NonFatal
import io.fabric8.kubernetes.api.model._
import io.fabric8.kubernetes.client.{KubernetesClient, Watch}
import io.fabric8.kubernetes.client.Watcher.Action
import org.apache.spark.SparkConf
import org.apache.spark.deploy.SparkApplication
import org.apache.spark.deploy.k8s._
import org.apache.spark.deploy.k8s.Config._
import org.apache.spark.deploy.k8s.Constants._
import org.apache.spark.deploy.k8s.KubernetesUtils.addOwnerReference
import org.apache.spark.internal.Logging
import org.apache.spark.util.Utils
/**
* Encapsulates arguments to the submission client.
*
* @param mainAppResource the main application resource if any
* @param mainClass the main class of the application to run
* @param driverArgs arguments to the driver
*/
private[spark] case class ClientArguments(
mainAppResource: MainAppResource,
mainClass: String,
driverArgs: Array[String],
proxyUser: Option[String])
private[spark] object ClientArguments {
def fromCommandLineArgs(args: Array[String]): ClientArguments = {
var mainAppResource: MainAppResource = JavaMainAppResource(None)
var mainClass: Option[String] = None
val driverArgs = mutable.ArrayBuffer.empty[String]
var proxyUser: Option[String] = None
args.sliding(2, 2).toList.foreach {
case Array("--primary-java-resource", primaryJavaResource: String) =>
mainAppResource = JavaMainAppResource(Some(primaryJavaResource))
case Array("--primary-py-file", primaryPythonResource: String) =>
mainAppResource = PythonMainAppResource(primaryPythonResource)
case Array("--primary-r-file", primaryRFile: String) =>
mainAppResource = RMainAppResource(primaryRFile)
case Array("--main-class", clazz: String) =>
mainClass = Some(clazz)
case Array("--arg", arg: String) =>
driverArgs += arg
case Array("--proxy-user", user: String) =>
proxyUser = Some(user)
case other =>
val invalid = other.mkString(" ")
throw new RuntimeException(s"Unknown arguments: $invalid")
}
require(mainClass.isDefined, "Main class must be specified via --main-class")
ClientArguments(
mainAppResource,
mainClass.get,
driverArgs.toArray,
proxyUser)
}
}
/**
* Submits a Spark application to run on Kubernetes by creating the driver pod and starting a
* watcher that monitors and logs the application status. Waits for the application to terminate if
* spark.kubernetes.submission.waitAppCompletion is true.
*
* @param conf The kubernetes driver config.
* @param builder Responsible for building the base driver pod based on a composition of
* implemented features.
* @param kubernetesClient the client to talk to the Kubernetes API server
* @param watcher a watcher that monitors and logs the application status
*/
private[spark] class Client(
conf: KubernetesDriverConf,
builder: KubernetesDriverBuilder,
kubernetesClient: KubernetesClient,
watcher: LoggingPodStatusWatcher) extends Logging {
def run(): Unit = {
val resolvedDriverSpec = builder.buildFromFeatures(conf, kubernetesClient)
val configMapName = s"${conf.resourceNamePrefix}-driver-conf-map"
val configMap = buildConfigMap(configMapName, resolvedDriverSpec.systemProperties)
// The ENV_VAR for "SPARK_CONF_DIR" is included to allow the
// Spark command builder to pick up the Java options present in the ConfigMap
val resolvedDriverContainer = new ContainerBuilder(resolvedDriverSpec.pod.container)
.addNewEnv()
.withName(ENV_SPARK_CONF_DIR)
.withValue(SPARK_CONF_DIR_INTERNAL)
.endEnv()
.addNewVolumeMount()
.withName(SPARK_CONF_VOLUME)
.withMountPath(SPARK_CONF_DIR_INTERNAL)
.endVolumeMount()
.build()
val resolvedDriverPod = new PodBuilder(resolvedDriverSpec.pod.pod)
.editSpec()
.addToContainers(resolvedDriverContainer)
.addNewVolume()
.withName(SPARK_CONF_VOLUME)
.withNewConfigMap()
.withName(configMapName)
.endConfigMap()
.endVolume()
.endSpec()
.build()
val driverPodName = resolvedDriverPod.getMetadata.getName
var watch: Watch = null
val createdDriverPod = kubernetesClient.pods().create(resolvedDriverPod)
try {
val otherKubernetesResources = resolvedDriverSpec.driverKubernetesResources ++ Seq(configMap)
addOwnerReference(createdDriverPod, otherKubernetesResources)
kubernetesClient.resourceList(otherKubernetesResources: _*).createOrReplace()
} catch {
case NonFatal(e) =>
kubernetesClient.pods().delete(createdDriverPod)
throw e
}
val sId = Seq(conf.namespace, driverPodName).mkString(":")
breakable {
while (true) {
val podWithName = kubernetesClient
.pods()
.withName(driverPodName)
// Reset resource to old before we start the watch, this is important for race conditions
watcher.reset()
watch = podWithName.watch(watcher)
// Send the latest pod state we know to the watcher to make sure we didn't miss anything
watcher.eventReceived(Action.MODIFIED, podWithName.get())
// Break the while loop if the pod is completed or we don't want to wait
if (watcher.watchOrStop(sId)) {
watch.close()
break
}
}
}
}
// Build a Config Map that will house spark conf properties in a single file for spark-submit
private def buildConfigMap(configMapName: String, conf: Map[String, String]): ConfigMap = {
val properties = new Properties()
conf.foreach { case (k, v) =>
properties.setProperty(k, v)
}
val propertiesWriter = new StringWriter()
properties.store(propertiesWriter,
s"Java properties built from Kubernetes config map with name: $configMapName")
new ConfigMapBuilder()
.withNewMetadata()
.withName(configMapName)
.endMetadata()
.addToData(SPARK_CONF_FILE_NAME, propertiesWriter.toString)
.build()
}
}
/**
* Main class and entry point of application submission in KUBERNETES mode.
*/
private[spark] class KubernetesClientApplication extends SparkApplication {
override def start(args: Array[String], conf: SparkConf): Unit = {
val parsedArguments = ClientArguments.fromCommandLineArgs(args)
run(parsedArguments, conf)
}
private def run(clientArguments: ClientArguments, sparkConf: SparkConf): Unit = {
// For constructing the app ID, we can't use the Spark application name, as the app ID is going
// to be added as a label to group resources belonging to the same application. Label values are
// considerably restrictive, e.g. must be no longer than 63 characters in length. So we generate
// a unique app ID (captured by spark.app.id) in the format below.
val kubernetesAppId = s"spark-${UUID.randomUUID().toString.replaceAll("-", "")}"
val kubernetesConf = KubernetesConf.createDriverConf(
sparkConf,
kubernetesAppId,
clientArguments.mainAppResource,
clientArguments.mainClass,
clientArguments.driverArgs,
clientArguments.proxyUser)
// The master URL has been checked for validity already in SparkSubmit.
// We just need to get rid of the "k8s://" prefix here.
val master = KubernetesUtils.parseMasterUrl(sparkConf.get("spark.master"))
val watcher = new LoggingPodStatusWatcherImpl(kubernetesConf)
Utils.tryWithResource(SparkKubernetesClientFactory.createKubernetesClient(
master,
Some(kubernetesConf.namespace),
KUBERNETES_AUTH_SUBMISSION_CONF_PREFIX,
SparkKubernetesClientFactory.ClientType.Submission,
sparkConf,
None,
None)) { kubernetesClient =>
val client = new Client(
kubernetesConf,
new KubernetesDriverBuilder(),
kubernetesClient,
watcher)
client.run()
}
}
}
|
Role of Doppler sonography in fetal/maternal medicine.
Doppler velocimetry of umbilical, fetal, and uteroplacental vessels provides important information on fetal and uterine hemodynamics and has therefore become one of the most dynamic areas of perinatal research. Examinations of umbilical artery velocity waveforms have gained recognition as a valuable clinical method of fetal surveillance in risk pregnancies. Pathological Doppler findings, especially the finding of absent or reverse end-diastolic (ARED) flow velocity, are associated with fetal hypoxia and adverse outcome of pregnancy. The first follow-up studies indicate that abnormal Doppler findings are associated with impaired postnatal neurological development. Velocimetry of the fetal middle cerebral artery provides important information on redistribution of fetal blood flow in hypoxia; its place in clinics remains to be established. Similarly, more studies are needed to evaluate the use of Doppler velocimetry in labor. Possibly, the Doppler method may facilitate the interpretation of equivocal cardiotocography (CTG) tracings. New evidence has been collected on the correlation between hemodynamic findings and the morphology of the placenta and the placental bed. One of the very important applications of Doppler velocimetry is the evaluation of pharmacological effects on fetal circulation of drugs used for treatment in pregnancy and labor. |